{"text": "\\section {Semilattice Representations, Size, Product and Height}\n\\setlength{\\parindent}{0pt}\n(Prepared by Shivam Goyal)\n\n\\vspace{0.3cm}\n\n\\subsection{Semilattice Representation}\n\\begin{itemize}\n    \\item \\textbf{Set of values $V$ and a meet operator \\^{}}\n    \\item \\textbf{Set of values $V$ and a partial order $\\leq$}\n    \\item \\textbf{A semilattice diagram} : which is a directed acyclic graph with nodes and edges. It can have potentially multiple Top and Bottom nodes, but they may not necessarily exist for eg in a semilattice with $V$ as set of integers and $max$ as the meet operator\n\\end{itemize}\n\n\\subsection{Semilattice Size}\nFor some DFA problems, size of the semilattice i.e. the number of elements in the set of values $V$ can become very large. For eg. it could be exponential in number of definitions for a program, if we consider the set of definitions as a value for the semilattice. As the set of values $V$ would be the power set of the set of definitions in the program for some DFA like available expressions.\n\\subsection{Product of Semilattices}\n\\begin{itemize}\n    \\item One idea considering large semilattices is to represent a larger semilattice using product of smaller semilattices.\n    \\item Consider 2 semilattices with sets of values as $V1$ and $V2$ then we can define a new semilattice as their product.This would contain values in the set $V1$ x $V2$. \n    \\item The partial order for the product semilattice would be obtained by applying the respective partial order relations on both the elements of the tuple, i.e. for $\\leq$ (the new partial order):\n    \\par\n    ($x_{1}$,$x_{2}$) $\\leq$ ($y_{1}$,$y_{2}$) iff $x_{1}$ $\\leq_{1}$ $y_{1}$ and $x_{2}$ $\\leq_{2}$ $y_{2}$ where $\\leq_{1}$ and $\\leq_{2}$ are the respective partial orders of the smaller semilattices. \n    \\item Similar thing can be done in case of the meet operator with the respective meet operators being applied on each element of the tuple, i.e. 
for \\^{} (the new meet operator):\n    \\par\n    ($x_{1}$,$x_{2}$) \\^{} ($y_{1}$,$y_{2}$) $=$ ($x_{1}$ \\^{}$_{1}$ $y_{1}$ , $x_{2}$ \\^{}$_{2}$ $y_{2}$), where \\^{}$_{1}$ and \\^{}$_{2}$ are the respective meet operators of the smaller semilattices.\n\\end{itemize}\n", "meta": {"hexsha": "4167d8b2793e4460a7266d806ffffc10be0c4b96", "size": 2210, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "module97.tex", "max_stars_repo_name": "arpit-saxena/compiler-notes", "max_stars_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "module97.tex", "max_issues_repo_name": "arpit-saxena/compiler-notes", "max_issues_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "module97.tex", "max_forks_repo_name": "arpit-saxena/compiler-notes", "max_forks_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-02-16T08:32:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T19:11:33.000Z", "avg_line_length": 81.8518518519, "max_line_length": 394, "alphanum_fraction": 0.7122171946, "num_tokens": 647, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.59997942305501}}
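As an illustration of the product construction above, here is a minimal Python sketch (ours, not from the lecture notes; the helper name \\texttt{product\\_meet} and the example component semilattices are illustrative choices). The meet on $V_1 \\times V_2$ simply applies the component meets pairwise.\n\\begin{verbatim}\n# Sketch: the meet operator of a product semilattice, applied componentwise.\ndef product_meet(meet1, meet2):\n    def meet(x, y):\n        (x1, x2), (y1, y2) = x, y\n        return (meet1(x1, y1), meet2(x2, y2))\n    return meet\n\n# Example: V1 = integers with min as meet, V2 = sets with intersection.\nmeet = product_meet(min, lambda a, b: a & b)\nprint(meet((3, {1, 2}), (5, {2, 4})))   # -> (3, {2})\n\\end{verbatim}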
{"text": "\\documentclass{discussion}\n\n\\usepackage{framed}\n\\usepackage[position=b]{subcaption}\n\\DeclareMathOperator{\\Parents}{Parents}\n\\DeclareMathOperator{\\NonDesc}{NonDesc}\n\\newcommand{\\G}{\\mathcal{G}}\n\\newcommand{\\I}{\\mathcal{I}}\n\\linespread{1.0}\n\\begin{document}\n\n% Lecture Info\n\\lecture{8}{EM}{Chansoo Lee}\n%\\begin{exercise}\n%\\begin{example}\n\n%%%% INTRODUCTION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\newcommand{\\D}{\\mathcal{D}}\n\\section{Supervised Naive Bayes}\nIn supervised learning of Naive Bayes, we get \\emph{labeled} training data $\\D = \\{(\\x^{(i)}, y^{(i)})\\}_{i=1}^{N}$, where $\\x^{(i)} \\in \\{1,\\ldots,M\\}^{D}$ and $y^{(i)} \\in \\{ 1,\\ldots, C\\}$. \n The estimation (training) problem is to find the maximum log likelihood estimate parameters. The log-likelihood function breaks down into a sum of simple terms, which we can easily differentiate: \n% \\begin{align}\n% \t\\sum_{i=1}^{N} \\log P(y^{(i)}, \\x^{(i)}; \\pi, \\theta)\n% \t&= \\sum_{i=1}^{N} \\log P({y^{(i)}})P(y^{(i)} | \\x^{(i)}) \\label{eq:mle} \\\\ \n% \t&= \\sum_{i=1}^{N} \\left( \\log \\pi_{y^{(i)}} + \\sum_{i=1}^{D}\\log \\theta_{y^{(i)}d\\x^{(i)}_d} \\right)\n% \\end{align}\n\\begin{align}\n\t\\sum_{\\x,y \\in \\mathcal{D}} \\log P(y, \\x; \\pi, \\theta)\n\t&= \\sum_{\\x,y \\in \\mathcal{D}} \\big( \\log P({y}) + \\log P(y | \\x)\\big) \\label{eq:mle} \\\\ \n\t&= \\sum_{\\x,y \\in \\mathcal{D}} \\left( \\log \\pi_{y} + \\sum_{i=1}^{D}\\log \\theta_{ydx_d} \\right). \\label{eq:mle-param}\n\\end{align}\nBecause each log term involves only one parameter, when we differentiate the above expression with respect to any model parameter $\\pi_c$ or $\\theta_{cdm}$, we have a very simple expression, which we can set it to zero and easily solve for.\n\nThe closed-form solution shows that \\textbf{sufficient statistics} required to solve the above problem are counts: $N_c$, number of examples in category $c$ and $N_{cdm}$, number of examples in category $c$ with $d$-th feature taking value $M$. \n\n\\section{Unsupervised Naive Bayes}\nIn \\textbf{unsupervised Naive Bayes}, we get unlabeled training data $\\{\\x^{(i)}\\}_{i=1}^{n}$ where each data is a binary feature vector: $\\x^{(i)} \\in \\{0,1\\}^D$. The estimation problem is to find the maximum log likelihood estimate parameters for our \\emph{partially observed data}:\n\\begin{align}\n\\label{eq:em-mle}\n\\sum_{\\x \\in \\mathcal{D}} \\log P(\\x; \\pi, \\theta)\n&= \\sum_{\\x \\in \\mathcal{D}} \\log \\left(\\sum_{c=1}^{C} P(\\x,y = c)\\right) \\\\\n&= \\sum_{\\x \\in \\mathcal{D}} \\log\\left(\\sum_{c=1}^{C} \\Big(\\pi_{c} \\prod_{d=1}^{D} \\theta_{cdx_d} \\Big) \\right) \\label{eq:em-mle-param}.\n\\end{align}\nSince our probabilstic model does not directly define the marginal distribution over $\\x$, we have to marginalize $y$ out from the joint distribution \\eqref{eq:em-mle}. As a result, there is now a summation inside the logarithm, which makes derivatives messy. 
\\footnote{An alternative explanation for the hardness of the unsupervised setting is that the sufficient statistics needed to compute the MLE estimates are not provided.\n}\n \nThe important observation is that we can break this hard problem into two easy problems: (1) If we fix the model parameters, then it is a simple prediction step to compute the conditional distribution of the missing values: $P(y = c | \\x; \\theta, \\pi) \\propto \\pi_{c} \\prod_{d=1}^{D} \\theta_{c d x_d}$. (2) If we fill in the missing values, it is the supervised learning problem that we know how to solve.\n\nThis leads to the EM algorithm, an alternating optimization technique. EM addresses the generic question: \\emph{how do we find the MLE parameters given partially observed data in a principled manner that properly accounts for the dependence among random variables in our probabilistic model?}\n\n\\section{Soft assignment EM}\n\n\\begin{itemize}\n    \\item Initialize the parameters to values that break the symmetry, and then repeat:\n    \\item In the E-step, we compute the distribution of the missing data. We often say that we compute the \\emph{expected sufficient statistics}, because we fill in the missing data only probabilistically.\n    \\item In the M-step, we update the model parameters to be the MLE based on the new expected sufficient statistics.\n\\end{itemize}\n\n\n\\subsection{Case Study: Binary Unsupervised Naive Bayes}\n\nSuppose the training set has two data points $\\x^{(1)} = (0, 1)$ and $\\x^{(2)} = (1,1)$; for clarity, we note that the dimensions are: $N=2, C=2$, $D = 2$, and $M = 2$. In this case study, our binary features and labels take values $\\{0,1\\}$ instead of $\\{1,2\\}$.\n\n\\paragraph{Initialization} Suppose our random initialization gives $\\pi_1 = 0.7, \\theta_{111} = 0.9, \\theta_{011} = 0.3, \\theta_{121} = 0.6, \\theta_{021} = 0.2$.\n\n\\paragraph{Iteration 1: E-step} Compute the probability for each assignment of missing values\\footnote{The unnormalized probability is $P(\\x,y)$ and the normalized probability is $P(y | \\x)$.}.\n\\[\n\\begin{tabu}{c|c|c|c}\ny^{(1)} & y^{(2)} & \\text{unnormalized probability} & \\text{normalized probability} \\\\\n\\hline\n1 & 1 & (0.7)(0.1)(0.6)(0.7)(0.9)(0.6) = 0.015876 & 0.4773\\\\\n1 & 0 & (0.7)(0.1)(0.6)(0.3)(0.3)(0.2) = 0.000756 & 0.0227\\\\\n0 & 1 & (0.3)(0.7)(0.2)(0.7)(0.9)(0.6) = 0.015876 & 0.4773\\\\\n0 & 0 & (0.3)(0.7)(0.2)(0.3)(0.3)(0.2) = 0.000756 & 0.0227\\\\\n\\end{tabu}\n\\]\nThe table reads as: the missing $y$ value of the $i$-th example is assigned the value specified in column $i$, with probability in column 4.\n\n\\begin{exercise}\n\tCompute the expected sufficient statistics using the table.\n\\end{exercise}\n\\begin{proof}\n Consider $\\overline{N_1}$, the \\emph{expected} number of examples with class label $y = 1$:% = E_{y^{(1)},y^{(2)}}$\n\\begin{itemize}\n\t\t\\item row 1 shows with probability 0.4773 there are two instances with $y = 1$. 
\n\t\t\\item row 2 shows with probability 0.0227 there is one instance with $y = 1$.\n\t\t\\item row 3 shows with probability 0.4773 there is one instance with $y = 1$.\n\t\t\\item row 4 shows with probability 0.0227 there are zero instances with $y = 1$.\n\t\t\\item Therefore, \\[\\overline{N_1} = (0.4773)(2) + (0.0227)(1) + (0.4773)(1) + (0.0227)(0) = 1.4546.\\]\n\t\t\\item Because class labels are binary, \\[\\overline{N_0} = \\overline{N} - \\overline{N_1} = 2 - 1.4546 = 0.5454\\]\n\\end{itemize}\n\n Note that given fully observed data, we would do simple counting, where each example adds either 0 or 1: each example adds 1 to $N_c$ for a single $c$. But for EM, each example adds a \\emph{fraction} between 0 and 1, so that it \\emph{distributes} the count 1 across all the $N_c$'s.\n\n Consider $\\overline{N_{111}}$, the \\emph{expected} number of examples with class label $y=1$ and first feature taking value 1.\n\\begin{itemize}\n\t\t\\item row 1 shows with probability 0.4773, our train set is $((0,1), 1)$ and $((1,1), 1)$. So there is one example with $y=1$ and $x_1 = 1$.\n\t\t\\item row 2 shows with probability 0.0227, our train set is $((0,1), 1)$ and $((1,1), 0)$. So there is no example with $y=1$ and $x_1 = 1$.\n\t\t\\item row 3 shows with probability 0.4773, our train set is $((0,1), 0)$ and $((1,1), 1)$. So there is one example with $y=1$ and $x_1 = 1$. \n\t\t\\item row 4 shows with probability 0.0227, our train set is $((0,1), 0)$ and $((1,1), 0)$. So there is no example with $y=1$ and $x_1 = 1$.\n\t\t\\item Therefore, \\[\\overline{N_{111}} = (0.4773)(1) + (0.0227)(0)+ (0.4773)(1) + (0.0227)(0) = 0.9546.\\]\n\t\t\\item Because features are binary, $\\overline{N_{110}} = \\overline{N_{1}} - \\overline{N_{111}} = 1.4546 - 0.9546 = 0.5$\n\\end{itemize}\nSimilarly, we can compute the rest of the parameters. For example,\n\\[\\overline{N_{121}} = (0.4773)(2) + (0.0227)(1) + (0.4773)(1) = 1.4546 \\qedhere\\]\n\\end{proof}\n\n\\paragraph{Iteration 1: M-step} It's the same MLE parameter computation as in supervised Naive Bayes, except we use \\emph{expected} counts instead of counts.\n\n\\[\\pi_1 =\\frac{\\overline{N_1}}{N} = 1.4546 / 2 = 0.7273.\\]\n\n\\[\\theta_{111} = \\frac{\\overline{N_{111}}}{\\overline{N_{1}}} = 0.6563.\\]\n\n\\[\\theta_{121} = \\frac{\\overline{N_{121}}}{\\overline{N_{1}}} = 1.\\]\n\n\n\\paragraph{Efficient E-step}\n\nThe presented implementation of the E-step takes exponential time $\\Omega(C^N)$, because it computes the full joint distribution over $N$ labels. Indeed, you might have noticed a bit of redundancy when we computed the expected counts.\n\nThe trick is that we can compute the distribution of the unobserved variables independently, making it an $O(NC)$ computation. 
Mathematically,\n\\begin{align*}\n\\overline{N_1} &= P(y^{(1)} = 1, y^{(2)} = 1) + P(y^{(1)} = 1, y^{(2)} = 1) + P(y^{(1)} = 1, y^{(2)} = 0) + P(y^{(1)} = 0, y^{(2)} = 1) \\\\\n&= (P(y^{(1)} = 1, y^{(2)} = 1) + P(y^{(1)} = 1, y^{(2)} = 0)) + (P(y^{(1)} = 1, y^{(2)} = 1) + P(y^{(1)} = 0, y^{(2)} = 1)) \\\\\n&= P(y^{(1)} = 1) + P(y^{(2)} = 1)    \n\\end{align*}\nThen we compute the probability distributions for missing values, independently for each example:\n\\[P(y^{(1)} = 1) = \\frac{(0.7)(0.1)(0.6)}{(0.7)(0.1)(0.6) + (0.3)(0.7)(0.2)} = 0.5 \\]\n\\[P(y^{(2)} = 1) = \\frac{(0.7)(0.9)(0.6)}{(0.7)(0.9)(0.6) + (0.3)(0.3)(0.2)} = 0.9546 \\]\n\nIt is also easy to note that\n\\[\\overline{N_0} = 2 -  \\overline{N_1} = P(y^{(1)} = 0) + P(y^{(2)} = 0).\n\\]\n\n\nThis gives a very intuitive explanation of the EM algorithm on unsupervised Naive Bayes. Given labeled (fully observed) data, we replace the probabilities with indicator functions; each example contributes one count to either $N_{1}$ or $N_0$. Given unlabeled (partially observed) data, each instance \\emph{distributes} a total of one count over $N_1$ and $N_0$.\n\n\n\\begin{exercise} Numerically work out the second iteration of EM, using the efficient E-step. \n\\end{exercise}\n\n\\subsection{Hard assignment EM}\nSometimes it is difficult (due to mathematical complexity or limited computation time) to handle the fractional contribution of each data point. In such applications, we can make the E-step assign each missing value its highest-probability value, breaking ties arbitrarily. The M-step becomes the MLE given fully observed data.\n\n\\begin{exercise}\n\tShow that $k$-means is equivalent to the hard-assignment EM for GMM clustering with fixed identity covariance matrices.\n\\end{exercise}\n\n\\begin{exercise}\nWhat are the differences between soft and hard assignment EM?\n\\end{exercise}\n\tThe hard-assignment EM explores the combinatorial space of missing variable assignments. The soft-assignment EM, on the other hand, explores the continuous space.\n\n\tIn clustering, the hard assignment EM tends to amplify the contrast among classes, while soft assignment EM attempts to model mixed-class memberships.\n\n\\subsection{Random assignment EM}\n\nIn the E-step, we compute the distribution over missing values and sample from it, instead of computing the expected statistics. We use the random samples to fill in the missing data. 
M-step becomes the MLE given fully observed data.\n\n\n\\end{document}\n", "meta": {"hexsha": "34e6deb39b0b8519c38b591d8b2b739c268cc965", "size": 10838, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "discussion08-em-bayesian-clustering/discussion08-em-bayesian-clustering.tex", "max_stars_repo_name": "xipengwang/umich-eecs445-f16", "max_stars_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 97, "max_stars_repo_stars_event_min_datetime": "2016-09-11T23:15:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T08:03:24.000Z", "max_issues_repo_path": "discussion08-em-bayesian-clustering/discussion08-em-bayesian-clustering.tex", "max_issues_repo_name": "eecs445-f16/umich-eecs445-f16", "max_issues_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "discussion08-em-bayesian-clustering/discussion08-em-bayesian-clustering.tex", "max_forks_repo_name": "eecs445-f16/umich-eecs445-f16", "max_forks_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 77, "max_forks_repo_forks_event_min_datetime": "2016-09-12T20:50:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T14:41:23.000Z", "avg_line_length": 66.490797546, "max_line_length": 533, "alphanum_fraction": 0.6785384757, "num_tokens": 3655, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5999794109631352}}
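For concreteness, here is a minimal Python sketch (ours, not from the discussion; variable names are illustrative) of one EM iteration for the binary unsupervised Naive Bayes case study above, using the efficient E-step. It reproduces the Iteration 1 numbers: responsibilities $0.5$ and $0.9546$, then $\\pi_1 = 0.7273$, $\\theta_{111} = 0.6563$, $\\theta_{121} = 1$.\n\\begin{verbatim}\nimport numpy as np\n\nX = np.array([[0, 1], [1, 1]])      # two examples, D = 2 binary features\npi1 = 0.7                           # P(y = 1)\ntheta1 = np.array([0.9, 0.6])       # P(x_d = 1 | y = 1)\ntheta0 = np.array([0.3, 0.2])       # P(x_d = 1 | y = 0)\n\ndef lik(theta, x):                  # P(x | y) for one example\n    return np.prod(theta ** x * (1 - theta) ** (1 - x))\n\n# E-step: P(y = 1 | x) computed independently per example -- O(NC).\nnum = np.array([pi1 * lik(theta1, x) for x in X])\nden = num + np.array([(1 - pi1) * lik(theta0, x) for x in X])\nr = num / den                       # -> [0.5, 0.9546]\n\n# M-step: the supervised MLE with expected counts in place of counts.\nN1 = r.sum()                        # expected number of y = 1 examples\npi1 = N1 / len(X)                   # -> 0.7273\ntheta1 = (r[:, None] * X).sum(axis=0) / N1   # -> [0.6563, 1.0]\n\\end{verbatim}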
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Coding Concepts}\n\\label{coding}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Sorting Algorithms}\n\\label{coding:sorts}\n\n\\begin{table}[H]\n\\centering\n\\begingroup\n\\renewcommand*{\\arraystretch}{1}\n\\input{tables/sorting_algos.tex}\n\\endgroup\n\\caption{\nA collection of sorting algorithms with time complexities.\n}\n\\label{tab:sorting_table}\n\\end{table}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Search Algorithms}\n\\label{coding:searchs}\n\n\\begin{table}[H]\n\\centering\n\\begingroup\n\\renewcommand*{\\arraystretch}{1}\n\\input{tables/search_algos.tex}\n\\endgroup\n\\caption{\nA collection of search algorithms with time complexities.\n}\n\\label{tab:search_table}\n\\end{table}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Breadth First Search (BFS)}\n\\label{coding:other:bfs}\n% TODO\n\n\\subsubsection{Dijkstra's Algorithm}\n\\label{coding:other:bfs:dijkstra}\n% TODO\n% https://youtu.be/GazC3A4OQTE\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Depth First Search (DFS)}\n\\label{coding:other:dfs}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Other}\n\\label{coding:other}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Linked Lists}\n\\label{coding:other:lls}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Inverting a Binary Tree}\n\\label{coding:other:binary_tree_inversion}\n% TODO\n\n", "meta": {"hexsha": "4a9349918244cf8e86fe85afc84b1d12d0282b35", "size": 1774, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/appendixes/coding.tex", "max_stars_repo_name": "mepland/data_science_notes", "max_stars_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-05-30T15:15:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-23T01:01:08.000Z", "max_issues_repo_path": "sections/appendixes/coding.tex", "max_issues_repo_name": "mepland/data_science_notes", "max_issues_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/appendixes/coding.tex", "max_forks_repo_name": "mepland/data_science_notes", "max_forks_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.985915493, "max_line_length": 58, "alphanum_fraction": 0.4487034949, "num_tokens": 360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7745833737577158, "lm_q1q2_score": 0.5999794109631351}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 6.7 Killing vectors of the Schwarzschild spacetime}\n\n\\begin{cadabra}\n   {t, r, \\theta, \\varphi}::Coordinate.\n   {a,b,c,d,e,f,g,h#}::Indices(values={t, r, \\theta, \\varphi}, position=independent).\n\n   ;::Symbol.\n\n   \\partial{#}::PartialDerivative.\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.  # essential when using complete (gab, $g^{a b}$)\n\n   Gamma := \\Gamma^{a}_{f g} -> 1/2 g^{a b} (   \\partial_{g}{g_{b f}}\n                                              + \\partial_{f}{g_{b g}}\n                                              - \\partial_{b}{g_{f g}} ).\n\n   deriv := \\xi_{a ; b} -> \\partial_{b}{\\xi_{a}} - \\Gamma^{c}_{a b} \\xi_{c}.\n   lower := \\xi_{a} -> g_{a b} \\xi^{b}.\n\n   expr  := \\xi_{a ; b} + \\xi_{b ; a}.                  # cdb(ex-0607.100,expr)\n\n   substitute   (expr, deriv)                           # cdb(ex-0607.101,expr)\n   substitute   (expr, lower)                           # cdb(ex-0607.102,expr)\n   substitute   (expr, Gamma)                           # cdb(ex-0607.103,expr)\n   distribute   (expr)                                  # cdb(ex-0607.104,expr)\n   product_rule (expr)                                  # cdb(ex-0607.105,expr)\n   canonicalise (expr)                                  # cdb(ex-0607.106,expr)\n\n   # choose a vector\n\n   # Kvect := {\\xi^{t} = 1}.\n   # Kvect := {\\xi^{\\varphi} = 1}.\n   Kvect := {\\xi^{\\theta} = \\sin(\\varphi), \\xi^{\\varphi} = \\cos(\\theta)/\\sin(\\theta) \\cos(\\varphi)}.\n   # Kvect := {\\xi^{\\theta} = \\cos(\\varphi), \\xi^{\\varphi} = - \\cos(\\theta)/\\sin(\\theta) \\sin(\\varphi)}.\n                                                         # cdb(ex-0607.107,Kvect)\n\n   gab := { g_{t t}            = -(1-2*m/r),\n            g_{r r}            = 1/(1-(2*m/r)),\n            g_{\\theta\\theta}   = r**2,\n            g_{\\varphi\\varphi} = r**2 \\sin(\\theta)**2}.  
# cdb(ex-0607.108,gab)\n\n   complete   (gab, $g^{a b}$)                           # cdb(ex-0607.109,gab)\n\n   evaluate   (expr, gab+Kvect)                          # cdb(ex-0607.110,expr)\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\Dmath*{[\\xi^{a}] \\hiderel{=} \\Cdb*{ex-0607.107}}\n   \\Dmath*{[g_{ab}] \\hiderel{=} \\Cdb*{ex-0607.108}}\n   \\Dmath*{[g_{ab},g^{ab}] \\hiderel{=} \\Cdb*[\\hfill\\hskip2.5cm]{ex-0607.109}}\n   \\Dmath*{\\cdb{ex-0607.100} = \\Cdb*{ex-0607.101}\n                             = \\Cdb*{ex-0607.102}\n                             = \\Cdb*{ex-0607.103}\n                             = \\Cdb*{ex-0607.104}\n                             = \\Cdb*{ex-0607.105}\n                             = \\Cdb*{ex-0607.106}\n                             = \\Cdb*{ex-0607.110}}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "9ad354616c29cf3f0518cb0f4b9237c99ebc3547", "size": 2813, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0607.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0607.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0607.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 40.1857142857, "max_line_length": 104, "alphanum_fraction": 0.4287237824, "num_tokens": 936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355187, "lm_q2_score": 0.7772998663336158, "lm_q1q2_score": 0.5999538558596678}}
{"text": "\n\\section{Comparing the Non Relativistic and Relativistic Results}\n    \\begin{figure}[h]\n    \\begin{center}\n    \\begin{tabular}{cc}\n        \\includegraphics[width=7.0cm]{f0_hydrogen.eps}\n        &\n        \\includegraphics[width=7.0cm]{f0_uranium.eps}\n    \\end{tabular}\n    \\caption{Normal Form Factor for Single Electron Hydrogen and Uranium}\n    \\label{fig:hydrogen-uranium}\n    \\end{center}\n    \\end{figure}\n    The relativistic normal form factor in equation~\\ref{eq:nff-relativistic}\n    can be compared to the non relativistic case by considering the small values\n    of $Z$ in which the parameter $\\gamma_1 = \\sqrt{1-\\alpha^2Z^2} \\approx 1$. \n    In this case equation~\\ref{eq:nff-relativistic} becomes:\n    \\begin{equation} \\label{eq:reduction}\n        f_0(q) \\approx \n        2 \\frac{\\Gamma(2)}{\\Gamma(3)}\n        \\left( \\frac{2Z}{a_0} \\right)^4\n        \\left[ \n            \\left( \\frac{2Z}{a_0} \\right)^2 + q^2\n        \\right]^{-2} \n        =\n        \\left( \\frac{2Z}{a_0} \\right)^4\n        \\left[ \n            \\left( \\frac{2Z}{a_0} \\right)^2 + q^2\n        \\right]^{-2} \n    \\end{equation}\n    which is in agreement with equation~\\ref{eq:nff-nonrelativistic}.\n    In figure~\\ref{fig:hydrogen-uranium} we see plots for $f_0(q)$ comparing the non relativistic\n    and relativistic results for standard atomic hydrogen and hydrogenic\n    uranium.\n    We can see that in the case of atomic hydrogen ($Z=1$), there is\n    excellent agreement between the two results.\n    In the case of hydrogenic uranium ($Z=92$) we can see that there is a\n    significant difference over a range of momentum transfers between the\n    relativistic and non relativistic results. \n    This shows the need for using relativistic Dirac wave functions as opposed to\n    the Schr\\\"odinger wave functions when computing atomic form factors\n    especially for heavier atoms.\n    In figure~\\ref{fig:hydrogen-uranium} it is difficult to make out difference\n    between relativistic and non relativistic values for atomic hydrogen, since\n    there seems to be quite good agreement between the two. However, if we plot\n    the difference between the two results as is done in\n    figure~\\ref{fig:delta-theory}, we can clear identify the regions of momentum\n    transfer where the relativistic theory is better even though the difference\n    is of the order of $10^{-6}$ or less. 
We can see that the maximum\n    difference between the two theories is at approximately $q = 0.3 \\InvAngstrom$ and\n    and it gets smaller at higher values of $q$.\n    \\begin{figure}[h]\n    \\begin{center}\n        \\includegraphics[width=7.0cm]{delta_theory.eps}\n        \\caption{Difference ($\\Delta f_0(q)$) between relativistic and non relativistic theory}\n        \\label{fig:delta-theory}\n    \\end{center}\n    \\end{figure}\n\n\\section{Comparison with other theoretical results}\n    \\begin{figure}[H] \n    \\begin{center}\n    \\begin{tabular}{cc}\n        \\includegraphics[width=7.0cm]{hubbel_papa.eps}\n        &\n        \\includegraphics[width=7.0cm]{delta_hubbell.eps}\n    \\end{tabular}\n        \\caption{Normal Form Factor for Hydrogen -- Comparison With Hubbell's Theoretical Results}\n        \\label{fig:hubbell-comparison}\n    \\end{center}\n    \\end{figure}\n    We can make a comparison between the results obtained using equation~\\ref{eq:nff-relativistic}\n    and other theoretical results which have used relativistic wave functions in\n    computing $f_0(q)$.\n    Tabulations of atomic form factors, incoherent scattering function and\n    photon scattering cross sections were published by Hubbel et.  al.~\\cite{Hubbell-1975}.\n    The tabulations for the normal form factor for hydrogen used non\n    relativistic wave functions. Figure~\\ref{fig:hubbell-comparison} shows two\n    plots comparing the analytic result of equation~\\ref{eq:nff-relativistic} with\n    Hubbel's tabulated results. The plot on the left shows good agreement, but\n    looking at the difference between the two theories on the plot on the right\n    shows the differences more clearly. The shape is similar to that shown\n    in figure~\\ref{fig:delta-theory} but the maximum difference is of the order\n    of $10^{-5}$.\n", "meta": {"hexsha": "e83227565e7b52c9b4d11824a779c854f37ab58c", "size": 4144, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "normal_comparison.tex", "max_stars_repo_name": "mikepsn/atomic-form-factors-thesis", "max_stars_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "normal_comparison.tex", "max_issues_repo_name": "mikepsn/atomic-form-factors-thesis", "max_issues_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "normal_comparison.tex", "max_forks_repo_name": "mikepsn/atomic-form-factors-thesis", "max_forks_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.7529411765, "max_line_length": 98, "alphanum_fraction": 0.6998069498, "num_tokens": 1125, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7718435083355187, "lm_q1q2_score": 0.5999538518777853}}
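As a quick numerical illustration (ours, not part of the thesis; $q$ is taken in units of $1/a_0$ with $a_0 = 1$), the non relativistic limit in equation~\\ref{eq:reduction} can be evaluated directly, showing how much more slowly the uranium form factor falls off in these units:\n\\begin{verbatim}\n# f0(q) = (2Z/a0)^4 * ((2Z/a0)^2 + q^2)^(-2)\ndef f0_nonrel(q, Z, a0=1.0):\n    b = 2.0 * Z / a0\n    return b**4 / (b**2 + q**2)**2\n\nfor Z in (1, 92):        # hydrogen and hydrogenic uranium\n    print(Z, [round(f0_nonrel(q, Z), 4) for q in (0.0, 1.0, 10.0)])\n\\end{verbatim}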
{"text": "\\section{Ideal Rigid Body (IRB) Bound}\n\n\\begin{frame}{Ideal Rigid Body (IRB) Bound}\n\\begin{itemize}\n    \\item Post-hoc model which selects an impulse pair (Normal and Tangential Impulses) from the impulse space\n    \\item Impulse space is defined by energy allowable impulses given pre-impact energy (no increases in energy due to impact)\n    \\item Impulse pair is chosen to minimize $\\ell_2$ norm of post-impact velocity\n\\end{itemize}     \n     \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.6]{figures/energyEllipse.jpg}\n        \\caption{Energy Ellipse\\cite{nima2}}\n        \\label{fig:energyEllipse}\n\\end{figure}\n\n\\end{frame}\n\n\\subsection{Implementing the IRB Bound}\n\\begin{frame}{Implementing the IRB Bound}\n    \\begin{itemize}\n        \\item Initial energy, KE $= \\frac{1}{2}v_{pre}^T*\\textbf{M}*v_{pre}$\n        \\item Post-impact velocity, $v_{predicted} = v_{pre} + \\textbf{M}^{-1} \\textbf{J}^T \\textbf{P}^T$\n            \\begin{itemize}\n                \\item $\\textbf{M}$ -- generalized mass matrix\n                \\item $\\textbf{J}$ -- contact Jacobian\n                \\item $\\textbf{P}$ -- impulse vector\n            \\end{itemize}\n        \\item Run \\textit{fmincon} to find the best pair of impulses using normalized error as cost with the energy ellipse as a primary constraint\n        \\item Even though this is a flexible post-hoc model, we were seeing large errors from the IRB predicted impulses (similar to Nima's paper)\n        \n    \\end{itemize}    \n\\end{frame}\n\n%surprisingly high error from normal IRB\n\n%IRB with width/Torque\n% found that width could be as high as roughly 20 mm, which is highly      unlikely given the actual dimensions of the ellipse and the material properties.\n\n%angle correlations/other correlations\n\\subsection{IRB with Torque}\n\\begin{frame}{IRB with Torque}\nOne idea Prof. 
Posa suggested to explain the errors we were seeing (discussed further in a later section) is to remove the single point contact assumption and treat the impact as if it occurs over a region.\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.6]{figures/IRBTorque.png}\n        \\caption{IRB Torque Diagram}\n        \\label{fig:IRBTorque}\n\\end{figure}\n\\end{frame}\n\n\\begin{frame}{IRB with Torque}\n\\begin{itemize}\n    \\item Adds an additional torque ($|\\tau| = w_{patch}*\\textbf{P}_n$)\n    \\item Additional constraint ($|w_{patch}|\\leq w_{max}$)\n    \\item The calculation is still $v_{predicted} = v_{pre} + \\textbf{M}^{-1} \\textbf{J}^T \\textbf{P}^T$ however \\textbf{J} and \\textbf{P} now have a third row: \n\\end{itemize}\n\n\n\n\\centering\n\\[\n    \\textbf{J} =\n        \\left [\n        \\begin{array}{ccc}\n             & \\textbf{d} & \\\\\n             & \\textbf{n} & \\\\\n             0 & 0 & 1 \\\\\n        \\end{array}\n        \\right ],\\\n            \\textbf{P} = \n       \\left [\n        \\begin{array}{c}\n            \\textbf{P}_n \\\\\n            \\textbf{P}_t \\\\\n            \\tau   \\\\\n        \\end{array}\n        \\right ]\n\\]\n\n\\begin{itemize}\n    \\item This additional decision variable ($\\tau$) should reduce error and lead to more accurate predictions from the IRB Bound\n\n\\end{itemize}   \n\n  \n\\end{frame}\n    \n\\begin{frame}{IRB with Torque's Effect on Error}\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.15]{figures/IRBErrorVsMaxWidth.jpg}\n        \\caption{Average Error Across 1000 Random Trials Vs. Maximum Allowable Width}\n        \\label{fig:IRBErrorVSMaxWidth}\n    \\end{figure}\n\\end{frame}\n\n%TAKEAWAYS FROM PLOT ^^^^^\n%Small Patch size has significant effect in lowering error (1mm cuts average error in half)\n%Patch size isn't a sole solution however, since to get really low errors, you need unrealistically large widths\n%As a good sanity check however, as the max width increases and becomes nearly unconstrained, the error approaches 0\n\n\\begin{frame}{IRB Trends}\nOne of the things we did to try to understand how the IRB Bound worked was plotting different properties against each other. 
\n\n%\\begin{figure}\n    %\\centering\n    %\\includegraphics[scale=0.18]{figures/Trends.png}\n    %\\caption{Torque against IRB moment}\n    %\\label{fig:MomentsError}\n%\\end{figure}\n\n\\begin{figure}\n    \\centering\n    \\begin{minipage}{.5\\textwidth}\n      \\centering\n      \\includegraphics[width=1\\linewidth]{figures/MomentProp2.jpg}\n    \\captionof{figure}{IRB Moment vs Torque}\n      \\label{fig:contourWM2}\n    \\end{minipage}%\n    \\begin{minipage}{.5\\textwidth}\n      \\centering\n      \\includegraphics[width=1\\linewidth]{figures/ellipseAngleWidth.jpg}\n    \\captionof{figure}{Optimal Width vs Angle}\n      \\label{fig:histWM2}\n    \\end{minipage}\n\\end{figure}    \n    \n\\end{frame}", "meta": {"hexsha": "1da4726ccb7e5308cff31bd7b79f88ad892daee5", "size": 4622, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Presentation/sec4.tex", "max_stars_repo_name": "DAIRLab/ImpactModeling", "max_stars_repo_head_hexsha": "f6c28898845da6d48efdd6c1c696db2fb3716edf", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-05-19T21:01:23.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-02T08:56:34.000Z", "max_issues_repo_path": "Presentation/sec4.tex", "max_issues_repo_name": "DAIRLab/ImpactModeling", "max_issues_repo_head_hexsha": "f6c28898845da6d48efdd6c1c696db2fb3716edf", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Presentation/sec4.tex", "max_forks_repo_name": "DAIRLab/ImpactModeling", "max_forks_repo_head_hexsha": "f6c28898845da6d48efdd6c1c696db2fb3716edf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-19T21:01:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-19T21:01:28.000Z", "avg_line_length": 36.109375, "max_line_length": 206, "alphanum_fraction": 0.6648636954, "num_tokens": 1314, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7772998663336158, "lm_q1q2_score": 0.5999538477007832}}
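For reference, here is a minimal sketch of the IRB selection step (the group used MATLAB's \\textit{fmincon}; this is our Python/SciPy translation with hypothetical values for $\\textbf{M}$, $\\textbf{J}$ and the pre-impact velocity):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\nM = np.diag([1.0, 1.0, 0.01])     # hypothetical generalized mass matrix\nJ = np.array([[1.0, 0.0, 0.0],    # hypothetical contact Jacobian\n              [0.0, 1.0, 0.0]])   # rows: tangential (d) and normal (n)\nv_pre = np.array([0.3, -1.0, 2.0])\nMinv = np.linalg.inv(M)\n\ndef v_post(P):                    # P = (Pt, Pn) impulse pair\n    return v_pre + Minv @ J.T @ P\n\ndef ke(v):                        # kinetic energy 0.5 v' M v\n    return 0.5 * v @ M @ v\n\n# Minimize the l2 norm of the post-impact velocity, constrained to the\n# energy ellipse (no increase in energy due to impact).\nres = minimize(lambda P: np.linalg.norm(v_post(P)), x0=np.zeros(2),\n               constraints=[{\"type\": \"ineq\",\n                             \"fun\": lambda P: ke(v_pre) - ke(v_post(P))}])\nprint(res.x, v_post(res.x))\n\\end{verbatim}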
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{color}\n\\usepackage{amsmath}\n\\begin{document}\n\t\\title{GST108-Introduction to Quantitative Reasoning: Managing Money}\n\t\\author{Oghenemaro Egbodo}\n\t\\maketitle\n\t\\newpage\n\t\\tableofcontents\n\t\\newpage\n\t\\section{Interest Rate}\n\t\\textcolor{red}{Interest(I)} is the price of money. It is the fee\u00a0paid for borrowing or investing money.\\\\\n\t\\textcolor{red}{Principal(P)}\nis the amount of money being borrowed or deposited.\\\\\n\t\\textcolor{red}{Interest rate (R)} is the percentage of the principal that uis paid as a fee over a period of time.\\\\\n\t\\textcolor{red}{Accumulated Balance(A)} is the total amount to be repaid or the total value of money invested.\\\\\n\t\\textcolor{red}{Simple Interest Formula I=PRT\\\\\n\t\tAccumulated Balance A= P + I\\\\\n\t\tA = P(1+RT)}\n\t\\newpage\n\t\\section{Compound Interest}\n\t\\textcolor{red}{Compound Interest} is\u00a0interest paid on the original principal of a deposit or loan and also on the accumulated interest.\n\tThe addition of interest to the principal is called\u00a0\\textit{compounding}.\n\tInterest can be compounded periodically.\n\t\\textcolor{red}{\\begin{equation}\n\t\t\tA=P(1+\\frac{R}{n})^{nt}\n\t\\end{equation}}\n    \\textcolor{red}{\\begin{equation}\n    \t\tI= A+P\n    \t\t\\end{equation}}\n\tA = Accumulated Balance or Amount\\\\\n\tP = Principal \\\\ R = Annual Rate (in decimal)\\\\ n = Number of compounding periods per year\\\\\n\tT = Time\\\\\n\tI = Interest\n\t\\newpage\n\t\\section{Continuous Compounding}\n\tInterest accrued on investment of \\$1000 at 8\\% for 5 years with various compounding periods.\\\\\n\t\\includegraphics[width=1.0\\linewidth]{bar chart.png}\n\t\\newpage\n\t\\subsection{Continuous Compounding}\n\tContinuous Compounding is when the principal is constantly earning interest and the number of compounding periods increases without bound.\n\t\\textcolor{red}{\\begin{equation}\n\t\t\tA=Pe^{RT}\n\t\\end{equation}}\\\\\n\tA  =  Accumulated Balance or Amount\\\\\n\tP = Principal\n\\\\\n\tR = Annual Rate (in decimal)\n\\\\\n\tT = Time\n\t\\newpage\n\t\\section{Effective Rate}\n\tWhen interest is compounded, the annual rate of interest (R) is called the nominal rate.\\\\\n\tThe effective rate, R$_{e}$ is the simple interest rate that would yield the same amount of interest after 1 year.\\\\\n\tWhen a bank advertises a \u201c7\\% annual interest rate compounded daily and yielding 7.25\\%,\u201d the nominal interest rate is 7\\% and the effective rate is 7.25\\%. \\\\\n\t\\textcolor{red}{\\begin{equation}\n\t\t\tR_{e} = [(1+\\frac{R}{n})^{n-1}]*100\n\t\\end{equation}}\\\\\n\tThe effective rate is useful for comparing rates with different compounding periods. \\\\\n\t\\newpage\n    \\section{Annuity Plan}\n\tAn annuity plan is an investment plan consisting of equal periodic payments.\\\\\n\t\\textcolor{red}{\\begin{equation}\n\t\t\tA = PMT * \\frac{[(1+ \\frac{R}{n})^{nt} - 1]}{\\frac{R}{n}}\n\t\t\\end{equation}}\\\\\n\t\tA = Accumulated Balance or Amount\\\\\n\t\tPMT  = regular payment  or deposit\n\\\\\n\t\tR = Annual Rate (in decimal)\n\\\\\n\t\tn = Number of compounding periods per year\n\\\\\n\t\tT = Time\n\\\\\n\t\t\t\\newpage\n\t\t\\section{Loan Payment Formula}\n\t\tThe \\textcolor{red}{principal} is the amount of money owed at any particular time. Interest is charged on the loan principal. 
To pay off a loan, you must gradually pay down the principal\\\\\n\t\tAn \\textcolor{red}{installment loan (or amortized loan)}is a loan that is paid off with equal regular payments.\n\\\\\n\t\tThe \\textcolor{red}{loan term} is the time you have to pay back the loan in full.\n\\\\\n\t\t\\textcolor{red}{\\begin{equation}\n\t\t\t\tPMT = \\frac{P*\\frac{R}{n}}{[1-(1+\\frac{R}{n})^{-nt}]}\n\t\t\\end{equation}}\\\\\n\t\tP = starting loan principal\\\\\n\t\tPMT  = regular payment  or deposit\n\\\\\n\t\tR = Annual Rate (in decimal)\n\\\\\n\t\tn = Number of compounding periods per year\n\\\\\n\t\tT = Time\n\t\t\\newpage\n\t\t\\section{Exercises}\n\t\t1. Find the simple interest and Accumulated balance for the following:\n\t\t\\begin{itemize}\n\t\t\t\\item Principal = 10000 naira; APR = 8\\%; Time = 4 months\n\t\t\t\\item Principal = 20000; APR = 6\\%; Time = 3 months\n\t\t\t\\item Principal = 60,000 naira; APR = 4\\%; Time = 2 years\n\t\t\t\\item Principal = 50,000 naira; APR = 7\\%; Time = 3 years\n\t\t\\end{itemize}\n\t\t\n\t\t2. A principal of 20,000 naira earns 6\\% per year simple interest. How long will it take for the accumulated balance to become 23000 naira?\\\\\n\t\t3.An account with 10000 naira earns interest at annual rate of  8\\%. Find the amount in this account after 10 years if the compounding is  a) monthly    b) daily     c) Quarterly   d) weekly\\\\\n\t\t4.How much money should be deposited in a bank account earning an annual interest rate of 8\\% compounded quarterly, in order to have 10 million naira at the end of 10 years?\n\\\\\n\t\t5.Every six months, Segun puts 10000 naira into an account earning an APR of 10\\% compounded bi-annually. How much will be in the account at the end of 15 years\\\\\n\t\t6.Kunle wants to start a small business in 5 years. He will need 20 million naira to start the business. How much should he deposit every month into an account with an APR of 9\\% compounded monthly in order to meet this goal?\n\t\t\\newpage\n\t\t\\subsection{Exercises contd.}\n\t\t7.Imagine you want to retire in 30 years with a pension of 1,000,000 naira from an investment plan. \\\\ \n\t\ta)How much money would you have to invest today at an APR of 9\\% , compounded daily, in order to have 1,000,000 naira in 30 years? \\\\\n\t\tb)Calculate the total return on investment after 30 years and comment on what will happen to the original investment after 30 years.\\\\\n\t\tc)Calculate the Annual Percentage Yield (APY) on the investment over the 30 year period and comment on what will happen to the original investment every year.\\\\\n\t\td)How much will the N1, 000, 000 generate in interest each year, if it is invested at an APR of 9\\%, compounded daily?\n\t\t\\newpage\n\t\t\\subsection{Exercises contd.}\n\t\t8.  You want to buy a motorcycle costing 2 million naira. You have two options:\n\t\tOption 1 - You can borrow 2 million naira at an APR of 8\\% for 1 year and pay it back in monthly payments over the year.\\\\\n\t\tOption 2-   You can save the money you would have made in loan payments during 1 year and purchase the motorcycle.\\\\\n\t\tIf you decide to save your money for 1 year:\n\\\\\n\t\ta) You will have to deposit the equivalent of 1 month\u2019s loan payment in your savings account at the end of each month, and you will earn 5\\% interest on the account, compounded monthly. \\\\\n\t\tb) You will pay 28\\% of the interest earned on the savings account in taxes. 
\\\\\n\t\tc) An annual inflation rate of 7\\% will have increased the price of the motorcycle by 7\\%.\\\\\n\t\tIf you choose the option of saving your money for 1 year, how much money will you have left after you pay the tax and purchase the motorcycle?\n\t\t\\newpage\n\t\t\\includegraphics[width=1.0\\linewidth]{bruh.png}\n\t\\end{document}", "meta": {"hexsha": "c794dc8b0d2972597b2fd538b0c8eb31dfdd79d9", "size": 6667, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CA/CA1 Oghenemaro Egbodo.tex", "max_stars_repo_name": "EgbodoM2003/MaroCSC102", "max_stars_repo_head_hexsha": "44628b606b6ac9d467600b45c02ccc5d8b565803", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CA/CA1 Oghenemaro Egbodo.tex", "max_issues_repo_name": "EgbodoM2003/MaroCSC102", "max_issues_repo_head_hexsha": "44628b606b6ac9d467600b45c02ccc5d8b565803", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CA/CA1 Oghenemaro Egbodo.tex", "max_forks_repo_name": "EgbodoM2003/MaroCSC102", "max_forks_repo_head_hexsha": "44628b606b6ac9d467600b45c02ccc5d8b565803", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.2846153846, "max_line_length": 227, "alphanum_fraction": 0.7255137243, "num_tokens": 1963, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5999538476032231}}
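To make the formulas concrete, here is a small Python sketch (ours, not part of the course material) implementing them, with R in decimal form; the last line checks exercise 3a (10000 naira at 8\\% compounded monthly for 10 years):\n\\begin{verbatim}\nimport math\n\ndef compound(P, R, n, T):       # A = P (1 + R/n)^(nT)\n    return P * (1 + R / n) ** (n * T)\n\ndef continuous(P, R, T):        # A = P e^(RT)\n    return P * math.exp(R * T)\n\ndef annuity(PMT, R, n, T):      # accumulated balance of regular deposits\n    return PMT * ((1 + R / n) ** (n * T) - 1) / (R / n)\n\ndef loan_payment(P, R, n, T):   # PMT for an installment loan\n    return P * (R / n) / (1 - (1 + R / n) ** (-n * T))\n\nprint(round(compound(10000, 0.08, 12, 10), 2))   # about 22196.40\n\\end{verbatim}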
{"text": "\\documentclass[11pt]{article}\n\\usepackage{geometry}                \n\\geometry{letterpaper}\n\\usepackage[]{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{enumitem}\n\\usepackage{hyperref}\n\\usepackage{amsmath}\n\\usepackage{multicol}\n\\usepackage{graphics}\n\n\\newcommand{\\braket}[2]{\\langle #1 | #2 \\rangle}\n\n\\input{/home/prall/.vim/tex/Qcircuit.tex}\n\n\\begin{document}\n\n\n\\section*{Probability calculation and the Hidden Shift Algorithm}\n\n\\subsection*{Introduction}\n\nThis note concerns the remark on page 7, which concerns classically controlled gates in response to measurements of qubits. The trick goes as follows: say a qubit has a 50-50 chance of being 0 or 1. Then we can postselect a measurement result $x = $ 0 or 1 with a 50-50 chance, select a circuit $V_x$ depending on the measurement output $x$, and project the measured qubit onto $\\ket{x}\\bra{x}$.\n\nThis trick works for the sampling algorithm, but not for the probability algorithm. This can be seen with the following circuit:\n\\[\n\\Qcircuit @C=.7em @R=.4em @! {\n    \\lstick{\\ket{0}} & \\gate{H} & \\meter\\\\\n\\lstick{\\ket{0}} &  \\qw & \\gate{X} \\cwx &\\qw \n}\n\\]\n\nSay we desire to calculate the probability of the second qubit being zero: $P(0) = 1/2$. We postselect a value $x$ giving either $V_0 = H \\otimes I$ or $V_1 = H \\otimes X$, or compactly $V_x = H \\otimes X^x$. Then we calculate:\n\n$$P_x(0) = \\frac{ \\bra{0^{\\otimes 2}} V^\\dagger_x (\\ket{x}\\bra{x} \\otimes \\ket{0}\\bra{0}) V_x \\ket{0^{\\otimes 2}} }{\\bra{0^{\\otimes 2}} V^\\dagger_x (\\ket{x}\\bra{x} \\otimes I) V_x \\ket{0^{\\otimes 2}} } = \\frac{2^{-1/2} |\\bra{0} X^x \\ket{0}|^2 }{2^{-1/2}} = 1-x$$\n\nNeither $P_1(0) = 0$ nor $P_0(0) = 1$ are the correct probability, therefore this technique does not work for calculating probabilities. However, if a random $x$ is sampled, and we then sample a result from $P_x(0)$, it will be as if we had sampled from the correct distribution.\n\n\\textbf{Why does it always work for the T gate gadget, but not for other situations?}\n\nI believe this subtlety was neglected in the implementation of the Hidden Shift Algorithm (HSA) as detailed in the final appendix. In the following I will demonstrate that this is indeed an issue for HSA in particular, and that it can be resolved without compromising the algorithm's runtime.\n\nIn the following I will use the shorthand:\n$$f(x, \\Pi, y, V_x) = (\\bra{0^{\\otimes n}} \\otimes \\bra{A^{\\otimes t}}) V^\\dagger_x ( \\ket{x}\\bra{x} \\otimes \\Pi \\otimes \\ket{y}\\bra{y}) V_x (\\ket{0^{\\otimes n}} \\otimes \\ket{A^{\\otimes t}}) $$\n\nHere $\\Pi$ is a projector into the qubit(s) whose output probability is being measured. $x$ is a postselection measurements appearing in the circuit, for example in the Toffoli gate gadget. $y$ is a postselection on measurements in the $T$-gate gadget only. The paper's discussion lumps $x,y$ into a single variable $y$, but here it is handy to keep them separate. 
$V_x$ is a gadgetized circuit, where the gadgetization depends on both $x$ and $y$ (keeping the $y$ dependence implicit).\n\n\\textbf{Proposition 1.} There exists an HSA $V_x$ and postselections $x$ and $y$ such that:\n$$P(q=0) \\neq \\frac{f(x, \\ket{0}\\bra{0}_q, y, V_x)}{f(x, I_q, y, V_x)}$$\nIn other words, the procedure detailed in appendix F can fail to calculate the correct probability.\n\n\\textit{Proof} ...\n\n\n\\textbf{Proposition 2.} For any algorithm $V_x$, any postselections $x$ and $y$, and potentially several output qubits $q$ and output string $\\tilde q$:\n$$P(q=\\tilde q) = \\frac{\\left\\langle  f(x, \\ket{\\tilde q}\\bra{\\tilde q}_q, y, V_x) \\right\\rangle_x}{\\Big\\langle  f(x, I_q, y, V_x) \\Big\\rangle_x} = \\frac{ \\sum_x  f(x, \\ket{\\tilde q}\\bra{\\tilde q}_q, y, V_x) }{\\sum_{x}  f(x, I_q, y, V_x) }$$\nIn other words, if we average over all postselections $x$ in the numerator and denominator, the result is always correct.\n\n\\textit{Proof} ...\n\n\\textbf{Proposition 3.} For any algorithm $V_x$, $T$-gate postselection $y$ and output projector $\\Pi$, let $\\Big\\langle  f(x, \\Pi, y, V_x) \\Big\\rangle_x = \\mu$. Then the random variable $\\alpha = f(\\xi, \\Pi, y, V_\\xi)$, where $\\xi$ is a uniformly random distribution of bit strings, obeys $\\langle\\alpha\\rangle = \\mu$ and:\n$$P( (\\alpha - \\mu)^2 < \\epsilon ) \\leq n/(4\\epsilon) $$.\nIn other words, I can efficiently approximate the average value of $f$ via a polynomial-size random subset of postselections $x$.\n\n\\textit{Proof} ...\n\n\\end{document}\n", "meta": {"hexsha": "5e7374c8a074ad05d609cb99a04f9eaf5a5a427d", "size": 4382, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/probability.tex", "max_stars_repo_name": "patrickrall/CircuitSimulator", "max_stars_repo_head_hexsha": "a925996440313d96c3f102c52caa376f38fed015", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2017-11-07T13:32:00.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-15T01:05:45.000Z", "max_issues_repo_path": "docs/probability.tex", "max_issues_repo_name": "patrickrall/CircuitSimulator", "max_issues_repo_head_hexsha": "a925996440313d96c3f102c52caa376f38fed015", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/probability.tex", "max_forks_repo_name": "patrickrall/CircuitSimulator", "max_forks_repo_head_hexsha": "a925996440313d96c3f102c52caa376f38fed015", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-04-25T20:00:21.000Z", "max_forks_repo_forks_event_max_datetime": "2017-07-19T21:30:55.000Z", "avg_line_length": 63.5072463768, "max_line_length": 486, "alphanum_fraction": 0.7001369238, "num_tokens": 1409, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5999538437189008}}
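As a sanity check (worked out here; the original leaves the proofs as placeholders), Proposition 2 can be verified on the two-qubit example above: with $V_x = H \\otimes X^x$, the numerator terms are $2^{-1}|\\bra{0} X^x \\ket{0}|^2 = 2^{-1}(1-x)$ and the denominator terms are $2^{-1}$, so\n$$\\frac{\\sum_x 2^{-1}(1-x)}{\\sum_x 2^{-1}} = \\frac{1/2 + 0}{1/2 + 1/2} = \\frac{1}{2} = P(0),$$\nwhich is the correct probability.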
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{ambifunb}\n\\section*{\\hspace*{-1.6cm} ambifunb}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nNarrow-band ambiguity function.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[naf,tau,xi] = ambifunb(x)\n[naf,tau,xi] = ambifunb(x,tau)\n[naf,tau,xi] = ambifunb(x,tau,N)\n[naf,tau,xi] = ambifunb(x,tau,N,trace)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty ambifunb} computes the narrow-band ambiguity function of a\n        signal, or the cross-ambiguity function between two signals. Its\n        definition is given by\n\\[A_x(\\xi,\\tau)=\\int_{-\\infty}^{+\\infty} x(s+\\tau/2)\\ x^*(s-\\tau/2)\\\ne^{-j2\\pi \\xi s}\\ ds.\\] \n\n\\hspace*{-.5cm}\n\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\ \n\\hline \n{\\ty x} & signal if auto-AF, or {\\ty [x1,x2]} if cross-AF ({\\ty length(x)=Nx})&\\\\ \n{\\ty tau} & vector of lag values &{\\ty (-Nx/2:Nx/2)}\\\\ \n{\\ty N} & number of frequency bins &{\\ty Nx}\\\\ \n{\\ty trace} & if non-zero, the progression of the algorithm is shown&{\\ty 0}\\\\ \n\\hline\n{\\ty naf} & doppler-lag representation, with the doppler bins stored in the rows\nand the time-lags stored in the columns&\\\\ \n{\\ty xi} & vector of doppler values\\\\\\hline\n\\end{tabular*}\n\\vspace*{.5cm}\n\nThis representation is computed such as its 2D Fourier transform equals the\nWigner-Ville distribution.  When called without output arguments, {\\ty\nambifunb} displays the squared modulus of the ambiguity function by means\nof {\\ty contour}.\n\nThe ambiguity function is a measure of the time-frequency correlation of a\nsignal $x$, i.e. the degree of similarity between $x$ and its translated\nversions in the time-frequency plane.\n\\end{minipage}\n\\vspace*{1cm}\n\n\\newpage\n\n{\\bf \\large \\sf Examples}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nConsider a BPSK signal (see {\\ty anabpsk}) of 256 points, with a keying\nperiod of 8 points, and analyze it with the narrow-band ambiguity\nfunction\\,:\n\\begin{verbatim}\n         sig=anabpsk(256,8);\n         ambifunb(sig);\n\\end{verbatim}\nThe resulting function presents a high thin peak at the origin of the\nambiguity plane, with small sidelobes around. 
This means that the\ninter-correlation between this signal and a time/frequency-shifted version\nof it is nearly zero (the ambiguity in the estimation of its arrival time\nand mean-frequency is very small).\\\\\n\nHere is an other example that checks the correspondance between the WVD and\nthe narrow-band ambiguity function by means of a 2D Fourier transform\\,:\n\\begin{verbatim}\n         N=128; sig=fmlin(N); amb=ambifunb(sig);\n         amb=amb([N/2+1:N 1:N/2],:);\n         ambi=ifft(amb).';\n         tdr=zeros(N); \t\t% Time-delay representation\n         tdr(1:N/2,:)=ambi(N/2:N-1,:);\n         tdr(N:-1:N/2+2,:)=ambi(N/2-1:-1:1,:);\n         wvd1=real(fft(tdr));\n\n         wvd2=tfrwv(sig);\n         diff=max(max(abs(wvd1-wvd2)))\n         diff = \n                1.5632e-13\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nambifuwb.\n\\end{verbatim}\n\\end{minipage}\n\n", "meta": {"hexsha": "21e5eb5e735762ba79ecafac1a1a1e44d5a4a739", "size": 3452, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/ambifunb.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/ambifunb.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/ambifunb.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 29.7586206897, "max_line_length": 82, "alphanum_fraction": 0.6729432213, "num_tokens": 1169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746911, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5999538396394584}}
{"text": "\\documentclass{article}\n\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\\usepackage{graphicx}\n\\usepackage{marvosym}\n\\usepackage{dingbat}\n\n\\title{The Conjugate Gradient Method}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section{The Conjugate Gradient}\n\nThe Conjugate Gradient (CG) is an iterative method for the solution\nfor sparse, symmetric and positive-definite linear systems. Remember,\nsparse matrices are matrices where most of the coefficients are equal\nto zero. The zero coefficients are not explicitly stored in memory and\nare not considered in all the operations involving the matrix; this\nallows to reduce the memory consumption as well as the complexity of\nmatrix operations.\n\nThe CG method receives in input a starting solution $x=x_0$ and\nrefines it until the desired accuracy is reached (for example, until\nthe norm of the residual is smaller than a certain threshold $\\|r\\|_2\n= \\|b-Ax\\|_2 < \\varepsilon$ and/or when the number of iterations\nexceeds a maximum value). In its basic form, the CG is described by\nthe algorithm below:\n\n\\begin{algorithmic}[1]\n  \\REQUIRE a matrix $A$, a right-hand side $b$ and a starting solution $x_0$\n  \\STATE $d_0 = r_0 = b-Ax_0$\n  \\WHILE {$\\|r_i\\|_2 > \\varepsilon$ and $i < itmax$}\n  \\STATE $\\alpha_i = \\frac{r^T_i r_i}{d^T_iAd_i}$\n  \\STATE $x_{i+1} = x_i + \\alpha_id_i$\n  \\STATE $r_{i+1} = r_i - \\alpha_iAd_i$\n  \\STATE $\\beta_{i+1} = \\frac{r^T_{i+1} r_{i+1}}{r^T_i r_i}$\n  \\STATE $d_{i+1} = r_{i+1} + \\beta_{i+1}d_i$\n  \\ENDWHILE\n\\end{algorithmic}\n\nThe method only requires four basic operations:\n\\begin{enumerate}\n\\item matrix-vector product $y = \\alpha Ax + \\beta y$\n\\item dot-product $v = x^Ty$\n\\item vector 2-norm $v = \\|x\\|_2$\n\\item vector sum $y = \\alpha x + \\beta y$\n\\end{enumerate}\nwhere $y$ and $x$ are dense vectors, $A$ is a sparse matrix, $\\alpha$,\n$\\beta$ and $v$ are scalars.\n\nThe objective of this exercise is to parallelize each of these four\noperations independently using OpenMP in order to accelerate the\nConjugate Gradient. The mathematical details of the CG method are, in\nthis context, not important and can be neglected.\n\n\\section{Sparse Matrix Format}\n\nBecause the zero coefficients of the matrix must not be stored, a\nstandard 2D array cannot be used for storing a sparse matrix. \nIn this exercise, sparse matrices are represented in Compressed Sparse\nRow (CSR) format that consists of three arrays:\n\n\\begin{itemize}\n\\item \\texttt{val}: this array of size \\texttt{nz} contains the\n  nonzero coefficients of the matrix sorted by rows (all the\n  coefficients in the first row, then all those in the second row\n  etc.)\n\\item \\texttt{colind}: this array of size \\texttt{nz} contains the\n  column indices for the coefficients in the \\texttt{val} array\n\\item \\texttt{rowptr}: this array of size \\texttt{n+1} contains\n  pointer to the beginning of each row inside the \\texttt{val} and\n  \\texttt{colind} arrays. For example \\texttt{rowptr[3]=4} means that\n  the first coefficient of row 3 is the 4th element of array\n  \\texttt{val} and its column index is equal to\n  \\texttt{colind[4]}. 
Therefore, all the coefficients of row\n  \\texttt{i} are between positions \\texttt{rowptr[i]} and\n  \\texttt{rowptr[i+1]-1}.\n\\end{itemize}\nwhere \\texttt{n} is the size of the matrix and \\texttt{nz} is the number\nof nonzero coefficients in the matrix.\n\nHere is an example:\n\n\\vspace{0.5cm}\n\n\\begin{minipage}{0.4\\linewidth}\n\\begin{displaymath}\n   A=\\left[\\begin{array}{ccccc}\n   1 & 0 & 3 & 0 & 0\\\\\n   4 & 2 & 0 & 1 & 0\\\\\n   0 & 0 & 7 & 0 & 2\\\\\n   8 & 2 & 5 & 0 & 0\\\\\n   0 & 3 & 0 & 4 & 0\\\\\n  \\end{array}\\right]\n\\end{displaymath}\n\\end{minipage}%\n\\begin{minipage}{0.6\\linewidth}\n  \\centering\n  \\includegraphics[width=\\textwidth]{csr}\n\\end{minipage}\n\n\\newpage\n\nThe sparse matrix-vector product $y = \\alpha Ax + \\beta y$ is computed\nwith the following routine: % in Figure~\\ref{fig:spmv}.\n% \\begin{figure}[!h]\n  % \\centering\n\\small\n\\begin{verbatim}\nvoid spmv(int n, int *rowptr, int *colind, double *val, \n          double alpha, double *x, double beta, double *y){\n\n  int i, j;\n\n  for(i=0; i<n; i++){\n    /* for each row... */\n    y[i] = beta*y[i];\n    for(j=rowptr[i]; j<rowptr[i+1]; j++){\n      /* for each coefficient in the row... */\n      y[i] += alpha*val[j]*x[colind[j]];\n    }\n  }\n  return;\n}\n\\end{verbatim}\n% \\caption{\\label{fig:spmv} The sparse matrix-vector product.}\n% \\end{figure}\n\\normalsize\n\nNote that each iteration of the outer loop handles one row of the\nmatrix and each iteration of the inner loop handles one coefficient of\na row.\n\n\n\\section{Package content}\nIn the \\texttt{ConjugateGradient} directory you will find the\nfollowing files:\n\\begin{itemize}\n\\item \\texttt{conjugategradient.c}: this file contains the main\n  program that reads a matrix $A$ from a file, generates a right-hand\n  side $b$ and computes the solution $x$ of the system $Ax=b$ using\n  the Conjugate Gradient method. {\\bf This file should not be\n    modified}.\n\\item \\texttt{kernels.c}: this file contains the subroutines that\n  perform the four basic operations described above. {\\bf This is the\n    only file that must be modified} as described below.\n\\item the remaining files contain auxiliary routines and can be safely\n  ignored.\n\\end{itemize}\n\nThe code can be compiled with the \\texttt{make} command: just type\n\\texttt{make} inside the \\texttt{ConjugateGradient} directory; this\nwill generate a \\texttt{main} program that can be run like this:\n\n\\begin{verbatim}\n$ ./main matrix_file\n\\end{verbatim}\n\nwhere \\texttt{matrix\\_file} can be \\texttt{matrix1.rb},\n\\texttt{matrix2.rb} or \\texttt{matrix3.rb}. When executed, the main\nprogram will read the matrix $A$ from the file, generate the\nright-hand side $b$ and find the solution $x$ of the linear system\n$Ax=b$ using the CG method. It will print the norm of the residual\n$\\|r\\|_2 = \\|b-Ax_i\\|_2$ every 10 iterations and, at the end, will\nprint the total number of iterations done, the residual of the final\nsolution and the CG execution time.\n\n% The matrices can be found in the\n% \\texttt{/mnt/n7fs/ens/tp\\_abuttari/TP\\_SysCo/Matrices\\_BE/}\n% directory. Before running the program for the first time, just make a\n% copy of these files into the \\texttt{ConjugateGradient} directory:\n\n% \\begin{verbatim}\n% $ cp /mnt/n7fs/ens/tp_abuttari/TP_SysCo/Matrices_BE/matrix*.rb .\n% \\end{verbatim}\n\n\\section{Assignment}\n\\begin{itemize}\n\\item {\\huge \\Keyboard} Use OpenMP to parallelize the four subroutines in the\n\\texttt{kernels.c} file. 
If you use the \\texttt{omp parallel\n  for} construct, consider experimenting with different scheduling types.\n\\item \\smallpencil Analyze and compare the performance of the parallel\n  code using different numbers of threads. Report in the \\texttt{responses.txt}\n  file the number of iterations and the execution time for the three\n  example matrices \\texttt{matrix1.rb}, \\texttt{matrix2.rb} and\n  \\texttt{matrix3.rb}. Comment on the observed results: Did you\n  observe any speedup (reduction of the execution time) using 2 or 4\n  threads instead of 1? Did you observe any difference in the number\n  of iterations using 2 or 4 threads instead of 1?  Is the method\n  still converging when using 2 or 4 threads? Did you observe any\n  difference when using a dynamic scheduling instead of a static one?\n  Can you explain this difference?\n\\end{itemize}\n\n\n\\paragraph{Advice.}\n\\begin{itemize}\n\\item Note that some operations are easier to parallelize than\n  others. The vector sum routine \\texttt{axpby} is the easiest, the\n  \\texttt{dot} and \\texttt{norm2} are a bit more difficult and,\n  finally, the matrix-vector product \\texttt{spmv} is the hardest. It\n  is recommended to begin with the easier routines and to\n  validate the correctness of the result each time one routine is\n  parallelized.\n\\item It is reasonable to expect that the number of iterations to\n  convergence is slightly different when using 2 (or more) threads\n  instead of 1. The difference, however, should be very small (not\n  more than 10 iterations for the given test matrices). Therefore, if\n  you observe a big difference in the number of iterations or if the\n  method does not converge anymore, it means that the code was not\n  correctly parallelized.\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\end{document}\n\n\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: t\n%%% End: \n", "meta": {"hexsha": "782359277279ced1e7120e183c165fba2241b630", "size": 8242, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2IMA/OpenMP/BE2015_correction/ConjugateGradient/subject.tex", "max_stars_repo_name": "LagOussama/enseeiht", "max_stars_repo_head_hexsha": "d3247880c66cd3754d0bd29781ab1ddec9f6536f", "max_stars_repo_licenses": ["FTL"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-26T11:38:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-26T11:38:34.000Z", "max_issues_repo_path": "2IMA/OpenMP/BE2015_correction/ConjugateGradient/subject.tex", "max_issues_repo_name": "LagOussama/enseeiht", "max_issues_repo_head_hexsha": "d3247880c66cd3754d0bd29781ab1ddec9f6536f", "max_issues_repo_licenses": ["FTL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2IMA/OpenMP/BE2015_correction/ConjugateGradient/subject.tex", "max_forks_repo_name": "LagOussama/enseeiht", "max_forks_repo_head_hexsha": "d3247880c66cd3754d0bd29781ab1ddec9f6536f", "max_forks_repo_licenses": ["FTL"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-10T11:39:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-10T11:39:01.000Z", "avg_line_length": 33.7786885246, "max_line_length": 77, "alphanum_fraction": 0.7304052414, "num_tokens": 2414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7772998663336158, "lm_q1q2_score": 0.5999538395418983}}
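As a language-independent sanity check, the CSR product described above can be mirrored in a few lines of Python. This is our own sketch (not part of the package) and can serve as a ground truth when validating the parallelized C kernels; the data encode the 5x5 example matrix $A$ from the text.
\begin{verbatim}
# Plain-Python CSR mat-vec, y = alpha*A*x + beta*y; mirrors spmv() above.
def spmv(n, rowptr, colind, val, alpha, x, beta, y):
    for i in range(n):                           # one row per outer iteration
        acc = 0.0
        for j in range(rowptr[i], rowptr[i+1]):  # coefficients of row i
            acc += val[j] * x[colind[j]]
        y[i] = alpha * acc + beta * y[i]
    return y

val    = [1, 3,  4, 2, 1,  7, 2,  8, 2, 5,  3, 4]
colind = [0, 2,  0, 1, 3,  2, 4,  0, 1, 2,  1, 3]
rowptr = [0, 2, 5, 7, 10, 12]
print(spmv(5, rowptr, colind, val, 1.0, [1.0]*5, 0.0, [0.0]*5))
# -> [4.0, 7.0, 9.0, 15.0, 7.0]  (the row sums of A)
\end{verbatim}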
{"text": "\\section{sparse grid for $x^2$}\nConsider multilevel grid points\n$$\nT_k:\\quad x_i^k = ih_k,\\quad h_k = 2^{-k},\\quad k=0,1,2,...,J,\\quad 0\\le i\\le 2^k.\n$$\nConsider linear interpolation on $T_k$:\n$$\n(I_k v)(x) = \\sum_{i=0}^{2^k}v(x_i^k)\\varphi_i^k(x).\n$$\nLet $v(x)=x^2$, it is easy to see that\n$$\n(I_0v)(x) = x.\n$$\nFor $k\\ge 1$,\n\\[\n\\begin{split}\n(I_k-I_{k-1})v &= \\sum_{j=1}^{2^{k-1}}[(I_k-I_{k-1})v]\\varphi_{2j-1}^k(x)\\\\\n&=\\sum_{j=1}^{2^{k-1}}[v(x_{2j-1}^k)-\\frac{1}{2}(v(x_{2j-2}^k)+v(x_{2j}^k))]\\varphi_{2j-1}^k(x)\\\\\n&=\\sum_{j=1}^{2^{k-1}}\\frac{1}{2}h_k^2[2(2j-1)^2-(2j-2)^2-(2j)^2]\\varphi_{2j-1}^k(x)\\\\\n&=-\\sum_{j=1}^{2^{k-1}}4^{-k}\\varphi_{2j-1}^k(x)\\\\\n&=-4^{-k}\\underbrace{\\varphi_1\\circ\\cdots\\circ\\varphi_1(x)}\\limits_{\\varphi_k}.\n\\end{split}\n\\]\n\n\\begin{lemma}\n\t$$\n\tI_J(x^2) = x-\\sum_{k=1}^J4^{-k}\\varphi_k(x).\n\t$$\n\\end{lemma}\nConsequently,\n\\[\n\\begin{split}\nx^2 &= I_Jv + O(h_J^2) = I_0v+\\sum_{k=1}^J(I_k-I_{k-1})v+O(h_J^2)\\\\\n&=x - \\sum_{k=1}^J4^{-k}\\varphi_k + O(h_J^2)\n\\end{split}\n\\]\n\n\n\n\\section{sparse grid for $x^3$}\n\nGiven $(x_i,y_i),i=1,2,3,$ the quadratic interpolation function is \n\\[\nI(x) = y_1\\frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)}+y_2\\frac{(x-x_1)(x-x_3)}{(x_2-x_1)(x_2-x_3)}+y_3\\frac{(x-x_1)(x-x_2)}{(x_3-x_1)(x_3-x_2)}.\n\\]\nOn grid \n\\[T_k: x_i = i2^{-k},\\quad i = 0,...,2^k,\n\\]\nwe have the piece quadratic interpolation on each $[x_i,x_{i+1}]$. Define $x_{i+\\frac{1}{2}} = \\frac{1}{2}(x_i+x_{i+1})$, then we have the basis function $\\varphi_i^k,\\varphi_{i+\\frac{1}{2}}^k,\\varphi_{i+1}^k$.\n\nOn grid \n\\[T_{k+1}: x_i = i2^{-(k+1)},\\quad i = 0,...,2^{k+1},\n\\]\nwe have the basis function $\\varphi_i^{k+1},\\varphi_{i+\\frac{1}{4}}^{k+1},\\varphi_{i+\\frac{1}{2}}^{k+1}$ on $[x_i,x_{i+\\frac{1}{2}}]$ and $\\varphi_{i+\\frac{1}{2}}^{k+1},\\varphi_{i+\\frac{3}{4}}^{k+1},\\varphi_{i+1}^{k+1}$ on $[x_{i+\\frac{1}{2}},x_{i+1}]$. 
The basis function $\\varphi^{k+1}_i$ satisfies\n\\[\n\\varphi^{k+1}_i(x_i) = 1,\\quad \\varphi^{k+1}_i(x_{i-\\frac{1}{4}}) = 0,\\quad \\varphi^{k+1}_i(x_{i+\\frac{1}{4}}) = 0,\n\\]\nand $\\varphi^{k+1}_i$ is a quadratic function on $[x_{i-\\frac{1}{4}},x_{i+\\frac{1}{4}}]$, and equals $0$ outside $[x_{i-\\frac{1}{4}},x_{i+\\frac{1}{4}}].$\nThen we have \n\\[\n\\begin{split}\nI_k(x^3) &= x_i^3\\varphi_i^k + x_{i+{\\frac{1}{2}}}^3\\varphi_{i+\\frac{1}{2}}^k + x_{i+1}^3\\varphi_{i+1}^k\\\\\n&=x_i^3(\\varphi^{k+1}_i + \\frac{3}{8}\\varphi^{k+1}_{i+\\frac{1}{4}}-\\frac{1}{8}\\varphi^{k+1}_{i+\\frac{3}{4}})\\\\\n&+x_{i+\\frac{1}{2}}^3(\\frac{3}{4}\\varphi^{k+1}_{i+\\frac{1}{4}}+\\varphi^{k+1}_{i+\\frac{1}{2}}+\\frac{3}{4}\\varphi^{k+1}_{i+\\frac{3}{4}})\\\\\n&+ x_{i+1}^3(-\\frac{1}{8}\\varphi_{i+\\frac{1}{4}}^{k+1}+\\frac{3}{8}\\varphi_{i+\\frac{3}{4}}^{k+1}+\\varphi_{i+1}^{k+1})\n\\end{split}\n\\]\nand \n\\[\nI_{k+1}(x^3) = x_i^3\\varphi_i^{k+1} + x_{i+\\frac{1}{4}}^3\\varphi_{i+\\frac{1}{4}}^{k+1} + x_{i+\\frac{1}{2}}^3\\varphi_{i+\\frac{1}{2}}^{k+1} + x_{i+\\frac{3}{4}}^3\\varphi_{i+\\frac{3}{4}}^{k+1} + x_{i+1}^3\\varphi_{i+1}^{k+1}\n\\]\nThen on $[x_i,x_{i+1}]$\n\\[\n\\begin{split}\n(I_{k+1}-I_k)(x^3) &=(x^{3}_{i+\\frac{1}{4}}-\\frac{3}{8}x_i^3-\\frac{3}{4}x_{i+\\frac{1}{2}}^3+\\frac{1}{8}x^3_{i+1})\\varphi_{i+\\frac{1}{4}}^{k+1} +(x_{i+\\frac{3}{4}}^3+\\frac{1}{8}x_i^3-\\frac{3}{4}x_{i+\\frac{1}{2}}^3-\\frac{3}{8}x^3_{i+1})\\varphi_{i+\\frac{3}{4}}^{k+1}\\\\\n&=3\\times2^{-3k-6}(\\varphi_{i+\\frac{1}{4}}^{k+1}-\\varphi_{i+\\frac{3}{4}}^{k+1})\n\\end{split}\n\\]\nwhere $\\varphi_{i+\\frac{1}{4}}^{k+1}-\\varphi_{i+\\frac{3}{4}}^{k+1}$ is \n\\begin{center}\n\t\\begin{tikzpicture}[scale = 1]\n\t\\draw[gray, step = 1cm] (0, -1) grid (4, 1);\n\t\\draw[->] (-.5, 0) -- (4.5, 0);\n\t\\draw[->] (0, -2) -- (0, 2);\n\t\n\t\\node[anchor = north east] at (0, 0) {$ x_i $};\n\t\\node[anchor = north east] at (1, 0) {$ x_{i+\\frac{1}{4}} $};\n\t\\node[anchor = north east] at (2, 0) {$ x_{i+\\frac{1}{2}} $};\n\t\\node[anchor = north east] at (3, 0) {$ x_{i+\\frac{3}{4}} $};\n\t\\node[anchor = north east] at (4, 0) {$ x_{i+1} $};\n\t\\node[anchor = north east] at (0, 1) {$ 1 $};\n\t\\node[anchor = north east] at (0, -1) {$ -1 $};\n\t\\node[anchor = north] at (4.5, 0) {$ x $};\n\t\\node[anchor = west] at (0, 2) {$ y $};\n\t\n\t\n\t\\draw[domain = 0:2, smooth, variable=\\x, black]\n\tplot ({\\x}, {\\x * (2-\\x)})\n\tnode[anchor = west] {};\n\t\\draw[domain = 2:4, smooth, variable=\\x, black]\n\tplot ({\\x}, {-(4-\\x) * (\\x-2)})\n\tnode[anchor = west] {};\n\t\\end{tikzpicture}\n\\end{center}\n\nSumming over all the subintervals of $[0,1]$, we obtain\n\\[\n(I_{k+1}-I_k)(x^3) = 3\\times2^{-3k-6}\\sum_{i=0}^{2^k-1}(\\varphi_{i+\\frac{1}{4}}^{k+1} - \\varphi_{i+\\frac{3}{4}}^{k+1})\n\\]\n\n\n\\section{2D sparse grid for $xy$}\nConsider the uniform grid $[0,1]^2$, and we define the grid\n\\[\nT_k:(x_i,y_j),\\quad x_i = ih,y_j = jh,\\quad h = 2^{-k}\n\\]\n$I_k$ is the piecewise linear interpolation operator which satisfies \n\\[\n(I_k f)(x_i,y_j) = f(x_i,y_j)\n\\]\nWe consider the grids $T_k$ and $T_{k-1}$. 
\n\\begin{itemize}\n\t\\item The basis functions on $T_{k-1}$ are $\\varphi_1^{k-1},\\varphi_2^{k-1},\\varphi_3^{k-1}$.\n\t\\item  The basis functions on the refined grid $T_k$ are $\\varphi_i^k,i=1,..,6$.\n\\end{itemize}\n\\begin{center}\n\t\\begin{tikzpicture}[scale = 3 ]\n\t\\draw[gray, step = 1cm] (0, 0) grid (2, 2);\n\t\\draw[->] (-.5, 0) -- (2.5, 0);\n\t\\draw[->] (0, -.5) -- (0, 2.5);\n\t\\node[anchor = north west] at (0, 0) {$1, (x_i,y_j) $};\n\t\\node[anchor = north west] at (2.5, 0) {$ x $};\n\t\\node[anchor = north west] at (0, 2.5) {$ y $};\n\t\\node[anchor = north west] at (1,0) {$6,(x_{i+\\frac{1}{2}},y_j)$};\n\t\\node[anchor = north west] at (2,0) {$2,(x_{i+1},y_j)$};\n\t\\node[anchor = north west] at (0,1) {$(x_{i},y_{j+\\frac{1}{2}})$};\n\t\\node[anchor = north west] at (1,1) {$5,(x_{i+\\frac{1}{2}},y_{j+\\frac{1}{2}})$};\n\t\\node[anchor = north west] at (2,1) {$4,(x_{i+1},y_{j+\\frac{1}{2}})$};\n\t\\node[anchor = north west] at (0,2) {$(x_{i},y_{j+1})$};\n\t\\node[anchor = north west] at (1,2) {$(x_{i+\\frac{1}{2}},y_{j+1})$};\n\t\\node[anchor = north west] at (2,2) {$3,(x_{i+1},y_{j+1})$};\n\t\n\t\\draw (0,0) -- (0,2);\n\t\\draw (0,2) -- (2,2);\n\t\\draw (2,2) -- (2,0);\n\t\\draw (2,0) -- (0,0);\n\t\\draw (0,0) -- (2,2);\n\t\\draw (0,1) -- (1,2);\n\t\\draw (1,0) -- (2,1);\n\t\\draw (1,0) -- (1,2);\n\t\\draw (0,1) -- (2,1);\n\t\\end{tikzpicture}\n\\end{center}\n\\[\n\\begin{split}\nI_{k-1}(xy) &= x_iy_j\\varphi^{k-1}_1 + x_{i+1}y_j\\varphi^{k-1}_2 + x_{i+1}y_{j+1}\\varphi^{k-1}_3\\\\\n&=x_iy_j(\\varphi^k_1 + \\frac{1}{2}(\\varphi^k_5+\\varphi^k_6)) + x_{i+1}y_j(\\varphi^k_2 + \\frac{1}{2}(\\varphi^k_4+\\varphi^k_6))\\\\\n&+x_{i+1}y_{j+1}(\\varphi^k_3+\\frac{1}{2}(\\varphi^k_4+\\varphi^k_5))\\\\\n&=x_iy_j\\varphi^k_1 + x_{i+1}y_j\\varphi^k_2+x_{i+1}y_{j+1}\\varphi^k_3+\\frac{1}{2}(x_{i+1}y_j+x_{i+1}y_{j+1})\\varphi^k_4\\\\\n&+\\frac{1}{2}(x_{i}y_j+x_{i+1}y_{j+1})\\varphi^k_5+\\frac{1}{2}(x_{i}y_j+x_{i+1}y_{j})\\varphi^k_6\\\\\n&=x_iy_j\\varphi^k_1 + x_{i+1}y_j\\varphi^k_2+x_{i+1}y_{j+1}\\varphi^k_3+x_{i+1}y_{j+\\frac{1}{2}}\\varphi^k_4\\\\\n&+\\frac{1}{2}(x_{i}y_j+x_{i+1}y_{j+1})\\varphi^k_5+x_{i+\\frac{1}{2}}y_j\\varphi^k_6\\\\\nI_k(xy) & = x_iy_j\\varphi^k_1+x_{i+1}y_j\\varphi^k_2+x_{i+1}y_{j+1}\\varphi^k_3+x_{i+1}y_{j+\\frac{1}{2}}\\varphi^k_4 \\\\\n&+x_{i+\\frac{1}{2}}y_{j+\\frac{1}{2}}\\varphi^k_5 + x_{i+\\frac{1}{2}}y_j\\varphi^k_6\n\\end{split}\n\\]\n\nThen,\n\\[\n(I_k - I_{k-1})(xy) = (x_{i+\\frac{1}{2}}y_{j+\\frac{1}{2}}-\\frac{1}{2}(x_{i}y_j+x_{i+1}y_{j+1}))\\varphi^k_5 = -\\frac{h^2}{4}\\varphi_5^k\n\\]\nThe error is uniform! Here $\\varphi_5^k$ is the basis function on each rectangle $[x_i,x_{i+1}]\\times [y_j,y_{j+1}]$ with \n\\[\n\\varphi_5^k(x_{i+\\frac{1}{2}},y_{j+\\frac{1}{2}}) = 1.\n\\]\n\n\n\\section{Spectral approximation properties of ReLU DNN}\nHere we will talk about how to use ReLU DNN to approximate polynomials\nand then achieve the ``spectral\" accuracy for analytic functions.\nThe main results can be found in \\cite{yarotsky2017error},\n\\cite{wang2018exponential}. 
Some related works are\n\\cite{liang2016why,lu2017expressive}\n\nFirst, we will introduce the following notation\n\\begin{itemize}\n\\item $x=(x_1,\\ldots,x_d)$.\n\\item Colon notation for subscript: let $\\{x_{m:n}\\} = \\{x_i:i = m,m+1,...,n\\}$ and $\\{x_{m_1:n_1,m_2:n_2}\\}= \\{x_{i,j}:i = m_1,...,n_1,j = m_2,...,n_2\\}.$ \n\\item Linear combination: denote $y\\in \\mathcal L(x_1,...,x_d)$ if\n  there exist $\\beta_i\\in \\mathbb{R},i=1,...,d,$ such that $y =\n  \\beta_0+\\beta_1x_1+\\cdots+\\beta_d x_d$.\n\\item Linear combination with ReLU activation: denote $\\tilde{y}\\in\n  \\tilde{\\mathcal L}(x_1,...,x_d)$ if there exists $y\\in\n  \\mathcal{L}(x_1,...,x_d)$ and $\\tilde{y} = \\mbox{ReLU}(y) =\n  \\max(y,0).$\n\\item $\\tilde {\\mathcal L} =\\sigma\\circ\\mathcal L$\n\\end{itemize}\n\\begin{definition}\nGiven a function $f(x)$, if there exist variables $\\{y_{1:L,1:M}\\}$ such that \n\\begin{equation}\ny_{1,m}\\in \\tilde{\\mathcal L}(x),\\quad y_{l+1,m} \\in \\tilde{\\mathcal L}(x,y_{l,1:M}),\\quad f\\in\\mathcal{L}(x,y_{1:L,1:M}),\n\\end{equation}\\label{def:netclass}\nwhere $m=1,...,M,l=1,...,L-1$, then $f$ is said to be in the neural nets class $\\mathcal{F}_{L,M}(\\mathbb{R}^d)$, and $\\{y_{1:L,1:M}\\}$ is called a set of hidden variables of $f$. \n\\end{definition}\n\\newpage\n%\\begin{properties}\n\\begin{proposition}\nA function $f\\in \\mathcal F_{L,M}(\\mathbb R^d)$ can be represented by a ReLU network with depth $L+1$ and width $M+d+1$.\n\\end{proposition}\n%\\end{properties}\n\\begin{proof}\nLet $\\{y_{1:L}\\}$ be the hidden variables of $f$ that satisfies (\\ref{def:netclass}), where\n$$\nf = \\alpha_0 + \\sum_{i=1}^d \\alpha_ix_i+\\sum_{l=1}^L\\sum_{m=1}^M \\beta_{l,m}y_{l,m}.\n$$\n$$\nh\\leftarrow (y,ex^T), e=(1,\\ldots 1)^T.\n$$\nConsider the following variables $\\{h_{1:L,1:M}\\}$:\n$$\nh_{l,1:M} = y_{l,1:M}, \\quad h_{l,M+1:M+d} = x_{1:d}\n$$\nfor $l = 1,...,L,$ and\n$$\nh_{1,M+d+1} = \\alpha_0 + \\sum_{i=1}^d\\alpha_ix_i,\\quad h_{l+1,M+d+1} =h_{l,M+d+1} + \\sum_{m=1}^M\\beta_{l,m}h_{l,m}\n$$\nfor $l=1,...,L-1$. One can see that $h_{1,m}\\in \\tilde{\\mathcal L}(x),h_{l+1,m}\\in \\tilde{L}(h_{l,1:M+d+1}),m = 1,...,M+d+1,l = 1,...,L-1,$ and $f\\in \\mathcal{L}(h_{L,1:M+d+1}),$ which is a representation of a standard neural net.\n\\end{proof}\n\n%\\begin{properties}\n\\begin{proposition}\\label{prop:net class}\n(Addition and composition of neural net class $\\mathcal F_{L,M}$)\n\\begin{itemize}\n\\item[1]\n$$\n\\mathcal F_{L_1,M} + \\mathcal{F}_{L_2,M} \\subseteq \\mathcal{F}_{L_1+L_2,M},\n$$\ni.e. if $f_1\\in \\mathcal F_{L_1,M}(\\mathbb{R}^d)$ and $f_2\\in \\mathcal{F}_{L_2,M}(\\mathbb{R}^d),$ then $f_1+f_2\\in\\mathcal{F}_{L_1+L_2,M}(\\mathbb{R}^d)$.\n\\item[2]\n$$\n\\mathcal F_{L_2,M}\\circ \\mathcal F_{L_1,M+1} \\subseteq \\mathcal F_{L_1+L_2,M+1}.\n$$ \ni.e. if $f_1(x)\\in \\mathcal{F}_{L_1,M+1}(\\mathbb{R}^d)$ and $f_2(x_0,x)\\in \\mathcal{F}_{L_2,M}(\\mathbb{R}^{d+1}),$ then \n$$\nf_2(f_1(x),x)\\in \\mathcal{F}_{L_1+L_2,M+1}(\\mathbb{R}^d).\n$$\n\\end{itemize}\n\\end{proposition}\n%\\end{properties}\n\n\\newpage\n\\begin{proof}\nFor the addition property, denote the hidden variables of $f_1$ and $f_2$ as $\\{y_{1:L_1,1:M}^{(1)}\\}$ and $\\{y_{1:L_2,1:M}^{(2)}\\}$. 
\nSo we have \n\\[\nf_1 = \\alpha_0^1 + \\sum_{i=1}^d \\alpha^1_ix_i+\\sum_{l=1}^{L_1}\\sum_{m=1}^M \\beta^1_{l,m}y^{(1)}_{l,m}\n\\]\n\\[\nf_2 = \\alpha_0^2 + \\sum_{i=1}^d \\alpha^2_ix_i+\\sum_{l=1}^{L_2}\\sum_{m=1}^M \\beta^2_{l,m}y^{(2)}_{l,m}\n\\]\n\\[\nf_1+f_2 = \\alpha_0^1 + \\alpha_0^2 + \\sum_{i=1}^d (\\alpha_i^1+\\alpha_i^2)x_i + \\sum_{l=1}^{L_1}\\sum_{m=1}^M \\beta^1_{l,m}y^{(1)}_{l,m} +\\sum_{l=1}^{L_2}\\sum_{m=1}^M \\beta^2_{l,m}y^{(2)}_{l,m}\n\\]\n$$\ny^{(1)}_{1,1:M} = \\sigma\\circ \\mathcal{L}(x),y^{(1)}_{2,1:M} = \\sigma\\circ \\mathcal L(x,y^{(1)}_{1,1:M}),\\cdots,y^{(1)}_{L_1,1:M}=\\sigma\\circ \\mathcal L(x,y^{(1)}_{L_1-1,1:M}),\n$$\n$$\ny^{(2)}_{1,1:M} = \\sigma\\circ \\mathcal{L}(x) =\\sigma(\\mathcal L(x)+\\bm 0\\cdot y^{(1)}_{L_1,1:M}) = \\sigma\\circ \\mathcal{L}(x,y^{(1)}_{L_1,1:M})\n$$\n$$\ny^{(2)}_{2,1:M} = \\sigma\\circ \\mathcal L(x,y^{(2)}_{1,1:M}),\\cdots,y^{(2)}_{L_2,1:M}=\\sigma\\circ \\mathcal L(x,y^{(2)}_{L_2-1,1:M}),\n$$\nso $f_1 + f_2 \\in \\mathcal{F}_{L_1+L_2,M}$.\nLet\n$$\ny_{1:L_1,1:M} = y_{1:L_1,1:M}^{(1)},\\quad y_{L_1+1:L_1+L_2,1:M} = y_{1:L_2,1:M}^{(2)}. \n$$\nBy definition, $\\{y_{1:L_1+L_2,1:M}\\}$ is a set of hidden variables of $f_1+f_2$. Thus $f_1+f_2\\in \\mathcal F_{L_1+L_2,M}.$\n\nFor the composition property, denote the hidden variables of $f_1$ and $f_2$ by $\\{y_{1:L_1,1:M+1}^{(1)}\\}$ and $\\{y_{1:L_2,1:M}^{(2)}\\}$. Let\n$$\ny_{1:L_1,1:M+1} = y_{1:L_1,1:M+1}^{(1)},\\quad y_{L_1+1:L_1+L_2,1:M} = y_{1:L_2,1:M}^{(2)},\n$$\n$$\ny_{L_1+1,M+1} =\\cdots= y_{L_1+L_2,M+1} = f_1(x).\n$$\nOne can see that $\\{y_{1:L_1+L_2,1:M+1}\\}$ is a set of hidden variables of $f_2(f_1(\\bm x),\\bm x)$, thus the composition property holds.\n\\end{proof}\n\n\\begin{definition}\nGiven a continuous function $\\varphi(\\bm{x}),\\bm{x} \\in [-1,1]^d$ and a continuous function class $\\mathcal F([-1,1]^d),$ define the $L^\\infty$ distance\n$$\n\\mbox{dist} (\\varphi,\\mathcal F) = \\inf_{f\\in \\mathcal F} \\max_{\\bm x \\in [-1,1]^d} |\\varphi(\\bm{x}) - f(\\bm{x})|.\n$$\n\\end{definition}\n\n%\\begin{properties}\n\\begin{proposition}\\label{prop:dis}\n(Addition and composition properties for distance function)\n\\begin{itemize}\n\\item[1] Let $\\varphi_1$ and $\\varphi_2$ be continuous functions. Let $\\mathcal F_{1}$ and $\\mathcal F_2$ be two continuous function classes, then \n$$\n\\mbox{dist}(\\alpha_{1}\\varphi_1+\\alpha_{2}\\varphi_2,\\mathcal{F}_1+\\mathcal F_2)\\le |\\alpha_1|\\mbox{dist}(\\varphi_1,\\mathcal F_1) + |\\alpha_2|\\mbox{dist}(\\varphi_2,\\mathcal F_2),\n$$\nwhere $\\alpha_1$ and $\\alpha_2$ are two real numbers.\n\\item[2] Assume that $\\varphi_1(\\bm x) = \\varphi_1(x_1,...,x_d),\\varphi_2(y,\\bm x) = \\varphi_2(y,x_1,...,x_d)$ satisfy $\\varphi_1([-1,1]^d)\\subseteq[-1,1]$. Let $\\mathcal F_1([-1,1]^d),\\mathcal F_2([-1,1]^{d+1})$ be two continuous function classes, then\n$$\n\\mbox{dist}(\\varphi_2(\\varphi_1(\\bm x),\\bm x),\\mathcal F_2\\circ \\mathcal F_1)\\le L_{\\varphi_2}\\mbox{dist}(\\varphi_1,\\mathcal F_1) +\\mbox{dist}(\\varphi_2,\\mathcal F_2)\n$$\nwhere $L_{\\varphi_2}$ is the Lipschitz norm of $\\varphi_2$ with respect to $y$.\n\\end{itemize}\n\\end{proposition}\n%\\end{properties}\n\\begin{proof}\nThe addition property obviously holds. Now we prove the composition property. 
For any $f_1\\in\\mathcal F_1,f_2\\in \\mathcal F_2$, one has\n\\[\n\\begin{split}\n|\\varphi_2(\\varphi_1(\\bm x),\\bm x) - f_2(f_1(\\bm x),\\bm x)|&\\le |\\varphi_2(\\varphi_1(\\bm x),\\bm x) - \\varphi_2(f_1(\\bm x),\\bm x)|+|\\varphi_2(f_1(\\bm x),\\bm x)-f_2(f_1(\\bm x),\\bm x)|\\\\\n& \\le L_{\\varphi_2} ||\\varphi_1(\\bm x) -f_1(\\bm x)||_\\infty +||\\varphi_2(y,\\bm x) - f_2(y,\\bm x)||_\\infty\n\\end{split}\n\\]\nTake $f_1^* = \\argmin_f ||\\varphi_1(\\bm x) - f(\\bm x)||_\\infty$ and $f_2^* = \\argmin_f ||\\varphi_2(y,\\bm x) - f(y,\\bm x)||_\\infty$, then it is proved.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:xsquare}\nThe function $\\varphi(x) = x^2,x\\in [-1,1]$ can be approximated by deep neural nets with an exponential convergence rate:\n$$\n\\mbox{dist}(x^2,\\mathcal{F}_{L,2}([-1,1]))\\le 2^{-2L}.\n$$\n\\end{lemma}\n\\begin{proof}\nConsider the function\n$$\ng(y) = \\left\\{\\begin{split}\n\t&2y,&\\quad 0\\le y<1/2,\\\\\n\t&2(1-y),&\\quad 1/2\\le y\\le 1,\n\t\\end{split}\\right.\n$$\nthen \n\\begin{equation}\\label{func:g}\ng(y) = 2y -4\\mbox{ReLU}(y-1/2)\n\\end{equation}\nin $[0,1]$. Define the hidden variables $\\{y_{1:L,1:2}\\}$ as follows:\n\\[\n\\begin{split}\n y_{1,1} = \\mbox{ReLU}(x),&\\quad y_{1,2}=\\mbox{ReLU}(-x),\\\\\n y_{2,1} = \\mbox{ReLU}(y_{1,1}+y_{1,2}),&\\quad y_{2,2} = \\mbox{ReLU}(y_{1,1}+y_{1,2}-1/2),\\\\\n y_{2,1} = \\mbox{ReLU}(|x|),&\\quad y_{2,2} = \\mbox{ReLU}(|x|-1/2),\\\\\n y_{l+1,1} = \\mbox{ReLU}(2y_{l,1}-4y_{l,2}),&\\quad y_{l+1,2} = \\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\\\\n\\end{split}\n\\]\nfor $l = 2,3,...,L-1$. Using induction, one can see that $|x| = y_{1,1}+y_{1,2}$ and \n\\begin{equation}\\label{func:glinduction}\ng_l(|x|)=\\underbrace{g\\circ g\\circ \\cdots \\circ g}_{l}(|x|) = 2y_{l+1,1}-4y_{l+1,2},\\quad l=1,...,L-1,\n\\end{equation} \nfor $x\\in [-1,1]$. 
i.e.\n\\[\n\\begin{split}\ng_l(|x|) &= g\\left(g_{l-1}(|x|)\\right)\\\\\n         &= g(2y_{l,1}-4y_{l,2})\\qquad (\\mbox{Eq~(\\ref{func:g})})\\\\\n         &= 2(2y_{l,1}-4y_{l,2}) - 4\\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\qquad (g_{l-1}(|x|)\\ge 0~\\mbox{if}~x\\in[0,1])\\\\\n         &= 2\\mbox{ReLU}(2y_{l,1}-4y_{l,2})- 4\\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\\\\n         & = 2y_{l+1,1} - 4y_{l+1,2}\\\\\n\\end{split}\n\\]\nby induction, Eq (\\ref{func:glinduction}) holds.\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{figures/dl_approx_analytic_gl}\n\t\\caption{The figure of $g_l(x)$}\n\t\\label{fig:gl}\n\\end{figure}\n\nLet $f_m$ be the piecewise linear interpolation of $f$ with $2^m+1$ uniformly distributed breakpoints $\\frac{k}{2^m},k = 0,...,2^m:$\n$$\nf_{m}(\\frac{k}{2^m}) = (\\frac{k}{2^m})^2,\\quad k = 0,...,2^m.\n$$\nNote that refining the interpolation from $f_{m-1}$ to $f_m$ amounts to adjusting it by a function proportional to a sawtooth function:\n$$\nf_{m-1}(x) - f_m(x) = \\frac{g_m(x)}{2^{2m}}.\n$$\nHence \n$$\nf_{L-1}(|x|) = |x| - \\sum_{l=1}^{L-1}\\frac{g_l(|x|)}{2^{2l}} .\n$$\nthen $f_{L-1}\\in \\mathcal{F}_{L,2}$, and \n\\[\n\\begin{split}\n||x^2-f_{L-1}(x)||_{\\infty} & = \\max_{k} \\max_{x\\in [\\frac{k}{2^{L-1}},\\frac{k+1}{2^{L-1}}]} |x^2 - f_{L-1}(x)|\\\\\n  & = \\max_{k} |(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))^2 - f_{L-1}(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))|\\\\\n  & = \\max_k |(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))^2 - \\frac{1}{2}((\\frac{k}{2^{L-1}})^2+(\\frac{k+1}{2^{L-1}})^2))|\\\\\n  &=\\frac{1}{4^L}.\n\\end{split}\n\\]\n$|x^2-f_{L-1}(x)|\\le 2^{-2L}$ for $x\\in[-1,1].$\n\\end{proof}\n\n\\begin{lemma}\\label{lem:xy}\nFor multiplication function $\\varphi(x,y) = xy$, we have \n$$\n\\mbox{dist}(xy,\\mathcal{F}_{3L,2}([-1,1]^2))\\le 3\\cdot 2^{-2L}.\n$$\n\\begin{proof}\nNotice that\n$$\n\\varphi = xy = 2\\left(\\frac{x+y}{2} \\right)^2-\\frac{1}{2}x^2-\\frac{1}{2}y^2.\n$$\n\\[\n\\begin{split}\n\\mbox{dist}(xy,\\mathcal{F}_{3L,2}([-1,1]^d))&\\le2\\mbox{dist}(\\left(\\frac{x+y}{2} \\right)^2,\\mathcal{F}_{L,2}([-1,1]^2)) + \\frac{1}{2} \\mbox{dist}(x^2,\\mathcal{F}_{L,2}([-1,1]^2)) \\\\\n &+\\frac{1}{2} \\mbox{dist}(y^2,\\mathcal{F}_{L,2}([-1,1]^2))\\\\\n &\\le 3\\mbox{dist}(x^2,\\mathcal{F}_{L,2}([-1,1]^2))\\\\\n &\\le 3\\mbox{dist}(x^2,\\mathcal{F}_{L,2}([-1,1])) + \\mbox{dist}(I_x,\\mathcal{F}_{L,2}([-1,1]^2))\\\\\n &= 3\\cdot 2^{-2L}\n\\end{split}\n\\]\n\\end{proof}\n\\end{lemma}\n\n\\begin{lemma}\nFor a monomial $M_p(\\bm x)$ of $d$ variables with degree $p$, we have\n$$\n\\mbox{dist}(M_p,\\mathcal F_{3(p-1)L,3}(\\mathbb R^d))\\le 3(p-1)\\cdot 2^{-2L}.\n$$\n\\end{lemma}\n\n\\begin{proof}\nLet $M_p(\\bm x)=x_{i_1}x_{i_2}\\cdots x_{i_p},i_1,...,i_p\\in\\{1,...,d\\}.$ Using induction, assume that the lemma holds for the degree-$p$ monomial $M_p$, consider a degree-$(p+1)$ monomial $M_{p+1}(\\bm{x}) = M_p(\\bm{x})\\cdot x_{i_{p+1}}$. Let $\\varphi(y,x) = yx,$ then $M_{p+1}(\\bm x) = \\varphi(M_p(\\bm{x}),x_{i_{p+1}})$. 
We have\n\\[\n\\begin{split}\n\\mbox{dist}(M_{p+1},\\mathcal{F}_{3pL,3})&\\le \\mbox{dist}(\\varphi(M_p(\\bm x),x_{i_{p+1}}),\\mathcal F_{3L,2}\\circ \\mathcal F_{3(p-1)L,3})\\\\\n& \\le L_{\\varphi}\\mbox{dist}(M_p,\\mathcal{F}_{3(p-1)L,3})+\\mbox{dist}(\\varphi,\\mathcal F_{3L,2})\\le 3p\\cdot 2^{-2L}.\n\\end{split}\n\\]\nNote that the Lipschitz norm $L_{\\varphi}=1$ since $x_{i_{p+1}}\\in [-1,1].$\n\\end{proof}\n\n\\begin{lemma}\nFor a degree-$p$ polynomial $P_p(\\bm{x}) = \\sum_{|\\bm{k}|\\le p}a_{\\bm k}\\bm x^{\\bm k},\\bm x\\in [-1,1]^d,\\bm k = (k_1,...,k_d)\\in\\mathbb{N}^d,$ we have \n\\[\n\\mbox{dist}\\left(P_p,\\mathcal F_{\\binom{p+d}{d}(p-1)L,3}\\right)<3(p-1)\\cdot 2^{-2L}\\sum_{|\\bm k|\\le p}|a_{\\bm k}|\n\\]\n\\end{lemma}\\label{lem:poly}\n\n\\begin{proof}\nThis lemma can be proved by the properties \\ref{prop:net class}, \\ref{prop:dis} and lemma \\ref{lem:poly}.\nNote that the number of monomials of $d$ variables with degree less or equal to $p$ is $\\binom{p+d}{d}$.\n$$\nk_1 + k_2 + \\cdots + k_d \\le p,\n$$\nAdd a variable $k_{d+1}$, we have\n$$\nk_1 + k_2 + \\cdots + k_d + k_{d+1} = p.\n$$\nthe number of the non-negative solution is $\\binom{p+d}{d}$.\n\\end{proof}\n\n\\begin{theorem}\nLet $f$ be an analytic function over $(-1,1)^d$. Assume that the power series $f(\\bm x) = \\sum_{\\bm k\\in \\mathbb N^d}a_{\\bm k}\\bm x^{\\bm k}$ is absolutely convergent in $[-1,1]^d.$ Then for any $\\delta>0,$ there exists a function $\\hat f$ that can be represented by a deep ReLU network with depth $L$ and width $d+4$, such that\n\\[\n|f(\\bm x) - \\hat f(\\bm x)|<2\\sum_{\\bm k\\in \\mathbb N^d}|a_{\\bm k}|\\cdot \\exp\\left(-d\\delta\\left(e^{-1}L^{1/2d}-1\\right)\\right)\n\\]\nfor all $\\bm x\\in [-1+\\delta,1-\\delta]^d.$\n\\end{theorem}\n\\begin{proof}\nLet $\\epsilon = \\exp(-d\\delta(e^{-1}L^{1/2d}-1)),$ then $L = [e(\\frac{1}{d\\delta}\\log\\frac{1}{\\epsilon}+1)]^{2d}$. Without loss of generality, assume $\\sum_{\\bm k}|a_{\\bm k}|=1.$ We will show that there exists $\\hat f\\in \\mathcal F_{L,3}$ such that $||f-\\hat f||_{\\infty}<2\\epsilon.$\nDenote \n$$\nf(\\bm x) = P_p(\\bm x) + R(\\bm x): = \\sum_{|\\bm k|\\le p}a_{\\bm k}\\bm x^{\\bm k} +  \\sum_{|\\bm k|> p}a_{\\bm k}\\bm x^{\\bm k}.\n$$\nFor $\\bm x\\in [-1+\\delta,1-\\delta]^d$, we have $|R(\\bm x)|<(1-\\delta)^p$, thus truncation to $p = \\frac{1}{\\delta}\\log\\frac{1}{\\epsilon}$ with ensure $|R(\\bm x)|<2\\epsilon$. From lemma \\ref{lem:poly}, we have dist$(P_p,\\mathcal F_{L,3})<3(p-1)\\cdot 2^{-2L'}$, where \n\\[\n\\begin{split}\nL' &= L\\binom{p+d}{d}^{-1}(p-1)^{-1}\\\\\n   &\\ge L(\\frac{(p+d)!}{p!d!})^{-1}p^{-1}\\quad (d! \\thicksim \\sqrt{2\\pi d}(d/e)^d)\\\\\n   &\\ge L(\\frac{(p+d)^d}{(d/e)^d})^{-1}p^{-1}\\\\\n   &= L[e(\\frac{1}{d\\delta}\\log \\frac{1}{\\epsilon}+1)]^{-d}(\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon})^{-1}\\\\\n   & = [e(\\frac{1}{d\\delta}\\log \\frac{1}{\\epsilon}+1)]^{d}(\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon})^{-1}\\\\\n   & \\ge e^d (\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon}) \\\\\n   &\\gg\\log\\frac{1}{\\delta}+ \\log\\frac{1}{\\epsilon}\n\\end{split}\n\\]\nfor $d\\ge 2$ and $\\epsilon\\ll 1$, then dist$(P_p,\\mathcal F_{L,3})<3(p-1)\\cdot 2^{-2L'} = 3(p-1)(\\epsilon^2+\\delta^2)\\ll\\epsilon$. Thus there exists $\\hat f\\in \\mathcal F_{L,3}$ such that $||P_p - \\hat f||_{\\infty}<\\epsilon$, and $||f-\\hat f||_{\\infty}\\le ||f-P_p||_\\infty + ||P_p - \\hat f||_\\infty<3\\epsilon$.\n\\end{proof}\n\n\\subsection{Cosine Function}\n\nLet $f(\\bm x) = \\cos(|\\bm x|^2)$. 
Assume that the power series \n\\[\nf(\\bm x) = \\sum_{k=0}^{+\\infty} \\frac{(-1)^k|\\bm x|^{4k}}{(2k)!},\\quad \\bm x\\in [-1,1]^d.\n\\]\nDenote\n\\[\n\\begin{split}\nf(\\bm x) = P_p(\\bm x) + R(\\bm x): &= \\sum_{|\\bm k|\\le p}a_{\\bm k} \\bm x^{\\bm k} + \\sum_{|\\bm k|>p} a_{\\bm k} \\bm x^{\\bm k}\\\\\n&=\\sum_{k\\le p} \\frac{(-1)^k}{(2k)!}|\\bm x|^{4 k} + \\frac{(-1)^{p+1}}{(2(p+1))!}|\\bm t|^{4 (p+1)},\\quad \\bm t\\in[0,1]^d\\\\\n\\end{split}\n\\]\n\n\nLet $p(\\varepsilon,d)=p$ be large, s.t.\n\\[\n\\big|\\frac{(-1)^{p+1}}{(2(p+1))!}|\\bm t|^{4(p+1)}\\big| < \\varepsilon.\n\\]\n\\[\n\\begin{split}\n\\big|\\frac{(-1)^{p+1}}{(2(p+1))!}|\\bm t|^{4 (p+1)}\\big| &\\le \\frac{d^{2(p+1)}}{(2(p+1))!}\\\\\n& \\le C(\\frac{ed}{2(p+1)})^{2(p+1)}\\\\\n\\end{split}\n\\]\nLet $p = \\log \\frac{1}{\\varepsilon}-1$, then \n\\[\n(\\frac{ed}{2(p+1)})^{2(p+1)}\\le (\\frac{ed}{2\\log \\frac{1}{\\varepsilon}})^{2\\log\\frac{1}{\\varepsilon}}\\le \\varepsilon^{-2\\log \\frac{ed}{2\\log \\frac{1}{\\varepsilon}}}\\le \\varepsilon\n\\]\n\nWe want to show that \n\\[\n\\mbox{dist}(P_{p(\\varepsilon,d)},\\mathcal{F}_{L(\\varepsilon,d),3})<\\varepsilon.\n\\]\n\nBy Lemma~\\ref{lem:poly}, we have \n\\[\n\\begin{split}\n\\mbox{dist}(P_{p},\\mathcal{F}_{L,3})&<3(p-1)2^{-\\frac{2L}{\\binom{p+d}{d}(p-1)}}\\sum_{\\bm k\\le p}|a_{\\bm k}|\\\\\n&\\le3p2^{-\\frac{2Ld^d}{p(p+d)^de^d}}\\sum_{|\\bm k|\\le p}|a_{\\bm k}|.\n\\end{split}\n\\]\n\nLet $L = \\frac{1}{2} ((p+d)e/d)^{2d}$, we can see that\n\\[\n\\begin{split}\n\\mbox{dist}(P_{p},\\mathcal{F}_{L,3})\n&\\le3p2^{-\\frac{2Ld^d}{p(p+d)^de^d}}\\sum_{|\\bm k|\\le p}|a_{\\bm k}|\\\\\n&= 3p2^{-\\frac{(p+d)^de^d}{d^dp}}\\sum_{|\\bm k|\\le p}|a_{\\bm k}|\\\\\n&\\le 3(\\log \\frac{1}{\\varepsilon}-1)2^{-\\frac{e^d}{4}(d-1)d(\\log \\frac{1}{\\varepsilon}-1)} \\sum_{|\\bm k|\\le p}|a_{\\bm k}|\\\\\n&\\le C(d)\\varepsilon^{\\frac{e^d}{4}d(d-1)\\log 2}\\log\\frac{1}{\\varepsilon}\\le \\varepsilon\n\\end{split}\n\\]\n\nTake \n\\[\n\\varepsilon = \\exp\\left(-d\\left(e^{-1}(2L)^{1/2d}-1\\right)-1\\right)\n\\]\nthen, we have \n\\[\n\\mbox{dist}(f,\\mathcal{F}_{L,3})\\le\\mbox{dist}(P_{p},\\mathcal{F}_{L,3})+ R(\\bm x) \\le 2\\varepsilon \\le 2e^{-d\\left(e^{-1}(2L)^{1/2d}-1\\right)-1}\n\\]\n\n\n\n\\section{A modified ResNet structure and its properties}\nUsing the notation in Lin Li's notes, we have the next two properties for this modified ResNet structure in $\\mathbb{R}^d$ as\n\\begin{itemize}\n\t\\item Added with depth\n\t$$\n\t\\mathcal F_{L_1,M}(\\mathbb{R}^d) + \\mathcal{F}_{L_2,M}(\\mathbb{R}^d)  \\subseteq \\mathcal{F}_{L_1+L_2,M}(\\mathbb{R}^d) .\n\t$$\n\t\\item Added with width \n\t$$\n\t\\mathcal F_{L,M_1}(\\mathbb{R}^d)  + \\mathcal{F}_{L,M_2}(\\mathbb{R}^d)  \\subseteq \\mathcal{F}_{L,M_1 + M_2}(\\mathbb{R}^d) .\n\t$$\n\\end{itemize}\nThis second property works for all DNN structures, but the first structure only works for this special ResNet structure with any activation functions.\n\n\\section{h-method by partition of unit}\nFirst, for any positive integer $M$, let us consider the next partition of unit function of $[0, 1]^d$\n$$\n\\phi_{\\bm m}(x) = \\prod_{k=1}^d \\Phi\\left(3M(x_k - \\frac{m_k}{M})\\right),\n$$\nwith \n$$\n{\\bm m} = (m_1, m_2, \\cdots, m_d) \\in \\{0,1, \\cdots,M\\}^d = \\mathcal{M},\n$$\nand \n$$\n\\Phi(x) = \\begin{cases}\n1, \\quad  &|x| \\le 1, \\\\\n0, \\quad &|x| \\ge 2, \\\\\n2 - |x|, \\quad &1 < |x| <2.\n\\end{cases}\n$$\nThus, we have\n$$\n\\sum_{{\\bm m} \\in \\mathcal{M}} \\phi_{\\bm m}({\\bm x}) = 1, \\quad \\forall {\\bm x} \\in 
[0,1]^d.\n$$\n\n\\begin{itemize}\n\t\\item How to choose M?\n\\end{itemize}\n\\section{p-method by local Taylor expansion}\nTaylor expansion with $p-$th polynomials at most at ${\\bm x} = \\frac{{\\bm m}}{M}$:\n$$\nP_{{\\bm m}, p}({\\bm x}) = \\sum_{|{\\bm n}| < p}\\frac{\\partial^{\\bm n}f}{ \\bm n !}\\left(\\frac{{\\bm m}}{M}\\right) ({\\bm x} - \\frac{{\\bm m}}{M})^{\\bm n}.\n$$\nHere we just consider about the local Taylor expansion, so the global approximation function is\n$$\nf_p({\\bm x}) = \\sum_{{\\bm m} \\in \\mathcal{M}} \\phi_{\\bm m} P_{{\\bm m}, p}({\\bm x}), \\quad \\forall {\\bm x} \\in [0,1]^d.\n$$\n\n\\begin{itemize}\n\t\\item How to choose $p$? Balance with the residual term?\n\\end{itemize}\n\n\\section{Error estimate for $|f - f_p|_{0,\\infty}$}\n\\begin{align*}\\label{fperoor}\n|f - f_p|_{0,\\infty} &= |\\sum_{\\bm m} \\phi_{\\bm m}(f - P_{{\\bm m}, p})|_{0, \\infty},  \\\\\n&\\le \\max_{{\\bm m} \\in \\mathcal{M}}  |\\sum_{\\bm m} \\phi_{\\bm m}(f - P_{{\\bm m}, p})|_{0, \\infty,  |{\\bm x} - \\frac{\\bm m}{M}| \\le {\\frac{1}{M}}}, \\\\\n&\\le \\max_{{\\bm m} \\in \\mathcal{M}} \\sum_{ {\\bm m} \\in \\mathcal{N}({\\bm m})} |f - P_{{\\bm m}, p}|_{0,\\infty, |{\\bm x} - \\frac{\\bm m}{M}| \\le {\\frac{1}{M}}}, \\\\\n&\\le \\max_{{\\bm m} \\in \\mathcal{M}}  2^d \\max_{{\\bm m} \\in \\mathcal{N}({\\bm m})} |f - P_{{\\bm m}, p}|_{0,\\infty, |{\\bm x} - \\frac{\\bm m}{M}| \\le {\\frac{1}{M}}}, \\\\\n&\\le \\left( 2^{d} \\left( \\frac{1}{M} \\right)^p \\frac{d^p}{ p!} \\right)|f|_{p, \\infty}.\n\\end{align*}\n\n\\section{Approximate $f_p$ by ReLU DNN with error estimate}\n\\subsection{Approximate $\\phi_m P_{m, p}$}\n\\begin{itemize}\n\t\\item Approximate $\\phi_{\\bm m}$ \n\t\n\tFirst we know that \n\t$$\n\t\\Phi(x) = ReLU(x+2) - ReLU(x+1) - ReLU(x-1) + ReLU(x-2),\n\t$$ \n\tso \n\t$$\n\t\\inf_{v \\in DNN_{J}} |\\phi_{\\bm m} - v| \\le 2^{-\\frac{2J}{3d}}.\n\t$$\n\t\\item Approximate $P_{{\\bm m}, p}$ (Original version in Yarotsky2017 )\n\t\n\tAssume that \n\t$$\n\tP_{{\\bm m}, p} = \\sum_{|{\\bm n}| \\le p}a_{\\bm m, \\bm n} {(\\bm x - \\frac{\\bm m}{M})}^{\\bm n},\n\t$$\n\tthen \n\t$$\n\t\\inf_{v \\in DNN^{T}_{J}} |P_{{\\bm m}, p} - v| \\le (\\max_{ \\bm n} a_{\\bm m,\\bm n})2^{-\\frac{2J}{3(p-1)}},\n\t$$\n\twith \n\t$$\n\tT = \\tbinom{p+d}{d}(d+4).\n\t$$\n\t\n\t\\item Approximate $\\phi_{\\bm m} P_{\\bm m, p}$\n\t$$\n\t\\inf_{v \\in DNN^{T}_{J}} |\\phi_{\\bm m} P_{\\bm m, p} - v| \\le (\\max_{ \\bm n} a_{\\bm m,\\bm n}) \\max\\{2^{-\\frac{2J}{9(p-1)}},  2^{-\\frac{2J}{9d}}\\},\n\t$$\n\\end{itemize}\n\n\\begin{remark}\nIn fact, in Yarotsky2017 they estimate as:\n$$\n\\inf_{v \\in DNN^{T}_{J}} |\\phi_{\\bm m} P_{\\bm m, p} - v| \\le (\\max_{ \\bm n} a_{\\bm m,\\bm n}) 2^{-\\frac{2J}{9(p + d -1)}},\n$$\t\nand \n$$\n\\max_{ \\bm n} a_{\\bm m,\\bm n} \\le 1,\n$$\nfor \n$$\n\\|f\\|_{p+1,\\infty} \\le 1.\n$$\n\\end{remark}\n\n\n\\subsection{Approximate $f_p$}\nIn Yarotsky2017, \n$$\n\\inf_{v \\in DNN^{\\hat T}_{J}} |f_p - v| \\le (\\max_{ \\bm n} a_{\\bm m,\\bm n}) 2^d d^p2^{-\\frac{2J}{9(p + d -1)}},\n$$\nwith \n$$\n\\hat T = d^p (M+1)^d.\n$$\n\n\\begin{remark}\n\tHere is a difference in Yarotsky2017 and Prof. E's paper: \n\t\\begin{itemize}\n\t\t\\item In Prof. 
E's paper\n\t\t$$\n\t\t\\sum_{k} a_k e_k \\le \\max_k e_k (\\sum_k a_k).\n\t\t$$\n\t\t\\item In Yarotsky2017\n\t\t$$\n\t\t\\sum_{k} a_k e_k \\le \\max_k a_k (\\sum_k e_k).\n\t\t$$\n\t\\end{itemize}\n\\end{remark}\nMore exactly, we can get\n\\begin{align}\n\\inf_{v \\in DNN^{T}_{J}} |f_p - v| &\\le  2^d (p+d) \\sum_{k=0}^{p-1}\\sum_{|\\bm n|=k} a_{\\bm m, \\bm n}2^{-\\frac{2J}{3(p + d )}}, \\\\\n&\\le  2^d (p+d) 2^{-\\frac{2J}{3(p + d )}} \\left(\\sum_{k=0}^{p-1}\\sum_{|\\bm n|=k} \\frac{C(\\bm n)}{k!} |f|_{k,\\infty}\\right), \\\\\n&= 2^d (p+d) 2^{-\\frac{2J}{3(p + d )}} \\left(\\sum_{k=0}^{p-1}\\sum_{|\\bm n|=k} \\frac{d^k}{k!} \\right)\\|f\\|_{p-1,\\infty}, \\\\\n&\\le  2^d (p+d) e^d 2^{-\\frac{2J}{3(p + d )}}\\|f\\|_{p-1,\\infty}. \n\\end{align}\n\n\n\\section{Balance $m, p$ with error and depth(d.o.f)}\nIn the end, we have\n\\begin{align}\n\\|f- \\tilde f_p\\|_{0,\\infty} &\\le \\|f - f_p\\|_{0,\\infty} + \\|f_p - \\tilde f_p\\|_{0,\\infty}, \\\\\n&\\le \\left[ \\frac{2^dd^p}{p!}(\\frac{1}{M})^p + 2^d e^d (d+p) 2^{-\\frac{2J}{3(d+p)}}\\right] \\|f\\|_{p, \\infty}\n\\end{align}\n\nQuestion: How to balance $m,p$? \n\nThe d.o.f of above model is about \n$$\nN = (M+1)^d \\tbinom{p+d}{d} d^2 J,\n$$\nso the result for the approximation w.r.t the d.o.f is\n\\begin{align}\n\\|f- \\tilde f_p\\|_{0,\\infty} &\\le \\|f - f_p\\|_{0,\\infty} + \\|f_p - \\tilde f_p\\|_{0,\\infty}, \\\\\n&\\le \\left[ \\frac{2^dd^p}{p!}(\\frac{1}{M})^p + 2^d e^d (d+p) 2^{-\\frac{2N}{3(d+p)(M+1)^d \\tbinom{p+d}{d}d^2}}\\right] \\|f\\|_{p, \\infty}.\n\\end{align}\n\nThere are some problems to balance this two terms.\n\n\\subsection{Exponential convergence in this case}\nTake $M = 2$, and \n$$\np = \\log_2\\frac{1}{\\epsilon}\n$$\nand $\\epsilon << 1$ such that \n$$\n\\frac{2^dd^p}{p!} \\le 1,\n$$\nthen we can take \n$$\nN = \\left( 3e(1 + \\frac{p}{d})\\right)^{2d},\n$$\nthen it is easy to see\n$$\n\\|f- \\tilde f_p\\|_{0,\\infty} \\le 2\\epsilon \\lesssim \\exp({ -d((3e)^{-1}N^{\\frac{1}{2d}} - 1)}).\n$$\n\\begin{proof}\\label{proof:1}\n\\begin{align}\n-\\frac{2N}{3(d+p)(M+1)^d \\tbinom{p+d}{d}d^2} &\\le\\frac{-2(3e(1+\\frac{p}{d}))^d}{3(p+d)d^2}, \\\\\n&\\le -\\frac{2e}{3}\\times \\frac{3^d e^{d-1}(\\tbinom{d}{2}\\frac{p^2}{d^2} + p)}{pd^3}, \\quad (d \\ge 2) \\\\\n&= -\\left( \\frac{3^de^{d-1}(d-1)}{2d^2}p + \\frac{3^de^{d-1}}{d^3} \\right), \\\\\n&\\le - \\left((\\log_2{e}) p + (1+\\log_2e)d\\right). \\quad (d \\ge 2)\n\\end{align}\n\\end{proof}\n\n\\begin{remark}\n\tIn Prof. 
E's result, it is:\n\t\\begin{align}\n\t\\|f- \\tilde f_p\\|_{0,\\infty, I_\\delta^d} &\\le \\|f - f_p\\|_{0,\\infty, I_\\delta^d} + \\|f_p - \\tilde f_p\\|_{0,\\infty, I_\\delta^d}, \\\\\n\t&\\le \\min_{p} \\left[(1-\\delta)^p + 3p2^{-\\frac{2J}{3pd^2\\tbinom{p+d}{d}}}\\right] (\\sum_{\\bm n}a_{\\bm n}).\n\t\\end{align}\n\tseems easier to balance.\n\t\\end{remark}\n\n\\section{Just $p$ version: expansion at $ (\\frac{1}{2}, \\cdots, \\frac{1}{2})$}\n\nFor $\\Omega = [0,1]^d$, and $f_p$ is like\n$$\nf_p = \\sum_{k=0}^{p-1} \\frac{1}{k!} \\left(\\sum_{i=1}^d (x_i - \\frac{1}{2})\\frac{\\partial}{\\partial x^i}\\right)^k f |_{x = \\frac{\\bm 1}{\\bm 2}},\n$$\nthen we have:\n\\begin{align}\n\\|f - f_p\\|_{0, \\infty} &= \\sup_{\\bm x \\in \\Omega} \\left|  \\frac{1}{p!} \\left(\\sum_{i=1}^d (x_i - \\frac{1}{2})\\frac{\\partial}{\\partial x^i}\\right)^p f |_{\\xi = \\frac{\\bm 1}{\\bm 2} + \\theta_{\\bm x}(\\bm x - \\frac{\\bm 1}{\\bm 2})} \\right|, \\\\\n&\\le \\frac{d^p}{p!} (\\frac{1}{2})^p |f|_{p, \\infty}.\n\\end{align}\n\nThen using $\\tilde f_p$ to approximate $f_p$ with ``spectral\" accuracy and finally get the ``spectral\" accuracy fo analytical functions with $W^{p, \\infty}$ norm with $p = C(\\epsilon, d)$. This seems can remove the $\\delta$ and $\\sum_{\\bm k} a_{\\bm k}$ in Prof. E's results.\n\n\\subsection{$\\|f\\|_{p, \\infty, I^d} \\le C$ for any $p \\in \\mathbb{N}$}\nHere $f_p$ can also be write as:\n$$\nf_p = \\sum_{k=0}^{p-1} \\sum_{|\\bm n| = k} \\frac{\\partial^{\\bm n}}{\\bm n !} f|_{x = \\frac{\\bm 1}{\\bm 2}} (\\bm x - \\frac{\\bm 1}{\\bm 2})^{\\bm n},\n$$\nwith approximation as:\n$$\n\\tilde f_p = \\sum_{k=0}^{p-1} \\sum_{|\\bm n| = k} \\frac{\\partial^{\\bm n}}{\\bm n !} f|_{x = \\frac{\\bm 1}{\\bm 2}}  f^J_{\\bm n}.\n$$\n\nWe have the next error estimate\n$$\n| f^J_{\\bm n} - (\\bm x - \\frac{\\bm 1}{\\bm 2})^{\\bm n} |_{0,\\infty, I^d} \\le 2^{\\frac{-2J}{3(|\\bm n|-1)}}\n$$\n\nSo we have the next error estimate\n\\begin{align}\n\\|f_p - \\tilde f_p\\| &= \\| f^J_{\\bm n} - (\\bm x - \\frac{\\bm 1}{\\bm 2})^{\\bm n} \\|_{0,\\infty, I^d}, \\\\\n&= \\|  \\sum_{k=0}^{p-1} \\sum_{|\\bm n| = k} \\frac{\\partial^{\\bm n}}{\\bm n !} f|_{x = \\frac{\\bm 1}{\\bm 2}} (f^J_{\\bm n} - (\\bm x - \\frac{\\bm 1}{\\bm 2})^{\\bm n})\\|_{0, \\infty,I^d} \\\\\n&\\le \\sum_{k=0}^{p-1} \\sum_{|\\bm n| = k} | \\frac{\\partial^{\\bm n}}{\\bm n !} |f|_{x = \\frac{\\bm 1}{\\bm 2}}| 3k2^{\\frac{-2J}{3(|\\bm n|-1)}}\\\\\n&\\le \\sum_{k=0}^{p-1} \\frac{d^k}{k!}|f|_{k,\\infty,I^d} 3k2^{\\frac{-2J}{3(|\\bm n|-1)}} \\\\\n&\\le   3pe^d2^{\\frac{-2J}{3p}} \\|f\\|_{p-1, \\infty, I^d}.\n\\end{align}\nBecause $\\tilde f_p$ has $\\tbinom{p+d}{d}$ terms, so if only $J$-th layer, we will have\n$$\n\\| f_p - \\tilde f^J_p\\| \\le  3pe^d  2^{\\frac{-2J}{3p\\tbinom{p+d}{d}}}\\|f\\|_{p-1, \\infty, I^d},\n$$ \nat last we have:\n\\begin{align}\n\\|f- \\tilde f^J_p\\|_{0,\\infty} &\\le \\|f - f^J_p\\|_{0,\\infty} + \\|f_p - \\tilde f^J_p\\|_{0,\\infty}, \\\\\n&\\le \\left[  \\frac{d^p}{p!} (\\frac{1}{2})^p +3pe^d  2^{\\frac{-2J}{3p\\tbinom{p+d}{d}}}\\right] \\|f\\|_{p, \\infty}.\n\\end{align}\n\nA simple case, consider $\\epsilon << 1$, and $\\frac{d^p}{p!} \\le 1$, we can take\n$$\np = \\log_2 \\frac{1}{\\epsilon},\n$$\nand \n$$\nJ = [e(1 + \\frac{p}{d})]^{2d},\n$$\nit is easy to see that\n$$\n\\frac{d^p}{p!} (\\frac{1}{2})^p \\le \\epsilon,\n$$\nand \n$$\ne^d  2^{\\frac{-2J}{3p\\tbinom{p+d}{d}}} \\le \\epsilon,\n$$\nsuch that\n$$\n\\|f- \\tilde f^J_p\\|_{0,\\infty}  \\le 2\\epsilon \\lesssim 
\\exp(-d(e^{-1}J^{\\frac{1}{2d}} - 1))\\|f\\|_{p, \\infty}\n$$\n\\begin{proof}\\label{proof:2}\nThe similar process in \\ref{proof:1}.\n\\begin{align}\n-\\frac{2J}{3p\\tbinom{p+d}{d}} &\\le \\frac{-2(e(1+\\frac{p}{d}))^d}{p}, \\\\\n&\\le -2 \\times \\frac{ e^{d}(\\tbinom{d}{2}\\frac{p^2}{d^2} + p)}{3p}, \\quad (d \\ge 2) \\\\\n&= -\\frac{2}{3}\\left( \\frac{e^{d}(d-1)}{2d}p + e^d \\right), \\\\\n&\\le - 2(\\log_2{e}) p - (\\log_2e)d. \\quad (d \\ge 2)\n\\end{align}\n\\end{proof}\n\n\n\\subsection{Derivative scaling}\n\\subsubsection{Power scaling}\nIf there exist $M \\ge 1$ and $C$ such that\n$$\n|f|^P_{p,\\infty,I^d} := \\frac{|f|_{p,\\infty,I^d}}{M^p} \\le C, \\quad \\forall p \\ge 1.\n$$\nWe will have the next error estimate:\n\t\\begin{align}\n\\|f- \\tilde f^J_p\\|_{0,\\infty} &\\le \\|f - f^J_p\\|_{0,\\infty} + \\|f_p - \\tilde f^J_p\\|_{0,\\infty}, \\\\\n&\\le \\left[  \\frac{(Md)^p}{p!} (\\frac{1}{2})^p +3pe^{Md}  2^{\\frac{-2J}{3p\\tbinom{p+d}{d}}}\\right] C.\n\\end{align}\nThen we can still have the next result:\n\\begin{equation}\n\\|f- \\tilde f^J_p\\|_{0,\\infty, I^d}  \\le 2\\epsilon \\lesssim \\exp(-d(e^{-1}J^{\\frac{1}{2d}} - 1))C\n\\end{equation}\nfor $\\epsilon \\ll 1$ and $J \\gg1$.\n\\begin{proof}\nConsidering that $\\epsilon \\ll 1$, and $p$ is big enough such that \n$$\n \\frac{(Md)^p}{p!} \\le 1,\n$$\nthen we will have\n$$\np \\ge Md.\n$$\nThen take \n$$\np = \\max\\{\\log_2\\frac{1}{\\epsilon}, Md\\},\n$$\nand\n$$\nJ = [e(1 + \\frac{p}{d})]^{2d}.\n$$\nThen\n\\begin{align}\n-\\frac{2J}{3p\\tbinom{p+d}{d}} &\\le \\frac{-2(e(1+\\frac{p}{d}))^d}{p}, \\\\\n&\\le -2 \\times \\frac{ e^{d}(\\tbinom{d}{3}\\frac{p^3}{d^3}  + \\tbinom{d}{2}\\frac{p^2}{d^2} + p)}{3p}, \\quad (d \\ge 3) \\\\\n&= -\\frac{2}{3}\\left(  \\frac{e^{d}(d-1)(d-2)}{2d^2}p^2 + \\frac{e^{d}(d-1)}{2d}p + e^d \\right), \\quad (p \\ge Md)\\\\\n&\\le - 2(\\log_2{e}) p - (\\log_2e)Md. \\quad (d \\ge 2)\n\\end{align}\n\\end{proof}\n\\subsubsection{Factorial scaling}\nDefine\n$$\n|f|^F_{p,\\infty,I^d} := \\frac{d^p|f|_{p,\\infty,I^d}}{p!} \\le C, \\quad \\forall p \\ge 1.\n$$\nWe will have the next error estimate:\n\\begin{align}\n\\|f- \\tilde f^J_p\\|_{0,\\infty} &\\le \\|f - f^J_p\\|_{0,\\infty} + \\|f_p - \\tilde f^J_p\\|_{0,\\infty}, \\\\\n&\\le \\left[  (\\frac{1}{2})^p +3p^2 2^{\\frac{-2J}{3p\\tbinom{p+d}{d}}}\\right] C.\n\\end{align}\nThen we can still have the next result:\n\\begin{equation}\n\\|f- \\tilde f^J_p\\|_{0,\\infty, I^d}  \\le 2\\epsilon \\lesssim \\exp(-d(e^{-1}J^{\\frac{1}{2d}} - 1))C\n\\end{equation}\nfor $\\epsilon \\ll 1$ and $J \\gg1$.\n\\begin{proof}\n\tThis is same but simpler than the above proof.\n\t\\end{proof}\n\\newpage\n\\subsection{A more compressive structure: 1d case}\nHere we consider a more compressive structure for $1$d case with just $\\log_2(p)$ layers to recover all $p$-th polynomials. 
\nWe use $f_{sq,J}$ to denote a $J$-layer network approximating $x^2$ with width $3$ (without the identity neuron for $x$).\n\nConsider the $k$-fold composition of $f_{sq,J}$,\n$$\nf^k_{sq,J}(x) = f^{k-1}_{sq, J}(f_{sq,J}(x)),\n$$\nwith\n$$\nf^1_{sq,J} = f_{sq,J}.\n$$\n\n\\begin{properties}\n\tWe have the following approximation property\n\t$$\n\t\\|f^k_{sq,J}(x) - x^{2^k} \\|_{0,\\infty, I} \\le k2^{-2J}\n\t$$\n\\end{properties}\n\\begin{proof}We prove it by induction:\n\t\\begin{align}\n\t\\|f^k_{sq,J}(x) - x^{2^k} \\|_{0,\\infty, I} &= \\|f_{sq,J}(f^{k-1}_{sq,J}) - f_{sq,J}(x^{2^{k-1}}) +f_{sq,J}(x^{2^{k-1}}) - x^{2^k} \\|_{0,\\infty,I}, \\\\\n\t&\\le L_{f_{sq,J}} \\|f^{k-1}_{sq,J}(x) - x^{2^{k-1}}\\|_{0, \\infty, I} + \\|f_{sq,J}(x) - x^2\\|_{0,\\infty,I}, \\\\\n\t&\\le (k-1)2^{-2J} + 2^{-2J}.\n\t\\end{align}\n\t\n\t\n\t\\end{proof}\n", "meta": {"hexsha": "818e4a42710ac8e5630a51135cf969e12e17884a", "size": 36421, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/h-pDNN.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/h-pDNN.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/h-pDNN.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.7175572519, "max_line_length": 328, "alphanum_fraction": 0.5670080448, "num_tokens": 17764, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127529517043, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5998993859782286}}
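A quick numerical cross-check of the telescoping identity $I_J(x^2)=x-\sum_{k=1}^J 4^{-k}\varphi_k(x)$ from the first section of the notes above may be useful. This is our own NumPy sketch (the helper name \texttt{g} is ours), with $\varphi_k$ realized as the $k$-fold composition of the hat function; the identity is exact at the nodes of $T_J$, so the printed maximum deviation should be at floating-point level.
\begin{verbatim}
import numpy as np

def g(y):
    # hat function: 2y on [0,1/2], 2(1-y) on [1/2,1]
    return np.where(y < 0.5, 2.0 * y, 2.0 * (1.0 - y))

J = 6
x = np.linspace(0.0, 1.0, 2**J + 1)   # the nodes of T_J
gk, approx = x.copy(), x.copy()
for k in range(1, J + 1):
    gk = g(gk)                        # gk is now the k-fold composition g_k(x)
    approx -= 4.0**(-k) * gk
print(np.max(np.abs(approx - x**2)))  # ~1e-16: exact at the T_J nodes
\end{verbatim}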
{"text": "\\section{Deep Neural Net - RegularNet}\nThe regular neural network is composed exclusively of regular and strided convolutional layers. While this architecture works well for relatively shallow networks, it becomes increasingly more difficult to train as the network depth increases.\n\nThis section will elaborate on the implementaiton of a RegularNet used for this project.\n\nThe RegularNet implemented in the project consists of 34 layers, with an image as input layer. The first five layers in the neural network are normal convolutional layers with a zero padding of one, striding unit distance of one and feature mapping of 3x3. The convolutional layers are followed by a convolutional layer with a striding unit distance of two. The sixth layer functions as a pooling layer, since the output from the layer is an image of half the input size. The RegularNet consists of 25 convolutional layers, 5 pooling layers, 1 input layer, 1 flatten layer and a softmax classifier layer is appended to the end of the RegularNet. Adam is used as the activation function for the RegularNet, and normalization is used for regularization function. The creation of the RegularNet layers can be seen in listing \\ref{lst:regularloop}.\n\n\\begin{lstlisting}[language=Python, label=lst:regularloop, caption=For loop that creates the layers in the RegularNet]\ninput_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32, \n\t\tname='input')\nlabel_layer = tf.placeholder(shape=[None],dtype=tf.int32)\nlabel_oh = slim.layers.one_hot_encoding(label_layer,10)\n\nlayer1 = slim.conv2d(input_layer,64,[3,3],\n\t\tnormalizer_fn=slim.batch_norm, scope='conv_'+str(0))\nfor i in range(5):\n\tfor j in range(units_between_stride):\n\t\tlayer1 = slim.conv2d(layer1,64,[3,3],normalizer_fn=slim.batch_norm, \n\t\t\t\tscope='conv_'+str((j+1) + (i*units_between_stride)))\n\tlayer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2], \n\t\t\tnormalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))\n\ntop = slim.conv2d(layer1,10,[3,3],\n\t\tnormalizer_fn=slim.batch_norm, activation_fn=None,scope='conv_top')\n\noutput = slim.layers.softmax(slim.layers.flatten(top))\n\\end{lstlisting}\n\nAs seen in listing \\ref{lst:regularloop} the function conv2d is called to create a convolution neuron. This function creates the convolution block with the internal layers and returns the output from the block. One neuron consists of 1 convolutional layer and 1 batch normalization block. Between the convolutional layer and the batch layer a ReLU activation function is added. The convolutional layer consists of 64 feature maps which each has 9 weights, so the convolutional layer outputs 64 new images for the ReLU function. 
The code for training and updating the model can be seen in listing \\ref{lst:ConvLoss}.\n\n\\begin{lstlisting}[language=Python, label=lst:ConvLoss, caption=Loss function and Adam optimizer update]\n# The small epsilon inside the log guards against log(0).\nloss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output + 1e-10)\n, axis=[1]))\ntrainer = tf.train.AdamOptimizer(learning_rate=0.001)\nupdate = trainer.minimize(loss)\n\\end{lstlisting}", "meta": {"hexsha": "f98204fc91f26cb81a00626e4dae58c1e903c1bf", "size": 3016, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/RegularNet_imp.tex", "max_stars_repo_name": "Rotvig/cs231n", "max_stars_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-11T12:30:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-11T12:30:50.000Z", "max_issues_repo_path": "Report/RegularNet_imp.tex", "max_issues_repo_name": "Rotvig/cs231n", "max_issues_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/RegularNet_imp.tex", "max_forks_repo_name": "Rotvig/cs231n", "max_forks_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.7777777778, "max_line_length": 844, "alphanum_fraction": 0.7921087533, "num_tokens": 734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127529517043, "lm_q2_score": 0.7025300698514777, "lm_q1q2_score": 0.5998993859782284}}
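As a hedged variant of listing \ref{lst:ConvLoss} (assuming the same TensorFlow 1.x / slim setup and reusing \texttt{top} and \texttt{label\_oh} from the listings above), the loss can also be computed from the raw logits with TensorFlow's built-in numerically stable cross-entropy, which avoids a hand-written \texttt{log(softmax(.))} entirely:
\begin{lstlisting}[language=Python]
# Sketch only: reuses top and label_oh from the listings above (TF 1.x).
logits = slim.layers.flatten(top)     # raw class scores, no softmax here
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=label_oh,
                                               logits=logits))
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
update = trainer.minimize(loss)
\end{lstlisting}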
{"text": "\\subsection{Example: Slider-crank Mechanism}\n\\begin{frame}{Normal Approach}\n\t\\begin{block}{Example 1: Slider-crank Mechanism}\n\t\t\\begin{table}\n\t\t\t\\begin{minipage}{0.5\\linewidth}\n\t\t\t\t\\begin{tabular}{l|l}\n\t\t\t\t\t& $l_{AB}=l_1=0.5m$ \\\\\n\t\t\t\t\t& $l_{BC}=l_2=1m$ \\\\\n\t\t\t\t\tGiven& $\\theta_1=60^\\circ$ \\\\\n\t\t\t\t\t& $\\vb{\\omega}{1} = 1\\kh $ $(rad/s)$\\\\\n\t\t\t\t\t& $\\vb{\\alpha}{1} = -1\\kh $ $(rad/s^2)$ \\\\\\hline\n\t\t\t\t\tFind & $\\vb{v}{B}$, $\\vb{v}{C}$, $\\vb{a}{B}$, $\\vb{a}{C}$,\\\\\n\t\t\t\t\t &\\boldmath{$\\vb{\\omega}{2}$}, \\boldmath{$\\vb{\\alpha}{2}$}\\\\\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{0.5\\linewidth}\n\t\t\t\t\\includegraphics[width=50mm]{images/R-RRT.png}\n\t\t\t\\end{minipage}\n\t\t\\end{table}\n\t\\end{block}\n\\emph{Solution}\\vskip2.5mm\nUsing position analysis / MATLAB,\\vskip1.25mm\n$\\Rightarrow \\vb{r}{B} = 0.25\\ih + 0.433\\jh\\text{ $(m)$}, \\vb{r}{C} = 1.1514\\ih \\text{ $(m)$}$\\\\\n$\\vb{r}{CB} = \\vb{r}{C}-\\vb{r}{B}=0.9014\\ih-0.433\\jh \\text{ $(m)$}$\n\\end{frame}\n\n\\begin{frame}\n\\begin{itemize}\n\t\\item Find $\\vb{v}{B}$\\vskip1.25mm\n\t$\\vb{v}{B} = \\vb{\\omega}{1} \\times \\vb{r}{B} = -0.433\\ih + 0.25\\jh \\text{ $(m/s)$}$\\vskip2.5mm\n\t\\item Find $\\vb{v}{C}$, $\\vb{\\omega}{2}$\\vskip1.25mm\n\t$\\vb{v}{C} = \\vb{v}{B} + \\vb{v}{CB} = \\vb{v}{B} + \\vb{\\omega}{2}\\times \\vb{r}{CB}$\\vskip2.5mm\n%\t$v_{Cx}\\ih = (-0.433\\ih + 0.25\\jh) + \\omega_{2z}\\kh\\times(0.9014\\ih-0.433\\jh)$\\vskip2.5mm\n\tProjecting onto $x,y$-axes, we obtain the solution:\\vskip1.25mm\n\t$\\Rightarrow\\begin{cases}\n\tv_{Cx} = -0.5531\\text{ $(m/s)$}\\\\ \\omega_{2z}=-0.2774\\text{ $(rad/s)$}\n\t\\end{cases}$\\\\$\\Rightarrow\\begin{cases}\n\t\\vb{v}{C} = -0.5531\\ih\\text{ $(m/s)$}\\\\ \\vb{\\omega}{2}=-0.2774\\kh\\text{ $(rad/s)$}\n\t\\end{cases}$\n\\end{itemize}\n\\end{frame}\n\\begin{frame}\n\t\\begin{itemize}\n\t\t\\item Find $\\vb{a}{B}$, $\\vb{a}{^n_{CB}}$\n\t\t\\vskip1.25mm\n\t\t$\\vb{a}{B} = \\vb{\\alpha}{1}\\times \\vb{r}{B} - \\vb{\\omega}{1}^2\\vb{r}{B}=0.183\\ih - 0.683\\jh \\text{ $(m/s^2)$}$\\\\\n\t\t$\\vb{a}{^n_{CB}}=- \\vb{\\omega}{2}^2\\vb{r}{CB}=-0.0693\\ih+0.0333\\jh\\text{ }(m/s^2)$\n\t\t\\vskip2.5mm\n\t\t\\item Find $\\vb{a}{C}$, $\\vb{\\alpha}{2}$\n\t\t\\vskip1.25mm\n\t\t$\\vb{a}{C} = \\vb{a}{B} + \\vb{a}{CB} = \\vb{a}{B} + \\vb{\\alpha}{2}\\times \\vb{r}{CB} - \\vb{\\omega}{2}^2\\vb{r}{CB}$\n%\t\t$a_{Cx}\\ih = 0.1137\\ih-0.6497\\jh+ \\alpha_{2z}\\kh\\times(0.9014\\ih-0.433\\jh)$\n\t\t\\vskip2.5mm\n\t\tProjecting onto $x,y$-axes, we obtain the solution:\n\t\t\\vskip1.25mm\n\t\t$\\Rightarrow\\begin{cases}\n\t\t\ta_{Cx} = 0.4258\\text{ $(m/s^2)$}\\\\\\alpha_{2z}=0.7208\\text{ }(rad/s^2)\n\t\t\\end{cases}$\\\\\n\t\t$\\Rightarrow\\begin{cases}\n\t\t\\vb{a}{C} = 0.4258\\ih\\text{ $(m/s^2)$}\\\\\\vb{\\alpha}{2}=0.7208\\kh\\text{ }(rad/s^2)\n\t\t\\end{cases}$\\\\\n\t\\end{itemize}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_v.m}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_a.m}\n\\end{frame}\n\n\\begin{frame}{Derivative Method}\n\tLet $\\theta_1=\\theta(t)$ be a function of time $t$. We perform position analysis to find $\\vb{r}{B}(\\theta(t))$ and $\\vb{r}{C}(\\theta(t))$. 
Using MATLAB, we find that:\\vskip1.25mm\n\t\\hskip20mm$\\displaystyle \\vb{r}{B}(\\theta(t)) = \\frac{\\cos{\\theta(t)}}{2}\\ih + \\frac{\\sin{\\theta(t)}}{2}\\jh$\\vskip1.25mm\n\t\\hskip20mm$\\displaystyle \\vb{r}{C}(\\theta(t)) = \\left(\\frac{\\cos{\\theta(t)}}{2} + \\frac{\\sqrt{2 - \\sin{\\theta(t)}}\\sqrt{\\sin{\\theta(t)} + 2}}{2}\\right)\\ih$\\vskip2.5mm\n\tUsing the results above, we obtain the angle $\\theta_2(\\theta(t))$ of link 2:\n\t\\[\\displaystyle\\theta_2(\\theta(t))=\\arctan{\\left(-\\frac{\\sin{\\theta(t)}}{\\sqrt{2 - \\sin{\\theta(t)}}\\sqrt{\\sin{\\theta(t)} + 2}}\\right)}\\]\n\\end{frame}\n\\begin{frame}\nThen, find velocities and accelerations by taking the first and second derivatives of $\\vb{r}{B}$, $\\vb{r}{C}$, $\\theta_2$, respectively:\n\\[\\begin{cases}\n\\displaystyle \\vb{v}{B}(\\theta(t))=\\xvec[.]{\\bm{r}}_{\\bm{B}}\\\\\\displaystyle \\vb{v}{C}(\\theta(t))=\\xvec[.]{\\bm{r}}_{\\bm{C}}\\\\\\displaystyle \\vb{\\omega}{2}(\\theta(t))=\\dot{\\theta_2}\\kh\n\\end{cases}, \\begin{cases}\n\\displaystyle \\vb{a}{B}(\\theta(t))=\\xvec[:]{\\bm{r}}_{\\bm{B}}\\\\\\displaystyle \\vb{a}{C}(\\theta(t))=\\xvec[:]{\\bm{r}}_{\\bm{C}}\\\\\\displaystyle \\vb{\\alpha}{2}(\\theta(t))=\\ddot{\\theta_2}\\kh\n\\end{cases}\\]\nSetting $\\theta(t)=60^\\circ$, we obtain the final results.\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\t\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_p_d.m}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_v_a_d.m}\n\\end{frame}\n\\begin{frame}{Independent Contour Method}\n\tUsing the solution above,\\vskip1.25mm\n\t$\\vb{r}{B} = 0.25\\ih + 0.433\\jh\\text{ $(m)$}, \\vb{r}{C} = 1.1514\\ih \\text{ $(m)$}$ \\\\\n\t$\\vb{r}{CB}=0.9014\\ih-0.433\\jh \\text{ $(m)$}$, $ \\vb{a}{B}=0.183\\ih - 0.683\\jh \\text{ $(m/s^2)$}$\\vskip2.5mm\n\tApplying velocity formulas to find $\\vb{\\omega}{12} = \\omega_{12z}\\kh$, $\\vb{\\omega}{23} = \\omega_{23z}\\kh$,\n\t$\\vb{v}{23} = v_{23x}\\ih$:\n\t\\vskip1.25mm% and notice that there exists $\\vb{v}{23}$ corresponding to slider 2:\\vskip1.25mm\n\t$\\begin{cases}\n\t\\vb{\\omega}{01} + \\vb{\\omega}{12} + \\vb{\\omega}{23} = \\vb{0}{}\\\\\n\t\\vb{r}{B}\\times\\vb{\\omega}{12} + \\vb{r}{C}\\times\\vb{\\omega}{23} + \\vb{v}{23} = \\vb{0}{}\n\t\\end{cases}$\\vskip2.5mm%\\$\\Rightarrow\\begin{cases}\n%\t1\\kh+\\omega_{12z}\\kh + \\omega_{23z}\\kh=\\vb{0}{}\\\\\n%\t(0.25\\ih + 0.433\\jh)\\times\\omega_{12z}\\kh+(1.1514\\ih)\\times\\omega_{23z}\\kh+v_{23x}\\ih=\\vb{0}{}\n%\t\\end{cases}$\\vskip2.5mm\n\tSolving the system of equations by projecting onto $x,y$-axes, we obtain:\\vskip1.25mm $\\text{\\boldmath$\\vb{\\omega}{12}$}=-1.2774\\kh\\text{ }(rad/s)$, $\\text{\\boldmath$\\vb{\\omega}{23}$}=0.2774\\kh\\text{ }(rad/s)$,\n\t$\\vb{v}{23}=0.5531\\ih\\text{ }(m/s)$\\vskip2.5mm\n\tThe results are consistent with the two previous methods.\n\\end{frame}\n\n\\begin{frame}\n\tAgain, using the acceleration formulas and noting that there exists $\\vb{a}{23}$ corresponding to slider 2:\\vskip1.25mm\n\t$\\begin{cases}\n\t\\text{\\boldmath$\\vb{\\alpha}{01}$}+\\text{\\boldmath$\\vb{\\alpha}{12}$}+\\text{\\boldmath$\\vb{\\alpha}{23}$}=\\vb{0}{}\\\\\n\t\\vb{r}{B}\\times\\text{\\boldmath$\\vb{\\alpha}{12}$}+\\vb{r}{C}\\times\\text{\\boldmath$\\vb{\\alpha}{23}$}+\\vb{a}{23}-\\vb{\\omega}{1}^2\\vb{r}{B}-\\vb{\\omega}{2}^2\\vb{r}{CB}=\\vb{0}{}\n\t\\end{cases}$\\vskip2.5mm%\\\\$\\Rightarrow\\begin{cases}\n%\t-1\\kh+\\alpha_{12z}\\kh + 
\\alpha_{23z}\\kh=\\vb{0}{}\\\\\n%\t(0.25\\ih + 0.433\\jh)\\times\\alpha_{12z}\\kh+(1.1514\\ih)\\times\\alpha_{23z}\\kh+a_{23x}\\ih\\\\\\hskip40mm-0.3193\\ih-0.3997\\jh=\\vb{0}{}\n%\t\\end{cases}$\\vskip2.5mm\n\tSolving the system of equations by projecting onto $x,y$-axes, we obtain:\\vskip1.25mm $\\text{\\boldmath$\\vb{\\alpha}{12}$}=1.7208\\kh\\text{ }(rad/s^2)$, $\\text{\\boldmath$\\vb{\\alpha}{23}$}=-0.7208\\kh\\text{ }(rad/s^2)$,\n\t$\\vb{a}{23}=-0.4258\\ih\\text{ }(m/s^2)$\\vskip2.5mm\n\tThe results is consistent with the 2 previous methods.\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_v_i.m}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_v_i_2.m}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_a_i.m}\n\\end{frame}\n\\begin{frame}{MATLAB R2019a code}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_a_i_2.m}\n\\end{frame}\n\\begin{frame}{Velocity - Acceleration plotting}\n\\lstinputlisting[style=Matlab-editor, basicstyle=\\mlttfamily]{codes/s-c_v_a_plotting.m}\n\\end{frame}\n\\begin{frame}{Output figures}\n\\begin{table}\n\\begin{minipage}{0.5\\linewidth}\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=60mm]{images/v_RRRT.png}\n\t\\end{figure}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.5\\linewidth}\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=60mm]{images/a_RRRT.png}\n\t\\end{figure}\n\\end{minipage}\n\\end{table}\n\\end{frame}", "meta": {"hexsha": "f73ec6e959bb06785bbf33756503567964527f99", "size": 7747, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Examples/RRRT.tex", "max_stars_repo_name": "HungNguyenDang/literate-meme", "max_stars_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Examples/RRRT.tex", "max_issues_repo_name": "HungNguyenDang/literate-meme", "max_issues_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Finished/velocity_acceleration_analysis_pdf/Sections/Examples/RRRT.tex", "max_forks_repo_name": "HungNguyenDang/literate-meme", "max_forks_repo_head_hexsha": "ed3383576918b6ca1480e45c6ed689c87636fc41", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9806451613, "max_line_length": 215, "alphanum_fraction": 0.6306957532, "num_tokens": 3368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127641048444, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5998993831770747}}
{"text": "\\newpage\n\\section{Deep Boltzmann Machines \\cite{salakhutdinov2013learning, salakhutdinov2009deep, goodfellow2016deep}}\n\\subsection{More efficient learning algorithm for general binary-binary BMs}\n\\subsubsection{PCD-k}\nTo recap:\n\\bg\n\\frac{1}{N}\\sum_{n=1}^N\\nabla_{\\mb{W}}\\log p(\\mb{x}_n;\\bs{\\psi}) = \\E_{\\mb{v},\\mb{h}\\sim P_{\\text{data}}(\\mb{v},\\mb{h};\\bs{\\psi})}\\l[\\mb{v}\\mb{h}^T\\r]-\\E_{\\mb{v},\\mb{h}\\sim P_{\\text{model}}(\\mb{v},\\mb{h};\\bs{\\psi})}\\l[\\mb{v}\\mb{h}^T\\r]\n\\eg\nand similar formulae for log-likelihood gradients w.r.t. $\\mb{L},\\mb{J},\\mb{b},\\mb{c}$, see (17).\n\\\\[0.5em]\nNow, instead of CD-k we use PCD-k (i.e. we keep one Markov chain without restarting its state between the updates), which falls into class of so-called \\emph{Stochastic approximation procedures} (SAP) to estimate \\tb{model's} expectations.\n\\\\[1em]\nLet $\\bs{\\psi}^t$ and $\\mb{x}^t$ be the current model parameters and the state. Then $\\mb{x}^t$ and $\\bs{\\psi}^t$ are updated sequentially as follows:\n\\begin{itemize}\n\t\\item given $\\mb{x}^t$, a new state $\\mb{x}^{t+1}$ is sampled from a transition operator $T_{\\bs{\\psi}^t}(\\mb{x}^{t+1}\\leftarrow \\mb{x}^t)$ that leaves $p(\\cdot,\\cdot;\\bs{\\psi}^t)$ invariant (which in our case is performing Gibbs sampling using equations (11) and (12) for $k$ steps);\n\t\\item a new parameter $\\bs{\\psi}^{t+1}$ is then obtained by replacing the intractable model's expectation by the point estimate $\\mb{x}^t$ (see also subsubsection 2.2.2 for whether to sample or use probabilities/means instead of sampling, and for which type of states)\n\\end{itemize}\nIn practice, we typically maintain a set of $P$ \"persistent\" sample \"particles\" $X^t=\\{\\mb{x}_1^t\\ldots \\mb{x}_{P}^t\\}$ and use average over those particles.\n\\\\[1em]\nThe intuition behind why this procedure works is the following: as the learning rate becomes sufficiently small compared with the mixing rate of the Markov chain, this \"persistent\" chain will always stay very close to the stationary distribution even if it is only run for a few MCMC updates per parameter update.\n\\\\[1em]\nProvided $\\|\\bs{\\psi}^t\\|$ is bounded and Markov chain, governed by a transition kernel $T_{\\bs{\\psi}^t}$ is ergodic (which is typically true in practice), and sequence of learning rates $\\alpha_t$ satisfies $\\sum_{t}\\alpha_t=\\infty$, $\\sum_{t}\\alpha_t^2<\\infty$, this stochastic approximation procedure procedure is almost surely convergent to an asymptotically stable point.\n\\\\\nNote that in practice, $\\alpha_t$ is not approached to zero, but rather to some small but positive constant $\\varepsilon$ (e.g. $10^{-6}, 10^{-5}$).\n\\subsubsection{Variational learning}\nAnother approach is used to approximate \\tb{data}-dependent expectations. 
\\subsubsection{Variational learning}\nAnother approach is used to approximate \\tb{data}-dependent expectations. We approximate the true posterior over latent variables $p(\\mb{h}|\\mb{v};\\bs{\\psi})$ (which is intractable in a general BM, tractable in an RBM, but will again be intractable in a DBM) by an approximate posterior $q(\\mb{h};\\bs{\\mu})$, and the variational parameters $\\bs{\\mu}$ are updated to follow the gradient of a \\emph{lower bound on the log-likelihood}:\n\\\\[1em]\nIn general:\n\\begin{empheq}[box={\\mybox[1em][1em]}]{gather*}\n\\log p(\\mb{v};\\bs{\\psi})=\\int q(\\mb{h};\\bs{\\mu})\\log p(\\mb{v};\\bs{\\psi})\\mathrm{d}\\mb{h}\n=\\int q(\\mb{h};\\bs{\\mu})\\log \\frac{p(\\mb{v},\\mb{h};\\bs{\\psi})}{p(\\mb{h}|\\mb{v};\\bs{\\psi})}\\mathrm{d}\\mb{h}=\\\\\n=\\int q(\\mb{h};\\bs{\\mu})\\log \\l(\\frac{p(\\mb{v},\\mb{h};\\bs{\\psi})}{q(\\mb{h};\\bs{\\mu})}\\cdot \\frac{q(\\mb{h};\\bs{\\mu})}{p(\\mb{h}|\\mb{v};\\bs{\\psi})}\\r)\\mathrm{d}\\mb{h}=\\\\\n=\\int q(\\mb{h};\\bs{\\mu})\\log p(\\mb{v},\\mb{h};\\bs{\\psi})\\mathrm{d}\\mb{h}\n\\underbrace{-\\int q(\\mb{h};\\bs{\\mu})\\log q(\\mb{h};\\bs{\\mu})\\mathrm{d}\\mb{h}}_{\\mc{H}(q)}\n+\\underbrace{\\int q(\\mb{h};\\bs{\\mu})\\log \\frac{q(\\mb{h};\\bs{\\mu})}{p(\\mb{h}|\\mb{v};\\bs{\\psi})}\\mathrm{d}\\mb{h}}_{D_{\\text{KL}}(q(\\mb{h};\\bs{\\mu}) \\;\\|\\; p(\\mb{h}|\\mb{v};\\bs{\\psi}))\\geq 0}\\geq\\\\\n\\geq \\int q(\\mb{h};\\bs{\\mu})\\log p(\\mb{v},\\mb{h};\\bs{\\psi})\\mathrm{d}\\mb{h} + \\mc{H}(q) =: \\mc{L}_{\\text{ELBO}}(\\bs{\\mu}; \\bs{\\psi})\n\\end{empheq}\nFor a Boltzmann Machine (denoting by $[1]$ and $[2]$ the two terms of the bound):\n\\begin{empheq}[box={\\mybox[1em][1em]}]{gather*}\n\\text{\\textbullet{} }[1]=\\sum_{\\mb{h}} q(\\mb{h};\\bs{\\mu})\\l[-E(\\mb{v},\\mb{h};\\bs{\\psi})-\\log Z(\\bs{\\psi})\\r]=-\\underbrace{\\log Z(\\bs{\\psi})}_{=\\text{const w.r.t. }\\bs{\\mu}}\\cdot\\underbrace{\\sum_{\\mb{h}} q(\\mb{h};\\bs{\\mu})}_{=1}+\\\\\n+\\sum_{\\mb{h}} q(\\mb{h};\\bs{\\mu})\\l[\\sum_{j<k}L_{jk}v_jv_k +\\sum_{l<m}J_{lm}h_lh_m +\\sum_{j,l}W_{jl}v_jh_l+\\sum_jb_jv_j+\\sum_lc_lh_l  \\r]=\\\\\n=\\sum_{l<m}J_{lm}\\E_{\\mb{h}\\sim q(\\mb{h};\\bs{\\mu})}[h_lh_m]+\\sum_{j,l}W_{jl}v_j\\E_{\\mb{h}\\sim q(\\mb{h};\\bs{\\mu})}[h_l]+\\sum_lc_l\\E_{\\mb{h}\\sim q(\\mb{h};\\bs{\\mu})}[h_l]+\\text{const}\n\\end{empheq}\nFor a Boltzmann Machine and a fully-factorizable $q(\\mb{h};\\bs{\\mu})=\\prod_l q(h_l;\\mu_l), q(h_l=1;\\mu_l)=\\mu_l$ (\\emph{mean-field} approach):\n\\begin{empheq}[box={\\mybox[1em][1em]}]{gather*}\n\\text{\\textbullet{} }[1]=\\sum_{l<m}J_{lm}\\mu_l\\mu_m+\\sum_{j,l}W_{jl}v_j\\mu_l+\\sum_lc_l\\mu_l+\\text{const}\n\\\\\n\\text{\\textbullet{} }[2]=\\mc{H}(q)=-\\sum_{\\mb{h}}q(\\mb{h};\\bs{\\mu})\\log q(\\mb{h};\\bs{\\mu})=-\\sum_{\\mb{h}}q(\\mb{h};\\bs{\\mu})\\sum_{j}\\log q(h_j;\\mu_j)=\\\\\n=-\\sum_j \\sum_{h_j\\in\\{0,1\\}}q(h_j;\\mu_j)\\log q(h_j;\\mu_j) \\underbrace{\\sum_{\\mb{h}_{-j}}q(\\mb{h}_{-j};\\bs{\\mu}_{-j})}_{=1}=-\\sum_j \\l[\\mu_j\\log \\mu_j+(1-\\mu_j)\\log(1-\\mu_j)\\r]\n\\end{empheq}\n\\bg\n\\boxed{\\mc{L}_{\\text{ELBO}}(\\bs{\\mu}; \\bs{\\psi})=\\sum_{l<m}J_{lm}\\mu_l\\mu_m+\\sum_{j,l}W_{jl}v_j\\mu_l+\\sum_lc_l\\mu_l-\\sum_j \\l[\\mu_j\\log \\mu_j+(1-\\mu_j)\\log(1-\\mu_j)\\r]+\\text{C}}\n\\eg\nLet us maximize (55) with respect to $\\bs{\\mu}$ for fixed $\\bs{\\psi}$:\n\\begin{empheq}[box={\\mybox[1em][1em]}]{gather*}\n0 \\doteq \\frac{\\partial}{\\partial\\mu_{i}}\\mc{L}=\\underbrace{\\comment{$l=i$}\\sum_{m>i}J_{im}\\mu_m+\\comment{$m=i$}\\sum_{l<i}J_{li}\\mu_l}_{=\\comment{$J_{ij}=J_{ji},J_{ii}=0$}\\sum_l J_{il}\\mu_l}+\\sum_{j}W_{ji}v_j+c_i\n-\\log 
\\mu_i-1+\\log(1-\\mu_i)+1\n\\\\\n\\Leftrightarrow\n\\\\\n\\text{sigm}^{-1}(\\mu_i)=\\log\\frac{\\mu_i}{1-\\mu_i}=\\sum_{l<i}J_{li}\\mu_l+\\sum_{j}W_{ji}v_j+c_i\n\\end{empheq}\n\\bg\n\\boxed{\\mu_i\\leftarrow \\text{sigm}\\l(\\sum_{l<i}J_{li}\\mu_l+\\sum_jW_{ji}v_j+c_i\\r)}\n\\eg\nNote that this is exactly the formula for (12) for computing $p(h_j=1|\\mb{v},\\mb{h}_{-i})$ in BM! So, updates of variational parameters can be computed using Gibbs sampler. This is not a coincidence, but the same holds if replace types of units, use RBM or even DBM (see below)!\n\\\\[1em]\nNote, that this variational approach cannot be used to approximate model-expectations because of minus sign in formulae (54),(17). This would cause variational learning to change the parameters so as to \\emph{maximize} $D_{\\text{KL}}(q(\\mb{h};\\bs{\\mu}) \\;\\|\\; p(\\mb{h}|\\mb{v};\\bs{\\psi}))$.\n\\\\[0.5em]\nThe naive mean-field approach was chosen because:\n\\begin{itemize}\n\t\\gooditem its convergence is usually fast;\n\t\\gooditem it is unimodal.\n\\end{itemize}\nNote that in general, we don't have to provide a parametric form of the approximating distribution beyond enforcing the independence assumptions. The variational approximation procedure is generally able to recover the functional form of the approximate distribution \\cite{goodfellow2016deep}.\n\n\\subsection{Deep Boltzmann Machine}\nAgain assume unless specifically mentioned that DBM contains all binary units.\n\\subsubsection{High-level overview}\n\\textbullet{} DBM is a deep generative model, that consists of a layer of visible units and a series of layers of hidden units.\n\\begin{figure}[h]\n\\begin{mdframed}\n\\includegraphics[scale=0.4]{img/dbn_dbm.png}\n\\centering\n\\caption{A three-layer Deep Belief Network and a three-layer Deep Boltzmann Machine.}\n\\label{fig:dbn_dbm}\n\\end{mdframed}\n\\end{figure}\nIn comparison to another deep generative model, DBN (which is hybrid, and has directed layers and one undirected), DBM is entirely undirected model, see Fig. \\ref{fig:dbn_dbm}. DBN is trained using greedy, layer by layer training of corresponding RBMs (one bottom-top pass). At the same time, All parameters in DBM are learned \\tb{jointly}, which greatly facilitates learning better generative models. 
Even though both models have the potential of learning a series of internal representations that become increasingly complex, the DBM's approximate bottom-top and top-bottom inference better propagates uncertainty $\\Rightarrow$ it deals more robustly with ambiguous inputs than the DBN.\n\\\\\n\\textbullet{} Formally, suppose the number of (hidden) layers is $L=3$.\n$$\n\\mb{v}\\in\\R^D, \\mb{h}=\\{\\mb{h^{(1)}}, \\mb{h^{(2)}}, \\mb{h^{(3)}}\\}, \\mb{h^{(s)}}\\in\\R^{H_s}, s\\in\\{1,2,3\\};\n$$\nEnergy function:\n\\bg\nE(\\mb{v},\\mb{h};\\bs{\\psi})=-\\mb{v}^T\\mb{W^{(1)}}\\mb{h^{(1)}}-\\mb{h^{(1)}}^T\\mb{W^{(2)}}\\mb{h^{(2)}}-\\mb{h^{(2)}}^T\\mb{W^{(3)}}\\mb{h^{(3)}}-\\mb{b}\\cdot\\mb{v}-\\mb{c^{(1)}}\\cdot\\mb{h^{(1)}}-\\mb{c^{(2)}}\\cdot\\mb{h^{(2)}}-\\mb{c^{(3)}}\\cdot\\mb{h^{(3)}},\n\\eg\nwhere $\\bs{\\psi}=\\{\\mb{W^{(1)}}, \\mb{W^{(2)}}, \\mb{W^{(3)}}, \\mb{b}, \\mb{c^{(1)}}, \\mb{c^{(2)}}, \\mb{c^{(3)}}\\}$.\nThe probability that the model assigns to a configuration $(\\mb{v},\\mb{h})$:\n\\bg\np(\\mb{v},\\mb{h};\\bs{\\psi})\\;\\propto\\;\\exp(-E(\\mb{v},\\mb{h};\\bs{\\psi}))\n\\eg\n\\\\\n\\textbullet{} Now observe that the connections between units in the DBM are restricted in such a way that a unit from one layer depends only on the units from the \\emph{neighboring} layers, and does not depend on other units in the same layer or in the layers beyond. This is a multi-layer generalization of the RBM, and it allows us to compute the probabilities of units given the others efficiently. For instance,\n\\bg\np(h^{(1)}_j=1|\\mb{v},\\mb{h^{(2)}})=\\text{sigm}\\l(\\sum_iW^{(1)}_{ij}v_i+\\sum_lW^{(2)}_{jl}h_l^{(2)}+c_j^{(1)}\\r)\n\\eg\nObserve how this formula resembles formulae (21),(22). This also easily generalizes to other layers and other types of layers:\n\\\\[1em]\n\\noindent\\fbox{%\n\t\\parbox{\\textwidth}{%\n\tTo compute the probability of a unit being on given all the others, \\tb{add} the linear combinations of the states of the units from the neighboring layers + bias, and apply the \\tb{activation function of the unit} (e.g. sigmoid for binary, softmax for softmax/multinomial, affine for gaussian etc.).\n\t}%\n}\n\\\\[1em]\nNote, however, that the distribution over \\tb{all} hidden layers generally does not factorize because of interactions between layers. For instance, for $L=2$, $p(\\mb{h}^{(1)},\\mb{h}^{(2)}|\\mb{v};\\bs{\\psi})$ does not factorize due to the interaction weights $\\mb{W}^{(2)}$ between $\\mb{h}^{(1)}$ and $\\mb{h}^{(2)}$, which render those variables mutually dependent.\n\\\\\n\\textbullet{} Formulae for the log-likelihood gradients are derived the same way as for the RBM and have a similar form. For instance:\n\\bg\n\\frac{1}{N}\\sum_{n=1}^N\\nabla_{\\mb{W^{(2)}}}\\log p(\\mb{x}_n;\\bs{\\psi}) = \\E_{\\mb{h^{(1)}},\\mb{h^{(2)}}\\sim P_{\\text{data}}(\\mb{h^{(1)}},\\mb{h^{(2)}};\\bs{\\psi})}\\l[\\mb{h^{(1)}}\\mb{h^{(2)}}^T\\r]-\\E_{\\mb{h^{(1)}},\\mb{h^{(2)}}\\sim P_{\\text{model}}(\\mb{h^{(1)}},\\mb{h^{(2)}};\\bs{\\psi})}\\l[\\mb{h^{(1)}}\\mb{h^{(2)}}^T\\r]\n\\eg\n
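The boxed layer-conditional rule above is a one-liner in code; for example (a hypothetical NumPy sketch with our own shape conventions, $\\mb{W^{(1)}}$ being $D\\times H_1$ and $\\mb{W^{(2)}}$ being $H_1\\times H_2$):\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef p_h1_given(v, h2, W1, W2, c1):\n    # neighbouring layers enter additively, then the unit activation is applied\n    return sigmoid(W1.T @ v + W2 @ h2 + c1)\n\\end{verbatim}\n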
\\textbullet{} Finally, we apply the new learning algorithms for BMs described in the previous subsection with the fully-factorizable mean-field approach:\n\\bg\nq(\\mb{h};\\bs{\\mu})=\\prod_j \\prod_l \\prod_m q(h_j^{(1)};\\mu_j^{(1)}) \\cdot q(h_l^{(2)};\\mu_l^{(2)}) \\cdot q(h_m^{(3)};\\mu_m^{(3)})\n\\eg\nThe lack of intra-layer interaction makes it possible to use fixed-point equations (just like for the general BM algorithm) to actually optimize the variational lower bound and find the true optimal mean-field expectations.\n\\\\\n\\textbullet{} Further on we will use a DBM with Gaussian visible units, a multinomial top-most hidden layer, and Bernoulli hidden units for the intermediate layers. In this setting the learning algorithm again remains the same; the difference is only in the way probabilities are computed and samples are drawn.\n\\\\\n\\textbullet{} One unfortunate property of DBMs is that sampling from them is relatively difficult. DBNs only need to use MCMC sampling in their top pair of layers; the other layers are used only at the end of the sampling process, in one efficient ancestral sampling pass. To generate a sample from a DBM, it is necessary to use MCMC across all layers, with every layer of the model participating in every Markov chain transition.\n\n\\subsubsection{Gibbs sampling in DBMs}\n\\textbullet{} Similar to the RBM, Gibbs sampling using equations (59) can be done in parallel, thus allowing us to perform block Gibbs sampling for each layer of units. In addition to that, as illustrated in Fig. \\ref{fig:dbm_gibbs}, the DBM layers can be organized into a bipartite graph, with odd layers on one side and even layers on the other. This immediately implies that when we condition on the variables in the even layers, the variables in the odd layers become conditionally independent. In conjunction with block Gibbs sampling for each layer, this allows us to perform Gibbs sampling in the whole DBM in \\tb{only 2 iterations}, instead of $L + 1$, as one might naively think at first.\n\\\\\n\\textbullet{} The good news is that in TF no additional work needs to be done beyond implementing block Gibbs sampling for each layer. Each independent branch in the computational graph should be executed in parallel.\n\\begin{figure}[h]\n\\begin{mdframed}\n\\includegraphics[scale=0.4]{img/dbm_gibbs.png}\n\\centering\n\\caption{A deep Boltzmann machine, re-arranged to reveal its bipartite graph structure.}\n\\label{fig:dbm_gibbs}\n\\end{mdframed}\n\\end{figure}\n\\textbullet{} Note that the Contrastive Divergence algorithm is slow for DBMs because they do not allow efficient sampling of the hidden states given the visible units -- instead, CD would require burning in a Markov chain every time a new negative phase sample is needed.\n\\subsubsection{Greedy layerwise pretraining of DBMs}\nA DBM can be trained using the aforementioned learning algorithm from random initialization (typically the results are quite bad even on MNIST, see \\cite{goodfellow2012joint, goodfellow2016deep}), but it works much better if the weights are initialized sensibly. 
Greedy layerwise pretraining is a learning procedure that consists of learning a stack of RBMs one layer at a time. After the stack is learned, the whole stack can be viewed as a single probabilistic model, called a Deep Belief Net. Thus, pre-training for a DBN is straightforward. In the case of a DBM, though, a layer in the middle of the stack of RBMs is trained with only bottom-up input, but after the stack is combined to form the DBM, the layer will have both bottom-up and top-down input. To account for this so-called \\emph{evidence double counting problem} \\cite{salakhutdinov2009deep, goodfellow2016deep}, Fig. \\ref{fig:dbm_pretraining}, two modifications are required (a small sketch of the resulting composition step follows the figures):\n\\begin{figure}[h]\n\\begin{mdframed}\n\\includegraphics[width=1.5in]{img/dbm_pretraining2.png}\n\\quad\n\\quad\n\\includegraphics[width=3.5in]{img/dbm_pretraining.png}\n\\centering\n\\caption{Pre-training consists of learning a stack of modified RBMs, that are then composed to create a DBM.}\n\\label{fig:dbm_pretraining}\n\\end{mdframed}\n\\end{figure}\n\\begin{itemize}\n\t\\item the bottom RBM should be trained using two \"copies\" of each visible unit, with the weights tied to be equal between these two copies ($\\cong$ simply double the total input to the hidden layer during the upward pass); similarly, the top RBM should be trained with two copies of the topmost layer. Training of all intermediate RBMs, if there are any, should not be modified.\n\t\\item the weights of all intermediate RBMs, though, should be divided by 2 before being inserted into the DBM\n\\end{itemize}\n\\begin{figure}[h]\n\\begin{mdframed}\n\\includegraphics[scale=0.08]{dbm/dbm_init.jpg}\n\\centering\n\\caption{A more detailed scheme of how to initialize a 3-layer DBM from a learned stack of RBMs, including biases. Black circles -- visible units, blue -- hidden units, red circle -- visible bias, black square -- hidden bias. Biases can be summed or averaged.}\n\\label{fig:dbm_init}\n\\end{mdframed}\n\\end{figure}\n
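In code, the composition step amounts to little more than the halving rule; a minimal sketch (a hypothetical helper, assuming a bottom-to-top list of weight matrices from the modified RBM stack):\n\\begin{verbatim}\ndef compose_dbm_weights(rbm_weights):\n    # bottom and top RBMs were trained with doubled copies, so their\n    # weights are inserted as-is; intermediate weights are halved\n    n = len(rbm_weights)\n    return [W if i in (0, n - 1) else W / 2.0\n            for i, W in enumerate(rbm_weights)]\n\\end{verbatim}\n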
\\subsubsection{Joint training of DBMs}\nClassic DBMs require greedy unsupervised pretraining and, to perform classification well, require a separate MLP-based classifier on top of the hidden features they extract. It is hard to track performance during training because we cannot evaluate properties of the full DBM while training the first RBM. Software implementations of DBMs need to have many different components for CD training of the individual RBMs, PCD training of the full DBM, and training based on back-propagation through the MLP. Finally, the MLP on top of the Boltzmann machine loses many of the advantages of the Boltzmann machine probabilistic model, such as being able to perform inference when some input values are missing.\n\\\\\nThere are two main ways to resolve the joint training problem of the deep Boltzmann machine: \\tb{multi-prediction DBMs} \\cite{goodfellow2013multi}, which is currently beyond the scope of this project, and the \\tb{centering trick} \\cite{montavon2012deep}, which reparametrizes the model in order to make the Hessian of the cost function better-conditioned at the beginning of the learning process. More specifically, consider the energy function of a generalized Boltzmann Machine (BM, RBM, and DBM can all be represented by an appropriate choice of $\\mb{x}$ -- states, $\\mb{U}$ -- weights, $\\mb{a}$ -- biases):\n\\bg\nE(\\mb{x};\\bs{\\psi})=-\\mb{x}^T \\mb{U}\\mb{x}-\\mb{a}\\cdot\\mb{x}\n\\eg\nThe idea of the centering trick is simply to reparameterize this energy function as\n\\bg\nE(\\mb{x};\\bs{\\psi})=-(\\mb{x}-\\bs{\\beta})^T \\mb{U}(\\mb{x}-\\bs{\\beta})-\\mb{a}\\cdot(\\mb{x}-\\bs{\\beta}),\n\\eg\nwhere the new hyperparameter vector $\\bs{\\beta}$ is chosen so that $\\mb{x}-\\bs{\\beta}\\approx \\mb{0}$ at the beginning of training. This does not change the set of probability distributions that the model can represent, but it does change the learning dynamics so much that it actually becomes possible to train a DBM from random initialization w/o pre-training and achieve sensible results. However, \\cite{goodfellow2013multi} notes that DBMs trained using the centering trick have never been shown to achieve good classification performance, if that is the primary goal.\n\n\\subsubsection{Annealed importance sampling \\cite{salakhutdinov2008, salakhutdinov2009deep, hinton2012better, upadhya2015empirical}}\nLet $p_A(\\mb{x})=\\frac{p^*_A(\\mb{x})}{\\mc{Z}_A}$ be a simple proposal distribution from which we can sample easily, and $p_{B}(\\mb{x})=\\frac{p^*_B(\\mb{x})}{\\mc{Z}_B}$ our complex target distribution. We also have to make sure $p_B \\ll p_A$, which is easy in our case, since we can always choose $p_A$ to be the uniform pmf, which dominates every other probability mass function on discrete units (of finite cardinality).\n\\\\[0.5em]\n\\u{(Classical) Importance Sampling}\n\\\\\nThe ratio of the partition functions can be estimated as follows:\n\\bg\n\\frac{\\mc{Z}_B}{\\mc{Z}_A}=\\sum_{\\mb{x}}\\frac{p^*_B(\\mb{x})}{p^*_A(\\mb{x})} p_A(\\mb{x})=\\E_{\\mb{x}\\sim p_A}\\l[\\frac{p^*_B(\\mb{x})}{p^*_A(\\mb{x})} \\r] \\approx \\frac{1}{N}\\sum_{i=1}^N \\frac{p^*_B(\\mb{x}_i)}{p^*_A(\\mb{x}_i)}\n\\eg\nThe problem is that when $p_A$ and $p_B$ are very different, as in our case, this estimator is very poor: its variance is very large, possibly infinite.\n\\\\[0.5em]\n\\u{Annealed Importance Sampling}\n\\\\\nTo handle this issue, we define a sequence of probability mass functions $\\l(p_m\\r)_{m=0:M}$ such that $p_0 = p_A$ and $p_M = p_B$, and for which we know the unnormalized probabilities $p_m^*$; typically these are mixtures of the target and the proposal:\n\\bg\np_m(\\mb{x})=p_B(\\mb{x})^{\\beta_m}\\cdot p_A(\\mb{x})^{1-\\beta_m}, \\beta_m=\\frac{m}{M}\n\\eg\nAlso, in order not to have to sample from the intermediate $p_m$ directly, we need a sequence of transition operators $\\l(T_i(\\mb{x}_{i + 1} \\leftarrow \\mb{x}_i)\\r)_{i=1:M-1}$, each of which leaves the corresponding $p_i$ invariant. 
The importance weight can then be computed as\n\\bg\n\\omega_{\\text{AIS}} \\leftarrow \\prod_{m=1}^M \\frac{p_m^*(\\mb{x}_m)}{p_{m-1}^*(\\mb{x}_m)}\n\\eg\nwhere $\\mb{x}_1 \\sim p_0=p_A;\\; \\mb{x}_2 \\sim T_1(\\mb{x}_2 \\leftarrow \\mb{x}_1);\\; \\ldots \\; \\mb{x}_M \\sim T_{M-1}(\\mb{x}_M \\leftarrow \\mb{x}_{M-1})$.\nThe ratio of the partition functions can then be estimated as an average over many AIS runs:\n\\bg\n\\frac{\\mc{Z}_B}{\\mc{Z}_A}=\\frac{\\mc{Z}_M}{\\mc{Z}_0}\\approx \\frac{1}{L}\\sum_{l=1}^L \\omega_{\\text{AIS}}^{(l)}\n\\eg\nNotice also that we don't need to compute the partition functions of any of the intermediate distributions.\n\\\\\n\\tb{Note}: to avoid numerical problems and overflow errors (partition functions are very large numbers even for a very moderately sized BM), all computations are performed in the $\\log$-domain, as usual.\n\\\\[0.5em]\n\\u{Annealed Importance Sampling for 2-layer Bernoulli BM}\n\\\\\nIt turns out that we can reduce the state space of AIS to only the hidden units in the first layer, $\\mb{x}=\\{\\mb{h}^{(1)}\\}$, by explicitly summing out the visible and top-most hidden units:\n\\begin{empheq}[box={\\mybox[1em][1em]}]{align*}\n\\log p^*\\l(\\mb{h}^{(1)}\\r) &= \\log \\sum_{\\mb{v},\\mb{\\mb{h}^{(2)}}} p^*\\l(\\mb{v}, \\mb{h}^{(1)}, \\mb{h}^{(2)}\\r)=\n\\\\\n&= \\log \\sum_{\\mb{v},\\mb{\\mb{h}^{(2)}}} \\exp \\l( \\mb{v}^T\\mb{W}^{(1)}\\mb{h}^{(1)} +\n\\mb{h}^{(1)^T}\\mb{W}^{(2)}\\mb{h}^{(2)} + \\mb{b}\\cdot\\mb{v} + \\mb{c}^{(1)}\\cdot\\mb{h}^{(1)} + \\mb{c}^{(2)}\\cdot\\mb{h}^{(2)} \\r)\n\\\\\n&= \\mb{c}^{(1)}\\cdot\\mb{h}^{(1)} + \\log\\l[ \\sum_{\\mb{v}}\\exp\\l( \\mb{v}^T\\mb{W}^{(1)}\\mb{h}^{(1)} + \\mb{b}\\cdot\\mb{v} \\r)\\sum_{\\mb{h}^{(2)}} \\exp\\l( \\mb{h}^{(1)^T}\\mb{W}^{(2)}\\mb{h}^{(2)} + \\mb{c}^{(2)}\\cdot\\mb{h}^{(2)} \\r) \\r]\n\\\\\n&= \\mb{c}^{(1)}\\cdot\\mb{h}^{(1)} + \\sum_i^V \\text{softplus}\\l(\\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \\r) + \\sum_k^{H_2} \\text{softplus}\\l(\\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \\r)\n\\end{empheq}\n\\bg\n\\boxed{\\log p^*\\l(\\mb{h}^{(1)}\\r)=\\mb{c}^{(1)}\\cdot\\mb{h}^{(1)} + \\sum_i^V \\text{softplus}\\l(\\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \\r) + \\sum_k^{H_2} \\text{softplus}\\l(\\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \\r)}\n\\eg\nFrom this we can easily derive the equation for $\\log p_{\\textcolor{red}{m}}^*$ by simply scaling all weights by $\\beta_m$:\n\\begin{equation}\n\\begin{aligned}\n\\log p_{\\textcolor{red}{m}}^*\\l(\\mb{h}^{(1)}\\r) =\n\\textcolor{red}{\\beta_m}\\mb{c}^{(1)}\\cdot\\mb{h}^{(1)} + \\sum_i^V \\text{softplus}\\l(\\textcolor{red}{\\beta_m}\\cdot\\l(\\sum_j^{H_1} W_{ij}^{(1)}h_j^{(1)}+b_i \\r)\\r) +\n\\\\\n+ \\sum_k^{H_2} \\text{softplus}\\l(\\textcolor{red}{\\beta_m}\\cdot\\l(\\sum_j^{H_1} W_{jk}^{(2)}h_j^{(1)}+c_k^{(2)} \\r)\\r)\n\\end{aligned}\n\\end{equation}\nWhen $\\beta_m=1$ we obtain the target distribution; when $\\beta_m=0$ we obtain the uniform distribution:\n\\bg\n\\log p_0^*\\l(\\mb{h}^{(1)}\\r) \\equiv 0+\\sum_i^V \\text{softplus}(0) + \\sum_k^{H_2} \\text{softplus}(0)=(V+H_2)\\log 2\n\\eg\nthus $ \\log \\mc{Z}_0=(V+H_1+H_2)\\log 2$.\n\\\\[1em]\nThus we gradually increase the \"inverse temperature\" $\\beta$ from 0 to 1 and can estimate the partition function using the procedure described above; a minimal sketch is given below. Starting from a randomly initialized $\\mb{h}^{(1)}$, we apply a sequence of transition operators $T_i$, each of which is simply an alternating Gibbs sweep with the weights scaled by $\\beta_i$.\n\\\\[1em]\nWe can do the same for different types of units and a larger number of layers. In the latter case we can again analytically sum out the visible and top-most hidden units.\n
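As a concrete reference, here is a minimal NumPy sketch of AIS for a toy 2-layer Bernoulli DBM (hypothetical sizes and parameter values; the boxed formula above provides $\\log p^*_m$):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef softplus(x):\n    return np.logaddexp(0.0, x)\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\nV, H1, H2, M, R = 6, 4, 3, 1000, 50          # toy sizes, M betas, R runs\nW1 = 0.1 * rng.standard_normal((V, H1))\nW2 = 0.1 * rng.standard_normal((H1, H2))\nb, c1, c2 = np.zeros(V), np.zeros(H1), np.zeros(H2)\n\ndef log_p_star(h1, beta):\n    # the boxed formula, with all weights and biases scaled by beta\n    return (beta * c1 @ h1\n            + softplus(beta * (W1 @ h1 + b)).sum()\n            + softplus(beta * (W2.T @ h1 + c2)).sum())\n\ndef transition(h1, beta):\n    # alternating Gibbs sweep that leaves p_beta(h1) invariant\n    v  = (rng.random(V)  < sigmoid(beta * (W1 @ h1 + b))).astype(float)\n    h2 = (rng.random(H2) < sigmoid(beta * (W2.T @ h1 + c2))).astype(float)\n    return (rng.random(H1) < sigmoid(beta * (W1.T @ v + W2 @ h2 + c1))).astype(float)\n\nbetas = np.linspace(0.0, 1.0, M + 1)\nlog_w = np.zeros(R)\nfor r in range(R):\n    h1 = (rng.random(H1) < 0.5).astype(float)    # exact sample from p_0\n    for m in range(1, M + 1):\n        log_w[r] += log_p_star(h1, betas[m]) - log_p_star(h1, betas[m - 1])\n        h1 = transition(h1, betas[m])\n\nlog_Z0 = (V + H1 + H2) * np.log(2.0)\nshift = log_w.max()                              # log-sum-exp for stability\nlog_ZB = log_Z0 + shift + np.log(np.exp(log_w - shift).mean())\n\\end{verbatim}\n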
\\\\[0.5em]\n\\u{Variational lower-bound}\n\\\\\nHaving an estimate $\\t{\\mc{Z}}$ of the partition function, we can estimate the variational lower bound on a test vector $\\mb{v}^*$ as follows:\n\\begin{equation}\n\\begin{aligned}\n\\log p(\\mathbf{v}^{*};\\boldsymbol{\\psi})\\geq\\;&-\\sum_{\\mathbf{h}} q(\\mathbf{h};\\boldsymbol{\\mu})E(\\mathbf{v}^{*}, \\mathbf{h};\\boldsymbol{\\psi})+\\mathcal{H}(\\boldsymbol{\\mu})-\\log\\mathcal{Z}(\\boldsymbol{\\psi})\n\\\\\n=\\;& \\mathbf{v}^{*^{T}}\\mathbf{W}^{(1)}\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)}+\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)^{T}}\\mathbf{W}^{(2)}\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(2)}+\\mathbf{b}\\cdot\\mathbf{v}^{*}+\\mathbf{c}^{(1)}\\cdot\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)}+\\mathbf{c}^{(2)}\\cdot\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(2)}+\\mathcal{H}(\\boldsymbol{\\mu}_{\\mathbf{v}^*})-\\log\\mathcal{Z}(\\boldsymbol{\\psi})\n\\\\\n\\approx\\;& \\mathbf{v}^{*^{T}}\\mathbf{W}^{(1)}\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)}+\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)^{T}}\\mathbf{W}^{(2)}\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(2)}+\\mathbf{b}\\cdot\\mathbf{v}^{*}+\\mathbf{c}^{(1)}\\cdot\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(1)}+\\mathbf{c}^{(2)}\\cdot\\boldsymbol{\\mu}_{\\mathbf{v}^*}^{(2)}+\\mathcal{H}(\\boldsymbol{\\mu}_{\\mathbf{v}^*})-\\log\\widehat{\\mathcal{Z}}\n\\end{aligned}\n\\end{equation}\nwhere $\\bs{\\mu}_{\\mathbf{v}^*}$ are the variational parameters obtained by running the fixed-point equations using the Gibbs sampler until convergence with the visible units clamped to $\\mathbf{v}^*$.\n\\\\[1em]\nOne can also estimate the true log-probability using AIS by clamping the visible units to a test example (estimating the log-probability for one test example is computationally equivalent to estimating a partition function).\n\n\\subsubsection{Additional facts}\n\\textbullet{} In \\cite{goodfellow2016deep} they say that obtaining state-of-the-art results with a DBM requires an additional partial mean field in the negative phase; more details in \\cite{goodfellow2013multi}.\n\\\\\n\\textbullet{} The inference can further be accelerated using a separate \\emph{recognition model}, see \\cite{salakhutdinov2010efficient} for details.\n\\\\\n\\textbullet{} DBMs were developed after DBNs. Compared to DBNs, the posterior distribution $p(\\mb{h}|\\mb{v})$ is simpler for DBMs. Somewhat counterintuitively, the simplicity of this posterior distribution allows richer approximations of the posterior \\cite{goodfellow2016deep}.\n\\\\\n\\textbullet{} The use of a proper mean field allows the approximate inference procedure for DBMs to capture the influence of top-down feedback interactions. This makes DBMs interesting from the point of view of neuroscience, because the human brain is known to use many top-down feedback connections \\cite{goodfellow2016deep}.\n\\\\\n\\textbullet{} In \\cite{goodfellow2013joint} they observe that the energy function $E(\\mb{v},\\mb{h};\\bs{\\psi})$ inevitably induces some prior $p(\\mb{h};\\bs{\\psi})$ that is not motivated by the structure of any kind of data. 
The role of deeper layers in DBM is simply to provide a better prior on the first layer hidden units.\n\n\\clearpage\n\\begin{figure}[h]\n\\begin{mdframed}\n\\centering\n\\includegraphics[width=6.4in]{dbm/tf_graph.png}\n\\caption{High-level computational graph for DBM model.}\n\\end{mdframed}\n\\end{figure}\n", "meta": {"hexsha": "d61655e26234dd70a4569c855e69128bed55ff24", "size": 25928, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapter_3.tex", "max_stars_repo_name": "praisethemoon/boltzmann-machines", "max_stars_repo_head_hexsha": "bc49ba2c8c6c894af55b272e1b92f9cea3576136", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 196, "max_stars_repo_stars_event_min_datetime": "2019-03-16T14:50:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:24:00.000Z", "max_issues_repo_path": "tex/chapter_3.tex", "max_issues_repo_name": "praisethemoon/boltzmann-machines", "max_issues_repo_head_hexsha": "bc49ba2c8c6c894af55b272e1b92f9cea3576136", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2019-04-09T07:33:01.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-27T21:37:37.000Z", "max_forks_repo_path": "tex/chapter_3.tex", "max_forks_repo_name": "praisethemoon/boltzmann-machines", "max_forks_repo_head_hexsha": "bc49ba2c8c6c894af55b272e1b92f9cea3576136", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 49, "max_forks_repo_forks_event_min_datetime": "2019-03-16T14:51:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T13:47:40.000Z", "avg_line_length": 84.7320261438, "max_line_length": 920, "alphanum_fraction": 0.7002854057, "num_tokens": 8922, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127455162773, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5998993807546175}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{titletoc}\n\\usepackage{titlesec}\n\\usepackage{geometry} \n\\usepackage{fontspec, xunicode, xltxtra}\n\\usepackage{float}\n\\usepackage{cite}\n\\usepackage{amsmath}\n\\usepackage{listings}\n\\usepackage{titletoc}\n\n\\geometry{left=3cm,right=3cm,top=3cm,bottom=3cm}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\logit}{logit}\n\\DeclareMathOperator*{\\var}{var}\n\\DeclareMathOperator*{\\expec}{E}\n\n\\begin{document}\n\\title{\\textsf{Homework 2 for Bayesian Data Analysis}}\n\\author{Fan JIN\\quad (2015011506)}\n\\maketitle\n\n\\section*{Question 2.7a}\n{\n    $$p(\\theta) \\propto \\theta^{-1} (1-\\theta)^{-1}.$$\n\n    Denote $\\phi = \\logit(\\theta) = \\log{(\\theta / (1-\\theta))}$, one has $\\theta = 1/(1+\\exp{(-\\phi)})$, and therefore,\n    $$p(\\phi) \\propto (\\frac{1}{1 + \\exp{(-\\phi)}})^{-1} (1-\\frac{1}{1 + \\exp{(-\\phi)}})^{-1} \\cdot \\left| \\frac{\\exp{(-\\phi)}}{(1+\\exp{(-\\phi)})^2} \\right| = 1,$$ which gives a uniform prior distribution for $\\logit(\\theta)$, or the natural parameter of the exponential family.\n}\n\n\\section*{Question 2.7b}\n{\n    $$p(\\theta|y) \\propto \\theta^{-1} (1-\\theta)^{-1} \\cdot \\theta^{y} (1-\\theta)^{n-y} = \\theta^{y-1} (1-\\theta)^{n-y-1}.$$\n\n    If $y=0$, then we have $$p(\\theta|y) \\propto \\theta^{-1} (1-\\theta)^{n-1},$$ whose integral in interval $[0, 1]$ is \n    $$\\int_0^1 {\\theta^{-1} (1-\\theta)^{n-1} \\mathrm{d}\\theta} = +\\infty,$$ since $p(\\theta|y)$ is of the same order as $1/\\theta$ when $\\theta \\rightarrow 0$.\n\n    Likewise, if $y=n$, the integral is infinite when $\\theta \\rightarrow 1$. Therefore, the posterior distribution is improper if $y=0$ or $y=n$.\n}\n\n\\section*{Question 2.12}\n{\n    $$\\log{(p(y|\\theta))} = \\theta + y \\log{(\\theta)} - \\log{(y!)},$$\n\n    $$J(\\theta) = -\\expec{( \\frac{\\mathrm{d}^2 \\log{(p(y|\\theta))}}{\\mathrm{d}\\theta^2} | \\theta)} = -\\expec{( -\\frac{y}{\\theta^2} | \\theta)} = \\frac{\\expec{(y | \\theta)}}{\\theta^2} = \\frac{1}{\\theta}.$$\n\n    Hence, the Jeffery's prior density for $\\theta$ is $$p(\\theta) \\propto [J(\\theta)]^{1/2} = \\theta^{-1/2}.$$\n\n    Compared with a $\\mathrm{Gamma}(\\alpha, \\beta)$ distribution with prior $p(\\theta) \\propto \\theta^{\\alpha-1} \\exp{(-\\beta \\theta)}$, we have $\\alpha=1/2$ and $\\beta=0$.\n}\n\n\\section*{Question 3.1a}\n{\n    $$p(\\theta) \\propto \\prod_1^J {{\\theta_j}^{\\alpha_j-1}},$$\n    $$p(y|\\theta) \\propto \\prod_1^J {{\\theta_j}^{y_j}},$$\n    $$p(\\theta|y) \\propto \\prod_1^J {{\\theta_j}^{y_j+\\alpha_j-1}}.$$\n\n    Thus, by integrating $y_3, \\cdots, y_J$, the joint posterior distribution of $\\theta_1$ and $\\theta_2$ is $$p(\\theta_1, \\theta_2 | y) \\propto \\theta_1^{y_1+\\alpha_1-1} \\theta_2^{y_2+\\alpha_2-1}.$$\n\n    Under the variable substitution $\\alpha = \\theta_1 / (\\theta_1+\\theta_2)$ and $\\beta = \\theta_1 + \\theta_2$, we have $$\\theta_1 = \\alpha \\beta, \\quad \\theta_2 = (1-\\alpha) \\beta,$$ and\n    $$p(\\alpha, \\beta | y) = p(\\theta_1, \\theta_2) \\cdot \\left| \\frac{\\partial (\\theta_1, \\theta_2)}{\\partial (\\alpha, \\beta)} \\right| \\propto (\\alpha \\beta)^{y_1+\\alpha_1-1} \\left((1-\\alpha) \\beta\\right)^{y_2+\\alpha_2-1} \\cdot \\left| \\beta \\right| $$\n    $$= \\alpha^{y_1+\\alpha_1-1} (1-\\alpha)^{y_2+\\alpha_2-1} \\beta^{y_1+y_2+\\alpha_1+\\alpha_2-1}.$$\n\n    Therefore, the marginal posterior 
distribution of $\\alpha$ is $$p(\\alpha|y) = \\int {p(\\alpha, \\beta | y) \\mathrm{d}\\beta} \\propto \\alpha^{y_1+\\alpha_1-1} (1-\\alpha)^{y_2+\\alpha_2-1},$$ which is $\\alpha|y \\sim \\mathrm{Gamma}(y_1+\\alpha_1, y_2+\\alpha_2).$\n}\n\n\\section*{Question 3.1b}\n{\n    For a $\\mathrm{Gamma}(\\alpha_1, \\alpha_2)$ prior distribution, and a $\\mathrm{Binomial}$ sample with $y_1$ independent observations out of $y_1+y_2$ tests, we have the posterior distribution for the probability $\\alpha$ is a $\\mathrm{Gamma}(y_1+\\alpha_1, y_2+\\alpha_2)$. (See Homework 1) This posterior distribution is identical to the distribution obtained in (a).\n}\n\n\\section*{Question 3.9}\n{\n    It is known that\n    $$p(y | \\mu, \\sigma^2) \\propto \\sigma^{-n} \\exp{\\left(-\\frac{1}{2\\sigma^2} \\sum_1^n {(y_i-\\mu)^2}\\right)}$$\n    $$= \\sigma^{-n} \\exp{\\left(-\\frac{1}{2\\sigma^2} \\left[\\sum_1^n {(y_i-\\bar{y})^2} + n (\\bar{y}-\\mu)^2 \\right]\\right)}$$\n    $$= \\sigma^{-n} \\exp{\\left(-\\frac{1}{2\\sigma^2} \\left[(n-1)s^2 + n (\\bar{y}-\\mu)^2 \\right]\\right)},$$\n    and\n    $$p(\\mu, \\sigma^2) = p(\\sigma^2) p(\\mu | \\sigma^2) \\propto \\sigma^{-1} (\\sigma^2)^{-(\\nu_0/2+1)} \\exp{\\left( -\\frac{1}{2\\sigma^2} \\left[ \\nu_0\\sigma_0^2 + \\kappa_0(\\mu_0-\\mu)^2 \\right] \\right)}.$$\n\n    Therefore, we have the joint posterior distribution\n    $$p(\\mu, \\sigma^2 | y) \\propto p(y | \\mu, \\sigma^2) p(\\mu, \\sigma^2)$$\n    $$\\propto \\sigma^{-1} (\\sigma^2)^{-((\\nu_0+n)/2+1)} \\exp{\\left( -\\frac{1}{2\\sigma^2} \\left[ (n-1)s^2 + n (\\bar{y}-\\mu)^2 + \\nu_0\\sigma_0^2 + \\kappa_0(\\mu_0-\\mu)^2 \\right] \\right)},$$ \n    which is identical to a N-Inv-$\\chi^2 (\\mu_n, \\sigma_n^2; \\nu_n, \\sigma_n^2)$ distribution, or \n    $$\\sigma^{-1} (\\sigma^2)^{-(\\nu_n/2+1)} \\exp{\\left( -\\frac{1}{2\\sigma^2} \\left[ \\nu_n\\sigma_n^2 + \\kappa_n(\\mu_n-\\mu)^2 \\right] \\right)}.$$\n\n    Thus, by comparing the coefficients, we have $$\\nu_n = \\nu_0+n$$ and\n    $$(n-1)s^2 + n (\\bar{y}-\\mu)^2 + \\nu_0\\sigma_0^2 + \\kappa_0(\\mu_0-\\mu)^2 = \\nu_n\\sigma_n^2 + \\kappa_n(\\mu_n-\\mu)^2,$$ or\n    $$n+\\kappa_0 = \\kappa_n,$$\n    $$-2n\\bar{y} - 2\\kappa_0 \\mu_0 = -2\\kappa_n \\mu_n,$$\n    $$(n-1)s^2 + n\\bar{y}^2 + \\nu_0\\sigma_0^2 + \\kappa_0 \\mu_0^2 = \\nu_n\\sigma_n^2 + \\kappa_n \\mu_n^2,$$\n    the solution to which is \n    $$\\nu_n = \\nu_0+n,$$\n    $$\\kappa_n = \\kappa_0+n,$$\n    $$\\mu_n = \\frac{n}{n+\\kappa_0} \\bar{y} + \\frac{\\kappa_0}{n+\\kappa_0} \\mu_0,$$\n    $$\\nu_n\\sigma_n^2 = (n-1)s^2 + \\nu_0\\sigma_0^2 + n\\bar{y}^2 + \\kappa_0 \\mu_0^2 - \\kappa_n \\mu_n^2 = (n-1)s^2 + \\nu_0\\sigma_0^2 + \\frac{\\kappa_0 n}{\\kappa_0 + n} (\\bar{y}-\\mu_0)^2.$$\n\n}\n\n\\section*{Question 3.10}\n{\n    From the independency conditions, we have the joint posterior distribution\n    $$p(\\mu_1, \\sigma_1^2, \\mu_2, \\sigma_2^2 \\mid y) \\propto \\sigma_1^{-n_1-2} \\sigma_2^{-n_2-2} \\exp{\\left( - \\frac{1}{2\\sigma_1^2} \\sum_{j=1}^{n_1}{(y_{1j}-\\mu_1)^2} - \\frac{1}{2\\sigma_2^2} \\sum_{j=1}^{n_2}{(y_{2j}-\\mu_2)^2} \\right)}$$\n    $$= \\sigma_1^{-n_1-2} \\sigma_2^{-n_2-2} \\exp{\\left( - \\frac{1}{2\\sigma_1^2} {\\left[ (n_1-1)s_1^2 + n_1 (\\bar{y_1} - \\mu_1)^2 \\right]} \\right)} \\exp{\\left( - \\frac{1}{2\\sigma_2^2} {\\left[ (n_2-1)s_2^2 + n_2 (\\bar{y_2} - \\mu_2)^2 \\right]} \\right)}.$$\n\n    Recall that $$\\int {\\exp{\\left(-\\frac{n_1}{2\\sigma_1^2} (\\bar{y_1}^2 - \\mu_1)^2 \\right)} \\mathrm{d}\\mu_1} = (2\\pi)^{1/2} \\sigma_1 / 
\n\\section*{Question 3.10}\n{\n    From the independence assumptions, we have the joint posterior distribution\n    $$p(\\mu_1, \\sigma_1^2, \\mu_2, \\sigma_2^2 \\mid y) \\propto \\sigma_1^{-n_1-2} \\sigma_2^{-n_2-2} \\exp{\\left( - \\frac{1}{2\\sigma_1^2} \\sum_{j=1}^{n_1}{(y_{1j}-\\mu_1)^2} - \\frac{1}{2\\sigma_2^2} \\sum_{j=1}^{n_2}{(y_{2j}-\\mu_2)^2} \\right)}$$\n    $$= \\sigma_1^{-n_1-2} \\sigma_2^{-n_2-2} \\exp{\\left( - \\frac{1}{2\\sigma_1^2} {\\left[ (n_1-1)s_1^2 + n_1 (\\bar{y_1} - \\mu_1)^2 \\right]} \\right)} \\exp{\\left( - \\frac{1}{2\\sigma_2^2} {\\left[ (n_2-1)s_2^2 + n_2 (\\bar{y_2} - \\mu_2)^2 \\right]} \\right)}.$$\n\n    Recalling that $$\\int {\\exp{\\left(-\\frac{n_1}{2\\sigma_1^2} (\\bar{y_1} - \\mu_1)^2 \\right)} \\mathrm{d}\\mu_1} = (2\\pi)^{1/2} \\sigma_1 / \\sqrt{n_1},$$ we have the integral\n    $$p(\\sigma_1^2, \\sigma_2^2 \\mid y) = \\int {p(\\mu_1, \\sigma_1^2, \\mu_2, \\sigma_2^2 | y) \\mathrm{d}\\mu_1 \\mathrm{d}\\mu_2} $$\n    $$\\propto \\sigma_1^{-n_1-2} \\sigma_2^{-n_2-2} \\exp{\\left( - \\frac{1}{2\\sigma_1^2} {\\left[ (n_1-1)s_1^2 \\right]} \\right)}  \\exp{\\left( - \\frac{1}{2\\sigma_2^2} {\\left[ (n_2-1)s_2^2 \\right]} \\right)} \\cdot \\sigma_1 \\sigma_2$$\n    $$\\propto \\sigma_1^{-n_1-1} \\sigma_2^{-n_2-1} \\exp{\\left( - \\frac{(n_1-1)s_1^2}{2\\sigma_1^2} \\right)} \\exp{\\left( - \\frac{(n_2-1)s_2^2}{2\\sigma_2^2} \\right)}$$\n    $$\\propto \\left(\\frac{s_1^2}{\\sigma_1^2}\\right)^{(n_1+1)/2} \\left(\\frac{s_2^2}{\\sigma_2^2}\\right)^{(n_2+1)/2} \\exp{\\left( - \\frac{(n_1-1)s_1^2}{2\\sigma_1^2} \\right)} \\exp{\\left( - \\frac{(n_2-1)s_2^2}{2\\sigma_2^2} \\right)}.$$\n\n    Denoting $u_1 = \\frac{s_1^2}{\\sigma_1^2}$ and $u_2 = \\frac{s_2^2}{\\sigma_2^2}$, we have\n    $$p(u_1, u_2 \\mid y) \\propto \\left(u_1\\right)^{(n_1+1)/2} \\left(u_2\\right)^{(n_2+1)/2} \\exp{\\left( - \\frac{(n_1-1)}{2} u_1 \\right)} \\exp{\\left( - \\frac{(n_2-1)}{2} u_2 \\right)} \\cdot \\left| \\frac{1}{u_1^2} \\frac{1}{u_2^2} \\right|$$\n    $$= \\left(u_1\\right)^{(n_1-3)/2} \\left(u_2\\right)^{(n_2-3)/2} \\exp{\\left( - \\frac{(n_1-1)}{2} u_1 \\right)} \\exp{\\left( - \\frac{(n_2-1)}{2} u_2 \\right)}$$\n    and that $u_1$ is independent of $u_2$.\n    \n    By the variable substitution $v_1 = u_1/u_2$ and $v_2 = u_2$, or $u_1 = v_1 v_2$ and $u_2 = v_2$, we have\n    $$p(v_1, v_2 \\mid y) \\propto \\left(v_1 v_2\\right)^{(n_1-3)/2} \\left(v_2\\right)^{(n_2-3)/2} \\exp{\\left( - \\frac{(n_1-1)}{2} v_1 v_2 \\right)} \\exp{\\left( - \\frac{(n_2-1)}{2} v_2 \\right)} \\cdot \\left| v_2 \\right|$$\n    $$ = (v_1)^{(n_1-3)/2} \\cdot (v_2)^{(n_1+n_2-4)/2} \\cdot \\exp{\\left( -\\frac{(n_1-1)v_1 + (n_2-1)}{2} v_2 \\right)}.$$\n\n    Using the normalization of the gamma distribution\\footnote{https://en.wikipedia.org/wiki/Gamma\\_distribution}, we have \n    $$ p(v_1 \\mid y) = \\int_0^\\infty {p(v_1, v_2 \\mid y) \\mathrm{d}v_2} $$\n    $$= \\int_0^\\infty {(v_1)^{(n_1-3)/2} \\cdot (v_2)^{(n_1+n_2-4)/2} \\cdot \\exp{\\left( -\\frac{(n_1-1)v_1 + (n_2-1)}{2} v_2 \\right)} \\mathrm{d}v_2}$$\n    $$= (v_1)^{(n_1-3)/2} \\cdot \\frac{\\Gamma(\\alpha_0)}{\\beta_0^{\\alpha_0}}$$\n    $$ \\propto (v_1)^{(n_1-3)/2} \\cdot \\left(\\frac{(n_1-1)v_1 + (n_2-1)}{2} \\right)^{-(n_1+n_2-2)/2},$$\n    where $\\alpha_0 = (n_1+n_2-2)/2$ and $\\beta_0 = \\left((n_1-1)v_1 + (n_2-1)\\right)/2$.\n\n    Comparing the expression above with the pdf of the $F$ distribution\\footnote{https://en.wikipedia.org/wiki/F-distribution}, we obtain $$v_1 \\mid y \\sim F(n_1-1, n_2-1).$$\n}\n\n\n\\clearpage\n\\end{document}\n", "meta": {"hexsha": "c409ccd4229a89c2ce227ad3c2dd0bcd93d8ed9b", "size": 9084, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW2/Homework2.tex", "max_stars_repo_name": "goldsail/BayesianHomework", "max_stars_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-07-07T18:55:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-07T18:55:43.000Z", "max_issues_repo_path": "HW2/Homework2.tex", "max_issues_repo_name": "kingium/BayesianHomework", "max_issues_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "HW2/Homework2.tex", "max_forks_repo_name": "kingium/BayesianHomework", "max_forks_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.7941176471, "max_line_length": 369, "alphanum_fraction": 0.5931307794, "num_tokens": 4107, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.5998993753416586}}
{"text": "%% SECTION HEADER /////////////////////////////////////////////////////////////////////////////////////\n\\section{3D Spectral Modelling}\n\\label{sec:3Dmodel}\n\n%% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////\nThe displacement vector of the \\ac{3d} element is composed of three translational displacements defined as:\n\\begin{eqnarray}\n\t\\left \\{ \\begin{array}{c}\n\t\t\\textbf{u}^e(\\xi,\\eta,\\zeta) \\\\\n\t\t\\textbf{v}^e(\\xi,\\eta,\\zeta) \\\\\n\t\t\\textbf{w}^e(\\xi,\\eta,\\zeta)\n\t\\end{array} \\right\\}\n\t= \\textbf{N}^e(\\xi,\\eta, \\zeta)\\widehat{\\textbf{d}}^e\n\t= \\sum_{l=1}^r\\sum_{n=1}^q\\sum_{m=1}^p\\textbf{N}_m^e(\\xi)\\textbf{N}_n^e(\\eta)\\textbf{N}_l^e(\\zeta)\n\t\\left \\{ \\begin{array}{c}\n\t\t\\widehat{\\textbf{u}}^e(\\xi_m,\\eta_n,\\zeta_l) \\\\\n\t\t\\widehat{\\textbf{v}}^e(\\xi_m,\\eta_n,\\zeta_l) \\\\\n\t\t\\widehat{\\textbf{w}}^e(\\xi_m,\\eta_n,\\zeta_l)\n\t\\end{array} \\right\\},\n\t\\label{eq:3D_displ}\n\\end{eqnarray}\nwhere \\(\\widehat{\\textbf{u}}^e\\), \\(\\widehat{\\textbf{v}}^e\\) and \n\\(\\widehat{\\textbf{w}}^e\\) are displacements of the element nodes in \\(\\xi,\\eta\\) and \\(\\zeta\\) direction.\n\nThe nodal strain--displacement relations are given as \\cite{kudela20093d}:\n\\begin{eqnarray}\n\t\\boldsymbol{\\epsilon}^e=\\textbf{B}_{d}^e\\widehat{\\textbf{d}}^e=\n\t\\left [\n\t\\begin{array}{ccc}\n\t\t\\frac{\\partial N^e}{\\partial x} & 0 & 0\\\\\n\t\t0 & \\frac{\\partial N^e}{\\partial y} & 0\\\\\n\t\t0 & 0 & \\frac{\\partial N^e}{\\partial z}\\\\\n\t\t0 & \\frac{\\partial N^e}{\\partial z} & \\frac{\\partial N^e}{\\partial y}\\\\\n\t\t\\frac{\\partial N^e}{\\partial z} & 0 & \\frac{\\partial N^e}{\\partial x}\\\\\n\t\t\\frac{\\partial N^e}{\\partial y} & \\frac{\\partial N^e}{\\partial x} & 0\n\t\\end{array} \\right]\n\t\\left \\{ \\begin{array}{c}\n\t\t\\widehat{\\textbf{u}}^e \\\\\n\t\t\\widehat{\\textbf{v}}^e \\\\\n\t\t\\widehat{\\textbf{w}}^e\n\t\\end{array} \\right\\}.\n\\end{eqnarray}\nThe formulae of the structural matrices for 3D elements are:\n\\begin{eqnarray}\n\t\\textbf{M}_{dd}^e & = & \\int_{V_e}\\textbf{N}^T\\rho \\textbf{N} \\diff V_e,\\\\\n\t\\textbf{K}_{dd}^e & = & \\int_{V_e}{\\textbf{B}_d^e}^T\\textbf{C}\\textbf{B}_d^e \\diff V_e,\n\\end{eqnarray}\nwhere \\textbf{C} is the stiffness tensor, \\(\\rho\\) is mass density, and \\(V_e\\) is the element volume.", "meta": {"hexsha": "307a4cb9f747b45af5ab94508c743ce2248420b5", "size": 2161, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dmodel.tex", "max_stars_repo_name": "pfiborek/model-hc", "max_stars_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dmodel.tex", "max_issues_repo_name": "pfiborek/model-hc", "max_issues_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dmodel.tex", "max_forks_repo_name": "pfiborek/model-hc", "max_forks_repo_head_hexsha": "9e49fe23117fd320be14214e5ff6bafd2b1fc1a3", "max_forks_repo_licenses": ["BSD-3-Clause"], 
"max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0208333333, "max_line_length": 107, "alphanum_fraction": 0.5853771402, "num_tokens": 869, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127529517043, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5998993753416586}}
{"text": "\\documentclass[10pt, a4paper]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage{cite}\n\\usepackage{fullpage}\n\n\\graphicspath{ {./images/} }\n\n\\title{Computational Physics Assignment Answers}\n\\author{Tilman Roeder}\n\\date{\\today}\n\n\\renewcommand\\thesection{Question \\arabic{section}}\n\\renewcommand\\thesubsection{\\thesection{} (\\alph{subsection})}\n\n\\newcommand{\\plot}[3]{\\begin{figure}[ht]\\centering\\includegraphics[width=10cm]{#1}\\caption{#2}\\label{#3}\\end{figure}}\n\n\\begin{document}\n\\maketitle\n\n% Question 1\n\\section{}\n\n  \\subsection{}\n  \\label{sec:cgo}\n  The program can be found in \\texttt{/assignment/q-1}.\n\n  \\subsection{}\n  The program written for \\ref{sec:cgo} reports the smallest floating point number\n  $a$ such that $1 + a > 1$ as\\footnotemark{}:\n  \\begin{equation}\n    a = 2^{-63}.\n  \\end{equation}\n\n  Note that this is for the C numeric type \\texttt{long double}, on a 2017 MacBook Pro using clang.\n  This size is the one that would be expected for an 80-bit extended precision floating point (IEEE 754).\n\n  Go supports 32 and 64 bit IEEE floating point numbers. For these types we find:\n  \\begin{itemize}\n    \\item \\texttt{float32} $a \\approx 1.192093 \\times 10^{-7}$\n    \\item \\texttt{float64} $a \\approx 2.220446 \\times 10^{-16}$\n  \\end{itemize}\n\n  Note that Go does not support extended precision floating point values (although they can be used through\n  CGo, as is done for \\ref{sec:cgo}). These values match the expected values (where `expected' means `values\n  they need to be to be IEEE compliant'). The theoretical values given our above definition are:\n  \\begin{itemize}\n    \\item \\texttt{float32} $a = 2^{-23}$\n    \\item \\texttt{float64} $a = 2^{-52}$\n  \\end{itemize}\n\n  \\footnotetext{Machine $\\epsilon$ is also sometimes defined as $\\frac{a}{2}$, with a defined as above.\n  The values given for $a$ are valid for the definition given above.}\n\n% Question 2\n\\section{}\n  \\subsection{}\n  The program can be found in \\texttt{/assignment/q-2}. The LU decomposition is implemented in the\n  \\texttt{/assignment/comply} package.\n\n  \\subsection{}\n  Using the LU decomposition routine, one can find\\footnotemark:\n  \\begin{equation}\n    L = \\begin{bmatrix}1 & 0 & 0 & 0 & 0 \\\\ 1 & 1 & 0 & 0 & 0 \\\\ 0 & 1.125 & 1 & 0 & 0 \\\\ 0 & 0 & -1.419\\cdots & 1 & 0 \\\\ 0 & 0 & 0 & -1.216\\cdots & 1\\end{bmatrix}\n  \\end{equation}\n  \\begin{equation}\n    U = \\begin{bmatrix}3 & 1 & 0 & 0 & 0 \\\\ 0 & 8 & 4 & 0 & 0 \\\\ 0 & 0 & 15.5 & 10 & 0 \\\\ 0 & 0 & 0 & 45.193\\cdots & -25 \\\\ 0 & 0 & 0 & 0 & 30.575\\cdots\\end{bmatrix},\n  \\end{equation}\n  where $LU = A$, with $A$ being the matrix given in the assignment problem.\n\n  \\footnotetext{Some of the numeric answers for these questions have very long decimal representations\n    when the full result is quoted to machine precision. These numbers have been truncated at three decimal\n    figures and are reported as $0.123\\cdots$. These merely indicate truncation and no rounding has taken place. 
\n  \\subsection{}\n  The solver is implemented in \\texttt{/assignment/comply}.\n\n  \\subsection{}\n  Using the solver, $x$ is determined to be:\n  \\begin{equation}\n    x \\approx \\begin{bmatrix}0.4565707971488156 \\\\ 0.6302876085535531 \\\\ -0.5105752171071062 \\\\ 0.05389158651601452 \\\\ 0.19613175833411145\\end{bmatrix}.\n  \\end{equation}\n\n  \\subsection{}\n  Using the solver, the matrix inverse is determined as:\n  \\begin{equation}\n    A^{-1} \\approx \\begin{bmatrix}0.379\\cdots & -0.046\\cdots & 0.004\\cdots & -0.004\\cdots & -0.001\\cdots \\\\ -0.138\\cdots & 0.138\\cdots & -0.012\\cdots & 0.014\\cdots & 0.005\\cdots \\\\ 0.027\\cdots & -0.027\\cdots & 0.024\\cdots & -0.028\\cdots & -0.011\\cdots \\\\ 0.070\\cdots & -0.070\\cdots & 0.062\\cdots & 0.044\\cdots & 0.018\\cdots \\\\ 0.063\\cdots & -0.063\\cdots & 0.056\\cdots & 0.039\\cdots & 0.032\\cdots\\end{bmatrix}\n  \\end{equation}\n\n% Question 3\n\\section{}\n  \\subsection{}\n  Linear interpolation is implemented in \\texttt{/pkg/interpolate}.\n\n  \\subsection{}\n  Cubic spline interpolation is implemented in \\texttt{/pkg/interpolate}. However, this implementation is\n  not fully compliant with the question requirements, so there is an additional (compliant, but slower and\n  more memory-intensive) implementation in \\texttt{/assignment/comply}.\n\n  \\subsection{}\n  The data and interpolations are plotted in figure \\ref{fig:interpolate}.\n\n  \\plot{assignment-q-3}{\n    Linear and natural cubic spline interpolation on the given data.\n  }{fig:interpolate}\n\n% Question 4\n\\section{}\n  \\subsection{}\n  The program can be found in \\texttt{/assignment/q-4}. The convolutions are implemented in \\texttt{/pkg/signal}.\n\n  The program samples both functions over the range $[-10; 10]$ and takes $2^{10} = 1024$ samples. This\n  is an exact power of two, which makes good use of the fast Fourier transform.\n\n  A sample density of $1000$ was chosen for two reasons: firstly, this returns a convolution\n  sampled at a rate which results in a smooth-looking graph. Secondly, trials using the Fourier\n  transform show that for the chosen range and sample-density, one retrieves the original series very\n  precisely with minimal edge-effects or aliasing. This should then lead to good results when\n  using the Fourier transform to compute the convolution.\n\n  These choices are validated when comparing the numerical result to the exact result, which can\n  be obtained by computing the convolution integral directly. (See figure \\ref{fig:conv} and equation\n  \\ref{eq:conv}.)\n\n  \\subsection{}\n  The plot showing $h(t)$, $g(t)$, and $(g * h)(t)$ can be seen in figure \\ref{fig:conv}.\n\n  % Given by: integrate(4 * exp((-(t-tau)**2)/2) / sqrt(2*pi), (tau, 3, 5))\n  The theoretical result\n  \\begin{equation}\n    \\label{eq:conv}\n    (g * h)(t) = - 2 \\operatorname{erf}{\\left(\\frac{\\sqrt{2} \\left(3 - t\\right)}{2} \\right)} + 2 \\operatorname{erf}{\\left(\\frac{\\sqrt{2} \\left(5 - t\\right)}{2} \\right)}\n  \\end{equation}\n  is also shown in the plot.\n\n  \\plot{assignment-q-4}{\n    Convolution of the functions g and h. The plot depicts the functions themselves, their exact\n    convolution, and the result of their numerical convolution.\n  }{fig:conv}\n
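\n  The FFT route is short enough to sketch inline (a hypothetical NumPy rendering of the approach, not the Go code from \\texttt{/assignment/q-4}):\n\\begin{lstlisting}[language=Python]\nimport numpy as np\nfrom scipy.special import erf\n\nn, lo, hi = 1024, -10.0, 10.0\nt = np.linspace(lo, hi, n, endpoint=False)\ndt = t[1] - t[0]\n\ng = np.where((t >= 3) & (t <= 5), 4.0, 0.0)   # top-hat pulse\nh = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)    # unit Gaussian\n\n# circular convolution via FFT; dt approximates the integral, and the\n# roll compensates for the window starting at t = lo instead of 0\nconv = np.fft.irfft(np.fft.rfft(g) * np.fft.rfft(h), n) * dt\nconv = np.roll(conv, round(lo / dt))\n\nexact = -2 * erf(np.sqrt(2) * (3 - t) / 2) + 2 * erf(np.sqrt(2) * (5 - t) / 2)\nassert np.allclose(conv, exact, atol=5e-2)    # hat edges are only grid-resolved\n\\end{lstlisting}\n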
\n% Question 5\n\\section{}\n  \\subsection{}\n  The program is implemented in \\texttt{/assignment/q-5}, and the resulting distribution can be seen in\n  figure \\ref{fig:uniform}. The random sampling used is ultimately based on \\texttt{PCG XSL RR 128/64},\n  which is a random number generator with very robust statistical properties and small memory\n  requirements\\cite{pcg}.\n\n  \\plot{assignment-q-5-a}{\n    Random samples drawn from a uniform distribution over $[0,1]$. The red line indicates the shape of\n    the sample distribution.\n  }{fig:uniform}\n\n  \\subsection{}\n  \\label{sec:sample}\n  Starting from a uniform variate $x \\in [0,1]$ with $x \\sim p(x) = 1$, consider:\n\n  \\begin{equation}\n    \\int_0^x d\\zeta p_x(\\zeta) = \\int_0^{y(x)} d\\gamma p_y(\\gamma) = C_y(y(x)),\n  \\end{equation}\n  from which it follows that\n  \\begin{equation}\n    y(x) = C_y^{-1}(x) = \\left(\\int_0^y d\\gamma p_y(\\gamma)\\right)^{-1}(x).\n  \\end{equation}\n\n  Given $p_y(y) = \\frac{1}{2} \\cos(\\frac{y}{2})$, then:\n  \\begin{equation}\n    C_y(y) = \\int_0^y d\\gamma \\frac{1}{2} \\cos(\\frac{\\gamma}{2}) = \\sin(\\frac{y}{2}),\n  \\end{equation}\n  from which finally:\n  \\begin{equation}\n    y(x) = C_y^{-1}(x) = 2 \\times \\arcsin(x).\n  \\end{equation}\n\n  The resulting sample distribution can be seen in figure \\ref{fig:sample}; a short sketch of this\n  transformation is given at the end of this question.\n\n  \\plot{assignment-q-5-b}{\n    Samples of $x \\sim \\frac{1}{2} \\cos(\\frac{x}{2})$. The red line indicates the shape of\n    the sample distribution.\n  }{fig:sample}\n\n  \\subsection{}\n  The rejection method is implemented in \\texttt{/assignment/comply}. The implementation's\n  interface is modeled on the rejection sampling interface provided by Gonum.\n\n  We use the distribution from \\ref{sec:sample} as the proposal distribution, with a value\n  of $c = 1.3$. (This $c$ is chosen to minimize the rejection probability\\footnotemark{}.) The resulting\n  sample distribution can be seen in figure \\ref{fig:reject}.\n\n  \\footnotetext{Notice that $\\frac{4}{\\pi} \\approx 1.3$.}\n\n  \\plot{assignment-q-5-c}{\n    Samples of $x \\sim \\frac{2}{\\pi} \\cos^2(\\frac{x}{2})$. The red line indicates the shape of\n    the sample distribution.\n  }{fig:reject}\n\n  The ratio of the time taken versus \\ref{sec:sample} falls around $1.3$. Knowing that the\n  expected number of samples taken is $\\mathbb{E}(N) = n \\times c$, where $n$ is the number of samples\n  to generate, the expected ratio is $1.3$. Note that the relative time taken varies with every run,\n  and depends on additional factors that lie outside the program itself. Running the binary multiple\n  times shows that the average ratio is $\\approx 1.3$ as expected.\n
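\n  The inverse-CDF transformation of \\ref{sec:sample} is tiny in any language; a hypothetical Python rendering (not the Go implementation):\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\nrng = np.random.default_rng(0)\nx = rng.random(100_000)        # uniform variates on [0, 1]\ny = 2.0 * np.arcsin(x)         # y = C_y^{-1}(x), so y ~ 0.5*cos(y/2)\n\n# check against the analytic CDF: P(Y <= q) = sin(q / 2)\nq = 1.0\nprint((y <= q).mean(), np.sin(q / 2))   # should agree to ~3 decimals\n\\end{lstlisting}\n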
\n\n  \\subsection{LU Decomposition}\n  The testing approaches are the following:\n  \\begin{itemize}\n    \\item $LU = M$, so one can check if $L$ and $U$ are correct by performing a matrix multiplication\n    and comparing with $M$.\n    \\item $M\\vec{x} = \\vec{y}$, so one can similarly check if $\\vec{x}$ is correct by performing\n    a matrix-vector multiplication and comparing the resulting vector with $\\vec{y}$.\n    \\item $M^{-1}M = \\mathbb{I}$, so one can verify any obtained inverse by performing a matrix\n    multiplication and comparing the result to the identity matrix.\n  \\end{itemize}\n\n  \\subsection{Interpolation}\n  Linear interpolation and cubic spline interpolation should reproduce the interpolated function exactly\n  if that function is linear or cubic, respectively. This allows one to compare values directly and verify.\n\n  \\subsection{Convolution}\n  If one attempts a convolution for which the analytic result is known, one can directly compare the\n  obtained values and verify that the convolution is computed correctly.\n\n  \\subsection{Sampling}\n  There are no unit-tests for the sampling routines. However, the resulting distributions\n  are plotted above and compared to the theoretical distributions they are sampled from (see e.g. figure \\ref{fig:reject}).\n\n  This demonstrates that, with regard to the resulting distribution, the samplers work as advertised.\n  There are many more statistics which one could compare to known values, and depending on the sampling\n  routine and the quality of our random numbers, some may show a discrepancy. However, for the purposes of\n  this assignment, generating the correct distribution is sufficient\\footnotemark{}.\n\n  \\footnotetext{Since a random number generator with good statistical properties is used, one might expect\n  that our sampler will have reasonable statistical properties. Seeing that the assignment required the use\n  of a good random number generator and a justification of the choice, one might argue that more statistics are\n  expected to be correct. However, the only statistic relevant to the desired output (histogram) is the\n  distribution, which is correct. 
So I felt that no further tests were necessary.}\n\n\\section{Word Count}\n\\texttt{wc} reports 1526 words for the \\LaTeX{} source file.\n\n\\end{document}\n", "meta": {"hexsha": "7f6befe20d7ae3c589b44959aa15966b6e594383", "size": 11661, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pdf/assignment.tex", "max_stars_repo_name": "dyedgreen/comp-phys", "max_stars_repo_head_hexsha": "325efecd661d3f39112a43dd728b79fb4d71557d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pdf/assignment.tex", "max_issues_repo_name": "dyedgreen/comp-phys", "max_issues_repo_head_hexsha": "325efecd661d3f39112a43dd728b79fb4d71557d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pdf/assignment.tex", "max_forks_repo_name": "dyedgreen/comp-phys", "max_forks_repo_head_hexsha": "325efecd661d3f39112a43dd728b79fb4d71557d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1704545455, "max_line_length": 408, "alphanum_fraction": 0.7152045279, "num_tokens": 3428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599563, "lm_q2_score": 0.8499711737573762, "lm_q1q2_score": 0.599896918010867}}
{"text": "\\setchapterpreamble[u]{\\margintoc}\n\\chapter{Mathematical Background}\n\nThe character of quantum mechanics is to a large extent dependent on the mathematics used to describe it. At the end of the day, all theory needs to be supported by experimental evidence. That is, by actual measurements. We quantify our measurements with numbers to speak in a standardized, precise way. When we ask questions like, ``How long?'' or ``How much?'', we typically use numbers to give an answer.\n\nIn quantum mechanics, we will use increasingly sophisicated mathematical constructs to describe the world around us, including experiments physicists perform. Some of properties of numbers that we take for granted will remain, while others will disappear. Here we begin with a exploration of two particular useful number systems: the \\emph{real} and the \\emph{complex} numbers.\n\n\\section{Real Numbers}\\marginnote{Real numbers. Communitivity, distributivity. (Figures showing geometric equivalence.) \\textbf{Exercises.} Field properties. Example, the real numbers. Binary. Finite fields. $\\mathbf{Z}_2$. $V_4$. \\textbf{Exercises.} Cartesian plane. Absolute value. Distance. Circles. \\textbf{Exercises.}}\n\nWe start of with our old friend, the \\emph{real line}. The real line is an ordinary geometric straight line. We label the points on the line with the set $\\mathbf{R}$ of all \\emph{real numbers}. That is, we identify each point on the line with a unique real number. And  we often speak as if the numbers were actually on a line, and as if the points on the line are actually numbers.\n\n\\begin{figure}[h]\n\\input{figures/introduction/real-line.tex}\n\\caption{The real line}\n\\end{figure}\n\nThe structure of the real line is anything but simple. Do not be fooled. You may wonder what makes the real numbers ``real''? Begin with integers.\n\nThe Greeks restricted themselves to the numbers that could be constructed using a compass and an unmarked straight edge. The began with the so-called \\emph{natural numbers}, usually denoted as a set by $\\mathbf{N}$. The construction went like so [FIGURE]. Because the straight edge had no unit measure on it, the results they produced are \\emph{coordinate-free}; that is, true no matter which coordinates we choose, be it inches, centimeters, or pikas.\n\nThen fractions. Pythagoras' cult discovered the irrationality of square root of two. Are there any holes? Answer to this question can be found in the study of real analysis. For now, the real line will suffice.  
We will review a few properties of the real numbers.\n\n\\begin{marginfigure}\n\\input{figures/introduction/integer-line-construction.tex}\n\\caption{Construction of the natural numbers}\n\\end{marginfigure}\n\n\n\n\\section{Complex Numbers}\n\n\\marginnote{\\textsf{Jordan Chapter 2: Imaginary Numbers}}\n", "meta": {"hexsha": "9c8597e1f768ac0c1cd37a92e3c3b877db65029d", "size": 2768, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/introduction.tex", "max_stars_repo_name": "ironwallaby/qm-companion", "max_stars_repo_head_hexsha": "12981e89a74815ae612221c33593230663cb03a7", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/introduction.tex", "max_issues_repo_name": "ironwallaby/qm-companion", "max_issues_repo_head_hexsha": "12981e89a74815ae612221c33593230663cb03a7", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/introduction.tex", "max_forks_repo_name": "ironwallaby/qm-companion", "max_forks_repo_head_hexsha": "12981e89a74815ae612221c33593230663cb03a7", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.8787878788, "max_line_length": 452, "alphanum_fraction": 0.7850433526, "num_tokens": 639, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.599895713816371}}
{"text": "\\chapter{Complex Numbers}\n\n\\Section{1}{1}{Rational numbers.}\n\nThe idea of a set of numbers is derived in the first instance from the\nconsideration of the set of positive* integral numbers, or positive\nintegers; that is to say, the nnmbL-rs 1. 2, 8, 4, .... Positive\nintegers have many properties, which will be found in treatises on the\nTheory of Integral Numbers; but at a very early stage in the\ndevelopment of mathematics it was found that the operations of\nSubtraction and Division could only be performed among them subject to\ninconvenient restrictions; and consequently, in elementary\nArithmetic, classes of numbers are constructed such that the\noperations of subtraction and division can always bB performed among\nthem.\n\nTo obtaiti a class of numbers among which the operation of subtraction\ncan be performed without restraint wc construct the class of integers,\nwhich consists of the class of positive f integers (+1, +2, +3, ...)\nand of the class of negative integers (-1, -2, -3, ...) and the number\n0.\n\nTo obtain a class of numbers among which the operations both of sub-\ntraction and of division can be performed freely:):, we construct the\nclass of rational numbers. Symbols which denote members of this class\nare |, 3,\n\n0, -Y--\n\nWe have thus introduced three classes of numbers, (i) the signless\nintegers, (ii) the integers, (iii) the rational numbers.\n\nIt is not part of the scheme of this work to discuss the construction\nof the class of integers or the logical foundations of the theory of\nI'ational numbers .\n\nThe extension of the idea of number, which has just been described,\nwas not effected without some opposition from the more conservative\nmathematicians. In the latter half of the eighteenth century, Maseras\n(1731-1824) and Frend (1757-1841) published works on Algebra,\nTrigonometry, etc., in which the use of negative numbers was\ndisallowed, although Descartes had used them imrestrictedly more than\na hundred years before.\n\n* Strictly speakinpr, a more appropriate epithet would be, not\npositive, but signless.\n\nt In the strict sense.\n\n:*: With the exception of division by the rational number 0.\n\n\u00a7 Such a discussion, defining a rational number as an ordered\nnumber-pair of iute ers in a similar manner to that in which a complex\nnumber is defined in \\hardsectionref{1}{3} as an ordere i number-pair of real numbers,\nwill be found in Hobsou's Functions of a Real Variable, \u00a7\u00a7 1-12.\n\n1-2\n\n%\n% 4\n%\n\nA rational number x may be represented to the eye in the following\nmanner :\n\nIf, on a straight line, we take an origin and a fixed segment OPi (Pi\nbeing on the right of 0), we can measure from a length OFx such that\nthe ratio OPx/OPi is equal to x; the point P is taken on the right or\nleft of according as the number x is positive or negative. We may\nregard either the point P or the displacement OP (which Avill be\nwritten OPx) as repre- senting the number x.\n\nAll the rational numbers can thus be represented by points on the\nline, but the converse is not true. For if we measure off on the line\na length OQ equal to the diagonal of a square of which OPi is one\nside, it can be proved that Q does not correspond to any rational\nnumber.\n\nPoints oil the line which do not represent rational numbers may be\nsaid to represent irrational numbers; thus the jjoint Q is said to\nrej resent the irrational number, /2 = l 414213.... 
But while such\nan explanation of the existence of irrational numbers satisfied the\nmathematicians of the eighteenth century and may still be sufficient\nfor those whose interest lies in the applications of mathematics\nrather than in the logical upbuilding of the theory, yet from the\nlogical standpoint it is improper to introduce geometrical intuitions\nto supply deficiencies in arithmetical arguments; and it was shewn by\nDedekind in 1858 that the theory of irrational numbers can be\nestablished on a purely arithmetical basis without any appeal to\ngeometry.\n\n\\Section{1}{2}{Dedekind's* theory of irrational numbers.}\n\nThe geometrical property of points on a line which suggested the\nstarting point of the arithmetical theory of irrationals was that, if\nall points of a line are separated into two classes such that every\npoint of the first class is on the right of every point of the second\nclass, there exists one and only one point at which the line is thus\nsevered.\n\nFollowing up this idea, Dedekind considered rules by which a\nseparation\u2020 or section of all rational numbers into two classes can be\nmade, these classes (which will be called the L-class and the\nR-class, or the left class and the right class) being such that they\npossess the following properties:\n\n(i) At least one member of each class exists.\n\n(ii) Every member of the L-class is less than every member of the\nR-class.\n\nIt is obvious that such a section is made by any rational number $x$;\nand $x$ is either the greatest number of the L-class or the least number\nof the\n\n* The theory, though elaborated in 1858, was not published before the\nappearance of Dedekind's tract, Stetigkeit und irrationale Zahlen,\nBrunswick, 1872. Other theories are due to Weierstrass [see von\nDantscher, Die Weierstrass'sche Theorie der irrationalen Zahlen\n(Leipzig, 1908)] and Cantor, Math. Ann. v. (1872), pp. 123-130.\n\n\u2020 This procedure formed the basis of the treatment of irrational\nnumbers by the Greek mathematicians in the sixth and fifth centuries\nB.C. The advance made by Dedekind consisted in observing that a purely\narithmetical theory could be built up on it.\n\n%\n% 5\n%\n\nR-class. But sections can be made in which no rational number $x$ plays\nthis part. Thus, since there is no rational number* whose square is 2,\nit is easy to see that we may form a section in which the R-class\nconsists of the positive rational numbers whose squares exceed 2, and\nthe L-class consists of all other rational numbers.
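\n\nA minimal sketch of this section in code (illustrative Python, not part of the original text; it uses the auxiliary number $y = x(x^2+6)/(3x^2+2)$ that the next paragraph introduces):\n\n\\begin{verbatim}\nfrom fractions import Fraction\n\ndef in_R_class(r):\n    # R-class: positive rationals whose square exceeds 2\n    return r > 0 and r * r > 2\n\nx = Fraction(3, 2)                   # 3/2 lies in the R-class\ny = x * (x**2 + 6) / (3 * x**2 + 2)  # = 99/70, a smaller member of the R-class\nassert in_R_class(x) and in_R_class(y) and y < x\n\\end{verbatim}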
\n\nThen this section is such that the R-class has no least member and\nthe L-class has no greatest member; for, if $x$ be any positive\nrational fraction, and $y = \\frac{x(x^2+6)}{3x^2+2}$, then\n$y - x = \\frac{2x(2-x^2)}{3x^2+2}$ and $y^2 - 2 = \\frac{(x^2-2)^3}{(3x^2+2)^2}$,\nso $x^2$, $y^2$ and 2 are in order of magnitude; and therefore given any member $x$ of\nthe L-class, we can always find a greater member of the L-class, or\ngiven any member $x'$ of the R-class, we can always find a smaller\nmember of the R-class, such numbers being, for instance, $y$ and $y'$,\nwhere $y'$ is the same function of $x'$ as $y$ of $x$.\n\nIf a section is made in which the R-class has a least member $A_2$, or\nif the L-class has a greatest member $A_1$, the section determines a\nrational-real number, which it is convenient to denote by the same\u2020\nsymbol $A_2$ or $A_1$.\n\nIf a section is made, such that the R-class has no least member and\nthe L-class has no greatest member, the section determines an\nirrational-real number\u2021.\n\nIf $x$, $y$ are real numbers (defined by sections) we say that $x$ is\ngreater than $y$ if the L-class defining $x$ contains at least two\u00a7\nmembers of the R-class defining $y$.\n\nLet $\\alpha$, $\\beta$, ... be real numbers and let $A_1$, $B_1$, ... be any members of\nthe corresponding L-classes while $A_2$, $B_2$, ... are any members of\nthe corresponding R-classes. The classes of which $A_1$, $A_2$, ... are\nrespectively members will be denoted by the symbols $(A_1)$, $(A_2)$, ....\n\nThen the sum (written $\\alpha + \\beta$) of two real numbers $\\alpha$ and $\\beta$ is defined as\nthe real number (rational or irrational) which is determined by the\nL-class $(A_1 + B_1)$ and the R-class $(A_2 + B_2)$.\n\nIt is, of course, necessary to prove that these classes determine a\nsection of the rational numbers. It is evident that $A_1 + B_1 < A_2 + B_2$ and that at least one member of each of the classes $(A_1 + B_1)$,\n$(A_2 + B_2)$ exists. It remains to prove that there is, at most, one rational\n\n* For if $p/q$ be such a number, this fraction being in its lowest\nterms, it may be seen that $(2q-p)/(p-q)$ is another such number, and\n$0 < p-q < q$, so that $p/q$ is not in its lowest terms. The contradiction\nimplies that such a rational number does not exist.\n\n\u2020 This causes no confusion in practice.\n\n\u2021 B. A. W. Russell defines the class of real numbers as actually being\nthe class of all L-classes; the class of real numbers whose L-classes\nhave a greatest member corresponds to the class of rational numbers,\nand though the rational-real number $x$ which corresponds to a rational\nnumber $x$ is conceptually distinct from it, no confusion arises from\ndenoting both by the same symbol.\n\n\u00a7 If the classes had only one member in common, that member might be\nthe greatest member of the L-class of $x$ and the least member of the\nR-class of $y$.\n\n%\n% 6\n%\n\nnumber which is greater than every $A_1 + B_1$ and less than every\n$A_2 + B_2$; suppose, if possible, that there are two, $x$ and $y$ $(y > x)$. Let\n$a_1$ be a member of $(A_1)$ and let $a_2$ be a member of $(A_2)$; and let $N$ be the integer next greater than $(a_2 - a_1)/\\{\\frac{1}{2}(y - x)\\}$. Take the last of\nthe numbers $a_1 + \\frac{m}{N}(a_2 - a_1)$, (where $m = 0, 1, ..., N$), which belongs to\n$(A_1)$ and the first of\nthem which belongs to $(A_2)$; let these two numbers be $c_1$, $c_2$. Then\n\n$c_2 - c_1 = \\frac{1}{N}(a_2 - a_1) < \\frac{1}{2}(y - x)$.\n\nChoose $d_1$, $d_2$ in a similar manner\nfrom the classes defining $\\beta$; then\n\n$c_2 + d_2 - c_1 - d_1 < y - x$. 
But $c_2 + d_2 \\geq y$, $c_1 + d_1 \\leq x$, and therefore\n$c_2 + d_2 - c_1 - d_1 \\geq y - x$; we have therefore arrived at a contradiction by\nsupposing that two rational numbers $x$, $y$ exist belonging neither to\n$(A_1 + B_1)$ nor to $(A_2 + B_2)$.\n\nIf every rational number belongs either to the class $(A_1 + B_1)$ or to\nthe class $(A_2 + B_2)$, then the classes $(A_1 + B_1)$, $(A_2 + B_2)$ define an\nirrational number. If one rational number $x$ exists belonging to\nneither class, then the L-class formed by $x$ and $(A_1 + B_1)$ and the\nR-class $(A_2 + B_2)$ define the rational-real number $x$. In either\ncase, the number defined is called the sum $\\alpha + \\beta$.\n\nThe difference $\\alpha - \\beta$ of two real numbers is defined by the L-class\n$(A_1 - B_2)$ and the R-class $(A_2 - B_1)$.\n\nThe product of two positive real numbers $\\alpha$, $\\beta$ is defined by the\nR-class $(A_2 B_2)$ and the L-class of all other rational numbers.\n\nThe reader will see without difficulty how to define the product of\nnegative real numbers and the quotient of two real numbers; and\nfurther, it may be shewn that real numbers may be combined in\naccordance with the associative, distributive and commutative laws.\n\nThe aggregate of rational-real and irrational-real numbers is called\nthe aggregate of real numbers; for brevity, rational-real numbers and\nirrational-real numbers are called rational and irrational numbers\nrespectively.\n\n\\Section{1}{3}{Complex numbers.}\n\nWe have seen that a real number may be visualised as a displacement\nalong a definite straight line. If, however, $P$ and $Q$ are any two\npoints in a plane, the displacement $PQ$ needs two real numbers for its\nspecification; for instance, the differences of the coordinates of $P$\nand $Q$ referred to fixed rectangular axes. If the coordinates of $P$ be\n$(\\xi, \\eta)$ and those of $Q$ $(\\xi + x, \\eta + y)$, the displacement $PQ$ may be\ndescribed by the symbol $[x, y]$. We are thus led to consider the\nassociation of real numbers in ordered* pairs. 
The natural definition\nof the sum of two displacements $[x, y]$, $[x', y']$ is the displacement\nwhich is the result of the successive applications of the two\ndisplacements; it is therefore convenient to define the sum of two\nnumber-pairs by the equation\n\n$[x, y] + [x', y'] = [x + x', y + y']$.\n\n* The order of the two terms distinguishes the ordered number-pair $[x, y]$ from the ordered number-pair $[y, x]$.\n\n%\n% 7\n%\n\nThe product of a number-pair and a real number $x'$ is then naturally\ndefined by the equation\n\n$x' \\times [x, y] = [x'x, x'y]$.\n\nWe are at liberty to define the product of two number-pairs in any\nconvenient manner; but the only definition, which does not give rise\nto results that are merely trivial, is that symbolised by the equation\n\n$[x, y] \\times [x', y'] = [xx' - yy', xy' + x'y]$.\n\nIt is then evident that\n\n$[x, 0] \\times [x', y'] = [xx', xy'] = x \\times [x', y']$\n\nand $[0, y] \\times [x', y'] = [-yy', x'y] = y \\times [-y', x']$.\n\nThe geometrical interpretation of these results is that the effect of\nmultiplying by the displacement $[x, 0]$ is the same as that of\nmultiplying by the real number $x$; but the effect of multiplying a\ndisplacement by $[0, y]$ is to multiply it by a real number $y$ and turn\nit through a right angle.\n\nIt is convenient to denote the number-pair $[x, y]$ by the compound\nsymbol $x + iy$; and a number-pair is now conveniently called (after\nGauss) a complex number; in the fundamental operations of Arithmetic,\nthe complex number $x + i0$ may be replaced by the real number $x$ and,\ndefining $i$ to mean $0 + i1$, we have $i^2 = [0, 1] \\times [0, 1] = [-1, 0]$; and\nso $i^2$ may be replaced by $-1$.\n\nThe reader will easily convince himself that the definitions of\naddition and multiplication of number-pairs have been so framed that\nwe may perform the ordinary operations of algebra with complex numbers\nin exactly the same way as with real numbers, treating the symbol $i$ as\na number and replacing the product $ii$ by $-1$ wherever it occurs.\n\nThus he will verify that, if $a$, $b$, $c$ are complex numbers, we have\n\n$a + b = b + a$,\n\n$ab = ba$,\n\n$(a + b) + c = a + (b + c)$,\n\n$ab.c = a.bc$,\n\n$a(b + c) = ab + ac$,\n\nand if $ab$ is zero, then either $a$ or $b$ is zero.\n\nIt is found that algebraical operations, direct or inverse, when\napplied to complex numbers, do not suggest numbers of any fresh type;\nthe complex number will therefore for our purposes be taken as the\nmost general type of number.\n\nThe introduction of the complex number has led to many important\ndevelopments in mathematics. Functions which, when real variables only\nare considered, appear as essentially distinct, are seen to be\nconnected when complex variables are introduced:\n\n%\n% 8\n%\n\nthus the circular functions are found to be expressible in terms of\nexponential functions of a complex argument, by the equations\n\nTODO\n\nAgain, many of the most important theorems of modern analysis are not\ntrue if the numbers concerned are restricted to be real; thus, the\ntheorem that every algebraic equation of degree $n$ has $n$ roots is true\nin general only when regarded as a theorem concerning complex numbers.\n\nHamilton's quaternions furnish an example of a still further\nextension of the idea of number. A quaternion\n\n$w + xi + yj + zk$\n\nis formed from four real numbers $w$, $x$, $y$, $z$, and four number-units\n$1$, $i$, $j$, $k$, in the same way that the ordinary complex number $x + iy$\nmight be regarded as being formed from two real numbers $x$, $y$, and two\nnumber-units $1$, $i$. Quaternions however do not obey the commutative law\nof multiplication.
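\n\nThe number-pair arithmetic just defined is easily made concrete (an illustrative Python sketch, not part of the original text):\n\n\\begin{verbatim}\nclass Pair:\n    def __init__(self, x, y):\n        self.x, self.y = x, y\n    def __add__(self, o):   # [x, y] + [x', y'] = [x + x', y + y']\n        return Pair(self.x + o.x, self.y + o.y)\n    def __mul__(self, o):   # [x, y][x', y'] = [xx' - yy', xy' + x'y]\n        return Pair(self.x * o.x - self.y * o.y,\n                    self.x * o.y + self.y * o.x)\n    def __repr__(self):\n        return '[{}, {}]'.format(self.x, self.y)\n\ni = Pair(0, 1)\nprint(i * i)   # [-1, 0]: the pair playing the role of i squares to -1\n\\end{verbatim}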
\n\n\\Section{1}{4}{The modulus of a complex number.}\n\nLet $x + iy$ be a complex number, $x$ and $y$ being real numbers. Then the\npositive square root of $x^2 + y^2$ is called the modulus of $(x + iy)$, and\nis written\n\n$|x + iy|$.\n\nLet us consider the complex number which is the sum of two given\ncomplex numbers, $x + iy$ and $u + iv$. We have\n\n$(x + iy) + (u + iv) = (x + u) + i(y + v)$.\n\nThe modulus of the sum of the two numbers is therefore\n\n$\\{(x + u)^2 + (y + v)^2\\}^{1/2}$,\n\nor $\\{(x^2 + y^2) + (u^2 + v^2) + 2(xu + yv)\\}^{1/2}$.\n\nBut\n\n$\\{|x + iy| + |u + iv|\\}^2 = \\{(x^2 + y^2)^{1/2} + (u^2 + v^2)^{1/2}\\}^2$\n\n$= (x^2 + y^2) + (u^2 + v^2) + 2(x^2 + y^2)^{1/2}(u^2 + v^2)^{1/2}$\n\n$= (x^2 + y^2) + (u^2 + v^2) + 2\\{(xu + yv)^2 + (xv - yu)^2\\}^{1/2}$,\n\nand this latter\nexpression is greater than (or at least equal to) $(x^2 + y^2) + (u^2 + v^2) + 2(xu + yv)$. We have therefore\n\n$|x + iy + u + iv| \\leq |x + iy| + |u + iv|$,\n\ni.e. the modulus of the sum of two complex numbers cannot be greater\nthan the sum of their moduli; and it follows by induction that the\nmodulus of the sum of any number of complex numbers cannot be greater\nthan the sum of their moduli.\n\n%\n% 9\n%\n\nLet us consider next the complex number which is the product of two\ngiven complex numbers, $x + iy$ and $u + iv$; we have\n\n$(x + iy)(u + iv) = (xu - yv) + i(xv + yu)$,\n\nand therefore\n\n$|(x + iy)(u + iv)| = \\{(xu - yv)^2 + (xv + yu)^2\\}^{1/2} = \\{(x^2 + y^2)(u^2 + v^2)\\}^{1/2} = |x + iy| \\, |u + iv|$.\n\nThe modulus of the product of two complex numbers (and hence, by in-\nduction, of any number of complex numbers) is therefore equal to the\nproduct of their moduli.\n\n\\Section{1}{5}{The Argand diagram.}\n\nWe have seen that complex numbers may be represented in a geometrical\ndiagram by taking rectangular axes $Ox$, $Oy$ in a plane. Then a point $P$\nwhose coordinates referred to these axes are $(x, y)$ may be regarded as\nrepresenting the complex number $x + iy$. In this way, to every point of\nthe plane there corresponds some one complex number; and, conversely,\nto every possible complex number there corresponds one, and only one,\npoint of the plane.\n\nThe complex number $x + iy$ may be denoted by a single letter* $z$. The\npoint $P$ is then called the representative point of the number $z$; we\nshall also speak of the number $z$ as being the affix of the point $P$.\n\nIf we denote $(x^2 + y^2)^{1/2}$ by $r$ and choose $\\theta$ so that\n$r \\cos \\theta = x$, $r \\sin \\theta = y$, then $r$ and $\\theta$ are clearly the radius vector and vectorial angle of\nthe point $P$, referred to the origin $O$ and axis $Ox$.\n\nThe representation of complex numbers thus afforded is often called\nthe Argand diagram\u2020.\n\nBy the definition already given, it is evident that $r$ is the modulus\nof $z$. The angle $\\theta$ is called the argument, or amplitude, or phase, of\n$z$. We write $\\theta = \\arg z$.\n\nFrom geometrical considerations, it appears that (although the modulus\nof a complex number is unique) the argument is not unique\u2021; if $\\theta$ be a\nvalue of the argument, the other values of the argument are comprised\nin the expression $2n\\pi + \\theta$ where $n$ is any integer, not zero. The\nprincipal value of $\\arg z$ is that which satisfies the inequality TODO.\n\n* It is convenient to call $x$ and $y$ the real and imaginary parts of $z$\nrespectively. We frequently write $x = R(z)$, $y = I(z)$.\n\n\u2020 J. R. Argand published it in 1806; it had however previously been\nused by Gauss, and by Caspar Wessel, who discussed it in a memoir\npresented to the Danish Academy in 1797 and published by that Society\nin 1798-9.\n\n\u2021 See the Appendix, \u00a7 A.521.\n\n%\n% 10\n%\n\nIf $P_1$ and $P_2$ are the representative points corresponding to values $z_1$\nand $z_2$ respectively of $z$, then the point which represents the value\n$z_1 + z_2$ 
is clearly the terminus of a line drawn from $P_1$, equal and\nparallel to that which joins the origin to $P_2$.\n\nTo find the point which represents the complex number $z_1 z_2$, where $z_1$\nand $z_2$ are two given complex numbers, we notice that if\n\n$z_1 = r_1 (\\cos \\theta_1 + i \\sin \\theta_1)$,\n\n$z_2 = r_2 (\\cos \\theta_2 + i \\sin \\theta_2)$,\n\nthen, by multiplication,\n\n$z_1 z_2 = r_1 r_2 [\\cos (\\theta_1 + \\theta_2) + i \\sin (\\theta_1 + \\theta_2)]$.\n\nThe point which\nrepresents the number $z_1 z_2$ has therefore a radius vector measured by\nthe product of the radii vectores of $P_1$ and $P_2$, and a vectorial angle\nequal to the sum of the vectorial angles of $P_1$ and $P_2$.\n\nREFERENCES.\n\nThe logical foundations of the theory of number.\n\nA. N. Whitehead and B. A. W. Russell, Principia Mathematica (1910-1913).\n\nB. A. W. Russell, Introduction to Mathematical Philosophy (1919).\n\nOn irrational numbers.\n\nR. Dedekind, Stetigkeit und irrationale Zahlen. (Brunswick, 1872.)\n\nV. von Dantscher, Vorlesungen ueber die Weierstrass'sche Theorie der irrationalen\nZahlen. (Leipzig, 1908.)\n\nE. W. Hobson, Functions of a Real Variable (1907), Ch. I.\n\nT. J. I'A. Bromwich, Theory of Infinite Series (1908), Appendix I.\n\nOn complex numbers.\n\nH. Hankel, Theorie der complexen Zahlen-systeme. (Leipzig, 1867.)\n\nO. Stolz, Vorlesungen ueber allgemeine Arithmetik, II. (Leipzig, 1886.)\n\nG. H. Hardy, A Course of Pure Mathematics (1914), Ch. III.\n\nMiscellaneous Examples.\n\n1. Shew that the representative points of the complex numbers $1 + 4i$, $2 + 7i$, $3 + 10i$ are collinear.\n\n2. Shew that a parabola can be drawn to pass through the\nrepresentative points of the complex numbers\n\n$2 + i$, $4 + 4i$, $6 + 9i$, $8 + 16i$, $10 + 25i$.\n\n3. Determine the $n$th roots of unity by aid of the Argand diagram;\nand shew that the number of primitive roots (roots the powers of each\nof which give all the roots) is the number of integers (including\nunity) less than $n$ and prime to it.\n\nProve that if $\\theta_1$, $\\theta_2$, $\\theta_3$, ... be the arguments of the primitive roots,\n$\\Sigma \\cos p\\theta = 0$ when $p$ is a positive integer less than $\\frac{n}{abc \\cdots k}$, where $a$, $b$, $c$, ..., $k$ are the different constituent\nprimes of $n$; and that, when $p = \\frac{n}{abc \\cdots k}$, $\\Sigma \\cos p\\theta = \\frac{(-1)^{\\mu} \\, n}{abc \\cdots k}$, where $\\mu$ is the number of\nthe constituent primes. \\addexamplecitation{Math. Trip. 1895}\n", "meta": {"hexsha": "465d8518aa77ef148326f60d9699872207b65312", "size": 19801, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/wandw-ch01.tex", "max_stars_repo_name": "CdLbB/Whittaker-and-Watson", "max_stars_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/wandw-ch01.tex", "max_issues_repo_name": "CdLbB/Whittaker-and-Watson", "max_issues_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/wandw-ch01.tex", "max_forks_repo_name": "CdLbB/Whittaker-and-Watson", "max_forks_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4485436893, "max_line_length": 86, "alphanum_fraction": 0.7412756931, "num_tokens": 5363, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.779992900254107, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5998771287692349}}
{"text": "\nGumbel statistics are often used to estimate the statistical\nsignificance of local alignment scores.\n\nThe Gumbel distribution is the so-called Type I extreme value\ndistribution (EVD). It occurs so frequently in sequence analysis\napplications, compared to the type II (Fr\\'{e}chet) and type III\n(Weibull) extreme value distributions, that ``Gumbel'' and ``EVD'' are\noften used interchangeably in bioinformatics. Easel has a separate\nmodule, the \\eslmod{gev} module, that implements the generalized\nextreme value distribution.\n\nKarlin/Altschul statistics are a special case of the Gumbel\ndistribution that apply to the scores of ungapped local alignments\nbetween infinitely long random sequences. Empirically, Karlin/Altschul\nstatistics also apply reasonably well to the more useful case of\ngapped alignment of finite-length sequences. Karlin/Altschul\nstatistics predict how the Gumbel's two parameters depend on the\nlength of the query and target sequences. In the case of ungapped\nalignments, Karlin/Altschul statistics allow the Gumbel parameters to\nbe estimated directly, without the need for a compute-intensive\nsimulation.\n\n\\subsection{The gumbel API}\n\nThe \\eslmod{gumbel} API consists of the following functions:\n\n\\vspace{0.5em}\n\\begin{center}\n\\begin{tabular}{ll}\\hline\n    \\multicolumn{2}{c}{\\textbf{evaluating densities and distributions:}}\\\\\n\\ccode{esl\\_gumbel\\_pdf()}     & Returns the probability density, $P(S=x)$.\\\\\n\\ccode{esl\\_gumbel\\_logpdf()}  & Returns the log of the pdf, $\\log P(S=x)$.\\\\\n\\ccode{esl\\_gumbel\\_cdf()}     & Returns the cumulative probability distribution, $P(S \\leq x)$.\\\\\n\\ccode{esl\\_gumbel\\_logcdf()}  & Returns the log of the cdf, $\\log P(S \\leq x)$.\\\\\n\\ccode{esl\\_gumbel\\_surv()}    & Returns right tail mass, 1-cdf, $P(S > x)$\\\\\n\\ccode{esl\\_gumbel\\_logsurv()} & Returns log of 1-cdf, $\\log P(S > x)$.\\\\\n    \\multicolumn{2}{c}{\\textbf{sampling:}}\\\\\n\\ccode{esl\\_gumbel\\_Sample()}  & Returns a Gumbel-distributed random sample.\\\\\n    \\multicolumn{2}{c}{\\textbf{maximum a posteriori parameter fitting:}}\\\\\n\\ccode{esl\\_gumbel\\_FitComplete()} & Estimates $\\mu,\\lambda$ from complete data.\\\\\n\\ccode{esl\\_gumbel\\_FitCompleteLoc()} & Estimates $\\mu$ when $\\lambda$ is known.\\\\\n\\ccode{esl\\_gumbel\\_FitCensored()} & Estimates $\\mu,\\lambda$ from censored data.\\\\\n\\ccode{esl\\_gumbel\\_FitCensoredLoc()} & Estimates $\\mu$ when $\\lambda$ is known.\\\\\n\\ccode{esl\\_gumbel\\_FitTruncated()}& Estimates $\\mu,\\lambda$ from truncated data.\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{0.5em}\n\nThe Gumbel distribution depends on two parameters, $\\mu$ and\n$\\lambda$. When $\\mu$ and $\\lambda$ are known, the statistical\nsignificance (P-value) of a single score $x$ is $P(S>x)$, obtained by\na call to \\ccode{esl\\_gumbel\\_surv()}.  The E-value for obtaining that\nscore or better in searching a database of $N$ sequences is just\n$NP(S>x)$.\n\nWhen $\\mu$ and $\\lambda$ are unknown, they are estimated from scores\nobtained from comparisons of simulated random data. (Analytical\nsolutions for $\\mu$ and $\\lambda$ are only available in the case of\nungapped sequence alignments.)  The \\ccode{esl\\_evd\\_Fit*()} functions\nprovide maximum likelihood parameter fitting routines for different\ntypes of data. 
\n\n\\subsubsection{Augmentations: random, minimizer}\n\nThe \\ccode{esl\\_gumbel\\_Sample()} function requires augmenting with the\n\\eslmod{random} module.\n\nThe \\ccode{esl\\_gumbel\\_FitTruncated()} function requires augmenting\nwith the \\eslmod{minimizer} module.\n\n\n\\subsection{Example of using the gumbel API}\n\nAn example that samples 10,000 data points from a Gumbel distribution\nwith $\\mu=-20$, $\\lambda=0.4$; reports the min and max samples, and\nthe probability mass to the left of the min and to the right of the\nmax (both of which should be about $\\frac{1}{10000}$, since we took\n10,000 samples); and then fits those simulated data to a Gumbel and\nreports the fitted $\\mu$ and $\\lambda$:\n\n\\input{cexcerpts/gumbel_example}\n\n\n\n\\subsection{Gumbel densities}\n\nThe probability density function (pdf) and the cumulative distribution\nfunction (cdf) of the extreme value distribution are:\n\n\\begin{equation}\nP(x) = \\lambda \\exp \\left[ -\\lambda (x - \\mu) - e^{- \\lambda (x - \\mu)} \\right]\n\\label{eqn:gumbel_density}\n\\end{equation}\n\n\\begin{equation}\nP(S < x) = \\exp \\left[ -e^{-\\lambda(x - \\mu)} \\right]\n\\label{eqn:gumbel_distribution}\n\\end{equation}\n\nThe extreme value density and distribution functions for $\\mu = 0$ and\n$\\lambda = 1.0$ are shown below.\n\n\\begin{center}\n\\includegraphics[width=3in]{figures/evd_basic}\n\\end{center}\n\nThe $\\mu$ and $\\lambda$ parameters are {\\em location} and {\\em scale}\nparameters, respectively:\n\n\\centerline{\n\\begin{minipage}{3in}\n\\includegraphics[width=2.8in]{figures/evd_location}\n\\end{minipage}\n\\begin{minipage}{3in}\n\\includegraphics[width=2.8in]{figures/evd_scale}\n\\end{minipage}\n}\n\nFor more details, a classic reference is \\citep{Lawless82}.  Gumbel\ndistributions can have their long tail to the right or to the\nleft. The form given here is for the long tail to the right.  This is\nthe form that arises when the extreme value is a maximum, such as when\nour score is the maximum over the individual scores of all possible\nalignments. The equations in \\citep{Lawless82} are for extremal\nminima; use $(x - u) = -(x - \\mu)$ and $b = 1 / \\lambda$ to convert\nLawless' notation to the notation used here.\n\n\n\\subsection{Fitting Gumbel distributions to observed data}\n\nGiven a set of $n$ observed samples $\\mathbf{x}$, we may want to\nestimate the $\\mu$ and $\\lambda$ parameters.\n\nOne might try to use linear regression to fit to a $\\log \\log$\ntransformation of the $P(S < x)$ histogram, which gives a straight\nline with slope $-\\lambda$ and $x$ intercept $\\mu$:\n\n\\begin{equation}\n\\log \\left[ -\\log P(S<x) \\right] = -\\lambda x + \\lambda \\mu\n\\end{equation}\n\nHowever, the linear regression method is undesirable because it is\nsensitive to outliers. 
The following table shows the \\% error for\nestimating $\\hat{\\mu}$ and $\\hat{\\lambda}$ from 500 simulated complete\ndatasets, sampled from a Gumbel with $\\mu = -20.0$ and $\\lambda =\n0.4$, for four different dataset sizes:\n\n\\begin{center}\n\\begin{tabular}{lrrrr} \\hline\n                              & \\multicolumn{4}{c}{\\# of samples}\\\\\n                              & 100 & 1000  & 10,000 & 100,000 \\\\\n\\% error in $\\hat{\\mu}$       &  2\\%&   1\\% & 0.9\\%  &  0.9\\%  \\\\\nmax error in $\\hat{\\mu}$      & 24\\%&  13\\% &  10\\%  &   10\\%  \\\\\n\\% error in $\\hat{\\lambda}$   & 12\\%&   7\\% &   5\\%  &    3\\%  \\\\\nmax error in $\\hat{\\lambda}$  & 49\\%&  33\\% &  25\\%  &   20\\%  \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\n\nA better rough estimate of $\\hat{\\mu}$ and $\\hat{\\lambda}$ can be\nobtained from the sample mean $m$ and variance $s^2$ of the observed\ndata \\citep{Evans00}:\\footnote{All simulation data are generated by\nthe \\eslmod{gumbel} module's stats driver. The only exception is the\nlinear regression fit data, which come from an old version of HMMER.}\n\n\\begin{eqnarray*}\n  \\hat{\\lambda} & = & \\frac{\\pi}{\\sqrt{6s^2}}\\\\\n  \\hat{\\mu}     & = & m - \\frac{0.57722}{\\hat{\\lambda}}\n\\end{eqnarray*}\n\nThe mean/variance method is more accurate than linear regression, as\nshown by the following simulation results:\n\n\\begin{center}\n\\begin{tabular}{lrrrr} \\hline\n                              & \\multicolumn{4}{c}{\\# of samples}\\\\\n                              & 100 & 1000  & 10,000 & 100,000 \\\\\n\\% error in $\\hat{\\mu}$       &  1\\%& 0.3\\% &  0.1\\% & 0.03\\%  \\\\\nmax error in $\\hat{\\mu}$      &  5\\%&   1\\% &  0.4\\% &  0.1\\%  \\\\\n\\% error in $\\hat{\\lambda}$   &  9\\%&   3\\% &  0.8\\% &  0.3\\%  \\\\\nmax error in $\\hat{\\lambda}$  & 40\\%&  12\\% &    3\\% &  0.9\\%  \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\nStill, the mean/variance method is not as accurate as a maximum\nlikelihood estimation (especially for $\\lambda$). Also, it requires\ncomplete data, whereas we also need to solve problems where we fit to\n\\emph{truncated} or \\emph{censored} data. Easel's main estimation\nmethods are therefore maximum likelihood methods.
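\n\nAs a concrete rendering of the mean/variance estimator above (an illustrative Python sketch, not Easel code):\n\n\\begin{verbatim}\nimport math, random, statistics\n\ndef fit_moments(data):\n    m, s2 = statistics.fmean(data), statistics.variance(data)\n    lam = math.pi / math.sqrt(6.0 * s2)\n    mu = m - 0.57722 / lam      # 0.57722: Euler-Mascheroni constant\n    return mu, lam\n\n# check against samples drawn by inversion of the Gumbel CDF\nmu0, lam0 = -20.0, 0.4\nxs = [mu0 - math.log(-math.log(random.random())) / lam0\n      for _ in range(10000)]\nprint(fit_moments(xs))          # close to (-20.0, 0.4)\n\\end{verbatim}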
\n\n\\subsubsection{Maximum likelihood estimation, complete data}\n\nGiven $n$ samples $x_1..x_n$ from some distribution that depends on\nparameters $\\theta$, we want maximum likelihood parameter\nestimates $\\hat{\\theta}$ that maximize the log likelihood:\n\n\\[\n   \\hat{\\theta} = \\argmax_{\\theta} \\sum_{i=1}^{n} \\log P(x_i \\mid \\theta)\n\\]\n\nThese are also \\emph{maximum a posteriori} parameter estimates, if we\nassume a uniform prior $P(\\theta)$.\n\nSpecifically, for samples $x_i$ drawn from an extreme value\ndistribution, the log likelihood to optimize is:\n\n\\begin{equation}\n\\log L(\\lambda, \\mu) = n \\log \\lambda - \\sum_{i=1}^{n} \\lambda(x_i -\n\\mu) - \\sum_{i=1}^{n} e^{-\\lambda(x_i - \\mu)}\n\\label{eqn:gev_logL}\n\\end{equation}\n\nThis objective function is differentiable with respect to $\\mu$ and\n$\\lambda$:\n\n\\begin{eqnarray}\n\\frac{\\partial \\log L}{\\partial \\mu} & = &\nn \\lambda - \\lambda \\sum_{i=1}^{n} e^{-\\lambda (x_i - \\mu)}\n\\label{eqn:mupartial}\\\\\n\\frac{\\partial \\log L}{\\partial \\lambda} & = &\n\\frac{n}{\\lambda} - \\sum_{i=1}^{n} (x_i - \\mu) +\n\\sum_{i=1}^{n} (x_i - \\mu) e^{-\\lambda (x_i - \\mu)}\n\\label{eqn:lambdapartial}\n\\end{eqnarray}\n\nThe maximum likelihood estimates $\\hat{\\lambda}$ and $\\hat{\\mu}$ are\nthe solutions to $\\frac{\\partial \\log L}{\\partial \\mu} = 0$ and\n$\\frac{\\partial \\log L}{\\partial \\lambda} = 0$. Lawless\n\\citep{Lawless82} gives a useful trick here that lets us solve both of\nthese simultaneously. When (\\ref{eqn:mupartial}) is set to zero, it\ncan be used to get $\\hat{\\mu}$ in terms of $\\hat{\\lambda}$:\n\n\\begin{eqnarray}\ne^{-\\hat{\\lambda} \\hat{\\mu}} & = & \\frac{1}{n} \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i} \n\\label{eqn:substitute}\\\\\n\\hat{\\mu} & = & - \\frac{1}{\\hat{\\lambda}} \n\t\\log \\left[ \\frac{1}{n} \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i} \\right]\n\\label{eqn:solvemu}\n\\end{eqnarray}\n\nSubstituting (\\ref{eqn:substitute}) into (\\ref{eqn:lambdapartial})\ngives us an equation for solving $\\hat{\\lambda}$ in terms of the\n$x_i$'s:\n\n\\begin{eqnarray}\n\\frac{1}{\\hat{\\lambda}} - \\frac{1}{n} \\sum_{i=1}^{n} x_i +\n\\frac{\\sum_{i=1}^{n} x_i e^{-\\hat{\\lambda} x_i}}\n     {\\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}} \n& = & 0\n\\label{eqn:newtontarget}\n\\end{eqnarray}\n\nThis is our target function. We could solve it readily enough (by\nbisection search, for example) and obtain $\\hat{\\lambda}$. We can\nsolve it even faster using the Newton/Raphson algorithm, because it is\ndifferentiable with respect to $\\lambda$:\n\n\\begin{equation}\n\\frac{d f}{d\\hat{\\lambda}} = \n\\frac{\\left( \\sum_{i=1}^{n} x_i e^{-\\hat{\\lambda} x_i} \\right)^2 } \n     {\\left( \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}     \\right)^2 }\n-\n\\frac{\\sum_{i=1}^{n} x_i^2 e^{-\\hat{\\lambda} x_i}}\n     {\\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}}\n-\n\\frac{1}{\\hat{\\lambda}^2}\n\\label{eqn:newtonderivative}\n\\end{equation}\n\nNow, the key equations are (\\ref{eqn:solvemu}),\n(\\ref{eqn:newtontarget}), and (\\ref{eqn:newtonderivative}). In\nsummary, the inference procedure is the following:\n\n\\begin{itemize}\n\\item Guess an initial $\\hat{\\lambda}$ (using the mean/variance\n  method, for example, but any reasonable guess works).\n\\item Use Newton/Raphson iterations to find the $\\hat{\\lambda}$ that satisfies\n      (\\ref{eqn:newtontarget}):\n\t\\begin{itemize}\n\t\\item calculate the target function $f$ and \n         its first derivative $f'$ at $\\hat{\\lambda}$, using \n\t(\\ref{eqn:newtontarget}) to calculate $f$ and \n\t(\\ref{eqn:newtonderivative}) to calculate $f'$.\n\t\\item If $f$ is within some absolute tolerance of zero \n\t(e.g., $10^{-6}$), stop; we have found $\\hat{\\lambda}$.\n\t\\item Else, estimate a new $\\hat{\\lambda} = \\hat{\\lambda} - \\frac{f}{f'}$,\n\t  and do another iteration.\n\t\\end{itemize}\n\\item Plug $\\hat{\\lambda}$ into (\\ref{eqn:solvemu}) to get $\\hat{\\mu}$.\n\\end{itemize}
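\n\nA compact sketch of this procedure (illustrative Python; the C implementation is described next):\n\n\\begin{verbatim}\nimport math, statistics\n\ndef fit_ml(xs, tol=1e-6, maxit=100):\n    lam = math.pi / math.sqrt(6.0 * statistics.variance(xs))  # initial guess\n    n, mean = len(xs), statistics.fmean(xs)\n    for _ in range(maxit):\n        ew = [math.exp(-lam * x) for x in xs]\n        s0 = sum(ew)\n        s1 = sum(x * e for x, e in zip(xs, ew))\n        s2 = sum(x * x * e for x, e in zip(xs, ew))\n        f = 1.0 / lam - mean + s1 / s0           # the target function\n        if abs(f) < tol:\n            break                                # lambda-hat found\n        df = (s1 / s0) ** 2 - s2 / s0 - 1.0 / lam ** 2\n        lam = lam - f / df                       # Newton/Raphson step\n    mu = -math.log(s0 / n) / lam                 # plug into the mu equation\n    return mu, lam\n\\end{verbatim}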
\n\nThis algorithm is implemented in \\ccode{esl\\_gumbel\\_FitComplete()}.  An\nauxiliary function, \\ccode{lawless416()}, calculates the target\nfunction and its derivative (equations (\\ref{eqn:newtontarget}) and\n(\\ref{eqn:newtonderivative})) given the current estimate of\n$\\hat{\\lambda}$.  The name comes from Lawless' equation 4.1.6, the\ntarget function \\citep{Lawless82}.\n\nThe accuracy of fitting to simulated data (generated with $\\mu=-20$\nand $\\lambda=0.4$), collated over 500 simulations, is shown in the\nfollowing table:\n\n\\begin{center}\n\\begin{tabular}{lrrrr} \\hline\n                              & \\multicolumn{4}{c}{\\# of samples}\\\\\n                              & 100 & 1000  & 10,000 & 100,000 \\\\\n\\% error in $\\hat{\\mu}$       &  1\\%& 0.3\\% &  0.1\\% & 0.03\\%  \\\\\nmax error in $\\hat{\\mu}$      &  4\\%&   2\\% &  0.5\\% &  0.1\\%  \\\\\n\\% error in $\\hat{\\lambda}$   &  6\\%&   2\\% &  0.6\\% &  0.2\\%  \\\\\nmax error in $\\hat{\\lambda}$  & 36\\%&   9\\% &    2\\% &  0.8\\%  \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\nThis is in accord with theoretical expectation. The distribution of\n$\\frac{\\lambda}{\\hat{\\lambda}}$ is approximately normal with mean 1 and\nstandard error $\\frac{0.78}{\\sqrt{N}}$ \\citep{Lawless82,Altschul01}. \n\n% Altschul says \\frac{\\hat{\\lambda}}{\\lambda}, actually, but I believe\n% that's wrong. xref J1/46.\n\n\n\\subsubsection{Maximum likelihood fitting to censored data}\n\nA \\emph{censored} data problem is when we have $N$ samples, but we\nonly observe the values of a subset of $n$ samples $x_1..x_n$ that are\ngreater than or equal to some cutoff $\\phi$. The remaining $z = N-n$\nsamples are \\emph{censored}, and for these we only know that $x <\n\\phi$.  $x_1..x_n$, $n$, $\\phi$, and $z$ are all known in a censored\ndata problem.\n\nTo estimate maximum likelihood parameters $\\hat{\\theta}$ for some\ndistribution from censored data \\citep{Gelman95}, the log likelihood\nto maximize is:\n\n\n\\[ \n  \\hat{\\theta} = \\argmax_{\\theta} z \\log P(x<\\phi \\mid \\theta)\n                         + \\sum_{i=1}^n \\log P(x_i \\mid \\theta)\n\\]\n\nSpecifically, when fitting a Gumbel distribution, the log likelihood\nto optimize is:\n\n\\begin{equation}\n  \\log L(\\lambda, \\mu) = \n    n \\log \\lambda \n     - z e^{-\\lambda(\\phi - \\mu)}\n     - \\sum_{i=1}^{n} \\lambda(x_i - \\mu) \n     - \\sum_{i=1}^{n} e^{-\\lambda(x_i - \\mu)}\n\\label{eqn:censor_logL}\n\\end{equation}\n\nTo optimize this, we follow a similar procedure as used for complete\ndata \\citep{Lawless82}. 
The log likelihood is differentiable with\nrespect to $\\lambda$ and $\\mu$:\n\n\\begin{eqnarray}\n\\frac{\\partial \\log L}{\\partial \\mu} & = &\nn \\lambda  \n- z \\lambda e^{-\\lambda (\\phi - \\mu)}\n- \\lambda \\sum_{i=1}^{n} e^{-\\lambda (x_i - \\mu)}\n\\label{eqn:censor_dmu}\n\\\\%\n\\frac{\\partial \\log L}{\\partial \\lambda} & = &\n\\frac{n}{\\lambda} \n+ z (\\phi - \\mu) e^{-\\lambda (\\phi - \\mu)}\n- \\sum_{i=1}^{n} (x_i - \\mu) \n+ \\sum_{i=1}^{n} (x_i - \\mu) e^{-\\lambda (x_i - \\mu)}\n\\label{eqn:censor_dlambda}\n\\end{eqnarray}\n\nSetting (\\ref{eqn:censor_dmu}) to zero and solving for $\\hat{\\mu}$ in\nterms of $\\hat{\\lambda}$ gives:\n\n\\begin{equation}\n\\hat{\\mu}  =  - \\frac{1}{\\hat{\\lambda}} \n\t\\log \\left[ \\frac{1}{n} \n\t\\left( z e^{-\\hat{\\lambda} \\phi} \n               + \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i} \\right)\n\t\\right]\n\\label{eqn:censor_solvemu}\n\\end{equation}\n\nSubstituting (\\ref{eqn:censor_solvemu}) into\n(\\ref{eqn:censor_dlambda}) gives the target equation:\n\n\\begin{equation}\n\\frac{1}{\\hat{\\lambda}} \n- \\frac{1}{n} \\sum_{i=1}^{n} x_i +\n\\frac{z \\phi e^{-\\hat{\\lambda} \\phi} + \\sum_{i=1}^{n} x_i e^{-\\hat{\\lambda} x_i}} \n     {z e^{-\\hat{\\lambda} \\phi} + \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}} \n =  0\n\\label{eqn:censor_newtontarget}\n\\end{equation}\n\nTo use Newton-Raphson root finding (instead of a slower bisection\nsearch) we also need the first derivative of this target equation with\nrespect to $\\lambda$:\n\n\\begin{equation}\n\\frac{d f}{d\\hat{\\lambda}} = \n\\frac{\\left( \n        z \\phi e^{-\\hat{\\lambda} \\phi}\n        + \\sum_{i=1}^{n} x_i e^{-\\hat{\\lambda} x_i} \n       \\right)^2 } \n     {\\left( \n        z e^{-\\hat{\\lambda} \\phi}\n        + \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}     \n       \\right)^2 }\n-\n\\frac{z \\phi^2 e^{-\\hat{\\lambda} \\phi} + \\sum_{i=1}^{n} x_i^2 e^{-\\hat{\\lambda} x_i}}\n     {z  e^{-\\hat{\\lambda} \\phi} + \\sum_{i=1}^{n} e^{-\\hat{\\lambda} x_i}}\n-\n\\frac{1}{\\hat{\\lambda}^2}\n\\label{eqn:censor_newtonderiv}\n\\end{equation}\n\nIn summary: given $n$ observed samples $x_1..x_n$ from a total sample\nof $N$ samples, $z = N-n$ of which were censored because they have\nvalues $< \\phi$, we solve for maximum likelihood estimates\n$\\hat{\\lambda}$ and $\\hat{\\mu}$ using the same procedure we used for\ncomplete data, by using equations (\\ref{eqn:censor_solvemu}),\n(\\ref{eqn:censor_newtontarget}), and (\\ref{eqn:censor_newtonderiv}) in\nplace of equations (\\ref{eqn:solvemu}), (\\ref{eqn:newtontarget}), and\n(\\ref{eqn:newtonderivative}). Easel implements this procedure in\n\\ccode{esl\\_gumbel\\_FitCensored()}.  
The target function\n(\\ref{eqn:censor_newtontarget}) and its derivative\n(\\ref{eqn:censor_newtonderiv}) are implemented in the auxiliary\nfunction \\ccode{lawless422()} \\citep{Lawless82}.\n\nResults on 500 simulated datasets with $\\mu = -20, \\lambda = 0.4$,\ncensored at $\\phi = -20$ -- the expected peak of the histogram; that\nis, a censored fit only to the right tail, which contains about 63\\%\nof the samples:\n\n\\begin{center}\n\\begin{tabular}{lrrrr} \\hline\n & \\multicolumn{4}{c}{\\# samples in EVD histogram}\\\\\n                        & 100 & 1000  & 10,000 & 100,000 \\\\\n\\% error in $\\mu$       &  1\\%& 0.4\\% &  0.1\\% &  0.04\\%  \\\\\nmax error in $\\mu$      &  5\\%&   2\\% &  0.5\\% &  0.2\\%  \\\\\n\\% error in $\\lambda$   &  9\\%&   3\\% &  0.9\\% &  0.3\\%  \\\\\nmax error in $\\lambda$  & 33\\%&  11\\% &    3\\% &    1\\%  \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\n\\subsubsection{Maximum likelihood fitting to truncated data}\n\nA \\emph{truncated} dataset is when we only observe $n$ samples $x_i$,\nand an \\emph{unknown} number $z$ of samples less than some threshold\n$\\phi$ were unobserved. Thus, only the right tail of $n$ samples $x_i\n\\geq \\phi$ was observed. In a truncated dataset, $x_1..x_n$, $n$, and\n$\\phi$ are known, but $z$ is unknown.\n\nSolving a truncated data problem motivates a Bayesian approach,\nbecause we need to integrate out (marginalize) the nuisance $z$\nparameter, and to do this, we have to specify a prior distribution for\n$P(z)$. Gelman \\emph{et al.} describe a general Bayesian framework for\nthinking about various types of missing data problems, including\ncensored and truncated data \\citep{Gelman95}.\n\nIn short, to obtain maximum likelihood parameters $\\hat{\\theta}$ for\nsome distribution, given truncated data, the log likelihood we wish to\nmaximize is:\n\n\\begin{equation}\n  \\hat{\\theta} = \\argmax_\\theta -n \\log P(x \\geq \\phi \\mid \\theta) \n                   + \\sum_{i=1}^n \\log P(x_i \\mid \\theta).\n\\label{eqn:truncated_objective}\n\\end{equation}\n\n\\textbf{Detour: derivation of the truncated data likelihood}\n\nThe derivation of the above equation may not be immediately obvious.\nThe presence of the $n P(x \\geq \\phi \\mid \\theta)$ term may be\ncounterintuitive, as opposed to the more intuitive $z P(x < \\phi \\mid\n\\theta)$ term that accounts for the missing data in a censored data\nproblem. Gelman \\emph{et al.} actually don't even show the equation in\ntheir book; I obtained it from an exercise solution on their web site.\nTo convince you (and to remind me) of its correctness, a sketch of the\nderivation follows.\n\nWe start with the same likelihood equation that arises in censored\ndata for a \\emph{known} total number of samples $N$ (where $N=n+z$),\nbut since $N$ is unknown, we need to integrate over all possible $N$\nfrom $n$ to $\\infty$:\n\n\\begin{eqnarray*}\n   P(\\mathbf{x} \\mid \\theta, \\phi) & = &\n    \\sum_{N=n}^{\\infty}   P(\\mathbf{x} \\mid \\theta, \\phi, N) P(N)\\\\\n   & = & \n    \\prod_{i=1}^n P(x_i \\mid \\theta) \n    \\left[\n      \\sum_{N=n}^\\infty {N \\choose n} P(x < \\phi \\mid \\theta)^{N-n} P(N)\n    \\right]\\\\\n\\end{eqnarray*}\n\nThe $\\prod_{i=1}^n P(x_i \\mid \\theta)$ is straightforward; that sum is\nour problem. 
The trick is to rearrange it so we can treat it as a\nconvergent negative binomial series:\n\n\\[\n   (1-p)^{-a} = 1 + ap + \\frac{a(a+1)}{2!} p^2 +\n   \\frac{a(a+1)(a+2)}{3!} p^3...\n\\]\n\nTo get the sum into the form of this series, Gelman \\emph{et al.}\nsuggest using an informative prior $P(N) = \\frac{1}{N}$, an apparently\nunmotivated choice that happens to make the sum collapse nicely:\n\n\\begin{eqnarray*}\n &=& P(N=n) \n    + (n+1) P(x < \\phi \\mid \\theta) P(N=n+1) \n    + \\frac{(n+1)(n+2)}{2!} P(x < \\phi \\mid \\theta)^2 P(N=n+2) ...\\\\\n &= & \\frac{1}{n} \\left[\n      1 \n      + n P(x < \\phi \\mid \\theta)\n      + \\frac{n(n+1)}{2!} P(x < \\phi \\mid \\theta)^2 \n      + \\frac{n(n+1)(n+2)}{3!} P(x < \\phi \\mid \\theta)^3 \\right]\\\\\n &=& \\frac{1}{n} (1 - P(x < \\phi \\mid \\theta))^{-n}\\\\\n &=& \\frac{1}{n} P(x \\geq \\phi \\mid \\theta)^{-n}\\\\\n\\end{eqnarray*}\n\nThe $\\frac{1}{n}$ is a constant, so we drop it from the likelihood\nequation we'll maximize. Putting this term back together with the\nprobability of the observed data and taking the log, we obtain the log\nlikelihood in equation (\\ref{eqn:truncated_objective}).\n\nAlternatively, we might choose an uninformative improper uniform prior\n$P(N) \\propto 1$. This gives a log likelihood that only differs by a\nterm of $n+1$ versus $n$:\n\n\\begin{equation}\n  \\hat{\\theta} = \\argmax_\\theta -(n+1) \\log P(x \\geq \\phi \\mid \\theta) \n                   + \\sum_{i=1}^n \\log P(x_i \\mid \\theta).\n\\end{equation}\n\nHowever, empirically, this form is ineffective, at least for fitting\nGumbels. The $\\frac{1}{N}$ prior performs much better, probably\nbecause it constrains the solutions to favor smaller, finite, more\nreasonable choices of $N$.\n\n\n\n\\textbf{Back to fitting a truncated Gumbel}\n\nFor the specific case of fitting a truncated Gumbel, the log\nlikelihood (\\ref{eqn:truncated_objective}) to optimize is:\n\n\\[\n  \\log L(\\lambda, \\mu) =\n     n \\log \\lambda \n     - \\sum_{i=1}^{n} \\lambda(x_i - \\mu) \n     - \\sum_{i=1}^{n} e^{-\\lambda(x_i - \\mu)}\n     - n \\log (1 - \\exp(-e^{-\\lambda(\\phi - \\mu)}))\n\\label{eqn:truncated_logL}\n\\]\n\nThis is differentiable with respect to $\\lambda$ and $\\mu$, but it's\nnot going to reduce to the clean root-finding problem that we obtained\nfor complete data or censored data. Instead we're going to be left\nwith a numerical optimization problem. We can use standard numerical\noptimization code, such as steepest descent or conjugate gradient\ndescent. There's just one hitch. These algorithms assume unconstrained\nparameters, but $\\lambda$ is constrained to values $>0$. We do a\nchange of variables, and use the transformation $\\lambda = e^w$ so we\ncan optimize the unconstrained parameter $w = \\log \\lambda$ instead of\noptimizing $\\lambda$ directly.  
The necessary partial derivatives are\nthen:\n\n\\begin{eqnarray}\n\\frac{\\partial \\log L}{\\partial \\mu} & = &\nn \\lambda  \n- \\lambda \\sum_{i=1}^{n} e^{-\\lambda (x_i - \\mu)}\n- \\frac{n \\lambda \\exp \\left[ -\\lambda (\\phi - \\mu) - e^{- \\lambda (\\phi - \\mu)} \\right]}\n       {1 - \\exp(-e^{-\\lambda(\\phi - \\mu)})}\n\\label{eqn:truncated_dmu}\n\\\\%\n\\frac{\\partial \\log L}{\\partial w} & = &\nn \n- \\sum_{i=1}^{n} \\lambda(x_i - \\mu) \n+ \\sum_{i=1}^{n} \\lambda(x_i - \\mu) e^{-\\lambda (x_i - \\mu)}\n+ \\frac{n\\lambda (\\phi-\\mu) \\exp \\left[ -\\lambda (\\phi - \\mu) - e^{- \\lambda (\\phi - \\mu)} \\right]}\n       {1 - \\exp(-e^{-\\lambda(\\phi - \\mu)})}\n\\label{eqn:truncated_dw}\n\\end{eqnarray}\n\nThis optimization is carried out by \\ccode{esl\\_gumbel\\_FitTruncated()}.\nThe likelihood (\\ref{eqn:truncated_logL}) is implemented in\n\\ccode{tevd\\_func()}, and the derivatives (\\ref{eqn:truncated_dmu}) and\n(\\ref{eqn:truncated_dw}) are implemented in \\ccode{tevd\\_dfunc()}.\n\\ccode{esl\\_gumbel\\_FitTruncated()} simply sets up the problem and passes\nit all off to a conjugate gradient descent optimizer.\n\nResults on 500 simulated datasets with $\\mu = -20, \\lambda = 0.4$,\ntruncated at $\\phi = -20$ (leaving the right tail, containing about\n63\\% of the samples):\n\n\\begin{center}\n\\begin{tabular}{lrrrr} \\hline\n                              & \\multicolumn{4}{c}{\\# samples}\\\\\n                              & 100 & 1000  & 10,000 & 100,000 \\\\\n\\% error in $\\hat{\\mu}$       & 13\\%&   2\\% &  0.8\\% &  0.3\\%  \\\\\nmax error in $\\hat{\\mu}$      &260\\%&  42\\% &    3\\% &    1\\%  \\\\\n\\% error in $\\hat{\\lambda}$   & 15\\%&   5\\% &    2\\% &  0.6\\%  \\\\\nmax error in $\\hat{\\lambda}$  & 68\\%&  18\\% &    6\\% &    2\\%  \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\nFitting truncated Gumbel distributions is difficult, requiring much\nmore data than fitting complete or censored data. The problem is that\nthe right tail becomes a scale-free exponential when $\\phi >> \\mu$,\nand $\\mu$ becomes undetermined. Fits become very inaccurate as $\\phi$
Fits become very inaccurate as $\\phi$\ngets larger than $\\mu$, and for sufficiently large $\\phi$, the\nnumerical optimizer will completely fail.\n\n\n\n\n\n\n\n", "meta": {"hexsha": "506a448251f3d9a50a8a750412b2e17998ab092c", "size": 24458, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "third_party_software/hmmer-3.0-linux-intel-x86_64/easel/esl_gumbel.tex", "max_stars_repo_name": "juan-rodriguez-rivas/cointerfaces", "max_stars_repo_head_hexsha": "d9cfdb93e241e2246718c0fc5a4ec22424ee6ed4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "third_party_software/hmmer-3.0-linux-intel-x86_64/easel/esl_gumbel.tex", "max_issues_repo_name": "juan-rodriguez-rivas/cointerfaces", "max_issues_repo_head_hexsha": "d9cfdb93e241e2246718c0fc5a4ec22424ee6ed4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "third_party_software/hmmer-3.0-linux-intel-x86_64/easel/esl_gumbel.tex", "max_forks_repo_name": "juan-rodriguez-rivas/cointerfaces", "max_forks_repo_head_hexsha": "d9cfdb93e241e2246718c0fc5a4ec22424ee6ed4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1328, "max_line_length": 99, "alphanum_fraction": 0.6635865565, "num_tokens": 8183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.599877120317498}}
{"text": "\\documentclass[11pt,english,BCOR10mm,DIV12,bibliography=totoc,parskip=false,smallheadings\n    headexclude,footexclude,oneside,dvips,UKenglish]{pst-doc}\n\\usepackage[T1]{fontenc}\n%\\usepackage{lmodern}\n  \\usepackage[tt=false]{libertine} %override beramono (doesn't look like tt font)\n  \\usepackage{libertinust1math}\n  %\\usepackage[scaled=0.83]{luximono} %override beramono (doesn't look like tt font)\n\\usepackage{attachfile2}\n\\attachfilesetup{date={},color=1 0 0}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{pst-3dplot}\n\\usepackage{pst-plot}\n\\usepackage{pstricks-add}\n\\usepackage{pst-ode}\n\\usepackage[ocgcolorlinks]{ocgx2}\n\\let\\pstFV\\fileversion\n\\let\\belowcaptionskip\\abovecaptionskip\n\n\\makeatletter\n\\renewcommand*\\l@subsection{\\bprot@dottedtocline{2}{1.5em}{3.6em}}\n\\renewcommand*\\l@subsubsection{\\bprot@dottedtocline{3}{3.8em}{4.5em}}\n\\makeatother\n\n\\def\\bgImage{%\n\\pstVerb{\n  /alpha 10 def\n  /beta 28 def\n  /gamma 8 3 div def\n}%\n\\pstODEsolve[algebraic,dt0=0.05]{lorenzXYZ}{0 1 2}{0}{25}{2501}{10 10 30}{\n  alpha*(x[1]-x[0]) |\n  x[0]*(beta-x[2]) - x[1] |\n  x[0]*x[1] - gamma*x[2]\n}\n\\begin{pspicture}(-8,-4)(6,12)\n\\psset{unit=0.17cm,Alpha=160,Beta=15}\n\\listplotThreeD{lorenzXYZ}\n\\psset{unit=0.425cm,linestyle=dashed}\n\\pstThreeDNode(0,0,0){O}\n\\pstThreeDNode(0,0,5){Z}\n\\pstThreeDNode(5,0,0){X}\n\\pstThreeDNode(0,5,0){Y}\n\\pstThreeDNode(-10,-10,0){A}\n\\pstThreeDNode(-10,-10,20){B}\n\\pstThreeDNode(-10,10,20){C}\n\\pstThreeDNode(-10,10,0){D}\n\\pstThreeDNode(10,-10,0){E}\n\\pstThreeDNode(10,-10,20){F}\n\\pstThreeDNode(10,10,20){G}\n\\pstThreeDNode(10,10,0){H}\n\\pspolygon(A)(B)(C)(D)\n\\pspolygon(E)(F)(G)(H)\n\\psline(A)(E)\n\\psline(B)(F)\n\\psline(D)(H)\n\\psline(C)(G)\n\\psset{linestyle=solid,linecolor=red}\n\\psline{->}(O)(X)\n\\psline{->}(O)(Y)\n\\psline{->}(O)(Z)\n\\end{pspicture}\n}\n\\lstset{explpreset={pos=l,width=-99pt,overhang=0pt,hsep=\\columnsep,vsep=\\bigskipamount,rframe={}},\n    escapechar=?}\n\\def\\textat{\\char064}%\n\\let\\verbI\\texttt\n\n\n\\def\\parsedate#1/#2/#3\\relax{\n  \\def\\year{#1}\n  \\def\\month{#2}\n  \\def\\day{#3}\n}\n\n\\begin{document}\n%\\author{Alexander Grahn}\n\\author{Alexander Grahn}\n\\expandafter\\parsedate\\filedate\\relax %set current date to package date\n\\title{pst-ode\\\\[4ex]}\n\\subtitle{A PSTricks package for solving initial value problems for sets of Ordinary Differential Equations (ODE),  v\\pstFV\\\\[2ex]\\url{https://github.com/agrahn/pst-ode}}\n\\maketitle\n\n%\\clearpage\n\\begin{abstract}\n\\noindent The \\LPack{pstricks-add} package already provides \\Lcs{psplotDiffEqn} for solving ODEs. However, as its name suggests, the macro always produces a plot of the computed result. While any number of coupled differential equations can be integrated simultaneously, only two-dimensional plots are supported. The user has to select the two components of the computed state vectors to be used in the plot.\n\nPackage \\LPack{pst-ode} separates solving the equations from plotting the result. The result is stored as a \\PS{} object and can be plotted later using macros from other PSTricks packages, such as \\Lcs{listplot} (\\LPack{pst-plot}) and \\Lcs{listplotThreeD} (\\LPack{pst-3dplot}), or be further processed by user-defined \\PS{} procedures. 
Optionally, the computed state vectors can be written as a table to a text file.\n\nPackage \\LPack{pst-ode} uses the Runge-Kutta-Fehlberg (RKF45) method with automatic step size control for integrating the differential equations. Thus, the precision of the result does not depend on the number of plot points specified, as it would be the case with the classical Runge-Kutta (RK4) method.\n\\end{abstract}\n\n\\tableofcontents\n\n\\section{Introduction}\nAn initial value problem involves finding the solution $\\mathbf{x}(t)$ of a set of first order differential equations\n\\begin{equation}\n    \\frac{\\mathrm{d}\\mathbf{x}}{\\mathrm{d}t}=\\mathbf{f}(t,\\mathbf{x})\n\\end{equation}\nby integrating them with respect to the independent variable $t$ starting at $t_0$ up to $t_\\mathrm{e}$. If the set consists of $n$ differential equations,\na vector of initial conditions\n\\begin{equation}\n    \\mathbf{x}(t_0)=\\mathbf{x}_0\n\\end{equation}\nof the same length $n$ is required. For $n=1$ the initial value problem is one-dimensional:\n\\begin{gather}\n    \\frac{\\mathrm{d}x}{\\mathrm{d}t}=f(t,x)\\quad\\text{for}\\ t \\in [t_0, t_\\mathrm{e}]\\text{, where}\\label{eq:1dode}\\\\\n    x(t_0) = x_0.\\label{eq:1ini}\n\\end{gather}\n\nInstead of producing analytical expressions of the solution functions $\\mathbf{x}(t)$, the numerical method gives only approximate values $\\mathbf{x}_i$ at $N$ discrete points $t_i$ of the integration interval $I=[t_0, t_\\mathrm{e}]$:\n\\begin{equation}\n\\mathbf{x}_i\\approx\\mathbf{x}(t_i).\n\\end{equation}\nThe computed approximations $\\mathbf{x}_i$ of the solution as well as the initial condition vector $\\mathbf{x}_0$ are called `state vectors'. In the case of a single equation problem, Eqs.~\\eqref{eq:1dode}, \\eqref{eq:1ini}, the state vectors have only one component.\n\n\\section{Commands}\n\\begin{BDef}\n    \\Lcs{pstODEsolve}\\OptArgs\\Largb{result}\\Largb{output format}\\Largb{$t_0$}\\Largb{$t_\\mathrm{e}$}\\Largb{$N$}\\Largb{$\\mathbf{x}_0$}\\Largb{$\\mathbf{f}(t,\\mathbf{x})$}\n\\end{BDef}\nis the main user command for solving initial value problems.\n\nThe first mandatory argument \\Larg{result} is a simple identifier composed of letters and possibly numbers. It is the name of a \\PS{} object that takes the computed state vectors $\\mathbf{x}_i$, formatted according to the second argument \\Larg{output format}, as a long list of values. \\Larg{result} can be directly used as the \\Larg{data} argument of \\Lcs{listplot}\\Largb{data} (package \\LPack{pst-plot}) or \\Lcs{listplotThreeD}\\Largb{data} (package \\LPack{pst-3dplot}). When put on the \\PS{} operand stack, \\Larg{result} is immediately executed, that is, the list of values contained in \\Larg{result} is pushed onto the operand stack. The scope of \\Larg{result} is global and thus its content survives page breaks.\n\nThe second argument \\Larg{output format} determines which of the components of the state vectors $\\mathbf{x}_i$ and possibly the independent variable $t$ are stored into \\Larg{result}. \\Larg{output format} can be specified in two different formats, depending on the setting of the command option \\Lkeyword{algebraicOutputFormat}.\nIf \\Lkeyword{algebraicOutputFormat} is set, calculations can be made on the components of the computed state vectors before writing them to \\Larg{result}. 
Without option \\Lkeyword{algebraicOutputFormat} the following applies: The keyword \\Lkeyword{(t)} (parentheses required) inserts the integration parameter value $t_i$ into the result list; numbers (\\Lkeyword{0}, \\Lkeyword{1}, \\Lkeyword{2}, \\dots, $n-1$) in arbitrary order specify the components of vector $\\mathbf{x}_i$ to be inserted, as well as their order of insertion. The elements of \\Larg{output format} are to be separated by spaces. If option \\Lkeyword{algebraicOutputFormat} is set, \\Larg{output format} is a \\Lkeyword{|}-separated list of algebraic expressions (infix notation) according to which the components of the output vector are to be calculated. In these algebraic expressions, the $n$ current state vector components can be referred to as \\Lkeyword{x[0]}, \\Lkeyword{x[1]}, \\dots, \\Lkeyword{x[}$n-1$\\Lkeyword{]} or \\Lkeyword{y[0]}, \\Lkeyword{y[1]}, \\dots, \\Lkeyword{y[}$n-1$\\Lkeyword{]}, and the current independent variable value as `\\Lkeyword{t}'. In either case, there is no upper limit on the output vector length. It must have at least one element though.\n\nArguments $t_0$ and $t_\\mathrm{e}$ define the interval of integration $I=[t_0, t_\\mathrm{e}]$. Both arguments accept expressions in infix or \\PS{} (postfix, reverse polish) notation. Infix notation requires option \\Lkeyword{algebraicT}.\n\n$N$ is the number of \\emph{equally} spaced output points, including $t_0$ and $t_\\mathrm{e}$; it must be $\\ge 2$. In order to divide the interval of integration into $K$ output steps, $N$ must be set to $K+1$. Note that the precision of the solution does \\emph{not} depend on $N$; internal integration steps are automatically inserted and resized according to the changes in the solution.\n\n$\\mathbf{x}_0$ is a list of $n$ space separated initial values, one for each differential equation. Alternatively, $\\mathbf{x}_0$ can be given as a \\PS{} procedure pushing the initial values on the stack, or as an algebraic expression in infix notation where the elements are separated by `\\Lkeyword{|}'. Infix notation requires option \\Lkeyword{algebraicIC}. This argument can be left empty. In that case, the last computed state vector of a preceding \\Lcs{pstODEsolve} call is used as the initial condition. Of course, the number of equations $n$ must be the same as in the preceding calculation.\n\n$\\mathbf{f}(t,\\mathbf{x})$ is the right-hand side of the differential equations. Equations can be entered in either infix or \\PS{} (postfix, reverse polish) notation. Infix notation requires option \\Lkeyword{algebraic}, and equations have to be separated by `\\Lkeyword{|}'. The $n$ current state vector components can be referred to as \\Lkeyword{x[0]}, \\Lkeyword{x[1]}, \\dots, \\Lkeyword{x[}$n-1$\\Lkeyword{]} or \\Lkeyword{y[0]}, \\Lkeyword{y[1]}, \\dots, \\Lkeyword{y[}$n-1$\\Lkeyword{]}, and the current independent variable value as `\\Lkeyword{t}'. If given in \\PS{} notation, the provided procedure must first pop the current state vector components in reverse order(!) from the operand stack and then push the first derivatives in regular order back to it. 
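For instance (a made-up call for illustration, not one of the examples shipped with the package), the one-dimensional decay equation $\\dot{x}=-x$ with $x(0)=1$ could be solved over $[0,5]$ by\n\\begin{verbatim}\n\\pstODEsolve[algebraic]{decayX}{(t) 0}{0}{5}{11}{1}{-x[0]}\n\\end{verbatim}\nwhereas in plain \\PS{} notation the last argument would be the one-operator procedure \\Lkeyword{neg}, which pops the single state component and pushes its negative. 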
Again, the independent variable value can be accessed using `\\Lkeyword{t}'.%\\\\\n\n\\newpage\n\\noindent\\Lcs{pstODEsolve} accepts a few \\OptArgs:\\\\% \\Lkeyword{dt0}, \\Lkeyword{append}, \\Lkeyword{saveData}, \\Lkeyword{algebraicOutputFormat}, \\Lkeyword{algebraicT}, \\Lkeyword{algebraicIC}, \\Lkeyword{algebraic}, \\Lkeyword{algebraicAll}, \\Lkeyword{silent} and \\Lkeyword{varsteptol}.\\\\\n\n\\noindent\\OptArg*{dt0=<number>}\\\\\n\\noindent By default, the output step $(t_\\mathrm{e}-t_\\mathrm{0})/(N-1)$ is used as the initial, tentative integration step size. A large initial step may cause the integration to crash early by numerical overflow, in particular if the right-hand side evaluates to large values. With option \\Lkeyword{dt0}, an arbitrary value of the initial step size can be specified.\\\\\n\n\\noindent\\OptArg*{append}\\\\\nIf the initial condition argument $\\mathbf{x}_0$ of \\Lcs{pstODEsolve} is left empty, integration of an ODE continues from an already existing state, which was obtained by the most recent use of \\Lcs{pstODEsolve} or which was re-instated from a previously saved state (\\Lcs{pstODEsaveState}) by calling \\Lcs{pstODErestoreState}. Keyword \\Lkeyword{append} tells \\Lcs{pstODEsolve} to append the computed result to \\Larg{result}, which must already exist.\\\\\n\n\\noindent\\OptArg*{saveData}\\\\\nIf option \\Lkeyword{saveData} is set, the formatted state vectors are written as a table to a text file named `\\Larg{result}\\Lkeyword{.dat}'. Note that \\Lkeyword{ps2pdf} must be called with option \\Lkeyword{-dNOSAFER} to enable writing of external files.\\\\\n\n\\noindent\\OptArg*{algebraicOutputFormat}\\\\\nWith \\Lkeyword{algebraicOutputFormat}, the command argument \\Larg{output format} is a \\Lkeyword{|}-separated list of algebraic expressions in infix notation, according to which the output vector components are to be assembled before storing them into \\Larg{result}. Default is to  \\emph{not} use algebraic infix expressions. For details, see the description of \\Larg{output format} above.\\\\\n\n\\noindent\\OptArg*{algebraicT}\\\\\nWith \\Lkeyword{algebraicT}, the integration interval limits $t_0$ and $t_\\mathrm{e}$ can be entered as algebraic expressions in infix notation, otherwise \\PS{} (postfix, reverse polish) notation must be used. Of course, single rational numbers for  $t_0$ and $t_\\mathrm{e}$ always work.\\\\\n\n\\noindent\\OptArg*{algebraicIC}\\\\\nWith \\Lkeyword{algebraicIC}, the initial condition vector $\\mathbf{x}_0$ can be given in algebraic infix notation. Vector components have to be separated by `\\Lkeyword{|}'. Default is \\PS{} notation, i.\\,e. space separated postfix expressions or rational numbers.\\\\\n\n\\noindent\\OptArg*{algebraic}\\\\\nWith \\Lkeyword{algebraic}, the right-hand side of differential equations $\\mathbf{f}(t,\\mathbf{x})$ can be given in infix notation. Algebraic infix expressions are to be separated by `\\Lkeyword{|}'. Default is \\PS{} notation.\\\\\n\n\\noindent\\OptArg*{algebraicAll}\\\\\nOption \\Lkeyword{algebraicAll} is equivalent to setting all of \\Lkeyword{algebraicOutputFormat}, \\Lkeyword{algebraicT}, \\Lkeyword{algebraicIC}, \\Lkeyword{algebraic}.\\\\\n\n\\noindent\\OptArg*{silent}\\\\\nOption \\Lkeyword{silent} suppresses the terminal output of stepping information.\\\\\n\n\\noindent\\OptArg*{varsteptol=<number>}\\\\\nThe tolerance for automatic calculation of the integration step size can be set with \\Lkeyword{varsteptol}. The default value is \\Lkeyword{1e-6}. 
Sometimes, it may be helpful to relax this value in order to cope with situations where integration stops with step size underflow. The occurrence of step size underflow is indicated by writing `\\Lkeyword{!}' to the terminal, and integration stops prematurely before reaching $t_\\mathrm{e}$. The current value of $t$ and the current state vector $\\mathbf{x}$ are written to the terminal. Step size underflow may happen for stiff ODEs at some point along the integration interval $[t_0, t_\\mathrm{e}]$.\\\\[1ex]\n\n\\begin{BDef}\n\\Lcs{pstODEsaveState}\\Largb{state}\n\\end{BDef}\nis a user command to save the state of an integration at the end of a \\Lcs{pstODEsolve} call. The saved state can be restored later, using \\Lcs{pstODErestoreState}\\Largb{state}, in order to continue the integration of an ODE. \\Larg{state} is an identifier composed of letters and possibly numbers.\\\\[1ex]\n\n\\begin{BDef}\n\\Lcs{pstODErestoreState}\\Largb{state}\n\\end{BDef}\nis a user command to restore a previously saved state (\\Lcs{pstODEsaveState}\\Largb{state}). After restoring a state, \\Lcs{pstODEsolve} can be called with an empty initial condition argument in order to continue integration of the ODE. Of course, the number of differential equations of the current and of the saved \\Lcs{pstODEsolve} calls must be the same.\n\n\\section{Examples}\nComplete and compilable example files are located in the \\Verb+{TEXMFDIST}/doc/generic/pst-ode/examples+ directory.\n\n\\subsection[Lorenz Attractor]{Lorenz Attractor \\textattachfile{examples/lorenz.tex}{($\\nearrow$lorenz.tex)}}\nThe Lorenz Attractor depicted on the title page is governed by\n\\begin{align*}\n  \\frac{\\mathrm{d}x}{\\mathrm{d}t}& = \\alpha (y-x)\\\\\n  \\frac{\\mathrm{d}y}{\\mathrm{d}t}& = x(\\beta-z)-y\\\\\n  \\frac{\\mathrm{d}z}{\\mathrm{d}t}& = x y - \\gamma z.\n\\end{align*}\nThis system of differential equations is known to display chaotic behaviour due to the non-linear combination (products) of the dependent functions $x(t)$, $y(t)$ and $z(t)$. The trajectory of the solution is susceptible to slight changes in the initial conditions and hence to slight discrepancies in the computed intermediate state vectors, which in turn can be regarded as initial conditions for the continuation of the solution. This is known as the `butterfly effect', a term coined by Lorenz. Although an adaptive stepping algorithm is used, the solution of this initial value problem \\emph{does} therefore depend on the number of output points chosen. To some extent, this fact contrasts with the statement made in the abstract of this documentation. However, for linear problems, which have only one distinct solution, it still holds. In the present case, the values $\\alpha=10$, $\\beta=28$, $\\gamma=8/3$ and the initial condition $\\mathbf{x}_0=(10,10,30)$ were chosen. 
The integration parameter $t$ runs from $0$ to $25$ and the state vector is output at $2501$ points of the integration interval.\n\n\\begin{verbatim}\n\\pstVerb{\n  /alpha 10 def\n  /beta 28 def\n  /gamma 8 3 div def\n}\n\\pstODEsolve[algebraic]{lorenzXYZ}{0 1 2}{0}{25}{2501}{10 10 30}{\n  alpha*(x[1]-x[0]) |\n  x[0]*(beta-x[2]) - x[1] |\n  x[0]*x[1] - gamma*x[2]\n}\n\\listplotThreeD{lorenzXYZ}\n\\end{verbatim}\nAs the plot is three-dimensional, all three components of the state vectors are stored in the \\PS{} variable \\Lkeyword{lorenzXYZ} by setting the \\Larg{output format} argument to `\\Lkeyword{0 1 2}'.\n\n\\subsection[Charged particle movement in a vertical electrical field]{Charged particle movement in a vertical electrical field \\textattachfile{examples/particle.tex}{($\\nearrow$particle.tex)}}\nThe trajectory $\\mathbf{x}(t)$ of the particle shown below is governed by a set of three second order differential equations:\n\\begin{subequations}\n\\begin{align}\n\\ddot{x} &= \\omega\\dot{y}-\\dfrac{\\dot{x}}{\\tau}\\\\\n\\ddot{y} &= -\\omega\\dot{x}-\\dfrac{\\dot{y}}{\\tau}\\\\\n\\ddot{z} &= -\\dfrac{\\dot{z}}{\\tau},\n\\end{align}\n\\end{subequations}\nwhere $\\omega$ and $\\tau$ are constants. An initial value problem of this type needs $3\\times2=6$ initial conditions. These are given as the initial position $\\mathbf{x}_0=(x_0, y_0, z_0)$ and the initial velocity $\\dot{\\mathbf{x}}_0=(\\dot{x}_0, \\dot{y}_0, \\dot{z}_0)=(u_0, v_0, w_0) = \\mathbf{v}_0$ of the particle.\n\nIn order to solve the equations above numerically, they have to be rewritten as a set of six first order differential equations:\n\\begin{subequations}\n\\begin{align}\n\\dot{x} &= u\\\\\n\\dot{y} &= v\\\\\n\\dot{z} &= w\\\\\n\\dot{u} &= \\omega v-\\frac{u}{\\tau}\\\\\n\\dot{v} &= -\\omega u-\\dfrac{v}{\\tau}\\\\\n\\dot{w} &= -\\dfrac{w}{\\tau}.\n\\end{align}\n\\end{subequations}\n\nHere, $\\omega$ and $\\tau$ are both set to the value of $5$, the initial position of the particle is defined as $\\mathbf{x}_0=(0, 0, 0)$ and its initial velocity vector as $\\mathbf{v}_0=(20, 0, 2)$.  
The integration parameter $t$ runs from $0$ to $25$ and the state vector is output at $1000$ points of the integration interval.\n\\begin{verbatim}\n\\pstVerb{\n  /wc 5 def\n  /tau 5 def\n}\n\\pstODEsolve[algebraic]{particleXYZ}{0 1 2}{0}{25}{1000}{0 0 0 20 0 2}{\n  x[3] | x[4] | x[5] | wc*x[4] - x[3]/tau | -wc*x[3] - x[4]/tau | -x[5]/tau\n}\n\\listplotThreeD{particleXYZ}\n\\end{verbatim}\nSince we are interested in plotting the particle positions, only the first three components of the state vectors are stored in \\Lkeyword{particleXYZ}.\n\\begin{center}\n\\psset{unit=0.75cm,Alpha=40,Beta=20}\n\\begin{pspicture}(-10,-2)(6,13)\n\\newpsstyle{vecteurA}{arrowinset=0.1,arrowsize=0.15,linecolor={[rgb]{0 0.5 1}}}\n\\pstVerb{\n  /wc 5 def\n  /tau 5 def\n  /vx 20 def\n  /vz 2 def\n}\n\\pstODEsolve[algebraic]{particleXYZ}{0 1 2}{0}{25}{1000}{0 0 0 vx 0 vz}{\n  y[3] | y[4] | y[5] | wc*y[4] - y[3]/tau | -wc*y[3] - y[4]/tau | -y[5]/tau\n}\n\\listplotThreeD{particleXYZ}\n\\pstThreeDNode(0,0,0){O}\n\\pstThreeDNode(0,0,1){Z}\n\\pstThreeDNode(1,0,0){X}\n\\pstThreeDNode(0,1,0){Y}\n\\pstThreeDNode(-5,-9,0){A}\n\\pstThreeDNode(-5,-9,10){B}\n\\pstThreeDNode(-5,2,10){C}\n\\pstThreeDNode(-5,2,0){D}\n\\pstThreeDNode(5,-9,0){E}\n\\pstThreeDNode(5,-9,10){F}\n\\pstThreeDNode(5,2,10){G}\n\\pstThreeDNode(5,2,0){H}\n\\pstThreeDNode(0,0,0){M0}\n\\pstThreeDNode(vx 5 div,0,vz 5 div){V}\n\\pstThreeDNode(vx 5 div,0,0){Vx}\n{\\psset{linestyle=dashed}\n\\pspolygon(A)(B)(C)(D)\n\\pspolygon(E)(F)(G)(H)\n\\psline(A)(E)\n\\psline(B)(F)\n\\psline(D)(H)\n\\psline(C)(G)}%\n\\psline[linecolor=red]{->}(M0)(V)\n\\psline[linecolor=cyan]{->}(M0)(Vx)\n\\uput{0.1}[l](V){\\red$\\overrightarrow{v}_0$}\n{\\psset{linestyle=solid,linecolor=red}\n\\psline[style=vecteurA]{->}(O)(X)\n\\psline[style=vecteurA]{->}(O)(Y)\n\\psline[style=vecteurA]{->}(O)(Z)}%\n\\uput[u](Z){$z$}\n\\uput[dl](X){$x$}\n\\uput[r](Y){$y$}\n\\end{pspicture}\n\\end{center}\n\n\\subsection[One more first order differential equation]{One more first order differential equation \\textattachfile{examples/ode.tex}{($\\nearrow$ode.tex)}}\nThe aim of the last example is to demonstrate that precision does not depend on the number of output points. We set only five output points and plot the analytical solution against the numerical one for comparison.\n\nThe ODE to be solved reads\n\\begin{equation}\n  y'=-2y t.\\label{eq:ode}\n\\end{equation}\nIt should be noted that the right-hand side also depends on the integration parameter $t$. % as well as in the output format description, since the solution $y$ shall be plotted against $t$.\nWith the initial condition\n\\begin{equation}\n  y(-1)=1/e\\label{eq:initcond}\n\\end{equation}\nthe analytical solution of Eq.~\\eqref{eq:ode} is\n\\begin{equation}\ny(t)=e^{-t^2}.\n\\end{equation}\n\\begin{verbatim}\n\\pstODEsolve[algebraicIC,algebraic]{TY}{(t) 0}{-1}{3}{5}{1/Euler}{-2*t*y[0]}\n\\listplot[plotstyle=dots]{TY}\n\\end{verbatim}\nNote that in \\Lcs{pstODEsolve}, `\\Lkeyword{x[..]}' and `\\Lkeyword{y[..]}' can be used interchangeably for representing the state vector in algebraic notation. The initial condition~\\eqref{eq:initcond} is given as an algebraic expression, which requires option \\Lkeyword{algebraicIC}; in \\PS{} notation it would read `\\Lkeyword{1 Euler div}'. 
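As a quick check of the quoted solution: differentiating $y(t)=e^{-t^2}$ gives $y'(t)=-2t\\,e^{-t^2}=-2yt$, which is Eq.~\\eqref{eq:ode}, and $y(-1)=e^{-1}=1/e$ reproduces the initial condition~\\eqref{eq:initcond}. 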
The integration parameter and the one available state vector component are stored into the \\PS{} object `\\Lkeyword{TY}' by setting \\Larg{output format} to `\\Lkeyword{(t) 0}'.\n\\begin{center}\n\\psset{unit=3cm}\n\\begin{pspicture}(-0.3,-0.4)(.2,1)%\\psgrid\n  \\psset{xAxisLabel=$t$,xAxisLabelPos={c,-6ex},yAxisLabelPos={-3ex,c}}\n  \\begin{psgraph}[axesstyle=frame,Ox=-1,](0,0)(0,0)(4,1){10cm}{2.5cm}\n  \\rput(1,0){\\psplot[algebraic]{-1}{3}{Euler^(-x^2)}}\n  \\pstODEsolve[algebraicIC,algebraic]{TY}{(t) 0}{-1}{3}{5}{1/Euler}{-2*t*y[0]}\n  \\rput(1,0){\\listplot[plotstyle=dots,dotsize=0.05,linecolor=red]{TY}}\n  \\end{psgraph}\n\\end{pspicture}\n\\end{center}\nThe plot contains the analytical solution and the five output points of the numerical solution as red dots. They lie exactly on the analytic solution.\n\n\\section{Acknowledgements}\nI'd like to thank Manuel Luque for the inspiring examples on his site \\url{http://pstricks.blogspot.fr}, some of which I used as a basis for this documentation.\n\n\\section{License}\nThis package can be redistributed and/or modified under the terms\nof the \\LaTeX Project Public License Distributed from CTAN archives:\n\\url{http://mirrors.ctan.org/tex-archive/macros/latex/base/lppl.txt}\n\n\\end{document}\n", "meta": {"hexsha": "cc100bfef7d6e590a3521e8d247d88b475e94885", "size": 21861, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pst-ode-doc.tex", "max_stars_repo_name": "agrahn/pst-ode", "max_stars_repo_head_hexsha": "4383d531bb0b18918856637089e8cd2b2351b4ea", "max_stars_repo_licenses": ["LPPL-1.3c"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-08-02T18:09:15.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-08T09:16:24.000Z", "max_issues_repo_path": "pst-ode-doc.tex", "max_issues_repo_name": "agrahn/pst-ode", "max_issues_repo_head_hexsha": "4383d531bb0b18918856637089e8cd2b2351b4ea", "max_issues_repo_licenses": ["LPPL-1.3c"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pst-ode-doc.tex", "max_forks_repo_name": "agrahn/pst-ode", "max_forks_repo_head_hexsha": "4383d531bb0b18918856637089e8cd2b2351b4ea", "max_forks_repo_licenses": ["LPPL-1.3c"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6486486486, "max_line_length": 1235, "alphanum_fraction": 0.7416861077, "num_tokens": 6837, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430353105599, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.5998718491460738}}
{"text": "\\chapter{Exact intensity for the two-faceted cup}\\label{app:boundariescup}\nFor very simple optical systems, the \\textit{exact} target intensity of light can be calculated. Here we explain how this is done for the two-faceted cup with a Lambertian source described in Chapter \\ref{chap:raytracing}. We remind the reader that, for this system, the intensity in a given direction $\\variabile{p}$ in PS is defined as:\n\\begin{equation}\\label{eq:eta_appendix}\nI_{\\textrm{PS}}(\\variabile{p}) = \\sum_{\\Pi}\\int_{\\variabile{q}^\\textrm{\\,min}(\\Pi, \\variabile{p})}^{\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})}L(\\variabile{q}, \\variabile{p})\\textrm{d}\\variabile{q} = \\sum_{\\Pi}\\big (\\variabile{q}^\\textrm{max}(\\Pi,\\variabile{p})-\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})\\big )\\,,\n\\end{equation}\nwhere the sum is over all the possible paths, $\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})$ are of the intersection points between the line $\\variabile{p}=\\textrm{const}$ and $\\partial$\\insieme{R}$_\\textrm{t}(\\Pi)$, and the second equation holds as we assume $L=1$ in \\insieme{R}$_\\textrm{t}(\\Pi)$.\nTherefore, if we are able to provide an analytic expression for the boundaries $\\partial$\\insieme{R}$_\\textrm{t}(\\Pi)$ we can calculated the position coordinates $\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})$ analytically and we can compute the exact intensity for every direction $\\variabile{p}$ is obtained using (\\ref{eq:eta_appendix}). The procedure used for such purpose is explained next.\n\\section{Analytic approach}\nThe idea is to rotate the cup to determine the maximum number of reflections between a ray and the optical lines before reaching the target. The rays are considered to be straight lines instead of broken lines. Hence it is sufficient to find only one intersection point between the ray and a line segment (also in the case where more than one reflection occurs). Finally transforming (rotating or reflecting) back these points we obtain the corresponding coordinates at the target.\\\\ \\indent\nThe two-faceted cup is defined in the $(\\variabile{x}, \\variabile{z})$-plane as in Chapter \\ref{chap:raytracing}. \nLet $\\gamma\\in(0, \\pi/2)$ be the angle that the left and right reflector make with the normal to the source. 
\n%\\begin{figure}[t]\n%\\label{fig:cup}\n%  \\begin{center}\n%%\\vspace{-1.5cm}\n%  \\includegraphics[width=6.7cm]{cup.pdf}\n%  \\end{center}\n%%\\vspace{-2cm}\n%  \\caption{\\textbf{Shape of the two-faceted cup.}  Each line of the system is labeled with a number.\n%   The source \\point{S}$= [-2,2]$ (line number $1$) is located on the $\\variabile{x}$-axis.\n%   The target \\point{T}$= [-17, 17]$ (line $4$) is parallel to the source and is located at a height $\\variabile{z}= 40$.\n%   The left and right reflectors (line $2$ and $3$) connect the source with the target.}\n%  \\label{fig:cup}\n%\\end{figure}\nThe maximum $\\variabile{z}$-coordinate that the two-faceted cup can reach during the rotation is defined by the $\\variabile{z}$-coordinate of the point $\\point{P}=(0,Z)$:\n\\begin{equation}\\label{rotation}\nZ = \\big(h+\\frac{\\variabile{a}}{\\tan\\gamma}\\big)\\frac{1}{\\cos\\gamma}-\\frac{\\variabile{a}}{\\tan\\gamma} = \\frac{h}{\\cos\\gamma}+\\frac{\\variabile{a}(1-\\cos\\gamma)}{\\sin\\gamma},\n\\end{equation} and $\\point{R}=(0,-\\frac{\\variabile{a}}{\\tan\\gamma})$ is the rotation point. We define $\\point{B}_k$ as the clockwise ($k<0$) or counterclockwise ($k\\geq 0$) rotation image of $\\point{P}$ around the point $\\point{R}$ over an angle $\\alpha_k=(2k+1)\\gamma$, with $k$ an integer (Figure \\ref{fig:twofaced} is illustrative).\n\\begin{figure}[t]%\\label{fig:twofaced}\n \\centering\n  \\includegraphics[width=\\textwidth]{rotated_cup.pdf}\n \\caption{\\textbf{The two-faceted cup rotated twice on both sides.} The line segment with end points $B_{k-1}$ and $B_{k}$ is the $|k|$ times rotated target. The coordinates $(q,h)$ on the target $B_{-1}B_{0}$ are obtained by transforming the coordinates $(u,v)$ of the intersection point between a ray and the segment $B_0B_1$. The point $\\point{R} = \\big(0,-\\frac{a}{\\tan{\\gamma}}\\big)$ is the center of the circle described by rotating the cup (dashed line).}\n  \\label{fig:twofaced}\n  \\end{figure}\n  %as the notation used in equation ($\\ref{Bk}$) could suggest.\nThe position coordinates of points $\\point{B}_k = (B_{k,x}, B_{k,z})$ are given by:\n\\begin{equation}\n \\begin{pmatrix} B_{k,x}  \\\\  B_{k,z}\\end{pmatrix}= -\n  \\begin{pmatrix} 0  \\\\  \\frac{a}{\\tan\\gamma}\\end{pmatrix}+\n \\begin{pmatrix} \\cos\\alpha_k  & -\\sin\\alpha_k \\\\  \\sin\\alpha_k & \\cos\\alpha_k\\end{pmatrix}\n \\begin{pmatrix}  0 \\\\  Z+\\frac{a}{\\tan\\gamma}\\end{pmatrix}.\n\\end{equation}\nThe maximum number of reflections $r_{\\textrm{max}}$ a ray can undergo before arriving at the target is:\n\\begin{equation}\nr_{\\textrm{max}}=\\max\\{k\\in\\mathbb{N} \\;| \\; B_{k-1,z}\\geq 0\\}.\n\\end{equation}\nFor example, for the two-faceted cup depicted in Figure \\ref{fig:cup}, we found $r_{\\textrm{max}}=2$.\\\\ \\indent \nGiven the coordinates $(\\variabile{x}_1, \\variabile{z}_1)$ and the angular coordinate $\\optangle_1$ of a ray at the source, we can calculate the corresponding position $(\\variabile{x}, \\variabile{z})$ and direction coordinate $\\optangle$ at the target as explained in the following. 
\\\\ \\indent We compute the coordinates $(u,v)$ of the intersection point between the ray parametrization and the $|\\variabile{k}|$ times rotated or reflected target $\\point{B}_{\\variabile{k}-1}\\point{B}_\\variabile{k}$ for which the intersection with the forward ray is not empty, for $\\variabile{k}=-\\variabile{r}_{\\textrm{max}}-1, \\cdots, \\variabile{r}_{\\textrm{max}}$. Next, if $\\variabile{k}$ is even, the corresponding coordinates $(\\variabile{x},\\variabile{z})$ at the target are found by rotating back the coordinates $(\\variabile{u},\\variabile{v})$, otherwise a reflection is applied. Therefore, the ray coordinates $(\\variabile{x},\\variabile{z})$ at the target are given by:\n\\begin{equation} \\label{rotation_target}\\begin{pmatrix} \\variabile{x}\\\\ \\variabile{z}\n\\end{pmatrix} = \\left(\\begin{array}{cc}(-1)^k & 0  \\\\ 0 & 1\\end{array}\\right)\n\\left(\\begin{array}{cc}\\cos(-2\\variabile{k}\\gamma) & -\\sin(-2\\variabile{k}\\gamma) \\\\\\sin(-2\\variabile{k}\\gamma) & \\cos(-2\\variabile{k}\\gamma)\\end{array}\\right)\\begin{pmatrix} \\variabile{u} \\\\\n \\variabile{v}+\\frac{a}{\\tan(\\gamma)}\\end{pmatrix}-\\begin{pmatrix}0 \\\\ \\frac{a}{\\tan\\gamma}\\end{pmatrix}.\n\\end{equation} We observe that the sign in the first matrix depends on the parity of $\\variabile{k}$. When $\\variabile{k}=0$, i.e., the ray does not reflect, the first matrices become the identity matrix and the cup is not rotated nor reflected. When $\\variabile{k}$ is even, the determinant of the matrix given by the product between the first and the second matrix (\\ref{rotation_target}) is equal to $1$ and we obtained a rotation matrix, while when $\\variabile{k}$ is odd the determinant of the product matrix is $-1$ and we have a reflection matrix.\n\\\\ \\indent\nThe method of transforming the cup instead of the rays allows us to determine the positive luminance regions $\\mbox{\\insieme{R}}_1(\\Pi_{\\variabile{j}})$ and $\\mbox{\\insieme{R}}(\\Pi_{\\variabile{j}})$ in source and target PS, where every path $(\\Pi_{\\variabile{j}})_{\\variabile{j}=1, \\cdots, 2\\variabile{r}_{\\textrm{max}}+1}$ corresponds to $|\\variabile{k}|$ reflections. The corresponding boundaries only consist of rays that either leave the extremes of the source or hit one of the points $\\point{B}_\\variabile{k}$. \n%At the boundaries a small change in the position or direction ray coordinate can cause a difference in the number of the reflections. 
\n\n\nRays that leave the interior of \\point{S} and hit $\\point{B}_\\variabile{k}$ have position coordinates $\\variabile{q}_1 = \\variabile{x}_1\\in(-\\variabile{a}, \\variabile{a})$ in source PS; the corresponding target PS coordinates are $\\variabile{q} = \\variabile{x} = B_{\\variabile{k}, \\variabile{x}}$.\nThe direction coordinates of these rays at the source PS are $\\variabile{p}_1 = \\sin(\\optangle_1)$ where $\\optangle_1$ is given by:\n\\begin{equation}\\label{anglesource}\n\\optangle_1 = \\arctan\\bigg(\\frac{\\variabile{x}_1-B_{k,x}}{B_{k,z}}\\bigg).\n\\end{equation}\nThe corresponding direction coordinates at the target PS are $\\variabile{p}=\\sin(\\optangle)$ where $\\optangle$ is given by:\n\\begin{equation}\\label{teta}\n\\optangle=(-1)^\\variabile{k}(\\optangle_1-2\\variabile{k}\\gamma).\n\\end{equation}\n\nRays emitted from the end points of the source have a constant position coordinate $\\variabile{q}_1 = \\variabile{x}_1 = \\textrm{const}$ in source PS and a varying direction coordinate $\\variabile{p}_1 = \\sin(\\optangle_1)\\in[-1, 1]$ where $\\optangle_1\\in[-\\pi/2, \\pi/2]$. The corresponding target position coordinate $\\variabile{x}$ is obtained from (\\ref{rotation_target}) while the direction coordinate in the target PS is $\\variabile{p} = \\sin(\\optangle)$, where $\\optangle$ is given by Equation (\\ref{teta}).\nNote that the rays emitted from the end points of the source form vertical lines in source PS as $\\variabile{q}_1= \\textrm{const}$ and $\\variabile{p}_1\\in[-1,1]$ varies.\nOn the other hand, rays that hit points $\\point{B}_k$ form vertical lines in target PS as $\\variabile{q} = \\textrm{const}$.\n\\\\ \\indent \n%In Figure \\ref{fig:raggi} are shown some rays that compose the boundaries of $M_{\\textrm{s},k}$ which coordinates are:\n%$$ \\begin{array}{cc}ADE = \\Bigg(-a, \\arctan\\Big(\\frac{-a+b_{-1,x}}{b_{-1,z}}\\Big)\\Bigg),\\; ACE = \\big(-a, \\sin(\\gamma)\\big),\\; AF = \\big(-a, -\\sin(\\delta)\\big), \\\\\n% BCF = \\Bigg(a, \\arctan\\Big(\\frac{a-b_{1,x}}{b_{1,z}}\\Big)\\Bigg), BDF = \\big(a, - \\sin(\\gamma)\\big) \\, \\;\\mbox{and} \\,\\; BE = \\big(a, \\sin(\\delta)\\big).\\end{array} $$\n%\\begin{figure}\n%\\includegraphics[scale=0.55]{raggi6.jpg}\n%\\caption{\\footnotesize{Rays that leave the corner points of the source. The rays $AF$, $BE$, $ACE$, $BDF$ are rays that do not hit the reflectors of the system.\n%They constitute rays on the boundaries of the regions $M_{\\textrm{s},0}$, $M_{\\textrm{s},1}$ and $M_{\\textrm{s},-1}$.\n% The rays $ADE$ and $BCF$ are rays that hit once the reflectors of the system. They constitute rays on the boundaries of the regions\n% $M_{\\textrm{s},-1}$, $M_{\\textrm{s},-2}$, and $M_{\\textrm{s},1}$ or $M_{\\textrm{s},2}$, respectively.}}\n%\\label{fig:raggi}\n%\n%\\end{figure}\n%$$ \\begin{array}{cc}ADE = \\big(-b, -(t_1+2\\gamma)\\big)),\\; ACE = \\big(-b, \\sin(\\gamma)\\big),\\; AF = \\big(-b, -\\sin(\\delta)\\big), \\\\\n% BCF = \\big(b, -(t_2-2\\gamma)\\big), BDF = \\big(b, - \\sin(\\gamma)\\big) \\, \\;\\mbox{and} \\,\\; BE = \\big(b, \\sin(\\delta)\\big).\\end{array} $$\n% where $t_1 = \\arctan(\\frac(-a+b_{-1,x}{b_{-1}}))$ and $t_2 = \\arctan(\\frac(a-b_{-1,x}{b_{-1}}))$.\nThe boundaries at the source and target PS are shown in red in Figures \\ref{fig:boundary} and \\ref{boundaries_target}, respectively. 
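As a numerical check of the construction, take the dimensions used for Figure \\ref{fig:cup}: source half-width $a=2$ and target half-width $17$ at height $h=40$, so that $\\tan\\gamma = 15/40$. Then $a/\\tan\\gamma \\approx 5.33$ and, by (\\ref{rotation}), $Z \\approx 43.1$; since $B_{k,z} = (Z+a/\\tan\\gamma)\\cos\\alpha_k - a/\\tan\\gamma$, we get $B_{1,z} \\approx 48.4\\cos 3\\gamma - 5.33 \\approx 17.6 \\geq 0$ while $B_{2,z} \\approx 48.4\\cos 5\\gamma - 5.33 < 0$, which confirms $r_{\\textrm{max}}=2$. 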
\n\\begin{figure}[htbp]\n\\centering\n%\\begin{minipage}[t]{.40\\textwidth}\n\\includegraphics[width = 0.8 \\textwidth]{analytic_boundaries_source}\n\\caption{Regions in source PS formed by rays that reflect $|k|$ times, for the two-faceted cup in Figure \\ref{fig:cup}.}\n\\label{fig:boundary}\n%\\end{minipage} \\qquad \\qquad\n%\\begin{minipage}[t]{.40\\textwidth}\n\\end{figure}\n\\begin{figure}[htbp]\n\\centering\n%\\begin{minipage}[t]{.40\\textwidth}\n\\includegraphics[width = 0.8 \\textwidth]{analytic_boundaries_target}\n\\caption{Regions in target PS formed by rays that reflect $|k|$ times, for the two-faceted cup in Figure \\ref{fig:cup}.}\n\\label{boundaries_target}\n%\\end{minipage} \\qquad \\qquad \\qquad\n\\end{figure}\n%\\noindent \n%Figure \\ref{fig:boundary} and \\ref{boundaries_target} show also the symmetry of the regions $M_{\\textrm{s},k}$ and $M_{\\textrm{t},k}$. \n%Finally we note that, since $k = 1$ is odd, the position of the regions $M_{\\textrm{t},1}$ and $ M_{\\textrm{t},-1}$ are exchanged with respect to the position of $ M_{\\textrm{s},1}$ and $ M_{\\textrm{s},-1}$.\nOnce the boundaries at the target are determined, the coordinates of the intersection points between line $\\variabile{p}=\\textrm{const}$ and $\\partial$\\insieme{R}$(\\Pi_{\\variabile{j}})$ are found for every path $(\\Pi_\\variabile{j})_{\\variabile{j}=1, \\cdots, 2\\variabile{r}_{\\textrm{max}}+1}$ along every direction $\\variabile{p}$. The target intensity is computed using Equation (\\ref{eq:eta_appendix}). The intensity profile is depicted in Figure \\ref{fig:intensity_cup_analytic}. \n\\begin{figure}[t]\n\\centering\n%\\begin{minipage}[t]{.40\\textwidth}\n\\includegraphics[width = 0.7\\textwidth]{Exact_intensity}\n\\caption{Profile of the exact intensity at the target of the two-faceted cup.}\n\\label{fig:intensity_cup_analytic}\n%\\end{minipage} \\qquad \\qquad \\qquad\n\\end{figure}\nSince the boundaries are computed analytically, the intensity $I(\\variabile{p})$ found is the exact intensity.\n\nThe exact intensity found as described above was taken as reference intensity in Chapters \\ref{chap:triangulation} and \\ref{chap:raymapping1}.\n\n", "meta": {"hexsha": "3d845a9924ba34fea14336a0fe9383a4dd7857cb", "size": 13043, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/Boundariescup.tex", "max_stars_repo_name": "melaniafilosa/ps_raytracing", "max_stars_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/Boundariescup.tex", "max_issues_repo_name": "melaniafilosa/ps_raytracing", "max_issues_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/Boundariescup.tex", "max_forks_repo_name": "melaniafilosa/ps_raytracing", "max_forks_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 106.9098360656, "max_line_length": 964, "alphanum_fraction": 0.7098060262, "num_tokens": 4233, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8311430415844384, "lm_q1q2_score": 0.5998718437250904}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{titletoc}\n\\usepackage{titlesec}\n\\usepackage{geometry} \n\\usepackage{fontspec, xunicode, xltxtra}\n\\usepackage{float}\n\\usepackage{cite}\n\\usepackage{amsmath}\n\\usepackage{listings}\n\\usepackage{titletoc}\n\\usepackage{bm}\n\n\\geometry{left=3cm,right=3cm,top=3cm,bottom=3cm}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\var}{var}\n\\DeclareMathOperator*{\\expec}{E}\n\n\\begin{document}\n\\title{\\textsf{Homework 6 for Pattern Recognition}}\n\\author{Fan JIN\\quad (2015011506)}\n\\maketitle\n\n\\section*{Question 1.1}\n{\n    Note that \n    $$\\varepsilon_B(x) = h_B(x) - y(x) = \\left[ \\frac{1}{m} \\sum_{i=1}^{m} {h_i(x)} \\right] - y(x)$$\n    $$= \\frac{1}{m} \\sum_{i=1}^{m} {[h_i(x) - y(x)]} = \\frac{1}{m} \\sum_{i=1}^{m} {\\varepsilon_i(x)}.$$\n    It follows that \n    $$E_{h_B} = \\frac{1}{n} \\sum_{j=1}^{n}{[\\varepsilon_B (x_j)]^2} = \\frac{1}{m^2 n} \\sum_{j=1}^{n}{ \\left[ \\sum_{i=1}^{m}{\\varepsilon_i(x_j)} \\right]^2 }$$\n    $$= \\frac{1}{m^2 n} \\sum_{j=1}^{n}{ \\left[ \\sum_{i=1}^{m}{(\\varepsilon_i(x_j))^2} + 2 \\sum_{i=1}^{m}{ \\sum_{k=1, k\\neq i}^{m}{\\varepsilon_i(x_j) \\varepsilon_k(x_j)}} \\right] }$$\n    $$= \\frac{1}{m^2 n} \\left[ \\sum_{i=1}^{m}\\sum_{j=1}^{n}{(\\varepsilon_i(x_j))^2} + 2 \\sum_{i=1}^{m}{ \\sum_{k=1, k\\neq i}^{m} \\sum_{j=1}^{n} {\\varepsilon_i(x_j) \\varepsilon_k(x_j)}} \\right]$$\n    $$= \\frac{1}{m^2 n} \\left[ \\sum_{i=1}^{m}\\sum_{j=1}^{n}{(\\varepsilon_i(x_j))^2} + 2 \\sum_{i=1}^{m}{ \\sum_{k=1, k\\neq i}^{m} 0 } \\right]$$\n    $$= \\frac{1}{m^2 n} \\left[ \\sum_{i=1}^{m}\\sum_{j=1}^{n}{(\\varepsilon_i(x_j))^2} \\right]$$\n    $$= \\frac{1}{m} \\frac{1}{m} \\sum_{i=1}^{m} \\left[ \\frac{1}{n}\\sum_{j=1}^{n}{(\\varepsilon_i(x_j))^2} \\right] = \\frac{1}{m} E_h.$$\n}\n\n\\section*{Question 1.2}\n{\n    By Cauchy Inequality, we have\n    $$\\left[ \\sum_{i=1}^{m}{(\\varepsilon_i(x_j) \\cdot 1)} \\right]^2 \\leq \\left[ \\sum_{i=1}^{m}{(\\varepsilon_i(x_j))^2} \\right] \\cdot \\left[ \\sum_{i=1}^{m}{1^2} \\right] = m \\sum_{i=1}^{m}{(\\varepsilon_i(x_j))^2}.$$\n    It follows that \n    $$E_{h_B} = \\frac{1}{n} \\sum_{j=1}^{n}{[\\varepsilon_B (x_j)]^2} = \\frac{1}{m^2 n} \\sum_{j=1}^{n}{ \\left[ \\sum_{i=1}^{m}{\\varepsilon_i(x_j)} \\right]^2 }$$\n    $$\\leq \\frac{1}{m^2 n} \\sum_{j=1}^{n}{ \\left[ m \\sum_{i=1}^{m}{(\\varepsilon_i(x_j))^2} \\right] }$$\n    $$= \\frac{1}{m n} \\sum_{j=1}^{n}{ \\left[ \\sum_{i=1}^{m}{(\\varepsilon_i(x_j))^2} \\right] }$$\n    $$= \\frac{1}{m} \\sum_{i=1}^{m} \\left[ \\frac{1}{n}\\sum_{j=1}^{n}{(\\varepsilon_i(x_j))^2} \\right] = E_h.$$\n}\n\n\\section*{Question 2.1}\n{\n    \\textbf{Lemma:}\\quad With all weights $w_i$, $d$ and $j$ fixed, there exists $k$ such that the following quantity\n    $$\\sum_{i=1}^{n}{w_i 1\\{ h_{a,d,j}(x_i) \\neq y_i \\}},$$ as a function of $a$, \n    is minimized at $a = x_{k,j}$, where $x_{k,j}$ is the $j$-th feature of the $k$-th sample.\n\n    \\textbf{Proof:}\\quad This quantity, with all weights $w_i$, $d$ and $j$ fixed, is a linear combination of many sign functions $$1\\{ h_{a,d,j}(x_i) \\neq y_i \\} $$ with respect to $a$. Therefore, it is a staircase function, which can be minimized at one of those steps. 
By definition, the threshold for $1\\{ h_{a,d,j}(x_i) \\neq y_i \\} $ is $a = x_{k,j}$, and the lemma is thus proved.\n\n    With this lemma, we only need to find the optimal $a$, $d$, $j$ from a finite set, i.e., to find $k$, $d$ and $j$ such that \n    $$a = x_{k,j} ,$$\n    $$d = 1 \\mathrm{\\quad or\\quad } d = -1 ,$$\n    $$1 \\leq k \\leq n ,$$\n    $$1 \\leq j \\leq p.$$\n    The size of this finite set is merely $2np$.\n\n    \\textbf{Note:}\\quad For this dataset, the images are nearly binary: the gray-scale values are very close to either 0 or 1. In such cases, we can speed up calculations by neglecting the repeating values in $X$, i.e., doing a ``unique'' operation. \n\n    \\begin{table}[!hbp]\n        \\centering\n        \\begin{tabular}{|c|c|c|}\n        \\hline\n        Iterations & Training error & Testing error \\\\\n        \\hline\n        30.0000  &  0.0180  &  0.0753 \\\\\n        \\hline\n        100.0000    &     0  &  0.0653 \\\\\n        \\hline\n        200.0000   &      0  &  0.0598 \\\\\n        \\hline\n        \\end{tabular}\n        \\caption{Error rates through training}\n    \\end{table}\n\n    \\begin{figure}[H]\n        \\centering\n        \\includegraphics[width = 1\\linewidth]{stump-AdaBoost/error.png}\n        \\caption{Error rates through training}\n    \\end{figure}\n}\n\n\\section*{Questions 2.2 \\& 2.3}\n{\n    Keep using 5-fold cross validation through training.\n\n    \\begin{table}[!hbp]\n        \\centering\n        \\begin{tabular}{|c|c|c|}\n        \\hline\n        Model & Validation error & Testing error \\\\\n        \\hline\n        Fine Tree  &  0.122000  &  0.084380 \\\\\n        \\hline\n        Medium Tree  &   0.126000  & 0.084380 \\\\\n        \\hline\n        Coarse Tree   &  0.124000  &  0.098443 \\\\\n        \\hline\n        Bagged Trees 30   &    0.044000 &  0.047715 \\\\\n        \\hline\n        Bagged Trees 100   &  0.038000  &  0.037670 \\\\\n        \\hline\n        Bagged Trees 300   &   0.042000  & 0.035660 \\\\\n        \\hline\n        \\end{tabular}\n        \\caption{Error rates of the 6 tree models}\n    \\end{table}\n\n    All 6 models have been saved to ``models.mat'' automatically. \n\n    For the 3 tree models, the FineTree (max division number = 4) and MediumTree (max division number = 20) give the best performance on the testing set, while the CoarseTree (max division number = 100) has a much higher error rate, especially on the testing set.\n\n    For the 3 bagged trees, the lowest validation error is attained when the number of classifiers, the hyperparameter, is tuned to 100, while the testing error is lowest at 300.\n}\n\n\\section*{Question 2.4}\n{\n    \\textbf{Briefly speaking:}\\quad\n    ``Boosting'' and ``random forest'' are both useful workarounds for the problem that single decision trees overfit easily. \n\n    \\textbf{Detailed comparison:}\n    \\begin{itemize}\n        \\item Judging from the output, ensemble models like bagged trees attain the lowest error rate (around 4\\%), AdaBoost a medium one (around 6\\%), and single tree models the highest (around 12\\%). This means it is possible to improve the performance by ensembling multiple models. Ensembling contributes to better performance, makes the model robust against noise, and lends itself to parallel computation as well.\n\n        \\item The idea of boosting is useful in that it assigns weights to different samples, and allows the classifier to concentrate on the samples that it classifies wrongly. It makes the training faster, and a bit more robust. 
The weights and the parameters are updated iteratively. Sometimes it also employs multiple trees and makes decisions by taking a vote, as shown in AdaBoost.\n\n        \\item Hyperparameter tuning is crucial to our model. This not only applies in a single decision tree (where we can adjust the max division number), but it is also seen in bagged trees (where we can adjust the number of classifiers). More divisions and more sub-classifiers do not always bring better performance, as shown in Table 2.\n        \n        \\item It is necessary to identify overfitting. From Figure 1, we see it starts to overfit after 60 iterations. It is better to stop at 60 iterations than to continue training. \n    \\end{itemize}\n}\n\n\\section*{Source Code}\n{\n    Please download the source code from http://39.106.23.58/files/PR6\\_2015011506.7z\n\n    For Question 2, please run ``main.m''. It may take \\emph{less than a minute} to train the models, but the result is reproducible because of the random seed.\n\n    For each model, I clicked the ``Generate code'' button to transcribe my operations into MATLAB code, and stored each of them in the corresponding ``.m'' file. These files include:\n    \\begin{itemize}\n        \\item trainClassifierFineTree.m\n        \\item trainClassifierMediumTree.m\n        \\item trainClassifierCoarseTree.m\n        \\item trainClassifierBaggedTrees\\_30.m\n        \\item trainClassifierBaggedTrees\\_100.m\n        \\item trainClassifierBaggedTrees\\_300.m\n    \\end{itemize}\n    Thus, the steps above can be easily reproduced without using the GUI of the toolbox. \n\n}\n\n\\clearpage\n\\end{document}\n    ", "meta": {"hexsha": "591efe8de8194ab7b773387f4ad6795edd24d0af", "size": 8104, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW6/Homework6.tex", "max_stars_repo_name": "kingium/PatternRecognitionForUndergrads", "max_stars_repo_head_hexsha": "5cd08f3a260fae4a7edaf71599433e93484863b0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW6/Homework6.tex", "max_issues_repo_name": "kingium/PatternRecognitionForUndergrads", "max_issues_repo_head_hexsha": "5cd08f3a260fae4a7edaf71599433e93484863b0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW6/Homework6.tex", "max_forks_repo_name": "kingium/PatternRecognitionForUndergrads", "max_forks_repo_head_hexsha": "5cd08f3a260fae4a7edaf71599433e93484863b0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.3354037267, "max_line_length": 418, "alphanum_fraction": 0.636846002, "num_tokens": 2790, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8311430415844384, "lm_q1q2_score": 0.5998718437250904}}
{"text": "%!TEX root = ../main.tex\n\n\\subsection{Tau}\n360 is nice enough number.  It divides evenly by 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30,\n36, 40, 45, 60, 72, 90, and 180.  No other number under 1000 even comes close.  But it\nis still arbitrary.  Why are there 60 seconds in a minute, 60 minutes in an hour, 360 degrees\nin a full rotation?  Because Babylonian mathematicians wanted numbers that were easier\ndivision!\n\nA far more natural system exists and is indeed the only one used in calculus and higher\nmathematics today.  Rather than assigning an arbitrary number to the complete\ncircle, we ask instead, what fraction of a full rotation have you turned?  One full rotation\nwill correspond with the entire circumference of the circle.  And again, this will be\ndone on the Unit Circle, so that an angle's measure is defined as the arc length\ndivided by the radius.  Because this is length divided by length, angles in this system\n(called radians) have no units.  Turning all the way around one time is the same as\nlaying the radius of any circle out in a curved piece $6.28\\dots$ times.  Rather than\nwrite this irrational number to some arbitrary number of decimals, we use the symbol\n$\\tau$, called TAU.\n\nThere are no calculators with the number $\\tau$ on them (yet), so we must use \nthe fact that $\\tau=2\\pi$.\n\n\\subsection{Unit Circle}\nThis all culminates in the single figure that most embodies trigonometry: the completed\nUnit Circle.  Because your education up to this point has utilized degrees, they are included\nbut you should concentrate on using radians, as they will be the main system used\nfrom now on.  \n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture} [scale=1.7,\n% Toggle commenting on the next four lines for the completed unit circle:\nangle/.style={draw,text=white,fill=white,minimum height=1cm, minimum width=1cm},\npoint/.style={white},\nangle/.style={fill=white},\npoint/.style={},\n]\n    \\draw [white] (-3.6,-3.6) rectangle (3.6,3.6);\n    \\draw [thick,fill=white] (0,0) circle (3cm);\n    \\draw [thick, ] (-3.3,0) -- (3.3,0);\n    \\draw [thick, ] (0,-3.3) -- (0,3.3);\n    \\draw (0,0) --  node [angle] {$\\frac{\\tau}{4}$} (90:3)\n                node[point, above right] {$ \\left(0,1\\right)  d.n.e.$};\n    \\draw (0,0) --  node [angle] {$\\frac{\\tau}{2}$} (180:3)\n                node[point, above left] {$ \\left(-1,0\\right) 0$};\n    \\draw (0,0) --  node [angle] {$\\frac{3\\tau}{4}$} (270:3)\n                node[point, below right] {$ \\left(0,-1\\right) d.n.e.$};\n    \\draw (0,0) --  node [angle] {$\\tau$} (0:3)\n                node[point, above right] {$ \\left(1,0\\right) 0$};\n    \\draw (0,0) --  node [angle] {$\\frac{\\tau}{8}$} (45:3)\n                node [point, above right] {$ \\left( \\frac{\\sqrt2}{2} , \\frac{\\sqrt2}{2}  \\right) 1$};\n    \\draw (0,0) --  node [angle] {$\\frac{3\\tau}{8}$} (135:3)\n                node [point, above left] {$ \\left( -\\frac{\\sqrt2}{2} , \\frac{\\sqrt2}{2}  \\right ) -1$};\n    \\draw (0,0) --  node [angle] {$\\frac{5\\tau}{8}$} (225:3)\n                node [point, below left] {$ \\left( -\\frac{\\sqrt2}{2} , -\\frac{\\sqrt2}{2}  \\right) 1$};\n    \\draw (0,0) --  node [angle] {$\\frac{7\\tau}{8}$} (315:3)\n                node [point, below right] {$ \\left( \\frac{\\sqrt2}{2} , -\\frac{\\sqrt2}{2}  \\right) -1$};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{\\tau}{12}$} (30:3)\n                node [point, above right] {$ \\left( \\frac{\\sqrt3}{2} , \\frac{1}{2}  
\\right) \\frac{\\sqrt{3}}{3}$};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{\\tau}{6}$} (60:3)\n                node [point, above right] {$ \\left( \\frac{1}{2} , \\frac{\\sqrt3}{2}  \\right) \\sqrt{3} $};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{\\tau}{3}$} (120:3)\n                node [point, above left] {$ \\left( -\\frac{1}{2} , \\frac{\\sqrt3}{2}  \\right) -\\sqrt{3} $};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{5\\tau}{12}$} (150:3)\n                node [point, above left] {$ \\left(- \\frac{\\sqrt3}{2} , \\frac{1}{2}  \\right) -\\frac{\\sqrt{3}}{3} $};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{7\\tau}{12}$} (210:3)\n                node [point, below left] {$ \\left(- \\frac{\\sqrt3}{2} , -\\frac{1}{2}  \\right) \\frac{\\sqrt{3}}{3}$};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{2\\tau}{3}$} (240:3)\n                node [point, below left] {$ \\left( -\\frac{1}{2} , -\\frac{\\sqrt3}{2}  \\right) \\sqrt{3}$};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{5\\tau}{6}$} (300:3)\n                node [point, below right] {$ \\left( \\frac{1}{2} , -\\frac{\\sqrt3}{2}  \\right) -\\sqrt{3}$};\n    \\draw (0,0) --  node [near end, angle] {$\\frac{11\\tau}{12}$} (330:3)\n                node [point, below right] {$ \\left( \\frac{\\sqrt3}{2} , -\\frac{1}{2}  \\right) -\\frac{\\sqrt{3}}{3}$};\n \n    \\foreach \\n in {1,2,3,4}\n        \\foreach \\t in {0,30,45,60}\n            \\fill (\\n*90+\\t:3) circle (0.03cm);\n    \\foreach \\t in {30,45,60, 120,135,150, 210,225,240, 300,315,330}\n        \\node [font=\\tiny, fill=white,inner sep=1pt] at (\\t:.75) {$ \\t^\\circ $};\n\\end{tikzpicture}\n\\caption{The full unit circle, with $x$, $y$, and $m$, radians and degrees, for your careful study.}\n\\end{center}\n\\end{figure}\n\n\nIt may be helpful to think of conversion to and from radians as dimension analysis,\nas in the sciences.  A full circle is 1 $\\tau$, which is the same as $360^\\circ$, so these\nnumbers can be put in a fraction equalling one.  Multiplying by one does not change \nanything.\n\n\\begin{example}{Conversion}\n\\exProblem\nConvert $215^\\circ$ into radians and $\\frac{5\\tau}{6}$ into degrees.\n\n\\exSolution\n$$\n215^\\circ \\cdot \\frac{\\tau}{360^\\circ} = \\frac{215\\tau}{360} = \\frac{43\\tau}{72}\n$$\nand\n$$\n\\frac{5\\tau}{6} \\cdot \\frac{360^\\circ}{\\tau} = \\frac{1800^\\circ}{6} = 300^\\circ\n$$\n\\end{example}\n\n\\subsection{Derivative}\nWe have already said that sine is y, cosine is x, and tangent is m, but m (slope)\nis y over x, so tangent is sine over cosine.  Recall that the reciprocals of the\nstandard trigonometric functions have names, specifically that 1 over sine\nis cosecant, 1 over cosine is secant, and 1 over tangent is cotangent.  You are asked\nto find the derivatives of each in the exercises.\n\nWhile it is not the formal proof for the derivative of sine we will present in chapter\n\\ref{ch:identities}, we can now build a geometric proof.  Sine is the signed vertical\ndisplacement of a point on the unit circle, after we have traversed $\\theta$ units of\narc.  For every tiny nudge $d\\theta$, there is a similar triangle created that\nhighlights the additional change brought about to the height, or $\\sin\\theta$.  The \nfigure below shows how on the smaller triangle, the ratio of the adjacent side\nto the hypotenuse -- cosine -- is the tiny change in the height.  
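In symbols (just restating the picture), the small triangle gives $d(\\sin\\theta) \\approx \\cos\\theta \\, d\\theta$, so in the limit\n$$\n\\frac{d \\sin\\theta}{d\\theta} = \\cos\\theta.\n$$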
All these values\nbecome more accurate the smaller the triangle becomes.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\begin{tikzpicture}[line cap=round,line join=round,x=2cm,y=2cm,\n     spy using outlines={rectangle,lens={scale=12}, size=8cm, connect spies}]\n\t\\coordinate (A) at (30:2);\n\t\\coordinate (B) at (1.73205,0);\n\t\\coordinate (C) at (1.672,1.1000);\n\t\\coordinate (D) at (1.672,1.0000);\n\t\\draw[->] (-0.5,0) -- (2.2,0) node[anchor=west] {$x$};\n\t\\draw[->] (0,-0.3) -- (0,2.1) node[anchor=west] {$y$};\n\t\\draw(0,0) ++ (-10:2) arc (-10:100:2.01);\n\t\\draw(0,0) -- (A) -- (B);\n\t\\draw(C) -- (D) -- (A);\n\t\\draw[ultra thin](0,0) ++ (0:2.2) arc (0:30:2.2) ;\n\n\t\\node at (2.3,0.5) {$\\theta$};\n\t\\spy [blue] on (1.7,1.05)\n             in node [left] at (6.5,1);\n\t\\draw[decorate, decoration={brace},xshift=0.5cm,yshift=0.4cm] (4.2,1.6) -- (4.9,0.4) node[midway,xshift=0.5cm,yshift=0.2cm] {$d\\theta$};\n\t\\draw (4.2,1.6) ++ (-90:.5) arc (-90:-60:0.5) node[midway,yshift=-0.2cm] {$\\theta$};\n\t\\draw[decorate, decoration={brace},xshift=-0.5cm] (4.2,0.4) -- (4.2,1.6) node[midway,xshift=-0.6cm] {$\\cos\\theta$};\n\\end{tikzpicture}\n\\caption{Zoom in on $\\theta$ plus a tiny $d\\theta$'s effect on $\\sin\\theta$.}\n\\end{centering}\n\\end{figure}\n\n\n\n", "meta": {"hexsha": "f0ad601c1d482b6416619f8b71bcb295087a84f5", "size": 7868, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch09/0903.tex", "max_stars_repo_name": "aquatiki/AnalysisTextbook", "max_stars_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-10-08T15:05:17.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-07T12:32:53.000Z", "max_issues_repo_path": "ch09/0903.tex", "max_issues_repo_name": "aquatiki/AnalysisTextbook", "max_issues_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch09/0903.tex", "max_forks_repo_name": "aquatiki/AnalysisTextbook", "max_forks_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.4533333333, "max_line_length": 137, "alphanum_fraction": 0.618200305, "num_tokens": 2797, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271998, "lm_q2_score": 0.8311430478583168, "lm_q1q2_score": 0.5998718383041063}}
\\newcommand{\\raul}{Ra\\'ul}\n\\section{Tutorial 5: \\raul{} Typesetting An Exam Paper}\nOne task of the department is to elucidate the past exam papers of its math courses. In this tutorial, we will see how \\raul{} takes advantage of the package xjtluexam to typeset an exam paper.\n\n\\subsection{The exampaper Environment}\nTo begin an exam paper, \\raul{} uses the exampaper environment. The environment itself gives no output, but provides several commands\\footnote{In fact, the commands are not \\emph{provided} by this environment. It merely does some initialization work so that these commands can function normally.}.\n\nThen, \\raul{} types \\verb=\\question= to start a question.\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question Justify whether $\\infseries{n}{1}$ converges or not when $a_n \\to 0$.\n\\question Evaluate\n\\[\n\\int x^5 e^{x^2} \\dx\n\\]\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Justify whether $\\infseries{n}{1}$ converges or not when $a_n \\to 0$.\n\\question Evaluate\n\\[\n\\int x^5 e^{x^2} \\dx\n\\]\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\n\nThe question counter can be referenced normally.\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question Justify whether $\\infseries{n}{1}$ converges or not when $a_n \\to 0$.\n\\question \\label{ques:examques} Evaluate\n\\[\n\\int x^5 e^{x^2} \\dx\n\\]\n\\end{exampaper}\nA solution to Question \\ref{ques:examques} is ...\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Justify whether $\\infseries{n}{1}$ converges or not when $a_n \\to 0$.\n\\question \\label{ques:examques} Evaluate\n\\[\n\\int x^5 e^{x^2} \\dx\n\\]\n\\end{exampaper}\nA solution to Question \\ref{ques:examques} is ...\n\\end{lstlisting}\n\\end{miniexammar}\n\nSubquestions may be given, and the numbering for them is alphabetical.\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question Given the function $f=...$\n\n\\hspace{1.5em}(a) Evaluate $f(3)$\n\n\\hspace{1.5em}(b) Prove a proposition about $f$.\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Given the function $f=...$\n\\subquestion Evaluate $f(3)$\n\\subquestion Prove a proposition about $f$.\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\n\n\\subsection{Multiple Choice Questions}\nMultiple choice questions (MCQs) appear in exam papers quite often. The xjtluexam package provides several environments and commands just for them.\n\nTo introduce the choices of an MCQ, \\raul{} begins a choice environment. There are two choice environments: \\emph{shortchoices} and \\emph{longchoices}. As their names suggest, they are used to introduce short choices (several choices share one line) and long choices (each choice occupies its own line), respectively.\n\nInside one of these environments, a choice number is given by the \\verb=\\choicenumber= command. The space after a choice is given by the command \\verb=\\choicespace=.
\\raul{} doesn't want a space at the end of the last choice, so he only uses the command between choices.\n\\begin{miniexammar}{.55\\textandmarginlen}{\n\\begin{exampaper}\n\\question Evaluate\n\\[\n\\int_{-1}^1 x^2\\cosh{x} \\dx\n\\]\n\\begin{shortchoices}\n\\choicenumber $\\frac{2\\sinh{1}}{3}$ \\choicespace\n\\choicenumber $-\\frac{2\\sinh{1}}{3}$ \\choicespace\n\\choicenumber $\\frac{2\\cosh{1}}{3}$ \\choicespace\n\\choicenumber $-\\frac{2\\cosh{1}}{3}$ \n\\end{shortchoices}\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Evaluate\n\\[\\int_{-1}^1 x^2\\cosh{x} \\dx\\]\n\\begin{shortchoices}\n\\choicenumber $\\frac{2\\sinh{1}}{3}$ \\choicespace \\choicenumber $-\\frac{2\\sinh{1}}{3}$ \\choicespace \\choicenumber $\\frac{2\\cosh{1}}{3}$ \\choicespace \\choicenumber $-\\frac{2\\cosh{1}}{3}$ \n\\end{shortchoices}\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\n\nIf \\raul{} feels that the space is too tight, he can freely break the lines:\n\\begin{miniexammar}{.42\\textandmarginlen}{\n\\begin{exampaper}\n\\question Evaluate\n\\[\n\\int_{-1}^1 x^2\\cosh{x} \\dx\n\\]\n\\begin{shortchoices}%\n\\choicenumber $\\frac{2\\sinh{1}}{3}$\\choicespace%\n\\choicenumber $-\\frac{2\\sinh{1}}{3}$\\\\\n\\choicenumber $\\frac{2\\cosh{1}}{3}$\\choicespace%\n\\choicenumber $-\\frac{2\\cosh{1}}{3}$%\n\\end{shortchoices}\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Evaluate\n\\[\\int_{-1}^1 x^2\\cosh{x} \\dx\\]\n\\begin{shortchoices}%\n\\choicenumber $\\frac{2\\sinh{1}}{3}$\\choicespace \\choicenumber $-\\frac{2\\sinh{1}}{3}$\\\\ \\choicenumber $\\frac{2\\cosh{1}}{3}$\\choicespace \\choicenumber $-\\frac{2\\cosh{1}}{3}$%\n\\end{shortchoices}\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\nHere, \\raul{} uses the comment sign \\verb=%= to eliminate the line break character, which would be treated as a white space by \\LaTeX{} if not removed.\n\nFor the longchoices environment, the usage is similar, except that \\raul{} doesn't need to care about the line breaks, as each choice ends with a line break anyway.\n\\begin{miniexammar}{.42\\textandmarginlen}{\n\\begin{exampaper}\n\\question Which of the following functions satisfy Laplace's Equation\n\\[\\frpdt[^2 f]{x^2} + \\frpdt[^2 f]{y^2} = 0\\]\n\\begin{longchoices}%\n\\choicenumber $f(x,y):=x^3y-xy^3$\\choicespace%\n\\choicenumber $f(x,y):=\\log{(4x^2+4y^2)}$\\choicespace%\n\\choicenumber $f(x,y):=3 x^4 y^5 - 2 x^2 y^3$\\choicespace%\n\\choicenumber $f(x,y):=\\cos{(2x^2-2y^2)}$%\n\\end{longchoices}\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question Which of the following functions satisfy Laplace's Equation\n\\[\\frpdt[^2 f]{x^2} + \\frpdt[^2 f]{y^2} = 0\\]\n\\begin{longchoices}%\n\\choicenumber $f(x,y):=x^3y-xy^3$\\choicespace%\n\\choicenumber $f(x,y):=\\log{(4x^2+4y^2)}$\\choicespace%\n\\choicenumber $f(x,y):=3 x^4 y^5 - 2 x^2 y^3$\\choicespace%\n\\choicenumber $f(x,y):=\\cos{(2x^2-2y^2)}$%\n\\end{longchoices}\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\n\n\\subsection{Space And Rules}\nIt is conventional to leave a vertical space after a comprehensive question is given. 
To add a certain amount of vertical space, \\raul{} uses the command \\verb=\\vspace=.\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question Find ...\n\\vspace{.8cm}\n\\question Prove ...\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\question Find ...\n\\vspace{.8cm}\n\\question Prove ...\n\\end{lstlisting}\n\\end{miniexammar}\n\nHe is quite satisfied with this result, though he wonders how to evenly distribute questions on a page. The command \\verb=\\stretch=, when used in \\verb=\\vspace=, gives a space proportional to the number given to it. For example, if \\raul{} wants to place three questions evenly on a page, he writes:\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question Find ...\n\\vspace{.5cm}\n\\question Prove ...\n\\vspace{.5cm}\n\\question Evaluate ...\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\question Find ...\n\\vspace{\\stretch{1}}\n\\question Prove ...\n\\vspace{\\stretch{1}}\n\\question Evaluate ...\n\\end{lstlisting}\n\\end{miniexammar}\nThe number can be changed for a relatively larger space. For example, if \\raul{} writes\n\\begin{lstlisting}\n\\question Find ...\n\\vspace{\\stretch{2}}\n\\question Prove ...\n\\vspace{\\stretch{1}}\n\\question Evaluate ...\n\\end{lstlisting}\nthen the empty space left will be evenly divided into $2+1=3$ pieces. Two of them are given to the place where \\verb=\\stretch{2}= is given, and the other one is given to the place where \\verb=\\stretch{1}= is given.\n\nTo create questions that require a student to fill in the blanks, \\raul{} has to draw the blanks, that is, to draw a line under each of the blanks. In \\LaTeX{}, a horizontal line can be drawn by the command\n\\begin{verbatim}\n\\rule[lift]{width}{height}\n\\end{verbatim}\nwhere \\emph{lift} is measured with respect to the baseline.\n\nTo simplify the work, xjtluexam provides the command \\verb=\\blankline= to draw a line with a given width (default 1cm).\n\\begin{miniexammar}{.5\\textandmarginlen}{\n\\begin{exampaper}\n\\question $(3\\vec{i}-9\\vec{j}) \\times (2\\vec{i}+1\\vec{j}) = $ \\blankline\n\\question $\\frdt[\\cos^2 x \\sin x]{x} = $ \\blankline[1.5cm]\n\\end{exampaper}\n}\n\\begin{lstlisting}\n\\begin{exampaper}\n\\question $(3\\vec{i}-9\\vec{j}) \\times (2\\vec{i}+1\\vec{j}) = $ \\blankline\n\\question $\\frdt[\\cos^2 x \\sin x]{x} = $ \\blankline[1.5cm]\n\\end{exampaper}\n\\end{lstlisting}\n\\end{miniexammar}\n\nNow \\raul{} has everything he needs to know, and an exam paper will be ready soon...
\\section{Classifying spaces and bundles}\nLet $\\pi:Y\\to X$ be a map of spaces. This defines a ``descent category''\n$\\cCC(\\pi)$ whose objects are the points of $Y$, whose morphisms are points of\n$Y\\times_X Y$, and whose structure morphisms are the obvious maps. Let $cX$\ndenote the category whose objects and morphisms are both given by points of\n$X$, so that the nerve $NcX$ is the constant simplicial object with value $X$.\nThere is a functor $\\cCC(\\pi)\\to cX$ specified by the map $\\pi$. \n\nLet $\\cU$ be a cover of $X$. Let $\\cCC(\\cU)$ denote the descent category\nassociated to the obvious map $\\epsilon:\\coprod_{U\\in \\cU} U\\to X$. It is easy\nto see that $\\epsilon:B\\cCC(\\cU) \\simeq X$ if $\\cU$ is numerable. The morphism\ndetermined by $x\\in U\\cap V$ is denoted $x_{U,V}$. Suppose $p:P\\to X$ is a\nprincipal $G$-bundle.  Then $\\cU$ trivializes $p$ if there are homeomorphisms\n$t_U:p^{-1}(U)\\xrightarrow{\\simeq} U\\times G$ over $U$. Specifying such\nhomeomorphisms is the same as a trivialization of the pullback bundle\n$\\epsilon^\\ast P$.\n\nThis, in turn, is the same as a functor $\\theta_P:\\cCC(\\cU)\\to G$. To\nsee this, we note that the $G$-equivariant composite $t_V\\circ t_U^{-1}:(U\\cap\nV)\\times G\\to (U\\cap V)\\times G$ is determined by its value on $(x,1)\\in (U\\cap\nV)\\times G$. The resulting map $U\\cap V\\to G$ is denoted $f_{U,V}$. Then, the functor\n$\\theta_P:\\cCC(\\cU)\\to G$ sends every object of $\\cCC(\\cU)$ to the point, and\n$x_{U,V}$ to $f_{U,V}(x)$.\n\nOn classifying spaces, we therefore get a map $X\\xleftarrow{\\simeq}\nB\\cCC(\\cU)\\xrightarrow{\\theta_P} BG$, where the map on the left is given by\n$\\epsilon$.\n\\begin{exercise}\n    Prove that $\\theta_P^\\ast EG \\simeq \\epsilon^\\ast P$.\n\\end{exercise}\nThis suggests that $BG$ is a classifying space for principal $G$-bundles (in\nthe sense of \\S \\ref{grassmannmodel}). To make this precise, we need to prove\nthat two principal $G$-bundles are isomorphic if and only if the associated\nmaps $X\\to BG$ are homotopic.\n\nTo prove this, we will need to vary the open cover. Say that $\\cV$\n\\emph{refines} $\\cU$ if for any $V\\in \\cV$, there exists $U\\in\\cU$ such that\n$V\\subseteq U$. A \\emph{refinement} is a function $p:\\cV\\to\\cU$ such that\n$V\\subseteq p(V)$. A refinement $p$ defines a map $\\coprod_{V\\in \\cV} V\\to\n\\coprod_{U\\in\\cU} U$, denoted $\\rho$.\n\nAs both $\\coprod_{V\\in \\cV} V$ and $\\coprod_{U\\in\\cU} U$ cover $X$, we get a\nmap $\\cCC(\\cV)\\to \\cCC(\\cU)$ over $cX$. Taking classifying spaces begets a\ndiagram:\n$$\n\\xymatrix{\n    B\\cCC(\\cV)\\ar[r]\\ar[dr] & B\\cCC(\\cU)\\ar[d]\\\\\n    & X\n}\n$$\nLet $t$ be a trivialization of $P$ for the open cover $\\cU$. The construction\ndescribed above begets a map $B\\cCC(\\cU)\\to BG$, so we get a trivialization\n$s$ for $\\cV$. 
This is a homeomorphism $s_V:p^{-1}(V)\\to V\\times G$ which fits\ninto the following diagram:\n\\begin{equation*}\n    \\xymatrix{\n\tp^{-1}(V)\\ar[r]^{s_V}_\\sim\\ar[d] & V\\times G\\ar[d]\\\\\n\tp^{-1}(\\rho(V))\\ar[r]_{t_{\\rho(V)}}^\\sim & \\rho(V)\\times G\n    }\n\\end{equation*}\nBy construction, there is a large commutative diagram:\n\\begin{equation*}\n\\xymatrix{\n    B\\cCC(\\cV)\\ar[r]\\ar[dr]^\\sim\\ar@/^0.135in/[rr] &\n    B\\cCC(\\cU)\\ar[d]^\\sim\\ar[r] & BG\\\\\n    & X.\n}\n\\end{equation*}\nThis justifies dropping the symbol $\\cU$ in the notation for the map\n$\\theta_P$.\n\nConsider two principal $G$-bundles over $X$:\n\\begin{equation*}\n    \\xymatrix{\n\tP\\ar[r]^{\\simeq} \\ar[dr] & Q\\ar[d]\\\\\n\t& X,\n    }\n\\end{equation*}\nand suppose I have trivializations $(\\cU, t)$ of $P$ and $(\\cW, s)$ of $Q$.\nLet $\\cV$ be a common refinement, so that there is a diagram:\n\\begin{equation*}\n    \\xymatrix{\n\t& \\cCC(\\cU)\\ar[dr]^{\\theta^\\cU_{P}} & \\\\\n\t\\cCC(\\cV)\\ar[ur]\\ar[dr] \\ar@/^/[rr]^{\\theta^\\cV_P}\n\t\\ar@/_/[rr]_{\\theta^\\cV_Q} & & G\\\\\n\t& \\cCC(\\cW)\\ar[ur]_{\\theta^\\cW_Q}\n    }\n\\end{equation*}\nIncluded in the diagram is a mysterious natural transformation\n$\\beta:\\theta^\\cV_P\\to \\theta^\\cV_Q$, whose construction is left as an exercise\nto the reader\\todo{Should we describe this? It's rather technical...}. Its\nexistence combined with Lemma \\ref{nat-trans-htpy} implies that the two maps\n$\\theta_P,\\theta_Q:B\\cCC(\\cV)\\simeq X\\to BG$ are homotopic, as desired.\n\n\\subsection{Topological properties of $BG$}\\label{loops-bg}\nBefore proceeding, let us summarize the constructions discussed so far. Let $G$\nbe some topological group (assumed to be an absolute neighborhood retract of a\nLie group). We constructed $EG$, which is a contractible space with $G$ acting\nfreely on the right (this works for any topological group). There is an orbit\nprojection $EG\\to BG$, which is a principal $G$-bundle under our assumption on\n$G$. The space $BG$ is universal, in the sense that there is a bijection\n$$\\Bun_G(X)\\xleftarrow{\\simeq} [X, BG]$$\ngiven by $f\\mapsto [f^\\ast EG]$.\n\nLet $E$ be a space such that $G$ acts on $E$ from the left. If $P\\to B$ is any\nprincipal $G$-bundle, then $P\\times E\\to P\\times_G E$ is another principal\n$G$-bundle. In the case $P = EG$, it follows that if $E$ is a contractible\nspace on which $G$ acts, then the quotient $EG\\times_G E$ is a model for $BG$.\nRecall that $EG$ is contractible. Therefore, if $E$ is a contractible space on\nwhich $G$ acts freely, then the quotient $G\\backslash E$ is a model for $BG$.\nOf course, one can run the same argument in the case that $G$ acts on $E$ from\nthe right. Although the construction with simplicial sets provided us with a\nvery concrete description of the classifying space of a group $G$, we could\nhave chosen any principal action on a contractible space in order to obtain a\nmodel for $BG$.\n\nSuppose $X$ is a pointed path connected space. Remember that $X$ has a\ncontractible path space $PX = X^I_\\ast$. The canonical map $PX\\to X$ is a\nfibration, with fiber $\\Omega X$.\n\nConsider the case when $X = BG$. 
Then, we can compare the above fibration with\nthe fiber bundle $EG\\to BG$:\n\\begin{equation*}\n    \\xymatrix{\n\tG\\ar[r]\\ar[d] & \\Omega BG\\ar[d]\\\\\n\t\\ast \\simeq EG\\ar[d]\\ar@{-->}[r] & PBG\\simeq \\ast \\ar[d]\\\\\n\tBG\\ar@{=}[r] & BG\n    }\n\\end{equation*}\nThe map $EG\\to BG$ is nullhomotopic; a choice of a nullhomotopy is exactly a\nlift into the path space. Therefore, the dotted map $EG\\to PBG$ exists in the\nabove diagram. As $EG$ and $PBG$ are both contractible, we conclude that\n$\\Omega BG$ is weakly equivalent to $G$. In fact, this weak equivalence is an\n$H$-map, i.e., it commutes up to homotopy with the multiplication on both\nsides.\n\\begin{remark}[Milnor]\n    If $X$ is a countable CW-complex, then $\\Omega X$ is not a CW-complex, but\n    it is \\emph{homotopy} equivalent (not just weakly equivalent) to one.\n    Moreover, $\\Omega X$ is weakly equivalent to a topological group $GX$ such\n    that $BGX\\simeq X$.\n\\end{remark}\n\\subsection{Examples}\nWe claim that $BU(n)\\simeq \\Gra_n(\\cC^\\infty)$. To see this, let\n$V_n(\\cC^\\infty)$ be the contractible space of complex $n$-frames in\n$\\cC^\\infty$, i.e., isometric embeddings of $\\cc^n$ into $\\cc^\\infty$. The\nLie group $U(n)$ acts principally on $V_n(\\cC^\\infty)$ by precomposition, and the\nquotient $V_n(\\cC^\\infty)/U(n)$ is exactly the Grassmannian\n$\\Gra_n(\\cC^\\infty)$. As $\\Gra_n(\\cC^\\infty)$ is the quotient of a principal action\nof $U(n)$ on a contractible space, our discussion in the previous section\nimplies the desired claim.\n\nLet $G$ be a compact Lie group (e.g., a finite group).\n\\begin{theorem}[Peter-Weyl]\n    There exists an embedding $G\\hookrightarrow U(n)$ for some $n$.\n\\end{theorem}\nSince $U(n)$ acts principally on $V_n(\\cC^\\infty)$, it follows that $G$ also acts\nprincipally on $V_n(\\cc^\\infty)$. Therefore $V_n(\\cc^\\infty)/G$ is a model for\n$BG$. This is not necessarily the most economical description of $BG$.\n\nFor instance, in the case of the symmetric group $\\Sigma_n$, we have a much\nnicer geometric description of the classifying space. Let\n$\\mathrm{Conf}_n(\\RR^k)$ denote the space of embeddings $\\{1,\\cdots,n\\}\\to \\RR^k$\n(ordered $n$-tuples of distinct points). This space is definitely \\emph{not}\ncontractible! However, the space $\\mathrm{Conf}_n(\\RR^\\infty)$ \\emph{is}\ncontractible. The symmetric group obviously acts freely on this space (for finite\ngroups, a principal action is the same as a free action). It follows that\n$B\\Sigma_n$ is the space of \\emph{un}ordered configurations of $n$ distinct\npoints in $\\RR^\\infty$. Using Cayley's theorem from classical group theory, we\nfind that if $G$ is a finite group of order $n$, a model for $BG$ is the quotient\n$\\mathrm{Conf}_n(\\RR^\\infty)/G$.\n\nWe conclude this chapter with a construction of Eilenberg-MacLane spaces via\nclassifying spaces. If $A$ is a topological abelian group, then the\nmultiplication $\\mu:A\\times A\\to A$ is a homomorphism. Applying the classifying\nspace functor begets a map $m:BA\\times BA\\to BA$. If $A$ is discrete, then\n$BA = K(A, 1)$. The map $m$ above gives a topological abelian group model for\n$K(A, 1)$. There is nothing preventing us from iterating this construction: the\nspace $B^2 A$ sits in a fibration\n$$BA \\to EBA\\simeq \\ast \\to B^2 A.$$\nIt follows from the long exact sequence in homotopy that the homotopy groups of\n$B^2 A$ are the same as those of $BA$, but shifted up by one. 
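In symbols -- a quick check, using the long exact sequence of this fibration, the contractibility of $EBA$, and $BA = K(A,1)$:\n$$\\pi_k(B^2 A)\\cong\\pi_{k-1}(BA) =\n\\begin{cases}\nA & k = 2\\\\\n0 & \\text{otherwise,}\n\\end{cases}$$\nso $B^2 A = K(A, 2)$. 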
Repeating this\nprocedure multiple times gives us an explicit model for $K(A,n)$:\n$$B^n A = K(A, n).$$\n
{"text": "\\subsection{Accuracy Measures}\n\\label{mase}\n\nChoosing an error measure for both model selection and evaluation is not\n    straightforward when working with intermittent demand, as shown, for\n    example, by \\cite{syntetos2005}, and one should understand the trade-offs\n    between measures.\n\\cite{hyndman2006} provide a study of measures with real-life data taken from\n    the popular M3-competition and find that most standard measures degenerate\n    under many scenarios.\nThey also provide a classification scheme for which we summarize the main\n    points as they apply to the UDP case:\n\\begin{enumerate}\n\\item \\textbf{Scale-dependent Errors}:\nThe error is reported in the same unit as the raw data.\nTwo popular examples are the root mean square error (RMSE) and mean absolute\n    error (MAE).\nThey may be used for model selection and evaluation within a pixel, and are\n    intuitively interpretable; however, they may not be used to compare errors\n    of, for example, a low-demand pixel (e.g., at the UDP's service\n    boundary) with that of a high-demand pixel (e.g., downtown).\n\\item \\textbf{Percentage Errors}:\nThe error is derived from the percentage errors of individual forecasts per\n    time step, and is also intuitively interpretable.\nA popular example is the mean absolute percentage error (MAPE) that is the\n    primary measure in most forecasting competitions.\nWhereas such errors could be applied both within and across pixels, they\n    cannot be calculated reliably for intermittent demand.\nIf only one time step exhibits no demand, the result is a divide-by-zero\n    error.\nThis often occurs even in high-demand pixels due to the slicing.\n\\item \\textbf{Relative Errors}:\nA workaround is to calculate a scale-dependent error for the test day and\n    divide it by the same measure calculated with forecasts of a simple\n    benchmark method (e.g., na\\\"{i}ve method).\nAn example could be\n    $\\text{RelMAE} = \\text{MAE} / \\text{MAE}_\\text{bm}$.\nNevertheless, even simple methods create (near-)perfect forecasts, and then\n    $\\text{MAE}_\\text{bm}$ becomes (close to) $0$.\nThese numerical instabilities occurred so often in our studies that we argue\n    against using such measures.\n\\item \\textbf{Scaled Errors}:\n\\cite{hyndman2006} contribute this category and introduce the mean absolute\n    scaled error (MASE).\nIt is defined as the MAE from the actual forecasting method on the test day\n    (i.e., \"out-of-sample\") divided by the MAE from the (seasonal) na\\\"{i}ve\n    method on the entire training set (i.e., \"in-sample\").\nA MASE of $1$ indicates that a forecasting method has the same accuracy\n    on the test day as the (seasonal) na\\\"{i}ve method applied on a longer\n    horizon, and lower values imply higher accuracy.\nWithin a pixel, its results are identical to the ones obtained with MAE.\nAlso, we acknowledge recent publications, for example, \\cite{prestwich2014} or\n    \\cite{kim2016}, showing other ways of tackling the difficulties mentioned.\nHowever, only the MASE provided numerically stable results for all\n    forecasts in our study.\n\\end{enumerate}\nConsequently, we use the MASE with a seasonal na\\\"{i}ve benchmark as the\n    primary measure in this paper.\nWith the previously introduced notation, it is defined as 
follows:\n$$\n\\text{MASE}\n:=\n\\frac{\\text{MAE}_{\\text{out-of-sample}}}{\\text{MAE}_{\\text{in-sample}}}\n=\n\\frac{\\text{MAE}_{\\text{forecasts}}}{\\text{MAE}_{\\text{training}}}\n=\n\\frac{\\frac{1}{H} \\sum_{h=1}^H |y_{T+h} - \\hat{y}_{T+h}|}\n     {\\frac{1}{T-k} \\sum_{t=k+1}^T |y_{t} - y_{t-k}|}\n$$\nThe denominator can only become $0$ if the seasonal na\\\"{i}ve benchmark makes\n    a perfect forecast on each day in the training set except the first seven\n    days, which never happened in our case study involving hundreds of\n    thousands of individual model trainings.\nFurther, as per the discussion in the subsequent Section \\ref{decomp}, we also\n    calculate peak-MASEs where we leave out the time steps of non-peak times\n    from the calculations.\nFor this analysis, we define all time steps that occur at lunch (i.e., noon to\n    2 pm) and dinner time (i.e., 6 pm to 8 pm) as peak.\nAs time steps in non-peak times typically average no or very low order counts,\n    a UDP may choose to not actively forecast these at all and be rather\n    interested in the accuracies of forecasting methods during peaks only.\n\nWe conjecture that percentage error measures may be usable for UDPs facing a\n    higher overall demand with no intra-day down-times in between but have to\n    leave that to a future study.\nYet, even with high and steady demand, divide-by-zero errors are likely to\n    occur.\n
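\nAs a toy illustration of the scale of this measure (numbers invented for illustration, not taken from our case study): if a forecasting method achieves $\\text{MAE}_{\\text{forecasts}} = 0.5$ orders on the test day while the seasonal na\\\"{i}ve method achieved $\\text{MAE}_{\\text{training}} = 2.0$ orders on the training set, then $\\text{MASE} = 0.5 / 2.0 = 0.25$, i.e., the method's absolute errors are on average a quarter of the benchmark's.\n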
%!TEX root = kPtx_paper.tex\n\\section*{Discussion}\n\\subsection*{Summary}\nA small-tip-angle k-space domain parallel transmit pulse design algorithm was proposed\nthat divides up the calculation of a matrix relating a target excitation pattern to the pulses that produce it \ninto a set of independent problems for patches of target excitation k-space locations,\neach of which is influenced by a local neighborhood of points on the excitation k-space trajectory.\nThe division of the problem into patches of target locations creates an opportunity for fine parallelization,\nwhile the limited neighborhood sizes lead to small problem sizes for each patch. \nCompared to the original k-space-based algorithm of Ref. \\cite{Katscher:2003:Magn-Reson-Med:12509830},\nthe L-curve and matrix size results showed that \nthe new algorithm produces much smaller matrix sizes that can be calculated more quickly,\nwith the tradeoff of increased excitation error or RF power. \nResults showed that the algorithm also enables compensation of off-resonance, \nwhich has not previously been described in a k-space domain design. \nCompared to the widely used spatial domain method of Ref. \\cite{Grissom:2006:MRM},\nthe new algorithm is non-iterative and can be finely parallelized to achieve shorter design times, \nand results showed that it can use coarser target grid sizes while avoiding Gibbs ringing, \nagain with the tradeoff of increased excitation error or RF power.\nThe performance of off-resonance-compensated spatial domain and k-space domain pulse designs was similar,\nand the methods were similarly sensitive to excitation k-space undersampling. \n\n\\subsection*{Applications and Extensions}\nThis work was initially motivated by the observation that spatial domain parallel pulse designs can be very slow\nfor 3D problems with large grid sizes, \nrequiring both many iterations and considerable computation per iteration. \nIt is anticipated that the proposed k-space domain algorithm will be most useful for these types of large $>$2D problems,\nwhich include 3D spatial designs \\cite{malik2012tailored} and 2D and 3D spatial-spectral designs \\cite{stenger2000three,yang2010four,davids2016fast}\nwhere full matrix construction and inversion is infeasible due to the problem size,\nand an iterative design can require several minutes to solve. \nFurthermore, unlike an iterative spatial domain design, the proposed algorithm does not need to be repeated if the target pattern changes.\nThis means that it could have a considerable computational speed advantage for magnitude least-squares designs \\cite{setsompop2008magnitude,malik:mrm:2015}. \nThe method could also be used to initialize spatial-domain designs to reduce the number of iterations required to reach a target cost. \nFinally, while simple Tikhonov RF power regularization was used in the designs presented here,\nmore sophisticated regularization could be incorporated to, e.g., control peak RF power via adaptive regularization \\cite{Yip:2005:Magn-Reson-Med:16155881},\nor to enforce array compression by projecting the weights into the null space of a compression matrix \\cite{cao2016array}.\nIn such designs, it would be beneficial to pre-compute and store the lower triangular elements of the $\\bm{S}^H\\bm{S}$ matrices\nso they need not be re-computed as the regularization changes over iterations.  
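\nFor reference, writing $\\bm{S}$ for a patch's system matrix, $\\bm{d}$ for its target samples, $\\bm{b}$ for the pulse samples, and $\\lambda$ for the Tikhonov parameter (generic notation for the patch-wise problem sketched above), the regularized least-squares problem and its normal equations read\n$$\\min_{\\bm{b}} \\|\\bm{S}\\bm{b}-\\bm{d}\\|_2^2 + \\lambda\\|\\bm{b}\\|_2^2\n\\quad\\Longleftrightarrow\\quad\n(\\bm{S}^H\\bm{S} + \\lambda\\bm{I})\\,\\bm{b} = \\bm{S}^H\\bm{d},$$\nwhich is why storing the lower triangular elements of the Hermitian matrices $\\bm{S}^H\\bm{S}$ allows the patch-wise solves to be repeated cheaply as $\\lambda$ changes.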
\n\n\\section*{Conclusion}\nThe proposed k-space domain algorithm accelerates and finely parallelizes parallel transmission pulse design,\nwith a modest tradeoff of excitation error and RMS RF amplitude.\n
{"text": "% Activate the following line by filling in the right side. If for example the name of the root file is Main.tex, write\n% \"...root = Main.tex\" if the chapter file is in the same directory, and \"...root = ../Main.tex\" if the chapter is in a subdirectory.\n \n%!TEX root =  \n\n\\chapter[Processor Instructions]{Processor Instructions}\n\n\\section{Processor Instructions}\n\n\\subsection*{AAA}\nASCII Adjust after Addition\n\n\\begin{tabular}{l l l l}\nOpcode & Instruction & Clocks & Description\\\\\n37 & AAA & 4 & ASCII adjust AL after addition\\\\\n\\end{tabular}\n\n\\subsubsection{Operation}\n\\begin{verbatim}\nIF ((AL AND 0FH) > 9) OR (AF = 1) THEN\n  AL := AL + 6;\n  AH := AH +1;\n  AF := 1;\n  CF := 1;\nELSE\n  CF := 0;\n  AF := 0;\nENDIF\nAL := AL AND 0FH;\n\\end{verbatim}\n       \n\\subsubsection{Discussion}\nCode AAA only following an ADD instruction that leaves a byte result in the AL register. The lower nibbles of the ADD operands should be in the range 0 through 9 (BCD digits) so that AAA adjusts AL to contain the correct decimal digit result. If ADD produced a decimal carry, AAA increments the AH register and sets the carry (CF) and auxiliary carry (AF) flags to 1. If ADD produced no decimal carry, AAA clears the carry and auxiliary flags (0) and leaves AH unchanged. In either case, AL is left with its upper nibble set to 0. To convert AL to an ASCII result, follow the AAA instruction with OR AL, 30H.\n\n\\subsubsection{Flags Affected}\nAF and CF as described in the Discussion section; OF, SF, ZF, and PF are undefined.\n\n\\subsubsection{Exceptions by Mode}\n\nProtected\n\nNone\n\nReal Address\n\nNone\n\nVirtual 8086\n\nNone\n\n\\subsection*{AAD} \n\nASCII Adjust AX before Division \n\n\\begin{tabular}{l l l l}\nOpcode & Instruction & Clocks & Description\\\\\nD5 0A & AAD & 19 & ASCII adjust AX before division\\\\\n\\end{tabular}\n\n\\subsubsection{Operation}\n\\begin{verbatim}\n  AL := AH * 0AH + AL; \n  AH := 0;\n\\end{verbatim}\n\n\\subsubsection{Discussion}\n\nAAD prepares 2 unpacked BCD digits (the least significant digit in AL, the most significant digit in AH) for a division operation that will yield an unpacked result. This is done by setting AL to AL + (10 * AH), and then setting AH to 0. AX is then equal to the binary equivalent of the original unpacked 2-digit number.\n\n\\subsubsection{Flags Affected}\nSF, ZF, and PF as described in Appendix A; OF, AF, and CF are undefined\n\n\\subsubsection{Exceptions by Mode}\n\nProtected\n\nNone\n\nReal Address\n\nNone\n\nVirtual 8086\n\nNone\n\n\\subsection*{AAM}\n\nASCII adjust AX after multiply\n\n\\begin{tabular}{l l l l}\nOpcode & Instruction & Clocks & Description\\\\\nD4 0A & AAM & 17 & ASCII adjust AX after multiply\\\\\n\\end{tabular}\n\n\\subsubsection{Operation}\n\\begin{verbatim}\n  AH := AL / 0AH; \n  AL := AL MOD 0AH;\n\\end{verbatim}\n\n\\subsubsection{Discussion}\n\nCode AAM only following a MUL instruction on two unpacked BCD digits that leaves the result in the AX register. AL contains the MUL result, because it is always less than 100. 
AAM unpacks this result by dividing AL by 10, leaving the quotient (most significant digit) in AH and the remainder (least significant digit) in AL.\n\n\\subsubsection{Flags Affected}\nSF, ZF, and PF as described in Appendix A; OF, AF, and CF are undefined\n\n\\subsubsection{Exceptions by Mode} \n\nProtected\n\nNone\n\nReal Address\n\nNone\n\nVirtual 8086\n\nNone\n\n\\subsection*{AAS}\n\nASCII Adjust AL after Subtraction\n\n\\begin{tabular}{l l l l}\nOpcode & Instruction & Clocks & Description\\\\\n3F & AAS & 4 & ASCII adjust AL after subtraction\\\\\n\\end{tabular}\n\n\\subsubsection{Operation}\n\\begin{verbatim}\nIF (AL AND 0FH) > 9 OR AF = 1 THEN\n  AL := AL - 6;\n  AH := AH -1;\n  AF := 1;\n  CF := 1;\nELSE\n  CF := 0;\n  AF := 0;\nENDIF\nAL := AL AND 0FH;\n\\end{verbatim}\n\n\\subsubsection{Discussion}\nCode AAS only following a SUB instruction that leaves the byte result in the AL register. The lower nibbles of the SUB operands should be in the range 0 through 9 (BCD digits) so that AAS adjusts AL to contain the correct decimal digit result. If SUB produced a decimal carry, AAS decrements the AH register and sets the carry (CF) and auxiliary carry (AF) flags to 1. If SUB produced no decimal carry, AAS clears the carry and auxiliary carry flags (0) and leaves AH unchanged. In either case, AL is left with its upper nibble set to 0. To convert AL to an ASCII result, follow the AAS with OR AL, 30H.\n\n\\subsubsection{Flags Affected}\n\nAF and CF as described in the Discussion section; OF, SF, ZF, and PF are undefined\n\n\\subsubsection{Exceptions by Mode}\n\nProtected\n\nNone\n\nReal Address\n\nNone\n\nVirtual 8086\n\nNone\n
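\nAs a worked illustration of these unpacked BCD adjustments (register values chosen purely for illustration), adding the decimal digits 8 and 5 with AAA proceeds as follows:\n\\begin{verbatim}\nMOV AX, 0008H   ; AH = 00H, AL = 08H\nADD AL, 05H     ; AL = 0DH (13); low nibble > 9\nAAA             ; AH = 01H, AL = 03H: unpacked digits 1 and 3\nOR  AL, 30H     ; AL = 33H, the ASCII character '3'\n\\end{verbatim}\n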
\\chapter{Controlling the code generation}\nSeveral options of the \\faust compiler allow controlling the generated C++ code. By default the computations are done sample by sample in a single loop. But the compiler can also generate \\textit{vector} and \\textit{parallel} code.\n\n\\section{Vector code generation}\nModern C++ compilers are able to do autovectorization, that is, to use SIMD instructions to speed up the code. These instructions can typically operate in parallel on short vectors of 4 single precision floating point numbers, thus leading to a theoretical speedup of $\\times4$. \nAutovectorization of C/C++ programs is a difficult task. Current compilers are very sensitive to the way the code is arranged. In particular, overly complex loops can prevent autovectorization. The goal of the vector code generation is to rearrange the C++ code in a way that facilitates the autovectorization job of the C++ compiler. Instead of generating a single sample computation loop, it splits the computation into several simpler loops that communicate by vectors.\n\nThe vector code generation is activated by passing the \\lstinline!--vectorize! (or \\lstinline!-vec!) option to the \\faust compiler. Two additional options are available:  \\lstinline!--vec-size <n>! controls the size of the vector (by default 32 samples) and \\lstinline!--loop-variant 0/1! gives some additional control on the loops.  \n\nTo illustrate the difference between scalar code and vector code, let's take the computation of the RMS (Root Mean Square) value of a signal.  Here is the \\faust code that computes the Root Mean Square of a sliding window of 1000 samples:\n\\label{rms}\n\\begin{lstlisting}\n// Root Mean Square of n consecutive samples\nRMS(n) = square : mean(n) : sqrt;\n\n// Square of a signal\nsquare(x) = x * x;\n\n// Mean of n consecutive samples of a signal\n// (uses fixpoint to avoid the accumulation of\n// rounding errors) \nmean(n) = float2fix : integrate(n) : \n          fix2float : /(n); \n\n// Sliding sum of n consecutive samples\nintegrate(n,x) = x - x@n : +~_;\n\n// Conversion between float and fixed point\nfloat2fix(x) = int(x*(1<<20));      \nfix2float(x) = float(x)/(1<<20);    \n\n// Root Mean Square of 1000 consecutive samples\nprocess = RMS(1000);\n\\end{lstlisting}\n\nThe compute() method generated in scalar mode is the following:\n\n\\begin{lstlisting}\nvirtual void compute (int count, \n                      float** input, \n                      float** output) \n{\n  float* input0 = input[0];\n  float* output0 = output[0];\n  for (int i=0; i<count; i++) {\n    float fTemp0 = input0[i];\n    int iTemp1 = int(1048576*fTemp0*fTemp0);\n    iVec0[IOTA&1023] = iTemp1;\n    iRec0[0] = ((iVec0[IOTA&1023] + iRec0[1])\n                    - iVec0[(IOTA-1000)&1023]);\n    output0[i] = sqrtf(9.536744e-10f * \n                       float(iRec0[0]));\n    // post processing\n    iRec0[1] = iRec0[0];\n    IOTA = IOTA+1;\n  }\n}\n\\end{lstlisting}\n\nThe \\lstinline!-vec!\n
option leads to the following reorganization of the code:\n\\begin{lstlisting}\nvirtual void compute (int fullcount, \n                      float** input, \n                      float** output) \n{\n  int iRec0_tmp[32+4];\n  int* iRec0 = &iRec0_tmp[4];\n  for (int index=0; index<fullcount; index+=32) \n  {\n    int count = min (32, fullcount-index);\n    float* input0 = &input[0][index];\n    float* output0 = &output[0][index];\n    for (int i=0; i<4; i++) \n      iRec0_tmp[i]=iRec0_perm[i];\n    // SECTION : 1\n    for (int i=0; i<count; i++) {\n      iYec0[(iYec0_idx+i)&2047] =\n               int(1048576*input0[i]*input0[i]);\n    }\n    // SECTION : 2\n    for (int i=0; i<count; i++) {\n      iRec0[i] = ((iYec0[i] + iRec0[i-1]) - \n               iYec0[(iYec0_idx+i-1000)&2047]);\n    }\n    // SECTION : 3\n    for (int i=0; i<count; i++) {\n      output0[i] = sqrtf((9.536744e-10f * \n                 float(iRec0[i])));\n    }\n    // SECTION : 4\n    iYec0_idx = (iYec0_idx+count)&2047;\n    for (int i=0; i<4; i++)\n      iRec0_perm[i]=iRec0_tmp[count+i];\n  }\n}\n\\end{lstlisting}\n\nWhile the second version of the code is more complex, it turns out to be much easier to vectorize efficiently by the C++ compiler. Using Intel icc 11.0, with the exact same compilation options: \\texttt{-O3 -xHost -ftz -fno-alias -fp-model fast=2}, the scalar version leads to a throughput performance of 129.144  MB/s, while the vector version achieves 359.548  MB/s, a speedup of x2.8 ! \n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[scale=0.75]{images/compiler-stack}\n  \\caption{\\faust's stack of code generators}   \n  \\label{fig:stack}\n\\end{figure}\n\nThe vector code generation is built on top of the scalar code generation (see figure \\ref{fig:stack}). Every time an expression needs to be compiled, the compiler checks if it requires a separate loop or not. It applies some simple rules for that. Expressions that are shared (and are complex enough) are good candidates to be compiled in a separate loop, as well as recursive expressions and expressions used in delay lines. \n\nThe result is a directed graph in which each node is a computation loop (see Figure \\ref{fig:loopgraph}). This graph is stored in the klass object and a topological sort is applied to it before printing the code. \n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[scale=0.75]{graphs/loopgraph2}\n  \\caption{The result of the -vec option is a directed acyclic graph (DAG) of small computation loops}   \n  \\label{fig:loopgraph}\n\\end{figure}\n\n\\section{Parallel code generation}\n\nThe parallel code generation is activated by passing either the \\lstinline!--openMP! (or \\lstinline!-omp!) option or the \\lstinline!--scheduler! (or \\lstinline!-sch!) option. It implies the \\lstinline!-vec! options as the parallel code generation is built on top of the vector code generation.  \n\n\\subsection{The OpenMP code generator}\n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[scale=0.5,angle=-90]{images/openmp-model}\n  \\caption{OpenMP is based on a fork-join model}   \n  \\label{fig:openmp}\n\\end{figure}\n\nThe \\lstinline!--openMP! (or \\lstinline!-omp!) option given to the \\faust compiler will insert appropriate OpenMP directives in the C++ code. OpenMP (http://wwww.openmp.org) is a well established API that is used to explicitly define direct multi-threaded, shared memory parallelism. It is based on a fork-join model of parallelism (see figure \\ref{fig:openmp}). 
\nParallel regions are delimited by \\lstinline!#pragma omp parallel! constructs. At the entrance of a parallel region, a team of parallel threads is activated. The code within a parallel region is executed by each thread of the parallel team until the end of the region. \n\n\\begin{lstlisting}\n#pragma omp parallel\n{\n  // the code here is executed simultaneously by \n  // every thread of the parallel team\n  ...\n}\n\\end{lstlisting}\n\nIn order not to have every thread redundantly doing the exact same work, OpenMP provides specific \\textit{work-sharing} directives. For example, \\lstinline!#pragma omp sections! allows the work to be broken into separate, discrete sections, each section being executed by one thread:\n\n\\begin{lstlisting}\n#pragma omp parallel\n{\n  #pragma omp sections\n  {\n    #pragma omp section\n    {\n      // job 1\n    }\n    #pragma omp section\n    {\n      // job 2\n    }\n    ...\n  }\n\n  ...\n}\n\\end{lstlisting}\n\n\\subsection{Adding OpenMP directives}\nAs said before, the parallel code generation is built on top of the vector code generation. The graph of loops produced by the vector code generator is topologically sorted in order to detect the loops that can be computed in parallel. The first set $S_0$ (loops $L1$, $L2$ and $L3$ in the DAG of Figure \\ref{fig:loopgraph}) contains the loops that don't depend on any other loops; the set $S_1$ contains the loops that only depend on loops of $S_0$ (that is, loops $L4$ and $L5$), etc. \n\nAs all the loops of a given set $S_n$ can be computed in parallel, the compiler will generate a \\lstinline!sections! construct with a \\lstinline!section! for each loop. \n\\begin{lstlisting}\n  #pragma omp sections\n  {\n    #pragma omp section\n    for (...) {\n      // Loop 1\n    }\n    #pragma omp section\n    for (...) {\n      // Loop 2\n    }\n    ...\n  }\n\\end{lstlisting}\n \nIf a given set contains only one loop, then the compiler checks to see if the loop can be parallelized (no recursive dependencies) or not. If it can be parallelized, it generates:\n\\begin{lstlisting}\n  #pragma omp for\n  for (...) {\n   // Loop code\n  }\n\\end{lstlisting}\notherwise it generates a \\lstinline!single! construct so that only one thread will execute the loop:\n\\begin{lstlisting}\n  #pragma omp single\n  for (...) {\n   // Loop code\n  }\n\\end{lstlisting}\n\n\\subsection{Example of parallel OpenMP code}\nTo illustrate how \\faust uses the OpenMP directives, here is a very simple example, two 1-pole filters in parallel connected to an adder (see figure \\ref{fig:parfilter} for the corresponding block-diagram):\n\n\\begin{lstlisting}\nfilter(c) = *(1-c) : + ~ *(c);\nprocess = filter(0.9), filter(0.9) : +; \n\\end{lstlisting}\n\n\\begin{figure}[htb] \n  \\centering\n  \\includegraphics[width=8cm]{images/filter2}\n  \\caption{two filters in parallel connected to an adder}   \n  \\label{fig:parfilter}\n\\end{figure}\n\nThe corresponding compute() method obtained using the \\lstinline!-omp!\n
option is the following:\n\\begin{lstlisting}\n\nvirtual void compute (int fullcount, \n                      float** input, \n                      float** output) \n{\n  float   fRec0_tmp[32+4];\n  float   fRec1_tmp[32+4];\n  float*  fRec0 = &fRec0_tmp[4];\n  float*  fRec1 = &fRec1_tmp[4];\n  #pragma omp parallel firstprivate(fRec0,fRec1)\n  {\n    for (int index = 0; index < fullcount; \n                                index += 32) \n    {\n      int count = min (32, fullcount-index);\n      float* input0 = &input[0][index];\n      float* input1 = &input[1][index];\n      float* output0 = &output[0][index];\n      #pragma omp single\n      {\n        for (int i=0; i<4; i++) \n          fRec0_tmp[i]=fRec0_perm[i];\n        for (int i=0; i<4; i++) \n          fRec1_tmp[i]=fRec1_perm[i];\n      }\n      // SECTION : 1\n      #pragma omp sections\n      {\n        #pragma omp section\n        for (int i=0; i<count; i++) {\n          fRec0[i] = ((0.1f * input1[i]) \n                   + (0.9f * fRec0[i-1]));\n        }\n        #pragma omp section\n        for (int i=0; i<count; i++) {\n          fRec1[i] = ((0.1f * input0[i]) \n                   + (0.9f * fRec1[i-1]));\n        }\n      }\n      // SECTION : 2\n      #pragma omp for\n      for (int i=0; i<count; i++) {\n        output0[i] = (fRec1[i] + fRec0[i]);\n      }\n      // SECTION : 3\n      #pragma omp single\n      {\n        for (int i=0; i<4; i++) \n          fRec0_perm[i]=fRec0_tmp[count+i];\n        for (int i=0; i<4; i++) \n          fRec1_perm[i]=fRec1_tmp[count+i];\n      }\n    }\n  }\n}\n\n\\end{lstlisting}\n\nThis code requires some comments:\n\n\\begin{enumerate}\n\\item The parallel construct \\lstinline!#pragma omp parallel! is the fundamental construct that starts parallel execution. The number of parallel threads is generally the number of CPU cores but it can be controlled in several ways.\n\n\\item Variables external to the parallel region are shared by default. The pragma \\lstinline!firstprivate(fRec0,fRec1)! indicates that each thread should have its private copy of fRec0 and fRec1. The reason is that accessing shared variables requires an indirection and is quite inefficient compared to private copies.\n\n\\item The top level loop \\lstinline!for (int index = 0;...)...! is executed by all threads simultaneously. The subsequent work-sharing directives inside the loop will indicate how the work must be shared between the threads. \n\n\\item Please note that an implied barrier exists at the end of each work-sharing region. All threads must have executed the barrier before any of them can continue.\n\n\\item The work-sharing directive \\lstinline!#pragma omp single! indicates that this first section will be executed by only one thread (any of them).\n\n\\item The work-sharing directive \\lstinline!#pragma omp sections! indicates that each corresponding \\lstinline!#pragma omp section!, here our two filters, will be executed in parallel.\n\n\\item The loop construct \\lstinline!#pragma omp for! specifies that the iterations of the associated loop will be executed in parallel. The iterations of the loop are distributed across the parallel threads. For example, if we have two threads, the first one can compute indices between 0 and count/2 and the other one between count/2 and count. \n\n\\item Finally \\lstinline!#pragma omp single!  
in section 3 indicates that this last section will be executed by only one thread (any of them).\n\n\\end{enumerate}\n\n\\subsection{The scheduler code generator}\n With the \\lstinline!--scheduler! (or \\lstinline!-sch!) option given to the \\faust compiler, the computation graph is cut into separate computation loops (called ``tasks''), and a ``Work Stealing Scheduler'' is used to activate and execute them following their dependencies. A pool of worker threads is created and each thread uses its own local WSQ (Work Stealing Queue) of tasks. A WSQ is a special queue with a Push operation, a ``private'' LIFO Pop operation and a ``public'' FIFO Pop operation.\n\nStarting from a ready task, each thread follows the dependencies, possibly pushing ready sub-tasks into its own local WSQ. When no more tasks can be activated on a given computation path, the thread pops a task from its local WSQ. If the WSQ is empty, then the thread is allowed to ``steal'' tasks from other threads' WSQs.\n\nThe local LIFO Pop operation allows better cache locality, and the FIFO steal Pop takes a ``larger chunk'' of work to be done. The reason for this is that many work stealing workloads are divide-and-conquer in nature: stealing one of the oldest tasks implicitly also steals a (potentially) large subtree of computations that will unfold once that piece of work is stolen and run.\n\nCompared to the OpenMP model (\\lstinline!-omp!) the new model is worse for simple \\faust programs and usually starts to behave comparably or sometimes better for ``complex enough'' \\faust programs. In any case, since OpenMP does not behave so well with GCC compilers (only quite recent versions like GCC 4.4 start to show some improvements), and is unusable on OSX in real-time contexts, this new scheduler option has its own value.  We plan to improve it by adding a ``pipelining'' idea in the future.\n\n\\subsection{Example of parallel scheduler code}\nTo illustrate how \\faust generates the scheduler code, here is a very simple example, two 1-pole filters in parallel connected to an adder (see figure \\ref{fig:parfilter} for the corresponding block-diagram):\n\n\\begin{lstlisting}\nfilter(c) = *(1-c) : + ~ *(c);\nprocess = filter(0.9), filter(0.9) : +; \n\\end{lstlisting}\n\n\nWhen the \\lstinline!-sch! option is used, the content of the additional \\textit{architecture/scheduler.h} file is inserted in the generated code. 
The \\lstinline'compute()' and \\lstinline'computeThread()' methods are the following:\n\\begin{lstlisting}\n\nvirtual void compute (int fullcount, \n                      float** input, \n                      float** output) \n{\n\tGetRealTime();\n\tthis->input = input;\n\tthis->output = output;\n\tStartMeasure();\n\tfor (fIndex = 0; fIndex < fullcount; fIndex += 32) {\n\t\tfFullCount = min (32, fullcount-fIndex);\n\t\tTaskQueue::Init();\n\t\t// Initialize end task\n\t\tfGraph.InitTask(1,1);\n\t\t// Only initialize tasks with inputs\n\t\tfGraph.InitTask(4,2);\n\t\tfIsFinished = false;\n\t\tfThreadPool.SignalAll(fDynamicNumThreads - 1);\n\t\tcomputeThread(0);\n\t\twhile (!fThreadPool.IsFinished()) {}\n\t}\n\tStopMeasure(fStaticNumThreads, \n\t\tfDynamicNumThreads);\n}\nvoid computeThread (int cur_thread) {\n\tfloat* \tfRec0 = &fRec0_tmp[4];\n\tfloat* \tfRec1 = &fRec1_tmp[4];\n\t// Init graph state\n\t{\n\t\tTaskQueue taskqueue;\n\t\tint tasknum = -1;\n\t\tint count = fFullCount;\n\t\t// Init input and output\n\t\tFAUSTFLOAT* input0 = &input[0][fIndex];\n\t\tFAUSTFLOAT* input1 = &input[1][fIndex];\n\t\tFAUSTFLOAT* output0 = &output[0][fIndex];\n\t\tint task_list_size = 2;\n\t\tint task_list[2] = {2,3};\n\t\ttaskqueue.InitTaskList(task_list_size, task_list, fDynamicNumThreads, cur_thread, tasknum);\n\t\twhile (!fIsFinished) {\n\t\t\tswitch (tasknum) {\n\t\t\t\tcase WORK_STEALING_INDEX: { \n\t\t\t\t\ttasknum = TaskQueue::GetNextTask(cur_thread);\n\t\t\t\t\tbreak;\n\t\t\t\t} \n\t\t\t\tcase LAST_TASK_INDEX: { \n\t\t\t\t\tfIsFinished = true;\n\t\t\t\t\tbreak;\n\t\t\t\t} \n\t\t\t\t// SECTION : 1\n\t\t\t\tcase 2: { \n\t\t\t\t\t// LOOP 0x101111680\n\t\t\t\t\t// pre processing\n\t\t\t\t\tfor (int i=0; i<4; i++) fRec0_tmp[i]=fRec0_perm[i];\n\t\t\t\t\t// exec code\n\t\t\t\t\tfor (int i=0; i<count; i++) {\n\t\t\t\t\t\tfRec0[i] = ((1.000000e-01f * (float)input1[i]) + (0.9f * fRec0[i-1]));\n\t\t\t\t\t}\n\t\t\t\t\t// post processing\n\t\t\t\t\tfor (int i=0; i<4; i++) fRec0_perm[i]=fRec0_tmp[count+i];\n\t\t\t\t\t\n\t\t\t\t\tfGraph.ActivateOneOutputTask(taskqueue, 4, tasknum);\n\t\t\t\t\tbreak;\n\t\t\t\t} \n\t\t\t\tcase 3: { \n\t\t\t\t\t// LOOP 0x1011125e0\n\t\t\t\t\t// pre processing\n\t\t\t\t\tfor (int i=0; i<4; i++) fRec1_tmp[i]=fRec1_perm[i];\n\t\t\t\t\t// exec code\n\t\t\t\t\tfor (int i=0; i<count; i++) {\n\t\t\t\t\t\tfRec1[i] = ((1.000000e-01f * (float)input0[i]) + (0.9f * fRec1[i-1]));\n\t\t\t\t\t}\n\t\t\t\t\t// post processing\n\t\t\t\t\tfor (int i=0; i<4; i++) fRec1_perm[i]=fRec1_tmp[count+i];\n\t\t\t\t\t\n\t\t\t\t\tfGraph.ActivateOneOutputTask(taskqueue, 4, tasknum);\n\t\t\t\t\tbreak;\n\t\t\t\t} \n\t\t\t\tcase 4: { \n\t\t\t\t\t// LOOP 0x101111580\n\t\t\t\t\t// exec code\n\t\t\t\t\tfor (int i=0; i<count; i++) {\n\t\t\t\t\t\toutput0[i] = (FAUSTFLOAT)(fRec1[i] + fRec0[i]);\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\ttasknum = LAST_TASK_INDEX;\n\t\t\t\t\tbreak;\n\t\t\t\t} \n\t\t\t}\n\t\t}\n\t}\n}\n\n\\end{lstlisting}\n ", "meta": {"hexsha": "143e09acfbd9e9db71d48f5f965c3284eaf5476d", "size": 17463, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quick-reference/chapters/codegeneration.tex", "max_stars_repo_name": "tinpark/faustdoc", "max_stars_repo_head_hexsha": "31c0492291d1bc71dc5c6f8d4ad6eed04776cf04", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-06-28T14:23:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-12T20:15:37.000Z", "max_issues_repo_path": "quick-reference/chapters/codegeneration.tex", "max_issues_repo_name": 
"tinpark/faustdoc", "max_issues_repo_head_hexsha": "31c0492291d1bc71dc5c6f8d4ad6eed04776cf04", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-09-22T07:11:15.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-22T07:32:27.000Z", "max_forks_repo_path": "quick-reference/chapters/codegeneration.tex", "max_forks_repo_name": "tinpark/faustdoc", "max_forks_repo_head_hexsha": "31c0492291d1bc71dc5c6f8d4ad6eed04776cf04", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-12-01T11:34:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T14:02:06.000Z", "avg_line_length": 42.6968215159, "max_line_length": 498, "alphanum_fraction": 0.685850083, "num_tokens": 4824, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.599788144898396}}
{"text": "\n\\chapter{More on order theory}\n\n\n\\section{Straight maps and separation subsets}\n\n\n\\subsection{Straight maps}\n\\begin{defn}\n\\index{order reflecting}An \\emph{order reflecting} map from a poset~$\\mathfrak{A}$ to a poset~$\\mathfrak{B}$\nis such a function~$f$ that (for every $x,y\\in\\mathfrak{A}$)\n\\[ fx\\sqsubseteq fy \\Rightarrow x\\sqsubseteq y. \\]\n\\end{defn}\n\n\\begin{obvious}\nOrder embeddings are exactly the same as monotone and order reflecting maps.\n\\end{obvious}\n\n\\begin{defn}\n\\index{straight map}Let $f$ be a monotone map from a meet-semilattice\n$\\mathfrak{A}$ to some poset $\\mathfrak{B}$. I call $f$ a \\emph{straight}\nmap when\n\\[\n\\forall a,b\\in\\mathfrak{A}:(fa\\sqsubseteq fb\\Rightarrow fa=f(a\\sqcap b)).\n\\]\n\\end{defn}\n\\begin{prop}\nThe following statements are equivalent for a monotone map~$f$:\n\\begin{enumerate}\n\\item \\label{str-def}$f$ is a straight map.\n\\item \\label{str-le-le}$\\forall a,b\\in\\mathfrak{A}:(fa\\sqsubseteq fb\\Rightarrow fa\\sqsubseteq f(a\\sqcap b))$.\n\\item \\label{str-le-ng}$\\forall a,b\\in\\mathfrak{A}:(fa\\sqsubseteq fb\\Rightarrow fa\\nsqsupset f(a\\sqcap b))$.\n\\item \\label{str-g-nle}$\\forall a,b\\in\\mathfrak{A}:(fa\\sqsupset f(a\\sqcap b)\\Rightarrow fa\\nsqsubseteq fb)$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{\\ref{str-def}$\\Leftrightarrow$\\ref{str-le-le}$\\Leftrightarrow$\\ref{str-le-ng}}] Due\n$fa\\sqsupseteq f(a\\sqcap b)$.\n\\item [{\\ref{str-le-ng}$\\Leftrightarrow$\\ref{str-g-nle}}] Obvious.\n\\end{description}\n\\end{proof}\n\\begin{rem}\nThe definition of straight map can be generalized for any poset $\\mathfrak{A}$\nby the formula\n\\[\n\\forall a,b\\in\\mathfrak{A}:(fa\\sqsubseteq fb\\Rightarrow\\exists c\\in\\mathfrak{A}:(c\\sqsubseteq a\\land c\\sqsubseteq b\\land fa=fc)).\n\\]\n\n\nThis generalization is not yet researched however.\\end{rem}\n\\begin{prop}\nLet $f$ be a monotone map from a meet-semilattice $\\mathfrak{A}$\nto a meet-semilattice $\\mathfrak{B}$. If\n\\[\n\\forall a,b\\in\\mathfrak{A}:f(a\\sqcap b)=fa\\sqcap fb\n\\]\nthen $f$ is a straight map.\\end{prop}\n\\begin{proof}\nLet $fa\\sqsubseteq fb$. Then $f(a\\sqcap b)=fa\\sqcap fb=fa$.\\end{proof}\n\\begin{prop}\nLet $f$ be a monotone map from a meet-semilattice $\\mathfrak{A}$\nto some poset $\\mathfrak{B}$. If $f$ is order reflecting,\nthen $f$ is a straight map.\\end{prop}\n\\begin{proof}\n$fa\\sqsubseteq fb\\Rightarrow a\\sqsubseteq b\\Rightarrow a=a\\sqcap b\\Rightarrow fa=f(a\\sqcap b)$.\\end{proof}\n\nThe following theorem is the main reason of why we are interested in straight maps:\n\\begin{thm}\nIf $f$ is a straight monotone map from a meet-semilattice $\\mathfrak{A}$\nthen the following statements are equivalent:\n\\begin{enumerate}\n\\item \\label{stra-inj}$f$ is an injection.\n\\item \\label{stra-sqe-sqe}$f$ is order reflecting.\n\\item \\label{stra-sq-sq}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow fa\\sqsubset fb)$.\n\\item \\label{stra-sq-ne}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow fa\\neq fb)$.\n\\item \\label{stra-sq-nsqe}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow fa\\nsqsupseteq fb)$.\n\\item \\label{stra-sqe-nsq}$\\forall a,b\\in\\mathfrak{A}:(fa\\sqsubseteq fb\\Rightarrow a\\nsqsupset b)$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{\\ref{stra-inj}$\\Rightarrow$\\ref{stra-sq-sq}}] Let $a,b\\in\\mathfrak{A}$.\nLet $fa=fb\\Rightarrow a=b$. Let $a\\sqsubset b$. 
$fa\\neq fb$ because\n$a\\neq b$. $fa\\sqsubseteq fb$ because $a\\sqsubseteq b$. So $fa\\sqsubset fb$.\n\\item [{\\ref{stra-sqe-sqe}$\\Rightarrow$\\ref{stra-inj}}] Let $a,b\\in\\mathfrak{A}$.\nLet $fa\\sqsubseteq fb\\Rightarrow a\\sqsubseteq b$. Let $fa=fb$. Then\n$a\\sqsubseteq b$ and $b\\sqsubseteq a$ and consequently $a=b$.\n\\item [{\\ref{stra-sq-sq}$\\Rightarrow$\\ref{stra-sqe-sqe}}] Let $\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow fa\\sqsubset fb)$.\nLet $a\\nsqsubseteq b$. Then $a\\sqsupset a\\sqcap b$. So $fa\\sqsupset f(a\\sqcap b)$.\nIf $fa\\sqsubseteq fb$ then $fa\\sqsubseteq f(a\\sqcap b)$ what is\na contradiction.\n\\item [{\\ref{stra-sq-sq}$\\Rightarrow$\\ref{stra-sq-nsqe}$\\Rightarrow$\\ref{stra-sq-ne}}] Obvious.\n\\item [{\\ref{stra-sq-ne}$\\Rightarrow$\\ref{stra-sq-sq}}] Because $a\\sqsubset b\\Rightarrow a\\sqsubseteq b\\Rightarrow fa\\sqsubseteq fb$.\n\\item [{\\ref{stra-sq-nsqe}$\\Leftrightarrow$\\ref{stra-sqe-nsq}}] Obvious.\n\\end{description}\n\\end{proof}\n\n\\subsection{\\label{sep-and-full}Separation subsets and full stars}\n\\begin{defn}\n$\\corestar_{Y}a=\\setcond{x\\in Y}{x\\nasymp a}$ for an element $a$\nof a poset $\\mathfrak{A}$ and $Y\\in\\subsets\\mathfrak{A}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{star!full}\\emph{Full star} of $a\\in\\mathfrak{A}$ is $\\fullstar a=\\corestar_{\\mathfrak{A}}a$.\\end{defn}\n\\begin{prop}\nIf $\\mathfrak{A}$ is a meet-semilattice, then $\\fullstar$ is a straight\nmonotone map.\\end{prop}\n\\begin{proof}\nMonotonicity is obvious. Let $\\fullstar a\\nsqsubseteq\\fullstar(a\\sqcap b)$.\nThen it exists $x\\in\\fullstar a$ such that $x\\notin\\fullstar(a\\sqcap b)$.\nSo $x\\sqcap a\\notin\\fullstar b$ but $x\\sqcap a\\in\\fullstar a$ and\nconsequently $\\fullstar a\\nsqsubseteq\\fullstar b$.\\end{proof}\n\\begin{defn}\n\\index{separation subset}A \\emph{separation subset} of a poset $\\mathfrak{A}$\nis such its subset $Y$ that\n\\[\n\\forall a,b\\in\\mathfrak{A}:(\\corestar_{Y}a=\\corestar_{Y}b\\Rightarrow a=b).\n\\]\n\n\\end{defn}\n\n\\begin{defn}\n\\index{separable}\\index{poset!separable}I call \\emph{separable} such poset\nthat $\\fullstar$ is an injection.\\end{defn}\n\\begin{defn}\n\\index{separable}\\index{poset!strongly separable}I call \\emph{strongly separable} such poset\nthat $\\fullstar$ is order reflecting.\\end{defn}\n\\begin{obvious}\nA poset is separable iff it has a separation subset.\\end{obvious}\n\\begin{obvious}\nA poset is strongly separable iff $\\star$ is order embedding.\\end{obvious}\n\\begin{obvious}\nStrong separability implies separability.\\end{obvious}\n\\begin{defn}\n\\index{disjunction property of Wallman}A poset $\\mathfrak{A}$ has\n\\emph{disjunction property of Wallman} iff for any $a,b\\in\\mathfrak{A}$\neither $b\\sqsubseteq a$ or there exists a non-least element $c\\sqsubseteq b$\nsuch that $a\\asymp c$.\\end{defn}\n\\begin{thm}\n\\label{msl-sep-conds}For a meet-semilattice with least element the\nfollowing statements are equivalent:\n\\begin{enumerate}\n\\item \\label{la1}$\\mathfrak{A}$ is separable.\n\\item \\label{la2}$\\mathfrak{A}$ is strongly separable.\n\\item \\label{la3}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow\\fullstar a\\sqsubset\\fullstar b)$.\n\\item \\label{la4}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow\\fullstar a\\neq\\fullstar b)$.\n\\item \\label{la5}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow\\fullstar a\\nsqsupseteq\\fullstar b)$.\n\\item \\label{la6}$\\forall a,b\\in\\mathfrak{A}:(\\fullstar 
a\\sqsubseteq\\fullstar b\\Rightarrow a\\nsqsupset b)$.\n\\item \\label{la7}$\\mathfrak{A}$ conforms to Wallman's disjunction property.\n\\item \\label{la8}$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow\\exists c\\in\\mathfrak{A}\\setminus\\{\\bot\\}:(c\\asymp a\\land c\\sqsubseteq b))$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{\\ref{la1}$\\Leftrightarrow$\\ref{la2}$\\Leftrightarrow$\\ref{la3}$\\Leftrightarrow$\\ref{la4}$\\Leftrightarrow$\\ref{la5}$\\Leftrightarrow$\\ref{la6}}] By\nthe above theorem.\n\\item [{\\ref{la8}$\\Rightarrow$\\ref{la4}}] Let property~\\ref{la8} hold.\nLet $a\\sqsubset b$. Then it exists element $c\\sqsubseteq b$ such\nthat $c\\neq\\bot$ and $c\\sqcap a=\\bot$. But $c\\sqcap b\\neq\\bot$.\nSo $\\fullstar a\\neq\\fullstar b$.\n\\item [{\\ref{la2}$\\Rightarrow$\\ref{la7}}] Let property~\\ref{la2} hold.\nLet $a\\nsqsubseteq b$. Then $\\fullstar a\\nsqsubseteq\\fullstar b$\nthat is it there exists $c\\in\\fullstar a$ such that $c\\notin\\fullstar b$,\nin other words $c\\sqcap a\\neq\\bot$ and $c\\sqcap b=\\bot$. Let $d=c\\sqcap a$.\nThen $d\\sqsubseteq a$ and $d\\neq\\bot$ and $d\\sqcap b=\\bot$. So\ndisjunction property of Wallman holds.\n\\item [{\\ref{la7}$\\Rightarrow$\\ref{la8}}] Obvious.\n\\item [{\\ref{la8}$\\Rightarrow$\\ref{la7}}] Let $b\\nsqsubseteq a$. Then\n$a\\sqcap b\\sqsubset b$ that is $a'\\sqsubset b$ where $a'=a\\sqcap b$.\nConsequently $\\exists c\\in\\mathfrak{A}\\setminus\\{\\bot\\}:(c\\asymp a'\\land c\\sqsubseteq b)$.\nWe have $c\\sqcap a=c\\sqcap b\\sqcap a=c\\sqcap a'=\\bot$. So $c\\sqsubseteq b$\nand $c\\sqcap a=\\bot$. Thus Wallman's disjunction property holds.\n\\end{description}\n\\end{proof}\n\\begin{prop}\\label{bool-sep}\nEvery boolean lattice is strongly separable.\\end{prop}\n\\begin{proof}\nLet $a,b\\in\\mathfrak{A}$ where $\\mathfrak{A}$ is a boolean lattice\nan $a\\neq b$. Then $a\\sqcap\\bar{b}\\neq\\bot$ or $\\bar{a}\\sqcap b\\neq\\bot$\nbecause otherwise $a\\sqcap\\bar{b}=\\bot$ and $a\\sqcup\\bar{b}=\\top$\nand thus $a=b$. Without loss of generality assume $a\\sqcap\\bar{b}\\neq\\bot$.\nThen $a\\sqcap c\\neq\\bot$ and $b\\sqcap c=\\bot$ for $c=a\\sqcap\\bar{b}\\neq\\bot$,\nthat is our lattice is separable.\n\nIt is strongly separable by theorem~\\ref{msl-sep-conds}.\n\\end{proof}\n\n\\subsection{\\label{atm-sep}Atomically Separable Lattices}\n\\begin{prop}\n``$\\atoms$'' is a straight monotone map (for any meet-semilattice).\\end{prop}\n\\begin{proof}\nMonotonicity is obvious. The rest follows from the formula\n\\[\n\\atoms(a\\sqcap b)=\\atoms a\\cap\\atoms b\n\\]\n(corollary \\ref{atoms-meet}).\\end{proof}\n\\begin{defn}\n\\index{separable!atomically}I will call \\emph{atomically separable}\nsuch a poset that ``$\\atoms$'' is an injection.\\end{defn}\n\\begin{prop}\n$\\forall a,b\\in\\mathfrak{A}:(a\\sqsubset b\\Rightarrow\\atoms a\\subset\\atoms b)$\niff $\\mathfrak{A}$ is atomically separable for a poset $\\mathfrak{A}$.\\end{prop}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{$\\Leftarrow$}] Obvious.\n\\item [{$\\Rightarrow$}] Let $a\\neq b$ for example $a\\nsqsubseteq b$.\nThen $a\\sqcap b\\sqsubset a$; $\\atoms a\\supset\\atoms(a\\sqcap b)=\\atoms a\\cap\\atoms b$\nand thus $\\atoms a\\neq\\atoms b$.\n\\end{description}\n\\end{proof}\n\\begin{prop}\n\\label{atms-is-asep}Any atomistic poset is atomically separable.\\end{prop}\n\\begin{proof}\nWe need to prove that $\\atoms a=\\atoms b\\Rightarrow a=b$. 
\subsection{\label{atm-sep}Atomically Separable Lattices}
\begin{prop}
``$\atoms$'' is a straight monotone map (for any meet-semilattice).\end{prop}
\begin{proof}
Monotonicity is obvious. The rest follows from the formula
\[
\atoms(a\sqcap b)=\atoms a\cap\atoms b
\]
(corollary \ref{atoms-meet}).\end{proof}
\begin{defn}
\index{separable!atomically}I will call a poset \emph{atomically separable}
when ``$\atoms$'' is an injection.\end{defn}
\begin{prop}
$\forall a,b\in\mathfrak{A}:(a\sqsubset b\Rightarrow\atoms a\subset\atoms b)$
iff $\mathfrak{A}$ is atomically separable for a poset $\mathfrak{A}$.\end{prop}
\begin{proof}
~
\begin{description}
\item [{$\Leftarrow$}] Obvious.
\item [{$\Rightarrow$}] Let $a\neq b$, say $a\nsqsubseteq b$.
Then $a\sqcap b\sqsubset a$; $\atoms a\supset\atoms(a\sqcap b)=\atoms a\cap\atoms b$
and thus $\atoms a\neq\atoms b$.
\end{description}
\end{proof}
\begin{prop}
\label{atms-is-asep}Any atomistic poset is atomically separable.\end{prop}
\begin{proof}
We need to prove that $\atoms a=\atoms b\Rightarrow a=b$. But it
is obvious because
\[
a=\bigsqcup\atoms a\quad\text{and}\quad b=\bigsqcup\atoms b.
\]
\end{proof}
\begin{thm}
\label{amstc-sep}A complete lattice is atomistic iff it is atomically
separable.\end{thm}
\begin{proof}
The direct implication is the above proposition. Let's prove the reverse
implication.

Let ``$\atoms$'' be injective. Consider an element $a$ of our
poset. Let $b=\bigsqcup\atoms a$. Obviously $b\sqsubseteq a$ and
thus $\atoms b\subseteq\atoms a$. But if $x\in\atoms a$ then $x\sqsubseteq b$
and thus $x\in\atoms b$. So $\atoms a=\atoms b$. By injectivity
$a=b$, that is $a=\bigsqcup\atoms a$.\end{proof}
\begin{thm}
\label{atomistic-enough}If a lattice with least element is atomic
and separable then it is atomistic.\end{thm}
\begin{proof}
Suppose the contrary, that is $a\sqsupset\bigsqcup\atoms a$. Then,
because our lattice is separable, there exists $c\in\mathfrak{A}$
such that $c\sqcap a\neq\bot$ and $c\sqcap\bigsqcup\atoms a=\bot$.
Because our lattice is atomic, there exists an atom $d\sqsubseteq c\sqcap a$.
$d\sqcap\bigsqcup\atoms a\sqsubseteq c\sqcap\bigsqcup\atoms a=\bot$.
But $d\in\atoms a$. Contradiction.\end{proof}
\begin{thm}
\label{sep-conds}Let $\mathfrak{A}$ be an atomic meet-semilattice
with least element. Then the following statements are equivalent:
\begin{enumerate}
\item \label{sc-sep}$\mathfrak{A}$ is separable.
\item \label{sc-stsep}$\mathfrak{A}$ is strongly separable.
\item \label{sc-at-sep}$\mathfrak{A}$ is atomically separable.
\item \label{sc-wall}$\mathfrak{A}$ conforms to Wallman's disjunction
property.
\item \label{sc-other}$\forall a,b\in\mathfrak{A}:(a\sqsubset b\Rightarrow\exists c\in\mathfrak{A}\setminus\{\bot\}:(c\asymp a\land c\sqsubseteq b))$.
\end{enumerate}
\end{thm}
\begin{proof}
~
\begin{description}
\item [{\ref{sc-sep}$\Leftrightarrow$\ref{sc-stsep}$\Leftrightarrow$\ref{sc-wall}$\Leftrightarrow$\ref{sc-other}}] Proved
above.
\item [{\ref{sc-at-sep}$\Rightarrow$\ref{sc-other}}] Let our semilattice
be atomically separable. Let $a\sqsubset b$. Then $\atoms a\subset\atoms b$
and there exists $c\in\atoms b$ such that $c\notin\atoms a$. $c\neq\bot$
and $c\sqsubseteq b$, from which (taking into account that $c$ is
an atom) $c\sqsubseteq b$ and $c\sqcap a=\bot$. So our semilattice
conforms to the formula~\ref{sc-other}.
\item [{\ref{sc-other}$\Rightarrow$\ref{sc-at-sep}}] Let formula~\ref{sc-other}
hold. Then for any elements $a\sqsubset b$ there exists $c\neq\bot$
such that $c\sqsubseteq b$ and $c\sqcap a=\bot$. Because $\mathfrak{A}$
is atomic, there exists an atom $d\sqsubseteq c$. $d\in\atoms b$ and
$d\notin\atoms a$. So $\atoms a\neq\atoms b$ and $\atoms a\subseteq\atoms b$.
Consequently $\atoms a\subset\atoms b$.
\end{description}
\end{proof}
\begin{thm}
\label{atom-is-sep}Any atomistic poset is strongly separable.\end{thm}
\begin{proof}
$\fullstar x \sqsubseteq \fullstar y \Rightarrow \atoms x \sqsubseteq
\atoms y \Rightarrow x \sqsubseteq y$ because $\atoms x = \fullstar x
\cap \atoms^{\mathfrak{A}}$.
\end{proof}
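\begin{example}
The complete lattice of all subspaces of a vector space is atomistic: its
atoms are the one-dimensional subspaces, and every subspace is the join
(the sum) of the one-dimensional subspaces contained in it. By the last
theorem this lattice is strongly separable.
\end{example}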
\section{Quasidifference and Quasicomplement}

I obtain quasidifference and quasicomplement (and dual quasicomplement)
by replacing $\max$ and $\min$ in the definitions of pseudodifference
and pseudocomplement (and dual pseudocomplement) with $\bigsqcup$
and $\bigsqcap$. Thus quasidifference and (dual) quasicomplement
are generalizations of their pseudo- counterparts.
\begin{rem}
\emph{Pseudocomplements} and \emph{pseudodifferences} are standard
terminology. \emph{Quasi-} counterparts are my neologisms.\end{rem}
\begin{defn}
\index{quasicomplement}Let $\mathfrak{A}$ be a poset, $a\in\mathfrak{A}$.
\emph{Quasicomplement} of $a$ is
\[
a^{\ast}=\bigsqcup\setcond{c\in\mathfrak{A}}{c\asymp a}.
\]

\end{defn}

\begin{defn}
\index{quasicomplement!dual}Let $\mathfrak{A}$ be a poset, $a\in\mathfrak{A}$.
\emph{Dual quasicomplement} of $a$ is
\[
a^{+}=\bigsqcap\setcond{c\in\mathfrak{A}}{c\equiv a}.
\]

\end{defn}
I will denote quasicomplement and dual quasicomplement for a specific
poset~$\mathfrak{A}$ as $a^{\ast(\mathfrak{A})}$ and $a^{+(\mathfrak{A})}$.
\begin{defn}
\index{quasidifference}Let $a,b\in\mathfrak{A}$ where $\mathfrak{A}$
is a distributive lattice. \emph{Quasidifference} of $a$ and $b$
is
\[
a\psetminus b=\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}.
\]

\end{defn}

\begin{defn}
\index{quasidifference}Let $a,b\in\mathfrak{A}$ where $\mathfrak{A}$
is a distributive lattice. \emph{Second quasidifference} of $a$ and
$b$ is
\[
a\mathop\#b=\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\asymp b}.
\]
\end{defn}
\begin{thm}
$a\psetminus b=\bigsqcap\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land a\sqsubseteq b\sqcup z}$
where $\mathfrak{A}$ is a distributive lattice and $a,b\in\mathfrak{A}$.\end{thm}
\begin{proof}
Obviously $\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land a\sqsubseteq b\sqcup z}\subseteq\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}$.
Thus $\bigsqcap\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land a\sqsubseteq b\sqcup z}\sqsupseteq a\psetminus b$.

Let $z\in\mathfrak{A}$ and $z'=z\sqcap a$.

$a\sqsubseteq b\sqcup z\Rightarrow a\sqsubseteq(b\sqcup z)\sqcap a\Leftrightarrow a\sqsubseteq(b\sqcap a)\sqcup(z\sqcap a)\Leftrightarrow a\sqsubseteq(b\sqcap a)\sqcup z'\Rightarrow a\sqsubseteq b\sqcup z'$
and $a\sqsubseteq b\sqcup z\Leftarrow a\sqsubseteq b\sqcup z'$. Thus
$a\sqsubseteq b\sqcup z\Leftrightarrow a\sqsubseteq b\sqcup z'$.

If $z\in\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}$ then
$a\sqsubseteq b\sqcup z$ and thus
\[
z'\in\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land a\sqsubseteq b\sqcup z}.
\]


But $z'\sqsubseteq z$, and thus $\bigsqcap\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land a\sqsubseteq b\sqcup z}\sqsubseteq\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}$.\end{proof}
\begin{rem}
If we drop the requirement that $\mathfrak{A}$ is distributive, the two
formulas for quasidifference (the definition and the last theorem)
diverge.\end{rem}
\begin{obvious}
Dual quasicomplement is the dual of quasicomplement.
\end{obvious}

\begin{obvious}
~
\begin{itemize}
\item Every pseudocomplement is a quasicomplement.
\item Every dual pseudocomplement is a dual quasicomplement.
\item Every pseudodifference is a quasidifference.
\end{itemize}
\end{obvious}
Below we will stick to the more general quasi- operations rather than the pseudo- ones.
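\begin{example}
In the lattice of subspaces of the vector space $\mathbb{R}^{2}$, let $a$
be a one-dimensional subspace. The subspaces disjoint with $a$ are $\bot=\{0\}$
and the one-dimensional subspaces distinct from $a$; their join is the whole
plane, so $a^{\ast}=\top$. But $a^{\ast}\nasymp a$, so the set
$\setcond{c}{c\asymp a}$ has no greatest element and $a$ has no
pseudocomplement, while the quasicomplement is defined.
\end{example}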
If needed,
one can check that a quasicomplement $a^{\ast}$ is a pseudocomplement
by the equation $a^{\ast}\asymp a$ (and analogously with the other quasi- operations).

Next we will express quasidifference through quasicomplement.
\begin{prop}
~
\begin{enumerate}
\item \label{minus-meet}$a\psetminus b=a\psetminus(a\sqcap b)$ for any
distributive lattice;
\item \label{minus-meet2}$a\mathop\#b=a\mathop\#(a\sqcap b)$ for any distributive
lattice with least element.
\end{enumerate}
\end{prop}
\begin{proof}
~
\begin{widedisorder}
\item [{\ref{minus-meet}}] $a\sqsubseteq(a\sqcap b)\sqcup z\Leftrightarrow a\sqsubseteq(a\sqcup z)\sqcap(b\sqcup z)\Leftrightarrow a\sqsubseteq a\sqcup z\land a\sqsubseteq b\sqcup z\Leftrightarrow a\sqsubseteq b\sqcup z$.
Thus $a\psetminus(a\sqcap b)=\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq(a\sqcap b)\sqcup z}=\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}=a\psetminus b$.
\item [{\ref{minus-meet2}}] ~
\begin{align*}
a\mathop\#(a\sqcap b) & =\\
\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap a\sqcap b=\bot} & =\\
\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land(z\sqcap a)\sqcap a\sqcap b=\bot} & =\\
\bigsqcup\setcond{z\sqcap a}{z\in\mathfrak{A},z\sqcap a\sqcap b=\bot} & =\\
\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a,z\sqcap b=\bot} & =\\
a\mathop\#b.
\end{align*}

\end{widedisorder}
\end{proof}
I will denote by $Da$ the lattice $\setcond{x\in\mathfrak{A}}{x\sqsubseteq a}$.
\begin{thm}
For $a,b\in\mathfrak{A}$ where $\mathfrak{A}$ is a distributive
lattice
\begin{enumerate}
\item \label{pdiff-comp}$a\psetminus b=(a\sqcap b)^{+(Da)}$;
\item \label{sec-pdiff-comp}$a\mathop\#b=(a\sqcap b)^{\ast(Da)}$ if $\mathfrak{A}$
has least element.
\end{enumerate}
\end{thm}
\begin{proof}
~
\begin{disorder}
\item [{\ref{pdiff-comp}}] ~
\begin{align*}
(a\sqcap b)^{+(Da)} & =\\
\bigsqcap\setcond{c\in Da}{c\sqcup(a\sqcap b)=a} & =\\
\bigsqcap\setcond{c\in Da}{c\sqcup(a\sqcap b)\sqsupseteq a} & =\\
\bigsqcap\setcond{c\in Da}{(c\sqcup a)\sqcap(c\sqcup b)\sqsupseteq a} & =\\
\bigsqcap\setcond{c\in\mathfrak{A}}{c\sqsubseteq a\land c\sqcup b\sqsupseteq a} & =\\
a\psetminus b.
\end{align*}

\item [{\ref{sec-pdiff-comp}}] ~
\begin{align*}
(a\sqcap b)^{\ast(Da)} & =\\
\bigsqcup\setcond{c\in Da}{c\sqcap a\sqcap b=\bot} & =\\
\bigsqcup\setcond{c\in\mathfrak{A}}{c\sqsubseteq a\land c\sqcap a\sqcap b=\bot} & =\\
\bigsqcup\setcond{c\in\mathfrak{A}}{c\sqsubseteq a\land c\sqcap b=\bot} & =\\
a\mathop\#b.
\end{align*}

\end{disorder}
\end{proof}
\begin{prop}
$(a\sqcup b)\psetminus b\sqsubseteq a$ for an arbitrary complete
lattice.\end{prop}
\begin{proof}
$(a\sqcup b)\psetminus b=\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqcup b\sqsubseteq b\sqcup z}$.

But $a\sqsubseteq z\Rightarrow a\sqcup b\sqsubseteq b\sqcup z$.
So
$\setcond{z\in\mathfrak{A}}{a\sqcup b\sqsubseteq b\sqcup z}\supseteq\setcond{z\in\mathfrak{A}}{a\sqsubseteq z}$.

Consequently, $(a\sqcup b)\psetminus b\sqsubseteq\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq z}=a$.
\end{proof}

\section{Several equal ways to express pseudodifference}
\begin{thm}
\label{pdiff-eq1}For an atomistic co-brouwerian lattice $\mathfrak{A}$
and $a,b\in\mathfrak{A}$ the following expressions are always equal:
\begin{enumerate}
\item \label{pdiff-pdiff}$a\psetminus b=\bigsqcap\setcond{z\in\mathfrak{A}}{a\sqsubseteq b\sqcup z}$
(quasidifference of $a$ and $b$);
\item \label{pdiff-sec}$a\mathop\#b=\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot}$
(second quasidifference of $a$ and $b$);
\item \label{pdiff-atm}$\bigsqcup(\atoms a\setminus\atoms b)$.
\end{enumerate}
\end{thm}
\begin{proof}
~
\begin{description}
\item [{Proof~of~\ref{pdiff-pdiff}=\ref{pdiff-atm}}] 
\begin{align*}
a\psetminus b & =\\
\left(\bigsqcup\atoms a\right)\psetminus b & =\text{ (theorem \ref{cup-pdiff})}\\
\bigsqcup_{A\in\atoms a}(A\psetminus b) & =\\
\bigsqcup_{A\in\atoms a}\left(\begin{cases}
A & \text{if }A\notin\atoms b\\
\bot & \text{if }A\in\atoms b
\end{cases}\right) & =\\
\bigsqcup\setcond A{A\in\atoms a,A\notin\atoms b} & =\\
\bigsqcup(\atoms a\setminus\atoms b).
\end{align*}

\item [{Proof~of~\ref{pdiff-sec}=\ref{pdiff-atm}}] $a\psetminus b$
is defined because our lattice is co-brouwerian. Taking the above
into account, we have
\begin{align*}
a\psetminus b & =\\
\bigsqcup(\atoms a\setminus\atoms b) & =\\
\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}.
\end{align*}


So $\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}$ is defined.


If $z\sqsubseteq a\land z\sqcap b=\bot$ then $z=\bigsqcup\setcond{x\in\atoms z}{x\sqcap b=\bot}$
because $z=z\psetminus b$ (atomisticity taken into account), and every such
$x$ belongs to $\setcond{x\in\atoms a}{x\sqcap b=\bot}$.


Thus every $z\in\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot}$
satisfies $z\sqsubseteq\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}$,
and so $\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}$ is an upper
bound of $\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot}$.


If $y$ is above every $z\in\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot}$
then $y$ is above every $z\in\atoms a$ such that $z\sqcap b=\bot$
and thus $y$ is above $\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}$.


Thus $\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot}$ is the least upper
bound of
\[
\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot},
\]
that is
\begin{align*}
\bigsqcup\setcond{z\in\mathfrak{A}}{z\sqsubseteq a\land z\sqcap b=\bot} & =\\
\bigsqcup\setcond{z\in\atoms a}{z\sqcap b=\bot} & =\\
\bigsqcup(\atoms a\setminus\atoms b).
\end{align*}

\end{description}
\end{proof}

\section{Partially ordered categories}


\subsection{Definition}
\begin{defn}
\index{category!partially ordered}\index{precategory!partially ordered}I
will call a partially ordered (pre)category a (pre)category together
with a partial order $\sqsubseteq$ on each of its Mor-sets with the
additional requirement that
\[
f_{1}\sqsubseteq f_{2}\land g_{1}\sqsubseteq g_{2}\Rightarrow g_{1}\circ f_{1}\sqsubseteq g_{2}\circ f_{2}
\]
for all
morphisms $f_{1}$, $g_{1}$, $f_{2}$, $g_{2}$ such that\n$\\Src f_{1}=\\Src f_{2}$ and $\\Dst f_{1}=\\Dst f_{2}=\\Src g_{1}=\\Src g_{2}$\nand $\\Dst g_{1}=\\Dst g_{2}$.\n\\end{defn}\nI will denote lattice operations on a $\\Hom$-set $C(A,B)$ of a category\n(or any directed multigraph) like $\\sqcup^{C}$ instead of writing\n$\\sqcup^{C(A,B)}$ explicitly.\n\n\n\\subsection{Dagger categories}\n\\begin{defn}\n\\index{precategory!dagger}I will call a \\emph{dagger precategory}\na precategory together with an involutive contravariant identity-on-objects\nprefunctor $x\\mapsto x^{\\dagger}$.\n\nIn other words, a dagger precategory is a precategory equipped with\na function $x\\mapsto x^{\\dagger}$ on its set of morphisms which reverses\nthe source and the destination and is subject to the following identities\nfor every morphisms $f$ and~$g$:\n\\begin{enumerate}\n\\item $f^{\\dagger\\dagger}=f$;\n\\item $(g\\circ f)^{\\dagger}=f^{\\dagger}\\circ g^{\\dagger}$.\n\\end{enumerate}\n\\end{defn}\n\n\\begin{defn}\n\\index{category!dagger}I will call a \\emph{dagger category} a category\ntogether with an involutive contravariant identity-on-objects functor\n$x\\mapsto x^{\\dagger}$.\n\nIn other words, a dagger category is a category equipped with a function\n$x\\mapsto x^{\\dagger}$ on its set of morphisms which reverses the\nsource and the destination and is subject to the following identities\nfor every morphisms $f$ and $g$ and object~$A$:\n\\begin{enumerate}\n\\item $f^{\\dagger\\dagger}=f$;\n\\item $(g\\circ f)^{\\dagger}=f^{\\dagger}\\circ g^{\\dagger}$;\n\\item $(1_{A})^{\\dagger}=1_{A}$.\n\\end{enumerate}\n\\end{defn}\n\\begin{thm}\nIf a category is a dagger precategory then it is a dagger category.\\end{thm}\n\\begin{proof}\nWe need to prove only that $(1_{A})^{\\dagger}=1_{A}$. 
Really,\n\\[\n(1_{A})^{\\dagger}=(1_{A})^{\\dagger}\\circ1_{A}=(1_{A})^{\\dagger}\\circ(1_{A})^{\\dagger\\dagger}=((1_{A})^{\\dagger}\\circ1_{A})^{\\dagger}=(1_{A})^{\\dagger\\dagger}=1_{A}.\n\\]\n\n\\end{proof}\nFor a partially ordered dagger (pre)category I will additionally require\n(for every morphisms $f$ and $g$ with the same source and destination)\n\\[\nf^{\\dagger}\\sqsubseteq g^{\\dagger}\\Leftrightarrow f\\sqsubseteq g.\n\\]\n\n\nAn example of dagger category is the category $\\mathbf{Rel}$ whose\nobjects are sets and whose morphisms are binary relations between\nthese sets with usual composition of binary relations and with $f^{\\dagger}=f^{-1}$.\n\\begin{defn}\n\\index{morphism!unitary}A morphism $f$ of a dagger category is called\n\\emph{unitary} when it is an isomorphism and $f^{\\dagger}=f^{-1}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!symmetric}\\emph{Symmetric} (endo)morphism of a dagger\nprecategory is such a morphism $f$ that $f=f^{\\dagger}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!transitive}\\emph{Transitive} (endo)morphism of a\nprecategory is such a morphism $f$ that $f=f\\circ f$.\\end{defn}\n\\begin{thm}\n\\label{sym-trans}The following conditions are equivalent for a morphism\n$f$ of a dagger precategory:\n\\begin{enumerate}\n\\item \\label{sym-trans-both}$f$ is symmetric and transitive.\n\\item \\label{f-df-f}$f=f^{\\dagger}\\circ f$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{\\ref{sym-trans-both}$\\Rightarrow$\\ref{f-df-f}}] If $f$ is symmetric\nand transitive then $f^{\\dagger}\\circ f=f\\circ f=f$.\n\\item [{\\ref{f-df-f}$\\Rightarrow$\\ref{sym-trans-both}}] $f^{\\dagger}=(f^{\\dagger}\\circ f)^{\\dagger}=f^{\\dagger}\\circ f^{\\dagger\\dagger}=f^{\\dagger}\\circ f=f$,\nso $f$ is symmetric. 
$f=f^{\\dagger}\\circ f=f\\circ f$, so $f$ is\ntransitive.\n\\end{description}\n\\end{proof}\n\n\\subsubsection{Some special classes of morphisms}\n\\begin{defn}\n\\index{morphism!monovalued}For a partially ordered dagger category\nI will call \\emph{monovalued} morphism such a morphism $f$ that $f\\circ f^{\\dagger}\\sqsubseteq1_{\\Dst f}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!entirely defined}For a partially ordered dagger category\nI will call \\emph{entirely defined} morphism such a morphism $f$\nthat $f^{\\dagger}\\circ f\\sqsupseteq1_{\\Src f}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!injective}For a partially ordered dagger category\nI will call \\emph{injective} morphism such a morphism $f$ that $f^{\\dagger}\\circ f\\sqsubseteq1_{\\Src f}$.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!surjective}For a partially ordered dagger category\nI will call \\emph{surjective} morphism such a morphism f that $f\\circ f^{\\dagger}\\sqsupseteq1_{\\Dst f}$.\\end{defn}\n\\begin{rem}\nIt is easy to show that this is a generalization of monovalued, entirely\ndefined, injective, and surjective functions as morphisms of the category\n$\\mathbf{Rel}$.\\end{rem}\n\\begin{obvious}\n``Injective morphism'' is a dual of ``monovalued morphism'' and\n``surjective morphism'' is a dual of ``entirely defined morphism''.\\end{obvious}\n\\begin{defn}\nFor a given partially ordered dagger category $C$ the \\emph{category\nof monovalued} (\\emph{entirely defined}, \\emph{injective}, \\emph{surjective})\nmorphisms of $C$ is the category with the same set of objects as\nof $C$ and the set of morphisms being the set of monovalued (entirely\ndefined, injective, surjective) morphisms of $C$ with the composition\nof morphisms the same as in $C$.\n\\end{defn}\nWe need to prove that these are really categories, that is that composition\nof monovalued (entirely defined, injective, surjective) morphisms\nis monovalued (entirely defined, injective, surjective) and that identity\nmorphisms are monovalued, entirely defined, injective, and surjective.\n\\begin{proof}\nWe will prove only for monovalued morphisms and entirely defined morphisms,\nas injective and surjective morphisms are their duals.\n\\begin{description}\n\\item [{Monovalued}] Let $f$ and $g$ be monovalued morphisms, $\\Dst f=\\Src g$.\nThen\n\\begin{align*}\n(g\\circ f)\\circ(g\\circ f)^{\\dagger} & =\\\\\ng\\circ f\\circ f^{\\dagger}\\circ g^{\\dagger} & \\sqsubseteq\\\\\ng\\circ1_{\\Src g}\\circ g^{\\dagger} & =\\\\\ng\\circ g^{\\dagger} & \\sqsubseteq\\\\\n1_{\\Dst g}&=1_{\\Dst(g\\circ f)}.\n\\end{align*}\n\n\n\nSo $g\\circ f$ is monovalued.\n\n\nThat identity morphisms are monovalued follows from the following:\n\\[\n1_{A}\\circ(1_{A})^{\\dagger}=1_{A}\\circ1_{A}=1_{A}=1_{\\Dst1_{A}}\\sqsubseteq1_{\\Dst1_{A}}.\n\\]\n\n\n\\item [{Entirely~defined}] Let $f$ and $g$ be entirely defined morphisms,\n$\\Dst f=\\Src g$. 
Then\n\\begin{align*}\n(g\\circ f)^{\\dagger}\\circ(g\\circ f) & =\\\\\nf^{\\dagger}\\circ g^{\\dagger}\\circ g\\circ f & \\sqsupseteq\\\\\nf^{\\dagger}\\circ1_{\\Src g}\\circ f & =\\\\\nf^{\\dagger}\\circ1_{\\Dst f}\\circ f & =\\\\\nf^{\\dagger}\\circ f & \\sqsupseteq\\\\\n1_{\\Src f}&=1_{\\Src(g\\circ f)}.\n\\end{align*}\n\n\n\nSo $g\\circ f$ is entirely defined.\n\n\nThat identity morphisms are entirely defined follows from the following:\n\\[\n(1_{A})^{\\dagger}\\circ1_{A}=1_{A}\\circ1_{A}=1_{A}=1_{\\Src1_{A}}\\sqsupseteq1_{\\Src1_{A}}.\n\\]\n\n\n\\end{description}\n\\end{proof}\n\\begin{defn}\n\\index{morphism!bijective}I will call a \\emph{bijective} morphism\na morphism which is entirely defined, monovalued, injective, and surjective.\\end{defn}\n\\begin{prop}\nIf a morphism is bijective then it is an isomorphism.\\end{prop}\n\\begin{proof}\nLet $f$ be bijective. Then $f\\circ f^{\\dagger}\\sqsubseteq1_{\\Dst f}$,\n$f^{\\dagger}\\circ f\\sqsupseteq1_{\\Src f}$, $f^{\\dagger}\\circ f\\sqsubseteq1_{\\Src f}$,\n$f\\circ f^{\\dagger}\\sqsupseteq1_{\\Dst f}$. Thus $f\\circ f^{\\dagger}=1_{\\Dst f}$\nand $f^{\\dagger}\\circ f=1_{\\Src f}$ that is $f^{\\dagger}$ is an\ninverse of $f$.\n\\end{proof}\nLet $\\Hom$-sets be complete lattices.\n\\begin{defn}\n\\index{morphism!metamonovalued}A morphism $f$ of a partially ordered\ncategory is \\emph{metamonovalued} when $\\left(\\bigsqcap G\\right)\\circ f=\\bigsqcap_{g\\in G}(g\\circ f)$\nwhenever $G$ is a set of morphisms with a suitable source and destination.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!metainjective}A morphism $f$ of a partially ordered\ncategory is \\emph{metainjective} when $f\\circ\\left(\\bigsqcap G\\right)=\\bigsqcap_{g\\in G}(f\\circ g)$\nwhenever $G$ is a set of morphisms with a suitable source and destination.\\end{defn}\n\\begin{obvious}\nMetamonovaluedness and metainjectivity are dual to each other.\\end{obvious}\n\\begin{defn}\n\\index{morphism!metacomplete}A morphism $f$ of a partially ordered\ncategory is \\emph{metacomplete} when $f\\circ\\left(\\bigsqcup G\\right)=\\bigsqcup_{g\\in G}(f\\circ g)$\nwhenever $G$ is a set of morphisms with a suitable source and destination.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!co-metacomplete}A morphism $f$ of a partially ordered\ncategory is \\emph{co-metacomplete} when $\\left(\\bigsqcup G\\right)\\circ f=\\bigsqcup_{g\\in G}(g\\circ f)$\nwhenever $G$ is a set of morphisms with a suitable source and destination.\n\\end{defn}\nLet now $\\Hom$-sets be meet-semilattices.\n\\begin{defn}\n\\index{morphism!weakly metamonovalued}A morphism $f$ of a partially\nordered category is \\emph{weakly metamonovalued} when $(g\\sqcap h)\\circ f=(g\\circ f)\\sqcap(h\\circ f)$\nwhenever $g$ and $h$ are morphisms with a suitable source and destination.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!weakly metainjective}A morphism $f$ of a partially\nordered category is \\emph{weakly metainjective} when $f\\circ(g\\sqcap h)=(f\\circ g)\\sqcap(f\\circ h)$\nwhenever $g$ and $h$ are morphisms with a suitable source and destination.\n\\end{defn}\nLet now $\\Hom$-sets be join-semilattices.\n\\begin{defn}\n\\index{morphism!weakly metacomplete}A morphism $f$ of a partially\nordered category is \\emph{weakly metacomplete} when $f\\circ(g\\sqcup h)=(f\\circ g)\\sqcup(f\\circ h)$\nwhenever $g$ and $h$ are morphisms with a suitable source and destination.\n\\end{defn}\n\n\\begin{defn}\n\\index{morphism!weakly co-metacomplete}A morphism $f$ of a partially\nordered category is \\emph{weakly 
co-metacomplete} when $(g\\sqcup h)\\circ f=(g\\circ f)\\sqcup(h\\circ f)$\nwhenever $g$ and $h$ are morphisms with a suitable source and destination.\\end{defn}\n\\begin{obvious}\n~\n\\begin{enumerate}\n\\item Metamonovalued morphisms are weakly metamonovalued.\n\\item Metainjective morphisms are weakly metainjective.\n\\item Metacomplete morphisms are weakly metacomplete.\n\\item Co-metacomplete morphisms are weakly co-metacomplete.\n\\end{enumerate}\n\\end{obvious}\n\n\\section{Partitioning}\n\\begin{defn}\n\\index{torning}Let $\\mathfrak{A}$ be a complete lattice. \\emph{Torning}\nof an element $a\\in\\mathfrak{A}$ is a set $S\\in\\subsets\\mathfrak{A}\\setminus\\{\\bot\\}$\nsuch that\n\\[\n\\bigsqcup S=a\\quad\\text{and}\\quad\\forall x,y\\in S:(x\\neq y\\Rightarrow x\\asymp y).\n\\]\n\n\\end{defn}\n\n\\begin{defn}\n\\index{partition!weak}Let $\\mathfrak{A}$ be a complete lattice.\n\\emph{Weak partition} of an element $a\\in\\mathfrak{A}$ is a set $S\\in\\subsets\\mathfrak{A}\\setminus\\{\\bot\\}$\nsuch that\n\\[\n\\bigsqcup S=a\\quad\\text{and}\\quad\\forall x\\in S:x\\asymp\\bigsqcup(S\\setminus\\{x\\}).\n\\]\n\n\\end{defn}\n\n\\begin{defn}\n\\index{partition!strong}Let $\\mathfrak{A}$ be a complete lattice.\n\\emph{Strong partition} of an element $a\\in\\mathfrak{A}$ is a set\n$S\\in\\subsets\\mathfrak{A}\\setminus\\{\\bot\\}$ such that\n\\[\n\\bigsqcup S=a\\quad\\text{and}\\quad\\forall A,B\\in\\subsets S:(A\\asymp B\\Rightarrow\\bigsqcup A\\asymp\\bigsqcup B).\n\\]\n\\end{defn}\n\\begin{obvious}\n~\n\\begin{enumerate}\n\\item Every strong partition is a weak partition.\n\\item Every weak partition is a torning.\n\\end{enumerate}\n\\end{obvious}\n\n\\begin{defn}\n\\emph{Complete lattice generated by} a set~$P$ (on a complete lattice)\nis the set (obviously having the structure of complete lattice)\n$P_0\\cup P_1\\cup\\dots$\nwhere $P_0=P$ and\n$P_{i+1}=\\setcond{\\bigsqcup K, \\bigsqcap K}{K\\in\\subsets P_i}$.\n\\end{defn}\n\n\\begin{obvious}\nComplete lattice generated by a set is indeed a complete lattice.\n\\end{obvious}\n\n\\begin{example}\n$[S] \\ne \\setcond{\\bigsqcup^{\\mathfrak{A}}X}{X\\in\\subsets S}$, where~$[S]$ is the complete lattice generated by\na strong partition~$S$ of a filter on a set.\n\\end{example}\n\n\\begin{proof}\nConsider any infinite set~$U$ and its strong partition~$S=\\setcond{\\uparrow^U\\{x\\}}{x\\in U}$.\nThe set~$S$ consists only of principal filters. But~$[S]$ contains (exercise!) some\nnonprincipal filters.\n\\end{proof}\n\nBy the way:\n\n\\begin{prop}\n$\\setcond{\\bigsqcup^{\\mathfrak{A}}X}{X\\in\\subsets S}$ is closed under\nbinary meets, if~$S$ is a strong partition of an element of a complete lattice.\\end{prop}\n\\begin{proof}\nLet $R=\\setcond{\\bigsqcup^{\\mathfrak{A}}X}{X\\in\\subsets S}$. 
Then
for every $X,Y\in\subsets S$
\begin{align*}
\bigsqcup^{\mathfrak{A}}X\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y & =\\
\bigsqcup^{\mathfrak{A}}((X\cap Y)\cup(X\setminus Y))\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y & =\\
\left(\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcup^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}(X\setminus Y)\right)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y & =\\
\left(\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y\right)\sqcup^{\mathfrak{A}}\left(\bigsqcup^{\mathfrak{A}}(X\setminus Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y\right) & =\\
\left(\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y\right)\sqcup^{\mathfrak{A}}\bot^{\mathfrak{A}} & =\\
\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y.
\end{align*}


Applying the formula $\bigsqcup^{\mathfrak{A}}X\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y=\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y$
twice we get
\begin{align*}
\bigsqcup^{\mathfrak{A}}X\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}Y & =\\
\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}(Y\cap(X\cap Y)) & =\\
\bigsqcup^{\mathfrak{A}}(X\cap Y)\sqcap^{\mathfrak{A}}\bigsqcup^{\mathfrak{A}}(X\cap Y) & =\\
\bigsqcup^{\mathfrak{A}}(X\cap Y).
\end{align*}


But for any $A,B\in R$ there exist $X,Y\in\subsets S$ such that
$A=\bigsqcup^{\mathfrak{A}}X$, $B=\bigsqcup^{\mathfrak{A}}Y$. So
$A\sqcap^{\mathfrak{A}}B=\bigsqcup^{\mathfrak{A}}X\sqcap\bigsqcup^{\mathfrak{A}}Y=\bigsqcup^{\mathfrak{A}}(X\cap Y)\in R$.
\end{proof}

\section{A proposition about binary relations}
\begin{prop}
\label{rel-cross}Let $f$, $g$, $h$ be binary relations. Then $g\circ f\nasymp h\Leftrightarrow g\nasymp h\circ f^{-1}$.\end{prop}
\begin{proof}
~
\begin{align*}
g\circ f\nasymp h & \Leftrightarrow\\
\exists a,c:a\mathrel{((g\circ f)\cap h)}c & \Leftrightarrow\\
\exists a,c:(a\mathrel{(g\circ f)}c\land a\mathrel{h}c) & \Leftrightarrow\\
\exists a,b,c:(a\mathrel{f}b\land b\mathrel{g}c\land a\mathrel{h}c) & \Leftrightarrow\\
\exists b,c:(b\mathrel{g}c\land b\mathrel{(h\circ f^{-1})}c) & \Leftrightarrow\\
\exists b,c:b\mathrel{(g\cap(h\circ f^{-1}))}c & \Leftrightarrow\\
g\nasymp h\circ f^{-1}.
\end{align*}

\end{proof}
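\begin{example}
For instance, let $f=\{(1,2)\}$, $g=\{(2,3)\}$, $h=\{(1,3)\}$. Then
$g\circ f=\{(1,3)\}$ intersects $h$, and accordingly $h\circ f^{-1}=\{(2,3)\}$
intersects $g$, in agreement with the proposition.
\end{example}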
\section{Infinite associativity and ordinated product}


\subsection{Introduction}

We will consider a function $f$ which takes an arbitrary ordinal
number of arguments. That is, $f$ can be applied to an arbitrary (small,
to be precise) ordinal number of arguments. More formally: let
$x=x_{i\in n}$ be a family indexed by an ordinal $n$; then $f(x)$
can be taken. The same function $f$ can take different numbers of
arguments. (See below for the exact definition.)

Some such functions $f$ are associative in the sense defined below.
If a function is associative in this sense, then the
binary operation induced by this function is associative in the usual
meaning of the word ``associativity'' as defined in basic algebra.

I also introduce and research an important example of an infinitely associative
function, which I call \emph{ordinated product}.

Note that my search on the Internet for infinite associativity and
ordinals has provided no useful results. As such there is a reason
to assume that my research of generalized associativity in terms of
ordinals is novel.


\subsection{Used notation}

\index{ordinal}We identify natural numbers with finite Von Neumann
ordinals (further just \emph{ordinals} or \emph{ordinal numbers}).

For simplicity we will deal with small sets (members of a Grothendieck
universe). We will denote the Grothendieck universe (aka \emph{universal
set}) as $\mho$.

I will denote a tuple of $n$ elements as $\left\llbracket a_{0},\ldots,a_{n-1}\right\rrbracket $.
By definition
\[
\left\llbracket a_{0},\ldots,a_{n-1}\right\rrbracket =\{(0,a_{0}),\ldots,(n-1,a_{n-1})\}.
\]


Note that an ordered pair $(a,b)$ is not the same as the tuple $\left\llbracket a,b\right\rrbracket $
of two elements. (However, we will use them interchangeably.)
\begin{defn}
\index{relation!anchored}An \emph{anchored relation} is a tuple $\left\llbracket n,r\right\rrbracket $
where $n$ is an index set and $r$ is an $n$-ary relation.
\end{defn}
\index{graph!of anchored relation}For an anchored relation $\arity\left\llbracket n,r\right\rrbracket =n$.
The graph\footnote{It is unrelated to graph theory.} of $\left\llbracket n,r\right\rrbracket $
is defined as follows: $\GR\left\llbracket n,r\right\rrbracket =r$.
\begin{defn}
$\Pr_{i}f$ is a function defined by the formula
\[
\Pr_{i}f=\setcond{x_{i}}{x\in f}
\]
for every small $n$-ary relation $f$ where $n$ is an ordinal number
and $i\in n$. Particularly for every $n$-ary relation $f$ and $i\in n$
where $n\in\mathbb{N}$
\[
\Pr_{i}f=\setcond{x_{i}}{\left\llbracket x_{0},\ldots,x_{n-1}\right\rrbracket \in f}.
\]

\end{defn}
\index{product!cartesian}Recall that the Cartesian product is defined
as follows:
\[
\prod a=\setcond{z\in\left(\bigcup\im a\right)^{\dom a}}{\forall i\in\dom a:z(i)\in a_{i}}.
\]

\begin{obvious}
If $a$ is a small function, then $\prod a=\setcond{z\in\mho^{\dom a}}{\forall i\in\dom a:z(i)\in a_{i}}.$
\end{obvious}
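\begin{example}
Let $a=\llbracket\{0,1\},\{5\}\rrbracket$. Then
$\prod a=\{\llbracket 0,5\rrbracket,\llbracket 1,5\rrbracket\}$,
a $2$-ary relation, and $\Pr_{0}\prod a=\{0,1\}$, $\Pr_{1}\prod a=\{5\}$.
\end{example}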
Then $\\uncurry(f)\\in Z^{X\\times Y}$\nis the function defined by the formula $\\uncurry(f)(x,y)=(fx)y$.\n\\begin{obvious}\n~\n\\begin{enumerate}\n\\item $\\uncurry(\\curry(f))=f$ for every $f\\in Z^{X\\times Y}$.\n\\item $\\curry(\\uncurry(f))=f$ for every $f\\in(Z^{Y})^{X}$.\n\\end{enumerate}\n\\end{obvious}\n\n\\paragraph{Currying and uncurrying with a dependent variable}\n\nLet $X$, $Z$ be sets and $Y$ be a function with the domain $X$.\n(Vaguely saying, $Y$ is a variable dependent on $X$.)\n\nThe disjoint union $\\coprod Y=\\bigcup_{i\\in\\dom Y}(\\{i\\}\\times Y_{i})=\\setcond{(i,x)}{i\\in\\dom Y,x\\in Y_{i}}$.\n\nWe will consider variables $x\\in X$ and $y\\in Y_{x}$.\n\n\\index{currying}Let a function $f\\in Z^{\\coprod_{i\\in X}Y_{i}}$\n(or equivalently $f\\in Z^{\\coprod Y}$). Then $\\curry(f)\\in\\prod_{i\\in X}Z^{Y_{i}}$\nis the function defined by the formula $(\\curry(f)x)y=f(x,y)$.\n\n\\index{uncurrying}Let now $f\\in\\prod_{i\\in X}Z^{Y_{i}}$. Then $\\uncurry(f)\\in Z^{\\coprod_{i\\in X}Y_{i}}$\nis the function defined by the formula $\\uncurry(f)(x,y)=(fx)y$.\n\\begin{obvious}\n~\n\\begin{enumerate}\n\\item $\\uncurry(\\curry(f))=f$ for every $f\\in Z^{\\coprod_{i\\in X}Y_{i}}$.\n\\item $\\curry(\\uncurry(f))=f$ for every $f\\in\\prod_{i\\in X}Z^{Y_{i}}$.\n\\end{enumerate}\n\\end{obvious}\n\n\\subsubsection{Functions with ordinal numbers of arguments}\n\nLet $\\mathrm{Ord}$ be the set of small ordinal numbers.\n\nIf $X$ and $Y$ are sets and $n$ is an ordinal number, the set of\nfunctions taking $n$ arguments on the set $X$ and returning a value\nin $Y$ is $Y^{X^{n}}$.\n\nThe set of all small functions taking ordinal numbers of arguments\nis $Y^{\\bigcup_{n\\in\\mathrm{Ord}}X^{n}}$.\n\n\\index{ordinal variadic}I will denote $\\mathrm{OrdVar}(X)=\\mho^{\\bigcup_{n\\in\\mathrm{Ord}}X^{n}}$\nand call it \\emph{ordinal variadic}. (``Var'' in this notation is\ntaken from the word \\emph{variadic} in the collocation \\emph{variadic\nfunction} used in computer science.)\n\n\n\\subsection{On sums of ordinals}\n\nLet $a$ be an ordinal-indexed family of ordinals.\n\\begin{prop}\n$\\coprod a$ with lexicographic order is a well-ordered set.\\end{prop}\n\\begin{proof}\nLet $S$ be non-empty subset of $\\coprod a$.\n\nTake $i_{0}=\\min\\Pr_{0}S$ and $x_{0}=\\min\\setcond{\\Pr_{1}y}{y\\in S,y(0)=i_{0}}$\n(these exist by properties of ordinals). Then $(i_{0},x_{0})$ is\nthe least element of $S$.\\end{proof}\n\\begin{defn}\n$\\sum a$ is the unique ordinal order-isomorphic to $\\coprod a$.\\end{defn}\n\\begin{xca}\nProve that for finite ordinals it is just a sum of natural numbers.\n\\end{xca}\nThis ordinal exists and is unique because our set is well-ordered.\n\\begin{rem}\nAn infinite sum of ordinals is not customary defined.\n\\end{rem}\n\\index{sum!structured}The \\emph{structured sum} $\\bigoplus a$ of\n$a$ is an order isomorphism from lexicographically ordered set $\\coprod a$\ninto $\\sum a$.\n\nThere exists (for a given $a$) exactly one structured sum, by properties\nof well-ordered sets.\n\\begin{obvious}\n$\\sum a=\\im\\bigoplus a$.\\end{obvious}\n\\begin{thm}\n$\\left(\\bigoplus a\\right)(n,x)=\\sum_{i\\in n}a_{i}+x$.\\end{thm}\n\\begin{proof}\nWe need to prove that it is an order isomorphism. 
\begin{thm}
$\left(\bigoplus a\right)(n,x)=\sum_{i\in n}a_{i}+x$.\end{thm}
\begin{proof}
We need to prove that it is an order isomorphism. Let's prove it is
an injection, that is $m>n\Rightarrow\sum_{i\in m}a_{i}+x>\sum_{i\in n}a_{i}+x$
and $y>x\Rightarrow\sum_{i\in n}a_{i}+y>\sum_{i\in n}a_{i}+x$.

Really, if $m>n$ then $\sum_{i\in m}a_{i}+x\geq\sum_{i\in n+1}a_{i}+x>\sum_{i\in n}a_{i}+x$.
The second formula is true by properties of ordinals.

Let's prove that it is a surjection. Let $r\in\sum a$. There exist
$n\in\dom a$ and $x\in a_{n}$ such that $r=\left(\bigoplus a\right)(n,x)$.
Thus $r=\left(\bigoplus a\right)(n,0)+x=\sum_{i\in n}a_{i}+x$ because
$\left(\bigoplus a\right)(n,0)=\sum_{i\in n}a_{i}$ since $(n,0)$
has $\sum_{i\in n}a_{i}$ predecessors.
\end{proof}

\subsection{\label{ordinated-prod}Ordinated product}


\subsubsection{Introduction}

The \emph{ordinated product} defined below is a variation of the Cartesian
product, but unlike the Cartesian product it is associative. However, the
ordinated product, unlike the Cartesian product, is defined not for arbitrary
sets, but only for relations having ordinal numbers of arguments.

Let $F$ be a small family, indexed by an ordinal number, of anchored
relations.


\subsubsection{Concatenation}
\begin{defn}
\index{concatenation}Let $z$ be a family, indexed by an ordinal number,
of functions each taking an ordinal number of arguments. The
\emph{concatenation} of $z$ is
\[
\concat z=\uncurry(z)\circ\left(\bigoplus(\dom\circ z)\right)^{-1}.
\]
\end{defn}
\begin{xca}
Prove that if $z$ is a finite family of finitary tuples, it is the concatenation
of $\dom z$ tuples in the usual sense (as it is commonly used in
computer science).\end{xca}
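\begin{example}
$\concat\llbracket\llbracket a,b\rrbracket,\llbracket c\rrbracket\rrbracket=\llbracket a,b,c\rrbracket$:
here $\dom\circ z=\llbracket 2,1\rrbracket$, the structured sum
$\bigoplus\llbracket 2,1\rrbracket$ maps $(0,0)\mapsto 0$, $(0,1)\mapsto 1$,
$(1,0)\mapsto 2$, and so $\uncurry(z)\circ\left(\bigoplus(\dom\circ z)\right)^{-1}$
sends $0\mapsto a$, $1\mapsto b$, $2\mapsto c$.
\end{example}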
Thus $\\dom\\circ z=\\arity\\circ F$.\\end{proof}\n\\begin{prop}\n$\\dom\\concat z=\\sum_{i\\in\\dom z}\\dom z_{i}$.\\end{prop}\n\\begin{proof}\nBecause $\\dom\\left(\\bigoplus(\\dom\\circ z)\\right)^{-1}=\\sum_{i\\in\\dom f}(\\dom\\circ z)$,\nit is enough to prove that\n\\[\n\\dom\\uncurry(z)=\\dom\\bigoplus(\\dom\\circ z).\n\\]\n\n\nReally,\n\\begin{align*}\n\\sum_{i\\in\\dom f}(\\dom\\circ z) & =\\\\\n\\setcond{(i,x)}{i\\in\\dom(\\dom\\circ z),x\\in\\dom z_{i}} & =\\\\\n\\setcond{(i,x)}{i\\in\\dom z,x\\in\\dom z_{i}} & =\\\\\n\\coprod z\n\\end{align*}\nand $\\dom\\uncurry(z)=\\coprod_{i\\in X}z_{i}=\\coprod z$.\n\\end{proof}\n\n\\subsubsection{Finite example}\n\nIf $F$ is a finite family (indexed by a natural number $\\dom F$)\nof anchored finitary relations, then by definition \\[\n\\GR\\prod^{\\mathrm{(ord)}} =\n\\setcond{\n\\llbracket a_{0, 0} , \\ldots , a_{0, \\arity F_0 - 1} , \\ldots , a_{\\dom F - 1, 0} , \\ldots , a_{\\dom F - 1, \\arity F_{\\dom F - 1} - 1} \\rrbracket\n}{\n\\begin{array}{l}\n\\llbracket a_{0, 0} , \\ldots , a_{0, \\arity F_0 - 1} \\rrbracket \\in \\GR F_0 \\land \\ldots \\land \\\\ \\llbracket a_{\\dom F - 1, \\arity F_{\\dom F - 1} - 1} \\rrbracket \\in \\GR F_{\\dom F - 1}\n\\end{array}\n}\n\\]and \n\\[\n\\arity\\prod^{\\mathrm{(ord)}}F=\\arity F_{0}+\\ldots+\\arity F_{\\dom F-1}.\n\\]\n\n\nThe above formula can be shortened to\n\\[\n\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{\\concat z}{z\\in\\prod(\\GR\\circ F)}.\n\\]\n\n\n\n\\subsubsection{The definition}\n\\begin{defn}\n\\index{product!ordinated}The anchored relation (which I call \\emph{ordinated\nproduct}) $\\prod^{\\mathrm{(ord)}}F$ is defined by the formulas:\n\\begin{gather*}\n\\arity\\prod^{\\mathrm{(ord)}}F=\\sum(\\arity\\circ f);\\\\\n\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{\\concat z}{z\\in\\prod(\\GR\\circ F)}.\n\\end{gather*}\n\\end{defn}\n\\begin{prop}\n$\\prod^{\\mathrm{(ord)}}F$ is a properly defined anchored relation.\\end{prop}\n\\begin{proof}\n$\\dom\\concat z=\\sum_{i\\in\\dom F}\\dom z_{i}=\\sum_{i\\in\\dom F}\\arity f_{i}=\\sum(\\arity\\circ F)$.\n\\end{proof}\n\n\\subsubsection{Definition with composition for every multiplier}\n\n\\[ q(F)_{i}\\eqdef\\left(\\curry\\left(\\bigoplus(\\arity\\circ F)\\right)\\right)i. \\]\n\\begin{prop}\n$\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\in\\mho^{\\sum(\\arity\\circ F)}}{\\forall i\\in\\dom F:L\\circ q(F)_{i}\\in\\GR F_{i}}$.\\end{prop}\n\\begin{proof}\n$\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{\\concat z}{z\\in\\prod(\\GR\\circ F)}$;\n\n$\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{\\uncurry(z)\\circ\\left(\\bigoplus(\\arity\\circ f)\\right)^{-1}}{z\\in\\prod_{i\\in\\dom F}\\mho^{\\arity F_{i}},\\forall i\\in\\dom F:z(i)\\in\\GR F_{i}}$.\n\nLet $L=\\uncurry(z)$. 
Then $z=\\curry(L)$.\n\n$\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\circ\\left(\\bigoplus(\\arity\\circ f)\\right)^{-1}}{\\curry(L)\\in\\prod_{i\\in\\dom F}\\mho^{\\arity F_{i}},\\forall i\\in\\dom F:\\curry(L)i\\in\\GR F_{i}}$;\n\n$\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\circ\\left(\\bigoplus(\\arity\\circ f)\\right)^{-1}}{L\\in\\mho^{\\coprod_{i\\in\\dom F}\\arity F_{i}},\\forall i\\in\\dom F:\\curry(L)i\\in\\GR F_{i}}$;\n\n$\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\in\\mho^{\\sum(\\arity\\circ f)}}{\\forall i\\in\\dom F:\\curry\\left(L\\circ\\bigoplus(\\arity\\circ F)\\right)i\\in\\GR F_{i}}$;\n\n$\\left(\\curry\\left(L\\circ\\bigoplus(\\arity\\circ F)\\right)i\\right)x=L\\left(\\left(\\curry\\left(\\bigoplus(\\arity\\circ F)\\right)i\\right)x\\right)=L(q(F)_{i}x)=(L\\circ q(F)_{i})x$;\n\n$\\curry\\left(L\\circ\\bigoplus(\\arity\\circ F)\\right)i=L\\circ q(F)_{i}$;\n\n$\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\in\\mho^{\\sum(\\arity\\circ F)}}{\\forall i\\in\\dom F:L\\circ q(F)_{i}\\in\\GR F_{i}}$.\\end{proof}\n\\begin{cor}\n$\\prod^{\\mathrm{(ord)}}F=\\setcond{L\\in\\left(\\bigcup\\im(\\GR\\circ F)\\right)^{\\sum(\\arity\\circ F)}}{\\forall i\\in\\dom F:L\\circ q(F)_{i}\\in\\GR F_{i}}$.\n\\end{cor}\n\n\\begin{cor}\n$\\prod^{\\mathrm{(ord)}}F$ is small if $F$ is small.\n\\end{cor}\n\n\\subsubsection{Definition with shifting arguments}\n\nLet $F'_{i}=\\setcond{L\\circ\\Pr_{1}|_{\\{i\\}\\times\\arity F_{i}}}{L\\in\\GR F_{i}}$.\n\\begin{prop}\n$F'_{i}=\\setcond{L\\circ\\Pr_{1}|_{\\{i\\}\\times\\mho}}{L\\in\\GR F_{i}}$.\\end{prop}\n\\begin{proof}\nIf $L\\in\\GR F_{i}$ then $\\dom L=\\arity F_{i}$. Thus\n\\[\nL\\circ\\Pr_{1}|_{\\{i\\}\\times\\arity F_{i}}=L\\circ\\Pr_{1}|_{\\{i\\}\\times\\dom L}=L\\circ\\Pr_{1}|_{\\{i\\}\\times\\mho}.\n\\]\n\\end{proof}\n\\begin{prop}\n$F'_{i}$ is an $(\\{i\\}\\times\\arity F_{i})$-ary relation.\\end{prop}\n\\begin{proof}\nWe need to prove that $\\dom\\left(L\\circ\\Pr_{1}|_{\\{i\\}\\times\\arity F_{i}}\\right)=\\{i\\}\\times\\arity F_{i}$\nfor $L\\in\\GR F_{i}$, but that's obvious.\\end{proof}\n\\begin{obvious}\n$\\coprod(\\arity\\circ F)=\\bigcup_{i\\in\\dom F}(\\{i\\}\\times\\arity F_{i})=\\bigcup_{i\\in\\dom F}\\dom F'_{i}$.\\end{obvious}\n\\begin{lem}\n$P\\in\\prod_{i\\in\\dom F}F'_{i}\\Leftrightarrow\\curry\\left(\\bigcup\\im P\\right)\\in\\prod(\\GR\\circ F)$\nfor a ($\\dom F$)-indexed family $P$ where $P_{i}\\in\\mho^{\\{i\\}\\times\\arity F_{i}}$\nfor every $i\\in\\dom F$, that is for $P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}$.\\end{lem}\n\\begin{proof}\nFor every $P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}$\nwe have:\n\\[\n\\resizebox{\\hsize}{!}{\n$\\begin{aligned}\nP\\in\\prod_{i\\in\\dom F}F'_{i} & \\Leftrightarrow\\\\\nP\\in\\setcond{z\\in\\mho^{\\dom F}}{\\forall i\\in\\dom F:z(i)\\in F'_{i}} & \\Leftrightarrow\\\\\nP\\in\\mho^{\\dom F}\\land\\forall i\\in\\dom F:P(i)\\in F'_{i} & \\Leftrightarrow\\\\\nP\\in\\mho^{\\dom F}\\land\\forall i\\in\\dom F\\exists L\\in\\GR F_{i}:P_{i}=L\\circ(\\Pr_{1}|_{\\{i\\}\\times\\mho}) & \\Leftrightarrow\\\\\nP\\in\\mho^{\\dom F}\\land\\forall i\\in\\dom F\\exists L\\in\\GR F_{i}:(P_{i}\\in\\mho^{\\{i\\}\\times\\arity F_{i}}\\land\\forall x\\in\\arity F_{i}:P_{i}(i,x)=Lx) & \\Leftrightarrow\\\\\nP\\in\\mho^{\\dom F}\\land\\forall i\\in\\dom F\\exists L\\in\\GR F_{i}:(P_{i}\\in\\mho^{\\{i\\}\\times\\arity F_{i}}\\land\\curry(P_{i})i=L) & \\Leftrightarrow\\\\\nP\\in\\mho^{\\dom F}\\land\\forall i\\in\\dom F:(P_{i}\\in\\mho^{\\{i\\}\\times\\arity 
F_{i}}\\land\\curry(P_{i})i\\in\\GR F_{i}) & \\Leftrightarrow\\\\\n\\forall i\\in\\dom F\\exists Q_{i}\\in(\\mho^{\\arity F_{i}})^{\\{i\\}}:(P_{i}=\\uncurry(Q_{i})\\land(Q_{i})i\\in\\mho^{\\arity F_{i}}\\land Q_{i}i\\in\\GR F_{i}) & \\Leftrightarrow\\\\\n\\forall i\\in\\dom F\\exists Q_{i}\\in(\\mho^{\\arity F_{i}})^{\\{i\\}}:\\left(P_{i}=\\uncurry(Q_{i})\\land\\left(\\bigcup_{i\\in\\dom F}Q_{i}\\right)i\\in\\GR F_{i}\\right) & \\Leftrightarrow\\\\\n\\forall i\\in\\dom F\\exists Q_{i}\\in(\\mho^{\\arity F_{i}})^{\\{i\\}}:\\left(P_{i}=\\uncurry(Q_{i})\\land\\bigcup_{i\\in\\dom F}Q_{i}\\in\\prod(\\GR\\circ F)\\right) & \\Leftrightarrow\\\\\n\\forall i\\in\\dom F:\\bigcup_{i\\in\\dom F}\\curry(P_{i})\\in\\prod(\\GR\\circ F) & \\Leftrightarrow\\\\\n\\curry\\left(\\bigcup_{i\\in\\dom F}P_{i}\\right)\\in\\prod(\\GR\\circ F) & \\Leftrightarrow\\\\\n\\curry\\left(\\bigcup\\im P\\right)\\in\\prod(\\GR\\circ F).\n\\end{aligned}$\n}\n\\]\n\\end{proof}\n\\begin{lem}\n$\\setcond{\\curry(f)\\circ\\bigoplus(\\arity\\circ F)}{f\\in\\GR\\prod^{\\mathrm{(ord)}}F}=\\prod(\\GR\\circ F)$.\\end{lem}\n\\begin{proof}\nFirst $\\GR\\prod^{\\mathrm{(ord)}}F=\\setcond{\\uncurry(z)\\circ\\left(\\bigoplus(\\dom\\circ z)\\right)^{-1}}{z\\in\\prod(\\GR\\circ F)}$,\nthat is\n\n$\\setcond f{f\\in\\GR\\prod^{\\mathrm{(ord)}}F}=\\setcond{\\uncurry(z)\\circ\\left(\\bigoplus(\\arity\\circ F)\\right)^{-1}}{z\\in\\prod(\\GR\\circ F)}$.\n\nSince $\\bigoplus(\\arity\\circ F)$ is a bijection, we have\n\n$\\setcond{f\\circ\\bigoplus(\\arity\\circ F)}{f\\in\\GR\\prod^{\\mathrm{(ord)}}F}=\\setcond{\\uncurry(z)}{z\\in\\prod(\\GR\\circ F)}$\nwhat is equivalent to\n\n$\\setcond{\\curry(f)\\circ\\bigoplus(\\arity\\circ F)}{f\\in\\GR\\prod^{\\mathrm{(ord)}}F}=\\setcond z{z\\in\\prod(\\GR\\circ F)}$\nthat is $\\setcond{\\curry(f)\\circ\\bigoplus(\\arity\\circ F)}{f\\in\\GR\\prod^{\\mathrm{(ord)}}F}=\\prod(\\GR\\circ F)$.\\end{proof}\n\\begin{lem}\n$\\setcond{\\bigcup\\im P}{P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}\\land\\curry(\\bigcup\\im P)\\in\\prod(\\GR\\circ F)}=\\setcond{L\\in\\mho^{\\coprod_{i\\in\\dom F}\\arity F_{i}}}{\\curry(L)\\in\\prod(\\GR\\circ F)}$.\\end{lem}\n\\begin{proof}\nLet $L'\\in\\setcond{L\\in\\mho^{\\coprod_{i\\in\\dom F}\\arity F_{i}}}{\\curry(L)\\in\\prod(\\GR\\circ F)}$.\nThen $L'\\in\\mho^{\\coprod_{i\\in\\dom F}\\arity F_{i}}$ and $\\curry(L')\\in\\prod(\\GR\\circ F)$.\n\nLet $P=\\mylambda i{\\dom F}{L'|_{\\{i\\}\\times\\arity F_{i}}}$. Then\n$P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}$ and $\\bigcup\\im P=L'$.\nSo $L'\\in\\setcond{\\bigcup\\im P}{P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}\\land\\curry(\\bigcup\\im P)\\in\\prod(\\GR\\circ F)}$.\n\nLet now $L'\\in\\setcond{\\bigcup\\im P}{P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}\\land\\curry(\\bigcup\\im P)\\in\\prod(\\GR\\circ F)}$.\nThen there exists $P\\in\\coprod_{i\\in\\dom F}\\mho^{\\{i\\}\\times\\arity F_{i}}$\nsuch that $L'=\\bigcup\\im P$ and $\\curry(L')\\in\\prod(\\GR\\circ F)$.\nEvidently $L'\\in\\mho^{\\coprod_{i\\in\\dom F}\\arity F_{i}}$. 
\begin{lem}
$\setcond{\curry(f)\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}=\prod(\GR\circ F)$.\end{lem}
\begin{proof}
First $\GR\prod^{\mathrm{(ord)}}F=\setcond{\uncurry(z)\circ\left(\bigoplus(\dom\circ z)\right)^{-1}}{z\in\prod(\GR\circ F)}$,
that is

$\setcond f{f\in\GR\prod^{\mathrm{(ord)}}F}=\setcond{\uncurry(z)\circ\left(\bigoplus(\arity\circ F)\right)^{-1}}{z\in\prod(\GR\circ F)}$.

Since $\bigoplus(\arity\circ F)$ is a bijection, we have

$\setcond{f\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}=\setcond{\uncurry(z)}{z\in\prod(\GR\circ F)}$,
which is equivalent to

$\setcond{\curry(f)\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}=\setcond z{z\in\prod(\GR\circ F)}$,
that is $\setcond{\curry(f)\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}=\prod(\GR\circ F)$.\end{proof}
\begin{lem}
$\setcond{\bigcup\im P}{P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}\land\curry(\bigcup\im P)\in\prod(\GR\circ F)}=\setcond{L\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}}{\curry(L)\in\prod(\GR\circ F)}$.\end{lem}
\begin{proof}
Let $L'\in\setcond{L\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}}{\curry(L)\in\prod(\GR\circ F)}$.
Then $L'\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}$ and $\curry(L')\in\prod(\GR\circ F)$.

Let $P=\mylambda i{\dom F}{L'|_{\{i\}\times\arity F_{i}}}$. Then
$P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}$ and $\bigcup\im P=L'$.
So $L'\in\setcond{\bigcup\im P}{P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}\land\curry(\bigcup\im P)\in\prod(\GR\circ F)}$.

Let now $L'\in\setcond{\bigcup\im P}{P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}\land\curry(\bigcup\im P)\in\prod(\GR\circ F)}$.
Then there exists $P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}$
such that $L'=\bigcup\im P$ and $\curry(L')\in\prod(\GR\circ F)$.
Evidently $L'\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}$. So
$L'\in\setcond{L\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}}{\curry(L)\in\prod(\GR\circ F)}$.
\end{proof}
\begin{lem}
$\setcond{f\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}=\setcond{\bigcup\im P}{P\in\prod_{i\in\dom F}F'_{i}}$.\end{lem}
\begin{proof}
~
\begin{align*}
L\in\setcond{\bigcup\im P}{P\in\prod_{i\in\dom F}F'_{i}} & \Leftrightarrow\\
L\in\setcond{\bigcup\im P}{P\in\coprod_{i\in\dom F}\mho^{\{i\}\times\arity F_{i}}\land\curry\left(\bigcup\im P\right)\in\prod(\GR\circ F)} & \Leftrightarrow\\
L\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}\land\curry(L)\in\prod(\GR\circ F) & \Leftrightarrow\\
L\in\mho^{\coprod_{i\in\dom F}\arity F_{i}}\land\curry(L)\in\setcond{\curry(f)\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F} & \Leftrightarrow\\
\text{(because \ensuremath{\bigoplus(\arity\circ F)} is a bijection)}\\
\curry(L)\circ\left(\bigoplus(\arity\circ F)\right)^{-1}\in\setcond{\curry(f)}{f\in\GR\prod^{\mathrm{(ord)}}F} & \Leftrightarrow\\
L\circ\left(\bigoplus(\arity\circ F)\right)^{-1}\in\setcond f{f\in\GR\prod^{\mathrm{(ord)}}F} & \Leftrightarrow\\
\text{(because \ensuremath{\bigoplus(\arity\circ F)} is a bijection)}\\
L\in\setcond{f\circ\bigoplus(\arity\circ F)}{f\in\GR\prod^{\mathrm{(ord)}}F}.
\end{align*}
\end{proof}
\begin{thm}
$\GR\prod^{\mathrm{(ord)}}F=\setcond{\left(\bigcup\im P\right)\circ\left(\bigoplus(\arity\circ F)\right)^{-1}}{P\in\prod_{i\in\dom F}F'_{i}}$.\end{thm}
\begin{proof}
From the lemma, because $\bigoplus(\arity\circ F)$ is a bijection.\end{proof}
\begin{thm}
$\GR\prod^{\mathrm{(ord)}}F=\setcond{\bigcup_{i\in\dom F}\left(P_{i}\circ\left(\bigoplus(\arity\circ F)\right)^{-1}\right)}{P\in\prod_{i\in\dom F}F'_{i}}$.\end{thm}
\begin{proof}
From the previous theorem.\end{proof}
\begin{thm}
$\GR\prod^{\mathrm{(ord)}}F=\setcond{\bigcup\im P}{P\in\prod_{i\in\dom F}\setcond{f\circ\left(\bigoplus(\arity\circ F)\right)^{-1}}{f\in F'_{i}}}$.\end{thm}
\begin{proof}
From the previous theorem.\end{proof}
\begin{rem}
Note that the above formulas contain both $\bigcup_{i\in\dom F}\dom F'_{i}$
and $\bigcup_{i\in\dom F}F'_{i}$. These forms are similar but different.
\end{rem}

\subsubsection{Associativity of ordinated product}

Let $f$ be an ordinal variadic function.

Let $S$ be an ordinal indexed family of ordinal indexed families
of functions, each function taking an ordinal number of arguments in
a set $X$.

\index{associative!infinite}I call $f$ \emph{infinite associative}
when
\begin{enumerate}
\item $f(f\circ S)=f(\concat S)$ for every $S$;
\item $f(\llbracket x\rrbracket)=x$ for $x\in X$.
\end{enumerate}

\paragraph{Infinite associativity implies associativity}
\begin{prop}
Let $f$ be an infinitely associative function taking an ordinal number
of arguments in a set $X$. Define $x\star y=f\llbracket x,y\rrbracket$
for $x,y\in X$. Then the binary operation $\star$ is associative.\end{prop}
\begin{proof}
Let $x,y,z\in X$. Then, using both laws of infinite associativity,
$(x\star y)\star z=f\llbracket f\llbracket x,y\rrbracket,z\rrbracket=f\llbracket f\llbracket x,y\rrbracket,f\llbracket z\rrbracket\rrbracket=f(f\circ\llbracket\llbracket x,y\rrbracket,\llbracket z\rrbracket\rrbracket)=f(\concat\llbracket\llbracket x,y\rrbracket,\llbracket z\rrbracket\rrbracket)=f\llbracket x,y,z\rrbracket$.
Similarly $x\star(y\star z)=f\llbracket x,y,z\rrbracket$. So $(x\star y)\star z=x\star(y\star z)$.
\end{proof}
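For intuition, the finitary analogue of these two laws is easy to check mechanically. A hedged Python sketch (our illustration; Python lists stand in for ordinal indexed families, and list concatenation for $\concat$):
\begin{verbatim}
# Finite sanity check of the two laws of infinite associativity
# for concatenation; lists stand in for ordinal indexed families.
from itertools import chain

def concat(S):
    """Concatenate a family (list) of lists."""
    return list(chain.from_iterable(S))

# A family of families (the nature of the inner atoms is irrelevant).
S = [[["x"], ["y", "z"]], [["u", "v"]]]

# Law 1: f(f o S) = f(concat S), with f = concat.
assert concat([concat(s) for s in S]) == concat(concat(S))

# Law 2: f([x]) = x for a single-member family.
assert concat([["x", "y"]]) == ["x", "y"]
\end{verbatim}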
\paragraph{Concatenation is associative}

First we will prove some lemmas.

Let $a$ and $b$ be functions on a poset. Let $a\sim b$ iff there
exists an order isomorphism $f$ such that $a=b\circ f$. Evidently
$\sim$ is an equivalence relation.
\begin{obvious}
$\concat a=\concat b\Leftrightarrow\uncurry(a)\sim\uncurry(b)$ for
all ordinal indexed families $a$ and $b$ of functions taking an
ordinal number of arguments.
\end{obvious}
Thanks to the above, we can reduce properties of $\concat$ to properties
of $\uncurry$.
\begin{lem}
$a\sim b\Rightarrow\uncurry a\sim\uncurry b$ for all ordinal indexed
families $a$ and $b$ of functions taking an ordinal number of arguments.\end{lem}
\begin{proof}
There exists an order isomorphism $f$ such that $a=b\circ f$.
\begin{multline*}
\uncurry(a)(x,y)=(ax)y=(bfx)y=\\\uncurry(b)(fx,y)=\uncurry(b)g(x,y)
\end{multline*}
where $g(x,y)=(fx,y)$.

$g$ is an order isomorphism because $g(x_{0},y_{0})\ge g(x_{1},y_{1})\Leftrightarrow(x_{0},y_{0})\ge(x_{1},y_{1})$.
(Injectivity and surjectivity are obvious.)\end{proof}
\begin{lem}
\label{by-member}Let $a_{i}\sim b_{i}$ for every $i$. Then $\uncurry a\sim\uncurry b$
for all ordinal indexed families $a$ and $b$ of ordinal indexed
families of functions taking an ordinal number of arguments.\end{lem}
\begin{proof}
Let $a_{i}=b_{i}\circ f_{i}$ where $f_{i}$ is an order isomorphism
for every $i$.

\begin{multline*}
\uncurry(a)(i,y)=a_{i}y=b_{i}f_{i}y=\\\uncurry(b)(i,f_{i}y)=\uncurry(b)g(i,y)=(\uncurry(b)\circ g)(i,y)\end{multline*}
where $g(i,y)=(i,f_{i}y)$.

$g$ is an order isomorphism because $g(i,y_{0})\ge g(i,y_{1})\Leftrightarrow f_{i}y_{0}\ge f_{i}y_{1}\Leftrightarrow y_{0}\ge y_{1}$
and $i_{0}>i_{1}\Rightarrow g(i_{0},y_{0})>g(i_{1},y_{1})$. (Injectivity
and surjectivity are obvious.)
\end{proof}
Let now $S$ be an ordinal indexed family of ordinal indexed families
of functions taking an ordinal number of arguments.
\begin{lem}
$\uncurry(\uncurry\circ S)\sim\uncurry(\uncurry S)$.\end{lem}
\begin{proof}
$\uncurry\circ S=\mylambda i{\dom S}{\uncurry(S_{i})}$;

$(\uncurry(\uncurry\circ S))((i,x),y)=(\uncurry S_{i})(x,y)=(S_{i}x)y$;

$(\uncurry(\uncurry S))((i,x),y)=((\uncurry S)(i,x))y=(S_{i}x)y$.

Thus $(\uncurry(\uncurry\circ S))((i,x),y)=(\uncurry(\uncurry S))((i,x),y)$
and thus evidently $\uncurry(\uncurry\circ S)\sim\uncurry(\uncurry S)$.\end{proof}
\begin{thm}
$\concat$ is an infinitely associative function.\end{thm}
\begin{proof}
$\concat(\llbracket x\rrbracket)=x$ for a function $x$ taking an
ordinal number of arguments is obvious.
It remains to prove
\[
\concat(\concat\circ S)=\concat(\concat S).
\]

We have, using the lemmas,
\begin{align*}
\concat(\concat\circ S) & \sim\\
\uncurry(\concat\circ S) & \sim\\
\text{(by lemma \ref{by-member})}\\
\uncurry(\uncurry\circ S) & \sim\\
\uncurry(\uncurry S) & \sim\\
\uncurry(\concat S) & \sim\\
\concat(\concat S).
\end{align*}

Consequently $\concat(\concat\circ S)=\concat(\concat S)$.\end{proof}
\begin{cor}
Ordinated product is an infinitely associative function.
\end{cor}

\section{Galois surjections}

\begin{defn}\index{Galois surjection}
  A \emph{Galois surjection} is the special case of a Galois connection such
  that $f^{\ast} \circ f_{\ast}$ is the identity.
\end{defn}

\begin{prop}\label{gal-eq}
  For a Galois surjection $\mathfrak{A} \rightarrow \mathfrak{B}$ such that
  $\mathfrak{A}$ is a join-semilattice we have (for every $y \in
  \mathfrak{B}$)
  \[ f_{\ast} y = \max \setcond{ x \in \mathfrak{A} }{
     f^{\ast} x = y } . \]
\end{prop}

\begin{proof}
  We need to prove (theorem~\ref{adj-max})
  \[ \max \setcond{ x \in \mathfrak{A} }{ f^{\ast} x = y } =
     \max \setcond{ x \in \mathfrak{A} }{ f^{\ast} x \sqsubseteq y } . \]
  To prove it, it's enough to show that for each $x$ with $f^{\ast} x \sqsubseteq y$
  there exists an $x' \sqsupseteq x$ such that $f^{\ast} x' = y$.

  Really, $y = f^{\ast} f_{\ast} y$. It's enough to prove $f^{\ast} (x \sqcup
  f_{\ast} y) = y$.

  Indeed (because lower adjoints preserve joins),
  $f^{\ast} (x \sqcup f_{\ast} y) = f^{\ast} x \sqcup f^{\ast} f_{\ast} y = f^{\ast} x \sqcup y = y$.
\end{proof}
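As a concrete illustration (our example, not part of the original argument): take $\mathfrak{A}=\mathbb{R}$ and $\mathfrak{B}=\mathbb{Z}$ with the usual orders, $f^{\ast}=\lceil\cdot\rceil$ and $f_{\ast}$ the inclusion of $\mathbb{Z}$ into $\mathbb{R}$. Then $f^{\ast}x\leq y\Leftrightarrow x\leq f_{\ast}y$, and $f^{\ast}f_{\ast}y=\lceil y\rceil=y$, so this Galois connection is a Galois surjection. In accordance with the proposition, $f_{\ast}y=y=\max\setcond{x\in\mathbb{R}}{\lceil x\rceil=y}$, the maximum of the interval $\left]y-1,y\right]$.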
\section{Some properties of frames}\label{some-frames}

This section is based on a proof by \noun{Todd Trimble}. A shorter
but less elementary proof (also by \noun{Todd Trimble}) is available
at\\
\href{http://ncatlab.org/toddtrimble/published/topogeny}{http://ncatlab.org/toddtrimble/published/topogeny}

I will abbreviate \emph{join-semilattice with least element} as JSWLE.
\begin{obvious}
JSWLEs are the same as finitely join-closed posets (with nullary joins
included).
\end{obvious}

\begin{defn}
It is said that a function $f$ from a poset $\mathfrak{A}$ to a
poset $\mathfrak{B}$ \emph{preserves finite joins}, when for every
finite set $S\in\subsets\mathfrak{A}$ such that $\bigsqcup^{\mathfrak{A}}S$
exists we have $\bigsqcup^{\mathfrak{B}}\supfun f^{\ast}S=f\bigsqcup^{\mathfrak{A}}S$.\end{defn}
\begin{obvious}
A function between JSWLEs preserves finite joins iff it preserves
binary joins ($f(x\sqcup y)=fx\sqcup fy$) and nullary joins ($f(\bot^{\mathfrak{A}})=\bot^{\mathfrak{B}}$).\end{obvious}
\begin{defn}
A \emph{fixed point} of a function $F$ is an $x$ such that $F(x)=x$.
We will denote by $\Fix(F)$ the set of all fixed points of a function
$F$.
\end{defn}

\begin{defn}
Let $\mathfrak{A}$ be a JSWLE. A \emph{co-nucleus} is a function
$F:\mathfrak{A}\rightarrow\mathfrak{A}$ such that for every $p,q\in\mathfrak{A}$
we have:
\begin{enumerate}
\item \label{co-nucleus-less}$F(p)\sqsubseteq p$;
\item $F(F(p))=F(p)$;
\item $F(p\sqcup q)=F(p)\sqcup F(q)$.
\end{enumerate}
\end{defn}
\begin{prop}
Every co-nucleus is a monotone function.\end{prop}
\begin{proof}
If $p\sqsubseteq q$ then $F(q)=F(p\sqcup q)=F(p)\sqcup F(q)\sqsupseteq F(p)$.\end{proof}
\begin{lem}
$\bigsqcup^{\Fix(F)}S=\bigsqcup S$ for every $S\in\subsets\Fix(F)$
for every co-nucleus~$F$ on a complete lattice.\end{lem}
\begin{proof}
Obviously $\bigsqcup S\sqsupseteq x$ for every $x\in S$.

Suppose $z\sqsupseteq x$ for every $x\in S$ for a $z\in\Fix(F)$.
Then $z\sqsupseteq\bigsqcup S$.

$F\left(\bigsqcup S\right)\sqsupseteq F(x)$ for every $x\in S$.
Thus $F\left(\bigsqcup S\right)\sqsupseteq\bigsqcup_{x\in S}F(x)=\bigsqcup S$.
But $F\left(\bigsqcup S\right)\sqsubseteq\bigsqcup S$. Thus $F\left(\bigsqcup S\right)=\bigsqcup S$,
that is $\bigsqcup S\in\Fix(F)$.

So $\bigsqcup^{\Fix(F)}S=\bigsqcup S$ by the definition of join.\end{proof}
\begin{cor}
$\bigsqcup^{\Fix(F)}S$ is defined for every $S\in\subsets\Fix(F)$.\end{cor}
\begin{lem}
$\bigsqcap^{\Fix(F)}S=F\left(\bigsqcap S\right)$ for every $S\in\subsets\Fix(F)$
for every co-nucleus~$F$ on a complete lattice.\end{lem}
\begin{proof}
Obviously $F\left(\bigsqcap S\right)\sqsubseteq x$ for every $x\in S$.

Suppose $z\sqsubseteq x$ for every $x\in S$ for a $z\in\Fix(F)$.
Then $z\sqsubseteq\bigsqcap S$ and thus, by monotonicity, $z=F(z)\sqsubseteq F\left(\bigsqcap S\right)$.

So $\bigsqcap^{\Fix(F)}S=F\left(\bigsqcap S\right)$ by the definition
of meet.\end{proof}
\begin{cor}
$\bigsqcap^{\Fix(F)}S$ is defined for every $S\in\subsets\Fix(F)$.\end{cor}
\begin{obvious}
$\Fix(F)$ with the induced order is a complete lattice.\end{obvious}
\begin{lem}
\label{fix-is-co-frame}If $F$ is a co-nucleus on a co-frame $\mathfrak{A}$,
then the poset $\Fix(F)$ of fixed points of $F$, with order inherited
from $\mathfrak{A}$, is also a co-frame.\end{lem}
\begin{proof}
Let $b\in\Fix(F)$, $S\in\subsets\Fix(F)$. Then
\begin{align*}
b\sqcup^{\Fix(F)}\bigsqcap^{\Fix(F)}S & =\\
b\sqcup^{\Fix(F)}F\left(\bigsqcap S\right) & =\\
F(b)\sqcup F\left(\bigsqcap S\right) & =\\
F\left(b\sqcup\bigsqcap S\right) & =\\
F\left(\bigsqcap\langle b\sqcup\rangle^{\ast}S\right) & =\\
\bigsqcap^{\Fix(F)}\langle b\sqcup\rangle^{\ast}S & =\\
\bigsqcap^{\Fix(F)}\langle b\sqcup^{\Fix(F)}\rangle^{\ast}S.
\end{align*}
\end{proof}
\begin{defn}
Denote by $\Upper(\mathfrak{A})$ the set of upper sets on $\mathfrak{A}$,
ordered \emph{reverse} to set-theoretic inclusion.\end{defn}
\begin{defn}
Denote $\uparrow a = \setcond{x\in\mathfrak{A}}{x\sqsupseteq a} \in \Upper(\mathfrak{A})$.
\end{defn}
\begin{lem}
The set $\Upper(\mathfrak{A})$ is closed under arbitrary meets and
joins.\end{lem}
\begin{proof}
Let $S\in\subsets\Upper(\mathfrak{A})$.

Let $X\in\bigcup S$ and $Y\sqsupseteq X$ for a $Y\in\mathfrak{A}$.
Then there is $P\in S$ such that $X\in P$ and thus $Y\in P$ and
so $Y\in\bigcup S$. So $\bigcup S\in\Upper(\mathfrak{A})$.

Let now $X\in\bigcap S$ and $Y\sqsupseteq X$ for a $Y\in\mathfrak{A}$.
Then $\forall T\in S:X\in T$ and so $\forall T\in S:Y\in T$, thus
$Y\in\bigcap S$. So $\bigcap S\in\Upper(\mathfrak{A})$.\end{proof}
\begin{thm}
\label{compl-via-down}A poset $\mathfrak{A}$ is a complete lattice
iff there is a map $s:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$,
antitone with respect to set-theoretic inclusion, such that
\begin{enumerate}
\item $s(\uparrow p)=p$ for every $p\in\mathfrak{A}$;
\item $D\subseteq\uparrow s(D)$ for every $D\in\Upper(\mathfrak{A})$.
\end{enumerate}
Moreover, in this case $s(D)=\bigsqcap D$ for every $D\in\Upper(\mathfrak{A})$.\end{thm}
\begin{proof}
~
\begin{description}
\item [{$\Rightarrow$}] Take $s(D)=\bigsqcap D$.
\item [{$\Leftarrow$}] $\forall x\in D:x\sqsupseteq s(D)$ from the second
formula.

Suppose $y\sqsubseteq x$ for every $x\in D$. Then every $x\in D$ lies
in $\uparrow y$, that is $D\subseteq\uparrow y$; because $s$ is antitone,
it follows that $s(D)\sqsupseteq s(\uparrow y)=y$.

That $s(D)$ is the meet of $D$ now follows from the definition of meets.

It remains to prove that $\mathfrak{A}$ is a complete lattice.

Take any subset~$S$ of $\mathfrak{A}$. Let $D$ be the smallest
upper set containing~$S$. (It exists because $\Upper(\mathfrak{A})$
is closed under arbitrary joins.) This is
\[
D=\setcond{x\in\mathfrak{A}}{\exists a\in S:x\sqsupseteq a}.
\]
Any lower bound of $D$ is clearly a lower bound of $S$ since $D\supseteq S$.
Conversely, any lower bound of $S$ is a lower bound of $D$, since every
element of $D$ is above some element of $S$. Thus
$S$ and $D$ have the same set of lower bounds, hence have the same
greatest lower bound.

\end{description}
\end{proof}
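For example (our illustration): take $\mathfrak{A}=\subsets X$ ordered by $\subseteq$, and $s(D)=\bigcap D$ for $D\in\Upper(\mathfrak{A})$; this $s$ is antitone with respect to inclusion. Then $s(\uparrow p)=\bigcap\setcond{q\in\subsets X}{q\supseteq p}=p$, and every $A\in D$ satisfies $A\supseteq\bigcap D=s(D)$, that is $D\subseteq\uparrow s(D)$; so the theorem recovers the familiar fact that $\subsets X$ is a complete lattice with $\bigsqcap D=\bigcap D$.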
\begin{prop}
\label{down-is-homo}For any poset $\mathfrak{A}$ the following are
mutually inverse order isomorphisms between upper sets $F$ (ordered
reverse to set-theoretic inclusion) on $\mathfrak{A}$ and order homomorphisms
$\varphi:\mathfrak{A}^{\op}\rightarrow2$ (here $2$ is the partially
ordered set of two elements: $0$ and $1$ where $0\sqsubseteq1$),
defined by the formulas
\begin{enumerate}
\item \label{phi-01}$\varphi(a)=\left\{ \begin{array}{ll}
1 & \text{if }a\in F\\
0 & \text{if }a\notin F
\end{array}\right.$ for every $a\in\mathfrak{A}$;
\item \label{phi-inv}$F=\varphi^{-1}(1)$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $X\in\varphi^{-1}(1)$ and $Y\sqsupseteq X$. Then $\varphi(X)=1$
and thus $\varphi(Y)=1$. Thus $\varphi^{-1}(1)$ is an upper set.

It is easy to show that $\varphi$ defined by the formula~\ref{phi-01}
is an order homomorphism $\mathfrak{A}^{\op}\rightarrow2$ whenever
$F$ is an upper set.

Finally we need to prove that they are mutually inverse. Really: Let
$\varphi$ be defined by the formula~\ref{phi-01}. Then take $F'=\varphi^{-1}(1)$
and define $\varphi'(a)$ by the formula~\ref{phi-01}. We have
\[
\varphi'(a)=\left\{ \begin{array}{ll}
1 & \text{if }a\in\varphi^{-1}(1)\\
0 & \text{if }a\notin\varphi^{-1}(1)
\end{array}\right.=\left\{ \begin{array}{ll}
1 & \text{if }\varphi(a)=1\\
0 & \text{if }\varphi(a)\neq1
\end{array}\right.=\varphi(a).
\]
Let now $F$ be defined by the formula~\ref{phi-inv}. Then take
$\varphi'(a)=\left\{ \begin{array}{ll}
1 & \text{if }a\in F\\
0 & \text{if }a\notin F
\end{array}\right.$ as defined by the formula~\ref{phi-01} and define $F'=\varphi'^{-1}(1)$.
Then
\[
F'=\varphi'^{-1}(1)=F.
\]
\end{proof}
\begin{lem}
For a complete lattice $\mathfrak{A}$, the map $\bigsqcap:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$
preserves arbitrary meets.\end{lem}
\begin{proof}
Let $S\in\subsets\Upper(\mathfrak{A})$. We have $\bigsqcap^{\Upper(\mathfrak{A})}S=\bigcup S\in\Upper(\mathfrak{A})$
(meets in $\Upper(\mathfrak{A})$ are unions, by the reverse order).

$\bigsqcap\bigcup S=\bigsqcap_{X\in S}\bigsqcap X$
is what we need to prove, and it holds because both sides are the
greatest lower bound of $\bigcup S$.\end{proof}
\begin{lem}
A complete lattice $\mathfrak{A}$ is a co-frame iff $\bigsqcap:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$
preserves finite joins.\end{lem}
\begin{proof}
~
\begin{description}
\item [{$\Rightarrow$}] Let $\mathfrak{A}$ be a co-frame. Let $D,D'\in\Upper(\mathfrak{A})$.
The join $D\sqcup D'$ in $\Upper(\mathfrak{A})$ is the intersection $D\cap D'$.
Obviously $\bigsqcap(D\sqcup D')\sqsupseteq\bigsqcap D$ and $\bigsqcap(D\sqcup D')\sqsupseteq\bigsqcap D'$,
so $\bigsqcap(D\sqcup D')\sqsupseteq\bigsqcap D\sqcup\bigsqcap D'$.

Also
\begin{multline*}
\bigsqcap D\sqcup\bigsqcap D'=\text{(because \ensuremath{\mathfrak{A}} is a co-frame)}=\\
\bigsqcap\setcond{d\sqcup d'}{d\in D,d'\in D'}.
\end{multline*}
Obviously $d\sqcup d'\in D\cap D'$, thus $\bigsqcap D\sqcup\bigsqcap D'=\bigsqcap\setcond{d\sqcup d'}{d\in D,d'\in D'}\sqsupseteq\bigsqcap(D\cap D')=\bigsqcap(D\sqcup D')$.
So $\bigsqcap(D\sqcup D')=\bigsqcap D\sqcup\bigsqcap D'$, that is
$\bigsqcap:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$ preserves binary joins.

It preserves nullary joins since $\bigsqcap\bot^{\Upper(\mathfrak{A})}=\bigsqcap\mathfrak{A}=\bot^{\mathfrak{A}}$.

\item [{$\Leftarrow$}] Suppose $\bigsqcap:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$
preserves finite joins. Let $b\in\mathfrak{A}$, $S\in\subsets\mathfrak{A}$.
Let $D$ be the smallest upper set containing $S$ (so $D=\bigcup\rsupfun{\uparrow}S$);
note that $\bigsqcap S=\bigsqcap D$.
Then
\begin{align*}
b\sqcup\bigsqcap S & =\\
\bigsqcap\uparrow b\sqcup\bigsqcap\bigcup\rsupfun{\uparrow}S & =\text{(since \ensuremath{\bigsqcap} preserves finite joins)}\\
\bigsqcap\left(\uparrow b\sqcup\bigcup\rsupfun{\uparrow}S\right) & =\\
\bigsqcap\left(\uparrow b\cap\bigcup\rsupfun{\uparrow}S\right) & =\\
\bigsqcap\bigcup_{a\in S}(\uparrow b\cap\uparrow a) & =\\
\bigsqcap\bigcup_{a\in S}\uparrow(b\sqcup a) & =\text{(since \ensuremath{\bigsqcap} preserves all meets)}\\
\bigsqcap_{a\in S}\bigsqcap\uparrow(b\sqcup a) & =\\
\bigsqcap_{a\in S}(b\sqcup a).
\end{align*}
So $b\sqcup\bigsqcap S=\bigsqcap\langle b\sqcup\rangle^{\ast}S$, that
is $\mathfrak{A}$ is a co-frame.

\end{description}
\end{proof}
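The last two lemmas and the corollary that follows can be sanity-checked by brute force on a small example. A hedged Python sketch (our code; the powerset of a two-element set serves as a small co-frame, and all names are ours):
\begin{verbatim}
# Brute-force check on the powerset of {0,1} (ordered by inclusion),
# which is a co-frame: the map meet : Upper(A) -> A preserves binary
# joins (joins of upper sets, in the reverse-inclusion order, are
# intersections), and F = up o meet is a co-nucleus.
from itertools import combinations

A = [frozenset(s) for n in range(3) for s in combinations({0, 1}, n)]
top = frozenset({0, 1})

def up(a):                        # up(a) = {b : b >= a}
    return frozenset(b for b in A if b >= a)

def meet(D):                      # infimum in A = intersection
    out = top
    for b in D:
        out &= b
    return out

def F(D):                         # F = up o meet
    return up(meet(D))

subsets = [frozenset(s) for n in range(len(A) + 1)
           for s in combinations(A, n)]
uppers = [D for D in subsets if all(up(a) <= D for a in D)]

for D1 in uppers:
    for D2 in uppers:
        assert meet(D1 & D2) == meet(D1) | meet(D2)  # preserves joins
        assert F(D1 & D2) == F(D1) & F(D2)           # co-nucleus ax. 3
for D in uppers:
    assert F(D) >= D        # ax. 1: F(D) below D in reverse inclusion
    assert F(F(D)) == F(D)  # ax. 2: idempotence
\end{verbatim}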
\begin{cor}
\label{down-meet-co-nucleus}If $\mathfrak{A}$ is a co-frame, then
the composition $F=\uparrow\circ\bigsqcap:\Upper(\mathfrak{A})\rightarrow\Upper(\mathfrak{A})$
is a co-nucleus. The embedding $\uparrow:\mathfrak{A}\rightarrow\Upper(\mathfrak{A})$
is an isomorphism of $\mathfrak{A}$ onto the co-frame $\Fix(F)$.\end{cor}
\begin{proof}
$D\sqsupseteq F(D)$ follows from theorem~\ref{compl-via-down}.

We have $F(F(D))=F(D)$ for all $D\in\Upper(\mathfrak{A})$ since
$F(F(D))=\uparrow\bigsqcap\uparrow\bigsqcap D=\text{(because \ensuremath{\bigsqcap\uparrow s=s} for any \ensuremath{s})}=\uparrow\bigsqcap D=F(D)$.

And since both $\bigsqcap:\Upper(\mathfrak{A})\rightarrow\mathfrak{A}$
and $\uparrow$ preserve finite joins, $F$ preserves finite joins.
Thus $F$ is a co-nucleus.

Finally, we have $a\sqsupseteq a'$ if and only if $\uparrow a\subseteq\uparrow a'$,
so that $\uparrow:\mathfrak{A}\rightarrow\Upper(\mathfrak{A})$ maps
$\mathfrak{A}$ isomorphically onto its image $\rsupfun{\uparrow}\mathfrak{A}$.
This image is $\Fix(F)$ because if $D$ is any fixed point (i.e.
if $D=\uparrow\bigsqcap D$), then $D$ clearly belongs to $\rsupfun{\uparrow}\mathfrak{A}$;
and conversely $\uparrow a$ is always a fixed point of $F=\uparrow\circ\bigsqcap$
since $F(\uparrow a)=\uparrow\bigsqcap\uparrow a=\uparrow a$.\end{proof}
\begin{defn}
If $\mathfrak{A}$, $\mathfrak{B}$ are two JSWLEs, then $\operatorname{Join}(\mathfrak{A},\mathfrak{B})$
is the pointwise ordered set of finite joins preserving maps $\mathfrak{A}\rightarrow\mathfrak{B}$.\end{defn}
\begin{obvious}
$\operatorname{Join}(\mathfrak{A},\mathfrak{B})$ is a JSWLE, where~$f\sqcup g$
is given by the formula $(f\sqcup g)(p)=f(p)\sqcup g(p)$ and $\bot^{\operatorname{Join}(\mathfrak{A},\mathfrak{B})}$
is given by the formula $\bot^{\operatorname{Join}(\mathfrak{A},\mathfrak{B})}(p)=\bot^{\mathfrak{B}}$.\end{obvious}
\begin{defn}
Let $h:Q\rightarrow R$ be a finite joins preserving map. Then by
definition $\operatorname{Join}(P,h):\operatorname{Join}(P,Q)\rightarrow\operatorname{Join}(P,R)$
takes $f\in\operatorname{Join}(P,Q)$ into the composition $h\circ f\in\operatorname{Join}(P,R)$.\end{defn}
\begin{lem}
The above defined $\operatorname{Join}(P,h)$ is a finite joins preserving
map.\end{lem}
\begin{proof}
~

\begin{multline*}
(h\circ(f\sqcup f'))x=h(f\sqcup f')x=h(fx\sqcup f'x)=\\
hfx\sqcup hf'x=(h\circ f)x\sqcup(h\circ f')x=((h\circ f)\sqcup(h\circ f'))x.
\end{multline*}
Thus $h\circ(f\sqcup f')=(h\circ f)\sqcup(h\circ f')$.

$(h\circ\bot^{\operatorname{Join}(P,Q)})x=h\bot^{\operatorname{Join}(P,Q)}x=h\bot^{Q}=\bot^{R}$.\end{proof}
\begin{prop}
If $h,h':Q\rightarrow R$ are finite joins preserving maps and $h\sqsupseteq h'$,
then $\operatorname{Join}(P,h)\sqsupseteq\operatorname{Join}(P,h')$.\end{prop}
\begin{proof}
$\operatorname{Join}(P,h)(f)(x)=(h\circ f)(x)=hfx\sqsupseteq h'fx=(h'\circ f)(x)=\operatorname{Join}(P,h')(f)(x)$.\end{proof}
\begin{lem}
If $g:Q\rightarrow R$ and $h:R\rightarrow S$ are finite joins preserving,
then the composition $\operatorname{Join}(P,h)\circ\operatorname{Join}(P,g)$
is equal to $\operatorname{Join}(P,h\circ g)$.
Also $\operatorname{Join}(P,\id_{Q})$
for the identity map $\id_{Q}$ on $Q$ is the identity map $\id_{\operatorname{Join}(P,Q)}$
on $\operatorname{Join}(P,Q)$.\end{lem}
\begin{proof}
$\operatorname{Join}(P,h)\operatorname{Join}(P,g)f=\operatorname{Join}(P,h)(g\circ f)=h\circ g\circ f=\operatorname{Join}(P,h\circ g)f$.

$\operatorname{Join}(P,\id_{Q})f=\id_{Q}\circ f=f$.\end{proof}
\begin{cor}
\label{join-map-co-nucleus}If $Q$ is a JSWLE and $F:Q\rightarrow Q$
is a co-nucleus, then for any JSWLE $P$ we have that
\[
\operatorname{Join}(P,F):\operatorname{Join}(P,Q)\rightarrow\operatorname{Join}(P,Q)
\]
is also a co-nucleus.\end{cor}
\begin{proof}
From $\id_{Q}\sqsupseteq F$ (co-nucleus axiom \ref{co-nucleus-less})
we have $\operatorname{Join}(P,\id_{Q})\sqsupseteq\operatorname{Join}(P,F)$
and since by the last lemma the left side is the identity on $\operatorname{Join}(P,Q)$,
we see that $\operatorname{Join}(P,F)$ also satisfies co-nucleus
axiom \ref{co-nucleus-less}.

$\operatorname{Join}(P,F)\circ\operatorname{Join}(P,F)=\operatorname{Join}(P,F\circ F)$
by the same lemma and thus $\operatorname{Join}(P,F)\circ\operatorname{Join}(P,F)=\operatorname{Join}(P,F)$
by the second co-nucleus axiom for $F$, showing that $\operatorname{Join}(P,F)$
satisfies the second co-nucleus axiom.

By another lemma, we have that $\operatorname{Join}(P,F)$ preserves
binary joins, given that $F$ preserves binary joins, which is the
third co-nucleus axiom.\end{proof}
\begin{lem}
\label{join-fix-inter}$\Fix(\operatorname{Join}(P,F))=\operatorname{Join}(P,\Fix(F))$
for all JSWLEs $P$, $Q$ and every finite joins preserving function $F:Q\rightarrow Q$.\end{lem}
\begin{proof}
$a\in\Fix(\operatorname{Join}(P,F))\Leftrightarrow a\in\operatorname{Join}(P,Q)\wedge F\circ a=a\Leftrightarrow a\in\operatorname{Join}(P,Q)\wedge\forall x\in P:F(a(x))=a(x)$.

$a\in\operatorname{Join}(P,\Fix(F))\Leftrightarrow a\in\operatorname{Join}(P,Q)\wedge a\in\Fix(F)^{P}\Leftrightarrow a\in\operatorname{Join}(P,Q)\wedge\forall x\in P:F(a(x))=a(x)$
(note that finite joins in $\Fix(F)$ agree with finite joins in $Q$,
so both sides impose the same join-preservation condition).

Thus $\Fix(\operatorname{Join}(P,F))=\operatorname{Join}(P,\Fix(F))$.
That the orders of the left and right sides of the equality agree
is obvious.\end{proof}
\begin{defn}
$\mathbf{Pos}(\mathfrak{A},\mathfrak{B})$ is the pointwise ordered
poset of monotone maps from a poset $\mathfrak{A}$ to a poset $\mathfrak{B}$.\end{defn}
\begin{lem}
\label{join-pos-interch}If $Q$, $R$ are JSWLEs and $P$ is a poset,
then $\mathbf{Pos}(P,R)$ is a JSWLE and $\mathbf{Pos}(P,\operatorname{Join}(Q,R))$
is isomorphic to $\operatorname{Join}\left(Q,\mathbf{Pos}(P,R)\right)$.
If $R$ is a co-frame, then also $\mathbf{Pos}(P,R)$ is a co-frame.
\end{lem}
\begin{proof}
Let $f,g\in\mathbf{Pos}(P,R)$. Then $\lambda x\in P:(fx\sqcup gx)$
is obviously monotone, and then it is evident that $f\sqcup^{\mathbf{Pos}(P,R)}g=\lambda x\in P:(fx\sqcup gx)$.
$\lambda x\in P:\bot^{R}$ is also obviously monotone and it is evident
that $\bot^{\mathbf{Pos}(P,R)}=\lambda x\in P:\bot^{R}$.

Obviously both $\mathbf{Pos}(P,\operatorname{Join}(Q,R))$ and $\operatorname{Join}\left(Q,\mathbf{Pos}(P,R)\right)$
are sets of order preserving maps.

Let $f$ be a monotone map.

$f\in\mathbf{Pos}(P,\operatorname{Join}(Q,R))$ iff $f\in\operatorname{Join}(Q,R)^{P}$
iff $f\in\setcond{g\in R^{Q}}{g\text{ preserves finite joins}}^{P}$
iff $f\in(R^{Q})^{P}$ and every $g=f(x)$ (for $x\in P$) preserves
finite joins.
This is bijectively equivalent (via the flip $f\mapsto f'$ defined by $f'\,y\,x=f\,x\,y$) to: $f'\in(R^{P})^{Q}$
and $f'$ preserves finite joins.

$f'\in\operatorname{Join}\left(Q,\mathbf{Pos}(P,R)\right)$ iff $f'$
preserves finite joins and $f'\in\mathbf{Pos}(P,R)^{Q}$ iff $f'$
preserves finite joins and $f'\in\setcond{g\in(R^{P})^{Q}}{\text{every value of }g\text{ is monotone}}$
iff $f'$ preserves finite joins and $f'\in(R^{P})^{Q}$ (the values
of $f'$ are monotone because $f$ is monotone).

So we have proved that $f\mapsto f'$ is a bijection between $\mathbf{Pos}(P,\operatorname{Join}(Q,R))$
and $\operatorname{Join}\left(Q,\mathbf{Pos}(P,R)\right)$. That it
preserves order is obvious.

It remains to prove that if $R$ is a co-frame, then also $\mathbf{Pos}(P,R)$
is a co-frame.

First, we need to prove that $\mathbf{Pos}(P,R)$ is a complete lattice.
But it is easy to prove that for every set $S\in\subsets\mathbf{Pos}(P,R)$
the maps $\lambda x\in P:\bigsqcup_{f\in S}f(x)$ and $\lambda x\in P:\bigsqcap_{f\in S}f(x)$
are monotone and thus give the joins and meets on $\mathbf{Pos}(P,R)$.

Next we need to prove that
\[
b\sqcup^{\mathbf{Pos}(P,R)}\bigsqcap^{\mathbf{Pos}(P,R)}S=\bigsqcap^{\mathbf{Pos}(P,R)}\left\langle b\sqcup^{\mathbf{Pos}(P,R)}\right\rangle ^{\ast}S.
\]
Really (for every $x\in P$),

\begin{multline*}
\left(b\sqcup^{\mathbf{Pos}(P,R)}\bigsqcap^{\mathbf{Pos}(P,R)}S\right)x=b(x)\sqcup\left(\bigsqcap^{\mathbf{Pos}(P,R)}S\right)x=\\
b(x)\sqcup\bigsqcap_{f\in S}f(x)=\text{(since \ensuremath{R} is a co-frame)}=\bigsqcap_{f\in S}(b(x)\sqcup f(x))=\bigsqcap_{f\in S}\left(b\sqcup^{\mathbf{Pos}(P,R)}f\right)x=\\
\left(\bigsqcap_{f\in S}^{\mathbf{Pos}(P,R)}\left(b\sqcup^{\mathbf{Pos}(P,R)}f\right)\right)x.
\end{multline*}

Thus
\begin{multline*}
b\sqcup^{\mathbf{Pos}(P,R)}\bigsqcap^{\mathbf{Pos}(P,R)}S=\\\bigsqcap_{f\in S}^{\mathbf{Pos}(P,R)}\left(b\sqcup^{\mathbf{Pos}(P,R)}f\right)=\bigsqcap^{\mathbf{Pos}(P,R)}\left\langle b\sqcup^{\mathbf{Pos}(P,R)}\right\rangle ^{\ast}S.
\end{multline*}
\end{proof}
\begin{defn}
$P\cong Q$ means that posets $P$ and $Q$ are isomorphic.\end{defn}
YES", "lm_q1_score": 0.800691997339971, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.5997881323655777}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\hyphenation{KORDER}\n\\begmath 11.6 Low-level Subprograms for Operations on Splines\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThis chapter describes five subprograms for spline operations that are used\nby the subprograms of the preceding chapter. It is expected that one would\nonly use these subprograms directly if one has needs more specialized than\nare covered by the higher level subprograms of the preceding chapter.\n\nSubroutine DSVALA evaluates at an argument X the values of the derivatives,\nof orders~0 through NDERIV, of a spline function represented using the\nB-spline basis. DSVALA must be given a difference table of the coefficients\nof the spline function. Subroutine DSDIF is provided to compute this\ndifference table. Once the difference table has been computed and saved, use\nof DSVALA is more economical than making NDERIV$+1$ calls to subprogram\nDSVAL of the preceding chapter if NDERIV $> 0.$\n\nSubroutine DSFIND does a lookup in a knot array to find a knot subinterval\nof nonzero length containing a specified argument X, or the nearest such\nsubinterval if extrapolation is needed.\n\nUsing a knot sequence regarded as defining a B-spline basis function of\norder KORDER, subroutine DSBASD computes the values at X of the KORDER\nB-spline basis functions (or a derivative of these functions as specified by\nIDERIV) that could be nonzero at X. Subprogram DSBASI computes the integral\nfrom X1 to X2 of each of the NCOEF basis functions. The output of these\nsubprograms is needed in setting up the matrix for curve fitting or\ninterpolation involving values, derivatives, or integrals of the fitted\nspline function.\n\n\\subsection{Usage}\n\n\\subsubsection{Usage of DSBASD for evaluation of basis functions or their\nderivatives}\n\n\\paragraph{Program Prototype, Double Precision}\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf KORDER, LEFT, IDERIV}\n\n\\item[DOUBLE PRECISION]  \\ {\\bf TKNOTS}($\\geq ncoef+$\\\\ KORDER){\\bf , X,\nBDERIV}($\\geq $ KORDER)\n\\end{description}\n(See TKNOTS below for the definition of {\\em ncoef}.)\n\nAssign values to KORDER, LEFT, TKNOTS, X, and IDERIV.\n\\begin{center}\\vspace{-10pt}\n\\fbox{\\begin{tabular}{@{\\bf }c}\nCALL DSBASD(KORDER, LEFT, TKNOTS,\\\\\nX, IDERIV, BDERIV)\\\\\n\\end{tabular}}\n\\end{center}\nComputed quantities are returned in BDERIV().\n\n\\paragraph{Argument Definitions}\n\\begin{description}\n\\item[KORDER]  \\ [in] KORDER is both the order of the spline basis functions\nand the number of basis functions whose derivatives are to be evaluated.\n\n\\item[LEFT]  \\ [in] Identifies an interval of nonzero length [TKNOTS(LEFT),\nTKNOTS(LEFT$+$1)] which is the reference interval for the function\nevaluation. DSBASD will evaluate the IDERIV$^{th}$ derivative of the KORDER\nbasis functions that could be nonzero on this interval. Require KORDER $\\leq\n$ LEFT $\\leq ncoef.$ Except when extrapolation is needed, LEFT should\nsatisfy TKNOTS(LEFT) $\\leq $ X $<$ TKNOTS(LEFT+1). We recommend that the\nsubroutine DSFIND be used to determine LEFT.\n\n\\item[TKNOTS()]  \\ [in] The knot sequence [$t_i$: $i=1$, ..., {\\em ncoef} +\nKORDER], where {\\em ncoef} denotes the total number of B-spline basis functions\nassociated with this knot sequence. 
The proper interpolation interval, $[a,b]
$, associated with this knot sequence is given by $a=$ TKNOTS(KORDER) and $b=
$ TKNOTS({\em ncoef}+1). Require $t_i\leq t_{i+1}$ for $i=1$, ..., {\em ncoef} +
KORDER $-$ 1; $t_i<t_{i+KORDER}$ for $i=1$, ..., {\em ncoef}; $%
t_{KORDER+1}>t_{KORDER}$; $t_{ncoef}<t_{ncoef+1}$. The knots strictly
between $a$ and $b$ are internal knots. They specify abscissae at which one
polynomial piece ends and the next begins. Successive internal knots may
have the same value. An abscissa appearing with multiplicity $\mu $ means the
order of continuity of the spline at this abscissa will be at least $\text{%
KORDER}-\mu -1$. The knots indexed ahead of $t_{KORDER}$ can all be equal to
$a$, and those indexed after $t_{ncoef+1}$ can all be equal to $b.$

\item[X]  \ [in] Argument at which the IDERIV$^{th}$ derivative of the basis
functions is to be evaluated.

\item[IDERIV]  \ [in] Order of derivative to be computed. IDERIV $=0$
specifies function values. Require IDERIV $\geq 0$. Values of derivatives of
order\ $\geq $ KORDER will be zero.

\item[BDERIV()]  \ [out] On return the values at X of the IDERIV$^{th}$
derivative of the basis functions indexed from $\text{LEFT}+1-\text{KORDER}$
through LEFT will be stored in BDERIV($i$), $i=1$, ..., KORDER.
\end{description}
\subsubsection{Usage of DSBASI for evaluation of an integral of basis
functions}

\paragraph{Program Prototype, Double Precision}
\begin{description}
\item[INTEGER]  \ {\bf KORDER, NCOEF, J1, J2}

\item[DOUBLE PRECISION]  \ {\bf TKNOTS}($\geq $ NCOEF +\\ KORDER){\bf , X1,
X2, BASI}($\geq $ NCOEF)
\end{description}
Assign values to KORDER, NCOEF, TKNOTS(), X1, X2, J1, and J2.
\begin{center}
\fbox{\begin{tabular}{@{\bf }c}
CALL DSBASI ( KORDER, NCOEF,\\
TKNOTS, X1, X2, J1, J2, BASI)\\
\end{tabular}}
\end{center}
Computed results are returned in J1, J2, and BASI().

\paragraph{Argument Definitions}
\begin{description}
\item[KORDER]  \ [in] The order of the spline basis functions.

\item[NCOEF]  \ [in] The total number of B-spline basis functions associated
with this knot sequence. Also the number of values to be returned in BASI().

\item[TKNOTS()]  \ [in] As for DSBASD above, with {\em ncoef} replaced by
NCOEF.

\item[X1, X2]  \ [in] Integration is to be done from X1 to X2. Permit X1 $<$
X2 or X1 $\geq $ X2. Generally X1 and X2 should each lie in $[a,b]$; however,
extrapolation will be used to return values when this is not the case.

\item[J1, J2]  \ [inout] On entry J1 and J2 must contain integer values. If
J1 is in [1, NCOEF] it will be used to start the lookup for X1. Otherwise the
search will start with~1. Similarly for J2.

On return J1 and J2 indicate the portion of the array BASI() that might be
nonzero on return. BASI($i$) might be nonzero if J1 $\leq i\leq $ J2, and BASI$%
(i)=0$ if $i<$ J1 or $i>$ J2.

\item[BASI()]  \ [out] On return, BASI($i$) will contain the value of the
integral of the $i^{th}$ basis function over the range from X1 to X2, for $i=1$,
..., NCOEF.
J1 and J2 above indicate which elements might be nonzero.
\end{description}
\subsubsection{Usage of DSDIF to compute the difference table needed by
DSVALA}

\paragraph{Program Prototype, Double Precision}
\begin{description}
\item[INTEGER]  \ {\bf KORDER, NCOEF, NDERIV}

\item[DOUBLE PRECISION]  \ {\bf TKNOTS}($\geq $ NCOEF + \\ KORDER){\bf ,
BCOEF}($\geq $ NCOEF){\bf , \\ BDIF}($\geq $ NCOEF $\times $ (NDERIV+1))
\end{description}
Assign values to KORDER, NCOEF, TKNOTS(), BCOEF(), and NDERIV.
\begin{center}
\fbox{\begin{tabular}{@{\bf }c}
CALL DSDIF (KORDER, NCOEF,\\
TKNOTS, BCOEF, NDERIV, BDIF)\\
\end{tabular}}
\end{center}
Computed results are returned in BDIF().

\paragraph{Argument Definitions}
\begin{description}
\item[KORDER]  \ [in] The order of the spline basis functions.

\item[NCOEF]  \ [in] The total number of B-spline basis functions associated
with this knot sequence.

\item[TKNOTS()]  \ [in] Same specifications as for DSBASI above.

\item[BCOEF()]  \ [in] Array of NCOEF coefficients representing a spline
function relative to a B-spline basis.

\item[NDERIV]  \ [in] Highest order difference to be computed. Since the
difference table BDIF() is intended for use by DSVALA, this should correspond
to the largest order derivative one intends to compute using DSVALA.

\item[BDIF()]  \ [out] Will contain a copy of BCOEF() plus differences
through order NDERIV of this array of coefficients. Intended for use by
DSVALA.
\end{description}
\subsubsection{Usage of DSFIND for lookup in a knot sequence}

\paragraph{Program Prototype, Double Precision}
\begin{description}
\item[INTEGER]  \ {\bf IX1, IX2, LEFT, MODE}

\item[DOUBLE PRECISION]  \ {\bf XT}(IX2+1){\bf , X}
\end{description}
Assign values to XT(), IX1, IX2, LEFT, and X.
\begin{center}
\fbox{\begin{tabular}{@{\bf }c}
CALL DSFIND(XT, IX1, IX2, X,\\
LEFT, MODE)\\
\end{tabular}}
\end{center}
Results are returned in LEFT and MODE.

\paragraph{Argument Definitions}
\begin{description}
\item[XT(), IX1, IX2]  \ [in] XT() is the array in which the lookup will be
done. DSFIND will only look at elements from XT(IX1) through XT(IX2).
Require IX1 $<$ IX2, XT(IX1) $<$ XT(IX1+1), XT(IX2$-$1) $<$ XT(IX2), and
XT($i$) $\leq $ XT($i+$1) for $i=$ IX1+1, ..., IX2 $-$~2.

If the lookup is in a knot array of length $korder + ncoef$ associated with a
B-spline basis, one would generally set IX1 $=korder$ and IX2 $=ncoef+1.$ If
the lookup is in a knot array of length $npc+1$ associated with a power
basis, one would generally set IX1 $=1$ and IX2 $=npc+1.$

\item[X]  \ [in] Value to be looked up in XT().

\item[LEFT]  \ [inout] On entry LEFT must contain an integer value. If this
value is in [IX1,~IX2~$-$~1] the lookup will start with this value,
otherwise the lookup starts with IX1 or IX2~$-$~1.

On return LEFT is the index of the left end of the reference interval $%
\langle $ XT(LEFT), XT(LEFT+1) $\rangle $ for X. This will always be an
interval of nonzero length. If X satisfies XT(IX1) $\leq $ X $<$ XT(IX2) then
LEFT will satisfy XT(LEFT) $\leq $ X $<$ XT(LEFT+1). Otherwise, if X $<$
XT(IX1), LEFT $=$ IX1; or if X $\geq $ XT(IX2), LEFT $=$ IX2 $-$ 1. The
polynomial segment defined over this reference interval is intended to be
used for function evaluation at X.

\item[MODE]  \ [out] Indicator of the position of X relative to [XT(IX1),
XT(IX2)].
Set to $-$1 if X is to the left of this interval, to~0 if X is in
this closed interval, and to +1 if X is to the right of this interval.
\end{description}
\subsubsection{Usage of DSVALA for evaluating a sequence of derivatives}

\paragraph{Program Prototype, Double Precision}
\begin{description}
\item[INTEGER]  \ {\bf KORDER, NCOEF, NDERIV}

\item[DOUBLE PRECISION]  \ {\bf TKNOTS}($\geq $ NCOEF +\\ KORDER){\bf , BDIF}%
($\geq $ NCOEF $\times $ (NDERIV+1)){\bf ,\\ X, SVALUE}($\geq $ NDERIV + 1)
\end{description}
Assign values to KORDER, NCOEF, TKNOTS(), NDERIV, BDIF(), and X.
\begin{center}
\fbox{\begin{tabular}{@{\bf }c}
CALL DSVALA(KORDER, NCOEF,\\
TKNOTS, NDERIV, BDIF, X, SVALUE)\\
\end{tabular}}
\end{center}
Computed results are returned in SVALUE().

\paragraph{Argument Definitions}
\begin{description}
\item[KORDER]  \ [in] The order of the spline basis functions.

\item[NCOEF]  \ [in] The total number of B-spline basis functions associated
with this knot sequence.

\item[TKNOTS()]  \ [in] Same specifications as for DSBASI above.

\item[NDERIV]  \ [in] Highest order derivative to be evaluated. Values of
derivatives of order $\geq $ KORDER will be zero.

\item[BDIF()]  \ [in] A difference table of B-spline coefficients computed
by DSDIF.

\item[X]  \ [in] Argument at which values returned in SVALUE() are to be
computed.

\item[SVALUE()]  \ [out] On return, SVALUE($i$+1) contains the value at X of
the $i^{th}$ derivative of the spline function $f$ for $i=0$, ..., NDERIV.
The spline function $f$ is defined by the parameters KORDER, NCOEF,
TKNOTS(), and the coefficients whose difference table is in BDIF().
\end{description}
\subsubsection{Modifications for Single Precision}

For single precision usage, change the DOUBLE PRECISION statements to REAL
and change the initial ``D'' in the subprogram names to ``S''.

\subsection{Examples and Remarks}

The program DRDSBASD and output listing ODDSBASD demonstrate the use of the
subprograms of this chapter.
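For readers who want an independent cross-check of DSBASD with IDERIV $=0$,
the following short Python sketch of the Cox--de Boor recursion (de Boor's
BSPLVB algorithm) computes the same KORDER basis values. It is ours, not
part of MATH77, and it uses 0-based indices, so its \texttt{left} equals
LEFT$-$1.

\lstset{language=Python,showstringspaces=false,xleftmargin=.2in}
\begin{lstlisting}
# Values at x of the korder B-splines that can be nonzero on
# [t[left], t[left+1]) -- all indices 0-based.
def bsplvb(t, korder, left, x):
    b = [1.0]
    dl, dr = [], []
    for j in range(1, korder):
        dl.append(x - t[left + 1 - j])
        dr.append(t[left + j] - x)
        saved = 0.0
        for i in range(j):
            term = b[i] / (dr[i] + dl[j - 1 - i])
            b[i] = saved + dr[i] * term
            saved = dl[j - 1 - i] * term
        b.append(saved)
    # b[i] corresponds to BDERIV(i+1), i.e. basis functions
    # left-korder+1, ..., left (0-based).
    return b

# Example: cubic B-splines (korder = 4) on a simple knot sequence.
t = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
vals = bsplvb(t, 4, 4, 1.5)
assert abs(sum(vals) - 1.0) < 1e-12   # partition of unity
\end{lstlisting}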
\subsection{Functional Description}

The subprograms of this chapter are described in Section~D of Chapter~11.5.

\bibliography{math77}
\bibliographystyle{math77}

\subsection{Error Procedures and Restrictions}

DSBASD, DSBASI, and DSVALA each contain an internal dimensioning parameter
$kmax = 20$. It is an error if KORDER $> kmax$ in DSBASD, DSBASI, or DSVALA.
The condition IDERIV\ $< 0$ is an error in DSBASD.

In DSFIND, if the search reaches either of the intervals [XT(IX1),
XT(IX1+1)] or [XT(IX2$-$1), XT(IX2)] and the interval is found to have
nonpositive length, the error is reported.

Errors are reported to the library error message processing subroutines
of Chapter~19.2 with a severity level of~2 that will, by default, cause
execution of the program to stop.

Abscissae and weights for 2-point, 6-point, and 10-point Gaussian quadrature
are stored to 40~decimal digits in DSBASI. With infinite precision abscissae
and weights, these formulae would be exact for splines of KORDER up to~20.

\subsection{Supporting Information}

The source language is ANSI Fortran~77.

These subprograms, except DSBASI, are modifications by Lawson of codes
developed by C.\ de Boor, \cite{deBoor:1978:APG}.  Subprogram DSBASI is
based on code due to D.\ E.\ Amos, \cite{Amos:1979:xxx}.

\newpage

\begin{tabular}{@{\bf}l@{\hspace{5pt}}l}
\bf Entry & \hspace{.35in} {\bf Required Files}\vspace{2pt} \\
DSBASD & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
DSBASD, ERFIN, ERMSG, IERM1, IERV1\rule[-5pt]{0pt}{8pt}}\\
DSBASI & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
DERV1, DSBASD, DSBASI, DSFIND, ERFIN, ERMSG, IERM1, IERV1\rule[-5pt]{0pt}{8pt}}\\
DSDIF & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright DSDIF\rule[-5pt]{0pt}{8pt}}\\
DSFIND & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
DERV1, DSFIND, ERFIN, ERMSG, IERM1, IERF1\rule[-5pt]{0pt}{8pt}}\\
DSVALA & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
DERV1, DSFIND, ERFIN, ERMSG, IERM1, IERF1}\\
\end{tabular}

\begin{tabular}{@{\bf}l@{\hspace{5pt}}l}
\bf Entry & \hspace{.35in} {\bf Required Files}\vspace{2pt} \\
SSBASD & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
ERFIN, ERMSG, IERM1, IERV1, SSBASD\rule[-5pt]{0pt}{8pt}}\\
SSBASI & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
ERFIN, ERMSG, IERM1, IERV1, SERV1, SSBASD, SSBASI, SSFIND\rule[-5pt]{0pt}{8pt}}\\
SSDIF & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright SSDIF\rule[-5pt]{0pt}{8pt}}\\
SSFIND & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
ERFIN, ERMSG, IERM1, IERF1, SERV1, SSFIND\rule[-5pt]{0pt}{8pt}}\\
SSVALA & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
ERFIN, ERMSG, IERM1, IERF1, SERV1, SSFIND}\\
\end{tabular}

\begcode
\bigskip
\lstset{language=[77]Fortran,showstringspaces=false}
\lstset{xleftmargin=.8in}

\centerline{\bf \large DRDSBASD}\vspace{10pt}
\lstinputlisting{\codeloc{dsbasd}}

\vspace{30pt}\centerline{\bf \large ODDSBASD}\vspace{10pt}
\lstset{language={}}
\lstinputlisting{\outputloc{dsbasd}}
\end{document}
{"text": "\\newpage\n\\subsection{Juncai's summary}\n\\paragraph{Summary of learning rate schedule methods:}\n\\begin{itemize}\n\t\\item Classical method: iteration for fixed epochs (30 or 50) and then reduce the learning rate by 1/10 or 1/2. \n\t\\item Adaptive learning rate: iteration for some epochs by monitoring some indicator and then reduce the learning rate by some factor (1/10 or 1/2) such as SASA.\n\t\\item Cosine method: use the next formula\n\t\\begin{equation}\\label{key}\n\t\\eta_t = \\eta_{min} + \\frac{1}{2}(\\eta_{max} - \\eta_{min}) (1 - \\cos(\\frac{t\\pi}{T_{\\max}})),\n\t\\end{equation}\n\twhere $T_{\\max}$ is the global number of epochs.\n\\end{itemize}\n\n\\paragraph{Some observations:}\n\\begin{itemize}\n\t\\item There should be a method between classical methods and Cosine method.\n\t\\item How to reduce learning rate might be more important than when to reduce learning rate?\n\\end{itemize}\n\n\\paragraph{Summary for what we have in adaptive learning rate method:}\n\\begin{itemize}\n\t\\item For SASA, we have the next observations (conclusions):\n\t\\begin{itemize}\n\t\t\\item This is only a method which tries to tell us when to reduce learning rate.\n\t\t\\item When learning rate is very ``small'' (different models data sets, this small level may different), the criterion in this method is very difficult to hold. \n\t\\end{itemize}\n\t\\item For epoch level criterion to check the stationary:\n\t\\begin{itemize}\n\t\t\\item Lian implemented it before, I will take over that code and try to fine-tune these hyper-parameters to make it works as a comparable criterion with SASA.\n\t\t\\item I am thinking about to use this as the global stopping criterion method. This seems more difficult, I will focus on it after I am familiar with these codes.\n\t\\end{itemize}\n\\end{itemize}\n\n\\paragraph{One suggestion from Prof. Xu on Mar. 14th, 2020 for global stopping criterion:}\n\\begin{itemize}\n\t\\item 1. Do SASA first when learning rate is big enough.\n\t\\item 2. Fix a thresholding, and if the criterion of SASA cannot hold within that number of epochs then switch to epoch level criterion.\n\t\\item 3. When epoch level criterion holds, finish the training process.\n\\end{itemize}\n\n\\paragraph{Juncai's plan for this project:}\n\\begin{itemize}\n\t\\item Be familiar with running these models with SASA on DGX-1.\n\t\\begin{itemize}\n\t\\item CIFAR10, for SASA with 300 epochs: 95.6\\% test accuracy. (To verify the code)\n\t\\item CIFAR100, SGD with cosine learning rate with ($T_{max}=1800$, $\\eta_{max} = 0.1$ and $\\eta_{min}=0.0001$). Now for 1350 epochs, I got the test accuracy 78.6\\%. (Compared with Jianqing's results for FAA augmentation and same SGD with cosine learning rate, which needs 1800 epochs to have 80.3\\% test accuracy.)\n\t\\item Observation: from the test on CIFAR100, I doubt the efficiency of FAA. \n\t\\end{itemize}\n\t\\item Based on Lian's implementation about epoch level criterion as learning rate schedule, try to run some comparable results for some classical models (ResNet and MgNet) on CIFAR 10 and 100 data sets. 
\paragraph{Some observations:}
\begin{itemize}
	\item There should be a method between the classical methods and the cosine method.
	\item How to reduce the learning rate might be more important than when to reduce it.
\end{itemize}

\paragraph{Summary of what we have in adaptive learning rate methods:}
\begin{itemize}
	\item For SASA, we have the following observations (conclusions):
	\begin{itemize}
		\item This method only tells us when to reduce the learning rate.
		\item When the learning rate is very ``small'' (for different models and data sets this level may differ), the criterion in this method is very difficult to satisfy.
	\end{itemize}
	\item For the epoch level criterion to check stationarity:
	\begin{itemize}
		\item Lian implemented it before; I will take over that code and try to fine-tune the hyper-parameters to make it work as a criterion comparable with SASA.
		\item I am thinking about using this as the global stopping criterion. This seems more difficult; I will focus on it after I am familiar with these codes.
	\end{itemize}
\end{itemize}

\paragraph{One suggestion from Prof. Xu on Mar. 14th, 2020 for a global stopping criterion:}
\begin{itemize}
	\item 1. Do SASA first while the learning rate is big enough.
	\item 2. Fix a threshold, and if the criterion of SASA does not hold within that number of epochs, then switch to the epoch level criterion.
	\item 3. When the epoch level criterion holds, finish the training process.
\end{itemize}

\paragraph{Juncai's plan for this project:}
\begin{itemize}
	\item Become familiar with running these models with SASA on DGX-1.
	\begin{itemize}
	\item CIFAR10, for SASA with 300 epochs: 95.6\% test accuracy. (To verify the code.)
	\item CIFAR100, SGD with cosine learning rate ($T_{max}=1800$, $\eta_{max} = 0.1$ and $\eta_{min}=0.0001$). After 1350 epochs, I got 78.6\% test accuracy. (Compare with Jianqing's results for FAA augmentation and the same SGD with cosine learning rate, which needs 1800 epochs to reach 80.3\% test accuracy.)
	\item Observation: from the test on CIFAR100, I doubt the efficiency of FAA.
	\end{itemize}
	\item Based on Lian's implementation of the epoch level criterion as a learning rate schedule, try to produce some comparable results for some classical models (ResNet and MgNet) on the CIFAR 10 and 100 data sets.
	\item Try to figure out a good global stopping criterion algorithm.
\end{itemize}
{"text": "\\subsection{Helices}\r\n\\noindent\r\nA helix looks like a spring and appears to look like a circle when viewed from top-down. It has the form $\\vec{r}(t) = \\langle r\\cos{t}, r\\sin{t}, ct\\rangle$ where $a\\in\\mathbb{R}$. $a$ defines the \"tightness\" between consecutive windings.\r\n\r\n\\begin{figure}[h]\r\n\t\\centering\r\n\t\\includegraphics[scale=0.33]{Images/vectorValuedFunctions/Helix}\r\n\\end{figure}", "meta": {"hexsha": "2e7496f669e44d18b8a45465b634ebfa2b8e838f", "size": 387, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/vectorValuedFunctions/helices.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "multiCalc/vectorValuedFunctions/helices.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "multiCalc/vectorValuedFunctions/helices.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.375, "max_line_length": 240, "alphanum_fraction": 0.7286821705, "num_tokens": 122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5997492906580327}}
{"text": "\n\\section{The Physical System and Modeling}\n\\begin{frame}\\frametitle{On Models and Modeling}\n  \\begin{itemize}\n\t\\item Mathematical models are an abstraction of reality.\n\t\\item There are three types of responses. \n\t\\begin{enumerate}\n\t\t\\item Type I\\\\\n\t\t$\\ y'=\\alpha y$\n\t\t\\item Type II\\\\\n\t\t$\\ y' = \\frac{\\alpha y}{y+ \\beta}$ \n\t\t\\item Type III\\\\\n\t\t$\\ y'= \\frac {\\alpha y^2}{\\beta + y^2}$\n\t\\end{enumerate}\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Type I Response}\n   Type I: $y'=\\alpha y$\\\\ Applied to phenomena including chemical reaction kinetics.\n   \\includegraphics[scale=.4]{Rplot.png}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Type II Response} \n\n    Type II: $y'=\\frac{\\alpha y}{y+L}$\\\\Models population growth until carrying capacity is reached.\n   \\includegraphics[scale=0.4]{Rplot01.png}\n\n   % Type II: $y'=\\frac{\\alpha y}{y+\\beta}$\\\\Models population growth until carrying capacity is reached.\n  % \\includegraphics[natwidth=162bp, natheight=227bp,width=280bp]{Rplot02.png}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Type III Response}\n    Type III: $y'=y(1-y)$. Used to model neuronal action potentials.\n    \\includegraphics[scale=.4]{Rplot02.png}\n\\end{frame}\n\\begin{frame}\n    Type III: $y'=\\frac{\\alpha y^2}{\\beta+y^2}$. Used to model neuronal action potentials.\n    \\includegraphics[scale=.4]{Rplot01.png}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{Coral Reef Dynamics}\n\\hspace{1.5em}Coral reefs serve as a complex aquatic ecosystem physically supported by calcium carbonate secretions of the colorful stone coral. To model this ecosystem, one approach taken by Li-Wang-Zhang-Hastings is to describe the relationships between macroalgae, algal turfs, and corals \\cite{Hastings}.\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{Coral Reef Dynamics}\n\\includegraphics[natwidth=162bp,natheight=227bp,width=280bp]{./coral-reef-triangle.png}\n\\end{frame}\n\\begin{frame}\\frametitle{Coral Reef Dynamics}\n$$\\begin{cases}\\begin{array}{rl}\n\\frac{dM}{dt}\\hspace{-.8em}&=aMC - \\frac{gM}{M+T}+\\gamma M T\\\\\n\\frac{dC}{dt}\\hspace{-.8em}&=rTC - dC - aMC\\\\\n\\frac{dT}{dt}\\hspace{-.8em}&=\\frac{gM}{M + T} - \\gamma MT - rTC + dC\n\\end{array}\\end{cases},$$ where \\begin{itemize}\\itemsep0pt\n\\item $r$ is the rate corals overgrow upon algal turfs\\\\\n\\item $d$ is the mortality rate of corals\\\\\n\\item $a$ is the rate that macroalgae overgrow upon corals\\\\\n\\item $\\gamma$ is the rate that macroalgae spread over algal turfs\\\\\n\\item $g$ is the indiscriminate grazing rate of parrotfish.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\\frametitle{Coral Reef Dynamics}\n\\hspace{1.57em}Note that our scope is limited to regions entirely covered by coral, macroalgae, and algal turf, as $\\frac{dT}{dt}=-\\frac{dM}{dt}-\\frac{dC}{dt}$ implies $M+C+T=constant$, say $constant = 1$.  We reduce our system to, $$\\begin{cases} \n\\begin{array}{rl}\n\\frac{dM}{dt}&= aMC-\\frac{gM}{M+T} + \\gamma MT\\\\ \n\\frac{dC}{dt}&=rTC-dC-aMC\n\\end{array} \\end{cases}.$$ \n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Nondimensionalization}\n  \\hspace{1.57em}To simplify systems of equations we scale problems using the technique of nondimensionalization: set {\\begin{itemize}\\itemsep0pt\\item $M(t)=\\overline{M}\\HAT{M}(s)$;\\\\\\item $C(t)=\\overline{C}\\HAT{C}(s)$;\\\\\\item $T(t)=\\overline{T}\\HAT{T}(s)$, and;\\\\\\item $t=\\tau s$. 
\begin{frame}
  \frametitle{Nondimensionalization}
  \hspace{1.57em}To simplify systems of equations we scale the problem using the technique of nondimensionalization: set {\begin{itemize}\itemsep0pt\item $M(t)=\overline{M}\HAT{M}(s)$;\\\item $C(t)=\overline{C}\HAT{C}(s)$;\\\item $T(t)=\overline{T}\HAT{T}(s)$, and;\\\item $t=\tau s$. \end{itemize} From this we get: \begin{itemize}\itemsep0pt \item$\frac{dM}{dt}=\overline{M}\HAT{M}'\frac{ds}{dt}$;\\\item $\frac{dC}{dt}=\overline{C}\HAT{C}'\frac{ds}{dt}$;\\\item $\frac{dT}{dt}=\overline{T}\HAT{T}'\frac{ds}{dt}$, and;\\ \item $\frac{ds}{dt}=\frac{1}{\tau}.$\end{itemize} }
  \end{frame}
 \begin{frame}
  \frametitle{Nondimensionalization}Our system then becomes: $$\begin{cases}
\begin{array}{rl}
\frac{\overline{M}}{\tau}\frac{d\HAT{M}}{dt}&= a\overline{M}\overline{C}\HAT{M}\HAT{C}-\frac{g\overline{M}\HAT{M}}{\overline{M}\HAT{M}+\overline{T}\HAT{T}} + \gamma \overline{M}\HAT{M}\overline{T}\HAT{T}\\
\frac{\overline{C}}{\tau}\frac{d\HAT{C}}{dt}&=r\overline{T}\overline{C}\HAT{T}\HAT{C}-d\overline{C}\HAT{C}-a\overline{M}\overline{C}\HAT{M}\HAT{C}
\end{array} \end{cases}$$
\end{frame}

{\center
\begin{frame}\frametitle{Nondimensionalization}Set $\overline{T}=\frac{r}{\gamma}$, $\overline{C}=\frac{r^2}{a\gamma}$, $\overline{M}=1$, $\tau=\frac{\gamma}{r^2}$. Then, $$\begin{cases}
\begin{array}{rl}
\frac{d\HAT{M}}{dt}&= \HAT{M}(a\tau\overline{C}\HAT{C}-\frac{g\tau}{\overline{M}\HAT{M}+\overline{T}\HAT{T}} + \gamma \tau\overline{T}\HAT{T})\\
\frac{d\HAT{C}}{dt}&=\HAT{C}(r\tau\overline{T}\HAT{T}-d\tau-a\tau\overline{M}\overline{C}\HAT{M})
\end{array} \end{cases}$$
\end{frame}
}

\begin{frame}\frametitle{Nondimensionalization}$$\begin{cases}
\begin{array}{rl}
\frac{d\HAT{M}}{dt}&= \HAT{M}(\HAT{C}+\HAT{\gamma}\HAT{T}-\frac{\HAT{g}\gamma^2}{\gamma\HAT{M}+ \HAT{T}}), \\
\frac{d\HAT{C}}{dt}&=\HAT{C}(\HAT{T}-\HAT{M}-\HAT{d}\HAT{\gamma}),
\end{array} \end{cases}$$ where $\HAT{d}=\frac{d}{r}$, $\HAT{\gamma} =\frac{\gamma}{r}$, $\HAT{g}=\frac{g}{\gamma^2}$. Recall, \begin{itemize}\itemsep0pt
\item $r$ is the rate at which corals overgrow algal turfs\\
\item $d$ is the mortality rate of corals\\
\item $a$ is the rate at which macroalgae overgrow corals\\
\item $\gamma$ is the rate at which macroalgae spread over algal turfs\\
\item $g$ is the indiscriminate grazing rate of parrotfish.
\end{itemize}
\end{frame}

\begin{frame}
  \frametitle{Blood Coagulation}
  Blood coagulation in response to an injury occurs in three stages:\begin{enumerate}
  \item Initiation Stage\\
  \item Amplification Stage\\
  \item Propagation Stage
  \end{enumerate} \end{frame}
  \begin{frame}\frametitle{Initiation stage}
 \hspace{1.57em}Injury exposes tissue factor at the site of injury, thereby initiating the coagulation process. TF + factor VII = the TF-VIIa complex, which activates factors IX and X. Factor Xa activates thrombin before the plasma protease inhibitor (TFPI) binds to and inactivates TF, Xa, and finally TF-VIIa, which terminates this stage.
 \end{frame}
 \begin{frame}\frametitle{Amplification Stage}
 \hspace{1.57em}Thrombin activates platelets and factors V and VIII, which drives the formation of a coagulation complex (IXa-VIIIa-Ca-PL, Xa-Va-Ca-PL) serving as an initial plug of the injury.
TFPI inactivates thrombin, terminating this stage.\n \\end{frame}\\begin{frame}\\frametitle{Propagation Stage}\n \\hspace{1.57em}Increasing platelet production of thrombin leads to thrombin's cleavage of the initial plug's fibrinogen into fibrin, to which the final plug owes its strength.\n\\end{frame}\n\n\n\\begin{comment}\n\\begin{frame}\\frametitle{Clotting Factor Models}\n\\begin{itemize}\n\\item 1989 Khanin-Semenov Model - treated coagulation as a three-stage process, as previously described.\\\\\n\\item 2002 Xu Model extends the K-S Model to be more applicable to bleeding pathologies.\\\\\n\\item 2004 Qiao et al. Model focuses on the role of activated protein C (factor XIV).\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{1989 Khanin-Semenov Model}\n$$\\begin{cases}\\begin{array}{rl}\n\\frac{dVIIa}{dt}\\hspace{-.8em}&=\\alpha k_1- H_4[VIIa]\\\\\n\\frac{dXa}{dt}\\hspace{-.8em}&=k_2[VIIa]-H_2[Xa]\\\\\n\\frac{dVa}{dt}\\hspace{-.8em}&=k_3[IIa]-H_3[Va]\\\\\n\\frac{dIIa}{dt}\\hspace{-.8em}&=k_4[Xa]\\frac{[Va]}{k_a+[Va]}-H_4[IIa]\n\\end{array}\\end{cases},$$ where $\\{k_i\\}$ are kinetic coefficients and $\\{H_i\\}$ are dissipation constants. Xu notes that this model omits the role that platelets play.\n\\end{frame}\n\\end{comment} \n\n\\begin{frame}\\frametitle{Coagulation Cascade}\n\\includegraphics[scale=.4]{./coag-cascade}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{2002 Xu Model \\cite{Xu}}\\vspace{-1.5em}\n$$\\begin{cases}\\begin{array}{rl}\n\\frac{d(TF-VIIa)}{dt}\\hspace{-.8em}&=[TF-VIIa]_0-h'_1[TF-VIIa]([TFPI][Xa]k_{14})\\\\\n\\frac{dXa}{dt}\\hspace{-.8em}&=k_2[TF-VIIa]+k_{21}[IXa]\\left(\\frac{VIIIa}{VIIIa+d_2}\\right)\\left(c_0+\\frac{f(IIa)}{1+f(IIa)}\\right)\\\\&\\hspace{1em}-h'_{21}[TFPI][Xa]-h'_{22}[AT][Xa]\\\\\n\\frac{dIXa}{dt}\\hspace{-.8em}&=k_3[TF-VIIa]-h'_3[AT][IXa]\\\\\n\\frac{dVa}{dt}\\hspace{-.8em}&=k'_5[IIa][V]-h'_5[Va]\\\\\n\\frac{dVIIIa}{dt}\\hspace{-.8em}&=k'_6[IIa][VIII]-h'_6[VIIIa]\\\\\n\\frac{dIIa}{dt}\\hspace{-.8em}&=k_4[Xa]\\frac{[Va]}{[Va]+d_1}\\left(c_0+\\frac{f(IIa)}{1+f(IIa)}\\right)-h'_4[IIa]\n\\end{array}\\end{cases},$$ where $\\{k'_i\\}$ are kinetic coefficients and $\\{h'_i\\}$ are dissipation constants.\n\n\\hspace{1.57em}The role that platelets have in this model is realized by the term $c=c_0+\\frac{[IIa]}{1+[IIa]}$, the proportion of platelets that have been activated by thrombin.% (Xu asserts this is a strict, twice differentiable function of $[IIa]$ whose codomain is $[0,1]$).\n\\end{frame}\n\\begin{comment}\n\\begin{frame}\n\\frametitle{2004 Qiao et al. 
Model}\\vspace{-1.5em}\n$$\\begin{cases}\\begin{array}{rl}\n\\frac{d[IXa]}{dt}\\hspace{-.8em}&=k_1\\beta-h_1[IXa]\\\\\n\\frac{d[VIIIa]}{dt}\\hspace{-.8em}&=k_2[IIa]+k_3[Xa]-k_4[APC]\\frac{[VIIIa]}{b_1+[VIIIa]}-h_2[VIIIa]\\\\\n\\frac{d[Xa]}{dt}\\hspace{-.8em}&=k_5[IXa]\\frac{[VIIIa]}{b_2+[VIIIa]}-h_3[Xa]\\\\\n\\frac{d[Va]}{dt}\\hspace{-.8em}&=k_6[IIa]-k_7[APC]\\frac{[Va]}{b_3+[Va]}-h_4[Va]\\\\\n\\frac{d[APC]}{dt}\\hspace{-.8em}&=k_8[IIa]-h_5[APC]\\\\\n\\frac{d[IIa]}{dt}\\hspace{-.8em}&=k_9[Xa]\\frac{[Va]}{b_4+[Va]}-h_6[IIa]\n\\end{array}\\end{cases},$$ where $\\{k_i\\}$ are kinetic coefficients and $\\{h_i\\}$ are dissipation constants.\n\\end{frame}\n\n\\begin{frame}\\frametitle{Moiseyev and Friends' Random Fibrin Walk}\nThe basis for the model is the 1998 Keener-Snead model: $$\\frac{\\partial n}{\\partial t}+V\\frac{\\partial n}{\\partial x}=F(1-n)-Gn,$$ where \\begin{itemize}\\item$n(x,t)dx$ is the fraction of connected links in the interval $[x,x+dx]$ at time $t$;\\item $F(x)$ is the rate of link formation; \\item $G(x)$ is the rate of link dissociation; \\item $V$ is the velocity field representing a shear flow.\\end{itemize}\n\\end{frame}\n\\end{comment}\n\n
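\\begin{frame}[fragile]\\frametitle{Coral Reef Dynamics: Numerical Sketch}\nThe reduced coral reef system is easy to explore numerically. Below is a minimal sketch using \\texttt{scipy}; the parameter values are illustrative placeholders, not values taken from \\cite{Hastings}.\n\\begin{verbatim}\nfrom scipy.integrate import solve_ivp\n\n# Illustrative parameters (assumed, not from the source).\na, g, gamma, r, d = 0.1, 0.3, 0.8, 1.0, 0.44\n\ndef reef(t, y):\n    M, C = y\n    T = 1.0 - M - C            # M + C + T = 1\n    dM = a*M*C - g*M/(M + T) + gamma*M*T\n    dC = r*T*C - d*C - a*M*C\n    return [dM, dC]\n\nsol = solve_ivp(reef, (0.0, 50.0), [0.3, 0.3])\nprint(sol.y[:, -1])            # long-run cover fractions\n\\end{verbatim}\n\\end{frame}\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"Presentation1\"\n%%% End:\n", "meta": {"hexsha": "6a286e86fec3da11824cfb980d6bec534fcd4549", "size": 9755, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Presentations/Midterm/modeling.tex", "max_stars_repo_name": "SUNY-SDE-2015/REU15", "max_stars_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Presentations/Midterm/modeling.tex", "max_issues_repo_name": "SUNY-SDE-2015/REU15", "max_issues_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-06-04T17:55:32.000Z", "max_issues_repo_issues_event_max_datetime": "2015-07-09T15:38:17.000Z", "max_forks_repo_path": "Presentations/Midterm/modeling.tex", "max_forks_repo_name": "SUNY-SDE-2015/REU15", "max_forks_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.7297297297, "max_line_length": 572, "alphanum_fraction": 0.6933880062, "num_tokens": 3722, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7826624840223699, "lm_q1q2_score": 0.5997492861882086}}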
{"text": "%%%%%%%%%%%%%%%%%%%%%%% Minimum Spanning Tree %%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Minimum Spanning Tree}\n\n\\begin{frame}\n  \\frametitle{Minimum Spanning Tree}\n\n    \\begin{itemize}\n\n    \\item (\\textcolor{purple}{Evaluate Prim's MST algorithm implemented with min-heap} ([\\textcolor{blue}{$P_{417}$ 8.9}]):) \\\\\n        \\begin{enumerate}\n          \\item Find the asymptotic order of the number of comparisons of edge weights in the worst case.\n          \\item Find the asymptotic order of the number of comparisons of edge weights on a \\textcolor{blue}{bounded-degree} family.\n          \\item Find the asymptotic order of the number of comparisons of edge weights on a \\textcolor{blue}{planar graph}.\n        \\end{enumerate}\n\n    \\end{itemize}\n\n\\end{frame}\n\n\n\n\\begin{frame}\n  \\frametitle{Minimum Spanning Tree}\n\n    \\[\n      T(n,m) = O \\big (nT(getMin) + nT(deleteMin) + mT(decreaseKey) \\big ). (\\lbrack \\textcolor{blue}{P_{395}} \\rbrack)\n    \\]\n\n    \\pause\n\n    \\begin{itemize}\n      \\item $getMin: O(1)$.  \\\\  \\emph{no comparison of edge weights}.\n      \\item $deleteMin: O(\\log n)$.  \\\\  \\textcolor{red}{NOT}: $O(\\log m)$.\n      \\item $decreaseKey: O(\\log n)$.  \\\\ \\textcolor{red}{NOT}: $O(n + \\log n)$ where $O(n)$ for search ([\\textcolor{blue}{$P_{296}$}]).\n      \\item ``Else if newWgt less than fringeWgt for w'' ([\\textcolor{blue}{$P_{395}$}]) : $O(m).$\n      \\item \\textcolor{blue}{In total,}\n      \\[\n      T(n,m) = O(n \\log n + m \\log n + 2m) = O((n+m) \\log n).\n      \\]\n    \\end{itemize}\n\n\\end{frame}\n\n\n\n\\begin{frame}\n  \\frametitle{Minimum Spanning Tree}\n\n  \\begin{itemize}\n      \\item\n        $m \\le \\frac{nk} {2}, \\Rightarrow T(n,m) = O((n+m) \\log n) = O((n+ \\frac{nk}{2}) \\log n) = O(n \\log n).$\n\n        \\pause\n\n      \\item\n        First, we should know the relation between $m$ and $n$.\n        \\begin{equation}\n          n - m + f = 2. \\label{eq: EulerExtension}\n        \\end{equation}\n\n        If $deg(f_i) \\ge l$,\n        \\begin{equation}\n          2m = \\sum_{i=1}^f deg(R_i) \\ge lf. 
\\begin{frame}\n  \\frametitle{Minimum Spanning Tree}\n\n  \\begin{itemize}\n      \\item\n        On a bounded-degree family with maximum degree $k$:\n        $m \\le \\frac{nk}{2} \\Rightarrow T(n,m) = O((n+m) \\log n) = O((n+ \\frac{nk}{2}) \\log n) = O(n \\log n).$\n\n        \\pause\n\n      \\item\n        For a planar graph, first we should know the relation between $m$ and $n$. By Euler's formula,\n        \\begin{equation}\n          n - m + f = 2. \\label{eq: EulerExtension}\n        \\end{equation}\n\n        If every region (face) $R_i$ has $\\deg(R_i) \\ge l$, then counting edge-face incidences gives\n        \\begin{equation}\n          2m = \\sum_{i=1}^f \\deg(R_i) \\ge lf. \\label{eq: Handshaking}\n        \\end{equation}\n        Substituting $f = 2 - n + m$ from (\\ref{eq: EulerExtension}) into (\\ref{eq: Handshaking}) and solving for $m$ gives\n        \\begin{equation}\n          m \\le \\frac{l}{l-2}(n-2).\n        \\end{equation}\n\n        \\pause\n\n        If $G$ is a tree, $m=n-1$; otherwise every face has degree $l \\ge 3$, so\n        \\begin{equation}\n          \\textcolor{blue}{m \\le \\frac{l}{l-2}(n-2) \\le 3(n-2) = 3n - 6}\n        \\end{equation}\n        \\[\n          T(n,m) = O((n+m) \\log n) = O((n + 3n) \\log n) = O(n \\log n).\n        \\]\n  \\end{itemize}\n\n\\end{frame} ", "meta": {"hexsha": "f1fe406cd6ead5164f2fc6a059b16516f74f465f", "size": 2437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2011/algorithm-tutorial-bfs-dfs-mst-sssp-dp-20111218/mst.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2011/algorithm-tutorial-bfs-dfs-mst-sssp-dp-20111218/mst.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2011/algorithm-tutorial-bfs-dfs-mst-sssp-dp-20111218/mst.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 30.0864197531, "max_line_length": 136, "alphanum_fraction": 0.5453426344, "num_tokens": 832, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859598, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5997492861882086}}
{"text": "\\subsection{Linear finite element in 2D as a 2-depth ReLU DNN}\\label{sec:2DFEMandDNN}\nIn this section, we will present our unexpected discovery pertaining to the\nexpressive power of ReLU DNNs, which derived from our investigation into ReLU DNNs \nin relation to hierarchical basis methods.\n\nWe have\n$$\ns(x) = x^2, \\quad m(x,y) = xy,\n$$\nand \n$I_{\\ell}$ means the interpolation on 1D and $\\Pi_\\ell$ means interpolation\non $[0,1]^2$ with mesh size $2^{-\\ell}$.\n\nIn terms of the approximation of ReLU DNNs for $x^2$,\nthe most critical point is that\n\\begin{equation}\\label{key}\n\t(I_{\\ell} - I_{\\ell-1})s(x) = -h_\\ell^2\\sum_{i\\in \\mathcal I_\\ell}\\phi_{\\ell,i}(x) = -h_\\ell^2g_\\ell(x) \\in {\\mathcal N}_\\ell^3\n\\end{equation}\nfor all $x\\in \\mathbb{R}$. Thus, a key question arises:\n\\begin{quote}\n\tIs there composition structure for $(\\Pi_\\ell - \\Pi_{\\ell-1})m (x, y)$?\n\\end{quote}\n\nWe have the following \ncorollary for the explicit formula for $(\\Pi_\\ell- \\Pi_{\\ell-1})m$ on $[-1,1]^2$.\n\\begin{corollary}\n\tFor any $(x,y) \\in [-1,1]^2$, we have\n\t\\begin{equation}\\label{eq:Pi-Pi}\n\t\t\\begin{aligned}\n\t\t\t&(\\Pi_\\ell- \\Pi_{\\ell-1})m (x,y) \\\\\n\t\t\t&= m_{\\ell+2}(x,y) - m_{\\ell+1}(x,y)  \\\\\n\t\t\t&= 2h^2_{\\ell+1}\\left(g_{\\ell+1}\\left(\\frac{|x|}{2}\\right)+g_{\\ell+1}\\left(\\frac{|y|}{2}\\right)-g_{\\ell+1}\\left(\\frac{|x+y|}{2}\\right)\\right).\n\t\t\\end{aligned}\n\t\\end{equation}\n\tThis means that $\\psi_\\ell \\in \\widehat{{\\mathcal N}}_{3(\\ell+2)}^3$ where $\\psi_\\ell = (\\Pi_\\ell- \\Pi_{\\ell-1})m$. Furthermore, \n\tit is worth noting that we can actually have $\\psi_\\ell \\in {\\mathcal N}_{\\ell+2}^9$ since $g_{\\ell+1}(\\frac{|x+y|}{2}) \\in {\\mathcal N}_{\\ell+2}^3$.\n\\end{corollary}\n\nBy choosing a suitable scale, we have $\\|h_{\\ell}^{-2}(\\Pi_\\ell- \\Pi_{\\ell-1})m\\|_{L^{\\infty}([-1,1]^2)} = 1$. \nFig.~\\ref{fig:T2T3} shows function graphs of $h_2^{-2}(\\Pi_2 - \\Pi_{1})m$ and $h_{3}^{-2}(\\Pi_3- \\Pi_{2})m$ on $[0,1]^2$.\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{minipage}[t]{0.43\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{6DL/figures/4basis}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.43\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{6DL/figures/16basis}\n\t\\end{minipage}\n\t\\caption{Functions of $h_{\\ell}^{-2}(\\Pi_\\ell- \\Pi_{\\ell-1})m$ for $\\ell = 2,3$ on $[0,1]^2$.}\n\t\\label{fig:T2T3}\n\\end{figure}\n\nThese graphs and meshes of $(\\Pi_\\ell - \\Pi_{\\ell-1})m(x,y)$ give rise to a natural discussion about\nthe representation theorem of ReLU DNN for the basis function of the 2D linear finite element.\nBy taking $\\ell=1$ with a suitable scale, we have\n\\begin{equation}\\label{eq:varphi_1}\n\t4(\\Pi_1- \\Pi_{0})m (x,y) \n\t= \\frac{1}{2}\\left(g_{2}\\left(\\frac{x}{2}\\right)+g_2\\left(\\frac{y}{2}\\right)-g_2\\left(\\frac{x+y}{2}\\right)\\right)\n\\end{equation}\nfor all $ (x,y) \\in [0,1]^2$. 
\nHere, $4(\\Pi_1- \\Pi_{0})m (x,y)$ equals the basis function $\\varphi(x,y)$ for the \nlinear finite element (see the left-hand graph of Fig.~\\ref{fig:varphi_1}) \non the uniform mesh on $[0,1]^2$ with mesh size $h_1=\\frac{1}{2}$ (see the right-hand graph of Fig.~\\ref{fig:varphi_1}).\n\\begin{figure}[h!]\n\t\\begin{minipage}[t]{0.43\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{6DL/figures/varphi}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.45\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[scale=.5]{6DL/figures/2D_Basis}\n\t\t%\t\t\\begin{tikzpicture}[scale = 2 ]\n\t\t%\t\t%\\fill[green] (0,0) -- (0,2) -- (2,0) -- cycle;\n\t\t%\t\t\\draw[gray, step = 1cm] (0, 0) grid (2, 2);\n\t\t%\t\t\\node[anchor = north] at (0, 0) {$(0,0)$};\n\t\t%\t\t%\t\t\t\\node[anchor = north] at (1,0) {$6$};\n\t\t%\t\t\\node[anchor = north] at (2,0) {$(0,1)$};\n\t\t%\t\t%\t\t\t\\node[anchor = east] at (0,1) {$(\\frac{1}{2},\\frac{1}{2})$};\n\t\t%\t\t\\node[anchor = south west] at (1,1) {$(\\frac{1}{2},\\frac{1}{2})$};\n\t\t%\t\t\\node[anchor = south] at (0,2) {$(1,0)$};\n\t\t%\t\t\\node[anchor = south] at (2,2) {$(1,1)$};\n\t\t%\t\t\\draw(0,2) -- (2,0);\n\t\t%\t\t\\draw (0,1) -- (1,0);\n\t\t%\t\t\\draw (1,2) -- (2,1);\n\t\t%\t\t\\end{tikzpicture}\n\t\\end{minipage}\n\t\\caption{left: $\\varphi(x,y)$ on $[0,1]^2$; right:  mesh for $\\ell=1$ on $[0,1]^2$.}\n\t\\label{fig:varphi_1}\n\\end{figure}\nWithout loss of generality, we can define $\\varphi(x,y)$ globally as\n\\begin{equation}\\label{key}\n\t\\varphi(x,y) = \\begin{cases}\n\t\t4(\\Pi_1- \\Pi_{0})m (x,y), \\quad &(x,y) \\in [0,1]^2, \\\\\n\t\t0, \\quad &\\text{otherwise}.\n\t\\end{cases}\n\\end{equation}\nHowever, the representation\n\\begin{equation}\\label{eq:varphi_1-1}\n\t\\varphi(x,y) = \\frac{1}{2}\\left(g_{2}\\left(\\frac{x}{2}\\right)+g_2\\left(\\frac{y}{2}\\right)-g_2\\left(\\frac{x+y}{2}\\right)\\right)\n\\end{equation}\nholds only for $(x,y) \\in [0,1]^2$. \nRelated to this observation is the simple computation that \n\\begin{equation}\\label{key}\n\t\\frac{1}{2}\\left(g_{2}\\left(\\frac{x}{2}\\right)+g_2\\left(\\frac{y}{2}\\right)-g_2\\left(\\frac{x+y}{2}\\right)\\right) \\neq 0\n\\end{equation}\nif $(x,y) = (\\frac{3}{2}, \\frac{3}{2})$.\nOn the other hand, based on the relations between linear finite element functions and ReLU DNNs in 1D~\\cite{he2020relu}, we can rewrite $g_2(x)$ as\n\\begin{equation}\\label{eq:defg2_2}\n\tg_2(x) = \\sum_{i=0}^4 \\alpha_i {\\rm ReLU}\\left(x-\\frac{i}{4}\\right)\n\\end{equation}\nfor all $x\\in \\mathbb{R}$, where $(\\alpha_0,\\alpha_1,\\cdots, \\alpha_4) = (4, -8, 8, -8, 4)$.\nThis means that $g_2 \\in {\\mathcal N}_{1}^5$; hence, if the identity in \\eqref{eq:varphi_1-1} held for all $(x,y) \\in \\mathbb{R}^2$, then $\\varphi(x,y)$ could be represented by a ReLU DNN with only one hidden layer on the entire space $\\mathbb{R}^2$.\nHowever, this contradicts the theorem in~\\cite{he2020relu} stating that the locally supported basis functions of the 2D linear finite element cannot be represented globally by a ReLU DNN with just one hidden layer.\n\nGiven the global representation of $\\varphi(x,y)$, \\cite{he2019finite} constructs \na ReLU DNN with four hidden layers to reproduce $\\varphi(x,y)$ explicitly. 
\nAlthough \\cite{arora2018understanding,he2020relu} show that\na two-hidden-layer ReLU DNN should be able to represent $\\varphi(x,y)$ globally on $\\mathbb{R}^2$,\nthe structure of the network would be extremely complicated and would require a \nlarge number of neurons. Thus, finding a concise formula that \nreproduces $\\varphi(x,y)$ on $\\mathbb{R}^2$ directly becomes the focus of the inquiry.\nBased on the discovery in \\eqref{eq:varphi_1-1} and the properties of ReLU DNNs, \nwe can construct a ReLU DNN function with two hidden layers to represent $\\varphi(x,y)$. \nTo simplify the statement of that result, let us first define the following\n${\\rm ReLU1}(x)$ function:\n\\begin{equation}\\label{eq:defReLU1}\n\t{\\rm ReLU1}(x)  := {\\rm ReLU}(x) - {\\rm ReLU}(x-1) \\in {\\mathcal N}_1^2.\n\\end{equation}\n\n\\begin{lemma}\n\tThe basis function $\\varphi(x,y)$ is in  ${\\mathcal N}_2^{15}$; more precisely, we have\n\t\\begin{equation}\\label{eq:b-DNN2}\n\t\t%\\widehat b(x,y) := \n\t\t{\\varphi}(x,y) = \\frac{1}{2}\\left(g_{2}\\left(\\frac{{\\rm ReLU1}(x)}{2}\\right)+g_2\\left(\\frac{{\\rm ReLU1}(y)}{2}\\right)-g_2\\left(\\frac{{\\rm ReLU1}(x)+{\\rm ReLU1}(y)}{2}\\right)\\right)\n\t\\end{equation}\n\tfor all $(x,y) \\in \\mathbb{R}^2$. \n\\end{lemma}\n\\begin{proof}\n\tAccording to the definition of $g_2(x)$ in \\eqref{func:glinduction} (or \\eqref{eq:defg2_2}) and of ${\\rm ReLU1}(x)$ in \\eqref{eq:defReLU1},\n\twe have $ \\varphi(x,y) \\in {\\mathcal N}_2^{15}$ and\n\t\\begin{equation}\\label{key}\n\t\t\\varphi(x,y) = 0\n\t\\end{equation}\n\tfor any $(x,y) \\notin [0,1]^2$, since ${\\rm ReLU1}$ clamps any coordinate outside $[0,1]$ to $0$ or $1$, so the right-hand side of \\eqref{eq:b-DNN2} is evaluated at a point on the boundary of $[0,1]^2$, where it vanishes. \n\tThen, \\eqref{eq:b-DNN2} holds, given that ${\\rm ReLU1}(x) = x$ and ${\\rm ReLU1}(y) = y$ for $(x,y) \\in [0,1]^2$.\n\\end{proof}\n
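\nBefore stating the general result, the identity \\eqref{eq:b-DNN2} is easy to check numerically. The following is a minimal NumPy sketch (an illustration added here, not part of the original construction); the coefficients follow \\eqref{eq:defg2_2}:\n\\begin{verbatim}\nimport numpy as np\n\nrelu = lambda x: np.maximum(x, 0.0)\nrelu1 = lambda x: relu(x) - relu(x - 1.0)\n\ndef g2(x):\n    # One-hidden-layer form of g_2 with alpha = (4, -8, 8, -8, 4).\n    return sum(a * relu(x - i / 4.0)\n               for i, a in enumerate((4.0, -8.0, 8.0, -8.0, 4.0)))\n\ndef phi(x, y):\n    u, v = relu1(x), relu1(y)\n    return 0.5 * (g2(u / 2) + g2(v / 2) - g2((u + v) / 2))\n\nprint(phi(0.5, 0.5))   # 1.0: nodal value at (1/2, 1/2)\nprint(phi(1.5, 1.5))   # 0.0: vanishes outside [0, 1]^2\n\\end{verbatim}\n\nBy employing the above lemma, we can construct the following theorem pertaining to the representation\nof linear finite element functions by using ReLU DNNs with only two hidden layers on\ntwo-dimensional space.\n\\begin{theorem}\\label{thm:FEM-DNN}\n\tAssume $u_h$ is a two-dimensional linear finite element function, which can be written as\n\t\\begin{equation}\\label{key}\n\t\tu_h(x,y) = \\sum_{i=1}^N \\mu_i \\varphi(T_i(x,y)), \n\t\\end{equation}\n\twhere $T_i : \\mathbb{R}^2 \\mapsto \\mathbb{R}^2$ is an affine mapping and $N$ denotes\n\tthe number of degrees of freedom. Then, $u_h(x)$ \n\tcan be reproduced globally by a ReLU DNN with only two hidden layers and at most $15N$ neurons\n\tin each layer, i.e., $u_h(x,y)  \\in {\\mathcal N}_2^{15N}$.\n\\end{theorem}\nAccording to~\\cite{arora2018understanding}, any $d$-dimensional continuous piecewise\nlinear function can be represented by a ReLU DNN with at most $\\lceil \\log_2(d+1) \\rceil$ hidden layers.\nHowever, the representation theory in \\cite{arora2018understanding} requires an extremely complicated\nconstruction by induction with a tremendously high number of neurons. On the other hand, \\cite{he2020relu}\nshows that ReLU DNNs with only one hidden layer cannot be used to represent general linear\nfinite element functions even on two-dimensional space. \nThus, the focus inevitably turns to finding\na concise and explicit representation of linear finite element functions using ReLU DNNs with \nonly two hidden layers on two-dimensional space. 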
\nHere, Theorem~\\ref{thm:FEM-DNN} provides the answer for any two-dimensional linear finite element functions on a uniform mesh or meshes, which can be obtained by adding affine mappings.", "meta": {"hexsha": "d4d124f47ae6bc3e3fae73d74c00758557c33858", "size": 9154, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/2Drepresentation.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/2Drepresentation.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/2Drepresentation.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.2209302326, "max_line_length": 227, "alphanum_fraction": 0.6724928993, "num_tokens": 3460, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7662936377487305, "lm_q1q2_score": 0.5997492781262856}}
{"text": "\\lab{Polynomial Interpolation}{Polynomial Interpolation}\n\\label{lab:Polynomial Interpolation}\n\\objective{Learn and compare three methods of polynomial interpolation: standard Lagrange interpolation, Barycentric Lagrange interpolation and Chebyshev interpolation.\nExplore Runge's phenomenon and how the choice of interpolating points affect the results.\nUse polynomial interpolation to study air polution by approximating graphs of particulates in air.}\n\n\\section*{Polynomial Interpolation}\nPolynomial interpolation is the method of finding a polynomial that matches a function at specific points in its range.\nMore precisely, if $f(x)$ is a function on the interval $[a,b]$ and $p(x)$ is a polynomial then $p(x)$ interpolates the function $f(x)$ at the points $x_0,x_1,\\dots ,x_n$ if $p(x_j)=f(x_j)$ for all $j=0,1,\\dots,n$.\nIn this lab most of the discussion is focused on using interpolation as a means of approximating functions or data, however, polynomial interpolation is useful in a much wider array of applications.\n\nGiven a function $f(x)$ and a set of unique points $\\{x_i\\}_{i=0}^n$, it can be shown that there exists a unique interpolating polynomial $p(x)$.\nThat is, there is one and only one polynomial of degree $n$ that interpolates $f(x)$ through those points.\nThis uniqueness property is why, for the remainder of this lab, an interpolating polynomial is referred to as \\emph{the} interpolating polynomial.\nOne approach to finding the unique interpolating polynomial of degree $n$ is Lagrange interpolation.\n\n\\begin{figure}\n\\captionsetup[subfigure]{justification=centering}\n\\captionsetup{justification=centering}\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/bad_interp1.pdf}\n    \\caption{Interpolation using 5 equally spaced points.}\n    \\label{fig:bad1}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/bad_interp2.pdf}\n    \\caption{Interpolation using 11 equally spaced points.}\n    \\label{fig:bad2}\n\\end{subfigure}\n\\caption{Interpolations of Runge's function $f(x)=\\frac{1}{1+25x^2}$ with equally spaced interpolating points.}\n\\label{fig:badinterp}\n\\end{figure}\n\n\\subsection*{Lagrange interpolation} %------------------------------------------------------------------------------------------------------------------------------------------------\nGiven a set $\\{x_i\\}_{i=1}^n$ of $n$ points to interpolate, a family of $n$ basis functions with the following property is constructed:\n\\[\nL_j(x_i) = \\begin{cases} 0 &\\mbox{ if } i \\neq j\\\\ 1 &\\mbox{ if } i =j \\\\ \\end{cases}.\n\\]\n\nThe Lagrange form of this family of basis functions is\n\\begin{equation}\n\\label{equa:lagrange}\nL_j(x) = \\frac{\\displaystyle\\prod_{k=1, k \\neq j}^n (x-x_k)}{\\displaystyle\\prod_{k=1, k \\neq j}^n (x_j-x_k)}\n\\end{equation}\nEach of these Lagrange basis functions is a polynomial of degree $n-1$ and has the necessary properties as given above.\n\n\\begin{problem}\nDefine a function \\li{lagrange()} that will be used to construct and evaluate an interpolating polynomial on a domain of $x$ values. 
\nThe function should accept two NumPy arrays of length $n$ which contain the $x$ and $y$ values of the interpolating points as well as a NumPy array of values of length $m$ at which the interpolating polynomial will be evaluated.\n\nWithin \\li{lagrange()}, write a subroutine that will evaluate each of the $n$ Lagrange basis functions at every point in the domain.\nIt may be helpful to follow these steps:\n\\begin{enumerate}\n\\item Compute the denominator of each $L_j$ (as in Equation \\ref{equa:lagrange}) .\n\\item Using the previous step, evaluate $L_j$ at all points in the computational domain (this will give you $m$ values for each $L_j$.)\n\\item Combine the results into an  $n \\times m$ NumPy array, consisting of each of the $n$ $L_j$ evaluated at each of the $m$ points in the domain.\n\\end{enumerate}\n\nYou may find the functions \\li{np.product()} and \\li{np.delete()} to be useful while writing this method.\n\\label{prob:lagrange_basis}\n\\end{problem}\n\n\\emph{Lagrange interpolation} is completed by combining the Lagrange basis functions with the y-values of the function to be interpolated $y_i=f(x_i)$ in the following manner: \n\\begin{equation}\n\\label{equa:poly}\np(x) = \\sum_{j=1}^n y_j L_j(x)\n\\end{equation}\nThis will create the unique interpolating polynomial.\n\nSince polynomials are typically represented in their expanded form with coefficients on each of the terms, it may seem like the best option when working with polynomials would be to use Sympy, or NumPy's\n\\li{poly1d} class to compute the coefficients of the interpolating polynomial individually.\nThis is rarely the best approach, however, since expanding out the large polynomials that are required can quickly lead to instability (especially when using large\nnumbers of interpolating points).\nInstead, it is usually best just to leave the polynomials in unexpanded form (which is still a polynomial, just not a pretty-looking one), and compute values of the polynomial directly from this unexpanded form.\n\n\\begin{lstlisting}\n# Evaluate the polynomial (x-2)(x+1) at 10 points without expanding the expression.\n>>> pts = np.arange(10)\n>>> (pts - 2) * (pts + 1)\narray([ 2,  0,  0,  2,  6, 12, 20, 30, 42, 56])\n\\end{lstlisting}\nIn the given example, there would have been no instability if the expression had actually been expanded but in the case of a large polynomial, stability issues can dominate the computation.\nAlthough the coefficients of the interpolating polynomials will not be explicitly computed in this lab, polynomials are still being used, albeit in a different form.\n\n\\begin{problem}\n\nComplete the implementation of \\li{lagrange()}. \n\nEvaluate the interpolating polynomial at each point in the domain by combining the $y$ values of the interpolation points and the evaluated Lagrange basis functions from Problem \\ref{prob:lagrange_basis} as in Equation \\ref{equa:poly}. 
\nReturn the final array of length $m$ that consists of the interpolating polynomial evaluated at each point in the domain.\n\nYou can test your function by plotting Runge's function $f(x)=\\frac{1}{1+25x^2}$ and your interpolating polynomial on the same plot for different values of $n$ equally spaced interpolating values, then comparing\nyour plot to the plots given in Figure \\ref{fig:badinterp}.\n\\label{prob:lagrange}\n\\end{problem}\n\nThe Lagrange form of polynomial interpolation is useful in some theoretical contexts and is easier to understand than other methods; however, it has some serious drawbacks that prevent it from being a\nuseful method of interpolation.\nFirst, Lagrange interpolation is $O(n^2)$ every time it is evaluated, whereas other interpolation methods are $O(n^2)$ (or faster) at startup but only $O(n)$ at run-time.\nSecond, Lagrange interpolation is an unstable algorithm which causes it to return inaccurate answers when larger numbers of interpolating points are used.\nThus, while useful in some situations, Lagrange interpolation is not desirable in most instances.\n\n\\subsection*{Barycentric Lagrange interpolation}\nBarycentric Lagrange interpolation is a simple variant of Lagrange interpolation that performs much better than plain Lagrange interpolation.\nIt is essentially just a rearrangement of the order of operations in Lagrange interpolation which results in vastly improved performance, both in speed and stability.\n\nBarycentric Lagrange interpolation relies on the observation that each basis function $L_j$ can be rewritten as\n\\[\nL_j(x) = \\frac{w(x)}{(x-x_j)}w_j\n\\]\nwhere\n\\[\nw(x) = \\prod_{j=1}^n (x-x_j)\n\\]\nand\n\\[\nw_j = \\frac{1}{\\prod_{k=1, k \\neq j}^n (x_j-x_k)}.\n\\]\nThe $w_j$'s are known as the \\emph{barycentric weights}.\n\nUsing the previous equations, the interpolating polynomial can be rewritten as\n\\[\np(x) = w(x) \\sum_{j=1}^n \\frac{w_j y_j}{x-x_j},\n\\]\nwhich is the \\emph{first barycentric form}.\nThe computation of $w(x)$ can be avoided by first noting that\n\\[\n1 = w(x) \\sum_{j=1}^n \\frac{w_j}{x-x_j},\n\\]\nwhich allows the interpolating polynomial to be rewritten as\n\\[\np(x) = \\frac{\\displaystyle\\sum_{j=1}^n \\frac{w_j y_j}{x-x_j}}{\\displaystyle\\sum_{j=1}^n \\frac{w_j}{x-x_j}}.\n\\]\nThis form of the Lagrange interpolant is known as the \\emph{second barycentric form}, which is the form used in Barycentric Lagrange interpolation.\nSo far, the changes made to Lagrange interpolation have resulted in an algorithm that is $O(n)$ once the barycentric weights ($w_j$) are known.\nThe following adjustments will improve the algorithm so that it is numerically stable and will later allow for the quick addition of new interpolating points after startup.\n\nThe second barycentric form makes it clear that any factors that are common to the $w_k$ can be ignored (since they will show up in both the numerator and denominator).\nThis allows for an important improvement to the formula that will prevent overflow error in the arithmetic.\nWhen computing the barycentric weights, each element of the product $\\prod_{k=1, k \\neq j}^n (x_j-x_k)$ should be multiplied by $C^{-1}$, where $4C$ is the width of the interval being interpolated\n($C$ is known as the \\emph{capacity} of the interval).\nIn effect, this scales each barycentric weight by $C^{1-n}$, which helps to prevent overflow during computation.\nThus, the new barycentric weights are given by\n\\[\nw_j = \\frac{1}{\\prod_{k=1, k \\neq j}^n \\left[(x_j-x_k) / C\\right]}.\n\\]\nOnce again, this change is 
possible since the extra factor $C^{1-n}$ is cancelled out in the final product.\nThis process is summed up in the following code:\n\n\\begin{lstlisting}\n# Given a NumPy array xint of interpolating x-values, calculate the weights.\n>>> n = len(xint)                   # Number of interpolating points.\n>>> w = np.ones(n)                  # Array for storing barycentric weights.\n# Calculate the capacity of the interval.\n>>> C = (np.<<max>>(xint) - np.<<min>>(xint)) / 4\n\n>>> shuffle = np.random.permutation(n-1)\n>>> for j in range(n):\n>>>     temp = (xint[j] - np.delete(xint, j)) / C\n>>>     temp = temp[shuffle]        # Randomize order of product.\n>>>     w[j] /= np.product(temp)\n\\end{lstlisting}\n\nThe order of \\li{temp} was randomized so that the arithmetic does not overflow due to poor ordering (if standard ordering is used, overflow errors can be encountered since all\nof the points of similar magnitude are multiplied together at once).\nWhen these two fixes are combined, the Barycentric Algorithm becomes numerically stable.\n\n
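Once the weights are known, evaluating the second barycentric form takes only a few lines. The following is a minimal sketch (the name \\li{bary_eval()} is ours, and it assumes no evaluation point coincides with an interpolating node):\n\\begin{lstlisting}\n# Evaluate the second barycentric form at an array x of points.\n>>> def bary_eval(x, xint, yint, w):\n>>>     q = w / (x[:, None] - xint[None, :])    # w_j / (x - x_j)\n>>>     return (q @ yint) / q.sum(axis=1)\n\\end{lstlisting}\n\n\\begin{problem}\n\\label{prob:barycentric}\nCreate a class that performs Barycentric Lagrange interpolation.\nThe constructor of your class should accept two NumPy arrays which contain the $x$ and $y$ values of the interpolation points.\nStore these arrays as attributes.\nIn the constructor, compute the corresponding barycentric weights and store the resulting array as a class attribute.\nBe sure that the relative ordering of the arrays remains unchanged.\n\nImplement the \\li{__call__()} method so that it accepts a NumPy array of values at which to evaluate the interpolating polynomial and returns an array of the evaluated points.\nYour class can be tested in the same way as the Lagrange function written in Problem \\ref{prob:lagrange}.\n\\end{problem}\n\n\\begin{warn}\nAs currently explained and implemented, the Barycentric class from Problem \\ref{prob:barycentric} will fail when a point to be evaluated exactly matches one of the $x$-values of the interpolating points.\nThis happens because a divide-by-zero error is encountered in the final step of the algorithm.\nThe fix for this, although not required here, is quite easy: keep track of any problem points and replace the final computed value with the corresponding $y$-value (since this is a point that is exactly interpolated).\nIf you do not implement this fix, just be sure not to pass in any points that exactly match your interpolating values.\n\\end{warn}\n\nAnother advantage of the barycentric method is that it allows for the addition of new interpolating points in $O(n)$ time.\nGiven a set of existing barycentric weights $\\{ w_j\\}_{j=1}^n$ and a new interpolating point $x_i$, the new barycentric weight is given by\n\\[\nw_i = \\frac{1}{\\prod_{k=1}^n (x_i-x_k)}.\n\\]\nIn addition to calculating the new barycentric weight, all existing weights should be updated as follows: $w_j=\\frac{w_j}{x_j-x_i}$.\n\n\\begin{problem}\n\\label{prob:add weights}\nInclude a method in the class written in Problem \\ref{prob:barycentric} that allows for the addition of new interpolating points by updating the barycentric weights.\nYour function should accept two NumPy arrays which contain the $x$ and $y$ values of the new interpolation points.\nUpdate and store the old weights, then extend the class attribute arrays that store the weights, and the $x$ and $y$ values of the interpolation points, with the new data.\nWhen updating all class attributes, make sure to maintain the same relative order.\n\\end{problem}\n\nThe 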
implementation outlined here calls for the y-values of the interpolating points to be known during startup; however, these values are not needed until run-time.\nThis allows the y-values to be changed without having to recompute the barycentric weights.\nThis is an additional advantage of Barycentric Lagrange interpolation.\n\n\\subsection*{Scipy's Barycentric Lagrange class}\nScipy includes a Barycentric interpolator class.\nThis class includes the same functionality as the class described in Problems \\ref{prob:barycentric} and \\ref{prob:add weights} in addition to the ability to update the y-values of the interpolation points.\nThe following code will produce a figure similar to Figure \\ref{fig:bad2}.\n\\begin{lstlisting}\n>>> from scipy.interpolate import BarycentricInterpolator\n\n>>> f = lambda x: 1/(1+25 * x**2)   # Function to be interpolated.\n# Obtain n equally spaced points on [-1,1].\n>>> n = 11\n>>> pts = np.linspace(-1, 1, n)\n>>> domain = np.linspace(-1, 1, 200)\n\n>>> poly = BarycentricInterpolator(pts[:-1])\n>>> poly.add_xi(pts[-1])          # Oops, forgot one of the points.\n>>> poly.set_yi(f(pts))           # Set the y values.\n\n>>> plt.plot(domain, f(domain))\n>>> plt.plot(domain, poly(domain))\n\\end{lstlisting}\n\n\\begin{comment} # Potential timing problem comparing student written barycentr, lagrange and Scipy's Lagrange.  Superceded by later problem which also includes Chebyshev.\n\\begin{problem}\nCompare the runtime and error of your Lagrange and Barycentric methods with Scipy's Barycentric class by writing two functions, one comparing error and the other run-time.\nBoth functions should accept a callable function to interpolate and an integer designating how many interpolating points should be used.\nAll computations should be made on 200 equally spaced points on the interval $[-1, 1]$ with the interpolating points coming from the Chebyshev roots.\nBoth functions should print the runtime or error of the interpolation for all three methods with an accompanying description (so that it is obvious which time/error corresponds to which method).\nWhen calculating error, use \\li{np.linalg.norm()} with \\li{ord=np.inf}.\n\\end{problem}\n\\end{comment}\n\n\\begin{figure}[H]\n\\captionsetup[subfigure]{justification=centering}\n\\captionsetup{justification=centering}\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/good_interp1.pdf}\n    \\caption{Polynomial using 5 Chebyshev roots.}\n    \\label{fig:good1}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/good_interp2.pdf}\n    \\caption{Polynomial using 11 Chebyshev roots.}\n    \\label{fig:good2}\n\\end{subfigure}\n\\caption{Example of overcoming Runge's phenomenon by using Chebyshev nodes for interpolating values.\nPlots made using Runge's function $f(x)=\\frac{1}{1+25x^2}$.\nCompare with Figure \\ref{fig:badinterp}.}\n\\label{fig:goodinterp}\n\\end{figure}\n\n\\section*{Chebyshev Interpolation}\n\\subsection*{Chebyshev Nodes}\nAs has been mentioned previously, the Barycentric version of Lagrange interpolation is a stable process that does not accumulate large errors, even with extreme inputs.\nHowever, polynomial interpolation itself is, in general, an ill-conditioned problem.\nThus, even small changes in the interpolating values can give drastically different interpolating polynomials.\nIn fact, poorly chosen interpolating points can result in a very bad 
approximation of a function.\nAs more points are added, this approximation can worsen.\nThis increase in error is called \\emph{Runge's phenomenon}.\n\nThe set of equally spaced points is an example of a set of points that may seem like a reasonable choice for interpolation but in reality produces very poor results.\nFigure \\ref{fig:badinterp} gives an example of this using Runge's function.\nAs the number of interpolating points increases, the quality of the approximation deteriorates, especially near the endpoints.\n\nAlthough polynomial interpolation has a great deal of potential error, a good set of interpolating points can result in fast convergence to the original function as the number of interpolating points is increased.\nOne such set of points is the Chebyshev extremal points, which are related to the Chebyshev polynomials (to be discussed shortly).\n%The $n$ Chebyshev roots on the interval $[a,b]$ are given by the formula $z_j=\\frac{1}{2}(a+b + (b-a)\\cos(\\frac{\\pi}{n}(j+\\frac{1}{2})))$ for $j=0,1,\\dots,n-1$.\nThe $n+1$ Chebyshev extremal points on the interval $[a,b]$ are given by the formula $y_j=\\frac{1}{2}(a+b + (b-a)\\cos(\\frac{j\\pi}{n}))$ for $j=0,1,\\dots,n$.\nThese points are shown in Figure \\ref{fig:extrema}.\nOne important feature of these points is that they are clustered near the endpoints of the interval; this is key to preventing Runge's phenomenon.\n\n\\begin{problem} % Compare error.\n% Compare the error of interpolation using equally spaced points and interpolation using the Chebyshev extrema by writing a function that performs the following experiment.\nWrite a function that defines a domain $\\x$ of $400$ equally spaced points on the interval $[-1, 1]$.\nFor $n=2^2,2^3,\\ldots,2^8$, repeat the following experiment.\n\\begin{enumerate}\n    \\item Interpolate Runge's function $f(x) = 1/(1+25x^2)$ with $n$ equally spaced points over $[-1,1]$ using SciPy's \\li{BarycentricInterpolator} class, resulting in an approximating function $\\tilde{f}$.\n    Compute the absolute error $\\|f(\\x) - \\tilde{f}(\\x)\\|_\\infty$ of the approximation using \\li{la.norm()} with \\li{<<ord>>=np.inf}.\n    \\item Interpolate Runge's function with $n+1$ Chebyshev extremal points, also via SciPy, and compute the absolute error.\n    % Note that the definition for Chebyshev extremal points uses $n+1$ interpolating values.\n\\end{enumerate}\nPlot the errors of each method against the number of interpolating points $n$ in a log-log plot.\n\nTo verify that your figure makes sense, try plotting the interpolating polynomials with the original function for a few of the larger values of $n$.\n\\end{problem}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics{figures/extrema.pdf}\n\\caption{The Chebyshev extremal points.  
The $n+1$ points where the Chebyshev polynomial of degree $n$ attains its extrema on $[-1,1]$.\nThese points are also the projection onto the $x$-axis of equally spaced points on the upper half of the unit circle.}\n\\label{fig:extrema}\n\\end{figure}\n\n\\subsection*{Chebyshev Polynomials}\nThe Chebyshev roots and Chebyshev extremal points are closely related to a set of polynomials known as the Chebyshev polynomials.\nThe first two Chebyshev polynomials are defined as $T_0(x)=1$ and $T_1(x)=x$.\nThe remaining polynomials are defined by the recursive algorithm $T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)$.\nThe Chebyshev polynomials form a complete basis for the polynomials in $\\mathbb{R}$, which means that for any polynomial $p(x)$,  there exists a set of unique coefficients $\\{a_k\\}_{k=0}^n$\nsuch that\n\\[\np(x) = \\sum_{k=0}^n a_kT_k(x).\n\\]\n\nFinding the Chebyshev representation of an interpolating polynomial is a slow process in general (dominated by matrix multiplication or solving a linear system), but when the interpolating values are the\nChebyshev extrema, there exists a fast algorithm for computing the Chebyshev coefficients of the interpolating polynomial.\nThis algorithm is based on the Fast Fourier transform, which has temporal complexity $O(n\\log n)$.\nGiven the $n+1$ Chebyshev extremal points $y_j=\\cos(\\frac{j\\pi}{n})$ for $j=0,1,\\dots,n$ and a function $f$, the unique degree-$n$ interpolating polynomial $p(x)$ is given by\n\\[\np(x)=\\sum_{k=0}^na_kT_k(x),\n\\]\nwhere\n\\[\na_k = \\gamma_k \\Re \\left[ DFT(f(y_0), f(y_1),\\dots, f(y_{2n-1}))\\right]_k.\n\\]\nNote that although this formulation includes $y_j$ for $j>n$, there are really only $n+1$ distinct values being used since $y_{n-k}=y_{n+k}$.\nAlso, $\\Re$ denotes the real part of the Fourier transform and $\\gamma_k$ is defined as\n\\[\n\\gamma_k =\n\\begin{cases}\n1 & k\\in \\{0,n\\} \\\\\n2 & \\text{otherwise.}\n\\end{cases}\n\\]\n\n
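In code, this computation takes only a few lines. The following is a minimal sketch of the FFT-based approach described above (the name \\li{cheb_coeffs()} is ours):\n\\begin{lstlisting}\n# Chebyshev coefficients of the degree-n interpolant of f.\n>>> def cheb_coeffs(f, n):\n>>>     samples = f(np.cos(np.pi * np.arange(2*n) / n))\n>>>     coeffs = np.real(np.fft.fft(samples))[:n+1] / (2*n)\n>>>     coeffs[1:n] *= 2            # gamma_k = 2 for 0 < k < n\n>>>     return coeffs\n\\end{lstlisting}\n\n\\begin{problem}\nWrite a function that accepts a function $f$ and an integer $n$.\nCompute the $n+1$ Chebyshev coefficients for the degree $n$ interpolating polynomial of $f$ using the Fourier transform (\\li{np.real()} and \\li{np.fft.fft()} will be helpful).\nWhen using NumPy's \\li{fft()} function, multiply every entry of the resulting array by the scaling factor $\\frac{1}{2n}$ to match the derivation given above.\n\nValidate your function with \\li{np.polynomial.chebyshev.poly2cheb()}.\nThe results should be exact for polynomials.\n\\begin{lstlisting}\n# Define f(x) = -3 + 2x^2 - x^3 + x^4 by its (ascending) coefficients.\n>>> f = lambda x: -3 + 2*x**2 - x**3 + x**4\n>>> pcoeffs = [-3, 0, 2, -1, 1]\n>>> ccoeffs = np.polynomial.chebyshev.poly2cheb(pcoeffs)\n\n# The following callable objects are equivalent to f().\n>>> fpoly = np.polynomial.Polynomial(pcoeffs)\n>>> fcheb = np.polynomial.Chebyshev(ccoeffs)\n\\end{lstlisting}\n\\end{problem}\n\n\\begin{comment}\nOnce the coefficients of the Chebyshev polynomial have been computed, there exists a fast algorithm (discussed in the Additional Material section) for evaluating the polynomials.\nNumPy's \\li{polynomial.chebyshev} module contains a method for doing this.\n\\begin{lstlisting}\n>>> from numpy.polynomial.chebyshev import chebval\n\n>>> domain = np.linspace(-1, 1, 5)\n>>> f = lambda x: x**4              # Function to interpolate.\n>>> coeffs = chebyshev_coeffs(f, 4)  # Function from Problem 5.\n>>> print(coeffs)\n[  3.75000000e-01  -5.88784672e-17   5.00000000e-01   5.88784672e-17\n   1.25000000e-01]\n\n>>> chebval(domain, coeffs)         # Evaluate at the points in 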
domain.\n[ 1.      0.0625  0.      0.0625  1.    ]\n\\end{lstlisting}\n\nNumPy's \\li{polynomial.chebyshev} module has many useful functions for working with Chebyshev polynomials.\nSome of the methods include fast implementations of polynomial operations such as addition, division, multiplication, and integration.\n\\end{comment}\n\n\\section*{Lagrange vs. Chebyshev}\n\nAs was previously stated, Barycentric Lagrange interpolation is $O(n^2)$ at startup and $O(n)$ at runtime, while Chebyshev interpolation is $O(n\\log n)$.\nThis improved speed is one of the greatest advantages of Chebyshev interpolation.\nChebyshev interpolation is also more accurate than Barycentric interpolation, even when using the same points.\nDespite these significant advantages in accuracy and temporal complexity, Barycentric Lagrange interpolation has one very important advantage over Chebyshev interpolation: Barycentric interpolation can be used on any set of interpolating points while Chebyshev is restricted to the Chebyshev nodes.\nIn general, because of their better accuracy, the Chebyshev nodes are more desirable for interpolation, but there are situations when the Chebyshev nodes are not available or when specific points are needed in an interpolation.\nIn these cases, Chebyshev interpolation is not possible and Barycentric Lagrange interpolation must be used.\n\n\\begin{comment} % Too confusing and too much of a repeat of a previous problem.\n\\begin{problem}\nInvestigate the claims made in the previous section by writing two different functions, one to compare error and the other to compare speed.\nThis function should accomplish this by doing the following:\n\\begin{itemize}\n\\item Interpolate Runge's function on the interval $[-1,1]$ using your Lagrange function, your Barycentric Lagrange class, Scipy's Barycentric class and Chebyshev interpolation.\n\\item Use the Chebyshev extremal points for the interpolating values.\n\\item Print the error or time required to run the function for each of the methods for $n=[500,1000,1500]$ interpolating values.\nBe sure to clearly label the method and value of $n$ being used for each value printed.\n\\item Always evaluate at 200 equally spaced points in the domain.\n\\end{itemize}\nUse \\li{scipy.linalg.norm()} with \\li{ord=np.inf} and the Chebyshev extremal points.\nNote that if \\li{inf} appears in any of your computations \\li{scipy.linalg.norm()} will raise an error; in this case print the error as \\li{inf} (you may need to use a \\li{try} \\li{except} block for some of the methods).\n\\end{problem}\n\\end{comment}\n\n\n\\section*{Utah Air Quality}\n\nThe Utah Department of Environmental Quality has air quality stations throughout the state of Utah that measure the concentration of particles found in the air.\nOne particulate of special interest is $PM_{2.5}$, a class of extremely fine particles known to cause tissue damage to the lungs.\nThe file \\texttt{airdata.npy} has the hourly concentration of $PM_{2.5}$ in micrograms per cubic meter for a particular measuring station in Salt Lake County for the year 2016.\nThe given data presents a fairly smooth function which can be reasonably approximated by an interpolating polynomial.\nAlthough Chebyshev interpolation would be preferable (because of its superior speed and accuracy), it is not possible in this case because the data is not continuous and the information at the Chebyshev nodes\nis not known.\nIn order to get the best possible interpolation, it is still preferable to use points close to the 
Chebyshev extrema with Barycentric interpolation.\nThe following code will take the $n+1$ Chebyshev extrema, find the closest matches in the non-continuous data found in the variable \\li{data}, and then calculate the barycentric weights.\n\n\\begin{lstlisting}\n>>> fx = lambda a, b, n: .5*(a+b + (b-a) * np.cos(np.arange(n+1) * np.pi / n))\n>>> a, b = 0, 366 - 1/24\n>>> domain = np.linspace(0, b, 8784)\n>>> points = fx(a, b, n)\n>>> temp = np.<<abs>>(points - domain.reshape(8784, 1))\n>>> temp2 = np.argmin(temp, axis=0)\n\n>>> poly = barycentric(domain[temp2], data[temp2])\n\\end{lstlisting}\n\n\\begin{problem}\nWrite a function that interpolates the given data along the whole interval at the closest approximations to the $n+1$ Chebyshev extremal nodes.\nThe function should accept $n$, perform the Barycentric interpolation, then plot the original data and the approximating polynomial on the same domain on two separate subplots.\nYour interpolating polynomial should give a fairly good approximation starting at around 50 points.\nNote that beyond about 200 points, the given code will break down since it will attempt to return multiple copies of the same points, causing a divide-by-zero error.\nIf you did not perform the fix suggested in the \\texttt{ACHTUNG} box, make sure not to pass in any points that exactly match the interpolating values.\n\\end{problem}\n\n\\newpage\n\n\\section*{Additional Material}\n\nThe \\emph{Clenshaw Algorithm} is a fast algorithm commonly used to evaluate a polynomial given its representation in Chebyshev coefficients.\nThis algorithm is based on the recursive relation between Chebyshev polynomials and is the algorithm used by NumPy's \\li{polynomial.chebyshev} module.\n\n\\begin{algorithm}\n\\begin{algorithmic}[1]\n\\Procedure{ClenshawRecursion}{$x, a$}\n\t\\State $u_{n+1} \\gets 0$\n\t\\State $u_{n} \\gets 0$\n\t\\State $k \\gets n-1$\n\t\\While{$k \\geq 1$}\n\t\t\\State $u_k \\gets 2 x u_{k+1} - u_{k+2} + a_k$\n\t\t\\State $k \\gets k-1$\n\t\\EndWhile\n\t\\State \\pseudoli{return} $a_0 + x u_1 -u_2$\n\\EndProcedure\n\\end{algorithmic}\n\\caption{Accepts an array $x$ of points at which to evaluate the polynomial and an array $a=[a_0,a_1,\\dots,a_{n-1}]$ of Chebyshev coefficients.}\n\\label{alg:clenshaw_recursion}\n\\end{algorithm}\n\n
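A direct NumPy transcription of Algorithm \\ref{alg:clenshaw_recursion} is given below as a sketch; it can be checked against \\li{np.polynomial.chebyshev.chebval()}.\n\\begin{lstlisting}\n# Evaluate sum_k a[k] * T_k(x) with the Clenshaw recursion.\n>>> def clenshaw(x, a):\n>>>     u1 = np.zeros_like(x)       # u_{k+1}\n>>>     u2 = np.zeros_like(x)       # u_{k+2}\n>>>     for k in range(len(a) - 1, 0, -1):\n>>>         u1, u2 = 2*x*u1 - u2 + a[k], u1\n>>>     return a[0] + x*u1 - u2\n\\end{lstlisting}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Some material from the old Chebyshev Polynomial lab that expands on the material contained in the current lab. 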
Some of it may be useful for future iterations of this lab.\n\n\\begin{comment} %This section gives a more complete introduction to Chebyshev polynomials including their orthogonality.\nThis sort of interpolation is also related to what are called the Chebyshev polynomials.\nChebyshev polynomials can be used very nicely to approximate functions on $[ -1, 1 ]$ and, with proper scaling, can be used to approximate reasonably well-behaved functions on any interval.\nThe $k$'th Chebyshev polynomial is commonly written $T_k \\left( x \\right)$.\nOne way to define the Chebyshev polynomials is that the $k$'th Chebyshev polynomial on $[-1,1]$ is the real part of the function $z^k$ on the unit circle.\nMore precisely, let $z(x) = x + i \\sqrt{1 - x^2}$; then the $k$'th Chebyshev polynomial on $[-1, 1]$ is the real part of $z^k$.\nWe can write this real part explicitly as\n\\[T_k \\left( x \\right) = \\frac{1}{2} \\left( z^k + z^{-k} \\right) = \\cos \\left( k \\cos^{-1} \\left( x \\right) \\right).\\]\nNotice that, as this function has been defined, the Chebyshev polynomial of order $k$ on an interval takes values of $1$ and $-1$ at the $k+1$ Chebyshev points corresponding to interpolation with a polynomial of order $k$ on that interval.\nThe first few Chebyshev polynomials are shown in Figure \\ref{fig:cheb_polys}.\n\nThese polynomials are orthogonal with respect to the integral inner product\n\\[\\langle f, g \\rangle = \\int_{-1}^1 \\frac{1}{\\sqrt{1 - x^2}} f\\left( x \\right) g\\left( x \\right) dx.\\]\nThis inner product allows us to write many functions as an infinite series of Chebyshev polynomials.\nIn particular, we can write any polynomial as a finite series of Chebyshev polynomials.\nAs it turns out, it is relatively easy to compute the Chebyshev series representation of an interpolating polynomial for a function on a given interval.\nThese approximations converge to the function we are approximating nearly as well as the partial sums of the infinite series formed using the inner product above, so, in practice, we will use them instead.\n\nIt can be shown that the Chebyshev polynomials satisfy the recurrence relation $ T_{k+1} \\left( x \\right) = 2 x T_k \\left( x \\right) - T_{k-1}\\left( x \\right)$, where\n$T_0 \\left( x \\right)$ is equal to $1$ and $T_1 \\left( x \\right)$ is equal to $x$ (still working on the interval $[-1, 1]$).\n\\end{comment}\n\n\\begin{comment} %This section includes a more mathematical introduction to the connection between Fourier series and Chebyshev Polynomials.\n\\section*{Chebyshev Polynomials and Fourier Series}\nIn looking at the plots of the Chebyshev polynomials you may have noticed that the Chebyshev polynomials take alternating values of $1$ and $-1$ at the Chebyshev points on the interval $[-1,1]$, similar to how the functions $\\cos\\left(k x \\right)$ change between $-1$ and $1$ at equispaced points in the interval $[ 0 , \\pi ]$.\nRecall that, as mentioned above, $T_k \\left( x \\right) = \\cos \\left( k \\cos^{-1} \\left( x \\right) \\right)$.\nFor discrete samples, this means that the Chebyshev coefficients of the interpolating polynomial through a function at the Chebyshev nodes are the same as the coefficients of the discrete cosine series at the points $\\cos^{-1} x$, where $x$ is a Chebyshev node.\nIn practice this means we can compute an array of samples of a function at the Chebyshev nodes of the interval $[-1, 1]$, then compute the discrete cosine transform to get the corresponding coefficients for the Chebyshev interpolation.\nThis is 
very helpful since the discrete cosine transform is a close relative of the discrete Fourier transform.\n\\li{scipy.fftpack} and \\li{pyfftw} both include discrete cosine transforms as \\li{scipy.fftpack.dct} and \\li{pyfftw.interfaces.scipy_fftpack.dct} respectively.\nThe discrete cosine transform is defined a little differently from the coefficients we need here.\nThere are several different versions of the discrete cosine transform.\nUsing the inverse discrete cosine transform of type 1 on an array $y$ of length $N$ will give us an array $x$ of values such that\n\\[y_k = z_0 + 2 \\sum_{n=1}^{N-2} z_n \\cos\\left(\\frac{\\pi k n}{N-1}\\right) + \\left(-1\\right)^k z_{N-1},\\]\nwhere $z = \\frac{x}{2 \\left(N - 1\\right)}$.\nTo obtain the Chebyshev coefficients, we desire to find the vector $a$ of coefficients $a_n$ such that\n\\[y_k = \\sum_{n=0}^{N-1} a_n \\cos\\left(\\frac{\\pi k n}{N-1}\\right).\\]\nThis means that, given the vector $x$, we can obtain $a$ by dividing all the terms of $x$ by $N-1$, and then dividing the first and last entries of $x$ by 2.\nThis means that we can obtain the coefficients of the Chebyshev interpolant using the following steps:\n\n\\begin{itemize}\n\n\\item Sample the function you desire to approximate at the Chebyshev nodes.\nMake sure you use the sampled values in the order that corresponds to the Chebyshev nodes when sorted in \\emph{descending} order and not ascending order.\n\n\\item Compute the inverse discrete cosine transform of the data.\nUse the option \\li{type=1} to tell either scipy or pyfftw the version of the discrete cosine transform you want to use.\n\n\\item Divide all the coefficients by $\\left( N - 1 \\right)$ where $N$ is the number of nodes used.\n\n\\item Divide the first and last coefficient by $2$.\n\n\\end{itemize}\n\n\\begin{warn}\nOften it is customary in computation to use values in ascending order.\nWhen you compute the Chebyshev coefficients for the interpolating polynomial through a set of points you will need the samples from the points sorted in descending order, not ascending order.\n\\end{warn}\n\\end{comment}\n\n\\begin{comment} % Potential problem and discussion exploring convergence of polynomial interpolation.\n\\begin{problem}\n\\label{prob:cheb_interpolations}\nWe will briefly consider the rate of convergence of these polynomial approximations.\nCompute the coefficients for the interpolating Chebyshev series for the function $\\cos x$ on the interval $[-1, 1]$.\nThis approximation converges very rapidly, so the first $20$ terms or so should be more than enough.\nHow many of these coefficients have absolute value greater than $10^{-14}$?\nHow closely does your series approximate the actual function?\n\nNow compute the coefficients for a degree $100000$ polynomial approximating the function\n\\[\\sin \\left( \\frac{1}{x} \\right) \\sin \\left( \\frac{1}{\\sin \\left( \\frac{1}{x} \\right)} \\right) \\]\non the interval $[-1, 1]$.\nHow large are the last $10$ coefficients in the series?\nUse NumPy's Chebyshev class to plot this function at $100001$ equispaced points on the interval $[-1, 1]$.\nPlot it with the original function.\nCompare the two.\nHow close are they?\nNotice how the interpolating polynomial is able to approximate the original function about as well as the discrete sample of the original function.\n\\end{problem}\n\nProblem \\ref{prob:cheb_interpolations} also illustrates an interesting principle.\nThese series converge more quickly when a function is infinitely differentiable everywhere in the complex 
plane (the proper term for a function like this is ``entire'').
For functions that do not satisfy this property, it can be shown (roughly speaking) that the rate of convergence depends on how many times differentiable the function is on the interval of interpolation.
For a less well-behaved function like the one considered in the second half of Problem \ref{prob:cheb_interpolations}, the coefficients converge much more slowly.
The last $10$ coefficients for the interpolant of degree $2^{23} - 1$ are still on the order of $10^{-8}$:
\begin{lstlisting}
[9.52973125e-08,  -1.89451973e-09,  -7.42182166e-08,
 1.89319137e-09,   5.26564839e-08,  -1.89451836e-09,
 -3.13050802e-08,   1.89319005e-09,   1.03700608e-08,
 -9.47258778e-10]
\end{lstlisting}
\end{comment}
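The four-step recipe described above translates directly into code. The following is a minimal sketch, not the lab's reference solution; the helper name \li{cheb_coeffs} is an assumption for illustration.
\begin{lstlisting}
import numpy as np
from scipy.fftpack import idct

def cheb_coeffs(f, n):
    # Chebyshev (extreme) nodes on [-1, 1], already in descending order.
    nodes = np.cos(np.pi * np.arange(n) / (n - 1))
    samples = f(nodes)
    # Inverse DCT of type 1, then divide everything by N - 1.
    coeffs = idct(samples, type=1) / (n - 1)
    # Halve the first and last coefficients.
    coeffs[0] /= 2
    coeffs[-1] /= 2
    return coeffs

# Quick check against NumPy's Chebyshev class.
coeffs = cheb_coeffs(np.cos, 20)
p = np.polynomial.chebyshev.Chebyshev(coeffs)
x = np.linspace(-1, 1, 101)
print(np.max(np.abs(p(x) - np.cos(x))))  # should be near machine precision
\end{lstlisting}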
\subsection{Density estimation through direct sampling}

Direct sampling means drawing samples directly, e.g. physically picking balls from an urn.

There is a distribution \(P(x)\) that we want to know more about.

If \(P(x)\) had a closed form, we could estimate it by evaluating it at values of \(x\).

\subsection{Limitations of direct sampling}

However, if the function does not have such a form, we cannot do that.

We can't simply plug in values, because the function is too complex to evaluate directly.

Sometimes we may instead know a function of the form

\(f(x)=cP(x)\)

that is, a multiple of the density.

This can happen with Bayes' theorem:

\(P(y|x)=\dfrac{P(x|y)P(y)}{P(x)}\)

We may be able to estimate \(P(x|y)\) and \(P(y)\), but not \(P(x)\).

This means we have

\(P(y|x)=cP(x|y)P(y)\)

where \(c=1/P(x)\) is an unknown normalizing constant.
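When only \(f(x)=cP(x)\) can be evaluated, one standard workaround is rejection sampling: the unknown constant does not affect the accept/reject test. The following Python sketch is illustrative only; the names, and the assumed bound \(f(x)\leq M q(x)\) for a proposal density \(q\), are not part of the notes above.
\begin{lstlisting}
import numpy as np

def rejection_sample(f, sample_q, pdf_q, M, n):
    # Draw n samples from P(x) given only f(x) = c * P(x).
    # Requires f(x) <= M * pdf_q(x) for all x.
    out = []
    while len(out) < n:
        x = sample_q()                       # propose from q
        u = np.random.uniform(0.0, M * pdf_q(x))
        if u <= f(x):                        # accept with prob f(x) / (M q(x))
            out.append(x)
    return np.array(out)
\end{lstlisting}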
\chapter{Extending FLID}
\label{sec:models}

\section{Beyond Diversity: FLDC}

Diversity is an important property in the context of summarization \citep{tschiatschek16learning}; however, it is not the only one that should be considered. Coherence is another desired property of summaries, for example in structured summaries \citep{Yan:2011:ETS:2009916.2010016}. Balancing coherence and diversity is considered a challenge: maximizing diversity alone may lead to disconnected summaries, while focusing on coherence can hinder the breadth of the resulting summaries \citep{Shahaf2012}.

To better understand why coherence is important, consider the construction of a spatial summary for photos taken in a city. A model based on spatial diversity would create sets containing photos taken at distant locations in the city. However, another possibility is to have summaries that contain photos taken close to some starting point; in this case the desired property is not spatial diversity but rather coherence.

Therefore, a natural extension to the FLID model is to add the capacity to model coherence along with diversity. We propose the addition of a log-supermodular term analogous to the log-submodular term \eqref{eq:diversity} in order to model the complementarity between elements.

This supermodular term introduces a representation of the elements using $K$-dimensional vectors $\mathbf{w}^{e} \in \mathbb{R}^{K}_{\geq 0}$, where each dimension $1 \leq c \leq K$ captures a concept that relates to set coherence and $w^{e}_{i,c}$ quantifies the contribution of each element $i$ to each coherence concept $c$. The coherence of a set $S$ with respect to each concept $c$ is quantified as $\sum_{i \in S}{w^{e}_{i,c}} - \max_{i \in S}{w^{e}_{i,c}}$, which is a supermodular term mirroring the submodular term in FLID. Summing over all the concepts results in the proposed coherence term. Accordingly, define $\mathbf{W}^{e} \in \mathbb{R}^{|V| \times K}_{\geq 0}$ as the matrix whose $i$-th row is the representation $\mathbf{w}^{e}$ of element $i$.

Adding the coherence term to FLID results in the extended model, which we refer to as the Facility Location Diversity and Coherence (FLDC) model. Its probability distribution is defined as

\begin{equation}
  \tag{FLDC}
  P(S) = \frac{1}{Z}\exp{\left(\sum_{i \in S}{u_{i}} + \mathrm{div}(S) + \sum_{c=1}^{K}{\left(\sum_{i \in S}{w^{e}_{i,c}} - \max_{i \in S}{w^{e}_{i,c}}\right)}\right)},
  \label{eq:fldc}
\end{equation}

where $\mathrm{div}(S)$ is the diversity term in Equation \eqref{eq:diversity}.

For the FLDC model, the partition function can be computed in $\mathcal{O}(|V|^{L+K+1})$ time. This complexity can be derived using an algorithm similar to the one presented by \citet{tschiatschek16learning} for FLID; the key observation is that their algorithm only requires the ordering of the weights to determine the elements eligible for a set given the maximum weights for each dimension. This can easily be extended to include the ordering of the coherence weights as well. This requires evaluating $K$ more dimensions, which results in the added term in the exponent of the complexity expression and makes the computation of the partition function more expensive than in the FLID case.

\subsection{Learning FLDC Models}

As with FLID, the parameters for an FLDC model can be estimated using NCE. 
For FLDC the vector $\boldsymbol{\theta}$ is composed of $[\mathbf{u}, \mathbf{W}^{b}, \mathbf{W}^{e}, \hat{Z}]$ and its gradient is the same as in equations \eqref{eq:gradient-flid-1}-\eqref{eq:gradient-flid-4}, with the addition of a gradient with respect to the new parameter, $\mathbf{W}^{e}$, which is given by

\begin{equation}
\left(\nabla_{\mathbf{W}^{e}}\log{\hat{P}_{d}(S; \hat{Z}, \mathbf{u}, \mathbf{W}^{b}, \mathbf{W}^{e})}\right)_{i,c} = \begin{cases}
1 & \text{if}\ i \in S\ \text{and}\ i \neq \argmax_{j \in S}{w^{e}_{j,c}} \\
0 & \text{otherwise}
\end{cases}, \label{eq:sub-fldc}\\
\end{equation}

where $\left(\nabla_{\mathbf{W}^{e}}\cdot \right)_{i,c}$ represents the $(i,c)$-th entry in the gradient with respect to $\mathbf{W}^{e}$.

This also affects the overall complexity of the learning algorithm. Computing the gradient takes $\mathcal{O}((L+K)|S|)$ time for FLDC at each step; therefore a full pass over the data and noise samples takes $\mathcal{O}(|\mathcal{D}\cup\mathcal{N}|\kappa(L+K))$ time. This is not significantly more expensive than learning FLID because the complexity is still linear in the data and noise.


\subsection{Example: Two Non-overlapping Clusters}
\label{sec:fldc-toy}

As an example of the extended model, consider the distribution presented in Table \ref{tab:fldc-toy-probs} for $V = \{1,2,3,4\}$, representing a set of two non-overlapping clusters with 2 elements each. Intuitively, this can be modeled by considering that there is diversity between elements in different clusters, while inside a cluster there is a coherence component that ties the elements together.

Concretely, the weight matrices $\mathbf{W}^{b}, \mathbf{W}^{e}$ in Figure \ref{fig:fldc-toy-mixed-weights} illustrate one possible instance of the model. The corresponding utility vector is $\mathbf{u} = \overrightarrow{0}$, because there is no indication that an individual element is favored over the pairs. This model is easily interpretable and accurately realizes the distribution.

\begin{table}
  \centering
  \caption{Probability distribution for Example \ref{sec:fldc-toy}.}
  \begin{tabular}{@{}ll@{}}
    \toprule
    $S$ & $P(S)$  \\
    \midrule
    $\{1,2\}$ & $0.5$ \\
    $\{3,4\}$ & $0.5$ \\
    Otherwise & $0.0$ \\
    \bottomrule
  \end{tabular}
  \label{tab:fldc-toy-probs}
\end{table}

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{fldc_toy_example_mixed_weights}
  \caption{Diversity and coherence weights for FLDC model in Example \ref{sec:fldc-toy}. The dotted line divides the clusters.}
  \label{fig:fldc-toy-mixed-weights}
\end{figure}

\section{Generalizing through Features: FFLDC}

An important characteristic of the FLID model, and by extension the FLDC one, is that it is agnostic to the type of elements in the ground set; this allows its application to a wide range of problems without prior knowledge. However, the downside is that the model has no capability to make use of information about the elements to improve the modeling of the data. Moreover, if a new element is added to the set, there is no way to generalize the existing knowledge about similar elements.

Considering these issues, we propose a further extension to the model. First, an element ($i \in V$) is represented as a vector $\mathbf{x}_{i} \in \mathbb{R}^{M}$ where each entry is a feature, e.g. 
for venues one feature could be its aggregated rating, while another feature could indicate whether it is indoors or outdoors. The representations of all elements are collected in the matrix $\mathbf{X} \in \mathbb{R}^{|V| \times M}$, where each row $i$ corresponds to the representation $\mathbf{x}_{i}$ of element $i$.

Then, the utility vector $\mathbf{u}$ and the weight matrices $\mathbf{W}^{b}, \mathbf{W}^{e}$ are factorized,

\begin{align}
  \mathbf{u} &= \mathbf{Xa},   \label{eq:ffldc-factorization-1} \\
  \mathbf{W}^{b} &= \mathbf{XB},\ \text{and}  \label{eq:ffldc-factorization-2} \\
  \mathbf{W}^{e} &= \mathbf{XE},
  \label{eq:ffldc-factorization-3}
\end{align} 

where $\mathbf{a} \in \mathbb{R}^{M}$ represents the contribution of each feature to the total utility of an item, and $\mathbf{B} \in \mathbb{R}^{M \times L}$ and $\mathbf{E} \in \mathbb{R}^{M \times K}$ encode the contribution of each feature to each latent diversity and coherence dimension, respectively. The intuition behind this factorization is that the information about the items can enhance the latent representations. For example, if $M < |V|$ and the features are sufficient to represent the items, i.e. each item can be uniquely identified from its features, learning is expected to require less computation because of the reduced number of parameters in contrast to learning an equivalent FLDC model.

We refer to the extended model as the Featurized Facility Location Diversity and Coherence (FFLDC) model and its probability distribution is defined as:

\begin{align}
  \tag{FFLDC} \label{eq:ffldc}
  & P(S) &=& \frac{1}{Z}\exp{\left(\sum_{i \in S}{\sum_{k=1}^{M}x_{i,k}a_{k}} + fdiv(S) + fcoh(S)\right)}, \\
  \text{where} & & \\
  & fdiv(S) &=& \sum_{d=1}^{L}{\left(\max_{i \in S}{\sum_{k=1}^{M}x_{i,k}b_{k,d}} - \sum_{i \in S}{\sum_{k=1}^{M}x_{i,k}b_{k,d}}\right)}\ \text{and} \\
  & fcoh(S) &=& \sum_{c=1}^{K}{\left(\sum_{i \in S}{\sum_{k=1}^{M}x_{i,k}e_{k,c}} - \max_{i \in S}{\sum_{k=1}^{M}x_{i,k}e_{k,c}}\right)}
\end{align}

where $a_{k}$ is the $k$-th entry of $\mathbf{a}$, $b_{k,d}$ is the $(k,d)$-th entry of $\mathbf{B}$, $e_{k,c}$ is the $(k,c)$-th entry of $\mathbf{E}$ and $x_{i,k}$ is the $(i,k)$-th entry of $\mathbf{X}$.

\begin{remark}
  If $\mathbf{X} = \mathcal{I}$, then FFLDC is equivalent to FLDC, with $\mathbf{u} = \mathbf{a}$, $\mathbf{W}^{b} = \mathbf{B}$ and $\mathbf{W}^{e} = \mathbf{E}$.
\end{remark}

The use of features also allows the application of the model to previously unseen elements, hence solving the aforementioned problem of generalization. This is possible because the model is defined on the space of features and not directly on items; to evaluate the probability of a set containing a previously unseen element $j$, only its representation $\mathbf{x}_{j}$ is required.

The computation of the partition function for this model can be performed using the fact that any FFLDC model can be transformed into an equivalent FLDC model through the factorizations in equations \eqref{eq:ffldc-factorization-1}-\eqref{eq:ffldc-factorization-3}. This transformation requires $\mathcal{O}(|V|M(L+K))$ time for the matrix multiplications, therefore the overall complexity of calculating the partition function for an FFLDC model is $\mathcal{O}(|V|M(L+K) + |V|^{L+K+1})$. 
This expression can be simplified to $\mathcal{O}(|V|^{L+K+1})$ because the exponential term is significantly larger for sensible values of $M$. Given that the complexity of computing the partition function for FFLDC is equivalent to that of the FLDC model, it is also intractable in practice.

\subsection{Learning FFLDC Models}

For FFLDC, the parameter vector $\boldsymbol{\theta}$ is different from the previous models: it is composed of $[\mathbf{a}, \mathbf{B}, \mathbf{E}, \hat{Z}]$ and the gradient of the NCE objective $g(\boldsymbol{\theta})$ is given by,

\begin{align}
  \left(\nabla_{\mathbf{a}}\log{\hat{P}_{d}(S; \hat{Z}, \mathbf{a}, \mathbf{B}, \mathbf{E})}\right)_{m} &= \sum_{i \in S} x_{i,m} \label{eq:sub-a}\\
  \left(\nabla_{\mathbf{B}}\log{\hat{P}_{d}(S; \hat{Z}, \mathbf{a}, \mathbf{B}, \mathbf{E})}\right)_{m,d} &= x_{i^{d},m} - \sum_{i \in S} x_{i,m} \label{eq:sub-b} \\
  \left(\nabla_{\mathbf{E}}\log{\hat{P}_{d}(S; \hat{Z}, \mathbf{a}, \mathbf{B}, \mathbf{E})}\right)_{m,c} &= \sum_{i \in S} x_{i,m} - x_{i^{c},m}, \label{eq:sub-e}
\end{align}

where $i^{d} = \argmax_{i \in S}{\sum_{k=1}^{M}{x_{i,k}b_{k,d}}}$ and $i^{c} = \argmax_{i \in S}{\sum_{k=1}^{M}x_{i,k}e_{k,c}}$. Here $\left(\nabla_{\mathbf{a}}\cdot \right)_{m}$ represents the $m$-th entry of the sub-gradient with respect to $\mathbf{a}$, $\left(\nabla_{\mathbf{B}}\cdot\right)_{m,d}$ represents the $(m,d)$-th entry of the sub-gradient with respect to $\mathbf{B}$ and $\left(\nabla_{\mathbf{E}}\cdot\right)_{m,c}$ represents the $(m,c)$-th entry of the sub-gradient with respect to $\mathbf{E}$.

\begin{remark}
  For $\mathbf{X} = \mathcal{I}$, the gradients in \eqref{eq:sub-a}-\eqref{eq:sub-e} are equivalent to the FLDC sub-gradients, because $x_{i,m} = 1$ only when $i = m$.
\end{remark}

Regarding time complexity, the FFLDC model requires more operations for the gradient due to the inclusion of features. Concretely, computing the gradient for a set $S$ takes $\mathcal{O}(|S|M(L+K))$ time, which makes the running time of a single pass over data and noise samples $\mathcal{O}(|\mathcal{D}\cup\mathcal{N}|M(L+K)\kappa)$. Even though this is larger than the complexity of FLID or FLDC, it is still linear in the size of the data and noise samples.

\subsection{Example: Rated Locations}
\label{sec:ffldc-toy}

A small town has 6 popular locations, each of which has been rated from 0 to 5, where 0 represents a bad review and 5 an excellent review. Collected data shows that a typical visitor is more likely to visit and take photos at places with high ratings; however, this is not the only factor driving their behavior. For example, an average visitor does not visit two places that serve food on the same day; instead, the data shows that visitors usually go to one place that serves food and then to an outdoor one, or they just visit one that has both characteristics. 
This data is summarized in Table \ref{tab:ffldc-toy-probs}; the task is to model this behavior using the FFLDC model.

\begin{table}
  \centering
  \caption{Synthetic data for Example \ref{sec:ffldc-toy}.}
  \begin{tabular}{@{}ll@{}}
    \toprule
    Locations visited $(S)$ & $P(S)$  \\
    \midrule
    $\{1,3\}$ & $0.30$ \\
    $\{3,4\}$ & $0.25$ \\
    $\{3,6\}$ & $0.15$ \\
    $\{2\}$ & $0.10$ \\
    $\{1\}$ & $0.06$ \\
    $\{3\}, \{4\}$ & $0.04$ \\
    $\{5\}, \{6\}$ & $0.03$ \\
    Otherwise & $0.00$ \\
    \bottomrule
  \end{tabular}
  \label{tab:ffldc-toy-probs}
\end{table}

In this example there is knowledge about the items and about which features are relevant for the data. These features are summarized in equation \eqref{eq:ffldc-toy-feats}. The first column corresponds to the aforementioned rating; the second and third are binary features indicating whether the location is an outdoor one and whether it serves food, respectively.

\begin{equation}
  \mathbf{X} = \left(
    \begin{array}{ccc}
      4 & 1 & 0 \\
      4 & 1 & 1 \\
      3 & 0 & 1 \\
      3 & 1 & 0 \\
      2 & 1 & 1 \\
      2 & 1 & 0  \\
     \end{array}
  \right)
  \label{eq:ffldc-toy-feats}
\end{equation}

A possibility is an FFLDC model that encourages diversity on the second and third features, i.e. models the idea that visitors do not go to more than one place that serves food or is outdoors, while assigning a positive utility and coherence value to the first feature, i.e. modeling the preference for places with high ratings. One such model is presented in Figure \ref{fig:ffldc-toy-all-weights}; it approximates the distribution from Table \ref{tab:ffldc-toy-probs} and illustrates the type of model that is useful in this example.

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{ffldc_toy_example}
  \caption{FFLDC sample model for Example \ref{sec:ffldc-toy}.}
  \label{fig:ffldc-toy-all-weights}
\end{figure}

If a new location $j$ is considered, then it is possible to apply the model and estimate the probability of sets that include it. For example, consider a seventh location with the features $\mathbf{x}_{7} = [5, 1, 0]$; using the same parameters, a new distribution can be computed without re-learning the model, as would be required with FLID or FLDC. 
The updated distribution is shown in Table \ref{tab:ffldc-toy-probs-2} and it can be considered a sensible distribution given the problem description and the high rating of the new location, hence showing the model's capacity to generalize.

\begin{table}
  \centering
  \caption{Synthetic data for Example \ref{sec:ffldc-toy} after adding a new item.}
  \begin{tabular}{@{}ll@{}}
    \toprule
    Locations visited $(S)$ & $P(S)$  \\
    \midrule
    $\{3,7\}$ & $0.24$ \\
    $\{1,3\}$ & $0.21$ \\
    $\{3,4\}$ & $0.19$ \\
    $\{3,6\}$ & $0.10$ \\
    $\{7\}$ & $0.06$ \\
    $\{1\}, \{2\}$ & $0.04$ \\ 
    $\{3\}, \{4\}$ & $0.03$ \\
    $\{5\}, \{6\}$ & $0.02$ \\
    Otherwise & $0.00$ \\
    \bottomrule
  \end{tabular}
  \label{tab:ffldc-toy-probs-2}
\end{table}
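As an aside, the unnormalized log-probabilities defined above are straightforward to compute. The following Python sketch is illustrative only and is not the implementation used in this thesis; for FFLDC one would first form $\mathbf{u} = \mathbf{Xa}$, $\mathbf{W}^{b} = \mathbf{XB}$ and $\mathbf{W}^{e} = \mathbf{XE}$.
\begin{lstlisting}
import numpy as np

def fldc_score(S, u, W_b, W_e):
    # Unnormalized log-probability of a set S under FLDC.
    # u: (|V|,) utilities; W_b: (|V|, L) diversity weights;
    # W_e: (|V|, K) coherence weights; S: list of item indices.
    S = list(S)
    utility = u[S].sum()
    # Diversity: sum over latent dimensions of (max - sum).
    div = (W_b[S].max(axis=0) - W_b[S].sum(axis=0)).sum()
    # Coherence: sum over concepts of (sum - max).
    coh = (W_e[S].sum(axis=0) - W_e[S].max(axis=0)).sum()
    return utility + div + coh
\end{lstlisting}
For a toy ground set, the partition function can be brute-forced by summing the exponential of this score over all $2^{|V|}$ subsets, recovering distributions like the ones in the tables above.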
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[letterpaper, total={7.5in, 9in}]{geometry}
\usepackage{amsmath}
\usepackage{amsfonts}

\title{Applied Partial Differential Equations}
\author{Grant Smith}
\date{Spring 2022}

\begin{document}

\maketitle

\section{Classifying PDEs}

\begin{itemize}
    \item 2) Classify the following operators as linear or nonlinear:
    \begin{enumerate}
        \item $\mathcal{L} u = u_x + xu_y$ -- linear
        \item $\mathcal{L} u = u_x + uu_y$ -- nonlinear
        \item $\mathcal{L} u = u_x + \left(u_y\right)^2$ -- nonlinear
        \item $\mathcal{L} u = u_x + u_y + 1$ -- nonlinear
        \item $\mathcal{L} u =  \sqrt{1+x^2}\left(\cos(y)\right)u_x+u_{yxy}-\left[\arctan(x/y)\right]u $ -- linear
    \end{enumerate}
    \item 3) For each of the following equations, state the order and whether it is nonlinear, linear inhomogeneous, or linear homogeneous. Provide reasons.

        \begin{tabular}{|| c| c| c| c|| }
        \hline
        Equation & Order & Classification & Reason \\ 
        \hline \hline
 $u_t - u_{xx} + 1 = 0$ & 2 & linear inhomogeneous & inspection \\ \hline
 $u_t - u_{xx} + xu = 0$ & 2 & linear homogeneous & inspection \\ \hline
 $u_t - u_{xxt} + uu_x = 0$ & 3 & nonlinear & the $uu_x$ term is nonlinear \\ \hline
 $u_{tt} - u_{xx} + x^2 = 0$ & 2 & linear inhomogeneous & inspection \\ \hline
 $iu_t - u_{xx} + u/x = 0$ & 2 & linear homogeneous & every term is linear in $u$ \\ \hline
 $u_x(1+u_x^2)^{-1/2} + u_y(1+u_y^2)^{-1/2} = 0$ & 1 & nonlinear & The derivative terms are squared \\ \hline
 $u_x - e^yu_{y} = 0$ & 1 & linear homogeneous & inspection \\ \hline
 $u_t + u_{xxxx} + \sqrt{1+u} = 0$ & 4 & nonlinear & u is under the square root \\
\hline
\end{tabular}

\item 4) Show that the difference of two solutions of an inhomogeneous linear equation $\mathcal{L} u = g$ with the same $g$ is a solution of the homogeneous equation $\mathcal{L} u =0$

\begin{enumerate}
    \item let solution 1 be $u_1$, and let solution 2 be $u_2$
    \item the question asks about the difference of the two solutions. Thus, consider $\mathcal{L} \left( u_1 - u_2 \right)$
    \item because $\mathcal{L}$ is linear, $\mathcal{L} \left( u_1 - u_2 \right) = \mathcal{L}  u_1 - \mathcal{L}  u_2 = g - g = 0$
    \item thus, $u_1 - u_2$ is a solution to $\mathcal{L} u =0$
    
\end{enumerate}

\end{itemize}
\newpage
\section{First Order Linear PDEs}

\subsection{Deriving the Method of Characteristics}



The problem we will solve is of the form 
$$ a(x,y) \frac{\partial u}{\partial x}(x,y) + b(x,y) \frac{\partial u}{\partial y}(x,y) = c(x,y)$$

Its solution is characterized by:

$$\frac{\partial x}{\partial s}(s,t) = a^*(s,t)$$
$$\frac{\partial y}{\partial s}(s,t) = b^*(s,t)$$
$$ \frac{\partial u^*}{\partial s}(s,t) = c^*(s,t)$$

which we will derive now. First, let's define some coordinate transforms.  
These will help us get x and y from s and t\n\n$$x(s,t); y(s,t)$$\n\nAnd we will define our transforms to be bijective (or nonzero Jacobians), so we can go the other way and get s and t from x and y\n\n$$s(x,y); t(x,y)$$\n\nThis is enforced by:\n\n$$ x(s(x,y),t(x,y)) = x $$\n$$ y(s(x,y),t(x,y)) = y$$\n$$s(x(s,t),y(s,t)) = s $$\n$$ t(x(s,t),y(s,t)) = t$$\n\nWe also have a transformed u that we get by using these transform equations:\n\n$$u^*(s,t):= u(x(s,t),y(s,t))$$\n\nAnd for posterity and clarity's sake, we show that we can get u back from the transformed u:\n\n$$u(x,y) = u^{**}(x,y) = u^*(x(s(x,y),t(x,y)),y(s(x,y),t(x,y)))$$\n\nWe can also transform the whole first equation:\n\n$$a^*(s,t) \\frac{\\partial u}{\\partial x}^*(s,t) + b^*(s,t) \\frac{\\partial u}{\\partial y}^*(s,t) = c^*(s,t)$$\n\nHere, we rewrite $u^*$ for convenience\n\n$$u^*(s,t):= u(x(s,t),y(s,t))$$\n\nWe could differentiate $u^*$ with respect to either of its arguments. Let's just choose the first one for now, $s$.\n\n$$\\frac{\\partial u^*}{\\partial s}(s,t) = \\frac{\\partial u}{\\partial x}(x(s,t),y(s,t)) \\frac{\\partial x}{\\partial s}(s,t) +\u00a0\\frac{\\partial u}{\\partial y}(x(s,t),y(s,t)) \\frac{\\partial y}{\\partial s}(s,t)$$\n\nAnd we can do the coordinate transform on the partials of $u$ to get:\n\n$$\\frac{\\partial u^*}{\\partial s}(s,t) = \\frac{\\partial u}{\\partial x}^*(s,t) \\frac{\\partial x}{\\partial s}(s,t) +\u00a0\\frac{\\partial u}{\\partial y}^*(s,t) \\frac{\\partial y}{\\partial s}(s,t)$$\n\nWhich we can pattern match against the transformed version of the original equation (rewritten here for convenience)\n\n$$a^*(s,t) \\frac{\\partial u}{\\partial x}^*(s,t) + b^*(s,t) \\frac{\\partial u}{\\partial y}^*(s,t) = c^*(s,t)$$\n\n or \n\n$$c^*(s,t)= \\frac{\\partial u}{\\partial x}^*(s,t)a^*(s,t) +  \\frac{\\partial u}{\\partial y}^*(s,t)b^*(s,t)$$\n\nand get:\n\n$$\\frac{\\partial x}{\\partial s}(s,t) = a^*(s,t)$$\n$$\\frac{\\partial y}{\\partial s}(s,t) = b^*(s,t)$$\n$$ \\frac{\\partial u^*}{\\partial s}(s,t) = c^*(s,t)$$\n\n\\newpage\n\n\\subsection{Problems}\n\n\\begin{itemize}\n    \\item 1) Solve: $2u_t + 3u_x = 0$\n    \\begin{enumerate}\n        \\item The form of this equation hints at a directional derivative. So if we change the coordinates to one which runs along this direction, then we have a nice form of differential equation.\n        \\item Change the coordinates:\n        $$u\\left(x(c,k),t(c,k)\\right) = u^*(c,k)$$\n        \\item Now, pick one of the variables, $c$ or $k$, and partially differentiate the above equation with respect to that variable. For example, we'll pick $c$.\n        $$\\frac{\\partial u}{\\partial x} \\frac{\\partial x}{\\partial c} + \\frac{\\partial u}{\\partial t} \\frac{\\partial t}{\\partial c} = \\frac{\\partial u^*}{\\partial c}$$\n        \\item By pattern matching against the initial equation, we have the following three equations:\n            $$\\frac{\\partial x}{\\partial c} = 3 ; \\frac{\\partial t}{\\partial c} = 2; \\frac{\\partial u^*}{\\partial c} = 0$$\n        \\item By solving all three, we get the following:\n            $$x = 3c + f_1(k) ; t = 2c + f_2(k); u^* = f(k)$$\n        \\item Given that the relationships between $x, t, c, k$ are just coordinate transforms, we can choose $f_1$ and $f_2$ to be suitable. 
We choose $f_1 = k$ and $f_2 = 0$, which gives us the following:\n        $$x = 3c + k ; t = 2c; u^* = f(k)$$\n        This works because we know that as $c$ and $k$ vary throughout the whole plane, the corresponding $x$ and $y$ will also trace the whole plane. Also, these are convenient choices because the initial condition has $t=0$, so having $t$ be a very simple function (i.e. not dependent upon $k$) will be convenient.\n        \\item We can rearrange to get:\n        $$c = \\frac{t}{2} ; k = x - \\frac{3t}{2}$$\n        \\item Looking back at step 2, we can now write: \n        $$u\\left(x(c,k),t(c,k)\\right) = u^*(c,k) = f(k)$$\n        And if we transform the coordinates back using our equation for $k$ in step 7, we have:\n        $$u(x,t) = f (x-\\frac{3t}{2})$$\n        \\item Now we use the initial condition that $$u(x,0) = sin(x) = f(x)$$\n        \\item Now we know that $$u(x,t) = sin(x-\\frac{3t}{2})$$\n        \\item And this also works if we plug it into the original equation.\n        \n\n    \\end{enumerate}\n    \n    \\newpage\n    \\item 2) Solve $3u_y + u_{xy} = 0$\n    \\begin{enumerate}\n        \\item Assuming differentiability, we can change the order of integration, so $3u_y + u_{yx} = 0$\n        \\item Let $v = u_y$. This gives $3v + v_x = 0$\n        \\item Rearranging gives:\n        $$\\frac{\\partial v}{\\partial x} = -3v$$\n        \\item separating and integrating:\n        $$\\frac{-1}{3}\\frac{\\partial v}{v} =\\partial x$$\n        $$\\frac{-1}{3}\\ln\\left| v\\right| = x + f^{***}(y)$$\n        $$\\ln\\left| v\\right| = -3x + f^{**}(y)$$\n        $$\\left| v\\right| = e^{-3x}f^*(y)$$\n        And assuming that $f^*$ can be positive or negative, we can drop the absolute value:\n        $$v = e^{-3x}f^*(y)$$\n        \\item Remembering that $v$ is not necessarily $u$, \n        $$\\frac{\\partial u}{\\partial y} = u_y = v = e^{-3x}f^*(y)$$\n        $$\\partial u = e^{-3x}f^*(y) \\partial y$$\n        $$u  = e^{-3x}f(y) + g(x)$$\n        \\item Which satisfies the original PDE, which can be verified by substitution.\n        \\item I suppose I haven't proven that this is the most general form of the equation, but I don't really know how to prove that. I know that this is linear, so any linear combination of this function will also work, but all linear combinations are already captured in $f$ and $g$. \n        \n    \\end{enumerate}\n    \\newpage\n    \\item 7) Solve the equation \n    $$yu_x + xu_y = 0; u(0,y) = e^{-y^2}$$\n    And find the region of the x-y plane in which the solution is uniquely determined.\n    \\begin{enumerate}\n        \\item Note that this is the directional derivative of u.  Thus, we change coordinates using the transform $T$:\n        $$T(u) = u^*(t,k) = u(x(t,k),y(t,k))$$\n        \\item Now take the partial derivative of both sides with respect to $t$ or $k$. Either should work. We'll arbitrarily choose $t$.\n        $$\\frac{\\partial u}{\\partial x}\\frac{\\partial x}{\\partial t} + \\frac{\\partial u}{\\partial y}\\frac{\\partial y}{\\partial t} = \\frac{\\partial u^*}{\\partial t}$$\n        \\item Now pattern match against the original equation to get the following PDEs:\n        $$\\frac{\\partial x}{\\partial t} = y;\\frac{\\partial y}{\\partial t} = x;\\frac{\\partial u^*}{\\partial t} = 0$$\n\n        \\item Now we will focus on the other two PDEs. 
Please forgive this poor mathematics, but I will integrate them and get the following:
        $$x = yt + f_1(k); y = xt + f_2(k); u^* = f(k)$$
        Which, using the third equation, now extends the equality from step 1 to:
        $$T(u) = u^*(t,k) = u(x(t,k),y(t,k)) = f(k)$$
        \item Let's pause for a moment to discuss the region of the x-y plane in which the solution is uniquely determined. The equation gets information along the line $x=0$. This means that any characteristic curve that does not pass through this line will not be uniquely determined. Given the shape of the characteristic curves determined in step 4, there is an ``X'' shape in the x-y plane formed by $y=x$ and $y=-x$ that determines the areas of determination. The areas of the x-y plane that are between these two lines are not determined, but the areas above and below both of these lines are determined. Thus, it makes a sort of bow-tie of non-determinedness, and an hourglass of determinedness. 
        \item Having discussed the determinedness, I will also mention that I'm not sure how to choose $f_1$ and $f_2$ such that they sweep the entire x-y plane. I seem to only be able to get half of it without making a non-injective function. Given this fact, along with the fact that we will only be able to define a solution in the hourglass region anyway, I will choose $f_1$ and $f_2$ to sweep that region, and I will ignore the bow-tie region. Thus, we can choose:
        $$ f_1(k) = 0; f_2(k) = k$$
        So
        $$x = yt; y = xt + k$$
        And
        $$t = \frac{x}{y}; k = y - \frac{x^2}{y}$$
        \item Rearranging and rewriting the transforms gives:
        $$T(u) = u^*(t,k) = u(x(t,k),y(t,k)) = u(yt, xt + k) = f(k)$$
        $$T(u^*) = u(x,y) = u^*(t(x,y),k(x,y)) = u^*(\frac{x}{y},y - \frac{x^2}{y}) = f(y - \frac{x^2}{y})$$
        \item And using the initial condition gives:
        $$u(0,y) = e^{-y^2} = f(y)$$
        So
        $$u(x,y) = e^{-\left(y - \frac{x^2}{y}\right)^2}$$
        This satisfies the initial condition, but it does not satisfy the original equation, so something went wrong. The slip is in the integration above: integrating $\frac{\partial x}{\partial t} = y$ as $x = yt + f_1(k)$ treats $y$ as a constant, but $y$ varies along the characteristic. The coupled system $\frac{\partial x}{\partial t} = y$, $\frac{\partial y}{\partial t} = x$ instead conserves the quantity $y^2 - x^2$, since $\frac{\partial}{\partial t}\left(y^2 - x^2\right) = 2yx - 2xy = 0$. Taking $k = y^2 - x^2$ as the characteristic coordinate gives $u(x,y) = f\left(y^2 - x^2\right)$, and the initial condition $u(0,y) = e^{-y^2}$ then yields
        $$u(x,y) = e^{-\left(y^2 - x^2\right)}$$
        which does satisfy the equation, since $yu_x + xu_y = \left(2xy - 2xy\right)e^{-\left(y^2 - x^2\right)} = 0$. As discussed above, the initial data only determines the solution in the hourglass region.
        
    \end{enumerate}

    \newpage
    \item 8)
    Solve the following:
    $$au_x + bu_y + cu = 0$$
    \begin{enumerate}
        \item First, move $cu$ to the other side:
        $$au_x + bu_y =  - cu$$
        \item Now note that this is the directional derivative of u.  Thus, we change coordinates using the transform $T$:
        $$T(u) = u^*(t,k) = u(x(t,k),y(t,k))$$
        \item Now take the partial derivative of both sides with respect to $t$ or $k$. Either should work. We'll arbitrarily choose $t$.
        $$\frac{\partial u}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial t} = \frac{\partial u^*}{\partial t}$$
        \item Now pattern match against the original equation to get the following PDEs:
        $$\frac{\partial x}{\partial t} = a;\frac{\partial y}{\partial t} = b;\frac{\partial u^*}{\partial t} = -cu^*$$
        \item Which, when integrated, give:
        $$x = at + f_1(k);y = bt + f_2(k);\frac{\partial u^*}{\partial t} = -cu^*$$
        where we have not yet solved the third equation.
        \item We can simplify the $f_1$ and $f_2$ because we don't need that much generality. 
Simply letting $f_1 = 0$ and $f_2 = k$ will suffice. Thus:\n        $$x = at ;y = bt + k;\\frac{\\partial u^*}{\\partial t} = -cu$$\n        \\item and we can solve for $t$ and $k$ in terms of $y$ and $x$ as a way to invert transform $T$:\n        $$t = \\frac{x}{a}; k = y - \\frac{bx}{a}$$\n        Which gives us: \n        $$T(u) = u^*(t,k) = u(at,bt + k)$$\n        $$T^{-1}(u^*) = u(x,y) = u^*(\\frac{x}{a},y - \\frac{bx}{a})$$\n        \\item Now we need to solve the PDE we left above:\n        $$\\frac{\\partial u^*}{\\partial t} = -cu$$\n    \\end{enumerate}\n    \\newpage\n    \\item 13) Use the coordinate method to solve the following equation:\n    $$u_x + 2u_y + (2x-y)u = 2x^2 + 3xy -2y^2$$\n    \\begin{enumerate}\n        \\item Using the methods from the previous problem, we can get:\n        $$x = t ;y = 2t + k;\\frac{\\partial u^*}{\\partial t} = 2x^2 + 3xy - 2y^2 - (2x-y)u$$\n        $$t = x; k = y - 2x$$\n        $$T(u) = u^*(t,k) = u(t,2t + k)$$\n        $$T^{-1}(u^*) = u(x,y) = u^*(x,y - 2x)$$\n        \\item and I am stuck again\n    \\end{enumerate}\n    \\newpage\n    \\item 1.5-5: Solve the following PDE and discuss behavior at boundary conditions given below.\n    $$\\forall x,y; u_x(x,y) + yu_y(x,y) = 0$$\n    Transform coordinates:\n    $$\\forall s,t; u_x^*(s,t) + y(s,t)u_y^*(s,t) = 0$$\n    which gives the following equations:\n    $$\\frac{\\partial x}{\\partial s} = 1 \\rightarrow x(s,t) = s + f_1(t)$$\n    $$\\frac{\\partial y}{\\partial s} = y(s,t) \\rightarrow \\frac{1}{y(s,t)}\\partial y = \\partial s \\rightarrow y(s,t) = f_2(t)e^s$$\n    $$\\frac{\\partial u^*}{\\partial s} = 0 \\rightarrow u^*(s,t) = f(t)$$\n    And we know that $x(s,t)$ can trace out the whole line without $f_1(t)$, so we will drop $f_1(t)$.  Also, we know that $y(s,t)$ traces out the whole plane if $f_2(t) = t$, so these three equations simplify to:\n    $$x(s,t) = s; y(s,t) = te^s; u^*(s,t)=f(t)$$\n    We can also invert the coordinate transforms and get:\n    $$s(x,y) = x; t(x,y) = \\frac{y}{e^x}$$\n    And so now we can invert the transform to get $u(x,y)$ from $u^*(s,t)$:\n    $$u(x,y) = f(\\frac{y}{e^x})$$\n    Now we consider the boundary condition $u(x,0) = x$\n    $$u(x,0) = f(\\frac{0}{e^x}) = f(0) \\neq x$$\n    We have found a contradiction because $f(0)$ is constant, and $x$ is not.\n    Now we consider the boundary condition $u(x,0) = 1$\n    $$u(x,0) = f(\\frac{0}{e^x}) = f(0) = 1$$\n    Thus, we have found a requirement that $f(0) = 1$, but there are no other requirements on $f$, thus, there are still infinitely many possibilities for the function. The moral of this problem is that we're specifying our boundary along our characteristics instead of against them.\n    \\newpage\n    \\item 1.5-6: Solve the following PDE\n    $$\\forall x,y; u_x(x,y) + 2xy^2u_y(x,y) = 0$$\n    First, divide the whole equation by $x$. This highlights a problem at $x = 0$\n    $$\\forall x,y; \\frac{1}{x}u_x(x,y) + 2y^2u_y(x,y) = 0$$\n    Now we transform coordinates:\n    $$\\forall s,t; \\frac{1}{x(s,t)}u_x^*(s,t) + 2y(s,t)^2u_y^*(s,t) = 0$$\n    And we have three smaller PDEs and their solutions:\n    $$\\frac{\\partial x}{\\partial s} = \\frac{1}{x(s,t)} \\rightarrow x(s,t)^2 = 2s + f_1(t) $$\n    $$\\frac{\\partial y}{\\partial s} = 2y(s,t)^2 \\rightarrow y(s,t) = \\frac{-1}{2s + f_2(t)}$$\n    $$\\frac{\\partial u^*}{\\partial s} = 0 \\rightarrow u^*(s,t) = F(t)$$\n    The next issue is how to choose $f_1$ and $f_2$. 
If we isolate the $s$ in the $x$ equation, we can substitute it in for the $y$ equation to get:\n    $$y(s,t) = \\frac{-1}{x(s,t)^2 + f(t)}$$\n    Which if we simply let $f$ be the negative identity function the characteristics still trace out the whole plane, and we get:\n    $$y(s,t) = \\frac{-1}{x(s,t)^2 - t} \\rightarrow t = \\frac{1}{y}+x^2$$\n    Which means if we invert the transform on $u^*$, we get:\n    $$u(x,y) = F(\\frac{1}{y}+x^2)$$\n    Which works if we substitute it back into the original equation.\n    \n\\end{itemize}\n\\newpage\n\\section{The Wave Equation}\n\nFor the following wave equation:\n\n$$u_{tt} = c^2u_{xx}$$\n\nwith the initial conditions:\n\n$$u(x,0) = \\phi(x)$$\n$$u_t(x,0) = \\psi(x)$$\n\nis\n$$u(x,t) = \\frac{1}{2}\\left(\\phi(x + ct) + \\phi(x-ct)\\right) + \\frac{1}{2c} \\int_{x-ct}^{x+ct} \\psi(s) \\,ds $$\n\n\n\\subsection{Example Problems}\n\\begin{itemize}\n\n    \\item 2.1-1:\n    Solve $$u_{tt} = c^2u_{xx} ; u(x,0) = e^x ; u_t(x,0) = \\sin{x}$$\n    $$u(x,t) = \\frac{1}{2}\\left(e^{x + ct} + e^{x - ct}\\right) + \\frac{1}{2c}\\int_{x-ct}^{x+ct} \\sin{s} \\,ds$$\n$$u(x,t) =  \\frac{1}{2}\\left(e^{x + ct} + e^{x - ct}\\right) + \\frac{1}{2c} [\\cos{(x-ct)}-\\cos{(x+ct)}] $$\n    \\item 3.1-2:\n    Solve $$u_{tt} = c^2u_{xx} ; u(x,0) = \\log{1 + x^2} ; u_t(x,0) = 4 + x$$\n    $$u(x,t) = \\frac{1}{2}\\left[\\log{\\left(1 + (x + ct)^2\\right)}+\\log{\\left(1 + (x - ct)^2\\right)}\\right] + \\frac{1}{2c}\\int_{x-ct}^{x+ct} (4+s) \\,ds$$    \n    $$u(x,t) = \\frac{1}{2}\\left[\\log{\\left(1 + (x + ct)^2\\right)}+\\log{\\left(1 + (x - ct)^2\\right)}\\right] + 4t + xt$$   \n\\end{itemize}\n\n\\newpage\n\\section{Fourier Series and Boundary Conditions}\n\nI think one of the most important things you can know that will help you when doing fourier series is that fourier series converge to the average of the limit on the left and right. That might seem innocuous, but it will help. Here's how.  We sort of learned two different types or uses of fourier series:\n\n\\begin{itemize}\n    \\item We learned that that the boundary values determine whether you want to use sines or cosines in your expansion.\n    \\item And we also separately learned that when we're computing fourier series to approximate a function, we often extend the function and find its period etc etc to find the fourier coefficients.\n\\end{itemize}\n  But what I noticed with the help of the above fact is how to unify these two views of the fourier series. What we can really do is use the boundary conditions to choose how to extend the function, and then that's the function we want to approximate. This is helpful because it unifies the two seemingly separate issues of using a fourier series to approximate a function vs choosing a certain type of fourier series to meet the requirements of a boundary condition. 
Just extend your function to meet your boundary condition, then the correct choice of sines and cosines will fall out.
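As a quick symbolic check of the first-order solutions above (Problem 1 and the corrected Problem 7), here is a minimal \texttt{sympy} sketch; it is illustrative and was not part of the original assignment.
\begin{lstlisting}
import sympy as sp

x, t, y = sp.symbols('x t y')

# Problem 1: u = sin(x - 3t/2) should satisfy 2u_t + 3u_x = 0.
u1 = sp.sin(x - sp.Rational(3, 2) * t)
print(sp.simplify(2 * sp.diff(u1, t) + 3 * sp.diff(u1, x)))  # 0

# Problem 7 (corrected): u = exp(-(y**2 - x**2)) should satisfy
# y*u_x + x*u_y = 0 and match the initial condition u(0, y) = exp(-y**2).
u7 = sp.exp(-(y**2 - x**2))
print(sp.simplify(y * sp.diff(u7, x) + x * sp.diff(u7, y)))  # 0
print(u7.subs(x, 0))                                         # exp(-y**2)
\end{lstlisting}

\end{document}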
\documentclass[12pt,letterpaper]{article}
\usepackage{amsmath}
\usepackage[colorlinks]{hyperref} 
\usepackage[title,toc,page]{appendix}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{listings}
\usepackage{titlesec}
\newcommand{\sectionbreak}{\clearpage}
\begin{document}

\title{Oh, What a Bot!}
\author{HackRVA.org}
\date{}
\maketitle



\section{Introduction}
This article describes the development of a control system model for a reaction wheel pendulum, shown in Figure \ref{sketchOfPendulum}.  A reaction wheel pendulum is a variation of a simple pendulum, balanced upright, with rotating wheels at the top in place of the pendulum bob. Each wheel is driven by a separate motor. The pendulum is inherently unstable in the upright position - gravity will pull the pendulum over if it is not perfectly balanced.  Thus, a control system is needed to accelerate the reaction wheels, applying torques as needed to the pendulum so as to keep it upright. The reaction wheels are mounted at right angles to each other, providing the ability to correct the alignment of the pendulum in any direction.

The pendulum physical structure includes a rod, an accelerometer, and two flywheels.  Each flywheel comprises a motor mount, a DC brushed motor, and a reaction wheel.  The flywheel assemblies are mounted at the top of the rod at right angles to each other and to the rod.  The DC motors are wired to a motor controller and an Arduino.  The Arduino is also wired to the accelerometer, which is located at the top of the rod.

The basics of the pendulum control system are outlined in Figure \ref{basicControl}.  The system can be controlled by adjusting the voltage.  An additional input F is included to describe random environmental forces that may affect the pendulum.  The random forces may result in the desired angle of the pendulum not being achieved.  More accurate control will result from feeding the resulting pendulum angle back into the control system so that the input voltage is continuously adjusted until the desired pendulum angle is achieved.  This is shown in Figure \ref{basicControl}.

In this case, however, the goal is for the pendulum to remain vertical with as little wobble as possible. The real control comes into play when correcting for the stray forces on the system.  Hence, the control system can be reformulated as shown in Figure \ref{fig.regulatorControlSystem}, commonly known as a Regulator Problem.  This structure emphasizes the environmental forces as the primary input into the system and that the feedback is primarily intended to compensate for those forces.

\includegraphics[width=\textwidth]{images/pencil.png} 
    \captionof{figure}{Sketch of the Pendulum}  
    \label{sketchOfPendulum}
   
\includegraphics[width=\textwidth]{images/basicControl.eps}
    \captionof{figure}{Simplified Pendulum Model}
    \label{basicControl}

\includegraphics[width=\textwidth]{images/regulator.eps}
    \captionof{figure}{Regulator Model}
    \label{fig.regulatorControlSystem}

\section{Pendulum Dynamics}

More detail can be added to the regulator control system description by building a mathematical model of the pendulum. Figure \ref{modelOfPendulum} shows a simplified sketch of the inverted pendulum. Reference \cite{reactionWheel} contains a summary of the model dynamics for the balancing pencil.  
The model dynamics can be summarized as
%
\begin{equation}
    m g l \sin \theta + F - I_{c}\ddot{\theta}  = I_{f}\ddot{\phi}
\end{equation}
%
where: \\
$ m =$ total mass of inverted pendulum (kg)\\
$g =$ gravitational constant  (m s\textsuperscript{-2}) \\
$l =$ distance from the pivot point to the center of pendulum mass (m)\\
$\theta =$ pendulum angle (rad) \\
$F =$ environmental forces on the pendulum\\
$I_{c} =$ moment of inertia of the pendulum (kg m\textsuperscript{2}) \\
$I_{f} =$ moment of inertia of the reaction wheel (kg m\textsuperscript{2}) \\
$\phi =$ flywheel angle (rad) \\

Other references may have alternate formulations using different definitions for $\theta$ and $\phi$.\footnote{Reference \cite{monograph} has a slightly different formulation for the model dynamics; the sign of the gravitational term is positive rather than negative.  This difference is caused by Reference \cite{monograph} defining the angle from the downward, resting position while Reference \cite{reactionWheel} defines the angle from the vertical.  The sines of angles differing by $\pi$ have opposite signs.}\\


After linearizing the equation by noting that $\sin{\theta} \approx \theta$ for small angles, the equation of motion for the pendulum becomes
%
\begin{equation}
    m \,g \,l \theta + F - I_{c}\ddot{\theta}  = I_{f}\ddot{\phi}
\end{equation}
%
The reaction wheel can be continuously revolving, making the reaction wheel angle, $\phi$, less useful as a control parameter. In its place we will use $\omega = \dot{\phi}$.
%
\begin{equation}
    m \,g \,l \theta + F - I_{c}\ddot{\theta}  = I_{f}\dot{\omega}\label{bigOne}
\end{equation}
%
For ease of use in later analysis, it is beneficial to split \eqref{bigOne} into two parts:
%
\begin{equation}
    m \,g \,l \theta + u(t)   = I_{c}\ddot{\theta}\label{eq.pendulum}
\end{equation}
%
\begin{equation}
    u(t) = F - I_{f}\dot{\omega}\label{oddOne}
\end{equation}
%
where $u(t)$ is the net torque applied to the pendulum.  The sign convention is important; a positive torque $u(t)$ will accelerate the pendulum in the direction of the positive pendulum angle.  However, the sign convention in \eqref{oddOne} indicates that a positive acceleration of the rotor will result in a \textit{negative} acceleration of the pendulum.

Note that if the pendulum is not vertical (i.e. $\theta \neq 0$), gravity will begin to pull the pendulum over.  Correcting this requires accelerating the flywheel to apply torque to the pendulum.  A flywheel turning at a constant velocity applies no torque to the pendulum and will not alter the movement of the pendulum.



\includegraphics[width=\textwidth]{images/scan1.png}
    \captionof{figure}{Simplified Pendulum Model}
    \label{modelOfPendulum}


\section{Motor and Flywheel Dynamics}

Reference \cite{reactionWheel} models the motor as 
%
\begin{equation}
    I_{f} \, \ddot{\phi} \, \eta_{m} \, \eta_{g}  = K_{t} \, i
\end{equation}
%
where: \\
$\eta_{m} =$ motor efficiency \\
$\eta_{g} =$ gear efficiency \\
$K_{t} =$ motor torque constant (N m A\textsuperscript{-1}) \\
$i =$ motor current \\

This formulation appears to have problems, as a reduction in the motor and gear efficiencies results in a greater flywheel acceleration for a given motor current.  
Similarly, a gear ratio above unity would decrease the flywheel acceleration for a specified motor current/torque, which is non-physical.  The equation can be altered to correct these issues, resulting in the following:
%
\begin{equation}
	I_{f} \, \ddot{\phi} = \, \eta_{m} \, \eta_{g} K_{t} \, i
\end{equation}
%
However, this format may not be the easiest to implement, as the motor and gear efficiencies are not known.  Reference \cite{monograph} takes a different approach,
%
\begin{equation}
    I_{f} \, \ddot{\phi} = T_{shaft}
\end{equation}
%
Where the earlier format dealt with motor/gear efficiencies using $\eta_{m}$ and $\eta_{g}$, Reference \cite{monograph} develops a correlation for friction in terms of the torque required to maintain a given $\dot{\phi}$.  Here the torque will be reformulated as 
%
\begin{equation}
    T_{shaft} = T - T_{friction}
\end{equation}
%
\begin{equation}
    T_{friction} = A \, sgn(\dot{\phi} ) + B \dot{\phi} 
\end{equation}
%
\begin{equation}
    T = K_{t} \, i
\end{equation}
%
where \\
$T =$ the total motor torque \\
$T_{shaft} =$ torque available to the flywheel \\
$T_{friction} =$ torque lost to motor and gear friction \\
$A =$ coulomb friction coefficient \\
$B =$ rotational friction coefficient \\

Combining equations and adding the reduction gear ratio,
%
\begin{equation}
    I_{f} \, \ddot{\phi}  = K_{t} \, i - A \, sgn(\dot{\phi} ) - B \dot{\phi}
\end{equation}
%
Again, using the notation $\omega = \dot{\phi}$,
%
\begin{equation}
    I_{f} \, \dot{\omega}  = K_{t} \, i - A \, sgn(\omega) - B \omega \label{wheel}
\end{equation}
%

As will be seen, this equation is easier to implement because $K_{t}$, $A$, and $B$ can be measured experimentally (see Appendices \ref{appendix:measure} and \ref{appendix:friction}).


\section{Motor Control}
The previous section determined the relationship between the motor current and the acceleration of the flywheel.  However, the motor will not be controlled via motor current.  Instead, a motor controller will be used to control the motor by adjusting the motor voltage via a PWM signal.  The equivalent, average voltage will then regulate the motor.  Reference \cite{monograph} gives the relationship between motor current and voltage as
%
\begin{equation}
    l \, \frac{di}{dt} + r \,i = v - K_{v} \, \omega
\end{equation}
%
where: \\
$l =$ armature inductance (henry) \\
$r =$ armature resistance (ohm) \\
$v =$ voltage supplied to the motor (volt) \\
$K_{v} =$ motor voltage constant (V  rad\textsuperscript{-1} sec) \\
$\omega =$ motor rotation speed (rad sec\textsuperscript{-1}) \\

Normally the inductance of the motor is much lower than the resistance, such that $l/r \sim 0.001$.  In such cases it is acceptable to ignore the time dependence of the current and write
%
\begin{equation}
    r \,i = v - K_{v} \, \omega \label{motor}
\end{equation}
%
Note that, provided mks units are used, $K_{v}$ and $K_{t}$ will have the same magnitude.  This can be seen by equating the 
This can be seen by equating the \nmechanical and electrical power\n%\n\\begin{equation}\n    T \\omega = v i\n\\end{equation}\n%\nwhere $T =$ is the motor torque.\nRearranging,\n%\n\\begin{equation}\n    \\frac{T}{i} = \\frac{v}{\\omega}\n\\end{equation}\n%\nor\n%\n\\begin{equation}\n    K_{t} = K_{v} \n\\end{equation}\n%\n\nBack to the motor equation, we can solve \\eqref{motor} for i:\n%\n\\begin{equation}\n    i = \\frac{v}{r} - \\frac{K_{v} \\, \\omega}{r}\n\\end{equation}\n%\nCombining with \\eqref{wheel}\n%\n\\begin{equation}\n    I_{f} \\, \\dot{\\omega}  =  K_{t} \\, \\left( \\frac{v}{r} - \\frac{ K_{v} \\, \\omega}{R} \\right) - A \\, sgn(\\omega ) - B \\omega\n\\end{equation}\nRearranging,\n\n\\begin{equation}\n    I_{f} \\, \\dot{\\omega} + \\left( B+\\frac{K_{t} K_{v}}{r} \\right) \\omega +A \\, sgn(\\omega)= \\left(\\frac{K_{t}} {r}\\right)v \n\\end{equation}\n\nRecognizing that $K_{t} = K_{v}$ and replacing them with $K$,\n\\begin{equation}\n    I_{f} \\, \\dot{\\omega} + \\left( B+\\frac{K^2}{r} \\right) \\omega +A \\, sgn(\\omega)= \\left(\\frac{K} {r}\\right)v\\label{eq.motorFinal} \n\\end{equation}\n\nNot all of the voltage results in acceleration of the reaction wheel.  As the reaction wheel velocity increases, an\nincreasing amount of the voltage (and the resulting torque) is consumed by friction and the back EMF.\n\nWe will assume that the control function can adjust the demand voltage by adding or subtracting a constant, depending on the current spin direction of the reaction wheel.  This means that $-A\\,sgn(\\dot{\\phi})$ is folded\ninto $v$ for the purposes of this analysis.  \n\n\n\n\n\n\n\n\n\n\\section{Modeling in the Frequency Domain}\n\n\nThe primary objectives of a control system for the pendulum should be to stabilize the upright position of the pendulum and recovering from external forces on the pendulum.  In addition, the control system should be design to deal with a number\nof physical limitations of the physical components of the pendulum.  Specifically,\n\\begin{itemize}\n    \\item the motor voltage is limited to maximum value,\n    \\item The rotor velocity has a practical upper limit.\n\\end{itemize}\n\n\nThe stability of the system shown in Figure  \\ref{fig.regulatorControlSystem} can be evaluated by examining the\nopen loop transfer function\n\\begin{equation}\n    T(s) = P(s) C(s) M(s) R(s)\n\\end{equation}\nwhere $P(s)$ is the pendulum transfer function, $C(s)$ is the control function, $M(s)$ is the motor transfer function, and $R(s)$ is the rotor transfer function.  
The control system in terms of these transfer functions is shown in Figure \ref{transferFunction}.\\


The controller, at this point in the analysis, is assumed to be a pass-through, or
%
\begin{equation}
v = \theta
\end{equation}
Hence, the controller transfer function is
\begin{equation}
C(s) = \frac{v(s)}{\theta(s)} = 1
    \label{EQ.simpleController}
\end{equation}

The pendulum transfer function can be derived from \eqref{eq.pendulum}
%
\begin{equation}
    m \,g \,l \theta(s) + u(s) = I_{c}\theta(s) s^{2}
\end{equation}

The rotor transfer function can be derived by noting that the torque generated by the flywheel is
\begin{equation}
    u(s) = F(s) - I_{f}\omega(s) s
\end{equation}
%
Ignoring the environmental forces, the pendulum and rotor transfer functions are 
\begin{equation}
    P(s) = \frac{\theta(s)}{u(s)}= \frac{1}{I_{c} s^{2} - m g l }
    \label{franco}
\end{equation}
%
\begin{equation}
    R(s) = \frac{u(s)}{\omega(s)} = -I_{f} s
    \label{groucho}
\end{equation}\\

Note that the transfer function for the rotor in \eqref{groucho} is negative.  Combined with the summing junction shown in Figure \ref{transferFunction}, this produces a negative feedback control system.\\
 


From \eqref{eq.motorFinal} the transfer function for the motor is:
\begin{equation}
    I_{f} \, \omega(s) s + \left( B+\frac{K^2}{r} \right) \omega(s) = \left(\frac{K} {r}\right)v(s)
\end{equation}
\begin{equation}
    M(s) = \frac{\omega(s)}{v(s)} =  \frac{\left(\frac{K} {r}\right)}{I_{f} s + (\frac{K^2}{r}+B)}
    \label{rondo}
\end{equation} 

Using \eqref{EQ.simpleController}, \eqref{franco}, \eqref{groucho}, and \eqref{rondo} results in 
%
\begin{equation}
	T(s) =\frac{-(\frac{K}{I_{c} r})s}
	{(s^2-\frac{m g l}{I_{c}})(s+(\frac{K^2}{r I_{f}}+\frac{B}{I_{f}}))}
\end{equation}

Substituting in values from the appendices, the transfer function for the pendulum is
\begin{equation}
	T(s) =\frac{-0.2464 s}{s^3 + 4.73 s^2 -34.8 s -164.6}
\end{equation}

The denominator of the transfer function is the characteristic polynomial which, when factored, is
$(s-5.8993) (s+5.8993) (s+4.7303)$.

Using the root locus technique and the open loop transfer function, Figure \ref{rootLocus} shows the behavior of the closed loop poles as a function of feedback gain.  Note that one pole is always in the right half of the plane, indicating that the system is unstable regardless of the magnitude of the gain.

One possible method of stabilizing the system would be to use both proportional and derivative, or PD, feedback.  The control function would be
\begin{equation}
	C(s) = 1 + p s
\end{equation}
Rearranging,
\begin{equation}
	C(s) = p\left(s+\frac{1}{p}\right)
\end{equation}

indicating a zero at $s = -\frac{1}{p}$ and a gain of $p$.
The results are sensitive to the placement of the zero.  Placing the zero to the right of the pole at -5.8993 eliminates any oscillation, but does not remediate the pole in the right half of the plane.  Placing the zero to the left of the pole at -5.8993 results in a much faster response, but can result in oscillations (Figure \ref{rootLocusPD}).  Again, the pole in the right half of the plane is not remediated.\\
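The factorization of the characteristic polynomial quoted above is easy to confirm numerically. A minimal sketch, using only the coefficients given above:
\begin{lstlisting}
import numpy as np

# Denominator of the open loop transfer function T(s).
den = [1, 4.73, -34.8, -164.6]
print(np.roots(den))
# approximately [ 5.8993, -5.8993, -4.7303 ];
# the positive root confirms the open loop instability.
\end{lstlisting}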
\n\nOne possibility for resolving the issue of the zero at the origin would be to implement a PID controller.\n\\begin{equation}\n\tC(s) = 1 + \\frac{q}{s} + p s\n\\end{equation}\nRearranging,\n\\begin{equation}\n\tC(s) = \\frac{ps^2+s+q}{s}\n\\end{equation}\n\nThis controller would have two zeros that can be placed as required and a pole that would cancel\nthe zero at the origin.  If the zeros are placed to the left of the pole at -4.7303, then the system may stabilize.\nHowever, in the real world the original zero may not be exactly at the origin and the poles in the left half of the plane may not be exactly where they are expected (due to nonlinearities, etc.),\nleading to a less-than-robust control system. \\\\\n\nFinally, it is possible to include feedback from the other monitored and controlled variable, the rotor velocity.\nUnfortunately, to continue to evaluate the system using a root locus analysis, the feedback from\nthe rotor velocity will have to be directly factored into the transfer function.  \\\\\n\nHere we modify \\eqref{motor} to include the feedback from the rotor velocity:\n\\begin{equation}\n    r \\,i = v +K_{\\omega}\\omega - K_{v} \\, \\omega \\label{motorUpdated}\n\\end{equation}\n\nwhere $K_{\\omega}$ is the rotor velocity feedback gain.\\\\\n\nSolving for $i$,\n\\begin{equation}\n    i = \\frac{v}{r} - \\frac{(K-K_{\\omega}) \\, \\omega}{r}\n\\end{equation}\n\nand\n\\begin{equation}\n    M(s) = \\frac{\\omega(s)}{v(s)} =  \\frac{\\left(\\frac{K}{I_{f}r}\\right)}{s + (\\frac{1}{I_{f}})(\\frac{K^2}{r}-\\frac{K K_{\\omega}}{r}+B)}\n    \\label{rondoMucho}\n\\end{equation}\n\nUsing \\eqref{EQ.simpleController}, \\eqref{franco}, \\eqref{groucho}, and \\eqref{rondoMucho}, the open loop transfer function is now\n\\begin{equation}\n\tT(s) =\\frac{-(\\frac{K}{I_{c}r})s}\n\t{s^3 + (\\frac{B r-K K_{\\omega}+K^2}{I_{f}r})s^2 - (\\frac{m g l}{I_{c}})s - (\\frac{m g l}{I_{c}})(\\frac{Br-K K_{\\omega}+K^2}{I_{f}r})}\n\\end{equation}\n\nFigure \\ref{rootLocusFull} shows the root locus with $p = 1/4.5$ and $K_{\\omega} = 0.075$.  With a system gain of 266 the controlling poles are at -11.5205, -1.5353, and -0.999.\\\\\n\n\\includegraphics[width=\\textwidth]{images/transferFunction.eps}\n    \\captionof{figure}{Control System Modeled as Transfer Functions}\n    \\label{transferFunction}\n\n\\includegraphics[width=\\textwidth]{images/rootLocus.eps}\n    \\captionof{figure}{Root Locus Plot With Proportional Feedback}\n    \\label{rootLocus}\n\n\\includegraphics[width=\\textwidth]{images/rootLocusPD.eps}\n    \\captionof{figure}{Root Locus Plot With PD Feedback}\n    \\label{rootLocusPD}\n\n\\includegraphics[width=\\textwidth]{images/rootLocusFull.eps}\n    \\captionof{figure}{Root Locus Plot With PD and Rotor Velocity Feedback}\n    \\label{rootLocusFull}\n\n\\begin{thebibliography}{99}\n\\bibitem{monograph} Block, D. J., Astrom, K. J., and Spong, M. W. (2007).  \\emph{The Reaction Wheel Pendulum}.  Morgan \\& Claypool Publishers.\n\\bibitem{reactionWheel} \\href{http://www.diva-portal.se/smash/get/diva2:916271/FULLTEXT01.pdf}{Ramm, A., and Sjostedt, M. (2015).  Reaction Wheel Balanced Robot: Design and Sensor Analysis of Inverted Pendulum Robot.  Technical report, KTH Royal Institute of Technology.}\n\\bibitem{balanceBot} \\href{https://kth.diva-portal.org/smash/get/diva2:916184/FULLTEXT01.pdf}{Hellman, H. and Sunnerman, H. (2015).  Two-Wheeled Self-Balancing Robot. 
Technical report, KTH Royal Institute of Technology.}\n\\bibitem{twoWheeled} \\href{http://geoffrey.chauveau.free.fr/pendulum/reports/final_report.pdf}{Chauveau, G., Chazal, D., Nakayama, D., Olsen, E., and Palm, S. (2005).  Controlling the Reaction Wheel Pendulum.}\n\\end{thebibliography}\n\n\\begin{appendices}\n\n\\section{Motor and Gear Set Data}\n\nKey to much of the work so far are the constants associated with the motor and gear set, $K_{t}$, $K_{v}$, $r$, and $l$.\nHere we are using a \\href{https://www.pololu.com/product/3237}{Pololu 4.4:1 metal gear motor}.\n\nPololu gives the following parameters for the 4.4:1 gear motor: \\\\\nweight = 95 g \\\\\ngear ratio = 4.4:1 \\\\\nNo-load speed @ 12V = 1700 rpm (178 rad/sec)\\\\\nNo-load current @ 12V = 0.2 A \\\\\nStall current @ 12V = 2.1 A \\\\\nStall torque @ 12V = 11 oz in (0.07768 N m)\\\\\nEncoder frequency = 211.2 counts/rev of gearbox shaft\\\\\n\nA number of the motor constants can be calculated from the published Pololu parameters.  However, it\nshould be noted that, for inexpensive motors, the actual motor parameters may vary from the\npublished data.  In any event, the motor constants will be calculated here and then later compared\nwith measured data.\n\n$K_{t}$ can be calculated directly from the stall current, gear ratio, and torque: $K_{t}$ = 0.07768 N m / 2.1 A =\n0.0370 N m/A.  $K_{v}$ can be set equal to $K_{t}$, or 0.0370 V s.\n\nThe motor winding resistance can be determined from the rated voltage and the stall current, as no back-EMF is occurring at stall\nconditions.  Thus, $r$ = 12 V / 2.1 A = 5.7 $\\Omega$.\n\n\\section{Pendulum Constants}\nA multitude of other values were calculated in MATLAB.  \\\\\n\n\\begin{tabular}{l l}\n$m$ & 0.517327 kg \\\\\n$l$ & 0.319038 m \\\\\n$I_{c}$ & 0.046508 kg m\\textsuperscript{2} \\\\\n$I_{f}$ & 0.000164 kg m\\textsuperscript{2} \\\\\n$g$ & 9.81 m s\\textsuperscript{-2} \\\\\n\\end{tabular}\n\n\\section{Measuring Motor Constants}\n\\label{appendix:measure}\nMeasurements were made to determine a number of motor constants, including $r$, $l$, and $K_{v}$.\n\n$r$ was measured using an EXTEC LCR meter, on the $R_{DC}$ setting, as 5.82 $\\Omega$.  $l$ was\nmeasured with the same instrument as 5.994 mH at 1 kHz.  This gives a motor electrical time constant $l/r$ of 0.001 sec.  This is the time taken by the current in the motor to go from rest to 63\\% of the final steady-state current.\n\n$K_{v}$ was measured by driving the motor with a constant voltage and\nvarying loads while measuring $v$, $i$, and $\\omega$.\n\nThe motor was driven with a 12 V power supply.  The current was measured by the power supply.  Measuring current in this manner can have larger uncertainties, but comparing with measurements from a $\\mu$Current DVM adapter indicated that the power supply current measurement was reasonable.\n\n$\\omega$ was measured using the motor's encoder and an Arduino.  The encoder measures the motor shaft speed, not the gear-set output speed, so a conversion factor of 211.2 counts per revolution of the gearbox shaft was used to convert the encoder pulse rate to gear-set rotation speed.\n\nThe following table shows the results for three different loads.  The first load was just rotor friction.  The second load added the Pololu hub adapter.  
The third load included a plastic wheel attached to a propeller to increase the drag.\nThe resulting measured values are as follows: \\\\\n\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nCase & $v$ & $i$ & Speed & $\\omega$ & $\\omega$ \\\\\n      & (V) &  (A) & (counts/sec) & (RPM) & (radians/sec) \\\\\n\\hline\n1    & 11.95 & 0.0675 & 5,942.2   & 1,688.1   & 176.74 \\\\\n2    & 11.95 & 0.075   & 5,923.58 & 1,682.83 & 176.19 \\\\\n3    & 11.95 & 0.153   & 5,574.88 & 1,583.77  & 165.82 \\\\\n\\hline\n\\end{tabular}\n\nCalculating $K_{v}$ can be approached in two ways.  The first is to solve \\eqref{motor} using the measured\nvalue of $r$.  The resulting values of $K_{v}$, one for each case, are:\n\n\\begin{tabular}{|c|c|}\n\\hline\nCase & $K_{v}$  \\\\\n         & (V sec) \\\\\n\\hline\n1    & 0.0654  \\\\\n2    & 0.0653  \\\\\n3    & 0.0667  \\\\\n\\hline\n\\end{tabular}\n\nA second approach is to use two cases in \\eqref{motor} and solve simultaneously for $r$ and $K_{v}$.\nCases 1 and 3 were chosen because they show the largest difference in $i$, to help ensure that the two\nequations are independent.  This results in values of $r$ = 8.24 $\\Omega$ and $K_{v}$ = 0.0645 V sec.\n\nThe value of $K_{v}$ is fairly consistent with the values determined in the first approach, but the value\nof $r$ appears to be high.  This may be caused by the limited change in $i$ in the data.  Use of a larger\nload torque might result in a larger value of $i$ and a better estimate of $r$.  However, for the purposes\nof this work the following values will be used:\n\n\\begin{tabular}{|c|c|c|}\n\\hline\n$r$                   & $K_{v}$  & $K_{t}$ \\\\\n($\\Omega$)   & (V sec)   & (N m/A) \\\\\n\\hline\n5.82    & 0.0667  & 0.0667  \\\\\n\\hline\n\\end{tabular}\n\n\\section{Measuring Motor Friction Constants}\n\\label{appendix:friction}\nFriction in the motor was evaluated by measuring the supply current as a function of motor speed.\nThe friction torque is related to the motor current as $T = K_{t} \\, i$.  The measurement was performed\nby using a power supply with a variable current limit.  For various currents the speed of the motor\nwas measured by monitoring the motor encoder output with an Arduino.  The current was measured\nboth at the power supply as well as with a $\\mu$Current DVM adapter.\n\nThe following data was collected:\n\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nSupply Voltage & Supply Current & $\\mu$Current & Speed & Speed & Speed\\\\\n(V) & (A) & (A) & (pulses/sec) & (rpm) & (radians/sec)\\\\\n\\hline\n10.90 & 0.068 & 0.0676 & 5417 & 1538.9 & 161.16\\\\\n10.04 & 0.063 & 0.0627 & 5001 & 1420.7 & 148.78\\\\\n9.09 & 0.058 & 0.0596 & 4520 & 1284.1 & 134.47\\\\\n7.93 & 0.056 & 0.0562 & 3937 & 1118.5 & 117.13\\\\\n6.99 & 0.055 & 0.0550 & 3465 & 984.4 & 103.08\\\\\n6.00 & 0.051 & 0.0521 & 2953 & 838.9 & 87.85\\\\\n5.00 & 0.047 & 0.0500 & 2437 & 692.3 & 72.50\\\\\n4.04 & 0.046 & 0.0483 & 1960 & 556.8 & 58.31\\\\\n3.57 & 0.046 & 0.0486 & 1701 & 483.2 & 50.60\\\\\n3.02 & 0.047 & 0.0481 & 1421 & 403.7 & 42.27\\\\\n2.52 & 0.045 & 0.0459 & 1169 & 332.1 & 34.78\\\\\n2.02 & 0.042 & 0.0437 & 910 & 258.5 & 27.07\\\\\n1.51 & 0.039 & 0.0409 & 662 & 188.1 & 19.69\\\\\n1.02 & 0.034 & 0.0366 & 414 & 117.6 & 12.32\\\\\n0.47 & 0.031 & 0.0328 & 148 & 42.0 & 4.40\\\\\n\\hline\n\\end{tabular}\n\nThe encoder speed, in pulses/sec, was converted to RPM using the encoder frequency of 211.2\ncounts/revolution.\n\nThe Excel regression function was used to fit the $\\mu$Current current measurement as a function\nof the speed in radians/sec.  
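That fit is easy to reproduce outside of Excel.  The sketch below (Python with numpy is assumed; the arrays are the $\\mu$Current and radians/sec columns of the table above) recovers essentially the same coefficients:\n%\n\\begin{lstlisting}[language=Python]\n# Linear fit of motor current against speed; slope and intercept are\n# converted to torques with K_t.\nimport numpy as np\n\ni = np.array([0.0676, 0.0627, 0.0596, 0.0562, 0.0550, 0.0521, 0.0500,\n              0.0483, 0.0486, 0.0481, 0.0459, 0.0437, 0.0409, 0.0366, 0.0328])\nw = np.array([161.16, 148.78, 134.47, 117.13, 103.08, 87.85, 72.50,\n              58.31, 50.60, 42.27, 34.78, 27.07, 19.69, 12.32, 4.40])\n\nK_t = 0.0667                       # N m/A, from the previous appendix\nslope, intercept = np.polyfit(w, i, 1)\nprint(K_t * intercept)             # Coulomb friction A, ~2.5e-3 N m\nprint(K_t * slope)                 # viscous coefficient B, ~1.2e-5 N m s\n\\end{lstlisting}\n%\n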
The regression intercept represents the fixed Coulomb friction and the\nslope represents the viscous friction.  The regression results were 2.47E-3 N m and 1.20E-5 N m/(radian/sec),\nafter converting from current using $K_{t}$.  The Coulomb friction represents approximately 3\\% of the\nmotor stall torque.  At the motor's no-load speed the viscous friction would double the total friction to\napproximately 6\\% of the motor stall torque.  The viscous friction coefficient is significantly larger than\nthe value of 9.06E-7 N m/(radian/sec) measured in Reference \\cite{twoWheeled}.  This may be due to\nthe gearbox associated with the motor used in the current work.  In any event, the friction will be included\nin the system modeling.\n\n\\section{Arduino Code for Measuring RPM}\n\nThe following is an Arduino sketch used to monitor the speed of a DC motor.  The sketch is designed\nto work with either an Arduino Uno or an Arduino Mega.  The motor wiring described in the comments\nrefers to the wiring of the Pololu 25D 4.4:1 metal gear motor.  The output is in position change\nper second, where the motor has 211.2 counts/rev of the gearbox shaft. \\\\\n\n\\lstinputlisting{../motor_testing/rpm/rpm.ino}\n\n\\section{Reaction Wheel Design Options}\n\nThe initial reaction wheel design was modeled after XXXX, with modifications to allow the reaction wheel\nto mate properly with a Pololu motor hub.  The reaction wheel is shown in Figure \\ref{reactionWheel}.\nThe reaction wheel body is aluminum, with an outer diameter of 90 mm, a width of 18 mm, and an annular thickness of 7 mm.\nFusion 360 reports the mass of the reaction wheel to be 0.09391 kg (assuming a density of 2.7 g/cc).\nThe moment of inertia around the centerline is 1.562E-4 kg m\\textsuperscript{2}.\n\nAn alternate, simpler design (although less elegant) is shown in Figure \\ref{simpleReactionWheel}.\nFashioned from carbon steel with a density of 7.85 g/cc, the diameter of the wheel is 90 mm and the\nthickness is 3.175 mm (1/8\").  The reaction wheel mass is 0.175 kg and the moment of inertia\naround the centerline is 1.605E-4 kg m\\textsuperscript{2}.  
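Both sets of CAD figures can be cross-checked against the ideal annulus and disc formulas.  The sketch below (Python; it ignores the hub, bore, and mounting holes, so the computed masses come out slightly low) reproduces the moments of inertia closely:\n%\n\\begin{lstlisting}[language=Python]\n# Idealized mass and moment-of-inertia checks of the two wheel designs.\nimport math\n\n# Aluminum annular wheel: OD 90 mm, width 18 mm, wall 7 mm, 2.7 g/cc.\nro, ri, h, rho = 0.045, 0.045 - 0.007, 0.018, 2700.0\nm = rho * math.pi * (ro**2 - ri**2) * h\nprint(m, 0.5 * m * (ro**2 + ri**2))   # ~0.089 kg, ~1.54e-4 kg m^2\n\n# Carbon steel disc: diameter 90 mm, 3.175 mm thick, 7.85 g/cc.\nr, t, rho = 0.045, 0.003175, 7850.0\nm = rho * math.pi * r**2 * t\nprint(m, 0.5 * m * r**2)              # ~0.159 kg, ~1.61e-4 kg m^2\n\\end{lstlisting}\n%\n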
The simpler design, with only a small change in the moment\nof inertia, permits easier fabrication at the cost of roughly doubling the reaction wheel mass.\n\n\\includegraphics[width=\\textwidth]{images/reactionWheel.png}\n    \\captionof{figure}{Aluminum Reaction Wheel}\n    \\label{reactionWheel}\n\n\\includegraphics[width=\\textwidth]{images/simpleReactionWheel.png}\n    \\captionof{figure}{Simplified Carbon Steel Reaction Wheel}\n    \\label{simpleReactionWheel}\n\n\\end{appendices}\n\n\\end{document}\n", "meta": {"hexsha": "4a8993385f4c57467f6228acb254ed69c3d4fc86", "size": 27603, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/project-documentation.tex", "max_stars_repo_name": "AlanFord/self-balancing-stick", "max_stars_repo_head_hexsha": "c805f3136effbceed71b7b6fe086c471a698899b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 64, "max_stars_repo_stars_event_min_datetime": "2018-08-13T07:02:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T09:21:28.000Z", "max_issues_repo_path": "latex/project-documentation.tex", "max_issues_repo_name": "Archer0v0/self-balancing-stick", "max_issues_repo_head_hexsha": "6c9386228d2658fd85068fb87e602fa2bf3f80c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-04-29T15:11:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-19T14:15:25.000Z", "max_forks_repo_path": "latex/project-documentation.tex", "max_forks_repo_name": "qiuwenhui/self-balancing-stick", "max_forks_repo_head_hexsha": "6c9386228d2658fd85068fb87e602fa2bf3f80c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2019-01-22T20:53:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-26T14:53:20.000Z", "avg_line_length": 42.7291021672, "max_line_length": 483, "alphanum_fraction": 0.7186537695, "num_tokens": 8270, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.8418256512199033, "lm_q1q2_score": 0.5995753120326306}}
{"text": "\\section{Preliminaries}\n\\label{sect:prob-preliminaries}\n\n\\subsection{Boolean network}\n\\label{sect:prob-preliminaries-boolean-network}\n\nA \\textit{(combinational) Boolean network} is a directed acyclic graph $G=(V,E)$,\nwith a set $V$ of vertices and a set $E\\subseteq V \\times V$ of edges.\nTwo non-empty disjoint subsets $V_I$ and $V_O$ of $V$ are identified:\na vertex $v \\in V_I$ (resp. $V_O$) is referred to as a \\textit{primary input} (PI) (resp. \\textit{primary output} (PO)).\nEach vertex $v \\in V$ is associated with a Boolean variable $b_v$.\nEach vertex $v \\in V \\setminus V_I$ is associated with a Boolean function $f_v$.\nAn edge $(u,v)\\in E$ indicates $f_v$ refers to $b_u$ as an input variable;\n$u$ is called a \\textit{fanin} of $v$, and $v$ a \\textit{fanout} of $u$.\nThe valuation of the Boolean variable $b_v$ of vertex $v$ is as follows:\nif $v$ is a PI, $b_v$ is given by external signals; otherwise, $b_v$ equals the value of $f_v$.\nTo ease readability, we will not distinguish a vertex $v$ and its corresponding Boolean variable $b_v$.\nWe will simply denote $b_v$ with $v$.\n\nNote that a Boolean network can be converted in linear time to a CNF formula through Tseitin transformation~\\cite{Tseitin1983}.\nConsider a Boolean network $G$,\nand let $X$ denote the set of PI variables of $G$.\nDuring Tseitin transformation,\nnew variables will be introduced for every vertex $v\\in V\\setminus V_I$.\nLet $Y$ denote the set of these fresh variables.\nThe resultant formula $\\pf_G(X,Y)$ obtained from Tseitin transformation encodes the behavior of the Boolean network $G$.\nObserve that $X$ is a base set for $\\pf_G$.\nThis is because once the PI variables are decided by an assignment $\\as^+$ over $X$,\nthe values for the other variables will be propagated according to the behavior of the Boolean network.\nTherefore, at most one assignment $\\mu$ over $Y$ (the one with the consistent variable evaluation to the Boolean network) is able to satisfy $\\pf_G$.\n\n\\subsection{Probability and random variables}\n\\label{sect:prob-preliminaries-random-variable}\nTo characterize the behavior of a probabilistic design,\nwe take advantage of Bernoulli random variables.\nIn the following, we provide basic definitions of random variables.\n\nConsider an experiment with a sample space $S$ and a probability measure $\\Pr[\\cdot]$.\nA \\textit{random variable} $X$ is a mapping from an outcome in $S$ to a real number.\nThe \\textit{probability mass function} (PMF) $P_X$ of $X$\nis defined by $P_X(x)=\\Pr[\\{s \\mid X(s)=x\\}]$.\n\nA random variable $X$ is called a $\\textit{Bernoulli}(p)$ \\textit{random variable} with parameter $p\\in[0,1]$,\ndenoted by $X\\sim\\textit{Bernoulli}(p)$,\nif the PMF of $X$ has the form:\n\\begin{align*}\n    P_X(x)=\n    \\left\\{\n    \\begin{array}{ll}\n        p,   & \\mbox{ if } x = 1,      \\\\\n        1-p, & \\mbox{ else if } x = 0, \\\\\n        0,   & \\mbox{ otherwise. 
}\n    \\end{array}\n    \\right.\n\\end{align*}\nNote that a Bernoulli random variable maps every outcome in a sample space to either $0$ or $1$.\nTherefore, it is suitable to characterize experiments with binary outcomes.\n\nFor a wire (an edge) of a circuit (Boolean network),\nits value is either \\true or \\false.\nA Bernoulli random variable in this context maps \\true and \\false to real numbers $1$ and $0$, respectively.\nThe corresponding parameter $p$ of the random variable is the probability for the wire to valuate to \\true.\nOn the other hand, for a gate (a vertex) of a circuit,\nits operation has two outcomes: correct and erroneous.\nA Bernoulli random variable in this context maps erroneous and correct operations to real numbers $1$ and $0$, respectively.\nThe corresponding parameter $p$ of the random variable is the probability of erroneous operation,\ni.e., the error rate, of the gate.", "meta": {"hexsha": "9eaaf204e1e8a024e1f8aa0adf867ace14fbe051", "size": 3755, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/prob-design-eval/preliminaries.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "paper/prob-design-eval/preliminaries.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/prob-design-eval/preliminaries.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.0447761194, "max_line_length": 149, "alphanum_fraction": 0.7312916112, "num_tokens": 1045, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256432832333, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5995753115219263}}
{"text": "\\documentclass[oneside, 12pt, a4paper]{book}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\providecommand{\\main}{.}\n\n\\begin{document}\n\\chapter{EKF}\n\n\\section{Kalman filter}\nThe simplest of state space models are linear models, which can be expressed with equations of the following form.\n\n\\begin{equation}\n    \\begin{split}\n        \\mathbf{x}_t &= \\mathbf{F}\\mathbf{x}_{t-1}+\\mathbf{w}_x\\\\\n        \\mathbf{z}_t &= \\mathbf{H}\\mathbf{x}_t+\\mathbf{w}_z\n     \\end{split}\n\\end{equation}\nwhere\n\\begin{itemize}\n    \\item $\\mathbf{x}_t \\in \\mathbb{R}^n$ is the state of the system describing the condition of n elements at time $t$. \n    \\item $\\mathbf{z}_t \\in \\mathbb{R}^m$ are the measurements at time $t$.\n    \\item $\\mathbf{w}_x \\thicksim \\mathcal{N}(0,\\mathbf{Q}_t)$ is the process noise at time $t$.\n    \\item $\\mathbf{w}_z \\thicksim \\mathcal{N}(0,\\mathbf{R}_t)$ is the measurement noise at time $t$.\n    \\item $\\mathbf{F}$ is called either \\textbf{State Transition Matrix} or \\textbf{Fundamental Matrix}.\n    \\item $\\mathbf{H}$ is the measurement model matrix.\n\\end{itemize}\n\n\\begin{equation}\n    x = x_0 + \\dot{x}*t\n\\end{equation}\n\\begin{equation}\n    \\dot{x} = (-1/t)x_0 + (1/t)x\n\\end{equation}\n\n\\begin{equation}\n    \\begin{split}\n        x_1 &= x_0 + v_x t \\\\\n        x_2 &= x_0 + v_x t\n    \\end{split}\n\\end{equation}\n\n\\begin{equation}\n    x_1 + x_2 = 2*x_0 + 2v_x t \n\\end{equation}\n\n\\begin{equation}\n    x_3 = x_0 + 2v_x t\n\\end{equation}\n\n\\end{document}", "meta": {"hexsha": "4f9c513d7cdbb3ecfa7ed04a186ea4efe756a72a", "size": 1535, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "software/imu/docs/main.tex", "max_stars_repo_name": "tucuongbrt/PIFer", "max_stars_repo_head_hexsha": "e2ac4d4443e1c6a6263f91c32f28dbe767590359", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-17T18:23:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-18T06:19:44.000Z", "max_issues_repo_path": "software/imu/docs/main.tex", "max_issues_repo_name": "tucuongbrt/PIFer", "max_issues_repo_head_hexsha": "e2ac4d4443e1c6a6263f91c32f28dbe767590359", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-04-03T08:50:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-03T08:50:57.000Z", "max_forks_repo_path": "software/imu/docs/main.tex", "max_forks_repo_name": "tucuongbrt/PIFer", "max_forks_repo_head_hexsha": "e2ac4d4443e1c6a6263f91c32f28dbe767590359", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-04-14T00:18:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-06T05:57:54.000Z", "avg_line_length": 27.4107142857, "max_line_length": 121, "alphanum_fraction": 0.6618892508, "num_tokens": 562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772253241803, "lm_q2_score": 0.6859494550081925, "lm_q1q2_score": 0.5995727963461945}}
{"text": "\\documentclass{article}\n\n\\begin{document}\n\\subsubsection*{Bootstrap estimate of standard errors}\n\nWe use nonparametric bootstrap to estimate standard errors for subgroup estimates. Let $W_1,\\ldots, W_n$ be the observed values in the data. We take a random sample of size $n$ from this data, with replacement. Denote this bootstrap sample by $W_{b1},\\ldots, W_{bn}$. We now compute our estimates using the bootstrap sample:\n\\begin{equation}\n  \\hat \\theta_b^* = g(w_{b1},\\ldots, w_{bn}),\n\\end{equation}\nwhere $b=1,\\ldots,B$ denotes the bootstrap samples and $\\hat\\theta_b^*$ is the $b$th set of parameters for each subgroup.\nThis leads to a collection of $B$ bootstrap estimates, $\\hat\\theta_{1}^*,\\ldots,\\hat\\theta_{B}^*$. The bootstrap covariance matrix is\n\\begin{equation}\nS_B = \\frac{\\sum_b (\\hat\\theta_b^* - \\hat{\\bar\\theta}^*) (\\hat\\theta_b^* - \\hat{\\bar\\theta}^*)'}{B-1},\n\\end{equation}\nwhere $\\hat{\\bar\\theta}^* = \\sum_b \\hat\\theta_b^*/n$. $S_B$ is the bootstrap estimate of $Cov(\\hat\\theta)$, the covariance matrix of $\\hat\\theta$.\n\nConfidence intervals and p-values are obtained by using the so-called percentile method, which is a nonparametric.\nReported p-values are based on the empirical distribution of bootstrap estimates, that is, the proportion of estimates that lie further in the tails than the actual estimate. This method yields the confidence interval\n\\begin{equation}\n[\\theta^*_{\\alpha/2}, \\theta^*_{1-\\alpha/2}],\n\\end{equation}\nwhere $\\theta^*_p$ it the $p$th quantile (i.e. the 100$p$th percentile) of the bootstrap distribution $(\\hat\\theta_{1}^*,\\ldots,\\hat\\theta_{B}^*)$.\n\n\\end{document}", "meta": {"hexsha": "58bf1d81f571501cbbb6fc8c0287c4738ef15d8f", "size": 1614, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bootstrapSE/bootstrapSE.tex", "max_stars_repo_name": "acarril/rddsga", "max_stars_repo_head_hexsha": "02a30b90fb2b99d64fa877c49485b10f582c6f5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-09-10T20:13:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-06T01:33:16.000Z", "max_issues_repo_path": "bootstrapSE/bootstrapSE.tex", "max_issues_repo_name": "acarril/rddsga", "max_issues_repo_head_hexsha": "02a30b90fb2b99d64fa877c49485b10f582c6f5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-01-05T04:32:31.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-10T15:22:39.000Z", "max_forks_repo_path": "bootstrapSE/bootstrapSE.tex", "max_forks_repo_name": "acarril/rddsga", "max_forks_repo_head_hexsha": "02a30b90fb2b99d64fa877c49485b10f582c6f5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2017-09-08T12:49:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-29T00:34:17.000Z", "avg_line_length": 67.25, "max_line_length": 324, "alphanum_fraction": 0.7329615861, "num_tokens": 480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174787, "lm_q2_score": 0.7853085909370422, "lm_q1q2_score": 0.599570367012554}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{bm}\n\\usepackage{ragged2e}\n\\usepackage{amsmath}\n\\usepackage{physics}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{float}\n\\usepackage{graphicx}\n\\begin{document}\n\\title{Taxonomy of serious game development approaches}\n\\date{}\n\\maketitle\n\n\\section{What is a game?}\nAn voluntary activity, where people interact in order to achieve some goals, within some constraints (described as game rules).\nThe purpose of a game is to provide tools for the players that allow them to approximate their challenging expectations. Expectations that might be provided externally, like from an instructor, or are self motivated, like being entertained.\n\n\n\\subsection{Formal definition of a video game}\nA game is made of objects, which we call \\textit{state}, whose properties change according to the progression of the time. We can represent the evolution of the state by mean of a integrator that approximates, with numeric methods, the value of state for all $T$ times of a game:\n\\begin{equation}\\label{eq:1}\nw(T) = \\int_{t = 0}^{t=T} \\dv{w_t}{t} dt\n\\end{equation}\n\\noindent\nwhere $w(T)$ represents the state at time $T$. \n\n\n\\paragraph{A particle} Consider a state made of a particle accelerating at  $\\bar{\\mathbf{a}}$ $m/s$, with velocity $v(t)$ and position $p(t)$. We can represent the just mentioned evolutions as a system of differential equations:\n\n\\begin{equation}\\label{eq:2}\n\\left \\lbrace\n\\begin{array}{l}\n\\dv{\\mathbf{p}(t)}{t} = \\mathbf{v}(t)\\\\\n\\dv{\\mathbf{v}(t)}{t} = \\mathbf{a}(t)\n\\end{array}\n\\right.\n\\end{equation}\n\n\n\\noindent\nNow if we want to compute the position of the particle at time $T$ we need to update the above integrator:\n\\begin{equation}\\label{eq:3}\n\\left \\lbrace\n\\begin{array}{l}\np(T) = \\int_{0}^{T} \\mathbf{v}(t) dt \\\\\nv(T) = \\int_{0}^{T} \\mathbf{a}(t) dt\n\\end{array}\n\\right.\n\\end{equation}\n\n\\noindent\nThe position of the particle is the result of the sum of all the velocities for all times less than $T$. All entities of the state of a game can be seen as numbers, collection of positions, collection of velocities, etc., so we can put them all in one system and the state of the game at a time $t$ consists of integrating the system at time $t$. If we expand \\ref{eq:2} into:\n\\begin{equation}\\label{eq:4}\n\\left \\lbrace\n\\begin{array}{l}\n\\dv{\\mathbf{p}(t)}{t} = \\lim\\limits_{dt \\to 0} \\dfrac{\\mathbf{p}(t + dt) - \\mathbf{p}(t)}{dt} = \\mathbf{v}(t)\\\\\n\\dv{\\mathbf{v}(t)}{t} = \\lim\\limits_{dt \\to 0} \\dfrac{\\mathbf{v}(t + dt) - \\mathbf{v}(t)}{dt} = \\mathbf{a}(t)\n\\end{array}\n\\right.\n\\end{equation}\n\nSolving the two limits in (\\ref{eq:4}) requires to apply a numerical methods. 
We use the Euler method, which is meant for solving systems of differential equations; the $\\lim\\limits_{dt \\to 0}$ becomes a fixed step that we call $\\Delta t$.\n\\begin{equation}\\label{eq:5}\n\\begin{cases}\n\\dfrac{\\Delta \\mathbf{p}}{\\Delta t} = \\dfrac{\\mathbf{p}(t + \\Delta t) - \\mathbf{p}(t)}{\\Delta t} = \\mathbf{v}(t)\\\\\n\\dfrac{\\Delta \\mathbf{v}}{\\Delta t} = \\dfrac{\\mathbf{v}(t + \\Delta t) - \\mathbf{v}(t)}{\\Delta t} = \\mathbf{a}(t)\n\\end{cases}\n\\end{equation}\n\n\\begin{equation}\\label{eq:6}\n\\begin{cases}\n\\mathbf{p}(t + \\Delta t) = \\mathbf{p}(t) + \\mathbf{v}(t) \\, \\Delta t\\\\\n\\mathbf{v}(t + \\Delta t) = \\mathbf{v}(t) + \\mathbf{a}(t) \\, \\Delta t\n\\end{cases}\n\\end{equation}\n\nStarting from a known position, we move by a small amount towards the new position targeted by the velocity. Of course, we need an initial position for our particle, so at $t=0$ we use the initial position of the particle instead of the update rule.\n\nFinally, we can generalize from the particle to any objects of a game by applying the reasoning given above and solving the appropriate differential equations.\n\n\\subsubsection{Numerical vs. Analytic Solutions}\nThe fact that we are able to model the evolution of the state by means of a function does not mean that finding the exact solution is possible or simple. Analytical solutions work only for simple models. But when the model of the game becomes complex (imagine a city simulator, or a driving simulator with lots of physics) or the model is influenced by the user input, then it is not possible to identify a closed-form solution. This is why we need to use numerical methods for solving the equations, such as the Euler method, where the initial values are the initial state and the update describes the changes of the state over a short amount of time. Of course this is an approximation, but at least we manage to get very close to the exact solution.\n\n\\subsection{Video game}\nWithin the panorama of games we find video games. A video game is a specialized kind of game where the interaction is carried out by means of electronic devices. Precisely, a video game, from now on a \\textit{game}, is a program that interacts indefinitely with hardware components. Hardware components take care of: updating the memory, displaying the game entities, reading the input, etc.\n\n\\subsection{Structure of a video game}\nA game is a real-time simulation whose \\textit{state}, made of objects, can be seen as an interactable object. \\textit{Interactions} are carried out with respect to a discrete \\textit{time flow}; the discreteness is explained by the fact that interactions with hardware components require some time.\\footnote{Every increment in the state should indeed take into consideration the difference in time between now and the last time we interacted with the components.} Interactions are implemented within a so-called \\textit{game loop} (or main loop). The game loop keeps interacting with the hardware components (those components that represent the state or might change it), and at the end of each loop a new state of the game is achieved. The goal of the game is to display the game objects and to apply all interactions (from now on we describe them all under the keyword \\textit{update}). Each complete sequence of interactions of the game loop is a \\textit{frame}, as illustrated below. 
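\nAs a concrete illustration, the following minimal sketch (Python-style, with the hardware interactions stubbed out; all names and numbers are illustrative) implements such a game loop, advancing the state once per frame with the Euler update of (\\ref{eq:6}):\n\\begin{verbatim}\n# A minimal fixed-timestep game loop for the particle example.\ndt = 1.0 / 60.0                  # 60 frames per second\np, v, a = 0.0, 0.0, 9.81         # position, velocity, acceleration\n\ndef read_input():  pass          # placeholder hardware interaction\ndef render(pos):   pass          # placeholder drawing call\n\nfor frame in range(600):         # ten simulated seconds\n    read_input()\n    p = p + v * dt               # p(t + dt) = p(t) + v(t) dt\n    v = v + a * dt               # v(t + dt) = v(t) + a(t) dt\n    render(p)\n\\end{verbatim}\n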
Updating the state of the game at discrete intervals of time might suggest that we lose precision, since one could argue that we might miss some information between two frames. But games exhibit a sort of continuity in their behavior, which means that differences in the state between two consecutive frames are very small. Of course, this holds as long as the frame time is small.\\footnote{60 frames per second is a good approximation.}\n\nScience has provided a high-level representation of a game: we can represent the state of a game by means of differential equations. In this representation a game can be seen as ....\n\nA video game:\n\nIssue:\nMismatch between high-level and hardware representation!\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{games_description.png}\n\\caption{Game formalization (overview)}\\label{game_description}\n\\end{figure}\n\nIn what follows we discuss how the community tries to resolve this mismatch by means of tools and languages.\n\n\\section{Game development}\n\\begin{itemize}\n\\item Research in game development tries to close the gap/mismatch between the low-level constraints and the high-level description of a game described in Section \\ref{section_introduction}.\n\\item \\textbf{How?} Historically, a hierarchy of tools has come to life, which is described by the picture below:\n\\end{itemize}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{game_development_evolution.png}\n\\caption{Game development tools evolution}\\label{game_dvelopment}\n\\end{figure}\n\n\\subsection{Hardware} introduction, examples (assembly games), pros, and issues\n\\subsection{Multimedia API} introduction, examples (DirectX, OpenGL), pros, and issues\n\\subsection{Engines} introduction, examples (Ogre, XNA, UnityEngine), pros, and issues\n\\subsection{Editors} introduction, examples (UnityEditor, UnrealEngine), pros, and issues\n\\subsection{Visual + GPL} introduction, examples (GameMaker, RPGmaker), pros, and issues\n\\subsection{DSL} introduction, examples (Casanova), pros, and issues\n\n\\section{Research questions?}\nFor example:\n\\begin{itemize}\n\\item To what extent a game should be designed around the domain of games and what are the requirements\n\\item To what extent game development would benefit from the integration of DSLs into the development processes and to what level of abstraction\n\\item ...\n\\end{itemize}\n\\end{document}", "meta": {"hexsha": "ef7bfbd64635ef9a0599126e09509e3ebe6737c3", "size": 8404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Serious games - book chapter/main.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "Serious games - book chapter/main.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Serious games - book chapter/main.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.5616438356, "max_line_length": 1421, "alphanum_fraction": 0.7611851499, "num_tokens": 2201, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7634837635542924, "lm_q1q2_score": 0.599570350887666}}
{"text": "\\chapter{Complexity analysis and implementation}\\label{ch:complexity}\n\n\\begin{remark}{Outline}\nIn this chapter, we study the computational efficiency of tree-based ensemble\nmethods. In sections~\\ref{sec:5:complexity-fit} and \\ref{sec:5:complexity-predict},\nwe derive and discuss the time complexity of random forests, first for building them from data and then for making predictions. In\nSection~\\ref{sec:5:impl}, we discuss technical details that are critical\nfor efficiently  implementing random forests. Finally, we conclude in\nSection~\\ref{sec:5:benchmarks} with benchmarks of the random forest implementation developed\nwithin this work and compare our solution with alternative implementations.\n\\end{remark}\n\n\\section{Complexity of the induction procedure}\n\\label{sec:5:complexity-fit}\n\nThe dominant objective of most machine learning methods is to find models that\nmaximize accuracy. For models of equivalent performance, a secondary objective\nis usually to minimize complexity, both functional, as studied in the previous\nchapters, and computational. With regards to the later, a first and immediate\naspect of computational complexity in decision trees and random forests is the\n\\textit{time complexity} for learning models, that is the number of operations\nrequired for building models from data.\n\nGiven the exponential growth in the number of possible partitions of $N$\nsamples, we chose in Section~\\ref{sec:3:splitting-rules} to restrict splitting\nrules to binary splits defined on single variables. Not only this is sufficient\nfor reaching good accuracy (as discussed in the previous chapter),\nit also allows for time complexity to effectively remain within reasonable bounds.\n\nFormally, let $T(N)$ denotes the time complexity for building a decision tree\nfrom a learning set ${\\cal L}$ of $N$ samples. From\nAlgorithm~\\ref{algo:induction}, $T(N)$ corresponds to the number of operations\nrequired for splitting a node of $N$ samples and then for recursively building\ntwo sub-trees respectively from $N_{t_L}$ and $N_{t_R}$ samples. Without loss\nof generality, let us assume that learning samples all have distinct input\nvalues, such that it is possible to build a fully developed decision tree where\neach leaf contains exactly one sample. Under this assumption, determining time\ncomplexity therefore boils down to solve the recurrence equation\n\\begin{equation}\\label{eqn:complexity:rec}\n\\begin{cases}\nT(1) = c_1 \\\\\nT(N) = C(N) + T(N_{t_L}) + T(N_{t_R}),\n\\end{cases}\n\\end{equation}\nwhere $c_1$ is the constant cost for making a leaf node out of a single sample and $C(N)$\\label{ntn:cN} is the runtime complexity for finding a split $s$ and then partitioning the $N$ node samples\ninto  ${t_L}$ and ${t_R}$. In particular, this later operation\nrequires at least to iterate over all $N$ samples, which sets a linear lower\nbound on the time complexity within a node, i.e., $C(N)=\\Omega(N)$. As for finding\nthe split $s$, this can be achieved in several ways, as outlined in the\nprevious chapters  for several randomized variants of the induction procedure.\nAs we will see, not only this has an impact on the accuracy of the resulting\nmodel, it also drives the time complexity of the induction procedure.\n\n\\begin{remark}{Big O notations}\nTime complexity analyzes the asymptotic behavior of an algorithm\nwith respect to the size $N$ of its input and its hyper-parameters. 
In this way,\nbig O notations are used to formally express an asymptotic upper bound on the\ngrowth rate of the number $f(N)$ of steps in the algorithm. Formally,\nwe write that\n\\begin{equation}\nf(N) = O(g(N)) \\ \\text{if}\\ \\exists c > 0, N_0 > 0, \\forall N > N_0, f(N) \\leq c g(N)\n\\end{equation}\nto express that $f(N)$ is asymptotically upper bounded by $g(N)$, up to some negligible constant factor $c$.\nSimilarly, big $\\Omega$ notations are used to express an asymptotic lower\nbound on the growth rate of the number of steps in the algorithm. Formally,\nwe write that\n\\begin{equation}\nf(N) = \\Omega(g(N)) \\ \\text{if}\\ \\exists c > 0, N_0 > 0, \\forall N > N_0, c g(N) \\leq f(N)\n\\end{equation}\nto express that $f(N)$ is asymptotically lower bounded by $g(N)$.\nConsequently, if $f(N)$ is both $O(g(N))$ and $\\Omega(g(N))$ then $f(N)$ is\nboth lower bounded and upper bounded asymptotically by $g(N)$ (possibly for different constants),\nwhich we write using big $\\Theta$ notations:\n\\begin{equation}\nf(N) = \\Theta(g(N)) \\ \\text{if}\\ f(N) = O(g(N)) = \\Omega(g(N)).\n\\end{equation}\n\\end{remark}\n\nIn the original CART induction procedure~\\citep{breiman:1984}\n(Algorithms~\\ref{algo:findsplit} and \\ref{algo:findsplit:x_j}), finding a split\n$s$ requires, for each of the $p$ variables $X_j$ (for $j=1,\\dots,p$), sorting\nthe values $x_{i,j}$ of all $N$ node samples (for $i=1,\\dots,N$) and then\niterating over these in order to find the best threshold $v$. Assuming that the\nvalue of the impurity criterion can be computed iteratively for consecutive\nthresholds, the most costly operation\nis sorting, whose time complexity is at worst $O(N \\log N)$ using an optimal algorithm. As a result, the\noverall within-node complexity is $C(N) = O(p N \\log N)$. In a randomized tree\nas built with the Random Forest algorithm~\\citep{breiman:2001} (RF,\nAlgorithm~\\ref{algo:findsplit:random}), the search of the best split $s$ is\ncarried out in the same way, but only on $K \\leq p$ of the input variables,\nresulting in a within-node complexity $C(N) = O(K N \\log N)$. In Extremely\nRandomized Trees~\\citep{geurts:2006} (ETs, Algorithm~\\ref{algo:findsplit:et}),\ndiscretization thresholds are drawn at random within the minimum and maximum\nnode sample values of $X_j$, making sorting no longer necessary. As such, the\nwithin-node complexity reduces to the time required for finding these lower and\nupper bounds, which can be done in linear time, hence $C(N)=O(KN)$. Finally, in\nPerfect Random Tree Ensembles~\\citep{cutler:2001} (PERT,\nAlgorithm~\\ref{algo:findsplit:pert}), cut-points $v$ are set midway between two\nrandomly drawn samples, which can be done in constant time, independently of\nthe number $N$ of node samples. Yet, the within-node complexity is lower\nbounded by the time required for partitioning the node samples into ${t_L}$ and\n${t_R}$, hence $C(N)=O(N)$.\n\nSince both CART and PERT can respectively be considered as special cases of RF\nand ETs with regard to time complexity (i.e., for $K=p$ in CART, for $K=1$ in\nPERT), let us consider the overall complexity $T(N)$ when either $C(N)=O(KN\\log\nN)$ or $C(N)=O(KN)$ (for $K=1,\\dots,p$). To make our analysis easier, let us\nfurther assume that there exist $c_2, c_3 > 0$ such that $c_2 KN \\log N \\leq C(N)\n\\leq c_3 KN \\log N$ for all $N \\geq 1$ (resp. $c_2 KN \\leq C(N) \\leq c_3 KN$\nfor all $N \\geq 1$). Results presented below extend however to the case where\n$C(N) = \\Theta(KN)$ (resp. 
$\\Theta(KN\\log N)$), at the price of a more\ncomplicated mathematical analysis. Time complexity is studied in three cases:\n\n\\begin{description}\n\\item \\textit{Best case.}\\hfill\\\\\n      The induction procedure is the most efficient\n      when node samples can always be partitioned into two balanced subsets of $\\tfrac{N}{2}$\n      samples.\n\n\\item \\textit{Worst case.}\\hfill\\\\\n      By contrast, the induction procedure is the least\n      efficient when splits are unbalanced. In the worst case,\n      this results in splits that move a single sample into the first sub-tree and\n      put all $N-1$ others into the second sub-tree.\n\n\\item \\textit{Average case.}\\hfill\\\\\n      The average case corresponds to the average time\n      complexity, as taken over all possible learning sets ${\\cal L}$ (of a given size and for a given distribution) and\n      all random seeds $\\theta$.  Since the analysis of this case is hardly\n      feasible without strong assumptions on the structure of the problem,\n      we consider instead the case where the sizes of the possible\n      partitions of a node are all equiprobable. We believe this is a good-enough\n      approximation, one that should at least help us derive the order\n      of magnitude of the complexity (instead of deriving it exactly).\n\n\\end{description}\n\nSince the induction algorithm looks for splits that most decrease impurity (in\nparticular, as $K\\to p$), we assume that balanced partitions of the node\nsamples should be more likely than partitions that are unbalanced (such a\nclaim is however highly data dependent). Under this assumption, random forests\nshould therefore approach the best case rather than the average case (for which\npartitions are equally likely). However, as randomization increases (e.g., as $K\n\\to 1$), splits yield partitions that tend to be more equiprobable, hence\napproaching the average case. As such, we believe the complexity for the true\naverage case should lie in between the complexity of the best case and the\ncomplexity of the average case as defined above. As for the worst case, it\nshould only arise in exceptional circumstances.\n\nLet us mention that due to the end-cut preference of classical impurity\ncriteria, worst splits are in fact likely to be selected (see\n\\citep{breiman:1984}, Theorem 11.1) if none of the $K$ selected variables are\ninformative with respect to the output. (Note that this effect is\nless pronounced in ETs because thresholds are drawn at random rather than being\noptimized locally, as in RF.) In the analysis carried out below,\nintermingling effects due to good splits at the top of trees and such degenerate\nsplits due to end-cut preference when input variables become independent of the\noutput are not explicitly taken into account. 
Yet, the simplifying assumptions\ndefining our average case appear to constitute a good-enough approximation,\nas further confirmed in empirical benchmarks in Section~\\ref{sec:5:benchmarks}.\n\n\\begin{remark}{Master Theorem}\nSome of the results outlined in the remainder of this section make use of the\nMaster Theorem~\\citep{bentley:1980,goodrich:2006}, which is recalled below so that this section is self-contained.\n\n\\begin{theorem}\nLet us consider the recurrence equation\n\\begin{equation}\n\\begin{cases}\nT(1) = c & \\text{if}\\quad n < d\\\\\nT(n) = aT(\\frac{n}{b}) + f(n) & \\text{if}\\quad n \\geq d\n\\end{cases}\n\\end{equation}\nwhere $d \\geq 1$ is an integer constant, $a > 0$, $c>0$ and $b>1$ are real constants, and\n$f(n)$ is a function that is positive for $n \\geq d$.\n\n\\begin{enumerate}\n\\item If there is a small constant $\\epsilon > 0$, such that $f(n)$ is $O(n^{\\log_b a - \\epsilon})$, then $T(n)$ is $\\Theta(n^{\\log_b a})$.\n\\item If there is a constant $k \\geq 0$, such that $f(n)$ is $\\Theta(n^{\\log_b a} \\log^k n)$, then $T(n)$ is $\\Theta(n^{\\log_b a} \\log^{k+1} n)$.\n\\item If there are small constants $\\epsilon > 0$ and $\\delta < 1$, such that $f(n)$ is $\\Omega(n^{\\log_b a + \\epsilon})$ and $af(\\tfrac{n}{b}) \\leq \\delta f(n)$, for $n \\geq d$, then $T(n)$ is $\\Theta(f(n))$.\n\\end{enumerate}\n\\end{theorem}\n\\end{remark}\n\n\\pagebreak\n\n\\begin{theorem}\\label{thm:6:best:knlogn}\nFor $c_2 KN\\log N \\leq C(N) \\leq c_3 KN\\log N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the best case is $\\Theta(K N \\log^2 N)$.\n\\end{theorem}\n\n\\begin{proof}\nLet us rewrite $T(N)$ as $K T'(N)$ where, in the best case,\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) + 2 T'(\\frac{N}{2}),\n\\end{cases}\n\\end{equation}\nwith $c_1^\\prime = \\tfrac{c_1}{K}$ and $C^\\prime(N)=\\tfrac{C(N)}{K}$. In this form, the second case of the Master Theorem applies for\n$f(N) = C^\\prime(N)=\\Theta(N\\log N)$, $a=2$, $b=2$ and $k=1$. Accordingly, $T'(N)=\\Theta(N\\log^{k+1} N)=\\Theta(N\\log^2 N)$ and $T(N) = \\Theta(K N\\log^2 N)$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:best:kn}\nFor $c_2 K N \\leq C(N) \\leq c_3 K N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the best case is $\\Theta(K N \\log N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the best case, $T(N) = K T'(N)$, where\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) + 2 T'(\\frac{N}{2}).\n\\end{cases}\n\\end{equation}\nAgain, the second case of the Master Theorem applies, this time for $f(N)=C^\\prime(N)=\\Theta(N)$, $a=2$, $b=2$ and $k=0$.\nAccordingly, $T'(N)=\\Theta(N\\log^{k+1} N)=\\Theta(N\\log N)$ and $T(N) = \\Theta(K N\\log N)$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:worst:knlogn}\nFor $c_2 K N \\log N \\leq C(N)\\leq c_3 K N\\log N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the worst case is $\\Omega(K N^2)$ and $O(K N^2 \\log N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the worst case, $T(N) = K T'(N)$, where\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) + T'(1) + T'(N-1).\n\\end{cases}\n\\end{equation}\nLet us first consider an upper bound on $T'(N)$. 
By expanding the recurrence, we have\n\\begin{align}\nT'(N) &\\leq c_3 N \\log N + c_1^\\prime + T'(N-1) \\nonumber \\\\\n      &= \\sum_{i=1}^N \\left( c_1^\\prime + c_3 i\\log i \\right) \\nonumber \\\\\n      &= c_1^\\prime N + \\sum_{i=1}^N c_3 i\\log i \\nonumber \\\\\n      &\\leq c_1^\\prime N + c_3 \\log N \\sum_{i=1}^N i \\nonumber \\\\\n      &= c_1^\\prime N + c_3 \\log N \\frac{N(N+1)}{2} \\nonumber \\\\\n      &= O(N^2 \\log N).\n\\end{align}\nSimilarly, a lower\nbound on $T'(N)$ can be derived in the following way:\n\\begin{align}\nT'(N) &\\geq c_2 N \\log N + c_1^\\prime + T'(N-1) \\nonumber \\\\\n      &= \\sum_{i=1}^N \\left( c_1^\\prime + c_2 i\\log i \\right) \\nonumber \\\\\n      &= c_1^\\prime N + \\sum_{i=1}^N c_2 i\\log i \\nonumber \\\\\n      &\\geq c_1^\\prime N + c_2 \\sum_{i=2}^N i \\nonumber \\\\\n      &= c_1^\\prime N + c_2 \\frac{N(N+1)}{2} - c_2 \\nonumber \\\\\n      &= \\Omega(N^2).\n\\end{align}\nAs such, $T(N) = \\Omega(K N^2)$ and $O(K N^2 \\log N)$ in the worst case.\n\\end{proof}\n\nLet us remark that the lower and upper bounds derived for $T(N)$ in Theorem~\\ref{thm:6:worst:knlogn} are not\ntight. Nevertheless, given the fact that\n\\begin{equation}\n\\sum_{i=1}^N i \\log i = \\log(H(N)),\n\\end{equation}\nwhere $H(N)$ is the hyperfactorial function, complexity\ncould be re-expressed tightly as $T(N) = \\Theta(K \\log(H(N)))$.\n\n\\begin{theorem}\\label{thm:6:worst:kn}\nFor $c_2 K N \\leq C(N)\\leq c_3 K N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the worst case is $\\Theta(K N^2)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the worst case, $T(N) = K T'(N)$, where\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) + T'(1) + T'(N-1).\n\\end{cases}\n\\end{equation}\nBy expanding the recurrence, we have\n\\begin{align}\nT'(N) &\\leq c_3 N + c_1^\\prime + T'(N-1) \\nonumber \\\\\n      &= c_1^\\prime + \\sum_{i=2}^N (c_1^\\prime + c_3 i) \\nonumber \\\\\n      &= c_1^\\prime N + \\frac{c_3}{2} (N^2 + N - 2) \\nonumber \\\\\n      &= O(N^2).\n\\end{align}\nIn the exact same way, $T'(N)$ can be lower bounded by $c_2 N + c_1^\\prime + T'(N-1)$\nand shown to be $\\Omega(N^2)$. 
As a result, $T(N) = \\Theta(K N^2)$ in the worst case.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:average:knlogn}\nFor $c_2 K N \\log N \\leq C(N)\\leq c_3 K N\\log N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the average case is  $\\Theta(K N \\log^2 N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the average case, $T(N) = K T'(N)$, where\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) +  \\frac{1}{N-1} \\sum_{i=1}^{N-1} ( T'(i) + T'(N-i) ).\n\\end{cases}\n\\end{equation}\nLet us first consider an upper bound on $T'(N)$ by assuming that $C'(N) = c_3 N\\log N$.\nBy symmetry and by multiplying both sides of the last equation by $(N-1)$, it comes\n\\begin{equation}\\label{eqn:6:average:knlogn:1}\n(N-1) T'(N) = (N-1)(c_3 N\\log N) +  2 \\sum_{i=1}^{N-1} T'(i).\n\\end{equation}\nFor $N \\geq 3$, substituting $N$ with $N-1$ similarly yields\n\\begin{equation}\\label{eqn:6:average:knlogn:2}\n(N-2) T'(N-1) = (N-2)(c_3 (N-1) \\log(N-1)) +  2 \\sum_{i=1}^{N-2} T'(i).\n\\end{equation}\nSubtracting Equation~\\ref{eqn:6:average:knlogn:2} from \\ref{eqn:6:average:knlogn:1},\nit comes after simplifications and division of both sides by $N(N-1)$:\n\\begin{equation}\n\\frac{T'(N)}{N} = \\frac{T'(N-1)}{N-1} + c_3 \\frac{2}{N} \\log(N-1) + c_3 \\log \\frac{N}{N-1}.\n\\end{equation}\nLet us now introduce $S(N) = \\frac{T'(N)}{N}$. From the last equation, it comes\n\\begin{align}\nS(N) &= S(N-1) + c_3 \\frac{2}{N} \\log(N-1) + c_3 \\log \\frac{N}{N-1} \\nonumber \\\\\n     &= c_1^\\prime + c_3 \\sum_{i=2}^N \\frac{2}{i} \\log(i-1) + \\log \\frac{i}{i-1}  \\nonumber \\\\\n     &= c_1^\\prime + c_3 \\log N + 2 c_3 \\sum_{i=2}^N \\frac{1}{i} \\log(i-1)  \\nonumber \\\\\n     &\\leq c_1^\\prime + c_3 \\log N + 2 c_3 \\log N \\sum_{i=2}^N \\frac{1}{i}  \\nonumber \\\\\n     &= c_1^\\prime + c_3 \\log N + 2 c_3 \\log N (H_N - 1)  \\nonumber \\\\\n     &= O(H_N \\log N)\n\\end{align}\nwhere $H_N$ is the $N$-th harmonic number. 
Using\n\\begin{equation}\nH_N \\approx \\log N + \\gamma + \\frac{1}{2N} + O(\\frac{1}{N^2})\n\\end{equation}\nas approximation (where $\\gamma$ is the Euler-Mascheroni constant), we have\n\\begin{align*}\nT'(N) &= O(N H_N \\log N) \\nonumber \\\\\n      &= O(N (\\log N + \\gamma + \\frac{1}{2N} + \\frac{1}{N^2}) \\log N  ) \\nonumber \\\\\n      &= O(N \\log^2 N).\n\\end{align*}\n\nGiven Theorem~\\ref{thm:6:best:knlogn}, we also know that $T'(N)$ cannot grow\nslower than $\\Omega(N \\log^2 N)$, thereby setting a lower bound on the complexity\nof the average case.\n\nAs a result, $T(N) = \\Theta(KN\\log^2 N)$ in the average case.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:average:kn}\nFor $c_2 K N \\leq C(N) \\leq c_3 K N$ (for all $N \\geq 1$), the time complexity for building a decision\ntree in the average case is $\\Theta(K N \\log N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the average case, $T(N) = K T'(N)$, where\n\\begin{equation}\n\\begin{cases}\nT'(1) = c_1^\\prime \\\\\nT'(N) = C^\\prime(N) + \\frac{1}{N-1} \\sum_{i=1}^{N-1} \\left( T'(i) + T'(N-i) \\right).\n\\end{cases}\n\\end{equation}\nLet us first consider an upper bound on $T'(N)$ by assuming that $C'(N) = c_3 N$.\nBy symmetry and by multiplying both sides of the last equation by $(N-1)$, it comes\n\\begin{equation}\\label{eqn:6:average:kn:1}\n(N-1) T'(N) = c_3 N (N-1) + 2 \\sum_{i=1}^{N-1} T'(i).\n\\end{equation}\nFor $N \\geq 3$, substituting $N$ with $N-1$ similarly yields\n\\begin{equation}\\label{eqn:6:average:kn:2}\n(N-2) T'(N-1) = c_3 (N-1) (N-2) + 2 \\sum_{i=1}^{N-2} T'(i).\n\\end{equation}\nSubtracting Equation~\\ref{eqn:6:average:kn:2} from \\ref{eqn:6:average:kn:1},\nit comes after simplifications and division of both sides by $N(N-1)$:\n\\begin{equation}\n\\frac{T'(N)}{N} = \\frac{T'(N-1)}{N-1} + \\frac{2 c_3}{N}.\n\\end{equation}\nLet us now introduce $S(N) = \\frac{T'(N)}{N}$. From the last equation, it comes\n\\begin{align}\nS(N) &= S(N-1) + \\frac{2 c_3}{N} \\nonumber \\\\\n     &= c_1^\\prime + 2 c_3 \\sum_{i=2}^N \\frac{1}{i}  \\nonumber \\\\\n     &= O(2 H_N)\n\\end{align}\nwhere $H_N$ is the $N$-th harmonic number. Using\n\\begin{equation}\nH_N \\approx \\log N + \\gamma + \\frac{1}{2N} + O(\\frac{1}{N^2})\n\\end{equation}\nas approximation (where $\\gamma$ is the Euler-Mascheroni constant), we have\n\\begin{align}\nT'(N) &= O(2 H_N N) \\nonumber \\\\\n&= O(2 N \\log N + 2 N\\gamma + 1 + \\frac{2}{N}) \\nonumber \\\\\n&= O(N \\log N).\n\\end{align}\nIn the exact same way, when assuming that $C'(N) = c_2 N$, $T'(N)$ can be shown\nto be lower bounded by $\\Omega(N \\log N)$. As a result, $T(N) = \\Theta(KN \\log N)$ in the average case.\n\\end{proof}\n\nFrom Theorems~\\ref{thm:6:best:knlogn}-\\ref{thm:6:average:kn}, the total time\ncomplexity for building an ensemble of $M$ randomized trees can finally be\nderived. As summarized in Table~\\ref{table:complexity-fit}, complexity remains\nwithin polynomial time for all variants studied in this work. In the best\ncase, complexity is linear with the number of split variables considered at\neach node (i.e., $O(K)$) and linearithmic or quasilinear with the number of\n(unique) samples effectively used to build each tree (i.e., $O(N\\log N)$ or\n$O(N\\log^2 N)$). In the very worst case, however, this latter dependency becomes quadratic\n(i.e., $O(N^2)$ or $O(N^2 \\log N)$), which might greatly affect performance in\nthe context of large datasets. 
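Before moving on, the average-case rate can also be checked numerically. The sketch below (Python; a simplification that takes $C(N)=N$, i.e., unit constants with $K=1$) iterates the average-case recurrence from the proof of Theorem~\\ref{thm:6:average:kn} and shows $T(N)/(N\\log N)$ settling near a constant, in line with the $\\Theta(KN\\log N)$ result:\n\\begin{verbatim}\n# Numerical check of the average-case recurrence with C(N) = N:\n# T(N) = N + (2 / (N - 1)) * sum_{i=1}^{N-1} T(i), with T(1) = 1.\nimport math\n\nT = [0.0, 1.0]                 # T[1] = 1; index 0 is unused\nacc = 1.0                      # running sum T(1) + ... + T(N-1)\nfor N in range(2, 100001):\n    T.append(N + 2.0 * acc / (N - 1))\n    acc += T[N]\n\nfor N in (100, 10000, 100000):\n    print(N, T[N] / (N * math.log(N)))   # ratio flattens out near 2\n\\end{verbatim}\n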
From Theorems~\\ref{thm:6:best:knlogn}-\\ref{thm:6:average:kn}, the total time\ncomplexity for building an ensemble of $M$ randomized trees can finally be\nderived. As summarized in Table~\\ref{table:complexity-fit}, complexity remains\nwithin polynomial time for all variants studied in this work. In the best\ncase, complexity is linear in the number of split variables considered at\neach node (i.e., $O(K)$) and linearithmic or quasilinear in the number of\n(unique) samples effectively used to build each tree (i.e., $O(N\\log N)$ or\n$O(N\\log^2 N)$). In the very worst case however, this latter dependency becomes quadratic\n(i.e., $O(N^2)$ or $O(N^2 \\log N)$), which might greatly affect performance in\nthe context of large datasets. Reassuringly, the analysis of the average case\nshows that pathological cases are not dominant and that, on average, for all methods, complexity\nbehaves as in the best case.\n\nAs expected, the complexity of Bagging~\\citep{breiman:1996b} is about $M$ times\nas large as the complexity of building a single decision tree. Yet, by taking\ninto account the fact that bagged decision trees are built on bootstrap\nreplicates ${\\cal L}^m$ that contain about $63.2\\%$ of unique samples,\ncomplexity can be expressed with respect to $\\widetilde{N} = 0.632 N$ instead\nof $N$. From an implementation point of view, repeated occurrences of the same\ninput vector can indeed be accounted for using sample weights for the\nevaluation of the impurity criterion, thereby simulating a learning set of $N$\nsamples from $\\widetilde{N}$ unique samples while reducing the effective size\nof the learning set. Assuming close constant factors, building a single bagged\ndecision tree is therefore asymptotically\n\\begin{equation}\n\\lim_{N\\to \\infty} \\frac{N\\log^2 N}{0.632N \\log^2 0.632N} = 1.582\n\\end{equation}\ntimes faster than building a regular decision tree. The complexity of RF is\nidentical to that of Bagging, except that the dependency\non $p$ becomes a dependency on $K$, resulting in an average speedup of\n$\\tfrac{p}{K}$. In other words, not only does choosing $K \\leq p$ improve accuracy,\nit also significantly decreases the time required for building trees in a\nforest. In ETs, due to the fact that\nsamples are no longer required to be sorted at each node, the dependency on $N$\nbecomes linearithmic instead of quasilinear. With respect to RF,\nthe speedup is asymptotically $O(\\log N)$ in the best and average cases.\nFinally, PERT proves to be the fastest\nmethod of all, since only a single split variable\nis considered at each split. For $K=1$ however, ETs and PERT have identical\ntime complexity. Note that the analysis presented here is only valid\nasymptotically. In practice, constant factors might lead to different\nobserved results, though they should not significantly deviate from our conclusions if\nalgorithms are all implemented from a common code-base.\n\n\\begin{table}\n    \\centering\n    \\begin{tabular}{| c | c c c |}\n    \\hline\n         & \\textit{Best case} & \\textit{Worst case} & \\textit{Average case}  \\\\\n    \\hline\n    \\hline\n    CART & $\\Theta(pN\\log^2 N)$ & $O(pN^2\\log N)$ & $\\Theta(pN\\log^2 N)$ \\\\\n    Bagging & $\\Theta(Mp\\widetilde{N}\\log^2 \\widetilde{N})$ & $O(Mp\\widetilde{N}^2\\log \\widetilde{N})$ & $\\Theta(Mp\\widetilde{N}\\log^2 \\widetilde{N})$  \\\\\n    RF & $\\Theta(MK\\widetilde{N}\\log^2 \\widetilde{N})$ & $O(MK\\widetilde{N}^2\\log \\widetilde{N})$ & $\\Theta(MK\\widetilde{N}\\log^2 \\widetilde{N})$  \\\\\n    ETs & $\\Theta(MKN\\log N)$ & $\\Theta(MKN^2)$ & $\\Theta(MKN\\log N)$  \\\\\n    PERT & $\\Theta(MN\\log N)$ & $\\Theta(MN^2)$ & $\\Theta(MN\\log N)$  \\\\\n    \\hline\n    \\end{tabular}\n    \\caption{Time complexity for building forests of $M$ randomized trees. $N$ denotes the number of samples in ${\\cal L}$, $p$ the number of input variables and $K$ the number of variables randomly drawn at each node. 
$\\widetilde{N} = 0.632 N$ accounts for the fact that bootstrap replicates contain, on average, $63.2\\%$ unique samples.}\n    \\label{table:complexity-fit}\n\\end{table}\n\n\\section{Complexity of the prediction procedure}\n\\label{sec:5:complexity-predict}\n\nA second facet of computational complexity in random forests is the time\nrequired for making predictions. For a single test point, this represents the\nnumber of operations for traversing each of the $M$ trees, from the root to one\nof the leaves. As such, computational complexity is directly related to the\naverage depth of the leaves reached by the test samples. Assuming that both the\nlearning set and the test set are drawn from the same probability distribution,\nthe complexity analysis of the prediction procedure therefore reduces to the\nanalysis of the average depth of the leaves induced by a learning set ${\\cal\nL}$.\n\nAs in Section~\\ref{sec:5:complexity-fit}, let us assume a fully developed\ndecision tree, where each leaf contains exactly one sample from ${\\cal L}$. Likewise,\nlet us derive the average depth $D(N)$ of the leaves in the best,\nworst and average cases.\n\n\\begin{theorem}\\label{thm:6:best:depth}\nIn the best case, the average depth of the leaves is $\\Theta(\\log N)$.\n\\end{theorem}\n\n\\begin{proof}\nFor a perfectly balanced decision tree, node samples are recursively split\ninto two subsets of $\\tfrac{N_t}{2}$ samples at each node, which leads to the following\nrecurrence equation in the best case:\n\\begin{equation}\n\\begin{cases}\nD(1) = 1 \\\\\nD(N) = 1 + \\frac{1}{2} D(\\frac{N}{2}) + \\frac{1}{2} D(\\frac{N}{2}) = 1+D(\\frac{N}{2}).\n\\end{cases}\n\\end{equation}\nWith this equation, the second case of the Master Theorem applies for $f(N) = 1$, $a=1$, $b=2$ and $k=0$.\nAccordingly, $D(N)=\\Theta(\\log^{k+1} N) = \\Theta(\\log N)$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:worst:depth}\nIn the worst case, the average depth of the leaves is $\\Theta(N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the worst case, the recurrence equation of the average depth of the leaves is:\n\\begin{equation}\n\\begin{cases}\nD(1) = 1 \\\\\nD(N) = 1 + \\frac{1}{N} D(1) + \\frac{N-1}{N} D(N-1).\n\\end{cases}\n\\end{equation}\nLet us now introduce $S(N)=N D(N)$. 
From the previous equation, we obtain\n\\begin{align}\nS(N) &= N+1 + S(N-1) \\nonumber \\\\\n     &= 1+ \\sum_{i=2}^N (i + 1) \\nonumber \\\\\n     &= \\frac{N^2}{2} + \\frac{3N}{2} - 1.\n\\end{align}\nSince $D(N) = \\tfrac{S(N)}{N}$, we have $D(N) = \\frac{1}{2} (N - \\frac{2}{N} + 3) = \\Theta(N)$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:6:average:depth}\nIn the average case, the average depth of the leaves is $\\Theta(\\log N)$.\n\\end{theorem}\n\n\\begin{proof}\nIn the average case, the recurrence equation of the average depth of the leaves is:\n\\begin{equation}\n\\begin{cases}\nD(1) = 1 \\\\\nD(N) = 1 + \\frac{1}{N-1} \\sum_{i=1}^{N-1} (\\frac{i}{N} D(i) + \\frac{N-i}{N} D(N-i)).\n\\end{cases}\n\\end{equation}\nBy symmetry and by multiplying both sides by $N(N-1)$, we obtain\n\\begin{equation}\\label{eqn:thm:6:best:depth:1}\nN(N-1) D(N) = N(N-1) + 2 \\sum_{i=1}^{N-1} i D(i).\n\\end{equation}\nFor $N\\geq3$, substituting $N$ with $N-1$ similarly yields\n\\begin{equation}\\label{eqn:thm:6:best:depth:2}\n(N-1)(N-2) D(N-1) = (N-1)(N-2) + 2 \\sum_{i=1}^{N-2} i D(i).\n\\end{equation}\nSubtracting Equation~\\ref{eqn:thm:6:best:depth:2} from \\ref{eqn:thm:6:best:depth:1},\nwe obtain after simplifications and division of both sides by $N(N-1)$\n\\begin{align}\nD(N) &= D(N-1) + \\frac{2}{N} \\nonumber \\\\\n    &= 1 + 2 \\sum_{i=2}^N \\frac{1}{i} \\nonumber \\\\\n    &= 2H_N - 1\n\\end{align}\nwhere $H_N$ is the $N$-th harmonic number. Using\n\\begin{equation}\nH_N \\approx \\log N + \\gamma + \\frac{1}{2N} + O(\\frac{1}{N^2})\n\\end{equation}\nas an approximation (where $\\gamma$ is the Euler-Mascheroni constant), we have $D(N) = \\Theta(\\log N)$.\n\\end{proof}\n\nFrom Theorems~\\ref{thm:6:best:depth}-\\ref{thm:6:average:depth}, the total time\ncomplexity for computing predictions out of a forest of $M$ trees is therefore\n$\\Theta(M\\log N)$ in the best case and $\\Theta(MN)$ in the very worst case. In accordance\nwith previous results, the analysis of the average case shows however that\npathological cases are not dominant and that, on average, complexity behaves\nonce again as in the best case.\n\n\\begin{remark}{Randomized trees are not equiprobable}\nWhen trees are built totally at random, that is when all sizes of partitions\nare equiprobable, the analysis of the average case shows that the average depth\nof the leaves remains logarithmic in the size of the learning set. Assuming\nunique input values for all samples and considering the special case $p=1$,\nthe random construction of a decision tree\nis in fact equivalent to the insertion in random order of $N$ unique keys into\na binary search tree. As shown by the exact analysis in \\citep{sedgewick:2013}, the average\ndepth of a binary search tree built from equiprobable permutations of $N$\nunique keys is $O(\\log N)$, which corroborates our results. By contrast, if\ntrees were drawn uniformly at random from the set of all possible trees (i.e.,\nin the case of \\textit{Catalan} trees), then the average depth can be shown to\nbe $O(\\sqrt{N})$. The fundamental difference is that binary search trees built\nfrom random permutations are \\textit{not} all equiprobable. There may exist\nseveral permutations mapping to the same binary search tree. In particular,\nshort trees are more likely to occur than degenerate deep trees. 
Accordingly,\ndecision trees, even when built totally at random, are also not all equiprobable.\nBecause of the recursive construction procedure, deep degenerate decision\ntrees are less likely than shorter decision trees.\n\\end{remark}\n\n\n\\section{Implementation}\n\\label{sec:5:impl}\n\nImplementing decision trees and random forests involves many issues that are\neasily overlooked if not considered with care. In this section, we describe the\nrandom forest implementation that has been developed in this work and\ndeployed within the Scikit-Learn machine learning\nlibrary~\\citep{pedregosa:2011}. \\textit{The first part of this section is based\non previous work published in \\citep{buitinck:2013}.}\n\n\\subsection{Scikit-Learn}\n\nThe Scikit-Learn library provides a comprehensive suite of machine learning\ntools for Python. It extends this general-purpose programming language with\nmachine learning operations: learning algorithms, preprocessing tools, model\nselection procedures and a composition mechanism to produce complex machine\nlearning work-flows. The ambition of the library is to provide efficient\nimplementations of well-established algorithms within a programming environment\nthat is accessible to non-experts and reusable in various scientific areas. The\nlibrary is distributed under the simplified BSD license to encourage its use in\nboth academia and industry.\n\nScikit-Learn is designed to tie in with the set of numeric and scientific\npackages centered around the NumPy~\\citep{oliphant:2007} and\nSciPy~\\citep{vanderwalt:2011} libraries. NumPy augments Python with a\ncontiguous numeric array datatype and fast array computing primitives, while\nSciPy extends it further with common numerical operations, either by\nimplementing these in Python/NumPy or by wrapping existing\nC/C{}\\verb!++!/Fortran code. The name ``scikit'' derives from ``SciPy toolkit''\nand is shared with \\textit{scikit-image}. IPython~\\citep{perez:2007} and\nMatplotlib~\\citep{hunter:2007} complement SciPy to provide a\n\\textsc{matlab}-like working environment suited for scientific use.\n\nSince 2007, Scikit-Learn has been developed by more than a dozen core\ndevelopers, mostly researchers in fields ranging from neuro\\-science to\nastro\\-physics. It also benefits from occasional contributors proposing small\nbug-fixes or improvements. Development proceeds on\nGitHub\\footnote{\\url{https://github.com/scikit-learn}}, a platform which\ngreatly facilitates this kind of collaboration. Because of the large number of\ndevelopers, emphasis is put on keeping the project maintainable. In particular,\ncode must follow specific quality guidelines, such as style consistency and\nunit-test coverage. Documentation and examples are required for all features,\nand major changes must pass code review by at least two other developers.\nThe popularity of the project can be gauged from various indicators such as the hundreds\nof citations in scientific publications, successes in various machine learning\nchallenges (e.g., Kaggle), and statistics derived from our\nrepositories and mailing list. 
At the time of writing\\footnote{July 2014.}, the project is watched\nby 3,445 people and forked 1,867 times on GitHub; the mailing list receives more\nthan 300 mails per month; version control logs\n% ddaa494c116e3c16bf032003c5cccbed851733d2\nshow more than 200 unique contributors to the code-base and the online documentation\nreceives 37,000 unique visitors and 295,000 pageviews per month.\n\nOur implementation guidelines emphasize writing efficient but readable code. In\nparticular, we focus on making the code-base maintainable and understandable in\norder to favor external contributions. Whenever practical, algorithms\nimplemented in Scikit-Learn are written in Python, using NumPy vector\noperations for numerical work. This allows the code to remain concise, readable\nand efficient. For critical algorithms that cannot be easily and efficiently\nexpressed as NumPy operations, we rely on Cython \\citep{behnel:2011} to achieve\ncompetitive performance and scalability. Cython is a compiled programming\nlanguage that extends Python with static typing. It produces efficient C\nextension modules that are importable from the Python run-time system. Examples\nof Cython code in Scikit-Learn are stochastic gradient descent for linear\nmodels, some clustering algorithms, and decision trees.\n\n\\subsubsection{Data}\n\nMachine learning revolves around data, so good data structures are paramount to\ndesigning software for it. In most tasks, data is modeled by a set of $p$\nnumerical variables, so that a single \\textit{sample} is a vector $\\mathbf{x}\n\\in \\mathbb{R}^p$. A collection of such samples is naturally regarded as the\nrows in a matrix $\\mathbf{X}$ of size $N \\times p$. In the common case of\nsupervised learning (classification, regression), we have an additional vector\n$\\mathbf{y}$ of length $N$ at training time and want an algorithm to produce\nsuch a $\\mathbf{y}$ for new data.\n\nScikit-Learn's data representation is kept as close as possible to this matrix\nformulation: datasets are encoded as two-dimensional NumPy arrays or SciPy\nsparse matrices \\citep{vanderwalt:2011}, while target vectors are flat\narrays of numbers or strings. While these may seem rather unsophisticated when\ncompared to more object-oriented constructs, such as the ones used by Weka\n\\citep{hall:2009}, they allow us to rely on efficient NumPy and SciPy vector\noperations while keeping the code close to the textbook. Given the\npervasiveness of NumPy and SciPy in many other scientific Python packages, many\nscientific users of Python will already be familiar with these data structures,\nand a collection of available data loading and conversion tools facilitates\ninteroperability. For tasks where the inputs are text files or semi-structured\nobjects, we provide \\textit{vectorizer} objects that efficiently convert such\ndata to the NumPy or SciPy formats.\n\n
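Concretely, a small supervised dataset in this representation is nothing more than the following (a toy example with hypothetical values):\n\n\\vskip0.3cm\n\\begin{pythoncode}\nimport numpy as np\n\n# N = 3 samples, each described by p = 2 input variables\nX = np.array([[0.1, 2.0],\n              [0.5, 1.0],\n              [0.3, 7.0]])\n\n# One target value per sample\ny = np.array([0, 1, 0])\n\\end{pythoncode}\n\n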
The public interface is oriented towards processing a batch of samples, rather\nthan a single sample, in each API call. While classification and regression\nalgorithms can make predictions for single samples, Scikit-Learn objects are\nnot optimized for this use case. (The online learning algorithms in the library\nare intended to take mini-batches.) Batch processing makes optimal use of NumPy\nand SciPy by preventing the overhead of Python function calls and\nper-element dynamic type checking. Although this might seem to be an\nartifact of the Python language, and therefore an implementation detail that\nleaks into the API, we argue that APIs should be designed so as not to tie a\nlibrary to a suboptimal implementation strategy. Batch processing also enables\nfast implementations in lower-level languages, where memory hierarchy effects\nand the possibility of internal parallelization come into play.\n\n\\subsubsection{Estimators}\n\nThe \\textit{estimator} interface is at the core of the library. It defines\ninstantiation mechanisms of objects and exposes a \\texttt{fit} method for\nlearning a model from training data. All supervised and unsupervised learning\nalgorithms (e.g., for classification, regression or clustering) are offered as\nobjects implementing this interface. Machine learning tasks like feature\nextraction and selection are also provided as estimators.\n\nEstimator initialization and actual learning are strictly separated, in a way\nthat is similar to partial function application: an estimator is initialized\nfrom a set of named hyper-parameter values (e.g., the number of trees in a\nforest) and can be considered a function that maps these values to actual\nlearning algorithms. The constructor does not see any actual data. All it does\nis attach the given parameters to the object. For the sake of model inspection,\nhyper-parameters are set as public attributes, which is especially important in\nmodel selection settings. Default hyper-parameter values are provided for all\nbuilt-in estimators. These defaults are set to be relevant in many common\nsituations in order to make estimators effective \\textit{out-of-the-box}.\n\nActual learning is performed by the \\texttt{fit} method. This method is called\nwith training data (e.g., supplied as two arrays \\texttt{X\\_train} and\n\\texttt{y\\_train} in supervised learning estimators). Its task is to run a\nlearning algorithm, to determine model-specific parameters from the training\ndata and to set these as attributes on the estimator object. As a convention, the\nparameters learned by an estimator are exposed as public attributes with names\nsuffixed with a trailing underscore (e.g., \\texttt{coef\\_} for the learned\ncoefficients of a linear model), again to facilitate model inspection. In the\npartial application view, \\texttt{fit} is a function from data to a model of\nthat data. It always returns the estimator object it was called on, which now\nserves as a model and can be used to perform predictions or transformations of\nnew data.\n\nThe choice to let a single object serve a dual purpose as estimator and model has\nbeen driven by usability and technical considerations. Having two coupled\ninstances (an estimator object used as a factory, and a model object produced\nby the estimator) makes a library harder to learn and use. From the developer\npoint of view, decoupling estimators from models would create parallel class\nhierarchies and increase the overall maintenance burden. A good reason for\ndecoupling would be to make it possible to ship a model to a new environment\nwhere the full library cannot be installed. However, our inspectable setup,\nwhere model parameters are documented public attributes and prediction formulas\nfollow standard textbooks, goes a long way towards solving this problem.\n\nTo illustrate the initialize-fit sequence, let us consider a supervised\nlearning task using a single decision tree. 
Given the API defined above,\nsolving this problem is as simple as the following example.\n\n\\vskip0.3cm\n\\begin{pythoncode}\n# Import the tree module\nfrom sklearn.tree import DecisionTreeClassifier\n# Instantiate and set hyper-parameters\nclf = DecisionTreeClassifier(min_samples_split=5)\n# Learn a model from data\nclf.fit(X_train, y_train)\n\\end{pythoncode}\n\nIn this snippet, a \\texttt{DecisionTreeClassifier} estimator is first\ninitialized by setting the \\texttt{min\\_samples\\_split} hyper-parameter to $5$\n(see Section~\\ref{sec:3:stop}). Upon calling \\texttt{fit}, a model is learned\nfrom the training arrays \\texttt{X\\_train} and \\texttt{y\\_train}, and stored on\nthe object for later use. Since all estimators share the same interface, using\na different learning algorithm is as simple as replacing the constructor (the\nclass name). To build a random forest on the same data, one would simply\nreplace \\texttt{DecisionTreeClassifier(min\\_samples\\_split=5)} in the snippet\nabove by \\texttt{RandomForestClassifier()}.\n\nIn Scikit-Learn, classical learning algorithms are not the only objects to be\nimplemented as estimators. For example, preprocessing routines (e.g., scaling\nof features) or feature extraction techniques (e.g., vectorization of text\ndocuments) also implement the \\textit{estimator} interface. Even stateless\nprocessing steps that do not require the \\texttt{fit} method to perform useful\nwork implement the estimator interface. This design pattern is of prime\nimportance for consistency, composition and model selection reasons,\nas further illustrated in \\citep{buitinck:2013}.\n\n\\subsubsection{Predictors}\n\nThe \\textit{predictor} interface extends the notion of an estimator by adding a\n\\texttt{predict} method that takes an array \\texttt{X\\_test} and produces\npredictions for \\texttt{X\\_test}, based on the learned parameters of the\nestimator (we call the input to \\texttt{predict} ``\\texttt{X\\_test}'' in order\nto emphasize that \\texttt{predict} generalizes to new data). In the case of\nsupervised learning estimators, this method returns the predicted labels or\nvalues computed by the model:\n\n\\vskip0.3cm\n\\begin{pythoncode}\n# Make predictions on new data\ny_pred = clf.predict(X_test)\n\\end{pythoncode}\n\nApart from \\texttt{predict}, predictors may also implement methods that\nquantify the confidence of predictions. In the case of linear models, the\n\\texttt{decision\\_function} method returns the distance of samples to the\nseparating hyperplane. Some predictors also provide a \\texttt{predict\\_proba}\nmethod which returns class probability estimates.\n\nFinally, supervised predictors provide a \\texttt{score} function to assess\ntheir performance on a batch of input data. This method takes as input arrays\n\\texttt{X\\_test} and \\texttt{y\\_test} and typically computes the coefficient of\ndetermination between \\texttt{y\\_test} and \\texttt{predict(X\\_test)} in\nregression, or the accuracy in classification. The only requirement is that the\n\\texttt{score} method return a value that quantifies the quality of its\npredictions (the higher, the better). An unsupervised estimator may also expose\na \\texttt{score} function to compute, for instance, data likelihood under its\nmodel.\n\n\\subsubsection{API for random forests}\n\nScikit-Learn provides efficient implementations of decision trees and random\nforests, all offered as objects implementing the estimator and predictor\ninterfaces presented above. 
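As a further illustration, the snippet below chains the estimator and predictor mechanisms just described for a forest; the arrays \\texttt{X\\_train}, \\texttt{y\\_train}, \\texttt{X\\_test} and \\texttt{y\\_test} are assumed to be defined as previously.\n\n\\vskip0.3cm\n\\begin{pythoncode}\nfrom sklearn.ensemble import RandomForestClassifier\n\n# M = 250 trees, K = sqrt(p) variables drawn at each node\nclf = RandomForestClassifier(n_estimators=250, max_features="sqrt")\nclf.fit(X_train, y_train)             # learn a forest from the data\ny_pred = clf.predict(X_test)          # predicted classes\nproba = clf.predict_proba(X_test)     # class probability estimates\naccuracy = clf.score(X_test, y_test)  # mean accuracy on the test set\n\\end{pythoncode}\n\n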
Most of the hyper-parameters described\nin this work are supported.\n\n\\begin{description}\n\\item \\texttt{DecisionTreeClassifier}, \\texttt{DecisionTreeRegressor}: \\hfill \\\\\n    Implement single decision trees~\\citep{breiman:1984}, as described in Chapter~\\ref{ch:cart}.\n\n\\item \\texttt{BaggingClassifier}, \\texttt{BaggingRegressor}: \\hfill \\\\\n    Implement Bagging~\\citep{breiman:1996b}, Random Subspaces~\\citep{ho:1998}\n    and Ensembles of Random Patches~\\citep{louppe:2012}, as described in Chapters~\\ref{ch:forest}\n    and \\ref{ch:random-patches}.\n\n\\item \\texttt{RandomForestClassifier}, \\texttt{RandomForestRegressor}: \\hfill \\\\\n    Implement Random Forest~\\citep{breiman:2001}, as described in Chapter~\\ref{ch:forest}.\n\n\\item \\texttt{ExtraTreesClassifier}, \\texttt{ExtraTreesRegressor}: \\hfill \\\\\n    Implement Extremely Randomized Trees~\\citep{geurts:2006}, as described in Chapter~\\ref{ch:forest}.\n\\end{description}\n\n\n\\subsection{Internal data structures}\n\nAmong all possible ways of representing a decision tree, one of the simplest and most\ndirect representations is to adopt an object-oriented approach. In this\nparadigm, a decision tree is naturally represented as a hierarchy of high-level\nobjects, where each object corresponds to a node of the tree and comprises\nattributes referencing its children or storing its split and value. Such a\nrepresentation would make for a correct, intuitive and flexible implementation\nbut may in fact not be the most appropriate when aiming for high performance.\nIndeed, one of the biggest issues is that object-oriented programming usually\nfragments complex and nested objects into non-contiguous memory blocks, thereby\nrequiring multiple levels of indirection for traversing the structure. In\nparticular, this design can easily impair computing times in\nperformance-critical applications, e.g., by not making it possible to leverage CPU caches or\npre-fetching mechanisms.\nAt the price of less abstraction and flexibility, we adopt instead in this work\na compact low-level representation of decision trees, allowing fine-grained\nand complete control over memory management and CPU mechanisms.\n\nLet $\\texttt{t}\\in \\{0,\\dots,T-1\\}$ denote unique node identifiers and let\n$\\texttt{t}=0$ correspond to the identifier of the root node. 
The data\nstructure that we propose for representing decision trees consists in using\ncontiguous arrays for simultaneously storing the content of all nodes, as\ndefined below:\n\n\\begin{description}\n\n\\item \\texttt{left\\_child} (array of $T$ integers):\\hfill\\\\\n    Store the node identifier $\\texttt{left\\_child[t]} \\in \\{0,\\dots,T-1\\}$ of the left child of node \\texttt{t}.\n\\item \\texttt{right\\_child} (array of $T$ integers):\\hfill\\\\\n    Store the node identifier $\\texttt{right\\_child[t]} \\in \\{0,\\dots,T-1\\}$ of the right child of node \\texttt{t}.\n\\item \\texttt{feature} (array of $T$ integers):\\hfill\\\\\n    Store the identifier $\\texttt{feature[t]} \\in \\{1, \\dots, p\\}$ of the variable used for splitting node \\texttt{t}.\n\\item \\texttt{threshold} (array of $T$ reals):\\hfill\\\\\n    Store the cut-off value $\\texttt{threshold[t]} \\in \\mathbb{R}$ used for splitting node \\texttt{t}.\n\\item \\texttt{impurity} (array of $T$ reals):\\hfill\\\\\n    Store the impurity $i(t)$ at node \\texttt{t}, as computed on the learning set ${\\cal L}$.\n\\item \\texttt{n\\_samples} (array of $T$ integers or reals):\\hfill\\\\\n    Store the (weighted) number \\texttt{n\\_samples[t]} of learning samples reaching node \\texttt{t}.\n\\item \\texttt{value} (array of $T\\times J$ reals or $T$ reals):\\hfill\\\\\n    Store the class probability estimates (i.e., the number of samples of each class) or the mean regression values,\n    as computed from the learning samples reaching node \\texttt{t}.\n\n\\end{description}\n\nAs an example, Table~\\ref{table:tree-array} illustrates the array\nrepresentation of the decision tree shown in Figure~\\ref{fig:5:tree}, as built\nfrom the learning set ${\\cal L}$ of Figure~\\ref{fig:5:set}. Internal nodes\n($t_0$, $t_2$, $t_3$ and $t_6$) are nodes for which the corresponding values in\n$\\texttt{left\\_child}$, $\\texttt{right\\_child}$, $\\texttt{feature}$ and\n$\\texttt{threshold}$ are not empty, while leaves ($t_1$, $t_4$, $t_5$, $t_7$ and\n$t_8$) correspond to nodes for which these values are not defined. In the case\nof classification, the \\texttt{value} array contains the number of samples of\neach class for each node. From these, class probability estimates $p_{\\cal\nL}(Y=c|t)$ can be computed by dividing \\texttt{value[t][c]} by the number of\nsamples $\\texttt{n\\_samples[t]}$ in $t$. In the case of regression,\n\\texttt{value[t]} would contain the average output value for the samples in\n$t$.\n\n
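To illustrate how this representation supports fast prediction, the following sketch (written in plain Python rather than the actual Cython implementation, and assuming by convention that \\texttt{left\\_child[t]} is set to $-1$ at leaves and that variable indices are 0-based) routes a sample $\\mathbf{x}$ from the root down to a leaf:\n\n\\vskip0.3cm\n\\begin{pythoncode}\ndef apply(x, left_child, right_child, feature, threshold):\n    # Start at the root node and descend until a leaf is reached.\n    t = 0\n    while left_child[t] != -1:\n        if x[feature[t]] <= threshold[t]:\n            t = left_child[t]   # go down the left branch\n        else:\n            t = right_child[t]  # go down the right branch\n    return t  # identifier of the leaf reached by x\n\\end{pythoncode}\n\nThe prediction for $\\mathbf{x}$ is then simply read from \\texttt{value[t]}, for the leaf \\texttt{t} returned by this traversal.\n\n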
Finally, let us note that storing node impurities in the \\texttt{impurity}\narray is not strictly necessary, since these values are only used during\nconstruction. Yet, storing them can prove valuable, e.g., for\nlater inspection of the decision tree or for computing variable importances (see\nChapter~\\ref{ch:importances}) more readily.\n\n\\begin{figure}\n\\centering\n\\begin{minipage}{0.55\\textwidth}\n\\hspace{-0.5cm}\n\\centering\n    \\includegraphics[width=\\textwidth]{figures/ch5_learningset.pdf}\n    \\caption{Binary classification task.}\n    \\label{fig:5:set}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n    \\includegraphics[scale=1.0]{figures/ch5_tree.pdf}\n    \\caption{Decision tree learned from Figure~\\ref{fig:5:set}.}\n    \\label{fig:5:tree}\n\\end{minipage}\n\\end{figure}\n\n\\begin{table}\n    \\small\n    \\hspace{-1.3cm}\n    \\begin{tabular}{| l | c c c c c c c c c |}\n    \\hline\n    \\texttt{t}    & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\\\n    \\hline\n    \\hline\n    \\texttt{left\\_child} & 1 & -- & 3 & 5 & -- & -- & 7 & -- & -- \\\\\n    \\texttt{right\\_child} & 2 & -- & 4 & 6 & -- & -- & 8 & -- & -- \\\\\n    \\texttt{feature} & 2 & -- & 2 & 1 & -- & -- & 1 & -- & -- \\\\\n    \\texttt{threshold} & 0.303 & -- & 0.696 & 0.296 & -- & -- & 0.703 & -- & -- \\\\\n    \\texttt{impurity} & 0.273 & 0. & 0.368 & 0.482 &  0.065 &  0.051 & 0.48  & 0.042 &  0. \\\\\n    \\texttt{n\\_samples} & 300 & 99  & 201 & 113  & 88  & 38  & 75  & 46  & 29\\\\\n    \\texttt{value} & 251/49 & 99/0 & 152/49 & 67/46 & 85/3 & 37/1 & 30/45 & 1/45 & 29/0 \\\\\n    \\hline\n    \\end{tabular}\n    \\caption{Array representation of the decision tree in Figure~\\ref{fig:5:tree}.}\n    \\label{table:tree-array}\n\\end{table}\n\nBy implementing the array representation as dynamic tables~\\citep{cormen:2001},\ninsertion of new nodes has an $O(1)$ amortized cost, while keeping memory\nallocations at a minimum and thereby reducing the overhead that would otherwise\nbe incurred with per-node data structures. Maintaining the representation\ncontiguously in memory also brings the benefit of leveraging CPU caches,\nwhich might greatly improve performance when fast predictions are critical.\nSince nodes are accessed successively and repeatedly at test time,\nstoring them in nearby memory locations is indeed highly recommended.\n\n\\subsection{Builders}\n\nThe implementation of the induction procedure in Scikit-Learn revolves around\nthree nested components:\n\n\\begin{enumerate}\n\\item The first component is a \\texttt{Builder} object whose role is to\n    effectively build the tree array representation presented above,\n    by recursively partitioning nodes using splits found by a \\texttt{Splitter} object.\n\n\\item The second component is a \\texttt{Splitter} object whose role is to find splits on internal nodes.\n\n\\item The third component is a \\texttt{Criterion} object whose role is to evaluate\n    the goodness of a split.\n\\end{enumerate}\n\nThe most direct \\texttt{Builder} implementation is the depth-first greedy\ninduction procedure as originally outlined in Algorithm~\\ref{algo:induction}. A\ncritical aspect of this algorithm is how to keep track of the input\nsamples $(\\mathbf{x},y) \\in {\\cal L}_t$ that arrive at node $t$. A\nstraightforward solution is to store lists of indices of the input samples at\nnode $t$. As pointed out in \\citep{criminisi:2013}, building many such lists is\nhowever inefficient because of repeated memory allocations and deallocations. 
A\nbetter solution is to have instead all indices stored in a single static array\nand to use in-place reordering operations when partitioning $t$. More\nspecifically, let us assume that $\\texttt{samples}$ is the list of all indices\nand that $\\texttt{start}$ and $\\texttt{end}$ are bounds such that\n$\\texttt{samples[start:end]}$ contains the sample indices at the current node\n$t$. If $\\texttt{t}$ is split on feature $\\texttt{feature[t]}$ and threshold\n$\\texttt{threshold[t]}$, then ${\\cal L}_t$ can be partitioned into ${\\cal\nL}_{t_L}$ and ${\\cal L}_{t_R}$ by reorganizing $\\texttt{samples[start:end]}$\ninto $\\texttt{samples[start:pos]}$ and $\\texttt{samples[pos:end]}$, such that\nall samples in the first part are on the left side of the split while all\nsamples in the second part are on the right side of the split (a minimal sketch\nof this step is given at the end of this subsection). The induction\nprocedure then proceeds by pushing the bounds $\\texttt{start:pos}$ and\n$\\texttt{pos:end}$ on the stack $S$, as an efficient way to effectively\nrepresent ${\\cal L}_{t_L}$ and ${\\cal L}_{t_R}$.\n\nAn alternative implementation of the $\\texttt{Builder}$ interface consists in\nreplacing the stack $S$ in Algorithm~\\ref{algo:induction} by a priority queue,\nhence changing the order in which the nodes are split. In particular, if nodes\n$t$ are prioritized by weighted potential impurity decrease $p(t) \\Delta\ni(s^*,t)$ (where $s^*$ is the best split on $t$), then the greedy induction\nprocedure switches from a depth-first construction of the tree to a \\textit\n{best-first} induction procedure. In this way, an additional stopping criterion\ncan be defined as the maximum number \\texttt{max\\_leaf\\_nodes} of leaves in the\ntree. If nodes are expanded in best-first order, then the resulting tree only\nincludes the most significant splits, thereby pre-pruning all (seemingly)\nirrelevant branches without wasting computational resources. Alternatives also\ninclude prioritizing nodes according to their impurity $i(t)$ or their size $N_t$.\n\nLikewise, the stack $S$ in Algorithm~\\ref{algo:induction} can be replaced by a\ndeque data structure, in which case the induction procedure switches to a\n\\textit{breadth-first} construction of the tree. With some modifications,\nbreadth-first construction proves to be more efficient when it is expensive to\nrandomly access the data~\\citep{criminisi:2013}, e.g., when data are too big to\nfit into memory and must be streamed from disk. Breadth-first induction also\nproves to be an efficient strategy when combined with parallelism, e.g., for\nbuilding a whole level of the decision tree simultaneously using multiple\ncores~\\citep{liao:2013}.\n\n
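As announced above, the in-place partitioning step can be sketched in a few lines of plain Python (the actual implementation is written in Cython; \\texttt{X} denotes the two-dimensional NumPy array of input values):\n\n\\vskip0.3cm\n\\begin{pythoncode}\ndef partition(X, samples, start, end, feature, threshold):\n    # Reorganize samples[start:end] in place and return pos such that\n    # samples[start:pos] fall on the left side of the split and\n    # samples[pos:end] on the right side. Only indices are swapped,\n    # so input samples are never copied.\n    i, j = start, end\n    while i < j:\n        if X[samples[i], feature] <= threshold:\n            i += 1\n        else:\n            j -= 1\n            samples[i], samples[j] = samples[j], samples[i]\n    return i\n\\end{pythoncode}\n\n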
\\subsection{Splitters}\n\\label{sec:5:impl:splitters}\n\nIn Scikit-Learn, \\texttt{Splitter} objects implement search procedures for\nfinding splits on internal nodes. In the case of decision trees, they implement\nAlgorithms~\\ref{algo:findsplit} and \\ref{algo:findsplit:x_j} for finding the\nbest split over all $p$ input variables. In the case of randomized trees, they\nimplement the search procedure of Algorithm~\\ref{algo:findsplit:random} over $K \\leq p$\nrandomly drawn input variables, combined with either Algorithm~\\ref{algo:findsplit:x_j} or \\ref{algo:findsplit:et} to respectively obtain\nRandom Forests or Extremely Randomized Trees. Again, several aspects of the\nalgorithm should be considered with care to obtain an efficient implementation:\n\n\\begin{description}\n\n\\item \\textit{Data access.}\\hfill\\\\\n    Looking for the best split on the input variable $X_j$ and partitioning\n    ${\\cal L}_t$ requires repeatedly accessing the node sample\n    values $x_{i,j}$ (i.e., the values in the $j$-th column if samples are represented\n    as a two-dimensional array), which may come at a non-negligible cost if the data is\n    not laid out properly. However, due to CPU caching and pre-fetching effects, the\n    closer the values are in memory, the faster they can usually be accessed. In\n    our case, this can be exploited in two ways to minimize the cost of data access:\n\n    \\begin{itemize}\n    \\item By storing the learning set ${\\cal L}$ using column-major order (i.e., Fortran\n          array memory layout), hence storing the values $x_{i,j}$ of $X_j$ (for all samples $i=1,\\dots,N$)\n          contiguously in memory.\n    \\item By pre-fetching the node sample values $x_{i,j}$ (for $i = 1, \\dots, N_t$) manually\n          into a static and contiguous buffer, on which the search procedure\n          is then applied.\n    \\end{itemize}\n\n\\item \\textit{Sort algorithm.}\\hfill\\\\\n    In CART and Random Forests, finding the best split on $X_j$ requires\n    sorting the sample indices $\\texttt{samples[start:end]}$ along\n    their respective values on $X_j$, i.e., such that\n    \\begin{align}\n    \\texttt{X[samples[start], j]} &\\leq \\dots \\leq \\texttt{X[samples[start+i], j]} \\nonumber \\\\\n                                  &\\leq \\dots \\leq \\texttt{X[samples[end-1], j]}, \\nonumber\n    \\end{align}\n    for $i=0,\\dots,N_t-1$.\n    As shown in Section~\\ref{sec:5:complexity-fit},\n    this operation drives the complexity of the overall induction procedure\n    and is therefore critical in the implementation of the algorithm.\n\n    To guarantee the lowest time complexity in all cases, we rely on\n    Introsort~\\citep{musser:1997} which combines Quicksort and Heapsort into a\n    hybrid sorting algorithm. Basically, sorting in Introsort begins with\n    Quicksort and then switches to Heapsort when the depth of partitioning\n    exceeds $\\Theta(\\log N)$. As such, the practical $\\Theta(N\\log N)$\n    performance of Introsort is comparable to the average performance of\n    Quicksort, except that its worst-case performance is bounded by\n    $\\Theta(N \\log N)$ thanks to Heapsort, instead of $\\Theta(N^2)$ in the\n    worst case for Quicksort. Additionally, Quicksort is\n    implemented using the median-of-three as pivot~\\citep{bentley:1993}, hence\n    yielding better partitions and further improving performance.\n\n    \\begin{figure}[b]\n        \\centering\n        \\includegraphics[width=0.9\\textwidth]{figures/ch5_sort.pdf}\n        \\caption{3-way partition in Quicksort. Once $\\texttt{i=r}$, elements that are identical\n                 to the pivot are all set at their final positions \\texttt{samples[l:r]}, in a single\n                 recursive call. 
The sub-arrays \\texttt{samples[start:l]} and \\texttt{samples[r:end]}\n                 are then recursively sorted.}\n        \\label{fig:5:sort}\n    \\end{figure}\n\n    In the context of the tree induction procedure, let us finally note that\n    partitioning node samples ${\\cal L}_t$ into subsets ${\\cal L}_{t_L}$ and\n    ${\\cal L}_{t_R}$ not only makes them purer in terms of the output variable\n    $Y$, it also indirectly makes node samples more and more identical in terms\n    of the values taken by the input variables. As we get deeper into the tree,\n    this property can be leveraged by implementing Quicksort as a 3-way\n    recursive partition~\\citep{bentley:1993}, as illustrated in\n    Figure~\\ref{fig:5:sort}. In this variant, elements that are identical to\n    the pivot are all set at their final positions in a single recursive call,\n    hence accelerating the sorting procedure. If the number of distinct values\n    is constant, 3-way Quicksort can be shown~\\citep{sedgewick:2011}\n    to reduce running times from $O(N \\log N)$ to $O(N H)$, where $H$ is the\n    Shannon entropy defined on the frequencies of the values to be sorted. As a\n    consequence, in the case of many duplicates (i.e., when $H=O(1)$),\n    $C(N)=O(N)$ and using 3-way Quicksort for sorting the sample indices in\n    fact reduces the overall complexity of Random Forests to the asymptotic\n    complexity of Extremely Randomized Trees (see Section~\\ref{sec:5:complexity-fit}).\n\n\\item \\textit{Skipping locally constant input variables.}\\hfill \\\\\n    In the extreme case, recursively partitioning node samples might result in\n    input variables $X_j$ becoming locally constant at $t$. That is,\n    $x_j=x^\\prime_j$ for all $(\\mathbf{x},y), (\\mathbf{x}^\\prime, y) \\in {\\cal\n    L}_t$. In such a case, there is no valid split on $X_j$, and trying to find\n    one at $t$, or at any of its descendant nodes, is a waste of\n    computational resources. Accordingly, Algorithm~\\ref{algo:findsplit:random}\n    can be extended to account for variables that are known to be constant, thereby skipping\n    non-informative variables for which we know in advance that no valid split can be found.\n    To stay as close as possible to the original algorithm, note however that constant\n    variables are still included when drawing $K$ variables at random.\n\n\\item \\textit{Split approximations.}\\hfill\\\\\n    In Chapter~\\ref{ch:forest}, we showed that an increase of bias due to\n    randomization is beneficial as long as it sufficiently decorrelates the\n    predictions of the individual trees. Given this result, a legitimate\n    question is whether finding the very best splits $s^*_j$ is really\n    imperative in the first place to achieve good accuracy in random forests. As evidenced by the\n    good results usually obtained by Extremely Randomized Trees, even splits\n    that are drawn at random are in fact often sufficient to reach good\n    (sometimes even better) performance. More importantly, not only does this yield\n    comparable accuracy, it is also more computationally efficient. 
Midway\n    between exact best splits (as in Algorithm~\\ref{algo:findsplit:x_j}) and\n    random splits (as in Algorithm~\\ref{algo:findsplit:et}), an intermediate\n    and pragmatic solution is therefore to consider approximations of the best splits.\n    The two most common strategies are based on \\textit{subsampling} and \\textit{binning}:\n\n    \\begin{itemize}\n    \\item Let $N_0$ denote an upper limit on the node sample size. If $N_t > N_0$,\n          the subsampling approximation of the best split $s^*_j$ at $t$ on $X_j$ consists\n          in looking for the best split using only $N_0$ randomly drawn node\n          samples from ${\\cal L}_t$; otherwise, all node samples are used. The rationale behind this strategy is\n          that, in the first nodes of the tree, the best splits are often\n          so markedly superior to all other splits that this clear-cut information\n          is also reflected in a subsample. For nodes further down in the tree (i.e., for $N_t \\leq N_0$),\n          splits are usually less marked, but then all node samples in ${\\cal L}_t$\n          are used, which guarantees finding the same splits as in the original procedure.\n          Historically, the subsampling strategy was first proposed in \\citep{breiman:1984}\n          in the context of single decision trees, for datasets too large to be held in memory.\n\n    \\item A closely related strategy is to consider only a subset\n          of the possible discretization thresholds $v^\\prime_k$ instead\n          of exhaustively evaluating all candidate splits. In binning, this\n          is performed by grouping together node samples that are close\n          to each other into bins and then evaluating only the thresholds in-between these groups.\n          The simplest strategy of all is to divide the value interval of $X_j$\n          at $t$ into $D$ intervals of equal length. Another strategy\n          is to sort the node samples along their respective values of $X_j$\n          and to partition them into $D$ subsets of equal size.\n          More elaborate algorithms, also known as \\textit{attribute discretization} procedures,\n          are reviewed in \\citep{zighed:2000}.\n    \\end{itemize}\n\n\\item \\textit{Pre-sorting data.}\\hfill\\\\\n    As shown earlier, sorting node samples for each candidate split significantly impacts\n    the overall complexity of the induction procedure. An alternative strategy is\n    possible~\\citep{breiman:2002} and consists in presorting the samples for all\n    $p$ input variables before the construction of the tree. More specifically, let\n    $i=1,\\dots,\\widetilde{N}$ denote the original indices of the unique input\n    samples in ${\\cal L}^m$. For each input variable $X_j$ (for $j=1,\\dots,p$), the\n    sorted indices $\\sigma^j_1,\\dots,\\sigma^j_{\\widetilde{N}}$, such that\n    \\begin{equation} x_{\\sigma^j_1,j} \\leq \\dots \\leq\n    x_{\\sigma^j_{\\widetilde{N}},j}, \\end{equation} can be computed in\n    $O(p\\widetilde{N}\\log \\widetilde{N})$. 
Given these indices, finding the best\n    split at a node then simply amounts to iterating, in that order, over the\n    samples $\\smash{\\sigma^j_1,\\dots,\\sigma^j_{\\widetilde{N}}}$ (for all $K$ of the\n    split variables $X_j$), which reduces complexity to\n    $O(K\\widetilde{N})$ instead of $O(K\\widetilde{N}\\log \\widetilde{N})$.\n    Partitioning the node samples into $t_L$ and $t_R$ then requires partitioning\n    all $p$ lists of sorted indices into $2p$ sub-lists of $\\widetilde{N}_{t_L}$ and\n    $\\widetilde{N}_{t_R}$ sorted indices\n    $\\smash{\\sigma^j_1,\\dots,\\sigma^j_{\\widetilde{N}_{t_L}}}$ and\n    $\\smash{\\sigma^j_1,\\dots,\\sigma^j_{\\widetilde{N}_{t_R}}}$, which can be done in\n    $O(p\\widetilde{N})$. In total, the within-node complexity is\n    \\begin{equation}\n    C(\\widetilde{N}) = O(K\\widetilde{N} + p\\widetilde{N}) = O(p\\widetilde{N}).\n    \\end{equation} Using this\n    alternative strategy, the time complexity of RF for the best, the worst and the\n    average cases is respectively $O(Mp\\widetilde{N}\\log \\widetilde{N})$,\n    $O(Mp\\widetilde{N}^2)$ and $O(Mp\\widetilde{N}\\log \\widetilde{N})$, as derived\n    from Theorems~\\ref{thm:6:best:kn}, \\ref{thm:6:worst:kn} and\n    \\ref{thm:6:average:kn} for $K=p$. Neglecting constant factors, the ratio between the two\n    implementations is $O(\\frac{p}{K\\log \\widetilde{N}})$, which might not necessarily\n    lead to faster building times depending on $K$ and the size of the problem.\n    For this reason, pre-sorting is in fact not used within the Scikit-Learn implementation\n    of random forests.\n\n\\end{description}\n\n\\subsection{Criteria}\n\nThe last components of the implementation of decision trees in Scikit-Learn are\n\\texttt{Criterion} objects for evaluating the goodness of splits. Supported\ncriteria are \\texttt{\"gini\"} and \\texttt{\"entropy\"} in classification, and the\nreduction of variance \\texttt{\"mse\"} in regression. All of them implement the\nupdate mechanisms described in Section~\\ref{sec:best-split-ordered}, such that\nthe iterative evaluation of $N_t-1$ splits on an ordered variable remains an\n$O(N_t)$ operation.\n\n\\subsection{Parallelization}\n\nThe Scikit-Learn implementation of random forests supports parallelism to\ndistribute computations over multi-core architectures. The simple, yet effective,\napproach that we implement is to consider the construction of a forest as an\nembarrassingly parallel problem, in which the $M$ decision trees are built\nconcurrently and independently on multiple processors. Assuming no overhead due\nto parallelization, the theoretical speedup of this strategy is equal to the\nnumber $N_p$ of processors used for building the forest. In practice however,\nthis upper bound is not always strictly reached due to the variance of the time\nrequired for building randomized trees. It may indeed happen that some trees\ntake longer to build than others, thereby under-utilizing computational\nresources since some jobs finish sooner than others.\n\n
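At the API level, this forest-level parallelism is exposed through the \\texttt{n\\_jobs} hyper-parameter of the forest estimators, as in the following snippet. The same parameter similarly distributes the prediction phase over multiple cores.\n\n\\vskip0.3cm\n\\begin{pythoncode}\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Build the M = 250 trees concurrently on 4 cores\n# (n_jobs=-1 would use all available cores).\nclf = RandomForestClassifier(n_estimators=250, n_jobs=4)\nclf.fit(X_train, y_train)\n\\end{pythoncode}\n\n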
More fine-grained approaches consist in building decision trees one at a time,\nbut using multiple processors. In the \\textit{node-based decomposition}, the\nconstruction of a tree is distributed by building multiple nodes concurrently.\nIn this case, workload can be divided between processors using a breadth-first\ninduction procedure and assigning whole sub-trees to develop once the waiting\nqueue contains $N_p$ nodes. Once again however, some sub-trees may take longer to\nbuild than others, hence under-utilizing computational resources if the\nresulting decision tree is not balanced. To alleviate this issue, processors\nmay be assigned single nodes to develop (that is, one at a time) using a producer-consumer scheme to\nbalance workload in a fair way. Unfortunately, this latter approach often proves to be\ndominated in practice by the overhead due to task assignment and bookkeeping\nactivities. Alternatively, the construction of a tree can also be distributed\nusing an \\textit{attribute-based decomposition}. In this case, the workload is\ndivided by parallelizing the search of the $K$ best splits at a given node (see\nAlgorithm~\\ref{algo:findsplit:random}). For detailed reviews on the topic, see\n\\citep{kufrin:1997,srivastava:2002}.\n\nUsing similar or hybrid approaches, let us finally note that the parallel\ninduction of random forests extends to other distributed architectures,\nincluding GPUs~\\citep{sharp:2008,liao:2013}, FPGAs~\\citep{narayanan:2007} or\nclusters~\\citep{mitchell:2011}.\n\n\\section{Benchmarks}\n\\label{sec:5:benchmarks}\n\n\\subsection{Complexity on artificial data}\n\\label{sec:5:benchmarks:artificial}\n\nIn this section, we study the empirical complexity of random\nforests on artificial data. Our goal is to investigate whether the\ntheoretical results that have been derived in Sections~\\ref{sec:5:complexity-fit}\nand \\ref{sec:5:complexity-predict} hold true in practice, even when\nour assumptions are not necessarily satisfied.\n\nFor our experiments on simulated data, we consider the Friedman~\\#1 artificial\nregression problem~\\citep{friedman:1991}. It consists of $10$ independent\ninput variables $X_1, \\dots, X_{10}$, each uniformly distributed over $[0,1]$.\nThe output $Y$ is a real-valued variable and is given by\n\\begin{equation}\nY = 10 \\sin(\\pi X_1 X_2) + 20(X_3 - 0.5)^2 + 10 X_4 + 5 X_5 + \\epsilon\n\\end{equation}\nwhere $\\epsilon$ is ${\\cal N}(0, 1)$. We study the effects of the number $M$ of\ntrees in the forest, the number $N$ of input samples, the number $p$ of input\nvariables (by adding new irrelevant variables), the number $K$ of variables drawn at each node and the effects of\nusing bootstrap replicates ${\\cal L}^m$ instead of ${\\cal L}$. Hyper-parameters are studied independently\nfor both (fully developed) Random Forests (RF) and Extremely Randomized Trees (ETs) built\non $N=1000$ learning samples of $p=10$ input variables, with $M=250$ trees and $K=\\sqrt{p}$ set by\ndefault. Statistics reported below are averages over 10 runs. Figures~\\ref{fig:5:artificial:M}-\\ref{fig:5:artificial:bootstrap}\nall illustrate the effect of one of the hyper-parameters with respect to the time required for building\na forest (on the left plots), the average depth of the trees (in the middle plots)\nand the mean squared error of the ensemble (on the right plots).\n\nFigure~\\ref{fig:5:artificial:M} first demonstrates the effect of the number $M$\nof trees. As expected, the time complexity of the induction procedure is\n$O(M)$ while the average depth of the leaves is constant with $M$, since the\nlatter has no effect on the structure of the individual trees. 
The left plot\nalso confirms that ETs are faster to build than RF, even if trees in ETs are on\naverage one level deeper than trees in RF, as shown in the middle plot.\nFinally, the mean squared error of the model proves to be inversely proportional\nto the number of trees in the forest, which confirms our previous results on\nvariance reduction from Chapter~\\ref{ch:forest}.\n\nFigure~\\ref{fig:5:artificial:N} considers the effect of the number $N$ of\ntraining samples. As shown on the left plot, building times grow slightly\nfaster than linearly, which appears to confirm the respective $O(N\\log^2 N)$\nand $O(N\\log N)$ dependencies of RF and ETs on the size of the learning set (as\nderived in Section~\\ref{sec:5:complexity-fit}). Similarly, the middle plot\nconfirms that the average depth of the leaves grows in $O(\\log N)$ with $N$ (as\nshown in Section~\\ref{sec:5:complexity-predict}). As expected, the right plot\nalso illustrates that the more the data, the better the resulting model.\nAdditionally, it shows that RF and ETs tend towards the same error rate as\n$N$ increases, even if RF is strictly better on smaller learning sets. This\nconfirms once again that finding the very best splits is not strictly necessary\nto achieve good results.\n\n\\begin{figure}\n\\hspace{-1.75cm}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_estimators_time_fit.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_estimators_average_depth.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_estimators_mse.pdf}}\n\\caption{Effect of the number $M$ (= \\texttt{n\\_estimators}) of trees on training time (left), average depth (middle) and mean squared error (right). $N=1000$, $p=10$, $K=\\sqrt{p}$.}\n\\label{fig:5:artificial:M}\n\\end{figure}\n\n\\begin{figure}\n\\hspace{-1.75cm}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_train_time_fit.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_train_average_depth.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_train_mse.pdf}}\n\\caption{Effect of the size $N$ (= \\texttt{n\\_train}) of the learning set on training time (left), average depth (middle) and mean squared error (right). $M=250$, $p=10$, $K=\\sqrt{p}$.}\n\\label{fig:5:artificial:N}\n\\end{figure}\n\n\\begin{figure}\n\\hspace{-1.75cm}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_features_time_fit.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_features_average_depth.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/n_features_mse.pdf}}\n\\caption{Effect of the number $p$ (= \\texttt{n\\_features}) of input variables on training time (left), average depth (middle) and mean squared error (right). $N=1000$, $M=250$, $K=\\sqrt{p}$.}\n\\label{fig:5:artificial:p}\n\\end{figure}\n\nNext, Figure~\\ref{fig:5:artificial:p} shows the effect of irrelevant and noisy\nvariables. The right plot first indicates that adding irrelevant variables\ndeteriorates the accuracy of the forests. This is expected since the more\nirrelevant variables there are, the higher the probability of splitting on one of them\nand therefore of fitting noise in the data. More importantly, the left and\nmiddle plots show that RF is more affected than ETs by the addition of noisy\nvariables. 
This is actually a consequence of the end-cut preference of the classical impurity criteria. When\nsplitting on noisy variables and looking for the best threshold, partitions of\nthe node samples are more likely to be unbalanced than balanced.\nThis explains why, on the middle plot, the average depth of RF eventually gets\nlarger than that of ETs, which suffer less from end-cut preference since\nthresholds are not optimized locally.\n\n\\begin{figure}\n\\hspace{-4cm}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/max_features_time_fit.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/max_features_average_depth.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/max_features_mse.pdf}}\n\\caption{Effect of $K$ (= \\texttt{max\\_features}) in random variable selection on training time (left), average depth (middle) and mean squared error (right). $N=1000$, $M=250$, $p=10$.}\n\\label{fig:5:artificial:K}\n\\end{figure}\n\n\\begin{figure}\n\\hspace{-4cm}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/bootstrap_time_fit.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/bootstrap_average_depth.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{figures/make_friedman1/bootstrap_mse.pdf}}\n\\caption{Effect of bootstrap replicates on training time (left), average depth (middle) and mean squared error (right). $N=1000$, $M=250$, $p=10$, $K=\\sqrt{p}$.}\n\\label{fig:5:artificial:bootstrap}\n\\end{figure}\n\nFigure~\\ref{fig:5:artificial:K} illustrates the effect of random variable\nselection. On the left plot, building times grow as expected in $O(K)$ with the\nnumber $K$ of variables randomly drawn at each node. The middle plot shows that increasing $K$ slightly reduces the depth of the trees, which\nis explained by the fact that better splits are selected as more variables are\nevaluated. Finally, the right plot again confirms results from\nChapter~\\ref{ch:forest}. Random perturbations negatively impact the resulting\nmodel when they are too strong (at $K=1$) but then become beneficial as $K$\nincreases. For RF, the optimal trade-off is at $K=5$ while the lowest error can\nbe achieved in ETs for $K=7$.\n\nFinally, Figure~\\ref{fig:5:artificial:bootstrap} studies the effect of using\nbootstrap replicates. For RF, learning trees from learning sets ${\\cal L}^m$\ndrawn with replacement from ${\\cal L}$ (\\texttt{bootstrap=True}) is about $1.5$ times faster than building\ntrees directly on ${\\cal L}$ (\\texttt{bootstrap=False}), which is in accordance with our theoretical\nresults from Section~\\ref{sec:5:complexity-fit}. The effect for ETs is less\nstrong, but still leads to a non-negligible speedup. The average depth for RF\nand ETs is roughly $0.5$ levels smaller when using bootstrap replicates. Again, this is\nexpected since the average depth grows as $O(\\log N)$ and reducing the\neffective size of the learning set to $\\widetilde{N} = 0.632N$ reduces its\n(base-2) logarithm by $-\\log_2(0.632) = 0.662$. Finally, the right plot shows that using\nbootstrap replicates actually impairs performance on this problem.\n\n
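For reference, the core of this experimental setup can be reproduced along the following lines; \\texttt{make\\_friedman1} is the Scikit-Learn generator for this problem, and exact timings obviously depend on hardware.\n\n\\vskip0.3cm\n\\begin{pythoncode}\nimport time\nfrom sklearn.datasets import make_friedman1\nfrom sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor\n\n# N = 1000 samples, p = 10 input variables, unit-variance noise\nX, y = make_friedman1(n_samples=1000, n_features=10, noise=1.0)\n\nfor Forest in (RandomForestRegressor, ExtraTreesRegressor):\n    est = Forest(n_estimators=250, max_features="sqrt")\n    start = time.time()\n    est.fit(X, y)\n    print(Forest.__name__, time.time() - start)\n\\end{pythoncode}\n\n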
They\nall corroborate the theoretical results presented earlier.\n\n\subsection{Comparison with other implementations}\n\nDue to their apparent simplicity, random forests have been reimplemented on\nmultiple occasions in various machine learning libraries and programming\nlanguages. In this section, we compare the Scikit-Learn implementation\ndeveloped within this work with popular alternatives available in other machine\nlearning packages.\n\n\begin{table}\n    \footnotesize\n    \centering\n    \rotatebox{90}{\n    \begin{tabular}{| c | p{1.75cm} >{\centering}p{0.8cm} p{1.8cm} p{2.5cm} >{\centering}p{1.8cm} >{\centering}p{0.8cm} >{\centering}p{0.8cm} >{\centering}p{1.3cm} p{3.5cm} |}\n    \hline\n        \textbf{Library} & \textbf{Algorithms} & \textbf{Tasks} & \textbf{Impurity} & \textbf{Stopping criteria} & \textbf{Variable importances} & \textbf{Multi-thread} & \textbf{Open source} & \textbf{Language} & \textbf{Reference} \\\n    \hline\n    \hline\n    \textit{Scikit-Learn} & Bagging, RF, ETs, RS, RP & C/R & Gini,\newline Entropy,\newline MSE & \texttt{max\_depth}, \texttt{min\_samples\_split}, \texttt{min\_samples\_leaf}, \texttt{max\_leaf\_nodes} & \cmark & \cmark & BSD & Python & \citep{pedregosa:2011} \\\n    \textit{OpenCV} & Bagging, RF, ETs & C/R & Gini & \texttt{max\_depth}, \texttt{min\_samples\_count}, \texttt{forest\_accuracy} & \cmark & \xmark & BSD & C/C++ & \citep{bradski:2008} \\\n    \textit{OK3} & Bagging, RF, ETs & C/R & MSE & \texttt{varmin},\newline \texttt{nmin},\newline \texttt{maxnbsplits} & \cmark & \xmark & Source-available & C & \citep{geurts:2006} \\\n    \textit{Weka} & Bagging, RF & C & Gini & \texttt{depth} & \xmark & \cmark & GPL & Java & \citep{hall:2009} \\\n    \textit{randomForest} & Bagging, RF & C/R & Gini & \texttt{nodesize}, \texttt{maxnodes} & \cmark & \xmark & GPL & R & \citep{liaw:2002} \\\n    \textit{Orange} & Bagging, RF & C/R & GainRatio, Gini,\newline Relief,\newline MSE & \texttt{worst\_acceptable}, \texttt{min\_subset}, \texttt{min\_instances}, \texttt{max\_depth}, \texttt{max\_majority} & \xmark & \xmark & GPL & Python & \citep{demsar:2013} \\\n    \textit{H2O} & Bagging, RF & C & Gini,\newline Entropy & \texttt{maxDepth} & \xmark & \cmark & Apache & Java & \citep{vitek:2013} \\\n    \hline\n    \end{tabular}}\n    \caption{Popular libraries for random forests.}\n    \label{table:implementations}\n\end{table}\n\nTable~\ref{table:implementations} summarizes the main open source\nimplementations of random forests along with some of their supported features.\nAll implement the original Random Forest algorithm~\citep{breiman:2001} and\ntherefore also provide an interface for Bagging~\citep{breiman:1996b} (since it\ncorresponds to the special case $K=p$). Scikit-Learn, OpenCV and OK3 also offer\nvariants like Extremely Randomized Trees~\citep{geurts:2006} or Random\nSubspaces~\citep{ho:1998}. All support both classification and regression\ntasks, with the exception of Weka and H2O, which appear to support only\nclassification. From a parameter point of view, implementations mostly differ\nin the impurity and stopping criteria they support. 
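In Scikit-Learn, for instance, the stopping criteria listed in the table map directly to constructor parameters. A minimal sketch is given below; the values shown correspond to fully developed trees, with \texttt{n\_estimators} and \texttt{max\_features} set as in the experiments of the previous section.\n\begin{verbatim}\nfrom sklearn.ensemble import RandomForestClassifier\n\nclf = RandomForestClassifier(\n    n_estimators=250,      # M, the number of trees\n    max_features='sqrt',   # K = sqrt(p) variables drawn at each node\n    max_depth=None,        # None = fully developed trees\n    min_samples_split=2,\n    min_samples_leaf=1,\n    max_leaf_nodes=None,\n)\n\end{verbatim}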
The most\ncommon impurity criterion is Gini (or, equivalently, MSE) but some packages,\nlike Orange, include alternative measures such as entropy, gain\nratio or Relief~\citep{kira:1992}. The most common stopping criteria are an\nupper bound on the depth of the trees (e.g., \texttt{max\_depth} in\nScikit-Learn) and a minimum number of samples required for splitting a node (e.g.,\n\texttt{nmin} in OK3). Some packages also allow setting an upper limit on the\nnumber of nodes in the trees or stopping the construction when a fixed accuracy\nis reached. Despite the embarrassingly parallel nature of the algorithm, only Scikit-Learn, Weka and\nH2O make it possible to build random forests using multiple cores. Most\nnotably, H2O builds on top of Hadoop to provide a fully distributed\nimplementation of random forests (i.e., on a cluster).\n\nFor performance reasons, implementations listed in\nTable~\ref{table:implementations} have mostly been written in low-level\nprogramming languages like C, C++ or Java, but usually interface with\nhigher-level languages, like Python, MATLAB or R, for convenience. In the case\nof OpenCV for example, the core implementation is written in C++ but can easily\nbe called from Python through bindings shipped with the library. Let us\nfinally note that Table~\ref{table:implementations} is by no means an\nexhaustive list of all implementations of random forests. Domain-specific\nimplementations (e.g., Random Jungle~\citep{schwarz:2010} for GWAS or TMVA~\citep{hoecker:2007} in particle physics) and\nproprietary software (e.g., Salford Systems, which owns the\n\textit{Random Forests} trademark) are not listed nor evaluated here.\n\nIn the experiments carried out below, we benchmark the Scikit-Learn\nimplementations of RF and ETs against all libraries listed in\nTable~\ref{table:implementations}. We do not compare against H2O since it\nrequires a dedicated cloud environment. Implementations are all benchmarked on\n29 well-known and publicly available classification datasets that were chosen a priori and\nindependently of the results obtained. Overall, these datasets cover a wide\nrange of conditions (including both artificial and real data), with a sample\nsize $N$ ranging from 208 to 70000 and a number $p$ of variables varying from 6\nto 24496. Implementations are evaluated for (fully developed) Random Forests\n(RF) and Extremely Randomized Trees (ETs) whenever available, using\nrespectively 75\% and 25\% of the original dataset as learning and test sets,\n$M=250$ trees and $K=\sqrt{p}$ set by default. Results reported below are\naverages over 10 runs, with the learning and test sets resampled at each run.\nTables~\ref{table:bench:fit} and \ref{table:bench:predict} respectively report\nthe average time required for building a random forest and the average time\nrequired for making predictions, on all 29 datasets and for all implementations.\nComputing times are reported using the Scikit-Learn implementation of RF as a\nbaseline. Results in bold indicate that the implementation is the fastest on\nthe dataset. Remark that times are not reported on \textsc{cifar10} for R and\nOrange because these implementations failed to build a single forest within\n$96$ hours of CPU time.\n\n
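The timing protocol just described can be sketched as follows for a single dataset. This is a hypothetical re-creation rather than the actual benchmark harness, and \texttt{load\_digits} merely stands in for one of the 29 datasets.\n\begin{verbatim}\n# 75/25 resampled splits, M=250 trees, K=sqrt(p), 10 runs.\nimport time\nimport numpy as np\nfrom sklearn.datasets import load_digits\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\nX, y = load_digits(return_X_y=True)\nfit_times, predict_times = [], []\nfor seed in range(10):   # 10 runs, resampled learning/test sets\n    X_tr, X_te, y_tr, y_te = train_test_split(\n        X, y, train_size=0.75, random_state=seed)\n    clf = RandomForestClassifier(n_estimators=250, max_features='sqrt')\n    t0 = time.time()\n    clf.fit(X_tr, y_tr)\n    fit_times.append(time.time() - t0)\n    t0 = time.time()\n    clf.predict(X_te)\n    predict_times.append(time.time() - t0)\nprint(np.mean(fit_times), np.mean(predict_times))\n\end{verbatim}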
With respect to RF, Table~\ref{table:bench:fit} shows that Scikit-Learn is the\nfastest on average, far ahead of all others. OK3 is the second fastest\nimplementation ($2.63\times$ slower on average), Weka is third ($2.92\times$\nslower), OpenCV is fourth ($12.37\times$ slower), Orange is fifth\n($17.80\times$ slower) and the R implementation is last ($30.73\times$ slower).\nIn particular, the R implementation proves to be very inefficient on large\ndatasets, when both $N$ and $p$ are moderately large (e.g., on \textsc{mnist},\nas further illustrated in Figure~\ref{fig:5:benchmarks:mnist}). A possible explanation\nis that the R implementation uses pre-sorting (see Section~\ref{sec:5:impl:splitters}), which makes the complexity linear\nin $p$ rather than in $K$. Regarding Orange, the results can be explained by\nthe fact that the implementation is written purely in Python (i.e., an\ninterpreted high-level programming language), while all others are written in\noptimized low-level programming languages. With respect to ETs, the\nScikit-Learn implementation is again the fastest, far ahead of all others. More\ninterestingly, the table also shows that ETs are usually faster to build than RF,\nwhich confirms the theoretical results presented earlier. For the\nScikit-Learn implementation, ETs are indeed empirically $\tfrac{1}{0.71} = 1.41\times$\nfaster than RF on average.\n\n\begin{figure}\n\hspace{-1.5cm}\n\subfloat{\includegraphics[width=0.6\textwidth]{figures/ch5_mnist_fit.pdf}}\n\subfloat{\includegraphics[width=0.6\textwidth]{figures/ch5_mnist_predict.pdf}}\n\caption{Average time required for building a forest on the \textsc{mnist} dataset (left) and average time required for making predictions (right).}\n\label{fig:5:benchmarks:mnist}\n\end{figure}\n\nTable~\ref{table:bench:predict} reports results regarding the average time\nrequired for making predictions on the test set. Again, the Scikit-Learn implementation of RF is the\nfastest implementation on average. OpenCV is second ($2.07\times$ slower on\naverage), OK3 is third ($2.68\times$ slower), Weka is fourth ($4.31\times$\nslower), R is fifth ($9.64\times$ slower) and Orange is last ($19.82\times$\nslower). By contrast with fitting times, making predictions from ETs is now\nslightly slower than from RF, which is explained by the fact that trees in ETs\nare on average deeper than in RF.\n\nIn conclusion, the benchmarks show that the Scikit-Learn implementation is on\naverage significantly faster than OpenCV, OK3, Weka, R and Orange. 
While these\nresults suffer from a selection bias (i.e., they depend on the 29 selected\ndatasets), we believe that the general conclusions extend to other datasets.\nMore importantly, the careful design and implementation of each component\nof random forests, as discussed throughout this work, eventually\nproves to be highly rewarding in terms of computational performance.\n\n\begin{table}\n    \footnotesize\n    \hspace{-1.25cm}\n    \rotatebox{90}{\n\begin{tabular}{|c| >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering}p{1.75cm} >{\centering\arraybackslash}p{1.75cm} |}\n\hline\n\textbf{Dataset} & \textbf{Sklearn-RF} & \textbf{Sklearn-ETs} & \textbf{OpenCV-RF} & \textbf{OpenCV-ETs} & \textbf{OK3-RF} & \textbf{OK3-ETs} & \textbf{Weka-RF} & \textbf{R-RF} & \textbf{Orange-RF} \\\n\hline\n\hline\n\textsc{arcene} & 1.00 & \textbf{0.77} & 26.33 & 16.73 & 0.81 & 0.85 & 8.70 & 22.84 & 4.78 \\\n\textsc{breast2} & 1.00 & \textbf{0.41} & 35.10 & 19.03 & 1.20 & 0.99 & 3.88 & 33.41 & 11.42 \\\n\textsc{cifar10} & 1.00 & \textbf{0.52} & 22.82 & 13.76 & 3.78 & 2.30 & 3.29 & -- & -- \\\n\textsc{diabetes} & 1.00 & 0.88 & 0.73 & 0.75 & \textbf{0.59} & 0.65 & 2.25 & 1.07 & 2.42 \\\n\textsc{dig44} & 1.00 & \textbf{0.39} & 1.01 & 0.71 & 2.68 & 2.03 & 1.70 & 1.60 & 10.85 \\\n\textsc{ionosphere} & 1.00 & 0.71 & 0.79 & 0.56 & 0.53 & \textbf{0.34} & 1.87 & 0.80 & 3.01 \\\n\textsc{isolet} & 1.00 & \textbf{0.36} & 5.39 & 3.39 & 4.48 & 2.76 & 2.24 & 8.70 & 16.58 \\\n\textsc{letter} & 1.00 & \textbf{0.94} & 2.34 & 1.82 & 9.05 & 10.57 & 3.13 & 4.43 & 21.04 \\\n\textsc{liver} & 1.00 & 0.95 & 0.38 & 0.46 & \textbf{0.33} & 0.42 & 2.80 & 0.58 & 1.42 \\\n\textsc{madelon} & 1.00 & \textbf{0.46} & 5.82 & 4.07 & 1.38 & 0.82 & 2.06 & 6.18 & 48.48 \\\n\textsc{marti0} & 1.00 & \textbf{0.31} & 5.70 & 4.45 & 1.08 & 0.55 & 1.36 & 18.99 & 7.16 \\\n\textsc{mnist} & \textbf{1.00} & 1.04 & 21.99 & 16.47 & 7.48 & 8.43 & 5.06 & 66.14 & 53.90 \\\n\textsc{mnist3vs8} & 1.00 & \textbf{0.95} & 18.74 & 13.90 & 3.32 & 3.04 & 3.68 & 54.89 & 27.80 \\\n\textsc{mnist4vs9} & 1.00 & \textbf{0.97} & 20.21 & 14.70 & 3.46 & 3.22 & 4.09 & 57.10 & 29.42 \\\n\textsc{musk2} & 1.00 & \textbf{0.44} & 4.59 & 2.44 & 2.00 & 0.95 & 1.97 & 5.48 & 22.63 \\\n\textsc{pendigits} & 1.00 & \textbf{0.61} & 1.60 & 1.12 & 3.61 & 3.09 & 2.30 & 2.64 & 9.24 \\\n\textsc{reged0} & 1.00 & \textbf{0.43} & 6.65 & 4.29 & 1.18 & 0.64 & 1.63 & 10.02 & 7.04 \\\n\textsc{ring-norm} & 1.00 & \textbf{0.22} & 0.63 & 0.43 & 1.13 & 0.42 & 1.11 & 1.21 & 9.92 \\\n\textsc{satellite} & 1.00 & \textbf{0.56} & 2.10 & 1.36 & 2.69 & 1.97 & 1.94 & 2.56 & 13.88 \\\n\textsc{secom} & 1.00 & \textbf{0.21} & 5.59 & 2.61 & 1.54 & 0.48 & 1.00 & 5.46 & 12.88 \\\n\textsc{segment} & 1.00 & \textbf{0.76} & 1.09 & 0.81 & 1.91 & 1.79 & 1.83 & 1.64 & 5.97 \\\n\textsc{sido0} & \textbf{1.00} & 1.55 & 133.11 & 140.63 & 9.70 & 8.62 & 9.69 & 468.85 & 49.11 \\\n\textsc{sonar} & 1.00 & 0.78 & 0.80 & 0.64 & 0.39 & \textbf{0.30} & 1.82 & 0.81 & 2.61 \\\n\textsc{spambase} & \textbf{1.00} & 1.41 & 4.06 & 3.36 & 2.24 & 3.25 & 2.90 & 4.59 & 15.85 \\\n\textsc{tis} & \textbf{1.00} & 1.84 & 26.83 & 24.59 & 3.73 & 4.31 & 5.41 & 74.77 & 69.51 \\\n\textsc{two-norm} & 1.00 & \textbf{0.30} & 0.98 & 0.66 & 1.12 & 0.55 & 1.21 & 1.33 & 17.12 \\\n\textsc{vehicle} & 1.00 & 
\\textbf{0.74} & 1.40 & 1.08 & 1.63 & 1.46 & 2.23 & 1.69 & 6.51 \\\\\n\\textsc{vowel} & 1.00 & 0.59 & 0.67 & \\textbf{0.59} & 1.87 & 1.50 & 1.99 & 1.08 & 8.18 \\\\\n\\textsc{waveform} & 1.00 & \\textbf{0.43} & 1.20 & 0.94 & 1.49 & 0.98 & 1.49 & 1.50 & 9.79 \\\\\n\\hline\n\\hline\n\\textit{Average} & 1.00 & \\textbf{0.71} & 12.37 & 10.22 & 2.63 & 2.32 & 2.92 & 30.73 & 17.80 \\\\\n\\textit{Median} & 1.00 & \\textbf{0.61} & 4.06 & 2.44 & 1.87 & 1.46 & 2.23 & 4.51 & 11.14 \\\\\n\\hline\n\\end{tabular}\n}\n    \\caption{Average time required for building a random forest, relative to the Scikit-Learn implementation of the Random Forest algorithm (first column). The lower, the better.}\n    \\label{table:bench:fit}\n\\end{table}\n\n\\begin{table}\n    \\footnotesize\n    \\hspace{-3.5cm}\n    \\rotatebox{90}{\n\\begin{tabular}{|c| >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering}p{1.75cm} >{\\centering\\arraybackslash}p{1.75cm} |}\n\\hline\n\\textbf{Dataset} & \\textbf{Sklearn-RF} & \\textbf{Sklearn-ETs} & \\textbf{OpenCV-RF} & \\textbf{OpenCV-ETs} & \\textbf{OK3-RF} & \\textbf{OK3-ETs} & \\textbf{Weka-RF} & \\textbf{R-RF} & \\textbf{Orange-RF} \\\\\n\\hline\n\\hline\n\\textsc{arcene} & 1.00 & 1.02 & \\textbf{0.38} & 0.40 & 0.55 & 0.78 & 5.65 & 5.46 & 6.70 \\\\\n\\textsc{breast2} & \\textbf{1.00} & 1.11 & 2.00 & 1.76 & 3.63 & 4.97 & 33.92 & 33.26 & 44.73 \\\\\n\\textsc{cifar10} & \\textbf{1.00} & 1.13 & 3.17 & 2.25 & 2.82 & 3.05 & 6.23 & -- & -- \\\\\n\\textsc{diabetes} & 1.00 & 1.25 & 1.00 & 1.78 & 1.34 & 2.47 & 1.49 & \\textbf{0.99} & 11.41 \\\\\n\\textsc{dig44} & \\textbf{1.00} & 1.25 & 3.60 & 4.40 & 4.62 & 5.86 & 3.25 & 1.44 & 21.86 \\\\\n\\textsc{ionosphere} & 1.00 & 1.05 & \\textbf{0.35} & 0.61 & 0.47 & 0.74 & 1.36 & 0.45 & 6.56 \\\\\n\\textsc{isolet} & \\textbf{1.00} & 1.19 & 2.21 & 2.25 & 2.03 & 2.63 & 2.92 & 2.17 & 12.28 \\\\\n\\textsc{letter} & 1.00 & 1.15 & 2.25 & 2.71 & 3.18 & 3.87 & 2.69 & \\textbf{0.85} & 14.91 \\\\\n\\textsc{liver} & 1.00 & 1.14 & \\textbf{0.49} & 0.93 & 0.62 & 1.22 & 1.54 & 0.58 & 6.01 \\\\\n\\textsc{madelon} & \\textbf{1.00} & 1.29 & 2.83 & 4.01 & 7.28 & 11.03 & 3.58 & 3.01 & 33.41 \\\\\n\\textsc{marti0} & 1.00 & 1.10 & \\textbf{0.68} & 1.00 & 0.73 & 1.09 & 3.23 & 2.42 & 9.80 \\\\\n\\textsc{mnist} & \\textbf{1.00} & 1.09 & 2.98 & 2.18 & 2.68 & 3.09 & 3.10 & 3.50 & 21.39 \\\\\n\\textsc{mnist3vs8} & \\textbf{1.00} & 1.05 & 1.74 & 1.71 & 1.90 & 2.37 & 2.55 & 3.46 & 15.76 \\\\\n\\textsc{mnist4vs9} & \\textbf{1.00} & 1.21 & 2.29 & 2.24 & 2.42 & 2.92 & 3.21 & 5.21 & 19.46 \\\\\n\\textsc{musk2} & \\textbf{1.00} & 1.15 & 2.62 & 3.31 & 3.10 & 4.18 & 2.88 & 2.07 & 29.81 \\\\\n\\textsc{pendigits} & \\textbf{1.00} & 1.18 & 2.70 & 3.72 & 3.66 & 4.97 & 2.89 & 1.03 & 22.49 \\\\\n\\textsc{reged0} & 1.00 & 1.09 & \\textbf{0.48} & 0.69 & 0.51 & 0.81 & 2.84 & 2.37 & 10.70 \\\\\n\\textsc{ring-norm} & \\textbf{1.00} & 1.48 & 4.35 & 6.24 & 5.74 & 8.29 & 3.30 & 1.59 & 28.55 \\\\\n\\textsc{satellite} & \\textbf{1.00} & 1.07 & 2.64 & 3.43 & 3.46 & 4.51 & 2.63 & 1.16 & 23.14 \\\\\n\\textsc{secom} & \\textbf{1.00} & 1.07 & 1.87 & 1.98 & 2.09 & 2.22 & 3.06 & 2.74 & 17.35 \\\\\n\\textsc{segment} & 1.00 & 1.49 & 1.25 & 2.15 & 1.66 & 2.86 & 2.23 & \\textbf{0.82} & 18.98 \\\\\n\\textsc{sido0} & \\textbf{1.00} & 1.12 & 2.02 & 2.18 & 2.52 & 2.96 & 11.25 & 184.54 & 46.13 \\\\\n\\textsc{sonar} & 1.00 & 1.03 & \\textbf{0.20} & 0.33 & 0.23 & 0.41 & 1.01 & 0.39 & 4.70 
\\\\\n\\textsc{spambase} & \\textbf{1.00} & 1.38 & 2.91 & 3.90 & 4.32 & 8.89 & 3.67 & 1.73 & 26.51 \\\\\n\\textsc{tis} & \\textbf{1.00} & 1.33 & 2.70 & 2.66 & 2.82 & 3.66 & 3.40 & 3.70 & 16.65 \\\\\n\\textsc{two-norm} & \\textbf{1.00} & 1.33 & 4.09 & 5.93 & 5.35 & 7.57 & 3.52 & 1.52 & 32.80 \\\\\n\\textsc{vehicle} & \\textbf{1.00} & 1.17 & 1.74 & 2.50 & 2.26 & 3.21 & 2.57 & 1.07 & 17.61 \\\\\n\\textsc{vowel} & 1.00 & 1.18 & 1.07 & 1.60 & 1.43 & 2.12 & 1.53 & \\textbf{0.76} & 10.83 \\\\\n\\textsc{waveform} & \\textbf{1.00} & 1.35 & 3.48 & 4.95 & 4.37 & 6.34 & 3.40 & 1.52 & 24.53 \\\\\n\\hline\n\\hline\n\\textit{Average} & \\textbf{1.00} & 1.19 & 2.07 & 2.54 & 2.68 & 3.76 & 4.31 & 9.64 & 19.82 \\\\\n\\textit{Median} & \\textbf{1.00} & 1.15 & 2.21 & 2.24 & 2.52 & 3.05 & 3.06 & 1.66 & 18.29 \\\\\n\\hline\n\\end{tabular}\n}\n    \\caption{Average time required for making predictions, relative to the Scikit-Learn implementation of the Random Forest algorithm (first column). The lower, the better.}\n    \\label{table:bench:predict}\n\\end{table}\n", "meta": {"hexsha": "f313e2008b3c3517cf8a3311c3828b1975c20f73", "size": 91720, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapters/chapter05.tex", "max_stars_repo_name": "mathkann/understanding-random-forests", "max_stars_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 353, "max_stars_repo_stars_event_min_datetime": "2015-01-03T13:34:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T05:16:30.000Z", "max_issues_repo_path": "tex/chapters/chapter05.tex", "max_issues_repo_name": "mathkann/understanding-random-forests", "max_issues_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-06-29T05:43:41.000Z", "max_issues_repo_issues_event_max_datetime": "2016-06-29T05:43:41.000Z", "max_forks_repo_path": "tex/chapters/chapter05.tex", "max_forks_repo_name": "mathkann/understanding-random-forests", "max_forks_repo_head_hexsha": "d2c5e0174d1a778be37a495083d756b2829160ec", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 153, "max_forks_repo_forks_event_min_datetime": "2015-01-14T03:46:42.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-26T10:13:51.000Z", "avg_line_length": 58.5696040868, "max_line_length": 331, "alphanum_fraction": 0.7276493676, "num_tokens": 27868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085859124002, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5995703504976873}}
{"text": "\\section{Ambiguity}\n\nA sentence $x$ of $G$ is ambiguous iff it admits different syntax trees. In that case $G$ is ambiguous. The degree of ambiguity of $x$ is the number of distinct trees of $x$; that of $G$ is the maximum over its sentences.\n\n\\paragraph{Bilateral Recursions} $E \\rarr E + E | i$ becomes $E \\rarr i + E | i$.\n\n\\paragraph{Left-Right Recursions in different rules} $A \\rarr aA | Ab | c$. Remedies: generate using different rules or force an order of derivation.\n\n\\paragraph{Union of Languages} If $L_1 \\cap L_2 \\ne \\emptyset$ then the grammar of $L_1 \\cup L_2$ is ambiguous (every sentence in the intersection has 2 derivations). Remedy: provide disjoint sets of rules for $L_1 \\cap L_2$, $L_1 \\setminus L_2$ and $L_2 \\setminus L_1$.\n\n\\paragraph{Concatenation of Languages} $G_1 . G_2$ is ambiguous if $\\exists x_1 \\in L_1, x_2 \\in L_2$ such that $x_1 = uv$ with $u \\in L_1$ and $x_2 = vz$ with $z \\in L_2$: then $uvz$ can be parsed as $(uv)z$ or $u(vz)$.\n\n\\paragraph{Inherent Ambiguity} A language is inherently ambiguous if all its grammars are ambiguous, e.g. unions $L_1 \\cup L_2$ where the intersection $L_1 \\cap L_2$ is not CF.\n\n\\paragraph{Lack of Order in Derivations} $S \\rarr bSc | bbSc | \\epsilon$ becomes $S \\rarr bSc | D$, $D \\rarr bbDc | \\epsilon$.\n", "meta": {"hexsha": "aff3a9b11559d8b92d9d1f37672656f8df5665f7", "size": 1189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "grammars/ambiguity.tex", "max_stars_repo_name": "Kakasinho/FLC-cheatsheet", "max_stars_repo_head_hexsha": "9293e89e803006f1b419c78087caa5d5e04a5931", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-01-13T14:36:20.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-18T16:22:18.000Z", "max_issues_repo_path": "grammars/ambiguity.tex", "max_issues_repo_name": "Kakasinho/FLC-cheatsheet", "max_issues_repo_head_hexsha": "9293e89e803006f1b419c78087caa5d5e04a5931", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "grammars/ambiguity.tex", "max_forks_repo_name": "Kakasinho/FLC-cheatsheet", "max_forks_repo_head_hexsha": "9293e89e803006f1b419c78087caa5d5e04a5931", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-21T11:05:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-17T14:59:50.000Z", "avg_line_length": 74.3125, "max_line_length": 235, "alphanum_fraction": 0.7115222876, "num_tokens": 399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5995703470514332}}
{"text": "%!TEX root = da2020-09.tex\n\n\\Chapter{9}{Round Elimination}\n\n\\noindent\nIn this chapter we introduce the basic idea of a proof technique, called \\emph{round elimination}.\nRound elimination is based on the following idea. Assume that there exists a distributed algorithm $S_0$ with complexity $T$ solving a problem $\\Pi_0$. Then there exists a distributed algorithm $S_1$ with complexity $T-1$ for solving another problem $\\Pi_1$. That is, if we can solve problem $\\Pi_0$ in $T$ communication rounds, then we can solve a related problem $\\Pi_1$ exactly one round faster\\mydash we can ``eliminate one round''. If this operation is repeated $T$ times, we end up with some algorithm $S_T$ with round complexity $0$ for some problem $\\Pi_T$. If $\\Pi_T$ is not a \\emph{trivial} problem, that is, cannot be solved in $0$ rounds, we have reached a contradiction: therefore the assumption that $\\Pi_0$ can be solved in $T$ rounds has to be wrong. This is a very useful approach, as it is much easier to reason about $0$-round algorithms than about algorithms in general.\n\n\\section{Bipartite Model and Biregular Trees}\n\nWhen dealing with round elimination, we will consider a model that is a variant of the $\\PN$ model from Chapter~\\chapterref{3}. We will restrict our attention to specific families of graphs (see Figure~\\ref{fig:biregular}):\n\\begin{enumerate}\n\t\\item \\textbf{Bipartite.} The set of nodes $V$ is partitioned into two sets: the \\emph{active} nodes $V_A$ and the \\emph{passive} nodes $V_P$. The partitioning forms a proper $2$-coloring of the graph, i.e., each edge connects an active node with a passive node. The role of a node\\mydash active or passive\\mydash is part of the local input.\n\t\\item \\textbf{Biregular trees.} We will assume that the input graphs are \\emph{biregular} trees: the graph is connected, there are no cycles, each node in $V_A$ has degree $d$ or 1, and each node in $V_P$ has degree $\\delta$ or 1. We say that such a tree is $(d,\\delta)$-biregular. See Figure~\\ref{fig:biregular} for an illustration.\n\\end{enumerate}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[page=\\PBipartiteModel,scale=0.33]{figs.pdf}\n\t\\caption{The bipartite model; black nodes are active and white nodes are passive. (a)~A $(3,3)$-biregular tree. (b)~A $(3,2)$-biregular tree.} \\label{fig:biregular}\n\\end{figure}\n\n\\subsection{Bipartite Locally Verifiable Problem}\n\nWe consider a specific family of problems, called \\emph{bipartite locally verifiable} problems. Such a problem is defined as a 3-tuple $\\Pi = (\\Sigma, \\collA, \\collP)$, where:\n\\begin{itemize}\n\t\\item $\\Sigma$ is a finite alphabet.\n\t\\item $\\collA$ and $\\collP$ are finite collections of multisets, where each multiset $A \\in \\collA$ and $P \\in \\collP$ consists of a finite number of elements from $\\Sigma$. These are called the \\emph{active} and \\emph{passive configurations}. \n\\end{itemize}\nRecall that multisets are sets that allow elements to be repeated. We use the notation $[x_1, x_2, \\dotsc, x_k]$ for a multiset that contains $k$ elements; for example, $[1,1,2,2,2]$ is a multiset with two $1$s and three $2$s. Note that the order of elements does not matter, for example, $[1,1,2] = [1,2,1] = [2,1,1]$.\n\nIn problem $\\Pi$, each active node $v \\in V_A$ must label its incident $\\deg(v)$ edges with elements of $\\Sigma$ such that the labels of the incident edges, considered as a multiset, form an element of $\\collA$. The order of the labels does not matter. 
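When experimenting with these definitions on a computer, one convenient convention is to normalize a multiset to a sorted tuple. This convention is assumed in the small illustrative sketches below; it is not part of the formal model.\n\begin{verbatim}\ndef multiset(labels):\n    # order-insensitive representation: [1,1,2] = [1,2,1] = [2,1,1]\n    return tuple(sorted(labels))\n\nassert multiset([1, 2, 1]) == multiset([2, 1, 1]) == (1, 1, 2)\n\end{verbatim}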
The passive nodes do not have outputs. Instead, we require that for each passive node the labels of its incident edges, again considered as a multiset, form an element of $\collP$. A labeling $\varphi\colon E \to \Sigma$ is a solution to $\Pi$ if and only if the incident edges of all active and passive nodes are labeled according to some configuration.\n\nIn this chapter we will only consider labelings such that all nodes of degree 1 accept any configuration: these will not be explicitly mentioned in what follows. Since we only consider problems in $(d,\delta)$-biregular trees, each active configuration will have $d$ elements and each passive configuration $\delta$ elements.\n\n\subsection{Examples} \label{ssec:bipartite-examples}\n\nTo illustrate the definition of bipartite locally verifiable labelings, we consider some examples (see Figure~\ref{fig:bipartite-problem-examples}).\n\n\begin{figure}\n\t\centering\n\t\includegraphics[page=\PBipartiteModelExamples,scale=0.3]{figs.pdf}\n\t\caption{Bipartite locally verifiable labeling problems. (a)~$5$-edge coloring in a $(3,3)$-biregular tree. (b)~Maximal matching in a $(3,3)$-biregular tree. (c)~Sinkless orientation in a $(3,3)$-biregular tree. (d)~Weak 3-labeling in a $(3,2)$-biregular tree.} \label{fig:bipartite-problem-examples}\n\end{figure}\n\n\paragraph{Edge Coloring.} A $c$-edge coloring is an assignment of labels from $\{1,2,\dotsc,c\}$ to the edges such that no node has two incident edges with the same label. \n\nConsider the problem of $5$-edge coloring $(3,3)$-biregular trees. The alphabet $\Sigma$ consists of the five edge colors $\{ 1, 2, 3, 4, 5 \}$. The active configurations consist of all multisets of three elements $[x,y,z]$, such that all elements are distinct and come from $\Sigma$. The problem is symmetric, and the passive configurations consist of the same multisets:\n\[\n\begin{split}\n\t\collA = \collP = \bigl\{\,\n\t\t&[1,2,3],\,\n\t\t[1,2,4],\,\n\t\t[1,2,5],\,\n\t\t[1,3,4],\,\n\t\t[1,3,5], \\\n\t\t&[1,4,5],\,\n\t\t[2,3,4],\,\n\t\t[2,3,5],\,\n\t\t[2,4,5],\,\n\t\t[3,4,5]\n\t\,\bigr\}.\n\end{split}\n\]\n\n\paragraph{Maximal Matching.} A maximal matching $M$ is a subset of the edges such that no two edges of $M$ share a node and no further edge can be added to~$M$.\n\nConsider maximal matching on $(3,3)$-biregular trees. To encode a matching, we could use just two labels: $\mM$ for matched and $\mU$ for unmatched. Such a labeling, however, has no way of guaranteeing maximality. We use a third label $\mP$, called a pointer:\n\[\n\t\Sigma = \{ \mM, \mP, \mU \}.\n\]\nThe active nodes either output $[ \mM, \mU, \mU ]$, denoting that the edge marked $\mM$ is in the matching, or they output $[\mP, \mP, \mP]$, denoting that they are unmatched, and thus all passive neighbors \emph{must} be matched with another active node:\n\[\n\t\collA = \bigl\{\, [ \mM, \mU, \mU ], \, [\mP, \mP, \mP] \, \bigr\}.\n\]\nPassive nodes must verify that they are matched with at most one node, and that if they have an incident label $\mP$, then they also have an incident label $\mM$ (to ensure maximality). Hence the passive configurations are\n\[\n\t\collP = \bigl\{\,\n\t\t[ \mM, \mP, \mP ],\,\n\t\t[ \mM, \mP, \mU ],\,\n\t\t[ \mM, \mU, \mU ],\,\n\t\t[ \mU, \mU, \mU ]\n\t\,\bigr\}.\n\]\n
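With the sorted-tuple convention from above, checking whether the labels around a node form a legal configuration of this encoding takes one line; again an illustrative sketch, not part of the formal model.\n\begin{verbatim}\nA = {('M', 'U', 'U'), ('P', 'P', 'P')}\nP = {('M', 'P', 'P'), ('M', 'P', 'U'), ('M', 'U', 'U'), ('U', 'U', 'U')}\n\ndef ok(labels, configs):\n    return tuple(sorted(labels)) in configs\n\nprint(ok(['U', 'M', 'U'], A))  # True: a matched active node\nprint(ok(['P', 'M', 'U'], P))  # True: the pointer P sees a matched edge M\n\end{verbatim}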
\paragraph{Sinkless Orientation.} A \emph{sinkless orientation} is an orientation of the edges such that each node has an edge oriented away from it. That is, no node is a \emph{sink}. We will consider here sinkless orientation in $(3,3)$-biregular trees; leaf nodes can be sinks, but nodes of degree $3$ must have at least one outgoing edge.\n\nTo encode sinkless orientation, each active node chooses an orientation of its incident edges: outgoing edges are labeled $\mO$ and incoming edges $\mI$. Thus the alphabet is $\Sigma = \{ \mO, \mI \}$. Each node must have an outgoing edge, so the active configurations are all multisets that contain at least one $\mO$:\n\[\n\t\collA = \bigl\{ [\mO, x, y] \bigm| x, y \in \Sigma \bigr\}.\n\]\nThe passive configurations are similar, but the roles of the labels are reversed: an outgoing edge for an active node is an incoming edge for a passive node. Therefore each passive node requires that at least one of its incident edges is labeled $\mI$, and the passive configurations are\n\[\n\t\collP = \bigl\{ [\mI, x, y] \bigm| x, y \in \Sigma \bigr\}.\n\]\n\n\paragraph{Weak Labeling.} We will use the following problem as the example in the remainder of this chapter. Consider $(3,2)$-biregular trees. A \emph{weak $3$-labeling} is an assignment of labels from the set $\{1,2,3\}$ to the edges such that each active node has at least two incident edges labeled with different labels. Each passive node must have its incident edges labeled with the same label. The problem can be formalized as\n\begin{align*}\n\t\Sigma &= \{1,2,3\}, \\\n\t\collA &= \bigl\{\,\n\t\t[ 1, 1, 2 ],\,\n\t\t[ 1, 1, 3 ],\,\n\t\t[ 1, 2, 2 ],\,\n\t\t[ 1, 2, 3 ],\,\n\t\t[ 1, 3, 3 ],\,\n\t\t[ 2, 2, 3 ],\,\n\t\t[ 2, 3, 3 ]\n\t\,\bigr\}, \\\n\t\collP &= \bigl\{\,\n\t\t[ 1, 1 ],\,\n\t\t[ 2, 2 ],\,\n\t\t[ 3, 3 ]\n\t\,\bigr\}. \n\end{align*}\n\n\section{Introducing Round Elimination}\n\nRound elimination is based on the following basic idea. Assume that we can solve some bipartite locally verifiable problem $\Pi_0$ in $T$ communication rounds on $(d,\delta)$-biregular trees. Then there exists a bipartite locally verifiable problem $\Pi_1$, called the \emph{output problem} of $\Pi_0$, that can be solved in $T-1$ rounds on $(\delta,d)$-biregular trees. The output problem is uniquely defined, and we refer to the output problem of $\Pi$ as $\re(\Pi)$. The definition of the output problem will be given in Section~\ref{ssec:output-problems}.\n\nA single round elimination step is formalized in the following lemma.\n\n\begin{lemma}[Round elimination lemma] \label{lem:round-elimination}\n\tLet $\Pi$ be a bipartite locally verifiable problem that can be solved in $T$ rounds in $(d,\delta)$-biregular trees. Then the output problem $\re(\Pi)$ of $\Pi$ can be solved in $T-1$ rounds in $(\delta,d)$-biregular trees.\n\end{lemma}\n\n\subsection{Impossibility Using Iterated Round Elimination}\nLemma~\ref{lem:round-elimination} can be iterated, applying it to the output problem of the previous step. This will yield a sequence of $T+1$ problems\n\[\n\t\Pi_0 \rightarrow \Pi_1 \rightarrow \cdots \rightarrow \Pi_{T},\n\]\nwhere $\Pi_{i+1} = \re(\Pi_i)$ for each $i = 0, 1, \dotsc, T-1$.\n\nIf we assume that there is a $T$-round algorithm for $\Pi_0$, then by an iterated application of Lemma~\ref{lem:round-elimination}, there is a $(T-1)$-round algorithm for $\Pi_1$, a $(T-2)$-round algorithm for $\Pi_2$, and so on. In particular, there is a $0$-round algorithm for $\Pi_T$.\n\nAlgorithms that run in $0$ rounds are much easier to reason about than algorithms in general. 
Since there is no communication, each active node must simply map its input, essentially its degree, to some output. In particular, we can try to show that there is no 0-round algorithm for $\Pi_T$. If this is the case, we have a contradiction with our original assumption: there is no $T$-round algorithm for $\Pi_0$.\n\nWe will now proceed to formally define output problems.\n\n\subsection{Output Problems} \label{ssec:output-problems}\n\nFor each locally verifiable problem $\Pi$ we will define a unique \emph{output problem} $\re(\Pi)$.\n\nLet $\Pi_0 = (\Sigma_0, \collA_0, \collP_0)$ be a bipartite locally verifiable problem on $(d,\delta)$-biregular trees. We define the output problem $\Pi_1 = \re(\Pi_0) = (\Sigma_1, \collA_1, \collP_1)$ of $\Pi_0$ on $(\delta,d)$-biregular trees as follows\mydash note that we swapped the degrees of active vs.\ passive nodes here.\n\nThe alphabet $\Sigma_1$ consists of all possible non-empty subsets of $\Sigma_0$. The roles of the active and passive nodes are inverted, and new configurations are computed as follows.\n\begin{enumerate}\n\t\item The active configurations $\collA_1$ consist of all multisets\n\t\[\n\t[ X_1, X_2, \dotsc, X_{\delta} ], \text{ where } X_i \in \Sigma_1 \text{ for all } i = 1,\dotsc,\delta,\n\t\] \n\tsuch that for \textbf{\emph{every}} choice of $x_1 \in X_1$, $x_2 \in X_2$, \ldots, $x_{\delta} \in X_{\delta}$ we have $[x_1, x_2, \dotsc, x_{\delta}] \in \collP_0$, i.e., it is a passive configuration of $\Pi_0$.\n\t\item The passive configurations $\collP_1$ consist of all multisets\n\t\[\n\t[ Y_1,  Y_2, \dotsc, Y_d ], \text{ where } Y_i \in \Sigma_1 \text{ for all } i = 1,\dotsc,d,\n\t\]\n\tfor which \textbf{\emph{there exists}} a choice $y_1 \in Y_1$, $y_2 \in Y_2$, \ldots, $y_d \in Y_d$ with $[ y_1, y_2, \dotsc, y_d] \in \collA_0$, i.e., it is an active configuration of $\Pi_0$.\n\end{enumerate}\n\n\subsection{Example: Weak 3-labeling} \label{ssec:example-w3ec}\n\nTo illustrate the definition, let us construct the output problem $\re(\Pi_0) = (\Sigma_1, \collA_1, \collP_1)$ of the weak 3-labeling problem $\Pi_0 = (\Sigma_0, \collA_0, \collP_0)$. Recall that\n\begin{align*}\n\t\Sigma_0 &= \{ 1,2,3 \}, \\\n\t\collA_0 &= \bigl\{\,\n\t\t[ 1, 1, 2 ],\,\n\t\t[ 1, 1, 3 ],\,\n\t\t[ 1, 2, 2 ],\,\n\t\t[ 1, 2, 3 ],\,\n\t\t[ 1, 3, 3 ],\,\n\t\t[ 2, 2, 3 ],\,\n\t\t[ 2, 3, 3 ]\n\t\,\bigr\}, \\\n\t\collP_0 &= \bigl\{\,\n\t\t[ 1, 1 ],\,\n\t\t[ 2, 2 ],\,\n\t\t[ 3, 3 ]\n\t\,\bigr\}. \n\end{align*}\nThe alphabet $\Sigma_1$ consists of all possible (non-empty) subsets of $\Sigma_0$: \n\[\n\t\Sigma_1 = \bigl\{ \{ 1 \}, \{ 2 \}, \{ 3 \}, \{ 1, 2 \}, \{ 1, 3 \}, \{ 2, 3 \}, \{ 1, 2, 3 \} \bigr\}.\n\]\nThe active configurations $\collA_1$ are all multisets $[ X,Y ]$ with $X, Y \in \Sigma_1$ such that \emph{all} choices of elements $x \in X$ and $y \in Y$ result in a multiset $[x,y] \in \collP_0$. For example, $X = \{1\}$ and $Y = \{1,2\}$ is \emph{not} a valid choice: we could choose $x = 1$ and $y = 2$ to construct $[1,2] \notin \collP_0$. In general, whenever $|X| > 1$ or $|Y| > 1$, we can find $x \in X$, $y \in Y$ with $x \ne y$, and then $[x,y] \notin \collP_0$. Therefore the only possibilities are $|X| = |Y| = 1$, and then we must also have $X = Y$. 
We obtain\n\\[ \n\\collA_1 = \\Bigl\\{\\, \\bigl[\\{ 1 \\}, \\{1 \\}\\bigr],\\, \\bigl[\\{ 2 \\}, \\{2 \\}\\bigr],\\, \\bigl[\\{ 3 \\}, \\{3 \\}\\bigr] \\,\\Bigr\\}. \n\\] \nBut since the active configurations only allow singleton sets, we can restrict ourselves to them when listing the possible passive configurations; we obtain simply\n\\[\n\\begin{split}\n\t\\collP_1 = \\Bigl\\{\\,\n\t\t&\\bigl[ \\{1\\}, \\{1\\}, \\{2\\} \\bigr],\\,\n\t\t\\bigl[ \\{1\\}, \\{1\\}, \\{3\\} \\bigr],\\,\n\t\t\\bigl[ \\{1\\}, \\{2\\}, \\{2\\} \\bigr],\\,\n\t\t\\bigl[ \\{1\\}, \\{2\\}, \\{3\\} \\bigr],\\\\[-2pt]\n\t\t&\\bigl[ \\{1\\}, \\{3\\}, \\{3\\} \\bigr],\\,\n\t\t\\bigl[ \\{2\\}, \\{2\\}, \\{3\\} \\bigr],\\,\n\t\t\\bigl[ \\{2\\}, \\{3\\}, \\{3\\} \\bigr]\n\t\\,\\Bigr\\}.\n\\end{split}\n\\]\n\n\\subsection{Complexity of Output Problems}\n\nIn this section we prove Lemma~\\ref{lem:round-elimination} that states that the output problem $\\re(\\Pi)$ of $\\Pi$ can be solved one round faster than $\\Pi$. \n\nThe proof is by showing that we can truncate the execution of a $T$-round algorithm and output the set of \\emph{possible outputs}. As we will see, this is a solution to the output problem.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:round-elimination}]\n\tAssume that we can solve some problem $\\Pi_0 = (\\Sigma_0, \\collA_0, \\collP_0)$ on $(d,\\delta)$-biregular trees in $T$ rounds using some deterministic $\\PN$-algorithm $S$. We want to design an algorithm that works in $(\\delta,d)$-biregular trees and solves $\\Pi_1 = \\re(\\Pi_0)$ in $T-1$ rounds.\n\t\n\tNote that we are considering the same family of networks, but we are only switching the sides that are marked as active and passive. We will call these \\emph{$\\Pi_0$-active} and \\emph{$\\Pi_1$-active} sides, respectively.\n\t\n\tThe algorithm for solving $\\Pi_1$ works as follows. Let $N = (V,P,p)$ be any port-numbered network with a $(\\delta,d)$-biregular tree as the underlying graph. Each $\\Pi_1$-active node $u$, in $T-1$ rounds, gathers its full $(T-1)$-neighborhood $\\ball_N(u, T-1)$. Now it considers all \\emph{possible} outputs of its $\\Pi_0$-active neighbors under the algorithm $S$, and outputs these.\n\t\n\tFormally, this is done as follows. When $N$ is a port-numbered network, we use $\\ball_{N}(u,r)$ to refer to the information within distance $r$ from node $u$, including the local inputs of the nodes in this region, as well as the port numbers of the edges connecting nodes within this region. We say that a port-numbered network $H$ is \\emph{compatible} with $\\ball_{N}(u,r)$ if there is a node $v \\in H$ such that $\\ball_{H}(v,r)$ is isomorphic to $\\ball_{N}(u,r)$.\n\n\tFor each neighbor $v$ of $u$, node $u$ constructs all possible fragments $\\ball_{H}(v, T)$ such that $H$ is compatible with $\\ball_N(u,T-1)$ and has a $(\\delta,d)$-biregular tree as its underlying graph. Then $u$ simulates the $\\Pi_0$-algorithm $S$ on $\\ball_{H}(v,T)$. The algorithm outputs some label $x \\in \\Sigma_0$ on the edge $\\{u,v\\}$. Node $u$ adds each such label $x$ to set $S(u,v)$; finally node $u$ will label edge $\\{u,v\\}$ with $S(u,v)$.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[page=\\PRoundElimination,scale=0.3]{figs.pdf}\n\t\t\\caption{Illustration of the round elimination step. A fragment of a $(2,3)$-biregular tree. The 3-neighborhood of node $u$ consists of the gray area. The 4-neighborhoods of nodes $v$ and $w$ consist of the blue and orange areas, respectively. 
Since the input is a tree, these intersect exactly in the 3-neighborhood of $u$.} \\label{fig:round-elimination}\n\t\\end{figure}\n\t\n\tBy construction, $S(u,v)$ is a nonempty set of labels from $\\Sigma_0$, i.e., $S(u,v) \\in \\Sigma_1$. We now prove that the sets $S(u,v)$ form a solution to $\\Pi_1$. We use the assumption that the underlying graph $G$ is a tree. Let $H$ be any port-numbered network compatible with $\\ball_{N}(u,T-1)$. Consider any two neighbors $v$ and $w$ of $u$: since there are no cycles, we have\n\t\\[\\ball_{H}(v,T) \\cap \\ball_{H}(w,T) = \\ball_{H}(u,T-1) = \\ball_{N}(u,T-1).\\]\n\tIn particular, once $\\ball_{H}(u,T-1)$ is fixed, the outputs of $v$ and $w$, respectively, depend on the structures of $\\ball_{H}(v,T) \\setminus \\ball_{H}(u,T-1)$ and $\\ball_{H}(w,T) \\setminus \\ball_{H}(u,T-1)$, which are completely distinct. See Figure~\\ref{fig:round-elimination} for an illustration. Therefore, if there exist $x \\in S(u,v)$ and $y \\in S(u,w)$, then there exists a port-numbered network $H$ such that running $S$, node $v$ outputs $x$ on $\\{v,u\\}$ and node $w$ outputs $y$ on $\\{ w,u \\}$. This further implies that since $S$ is assumed to work correctly on all port-numbered networks, for any combination of $x_1 \\in S(u,v_1)$, $x_2 \\in S(u,v_2)$, \\ldots, $x_{\\delta} \\in S(u,v_{\\delta})$, we must have that \\[[x_1, x_2, \\dots, x_{\\delta}] \\in \\collP_0.\\] This implies that\n\t\\[\n\t\t[S(u,v_1),\\, S(u,v_2),\\, \\dotsc,\\, S(u, v_{\\delta})] \\in \\collA_1.\n\t\\]\n\t\n\tIt remains to show that for each $\\Pi_0$-active node $v$, it holds that the sets $S(u_1, v)$, $S(u_2, v)$, \\ldots, $S(u_d, v)$, where $u_i$ are neighbors of $v$, form a configuration in $\\collP_1$. To see this, note that the $\\Pi_1$-active nodes $u_i$ simulate $S$ on every port-numbered fragment, including the true neighborhood $\\ball_{N}(v,T)$ of $v$. This implies that the output of $v$ on $\\{v,u_i\\}$ running $S$ in network $N$ is included in $S(u_i, v)$. Since $S$ is assumed to be a correct algorithm, these true outputs $x_1 \\in S(u_1, v)$, $x_2 \\in S(u_2, v)$, \\ldots, $x_{d} \\in S(u_d,v)$ form a configuration \\[[x_1, x_2, \\dots, x_d] \\in \\collA_0,\\] which implies that\n\t\\[\n\t\t[S(u_1, v),\\, S(u_2, v),\\, \\dots,\\, S(u_d,v)] \\in \\collP_1,\n\t\\] as required. \n\\end{proof}\n\n\\subsection{Example: Complexity of Weak 3-labeling} \\label{ssec:output-weak3}\n\nNow we will apply the round elimination technique to show that the weak 3-labeling problem is not solvable in 1 round. To do this, we show that the output problem of weak 3-labeling is not solvable in 0 rounds.\n\n\\begin{lemma}\n\tWeak 3-labeling is not solvable in 1 round in the \\PN-model on $(3,2)$-biregular trees.\n\\end{lemma}\n\n\\begin{proof}\n\tIn Section~\\ref{ssec:example-w3ec} we saw the output problem of weak 3-labeling. We will now show that this problem is not solvable in 0 rounds on $(2,3)$-biregular trees. By Lemma~\\ref{lem:round-elimination}, weak 3-labeling is then not solvable in 1 round on $(3,2)$-biregular trees. Let $\\Pi_1 = (\\Sigma_1, \\collA_1, \\collP_1)$ denote the output problem of weak 3-labeling.\n\t\n\tIn a 0-round algorithm an active node $v$ sees only its own side (active or passive) and its own port numbers. 
Since\n\t\\[ \n\t\\collA_1 = \\Bigl\\{\\, \\bigl[\\{ 1 \\}, \\{1 \\}\\bigr],\\, \\bigl[\\{ 2 \\}, \\{2 \\}\\bigr],\\, \\bigl[\\{ 3 \\}, \\{3 \\}\\bigr] \\,\\Bigr\\},\n\t\\] \n\teach active node $v$ must output the same label $X \\in \\bigl\\{ \\{1\\}, \\{2\\}, \\{3\\} \\bigr\\}$ on both of its incident edges.\n\t\n\tSince all active nodes look the same, they all label their incident edges with exactly one label $X$. Since $[X, X, X]$ is not in $\\collP_1$ for any $X \\in \\Sigma_1$, we have proven the claim. \n\\end{proof}\n\n\\subsection{Example: Iterated Round Elimination} \\label{ssec:repeated}\n\nWe will finish this chapter by applying round elimination \\emph{twice} to weak 3-labeling. We will see that the problem\n\\[\n\t\\Pi_2 = \\re(\\Pi_1) = \\re(\\re(\\Pi_0))\n\\]\nobtained this way \\emph{is} $0$-round solvable.\n\nLet us first construct $\\Pi_2$. Note that this is again a problem on $(3,2)$-biregular trees. We first \\emph{simplify} notation slightly; the labels of $\\Pi_1$ are sets and labels of $\\Pi_2$ would be sets of sets, which gets awkward to write down. But the configurations in $\\Pi_1$ only used \\emph{singleton} sets. Therefore we can \\emph{leave out} all non-singleton sets without changing the problem, and then we can \\emph{rename} each singleton set $\\{ x \\}$ to $x$. After these simplifications, we have got\n\\begin{align*}\n\t\\Sigma_1 &= \\{ 1,2,3 \\}, \\\\\n\t\\collA_1 &= \\bigl\\{\\,\n\t\t[ 1, 1 ],\\,\n\t\t[ 2, 2 ],\\,\n\t\t[ 3, 3 ]\n\t\\,\\bigr\\}, \\\\ \n\t\\collP_1 &= \\bigl\\{\\,\n\t\t[ 1, 1, 2 ],\\,\n\t\t[ 1, 1, 3 ],\\,\n\t\t[ 1, 2, 2 ],\\,\n\t\t[ 1, 2, 3 ],\\,\n\t\t[ 1, 3, 3 ],\\,\n\t\t[ 2, 2, 3 ],\\,\n\t\t[ 2, 3, 3 ]\n\t\\,\\bigr\\}.\n\\end{align*}\nAlphabet $\\Sigma_2$ consists of all non-empty subsets of $\\Sigma_1$, that is \n\\[\n\t\\Sigma_2 = \\bigl\\{ \\{ 1 \\}, \\{ 2 \\}, \\{ 3 \\}, \\{ 1, 2 \\}, \\{ 1, 3 \\}, \\{ 2, 3 \\}, \\{ 1, 2, 3 \\} \\bigr\\}.\n\\]\nThe active configurations are all multisets $[X_1, X_2, X_3]$ where $X_1,X_2,X_3 \\in \\Sigma_2$ such that \\emph{any} way of choosing $x_1 \\in X_1$, $x_2 \\in X_2$, and $x_3 \\in X_3$ is a configuration in $\\collP_1$. 
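The universal and existential conditions in the definition of $\re(\Pi)$ are easy to check mechanically, so the construction of $\Pi_2$ can also be brute-forced. The following sketch (illustrative Python, unrelated to the actual Round Eliminator implementation) enumerates the configurations directly from the definition, using the sorted-tuple convention for multisets:\n\begin{verbatim}\nfrom itertools import combinations, combinations_with_replacement, product\n\ndef powerset(s):\n    s = sorted(s)\n    return [frozenset(c) for r in range(1, len(s) + 1)\n            for c in combinations(s, r)]\n\ndef re_problem(sigma0, A0, P0, d, delta):\n    # output problem of Pi_0 on (d,delta)-biregular trees:\n    # active configurations: delta-multisets, every choice lands in P0;\n    # passive configurations: d-multisets, some choice lands in A0\n    sigma1 = powerset(sigma0)\n    A1 = [c for c in combinations_with_replacement(sigma1, delta)\n          if all(tuple(sorted(x)) in P0 for x in product(*c))]\n    P1 = [c for c in combinations_with_replacement(sigma1, d)\n          if any(tuple(sorted(x)) in A0 for x in product(*c))]\n    return sigma1, A1, P1\n\n# The simplified Pi_1 lives on (2,3)-biregular trees, so d = 2, delta = 3:\nA1 = {(1, 1), (2, 2), (3, 3)}\nP1 = {(1, 1, 2), (1, 1, 3), (1, 2, 2), (1, 2, 3),\n      (1, 3, 3), (2, 2, 3), (2, 3, 3)}\nsigma2, A2, P2 = re_problem({1, 2, 3}, A1, P1, d=2, delta=3)\n\end{verbatim}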
There are many cases to check by hand; the following observation helps organize them (the proof is left as Exercise~\ref{ex:pi2}):\n\begin{itemize}\n\t\item $\collP_1$ consists of all $3$-element multisets over $\Sigma_1$ where at least one of the elements is not $1$, at least one of the elements is not $2$, and at least one of the elements is not $3$.\n\t\item It follows that $\collA_2$ consists of all $3$-element multisets over $\Sigma_2$ where at least one of the elements does not contain $1$, at least one of the elements does not contain $2$, and at least one of the elements does not contain $3$.\n\end{itemize}\nWe can therefore enumerate all possible configurations e.g.\ as follows (here $X,Y,Z \in \Sigma_2$):\n\begin{equation}\label{eq:pi2}\n\begin{split}\n\t\collA_2\n\t      = {} &\Bigl\{ \, \bigl[ X, Y, Z \bigr] \bigm| X \subseteq \{1, 2\}, Y \subseteq \{ 1, 3 \}, Z \subseteq \{ 2, 3 \} \,\Bigr\} \\\n\t{} \cup {} &\Bigl\{ \, \bigl[ X, Y, Z \bigr] \bigm| X \subseteq \{1\}, Y \subseteq \{ 2, 3 \}, Z \subseteq \{ 1, 2, 3 \} \,\Bigr\} \\\n\t{} \cup {} &\Bigl\{ \, \bigl[ X, Y, Z \bigr] \bigm| X \subseteq \{2\}, Y \subseteq \{ 1, 3 \}, Z \subseteq \{ 1, 2, 3 \} \,\Bigr\} \\\n\t{} \cup {} &\Bigl\{ \, \bigl[ X, Y, Z \bigr] \bigm| X \subseteq \{3\}, Y \subseteq \{ 1, 2 \}, Z \subseteq \{ 1, 2, 3 \} \,\Bigr\}.\n\end{split}\n\end{equation}\nOn the passive side, $\collP_2$ consists of all multisets $[X,Y]$ where we can choose $x \in X$ and $y \in Y$ with $[x,y] \in \collA_1$. But $[x,y] \in \collA_1$ is equivalent to $x = y$, and hence $\collP_2$ consists of all multisets $[X,Y]$ where we can choose some $x \in X$ and choose the same value $x \in Y$. Put otherwise,\n\[\n\collP_2 = \Bigl\{ \, \bigl[ X, Y \bigr] \bigm| X \in \Sigma_2,\, Y \in \Sigma_2,\, X \cap Y \ne \emptyset \,\Bigr\}.\n\]\n\n\begin{lemma}\n\tLet $\Pi_0$ denote the weak 3-labeling problem. The problem $\Pi_2 = \re(\re(\Pi_0)) = (\Sigma_2, \collA_2, \collP_2)$ is solvable in $0$ rounds.\n\end{lemma}\n\n\begin{proof}\n\tThe active nodes always choose the configuration\n\t\[\n\t\bigl[\{ 1, 2 \}, \{ 1, 3 \}, \{ 2, 3 \} \bigr] \in \collA_2\n\t\]\n\tand assign the sets in some way using the port numbers, e.g., the edge incident to port $1$ is labeled with $\{2,3\}$, the edge incident to port $2$ is labeled with $\{1,3\}$, and the edge incident to port $3$ is labeled with $\{1,2\}$.\n\n\tSince each pair of these sets has a non-empty intersection, no matter which sets are assigned to the incident edges of passive nodes, these form a valid passive configuration in $\collP_2$.\n\end{proof}\n\n\section{Quiz}\n\nConsider the following bipartite locally verifiable labeling problem $\Pi = (\Sigma, \collA, \collP)$ on $(2,2)$-biregular trees:\n\begin{align*}\n\t\Sigma ={}& \{ 1, 2, 3, 4, 5, 6 \}, \\\n\t\collA ={}& \bigl\{\, [1, 6],\, [2, 5],\,[3, 4] \bigr\}, \text{ and} \\\n\t\collP ={}& \bigl\{\, [x, y] \bigm| x \in \{ 3, 5, 6 \}, y \in \{ 1, 2, 3, 4, 5, 6 \} \,\bigr\} \\\n\t  {}\cup{}& \bigl\{\, [x, y] \bigm| x \in \{4,5,6\}, y \in \{ 2, 3, 4, 5, 6 \} \,\bigr\}.\n\end{align*}\nGive a $0$-round algorithm for solving $\Pi$.\n\n\section{Exercises}\n\n\begin{ex}[encoding graph problems]\label{ex:encode-bipartite}\nEven if a graph problem is defined for general (not bipartite) graphs, we can often represent it in the bipartite formalism. 
If we take a $d$-regular tree $G$ and subdivide each edge, we arrive at a $(d,2)$-biregular tree $H$, where the active nodes represent the nodes of $G$ and passive nodes represent the edges of $G$.\n\nUse this idea to encode the following graph problems as bipartite locally verifiable labelings in $(d,2)$-biregular trees. Give a brief explanation of why your encoding is equivalent to the original problem. You can ignore the leaf nodes and their constraints; it is sufficient to specify constraints for the active nodes of degree $d$ and passive nodes of degree~$2$.\n\\begin{subex}[noitemsep]\n\t\\item Vertex coloring with $d+1$ colors.\n\t\\item Maximal matching.\n\t\\item Dominating set.\n\t\\item Minimal dominating set.\n\\end{subex}\n\\end{ex}\n\n\\begin{ex}[algorithms in the bipartite model]\n\tThe bipartite model can be used to run algorithms from the standard $\\PN$ and $\\LOCAL$ models. Using the idea of Exercise~\\ref{ex:encode-bipartite}, we encode the \\emph{maximal independent set} problem in $3$-regular trees as the following bipartite locally verifiable problem $\\Pi = (\\Sigma, \\collA, \\collP)$ in $(3,2)$-biregular trees:\n\t\\begin{align*}\n\t\t\\Sigma &= \\{ \\mI, \\mO, \\mP \\}, \\\\\n\t\t\\collA &= \\bigl\\{ \\, [\\mI, \\mI, \\mI],\\, [\\mP, \\mO, \\mO] \\, \\bigr\\}, \\\\\n\t\t\\collP &= \\bigl\\{ \\, [\\mI, \\mP],\\, [\\mI, \\mO],\\, [\\mO, \\mO] \\, \\bigr\\}.\n\t\\end{align*}\n\tIn $\\collA$, the first configuration corresponds to a node in the independent set $X$, and the second configuration to a node not in $X$. A node not in $X$ points to a neighboring active node with the label $\\mP$: the node pointed to has to be in $X$. The passive configurations ensure that two active nodes connected by a passive node are not both in $X$, and that the pointer $\\mP$ always points to a node in~$X$.\n\t\n\tAssume that the active nodes are given a 4-coloring $c$ as input. That is, $c\\colon V_A \\to \\{1,2,3,4\\}$ satisfies $c(v) \\ne c(u)$ whenever the active nodes $v,u \\in V_A$ share a passive neighbor $w \\in V_P$. The nodes also know whether they are active or passive, but the nodes do not have any other information.\n\t\n\tPresent a $\\PN$-algorithm in the state machine formalism for solving~$\\Pi$. Prove that your algorithm is correct. What is its running time? How does it compare to the complexity of solving maximal independent set in the $\\PN$ model, given a $4$-coloring? \n\\end{ex}\n\n\\begin{ex}[Round Eliminator]\n\tThere is a computer program, called Round Eliminator, that implements the round elimination technique and that you can try out in a web browser:\n\t\\begin{center}\n\t\t\\url{https://github.com/olidennis/round-eliminator}\n\t\\end{center}\n\tLet $\\Pi_0$ be the weak 3-labeling problem defined in Section~\\ref{ssec:bipartite-examples}. Use the Round Eliminator to find out what are $\\Pi_1 = \\re(\\Pi_0)$ and $\\Pi_2 = \\re(\\Pi_1)$. 
In your answer you need to show how to encode $\\Pi_0$ in a format that is suitable for the Round Eliminator, what were the answers you got from the Round Eliminator, and how to turn the answers back into our usual mathematical formalism.\n\\end{ex}\n\n\\begin{ex}[iterated round elimination]\\label{ex:pi2}\n\tFill in the missing details in Section~\\ref{ssec:repeated} to show that formula \\eqref{eq:pi2} is a correct definition of the active configurations for problem $\\Pi_2$ (i.e., it contains all possible configurations and only them).\n\\end{ex}\n\n\\begin{ex}[solving weak 3-labeling]\n\tPresent a 2-round deterministic $\\PN$-algorithm for solving weak 3-labeling in $(3,2)$-biregular trees.\n\\end{ex}\n\n\\begin{ex}[sinkless orientation]\n\tConsider the sinkless orientation problem, denoted by $\\Pi$, on $(3,3)$-biregular trees from Section~\\ref{ssec:bipartite-examples}. Compute the output problems $\\re(\\Pi)$ and $\\re(\\re(\\Pi))$; include a justification for your results.\n\\end{ex}\n\n\\section{Bibliographic Notes}\n\nLinial's \\cite{linial92locality} lower bound for vertex coloring in cycles already used a proof technique that is similar to round elimination. However, for a long time it was thought that this is an ad-hoc technique that is only applicable to this specific problem. This started to change in 2016, when the same idea found another very different application \\cite{brandt16lll}. Round elimination as a general-purpose technique was defined and formalized by Brandt~\\cite{Brandt2019automatic} in 2019, and implemented as a computer program by Olivetti~\\cite{Olivetti2019}.\n", "meta": {"hexsha": "889e7ca775dd20e359e9dcffa8105836931cabc7", "size": 29300, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/ch09.tex", "max_stars_repo_name": "suomela/da2020", "max_stars_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2020-12-11T00:47:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T15:46:43.000Z", "max_issues_repo_path": "book/ch09.tex", "max_issues_repo_name": "suomela/da2020", "max_issues_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-17T18:31:27.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-17T18:42:16.000Z", "max_forks_repo_path": "book/ch09.tex", "max_forks_repo_name": "suomela/da2020", "max_forks_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-06-22T03:53:31.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-11T12:33:40.000Z", "avg_line_length": 76.3020833333, "max_line_length": 890, "alphanum_fraction": 0.6840614334, "num_tokens": 9935, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085758631159, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5995703428252219}}
{"text": "\\subsection{Propositional Logic}\n\\input{background/logic.tex}\n\\subsection{Stochastic Boolean Satisfiability}\n\\input{background/ssat.tex}\n\\subsection{Model Counting}\n\\input{background/counting.tex}", "meta": {"hexsha": "81f62522a405d585670fc36dbc4ae4b9186b62b0", "size": 196, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Talks/oral_defense/background.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "Talks/oral_defense/background.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Talks/oral_defense/background.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.6666666667, "max_line_length": 46, "alphanum_fraction": 0.8316326531, "num_tokens": 49, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513786759491, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.599484861442443}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{AW Quaternions}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nAn AW quaternion has the form\n\\begin{equation}\n    a_{0} + a_{1} A + a_{2} W + a_{3} AW\n\\end{equation}\nThese follow from a parabolic Cayley-Dickson construct on the A binions.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Zero-Divisors}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{M\\\"{o}bius Transformations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Cross-Ratio}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "37261f5248a3a8dd09d184d15e7bce5e6cc5dd7f", "size": 2668, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/I.tex", "max_stars_repo_name": "meirizarrygelpi/cdc", "max_stars_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/I.tex", "max_issues_repo_name": "meirizarrygelpi/cdc", "max_issues_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/I.tex", 
"max_forks_repo_name": "meirizarrygelpi/cdc", "max_forks_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.6428571429, "max_line_length": 80, "alphanum_fraction": 0.1757871064, "num_tokens": 270, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513648201266, "lm_q2_score": 0.668880247169804, "lm_q1q2_score": 0.5994848344271604}}
{"text": "\\section{Parameter model}\n\\label{sec:parameterModel}\n\\label{maths:parameter_defn}\n\\label{maths:parameter-model}\n\nThe following section outlines the parameter model. We consider three types of parameter model:\n\\begin{itemize}\n\\item\nType 1. Equation type\n\\begin{align*}\n\\psi_i = H(\\beta, C_i, \\eta_i)\n\\end{align*}\nThis is the most general form of a parameter model with no constraints on the function $H$.\nIt is an implicit equation as it doesn't allow an easy interpretation of its elements in contrast to the two following forms.\n\\item\nType 2. Gaussian model with general covariate model\n\\begin{align*}\nh(\\psi_i) = H(\\beta, C_i) + \\eta_i\n\\end{align*}\nHere the parameter is normally distributed up to a transformation $h$ with a general covariate model and additive random effects.\n\\item\nType 3. Gaussian model with linear covariate model\n\\begin{align*}\nh(\\psi_i) = h(\\psi_{pop}) + \\beta \\, C_i + \\eta_i\n\\end{align*}\nThis is a special case of the models above which allows for the most detailed interpretation as explained in the following section.\n\\end{itemize}\nwith\n\\begin{itemize}\n\\item\n$\\psi_i$ -- individual parameter\n\\item\n$\\psi_{pop}$ -- typical or population mean parameter\n\\item\n$\\eta_i$ -- random effect(s)\n\\item\n$\\beta$ -- fixed effect(s)\n\\item\n$C_i$ -- covariate(s)\n\\item\nH -- arbitrary function\n\\item\nh -- function which transforms the model on both sides, e.g. log, logit, probit.\n\\end{itemize}\n\n\\subsection{Discussion and examples of Type 1 models}\n\\label{subsec:paramModelType1}\nThis model type is the most flexible one, able to accommodate\n\\begin{itemize}\n\\item\nmultiple fixed effects\n\\item\nmultiple random effects and\n\\item\nan arbitrary (nonlinear) covariate model.\n\\end{itemize}\n\n\\paragraph{Example}\nLet's consider a complex clearance model as introduced in \\cite{NONMEM:2006aa}, which contains\n\\begin{itemize}\n\\item\nfour fixed effects $\\theta_1, \\cdots, \\theta_4$\n\\item\nthree continuous covariates $WT, AGE, SECR$\n\\item\none categorical covariate $ICU$\n\\item\nthree random effects $\\eta_{1i,met}\\sim \\mathcal{N}(0,\\omega_{1,met}^2),\\eta_{2i,met}\\sim \\mathcal{N}(0,\\omega_{2,met}^2), \\eta_{3i,ren}\\sim \\mathcal{N}(0,\\omega_{3,ren}^2)$\n\\end{itemize}\nThe model is composed of\n\\begin{enumerate}\n\\item\nthe average metabolic clearance which reads\n\\begin{align*}\n& CL_{met_{average}} = WT\\times \\frac{\\theta_1 - \\theta_2 \\times Cpss_2}{\\theta_3 + Cpss_2}\n\\end{align*}\nextended with random effects representing a patient being from an ICU (intensive care unit) or else\n\\begin{align*}\n& CL_{i,met} = CL_{met_{average}} + (1 - ICU) \\; \\eta_{1,i} + ICU \\; \\eta_{2,i}\n\\end{align*}\ni.e.\n\\begin{align*}\n& CL_{i,met} = \\left\\{ \\begin{array}{lcl}  CL_{met_{average}} + \\eta_{1,i}  & \\mbox{for} & ICU = 0 \\quad \\text{i.e. 
patient not from ICU} \\\\\nCL_{met_{average}} + \\eta_{2,i}  & \\mbox{for} & ICU = 1 \\quad \\text{else}\n\\end{array}\\right.\n\\end{align*}\n\\item\nand average renal clearance which reads\n\\begin{align*}\n& CL_{ren_{average}} = \\theta_4 \\times RF \\quad \\text{with}  \\quad\nRF = WT\\times \\frac{1.66 - 0.011 \\times AGE}{SECR}\n\\end{align*}\nso the clearance for subject $i$ amounts to\n\\begin{align*}\n& CL_{i,ren} = CL_{ren_{average}}(1+ \\eta_{3,i})\n\\end{align*}\n\\end{enumerate}\nThe complete model, combining (1) and (2), for an individual's clearance then reads\n\\begin{align*}\n& CL_i = CL_{i,met} + CL_{i,ren}.\n\\end{align*}\nThis model, although fully flexible, is difficult to break into meaningful sub-components. This is an entirely different situation for the following model types, where clearly defined sub-components can be separately stored and annotated.\n%\\begin{itemize}\n%\\item\n%metabolic clearance\n%\\begin{align*}\n%& CL_{met_{average}} = WT\\times \\frac{\\theta_1 - \\theta_2 \\times Cpss_2}{\\theta_3 + Cpss_2}\n%\\end{align*}\n%extended with random effects representing a patient being from an ICU (Intensive Care Unit) or else\n%\\begin{align*}\n%& CL_{met} = CL_{met_{average}} + (1 - ICU) \\, \\eta_{1i,met} + ICU \\, \\eta_{2i,met}\n%\\end{align*}\n%i.e.\n%\\begin{align*}\n%& CL_{met} = \\left\\{ \\begin{array}{lcl}  CL_{met_{average}} + \\eta_1  & \\mbox{for} & ICU = 0 \\quad \\text{i.e. patient not from ICU} \\\\\n%CL_{met_{average}} + \\eta_2  & \\mbox{for} & ICU = 1 \\quad \\text{else}\n%\\end{array}\\right.\n%\\end{align*}\n%\\item\n%and renal clearance\n%\\begin{align*}\n% CL_{ren} = \\theta_4 \\times RF\n%\\end{align*}\n%with\n%\\begin{align*}\n%& RF = WT\\times \\frac{1.66 - 0.011 \\times AGE}{SECR}\n%\\end{align*}\n%with covariates \\var{WT}, \\var{AGE} and \\var{SECR}.\n%\\end{itemize}\n%The complete model for an individual's clearance reads\n%\\begin{align*}\n%& CL_i = CL_{met} + CL_{ren}\\; \\eta_{3i,ren}.\n%\\end{align*}\n\n\\subsection{Discussion and examples of Type 2 models}\n\\label{subsec:paramModelType2}\nHere, we consider normally distributed parameters, up to a transformation $h$, i.e. normal, log-normal or logit-normally distributed with identity, the natural logarithm or the logit as transformation, respectively.\n\nCompared to the Type 1 parameter model, the Type 2 parameter model has a more structured additive form:\n\\begin{align*}\nh(\\psi_i) =\n\\underbrace{H(\\beta, C_i)}_{\\text{\\parbox{2.5cm}{\\centering non-linear covariate\\\\[-4pt] model}}}\n+ \\underbrace{\\eta_i^{(0)}+ \\eta_{ik}^{(-1)} + \\dots}_{\\text{\\parbox{3cm}{\\centering IIV and other\\\\[-4pt] levels of variability}}}\n\\end{align*}\nAccordingly a model for an individual parameter consists of\n\\begin{itemize}\n\\item\nthe left-hand transformation, $h$\n\\item\na non-linear covariate model, i.e. any function, $H$, of fixed effects, \\var{\\beta}, and categorical or continuous covariates, $C_i$, e.g. 
\\textit{Sex} or \\textit{Weight}, and\n\\item\nrandom effects, $\\eta$, for \\textit{inter-individual, inter-occasion} and/or other levels of variability (see section \\ref{sec:variabilityModel}).\n\\end{itemize}\n\n\\paragraph{Example}\nThe following example is taken from the 'Fisher/Shafer NONMEM Workshop', and in NMTRAN code reads\n\\begin{xmlcode}\n\tWTE = THETA(1) * WT / (THETA(2)+ WT)\n\tV = (THETA(3) + WTE) * EXP(ETA(1))\n\\end{xmlcode}\nAfter taking the logarithm of both sides we get\n\\begin{align*}\n\\log(V_i) = \\log\\Big(\\theta_3 + \\frac{\\theta_1 \\times WT_i}{\\theta_2 + WT_i}\\Big) + \\eta_{V,i}.\n\\end{align*}\n\n\\subsection{Discussion and examples of Type 3 models}\n\\label{subsec:paramModelType3}\nHere, we again consider normally distributed parameters, up to a transformation $h$, i.e. normal, log-normal or logit-normally distributed with identity, the natural logarithm or the logit as transformation, respectively.\n\nThe Type 3 parameter model has a very convenient fully additive form, which separates all of the sub-components, making it very easy to understand and process:\n\\begin{align*}\nh(X_i) = h(X_{pop})\n+ \\underbrace{\\beta \\,C_i}_{\\text{\\parbox{2cm}{\\centering linear covariate\\\\[-4pt] model}}}\n+ \\underbrace{\\eta_i^{(0)}+ \\eta_{ik}^{(-1)} + \\dots}_{\\text{\\parbox{3cm}{\\centering IIV and other\\\\[-4pt] levels of variability}}}\n\\end{align*}\nAccordingly a model for an individual parameter consists of\n\\begin{itemize}\n\\item\na parameter transformation, $h$\n\\item\na typical or population mean value of the parameter, $X_{pop}$\n\\item\na linear covariate model, $\\beta \\, C_i$, with\n\\begin{itemize}\n\\item\nfixed effects, $\\beta$, and\n\\item\ncategorical or continuous covariates, $C_i$, e.g. \\textit{Sex} or \\textit{Weight}\n\\end{itemize}\n\\item\nrandom effects, $\\eta$, for \\textit{inter-individual, inter-occasion} and/or other levels of variability (section \\ref{sec:variabilityModel}).\n\\end{itemize}\nSee Figure \\ref{fig:weightAsCovariate} for an example of the linear relationship between a parameter and a continuous covariate and one, \\textit{inter-individual}, level of variability.\n\n\\paragraph{Example}\nLet's consider volume, $V$, as a log-normally distributed parameter with two covariates \\textit{Sex} and \\textit{Weight} and with three levels of variability as discussed in Example 3 in section \\ref{sec:variabilityModel} (see Figure \\ref{tree_IOV1}), which can be represented by the equation:\n%\\begin{align*}\n%& \\eta_i \\sim \\mathcal{N}(0,\\omega_V); \\quad \\log( V_i ) = \\log( V_{pop} ) + \\beta_{V,1} 1_{Sex_i=F} + \\beta_{V,2} \\log\\Big(\\frac{W_i}{70}\\Big) + \\eta_{i,V}\n%\\end{align*}\n\\begin{align*}\nV_{lik} = V_{pop} e^{\\beta_{V,1} 1_{Sex_i=F}} \\Big(\\frac{W_i}{70}\\Big)^{\\beta_{V,2}} e^{\\eta_{l,V}^{(1)}}  e^{\\eta_{li,V}^{(0)}} e^{\\eta_{lik,V}^{(-1)}}\n\\end{align*}\nor alternatively as\n\\begin{align*}\n\\underbrace{\\log(V_{lik})}_{\\text{\\parbox{2cm}{\\centering transformed\\\\[-4pt] individual value}}} =\n\\underbrace{\\log(V_{pop})}_{\\text{\\parbox{2cm}{\\centering transformed\\\\[-4pt] typical value}}} +\n\\underbrace{\\beta_{V,1} 1_{Sex_i=F}}_{\\text{\\parbox{2.2cm}{\\centering categorical\\\\[-4pt] covariate model\\\\[-4pt] for Sex}}}\n+ \\underbrace{\\beta_{V,2} \\log\\Big(\\frac{W_i}{70}\\Big)}_{\\text{\\parbox{2.2cm}{\\centering continuous\\\\[-4pt] covariate model\\\\[-4pt] for Weight}}}\n+ \\underbrace{\\eta_{l,V}^{(1)}}_{\\text{\\parbox{1.8cm}{\\centering inter-centre\\\\[-4pt]  
variability}}}\n+ \\underbrace{\\eta_{li,V}^{(0)}}_{\\text{\\parbox{1.8cm}{\\centering inter-individual\\\\[-4pt] within centre \\\\[-4pt]  variability}}}\n+ \\underbrace{\\eta_{lik,V}^{(-1)}}_{\\text{\\parbox{2.2cm}{\\centering inter-occasion\\\\[-4pt] within individual \\\\[-4pt] within centre \\\\[-4pt] variability}}}\n\\end{align*}\nwith\n\\begin{align*}\n && \\eta_{l,V}^{(1)} \\sim \\mathcal{N}\\big(0,\\Omega^{(1)}\\big), \\quad \\eta_{li,V}^{(0)} \\sim \\mathcal{N}\\big(0,\\Omega^{(0)}\\big),\n\\quad \\eta_{lik,V}^{(-1)} \\sim \\mathcal{N}\\big(0,\\Omega^{(-1)}\\big).\n\\end{align*}\nThe equation for $V_{lik}$ represented in the additive form is clearly easier to understand and one can read out the following information from it:\n\\begin{itemize}\n\\item\nthe parameter transformation, the natural logarithm, $\\log$\n\\item\nthe typical volume, $V_{pop}$\n\\item\nthe two linear covariate models, $\\beta_{V,1} C_1$ and $\\beta_{V,2} C_2$ with\n\\begin{itemize}\n\\item\na fixed effect for the categorical covariate, $\\beta_{V,1}$\n\\item\na categorical covariate, $1_{Sex_i=F}$\n\\item\na fixed effect for the continuous covariate, $\\beta_{V,2}$\n\\item\na continuous covariate, $C_2 = \\log(W/W_{pop})$ with $W_{pop}=70$\n\\end{itemize}\n\\item\nmultiple random effects\n\\begin{itemize}\n\\item\na random effect above the subject level for \\textit{inter-centre} variability, $\\eta_{l,V}^{(1)}$\n\\item\na random effect at the subject level for \\textit{inter-individual within centre} variability, $\\eta_{li,V}^{(0)}$\n\\item\na random effect below the subject level for \\textit{inter-occasion within individual within centre} variability, $\\eta_{lik,V}^{(-1)}$.\n\\end{itemize}\n\\end{itemize}\n\n\n\\begin{figure}[htbp]\n\\centering\n \\includegraphics[width=100mm]{weightAsCovariate}\n\\caption{The linear relationship between the parameter and a continuous covariate after application of appropriate transformations $V_i \\longrightarrow \\log(V_i)$ and $W \\longrightarrow \\log(W/70)$ with $\\beta_{V,2} = \\tan{\\alpha}$, the slope of the regression line, and $\\log(V_{pop})$ as the $y$-axis intercept.}\n\\label{fig:weightAsCovariate}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Correlation of random effects}\n\\label{subsec:correlationModel}\\label{maths:param correlation}\nCorrelation of random effects means that the transformed parameters, e.g. $\\log(V_i)$ and $\\log(CL_i)$, are correlated as well (although the relationship is not straightforward; see also the discussion below on correlation and covariates). There are two alternative ways to define the correlation, using either\n\\begin{itemize}\n\\item\na correlation matrix, $R$, or\n\\item\na variance-covariance matrix, $\\Omega$.\n\\end{itemize}\n\n\n\\paragraph{Correlation matrix}\nIn this case it is sufficient to define the non-zero correlation coefficients, e.g. $\\rho_{V,CL}$. All other off-diagonal correlation coefficients will be assumed to be equal to 0. 
For a simple one-compartment oral PK model with parameters $ka$, $V$, $CL$, and a correlation between $CL$ and $V$, the full correlation matrix reads as follows\n\\[\nR =\n \\begin{pmatrix}\n  1 \t& 0 \t& 0  \t\\\\\n   \t\t& 1\t& \\rho_{V,CL} \\\\\n  \t\t& \t& 1\n \\end{pmatrix}\n\\]\n\n\\paragraph{Variance-covariance matrix}\n\\label{maths:covariance-mat-derivation}\nAlternatively, the variance-covariance matrix for the model\n\\[\n \\Omega =\n \\begin{pmatrix}\n  \\omega_{ka}^2 \t& \\omega_{ka,V}\t& \\omega_{ka,CL}\\\\\n   \t\t\t  \t& \\omega_{V}^2\t& \\omega_{V,CL} \\\\\n  \t\t\t\t& \t\t\t\t& \\omega_{CL}^2\n \\end{pmatrix}\n =\n  \\begin{pmatrix}\n  \\omega_{ka}^2 \t& 0\t\t\t\t& 0 \\\\\n   \t\t\t  \t& \\omega_{V}^2\t& \\omega_{V,CL} \\\\\n  \t\t\t\t& \t\t\t\t& \\omega_{CL}^2\n \\end{pmatrix}\n\\]\nprovides the necessary information due to the relationship\n\\begin{align*}\n\t&\\mbox{Cov($p_i$,$p_j$)} = \\sigma_i \\sigma_j \\mbox{Corr($p_i$,$p_j$)}  = \\sigma_i \\sigma_j \\;\\rho_{i,j} \\quad \\mbox{i.e.} \\quad \\omega_{V,CL} = \\omega_V \\omega_{CL} \\rho_{V,CL}\n\\end{align*}\nin which case it is enough to define $\\Omega$ to cover the full correlation structure.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Covariate model}\n\\label{maths:covariate_model}\n\nThe covariate model accounts for systematic or known subject\ncharacteristics such as treatment group, gender or body\nweight. Accordingly, the model can be defined for discrete and\ncontinuous covariates and is the place where one category of fixed\neffects is defined (the other being\nthe population averages, e.g. $V_{pop}$). Of course, the values of\nindividual characteristics (weight or sex) are subject specific, but\nthe parameters assigned to them are identical for a group or\npopulation.\n\nAs described in the example above, the contribution of the continuous\ncovariate $Weight$ to the parameter value is formulated as\n$\\beta_{V,2} \\log(W_i/70)$ (see figure\n\\ref{fig:weightAsCovariate}). The figure illustrates the linearity\nafter the appropriate transformation of the parameter, $V_i\n\\longrightarrow \\log(V_i)$ and $W \\longrightarrow \\log(W/70)$ with\n$\\beta_{V,2} = \\tan{\\alpha}$, the slope of the regression line, and\n$\\log(V_{pop})$ as the $y$-axis intercept.\n\nIn the estimation case the values for the covariate are provided for\neach individual. In the case of a simulation (see example\n\\ref{subsec:exp2_TaskDescription}) its probability distribution has to\nbe estimated. 
The information we have to provide is summarised in the following paragraph.\n\n\\paragraph{Continuous covariate model}\n\\begin{align*}\nCovariates & =  Weight  \\\\\nCovariatesType & = Continuous  \\\\\nCovariatesPopDistribution\\{1\\} & \\sim \\mbox{Normal}(pop_{Weight}, \\omega_{Weight})  \\\\\n \\text{with} & \\quad pop_{Weight}=70.07 \\\\\n & \\quad \\omega_{Weight}=14.09  \\\\\nCovariatesTransf & =\\log(Weight/70) \n\\end{align*}\nAnalogous information has to be provided in the case of a categorical covariate, such as \\textit{Sex}, and is summarised for a simple example in the following paragraph.\n\n\\paragraph{Categorical covariate model}\n\\begin{align*}\nCovariates &= Sex   \\\\\nCovariatesType &= Categorical  \\\\\nCategoriesNumber &= 2   \\\\\nCategories &= \\{F,M\\}   \\\\\nRefCategory &= F   \\\\\nRefCategProbability &= 14/36 \n\\end{align*}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%\\subsection{Note on correlation and covariates}\n%%This section discusses the influence of covariates on correlation between random effects and that of the parameters. In the case without e.g. normally distributed covariates, the correlation is identical. However, the presence of normally distributed body weight in the parameter model has interesting consequences for these correlations. \\\\\n%%Let's consider as an example two correlated log-normally distributed parameters, $V$ and $CL$, without covariates i.e.\n%%\\begin{align*}\n%%& \\eta_{V} \\sim \\mathcal{N}(0,\\omega_V); \\quad \\log( V ) = \\log( V_{pop} ) + \\eta_{V}  \\\\\n%%& \\eta_{CL} \\sim \\mathcal{N}(0,\\omega_{CL}); \\quad \\log( CL ) = \\log( CL_{pop} ) + \\eta_{CL}\n%%\\end{align*}\n%%then the correlation between the random effects is identical to the correlation of the (transformed) parameters, see Figure \\ref{fig:correlationEtasLogedParams1}.\n%%\\begin{figure}[h!]\n%%\\centering\n%% \\includegraphics[height=45mm]{correlationsNoCovars}\n%%\\caption{Correlation of random effects $\\eta_V$, $\\eta_{CL}$ (blue), transformed parameters $\\log(V)$ and $\\log(CL)$ (red) and actual parameters  $V$ and $CL$ (green) is equal in the case without covariates. Values used: $V_{pop}=8, \\omega_V=0.2, CL_{pop}=0.13,  \\omega_{CL}=0.2$.}\n%%\\label{fig:correlationEtasLogedParams1}\n%%\\end{figure}\n%%\\newline\n%%Now, we consider that both parameters depend on a continuous normally distributed covariate, e.g. the $Weight$,\n%%\\begin{align*}\n%%& \\eta_{V} \\sim \\mathcal{N}(0,\\omega_V); \\quad \\log( V ) = \\log( V_{pop} ) + \\beta_{V} \\log\\Big(\\frac{W}{70}\\Big) + \\eta_{V}  \\\\\n%%& \\eta_{CL} \\sim \\mathcal{N}(0,\\omega_{CL}); \\quad \\log( CL ) = \\log( CL_{pop} ) + \\beta_{CL} \\log\\Big(\\frac{W}{70}\\Big) + \\eta_{CL}\n%%\\end{align*}\n%%then the correlation between the transformed parameters is higher than that of the random effects and increases proportionally to $corr(\\eta_{V}, \\eta_{CL})$, see Figure \\ref{fig:correlationEtasLogedParams2}.\n%%\\begin{figure}[h!]\n%%\\centering\n%% \\includegraphics[height=45mm]{correlationsWithCovars}\n%%\\caption{The correlation between both the transformed parameters (red) and actual parameters (green) is higher than that of the random effects (blue) (and is equal 0.933, 0.948 and 0.750, respectively, for $n=10^8$). 
Values used as before with $\\beta_V=2, \\beta_{CL}= 1.75$.}\n%%\\label{fig:correlationEtasLogedParams2}\n%%\\end{figure}\n%%\\newline\n%%The consequence of this discussion is that the correlation structure of random effects, as defined in PharmML, translates only in special cases to the correlation between parameters.\n%%%This underlines the importance of the fact that we define in PharmML the relationship between random effects and not parameters.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Equivalent representations of the parameter model}\nEvery parameter model represented in the Type 3 format discussed before has at least three mathematically equivalent representation forms,\nwhich will be presented and discussed in terms of advantages and disadvantages in the following. It is important to understand these different\nrepresentation forms, as they explain the different forms of notation used in different software tools. Here, we concentrate on NONMEM and MONOLIX only.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Log-Normal distributed\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsubsection{Log-Normal distributed}\nFor a \\textbf{log-normal} distributed parameter, e.g. $V$, the equivalent representations read\n\\begin{align*}\n&(1) \\eta_i \\sim \\mathcal{N}(0,\\omega_V); \\quad V_i= V_{pop} \\; e^{\\eta_{i,V}}   \\\\\n&(2) \\eta_i \\sim \\mathcal{N}(0,\\omega_V); \\quad \\log( V_i ) = \\log( V_{pop} ) + \\eta_{i,V}  \\\\\n&(3) \\log( V_i ) \\sim \\mathcal{N}\\big( \\log( V_{pop} ),\\omega_V\\big)\n\\end{align*}\nfor a typical value $V_{pop}$ and standard deviation $\\omega_V$ as described in \\cite{Lavielle:2012b}.\\\\\nThe typical NMTRAN code for a log-normally distributed parameter is (\\cite{Smith:2012aa})\n\\begin{lstlisting}\nGRPV=THETA(1)\nV=GRPV*EXP(ETA(1))\n\\end{lstlisting}\nand in MLXTRAN (\\cite{MonolixOverview:2012})\n\\begin{lstlisting}\n# as explicit equation\neta_V ~ normal(0, omega_V)\nV = V_pop*exp(eta_V)\n\n# or using short notation\nV = {distribution=lognormal, typical=V_pop, sd=omega_V}\n\\end{lstlisting}\n\n\\begin{figure}[htbp]\n\\centering\n \\includegraphics[width=100mm]{paramCovModel_V}\n\\caption{Log-normally distributed 'V' with $V_{pop}=8$ and $\\omega_V=0.2$}\n\\label{fig:parameterCovModel0}\n\\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Log-Normal distributed with a continuous covariate\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsubsection{Log-Normal distributed with a continuous covariate}\nFor a \\textbf{log-normal} distributed parameter, e.g. 
$V$, with body weight, $W$, as covariate, the equivalent representations read\n\\begin{align*}\n& (1) \\quad \\eta_i \\sim \\mathcal{N}(0,\\omega_V); \\quad V_i= V_{pop} \\; \\big(\\frac{W_i}{70}\\big)^\\beta \\; e^{\\eta_{i,V}}   \\\\\n&(2) \\quad \\eta_i \\sim \\mathcal{N}(0,\\omega_V); \\quad \\log( V_i ) = \\log( V_{pop} ) + \\beta \\log\\big(\\frac{W_i}{70}\\big) + \\eta_{i,V}  \\\\\n&(3) \\quad \\log( V_i ) \\sim \\mathcal{N}\\big( \\log( V_{pop} )+ \\beta\\log\\Big(\\frac{W_i}{70}\\Big),\\omega_V\\big)\n\\end{align*}\nThe typical NMTRAN code for a log-normally distributed parameter with weight as covariate is\n\\begin{lstlisting}\nGRPV=THETA(1)*(WT/70)**THETA(2)\nV=GRPV*EXP(ETA(1))\n\\end{lstlisting}\nand in MLXTRAN\n\\begin{lstlisting}\n# as explicit equation\nV_typ = V_pop*(weight/70)^beta_V\neta_V ~ normal(0, omega_V)\nV = V_typ*exp(eta_V)\n\n# or using short notation\nV = {distribution=lognormal, typical=V_pop, covariate=lw70, coefficient=beta_V, sd=omega_V}\n\\end{lstlisting}\nwith $lw70 \\equiv \\log(W/70)$.\n\n\\begin{figure}[htbp]\n\\centering\n \\includegraphics[width=100mm]{paramCovModel_VlogV}\n\\caption{Log-normally distributed 'V' with 'Weight' as covariate.}\n\\label{fig:parameterCovModel1}\n\\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Logit-Normal distributed\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsubsection{Logit-Normal distributed}\nFor a \\textbf{logit-normal} distributed parameter, e.g. $Imax$, the equivalent representations read\n\\begin{align*}\n&(1) \\quad \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad  Imax_i= \\frac{\\bigg[\\frac{ Imax_{pop}}{1- Imax_{pop}} \\; e^{\\eta_{i, Imax}} \\bigg]}{ 1+  \\bigg[\\frac{ Imax_{pop}}{1- Imax_{pop}} \\; e^{\\eta_{i, Imax}} \\bigg]}   \\\\\n&(2) \\quad  \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad \\mbox{logit}(  Imax_i ) = \\mbox{logit}(  Imax_{pop} ) + \\eta_{i, Imax}  \\\\\n&(3) \\quad  \\mbox{logit}(  Imax_i ) \\sim \\mathcal{N}\\big( \\mbox{logit}( Imax_{pop} ),\\omega\\big)\n\\end{align*}\nEquation (1) can be rewritten using '$logit$' as follows\n\\begin{align*}\n& Imax_i= \\frac{\\exp\\big(logit(Imax_{pop})  + \\eta_{i,Imax} \\big)}{ 1+ \\exp\\big(logit(Imax_{pop}) + \\eta_{i,Imax} \\big)}  \\\\\n& \\Leftrightarrow  Imax_i= \\frac{1}{ 1+ \\exp\\big(- logit(Imax_{pop}) - \\eta_{i,Imax} \\big)}\n\\end{align*}\\newline\nThe last form is used for a typical NMTRAN implementation of a logit-normally distributed parameter\n\\begin{lstlisting}\nLGTIMAX=LOG(POP_IMAX/(1-POP_IMAX)) + ETA(IMAX)\nIMAX=1/(1+EXP(-LGTIMAX))\n\\end{lstlisting}\nand in MLXTRAN\n\\begin{lstlisting}\n# as explicit equation\neta_Imax ~ normal(0, omega_Imax)\nlogitImaxi = log(pop_Imax/(1-pop_Imax)) + eta_Imax\nImaxi = 1/(1 + exp(-logitImaxi))\n\n# or using short notation\nImax = {distribution=logitnormal, typical=Imax_pop, sd=omega_Imax}\n\\end{lstlisting}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Logit-Normal distributed with a continuous covariate\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsubsection{Logit-Normal distributed with a continuous covariate}\nFor a \\textbf{logit-normal} distributed parameter with \\textit{Weight} as \\textbf{covariate} we have\n\\begin{align*}\n&(1)\\quad \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad Imax_i= \\frac{\\bigg[\\frac{Imax_{pop}}{1-Imax_{pop}} \\; \\big(\\frac{W_i}{70}\\big)^\\beta \\; e^{\\eta_{i,Imax}} \\bigg]}{ 1+  \\bigg[\\frac{Imax_{pop}}{1-Imax_{pop}} \\; 
\\big(\\frac{W_i}{70}\\big)^\\beta \\; e^{\\eta_{i,Imax}} \\bigg]}\n\\end{align*}\n\\begin{align*}\n&(2)\\quad \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad \\mbox{logit}( Imax_i ) = \\mbox{logit}( Imax_{pop} ) + \\beta \\log\\bigg(\\frac{W_i}{70}\\bigg) + \\eta_{i,Imax}  \\\\\n&(3)\\quad \\mbox{logit}(  Imax_i ) \\sim \\mathcal{N}\\big( \\mbox{logit}( Imax_{pop}) + \\beta\\log\\Big(\\frac{W_i}{70}\\Big),\\omega\\big)\n\\end{align*}\nThe first equation can be rewritten as follows\n\\begin{align*}\n& Imax_i= \\frac{\\exp\\big(logit(Imax_{pop}) + \\beta \\log\\big(\\frac{W_i}{70}\\big) + \\eta_{i,Imax} \\big)}{ 1+ \\exp\\big(logit(Imax_{pop}) + \\beta \\log\\big(\\frac{W_i}{70}\\big) + \\eta_{i,Imax} \\big)}  \\\\\n& \\Leftrightarrow Imax_i= \\frac{1}{ 1+ \\exp\\big(- logit(Imax_{pop}) - \\beta \\log\\big(\\frac{W_i}{70}\\big) - \\eta_{i,Imax} \\big)}\n\\end{align*}\\newline\nThe last form is used for a typical NMTRAN implementation of a logit-normally distributed parameter with covariate\n\\begin{lstlisting}\nLGTIMAX=LOG(POP_IMAX/(1-POP_IMAX)) + BETA*LOG(WT/70) + ETA(IMAX)\nIMAX=1/(1+EXP(-LGTIMAX))\n\\end{lstlisting}\nand in MLXTRAN\n\\begin{lstlisting}\n# as explicit equation\neta_Imax ~ normal(0, omega_Imax)\nlogitImaxi = log(pop_Imax/(1-pop_Imax)) + beta*lw70 + eta_Imax\nImaxi = 1/(1 + exp(-logitImaxi))\n\n# or using short notation\nImax =\t{distribution=logitnormal, typical=Imax_pop, covariate=lw70, coefficient=beta_Imax, sd=omega_Imax}\n\\end{lstlisting}\n\n\\begin{figure}[htbp]\n\\centering\n \\includegraphics[width=100mm]{paramCovModel_ImaxlogImax_Weight}\n\\caption{Logit-normally distributed 'Imax' with 'Weight' as covariate.}\n\\label{fig:parameterCovModel3}\n\\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%% Logit-Normal distributed with a categorical covariate\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n%\\subsubsection{Logit-Normal distributed with a categorical covariate}\n%For a \\textbf{logit-normal} distributed parameter with \\textit{Sex} as \\textbf{covariate} we have\n%\\begin{align*}\n%&(1)\\quad \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad Imax_i= \\frac{\\bigg[\\frac{Imax_{pop}}{1-Imax_{pop}} \\; e^{\\beta_{Imax_i} 1_{Sex_i=F}} \\; e^{\\eta_{i,Imax}} \\bigg]}{ 1+  \\bigg[\\frac{Imax_{pop}}{1-Imax_{pop}} \\; e^{\\beta_{Imax_i} 1_{Sex_i=F}} \\; e^{\\eta_{i,Imax}} \\bigg]}  \\\\\n%&(2)\\quad \\eta_i \\sim \\mathcal{N}(0,\\omega); \\quad \\mbox{logit}( Imax_i ) = \\mbox{logit}( Imax_{pop} ) + \\beta_{Imax_i} 1_{Sex_i=F} + \\eta_{i,Imax}  \\\\\n%&(3)\\quad \\mbox{logit}(  Imax_i ) \\sim \\mathcal{N}\\big( \\mbox{logit}( Imax_{pop}) + \\beta_{Imax_i} 1_{Sex_i=F},\\omega\\big)\n%\\end{align*}\n%The first equation can be rewritten as follows\n%\\begin{align*}\n%& Imax_i= \\frac{\\exp\\big(logit(Imax_{pop}) + \\beta_{Imax_i} 1_{Sex_i=F} + \\eta_{i,Imax} \\big)}{ 1+ \\exp\\big(logit(Imax_{pop}) + \\beta_{Imax_i} 1_{Sex_i=F} + \\eta_{i,Imax} \\big)}  \\\\\n%& \\Leftrightarrow Imax_i= \\frac{1}{ 1+ \\exp\\big(- logit(Imax_{pop}) - \\beta_{Imax_i} 1_{Sex_i=F} - \\eta_{i,Imax} \\big)}\n%\\end{align*}\\newline\n%The last form is used for a typical implementation of a logit-normally distributed parameter with covariate:\\\\\n%in NMTRAN\n%\\begin{lstlisting}\n%LGTIMAX=LOG(POP_IMAX/(1-POP_IMAX)) + BETA*SEX + ETA(IMAX)\n%IMAX=1/(1+EXP(-LGTIMAX))\n%\\end{lstlisting}\n%and in MLXTRAN\n%\\begin{lstlisting}\n%# as explicit equation\n%eta_Imax ~ normal(0, omega_Imax)\n%logitImaxi = log(pop_Imax/(1-pop_Imax)) + beta*Sex + 
eta_Imax\n%Imaxi = 1/(1 + exp(-logitImaxi))\n%\n%# or using short notation\n%Imax =\t{distribution=logitnormal, typical=Imax_pop, covariate=Sex, coefficient=beta_Imax, sd=omega_Imax}\n%\\end{lstlisting}\n%\n%\n%\\begin{figure}[h!]\n%\\centering\n% \\includegraphics[width=100mm]{paramCovModel_ImaxlogImax_Sex}\n%\\caption{Logit-normally distributed 'Imax' with 'Sex' as a categorical covariate.}\n%\\label{fig:parameterCovModel4}\n%\\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Log-Normal distributed with complex variability structure\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsubsection{Log-Normal distributed with complex variability structure}\n\\label{subsec:logIOVCovariate}\nIn this example we consider representations of type (1) and (2) only. A typical parameter model with a continuous covariate, $W$, for three levels of variability, e.g. \\{centre, subject, occasion\\} (this will be explained in detail in the next section), see Figure \\ref{tree_IOV1}, reads as follows\n\\begin{align*}\n&(1)\\quad V_{lik} = V_{pop} \\; \\big(\\frac{W_i}{70}\\big)^\\beta \\; e^{\\eta_{l,V}^{(1)}} \\; e^{\\eta_{li,V}^{(0)}} \\; e^{\\eta_{lik,V}^{(-1)}}   \\\\\n&(2)\\quad \\log(V_{lik}) = \\log(V_{pop}) + \\beta\\log\\Big(\\frac{W_i}{70}\\Big) + \\eta_{l,V}^{(1)} + \\eta_{li,V}^{(0)} + \\eta_{lik,V}^{(-1)}\n\\end{align*}\nwith\n\\begin{align*}\n & \\eta_l^{(1)} \\sim \\mathcal{N}\\big(0,\\Omega^{(1)}\\big), \\quad \\eta_{li}^{(0)} \\sim \\mathcal{N}\\big(0,\\Omega^{(0)}\\big),\n\\quad \\eta_{lik}^{(-1)} \\sim \\mathcal{N}\\big(0,\\Omega^{(-1)}\\big)\n\\end{align*}\nwith $l$ -- centre index, $i$ -- subject index, $k$ -- occasion index.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Discussion\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n%\\subsection{Discussion}\n%Currently, \\pharmml supports the version (2) of the parameter model. Consider the parameter model from last example\n%which with some additional annotation reads as follows\n%\\begin{align*}\n%& \\underbrace{\\log(V_{lik})}_{\\text{\\parbox{2cm}{\\centering transformed\\\\[-4pt] individual value}}} = \\underbrace{\\log(V_{pop})}_{\\text{\\parbox{2cm}{\\centering transformed\\\\[-4pt] typical value}}} + \\underbrace{\\beta\\log\\Big(\\frac{W_i}{70}\\Big)}_{\\text{covariate model}}\n%+ \\underbrace{\\eta_{l,V}^{(1)}}_{\\text{\\parbox{2cm}{\\centering inter-centre\\\\[-4pt]  variability}}}\n%+ \\underbrace{\\eta_{li,V}^{(0)}}_{\\text{\\parbox{2cm}{\\centering inter-individual\\\\[-4pt] within centre \\\\[-4pt]  variability}}}\n%+ \\underbrace{\\eta_{lik,V}^{(-1)}}_{\\text{\\parbox{2.5cm}{\\centering inter-occasion\\\\[-4pt] within individual \\\\[-4pt] within centre \\\\[-4pt] variability}}}\n%\\end{align*}\n%This formula, linear for the transformed parameter, has the following \\textbf{advantages}\n%\\begin{itemize}\n%\\item\n%it has an additive structure allowing for easy interpretation and implementation of its components, i.e.\n%\\begin{itemize}\n%\\item\n%typical/population value of the parameter\n%\\item\n%covariate model\n%\\item\n%any level of random effects\n%\\end{itemize}\n%\\item\n%it covers the majority of models relevant for daily practice\n%\\end{itemize}\n%The \\textbf{disadvantage} is that it doesn't cover parameter models which cannot be represented in the linear form for the transformed parameter, e.g. the models proposed by \\cite{Keizer:2011aa}. 
This issue has been recognised early on and discussed in the consortium. This class of models is being considered for a subsequent specification of \\pharmml.\n\n\n%\n%\\paragraph{Discussion}\n%Consider for example equation (2) for a parameter with a continuous covariate, $W$, with three levels of variability e.g. \\{country,subject, occasion\\} from the last example. With some additional annotation it reads then\n%\\begin{align*}\n%& \\underbrace{\\log(V_{lik})}_{\\parbox{2cm}{\\centering transformed\\\\[-4pt] individual\\\\[-4pt] value}} = \\underbrace{\\log(V_{pop})}_{\\parbox{1.5cm}{\\centering transformed\\\\[-4pt] typical\\\\[-4pt] value}}\n%+ \\underbrace{\\beta\\log\\Big(\\frac{W_i}{70}\\Big)}_{\\parbox{2cm}{\\centering covariate\\\\[-4pt] model}}\n%+ \\underbrace{\\eta_{l,V}^{(1)}}_{\\parbox{2cm}{\\centering inter-centre\\\\[-4pt]  variability}}\n%+ \\underbrace{\\eta_{li,V}^{(0)}}_{\\parbox{2.25cm}{\\centering inter-centre-\\\\[-4pt] individual\\\\[-4pt]  variability}}\n%+ \\underbrace{\\eta_{lik,V}^{(-1)}}_{\\parbox{3cm}{\\centering inter-centre-\\\\[-4pt] individual-occasion\\\\[-4pt] variability}}\n%\\end{align*}\n%\n%This formulation, linear for the transformed parameter, has the following advantages\n%\\begin{itemize}\n%\\item\n%it has an additive structure allowing for easy interpretation and implementation of its components, i.e.\n%\\begin{itemize}\n%\\item\n%typical/population value of the parameter\n%\\item\n%covariate model\n%\\item\n%inter-individual variability random effects\n%\\item\n%higher levels of variability\n%\\end{itemize}\n%\\item\n%it covers the majority of models relevant for daily practice\n%\\end{itemize}\n%More complex models will be supported in an upcoming specification.\n%\n", "meta": {"hexsha": "c870cb84596f761bb5ff55d66a3ff3bfb6ede8e3", "size": 31398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "input/parameterRepresentation_specSection.tex", "max_stars_repo_name": "pharmml/pharmml-spec", "max_stars_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-01-26T13:17:54.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-26T13:17:54.000Z", "max_issues_repo_path": "input/parameterRepresentation_specSection.tex", "max_issues_repo_name": "pharmml/pharmml-spec", "max_issues_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "input/parameterRepresentation_specSection.tex", "max_forks_repo_name": "pharmml/pharmml-spec", "max_forks_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5727272727, "max_line_length": 354, "alphanum_fraction": 0.6867634881, "num_tokens": 9977, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8198933271118222, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5993900599549113}}
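As a numerical companion to the covariance relationship $\\omega_{V,CL} = \\omega_V \\omega_{CL} \\rho_{V,CL}$ and to the Type 3 log-normal parameter model, the following Python sketch simulates individual volumes; all numerical values are illustrative only (the weight distribution is the one quoted in the continuous covariate model example above).\n\\begin{lstlisting}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\n# Illustrative values, not part of the specification\nV_pop, beta_V = 8.0, 0.75\nsd = np.array([0.3, 0.2, 0.2])      # omega_ka, omega_V, omega_CL\nR = np.array([[1.0, 0.0, 0.0],      # only V and CL are correlated,\n              [0.0, 1.0, 0.7],      # rho_{V,CL} = 0.7\n              [0.0, 0.7, 1.0]])\n\n# omega_{i,j} = omega_i * omega_j * rho_{i,j}, as in the text\nOmega = np.outer(sd, sd) * R\n\nW = rng.normal(70.07, 14.09, size=1000)   # covariate model for Weight\neta = rng.multivariate_normal(np.zeros(3), Omega, size=1000)\n\n# Type 3 model: log V_i = log V_pop + beta_V*log(W_i/70) + eta_{i,V}\nV_i = V_pop * (W / 70.0)**beta_V * np.exp(eta[:, 1])\n\\end{lstlisting}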
{"text": "\\documentclass[a4paper]{article}\n\n\\input{temp}\n\n\\setcounter{section}{-1}\n\n\\begin{document}\n\n\\title{Advanced Probability}\n\n\\maketitle\n\n\\newpage\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Reviews}\n\n\\subsection{Measure spaces}\n\nLet $E$ be a set. Let $\\mathcal{E}$ be a set of subsets of $E$. We say that $\\mathcal{E}$ is a $\\sigma$-algebra on $E$ if:\\\\\n$\\bullet$ $\\phi \\in \\mathcal{E}$;\\\\\n$\\bullet$ $\\mathcal{E}$ is closed under countable unions and complements.\\\\\nIn that case, $(E,\\mathcal{E})$ is called a \\emph{measurable space}.\n\nWe call the elements of $\\mathcal{E}$ \\emph{measurable sets}.\n\nLet $\\mu$ be a function $\\mathcal{E} \\to [0,\\infty]$. We say $\\mu$ is a measure if:\\\\\n$\\bullet$ $\\mu(\\phi) = 0$;\n$\\bullet$ $\\mu$ is countably additive: for all sequences $(A_n)$ of disjoint elements of $\\mathcal{E}$, then\n\\begin{equation*}\n\\begin{aligned}\n\\mu(\\bigcup_n A_n) = \\sum_n \\mu(A_n)\n\\end{aligned}\n\\end{equation*}\nIn that case, the triple $(E,\\mathcal{E},\\mu)$ is called a \\emph{measure space}.\n\nGiven a topological space $E$, there is a smallest $\\sigma$-algebra containing all the open sets in $E$. This is the \\emph{Borel $\\sigma$-algebra of $\\mathcal{E}$}, denoted $\\mathcal{B}(E)$.\n\nIn particular, for the real line $\\R$, we will just write $\\mathcal{B} = \\mathcal{B}(\\R)$ for simplicity.\n\n\\subsection{Integration of measurable functions}\n\nLet $(E,\\mathcal{E})$ and $(E',\\mathcal{E}')$ be measurable spaces. A function $f:E \\to E'$ is \\emph{measurable} if $f^{-1}(A) = \\{x \\in E: f(x) \\in A\\} \\in \\mathcal{E} \\forall A \\in \\mathcal{E}'$.\n\nIf we refer to a measurable function $f$ without specifying range, the default is $(\\R,\\mathcal{B})$.\n\nSimilarly, if we refer to $f$ as a non-negative measurable function, then we mean $E'=[0,\\infty]$, $\\mathcal{E}' = \\mathcal{B}([0,\\infty])$.\n\nIt is worth notice that under this set of definitions, a non-negative measurable function might not be $\\R$-measurable (since we allowed $\\infty$).\n\nWe write $m\\mathcal{E}^+$ for set of non-negative measurable functions.\n\n\\begin{thm}\nLet $(E,\\mathcal{E},\\mu)$ be a measure space. There exists a unique map $\\tilde{\\mu}: m\\mathcal{E}^+ \\to [0,\\infty]$ such that:\\\\\n$\\bullet$(a) $\\tilde{\\mu}(1_A) = \\mu(A)$ for all $A \\in \\mathcal{E}$, where $1_A$ is the indicator function;\\\\\n$\\bullet$(b) $\\tilde{\\mu}(\\alpha f + \\beta g) = \\alpha\\tilde{\\mu}(f) + \\beta\\tilde{\\mu}(g)$ for all $\\alpha,\\beta \\in [0,\\infty)$, $f,g \\in m\\mathcal{E}^+$ (linearity);\\\\\n$\\bullet$(c) $\\tilde{\\mu}(f) = \\lim_{n \\to \\infty} \\tilde{\\mu}(f_n)$ for any non-decreasing sequence $(f_n:n \\in \\N)$ in $m\\mathcal{E}^+$ such that $f_n(x) \\to f(x)$ for all $x \\in E$ (monotone-convergence).\n\nWe'll only prove uniqueness. For existence, see II Probability and Measure notes.\n\\end{thm}\n\nFrom now on, write $\\mu$ for $\\tilde{\\mu}$.\\\\\nWe'll call $\\mu(f)$ the \\emph{integral} of $f$ w.r.t. $\\mu$.\\\\\nWe also write $\\int_E f d\\mu = \\int E f(x) \\mu(dx)$.\n\nA \\emph{simple function} is a finite linear combination of indicator functions of measurable sets with positive coefficients, i.e. 
$f$ is simple if \n\\begin{equation*}\n\\begin{aligned}\nf =\\sum_{k=1}^n \\alpha_k 1_{A_k}\n\\end{aligned}\n\\end{equation*}\nfor some $n \\geq 0$, $\\alpha_k \\in (0,\\infty)$, $A_k \\in \\mathcal{E}$ for all $k = 1,\\dots,n$.\n\nFrom (a) and (b), for $f$ simple,\n\\begin{equation*}\n\\begin{aligned}\n\\mu(f) = \\sum_{k=1}^n \\alpha_k \\mu(A_k)\n\\end{aligned}\n\\end{equation*}\nAlso, if $f,g \\in m\\mathcal{E}^+$ with $f \\leq g$, then $f+h = g$ where $h = g - f \\cdot 1_{f < \\infty} \\in m\\mathcal{E}^+$. Then since $\\mu(h) \\geq 0$, (b) implies $\\mu(f) \\leq \\mu(g)$.\n\nTake $f \\in m\\mathcal{E}^+$. Define for $x \\in E$, $n \\in \\N$,\n\\begin{equation*}\n\\begin{aligned}\nf_n(x) = \\left(2^{-n} \\lfloor 2^n f(x)\\rfloor \\right) \\wedge n\n\\end{aligned}\n\\end{equation*}\nwhere $\\wedge$ means taking the minimum. Note that $(f_n)$ is a non-decreasing sequence of simple functions that converges to $f$ pointwise everywhere on $E$. Then by (c),\n\\begin{equation*}\n\\begin{aligned}\n\\mu(f) = \\lim_{n \\to \\infty} \\mu(f_n)\n\\end{aligned}\n\\end{equation*}\nSo we have shown uniqueness: the integral is uniquely determined by the measure (provided that it exists, which we're not going to show).\n\nWhen is $\\mu(f)$ zero (for $f \\in m\\mathcal{E}^+$)? For measurable functions $f,g$, we say $f=g$ \\emph{almost everywhere} if \n\\begin{equation*}\n\\begin{aligned}\n\\mu(\\{x \\in E: f(x) \\neq g(x) \\}) = 0\n\\end{aligned}\n\\end{equation*}\ni.e. they only disagree on a measure-zero set.\n\nWe can show, for $f \\in m\\mathcal{E}^+$, that $\\mu(f)=0$ if and only if $f=0$ almost everywhere.\n\nLet $f$ be a measurable function. We say that $f$ is \\emph{integrable} if $\\mu(|f|) < \\infty$.\n\nWrite $L^1 = L^1(E,\\mathcal{E},\\mu)$ for the set of all integrable functions. We extend the integral to $L^1$ by setting $\\mu(f) = \\mu(f^+) - \\mu(f^-)$, where \n\\begin{equation*}\n\\begin{aligned}\nf^\\pm (x) = 0 \\vee (\\pm f(x))\n\\end{aligned}\n\\end{equation*}\nwhere $\\vee$ means the maximum (so $f = f^+ - f^-$). Note that now $f^+,f^-$ are both non-negative, with disjoint support. Then we can show that $L^1$ is a vector space, and $\\mu:L^1 \\to \\R$ is linear.\n\n\\begin{lemma} (Fatou's lemma)\\\\\nLet $(f_n:n \\in \\N)$ be any sequence in $m\\mathcal{E}^+$. Then \n\\begin{equation*}\n\\begin{aligned}\n\\mu(\\liminf_{n\\to \\infty} f_n) \\leq \\liminf_{n \\to \\infty} \\mu(f_n)\n\\end{aligned}\n\\end{equation*}\nThe proof is a straightforward application of monotone convergence.\\\\\nThe only hard part is to remember which way the inequality goes (consider a block function sliding off to the right).\n\\end{lemma}\n\n\\begin{thm} (Dominated convergence)\\\\\nLet $(f_n:n \\in \\N)$ be a sequence of measurable functions on $(E,\\mathcal{E})$. Suppose $f_n(x)$ converges pointwise as $n \\to \\infty$, with limit $f(x)$ say. Suppose further that $|f_n| \\leq g$ for all $n$, for some integrable function $g$. Then $f_n$ is integrable for all $n$, so is $f$, and $\\mu(f_n) \\to \\mu(f)$ as $n \\to \\infty$.\n\\end{thm}\n\n\\begin{defi}\nWe call a measure space $(\\Omega,\\mathcal{F},\\P)$ such that $\\P(\\Omega)=1$ a \\emph{probability space}. 
In this setting, measurable functions correspond to random variables, measurable sets correspond to events, almost everywhere corresponds to almost surely, and the integral $\\P(X)$ corresponds to the expectation $\\E(X) = \\int_\\Omega X d\\P$, sometimes written $\\E_\\P(X)$ if we need to specify the underlying measure.\n\\end{defi}\n\n\\newpage\n\n\\section{Conditional expectation}\nThroughout this section we'll use the default probability space $(\\Omega, \\mathcal{F},\\P)$.\n\\subsection{The discrete case}\n\nSuppose $(G_n:n \\in \\N)$ is a sequence of disjoint sets in $\\mathcal{F}$ such that $\\cup_n G_n = \\Omega$ (so a partition of the space $\\Omega$). Let $X$ be an integrable random variable. Set $\\mathcal{G} = \\sigma(G_n:n \\in \\N)$, which in this case is $\\{\\cup_{n \\in I} G_n:I \\subseteq \\N\\}$, i.e. all countable unions of $G_n$. Define $Y = \\sum_{n \\in \\N} \\E(X|G_n) 1_{G_n}$, where $\\E(X|G_n) = \\E(X 1_{G_n}) / \\P(G_n)$ (except when $\\P(G_n) = 0$, in which case we define the LHS to be 0 as well). Now note that $Y$ is $\\mathcal{G}$-measurable, is integrable, and $\\E(Y 1_A) = \\E(X 1_A)$ for any $A \\in \\mathcal{G}$. We'll write $Y =\\E(X|\\mathcal{G})$ almost surely, and say $Y$ is \\emph{a version of} the conditional expectation of $X$ given $\\mathcal{G}$.\n\n\\subsection{Gaussian case}\nLet $(W,X)$ be a Gaussian (normal) random variable in $\\R^2$. Take a coarser $\\sigma$-algebra $\\mathcal{G}$ generated by $W$, which is $\\{\\{W \\in B\\}: B \\in \\mathcal{B}\\}$. Consider, for $a,b \\in \\R$, the random variable $Y = aW + b$. We can choose $a,b$ so that $\\E(Y-X) = a \\E(W)+b - \\E(X) = 0$, and $cov(Y-X,W) = a \\, var(W) - cov(X,W) = 0$. Then $Y$ is $\\mathcal{G}$-measurable, is integrable, and $\\E(Y1_A) = \\E(X1_A)$ for all $A \\in \\mathcal{G}$. To see this, note $Y-X$ and $W$ are independent (as they are jointly Gaussian with covariance 0), and $A = \\{W \\in B\\}$ for some $B \\in \\mathcal{B}$. So for $A \\in \\mathcal{G}$, $\\E((Y-X)1_A) = \\E(Y-X)\\P(A) = 0$.\n\n\\subsection{Conditional density functions}\nLet $(U,V)$ be a random variable in $\\R^2$ with density function $f(u,v)$, i.e.\n\\begin{equation*}\n\\begin{aligned}\n\\P((U,V) \\in A) = \\int_A f(u,v) dudv\n\\end{aligned}\n\\end{equation*}\n\nTake $\\mathcal{G} = \\sigma(U) = \\{\\{U \\in B\\}:B \\in \\mathcal{B}\\}$. Take a Borel measurable function $h$ on $\\R$ and set $X = h(V)$, and assume $X \\in L^1(\\P)$. Note $U$ has density function \n\\begin{equation*}\n\\begin{aligned}\nf(u) = \\int_\\R f(u,v) dv\n\\end{aligned}\n\\end{equation*}\n\nDefine the conditional density function\n\\begin{equation*}\n\\begin{aligned}\nf(v|u) = f(u,v) / f(u)\n\\end{aligned}\n\\end{equation*}\nwhere we define $0/0 = 0$.\n\nNow set $Y = g(U)$, where \n\\begin{equation*}\n\\begin{aligned}\ng(u) = \\int_\\R h(v) f(v|u) dv\n\\end{aligned}\n\\end{equation*}\n\nThen $g$ is a Borel-measurable function on $\\R$ (not obvious), so $Y$ is a $\\mathcal{G}$-measurable random variable, and is integrable and for all $A = \\{U \\in B\\} \\in \\mathcal{G}$, $\\E(Y 1_A ) = \\E(X 1_A)$. 
To see this, \n\\begin{equation*}\n\\begin{aligned}\n\\E(Y1_A) &=\\int_\\R g(u) 1_B(u) f(u) du\\\\\n&= \\int_\\R \\int_\\R h(v)f(v|u) dv 1_B(u) f(u) du\\\\\n&=\\E(X1_A)\n\\end{aligned}\n\\end{equation*}\nwhere at the last step we use Fubini's theorem (introduced later) to swap integrals, and note that we can combine $f(v|u) f(u)$ to get $f(u,v)$.\n\n\\subsection{Product measure and Fubini's theorem}\nTake finite (or $\\sigma$-finite) measure spaces $(E_1,\\mathcal{E}_1,\\mu_1)$ and $(E_2,\\mathcal{E}_2,\\mu_2)$. Write $\\mathcal{E}_1 \\otimes \\mathcal{E}_2$ for the $\\sigma$-algebra on $E_1 \\times E_2$ generated by sets of the form $A_1 \\times A_2$ where $A_i \\in \\mathcal{E}_i$ for $i=1,2$. We call $\\mathcal{E}_1 \\otimes \\mathcal{E}_2$ the \\emph{product $\\sigma$-algebra}.\n\n\\begin{thm}\nThere exists a unique measure $\\mu=\\mu_1 \\otimes \\mu_2$ on $(E_1 \\times E_2, \\mathcal{E}_1 \\otimes \\mathcal{E}_2)$ such that\n\\begin{equation*}\n\\begin{aligned}\n\\mu(A_1 \\times A_2) = \\mu_1(A_1) \\mu_2(A_2)\n\\end{aligned}\n\\end{equation*}\nfor all $A_i \\in \\mathcal{E}_i$ for $i=1,2$.\n\\end{thm}\n\n\\begin{thm} (Fubini's theorem)\\\\\nLet $f$ be a non-negative measurable function on $(E_1 \\times E_2, \\mathcal{E}_1 \\otimes \\mathcal{E}_2)$. For $x_1 \\in E_1$, define in the obvious way\n\\begin{equation*}\n\\begin{aligned}\nf_{x_1}(x_2) = f(x_1,x_2)\n\\end{aligned}\n\\end{equation*}\nThen $f_{x_1}$ is $\\mathcal{E}_2$-measurable for all $x_1 \\in E_1$. Now define $f_1(x_1) = \\mu_2(f_{x_1})$. Then $f_1$ is $\\mathcal{E}_1$-measurable and $\\mu_1(f_1) = \\mu(f)$ (see part II Prob and Measure notes for the integrable case). Define $\\hat{f}$ on $E_2 \\times E_1$ by \n\\begin{equation*}\n\\begin{aligned}\n\\hat{f}(x_2,x_1) = f(x_1,x_2)\n\\end{aligned}\n\\end{equation*}\nthen we can show $\\hat{f}$ is $\\mathcal{E}_2 \\otimes \\mathcal{E}_1$-measurable, and \n\\begin{equation*}\n\\begin{aligned}\n(\\mu_2 \\otimes \\mu_1) (\\hat{f}) = (\\mu_1 \\otimes \\mu_2) (f)\n\\end{aligned}\n\\end{equation*}\nSo by Fubini,\n\\begin{equation*}\n\\begin{aligned}\n\\mu_2(f_2) = (\\mu_2 \\otimes \\mu_1)(\\hat{f}) = \\mu(f) = \\mu_1(f_1)\n\\end{aligned}\n\\end{equation*}\nwith obvious notations. This means\n\\begin{equation*}\n\\begin{aligned}\n\\int_{E_2}\\left(\\int_{E_1} f(x_1,x_2) \\mu_1 (dx_1) \\right) \\mu_2 (dx_2) = \\int_{E_1} \\left(\\int_{E_2} f(x_1,x_2) \\mu_2(dx_2) \\right) \\mu_1(dx_1)\n\\end{aligned}\n\\end{equation*}\nNote that this also holds for just $f$ integrable.\n\\end{thm}\n\n\\subsection{Existence and uniqueness of conditional expectation}\n\\begin{thm}\n    Let $X$ be an integrable random variable and let $\\mathcal{G}$ be a sub-$\\sigma$-algebra of $\\mathcal{F}$. There exists a random variable $Y$ s.t.:\\\\\n    $\\bullet$ (a) $Y$ is $\\mathcal{G}$-measurable;\\\\\n    $\\bullet$ (b) $Y$ is integrable;\\\\\n    $\\bullet$ (c) For all $A \\in \\mathcal{G}$, $\\E(Y1_A) = \\E(X1_A)$.\\\\\n    Moreover, if $Y'$ is another random variable satisfying the above, then $Y'=Y$ a.s..\n\n    We write $Y=\\E(X|\\mathcal{G})$ a.s., and we say that $Y$ is \\emph{a version of} the conditional expectation of $X$ given $\\mathcal{G}$.\\\\\n    In the case $X=1_A$, write $Y=\\P(A|\\mathcal{G})$ a.s., the \\emph{probability} of event $A$ given $\\mathcal{G}$.\n\n    An analogous statement holds with \\emph{integrable} replaced by \\emph{non-negative} throughout the statement.\n    \\begin{proof}\n        Uniqueness: We'll actually prove something stronger. 
Suppose $Y$ satisfies the above conditions, and $Y'$ satisfies the above for some integrable $X'$, with $X \\leq X'$. We'll show that $Y \\leq Y'$ (that gives what we want by setting $X'=X$).\\\\\n        Consider the non-negative random variable $Z=(Y-Y')1_A$ where $A =\\{Y \\geq Y' \\} \\in \\mathcal{G}$ (think about the definition of a measurable function here). Then $\\E(Y 1_A) = \\E(X 1_A) \\leq \\E(X' 1_A) = \\E(Y' 1_A)$, so $\\E(Z) \\leq 0$, so $Z = 0$ a.s.. But that means $Y\\leq Y'$ a.s..\n\n        Existence: Consider for now the case $X \\in L^2(\\mathcal{F})$ (meaning $\\E(|X|^2) < \\infty$). Since $L^2(\\mathcal{F})$ is complete (every Cauchy sequence has a limit) and $L^2(\\mathcal{G})$ is a closed subspace of $L^2(\\mathcal{F})$, there exists an orthogonal projection $Y$ of $X$ on $L^2(\\mathcal{G})$. That is, $Y \\in L^2(\\mathcal{G})$ and for all $Z \\in L^2(\\mathcal{G})$, we have $\\E((Y-X)Z) = 0$, i.e. the difference is orthogonal to the subspace.\\\\\n        For $A \\in \\mathcal{G}$, take $Z = 1_A$ to see $\\E(Y1_A) = \\E(X1_A)$. So $Y$ satisfies the above conditions (the key point here is that the orthogonal projection always exists, so we've in some sense converted the problem of existence of conditional expectation to the existence of orthogonal projection).\\\\\n        Now suppose $X \\geq 0$. Set $X_n = X \\wedge n$, then $X_n \\in L^2(\\mathcal{F})$. We have shown there exists $Y_n \\in L^2(\\mathcal{G})$, such that for all $A \\in \\mathcal{G}$, $\\E(Y_n1_A) = \\E(X_n1_A)$, and $0 \\leq Y_n \\leq Y_{n+1}$ a.s. for all $n$ (note $0 \\leq X_n \\leq X_{n+1}$). Set $\\Omega_0 = \\{\\omega \\in \\Omega: 0 \\leq Y_n(\\omega) \\leq Y_{n+1}(\\omega) \\forall n\\}$. Then $\\P(\\Omega_0) = 1$. Define a non-negative $\\mathcal{G}$-measurable random variable $$Y_\\infty = 1_{\\Omega_0} \\lim_{n \\to \\infty} Y_n$$\n        Then $0 \\leq 1_{\\Omega_0} Y_n \\uparrow Y_\\infty$ as $n \\to \\infty$.\\\\\n        So by monotone convergence, for all $A \\in \\mathcal{G}$,\n        \\begin{equation*}\n            \\begin{aligned}\n                \\E(Y_\\infty 1_A) &= \\lim_{n \\to \\infty} \\E(Y_n 1_A)\\\\\n                &= \\lim_{n \\to \\infty} \\E(X_n 1_A)\\\\\n                &= \\E(X 1_A)\n            \\end{aligned}\n        \\end{equation*}\n        In particular, taking $A=\\Omega$, we see $Y_\\infty$ is integrable, so $\\P(Y_\\infty = \\infty) = 0$.\\\\\n        So if we set $Y = Y_\\infty 1_{Y_\\infty < \\infty}$, then $Y$ is a random variable and satisfies the above conditions. So we're done for the case $X \\geq 0$.\n\n        In the general case, we use the usual trick: apply the preceding to $X^\\pm = 0 \\vee (\\pm X)$ (so $X = X^+-X^-$) to obtain $Y^\\pm$, and then check that $Y=Y^+-Y^-$ satisfies the above conditions similarly.\n    \\end{proof}\n\\end{thm}\n\n\\subsection{Properties of conditional expectation}\nFix an integrable random variable $X$ and a sub-$\\sigma$-algebra $\\mathcal{G}$. Theorem 1.4.1 has the following consequences:\\\\\n(i) $\\E(\\E(X|\\mathcal{G})) = \\E(X)$;\\\\\n(ii) if $X$ is $\\mathcal{G}$-measurable, then $\\E(X|\\mathcal{G}) = X$ a.s.;\\\\\n(iii) If $X$ is independent of $\\mathcal{G}$, then $\\E(X|\\mathcal{G}) = \\E(X)$ a.s..\n\nCheck for $A \\in \\mathcal{G}$, \n\\begin{equation*}\n    \\begin{aligned}\n        \\E(X 1_A) = \\E(X)\\P(A) = \\E(\\E(X)1_A)\n    \\end{aligned}\n\\end{equation*}\nso $\\E(X)$ satisfies (b) and (c) in the previous theorem.\n\n(iv) if $X \\geq Y$ a.s., then $\\E(X|\\mathcal{G}) \\geq \\E(Y|\\mathcal{G})$ a.s. 
(from the proof).\\\\\n(v) for all $\\alpha,\\beta \\in \\R$, all integrable random variables $X,Y$,\n\\begin{equation*}\n    \\begin{aligned}\n        \\E(\\alpha X + \\beta Y | \\mathcal{G}) = \\alpha\\E(X|\\mathcal{G})+\\beta\\E(Y|\\mathcal{G})\n    \\end{aligned}\n\\end{equation*}\na.s..\n\nConsider a sequence of random variables $(X_n)$:\\\\\n(vi) if $0 \\leq X_n \\uparrow X$ pointwise, then \n\\begin{equation*}\n    \\begin{aligned}\n        \\E(X_n|\\mathcal{G}) \\to \\E(X|\\mathcal{G}) \\ a.s.\n    \\end{aligned}\n\\end{equation*}\n(conditional monotone convergence).\\\\\nTo see this, we know $\\E(X_n|\\mathcal{G}) \\uparrow Y$ a.s. for some non-negative $\\mathcal{G}$-measurable random variable $Y$.\n\nTake $A \\in \\mathcal{G}$, then by monotone convergence,\n\\begin{equation*}\n    \\begin{aligned}\n        \\E(Y 1_A) &= \\lim_{n \\to \\infty} \\E(\\E(X_n | \\mathcal{G})1_A)\\\\\n        &= \\lim_{n \\to \\infty} \\E(X_n 1_A)\\\\\n        &= \\E(X 1_A)\n    \\end{aligned}\n\\end{equation*}\nSo $Y$ satisfies (b) and (c) in the above theorem for $X$.\n\n(vii) For any sequence of non-negative random variables $(X_n)$,\n\\begin{equation*}\n    \\begin{aligned}\n        \\E(\\liminf_{n \\to \\infty} X_n | \\mathcal{G}) \\leq \\liminf_{n \\to \\infty} \\E(X_n|\\mathcal{G})\n    \\end{aligned}\n\\end{equation*}\n\n(viii) If $X_n(\\omega) \\to X(\\omega)$ for all $\\omega$ as $n \\to \\infty$ and there exists an integrable $Y$ such that $|X_n| \\leq Y$ for all $n$, then\n\\begin{equation*}\n    \\begin{aligned}\n        \\E(X_n|\\mathcal{G}) \\to \\E(X|\\mathcal{G}) \\ a.s.\n    \\end{aligned}\n\\end{equation*}\nFor (vii), apply conditional monotone convergence to $(\\inf_{m\\geq n} X_m : n \\in \\N)$; for (viii), apply Fatou to $(Y \\pm X_n: n \\in \\N)$.\n\n\n\\end{document}\n", "meta": {"hexsha": "2f268cb77a80fc99896fb96a2058608a9f091d3b", "size": 17174, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/ADVP.tex", "max_stars_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_stars_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T17:34:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T17:34:25.000Z", "max_issues_repo_path": "Notes/ADVP.tex", "max_issues_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_issues_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/ADVP.tex", "max_forks_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_forks_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.5735735736, "max_line_length": 742, "alphanum_fraction": 0.6423081402, "num_tokens": 6384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300048, "lm_q2_score": 0.8198933293122507, "lm_q1q2_score": 0.5993900519552364}}
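As a small numerical sanity check of the discrete construction $Y = \\sum_n \\E(X|G_n) 1_{G_n}$ and its defining property $\\E(Y1_A) = \\E(X1_A)$ for $A \\in \\mathcal{G}$, here is a Python sketch; the partition and the random variables are arbitrary illustrative choices.\n\\begin{lstlisting}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# Approximate Omega by a large sample; the partition (G_n) is given by\n# the unit intervals of the real line, applied to a driving variable U.\nU = rng.normal(size=10**6)\nX = U**2 + rng.normal(size=U.size)     # an integrable random variable\ncell = np.floor(U).astype(int)         # index n of the cell G_n\n\n# Y = sum_n E(X|G_n) 1_{G_n}: replace X on each cell by its cell mean,\n# i.e. by E(X 1_{G_n}) / P(G_n) estimated empirically\nY = np.zeros_like(X)\nfor n in np.unique(cell):\n    mask = cell == n\n    Y[mask] = X[mask].mean()\n\nA = cell >= 1                          # A = {U >= 1} is a union of cells\nprint(np.mean(Y * A), np.mean(X * A))  # approximately equal\n\\end{lstlisting}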
{"text": "\\documentclass[Thesis.tex]{subfiles}\n\\begin{document}\n\\chapter{Monte Carlo Methods}\n\\label{chp:monte-carlo}\n\n\\glsresetall\n\nIn \\cref{chp:variational-monte-carlo} we found that we needed a way to sample\nsystem configurations $\\vX$ from the \\gls{pdf} described by the\nwave function. We needed this as a means to evaluate integrals. Specifically, we\nneeded to evaluate the expectation value of the Hamiltonian w.r.t. a given wave\nfunction $\\psialpha$:\n\n\\begin{align}\n    \\expval{\\hat H} &= \\frac{\\expval{\\hat H}{\\psialpha}}{\\braket{\\psialpha}} = \\int\\dd{\\vX} P_{\\psi_{\\valpha}} (\\vX) E_L(\\vX)\\label{eq:chp-mc-anon-1}\\\\\n    &\\approx \\frac{1}{N} \\sum_{i = 1}^N E_L(\\vX_i)\n    \\qq{where} \\vb X_i\\disteq P_{\\psi_{\\valpha}}.\n\\end{align}\nwhere $N$ is set sufficiently large to satisfy the required accuracy. Due to\nthe central nature of this method, we will start by a proper presentation of\nthis \\gls{mc} technique as it is used here, followed the details of how to\nobtain the samples we need.\n\n\\section{Monte Carlo Integration}\n\n\\acrfull{mci} is a general technique for numeric evaluation of\nany arbitrary integral. In general it concerns evaluating any multidimensional\ndefinite integral, written as:\n\n\\begin{align}\n    I = \\idotsint_\\sigma\\dd{\\vx} f(\\vx),\n\\end{align}\nfor some integrand $f(\\vx)$ and where $\\sigma$ denotes a subset of $\\mathbb{R}^m$ with a $m$-dimensional volume given by:\n\\begin{align}\n    V = \\idotsint_\\sigma\\dd{\\vx}\n\\end{align}\nIn its simplest form, \\gls{mci} works by sampling $N$ \\emph{uniformly} distributed points\n$\\vx_1,\\dots,\\vx_N$ from $\\sigma$, and uses a Riemann sum formulation of the\nintegral $I$:\\footnote{Notice that $\\vx_i$ now denotes one random sample, and\n  has nothing to do with the $i$'th particle. For the time being we don't talk\n  about particles, and the subscripts indicate different samples.}\n\\begin{align}\n  \\label{eq:mci-def}\n    I = \\lim_{N\\to\\infty} M_N \\defeq \\lim_{N\\to\\infty}\\sum_{i=1}^N f(\\vx_i) \\frac{V}{N} = V\\expval{f(\\vx)}_\\sigma,\n\\end{align}\nwhere $\\expval{\\cdot}_\\sigma$ denotes the expectation value over all points in\n$\\sigma$ when all points are equally likely. The estimate becomes increasingly\naccurate for increasing $N$, and is exact in the limit where $N\\to\\infty$.\n\n\\subsection{Estimating the Error}\n\nFor any finite $N$ the result of using \\gls{mci} will not be exact. Because of\nthis, it is immensely useful to be able to assign a statistical certainty to the\nresult. For instance, say we estimate the ground state energy of a system using\ntwo different wave functions. We perform the integrals and get (to the first\nfive digits) $\\expval{H}_A = \\SI{1.0315}{eV}$ and $\\expval{H}_B =\n\\SI{1.0943}{eV}$. Recalling that the lowest energy is the most physically\naccurate, can we say that $A$ is better than $B$? No, because we lack\ninformation on the precision of the integrals. Assume that in reality, the\nnumbers were $\\expval{H}_A=\\SI{1.03(8)}{eV}$ and\n$\\expval{H}_B=\\SI{1.09(8)}{eV}$, where the numbers in parenthesis indicate the\nuncertainty in the last digit. Now we know that the two numbers are not\ndifferent in a statistically significant way, and any difference could be\nentirely random. 
\n\nWe obtain a statistical estimate of the error we make in approximating the\nintegral by considering the variability of the individual terms of\n\\cref{eq:mci-def}. Let's start by estimating the variance of the integrand:\n\\begin{align}\n    \\Var[f(\\vx)] \\defeq \\sigma^2_N = \\frac{1}{N-1} \\sum_{i=1}^N\\qty(f(\\vx_i) - \\frac{1}{N}\\sum_{j=1}^N f(\\vx_j))^2.\n\\end{align}\nIt then follows, by the properties of variance,\n\\begin{align}\n  \\label{eq:mci-annon-2}\n    \\Var[M_N] = \\Var\\qty[ \\frac{V}{N} \\sum_{i=1}^Nf(\\vx_i)] = \\frac{V^2}{N^2}\\sum_{i=1}^N\\Var[f(\\vx_i)] = \\frac{V^2\\sigma^2_N}{N}\n\\end{align}\n\\begin{align}\n  \\label{eq:mci-sem-def}\n    \\implies \\Std[M_N] = \\sqrt{\\Var[M_N]} = \\frac{V\\sigma_N}{\\sqrt N}.\n\\end{align}\n\\Cref{eq:mci-sem-def} is the standard deviation of the sample mean, otherwise\nknown as the \\gls{sem}. It is common to use the \\gls{sem} as the estimate of uncertainty on\nexpectation values. Another common choice is to give confidence intervals\nbased on the \\gls{sem}. Assuming the sample mean is normally\ndistributed,\\footnote{Which for our wave functions is often at least\n  approximately true.} we can say with e.g.\\ $\\SI{95}{\\percent}$ certainty that\nthe true expectation value, $I$, is in the interval\n\n\\begin{align}\n  \\qty[CI_{-}^{95}, CI_{+}^{95}] \\defeq \\qty[M_N - 1.96\\Std[M_N], M_N + 1.96\\Std[M_N]].\n\\end{align}\n\n\\Cref{eq:mci-sem-def} tells us that the expected statistical error we make when using \\gls{mci} goes\nlike $\\mathcal{O}(1 / \\sqrt N)$, and depends linearly on the size of the volume\nand the standard deviation of the integrand itself. This illustrates both the\nadvantage and the disadvantage of \\gls{mci} compared to other,\ndeterministic integration methods. Its advantage is its simple dependency on the\nvolume, and its independence from the particular number of dimensions in the\nintegral. Other methods tend to depend exponentially on the dimensionality, and\nas such \\gls{mci} is often the best choice for multidimensional integrals. Its\ndisadvantage is the relatively slow convergence rate, which is asymptotically\nmuch worse than that of other approaches~\\cite{Numerical-Recipes-Press-et-al}.\n\n\\subsubsection{Correction for Autocorrelation}\n\nActually, \\cref{eq:mci-annon-2} includes a mistake. We applied a property of\nvariance to a sum of random variables that holds if and\nonly if the samples $\\vx_i$ are perfectly independent. Unfortunately for us,\nthe samples will not always be so, depending on the algorithm used to generate\nthem. We can sample independently from a uniform distribution and some other\nnotable distributions (\\cref{sec:sampling-arb-prob-dist-funcs}), but not in\ngeneral. This will be especially true for the algorithms we use to sample from\narbitrary wave functions (\\cref{sec:metro-hastings-alg}).\n\nThe version of \\cref{eq:mci-annon-2} that is true in general accounts for\nautocorrelated samples:\\footnote{Covariance is a measure of the joint\n  variability between two random variables. Autocorrelation is used to refer to\n  the covariance between a random variable and itself at pairs of time points.}\n\n\n\\begin{align}\n  \\label{eq:mci-annon-3}\n    \\Var[M_N] = \\Var\\qty[ \\frac{V}{N} \\sum_{i=1}^Nf(\\vx_i)] = \\frac{V^2}{N^2}\\sum_{i=1}^N\\sum_{j=1}^N\\Cov[f(\\vx_i), f(\\vx_j)].\n\\end{align}\nThis is always greater than or equal to \\cref{eq:mci-annon-2}, and equal only if\nthe off-diagonal covariance terms (where $i\\neq j$) are all zero. Not accounting\nfor the autocorrelation in the samples will therefore lead us to underestimate\nthe error, which is arguably much worse than the opposite.\n\nThere is, unfortunately, a heavy cost to computing the full covariance.\nTypically, to keep the expectation values accurate, we will use very large\nvalues for $N$, often on the order of tens or hundreds of millions. The double loop in\n\\cref{eq:mci-annon-3} will then quickly become unfeasible to compute, with\ntrillions or quadrillions of iterations. In practice we need to estimate it.\nYes, we are going to estimate the estimate of the error we make in an estimate.\n\nThe strategy we shall use for all later error analysis is called \\emph{blocking},\nand specifically we use an automated form presented in an article by\n\\textcite{Jonsson-2018}. Say we have a sequence of sample values $f(x_i)$:\n\n\\begin{align}\n  D_0 = \\mqty[f(x_1) & f(x_2) & \\dots & f(x_N)],\n\\end{align}\nand let's assume that there is some non-zero autocovariance between samples that are\nsufficiently close in the sequence. The idea of blocking is to group\n\\say{blocks} of samples together, replacing them by their mean value. For\ninstance, after one blocking transformation of the above sequence we get:\n\n\\begin{align}\n  D_1 = \\mqty[\\qty(\\frac{f(x_1) + f(x_2)}{2}) & \\qty(\\frac{f(x_3) + f(x_4)}{2}) & \\dots & \\qty(\\frac{f(x_{N-1}) + f(x_N)}{2})].\n\\end{align}\nIf $N = 2^d$ with $d > 1$ we can repeat the process to obtain $d$ different\nsequences $D_i$. These transformations conserve the value of \\cref{eq:mci-annon-3}\nwhile decreasing the covariance between samples~\\cite{Jonsson-2018}. That implies that measuring\nthe variance with \\cref{eq:mci-annon-2} on each $D_i$ will yield greater and\ngreater errors, until there is no more covariance and the error estimate\nconverges.\n\n\\Cref{fig:blocking-example-diagram} shows this process in practice. The example\ndata is $2^{27}$ samples of the mean radial displacement of a single\none-dimensional particle placed in a harmonic oscillator. Due to how the data is\ngenerated, we expect there to be a significant portion of autocorrelation\npresent. The figure shows the standard error from \\cref{eq:mci-annon-2},\ncalculated on the data sets obtained by performing repeated blocking\ntransformations. We can see that it rises steadily during the first $6$--$7$\ntransformations, after which the block sizes are big\nenough to remove all the covariance and the estimate converges. As\nthe data set becomes too small the results start to become unpredictable. The\noptimal estimate is indicated by the red circle, as determined by the automated\nprocedure developed by \\textcite{Jonsson-2018}. In this particular case, this occurred after\n$\\num{12}$ blocking transformations.
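\n\nA minimal sketch of the blocking transformation itself (hypothetical code, assuming\n\\texttt{numpy}; the automated choice of when to stop is more involved and is described by\n\\textcite{Jonsson-2018}) may make the procedure concrete:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef blocking_errors(samples):\n    # Standard error of the mean after each blocking transformation.\n    x = np.asarray(samples, dtype=float)\n    errors = []\n    while len(x) >= 2:\n        errors.append(x.std(ddof=1) / np.sqrt(len(x)))\n        if len(x) % 2:                     # drop an odd trailing sample\n            x = x[:-1]\n        x = 0.5 * (x[0::2] + x[1::2])      # one blocking transformation\n    return errors\n\\end{verbatim}\nFor autocorrelated data these estimates rise with each transformation and then plateau,\nmirroring the behaviour in \\cref{fig:blocking-example-diagram}.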
\n\nIf not explicitly stated otherwise, all error estimates in this thesis will\ninclude this correction.\n\n\n\\begin{figure}[h]\n  \\centering\n  \\resizebox{0.7\\linewidth}{!}{%\n    \\input{scripts/blocking-example-diagram.py.tex}\n  }\n  \\caption[Standard error as a function of blocking transformations]{The estimated standard error of a random process exhibiting symptoms\n    of autocorrelation, shown as a function of the effective size of the data set as\n    blocking transformations are applied. As the block sizes increase the\n    covariance goes away, while the variance increases. After convergence, the\n    optimal estimate is illustrated by the red dot, as determined by an automated blocking\n    procedure~\\cite{Jonsson-2018}.\\citesource{writing/scripts/blocking-example-diagram.py}}\n  \\label{fig:blocking-example-diagram}\n\\end{figure}\n\n\n\n\\subsection{Importance Sampling}\n\nIn some suitable cases we can improve quite dramatically on the simple,\nstraightforward integration approach given in \\cref{eq:mci-def} with a technique\nknown as \\gls{is}. To illustrate this, say we\nwould like to evaluate the following integral:\n\n\\begin{align}\n    \\label{eq:mci-importance-example-func}\n    I = \\intfy\\dd{x}f(x)=\\intfy\\dd{x} \\frac{\\exp{-\\flatfrac{x^2}{2} }}{\\sqrt{2\\pi \\qty(1 +x^2)}}  = \\num{0.78964}.\n\\end{align}\n\n\\begin{figure}\n    \\centering\n    \\resizebox{0.7\\linewidth}{!}{%\n    \\begin{tikzpicture}\n        \\begin{axis}[every axis plot post/.append style={\n                mark=none,domain=-5:5,samples=50,smooth}, % All plots: from -5:5, 50 samples, smooth, no marks\n            axis x line=bottom, % no box around the plot, only x and y axis\n            axis y line=left,\n            enlargelimits=upper] % extend the axes a bit to the right and top\n            \\addplot[semithick, color0] {1/(sqrt(2*pi))*exp(-(x^2)/2)*(1+x^2)^(-0.5)};\n            \\addplot[semithick, color1] {1/(sqrt(2*pi))*exp(-(x^2)/2)};\n            \\addlegendentry{$f(x)$}\n            \\addlegendentry{$\\mathcal{N}(0, 1)$}\n        \\end{axis}\n    \\end{tikzpicture}\n    }\n    \\caption[Illustration of example probability distributions]{\\label{fig:mci-importance-example-func-plot}Plot of the function in\n      \\cref{eq:mci-importance-example-func}, enveloped by a standard normal distribution.\\citesource{writing/MonteCarlo.tex}}\n\\end{figure}\n\\noindent The integrand is plotted in \\cref{fig:mci-importance-example-func-plot}, and\nthe observant reader might recognise this as the product of a normal\ndistribution and a Student-t distribution. The correct value for the integral\nis also given, so we have a reference for the results.\n\nFor comparison, let's start with the straightforward approach from before.\nBecause the domain of integration extends to infinity, the \\say{volume} $V$ would not be well\ndefined, and we are forced to truncate the region manually. From looking at the\ngraph in \\cref{fig:mci-importance-example-func-plot} we may say that $x\\in\n[-5, 5]$ should account for the vast majority of the total integral. That means\nwe use the following estimate:\n\n\\begin{align}\n    \\label{eq:chp-mc-anon-2}\n    I\\approx \\frac{10}{N}\\sum_{i=1}^Nf(x_i)\\qq{where} x_i\\disteq\\text{Uniform}(-5, 5).\n\\end{align}\n\n\\noindent The main issues with this approach are that 1) we need to truncate the integral\nmanually and 2) no matter where we place the box boundaries we will tend to\nsample a lot of $x_i$'s in areas where $f$ gives very small contributions to\nthe integral. Ideally we would like our sample points to be distributed as\nclosely as possible to $f$, in order to capture as much information as we can.\nThis is the idea of \\gls{is}. Instead of using the uniform\ndistribution for sampling, we use some probability distribution which more\nclosely resembles the integrand, call it $g(x)$. Formally we then restate the integral as follows:\n\n\\begin{align}\n    I = \\intfy\\dd{x}f(x) = \\intfy\\dd{x} \\frac{f(x)}{g(x)} g(x).\n\\end{align}\nThis can be interpreted as the expectation of $z(x)=\\flatfrac{f(x)}{g(x)}$ for $x$'s drawn from $g$, and so the corresponding estimator is:\n\n\\begin{align}\n    I \\approx \\frac{1}{N} \\sum_{i=1}^N \\frac{f(x_i)}{g(x_i)} \\qq{where} x_i\\disteq g.\n\\end{align}\nSetting $g = \\text{Uniform}(-5, 5)$ recovers \\cref{eq:chp-mc-anon-2}, so this is\nsimply the natural generalization of the standard approach for an arbitrary\n\\gls{pdf}.\n\n\nGoing back to the example, we should now choose a distribution function that\nclosely resembles the integrand, while still being simple to sample from. In\nthis (contrived) example, a natural choice is to use the standard normal\ndistribution, $g = \\mathcal{N}(0, 1)$. In\n\\cref{fig:mci-importance-example-func-plot} we can see how $g(x)$\nencloses $f(x)$ much more tightly than any rectangular box might hope to.\n\n\\begin{figure}\n   \\centering\n    \\resizebox{0.7\\linewidth}{!}{%\n        \\input{scripts/monte_carlo_int_example.py.tex}\n    }\n    \\caption[Convergence of Monte Carlo Integration]{\\label{fig:mci-importance-example-func-convergence}Convergence of\n    the integral in \\cref{eq:mci-importance-example-func}, using regular\n    \\gls{mci} and \\gls{is} with $g = \\mathcal{N}(0,\n    1)$. The points indicate the approximated value of the integral for each\n    particular experiment, and the shaded areas indicate the corresponding\n    $\\SI{95}{\\percent}$ confidence intervals for the expected value. We see the\n    latter displaying both greater accuracy and tighter confidence intervals.\n    \\citesource{writing/scripts/monte_carlo_int_example.py}\n    }\n\\end{figure}\n\n\\Cref{fig:mci-importance-example-func-convergence} shows the convergence of\nthe \\gls{mc} approximations towards the correct value for an increasing\nnumber of sampled points. The drawn line is the mean value at each run, while\nthe shaded areas show the corresponding $\\SI{95}{\\percent}$ confidence\nintervals. Because of the more suited sampling distribution, we obtain results\nwhich are more accurate, and perhaps most importantly, tighter confidence\nbounds. In general, using better sampling distributions will tend to give us\nlower-variance results.
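\n\nAs a quick numerical cross-check of this example (a hypothetical sketch assuming\n\\texttt{numpy}; the thesis figures are instead produced by the cited scripts), both\nestimators can be written in a few lines:\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi * (1 + x**2))\ng = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # N(0, 1) pdf\n\nn = 100_000\nx_uni = rng.uniform(-5, 5, n)            # plain MCI on [-5, 5]\nplain = 10 * f(x_uni).mean()\nx_nrm = rng.standard_normal(n)           # importance sampling\nweighted = (f(x_nrm) / g(x_nrm)).mean()\nprint(plain, weighted)                   # both close to 0.78964\n\\end{verbatim}\nRepeating this over several seeds should show the importance-sampled estimate\nscattering visibly less around the true value.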
\n\n\\section{Sampling from Arbitrary Probability Density Functions}\n\\label{sec:sampling-arb-prob-dist-funcs}\n\nSo far we have taken for granted the ability to sample numbers from various\n\\glspl{pdf}. We dedicate this section to discussing sampling from the bottom up,\nculminating in a detailed presentation of the most central algorithm in this\nthesis, the Metropolis-Hastings algorithm.\n\n\\subsection{The Uniform Distribution}\n\nThe basic building block upon which we shall build all other random sampling\ntechniques is the ability to sample random numbers from the uniform\ndistribution, $\\text{Uniform}(a, b)$. This is the simplest distribution\npossible, where all numbers in the range $[a, b]$ have the same probability\ndensity. The definition of its \\gls{pdf} is simply:\n\n\\begin{align}\\label{eq:uniform-pdf-definition}\n    p_U\\qty(x\\;\\middle|\\; a, b) \\defeq  \\begin{cases}\n        \\frac{1}{b - a} &\\qfor x\\in [a, b]\\\\\n        0&\\qotherwise\n    \\end{cases}.\n\\end{align}\nMost commonly we operate only with the \\emph{standard uniform distribution}, $\\text{Uniform}(0, 1)$. Any other range can be related via the easily verifiable identity\n\\begin{align}\n    p_U\\qty(x\\;\\middle|\\; a,b) = \\frac{1}{b-a}\\, p_U\\qty( \\frac{x-a}{b-a} \\;\\middle|\\; 0, 1).\n\\end{align}\n\nBut how exactly do we obtain random realizations from this \\gls{pdf}? First we need\nto depart from the typical notion of randomness, and realize that we are unable to\nwrite an algorithm which produces a truly random result. While we could in\nprinciple rely on nature to provide sources of complete, true randomness (such as\nthe exact behavior of a quantum mechanical observable), this is impractical\nwhen we need fast, on-demand samples. Instead we settle for\n\\emph{pseudo-random} samples. A pseudo-random sequence $x_1, x_2,\\dots, x_n$\nimplies that one cannot\\footnote{In any practical way, barring exhaustive trial\nand error of every conceivable underlying algorithm.} predict $x_{n+1}$ without\ninsight into how the sequence is generated. In other words, the (ideal)\npseudo-random sequence is indistinguishable from a truly random sequence for\nanyone observing the numbers $x_i$ alone.\n\nThere are a multitude of algorithms that can generate a sequence of\npseudo-random \\emph{integers}, all with varying properties.\\footnote{Such properties\ncould include range of possible numbers, period, level of bias and how\ncryptographically secure they are.} The simplest family of\nsuch algorithms is the \\gls{lcg}~\\cite{Knuth-1997-ACP-270146}. It defines a\npseudo-random sequence by the simple recurrence relation\n\\begin{align}\n    \\label{eq:linear-congruential-generator-relation}\n    x_{n+1} = (ax_n + c)\\mod m,\n\\end{align}\nwhere $a, c$ and $m$ are constants which determine the behaviour of the\nalgorithm, and $x_0$ needs to be specified manually. For instance,\n\\textcite{Numerical-Recipes-Press-et-al} use $a = 1664525$, $c=1013904223$ and\n$m=2^{32}$.\n\nThis produces uniformly distributed integers in the range $[0, m)$. Numbers\nfrom the standard uniform distribution can then simply be obtained by dividing\nthe integers by $m$.
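\n\nIn code, such a generator could be sketched as follows (a hypothetical illustration\nusing the constants quoted above, not a generator of production quality):\n\n\\begin{verbatim}\ndef lcg(seed, a=1664525, c=1013904223, m=2**32):\n    # Yields pseudo-random floats in [0, 1) via the LCG recurrence.\n    x = seed\n    while True:\n        x = (a * x + c) % m\n        yield x / m\n\ngen = lcg(seed=12345)\nprint([next(gen) for _ in range(3)])\n\\end{verbatim}\n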
The complete algorithm is shown in \\cref{alg:uniform-LCG-sampling}.\n\n\\begin{algorithm}[h]\n    \\caption{Sampling from $\\text{Uniform}(l, u)$}\n    \\label{alg:uniform-LCG-sampling}\n    \\begin{algorithmic}[1]\n        \\Require $a, c, m$ and $x_0$\n        \\Function{unif}{lower bound $l$, upper bound $u$}\n            \\State $x\\gets x_0$\n            \\Repeat\n                \\State $x\\gets (ax + c)\\mod m$\n                \\State \\Yield $(\\flatfrac{x}{m}) \\times (u - l) + l$\n            \\Until{done}\n        \\EndFunction\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Inverse Transform Sampling}\n\nArmed with uniformly distributed random numbers, we now turn towards generating\nnumbers from other \\glspl{pdf}. Let $U\\disteq\\text{Uniform}(0,1)$ be a uniformly\ndistributed stochastic variable, and let $X$ be a stochastic variable\nassociated with a \\gls{pdf}, $p$, and a corresponding \\gls{cdf}, $F$, i.e.\n\n\\begin{align}\n    F(x) = \\text{Pr}\\qty(X\\leq x) = \\int_{-\\infty}^x \\dd{t}p(t).\n\\end{align}\nWe would like to define a transformation $T:[0, 1]\\to\\mathbb{R}$ that can map from uniform numbers to numbers that follow the given \\gls{cdf}, i.e. define $T$ such that $T(U)\\disteq X$. We have:\n\\begin{align}\n    F(x) = \\text{Pr}\\qty(X\\leq x) = \\text{Pr}\\qty(T(U)\\leq x) = \\text{Pr}\\qty(U \\leq T^{-1}(x)) = T^{-1}(x),\n\\end{align}\nbecause $T^{-1}(x)\\in[0, 1]$ by definition, and assuming that $T$ is strictly increasing. It then follows that $F^{-1}(U)\\disteq X$, i.e. if the \\gls{cdf} is strictly monotone (as \\glspl{cdf} should be) then we can get realizations of $X$ by applying $F^{-1}$ to realizations of $U$.\n\nThe algorithm to sample from any \\gls{pdf} with a tractable inverse \\gls{cdf} is\nthen extremely simple, and for completeness it is listed in\n\\cref{alg:inverse-transform-sampling}.\n\n\\begin{algorithm}[h]\n    \\caption{Inverse transform sampling}\n    \\label{alg:inverse-transform-sampling}\n    \\begin{algorithmic}[1]\n        \\Require Cumulative distribution function $F$\n        \\Ensure Random $x$ with \\gls{cdf} equal to $F$\n        \\Repeat\n          \\State $u\\gets\\text{unif}(0,1)$\n          \\State $x\\gets F^{-1}(u)$\n          \\State \\Yield $x$\n        \\Until{done}\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{The Metropolis-Hastings Algorithm}\n\\label{sec:metro-hastings-alg}\n\n\\Cref{alg:inverse-transform-sampling} works great for a number of standard\n\\glspl{pdf}. For many cases, however, we do not have a tractable form for the inverse\n\\gls{cdf}, which renders the algorithm useless. So what about the wave functions we\nare interested in? The \\gls{pdf} in question is, as defined in \\cref{eq:wavefunc-probability-density-def},\n\n\\begin{align}\n    P_{\\psi}(\\vX) \\defeq \\frac{\\abs{\\psi}^2}{\\int\\dd{\\vX}\\abs{\\psi}^2}.\n\\end{align}\nSadly, computing $F^{-1}$ for this \\gls{pdf}, if at all possible, would be a very costly\noperation. In fact, the normalization integral in the denominator is enough to\nmake practical sampling from $P_\\psi$ impossible. Even worse, considering that\n$\\psi$ is updated continuously throughout a \\gls{vmc} calculation, we could not even\ncache the result of the integral after computing it once.
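\n\nFor a standard distribution the situation is very different. Take the exponential\ndistribution with rate $\\lambda$, where $F(x) = 1 - e^{-\\lambda x}$ and hence\n$F^{-1}(u) = -\\ln(1-u)/\\lambda$; inverse transform sampling is then a one-liner\n(a hypothetical sketch assuming \\texttt{numpy}):\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(7)\nlam = 2.0\nu = rng.uniform(0.0, 1.0, 100_000)\nx = -np.log(1.0 - u) / lam     # F^{-1}(u) for Exponential(lam)\nprint(x.mean())                # close to 1/lam = 0.5\n\\end{verbatim}\nNo such closed form exists for $P_\\psi$.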
\n\nLuckily, we have a solution: the Metropolis-Hastings algorithm. This algorithm\nis the workhorse behind thousands of applications, including \\gls{vmc}, and is\nconsidered to be among the most influential algorithms of the 20th century.\nWhile the algorithm itself is quite simple, the argument for \\emph{why} it\nworks is a little more involved. The following sections will be dedicated to the\nmathematical underpinnings. In the meantime, the full algorithm is shown\nin~\\cref{alg:metropolis-hastings-general}.\n\n\\begin{algorithm}[h]\n    \\caption{Metropolis-Hastings sampling}\n    \\label{alg:metropolis-hastings-general}\n    \\begin{algorithmic}[1]\n        \\Require Probability density $P(\\vx)$, proposal distribution $q\\qty(\\vx'\\;\\middle|\\;\\vx)$\n        \\Ensure Random $\\vx$ drawn from $P$\n        \\State Initialize $\\vx$, randomly or otherwise\n        \\Repeat\n          \\State $\\vx'\\gets q\\qty(\\vx'\\;\\middle|\\;\\vx)$\n          \\State $u\\gets\\text{unif}(0, 1)$\n          \\State $A\\gets \\flatfrac{P(\\vx')q\\qty(\\vx\\;\\middle|\\;\\vx')}{P(\\vx)q\\qty(\\vx'\\;\\middle|\\;\\vx)}$\n          \\If{$u \\leq A$}\n            \\State $\\vx \\gets \\vx'$\n          \\EndIf\n          \\State \\Yield $\\vx$\n        \\Until{done}\n    \\end{algorithmic}\n\\end{algorithm}\n\nThe most important thing about~\\cref{alg:metropolis-hastings-general} is the\nfact that we only need to know the ratios of probabilities,\n$\\flatfrac{P(\\vx')}{P(\\vx)}$. This means that we don't need the probabilities to\nbe normalized to unity\\footnote{We do need the probabilities to be\n\\emph{normalizable} though. In our case, where $P\\propto \\abs{\\Psi}^2$, we have\nalready assumed this requirement in~\\cref{sec:requirements-of-wave-functions}.},\nso we don't have to compute the costly integral\nin~\\cref{eq:wavefunc-probability-density-def}. For our purposes this is\nessential, and~\\cref{alg:metropolis-hastings-general} is the reason why \\gls{vmc} is\npossible.\n\n\\subsubsection{Formal Derivation}\n\nThe Metropolis-Hastings algorithm builds on what is called a \\emph{Markov chain}. A\nMarkov chain is a type of stochastic model that describes a sequence of possible\nstates. We imagine some space of possible states and a set of transition\nprobabilities, $T\\qty(\\vx'\\;\\middle|\\;\\vx)$, describing the likelihood of moving between them.\nThe defining property of a Markov chain, as opposed to other stochastic\nprocesses, is that the transition probabilities $T\\qty(\\vx'\\;\\middle|\\;\\vx)$ from a state $\\vx$\nto a state $\\vx'$ depend only on the current state and not on any previous\nstates. In other words, it matters only where you are, not where you have been.\n\nFor well-behaved Markov chains, as the number of steps in the chain increases,\nthe distribution of the states will asymptotically approach a stationary\ndistribution, $\\pi(\\vx)$. The Metropolis-Hastings algorithm works by carefully\nconstructing $T\\qty(\\vx'\\;\\middle|\\;\\vx)$ such that $\\pi(\\vx)=P(\\vx)$, where $P(\\vx)$ is the\ndesired \\gls{pdf}.\n\nIn order to find the correct $T\\qty(\\vx'\\;\\middle|\\;\\vx)$, we require that the stationary\ndistribution $\\pi(\\vx)$ exists and that it is unique:\n\n\\begin{description}\n\\item[Existence:] A sufficient condition for the existence of $\\pi(\\vx)$ is that\n  of \\emph{detailed balance}. This says that the probability of being in state\n  $\\vx$ and transitioning to $\\vx'$ is equal to the probability of being in state\n  $\\vx'$ and transitioning to $\\vx$:\n  \\begin{align}\n    \\label{eq:detailed-balance}\n    \\pi\\qty(\\vx)T\\qty(\\vx'\\;\\middle|\\;\\vx) = \\pi\\qty(\\vx')T\\qty(\\vx\\;\\middle|\\;\\vx').\n  \\end{align}\n\\item[Uniqueness:] The distribution $\\pi(\\vx)$ is unique if the Markov chain is\n  \\emph{ergodic}. This is the case when the chain does not return to the same state at\n  fixed intervals (aperiodicity), and when all states can be reached in a finite\n  number of transitions (irreducibility).\n\\end{description}\nDeriving the correct $T\\qty(\\vx'\\;\\middle|\\;\\vx)$ starts by factoring the transition\nprobabilities into a proposal distribution, $q\\qty(\\vx'\\;\\middle|\\;\\vx)$, and an\nacceptance ratio $A\\qty(\\vx',\\vx)$. The idea is to use the former to propose the\nnext state, and use the latter to accept or reject the proposal. Inserting this\ninto \\cref{eq:detailed-balance} and rearranging, we get:\n\n\\begin{align}\n  \\frac{A\\qty(\\vx',\\vx)}{A\\qty(\\vx,\\vx')} = \\frac{P\\qty(\\vx')}{P\\qty(\\vx)}\\frac{q\\qty(\\vx\\;\\middle|\\;\\vx')}{q\\qty(\\vx'\\;\\middle|\\;\\vx)}.\n\\end{align}\nWe can now freely choose an acceptance ratio that satisfies the above. The\ncommon choice is:\n\n\\begin{align}\n  A\\qty(\\vx',\\vx) = \\min\\qty(1, \\frac{P\\qty(\\vx')}{P\\qty(\\vx)}\\frac{q\\qty(\\vx\\;\\middle|\\;\\vx')}{q\\qty(\\vx'\\;\\middle|\\;\\vx)}).\n\\end{align}\n\nAt this point, all that remains is to specify a proposal distribution\n$q\\qty(\\vx'\\;\\middle|\\;\\vx)$. Depending on the choice of $q$ we get a sampler with\ndifferent attributes. We have implemented two versions, presented in the following\nsections.\n\n\\subsubsection{Metropolis Algorithm}\n\nThe simplest choice, referred to as simply the Metropolis algorithm, is to\nchoose a distribution such that\n$q\\qty(\\vx'\\;\\middle|\\;\\vx)=q\\qty(\\vx\\;\\middle|\\;\\vx')$. That way the acceptance\nratio simplifies significantly, fully canceling out $q$. In our implementation\nwe have used a uniform proposal distribution:\n\n\\begin{align}\n  q\\qty(\\vx' \\;\\middle|\\; \\vx) &= p_U\\qty(\\vx' \\;\\middle|\\; \\vx -\\frac{\\Delta x}{2}, \\vx + \\frac{\\Delta x}{2}),\n\\end{align}\nwhere $\\Delta x$ is a \\emph{step size} that regulates how large each\nperturbation should be. This leads to the simplest form of\n\\cref{alg:metropolis-hastings-general}, given for completeness in\n\\cref{alg:metropolis-simple}. The algorithm given here is adapted particularly\nfor our purposes, where we want to sample a matrix, $\\mat\nX\\in\\mathbb{R}^{N\\times D}$, for $N$ particles in $D$ dimensions.
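\n\nFor intuition, a stripped-down, one-dimensional version of this sampler (a hypothetical\nsketch, not the implementation used in this thesis) can be written as:\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef metropolis(p, x0, step, n):\n    # Random-walk Metropolis for an unnormalised density p.\n    x, chain = x0, []\n    for _ in range(n):\n        x_new = x + step * rng.uniform(-0.5, 0.5)\n        if rng.uniform() <= p(x_new) / p(x):   # symmetric proposal\n            x = x_new\n        chain.append(x)\n    return np.array(chain)\n\n# Sample from an unnormalised standard normal density.\nchain = metropolis(lambda x: np.exp(-x**2 / 2), 0.0, 2.0, 50_000)\nprint(chain.mean(), chain.var())   # roughly 0 and 1\n\\end{verbatim}\nThe algorithm below follows the same pattern, only applied particle by particle.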
\nIn particular,\nour implementation yields a new sample after moving only one of the particles.\nThis is a common case for \\gls{vmc} applications, because it allows for a code\noptimization that is crucial when using Slater determinants in wave functions.\nWe don't use such wave functions in this thesis, but future work might benefit\nfrom this choice.\n\n\\begin{algorithm}[h]\n    \\caption{Metropolis sampling}\n    \\label{alg:metropolis-simple}\n    \\begin{algorithmic}[1]\n        \\Require Probability density $P(\\vX)=\\abs{\\psi(\\vX)}^2$, step size $\\Delta x$\n        \\Ensure Random $\\vX$ drawn from $P$\n        \\State Initialize $\\vX$, randomly or otherwise\n        \\Repeat\n          \\For{$i \\gets 1:N$}\n            \\State $\\vX'\\gets \\vX$\n            \\For{$d \\gets 1:D$}\n              \\State $\\delta\\gets \\text{unif}(-0.5, 0.5)$\n              \\State $X_{i,d}'\\gets X_{i,d} + \\Delta x \\cdot \\delta$\n            \\EndFor\n            \\State $u\\gets\\text{unif}(0, 1)$\n            \\State $A\\gets \\flatfrac{P\\qty(\\vX')}{P\\qty(\\vX)}$\n            \\If{$u \\leq A$}\n              \\State $\\vX \\gets \\vX'$\n            \\EndIf\n            \\State \\Yield $\\vX$\n          \\EndFor\n        \\Until{done}\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Importance Sampling}\n\n\\Cref{alg:metropolis-simple} suffers from the same issues as ordinary \\gls{mci}\ndoes without the enhancement of \\gls{is}. The proposal\ndistribution used in the Metropolis algorithm, while simple to implement, can\ncertainly be improved upon. The idea is to guide the random walk in space\ntowards areas of greater probability. We will now give a brief physical\nmotivation for the strategy, glossing over most technical details.\n\nParticles will tend towards regions of space where $P(\\vX)$ is large. We may say\nthat this is the result of a \\emph{quantum drift force}. Without further\nmotivation, we define the drift force acting on particle $k$ as follows:\n\n\\begin{align}\n  \\vb{F}_k(\\vX) = \\frac{2\\grad_k{\\Psi(\\vX)}}{\\Psi(\\vX)},\n\\end{align}\nwhich points in the direction of greatest increase in $\\Psi$ with respect to\nparticle $k$'s position. We also expect the\nsystem to exhibit some degree of random motion, as it is a quantum system. This\ncombination of drift and random motion is described by the Langevin equation~\\cite{Langevin-1908},\n\n\\begin{align}\n \\pdv{\\vx_k}{t} &= D\\vb{F}_k + \\vb{\\eta},\n\\end{align}\nwhere $D$ is called the drift coefficient, and $\\vb{\\eta}$ is a vector of\nzero-mean random noise values which accounts for the random\nmotion. We fix $D=\\flatfrac{1}{2}$. Applying Euler's method to the Langevin\nequation we get:\n\n\\begin{align}\n  \\vx_k' = \\vx_k + \\frac{1}{2}\\vb{F}_k\\,\\Delta t + \\vb{\\xi}\\sqrt{\\Delta t},\n\\end{align}\nwhere $\\Delta t$ is a time step similar to $\\Delta x$ for plain Metropolis, and\n$\\vb{\\xi}$ is a vector of values drawn from the standard normal distribution.\nThis is our new recipe for generating proposals $\\vx'$. The final piece is\nto find the corresponding \\gls{pdf},\n$q\\qty(\\vx'\\;\\middle|\\;\\vx)$.\n\nTo this end, we consider the Fokker-Planck equation, which for one particle in\none dimension is written as\n\n\\begin{align}\n  \\label{eq:mc-annon-4}\n  \\pdv{\\Psi}{t} &= D\\pdv{}{x}\\qty(\\pdv{}{x} - F)\\Psi,\n\\end{align}\nwhere $D$ is as before and $F$ is the one-dimensional analog of $\\vb{F}_k$. This\ndescribes the time-evolution of a probability distribution under the influence\nof a drift force and random impulses. We've let $\\Psi$ play the role of the\nprobability distribution. \\Cref{eq:mc-annon-4} yields a solution given by the\nfollowing Green's function (for one particle):\n\n\\begin{align}\n  \\label{eq:mc-annon-5}\n  G\\qty(\\vx_k', \\,\\vx_k,\\,\\Delta t) &\\propto \\exp[-\\frac{\\norm{\\vx_k' - \\vx_k - D\\Delta t\\,\\vb{F}_k}^2}{4D\\Delta t}].\n\\end{align}\nThis is interpreted as the probability of transitioning to position $\\vx_k'$\nfrom $\\vx_k$ within a time interval $\\Delta t$, so that finally we have the\nproposal distribution:\n\n\\begin{align}\n  q\\qty(\\vx'\\;\\middle|\\;\\vx) &=G\\qty(\\vx', \\,\\vx,\\,\\Delta t).\n\\end{align}\nNote that because we only need the ratio of $q$ values, it is not a problem\nthat \\cref{eq:mc-annon-5} gives only the proportionality. The final algorithm is\nshown in \\cref{alg:metropolis-importance}.\nSee \\cref{sec:verify-sampling} for a demonstration of how\n\\cref{alg:metropolis-importance} compares to \\cref{alg:metropolis-simple}.\n\n\\begin{algorithm}[h]\n    \\caption{Metropolis-Hastings importance sampling}\n    \\label{alg:metropolis-importance}\n    \\begin{algorithmic}[1]\n        \\Require Probability density $P(\\vX) = \\abs{\\psi(\\vX)}^2$, time step $\\Delta t$\n        \\Ensure Random $\\vX$ drawn from $P$\n        \\State Initialize $\\vX$, randomly or otherwise\n        \\Repeat\n          \\For{$i \\gets 1:N$}\n            \\State $\\vX'\\gets \\vX$\n            \\For{$d \\gets 1:D$}\n              \\State $\\xi\\gets \\mathcal{N}(0, 1)$\n              \\State $X_{i,d}'\\gets X_{i,d} + \\sqrt{\\Delta t} \\cdot \\xi +\n              \\frac{1}{2}\\Delta t\\,(\\vb{F}_i(\\psi))_d$\n            \\EndFor\n            \\State $u\\gets\\text{unif}(0, 1)$\n            \\State $A\\gets \\flatfrac{P\\qty(\\vX')}{P\\qty(\\vX)}$\n            \\State $A\\gets A\\cdot \\flatfrac{q\\qty(\\vx_i\\;\\middle|\\;\\vx_i')}{q\\qty(\\vx_i'\\;\\middle|\\;\\vx_i)}$\n            \\If{$u \\leq A$}\n              \\State $\\vX \\gets \\vX'$\n            \\EndIf\n            \\State \\Yield $\\vX$\n          \\EndFor\n        \\Until{done}\n    \\end{algorithmic}\n\\end{algorithm}\n\\end{document}\n", "meta": {"hexsha": "2cebeae8348289083bb59ace45b29d7ce60a92fe", "size": 32809, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writing/MonteCarlo.tex", "max_stars_repo_name": "johanere/qflow", "max_stars_repo_head_hexsha": "5453cd5c3230ad7f082adf9ec1aea63ab0a4312a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-07-24T21:46:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-11T18:18:24.000Z", "max_issues_repo_path": "writing/MonteCarlo.tex", "max_issues_repo_name": "johanere/qflow", "max_issues_repo_head_hexsha": "5453cd5c3230ad7f082adf9ec1aea63ab0a4312a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2019-02-19T10:49:26.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-18T09:42:13.000Z", "max_forks_repo_path": "writing/MonteCarlo.tex", "max_forks_repo_name": "bsamseth/FYS4411", "max_forks_repo_head_hexsha": "72b879e7978364498c48fc855b5df676c205f211", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-11-04T15:17:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-03T16:37:38.000Z", "avg_line_length": 49.9375951294, 
"max_line_length": 289, "alphanum_fraction": 0.7183090006, "num_tokens": 9446, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7879312006227324, "lm_q1q2_score": 0.5993403820184808}}
{"text": "\\section{The PID actions}\n\\subsection{}\n\n\\begin{frame}\n\\frametitleTC{Foreword}\n\\framesubtitleTC{}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item We carry out this treatise in a DT LTI (comments on saturations later on) with sampling time $T_s$.\n \\item This said, from the PID quasi-algorithm on slide~\\ref{pag:PID-quasi-alg} we get the error-to-action\n       transfer functions\n       \\begin{displaymath}\n        \\begin{array}{lclcl}\n         C_P(z) &:=& \\cfrac{u_P(k)}{e(k)} &=& K, \\\\ \\\\\n         C_I(z) &:=& \\cfrac{u_I(k)}{e(k)} &=& \\cfrac{KT_s}{T_i} \\, \\cfrac{z}{z-1}, \\\\ \\\\\n         C_D(z) &:=& \\cfrac{u_D(k)}{e(k)} &=& \\cfrac{KT_d(1-\\beta)}{T_s} \\, \\cfrac{z-1}{z-\\beta}.\n        \\end{array}\n       \\end{displaymath}\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Foreword}\n\\framesubtitleTC{}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Block diagram of a PID control loop evidencing the three actions\n       \\begin{center}\n        \\includegraphics[width=0.85\\columnwidth]{./Unit-07/img/PIDloop-actions.pdf}\n       \\end{center}\n \\item \\vspace{2mm}NOTE: the disturbance is added to the control signal, i.e., $H(z)=P(z)$;\\\\\n       such a disturbance is called a \\TC{load} or a \\TC{matched} disturbance.\n \\item Load disturbances can arise every time the disturbing and the\\\\\n       control actions are physically homogeneous, which is not infrequent\\\\\n       ---e.g., the control is allocating a resource, and the disturbing entity\\\\\n       is somebody subtracting some amount of the same resource.\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Computing the three actions}\n\\framesubtitleTC{}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item From\n       \\begin{center}\n        \\includegraphics[width=0.70\\columnwidth]{./Unit-07/img/PIDloop-actions.pdf}\n       \\end{center}\n       and setting $C(z) = C_P(z)+C_I(z)+C_D(z)$, we readily get\n       \\begin{displaymath}\n        \\begin{array}{llcll}\n          \\cfrac{u_P(k)}{w(k)} &\\mkern-12mu =  \\cfrac{C_P(z)}{1+P(z)C(z)},       &\\quad&\n          \\cfrac{u_P(k)}{d(k)} &\\mkern-12mu = -\\cfrac{P(z)C_P(z)}{1+P(z)C(z)}, \\\\\n          \\cfrac{u_I(k)}{w(k)} &\\mkern-12mu =  \\cfrac{C_I(z)}{1+P(z)C(z)},       &\\quad&\n          \\cfrac{u_I(k)}{d(k)} &\\mkern-12mu = -\\cfrac{P(z)C_I(z)}{1+P(z)C(z)}, \\\\\n          \\cfrac{u_D(k)}{w(k)} &\\mkern-12mu =  \\cfrac{C_D(z)}{1+P(z)C(z)},       &\\quad&\n          \\cfrac{u_D(k)}{d(k)} &\\mkern-12mu = -\\cfrac{P(z)C_D(z)}{1+P(z)C(z)}.\n        \\end{array}\n       \\end{displaymath}\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Computing the three actions}\n\\framesubtitleTC{}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item We now examine the behaviour of the scheme above with a first-order process\\\\\n       with gain $\\mu$ and time constant $T$, that recalling slide~\\ref{pag:FOsystem-DT}, in transfer function\\\\\n       form reads\n       \\begin{displaymath}\n        P(z) = \\frac{\\mu\\frac{T_s}{T+T_s}z}{z-\\frac{T}{T+T_s}}.\n       \\end{displaymath}\n \\item We start out with a PI tuned by cancellation as per the formul{\\ae} of slide~\\ref{pag:PItuning-lambda},\\\\\n       that is,\n       \\begin{displaymath}\n        K   = \\frac{T}{\\lambda\\mu}, \\quad\n        T_i = T.\n       \\end{displaymath}\n       where $\\lambda$ is the desired closed-loop time constant (1/5 of the desired\\\\\n       settling time).\n \\item Then we add a bit of D 
action, and comment on all of the results.\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Computing the three actions}\n\\framesubtitleTC{Scilab script (1/3)}\n\\myPause\n {\\scriptsize\n \\begin{verbatim}\n clear; clc;                                          // clear workspace & console\n z      = %z;                                         // allow to omit lots of %'s below\n\n mu     = 1;                                          // process gain  \n T      = 10;                                         // and time constant\n lambda = 4;                                          // desired closed-loop TC\n Beta   = 0.6;                                        // derivative filter\n Ts     = 1;                                          // sampling time\n tfin   = 50;                                         // simulation end time\n\n K      = T/lambda/mu;                                // PID tuning\n Ti     = T;\n Td     = 0;\n\n P      = syslin(Ts,z*Ts/(T+Ts)/(z-T/(T+Ts)));        // process and controller\n Cp     = K;                                          // transfer functions:\n Ci     = syslin(Ts,K*Ts/Ti*z/(z-1));                 // a number instead of\n Cd     = syslin(Ts,K*Td*(1-Beta)/Ts*(z-1)/(z-Beta)); // 'd' sets the sampling\n C      = Cp+Ci+Cd;                                   // time for a DT system\n \\end{verbatim}\n }\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Computing the three actions}\n\\framesubtitleTC{Scilab script (2/3)}\n\\myPause\n {\\scriptsize\n \\begin{verbatim}\n w2y    = tf2ss(C*P/(1+C*P));           // transfer function from w to y\n d2y    = tf2ss(P/(1+C*P));             // transfer function from d to y\n w2u    = tf2ss(C/(1+C*P));             // transfer function from w to u\n d2u    = tf2ss(-C*P/(1+C*P));          // transfer function from d to u\n w2up   = tf2ss(Cp/(1+C*P));            // transfer function from w to up\n w2ui   = tf2ss(Ci/(1+C*P));            // transfer function from w to ui\n w2ud   = tf2ss(Cd/(1+C*P));            // transfer function from w to ud\n d2up   = tf2ss(-P*Cp/(1+C*P));         // transfer function from d to up\n d2ui   = tf2ss(-P*Ci/(1+C*P));         // transfer function from d to ui\n d2ud   = tf2ss(-P*Cd/(1+C*P));         // transfer function from d to ud\n\n t      = 0:Ts:tfin;                    // time vector\n wd     = ones(t);                      // step with one zero sample \n wd(1)  = 0;                            // at the beginning\n yw     = dsimul(w2y,wd)';              // y in response to a w step\n yd     = dsimul(d2y,wd)';              // y in response to a d step\n uw     = dsimul(w2u,wd)';              // u in response to a w step\n ud     = dsimul(d2u,wd)';              // u in response to a d step\n upidw  = dsimul([w2up;w2ui;w2ud],wd)'; // u{p,i,d} in response to a w step\n upidd  = dsimul([d2up;d2ui;d2ud],wd)'; // u{p,i,d} in response to a d step\n \\end{verbatim}\n }\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Computing the three actions}\n\\framesubtitleTC{Scilab script (3/3)}\n\\myPause\n {\\scriptsize\n \\begin{verbatim}\n hf=scf(0); clf; hf.figure_size = [1200,700];                    // plot stuff: see the Scilab\n subplot(321); title('Responses to a w unit step'); ylabel('y'); // documentation for technical\n   plot(0,0,'k');                                                // details here inessential\n   plot(t',ones(t'),'k:');\n   plot(t',yw,'b','linewidth',4);\n subplot(322); title('Responses to a d unit step'); \n   
plot(t',zeros(t'),'k:');\n   plot(t',yd,'b','linewidth',4);\n subplot(323); xlabel('time'); ylabel('u,up,ui,ud (g,b,r,m)');\n   plot(0,0,'k');\n   plot(t',uw,'g','linewidth',4);\n   plot(t',upidw(:,1),'b','linewidth',2);\n   plot(t',upidw(:,2),'r','linewidth',2);\n   plot(t',upidw(:,3),'m','linewidth',2);\n subplot(324); xlabel('time');\n   plot(0,0,'k');\n   plot(t',ud,'g','linewidth',4);\n   plot(t',upidd(:,1),'b','linewidth',2);\n   plot(t',upidd(:,2),'r','linewidth',2);\n   plot(t',upidd(:,3),'m','linewidth',2);\n \\end{verbatim}\n }\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Observing the three actions}\n\\framesubtitleTC{Scilab script -- example of the output}\n\\myPause\n \\begin{center}\n  \\includegraphics[width=0.75\\columnwidth]{./Unit-07/img/PIDactions-scilab-01.pdf}\n \\end{center}\n \\begin{itemize}[<+-| alert@+>]\n \\item Top row: responses of $y$ to a unit step on $w$ (left) and on $d$ (right).\n \\item Bottom row: breakdown of $u$ (green) into $u_P$ (blue), $u_I$ (red),\\\\\n       and $u_D$ (magenta).\n \\item Note that at steady state $u$ is made only of integral action.\n \\item Let us now play around with PID parameters, also not following the\\\\\n       tuning rule exactly, and see what happens.\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Observing the three actions}\n\\framesubtitleTC{Playing around with PID parameters -- a few suggestions (1/3)}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Add some D action:\n       {\\scriptsize\n       \\begin{verbatim}\n       K   = T/lambda/mu;\n       Ti  = T;\n       Td  = 0.2*Ti;\n       \\end{verbatim}\n       }\n \\item \\vspace{-3mm}Speeds up initial transient, hardly any effect on settling time, just a \\emph{very} small\\\\\n       overshoot:\n       \\begin{center}\n        \\includegraphics[width=0.60\\columnwidth]{./Unit-07/img/PIDactions-scilab-02.pdf}\n       \\end{center}\n \\item Note that clearly the D action vanishes as well toward steady state \\\\\n       (and faster than P). \n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Observing the three actions}\n\\framesubtitleTC{Playing around with PID parameters -- a few suggestions (2/3)}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Back to PI, but halve the integral time:\n       {\\scriptsize\n       \\begin{verbatim}\n       K   = T/lambda/mu;\n       Ti  = 0.5*T;\n       Td  = 0;\n       \\end{verbatim}\n       }\n \\item \\vspace{-3mm}Disturbance response improved at the cost of a slight oscillation\\\\\n       ---i.e., some stability reduction:\n       \\begin{center}\n        \\includegraphics[width=0.60\\columnwidth]{./Unit-07/img/PIDactions-scilab-03.pdf}\n       \\end{center}\n \\item Try adjusting $K$ to eliminate the oscillation in the set point response.\\\\\n       Do you succeed? 
Or just alter the oscillation period?\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{Observing the three actions}\n\\framesubtitleTC{Playing around with PID parameters -- a few suggestions (3/3)}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item PID again, but with a definitely excessive integral time:\n       {\\scriptsize\n       \\begin{verbatim}\n       K   = T/lambda/mu;\n       Ti  = 4*T;\n       Td  = 0.2*Ti;\n       \\end{verbatim}\n       }\n \\item \\vspace{-3mm}When P and D vanish, I is still far from playing its full role:\n       \\begin{center}\n        \\includegraphics[width=0.60\\columnwidth]{./Unit-07/img/PIDactions-scilab-04.pdf}\n       \\end{center}\n \\item Try removing the D action. What effects do you observe on the set\\\\ \n       point and the disturbance response? Do these correspond to your\\\\\n       expectations?\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{The P, I, and D actions}\n\\framesubtitleTC{Suggestions for further activities}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Take the Scilab script above and add some noise on the feedback path---i.e., the \\TC{measurement}\n       of $y$ fed back to $C$ is corrupted by noise. Hint: Scilab has a \\texttt{rand()} and a \\texttt{randn()}\n       function for uniformly and normally distributed random numbers.\n \\item Always remember:\n       \\begin{itemize}[<+-| alert@+>]\n       \\item we would like to control physical quantities,\n       \\item but we can only control measurements.\n       \\end{itemize}\n \\item \\vfill Experiment with different numbers, observe, and comment.\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitleTC{The P, I, and D actions}\n\\framesubtitleTC{Takeaways}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Looking at a response and diagnosing: too much P action, too little I,...\n \\item A keyword for the curious here is ``tuning maps''.\n \\item \\vspace{2mm} Use your \\TC{systems theory} knowledge to confirm or debunk some common beliefs:\n       \\begin{itemize}[<+-| alert@+>]\n       \\item adding D always makes a system faster --- sure, given one of the tests above?\n       \\item too much D destabilises --- hmmm, or maybe just amplifies noise?\n       \\item increasing $K$ does not reject disturbances faster, for that you need\\\\\n             to reduce $T_i$ --- i.e., not just tune by cancellation,\n       \\item ...and so forth.\n       \\end{itemize}\n \\item If interested, go through the many PID-related web sites, read, test\\\\\n       (you have \\TC{methods} \\& tools) and subject the material you find\\\\\n       to a system-theoretically educated analysis.\n \\item This is what we meant right from the outset by ``using (PID)\\\\\n       control consciously''.\n \\end{itemize}\n\\end{frame}\n", "meta": {"hexsha": "c8efe2a0abe5f93af6bbd1ed4a7ca99b951f0723", "size": 12269, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/Unit-07/sections/01-ApplyingPID.tex", "max_stars_repo_name": "albertoleva/PID4CSE", "max_stars_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-19T16:38:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-19T16:38:10.000Z", "max_issues_repo_path": "slides/Unit-07/sections/01-ApplyingPID.tex", "max_issues_repo_name": "albertoleva/PID4CSE", "max_issues_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/Unit-07/sections/01-ApplyingPID.tex", "max_forks_repo_name": "albertoleva/PID4CSE", "max_forks_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.9641693811, "max_line_length": 112, "alphanum_fraction": 0.5962181107, "num_tokens": 3769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428946, "lm_q2_score": 0.7606506472514405, "lm_q1q2_score": 0.5993403739553692}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\documentclass[a4paper]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{Sweave}\n\n\\title{G4.P-1}\n\\author{David Emanuel Craciunescu \\and Laura P<e9>rez Medeiro}\n\n\\begin{document}\n\\input{PL3-G4-concordance}\n\n\\maketitle\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Data analysis using Hunt decision algorithm and linear regression for qualifications and planets data}\n\n  \\subsection{}The data use in this exercise will be nine students marks and califications, as it has been done in the teorical classes. For this analysis, it wil be used the gain information meause, using Gini as the impurity measure.\n \n  First step in the analysis is read the data, which is contains in a txt file called qualifications.txt:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> library(utils)\n> qualifications <- read.table(\"qualification.txt\")\n> sample = data.frame(qualifications)\n\\end{Sinput}\n\\end{Schunk}\n\nIn order to make the analysis, the package rpart will be used. this means, that it should be install before working with the dataset. In order to manage R packages, it will be used Packrat\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(rpart)\n> clasification = rpart(C.G ~ .,\n+                       data = sample,\n+                       method = \"class\",\n+                       minsplit = 1)\n> clasification\n\\end{Sinput}\n\\begin{Soutput}\nn= 9 \n\nnode), split, n, loss, yval, (yprob)\n      * denotes terminal node\n\n1) root 9 3 Ss (0.3333333 0.6666667)  \n  2) Labo=A,B 5 2 Ap (0.6000000 0.4000000)  \n    4) Pract=A,B 3 0 Ap (1.0000000 0.0000000) *\n    5) Pract=C,D 2 0 Ss (0.0000000 1.0000000) *\n  3) Labo=C,D 4 0 Ss (0.0000000 1.0000000) *\n\\end{Soutput}\n\\end{Schunk}\n\nAnother package that can be used to do this analysis is tree:\n\\begin{Schunk}\n\\begin{Sinput}\n> library(tree)\n> (clasificationTree = tree(C.G ~ .,\n+                           data = sample,\n+                           mincut = 1,\n+                           minsize = 2)\n+ )\n\\end{Sinput}\n\\begin{Soutput}\nnode), split, n, deviance, yval, (yprob)\n      * denotes terminal node\n\n1) root 9 11.46 Ss ( 0.3333 0.6667 )  \n  2) Labo: A,B 5  6.73 Ap ( 0.6000 0.4000 )  \n    4) Pract: A,B 3  0.00 Ap ( 1.0000 0.0000 ) *\n    5) Pract: C,D 2  0.00 Ss ( 0.0000 1.0000 ) *\n  3) Labo: C,D 4  0.00 Ss ( 0.0000 1.0000 ) *\n\\end{Soutput}\n\\begin{Sinput}\n> clasificationTree\n\\end{Sinput}\n\\begin{Soutput}\nnode), split, n, deviance, yval, (yprob)\n      * denotes terminal node\n\n1) root 9 11.46 Ss ( 0.3333 0.6667 )  \n  2) Labo: A,B 5  6.73 Ap ( 0.6000 0.4000 )  \n    4) Pract: A,B 3  0.00 Ap ( 1.0000 0.0000 ) *\n    5) Pract: C,D 2  0.00 Ss ( 0.0000 1.0000 ) *\n  3) Labo: C,D 4  0.00 Ss ( 0.0000 1.0000 ) *\n\\end{Soutput}\n\\end{Schunk}\n  \\subsection{} In this second part, tha dataset use is planets.txt. 
To this dataset, linear regression will be applied.\n  \nAs it has been done before, the first step consist on reading data from a \\textit{txt} file:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> library(utils)\n> data <- read.table(\"planets.txt\")\n> data = data.frame(data)\n> names(data)\n\\end{Sinput}\n\\begin{Soutput}\n[1] \"Radius\"  \"Density\"\n\\end{Soutput}\n\\end{Schunk}\n\nIn order to quantify the correlation between the variables, it will be calculated the coeficient's matrix correlation:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> cor(data)\n\\end{Sinput}\n\\begin{Soutput}\n          Radius  Density\nRadius  1.000000 0.371063\nDensity 0.371063 1.000000\n\\end{Soutput}\n\\end{Schunk}\n\nThen, it will be calculated and representated the minimun square error line:\n\\begin{Schunk}\n\\begin{Sinput}\n> regression <- lm( Density~Radius, data)\n> summary(regression)\n\\end{Sinput}\n\\begin{Soutput}\nCall:\nlm(formula = Density ~ Radius, data = data)\n\nResiduals:\nMercurio    Venus   Tierra    Marte \n 0.70312 -0.01253  0.24566 -0.93624 \n\nCoefficients:\n            Estimate Std. Error t value Pr(>|t|)  \n(Intercept)   4.3624     1.2050   3.620   0.0685 .\nRadius        0.1394     0.2466   0.565   0.6289  \n---\nSignif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n\nResidual standard error: 0.846 on 2 degrees of freedom\nMultiple R-squared:  0.1377,\tAdjusted R-squared:  -0.2935 \nF-statistic: 0.3193 on 1 and 2 DF,  p-value: 0.6289\n\\end{Soutput}\n\\end{Schunk}\n\nThe equation's line is y = 4.3624 + 0.1394x\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(gplots)\n> par(mar = rep(2,4))\n> plot(data$Density, data$Radius)\n> abline(regression)\n\\end{Sinput}\n\\end{Schunk}\n\nFinally, it is necessary to calculate ANOVA in order to analysize correctly the relation between variables.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> anova <- aov(Density~Radius, data) \n> summary(anova)\n\\end{Sinput}\n\\begin{Soutput}\n            Df Sum Sq Mean Sq F value Pr(>F)\nRadius       1 0.2286  0.2286   0.319  0.629\nResiduals    2 1.4314  0.7157               \n\\end{Soutput}\n\\end{Schunk}\n\n\\section{Data analysis using Hunt decision algorithm and linear regression for vehicules and pairs of data}\n\nThe following part consist on doing the same analysis as it has been done before, but now with new datasets.\n\\begin{Schunk}\n\\begin{Sinput}\n> library(utils)\n> vehicules <- read.table(\"vehiculos.txt\")\n> sampleV = data.frame(vehicules)\n\\end{Sinput}\n\\end{Schunk}\nIn this file, TC = license type, NR = number of roads that  has the vehicule, NP = number of people that can be in the vehicule and TV = vehicule's type.\n\\begin{Schunk}\n\\begin{Sinput}\n> library(rpart)\n> clasificationV = rpart(TV ~.,\n+                        data = sampleV,\n+                        method=\"class\",\n+                        minsplit =1)\n> clasificationV\n\\end{Sinput}\n\\begin{Soutput}\nn= 10 \n\nnode), split, n, loss, yval, (yprob)\n      * denotes terminal node\n\n 1) root 10 7 Bicicleta (0.3000000 0.2000000 0.3000000 0.2000000)  \n   2) TC=N 3 0 Bicicleta (1.0000000 0.0000000 0.0000000 0.0000000) *\n   3) TC=A,B 7 4 Coche (0.0000000 0.2857143 0.4285714 0.2857143)  \n     6) NR>=3 5 2 Coche (0.0000000 0.4000000 0.6000000 0.0000000)  \n      12) NR>=5 2 0 Cami\u00c3\u00b3n (0.0000000 1.0000000 0.0000000 0.0000000) *\n      13) NR< 5 3 0 Coche (0.0000000 0.0000000 1.0000000 0.0000000) *\n     7) NR< 3 2 0 Moto (0.0000000 0.0000000 0.0000000 1.0000000) 
*\n\\end{Soutput}\n\\end{Schunk}\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(tree)\n> (clasificationTreeV = tree(TV ~.,\n+                           data = sampleV,\n+                           mincut = 1,\n+                           minsize = 2)\n+ )\n\\end{Sinput}\n\\begin{Soutput}\nnode), split, n, deviance, yval, (yprob)\n      * denotes terminal node\n\n1) root 10 27.32 Bicicleta ( 0.3 0.2 0.3 0.2 )  \n  2) NR < 3 5  6.73 Bicicleta ( 0.6 0.0 0.0 0.4 )  \n    4) TC: A,B 2  0.00 Moto ( 0.0 0.0 0.0 1.0 ) *\n    5) TC: N 3  0.00 Bicicleta ( 1.0 0.0 0.0 0.0 ) *\n  3) NR > 3 5  6.73 Coche ( 0.0 0.4 0.6 0.0 )  \n    6) NR < 5 3  0.00 Coche ( 0.0 0.0 1.0 0.0 ) *\n    7) NR > 5 2  0.00 Cami\u00c3\u00b3n ( 0.0 1.0 0.0 0.0 ) *\n\\end{Soutput}\n\\begin{Sinput}\n> clasificationTreeV\n\\end{Sinput}\n\\begin{Soutput}\nnode), split, n, deviance, yval, (yprob)\n      * denotes terminal node\n\n1) root 10 27.32 Bicicleta ( 0.3 0.2 0.3 0.2 )  \n  2) NR < 3 5  6.73 Bicicleta ( 0.6 0.0 0.0 0.4 )  \n    4) TC: A,B 2  0.00 Moto ( 0.0 0.0 0.0 1.0 ) *\n    5) TC: N 3  0.00 Bicicleta ( 1.0 0.0 0.0 0.0 ) *\n  3) NR > 3 5  6.73 Coche ( 0.0 0.4 0.6 0.0 )  \n    6) NR < 5 3  0.00 Coche ( 0.0 0.0 1.0 0.0 ) *\n    7) NR > 5 2  0.00 Cami\u00c3\u00b3n ( 0.0 1.0 0.0 0.0 ) *\n\\end{Soutput}\n\\end{Schunk}\n\n\\subsection{} Then, the lineal regression anaylisis will be done:\n\\begin{Schunk}\n\\begin{Sinput}\n> library(utils)\n> pairs <- read.table(\"pair1.txt\")\n> dataP = data.frame(pairs)\n\\end{Sinput}\n\\end{Schunk}\n\n\n\\section{}\n\\end{document}\n", "meta": {"hexsha": "60d22da1bd6d82763a76e2260cb204b2f199a1fc", "size": 7634, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PL3/docs/PL3-G4.tex", "max_stars_repo_name": "craciunescu/DataScience", "max_stars_repo_head_hexsha": "e246994974d817f48d6861162f2804ed4c9539ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-22T15:58:11.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-22T15:58:11.000Z", "max_issues_repo_path": "PL3/docs/PL3-G4.tex", "max_issues_repo_name": "craciunescu/DataScience", "max_issues_repo_head_hexsha": "e246994974d817f48d6861162f2804ed4c9539ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-06-02T00:46:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-02T00:46:53.000Z", "max_forks_repo_path": "PL3/docs/PL3-G4.tex", "max_forks_repo_name": "craciunescu/datascience", "max_forks_repo_head_hexsha": "e246994974d817f48d6861162f2804ed4c9539ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5891472868, "max_line_length": 235, "alphanum_fraction": 0.6243122871, "num_tokens": 2929, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5993403663795356}}
{"text": "\\section{Discussion} \\label{discussion}\nThe Lasso-based procedure performed on microarray data is enhanced by a permutation approach that consistently improves the stability of the inferred network structure. The purpose of permuting the response variable is to break the link with the other independent variables by optimising an equivalent convex function which selects a number of variables close to those selected for the original (not permuted) response. Since the permutation affects only the response gene, the structure of the permuted data is equivalent to the original one. Moreover, using the same $\\lambda$ increases the speed of the algorithm due to the fact that cross-validation is no longer required. \nResults from simulated genetic data are encouraging and consent to perform our approach to predict genetic interactions from real biological datasets. However, we address some limitations we intend to investigate in the near future. \n\nAs already stated, genetic data are usually affected by measurement noise and high number of variables collected from different datasets such as gene expression profiles, SNPs, methylation and clinical data. \n\nThe curse of dimensionality can set a limit on the number of permutations to perform. Due to the fact that our method relies on permuting each response variable in order to increase the stability of the discovered interactions, the overall performance is directly affected by the total number of genes in the dataset. \n\n%We are investigating possible solutions to mitigate the curse of dimensionality by limiting the discovery of interactions to highly connected genes. This strategy would detect the local structure around genes usually referred to as network hubs. We do not interpret this fact as a limitation since biological networks usually manifest a scale free topology, in which only few nodes are highly connected to the rest of the graph (\\citealp{BAR03a, evidencescalefree}). FUTURE WORK\n\nThe variable selection procedure consistently depends on the value of the shrinkage factor $\\lambda$, estimated on a subset of the covariates. Obviously, it might occur a prior exclusion of significant genes from further analyses in the case of a too restrictive shrinkage factor. An alleviation to this risk (which can  directly determine the false negative rate) consists in replacing the pure Lasso penalty with an elastic net procedure of the type\n\n\\begin{equation}\n\\label{eq:elnet}\n    \\hat{\\Theta}^{a,\\lambda} = \n    \\argmin_{%\n      \\substack{%\n        \\text{s.\\,t.}\\, \\Theta:\\Theta_a = 0 \\\\\n        \\phantom{}\\, \n      }\n    }\n    (\\frac{1}{n} \\| X_i - X\\Theta \\|^{2}_2 + \\alpha \\| \\Theta \\|_{1} + (1-\\alpha) \\| \\Theta \\|^{2})\n  \\end{equation}\n   \n\nIn such a scenario it would be necessary to estimate an additional parameter $\\alpha$. To the other extreme, a pure ridge-regression procedure would not benefit from the permutation-based stability test, due to the fact that ridge-regression procedures tend to include all the covariates in the model. Moreover, our method ignores the value of the regression coefficients and selects a subset of genes with the best permutation score. In a ridge-regression setting all covariates would be selected an equal number of times.  \n\nAnother aspect we intend to probe regards the direction of the interactions. In our analysis we ignore the direction of each edge in the graph. 
A relaxation of the problem of learning the network topology consists in considering the interaction $i \\rightarrow j$ equivalent to the interaction $j \\rightarrow i$. Although this simplification makes the construction of the overall network considerably easier, it might lead to inconsistencies from a biological perspective. As a matter of fact, gene regulations are known to have a direction, usually referred to as activation and inhibition. Activation and inhibition are essential regulatory mechanisms in the transcriptional machinery of the cell and are causes for up- and down-regulation of particular genes (\\citealp{GeistlingerCKMZ11}).\n\nLearning the directionality of network edges represents an additional complexity that can plausibly be dealt with only in the presence of a large number of samples, or by integrating complementary data sources of known interactions. \nTherefore, the need for integrating different data sources is twofold: data integration can increase the stability of all discovered interactions and their direction and, specifically for our method, it can reduce the number of required permutations per gene. We believe that data integration can considerably improve the overall performance of the described approach. \n\nWe envision our approach being deployed in a data analysis pipeline in order to 1) analyse different data sources, 2) build the local network from each dataset, 3) increase the stability of predicted interactions by permutation, and 4) integrate each singular network into a more stable and complete graph. \nWe are currently extending our network inference method to implement the aforementioned data analysis pipeline. \n\n", "meta": {"hexsha": "3edf5e67c334869cdb301a1a2d8be93d675898c4", "size": 5063, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LABnet bioinformatics/discussion.tex", "max_stars_repo_name": "worldofpiggy/academic-papers", "max_stars_repo_head_hexsha": "a9ad707cf504e6460ebc0ec53e6217156726022a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-10-31T19:39:38.000Z", "max_stars_repo_stars_event_max_datetime": "2016-10-31T19:39:38.000Z", "max_issues_repo_path": "LABnet bioinformatics/discussion.tex", "max_issues_repo_name": "worldofpiggy/academic-papers", "max_issues_repo_head_hexsha": "a9ad707cf504e6460ebc0ec53e6217156726022a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LABnet bioinformatics/discussion.tex", "max_forks_repo_name": "worldofpiggy/academic-papers", "max_forks_repo_head_hexsha": "a9ad707cf504e6460ebc0ec53e6217156726022a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 140.6388888889, "max_line_length": 791, "alphanum_fraction": 0.801303575, "num_tokens": 991, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8652240895276223, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5992905237368679}}
{"text": "\\subsection{Vector spaces}\\label{subsec:vector_spaces}\n\nThis subsection is about the algebraic properties of vector spaces. See \\fullref{subsec:vector_space_geometry} for \\enquote{geometric} concepts like hyperplanes and convexity.\n\n\\begin{definition}\\label{def:vector_space}\n  A \\term{vector space} \\( (V, +, \\cdot) \\) is a \\hyperref[def:left_module]{left module} over a field \\( \\BbbK \\).\n\n  We call elements of \\( \\BbbK \\) \\term{scalars} and elements of \\( V \\) \\term{vectors}.\n\n  The category of vector spaces over \\( \\BbbK \\) is denoted by \\( \\cat{Vect}_{\\BbbK} \\).\n\\end{definition}\n\n\\begin{definition}\\label{def:vector_field}\n  Let \\( V \\) be a vector space over \\( \\BbbK \\). Functions of the type\n  \\begin{equation*}\n    f: \\BbbK \\to V\n  \\end{equation*}\n  are called \\term{vector fields}. To avoid confusion, \\( \\BbbK \\) is sometimes referred to as a \\term{scalar field}. This convention comes from physics and is dominant in areas that are far from algebraic field theory, hence in practice it does not cause a lot of confusion.\n\\end{definition}\n\n\\begin{remark}\\label{rem:real_vector_space}\n  Outside of algebra, we are usually only interested in vector spaces over the fields \\( \\BbbR \\) or \\( \\BbbC \\). We call them \\term{real vector spaces} and \\term{complex vector spaces}, respectively.\n\\end{remark}\n\n\\begin{definition}\\label{def:complex_conjucate_vector_space}\n  Let \\( V \\) be a vector space over the complex numbers \\( \\BbbC \\). Its \\term{complex conjugate vector space} \\( \\overline V \\) is the same space, but with scalar multiplication defined as\n  \\begin{equation*}\n    t \\cdot_{\\overline V} x \\coloneqq \\overline t \\cdot_V x.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:field_extension_is_vector_space}\n  Let \\( \\BbbK \\) be a field \\hyperref[def:field_extension]{extension} of \\( G \\). Then \\( \\BbbK \\) is a vector space over \\( G \\).\n\\end{proposition}\n\\begin{proof}\n  Since \\( \\BbbK \\) already has the structure of an abelian group, we must only define scalar multiplication\n  \\begin{balign*}\n     & \\circ: G \\times \\BbbK \\to \\BbbK, \\\\\n     & g \\circ f \\coloneqq gf,\n  \\end{balign*}\n  where the product in the definition is simply multiplication in \\( \\BbbK \\). The well-definedness of \\( \\circ \\) follows from the well-definedness of multiplication in \\( \\BbbK \\).\n\\end{proof}\n\n\\begin{remark}\\label{rem:linear_span_only_for_vector_spaces}\n  The definition for linear \\hyperref[def:linear_span]{span} applies to general commutative \\hyperref[def:left_module]{modules}. 
However, since \\fullref{thm:vector_space_linear_dependence,thm:vector_space_basis} do not apply to general commutative modules, it makes sense to use linear spans only within the context of vector spaces.\n\\end{remark}\n\n\\begin{definition}\\label{def:linear_span}\n  For a set \\( A \\subseteq V \\) of vectors, the set of all linear combinations of finite subsets of \\( A \\) is called its span and is denoted by \\( \\linspan{A} \\).\n\\end{definition}\n\n\\begin{proposition}\\label{thm:vector_space_linear_dependence}\n  The set \\( A \\subseteq V \\) is linearly dependent in the sense of \\fullref{def:left_module_linear_dependence} if and only if there exists a vector \\( x \\in A \\) such that\n  \\begin{equation*}\n    x \\in \\linspan{A \\setminus \\{ x \\}}.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( A \\subseteq M \\) and let\n  \\begin{equation*}\n    0_M \\coloneqq \\sum_{k=1}^n t_k x_k,\n  \\end{equation*}\n  where at least one of the scalars \\( t_1, \\ldots, t_n \\) is nonzero and where \\( x_1, \\ldots, x_n \\in A \\) are distinct nonzero vectors. Without loss of generality, assume that \\( t_{k_0} \\) is a nonzero scalar. Then\n  \\begin{balign*}\n    0_M             & = \\sum_{k=1}^n t_k x_k,                                                                                  \\\\\n    t_{k_0} x_{k_0} & = -\\sum_{k \\neq k_0} t_k x_k,                                                                                 \\\\\n    x_{k_0}         & = \\sum_{k \\neq k_0} \\left(-\\frac {t_k} {t_{k_0}} \\right) x_k \\in \\linspan{A \\setminus \\left\\{ x_{k_0} \\right\\}}.\n  \\end{balign*}\n\n  \\NecessitySubProof Let \\( x \\in A \\) be such that \\( x \\in \\linspan{A \\setminus \\{ x \\}} \\). By \\fullref{def:linear_combination}, there exist vectors \\( x_1, \\ldots, x_n \\in A \\setminus \\{ x \\} \\) and scalars \\( t_1, \\ldots, t_n \\in R \\) such that\n  \\begin{equation*}\n    x = \\sum_{k=1}^n t_k x_k.\n  \\end{equation*}\n\n  Then \\( 0_M \\) is a nontrivial linear combination of the vectors \\( x_1, \\ldots, x_n, x \\), since the coefficient of \\( x \\) is nonzero:\n  \\begin{equation*}\n    0_M = \\sum_{k=1}^n t_k x_k - x.\n  \\end{equation*}\n\\end{proof}\n\n\\begin{definition}\\label{affine_independence}\n  Given a vector space \\( V \\) over \\( F \\), we say that a set \\( A \\subseteq V \\) of vectors is \\term{affinely independent} in \\( V \\) if the set\n  \\begin{equation*}\n    \\{ (x, 1) \\colon x \\in A \\}\n  \\end{equation*}\n  is linearly independent in \\( V \\times F \\).\n\\end{definition}\n\n\\begin{proposition}\\label{thm:vector_space_basis}\n  The set \\( B \\subseteq V \\) is a basis in the sense of \\fullref{def:left_module_hamel_basis} if and only if it is linearly independent and\n  \\begin{equation*}\n    V = \\linspan{B}.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof We will prove the contraposition, that is, if \\( \\linspan{B} \\neq M \\), then \\( B \\) is not a maximal linearly independent set.\n\n  If \\( \\linspan{B} \\subsetneq M \\), then there exists a vector \\( x \\in M \\) such that \\( x \\) is not a linear combination of any subset of \\( B \\). Thus, no nontrivial linear combination of \\( B \\cup \\{ x \\} \\) equals zero. Hence, \\( B \\cup \\{ x \\} \\) is linearly independent and \\( B \\) is not maximal.\n\n  \\NecessitySubProof Let \\( B \\subseteq M \\) and \\( \\linspan{B} = M \\). 
Assume that there exists a vector \\( x \\in M \\setminus B \\) such that the set \\( B \\cup \\{ x \\} \\) is linearly independent.\n\n  Then, by \\fullref{thm:vector_space_linear_dependence}, \\( x \\notin \\linspan{B} \\). Thus,\n  \\begin{equation*}\n    M = \\linspan{B} \\subsetneq \\linspan{B \\cup \\{ x \\}} \\subseteq M,\n  \\end{equation*}\n  which is a contradiction.\n\\end{proof}\n\n\\begin{theorem}[Every vector space has a basis]\\label{thm:every_vector_space_has_a_basis}\n  Every vector space has a \\hyperref[def:left_module_hamel_basis]{basis}. Equivalently, all vector spaces are \\hyperref[def:free_left_module]{free modules}.\n\n  Compare this to finite-dimensional vector spaces of dimension \\( n \\) over \\( \\BbbK \\), which are all isomorphic to \\( \\BbbK^n \\).\n\n  In \\hyperref[def:zfc]{\\logic{ZF}} this theorem is equivalent to the \\hyperref[def:zfc/choice]{axiom of choice} --- see \\fullref{thm:axiom_of_choice_equivalences/vector_space_bases}.\n\\end{theorem}\n\\begin{proof}\n  Let \\( V \\) be a vector space. Let \\( \\mathcal{B} \\) be the family of all linearly independent \\hyperref[def:linear_combination]{subsets} of \\( V \\).\n\n  The family \\( \\mathcal{B} \\) is obviously nonempty since any \\hyperref[rem:singleton_sets]{singleton} \\( \\{ v \\} \\) with \\( v \\neq 0_V \\) belongs to \\( \\mathcal{B} \\). The union of any chain \\( \\mathcal{B}' \\subseteq \\mathcal{B} \\) is itself linearly independent, since any finite subset of the union is contained in some member of the chain. Thus, we can apply \\fullref{thm:zorns_lemma} to obtain a maximal element \\( B \\).\n\n  Assume that \\( B \\) is not a basis, that is,\n  \\begin{equation*}\n    \\linspan B \\subsetneq V.\n  \\end{equation*}\n\n  Take \\( v \\in V \\setminus \\linspan B \\). Then the set \\( B \\cup \\{ v \\} \\) is linearly independent, which contradicts the maximality of \\( B \\). Thus, \\( B \\) is a basis of \\( V \\) and \\( V \\) is a free module.\n\\end{proof}\n\n\\begin{definition}\\label{def:vector_space_dimension}\n  The \\hyperref[def:free_left_module]{free module rank} of a vector space \\( V \\) is called the \\term{dimension} \\( \\dim V \\) of \\( V \\). If \\( U \\) is a vector subspace of \\( V \\), we call \\( \\co\\dim_V U \\coloneqq \\dim(V/U) \\) the \\term{codimension} of \\( U \\) relative to \\( V \\).\n\\end{definition}\n\n\\begin{proposition}\\label{thm:linear_maps_form_algebra}\n  The set \\( \\hom(U, V) \\) is a vector space.\n\\end{proposition}\n\\begin{proof}\n  By \\fullref{thm:functions_over_ring_form_algebra}, \\( \\hom(U, V) \\) forms a \\( \\BbbK \\)-vector space.\n\\end{proof}\n\n\\begin{remark}\\label{rem:functional}\n  The term \\enquote{functional} does not have a strict meaning. For example, logicians use terms like \\enquote{primitive recursive functional} for certain generalized functions. Functions are also ill-defined, see \\fullref{rem:function_definition}. Outside of logic, however, the term \\enquote{functional} usually refers to a function from a vector space \\( V \\) to its base field \\( \\BbbK \\). 
Examples include linear \\hyperref[def:linear_operator]{functionals}, like projection \\hyperref[def:left_module_basis_projection]{maps} and \\hyperref[def:differentiability]{derivatives}, and nonlinear functionals, like the Minkowski \\hyperref[def:minkowski_functional]{functionals}.\n\\end{remark}\n\n\\begin{definition}\\label{def:eigenpair}\n  Let \\( f: V \\to V \\) be a function from a vector space \\( V \\) over \\( \\BbbK \\) to itself.\n\n  An \\term{eigenpair} of \\( f \\) consists of an \\term{eigenvalue} \\( \\lambda \\in \\BbbK \\) and a nonzero \\term{eigenvector} \\( x \\in V \\) such that\n  \\begin{equation*}\n    f(x) = \\lambda x.\n  \\end{equation*}\n\\end{definition}\n", "meta": {"hexsha": "2b351e3fd91cfedd9cd41f3d883251cc2c955b42", "size": 9464, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/vector_spaces.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/vector_spaces.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/vector_spaces.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.8562091503, "max_line_length": 675, "alphanum_fraction": 0.6839602705, "num_tokens": 2926, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.8031738034238806, "lm_q1q2_score": 0.5992793019832444}}
{"text": "%%\\begin{CodeBlock}{51mm}{\\file{feff.inp} file:}\n\n\\subsection{Constraints}\n\\begin{slide}{ Constraints in {\\feffit} }\n  \n  Examples of Path Parameters written as functions of the Generalized\n  Variables:\n  \\vmm\n  \\setbeamertemplate{blocks}[rounded][shadow=true]\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{45mm}\n     \\begin{block} \n        {Parameter = Variable}\n{\\tiny{\\begin{alltt}\n\n{\\Red{guess e0    = 1.0 }}\npath(1,  e0 = e0)\npath(2,  e0 = e0)\n\\end{alltt}}}\n \\end{block}\n   \\vspace{1.75mm}\n   \\onslide+<2->\n   \\begin{block}\n{mixed coordination shell}\n\n{\\tiny{\\begin{alltt}\n set S02   = 0.80\n\n {\\Red{guess x   = 0.5}}\n\n path(1,  Amp= S02 * x )\n path(2,  Amp= S02 * (1-x))\n\n\\end{alltt}}}\n   \\end{block}\n   \\end{minipage} \n    & \\hspace{0.2mm}\n     \\begin{minipage}{65mm}\n       %% \\onslide+<3->  \n%       \\begin{block}\n%  { Fit Einstein Temperature }\n\n% {\\tiny{\\begin{alltt}\n  \n%  set   factor  = 24.254337   {\\Blue{#= (hbar*c)^2/(2 k_boltz)}}\n% {\\Blue{# mass and reduced mass in amu}}\n%  set   mass1 = 63.54,  mass2 = 63.54\n%  set   r_mass =  1/ (1/mass1 +  1/mass2)\n\n% {\\Blue{# the Einstein Temp will be adjusted in the fit!}}\n%  {\\Red{guess thetaE = 200}}\n% {\\Blue{# use for data set 1, T=77}}\n%  set   temp1 = 77\n%  {\\Red{def ss2_path1 = factor*coth(thetaE/(2*temp1))/r_mass )}}\n%  path(101,  sigma2 = ss2_path1   )\n\n% {\\Blue{# use for data set 2, T=300}}\n%  set   temp2 = 300\n%  {\\Red{def ss2_path2 = factor*coth(thetaE/(2*temp2))/r_mass )}}\n%  path(201,  sigma2 = ss2_path2   )\n% \\end{alltt}}\n% \\end{block}\n\\end{minipage}\\\\\n\\end{tabular}    \n\n\\vmm \\vmm\n\n\\onslide+<4-> Other Examples:\n\n\\begin{itemize} \n\\item force one $R$ for the same bond for data taken from different  edges.\n\\item model complex distortions (height of a sorbed atom above a surface).\n\\end{itemize}\n \n\\end{slide}\n\n\n\n", "meta": {"hexsha": "340c5b92e6eba0cf36da1f658fa8f4f2cc6abe41", "size": 1791, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/fitting.tex", "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/fitting.tex", "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/fitting.tex", "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.3875, "max_line_length": 75, "alphanum_fraction": 0.611948632, "num_tokens": 680, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569016, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.5992792967102958}}
{"text": "\\subsection{Heapsort}\n\n\\begin{frame}{Heapsort - Algorithm 1 / 10}\n  \\textbf{Heapsort:}\n  \\begin{itemize}\n    \\item\n      The principle stays the same\n    \\item\n      Better structure for finding the smallest element quicker\n  \\end{itemize}\n  \\vspace{1em}\n  \\onslide<2- |handout:1>{\\textbf{Binary heap:}}\n  \\begin{itemize}\n    \\item<2- |handout:1>\n      Preferably a complete binary tree\n    \\item<2- |handout:1>\n      \\textbf{Heap property:} Each child is {\\color{MainA}smaller} (larger) than the parent\n      element\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 2 / 10}\n  \\textbf{Min heap:}\n  \\begin{itemize}\n    \\item<1- |handout:1>\n      \\textbf{Heap property:} Each child is {\\color{MainA}smaller}\n      (larger) than the parent element\n  \\item<2- |handout:1>\n    A valid heap fulfills the property at each node\n  \\end{itemize}\n  \\vspace{-1em}\n  \\begin{columns}%\n    \\begin{column}[b]{0.45\\textwidth}%\n      \\begin{figure}[!h]%\n        \\begin{adjustbox}{height=0.75\\linewidth}%\n          \\input{Images/Heap/MinHeap_Valid.tikz}\n        \\end{adjustbox}\n        \\caption{Valid min heap}\n        \\label{fig:minheap_valid}\n      \\end{figure}\n    \\end{column}%\n    \\hspace*{0.1em}%\n    \\begin{column}[b]{0.45\\textwidth}%\n      \\begin{figure}[!h]%\n        \\begin{adjustbox}{height=0.75\\linewidth}%\n          \\input{Images/Heap/MinHeap_Invalid.tikz}\n        \\end{adjustbox}\n        \\caption{Invalid min heap}\n        \\label{fig:minheap_invalid}\n      \\end{figure}\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 3 / 10}\n  \\textbf{How to save the heap?}\\\\[0.25em]\n  \\begin{itemize}\n    \\item\n      We number all nodes from top to bottom and left to right starting at\n      {\\color{MainA}0}\n      \\begin{itemize}\n        \\item\n          The children of node {\\color{MainA}$i$} are\n          {\\color{MainA}$2i + 1$} and {\\color{MainA}$2i + 2$}\n        \\vspace*{0.5em}\n        \\item\n          The parent node of node {\\color{MainA}$i$} is\n          {\\color{MainA}$\\mathrm{floor}\\left(\\frac{i-1}{2}\\right)$}\n      \\end{itemize}\n  \\end{itemize}%\n  \\vspace{-2em}%\n  \\begin{columns}%\n    \\begin{column}{0.5\\textwidth}\n      \\begin{figure}[!h]%\n        \\begin{adjustbox}{width=\\linewidth}\n          \\input{Images/Heap/MinHeap_Numbered.tikz}%\n        \\end{adjustbox}\n        \\vspace*{-0.5em}%\n        \\caption{Min heap}%\n        \\label{fig:minheap_numbered}%\n      \\end{figure}%\n    \\end{column}\n    \\begin{column}{0.5\\textwidth}\n      \\begin{table}[!h]\n        \\caption{Elements can be stored in array}\n        \\label{tab:minheap_numbered}\n        \\begin{tabular}{ccccccc}\n          \\onslide<2- |handout:1>{\\color{MainB}0}&%\n          \\onslide<3- |handout:1>{\\color{MainB}1}&%\n          \\onslide<4- |handout:1>{\\color{MainB}2}&%\n          \\onslide<5- |handout:1>{\\color{MainB}3}&%\n          \\onslide<6- |handout:1>{\\color{MainB}4}&%\n          \\onslide<7- |handout:1>{\\color{MainB}5}&%\n          \\onslide<8- |handout:1>{\\color{MainB}6}\\\\\n          \\hline\n          \\multicolumn{1}{|c}{\\onslide<2- |handout:1>{2}}&%\n          \\multicolumn{1}{|c}{\\onslide<3- |handout:1>{3}}&%\n          \\multicolumn{1}{|c}{\\onslide<4- 
|handout:1>{4}}&%\n          \\multicolumn{1}{|c}{\\onslide<5- |handout:1>{11}}&%\n          \\multicolumn{1}{|c}{\\onslide<6- |handout:1>{7}}&%\n          \\multicolumn{1}{|c}{\\onslide<7- |handout:1>{5}}&%\n          \\multicolumn{1}{|c|}{\\onslide<8- |handout:1>{8}}\\\\\n          \\hline\n        \\end{tabular}\n      \\end{table}\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 4 / 10}\n  \\textbf{Repairing after taking the smallest element:} \\texttt{heap.pop()}\n  \\begin{itemize}\n    \\item<2- |handout:1>\n      Remove the smallest element (root node)\n    \\item<3- |handout:1>\n      Replace the root with the last node\n    \\item<4- |handout:1>\n      {\\color{MainA}Sift} the new root node down until the\n      {\\color{MainA}heap property} is satisfied\n  \\end{itemize}\n  \\onslide<5- |handout:1>{\n    \\begin{figure}[!h]%\n      \\begin{columns}%\n        \\begin{column}{0.3\\textwidth}%\n          \\begin{adjustbox}{width=\\linewidth}\n            \\input{Images/HeapSort/MinHeap_Repair_First.tikz}%\n          \\end{adjustbox}%\n        \\end{column}%\n        \\hspace*{0.05em}%\n        \\begin{column}{0.3\\textwidth}<7- |handout:1>%\n          \\begin{adjustbox}{width=\\linewidth}\n            \\input{Images/HeapSort/MinHeap_Repair_Second.tikz}%\n          \\end{adjustbox}\n        \\end{column}%\n        \\hspace*{0.05em}%\n        \\begin{column}{0.3\\textwidth}<9- |handout:1>%\n          \\begin{adjustbox}{width=\\linewidth}\n            \\input{Images/HeapSort/MinHeap_Repair_Third.tikz}%\n          \\end{adjustbox}%\n        \\end{column}%\n      \\end{columns}%\n      \\caption{Repairing a min heap via sifting}%\n      \\label{fig:minheap_repair}%\n    \\end{figure}\n  }\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 5 / 10}\n  \\textbf{Heapsort:}\n  \\begin{itemize}\n    \\item\n      Organize the {\\color{MainA}$n$} elements as a heap\n    \\item\n      While the heap still contains elements\n      \\begin{itemize}\n        \\item\n          Take the smallest element\n        \\item\n          Move the last node to the root\n        \\item\n          Repair the heap as described\n      \\end{itemize}\n    \\item<2- |handout:1>\n      % Output: {\\color{MainB}4}%\n      % \\onslide<6- |handout:1>{, {\\color{MainB}5}, {\\color{MainB}\\ldots}}\n       Output: {\\color{MainB}2}%\n       \\onslide<9- |handout:1>{, {\\color{MainB}3}, {\\color{MainB}\\ldots}}\n  \\end{itemize}\n  \\vspace*{-0.5em}\n  \\onslide<2- |handout:1>{\n    \\begin{center}\n      \\begin{figure}[!h]%\n        \\begin{columns}%\n          \\begin{column}{0.33\\textwidth}%\n            \\begin{centering}\n              \\begin{adjustbox}{height=8em}\n                \\input{Images/HeapSort/HeapSort_First.tikz}%\n              \\end{adjustbox}%\n            \\end{centering}\n          \\end{column}%\n          \\begin{column}{0.33\\textwidth}<4- |handout:1>%\n            \\begin{centering}\n              \\begin{adjustbox}{height=8em}\n                \\input{Images/HeapSort/HeapSort_Second.tikz}%\n              \\end{adjustbox}%\n            \\end{centering}\n          \\end{column}%\n          \\begin{column}{0.33\\textwidth}<8- |handout:1>%\n            \\begin{centering}\n              \\begin{adjustbox}{height=8em}\n                \\input{Images/HeapSort/HeapSort_Third.tikz}%\n            
  \\end{adjustbox}%\n            \\end{centering}\n          \\end{column}%\n        \\end{columns}%\n        \\caption{One iteration of Heapsort}%\n        \\label{fig:heapsort_repair}%\n      \\end{figure}\n    \\end{center}\n  }\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 6 / 10}\n  \\textbf{Creating a heap:}\n  \\begin{itemize}\n    \\item\n      This operation is called {\\color{MainA}heapify}\n    \\item<2- |handout:1>\n      The {\\color{MainA}$n$} elements are already stored in an array\n    \\item<3- |handout:1>\n      Interpret the array as a binary heap where the {\\color{MainA}heap property} is not yet satisfied\n    \\item<4- |handout:1>\n      We repair the heap from the bottom up (in layers) with {\\color{MainA}sifting}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 7 / 10}\n  \\vspace{-1.0em}\n  \\begin{table}[!h]%\n    \\caption{Input in an array}%\n    \\label{tab:heapify_numbers}%\n    \\begin{tabular}{ccccccc}\n      {\\color{MainB}0}&\n      {\\color{MainB}1}&\n      {\\color{MainB}2}&\n      {\\color{MainB}3}&\n      {\\color{MainB}4}&\n      {\\color{MainB}5}&\n      {\\color{MainB}6}\\\\\n      \\hline\n      \\multicolumn{1}{|c}{11}&%\n      \\multicolumn{1}{|c}{7}&%\n      \\multicolumn{1}{|c}{8}&%\n      \\multicolumn{1}{|c}{3}&%\n      \\multicolumn{1}{|c}{2}&%\n      \\multicolumn{1}{|c}{5}&%\n      \\multicolumn{1}{|c|}{4}\\\\\n      \\hline\n    \\end{tabular}\n  \\end{table}\n  \\vspace*{-0.5em}\n  \\begin{centering}\n    \\begin{figure}[!h]%\n      \\begin{columns}%\n        \\begin{column}{0.425\\textwidth}%\n          \\begin{adjustbox}{width=\\linewidth}%\n            \\input{Images/HeapSort/Heapify_First.tikz}%\n          \\end{adjustbox}%\n        \\end{column}%\n        \\begin{column}{0.425\\textwidth}<2- |handout:1>%\n          \\begin{adjustbox}{width=\\linewidth}%\n              \\input{Images/HeapSort/Heapify_Second.tikz}%\n          \\end{adjustbox}%\n        \\end{column}%\n      \\end{columns}%\n      \\caption{Heapify lower layer}%\n      \\label{fig:heapify_lower}%\n    \\end{figure}\n  \\end{centering}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 8 / 10}\n  \\begin{centering}\n   \\begin{figure}[!h]%\n     \\begin{columns}%\n       \\begin{column}{0.425\\textwidth}%\n         \\begin{adjustbox}{width=\\linewidth}%\n           \\input{Images/HeapSort/Heapify_Third.tikz}%\n         \\end{adjustbox}%\n       \\end{column}%\n       \\begin{column}{0.425\\textwidth}<2- |handout:1>%\n          \\begin{adjustbox}{width=\\linewidth}%\n            \\input{Images/HeapSort/Heapify_Fourth.tikz}%\n          \\end{adjustbox}%\n        \\end{column}%\n      \\end{columns}%\n      \\caption{Heapify upper layer}%\n      \\label{fig:heapify_upper}%\n    \\end{figure}\n  \\end{centering}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 9 / 10}\n  \\begin{centering}\n    \\begin{figure}[!h]\n      \\begin{adjustbox}{width=0.425\\linewidth}%\n        \\input{Images/HeapSort/Heapify_Fifth.tikz}%\n      \\end{adjustbox}%\n      \\caption{Resulting heap}%\n      \\label{fig:heapify_upper_final}%\n    \\end{figure}\n  
\\end{centering}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Heapsort - Algorithm 10 / 10}\n  \\textbf{Finding the minimum is intuitive:}\n  \\begin{itemize}\n    \\item\n      \\textbf{Minsort:} Iterate through all non-sorted elements\n    \\item\n      \\textbf{Heapsort:} Finding the minimum is conceptually trivial\n      \\begin{center}\n        \\textit{Just take the root of the heap}\n      \\end{center}\n  \\end{itemize}\n  \\vspace*{1.5em}\n  \\onslide<2- |handout:1>{\n    \\textbf{Removing the minimum in Heapsort:}\n    \\begin{itemize}\n      \\item\n        Repair the heap and restore the {\\color{MainA}heap property}\n        \\begin{itemize}\n          \\item\n            We don't have to repair the whole heap\n        \\end{itemize}\n      \\item\n        More on this in the next lecture\n    \\end{itemize}\n  }\n\\end{frame}\n\n%%% ===================================================================\n%%% This should be at the END of the file !!!!!!\n%%%\n%%% Local Variables: \n%%% mode: latex \n%%% TeX-master: \"../../Lecture.tex\" \n%%% End: \n%%% ===================================================================\n", "meta": {"hexsha": "1bf3ef0dcd917fa2c0e34a9a262d024b5b03e6f1", "size": 11074, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 31.8218390805, "max_line_length": 100, "alphanum_fraction": 0.5337728012, "num_tokens": 3506, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5992792894000737}}
{"text": "\\chapter{Non Inertial frames and Fictitious force}\n\\section{Non- inertial frame}\nBy now we were  discussing  inertial frames ,  in which Newton's second law of motion `$\\boldsymbol{F}=\\mathrm{ma}$'  holds true .\nThere are other frames of references in which Newton's law of  inertia does not hold and  are called non- inertial frames. All the accelerated and rotating frames are the non-inertial frames of reference.\\\\\nIn an accelerated frame ,  a force-free particle will seem to have an acceleration. If we do not consider the acceleration of the frame but apply Newton's laws to the motion of the force free-particle, then it will appear that a force is acting on it.\n\\section{Fictitious or Pseudo force (Translational.)}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.5cm,width=7.5cm]{fictitious}\n\t\\caption{}\n\t\\label{}\n\\end{figure}\nLet us consdier two frames $S$ and $S^{\\prime}$ where, $S$ is an inertial frame and  frame $S^{\\prime}$ is moving with an acceleration  $\\mathrm{a}_{0}$ relative to $S$. The acceleration of a particle $P$, on which no external force is acting, will be zero in the frame $S$. But in frame $S^{\\prime}$ the observer will find that an acceleration - $\\mathrm{a}_{0}$ is acting on it. Thus, in frame $S^{\\prime}$ the observed force on the particle is $-\\mathrm{m a}_{0}$, where $m$ is the mass of the particle.  \\textbf{Such a force, which does not really act on the particle but appears due to the acceleration of the frame, is called a Fictitious or pseudo force}. The fictitious force on the particle $P$ is \n\\begin{equation}\n\\mathrm{F}_{0}=- \\mathrm{m a}_{0}\n\\label{Ficititious 1}\n\\end{equation}\nHence the accelerated frame is non inertial.\nIf a force $\\mathrm{F}_{i}$ is applied on the particle and $\\mathrm{a}_{i}$ is the observed acceleration in the inertial frame $S$, then according to Newton's law\n\\begin{equation}\n\\mathrm{F}_{i}= \\mathrm{ma}_{i} \n\\label{Ficititious 2}\n\\end{equation}\nSince the non inertial frame $S^{\\prime}$ is moving with an acceleration  $\\mathrm{a}_{0}$, We can connect the  position vectors of the two frames as,\n \\begin{align}\n \\mathrm{r}_{i}&=\\mathrm{r}_{n}+\\frac{1}{2} \\mathrm{a}_{0} t^{2}\n \\intertext{ Differentiating twice we get,}\n \\frac{d^{2} \\mathrm{r}_{i}}{d t^{2}}&=\\frac{d^{2} \\mathrm{r}_{n}}{d t^{2}}+\\mathrm{a}_{0}\\\\\n  \\frac{d^{2} \\mathrm{r}_{i}}{d t^{2}}&=\\mathrm{a}_{i}\\quad  (\\text { The acceleration in the inertial frame.}) \\notag\\\\ \\frac{d^{2} \\mathrm{r}_{n}}{d t^{2}}&=\\mathrm{a}_{n} \\quad (\\text { The acceleration observed in the non-inertial frame.}) \\notag \\\\\n  a_{i}&=a_{n}+a_{0}\\notag  \\\\\n  a_{i}-a_{0}&=a_{n} \\\\\n  m a_{i}-m a_{0}&=m a_{n}\\\\\n\\intertext{And using equations \\ref{Ficititious 1} and \\ref{Ficititious 2} we get}\n  \\mathrm{F}_{n}&=\\mathrm{F}_{i}+\\mathrm{F}_{0}\n  \\intertext{Thus the observer in the accelerated frame will measure the resultant (total) force which is the sum of real and fictitious forces on the particle i.e.,}\n  \\text{Total force}&=\\text{True force} +\\text{Fictitious force}\n \\end{align}\n \\subsection{ Free fall of a body inside a box}\n Suppose that a box is falling in the gravitational field of the earth with an acceleration $\\mathrm{a}_{0}=-\\mathrm{g} \\hat{\\mathrm{n}}$, where $g$ is the acceleration due to gravity and $\\hat{n}$ is a unit vector in the upward direction. 
Now, if we consider a particle falling freely inside the box, the fictitious force on the particle is $\\mathrm{F}_{0}=-m \\mathrm{a}_{0}=m g \\hat{n}$. As the real force on the particle due to the attraction of the earth is $-m g \\hat{\\mathrm{n}}$, the force observed by the observer inside the box is\n \\begin{equation}\n \\mathrm{F}_{n}=\\mathrm{F}_{i}+\\mathrm{F}_{0}=-m g \\hat{\\mathrm{n}}+m g \\hat{\\mathrm{n}}=0\n \\end{equation}\n\\textbf{ Case-1:}$\\quad$ If the particle has no initial velocity relative to the box, it will seem to remain suspended in mid-air at the same place inside the box.\\\\\\\\\n\\textbf{ Case-2:}$\\quad$ Suppose that the box is moved with an acceleration $a_{0}=g \\hat{\\mathrm{n}}$ in the upward direction relative to the ground. In such a case, the real force $\\left(F_{i}\\right)$ and fictitious force $\\left(F_{0}\\right)$ on the particle are given by,\n\\begin{equation}\n \\mathrm{F}_{i}=-m g \\hat{\\mathrm{n}}\\quad ; \\quad \\mathrm{F}_{0}=-m {\\mathrm{a}}_{0}=-m g \\hat{\\mathrm{n}}\n\\end{equation}\nThen, the total force in the accelerated frame (box) is\n\\begin{equation}\n \\mathrm{F}_{n}=\\mathrm{F}_{i}+\\mathrm{F}_{0}=-m g \\hat{\\mathrm{n}}-m g \\hat{\\mathrm{n}}=-2 m g \\hat{\\mathrm{n}}\n\\end{equation}\n This means that the observer, stationed in the box having an acceleration $g$ upward, will measure a force $2 m g$ downward on the particle.\n\\begin{exercise}\n\\textbf{The Apparent Force of Gravity:}\\\\\n\tA small weight of mass $m$ hangs from a string in an automobile which accelerates at rate $A$. What is the static angle of the string from the vertical, and what is its tension?\n\\end{exercise}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=2cm,width=7cm]{Car}\n\\end{figure} \n\\begin{answer}\n\tWe shall analyze the problem both in an inertial frame and in a frame accelerating with the car.\\\\\n\t\\begin{minipage}{0.45\\textwidth}\n\t\t$\\left. \\right. $\\\\\n\t$\\left. \\right. $ \\hspace{2cm}\\textbf{Inertial System}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.5cm,width=4cm]{car 1}\n\\end{figure}\n\t\\begin{align*}\n\tT \\cos \\theta-W &=0 \\\\\n\tT \\sin \\theta &=m A \\\\\n\t\\tan \\theta &=\\frac{m A}{W}\\\\&=\\frac{A}{g} \\\\\n\tT &=m\\left(g^{2}+A^{2}\\right)^{1 / 2}\n\t\\end{align*}\n\t\\end{minipage} \\hfill\n\\begin{minipage}{0.45\\textwidth}\n\t\t$\\left. \\right. $\\\\\n\\textbf{System accelerating with auto.}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.5cm,width=4cm]{car2}\n\\end{figure}\n\t\\begin{align*}\n\tT \\cos \\theta-W &=0 \\\\\n\tT \\sin \\theta+F_{\\text {fict }} &=0 \\\\\n\tF_{\\text {fict }} &=-m A \\\\\n\t\\tan \\theta &=\\frac{A}{g} \\\\\n\tT &=m\\left(g^{2}+A^{2}\\right)^{1 / 2}\n\t\\end{align*}\n\\end{minipage}\\\\\nFrom the point of view of a passenger in the accelerating car, the fictitious force acts like a horizontal gravitational force. The effective gravitational force is the vector sum of the real and fictitious forces.\n\\end{answer}\n \n\\section{Centrifugal force}\nLet us consider a mass $m$ at rest in a non-inertial frame of reference, so that in this frame the observed acceleration of the particle is zero. Now suppose that the frame is rotating with an angular velocity $\\vec{\\omega}$ relative to an inertial frame. 
In this non-inertial (rotating) frame, the observed acceleration $\\left(\\mathrm{a}_{n}\\right)$ of the mass $m$ is zero.\\\\\n\n\\begin{minipage}{0.65\\textwidth}\n\\begin{align*}\ni.e.,\\  a_{n}&=0\\\\\n\\text{ Then the total force,}\\ \\mathrm{F}_{n}=m \\mathrm{a}_{n}&=0 \\\\\n\\mathrm{F}_{i}+\\mathrm{F}_{0}&=\\mathrm{F}_{n} \\\\\n-m \\omega^{2} \\mathrm{r}+\\mathrm{F}_{0}&=0\\\\\n\\text{Then,}\\ \\mathrm{F}_{0}&=m \\omega^{2} \\mathrm{r}\n\\intertext{This fictitious force $F_{0}$ is directed away from the centre, along $r$, and is called the centrifugal force.}\n\\end{align*}\n\\end{minipage}\\hfil\n\\begin{minipage}{0.25\\textwidth}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3.5cm,width=4.2cm]{Centrifugal1}\n\t\t\\caption{}\n\t\t\\label{}\n\t\\end{figure}\n\\end{minipage}\\\\\nWe know that the centrifugal force is a pseudo force and appears in the rotating frame due to its rotation. Here, in the non-inertial frame, the centrifugal force is balanced by the inward tension in the string. In general, in the rotating frame, the centrifugal force is equal and opposite to the actual force and both are acting on the same particle. \n\\section{Uniformly Rotating Frames}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=4.2cm,width=4.2cm]{Centrifugal2}\n\t\\caption{}\n\t\\label{}\n\\end{figure}\nSuppose that a frame $S^{\\prime}\\left(X_{r}, Y_{r}, Z_{r}\\right)$ is rotating with an angular velocity $\\vec{\\omega}$ relative to an inertial frame $S\\left(X_{i}\\right.$, $\\left.Y_{i}, Z_{i}\\right)$. For simplicity, we assume that both frames have a common origin $O$ and a common $Z$-axis.\\\\\nThe position vector of a particle $P$ in both frames will be the same, i.e., $\\mathrm{R}_{i}=\\mathrm{R}_{r}=\\mathrm{R}$, because the origins are coincident. Now, if the particle $P$ is stationary in the frame $S$, the observer in the rotating frame $S^{\\prime}$ will see that the particle is moving oppositely with linear velocity $-\\vec{\\omega} \\times \\mathrm{R}$. 
\n\\begin{align}\n\\intertext { If the velocity of the particle in the frame $S$,}v_{i} &=\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{i}\\\\ \\intertext {Then its velocity in the rotating frame, $S^{\\prime}$,} v_{r}&=\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{r}\\\\\n\\text{It can be written as,}\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{r}&=\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{i}-\\vec{\\omega} \\times \\mathrm{R} \\\\\n\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{i}&=\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{r}+\\vec{\\omega} \\times \\mathrm{R} \\label{centrifugal 1}\n\\intertext{In fact, this relation holds for all vectors and relates the time derivatives of a vector in the two frames. Therefore, equation \\ref{centrifugal 1} may be written in the form of an operator equation,}\n\\left(\\frac{d}{d t}\\right)_{i}&=\\left(\\frac{d}{d t}\\right)_{r}+\\vec{\\omega} \\times \\label{centrifugal 2}\n\\intertext{ Equation \\ref{centrifugal 1} can be written in terms of velocities as, }\n\\mathrm{v}_{i}&=\\mathrm{v}_{r}+\\vec{\\omega} \\times \\mathrm{R} \\qquad \\text{Since, }\\frac{d \\mathrm{R}}{d t}=\\mathrm{v}\n\\intertext{Now, if we operate equation \\ref{centrifugal 2} on the velocity vector $\\mathrm{v}_{i}$, we have}\n\\left(\\frac{d \\mathrm{v}_{i}}{d t}\\right)_{i}&=\\left(\\frac{d \\mathrm{v}_{i}}{d t}\\right)_{r}+\\vec{\\omega} \\times \\mathrm{v}_{i}\\label{centrifugal 3}\n\\intertext{Substituting the value of $\\mathrm{v}_{i}$ in the right-hand side of equation \\ref{centrifugal 3}, we obtain}\n\\left(\\frac{d \\mathrm{v}_{i}}{d t}\\right)_{i} &=\\left[\\frac{d}{d t}\\left(\\mathrm{v}_{r}+\\vec{\\omega} \\times \\mathrm{R}\\right)\\right]_{r}+\\vec{\\omega} \\times\\left(\\mathrm{v}_{r}+\\vec{\\omega} \\times \\mathrm{R}\\right) \\\\\n&=\\left(\\frac{d \\mathrm{v}_{r}}{d t}\\right)_{r}+\\frac{d \\vec{\\omega}}{d t} \\times \\mathrm{R}+\\vec{\\omega} \\times\\left(\\frac{d \\mathrm{R}}{d t}\\right)_{r}+\\vec{\\omega} \\times \\mathrm{v}_{r}+\\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})\n\\intertext{If we write the accelerations as} \\frac{d \\mathrm{v_{r}}}{d t}&=\\mathrm{a}_{r} \\quad ; \\quad  \\frac{d \\mathrm{v_{i}}}{d t}=\\mathrm{a}_{i}\\quad  \\text{And}\\quad \\left(\\frac{d \\mathrm{R}}{d t}\\right)_{r}=\\mathrm{v}_{r}\n\\intertext{then,}\n\\mathrm{a}_{i}&=\\mathrm{a}_{r}+2 \\vec{\\omega} \\times \\mathrm{v}_{r}+\\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})+\\frac{d \\vec{\\omega}}{d t} \\times \\mathrm{R}\\\\\nm \\mathrm{a_{i}} &=m \\mathrm{a_{r}} +2m (\\vec{\\omega} \\times \\mathrm{v}_{r})+ m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})+m \\frac{d \\vec{\\omega}}{d t} \\times \\mathrm{R}\n\\intertext{Allowing, in addition, for a translational acceleration $\\frac{d^{2} \\mathrm{R}}{d t^{2}}$ of the origin of the rotating frame, the forces in the two frames are related by}\n\\mathrm{F_{i}} &=\\mathrm{F_{r}}+m \\frac{d^{2} \\mathrm{R}}{d t^{2}}+m \\boldsymbol{\\vec{\\omega}} \\times(\\boldsymbol{\\vec{\\omega}} \\times \\mathrm{r})+2 m \\boldsymbol{\\vec{\\omega}} \\times \\mathrm{v}+m \\frac{d \\vec{\\omega}}{d t} \\times \\mathrm{r} \\\\\n\\mathrm{F_{r}} &=\\mathrm{F_{i}}-m \\frac{d^{2} \\mathrm{R}}{d t^{2}}-m \\boldsymbol{\\vec{\\omega}} \\times(\\boldsymbol{\\vec{\\omega}} \\times \\mathrm{r})-2 m \\boldsymbol{\\vec{\\omega}} \\times \\mathrm{v}-m \\frac{d \\vec{\\omega}}{d t} \\times \\mathrm{r} \\\\\n\\mathrm{F_{r}} &=\\mathrm{F_{i}} +\\mathrm{F}_{\\text {translation }}+\\mathrm{F}_{\\text {centrifugal }}+\\mathrm{F}_{\\text {Coriolis }}+\\mathrm{F}_{\\text {azimuthal }}\n\\end{align}\n\\subsubsection{Rotation of Earth}\nIn the case of earth, the common origin $O$ may be considered as the centre 
of the earth, the $Z$-axis as coinciding with its rotational axis and the frame $S^{\\prime}$ as rotating with the earth relative to the non-rotating frame $S$.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=4.5cm,width=4cm]{Centrifugal3}\n\t\\caption{Rotation of earth}\n\t\\label{Rotation of earth}\n\\end{figure}\n\\begin{align*}\n\\intertext{For earth, $\\vec{\\omega}$ is constant, so,} \\frac{d \\vec{\\omega}}{d t}&=0 \\\\\n\\text{Then,}\\ \\mathrm{a}_{i}&=\\mathrm{a}_{r}+2 \\vec{\\omega} \\times \\mathrm{v}_{r}+\\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})\n\\intertext{If $m$ is the mass of the particle, then the force in the rotating frame is,}\nm \\mathrm{a}_{r}&=m \\mathrm{a}_{i}-2 m \\vec{\\omega} \\times \\mathrm{v}_{r}-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})\\\\\n\\text{But,}\\ m \\mathrm{a}_{r}&=\\mathrm{F}_{i}+\\mathrm{F}_{0}\n\\intertext{Therefore the fictitious force $\\mathrm{F}_{0}$ is given by,}\n\\mathrm{F}_{0}&=-2 m \\vec{\\omega} \\times \\mathrm{v}_{r}-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})\n\\intertext{Here $-2 m \\vec{\\omega} \\times \\mathrm{v}_{r}$ is the Coriolis force and\\  $-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})$, the centrifugal force.}\n\\end{align*}\n\\subsection{Azimuthal Force}\nThe azimuthal force appears to act on particles which are observed from a rotating frame that has a non-uniform angular velocity, i.e. $\\frac{d \\vec{\\omega}}{d t} \\neq 0$.\nIts direction is tangential to the rotation of the frame. \n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.8cm,width=4cm]{Azimuthal}\n\t\\caption{Azimuthal force.}\n\t\\label{Azimuthal force}\n\\end{figure}\n\\begin{equation}\n\\text{Azimuthal Force}=-m \\frac{d{\\vec{\\omega}}}{d t} \\times r=\\vec{F}_{a z}\n\\end{equation}  \nThe direction of the azimuthal force on a particle lying on a non-uniformly rotating disc is shown in figure \\ref{Azimuthal force}.\n\\subsection{Centrifugal force}\n The centrifugal force is the only fictitious force acting on a particle which is at rest $\\left(\\mathrm{v}_{r}=0\\right)$ in the rotating frame. This force goes hand in hand with the centripetal force, $\\frac{mv^{2}}{r}=mr\\omega^{2}$, as viewed by someone in an inertial frame. The centrifugal force may be written as, \n\\begin{align*}\n-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{R})&=m \\omega^{2} \\mathrm{r}\n\\intertext{Where, $r$ is the vector from the axis of the earth to the particle and normal to it, because,}\n\\boldsymbol{\\vec{\\omega}} \\times(\\boldsymbol{\\vec{\\omega}} \\times \\mathrm{R})&=(\\boldsymbol{\\vec{\\omega}} \\cdot \\mathrm{R}) \\vec{\\omega}-(\\boldsymbol{\\vec{\\omega}} \\cdot \\boldsymbol{\\vec{\\omega}}) \\mathrm{R}\\\\&=\\omega^{2} R \\sin \\phi \\hat{\\mathrm{k}}-\\omega^{2} R(\\hat{\\mathrm{i}} \\cos \\phi+\\hat{\\mathrm{k}} \\sin \\phi) \\qquad[\\because \\vec{\\omega}=\\omega \\hat{\\mathrm{k}} \\text { and } \\mathrm{R}=R(\\hat{\\mathrm{i}} \\cos \\phi+\\hat{\\mathrm{k}} \\sin \\phi)]\\\\\n&=-\\omega^{2} R \\cos \\phi \\hat{\\mathrm{i}}\\\\\n&=-\\omega^{2} \\mathrm{r} \n\\end{align*}\n\\subsubsection{Effective gravity force ($\\mathrm{mg}_{\\mathrm{eff}}$).}\n Consider a person standing motionless on the earth, at a polar angle $\\theta$. In the rotating frame of the earth, the person feels a centrifugal force (directed away from the axis) in addition to the gravitational force, $m \\mathrm{g}$.   
Note that we're using $\\mathrm{g}$ to denote the acceleration due solely to the gravitational force. \nThe sum of the gravitational and centrifugal forces doesn't point radially, unless the person is at the equator or at a pole. Let us denote the sum by $m \\mathrm{~g}_{\\text {eff }}$. To calculate $m \\mathrm{~g}_{\\mathrm{eff}}$, we must calculate $\\mathrm{F}_{\\text {cent }}=$ $-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{r})$. The $\\vec{\\omega} \\times \\mathrm{r}$ part has magnitude $R \\omega \\sin \\theta$, where $R$ is the radius of the earth, and it is directed tangentially along the latitude circle of radius $R \\sin \\theta$. So $-m \\vec{\\omega} \\times(\\vec{\\omega} \\times \\mathrm{r})$ points outward from the axis, with magnitude $m R \\omega^{2} \\sin \\theta$, which is just what we expect for something traveling at frequency $\\omega$ in a circle of radius $R \\sin \\theta$. Therefore, the effective gravitational force,\n$$\nm {g}_{\\mathrm{eff}} \\equiv m({g}-\\vec{\\omega} \\times(\\boldsymbol{\\vec{\\omega}} \\times \\mathrm{r}))\n$$\n\\begin{center}\n\\begin{figure}[H]\n\\begin{minipage}{0.45\\textwidth}\n\t\\includegraphics[height=3cm,width=3cm]{g-effective 1}\n\\end{minipage}\t\\hfil\n\\begin{minipage}{0.45\\textwidth}\n\t\\includegraphics[height=3cm,width=3cm]{g-effective 2}\n\\end{minipage}\t\n\t\\caption{Effective Gravity force}\n\t\\label{Effective Gravity force}\n\\end{figure}\n\\end{center}\n\\subsection{Coriolis force: $-2 m \\vec{\\omega} \\times {v_{r}}$}\nThe Coriolis force is a fictitious force which acts on a particle only if it is in motion with respect to the rotating frame. In the rotating frame, if a particle moves with velocity $v_{r}$, then it always experiences a force, $-2 m \\vec{\\omega} \\times {v_{r}}$, perpendicular to its path, opposite to the direction of the vector product $\\vec{\\omega} \\times {v_{r}}$.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.5cm,width=7cm]{coriolis}\n\t\\caption{Coriolis force}\n\t\\label{Coriolis force}\n\\end{figure}\n\\begin{equation}\n\\begin{split}\n\\text{Coriolis force,}\\ \\vec{F}_{\\text {cor }}&=-2 m \\vec{\\omega} \\times \\vec{v}_{r}\\\\  \\text{Or}\\quad \\vec{F}_{c o r}&=2 m \\vec{v}_{r} \\times \\vec{\\omega}\n\\end{split}\n\\end{equation}\\\\ The classic Coriolis example is to imagine that you are sitting at the North Pole and happen to have a howitzer handy. You fire it off, and because the earth is turning under the projectile, to you the shell appears to curve to the right. From the point of view of an observer in an inertial frame, the projectile follows a straight path. The angular deflection of the projectile must be $\\omega t$, the earth's rotation in time $t$. The effect of the Coriolis force is appreciable when it acts horizontally or has a horizontal component, because in the vertical direction its effect is masked by the much larger gravitational force.\n\n\\begin{abox}\n\tPractise set-3 \n\\end{abox}\n\\begin{enumerate} [label=\\color{ocre}\\textbf{\\arabic*.}]\n\t\\item \\textbf{(a)} Given that earth rotates once every $23 \\mathrm{~h} 56 \\mathrm{~min}$ around the axis from the North to South Pole, calculate the angular velocity, $\\omega$, of the earth. When viewed from above the North Pole, the earth rotates counterclockwise (west to east). Which way does $\\omega$ point?\\\\\\\\\n\t\\textbf{(b)} Foucault's pendulum is a simple pendulum suspended by a long string from a high ceiling. 
The effect of the Coriolis force on the motion of the pendulum is to produce a precession or rotation of the plane of oscillation with time. Find the time for one rotation of the plane of oscillation of the Foucault pendulum at $30^{\\circ}$ latitude.\n\t\\begin{answer}\n\t\t$ \\left. \\right.$\\\\\\\\\n\\textbf{(a)}  $\\omega$ points in the south to north direction along the rotational axis of the earth.\n\t\\begin{equation*}\n\\omega=\\frac{2 \\pi}{T}=\\frac{2 \\pi}{86,160}=7.292 \\times 10^{-5} \\mathrm{rad} / \\mathrm{s}\n\t\\end{equation*}\n\\textbf{(b)} The period of rotation of the plane of oscillation is given by\n\t\\begin{equation*}\n\tT^{\\prime}=\\frac{2 \\pi}{\\omega^{\\prime}}=\\frac{2 \\pi}{\\omega \\sin \\lambda}=\\frac{T_{0}}{\\sin \\lambda}=\\frac{24}{\\sin 30^{\\circ}}=48 \\mathrm{~h}\n\t\\end{equation*}\n\\end{answer}\n\n\n\\item An iceberg of mass $5 \\times 10^{5}$ tons near the North Pole moves west at the rate of $8 \\mathrm{~km} /$ day. Neglecting the curvature of the earth, find the magnitude and direction of the Coriolis force.\n\\begin{answer}\n\t\\begin{align*}\nF_{\\text {coriolis }}&=-2 m \\omega \\times v_{\\mathrm{R}} \\\\\nF_{\\text {cor }}&=2 m \\omega v_{\\mathrm{R}} \\sin \\theta\\\\&=2 \\times 5 \\times 10^{8} \\times 7.27 \\times 10^{-5} \\times \\frac{8000}{86,400} \\quad\\left(\\because \\theta=90^{\\circ}\\right) \\\\\n&=6730 \\mathrm{~N} \\text { due north }\n\t\\end{align*}\n\\end{answer}\n\\item A train of mass 1000 tons moves at the latitude $60^{\\circ}$ north. Find the magnitude and direction of the lateral force that the train exerts on the rails if it moves with a velocity of $15 \\mathrm{~m} / \\mathrm{s}$.\n\\begin{answer}\n\t\\begin{align*}\n\tF_{\\text {cor }} &=2 m v \\omega \\sin \\theta \\\\\n\t&=2 \\times 10^{6} \\times 15 \\times 7.27 \\times 10^{-5} \\sin 60^{\\circ} \\\\\n\t&=1889 \\mathrm{~N} \\text { on the right rail. }\n\t\\end{align*}\n\\end{answer}\n\\item A train of mass $m$ is travelling with a uniform velocity $v$ along a parallel of latitude. Show that the difference between the lateral force on the rails when it travels towards east and when it travels towards west is $4 m v \\omega \\cos \\lambda$, where $\\lambda$ is latitude and $\\omega$ is the angular velocity of the earth.\n\\begin{answer}\n\tThe difference between the lateral forces on the rails arises because when the train reverses its direction of motion the Coriolis force also changes its sign, the magnitude remaining the same. Therefore, the difference between the lateral forces on the rails will be equal to $2 m v \\omega \\cos \\lambda-(-2 m v \\omega \\cos \\lambda)$ or $4 m v \\omega \\cos \\lambda$.\n\\end{answer}\n\\item  A small block of mass $m$ lies on a wedge of mass $M$ as shown in figure. All the contact surfaces are smooth. When a horizontal force $\\mathrm{F}$ is applied to the wedge, the block does not slide on the wedge. What must be the value of $\\mathrm{F}$?\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=2.2cm,width=5cm]{pset-3- fict}\n\\end{figure}\n\\begin{answer}\n\\begin{align*}\n\\text { Horizontal acceleration of the system is } a_{0}&=\\frac{F}{M+m}\n\\intertext{According to the question, the block does not slide on the wedge; therefore, if the block is seen from the reference frame of the wedge, it will appear stationary. The wedge has a linear acceleration $a_{0}$; therefore, if the observation is made from the wedge frame, a pseudo force $-m a_{0}$ must be applied on the block, as shown in the figure. 
The block is stationary on the wedge, so the components of the forces parallel to the inclined plane must cancel.}\n\\text{Therefore,} \\ m g \\sin \\theta&=m a_{0} \\cos \\theta\\\\  \\text{or}\\ \\tan \\theta&=\\frac{a_{0}}{g}\\\\&=\\frac{F}{(M+m) g}\\\\ \\text{or}\\ F&=(M+m) g \\tan \\theta\n\\end{align*}\n\\end{answer}\n\\item Assuming earth to be a sphere, calculate the linear speed of an object lying on the earth's surface at a latitude of $\\lambda=60^{\\circ}$ and also calculate the centrifugal force experienced by an object of mass $50 \\mathrm{~kg}$. Compare this force with the gravitational force.\n\\begin{answer}\n\t\\begin{align*}\n\t\\intertext { The angular velocity of the earth about its axis due to its spinning motion is,}\n\t\\omega&=\\frac{2 \\pi}{T}=\\frac{2 \\pi}{(24 \\times 60 \\times 60)} \\mathrm{rad} / \\mathrm{sec}\\\\\n\t\\intertext{At latitude $\\lambda=60^{\\circ}$ objects move in a circle of radius $r=R \\cos \\lambda$}\n\t\\therefore \\text {Linear speed }&=\\omega r\\\\\n\t&=\\frac{2 \\pi}{24 \\times 60 \\times 60} \\times 6400 \\times 10^{3} \\cos 60^{\\circ} \\mathrm{m} / \\mathrm{s} \\\\\n\t&=\\frac{2 \\pi \\times 10^{3}}{27} \\mathrm{~m} / \\mathrm{s} \\approx 233 \\mathrm{~m} / \\mathrm{s}\\\\\n\t\\text { Centrifugal force }&=m \\omega^{2} r \\\\\n\t&=50 \\times\\left(\\frac{2 \\pi}{24 \\times 60 \\times 60}\\right)^{2} \\times 6400 \\times 10^{3} \\times \\cos 60^{\\circ} \\\\\n\t&\\simeq 0.85 \\mathrm{~N} \\\\\n\t\\text { Gravitational force }&=m g=50 \\times 9.8\\\\&=490 \\mathrm{~N}\n\t\\intertext{Since the centrifugal force due to the spinning of the earth is very small compared to the gravitational force, we do not feel the rotation of the earth.}\n\t\\end{align*}\n\\end{answer}\n\\item A person is standing at the edge of a disc of radius $\\mathrm{R}$. The disc is rotating about its axis with uniform angular velocity $\\omega$. The person throws a stone in the radially outward direction with speed $\\frac{\\omega R}{2}$ relative to the disc. Calculate the acceleration of the stone as seen by the person soon after throwing (neglect gravity).\n\\begin{answer} $\\left. \\right. $\\\\\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3cm,width=4cm]{fictitious pset-3 1}\n\t\\end{figure}\n\t\\begin{align*}\n\t\\intertext{As seen by the person, the stone is acted upon by the centrifugal and Coriolis forces. The direction of the two forces soon after throwing is shown in the figure. 
The net force on the stone is}\n\tF^{\\prime}&=\\sqrt{F_{c o r}^{2}+F_{c f}^{2}} \\\\\n\t&=\\sqrt{\\left|2 m \\vec{v}^{\\prime} \\times \\vec{\\omega}\\right|^{2}+\\left(m \\omega^{2} r_{\\perp}\\right)^{2}}\\\\&=\\sqrt{\\left(2 m \\frac{\\omega R}{2} \\omega \\sin 90^{\\circ}\\right)^{2}+ (m \\omega^{2}R )^{2}} \\\\\n\tF^{\\prime}&=\\sqrt{2} m \\omega^{2} R \n\t\\intertext{Then the acceleration,}\n\ta^{\\prime}&=\\frac{F^{\\prime}}{m}=\\sqrt{2} \\omega^{2} R\n\t\\end{align*}\n\\end{answer}\n\\end{enumerate}", "meta": {"hexsha": "e21abea3b0363508255b6a66cf2f19a3fc7d7cf2", "size": 24321, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Classical Mechanics  -CSIR/chapter/Non Inertial frames and Fictitious force.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Classical Mechanics  -CSIR/chapter/Non Inertial frames and Fictitious force.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Classical Mechanics  -CSIR/chapter/Non Inertial frames and Fictitious force.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.8006644518, "max_line_length": 849, "alphanum_fraction": 0.6952427943, "num_tokens": 7954, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8267118004748678, "lm_q1q2_score": 0.5992588209561083}}
{"text": "\\chapter{Random Number Generation}\nIt is obvious that the aspect of randomness is a crucial factor in procedural content generation. Random numbers are required for countless reasons. They can be used to pick a random element in a set or to assign a random value to a variable. Generating random numbers is a concept that has been in the field of research for many years and, although solutions exist, it remains a delicate task. \n\nIn most cases a pseudo-random number generator (PRNG) can be used to generate numbers, although these do not provide a true random result. The term ``pseudo-random\" is used to define a sequence of deterministic generated numbers with a distribution that closely resembles that of random variable. Because of the fact that random numbers generated by a PRNG are defined by a mathematical formula, poor PRNG algorithms suffer from several issues such as predictability and dependent distributions \\citep{frieze_lcg}.\n\n\\section{Linear Congruential Generators}\nLinear congruential generators (LCG), proposed several decades ago by D. H. Lehmer, are amongst the oldest and most common PRNG \\citep{park_miller}. Requiring nothing but a single variable and a simple formula for generating numbers, they are considered very memory and time-efficient. Their space and time complexity are of order $\\bigo{1}$ \\citep{Knuth_art2}.\n\nAs direct result of their simplicity and efficiency, LCGs are often used in certain applications such as simulations where a satisfactory result that ``looks good\" is required. However they present the disadvantage of producing predictable and dependent variables \\citep{frieze_lcg}.\n\n\\subsection{Recurrence relation}\nLinear congruential generators are mathematically described by the following recurrence relation:\n\n\\[ X_{n+1} = (a X_{n} + c) \\modulo M \\]\n\nThis recursive formula is used to generate a sequence of numbers $\\{X_0, X_1, X_2,...\\}$ with a seemingly random uniform distribution \\citep{Knuth_art2} where each variable represents the following (as explained by \\citep{hamid_lcgSecurity}):\n\\begin{itemize}\n\\item $X_{n+1}$ is the next ``random\" number in the sequence $\\{X_0, X_1, X_2,...\\}$. It is referred to as \\texttt{next()} in some implementations such as .NET's \\texttt{System.Random} class \\citep{System.Random} or Java's \\texttt{java.util.Random} class \\citep{java.util.Random}.\n\\item $M$ is called the {\\em modulus}, generally a large prime number \\citep{park_miller}\n\\item $a$ is called the {\\em multiplier}. $0<a<M$\n\\item $c$ is called the {\\em increment}. $0 \\leq c<M$\n\\item The operation $ \\modulo $ is called the modulo and defines the remainder of an integer-division.\n\\end{itemize}\n\n\\subsubsection{Seed}\nAs with all recurrence relations, the formula requires a starting point. In other words the formula should be defined as the following:\n\n\\begin{align}\n X_{0} &= k \\nonumber \\\\\n X_{n+1} &= (a X_{n} + c) \\modulo M \\nonumber\n\\end{align} \n\nWhere $k$ is a constant. The initial value $X_0$ is called the {\\em seed} of the generator \\citep{hamid_lcgSecurity}. In some implementations such as the  \\texttt{java.util.Random} class the variable $X_n$ is also called the seed \\citep{java.util.Random}, although generally speaking the term {\\em seed} refers to $X_0$. \n\nIt is trivial yet important to note that LCGs are ironically deterministic. 
Two identical LCGs (meaning the values for $a$, $c$ and $M$ are the same) will give exactly the same sequence of numbers if the seed is the same for both LCGs. This is a key concept in PRNG algorithms. Any program that utilises a pseudo-random number generator will provide exactly the same result if all given inputs (including the seed) are the same \\citep{Knuth_art2}.\n\nThis is an obvious problem since the aim of any PRNG is to mimic the aspect of non-determinism. In order to obtain a different result we require a different seed. Typically, whenever a random seed is needed the computer's clock time can be used as the seed, because it is practically unique to each run. Seeding from the clock breaks the determinism of the LCG across runs and provides a more unpredictable result \\citep{Knuth_art2}.\n\n\\subsubsection{Period}\nAnother problem with LCGs is that the sequence produced is essentially always the same even when using a different seed. This issue is easiest to explain with the use of an example. \n\n\\begin{itemize}\n\\item Let us consider two generators $L_X$ and $L_Y$ that differ only by their seeds.\n\\item Let $X_0$ and $Y_0$ be the seeds of $L_X$ and $L_Y$ respectively.\n\\item Let $a = 3,\\;c = 1,\\;M = 7$. These values for $a$, $c$ and $M$ are most certainly non-ideal but are used only for the purpose of demonstration.\n\\end{itemize}\n\nWe can calculate the numbers generated by the first LCG $L_X$ when its seed is $X_0 = 1$ for instance:\n\n$$ \\{X_0, X_1, X_2, ...\\} = \\{1,\\,4,\\,6,\\,5,\\,2,\\,0,\\,1,\\,4,\\,6,\\,5,\\,2,\\,0,\\,1,\\,4,\\,...\\} $$ \n\nWe can do the same for the second LCG $L_Y$ with a different seed $Y_0 = 5$:\n\n$$ \\{Y_0, Y_1, Y_2, ...\\} = \\{5,\\,2,\\,0,\\,1,\\,4,\\,6,\\,5,\\,2,\\,0,\\,1,\\,4,\\,6,\\,5,\\,2,\\,...\\} $$ \n\nHere we notice two things:\n\\begin{itemize}\n\\item The infinite sequence produced by $L_Y$ is nothing more than the sequence from $L_X$ with an offset. In mathematical terms: \n\\\\ $ \\{X_0, X_1, X_2, ...\\} = \\{Y_3, Y_4, Y_5, ...\\} $\n\\item The infinite sequence produced by both $L_X$ and $L_Y$ is simply an infinite repetition of a finite sequence\n\\end{itemize}\nBoth of these observations are due to the mathematical determinism of the method. In other words, given a certain input to the LCG, say $X=1$, the result will always be 4.\n\n\\subsection{Advantages and Disadvantages}\n\\paragraph{Advantages} LCGs are famous for their efficiency and simplicity. They are easy to understand and implement. Requiring nothing more than a few simple operations such as bit-shifts and additions, they are computationally cheap \\citep{wang_review} and can require as little as a single 16-bit word of storage \\citep{yu_random}. Pierre L'Ecuyer has shown that larger word sizes can be used to provide better results \\citep{lecuyer_tables}. Regardless of the word size used, a single word of storage is negligible on modern machines.\n\n\\paragraph{Disadvantages} \nAs demonstrated by Pierre L'Ecuyer, LCGs can give satisfying results in certain applications such as simulations \\citep{lecuyer_lcg_k1}. However, the quality of the pseudo-random numbers is very sensitive to the values assigned to the parameters $a$, $c$ and $M$. Bad choices for these parameters have historically led to LCGs with inadequate results \\citep{li_ooRandom}. Finding good coefficients for a LCG is very difficult and has been in the field of research for many years \\citep{park_miller}. 
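This sensitivity is easy to observe with the \\texttt{lcg} sketch from earlier: the deliberately poor toy parameters of the period example repeat after only six outputs, and a different seed merely shifts the same cycle.\n\n\\begin{verbatim}\nfrom itertools import islice\n\n# Toy parameters from the period example: a = 3, c = 1, M = 7\nprint(list(islice(lcg(1, 3, 1, 7), 8)))  # [1, 4, 6, 5, 2, 0, 1, 4]\nprint(list(islice(lcg(5, 3, 1, 7), 8)))  # [5, 2, 0, 1, 4, 6, 5, 2]\n\\end{verbatim}\n\n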
Generally in most applications, a known and reputable LCG will be used instead of innovating a new one. For instance, the LCG used in Java's \\texttt{java.util.Random} class is based on the algorithm discussed by Donald E. Knuth in {\\em The Art of Computer Programming: Volume 2} \\citep{Knuth_art2}.\n\nAside from the concern for the quality of the result, LCGs also raise a major concern regarding security. LCGs are notorious for their predictability even if the coefficients provide decent results \\citep{hamid_lcgSecurity}. Plumstead has shown that the multiplier $a$ and modulus $M$ can be inferred when a certain number of pseudo-random bits are revealed \\citep{plumstead_predicting}. Therefore, using merely a fragment of a LCG's output, it is possible to predict the sequence of pseudo-random bits without knowing the coefficients $a$ and $M$ \\citep{frieze_lcg}. Even worse, once the modulus becomes known we can predict the output with almost perfect certainty as demonstrated by Jacques Stern \\citep{Stern_lcgInsecure}. Needless to say, this is a big concern for security because a high degree of unpredictability is required. Therefore LCGs are not recommended for use in cryptography. Some suitable alternatives derived from LCGs can be used for cryptography. An example is the Blum Blum Shub algorithm \\citep{BlumBlumShub-1}.\n\n\\subsection{Constraints for good LCGs}\nThe aim of a number generator is to provide a sequence of numbers with good statistical properties and randomness \\citep{hamid_lcgSecurity}. Naturally good randomness is required for simulation, but it has been shown by \\citep{Knuth_decipher} and \\citep{plumstead_predicting} that a LCG is more difficult to decipher if its output possesses good statistical randomness. This makes statistical randomness the key element when designing a number generator.\n\nAnother essential feature for LCGs is the concept of a full period. A full period means that all values $0\\leq X<M$ are generated once before the LCG returns to its initial state (i.e. it re-generates the initial seed) \\citep{Knuth_art2}. Our previous example with the coefficients $a = 3$, $c = 1$, $M = 7$, $X_0=1$ is not a full-period LCG because the period generated does not include $3$:\n\\[\\{1,\\,4,\\,6,\\,5,\\,2,\\,0,\\,1,...\\}\\]\nA full-period LCG with a modulus $M=7$ would generate a period that includes all the numbers $\\{0,\\,1,\\,...,\\,6\\}$. Donald Knuth has stated that a LCG has a full period if and only if \\citep{Knuth_art2}:\n\\begin{itemize}\n\\item The coefficients $c$ and $M$ are {\\em relatively prime}, meaning they have no common divisors other than 1.\n\\item $a-1$ is divisible by all the prime factors of $M$.\n\\item If $M$ is a multiple of 4 then $a-1$ is so too.\n\\end{itemize}\nHowever, full-period LCGs are not guaranteed to provide good results. Park and Miller demonstrated this using two similar number generators, both of which provide full-period sequences. These generators were given by the formulas \\((6 X \\modulo 13)\\) and \\((7 X \\modulo 13)\\). However the former provided better statistical randomness, making it superior to the latter \\citep{park_miller}.\n\n\n\\subsection{Examples of good and bad LCGs}\nPark and Miller suggested a good example of a LCG that provides adequate results. They refer to this LCG as the {\\em minimal standard generator} \\citep{park_miller}. This LCG uses the coefficients $M = 2^{31}-1$, $a = 7^5$ and $c = 0$. 
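As a sketch (again for illustration rather than as a drop-in replacement for a vetted library; the name \\texttt{minimal\\_standard} is ours), this generator is simply a special case of the recurrence:\n\n\\begin{verbatim}\n# Minimal standard generator: a = 7**5 = 16807, c = 0, M = 2**31 - 1\n# Since c = 0, the seed must satisfy 0 < seed < 2**31 - 1.\ndef minimal_standard(seed):\n    x = seed\n    while True:\n        x = (16807 * x) % (2**31 - 1)\n        yield x\n\\end{verbatim}\n\n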
They claim that this LCG possesses both the full-period and statistical randomness properties. Additionally, they state that this LCG is easy to implement on nearly every computer system \\citep{park_miller}.\n\n\\begin{quotation}\n``We present the rationale for our choice of a minimal standard generator. We believe that this is the generator that should always be used -- unless one has access to a random number generator known to be better.\" \\citep{park_miller}\n\\end{quotation}\n\nUnfortunately good LCGs are difficult to find and hence there are many LCGs that suffer from certain defects \\citep{park_miller}. Notably, any number generator with a modulus of the form $M=2^n$ will suffer from an effect where the lower-order bits are far less random than the higher-order bits, which causes problems. For instance, the full-period mixed generator on UNIX platforms called \\texttt{rand()} suffers from an issue where the distribution of the lower bits is non-random \\citep{park_miller}. The generator \\texttt{rand()} used the parameters $a=1103515245$, $c=12345$ and $M=2^{31}$ \\citep{park_miller}.\n\nAnother poor example of a LCG which suffers from the same effect that causes a defect in \\texttt{rand()} is the random number generator in Turbo Pascal. The parameters used are $a=129$, $c=907633385$ and $M=2^{32}$, which is claimed to be even worse than the function \\texttt{rand()} because the multiplier is simply too small \\citep{park_miller}.\n\nThe worst example of any LCG is probably the random generator known as RANDU. The coefficients are given by $a=2^{16}+ 3$, $c=0$ and $M=2^{31}$. These coefficients were chosen because they made the implementation of the algorithm very simple, and the generator quickly became widespread \\citep{park_miller}. However RANDU does not have a full period and the numbers generated have a distribution which is clearly non-random \\citep{park_miller}.\n\n\\begin{quotation}\n``...its very name RANDU is enough to bring dismay into the eyes and stomachs of many computer scientists!\" \\citep{Knuth_art2}\n\\end{quotation}\n\n\\section{Alternatives}\n\\subsection{Coupled Linear Congruential Generators}\nIn the attempt to make a PRNG that is efficient and cryptographically secure, Katti et al. suggested the use of a Coupled Linear Congruential Generator (CLCG). This consists of using two LCGs with slightly different arguments to generate a sequence of bits. They show that breaking a CLCG is of complexity $\\bigo{2^{2n}}$ where $n$ is the number of bits in the modulus $M$, which they consider ``moderate security\". However the authors confess that not all CLCGs pass the statistical tests of randomness, but they propose a modified version of CLCGs called the dual-coupled LCG, designed to provide statistically satisfying results \\citep{katti_coupledCG2}.\n\n\\subsection{Mersenne twister}\nAlthough LCGs generally provide adequate results, they present major flaws in certain applications. One typical example is the Monte Carlo simulation, which consists of repeatedly sampling data and testing them. Good statistical randomness is required in order for this simulation to work properly; LCGs simply do not have a level of randomness that is sufficient for a Monte Carlo simulation \\citep{Itan-MonteCarlo}. In 1997, Makoto Matsumoto and Takuji Nishimura conceived a pseudo-random number generator known as the Mersenne twister. This method is optimised to provide statistical randomness and passes most statistical tests \\citep{DBLP:journals/tomacs/MatsumotoN98}. 
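The Mersenne twister is also widely deployed; at the time of writing it is, for instance, the core generator behind Python's standard \\texttt{random} module, so a seeded run is reproducible:\n\n\\begin{verbatim}\nimport random\n\nrandom.seed(1234)       # initialises the Mersenne Twister state\nprint(random.random())  # same output on every run with this seed\n\\end{verbatim}\n\n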
However, this number generator on its own remains cryptographically insecure and shouldn't be used for this purpose, although it is possible to modify the algorithm to make it safe \\citep{DBLP:journals/iacr/MatsumotoNHS05}.\n\n\n\\subsection{Blum Blum Shub}\nPRNG algorithms such as the LCG and the Mersenne twister are fast and provide decent results. However, they are not cryptographically secure and are predictable. When considering randomness for security purposes, one must consider a different algorithm such as the {\\em Blum Blum Shub} algorithm.\n\nThe name of the Blum Blum Shub algorithm is derived from its three creators, Lenore Blum, Manuel Blum, and Michael Shub \\citep{BlumBlumShub-1}. The formula used to generate pseudo-random bits is given as:\n\n\\[x_{n+1} = x^2_n \\modulo M \\]\n\nwhere $M$ is the product of two large prime numbers $p$ and $q$. The seed \\(x_0 \\notin \\{0,1\\} \\) should be co-prime to $M = pq$ \\citep{BlumBlumShub-2}.\n\nThis number generator is particularly useful in the field of cryptography because of its unpredictability. Predicting the output of the algorithm is possible, but it is a problem that is {\\em computationally difficult} to solve. The problem is at least as difficult as factorising $M$, which is widely believed to be computationally intractable \\citep{BlumBlumShub-1}.\n\n\\pagebreak", "meta": {"hexsha": "503ceaff4ddbb4df83b8b6588fe336d52e5d71f3", "size": 14823, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Essay/Dissert/numbergen.tex", "max_stars_repo_name": "olegat/uea-dissertation", "max_stars_repo_head_hexsha": "3768908aa172715278dc9307b99361746fb38f2e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Essay/Dissert/numbergen.tex", "max_issues_repo_name": "olegat/uea-dissertation", "max_issues_repo_head_hexsha": "3768908aa172715278dc9307b99361746fb38f2e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Essay/Dissert/numbergen.tex", "max_forks_repo_name": "olegat/uea-dissertation", "max_forks_repo_head_hexsha": "3768908aa172715278dc9307b99361746fb38f2e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 121.5, "max_line_length": 1039, "alphanum_fraction": 0.773527626, "num_tokens": 3766, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117855317473, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5992588101242842}}
{"text": "\\documentclass[acmsmall,nonacm]{acmart}\n\\acmConference[]{}{}{}\n\n\\usepackage{minted}\n\\newmintinline[agda]{agda}{escapeinside=<>,mathescape=true}\n\n\\newcommand{\\\u03b1}{\\alpha}\n\\newcommand{\\\u03b2}{\\beta}\n\\newcommand{\\\u03b3}{\\gamma}\n\\newcommand{\\\u03c3}{\\sigma}\n\\newcommand{\\\u0393}{\\Gamma}\n\\newcommand{\\\u039b}{\\Lambda}\n\\newcommand{\\\u2200}{\\forall}\n\\newcommand{\\\u2191}{\\uparrow}\n\\newcommand{\\\u22a2}{\\vdash}\n\\newcommand{\\\u21a6}{\\mapsto}\n\\newcommand{\\\u22b3}{\\rhd}\n\\newcommand{\\fix}{\\textrm{fix}}\n\n% Each submission (referred to as \u201cabstract\u201d below) should include the student author\u2019s name and e-mail address;\n% institutional affiliation; research advisor\u2019s name; ACM student member number; category (undergraduate or graduate)\n\\author{Jonathan Chan}\n\\title{A Syntactic Model of Sized Dependent Types}\n\n\\begin{document}\n\\maketitle\n\n\\section{???}\n\nThe types-as-propositions paradigm associates certain type theories with formal logical systems,\nand consequently types in those theories with propositions in those logics.\nFurthermore, well-typed programs are associated with proofs of the corresponding proposition.\nMany dependent type theories, for instance, correspond to higher-order logics,\nand having an automated type checker means having the ability to automatically verify proofs.\nOne must be careful, however, not to allow nonterminating programs,\nbecause they correspond to logical inconsistencies, i.e. proofs of falsehood;\nadditionally, in dependent type checkers where programs may be evaluated during type checking,\nfailure to rule out nonterminating programs leads to nonterminating type checking.\nContemporary proof assistants based on dependent type theories, such as Coq, Agda, Lean, Idris,\nand more, typically restrict recursive functions to \\emph{structurally-recursive} ones,\nwhere the argument of recursive calls must be \\emph{syntactically} smaller.\nAs an example, consider the following function \\agda{minus n m} that computes $\\min(n - m, 0)$.\n\n\\begin{minted}{agda}\nminus : Nat \u2192 Nat \u2192 Nat\nminus Zero m = Zero\nminus n Zero = n\nminus (Succ n) (Succ m) = minus n m\n\\end{minted}\n\nInuitively, any well-typed call to \\agda{minus} is guaranteed to terminate because the recursive call\nonly occurs on constructor arguments, and the base cases must be reached eventually.\nHowever, we often wish to write terminating functions that aren't necessarily syntactically recursive.\nConsider now the following function \\agda{div n m} that computes $\\big\\lceil\\frac{n}{m+1}\\big\\rceil$ using \\agda{minus}.\n\n\\begin{minted}{agda}\ndiv : Nat \u2192 Nat \u2192 Nat\ndiv Zero m = Zero\ndiv (Succ n) m = Succ (div (minus n m) m)\n\\end{minted}\n\nDespite the fact that \\agda{minus} returns a natural no greater than its first argument,\nmeaning that the first argument of the recursive call to \\agda{div} has a smaller magnitude\nthan the original argument,\nthe type checker is unable to conclude that this is a terminating function,\nbecause the first argument isn't \\emph{syntactically} smaller (i.e. 
\\agda{n}).\nThe programmer would have to alter the definition of \\agda{minus} to prove this property,\nas well as the definition of \\agda{div} to use this proof, making writing otherwise simple code burdensome.\nSometimes it would be possible to make the type checker a little more clever by selectively inlining\ncertain functions that allow the syntactic termination check to pass,\nbut this is not always possible, and requiring inlining just to pass termination checking is anti-modular.\nIf inlining \\agda{minus} in the body of \\agda{div} helped, then we are forced to import its entire definition,\nrather than merely its name and type, as is only required for type checking alone.\n\n%\\paragraph*{}\nA \\emph{type-based} rather than syntactic method of termination checking uses \\emph{sized types} (Hughes 1996),\nwhere a function is guaranteed to terminate simply if it type checks.\nTo use sized types, we first alter our definition of naturals to take a size expression as a parameter.\n\n\\begin{minted}[escapeinside=<>,mathescape=true]{agda}\ndata Nat [<$\\\u03b1$>] : Type where\n  Zero : <$\\\u2200 \\\u03b2 < \\\u03b1$>. Nat [<$\\\u03b1$>]\n  Succ : <$\\\u2200 \\\u03b2 < \\\u03b1$>. Nat [<$\\\u03b2$>] \u2192 Nat [<$\\\u03b1$>]\n\\end{minted}\n\nThe bounded size quantification \\agda{<$\\\u2200\\\u03b2 < \\\u03b1$>} asks for some size $\\\u03b2$ strictly smaller than $\\\u03b1$.\nSo to construct the successor of some natural \\agda{n : Nat [s]}, we need a size larger than \\agda{s}.\nLet \\agda{<$\\\u2191$>s} denote the next size up from \\agda{s}.\nThen the successor is simply \\agda{Succ {<$\\\u2191$>s} [s] n} (using curly braces for the implicit size parameter).\nNow, let us express the fact that \\agda{minus} returns a natural no larger than its first argument using sizes,\ni.e. that \\agda{minus} is in fact \\emph{size-preserving}.\n\n\\begin{minted}[escapeinside=<>,mathescape=true]{agda}\nminus : <$\\\u2200\\\u03b1$>, <$\\\u03b2$>. Nat [<$\\\u03b1$>] \u2192 Nat [<$\\\u03b2$>] \u2192 Nat [<$\\\u03b1$>]\n\\end{minted}\n\nThen we are able to write a \\agda{div} function that the type checker will accept as terminating\nwith just a few more annotations.\n\n\\begin{minted}[escapeinside=<>,mathescape=true]{agda}\ndiv : <$\\\u2200\\\u03b1$>, <$\\\u03b2$>. 
Nat [<$\\\u03b1$>] \u2192 Nat [<$\\\u03b2$>] \u2192 Nat [<$\\\u03b1$>]\ndiv [<$\\\u03b1$>] [<$\\\u03b2$>] (Zero {<$\\\u03b1$>} [<$\\\u03b3$>])   m = Zero {<$\\\u03b1$>} [<$\\\u03b3$>]\ndiv [<$\\\u03b1$>] [<$\\\u03b2$>] (Succ {<$\\\u03b1$>} [<$\\\u03b3$>] n) m = Succ {<$\\\u03b1$>} [<$\\\u03b3$>] (div [<$\\\u03b3$>] [<$\\\u03b2$>] (minus [<$\\\u03b3$>] [<$\\\u03b2$>] n m) m)\n\\end{minted}\n\nIn the second branch, when matching on the first argument of size $\\\u03b1$, we have the successor of \\agda{n},\nwhich itself has size $\\\u03b3 < \\\u03b1$.\nThen the call to \\agda{minus} returns a natural of size $\\\u03b3$, which is then passed to the recursive call of \\agda{div}.\nType checking passes without problem because the size $\\\u03b3$ of the recursive call is strictly smaller\nthan the size $\\\u03b1$ of the original call to \\agda{div}.\nMore precisely, when expressing recursive functions as fixpoints,\ntheir typing rule and reduction behaviour are the following\n(using $e[x \\\u21a6 e']$ to denote substitution of $x$ by $e'$ in $e$):\n\\begin{align*}\n&\\\u0393, \\\u03b1, f : \\\u2200\\\u03b2 < \\\u03b1 \\mathpunct{.} \\\u03c3[\\\u03b1 \\\u21a6 \\\u03b2] \\\u22a2 e : \\\u03c3 \\\\[-2\\jot]\n&\\cline{1-2}\n&\\\u0393 \\\u22a2 \\fix \\, f \\, [\\\u03b1] : \\\u03c3 \\coloneqq e : \\\u2200\\\u03b1 \\mathpunct{.} \\\u03c3\n\\end{align*}\n\\begin{align*}\n(\\fix \\, f \\, [\\\u03b1] : \\\u03c3 \\coloneqq e) \\: [s] \\: e' &\\\u22b3 e[\\\u03b1 \\\u21a6 s, f \\\u21a6 \\\u039b\\\u03b2 < s \\mathpunct{.} (\\fix \\, f \\, [\\\u03b1] : \\\u03c3 \\coloneqq e) [\\\u03b2]] \\: e'\n\\end{align*}\n\nA fixpoint is well-typed if the body has the same type under the environment where\nthe recursive reference to the fixpoint quantifies over a \\emph{smaller} size.\nWhen reducing the fixpoint applied to some size, we substitute in the body the fixpoint bounded by that size.\n\n\\section{???}\n\nOnce we have a program and correctness proofs about it, we may wish to compile and run the program.\nIn Coq and Agda, programs get \\emph{extracted} to OCaml and Haskell code, respectively, which are then compiled.\nHowever, the process of extraction necessarily discards some type information,\nsince the extraction targets are not dependently typed.\nThis makes it possible to link our proven-safe program after extraction with unsafe code,\nwhich might cause runtime errors, making all of the effort spent proving it correct futile.\nTo ensure that our proofs are preserved and verified even during linking,\nwe want to compile our code in a \\emph{type-preserving} manner (Bowman, 2018).\nThe type-based nature of sized types makes it more amenable than syntactic termination checking\nto type-preserving compilation of recursive functions.\n\nTraditionally in dependent type theories, there are two mutually-dependent judgements:\nthe typing judgement, and the (typed) equality judgement.\nEquality is used in the following typing rule:\n\\end{document}", "meta": {"hexsha": "e02846bca85a5a5d9689ab965cb1c26efd5a3700", "size": 7554, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "SRC/SRC (bad).tex", "max_stars_repo_name": "ionathanch/msc-thesis", "max_stars_repo_head_hexsha": "8fe15af8f9b5021dc50bcf96665e0988abf28f3c", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SRC/SRC (bad).tex", "max_issues_repo_name": "ionathanch/msc-thesis", "max_issues_repo_head_hexsha": 
"8fe15af8f9b5021dc50bcf96665e0988abf28f3c", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SRC/SRC (bad).tex", "max_forks_repo_name": "ionathanch/msc-thesis", "max_forks_repo_head_hexsha": "8fe15af8f9b5021dc50bcf96665e0988abf28f3c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.8251748252, "max_line_length": 141, "alphanum_fraction": 0.7266348954, "num_tokens": 2152, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117983401363, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.599258809581454}}
{"text": "\\chapter{Deriving other gates from universal gates}\n%\\ref{sec:background}.\n\n\\section{Aim}\n%\\label{sec:objectives}\n\tTo implement the logic functions i.e. AND, OR, NOT, Ex-OR, Ex- NOR and a logical expression with the help of NAND and NOR universal gates respectively.\n\n\\section{Apparatus}\n%\\label{sec:objectives}\n\t\\begin{itemize}\n\t\t\\tightlist\n\t\t\\item Kit for realization of NAND and NOR gates\n\t\t\\item Connecting Leads\n\t\\end{itemize}\n\n\\section{Theory}\n\tLogic gates are electronic circuits which perform logical functions on one or more inputs to produce one output. There are seven logic gates. When all the input combinations of a logic gate are written in a series and their corrresponding outputs written along them, then this input/ output combination is called Truth Table.\n\t\n\tAND, OR, NOT, XOR and XNOR gates can be derived from NAND and NOR gates by converting their respective equations to NAND or NOR form, using he following 3 laws of Boolean Algebra (and their duals according to Duality Principle):\n\t\\begin{enumerate}\n\t\t\\tightlist\n\t\t\\item Involution Law: $\\overline{(\\overline{X})} = X$\n\t\t\\item Idempotent Law: $X + X = X$\n\t\t\\item De-Morgan's Law: $\\overline{X+Y} = \\overline{X} . \\overline{Y}$\n\t\\end{enumerate}\n\t\n\t\\subsection{NAND gate}\n\t\tNAND gate is actually a combination of two logic gates i.e. AND gate followed by NOT gate. So its output is complement of the output of an AND gate.This gate can have minimum two inputs. By using only NAND gates, we can realize all logic functions: AND, OR, NOT, Ex-OR, Ex-NOR, NOR. So this gate is also called as universal gate.\n\t\tThe expression for NAND gate is:\n\t\t$$Y = \\overline{A.B}$$\n\t\\subsection{NOR gate}\n\t\tNOR gate is actually a combination of two logic gates: OR gate followed by NOT gate. So its output is complement of the output of an OR gate.This gate can have minimum two inputs, output is always one. By using only NOR gates, we can realize all logic functions: AND, OR, NOT, Ex-OR, Ex-NOR, NAND. 
So this gate is also called a universal gate.\n\t\tThe expression for the NOR gate is:\n\t\t$$Y = \\overline{A+B}$$\n\n\\section{Circuits}\n\t\\subsection{NAND gate as Universal gate}\n\t\t\\subsubsection{NAND gates as OR gate}\t\t\t\n\t\t\tFrom DeMorgan\u2019s theorems:\n\t\t\t\\begin{align*}\n\t\t\t\t(A.B)^\\prime &= A^\\prime + B^\\prime \\\\\n\t\t\t\t(A^\\prime .B^\\prime)^\\prime &= A^{\\prime\\prime} + B^{\\prime\\prime} \\\\\n\t\t\t\t&= A + B\n\t\t\t\\end{align*}\n\t\t\tSo, if we give inverted inputs to a NAND gate, we obtain the OR operation at the output.\n\t\t\t\\begin{figure}[ht]\n\t\t\t\t\\centering\n\t\t\t\t\\subfloat[OR gate using NAND gates]{\n\t\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig1}\n\t\t\t\t\t\\label{fig:nand_to_or:1}\n\t\t\t\t}\n\t\t\t\t\\hfill\n\t\t\t\t\\subfloat[Truth Table for OR]\n\t\t\t\t{\n\t\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t$A$ & $B$ & $Y=A + B$ \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 1 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 0 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\label{fig:nand_to_or:2}\n\t\t\t\t}\n\t\t\t\t\\caption{\\textit{OR Gate}}\n\t\t\t\\end{figure}\t\t\t\n\t\n\t\t\\subsubsection{NAND gates as AND gate}\n\t\t\tFrom DeMorgan\u2019s theorems:\n\t\t\t\\begin{align*}\n\t\t\t\tY &= ((A.B)^\\prime)^\\prime \\\\\n\t\t\t\tY &= (A.B)\t\t\t\t\n\t\t\t\\end{align*}\n\t\t\tA NAND gate produces the complement of an AND gate. So, if the output of a NAND gate is inverted, the overall output will be that of an AND gate.\n\t\t\t\\begin{figure}[ht]\n\t\t\t\t\\centering\n\t\t\t\t\\subfloat[AND gate using NAND gates]{\n\t\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig2}\n\t\t\t\t\t\\label{fig:nand_to_and:1}\n\t\t\t\t}\n\t\t\t\t\\hfill\n\t\t\t\t\\subfloat[Truth Table for AND]\n\t\t\t\t{\n\t\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t$A$ & $B$ & $Y=A . B$ \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 1 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 0 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\label{fig:nand_to_and:2}\n\t\t\t\t}\n\t\t\t\t\\caption{\\textit{AND Gate}}\n\t\t\t\\end{figure}\n\t\t\t\n\t\t\\subsubsection{NAND gates as XOR gate}\n\t\t\tFrom DeMorgan\u2019s theorems:\n\t\t\t\\begin{align*}\n\t\t\t\tY &= A \\oplus B \\\\\n\t\t\t\t  &= \\overline{A}B + A\\overline{B} \\\\\n\t\t\t\t  &= \\overline{(\\overline{\\overline{A}B + A\\overline{B}})} \\\\\n\t\t\t\t  &= \\overline{(\\overline{(\\overline{A}B)} . \\overline{(A\\overline{B})})} \\\\\n\t\t\t\t  &= \\overline{(A+\\overline{B}) . 
(\\overline{A} + B)}\n\t\t\t\\end{align*}\n\t\t\tThis can be achieved with the logic diagram shown in Figure \\ref{fig:nand_to_xor:1}.\n\t\t\t\\begin{figure}[ht]\n\t\t\t\t\\centering\n\t\t\t\t\\subfloat[XOR gate using NAND gates]{\n\t\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig3}\n\t\t\t\t\t\\label{fig:nand_to_xor:1}\n\t\t\t\t}\n\t\t\t\t\\hfill\n\t\t\t\t\\subfloat[Truth Table for XOR]\n\t\t\t\t{\n\t\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t$A$ & $B$ & $Y=A \\oplus B$ \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 1 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 0 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 1 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\label{fig:nand_to_xor:2}\n\t\t\t\t}\n\t\t\t\t\\caption{\\textit{XOR Gate}}\n\t\t\t\\end{figure}\n\t\t\n\t\t\\subsubsection{NAND gates as XNOR gate}\n\t\t\tFrom DeMorgan\u2019s theorems:\n\t\t\t\\begin{align*}\n\t\t\t\tY &= \\overline{A \\oplus B} \\\\\n\t\t\t\t&= \\overline{(\\overline{A}B + A\\overline{B})} \\\\\n\t\t\t\t&= \\overline{(\\overline{(\\overline{\\overline{A}B + A\\overline{B}})})} \\\\\n\t\t\t\t&= \\overline{(\\overline{(\\overline{(\\overline{A}B)} . \\overline{(A\\overline{B})})})} \\\\\n\t\t\t\t&= \\overline{(\\overline{(A+\\overline{B}) . (\\overline{A} + B)})} \\\\\n\t\t\t\t&= \\overline{(\\text{XOR gate implemented using NAND gates})}\n\t\t\t\\end{align*}\n\t\t\tThe output of a two input XNOR gate is shown by: $Y = \\overline{\\overline{A}B + A\\overline{B}}$. This can be achieved with the logic diagram shown in Figure \\ref{fig:nand_to_xnor:1}.\n\t\t\t\\begin{figure}[ht]\n\t\t\t\t\\centering\n\t\t\t\t\\subfloat[XNOR gate using NAND gates]{\n\t\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig4}\n\t\t\t\t\t\\label{fig:nand_to_xnor:1}\n\t\t\t\t}\n\t\t\t\t\\hfill\n\t\t\t\t\\subfloat[Truth Table for XNOR]\n\t\t\t\t{\n\t\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t$A$ & $B$ & $Y=\\overline{A \\oplus B}$ \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 0 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t0 & 1 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 0 & 0 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\label{fig:nand_to_xnor:2}\n\t\t\t\t}\n\t\t\t\t\\caption{\\textit{XNOR Gate}}\n\t\t\t\\end{figure}\n\t\n\t\\subsection{NOR gate as Universal gate}\n\t\t\\subsubsection{NOR gates as OR gate}\t\t\t\n\t\tFrom DeMorgan\u2019s theorems:\n\t\t\\begin{align*}\n\t\t\tY &= \\overline{(\\overline{(A+B)})} \\\\\n\t\t\t  &= (A + B)\n\t\t\\end{align*}\n\t\tA NOR gate produces the complement of an OR gate. 
So, if the output of a NOR gate is inverted, the overall output will be that of an OR gate.\n\t\t\\begin{figure}[ht]\n\t\t\t\\centering\n\t\t\t\\subfloat[OR gate using NOR gates]{\n\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig5}\n\t\t\t\t\\label{fig:nor_to_or:1}\n\t\t\t}\n\t\t\t\\hfill\n\t\t\t\\subfloat[Truth Table for OR]\n\t\t\t{\n\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t$A$ & $B$ & $Y=A + B$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 1 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 0 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\label{fig:nor_to_or:2}\n\t\t\t}\n\t\t\t\\caption{\\textit{OR Gate}}\n\t\t\\end{figure}\t\t\t\n\t\t\n\t\t\\subsubsection{NOR gates as AND gate}\n\t\tFrom DeMorgan\u2019s theorems:\n\t\t\\begin{align*}\n\t\t\tY &= A.B \\\\\n\t\t\t  &= \\overline{(\\overline{A.B})} \\\\\n\t\t\t  &= \\overline{(\\overline{A} + \\overline{B})}\n\t\t\\end{align*}\n\t\tSo, if we give inverted inputs to a NOR gate, we obtain the AND operation at the output.\n\t\t\\begin{figure}[ht]\n\t\t\t\\centering\n\t\t\t\\subfloat[AND gate using NOR gates]{\n\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig6}\n\t\t\t\t\\label{fig:nor_to_and:1}\n\t\t\t}\n\t\t\t\\hfill\n\t\t\t\\subfloat[Truth Table for AND]\n\t\t\t{\n\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t$A$ & $B$ & $Y=A . B$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 1 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 0 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\label{fig:nor_to_and:2}\n\t\t\t}\n\t\t\t\\caption{\\textit{AND Gate}}\n\t\t\\end{figure}\n\t\t\n\t\t\\subsubsection{NOR gates as XOR gate}\n\t\tFrom DeMorgan\u2019s theorems:\n\t\t\\begin{align*}\n\t\t\tY &= A \\oplus B \\\\\n\t\t\t&= \\overline{A}B + A\\overline{B} \\\\\n\t\t\t&= \\overline{(\\overline{\\overline{A}B + A\\overline{B}})} \\\\\n\t\t\t&= \\overline{(\\overline{(\\overline{A}B)} . \\overline{(A\\overline{B})})} \\\\\n\t\t\t&= \\overline{(A+\\overline{B}) . (\\overline{A} + B)} \\\\\n\t\t\t&= \\overline{(A+\\overline{B})} + \\overline{(\\overline{A} + B)} \\\\\n\t\t\t&= M + N \\\\\n\t\t\t&= \\overline{(\\overline{M+N})} \\\\\n\t\t\t\\\\\n\t\t\tM = \\overline{(A+\\overline{B})} &, N = \\overline{(\\overline{A} + B)}\n\t\t\\end{align*}\n\t\tThe output of a two input Ex-OR gate is shown by: $Y = \\overline{A}B + A\\overline{B}$. 
This can be achieved with the logic diagram shown in Figure \\ref{fig:nor_to_xor:1}.\n\t\t\\begin{figure}[ht]\n\t\t\t\\centering\n\t\t\t\\subfloat[XOR gate using NOR gates]{\n\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig7}\n\t\t\t\t\\label{fig:nor_to_xor:1}\n\t\t\t}\n\t\t\t\\hfill\n\t\t\t\\subfloat[Truth Table for XOR]\n\t\t\t{\n\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t$A$ & $B$ & $Y=A \\oplus B$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 0 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 1 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 0 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 1 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\label{fig:nor_to_xor:2}\n\t\t\t}\n\t\t\t\\caption{\\textit{XOR Gate}}\n\t\t\\end{figure}\n\t\t\n\t\t\\subsubsection{NOR gates as XNOR gate}\n\t\tFrom DeMorgan\u2019s theorems:\n\t\t\\begin{align*}\n\t\t\tY &= \\overline{A \\oplus B} \\\\\n\t\t\t&= \\overline{\\overline{A}B + A\\overline{B}} \\\\\n\t\t\t&= \\overline{\\overline{(\\overline{\\overline{A}B + A\\overline{B}})}} \\\\\n\t\t\t&= \\overline{\\overline{(\\overline{(\\overline{A}B)} . \\overline{(A\\overline{B})})}} \\\\\n\t\t\t&= \\overline{\\overline{(A+\\overline{B}) . (\\overline{A} + B)}} \\\\\n\t\t\t&= \\overline{\\overline{(A+\\overline{B})} + \\overline{(\\overline{A} + B)}} \\\\\n\t\t\t&= \\overline{M + N} \\\\\n\t\t\t\\\\\n\t\t\tM = \\overline{(A+\\overline{B})} &, N = \\overline{(\\overline{A} + B)}\n\t\t\\end{align*}\n\t\tThe output of a two input Ex-NOR gate is shown by: $Y = \\overline{\\overline{A}B + A\\overline{B}}$. This can be achieved with the logic diagram shown in Figure \\ref{fig:nor_to_xnor:1}.\n\t\t\\begin{figure}[ht]\n\t\t\t\\centering\n\t\t\t\\subfloat[XNOR gate using NOR gates]{\n\t\t\t\t\\includegraphics[width=0.6\\textwidth,valign=c]{img/exp4/fig8}\n\t\t\t\t\\label{fig:nor_to_xnor:1}\n\t\t\t}\n\t\t\t\\hfill\n\t\t\t\\subfloat[Truth Table for XNOR]\n\t\t\t{\n\t\t\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\multicolumn{2}{|c|}{Input} & Output \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t$A$ & $B$ & $Y=\\overline{A \\oplus B}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 0 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t0 & 1 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 0 & 0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1 & 1 & 1 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\label{fig:nor_to_xnor:2}\n\t\t\t}\n\t\t\t\\caption{\\textit{XNOR Gate}}\n\t\t\\end{figure}\n\n\\section{Procedure}\n\t\\begin{enumerate}\n\t\t\\tightlist\n\t\t\\item Check the components for their working (NAND or NOR kit).\n\t\t\\item Insert the wire-leads into the appropriate places on the kit as per the circuits shown.\n\t\t\\item For output, use the LED on the provided kit.\n\t\t\\item Provide the input data using the input switches and observe the output on the LED.\n\t\t\\item Verify the truth tables of various gates.\n\t\\end{enumerate}\n\n\\section{Precautions}\n\t\\begin{enumerate}\n\t\t\\tightlist\n\t\t\\item The leads must be connected properly.\n\t\t\\item Wires must be connected only while the power supply is off.\n\t\t\\item Change the input switches only when the supply is off.\n\t\\end{enumerate}\n\n\\section{Result}\n\tNOT, AND, OR, XOR, XNOR gates can be realized using NAND and NOR gates.", "meta": {"hexsha": "f1606030e2cbf769e38fcd368b9470094fee3d42", "size": 11604, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Semester 3/DCLD_File/sections/4-exp4.tex", "max_stars_repo_name": 
"anhatsingh/Anhat_LATEX", "max_stars_repo_head_hexsha": "2d4601493b243949e9ba7a7abe59f59168e4d50a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Semester 3/DCLD_File/sections/4-exp4.tex", "max_issues_repo_name": "anhatsingh/Anhat_LATEX", "max_issues_repo_head_hexsha": "2d4601493b243949e9ba7a7abe59f59168e4d50a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Semester 3/DCLD_File/sections/4-exp4.tex", "max_forks_repo_name": "anhatsingh/Anhat_LATEX", "max_forks_repo_head_hexsha": "2d4601493b243949e9ba7a7abe59f59168e4d50a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.3621621622, "max_line_length": 343, "alphanum_fraction": 0.5977249224, "num_tokens": 4391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8267117919359419, "lm_q1q2_score": 0.5992588049392439}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\usepackage{graphicx}\n\\usepackage{tabularx}\n\\usepackage{multicol}\n\\usepackage{enumitem}\n\n\\usepackage[english]{babel}\n\\newtheorem{theorem}{Theorem}\n\n% Geometry \n\\usepackage{geometry}\n\\geometry{letterpaper, left=15mm, top=20mm, right=15mm, bottom=20mm}\n\n% Fancy Header\n\\usepackage{fancyhdr}\n\\renewcommand{\\footrulewidth}{0.4pt}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\chead{MAT 341 - Linear Algebra}\n\\lfoot{CALU Fall 2021}\n\\rfoot{RDK}\n\n% Add vertical spacing to tables\n\\renewcommand{\\arraystretch}{1.4}\n\n% Macros\n\\newcommand{\\definition}[1]{\\underline{\\textbf{#1}}}\n\n\\newenvironment{rcases}\n  {\\left.\\begin{aligned}}\n  {\\end{aligned}\\right\\rbrace}\n\n% Begin Document\n\\begin{document}\n\n\\section*{Section 1.4: The Matrix Equation}\n\n\\begin{itemize}\n\n  \\item If $A$ is an $m \\times n$, with columns $a_1, \\ldots, a_n$, and if $x$ is in $\\mathbb{R}^n$, then the product of $A$ and $x$, denoted by $Ax$, is the linear combination\n  of the columns of $A$ using the corresponding entries in $x$ as weights; that is:\n  \\begin{equation*}\n    Ax = \n    \\begin{bmatrix}\n      a_1 & a_2 & \\cdots & a_n\n    \\end{bmatrix}\n    \\begin{bmatrix}\n      x_1 \\\\ x_2 \\\\ \\cdots \\\\ x_n\n    \\end{bmatrix}\n    = x_1a_1 + x_2a_2 + \\cdots + x_na_n\n  \\end{equation*}\n\n  \\item $Ax$ is defined only if the number of columns of $A$ equals the number of entries in $x$.\n\n\\end{itemize}\n\n\\noindent\\fbox{\n  \\parbox{\\textwidth}{\n    \\begin{theorem}\n      If $A$ is an $m \\times n$ matrix, with columns $a_1, \\ldots, a_n$, and if $b$ is in $\\mathbb{R}^m$, then the matrix equation $Ax = b$ has the same solution set as the vector equation\n      \\begin{equation*}\n        x_1a_1 + x_2a_2 + \\cdots + x_na_n\n      \\end{equation*}\n      which, in turn, has the same solution set as the system of linear equations whose augmented matrix is \n      \\begin{equation*}\n        Ax = \\begin{bmatrix}\n          a_1 & a_2 & \\ldots & a_n & b\n        \\end{bmatrix}\n      \\end{equation*}\n    \\end{theorem}\n  }\n}\n\n\\begin{itemize}\n\n  \\item The equation $Ax = b$ has a solution if and only if $b$ is a linear combination of the columns of $A$.\n\n\\end{itemize}\n\n\\noindent\\fbox{\n  \\parbox{\\textwidth}{\n    \\begin{theorem}\n      Let $A$ be an $m \\times n$ matrix. Then the following statements are logically equivalent. 
That is, for a particular $A$, either they are all true statements or they are all false.\n      \\begin{enumerate}[label=\\alph*.)]\n        \\item For each $b$ in $\\mathbb{R}^m$, the equation $Ax = b$ has a solution.\n        \\item Each $b$ in $\\mathbb{R}^m$ is a linear combination of the columns of $A$.\n        \\item The columns of $A$ span $\\mathbb{R}^m$.\n        \\item $A$ has a pivot position in every row.\n      \\end{enumerate}\n    \\end{theorem}\n  }\n}\n\n\\begin{itemize}\n\n  \\item The matrix with $1$s on the diagonal and $0$s elsewhere is called the \\definition{identity matrix} and is denoted by $I$:\n  \\begin{equation*}\n    \\begin{bmatrix}\n      1 & 0 & 0 \\\\\n      0 & 1 & 0 \\\\\n      0 & 0 & 1\n    \\end{bmatrix}\n  \\end{equation*}\n\n\\end{itemize}\n\n\\noindent\\fbox{\n  \\parbox{\\textwidth}{\n    \\begin{theorem}\n      If $A$ is an $m \\times n$ matrix, $u$ and $v$ are vectors in $\\mathbb{R}^n$, and $c$ is a scalar, then \n      \\begin{enumerate}[label=\\alph*.)]\n        \\item $A(u + v) = Au + Av$\n        \\item $A(cu) = c(Au)$\n      \\end{enumerate}\n    \\end{theorem}\n  }\n}\n\n\n\n\\end{document}", "meta": {"hexsha": "50b32112866572929d283509ba784d7b914191e1", "size": 3399, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter 1/Section 4/notes.tex", "max_stars_repo_name": "Bkrenz/calu-mat341", "max_stars_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter 1/Section 4/notes.tex", "max_issues_repo_name": "Bkrenz/calu-mat341", "max_issues_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter 1/Section 4/notes.tex", "max_forks_repo_name": "Bkrenz/calu-mat341", "max_forks_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.192, "max_line_length": 188, "alphanum_fraction": 0.6395998823, "num_tokens": 1149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.8128673110375457, "lm_q1q2_score": 0.5992117734653557}}
{"text": "\\subsection{Hopfield Network}\n\n\\subsubsection{Implementation Example: Optical Information Processing}\nIn paper from D. Psaltis and N. Farhat(1985) \\cite{optical_processing}, threshold and feedback properties from Hopfield model was used in implementation of an optical system that processes optical information.\n\nEnhanced error-correcting capability is one important outcome which is benefited from Non-linearity of Hopfield model. With $M$ words of each binary vector $v_i$ with length of N bits, matrix $T_{ij}$ represents a storage of information. Matrix $T_{ij}$ gets multiplied by a stored binary vector $v'_i$ results in a \\texttt{pseudoeigensystem} if $N$ is sufficiently larger than $M$. This indicated that the output vector $v''_i$ equals the input.\n\nSupposed non-linear iterative procedural experiments of certain number of known bits $N_1$ and the rest bits set to zero inside total of $N$ bits long vector were addressed to discover under what conditions the number of correct bits $N_2$ in output will be higher than $N_1$. A SNR(signal-to-noise ratio) equation which consists ratio of the expected value to the standard deviation on the same output vector, was used. As the Hopfield model has studied on the convergence property with respect to asynchronous operations, insensitivity to imperfections(non-uniformities, exact form of the threshold operation and errors in $T_{ij}$ matrix) and correct convergence obtained with threshold $T_{ij}$. These all become most desired properties in optical implementation. Detailed optical implementation with 2D inputs was presented which was based on spatial-frequency multiplexing. Methods using Fourier transform, transmitting amplitude(weighted sum) and integral of the product of the input images were introduced in such an implementation. The robustness of a such system with non-linear feedback becomes the most important feature.\nAs a conclusion, the implementation with the capabilities and limitations of optical techniques matches excellently with the Hopfield model that requires global, linear operations and local, point non-linearities in a fully interconnected optical system.\n\n\\subsubsection{Memory Capacity with Modification: \\\\ replacing sigmoid neuron with a non-monotonic neuron}\nIn paper from S. Yoshizawa, M. Morita and S. Amari(1992) \\cite{capacity_of_nonmonotonic_model} it started with introduction of a new method by replacing sigmoid neuron with a non-monotonic neuron and discussed theoretically on potential of absolute capacity(the maximum number of randomly generated patterns which are memorized as the equilibria of the network with the correlation-type connection weights) to be of order $n$ (nearly equal to $0.4n$).\n\nPrevious memory capacities were briefly introduced that Hopfield(1982) model's associative memory capacity is $0.15n$, the proven result of absolute capacity is asymptotically $ \\frac{n}{2\\log{n}} $ (from McEliece, Posner, Rodemich, \\& Venkatesh, 1987; Weisbuch, 1985), and relative capacity(recalling process) is about 0.14n(with admission of small percent of errors) with replica method, but about $0.16n$ with a simply approximation method from respectively two different research work.\n\nVarious research suggestions were mentioned though all failed to deal completely with flaws of the conventional model, that both absolute and relative capacities are too small as well as the existence of large number of spurious memories. 
With a non-monotonic neuron the absolute capacity turned out to be $0.4n$, which is even greater than the relative capacity, while the spurious memories disappeared.\n\nWith a conventional neuron a memorized pattern is unstable, whereas with a non-monotonic neuron a basin of attraction around each memorized pattern was shown, by replacing the sigmoid function of the neuron elements with a non-monotonic output function (\\textit{figure \\ref{fig:nonmonotonic}}) in the recalling process of an autocorrelation associative memory. This was named the Morita model in the paper, though it is essentially an extension of the Hopfield model.\nBeyond the existence of equilibrium solutions and their local stability, the authors did not further investigate problems such as the following: the size of the basin of attraction, the full sketch of the spurious memories and the behaviour for clustered memorized patterns, which leaves more research work for future researchers to better understand associative memory with non-monotonic neurons.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[scale = 0.8]{inc/nonmonotonic.jpg}\n\t\\caption{The non-monotonic output function, S. Yoshizawa, M. Morita and S. Amari (1992)}\n\t\\label{fig:nonmonotonic}\n\\end{figure}\n\n%THIS IS STH EXTRA OUTSIDE HOPFIELD\n\\subsection{Experiment on Implicit Memory for Novel Associations between Pictures: \\\\ Effects of Stimulus Unitization and Aging}\n\nFrom various previous research, concepts such as associative priming, unitization, the difference between conceptual and perceptual associative priming, verbal versus pictorial material/stimuli and the role of spatial proximity were briefly summarized \\cite{stimulus_unitization_and_aging}.\n\nExperiments with pictorial stimuli (paired pictures) were done in three consecutive stages. The result from the first stage showed no evidence of a requirement for spatial contiguity, though associative priming was enhanced compared to spatially separated stimuli, which proved that ``implicit memory for novel associations still can occur in the absence of an emergent conceptual representation''. The second experiment was an extension of the first, focusing on the effects of aging and spatial contiguity with the same type of stimuli on novel association priming between pictures, where the striking result was shown that ``associative priming is age invariant'' (exposure of the pictures was longer for the older group to yield a matched performance in the baseline). 
The last experiment built on both the first and second experiments, showing that ``associative priming with pictorial stimuli is modulated by spatial contiguity but not by aging'', and the study provided further evidence for the notion that novel association priming for picture pairs is mediated by the PRS (Perceptual Representation System).\n\n%Wenting.\n\n%1982, $0.15n$ (capacity of associative memory)\n%\n%% Above 0.15n releases the constraint on symmetries according to Olle.\n%\n%1985, proven $ \\frac{n}{2\\log{n}} $ (absolute capacity)\n%\n%1985, $0.14n$ (relative capacity of recalling process)\n%\n%1993, $ n ~= 0.4n $ (new result, absolute capacity)\n", "meta": {"hexsha": "0e4e5518f89dd7114e791fd6df02466121e153cf", "size": 6515, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/3_current_capabilities/1_hopfield_network.tex", "max_stars_repo_name": "mewmew/associative_memories", "max_stars_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-05-30T12:08:22.000Z", "max_stars_repo_stars_event_max_datetime": "2016-05-30T12:08:22.000Z", "max_issues_repo_path": "report/sections/3_current_capabilities/1_hopfield_network.tex", "max_issues_repo_name": "mewmew/associative_memory", "max_issues_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 54, "max_issues_repo_issues_event_min_datetime": "2016-04-04T00:06:16.000Z", "max_issues_repo_issues_event_max_datetime": "2016-06-02T13:32:52.000Z", "max_forks_repo_path": "report/sections/3_current_capabilities/1_hopfield_network.tex", "max_forks_repo_name": "mewmew/associative_memory", "max_forks_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 141.6304347826, "max_line_length": 1133, "alphanum_fraction": 0.8125863392, "num_tokens": 1394, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145998, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5990594307184588}}
{"text": "\\chapter{Bonus: B\\'ezout's theorem}\nIn this chapter we discuss B\\'ezout's theorem.\nIt makes precise the idea that two degree $d$ and $e$\ncurves in $\\CP^2$ should intersect at ``exactly'' $de$ points.\n(We work in projective space so e.g.\\ any two lines intersect.)\n\n\\section{Non-radical ideals}\n\\prototype{Tangent to the parabola.}\nWe need to account for multiplicities.\nSo we will whenever possible work with homogeneous ideals $I$,\nrather than varieties $V$,\nbecause we want to allow the possibility that $I$ is not radical.\nLet's see how we might do so.\n\nFor a first example, suppose we intersect $y=x^2$ with the line $y=1$;\nor more accurately, in projective coordinates of $\\CP^2$,\nthe parabola $zy=x^2$ and $y=z$.\nThe intersection of the ideals is \n\\[ (zy-x^2, y-z) = (x^2-z^2, y-z) \\subseteq \\CC[x,y,z]. \\]\nSo this corresponds to having two points;\nthis gives two intersection points: $(1:1:1)$ and $(-1:1:1)$.\nHere is a picture of the two varieties in the affine $z=1$ chart:\n\\begin{center}\n\t\\begin{asy}\n\t\timport graph;\n\t\tsize(4cm);\n\t\treal f(real x) { return x*x-1; }\n\t\tgraph.xaxis(\"$\\mathcal V(y-z)$\", red);\n\t\tdraw(graph(f,-2,2,operator ..), blue, Arrows);\n\t\tlabel(\"$\\mathcal V(zy-x^2)$\", (1.4, f(1.4)), dir(15), blue);\n\t\tlabel(\"$\\mathbb{CP}^2$\", (2,3), dir(45));\n\t\tdotfactor *= 1.5;\n\t\tdot(dir(0), heavygreen);\n\t\tdot(dir(180), heavygreen);\n\t\\end{asy}\n\\end{center}\nThat's fine, but now suppose we intersect $zy=x^2$ with the line $x=0$ instead.\nThen we instead get a ``double point'':\n\\begin{center}\n\t\\begin{asy}\n\t\timport graph;\n\t\tsize(4cm);\n\t\treal f(real x) { return x*x; }\n\t\tgraph.xaxis(\"$\\mathcal V(y)$\", red);\n\t\tdraw(graph(f,-2,2,operator ..), blue, Arrows);\n\t\tlabel(\"$\\mathcal V(zy-x^2)$\", (1.4, f(1.4)), dir(15), blue);\n\t\tlabel(\"$\\mathbb{CP}^2$\", (2,3), dir(45));\n\t\tdotfactor *= 1.5;\n\t\tdot(origin, heavygreen);\n\t\\end{asy}\n\\end{center}\nThe corresponding ideal is this time\n\\[ (zy-x^2, y) = (x^2,y) \\subseteq \\CC[x,y,z]. 
\\]\nThis ideal is \\emph{not} radical,\nand when we take $\\sqrt{(x^2,y)} = (x,y)$ we get the ideal\nwhich corresponds to a single projective point $(0:0:1)$ of $\\CP^2$.\nThis is why we work with ideals rather than varieties:\nwe need to tell the difference between $(x^2,y)$ and $(x,y)$.\n\n\\section{Hilbert functions of finitely many points}\n\\prototype{The Hilbert function attached to the double point $(x^2,y)$\n\tis eventually the constant $2$.}\n\\begin{definition}\n\tGiven a nonempty projective variety $V$, there is a unique\n\tradical ideal $I$ such that $V = \\Vp(I)$.\n\tIn this chapter we denote it by $\\II(V)$.\n\tFor an empty variety we set $\\II(\\varnothing) = (1)$,\n\trather than choosing the irrelevant ideal.\n\\end{definition}\n\\begin{definition}\n\tLet $I \\subseteq \\CC[x_0, \\dots, x_n]$ be homogeneous.\n\tWe define the \\vocab{Hilbert function} of $I$,\n\tdenoted $h_I : \\ZZ_{\\ge 0} \\to \\ZZ_{\\ge 0}$ by\n\t\\[ h_I(d) = \\dim_{\\CC} \\left( \\CC[x_0, \\dots, x_n]/I \\right)^d \\]\n\ti.e.\\ $h_I(d)$ is the dimension of the $d$th graded part of\n\t$\\CC[x_0, \\dots, x_n] / I$.\n\\end{definition}\n\\begin{definition}\n\tIf $V$ is a projective variety, we set $h_V = h_{\\II(V)}$,\n\twhere $I$ is the \\emph{radical} ideal satisfying $V = \\Vp(I)$.\n\tIf $V = \\varnothing$, we choose $I = (1)$.\n\\end{definition}\n\\begin{example}[Examples of Hilbert functions in zero dimensions]\n\t\\label{ex:hilbert_zero}\n\tFor concreteness, let us use $\\CP^2$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $V$ is the single point $(0:0:1)$,\n\t\twith ideal $\\II(V) = (x,y)$,\n\t\tthen \n\t\t\\[ \\CC[x,y,z] / (x,y) \\cong \\CC[z]\n\t\t\\cong \\CC \\oplus z\\CC \\oplus z^2\\CC \\oplus z^3\\CC \\dots \\]\n\t\twhich has dimension $1$ in all degrees.\n\t\tConsequently, we have \\[ h_I(d) \\equiv 1. \\]\n\t\t\\ii Now suppose we use the ``double point'' ideal $I = (x^2,y)$.\n\t\tThis time, we have\n\t\t\\begin{align*}\n\t\t\t\\CC[x,y,z] / (x^2,y)\n\t\t\t&\\cong \\CC[z] \\oplus x\\CC[z] \\\\\n\t\t\t&\\cong \\CC \\oplus (x\\CC \\oplus z\\CC) \\oplus (xz\\CC \\oplus z^2\\CC)\n\t\t\t\\oplus (xz^2\\CC \\oplus z^3\\CC) \\oplus\\dots.\n\t\t\\end{align*}\n\t\tFrom this we deduce that\n\t\t\\[\n\t\t\th_I(d) =\n\t\t\t\\begin{cases}\n\t\t\t\t2 & d = 1, 2, 3, \\dots \\\\\n\t\t\t\t1 & d = 0.\n\t\t\t\\end{cases}\n\t\t\\]\n\t\t\\ii Let's now take the variety $V = \\{(1:1:1), (-1:1:1)\\}$\n\t\tconsisting of two points, with $\\II(V) = (x^2-z^2, y-z)$. 
Then\n\t\t\\begin{align*}\n\t\t\t\\CC[x,y,z] / (x^2-z^2,y-z)\n\t\t\t&\\cong \\CC[x,z] / (x^2-z^2) \\\\\n\t\t\t&\\cong \\CC[z] \\oplus x\\CC[z].\n\t\t\\end{align*}\n\t\tSo this example has the same Hilbert function as the previous one.\n\t\\end{enumerate}\n\\end{example}\n\\begin{abuse}\n\tI'm abusing the isomorphism symbol\n\t$\\CC[z] \\cong \\CC \\oplus z\\CC \\oplus z^2\\CC$ and similarly\n\tin other examples.\n\tThis is an isomorphism only on the level of $\\CC$-vector spaces.\n\tHowever, in computing Hilbert functions of other examples\n\tI will continue using this abuse of notation.\n\\end{abuse}\n\\begin{example}\n\t[Hilbert functions for empty varieties]\n\tSuppose $I \\subsetneq \\CC[x_0, \\dots, x_n]$\n\tis an ideal, possibly not radical\n\tbut such that \\[ \\Vp(I) = \\varnothing \\]\n\thence $\\sqrt I = (x_0, \\dots, x_n)$ is the irrelevant ideal.\n\tThus there are integers $d_i$ for $i=0,\\dots,n$ such that\n\t$x_i^{d_i} \\in I$ for every $i$; consequently, $h_I(d) = 0$\n\tfor any $d > d_0 + \\dots + d_n$.\n\tWe summarize this by saying that\n\t\\[ h_I(d) = 0 \\text{ for all $d \\gg 0$}. \\]\n\\end{example}\nHere the notation $d\\gg 0$ means ``all sufficiently large $d$''.\n\nFrom these examples we see that if $I$ is an ideal,\nthen the Hilbert function appears to eventually be constant,\nwith the desired constant equal to the size of $\\Vp(I)$,\n``with multiplicity'' in the case that $I$ is not radical.\n\nLet's prove this.\nBefore proceeding we briefly remind the reader of short exact sequences:\na sequence of maps $0 \\to V \\injto W \\surjto X \\to 0$\nis one such that $\\img(V \\injto W) = \\ker(W \\surjto X)$\n(and of course the maps $V \\injto W$ and $W \\surjto X$ are\ninjective and surjective).\nIf $V$, $W$, $X$ are finite-dimensional vector spaces over $\\CC$\nthis implies that $\\dim W = \\dim V + \\dim X$.\n\n\\begin{proposition}\n\t[Hilbert functions of $I \\cap J$ and $I+J$]\n\tLet $I$ and $J$ be homogeneous ideals in $\\CC[x_0, \\dots, x_n]$.\n\tThen \\[ h_{I \\cap J} + h_{I+J} = h_I + h_J.
\\]\n\\end{proposition}\n\\begin{proof}\n\tConsider any $d \\ge 0$.\n\tLet $S = \\CC[x_0, \\dots, x_n]$ for brevity.\n\tThen\n\t\\begin{diagram}\n\t\t0 & \\rTo & \\left[ S / (I \\cap J) \\right]^d\n\t\t\t& \\rInj & \\left[ S / I \\right]^d \\oplus \\left[ S / J \\right]^d\n\t\t\t& \\rSurj & \\left[ S / (I+J) \\right]^d\n\t\t\t& \\rTo & 0 \\\\\n\t\t&& f & \\rMapsto & (f,f) &&&& \\\\\n\t\t&&&& (f,g) & \\rMapsto & f-g &&\n\t\\end{diagram}\n\tis a short exact sequence of vector spaces.\n\tTherefore, for every $d \\ge 0$ we have that\n\t\\[\n\t\t\\dim \\left[ S / I \\right]^d \\oplus \\left[ S / J \\right]^d\n\t\t= \\dim \\left[ S / (I \\cap J) \\right]^d\n\t\t+ \\dim \\left[ S / (I+J) \\right]^d\n\t\\]\n\twhich gives the conclusion.\n\\end{proof}\n\\begin{example}\n\t[Hilbert function of two points in $\\CP^1$]\n\tIn $\\CP^1$ with coordinate ring $\\CC[s,t]$,\n\tconsider $I = (s)$ the ideal corresponding to the point $(0:1)$\n\tand $J = (t)$ the ideal corresponding to the point $(1:0)$.\n\tThen $I \\cap J = (st)$ is the ideal corresponding\n\tto the disjoint union of these two points,\n\twhile $I+J = (s,t)$ is the irrelevant ideal.\n\tConsequently $h_{I+J}(d) = 0$ for $d \\gg 0$.\n\tTherefore, we get\n\t\\[ h_{I \\cap J}(d) = h_I(d) + h_J(d) \\text{ for $d \\gg 0$} \\]\n\tso the Hilbert function of a two-point projective variety\n\tis the constant $2$ for $d \\gg 0$.\n\\end{example}\n\nThis example illustrates the content of the main result:\n\\begin{theorem}\n\t[Hilbert functions of zero-dimensional varieties]\n\tLet $V$ be a projective variety consisting of $m$ points\n\t(where $m \\ge 0$ is an integer).\n\tThen \\[ h_V(d) = m \\text{ for $d \\gg 0$}. \\]\n\\end{theorem}\n\\begin{proof}\n\tWe already did $m = 0$, so assume $m \\ge 1$.\n\tLet $I = \\II(V)$ and for $k=1,\\dots,m$\n\tlet $I_k = \\II(\\text{$k$th point of $V$})$.\n\t\\begin{exercise}\n\t\tShow that $h_{I_k} (d) = 1$ for every $d$.\n\t\t(Modify \\Cref{ex:hilbert_zero}(a).)\n\t\t\\label{ques:hilbert_always_one}\n\t\\end{exercise}\n\n\tHence we can proceed by induction on $m$,\n\twith the base case $m=1$ already done above.\n\tFor the inductive step,\n\twe use the projective analogues of \\Cref{thm:many_aff_variety}.\n\tWe know that $h_{I_1 \\cap \\dots \\cap I_{m-1}}(d) = m-1$ for $d \\gg 0$\n\t(this is the first $m-1$ points;\n\tnote that $I_1 \\cap \\dots \\cap I_{m-1}$ is radical).\n\tTo add in the $m$th point we note that\n\t\\[\n\t\th_{I_1 \\cap \\dots \\cap I_m}(d)\n\t\t= h_{I_1 \\cap \\dots \\cap I_{m-1}}(d) + h_{I_m}(d)\n\t\t- h_J(d)\n\t\\]\n\twhere $J = (I_1 \\cap \\dots \\cap I_{m-1}) + I_m$.\n\tThe ideal $J$ may not be radical, but satisfies $\\Vp(J) = \\varnothing$\n\tby an earlier example, hence $h_J = 0$ for $d \\gg 0$.\n\tThis completes the proof.\n\\end{proof}\nIn exactly the same way we can prove that:\n\\begin{corollary}[$h_I$ eventually constant when $\\dim \\Vp(I) = 0$]\n\tLet $I$ be an ideal, not necessarily radical,\n\tsuch that $\\Vp(I)$ consists of finitely many points.\n\tThen the Hilbert function $h_I$ is eventually constant.\n\\end{corollary}\n\\begin{proof}\n\tInduction on the number of points, $m \\ge 1$.\n\tThe base case $m = 1$ was essentially done in \\Cref{ex:hilbert_zero}(b)\n\tand \\Cref{ques:hilbert_always_one}.\n\tThe inductive step is literally the same as in the proof above,\n\texcept no fuss about radical ideals.\n\\end{proof}\n\n\\section{Hilbert polynomials}\nSo far we have only talked about Hilbert functions\nof zero-dimensional varieties, and showed that\nthey are eventually constant.\nLet's
look at some more examples.\n\\begin{example}\n\t[Hilbert function of $\\CP^n$]\n\tThe Hilbert function of $\\CP^n$ is\n\t\\[ h_{\\CP^n}(d) = \\binom{d+n}{n}\n\t= \\frac{1}{n!} (d+n)(d+n-1) \\dots (d+1) \\]\n\tby a ``balls and urns'' argument.\n\tThis is a polynomial of degree $n$.\n\\end{example}\n\\begin{example}\n\t[Hilbert function of the parabola]\n\tConsider the parabola $zy-x^2$ in $\\CP^2$\n\twith coordinates $\\CC[x,y,z]$.\n\tThen \n\t\\[ \\CC[x,y,z] / (zy-x^2) \\cong \\CC[y,z] \\oplus x\\CC[y,z]. \\]\n\tA combinatorial computation gives that\n\t\\begin{align*}\n\t\th_{(zy-x^2)}(0) &= 1 & \\text{Basis $1$} \\\\\n\t\th_{(zy-x^2)}(1) &= 3 & \\text{Basis $x,y,z$} \\\\\n\t\th_{(zy-x^2)}(2) &= 5 & \\text{Basis $xy$, $xz$, $y^2$, $yz$, $z^2$}.\n\t\\end{align*}\n\tWe thus in fact see that $h_{(zy-x^2)}(d) = 2d+1$.\n\\end{example}\n\nIn fact, this behavior of ``eventually polynomial'' always works.\n\\begin{theorem}\n\t[Hilbert polynomial]\n\tLet $I \\subseteq \\CC[x_0, \\dots, x_n]$ be a homogeneous ideal,\n\tnot necessarily radical. Then\n\t\\begin{enumerate}[(a)]\n\t\t\\ii There exists a polynomial $\\chi_I$ such that\n\t\t$h_I(d) = \\chi_I(d)$ for all $d \\gg 0$.\n\t\t\\ii $\\deg \\chi_I = \\dim\\Vp(I)$ (if $\\Vp(I) = \\varnothing$\n\t\tthen $\\chi_I = 0$).\n\t\t\\ii The polynomial $m! \\cdot \\chi_I$, where $m = \\dim \\Vp(I)$, has integer coefficients.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\tThe base case, where $\\Vp(I)$ consists of finitely many points, was addressed in the previous section.\n\n\tFor the inductive step, consider $\\Vp(I)$ with dimension $m$.\n\tConsider a hyperplane $H$ such that no irreducible\n\tcomponent of $\\Vp(I)$ is contained inside $H$\n\t(we quote this fact without proof, as it is geometrically obvious,\n\tbut the last time I tried to write the proof I messed up).\n\tFor simplicity, assume WLOG that $H = \\Vp(x_0)$.\n\n\tLet $S = \\CC[x_0, \\dots, x_n]$ again.\n\tNow, consider the short exact sequence\n\t\\begin{diagram}\n\t\t0 & \\rTo & [S/I]^{d-1} & \\rInj^{\\times x_0} & [S/I]^d & \\rSurj &\n\t\t\t[S/(I+(x_0))]^d & \\rTo & 0 \\\\\n\t\t&& f & \\rMapsto & fx_0 &&&& \\\\\n\t\t&&&& f & \\rMapsto & f. &&\n\t\\end{diagram}\n\t(The injectivity of the first map follows from the assumption\n\tabout irreducible components of $\\Vp(I)$.)\n\tNow exactness implies that\n\t\\[ h_I(d) - h_I(d-1) = h_{I + (x_0)}(d). \\]\n\tThe last term geometrically corresponds to $\\Vp(I) \\cap H$;\n\tit has dimension $m-1$, so by the inductive hypothesis\n\twe know that\n\t\\[ h_I(d) - h_I(d-1)\n\t\t= \\frac{c_0 d^{m-1} + c_1 d^{m-2} + \\dots + c_{m-1}}{(m-1)!}\n\t\t\\qquad d \\gg 0 \\]\n\tfor some integers $c_0$, \\dots, $c_{m-1}$.\n\tThen we are done by the theory of\n\t\\textbf{finite differences} of polynomials.\n\\end{proof}\n\n
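As a quick sanity check of this recursion (the check is ours, not part of the original text): for the parabola we found $h_{(zy-x^2)}(d) = 2d+1$, and indeed\n\\[ \\chi(d) - \\chi(d-1) = (2d+1) - (2d-1) = 2, \\]\nwhich is exactly the eventual Hilbert function of the two points (counted with multiplicity) in which a hyperplane $H$ as in the proof meets the parabola.\n\n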
\\section{B\\'ezout's theorem}\n\n\\begin{definition}\n\tWe call $\\chi_I$ the \\vocab{Hilbert polynomial} of $I$.\n\tIf $\\chi_I$ is nonzero, we call the leading coefficient of\n\t$m! \\chi_I$ (with $m = \\deg \\chi_I$) the \\vocab{degree} of $I$, which is an integer,\n\tdenoted $\\deg I$.\n\n\tOf course for projective varieties $V$ we let $h_V = h_{\\II(V)}$.\n\\end{definition}\n\n\\begin{example}[Examples of degrees]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $V$ is a finite set of $n \\ge 1$ points, it has degree $n$.\n\t\t\\ii If $I$ corresponds to a double point, it has degree $2$.\n\t\t\\ii $\\CP^n$ has degree $1$.\n\t\t\\ii The parabola has degree $2$.\n\t\\end{enumerate}\n\\end{example}\n\nNow, you might guess that if $f$ is a homogeneous quadratic polynomial\nthen the degree of the principal ideal $(f)$ is $2$, and so on.\n(Thus for example we expect a circle to have degree $2$.)\nThis is true:\n\n\\begin{theorem}\n\t[B\\'ezout's theorem]\n\tLet $I$ be a homogeneous ideal of $\\CC[x_0, \\dots, x_n]$,\n\tsuch that $\\dim \\Vp(I) \\ge 1$.\n\tLet $f \\in \\CC[x_0, \\dots, x_n]$ be a homogeneous polynomial of degree $k$\n\twhich does not vanish on any irreducible component of $\\Vp(I)$.\n\tThen\n\t\\[ \\deg\\left( I + (f) \\right) = k \\deg I. \\]\n\\end{theorem}\n\\begin{proof}\n\tLet $S = \\CC[x_0, \\dots, x_n]$ again.\n\tThis time the exact sequence is\n\t\\begin{diagram}\n\t\t0 & \\rTo & [S/I]^{d-k} & \\rInj^{\\times f} & [S/I]^d & \\rSurj &\n\t\t\t[S/(I+(f))]^d & \\rTo & 0\n\t\\end{diagram}\n\tWe leave this olympiad-esque exercise as \\Cref{prob:bezout}.\n\\end{proof}\n\n\\section{Applications}\nFirst, we show that the notion of degree is what we expect.\n\\begin{corollary}\n\t[Hypersurfaces: the degree deserves its name]\n\tLet $V$ be a hypersurface, i.e.\\ $\\II(V) = (f)$\n\tfor $f$ a homogeneous polynomial of degree $k$.\n\tThen $\\deg V = k$.\n\\end{corollary}\n\\begin{proof}\n\tRecall $\\deg(0) = \\deg \\CP^n = 1$.\n\tTake $I = (0)$ in B\\'ezout's theorem.\n\\end{proof}\n
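\nFor instance, here is the circle worked out explicitly (our verification, in the same style as the parabola example; it is not part of the original text). In $\\CC[x,y,z]$, reducing $x^2$ to $z^2 - y^2$ gives\n\\[ \\CC[x,y,z] / (x^2+y^2-z^2) \\cong \\CC[y,z] \\oplus x\\CC[y,z], \\]\nso $h_{(x^2+y^2-z^2)}(d) = (d+1) + d = 2d+1$ for $d \\ge 1$.\nHence $\\chi(d) = 2d+1$, $m = 1$, and the leading coefficient of $1! \\cdot \\chi$ is $2$: the circle indeed has degree $2$, as promised.\n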
\nThe common special case in $\\CP^2$ is:\n\\begin{corollary}[B\\'ezout's theorem for curves]\n\tFor any two curves $X$ and $Y$ in $\\CP^2$ without\n\ta common irreducible component,\n\t\\[ \\left\\lvert X \\cap Y \\right\\rvert \n\t\t\\le \\deg X \\cdot \\deg Y.\n\t\\]\n\\end{corollary}\n\nNow, we use this to prove Pascal's theorem.\n\\begin{theorem}\n\t[Pascal's theorem]\n\tLet $A$, $B$, $C$, $D$, $E$, $F$ be six\n\tdistinct points which lie on a conic $\\mathscr C$ in $\\CP^2$.\n\tThen the points $AB \\cap DE$, $BC \\cap EF$, $CD \\cap FA$ are collinear.\n\\end{theorem}\n\\begin{proof}\n\tLet $X$ be the variety equal to the union of the\n\tthree lines $AB$, $CD$, $EF$, hence $X = \\Vp(f)$\n\tfor some cubic polynomial $f$ (which is the product of three linear ones).\n\tSimilarly, let $Y = \\Vp(g)$ be the variety\n\tequal to the union of the three lines $BC$, $DE$, $FA$.\n\n\t\\begin{center}\n\t\t\\begin{asy}\n\t\t\tfilldraw(unitcircle, opacity(0.2)+lightcyan, blue);\n\t\t\tpair A = dir(110);\n\t\t\tpair B = dir(210);\n\t\t\tpair C = dir(40);\n\t\t\tpair D = dir(270);\n\t\t\tpair E = dir(130);\n\t\t\tpair F = dir(-30);\n\n\t\t\tpair P = extension(A, B, D, E);\n\t\t\tpair X = extension(B, C, E, F);\n\t\t\tpair Q = extension(C, D, F, A);\n\n\t\t\tdraw(A--B--C--D--E--F--cycle);\n\t\t\tdraw(P--Q, red+dashed);\n\n\t\t\tdot(\"$A$\", A, dir(A));\n\t\t\tdot(\"$B$\", B, dir(B));\n\t\t\tdot(\"$C$\", C, dir(C));\n\t\t\tdot(\"$D$\", D, dir(D));\n\t\t\tdot(\"$E$\", E, dir(E));\n\t\t\tdot(\"$F$\", F, dir(F));\n\t\t\tdot(P);\n\t\t\tdot(X);\n\t\t\tdot(Q);\n\n\t\t\t/* Source generated by TSQ */\n\t\t\\end{asy}\n\t\\end{center}\n\n\tNow let $P$ be an arbitrary point on the conic $\\mathscr C$,\n\tdistinct from the six points $A$, $B$, $C$, $D$, $E$, $F$.\n\tConsider the projective variety\n\t\\[ V = \\Vp(\\alpha f + \\beta g) \\]\n\twhere the constants $\\alpha$ and $\\beta$ are chosen such that $P \\in V$.\n\t\\begin{ques}\n\t\tShow that $V$ also contains the six points $A$, $B$, $C$, $D$, $E$, $F$\n\t\tas well as the three points $AB \\cap DE$, $BC \\cap EF$, $CD \\cap FA$\n\t\tregardless of which $\\alpha$ and $\\beta$ are chosen.\n\t\\end{ques}\n\n\tNow, note that $|V \\cap \\mathscr C| \\ge 7$.\n\tBut $\\deg V = 3$ and $\\deg \\mathscr C = 2$.\n\tThis contradicts B\\'ezout's theorem unless $V$ and $\\mathscr C$\n\tshare an irreducible component.\n\tThis can only happen if $V$ is the union of a line and conic,\n\tfor degree reasons; i.e.\\ we must have that\n\t\\[ V = \\mathscr C \\cup \\text{line}. \\]\n\tFinally note that the three intersection points $AB \\cap DE$,\n\t$BC \\cap EF$ and $CD \\cap FA$ do not lie on $\\mathscr C$,\n\tso they must lie on this line.\n\\end{proof}\n\n\n\\section\\problemhead\n\\begin{problem}\n\t\\label{prob:bezout}\n\tComplete the proof of B\\'ezout's theorem from before.\n\t\\begin{sol}\n\t\tFrom the exactness,\n\t\t$h_I(d) = h_I(d-k) + h_{I+(f)}(d)$,\n\t\tand it follows that\n\t\t\\[ \\chi_{I+(f)}(d) = \\chi_I(d) - \\chi_I(d-k).
\\]\n\t\tLet $m = \\dim \\Vp(I) \\ge 1$.\n\t\tNow $\\dim \\Vp(I+(f)) = m-1$, so writing the leading\n\t\tcoefficient of $\\chi_{I+(f)}$ as $\\deg(I+(f)) / (m-1)!$ we have\n\t\t\\[\n\t\t\t\\frac{\\deg (I+(f)) d^{m-1} + \\dots}{(m-1)!}\n\t\t\t=\n\t\t\t\\frac{1}{m!}\n\t\t\t\\left( \\deg I (d^m - (d-k)^m)\n\t\t\t+ \\text{lower order terms} \\right)\n\t\t\\]\n\t\tfrom which we read off\n\t\t\\[ \\deg (I+(f)) = \\frac{(m-1)!}{m!} \\cdot k \\binom m1 \\deg I\n\t\t\t= k \\deg I \\]\n\t\tas needed.\n\t\\end{sol}\n\\end{problem}\n\n\\begin{problem}\n\t[USA TST 2016/6]\n\t\\yod\n\tLet $ABC$ be an acute scalene triangle\n\tand let $P$ be a point in its interior.\n\tLet $A_1$, $B_1$, $C_1$ be projections of $P$ onto\n\ttriangle sides $BC$, $CA$, $AB$, respectively.\n\tFind the locus of points $P$ such that $AA_1$, $BB_1$, $CC_1$\n\tare concurrent and $\\angle PAB + \\angle PBC + \\angle PCA = 90^{\\circ}$.\n\t\\begin{hint}\n\t\tYou will need to know about complex numbers\n\t\tin Euclidean geometry to solve this problem.\n\t\\end{hint}\n\t\\begin{sol}\n\tIn complex numbers with $ABC$ the unit circle,\n\tit is equivalent to solving the two cubic equations\n\t\\begin{align*}\n\t\t(p-a)(p-b)(p-c) &= (abc)^2 (q -1/a)(q - 1/b)(q - 1/c) \\\\\n\t\t0 &= \\prod_{\\text{cyc}} (p+c-b-bcq) + \\prod_{\\text{cyc}} (p+b-c-bcq)\n\t\\end{align*}\n\tin $p$ and $q = \\overline p$.\n\tViewing this as two cubic curves in $(p,q) \\in \\mathbb C^2$,\n\tby B\\'ezout's theorem it follows there are at most nine solutions\n\t(unless the two cubics share a common component,\n\tbut one can check the first one cannot be factored).\n\tMoreover it is easy to name nine solutions (for $ABC$ scalene):\n\tthe three vertices, the three excenters, and $I$, $O$, $H$.\n\tHence the answer is just those three triangle centers $I$, $O$ and $H$.\n\t\\end{sol}\n\\end{problem}\n", "meta": {"hexsha": "c885d5c3b182d67d7d20ca92e8a2de78def1409f", "size": 18042, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/alg-geom/bezout.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/alg-geom/bezout.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/alg-geom/bezout.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3072407045, "max_line_length": 79, "alphanum_fraction": 0.6369027824, "num_tokens": 6635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397349, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5990594246409336}}
{"text": "\\section{The Lemke-Howson Algorithm}\r\n\\label{lh-sect}\r\n\r\nTheorem \\ref{nash-ppad-complete-thm} suggests that the study of solutions of\r\n$n$-{\\sc Nash} as endpoints of paths can yield interesting results about\r\nthe complexity of the problem itself. In this\r\nsection we will study an algorithm that describes exactly this idea.\r\n\r\nLet $P$ be a simple $d$-polytope with $n$ facets.\r\nWe {\\em pivot on the vertices} of $P$ by moving from a\r\nvertex $x$ to another vertex $y$ connected to $x$ by an edge,\r\nsee Figure \\ref{pivot-vertex-fig}.\r\nNote that, since $P$ is simple, there are exactly\r\n$d$ possible choices for $y$.\r\nAnalogously, we {\\em pivot on the facets} of a simplicial\r\npolytope $P^\\Delta$ in\r\ndimension $d$ by moving from a facet $F$ to a facet $G$ that\r\nshares all vertices but one with~$F$, see Figure \\ref{pivot-facet-fig}.\r\nAs above, since $P^\\Delta$ is simplicial, there are $d$ possible choices\r\nfor $G$.\r\n\r\n\\begin{figure}[hp]\r\n\\strut\\hfill\r\n\\includegraphics[width=38ex]{chapter-3/fig-lh/cube-pivot.pdf}%\r\n\\hfill\\strut\r\n\\caption[A pivot on the vertices of the cube]{%\r\nA pivot from vertex $x$ to vertex $y$ on the edge of a cube.\r\n}\r\n\\label{pivot-vertex-fig}\r\n\\end{figure}\r\n\r\n\\begin{figure}[hp]\r\n\\strut\\hfill\r\n\\includegraphics[width=48ex]{chapter-3/fig-lh/octahedron-pivot.pdf}%\r\n\\hfill\\strut\r\n\\caption[A pivot on the facets of the octahedron]{%\r\nA pivot from facet $F$ of an octahedron to facet $G$.\r\n}\r\n\\label{pivot-facet-fig}\r\n\\end{figure}\r\n\r\n\\clearpage\r\n\r\nSuppose now that there is a labeling $l_f:[n]\\to [d]$ of the facets of the\r\nsimple polytope $P$.\r\nIf we pivot from vertex $x$ to vertex $x'$ we ``leave behind'' a facet $F$\r\nwith label $k$ to which $x$ belongs, but $x'$ does not. At the same\r\ntime, we ``reach'' a facet $F'$ with label $h$, to which $x$ does not\r\nbelong, but $x'$ does.\r\nTherefore, if $x$ has labels $(l_1,\\ldots,k,\\ldots,l_d)$, then\r\n$x'$ has labels $(l_1,\\ldots,h,\\ldots,l_d)$. 
We call this\r\n{\\em dropping label $k$ and picking up label $h$}, or\r\n{\\em pivoting on label $k$}; see Figure \\ref{pivot-vertex-label-fig}.\r\nAnalogously, if there is a labeling $l_v:[n]\\to [d]$ of the vertices\r\nof the simplicial polytope $P^\\Delta$\r\nand we pivot from a facet $F$ with labels $(l_1,\\ldots,k,\\ldots,l_d)$\r\nto a facet $F'$ with labels $(l_1,\\ldots,h,\\ldots,l_d)$, we say that we\r\n{\\em drop label $k$ and pick up label $h$}, or that we\r\n{\\em pivot on label $k$}; see Figure \\ref{pivot-facet-label-fig}.\r\n\r\n\\begin{figure}[ht]\r\n\\strut\\hfill\r\n\\includegraphics[width=40ex]{chapter-3/fig-lh/cube-pivot-label.pdf}%\r\n\\hfill\\strut\r\n\\caption[A pivot on the vertices of the labeled cube]{%\r\nA pivot on label $k$: drop vertex $x$ with labels $(l_1,l_2,k)$ and\r\npick up vertex $x'$ with labels $(l_1,l_2,h)$.\r\n}\r\n\\label{pivot-vertex-label-fig}\r\n\\end{figure}\r\n\r\n\\clearpage\r\n\r\n\\begin{figure}[htb]\r\n\\strut\\hfill\r\n\\includegraphics[width=48ex]{chapter-3/fig-lh/octahedron-pivot-label.pdf}%\r\n\\hfill\\strut\r\n\\caption[A pivot on the facets of the labeled octahedron]{%\r\nA pivot on label $k$: drop a facet with labels $(l_1,l_2,k)$ and\r\npick up a facet with labels $(l_1,l_2,h)$.\r\n}\r\n\\label{pivot-facet-label-fig}\r\n\\end{figure}\r\n\r\nConsider a labeling function $l:[n]\\to[d]$, and a subset $S$\r\nof $[n]$ with $|S|=d$.\r\nThen $S$ is called {\\em almost completely labeled} if\r\n\\begin{equation}\r\n\\label{acl-equation}\r\nl(S)~=~\\{\\,l(s)\\mid s\\in S\\}~=~[d]\\setminus\\{k\\}\\,\r\n\\end{equation}\r\nthat is, all labels appear once in $S$ except for one {\\em\r\nmissing label} $k\\in [d]$.\r\nSince $|S|=d$, in that case there is one\r\n{\\em duplicate label} $h\\in [d]$ that appears twice in~$S$.\r\n\r\nLet $S \\subseteq [n]$ be the set of facets through a vertex of a\r\nsimple polytope, or the set of vertices of a facet of a\r\nsimplicial polytope.\r\nWe call this vertex (or, respectively, facet) an {\\em almost\r\ncompletely labeled vertex (or facet)} if $S$ is almost completely\r\nlabeled with respect to the labeling of the facets\r\n(or, respectively, vertices) of the polytope.\r\nIt is easy to see that if we pivot from an almost completely\r\nlabeled vertex (or facet) on the duplicate label, or from a\r\ncompletely labeled vertex (or facet) on any label, the dropped label\r\nbecomes the missing label~$k$, and we reach either an\r\nalmost completely labeled or a completely labeled vertex\r\n(or facet).\r\n\r\nThe algorithm by Lemke and Howson \\cite{lh} finds one Nash\r\nequilibrium of a bimatrix game.\r\nIn a modern description (e.g., Savani and von Stengel \\cite{svs}),\r\nit employs pivoting on the vertices of a simple polytope,\r\nmoving through a succession of almost completely labeled\r\nvertices with missing label~$k$, where this polytope is\r\nthe product $P\\times Q$ of the best-response polytopes.\r\nThis can be abstracted slightly further by considering only\r\na single polytope $P$ in dimension $d$ with facet labels\r\nfrom $[d]$.\r\nAlgorithm~\\ref{lh-alg} gives this latter version; for simplicity\r\nof notation, we will call it {\\em Lemke-Howson Algorithm}.\r\nAlgorithm~\\ref{lh-alg} also computes a ``Lemke\r\npath'', in the terminology of Morris~\\cite{morris}; this, in turn,\r\ncan be used to prove some fundamental properties\r\nof both the Lemke-Howson Algorithm and the Nash equilibria of a\r\nbimatrix game.\r\n\r\n\\begin{algorithm}[htb]\r\n\\SetKwInOut{Input}{input}\r\n\\SetKwInOut{Output}{output}\r\n\\Input{%\r\nA simple
$d$-polytope $P$ with $n$ facets and a\r\nlabeling $l_f:[n]\\to [d]$ of the facets of $P$.\r\nA vertex $x_0$ of $P$ that is completely labeled by $l_f$.\r\n}\r\n\\Output{%\r\nA vertex $x\\neq x_0$ of $P$ that is completely labeled by $l_f$.\r\n}\r\n\\BlankLine\r\nchoose any label $k\\in [d]$ as missing label \\\\\r\npivot on label $k$ from $x_0$ to $x$ reaching a new facet\r\nwith label $h$\\\\\r\n\\While{ $h\\ne k$, so $x$ is not completely labeled ~}\r\n{\r\n% you don't use x_0 in the while loop\r\npivot away from the other facet with label $h$ from $x$ to $x'$  \\\\\r\nlet $h$ be the label of the new facet of $x'$ \\\\\r\nset $x = x'$\r\n}\r\n\\Return $x$\r\n\\caption{Lemke-Howson}\r\n\\label{lh-alg}\r\n\\end{algorithm}\r\n\r\n\\begin{proposition}\\label{lh-works-ppa-thm}\r\nThe Lemke-Howson Algorithm \\ref{lh-alg} returns a solution to\r\nthe {\\bf PPA} problem {\\sc Another Completely Labeled Vertex}.\r\nFurthermore, the number of completely labeled vertices in a simple\r\npolytope with labeled facets is even.\r\n\\end{proposition}\r\n\r\n\\begin{proof}\r\nWe first show that the Lemke-Howson Algorithm works.\r\nFrom the completely labeled vertex $x_0$, there is a unique\r\nedge that leaves the facet with label~$k$ which leads to a\r\nnew vertex $x$, as in step~2 of the algorithm.\r\nIf $x$ is completely labeled, then the algorithm terminates\r\nwith output $x$, and it is trivial to see that $x\\ne x_0$.\r\nOtherwise, $x$ is an almost completely labeled vertex with\r\nduplicate label~$h$, where one of the facets that contain $x$\r\nand have label~$h$ is a ``new'' facet that did not contain\r\nthe preceding vertex on the Lemke path.\r\nSince $P$ is simple, $x$ is always on exactly $d$ facets\r\nand the duplicate label is unique.\r\nHence no vertex, including $x_0$, can ever be re-visited\r\non the path because it would otherwise offer an alternative\r\nway to proceed when the vertex was encountered for the first\r\ntime.\r\n\r\nThe parity result is proven by the following argument: each Lemke path is\r\nuniquely determined by its missing label and its starting point, so the\r\nLemke path from the endpoint with the same missing label will lead back\r\nto the starting point. Since the two endpoints of each path are distinct,\r\nthe Lemke paths match the completely labeled vertices up in pairs, so\r\ntheir number is even.\r\n\r\nFinally, for each label $k\\in [d]$ chosen in line 1 of\r\nAlgorithm~\\ref{lh-alg}, the Lemke paths are disjoint paths connecting\r\nall the completely labeled vertices of $P$, with a standard\r\nstarting point $x_0$.\r\nThe problem {\\sc Another Completely Labeled Vertex} corresponds to finding\r\na non-standard endpoint of this graph, which is a {\\bf PPA} problem.\r\n\\end{proof}\r\n\r\nProposition \\ref{lh-works-ppa-thm} can be extended. Lemke paths can be\r\nused to prove that {\\sc Another Completely Labeled Vertex} is in {\\bf PPAD},\r\nnot just in {\\bf PPA}.
This is done\r\nby giving a {\\em sign} (positive or negative) to the vertices.\r\nIt can be proven that the endpoints of the Lemke paths have opposite\r\nsign, and the paths can therefore be oriented accordingly.\r\nShapley \\cite{shapley} proved an analogous result:\r\ntwo Nash equilibria at the ends of a\r\nLemke path have opposite {\\em index}, a concept analogous to sign but\r\ndefined using determinants on the payoff matrices for the equilibrium\r\nsupport.\r\nThe index of a Nash equilibrium is usually normalized, by\r\nmultiplication with $-1$ of all signs if necessary, so that\r\nthe artificial equilibrium has index $-1$;\r\nthen a nondegenerate game with $n$ Nash equilibria\r\nwith index $+1$ has $n-1$ Nash equilibria with index $-1$.\r\nFor an in-depth study of the topics related to sign in the Lemke-Howson\r\nand other algorithms, we refer to V\\'{e}gh and von Stengel \\cite{vvs}.\r\n\r\nApplying the parity result of Proposition \\ref{lh-works-ppa-thm} to the\r\ncase of a bimatrix game (not necessarily a unit vector game), and\r\nremembering that the point $(\\0,\\0)$ corresponds to the\r\n``artificial'' equilibrium, we have the following result, due to Lemke and\r\nHowson \\cite{lh}.\r\n\r\n\\begin{theorem}{\\rm (Lemke-Howson \\cite{lh})}\r\nEvery non-degenerate bimatrix game has an odd number of Nash equilibria.\r\n\\end{theorem}\r\n\r\nThere are two ways of using the Lemke-Howson Algorithm to find a Nash\r\nequilibrium of a bimatrix game $(A,B)$.\r\nThe first one is to ``symmetrize'' the game as in Proposition\r\n\\ref{symmetrize-c}. Let\r\n$R = \\{ z\\in\\reals^{m+n}\\ |\\ z\\geq\\0,\\ Cz\\leq\\1 \\}$ be the\r\npolytope associated to the game $(C,C\\T)$, where\r\n\\[\r\nC=\\binom{~0~~~ A\\,}{B\\T~ 0\\,}.\r\n\\]\r\nThe facets of $R$ correspond to the $2(m+n)$ inequalities. We label both\r\nthe $i$-th and the $(m+n+i)$-th inequality as $i\\in [m+n]$ and we apply the\r\nLemke-Howson algorithm starting from the vertex $\\0$. This returns a Nash\r\nequilibrium $(z,z)$ of $(C,C\\T)$, which corresponds to a Nash equilibrium\r\n$(x,y)=z$ of $(A,B)$.\r\nWe can also follow the ``traditional'' exposition of the Lemke-Howson\r\nAlgorithm given by Shapley \\cite{shapley}.
In this version,\r\nwe alternate a move on the best response polytope $P$ and a move\r\non the best response polytope $Q$ of (\\ref{br-polytopes}).\r\nSince the polytopes $P$ and $Q$ are in $\\reals^m$ and $\\reals^n$, whereas $R$\r\nis a polytope in $\\reals^{m+n}$, the second version is much easier to\r\nvisualize.\r\n\r\n\\clearpage\r\n\r\n\\begin{example}({\\rm Savani and von Stengel \\cite{uvg}})\r\nConsider the $3\\times 3$ game $(A,B)$ of Example \\ref{br-game-ex}.\r\n\\begin{equation*}\r\nA = \\left(\\begin{matrix}1&0&0\\\\ 0&1&0\\\\\r\n0&0&1\\end{matrix}\\right),\r\n\\qquad\r\nB = \\left(\\begin{matrix}0&2&4\\\\ 3&2&0\\\\\r\n0&2&0\\end{matrix}\\right).\r\n\\end{equation*}\r\nThe best response polytopes can be represented as the best response regions\r\nof Figure \\ref{br-regions-fig} extended to the origin $\\0$,\r\nas in Figure~\\ref{lh-path-fig}.\r\n\r\nThe path starts from $(\\0,\\0)$.\r\nWe choose the missing label~1 and move in the polytope $P$.\r\nThen label~6 is the duplicate label; we drop it and make the next move on the\r\npolytope $Q$, and so on until we reach the point $x$ in $P$\r\nand $y$ in~$Q$, which here gives the only Nash equilibrium $(x,y)$ of $(A,B)$.\r\n\r\n\\begin{figure}[hbt]\r\n\\strut\\hfill\r\n\\includegraphics[width=70ex]{chapter-3/fig-lh/lemke-path.pdf}%\r\n\\hfill\\strut\r\n\\caption[A Lemke path for a bimatrix game]{%\r\nLemke path for missing label 1 on the best response polytopes of\r\nplayer 1 (left) and player 2 (right) of game (\\ref{AB}).\r\n}\r\n\\label{lh-path-fig}\r\n\\end{figure}\r\n\\end{example}\r\n\r\nIt is possible to have an equilibrium that cannot be reached by applying the\r\nLemke-Howson Algorithm from the artificial equilibrium, or even from\r\nthe endpoint of a Lemke path from the artificial equilibrium. This\r\ncan be seen in the next example, where the Lemke paths form two disconnected\r\ncomponents. Notice that, by the parity result of\r\nProposition \\ref{lh-works-ppa-thm}, each one of these components must\r\ncontain an even number of equilibria (either Nash or artificial), since all\r\nthese equilibria are endpoints of Lemke paths.\r\n\r\n\\begin{example}(R.
Wilson, cited in Shapley \\cite{shapley})\r\nConsider the symmetric game $(C,C\\T)$ with\r\n\\begin{equation}\r\n\\label{disj-lp-game}\r\nC = \\left(\r\n    \\begin{matrix}\r\n        0&3&0 \\\\\r\n        2&2&0 \\\\\r\n        3&0&1\r\n    \\end{matrix}\r\n    \\right).\r\n\\end{equation}\r\nThere are three equilibria of $(C,C\\T)$, all of them symmetric,\r\nat $(x_i,x_i)$ with\r\n$x_1=(0,0,1)$,\r\n$x_2=(1/6,1/3,1/2)$ and\r\n$x_3=(1/3,2/3,0)$.\r\n\r\nAll Lemke paths from the\r\nartificial equilibrium $(\\0,\\0)$ end at $(x_1,x_1)$, and consequently all\r\nother Lemke paths connect $(x_2,x_2)$ and $(x_3,x_3)$; see\r\nFigure \\ref{disj-lp-br-poly}.\r\n\r\n\\begin{figure}[hbt]\r\n\\strut\\hfill\r\n\\includegraphics[width=70ex]{chapter-3/fig-lh/shapley-game.pdf}%\r\n\\hfill\\strut\r\n\\caption[A game with disjoint Lemke paths]{%\r\nThe Lemke paths for missing label 1 (yellow), 2 (green) and 3 (pink) on the best\r\nresponse polytopes of game (\\ref{disj-lp-game}).\r\n\r\nThe paths for missing label\r\n4, 5 and 6 on the best response polytope of player 1 are the same\r\nas the paths of 1, 2 and 3 on the best response polytope of player 2,\r\nand vice versa.\r\n}\r\n\\label{disj-lp-br-poly}\r\n\\end{figure}\r\n
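As a quick check (ours, not part of the thesis): for $x_2=(1/6,1/3,1/2)$ one computes\r\n\\[\r\nCx_2 = \\left(\\begin{matrix} 0\\cdot\\frac{1}{6} + 3\\cdot\\frac{1}{3} + 0\\cdot\\frac{1}{2} \\\\ 2\\cdot\\frac{1}{6} + 2\\cdot\\frac{1}{3} + 0\\cdot\\frac{1}{2} \\\\ 3\\cdot\\frac{1}{6} + 0\\cdot\\frac{1}{3} + 1\\cdot\\frac{1}{2} \\end{matrix}\\right) = \\left(\\begin{matrix}1\\\\1\\\\1\\end{matrix}\\right),\r\n\\]\r\nso every pure strategy earns payoff $1$ against $x_2$, confirming that $(x_2,x_2)$ is indeed a symmetric Nash equilibrium of $(C,C\\T)$.\r\n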
\\end{example}\r\n\r\nThe dual version of the Lemke-Howson Algorithm \\ref{lh-alg} and of\r\nProposition \\ref{lh-works-ppa-thm} is straightforward; analogously,\r\nthe proof can be extended to show that {\\sc Another Completely Labeled Facet}\r\nis in {\\bf PPAD}.\r\n\r\n\\clearpage\r\n\r\n\\begin{algorithm}[ht]\r\n\\SetKwInOut{Input}{input}\r\n\\SetKwInOut{Output}{output}\r\n\\Input{%\r\nA simplicial $d$-polytope $P^\\Delta$ with $n$ vertices and a\r\nlabeling $l_v:[n]\\to [d]$ of the vertices of $P^\\Delta$.\r\nA facet $F_0$ of $P^\\Delta$ that is completely labeled by $l_v$.\r\n}\r\n\\Output{%\r\nA facet $F\\neq F_0$ of $P^\\Delta$ that is completely labeled by $l_v$.\r\n}\r\n\\BlankLine\r\nchoose any label $k\\in [d]$ as missing label \\\\\r\npivot on label $k$ from $F_0$ to $F$ which has a new vertex\r\nwith label $h$ \\\\\r\n\\While{ $h\\ne k$, so $F$ is not completely labeled ~}\r\n{\r\npivot away from the other vertex with label $h$ from $F$ to $F'$  \\\\\r\nlet $h$ be the label of the new vertex of $F'$ \\\\\r\nset $F = F'$\r\n}\r\n\\Return $F$\r\n\\caption{Dual Lemke-Howson}\r\n\\label{lh-dual-alg}\r\n\\end{algorithm}\r\n\r\n\\begin{proposition}\\label{lh-dual-works-ppa-thm}\r\nThe Dual Lemke-Howson Algorithm \\ref{lh-dual-alg} returns\r\na solution to the {\\bf PPAD}\r\nproblem {\\sc Another Completely Labeled Facet}.\r\nFurthermore, the number of completely labeled facets\r\nin a simplicial polytope with labeled vertices is even.\r\n\\end{proposition}\r\n\r\nBy Theorem \\ref{unit-vector-thm} and Theorem \\ref{unit-vector-dual-thm},\r\nin the case of unit vector games it is enough to apply the\r\nLemke-Howson Algorithm~\\ref{lh-alg} to the polytope $P^l$ in\r\n(\\ref{p-l}), or the Dual Lemke-Howson Algorithm \\ref{lh-dual-alg}\r\nto the polytope $P^\\Delta$ in (\\ref{p-l-dual}).\r\nThe following theorem by Savani and von Stengel \\cite{uvg} guarantees that\r\nnot only does this yield a Nash equilibrium, but no potential solutions\r\nare ``lost'' considering the polytope $P^l$ with $m$ labels\r\ninstead of the product of polytopes $P\\times Q$ with $m + n$ labels;\r\nan analogous result holds for the dual case.\r\n\r\n\\begin{theorem}\\label{unit-paths}\r\nLet $(U,B)$ be a unit vector game, with\r\n$U=[e_{l(1)}\\cdots e_{l(n)}]$ for a labeling $l:[n]\\to[m]$.\r\nLet\r\n\\[\r\n\\arraycolsep.3em\r\n\\begin{array}{rcll}\r\nP&=&\\{ x\\in\\reals^m&\\mid\\ x\\geq\\0,\\ B\\T x\\leq\\1 \\}, \\\\\r\nQ&=&\\{ y\\in\\reals^n&\\mid\\ U y\\leq\\1,~y\\geq\\0 \\}, \\\\\r\n\\end{array}\r\n\\]\r\nas in $(\\ref{br-polytopes})$, and let\r\n\\[\r\nP^l=\\{ x\\in\\reals^m \\mid\\ x\\geq\\0,\\ B\\T x\\leq\\1 \\}\r\n\\quad \\hbox{with labels in $[m]$ as in $(\\ref{facet-labeling-unitv})$}\r\n\\]\r\nas in $(\\ref{p-l})$.\r\nThen for the missing label $k\\in [m]$\r\nthe Lemke path on $P\\times Q$ projects to a path on $P$ that corresponds\r\nto the Lemke path on $P^l$ for the missing label~$k$. For the missing label\r\n$k=m+j$, where $j\\in [n]$, the Lemke path on $P\\times Q$ projects to a\r\npath on $Q$ that corresponds to the Lemke path on $P^l$ for the\r\nmissing label~$l(j)$.\r\n\\end{theorem}\r\n\r\nWe finally focus on the case of Gale games.\r\nIn line with Proposition \\ref{galenash-to-another-gale}, we look for\r\nsolutions of the problem \\anothergale, see Table \\ref{another-gale}.\r\nBy Proposition \\ref{d-even-another-gale}, it is enough to study the\r\ncase of Gale strings $s\\in G(d,n)$ with $d$ even.\r\nWe will consider these as ``wrapped-around strings''.\r\n\r\nLet $s(i)=1$ for an index $i\\in [n]$. Then, by the Gale evenness condition,\r\nthere is an odd run of \\1's either on the left or on the right\r\nof position $i$ in $s$. Let $j$ be the first index after this run.\r\nA {\\em pivot from $s$ to $s'$} is given by setting $s'(i)=0$\r\nand $s'(j)=1$ to yield the new string~$s'$ which otherwise\r\nagrees with~$s$.\r\nIf there is a labeling $l_s:[n]\\to [d]$, we say that we\r\n{\\em drop label $l_s(i)$} and\r\n{\\em pick up label $l_s(j)$}, or that we {\\em pivot on label $l_s(i)$},\r\nspecifying the index $i$ when the label $l_s(i)$ is not enough to\r\nidentify it.\r\nThe {\\em Lemke-Howson for Gale Algorithm} is given\r\nin Algorithm~\\ref{lhg-alg}.\r\n\r\n\\clearpage\r\n\r\n\\begin{algorithm}%\\label{lhg-alg}\r\n\\SetKwInOut{Input}{input}\r\n\\SetKwInOut{Output}{output}\r\n\\Input{%\r\nA labeling $l_s:[n]\\to [d]$, where $d$ is even, such that there is a\r\ncompletely labeled Gale string $s_0 \\in G(d,n)$.\r\n}\r\n\\Output{%\r\nA Gale string $s\\in G(d,n)$ that is completely labeled by $l_s$,\r\nwith $s\\neq s_0$.\r\n}\r\n\\BlankLine\r\nchoose a missing label $k\\in [d]$ \\\\\r\npivot on label $k$ from $s_0$ to $s$ reaching a new \\1 bit\r\nwith label $h$\\\\\r\n\\While{ $h\\ne k$, so $s$ is not completely labeled ~}\r\n{\r\npivot away from the other \\1 bit in $s$ with label $h$ from $s$ to $s'$  \\\\\r\nlet $h$ be the label of the new \\1 bit in $s'$ \\\\\r\nset $s = s'$\r\n}\r\n\\Return $s$\r\n\\caption{Lemke-Howson for Gale}\r\n\\label{lhg-alg}\r\n\\end{algorithm}\r\n\r\nThe next example illustrates the correspondence between the Dual\r\n\\linebreak[5]\r\nLemke-Howson Algorithm and the Lemke-Howson for Gale Algorithm.\r\n\r\n\\begin{example}\r\nFigure \\ref{lhg-123432-fig} shows the cyclic polytope $C_4(6)$\r\nwith the labeling\r\n\\[\r\n\\begin{array}{rll}\r\nl_v(i)= & i\\quad & \\text{ for }i\\in [4], \\\\\r\nl_v(5)= & 3,  \\\\\r\nl_v(6)= & 2.\r\n\\end{array}\r\n\\]\r\nThis corresponds to the labeling $l_s=123432$ for $G(4,6)$ given in\r\nExample \\ref{c46-123432-ex}, for which there are four completely labeled\r\nGale strings:\r\n$s_A=\\1\\1\\1\\100$, $s_B=\\1\\10\\1\\10$, $s_C=\\100\\1\\1\\1$ and\r\n$s_D=\\10\\1\\10\\1$.
These, in turn, correspond to the facets $A$, $B$, $C$\r\nand $D$ of $C_4(6)$, which are exactly the completely labeled facets for $l_v$.\r\n\r\nFrom the point of view of Gale strings, pivoting on label 3\r\nfrom\r\n\\linebreak[4]\r\n$s_A=\\1\\1\\1\\100$ returns $s_B=\\1\\10\\1\\10$. Analogously,\r\npivoting on label 3 from facet $A$ returns facet $B$.\r\n\r\n\\begin{figure}[ht]\r\n\\strut\\hfill\r\n\\includegraphics[width=40ex]{chapter-3/fig-lh/123432-lh-pivot.pdf}%\r\n\\hfill\r\n\\small\r\n\\begin{tabular}{c | c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c }\r\nfacet & {\\bf 1} & {\\bf 2} & {\\bf 3} & {\\bf 4} & {\\bf 3} & {\\bf 2}\\\\\r\n\\hline\r\n{\\bf A} & \\1 & \\1 & $\\underline{\\1}$ & \\1 & 0               & 0 \\\\\r\n{\\bf B} & \\1 & \\1 & 0                & \\1 & $\\overline{\\1}$ & 0 \\\\\r\n\\hline\r\nfacet & {\\bf 1} & {\\bf 2} & {\\bf 3} & {\\bf 4} & {\\bf 3} & {\\bf 2}\r\n\\end{tabular}\r\n\\hfill\\strut\r\n\\caption[A pivot on $G(4,6)$ and on $C_4(6)$]{%\r\nPivoting on label 3 from $s_A=\\1\\1\\1\\1 00$ to $s_B=\\1\\10\\1\\10$ in the\r\nLemke-Howson for Gale Algorithm corresponds to the pivoting from\r\nfacet $A$ (edges in green) to facet $B$ (edges in blue) in the\r\nDual Lemke-Howson Algorithm.\r\n\r\nThe indices $i=1,2,4$ correspond to the 2-dimensional intersection of\r\n$A$ and $B$ (edges in pink).\r\n}\r\n\\label{lhg-123432-fig}\r\n\\end{figure}\r\n\\end{example}\r\n\r\n
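As an independent check of this count (our sketch, not part of the thesis; it assumes the ``wrapped-around'' reading of Gale evenness for $d$ even, namely that every maximal cyclic run of \\1's has even length), a brute-force enumeration in Python recovers exactly the four strings above:\r\n\r\n\\begin{verbatim}\r\nfrom itertools import combinations\r\n\r\ndef is_gale_even(s):\r\n    # Rotate so position 0 holds a 0; maximal cyclic runs of 1's\r\n    # then become ordinary runs, which must all have even length.\r\n    if all(s):\r\n        return len(s) % 2 == 0\r\n    k = s.index(0)\r\n    t = s[k:] + s[:k]\r\n    runs, run = [], 0\r\n    for bit in t:\r\n        if bit:\r\n            run += 1\r\n        elif run:\r\n            runs.append(run)\r\n            run = 0\r\n    if run:\r\n        runs.append(run)\r\n    return all(r % 2 == 0 for r in runs)\r\n\r\nlabels = [1, 2, 3, 4, 3, 2]   # the labeling l_s = 123432\r\nd, n = 4, 6\r\nfound = []\r\nfor ones in combinations(range(n), d):\r\n    s = [1 if i in ones else 0 for i in range(n)]\r\n    if is_gale_even(s) and {labels[i] for i in ones} == set(range(1, d + 1)):\r\n        found.append(''.join(map(str, s)))\r\nprint(found)  # ['111100', '110110', '101101', '100111']\r\n\\end{verbatim}\r\n\r\nPointed at the Morris labeling $l=1234564523$ of $G(6,10)$ discussed below, the same loop finds just the two completely labeled strings of Example \\ref{morris-ex}.\r\n\r\n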
The membership of \\anothergale\\ in the complexity class\r\n{\\bf PPA} follows from an argument similar to Proposition\r\n\\ref{lh-works-ppa-thm}. We give here the full proof that it is\r\nin {\\bf PPAD}, following the exposition given in Merschen \\cite{jm}.\r\n\r\nA {\\em permutation} of elements of an ordered set $S$ is a sequence\r\nwithout repetition; this gives a rearrangement of the elements of $S$.\r\nA {\\em transposition} is a permutation of exactly two elements.\r\nThe {\\em sign of a permutation} is\r\n$\\sign(\\sigma)=(-1)^m$, where $m$ is the number of transpositions needed\r\nto get the {\\em natural order} $\\sigma_0=1\\ldots n$ from $\\sigma$.\r\nIt is immediate to see that any two permutations that differ by only\r\none transposition have opposite sign.\r\n\r\nWe define the {\\em sign of a completely labeled Gale string} $s\\in G(d,n)$\r\n%% what is \"relative labeling\"?\r\n% as follows: let $l:[n]\\to [d]$ be the relative labeling of $G(d,n)$, and\r\nas follows: let $l:[n]\\to [d]$ be the labeling of $G(d,n)$, and\r\nlet $l_0$ be the string of labels $l(i)$ such that $s(i)=\\1$ and that\r\ntwo labels corresponding to a run in $l$ are adjacent in $l_0$. Then we\r\ndefine $\\sign(s)=\\sign(l_0)$.\r\nNotice that if $l(i)=i$ for $i\\in [d]$ then the sign of the completely\r\nlabeled Gale string $\\1^d 0^{(n-d)}$ is always positive.\r\n\r\nThe {\\em sign of an almost completely labeled Gale string} $s\\in G(d,n)$ with\r\nmissing label~$k$ and duplicate label $h$ is defined via two auxiliary strings.\r\nLet $i_1$ be the index of $h$ reached by the last pivot\r\n(the ``new'' position of the \\1) and let $i_2$ be the index of $h$ such that\r\n$s(i_2)=\\1$ before the last pivot (the ``old'' position of the \\1).\r\nLet $l_1$ be the string obtained as $l_0$ substituting $k$ for\r\n$h$ at index $i_1$, and let $l_2$ be the string obtained as $l_0$\r\nsubstituting $k$ for $h$ at index $i_2$. Notice that $\\sign(l_1)=-\\sign(l_2)$,\r\nsince they can be obtained from each other applying the transposition $(kh)$.\r\n\r\nConsider now the steps of the Lemke paths in the Lemke-Howson for Gale\r\nAlgorithm in the case where $\\sign(s_0)=+1$;\r\nthe negative case is analogous, with opposite signs.\r\nIf the first pivot returns another completely labeled Gale\r\nstring $s$, this must have negative sign because it has been obtained\r\n``jumping'' over an odd number of \\1's.\r\nFor the same reason, if the pivoting returns an almost completely labeled\r\nGale string, we have that $\\sign(l_1)=-1$, which implies $\\sign(l_2)=+1$.\r\nThe next pivoting step drops the label $h$ from index $i_2$, so again we\r\nchange sign. This shows that\r\nthe Lemke-Howson for Gale Algorithm results in the sign\r\nof the completely and almost completely labeled Gale strings\r\n``swinging'' as in Figure \\ref{lhg-sign-figure}.\r\nNotice that all the steps of this construction can be done in polynomial\r\ntime.\r\nOrienting all Lemke paths from positive to negative reduces the problem\r\n\\anothergale\\ to {\\sc End Of The Line}.\r\n\r\n\\begin{figure}[h]\r\n\\strut\\hfill\r\n\\includegraphics[width=70ex]{chapter-3/fig-lh/lhg-sign.pdf}%\r\n\\hfill\\strut\r\n\\caption[Sign switching of the Lemke-Howson for Gale Algorithm]\r\n{Sign switching of the Lemke-Howson for Gale Algorithm.}\r\n\\label{lhg-sign-figure}\r\n\\end{figure}\r\n\r\n\\begin{proposition}\\label{lhg-works-ppad-thm}\r\nThe Lemke-Howson for Gale Algorithm~\\ref{lhg-alg} returns a\r\nsolution to the {\\bf PPAD} problem \\anothergale.\r\n% d even assumed in Algo\r\nFurthermore, the number of completely labeled Gale strings\r\n$s\\in G(d,n)$ is even, and the two completely labeled Gale\r\nstrings at opposite ends of any Lemke path have opposite\r\nsign.\r\n\\end{proposition}\r\n\r\n\\clearpage\r\n\r\n\\begin{example}\r\nLet $l_s=123432$. Consider the Lemke path from the completely labeled\r\nGale string $s=\\1\\1\\1\\100$ with missing label $4$.\r\nFigure \\ref{lhg-sign-ex-figure} illustrates the sign switching of\r\nFigure \\ref{lhg-sign-figure} on this path.\r\n\r\nNotice that\r\n$\\sign(\\10\\1\\10\\1)=\\sign(l(6)l(1)l(3)l(4))=\\sign((2134))$,\r\nsince $s(6)=s(1)=1$ and therefore the indices $6$ and $1$ are consecutive\r\nin the same run.\r\n\r\n\\begin{figure}[h]\r\n\\strut\\hfill\r\n\\includegraphics[width=70ex]{chapter-3/fig-lh/lhg-sign-example.pdf}%\r\n\\hfill\\strut\r\n\\caption[Pivoting with sign]\r\n{Pivoting with sign on the labeling $l=123432$ for $G(4,6)$.}\r\n\\label{lhg-sign-ex-figure}\r\n\\end{figure}\r\n\\end{example}\r\n\r\n\\clearpage\r\n\r\nMorris \\cite{morris} has given an example of a labeling\r\n$l:[2d]\\to [d]$ where the length of the Lemke paths on the\r\ncyclic polytope $C_d(2d)$ for the Lemke-Howson\r\nAlgorithm~\\ref{lh-alg}\r\ngrows exponentially in $d$ for every missing label.\r\nThese paths are therefore also exponential on $G(d,2d)$ for\r\nthe Lemke-Howson for Gale Algorithm~\\ref{lhg-alg}.\r\nWithout repetitions of labels in consecutive positions,\r\nMorris's labeling is equivalent to the following labeling of $[2d-2]$:\r\n\\[\r\n\\arraycolsep.2em\r\n\\begin{array}{rcll}\r\nl(k)&=&k          & \\quad\\quad\\text{for }k\\in [d],    \\\\\r\nl(d+k)&=&d-k+1    & \\quad\\quad\\text{for }k\\in [d - 2]\\text{ and even},   \\\\\r\nl(d+k)&=&d-k-1    & \\quad\\quad\\text{for }k\\in [d - 2]\\text{ and odd}.\r\n\\end{array}\r\n\\]\r\n\r\n\\begin{example}\r\n\\label{morris-ex}\r\nConsider the labeling $l=1234564523$ for $G(6,10)$.
The only two completely\r\nlabeled Gale strings are $s=\\1\\1\\1\\1\\1\\100$ and $s'=\\1 00000\\1\\1\\1\\1\\1$.\r\nTable \\ref{morris-6-fig} shows the Lemke path for missing label~1.\r\n\\begin{table}[hbt]\r\n\\begin{center}\r\n\\large\r\n\\begin{tabular}{c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c @{ } c }\r\n{\\bf 1} & {\\bf 2} & {\\bf 3} & {\\bf 4} & {\\bf 5} & {\\bf 6} & {\\bf 4} & {\\bf 5} & {\\bf 2} & {\\bf 3} \\\\\r\n\\hline\r\n\\d1 & \\1 & \\1 & \\1 & \\1 & \\1 & 0 & 0 & 0 & 0 \\\\\r\n0 & \\1 & \\1 & \\d1 & \\1 & \\1 & \\u1 & 0 & 0 & 0 \\\\\r\n0 & \\1 & \\1 & 0 & \\d1 & \\1 & \\1 & \\u1 & 0 & 0 \\\\\r\n0 & \\d1 & \\1 & 0 & 0 & \\1 & \\1 & \\1 & \\u1 & 0 \\\\\r\n0 & 0 & \\1 & \\u1 & 0 & \\1 & \\d1 & \\1 & \\1 & 0 \\\\\r\n0 & 0 & \\1 & \\1 & \\u1 & \\1 & 0 & \\d1 & \\1 & 0 \\\\\r\n0 & 0 & \\d1 & \\1 & \\1 & \\1 & 0 & 0 & \\1 & \\u1 \\\\\r\n0 & 0 & 0 & \\d1 & \\1 & \\1 & \\u1 & 0 & \\1 & \\1 \\\\\r\n0 & 0 & 0 & 0 & \\d1 & \\1 & \\1 & \\u1 & \\1 & \\1 \\\\\r\n\\u1 & 0 & 0 & 0 & 0 & \\1 & \\1 & \\1 & \\1 & \\1 \\\\\r\n\\hline\r\n{\\bf 1} & {\\bf 2} & {\\bf 3} & {\\bf 4} & {\\bf 5} & {\\bf 6} & {\\bf 4} & {\\bf 5} & {\\bf 2} & {\\bf 3} \\\\\r\n\\end{tabular}\r\n\\normalsize\r\n\\end{center}\r\n\\caption[A Lemke path for the Morris labeling]\r\n{The Lemke path for the Morris labeling on $G(6,10)$ with missing label 1.\r\nThe bit position \\1 that is dropped in the next pivoting step\r\nis underlined, the bit \\1 that has just been picked up is\r\noverlined.}\r\n\\label{morris-6-fig}\r\n\\end{table}\r\n\\end{example}\r\n\r\nSavani and von Stengel \\cite{svs,uvg} extended Morris's\r\nexample in order to construct a series of ``hard to solve'' games.\r\nTheir results point to the importance of studying the\r\ncomplexity of \\anothergale\\ to understand the complexity of 2-{\\sc Nash}.\r\nOur main result, in the next section, will give an\r\n{\\bf FP} algorithm that circumvents the exponential running\r\ntime of the Lemke-Howson for Gale Algorithm by using a very\r\ndifferent approach.\r\n", "meta": {"hexsha": "832882d63f8e51ce813ae969ad43af5063f5bf2e", "size": 26678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapter-3/lemke-howson.tex", "max_stars_repo_name": "mmcasetti/mphil-thesis", "max_stars_repo_head_hexsha": "6d9902c4f813cf3239d1b312e9453b3ea690c3db", "max_stars_repo_licenses": ["OLDAP-2.4"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/chapter-3/lemke-howson.tex", "max_issues_repo_name": "mmcasetti/mphil-thesis", "max_issues_repo_head_hexsha": "6d9902c4f813cf3239d1b312e9453b3ea690c3db", "max_issues_repo_licenses": ["OLDAP-2.4"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/chapter-3/lemke-howson.tex", "max_forks_repo_name": "mmcasetti/mphil-thesis", "max_forks_repo_head_hexsha": "6d9902c4f813cf3239d1b312e9453b3ea690c3db", "max_forks_repo_licenses": ["OLDAP-2.4"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.1172932331, "max_line_length": 101, "alphanum_fraction": 0.6831096784, "num_tokens": 8688, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303087996142, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5990594140061662}}
{"text": "% Standard Article Definition\n\\documentclass[]{article}\n\n% Page Formatting\n\\usepackage[margin=1in]{geometry}\n\\setlength\\parindent{0pt}\n\n% Graphics\n\\usepackage{graphicx}\n\n% Math Packages\n\\usepackage{physics}\n\\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\\usepackage{mathtools}\n\n% Extra Packages\n\\usepackage{listings}\n\\usepackage{hyperref}\n\n% Section Heading Settings\n\\usepackage{enumitem}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\renewcommand*{\\thesection}{Problem \\arabic{section}}\n\\renewcommand*{\\thesubsection}{\\alph{subsection})}\n\\renewcommand*{\\thesubsubsection}{\\quad \\quad \\roman{subsubsection})}\n\n%Custom Commands\n\\newcommand{\\Rel}{\\mathcal{R}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\toI}{\\xrightarrow{\\textsf{\\tiny I}}}\n\\newcommand{\\toS}{\\xrightarrow{\\textsf{\\tiny S}}}\n\\newcommand{\\toB}{\\xrightarrow{\\textsf{\\tiny B}}}\n\n\\newcommand{\\divisible}{ \\ \\vdots \\ }\n\\newcommand{\\st}{\\ : \\ }\n\n\n% Theorem Definition\n\\newtheorem{definition}{Definition}\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{proposition}{Proposition}\n\\newtheorem{example}{Example}\n\n\n%opening\n\\title{MATH 5301 Elementary Analysis - Homework 10}\n\\author{Jonas Wagner}\n\\date{2021, November 12\\textsuperscript{th}}\n\n\\begin{document}\n\n\\maketitle\n\n% Problem 1 ----------------------------------------------\n\\section{}\nProve that the closure and the interior of a convex set $A \\subset \\R^n$ are also convex.\n\n% Convex Set\n\\begin{definition}\n    The set $A$ is called \\emph{\\underline{convex}} if\n    \\[\n        \\forall_{x, y \\in A} \\forall_{t \\in [0, 1]}\n        \\qty((t) x + (1 - t) y) \\in A\n    \\]\n\\end{definition}\n\n% Interior and Closure\n\\begin{definition}\n    For a given set $A \\subseteq (S,d)$,\n    \\begin{enumerate}\n        \\item the \\emph{\\underline{interior}} of $A$ is defined as\n        \\[\n            \\text{int}(A) = \\qty{\n                x \\in A \\st \\exists_{\\epsilon > 0} B_{\\epsilon}(x) \\subset A\n            }\n        \\]\n        \\item the \\emph{\\underline{closure}} of $A$ is defined as\n        \\[\n            \\overline{A} = \\qty{\n                x \\in S \\st \\forall_{\\epsilon > 0} B_\\epsilon(x) \\cap A \\neq \\emptyset\n            }\n        \\]\n    \\end{enumerate}\n\\end{definition}\n\n% Problem 1 Theorem\n\\begin{theorem}\n    If $A \\subset \\R^n$ is a convex set,\n    then the closure of $A$, $\\overline{A}$, is also convex.\n    \\begin{proof}\n        $A$ being convex means that\\[\n            \\forall_{x, y \\in A} \\forall_{t \\in [0, 1]}\n            \\qty((t) x + (1 - t) y) \\in A\n        \\]\n        $\\overline{A}$ is defined by\\[\n            \\overline{A} = \\qty{\n                x \\in \\R^n \\st \\forall_{\\epsilon > 0} B_\\epsilon(x) \\cap A \\neq \\emptyset\n            }\n        \\]\n        For $\\overline{A}$ to be convex, the following would be true:\\[\n            \\forall_{x, y \\in \\overline{A}} \\forall_{t \\in [0,1]} \n                \\qty((t) x + (1 - t) y) \\in \\overline{A}\n        \\]\n        Additionally, since $\\overline{A} = A \\cup \\partial A$, $\\overline{A}$ is convex if\n        \\[\n            \\qty(\n                \\forall_{x \\in A} \\forall_{y \\in \\overline{A}} \\forall_{t \\in [0,1]} \n            
\\qty((t) x + (1 - t) y) \\in \\overline{A}\n            ) \\land \\qty(\n                \\forall_{x \\in \\partial A} \\forall_{y \\in \\overline{A}} \\forall_{t \\in [0,1]} \n                    \\qty((t) x + (1 - t) y) \\in \\overline{A}\n            )\n        \\]\n        Both conjuncts follow from a single limit argument.\n        Let $x, y \\in \\overline{A}$ and $t \\in [0,1]$.\n        By the definition of $\\overline{A}$, there are sequences $x_k, y_k \\in A$ with $x_k \\to x$ and $y_k \\to y$.\n        Convexity of $A$ gives $(t) x_k + (1 - t) y_k \\in A$ for every $k$, and $(t) x_k + (1 - t) y_k \\to (t) x + (1 - t) y$, so every ball $B_\\epsilon\\qty((t) x + (1 - t) y)$ intersects $A$.\n        Therefore,\n        \\[\n            \\forall_{x, y \\in \\overline{A}} \\forall_{t \\in [0,1]} \n                \\qty((t) x + (1 - t) y) \\in \\overline{A}\n        \\]\n    \\end{proof}\n\\end{theorem}\n\n
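The problem statement also asks about the interior; the following sketch (ours; the original write-up stops at the closure) handles it in the same style.\n\\begin{theorem}\n    If $A \\subset \\R^n$ is a convex set, then the interior $\\textnormal{int}(A)$ is also convex.\n    \\begin{proof}\n        Let $x, y \\in \\textnormal{int}(A)$ and $t \\in [0,1]$, and pick $\\epsilon > 0$ with $B_\\epsilon(x) \\subset A$ and $B_\\epsilon(y) \\subset A$.\n        For any $v$ with $\\norm{v} < \\epsilon$,\\[\n            (t)(x + v) + (1 - t)(y + v) = (t) x + (1 - t) y + v \\in A\n        \\]\n        by convexity of $A$, so $B_\\epsilon\\qty((t) x + (1 - t) y) \\subset A$ and hence $(t) x + (1 - t) y \\in \\textnormal{int}(A)$.\n    \\end{proof}\n\\end{theorem}\n\n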
% Problem 2 ----------------------------------------------\n\\newpage\n\\section{}\nProve that the intersection of an arbitrary collection of convex sets $\\cap_{i \\in I} C_i$ is also convex.\n\n% Problem 2 Theorem\n\\begin{theorem}\n    If each of the sets within the collection $C_i \\subset (S,d)$ is convex,\n    then the intersection of the collection, $\\cap_{i \\in I} C_i$, is also convex.\n    \\begin{proof}\n        For $\\cap_{i \\in I} C_i$ to be convex, the following must be true: \\[\n            \\forall_{x, y \\in \\cap_{i \\in I} C_i} \n                \\forall_{t \\in [0,1]} (t) x + (1 - t) y \\in \\cap_{i \\in I} C_i\n        \\]\n        Which is the same as: \\[\n            \\forall_{x,y \\in S} \\st \\forall_{i \\in I} x,y \\in C_i \n                \\implies \\forall_{t \\in [0,1]} \\forall_{i \\in I} (t) x + (1 - t) y \\in C_i\n        \\]\n        Since all the sets $C_i$ are convex, by definition: \\[\n            \\forall_{x, y \\in C_i} \\forall_{t \\in [0,1]} \n                (t) x + (1 - t) y \\in C_i\n        \\]\n        This holds for every $i \\in I$ simultaneously: \\[\n            \\forall_{i \\in I} \\ \\forall_{x,y \\in C_i} \\ \\forall_{t \\in [0,1]} \\ (t) x + (1 - t) y \\in C_i\n        \\]\n        Which is equivalent to: \\[\n            \\forall_{x, y \\in \\cap_{i \\in I} C_i} \n                \\forall_{t \\in [0,1]} (t) x + (1 - t) y \\in \\cap_{i \\in I} C_i\n        \\]\n\n    \\end{proof}\n\\end{theorem}\n\n% Problem 3 ----------------------------------------------\n\\newpage\n\\section{}\nLet $\\{C_i\\}_{i \\in \\N}$ be a sequence of nested convex sets in $\\R^n$, i.e. $C_i \\subset C_{i+1}$.\nProve that $\\cup_{i = 1}^\\infty C_i$ is also convex.\n\n% Problem 3 Theorem\n\\begin{theorem}\n    For the sequence of nested convex sets in $\\R^n$, $\\qty{C_i}_{i \\in \\N}$, the union of all the elements, $\\cup_{i = 1}^\\infty C_i$, is also convex.\n    \\begin{proof}\n        Proof by induction.\n        \n        For $n = 1$, the set $\\cup_{i = 1}^n C_i = C_1$ is convex.\n        \n        For $n = 2$, the set $\\cup_{i = 1}^n C_i = C_1 \\cup C_2$ is convex.\n        \\begin{proof}\n            Since $C_1 \\subset C_2$, $C_1 \\cup C_2 = C_2$ and $C_2$ is convex.\n        \\end{proof}\n\n        Assuming for $n = k$, $\\cup_{i = 1}^k C_i = C_k$ is convex, \n        then for $n = k+1$, $\\cup_{i = 1}^{k + 1} C_i = C_{k + 1}$ is convex.\n        \\begin{proof}\n            Since $C_k \\subset C_{k+1}$, \\[\n                \\cup_{i = 1}^{k + 1} C_i = \\cup_{i = 1}^{k} C_i \\cup C_{k + 1} = C_{k + 1}\n            \\]\n            which is convex.\n        \\end{proof}\n\n        Therefore, by induction, \\[\n            \\forall_{n \\in \\N} \\cup_{i = 1}^n C_i\n        \\]\n        is convex.\n        The induction only covers finite unions, so for the infinite union we argue directly: for any $x, y \\in \\cup_{i = 1}^\\infty C_i$ there are indices $j, k \\in \\N$ with $x \\in C_j$ and $y \\in C_k$, and by nestedness both lie in $C_{\\max(j,k)}$.\n        Since $C_{\\max(j,k)}$ is convex, \\[\n            \\forall_{t \\in [0,1]} (t) x + (1 - t) y \\in C_{\\max(j,k)} \\subset \\cup_{i = 1}^\\infty C_i\n        \\]\n        so $\\cup_{i = 1}^\\infty C_i$ is convex.\n    \\end{proof}\n\\end{theorem}\n\n% Problem 4 -----------------------------------------------\n\\newpage\n\\section{}\n\n% Convex Hull\n\\begin{definition}\n    The \\emph{\\underline{convex hull}} for set $A \\in (S,d)$ is defined as \\[\n        \\textnormal{conv}(A) = \\cap_{C \\supseteq A \\st C \\textnormal{ convex}} C\n    \\]\n    Additionally, for $A \\subset \\R^n$, \\[\n        \\textnormal{conv}(A) = \\cup_{m = 1}^{\\infty} C_m\n    \\]\\[\n        C_m = \\qty{\n            x \\in \\R^n \\st\n            x = \\alpha_1 a_1 + \\cdots + \\alpha_m a_m, \\\n            a_1, \\dots, a_m \\in A, \\\n            \\alpha_i \\geq 0, \\\n            \\sum_{i} \\alpha_i = 1\n        }\n    \\]\n\\end{definition}\n% Open Set\n\\begin{definition}\n    The set $A \\subset V$ is called \\underline{open} if \\[ \n        \\forall_{x\\in A} \\exists_{\\epsilon>0} : B_\\epsilon(x)\\subset A\n    \\]\n    or equivalently, \\[\n        \\forall_{x \\in A} \\exists_{\\epsilon>0} : \\forall_{y \\in V}\\norm{x - y} < \\epsilon \\implies y \\in A\n    \\]\n\\end{definition}\n% Closed Set\n\\begin{definition}\n    The set $A \\subset V$ is called \\underline{closed} if $A^c$ is open.\n\\end{definition}\n\n% Part a\n\\subsection{}\nShow that the convex hull of any open set in $\\R^n$ is open.\n\n% Problem 4a Theorem\n\\begin{theorem}\n    If $A \\subset \\R^n$ is open,\n    then the convex hull $\\textnormal{conv}(A)$\n    is open.\n    \\begin{proof}\n        Let $x \\in \\textnormal{conv}(A)$, so that $x = \\alpha_1 a_1 + \\cdots + \\alpha_m a_m$ with $a_1, \\dots, a_m \\in A$, $\\alpha_i \\geq 0$, and $\\sum_i \\alpha_i = 1$.\n        Since $A$ is open, there is a single $\\epsilon > 0$ with $B_\\epsilon(a_i) \\subset A$ for each of the finitely many $a_i$.\n        For any $v$ with $\\norm{v} < \\epsilon$, \\[\n            x + v = \\alpha_1 (a_1 + v) + \\cdots + \\alpha_m (a_m + v)\n        \\]\n        is again a convex combination of points of $A$, so $B_\\epsilon(x) \\subset \\textnormal{conv}(A)$.\n        Therefore $\\textnormal{conv}(A)$ is open.\n    \\end{proof}\n\\end{theorem}\n\n% Part b\n\\subsection{}\nProvide an example of a closed
% Part b\n\\subsection{}\nProvide an example of a closed set $A \\subset \\R^n$, such that its convex hull is not closed.\n\n\\begin{example}\n    Let the closed set $A \\subset \\R^2$ be defined as\\[\n        A := \\qty{\n            (x,y) \\in \\R^2 \\st y \\geq e^{-x^2}\n        }\n    \\]\n    Clearly, $A$ is closed; however, the convex hull of $A$ is the open half-plane\\[\n        \\textnormal{conv}(A) = \\qty{\n            (x,y) \\in \\R^2 \\st y > 0\n        }\n    \\]\n    which is not closed, due to the asymptote of $y = e^{-x^2}$ along the $x$-axis:\n    the segment joining $(-n, e^{-n^2})$ to $(n, e^{-n^2})$ lies in $\\textnormal{conv}(A)$ and passes through $(0, e^{-n^2})$, so $\\textnormal{conv}(A)$ contains points arbitrarily close to the $x$-axis, while no point with $y \\leq 0$ belongs to $\\textnormal{conv}(A)$, since $\\qty{y > 0}$ is a convex set containing $A$.\n\\end{example}\n\n% Problem 5 -----------------------------------------------\n\\newpage\n\\section{}\nLet $f : \\R^n \\to \\R$ be a convex function and $A \\subset \\R^n$ be a bounded set.\nProve that $f(A)$ is bounded in $\\R$.\n\n% Convex Function\n\\begin{definition}\n    Function $f : [a,b] \\to \\R$ is considered a \\emph{\\underline{convex function}} if\\[\n        \\forall_{x_1,x_2 \\in [a,b]} \\forall_{t \\in [0,1]}\n        f(t x_1 + (1 - t) x_2) \\leq t f(x_1) + (1 - t) f(x_2)\n    \\]\n\\end{definition}\n\n% Problem 5 Theorem\n\\begin{theorem}\n    If the function $f : \\R^n \\to \\R$ is a convex function,\n    then for a bounded set $A \\subset \\R^n$, the image $f(A)$ is bounded in $\\R$.\n    \\begin{proof}\n        $f$ being convex means that\\[\n            \\forall_{\\va{x},\\va{y} \\in \\R^n} \\forall_{t \\in [0,1]}\n                f(t \\va{x} + (1 - t) \\va{y}) \\leq t f(\\va{x}) + (1 - t) f(\\va{y})\n        \\]\n        $A$ being bounded means\\[\n            \\exists_{N} \\st \\forall_{\\va{x} \\in A}  \\norm{\\va{x}} \\leq N\n        \\]\n        A convex function on $\\R^n$ is continuous, and since $A$ is bounded, its closure $\\bar{A}$ is closed and bounded, hence compact.\n        A continuous function attains a finite maximum and minimum on a compact set; therefore\\[\n            \\exists_{M} \\st \\forall_{\\va{x} \\in \\bar{A}} \\quad \\abs{f(\\va{x})} \\leq M\n        \\]\n        Since $A \\subseteq \\bar{A}$, it follows that $f(A) \\subseteq [-M, M]$,\n        which means $f(A)$ is bounded.\n    \\end{proof}\n\\end{theorem}\n\n% Problem 6 -----------------------------------------------\n\\newpage\n\\section{}\nShow that the convex hull of a compact set $A \\subset \\R^n$ is compact.\n(\\textit{Hint:} Carath\\'eodory theorem)\n\n\\end{document}\n", "meta": {"hexsha": "cd550ab0534a7d2cd3539373dc3b8e4f345ac02d", "size": 11338, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW11/MATH5301-HW11.tex", "max_stars_repo_name": "jonaswagner2826/MATH5301", "max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z", "max_issues_repo_path": "Homework/HW11/MATH5301-HW11.tex", "max_issues_repo_name": "jonaswagner2826/MATH5301", "max_issues_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW11/MATH5301-HW11.tex", "max_forks_repo_name": "jonaswagner2826/MATH5301", "max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 33.055393586, "max_line_length": 181, "alphanum_fraction": 0.5432174987, "num_tokens": 3739, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737473266735, "lm_q2_score": 0.8774767858797979, "lm_q1q2_score": 0.5989426179301388}}
{"text": "%% SECTION HEADER /////////////////////////////////////////////////////////////////////////////////////\n\\section{Parallel Implementation of the Internal Force Vector Calculation}\n\\label{sec:gpu}\n\n%% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////\n\r\nThe most time-consuming operation in the equation (\\ref{eq:motion}) is calculating the internal force vector $\\textbf{F}_{int}=\\textbf{Kd}$, as the stiffness matrix \\textbf{K} occupies a large amount of memory, and it is defined as:\r\n\\begin{eqnarray}\r\n\t\\label{eq:K}\r\n\t\\textbf{K}^P=\\int_{V^P}\\textbf{B}^T\\textbf{C}\\textbf{B}dV^P\r\n\\end{eqnarray}\r\nwhere \\textbf{c} is the elastic constant matrix, and \\textbf{B} is the strain-displacement matrix.\r\nInstead of allocating the matrix \\textbf{K} defined by equation~\\ref{eq:K}, Kudela proposed a parallelized computation of the internal force vector \\cite{kudela2016parallel}.\r\nIn the pre-processing, the natural derivatives matrix, the vector of inverted components of the Jacobian matrix, and the integration weights multiplied by the Jacobian determinant is rearranged from global to the local form:\r\n\\begin{eqnarray}\r\n\t\\label{eq:isoparametric}\r\n\t\\textbf{N}^P_{,\\xi} & = & \\left[ \\begin{array}{cccc}\r\n\t\t\\textbf{N}^{e=1}_{,\\xi} & \\textbf{0} & \\ldots & \\textbf{0}\\\\\r\n\t\t\\textbf{0} & \\textbf{N}^{e=2}_{,\\xi} & \\ldots & \\textbf{0}\\\\\r\n\t\t\\vdots & \\vdots &  \\ddots & \\vdots\\\\\r\n\t\t\\textbf{0} & \\textbf{0} & \\ldots & \\textbf{N}^{e=n}_{,\\xi}\r\n\t\\end{array}\\right],\\\\\r\n\t\\label{eq:jacob}\r\n\t\\left(\\textbf{J}^P\\right)^{ij}_{inv} & = & \\left\\{ \\begin{array}{c}\r\n\t\t\\left(\\textbf{J}^{e=1}\\right)^{ij}_{inv}\\\\\r\n\t\t\\left(\\textbf{J}^{e=2}\\right)^{ij}_{inv}\\\\\r\n\t\t\\vdots\\\\\r\n\t\t\\left(\\textbf{J}^{e=n}\\right)^{ij}_{inv} \\end{array}\\right\\}\\\\\r\n\t\\label{eq:intWeights}\r\n\t\\textbf{w}^P & = & \\left\\{ \\begin{array}{c}\r\n\t\t\\textbf{w}^{e=1}\\\\\r\n\t\t\\textbf{w}^{e=2}\\\\\r\n\t\t\\vdots\\\\\r\n\t\t\\textbf{w}^{e=n} \\end{array}\\right\\} \\circ\r\n\t\\left\\{ \\begin{array}{c}\r\n\t\tdet(\\textbf{J})^{e=1}\\\\\r\n\t\tdet(\\textbf{J})^{e=2}\\\\\r\n\t\t\\vdots\\\\\r\n\t\tdet(\\textbf{J})^{e=n} \\end{array}\\right\\}\r\n\\end{eqnarray}\r\nwhere $n$ is the spectral elements number in modeled domain; \\textbf{J} is the Jacobian matrix; $i,j=1\\ldots3$; and $\\circ$ denotes element-wise multiplication.\r\nThe $\\textbf{N}^P_{,\\xi}$ is a block-diagonal sparse matrix, and the equality of $\\textbf{N}^1_{,\\xi}=\\textbf{N}^2_{,\\xi}=\\ldots=\\textbf{N}^n_{,\\xi}$ holds if the same order of interpolation shape function is used for the all elements.\r\nBesides, a vector of local node indices $\\textbf{I}_L$ and corresponding global node indices $\\textbf{I}_G$ must be defined in the preprocessing process.\r\n\nAdjacent elements in the mesh share nodes, so one node in the global system can correspond to several nodes in the local system. Since independent operations on vectors are necessary for parallel computation on \\ac{gpu}, $I_{G}$ must be rearranged to separate all duplicated nodes. Therefore, the matrix $I_{G}$ is created in which no column has repeated indices of the nodes. Then, the corresponding local map $I_{L}$ must also be created. For the rearrangement algorithm presented in \\cite{kudela2016parallel} was used.\r\n\nThe following computational operations are performed during the time integration algorithm. 
\nThe following computational operations are performed during the time integration algorithm. Firstly, the global vector of nodal displacements is gathered into the element-level nodal displacements as:\r\n\n\\begin{eqnarray}\r\n\t\\widehat{\\textbf{d}}_i^P = \\left\\{ \\begin{array}{c}\r\n\t\t\\widehat{\\textbf{d}}_i^{e=1}\\\\\r\n\t\t\\widehat{\\textbf{d}}_i^{e=2}\\\\\r\n\t\t\\vdots\\\\\r\n\t\t\\widehat{\\textbf{d}}_i^{e=n} \\end{array}\\right\\}\r\n\\end{eqnarray}\r\nwhere $\\widehat{\\textbf{d}}_i^e$ denotes the nodal displacements of an element at the $i^{th}$ time step.\r\nNext, the strain and stress vectors are calculated as:\r\n\\begin{eqnarray}\r\n\t\\label{eq:strain}\r\n\t\\boldsymbol{\\epsilon}=\\left[\\boldsymbol{\\epsilon}_{xx},\\ \\boldsymbol{\\epsilon}_{yy},\\ \\boldsymbol{\\epsilon}_{zz},\\ \\boldsymbol{\\gamma}_{yz},\\ \\boldsymbol{\\gamma}_{xz},\\ \\boldsymbol{\\gamma}_{xy}\\ \\right]^T&=&\\textbf{B}^e\\widehat{\\textbf{d}}^e\\\\\r\n\t\\label{eq:stress}\r\n\t\\boldsymbol{\\sigma}=\\left[\\boldsymbol{\\sigma}_{xx},\\ \\boldsymbol{\\sigma}_{yy},\\ \\boldsymbol{\\sigma}_{zz},\\ \\boldsymbol{\\tau}_{yz},\\ \\boldsymbol{\\tau}_{xz},\\ \\boldsymbol{\\tau}_{xy}\\ \\right]^T&=&\\textbf{C}\\boldsymbol{\\epsilon}\r\n\\end{eqnarray}\r\nThe formulations of equations~(\\ref{eq:strain}) and~(\\ref{eq:stress}) for the 3D and the first-order shear deformation models can be found in \\cite{kudela2016parallel} and \\cite{kudela2020parallel}, respectively.\r\nThen, the internal force vector is calculated as:\r\n\\begin{eqnarray}\r\n\t\\label{eq:forces}\r\n\t\\textbf{F}^P_{int}=\\left[\\textbf{F}^P_1,\\ \\textbf{F}^P_2,\\ \\ldots\\ \\textbf{F}^P_{n} \\right]^T={\\textbf{B}^e}^T\\boldsymbol{\\sigma}\r\n\\end{eqnarray}\r\nwhere $n$ here denotes the number of nodal degrees of freedom.\r\nIt should be mentioned that the $\\boldsymbol{\\epsilon}$, $\\boldsymbol{\\sigma}$ and $\\textbf{F}^P_{int}$ components are calculated separately, with the appropriate order of performing the element-wise multiplication of the particular vectors.\r\nThis approach is essential in order to keep the calculations matrix-free.\r\n\nFinally, the assembly of the internal force vector is performed using $\\textbf{I}_G$ and $\\textbf{I}_L$ as follows:\r\n\n\\begin{eqnarray}\r\n\t\\label{eq:Fint}\r\n\t{\\left(\\textbf{F}_{int}\\right)}^i_{\\textbf{I}^m_G} = {\\left(\\textbf{F}_{int}\\right)}^i_{\\textbf{I}^m_G} + {\\left(\\textbf{F}^P_{int}\\right)}^i_{\\textbf{I}^m_L},\\quad \\textnormal{for}\\ m=1\\ldots col \r\n\\end{eqnarray}\r\nwhere $col$ is the number of columns of $\\textbf{I}_G$.\r\n\nIn the present work, some improvements have been made to the above algorithm to render it more computationally efficient.\r\nInstead of calculating the internal force vector in a for loop as in equation~(\\ref{eq:Fint}), it is recommended to assign all local forces into a matrix as:\r\n\\begin{eqnarray}\r\n\t\\label{eq:Fmatrix}\r\n\t{\\left(\\textbf{F}_{int}\\right)}^i_{\\textbf{I}_G} ={\\left(\\textbf{F}^P_{int}\\right)}^i_{\\textbf{I}_L}\r\n\\end{eqnarray}\r\nand then return the column vector containing the sum of each row of the matrix ${\\left(\\textbf{F}^P_{int}\\right)}^i_{\\textbf{I}_L}$.\r\nFor example, in Matlab this can be done with the built-in function \\texttt{sum} as:\r\n\\begin{eqnarray}\r\n\t\\label{eq:Fsum}\r\n\t{\\left(\\textbf{F}_{int}\\right)}^i = sum \\left({\\left(\\textbf{F}^P_{int}\\right)}^i_{\\textbf{I}_L},2\\right);\r\n\\end{eqnarray}\r\n
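A minimal NumPy sketch of this row-sum assembly is given below; the array names and the padding convention are illustrative assumptions, not the implementation used in this work.\r\n\\begin{verbatim}\nimport numpy as np\n\ndef assemble_internal_force(f_local_flat, i_l):\n    # f_local_flat: element-level internal forces flattened over all\n    #   local nodes, with one trailing zero entry used as padding\n    # i_l: (n_global_nodes, n_cols) integer map; entry (r, m) points to\n    #   the m-th local contribution of global node r, or to the padding\n    f_mat = f_local_flat[i_l]   # eq. (Fmatrix): a single gather, no loop\n    return f_mat.sum(axis=1)    # eq. (Fsum): row-wise sum, cf. sum(.., 2)\n\\end{verbatim}\nBoth the gather and the row-wise reduction are independent vector operations, in line with the parallelization requirement noted above.\r\n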
A fixed number of columns in equation~(\\ref{eq:Fint}) was proposed in \\cite{kudela2016parallel}. In the current approach, the number of columns is chosen adaptively according to the given mesh. It should be chosen as the smallest divisor of the number of nodes in an element that is not less than the maximum number of elements sharing a node. In this way, fewer serial operations are performed and \\ac{gpu} resources are better utilized.\r\n\nFurther code modifications concerned the storage scheme. Instead of storing in memory both the isoparametric derivatives of equation~(\\ref{eq:isoparametric}) and the inverted components of the Jacobian matrix shown in equation~(\\ref{eq:jacob}), it is recommended to calculate the derivatives in the global coordinate system as:\r\n\\begin{eqnarray}\r\n\t\\textbf{N}^P_{,X} = \\textbf{J}^{-1}\\,\\textbf{N}^P_{,\\xi} \r\n\\end{eqnarray}\r\nAlso, the multiplication of the elastic constant matrix $\\textbf{C}$ with the integration weights defined in equation~(\\ref{eq:intWeights}) can be performed in the preprocessing stage, before the main loop over the integration time steps.\n", "meta": {"hexsha": "e1f51c7cce0ac2a93a6e8011727a2ba51cd9c241", "size": 7318, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:gpu.tex", "max_stars_repo_name": "pfiborek/model_hc", "max_stars_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:gpu.tex", "max_issues_repo_name": "pfiborek/model_hc", "max_issues_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:gpu.tex", "max_forks_repo_name": "pfiborek/model_hc", "max_forks_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.4554455446, "max_line_length": 522, "alphanum_fraction": 0.6977316207, "num_tokens": 2294, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767970940975, "lm_q2_score": 0.6825737344123243, "lm_q1q2_score": 0.5989426142526835}}
{"text": "\n\\section{Declarative System}\n\n\\begin{figure}[t]\n    \\begin{gather*}\n    \\begin{aligned}\n        \\text{Type variables}\\qquad&a, b\\\\\n        \\text{Types}\\qquad&A, B, C &::=&\\quad 1 \\mid \\top \\mid \\bot \\mid a \\mid \\all A \\mid A\\to B\\\\\n        \\text{Monotypes}\\qquad&\\tau &::=&\\quad 1 \\mid \\top \\mid \\bot \\mid a \\mid \\tau_1\\to \\tau_2\\\\\n        \\text{Expressions}\\qquad&e &::=&\\quad x \\mid () \\mid \\lam e \\mid e_1~e_2 \\mid (e:A)\\\\\n        \\text{Context}\\qquad&\\Psi &::=&\\quad \\cdot \\mid \\Psi, a \\mid \\Psi, x:A\n    \\end{aligned}\n    \\end{gather*}\n\\Description{Declarative Syntax}\n\\caption{Declarative Syntax}\\label{fig:top_decl_syntax}\n\\end{figure}\n\n\\paragraph{Syntax}\nThe syntax of the declarative system, shown in Figure~\\ref{fig:top_decl_syntax},\nis similar to the previous systems by having\na primitive type $1$, type variables $a$,\npolymorphic types $\\all A$ and function types $A \\to B$.\nAdditionally, top and bottom types are introduced to the type system.\n% intro of top/bot moved to intro\n\nThe well-formedness formalization of the system is standard\nand almost identical to the previous systems,\ntherefore we omit the formal definitions.\n\n\n\\begin{figure}[t]\n    \\framebox{$\\Psi \\vdash A \\le B$}\n    \\begin{gather*}\n    \\inferrule*[right=$\\mathtt{{\\le}Var}$]\n    {a\\in\\Psi}{\\Psi\\vdash a\\le a}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{{\\le}Unit}$]\n    {~}{\\Psi \\vdash 1 \\le 1}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{{\\le}{\\to}}$]\n    {\\Psi \\vdash B_1 \\le A_1 \\quad \\Psi \\vdash A_2 \\le B_2}\n    {\\Psi\\vdash A_1\\to A_2 \\le B_1\\to B_2}\n    \\\\\n    \\inferrule*[right=$\\mathtt{{\\le}\\forall L}$]\n    {\\Psi\\vdash \\tau \\quad \\Psi\\vdash [\\tau/a] A \\le B}\n    {\\Psi\\vdash \\all A \\le B}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{{\\le}\\forall R}$]\n    {\\Psi, b\\vdash A\\le B}\n    {\\Psi\\vdash A \\le \\all[b]B}\n    \\\\\n    \\pmb{\n    \\inferrule*[right=$\\mathtt{{\\le}Top}$]\n    {~}\n    {A \\le \\top}\n    }\n    \\qquad\n    \\pmb{\n    \\inferrule*[right=$\\mathtt{{\\le}Bot}$]\n    {~}\n    {\\bot \\le A}\n    }\n    \\end{gather*}\n\\Description{Declarative Subtyping}\n\\caption{Declarative Subtyping}\\label{fig:top_decl_subtyping}\n\\end{figure}\n\n\\paragraph{Declarative Subtyping}\nShown in Figure~\\ref{fig:top_decl_subtyping},\nthe declarative subtyping extends the polymorphic subtyping relation\noriginally proposed by \\citet{odersky1996putting}\nby adding Rules $\\mathtt{{\\le}Top}$ and $\\mathtt{{\\le}Bot}$,\ndefining the properties of the $\\top$ and $\\bot$ types, respectively.\nAlthough the new rules seem quite simple,\nthey may increase the uncertainty of polymorphic instantiations.\nFor example, the subtyping judgment\n\\[\\all a \\to a \\le \\bot \\to \\top\\]\naccepts any well-formed instantiation on the polymorphic type $\\all a \\to a$.\n\n\n\\begin{figure}[t]\n    \\begin{tabular}{rl}\n        \\framebox{$\\Psi \\vdash e \\Lto A$} & $e$ checks against input type $A$.\\\\[0.5mm]\n        \\framebox{$\\Psi \\vdash e \\To A$} & $e$ synthesizes output type $A$.\\\\[0.5mm]\n        \\framebox{$\\Psi \\vdash \\appInf{A}{e}{C}$} & Applying a function of type $A$ to $e$ synthesizes type $C$.\n    \\end{tabular}\n    \\begin{gather*}\n    \\inferrule*[right=$\\mathtt{DeclVar}$]\n        {(x:A)\\in\\Psi}{\\Psi\\vdash x\\To A}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{DeclSub}$]\n    %e \\neq \\lam e' \\quad B \\neq \\all 
B' \\quad \n        {\\Psi\\vdash e\\To A \\quad \\Psi\\vdash A\\le B}\n        {\\Psi \\vdash e\\Lto B}\n    \\\\\n    \\inferrule*[right=$\\mathtt{DeclAnno}$]\n        {\\Psi \\vdash A \\quad \\Psi\\vdash e\\Lto A}\n        {\\Psi\\vdash (e:A)\\To A}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl1I{\\To}}$]\n        {~}{\\Psi\\vdash () \\To 1}\n    \\\\\n    \\inferrule*[right=$\\mathtt{Decl1I}$]\n        {~}{\\Psi\\vdash () \\Lto 1}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl\\top}$]\n        {\\Psi \\vdash e}\n        {\\Psi\\vdash e \\Lto \\top}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl{\\bot}App}$]\n        {\\Psi\\vdash e}\n        {\\Psi\\vdash \\appInf{\\bot}{e}{\\bot}}\n    \\\\\n    \\inferrule*[right=$\\mathtt{Decl\\forall I}$]\n        {\\Psi,a \\vdash e \\Lto A}\n        {\\Psi\\vdash e\\Lto \\all A}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl\\forall App}$]\n        {\\Psi \\vdash \\tau \\quad \\Psi\\vdash \\appInf{[\\tau/a]A}{e}{C} }\n        {\\Psi\\vdash \\appInf{\\all A}{e}{C}}\n    \\\\\n    \\inferrule*[right=$\\mathtt{Decl{\\to}I}$]\n        {\\Psi,x:A \\vdash e\\Lto B}\n        {\\Psi\\vdash \\lam e \\Lto A \\to B}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl{\\to}I{\\To}}$]\n        {\\Psi\\vdash \\sigma\\to\\tau \\quad \\Psi,x:\\sigma \\vdash e\\Lto \\tau}\n        {\\Psi\\vdash \\lam e \\To \\sigma\\to\\tau}\n    \\\\\n    \\inferrule*[right=$\\mathtt{Decl{\\to} E}$]\n        {\\Psi\\vdash e_1\\To A \\quad \\Psi\\vdash \\appInf{A}{e_2}{C}}\n        {\\Psi\\vdash e_1~e_2 \\To C}\n    \\qquad\n    \\inferrule*[right=$\\mathtt{Decl{\\to}App}$]\n        {\\Psi\\vdash e \\Lto A}\n        {\\Psi\\vdash \\appInf{A \\to C}{e}{C}}\n    \\end{gather*}\n\\Description{Declarative Typing}\n\\caption{Declarative Typing}\\label{fig:top_decl_typing}\n\\end{figure}\n\n\\paragraph{Declarative Typing}\n\nThe declarative typing rules, shown in Figure~\\ref{fig:top_decl_typing},\nextend DK's higher-ranked type system in order to support the top and bottom types.\nRule $\\mathtt{Decl\\top}$ allows any well-formed expression to check against $\\top$.\nRule $\\mathtt{Decl{\\bot}App}$ returns the $\\bot$ type\nwhen a function of $\\bot$ type is applied to any argument.\nAll other rules remain exactly the same as in the systems of\nChapters~\\ref{chap:ITP} and \\ref{chap:ICFP}.\n\nIt's worth mentioning that the design of the two new rules\nis driven by the subsumption property described in Section~\\ref{sec:meta:decl}.\nThey maintain the property in the presence of a more powerful declarative subtyping,\nas we will discuss further in that section.\n\n\\setcounter{algRuleCounter}{0}\n", "meta": {"hexsha": "c81c23432117f4c77e78ae895f815432676e37ca", "size": 5676, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Sources/Top/Declarative.tex", "max_stars_repo_name": "JimmyZJX/Dissertation", "max_stars_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Sources/Top/Declarative.tex", "max_issues_repo_name": "JimmyZJX/Dissertation", "max_issues_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Sources/Top/Declarative.tex", "max_forks_repo_name": "JimmyZJX/Dissertation", 
"max_forks_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.9240506329, "max_line_length": 112, "alphanum_fraction": 0.628435518, "num_tokens": 2003, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767906859264, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5989426042126131}}
{"text": "\\section{Macroscale Constitutive Model}\n\nIn this section we describe the macroscale stress-strain relationship, $\\dot{\\boldsymbol{\\sigma}}^M=\\dot{\\boldsymbol{\\sigma}}^M\\left(\\dot{\\boldsymbol{\\epsilon}}^M, \\boldsymbol{\\chi},\\mathbf{h}\\right)$, used in the validation examples. The model is chosen to be complex enough to make the validation of the framework meaningful; however, we do not claim that this is the \"best\" macroscale model. The framework presented is general and the macroscale constitutive model described here can be replaced in particular applications by a different one. To simplify the discussion and notation in this section, the superscript \"$M$\" is omitted since all quantities defined describe macroscale behaviour.\n\nA Continuum Damage Mechanics (CDM) constitutive model was chosen to represent the NFR at the macroscale. CDM is a branch of continuum mechanics that is concerned with modeling the progressive failure and stiffness degradation in solid materials. CDM in this investigation is used to help describe the micro-mechanical degradation of the rock mass due to the nucleation and growth of cracks and voids. This micro-mechanical degradation is represented in a CDM model by using macroscopic state variables to represent a spatial average of the effects of this degradation. These state variables used in this context with respect to CDM are known as damage variables. \n\nThe damage variables in a CDM model can be described in different capacities. Often, for mathematical and physical simplicity, a single scalar damage variable is used to characterize the state of damage in the material. In this case, the damage variable, $D$, takes a value between 0 and 1 to represent the degree of damage to the material, where $D=0$ represents a completely undamaged material (original stiffness) and $D=1$ represents a completely damaged material with no stiffness. A scalar damage description limits the applicability of the CDM model to an isotropically damaged state, which may not be appropriate in some circumstances. More sophisticated CDM models use 2\\textsuperscript{nd} and 4\\textsuperscript{th} order tensorial representations of the damage variables as well as distinguishing between compressive damage and tensile damage states in order to more accurately characterize anisotropic damage evolution. In this paper, a ductile isotropic damage formulation is prescribed using a modified Johnson-Cook damage initiation criterion and a linear stiffness degradation model.\n\nIn addition to damage, the elasto-plastic behaviour of the rock is also considered using an extended Drucker-Prager model with a linear yield criterion and a Barcelona hardening function. Models that incorporate theories of plasticity and damage mechanics in a unified approach to damage evolution and constitutive relationships are often referred to as damage-plasticity models \\citep{zhang_continuum_2010}. 
In CDM, the notion of effective stress, $\\bar{\\boldsymbol{\\sigma}}$, becomes useful to describe the mechanics of the system, as it refers to the stress that the system would experience without damage. The effective stress is related to the actual Cauchy stress through the scalar damage variable: $\\boldsymbol{\\sigma}=\\left(1-D\\right)\\bar{\\boldsymbol{\\sigma}}$.\n", "meta": {"hexsha": "0e65ad07118d0f1c4f5f88a4e8992628d2fe1df4", "size": 3928, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "section_continuumModels.tex", "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "section_continuumModels.tex", "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "section_continuumModels.tex", "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "avg_line_length": 206.7368421053, "max_line_length": 1099, "alphanum_fraction": 0.8065173116, "num_tokens": 850, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246118695629, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5988223983028013}}
{"text": "%!TEX root = inversion.tex\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{MT inversion: 2D TE Mode}\\label{sec:forward 2DMT TEMode}\nIn this section we present the forward model for the magnetotelluric (MT)\\index{magnetotelluric}\\index{MT}inversion case~\\cite{chave2012magnetotelluric} \nwhere we particularly looking into the two-dimensional case of the electrical field polarization\\index{electrical field polarization}.\nFor this case the horizontal electrical field $E_x$\\index{electrical field} and the magnetic field $H_y$\\index{magnetic field} component \nfor a given frequency $\\omega$ are measured and the horizontal impedance $Z_{xy}$\\index{impedance}  is recorded as the ratio of\nelectrical and magentical field: \n\\begin{equation}\\label{ref:2DMTTE:EQU:1}\nE_x  = Z_{xy} \\cdot H_y \n\\end{equation}\nIt is common practice to record the impedance using the apparent resistivity $\\rho_a$\\index{resistivity!apparent} and the \nphase $\\phi$ as \n\\begin{equation}\\label{ref:2DMTTE:EQU:2}\nZ_{xy} = (\\rho_a \\cdot \\omega \\cdot \\mu) ^{\\frac{1}{2}} \\cdot e^{\\mathbf{i} \\phi}\n\\end{equation}\nwhere $\\mu$ is the permeability (e.g. $mu = 4 \\pi \\cdot 10^{-7} \\frac{Vs}{Am}$). Notice that the impedance is independent of the scale of the $E_x$ and $E_y$.\nThe electrical field $E_x$ as function of depth $z$ and horizontal coordinate $y$ is given as the solution of the \nPDE\n\\begin{equation}\\label{ref:2DMTTE:EQU:3}\n- E_{x,kk}  + \\mathbf{i} \\omega \\mu \\sigma \\cdot E_x = 0 \n\\end{equation}\nwhere $\\mathbf{i}$ is the complex unit, and $k=0$ and $k=1$ correspond to the $y$ and $z$ direction.\n$\\sigma$ is the unknown conductivity to be calculated through the inversion. The domain \nof the PDE is comprised of the subsurface region in which the conductivity is to be calculated \nand a sufficient high airlayer in which the conductivity is assumed to be zero.\n\n\nIt is assumed that $E_x$ takes the value $1$ at the top of the domain $\\Gamma_0$ representing an incoming \nelectro-magnetic wave front. On the other faces of the domain homogeneous Neuman conditions \nare set to model the assumption that the electrical field is constant away from the domain.\n\\begin{equation}\\label{ref:2DMTTE:EQU:4}\nH_y = - \\frac{1}{\\mathbf{i} \\omega \\mu} E_{x,z}\n\\end{equation}\n\nThe impedance $Z_{xy}^{(s)}$ is measured at certain points ${\\mathbf x}^{(s)}$. The defect \nof the measurements for the prediction $E_x$ is given as \n\\begin{equation}\\label{ref:2DMTTE:EQU:5}\nJ^{MT}(\\sigma) = \\frac{1}{2}\\sum_{s} \\int_{\\Omega}\nw^{(s)} \\cdot \\| E_x - Z_{xy}^{(s)} \\cdot H_y \\|^2 d{\\mathbf x}\n\\end{equation} \nwhere $w^{(s)}$ is a spatially dependent weighting faction. 
\\subsection{Usage}\n\n\\begin{classdesc}{MT2DModelTEMode}{domain, omega, x, Z_XY, eta,\n        \\optional{, w0=1}\n        \\optional{, mu=4E-7 * PI }\n        \\optional{, coordinates=\\None}\n        \\optional{, fixAtBottom=False}\n        \\optional{, tol=1e-8}\n}\n\\end{classdesc}\n\n\\subsection{Gradient Calculation}\n\nFor the implementation we set\n\\begin{equation}\\label{ref:2DMTTE:EQU:100}\nE_x  = u_0 + \\mathbf{i} \\cdot u_1\n\\end{equation}\nwhich translates the forward model~(\\ref{ref:2DMTTE:EQU:3}) into\n\\begin{align}\\label{ref:2DMTTE:EQU:103}\n- u_{0,kk}  - \\omega \\mu \\sigma u_1 = 0 \\\\\n- u_{1,kk}  + \\omega \\mu \\sigma u_0 = 0 \n\\end{align}\nIn weak form this reads \n\\begin{equation}\\label{ref:2DMTTE:EQU:104}\n\\int_{\\Omega}\n\\left(\nv_{0,k}u_{0,k}\n+ v_{1,k}u_{1,k}\n+ \\omega \\mu \\sigma \\cdot ( v_1 u_0 - v_0 u_1) \\right) dx =0  \n\\end{equation}\nfor all test functions $v_i$ with $v_i$ zero on $\\Gamma_0$. \nUsing the complex data \n\\begin{equation}\\label{ref:2DMTTE:EQU:105}\nd^{(s)} = - \\frac{1}{\\mathbf{i} \\omega \\mu} \\cdot Z_{xy}^{(s)} = d^{(s)}_0 +  \\mathbf{i} \\cdot d_1^{(s)}\n\\end{equation}\nthe cost function~(\\ref{ref:2DMTTE:EQU:5}) takes the form\n\\begin{equation}\\label{ref:2DMTTE:EQU:106}\nJ^{MT}(\\sigma)=\n\\frac{1}{2}\\sum_{s} \\int_{\\Omega} w^{(s)} \\cdot \\left(  (u_0- u_{0,1} \\cdot d_0^{(s)} + u_{1,1} \\cdot d_1^{(s)}  )^2 \n + ( u_1- u_{0,1} \\cdot d_1^{(s)} -u_{1,1} \\cdot d_0^{(s)} )^2 \\right) dx \n\\end{equation} \n\nWe need to calculate the gradient \n$\\frac{\\partial J^{MT}}{\\partial \\sigma}$ of the cost function $J^{MT}$ with\nrespect to the conductivity $\\sigma$.  We follow the concept as outlined in section~\\ref{chapter:ref:inversion cost function:gradient}.\n\nIf $\\Gamma_{\\sigma}$ denotes the region of the domain where the conductivity is\nknown and for a given direction $\\hat{\\sigma}$ with $\\hat{\\sigma}=0$ on $\\Gamma_{\\sigma}$ one has\n\\begin{align}\\label{ref:2DMTTE:EQU:201aa}\n\\int_{\\Omega}   \\frac{\\partial J^{MT}}{\\partial \\sigma} \\cdot \\hat{\\sigma} \\; dx  & = &\n\\sum_{s} \\int_{\\Omega}  w^{(s)} \\cdot \\left(  u_0 - u_{0,1} \\cdot d_0^{(s)} + u_{1,1} \\cdot d_1^{(s)}  \\right ) \\left(  \\hat{u}_0 - \\hat{u}_{0,1} \\cdot d_0^{(s)} + \\hat{u}_{1,1}\\cdot d_1^{(s)}  \\right )  \\\\\n& + &  w^{(s)} \\cdot \\left(u_1- u_{0,1} \\cdot d_1^{(s)} -u_{1,1} \\cdot d_0^{(s)}  \\right)   \\left(\\hat{u}_1- \\hat{u}_{0,1} \\cdot d_1^{(s)} -\\hat{u}_{1,1} \\cdot d_0^{(s)} \\right) \\; dx \n\\end{align} \nwith\n\\begin{equation}\\label{ref:2DMTTE:EQU:201A}\n\\int_{\\Omega}\n\\left(\nq_{0,k}\\hat{u}_{0,k}\n+ q_{1,k}\\hat{u}_{1,k}\n+ \\omega \\mu \\sigma \\cdot ( q_1 \\hat{u}_0 - q_0 \\hat{u}_1)   \n+ \\omega \\mu \\hat{\\sigma} \\cdot ( q_1 u_0 - q_0 u_1) \\right) dx =0\n\\end{equation}\nfor all $q_i$ with $q_i=0$ on $\\Gamma_{0}$. 
This equation is obtained from equation~(\\ref{ref:2DMTTE:EQU:104}).\nWith\n\\begin{align}\\label{ref:2DMTTE:EQU:202b}\nY_0  = & \\sum_{s} w^{(s)} \\cdot \\left(  u_0 - u_{0,1} \\cdot d_0^{(s)} + u_{1,1} \\cdot d_1^{(s)}  \\right) \\\\\nY_1  = & \\sum_{s} w^{(s)} \\cdot \\left(  u_1 - u_{0,1} \\cdot d_1^{(s)} - u_{1,1} \\cdot d_0^{(s)}  \\right)   \\\\\nX_{01}  = &  \\sum_{s} w^{(s)} \\cdot \\left(    u_{0,1} \\cdot ( ( d_0^{(s)})^2 + (d_1^{(s)})^2) -  u_0 \\cdot d_0^{(s)} -  u_1 \\cdot d_1^{(s)} \\right)    \\\\\nX_{11}  = & \\sum_{s} w^{(s)} \\cdot \\left(  u_{1,1} \\cdot ( ( d_0^{(s)})^2 + (d_1^{(s)})^2)  + u_0 \\cdot d_1^{(s)} - u_1 \\cdot d_0^{(s)} \\right)\n\\end{align} \nand $X_{00}=X_{10}=0$ we can write Equation~(\\ref{ref:2DMTTE:EQU:201aa}) as \n\\begin{equation}\\label{ref:2DMTTE:EQU:202c}\n\\int_{\\Omega}   \\frac{\\partial J^{MT}}{\\partial \\sigma} \\cdot \\hat{\\sigma} \\; dx  =\n\\int_{\\Omega}  Y_i \\hat{u}_i + X_{ij}  \\hat{u}_{i,j} \\; dx\n\\end{equation}\nWe then define the adjoint function $u_i^*$, with $u_i^*=0$ on $\\Gamma_{0}$, as the solution of the variational problem  \n\\begin{equation}\\label{ref:2DMTTE:EQU:202d}\n\\int_{\\Omega}\n\\left(\nu^*_{0,k} v_{0,k}\n+ u^*_{1,k} v_{1,k}\n+ \\omega \\mu \\sigma \\cdot ( u^*_1 v_0 - u^*_0 v_1) \\right) dx = \\int_{\\Omega}  Y_i v_i + X_{ij}  v_{i,j} \\; dx  \n\\end{equation}\nfor all $v_i$ with $v_i=0$ on $\\Gamma_{0}$. Setting $q=u^*$ in Equation~(\\ref{ref:2DMTTE:EQU:201A}) and \n$v_i=\\hat{u}_i$ in Equation~(\\ref{ref:2DMTTE:EQU:202d}) gives\n\\begin{equation}\\label{ref:2DMTTE:EQU:290}\n\\int_{\\Omega} \\omega \\mu \\hat{\\sigma} \\cdot ( u^*_0 u_1 - u^*_1 u_0) \\; dx\n= \\int_{\\Omega}  Y_i \\hat{u}_i + X_{ij}  \\hat{u}_{i,j} \\; dx  \n\\end{equation}\nThis gives \n\\begin{equation}\\label{ref:2DMTTE:EQU:300}\n\\int_{\\Omega}   \\frac{\\partial J^{MT}}{\\partial \\sigma} \\cdot \\hat{\\sigma} \\; dx  =\n\\int_{\\Omega} \\omega \\mu \\hat{\\sigma} \\cdot ( u^*_0 u_1 - u^*_1 u_0) \\; dx\n\\end{equation}\nor \n\\begin{equation}\\label{ref:2DMTTE:EQU:301}\n\\frac{\\partial J^{MT}}{\\partial \\sigma} = \\omega \\mu \\cdot ( u^*_0 u_1 - u^*_1 u_0)\n\\end{equation}\n\n", "meta": {"hexsha": "e9f84edc0abad6d511ada96c4437f515abbad0bb", "size": 8278, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/inversion/Forward2DMTTEMode.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/inversion/Forward2DMTTEMode.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/inversion/Forward2DMTTEMode.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0987654321, "max_line_length": 206, "alphanum_fraction": 0.6450833535, "num_tokens": 3200, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246118695629, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5988223932807529}}
{"text": "\\subsection{Variational updates}\nIn this section, we give the explicit update equations for every hidden variable that is applied at each iteration of the (stochastic) variational bayes algorithm.\\\\\nAdditionally, we write down the analytical form of the ELBO. Although computing the ELBO is not necessary in order to estimate the posterior distribution of the parameters, it is used to monitor the convergence of the algorithm.\n\n\\subsection{Variational Updates}\n\n\\subsubsection{Non-sparse factors}\nFor every group $g$, sample $n$ and factor $k$: \\\\\n\nPrior distribution $p(z_{nk}^g)$:\\\\\n\\[\n\tp(z_{nk}^g), = \\mathcal{N} (z_{nk}^g |0,1)\n\\]\nVariational distribution $q(z^g_{nk})$:\\\\\n\\begin{equation}\n   q(z^g_{nk}) = \\mathcal{N} ( z_{nk}^g | \\mu_{z_{nk}^g}, \\sigma_{z_{nk}^g} )\n\\end{equation}\nwhere\n\\begin{equation} \\begin{aligned}\n\t\\sigma_{z_{nk}^{g}}^2 &= \\left( \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\la\\tau_{dg}^m \\ra \\la (w_{dk}^m)^2 \\ra + 1 \\right)^{-1}\\\\\n\t\\mu_{z_{nk}^g} &= \\sigma_{z_{nk}^g}^2 \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\la\\tau_{dg}^m\\ra \\la w_{dk}^m \\ra \\left(\\sum_{g=1}^{G} y_{nd}^{gm} - \\sum_{j \\neq k} \\la w_{dj}^m \\ra \\la z_{nj}^g \\ra \\right)\n\\end{aligned} \\end{equation}\n\n\n\\subsubsection{Sparse factors}\nFor every group $g$, sample $n$ and factor $k$: \\\\\n\nPrior distribution $p(\\hat{z}_{nk}^g,s_{nk}^g)$:\\\\\n\n\\begin{align}\n\tp(\\hat{z}_{nk}^g,s_{nk}^g) &= \\mathcal{N} (\\hat{z}_{nk}^g \\,|\\, 0, 1/\\alpha_k^g)\\, \\text{Ber}(s_{nk}^g \\,|\\,\\theta_k^g) \\\\\n\tp(\\theta_k^g) &= \\Bdist{\\theta_k^g}{a_0^\\theta,b_0^\\theta}\\\\\n\tp(\\alpha_k^g) &= \\Gdist{\\alpha_k^g}{a_0^\\alpha, b_0^\\alpha},\n\\end{align}\n\nVariational distribution:\\\\\nTO-DO\n\n\\subsubsection{group-wise ARD precision for the factors}\n\\subsubsection{Spike-and-slab sparsity for the factors}\n\n    % \\paragraph{Sample-wise}\n\n    %     Variational distribution:\\\\\n\n    %     \\begin{equation}\n    %         q(\\alpha^g_{k}) = \\Gamma \\left( \\alpha_k^g \\middle| \\hat{a}_{g,k}^{\\alpha}, \\hat{b}_{g,k}^{\\alpha} \\right)\n    %     \\end{equation}\n\n    %     where\\\\\n\n    %     \\begin{equation}\n    %       \\begin{aligned}\n    %           \\hat{a}_{g,k}^\\alpha &= a_0^\\alpha + \\frac{N_g}{2}\\\\\n    %           \\hat{b}_{g,k}^\\alpha &= b_0^\\alpha +\\frac{ \\sum_{n=1}^{N_g} \\la (z_{nk}^g)^2 \\ra }{2}\n    %       \\end{aligned}\n    %     \\end{equation}\n\n\n% START COPIED\n% Prior distribution $p(\\hat{w}_{dk}^m,s_{dk}^m)$:\\\\\n% \\begin{align}\n% \tp(\\hat{w}_{dk}^m,s_{dk}^m) &= \\mathcal{N} (\\hat{w}_{dk}^m \\,|\\, 0, 1/\\alpha_k^m)\\, \\text{Ber}(s_{dk}^m \\,|\\,\\theta_k^m)\n% \\end{align}\n\n% Variational distribution $q(\\hat{w}_{dk}^m,s_{dk}^m)$:\\\\\n\n% Update for $q(s_{dk}^m)$:\\\\\n\n% \\begin{equation}\n% \tq(s^m_{dk}) = \\mathrm{Ber}(s^m_{dk}|\\gamma^m_{dk})\n% \\end{equation}\n% with\n% \\begin{equation} \\begin{aligned}\n% \t&\\gamma^m_{dk} = \\frac{1}{1+\\exp(-\\lambda_{dk}^m)}\\\\\n% \t& \\lambda_{dk}^m = \\la \\ln\\frac{\\theta}{1-\\theta} \\ra + 0.5\\ln\\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} - 0.5\\ln\\left( \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} \\right) \\\\\n% \t&+ \\frac{\\la\\tau_d^m\\ra}{2} \\frac{ \\left( \\sum_{g=1}^G\\sum_{n=1}^{N_g} y_{nd}^m \\la z_{nk} \\ra - \\sum_{j \\neq k} \\la s_{dj}^m\\hat{w}_{dj}^m\\ra \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk} \\ra \\la z_{nj} \\ra \\right)^2} 
{\\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\n% \\end{aligned} \\end{equation}\n\n% Update for $q(\\hat{w}_{dk}^m)$:\\\\\n\n% \\begin{equation} \\begin{aligned}\n%       q(\\hat{w}_{dk}^m|s_{dk}^m=0) &= \\mathcal{N} \\left(\\hat{w}_{dk}^m \\middle| 0, 1/\\alpha_k^m \\right) \\\\\n%       q(\\hat{w}_{dk}^m|s_{dk}^m=1) &= \\mathcal{N} \\left( \\hat{w}_{dk}^m \\middle| \\mu_{w_{dk}^m}, \\sigma_{w_{dk}^m}^2\\right)\n%   \\end{aligned} \\end{equation}\n% with\n% \\begin{equation} \\begin{aligned}\n%   \t\\mu_{w_{dk}^m} &= \\frac{ \\sum_{g=1}^G\\sum_{n=1}^{N_g} y_{nd}^m \\la z_{nk} \\ra - \\sum_{j \\neq k} \\la s_{dj}^m\\hat{w}_{dj}^m \\ra \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk} \\ra \\la z_{nj} \\ra } { \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\\\\\n%   \t\\sigma_{w_{dk}^m} &= \\frac{ \\la\\tau_d^m\\ra^{-1} } { \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\n% \\end{aligned} \\end{equation}\n% where $N$ is the total number of samples (across all groups).\n\n% END COPIED\n\n\n\\subsubsection{Sparse weights}\nFor every view $m$, feature $d$ and factor $k$: \\\\\n\nPrior distribution $p(\\hat{w}_{dk}^m,s_{dk}^m)$:\\\\\n\\begin{align}\n\tp(\\hat{w}_{dk}^m,s_{dk}^m) &= \\mathcal{N} (\\hat{w}_{dk}^m \\,|\\, 0, 1/\\alpha_k^m)\\, \\text{Ber}(s_{dk}^m \\,|\\,\\theta_k^m)\n\\end{align}\n\nVariational distribution $q(\\hat{w}_{dk}^m,s_{dk}^m)$:\\\\\n\nUpdate for $q(s_{dk}^m)$:\\\\\n\n\\begin{equation}\n\tq(s^m_{dk}) = \\mathrm{Ber}(s^m_{dk}|\\gamma^m_{dk})\n\\end{equation}\nwith\n\\begin{equation} \\begin{aligned}\n\t&\\gamma^m_{dk} = \\frac{1}{1+\\exp(-\\lambda_{dk}^m)}\\\\\n\t& \\lambda_{dk}^m = \\la \\ln\\frac{\\theta}{1-\\theta} \\ra + 0.5\\ln\\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} - 0.5\\ln\\left( \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} \\right) \\\\\n\t&+ \\frac{\\la\\tau_d^m\\ra}{2} \\frac{ \\left( \\sum_{g=1}^G\\sum_{n=1}^{N_g} y_{nd}^m \\la z_{nk} \\ra - \\sum_{j \\neq k} \\la s_{dj}^m\\hat{w}_{dj}^m\\ra \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk} \\ra \\la z_{nj} \\ra \\right)^2} {\\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\n\\end{aligned} \\end{equation}\n\nUpdate for $q(\\hat{w}_{dk}^m)$:\\\\\n\n\\begin{equation} \\begin{aligned}\n      q(\\hat{w}_{dk}^m|s_{dk}^m=0) &= \\mathcal{N} \\left(\\hat{w}_{dk}^m \\middle| 0, 1/\\alpha_k^m \\right) \\\\\n      q(\\hat{w}_{dk}^m|s_{dk}^m=1) &= \\mathcal{N} \\left( \\hat{w}_{dk}^m \\middle| \\mu_{w_{dk}^m}, \\sigma_{w_{dk}^m}^2\\right)\n  \\end{aligned} \\end{equation}\nwith\n\\begin{equation} \\begin{aligned}\n  \t\\mu_{w_{dk}^m} &= \\frac{ \\sum_{g=1}^G\\sum_{n=1}^{N_g} y_{nd}^m \\la z_{nk} \\ra - \\sum_{j \\neq k} \\la s_{dj}^m\\hat{w}_{dj}^m \\ra \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk} \\ra \\la z_{nj} \\ra } { \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\\\\\n  \t\\sigma_{w_{dk}^m} &= \\frac{ \\la\\tau_d^m\\ra^{-1} } { \\sum_{g=1}^G\\sum_{n=1}^{N_g} \\la z_{nk}^2 \\ra + \\frac{\\la\\alpha_k^m\\ra}{\\la\\tau_d^m\\ra} }\n\\end{aligned} \\end{equation}\nwhere $N$ is the total number of samples (across all groups).\n\n  \t% Taken together this means that we can update $q(\\hat{w}_{dk}^m,s_{dk}^m)$ using:\n  \t% \\begin{equation*}\n  \t% q(\\hat{w}_{dk}^m|s_{dk}^m) q(s_{dk}^m) = \\Ndist{ \\hat{w}_{dk}^m } { 
\n\n\\subsubsection{ARD precision for the loadings}\nFor every view $m$ and factor $k$: \\\\\n\nPrior distribution $p(\\alpha_k^m)$:\n\\[\n\tp(\\alpha_k^m) = \\Gdist{\\alpha_k^m}{a_0^\\alpha, b_0^\\alpha},\n\\]\nVariational distribution $q(\\alpha_k^m)$:\n\\begin{equation}\n    q(\\alpha^m_{k}) = \\Gamma \\left( \\alpha_k^m \\middle| \\hat{a}_{mk}^{\\alpha}, \\hat{b}_{mk}^{\\alpha} \\right)\n\\end{equation}\nwhere:\n\\begin{equation} \\begin{aligned}\n\t\\hat{a}_{mk}^\\alpha &= a_0^\\alpha + \\frac{D_m}{2}\\\\\n\t\\hat{b}_{mk}^\\alpha &= b_0^\\alpha +\\frac{ \\sum_{d=1}^{D_m} \\la (\\hat{w}_{dk}^m)^2 \\ra }{2}\n\\end{aligned} \\end{equation}\n\n\n\\subsubsection{Spike-and-slab sparsity for the loadings}\nFor every view $m$ and factor $k$: \\\\\n\nPrior distribution:\n\\[\n\tp(\\theta_k^m) = \\Bdist{\\theta_k^m}{a_0^\\theta,b_0^\\theta}\n\\]\nVariational distribution:\n\\begin{equation}\n\tq(\\theta_k^m) = \\Bdist{\\theta_k^m}{\\hat{a}_{mk}^{\\theta}, \\hat{b}_{mk}^{\\theta}}\n\\end{equation}\nwhere\n\\begin{equation}\n     \\begin{aligned}\n  \t\\hat{a}_{mk}^\\theta &= \\sum_{d=1}^{D_m} \\la s^m_{dk}\\ra + a_0^\\theta\\\\\n  \t\\hat{b}_{mk}^\\theta &= b_0^\\theta - \\sum_{d=1}^{D_m} \\la s^m_{dk}\\ra + D_m\n     \\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Noise}\nFor every view $m$, group $g$ and feature $d$:\\\\\n\nPrior distribution $p(\\tau_{dg}^m)$:\n\\[\n\tp(\\tau_{dg}^m) = \\Gdist{\\tau_{dg}^m}{a_0^\\tau,b_0^\\tau},\n\\]\n\nVariational distribution $q(\\tau_{dg}^m)$:\n\\begin{equation}\n\tq(\\tau_{dg}^m) = \\Gdist{\\tau_{dg}^m}{\\hat{a}_{dg}^{m} , \\hat{b}_{dg}^{m}}\n\\end{equation}\nwhere:\n\\begin{equation} \\begin{aligned}\n\t\\hat{a}_{dg}^{m} &= a_0^{\\tau} + \\frac{N_g}{2}\\\\\n\t\\hat{b}_{dg}^{m} &= b_0^{\\tau} + \\frac{1}{2} \\sum_{n=1}^{N_g}  \\la\\left(y_{nd}^{gm} - \\sum_k^{K} \\hat{w}_{dk}^m s_{dk}^m z_{nk}^{g}\\right)^2 \\ra\n\\end{aligned} \\end{equation}\n\n\n\n\\subsection{Evidence Lower Bound}\nAs shown in Section~\\ref{section:technical:elbo_interpretation}, the ELBO can be decomposed into a sum of two terms: (1) the expected log likelihood under the current estimate of the posterior distribution of the parameters and (2) the KL divergence between the prior and the variational distributions of the parameters:\\\\\n\n\\begin{equation} \\begin{aligned}\n    \\Lagr = \\E_{q(\\theta)} \\ln P(Y|\\Theta) - \\KL\\left(q(\\Theta) \\middle|\\middle| p(\\Theta) \\right)\n\\end{aligned} \\end{equation}\n\n\n\\subsection{Log likelihood term}\nAssuming a Gaussian likelihood:\n\\begin{equation} \\begin{aligned}\n\t\\E_{q(\\theta)} \\ln P(Y|\\Theta) = & -\\sum_{m=1}^M \\frac{ND_m}{2} \\ln(2\\pi) + \\sum_{g=1}^G \\frac{N_g}{2} \\sum_{m=1}^M \\sum_{d=1}^{D_m} \\la \\ln(\\tau_{dg}^m) \\ra \\\\\n\t&-\\sum_{g=1}^G \\sum_{m=1}^M \\sum_{d=1}^{D_m} \\frac{\\la \\tau_{dg}^m \\ra}{2} \\sum_{n=1}^{N_g} \\big( y_{nd}^{m,g} - \\sum_{k=1}^{K}\\la s_{dk}^m \\hat{w}_{dk}^m \\ra \\la z_{nk}^g \\ra \\big)^2\n\\end{aligned} \\end{equation}\n\n\n\\subsection{KL divergence terms}\n\nNote that $\\KL\\left(q(\\Theta) \\middle|\\middle| p(\\Theta) \\right) = \\E_q[\\ln q(\\Theta)] - \\E_q[\\ln p(\\Theta)]$. 
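\nIn practice, the ELBO enters the algorithm only as a convergence monitor. A schematic Python sketch is given below; \\texttt{update\\_parameters} and \\texttt{compute\\_elbo} are hypothetical stand-ins for the coordinate updates and the ELBO expressions derived in this section:\n\\begin{verbatim}\ndef fit(model, tol=1e-6, max_iter=1000):\n    # iterate the variational updates and stop once the ELBO, which is\n    # monotonically non-decreasing in plain (non-stochastic) variational\n    # Bayes, changes by less than tol\n    elbo_old = float('-inf')\n    for _ in range(max_iter):\n        model.update_parameters()    # updates listed above\n        elbo = model.compute_elbo()  # E_q[ln p(Y|Theta)] - KL terms\n        if abs(elbo - elbo_old) < tol:\n            break\n        elbo_old = elbo\n    return model\n\\end{verbatim}\n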
Below, we will write the analytical form for these two expectations.\n\n\\subsubsection*{Sparse loadings}\n\n\\begin{equation} \\begin{aligned}\n    \\E_q[\\ln p(\\hat{W},S)] =& -\\sum_{m=1}^{M}\\frac{KD_m}{2}\\ln(2\\pi) + \\sum_{m=1}^{M}\\frac{D_m}{2}\\sum_{k=1}^{K} \\la \\ln(\\alpha_k^m) \\ra - \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\sum_{k=1}^{K} \\frac{\\la \\alpha_k^m \\ra}{2} \\la (\\hat{w}_{dk}^m)^2 \\ra \\\\\n    & + \\la \\ln(\\theta) \\ra \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\sum_{k=1}^{K} \\la s_{dk}^m \\ra + \\la \\ln(1-\\theta) \\ra \\sum_{m=1}^{M} \\sum_{d=1}^{D_m}\\sum_{k=1}^{K} (1- \\la s_{dk}^m \\ra)\n\\end{aligned} \\end{equation}\n\n\\begin{equation} \\begin{aligned}\n\t\\E_q[\\ln q(\\hat{W}, S)] =&-\\sum_{m=1}^{M}\\frac{KD_m}{2}\\ln(2\\pi) + \\frac{1}{2}\\sum_{m=1}^{M}\\sum_{d=1}^{D_m}\\sum_{k=1}^{K}\\ln(\\la s_{dk}^m \\ra \\sigma_{w_{dk}^m}^2 + (1-\\la s_{dk}^m \\ra)/\\alpha_k^m) \\\\\n\t&+ \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\sum_{k=1}^{K} (1-\\la s_{dk}^m \\ra) \\ln(1 - \\la s_{dk}^m \\ra) - \\la s_{dk}^m \\ra \\ln \\la s_{dk}^m \\ra\n\\end{aligned} \\end{equation}\n\n\\subsubsection*{Sparse factors}\n\n\\begin{equation} \\begin{aligned}\n    \\E_q[\\ln p(\\hat{Z},S)] =& -\\sum_{g=1}^{G}\\frac{N_g K}{2}\\ln(2\\pi) + \\sum_{g=1}^{G}\\frac{N_g}{2}\\sum_{k=1}^{K} \\la \\ln(\\alpha_k^g) \\ra - \\sum_{g=1}^{G} \\sum_{n=1}^{N_g} \\sum_{k=1}^{K} \\frac{\\la \\alpha_k^g \\ra}{2} \\la (\\hat{z}_{nk}^g)^2 \\ra \\\\\n    & + \\la \\ln(\\theta) \\ra \\sum_{g=1}^{G} \\sum_{n=1}^{N_g} \\sum_{k=1}^{K} \\la s_{nk}^g \\ra + \\la \\ln(1-\\theta) \\ra \\sum_{g=1}^{G} \\sum_{n=1}^{N_g}\\sum_{k=1}^{K} (1- \\la s_{nk}^g \\ra)\n\\end{aligned} \\end{equation}\n\n\\begin{equation} \\begin{aligned}\n\t\\E_q[\\ln q(\\hat{Z}, S)] =&-\\sum_{g=1}^{G}\\frac{N_g K}{2}\\ln(2\\pi) + \\frac{1}{2}\\sum_{g=1}^{G}\\sum_{n=1}^{N_g}\\sum_{k=1}^{K}\\ln(\\la s_{nk}^g \\ra \\sigma_{z_{nk}^g}^2 + (1-\\la s_{nk}^g \\ra)/\\alpha_k^g) \\\\\n\t&+ \\sum_{g=1}^{G} \\sum_{n=1}^{N_g} \\sum_{k=1}^{K} (1-\\la s_{nk}^g \\ra) \\ln(1 - \\la s_{nk}^g \\ra) - \\la s_{nk}^g \\ra \\ln \\la s_{nk}^g \\ra\n\\end{aligned} \\end{equation}\n\n\\subsubsection*{Non-sparse factors}\n\\begin{equation} \\begin{aligned}\n\t\\E_q [\\ln p(Z)] &= -\\frac{NK}{2}\\ln(2\\pi) -\\frac{1}{2} \\sum_{g=1}^G\\sum_{n=1}^{N_g}\\sum_{k=1}^{K} \\la z_{nk}^2 \\ra \\\\\n\t\\E_q [\\ln q(Z)] &= - \\frac{NK}{2}(1 + \\ln(2\\pi)) - \\frac{1}{2}\\sum_{g=1}^G\\sum_{n=1}^{N_g}\\sum_{k=1}^{K} \\ln(\\sigma_{z_{nk}}^2)\n\\end{aligned} \\end{equation}\n\n\\subsubsection*{Automatic relevance determination for the loadings}\n\\begin{equation} \\begin{aligned}\n\t\\E_q [\\ln p(\\balpha)] &= \\sum_{m=1}^{M}\\sum_{k=1}^{K}\\Big(a_0^\\alpha\\ln b_0^\\alpha +   (a_0^\\alpha - 1) \\la \\ln \\alpha_k^m \\ra - b_0^\\alpha \\la \\alpha_k^m \\ra - \\ln \\Gamma(a_0^\\alpha) \\Big) \\\\\n\t\\E_q [\\ln q(\\balpha)] &= \\sum_{m=1}^{M}\\sum_{k=1}^{K} \\Big( \\hat{a}_{mk}^\\alpha \\ln \\hat{b}_{mk}^\\alpha + (\\hat{a}_{mk}^\\alpha - 1) \\la \\ln \\alpha_k^m \\ra - \\hat{b}_{mk}^\\alpha \\la \\alpha_k^m \\ra - \\ln \\Gamma(\\hat{a}_{mk}^\\alpha) \\Big)\n\\end{aligned} \\end{equation}\n\n\\subsubsection*{Noise}\n\\begin{equation} \\begin{aligned}\n\t\\E_q [\\ln p(\\btau)] &= \\sum_{g=1}^{G}\\sum_{m=1}^{M} D_m a_0^\\tau \\ln b_0^\\tau + \\sum_{g=1}^{G}\\sum_{m=1}^{M}\\sum_{d=1}^{D_m} (a_0^\\tau - 1) \\la \\ln \\tau_d^{mg} \\ra - \\sum_{g=1}^{G}\\sum_{m=1}^{M}\\sum_{d=1}^{D_m} b_0^\\tau \\la \\tau_d^{mg} \\ra - \\sum_{g=1}^{G}\\sum_{m=1}^{M} D_m \\ln \\Gamma(a_0^\\tau)\\\\\n\t\\E_q [\\ln q(\\btau)] &= \\sum_{g=1}^{G} \\sum_{m=1}^{M} \\sum_{d=1}^{D_m} \\left( 
\\hat{a}_{dmg}^\\tau \\ln \\hat{b}_{dmg}^\\tau + (\\hat{a}_{dmg}^\\tau - 1) \\la \\ln \\tau_d^{mg} \\ra - \\hat{b}_{dmg}^\\tau \\la \\tau_d^{mg} \\ra - \\ln \\Gamma(\\hat{a}_{dmg}^\\tau) \\right)\n\\end{aligned} \\end{equation}\n\n\\subsubsection*{Sparsity of the loadings}\n\\begin{equation} \\begin{aligned}\n\t  &\\E_q\\left[ \\ln p(\\btheta) \\right] = \\sum_{m=1}^M \\sum_{k=1}^K\\sum_{d=1}^{D_m}\\left( (a_0 - 1) \\times \\la \\ln(\\pi^m_{d, k}) \\ra + (b_0 -1) \\la \\ln(1 - \\pi^m_{d, k}) \\ra - \\ln (\\mathrm{B} (a_0, b_0))\\right) \\\\\n\t  &\\E_q\\left[ \\ln q(\\btheta) \\right] = \\sum_{m=1}^M \\sum_{k=1}^K\\sum_{d=1}^{D_m}\\left( (a^m_{k,d} - 1) \\times \\la \\ln(\\pi^m_{d, k}) \\ra + (b^m_{k,d} -1) \\la \\ln(1 - \\pi^m_{d, k}) \\ra - \\ln (\\mathrm{B} (a^m_{k,d}, b^m_{k,d})) \\right) \\\\\n\\end{aligned} \\end{equation}\n\n", "meta": {"hexsha": "57e576fb5b2a1ac6a4801a6c982f779a39269636", "size": 13425, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendix/old/variational_updates.tex", "max_stars_repo_name": "rargelaguet/thesis", "max_stars_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-01-08T13:01:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T07:24:40.000Z", "max_issues_repo_path": "Appendix/old/variational_updates.tex", "max_issues_repo_name": "rargelaguet/thesis", "max_issues_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendix/old/variational_updates.tex", "max_forks_repo_name": "rargelaguet/thesis", "max_forks_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-09T04:47:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T08:25:50.000Z", "avg_line_length": 50.8522727273, "max_line_length": 314, "alphanum_fraction": 0.5848044693, "num_tokens": 6309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246118695629, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5988223882587046}}
{"text": "\\chapter{Constructing the Borel and Lebesgue measure}\nIt's very difficult to define in one breath a measure\non the Borel space $\\SB(\\RR^n)$.\nIt is easier if we define a weaker notion first.\nThere are two such weaker notions that we will define:\n\\begin{itemize}\n\t\\ii A \\textbf{pre-measure}:\n\tsatisfies the axioms of a measure,\n\tbut defined on \\emph{fewer} sets than a measure:\n\tthey'll be defined on an ``algebra''\n\trather than the full-fledged ``$\\sigma$-algebra''.\n\n\t\\ii An \\textbf{outer measure}:\n\tdefined on $2^\\Omega$ but satisfies weaker axioms.\n\\end{itemize}\nIt will turn out that pre-measures yield outer measures,\nand outer measures yield measures.\n\n\n\\section{Pre-measures}\n\\prototype{Let $\\Omega = \\RR^2$. Then we take $\\SA_0$ generated by rectangles,\n\twith $\\mu_0$ the usual area.}\nThe way to define a pre-measure is to weaken\nthe $\\sigma$-algebra to an algebra.\n\\begin{definition}\n\tLet $\\Omega$ be a set.\n\tWe define notions of an \\vocab{algebra},\n\twhich is the same as $\\sigma$-algebra except\n\twith ``countable'' replaced by finite everywhere.\n\n\tThat is: an algebra $\\SA_0$ on $\\Omega$ is a\n\tnonempty subset of $2^\\Omega$,\n\twhich is closed under complement and \\emph{finite} union.\n\tThe smallest algebra containing a subset $\\SF \\subseteq 2^\\Omega$\n\tis the \\vocab{algebra generated by $\\SF$}.\n\\end{definition}\nIn practice, we will basically always use generation for algebras.\n\\begin{example}\n\tWhen $\\Omega = \\RR^n$,\n\twe can let $\\mathcal{L}_0$ be the algebra generated by\n\t$[a_1, b_1] \\times \\dots \\times [a_n, b_n]$.\n\tA typical element might look like:\n\t\\begin{center}\n\t\\begin{asy}\n\t\tsize(5cm);\n\t\tfilldraw( (0,0)--(9,0)--(9,2)--(6,2)--(6,5)--(0,5)--cycle,\n\t\t\topacity(0.1)+lightcyan, heavycyan );\n\t\tfilldraw( (7,3)--(12,3)--(12,6)--(7,6)--cycle,\n\t\t\topacity(0.1)+lightcyan, heavycyan );\n\t\\end{asy}\n\t\\end{center}\n\tUnsurprisingly, since we have \\emph{finitely} many\n\trectangles and their complements involved,\n\tin this case we actually \\emph{can}\n\tunambiguously assign an area, and will do so soon.\n\\end{example}\n\n\\begin{definition}\n\tA \\vocab{pre-measure} $\\mu_0$ on a algebra $\\SA_0$\n\tis a function $\\mu_0 \\colon \\SA_0 \\to [0, +\\infty]$\n\twhich satisfies the axioms\n\t\\begin{itemize}\n\t\t\\ii $\\mu_0(\\varnothing) = 0$, and\n\t\t\\ii \\textbf{Countable additivity}:\n\t\tif $A_1$, $A_2$, \\dots are disjoint sets in $\\SA_0$\n\t\tand \\emph{moreover} the disjoint union $\\bigsqcup A_i$\n\t\tis contained in $\\SA_0$ (not guaranteed by algebra axioms!),\n\t\tthen\n\t\t\\[ \\mu_0\\left( \\bigsqcup_n A_n \\right) = \\sum_n \\mu_0(A_n). 
\\]\n\t\\end{itemize}\n\\end{definition}\n\n\\begin{example}\n\t[The pre-measure on $\\RR^n$]\n\tLet $\\Omega = \\RR^2$.\n\tThen, let $\\mathcal{L}_0$ be the algebra generated by rectangles\n\t$[a_1, a_2] \\times [b_1, b_2]$.\n\tWe then let\n\t\\[ \\mu_0\\left( [a_1, a_2] \\times [b_1, b_2] \\right)\n\t= (a_2-a_1)(b_2-b_1) \\]\n\tthe area of the rectangle.\n\tAs elements of $\\mathcal{L}_0$ are simply \\emph{finite} unions\n\tof rectangles and their complements (picture drawn earlier),\n\tit's not difficult to extend this to a pre-measure $\\lambda_0$\n\twhich behaves as you expect --- although we won't do this.\n\\end{example}\n\nSince we are sweeping something under the rug that\nturns out to be conceptually important,\nI'll go ahead and blue-box it.\n\\begin{proposition}\n\t[Geometry sanity check that we won't prove]\n\t\\label{prop:lebesgue_rectangle}\n\tFor $\\Omega = \\RR^n$ and $\\mathcal{L}_0$\n\tthe algebra generated by rectangular prisms,\n\tone can define a pre-measure $\\lambda_0$ on $\\mathcal{L}_0$.\n\\end{proposition}\nFrom this point forwards, we will basically do\nalmost no geometry\\footnote{White lie.\n\tTechnically, we will use one more fact:\n\tthat open sets of $\\RR^n$ can be covered by countably\n\tinfinitely many rectangles,\n\tas in \\Cref{exer:cubes_vs_open}.\n\tThis step doesn't involve any area assignments, though.}\nwhatsoever in defining the measure on $\\SB(\\RR^n)$,\nand only use set theory to extend our measure.\nSo, \\Cref{prop:lebesgue_rectangle} is the only sentry\nwhich checks to make sure that our ``initial definition'' is sane.\n\nTo put the point another way,\nsuppose an \\textbf{insane scientist}\\footnote{Because\n\t``mad scientists'' are overrated.}\ntried to define a notion\nof area in which every rectangle had area $1$.\nIntuitively, this shouldn't be possible:\nevery rectangle can be dissected into two halves\nand we ought to have $1+1 \\ne 1$.\nHowever, the only thing that would stop them is that they couldn't\nextend their pre-measure on the algebra $\\mathcal{L}_0$.\nIf they somehow got past that barrier and got a pre-measure,\nnothing in the rest of the section would prevent them\nfrom getting an entire \\emph{bona fide} measure with this property.\nThus, in our construction of the Lebesgue measure,\nmost of the geometric work is captured in the (omitted) proof\nof \\Cref{prop:lebesgue_rectangle}.\n\n\\section{Outer measures}\n\\prototype{Keep taking $\\Omega = \\RR^2$; see the picture to follow.}\nThe other way to weaken a measure is to relax the countable additivity,\nand this yields the following:\n\\begin{definition}\n\tAn \\vocab{outer measure} $\\mu^\\ast$ on a set $\\Omega$\n\tis a function $\\mu^\\ast \\colon 2^\\Omega \\to [0, +\\infty]$\n\tsatisfying the following axioms:\n\t\\begin{itemize}\n\t\t\\ii $\\mu^\\ast(\\varnothing) = 0$;\n\t\t\\ii if $E \\subseteq F$ and $E,F \\in 2^{\\Omega}$\n\t\tthen $\\mu^\\ast(E) \\le \\mu^\\ast(F)$;\n\t\t\\ii for any subsets $E_1$, $E_2$, \\dots of $\\Omega$ we have\n\t\t\\[ \\mu^\\ast \\left( \\bigcup_n E_n \\right)\n\t\t\t\\le \\sum_n \\mu^\\ast(E_n). 
\\]\n\t\\end{itemize}\n\t(I don't really like the word ``outer measure'',\n\tsince I think it is a bit of a misnomer:\n\tI would rather call it ``fake measure'',\n\tsince it's not a measure either.)\n\\end{definition}\n\nThe reason for the name ``outer measure''\nis that you almost always obtain outer measures\nby approximating them from ``outside'' sets.\nOfficially, the result is often stated as follows\n(as \\Cref{pr:construct_outer_measure}).\n\\begin{quote}\n\tFor a set $\\Omega$, let $\\mathcal{E}$ be \\emph{any} subset of $2^{\\Omega}$\n\tand let $\\rho \\colon \\mathcal{E} \\to [0,+\\infty]$ be \\emph{any} function.\n\tThen\n\t\\[ \\mu^\\ast(E) = \\inf \\left\\{ \\sum_{n=1}^\\infty \\rho(E_n) \\mid\n\t\tE_n \\in \\mathcal{E}, \\;\n\t\tE \\subseteq \\bigcup_{n=1}^\\infty E_n \\right\\} \\]\n\tis an outer measure.\n\\end{quote}\n\nHowever, I think the above theorem is basically always\nwrong to use in practice, because it is \\emph{way too general}.\nAs I warned with the insane scientist,\nwe really do want some sort of sanity conditions on $\\rho$:\notherwise, if we apply the above result as stated,\nthere is no guarantee that $\\mu^\\ast$ will\nbe compatible with $\\rho$ in any way.\n\n%So, the recipe so far goes like:\n%define an algebra $\\SA_0$ generated by some sets with\n%``known'' measure (like rectangles) and checking it extends well,\n%then get an outer measure from it.\n%In the next section, we will then see every outer measure gives a true measure.\n\nSo, I think it is really better to apply the theorem to pre-measures $\\mu_0$\nfor which one \\emph{does} have some sort of guarantee\nthat the resulting $\\mu^\\ast$ is compatible with $\\mu_0$.\nIn practice, this is always how we will want to construct our outer measures.\n\\begin{theorem}\n\t[Constructing outer measures from pre-measures]\n\t\\label{thm:construct_outer}\n\tLet $\\mu_0$ be a pre-measure on an algebra $\\SA_0$ on a set $\\Omega$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The map $\\mu^\\ast \\colon 2^\\Omega \\to [0,+\\infty]$ defined by\n\t\t\\[ \\mu^\\ast(E) = \\inf \\left\\{ \\sum_{n=1}^\\infty \\mu_0(A_n) \\mid\n\t\t\tA_n \\in \\SA_0, \\; E \\subseteq \\bigcup_{n=1}^\\infty A_n \\right\\} \\]\n\t\tis an outer measure.\n\t\t\\ii Moreover, this outer measure agrees with $\\mu_0$ on sets in $\\SA_0$.\n\t\\end{enumerate}\n\\end{theorem}\nIntuitively, what is going on is that\n$\\mu^\\ast(A)$ is the infimum, over coverings of $A$ by\ncountably many elements of $\\SA_0$, of the total pre-measure of the covering.\nPart (b) is the first half of the compatibility condition I promised;\nthe other half appears later as \\Cref{prop:cm_compatible}.\n\n\\begin{proof}\n\t[Proof of \\Cref{thm:construct_outer}]\n\tAs alluded to already, part (a)\n\tis a special case of \\Cref{pr:construct_outer_measure}\n\t(and proving it in this generality is actually easier,\n\tbecause you won't be distracted by unnecessary properties).\n\n\tWe now check (b), that $\\mu^\\ast(A) = \\mu_0(A)$ for $A \\in \\SA_0$.\n\tOne bound is quick:\n\t\\begin{ques}\n\t\tShow that $\\mu^\\ast(A) \\le \\mu_0(A)$.\n\t\\end{ques}\n\tFor the reverse, suppose that $A \\subseteq \\bigcup_n A_n$\n\twith each $A_n \\in \\SA_0$.\n\tThen, define the sets\n\t\\begin{align*}\n\t\tB_1 &= A \\cap A_1 \\\\\n\t\tB_2 &= (A \\cap A_2) \\setminus B_1 \\\\\n\t\tB_3 &= (A \\cap A_3) \\setminus (B_1 \\cup B_2) \\\\\n\t\t&\\vdotswithin=\n\t\\end{align*}\n\tand so on.\n\tThen the $B_n$ are disjoint elements of $\\SA_0$ with $B_n \\subset A_n$,\n\tand we have rigged the definition so that $\\bigsqcup_n B_n = A$.\n\tThus by definition of pre-measure,\n\t\\[ \\mu_0(A) 
= \\sum_n \\mu_0(B_n) \\le \\sum_n \\mu_0(A_n) \\]\n\tas desired.\n\\end{proof}\n\n\\begin{example}\n\tLet $\\Omega = \\RR^2$ and $\\lambda_0$ the pre-measure from before.\n\tThen $\\lambda^\\ast(A)$ is, intuitively,\n\tthe infimum of the total area of coverings of the set $A$\n\tby countably many rectangles.\n\tHere is a picture you might use to imagine the\n\tsituation with $A$ being the unit disk.\n\t\\missingfigure{circles covered by rectangles}\n\\end{example}\n\n\n\n\\section{Carath\\'{e}odory extension for outer measures}\nWe will now take any outer measure and turn it into a proper measure.\nTo do this, we first need to specify the $\\sigma$-algebra\non which we will define the measure.\n\n\\begin{definition}\n\tLet $\\mu^\\ast$ be an outer measure.\n\tWe say a set $A$ is \\vocab{Carath\\'{e}odory measurable with respect\n\tto $\\mu^\\ast$}, or just \\vocab{$\\mu^\\ast$-measurable},\n\tif the following condition holds:\n\tfor any set $E \\in 2^{\\Omega}$,\n\t\\[ \\mu^\\ast(E) = \\mu^\\ast(E \\cap A) + \\mu^\\ast(E \\setminus A). \\]\n\\end{definition}\nThis definition is hard to motivate, but turns out to be the right one.\nOne way to motivate it is this:\nit turns out that in $\\RR^n$,\nit will be equivalent to a reasonable geometric condition\n(which I will state in \\Cref{prop:lebesgue_geo}),\nbut since that geometric definition requires information about $\\RR^n$ itself,\nthis is the ``right'' generalization for general measure spaces.\n\nSince our goal was to extend our $\\SA_0$,\nwe had better make sure this definition\nlets us measure the initial sets that we started with!\n\\begin{proposition}\n\t[Carath\\'{e}odory measurability is compatible with the initial $\\SA_0$]\n\t\\label{prop:cm_compatible}\n\tSuppose $\\mu^\\ast$ was obtained from a pre-measure $\\mu_0$ on\n\tan algebra $\\SA_0$, as in \\Cref{thm:construct_outer}.\n\tThen every set in $\\SA_0$ is $\\mu^\\ast$-measurable.\n\\end{proposition}\nThis is the second half of the compatibility condition\nthat we get if we make sure our initial $\\mu_0$\nat least satisfies the pre-measure axioms.\n(The first half was (b) of \\Cref{thm:construct_outer}.)\n\\begin{proof}\n\tLet $A \\in \\SA_0$ and $E \\in 2^{\\Omega}$; we wish to prove\n\t$\\mu^\\ast(E) = \\mu^\\ast(E \\cap A) + \\mu^\\ast(E \\setminus A)$.\n\tThe definition of outer measure already requires\n\t$\\mu^\\ast(E) \\le \\mu^\\ast(E \\cap A) + \\mu^\\ast(E \\setminus A)$\n\tand so it's enough to prove the reverse inequality.\n\n\tBy definition of infimum, for any $\\eps > 0$,\n\tthere is a covering $E \\subset \\bigcup_n A_n$ by sets $A_n \\in \\SA_0$\n\twith $\\mu^\\ast(E) + \\eps \\ge \\sum_n \\mu_0(A_n)$.\n\tBut \\[ \\sum_n \\mu_0(A_n)\n\t\t= \\sum_n \\left( \\mu_0(A_n \\cap A) + \\mu_0(A_n \\setminus A) \\right)\n\t\t\\ge \\mu^\\ast(E \\cap A) + \\mu^\\ast(E \\setminus A)  \\]\n\twith the equality being by the definition of pre-measure\n\ton $\\SA_0$, and the inequality by the definition of $\\mu^\\ast$\n\t(since the sets $A_n \\cap A$ certainly cover $E \\cap A$, for example).\n\tThus $\\mu^\\ast(E) + \\eps \\ge \\mu^\\ast(E \\cap A) + \\mu^\\ast(E \\setminus A)$.\n\tSince the inequality holds for any $\\eps > 0$, we're done.\n\\end{proof}\n\nTo add extra icing onto the cake,\nhere is one more niceness condition\nwhich our constructed measure will happen to satisfy.\n\\begin{definition}\n\tA \\vocab{null set} of a measure space $(\\Omega, \\SA, \\mu)$\n\tis a set $A \\in \\SA$ with $\\mu(A) = 0$.\n\tA measure space $(\\Omega, \\SA, \\mu)$ is \\vocab{complete}\n\tif whenever $A$ is a null set,\n\tthen all subsets of $A$ are in 
$\\SA$ as well (and hence null sets).\n\\end{definition}\nThis is a nice property to have, for obvious reasons.\nVisually, if I have a bunch of dust which I \\emph{already} assigned weight zero,\nand I blow away some of the dust,\nthen the remainder should still have an assigned weight --- zero.\nThe extension theorem will give us $\\sigma$-algebras with this property.\n\n\\begin{theorem}\n\t[Carath\\'{e}odory extension theorem for outer measures]\n\t\\label{thm:cara_outer}\n\tIf $\\mu^\\ast$ is an outer measure,\n\tand $\\SA\\cme$ is the set of $\\mu^\\ast$-measurable sets,\n\tthen $\\SA\\cme$ is a $\\sigma$-algebra on $\\Omega$,\n\tand the restriction $\\mu\\cme$ of $\\mu^\\ast$ to $\\SA\\cme$\n\tgives a \\emph{complete} measure space.\n\\end{theorem}\n(Phonetic remark: you can think of the superscript ${}\\cme$ as standing\nfor either ``Carath\\'{e}odory measurable'' or ``complete''.\nBoth are helpful for remembering what this represents.\nThis notation is not standard but the pun was too good to resist.)\n\nThus, if we compose \\Cref{thm:construct_outer} with \\Cref{thm:cara_outer},\nwe find that every pre-measure $\\mu_0$ on an algebra $\\SA_0$ naturally\ngives a $\\sigma$-algebra $\\SA\\cme$ with a complete measure $\\mu\\cme$,\nand our two compatibility results\n(namely (b) of \\Cref{thm:construct_outer},\ntogether with \\Cref{prop:cm_compatible})\nmean that $\\SA\\cme \\supset \\SA_0$\nand that $\\mu\\cme$ agrees with $\\mu_0$ on $\\SA_0$.\n\nHere is a table showing the process,\nwhere going down each row of the table corresponds to restriction.\n\\begin{center}\n\t\\begin{tabular}[h]{llcl}\n\t\t& & Construct order & Notes \\\\ \\hline\n\t\t$2^\\Omega$ & $\\mu^\\ast$ & Step 2 &\n\t\t\t$\\mu^\\ast$ is outer measure obtained from $\\mu_0$ \\\\[1em]\n\t\t$\\SA\\cme$ & $\\mu\\cme$ & Step 3 & $\\SA\\cme$ defined as $\\mu^\\ast$-measurable sets, \\\\\n\t\t&&& $(\\SA\\cme, \\mu\\cme)$ is complete. 
\\\\[1em]\n\t\t$\\SA_0$ & $\\mu_0$ & Step 1 & $\\mu_0$ is a pre-measure\n\t\\end{tabular}\n\\end{center}\n\n\n\\section{Defining the Lebesgue measure}\nThis lets us finally define the Lebesgue measure on $\\RR^n$.\nWe wrap everything together at once now.\n\\begin{definition}\n\tWe create a measure on $\\RR^n$ by the following procedure.\n\t\\begin{itemize}\n\t\t\\ii Start with the algebra $\\mathcal{L}_0$\n\t\tgenerated by rectangular prisms,\n\t\tand define a \\emph{pre-measure} $\\lambda_0$ on this $\\mathcal{L}_0$\n\t\t(this was glossed over in the example).\n\t\t\\ii By \\Cref{thm:construct_outer},\n\t\tthis gives the \\vocab{Lebesgue outer measure}\n\t\t$\\lambda^\\ast$ on $2^{\\RR^n}$,\n\t\twhich is compatible on all the rectangular prisms.\n\t\t\\ii By Carath\\'{e}odory (\\Cref{thm:cara_outer}),\n\t\tthis restricts to a complete measure $\\lambda$\n\t\ton the $\\sigma$-algebra $\\mathcal{L}(\\RR^n)$\n\t\tof $\\lambda^\\ast$-measurable sets\n\t\t(which as promised contains all rectangular prisms).\\footnote{If\n\t\t\tI wanted to be consistent with the previous theorems,\n\t\t\tI might prefer to write $\\mathcal{L}\\cme$\n\t\t\tand $\\lambda\\cme$ for emphasis.\n\t\t\tIt seems no one does this, though, so I won't.}\n\t\\end{itemize}\n\tThe resulting complete measure, denoted $\\lambda$,\n\tis called the \\vocab{Lebesgue measure}.\n\n\tThe $\\sigma$-algebra $\\mathcal{L}(\\RR^n)$ we obtained will be called the\n\t\\vocab{Lebesgue $\\sigma$-algebra};\n\tsets in it are said to be \\vocab{Lebesgue measurable}.\n\\end{definition}\n\nHere is the same table from before,\nwith the values filled in for the special case $\\Omega = \\RR^n$,\nwhich gives us the Lebesgue $\\sigma$-algebra.\n\\begin{center}\n\t\\begin{tabular}[h]{llcl}\n\t\t& & Construct order & Notes \\\\ \\hline\n\t\t$2^{\\RR^n}$ & $\\lambda^\\ast$ & Step 2 &\n\t\t\t$\\lambda^\\ast$ is Lebesgue outer measure \\\\[1em]\n\t\t$\\mathcal L(\\RR^n)$ & $\\lambda$ & Step 3 & Lebesgue $\\sigma$-algebra (complete) \\\\[1em]\n\t\t$\\mathcal L_0$ & $\\lambda_0$ & Step 1 & Define pre-measure on rectangles\n\t\\end{tabular}\n\\end{center}\n\n\nOf course, now that we've gotten all the way here,\nif we actually want to \\emph{compute} any measures,\nwe can mostly gleefully forget about how we actually constructed\nthe measure and just use the properties.\nThe hard part was showing that there \\emph{is}\na way to assign measures consistently;\nactually figuring out what that measure's value is,\n\\emph{given that it exists}, is often much easier.\nHere is an example.\n\n\\begin{example}\n\t[The Cantor set has measure zero]\n\tThe standard \\vocab{middle-thirds Cantor set} is the subset\n\tof $[0,1]$ obtained as follows:\n\twe first delete the open interval $(1/3, 2/3)$.\n\tThis leaves two intervals $[0,1/3]$ and $[2/3,1]$,\n\tfrom each of which we again delete the middle third,\n\ti.e.\\ deleting $(1/9,2/9)$ and $(7/9,8/9)$.\n\tWe repeat this procedure indefinitely and let $C$ denote the result.\n\tAn illustration is shown below.\n\t\\begin{center}\n\t\t\\includegraphics[width=0.8\\textwidth]{media/cantor-thirds.png} \\\\\n\t\t\\footnotesize Image from \\cite{img:cantor}\n\t\\end{center}\n\tIt is a classic fact that $C$ is uncountable\n\t(it consists of ternary expansions omitting the digit $1$).\n\tBut it is measurable (it is an intersection of closed sets!)\n\tand we contend it has measure zero.\n\tIndeed, after $n$ steps of deletion, the remaining set $C_n$\n\tconsists of $2^n$ disjoint closed intervals, each of length $3^{-n}$, so
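\n\t\\[ \\lambda(C_n) = 2^n \\cdot 3^{-n} = (2/3)^n. \\]\n\tSince $C \\subseteq C_n$ for every $n$,\n\tmonotonicity gives $\\lambda(C) \\le (2/3)^n$ for every $n$,\n\tforcing $\\lambda(C) = 0$.\n\\end{example}\n\nThis is 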
fantastic, but there is one elephant in the room:\nhow are the Lebesgue $\\sigma$-algebra and the Borel $\\sigma$-algebra related?\nTo answer this question briefly, I will state two results\n(but another answer is given in the next section).\nThe first is a geometric interpretation of the strange\nCarath\\'{e}odory measurable hypothesis.\n\\begin{proposition}\n\t[A geometric interpretation of Lebesgue measurability]\n\t\\label{prop:lebesgue_geo}\n\tA set $A \\subseteq \\RR^n$ is Lebesgue measurable\n\tif and only if for every $\\eps > 0$,\n\tthere is an open set $U \\supset A$ such that\n\t\\[ \\lambda^\\ast(U \\setminus A) < \\eps \\]\n\twhere $\\lambda^\\ast$ is the Lebesgue outer measure.\n\\end{proposition}\nI want to say that this was Lebesgue's original formulation\nof ``measurable'', but I'm not sure about that.\nIn any case, we won't need to use this,\nbut it's good to see that our definition of Lebesgue measurable\nhas a down-to-earth geometric interpretation.\n\n\\begin{ques}\n\tDeduce that every open set is Lebesgue measurable.\n\tConclude that the Lebesgue $\\sigma$-algebra\n\tcontains the Borel $\\sigma$-algebra.\n\t(A different proof is given later on.)\n\\end{ques}\n\nHowever, the containment is proper:\nthere are more Lebesgue measurable sets than Borel ones.\nIndeed, it can actually be proven using transfinite induction\n(though we won't) that\n$\\left\\lvert \\SB(\\RR) \\right\\rvert = \\left\\lvert \\RR \\right\\rvert$.\nUsing this, one obtains:\n\\begin{exercise}\n\tShow the Borel $\\sigma$-algebra is not complete.\n\t(Hint: consider the Cantor set.\n\tYou won't be able to write down an example of a non-measurable\n\tset, but you can use cardinality arguments.)\n\tThus the Lebesgue $\\sigma$-algebra strictly contains the Borel one.\n\t% It should contain every subset of the Cantor set,\n\t% since Lebesgue is complete.\n\\end{exercise}\n\nNonetheless, there is a great way to describe the Lebesgue $\\sigma$-algebra,\nusing the idea of completeness.\n\\begin{definition}\n\tLet $(\\Omega, \\SA, \\mu)$ be a measure space.\n\tThe \\vocab{completion} $(\\Omega, \\ol{\\SA}, \\ol{\\mu})$\n\tis defined as follows:\n\twe let\n\t\\[ \\ol{\\SA} = \\left\\{ A \\cup N \\mid A \\in \\SA,\n\t\tN \\text{ subset of null set} \\right\\}. 
\\]\n\tand $\\ol{\\mu}(A \\cup N) = \\mu(A)$.\n\tOne can check this is well-defined,\n\tand in fact $\\ol{\\mu}$ is the unique extension\n\tof $\\mu$ from $\\SA$ to $\\ol{\\SA}$.\n\n\tThis looks more complicated than it is.\n\tIntuitively, all we are doing is ``completing'' the measure\n\tby telling $\\ol{\\mu}$ to regard any subset of a null set\n\tas having measure zero, too.\n\\end{definition}\n\nThen, the saving grace:\n\\begin{theorem}\n\t[Lebesgue is completion of Borel]\n\tFor $\\RR^n$, the Lebesgue measure is the completion of the Borel measure.\n\\end{theorem}\n\\begin{proof}\n\tThis actually follows from results in the next section,\n\tnamely \\Cref{exer:cubes_vs_open}\n\tand part (c) of Carath\\'{e}odory for pre-measures (\\Cref{thm:cara_premeasure}).\n\\end{proof}\n\n\\section{A fourth row: Carath\\'{e}odory for pre-measures}\n\\prototype{The fourth row for the Lebesgue measure is $\\SB(\\RR^n)$.}\nIn many cases, $\\SA\\cme$ is actually bigger than our original goal,\nand instead we only need to extend $\\mu_0$ on $\\SA_0$\nto $\\mu$ on $\\SA$, where $\\SA$ is the $\\sigma$-algebra generated by $\\SA_0$.\nIndeed, our original goal was to get $\\SB(\\RR^n)$, and in fact:\n\\begin{exercise}\n\tShow that $\\SB(\\RR^n)$ is the $\\sigma$-algebra generated\n\tby the $\\mathcal{L}_0$ we defined earlier.\n\t\\label{exer:cubes_vs_open}\n\\end{exercise}\n\nFortunately, this restriction is trivial to do.\n\\begin{ques}\n\tShow that $\\SA\\cme \\supset \\SA$,\n\tso we can just restrict $\\mu\\cme$ to $\\SA$.\n\\end{ques}\nWe will in a moment add this as the fourth row in our table.\n\nHowever, if this is the end goal,\nthen a somewhat different Carath\\'{e}odory theorem\ncan be stated, because often one more niceness condition holds:\n\\begin{definition}\n\tA pre-measure or measure $\\mu$ on $\\Omega$ is \\vocab{$\\sigma$-finite}\n\tif $\\Omega$ can be written as a countable union $\\Omega = \\bigcup_n A_n$\n\twith $\\mu(A_n) < \\infty$ for each $n$.\n\\end{definition}\n\\begin{ques}\n\tShow that the pre-measure $\\lambda_0$ we had,\n\tas well as the Borel measure on $\\SB(\\RR^n)$,\n\tare both $\\sigma$-finite.\n\\end{ques}\nActually, for us, $\\sigma$-finite is basically always going to be true,\nso you can more or less just take it for granted.\n\n\\begin{theorem}\n\t[Carath\\'{e}odory extension theorem for pre-measures]\n\t\\label{thm:cara_premeasure}\n\tLet $\\mu_0$ be a pre-measure on an algebra $\\SA_0$ on $\\Omega$,\n\tand let $\\SA$ denote the $\\sigma$-algebra generated by $\\SA_0$.\n\tLet $\\SA\\cme$, $\\mu\\cme$ be as in \\Cref{thm:cara_outer}.\n\tThen:\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The restriction of $\\mu\\cme$ to $\\SA$\n\t\tgives a measure $\\mu$ extending $\\mu_0$.\n\t\t\\ii If $\\mu_0$ was $\\sigma$-finite,\n\t\tthen $\\mu$ is the unique extension of $\\mu_0$ to $\\SA$.\n\t\t\\ii If $\\mu_0$ was $\\sigma$-finite,\n\t\tthen $\\mu\\cme$ is the completion of $\\mu$,\n\t\thence the unique extension of $\\mu_0$ to $\\SA\\cme$.\n\t\\end{enumerate}\n\\end{theorem}\nHere is the updated table, with comments for the case when $\\mu_0$ is indeed $\\sigma$-finite.\n\\begin{center}\n\t\\begin{tabular}[h]{llcl}\n\t\t& & Construct order & Notes \\\\ \\hline\n\t\t$2^\\Omega$ & $\\mu^\\ast$ & Step 2 &\n\t\t\t$\\mu^\\ast$ is outer measure obtained from $\\mu_0$ \\\\[1em]\n\t\t$\\SA\\cme$ & $\\mu\\cme$ & Step 3 & $(\\SA\\cme, \\mu\\cme)$ is the completion of $(\\SA, \\mu)$, \\\\\n\t\t&&& $\\SA\\cme$ defined as $\\mu^\\ast$-measurable sets \\\\[1em]\n\t\t$\\SA$ & $\\mu$ & Step 4 & $\\SA$ defined as 
$\\sigma$-alg.\\ generated by $\\SA_0$ \\\\[1em]\n\t\t$\\SA_0$ & $\\mu_0$ & Step 1 & $\\mu_0$ is a pre-measure\n\t\\end{tabular}\n\\end{center}\nAnd here is the table for $\\Omega = \\RR^n$,\nwith Borel and Lebesgue in it.\n\\begin{center}\n\t\\begin{tabular}[h]{llcl}\n\t\t& & Construct order & Notes \\\\ \\hline\n\t\t$2^{\\RR^n}$ & $\\lambda^\\ast$ & Step 2 &\n\t\t\t$\\lambda^\\ast$ is Lebesgue outer measure \\\\[1em]\n\t\t$\\mathcal L(\\RR^n)$ & $\\lambda$ & Step 3 &\n\t\t\tLebesgue $\\sigma$-algebra, completion of Borel one \\\\[1em]\n\t\t$\\SB(\\RR^n)$ & $\\mu$ & Step 4 &\n\t\t\tBorel $\\sigma$-algebra, generated by $\\mathcal{L}_0$ \\\\[1em]\n\t\t$\\mathcal L_0$ & $\\lambda_0$ & Step 1 & Define pre-measure on rectangles\n\t\\end{tabular}\n\\end{center}\n\nGoing down one row of the table corresponds to restriction,\nwhile each of $\\mu_0 \\to \\mu \\to \\mu\\cme$ is a unique extension\nwhen $\\mu_0$ is $\\sigma$-finite.\n\\begin{proof}\n\t[Proof of \\Cref{thm:cara_premeasure}]\n\tFor (a): this is just \\Cref{thm:construct_outer} and \\Cref{thm:cara_outer}\n\tput together, combined with the observation that $\\SA\\cme \\supset \\SA_0$\n\tand hence $\\SA\\cme \\supset \\SA$.\n\tParts (b) and (c) are more technical, and omitted.\n\\end{proof}\n\n\\section{From now on, we assume the Borel measure}\n\\todo{explain why}\n\n\\section{\\problemhead}\n\\begin{dproblem}\n\t[Constructing outer measures from arbitrary $\\rho$]\n\t\\label{pr:construct_outer_measure}\n\tFor a set $\\Omega$,\n\tlet $\\mathcal{E}$ be \\emph{any} subset of $2^{\\Omega}$\n\tand let $\\rho \\colon \\mathcal{E} \\to [0,+\\infty]$\n\tbe \\emph{any} function.\n\tProve that\n\t\\[ \\mu^\\ast(E) = \\inf \\left\\{ \\sum_{n=1}^\\infty \\rho(E_n) \\mid\n\t\tE_n \\in \\mathcal{E}, \\;\n\t\tE \\subseteq \\bigcup_{n=1}^\\infty E_n \\right\\} \\]\n\tis an outer measure.\n\\end{dproblem}\n\n\\begin{problem}\n\t[The insane scientist]\n\tLet $\\Omega = \\RR^2$, and let $\\mathcal{E}$\n\tbe the set of (non-degenerate) rectangles.\n\tLet $\\rho(E) = 1$ for every rectangle $E \\in \\mathcal{E}$.\n\tIgnoring my advice, the insane scientist\n\tuses $\\rho$ to construct an outer measure $\\mu^\\ast$,\n\tas in \\Cref{pr:construct_outer_measure}.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Find $\\mu^\\ast(S)$ for each subset $S$ of $\\RR^2$.\n\t\t\\ii Which sets are $\\mu^\\ast$-measurable?\n\t\\end{enumerate}\n\tYou should find that no rectangle is $\\mu^\\ast$-measurable,\n\tunsurprisingly foiling the scientist.\n\t\\begin{hint}\n\t\tShow that\n\t\t\\[ \\mu^\\ast(S) = \\begin{cases}\n\t\t\t\t0 & S = \\varnothing \\\\\n\t\t\t\t1 & S \\text{ bounded and nonempty} \\\\\n\t\t\t\t\\infty & S \\text{ not bounded}.\n\t\t\t\\end{cases}\n\t\t\\]\n\t\tThis lets you solve (b) readily;\n\t\tone finds that the only $\\mu^\\ast$-measurable sets\n\t\tare $\\varnothing$ and $\\RR^2$ itself\n\t\t(any other $A$ fails the measurability condition\n\t\tfor a bounded two-point $E$ meeting both $A$ and its complement).\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}\n\t\\gim\n\tA function $f \\colon \\RR \\to \\RR$ is continuous.\n\tMust $f$ be measurable with respect to the Lebesgue measure on $\\RR$?\n\\end{problem}\n", "meta": {"hexsha": "9c8db69aed924abbed76fbd21914e6d848d17cb2", "size": 25261, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/measure/caratheodory.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "corpus/napkin/tex/measure/caratheodory.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/measure/caratheodory.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3473520249, "max_line_length": 89, "alphanum_fraction": 0.7011598907, "num_tokens": 8077, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.8333245973817158, "lm_q1q2_score": 0.5988223828698673}}
{"text": "\\section{Constrained Dynamic String Sampling}\n  \\label{sec:ConstrainedAlg}\n  \n  While the algorithm presented in Sec. \\ref{sec:GreedyAlg} is fast for sufficiently smooth families of loss surfaces with few saddle points, here we present a slightly modified version which, while slower, provides more control over the convergence of the string.  We did not use the algorithm presented in this section for our numerical studies.  \n  \n  Instead of training intermediate models via full SGD to a desired accuracy as in step $8$ of the algorithm, intermediate models are be subject to a constraint that ensures they are ``close'' to the neighboring models on the string.  Specifically, intermediate models are constrained to the unique hyperplane in weightspace equidistant from its two neighbors.  This can be further modified by additional regularization terms to control the ``springy-ness'' of the string.  These heuristics could be chosen to try to more faithfully sample the geodesic between two models.  \n  \n  In practice, for a given model on the string, $\\theta_i$, these two regularizations augment the standard loss by: $\\tilde{F}(\\theta) = F(\\theta)+\\zeta(\\|\\theta_{i-1} - \\theta_i\\|+\\|\\theta_{i+1} - \\theta_i\\|) + \\kappa \\|\\frac{(\\theta_{i-1} - \\theta_{i+1})/2}{\\|(\\theta_{i-1} - \\theta_{i+1})/2\\|} \\cdot \\frac{(\\theta_i - (\\theta_{i-1} - \\theta_{i+1})/2)}{\\| (\\theta_i - (\\theta_{i-1} - \\theta_{i+1})/2)\\|}\\|$.  The $\\zeta$ regularization term controls the ``springy-ness'' of the weightstring, and the $\\kappa$ regularization term controls how far off the hyperplane a new model can deviate.  \n  \n  Because adapting DSS to use this constraint is straightforward, here we will describe an alternative ``breadth-first'' approach wherein models are trained in parallel until convergence.  This alternative approach has the advantage that it will indicate a disconnection between two models ``sooner'' in training.  The precise geometry of the loss surface will dictate which approach to use in practice.\n  \n  Given two random models $\\sigma_i$ and $\\sigma_j$ where $|\\sigma_i - \\sigma_j| < L_0$, we aim to follow the evolution of the family of models connecting $\\sigma_i$ to $\\sigma_j$.  Intuitively, almost every continuous path in the space of random models connecting $\\sigma_i$ to $\\sigma_j$ has, on average, the same (high) loss.  For simplicity, we choose to initialize the string to the linear segment interpolating between these two models.  If this entire segment is evolved via gradient descent, the segment will either evolve into a string which is entirely contained in a basin of the loss surface, or some number of points will become fixed at a higher loss.  These fixed points are difficult to detect directly, but will be indirectly detected by the persistence of a large interpolated loss between two adjacent models on the string.\n  \n  The algorithm proceeds as follows:\n  \n  (0.) Initialize model string to have two models, $\\sigma_i$ and $\\sigma_j$.\n  \n  1. Begin training all models to the desired loss, keeping the instantaneous loss, $L_0(t)$, of all models being trained approximately constant.\n  \n  2. If the pairwise interpolated loss between $\\sigma_n$ and $\\sigma_{n+1}$ exceeds $L_0(t)$, insert a new model at the maximum of the interpolated loss (or halfway) between these two models.\n  \n  3. 
Repeat steps (1) and (2) until all models (and interpolated errors) are below a threshold loss $L_0(t_{\\rm{final}}):=L_0$, or until a chosen failure condition is met (see Sec. \\ref{sec:Fail}).\n\n\n", "meta": {"hexsha": "9d4c8bb50f80bceed87d7527e1b245effaa54f46", "size": 3513, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Writeup/iclr/constrained.tex", "max_stars_repo_name": "danielfreeman11/convex-nets", "max_stars_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-08-09T00:48:46.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-03T09:04:59.000Z", "max_issues_repo_path": "Writeup/iclr/constrained.tex", "max_issues_repo_name": "danielfreeman11/convex-nets", "max_issues_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Writeup/iclr/constrained.tex", "max_forks_repo_name": "danielfreeman11/convex-nets", "max_forks_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 140.52, "max_line_length": 842, "alphanum_fraction": 0.7546256761, "num_tokens": 858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245870332531, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5988223704114718}}
{"text": "\\chapter{Interior Point Methods for Maximum Flow}\n%%%% ADD MACROS HERE\n% feel free to add more macros here\n\n\\newcommand{\\sdiv}{\\tilde{D}}\n\\newcommand\\cvec[1]{\\overrightarrow{\\left(#1\\right)}}\n\n\\section*{Background and Notation}\n\nIn this chapter, we'll learn about interior point methods for solving\nmaximum flow, which is a rich and active area of research \\cite{DS08,M13,LS20a,LS20b}. \n\nWe're going to frequently need to refer to vectors\narising from elementwise operations combining other vectors.\n\nTo that end, given two vector $\\aa \\in \\R^m$, and $\\bb \\in \\R^m$, we\nwill use $\\cvec{\\aa(i) \\bb(i)}$ to denote the vector $\\zz$ with\n$\\zz(i) = \\aa(i) \\bb(i)$ and so on.\n\nThroughout this chapter, when we are working in the context of some\ngiven graph $G$ with vertices $V$ and edges $E$, we will let $m =\n\\abs{E}$ and $n = \\abs{V}$.\n\nThe plots in this chapter were made using Mathematica, which is\navailable to ETH students for download through the ETH IT Shop.\n\n\n\\section{An Interior Point Method}\n\\paragraph{The Maximum Flow problem in undirected graphs.}\n \\begin{align}\n   \\label{eq:maxflow}\n\\max_{\\ff \\in \\R^E}  & \\quad F    \\\\\n   \\textrm{s.t. }  \\BB\\ff= F \\bb_{st}\n     \\tag*{``The Undirected Maximum Flow Problem''}\n   \\\\\n   \\nonumber \n   -\\cc \\leq \\ff \\leq \\cc\n\\end{align}\nWe use $\\val(\\ff)$ to denote $F$ when $B\\ff = F\\bb_{st}$.\n\nAs we develop algorithms for this problem, we will assume that we know\nthe maximum flow value $F^*$.\nLet $\\ff^*$ denote some maximum flow, i.e. a flow with $-\\cc \\leq \\ff\n\\leq \\cc$ can $\\val(\\ff^*) = F^*$.\n\nIn general, an a lower\nbound $F \\leq F^*$ will allow us to find a flow with value $F$, and\nbecause of this, we can use a binary search to approximate $F^*$.\n\n\\subsection{A barrier function and an algorithmic framework}\n\\[\nV(\\ff) = \\sum_e -\\log(\\cc(e)-\\ff(e))  -\\log(\\cc(e)+\\ff(e)) \n\\]\n\nWe assume the optimal value of Program~\\eqref{eq:maxflow} is $F^*$.\nThen for a given $0 \\leq \\alpha < 1$ we define a program\n\\begin{align}\n   \\label{eq:barrierflow}\n\\min_{\\ff \\in \\R^E} & \\quad  V(\\ff)\\\\\n\\textrm{s.t. }  \\BB\\ff= \\alpha F^* \\bb_{st}\n  \\tag*{``The Barrier Problem''}\n\\end{align}\nThis problem makes sense for any $0 \\leq \\alpha < 1$.\nWhen $\\alpha = 0$, we are not routing any flow yet. 
This will be our\nstarting point.\nFor any $0 \\leq \\alpha < 1$, the scaled-down maximum\nflow $\\alpha\\ff^*$ strictly satisfies the capacities $ -\\cc < \\alpha\\ff^* < \\cc$, and\n$\\BB\\alpha\\ff^*= \\alpha F^* \\bb_{st}$.\nHence $\\alpha\\ff^*$ is a feasible flow for this value of $\\alpha$,\nso $V(\\alpha\\ff^*) < \\infty$, and the optimal flow for the\nBarrier Problem at this $\\alpha$ must also have objective value\nstrictly below $\\infty$, and hence in\nturn strictly satisfy the capacity constraints.\nThus, if we can find the optimal flow for Program\n\\eqref{eq:barrierflow} for $\\alpha = 1-\\epsilon$, we will have a\nflow feasible for Program~\\eqref{eq:maxflow}, the Undirected Maximum Flow Problem, routing\n$(1-\\epsilon)F^*$.\nThis is how we will develop an algorithm for computing the maximum flow.\n\nProgram~\\eqref{eq:barrierflow}  has the Lagrangian\n\\[\n\\calL(\\ff,\\xx) = V(\\ff) + \\xx^{\\trp}( \\alpha F^* \\bb_{st}  - \\BB\\ff   )\n\\]\n\nAnd we have optimality when\n\\begin{align}\n  \\label{eq:barrierfeasibility}\n  \\BB\\ff= \\alpha  F^* \\bb_{st}\n  \\text{ and }\n  -\\cc \\leq \\ff \\leq \\cc\n  \\\\\n  \\tag*{``Barrier feasibility''}\n\\end{align}\n\nand $\\grad_{\\ff} \\calL(\\ff,\\xx) = \\veczero$, i.e.\n\\begin{align}\n  \\label{eq:barrierlagrangegrad}\n  \\grad V(\\ff) = \\BB^{\\trp} \\xx\n  \\\\\n  \\tag*{``Barrier Lagrangian gradient optimality''}\n\\end{align}\n\nLet $\\ff^*_{\\alpha}$ denote the optimal solution to\nProblem~\\eqref{eq:barrierflow} for a given $0 \\leq \\alpha < 1$,\nand let $\\xx^*_{\\alpha}$ be optimal dual voltages such that $\\grad V(\\ff^*_{\\alpha}) = \\BB^{\\trp} \\xx^*_{\\alpha}$.\n\nIt turns out that, if we have a solution $\\ff^*_{\\alpha}$ to this problem for some\n$\\alpha < 1$, then we can find a solution $\\ff^*_{\\alpha+\\alpha'}$ for\nsome $\\alpha' < 1-\\alpha$.\nAnd, we can compute $\\ff^*_{\\alpha+\\alpha'}$ using a small number of Newton\nsteps, each of which will only require a Laplacian linear equation\nsolve, and hence is computable in $\\Otil(m)$ time.\nConcretely, for any $0 \\leq \\alpha < 1$,\ngiven the optimal flow at this $\\alpha$, we will be able to compute\nthe optimal flow at $\\alpha_{\\text{new}} = \\alpha + (1-\\alpha)\n\\frac{1}{150\\sqrt{m}}$.\nThis means that after $T = 150\\sqrt{m}\\log(1/\\epsilon)$ updates, we have a solution\nfor $\\alpha \\geq 1-\\epsilon$.\n\nWe can state the update problem as \n\n\\begin{align}\n \\label{eq:updateflow}\n  \\min_{\\ddelta \\in \\R^E} & \\quad \n    V(\\ddelta+\\ff)\n  \\\\\n\\textrm{s.t. }  \\BB\\ddelta = \\alpha' F^* \\bb_{st}\n  \\tag*{``The Update Problem''}\n\\end{align}\n\n\\subsection{Updates using Divergence}\n\nIt turns out that for the purposes of analysis, it will be useful to\nensure that our ``Update Problem'' uses an objective function that is\nminimized at $\\ddelta = \\veczero$.\n\nThis leads to a variant of the Update Problem, which we call the\n``Divergence Update Problem''.\nWe obtain our new problem by switching from\n$ V(\\ddelta+\\ff)$ as our objective to  $V(\\ddelta+\\ff)\n  -\n  (V(\\ff)\n  +\n  \\ip{\\grad V(\\ff), \\ddelta})$ as our objective, and this is called\n  the \\emph{divergence} of $V$ w.r.t. $\\ddelta$ \\emph{based} at $\\ff$.\n  \n\\begin{align}\n \\label{eq:divflow}\n  \\min_{\\ddelta \\in \\R^E} & \\quad \n    V(\\ddelta+\\ff)\n  -\n  (V(\\ff)\n  +\n  \\ip{\\grad V(\\ff), \\ddelta})\n  \\\\\n\\textrm{s.t. 
}  \\BB\\ddelta = \\alpha' F^* \\bb_{st}\n  \\tag*{``The Divergence Update Problem''}\n\\end{align}\n\nNow, for any flow $\\ddelta$ such that $\\BB\\ddelta = \\alpha' F^*\n\\bb_{st}$, using the Lagrangian gradient condition\n\\eqref{eq:barrierlagrangegrad},\nwe have\n$\\ip{\\grad V(\\ff^*_{\\alpha}), \\ddelta} = \\ip{\\xx^*_{\\alpha}, \\alpha' F^*\n  \\bb_{st}}$.\nHence, for such $\\ddelta$, we have \n\\[\n    V(\\ddelta+\\ff^*_{\\alpha})\n  -\n\\left(\n  V(\\ff^*_{\\alpha})\n  +\n  \\ip{\\grad V(\\ff^*_{\\alpha}), \\ddelta}\n\\right)\n=\n    V(\\ddelta+\\ff^*_{\\alpha})\n  -\n\\left(\n V(\\ff^*_{\\alpha})\n  +\n\\ip{\\xx^*_{\\alpha}, \\alpha' F^*\\bb_{st}}\n\\right)\n\\]\nWe conclude that the objectives of the Update\nProblem~\\eqref{eq:updateflow} and the Divergence Update\nProblem~\\eqref{eq:divflow} have the same minimizer, which we denote\n$\\ddelta^*_{\\alpha'}$, although, to be precise, it is also a function of $\\alpha$.\n\n% This problem has Lagrangian\n% \\[\n% \\calM(\\ddelta,\\zz) = V(\\ff) + \\zz^{\\trp}((\\alpha+\\alpha') F\n% \\bb_{st}-  \\BB(\\ff+\\ddelta) )\n% \\]\n\n% And we have optimality when\n% \\begin{align}\n%   \\label{eq:divfeasibility}\n%   \\BB(\\ff+\\ddelta)= (\\alpha+\\alpha') F^* \\bb_{st}\n%   \\\\\n%     \\tag*{``Update feasibility''}\n% \\text{and}\n% -\\cc \\leq \\ff + \\ddelta\\leq \\cc\n% \\end{align}\n% % \\begin{align}\n% %   \\label{eq:divlagrangegrad}\n% %   \\grad V(\\ff+\\ddelta) - \\grad V(\\ff) = \\BB^{\\trp} \\zz\n% %   \\tag*{Divergence Langrange Gradient.}\n% % \\end{align}\n% % $\\BB(\\ff+\\ddelta)= (1-\\alpha+\\alpha') F^* \\bb_{st}$\n% % and\n% % $ -\\cc \\leq \\ff + \\ddelta\\leq \\cc$,\n% and $\\grad_{\\ddelta} \\calM(\\ddelta,\\zz) = \\veczero$, i.e.\n% \\begin{align}\n%   \\label{eq:divlagrangegrad}\n% \\grad V(\\ff+\\ddelta) - \\grad V(\\ff) = \\BB^{\\trp} \\zz\n% \\end{align}\n\n% Note that if we simultaneously have \\eqref{eq:barrierfeasibility}, the\n% ``barrier feasibility''  condition, and \\eqref{eq:divfeasibility}, the``update feasibility'' condition,\n% satisfied along with Equations~\\eqref{eq:barrierlagrangegrad}\n% and~\\eqref{eq:divlagrangegrad}, then\n% $\\ff + \\ddelta$ satisfies the ``barrier feasibility'' with $\\alpha$\n% replaced by $\\alpha + \\alpha'$ and we have \n% \\[\n% \\grad V(\\ff+\\ddelta) = \\BB^{\\trp} (\\xx+\\zz).\n% \\]\n\nThus $\\ff^*_{\\alpha}+\\ddelta^*_{\\alpha'}$ is optimal for the\noptimization problem\n  \\begin{equation}\n   \\label{eq:barrierflowupdated}\n\\begin{aligned}\n\\min_{\\ff \\in \\R^E} & \\quad V(\\ff)\\\\\n\\textrm{s.t. }  \\BB\\ff= (\\alpha+\\alpha') F^* \\bb_{st}\n\\end{aligned}\n\\end{equation}\n\n\\begin{lemma}\n  \\label{lem:updateoptimality}\n  Suppose $\\ff^*_{\\alpha}$ is the minimizer of Problem~\\eqref{eq:barrierflow}\n  (the Barrier Problem with parameter $\\alpha$) and\n  $\\ddelta^*_{\\alpha'}$ is the minimizer of Problem~\\eqref{eq:divflow} (the\n  Divergence Update Problem with parameters $\\ff^*_{\\alpha}$ and $\\alpha'$),\n  then $\\ff^*_{\\alpha} + \\ddelta^*_{\\alpha'}$ is optimal for Problem~\\eqref{eq:barrierflow} \n  with parameter $\\alpha+\\alpha'$ (i.e. 
a new instance of the Barrier\n  problem).\n\\end{lemma}\n\n% \\begin{remark}\n%   \\[\n%     \\ip{\\grad V(\\ff), \\ddelta}) = \\ip{\\BB^{\\trp}\\xx, \\ddelta} =\n%     \\ip{\\xx, \\alpha' F^* \\bb_{st}}\n%   \\]\n%   and why it's still useful, despite\n%   being a constant.\n% \\end{remark}\n\n\n\\begin{algorithm}[H]\n  \\SetAlgoLined\n  $\\ff \\leftarrow \\veczero$\\;\n  $\\alpha \\leftarrow 0$\\;\n  \\While{$\\alpha < 1 - \\epsilon$}{\n    $\\alpha' \\leftarrow  \\frac{1-\\alpha}{150\\sqrt{m}}$\\;\n    Compute $\\ddelta$, the minimizer of Problem~\\eqref{eq:divflow}\\;\n    Let $\\ff \\leftarrow \\ff + \\ddelta$ and\n    $\\alpha \\leftarrow \\alpha + \\alpha'$\\;\n  }\n  \\Return{\\ff}\n  \\caption{\\textsc{Interior Point Method}}\n  \\label{alg:ipm}\n\\end{algorithm}\n\n\\begin{pseudotheorem}\n  \\label{thm:updatealgo}\n  Let $\\ff$ be the minimizer of Problem~\\eqref{eq:barrierflow}.\n  Then, when  $\\alpha' \\leq \\frac{1-\\alpha}{150\\sqrt{m}}$, the minimizer\n  $\\ddelta$ of Problem~\\eqref{eq:divflow} can be computed\n  in $\\Otil(m)$ time.\n\\end{pseudotheorem}\n\nThe key insight in this type of interior point method is that when the\nupdate $\\alpha'$ is small enough, the update problem becomes easy to\nsolve; this yields the following overall guarantee.\n\\begin{theorem}\\label{thm:maxflowipm}\n  Algorithm~\\ref{alg:ipm} returns a flow $\\ff$ that is feasible for\n  Problem~\\eqref{eq:maxflow} with $\\val(\\ff) \\geq (1-\\epsilon)F^*$,\n  in time $\\Otil(m^{1.5}\\log(1/\\epsilon))$.\n\\end{theorem}\n\n\\begin{proof}[Proof Sketch]\n  First note that for $\\alpha = 0$, the minimizer of\n  Problem~\\eqref{eq:barrierflow} is $\\ff = \\veczero$.\n  The proof now essentially follows by\n  Lemma~\\ref{lem:updateoptimality} and\n  Pseudotheorem~\\ref{thm:updatealgo}.\n  Note that $1-\\alpha$ shrinks by a factor $(1-\\frac{1}{150\\sqrt{m}})$\n    in each iteration of the while-loop, and so after\n    $150\\sqrt{m}\\log(1/\\epsilon)$ iterations, we have $1-\\alpha \\leq\n    \\epsilon$, at which point the loop terminates.\n   To turn this into a formal proof, we need to take care of the fact\n   that the proper theorem corresponding to \n   Pseudotheorem~\\ref{thm:updatealgo} only gives a highly accurate\n   but not exact solution $\\ddelta$ to the ``Update Problem''.\n   But it's possible to show that this is good enough (even though\n   both $\\ff$ and $\\ddelta$ end up not being exactly optimal in each iteration).\n\\end{proof}\n\n\\begin{remark}\n  For the maximum flow problem, when capacities are integral and\n  polynomially bounded, if we choose $\\epsilon = m^{-c}$ for some\n  large enough constant $c$, then given a feasible flow with $\\val(\\ff) =\n  (1-\\epsilon)F^*$, it is possible to compute an exact maximum flow in\n  nearly linear time.\n  Thus Theorem~\\ref{thm:maxflowipm} can also be used to compute an\n  exact maximum flow in $\\Otil(m^{1.5})$ time, but we omit the proof.\n  The idea is to first round to an almost optimal, feasible integral flow (which\n  requires a non-trivial combinatorial algorithm), and then to recover\n  the exact flow using Ford-Fulkerson.\n  See \\cite{M13} for details.\n\\end{remark}\n\n\\begin{remark}\n  It is possible to reduce an instance of directed maximum flow to an\n  instance of undirected maximum flow in nearly-linear time, in such a\n  way that if we can \\emph{exactly} solve the undirected instance,\n  then in nearly-linear time we can recover an exact solution to the\n  directed maximum flow problem.\n  Thus Theorem~\\ref{thm:maxflowipm} can also be used to solve\n  directed maximum flow.\n  We will ask you to develop this reduction in Graded Homework 
2.\n\\end{remark}\n\n\\begin{remark}\n  For sparse graphs with $m = \\Otil(n)$ and large capacities, this\n  running time is the best known, and improving it is a major open problem.\n\\end{remark}\n\n\\subsection{Understanding the Divergence Objective}\n% \\begin{align*}\n%   D( \\ddelta+\\ff \\mid \\ff )\n% \\end{align*}\n% \\[\n% D(x) = -\\log(1-x) - x\n% \\]\nNote that if $V(x) = -\\log(1-x)$, then $D(x) = V(x) - (V(0) + V'(0) x) = -\\log(1-x) - x$.\n\n% \\begin{figure}[H]\n%   \\centering\n%   % \\includegraphics[width=0.5\\linewidth]{fig/logbarrier.png}\n%   \\begin{minipage}{0.4\\textwidth}\n%     \\centering\n%     \\includegraphics[width=0.9\\textwidth]{fig/logbarrier.png} % first figure itself\n%     \\caption{Plot showing ${V(x) = -\\log(1-x)}$ and then linear\n%       approximation ${V(0) + V'(0) x}$.}\n%   \\end{minipage}\\hfill\n%   \\begin{minipage}{0.4\\textwidth}\n%     \\centering\n%     \\includegraphics[width=0.9\\textwidth]{fig/logdivergence.png}% second figure itself\n%     \\caption{Plot showing ${D(x) = V(x) - (V(0) + V'(0) x)}$.}\n%   \\end{minipage}\n% \\end{figure}\n\n\n\\begin{figure}[H]\n  \\centering\n    \\includegraphics[width=0.6\\textwidth]{fig/logbarrier.png} % first figure itself\n    \\caption{Plot showing ${V(x) = -\\log(1-x)}$ and its linear\n      approximation ${V(0) + V'(0) x}$.}\n\\end{figure}\n\n\n\\begin{figure}[H]\n  \\centering\n    \\includegraphics[width=0.6\\textwidth]{fig/logdivergence.png}% second figure itself\n    \\caption{Plot showing ${D(x) = V(x) - (V(0) + V'(0) x)}$.}\n\\end{figure}\n\n  \nWe let\n\\[\n  \\cc_+(e) = \\cc(e) - \\ff(e) \\text{ and } \\cc_-(e) = \\cc(e) + \\ff(e)\n\\]\n\nSo then  \n\\begin{align*}\n  D_V( \\ddelta )\n  &=\n  V(\\ddelta+\\ff)\n  -\n  (V(\\ff)\n  +\n  \\ip{\\grad V(\\ff), \\ddelta})\n  \\\\\n  &=\n  \\sum_e\n  -\\log\n  \\left(\\frac{\\cc(e)-(\\ddelta(e)+\\ff(e))}\n  {\\cc(e)-\\ff(e)} \n    \\right)\n   -\n\\frac{\\ddelta(e)}\n  {\\cc(e)-\\ff(e)} \n  \\\\\n  &\\quad\\quad\\,\\,\\,\\,\n  -\\log\n\\left(\n  \\frac{\\cc(e)+(\\ddelta(e)+\\ff(e))}\n  {\\cc(e)+\\ff(e)} \n  \\right)\n   +\n\\frac{\\ddelta(e)}\n    {\\cc(e)+\\ff(e)}\n  \\\\\n  &=\n    \\sum_e\n    D\\left(\\frac{\\ddelta(e)}\n    {\\cc(e)-\\ff(e)}\n    \\right)\n    +\n     D\\left(-\\frac{\\ddelta(e)}\n    {\\cc(e)+\\ff(e)}\n    \\right)\n   \\\\\n  &=\n    \\sum_e\n    D\\left(\\frac{\\ddelta(e)}{\\cc_+(e)}\\right)\n    +\n    D\\left(-\\frac{\\ddelta(e)}{\\cc_-(e)}\\right)\n\\end{align*}\n\nNote that we can express Problem~\\eqref{eq:divflow} as\n% \\begin{equation}\n%    \\label{eq:divflow2}\n% \\begin{aligned}\n%   \\min\n%   _{\\ddelta \\in \\R^E} & \\quad \n%      D_V( \\ddelta )\n%   \\\\\n% \\textrm{s.t. }  \\BB\\ddelta = \\alpha' F^* \\bb_{st}\n% \\end{aligned}\n% \\end{equation}\n\\begin{align}\n   \\label{eq:divflow2}\n  \\min_{\\ddelta \\in \\R^E} & \\quad \n      D_V( \\ddelta )\n  \\\\\n  \\textrm{s.t. 
}  \\BB\\ddelta = \\alpha' F^* \\bb_{st}\n\\tag*{``The Update Problem, restated''}\n\\end{align}\n\nNote that  $D_V( \\ddelta )$ is strictly convex over the\nfeasible set, so the argmin is unique.\n\n\\subsection{Quadratically Smoothing Divergence and Local Agreement}\n\n\\[\n  \\sdiv_{\\epsilon}(x) =\n  \\begin{cases}\n    -\\log(1-x) - x & \\text{ if } \\abs{x} \\leq \\epsilon \\\\\n    D(\\epsilon) + D'(\\epsilon) (x - \\epsilon) \n    +\\frac{D''(\\epsilon)}{2} (x - \\epsilon)^2\n    & \\text{ if }  x \\geq \\epsilon \\\\\n    D(-\\epsilon) + D'(-\\epsilon) (x + \\epsilon) \n    +\\frac{D''(-\\epsilon)}{2} (x +\\epsilon)^2\n    & \\text{ if }  x \\leq -\\epsilon \\\\\n  \\end{cases}\n\\]\n\nFor brevity, we define\n\\[\n  \\sdiv(x) =\\sdiv_{0.1}(x) \n\\]\n\\begin{lemma}\n  \\label{lem:sdivderivs}\n  \\noindent\n  \\begin{enumerate}\n  \\item $1/2 \\leq \\sdiv''(x) \\leq 2$.\n  \\item For $x \\geq 0$, we have $x/2 \\leq \\sdiv'(x) \\leq 2x$\nand $-2x \\leq \\sdiv'(-x) \\leq -x/2$.\n\\item $x^2/4 \\leq \\sdiv(x) \\leq x^2$.\n  \\end{enumerate}\n\\end{lemma}\n\nWhat's happening here? We glue together $D(x)$ for small $x$ with its\nquadratic approximation for $\\abs{x} > \\epsilon$.\nFor $x > 0$, we ``glue in'' a Taylor series expansion based at $x =\n\\epsilon$.\n\n\\begin{figure}[H]\n  \\centering\n    \\includegraphics[width=\\textwidth]{fig/quad-div-apx.png} % first figure itself\n    \\caption{Plot showing ${D(x) = -\\log(1-x) - x}$ and the quadratic\n      approximation based at $x = 0.1$.}\n\\end{figure}\n\n\nWe also define\n  \\begin{align*}\n  \\sdiv_V( \\ddelta )\n &=\n    \\sum_e\n     \\sdiv\\left(\\frac{\\ddelta(e)}{\\cc_+(e)}\\right)\n    +\n     \\sdiv\\left(-\\frac{\\ddelta(e)}{\\cc_-(e)}\\right)\n  \\end{align*}\n\nWe can now introduce the smoothed optimization problem\n\\begin{align}\n   \\label{eq:sdivflow}\n  \\min_{\\ddelta \\in \\R^E} & \\quad \n     \\sdiv_V( \\ddelta )\n  \\\\\n  \\textrm{s.t. 
}  \\BB\\ddelta = \\alpha' F^* \\bb_{st}\n\\tag*{``The Smoothed Update Problem''}\n\\end{align}\nNote that  $\\sdiv_V( \\ddelta )$ is strictly convex over the\nfeasible set, so the argmin is unique.\n\n\\begin{pseudoclaim}\n  We can compute the argmin $\\ddelta^*$ of\n  Problem~\\eqref{eq:sdivflow}, the Smoothed Update Problem, using the Newton steps for\n  $K$-stable Hessian convex functions that we saw in the previous chapter, in\n  $\\Otil(m)$ time.\n\\end{pseudoclaim}\n\\begin{proof}[Sketch of proof]\n  Problem~\\eqref{eq:sdivflow} fits the class of problems for which we\n  showed in the previous chapter \n  that (appropriately scaled) Newton steps converge.\n  This is true because the Hessian of $\\sdiv_V$ at any point is a\n  constant-factor spectral approximation of the Hessian at the minimizer\n  $\\ddelta^*$, as can be\n  shown from Lemma~\\ref{lem:sdivderivs}.\n  Because the Hessian of $\\sdiv_V( \\ddelta )$ is diagonal, and the\n  constraints are flow constraints, each\n  Newton step boils down to solving a Laplacian linear system, which can\n  be done to high accuracy in $\\Otil(m)$ time.
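\n  To sketch what such a step looks like (writing $\\HH$ for the diagonal\n  Hessian of $\\sdiv_V$ at the current feasible iterate and $\\gg$ for the\n  gradient there), the Newton step $\\zz$ minimizes the local quadratic\n  model subject to $\\BB\\zz = \\veczero$, whose optimality conditions are\n  \\[\n    \\HH \\zz + \\BB^{\\trp} \\yy = -\\gg\n    \\quad\\text{and}\\quad\n    \\BB \\zz = \\veczero ,\n  \\]\n  so $\\yy$ solves $\\BB \\HH^{-1} \\BB^{\\trp} \\yy = -\\BB \\HH^{-1} \\gg$, and\n  then $\\zz = -\\HH^{-1}(\\gg + \\BB^{\\trp} \\yy)$.\n  Since $\\HH$ is diagonal with positive entries, $\\BB \\HH^{-1} \\BB^{\\trp}$\n  is exactly a weighted graph Laplacian.\n\\end{proof}\n\n\\begin{remark}\n  There are three things we need to modify to turn the pseudoclaim\n  into a true claim, addressing the errors arising from both Laplacian\n  solvers and Newton steps:\n  \\begin{enumerate}\n  \\item We need to rephrase the claim so that we only claim\n    $\\ddelta^*$ has been computed to high accuracy, rather than exactly.\n  \\item We need to show that we can construct an initial guess\n    $\\ddelta_0$ to start off Newton's method, for which the value\n    $\\sdiv_V( \\ddelta_0 )$ is not too large. (This is easy.)\n  \\item We need to show that Newton steps converge despite using a\n    Laplacian solver that doesn't give exact solutions, only high\n    accuracy solutions. (This takes a bit of work, but is ultimately not\n    too difficult.)\n  \\end{enumerate}\nImportantly, to ensure our overall interior point method still works,\n  we also need to show that it converges,\neven if we're using approximate solutions\neverywhere. This also takes some work to show, but again is not too difficult.\n\\end{remark}\n\n\n\\paragraph{Local Agreement Implies Same Optimum.}\n\\begin{lemma}\n  \\label{lem:argminsfromlocalagreement}\n  Suppose $S \\subseteq \\R^n$ is a convex set, and let $f, g \\colon S \\to \\R$ be convex\n  functions.\n  Let $\\xx^* = \\argmin_{\\xx \\in S} f(\\xx)$.\n  Suppose $f,g$ agree on a neighborhood of $\\xx^*$ in $S$ (i.e. 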
an\n  open set containing $\\xx^*$).\n  Then $\\xx^* = \\argmin_{\\xx \\in S} g(\\xx)$ as well.\n\\end{lemma}\n\\begin{proof}[Proof Sketch]\n  We sketch the proof in the case when both $f,g$ are differentiable: Observe\n  that $\\veczero = \\grad f(\\xx^*) = \\grad g(\\xx^*)$, and hence $g$\n  is also minimized at $\\xx^*$.\n\\end{proof}\n\nWe define\n\\begin{equation}\n  \\label{eq:symrescap}\n  \\cchat(e) = \\min(\\cc_+(e) , \\cc_-(e) ) \n\\end{equation}\n% and for a positive vector $\\cc > 0$, we define\n% \\[\n% \\norm{\\yy}_{\\cc,\\infty} = \\norm{}\n%   \\]\n\n\\begin{lemma}\n  Suppose $\\ddelta^*$ is the argmin of\n  Problem~\\eqref{eq:sdivflow}, the Smoothed Update Problem, and\n  $\\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty} < 0.1$.\n  Then $\\ddelta^*$ is the argmin of Problem~\\eqref{eq:divflow2}.\n\\end{lemma}\n\\begin{proof}\n  We observe that if   $\\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty}\n  < 0.1$, then $\\sdiv_V( \\ddelta^*) = D_V( \\ddelta^*)$, and,\n  for all $\\ttau \\in \\R^m$ with norm\n  \\[\n    \\norm{\\cvec{\\ttau(e)/\\cchat(e)}}_{\\infty} < 0.1 -\n    \\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty}\n  \\]\n  we have that\n  $\\sdiv_V( \\ddelta^*+\\ttau) = D_V( \\ddelta^*+\\ttau\n  )$.\n  Thus $\\sdiv_V$ and $D_V$ agree on a neighborhood around\n  $\\ddelta^*$ and hence by\n  Lemma~\\ref{lem:argminsfromlocalagreement}, we have that\n  $\\ddelta^*$ is the argmin of Problem~\\eqref{eq:divflow2}.\n\\end{proof}\n\n\\subsection{Step size for divergence update}\n\n\\begin{definition}[\\emph{$s$-$t$ well-conditioned} graph]\n   An undirected, capacitated multi-graph $G = (V,E,\\cc)$ with source $s$\n   and sink $t$ is \n   \\emph{$s$-$t$ well-conditioned} if,\n   letting $U$ denote the maximum edge capacity $U =\n   \\norm{\\cc}_{\\infty}$,\n   we have at least $\\frac{2}{3} m$ multi-edges of capacity $U$ going directly\n   from $s$ to $t$.\n \\end{definition}\n\n \\begin{remark}\n   It is straightforward to make a graph $s$-$t$ well-conditioned.\n   We just add $2m$ new edges of capacity $U$ directly between $s$ and\n   $t$.\n   Given an exact maximum flow in the new graph, it is trivial to get\n   one in the original graph: Just remove the flow on the new edges.\n \\end{remark}\n \n \\begin{definition}\n   Given a \\emph{directed} graph $G = (V,E,\\cc)$, the\n   \\emph{symmetrization}\n   of $G$ is the undirected graph $\\Ghat = (V,\\Ehat,\\cchat)$ given by\n\\[\n\\setof{a,b} \\in \\Ehat \\text{ if } (a,b) \\in E \\text{ AND } (b,a) \\in E    \n\\]\nand\n\\[\n  \\cchat(\\setof{a,b}) = \\min(\\cc(a,b),\\cc(b,a))\n  .\n\\]\n\\end{definition}\nNote that when  $\\Ghat_{\\ff}$ is the symmetrization of the residual\n   graph $G_{\\ff}$ (which we defined in Chapter~\\ref{cha:maxflow1}), then $\\cchat$ matches\n   exactly the definition of $\\cchat$ in Equation~\\eqref{eq:symrescap}.\n\\begin{lemma}\n\\label{lem:symres}\n  Let $G$ be an undirected, capacitated multi-graph $G = (V,E,\\cc)$\n  which is $s$-$t$ well-conditioned.\n   Let $\\ff$ be the minimizer of Program~\\eqref{eq:barrierflow}.\n   Let $\\Ghat_{\\ff}$ be the \\emph{symmetrization} of the residual\n   graph $G_{\\ff}$ (as defined above, with $G_{\\ff}$ as in Chapter~\\ref{cha:maxflow1}).\n   Then there exists a flow $\\ddeltahat$ which satisfies $\\BB \\ddeltahat = \\frac{1-\\alpha}{5} F^*\n   \\bb_{st} $ and is feasible in\n   $\\Ghat_{\\ff}$. 
Note that we can also state the feasibility in\n   $\\Ghat_{\\ff}$ as\n   \\[\n\\norm{\\cvec{\\ddeltahat(e)/\\cchat(e)}}_{\\infty} \\leq 1\n     \\]\n\\end{lemma}\n\\begin{proof}\n  We recall that since $\\ff$ is the minimizer of\n  Program~\\eqref{eq:barrierflow}, there exist dual-optimal voltages\n  $\\xx$ such that\n  \\[\n    \\BB^{\\trp} \\xx = \\grad V(\\ff) =\n    \\cvec{\\frac{1}\n    {\\cc(e)-\\ff(e)}\n    -\n    \\frac{1}\n    {\\cc(e)+\\ff(e)}}\n  \\]\nFrom Chapter~\\ref{cha:maxflow1}, we know that there is a flow $\\ddeltabar$ that is feasible with\nrespect to the residual graph capacities of the graph $G_{\\ff}$ such\nthat $\\BB \\ddeltabar = (1-\\alpha)F^* \\bb_{st}$.\nNote that when treating $\\ddeltabar$ as an undirected\nflow, feasibility in the residual graph means that $\\ddeltabar(e) < \\cc(e)-\\ff(e)$\nand $-\\ddeltabar(e) < \\cc(e)+\\ff(e)$.\nThus,\n\\[\n  (1-\\alpha)F^* \\bb_{st}^{\\trp}\n  \\xx\n  =\n  \\ddeltabar^{\\trp} \n  \\BB^{\\trp} \\xx\n  =\n  \\sum_e\n  \\frac{\\ddeltabar(e)}\n    {\\cc(e)-\\ff(e)}\n    -\n    \\frac{\\ddeltabar(e)}\n    {\\cc(e)+\\ff(e)}\n    \\leq\n    m\n  \\]\n  Now, because the graph is $s$-$t$ well-conditioned,\n  there are at least $\\frac{2}{3} m $ edges directly from $s$ to $t$ with capacity $U$,\n  and each such edge $e$\n satisfies, by the Lagrangian gradient optimality condition~\\eqref{eq:barrierlagrangegrad},\n \\[\n   \\bb_{st}^{\\trp}\n  \\xx\n   =\n  \\frac{1}\n    {U-\\ff(e)}\n    -\n    \\frac{1}\n    {U+\\ff(e)}\n  \\]\nNote that $\\frac{2}{3} mU \\leq F^* \\leq mU$ because the graph is $s$-$t$\nwell-conditioned.\nTo complete the analysis, we consider three cases.\n\n\\emph{Case 1: $\\abs{\\ff(e)} \\leq \\frac{2}{3} U$.}\nThen the capacity on each of these edges in the symmetrized residual\ngraph $\\Ghat_{\\ff}$ is at least $U/3$.\nAs there are $\\frac{2}{3} m $ of them,\nwe get that there is a feasible flow in $\\Ghat_{\\ff}$ of value at least\n$\\frac{2}{9} mU \\geq \\frac{1}{5} F^* \\geq \\frac{1-\\alpha}{5} F^*$. \n\n\\emph{Case 2: $\\ff(e) < -\\frac{2}{3} U$.}\nBy the gradient condition, we have the same flow on all of the $\\frac{2}{3} m$\n$s$-$t$ edges, adding  up to at least $\\frac{4}{9} mU$ going from $t$ to $s$.\nThis means that we must have at least $\\frac{4}{9} mU$ flow going from\n$s$ to $t$ via the remaining edges. But, their combined capacity is at\nmost $\\frac{1}{3} mU$, so that cannot happen. 
Thus we can rule out\nthis case entirely.\n\n\\emph{Case 3: $\\ff(e) > \\frac{2}{3} U$.}\nThen \n\\[\n  \\frac{m}{(1-\\alpha)F^* }\n  \\geq\n     \\bb_{st}^{\\trp}\n     \\xx\n \\geq\n  \\frac{1}\n    {U-\\ff(e)}\n    -\n    \\frac{1}\n    {U+\\ff(e)}\n \\geq\n   \\frac{4/5}\n    {U-\\ff(e)}\n  \\]\nSo\n\\[\n  U-\\ff(e) \\geq \\frac{4}{5}  \\frac{(1-\\alpha)F^* }{m}\n  \\geq \\frac{1}{2}\n(1-\\alpha) U\n\\]\nIn this case, each of the $\\frac{2}{3} m$ \n$s$-$t$ edges with capacity $U$ in $G$ will\nhave capacity at least $(1-\\alpha) U/2$ in $\\Ghat_{\\ff}$.\nThis guarantees that there is a feasible flow in $\\Ghat_{\\ff}$ of value\nat least $\\frac{1}{3} (1-\\alpha) mU \\geq \\frac{1}{3} (1-\\alpha) F^*$.\n\\end{proof}\n\\begin{lemma}\n  Let $0 < \\alpha' \\leq \\frac{1-\\alpha}{150\\sqrt{m}}$.\n  Then the minimizer $\\ddelta^*$ of Problem~\\eqref{eq:sdivflow}\n  satisfies $\\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty} < 0.1$.\n\\end{lemma}\n\\begin{proof}\n  By Lemma~\\ref{lem:symres}, there exists a flow $\\ddeltahat$ which satisfies $\\BB \\ddeltahat = \\frac{1-\\alpha}{5}  F^*\n   \\bb_{st} $ and ${\\norm{\\cvec{\\ddeltahat(e)/\\cchat(e)}}_{\\infty} \\leq\n   1}$.\n Hence for any\n $0 < \\alpha' \\leq \\frac{1-\\alpha}{150\\sqrt{m}}$,\n the flow \n $\\ddeltatil = \\alpha' \\frac{5}{1-\\alpha}\\ddeltahat$\n satisfies\n$\\BB \\ddeltatil = \\alpha' F^*\\bb_{st}$\nand\n$\\norm{\\cvec{\\ddeltatil(e)/\\cchat(e)}}_{\\infty}\n\\leq\n\\frac{1}{30\\sqrt{m}} $.\n   This means that\n\\begin{align*}\n     \\sdiv_V( \\ddeltatil )\n     &=\n    \\sum_e\n     \\sdiv\\left(\\frac{\\ddeltatil(e)}{\\cc_+(e)}\\right)\n    +\n    \\sdiv\\left(-\\frac{\\ddeltatil(e)}{\\cc_-(e)}\\right)\n  \\\\\n  &\\leq\n    \\sum_e\n     \\left(\\frac{\\ddeltatil(e)}{\\cc_+(e)}\\right)^2\n    +\n    \\left(\\frac{\\ddeltatil(e)}{\\cc_-(e)}\\right)^2\n\\tag*{By Lemma~\\ref{lem:sdivderivs}.}\n  \\\\\n  &\n  \\leq\n    \\sum_e\n    2\\left(\\frac{\\ddeltatil(e)}{\\cchat\n        (e)}\\right)^2\n \\\\\n     &\\leq 2/900 < 1/400\n.\n\\end{align*}\n\nThis then means that the minimizer $\\ddelta^*$ of\nProblem~\\eqref{eq:sdivflow} also satisfies\n$\\sdiv_V( \\ddelta^* ) \\leq \\sdiv_V( \\ddeltatil ) < 1/400$.\nNow, using Lemma~\\ref{lem:sdivderivs} again, this time for the lower bound\n$\\sdiv(x) \\geq x^2/4$,\n\\begin{align*}\n  \\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty}^2\n&\\leq \n      \\sum_e\n     \\left(\\frac{\\ddelta^*(e)}{\\cc_+(e)}\\right)^2\n    +\n    \\left(\\frac{\\ddelta^*(e)}{\\cc_-(e)}\\right)^2\n\\\\                                      \n&\\leq \n    \\sum_e\n     4\\sdiv\\left(\\frac{\\ddelta^*(e)}{\\cc_+(e)}\\right)\n    +\n    4\\sdiv\\left(-\\frac{\\ddelta^*(e)}{\\cc_-(e)}\\right)\n\\\\\n  &=\n4\\,\\sdiv_V( \\ddelta^* ) \n<\n4/400 = 1/100\n.\n\\end{align*}\nHence $\\norm{\\cvec{\\ddelta^*(e)/\\cchat(e)}}_{\\infty} < 0.1$.\n\\end{proof}
%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"agao21_script\"\n%%% End:\n", "meta": {"hexsha": "a648964c89777a7c7ce94347202d304270c182b1", "size": 27596, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "agao21_script/lecture14.tex", "max_stars_repo_name": "csssaz/agao21_script", "max_stars_repo_head_hexsha": "51044f4775e5e20d2c5fc5c0d035363e5beb66be", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "agao21_script/lecture14.tex", "max_issues_repo_name": "csssaz/agao21_script", "max_issues_repo_head_hexsha": "51044f4775e5e20d2c5fc5c0d035363e5beb66be", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "agao21_script/lecture14.tex", "max_forks_repo_name": "csssaz/agao21_script", "max_forks_repo_head_hexsha": "51044f4775e5e20d2c5fc5c0d035363e5beb66be", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0765765766, "max_line_length": 119, "alphanum_fraction": 0.6379910132, "num_tokens": 9848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5987275724144999}}
{"text": "\\chapter{Example}\n\nThis example will cover a calculator that can parse mathematical expressions and print the result. The resulting code for this, can be found in the \\textit{examples} folder. \n\n\n\\section{Define the classes}\n\nThe first step should be the specify, which nonterminal and terminal symbols are needed. In our case, we have the following symbols: \n\n\\begin{itemize}\n    \\item Nonterminal\n    \\begin{itemize}\n        \\item Plus\n        \\item Minus\n        \\item Division\n        \\item Multiplication\n    \\end{itemize}\n\n    \\item Terminal\n    \\begin{itemize}\n        \\item Number\n    \\end{itemize}\n\\end{itemize}\n\nThe nonterminal symbols need, of course, two child nodes. This could for example be a terminal but also nonterminal symbol. With these symbols, we can basically define a simple syntax tree. \n\n\n\n\\section{Implement the classes}\n\nNow that we have defined the classes, we can implement them. As already mentioned in the last chapter, each class derives from an abstract class, that has the \\textit{interpret} method. \n\n\\begin{verbatim}\npackage expressions;\n\npublic abstract class AbstractExpression {\n    public abstract int interpret();\n\n    @Override\n    public abstract String toString();\n}\n\\end{verbatim}\n\nYou can see that this class also overrides the \\textit{toString} method. You'll see later why. The different expressions then derive the \\textit{AbstractExpression} class and implement the \\textit{interpret} method. This could look like this: \n\n\\subsection{Nonterminal Expressions}\n\n\\begin{verbatim}\npackage expressions;\n\nimport lombok.AllArgsConstructor;\nimport lombok.Getter;\nimport lombok.Setter;\n\n@Getter\n@Setter\n@AllArgsConstructor\npublic class MultiplyExpression extends AbstractExpression {\n    private AbstractExpression lhs;\n    private AbstractExpression rhs;\n\n    @Override\n    public int interpret() {\n        return lhs.interpret() * rhs.interpret();\n    }\n\n    @Override\n    public String toString() {\n        return String.format(\"(%s * %s)\", lhs.toString(), rhs.toString());\n    }\n}\n\\end{verbatim}\n\nI should probably mention, that I used \\textit{Lombok} to automatically generate the getter and setters for me so I don't have to write that much code and keep it simple. The only difference between the different nonterminal symbols is, that they have a different \\textit{interpret} method. For example the \\textit{PlusExpression} will just use the \\textit{+} instead of the \\textit{*} operator. As you have seen in the UML diagram, the nonterminal expression forward interprets the terminal expressions that it stores internally. \n\nYou can also see that I implemented the overridden \\textit{toString} method, which will later be used to print our custom parse tree. It just takes the left and right side of the mathematical expression and prints it. Just like you would write it on paper. \n\n\\subsection{Terminal Expressions}\n\nThis is basically the same as the nonterminal expression above, with the only difference, that it does not have child nodes. 
It has only a simple number, which cannot be replaced.\n\n\\begin{verbatim}\npackage expressions;\n\nimport lombok.AllArgsConstructor;\nimport lombok.Getter;\nimport lombok.Setter;\n\n@Getter\n@Setter\n@AllArgsConstructor\npublic class NumberExpression extends AbstractExpression {\n    private int number;\n\n    public NumberExpression(String s) {\n        this.number = Integer.parseInt(s);\n    }\n\n    @Override\n    public int interpret() {\n        return number;\n    }\n\n    @Override\n    public String toString() {\n        return String.format(\"%s\", number);\n    }\n}\n\\end{verbatim}\n\n\n\\section{Define an input format}\n\nI decided to use the \\nameref{sec:reverse-polish-notation} as input format. This removes the need for parentheses, which makes parsing the input easier. The input format for our calculator can also be described in the \\nameref{sec:backus-naur-form}, which looks like this: \n\n\\begin{verbatim}\nexpression  ::= plus | minus | variable | number\nplus        ::= expression expression '+'\nminus       ::= expression expression '-'\nvariable    ::= 'a' | 'b' | 'c' | ... | 'z'\ndigit       ::= '0' | '1' | ... | '9'\nnumber      ::= digit | digit number\n\\end{verbatim}\n\nThis defines a language that contains \\nameref{sec:reverse-polish-notation} expressions like these: \n\n\\begin{verbatim}\na b +\na b c + -\na b + c a - -\n\\end{verbatim}\n\n\\section{Parse the input}\n\nThe input can be parsed by first splitting the string and then finding the correct expression for each token. Using a stack makes parsing the \\nameref{sec:reverse-polish-notation} input really easy. \n\n\\begin{verbatim}\nprivate static AbstractExpression createAbstractSyntaxTree (String tokenString) {\n    var tokenList = tokenString.split(\" \");\n    var stack = new Stack<AbstractExpression>();\n\n    // Iterate through the token list.\n    for (var token : tokenList) {\n        if (isOperator(token)) {\n            var rhs = stack.pop();\n            var lhs = stack.pop();\n\n            var operator = getOperator(token, lhs, rhs);\n\n            stack.push(operator);\n        } else {\n            var number = new NumberExpression(token);\n            stack.push(number);\n        }\n    }\n\n    return stack.pop();\n}\n\\end{verbatim}\n\n\n\\section{Result}\n\nOnce all the pieces are in place, you can take an input string like \\textbf{4 3 2 - 1 + *}, generate the \\nameref{sec:abstract-syntax-tree} and then either print it or call \\textit{interpret}. The \\textit{toString} method forwards the call to the children's \\textit{toString} methods, so if we print the root expression, the entire \\nameref{sec:abstract-syntax-tree} gets printed to the console. 
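\n\nTo see how the stack-based parser builds this tree, here is a step-by-step trace of the token string \\textbf{4 3 2 - 1 + *} (the actions refer to the parser above):\n\n\\begin{verbatim}\ntoken | action                        | stack (bottom to top)\n------+-------------------------------+----------------------\n4     | push NumberExpression(4)      | 4\n3     | push NumberExpression(3)      | 4 3\n2     | push NumberExpression(2)      | 4 3 2\n-     | pop rhs=2, lhs=3, push minus  | 4 (3 - 2)\n1     | push NumberExpression(1)      | 4 (3 - 2) 1\n+     | pop rhs=1, lhs=(3 - 2)        | 4 ((3 - 2) + 1)\n*     | pop both, push multiply       | (4 * ((3 - 2) + 1))\n\\end{verbatim}\n\nThe single expression left on the stack is the root of the abstract syntax tree.\n\n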
The output will be \\textit{(4 * ((3 - 2) + 1)) = 8} and \\textit{((12 * 2) + (64 / 2)) = 56}.\n\n\\begin{verbatim}\npublic static void main(String[] args) {\n    //\n    // Create expression tree from string\n    //\n    var tokenString = \"4 3 2 - 1 + *\";\n    var result = createAbstractSyntaxTree(tokenString);\n    System.out.println(String.format(\"%s = %s\", \n        result, result.interpret()));\n\n    //\n    // Create expression tree manually\n    //\n    var customExpression = new PlusExpression(\n            new MultiplyExpression(new NumberExpression(12), \n                new NumberExpression(2)),\n            new DivideExpression(new NumberExpression(64), \n                new NumberExpression(2))\n    );\n    System.out.println(String.format(\"%s = %s\", \n        customExpression, customExpression.interpret()));\n}\n\\end{verbatim}\n", "meta": {"hexsha": "598a17c2bc3b5558964f1c402a12d85600b0792b", "size": 6384, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/chapters/01_document/04_example.tex", "max_stars_repo_name": "not-matthias/interpreter-pattern", "max_stars_repo_head_hexsha": "b8894f4b23c0ef2ce11c152a40691cd797246869", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documentation/chapters/01_document/04_example.tex", "max_issues_repo_name": "not-matthias/interpreter-pattern", "max_issues_repo_head_hexsha": "b8894f4b23c0ef2ce11c152a40691cd797246869", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documentation/chapters/01_document/04_example.tex", "max_forks_repo_name": "not-matthias/interpreter-pattern", "max_forks_repo_head_hexsha": "b8894f4b23c0ef2ce11c152a40691cd797246869", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7777777778, "max_line_length": 531, "alphanum_fraction": 0.704887218, "num_tokens": 1453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7931059438487663, "lm_q1q2_score": 0.5987275719315643}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[letterpaper, margin=1in]{geometry} % page format\n\\usepackage{listings} % this package is for including code\n\\usepackage{graphicx} % this package is for including figures\n\\usepackage{amsmath}  % this package is for math and matrices\n\\usepackage{amsfonts} % this package is for math fonts\n\\usepackage{tikz} % for drawings\n\\usepackage{hyperref} % for urls\n\n\\title{Homework 1}\n\\author{Amy Pitts}\n\\date{2/5/19}\n\n\\begin{document}\n\\lstset{language=Python}\n\n\\maketitle\n\n\\section{Problem 1.2}\nConsider the perceptron in two dimensions: $h(x)=sign(w^Tx)$\nwhere $w=[w_0,w_1,w_2]^T$ and $x=[1,x_1,x_2]^t$. \nTechnically, $x$ has three coordinates, but we call this \nperception two-dimensional because the first coordinates \nis fixed at 1. \\\\\n(a) Show that the regions on the plane where $h(x)=+1$\nand $h(x)=-1$ are separated by a line. If we express\nthis line by the equation $x_2=ax_1+b$ what are \nthe slope $a$ and intercept $b$ in terms of $w_0,w_1,x_2$? \\\\\n\\indent \\textbf{Solution:} If we have $h(x)=1$ and $h(x)=-1$ this \nimplies that for $h(x)=1$ we have $w^Tx>0$ and with $h(x)=-1$ we \nhave $w^Tx<0$ so the separation between the two is just the \nx-axis or when $w^Tx=0$. \\\\\nFor the equation $x_2=ax_1+b$ we have that $a= -\\frac{w_1}{w_2}$ and $b=-\\frac{w_0}{w_2}$\n\n(b) Draw a picture for the cases $w=[1,2,3]^T$ and $w=-[1,2,3]^T$. \\\\\nIn more than two dimensions, the $+1$ and $-1$ regions are\nseparated by a hyperplane, the generalization of a line. \\\\\n\\indent \\textbf{Solution:} \\\\\n\\includegraphics[width=85mm]{positive.png}\n\\includegraphics[width=85mm]{negative.png} \\\\\n\\lstinputlisting[language=Python,frame=single]{plotting_practice.py}\n\n\n\\section{Problem 1.4}\nIn Exercise 1.4 we use an artificial data set to study the \nperceptron learning algorithm. This problem leads you to explore the algorithm\nfurther with data sets of different sizes and dimensions. \\\\\n\\indent For the following question I will be using and modifying \nthis code: \n\\lstinputlisting[language=Python,frame=single]{plotP.py}\n\\indent \\textbf{(a):} Generate a linearly separable data set of \nsize 20 as indicated in Exercise 1.4. Plot the examples\n$\\{(x_n,y_n)\\}$ as well as the target function $f$ on a plane. Be sure\nto mark the examples from different classes differently, \nand add labels to the axes of the plot. \\\\\n\\includegraphics[width=85mm]{a_1.png} \n\\includegraphics[width=85mm]{a_2.png} \\\\\nThe program took 5 iterations to properly split up the two groups. \\\\\n\n\n\\indent \\textbf{(b):} Run the perceptron learning algorithm on \nthe data set above. Report the number of updates that the algorithm\ntakes before converging. Plot the examples $\\{(x_n,y_n)\\}$, and the\nfinal hypothesis $g$ in the same figure. Comment on whether $f$ is \nclose to $g$. \\\\\n\\includegraphics[width=85mm]{b_1.png} \n\\includegraphics[width=85mm]{b_2.png} \\\\\nThe program took 2 iterations to properly split up the two groups.\nWe are unable to tell if $f$ is close to $g$ because we do not know \nwhat $f$ is. \\\\ \n\n\\indent \\textbf{(c):} Repeat everything in (b) with another randomly\ngenerated data set of size 20. Compare your results with (b). \\\\\n\\includegraphics[width=85mm]{c_1.png} \n\\includegraphics[width=85mm]{c_2.png} \\\\\nThe program took 1 iteration to properly split up the two groups.\nWe are unable to tell if $f$ is close to $g$ because we do not know \nwhat $f$ is. 
\\\\ \n\n\\indent \\textbf{(d):} Repeat everything in (b) with another randomly\ngenerated data set of size 100. Compare your results with (b). \\\\\n\\includegraphics[width=85mm]{d_1.png} \n\\includegraphics[width=85mm]{d_2.png} \\\\\nThe program took 5 iterations to properly split up the two groups.\\\\\n\n\\indent \\textbf{(e):} Repeat everything in (b) with another randomly\ngenerated data set of size 1,000. Compare your results with (b). \\\\\n\\includegraphics[width=85mm]{e_1.png} \n\\includegraphics[width=85mm]{e_2.png} \\\\\nThe program took 1 iteration to properly split up the two groups.\nI thought it was going to take more iterations, but the \ndata generator is making perfectly separated groups, making \nthem very easy to separate. Also, after running the program\na couple of times to see what happens, I found that I either get \nstuck in a loop, never being able to find the perfect line,\nor the groups are easily separated and it takes 1 to 2 iterations. \n\\\\\n\n\n\\indent \\textbf{(f):} Modify the algorithm such that it takes \n$x_n \\in \\mathbb{R}^{10}$ instead of $\\mathbb{R}^2$. Randomly generate\na linearly separable data set of size 1,000 with $x_n \\in \\mathbb{R}^{10}$\nand feed the data set to the algorithm. How many updates does the \nalgorithm take to converge? \\\\\n\\includegraphics[width=85mm]{f.png} \\\\\nThe algorithm took one update to converge. \\\\\n\n\n\\indent \\textbf{(g):} Repeat the algorithm on the same data set as (f)\nfor 100 experiments. In the iterations of each experiment, pick \n$x(t)$ randomly instead of deterministically. Plot a histogram\nfor the number of updates that the algorithm takes to converge. \\\\ \n\\includegraphics[width=150mm]{g.png} \\\\\n\\lstinputlisting[language=Python,frame=single]{10_dims.py}\n\n\n\\indent \\textbf{(h):} Summarize your conclusions with respect to \naccuracy and running time as a function of $N$ and $d$. \\\\\n\nI found that, when the generated data was cleanly separable, the \nalgorithm took about 1 to 2 iterations to properly split the data. \nHowever, when there were only two dimensions, there was a higher \nchance that a data set was generated that could not be split up, \nresulting in an endless loop. \nRunning the 10-dimensional data set, there was less of a \nchance that the data would not be able to be split. \nAlso, when $N$ was increased, there was a higher chance \nthat a data set was produced that could not be split \nby a line. I wonder what would happen if we used a \ndifferent data generator. 
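\n\nFor reference, the core update rule behind all of these experiments can be sketched in a few lines of Python (a minimal version for illustration only, not the actual \\textit{plotP.py} or \\textit{10\\_dims.py} included above; it assumes the data comes as an $N \\times d$ array $X$ with labels $y_n \\in \\{-1,+1\\}$):\n\n\\begin{lstlisting}[language=Python,frame=single]\nimport numpy as np\n\ndef perceptron(X, y, max_updates=10000):\n    # Prepend the constant coordinate x_0 = 1 to every point.\n    Xb = np.hstack([np.ones((X.shape[0], 1)), X])\n    w = np.zeros(Xb.shape[1])\n    for updates in range(max_updates):\n        # Indices of all points misclassified by the current weights.\n        wrong = np.where(np.sign(Xb @ w) != y)[0]\n        if wrong.size == 0:\n            return w, updates  # converged\n        i = np.random.choice(wrong)  # pick x(t) randomly, as in (g)\n        w = w + y[i] * Xb[i]  # the perceptron update\n    return w, max_updates\n\\end{lstlisting}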
\n\n\\end{document}\n", "meta": {"hexsha": "6b9f45206a4e7b2eea330cfa1167815588aa4e7b", "size": 5797, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw1/hw.tex", "max_stars_repo_name": "amypitts01/data440", "max_stars_repo_head_hexsha": "4900a0a78625614d245b2fd064c6ca74e13049a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-02T12:45:00.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-02T12:45:00.000Z", "max_issues_repo_path": "hw1/hw.tex", "max_issues_repo_name": "amypitts01/data440", "max_issues_repo_head_hexsha": "4900a0a78625614d245b2fd064c6ca74e13049a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw1/hw.tex", "max_forks_repo_name": "amypitts01/data440", "max_forks_repo_head_hexsha": "4900a0a78625614d245b2fd064c6ca74e13049a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.5864661654, "max_line_length": 89, "alphanum_fraction": 0.7409004658, "num_tokens": 1674, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7931059414036511, "lm_q1q2_score": 0.5987275700857102}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% UMB-CS110-2015S: Introduction to Computing\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/UMB-CS110-2015S\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def \\topDirectory {.}\n\\def \\texDirectory {\\topDirectory/src/main/tex}\n\n\\documentclass[12pt,letterpaper,twoside]{article}\n\\usepackage{\\texDirectory/template/style/directives}\n\\usepackage{\\texDirectory/template/style/assignment}\n\\input{\\texDirectory/template/config}\n\n\\begin{document}\n\n\\doc{title}{Solution to Quiz 2(a)}\n\\doc{date-pub}{Mar 10, 2015 at 01:00 PM}\n\\doc{date-due}{Mar 10, 2015 at 11:00 PM}\n\\doc{points}{4}\n\n\\prepare{header}\n\n\\section*{Question 1}\n\nWrite a program \\texttt{PrimeRelative.java} that asks for \\texttt{N} from the user and prints out an \\texttt{N}-by-\\texttt{N} table such that there is an \\texttt{*} in row \\texttt{i} and column \\texttt{j}, if the greatest common divisor of \\texttt{i} and \\texttt{j} is 1, and a space in that position if otherwise.\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=2}\n\\begin{lstlisting}\n// we use scanner to ask for number N\nimport java.util.Scanner;\npublic class PrimeRelative {\n\tpublic static void main(String[] args)\n\t{\n\t\tint i; // 1st for loop (matrix row)\n\t\tint j; // 2nd for loop (matrix col)\n\t\tint k; // 3rd for loop (division checker)\n\t\tint numberN;\n\t\tboolean notPrimeRelative;\n\t\tSystem.out.print(\"Please insert the number N: \");\n\t\tScanner input = new Scanner(System.in);\n\t\tnumberN = input.nextInt();\n\t\tSystem.out.println(\"Your entered number is \" + numberN + \".\");\n\t\tinput.close(); // remember to close the scanner object\n\t\tnumberN++; // to compensate our sacrifice of first row and column\n\t\tfor (i = 0 ; i < numberN ; i++) {\n\t\t\tfor (j = 0 ; j < numberN ; j++) {\n\t\t\t\t// if first row, print number of columns\n\t\t\t\tif (i == 0) {\n\t\t\t\t\tSystem.out.printf(\"%-2d \",j);\n\t\t\t\t}\n\t\t\t\t// else if first column, print number of rows\n\t\t\t\telse if (j == 0) {\n\t\t\t\t\tSystem.out.printf(\"%-2d \",i);\n\t\t\t\t}\n\t\t\t\t// for matrix elements\n\t\t\t\telse {\n\t\t\t\t\t// we initially assume gcd(i,j)=1\n\t\t\t\t\tnotPrimeRelative = false;\n\t\t\t\t\t// now we check if a number k can be found\n\t\t\t\t\t// that divides both i and j\n\t\t\t\t\tfor (k = 2; k <= Math.min(i, j); k++) {\n\t\t\t\t\t\t// if it divides both i and j\n\t\t\t\t\t\t// gcd(i,j) != 1 for sure\n\t\t\t\t\t\tif (i % k == 0 && j % k == 0) {\n\t\t\t\t\t\t\t// so they are not prime relative\n\t\t\t\t\t\t\tnotPrimeRelative = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (notPrimeRelative) {\n\t\t\t\t\t\tSystem.out.print(\"   \");\n\t\t\t\t\t}\n\t\t\t\t\telse {\n\t\t\t\t\t\tSystem.out.print(\"*  \");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t// after each row, a feed line is inserted\n\t\t\tSystem.out.println();\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\\newpage\n\n\\section*{Question 2}\n\nWrite a program \\texttt{MatrixDeterminant.java} that asks for elements of a three-by-three matrix and prints its determinant.\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=2}\n\\begin{lstlisting}\n// import Scanner class to use Scanner\nimport java.util.Scanner;\npublic class MatrixDeterminant {\n\tpublic static void main(String[] args) {\n\t\tint i; // 1st loop (row)\n\t\tint j; 
// 2nd loop (col)\n\t\tint[][] matrix = new int[3][3]; // array of int for 3x3 matrix\n\t\tlong determinant; // long is bigger than int\n\t\tScanner input = new Scanner(System.in); // we create scanner object\n\t\t// ask user to initialize matrix\n\t\tfor (i = 0; i < 3; i++) {\n\t\t\tfor (j = 0; j < 3; j++) {\n\t\t\t\tSystem.out.printf(\"Insert element in row %d and col %d: \",i,j);\n\t\t\t\tmatrix[i][j] = input.nextInt();\n\t\t\t}\n\t\t}\n\t\t// close Scanner object as we don't need it anymore\n\t\tinput.close();\n\t\t// show initialized matrix to user\n\t\tSystem.out.println(\"Matrix is: \");\n\t\tfor (i = 0; i < 3; i++) {\n\t\t\tfor (j = 0; j < 3; j++) {\n\t\t\t\tSystem.out.printf(\"%5d \",matrix[i][j]);\n\t\t\t}\n\t\t\tSystem.out.println();\n\t\t}\n\t\t// show determinant, using the rule of Sarrus:\n\t\t// three positive and three negative diagonal products\n\t\tSystem.out.print(\"Determinant for matrix is: \");\n\t\tdeterminant = matrix[0][0]*matrix[1][1]*matrix[2][2]\n\t\t\t\t+ matrix[0][2]*matrix[1][0]*matrix[2][1]\n\t\t\t\t+ matrix[0][1]*matrix[1][2]*matrix[2][0]\n\t\t\t\t- matrix[0][2]*matrix[1][1]*matrix[2][0]\n\t\t\t\t- matrix[0][0]*matrix[1][2]*matrix[2][1]\n\t\t\t\t- matrix[2][2]*matrix[1][0]*matrix[0][1];\n\t\tSystem.out.println(determinant);\n\t}\n}\n\\end{lstlisting}\n\n\\end{document}\n", "meta": {"hexsha": "6ba731540b5bfd563d9affebe5c545ff4b6e2465", "size": 4310, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/main/tex/quizzes/q02as.tex", "max_stars_repo_name": "UMB-CS110-2015S/Assignments", "max_stars_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:40.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:40.000Z", "max_issues_repo_path": "src/main/tex/quizzes/q02as.tex", "max_issues_repo_name": "UMB-CS110-2015S/Assignments", "max_issues_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2015-08-22T15:44:45.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-17T16:39:11.000Z", "max_forks_repo_path": "src/main/tex/quizzes/q02as.tex", "max_forks_repo_name": "UMB-CS110-2015S/Assignments", "max_forks_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4598540146, "max_line_length": 314, "alphanum_fraction": 0.6232018561, "num_tokens": 1313, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059414036511, "lm_q2_score": 0.7549149758396752, "lm_q1q2_score": 0.5987275525930401}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[margin=20mm]{geometry}\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amssymb,amsmath}\n\\usepackage{slashbox}\n\\usepackage{booktabs}\n\\usepackage[table,x11names]{xcolor}\n\\usepackage{pgfplots}\n\n\\setlength{\\intextsep}{1pt}\n\n\\begin{document}\n    \\pagenumbering{gobble}\n    \\section*{Percent point function (PPF) of the normal distribution}\n\n\\pgfmathdeclarefunction{gauss}{3}{%\n  \\pgfmathparse{1/(#3*sqrt(2*pi))*exp(-((#1-#2)^2)/(2*#3^2))}%\n}\n\n\\begin{figure}[!h]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\n  no markers,\n  domain=0:6,\n  samples=100,\n  ymin=0,\n  axis lines*=left,\n  xlabel=$x$,\n  every axis y label/.style={at=(current axis.above origin),anchor=south},\n  every axis x label/.style={at=(current axis.right of origin),anchor=west},\n  height=4cm,\n  width=12cm,\n  xtick=\\empty,\n  ytick=\\empty,\n  enlargelimits=false,\n  clip=false,\n  axis on top,\n  grid = major,\n  hide y axis\n  ]\n\n \\addplot [very thick,cyan!50!black] {gauss(x, 3, 1)};\n\n\\pgfmathsetmacro\\valueA{gauss(1,3,1)}\n\\pgfmathsetmacro\\valueB{gauss(2,3,1)}\n\\draw [gray] (axis cs:1,0) -- (axis cs:1,\\valueA)\n    (axis cs:5,0) -- (axis cs:5,\\valueA);\n\\draw [gray] (axis cs:2,0) -- (axis cs:2,\\valueB)\n    (axis cs:4,0) -- (axis cs:4,\\valueB);\n\\draw [yshift=1.4cm, latex-latex](axis cs:2, 0) -- node [fill=white] {$0.683$} (axis cs:4, 0);\n\\draw [yshift=0.3cm, latex-latex](axis cs:1, 0) -- node [fill=white] {$0.954$} (axis cs:5, 0);\n\n\\node[below] at (axis cs:1, 0)  {$\\mu - 2\\sigma$};\n\\node[below] at (axis cs:2, 0)  {$\\mu - \\sigma$};\n\\node[below] at (axis cs:3, 0)  {$\\mu$};\n\\end{axis}\n\\end{tikzpicture}\n\\end{figure}\n\n\\begin{table}[!h]\n    \\centering\n    \\begin{tabular}{c|ccccc|ccccc}\n    \\toprule\n    \\backslashbox{$x$}{$\\Delta x$}  & \\textbf{0.00} & \\textbf{0.01} & \\textbf{0.02} & \\textbf{0.03} & \\textbf{0.04} & \\textbf{0.05} & \\textbf{0.06} & \\textbf{0.07} & \\textbf{0.08} & \\textbf{0.09} \\\\\\midrule\n\\textbf{0.0} & -inf & -2.3263 & -2.0537 & -1.8808 & -1.7507 & -1.6449 & -1.5548 & -1.4758 & -1.4051 & -1.3408\\\\\n\\textbf{0.1} & -1.2816 & -1.2265 & -1.1750 & -1.1264 & -1.0803 & -1.0364 & -0.9945 & -0.9542 & -0.9154 & -0.8779\\\\\n\\textbf{0.2} & -0.8416 & -0.8064 & -0.7722 & -0.7388 & -0.7063 & -0.6745 & -0.6433 & -0.6128 & -0.5828 & -0.5534\\\\\n\\textbf{0.3} & -0.5244 & -0.4959 & -0.4677 & -0.4399 & -0.4125 & -0.3853 & -0.3585 & -0.3319 & -0.3055 & -0.2793\\\\\n\\textbf{0.4} & -0.2533 & -0.2275 & -0.2019 & -0.1764 & -0.1510 & -0.1257 & -0.1004 & -0.0753 & -0.0502 & -0.0251\\\\\n\\textbf{0.5} & 0.0000 & 0.0251 & 0.0502 & 0.0753 & 0.1004 & 0.1257 & 0.1510 & 0.1764 & 0.2019 & 0.2275\\\\\n\\textbf{0.6} & 0.2533 & 0.2793 & 0.3055 & 0.3319 & 0.3585 & 0.3853 & 0.4125 & 0.4399 & 0.4677 & 0.4959\\\\\n\\textbf{0.7} & 0.5244 & 0.5534 & 0.5828 & 0.6128 & 0.6433 & 0.6745 & 0.7063 & 0.7388 & 0.7722 & 0.8064\\\\\n\\textbf{0.8} & 0.8416 & 0.8779 & 0.9154 & 0.9542 & 0.9945 & 1.0364 & 1.0803 & 1.1264 & 1.1750 & 1.2265\\\\\n\\textbf{0.9} & 1.2816 & 1.3408 & 1.4051 & 1.4758 & 1.5548 & 1.6449 & 1.7507 & 1.8808 & 2.0537 & 2.3263\\\\\n        \\bottomrule\n        \\end{tabular}\n        \\caption{Approximations of $\\Phi_{0;1}^{-1}(x + \\Delta x)$}\n    \\end{table}\n    \\begin{align*}\n        \\Phi_{0;1}(x) &= \\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{x} e^{- t^2 / 2} \\mathrm{d} t &\n        \\Phi_{0;1}(1.65) &\\approx 0.9505\\\\\n        \\Phi_{\\mu; \\sigma^2}(x) 
&= \\Phi_{0;1} \\left (\\frac{x-\\mu}{\\sigma} \\right ) &\n        \\Phi_{0;1}(-x) &= 1 - \\Phi_{0;1}(x)\\\\\n        z_\\alpha &= \\Phi_{0;1}^{-1}(1-\\alpha)\n    \\end{align*}\n\\end{document}\n", "meta": {"hexsha": "0e2363f97550e7146795595869cc06d7a34e0991", "size": 3523, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/normal-distribution-z/normal-distribution-z.tex", "max_stars_repo_name": "RalfGuder/LaTeX-examples", "max_stars_repo_head_hexsha": "a1bf9fe422969be1ca4674394ebd2170c07f7693", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1231, "max_stars_repo_stars_event_min_datetime": "2015-01-07T04:04:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T17:43:29.000Z", "max_issues_repo_path": "documents/normal-distribution-z/normal-distribution-z.tex", "max_issues_repo_name": "DoubleL61/LaTeX-examples", "max_issues_repo_head_hexsha": "cd0d97f85fadb59b7c6e9062b37a8bf7d725ba0c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2015-05-10T13:10:47.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-02T21:28:49.000Z", "max_forks_repo_path": "documents/normal-distribution-z/normal-distribution-z.tex", "max_forks_repo_name": "DoubleL61/LaTeX-examples", "max_forks_repo_head_hexsha": "cd0d97f85fadb59b7c6e9062b37a8bf7d725ba0c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 400, "max_forks_repo_forks_event_min_datetime": "2015-01-05T06:22:18.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T04:07:59.000Z", "avg_line_length": 39.5842696629, "max_line_length": 206, "alphanum_fraction": 0.5835935282, "num_tokens": 1706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.8056321843145404, "lm_q1q2_score": 0.5987200752565943}}
{"text": "\\section{Object Class Recognition}\n\nTwo main tasks: Classification and Detection\n\nClassification: is there a car in this image ? A binary answer is enough\\\\\nDetection: where is the car ? Need localization\n\n\\subsection{Bag of words approaches: Visual words}\n\\subsubsection{Local features}\nLocal (textured) patches (\u2018regions\u2019) \u2022 Detection co-variant with translation, scale, and sometimes affine transformations\n\u2022 Cover the same portion of the object in any view\n\n\n\nEach patch / region has a descriptor, which is a point in some high-dimensional feature space (e.g., SIFT)\n\nClose points in feature space have similar descriptors, which indicates similar image content.\n\n\\includegraphics[width=\\columnwidth]{pictures/localfeature}\n\n\\subsubsection{Visual Words}\n1) Extract some local features from a number of images\n\n2) Map high-dimensional descriptors to words by quantizing the feature space (Quantize via clustering, let cluster centers be the prototype \u201cwords\u201d)\n\n3) Map high-dimensional descriptors to words by quantizing the feature space. Determine which word to assign to each new image region by finding the closest cluster center\n\nBuilding Visual Vocabularies\nIssues: \u2022 Sampling strategy: where to extract features? ->\nFor object classes dense sampling works best\n\u2022 Clustering / quantization algorithm ->\nMany options, often k-means works well enough\n\u2022 What corpus provides features (universal vocabulary?) ->\nTypically as close as possible to your application \u2022 Vocabulary size, number of words\\\\\n\n\\subsection{Bag of Words}\nSummarize entire image based on its distribution (histogram) of word occurrences.\n(Analogous to bag of words representation commonly used for documents.) Quantize feature space to make discrete set of visual words\n\nBag of words enable to describe the unordered feature set with a single vector of fixed dimensionality\n\nWorks pretty well for whole-image classification\n\nClassifier Choises\n\n\\begin{itemize}\n\t\\item Nearest Neighbor\n\t\\item Neural Networks\n\t\\item Support Vector Machines\n\t\\item Boosting\n\\end{itemize}\n\nNearest Neighbor - Assign label of nearest training data point to each test data point. Works very fast when training data is huge.\nSupport Vector Machines - Maximize the margin between the positive and negative training examples\n\nThe bag of words removes spatial layout. 
\n\n\\subsubsection{Discussion: Bag-of-Words}\nPros:\n\\begin{itemize}\n\t\\item Flexible to geometry / deformations / viewpoint\n\t\\item Compact summary of image content\n\t\\item Provides a vector representation for sets (bags, to be precise)\n\t\\item Empirically good recognition results in practice\n\\end{itemize}\nCons:\n\\begin{itemize}\n\t\\item The basic model ignores geometry \u2013 one can verify afterwards, or embed geometry within the feature descriptors\n\t\\item Background and foreground are mixed when the bag covers the whole image \u2013 interest points or sampling give no guarantee to capture object-level parts\n\t\\item Optimal vocabulary formation remains unclear\n\\end{itemize}\n\n\\subsection{Classification: convolutional neural networks}\nA neural network with a specialized connectivity structure.\n\nTypical building blocks of CNNs for classification in vision:\n\n\\begin{itemize}\n\t\\item Feed forward\n\t\\item Most layers (often)\n\t\\begin{itemize}\n\t\t\\item Convolve input (filters)\n\t\t\\item Non-linearity (rectified linear)\n\t\t\\item Pooling (local max)\n\t\\end{itemize}\n\t\\item Last few layers (often)\n\t\\begin{itemize}\n\t\t\\item Fully connected (linear combinations)\n\t\t\\item Non-linearity (rectified linear)\n\t\\end{itemize}\n\t\\item Output: softmax distribution over classes\n\\end{itemize}\nTraining is supervised: the convolutional filters are trained by back-propagating the classification error.\n\n\n\\subsection{Detection: Sliding-window approaches}\nIf the object may be anywhere in a cluttered scene, slide a window around looking for it \u2013 a brute-force approach with many local decisions.\n\n\\subsubsection{Gradient-based Representations}\nSummarize the local distribution of gradients with a histogram.\n\n\\subsection{Boosting}\nIntuition: consider a 2D feature\nspace with positive and negative examples.\nEach weak classifier splits the training examples with at least 50\\% accuracy.\nExamples misclassified by a previous weak learner are given more emphasis at future rounds; the weight update is sketched below.
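\n\nIn the standard AdaBoost instantiation of this idea, the weight $w_i$ of training example $(x_i, y_i)$ is updated after each round $t$ as\n\\[\n\t\\epsilon_t = \\sum_{i:\\, h_t(x_i) \\neq y_i} w_i, \\qquad\n\t\\alpha_t = \\frac{1}{2} \\ln \\frac{1-\\epsilon_t}{\\epsilon_t}, \\qquad\n\tw_i \\leftarrow \\frac{w_i \\exp(-\\alpha_t y_i h_t(x_i))}{Z_t},\n\\]\nwhere $h_t$ is the weak classifier chosen in round $t$, $\\epsilon_t$ is its weighted error, and $Z_t$ normalizes the weights to sum to one; misclassified examples ($y_i h_t(x_i) = -1$) get their weights increased.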
\nThe final classifier is a weighted combination of the weak classifiers (in AdaBoost, $\\mathrm{sign}(\\sum_t \\alpha_t h_t(x))$).\n\n\\subsection{Discussion: Sliding-Windows}\nPros:\n\\begin{itemize}\n\t\\item Simple detection protocol to implement\n\t\\item Good feature choices are critical\n\t\\item Past successes for certain classes\n\t\\item Good detectors available (Viola\\&Jones, HOG, etc.)\n\\end{itemize}\nCons/Limitations:\n\\begin{itemize}\n\t\\item High computational complexity\n\t\\begin{itemize}\n\t\t\\item For example: 250,000 locations $\\times$ 30 orientations $\\times$ 4 scales = 30,000,000 evaluations!\n\t\t\\item This puts tight constraints on the classifiers we can use.\n\t\t\\item If training binary detectors independently, this means the cost increases linearly with the number of classes.\n\t\\end{itemize}\n\t\\item With so many windows, the false positive rate better be low\n\t\\item Typically needs fully supervised training data (= bounding-boxes)\n\t\\item Some objects do not fit a box well (e.g.\\ a diagonal bottle)\n\t\\item Sensitive to partial occlusion (unless occlusions are in the training data)\n\\end{itemize}\n\n\\subsection{Detection: proposal-based approaches}\n\\subsubsection{Object proposals}\nEvery object has at least one of these properties:\n\\begin{itemize}\n\t\\item Well-defined, closed boundary in space\n\t\\item Different appearance than its surroundings\n\t\\item Might be unique within the image (salient)\n\\end{itemize}\n\nObjectness(w) = probability that window w covers an object\n\nObjectness cues:\n\\begin{itemize}\n\t\\item Density of salient pixels\n\t\\item Color contrast\n\t\\item Superpixel straddling\n\\end{itemize}\n\\subsubsection{R-CNN}\n\\includegraphics[width=\\columnwidth]{pictures/R-CNN}\nPros:\n\\begin{itemize}\n\t\\item Accurate!\n\t\\item Any deep architecture can immediately be plugged in\n\\end{itemize}\nCons:\n\\begin{itemize}\n\t\\item Ad hoc training objectives\n\t\\begin{itemize}\n\t\t\\item Fine-tuning the network with a softmax classifier (log loss)\n\t\t\\item Training post-hoc linear SVMs (hinge loss)\n\t\t\\item Training post-hoc bounding-box regressions (least squares)\n\t\\end{itemize}\n\t\\item Training is slow (84h) and takes a lot of disk space\n\t\\item Inference (detection) is slow\n\\end{itemize}\n\\subsubsection{Fast R-CNN}\nThe full image is first passed through the ConvNet; the region selection happens afterwards, on the shared convolutional features.\n\\subsection{Detection: a star model}\nRecognition of object categories\n\n\\includegraphics[width=\\columnwidth]{pictures/star_models}\n\n\\subsubsection{Implicit Shape Model}\nThe visual vocabulary is used to index votes for the object position [a visual word = a \u201cpart\u201d].\nObjects are detected as consistent configurations of the observed parts (visual words).\n\n\\includegraphics[width=\\columnwidth]{pictures/ISM}\n\\includegraphics[width=\\columnwidth]{pictures/ism_spatial}\n\n\nPros:\n\\begin{itemize}\n\t\\item Works well for many different object categories \u2013 both rigid and articulated objects\n\t\\item Flexible geometric model \u2013 parts seen on different training examples can be recombined\n\t\\item Learning from relatively few (50-100) training examples\n\t\\item Optimized for detection, good localization properties\n\\end{itemize}\nCons:\n\\begin{itemize}\n\t\\item Needs supervised training data \u2013 object bounding boxes for detection \\& segmentations for top-down segmentation\n\t\\item Only weak geometric constraints \u2013 result segmentations may contain superfluous body parts\n\t\\item Purely representative model \u2013 no discriminative learning\n\\end{itemize}\n", "meta": {"hexsha": "e16f3679c1069718401926359a259e7a637a9a7f", "size": 7375, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"chapters/11_Object_Class_Recognition.tex", "max_stars_repo_name": "gruke/ethz-cv-lectureNotes", "max_stars_repo_head_hexsha": "688827b1eebdf7d7aa4446986aa838312175fa1f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-05T20:43:06.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-05T20:43:06.000Z", "max_issues_repo_path": "chapters/11_Object_Class_Recognition.tex", "max_issues_repo_name": "gruke/ethz-cv-lectureNotes", "max_issues_repo_head_hexsha": "688827b1eebdf7d7aa4446986aa838312175fa1f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/11_Object_Class_Recognition.tex", "max_forks_repo_name": "gruke/ethz-cv-lectureNotes", "max_forks_repo_head_hexsha": "688827b1eebdf7d7aa4446986aa838312175fa1f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8157894737, "max_line_length": 171, "alphanum_fraction": 0.7982372881, "num_tokens": 1675, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146848, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5987200719129027}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{epstopdf}\n\\usepackage{abstract}\n\\usepackage[toc,page]{appendix}\n\\usepackage[sort]{cite}\n\n\n\\usepackage{hyperref}\n\n\n\\newcommand{\\half}[0]{\\frac{1}{2}}\n\\newcommand{\\bvec}[1]{\\mathbf{#1}}\n\\newcommand{\\bigO}[2]{\\mathcal{O}\\left(#1^{#2} \\right)}\n\\newcommand{\\dotprod}[2]{ \\left<#1 , #2 \\right> }\n\n\\newcommand{\\dd}[0]{ \\mathrm{d} }\n\n\\title{ Some notes on implicit Runge-Kutta methods }\n\\author{ Stefan Paquay }\n\\date{  }\n\n\\begin{document}\n\\maketitle\n\n\\section{The idea}\nRunge-Kutta methods are multi-stage, single-step methods aimed towards solving (systems of) ODEs.\nThey work by constructing at each time step approximations to the derivative of the function in between the current time level $t$ and the next $t + \\Delta t,$ after which a linear combination of these stages is used to advance the numerical approximation to $y$ to the next level.\nThat is, if we have $\\dd t/ \\dd t = f(t,y),$ and $y_n$ is the numerical approximation to $y$ at $t_n = n \\Delta t,$ then we have\n\\begin{equation*}\n  k_i = f\\left( t + c_i \\Delta t, y_n + \\Delta t \\sum_{j=0}^{N-1} \\alpha_{i,j} k_j \\right), \\qquad y_{n+1} = y_n + \\Delta t \\sum_{i=0}^{N-1} b_i k_i.\n\\end{equation*}\nThe methods can be summarized easily in so-called Butcher tableaus, which conveniently list $b_i,~c_i$ and $\\alpha_{i,j}.$ See \\ref{tab:butcher} for the Butcher tableau of some methods.\n\n\\begin{table}[b]\n  \\centering\n  \\caption{Butcher tableaus for some Runge-Kutta methods: The implicit midpoint (left), the classical Runge-Kutta method (center), and the fourth order Gauss-Legendre method (right). 
\\label{tab:butcher}}\n  \\begin{tabular}{c|c}\n    $\\half$ & $\\half$ \\\\ \\hline\n    & 1\n  \\end{tabular} \\quad\n  \\begin{tabular}{c|cccc}\n    0 & & & & \\\\\n    $\\half$ & $\\half$ & & & \\\\\n    $\\half$ & & $\\half$ & & \\\\\n    1 & & & 1 & \\\\ \\hline\n    {} & $\\frac{1}{6}$ & $\\frac{2}{6}$ & $\\frac{2}{6}$ & $\\frac{1}{6}$\n  \\end{tabular}\n  \\begin{tabular}{c|cc}\n    $\\half - \\frac{\\sqrt{3}}{6}$ & $\\frac{1}{4}$ & $\\frac{1}{4} - \\frac{\\sqrt{3}}{6}$ \\\\\n    $\\half + \\frac{\\sqrt{3}}{6}$ & $\\frac{1}{4} + \\frac{\\sqrt{3}}{6}$ & $\\frac{1}{4}$ \\\\ \\hline\n    {} & $\\half$ & $\\half$ \\\\\n    {} & $\\half + \\half\\sqrt{3}$ & $\\half - \\half\\sqrt{3}$ \\\\\n  \\end{tabular}\n\\end{table}\n\n\\section{The implementation}\nA general, implicit Runge-Kutta method (RK method) involves solving a system of non-linear equations every time step.\nThe exact shape of this system depends on the number of equations in the system of ODEs \\emph{and} the number of stages in the method.\nIf the ODE has $N_{ode}$ equations and the RK method has $N_s$ stages, then the total non-linear system has $N_{ode} \\times N_s$ equations.\nTo ease the implementation of a general implicit RK method, we deduce the general form of the system of equations to solve for a general number of stages and ODEs.\nIf we collect the stage equations in a vector-valued function $\\bvec{F}$ of the stages $\\bvec{k}_0, \\hdots, \\bvec{k}_{N_s-1},$ then we have\n\\begin{align*}\n  \\bvec{F} = \n  \\begin{pmatrix}\n    \\bvec{f}\\left( t + c_0\\Delta t, \\bvec{y} + \\Delta t \\sum_{j=0}^{N_s-1} \\alpha_{0,j} \\bvec{k}_j \\right) - \\bvec{k}_0 \\\\\n    \\vdots \\\\\n    \\bvec{f}\\left( t + c_n\\Delta t, \\bvec{y} + \\Delta t \\sum_{j=0}^{N_s-1} \\alpha_{n,j} \\bvec{k}_j \\right) - \\bvec{k}_n \\\\\n    \\vdots \\\\\n        \\bvec{f}\\left( t + c_{N_s-1}\\Delta t, \\bvec{y} + \\Delta t \\sum_{j=0}^{N_s-1} \\alpha_{N_s-1,j} \\bvec{k}_j \\right) - \\bvec{k}_{N_s-1}\n      \\end{pmatrix} =\n  \\bvec{0}\n\\end{align*}\nTo solve such a system we employ Newton iteration.\nAlthough Newton iteration is typically costly, we shall see that we only need $N_{s}$ evaluations of $\\bvec{f}$ and its Jacobi matrix $\\bvec{J}.$\nTo find the general expression of the Jacobi matrix, we will apply a general three-stage method to the following system of ODEs:\n\\begin{align*}\n  \\bvec{f}(t, \\bvec{y}) = -\\lambda \\bvec{y}\n\\end{align*}\nThen we have\n\\begin{align*}\n  \\bvec{F} = \\begin{pmatrix}\n    -\\lambda \\bvec{y} - \\lambda \\Delta t \\left( \\alpha_{0,0}\\bvec{k}_0 + \\alpha_{0,1}\\bvec{k}_1 + \\alpha_{0,2}\\bvec{k}_2  \\right) - \\bvec{k}_0 \\\\\n    -\\lambda \\bvec{y} - \\lambda \\Delta t \\left( \\alpha_{1,0}\\bvec{k}_0 + \\alpha_{1,1}\\bvec{k}_1 + \\alpha_{1,2}\\bvec{k}_2  \\right) - \\bvec{k}_1 \\\\\n    -\\lambda \\bvec{y} - \\lambda \\Delta t \\left( \\alpha_{2,0}\\bvec{k}_0 + \\alpha_{2,1}\\bvec{k}_1 + \\alpha_{2,2}\\bvec{k}_2  \\right) - \\bvec{k}_2\n\\end{pmatrix}\\end{align*}\nThe Jacobi matrix of $\\bvec{f}$ is given by $-\\lambda \\bvec{I},$ so the Jacobi matrix of $\\bvec{F}$ with respect to $\\bvec{k}_i$ becomes\n\\begin{equation*}\n  \\bvec{J}_\\bvec{k} = \\begin{pmatrix}\n    -\\lambda \\Delta t \\alpha_{0,0}\\bvec{I} - \\bvec{I} & -\\lambda \\Delta t \\alpha_{0,1}\\bvec{I} & -\\lambda \\Delta t \\alpha_{0,2}\\bvec{I} \\\\\n    -\\lambda \\Delta t \\alpha_{1,0}\\bvec{I}  & -\\lambda \\Delta t \\alpha_{1,1}\\bvec{I}  - \\bvec{I} & -\\lambda \\Delta t \\alpha_{1,2}\\bvec{I} \\\\\n    -\\lambda \\Delta t \\alpha_{2,0}\\bvec{I} & -\\lambda 
\\Delta t \\alpha_{2,1}\\bvec{I} & -\\lambda \\Delta t \\alpha_{2,2}\\bvec{I} - \\bvec{I} \\\\\n  \\end{pmatrix}\n\\end{equation*}\nNow note that $-\\lambda \\bvec{I}$ is really the Jacobi matrix of $\\bvec{f}$ evaluated at specific points. If we write $\\bvec{J}\\left( t + c_i \\Delta t, \\bvec{y}_n + \\Delta t \\sum_{j=0}^{N_s-1} \\alpha_{i,j} \\bvec{k}_j \\right) := \\bvec{J}_{i}$ then we have in general that\n\\begin{align*}\n  \\bvec{J}_\\bvec{k} =& \\Delta t\\begin{pmatrix}\n      \\alpha_{0,0} \\bvec{J}_0 &  \\alpha_{0,1} \\bvec{J}_0 &  \\alpha_{0,2} \\bvec{J}_0 \\\\\n     \\alpha_{1,0} \\bvec{J}_1  &  \\alpha_{1,1} \\bvec{J}_1   &  \\alpha_{1,2} \\bvec{J}_1 \\\\\n     \\alpha_{2,0} \\bvec{J}_2 &  \\alpha_{2,1} \\bvec{J}_2 &  \\alpha_{2,2} \\bvec{J}_2  \\\\\n   \\end{pmatrix} - \\bvec{I}\n\\end{align*}\nNote that we only need $N_s$ evaluations of the Jacobi matrix of $f$ and not $N_s^2.$\nIf we have a more general system of ODEs\n\\begin{equation*}\n  \\bvec{f}(t,\\bvec{y}) = \\begin{pmatrix} f_1( t, \\bvec{y} ) \\\\\n    f_2( t, \\bvec{y} ) \n  \\end{pmatrix}, \\qquad \\left(\\frac{\\partial f_k}{\\partial z}\\right)_i := \\frac{ \\partial f_k}{\\partial z}\\left(t + c_i \\Delta t, \\bvec{y} + \\Delta t \\sum_{j=0}^{N_s-1}\\alpha_{i,j} \\bvec{k}_j\\right)\n\\end{equation*}\nwith $z = x,y$ and $k = 1,2$ then the total Jacobi matrix system becomes\n\\begin{align*}\n  \\bvec{J}_\\bvec{k} =& \\Delta t\\begin{pmatrix}\n    \\alpha_{0,0} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_0 & \\alpha_{0,0} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_0 &  \\alpha_{0,1} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_0 & \\alpha_{0,1} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_0 &  \\alpha_{0,2} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_0 & \\alpha_{0,2} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_0 \\\\\n    \\alpha_{0,0} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_0 & \\alpha_{0,0} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_0 &  \\alpha_{0,1} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_0 & \\alpha_{0,1} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_0 &  \\alpha_{0,2} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_0 & \\alpha_{0,2} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_0 \\\\\n    \\alpha_{1,0} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_1 & \\alpha_{1,0} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_1 &  \\alpha_{1,1} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_1 & \\alpha_{1,1} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_1 &  \\alpha_{1,2} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_1 & \\alpha_{1,2} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_1 \\\\\n    \\alpha_{1,0} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_1 & \\alpha_{1,0} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_1 &  \\alpha_{1,1} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_1 & \\alpha_{1,1} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_1 &  \\alpha_{1,2} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_1 & \\alpha_{1,2} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_1 \\\\\n    \\alpha_{2,0} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_2 & \\alpha_{2,0} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_2 &  \\alpha_{2,1} \\left( \\frac{\\partial f_1}{\\partial x} \\right)_2 & \\alpha_{2,1} \\left( \\frac{\\partial f_1}{\\partial y} \\right)_2 &  \\alpha_{2,2} \\left( 
\\frac{\\partial f_1}{\\partial y} \\right)_2 \\\\\n    \\alpha_{2,0} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_2 & \\alpha_{2,0} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_2 &  \\alpha_{2,1} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_2 & \\alpha_{2,1} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_2 &  \\alpha_{2,2} \\left( \\frac{\\partial f_2}{\\partial x} \\right)_2 & \\alpha_{2,2} \\left( \\frac{\\partial f_2}{\\partial y} \\right)_2 \\\\    \n   \\end{pmatrix} - \\bvec{I}\n\\end{align*}\nwhich, in ``prettier'' notation, is\n\\begin{equation*}\n  \\bvec{J}_\\bvec{k} = \\Delta t \\begin{pmatrix}\n    \\alpha_{0,0}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_0 & \\alpha_{0,1}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_0 & \\alpha_{0,2}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_0 \\\\\n    \\alpha_{1,0}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_1 & \\alpha_{1,1}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_1 & \\alpha_{1,2}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_1 \\\\\n    \\alpha_{2,0}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_2 & \\alpha_{2,1}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_2 & \\alpha_{2,2}\\left( \\frac{\\partial \\bvec{f}}{\\partial \\bvec{y}} \\right)_2\n  \\end{pmatrix} - \\bvec{I}\n\\end{equation*}\n\nSince we have now deduced a general form for the non-linear function that defines all the stages as well as its Jacobi matrix, we can use Newton iteration to solve the system of equations.\nFor explicit methods, we will of course not resort to Newton iteration, because the stages can be computed directly one after another.\n\n\\section{A test case}\nA simple way to test the integrators is to apply them to an ODE whose solution is known.\nWe consider\n\\begin{equation*}\n  \\frac{\\dd}{\\dd t} \\bvec{y} = \\begin{pmatrix}\n    -\\alpha & -\\omega \\\\\n    \\omega & -\\alpha \\end{pmatrix} \\bvec{y}\n\\end{equation*}\nThe solution can be composed in terms of the eigenvalues $\\lambda_\\pm$ of the matrix as\n\\begin{equation*}\n  \\bvec{y} = \\bvec{C}_1e^{\\lambda_+t} + \\bvec{C}_2e^{\\lambda_-t}, \\qquad \\lambda_\\pm = -\\alpha \\pm i \\omega.\n\\end{equation*}\nIn other words, we have $\\bvec{y} = \\left[ \\bvec{A}_1\\cos(\\omega t) + \\bvec{A}_2 \\sin(\\omega t) \\right]e^{-\\alpha t}.$\nTo find $\\bvec{A}_1$ and $\\bvec{A}_2$, we apply the initial value $(x,y) = (1,0)^T$ and find\n\\begin{equation*}\n  \\bvec{y} = \\left[ \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} \\cos(\\omega t) +\n  \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} \\sin( \\omega t ) \\right]e^{-\\alpha t}\n\\end{equation*}\n\n\\section{Adaptive time step control and embedding methods}\n\nGiven a Runge-Kutta method, it is sometimes possible to find a second set of weights $\\hat{\\bvec{b}}$ in such a way that the approximation $\\hat{\\bvec{y}}_{n+1} = \\bvec{y}_n + \\Delta t \\sum_{i=0}^{N-1} \\hat{b}_i \\bvec{k}_i$ reuses the same stages $\\bvec{k}_i$ but has a different (typically one lower) order of accuracy.\nThe difference $\\hat{\\bvec{y}}_{n+1} - \\bvec{y}_{n+1}$ then provides an estimate of the local error, which can be used to adapt the step size $\\Delta t.$\n\n\\section{Stability regions}\n\nFor a given Runge-Kutta method, the stability region can be determined by applying the method to a\nsimple test problem $y' = \\lambda y.$\nFor a general Runge-Kutta method, the stages are implicitly defined as\n\\begin{align*}\n  \\begin{pmatrix} \\bvec{k}_0 \\\\\n    \\bvec{k}_1 \\\\\n    \\vdots \\\\\n    \\bvec{k}_{N-1} \n  \\end{pmatrix}\n  =& \\lambda \\begin{pmatrix} \\bvec{y}_0 \\\\\n    \\bvec{y}_0 \\\\\n    \\vdots \\\\\n    \\bvec{y}_0\n  \\end{pmatrix}\n  + \\lambda \\Delta t \\sum_{j=0}^{N-1} 
\\begin{pmatrix}\n    \\alpha_{0j}\\bvec{k}_j \\\\\n    \\alpha_{1j}\\bvec{k}_j \\\\\n    \\vdots \\\\\n    \\alpha_{(N-1)j}\\bvec{k}_j\n  \\end{pmatrix} \\\\\n  =& \\lambda \\begin{pmatrix} \\bvec{y}_0 \\\\\n    \\bvec{y}_0 \\\\\n    \\vdots \\\\\n    \\bvec{y}_0\n  \\end{pmatrix} + \\lambda \\Delta t \\begin{pmatrix}\n    \\alpha_{00}\\bvec{I} &\\alpha_{01}\\bvec{I} &\\hdots &\\alpha_{0(N-1)}\\bvec{I} \\\\\n    \\alpha_{10}\\bvec{I} &\\alpha_{11}\\bvec{I} &\\hdots &\\alpha_{1(N-1)}\\bvec{I} \\\\\n    \\vdots & \\vdots & \\ddots & \\vdots \\\\\n    \\alpha_{(N-1)0}\\bvec{I} &\\alpha_{(N-1)1}\\bvec{I} &\\hdots &\\alpha_{(N-1)(N-1)}\\bvec{I} \\\\\n  \\end{pmatrix} \\begin{pmatrix}\n    \\bvec{k}_0 \\\\\n    \\bvec{k}_1 \\\\\n    \\vdots \\\\\n    \\bvec{k}_{N-1} \n  \\end{pmatrix}\n\\end{align*}\nand hence are given by\n\\begin{align*}\n  \\left[ \\bvec{I}_{MN\\times MN} - \\lambda \\Delta t \\begin{pmatrix}\n    \\alpha_{00}\\bvec{I} &\\alpha_{01}\\bvec{I} &\\hdots &\\alpha_{0(N-1)}\\bvec{I} \\\\\n    \\alpha_{10}\\bvec{I} &\\alpha_{11}\\bvec{I} &\\hdots &\\alpha_{1(N-1)}\\bvec{I} \\\\\n    \\vdots & \\vdots & \\ddots & \\vdots \\\\\n    \\alpha_{(N-1)0}\\bvec{I} &\\alpha_{(N-1)1}\\bvec{I} &\\hdots &\\alpha_{(N-1)(N-1)}\\bvec{I} \\\\\n  \\end{pmatrix}\\right] \\begin{pmatrix} \\bvec{k}_0 \\\\\n    \\bvec{k}_1 \\\\\n    \\vdots \\\\\n    \\bvec{k}_{N-1} \n  \\end{pmatrix} = \\lambda\n  \\begin{pmatrix}\n    \\bvec{y}_0 \\\\\n    \\bvec{y}_0 \\\\\n    \\vdots \\\\\n    \\bvec{y}_0 \\\\\n  \\end{pmatrix},\n\\end{align*}\nwhere $\\bvec{I}_{MN\\times MN}$ is an $MN$ by $MN$ identity matrix, as opposed to $\\bvec{I}$, which is an $M\\times M$ identity matrix, with $M$ the number of equations (1 for our problem) and $N$ the number of stages.\nFrom the solution of this system the new value of $\\bvec{y}$ is computed as\n\\begin{equation*}\n  \\bvec{y}_1 = \\bvec{y}_0 + \\Delta t\n  \\begin{pmatrix}\n    \\vert & \\vert & \\hdots & \\vert \\\\\n    \\bvec{k}_0 & \\bvec{k}_1 & \\hdots & \\bvec{k}_{N-1}\\\\\n        \\vert & \\vert & \\hdots & \\vert \\\\\n  \\end{pmatrix}\\begin{pmatrix}\n    b_0 \\\\\n    b_1 \\\\\n    \\vdots \\\\\n    b_{N-1}\n  \\end{pmatrix}\n\\end{equation*}\nSince we only have one equation, the solution for $\\bvec{K} := \\left(k_0, k_1, \\hdots, k_{N-1}\\right)^T$ becomes\n\\begin{equation*}\n  \\bvec{K} = \\lambda \\left[ \\bvec{I} - \\lambda \\Delta t \\bvec{A}  \\right]^{-1}\n  \\begin{pmatrix}\n    y_0 \\\\\n    y_0 \\\\\n    \\vdots \\\\\n    y_0\n  \\end{pmatrix}\n\\end{equation*}\nand so\n\\begin{equation*}\n  y_1/y_0 = 1 + \\lambda \\Delta t \\left(b_0, b_1, \\hdots, b_{N-1}\\right)\\left[\\bvec{I} - \\lambda \\Delta t \\bvec{A}\\right]^{-1}\\left(1, 1, \\hdots, 1\\right)^T\n\\end{equation*}\n
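For example, for the implicit midpoint method we have $\\bvec{A} = \\left(\\half\\right)$ and $\\bvec{b} = (1),$ so this formula gives the stability function\n\\begin{equation*}\n  y_1/y_0 = 1 + \\frac{\\lambda \\Delta t}{1 - \\lambda \\Delta t / 2} = \\frac{1 + \\lambda \\Delta t / 2}{1 - \\lambda \\Delta t / 2},\n\\end{equation*}\nwhose modulus is at most 1 exactly when $\\mathrm{Re}(\\lambda \\Delta t) \\leq 0,$ so the stability region is the entire left half plane.\n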
\n\\section{General linear methods}\nA generalization of Runge-Kutta methods is the family of so-called general linear methods (GLMs).\nThey can be thought of as a combination of RK and multistep methods.\nA GLM consists of $s$ stages and $r$ history points.\nFor $r=1$ we recover normal Runge-Kutta methods.\nThe $s$ stage points $Y_i,~0 \\leq i \\leq s-1,$ and stage derivatives $F_i$ are defined as follows:\n\\begin{align*}\n  Y_i = \\sum_{j=0}^{s-1} a_{ij}\\Delta t F_j + \\sum_{j=0}^{r-1}u_{ij} y_{n-j}, \\qquad F_j := f(t + c_j \\Delta t, Y_j).\n\\end{align*}\nAs one can see, we now have two matrices $\\bvec{A} = (a_{ij})$ and $\\bvec{U} = (u_{ij})$ that define the stages.\nFor the update to the numerical solution we similarly have two vectors $\\bvec{b}$ and $\\bvec{v}$:\n\\begin{align*}\n  y_{n+1} = \\sum_{i=0}^{s-1} b_i\\Delta t F_i + \\sum_{i=0}^{r-1}v_i y_{n-i}.\n\\end{align*}\nApplying a GLM to the test problem $f(t,y) = \\lambda y$ leads to the following:\n\\begin{align*}\n  Y_i =& \\sum_{j=0}^{s-1} z a_{ij} Y_j + \\sum_{j=0}^{r-1} u_{ij}y_{n-j} \\\\\n  y_{n+1} =& \\sum_{i=0}^{s-1} z b_i Y_i + \\sum_{i=0}^{r-1} v_iy_{n-i},\n\\end{align*}\nwhere $z:=\\lambda \\Delta t.$\nApplying some linear algebra we can write the stage values in the following way:\n\\begin{align*}\n  \\bvec{Y} := \\begin{pmatrix}\n    Y_0 \\\\\n    Y_1 \\\\\n    Y_2 \\\\\n    \\vdots \\\\\n    Y_{s-1} \n  \\end{pmatrix} = z \\bvec{A} \\bvec{Y} + \\bvec{U}\\begin{pmatrix}\n    y_n \\\\\n    y_{n-1} \\\\\n    y_{n-2} \\\\\n    \\vdots \\\\\n    y_{n-r+1}\n  \\end{pmatrix},\n\\end{align*}\nwhich can be compacted into $(\\bvec{I} - z \\bvec{A})\\bvec{Y} = \\bvec{U} \\bvec{y},$ with $\\bvec{y} = (y_n, y_{n-1}, y_{n-2}, \\hdots, y_{n-r+1})^T.$\nThus, for the stage points we simply find\n\\begin{equation*}\n  \\bvec{Y} = (\\bvec{I} - z\\bvec{A})^{-1}\\bvec{Uy}\n\\end{equation*}\nand hence the update becomes\n\\begin{equation*}\n  \\bvec{y}(t_{n+1}) = z \\bvec{b} \\cdot (\\bvec{I} - z\\bvec{A})^{-1}\\bvec{Uy} + \\bvec{v}\\cdot\\bvec{y}.\n\\end{equation*}\nAs a check, consider $\\bvec{U}=\\bvec{1}$ and $\\bvec{v} = 1$ and $r=1,$ in which case we should recover the result for RK methods.\nWe see that in this case\n\\begin{equation*}\n  \\bvec{y}(t_{n+1}) = z \\bvec{b} \\cdot (\\bvec{I} - z\\bvec{A})^{-1}\\bvec{1}\\, y(t_n) + y(t_n).\n\\end{equation*}\nIf we divide by $y(t_n)$, we recover the stability function:\n\\begin{equation*}\n  y(t_{n+1})/y(t_n) = z \\bvec{b} \\cdot (\\bvec{I} - z\\bvec{A})^{-1}(1,1,1,\\hdots,1)^T + 1,\n\\end{equation*}\nwhich indeed is the stability function for RK methods.\n\n\n\n\\end{document}\n", "meta": {"hexsha": "c998f62d5b16ec2a1ec942b4feaaed4f95b90958", "size": 16474, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes.tex", "max_stars_repo_name": "LucasPayne/rehuel", "max_stars_repo_head_hexsha": "431f7eb7dd3b9f1381cd2b266383eda529ff53fe", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-07-20T07:41:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-04T02:29:27.000Z", "max_issues_repo_path": "notes.tex", "max_issues_repo_name": "LucasPayne/rehuel", "max_issues_repo_head_hexsha": "431f7eb7dd3b9f1381cd2b266383eda529ff53fe", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes.tex", "max_forks_repo_name": "LucasPayne/rehuel", "max_forks_repo_head_hexsha": "431f7eb7dd3b9f1381cd2b266383eda529ff53fe", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-08-27T10:20:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-02T06:43:13.000Z", "avg_line_length": 
53.661237785, "max_line_length": 394, "alphanum_fraction": 0.6280199102, "num_tokens": 6619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891479496523, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5985335189165605}}
{"text": "%!TEX root = ../main.tex\n\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\\subsection{Identifying Changes in Skill Prices} \\label{app:ident_changes_in_skill_prices}\nThis appendix provides a more detailed derivation of the identification of skill prices that is presented in section \\ref{sec:identifying-changes-in-skill-prices}. Just like in that section, I start from the total derivative of the utility function with respect to time.\n\\begin{align} \n\t\\frac{d }{dt} u_{i}(\\lambda_i^*(\\tilde{w}_i(t)), \\tilde{w}_i(t)) &=\t\\underbrace{\\frac{\\partial u_i(\\lambda_i^*(\\tilde{w}_i(t)), \\tilde{w}_i(t))}{\\partial \\lambda_i^*(\\tilde{w}_i(t))} \\frac{\\partial \\lambda_i^*(\\tilde{w}_i(t))}{\\partial \\tilde{w}_i(t)}  \\frac{d \\tilde{w}_i(t)}{dt}}_{(1)} \\nonumber \\\\\n\t{} &+ \\underbrace{\\frac{\\partial u_i(\\lambda_i^*(\\tilde{w}_i(t)), \\tilde{w}_i(t))} {\\partial \\tilde{w}_i(t)} \\frac{d \\tilde{w}_i(t)}{dt}}_{(2)} \\tag{\\ref{eq:util_total_derivative}} \\\\\n\t\\intertext{By the first order condition, part (1) of above equation equals zero for utility maximizing task choices $\\lambda_{i,t}^*$. This can be thought of as an application of the envelope theorem, where $u_{i,t}$ is the objective function of a parameterized optimization problem and $\\lambda_{i,t}^*(\\tilde{w}_{i,t})$ is the parameterized optimizer. As parameter $\\tilde{w}_{i,t}$ changes, changes in the optimizer $\\lambda_{i,t}^*(\\tilde{w}_{i,t})$ of the objective function do not contribute to the change in the objective function.\\linebreak\n\tThe partial derivation of $u_{i,t}$ w.r.t. $\\tilde{w}_{i,t}$ in part (2) is easily calculated for the particular realized wage function that results for $J=2$ tasks:}\n\t\\frac{\\partial u_{i}(\\lambda_{i}^*(\\tilde{w}_{i}(t)), \\tilde{w}_{i}(t)}{\\tilde{w}_{i}(t)} \\frac{d\\tilde{w}_i(t)}{dt}   =& \\frac{\\partial}{\\partial \\tilde{w}_{i}(t)} \\left( \\lambda_{i}^* \\tilde{w}_{i}(t) - 2 \\theta |b_i - \\lambda_{i}^*|^\\phi\\right) \\frac{d\\tilde{w}_i(t)}{dt} \\nonumber\\\\\n\t{} &= \\lambda_{i}^* \\frac{d\\tilde{w}_i(t)}{dt} \\nonumber \\\\\n\t\\intertext{By substituting this transformation of part (2) into the total differential (equation (\\ref{eq:util_total_derivative})), equation (\\ref{eq:util_derivative_env_th}) is obtained.}\n\t\\frac{d }{dt} u_{i}(\\lambda_i^*(\\tilde{w}_i(t)), \\tilde{w}_i(t)) &= \\lambda^*(\\tilde{w}_i(t)) \\frac{d \\tilde{w}_i (t)}{d t} \\tag{\\ref{eq:util_derivative_env_th}} \n\\end{align}\n\\begin{align}\n\t\\intertext{Integrating equation (\\ref{eq:util_derivative_env_th}) from $t-1$ to $t$:}\n\t\\int^{t}_{t-1} \\left( \\frac{\\partial u_{i}(\\lambda_{i}^*(\\tilde{w}_{i}(\\tau)), \\tilde{w}_{i}(\\tau)}{\\tilde{w}_{i}(\\tau)} \\frac{d\\tilde{w}_i(\\tau)}{dt}\\right) d\\tau &= \\int^{t}_{t-1} \\left(\\lambda^*(\\tilde{w}_i(t)) \\frac{d \\tilde{w}_i (t)}{d t} \\right) d\\tau \\nonumber\n\\end{align}\nI define $\\Delta$ to denote the discrete change in a variable between two periods. 
I define $\\Delta$ to denote the discrete change in a variable between two periods. Specifically, in case of the utility $\\Delta u_{i,t}$ denotes the discrete change in utility from period $t-1$ to $t$, i.e.:\n\\begin{align} \\label{appen:definition_delta}\n\t\\Delta u_{i,t} &\\equiv \\int^{t}_{t-1} \\left( \\frac{\\partial u_{i}(\\lambda_{i}^*(\\tilde{w}_{i}(\\tau)), \\tilde{w}_{i}(\\tau))}{\\partial \\tilde{w}_{i}(\\tau)} \\frac{d\\tilde{w}_i(\\tau)}{d\\tau}\\right) d\\tau \\nonumber \\\\\n\t{} &= \\int^{t}_{t-1} \\left(\\lambda^*(\\tilde{w}_i(\\tau)) \\frac{d \\tilde{w}_i (\\tau)}{d \\tau} \\right) d\\tau \\nonumber\n\\end{align}\nRewriting this by application of integration by substitution with $\\tilde{w}_i(t) = \\tilde{w}_{i,t}$ and relying on the abridged notation with $t$ as subscript again:\n\\begin{align}\n\\Delta u_{i,t} &= \\int^{\\tilde{w}_{i,t}}_{\\tilde{w}_{i,t-1}} \\lambda_{i,\\tau}^* d\\tilde{w}_{i,\\tau} \\tag{\\ref{eq:util_integration}}\n\\end{align}\nI now apply the following linear approximation of the task choice parameter:\n\\begin{align}\n\t\\frac{\\lambda_{i,\\tau}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,\\tau} - \\tilde{w}_{i,t-1}} &\\approx \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t} - \\tilde{w}_{i,t-1}} \\nonumber \\\\\n\t\\Leftrightarrow \\lambda_{i,\\tau}^* &\\approx \\lambda_{i,t-1}^* + \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t} - \\tilde{w}_{i,t-1}} (\\tilde{w}_{i,\\tau} - \\tilde{w}_{i,t-1}) \\tag{\\ref{eq:lambda_approximation}}\n\\end{align}\nNow substituting equation (\\ref{eq:lambda_approximation}) into equation (\\ref{eq:util_integration}):\n\\begin{align}\n\t\\Delta u_{i,t} &= \\int^{\\tilde{w}_{i,t}}_{\\tilde{w}_{i,t-1}} \\left(  \\lambda_{i,t-1}^* + \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t} - \\tilde{w}_{i,t-1}} (\\tilde{w}_{i,\\tau} - \\tilde{w}_{i,t-1})\\right) d\\tilde{w}_{i,\\tau} \\nonumber \\\\\n\t\\intertext{Evaluate the definite integral via its antiderivative:}\n\t{} &= \\left[ \\lambda_{i,t-1}^* \\tilde{w}_{i,\\tau} + \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t} - \\tilde{w}_{i,t-1}}(\\frac{1}{2} \\tilde{w}_{i, \\tau}^2 - \\tilde{w}_{i,t-1} \\tilde{w}_{i,\\tau}) \\right ]^{\\tilde{w}_{i,t}}_{\\tilde{w}_{i,t-1}} \\nonumber \\\\\n\t\\intertext{Evaluating at the integration bounds:}\n\t{} &= \\lambda_{i,t-1}^* \\Delta \\tilde{w}_{i,t} + \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t}-\\tilde{w}_{i,t-1}} ( \\frac{1}{2} \\tilde{w}_{i,t}^2 - \\tilde{w}_{i,t-1} \\tilde{w}_{i,t} + \\frac{1}{2} \\tilde{w}_{i,t-1}^2 ) \\nonumber \\\\\n\t\\intertext{Rearranging and making use of the binomial expansion:}\n\t{} &= \\lambda_{i,t-1}^* \\Delta \\tilde{w}_{i,t} + \\frac{1}{2} \\frac{\\lambda_{i,t}^* - \\lambda_{i,t-1}^*}{\\tilde{w}_{i,t}-\\tilde{w}_{i,t-1}} (\\tilde{w}_{i,t} - \\tilde{w}_{i,t-1})^2 \\nonumber \\\\\n\t\\intertext{Which, defining the period average $\\bar{\\lambda}_{i,t}^* \\equiv \\frac{1}{2}\\left(\\lambda_{i,t-1}^* + \\lambda_{i,t}^*\\right)$, can be rearranged to:}\n\t{} &= \\bar{\\lambda}_{i,t}^* \\Delta \\tilde{w}_{i,t} \\tag{\\ref{eq:utility_change}}\n\\end{align}\n\\end{document}", "meta": {"hexsha": "23753591cadabed0480b73990616a707ae25e980", "size": 5552, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "latex_files/Appendix/ident_changes_in_skill_prices.tex", "max_stars_repo_name": "DaLueke/estimating_skill_prices", "max_stars_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "latex_files/Appendix/ident_changes_in_skill_prices.tex", "max_issues_repo_name": "DaLueke/estimating_skill_prices", 
"max_issues_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "latex_files/Appendix/ident_changes_in_skill_prices.tex", "max_forks_repo_name": "DaLueke/estimating_skill_prices", "max_forks_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.6956521739, "max_line_length": 549, "alphanum_fraction": 0.6509365994, "num_tokens": 2188, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891392358015, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5985335028685126}}
{"text": "\\chapter{Regularization}\\label{Chp:ref:regularization}\n\nThe general cost function $J^{total}$ to be minimized has some of the cost\nfunction $J^f$ measuring the defect of the result from the\nforward model with the data, and the cost function $J^{reg}$ introducing the\nregularization into the problem and makes sure that a unique answer exists.\nThe regularization term is a function of, possibly vector-valued, level set\nfunction $m$ which represents the physical properties to be represented and is,\nfrom a mathematical point of view, the unknown of the inversion problem.\nIt is the intention that the values of $m$ are between zero and one and that\nactual physical values are created from a mapping before being fed into a\nforward model. In general the cost function $J^{reg}$ is defined as \n\\begin{equation}\\label{EQU:REG:1}\nJ^{reg}(m) = \\frac{1}{2} \\int_{\\Omega} \\left(\n \\sum_{k} \\mu_k \\cdot ( \\omega^{(0)}_k \\cdot m_k^2 + \\omega^{(1)}_{ki}m_{k,i}^2 ) \n+  \\sum_{l<k} \\mu^{(c)}_{lk} \\cdot \\omega^{(c)}_{lk}  \\cdot  \\chi(m_l,m_k) \\right) \\; dx \n\\end{equation} \nwhere summation over $i$ is performed. The additional trade--off factors \n$\\mu_k$ and $\\mu^{(c)}_{lk}$ ($l<k$) are between zero and one and constant across the \ndomain. They are potentially modified during the inversion in order to improve the balance between the different terms \nin the cost function.\n\n$\\chi$ is a given symmetric, non-negative cross-gradient function\\index{cross-gradient\n}. We use\n\\begin{equation}\\label{EQU:REG:4}\n \\chi(a,b) =  ( a_{,i} a_{,i}) \\cdot ( b_{,j} b_{,j}) -   ( a_{,i} b_{,i})^2 \n\\end{equation} \nwhere summations over $i$ and $j$  are performed, see~\\cite{GALLARDO2005a}. Notice that cross-gradient function\nis measuring the angle between the surface normals of contours of level set functions. So \nminimizing the cost function will align the surface normals of the contours.\n\nThe coefficients $\\omega^{(0)}_k$, $\\omega^{(1)}_{ki}$ and $\\omega^{(c)}_{lk}$ define weighting factors which \nmay depend on their location within the domain. We assume that for given level set function $k$ the \nweighting factors $\\omega^{(0)}_k$, $\\omega^{(1)}_{ki}$ are scaled such that \n\\begin{equation}\\label{ref:EQU:REG:5}\n\\int_{\\Omega} ( \\omega^{(0)}_k  + \\frac{\\omega^{(1)}_{ki}}{L_i^2} ) \\; dx = \\alpha_k \n\\end{equation} \nwhere $\\alpha_k$ defines the scale which is typically set to one. $L_i$ is the width of the domain in $x_i$ direction.\nSimilarly we set for $l<k$ we set \n\\begin{equation}\\label{ref:EQU:REG:6}\n\\int_{\\Omega} \\frac{\\omega^{(c)}_{lk}}{L^4} \\; dx = \\alpha^{(c)}_{lk}\n\\end{equation} \nwhere $\\alpha^{(c)}_{lk}$ defines the scale which is typically set to one and\n\\begin{equation}\\label{ref:EQU:REG:6b}\n\\frac{1}{L^2} = \\sum_i \\frac{1}{L_i^2} \\;.\n\\end{equation} \n\nIn some cases values for the level set functions are known to be zero at certain regions in the domain. Typically this is the region \nabove the surface of the Earths. This expressed using a\na characteristic function $q$ which varies with its location within the domain. The function $q$ is set to zero except for those \nlocations $x$ within the domain where the values of the level set functions is known to be zero. For these locations $x$\n$q$ takes a positive value. 
\nThe coefficients $\\omega^{(0)}_k$, $\\omega^{(1)}_{ki}$ and $\\omega^{(c)}_{lk}$ define weighting factors which \nmay depend on their location within the domain. We assume that for a given level set function $k$ the \nweighting factors $\\omega^{(0)}_k$, $\\omega^{(1)}_{ki}$ are scaled such that \n\\begin{equation}\\label{ref:EQU:REG:5}\n\\int_{\\Omega} ( \\omega^{(0)}_k  + \\frac{\\omega^{(1)}_{ki}}{L_i^2} ) \\; dx = \\alpha_k \n\\end{equation} \nwhere $\\alpha_k$ defines the scale which is typically set to one. $L_i$ is the width of the domain in $x_i$ direction.\nSimilarly, for $l<k$ we set \n\\begin{equation}\\label{ref:EQU:REG:6}\n\\int_{\\Omega} \\frac{\\omega^{(c)}_{lk}}{L^4} \\; dx = \\alpha^{(c)}_{lk}\n\\end{equation} \nwhere $\\alpha^{(c)}_{lk}$ defines the scale which is typically set to one and\n\\begin{equation}\\label{ref:EQU:REG:6b}\n\\frac{1}{L^2} = \\sum_i \\frac{1}{L_i^2} \\;.\n\\end{equation} \n\nIn some cases values for the level set functions are known to be zero at certain regions in the domain. Typically this is the region \nabove the surface of the Earth. This is expressed using a characteristic function $q$ which varies with its location within the domain. The function $q$ is set to zero except for those \nlocations $x$ within the domain where the values of the level set functions are known to be zero. For these locations $x$,\n$q$ takes a positive value. For a single level set function one has\n\\begin{equation}\\label{ref:EQU:REG:7}\nq(x) = \\left\\{ \n\\begin{array}{rl}\n  1 & \\mbox{ if } m \\mbox{ is set to zero at location } x \\\\\n  0 & \\mbox{ otherwise }\n\\end{array}\n\\right.\n\\end{equation} \nFor a multi-valued level set function the characteristic function is set componentwise:\n\\begin{equation}\\label{ref:EQU:REG:7b}\nq_k(x) = \\left\\{ \n\\begin{array}{rl}\n  1 & \\mbox{ if component } m_k \\mbox{ is set to zero at location } x \\\\\n  0 & \\mbox{ otherwise }\n\\end{array}\n\\right.\n\\end{equation} \n\n\n\\section{Usage}\n\n\\begin{classdesc}{Regularization}{domain\n        \\optional{, w0=\\None}\n        \\optional{, w1=\\None}\n        \\optional{, wc=\\None}\n        \\optional{, location_of_set_m=Data()}\n        \\optional{, numLevelSets=1}\n        \\optional{, useDiagonalHessianApproximation=\\False}\n        \\optional{, tol=1e-8}\n        \\optional{, scale=\\None}\n        \\optional{, scale_c=\\None}\n        }\n\n  \ninitializes a regularization component of the cost function for inversion. \n\\member{domain} defines the domain of the inversion. \\member{numLevelSets}\nsets the number of level set functions to be found during the inversion. \n\\member{w0}, \\member{w1} and  \\member{wc} define the weighting factors\n$\\omega^{(0)}$,\n$\\omega^{(1)}$ and\n$\\omega^{(c)}$, respectively. A value for \\member{w0} or \\member{w1} or both must be given. \nIf more than one level set function is involved, \\member{wc} must be given. \n\\member{location_of_set_m} sets the characteristic function $q$ \nto define locations where the level set function is set to zero, see equation~(\\ref{ref:EQU:REG:7}).\n\\member{scale} and \n\\member{scale_c} set the scales $\\alpha_k$ in equation~(\\ref{ref:EQU:REG:5}) and\n$\\alpha^{(c)}_{lk}$ in equation~(\\ref{ref:EQU:REG:6}), respectively. By default, their values are set to one.\nNotice that weighting factors are rescaled to meet the scaling conditions. \\member{tol} sets the \ntolerance for the calculation of the Hessian approximation. \\member{useDiagonalHessianApproximation} \nindicates that coupling in the Hessian approximation produced by the \ncross-gradient term is to be ignored. 
This can speed up an individual iteration step in the inversion but typically leads to more\ninversion steps.\n\\end{classdesc}\n\n\\section{Gradient Calculation}\n\n\nThe cost function kernel\\index{cost function!kernel} is given as\n\\begin{equation}\\label{ref:EQU:REG:100}\nK^{reg}(m) = \\frac{1}{2}\n\\sum_{k} \\mu_k \\cdot ( \\omega^{(0)}_k \\cdot m_k^2 + \\omega^{(1)}_{ki}m_{k,i}^2 ) \n+  \\sum_{l<k} \\mu^{(c)}_{lk} \\cdot \\omega^{(c)}_{lk}  \\cdot  \\chi(m_l,m_k)\n\\end{equation} \nWe need to provide the gradient of the cost function $J^{reg}$  with respect to the level set functions $m$.\nThe gradient is represented by two functions $Y$ and $X$ which define the \nderivative of the cost function kernel with respect to $m$ and to the gradient $m_{,i}$, respectively:\n\\begin{equation}\\label{ref:EQU:REG:101}\n\\begin{array}{rcl}\n  Y_k & = & \\displaystyle{\\frac{\\partial K^{reg}}{\\partial m_k}} \\\\\n   X_{ki} & = & \\displaystyle{\\frac{\\partial K^{reg}}{\\partial m_{k,i}}} \n\\end{array}\n\\end{equation} \nFor the case of a single valued level set function $m$ we get \n\\begin{equation}\\label{ref:EQU:REG:202}\nY = \\mu \\cdot \\omega^{(0)} \\cdot m\n\\end{equation} \nand \n\\begin{equation}\\label{ref:EQU:REG:203}\n X_{i} = \\mu \\cdot \\omega^{(1)}_{i} \\cdot m_{,i}\n\\end{equation}\nFor a two-valued level set function $(m_0,m_1)$ we have\n\\begin{equation}\\label{ref:EQU:REG:302}\nY_k = \\mu_k \\cdot \\omega^{(0)}_k \\cdot m_k \\mbox{ for } k=0,1\n\\end{equation} \nand for $X$ \n\\begin{equation}\\label{ref:EQU:REG:303}\n\\begin{array}{rcl}\n X_{0i} &  = & \\mu_0 \\cdot \\omega^{(1)}_{0i} \\cdot m_{0,i} + \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot\n\\left( (m_{1,j}m_{1,j} ) \\cdot m_{0,i} - (m_{1,j}m_{0,j} ) \\cdot m_{1,i} \\right) \\\\\n X_{1i} &  = & \\mu_1 \\cdot \\omega^{(1)}_{1i} \\cdot m_{1,i} + \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot\n\\left( (m_{0,j}m_{0,j} ) \\cdot m_{1,i} - (m_{1,j}m_{0,j} ) \\cdot m_{0,i} \\right)\n\\\\\n\\end{array}\n\\end{equation}  \nWe also need to provide an approximation of the inverse of the Hessian operator as discussed in section~\\ref{chapter:ref:inversion cost function:gradient}.\nFor the case of a single valued level set function $m$ we get \n\\begin{equation}\\label{ref:EQU:REG:601}\n\\begin{array}{rcl}\n A_{ij} & =&  \\mu \\cdot \\omega^{(1)}_i \\cdot \\delta_{ij}  \\\\\nD & = &  \\mu \\cdot \\omega^{(0)} \n\\end{array}\n\\end{equation}\nFor a two-valued level set function $(m_0,m_1)$ we have\n\\begin{equation}\\label{ref:EQU:REG:602}\nD_{kl}  =   \\mu_k \\cdot \\omega^{(0)}_k \\cdot \\delta_{kl} \n\\end{equation} \nand \n\\begin{equation}\\label{ref:EQU:REG:603}\n\\begin{array}{rcl}\nA_{0i0j} & = & \\mu_0 \\cdot \\omega^{(1)}_{0i} \\cdot \\delta_{ij} + \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot \n\\left( (m_{1,j'}m_{1,j'} )\\cdot \\delta_{ij}  -  m_{1,i} \\cdot m_{1,j} \\right)    \\\\\nA_{0i1j} & = & \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot \\left( 2 \\cdot m_{0,i} \\cdot  m_{1,j}\n- m_{1,i} \\cdot  m_{0,j} - ( m_{1,j'} m_{0,j'} ) \\cdot  \\delta_{ij}\n\\right)  \\\\\nA_{1i0j} & = & \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot \\left( 2 \\cdot m_{1,i} \\cdot  m_{0,j}\n- m_{0,i} \\cdot  m_{1,j} - ( m_{1,j'} m_{0,j'} ) \\cdot  \\delta_{ij} \\right)  \\\\\nA_{1i1j} & = &  \\mu_1 \\cdot \\omega^{(1)}_{1i} \\cdot \\delta_{ij} + \\mu^{(c)}_{01} \\cdot \\omega^{(c)}_{01} \\cdot\n\\left( (m_{0,j'}m_{0,j'} ) \\cdot \\delta_{ij}  -  m_{0,i} \\cdot m_{0,j} \\right) \n\\end{array}\n\\end{equation} \n\n\n", 
"meta": {"hexsha": "51ef564d9f459667790642967c1d7efb6f36ffc5", "size": 8637, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/inversion/Regularization.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/inversion/Regularization.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/inversion/Regularization.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.3542857143, "max_line_length": 155, "alphanum_fraction": 0.678013199, "num_tokens": 2961, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891305219504, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5985335013773321}}
{"text": "\\documentclass[a4paper,10pt]{article}\r\n\\usepackage{a4wide}\r\n\r\n\\newcommand{\\rvir}{r_{\\mathrm{vir}}}\r\n\\newcommand{\\rs}{r_{\\mathrm{s}}}\r\n\\newcommand{\\rcut}{r_{\\mathrm{cutoff}}}\r\n\\newcommand{\\rdec}{r_{\\mathrm{decay}}}\r\n\\newcommand{\\Mtot}{M_{\\mathrm{tot}}}\r\n\\newcommand{\\Ntot}{N_{\\mathrm{tot}}}\r\n\r\n\\author{Marcel Zemp}\r\n\\title{HALOGEN4MUSE}\r\n\\date{30. August 2007}\r\n\r\n\\begin{document}\r\n\r\n\\maketitle\r\n\r\n\\abstract{HALOGEN4MUSE allows to generate spherical structures from the $\\alpha\\beta\\gamma$-model family with an isotropic velocity tensor. Particles are sampled self-consistently form the distribution function of the models.}\r\n\r\n\\section{General characteristics of the models}\r\n\r\nWe restrict ourself to models of the form given by equation (\\ref{eq:rhocut}) with $\\gamma < 3$ for the mass not to diverge in the centre. Similarly, for $\\beta \\leq 3$ the total mass would diverge and we need to introduce a cut-off radius. We chose the form\r\n\\begin{equation} \\label{eq:rhocut}\r\n\\rho(r) = \\left\\{\r\n\\begin{array}{ll}\r\n\\frac{\\rho_0}{\\left(\\frac{r}{\\rs}\\right)^\\gamma \\left(1+\\left(\\frac{r}{\\rs}\\right)^\\alpha\r\n\\right)^{\\left(\\frac{\\beta-\\gamma}{\\alpha}\\right)}}\t&\r\nr \\leq \\rcut \\\\\r\n\\rho(\\rcut) \\left(\\frac{r}{\\rcut}\\right)^\\delta \\mathrm{e}^{-\\frac{r-\\rcut}{\\rdec}} &\r\nr > \\rcut\r\n\\end{array}\r\n\\right.\r\n\\end{equation}\r\nwhere $\\delta$ and $\\rdec$ are free parameters. By requiring the logarithmic slope to be continuous at $\\rcut$, we get\r\n\\begin{equation}\r\n\\delta = \\frac{\\rcut}{\\rdec} - \\frac{\\gamma + \r\n\\beta \\left(\\frac{\\rcut}{\\rs}\\right)^{\\alpha}}{1 + \\left(\\frac{\\rcut}{\\rs}\\right)^{\\alpha}}~.\r\n\\end{equation}\r\nWe set the truncation scale $\\rdec = 0.3~\\rcut$ in order not to make the truncation too sharp. A too sharp truncation can lead to an instability of the model around $\\rcut$. For $\\beta > 3$ we simply set $\\rcut = \\infty$ (i.e. no cut-off) while for $\\beta \\leq 3$ one has to specify a cut-off scale $\\rcut$ (e.g. $\\rcut = 10~\\rs$). By further specifying $\\rs$ and the total mass $\\Mtot$, the normalisation $\\rho_{0}$ is given by\r\n\\begin{equation}\r\n\\rho_0 = \\frac{\\Mtot}{4 \\pi \\rs^3 (I_{\\mathrm{M}} + I_{\\mathrm{M,cutoff}})}\r\n\\end{equation}\r\nwhere\r\n\\begin{eqnarray}\r\nI_{\\mathrm{M}} &\\equiv& \\int_0^{q} \\frac{x^{2-\\gamma}}{\\left(1+x^\\alpha\\right)^{\\left(\\frac{\\beta-\\gamma}{\\alpha}\\right)}} \\mathrm{d}x\r\n\\stackrel{q=\\infty}{=} \\frac{\\Gamma\\left(\\frac{\\beta-3}{\\alpha}\\right) \\Gamma\\left(\\frac{3-\\gamma}{\\alpha}\\right)}{\\alpha \\Gamma\\left(\\frac{\\beta-\\gamma}{\\alpha}\\right)} \\\\\r\nI_{\\mathrm{M,cutoff}} &\\equiv&\r\n\\frac{1}\r\n{\\rs^3 q^\\gamma \\left(1+q^\\alpha\\right)^{\\left(\\frac{\\beta-\\gamma}{\\alpha}\\right)}}\r\n\\int_{\\rcut}^{\\infty} r^2 \\left(\\frac{r}{\\rcut}\\right)^{\\delta} \\mathrm{e}^{-\\frac{r-\\rcut}{\\rdec}} \\stackrel{q=\\infty}{=} 0\r\n\\end{eqnarray}\r\nwith $q = \\rcut / \\rs$ and $\\Gamma$ is the standard gamma function.\r\n\r\n\\section{Distribution function as probability density function}\r\n\r\nFor spherical systems with an isotropic velocity distribution one can calculate the distribution function, which in that case only depends on energy, by the Eddington inversion. 
\\section{Distribution function as probability density function}\r\n\r\nFor spherical systems with an isotropic velocity distribution one can calculate the distribution function, which in that case only depends on energy, by the Eddington inversion. Hence, by restricting ourselves to models with an isotropic velocity distribution, we can calculate the distribution function for our spherical structure models described by equation (\\ref{eq:rhocut}) - at least numerically. Since the state of a system at a given time is completely described by the distribution function $f(\\vec{r},\\vec{v})$, we use it as a probability density function in order to sample the phase space with particles,\r\n\\begin{equation}\r\np_{\\mathrm{6D}}(\\vec{r},\\vec{v}) \\mathrm{d}\\vec{r} \\mathrm{d}\\vec{v} = \\frac{f(\\vec{r},\\vec{v})}{\\Mtot} \\mathrm{d}\\vec{r} \\mathrm{d}\\vec{v}\r\n\\end{equation}  \r\nis the probability that a particle is in the volume $\\mathrm{d}\\vec{r} \\mathrm{d}\\vec{v}$ around the phase space point $(\\vec{r},\\vec{v})$. By integrating out the velocities and using spherical symmetry, we get for the probability density $p(r)$ in coordinate space\r\n\\begin{equation}\r\np(r) \\mathrm{d}r = \\frac{4 \\pi r^2 \\rho(r)}{\\Mtot} \\mathrm{d}r\r\n\\end{equation}\r\nand the positions can now be sampled by using the quantile function, which is the inverse of the cumulative probability distribution function $M(r)/\\Mtot$ for the above probability density function $p(r)$. For a particle at location $\\vec{r}_i$ we now get the following probability density for the velocities \r\n\\begin{equation}\r\np(r_i,v) \\mathrm{d}v = \\frac{16 \\pi^2}{\\Mtot} f(r_i,v) r_i^2 v^2 \\mathrm{d}v\r\n\\end{equation}\r\nwhere $r_i = |\\vec{r}_i|$. In general, the distribution function can only be calculated numerically for the large family of models described by the density profile (\\ref{eq:rhocut}). Hence, numerical integration and inversion for $p(r_i,v)$ in order to calculate the quantile function is difficult and one generally uses the acceptance-rejection technique for the Monte Carlo sampling of the velocities.\r\n\r\nThis Monte Carlo sampling procedure directly from the distribution function $f(\\vec{r},\\vec{v})$ leads to perfectly stable equilibrium models. These models do not show the flattening in the central part of the structure during evolution that is obtained under the assumption of a local Maxwellian velocity distribution with the velocity dispersion given by the Jeans equation.\r\n\r\n\\section{Options and Usage}\r\n\r\nHALOGEN4MUSE accepts command line options in the following way: for example the command \\\\ \r\n\\\\\r\n\\texttt{halogen4muse -a 2 -b 5 -c 0 -M 1 -rs 1 -N 100000 -name plummer} \\\\ \r\n\\\\\r\ngenerates a Plummer sphere with $\\alpha = 2$, $\\beta = 5$, $\\gamma = 0$, $\\Mtot = 1$, $\\rs = 1$, $\\Ntot = 100000$ and writes the output into a file \\texttt{plummer.IC.ascii}. The built-in default values for the total mass and the scale radius are unity, i.e. $\\Mtot = 1$ and $\\rs = 1$. So, we could have left these options out of the command line above.\r\n\r\nFor models that need a cut-off (i.e. $\\beta \\leq 3$), one has to additionally specify a cut-off radius $\\rcut$. For example the command \\\\\r\n\\\\\r\n\\texttt{halogen4muse -a 1 -b 3 -c 1 -M 1 -rs 1 -rcutoff 10 -N 100000 -name nfw}\\\\ \r\n\\\\\r\ngenerates an NFW profile with $\\alpha = 1$, $\\beta = 3$, $\\gamma = 1$, $\\Mtot = 1$, $\\rs = 1$, $\\rcut = 10$ and $\\Ntot = 100000$.\r\n\r\nIf one specifies the additional options \\texttt{-ogr} or \\texttt{-ogdf}, the grid in radius and the grid for the distribution function, respectively, are written out into separate files.\r\n\r\nNormally, the Monte Carlo sampling is initiated by a random seed. 
The option \\texttt{-randomseed} allows one to set this artificially.\r\n\r\nThe virial radius is defined by\r\n\\begin{equation}\r\n\\rvir \\equiv {\\Mtot^2}/{\\sum_{i, j \\neq i} \\frac{m_i m_j}{|\\vec{r}_i - \\vec{r}_j|}} = - \\frac{1}{2} \\frac{G \\Mtot^2}{E_{\\mathrm{pot}}}\r\n\\end{equation}\r\nwhere the second equality is only valid in the $N \\rightarrow \\infty$ limit. The standard way to calculate $\\rvir$ is via the potential energy since it is much faster than the exact way via the $N^2$ sum over all particles. If one sets the option \\texttt{-dorvirexact}, one has the possibility to calculate the virial radius exactly via the $N^2$ sum.\r\n\r\nHALOGEN4MUSE also supports the inclusion of a central black hole. With \\texttt{-MBH} one specifies the mass of the black hole which is then set with zero velocity at the geometric centre of the surrounding structure. The presence of the central point mass modifies the distribution function of the surrounding structure compared to the same model without central point mass. The self-consistent distribution function of the spherical structure with a central point mass is again calculated numerically so that the velocities are sampled correctly. Warning: choices of $\\gamma < 1/2$ lead to unphysical models, i.e. the distribution function has negative values. But HALOGEN4MUSE warns you if you accidentally do so.\r\n\r\nOf course, you can always ask HALOGEN4MUSE for help with the \\texttt{-h} or \\texttt{-help} options.\r\n\r\n\\end{document}", "meta": {"hexsha": "0c99509ecdb59758f14d854952468b5478c8a18b", "size": 7903, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/amuse/community/halogen/src/doc/HALOGEN4MUSE.tex", "max_stars_repo_name": "rknop/amuse", "max_stars_repo_head_hexsha": "85d5bdcc29cfc87dc69d91c264101fafd6658aec", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 131, "max_stars_repo_stars_event_min_datetime": "2015-06-04T09:06:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T12:11:29.000Z", "max_issues_repo_path": "src/amuse/community/halogen/src/doc/HALOGEN4MUSE.tex", "max_issues_repo_name": "rknop/amuse", "max_issues_repo_head_hexsha": "85d5bdcc29cfc87dc69d91c264101fafd6658aec", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 690, "max_issues_repo_issues_event_min_datetime": "2015-10-17T12:18:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:15:58.000Z", "max_forks_repo_path": "src/amuse/community/halogen/src/doc/HALOGEN4MUSE.tex", "max_forks_repo_name": "rieder/amuse", "max_forks_repo_head_hexsha": "3ac3b6b8f922643657279ddee5c8ab3fc0440d5e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 102, "max_forks_repo_forks_event_min_datetime": "2015-01-22T10:00:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T13:29:43.000Z", "avg_line_length": 78.2475247525, "max_line_length": 716, "alphanum_fraction": 0.7163102619, "num_tokens": 2368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677468516187, "lm_q2_score": 0.7057850402140659, "lm_q1q2_score": 0.5984829503119005}}
{"text": "\\section{Time Integration}\nA maneuver is a general unsteady flight condition which includes both trimmed flight and accelerating motions. While it is computationally more expensive to simulate compared to trim, the nature of the present formulation renders it extremely straightforward to implement. Recall that the dynamics formulation resembles \n\\[ \\textbf{f}(\\vector{y}\\textrm{ , }\\dot{\\vector{y}}\\textrm{ , }\\vector{u}\\textrm{ , }t)\n\\quad = \\quad \\grkvec{\\epsilon} \\quad = \\quad \\vector{0} \\]\n\nUsing an ODE solver (Ref. \\cite{Petzold}), the values of $\\vector{y}$ and $\\dot{\\vector{y}}$ are adjusted automatically at each time step by the solver (internally using polynomial interpolation up to order 5) until the relative and absolute errors fall below a user-specified threshold $\\delta_\\textrm{ODE}$. Reducing this threshold, i.e. enforcing more precision increases the computational effort, but does not significantly affect the accuracy of the solutions beyond a certain numerical value of $\\delta_\\textrm{ODE}$. For cases investigated so far, the point of diminishing returns is typically $\\delta_\\textrm{ODE}$ =  10$^{-6}$. The version of DASSL used requires a function argument without derived types, since it is written in Fortran 77. An adapter routine \\textbf{ODE\\_Bridge} converts the state vectors provided by DASSL into the derived types needed for \\textbf{ODEResiduals}. \n\nIntegration can be performed either with user-prescribed controls (subroutine \\textbf{read\\_con}), fixed controls or a pseudo-flight control system to maintain trim for propulsive trim. Integration is performed for a specified number of rotor revolutions until convergence is achieved. The advantage of this technique is that it makes no assumptions on the time resolution of rotor motions, especially blade accelerations. \n\\begin{Figure}\n \\centering\n \\includegraphics[width=1.3\\textwidth, angle=90]{images/time_march_callgraph.png}\n \\vspace{-0.5cm}\n \\captionof{figure}{Call graph to perform time marching}\n \\label{fig:cg}\n\\end{Figure}\n", "meta": {"hexsha": "f7329e80e70253d737e9daae878fa20091673afe", "size": 2028, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Autodoc/theory_prog_manual/Integration.tex", "max_stars_repo_name": "ananthsridharan/vtol_sizing", "max_stars_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-03-24T10:20:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-22T18:49:25.000Z", "max_issues_repo_path": "Autodoc/theory_prog_manual/Integration.tex", "max_issues_repo_name": "ananthsridharan/vtol_sizing", "max_issues_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-08T10:26:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-04T18:19:59.000Z", "max_forks_repo_path": "Autodoc/theory_prog_manual/Integration.tex", "max_forks_repo_name": "ananthsridharan/vtol_sizing", "max_forks_repo_head_hexsha": "3f754e1bd3cebdb5b5c68c8a2d84c47be1df2f02", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-11-27T21:21:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-20T15:44:18.000Z", "avg_line_length": 126.75, "max_line_length": 892, "alphanum_fraction": 0.7869822485, "num_tokens": 481, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.8479677430095496, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5984829318572109}}
{"text": "\\section{Functional programming}\n\n\\begin{frame}[fragile]\n  \\frametitle{List Comprehensions}\n  \\begin{itemize}\n  \\item A different style of Programming\n  \\item Functions `emulate' mathematical functions\n  \\item Output depends only on input arguments\n  \\item There is no `state' associated with the program\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{List Comprehensions}\n  \\begin{block}{}\n    Given a list of weights of people, and we need to\n    calculate the corresponding weights on the moon. Return a new\n    list, with each of the values divided by 6.0. \n  \\end{block}\n\n  \\begin{itemize}\n  \\item Solution using \\texttt{for} loop is shown\n  \\end{itemize}\n  \\begin{lstlisting}\n    weights = [30, 45.5, 78, 81, 55.5, 62.5]\n    weights_moon = []\n    for w in weights:\n        weights_moon.append(w/6.0)\n    print weights_moon\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{List Comprehensions \\ldots}\n  \\begin{itemize}\n  \\item List comprehensions are compact and readable\n  \\end{itemize}\n  \\begin{lstlisting}\n    [ w/6.0 for w in weights ]\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{List Comprehensions \\ldots}\n  \\begin{itemize}\n  \\item Return the weight on moon, only if weight on earth > 50\n  \\end{itemize}\n  \\begin{lstlisting}\n    weights = [30, 45.5, 78, 81, 55.5, 62.5]\n    weights_moon = []\n    for w in weights:\n        if w > 50:\n            weights_moon.append(w/6.0)\n\n    print weights_moon\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item List comprehension that checks for a condition\n  \\end{itemize}\n  \\begin{lstlisting}\n    [ w/6.0 for w in weights if w>50 ]\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{List Comprehensions \\ldots}\n  \\begin{itemize}\n  \\item if weight > 50, return weight/6.0\n  \\item else, return weight*3.0\n  \\end{itemize}\n  \\begin{lstlisting}\n    weights = [30, 45.5, 78, 81, 55.5, 62.5]\n    weights_migrate = []\n    for w in weights:\n        if w > 50:\n            weights_migrate.append(w/6.0)\n        else:\n            weights_migrate.append(w * 3.0)\n\n    print weights_migrate\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item This problem \\alert{CANNOT} be solved using list\n    comprehensions\n  \\item Try \\texttt{map}\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{\\texttt{map}}\n  \\begin{itemize}\n  \\item Takes a function and a sequence as arguments\n  \\item Calls function with each element of sequence as argument\n  \\item Returns a sequence with all the results as elements\n  \\item We solve the easier problem first, using \\texttt{map}\n  \\end{itemize}\n\n  \\begin{lstlisting}\n    def moonize(weight):\n        return weight/6.0\n\n    map(moonize, weights)\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{\\texttt{map} \\ldots}\n  \\begin{lstlisting}\n    def migrate(weight):\n        if weight < 50:\n            return weight*3.0\n        else:\n            return weight/6.0\n  \\end{lstlisting}\n\n  \\begin{itemize}\n  \\item \\texttt{migrate} compares weight with 50 and returns the\n    required value.\n  \\item We can now use \\texttt{map}\n  \\end{itemize}\n  \\begin{lstlisting}\n    map(migrate, weights)\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item Now, we wish to get away with the function definition\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  
\\begin{frame}[fragile]\n  \\frametitle{\\texttt{lambda}}\n  \\begin{itemize}\n  \\item Allows function definition, anonymously\n  \\end{itemize}\n  \\begin{lstlisting}\n    map(lambda x: x/6.0, weights)\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item \\texttt{lambda} actually returns a function which we could in\n    fact assign to a name and use later.\n  \\end{itemize}\n  \\begin{lstlisting}\n    l_moonize = lambda x: x/6.0\n    map(l_moonize, weights)\n  \\end{lstlisting}\n  \\begin{lstlisting}\n    l_migrate = lambda x: x*3.0 if x < 50 else x/6.0\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{\\texttt{filter}}\n\n  \\begin{itemize}\n  \\item We avoided discussing the problem of returning new weights only\n    when the actual weight is more than 50. \n  \\item \\texttt{filter} can be used to filter out ``bad'' weights\n  \\item Later, we could use \\texttt{map}\n  \\end{itemize}\n  \\begin{lstlisting}\n    filter(lambda x: x > 50, weights)\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item Takes a function and a sequence\n  \\item Returns a sequence, containing only those elements of the\n    original sequence, for which the function returned \\texttt{True}\n  \\end{itemize}\n  \\begin{lstlisting}\n    map(lambda x: x/6.0, filter(lambda x: x > 50, weights))\n  \\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{\\texttt{reduce}}\n  \\begin{itemize}\n  \\item ``reduces'' a sequence\n  \\end{itemize}\n  \\begin{lstlisting}\n    reduce(lambda x,y: x*y, [1, 2, 3, 4])\n  \\end{lstlisting}\n  \\begin{itemize}\n  \\item Takes function and sequence as arguments; the function should\n    take two arguments\n  \\item Passes first two elements of sequence, and continues to move\n    over the sequence, passing the output of the previous step and the\n    current element as the arguments\n  \\item The function above essentially calculates $((1*2)*3)*4$ \n  \\end{itemize}\n\\end{frame}\n\n", "meta": {"hexsha": "25c4f974fb728c05776de93daaf59e00a87a883e", "size": 5131, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/advanced_python/slides/lambda.tex", "max_stars_repo_name": "FOSSEE/sees", "max_stars_repo_head_hexsha": "0e76356043e3ed28a74ecb5c8f64094bc18f2115", "max_stars_repo_licenses": ["OLDAP-2.5"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-01-21T13:52:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-12T08:54:48.000Z", "max_issues_repo_path": "slides/advanced_python/slides/lambda.tex", "max_issues_repo_name": "FOSSEE/sees", "max_issues_repo_head_hexsha": "0e76356043e3ed28a74ecb5c8f64094bc18f2115", "max_issues_repo_licenses": ["OLDAP-2.5"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2015-01-22T08:07:08.000Z", "max_issues_repo_issues_event_max_datetime": "2015-03-12T14:39:57.000Z", "max_forks_repo_path": "slides/advanced_python/slides/lambda.tex", "max_forks_repo_name": "FOSSEE/sees", "max_forks_repo_head_hexsha": "0e76356043e3ed28a74ecb5c8f64094bc18f2115", "max_forks_repo_licenses": ["OLDAP-2.5"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-01-20T23:03:36.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-12T08:54:57.000Z", "avg_line_length": 27.0052631579, "max_line_length": 70, "alphanum_fraction": 0.6838822842, "num_tokens": 1622, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.598478094340029}}
{"text": "\\title{Experimental Laboratory 1\\\\Calibration of a load cell}\n\\author{\n        Sergio M. Vanegas A.\\\\\n        Francesco de Pas\\\\\n                Department of Mathematics\\\\\n        Polimi---Politecnico di Milano\\\\\n        Milano, Italia\n}\n\\date{\\today}\n\n\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{siunitx}\n\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\n        The second test case consists on the experimental calibration of a Load Cell (instrument for measuring Force). The load cell produces an electrical output (Voltage) which is approximately proportional to the applied force. In order to estimate the value a force with a load cell, it is necessary to determine the transfer function from voltage to newton \\( V [volt] \\rightarrow F [Newton] \\). \\cite{FL:02}\n\\end{abstract}\n\n\\section{Introduction}\n\n        The transfer function is obtained by using mass samples whose weight force \\( F^* \\) is accurately known (see Figure~\\ref{fig:sketch}); this operation is what's usually called calibration.\n\n        \\begin{figure}[!ht]\n                \\includegraphics[width=0.75\\textwidth]{Sketch.png}\n                \\centering\n                \\caption{Sketch of the Case}\n                \\label{fig:sketch}\n        \\end{figure}\n\n        Each mass sample is applied to the load cell for a few seconds and the correspondent electrical output is recorded (the sampling frequency is \\( f_s=100 \\: H\\!z \\)). This produces a noisy signal like the one shown in Figure~.\n\n        \\begin{figure}[!ht]\n                \\includegraphics[width=0.75\\textwidth]{Noise.png}\n                \\centering\n                \\caption{Noisy Load Cell signal example}\n                \\label{fig:noise}\n        \\end{figure}\n\n        The voltage output histories and the corresponding values of the reference force, \\( F^* \\), are listed in the Table and provided in the form of MATLAB workspaces. Each workspace contains the vector of the readings in Volt (remember that the sampling frequency is \\( f_s = 100 H\\!z \\)). 
In Table~\\ref{tab:data}, the sign of \\( F^* \\) is as presented in the sketch, namely, \\( F^* > 0 \\) when the force is directed upwards, and \\( F^* < 0 \\) when it is directed downwards.\n\n        \\begin{table}[!ht]\n                \\begin{tabular}{|cc|cc|}\n                        \\hline\n                        \\textbf{TestID} & \\( \\pmb{F^* \\: \\left[ N \\right]} \\) & \\textbf{TestID} & \\( \\pmb{F^* \\: \\left[ N \\right]} \\) \\\\ \\hline\n                        calibr01        & 0.000               & calibr09        & -5.598              \\\\\n                        calibr02        & -0.314              & calibr10        & -10.501             \\\\\n                        calibr03        & -0.411              & calibr11        & 0.126               \\\\\n                        calibr04        & -0.695              & calibr12        & 1.097               \\\\\n                        calibr05        & -0.891              & calibr13        & 2.078               \\\\\n                        calibr06        & -1.186              & calibr14        & 4.039               \\\\\n                        calibr07        & -1.676              & calibr15        & 8.942               \\\\\n                        calibr08        & -2.656              & calibr16        & 13.845              \\\\ \\hline\n                \\end{tabular}\n                \\centering\n                \\caption{Reference Forces}\n                \\label{tab:data}\n        \\end{table}\n\n        The main objective of the laboratory is to determine the transfer function for the load cell, and estimate the uncertainty of the instrument. For that purpose, the remainder of the report is organized as follows: Section~\\ref{sec:stabilized_signal} graphically estimates the acquisition time required in order to get a stabilized measure out of an accumulating averaged voltage signal, and then provides the stabilized voltage value for each test case; Section~\\ref{sec:lin_regression} performs a linear regression over the averaged voltage measure as a function of the reference force; Section~\\ref{sec:uncertainty} provides an approximation of the uncertainty we can expect from using the coefficients obtained in Section~\\ref{sec:lin_regression} to estimate the value of the measured force as a function of the averaged voltage signal; finally, Section~\\ref{sec:resolution} provides an approximation of the resolution of the load cell based on the measured data from the test cases.\n\n\\section{Stabilized voltage measurement} \\label{sec:stabilized_signal}\n\n        \\begin{figure}[!ht]\n                \\includegraphics[width=0.75\\textwidth]{Averaged_Data.png}\n                \\centering\n                \\caption{Accumulated averaged data over time (normalized)}\n                \\label{fig:averaged}\n        \\end{figure}\n\n        Following the guidelines, we proceeded to calculate the accumulated average of the sensor signal over time. Additionally, we normalized the difference between each partial average and the final one by this last value and plotted the square of this quantity, which is the signal we can observe on Figure~\\ref{fig:averaged}. We can observe how the power of the error decreases by 3 orders of magnitude (at the very least) by the 10 second mark, turning it into a convenient criterion to start considering the averaged measure as valid.\n\n
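        The processing described above is easy to reproduce; the following Python sketch (our own, assuming one test case's readings are already loaded into a 1-D array \\( v \\); the lab data itself ships as MATLAB workspaces) computes the accumulated average and the normalized squared error plotted in Figure~\\ref{fig:averaged}:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef stabilization_error(v, fs=100.0):\n    n = np.arange(1, len(v) + 1)\n    running_mean = np.cumsum(v) / n       # accumulated average\n    final = running_mean[-1]\n    err = ((running_mean - final) / final) ** 2\n    t = np.arange(len(v)) / fs            # time axis in seconds\n    return t, err\n\\end{verbatim}\n\n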
        Nevertheless, as a countermeasure to overfitting (and considering the data had already been gathered anyway), we decided to extract the median of each accumulated average after the 10 second mark and use it as the final measure.\n\n\\section{Linear regression} \\label{sec:lin_regression}\n\n        \\begin{figure}[!ht]\n                \\includegraphics[width=0.75\\textwidth]{Fitting.png}\n                \\centering\n                \\caption{Data linear regression}\n                \\label{fig:regression}\n        \\end{figure}\n\n        After extracting the averaged measure, we proceeded to perform a linear regression on the data as a function of the reference applied force. The resulting formula corresponds to the one in Equation~\\ref{eq:regression}.\n\n        \\begin{equation} \\label{eq:regression}\n                \\mu_{V_0} = m F^{*} + b = (\\SI{-4.077e-4}{\\volt\\per\\newton}) F^{*} + (\\SI{-2.901e-5}{\\volt})\n        \\end{equation}\n\n\\section{Uncertainty estimation} \\label{sec:uncertainty}\n\n        Considering the random nature of the measurement process, we had to calculate the uncertainty \\( U \\) associated with the estimated force \\( \\widetilde{F} = \\frac{V_0 - b}{m} \\). For that purpose, we followed 2 different approaches:\n\n        \\begin{enumerate}\n                \\item Set \\( U \\) as the maximum absolute difference between \\( \\widetilde{F} \\) and \\( F^{*} \\) for all the 16 calibration cases.\n                \\item Set \\( U \\) as a function of the confidence intervals for a perfect Gaussian PDF (\\( 95\\% \\) confidence).\n        \\end{enumerate}\n\n        The resulting uncertainties can be seen in Equation~\\ref{eq:uncertainty_1} and Equation~\\ref{eq:uncertainty_2} respectively.\n\n        \\begin{equation} \\label{eq:uncertainty_1}\n                U_1 = \\sup_{i \\in \\{1,...,16\\}} \\left| \\frac{V_i - b}{m} - F_i^* \\right| = \\SI{1.381e-2}{\\newton}\n        \\end{equation}\n\n        \\begin{equation} \\label{eq:uncertainty_2}\n                U_2 = 2 \\bar{S}_F = 2 \\sqrt{\\frac{1}{N-2} \\sum_{i = 1}^{16}{\\left(\\frac{V_i - b}{m} - F_i^*\\right)^2}} = \\SI{1.179e-2}{\\newton}\n        \\end{equation}\n\n
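        Both the fit and the two uncertainty estimates reduce to a few lines; a minimal Python sketch (again our own, with \\( V \\) the 16 stabilized voltages and \\( F \\) the 16 reference forces; the report's own processing was done in MATLAB):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef calibrate(V, F):\n    m, b = np.polyfit(F, V, 1)        # linear fit V = m*F + b\n    F_est = (V - b) / m               # force estimated from voltage\n    U1 = np.max(np.abs(F_est - F))    # worst-case deviation\n    U2 = 2.0 * np.sqrt(np.sum((F_est - F)**2) / (len(F) - 2))\n    return m, b, U1, U2\n\\end{verbatim}\n\n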
\\section{Resolution estimation} \\label{sec:resolution}\n\n        Finally, we estimated the sensor's voltage resolution by ordering each measuring sequence and extracting the global minimum variation, which ended up being \\( \\SI{6.593e-6}{\\volt} \\). Then, making use of the linear regression coefficients calculated beforehand, we obtained the expression in Equation~\\ref{eq:resolution} for the sensor's force resolution.\n\n        \\begin{equation} \\label{eq:resolution}\n                \\Delta F = \\inf_{\\left\\{ V_1, V_2 \\right\\}} \\left| \\frac{V_1 - b}{m} - \\frac{V_2 - b}{m} \\right| = \\left| \\frac{\\Delta V}{m} \\right| = \\SI{1.617e-2}{\\newton}\n        \\end{equation}\n\n\\bibliographystyle{abbrv}\n\\bibliography{main}\n\n\\end{document}\n", "meta": {"hexsha": "2a73a4b4eae5db8014803c7a31637c23c7c6180d", "size": 8149, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Fluids_Labs/Lab_2/main.tex", "max_stars_repo_name": "sergiovaneg/LaTex_Documents", "max_stars_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Fluids_Labs/Lab_2/main.tex", "max_issues_repo_name": "sergiovaneg/LaTex_Documents", "max_issues_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Fluids_Labs/Lab_2/main.tex", "max_forks_repo_name": "sergiovaneg/LaTex_Documents", "max_forks_repo_head_hexsha": "22daa8196b611089e6753e600c39922c55522d9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-21T14:26:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-21T14:26:02.000Z", "avg_line_length": 65.192, "max_line_length": 988, "alphanum_fraction": 0.6281752362, "num_tokens": 2032, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8152324826183822, "lm_q1q2_score": 0.5984780844534685}}
{"text": "\\section{Question 2}\nSystem:\n$$\nG_{1_{(s)}} = \\dfrac{1}{(17s+1)(5s+1)}\\exp(-30s) = \\dfrac{1}{102s^2+23s+1}\n$$\nWe use Optimal PID to design controller with ITAE, ISE and IAE cost function. In program we use 100 second for optimization but use 1000 second for simulation beacuse optimization takes too long time and in 100 second beacuse of long delay time we can't see system behavior. \n\\newpage\n \\begin{itemize}\n     \\item ITAE\n     $$\n     K_p = 0.6867, \\quad K_i = 0.0347, \\quad K_d = 15.0543\n     $$\n     \\begin{figure}[H]\n        \\caption{Step responde with PID controller and ITAE cost function}\n        \\centering\n        \\includegraphics[width=11cm]{../Figure/Q2/ITAE.png}\n    \\end{figure}\n    \\item ISE\n    $$\n    K_p =0.5399, \\quad K_i = 0.0446, \\quad  K_d =20.8391\n    $$\n    \\begin{figure}[H]\n       \\caption{Step responde with PID controller and ISE cost function}\n       \\centering\n       \\includegraphics[width=11cm]{../Figure/Q2/ISE.png}\n   \\end{figure}\n   \\item IAE\n   $$\n   K_p = 0.6522, \\quad K_i = 0.0393, \\quad K_d = 17.5028\n   $$\n   \\begin{figure}[H]\n      \\caption{Step responde with PID controller and IAE cost function}\n      \\centering\n      \\includegraphics[width=11cm]{../Figure/Q2/IAE.png}\n  \\end{figure}\n \\end{itemize}\n PID designed with ITAE and IAE cost function work better system is fast with lower overshoot but in ITAE cost function system has better undershoot.", "meta": {"hexsha": "4c9d2f78d9e370bb05fa7b811c5d9c1443ea979b", "size": 1397, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7567567568, "max_line_length": 275, "alphanum_fraction": 0.6621331424, "num_tokens": 465, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511579973931, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5984577059979351}}
{"text": "%% !TeX root = main.tex\n\n\\chapter{CORDIC}\n\\glsresetall\n\\label{chapter:cordic}\n\n\\section{Overview}\n\\label{subsec:CORDIC_Overview}\n\n%\\note{Is this the right place for this? I think it is fine. Just keep the focus on number representation. Do not do anything about pipelining, unrolling, memory partitioning, etc. That will be covered in the DFT chapter (next). Need to integrate the basic ideas of number representation (currently its own separate chapter) into this one. Also need to talk about some of the optimization tricks. E.g., the if/else to selection operator optimization. Can look through students old reports to see other common tricks.}\n\nCORDIC (Coordinate Rotation DIgital Computer) is an efficient technique to calculate trigonometric, hyperbolic, and other mathematical functions. It is a digit-by-digit algorithm that produces one output digit per iteration. This allows us to tune the accuracy of the algorithm to the application requirements; additional iterations produce a more precise output result. Accuracy is another common design evaluation metric alongside performance and resource usage. CORDIC performs simple computations using only addition, subtraction, bit shifting, and table lookups, which are efficient to implement in FPGAs and more generally in hardware. \n\n\\begin{aside}\nThe CORDIC method was developed by Jack Volder in the 1950's as a digital solution to replace an analog resolver for real-time navigation on a B-58 bomber. A resolver measures degrees of rotation. At that time hardware implementations of multiply operations were prohibitively expensive and CPUs had very limited amount of state. Thus the algorithm needed to have low complexity and use simple operations.  Over the years, it has been used in math co-processors \\cite{duprat1993cordic}, linear systems \\cite{ahmed1982highly}, radar signal processing \\cite{andraka1996building}, Fourier transforms \\cite{despain1974fourier}, and many other digital signal processing algorithms. It is now commonly used in FPGA designs. \\VHLS uses a CORDIC core for calculating trigonometric functions and it is a common element of modern FPGA IP core libraries.\n\\end{aside}\n\n%\\note{Talk about what HLS optimizations this chapter will focus on (number representation) and optimization of precision vs area and performance. \n\nThe goal of this chapter is to demonstrate how to create an optimized CORDIC core using high-level synthesis. We are gradually increasing the complexity of the types of hardware cores that we are developing as we progress through the book. The CORDIC method is an iterative algorithm; thus most of the computation is performed within a single \\lstinline|for| loop. The code itself is not all that complex. However, understanding the code such that we can create an optimal hardware implementation requires deep insight. And a good HLS designer must always understand the computation if they wish to create the optimal design. Thus, we spend the early part of this chapter giving the mathematical and computational background of the CORDIC method.  \n\nThe major HLS optimization that we wish to highlight in this chapter is choosing the correct number representation for the variables. As we discuss later in the chapter, the designer must carefully tradeoff between the accuracy of the results, the performance, and resource utilization of the design. 
Number representation is one big factor in this tradeoff -- ``larger'' numbers (i.e., those with more bits) generally provide more precision at the cost of increased resource usage (more FFs and logic blocks) and reduced performance. We provide a background on number representation and arbitrary data types in Chapter \\ref{sec:arbitrary_precision}. \n\n%This chapter is coupled with the project described in Chapter \\ref{chapter:phase_detector} that allows more in-depth experimentation with the tradeoffs between precision (accuracy of the computation), resource usage, and performance. The aim of this chapter is to provide enough insight so that one can perform the exercises from that project, i.e., this chapter and that project are meant to complement each other. The goal of the project is to build a phase detector which uses a CORDIC and a complex matched filter which we have conveniently covered in this and the previous chapter. \n\n\\section{Background}\n\\label{subsec:CORDIC_Basics}\n\nThe core idea behind the CORDIC is to efficiently perform a set of vector rotations in a two-dimensional plane. By overlaying these rotations with some simple control decisions, we can perform a variety of fundamental operations, e.g., trigonometric, hyperbolic, and logarithmic functions, real and complex multiplication, and matrix decompositions and factorizations.  CORDIC has been used in a wide range of applications including signal processing, robotics, communications, and many scientific computations. CORDIC is commonly used in FPGA design since it has a small resource usage. \n\nIn the following, we walk through the process of how a CORDIC performs the sine and cosine of a given input angle $\\theta$. This is done through a series of vector rotations that use only simple operations, which are very efficient to implement in hardware. At the high level, the algorithm works using a series of rotations with the goal of reaching the target input angle $\\theta$. The key innovation that makes this efficient is that the rotations can be done in a manner that requires minimal computation. In particular, we perform the rotations using multiplications by constant powers of two. This translates to simply moving bits around in hardware which is extremely efficient as it does not require any sort of logic.  \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{images/cordic_overview}\n\\caption{ Using the CORDIC to calculate the functions $\\sin \\phi$ and $\\cos \\phi$. Here, the CORDIC starts at the x-axis with a corresponding $0^\\circ$ angle. It then performs four iterative positive/negative rotations with increasingly smaller rotation angles with the ultimate goal of reaching the target angle $\\phi$. Once we finish our rotations we are close to the target angle. We take the corresponding $x$ and $y$ values of the final vector which correspond to  $\\cos \\phi$ and $\\sin \\phi$ (respectively) assuming the length of the vector is $1$. The key to the CORDIC is doing all of this in a computationally efficient manner. }\n\\label{fig:cordic_overview}\n\\end{figure}\n\nFigure \\ref{fig:cordic_overview} provides a high level overview of the CORDIC procedure for calculating $\\cos \\phi$ and $\\sin \\phi$. In this case, we start our initial rotating vector on the x-axis, i.e., at a $0^\\circ$ angle. Then, we perform an iterative series of rotations; in this example we only perform four rotations, but generally this is on the order of 40 rotations. 
Each subsequent rotation uses an increasingly smaller angle, which means that every iteration adds a bit more precision to the output value. At each iteration, we decide between doing a positive or negative rotation by that smaller angle. The angles that we rotate by are fixed a priori; thus, we can easily store their values in a small memory and keep a running sum of the cumulative angle that we have rotated so far. If this cumulative angle is larger than our target angle $\\phi$, then we perform a negative rotation. If it is smaller, then the rotation is positive. Once we have completed a sufficient number of rotations, we can determine $\\cos \\phi$ and $\\sin \\phi$ by directly reading the $x$ and $y$ values from the final rotated vector. If our final vector has a magnitude of $1$, then $x = \\cos \\phi$ and $y = \\sin \\phi$.\n\nWe start with some terminology. The goal is to refresh your memory about some basic trigonometric and vector concepts. Feel free to skim this if it is familiar. But keep in mind that one of the most important aspects of creating an efficient hardware design is to truly understand the application; only then can the designer effectively utilize the optimization directives and perform the code refactoring required to get the most efficient designs. \n\nThe fundamental goal of the CORDIC algorithm is to perform a series of rotations in an efficient manner. Let us start by thinking about how to generally perform a rotation. In two dimensions, the rotation matrix is:\n\\begin{equation}\nR(\\theta) = \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}\n\\label{eq:rotation_matrix}\n\\end{equation}\nThe CORDIC uses an iterative algorithm that rotates a vector $v$ to some target angle, which depends on the function that the CORDIC is performing. One rotation is a matrix-vector multiplication of the form $v_{i} = R_{i} \\cdot v_{i-1}$. Thus, each iteration of the CORDIC performs one rotation via the matrix-vector multiply:\n\\begin{equation}\n\\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}\\begin{bmatrix}\nx_{i-1} \\\\\ny_{i-1} \\\\\n\\end{bmatrix}\n= \\begin{bmatrix}\nx_i \\\\\ny_i \\\\\n\\end{bmatrix} \n\\end{equation}\nWriting out the linear equations, the coordinates of the newly rotated vector are: \n\\begin{equation}\nx_i = x_{i-1} \\cos \\theta - y_{i-1} \\sin \\theta\n\\end{equation} and \n\\begin{equation}\ny_i = x_{i-1} \\sin \\theta + y_{i-1} \\cos \\theta\n\\end{equation}\nThis is precisely the operation that we need to simplify. We want to perform these rotations without having to perform any multiplications.\n\nConsider first a $90^\\circ$ rotation. 
In this case, the rotation matrix is:\n\\begin{equation}\nR(90^\\circ) = \\begin{bmatrix}\n\\cos 90^\\circ & -\\sin 90^\\circ \\\\\n\\sin 90^\\circ & \\cos 90^\\circ \\\\\n\\end{bmatrix} = \\begin{bmatrix}\n0 & -1 \\\\\n1 & 0 \\\\\n\\end{bmatrix} \n\\end{equation} and thus we only have to perform the operations:\n\\begin{align}\nx_i &= x_{i-1} \\cos 90^\\circ - y_{i-1} \\sin 90^\\circ \\nonumber \\\\\n& = x_{i-1} \\cdot 0 - y_{i-1} \\cdot 1 \\nonumber \\\\\n& = -y_{i-1}\n\\end{align}\nand \n\\begin{align}\ny_i &= x_{i-1} \\sin 90^\\circ + y_{i-1} \\cos 90^\\circ \\nonumber \\\\\n& = x_{i-1} \\cdot 1 + y_{i-1} \\cdot 0 \\nonumber \\\\\n&= x_{i-1}\n\\end{align}\nPutting this all together, we get\n\\begin{equation}\n\\begin{bmatrix}\n0 & -1 \\\\\n1 & 0 \\\\\n\\end{bmatrix}\\begin{bmatrix}\nx \\\\\ny \\\\\n\\end{bmatrix}\n= \\begin{bmatrix}\n-y \\\\\nx \\\\\n\\end{bmatrix} \n\\end{equation}\nYou can see that this requires a very minimal amount of calculation; the rotated vector simply negates the $y$ value, and then swaps the $x$ and $y$ values. A two's complement negation requires the hardware equivalent of an adder. Thus, we have achieved our goal of performing a $90^\\circ$ rotation efficiently.\n\n\\begin{exercise}\nWhat if you wanted to rotate by $-90^\\circ$? What is the rotation matrix $R(-90^\\circ)$? What type of calculation is required for this rotation? How would one design the most efficient circuit that could perform both a positive and a negative rotation by $90^\\circ$, i.e., where the direction of rotation is an input to the circuit?\n\\end{exercise}\n\nWhile it is great that we can rotate by $\\pm 90^\\circ$, we also need to rotate by smaller angles if we wish to have any sort of good resolution in moving to the target angle. Perhaps the next natural angle that we might wish to rotate by would be $\\pm 45^\\circ$. Using the rotation matrix from Equation \\ref{eq:rotation_matrix}, we get\n\\begin{equation}\nR(45^\\circ) = \\begin{bmatrix}\n\\cos 45^\\circ & -\\sin 45^\\circ \\\\\n\\sin 45^\\circ & \\cos 45^\\circ \\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\\sqrt 2/2 & -\\sqrt 2/2 \\\\\n\\sqrt 2/2 & \\sqrt 2/2 \\\\\n\\end{bmatrix}\n\\end{equation} Writing out the computation for this rotation, we get\n\\begin{align}\nx_i &= x_{i-1} \\cos 45^\\circ - y_{i-1} \\sin 45^\\circ \\nonumber \\\\\n& = x_{i-1} \\cdot \\sqrt 2/2 - y_{i-1} \\cdot \\sqrt 2/2\n\\end{align} and \n\\begin{align}\ny_i &= x_{i-1} \\sin 45^\\circ + y_{i-1} \\cos 45^\\circ \\nonumber \\\\\n&= x_{i-1} \\cdot \\sqrt 2/2 + y_{i-1} \\cdot \\sqrt 2/2\n\\end{align} which when put back into matrix vector notation is\n\\begin{equation}\n\\begin{bmatrix}\n\\sqrt 2/2 & -\\sqrt 2/2 \\\\\n\\sqrt 2/2 & \\sqrt 2/2 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nx \\\\\ny \\\\\n\\end{bmatrix}\n= \\begin{bmatrix}\n\\sqrt 2/2 x - \\sqrt 2/2 y \\\\\n\\sqrt 2/2 x + \\sqrt 2/2 y \\\\\n\\end{bmatrix} \n\\end{equation}\nThis is certainly not as efficient a computation as rotating by $\\pm 90^\\circ$. The $\\pm 90^\\circ$ rotation was ideal because the multiplications were by very simple constants (in this case $0$, $1$, and $-1$). The key to the CORDIC is doing these rotations in an efficient manner, i.e., defining the rotation matrices in a way that makes their multiplications trivial to compute. 
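For contrast, both rotations we just derived can be written directly in code. The following is only a minimal sketch (the function names and the use of \\lstinline{float} are our own illustrative choices, not from the book's listings): the $\\pm 90^\\circ$ rotation needs nothing but a negation and a swap, while the exact $45^\\circ$ rotation needs true constant multiplications.\n\\begin{lstlisting}\n// Rotate (x, y) by +90 degrees: negate and swap -- no multipliers needed.\nvoid rotate90(float &x, float &y) {\n  float tmp = x;\n  x = -y;  // x' = -y\n  y = tmp; // y' = x\n}\n\n// Rotate (x, y) by +45 degrees exactly: requires real multiplications.\nvoid rotate45(float &x, float &y) {\n  const float c = 0.70710678f; // sqrt(2)/2 = cos(45) = sin(45)\n  float xn = c * x - c * y;\n  float yn = c * x + c * y;\n  x = xn;\n  y = yn;\n}\n\\end{lstlisting}\n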
That is, we wish to be more like the previous $\\pm 90^\\circ$ case and less like the much more difficult computation required for the $\\pm 45^\\circ$ rotation that we just described.\n\nWhat if we ``forced'' the rotation matrix to contain only constants that were easy to multiply? For example, a multiplication by any power of two turns into a shift operation. If we set the constants in the rotation matrix to be powers of two, we could very easily perform rotations without multiplication. This is the key idea behind the CORDIC -- finding rotations that are very efficient to compute while minimizing any side effects. We will discuss these ``side effects'' in more detail, but there is an engineering decision being made here. In order to get efficient computation, we have to give up something; in this case we have to deal with the fact that the rotation also performs scaling, i.e., it changes the magnitude of the rotated vector -- more on that later.\n\nTo further explore the idea of ``simple'' rotation matrices, consider the matrix \n\\begin{equation}\nR' = \\begin{bmatrix}\n1 & -1 \\\\\n1 & 1 \\\\\n\\end{bmatrix} \n\\end{equation} with the corresponding computation for the transformation\n\\begin{equation}\nx_i = x_{i-1} - y_{i-1}\n\\end{equation} and \n\\begin{equation}\ny_i = x_{i-1} + y_{i-1}\n\\end{equation} with the matrix vector form of\n\\begin{equation}\n\\begin{bmatrix}\n1 & -1 \\\\ \n1 & 1 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nx \\\\\ny \\\\\n\\end{bmatrix}\n= \\begin{bmatrix}\nx - y \\\\\nx + y \\\\\n\\end{bmatrix} \n\\end{equation}\n\nThis is certainly easy to compute and does not require any ``difficult'' multiplications. But what is the consequence of this operation? It turns out that this performs a rotation by $45^\\circ$, which is perfect; we now have an efficient way to perform a $45^\\circ$ rotation. But this transform also scales the vector by a factor of $\\sqrt 2$. The square root of the determinant of this matrix tells us how much the transformation scales the vector, i.e., how the length of the vector has changed. The determinant of this matrix is $1 \\cdot 1 - (-1) \\cdot 1 = 2$. Thus, this operation rotates by $45^\\circ$ and scales by $\\sqrt 2$. This is the tradeoff that the CORDIC makes; we can make the rotation easy to compute, but it has the side effect of scaling the length of the vector. This may or may not be a problem depending on the application. But for now, we put aside the scaling issue and focus on how to generalize the idea of performing rotations that are computationally efficient.\n\nNow we generalize the notion of performing efficient matrix rotations, i.e., performing rotations using only addition/subtraction and multiplication by a power of two (i.e., shift operations). 
Consider again the rotation matrix \n\n\\begin{equation}\nR_{i}(\\theta) = \\begin{bmatrix} \\cos(\\theta_{i}) & -\\sin(\\theta_{i}) \\\\ \\sin(\\theta_{i}) & \\cos(\\theta_{i})\\end{bmatrix}\n\\end{equation}\nBy using the following trigonometric identities,\n\\begin{equation}\n\\cos(\\theta_{i}) =  {\\frac{1}{\\sqrt{1 + \\tan^2(\\theta_{i})}}}\n%\\sin(\\gamma_{i}) & =  \\frac{\\tan(\\gamma_{i})}{\\sqrt{1 + \\tan^2(\\gamma_{i})}} \\end{align}\n\\end{equation}\n\\begin{equation}\n\\sin(\\theta_{i})  =  \\frac{\\tan(\\theta_{i})}{\\sqrt{1 + \\tan^2(\\theta_{i})}}\n\\end{equation}\nwe can rewrite the rotation matrix as\n\\begin{equation}\nR_i = \\frac{1}{\\sqrt{1 + \\tan^2(\\theta_i)}} \\begin{bmatrix} 1 & -\\tan(\\theta_i) \\\\ \\tan(\\theta_i) & 1 \\end{bmatrix}\n\\end{equation}\nIf we restrict the values of $\\tan(\\theta_i)$ to be a multiplication by a factor of two, the rotation can be performed using shifts (for the multiplication) and additions. More specifically, we use let $\\tan(\\theta_i) = 2^{-i}$. The rotation then becomes \n\\begin{equation}\nv_i = K_i \\begin{bmatrix} 1 & - 2^{-i} \\\\  2^{-i} & 1 \\end{bmatrix} \\begin{bmatrix} x_{i-1} \\\\ y_{i-1} \\end{bmatrix}\n\\end{equation}\nwhere\n\\begin{equation}\nK_i = \\frac{1}{\\sqrt{1 + 2^{-2i}}}\n\\end{equation}\n\nA few things to note here. The $2^{-i}$ is equivalent to a right shift by $i$ bits, i.e., a division by a power of two. This is essentially just a simple rewiring which does not require any sort of logical resources, i.e., it is essentially ``free'' to compute in hardware. This is a huge benefit, but it does not come without some drawbacks. First, we are limited to rotate by angles $\\theta$ such that $\\tan(\\theta_i) = 2^{-i}$. We will show that this is not much of a problem. Second, we are only showing rotation in one direction; the CORDIC requires the ability to rotation by $\\pm \\theta$. This is simple to correct by adding in $\\sigma$ which can have a value of $1$ or $-1$, which corresponds to performing a positive or negative rotation. We can have a different $\\sigma_i$ at every iteration/rotation. Thus the rotation operation generalizes to\n\\begin{equation}\nv_i = K_i \\begin{bmatrix} 1 & -\\sigma_i 2^{-i} \\\\ \\sigma_i 2^{-i} & 1 \\end{bmatrix} \\begin{bmatrix} x_{i-1} \\\\ y_{i-1} \\end{bmatrix}\n\\end{equation}\nFinally, the rotation requires a multiplication by $K_i$.  $K_i$ is typically ignored in the iterative process and then adjusted for after the series of rotations is completed. The cumulative scaling factor is\n\\begin{equation}\nK(n) = \\prod_{i=0}^{n-1} K_i  = \\prod_{i=0}^{n-1}\\frac {1}{\\sqrt{1 + 2^{-2i}}}\n\\end{equation} and  \n\\begin{equation}\nK = \\lim_{n \\to \\infty}K(n) \\approx 0.6072529350088812561694\n\\end{equation}\nThe scaling factors for different iterations can be calculated in advance and stored in a table. If we always perform a fixed number of rotations, this is simply one constant. This correction could also be made in advance by scaling $v_0$ appropriately before performing the rotations. Sometimes it is ok to ignore this scaling, which results in a processing gain\n\\begin{equation}\nA = \\frac{1}{K} = \\lim_{n \\to \\infty} \\prod_{i=0}^{n-1} {\\sqrt{1 + 2^{-2i}}}\\approx 1.64676025812107\n\\label{eq:cordicgain}\n\\end{equation}\n\nAt each iteration, we need to know the angle $\\theta_i$ of the rotation that was just performed. This is derived as $\\theta_i = \\arctan 2^{-i}$. 
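To make the iteration concrete, the following sketch shows one generalized CORDIC rotation step in HLS-style C++. The type width, the names, and the four-entry angle table here are illustrative assumptions on our part; the book's actual implementation appears later in Figure \\ref{fig:cordic_code}.\n\\begin{lstlisting}\n#include \"ap_fixed.h\"\n\ntypedef ap_fixed<16,2> FIXED; // illustrative width: 16 bits, 2 integer bits\n\n// theta_i = arctan(2^-i), precomputed offline (radians);\n// table truncated to four entries for brevity.\nconst FIXED rot_angle[4] = {0.785398, 0.463648, 0.244979, 0.124355};\n\n// One rotation: the multiplications by 2^-i become right shifts.\nvoid cordic_step(int i, int sigma, FIXED &x, FIXED &y, FIXED &theta) {\n  FIXED x_shift = x >> i; // x * 2^-i: just wiring in hardware\n  FIXED y_shift = y >> i; // y * 2^-i\n  if (sigma == 1) {       // positive rotation\n    x -= y_shift;\n    y += x_shift;\n    theta += rot_angle[i];\n  } else {                // negative rotation\n    x += y_shift;\n    y -= x_shift;\n    theta -= rot_angle[i];\n  }\n  // Note: the K_i scaling factor is deliberately ignored here,\n  // exactly as described in the text above.\n}\n\\end{lstlisting}\n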
We can precompute these values for each value of $i$, store them in an on-chip memory, and use them as a lookup table. Additionally, we have a control decision that determines whether the rotation is clockwise or counterclockwise, i.e., we must determine if $\\sigma$ is $1$ or $-1$. This decision depends on the desired CORDIC mode. For example, for calculating $\\cos \\phi$ and $\\sin \\phi$, we keep a running sum of the cumulative angle that we have rotated so far. We compare this to the target angle $\\phi$ and perform a positive rotation if our current angle is less than $\\phi$ and a negative rotation if our current angle is greater than $\\phi$. \n\nTable \\ref{table:cordic} provides the statistics for the first seven iterations of a CORDIC. The first row is the ``zeroth'' rotation (i.e., when $i=0$), which is a $45^{\\circ}$ rotation. It performs a scaling of the vector by a factor of $1.41421$. The second row does a rotation by $2^{-1} = 0.5$. This results in a rotation by $\\theta = \\arctan 2^{-1} = 26.565^{\\circ}$. This rotation scales the vector by $1.11803$. The CORDIC gain is the overall scaling of the vector. In this case, it is the scaling factor of the first two rotations, i.e., $1.58114 = 1.41421 \\cdot 1.11803$. This process continues by incrementing $i$, which results in smaller and smaller rotating angles and scaling factors. Note that the CORDIC gain starts to stabilize to $\\approx 1.64676025812107$ as described in Equation \\ref{eq:cordicgain}. Also, note that as the angles get smaller, they have less effect on the most significant digits. \n\n\\begin{exercise}\nDescribe the effect of the $i$th iteration on the precision of the results. That is, which bits does it change? How do more iterations change the precision of the final result, i.e., how do the values of $\\sin \\phi$ and $\\cos \\phi$ change as the CORDIC performs more iterations?\n\\end{exercise}\n\n\\begin{table}[htbp]\n\\caption{The rotating angle, scaling factor, and CORDIC gain for the first seven iterations of a CORDIC. Note that the angle decreases by approximately half each time. The scaling factor indicates how much the length of the vector increases during that rotation. The CORDIC gain is the overall increase in the length of the vector, which is the product of all of the scaling factors for the current and previous rotations.}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\ni & $2^{-i}$ \t& Rotating Angle  \t& Scaling Factor \t& CORDIC Gain \t\\\\ \\hline \\hline\n0 & 1.0 \t\t& $45.000^{\\circ}$\t& 1.41421\t\t\t& 1.41421\t\t\\\\ \\hline\n1 & 0.5 \t\t& $26.565^{\\circ}$\t& 1.11803\t\t\t& 1.58114\t\t\\\\ \\hline\n2 & 0.25 \t\t& $14.036^{\\circ}$\t& 1.03078\t\t\t& 1.62980\t\t\\\\ \\hline\n3 & 0.125 \t\t& $7.125^{\\circ}$\t& 1.00778\t\t\t& 1.64248\t\t\\\\ \\hline\n4 & 0.0625 \t& $3.576^{\\circ}$\t& 1.00195\t\t\t& 1.64569\t\t\\\\ \\hline\n5 & 0.03125 \t& $1.790^{\\circ}$\t& 1.00049\t\t\t& 1.64649\t\t\\\\ \\hline\n6 & 0.015625 \t& $0.895^{\\circ}$\t& 1.00012\t\t\t& 1.64669\t\t\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\label{table:cordic}\n\\end{table}%\n\n\\section{Calculating Sine and Cosine}\n\nNow we describe more precisely our running example of using a CORDIC to calculate the sine and cosine of a given angle $\\phi$. In order to do this, we start with a vector on the positive $x$-axis (i.e., with an initial angle of $0^{\\circ}$) and perform a series of rotations until we are approximately at the given angle $\\phi$. 
Then we can simply read the $x$ and $y$ values of the resulting rotated vector to get the values $\\cos \\phi$ and $\\sin \\phi$, respectively. This assumes that the amplitude of the final vector is equal to $1$, which as you will see is not too difficult to achieve.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{images/sin_cos_cordic}\n\\caption{Calculating $\\cos 60^{\\circ}$ and $\\sin 60^{\\circ}$ using the CORDIC algorithm. Five rotations are performed using incrementally larger $i$ values (0,1,2,3,4). The result is a vector with an angle of $61.078^{\\circ}$. The corresponding $x$ and $y$ values of that vector give the approximate desired cosine and sine values. }\n\\label{fig:cordic_rotations}\n\\end{figure}\n\nLet us illustrate this with an example: calculating $ \\cos 60^{\\circ}$ and $\\sin 60^{\\circ}$, i.e., $\\phi = 60^{\\circ}$. This process is depicted graphically in Figure \\ref{fig:cordic_rotations}. Here we perform five rotations in order to give a final vector with an angle approximately equal to $60^{\\circ}$. Our initial vector has a $0^{\\circ}$ angle, i.e., it starts on the positive $x$-axis. The first rotation corresponds to $i=0$, which has a $45^{\\circ}$ angle (see Table \\ref{table:cordic}). Since we want to get to $60^{\\circ}$, we rotate in the positive direction. The resulting rotated vector has a $45^{\\circ}$ angle; also note that its amplitude is scaled by approximately 1.414. Now, we move on to $i=1$. As we wish to get to a $60^{\\circ}$ angle, we rotate again in the positive direction. This rotation results in a vector that has an angle of $45^{\\circ} + 26.565^{\\circ} = 71.565^{\\circ}$ and is scaled by a factor of 1.118; the total scaling resulting from the two rotations is $1.414 \\times 1.118 = 1.581$. This is the CORDIC gain. Moving on to $i=2$, we now determine that our current angle is larger than the $60^{\\circ}$ target, so we rotate by a negative angle, resulting in a vector with a $57.529^{\\circ}$ angle and a cumulative scaling of $1.630$. This process continues by rotating the vector with incrementally larger $i$ values, resulting in smaller and smaller rotations that will eventually (approximately) reach the desired angle. Also, note that the CORDIC gain begins to stabilize as the number of rotations increases.\n\nAfter we perform a sufficient number of rotations, which is a function of the desired accuracy, we get a vector with an angle close to the desired input angle. The $x$ and $y$ values of that vector correspond to approximately $A_R \\cos 60^{\\circ}$ and $A_R \\sin 60^{\\circ}$, which is exactly what we want if $A_R = 1$. Since we typically know a priori the number of rotations that we will perform, we can ensure that $A_R = 1$ by setting the magnitude of the initial vector to the reciprocal of the CORDIC gain. In the case of our example, assuming that we perform five rotations as shown in Figure \\ref{fig:cordic_rotations}, this value is $1.64569^{-1} = 0.60765$ (the reciprocal of the CORDIC gain after the five rotations $i = 0, 1, \\dots, 4$; see Table \\ref{table:cordic}). We can easily set the amplitude of the initial vector by starting at the vector $(0.60765, 0)$. \n\n\\begin{exercise}\nHow would the answer change if we performed one more rotation? How about two (three, four, etc.) more rotations? What is the accuracy (e.g., compared to a MATLAB implementation) as we perform more rotations? 
How many rotations are sufficient in the general case?\n\\end{exercise}\n\n\\begin{exercise}\nIs it possible to get worse accuracy by performing more rotations? Provide an example of when this would occur.\n\\end{exercise}\n\n\\begin{figure}\n\\lstinputlisting{examples/cordic.cpp}\n\\caption{CORDIC code implementing the sine and cosine of a given angle.}\n\\label{fig:cordic_code}\n\\end{figure}\n\nFigure \\ref{fig:cordic_code} provides code that implements sine and cosine calculation using the CORDIC algorithm. It takes as input a target angle, and outputs the sine and cosine values corresponding to that angle. The code uses an array \\lstinline{cordic_phase} as a lookup table that holds the angle of rotation for each iteration. This corresponds to the values in the ``Rotating Angle'' column in Table \\ref{table:cordic}. We assume that the \\lstinline{cordic.h} file defines the different data types (i.e., \\lstinline{COS_SIN_TYPE} and \\lstinline{THETA_TYPE}) and sets \\lstinline{NUM_ITERATIONS} to some constant value. The data types can be changed to different fixed or floating point types, and \\lstinline{NUM_ITERATIONS} can be set depending on the desired accuracy, area, and throughput. \n\nThis code is close to a ``software'' version. It can be optimized in many ways to increase its performance and reduce its area. We will discuss how to optimize this code later in the chapter.\n\n\\section{Cartesian to Polar Conversion}\n\nWith some modifications, the CORDIC can perform other functions. For example, it can convert between Cartesian and polar representations; we describe that in more detail in this section. The CORDIC can also compute many other functions, which we leave as an exercise for the reader. \n\nA two-dimensional vector $v$ can be represented using a Cartesian coordinate system $(x, y)$ or in the polar coordinate system $(r, \\theta)$, where $r$ is the radial coordinate (length of the vector) and $\\theta$ is the angular coordinate. Both of these coordinate systems have their benefits and drawbacks. For example, if we want to do a rotation, then it is easier to think about the polar form, while a linear transform is more easily described using the Cartesian system. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.55\\textwidth]{images/rotation}\n\\caption{ The figure shows a two-dimensional plane and a vector represented in both the Cartesian form $(x,y)$ and the polar form $(r, \\theta)$ and provides the relationship between those two coordinate systems.  }\n\\label{fig:cordic_polar}\n\\end{figure}\n\nThe relationship between these coordinates is shown in the following equations:\n\n\\begin{equation}\nx = r \\cos \\theta\n\\end{equation}\n\n\\begin{equation}\ny = r \\sin \\theta\n\\end{equation}\n\n\\begin{equation}\nr =\\sqrt{x^2 + y^2}\n\\end{equation}\n\n\\begin{equation}\n\\theta = \\operatorname{atan2}(y, x)\n\\end{equation}\nwhere atan2 is a common variation on the arctangent function defined as\n\\begin{equation}\n\\operatorname{atan2}(y, x) =\n\\begin{cases}\n\\arctan(\\frac{y}{x}) & \\mbox{if } x > 0\\\\\n\\arctan(\\frac{y}{x}) + \\pi & \\mbox{if } x < 0 \\mbox{ and } y \\ge 0\\\\\n\\arctan(\\frac{y}{x}) - \\pi & \\mbox{if } x < 0 \\mbox{ and } y < 0\\\\\n\\frac{\\pi}{2} & \\mbox{if } x = 0 \\mbox{ and } y > 0\\\\\n-\\frac{\\pi}{2} & \\mbox{if } x = 0 \\mbox{ and } y < 0\\\\\n\\text{undefined} & \\mbox{if } x = 0 \\mbox{ and } y = 0\n\\end{cases}\n\\end{equation}\n\nThis provides a way to translate between the two coordinate systems. 
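As a point of reference, the Cartesian-to-polar direction can be written directly in software using the C math library. This is only a sketch for comparison (the function name is our own, not from the book's listings):\n\\begin{lstlisting}\n#include <cmath>\n\n// Naive Cartesian-to-polar conversion using library calls.\nvoid cart_to_polar(float x, float y, float &r, float &theta) {\n  r = std::sqrt(x * x + y * y); // radial coordinate\n  theta = std::atan2(y, x);     // angular coordinate, in radians\n}\n\\end{lstlisting}\n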
However, these operations are not easy to implement in hardware. For example, sine, cosine, square root, and arctangent are not simple operations, and they require a significant amount of resources. But we can use the CORDIC to perform these operations using a series of simple iterative rotation operations. \n\nGiven a number in Cartesian form $(x,y)$, we can calculate its radial and angular coordinates (i.e., convert it to polar form) using the CORDIC. To do this, we rotate the given Cartesian number to $0^{\\circ}$. Once this rotation is complete, the radial coordinate (the amplitude) is the $x$ value of the final rotated vector. To determine the angular coordinate (the phase), we simply keep track of the cumulative angle of the rotations that the CORDIC performs. The angles of the rotating vector (for $i = 0,1,2,3, \\dots$) are known and can be stored in a lookup table, as done for calculating sine/cosine. Therefore, we simply need to keep track of the total rotation angle by performing an addition or subtraction of these angles, depending on the direction of rotation.\n\nThe algorithm is similar to that of calculating the sine and cosine of a given angle. We perform a set of rotations with increasing values of $i$ such that the final vector resides on (close to) the positive $x$-axis (i.e., at an angle of $0^{\\circ}$). This can be done using positive or negative rotations, the choice of which is predicated on the $y$ value of the vector whose amplitude and phase we wish to determine. \n\nThe first step of the algorithm performs a rotation to get the initial vector into either Quadrant I or IV. This rotates the vector by $\\pm 90^{\\circ}$ depending on the sign of the $y$ value of the initial vector. If the $y$ value is positive, we know that we are in either Quadrant I or II. A rotation by $-90^{\\circ}$ will put us into Quadrant IV or I, respectively. Once we are in either of those quadrants, we can guarantee that we will be able to asymptotically approach the target $0^{\\circ}$ angle. If we are in Quadrant III or IV, the $y$ value of the initial vector will be negative, and a rotation by $90^{\\circ}$ will put us into Quadrant IV or I, respectively. Recall that a $\\pm 90^{\\circ}$ rotation is done by negating either the $x$ or $y$ value of the vector and then swapping those values. The concept of these $\\pm 90^{\\circ}$ rotations is shown in Figure \\ref{fig:rotate90}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{images/cordic_magnitude_angle}\n\\caption{The first step in performing a Cartesian to polar conversion is to perform a rotation by $\\pm 90^{\\circ}$ in order to get the initial vector into either Quadrant I or IV. Once it is in either of these two quadrants, subsequent rotations will allow the vector to reach a final angle of $0^{\\circ}$. At this point, the radial value of the initial vector is the $x$ value of the final rotated vector, and the phase of the initial vector is the summation of all the angles that the CORDIC performed. Parts a) and b) show an example where the initial $y$ value is positive, which means that the vector resides in either Quadrant I or II. Rotating by $-90^{\\circ}$ puts the vector into the appropriate quadrant. Parts c) and d) show a similar situation when the $y$ value of the initial vector is negative. Here we wish to rotate by $90^{\\circ}$ to get the vector into Quadrant I or IV. 
}\n\\label{fig:rotate90}\n\\end{figure}\n\nThere is an issue with the final radial value of the rotated vector; its magnitude is not the same as the initial magnitude before the rotations; it is scaled by the CORDIC gain. One could calculate the precise radial value of the vector by multiplying by the reciprocal of the appropriate CORDIC gain (approximately $1/1.647 = 0.607$)\\footnote{Recall that the CORDIC gain is a function of the number of rotations, as shown in Table \\ref{table:cordic}.}. However, this defeats the purpose of having a CORDIC, which eliminates the need for costly multiplications. Unfortunately, this particular multiplication cannot be performed trivially using shifts and adds. Fortunately, this factor is often not important. For example, in amplitude shift keying, a modulation scheme used in wireless communications, only the relative magnitude is needed. In other cases, this amplitude gain can be compensated for by other parts of the system.\n\n\\section{Number Representation}\n\\label{sec:number_representation}\n\nThe \\lstinline{cordic} function currently uses common types for the variables. 
For example, the variable \\lstinline|sigma| is defined as an \\lstinline|int| and other variables are defined to use custom data types (e.g., \\lstinline{THETA_TYPE} and \\lstinline{COS_SIN_TYPE}). It is sometimes convenient to think that such types represent arbitrary numbers, when in actuality they don't. In practice, the digital representation of numbers has a huge impact on the complexity of the logic needed to compute with those numbers. In many cases, HLS tools are able to optimize the representation of each value to simplify the generated hardware. For instance, in Figure \\ref{fig:cordic_code}, the variable \\lstinline{sigma} is restricted to be either \\lstinline{1} or \\lstinline{-1}. Even though the variable is declared as an \\lstinline{int} type of at least 32 bits, many fewer bits can be used to implement the variable without changing the behavior of the program. In other cases, particularly function inputs, memories, and variables that appear in recurrences, the representation cannot be automatically optimized. In these cases, modifying the code to use smaller datatypes is a key optimization to avoid unnecessary resource usage.\n\nAlthough reducing the size of variables is generally a good idea, this optimization can change the behavior of the program. A data type with fewer bits cannot express as much information as a data type with more bits, and no finite binary representation can represent all real numbers with infinite accuracy. Fortunately, as designers we can pick numeric representations that are tuned to the accuracy requirements of particular applications and trade off among accuracy, resource usage, and performance.\n\nBefore discussing these number representation optimizations further using our \\lstinline{cordic} function, we first give a background on number representation. We provide the basics, as this is important in understanding the data-type-specific representations provided by \\VHLS. The next section starts with a fundamental background on number representation, and then proceeds to discuss the arbitrary precision variables available in \\VHLS.\n\n\\subsection{Binary and Hexadecimal Numbers}\n\nComputers and FPGAs typically represent numbers using \\term{binary representation}, which enables numbers to be efficiently represented using on-off signals called binary digits, or simply \\term{bits}. Binary numbers work in most ways like normal decimal numbers, but they can often be the cause of confusing errors if you are not familiar with how they work. This is particularly true in many embedded systems and FPGAs, where minimizing the number of bits used to represent variables can greatly increase the overall performance or efficiency of a system. In this section, we will summarize binary arithmetic and the basic ways that computers represent numbers. \n\nMany readers may already be familiar with these ideas. In that case, you may skim these sections or skip them entirely. We do suggest that you look at Section \\ref{sec:arbitrary_precision}, as it provides information specific to \\VHLS on how to declare arbitrary precision data types. This is a key idea for optimizing the number representation of the \\lstinline{cordic} function and any HLS code. 
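\n\nAs a preview of where we are headed, the data types in \\lstinline{cordic.h} might be declared along the following lines. This is a sketch under stated assumptions: the book does not show the header's contents, so the widths chosen here are purely illustrative.\n\\begin{lstlisting}\n#include \"ap_fixed.h\"\n\n#define NUM_ITERATIONS 32\n\n// Angles in radians: a few integer bits, the rest fractional.\ntypedef ap_fixed<32,4> THETA_TYPE;\n\n// Sine/cosine results lie in [-1, 1]: 2 integer bits suffice,\n// even for intermediate values scaled by the CORDIC gain.\ntypedef ap_fixed<32,2> COS_SIN_TYPE;\n\\end{lstlisting}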
\n\nWhen we write a normal integer, such as $4062$, what we really mean is implicitly $(4 * 1000) + (0 * 100) + (6* 10) + (2 * 1) = 4062$, or written in columns:\n\\begin{tabularpad}{*{4}{c}|l}\n$10^3$ & $10^2$ & $10^1$ & $10^0$ & unsigned \\\\\n\\hline \n4&0&6&2 & = 4062 \\\\\n\\end{tabularpad}\n\nA binary number is similar, except instead of using digits from zero to nine and powers of ten, we use digits from zero to one and powers of 2:\n\\begin{tabularpad}{*{4}{c}|l}\n$2^3$ & $2^2$ & $2^1$ & $2^0$ & unsigned \\\\\n\\hline \n1&0&1&1 &= 11\\\\\n\\end{tabularpad}\nsince $(1*8) + (0*4) + (1*2) + (1*1) = 11$. To avoid ambiguity, binary numbers are often prefixed with \"0b\". This makes it obvious that \\lstinline|0b1011| is the decimal number 11 and not the number 1011. The bit associated with the highest power of two is the \\term{most significant bit}, and the bit associated with the lowest power of two is the \\term{least significant bit}. \n\nHexadecimal numbers use digits representing values from zero to fifteen and powers of 16:\n\\begin{tabularpad}{*{4}{c}|l}\n$16^3$ & $16^2$ & $16^1$ & $16^0$ & unsigned\\\\\n\\hline \n8&0&3&15 &= 32831 \\\\\n\\end{tabularpad}\nIn order to avoid ambiguity, the digits from 10 to 15 are represented by the letters \"A\" through \"F\", and hexadecimal numbers are prefixed with \"0x\". So the number above would normally be written in C code as \\lstinline|0x803F|.\n\nNote that binary representation can also represent fractional numbers, usually called \\term{fixed-point} numbers, by simply extending the pattern to include negative exponents, so that \"0b1011.01\" is equivalent to:\n\\begin{tabularpad}{*{6}{c}|l}\n$2^3$ & $2^2$ & $2^1$ & $2^0$ & $2^{-1}$ & $2^{-2}$ & unsigned \\\\\n\\hline \n1&0&1&1&0&1&= 11.25 \\\\\n\\end{tabularpad}\nsince $8 + 2 + 1 + \\frac{1}{4} = 11.25$. Unfortunately, the C standard doesn't provide a way of specifying constants in binary representation, although gcc and many other compilers allow integer constants (without a decimal point) to be specified with the \"0b\" prefix. The C99 standard does provide a way to describe floating-point constants with hexadecimal digits and a decimal exponent, however. Note that the decimal exponent is required, even if it is zero.\n\\begin{lstlisting}\nfloat p1 = 0xB.4p0; // Initialize p1 to \"11.25\"\nfloat p2 = 0xB4p-4; // Initialize p2 to \"11.25\"\n\\end{lstlisting}\n\nNotice that in general, it is only necessary to write the non-zero digits, and any digits not shown can be assumed to be zero without changing the represented value of an unsigned number. As a result, it is easy to represent the same value with more digits: simply add as many zero digits as necessary. This process is often called \\term{zero-extension}. Note that each additional digit increases the number of values that can be represented. Adding an additional bit to a binary number doubles the number of values that can be represented, while an additional hexadecimal digit increases the number of values by a factor of 16.\n\\begin{tabularpad}{*{12}{c}|l}\n$2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$ & $2^{-1}$ & $2^{-2}$ & $2^{-3}$ & $2^{-4} $ & unsigned \\\\\n\\hline \n0&0&0&0&1&0&1&1&0&1&0&0&= 11.25 \\\\\n\\end{tabularpad}\n\n\\begin{aside}\nNote that it is possible to have any number of bits in a binary number, not just 8, 16, or 32. 
SystemC \\cite{systemc}, for instance, defines several template classes for handling arbitrary precision integers and fixed-point numbers (including \\lstinline|sc_int<>|, \\lstinline|sc_uint<>|, \\lstinline|sc_bigint<>|, \\lstinline|sc_ubigint<>|, \\lstinline|sc_fixed<>|, and \\lstinline|sc_ufixed<>|). These classes are commonly supported by HLS tools, although they were originally defined for system modeling and not necessarily for synthesis. \\VHLS, for instance, includes similar template classes (\\lstinline|ap_int<>|, \\lstinline|ap_uint<>|, \\lstinline|ap_fixed<>|, and \\lstinline|ap_ufixed<>|) that typically work better than the SystemC template classes, both in simulation and synthesis. \\end{aside}\n\n\\begin{exercise}\nArbitrary precision numbers are even well defined (although not terribly useful) with zero digits. List all the numbers that are representable with zero digits.\n\\end{exercise}\n\n\\subsection{Negative numbers}\nNegative numbers are slightly more complicated than positive numbers, partly because there are several common ways to represent them. One simple way is to represent negative numbers with a sign bit, often called \\term{signed-magnitude} representation. This representation just adds an additional bit to the front of the number to indicate whether it is negative or not. One somewhat odd thing about signed-magnitude representation is that there is more than one way to represent zero. This tends to make even apparently simple operations, like \\lstinline|operator ==()|, more complex to implement.\n\\begin{tabularpad}{*{3}{c}|l}\n+/-  & $2^1$ & $2^0$ & signed magnitude  \\\\\n\\hline \n0& 1&1& $=3$\\\\\n0& 1&0& $=2$\\\\\n0& 0&1& $=1$\\\\\n0& 0&0& $=0$\\\\\n1& 0&0& $=-0$\\\\\n1& 0&1& $=-1$\\\\\n1& 1&0& $=-2$\\\\\n1& 1&1& $=-3$\\\\\n\\end{tabularpad} \n\nAnother way to represent negative numbers is with \\term{biased} representation. This representation adds a constant offset (usually equal in magnitude to the value of the largest bit) to the value, which is otherwise treated as a positive number:\n\\begin{tabularpad}{*{3}{c}|l}\n$2^2$  & $2^1$ & $2^0$ & biased \\\\\n\\hline \n1& 1&1& $=3$\\\\\n1& 1&0& $=2$\\\\\n1& 0&1& $=1$\\\\\n1& 0&0& $=0$\\\\\n0& 1&1& $=-1$\\\\\n0& 1&0& $=-2$\\\\\n0& 0&1& $=-3$\\\\\n0& 0&0& $=-4$\\\\\n\\end{tabularpad} \n\nHowever, by far the most common technique for implementing negative numbers is known as \\term{two's complement}. In two's complement representation, the most significant bit represents the sign of the number (as in signed-magnitude representation), and \\emph{also} whether or not an offset is applied. One way of thinking about this situation is that the high order bit represents a negative contribution to the overall number.\n\\begin{tabularpad}{*{3}{c}|l}\n$-2^2$  & $2^1$ & $2^0$ & two's complement \\\\\n\\hline \n0& 1&1& $=3$\\\\\n0& 1&0& $=2$\\\\\n0& 0&1& $=1$\\\\\n0& 0&0& $=0$\\\\\n1& 1&1& $=-1$\\\\\n1& 1&0& $=-2$\\\\\n1& 0&1& $=-3$\\\\\n1& 0&0& $=-4$\\\\\n\\end{tabularpad}\n\\begin{tabularpad}{*{5}{c}|l}\n$-2^4$  & $2^3$ & $2^2$  & $2^1$ & $2^0$ & two's complement \\\\\n\\hline \n0&0&0& 1&1& $=3$\\\\\n0&0&0& 1&0& $=2$\\\\\n0&0&0& 0&1& $=1$\\\\\n0&0&0& 0&0& $=0$\\\\\n1& 1&1& 1&1& $=-1$\\\\\n1& 1&1& 1&0& $=-2$\\\\\n1& 1&1& 0&1& $=-3$\\\\\n1& 1&1& 0&0& $=-4$\\\\\n\\end{tabularpad}\n\nOne significant difference between unsigned numbers and two's complement numbers is that we need to know exactly how many bits are used to represent the number, since the most significant bit is treated differently than the remaining bits.  
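For example, the same bit pattern yields different values depending on whether we interpret it as unsigned or as two's complement; a small C illustration (the particular constant is our own):\n\\begin{lstlisting}\n#include \"inttypes.h\"\n#include <stdio.h>\n\nint main() {\n  uint8_t u = 0xF4;          // 0b11110100, MSB contributes +128\n  int8_t  s = (int8_t)0xF4;  // same bits, MSB contributes -128\n  printf(\"u = %d, s = %d\\n\", u, s);  // prints \"u = 244, s = -12\"\n  return 0;\n}\n\\end{lstlisting}\n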
Furthermore, when widening a signed two's complement number to more bits, the sign bit is replicated to all the new most significant bits. This process is normally called \\term{sign-extension}. For the rest of the book, we will generally assume that all signed numbers are represented in two's complement unless otherwise mentioned.\n\n\\begin{exercise}\nWhat is the largest positive number representable with N bits in two's complement? What is the largest negative number?\n\\end{exercise}\n\\begin{exercise}\nGiven a positive number $x$, how can you find the two's complement representation of $-x$? \nWhat is $-0$ in two's complement? If $x$ is the largest negative number representable with N bits in two's complement, what is $-x$? \n\\end{exercise}\n\n\\subsection{Overflow, Underflow, and Rounding}\n\n\\term{Overflow} occurs when a number is larger than the largest number that can be represented in a given number of bits. Similarly, \\term{underflow} occurs when a number is smaller than the smallest number that can be represented. One common way of handling overflow or underflow is to simply drop the most significant bits of the original number, often called \\term{wrapping}.\n\\begin{tabularpad}{*{10}{c}|l}\n$2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$ & $2^{-1}$ & $2^{-2}$ & $2^{-3}$ & $2^{-4} $ & \\\\\n\\hline \n0&0&1&0&1&1&0&1&0&0&$=11.25$ \\\\\n&0&1&0&1&1&0&1&0&0&$=11.25$ \\\\\n&&1&0&1&1&0&1&0&0&$=11.25$ \\\\\n&&&0&1&1&0&1&0&0&$=3.25$ \\\\\n\\end{tabularpad}\n\nHandling overflow and underflow by wrapping two's complement numbers can even cause a positive number to become negative, or a negative number to become positive.\n\\begin{tabularpad}{*{8}{c}|l}\n$-2^3$ & $2^2$ & $2^1$ & $2^0$ & $2^{-1}$ & $2^{-2}$ & $2^{-3}$ & $2^{-4} $ & two's complement \\\\\n\\hline \n1&0&1&1&0&1&0&0&$=-4.75$ \\hfill\\tabspace \\\\\n\\hfill& $-2^2$ & $2^1$ & $2^0$ & $2^{-1}$ & $2^{-2}$ & $2^{-3}$ & $2^{-4} $ & two's complement \\\\\n\\hline \n\\hfill&0&1&1&0&1&0&0&$=3.25$ \\hfill\n\\end{tabularpad}\n\nSimilarly, when a number cannot be represented precisely in a given number of fractional bits, it is necessary to apply \\term{rounding}. Again, there are several common ways to round numbers. The simplest way is to just drop the extra fractional bits, which tends to result in numbers that are more negative. This method of rounding is often called \\term{rounding down} or \\term{rounding to negative infinity}.  
When rounding down to the nearest integer, this corresponds to the \\lstinline|floor()| function, although it's possible to round to other bit positions as well.\n\\tabspace\\\\\\makebox{\\begin{tabular}{rl}\n0b0100.00&$=4.0$   \\\\\n0b0011.11&$=3.75$ \\\\\n0b0011.10&$=3.5$ \\\\\n0b0011.01&$=3.25$ \\\\\n0b0011.00&$=3.0$ \\\\\n0b1100.00&$=-4.0$\\\\\n0b1011.11&$=-4.25$\\\\\n0b1011.10&$=-4.5$\\\\\n0b1011.01&$=-4.75$\\\\\n0b1011.00&$=-5.0$\\\\\n\\end{tabular}\n$\\rightarrow$ \\parbox{2cm}{Round to\\\\Negative\\\\Infinity} $\\rightarrow$\n\\begin{tabular}{rl}\n0b0100.0&$=4.0$   \\\\\n0b0011.1&$=3.5$ \\\\\n0b0011.1&$=3.5$ \\\\\n0b0011.0&$=3.0$ \\\\\n0b0011.0&$=3.0$ \\\\\n0b1100.0&$=-4.0$\\\\\n0b1011.1&$=-4.5$\\\\\n0b1011.1&$=-4.5$\\\\\n0b1011.0&$=-5.0$\\\\\n0b1011.0&$=-5.0$\\\\\n\\end{tabular}}\n\nIt is also possible to handle rounding in other similar ways that force rounding to more positive numbers (called \\term{rounding up} or \\term{rounding to positive infinity} and corresponding to the \\lstinline|ceil()| function), to smaller absolute values (called \\term{rounding to zero} and corresponding to the \\lstinline|trunc()| function), or to larger absolute values (called \\term{rounding away from zero} or \\term{rounding to infinity} and corresponding to the \\lstinline|round()| function). None of these operations always minimizes the error caused by rounding, however. A better approach is called \\term{rounding to nearest even}, \\term{convergent rounding}, or \\term{banker's rounding} and is implemented in the \\lstinline|lrint()| function. As you might expect, this approach to rounding always picks the nearest representable number. In addition, if there are two numbers equally distant, then the \\emph{even} one is always picked. An arbitrary-precision number is even if the last digit is zero. This approach is the default handling of rounding with IEEE floating point, as it not only minimizes rounding errors but also ensures that the rounding error tends to cancel out when computing sums of random numbers.\n\\tabspace\\\\\\makebox{\\begin{tabular}{rl}\n0b0100.00&$=4.0$   \\\\\n0b0011.11&$=3.75$ \\\\\n0b0011.10&$=3.5$ \\\\\n0b0011.01&$=3.25$ \\\\\n0b0011.00&$=3.0$ \\\\\n0b1100.00&$=-4.0$\\\\\n0b1011.11&$=-4.25$\\\\\n0b1011.10&$=-4.5$\\\\\n0b1011.01&$=-4.75$\\\\\n0b1011.00&$=-5.0$\\\\\n\\end{tabular}\n$\\rightarrow$ \\parbox{2cm}{Round to\\\\Nearest\\\\Even} $\\rightarrow$\n\\begin{tabular}{rl}\n0b0100.0&$=4.0$   \\\\\n0b0100.0&$=4.0$ \\\\\n0b0011.1&$=3.5$ \\\\\n0b0011.0&$=3.0$ \\\\\n0b0011.0&$=3.0$ \\\\\n0b1100.0&$=-4.0$\\\\\n0b1100.0&$=-4.0$\\\\\n0b1011.1&$=-4.5$\\\\\n0b1011.0&$=-5.0$\\\\\n0b1011.0&$=-5.0$\\\\\n\\end{tabular}}\n\\tabspace\\\\\n\n\n\\subsection{Binary arithmetic} \n\\label{sec:arithmetic}\n\nBinary addition is very similar to decimal addition: simply align the binary points and add digits, taking care to correctly handle bits carried from one column to the next. Note that the result of adding or subtracting two N-bit numbers generally takes N+1 bits to represent correctly without overflow.  
The added bit is always an additional most significant bit, even for fractional numbers.\n\\begin{tabularpad}{*{6}{c}|l}\n  &$2^3$ & $2^2$  & $2^1$ & $2^0$ && unsigned \\\\\n\\hline \n&&0& 1&1&& $=3$\\\\\n+&&0& 1&1&& $=3$\\\\\n\\hline\n=&0&1& 1& 0&&$=6$\\tabspace\\\\\n & $2^3$ & $2^2$  & $2^1$ & $2^0$ & $2^{-1}$ & unsigned \\\\\n\\hline \n&&1&1& 1&1& $=7.5$\\\\\n+&&1&1& 1&1& $=7.5$\\\\\n\\hline\n=&1&1&1& 1& 0&$=15$\\\\\n\\end{tabularpad}\n\nNote that since the result of subtraction can be negative, the 'extra bit' becomes the sign-bit of a two's complement number.\n\\begin{tabularpad}{*{6}{c}|l}\n & &$2^3$ & $2^2$  & $2^1$ & $2^0$ & unsigned \\\\\n\\hline \n&&0&0& 1&1& $=3$\\\\\n-&&0&0& 1&1& $=3$\\\\\n\\hline\n=&&0&0& 0& 0&$=0$\\tabspace\\\\\n &-$2^4$  & $2^3$ & $2^2$  & $2^1$ & $2^0$ & unsigned \\\\\n\\hline \n&&0&0& 1&1& $=3$\\\\\n-&&1&1& 1&1& $=15$\\\\\n\\hline\n=&1&0&1& 0& 0&$=-12$ (two's complement)\\\\\n\\end{tabularpad}\n\nMultiplication for binary numbers also works similarly to familiar decimal multiplication. In general, multiplying two N-bit numbers results in a 2N-bit result.\n\\begin{tabularpad}{*{8}{c}|l}\n &$2^6$  &$2^5$ &$2^4$ &$2^3$ & $2^2$  & $2^1$ & $2^0$ & unsigned \\\\\n\\hline \n&&&&1&0& 0&1& $=9$\\\\\n*&&&&1&0& 0&1& $=9$\\\\\n\\hline\n&&&&1&0& 0& 1&$=9$\\\\\n&&&0&0& 0& 0&&$=0$\\\\\n&&0&0& 0& 0&&&$=0$\\\\\n+&1&0& 0& 1&&&&$=72$\\\\\n\\hline\n&1&0&1&0&0&0&1&$=81$\\\\\n\\end{tabularpad}\n\nOperations on signed numbers are somewhat more complex because of the sign-bit handling and won't be covered in detail. However, the observations regarding the width of the result still apply: adding or subtracting two N-bit signed numbers results in an N+1-bit result, and multiplying two N-bit signed numbers results in a 2N-bit result.\n\n\\begin{exercise}\nWhat about division? Can the number of bits necessary to exactly represent the result of a division operation of two N-bit numbers be computed?\n\\end{exercise}\n\n\n\\subsection{Representing Arbitrary Precision Integers in C and C++}\n\\label{sec:arbitrary_precision}\n\nAccording to the C99 language standard, the precision of many standard types, such as \\lstinline|int| and \\lstinline|long|, is implementation defined. Although many programs can be written with these types in a way that does not have implementation-defined behavior, many cannot. One small improvement is the \\lstinline{inttypes.h} header in C99, which defines the types \\lstinline|int8_t|, \\lstinline|int16_t|, \\lstinline|int32_t|, and \\lstinline|int64_t| representing signed numbers of a given width and the corresponding types \\lstinline|uint8_t|, \\lstinline|uint16_t|, \\lstinline|uint32_t|, and \\lstinline|uint64_t| representing unsigned numbers. Although these types are defined to have exactly the given bitwidths, they can still be somewhat awkward to use. For instance, even relatively simple programs like the code below can have unexpected behavior.\n\\begin{lstlisting}\n#include \"inttypes.h\"\nuint16_t a = 0x4000;\nuint16_t b = 0x4000;\n// Danger! p depends on sizeof(int)\nuint32_t p = a*b;  \n\\end{lstlisting}\nAlthough the values of \\lstinline{a} and \\lstinline{b} can be represented in 16 bits and their product (\\lstinline{0x10000000}) can be represented exactly in 32 bits, the behavior of this code under the conversion rules in C99 is to first convert \\lstinline|a| and \\lstinline|b| to type \\lstinline|int|, compute an integer result, and then extend the result to 32 bits.  
Although uncommon, it is correct for a C99 compiler to provide integers with only 16 bits of precision. Furthermore, the C99 standard only defines 4 bitwidths for integer numbers, while FPGA systems often use a wide variety of bitwidths for arithmetic. Also, printing these datatypes using \\lstinline{printf()} is awkward, requiring the use of additional macros to write portable code. The situation is even worse if we consider a fixed-point arithmetic example. In the code below, we consider \\lstinline{a} and \\lstinline{b} to be fixed point numbers, and perform normalization correctly to generate a result in the same format.\n\\begin{lstlisting}\n#include \"inttypes.h\"\n// 4.0 represented with 12 fractional bits.\nuint16_t a = 0x4000; \n// 4.0 represented with 12 fractional bits.\nuint16_t b = 0x4000; \n// Danger! p depends on sizeof(int)\nuint32_t p = (a*b) >> 12; \n\\end{lstlisting}\n\nThe correct code in both cases requires casting the input variables to the width of the result before multiplying.\n\\begin{lstlisting}\n#include \"inttypes.h\"\nuint16_t a = 0x4000;\nuint16_t b = 0x4000;\n// p is assigned to 0x10000000\nuint32_t p = (uint32_t) a*(uint32_t) b; \n\\end{lstlisting}\n\\begin{lstlisting}\n#include \"inttypes.h\"\n// 4.0 represented with 12 fractional bits.\nuint16_t a = 0x4000; \n// 4.0 represented with 12 fractional bits.\nuint16_t b = 0x4000; \n// p assigned to 16.0 represented with 12 fractional bits\nuint32_t p = ( (uint32_t) a*(uint32_t) b ) >> 12; \n\\end{lstlisting}\n\n\\begin{aside}\nWhen using integers to represent fixed-point numbers, it is very important to document the fixed point format used, so that normalization can be performed correctly after multiplication. Usually this is described using \"Q\" formats that give the number of fractional bits. For instance, \"Q15\" format uses 15 fractional bits and usually applies to 16 bit signed variables. Such a variable has values in the interval $[-1,1)$. Similarly, \"Q31\" format uses 31 fractional bits.\n\\end{aside}\n\nFor these reasons, it's usually preferable to use C++ and the \\VHLS template classes \\lstinline|ap_int<>|, \\lstinline|ap_uint<>|, \\lstinline|ap_fixed<>|, and \\lstinline|ap_ufixed<>| to represent arbitrary precision numbers. The \\lstinline|ap_int<>| and \\lstinline|ap_uint<>| template classes require a single integer template parameter that defines their width. Arithmetic functions generally produce a result that is wide enough to contain a correct result, following the rules in Section \\ref{sec:arithmetic}.\nOnly if the result is assigned to a narrower bitwidth does overflow or underflow occur.\n\\begin{lstlisting}\n#include \"ap_int.h\"\nap_uint<15> a = 0x4000;\nap_uint<15> b = 0x4000;\n// p is assigned to 0x10000000.\nap_uint<30> p = a*b; \n\\end{lstlisting}\n\nThe \\lstinline|ap_fixed<>| and \\lstinline|ap_ufixed<>| template classes are similar, except that they require two integer template arguments that define the overall width (the total number of bits) and the number of integer bits.\n\\begin{lstlisting}\n#include \"ap_fixed.h\"\n// 4.0 represented with 12 integer bits.\nap_ufixed<15,12> a = 4.0; \n// 4.0 represented with 12 integer bits.\nap_ufixed<15,12> b = 4.0; \n// p is assigned to 16.0 represented with 12 integer bits\nap_ufixed<18,12> p = a*b; \n\\end{lstlisting}\n\n\\begin{exercise}\nNote that the \\lstinline|ap_fixed<>| and \\lstinline|ap_ufixed<>| template classes require the overall width of the number to be positive, but the number of integer bits can be arbitrary.  
In particular, the number of integer bits can be 0 (indicating a number that is purely fractional) or can be the same as the overall width (indicating a number that has no fractional part). However, the number of integer bits can also be negative or greater than the overall width! What do such formats describe? What are the largest and smallest numbers that can be represented by an \\lstinline|ap_fixed<8,-3>|? \\lstinline|ap_fixed<8,12>|?\n\\end{exercise}\n\n\\subsection{Floating Point}\n\\label{sec:floating_point}\n\n\\VHLS can also synthesize floating point calculations. Floating point numbers provide a large amount of precision, but this comes at a cost; they require a significant amount of computation, which in turn translates to a large amount of resource usage and many cycles of latency. Thus, floating point numbers should be avoided unless absolutely necessary, as dictated by the accuracy requirements of the application. In fact, the primary goal of this chapter is to allow the reader to understand how to effectively move from floating point to fixed point representations. Unfortunately, this is often a non-trivial task and there are not many good standard methods to automatically perform this translation. This is partially due to the fact that moving to fixed point will reduce the accuracy of the application, and this tradeoff is best left to the designer. \n\nThe standard technique for high-level synthesis starts with a floating point representation during the initial development of the application. This allows the designer to focus on getting a functionally correct implementation. Once that is achieved, she can move on to optimizing the number representation in order to reduce the resource usage and/or increase the performance. \n\n\\begin{exercise}\nChange all of the variables in the CORDIC from \\lstinline{float} to \\lstinline{int}. How does this affect the resource usage? How does it change the latency? How about the throughput? Does the accuracy change?\n\\end{exercise} \n\n\\section{Cordic Optimizations}\n\nIn this section, we provide some brief thoughts and suggestions on the best way to optimize the CORDIC function. We focus on how the different optimizations change the precision of the result while providing the ability to trade off between throughput, precision, and area. \n\nOne important decision when implementing the CORDIC function is to select the type used to represent angles and the results. While the original code can operate with either floating-point types, e.g., \\lstinline{float}, or fixed-point types, CORDIC is most commonly used with fixed-point types with the specific goal of eliminating multipliers. When efficient means of multiplication are available, other methods for computing trigonometric functions are often preferred. Looking at the original code in Figure \\ref{fig:cordic_code}, it contains several multiply operators related to the \\lstinline{sigma} and \\lstinline{factor} variables. By restricting the code to work only with fixed-point types, we can remove these multiplications, converting them into shifts and adds. Code that does this is shown in Figure \\ref{fig:cordic_fixed_code}.\n\n\\begin{figure}\n\\lstinputlisting{examples/cordic_fixed.cpp}\n\\caption{Fixed-point CORDIC code implementing the sine and cosine of a given angle.}\\label{fig:cordic_fixed_code}\n\\end{figure}\n\n\\begin{exercise}\nHow do the area, throughput, and precision of the sine and cosine results change as you vary the data type?  
Do you see a significant difference when \\lstinline{THETA_TYPE} and \\lstinline{COS_SIN_TYPE} are floating point types vs. \\lstinline{ap_fixed<>} types? What about using the code from Figure \\ref{fig:cordic_code} and Figure \\ref{fig:cordic_fixed_code}?\n\\end{exercise}\n\nUltimately, CORDIC produces an approximation. The error on that approximation generally decreases as the number of iterations increases. This corresponds to the number of times that we execute the \\lstinline{for} loop in the \\lstinline{cordic} function, which is set by \\lstinline{NUM_ITERATIONS}. Even if we perform a very large number of iterations, we may still have an approximation. One reason for this is that we may approach but never exactly match the desired target angle. We can, however, tune precision by choosing to perform more or fewer iterations. All that needs to change in the algorithm is the value of \\lstinline{NUM_ITERATIONS}. The choice of \\lstinline{NUM_ITERATIONS} depends on the number of digits of precision required by the application using this CORDIC core.\n\n\\begin{exercise}\nHow does the constant \\lstinline{NUM_ITERATIONS} affect the area, throughput, and precision? How does this affect the initial values of \\lstinline{current_cos} and \\lstinline{current_sin}? Do you need to modify the array \\lstinline{cordic_phase}? Can you optimize the data types depending on the value of \\lstinline{NUM_ITERATIONS}?\n\\end{exercise}\n\n\\begin{exercise}\nUltimately, most applications imply requirements on the precision of the final result, and a designer must determine how to build the best circuit given particular precision requirements. To achieve a given precision in the final result, what value of \\lstinline{NUM_ITERATIONS} and what data types are required?\n\\end{exercise}\n\n\\begin{exercise}\nThe computations in the \\lstinline{for} loop occupy most of the overall time. How do you best perform code transforms and/or use pragmas to optimize it?\n\\end{exercise}\n\n\\begin{exercise}\nThe current code assumes that the given angle is between $\\pm 90^{\\circ}$. Can you add code to allow it to handle any angle between $\\pm 180^{\\circ}$?\n\\end{exercise}\n\n\\section{Conclusion}\nIn this chapter, we looked at the Coordinate Rotation DIgital Computer (CORDIC) method for calculating trigonometric and hyperbolic functions based on vector rotations. We started with background on the computation performed by the CORDIC method. In particular, we focused on how to use the CORDIC method to calculate the sine and cosine values for a given angle. Additionally, we discussed how the same CORDIC method can be used to determine the amplitude and phase of a given complex number.\n\nAfter this, we focused on the optimizations that can be applied to the CORDIC method. Since it is an iterative method, there are fundamental tradeoffs between the number of iterations that are performed and the precision and accuracy of the resulting computation. We discussed how reducing the precision/accuracy can yield savings in FPGA resource usage and increases in performance.\n\nWe introduced the notion of using custom arbitrary precision data types for the variables in our \\lstinline{cordic} function. This provides another method to reduce the latency, increase the throughput, and minimize the area while changing the precision of the intermediate and final results. \\VHLS provides a method to specify a wide range of custom data types. We provided a background on number representation and introduced these custom data types. 
\n\nIn general, there is a complex relationship between precision, resource utilization, and performance. We touched on some of these tradeoffs and provided some insights on how to best optimize the \\lstinline{cordic} function. We leave many of the optimizations, as well as the analysis of these tradeoffs, as an exercise to the reader. %The CORDIC method is an integral part of the Phase Detector project described in Chapter \\ref{chapter:phase_detector} -- a lab provided in the Appendix.", "meta": {"hexsha": "f1c8b9a26fa8540144bd84f26159f05493c554fb", "size": 64705, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cordic.tex", "max_stars_repo_name": "mithro/pp4fpgas", "max_stars_repo_head_hexsha": "ddede5bd337f4fa33915d7e4ca98f97a7b31413a", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 418, "max_stars_repo_stars_event_min_datetime": "2018-05-09T17:28:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T05:51:12.000Z", "max_issues_repo_path": "cordic.tex", "max_issues_repo_name": "mithro/pp4fpgas", "max_issues_repo_head_hexsha": "ddede5bd337f4fa33915d7e4ca98f97a7b31413a", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2018-05-13T16:26:23.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-06T06:06:57.000Z", "max_forks_repo_path": "cordic.tex", "max_forks_repo_name": "mithro/pp4fpgas", "max_forks_repo_head_hexsha": "ddede5bd337f4fa33915d7e4ca98f97a7b31413a", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 107, "max_forks_repo_forks_event_min_datetime": "2018-05-12T16:43:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-23T22:59:42.000Z", "avg_line_length": 85.2503293808, "max_line_length": 1549, "alphanum_fraction": 0.7491229426, "num_tokens": 17757, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6859494550081925, "lm_q2_score": 0.8723473879530491, "lm_q1q2_score": 0.5983862153442143}}
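To make the shift-and-add conversion discussed above concrete, the following is a minimal, self-contained C++ sketch of a fixed-point CORDIC rotation. This is our own illustration rather than the book's \lstinline{cordic_fixed.cpp}; the Q2.13 format, the gain constant, the angle table, and the value of \lstinline{NUM_ITERATIONS} are all assumptions made for the sketch.

\begin{lstlisting}
#include <cmath>
#include <cstdio>

// Illustrative fixed-point CORDIC sketch; not the book's code.
// Values are signed 16-bit integers with 13 fractional bits (Q2.13),
// so every multiply by 2^-i becomes an arithmetic right shift.
const int NUM_ITERATIONS = 13;
const int FRAC_BITS = 13;
typedef short fixed_t;

fixed_t to_fixed(double x) { return (fixed_t)lround(x * (1 << FRAC_BITS)); }
double to_double(fixed_t x) { return (double)x / (1 << FRAC_BITS); }

void cordic(fixed_t theta, fixed_t &s, fixed_t &c) {
    // arctan(2^-i) in radians; converted to fixed point when used
    static const double atan_table[NUM_ITERATIONS] = {
        0.78539816, 0.46364761, 0.24497866, 0.12435499, 0.06241881,
        0.03123983, 0.01562373, 0.00781234, 0.00390623, 0.00195312,
        0.00097656, 0.00048828, 0.00024414};
    c = to_fixed(0.60725294); // CORDIC gain correction K
    s = 0;
    for (int i = 0; i < NUM_ITERATIONS; i++) {
        fixed_t cs = c >> i, ss = s >> i; // shifts replace multiplies
        if (theta >= 0) { // rotate toward the remaining angle
            c -= ss; s += cs; theta -= to_fixed(atan_table[i]);
        } else {
            c += ss; s -= cs; theta += to_fixed(atan_table[i]);
        }
    }
}

int main() {
    fixed_t s, c;
    cordic(to_fixed(0.52359878), s, c); // 30 degrees in radians
    printf("sin = %f, cos = %f\n", to_double(s), to_double(c));
    return 0;
}
\end{lstlisting}

Note the absence of any multiply in the loop body: with fixed-point data, scaling by $2^{-i}$ is a pure shift (just wiring in hardware), which is exactly the saving that the fixed-point rewrite targets.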
{"text": "\\clearpage \n\\section*{\\underline{Extended Data}}\n\n%\\renewcommand\\thefigure{\\thesection.\\arabic{figure}}    \n\\setcounter{figure}{0}    \n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.6 \\textwidth]{pgm_models.jpg}\n\t\\caption{\\textbf{A probabilistic graphical model (PGM) represented algebraically in Equation \\ref{eq:modelll}.} The shaded circle indicates observed data, and solid black points represent other fixed information, such as the KDEs and observational uncertainties. The remaining circles represent parameters. The underline indicates that the symbol represents a set of parameters or data. Here, $\\kappa_{\\rm{s}}$ and $\\kappa_{\\rm WMB}$ represent the KDEs of standard and WMB model populations respectively. $Q_{\\rm{WMB}}$ is the mixture model weighting factor. The latent parameters $\\theta$, our observations $\\mathcal{D}$ and their uncertainties $\\sigma_{\\mathcal{D}}$ include temperature (\\teff), mass ($M$), log-age ($\\ln(t)$), metallicity (\\feh) and log-rotation ($\\ln(P)$). This model is \\textit{hierarchical}, as all the latent parameters are drawn from the common probability distribution set by $Q_{\\rm{WMB}}$ and described in Equation \\ref{eq:mixturell}.}\n\t\\label{fig:pgm}\n\\end{figure}\n ", "meta": {"hexsha": "db5b54d502afc1d2cf9e698224cc67e421727d83", "size": 1219, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/natastron/Publication/ed.tex", "max_stars_repo_name": "ojhall94/malatium", "max_stars_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-25T07:45:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-25T07:45:20.000Z", "max_issues_repo_path": "paper/natastron/Publication/ed.tex", "max_issues_repo_name": "ojhall94/malatium", "max_issues_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/natastron/Publication/ed.tex", "max_forks_repo_name": "ojhall94/malatium", "max_forks_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-19T09:38:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T09:38:35.000Z", "avg_line_length": 93.7692307692, "max_line_length": 964, "alphanum_fraction": 0.7571780148, "num_tokens": 323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8723473680407889, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5983862072859675}}
{"text": "\\lab{Algorithms}{Optimization}{Np-Hard Simplex}\n\\label{lab:Simplex}\n\\objective{In this lab, you will see the downside of the Simplex method.}\n\nThe simplex algorithm is one of the most used in business. The Computing in Science and Engineering journal listed Simplex as one of the top ten algorithms of the twentieth century. Despite its popularity, like any other algorithm , simplex has drawbacks.  \n\nThe Victor Klee and George Minty Cube wrote a paper in 1972 called, \"How Good is the Simplex Algorithm?\" \nIn their paper, they give several examples of polytopes that struggle with the Simplex algorithm. \n\n\n\\section*{Example}\n\nConsider the following linear program from Klee-Minty.\n\n\\begin{align*}\n\\text{max } & 2^{n-1}x_1 & + & 2^{n-2}x_2  & + & \\cdots & + & 2x_{n-1} & + & x_n\\\\                     \n\\text{subject to } & x_1 &  &  &  &  &  &  &  &\\leq 5\\\\\n& 4x_1 & + & x_2 &  &  &  &  &  &\\leq 25\\\\\n& 8x_1 & + & 4x_2 & + & x_3 &  &  &  &\\leq 125\\\\\n& \\vdots & &      &   &     &  &  &  &\\vdots\\\\ \n& 2^n x_1 & + & 2^{n-1} x_2 & + & \\cdots & + & 4x_{n-1} & + x_n &\\leq 5\\\\\n\\end{align*}\n\nWhen $n = 3$, we have the initial tableau\n$$\n\\begin{bmatrix}\n0 & 4 & 2 & 1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n5 & -1 & 0 & 0 & 0 & 0 & 0\\\\\n25 & -4 & -1 & 0 & 0 & 0 & 0\\\\\n125 & -8 & -4 & -1 & 0 & 0 & 0\\\\\n\\end{bmatrix}\n$$. \n\nAfter the first pivot with $x_1$ leaving and $s_1$ entering, we have the tableau\n\n$$\n\\begin{bmatrix}\n20 & 0 & 2 & 1 & -4 & 0 & 0\\\\\n5 & 0 & 0 & 0 & -1 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n5 & 0 & -1 & 0 & 4 & 0 & 0\\\\\n85 & 0 & -4 & -1 & 8 & 0 & 0\\\\\n\\end{bmatrix}\n$$. \n\n\\textbf{Problem 1}\n\nWhat is the final tableau for this Klee-Minty example with $n=3$?\nHow many iterations does it take to arrive at the final tableau?\n\n\\textbf{Problem 2}\nUsing problem 1 as a guide, guess the optimum value of the Klee-Minty example with $n=20$. Then find the maximum value for the Klee-Minty example with $n=20$. \nHow many iterations does it take to arrive at the final tableau?\nHow long does the program take to run for only $20$ variables?\n\nKlee and Minty show that for this example, the worst case has exponential time complexity. With only $n$ constraints and $n$ variables, the simplex algorithm goes through $2^n$ iterations. This is because there are $2^n$ extreme points and when starting at the point $x=0$, the simplex algorithm goes through all of the extreme points before reaching the optimal point $(0,0,\\dots, 0, 5^n)$.\nOther algorithms, such as the interior point method, solve this problem much faster because they are not constrained to follow the edges. 
\n\n\\end{document}", "meta": {"hexsha": "1fa6caec2154b680b10bae3bbd8ea7fbd65aa93d", "size": 2704, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algorithms/NPhardSimplex/nphard.tex", "max_stars_repo_name": "abefrandsen/numerical_computing", "max_stars_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Algorithms/NPhardSimplex/nphard.tex", "max_issues_repo_name": "abefrandsen/numerical_computing", "max_issues_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Algorithms/NPhardSimplex/nphard.tex", "max_forks_repo_name": "abefrandsen/numerical_computing", "max_forks_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.25, "max_line_length": 391, "alphanum_fraction": 0.6261094675, "num_tokens": 1005, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8723473680407889, "lm_q1q2_score": 0.5983862016854103}}
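As a worked check of the claimed optimal point (our own verification), consider the $n=3$ case above. The last constraint gives $x_3 \leq 125 - 8x_1 - 4x_2$, so for nonnegative $x_1, x_2$:
\begin{equation*}
4x_1 + 2x_2 + x_3 \;\leq\; 4x_1 + 2x_2 + (125 - 8x_1 - 4x_2) \;=\; 125 - 4x_1 - 2x_2 \;\leq\; 125,
\end{equation*}
with equality exactly at $(x_1, x_2, x_3) = (0, 0, 125)$; the optimum is therefore $5^3 = 125$. The same argument gives $5^n$ in general, so the optimal value for $n=20$ is $5^{20}$.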
{"text": "\\title{\\bf Stellar Dynamics}\n\n% REALLY should also cover:\n% - Axisymmetric potentials\n% - epicyclic frequencies\n\n\\section{Basics \\& Nomenclature}\n\nStellar dynamics is almost entirely collisionless, due to the low\nnumber density of stars relative to their radii. It is therefore\ngoverned by the {\\it collisionless Boltzmann equation} (sometimes\ncalled the {\\it Vlasov equation}) acting under gravity.\n\nIn the continuum limit, we can express the {\\it distribution function}\nof stars in phase space as $f(\\vec{x}, \\vec{v}, t)$, in units of per\nlength-cubed per unit velocity-cubed. If we define the phase space\nvector $\\vec{w} = \\{\\vec{x}, \\vec{v}\\}$ then we can write $f(\\vec{w},\nt)$. The distribution in phase space can be arbitrarily\ncomplicated. It will not be thermalized as in a gas or fluid, or obey\nany particular equation of state.\n\nThe continuum limit will be violated most rapidly by two-body\ninteractions. We find in the exercises that the time for an $N$-body\nsystem to relax due to this effect, the {\\it two-body relaxation\ntime}, is:\n\\begin{equation}\n\\label{eq:relax}\nt_{\\rm relax} \\sim \\frac{0.1 N}{\\ln N} t_{\\rm cross}\n\\end{equation}\nwhere $t_{\\rm cross}$ is the crossing time of the system. Globular\nclusters have relaxation times short compared to their ages.\nGalaxies, over most of their extent, have relaxation times high\ncompared to their ages.\n\nIn the continuum limit, the system obeys the collisionless Boltzmann\nequation under just gravity:\n\\begin{equation}\n \\frac{\\partial f}{\\partial t} + \\vec{v}\\cdot\\vec{\\nabla} f -\n\\vec{\\nabla}\\Phi\\cdot\\frac{\\partial f}{\\partial \\vec{v}} = 0\n\\end{equation}\nIt can be shown that this equation obeys a special case of {\\it\nLiouville's Theorem}:\n\\begin{equation}\n\\frac{{\\rm d}f}{{\\rm d}t} = 0\n\\end{equation}\nwhere in this case the substantive derivative is:\n\\begin{equation}\n\\frac{\\rm d}{{\\rm d}t} = \\frac{\\partial}{\\partial t}\n+ \\sum_\\alpha \\frac{\\partial}{\\partial w_\\alpha}\n\\end{equation}\nThe meaning of Liouville's Theorem is that as a particle travels\nthrough phase space, the phase space density of particles around it\nremains constant.\n\n\\section{Jeans Equations}\n\nThe first few moments of the collisionless Boltzmann equation are\ninstructive and can be useful. 
These equations are known as the {\\it\nJeans Equations}.\n\nThe zeroth moment integrated over velocity yields the equation of\ncontinuity:\n\\begin{equation}\n\\frac{\\partial n}{\\partial t}\n+ \\vec{\\nabla} \\cdot\\left(n \\left\\langle\\vec{v}\\right\\rangle\\right) =\n0\n\\end{equation}\nwhere $n(\\vec{x})$ is the mean number density per unit volume and\n$\\langle\\rangle$ indicates a density-weighted mean over all\nvelocities.\n\nThe first moment integrated over velocity yields something akin to\nEuler's equations:\n\\begin{equation}\n\\label{eq:euler}\nn \\frac{\\partial \\langle\\vec{v}\\rangle}{\\partial t}\n + n \\langle \\vec{v} \\rangle \\cdot\n \\vec{\\nabla} \\langle\\vec{v}\\rangle\n  = -n \\vec{\\nabla}\\Phi(\\vec{x}, t) - \\vec{\\nabla} \\cdot(n\n {\\mathbf{\\sigma^2}})\n\\end{equation}\nwhere ${\\mathbf{\\sigma^2}}$ is the tensor second moment of the velocity\nfield, analogous to pressure:\n\\begin{equation}\n\\sigma_{ij}^2 = \\left\\langle \\left(v_i - \\langle v_i \\rangle\\right)\n\\left(v_j - \\langle v_j \\rangle\\right) \\right\\rangle\n\\end{equation}\nEach successively higher moment of the collisionless Boltzmann equation\ninvolves terms of higher order in this fashion; whereas in a\ncollisional fluid the system would close with an equation of state, in\na collisionless system the equations never close.\n\nIn steady state, where the density is not changing with time anywhere\nand where, in addition, the mean velocity is zero everywhere, we find a relation\nanalogous to the hydrostatic equation:\n\\begin{equation}\n \\label{eq:steady}\n \\vec{\\nabla} \\cdot(n\n {\\mathbf{\\sigma^2}}) = -n \\vec{\\nabla}\\Phi(\\vec{x}, t)\n\\end{equation}\nUnder spherical symmetry in configuration space, this can be\nrewritten:\n\\begin{equation}\n \\label{eq:spherical}\n\\frac{\\partial(n \\sigma_{rr}^2)}{\\partial r}\n+ \\frac{n}{r} \\left[2 \\sigma_{rr}^2\n- \\left(\\sigma_{\\theta\\theta}^2 + \\sigma_{\\phi\\phi}^2\\right)\\right] =\n- n \\frac{\\partial \\Phi}{\\partial r}\n\\end{equation}\nand in addition $\\sigma_{\\theta\\theta}^2 = \\sigma_{\\phi\\phi}^2$. In\nthis equation, $\\sigma_{rr}$ and $\\sigma_{\\theta\\theta}$ may both be\nfunctions of $r$.\n\nAlthough the spherical symmetry in configuration space means that $f$\ndoes not depend on $\\theta$ or $\\phi$, it can clearly depend on\n$v_\\theta$ and $v_\\phi$. Thus, $\\sigma_{rr}^2$ does not have to equal\n$\\sigma_{\\theta\\theta}^2$. The orbit distribution can be anisotropic,\nand the degree of anisotropy affects the radial distribution $n(r)$.\nThis anisotropy is usually quantified by\n\\begin{equation}\n\\beta(r)  = 1 - \\frac{\\sigma_{\\theta\\theta}^2}{\\sigma_{rr}^2}\n\\end{equation}\nWe can then write:\n\\begin{equation}\n \\label{eq:sphericalmass}\nM(<r) = \\frac{rv_c^2}{G} = - \\frac{r \\sigma_{rr}^2}{G}\n\\left[\\frac{\\dd{\\ln n}}{\\dd{\\ln r}}\n+ \\frac{\\dd{\\ln \\sigma_{rr}^2}}{\\dd{\\ln r}} + 2 \\beta\\right],\n\\end{equation}\nwhere we have defined $v_c$ as the circular velocity of a stable\ncircular orbit. In this equation, the right-hand side consists of\nin-principle observables. In particular, $n(r)$ is the tracer density;\nthe potential could be set by other unobserved masses. 
However, in\npractice $\\beta$ proves hard to constrain for most systems, for which\nonly line-of-sight velocities and projected densities are known.\n\nIf we take $\\sigma_{rr}$ to be constant and $\\beta=0$, and we assume\nthat the particles are {\\it self-gravitating} --- meaning that they\nare the source of the potential --- then a solution to the equations\nis given by the singular isothermal sphere:\n\\begin{eqnarray}\nn &\\propto& r^{-2} \\cr\nM(<r) = \\frac{rv_c^2}{G} &=& \\frac{2 r \\sigma_{rr}^2}{G}\n\\end{eqnarray}\nyielding the relation $v_c^2 = 2\\sigma_{rr}^2$, which is a useful\norder-of-magnitude relationship between the circular velocity and the\none-dimensional (for example, line-of-sight) velocity dispersion in\ngravitating systems.\n\n\\section{Virial Theorem}\n\nThe virial equations establish the relationship between kinetic\nand potential energy in collisionless gravitating systems. They are\nobtained as a further moment of the Jeans Equation. Specifically, one\ntakes the first moment of position over the analog of Euler's\nequation. For a time-independent system, in the center-of-mass frame,\nwe can establish the {\\it tensor virial theorem}:\n\\begin{equation}\n2 K_{jk} + W_{jk} = 0\n\\end{equation}\nwhere the internal {\\it kinetic energy tensor} is:\n\\begin{equation}\nK_{jk} = \\frac{1}{2} \\int \\dd{}^3\\vec{x} \\, \\rho \\sigma_{jk}^2\n\\end{equation}\n(where $\\rho$ is the mass density, so for particles of equal mass $m$,\n$\\rho = nm$). The {\\it potential energy tensor} is:\n\\begin{equation}\nW_{jk} = - \\frac{G}{2} \\int \\dd{}^3\\vec{x}'\n\\dd{}^3\\vec{x} \\rho(\\vec{x}') \\rho(\\vec{x}) \\frac{(x_j' - x_j)(x_k' -\nx_k)}{\\left| \\vec{x}' - \\vec{x}\\right|^3}\n\\end{equation}\n\nThe trace of the tensor virial theorem yields the {\\it scalar virial\ntheorem}:\n\\begin{equation}\n2K + W = 0\n\\end{equation}\nwhere $K$ is the total kinetic energy in the center of mass frame, and\n$W$ is the total potential energy. \n\n\\section{Jeans Theorem}\n\nJeans Theorem yields an important tool for modeling equilibrium\nself-gravitating systems. These systems can be described by the set of\norbits of the particles comprising them. The density field resulting\nfrom this distribution of orbits generates a potential. To remain in\nequilibrium, the orbit distribution needs to be stable in that\npotential. Jeans Theorem yields a way of generating orbit\ndistributions that are self-consistent in the sense that they are in\nequilibrium.\n\nWe showed earlier that $f$ is conserved along orbits in phase space:\n\\begin{equation}\n\\frac{\\dd{f}}{\\dd{t}} = 0\n\\end{equation}\nFurther, if $\\Phi(\\vec{x})$ is time-independent, there are six {\\it\nconstants of motion} $C(\\vec{x}, \\vec{v}, t)$ conserved along the\norbit (there must be six because the orbit is fully defined by\n$\\vec{w}(t=0)$).\n\nThe {\\it integrals of motion} are related. These are functions only of\nphase space position that are conserved along orbits:\n\\begin{equation}\n\\frac{\\dd{I(\\vec{x}, \\vec{v})}}{\\dd{t}} = 0 \n\\end{equation}\nEach is also a constant of the motion; so there are at most six of\nthem. The existence of these integrals of motion implies:\n\\begin{equation}\nf(\\vec{x}, \\vec{v}) = f\\left(I_1(\\vec{x}, \\vec{v}),\nI_2(\\vec{x}, \\vec{v}), \\ldots, I_6(\\vec{x}, \\vec{v})\\right)\n\\end{equation}\nIf this were not the case, then $f$ would be an independent integral\nof the motion itself!\n\nAn integral of motion that always exists is total energy. 
It is\nconserved for each particle along its orbit. Under specific\nsymmetries, other useful integrals of motion exist. For example, in\nspherical symmetry, the angular momentum $\\vec{J}$ is conserved; note,\nhowever, that only its amplitude is physically significant in spherical\nsymmetry. Therefore under spherical symmetry all equilibrium\ndistribution functions can be written as $f(E, J)$. \n\nA specific case of interest is the {\\it isothermal sphere}. This\ndistribution results from the choice $f\\propto \\exp(-E/\\sigma^2)$. The\nresulting $f$ can be shown to have a velocity distribution that has a\nGaussian width $\\sigma$ in each dimension. Generically, at large\nradius $\\rho \\propto r^{-2}$ for an isothermal sphere. At these radii\nthe circular velocity $v_c = \\sqrt{2}\\sigma$. The case in which\n$\\rho \\propto r^{-2}$ at all radii is known as the {\\it singular isothermal\nsphere}, because of the infinite value of the density at the center.\n\n\\section{Chandrasekhar Dynamical Friction}\n\nCollisionless, gravitating, dynamical systems exhibit an effect known\nas {\\it dynamical friction} that converts ``bulk'' kinetic energy into\n``internal'' kinetic energy, even in systems with long two-body\nrelaxation times. This effect is calculated in the exercises below,\nwhere it is shown that for a mass $M$ moving through a system with\ndensity $\\rho$ and an isothermal distribution function with velocity\ndispersion $\\sigma$, there is a drag force:\n\\begin{equation}\n\\label{eq:chandra}\n\\frac{\\dd{\\vec{v}_M}}{\\dd{t}} = - \\frac{4\\pi \\ln\\Lambda\nG^2M\\rho}{v_M^3} \\left[\\erf X\n-  \\frac{2X}{\\sqrt{\\pi}} \\exp\\left(-X^2\\right) \\right] \\vec{v}_M\n\\end{equation}\nwhere $X = v_M /(\\sqrt{2} \\sigma)$ and $\\Lambda \\sim M_{\\rm total} / M$.\nFor an initial circular orbit of radius $r_i$, this drag leads to a\ndynamical friction time scale:\n\\begin{equation}\nt_{f} = \\frac{2.6 \\times\n10^{11} \\mathrm{~yr}}{\\ln \\Lambda} \\left[\\frac{r_i}{2\\mathrm{~kpc}}\\right]^2\n\\left[\\frac{v_c}{250\\mathrm{~km~s}^{-1}} \\right]\n\\left[\\frac{10^6 M_\\odot}{M}\\right]\n\\end{equation}\n\n\\section{Tidal radius}\n\nTwo point masses $M$ and $m$ separated by distance $D$ will create a\npotential with a saddle point that separates zones dominated by one\npotential well or the other.  If these two point masses are\norbiting each other with frequency $\\Omega$, a fixed potential can\nonly be found in the frame rotating about the center of mass at\n$\\Omega$ (and this potential is only relevant for telling you how\nstationary objects in that frame will start to move in that\nframe).\n\nThe co-rotating frame by design has the relative $1/r^2$ forces from\nthe two masses roughly balancing each other, meaning the position of\nthe saddle point is defined by the derivatives of the gravitational\nforces, that is, the tidal forces. The dimensional scaling for the\ntidal forces is $1/r^3$, so:\n\\begin{equation}\nr = \\left(\\frac{m}{3M}\\right)^{1/3} D\n\\end{equation}\n\nThis tidal radius limits the size of any system of mass $m$ orbiting a\nbody of mass $M$. Particles or gas more distant than the tidal radius\nbecome unbound from mass $m$. They tend to form {\\it tidal\ntails}. Particles closer to mass $M$ enter orbits that lead mass $m$,\nand particles farther from mass $M$ enter orbits that trail mass $m$.\nThese tidal features are observable for numerous systems near the\nMilky Way. \n\n\\section{Commentary}\n\nThe fact that collisionless systems have a non-trivial phase space is\nof enormous significance. 
It provides another way that each object's\nhistory may be encoded in its dynamics. It also means that the\nproperties of a system have a full six-dimensional structure to their\ndescription. This complicates accurate prediction for $N$-body\ngravitating systems when the true $N$ cannot be achieved computationally.\n\nThe virial theorem is often spoken of in the casual terms that\n$v^2 \\sim GM/r$ for the characteristic $v$, $M$, and $r$ of the\nsystem. While this relation follows from dimensional analysis alone,\nthe virial theorem goes further and is a precise relationship. It is\nclearest to think of the virial theorem establishing the relationship\nbetween $K$ and $W$ for a bound, equilibrium system.  However, as\ndescribed in the problems, one can create definitions of\n``characteristic'' $v$, $M$, and $r$ for which the equation $v^2 =\nGM/r$ holds strictly, and will scale with a constant coefficient in\nhomologous systems.\n\n\\section{Important numbers}\n\n\\section{Key References}\n\n\\begin{itemize}\n  \\item\n    {\\it Galactic\n    Dynamics, \\href{https://ui.adsabs.harvard.edu/abs/2009PhT....62e..56B/abstract}{\\citet{binney09a}}}\n\\end{itemize}\n\n\\section{Order-of-magnitude Exercises}\n\n\\begin{enumerate} \n\\item Assuming the Milky Way halo is isothermal and spherical, given\n    that the circular velocity at the radius of the Sun is 220 km s$^{-1}$,\n    what do you expect the one-dimensional velocity dispersion of dark\n    matter particles to be? What is the mass interior to the Sun?\n\\item What is the typical relaxation time for globular clusters?\n    Galaxies? Clusters of galaxies?\n\\item What are the tidal radii for:\n\\begin{enumerate}\n\\item A Milky Way mass galaxy ($10^{12}$ $M_\\odot$ total) inside a\nComa-mass cluster of galaxies ($10^{15}$ $M_\\odot$), \nat 500 kpc from the cluster center.\n\\item An LMC-mass galaxy ($10^{11}$ $M_\\odot$ total) inside a\nMilky Way-mass galaxy, at 50 kpc from the galactic center.\n\\item A relatively massive globular cluster ($10^5$ $M_\\odot$) inside a\nMilky Way mass galaxy, at 10 kpc from the galactic center.\n\\end{enumerate}\n\\item What is the dynamical friction time scale for:\n\\begin{enumerate}\n\\item A Milky Way mass galaxy ($10^{12}$ $M_\\odot$ total) inside a\nComa-mass cluster of galaxies ($\\sigma \\sim 1000$ km s$^{-1}$), \nat 500 kpc from the cluster center.\n\\item An LMC-mass galaxy ($10^{11}$ $M_\\odot$ total) inside a\nMilky Way-mass galaxy ($\\sigma\\sim 150$ km s$^{-1}$), at 50 kpc from\nthe galactic center.\n\\item A relatively massive globular cluster ($10^5$ $M_\\odot$) inside a\nMilky Way mass galaxy, at 10 kpc from the galactic center.\n\\end{enumerate}\n\\end{enumerate}   \n\n\\section{Analytic Exercises}\n\n\\begin{enumerate}\n\\item In this exercise, we derive the equation for the relaxation time\ngiven in Equation \\ref{eq:relax}. Relaxation refers to the effect of\ngranularity in the potential, arising because there are a finite\nnumber of particles in the system, and driven primarily by two-body\ninteractions. A distribution function that forms an equilibrium\nsystem, if sampled by a finite number of particles, will slowly (or\nquickly) wander away from that equilibrium, on a time scale associated\nwith the relaxation time. We will calculate this for a system of mass\n$M$, consisting of $N$ particles with mass $m$, within some system\nsize $R$. 
For this exercise we define the crossing time $t_c=R/v$,\nwhere $v$ is the typical velocity of a particle.\n\\begin{enumerate}\n\\item What does the virial theorem tell us about the crossing time\n$t_c$?\n\\begin{answer}\nThe typical velocity is defined by the virial relation $v^2\\sim GM/R$,\nso the crossing time is $t_c \\sim R^{3/2}/(GM)^{1/2}$ (related, as it must be,\nto the inverse square root of the density: $t_c \\sim 1/\\sqrt{G\\rho}$).\n\\end{answer}\n\\item \\label{q:deflection}\nImagine two particles of mass $m$ passing by each other with\nspeed $v$ with an impact parameter $b$, defined as their separation at\ninfinity normal to their relative velocity. Argue from a heuristic\npoint of view why the velocity perturbation normal to the original\nvelocity will scale $\\propto Gm/bv$ (in detail it is $\\Delta\nv_\\perp \\approx 2Gm/bv$).\n\\begin{answer}\nAt closest approach the relative acceleration perpendicular to the\noriginal velocity is $Gm/b^2$, and this closest approach lasts a time\nof order $b/v$. Multiplying these together yields the $Gm/bv$ scaling.\n\\end{answer}\n\\item Consider interactions in some range of impact parameters,\nbetween $b$ and $b+\\dd{b}$. During one crossing time of a particle,\nwhat is the mean $\\langle \\delta v_\\perp\\rangle$ and mean-squared\n$\\langle \\delta v_\\perp^2\\rangle$ perturbation that these\ninteractions cause on the velocity of the particle perpendicular to\nits motion?\n\\item Below $b_{\\rm min} = Gm/v^2$ our assumptions break down. We will\ntake the perhaps questionable route of ignoring these close\nencounters. Consider only interactions with impact parameters\nbetween $b_{\\rm min}$ and $R$, and express the total\n$\\langle \\delta v_\\perp^2\\rangle$ over all encounters in a crossing\ntime. Express the result in terms of $\\Lambda = R/b_{\\rm min}$.\n\\item Define the relaxation time as the time it takes for the total\nfractional perturbation in velocity to reach unity. How many crossing\ntimes does it take?\n\\item Approximate the answer one step further, using the\nvirial theorem to show $\\Lambda \\sim N$, and thus expressing the\nnumber of crossings just in terms of $N$.\n\\end{enumerate}\n\\item Show that Liouville's Theorem follows from the collisionless\nBoltzmann equation. \n\\item Verify the expressions for the first-order Jeans Equation. \n\\item Show that Equation \\ref{eq:spherical} follows from\nEquation \\ref{eq:steady} under spherical symmetry in configuration\nspace. \n\\begin{answer}\nFirst we need to recognize that if there is spherical symmetry in\nconfiguration space, then in spherical coordinates $\\sigma_{ij}$ will\nbe diagonal. Furthermore, the derivatives of $\\sigma_{ij}$ with\nrespect to angular coordinates will be zero. 
We can write\n$\\vec{\\nabla}$ in spherical coordinates as:\n\\begin{equation}\n\\vec{\\nabla} = {\\hat e}_r \\partial_r + {\\hat\ne}_\\theta \\frac{1}{r} \\partial_\\theta + {\\hat\ne}_\\phi \\frac{1}{r\\sin\\theta} \\partial_\\phi\n\\end{equation}\nWe are applying this operator to the tensor $\\mathbf{\\sigma^2}$ so we\nneed to work out how it acts on basis vectors:\n\\begin{equation}\n\\begin{array}{ccc}\n\\partial_r {\\hat e}_r = 0 & \n\\partial_r {\\hat e}_\\theta = 0 &\n\\partial_r {\\hat e}_\\phi = 0 \\cr\n\\partial_\\theta {\\hat e}_r = {\\hat e}_\\theta & \n\\partial_\\theta {\\hat e}_\\theta = - {\\hat e}_r &\n\\partial_\\theta {\\hat e}_\\phi = 0 \\cr\n\\partial_\\phi {\\hat e}_r = \\sin\\theta {\\hat e}_\\phi & \n\\partial_\\phi {\\hat e}_\\theta = \\cos\\theta {\\hat e}_\\phi &\n\\partial_\\phi {\\hat e}_\\phi = - \\cos\\theta {\\hat e}_\\theta - \\sin\\theta {\\hat e}_r \\cr\n\\end{array}\n\\end{equation}\nThen we can write the left-hand side of Equation \\ref{eq:steady} as:\n\\begin{equation}\n\\left[{\\hat e}_r \\partial_r +\n{\\hat e}_\\theta \\frac{1}{r} \\partial_\\theta + \n{\\hat e}_\\phi \\frac{1}{r\\sin\\theta} \\partial_\\phi\\right] \\cdot\n\\left[ \\langle n\\rangle \\sigma_{ij}^2 {\\hat e}_i {\\hat e}_j \\right]\n\\end{equation}\nwhere we use the Einstein summation convention. The first term yields:\n\\begin{equation}\n{\\hat e}_r \\partial_r \\left(\\langle n\\rangle \\sigma_{rr}^2\\right).\n\\end{equation}\nThe second term yields:\n\\begin{eqnarray}\n{\\hat e}_\\theta \\cdot \\frac{1}{r} \\partial_\\theta \\left[ \\langle\nn\\rangle \\sigma_{ij}^2 {\\hat e}_i {\\hat e}_j \\right] &=& \n{\\hat e}_\\theta \\cdot \\frac{1}{r} \\partial_\\theta\n\\left[\\langle n \\rangle \\left(\\sigma_{rr}^2 {\\hat e}_r {\\hat e}_r +\n\\sigma_{\\theta\\theta}^2 {\\hat e}_\\theta {\\hat e}_\\theta +\n\\sigma_{\\phi\\phi}^2 {\\hat e}_\\phi {\\hat e}_\\phi\\right) \\right] \n\\cr\n&=& {\\hat e}_\\theta \\cdot \\frac{\\langle n \\rangle}{r}\n\\left[\n\\sigma_{rr}^2 {\\hat e}_\\theta {\\hat e}_r +\n\\sigma_{rr}^2 {\\hat e}_r {\\hat e}_\\theta -\n\\sigma_{\\theta\\theta}^2 {\\hat e}_r {\\hat e}_\\theta -\n\\sigma_{\\theta\\theta}^2 {\\hat e}_\\theta {\\hat e}_r\\right]\\cr\n&=& \\frac{\\langle n \\rangle}{r}\n\\left[\n\\sigma_{rr}^2 {\\hat e}_r -\n\\sigma_{\\theta\\theta}^2 {\\hat e}_r\\right]\\cr\n&=& \\frac{\\langle n \\rangle}{r}\n\\left[\n\\sigma_{rr}^2 -\n\\sigma_{\\theta\\theta}^2 \\right] {\\hat e}_r\n\\end{eqnarray}\nThe third term yields:\n\\begin{eqnarray}\n{\\hat e}_\\phi \\frac{1}{r\\sin\\theta} \\cdot \\partial_\\phi \n\\left[ \\langle n\\rangle \\sigma_{ij}^2 {\\hat e}_i {\\hat e}_j \\right]\n&=& \n{\\hat e}_\\phi \\frac{1}{r\\sin\\theta} \\cdot \\partial_\\phi \n\\left[\\langle n \\rangle \\left(\\sigma_{rr}^2 {\\hat e}_r {\\hat e}_r +\n\\sigma_{\\theta\\theta}^2 {\\hat e}_\\theta {\\hat e}_\\theta +\n\\sigma_{\\phi\\phi}^2 {\\hat e}_\\phi {\\hat e}_\\phi\\right) \\right] \n\\cr \n&=&\n{\\hat e}_\\phi \\frac{\\langle n \\rangle}{r\\sin\\theta} \\cdot\n\\left(\\sigma_{rr}^2 \\sin\\theta {\\hat e}_\\phi {\\hat e}_r +\n\\sigma_{rr}^2 \\sin\\theta {\\hat e}_r {\\hat e}_\\phi +\n\\sigma_{\\theta\\theta}^2 \\cos\\theta {\\hat e}_\\phi {\\hat e}_\\theta +\n\\right. \\cr\n&& \n\\left.\n\\sigma_{\\theta\\theta}^2 \\cos\\theta {\\hat e}_\\theta {\\hat e}_\\phi - \n\\sigma_{\\phi\\phi}^2 \\sin\\theta {\\hat e}_r {\\hat e}_\\phi -\n\\sigma_{\\phi\\phi}^2 \\sin\\theta {\\hat e}_\\phi {\\hat e}_r - \\right. 
\\cr\n& &\n\\left.\n\\sigma_{\\phi\\phi}^2 \\cos\\theta {\\hat e}_\\theta {\\hat e}_\\phi -\n\\sigma_{\\phi\\phi}^2 \\cos\\theta {\\hat e}_\\phi {\\hat e}_\\theta\n\\right) \\cr\n&=&\n \\frac{\\langle n \\rangle}{r} \\left(\n \\sigma_{rr}^2 - \\sigma_{\\phi\\phi}^2 \\right) {\\hat e}_r +\n \\frac{\\langle\n n \\rangle \\cos\\theta}{r \\sin\\theta} \\left(\\sigma_{\\theta\\theta}^2\n - \\sigma_{\\phi\\phi}^2\\right) {\\hat e}_\\theta\n\\end{eqnarray}\nSince under spherical symmetry the gradient of the potential has no\ncomponents in the angular directions, the ${\\hat e}_\\theta$ term must also be\nzero:\n\\begin{equation}\n\\sigma_{\\theta\\theta} = \\sigma_{\\phi\\phi}\n\\end{equation}\nand the radial terms then add together and yield Equation\n(\\ref{eq:spherical}):\n\\begin{equation}\n\\frac{\\partial(n \\sigma_{rr}^2)}{\\partial r}\n+ \\frac{n}{r} \\left[2 \\sigma_{rr}^2\n- \\left(\\sigma_{\\theta\\theta}^2 + \\sigma_{\\phi\\phi}^2\\right)\\right] =\n- n \\frac{\\partial \\Phi}{\\partial r}\n\\end{equation}\n\\end{answer}\n\\item Using the spherically symmetric first-order Jeans Equation,\nEquation (\\ref{eq:spherical}), show that\nEquation \\ref{eq:sphericalmass} holds.\n\\item For a singular isothermal sphere with $n\\propto r^{-2}$ and\n$\\sigma_{rr}^2$ a constant, but $\\beta > 0$, what choice of $\\beta$\nwill make the radial velocity dispersion equal to the \nvelocity of a stable circular orbit in the potential?\n\\item Starting with the first-order Jeans Equation,\nEquation (\\ref{eq:euler}), take another moment with respect to\nposition. Rearrange in terms of the kinetic and potential energy\ntensors to obtain the tensor virial theorem.\n% \\item Define the characteristic v, M, R for virial theorem\n\\item We will derive the Plummer model for a spherical equilibrium set\nof orbits. Under Jeans Theorem, all equilibrium models will have\n$f(E,J)$.  We can take:\n\\begin{equation}\nf = \\left\\{ \\begin{array}{ll}\nk_1 (-E)^p & E<0 \\cr\n0 & E>0 \\end{array} \\right.\n\\end{equation}\nUnder this form, we can find self-consistent combinations of $\\rho$\nand the gravitational potential $\\phi$.\n\\begin{enumerate}\n\\item Show that under this form, the density becomes:\n\\begin{equation}\n\\rho = k_2 \\left(-\\phi\\right)^n\n\\end{equation}\nfor $n=p+3/2$. You may find the following integral useful:\n\\begin{equation}\n\\int_0^a {\\dd x} x^m \\left(a^n - x^n\\right)^p =\n\\frac{a^{m+1+np} \\Gamma\\left((m+1)/n\\right) \\Gamma\\left(p+1\\right)}\n{n \\Gamma\\left[(m+1)/n +p + 1\\right]}\n\\end{equation}\n\n\\begin{answer}[Author: Jiarong Zhu]\nThe distribution function $f$:\n\\begin{equation}\nf = \n\\begin{cases}\nk_1 (-E)^p & {E < 0} \\\\\n0 & {E > 0}\n\\end{cases}\n\\end{equation}\n$f$ is non-zero only when $E<0$, a fact that will be used in the following calculation. 
The energy $E$ takes the form:\n\\begin{equation}\n    E = \\frac{1}{2}m v^2 + m\\phi\n\\end{equation}\nThus $E<0$ corresponds to $v<v_{max}=\\sqrt{2(-\\phi)}$.\nThe density is $\\rho(\\vec x) = m n(\\vec x)$, where $n$ is the number density and can be calculated by integrating $f(\\vec x,\\vec v)$ over the velocity space:\n\\begin{align*}\nn &= \\int f \\,d^3 \\vec v  \\\\\n&= \\int_{0}^{\\infty} 4\\pi v^2 f \\,dv  \\\\\n&= \\int_{0}^{v_{max}} 4 \\pi v^2 k_1 \\left(-\\frac{1}{2}m v^2 -  m \\phi\\right)^p \\,dv + \\int_{v_{max}}^{\\infty}4\n\\pi v^2 \\cdot 0 \\,dv\\\\\n&= 4 \\pi\nk_1 \\left(\\frac{m}{2}\\right)^p \\int_{0}^{v_{max}=\\sqrt{2(-\\phi)}} v^2\n(2(-\\phi) - v^2)^p\\,dv \\\\\n&= 4 \\pi k_1 \\left(\\frac{m}{2}\\right)^p (-2\\phi)^{\\frac{3+2p}{2}}\\frac{\\Gamma (3/2) \\Gamma (p+1)}{2\\Gamma (3/2 +p+1)}\\\\\n&\\propto (-\\phi)^{p+3/2}\n\\end{align*} \n\\end{answer}\n\n\\item Demonstrate that the potential:\n\\begin{equation}\n\\phi = - \\frac{GM}{R} \\frac{1}{\\left(1 + r^2/R^2\\right)^{1/2}}\n\\end{equation}\nsatisfies this relation for $n=5$. \n\n\\begin{answer}[Author: Jiarong Zhu]\nWe want to show that the potential below satisfies the relation  $\\rho = k_2 (-\\phi)^5$. \n\\begin{equation}\n    \\phi = - \\frac{GM}{R}\\frac{1}{(1+r^2/R^2)^{1/2}}\n\\end{equation}\n\nWe know that the potential satisfies Poisson's equation:\n\\begin{equation}\n   \\nabla ^2 \\phi = 4 \\pi G \\rho \n\\end{equation}\nFor spherical equilibrium, the potential depends only on the radius $r$. Calculating the LHS of Poisson's equation with this potential plugged in:\n\n\n\\begin{align*}\n    \\nabla ^2 \\phi &= \\frac{1}{r^2}\\frac{\\partial}{\\partial r}(r^2 \\frac{\\partial \\phi}{\\partial r})\\\\\n    & = \\frac{1}{r^2}\\frac{\\partial}{\\partial r}(r^2 \\frac{GM}{2R}\\frac{2r/R^2}{(1+r^2/R^2)^{3/2}})\\\\\n    &= \\frac{GM}{R^3}\\frac{1}{r^2}\\frac{\\partial}{\\partial r}(\\frac{r^3}{(1+r^2/R^2)^{3/2}}) \\\\\n    &= \\frac{GM}{R^3}(\\frac{3(1+r^2/R^2)^{3/2}-r^2 \\frac{3}{2} (1+r^2/R^2)^{1/2}\\frac{2}{R^2}}{(1+r^2/R^2)^3})\\\\\n    &= \\frac{3GM}{R^3(1+r^2/R^2)^{5/2}} \\\\\n    & \\propto (-\\phi)^5\n\\end{align*}\n$\\therefore \\rho \\propto \\nabla ^2 \\phi \\propto (-\\phi)^5 $\n\\\\\nThe proportionality constant can easily be fixed by assigning $k_1$ properly.\n\nThe Plummer model density profile, sometimes used for modeling\nspherical systems like globular clusters, is therefore:\n\\begin{equation}\n\\rho \\propto \\frac{1}{\\left(1 + r^2/R^2\\right)^{5/2}}.\n\\end{equation}\n\\end{answer}\n\n\\end{enumerate}\n\\item We will derive the isothermal sphere model for a spherical\nequilibrium set of orbits. Under Jeans Theorem, all equilibrium models\nwill have $f(E,J)$.  We can take:\n\\begin{equation}\nf = \nk \\exp\\left(-E/2\\sigma^2\\right)\n\\end{equation}\nThere is something very unusual about this assumption, because it\nincludes positive as well as negative energies. As it turns out, this\nfeature has to do with the result we find later that the total size\nand mass of the system are infinite. \n\\begin{enumerate}\n\\item Show that:\n\\begin{equation}\n\\rho = m K \\left(4\\pi\\sigma^2\\right)^{3/2} \\exp\\left(-\\phi/2 \\sigma^2\\right)\n\\end{equation}\n\\item Show that the density must satisfy the equation:\n\\begin{equation}\n\\label{eq:lane-emden}\n\\frac{\\partial}{\\partial\nr} \\left(r^2 \\frac{\\partial \\ln \\rho}{\\partial r} \\right) =\n- \\frac{2\\pi G}{\\sigma^2} r^2 \\rho\n\\end{equation}\n\\item Using the hydrostatic equation and the ideal gas law, and\nassuming isothermal gas, show that $\\sigma^2$ is the equivalent of\n$kT/2m$ --- i.e. 
$m\\sigma^2$ is like the energy per degree of freedom.\n\\item Find the singular solution with $\\rho\\propto r^{-2}$ everywhere,\nand express $\\rho(r)$, $M(<r)$, and $v_c(r)$ as functions of\n$\\sigma^2$ and $r$.\n\\end{enumerate}\nWe will show in a numerical exercise that we can numerically integrate\nthe equations to a better solution by imposing a zero slope at $r=0$\n(but it still has infinite mass).\n\\item Lowered isothermal sphere model\n\\item King model\n\\item Tidal radius\n\\item We will here derive Equation (\\ref{eq:chandra}) for\nChandrasekhar dynamical friction. We will calculate the drag on a\nparticle of mass $M$ moving at velocity $\\vec{v}_M$ through a system\nof density $\\rho$ with a Gaussian velocity dispersion $\\sigma$ in each\nCartesian dimension.\n\\begin{enumerate}\n\\item We use the same approach as for calculating the relaxation\ntime. Consider a two-body interaction in the frame of the particle\nwith mass $M$, with a particle in the field with velocity\n$\\vec{v}$. Using the results of Exercise \\ref{q:deflection}, estimate\nthe change of the velocity of the second particle along the\noriginal direction ${\\hat v}$. \n\\begin{answer}\nThe perpendicular velocity deflection is $2GM/bv$. However, the\nvelocity amplitude should not change. This means that the deflection\nangle should satisfy:\n\\begin{equation}\n\\sin \\theta_D = \\frac{2GM/ bv}{v} = \\frac{2GM}{bv^2}\n\\end{equation}\nIf we assume that the velocities are typically not much perturbed on\naverage, we can write the deflection along the original direction as:\n\\begin{eqnarray}\n\\left|\\Delta \\vec{v}_{||}\\right| &=&\nv\\left(1- \\cos\\theta_D\\right) \\cr\n&\\approx&\nv\\left(\\frac{\\theta_D^2}{2}\\right) \\cr\n&\\approx&\n\\frac{2G^2M^2}{b^2 v^3}\n\\end{eqnarray}\n\\end{answer}\n\\item What is the velocity change of the particle of mass $M$ due to\nthis interaction?\n\\begin{answer}\nConservation of momentum says it must be:\n\\begin{equation}\n\\left|\\Delta \\vec{v}_{M,||}\\right| = \\frac{m}{M} \n\\left|\\Delta \\vec{v}_{||}\\right| \\approx\n\\frac{2G^2Mm }{b^2 v^3}\n\\end{equation}\n\\end{answer}\n\n\\item If the particle of mass $M$ travels at velocity $\\vec{v}_M$\nthrough a uniform sea of particles of mass $m$ with a velocity\ndistribution $f(\\vec{v}_m)$, what is its rate of encounters with\nparticles within some differential $\\dd{}^3\\vec{v}_m$ around a\nparticular velocity $\\vec{v}_m$, and within a differential $\\dd{b}$\naround impact parameter $b$?\n\\begin{answer}\nDefine the relative velocity as $\\vec{v}_0 = \\vec{v}_m - \\vec{v}_M$. Then\nthe particle of mass $M$ sweeps out particles of mass $m$ in the\nspecified annulus around it with the rate:\n\\begin{equation}\n2\\pi b \\dd{b} v_0\n\\end{equation}\nUsing the distribution function then, the rate is:\n\\begin{equation}\nr(v_0) = 2\\pi b \\dd{b} v_0 f(\\vec{v}_m)\n\\dd{}^3\\vec{v}_m\n\\end{equation}\n\\end{answer}\n\\item In the case of the perpendicular deflection, the mean velocity\ndifference averages to zero. However, the mean velocity difference\nalong the parallel direction is always in the same direction. Assume\nsome $b_{\\rm max}$ is set by the size of the system in question, and\ndefine:\n\\begin{equation}\n\\Lambda = \\frac{b_{\\rm max}v_0^2}{GM} \\sim \\frac{M_{\\rm total}}{M}\n\\end{equation}\nwhere the approximation is from the virial relation. 
Derive the change\nin $\\vec{v}_M$ per unit time from particles around velocity\n$\\vec{v}_m$.\n\\begin{answer}\n\\begin{eqnarray}\n\\left.\\frac{\\dd\\vec{v}_M}{\\dd{t}} \\right|_m  &=&\n\\vec{v}_0 f(\\vec{v}_m) \\dd{}^3 \\vec{v}_m\n\\int_{b_{\\rm min}}^{b_{\\rm max}} \\dd{b} (2\\pi b) \\frac{2G^2Mm}{b^2 v_0^3} \\cr\n&=& \\frac{4\\pi \\ln\\Lambda \\, G^2 M m}{v_0^3} f(\\vec{v}_m) \\dd{}^3 \\vec{v}_m \\, \\vec{v}_0\n\\end{eqnarray}\nwhere the logarithmically divergent integral over impact parameter has been\ncut off at $b_{\\rm min} \\sim GM/v_0^2$, below which the small-angle\napproximation breaks down, so that $\\ln(b_{\\rm max}/b_{\\rm min}) = \\ln\\Lambda$.\n\\end{answer}\n\\end{enumerate}\n\\item Imagine two masses, $M_1$ and $M_2$, orbiting in a circular\norbit around each other with a constant separation $D$. Transform into\na frame rotating with the orbit, and find the effective potential\n$\\Phi_{\\rm eff}$ such that in the rotating frame:\n\\begin{equation}\n{\\rm d}\\vec{v}/{\\rm d}t = - \\vec{\\nabla}\\Phi_{\\rm eff}.\n\\end{equation}\nPlot contours of this effective potential in the orbital plane; you\nshould see a number of stationary points (the Lagrange points), one of\nwhich is between the two masses. Then, for $M_2\\ll M_1$, show that the\nstationary point between the two masses is a distance from the small\nmass of:\n\\begin{equation}\nr = \\left(\\frac{M_2}{3M_1}\\right)^{1/3} D\n\\end{equation}\nThat distance is approximately the tidal radius.\n\\end{enumerate}\n\n\\section{Numerics and Data Exercises}\n\n\\begin{enumerate}\n\\item Using the inner boundary condition that $\\partial\\rho/\\partial\nr=0$, integrate Equation \\ref{eq:lane-emden} outwards to derive the\nshape of an isothermal sphere with a core.\n\\item Lowered isothermal sphere.\n\\item $\\Phi$, $n$ and $\\beta$\n\\item Dynamics estimates from Gaia\n\\item Globular cluster radial profiles.\n\\item Tidal potential\n\\end{enumerate}\n\n\\bibliographystyle{apj}\n\\bibliography{exex}  \n", "meta": {"hexsha": "b60c14b30ac5ff3ea601d2500481c8160c995d00", "size": 31581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/dynamics-text.tex", "max_stars_repo_name": "blanton144/exex", "max_stars_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/dynamics-text.tex", "max_issues_repo_name": "blanton144/exex", "max_issues_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/dynamics-text.tex", "max_forks_repo_name": "blanton144/exex", "max_forks_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.3363874346, "max_line_length": 144, "alphanum_fraction": 0.7210031348, "num_tokens": 9868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5983448956613229}}
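As a worked instance of Equation \ref{eq:relax}, take assumed, typical globular cluster numbers of $N \sim 10^5$, $R \sim 5$ pc and $v \sim 10$ km s$^{-1}$:
\begin{equation*}
t_{\rm cross} = \frac{R}{v} \approx 1.5\times 10^{13}\mathrm{~s} \approx 5\times 10^{5}\mathrm{~yr}, \qquad
t_{\rm relax} \sim \frac{0.1 \times 10^5}{\ln 10^5}\, t_{\rm cross} \approx 870\, t_{\rm cross} \approx 4\times 10^{8}\mathrm{~yr},
\end{equation*}
which is indeed short compared to globular cluster ages of order $10^{10}$ yr, consistent with the statement in the text.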
{"text": "% Sequential IMPLEMENTATION\n\\chapter{Sequential Implementation}\n\\label{chapter:sequential}\nIn order to better understand the algorithm, we have started with a basic sequential implementation of it in C++. While this step could be done in any language, we have chosen to work with C++, as it would allow us to re-use pieces of code for the parallel CUDA implementations, described further in this report. Running this version with a large number of options will likely result in a significant amount of computation time. However, the purpose of this implementation is rather a proof of concept that the algorithm produces correct approximations, as well as to provide a set of results, which can be used to test against with the other implementations.\n\nThe algorithm described in the book is used to price one option at a time and the natural way to start a sequential implementation would be to create a single function that prices one option. Looping through all options in the data set and calling this function for each of them will then produce the end results. Pseudo-code in Algorithm~\\ref{alg:sequential} describes the approach we took based on the book and articles by Hull and White. Note that real is a data type that can be either single or double precision floating point number based on the required accuracy.\n\nThe implementation iterates through all given options, constructs a trinomial tree for each of them and propagates prices back through the tree, obtaining the price approximations for each option and returning them in the end. The algorithm follows the intuition provided in the previous chapter \\ref{chapter:hullwhitemodel}. The focus of this implementation is on correctness and simplicity.\n\n\\pagebreak\n\\section{Algorithm Description}\n\\paragraph{Precomputation}\nPricing of one option starts with computing its constants such as tree width and rate step, tree height and time step, and other values needed to solve the formulas in Hull and White model. Afterwards, for each width step j, rate and probabilities (up, middle, down) are precomputed for use during both forward and backward propagations.\n\n\\begin{algorithm}[H]\n    \\DontPrintSemicolon\n    \\caption{Sequential implementation\\label{alg:sequential}}\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n\n    \\underline{function ComputeOptionPrice}\\;\n    \\Input{option : \\{ StrikePrice, Maturity, Length, TermUnit, TermStepCount, ReversionRate, Volatility, Type \\}\\\\yields : \\{ [Prices], [Timesteps] \\}}\n    \\Output{Price approximation for the option}\n    \\;\n    Pre-compute probabilities for all width steps j\\;\n    c : OptionConstants = Compute constants for the option\\;\n    \\tcc{Option constants include:}\n    \\tcc{c.t : int - option length}\n    \\tcc{c.X : real - option strike price}\n    \\tcc{c.dt : real - time step (height)}\n    \\tcc{c.dr : real - rate step (width)}\n    \\tcc{c.jmax : int - max. j of the tree}\n    \\tcc{c.width : int - tree width ($2 * \\text{c.jmax} + 1$)}\n    \\tcc{c.height : int - tree height}\n    \\tcc{c.type : CALL | PUT - option type}\n    \\;\n    \\tcc{Create an array of alphas and set the first alpha to the initial dt-period interest rate}\n    alphas : real[$\\text{c.height} + 1$]\\;\n    alphas[0] = Compute yield at c.dt\\;\n\\end{algorithm}\n\n\\pagebreak\n\\paragraph{Forward propagation}\nThe purpose of forward propagation is to compute an array of alphas of size tree height + 1 that will be used during backward propagation. 
The first alpha is set to the initial dt-period interest rate. To capture the tree values at any given time step, only the current and previous tree levels of size tree width are needed; these two arrays are named $\\mathit{Qs}$ and $\\mathit{QsCopy}$. The single starting value in the middle of the tree (the root of the tree) in the $\\mathit{Qs}$ array is initialized to 1\\$.\n\nAfter the arrays are initialized, the program iterates through time steps along the tree height. At each time step, it goes through the values Q computed in the previous step. Every value contributes to three values in the next time step ($\\mathit{QsCopy}$) as illustrated in figure~\\ref{fig:seqforward}, according to the precomputed rates and probabilities. Note that this is an example of standard branching and there are also a bottom and a top branching, see fig.~\\ref{fig:background:allbranchings}. After all $\\mathit{Qs}$ in the next step are computed, their values are aggregated to compute the next alpha. Lastly, arrays $\\mathit{Qs}$ and $\\mathit{QsCopy}$ are swapped and $\\mathit{QsCopy}$ is reset to zeros for the next iteration. Note that this approach combines stages 1 and 2 described in section~\\ref{section:hullwhite:forwardpropagation} in a single iteration of the forward propagation loop. \n\n\\begin{figure}[H]\n    \\centering\n    \\def\\svgwidth{0.5\\textwidth}\n\t\\caption{Forward propagation - computing the next step}\n    \\input{img/seqforward.pdf_tex}\n\t\\source{Compiled by the authors}\n\t\\label{fig:seqforward}\n\\end{figure}\n\n\\begin{algorithm}[H]\n    \\DontPrintSemicolon\n    \\setcounter{AlgoLine}{17}\n    \\caption{Sequential implementation - forward propagation\\label{alg:sequential-forward}}\n    \n    \\tcc{Forward propagation}\n    Qs = real[c.width]\\;\n    QsCopy = real[c.width]\\;\n    Qs[c.jmax] = 1\\tcc*{Set initial node to 1\\textdollar}\n    \\;\n    \\tcc{Iterate through nodes along tree height}\n    \\For{$i = 0$ \\KwTo $\\mathit{c.height} - 1$}{\n        \\tcc{Compute the highest allowed j index on step i}\n        jhigh : int = min(i, c.jmax)\\;\n        alpha : real = alphas[i]\\;\n        \\;\n        \\tcc{Iterate along width between j indexes on step i}\n        \\For{$j = -jhigh$ \\KwTo jhigh} {\n            Compute and add to QsCopy on $\\text{j} + 1$, j, $\\text{j}-1$\n        }\n        \\;\n        \\tcc{Iterate along width between j indexes on step $i + 1$}\n        jhigh1 : int = min($\\text{i}+1$, c.jmax)\\; \n        alpha\\_p1 : real = 0\\;\n        \\For{$j = -\\mathit{jhigh1}$ \\KwTo jhigh1}{\n            Aggregate alpha\\_p1 based on QsCopy[j]\n        }\n        \\;\n        Compute alphas[$\\text{i}+1$] based on alpha\\_p1\\;\n        Qs = QsCopy\\;\n        Fill QsCopy with 0\\;\n    }\n\\end{algorithm}\n\n\\pagebreak\n\\paragraph{Backward propagation}\nAfter all alphas are computed, they are carried over to backward propagation along with two arrays of size tree width. These arrays, called Prices and $PricesCopy$, are used to store the current and previous tree levels similarly to forward propagation. Prices are initialized to 100\\$, which represents the payoff at bond maturity.\n\nAfterwards, the program iterates through time steps along the tree height starting from the end of the tree. At each time step, the values at step $i-1$ in $PricesCopy$ are computed from three values in $Prices$ at step $i$ using alpha at $i$ and the precomputed probabilities as illustrated in figure~\\ref{fig:seqbackward}. 
If the current time step is the option maturity, every computed price is combined with the option strike price to form the option payoff, taking into account whether the option is a call or a put. Lastly, arrays Prices and $PricesCopy$ are swapped and $PricesCopy$ is reset to zeros for the next iteration.\n\n\\begin{figure}[H]\n    \\centering\n    \\def\\svgwidth{0.5\\textwidth}\n\t\\caption{Backward propagation - computing the previous step}\n    \\input{img/seqbackward.pdf_tex}\n\t\\label{fig:seqbackward}\n\\end{figure}\n\n\\begin{algorithm}[H]\n    \\DontPrintSemicolon\n    \\setcounter{AlgoLine}{44}\n    \\caption{Sequential implementation - backward propagation\\label{alg:sequential-backward}}\n    \n    \\tcc{Backward propagation}\n    Prices : real[c.width]\\;\n    PricesCopy : real[c.width]\\;\n    Fill Prices with 100 \\tcc*{Initialize prices to 100\\textdollar}\n    \\;\n    \\For{$i = \\mathit{c.height} - 1$ \\KwTo 0} {\n        jhigh : int = min(i, c.jmax)\\;\n        alpha : real = alphas[i]\\;\n        \\;\n        \\For{$j = -\\mathit{jhigh}$ \\KwTo jhigh} {\n            jind : int = j + c.jmax\\;\n            Compute res based on Prices at $\\text{j}+1$, j, $\\text{j}-1$\\;\n            \\;\n            \\eIf{Step i is the option maturity}{\n                \\eIf(\\tcc*[f]{Call option}){c.type is CALL}{\n                    PricesCopy[jind] = max($\\text{res} - \\text{c.X}, 0$)\n                }(\\tcc*[f]{Put option}){               \n                    PricesCopy[jind] = max($\\text{c.X} - \\text{res}, 0$)\n                }\n            }{\n                PricesCopy[jind] = res\n            }\n        }\n        \n        Prices = PricesCopy\\;\n        Fill PricesCopy with 0\\;\n    }\n    \\;\n    \\tcc{Return the calculated current option price}\n    \\Return Prices[c.jmax]\n\\end{algorithm}\n\n\\section{Validation}\\label{section:sequential:validation}\n\nResults obtained by running this implementation will be used for validation of the parallel algorithms, so it is important that they are fully correct. We compared our intermediate array values of $\\mathit{alphas}$, $\\mathit{Qs}$ and $Prices$ along with the final results with values provided by our supervisor and made sure they are the same within a margin of error.\n\nTable~\\ref{table:book-results} compares the value of a three-year put option on a nine-year zero-coupon bond with a strike price of 63, mean-reversion rate $a = 0.1$ and volatility $\\sigma = 0.01$, which is an example option in Hull \\& White~\\cite[pg. 706]{ofod}. The left table shows book results~\\cite[pg. 707]{ofod} and the right table shows our results for the same option with different time steps. Our approach is fully numerical, while their tree results are semi-analytic, since they do not build a tree for the whole nine-year bond, but only for the three-year option and then compute the rest using analytic formulas. Despite this fact, our result for daily time steps, i.e. $365 \\times 9$ steps for the full tree, is within $0.02\\%$ of their analytic result.\n\n\\begin{table}[h]\n\\centering\n\\caption{Sequential results compared on a book example}\n\\source{Compiled by the authors, based on~\\cite[pg. 
707]{ofod}.}\n\\label{table:book-results}\n\\begin{tabular}{|lll|}\n\\hline\nSteps & Tree   & Analytic \\\\ \\hline\n10    & 1.8468 & 1.8093   \\\\\n30    & 1.8172 & 1.8093   \\\\\n50    & 1.8057 & 1.8093   \\\\\n100   & 1.8128 & 1.8093   \\\\\n200   & 1.8090 & 1.8093   \\\\\n500   & 1.8091 & 1.8093   \\\\ \\hline\n\\end{tabular}\n\\begin{tabular}{|ll|}\n\\hline\nSteps per year & Results \\\\ \\hline\n1              & 1.87996 \\\\\n5              & 1.83827 \\\\\n10             & 1.81851 \\\\\n25             & 1.81120 \\\\\n100            & 1.81053 \\\\\n365            & 1.80968 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\newpage\n\\section*{Summary}\nThis chapter provided an overview of our sequential implementation, with a focus on explaining the computations in the forward and backward propagations and how the final results are obtained. Finally, it described how the computations and results were validated against external sources. \nThe following chapter will describe how this implementation was adapted into a parallel one-option-per-thread version in CUDA.\n", "meta": {"hexsha": "7a263bde6e3611397d908aead49add04f5cdf6b8", "size": 10908, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapters/Sequential.tex", "max_stars_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_stars_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_stars_repo_licenses": ["ISC"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-01-11T11:13:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-16T10:20:46.000Z", "max_issues_repo_path": "thesis/chapters/Sequential.tex", "max_issues_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_issues_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_issues_repo_licenses": ["ISC"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/chapters/Sequential.tex", "max_forks_repo_name": "MartinMetaksov/diku.OptionsPricing", "max_forks_repo_head_hexsha": "1734801dddf7faa9e7c6ed1a46acb6ef8492e33c", "max_forks_repo_licenses": ["ISC"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-01-09T21:47:46.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-09T21:47:46.000Z", "avg_line_length": 60.938547486, "max_line_length": 908, "alphanum_fraction": 0.7032453245, "num_tokens": 2862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7956580976404296, "lm_q1q2_score": 0.5983448938394035}}
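To complement the pseudo-code above, the following is a minimal C++ sketch of the $\mathit{Qs}$/$\mathit{QsCopy}$ forward-propagation pattern. It is our own illustration, not the thesis code: the branch probabilities are stubbed out with uniform weights, the rate-dependent discounting and the alpha aggregation are omitted, and standard branching is assumed everywhere, whereas the real algorithm switches to top and bottom branching at $j = \pm\mathit{jmax}$.

\begin{lstlisting}
#include <algorithm>
#include <cstdio>
#include <vector>

typedef double real;

struct Probs { real up, mid, down; };

// Stub: uniform trinomial weights. In the actual model the
// probabilities depend on j, the mean-reversion rate and dt.
Probs branchProbs(int j) { (void)j; return Probs{1.0/6, 2.0/3, 1.0/6}; }

// Forward-propagation skeleton: returns the Qs at the final level.
std::vector<real> forwardPropagate(int height, int jmax) {
    const int width = 2 * jmax + 1;
    std::vector<real> Qs(width, 0.0), QsCopy(width, 0.0);
    Qs[jmax] = 1.0; // root of the tree holds 1$
    for (int i = 0; i < height; i++) {
        int jhigh = std::min(i, jmax);
        for (int j = -jhigh; j <= jhigh; j++) {
            // each node contributes to (up to) three successors
            Probs p = branchProbs(j);
            real q = Qs[j + jmax];
            if (j + 1 <= jmax)  QsCopy[j + 1 + jmax] += q * p.up;
            QsCopy[j + jmax] += q * p.mid;
            if (j - 1 >= -jmax) QsCopy[j - 1 + jmax] += q * p.down;
        }
        // here the next alpha would be aggregated from QsCopy
        std::swap(Qs, QsCopy);
        std::fill(QsCopy.begin(), QsCopy.end(), 0.0);
    }
    return Qs;
}

int main() {
    std::vector<real> Qs = forwardPropagate(10, 5);
    real sum = 0;
    for (real q : Qs) sum += q;
    // prints slightly less than 1 because the simplified edge handling
    // drops the mass that top/bottom branching would redirect inward
    std::printf("mass at final level: %f\n", sum);
    return 0;
}
\end{lstlisting}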
{"text": "\\chapter{Kirchhoff-Love shell problem}\n\\label{chp:chapter5}\n\\graphicspath{{figures/}{figures/chapter5/}}\n\\pgfplotsset{\n\ttable/search path={{figures/chapter5/data},{data}},\n}\n\n\\section{Dual mortar method for non-homogeneous constraints}\\label{sec:dual_mortar}\n\nWe briefly demonstrate the dual mortar method in the context of an abstract formulation for a constrained problem: find $u\\in\\mathcal{X}$ and $\\lambda\\in\\mathcal{M}$ such that\n\\begin{subequations}\\label{eq:LM-form-non-hom}\n\t\\begin{empheq}[left=\\empheqlbrace]{alignat=2}\n\t\ta(v,u)+b(v,\\lambda)&=l(v)\\quad &&\\forall{}v\\in\\mathcal{X},\\\\\n\t\tb(\\mu,u)&=c(\\mu)\\quad &&\\forall{}\\mu\\in\\mathcal{M},\\label{eq:LM-form-constraint-non-hom}\n\t\\end{empheq}.\n\\end{subequations}\n%%TODO: which section\nwhere $a(\\cdot,\\cdot)$ is a bilinear form representing a potential energy, $l(\\cdot)$ is a linear form representing the external load, $b(\\cdot,\\cdot)$ is a bilinear form representing a set of constraints on the solution $u$ and $c(\\mu)$ is a linear form corresponding to any non-homogeneous constraints. In Section~, $b(\\cdot,\\cdot)$ and $c(\\cdot)$ will represent the continuity constraints across patch boundaries for each Newton-Raphson iteration.\n\nIf we introduce a pair of discrete function spaces $\\mathcal{X}^h \\subset \\mathcal{X}$ and $\\mathcal{M}^h \\subset \\mathcal{M}$ we can represent the weak form~\\eqref{eq:LM-form-non-hom} as the matrix problem\n\\begin{equation}\\label{eq:disc-LM-form-non-hom}\n\t\\mathbf{K}^\\text{LM}\\mathbf{U}^{\\text{LM}}=\\begin{bmatrix}\n\t\t\\mathbf{K} & \\mathbf{B}^T \\\\\n\t\t\\mathbf{B} & \\mathbf{0}\n\t\\end{bmatrix}\\mathbf{U}^{\\text{LM}}\n\t=\n\t\\begin{bmatrix}\n\t\t\\mathbf{F} \\\\\n\t\t\\mathbf{R}\n\t\\end{bmatrix},\n\\end{equation}\nwhere $\\mathbf{K}$ is the discretized stiffness matrix, $\\mathbf{F}$ is the discretized external force vector, $\\mathbf{B}$ is the discretized constraints matrix, $\\mathbf{R}$ is the forcing term due to non-homogeneous constraints (for homogeneous constraints, $\\mathbf{R} = \\mathbf{0}$) and $\\mathbf{U}^{\\text{LM}}$ is a vector containing the control values $\\mathbf{U}$ of the displacement field and the control values $\\mathbf{\\Lambda}$ Lagrange multiplier field. The mortar method statically condenses out additional unknowns and gives rise to a positive definite variational problem by introducing a constrained function space\n\\begin{equation}\n\t\\mathcal{V}^h\\coloneq\\{u^h\\in\\mathcal{X}^h\\, | \\, b(\\lambda^h,u^h)=0, \\quad \\forall{}\\lambda^h\\in\\mathcal{M}^h\\}.\n\\end{equation}\nThe saddle point problem~\\eqref{eq:LM-form-non-hom} can now be transformed into a minimization problem: find a general solution $u^h_\\text{hom}\\in\\mathcal{V}^h$ such that, for $u^h = u^h_\\text{hom}+ u^h_\\text{non}$\n\\begin{equation}\n\ta(v^h, u^h)=l(v^h),\\quad \\forall{}v^h\\in\\mathcal{V}^h,\n\\end{equation}\nwhere $u^h_\\text{non}\\in \\mathcal{X}^h$ is a particular solution that satisfies the constraint~\\eqref{eq:LM-form-constraint-non-hom}. Given $\\mathbf{N}^{\\mathcal{X}^h}$, the vector containing the basis functions of $\\mathcal{X}^h$, the vector containing the basis functions of $\\mathcal{V}^h$ is given by\n\\begin{equation}\n\t\\mathbf{N}^{\\mathcal{V}^h}=\\left[\\mathbf{B}^\\perp\\right]^T\\mathbf{N},\\label{eq:basis-null-space-non-hom}\n\\end{equation}\nwhere the matrix $\\mathbf{B}^\\perp$ is the vector basis of the null space of the constraint matrix $\\mathbf{B}$. 
If the Lagrange multiplier space is discretized by a set of dual basis functions, the constraint matrix $\mathbf{B}$ can be written as~\cite{gilbert1987computing}
\begin{equation}\label{eq:constraint-form-non-hom}
    \mathbf{B}=\begin{bmatrix}
        \mathbf{I} & \mathbf{B}_2
    \end{bmatrix},
\end{equation}
where the bandwidth of $\mathbf{B}_2$ depends on the support size of the dual basis functions. \par

For a $\mathbf{B}$ of the form~\eqref{eq:constraint-form-non-hom}, the vector basis of its null space can be obtained from
\begin{equation}
    \mathbf{B}^\perp=\begin{bmatrix}
        -\mathbf{B}_2 \\
        \mathbf{I}
    \end{bmatrix}.
    \label{eq:null-space-non-hom}
\end{equation}
The discretization of the constraint~\eqref{eq:LM-form-constraint-non-hom} is
\begin{equation}
    \mathbf{B}\mathbf{U}^\text{non} = \mathbf{R},
\end{equation}
for which a particular solution can be obtained from~\cite{ainsworth2001essential}
\begin{equation}
    \mathbf{U}^\text{non} = \mathbf{B}^T\left[ \mathbf{B}\mathbf{B}^T \right]^{-1}\mathbf{R}.
\end{equation}
However, for a constraint matrix of the form~\eqref{eq:constraint-form-non-hom}, a particular solution can be constructed explicitly as
\begin{equation}
    \mathbf{U}^\text{non} = \begin{bmatrix}
        \mathbf{R} \\\mathbf{0}
    \end{bmatrix}.\label{eq:dual_particular_solution}
\end{equation}
The mortar linear system can now be written as
\begin{equation}
    \mathbf{K}^{\text{mortar}}\mathbf{U}^{\text{mortar}}=\left[\mathbf{B}^\perp\right]^T\mathbf{K}\mathbf{B}^\perp\mathbf{U}^{\text{mortar}}=\left[\mathbf{B}^\perp\right]^T\mathbf{F}-\left[\mathbf{B}^\perp\right]^T\mathbf{K}\mathbf{U}^\text{non}.\label{eq:mortar-form-discretized}
\end{equation}
The relation between the mortar displacement nodal value vector $\mathbf{U}^{\text{mortar}}$, the non-homogeneous solution $\mathbf{U}^\text{non}$ and $\mathbf{U}$ is given by
\begin{equation}
    \mathbf{U}=\mathbf{B}^\perp\mathbf{U}^{\text{mortar}}+\mathbf{U}^\text{non}.
\end{equation}
With a sparse $\mathbf{B}^\perp$ obtained from a set of dual basis functions with compact support, the stiffness matrix of the mortar formulation $\mathbf{K}^{\text{mortar}}$ remains sparse, resulting in an efficient linear system.
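The whole condensation procedure is easily checked numerically. The following Python sketch builds a toy constraint matrix in the form~\eqref{eq:constraint-form-non-hom} with randomly generated blocks (all data here is made up for illustration) and verifies the null-space basis~\eqref{eq:null-space-non-hom}, the particular solution~\eqref{eq:dual_particular_solution} and the condensed system~\eqref{eq:mortar-form-discretized}.

\begin{verbatim}
import numpy as np

# Toy constraint matrix in the dual-basis form B = [I  B2]
n_c, n_f = 3, 5                        # constrained / free unknowns
rng = np.random.default_rng(0)
B2 = rng.standard_normal((n_c, n_f))
B = np.hstack([np.eye(n_c), B2])

# Null-space basis B_perp = [-B2; I] and particular solution [R; 0]
B_perp = np.vstack([-B2, np.eye(n_f)])
R = rng.standard_normal(n_c)
U_non = np.concatenate([R, np.zeros(n_f)])
assert np.allclose(B @ B_perp, 0.0)    # columns of B_perp span ker(B)
assert np.allclose(B @ U_non, R)       # U_non satisfies B U_non = R

# Condensed mortar system for a toy SPD stiffness K and load F
n = n_c + n_f
A = rng.standard_normal((n, n))
K = A @ A.T + np.eye(n)                # symmetric positive definite
F = rng.standard_normal(n)
U_mortar = np.linalg.solve(B_perp.T @ K @ B_perp,
                           B_perp.T @ (F - K @ U_non))
U = B_perp @ U_mortar + U_non          # recovered full solution
assert np.allclose(B @ U, R)           # constraints hold exactly
\end{verbatim}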
\section{Formulation of Kirchhoff-Love shell}\label{sec:formulation}

In this section, we present the formulation of the Kirchhoff-Love shell in compact form. The theory of the Kirchhoff-Love shell is based on the assumption that the normal of the shell's mid-surface remains normal to the mid-surface in the deformed configuration. Hence, the transverse strains are zero and the description of the shell geometry can be reduced to its mid-surface.

\subsection{Kinematics}\label{sec:kinematics}

In what follows, we use Greek letters for indices taking values $\left\{1, 2\right\}$ and Latin letters for indices taking values $\left\{1,2,3\right\}$. The Einstein summation convention on repeated indices is also used. We consider a shell structure of arbitrary geometry with constant thickness $h$, parameterized by curvilinear coordinates $\theta^i$, where $\theta^1$ and $\theta^2$ denote the natural curvilinear coordinates and $\theta^3$ indicates the thickness direction, with $\theta^3\in \left[-0.5h, \; 0.5h\right]$ (see Figure~\ref{fig:shell_configuration}). We use $\mathbf{R}(\theta^1,\theta^2), \mathbf{r}(\theta^1,\theta^2)\colon \mathbb{R}^2\rightarrow\mathbb{R}^3$ to describe the mid-surface of the shell in the reference and current configurations, respectively. For simplicity and without loss of generality, we assume the parametric domain $\hat{\Omega}=[0,1]\times[0,1]$. The displacement of the shell mid-surface is given by
\begin{equation}
    \mathbf{u}(\theta^1,\theta^2) = \mathbf{r}(\theta^1,\theta^2)-\mathbf{R}(\theta^1,\theta^2).
\end{equation}
The mid-surface covariant base vectors in the two configurations are obtained by
\begin{equation}
    \label{eq:mid_surface_base_vector}
    \begin{cases}
        \begin{aligned}
            \mathbf{A}_\alpha & =\mathbf{R}_{,\alpha} = \frac{\partial\mathbf{R}}{\partial\theta^\alpha}            \\
            \mathbf{A}_3      & = \frac{\mathbf{A}_1\times\mathbf{A}_2}{\vert{\mathbf{A}_1\times\mathbf{A}_2}\vert}
        \end{aligned}
    \end{cases},\qquad
    \begin{cases}
        \begin{aligned}
            \mathbf{a}_\alpha & =\mathbf{r}_{,\alpha} = \frac{\partial\mathbf{r}}{\partial\theta^\alpha}  =  \mathbf{A}_\alpha+\mathbf{u}_{,\alpha} \\
            \mathbf{a}_3      & = \frac{\mathbf{a}_1\times\mathbf{a}_2}{\vert{\mathbf{a}_1\times\mathbf{a}_2}\vert}
        \end{aligned}
    \end{cases},
\end{equation}
where $\vert\cdot\vert$ denotes the Euclidean length of a vector. $\mathbf{A}_3$ and $\mathbf{a}_3$ are commonly referred to as the directors in the reference and current configurations. The position vector of a material point within the shell in the reference and current configurations is described by
\begin{equation}
    \label{eq:position_vector}
    \begin{cases}
        \begin{aligned}
            \mathbf{X}(\theta^1, \theta^2, \theta^3) & = \mathbf{R}(\theta^1, \theta^2) + \theta^3\mathbf{A}_3(\theta^1,\theta^2) \\
            \mathbf{x}(\theta^1, \theta^2, \theta^3) & = \mathbf{r}(\theta^1, \theta^2) + \theta^3\mathbf{a}_3(\theta^1,\theta^2)
        \end{aligned}
    \end{cases}.
\end{equation}
Similarly to the mid-surface, the covariant base vectors at an arbitrary material point within the shell in the two configurations are obtained by
\begin{equation}
    \label{eq:base_vector}
    \begin{cases}
        \begin{aligned}
            \mathbf{G}_\alpha & =\mathbf{X}_{,\alpha} = \mathbf{A}_\alpha+\theta^3\mathbf{A}_{3,\alpha} \\
            \mathbf{G}_3      & = \mathbf{X}_{,3} = \mathbf{A}_3
        \end{aligned}
    \end{cases},\qquad
    \begin{cases}
        \begin{aligned}
            \mathbf{g}_\alpha & =\mathbf{x}_{,\alpha} = \mathbf{a}_\alpha+\theta^3\mathbf{a}_{3,\alpha} \\
            \mathbf{g}_3      & = \mathbf{x}_{,3} = \mathbf{a}_3
        \end{aligned}
    \end{cases}.
\end{equation}

\begin{figure}[h]
    \center
    \includestandalone[scale=1]{configuration}
    \caption{Illustration of the deformation and of the reference and current configurations of the Kirchhoff-Love shell. The mid-surface is shown in blue.}
    \label{fig:shell_configuration}
\end{figure}
The covariant and contravariant metric coefficients are computed by
\begin{equation}
    \label{eq:covariant_contravariant_metric}
    \begin{cases}
        \begin{aligned}
            G_{ij} & =\mathbf{G}_i\cdot\mathbf{G}_j \\
            G^{ij} & =\left[G_{ij}\right]^{-1}
        \end{aligned}
    \end{cases},\qquad
    \begin{cases}
        \begin{aligned}
            A_{ij} & =\mathbf{A}_i\cdot\mathbf{A}_j \\
            A^{ij} & =\left[A_{ij}\right]^{-1}
        \end{aligned}
    \end{cases},\qquad
    \begin{cases}
        \begin{aligned}
            g_{ij} & =\mathbf{g}_i\cdot\mathbf{g}_j \\
            g^{ij} & =\left[g_{ij}\right]^{-1}
        \end{aligned}
    \end{cases},\qquad
    \begin{cases}
        \begin{aligned}
            a_{ij} & =\mathbf{a}_i\cdot\mathbf{a}_j \\
            a^{ij} & =\left[a_{ij}\right]^{-1}
        \end{aligned}
    \end{cases}.
\end{equation}
The contravariant base vectors are then given by
\begin{equation}
    \label{eq:contravariant_base_vector}
    \mathbf{G}^i = G^{ij}\mathbf{G}_j,\qquad \mathbf{A}^i = A^{ij}\mathbf{A}_j,\qquad \mathbf{g}^i = g^{ij}\mathbf{g}_j,\qquad \mathbf{a}^i = a^{ij}\mathbf{a}_j.
\end{equation}

From the numerous strain measures, we use the \textit{Green-Lagrangian} strain tensor to describe the strain, which is defined as
\begin{equation}
    \label{eq:green_lagrangian}
    \mathbf{E} = \frac{1}{2}\left( \mathbf{F}^T\mathbf{F}-\mathbf{I} \right),
\end{equation}
where $\mathbf{F}=\frac{\partial \mathbf{x}}{\partial \mathbf{X}}=\mathbf{g}_i\otimes\mathbf{G}^i$ is the deformation gradient and $\mathbf{I}$ is the identity tensor. Alternatively, $\mathbf{E}$ can be represented as
\begin{equation}
    \label{eq:green_lagrangian_contravariant}
    \mathbf{E}=E_{ij}\mathbf{G}^i\otimes\mathbf{G}^j,\quad\text{with } E_{ij} = \frac{1}{2}\left( g_{ij}-G_{ij} \right).
\end{equation}
Substituting the base vectors~\eqref{eq:base_vector} into~\eqref{eq:green_lagrangian_contravariant}, we obtain $E_{i3}=E_{3i}=0$.
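This can be verified directly: since $\mathbf{a}_3$ is a unit vector orthogonal to the tangent vectors $\mathbf{a}_\alpha$, we have $\mathbf{a}_\alpha\cdot\mathbf{a}_3=0$ and $\mathbf{a}_{3,\alpha}\cdot\mathbf{a}_3=\frac{1}{2}\left(\mathbf{a}_3\cdot\mathbf{a}_3\right)_{,\alpha}=0$, so that
\begin{equation}
    g_{\alpha 3} = \left(\mathbf{a}_\alpha+\theta^3\mathbf{a}_{3,\alpha}\right)\cdot\mathbf{a}_3 = 0,\qquad g_{33}=\mathbf{a}_3\cdot\mathbf{a}_3=1,
\end{equation}
and, by the identical argument, $G_{\alpha 3}=0$ and $G_{33}=1$ in the reference configuration, whence $E_{i3}=E_{3i}=\frac{1}{2}\left(g_{i3}-G_{i3}\right)=0$.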
Separating the strain into a constant part due to the membrane action and a linear part due to bending, and neglecting the $\bigO((\theta^3)^2)$ terms, the remaining strain coefficients are given by
\begin{equation}
    E_{\alpha\beta} = \epsilon_{\alpha\beta} + \theta^3\kappa_{\alpha\beta},
\end{equation}
where the membrane strain tensor is
\begin{equation}
    \label{eq:membrane_strains}
    \mathbf{\epsilon} = \epsilon_{\alpha\beta}\mathbf{G}^\alpha\otimes\mathbf{G}^\beta, \quad\epsilon_{\alpha\beta} = \frac{1}{2}\left( a_{\alpha\beta}-A_{\alpha\beta} \right) = \frac{1}{2}\left( \mathbf{a}_{\alpha}\cdot\mathbf{a}_{\beta}-\mathbf{A}_{\alpha}\cdot\mathbf{A}_{\beta} \right),
\end{equation}
and the tensor expressing the change in curvature is
\begin{equation}
    \label{eq:curvatures}
    \mathbf{\kappa} = \kappa_{\alpha\beta}\mathbf{G}^\alpha\otimes\mathbf{G}^\beta,\quad\kappa_{\alpha\beta} = b_{\alpha\beta}-B_{\alpha\beta},\qquad\text{with }
    \begin{cases}
        \begin{aligned}
            B_{\alpha\beta} & =\frac{1}{2}\left( \mathbf{A}_{\alpha}\cdot\mathbf{A}_{3,\beta}+\mathbf{A}_{\beta}\cdot\mathbf{A}_{3,\alpha}\right) = -\mathbf{A}_{\alpha,\beta}\cdot\mathbf{A}_3 \\
            b_{\alpha\beta} & =\frac{1}{2}\left( \mathbf{a}_{\alpha}\cdot\mathbf{a}_{3,\beta}+\mathbf{a}_{\beta}\cdot\mathbf{a}_{3,\alpha}\right)  =-\mathbf{a}_{\alpha,\beta}\cdot\mathbf{a}_3
        \end{aligned}
    \end{cases}.
\end{equation}

\subsection{Equilibrium of elastic Kirchhoff-Love shell}\label{sec:equilibrium}

Next, we develop the variational formulation from the minimization of the potential energy. For the sake of simplicity, we assume that the shell is linear elastic, with a strain energy density per unit area of the form~\cite{chapelle2010finite}
\begin{equation}
    \label{eq:strain_energy_density}
    W(\theta^1,\theta^2) = \frac{1}{2}\left( h \mathbf{\epsilon}\colon \mathbf{C}\colon \mathbf{\epsilon} + \frac{h^3}{12} \mathbf{\kappa }\colon \mathbf{C}\colon \mathbf{\kappa } \right)= \frac{1}{2}\left( hC^{\alpha\beta\gamma\delta}\epsilon_{\alpha\beta}\epsilon_{\gamma\delta}+\frac{h^3}{12}C^{\alpha\beta\gamma\delta}\kappa_{\alpha\beta}\kappa_{\gamma\delta} \right),
\end{equation}
where $E$ is Young's modulus, $\nu$ is Poisson's ratio and the fourth-order material tensor is
\begin{equation}
    \label{eq:material_tensor}
    \mathbf{C} = C^{\alpha\beta\gamma\delta} \mathbf{A}_\alpha\otimes\mathbf{A}_\beta\otimes\mathbf{A}_\gamma\otimes\mathbf{A}_\delta,\qquad C^{\alpha\beta\gamma\delta} = \frac{E\nu}{1-\nu^2} A^{\alpha\beta}A^{\gamma\delta}+\frac{E}{2(1+\nu)}\left( A^{\alpha\gamma}A^{\beta\delta}+A^{\alpha\delta}A^{\beta\gamma} \right).
\end{equation}
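For illustration, the coefficients $C^{\alpha\beta\gamma\delta}$ can be tabulated directly from the contravariant metric. The following is a minimal sketch (the function name and the test data are ours, chosen arbitrarily); for an orthonormal parameterization, $A^{\alpha\beta}=\delta^{\alpha\beta}$, it reduces to the familiar plane-stress relation $C^{1111}=E/(1-\nu^2)$.

\begin{verbatim}
import numpy as np

def material_tensor(A_con, E, nu):
    # C[a,b,g,d] = C^{abgd} of the material law above, given the
    # 2x2 contravariant mid-surface metric A_con = [A^{alpha beta}]
    C = np.empty((2, 2, 2, 2))
    for a in range(2):
        for b in range(2):
            for g in range(2):
                for d in range(2):
                    C[a, b, g, d] = (
                        E * nu / (1 - nu**2) * A_con[a, b] * A_con[g, d]
                        + E / (2 * (1 + nu)) * (A_con[a, g] * A_con[b, d]
                                                + A_con[a, d] * A_con[b, g]))
    return C

C = material_tensor(np.eye(2), E=1.0, nu=0.3)   # orthonormal metric
assert np.isclose(C[0, 0, 0, 0], 1.0 / (1 - 0.3**2))
\end{verbatim}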
The membrane force resultant tensor and the bending moment resultant tensor read
\begin{equation}
    \label{eq:stress_and_moment}
    \begin{cases}
        \begin{aligned}
            \mathbf{n} & = n^{\alpha\beta}\mathbf{A}_\alpha\otimes\mathbf{A}_\beta,\quad n^{\alpha\beta} =\frac{\partial W}{\partial \epsilon_{\alpha\beta}} = hC^{\alpha\beta\gamma\delta}\epsilon_{\gamma\delta}         \\
            \mathbf{m} & = m^{\alpha\beta}\mathbf{A}_\alpha\otimes\mathbf{A}_\beta,\quad m^{\alpha\beta} =\frac{\partial W}{\partial \kappa_{\alpha\beta}} =\frac{h^3}{12}C^{\alpha\beta\gamma\delta}\kappa_{\gamma\delta}
        \end{aligned}
    \end{cases}.
\end{equation}
The potential energy of the Kirchhoff-Love shell is defined as
\begin{equation}
    \label{eq:potential_energy}
    \Pi(\mathbf{u}) = \Pi^\text{int}(\mathbf{u})+\Pi^\text{ext}(\mathbf{f}, \mathbf{u}) = \int_{\overline{\Omega}}W \,d\Omega+\Pi^\text{ext}(\mathbf{f}, \mathbf{u}),
\end{equation}
where $\overline{\Omega}$ is the mid-surface of the shell in the reference configuration, $d\Omega=\vert{\mathbf{A}_1\times\mathbf{A}_2}\vert d\theta^1 d\theta^2$ is the differential area element, $\Pi^\text{int}(\mathbf{u}) = \int_{\overline{\Omega}}W d\Omega$ is the strain energy and $\Pi^\text{ext}(\mathbf{f},\mathbf{u})$ is the external work due to the external force $\mathbf{f}$; in general, $\Pi^\text{ext}$ is a linear functional with respect to $\mathbf{u}$. The variational formulation is obtained from the minimization of the potential energy,
\begin{equation}
    \label{eq:weak_form_nonlinear}
    \delta \Pi(\mathbf{u},\delta\mathbf{u}) = \frac{\partial \Pi}{\partial \mathbf{u}} \delta\mathbf{u} = \int_{\overline{\Omega}} \left(\delta{\epsilon(\mathbf{u},\delta\mathbf{u})} \colon\mathbf{n}(\mathbf{u}) + \delta{\kappa(\mathbf{u},\delta\mathbf{u})} \colon \mathbf{m}(\mathbf{u}) \right)d \Omega+\Pi^\text{ext}(\mathbf{f},\delta\mathbf{u})= 0,
\end{equation}
with
\begin{equation}
    \delta{\epsilon(\mathbf{u},\delta\mathbf{u})} = \frac{\partial \mathbf{\epsilon}(\mathbf{u})}{\partial \mathbf{u}}\delta\mathbf{u}\qquad \text{and}\qquad \delta{\kappa(\mathbf{u},\delta\mathbf{u})} = \frac{\partial \mathbf{\kappa}(\mathbf{u})}{\partial \mathbf{u}}\delta\mathbf{u}.
\end{equation}
However, the variational formulation~\eqref{eq:weak_form_nonlinear} is nonlinear in $\mathbf{u}$ and cannot be solved directly. Hence, we adopt the Newton-Raphson method and solve the problem iteratively.
Assuming $\mathbf{u}^{i+1} = \mathbf{u}^{i} +{\Delta} \mathbf{u}$, the weak form for the increment ${\Delta} \mathbf{u}$ is stated as: find ${\Delta}\mathbf{u}\in \boldsymbol{\mathcal{X}}$ such that
\begin{equation}
    \label{eq:weak_form_linearized}
    K_\text{m}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})+K_\text{b}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})=-\delta \Pi(\mathbf{u}^{i},\delta\mathbf{u}),\qquad \forall \delta\mathbf{u}\in \boldsymbol{\mathcal{X}},
\end{equation}
where $i$ denotes the iteration step, the solution space is $\boldsymbol{\mathcal{X}}=\left[H^2(\hat{\Omega})\right]^3$, the membrane stiffness is
\begin{equation}
    K_\text{m}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u}) = \int_{\overline{\Omega}} \delta{\epsilon(\mathbf{u}^{i},\delta\mathbf{u})} \colon \delta\mathbf{n}(\mathbf{u}^i,{\Delta}\mathbf{u}) + \delta{\epsilon(\mathbf{u}^{i},\delta\mathbf{u},\Delta\mathbf{u})} \colon \mathbf{n}(\mathbf{u}^i) \,d\Omega,
\end{equation}
and the bending stiffness is
\begin{equation}
    K_\text{b}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u}) = \int_{\overline{\Omega}} \delta{\kappa(\mathbf{u}^{i},\delta\mathbf{u})} \colon \delta\mathbf{m}(\mathbf{u}^i,{\Delta}\mathbf{u}) + \delta{\kappa(\mathbf{u}^{i},\delta\mathbf{u},\Delta\mathbf{u})} \colon \mathbf{m}(\mathbf{u}^i) \,d\Omega,
\end{equation}
with
\begin{equation}
    \begin{cases}
        \begin{aligned}
            \delta{\mathbf{n}(\mathbf{u},{\Delta}\mathbf{u})}              & = \frac{\partial \mathbf{n}(\mathbf{u})}{\partial \mathbf{u}}{\Delta}\mathbf{u} = h\mathbf{C}\colon \delta\mathbf{\epsilon}(\mathbf{u},\Delta\mathbf{u}), \\
            \delta{\mathbf{m}(\mathbf{u},{\Delta}\mathbf{u})}              & = \frac{\partial \mathbf{m}(\mathbf{u})}{\partial \mathbf{u}}{\Delta}\mathbf{u} = \frac{h^3}{12}\mathbf{C}\colon \delta\mathbf{\kappa}(\mathbf{u},\Delta\mathbf{u}), \\
            \delta{\epsilon(\mathbf{u},\delta\mathbf{u},\Delta\mathbf{u})} & = \frac{\partial \delta\mathbf{\epsilon}(\mathbf{u},\delta\mathbf{u})}{\partial \mathbf{u}}\Delta\mathbf{u}, \\
            \delta{\kappa(\mathbf{u},\delta\mathbf{u},\Delta\mathbf{u})}   & = \frac{\partial \delta\mathbf{\kappa}(\mathbf{u},\delta\mathbf{u})}{\partial \mathbf{u}}\Delta\mathbf{u}.
        \end{aligned}
    \end{cases}
\end{equation}
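Schematically, the solution process then reduces to the following Newton-Raphson loop (a sketch only; \texttt{assemble\_tangent} and \texttt{assemble\_residual} are hypothetical callbacks returning the discretized $K_\text{m}+K_\text{b}$ and the discretized $-\delta\Pi$, respectively):

\begin{verbatim}
import numpy as np

def newton_solve(u0, assemble_tangent, assemble_residual,
                 tol=1e-10, max_iter=20):
    # Solves (K_m + K_b)(u_i) du = -dPi(u_i) iteratively;
    # assemble_residual(u) is assumed to return the discretized -dPi(u)
    u = u0.copy()
    for _ in range(max_iter):
        r = assemble_residual(u)
        if np.linalg.norm(r) < tol:     # converged: -dPi(u) ~ 0
            break
        K = assemble_tangent(u)         # discretized K_m + K_b at u_i
        u = u + np.linalg.solve(K, r)   # u_{i+1} = u_i + du
    return u
\end{verbatim}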
\section{A dual mortar formulation for the multi-patch Kirchhoff-Love shell}\label{sec:multi_patch}

Most of the commonly used patch coupling approaches for the Kirchhoff-Love shell fall into the following categories: the penalty method~\cite{kiendl2010bending, herrema2019penalty}, the Lagrange multiplier method (the collocation approach can be viewed as a Lagrange multiplier method with the Lagrange multiplier discretized by Dirac delta functions)~\cite{coox2017flexible, hirschler2019embedded, schuss2019multi, goyal2017penalty}, and Nitsche's method~\cite{guo_nitsches_2015, nguyen2017isogeometric}. The performance of the penalty method is significantly influenced by the choice of the penalty parameter: a small penalty parameter cannot effectively enforce the inter-patch constraint, while a large penalty parameter may badly affect the condition number of the linear system. Usually, the penalty parameter is selected empirically by the analyst. The Lagrange multiplier method leads to a saddle point problem, for which iterative solvers are known to be less efficient than for symmetric positive definite systems. In addition, redundancy may occur in the discretized constraint matrix, leading to a rank-deficient linear system (\textit{inf-sup} instability). Although a global factorization followed by a static condensation can circumvent this issue, the resulting linear system no longer preserves its sparsity and may be polluted by the numerical error of the factorization process. The stability parameter in Nitsche's method needs to be approximated by eigenvalue problems associated with element intersections, which increases the computational cost. For nonlinear problems, Nitsche's method becomes complex, as it requires the tractions and their variations on the interface. \par

Here, we present a dual mortar formulation for the Kirchhoff-Love shell over multi-patch tensor product domains. Thanks to the locally supported dual basis, the linear system can be statically condensed with minimal computational cost and the resulting linear system preserves its sparsity. Along each interface, we introduce a local coordinate system in which a generic inter-patch constraint is developed in a natural manner. The main advantages of this generic inter-patch constraint are that it handles both patches joining smoothly and patches joining at a kink in a uniform framework, and that it is compatible with dual bases.\par

We first introduce a rotation operator: for a vector $\mathbf{v}\in\mathbb{R}^3$, its rotation around the axis $\mathbf{k}\in\mathbb{R}^3$ by an angle $\theta$ according to the right-hand rule is given by
\begin{equation}
    \mathbf{R}_{\mathbf{k},\theta}(\mathbf{v}) = \mathbf{v}\cos(\theta)+\left(\frac{\mathbf{k}}{\vert \mathbf{k} \vert}\times \mathbf{v}\right)\sin(\theta)+\frac{\mathbf{k}}{\vert \mathbf{k} \vert}\left(\frac{\mathbf{k}}{\vert \mathbf{k} \vert}\cdot\mathbf{v}\right)(1-\cos(\theta)). \label{eq:rodrigues_rotation}
\end{equation}
This operator is known as Rodrigues' rotation formula~\cite{rodrigues1840lois} and will play an important role in formulating the inter-patch constraint.\par
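A direct transcription of~\eqref{eq:rodrigues_rotation} is given below, together with a basic sanity check; the code is a sketch for illustration only.

\begin{verbatim}
import numpy as np

def rodrigues_rotate(v, k, theta):
    # Rotate v around the axis k by the angle theta (right-hand rule)
    k = k / np.linalg.norm(k)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

# Sanity check: rotating e_x around e_z by 90 degrees yields e_y
ex = np.array([1.0, 0.0, 0.0])
ez = np.array([0.0, 0.0, 1.0])
assert np.allclose(rodrigues_rotate(ex, ez, np.pi / 2), [0.0, 1.0, 0.0])
\end{verbatim}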
To demonstrate our approach, we consider a kinked shell structure consisting of two NURBS patches, shown in Figure~\ref{fig:two_patch_shell_with_kink}. We denote by $\Omega_s$ the slave domain, by $\Omega_m$ the master domain and by $\Gamma_{sm}$ the intersection of the two patches. The two domains are parameterized by the coordinate systems $(\theta^1_s, \theta^2_s)$ and $(\theta^1_m, \theta^2_m)$, respectively.

\begin{figure}[ht]
    \center
    \includegraphics[width=.5\columnwidth]{two_patch_shell_with_kink}
    \caption{A two-patch non-conforming Kirchhoff-Love shell consisting of two patches $\Omega_s$ and $\Omega_m$ with the intersection denoted by the red curve. The director $\mathbf{A}_3^m$ of $\Omega_m$ and the director $\mathbf{A}_3^s$ of $\Omega_s$ determine a rotation angle $\theta$ along the intersection.}\label{fig:two_patch_shell_with_kink}
\end{figure}

\subsection{A local coordinate system for patch intersections}

\begin{table}
    \center
    \caption{The strategy for selecting the restrictions of $\bar{\theta}^1$ and $\bar{\theta}^2$ on $\Omega_s$, where $\bar{\mathbf{A}}_\alpha^k=\sfrac{\partial\mathbf{X}^k}{\partial \bar{\theta}^\alpha\vert_{\Omega_k}}$}
    \label{tab:orientation_and_coordinate}
    \begin{tabularx}{.5\textwidth}{l@{\extracolsep{\fill}}cc}
        \toprule
        Interface orientation & $\bar{\theta}^1\vert_{\Omega_s}$         & $\bar{\theta}^2\vert_{\Omega_s}$         \\
        \midrule
        South                 & $\bar{\mathbf{A}}_1^s = -\mathbf{A}_2^s$ & $\bar{\mathbf{A}}_2^s = \mathbf{A}_1^s$  \\
        East                  & $\bar{\mathbf{A}}_1^s = \mathbf{A}_1^s$  & $\bar{\mathbf{A}}_2^s = \mathbf{A}_2^s$  \\
        North                 & $\bar{\mathbf{A}}_1^s = \mathbf{A}_2^s$  & $\bar{\mathbf{A}}_2^s = -\mathbf{A}_1^s$ \\
        West                  & $\bar{\mathbf{A}}_1^s = -\mathbf{A}_1^s$ & $\bar{\mathbf{A}}_2^s = -\mathbf{A}_2^s$ \\
        \bottomrule
    \end{tabularx}
\end{table}

In this subsection, we reparameterize the intersection between the slave patch and the master patch by a coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$. We do not need to develop explicitly the map from $(\bar{\theta}^1, \bar{\theta}^2)$ to $\Omega_s$ and $\Omega_m$; instead, we are interested in the covariant base vectors $(\bar{\mathbf{A}}_1, \bar{\mathbf{A}}_2)$ defined in the new coordinate system and in how they behave during the deformation. The new coordinate system is defined based on the orientation of both the slave patch and the master patch. We first specify the restrictions of $\bar{\theta}^1$ and $\bar{\theta}^2$ on $\Omega_s$ following the strategy given in Table~\ref{tab:orientation_and_coordinate}.
For example, if an intersection is a north edge on the slave patch, we have
\begin{equation}
    \left\{
    \begin{split}
        \bar{\mathbf{A}}^s_1 &= \frac{\partial\mathbf{X}^s}{\partial \bar{\theta}^1\vert_{\Omega_s}} = \begin{bmatrix}
            \mathbf{A}_1^s & \mathbf{A}_2^s
        \end{bmatrix} \cdot \mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_s} = \mathbf{A}_2^s\\
        \bar{\mathbf{A}}_2^s &= \frac{\partial\mathbf{X}^s}{\partial \bar{\theta}^2\vert_{\Omega_s}} = \begin{bmatrix}
            \mathbf{A}_1^s & \mathbf{A}_2^s
        \end{bmatrix} \cdot \mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_s} = -\mathbf{A}_1^s
    \end{split}
    \right.,
\end{equation}
with
\begin{equation}
    \left\{
    \begin{split}
        \mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_s} &=
        \begin{bmatrix}
            \frac{\partial \theta^1_s}{\partial \bar{\theta}^1\vert_{\Omega_s}} \\
            \frac{\partial \theta^2_s}{\partial \bar{\theta}^1\vert_{\Omega_s}}
        \end{bmatrix}=
        \begin{bmatrix}
            0 \\
            1
        \end{bmatrix}\\
        \mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_s} &= \begin{bmatrix}
            \frac{\partial \theta^1_s}{\partial \bar{\theta}^2\vert_{\Omega_s}} \\
            \frac{\partial \theta^2_s}{\partial \bar{\theta}^2\vert_{\Omega_s}}
        \end{bmatrix}=
        \begin{bmatrix}
            -1 \\
            0
        \end{bmatrix}
    \end{split}
    \right..
\end{equation}

For other pairings of the slave edge and the master edge, the corresponding Jacobians $\mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_s}$ and $\mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_s}$ can be computed in the same manner. \par

We now extend the curvilinear coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$ from the slave patch to the master patch. A natural choice for the restriction of $\bar{\theta}^2$ on $\Omega_m$ is
\begin{equation}
    \bar{\theta}^2\vert_{\Omega_m}=\bar{\theta}^2\vert_{\Omega_s}, \text{ or } \bar{\mathbf{A}}^m_2=\bar{\mathbf{A}}^s_2.
\end{equation}
Given the directors $\mathbf{A}^s_3$ and $\mathbf{A}^m_3$ and the axis $\bar{\mathbf{A}}^s_2$, we can now uniquely determine the counterclockwise rotation (see Figure~\ref{fig:two_patch_shell_with_kink}) from $\mathbf{A}^m_3$ to $\mathbf{A}^s_3$ by
\begin{equation}
    \left\{
    \begin{split}
        \cos{\theta} &= \mathbf{A}^m_3\cdot \mathbf{A}^s_3\\
        \sin{\theta} &= \frac{(\bar{\mathbf{A}}^m_2 \times \mathbf{A}^m_3) \cdot \mathbf{A}^s_3}{\vert \bar{\mathbf{A}}^m_2 \vert}.
    \end{split}
    \right.\label{eq:rotation_angle}
\end{equation}
By Equation~\eqref{eq:rodrigues_rotation}, we can define a rotation operator that rotates $\mathbf{A}^s_3$ to $\mathbf{A}^m_3$ around the axis $\bar{\mathbf{A}}^m_2$ as
\begin{equation}
    \mathbf{A}^m_3 = \mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\mathbf{A}^s_3) = \mathbf{R}_{-\bar{\mathbf{A}}^m_2,\theta}(\mathbf{A}^s_3).
\end{equation}
Meanwhile, the rotation operator $\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}$ also rotates $\bar{\mathbf{A}}^s_1$ into the tangent plane of $\Omega_m$ along the intersection.
We let
\begin{equation}
    \bar{\mathbf{A}}^m_1=\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\bar{\mathbf{A}}^s_1).\label{eq:reference_constraint_c1}
\end{equation}
The corresponding Jacobians $\mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_m} = \begin{bmatrix}
        \frac{\partial \theta^1_m}{\partial \bar{\theta}^1\vert_{\Omega_m}} & \frac{\partial \theta^2_m}{\partial \bar{\theta}^1\vert_{\Omega_m}}
    \end{bmatrix}^T$ and $\mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_m} = \begin{bmatrix}
        \frac{\partial \theta^1_m}{\partial \bar{\theta}^2\vert_{\Omega_m}} & \frac{\partial \theta^2_m}{\partial \bar{\theta}^2\vert_{\Omega_m}}
    \end{bmatrix}^T$ are given by
\begin{equation}
    \begin{split}
        \begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix} \cdot \mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_m} &= \mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\bar{\mathbf{A}}^s_1),\\
        \begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix} \cdot \mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_m} &= \bar{\mathbf{A}}^s_2.
    \end{split}\label{eq:jacobian_of_Jtheta}
\end{equation}
As $\begin{bmatrix}
        \mathbf{A}_1^m & \mathbf{A}_2^m
    \end{bmatrix}$ is a $3\times 2$ matrix, Equation~\eqref{eq:jacobian_of_Jtheta} cannot be inverted directly. However, as $\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\bar{\mathbf{A}}^s_1)$ and $\bar{\mathbf{A}}^s_2$ lie in the tangent plane of $\Omega_m$, we can solve for $\mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_m}$ and $\mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_m}$ via
\begin{equation}
    \begin{split}
        \mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_m} &= \left(\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}^T\cdot\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}\right)^{-1}\left(\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}^T\cdot\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\bar{\mathbf{A}}^s_1) \right),\\
        \mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_m} &= \left(\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}^T\cdot\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}\right)^{-1}\left(\begin{bmatrix}
            \mathbf{A}_1^m & \mathbf{A}_2^m
        \end{bmatrix}^T\cdot\bar{\mathbf{A}}^s_2 \right).
    \end{split}
\end{equation}

Following the above procedure, the covariant base vectors of the master patch are nothing but the rotations of the covariant base vectors of the slave patch via the rotation operator $\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}$ (see Figure~\ref{fig:coordinate_system_on_each_patch}), i.e.
\begin{equation}
    \begin{bmatrix}
        \bar{\mathbf{A}}_1^m & \bar{\mathbf{A}}_2^m
    \end{bmatrix}
    =
    \begin{bmatrix}
        \mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}(\bar{\mathbf{A}}_1^s) & \bar{\mathbf{A}}_2^s
    \end{bmatrix}
    =
    \mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}
    \begin{bmatrix}
        \bar{\mathbf{A}}_1^s & \bar{\mathbf{A}}_2^s
    \end{bmatrix}.
\end{equation}
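In an implementation, the two Jacobians amount to a small normal-equations solve. The following is a sketch, reusing the \texttt{rodrigues\_rotate} helper above; the variable names are ours and purely illustrative.

\begin{verbatim}
import numpy as np

def solve_surface_jacobian(A1m, A2m, rhs):
    # Solve [A1m A2m] J = rhs for the 2x1 Jacobian J; exact whenever
    # rhs lies in the tangent plane spanned by A1m and A2m
    A = np.column_stack([A1m, A2m])              # 3x2 matrix
    return np.linalg.solve(A.T @ A, A.T @ rhs)   # normal equations

# e.g. on the master patch:
# J1 = solve_surface_jacobian(A1m, A2m,
#                             rodrigues_rotate(A1s_bar, A2m_bar, -theta))
# J2 = solve_surface_jacobian(A1m, A2m, A2s_bar)
\end{verbatim}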
The partial derivatives of the displacement $\mathbf{u}$ with respect to the new coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$ are now given by
\begin{equation}
    \left\{
    \begin{split}
        \bar{\mathbf{u}}^s_{,1} &= \frac{\partial \mathbf{u}^s}{\partial \bar{\theta}^1\vert_{\Omega_s}} = \begin{bmatrix}
            \mathbf{u}^s_{,1} & \mathbf{u}^s_{,2}
        \end{bmatrix}\mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_s}\\
        \bar{\mathbf{u}}^m_{,1} &= \frac{\partial \mathbf{u}^m}{\partial \bar{\theta}^1\vert_{\Omega_m}} = \begin{bmatrix}
            \mathbf{u}^m_{,1} & \mathbf{u}^m_{,2}
        \end{bmatrix}\mathbf{J}_{\bar{\theta}^1}\vert_{\Omega_m}\\
        \bar{\mathbf{u}}^m_{,2} &= \frac{\partial \mathbf{u}^m}{\partial \bar{\theta}^2\vert_{\Omega_m}} = \begin{bmatrix}
            \mathbf{u}^m_{,1} & \mathbf{u}^m_{,2}
        \end{bmatrix}\mathbf{J}_{\bar{\theta}^2}\vert_{\Omega_m}
    \end{split}
    \right..
\end{equation}

\begin{figure}[ht]
    \centering
    \begin{subfigure}[t]{0.4\textwidth}
        \centering
        \includegraphics[scale = .7]{master_patch}
        \caption{Slave patch}
    \end{subfigure}
    \begin{subfigure}[t]{0.58\textwidth}
        \centering
        \includegraphics[trim=.2cm .5cm 0 0, clip, scale = .9]{slave_patch}
        \caption{Master patch}
    \end{subfigure}
    \caption{The covariant base vectors $(\bar{\mathbf{A}}_1, \bar{\mathbf{A}}_2)$ of the coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$ on both the slave patch and the master patch. Note that the covariant base vectors of the master patch can be obtained by rotating the covariant base vectors of the slave patch via the rotation operator $\mathbf{R}_{\bar{\mathbf{A}}^m_2,-\theta}$.}\label{fig:coordinate_system_on_each_patch}
\end{figure}

\begin{remark}
    Following the strategy in Table~\ref{tab:orientation_and_coordinate} is of crucial importance for the algorithm's stability. Figure~\ref{fig:parametric_of_slave_master} shows the restriction of the new coordinate system on the coupling edges for different coupling scenarios. As can be seen, the new coordinate system always follows the right-hand rule. On the slave patch, $\bar{\theta}^1$ is always perpendicular to $\bar{\theta}^2$, while on the master patch, $\bar{\theta}^1$ and $\bar{\theta}^2$ form an angle in the range $(0^{\circ}, 180^{\circ})$.
    Thus, the director of the new coordinate system is always consistent with the director of the original coordinate system, i.e.
    \begin{equation}
        \left\{
        \begin{split}
            \bar{\mathbf{A}}^s_3 &= {\mathbf{A}}^s_3\\
            \bar{\mathbf{A}}^m_3 &= {\mathbf{A}}^m_3
        \end{split}
        \right..\label{eq:A3_consistency}
    \end{equation}
    On the contrary, if we do not follow the scheme in Table~\ref{tab:orientation_and_coordinate}, the new coordinate system may not obey the right-hand rule, Equation~\eqref{eq:A3_consistency} may not be satisfied and the rotation angle from Equation~\eqref{eq:rotation_angle} may not necessarily rotate $\bar{\mathbf{A}}^s_1$ into the tangent plane of $\Omega_m$.
\end{remark}

\begin{figure}[ht]
    \centering
    \begin{subfigure}[b]{0.47\textwidth}
        \centering
        \includestandalone[scale=.9]{master_coordinates}
        \caption{Parametric domain of the master patch}
    \end{subfigure}
    \begin{subfigure}[b]{0.47\textwidth}
        \centering
        \includestandalone[scale=.9]{slave_coordinates}
        \caption{Parametric domain of the slave patch}
    \end{subfigure}
    \caption{The new coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$ on the parametric domains of both the slave patch and the master patch. The coordinate systems on different edges denote the orientations in different coupling scenarios. Note that no matter which edge is coupled, the new coordinate system always obeys the right-hand rule on both the slave and master patches.}
    \label{fig:parametric_of_slave_master}
\end{figure}

\subsection{Generic dual-compatible constraints for Kirchhoff-Love shell coupling}

Shell patches can be coupled smoothly as well as joined at a kink. In this subsection, we propose a set of constraints that handles shell coupling in a systematic manner and differs from existing Kirchhoff-Love shell continuity constraints in several respects. For instance, the constraint proposed in~\cite{schuss2019multi} deals only with the $G^1$ continuity of adjacent patches, while the proposed formulation handles shell patches joined at different angles in a uniform framework: when patches are coupled smoothly, the proposed constraints enforce $C^1$ continuity across adjacent patches, while the angle between the directors of adjacent patches is preserved when they are joined at a kink. The constraint presented in~\cite{coox_robust_2017, hirschler2019embedded} is designed for small-deformation problems, while the proposed formulation solves both small-deformation and large-deformation problems. In addition, the proposed constraints are compatible with dual bases, i.e. the discretized constraint matrix takes the form of Equation~\eqref{eq:constraint-form-non-hom} and the \textit{inf-sup} stability is automatically satisfied.\par

In the coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$, two physical properties that are satisfied in the reference configuration should be preserved during the deformation:
\begin{subequations}
    \begin{align}
        \mathbf{X}^s-\mathbf{X}^m=0\quad                                                              & \Rightarrow\quad \mathbf{x}^s-\mathbf{x}^m=0,\label{eq:preserve_c0}                                                               \\
        \bar{\mathbf{A}}^s_3 - \mathbf{R}_{\bar{\mathbf{A}}^m_2,\theta}(\bar{\mathbf{A}}^m_3)=0 \quad & \Rightarrow\quad \bar{\mathbf{a}}^s_3 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_3) = 0,\label{eq:preserve_c1}
    \end{align}
\end{subequations}
where Equation~\eqref{eq:preserve_c0} expresses the continuity of the displacement and Equation~\eqref{eq:preserve_c1} reflects the rotational continuity between the two patches. Equation~\eqref{eq:preserve_c1} can be applied directly in a classic Lagrange multiplier formulation; for dual mortaring, however, modifications are required to take full advantage of the dual basis functions.\par

We modify Equation~\eqref{eq:preserve_c1} by the following steps:
\begin{equation}
    \bar{\mathbf{a}}^s_3 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_3) = 0 \supset \bar{\mathbf{a}}^s_1 \times \bar{\mathbf{a}}^s_2 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1 \times \bar{\mathbf{a}}^m_2) = 0\supset \bar{\mathbf{a}}^s_1 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1) = 0,
\end{equation}
where the symbol $\supset$ indicates that the functions satisfying the equation on the right-hand side form a subset of those satisfying the equation on the left-hand side. Combining this with Equation~\eqref{eq:reference_constraint_c1}, we have:
\begin{equation}
    \bar{\mathbf{A}}^s_1 - \mathbf{R}_{\bar{\mathbf{A}}^m_2,\theta}(\bar{\mathbf{A}}^m_1) = 0 \quad \Rightarrow\quad \bar{\mathbf{a}}^s_1 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1) = 0.\label{eq:preserve_c1_dual}
\end{equation}
Subtracting the reference-configuration equations from the current-configuration equations in Equations~\eqref{eq:preserve_c0} and~\eqref{eq:preserve_c1_dual}, respectively, we obtain:
\begin{subequations}
    \begin{align}
        \mathbf{u}^s-\mathbf{u}^m=0,\label{eq:constraint_c0_kl} \\
        \bar{\mathbf{u}}^s_{,1} - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1) + \mathbf{R}_{\bar{\mathbf{A}}^m_2,\theta}(\bar{\mathbf{A}}^m_1) = 0.\label{eq:constraint_c1_kl}
    \end{align}
\end{subequations}

Note that, for two patches that are coupled smoothly, i.e. $\theta=0$, Equation~\eqref{eq:constraint_c1_kl} reduces to
\begin{equation}
    \bar{\mathbf{u}}^s_{,1} - \mathbf{R}_{\bar{\mathbf{a}}^m_2,0}(\bar{\mathbf{a}}^m_1) + \mathbf{R}_{\bar{\mathbf{A}}^m_2,0}(\bar{\mathbf{A}}^m_1) = \bar{\mathbf{u}}^s_{,1} - \bar{\mathbf{a}}^m_1 + \bar{\mathbf{A}}^m_1 = \bar{\mathbf{u}}^s_{,1} - \bar{\mathbf{u}}^m_{,1} = 0,\label{eq:constraint_c1_kl_smooth}
\end{equation}
which is indeed the $C^1$ continuity condition in the coordinate system $(\bar{\theta}^1, \bar{\theta}^2)$. Both Equation~\eqref{eq:constraint_c0_kl} and Equation~\eqref{eq:constraint_c1_kl_smooth} are linear.
To solve the nonlinear problem at $\mathbf{u}^{i+1} = \mathbf{u}^{i} +{\Delta} \mathbf{u}$, we then have
\begin{subequations}
    \begin{align}
        \Delta\mathbf{u}^s-\Delta\mathbf{u}^m=0, \\
        \Delta\bar{\mathbf{u}}^s_{,1} - \Delta\bar{\mathbf{u}}^m_{,1} = 0.
    \end{align}
\end{subequations}
However, when patches are coupled at a kink, the constraint~\eqref{eq:constraint_c1_kl} is no longer linear. Hence, the Newton-Raphson method is needed to apply the constraint~\eqref{eq:constraint_c1_kl} iteratively, as
\begin{equation}
    \Delta\bar{\mathbf{u}}^s_{,1}-\frac{\partial \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1)}{\partial \mathbf{u}^m}\Delta\bar{\mathbf{u}}^m_{,1} = \mathbf{r}_c^i,\quad\text{with}\quad \mathbf{r}_c^i = -\left[\bar{\mathbf{a}}^s_1 - \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1)\right]_{\mathbf{u}=\mathbf{u}^{i}}.
\end{equation}

\begin{remark}
    It is important to use $\bar{\mathbf{a}}^m_2$ as the rotation axis in the rotation operator formulation of Equation~\eqref{eq:preserve_c1_dual}. Although $\bar{\mathbf{a}}^s_2$ equals $\bar{\mathbf{a}}^m_2$ in the weak sense, using $\bar{\mathbf{a}}^s_2$ as the axis would introduce $\Delta\bar{\mathbf{u}}^s_{,2}$ into the linearization, which would impede the formation of the identity submatrix in Equation~\eqref{eq:constraint-form-non-hom}.
\end{remark}

\subsection{The dual mortar formulation}

The Lagrange multiplier formulation for the multi-patch nonlinear Kirchhoff-Love shell can be stated as: find $\Delta\mathbf{u} \in \boldsymbol{\mathcal{X}}$, $\boldsymbol{\lambda}_0\in\boldsymbol{\mathcal{M}}_0 $ and $\boldsymbol{\lambda}_1\in\boldsymbol{\mathcal{M}}_1$ such that
\begin{subequations}
    \begin{empheq}[left=\empheqlbrace]{alignat=2}
        K_\text{m}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})+K_\text{b}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})+b_0(\boldsymbol{\lambda}_0,\delta\mathbf{u})+b_1(\mathbf{u}^i, \boldsymbol{\lambda}_1,\delta\mathbf{u}) & =-\delta \Pi(\mathbf{u}^{i},\delta\mathbf{u})\quad      & \forall \delta\mathbf{u}              & \in{\boldsymbol{\mathcal{X}}},   \\
        b_0(\delta\boldsymbol{\lambda}_0,\Delta\mathbf{u})                                                                                                                                                                            & =0 \quad                                                & \forall \delta\boldsymbol{\lambda}_0  & \in{\boldsymbol{\mathcal{M}}_0},\label{eq:mixed_c0_constraint} \\
        b_1(\mathbf{u}^i, \delta\boldsymbol{\lambda}_1,\Delta\mathbf{u})                                                                                                                                                              & =R_{b_1}(\mathbf{u}^i, \delta\boldsymbol{\lambda}_1) \quad  & \forall \delta\boldsymbol{\lambda}_1  & \in{\boldsymbol{\mathcal{M}}_1},\label{eq:mixed_c1_constraint}
    \end{empheq}\label{eq:kl_shell_mixed}
\end{subequations}
with
\begin{subequations}
    \begin{align}
        b_0(\delta\boldsymbol{\lambda}_0,\Delta\mathbf{u})               & = \sum_{\Gamma\in\mathbf{S}} \int_\Gamma \delta\boldsymbol{\lambda}_0\cdot\left( \Delta\mathbf{u}^s-\Delta\mathbf{u}^m \right) d\Gamma, \\
        b_1(\mathbf{u}^i, \delta\boldsymbol{\lambda}_1,\Delta\mathbf{u}) & = \sum_{\Gamma\in\mathbf{S}} \int_\Gamma \delta\boldsymbol{\lambda}_1\cdot\left( \Delta\bar{\mathbf{u}}^s_{,1}-\frac{\partial \mathbf{R}_{\bar{\mathbf{a}}^m_2,\theta}(\bar{\mathbf{a}}^m_1)}{\partial \mathbf{u}^m}\Delta\bar{\mathbf{u}}^m_{,1} \right) d\Gamma, \\
        R_{b_1}(\mathbf{u}^i, \delta\boldsymbol{\lambda}_1)              & = \sum_{\Gamma\in\mathbf{S}} \int_\Gamma \delta\boldsymbol{\lambda}_1\cdot \mathbf{r}_c^i d\Gamma,
    \end{align}
\end{subequations}
where $\mathbf{S}$ is the set of all interfaces. When patches are coupled smoothly, Equation~\eqref{eq:mixed_c1_constraint} degenerates to
\begin{equation}
    \sum_{\Gamma\in\mathbf{S}} \int_\Gamma \delta\boldsymbol{\lambda}_1\cdot\left( \Delta\bar{\mathbf{u}}^s_{,1} - \Delta\bar{\mathbf{u}}^m_{,1} \right) d\Gamma = 0.
\end{equation}

The constrained function space for the dual mortar formulation of the multi-patch Kirchhoff-Love shell problem can then be defined as
\begin{equation}
    \boldsymbol{\mathcal{V}}\coloneq\left\{ \Delta\mathbf{v}\in \boldsymbol{\mathcal{X}} \,\vert\, b_0(\boldsymbol{\mu}_0,\Delta\mathbf{v})=0 \text{ and }b_1(\mathbf{u}^i,\boldsymbol{\mu}_1,\Delta\mathbf{v})=0\quad\forall(\boldsymbol{\mu}_0,\boldsymbol{\mu}_1)\in{\boldsymbol{\mathcal{M}}_0\times{}\boldsymbol{\mathcal{M}}_1}\right\}.
\end{equation}
The dual mortar formulation for the multi-patch Kirchhoff-Love shell can then be stated as: find $\Delta\mathbf{u}=\Delta\mathbf{u}_\text{non}+\Delta\mathbf{u}_\text{hom}$, with the homogeneous contribution $\Delta\mathbf{u}_\text{hom}\in \boldsymbol{\mathcal{V}}$, such that
\begin{equation}
    K_\text{m}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})+K_\text{b}(\mathbf{u}^i,\delta\mathbf{u},\Delta\mathbf{u})=-\delta \Pi(\mathbf{u}^{i},\delta\mathbf{u}),\qquad \forall \delta\mathbf{u}\in \boldsymbol{\mathcal{V}},\label{eq:kl_shell_dual_mortar}
\end{equation}
where the non-homogeneous contribution $\Delta\mathbf{u}_\text{non}$ is a function in $\boldsymbol{\mathcal{X}}$ that satisfies both constraints~\eqref{eq:mixed_c0_constraint} and~\eqref{eq:mixed_c1_constraint}. In what follows, we show that, in the dual mortar formulation, the constrained function space $\boldsymbol{\mathcal{V}}$ and the non-homogeneous contribution $\Delta\mathbf{u}_\text{non}$ can be constructed with minimal computational cost.

\subsection{Discretization}

For each intersection $\Gamma_{sm}$, we classify the two adjacent patches as either slave $\Omega_s$ or master $\Omega_m$. Note that one patch can simultaneously be a master for one intersection and a slave for another. To approximate the solution of the variational problem~\eqref{eq:kl_shell_dual_mortar}, we discretize $\Omega_s$ and $\Omega_m$ by B-spline basis functions $\{N^s_i\}_{i\in{I_s}}$ and $\{N^m_i\}_{i\in{I_m}}$, with the index sets $I_s=\left\{1, 2, \dots, n_s\right\}$ and $I_m=\left\{n_s+1, n_s+2, \dots, n_s+n_m\right\}$.
The incremental displacement and its variation are discretized as
\begin{equation}
    \Delta\mathbf{u}^h = \sum_{i\in I_s \cup I_m} \mathbf{N}_i\cdot\Delta\mathbf{U}_i,\quad \delta\mathbf{u}^h = \sum_{i\in I_s \cup I_m} \mathbf{N}_i\cdot\delta\mathbf{U}_i,
\end{equation}
where
\begin{equation}
    \delta\mathbf{U}_i = \begin{bmatrix}
        \delta U_i^x \\
        \delta U_i^y \\
        \delta U_i^z
    \end{bmatrix},\quad
    \Delta\mathbf{U}_i = \begin{bmatrix}
        \Delta U_i^x \\
        \Delta U_i^y \\
        \Delta U_i^z
    \end{bmatrix},\quad
    \mathbf{N}_i = \begin{bmatrix}
        N_i & 0   & 0   \\
        0   & N_i & 0   \\
        0   & 0   & N_i
    \end{bmatrix},\quad
    \text{with }
    N_i=
    \begin{cases}
        N_i^s \quad & i\in{I_s}, \\
        N_i^m \quad & i\in{I_m}.
    \end{cases}
\end{equation}

The Lagrange multipliers and their variations are discretized by the dual basis of the discretized trace space of the intersections. However, for a multi-patch decomposition, at least three patches meet at an interior vertex and several interfaces can share this extraordinary point as a common endpoint. If we discretize the Lagrange multiplier space with the same dimension as the univariate basis of the slave side, we obtain too many constraints, as some of the control points in the neighborhood of a vertex may serve as both slave nodes and master nodes. We overcome this issue by considering a set of dual basis functions $\left\{\hat{N}_i\right\}_{i=1}^{n^s_{\bar{\theta}^2}-4}$ of codimension four with respect to the corresponding $n^s_{\bar{\theta}^2}$-dimensional trace space, which satisfies the biorthogonality relation
\begin{equation}
    \int_{\Gamma_{sm}} \hat{N}_{i}({\bar{\theta}^2})N^s_{j+2}({\bar{\theta}^2}) d \Gamma = \delta_{ij}, \quad 1\leq i,j-2\leq  n^s_{\bar{\theta}^2}-4,
\end{equation}
where the basis functions $N^s_{j+2}({\bar{\theta}^2})$ of the trace space depend on the orientation and are summarized in Table~\ref{tab:lagrange_multiplier_discretization}. The codimension can be accomplished by coarsening the mesh in the neighborhood of each vertex: for the global dual basis, we remove the two knots adjacent to each vertex; for the enriched \Bezier dual basis, there is a built-in coarsening algorithm, see Chapter~\ref{chp:chapter4} for details.
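To make the notion of a dual basis concrete, the following sketch constructs a (globally supported) dual basis for a toy polynomial trace space by inverting the Gram matrix. This is for intuition only: the global and enriched \Bezier dual bases used in this work are constructed differently (see Chapter~\ref{chp:chapter4}), but they satisfy the same biorthogonality relation.

\begin{verbatim}
import numpy as np

# Toy degree-3 polynomial "trace space" on [0, 1], exact quadrature
x, w = np.polynomial.legendre.leggauss(5)   # Gauss points on [-1, 1]
x, w = 0.5 * (x + 1.0), 0.5 * w             # map to [0, 1]
N = np.vander(x, 4, increasing=True)        # N[q, j] = x_q**j

# Gram (mass) matrix M_ij = int N_i N_j and dual coefficients D = M^{-1};
# the dual basis N_hat_i = sum_k D[k, i] N_k is then biorthogonal to N_j
M = N.T @ (w[:, None] * N)
D = np.linalg.inv(M)
N_hat = N @ D                               # dual basis at the points
assert np.allclose(N.T @ (w[:, None] * N_hat), np.eye(4))
\end{verbatim}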
The Lagrange multipliers $\boldsymbol{\lambda}_0$, $\boldsymbol{\lambda}_1$ and their variations are written as:
\begin{equation}
    \begin{alignedat}{2}
        \boldsymbol{\lambda}_0^h & = \sum_{i=1}^{n^s_{\bar{\theta}^2}-4} \hat{\mathbf{N}}_i\cdot\boldsymbol{\Lambda}^0_i,\quad\quad && \delta\boldsymbol{\lambda}_0^h            = \sum_{i=1}^{n^s_{\bar{\theta}^2}-4} \hat{\mathbf{N}}_i\cdot\delta\boldsymbol{\Lambda}^0_i,            \\
        \boldsymbol{\lambda}_1^h & = \frac{1}{c}\sum_{i=1}^{n^s_{\bar{\theta}^2}-4} \hat{\mathbf{N}}_i\cdot\boldsymbol{\Lambda}^1_i,\quad  &&\delta\boldsymbol{\lambda}_1^h = \frac{1}{c}\sum_{i=1}^{n^s_{\bar{\theta}^2}-4} \hat{\mathbf{N}}_i\cdot\delta\boldsymbol{\Lambda}^1_i,
    \end{alignedat}
\end{equation}
where
\begin{equation}
    \boldsymbol{\Lambda}^0_i = \begin{bmatrix}
        \Lambda_i^{0x} \\
        \Lambda_i^{0y} \\
        \Lambda_i^{0z}
    \end{bmatrix},\quad
    \delta\boldsymbol{\Lambda}^0_i = \begin{bmatrix}
        \delta\Lambda_i^{0x} \\
        \delta\Lambda_i^{0y} \\
        \delta\Lambda_i^{0z}
    \end{bmatrix},\quad
    \boldsymbol{\Lambda}^1_i = \begin{bmatrix}
        \Lambda_i^{1x} \\
        \Lambda_i^{1y} \\
        \Lambda_i^{1z}
    \end{bmatrix},\quad
    \delta\boldsymbol{\Lambda}^1_i = \begin{bmatrix}
        \delta\Lambda_i^{1x} \\
        \delta\Lambda_i^{1y} \\
        \delta\Lambda_i^{1z}
    \end{bmatrix},\quad\hat{\mathbf{N}}_i = \begin{bmatrix}
        \hat{N}_i & 0         & 0         \\
        0         & \hat{N}_i & 0         \\
        0         & 0         & \hat{N}_i
    \end{bmatrix},
\end{equation}
and the weight $c$ is given in Table~\ref{tab:lagrange_multiplier_discretization}.

\begin{table}
    \center
    \caption{A summary of all parameters used in the description of the discretized Lagrange multipliers $\boldsymbol{\lambda}_0$ and $\boldsymbol{\lambda}_1$}
    \label{tab:lagrange_multiplier_discretization}
    \begin{tabularx}{.7\textwidth}{l@{\extracolsep{\fill}}ccc}
        \toprule
        Interface orientation & $N^s_{j}({\bar{\theta}^2})$ & $n^s_{\bar{\theta}^2}$ & $c$                                                                                                 \\
        \midrule
        South                 & $N^s_{j}({{\theta}^1})$     & $n^s_{{\theta}^1}$     & $-\left.\frac{\partial N^s_2(\theta^2)}{\partial \theta^2}\right\vert_{\theta^2=0}$                 \\
        East                  & $N^s_{j}({{\theta}^2})$     & $n^s_{{\theta}^2}$     & $\left.\frac{\partial N^s_{n^s_{\theta^1}-1}(\theta^1)}{\partial \theta^1}\right\vert_{\theta^1=1}$ \\
        North                 & $N^s_{j}({{\theta}^1})$     & $n^s_{{\theta}^1}$     & $\left.\frac{\partial N^s_{n^s_{\theta^2}-1}(\theta^2)}{\partial \theta^2}\right\vert_{\theta^2=1}$ \\
        West                  & $N^s_{j}({{\theta}^2})$     & $n^s_{{\theta}^2}$     & $-\left.\frac{\partial N^s_2(\theta^1)}{\partial \theta^1}\right\vert_{\theta^1=0}$                 \\
        \bottomrule
    \end{tabularx}
\end{table}

By substituting the discretized displacement field and Lagrange multipliers into the mixed problem~\eqref{eq:kl_shell_mixed}, we obtain the following stiffness and constraint matrices and the right-hand side of the non-homogeneous constraint~\eqref{eq:mixed_c1_constraint}:
\begin{equation}
    \begin{split}
        \delta\mathbf{U}^T\mathbf{K}_\text{kl}\Delta\mathbf{U}&=K_\text{m}(\mathbf{u}^{hi},\delta\mathbf{u}^h,\Delta\mathbf{u}^h)+K_\text{b}(\mathbf{u}^{hi},\delta\mathbf{u}^h,\Delta\mathbf{u}^h),\\
        \begin{bmatrix}
            \delta\mathbf{\Lambda}^0 \\
            \delta\mathbf{\Lambda}^1
        \end{bmatrix}^T\mathbf{B}_\text{kl}\Delta\mathbf{U}&=\begin{bmatrix}
            b_0(\delta\boldsymbol{\lambda}_0^h,\Delta\mathbf{u}^h) \\
            b_1(\mathbf{u}^{hi}, \delta\boldsymbol{\lambda}^h_1,\Delta\mathbf{u}^h)
        \end{bmatrix},\\
        \left[\delta\mathbf{\Lambda}^1\right]^T \mathbf{R}_{b_1} &= R_{b_1}(\mathbf{u}^{hi}, \delta\boldsymbol{\lambda}^h_1).
    \end{split}
\end{equation}

\subsection{Building a solution space from the discretized constraints}

In this subsection, we show how to recover the form~\eqref{eq:constraint-form-non-hom} from the constraint matrix $\mathbf{B}_\text{kl}$ by a simple linear transformation. The vector basis $\mathbf{B}^\perp_\text{kl}$ of the null space of $\mathbf{B}_\text{kl}$ can then be constructed naturally by Equation~\eqref{eq:null-space-non-hom} and a particular solution can be constructed explicitly by Equation~\eqref{eq:dual_particular_solution}. We first classify the basis functions of the discretized space into five types, depending on their vicinity to an interface or a vertex, as shown in Figure~\ref{fig:control_point_classification}:
\begin{enumerate}
    \item The second-closest column of slave basis functions to the intersection $\Gamma_{sm}$, excluding the first two and the last two; their indices form the index set $I_\text{i}$ (blue dots).
    \item The column of slave basis functions on the intersection $\Gamma_{sm}$, excluding the first two and the last two; their indices form the index set $I_\text{ii}$ (red dots).
    \item The column of master basis functions on the intersection $\Gamma_{sm}$, together with the first two and the last two slave basis functions on the intersection $\Gamma_{sm}$; their indices form the index set $I_\text{iii}$ (green dots).
    \item The second-closest column of master basis functions to the intersection $\Gamma_{sm}$, together with the first two and the last two of the second-closest column of slave basis functions to the intersection $\Gamma_{sm}$; their indices form the index set $I_\text{iv}$ (yellow dots).
    \item The basis functions whose values and whose first-order derivatives in the $\bar{\theta}^1$ direction vanish on $\Gamma_{sm}$; their indices form the index set $I_\text{v}$ (grey dots).
\end{enumerate}\par

\begin{figure}[ht]
    \center
    \includegraphics[width=.5\columnwidth]{control_points_classification}
    \caption{The classification of all basis functions for the two-patch non-conforming Kirchhoff-Love shell in Figure~\ref{fig:two_patch_shell_with_kink}.}\label{fig:control_point_classification}
\end{figure}

We reorder the basis functions as well as the Lagrange multipliers by introducing two permutation matrices $\mathbf{P}_\text{c}$ and $\mathbf{P}_\text{r}$ (this step is not necessary from the implementation point of view, but it is helpful during the derivation, especially for multi-patch problems).
We define the column-wise permutation matrix $\\mathbf{P}_\\text{c}$ as\n\\begin{equation}\n\t\\begin{bmatrix}\n\t\t\\mathbf{I}_\\text{i} \\\\\n\t\t\\mathbf{I}_\\text{ii} \\\\\n\t\t\\mathbf{I}_\\text{iii} \\\\\n\t\t\\mathbf{I}_\\text{iv} \\\\\n\t\t\\mathbf{I}_\\text{v}\n\t\\end{bmatrix}=\n\t\\mathbf{P}_\\text{c}\n\t\\begin{bmatrix}\n\t\t\\mathbf{I}_s \\\\\n\t\t\\mathbf{I}_m\n\t\\end{bmatrix},\n\\end{equation}\nwhere $\\mathbf{I}_i$ is the vector form of the index set $I_i$. We also define a row-wise permutation matrix $\\mathbf{P}_\\text{r}$ such that the permuted constraint matrix can be written in the partitioned form\n\\begin{equation}\n\t\\mathbf{B}_\\text{p}=\\left[ \\mathbf{P}_\\text{r}\\otimes\\mathbf{I}_{3\\times 3} \\right]\\mathbf{B}_\\text{kl}\\left[\\mathbf{P}_\\text{c}\\otimes\\mathbf{I}_{3\\times 3}\\right]^T=\n\t\\begin{bmatrix}\n\t\t\\mathbf{B}_1^1 & \\mathbf{B}_1^2 & \\mathbf{B}_1^3 & \\mathbf{B}_1^4 & \\mathbf{0} \\\\\n\t\t\\mathbf{0} & \\mathbf{B}_2^2 & \\mathbf{B}_2^3 & \\mathbf{0} & \\mathbf{0}\n\t\\end{bmatrix},\n\\end{equation}\nwhere $\\otimes$ is the tensor product operator, $\\mathbf{I}_{3\\times 3}$ is the ${3\\times 3}$ identity matrix, $\\mathbf{B}_1^1$ is the contribution of the first type of B-spline basis functions to the discretization of $b_1$ and $\\mathbf{B}_2^2$ is the contribution of the second type of B-spline basis functions to the discretization of $b_0$. Under the row-wise permutation matrix $\\mathbf{P}_\\text{r}$, $\\mathbf{B}_1^1$ and $\\mathbf{B}_2^2$ become identity submatrices. Under a rank-preserving transformation $\\mathbf{T}$, we can eliminate the submatrix $\\mathbf{B}_1^2$ such that\n\\begin{equation}\n\t\\sbox0{$\\begin{matrix}\\mathbf{B}_1^3-\\mathbf{B}_1^2\\mathbf{B}_2^3 & \\mathbf{B}_1^4 & \\mathbf{0} \\\\ \\mathbf{B}_2^3 & \\mathbf{0} & \\mathbf{0}\\end{matrix}$}\n\t\\mathbf{T}\\mathbf{B}_\\text{p}=\\left[\n\t\t\\begin{array}{c:c}\n\t\t\t\\makebox[\\wd0/3]{\\large $\\mathbf{I}$} & \\usebox{0} \\\\\n\t\t\\end{array}\n\t\t\\right].\\label{eq:simple_form}\n\\end{equation}\nWe may now take\n\\begin{equation}\n\t\\mathbf{B}^\\perp_\\text{p}=\n\t\\left[\\begin{array}{ccc}\n\t\t\t\\mathbf{B}_1^2\\mathbf{B}_2^3-\\mathbf{B}_1^3 & -\\mathbf{B}_1^4 & \\mathbf{0} \\\\\n\t\t\t-\\mathbf{B}_2^3 & \\mathbf{0} & \\mathbf{0} \\\\ \\hdashline[2pt/2pt]\n\t\t\t\\multicolumn{3}{c}{\\multirow{3}{*}{\\raisebox{0mm}{\\scalebox{1.5}{$\\mathbf{I}$}}}} \\\\\n\t\t\t & &\n\t\t\\end{array}\\right].\n\\end{equation}\nThe vector basis of the null space of $\\mathbf{B}_\\text{kl}$ can now be obtained from\n\\begin{equation}\n\t\\mathbf{B}^\\perp_\\text{kl}=\\left[\\mathbf{P}_\\text{c}\\otimes\\mathbf{I}_{3\\times 3}\\right]^T\\mathbf{B}^\\perp_\\text{p}.\n\\end{equation}\nWhen the constraint is not homogeneous, i.e. $\\mathbf{R}=\\begin{bmatrix}\n\t\t\\mathbf{0} & \\mathbf{R}_{b_1}\n\t\\end{bmatrix}^T\\neq \\mathbf{0}$, we have\n\\begin{equation}\n\t\\mathbf{T}\\mathbf{B}_\\text{p}\\left[\\mathbf{P}_\\text{c}\\otimes\\mathbf{I}_{3\\times 3}\\right]\\mathbf{U}^\\text{non} = \\mathbf{T}\\mathbf{R}_\\text{p} = \\mathbf{R}_\\text{p},\n\\end{equation}\nwhere $\\mathbf{R}_\\text{p}=\\left[ \\mathbf{P}_\\text{r}\\otimes\\mathbf{I}_{3\\times 3} \\right]\\mathbf{R}$.\n
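The block elimination above is simple enough to verify numerically. The following is a minimal Python sketch (illustrative only: the block sizes and the randomly generated blocks stand in for the actual submatrices $\\mathbf{B}_1^2$, $\\mathbf{B}_1^3$, $\\mathbf{B}_1^4$ and $\\mathbf{B}_2^3$ of the shell discretization) that assembles $\\mathbf{B}_\\text{p}$ with identity blocks $\\mathbf{B}_1^1$ and $\\mathbf{B}_2^2$, forms $\\mathbf{B}^\\perp_\\text{p}$ block-wise as above, and confirms that $\\mathbf{B}_\\text{p}\\mathbf{B}^\\perp_\\text{p}=\\mathbf{0}$ holds without solving any linear system.\n\\begin{verbatim}\nimport numpy as np\n\n# Block sizes for the five basis-function types (illustrative values).\nn1, n2, n3, n4, n5 = 4, 4, 6, 6, 10\nrng = np.random.default_rng(0)\nB12 = rng.normal(size=(n1, n2))  # stands in for B_1^2\nB13 = rng.normal(size=(n1, n3))  # stands in for B_1^3\nB14 = rng.normal(size=(n1, n4))  # stands in for B_1^4\nB23 = rng.normal(size=(n2, n3))  # stands in for B_2^3\n\n# Permuted constraint matrix with identity blocks B_1^1 and B_2^2.\nBp = np.block([\n    [np.eye(n1), B12, B13, B14, np.zeros((n1, n5))],\n    [np.zeros((n2, n1)), np.eye(n2), B23, np.zeros((n2, n4)),\n     np.zeros((n2, n5))],\n])\n\n# Null-space basis assembled block-wise from the same submatrices.\nBperp = np.block([\n    [B12 @ B23 - B13, -B14, np.zeros((n1, n5))],\n    [-B23, np.zeros((n2, n4)), np.zeros((n2, n5))],\n    [np.eye(n3 + n4 + n5)],\n])\n\nassert np.allclose(Bp @ Bperp, 0.0)                  # B_p B_p^perp = 0\nassert np.linalg.matrix_rank(Bperp) == n3 + n4 + n5  # full column rank\n\\end{verbatim}\n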
We can explicitly construct a particular solution from Equation~\\eqref{eq:dual_particular_solution} as\n\\begin{equation}\n\t\\mathbf{U}^\\text{non} = \\left[ \\mathbf{P}_\\text{r}\\otimes\\mathbf{I}_{3\\times 3} \\right]^T\\mathbf{R}_\\text{p} = \\mathbf{R}.\n\\end{equation}\n\n\\section{Numerical results}\\label{sec:numerical}\n\nIn this section, the performance of the proposed Kirchhoff-Love shell coupling formulation, as well as of the enriched \\Bezier dual basis, is illustrated by several challenging benchmark examples, including both linear and non-linear problems. Results computed using the $i^\\text{th}$ order global dual basis are labeled $G-Q_i$. Results computed using the $i^\\text{th}$ order enriched \\Bezier dual basis are labeled $B-Q_i$. Note that the $i^\\text{th}$ order enriched \\Bezier dual basis is constructed by reproducing polynomials of degree $i-2$.\n\n\\subsection{Linear problems}\n\nIn this subsection, we consider the performance of the proposed patch coupling formulation for the linearized Kirchhoff-Love shell model. The first example is a simply supported plate under sinusoidal load, where results are compared with the analytical solution. The second example is the Scordelis-Lo roof problem. The convergence behavior is tested against a numerical solution from a very fine mesh. In the third example, we consider a hemispherical shell subjected to two opposite point loads. Two different parameterizations are considered. All problems in this subsection are solved by the conjugate gradient iterative solver in the Eigen library~\\cite{eigenweb}.\n\n\\subsubsection{Simply supported plate under sinusoidal load}\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.36]{plate_config1}\n\t\t\\caption{Non-matching parameterized non-conforming three-patch mesh}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.28]{plate_solution-plot}\n\t\t\\caption{The reference solution}\n\t\\end{subfigure}\n\t\\caption{The decomposition and parameterization of the domain $\\left[0, 12\\right] \\times \\left[0, 12\\right]$ and the reference solution that satisfies $u=0$ on $\\partial\\Omega$.}\\label{fig:kirchhoff_plate_mesh}\n\\end{figure}\nIn the first example, we study a plate of size $L\\times L = 12\\times 12$ with thickness $t=0.375$, Young's modulus $E=4.8\\times 10^5$ and Poisson's ratio $\\nu = 0.38$, subjected to a sinusoidal pressure $p(x, y) = \\sin(\\pi \\frac{x}{L})\\sin(\\pi \\frac{y}{L})$ (acting in the $-z$ direction). The analytical solution of the vertical displacement is given by~\\cite{timoshenko1959theory} (see Figure~\\ref{fig:kirchhoff_plate_mesh})\n\\begin{equation}\n\tw(x, y) = -\\frac{L^4}{4D\\pi^4}\\sin(\\frac{\\pi x}{L}) \\sin(\\frac{\\pi y}{L}),\n\\end{equation}\nwhere $D = \\frac{Et^3}{12(1-\\nu^2)}$ is the flexural rigidity of the plate. The computational domain is decomposed into five non-conformingly coupled patches as shown in Figure~\\ref{fig:kirchhoff_plate_mesh}.\n
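As a quick sanity check on the magnitudes involved, the analytical formula above can be evaluated directly; the following minimal Python sketch (illustrative, using only the parameters listed above) computes the flexural rigidity $D$ and the peak deflection at the plate center $x=y=L/2$.\n\\begin{verbatim}\nimport math\n\nL, t, E, nu = 12.0, 0.375, 4.8e5, 0.38\nD = E * t**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity, approx 2.47e3\nw_max = -L**4 / (4.0 * D * math.pi**4)  # center deflection, approx -2.16e-2\nprint(D, w_max)\n\\end{verbatim}\n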
The simply supported boundary condition is applied by setting $\\mathbf{u}=0$ on the boundary.\\par\n\\begin{figure}[h]\n\t\\center\n\t\\captionsetup[subfigure]{labelformat=empty}\n\t\\begin{subfigure}{.47\\linewidth}\n\t\t\\center\n\t\t\\includestandalone[scale=.75]{plate_L2}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}{.47\\linewidth}\n\t\t\\center\n\t\t\\includestandalone[scale=.75]{plate_Mx}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.47\\linewidth}\n\t\t\\center\n\t\t\\includestandalone[scale=.75]{plate_Mxy}\n\t\\end{subfigure}\n\t\\caption{Convergence plots for the simply supported plate under sinusoidal pressure load. Upper left: error of $w$ measured in $L^2$ norm. Upper right: error of $M_{xx}$ measured in $L^2$ norm. Bottom: error of $M_{xy}$ measured in $L^2$ norm.}\\label{fig:convergence_square_plate}\n\\end{figure}\nFigure~\\ref{fig:convergence_square_plate} shows the convergence of the approximated vertical displacement $w^h$ and the bending moments $M^h_{xx}$ and $M^h_{xy}$ to the analytical solution as the mesh is refined. As expected, both the enriched \\Bezier dual basis and the global dual basis yield optimal results for all polynomial orders in all three measures. For the displacement error, there is no visible difference between the enriched \\Bezier dual basis and the global dual basis for all tested polynomial orders. However, compared to the global dual basis, the enriched \\Bezier dual basis introduces slightly higher errors in both $M_{xx}$ and $M_{xy}$ for $p = 3,4$, though the convergence remains optimal. \\par\n\n\n\\begin{figure}[H]\n\t\\center\n\t\\captionsetup[subfigure]{labelformat=empty}\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{e_d}\n\t\t\\caption{$B-Q_3$ $\\quad \\text{err} = u_z^h-u_z$}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{g_d}\n\t\t\\caption{$G-Q_3$ $\\quad \\text{err} = u_z^h-u_z$}\n\t\\end{subfigure}\\\\\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{e_Mxx}\n\t\t\\caption{$B-Q_3$ $\\quad \\text{err} = M_{xx}^h-M_{xx}$}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{g_Mxx}\n\t\t\\caption{$G-Q_3$ $\\quad \\text{err} = M_{xx}^h-M_{xx}$}\n\t\\end{subfigure}\\\\\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{e_Mxy}\n\t\t\\caption{$B-Q_3$ $\\quad \\text{err} = M_{xy}^h-M_{xy}$}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{.45\\linewidth}\n\t\t\\center\n\t\t\\includegraphics[scale=.3,trim={0cm 1.5cm 0cm 1.5cm},clip]{g_Mxy}\n\t\t\\caption{$G-Q_3$ $\\quad \\text{err} = M_{xy}^h-M_{xy}$}\n\t\\end{subfigure}\n\t\\caption{Contour plots of $\\text{err} = u_z^h-u_z$, $\\text{err} =  M_{xx}^h- M_{xx}$ and $\\text{err} =  M_{xy}^h- M_{xy}$ for the simply supported plate under sinusoidal load ($p=3$, and the mesh is obtained after one refinement).}\\label{fig:contour_d_mxx_mxy_plate}\n\\end{figure}\n\nContour plots of $\\text{err} = u_z^h-u_z$, $\\text{err} = M_{xx}^h-M_{xx}$ and $\\text{err} = M_{xy}^h-M_{xy}$ for cubic splines are given in Figure~\\ref{fig:contour_d_mxx_mxy_plate}. The error levels for the two types of dual bases are very similar in all three error measures.\n
For the enriched \\Bezier dual basis, error spikes form along the intersections, and the highest spikes are observed at the vertices for $M_{xx}$ and $M_{xy}$. The global dual basis, on the other hand, seems to yield more evenly distributed errors.\n\n\\FloatBarrier\n\\subsubsection{Scordelis-Lo roof problem}\n\nWe then consider the Scordelis-Lo roof benchmark problem. The Scordelis-Lo roof problem is a membrane-stress-dominated static shell problem, named after the authors who first reported it~\\cite{scordelis1964computer}. In this problem, a cylindrical shell roof (Young's modulus $E=432\\text{MPa}$, Poisson's ratio $\\nu = 0$, thickness $t = 0.25\\text{m}$), under the distributed gravity load ($f = 90\\text{N}/\\text{m}^2$), is supported by rigid diaphragms on both curved edges (i.e. $u_{x}=u_z=0$), while the straight edges are free to move, as depicted in Figure~\\ref{fig:scordelis-lo}. To improve robustness, the $y$-displacement of one DOF on the diaphragm-supported edges is fixed. The vertical displacement at the midpoint of the straight edges is taken as the reference (with the given value $u_z=-0.300592457\\text{m}$~\\cite{coox2017flexible}).\\par\n\n\\begin{figure}[h]\n\t\\centering\n\t\\captionsetup[subfigure]{font = footnotesize}\n\t\\begin{subfigure}[b]{.33\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{roof_config1}\n\t\t\\caption{}\\label{fig:scordelis-lo}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}[b]{.66\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = .48\\textwidth]{roof_c0_result}\n\t\t\\includegraphics[width = .48\\textwidth]{roof_c1_result}\n\t\t\\caption{}\\label{fig:scordelis_deform}\n\t\\end{subfigure}\n\t% \\begin{subfigure}[b]{.48\\textwidth}\n\t% \t\\centering\n\t% \t\\includegraphics[width = \\textwidth]{roof_decompose}\n\t% \t\\vspace{.5cm}\n\t% \t\\caption{Non-conforming four patch mesh with the intersections denoted by red lines.}\\label{fig:scordelis-lo-decompose}\n\t% \\end{subfigure}\n\t\\caption{The Scordelis-Lo roof problem: (a) Geometry, parameterization and boundary conditions. Note that the blue edges are free, while the red edges are fixed in $x$ and $z$ directions. (b) Deformed Scordelis-Lo roof (scaling factor of 20 is applied to the displacement). Left: Only the $C^0$ continuity constraint is applied. After deformation, kinks are formed on all intersections. Right: The $C^1$ continuity constraint is also applied. The deformed surface is as smooth as a single patch.}\n\\end{figure}\n\nThe roof structure is decomposed into four patches, which are discretized non-conformingly as shown in Fig.~\\ref{fig:scordelis-lo}. Fig.~\\ref{fig:scordelis_deform} demonstrates the effect of the proposed constraint. As can be seen, with only the $C^0$ continuity constraint enforced, although the deformed surface remains continuous, all intersections fail to transfer bending moments from one patch to another. Hence, the connections act like hinges and kinks form along all intersections. By enforcing the additional constraint, the smoothness of the roof structure is preserved, even though the mesh is non-conformingly discretized.\n
\\par\n\n% \\begin{figure}[h]\n% \t\\centering\n% \t\\captionsetup[subfigure]{font = footnotesize}\n% \t\\begin{subfigure}[b]{.49\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[width = \\textwidth]{roof_c0_result}\n% \t\t\\caption{}\n% \t\\end{subfigure}\n% \t\\begin{subfigure}[b]{.49\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[width = \\textwidth]{roof_c1_result}\n% \t\t\\caption{}\n% \t\\end{subfigure}\n% \t\\caption{Deformed Scordelis-Lo roof (scaling factor of 20 is applied to the displacement): (a) Only the $C^0$ continuity constraint is applied. After deformation, kinks are formed on all intersections. (b) The $C^1$ continuity constraint is also applied. The deformed surface is as smooth as a single patch.}\\label{fig:scordelis_deform}\n% \\end{figure}\n\nThe sparsity patterns for the stiffness matrices corresponding to the coupled problem using the global dual basis, and the coupled problem using the enriched \\Bezier dual basis are shown in Figure~\\ref{fig:scordelis-lo-sparsity}. Note that the matrix constructed using the enriched \\Bezier dual basis is much sparser than the matrix constructed using the global dual basis.\\par\n\n\\begin{figure}[h]\n\t\\centering\n\t\\captionsetup[subfigure]{font = footnotesize}\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[clip, trim=5cm 8.5cm 5cm 9cm, width = .9\\textwidth]{global_sparsity}\n\t\t\\caption{}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[clip, trim=5cm 8.5cm 5cm 9cm, width = .9\\textwidth]{enriched_sparsity}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{Stiffness matrix sparsity patterns for (a) the coupled linear system using the global dual basis, and (b) the coupled linear system using the enriched \\Bezier dual basis for the Scordelis-Lo roof problem. The stiffness matrices are computed from the four-patch domain in Figure~\\ref{fig:scordelis-lo} after $4$ levels of refinement.}\\label{fig:scordelis-lo-sparsity}\n\\end{figure}\n\nFig.~\\ref{fig:scordelis_displacement} shows the vertical displacement of the midpoint of the free edge for different polynomial degrees. Converged results are obtained by both the global dual basis and the enriched \\Bezier dual basis for all polynomial orders. For quartic basis functions, the relative error is reduced to $0.1\\%$ with only one refinement for both dual bases. The accuracy of the four-patch configurations is very similar to that of the single-patch configuration. To better study the performance of the proposed coupling formulation, we compare the displacement field of the four-patch mesh to a reference solution obtained from a very fine single-patch mesh, as shown in Figure~\\ref{fig:convergence_scordelis}. Optimal convergence rates are attained for all polynomial orders.\n
The convergence plot of the enriched \\Bezier dual basis is indistinguishable from that of the global dual basis.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\captionsetup[subfigure]{font = footnotesize}\n\t\\begin{subfigure}[b]{.32\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{Scordelis-Lo-p=2}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{.32\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{Scordelis-Lo-p=3}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{.32\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{Scordelis-Lo-p=4}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{Scordelis-Lo roof problem: a comparison of the vertical displacement at the midpoint of the free edge for different dual basis functions and degrees.}\\label{fig:scordelis_displacement}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\center\n\t\\includestandalone[scale=1]{Scordelis-Lo_convergence}\n\t\\caption{The convergence plot for the Scordelis-Lo roof problem.}\\label{fig:convergence_scordelis}\n\\end{figure}\n\\FloatBarrier\n\\subsubsection{Pinched hemispherical shell problem}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\captionsetup[subfigure]{font = footnotesize}\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{hemisphere_config}\n\t\t\\caption{Geometry and boundary conditions of the problem. The red point is fixed from moving and rotating.}\\label{fig:hemisphere-config}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = .7\\textwidth]{hemisphere_decompose}\n\t\t\\caption{Non-conforming twelve patch mesh with the intersections denoted by the red lines.}\\label{fig:hemisphere_decompose-2}\n\t\\end{subfigure}\n\t\\caption{The geometry and mesh setup of the pinched hemisphere shell problem.}\\label{fig:hemisphere}\n\\end{figure}\n\nIn this example, we consider a hemispherical shell pinched at the top and subjected to four radial point loads (see Fig.~\\ref{fig:hemisphere-config}). The bottom circumferential edge of the hemisphere is free. The thickness of the shell is $t = 0.04$ and the material properties are $E = 6.825\\times{}10^7$ and $\\nu = 0.3$. The hemisphere is decomposed into twelve patches as shown in Figure~\\ref{fig:hemisphere_decompose-2}. Note that the twelve-patch parameterization is degeneracy-free. The radial displacement at point A is taken as the benchmark quantity and a reference value is given as $u_x = 0.0924$~\\cite{kiendl2009isogeometric}.\\par\n\nThe convergence of the proposed formulation is plotted in Figure~\\ref{fig:hemisphere_result}. Convergence is observed for both types of dual basis functions. It seems that, as the mesh is refined, the enriched \\Bezier dual basis provides results that are closer to the reference solution.\n\\begin{figure}[h]\n\t\\center\n\t\\includestandalone[scale = 1]{twelve-patch-hemisphere}\n\t\\caption{Pinched hemispherical shell problem: a comparison of the radial displacement at point A for different dual basis functions and degrees.}\n\t\\label{fig:hemisphere_result}\n\\end{figure}\n\\FloatBarrier\n\\subsubsection{T-beam}\n\nShell structures with kinks and sharp folds are widely applied in engineering practice. In this example, we verify the performance of the proposed coupling formulation in preserving the coupling angle between patches during deformation.\n
A T-beam (see Herrema et al.~\\cite{herrema2019penalty}) with a thickness of $t=0.1$ in Figure~\\ref{fig:T-beam-config} is modelled by three cubic B-spline patches joined at a common edge, where the flange is formed by $14\\times 4$ and $16\\times 4$ B-spline patches and the web is formed by a $12\\times 4$ B-spline patch. The T-beam is pinned (i.e. $\\mathbf{u}=0$) on one side and deflected under a point load of $F = 10$ at one corner of the flange in the $-z$ direction (see Figure~\\ref{fig:T-beam-config}). The deformed configuration is shown in Figure~\\ref{fig:T-beam-deform}; a maximum displacement magnitude of $\\max(\\vert \\mathbf{u} \\vert)=0.0589$ is observed at the bottom tip of the web, which agrees with the result in~\\cite{herrema2019penalty}. \\par\n\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[b]{0.43\\textwidth}\n\t\t\\centering\n\t\t% \\includegraphics[width = \\textwidth]{T-beam-config}\n\t\t\\begin{tikzpicture}\n\t\t\t\\node[inner sep=0pt] at (0,0)\n\t\t\t{\\includegraphics[width=\\textwidth]{T-beam-config}};\n\t\t\t\\node[draw,align=left] at (2.2,1) {$L=20$\\\\$w=2$\\\\$h=2$};\n\t\t\\end{tikzpicture}\n\t\t\\caption{}\\label{fig:T-beam-config}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.54\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth,trim={0cm 4cm 0cm 1cm},clip]{t-beam_deform}\n\t\t\\caption{}\\label{fig:T-beam-deform}\n\t\\end{subfigure}\n\t\\caption{T-beam problem: (a) Geometry, parameterization and boundary conditions of the problem. Note that red edges are pinned ends (i.e. $\\mathbf{u}=0$). (b) Deformed configuration with a scale factor of $10$.}\n\t\\label{fig:T-beam}\n\\end{figure}\n\n\nFigure~\\ref{fig:T-beam_angle} shows the relative error between the deformed coupling angle and the original $90^\\circ$ coupling angle between the web and flange for the mesh in Figure~\\ref{fig:T-beam-config} (coarse) and the one after one refinement (fine). In the region $y\\in\\left[0, 10\\right]$ for the coarse mesh and $y\\in\\left[0, 14\\right]$ for the fine mesh, we observe a relative error close to zero. Oscillations are observed at the free end of the intersection for all tested cases. We attribute this phenomenon to the dimension of the discretized Lagrange multiplier spaces. Since the Lagrange multiplier space has codimension four with respect to the trace space, all twelve control points at the free end of the intersection become master nodes, so the constraints in this region cannot transfer stresses from one patch to the other. However, $h$-refinement does reduce the error magnitude as well as the size of the oscillation region. Owing to their compact supports, the oscillation region of the result from the enriched \\Bezier dual basis is smaller than that from the global dual basis for both the coarse and fine meshes. \\par\n\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{T-beam_angle_coarse}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{T-beam_angle_fine}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{Relative error of the angle between the flange and the web along the intersection for (a) a coarse mesh, (b) a fine mesh.}\n\t\\label{fig:T-beam_angle}\n\\end{figure}\n\\FloatBarrier\n\n\\subsection{Nonlinear problems}\n\nThe proposed formulation has demonstrated its accuracy in linear analysis.\n
In the following, the robustness of the proposed formulation will be verified by several challenging nonlinear benchmark problems. From our observations, the presence of the geometric stiffness matrix significantly influences the convergence of the conjugate gradient solver. For the sake of robustness, all problems in this subsection are solved by the SparseLU module in the Eigen library~\\cite{eigenweb}.\n\n\\subsubsection{Cantilever subjected to an end shear force}\n\nThe first nonlinear problem to be studied is a cantilever subjected to an end shear force (see Figure~\\ref{fig:cantilever_config}). The length, width and thickness of the cantilever are $L = 10$, $b = 1$ and $t = 0.1$, respectively. The material parameters are: Young's modulus $E = 1.2\\times 10^6$ and Poisson's ratio $\\nu = 0$. The left boundary is clamped ($\\mathbf{u}=\\frac{\\partial \\mathbf{u}}{\\partial x}=0$) while the right boundary is subjected to a uniformly distributed traction load in the $z$-direction with a maximum load of $f=4$ and a load increment of $\\Delta f = 0.4$. The cantilever is decomposed into three patches, which are discretized by $9\\times 3$, $5\\times 2$ and $3\\times 3$ B-spline elements, respectively (see Figure~\\ref{fig:cantilever_mesh}). A fine mesh obtained by a uniform refinement of the mesh in Figure~\\ref{fig:cantilever_mesh} is also considered in this research. The deformed cantilever is shown in Figure~\\ref{fig:cantilever_deform}.\n% \\begin{figure}[h]\n% \t\\center\n% \t\\begin{subfigure}[b]{\\textwidth}\n% \t\t\\centering\n% \t\t\\begin{tikzpicture}\n% \t\t\t\\node[inner sep=0pt] at (0,0)\n% \t\t\t{\\includegraphics[width=.7\\textwidth]{pure_shear_shell_config}};\n% \t\t\t% \\node[draw,align=left] at (7.2,0) {$E=1.2\\times 10^6$\\\\$L=10$\\\\$\\nu=0$\\\\$b=1$\\\\$f_\\text{max}=4(\\text{force/length})$\\\\$t=0.1$};\n% \t\t\\end{tikzpicture}\n% \t\t% \\includegraphics[scale=1]{pure_shear_shell_config}\n% \t\t\\caption{}\\label{fig:cantilever_config}\n% \t\\end{subfigure}\n% \t\\\\\\hspace{1em}\n% \t\\begin{subfigure}[b]{\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[scale=.35]{pure_shear_shell_mesh}\n% \t\t\\caption{}\\label{fig:cantilever_mesh}\n% \t\\end{subfigure}\n% \t\\\\\\hspace{1em}\n% \t\\begin{subfigure}[b]{\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[scale=.2,trim={2cm 4cm 2cm 1cm},clip]{pure_shear_shell_deformed}\n% \t\t\\caption{}\\label{fig:cantilever_deform}\n% \t\\end{subfigure}\n% \t\\caption{A cantilever subjected to an end shear force: (a) the problem description, (b) the three-patch non-matching discretization and (c) the initial and deformed configurations.
}\n% \\end{figure}\n\\begin{figure}[h]\n\t\\begin{tabular}[b]{cc}\n\t\t\\begin{tabular}[b]{c}\n\t\t\t\\begin{subfigure}[b]{0.48\\columnwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\textwidth]{pure_shear_shell_config}\n\t\t\t\t\\caption{}\n\t\t\t\t\\label{fig:cantilever_config}\n\t\t\t\\end{subfigure} \\\\\n\t\t\t\\begin{subfigure}[b]{0.48\\columnwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=\\textwidth]{pure_shear_shell_mesh}\n\t\t\t\t\\caption{}\n\t\t\t\t\\label{fig:cantilever_mesh}\n\t\t\t\\end{subfigure}\n\t\t\\end{tabular}\n\t\t &\n\t\t\\begin{subfigure}[b]{0.48\\columnwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth,scale=.26,trim={4cm 4cm 4cm 1cm},clip]{pure_shear_shell_deformed}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:cantilever_deform}\n\t\t\\end{subfigure}\n\t\\end{tabular}\n\t% \\label{fig:ABC}\n\t\\caption{A cantilever subjected to an end shear force: (a) the problem description, (b) the three-patch non-matching discretization and (c) the initial and deformed configurations.}\n\\end{figure}\n\nFigure~\\ref{fig:cantilever_shear_result} shows the shear traction against the horizontal ($-u_x$) and vertical ($u_z$) displacements at the free end for both the non-conforming multi-patch configuration and the reference results reported by Sze et al.~\\cite{sze2004popular}. Due to the heavy distortion of the mesh, the results are, as expected, poor for quadratic elements. However, the results for cubic splines agree with the reference result even for the coarse mesh. For all tested cases, the difference between the results obtained from the enriched \\Bezier dual basis and the global dual basis is negligible.\n\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{cantilever_shear_Q2_R0}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{cantilever_shear_Q2_R1}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\\\\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{cantilever_shear_Q3_R0}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.47\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{cantilever_shear_Q3_R1}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{Load-deflection curves for the cantilever subjected to an end shear force. The horizontal ($-u_x$) and vertical ($u_z$) displacements at the free end for (a) a quadratic coarse mesh, (b) a quadratic fine mesh, (c) a cubic coarse mesh and (d) a cubic fine mesh are compared to the results provided in~\\cite{sze2004popular}.}\n\t\\label{fig:cantilever_shear_result}\n\\end{figure}\n\\FloatBarrier\n\\subsubsection{Slit annular plate subjected to a lifting line force}\n\nIn the second example, we study a slit annular plate subjected to a lifting line force. The problem setup is illustrated in Figure~\\ref{fig:annular_shear_config}, where the inner radius, outer radius, thickness, maximum vertical traction load and load step are $R_0 = 6$, $R_1 = 10$, $t = 0.03$, $f = 0.8$ and $\\Delta f = 0.04$, respectively. Young's modulus is $E = 21\\times 10^6$ and Poisson's ratio is $0$. One end of the slit is fully clamped while the other end is lifted under the uniform traction load $f$. We benchmark the vertical displacements of points A and B.\n
To test the performance of the proposed coupling formulation, we decompose the annular plate into three NURBS patches with $6\\times 2$, $6\\times 5$ and $6\\times 3$ elements, respectively (see Figure~\\ref{fig:annular_shear_mesh}). In this example, we also consider a fine mesh obtained by a uniform refinement of the mesh in Figure~\\ref{fig:annular_shear_mesh}. The deformed annular plate is shown in Figure~\\ref{fig:annular_shear_deform}.\n\n% \\begin{figure}[h]\n% \t\\center\n% \t\\begin{subfigure}[b]{.33\\textwidth}\n% \t\t\\includegraphics[scale=.35]{pure_shear_annular_config}\n% \t\t\\caption{}\\label{fig:annular_shear_config}\n% \t\\end{subfigure}\n% \t\\begin{subfigure}[b]{.33\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[scale=.13]{annular_mesh_2}\n% \t\t\\caption{}\\label{fig:annular_shear_mesh}\n% \t\\end{subfigure}\n% \t\\begin{subfigure}[b]{.33\\textwidth}\n% \t\t\\centering\n% \t\t\\includegraphics[scale=.2,trim={16cm 4cm 16cm .5cm},clip]{annular_deformed}\n% \t\t\\caption{}\\label{fig:annular_shear_deform}\n% \t\\end{subfigure}\n% \t\\caption{Slit annular plate subjected to a lifting line force: (a) the problem description, (b) the three-patch non-conforming discretization and (c) the initial and deformed configurations.}\n% \\end{figure}\n\n\\begin{figure}[h]\n\t\\begin{tabular}[b]{cc}\n\t\t\\begin{tabular}[b]{c}\n\t\t\t\\begin{subfigure}[b]{0.4\\columnwidth}\n\t\t\t\t\\center\n\t\t\t\t\\includegraphics[width=.9\\textwidth]{pure_shear_annular_config}\n\t\t\t\t\\caption{}\n\t\t\t\t\\label{fig:annular_shear_config}\n\t\t\t\\end{subfigure} \\\\\n\t\t\t\\begin{subfigure}[b]{0.4\\columnwidth}\n\t\t\t\t\\center\n\t\t\t\t\\includegraphics[width=.9\\textwidth]{annular_mesh_2}\n\t\t\t\t\\caption{}\n\t\t\t\t\\label{fig:annular_shear_mesh}\n\t\t\t\\end{subfigure}\n\t\t\\end{tabular}\n\t\t &\n\t\t\\begin{subfigure}[b]{0.58\\columnwidth}\n\t\t\t\\center\n\t\t\t\\includegraphics[scale=.27,trim={17cm 4cm 17cm .5cm},clip]{annular_deformed}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:annular_shear_deform}\n\t\t\\end{subfigure}\n\t\\end{tabular}\n\t% \\label{fig:ABC}\n\t\\caption{Slit annular plate subjected to a lifting line force: (a) the problem description, (b) the three-patch non-conforming discretization and (c) the initial and deformed configurations.}\n\\end{figure}\n\nFigure~\\ref{fig:annular_shear_result} shows the load against the vertical deflections of points A and B for both the non-conforming multi-patch configuration and the reference results provided in~\\cite{sze2004popular}. Cubic elements are utilized in all tested cases. Whereas the multi-patch results obtained from the coarse mesh show a slight discrepancy from the reference results, a good agreement with the reference results is observed for the fine mesh. Again, the difference between the results obtained from the enriched \\Bezier dual basis and the global dual basis is negligible.\n\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{annular_shear_Q3_R0}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{.48\\textwidth}\n\t\t\\centering\n\t\t\\includestandalone[scale=.8]{annular_shear_Q3_R1}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{Load-deflection curves for the slit annular plate lifted by a lifting line force. The vertical displacements at points A and B for (a) a cubic coarse mesh and (b) a cubic fine mesh are compared to the results provided in~\\cite{sze2004popular}.
}\n\t\\label{fig:annular_shear_result}\n\\end{figure}\n\\FloatBarrier\n\\subsubsection{Pullout of an open-ended cylindrical shell}\n\nIn this test, an open-ended cylinder is pulled by a pair of radial forces. The problem setup is illustrated in Figure~\\ref{fig:cylindrical_pull_config}, where the radius, length, thickness of the cylinder, radial force and load step are $R = 4.953$, $L = 10.35$, $t = 0.094$, $P = 40,000$ and $\\Delta P = 1,000$, respectively. The material properties are: Young's modulus $E = 10.5\\times 10^6$ and Poisson's ratio $\\nu = 0.3125$. We benchmark $u_z$ at point A and $u_x$ at points B and C, respectively. The cylindrical shell is modeled by four NURBS patches, discretized by $32\\times 16$, $28\\times 14$, $28\\times 14$ and $32\\times 16$ elements, respectively (see Figure~\\ref{fig:cylindrical_pull_config}). The results of Sze et al.~\\cite{sze2004popular} are used as the reference. A good agreement with the reference results is observed in Figure~\\ref{fig:cylindrical_pull_result}, indicating the accuracy and robustness of our formulation.\n\n\\begin{figure}[h]\n\t\\center\n\t\\begin{subfigure}[b]{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.55\\textwidth]{cylindrical_configuration}\n\t\t% \\begin{tikzpicture}\n\t\t% \t% \\node[inner sep=0pt] at (0,0)\n\t\t% \t{\\includegraphics[width=.6\\textwidth]{cylindrical_configuration}};\n\t\t% \t% \\node[draw,align=left] at (7.2,0) {$E=10.5\\times 10^6$\\\\$R=4.953$\\\\$\\nu=0.3125$\\\\$L=10.35$\\\\$h=0.094$\\\\$P=40000$};\n\t\t% \\end{tikzpicture}\n\t\t\\caption{}\\label{fig:cylindrical_pull_config}\n\t\\end{subfigure}\n\t\\\\\\hspace{1em}\n\t\\begin{subfigure}[b]{.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[scale=.2,trim={18cm 3cm 18cm 0cm},clip]{cylindrical_deformed}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[scale=.3,trim={22cm 8cm 22cm 3cm},clip]{cylindrical_deformed_2}\n\t\t\\caption{}\n\t\\end{subfigure}\n\t\\caption{The open-ended cylindrical shell subjected to radial pulling forces: (a) the problem description and the four-patch non-matching discretization, (b) the initial and deformed configurations in 3D view, and (c) the initial and deformed configurations in the $y$-axis view.}\n\\end{figure}\n\\newpage \n\\vfill\n\\begin{figure}\n\t\\centering\n\t\\includestandalone[scale=1]{cylindrical_shell_pull_Q3}\n\t\\caption{Load-deflection curves of the open-ended cylinder subjected to the radial pulling forces. The results are measured at points A, B and C.}\n\t\\label{fig:cylindrical_pull_result}\n\\end{figure}\n\\vfill\n\\clearpage\n\\section{Conclusion}\\label{sec:conlusion}\nIn this chapter, we present a dual mortar formulation for the Kirchhoff-Love shell problem. The proposed formulation is based on the enriched \\Bezier dual basis and generic dual-compatible constraints. The enriched \\Bezier dual basis reproduces polynomials up to a given order without losing its locality. Thanks to the dual-compatible constraint, the biorthogonality between the dual basis functions and the corresponding primal spline basis functions can be extended to the discretized constraint matrix. Hence, static condensation can be achieved without extra computational effort. With the help of the enriched \\Bezier dual basis, the condensed linear system remains sparse.\n
Moreover, the constraint utilized in our formulation is generic in the sense that it handles $C^1$ continuity for smooth shell coupling as well as angle preservation for patches joined at a kink. When kinks are present, the constraint is no longer linear. Thus, Newton-Raphson iteration is needed in order to apply the constraint. Due to the presence of the residual of the constraint, the linearized constraint is non-homogeneous. Thanks to the unique structure of the constraint matrix, a particular solution that satisfies the non-homogeneous constraints can be constructed without the need to solve any linear systems. \\par\n\nThe accuracy and robustness of the proposed formulation are verified by several linear and nonlinear benchmark problems. The Kirchhoff plate and Scordelis-Lo roof problems indicate the optimality of the proposed formulation. The T-beam and L-beam problems demonstrate the ability of the proposed formulation to preserve the coupling angle. From the benchmark results, we believe the proposed patch coupling formulation has great potential in addressing real-world complex shell problems.", "meta": {"hexsha": "295bdc75a1f86921f5aaa6225cc6f0f7bd929f39", "size": 88393, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exampleFiles/chapter5.tex", "max_stars_repo_name": "miaodi/phd_dissertation", "max_stars_repo_head_hexsha": "80b8d49e46c1ef620f87b30fc45782fe0369c056", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exampleFiles/chapter5.tex", "max_issues_repo_name": "miaodi/phd_dissertation", "max_issues_repo_head_hexsha": "80b8d49e46c1ef620f87b30fc45782fe0369c056", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exampleFiles/chapter5.tex", "max_forks_repo_name": "miaodi/phd_dissertation", "max_forks_repo_head_hexsha": "80b8d49e46c1ef620f87b30fc45782fe0369c056", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.5382695507, "max_line_length": 1706, "alphanum_fraction": 0.7180206577, "num_tokens": 29462, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5983448868170657}}
{"text": "\\documentclass[letterpaper,hidelinks]{article}\n\\usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{listings}\n\\usepackage{enumitem}\n\\usepackage{listings}\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\\usepackage{hyperref}\n\\usepackage{subfigure}\n\\lstset{language=R,%\n\tbasicstyle=\\footnotesize,\n\t%basicstyle=\\color{red},\n\tbreaklines=true,%\n\tshowstringspaces=false,%without this there will be a symbol in the places where there is a space\n\tnumbers=left,%\n\tnumbersep=9pt, % this defines how far the numbers are from the text\n\t%emph=[2]{word1,word2}, emphstyle=[2]{style},    \n}\n \n\\numberwithin{equation}{section}\n\\author{KK Feng}\n\\title{Histogram Clustering with Expectation-Maximization on SAR Images}\n\\date{}\n\\begin{document}\n\\maketitle\n\\section{Problem}\n\\subsection{Description}\nImage segmentation\\footnote{Image segmentation refers to the task of dividing an input image into regions of pixels that belong together.} problem for synthetic aperture radar (SAR) images\\footnote{This type of image is well-suited for segmentation by histogram clustering, because the local intensity distributions provide distinctive information about the segments.}.\n\\subsection{Data}\n\\begin{itemize}\n\\item \\textbf{histograms.bin}: the histograms extracted from an 800 by 800 grayscale image per following procedure\n\\begin{enumerate}\n\\item Select a subset of pixels.\n\\item Place a rectangle of fixed radius around the site pixel.\n\\item Select all pixels within the rectangle and sort their intensity values into a histogram.\n\\end{enumerate}\n\\end{itemize}\n\\subsection{Idea}\n\\begin{itemize}\n\\item Extract the histograms from the image\n\\begin{itemize}\n\\item The histograms were drawn at the nodes of a 4-by-4 pixel grid, therefore there are 200 by 200 = 40000 histograms. \n\\item Each histogram was drawn within a rectangle of edge length 11 pixels, so each histogram contains 11 by 11 = 121 values.\n\\end{itemize}\n\\item Apply the Expectation-Maximization algorithm and a finite mixture of multinomial distributions.\n\\end{itemize}\n\n\\section{Solution}\n\\subsection{Plot}\nWe have an integer $K$ which specifies the number of clusters and a threshold parameter $\\tau$ which is used for the termination of the iteration. For $K=3,4,5$, we tested $\\tau$ for following values\n\\begin{align}\n1,~0.1,~0.01,~0.001,~0.0001,~0.00001,0.000001,0.0000001\n\\end{align}\nThen we visualize the clustering results, which are the numbers of clusters assigned to the histograms, as an image. Note that when use $image$ function in R, we need to rotate the axes to get the images to the right position. 
From the figures below, we can see that the larger $K$ is, the smaller $\\tau$ the algorithm requires to get a convergent solution.\n\\begin{figure}\n\\centering\n\\subfigure[$\\tau$=1]{\\includegraphics[width=1.8in]{13}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.1]{\\includegraphics[width=1.8in]{14}}\n\\\\\n\\subfigure[$\\tau$=0.01]{\\includegraphics[width=1.8in]{15}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.001]{\\includegraphics[width=1.8in]{16}}\n\\\\\n\\subfigure[$\\tau$=0.0001]{\\includegraphics[width=1.8in]{17}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.00001]{\\includegraphics[width=1.8in]{18}}\n\\\\\n\\subfigure[$\\tau$=0.000001]{\\includegraphics[width=1.8in]{19}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.0000001]{\\includegraphics[width=1.8in]{110}}\n\\caption{K=3}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\subfigure[$\\tau$=1]{\\includegraphics[width=1.8in]{23}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.1]{\\includegraphics[width=1.8in]{24}}\n\\\\\n\\subfigure[$\\tau$=0.01]{\\includegraphics[width=1.8in]{25}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.001]{\\includegraphics[width=1.8in]{26}}\n\\\\\n\\subfigure[$\\tau$=0.0001]{\\includegraphics[width=1.8in]{27}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.00001]{\\includegraphics[width=1.8in]{28}}\n\\\\\n\\subfigure[$\\tau$=0.000001]{\\includegraphics[width=1.8in]{29}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.0000001]{\\includegraphics[width=1.8in]{210}}\n\\caption{K=4}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\subfigure[$\\tau$=1]{\\includegraphics[width=1.8in]{33}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.1]{\\includegraphics[width=1.8in]{34}}\n\\\\\n\\subfigure[$\\tau$=0.01]{\\includegraphics[width=1.8in]{35}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.001]{\\includegraphics[width=1.8in]{36}}\n\\\\\n\\subfigure[$\\tau$=0.0001]{\\includegraphics[width=1.8in]{37}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.00001]{\\includegraphics[width=1.8in]{38}}\n\\\\\n\\subfigure[$\\tau$=0.000001]{\\includegraphics[width=1.8in]{39}}\n\\hspace{0.1in}\n\\subfigure[$\\tau$=0.0000001]{\\includegraphics[width=1.8in]{310}}\n\\caption{K=5}\n\\end{figure}\n\\subsection{Code}\n\\lstinputlisting{em.R}\n\\end{document}", "meta": {"hexsha": "8d5f6477e5756de51404233f0d82d7aa9b4f6f5d", "size": 4644, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "clustering-expectation-maximization-sar-images/readme.tex", "max_stars_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_stars_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "clustering-expectation-maximization-sar-images/readme.tex", "max_issues_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_issues_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "clustering-expectation-maximization-sar-images/readme.tex", "max_forks_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_forks_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.6923076923, "max_line_length": 369, 
"alphanum_fraction": 0.7493540052, "num_tokens": 1542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.8289388146603364, "lm_q1q2_score": 0.598280947994754}}
{"text": "\\section{Data Structures}\n\\lst{Fenwick tree}{(range) query: $\\mathcal{O}(\\log n)$, (point) update $\\mathcal{O}(\\log n)$}{dataStructures/FT.cc}\n\\lst{2D Fenwick tree}{(range) query: $\\mathcal{O}(\\log n\\log m)$, (point) update $\\mathcal{O}(\\log n\\log m)$}{dataStructures/FT2D.cc}\n\\lst{Segment tree}{build: $\\mathcal{O}(n)$, (range) query: $\\mathcal{O}(\\log n)$, (point) update $\\mathcal{O}(\\log n)$}{dataStructures/STIT.cc}\n\\lst{Union Find/DSU}{$n$ Elements, $m \\leq n$ Operations: $\\mathcal{O}(m \\cdot \\alpha(m, n)) \\approx \\mathcal{O}(n)$}{dataStructures/DSU.cc}\n\n\\begin{code}{1D Sparse Table}{build: $\\mathcal{O}(n \\log n)$ query: $\\mathcal{O}(1)$}{dataStructures/SPT.cc}\n  For any idempotent function\n\\end{code}\n\n\\lst{2D Sparse Table}{build: $\\mathcal{O}(nm \\log n \\log m)$ query: $\\mathcal{O}(1)$}{dataStructures/SPT2D.cc}\n", "meta": {"hexsha": "ed2a310a1700f8721dd5dd5b6512705982960edc", "size": 825, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/dataStructures.tex", "max_stars_repo_name": "Zeldacrafter/CompProg", "max_stars_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-02-06T15:44:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-21T03:51:21.000Z", "max_issues_repo_path": "document/dataStructures.tex", "max_issues_repo_name": "Zeldacrafter/CompProg", "max_issues_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document/dataStructures.tex", "max_forks_repo_name": "Zeldacrafter/CompProg", "max_forks_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.75, "max_line_length": 143, "alphanum_fraction": 0.6678787879, "num_tokens": 316, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178919837705, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5982499274957347}}
{"text": " \\chapter[Multiple scattering]{Multiple scattering}\n\n The G4MultipleScattering class simulates the multiple scattering of\ncharged particles  in material.It uses a new multiple scattering (MSC)\nmodel which  does not use the\nMoliere formalism(\\cite{msc.moliere}).This MSC model simulates the scattering of the \nparticle after a given step , computes the mean path length \ncorrection and the mean lateral displacement as well.\n\n Let us define a few notation first.\n\n The true path length ('t' path length) is the total length travelled\nby the particle. All the physical processes restrict this 't' step.\n\n The geometrical ( or 'z') path length is the straight distance between\nthe starting and endpoint of the step , if there is no magnetic field.\nThe geometry gives a constraint for this 'z' step. It should be noted,\nthat the geometrical step length is meaningful in the case of magnetic\nfield, too, but in this case it is a distance along a curved \ntrajectory.\n\n The mean properties of the multiple scattering process are determined\nby the transport mean free path , \\(\\lambda\\) , which is a function of the \nenergy in a given material.Some of the mean properties - the mean lateral\n displacement and the second moment of cos(theta) - depend on the second\n transport mean free path, too. (The transport mean free path is called\n first transport mean free path as well.)\n\n  The 't'\\(\\Rightarrow\\)'z' (true path length -- geometrical path length) transformation is given by the simple equation\n\n   \\begin{equation}\n         z = \\lambda*(1.-exp(-t/\\lambda))                \\label{msc.a}\n   \\end{equation}\n\n which is an exact result for the mean values of z , if\nthe differential cross section has an axial symmetry and the energy loss\ncan be neglected .\n  This formula and some other expressions for the first moments of the spatial\ndistribution after a given 'true' path length t have been taken from the excellent\npaper of Fernandez-Varea et al. \\cite{msc.fernandez}, but the expressions have been\ncalculated originally by Goudsmit and Saunderson \\cite{msc.goudsmit} and Lewis\n\\cite{msc.lewis}.\n  Inverting eq. \\ref{msc.a} the 'z'\\(\\Rightarrow\\)'t' transformation can be written as\n\n   \\begin{equation}\n        t = -\\lambda*ln(1.-z/\\lambda)                     \\label{msc.b}\n   \\end{equation}\n\n where \\(z < \\lambda\\) should be required (this condition is fulfilled\n if z has been computed from eq. \\ref{msc.a}).\n\n  The mean value of \\(cos(\\theta)\\) - \\(\\theta\\) is the scattering angle after a\ntrue step length t - is\n\n   \\begin{equation}\n          <cos(\\theta)> = exp(-t/\\lambda)               \\label{msc.c}  \n   \\end{equation}\n\n  The transport mean free path values have been calculated by Liljequist et al.\n\\cite{msc.liljequist2},\\cite{msc.liljequist1} for electrons and positrons in the kinetic\nenergy range \\(0.1 keV -- 20 MeV\\) in 15 materials . The MSC model uses these\nvalues with an appropriate interpolation or extrapolation in the atomic number\n\\(Z\\) and in the velocity of the particle \\(\\beta\\) , when it is necessary.\n  \n The quantity \\(cos(\\theta)\\) is sampled in the MSC model according to a model function\n \\(f(cos(\\theta))\\). The shape of this function has been choosen in such a way,\nthat\\(f(cos(\\theta))\\) reproduces the results of the direct simulation ot the particle\ntransport rather well and eq. 
eq.~\\ref{msc.c} is satisfied.\n The functional form of this model function is\n\n  \\begin{equation}\n      f(x) = p \\frac{(a + 1)^2 (a - 1)^2}{2 a} \\frac{1}{(a-x)^3}\n            + (1-p) \\frac{1}{2}                            \\label{msc.d}\n  \\end{equation}\n\n where \\(x = \\cos\\theta\\), \\(0 \\leq p \\leq 1\\) and \\(a > 1\\). The model parameters \\(p\\) and \\(a\\) depend on the path length t, the energy of the particle and the material. They are not independent parameters; they should satisfy the constraint\n\n  \\begin{equation}\n        \\frac{p}{a} = \\exp\\left(-\\frac{t}{\\lambda}\\right)              \\label{msc.e}\n  \\end{equation}\n\n which follows from eq.~\\ref{msc.c}.\n\n  The mean lateral displacement is given by a more complicated formula (see the paper \\cite{msc.fernandez}), but this quantity can also be calculated relatively easily and accurately.\n \n  It is worth noting that in this MSC model there is no step limitation originating from the multiple scattering process. Another important feature of this model is that the total 'true' path length of the particle does not depend on the lengths of the steps. Most of the algorithms used in simulations do not have these properties.\n  \n In the case of heavy charged particles (\\(\\mu\\), \\(\\pi\\), proton, etc.) the mean transport free path is calculated from the \\(e+/e-\\) \\(\\lambda\\) values with a 'scaling'.\n\n In its present form the model computes and uses {\\em mean} path length corrections and lateral displacements; the only {\\em random} quantity is the scattering angle \\(\\theta\\), which is sampled according to the model function \\(f\\).\n\n  The G4MultipleScattering process has 'AlongStep' and 'PostStep' parts.\n\n  The AlongStepGetPhysicalInteractionLength function performs the\\linebreak \\mbox{'t' step \\(\\Rightarrow\\) 'z' step} transformation. It should be called after the other physics GetPhysicalInteractionLength functions but before the GetPhysicalInteractionLength of the transportation process. The reason for this restriction is the following: the physics processes 'feel' the true path length travelled by the particle, while the geometry (transport) uses the 'z' step length. If we want to compare the minimum step size coming from the physics with the constraint of the geometry, we have to make the transformation.\n\n  The AlongStepDoIt function of the process performs the inverse, 'z'\\(\\Rightarrow\\)'t' transformation. This function should be called after the AlongStepDoIt of the transportation process, i.e. after the particle relocation determined by the geometrical step length, but before applying any other (physics) AlongStepDoIt.\n\n  The PostStepGetPhysicalInteractionLength part of the multiple scattering process is very simple: it sets the force flag to 'Forced' in order to ensure the call of the PostStepDoIt in every step, and returns a large value as the interaction length (which means that the multiple scattering process does not restrict the step size).\n\n\\section{Status of this document}\n  9.10.98  created by L. Urb\\'an.\n\n\\begin{thebibliography}{99}\n\\bibitem[Mol48]{msc.moliere}\n   {\\em Z. Naturforsch. 3a (1948) 78. }\n\\bibitem[Fer93]{msc.fernandez}J. M. Fernandez-Varea et al.\n   {\\em NIM B73 (1993) 447.}\n\\bibitem[Goud40]{msc.goudsmit}S. Goudsmit and J. L. Saunderson.\n   {\\em Phys. Rev. 57 (1940) 24. }\n\\bibitem[Lew50]{msc.lewis} H. W. Lewis. \n   {\\em Phys. Rev. 78 (1950) 526. }\n\\bibitem[Lil87]{msc.liljequist1} D. Liljequist and M.\n
Ismail.\n   {\\em J.Appl.Phys. 62 (1987) 342. }\n\\bibitem[Lil90]{msc.liljequist2} D. Liljequist et al.\n   {\\em J.Appl.Phys. 68 (1990) 3061. }\n\\end{thebibliography}\n", "meta": {"hexsha": "68ccddbc1cc0f1c2d11047d357eb18e111486108", "size": 6894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geant4/electromagnetic/muons/msc.tex", "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_issues_repo_path": "geant4/electromagnetic/standard/msc.tex", "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geant4/electromagnetic/standard/msc.tex", "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.2684563758, "max_line_length": 120, "alphanum_fraction": 0.729329852, "num_tokens": 1818, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8807970811069351, "lm_q2_score": 0.679178699175393, "lm_q1q2_score": 0.5982186157836913}}
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[a4paper]{geometry}\n\n\n\\usepackage{graphicx}\n\\usepackage{amsmath,amssymb,mleftright}\n\\usepackage{mathtools} % For cases environment\n\\usepackage[round]{natbib}\n\\usepackage{microtype}\n\n\\usepackage[british]{babel}\n\n% Make a title for your question and provide your name (or a pseudonymn)\n\\title{Slope and correlation}\n\\author{C. Andidate}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section{Question}\nWhen is the slope of a straight-line fit through a set of points $(x_i,y_i)$ equal to the correlation between $x$ and $y$?\n\n\\section{Answer}\n\nWhen we fit\n\\[\n  y_i = \\alpha + \\beta x_i\n\\]\nwe know that the least square estimates are\n\\begin{align*}\n  \\hat{\\alpha} &= \\bar{y} - \\hat\\beta \\bar{x}  \\\\\n  \\hat{\\beta} &=\n  \\frac\n  {\\operatorname{Cov}(x, y)}\n  {\\operatorname{Var}(x)} \\\\\n  &= \\rho_{xy} \\frac{s_y}{s_x}\n\\end{align*}\nwhere\n$  \\bar{y} = \\frac{1}{n}\\sum_{i=1}^{n}{y_i} $,\n$  \\bar{x} = \\frac{1}{n}\\sum_{i=1}^{n}{x_i} $,\nand $\\operatorname{Var}$ and $\\operatorname{Cov}$ are the variance and covariance, respectively.\nLikewise,\n$\\rho_{xy}$,\n$s_y$, and\n$s_x$ are the sample correlation coefficient and sample standard deviations of the $y_i$s and $x_i$s, respectively.\nSo the slope is equal to the correlation when $x$ and $y$ have the same variance,\n\\begin{align*}\n  \\rho_{xy} &= \\rho_{xy} \\frac{s_y}{s_x} \\\\\n  s_y       &=  s_x\n  \\text{.}\n\\end{align*}\n\n\n%% Uncomment this if you need references\n%\\bibliography{references.bib}\n%\\bibliographystyle{chicago}\n\n\n\\end{document}", "meta": {"hexsha": "796c36834632f5060fec278824af3822f5c80834", "size": 1536, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "test.tex", "max_stars_repo_name": "AlaaMahi/Neurips-2019", "max_stars_repo_head_hexsha": "836d6cd82191bf48934d8f196a36285c6c5a782b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test.tex", "max_issues_repo_name": "AlaaMahi/Neurips-2019", "max_issues_repo_head_hexsha": "836d6cd82191bf48934d8f196a36285c6c5a782b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test.tex", "max_forks_repo_name": "AlaaMahi/Neurips-2019", "max_forks_repo_head_hexsha": "836d6cd82191bf48934d8f196a36285c6c5a782b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.7741935484, "max_line_length": 122, "alphanum_fraction": 0.6888020833, "num_tokens": 514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5981106418282813}}
{"text": "\\documentclass{subfile}\n\n\\begin{document}\n\t\\section{ABMO}\\label{sec:abmo}\n\t\t\\begin{problem}[Team Selection Test $2014$, problem $1$]\n\t\t\tProve that for an integer $n>2$, the following inequality holds:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{n+1}\\left(1+\\dfrac{1}{3}+\\ldots+\\dfrac{1}{2n-1}\\right)\n\t\t\t\t\t\t& > \\dfrac{1}{n}\\left(\\dfrac{1}{2}+\\ldots+\\dfrac{1}{2n}\\right)\n\t\t\t\t\\end{align*}\n\t\t\t\n\t\t\t\t\\begin{solution}\n\t\t\t\t\t\n\t\t\t\t\\end{solution}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[Team Selection Test $2010$, problem $4$]\n\t\t\tLet $a,b,c$ be the sides of a triangle and $k$ be a real number. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta^{3}+b^{3}+c^{3}\n\t\t\t\t\t\t& < k(a+b+c)(ab+bc+ca)\n\t\t\t\t\\end{align*}\n\t\t\tholds for $k=1$. Find the smallest value of $k$ such that the inequality holds.\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "40207c3988faeac85e3a00f0d84c70a21fc4bfa2", "size": 783, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abmo.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "abmo.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abmo.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.32, "max_line_length": 82, "alphanum_fraction": 0.6155810983, "num_tokens": 303, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019594, "lm_q2_score": 0.8080672066194946, "lm_q1q2_score": 0.5981106384070628}}
{"text": "\\chapter{Technical background}\\label{chap:background}\n\nBefore describing pruning and random structures, we must know how RNNs and their different variants, namely LSTM and GRU, operate. Therefore, in this section, we provide a detailed explanation of RNNs, LSTM, and GRU.\n\nTo properly understand how RNNs operate, we must understand how a perceptron, a building block of neural networks, works.\n\n% --------------------------------------------------------------------------------------------\n% ---------------------------------------- PERCEPTRON ----------------------------------------\n% --------------------------------------------------------------------------------------------\n\n\\section{Perceptron}\\label{section:perceptron}\n\nA perceptron, also known as a Single Layer Perceptron (SLP), is a single layer neural network developed by F. Rosenblatt in \\cite{Rosenblatt}, inspired by \\cite{mcculloch}. It is a linear classifier that accepts multiple binary inputs and returns a single binary output.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{images/background/perceptron.png}\n\t\\caption[Single Layer Perceptron]%\n\t{\\textbf{Single Layer Perceptron} with binary inputs $x_1, x_2, x_3, ..., x_n$ and its corresponding weights $w_1, w_2, w_3, ..., w_n$.}\n\t\\label{fig:perceptron}\n\\end{figure}\n\nAs shown in the above figure, a perceptron consists of four main parts, inputs, weights, weighted sum, and an activation function. A perceptron works by following these simple steps:\n\n\\begin{enumerate}\n    \\item Multiply each input $x$ with its corresponding weight $w$.\n    \\item Get a value for the weighted sum by adding all the multiplied values together.\n        \\begin{equation}\n        \\label{eqn:weighted_sum}\n            weighted\\; sum = \\sum_{i=1}^{n}w_i.x_i\n        \\end{equation}\n    \\item Apply this weighted sum to a activation function to generate the output $\\hat{y}$.\n\\end{enumerate}\n\nAn activation function is a significant part of a perceptron. It transforms the input of the node into the output for that node. It ensures the output value is mapped between (0, 1) or (-1, 1). Rectified Linear Unit (ReLU), Hyperbolic Tangent (Tanh) are two of the popular nonlinear activation functions explained later in the following section.\n\n\\subsection{Nonlinearity (Nonlinear Activation function)}\\label{subsection:nonlinearity}\n\nA nonlinearity, as the name suggests, is used when it is not possible to produce an output for any unit using a linear function. Concerning neural networks, three of the most widely used nonlinearities are, ReLU, Sigmoid, Tanh \\cite{nonlin}.\n\n\\subsubsection{ReLU}\\label{subsubsection:relu}\n\nA \\textit{Rectified Linear Unit} is a nonlinear activation function mathematically defined as:\n\n\\begin{equation}\n    \\label{eqn:relu}\n    y = max(0, x)\n\\end{equation}\n\nAs per the above equation, for a given input $x$, if the value of $x$ is less than $0$, it returns $0$, otherwise $x$.\n\nThe following figure shows the line plot of the ReLU activation function.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.5\\linewidth]{images/background/relu.png}\n    \\caption[Rectified Linear Unit]{Line plot of ReLU activation function}\n    \\label{fig:relu}\n\\end{figure}\n\nFor any given positive input, the derivative of ReLU simply returns 1. 
\\subsubsection{Sigmoid}\\label{subsubsection:sigmoid}\n\nThe \\textit{Sigmoid} activation function, unlike ReLU, is mainly used in \\textit{Feedforward Neural Networks}. This activation function returns a value between $0$ and $1$. The following figure shows the line plot of the Sigmoid activation function.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.7\\linewidth]{images/background/sigmoid.jpg}\n    \\caption[Sigmoid activation function]{Line plot of Sigmoid activation function \\cite{sigmoid}}\n    \\label{fig:sigmoid}\n\\end{figure}\n\nFor a given input $x$, the Sigmoid activation is mathematically written as:\n\n\\begin{equation}\n    \\label{eqn:sigmoid}\n    S(x) = \\frac{1}{1 + e^{-x}}\n\\end{equation}\n\nAs mentioned by Nwankpa et al. in \\cite{activation}, this activation function has many drawbacks, including gradient saturation and slow convergence. Some of these shortcomings can be avoided by using other activation functions such as the \\textit{hyperbolic tangent}.\n\n\\subsubsection{Tanh}\\label{subsubsection:tanh}\n\nTanh, short for \\textit{hyperbolic tangent}, is an activation function with a value range from $-1$ to $1$, making it a zero-centered activation function. The following figure shows the line plot of the Tanh activation function.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.6\\linewidth]{images/background/tanh.png}\n    \\caption[Tanh activation function]{Line plot of Tanh activation function \\cite{tanh}}\n    \\label{fig:tanh}\n\\end{figure}\n\nFor a given input $x$, the Tanh activation is mathematically written as:\n\n\\begin{equation}\n    \\label{eqn:tanh}\n    tanh(x) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}\n\\end{equation}\n\nAlthough Tanh performs better than the Sigmoid activation function during training \\cite{tanh_1, tanh_2}, it still suffers from the vanishing gradient problem.\n\nThe output from a nonlinear activation function rarely matches the actual target perfectly. For this reason, it is necessary to optimize the weight values such that the difference between the actual target and the final output is as small as possible, which is done by a process called gradient descent.\n\n
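The saturation just mentioned is easy to inspect numerically. The sketch below (an illustration, not thesis code) evaluates the derivatives of Sigmoid and Tanh at a moderately large input and compares them with ReLU's constant derivative for positive inputs:\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\nx = 5.0\ns = 1.0 / (1.0 + np.exp(-x))  # Sigmoid of x\nt = np.tanh(x)                # Tanh of x\n\nprint(s * (1.0 - s))          # ~0.0066: Sigmoid gradient saturates\nprint(1.0 - t * t)            # ~0.0002: Tanh gradient saturates\nprint(1.0 if x > 0 else 0.0)  # ReLU derivative stays 1 for x > 0\n\\end{verbatim}\n\n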
\\subsection{Batch Gradient Descent}\\label{subsection:bgd}\n\nA perceptron has fixed inputs and outputs, meaning we can only modify and improve the weights to minimize errors. An error function $E$ returns the deviation of the predicted outcome $\\hat{y}$ from the actual one $y$ as the sum of squared errors:\n\n\\begin{equation}\n    \\label{eq:error_func}\n    E(w) = \\frac{1}{2} \\sum_{i=1}^{n}(\\hat{y}^{(i)} - y^{(i)})^2\n\\end{equation}\n\nWeights are then updated using this error function as:\n\n\\begin{equation}\n    \\label{eq:weight_update}\n    w \\coloneqq w - \\eta \\nabla E(\\text{w})\n\\end{equation}\n\nwhere $\\eta$ is the learning rate and $\\nabla E(\\text{w})$ is the gradient of the error function, computed componentwise for each weight in the weight vector as:\n\n\\begin{equation}\n    \\label{eq:part_der}\n    \\nabla E(\\text{w}) = \\frac{\\partial E(\\text{w})}{\\partial w_j}\n\\end{equation}\n\nBy substituting the value of $E(\\text{w})$ from equation \\ref{eq:error_func} in the above equation, and using $\\hat{y}^{(i)} = \\sum_{j}w_{j} \\cdot x_{j}^{(i)}$, we can derive $\\nabla E(\\text{w})$, following \\cite{perc_eq}, as:\n\n\\begin{align}\n    \\nabla E(\\text{w}) &= \\frac{\\partial}{\\partial w_j}\\frac{1}{2} \\sum_{i=1}^{n}(\\hat{y}^{(i)} - y^{(i)})^2 \\nonumber \\\\\n                       &= \\frac{1}{2} \\sum_{i=1}^{n}\\frac{\\partial}{\\partial w_j}(\\hat{y}^{(i)} - y^{(i)})^2 \\nonumber \\\\\n                       &= \\frac{1}{2} \\sum_{i=1}^{n}2(\\hat{y}^{(i)} - y^{(i)})\\frac{\\partial}{\\partial w_j}(\\hat{y}^{(i)} - y^{(i)}) \\nonumber \\\\\n                       &= \\sum_{i=1}^{n}(\\hat{y}^{(i)} - y^{(i)}) \\frac{\\partial}{\\partial w_j}(\\sum_{j}w_{j} \\cdot x_{j}^{(i)} - y^{(i)}) \\nonumber \\\\\n                       &= \\sum_{i=1}^{n}(\\hat{y}^{(i)} - y^{(i)})\\, x_{j}^{(i)} \\label{eq:part_derived}\n\\end{align}\n\nBy substituting the value from equation \\ref{eq:part_derived} in \\ref{eq:weight_update}, we can re-write the weight update formula as:\n\n\\begin{equation}\n    w \\coloneqq w - \\eta \\sum_{i=1}^{n}(\\hat{y}^{(i)} - y^{(i)})(x_{j}^{(i)})\n\\end{equation}\n\nThis approach for updating weights is known as Batch Gradient Descent because every sample in the training set is considered at each weight update step. These training and weight-update steps are repeated until convergence is obtained.\n\n
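The update rule above translates directly into a short training loop. The following NumPy sketch (illustrative only; the toy data, the learning rate, and the number of epochs are assumptions) performs batch gradient descent on a linear unit:\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\nX = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # samples\ny = np.array([1.0, 2.0, 3.0])                       # targets\nw = np.zeros(2)\neta = 0.1                                           # learning rate\n\nfor epoch in range(200):\n    y_hat = X.dot(w)            # predictions for all samples\n    grad = (y_hat - y).dot(X)   # gradient, one entry per weight\n    w = w - eta * grad          # weight update rule\n\nprint(w)                        # approaches [2. 1.]\n\\end{verbatim}\n\nNote that every sample contributes to each update, which is exactly what makes this the batch variant of gradient descent.\n\n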
An SLP has no hidden layers. By adding one or more hidden layers, we can generate a Multi-Layer Perceptron (MLP), also known as an Artificial Neural Network (ANN) or simply, a Neural Network.\n\n% ------------------------------------------------------------------------------------------------------------\n% ---------------------------------------- ARTIFICIAL NEURAL NETWORKS ----------------------------------------\n% ------------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Artificial Neural Network}\\label{section:ann}\n\nAn Artificial Neural Network is a network of neurons that tries to capture the essential features of the given inputs to infer rules needed to complete a given task, such as image recognition or machine translation. It achieves this by a series of one or more hidden layers, as shown in the below figure:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{images/background/neural_network.png}\n\t\\caption[Artificial Neural Network]%\n\t{\\textbf{Artificial Neural Network} with two hidden layers, $h_1$ and $h_2$, each consisting of a plethora of neurons.}\n\t\\label{fig:neural_network}\n\\end{figure}\n\nThe neural network shown in the above figure is a Feedforward Neural Network (FFN), where the output of one layer is an input for the next layer. Each neuron of one layer is directly connected to every neuron of the next layer. Due to this dense connectivity between neurons of each layer, such layers are called dense layers.\n\nAs Goodfellow et al. described in \\cite{goodfellow}, the goal of a Feedforward Neural Network is to approximate some function $f^*$. According to the authors, it does so by defining a mapping $y = f(x;\\theta)$ that maps the given input $x$ to a corresponding classification category $y$. The feedforward network learns the value of $\\theta$ that returns the best function approximation.\n\nThe input layer accepts the data from the outside world and transfers it directly to the first hidden layer without performing any computations.\n\nHidden layers are helpful when linear separation of the data is not possible. Each neuron in a hidden layer is a perceptron that accepts inputs, computes a weighted sum (eq. \\ref{eqn:weighted_sum}), applies the weighted sum to an activation function, and passes it towards the next layer.\n\nThe output layer of a neural network returns the final results based on the input it receives from its previous layer. The number of neurons in an output layer must match the expected outputs of its respective classification problem. For example, if a neural network's task is to recognize the hand-written digits shown in figure \\ref{fig:digits}, then the output layer of this neural network must have ten neurons where each neuron corresponds to a number from $0$ to $9$.\n\nAs with the perceptron, neural networks also aim at minimizing the error. Due to the presence of hidden layers, the error is propagated backward using the chain rule, as explained in the following section.\n\n\\subsection{Backpropagation}\\label{subsection:backprop}\n\nAlthough the idea of backpropagation was initially proposed in the 1970s, it became widely known through the work of Rumelhart et al. in \\cite{backprop86}, published in 1986. That paper showed the importance of this algorithm by demonstrating that it works better than earlier approaches to learning. Given an error function for a neural network, the backpropagation algorithm computes the gradient of this error function. This computation happens backward, from the final layer to the first layer.\n\nTo properly understand how the chain rule applies in error backpropagation, consider the simple neural network shown in the figure below:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{images/background/back_prop.png}\n\t\\caption[Backpropagation in a neural network]%\n\t{\\textbf{Backpropagation} in a neural network with one hidden layer consisting of two hidden neurons $h_1$ and $h_2$, two input neurons $i_1$ and $i_2$, and one output.}\n\t\\label{fig:back_prop}\n\\end{figure}\n\nAfter getting output from the forward pass, we calculate the error using equation \\ref{eq:error_func}. By using the partial derivative of this error, the following equation updates all the weights of the neural network:\n\n\\begin{equation}\n    \\label{eq:backprop_wu}\n    w_j \\coloneqq w_j - \\eta \\frac{\\partial E}{\\partial w_j}\n\\end{equation}\n\nAs stated before, we start error backpropagation from the final layer, i.e., the output layer of our neural network shown in figure \\ref{fig:back_prop}. 
Consider the following figure, which visualizes the output layer of the neural network.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth]{images/background/back_prop_1.png}\n\t\\caption[Backpropagating error from output layer]%\n\t{Backpropagating error from output layer to modify weights $w_5$ and $w_6$.}\n\t\\label{fig:back_prop_1}\n\\end{figure}\n\nTo update $w_5$ and $w_6$, we must first compute $\\frac{\\partial E}{\\partial w_5}$ and $\\frac{\\partial E}{\\partial w_6}$. However, the error $E$ is the difference between the target $y$ and the predicted output $\\hat{y}$. Furthermore, the predicted output $\\hat{y}$ is the result of applying the weighted sum to an activation function. Here, the weighted sum is calculated as:\n\n\\begin{equation}\n    weighted\\; sum\\; (z) = \\hat{y}_{h1} w_5 + \\hat{y}_{h2} w_6\n\\end{equation}\n\nwhere $\\hat{y}_{h1}$ is the output of the hidden neuron $h_1$ and $\\hat{y}_{h2}$ is the output of the hidden neuron $h_2$.\n\nBased on this, we can derive the following formulas that compute the partial derivative of $E$ with respect to $w_5$ and $w_6$ as:\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_5} = \\frac{\\partial E}{\\partial \\hat{y}} \\frac{\\partial \\hat{y}}{\\partial z} \\frac{\\partial z}{\\partial w_5}\n\\end{equation}\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_6} = \\frac{\\partial E}{\\partial \\hat{y}} \\frac{\\partial \\hat{y}}{\\partial z} \\frac{\\partial z}{\\partial w_6}\n\\end{equation}\n\nWe follow a similar process to compute the partial derivative of the error $E$ with respect to $w_1$, $w_2$, $w_3$, and $w_4$.\n\nConsider the following figure, which visualizes the hidden neuron $h_1$ of the neural network:\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth]{images/background/back_prop_2.png}\n\t\\caption[Backpropagating error from $h_1$]%\n\t{Backpropagating error from $h_1$ to modify weights $w_1$ and $w_3$.}\n\t\\label{fig:back_prop_2}\n\\end{figure}\n\nTo update $w_1$ and $w_3$, we first compute $\\frac{\\partial E}{\\partial w_1}$ and $\\frac{\\partial E}{\\partial w_3}$ as:\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_1} = \\frac{\\partial E}{\\partial \\hat{y}_{h1}} \\frac{\\partial \\hat{y}_{h1}}{\\partial z_{h1}} \\frac{\\partial z_{h1}}{\\partial w_1}\n\\end{equation}\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_3} = \\frac{\\partial E}{\\partial \\hat{y}_{h1}} \\frac{\\partial \\hat{y}_{h1}}{\\partial z_{h1}} \\frac{\\partial z_{h1}}{\\partial w_3}\n\\end{equation}\n\nHere, $\\hat{y}_{h1}$ is the output of the hidden neuron $h_1$ and $z_{h1}$ is the weighted sum used to compute $\\hat{y}_{h1}$, calculated as:\n\\begin{equation}\n    z_{h1} = i_1 w_1 + i_2 w_3\n\\end{equation}\n\nNow, consider the following figure, which visualizes the hidden neuron $h_2$ of the neural network:\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.4\\linewidth]{images/background/back_prop_3.png}\n\t\\caption[Backpropagating error from $h_2$]%\n\t{Backpropagating error from $h_2$ to modify weights $w_2$ and $w_4$.}\n\t\\label{fig:back_prop_3}\n\\end{figure}\n\nTo update $w_2$ and $w_4$, we first compute $\\frac{\\partial E}{\\partial w_2}$ and $\\frac{\\partial E}{\\partial w_4}$ as:\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_2} = \\frac{\\partial E}{\\partial \\hat{y}_{h2}} \\frac{\\partial \\hat{y}_{h2}}{\\partial z_{h2}} \\frac{\\partial z_{h2}}{\\partial w_2}\n\\end{equation}\n\n\\begin{equation}\n    \\frac{\\partial E}{\\partial w_4} = \\frac{\\partial E}{\\partial \\hat{y}_{h2}} \\frac{\\partial \\hat{y}_{h2}}{\\partial z_{h2}} \\frac{\\partial z_{h2}}{\\partial w_4}\n\\end{equation}\n\nHere, $\\hat{y}_{h2}$ is the output of the hidden neuron $h_2$ and $z_{h2}$ is the weighted sum used to compute $\\hat{y}_{h2}$, calculated as:\n\\begin{equation}\n    z_{h2} = i_1 w_2 + i_2 w_4\n\\end{equation}\n\nOnce we have the partial derivatives of the error $E$ with respect to the weights, we can use equation \\ref{eq:backprop_wu} to update each weight, and then we train the neural network with the updated weights. This process of weight updates and re-training is repeated until the neural network converges.\n\n
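The chain-rule products above can be written out directly for the small network of figure \\ref{fig:back_prop}. The sketch below is illustrative, not thesis code: it assumes Sigmoid activations on all units and arbitrary input, weight, and target values, and computes $\\frac{\\partial E}{\\partial w_5}$ for a single sample.\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ni1, i2, y = 0.5, 1.0, 1.0           # inputs and target\nw1, w2, w3, w4, w5, w6 = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6\n\nyh1 = sigmoid(i1 * w1 + i2 * w3)    # output of h_1\nyh2 = sigmoid(i1 * w2 + i2 * w4)    # output of h_2\nz = yh1 * w5 + yh2 * w6             # output-layer weighted sum\ny_hat = sigmoid(z)\n\n# dE/dw5 = dE/dy_hat * dy_hat/dz * dz/dw5\ndE_dw5 = (y_hat - y) * y_hat * (1.0 - y_hat) * yh1\nw5 = w5 - 0.1 * dE_dw5              # weight update\n\\end{verbatim}\n\n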
Each layer in a traditional neural network is densely connected, making it a complex architecture. Also, in such a neural network, information travels only in the forward direction, i.e., there are no feedback loops where the input to a function also depends on its output. However, there exist other variations of neural networks that are either computationally less complex (i.e., Convolutional Neural Networks) or support feedback loops (i.e., Recurrent Neural Networks).\n\nA Convolutional Neural Network (CNN) ensures less computational complexity by forcing neurons of one layer to share weights. This sharing of weights reduces the number of learnable parameters, allowing for better generalization. This design is based on the neocognitron network, published by K. Fukushima in \\cite{neocognitronbc}. Results of \\cite{7382560, 7822567} show that CNNs are very useful in the areas of image recognition and classification. However, similar to Feedforward Neural Networks, CNNs do not have feedback loops. For this reason, Recurrent Neural Networks are better suited than CNNs for tasks where sequence order is essential, such as machine translation or music composition.\n\n% -----------------------------------------------------------------------------------------------------------\n% ---------------------------------------- RECURRENT NEURAL NETWORKS ----------------------------------------\n% -----------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Recurrent Neural Network}\\label{section:rnn}\n\n\\begin{wrapfigure}{r}{6.4cm}\n    \\centering\n    \\includegraphics[width=1.0\\linewidth]{images/background/jordan_rnn.png}\n    \\caption[A simple Recurrent Network]{A simple Recurrent Network as depicted in \\cite{jordan}}\n    \\label{fig:jordan_rnn}\n\\end{wrapfigure} \n\nAs stated before, one of the drawbacks of the standard neural network model is the lack of feedback loops. In 1986, M. Jordan published \\cite{jordan}, in which he described Recurrent Networks as networks that have a connection from a unit to itself, i.e., a recurrent connection. This recurrent connection acts as a feedback loop that makes it possible to use previous outputs as inputs. This property of Recurrent Neural Networks makes them suitable for working with sequential information where the inputs depend on each other.\n\nAs I. Sutskever describes in \\cite{ilya}, a Recurrent Neural Network uses hidden states to incorporate new observations using an intricate nonlinear function. 
Such a Recurrent Neural Network can effortlessly be understood when it is unfolded in time, as shown in the figure:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{images/background/rnn.png}\n\t\\caption[A Recurrent Neural Network unfolded in time]%\n\t{A simple \\textbf{Recurrent Neural Network} unfolded in time with $t$ sequences. Here, $U$ represents \\textit{input-to-hidden weights}, $W$ represents \\textit{hidden-to-hidden weights}, and $V$ represents \\textit{hidden-to-output weights}.}\n\t\\label{fig:rnn}\n\\end{figure}\n\nThe depicted Recurrent Neural Network accepts inputs $x_1, x_2, x_3, ..., x_t$, and outputs $\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, ..., \\hat{y}_t$. The hidden states $h_0, h_1, h_2, h_3, ..., h_t$ are high-dimensional vectors, connected to each other to create recurrence. Deep extensions of a basic RNN can be constructed by stacking multiple recurrent hidden states on top of each other, as shown in \\cite{deeprnn}. Bias vectors $b_h$ and $b_o$, although not shown in the above figure, are optional but useful to shift the activation function.\n\nThe RNN uses the following algorithm to compute $h_t$ and $\\hat{y}_t$:\n\n\\begin{algorithm}\n  \\caption[Standard RNN algorithm]%\n  {A standard RNN algorithm}\n  \\label{alg:rnn}\n    \\For{$t$ \\textbf{from} $1$ \\textbf{to} $T$}{\n        \\DontPrintSemicolon\n        $z_{t} \\gets U x_{t} + W h_{t-1} + b_{h}$ \\tcp*{$b_{h}$ is optional}\n        $h_{t} \\gets e$($z_{t}$) \\\\\n        $o_{t} \\gets V h_{t} + b_{o}$ \\tcp*{$b_{o}$ is optional}\n        $\\hat{y}_{t} \\gets g$($o_{t}$)\n    }\n\\end{algorithm}\n\nwhere $e$($\\cdot$) and $g$($\\cdot$) are the hidden and output nonlinearities of the RNN. This computation of $h_t$ is visualized in the following figure:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.6\\linewidth]{images/background/rnn_unit.png}\n\t\\caption[Internal structure of a single hidden unit in a standard RNN]%\n\t{The internal structure of a single hidden unit in a standard RNN visualizing the computation of $h_t$ using an input $x_t$, and the hidden state value of the previous unit $h_{t-1}$.}\n\t\\label{fig:rnn_unit}\n\\end{figure}\n\nThe nonlinearity is required to produce a nonlinear decision boundary. Out of all three nonlinearities explained in section \\ref{subsection:nonlinearity}, for the scope of this thesis, we only focus on ReLU and Tanh due to their added advantages over the Sigmoid activation function.\n\n
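Algorithm \\ref{alg:rnn} maps directly onto a few lines of NumPy. The sketch below is an illustration, not thesis code; the dimensions are arbitrary, $e$ is taken to be Tanh, the optional biases are omitted, and $g$ is taken to be the identity.\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\nT, n_in, n_h, n_out = 4, 3, 5, 2\nrng = np.random.default_rng(0)\nU = rng.normal(size=(n_h, n_in))   # input-to-hidden weights\nW = rng.normal(size=(n_h, n_h))    # hidden-to-hidden weights\nV = rng.normal(size=(n_out, n_h))  # hidden-to-output weights\nx = rng.normal(size=(T, n_in))     # input sequence x_1..x_T\nh = np.zeros(n_h)                  # h_0\n\nfor t in range(T):\n    z = U.dot(x[t]) + W.dot(h)     # biases omitted (optional)\n    h = np.tanh(z)                 # e = tanh\n    o = V.dot(h)\n    y_hat = o                      # g = identity here\n\\end{verbatim}\n\n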
Similar to error backpropagation in traditional neural networks (as explained in section \\ref{subsection:backprop}), RNNs also backpropagate the error to update the weights. However, in recurrent networks, since the output of one time-step depends on the previous ones, the error backpropagates from time-step $t$ through the entire network to the first time-step. This is known as Backpropagation Through Time (BPTT) \\cite{bptt-1}.\n\n\\subsection{Backpropagation Through Time}\\label{subsection:bptt}\n\nTo understand backpropagation through time, consider the following simple RNN with three time-steps, visualizing error backpropagation from time-step 3:\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.7\\linewidth]{images/background/bptt.png}\n    \\caption[Backpropagation Through Time]{A standard RNN visualizing BPTT from time-step 3. Here, $U$ represents \\textit{input-to-hidden weights}, $W$ represents \\textit{hidden-to-hidden weights}, and $V$ represents \\textit{hidden-to-output weights}.}\n    \\label{fig:bptt}\n\\end{figure}\n\nThe error at time-step 3, $E_3$, is the difference between the target output $y_3$ and the predicted output $\\hat{y}_3$. This error $E_3$ backpropagates from time-step 3 to time-step 1, as shown in the above figure.\n\nTo start, using algorithm \\ref{alg:rnn}, we can write the following equation for the output $\\hat{y}_3$:\n\n\\begin{equation}\n    \\hat{y}_3 = g(o_3)\n\\end{equation}\n\nwhere $o_3$ is the product of $V$ and $h_3$, given as:\n\n\\begin{equation}\n    o_3 = V h_3\n\\end{equation}\n\nwhere $V$ represents \\textit{hidden-to-output} weights and $h_3$ is calculated as:\n\n\\begin{equation}\n    h_3 = e(z_3)\n\\end{equation}\n\nwhere $z_3$ is the weighted sum computed as:\n\n\\begin{equation}\n    \\label{eq:z3}\n    z_3 = Ux_3 + Wh_2\n\\end{equation}\n\nwhere $U$ is \\textit{input-to-hidden} weights, $W$ is \\textit{hidden-to-hidden} weights, and $e$ is the hidden nonlinearity.\n\nAs we can see in the above equations, to compute the output $\\hat{y}_3$, we need a total of three different weight vectors $V$, $W$, and $U$. Since our error $E_3$ is dependent on the output $\\hat{y}_3$, we must calculate the partial derivative of the error $E_3$ with respect to all three weight vectors, i.e., we need to compute $\\frac{\\partial E_3}{\\partial V}$, $\\frac{\\partial E_3}{\\partial W}$, and $\\frac{\\partial E_3}{\\partial U}$. To compute these gradients, we follow the same chain rule as we did in backpropagation (section \\ref{subsection:backprop}).\n\nCalculating $\\frac{\\partial E_3}{\\partial V}$ is easy as it only depends on $\\hat{y}_3$ and $o_3$. Therefore, by following simple backpropagation, we can calculate $\\frac{\\partial E_3}{\\partial V}$ as:\n\n\\begin{equation}\n    \\frac{\\partial E_3}{\\partial V} = \\frac{\\partial E_3}{\\partial \\hat{y}_3} \\frac{\\partial \\hat{y}_3}{\\partial o_3} \\frac{\\partial o_3}{\\partial V}\n\\end{equation}\n\nThe calculation of $\\frac{\\partial E_3}{\\partial W}$ is more involved. To see why it differs from $\\frac{\\partial E_3}{\\partial V}$, we first apply the chain rule as:\n\n\\begin{equation}\n    \\label{eq:wrt_W}\n    \\frac{\\partial E_3}{\\partial W} = \\frac{\\partial E_3}{\\partial \\hat{y}_3} \\frac{\\partial \\hat{y}_3}{\\partial h_3} \\frac{\\partial h_3}{\\partial W}\n\\end{equation}\n\nHowever, as we can see in equation \\ref{eq:z3}, $z_3$ depends on $h_2$, which depends on $h_1$. 
For this reason, we cannot treat $h_2$ as constant; instead, we need to sum up the contributions of $h_2$ and $h_1$ as:\n\n\\begin{align}\n    \\frac{\\partial h_3}{\\partial W} &= \\frac{\\partial h_3}{\\partial h_3} \\frac{\\partial h_3}{\\partial W} + \\frac{\\partial h_3}{\\partial h_2} \\frac{\\partial h_2}{\\partial W} + \\frac{\\partial h_3}{\\partial h_2}\\frac{\\partial h_2}{\\partial h_1}\\frac{\\partial h_1}{\\partial W} \\nonumber \\\\\n                                    &= \\sum_{t=1}^{3} \\frac{\\partial h_3}{\\partial h_t} \\frac{\\partial h_t}{\\partial W} \\label{eq:sum_h}\n\\end{align}\n\nwhere each $\\frac{\\partial h_t}{\\partial W}$ on the right-hand side is the immediate partial derivative that treats $h_{t-1}$ as a constant, and $\\frac{\\partial h_3}{\\partial h_1} = \\frac{\\partial h_3}{\\partial h_2}\\frac{\\partial h_2}{\\partial h_1}$.\n\nBy substituting the value from equation \\ref{eq:sum_h} in equation \\ref{eq:wrt_W}, we get\n\n\\begin{equation}\n    \\frac{\\partial E_3}{\\partial W} = \\frac{\\partial E_3}{\\partial \\hat{y}_3} \\frac{\\partial \\hat{y}_3}{\\partial h_3} \\left[ \\sum_{t=1}^{3} \\frac{\\partial h_3}{\\partial h_t} \\frac{\\partial h_t}{\\partial W} \\right]\n\\end{equation}\n\nWe follow a similar process to calculate $\\frac{\\partial E_3}{\\partial U}$ as:\n\n\\begin{equation}\n    \\frac{\\partial E_3}{\\partial U} = \\frac{\\partial E_3}{\\partial \\hat{y}_3} \\frac{\\partial \\hat{y}_3}{\\partial h_3} \\left[ \\sum_{t=1}^{3} \\frac{\\partial h_3}{\\partial h_t} \\frac{\\partial h_t}{\\partial U} \\right]\n\\end{equation}\n\nSimilar to the backpropagation algorithm, we use these partial derivatives of the error with respect to $V$, $W$, and $U$ to update the corresponding weights.\n\n
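The sum in equation \\ref{eq:sum_h} corresponds to a single backward loop over time. The sketch below is illustrative, not thesis code: it uses a scalar-state RNN (so every Jacobian is a plain number), Tanh as the hidden nonlinearity, and arbitrary values, and it accumulates $\\frac{\\partial E_3}{\\partial W}$ exactly as derived above.\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch (scalar states), not code from this thesis\nU, W, V = 0.5, 0.8, 1.2\nx = [0.3, -0.4, 0.9]          # x_1..x_3\nh = [0.0]                     # h_0\nfor t in range(3):\n    h.append(np.tanh(U * x[t] + W * h[t]))\n\ny, y_hat = 1.0, V * h[3]\ndE_dh3 = (y_hat - y) * V      # dE/dy_hat * dy_hat/dh3\n\ndE_dW, dh3_dht = 0.0, 1.0     # dh3/dh3 = 1\nfor t in range(3, 0, -1):\n    local = 1.0 - h[t] ** 2   # tanh'(z_t)\n    dE_dW += dE_dh3 * dh3_dht * local * h[t - 1]\n    dh3_dht *= local * W      # extend the chain to h_{t-1}\n\\end{verbatim}\n\n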
In standard RNNs, each hidden unit only performs the specified nonlinear activation (as depicted in figure \\ref{fig:rnn_unit}). However, there exist other variations of RNNs, such as Long Short-Term Memory and the Gated Recurrent Unit, which do more than perform a single computation per hidden unit.\n\n% -----------------------------------------------------------------------------------------------------------\n% -------------------------------------------------- LSTM ---------------------------------------------------\n% -----------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Long Short-Term Memory}\\label{section:lstm}\n\nTwo of the problems with the BPTT algorithm described above are exploding gradients\\footnote{Exploding gradients are a problem where large error gradients accumulate, resulting in substantial updates to model weights, making the model unstable.} and vanishing gradients\\footnote{The vanishing gradient is a problem that prevents changes to the weight values due to vanishingly small gradients.}. To mitigate these problems, in 1997, Hochreiter et al. proposed a new recurrent architecture, termed Long Short-Term Memory (LSTM), in \\cite{lstm}.\n\nAs opposed to the standard RNN's internal structure (figure \\ref{fig:rnn_unit}), LSTM has a very different and much more complex internal structure, made up of three gates (i.e., an \\textit{input gate $i_t$}, an \\textit{output gate $o_t$}, and a \\textit{forget gate $f_t$}) that regulate the flow of information \\cite{lstm_gates}. This is visualized in the following figure:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{images/background/lstm_unit.png}\n\t\\caption[Internal structure of a single hidden unit in an LSTM]%\n\t{The internal structure of a single hidden unit in an LSTM visualizing the computation of $h_t$ and $C_t$ using an input $x_t$, the hidden state value of the previous unit $h_{t-1}$, and the cell state value of the previous unit $C_{t-1}$.}\n\t\\label{fig:lstm_unit}\n\\end{figure}\n\nGiven an input $x_t$, the previous hidden state $h_{t-1}$, and the previous cell state $C_{t-1}$, the current hidden state $h_t$ and cell state $C_t$ can mathematically be calculated as follows:\n\n\\begin{enumerate}\n    \\item The first step is to compute the forget gate, which informs the cell state about which past information to keep and which to forget.\n        \\begin{equation}\n            \\label{eqn:forget_gate}\n            f_t = \\sigma(W_f \\cdot [h_{t-1}, x_t] + b_f)\n        \\end{equation}\n        For each value in the cell state $C_{t-1}$, it returns a value between $0$ and $1$.\n        \n    \\item The second step is to determine what new information needs to be stored in the cell state.\n        \\begin{equation}\n            \\label{eqn:input_gate}\n            i_t = \\sigma(W_i \\cdot [h_{t-1}, x_t] + b_i)\n        \\end{equation}\n        \\begin{equation}\n            \\widetilde{C}_t = tanh(W_C \\cdot [h_{t-1}, x_t] + b_C)\n        \\end{equation}\n        The product of these two values will then be used to update the previous cell state.\n        \n    \\item The third step is to update the previous cell state as:\n        \\begin{equation}\n            C_t = f_t * C_{t-1} + i_t * \\widetilde{C}_t\n        \\end{equation}\n        \n    \\item The final step is to compute the value of the output gate, which is then used to update the hidden state as:\n        \\begin{equation}\n            \\label{eqn:output_gate}\n            o_t = \\sigma(W_o \\cdot [h_{t-1}, x_t] + b_o)\n        \\end{equation}\n        \\begin{equation}\n            h_t = o_t * tanh(C_t)\n        \\end{equation}\n    \n    These updated state values $h_t$ and $C_t$ are then forwarded to be used in the next unit (see the sketch below).\n\\end{enumerate}\n\n
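These four steps correspond to one time-step of an LSTM cell and can be sketched in NumPy as below. This is an illustration, not thesis code; the dimensions are arbitrary and $[h_{t-1}, x_t]$ is realized as a plain concatenation, consistent with the equations above.\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\nn_h, n_in = 4, 3\nrng = np.random.default_rng(0)\nWf, Wi, WC, Wo = (rng.normal(size=(n_h, n_h + n_in))\n                  for _ in range(4))\nbf = bi = bC = bo = np.zeros(n_h)\nh_prev, C_prev = np.zeros(n_h), np.zeros(n_h)\nx_t = rng.normal(size=n_in)\n\nhx = np.concatenate([h_prev, x_t])  # [h_{t-1}, x_t]\nf_t = sigmoid(Wf.dot(hx) + bf)      # step 1: forget gate\ni_t = sigmoid(Wi.dot(hx) + bi)      # step 2: input gate\nC_tilde = np.tanh(WC.dot(hx) + bC)  # step 2: candidate state\nC_t = f_t * C_prev + i_t * C_tilde  # step 3: cell state update\no_t = sigmoid(Wo.dot(hx) + bo)      # step 4: output gate\nh_t = o_t * np.tanh(C_t)            # step 4: hidden state\n\\end{verbatim}\n\n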
One of the benefits of LSTM over a standard RNN is the capability of learning long-term dependencies. Because of this added benefit, LSTMs have shown exceptional performance in many application areas, including time series forecasting \\cite{lstm_time}, drug design \\cite{lstm_drug}, and music composition \\cite{lstm_music}.\n\nAlthough LSTMs perform better than standard RNNs, their execution is slower due to the larger number of computations per unit. Another variation of RNNs, the Gated Recurrent Unit, is faster than an LSTM while still performing better than a standard RNN.\n\n% -----------------------------------------------------------------------------------------------------------\n% --------------------------------------------------- GRU ---------------------------------------------------\n% -----------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Gated Recurrent Unit}\\label{section:gru}\n\nThe Gated Recurrent Unit (GRU) was introduced by Cho et al. in \\cite{gru}. Similar to Long Short-Term Memory, the Gated Recurrent Unit also aims to solve the vanishing gradient problem.\n\nThe internal structure of a GRU is very different from that of a standard RNN unit but is quite similar to that of an LSTM unit, except with only two gates (i.e., an \\textit{update gate} $z_t$ and a \\textit{reset gate} $r_t$) rather than three. These two gates control the information that flows into and out of the memory \\cite{gru_gates}. This internal structure is visualized in the following figure:\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{images/background/gru_unit.png}\n\t\\caption[Internal structure of a single hidden unit in a GRU]%\n\t{The internal structure of a single hidden unit in a GRU visualizing the computation of $h_t$ using an input $x_t$, and the hidden state value of the previous unit $h_{t-1}$.}\n\t\\label{fig:gru_unit}\n\\end{figure}\n\nGiven an input $x_t$ and the previous hidden state $h_{t-1}$, the current hidden state $h_t$ can mathematically be calculated as follows:\n\n\\begin{enumerate}\n    \\item The first step is to compute the update gate value as:\n        \\begin{equation}\n            \\label{eqn:update_gate}\n            z_t = \\sigma(W_z \\cdot [h_{t-1}, x_t] + b_z)\n        \\end{equation}\n        This gate helps to determine what information from the past to carry forward.\n        \n    \\item The second step is to compute the value of the reset gate as:\n        \\begin{equation}\n            \\label{eqn:reset_gate}\n            r_t = \\sigma(W_r \\cdot [h_{t-1}, x_t] + b_r)\n        \\end{equation}\n        This gate decides how much of the past information to omit.\n        \n    \\item The final step is to update the hidden state using the values of the update gate and the reset gate obtained from equations \\ref{eqn:update_gate} and \\ref{eqn:reset_gate}, respectively.\n        \\begin{equation}\n            \\widetilde{h}_t = tanh(W_h \\cdot [r_t * h_{t-1}, x_t] + b_h)\n        \\end{equation}\n        \\begin{equation}\n            \\label{eqn:update_hidden}\n            h_t = (1 - z_t) * h_{t-1} + z_t * \\widetilde{h}_t\n        \\end{equation}\n        \n    This updated hidden state value $h_t$ is then forwarded to be used in the next unit (see the sketch below).\n\\end{enumerate}\n\n
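Analogously, one GRU time-step can be sketched as follows (again an illustration, not thesis code; dimensions and values are arbitrary):\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not code from this thesis\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\nn_h, n_in = 4, 3\nrng = np.random.default_rng(1)\nWz, Wr, Wh = (rng.normal(size=(n_h, n_h + n_in))\n              for _ in range(3))\nbz = br = bh = np.zeros(n_h)\nh_prev = np.zeros(n_h)\nx_t = rng.normal(size=n_in)\n\nhx = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]\nz_t = sigmoid(Wz.dot(hx) + bz)              # step 1: update gate\nr_t = sigmoid(Wr.dot(hx) + br)              # step 2: reset gate\nrhx = np.concatenate([r_t * h_prev, x_t])   # [r_t * h_{t-1}, x_t]\nh_tilde = np.tanh(Wh.dot(rhx) + bh)         # candidate state\nh_t = (1.0 - z_t) * h_prev + z_t * h_tilde  # step 3: new state\n\\end{verbatim}\n\n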
Although LSTMs outperform GRUs in many scenarios, GRUs have the upper hand when the dataset is small \\cite{gru_lstm}. Similar to LSTMs, GRUs are also capable of learning long-term dependencies. Because of this, GRUs have shown outstanding performance in different application areas, including speech recognition \\cite{gru_speech} and water level prediction \\cite{gru_water_level}.\n\nIn the scope of this thesis, we only focus on working with RNN with Tanh nonlinearity (RNN-Tanh), RNN with ReLU nonlinearity (RNN-ReLU), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), as these four are the most widely used recurrent networks.\n\n% -----------------------------------------------------------------------------------------------------------\n% ------------------------------------------------- PRUNING -------------------------------------------------\n% -----------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Pruning}\\label{section:pruning}\n\nIt is well known that the use of larger neural networks is favorable for achieving better performance, but at the same time, they are more expensive to use, take more time to run, and in most cases, require expensive hardware. Therefore, in the past couple of decades, many researchers have worked on various model compression techniques to train large neural networks more efficiently while maintaining their accuracy. Pruning is one such technique; it removes the $k$ lowest-ranked weight parameters of a neural network. The weight parameters removed first are usually the ones with the most negligible impact on the model's final output.\n\nIn \\cite{blalock}, Blalock et al. studied 81 different papers on pruning to identify important (hyper-)parameters and methods that give the best results with pruning compared to others. In that paper, the authors state that ``pruning methods vary primarily in their choices regarding sparsity structure, scoring, scheduling, and fine-tuning\". Here, the sparse structure results from performing unstructured pruning, in which individual weight parameters are pruned, and fine-tuning is the process of retraining the pruned neural network to identify the amount of pruning that can be applied without significantly reducing performance.\n\nThere are different pruning techniques, such as Magnitude-based pruning, Error-based pruning, Entropy-based pruning, and Evolutionary pruning. In the scope of this thesis, we only focus on Magnitude-based pruning, as it is proven to give satisfactory results despite being a simple technique.\n\n\\subsection{Magnitude-based pruning}\n\nThis is the simplest way of pruning, which takes the magnitude of weight parameters as the pruning criterion, meaning the least important weight parameters are zeroed out before retraining the model to perform fine-tuning. Li et al. in \\cite{li} state that OLMP (\\textbf{O}ptimization based \\textbf{L}ayer-wise \\textbf{M}agnitude-based \\textbf{P}runing) can reduce the size of AlexNet by 82\\% without any loss in accuracy. We also apply layer-wise pruning, where we first calculate a threshold based on the percentage of weights to prune and zero out values below this threshold using a binary mask.\n\n
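In NumPy terms, this layer-wise procedure reduces to a percentile threshold and a binary mask, roughly as sketched below (an illustration, not the thesis implementation; the layer size and the 80\\% pruning rate are assumptions):\n\n\\begin{verbatim}\nimport numpy as np\n\n# illustrative sketch, not the thesis implementation\nrng = np.random.default_rng(0)\nW = rng.normal(size=(64, 64))   # one layer's weight matrix\nprune_pct = 80.0                # percentage of weights to prune\n\nthreshold = np.percentile(np.abs(W), prune_pct)\nmask = (np.abs(W) >= threshold).astype(W.dtype)\nW_pruned = W * mask             # zero out low-magnitude weights\n# the masked model would then be retrained (fine-tuning)\n\\end{verbatim}\n\n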
% -----------------------------------------------------------------------------------------------------------\n% ------------------------------------------------- GRAPHS --------------------------------------------------\n% -----------------------------------------------------------------------------------------------------------\n\n\\newpage\n\\section{Graph Theory}\\label{section:graphs}\n\nGraph theory is the study of graphs: networks of nodes connected by edges. In \\cite{carlson}, Carlson states that graph theory originated from Leonhard Euler's solution to the famous K\u00f6nigsberg bridge problem. This section briefly explores random graphs, their graph properties, and two random graph models, namely Watts\u2013Strogatz and Barab\u00e1si\u2013Albert.\n\nAn undirected graph $G$ is given as a pair $(V, E)$, where $V$ is the set of nodes and $E \\subseteq E_{comp} \\coloneqq \\{\\{v, w\\} \\mid v, w \\in V, v \\neq w\\}$ is the set of edges. A complete graph, $G_{comp}$, is a graph where each node is connected to every other node in the same network. This is different from a connected graph, where it is possible to get from one node to any other node in the same network via a series of edges.\n\nGraphs, in mathematics, are usually used to model real-world structures, but if a structure is very complex and cannot be modelled in all its details, random graphs are used.\n\n\\subsection{Random Graphs}\\label{subsection:randomgraphs}\n\nRandom graphs, as the name states, have randomly distributed edges according to some probability measure. Depending on which probability measure is used, random graphs can be given as:\n\n\\begin{enumerate}\n    \\item \\textbf{Uniform random graph}: For a given set of nodes $V$ and number of edges $M$, with a probability space $\\Omega$ in which all graphs with node set $V$ and $M$ edges are equally likely, the probability is given as:\n    \\begin{equation}\n        P(G) = \\binom{M_{comp}}{M}^{-1}\n    \\end{equation}\n    \n    \\item \\textbf{Binomial random graph}: Given the probability $0 \\leq p \\leq 1$ that there exists an edge between two given nodes $v, w \\in V$, with a probability space $\\Omega$ in which the number of edges follows the binomial distribution, the probability of a graph with $M$ edges is given as:\n    \\begin{equation}\n        P(G) = p^M \\cdot (1-p)^{M_{comp}-M}\n    \\end{equation}\n\\end{enumerate}\n\n\\subsection{Graph properties}\\label{subsection:properties}\n\nThis section explores graph properties that we later use to find a correlation with performance. Along with simple properties such as the number of layers, the number of nodes, and the number of edges, we also use other graph properties, as given below:\n\n\\subsubsection{Diameter}\nA graph's diameter is the maximal distance between any pair of nodes in a given graph, excluding any detours or loops. We can use the following equation to find the graph's diameter:\n\\begin{equation}\n    \\delta = \\max_{ij}\\{s(i, j)\\}\n\\end{equation}\nwhere $s(i,j)$ is the shortest path between two nodes $i$ and $j$.\n\n\\subsubsection{Density}\nThe density of a graph is the ratio of edges present to all possible edges. 
Given an undirected graph, the density of this graph is calculated as:\n\\begin{equation}\n    d = \\frac{2m}{n(n-1)}\n\\end{equation}\nFor a directed graph, the density is calculated as:\n\\begin{equation}\n    d = \\frac{m}{n(n-1)}\n\\end{equation}\nwhere $n$ is the number of nodes, and $m$ is the number of edges.\n\n\\subsubsection{Average Shortest Path Length}\nThe average shortest path length is the mean length of the shortest paths over all pairs of nodes and is calculated as:\n\\begin{equation}\n    a = \\sum_{i,j \\in V}\\frac{d(i, j)}{n(n-1)}\n\\end{equation}\nwhere $V$ is the set of nodes, $d(i,j)$ is the shortest path from node $i$ to $j$, and $n$ is the number of nodes.\n\n\\subsubsection{Eccentricity}\nThe eccentricity of a given node $v$ in a connected graph $G$ is the maximum distance from node $v$ to all other nodes in that graph. In contrast, for a disconnected graph, all nodes have an infinite eccentricity.\n\nThe maximum eccentricity in a graph is the graph diameter, while the minimum eccentricity is the graph radius.\n\n\\subsubsection{Degree}\nThe degree of a node is the number of nodes adjacent to that node.\n\n\\subsubsection{Closeness}\nCloseness, often known as closeness centrality, indicates how close a node $u$ is to all other nodes in a given graph and is calculated as:\n\\begin{equation}\n    C(u) = \\frac{n-1}{\\sum_{v \\in V \\setminus \\{u\\}}d(u, v)}\n\\end{equation}\nwhere $d(u,v)$ is the shortest path between nodes $u$ and $v$, and $n$ is the number of nodes in the graph.\n\n\\subsubsection{Node betweenness}\nNode betweenness, also known as the betweenness centrality of node $u$, is the sum over node pairs of the fraction of shortest paths that pass through $u$ and is calculated as:\n\\begin{equation}\n    c_B(u) = \\sum_{i,j \\in V}\\frac{\\sigma(i,j|u)}{\\sigma(i,j)}\n\\end{equation}\nHere, $V$ is the set of nodes, $\\sigma(i,j)$ is the number of shortest paths between nodes $i$ and $j$, and $\\sigma(i,j|u)$ is the number of those paths that pass through node $u$.\n\n\\subsubsection{Edge betweenness}\nIn contrast to node betweenness, edge betweenness is the number of shortest paths that pass through an edge in a given graph. It is calculated as:\n\\begin{equation}\n    c_B(e) = \\sum_{i,j \\in V}\\frac{\\sigma(i,j|e)}{\\sigma(i,j)}\n\\end{equation}\nHere, $V$ is the set of nodes, $\\sigma(i,j)$ is the number of shortest paths between nodes $i$ and $j$, and $\\sigma(i,j|e)$ is the number of those paths that pass through the edge $e$.\n\n\\subsection{Small World Networks}\nIn a Small World Network, the mean shortest-path distance between any two given nodes increases sufficiently slowly as a function of the number of nodes in the network \\cite{porter}.\n\nA Small World Network has the following three properties:\n\\begin{enumerate}\n    \\item \\textbf{Higher order of the graph}: The order of a graph is simply the number of nodes in that graph, in other words, the cardinality of its node-set. In a Small World Network, the order of the graph is much higher than its average vertex degree.\n    \n    \\item \\textbf{Small characteristic path length}: As stated by F. Schreiber in \\cite{schreiber}, the characteristic path length of a given graph is the average number of edges in the shortest paths between all pairs of nodes. For a given graph to be considered a Small World Network, it must have a small characteristic path length.\n    \n    \\item \\textbf{High clustering coefficient}: A clustering coefficient is a measure of the degree to which nodes in a graph are clustered together. A given graph must have a high clustering coefficient to be considered a Small World Network (see the sketch below).\n\\end{enumerate}\n\n
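All of the properties discussed in this and the previous subsection are available in standard graph libraries. The sketch below uses networkx (an assumption; the thesis tooling is not specified here) on a binomial random graph:\n\n\\begin{verbatim}\nimport networkx as nx\n\n# illustrative sketch, not the thesis implementation\nG = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)  # binomial model\n\nprint(nx.density(G))\nprint(nx.average_clustering(G))           # clustering coefficient\nprint(nx.betweenness_centrality(G))       # node betweenness\nprint(nx.edge_betweenness_centrality(G))  # edge betweenness\nprint(nx.closeness_centrality(G))         # closeness\nif nx.is_connected(G):\n    print(nx.diameter(G))                 # maximum eccentricity\n    print(nx.eccentricity(G))             # per-node eccentricity\n    print(nx.average_shortest_path_length(G))\n\\end{verbatim}\n\n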
Watts\u2013Strogatz is one of the popular Small World Network models and is explained in the following section.\n\n\\subsubsection{Watts\u2013Strogatz model}\\label{subsubsection:wsmodel}\nWatts\u2013Strogatz (WS) graphs start as regular graphs with degree $k$. In 1998, Watts and Strogatz \\cite{watts} proposed the Watts\u2013Strogatz model to capture networks, such as social networks, that are neither completely regular nor completely random.\n\nThe construction of a WS model begins with a lattice structure\\footnote{A lattice graph is a graph embedded in a Euclidean space $\\mathbb{R}$ that has a regular tiling form  \\cite{lattice}.} that has a high clustering coefficient ($C$) but also a large characteristic path length ($L$). We can rewire some edges in this lattice structure to reduce $L$.\n\nFor each node $i \\in V$ and each edge $\\{i, j\\}$ from node $i$ to one of its $r$ right neighbors $j$, the edge is left unchanged with probability $1-p$, and with probability $p$, the edge is replaced by a new edge $\\{i, j\\prime\\}$ where $j\\prime\\in V \\setminus (N(i) \\cup \\{i\\})$ is randomly chosen.\n\nFor a small probability $p$, a sharp drop in the characteristic path length ($L$) can be observed with little change in the clustering coefficient ($C$).\n\n
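This rewiring procedure is available in networkx (assumed tooling) as a Watts\u2013Strogatz generator; the sketch below, with illustrative parameter values, shows the sharp drop in $L$ for small $p$:\n\n\\begin{verbatim}\nimport networkx as nx\n\n# illustrative sketch, not the thesis implementation\nfor p in [0.0, 0.01, 0.1]:\n    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=p, seed=0)\n    L = nx.average_shortest_path_length(G)  # path length\n    C = nx.average_clustering(G)            # clustering\n    print(p, round(L, 2), round(C, 2))\n\\end{verbatim}\n\n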
\\subsection{Scale Free Networks}\\label{subsection:sfn}\nIn contrast to Small World Networks, where graphs have a small path length due to local clustering, Scale Free Networks have a skewed degree distribution \\cite{aarstad}. The probability distribution of node degrees in such graphs is given as:\n\\begin{equation}\n    P(d(v) = k) \\sim k^{-\\gamma}\n\\end{equation}\nwhere $d(v)$ is the degree of node $v \\in V$ and $\\gamma$ is the power law factor.\n\nAs seen in the above equation, this probability distribution follows a power law, where the probability of a node having degree $k$ decays polynomially in $k$ with exponent $\\gamma$.\n\nIn a Scale Free Network, the system grows with time, and new nodes and edges are added based on preferential attachment. Such networks are observed in different science and technology areas such as the internet (nodes are HTML pages and edges are hyperlinks), electronic mail (nodes are email addresses and edges are sent emails between two users), and scientific literature (nodes are publications and edges are citations).\n\n\\subsubsection{Barab\u00e1si\u2013Albert model}\\label{subsubsection:bamodel}\nThe Barab\u00e1si\u2013Albert model is one of many Scale Free Network models; it follows preferential attachment with power-law growth for a given system.\n\nThe construction of a Barab\u00e1si\u2013Albert model begins at $t = 0$ with $N_0 > 1$ nodes and no edges, where $N_0 = |V_0|$ is the cardinality of the set of nodes at $t = 0$; each node added later brings $m < N_0$ new edges.\n\nAt time $t = 1$, a new node is added to $V_0$ with $m$ edges from this new node to $m$ random different nodes.\n\nAt time $t > 1$, a new node is added to $V_{t-1}$ with $m$ edges from this new node to $m$ random different nodes, where preferential attachment gives the probability of choosing node $i$ as:\n\n\\begin{equation}\n    P(i) = c \\cdot d(i)\n\\end{equation}\n\\centerline{where $c = \\frac{1}{\\sum_{j \\in V_{t-1}}d(j)}$}\n\nIn any given Barab\u00e1si\u2013Albert model, at any given time $t$, there are $N_0 + t = N_t$ nodes and $m \\cdot t$ edges available.", "meta": {"hexsha": "028f9241523d7197cbae544ec7af379fc8e0db09", "size": 46438, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX/chapters/background.tex", "max_stars_repo_name": "harshildarji/Master-Thesis", "max_stars_repo_head_hexsha": "062bcba76bb2a7784b388a70a37d615589c13e72", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeX/chapters/background.tex", "max_issues_repo_name": "harshildarji/Master-Thesis", "max_issues_repo_head_hexsha": "062bcba76bb2a7784b388a70a37d615589c13e72", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeX/chapters/background.tex", "max_forks_repo_name": "harshildarji/Master-Thesis", "max_forks_repo_head_hexsha": "062bcba76bb2a7784b388a70a37d615589c13e72", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.0909090909, "max_line_length": 713, "alphanum_fraction": 0.70399242, "num_tokens": 12033, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5981106366964535}}
{"text": "\\chapter{Background}\n\nWe will first introduce optimisation and argumentation, where our project lies at their intersection. Afterwards, we will explore lightly on the notion of explanations. Finally, we will conduct two case studies that explore existing tools.\n\n\\section{Optimisation}\n\nOptimisation is the process of finding an optimal decision with respect to constraints given a set of possible decisions. The optimal decision is measured by a cost function, that typically determines a numerical value representing how good a decision is. The problem is to find the optimal decision, either as a minima or maxima using systematic methods. Applications of optimisation occur in many practical situations such as in engineering, manufacturing, science. In engineering applications, accurate modelling of a problem may require discrete decisions and non-linear relationships, resulting in difficulty in finding arbitrary optimal decisions. There is a wide range of techniques for optimisation, we focus on mixed-integer linear programming.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\t\\node[file](problem) at (0, 2){Problem};\n\t\t\t\\node[file](model) at (0, 0){Model};\n\t\t\t\\node[file](solution) at (4, 2){Solution};\n\t\t\t\\node[file](optimum) at (4, 0){Optimum};\n\t\t\t\\draw[arrow](problem) -- node [above] {solve} (solution);\n\t\t\t\\draw[arrow](problem) -- node [left, inner sep=10pt] {abstract} (model);\n\t\t\t\\draw[arrow](model) -- node [below] {optimise} (optimum);\n\t\t\t\\draw[arrow](optimum) -- node [right, inner sep=10pt] {intepret} (solution);\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\caption{Abstraction of problem solving}\n\\end{figure}\n\nReal-life problems need to be modelled to be optimised. Problems are formulated using constraints and a cost function. Consider the following linear programming example. A person wants to sell some drinks. Each unit of hot chocolate requires 1 litre of milk and 3 bars of chocolate. Each unit of milkshake requires 1 litre of milk and 2 bars of chocolate. The person only has 5 units of milk and 12 bars of chocolate. The person sells a unit of hot chocolate for 6 and a unit of milkshake for 5 monetary units. What is the best strategy for maximising profit given that all units produced are sold? The problem is abstracted as follows. Let $x$ and $y$ be the number of hot chocolates and milkshakes produced, respectively.\n\\begin{align*}\n\t\\max_{x,y}\\ &6x+5y&\\text{subject to:}\\\\\n\t&x+y\\leq 5&\\text{milk resource constraint}\\\\\n\t&3x+2y\\leq 12&\\text{chocolate resource constraint}\\\\\n\t&x,y\\geq 0&\\text{non-negative units}\\\\\n\t&x,y\\in\\mathbb{N}&\\text{whole units only}\\\\\n\\end{align*}\nThe problem is sufficiently small to be solved by inspection, but may also be solved using graphical methods or a simplex algorithm. The optimum is $x^*=2$ and $y^*=3$, which is interpreted as that the best strategy is producing 2 units of hot chocolate and 3 units of milkshake.\n\\linespace\nIn non-linear optimisation problems, finding a global optima is not trivial. For gradient-based and local-search algorithms, estimated solutions can return a local optima. This is typically not favoured, however for large problems, finding a global optima result in non-polynomial complexity for arbitrary dimensional problems. 
\\linespace\nIn non-linear optimisation problems, finding a global optimum is not trivial. For gradient-based and local-search algorithms, estimated solutions can return a local optimum. This is typically not favoured; however, for large problems, finding a global optimum results in non-polynomial complexity for arbitrary-dimensional problems. In the context of this project, it is infeasible to compute a complete explanation of optimality in polynomial time.\n\n\\subsection{Makespan Scheduling}\n\\label{makespan}\n\nThe simple definition of makespan scheduling gives a good foundation for experimenting with argumentation. Makespan schedules are defined by $m\\in\\mathbb{N}$ independent machines and $n\\in\\mathbb{N}$ independent jobs \\cite{sa}. Let $\\mathcal{M}=\\{1,...,m\\}$ be the set of machines and $\\mathcal{J}=\\{1,...,n\\}$ be the set of jobs. Each job $j\\in\\mathcal{J}$ has an associated processing time $p_j\\in\\mathbb{R}_{\\geq 0}$. All processing times are collectively denoted by a vector $\\mathbf{p}$. A machine can only execute at most one job at any time. For a feasible schedule, each job is assigned to a machine non-preemptively. For some $i\\in\\mathcal{M}$, let $C_i$ be the completion time of the $i^\\text{th}$ machine. Let $C_{\\max}$ be the total completion time. Let $\\mathbf{x}\\in\\{0,1\\}^{m\\times n}$ be the assignment matrix that allocates jobs to machines. Formally, makespan schedules are modelled as an optimisation problem:\n\\begin{align*}\n\t\\min_{C_{\\max},\\mathbf{C},\\mathbf{x}}\\ &C_{\\max}&\\text{ subject to:}\\\\\n\t\\forall i\\in\\mathcal{M}.\\ &C_{\\max}\\geq C_i\\\\\n\t\\forall i\\in\\mathcal{M}.\\ &C_i=\\sum_{j\\in\\mathcal{J}}x_{i,j}\\cdot p_j\\\\\n\t\\forall j\\in\\mathcal{J}.\\ &\\sum_{i\\in\\mathcal{M}}x_{i,j}=1\\\\\n\t\\forall i\\in\\mathcal{M},\\ \\forall j\\in\\mathcal{J}.\\ &x_{i,j}\\in\\{0,1\\}\n\\end{align*}\n\n\\begin{definition}\n\t\\label{assignmentmatrix}\n\t\n\tA schedule $S$ is defined by its assignment matrix $\\mathbf{x}$. $S$ will be used to reference a high-level representation of $\\mathbf{x}$ but does not specify its formal representation, unlike $\\mathbf{x}$, which will be used in linear programming and algorithms with its precise definition.\n\\end{definition}\n\n\\begin{definition}\n\tA schedule $S$ is optimal iff $S$ feasibly achieves the minimal total completion time.\n\\end{definition}\n\n\\begin{definition}\n\tA machine $i\\in\\mathcal{M}$ is critical iff $C_i=C_{max}$.\n\\end{definition}\n\n\\begin{definition}\n\tA job $j\\in\\mathcal{J}$ is critical iff there is a critical machine $i\\in\\mathcal{M}$ with $x_{i,j}=1$.\n\\end{definition}\n\n\\begin{definition}\n\t\\label{sep}\n\t\n\tA schedule satisfies the single exchange property (SEP) iff for any critical machine $i\\in\\mathcal{M}$, any machine $i'\\in\\mathcal{M}$, and all critical jobs $j\\in\\mathcal{J}$, $C_i-C_{i'}\\leq p_j$.\n\\end{definition}\n\n\\begin{definition}\n\t\\label{pep}\n\t\n\tA schedule satisfies the pairwise exchange property (PEP) iff for any critical job $j\\in\\mathcal{J}$ on machine $i\\in\\mathcal{M}$ and any job $j'\\in\\mathcal{J}$ on machine $i'\\in\\mathcal{M}$, if $p_j>p_{j'}$, then $C_i+p_{j'}\\leq C_{i'}+p_j$.\n\\end{definition}\n\n\\begin{definition}\n\tA schedule $S$ is efficient iff $S$ satisfies SEP and PEP. Note this definition of efficiency succinctly captures necessary optimality conditions, which differs from the property of efficiency defined in the paper \\cite{aes}.\n\\end{definition}\n\n\\begin{proposition}\n\tSchedule efficiency is a necessary condition for optimality \\cite{aes}.\n\\end{proposition}\n\nMakespan schedules are often represented using cascade charts. 
\n\nMakespan schedules are often represented using cascade charts. The charts below graphically show the difference in total completion time of two schedules, where the problem has the parameters $m=4$ and $n=13$.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[width=.8\\linewidth]{figures/makespan_inefficient.pdf}\n\t\\end{center}\n\t\\caption{An inefficient schedule}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[width=.8\\linewidth]{figures/makespan_efficient.pdf}\n\t\\end{center}\n\t\\caption{An efficient schedule}\n\\end{figure}\n\n\\subsection{User Fixed Decisions}\n\\label{fixeddecisions}\n\nTo accommodate practical applications of makespan schedules, positive and negative user fixed decisions are introduced as an extension to makespan problems. In a hospital setting, positive fixed decisions capture patients exclusively allocated to a nurse, while negative fixed decisions capture unavailable or incompatible nurses and patients \\cite{aes}. Let $D^-,D^+\\subseteq\\mathcal{M}\\times\\mathcal{J}$ be the negative and positive fixed decisions respectively. Let $D$ be the fixed decisions such that $D=(D^-,D^+)$.\n\n\\begin{definition}\n\tA schedule $S$ satisfies $D$ iff $\\forall\\pair{i}{j}\\in D^-.\\ x_{i,j}=0$ and $\\forall\\pair{i}{j}\\in D^+.\\ x_{i,j}=1$.\n\\end{definition}\n\n\\begin{definition}\n\tA fixed decision $D$ is satisfiable iff there exists a schedule $S$ such that $S$ satisfies $D$.\n\\end{definition}\n\nA fixed decision $D$ is satisfiable iff the following necessary and sufficient conditions hold:\n\\begin{itemize}\n\t\\item$D^+$ and $D^-$ are disjoint.\n\t\\item$\\forall\\pair{i}{j},\\pair{i'}{j'}\\in D^+.\\ i=i'\\lor j\\neq j'$\n\t\\item$\\forall j\\in\\mathcal{J}.\\ \\exists i\\in\\mathcal{M}.\\ \\pair{i}{j}\\not\\in D^-$\n\\end{itemize}\n\nThe relaxed definition of $D$ allows $D$ to be unsatisfiable, which is not permitted in previous work \\cite{aes}. This relaxation accommodates poorly-formulated user problems, allowing explanations rather than mere validation of user input, which is more useful to users in a practical setting.
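\n\nThe three conditions above translate directly into code. Below is a minimal sketch (in Python; the function name is ours) that checks them for fixed decisions given as sets of machine-job pairs.\n\\begin{verbatim}\n# Necessary and sufficient satisfiability check for D = (D_minus, D_plus).\ndef is_satisfiable(machines, jobs, D_minus, D_plus):\n    if D_plus & D_minus:                 # 1. D+ and D- are disjoint\n        return False\n    fixed = {}\n    for i, j in D_plus:                  # 2. no job fixed to two machines\n        if fixed.setdefault(j, i) != i:\n            return False\n    return all(                          # 3. every job has some machine\n        any((i, j) not in D_minus for i in machines) for j in jobs)\n\nmachines, jobs = {1, 2}, {1, 2, 3}\nprint(is_satisfiable(machines, jobs, {(1, 3)}, {(2, 3)}))       # True\nprint(is_satisfiable(machines, jobs, {(1, 1), (2, 1)}, set()))  # False\n\\end{verbatim}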
\n\n\\subsection{Interval Scheduling}\n\nInterval scheduling is a natural extension of makespan scheduling, with the added freedom of determining the starting times of jobs. In practice, rearranging critical jobs may affect the feasibility of a schedule. Interval scheduling is a widely researched area, with a large body of literature proposing algorithms for variants of interval scheduling \\cite{is}.\n\\linespace\nInterval scheduling is defined over $m\\in\\mathbb{N}$ machines and $n\\in\\mathbb{N}$ jobs, collectively denoted by the sets $\\mathcal{M}=\\{1,...,m\\}$ and $\\mathcal{J}=\\{1,...,n\\}$ respectively. Each job $j\\in\\mathcal{J}$ must be allocated a machine $i\\in\\mathcal{M}$ preemptively. In addition, each job must be executed within the interval $[s_j,f_j)$, where $s_j,f_j\\in\\mathbb{R}_{\\geq 0}$ and, without loss of generality, $s_j<f_j$. Finally, each job is associated with a processing time $p_j\\in\\mathbb{R}_{\\geq 0}$. No jobs on the same machine are allowed to overlap. Note that $p_j>f_j-s_j$ is possible, meaning that interval scheduling instances can be constructed to be infeasible.\n\n\\section{Argumentation}\n\nArgumentation is a method for understanding and evaluating reasons for and against potential conclusions. Argumentation is useful in resolving conflicts, clarifying incomplete information and, most importantly for this project, producing explanations. The precise definition of an argument varies across the literature; however, it is commonly agreed that arguments can attack or support other arguments. For an argument $\\alpha$ to attack an argument $\\beta$, $\\alpha$ must critically challenge $\\beta$ such that the acceptability of $\\beta$ is doubted \\cite{at}. This may be done by questioning one of $\\beta$'s premises, for example by proposing a counter-example. For example, consider the scenario of deciding whether to sleep. Let $\\alpha$ be ``I want to sleep\", $\\beta$ be ``I have work to do\" and $\\gamma$ be ``I can work tomorrow\". Using human intuition, we can derive that $\\gamma$ attacks $\\beta$ and $\\beta$ attacks $\\alpha$. This is represented graphically below.\n\n\\begin{center}\n\t\\begin{tikzpicture}\n\t\t\\node[node](gamma) at (0, 0){$\\gamma$};\n\t\t\\node[node](beta) at (2, 0){$\\beta$};\n\t\t\\node[node](alpha) at (4, 0){$\\alpha$};\n\t\t\\draw[arrow](gamma) -- (beta);\n\t\t\\draw[arrow](beta) -- (alpha);\n\t\\end{tikzpicture}\n\\end{center}\n\nIf we conclude that $\\alpha$ is acceptable, then we must not accept $\\beta$; accepting two conflicting arguments can be interpreted as a contradiction, or as hypocritical. Hence, argumentation theory has measures of acceptable extensions, which decide whether some set of arguments is acceptable in some sense, with respect to different intuitions. This motivates the use of abstract argumentation frameworks.\\linespace\nNote that this example uses implicit background knowledge, also known as enthymemes: in this scenario, that one cannot sleep and work at the same time. To our advantage, argumentation is here applied to well-defined scheduling problems, so enthymemes are inapplicable in this project.\n\n\\subsection{Abstract Argumentation Frameworks}\n\\label{aaf}\n\nAn abstract argumentation framework (AAF) models the relation of attacks between arguments \\cite{aa}. Formally, an AAF is a directed graph $(Args,\\rightsquigarrow)$ where $Args$ is the set of arguments and $\\rightsquigarrow$ is a binary relation over $Args$. For $a,b\\in Args$, $a$ attacks $b$ iff $a\\rightsquigarrow b$. Attacks are extended over sets of arguments: for $A\\subseteq Args$ and $b\\in Args$, we write $A\\rightsquigarrow b$ iff $\\exists a\\in A.\\ a\\rightsquigarrow b$. An extension $E$ is a subset of $Args$.\n\n\\begin{definition}\n\tAn extension $E$ is conflict-free iff $\\forall a,b\\in E.\\ a\\not\\rightsquigarrow\\ b$.\n\\end{definition}\n\n\\begin{definition}\n\tAn extension $E$ is stable iff $E$ is conflict-free and\n\t\n\t$\\forall a\\in Args\\backslash E.\\ E\\rightsquigarrow a$.\n\\end{definition}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\node[node](gamma) at (0, 0){$\\gamma$};\n\t\t\\node[node](beta) at (2, 0){$\\beta$};\n\t\t\\node[node](alpha) at (4, 0){$\\alpha$};\n\t\t\\node[node](delta) at (2, 2){$\\delta$};\n\t\t\\node[node](epsilon) at (0, 2){$\\varepsilon$};\n\t\t\\draw[arrow](beta) -- (alpha);\n\t\t\\draw[arrow](beta) -- (gamma);\n\t\t\\draw[arrow](beta) -- (delta);\n\t\t\\draw[arrow](delta) -- (beta);\n\t\t\\draw[arrow](delta) to [loop](delta);\n\t\t\\draw[arrow](epsilon) -- (delta);\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\caption{An AAF represented graphically}\n\\end{figure}\n\nConsider the example where $Args=\\{\\alpha,\\beta,\\gamma,\\delta,\\varepsilon\\}$ and $\\rightsquigarrow\\ =\\{\\pair{\\beta}{\\alpha},\\pair{\\beta}{\\gamma},\\pair{\\beta}{\\delta},\\pair{\\delta}{\\beta},\\pair{\\delta}{\\delta},\\pair{\\varepsilon}{\\delta}\\}$, as illustrated above. Then the following statements hold:\n\\begin{itemize}\n\t\\item $\\varepsilon\\rightsquigarrow\\delta$.\n\t\\item $\\{\\delta,\\varepsilon\\}\\rightsquigarrow\\beta$.\n\t\\item $\\{\\alpha,\\gamma\\}$ is conflict-free but not stable.\n\t\\item $\\{\\delta\\}$ is neither conflict-free nor stable.\n\t\\item $\\{\\beta,\\varepsilon\\}$ is conflict-free and stable.\n\\end{itemize}
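\n\nSuch properties are easy to verify mechanically. The following Python sketch (function names are ours) checks conflict-freeness and stability of an extension by brute force, using the example AAF above.\n\\begin{verbatim}\n# Brute-force checks of conflict-freeness and stability in an AAF.\ndef conflict_free(E, attacks):\n    return not any((a, b) in attacks for a in E for b in E)\n\ndef stable(E, args, attacks):\n    return conflict_free(E, attacks) and all(\n        any((a, b) in attacks for a in E) for b in args - E)\n\nargs = {'alpha', 'beta', 'gamma', 'delta', 'epsilon'}\nattacks = {('beta', 'alpha'), ('beta', 'gamma'), ('beta', 'delta'),\n           ('delta', 'beta'), ('delta', 'delta'), ('epsilon', 'delta')}\n\nprint(stable({'beta', 'epsilon'}, args, attacks))  # True\nprint(stable({'alpha', 'gamma'}, args, attacks))   # False: not stable\n\\end{verbatim}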
\n\nWe will now introduce some further common definitions from abstract argumentation, although these are not directly relevant to this project.\n\n\\begin{definition}\n\tAn extension $E$ is admissible iff $E$ is conflict-free and $E$ attacks every argument attacking $E$.\n\t\\linespace\n\tAn extension $E$ defends an argument $\\alpha$ iff $E$ attacks every argument attacking $\\alpha$.\n\t\\linespace\n\tAn extension $E$ is complete iff $E$ is admissible and $E$ contains all arguments $E$ defends.\n\t\\linespace\n\tAn extension $E$ is preferred iff $E$ is maximally admissible with respect to $\\subseteq$.\n\\end{definition}\n\n\\section{Explanations}\n\nMany explanation generation tasks appeal to either minimality or simplicity \\cite{pe}. Explanations may be given over observations: for example, if a sequence of states leads to an error, an explanation can guide a user to avoid that error. However, trustworthy and theoretically well-understood algorithms can still be difficult to explain to non-technical users \\cite{ep}. The paper highlights questions to be explained in an optimisation context:\n\\begin{itemize}\n\t\\item Why did the optimiser do that?\n\t\\item Why did the optimiser not do this?\n\t\\item Why does this proposal result in a more optimal outcome?\n\t\\item Why can't this decision be taken?\n\t\\item Why do I need to re-plan at this point?\n\t\\item Will a better result be produced if given $n$ more hours?\n\\end{itemize}\n\nArguably, understanding cannot be captured by a few questions alone. Future questions include the negations of the above, whose answers are not simply the negations of the original answers.\n\n\\section{Tensors}\n\nLinear algebra is a mature area of the literature. We will use Boolean tensors, i.e.\\ multi-dimensional arrays, as data structures to manipulate AAFs. For directed graphs, we can obtain a matrix representation of the edges with an adjacency matrix \\cite{adjmat}. If we generalise nodes to multiple dimensions, then the graph's corresponding adjacency matrix becomes a higher-dimensional tensor.
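\n\nAs a small illustration, the sketch below (Python with numpy, which we assume to be available) builds the Boolean adjacency matrix of the 3-node graph in the first figure below; row $i$ holds the out-edges of node $x_{i+1}$.\n\\begin{verbatim}\nimport numpy as np\n\n# Boolean adjacency matrix of the graph with edges\n# x1 -> x3, x2 -> x1, x2 -> x3, x3 -> x2.\nn = 3\nA = np.zeros((n, n), dtype=bool)\nfor src, dst in [(1, 3), (2, 1), (2, 3), (3, 2)]:\n    A[src - 1, dst - 1] = True\n\nprint(A.astype(int))\n# [[0 0 1]\n#  [1 0 1]\n#  [0 1 0]]\n\\end{verbatim}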
\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\begin{minipage}{0.3\\textwidth}\n\t\t\t\\begin{center}\n\t\t\t\t\\begin{tikzpicture}\n\t\t\t\t\t\\node[node](1) at (0, 0){$x_1$};\n\t\t\t\t\t\\node[node](2) at (2, 0){$x_2$};\n\t\t\t\t\t\\node[node](3) at (2, 2){$x_3$};\n\t\t\t\t\t\\draw[arrow](2) -- (3);\n\t\t\t\t\t\\draw[arrow](2) -- (1);\n\t\t\t\t\t\\draw[arrow](3) -- (2);\n\t\t\t\t\t\\draw[arrow](1) -- (3);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\\end{center}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.3\\textwidth}\n\t\t\t\\begin{center}\n\t\t\t$\\begin{bmatrix}\n\t\t\t0&0&1\\\\\n\t\t\t1&0&1\\\\\n\t\t\t0&1&0\\\\\n\t\t\t\\end{bmatrix}$\n\t\t\t\\end{center}\n\t\t\\end{minipage}\n\t\\end{center}\n\t\\caption{Equivalent representations of a directed graph. Left: graphical; right: adjacency matrix}\n\\end{figure}\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\begin{minipage}{0.3\\textwidth}\n\t\t\t\\begin{center}\n\t\t\t\t\\begin{tikzpicture}\n\t\t\t\t\\node[node](1) at (0, 2){$\\pair{x_1}{y_1}$};\n\t\t\t\t\\node[node](3) at (0, 0){$\\pair{x_2}{y_1}$};\n\t\t\t\t\\node[node](2) at (2, 2){$\\pair{x_1}{y_2}$};\n\t\t\t\t\\node[node](4) at (2, 0){$\\pair{x_2}{y_2}$};\n\t\t\t\t\\draw[arrow](2) -- (3);\n\t\t\t\t\\draw[arrow](4) -- (1);\n\t\t\t\t\\draw[arrow](3) -- (2);\n\t\t\t\t\\draw[arrow](1) -- (3);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\\end{center}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.3\\textwidth}\n\t\t\t\\begin{center}\n\t\t\t\t$\\begin{bmatrix}\n\t\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\t0&0\\\\\n\t\t\t\t\t\t1&0\\\\\n\t\t\t\t\t\\end{bmatrix}&\n\t\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\t0&0\\\\\n\t\t\t\t\t\t1&0\\\\\n\t\t\t\t\t\\end{bmatrix}\\vspace{0.1cm}\\\\\n\t\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\t0&1\\\\\n\t\t\t\t\t\t0&0\\\\\n\t\t\t\t\t\t\\end{bmatrix}&\n\t\t\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\t1&0\\\\\n\t\t\t\t\t\t0&0\\\\\n\t\t\t\t\t\t\\end{bmatrix}\\\\\n\t\t\t\t\\end{bmatrix}$\n\t\t\t\\end{center}\n\t\t\\end{minipage}\n\t\\end{center}\n\t\\caption{A more complicated case with a 2-dimensional node, resulting in a 4-dimensional adjacency tensor.}\n\\end{figure}\n\nWe will use the 2-dimensional case for makespan schedules, with machines and jobs as the two dimensions, and the 3-dimensional case for interval scheduling.\n\n\\section{Existing Tools}\n\nWe will look at two scheduling applications, Setmore and LEKIN, in terms of explainable planning. We will not examine existing tools that use argumentation, since we use argumentation only as a means of explanation in an intermediate process.\n\n\\subsection{Setmore}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[width=\\linewidth]{figures/setmore_gui.png}\n\t\\end{center}\n\t\\caption{Setmore interactive interface \\cite{setmore}}\n\\end{figure}\n\nSetmore is a commercial online application that records appointments, schedules and employees. The application is designed for small businesses, such as in healthcare \\cite{setmore}, where managers can organise appointments on a calendar. Makespan schedules are formulated with employees as machines and appointments as jobs. The intuitive interface enables users to quickly glance at appointments and their times, and it is graphically clear when two appointments overlap. However, when the condition that an employee can attend at most one appointment at a time is violated, the user is presented with an error message. Messages also occur when resources are fully-booked or unavailable. Scheduling error messages alert the user during data input, which may prove problematic when a user wishes to input a large number of appointments. Appointments cannot be over-allocated, because each appointment has at most one assignment by interface restrictions.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.6]{figures/setmore_overallocated.png}\n\t\\end{center}\n\t\\caption{Overlapping appointment allocation error message}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.6]{figures/setmore_no_service.png}\n\t\\end{center}\n\t\\caption{Data input of an appointment where there are no possible services, or possible assignments}\n\\end{figure}\n\nWe will assess the application under its free trial, which may limit its explanatory functionality.
A key observation is that there is no emphasis on the concept of an optimal schedule. Because the tool is designed for small businesses, and optimality is not well-defined for arbitrary businesses, the user can graphically inspect and improve a schedule with respect to their own notion of optimality. Modification of an existing schedule is well-facilitated within the interface, where appointments can be moved or swapped between employees. Explanations for infeasible schedules are limited to data input verification and validation error messages.\n\n\\subsection{LEKIN}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[width=\\linewidth]{figures/lekin_gui.jpg}\n\t\\end{center}\n\t\\caption{LEKIN interactive interface \\cite{lekin}}\n\\end{figure}\n\nLEKIN is an academically-oriented scheduling application for teaching students scheduling theory and its applications, developed at the Stern School of Business, NYU \\cite{lekin}. The application features numerous optimisation algorithms specialised for scheduling, and draws its rules and heuristics from the academic literature \\cite{sta}. The tool supports single machines, parallel machines and flexible job shop settings.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.8]{figures/lekin_overallocated.png}\n\t\\end{center}\n\t\\caption{Over-allocation of a job to multiple machines}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.8]{figures/lekin_unallocated.png}\n\t\\end{center}\n\t\\caption{A job is not allocated to any machine}\n\\end{figure}\n\nThe application validates a schedule's feasibility at data input. Infeasible schedules result in error messages. The application computes optimal schedules, but has no functionality to verify their optimality. While our approach is to weaken optimality to efficiency, LEKIN takes the approach of computing common scheduling performance metrics, such as makespan completion time, tardiness and weighted metrics over machines. The advantage is that these metrics can be computed easily and quickly, and are intuitive to non-technical users given some background reading. However, these metrics are global across all machines, and give no indication of how to improve a schedule. Non-technical users may run one of the provided optimisers to improve the metrics, but no explanation is offered relating the assignments of the pre-optimised and post-optimised schedules.\n\n\\begin{figure}[H]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.8]{figures/lekin_metric.png}\n\t\\end{center}\n\t\\caption{Performance metrics of a schedule}\n\\end{figure}\n\n\\subsection{Comparison}\n\nIt is clear that both applications offer limited explanations, explicitly through error messages and implicitly through cascade charts. However, both methods require human intuition, which is infeasible for large schedules.
Therefore, for effective knowledge transfer, localised text explanations and cascade charts help users to focus on key machines and jobs.", "meta": {"hexsha": "15725500adb3e540967bafce681843ca0710ccdf", "size": 21917, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "backend/aes-master/report/background.tex", "max_stars_repo_name": "AminKaramlou/AESWebApp", "max_stars_repo_head_hexsha": "606534731eb7793fc2980b7c2b8c37e3b26cbbd4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "backend/aes-master/report/background.tex", "max_issues_repo_name": "AminKaramlou/AESWebApp", "max_issues_repo_head_hexsha": "606534731eb7793fc2980b7c2b8c37e3b26cbbd4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2020-07-19T01:11:36.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T21:15:53.000Z", "max_forks_repo_path": "backend/aes-master/report/background.tex", "max_forks_repo_name": "AminKaramlou/AESWebApp", "max_forks_repo_head_hexsha": "606534731eb7793fc2980b7c2b8c37e3b26cbbd4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-09-23T21:18:08.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-31T07:32:52.000Z", "avg_line_length": 59.5570652174, "max_line_length": 935, "alphanum_fraction": 0.7481407127, "num_tokens": 5957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.598110634276062}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{natbib}\n\\usepackage{graphicx}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\n\\title{CTA200 Assignment 2}\n\\author{Calvin Chan}\n\\date{May 2020}\n\n\n\\begin{document}\n\\maketitle\n\\section{Mandelbrot set}\n\\subsection{Methods}\nUsing numpy, we apply the restrictions given to us, where $-2 < x < 2$, $-2 < y < 2$, and $z_0 = 0$. We also apply the equation to be iterated using a for-in statement while setting a boundary for the absolute value of z with the letter alpha. This allows us to distinguish between values that will diverge versus values that will converge. We then plot the graph using Matplotlib as shown below. \n\\subsection{Analysis}\nWhen graphed we can see that this iteration resembles the famous Mandelbrot set. We can see that it has a symmetrical feature along the y axis. When zoomed in, it also shows similar repeated patterns emerging, which in fact goes on infinitely.\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics{Mandelbrot BW.png}\n    \\caption{Mandelbrot set with varying values of c}\n    \\label{fig:Mandelbrot_set_BW}\n\\end{figure}\n\n\\newpage{}\nWe also implemented colours which allows use to distinguish the near cutoff points between convergence and divergence.\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics{Mandelbrot Colour.png}\n    \\caption{Mandelbrot set with colour bar}\n    \\label{fig:Mandelbrot_set_colour}\n\n    \\centering\n    \\includegraphics{Mandelbrot Colour Zoom.png}\n    \\caption{Zoomed in Mandelbrot set with colour bar}\n    \\label{fig:Mandelbrot_set_colour_zoom}\n\\end{figure}\n\n\\newpage{}\n\n\\section{The SIR Model}\n\\subsection{Methods}\nWe started off by importing scipy.integrate's odeint in order to integrate as well as defining the parameters given with set values or values of our choice. Then we implemented the derivatives of our functions that are to be integrated. After that, we expressed those results in the form a graph over the time interval from 0 to 200. Two additional graphs were made with varying beta and gamma values in order to analyse the behaviour of the function. \n\\subsection{Analysis}\nIt turns out that depending on the infection rate and recovery rate (or beta and gamma), the rate at which people are either susceptible, infected, or recovered can vary drastically. Higher values of beta and gamma indicate that the spread is much quicker but it also settles down faster. In other words, if it is more contagious, then the faster it will spread but also the faster the curve will flatten out and regress. 
\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[scale=0.75]{SIR Model 1.png}\n    \\caption{SIR Model 1, $\\beta=0.2$, $\\gamma=1./10$}\n    \\label{fig:SIR_Model_1}\n    \n    \\centering\n    \\includegraphics[scale=0.75]{SIR Model 2.png}\n    \\caption{SIR Model 2, $\\beta=0.6$, $\\gamma=2./10$}\n    \\label{fig:SIR_Model_2}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.75]{SIR Model 3.png}\n    \\caption{SIR Model 3, $\\beta=0.4$, $\\gamma=3./10$}\n    \\label{fig:SIR_Model_3}\n\\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "ccbd2b9623e26af6bbadef9e0290e521b7c08d25", "size": 3114, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignment2/Assignment2.tex", "max_stars_repo_name": "chantsin/CTA200", "max_stars_repo_head_hexsha": "6441de478d592dd30239222986c920e2f1933940", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignment2/Assignment2.tex", "max_issues_repo_name": "chantsin/CTA200", "max_issues_repo_head_hexsha": "6441de478d592dd30239222986c920e2f1933940", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment2/Assignment2.tex", "max_forks_repo_name": "chantsin/CTA200", "max_forks_repo_head_hexsha": "6441de478d592dd30239222986c920e2f1933940", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.6575342466, "max_line_length": 452, "alphanum_fraction": 0.7530507386, "num_tokens": 849, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.8080672135527632, "lm_q1q2_score": 0.598110634276062}}
{"text": "\\chapter{Inner product spaces}\n%%fakesection Cheerleading\nIt will often turn out that our vector spaces which look more like $\\RR^n$\nnot only have the notion of addition, but also a notion of \\emph{orthogonality}\nand the notion of \\emph{distance}.\nAll this is achieved by endowing the vector space with a so-called \\textbf{inner form},\nwhich you likely already know as the ``dot product'' for $\\RR^n$.\nIndeed, in $\\RR^n$ you already know that\n\\begin{itemize}\n\t\\ii $v \\cdot w = 0$ if and only if $v$ and $w$ are perpendicular, and\n\t\\ii $|v|^2 = v \\cdot v$.\n\\end{itemize}\nThe purpose is to quickly set up this structure in full generality.\nSome highlights of the chapter:\n\\begin{itemize}\n\t\\ii We'll see that the high school ``dot product''\n\tformulation is actually very natural:\n\tit falls out from the two axioms we listed above.\n\tIf you ever wondered why $\\sum a_i b_i$ behaves as nicely as it does,\n\tnow you'll know.\n\t\\ii We show how the inner form can be used to make $V$ into a \\emph{metric space},\n\tgiving it more geometric structure.\n\t\\ii A few chapters later,\n\twe'll identify $V \\cong V^\\vee$ in a way that wasn't possible before,\n\tand as a corollary deduce the nice result that\n\tsymmetric matrices with real entries\n\talways have real eigenvalues.\n\\end{itemize}\n\nThroughout this chapter, \\emph{all vector spaces are over $\\CC$ or $\\RR$},\nunless otherwise specified.\nWe'll generally prefer working over $\\CC$ instead of $\\RR$ since\n$\\CC$ is algebraically closed\n(so, e.g.\\ we have Jordan forms).\nEvery real matrix can be thought of as a matrix\nwith complex entries anyways.\n\n\\section{The inner product}\n\\prototype{Dot product in $\\RR^n$.}\n\\subsection{For real numbers: bilinear forms}\n\nFirst, let's define the inner form for real spaces.\nRather than the notation $v \\cdot w$ it is most customary\nto use $\\left< v,w \\right>$ for general vector spaces.\n\\begin{definition}\n\tLet $V$ be a real vector space.\n\tA \\vocab{real inner form}\\footnote{Other\n\t\tnames include ``inner product'', ``dot product'',\n\t\t``positive definite nondegenerate symmetric bilinear form'', \\dots}\n\tis a function\n\t\\[ \\left< \\bullet, \\bullet \\right> : V \\times V \\to \\RR \\]\n\twhich satisfies the following properties:\n\t\\begin{itemize}\n\t\t\\ii The form is \\vocab{symmetric}: for any $v,w \\in V$ we have\n\t\t\\[ \\left< v,w \\right> = \\left< w,v\\right>. 
\\]\n\t\tOf course, one would expect this property from a product.\n\n\t\t\\ii The form is \\vocab{bilinear}, or \\textbf{linear in both arguments},\n\t\tmeaning that $\\left< -, v\\right>$\n\t\tand $\\left< v, -\\right>$ are linear functions for any fixed $v$.\n\t\tSpelled explicitly this means that\n\t\t\\begin{align*}\n\t\t\t\\left< cx, v \\right> &= c \\left< x,v \\right> \\\\\n\t\t\t\\left< x+y, v \\right> &= \\left< x,v \\right> + \\left< y,v \\right>.\n\t\t\\end{align*}\n\t\tand similarly if $v$ was on the left.\n\t\tThis is often summarized by the single equation\n\t\t$\\left< cx+y, z \\right> = c \\left< x,z \\right> + \\left< y,z \\right>$.\n\n\t\t\\ii The form is \\vocab{positive definite}, meaning $\\left<v,v\\right> \\ge 0$\n\t\tis a nonnegative real number, and equality takes place only if $v = 0_V$.\n\t\\end{itemize}\n\\end{definition}\n\\begin{exercise}\n\tShow that linearity in the first argument plus symmetry\n\talready gives you linearity in the second argument,\n\tso we could edit the above definition\n\tby only requiring $\\left< -, v\\right>$ to be linear.\n\\end{exercise}\n\n\\begin{example}\n\t[$\\RR^n$]\n\tAs we already know, one can define the inner form on $\\RR^n$ as follows.\n\tLet $e_1 = (1, 0, \\dots, 0)$, $e_2 = (0, 1, \\dots, 0)$,\n\t\\dots, $e_n = (0, \\dots, 0, 1)$ be the usual basis.\n\tThen we let\n\t\\[ \n\t\t\\left< a_1 e_1 + \\dots + a_n e_n, b_1 e_1 + \\dots + b_n e_n \\right>\n\t\t\\defeq a_1 b_1 + \\dots + a_n b_n.\n\t\\]\n\tIt's easy to see this is bilinear\n\t(symmetric and linear in both arguments).\n\tTo see it is positive definite,\n\tnote that if $a_i = b_i$\n\tthen the dot product is $a_1^2 + \\dots + a_n^2$,\n\twhich is zero exactly when all $a_i$ are zero.\n\\end{example}\n\n\\subsection{For complex numbers: sesquilinear forms}\nThe definition for a complex product space is similar, but has one difference:\nrather than symmetry we instead have \\emph{conjugate symmetry}\nmeaning $\\left< v, w \\right> = \\ol{ \\left< w,v \\right> }$.\nThus, while we still have linearity in the first argument,\nwe actually have a different linearity for the second argument.\nTo be explicit:\n\\begin{definition}\n\tLet $V$ be a complex vector space.\n\tA \\vocab{complex inner product} is a function\n\t\\[ \\left< \\bullet, \\bullet \\right> : V \\times V \\to \\CC \\]\n\twhich satisfies the following properties:\n\t\\begin{itemize}\n\t\t\\ii The form has \\vocab{conjugate symmetry}, which means that\n\t\tfor any $v,w \\in V$ we have\n\t\t\\[ \\left< v,w \\right> = \\ol{\\left< w,v\\right>}. 
\\]\n\n\t\t\\ii The form is \\vocab{sesquilinear}\n\t\t(the name means ``one-and-a-half linear'').\n\t\tThis means that:\n\t\t\\begin{itemize}\n\t\t\\ii The form is \\textbf{linear in the first argument}, so again we have\n\t\t\\begin{align*}\n\t\t\t\\left< x+y, v \\right> &= \\left< x,v \\right> + \\left< y,v \\right> \\\\\n\t\t\t\\left< cx, v \\right> &= c\\left< x,v \\right>.\n\t\t\\end{align*}\n\t\tAgain this is often abbreviated to the single line\n\t\t$\\left< cx+y, v \\right> = c \\left< x,v \\right> + \\left< y,v \\right>$\n\t\tin the literature.\n\n\t\t\\ii However, it is now \\textbf{\\vocab{anti-linear}\n\t\tin the second argument}:\n\t\tfor any complex number $c$ and vectors $x$ and $y$ we have\n\t\t\\begin{align*}\n\t\t\t\\left< v, x+y\\right> &= \\left< v, x\\right> + \\left< v,y \\right> \\\\\n\t\t\t\\left< v, cx \\right> &= \\ol c \\left< v, x\\right>.\n\t\t\\end{align*}\n\t\tNote the appearance of the complex conjugate $\\ol c$,\n\t\twhich is new!\n\t\tAgain, we can abbreviate this to just\n\t\t$\\left< v, cx+y \\right> = \\ol c \\left< v,x \\right> + \\left< v,y\\right>$\n\t\tif we only want to write one equation.\n\t\t\\end{itemize}\n\n\t\t\\ii The form is \\vocab{positive definite},\n\t\tmeaning $\\left<v,v\\right>$ is a nonnegative real number,\n\t\tand equals zero exactly when $v = 0_V$.\n\t\\end{itemize}\n\\end{definition}\n\\begin{exercise}\n\tShow that anti-linearity follows\n\tfrom conjugate symmetry plus linearity in the first argument.\n\\end{exercise}\n\n\\begin{example}\n\t[$\\CC^n$]\n\tThe dot product in $\\CC^n$ is defined as follows:\n\tlet $\\ee_1$, $\\ee_2$, \\dots, $\\ee_n$ be the standard basis.\n\tFor complex numbers $w_i$, $z_i$ we set\n\t\\[ \n\t\t\\left< w_1 \\ee_1 + \\dots + w_n \\ee_n, z_1 \\ee_1 + \\dots + z_n \\ee_n \\right>\n\t\t\\defeq w_1\\ol{z_1} + \\dots + w_n \\ol{z_n}.\n\t\\]\n\\end{example}\n\\begin{ques}\n\tCheck that the above is in fact a complex inner form.\n\\end{ques}\n\n\\subsection{Inner product space}\nIt'll be useful to treat both types of spaces simultaneously:\n\\begin{definition}\n\tAn \\vocab{inner product space} is either a real vector space\n\tequipped with a real inner form,\n\tor a complex vector space equipped with a complex inner form.\n\n\tA linear map between inner product spaces\n\tis a map between the underlying vector spaces\n\t(we do \\emph{not} require any compatibility with the inner form).\n\\end{definition}\n\n\\begin{remark}\n\t[Why sesquilinear?]\n\tThe above example explains one reason why we want\n\tto satisfy conjugate symmetry rather than just symmetry.\n\tIf we had tried to define the dot product as $\\sum w_i z_i$,\n\tthen we would have lost the condition of being positive definite,\n\tbecause there is no guarantee that\n\t$\\left< v,v \\right> = \\sum z_i^2$ will even be a real number at all.\n\tOn the other hand, with conjugate symmetry\n\twe actually enforce $\\left< v,v \\right> = \\ol{\\left< v,v \\right>}$,\n\ti.e.\\ $\\left< v,v \\right> \\in \\RR$ for every $v$.\n\n\tLet's make this point a bit more forcefully.\n\tSuppose we tried to put a bilinear form $\\left< -, -\\right>$,\n\ton a \\emph{complex} vector space $V$.\n\tLet $e$ be any vector with $\\left< e, e \\right> = 1$ (a unit vector).\n\tThen we would instead get\n\t$\\left< ie, ie \\right> = - \\left< e,e \\right> = -1$;\n\tthis is a vector with length $-1$, which is not okay!\n\tThat's why it is important that,\n\twhen we have a complex inner product space,\n\tour form is sesquilinear, not bilinear.\n\\end{remark}\n\nNow that we have 
a dot product,\nwe can talk both about the norm and orthogonality.\n\n\\section{Norms}\n\\prototype{$\\RR^n$ becomes its usual Euclidean space with the vector norm.}\n\nThe inner form equips our vector space with a notion of distance, which we call the norm.\n\\begin{definition}\n\tLet $V$ be an inner product space.\n\tThe \\vocab{norm} of $v \\in V$ is defined by\n\t\\[ \\norm{v} = \\sqrt{\\left<v,v\\right>}. \\]\n\tThis definition makes sense because\n\twe assumed our form to be positive definite,\n\tso $\\left< v,v\\right>$ is a nonnegative real number.\n\\end{definition}\n\n\\begin{example}[$\\RR^n$ and $\\CC^n$ are normed vector spaces]\n\tWhen $V = \\RR^n$ or $V = \\CC^n$ with the standard dot product norm,\n\tthen the norm of $v$ corresponds to the absolute value that we are used to.\n\\end{example}\n\nOur goal now is to prove that\n\\begin{moral}\n\tWith the metric $d(v,w) = \\norm{v-w}$, $V$ becomes a metric space.\n\\end{moral}\n\\begin{ques}\n\tVerify that $d(v,w) = 0$ if and only if $v = w$.\n\\end{ques}\nSo we just have to establish the triangle inequality.\nLet's now prove something we all know and love,\nwhich will be a stepping stone later:\n\\begin{lemma}\n\t[Cauchy-Schwarz]\n\tLet $V$ be an inner product space.\n\tFor any $v,w \\in V$ we have\n\t\\[ \\left\\lvert \\left< v,w\\right> \\right\\rvert\n\t\\le \\norm{v} \\norm{w} \\]\n\twith equality if and only if $v$ and $w$ are linearly dependent.\n\\end{lemma}\n\\begin{proof}\n\tThe theorem is immediate if $\\left< v,w\\right> = 0$.\n\tIt is also immediate if $\\norm{v} \\norm{w} = 0$,\n\tsince then one of $v$ or $w$ is the zero vector.\n\tSo henceforth we assume all these quantities are nonzero\n\t(as we need to divide by them later).\n\n\tThe key to the proof is to think about the equality case:\n\twe'll use the inequality $\\left< cv-w, cv-w\\right> \\ge 0$.\n\tDeferring the choice of $c$ until later, we compute\n\t\\begin{align*}\n\t\t0 &\\le \\left< cv-w, cv-w \\right> \\\\\n\t\t&= \\left< cv, cv\\right> - \\left< cv, w\\right> - \\left< w, cv\\right> + \\left< w,w \\right> \\\\\n\t\t&= |c|^2 \\left< v,v \\right> - c \\left< v,w \\right> - \\ol c \\left< w,v \\right> + \\left< w,w \\right> \\\\\n\t\t&= |c|^2 \\norm{v}^2 + \\norm{w}^2 - c \\left< v,w \\right> - \\ol{c \\left< v,w\\right> } \\\\\n\t\t2 \\Re \\left[ c \\left< v,w \\right> \\right] &\\le |c|^2 \\norm{v}^2 + \\norm{w}^2  \\\\\n\t\t\\intertext{At this point, a good choice of $c$ is}\n\t\tc &= \\frac{ \\norm w}{\\norm v} \\cdot \\frac{|\\left< v,w\\right>|}{\\left< v,w\\right>} \\\\\n\t\t\\intertext{since then}\n\t\tc \\left< v,w \\right> &= \\frac{\\norm w}{\\norm v} \\left\\lvert \\left< v,w\\right> \\right\\rvert \\in \\RR \\\\\n\t\t|c| &= \\frac{\\norm w}{\\norm v} \\\\\n\t\t\\intertext{whence the inequality becomes}\n\t\t 2\\frac{\\norm w}{\\norm v} \\left\\lvert \\left< v,w\\right> \\right\\rvert &\\le 2 \\norm{w}^2  \\\\\n\t\t \\left\\lvert \\left< v,w\\right> \\right\\rvert &\\le \\norm v \\norm w. 
\\qedhere\n\t\\end{align*}\n\\end{proof}\nThus:\n\\begin{theorem}\n\t[Triangle inequality]\n\tWe always have\n\t\\[ \\norm v + \\norm w \\ge \\norm{v+w} \\]\n\twith equality if and only if $v$ and $w$ are linearly dependent.\n\\end{theorem}\n\\begin{exercise}\n\tProve this by squaring both sides, and applying Cauchy-Schwarz.\n\\end{exercise}\n\nIn this way, our vector space now has the topological structure of a metric space.\n\n\\section{Orthogonality}\n\\prototype{Still $\\RR^n$!}\nOur next goal is to give the geometric notion of ``perpendicular''.\nThe definition is easy enough:\n\\begin{definition}\n\tTwo nonzero vectors $v$ and $w$ in an inner product space\n\tare \\vocab{orthogonal} if $\\left< v,w \\right> = 0$.\n\\end{definition}\n\nAs we expect from our geometric intuition in $\\RR^n$,\nthis implies independence:\n\\begin{lemma}[Orthogonal vectors are independent]\n\tAny set of pairwise orthogonal vectors $v_1$, $v_2$, \\dots, $v_n$,\n\twith $\\norm{v_i} \\ne 0$ for each $i$,\n\tis linearly independent.\n\\end{lemma}\n\\begin{proof}\n\tConsider a dependence\n\t\\[ a_1 v_1 + \\dots + a_n v_n = 0 \\]\n\tfor $a_i$ in $\\RR$ or $\\CC$.\n\tThen, taking the inner product of both sides against $v_1$,\n\t\\[ 0 = \\left< v_1, \\sum a_i v_i \\right> = \\ol{a_1} \\norm{v_1}^2. \\]\n\tHence $a_1 = 0$, since we assumed $\\norm{v_1} \\neq 0$.\n\tSimilarly $a_2 = \\dots = a_n = 0$.\n\\end{proof}\n\nIn light of this, we can now consider a stronger condition on our bases:\n\\begin{definition}\n\tAn \\vocab{orthonormal} basis of a\n\t\\emph{finite-dimensional} inner product space $V$\n\tis a basis $e_1$, \\dots, $e_n$ such that\n\t$\\norm{e_i} = 1$ for every $i$ and\n\t$\\left< e_i, e_j \\right> = 0$ for any $i \\neq j$.\n\\end{definition}\n\\begin{example}[$\\RR^n$ and $\\CC^n$ have standard bases]\n\tIn $\\RR^n$ and $\\CC^n$ equipped with the standard dot product,\n\tthe standard basis $\\ee_1$, \\dots, $\\ee_n$ is also orthonormal.\n\\end{example}\nThis is no loss of generality:\n\\begin{theorem}[Gram-Schmidt]\n\tLet $V$ be a finite-dimensional inner product space.\n\tThen it has an orthonormal basis.\n\\end{theorem}\n\\begin{proof}[Sketch of Proof]\n\tOne constructs the orthonormal basis explicitly from any basis\n\t$e_1$, \\dots, $e_n$ of $V$.\n\tDefine $\\opname{proj}_u(v) = \\frac{\\left< v,u\\right>}{\\left< u,u\\right>} u$.\n\tThen recursively define\n\t\\begin{align*}\n\t\tu_1 &= e_1 \\\\\n\t\tu_2 &= e_2 - \\opname{proj}_{u_1}(e_2) \\\\\n\t\tu_3 &= e_3 - \\opname{proj}_{u_1}(e_3) - \\opname{proj}_{u_2}(e_3) \\\\\n\t\t&\\vdotswithin{=} \\\\\n\t\tu_n &= e_n - \\opname{proj}_{u_1}(e_n) - \\dots - \\opname{proj}_{u_{n-1}}(e_n).\n\t\\end{align*}\n\tOne can show the $u_i$ are pairwise orthogonal and not zero;\n\treplacing each $u_i$ by $u_i / \\norm{u_i}$ then yields an orthonormal basis.\n\\end{proof}\nThus, we can generally assume our bases are orthonormal.\n\nWorth remarking:\n\\begin{example}[The dot product is the ``only'' inner form]\n\tLet $V$ be a finite-dimensional inner product space,\n\tand consider \\emph{any} orthonormal basis $e_1, \\dots, e_n$.\n\tThen we have that\n\t\\[ \\left< a_1 e_1 + \\dots + a_n e_n,\n\t\tb_1 e_1 + \\dots + b_n e_n \\right>\n\t\t= \\sum_{i,j=1}^n a_i\\ol{b_j} \\left< e_i, e_j \\right>\n\t\t= \\sum_{i=1}^n a_i \\ol{b_i} \\]\n\towing to the fact that the $\\{e_i\\}$ are orthonormal.\n\\end{example}\nAnd now you know why the dot product expression is so ubiquitous.\n\n\\section{Hilbert spaces}\nIn algebra we are usually scared of infinity,\nand so when we defined a basis of a vanilla vector space many chapters ago,\nwe only allowed finite linear combinations.\nHowever, if we have an inner
product space,\nthen it is a metric space and we \\emph{can}\nsometimes actually talk about convergence.\n\nHere is how it goes:\n\\begin{definition}\n\tA \\vocab{Hilbert space} is an inner product space $V$,\n\tsuch that the corresponding metric space is complete.\n\\end{definition}\n\nIn that case, it will now often make sense to take infinite linear combinations,\nbecause we can look at the sequence of partial sums and let it converge.\nHere is how we might do it.\nLet's suppose we have $e_1$, $e_2$, \\dots, an infinite sequence\nof vectors of norm $1$ which are pairwise orthogonal.\nSuppose $c_1$, $c_2$, \\dots, is a sequence of real or complex numbers.\nThen consider the sequence\n\\begin{align*}\n\tv_1 &= c_1 e_1 \\\\\n\tv_2 &= c_1 e_1 + c_2 e_2 \\\\\n\tv_3 &= c_1 e_1 + c_2 e_2 + c_3 e_3 \\\\\n\t&\\vdotswithin=\n\\end{align*}\n\n\\begin{proposition}\n\t[Convergence criteria in a Hilbert space]\n\tThe sequence $(v_i)$ defined above\n\tconverges if and only if $\\sum \\left\\lvert c_i \\right\\rvert^2 < \\infty$.\n\\end{proposition}\n\\begin{proof}\n\tThis will make more sense if you read \\Cref{ch:calc_limits},\n\tso you could skip this proof if you haven't read the chapter.\n\tThe sequence $v_i$ converges if and only if it is Cauchy,\n\tmeaning that when $i < j$,\n\t\\[ \\norm{v_j - v_i}^2 = |c_{i+1}|^2 + \\dots + |c_j|^2 \\]\n\ttends to zero as $i$ and $j$ get large.\n\tThis is equivalent to the sequence\n\t$s_n = |c_1|^2 + \\dots + |c_n|^2$ being Cauchy.\n\n\tSince $\\RR$ is complete, $s_n$ is Cauchy\n\tif and only if it converges.\n\tSince $s_n$ is a nondecreasing sequence of nonnegative real numbers,\n\tconvergence holds if and only if $s_n$ is bounded,\n\tor equivalently if $\\sum \\left\\lvert c_i \\right\\rvert^2 < \\infty$.\n\\end{proof}\n\nThus, when we have a Hilbert space, we change our definition slightly:\n\\begin{definition}\n\tAn \\vocab{orthonormal basis} for a Hilbert space $V$\n\tis a (possibly infinite) sequence $e_1$, $e_2$, \\dots,\n\tof vectors such that\n\t\\begin{itemize}\n\t\t\\ii $\\left< e_i, e_i \\right> = 1$ for all $i$,\n\t\t\\ii $\\left< e_i, e_j \\right> = 0$ for $i \\ne j$,\n\t\ti.e.\\ the vectors are pairwise orthogonal\n\t\t\\ii every element of $V$ can be expressed uniquely as an\n\t\tinfinite linear combination\n\t\t\\[ \\sum_i c_i e_i \\]\n\t\twhere $\\sum_i \\left\\lvert c_i \\right\\rvert^2 < \\infty$,\n\t\tas described above.\n\t\\end{itemize}\n\\end{definition}\nThat's the official definition, anyways.\n(Note that if $\\dim V < \\infty$, this agrees with our usual definition,\nsince then there are only finitely many $e_i$.)\nBut for our purposes you can mostly not worry about it and instead think:\n\\begin{moral}\n\tA Hilbert space is an inner product space\n\twhose basis requires infinite linear combinations,\n\tnot just finite ones.\n\\end{moral}\nThe technical condition $\\sum \\left\\lvert c_i \\right\\rvert^2 < \\infty$\nis exactly the one which ensures the infinite sum makes sense.\n\n\\section{\\problemhead}\n\n\\begin{problem}\n\t[Pythagorean theorem]\n\tShow that if $\\left< v,w \\right> = 0$ in an inner product space,\n\tthen $\\norm{v}^2 + \\norm{w}^2 = \\norm{v+w}^2$.\n\\end{problem}\n\n\\begin{sproblem}\n\t[Finite-dimensional $\\implies$ Hilbert]\n\tShow that a finite-dimensional inner product space\n\tis a Hilbert space.\n\t\\begin{hint}\n\t\tFix an orthonormal basis $e_1$, \\dots, $e_n$.\n\t\tUse the fact that $\\RR^n$ is complete.\n\t\\end{hint}\n\\end{sproblem}\n\n%\\begin{problem}\n%\tLet $V$ be a complex inner product space\n%\twith
orthonormal basis $e_1$, \\dots, $e_n$.\n%\tSuppose $x = a_1 e_1 + \\dots + a_n e_n \\in V$\n%\tand $y = b_1 e_1 + \\dots + b_n e_n \\in V$.\n%\tShow:\n%\t\\begin{align*}\n%\t\t\\left< x,x \\right> &= \\sum_\\xi |a_i|^2 \\\\\n%\t\ta_1 &= \\left< x, e_1 \\right> \\\\\n%\t\t\\left< x,y \\right> &= \\sum_i a_i \\ol{b_i}.\n%\t\\end{align*}\n%\\end{problem}\n\n\\begin{problem}[Taiwan IMO camp]\n\t\\gim\n\tIn a town there are $n$ people and $k$ clubs.\n\tEach club has an odd number of members,\n\tand any two clubs have an even number of common members.\n\tProve that $k \\le n$.\n\t\\begin{hint}\n\t\tDot products in $\\FF_2$.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tInterpret clubs as vectors in the vector space $\\mathbb F_2^n$.\n\t\tConsider a ``dot product'' to show that all $k$ vectors are linearly independent.\n\t\tThus $k \\le \\dim \\mathbb F_2^n = n$.\n\t\\end{sol}\n\\end{problem}\n\n\\begin{sproblem}[Inner product structure of tensors]\n\t\\label{prob:inner_prod_tensor}\n\tLet $V$ and $W$ be finite-dimensional inner product spaces over $k$,\n\twhere $k$ is either $\\RR$ or $\\CC$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Find a canonical way to make $V \\otimes_k W$ into an inner product space too.\n\t\t\\ii Let $e_1$, \\dots, $e_n$ be an orthonormal basis of $V$\n\t\tand $f_1$, \\dots, $f_m$ be an orthonormal basis of $W$.\n\t\tWhat's an orthonormal basis of $V \\otimes W$?\n\t\\end{enumerate}\n\t\\begin{hint}\n\t\tDefine it on simple tensors then extend linearly.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tThe inner form given by\n\t\t\\[ \\left< v_1 \\otimes w_1 , v_2 \\otimes w_2 \\right>_{V \\otimes W}\n\t\t= \\left< v_1,v_2 \\right>_V \\left< w_1,w_2\\right>_W \\]\n\t\ton pure tensors, then extending linearly.\n\t\tFor (b) take $e_i \\otimes f_j$ for $1 \\le i \\le n$, $1 \\le j \\le m$.\n\t\\end{sol}\n\\end{sproblem}\n\n\\begin{problem}[Putnam 2014]\n\t\\gim\n\tLet $n$ be a positive integer.\n\tWhat is the largest $k$ for which there exist\n\t$n\\times n$ matrices $M_1,\\dots,M_k$ and $N_1,\\dots,N_k$\n\twith real entries such that for all $i$ and $j$,\n\tthe matrix product $M_i N_j$ has a zero entry somewhere\n\ton its diagonal if and only if $i \\ne j?$\n\t\\begin{hint}\n\t\t$k = n^n$.\n\t\tEndow tensor products with an inner form.\n\t\tNote that ``zero entry somewhere on its diagonal''\n\t\tis equivalent to the product of those entries being zero.\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}\n\t[Sequence space]\n\tConsider the space $\\ell^2$ of infinite sequences of real numbers\n\t$a = (a_1, a_2, \\dots)$ satisfying $\\sum_i a_i^2 < \\infty$.\n\tWe equip it with the dot product\n\t\\[ \\left< a, b\\right> = \\sum_i a_i b_i. 
\\]\n\tIs this a Hilbert space?\n\tIf so, identify a Hilbert basis.\n\\end{problem}\n\n\\begin{problem}\n\t[Kuratowski embedding]\n\tA \\vocab{Banach space} is a normed vector space $V$,\n\tsuch that the corresponding metric space is complete.\n\t(So a Hilbert space is a special case of a Banach space.)\n\n\tLet $(M,d)$ be any metric space.\n\tProve that there exists a Banach space $X$\n\tand an injective function $f \\colon M \\injto X$\n\tsuch that $d(x,y) = \\norm{f(x)-f(y)}$ for any $x$ and $y$.\n\\end{problem}\n", "meta": {"hexsha": "ceaabaa4dcd6d1079bba6bf3365e35fb1bb239fd", "size": 20185, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/linalg/inner-form.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/linalg/inner-form.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/linalg/inner-form.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6585820896, "max_line_length": 103, "alphanum_fraction": 0.6861035422, "num_tokens": 6670, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5981106325654528}}
{"text": "\\chapter{Introduction}\\label{chapter:introduction}\n\nTwo of the most important results in geometric group theory are Gromov's classification of groups with \\emph{polynomial volume growth}~\\cite{gromov1981}, and Grigorchuk's construction of a group with \\emph{intermediate volume growth}~\\cite{grigorchuk1983}.\nThese results answer two questions originally posed by \\citeauthor{milnor1968a}, namely, whether the \\emph{volume growth} of groups must always be either exponential, or polynomial of an integer degree; and if there is an algebraic classification of groups with polynomial volume growth~\\cite{milnor1968a}.\nIn this thesis, we consider the analogous questions for \\emph{geodesic growth}.\n\nThe \\emph{volume growth function} of a group counts the number of elements that can be represented using words up to a given length.\nIt was shown by \\citeauthor{gromov1981} that a group has polynomial volume growth if and only if it is \\emph{virtually nilpotent}~\\cite{gromov1981}.\nThe \\emph{geodesic growth function} of a group counts the number of \\emph{geodesic words} (i.e.\\ minimal-length representatives of group elements) with a given upper bound on their lengths.\nThe \\emph{volume (resp.~geodesic) growth series} is then the power series whose coefficients are the values of the volume (resp.~geodesic) growth function.\nSince each element of a group has at least one corresponding geodesic, we see that the geodesic growth is bounded from below by the volume growth.\nFrom this and Gromov's theorem, we see that only \\emph{virtually nilpotent} groups may have polynomial geodesic growth.\nIt is well known that the class of \\emph{nilpotent} groups come in a sequence of \\textit{steps}, the first \\textit{step} being the abelian groups.\nThus, to obtain a classification of polynomial geodesic growth, it is natural to start with the \\emph{virtually abelian} groups.\n\nThe study of geodesic growth for abelian groups began in \\citeyear{shapiro1997} when \\citeauthor{shapiro1997} considered the function $p_S\\colon G \\to \\mathbb{N}$ which counts the geodesics corresponding to a given element of a group $G$~\\cite{shapiro1997}.\nThe function $p_S$ is referred to as the \\emph{Pascal function} as, in the case of free-abelian groups, it resembles a Pascal triangle.\nThe geodesic growth of virtually abelian groups was considered by \\citeauthor{bridson2012} who provided the first example of a group with polynomial geodesic growth that is not \\emph{virtually cyclic}~\\cite{bridson2012}.\nMoreover, they provided a sufficient condition for a virtually abelian group to have polynomial geodesic growth with respect to some generating set, a condition for a group to have exponential geodesic growth with respect to every generating set, and proved that the geodesic growth function of a virtually cyclic group is either exponential, or polynomial of an integer degree.\nIn this thesis, we extend their results by completely characterising the geodesic growth for virtually abelian groups.\n\n\\begin{restatable*}{theoremx}{TheoremGeodesicGrowth}\\label{thm:geodesic-growth}\n\tLet $G$ be a virtually abelian group with a finite weighted monoid generating set $S$.\n\tThen the geodesic growth with respect to $S$ is either polynomial of integer degree with rational geodesic growth series, or exponential with holonomic geodesic growth series.\n\\end{restatable*}\n\nWe provide the first example of a group with polynomial geodesic growth that is not virtually abelian.\nIn particular, we show that 
the \\emph{virtually Heisenberg group}\n\\[\n\t\\vH\n\t=\n\t\\left\\langle\n\t\ta,b,c,t\n\t\\,\\,\\middle|\\,\\,\n\t\t[a,b] = c,\\,\n\t\t[a,c] = [b,c] = t^2 = 1,\\,\n\t\ta^t = b\n\t\\right\\rangle\n\\]\nhas a polynomial upper bound on its geodesic growth function with respect to the generating set $S = \\{a,a^{-1},t\\}$.\nWe prove this result in \\cref{thm:main}.\nThis example shows that a classification of polynomial geodesic growth is more complicated than just a subclass of virtually abelian groups.\n\n\\begin{restatable*}{theoremx}{TheoremVirtuallyHeisenberg}\\label{thm:main}\n\tThe geodesic growth function of $\\vH$ with respect to $S = \\{a,a^{-1},t\\}$ is bounded from above by a polynomial of degree $8$.\n\\end{restatable*}\n\nIt was shown by \\citeauthor{duchin2019} that the volume growth series of the Heisenberg group $H_3$ is rational with respect to every generating set~\\cite{duchin2019}.\n\\Cref{thm:geodesic-growth} shows us that if the geodesic growth of a virtually abelian group is polynomial, then its geodesic growth series is rational.\nHowever, it is not clear if this holds for the geodesic growth function of our virtually Heisenberg example $H_3 \\rtimes C_2$.\nIn fact, computational experiments suggest that the geodesic growth series is not rational (see~\\cite{githubcode}).\nThus, this may be the first example with polynomial geodesic growth and non-rational geodesic growth series.\n\nThe question of the existence of a group with intermediate geodesic growth was considered as early as 1993 by Grigorchuk and Shapiro (see~\\cite[756]{grigorchuk2014}).\nIn the PhD thesis of \\citeauthor{broennimann2016} %Theorem~3.5,3.8,3.10 and 3.14\nmost of the groups known to have intermediate volume growth, at the time, were shown to have exponential geodesic growth with respect to their standard generating sets~\\cite[Chapter~3]{broennimann2016};\nthese results were an extension of an unpublished work of \\citeauthor{elder2006} where this was shown only for the \\emph{first Grigorchuk group}~\\cite{elder2006}.\nWe cannot, at this time, eliminate the possibility of there being a virtually nilpotent group with intermediate geodesic growth.\nThus, the results in this thesis also have application to the search of a group with intermediate geodesic growth.\n\nMost of the literature on geodesic growth has been concerned with either showing that the language of geodesics is regular (see~\\cref{sec:grammars-and-automata}), or that the geodesic growth series is rational (see~\\cref{sec:formal-language/generating-functions/rational}) with respect to some particular generating sets.\nIt is known that the language of geodesics for a \\emph{hyperbolic group} is regular with respect to any finite generating set~\\cite[Theorem~3.4.5]{epstein1992}.\nThis result was generalised by \\citeauthor{neumann1995} to any group and generating set with the \\emph{falsification by fellow traveller property}~\\cite[Proposition~4.2~on~p.~267]{neumann1995}.\nThere are many results for particular generating sets of certain \\emph{Coxeter groups}, \\emph{Artin groups} \\cite{antolin2021,mairesse2006,kolpakov2020,ciobanu2016,antolin2013,holt2012,antolin2021,athreya2014}, and \\emph{Garside groups}~\\cite{sabalka2004,charney2004}.\n\\Citeauthor{shapiro1997} studied the Pascal function for abelian and hyperbolic groups~\\cite{shapiro1997}.\nIt was shown by \\citeauthor{loeffler2002} that having a regular language of geodesics is preserved by graph product~\\cite{loeffler2002}.\n\\Citeauthor{hermiller2008} studied 
groups whose languages of geodesics are \\emph{locally testable} (which is a proper subfamily of the regular languages)~\\cite{hermiller2008}.\n\\Citeauthor{cleary2006} showed that the language of geodesics for the \\emph{lamplighter group} is \\emph{context-free} and \\emph{counter} for some generating sets; and that the language of geodesics for \\emph{Thompson's group $F$} is not regular for any generating set~\\cite{cleary2006}.\n\nWe prove \\cref{thm:geodesic-growth} by constructing an algorithm that converts geodesics into \\emph{patterned words}.\nFrom this algorithm, we find a bijection from the set of geodesics of a virtually abelian group to a certain formal language with a \\emph{holonomic} growth series.\nIt is then natural to ask if there is a formal-language characterisation for the language of geodesics for a virtually abelian group.\nIn \\cref{thm:virtually-abelian-are-blind-counter}, we obtain such a characterisation by implementing this algorithm using \\emph{blind multicounter automata}.\n\n\\begin{restatable*}{theoremx}{TheoremBlindMulticounter}\\label{thm:virtually-abelian-are-blind-counter}\n\tThe language of geodesics of a virtually abelian group with respect to any finite weighted monoid generating set $S$ is blind multicounter.\n\\end{restatable*}\n\nA \\emph{formal language over a group} is a set of words whose letters are taken from the generating set.\nSo far we have discussed our results on the language of geodesics for a group.\nIn this thesis, we also study the \\emph{co-word problem}, that is, the language of words that do not correspond to the group identity.\nCharacterisations of such languages provide us with one measure for the computational difficulty involved with computing in a group.\n\nThe \\emph{word problem} of a group $G$ with respect to a finite monoid generating set $S$, denoted $\\WP_S$, is the set of all words in $S^*$ that correspond to the group identity.\nWe see that the word problem completely specifies a group as $\\left\\langle S \\mid \\WP_S \\right\\rangle$ is a presentation for $G$.\nThis formal language is one characterisation of the difficulty of computing within a group, i.e., checking if two words $u,v \\in S^*$ represent the same group element is equivalent to checking if the word $uv^{-1}$ is in the word problem $\\WP_S$.\nThe \\emph{co-word problem} of a group, denoted $\\coWP_S$, is the complement of the word problem in the sense that $\\coWP_S = S^* \\setminus \\WP_S$.\n\nThere are groups for which it is not possible to decide membership to the word problem.\nInterestingly, there are such groups for which the word problem is \\emph{recursively enumerable} but not \\emph{recursive}, that is, there is an algorithm that lists every word in the word problem (with no guarantee of order) but no such algorithm which lists out every word in the co-word problem.\nIn particular, there are finitely-presented groups with unsolvable word problems.\nThe existence of such examples was shown independently by \\textcite{novikov1955} (see \\cite{novikov1955-translated} for an English translation) and \\textcite{boone1959}.\nAn explicit example with 10 generators and 27 relators was given by \\textcite{collins1986}, then later a simpler example with 2 generators and 27 relators was given by \\textcite{wang2016}.\nWe see that the word problem for any finitely-presented group is recursively enumerable (see \\cref{prop:appendix/wp-fp}).\n\nSuppose that $\\mathcal{F}$ is a family of formal languages that is closed under \\emph{inverse word 
homomorphism}.\nThen, if the word or co-word problem with respect to some generating set belongs to $\\mathcal{F}$, it belongs to $\\mathcal{F}$ for all generating sets (see~\\cref{lem:well-defined-coword}).\nThus, for such a family it is well defined to state that a group has a word or co-word problem in $\\mathcal{F}$.\nExamples of such families are the \\emph{regular}, \\emph{context-free} and \\emph{context-sensitive} languages; the family of \\emph{ET0L languages} which are a type of L-system, introduced by \\textcite{rozenberg1973}, that generalises context-free languages, and form a subfamily of context-sensitive languages; and the family of \\emph{blind multicounter languages}~\\cite{greibach1978} which generalise counter languages and are equivalent to the class of \\emph{reversal bounded multicounter languages}~\\cite{baker1974} and \\emph{Parikh languages}~\\cite{klaedtke2003}.\nIf $\\mathcal{F}$ is a family of formal languages that is closed under inverse word homomorphism, and the word (resp.~co-word) problem of a group lies in $\\mathcal{F}$, then we say that the group is an $\\mathcal{F}$ (resp.~co-$\\mathcal{F}$) group.\n\nIt is interesting to find classifications of groups whose languages $\\WP_S$ and $\\coWP_S$ belong to certain language families.\nThe study of this problem began with \\citeauthor{anisimov1971} who showed that a group is finite if and only if $\\WP_S$ is a regular language~\\cite{anisimov1971}.\nFurther classifications were obtained by \\citeauthor{muller1983} who showed that a group is \\emph{virtually free} if and only if its word problem is a context-free language~\\cite{muller1983}; and \\citeauthor{elder2008} who showed that a group is virtually abelian if and only if its word problem is a blind multicounter language~\\cite{elder2008}.\n\nThe study of the group co-word problem, $\\coWP_S$, gives us an additional source of such classifications.\nThe class of groups for which $\\coWP_S$ is context-free was first studied by \\textcite{holt2005}.\nTheir results were then extended by \\textcite{lehnert2007} who showed that Thompson's group $V$ has a context-free co-word problem.\nCombining a result of \\textcite{bleak2016} with a remark in \\citeauthor{lehnert2008}'s thesis~\\cite[\\S\\,4.2]{lehnert2008}, it is conjectured that every group with context-free co-word problem is a subgroup of Thompson's group $V$.\nMoreover, it is conjectured that Grigorchuk's group does not have a context-free co-word problem~\\cite{bleak2016}.\n\nThe class of \\emph{bounded automata groups} includes important examples such as Grigorchuk's group of intermediate growth, the Gupta-Sidki groups, and many more \\cite{grigorchuk1980,gupta1983,nekrashevych2005,sidki2000}.\nIt was shown by \\textcite{holt2006} that bounded automata groups have \\emph{indexed} co-word problems.\nET0L languages form a proper subfamily of the indexed languages introduced by \\textcite{aho1968} (see Corollary~4.1 in~\\cite{culik1974} and Proposition~4.5 in~\\cite{ehrenfeucht1976}).\nFor the case of Grigorchuk's group, it was later shown by \\citeauthor{ciobanu2018} that the co-word problem is ET0L~\\cite{ciobanu2018}.\nThey proved this result by explicitly constructing an ET0L grammar to recognise the co-word problem.\nIn \\cref{thm:bounded automata is ET0L} we use an equivalent machine model to generalise this result to all bounded automata groups.\n\n\\begin{restatable*}{theoremx}{TheoremBoundedAutomata}\\label{thm:bounded automata is ET0L}\n\tEvery finitely-generated bounded automata group is 
co-ET0L.\n\\end{restatable*}\n\nAll results in this thesis are with respect to monoid generating sets for groups.\nIn particular, this means that we do not assume that our generating sets are \\emph{symmetric}.\nFor example, the group of integers $\\mathbb{Z} = \\left\\langle a \\mid -\\right\\rangle$ is generated by $\\{a^{-1}, a^3\\}$.\nMoreover, we prove \\cref{thm:geodesic-growth,thm:virtually-abelian-are-blind-counter} for each finite weighted monoid generating set.\nThat is, for each generator we associate a positive integer weight, and we say that a word is a geodesic if it represents an element with minimal weight.\nNotice that we may recover the usual definition of a geodesic by choosing the weight of each generator to be one.\n\n\\section{Structure}\n\nThis thesis is structured as follows.\nIn \\cref{chp:formal-language-and-automata}, we provide background on formal language theory and define the families of formal languages that are used in our proofs.\nIn \\cref{chp:coword-problems} we study the co-word problem for bounded automata groups, and prove \\cref{thm:bounded automata is ET0L}.\nWe then return to formal language theory in \\cref{chp:polyhedral} where we define the family of polyhedrally constrained languages.\nThen, in \\cref{chapter:polynomial-geodesic-growth} we provide a characterisation of the language of geodesics and geodesic growth of virtually abelian groups by proving \\cref{thm:geodesic-growth,thm:virtually-abelian-are-blind-counter}.\nFinally, in \\cref{chapter:virtually-heisenberg} we consider virtually nilpotent groups and prove \\cref{thm:main}.\n\n\\section{Attribution of Results}\n\n\\Cref{thm:geodesic-growth,thm:virtually-abelian-are-blind-counter} are published in the single-authored paper~\\cite{bishop2021}.\n\\Cref{thm:main} is joint work with my supervisor, Murray Elder; a preprint of this work is available in~\\cite{bishop2020}.\n\\Cref{thm:bounded automata is ET0L} is also joint work with Elder and has appeared in the conference proceedings of LATA (Language and Automata Theory and Applications) 2019~\\cite{bishop2019}.\n\n\\section{Notation}\\label{chapter:notation}\n\nLet $\\mathbb{N} = \\{0,1,2,\\ldots\\}$ denote the set of nonnegative integers, including zero, and $\\mathbb{N}_+ = \\{1,2,3,\\ldots\\}$ the set of positive integers.\n\nLet $G$ be a group, and let $g,h \\in G$, then $[g,h] = ghg^{-1}h^{-1}$ and $g^h = h g h^{-1}$.\nGiven two subgroups $H,K \\leqslant G$, we write $[H,K]$ for the subgroup\n\\[\n\t[H,K] = \\left\\langle\\left\\{ [h,k] \\mid h \\in H,\\ k \\in K \\right\\}\\right\\rangle.\n\\]\nA group $G$ is \\emph{$k$-step nilpotent} if there is a finite sequence of subgroups\n\\[\n\tG_0 = G,\\ \n\tG_1 = [G,G_0],\\ \n\tG_2 = [G,G_1],\\ \n\tG_3 = [G,G_2],\\ \n\t\\ldots,\\ \n\tG_k = [G,G_{k-1}] = \\{1\\}.\n\\]\nNotice that the $1$-step nilpotent groups are precisely the abelian groups.\nMoreover, for each $k$, the set of $(k+1) \\times (k+1)$ invertible upper-triangular integer matrices with 1's on their diagonal forms a $k$-step nilpotent group.\nIn fact, it is a result of \\citeauthor{auslander1967} that each nilpotent (or more generally each \\emph{polycyclic}) group has a faithful representation in $\\mathrm{SL}(n,\\mathbb{Z})$ for some $n$~\\cite[Theorem~2]{auslander1967}.\n\nLet $\\mathcal{P}$ be some group property, e.g., being abelian, nilpotent, or free.\nThen, we say that a group is \\emph{virtually $\\mathcal{P}$} if it has a finite-index subgroup with the property $\\mathcal{P}$.\nFor example, we say that a group is virtually nilpotent if it has a finite-index nilpotent subgroup.\n\n
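To make the nilpotency definitions concrete, consider the discrete Heisenberg group (a standard example, included here purely as an illustration):\n\\[\n\tH = \\left\\{ \\begin{pmatrix} 1 & a & c \\\\ 0 & 1 & b \\\\ 0 & 0 & 1 \\end{pmatrix} \\;\\middle|\\; a,b,c \\in \\mathbb{Z} \\right\\}.\n\\]\nA direct computation of commutators shows that $H_1 = [H,H_0]$ consists of exactly those matrices with $a = b = 0$, a central copy of $\\mathbb{Z}$, and that $H_2 = [H,H_1] = \\{1\\}$; thus $H$ is $2$-step nilpotent, as predicted by the $k=2$ case of the upper-triangular example above.\n\n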
Let $G$ be a group with a finite generating set $S$, then we write $S^*$ for the set of all words, including the empty word $\\varepsilon \\in S^*$, in the letters of $S$; and $\\overline{\\sigma} \\in G$ for the group element corresponding to the word $\\sigma \\in S^*$.\nWe endow $S$ with a weighting, that is, for each generator $s \\in S$ we assign a positive integer weight $\\omega(s) \\in \\mathbb{N}_+$.\nWe then say that $S$ is a finite weighted generating set for the group $G$.\nThe \\emph{weight} of a word $\\sigma = \\sigma_1 \\sigma_2 \\cdots \\sigma_k \\in S^*$ is then given by\n$\n\t\\omega(\\sigma)\n\t=\n\t\\sum_{i=1}^k\n\t\\omega(\\sigma_i)\n$.\nMoreover, we write $|\\sigma|_S = k$ for the \\emph{word length} of $\\sigma$.\nThe \\emph{weighted length} of an element $g \\in G$ is then defined as the minimum weight required to represent it as a word, that is,\n\\[\n\t\\ell_S(g)\n\t=\n\t\\min\\{\n\t\t\\omega(\\sigma)\n\t\\mid\n\t\t\\overline{\\sigma} = g\n\t\t\\text{ where }\n\t\t\\sigma \\in S^*\n\t\\}.\n\\]\nWe may now define the volume growth function $a_S \\colon \\mathbb{N} \\to \\mathbb{N}$ as follows.\n\n\\begin{definition}\\label{defn:volume-growth}\n\tThe \\emph{volume growth function} $a_S\\colon \\mathbb{N} \\to \\mathbb{N}$ is defined as\n\t\\[\n\t\ta_S(n) = \\#\\{\n\t\t\tg \\in G\n\t\t\\mid\n\t\t\t\\ell_S(g) \\leq n\n\t\t\\}.\n\t\\]\n\tThat is, $a_S(n)$ counts the elements of $G$ that can be represented by a word of weight $n$ or less.\n\\end{definition}\n\nWe say that a word $\\sigma \\in S^*$ is a \\emph{geodesic} if it represents $\\overline{\\sigma}$ with minimal weight, that is, if $\\omega(\\sigma) = \\ell_S(\\overline{\\sigma})$.\nWe write $\\GeodesicWords_S$ for the set of all geodesic words with respect to the generating set $S$, that is,\n\\[\n\\GeodesicWords_S\n=\n\\{\n\\sigma \\in S^*\n\\mid\n\\omega(\\sigma) = \\ell_S(\\overline{\\sigma})\n\\}.\n\\]\nWe then define the \\emph{geodesic growth function} $\\gamma_S\\colon \\mathbb{N} \\to \\mathbb{N}$ as follows.\n\n\\begin{definition}\\label{defn:geodesic-growth}\n\tThe \\emph{geodesic growth function} $\\gamma_S \\colon \\mathbb{N} \\to \\mathbb{N}$ is defined as\n\t\\[\n\t\\gamma_{S}(n)\n\t=\n\t\\#\n\t\\{\n\t\\sigma \\in \\GeodesicWords_S\n\t\\mid\n\t\\omega(\\sigma) \\leq n\n\t\\}.\n\t\\]\n\tThis function counts the number of geodesic words of weight $n$ or less.\n\\end{definition}\n\n
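As a quick illustration of these definitions (a standard example, stated here only for convenience), take $G = \\mathbb{Z}$ with $S = \\{a, a^{-1}\\}$ and unit weights. A word is geodesic exactly when it does not use both letters, so each element has a unique geodesic representative and\n\\[\n\ta_S(n) = \\gamma_S(n) = 2n + 1,\n\\]\nsince the geodesic words of weight at most $n$ are $\\varepsilon$ together with the powers $a^{k}$ and $(a^{-1})^{k}$ for $1 \\leq k \\leq n$. In particular, both growth functions are polynomial in this case.\n\n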
Notice that the volume and geodesic growth functions can be at most exponential as\n\\[\n\t\ta_S(n)\n\t\\leq\n\t\t\\gamma_S(n)\n\t\\leq\n\t\t\\sum_{i=0}^n |S|^{i}\n\t\\leq\n\t\t|S|^{n+1}.\n\\]\nWe say that a (volume/geodesic) growth function $f \\colon \\mathbb{N} \\to \\mathbb{N}$ has\n\\begin{itemize}\n\t\\item \\emph{polynomial growth} if there exist $\\beta,d \\in \\mathbb{N}_+$ such that $f(n) \\leq \\beta n^d$ for each $n \\geq 1$;\n\t\\item \\emph{exponential growth} if there is an $\\alpha \\in \\mathbb{R}$ with $\\alpha > 1$ such that $f(n) \\geq \\alpha^n$; and\n\t\\item \\emph{intermediate growth} if its growth is neither polynomial nor exponential.\n\\end{itemize}\nNotice that the volume and geodesic growth functions are submultiplicative, that is, if $f\\colon \\mathbb{N} \\to \\mathbb{N}$ is a growth function, then $f(n+m) \\leq f(n) f(m)$ for each $n,m \\in \\mathbb{N}$.\nThus, we may apply the following result.\n\n\\begin{lemma}[\\citeauthor{fekete1923}'s lemma~\\cite{fekete1923}]\\label{lemma:fekete}\n\tIf $f\\colon \\mathbb{N} \\to \\mathbb{N}$ is submultiplicative, then the \\emph{growth rate}\n\t$\n\t\t\\alpha_f = \\lim_{n \\to \\infty} \\sqrt[n]{f(n)}\n\t$ exists.\n\\end{lemma}\n\nFrom \\cref{lemma:fekete}, we see that a function $f \\colon \\mathbb{N} \\to \\mathbb{N}$ has exponential growth if and only if the growth rate $\\alpha_f > 1$.\n\nIn this thesis, we are interested in studying the asymptotics of growth functions by considering their associated generating functions.\nWe write $A_S$ and $\\Gamma_S$ for the generating functions associated with $a_S$ and $\\gamma_S$, respectively.\n\n\\begin{definition}\\label{defn:growth-series}\n\tWe write\n\t\\[\n\tA_S(z) = \\sum_{n = 0}^\\infty a_S(n) z^n\n\t\\quad\\text{and}\\quad\n\t\\Gamma_S(z) = \\sum_{n = 0}^\\infty \\gamma_S(n) z^n\n\t\\]\n\tfor the \\emph{volume} and \\emph{geodesic growth series}, respectively.\n\\end{definition}\n\nWe write $\\mathbf{x} = (x_1,x_2,\\ldots,x_m)$ for a finite list of variables.\nThen, for each vector $\\mathbf{n} = (n_1,n_2,\\ldots,n_m)$, we write $\\mathbf{x}^\\mathbf{n} = x_1^{n_1} x_2^{n_2} \\cdots x_m^{n_m}$.\nWe may write a multivariate generating series as $f(\\mathbf{x}) = \\sum_{\\mathbf{n} \\in \\mathbb{N}^m} c_\\mathbf{n} \\mathbf{x}^{\\mathbf{n}}$ where each $c_\\mathbf{n}$ is a constant.\nWe write $\\mathbb{C}[[\\mathbf{x}]]$, $\\mathbb{C}[\\mathbf{x}]$, $\\mathbb{C}((\\mathbf{x}))$, and $\\mathbb{C}(\\mathbf{x})$ for the class of formal power series, polynomials, formal Laurent series, and rational functions, respectively, over the variables $\\mathbf{x} = (x_1,x_2,\\ldots,x_m)$.\nMoreover, we write $\\partial_{x_i} f(\\mathbf{x})$ for the formal partial derivative of $f(\\mathbf{x})$ with respect to $x_i$.\nWe use this notation in \\cref{sec:generating-functions} where we define multivariate generating functions and the classes of rational, algebraic and holonomic power series.\n\nLet $G$ be a group with finite monoid generating set $S$, then the \\emph{word} and \\emph{co-word} problem of $G$ with respect to $S$ are given as\n\\[\n\t\\WP(G,S) = \\{ w \\in S^* \\mid \\overline{w} = 1_G \\}\n\t\\quad\\text{and}\\quad\n\t\\coWP(G,S) = \\{ w \\in S^* \\mid \\overline{w} \\neq 1_G \\},\n\\]\nrespectively.\nNotice here that $\\coWP(G,S) = S^* \\setminus \\WP(G,S)$, that is, the two sets are complements of one another with respect to the set $S^*$.\n", "meta": {"hexsha": "6b664e6f2ba7ed245f66f62f0a8a8638f1c3272f", "size": 22767, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/01_Introduction.tex", "max_stars_repo_name": "alexbishop/phd-thesis", "max_stars_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/01_Introduction.tex", "max_issues_repo_name": "alexbishop/phd-thesis", "max_issues_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/01_Introduction.tex", "max_forks_repo_name": "alexbishop/phd-thesis", "max_forks_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.9691780822, 
"max_line_length": 566, "alphanum_fraction": 0.759827821, "num_tokens": 6624, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8397339736884711, "lm_q1q2_score": 0.5980855622597903}}
{"text": "\\chapter{Low Thrust Optimization Using Orthogonal Collocation} \\label{Ch:OrthogonalCollocation} \n\n\\section{Nomenclature}\n\n\\begin{tabbing}\n12345678 \\= Reynolds number based on length $s$ \\kill\\\\\n$t$  \\>   Time in dimensional units \\\\\n$\\tau$  \\>  Time in nondimensional units $(-1 \\leq \\tau \\leq 1)$\\\\\n$t_I$  \\>   Initial time \\\\\n$t_F$  \\>  Final time\\\\\n$\\mathbf{r}(t)$        \\> Position vector in continous time form\\\\\n$\\mathbf{v}(t)$        \\> Velocity vector in continous time form\\\\\n$m(t)$        \\> Mass in continous time form\\\\\n$\\mathbf{x}(t)$   \\>  State Vector in continous time form\\\\\n$\\mathbf{r}_k$        \\> Position vector in discrete time form at $k^{th}$ point\\\\\n$\\mathbf{v}_k$        \\> Velocity vector in discrete time form at $k^{th}$ point\\\\\n$m_k$        \\> Mass in discrete time form at $k^{th}$ point\\\\\n$\\mathbf{x}_k$   \\>  State Vector in discrete time form at $k^{th}$ point\\\\\n$n_x$              \\> Number of elements in state vector\\\\\n$n_u$              \\> Number of elements in control vector\\\\\n$N$              \\> Number of collocation points\\\\\n$n_k$              \\> Number of quadrature points \\\\\n$L_i$              \\> $i^{th}$ Lagrange polynomial \\\\\n\\end{tabbing}\n\n\\section{Dynamics Model}\n\n\\section{Continuous Time Problem}\n\n\\begin{equation}\n     J = \\Phi \\left(t_I,\\mbx(t_I),t_F,\\mbx(t_F)\\right) + \\int_{t_I}^{t_F} \\mathcal{L}\\left( \\mbx(t),\\mbu(t),t;\\mbq \\right)dt\n\\end{equation}\n%\nsubject to\n%\n\\begin{equation}\n   \\frac{\\partial \\mathbf{x}}{\\partial t} = \\mathbf{f}\\left( \\mbx(t),\\mbu(t),t;\\mbq \\right)\n\\end{equation}\n\n\\section{Discretization of State and Dynamics}\n\n\\begin{equation}\n    \\mbx(\\tau_k) \\approx \\mbx_k =  \\sum^{N}_{j = 1,j \\neq i} \\mbx_i L_i(\\tau_k)\n\\end{equation}\n%\nwhere\n%\n\\begin{equation}\n    L_i = \\prod^{N}_{j = 1,j \\neq i} \\frac{\\tau - \\tau_j}{\\tau_i - \\tau_j}\n\\end{equation}\n%\n%\n\\begin{equation}\n   \\dot{\\mbx}(t_k) \\approx \\dot{\\mbx}_k =  \\sum^{N}_{j = 1,j \\neq i} \\mbx_i \\dot{L}_i(\\tau_k)  \n\\end{equation}\n\nDefine the matrix $\\mathbf{X}$ as follows\n%\n\\begin{equation}\n     \\mathbf{X} = \\left[\n     \\begin{array}{ccc}\n         \\mbx_1^T \\\\\n         \\mbx_2^T \\\\\n         \\vdots \\\\\n         \\mbx_N^T\n     \\end{array}\n     \\right]\n     %\n     =\n     %\n     \\left[\n     \\begin{array}{ccc}\n         \\mbr_1^T & \\mbv_1^T & m_1 \\\\\n         \\mbr_2^T & \\mbv_2^T & m_2\\\\\n         \\vdots \\\\\n         \\mbr_N^T & \\mbv_N^T & m_N\n     \\end{array}\n     \\right]\n     %\n\\end{equation}\n%\nSimilarly, define the matrix $\\mathbf{F}$ as \n%\n\\begin{equation}\n   \\mathbf{F} = \\left[\n     \\begin{array}{ccc}\n         \\mathbf{f}_1^T \\\\\n         \\mathbf{f}_2^T \\\\\n         \\vdots \\\\\n         \\mathbf{f}_N^T\n     \\end{array}\n     \\right]\n\\end{equation}\n%\nand\n%\n\\begin{equation}\n   \\mathbf{U} = \\left[\n     \\begin{array}{ccc}\n         \\mathbf{u}_1^T \\\\\n         \\mathbf{u}_2^T \\\\\n         \\vdots \\\\\n         \\mathbf{u}_N^T\n     \\end{array}\n     \\right]\n\\end{equation}\n\nthen,\n%\n\\begin{equation}\n    \\dot{\\mathbf{X}} = \\mathbf{D}\\mathbf{X}\n\\end{equation}\n%\n\\begin{equation}\n \\mathbf{D}\\mathbf{X} - \\mathbf{F}(\\mathbf{X},\\mathbf{U},\\mathbf{Q}) = \\mathbf{0}\n\\end{equation}\n%\n\\begin{equation}\n    \\dot{\\mathbf{x}}_k^T = \\mathbf{D}_{(k,:)}\\mathbf{X}\n\\end{equation}\n%\n\n\\begin{equation}\n   
  \\frac{\\partial \\dot{\\mbx}_k}{\\partial \\mbx_{\\ell}} = \\left[ \\mathbf{0}_{n_x,\\ell-1} \n     \\hspace{.1 in} \\mathbf{D}_{(k,:)}^T \\hspace{.1 in}  \\mathbf{0}_{n_x,n_x - \\ell+1} \\right]\n\\end{equation}\n%\n\\begin{equation}\n     \\frac{\\partial \\dot{\\mbx}_k}{\\partial \\mathbf{u}_{\\ell}} = \\mathbf{0}_{n_x,n_u} \n\\end{equation}\n\n\n\\begin{equation}\n     \\frac{\\partial \\mathbf{f}_k}{\\partial \\mathbf{x}_{\\ell}} = \n     \\left\\{ \\begin{array}{ccc}\n    &\\mathbf{A}(t_\\ell) \\hspace{.1 in} & k = \\ell \\\\\n    &\\mathbf{0}_{n_x,n_x}  \\hspace{.1 in} & k \\neq \\ell  \\\\\n    \\end{array} \\right. \n\\end{equation}\n%\n\\begin{equation} \n     \\frac{\\partial \\mathbf{f}_k}{\\partial \\mathbf{u}_{\\ell}} = \n          \\left\\{ \\begin{array}{ccc}\n          & \\displaystyle\\frac{\\partial \\mbf(t_\\ell)}{\\partial \\mbu} \\hspace{.1 in} & k = \\ell \\\\\n          &\\mathbf{0}_{n_x,n_u}  \\hspace{.1 in} & k \\neq \\ell  \\\\\n          \\end{array} \\right. \n\\end{equation}\n\n\\section{Discretization of Cost Function}", "meta": {"hexsha": "aaac81eb0e3dd04bf425a594d5ff7076f5f9ab60", "size": 4160, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/SystemDocs/MathematicalSpecification/OrthogonalCollocation.tex", "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_issues_repo_path": "doc/SystemDocs/MathematicalSpecification/OrthogonalCollocation.tex", "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_licenses": ["NASA-1.3"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_forks_repo_path": "doc/SystemDocs/MathematicalSpecification/OrthogonalCollocation.tex", "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": ["NASA-1.3"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "avg_line_length": 27.9194630872, "max_line_length": 124, "alphanum_fraction": 0.5685096154, "num_tokens": 1586, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8397339596505965, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5980855471322931}}
{"text": "\\chapter{Transfer Function}\n\\label{chap:tf}\nThe transfer function was calculated from the system equation\n\\eqref{eq:equationmotion} analysed in the section\n\\ref{subsec:equationofomotion}.\n%\n\\begin{equation}\n\\label{eq:gs}\n\tG(s) = \\big([M]s^2+[C]s+[K]\\big)^{-1}\n\\end{equation}\n%\nGiven \\(F(s) = g_{\\text{v}}V(s)\\)  the Laplace transform of the force signal,\nthus the transfer function can be write:\n\\begin{align}\n\\label{eq:tf}\n  {X(s)} &= G(s)\\frac{F(s)}{g_{\\text{v}}}\\\\\n  \\frac{X(s)}{F(s)} &= \\frac{1}{g_{\\text{v}}}\\,{G(s)}\n\\end{align}\n%\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=0.66\\textwidth]{bodediagram1}\n\t\\caption{Bode diagram of the transfer functions as in \\eqref{eq:tf}}\n\t\\label{fig:bodeplot1}\n\\end{figure}\n%\nThe bode diagram are shown in Figure \\ref{fig:bodeplot1}.", "meta": {"hexsha": "df8cde6b3d8f1a440b723c3c51cb648b4bc637cf", "size": 787, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/transferfunction.tex", "max_stars_repo_name": "frank1789/MechanicalVibrationProject", "max_stars_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-28T12:59:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-28T12:59:41.000Z", "max_issues_repo_path": "Report/transferfunction.tex", "max_issues_repo_name": "frank1789/MechanicalVibrationProject", "max_issues_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/transferfunction.tex", "max_forks_repo_name": "frank1789/MechanicalVibrationProject", "max_forks_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.1481481481, "max_line_length": 77, "alphanum_fraction": 0.6899618806, "num_tokens": 288, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916099737806, "lm_q2_score": 0.6926419958239132, "lm_q1q2_score": 0.598021287909861}}
{"text": "\\documentclass{report}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\begin{document}\n\\section{Expectation and Variance}\n\\subsection{Abstract}\nIn this issue, we mainly study some properties of Gaussian distribution\n\\subsection{Assumption}\nNow given a bunch of data:\n$$\nX=(x_1, x_2, ..., x_N)^T\n$$\n$$\nx_i \\in \\mathcal{R}^p\n$$\nFirst, we assume our model: the Gauss linear model.\\\\\\\\\nTo simplify the derivation of formula, we set $p$ equals $1$, so\n$$\nx \\backsim N(\\mu, \\sigma^2)\n$$\n$$\n\\theta=(\\mu, \\sigma)\n$$ \nNext, we use maximum likelihood estimation ($MLE $) to get the expectation and variance based on this bunch of data\\\\\\\\\nThe likelihood function is given below:\n$$\n\\begin{aligned}\np(X|\\theta)&=log(\\prod_{i=1}^N \\frac{1}{\\sqrt{2\\pi}\\sigma} \\exp(-\\frac{(x_i-\\mu)^2}{2\\sigma^2})) \\\\\n&=\\sum_{i=1}^N log(\\frac{1}{\\sqrt{2\\pi}\\sigma} \\exp(-\\frac{(x_i-\\mu)^2}{2\\sigma^2}))\\\\\n&=\\sum_{i=1}^N log(\\frac{1}{\\sqrt{2\\pi}}) - log(\\sigma) - \\frac{(x_i-\\mu)^2}{2\\sigma^2}\n\\end{aligned}\n$$\n\\subsection{Expectation}\nNext, we first use the maximum likelihood estimation to obtain the estimated value of the expected $\\mu $\n\n$$\n\\begin{aligned}\n\\mu_{MLE}\n&=argmax(p(X|\\theta))\\\\\n&=argmin(\\sum_{i=1}^N (x_i-\\mu)^2)\n\\end{aligned}\n$$\nBy deriving the formula:\n$$\n\\begin{aligned}\n\\sum_{i=1}^N 2(x_i-\\mu)&=0\\\\\n\\sum_{i=1}^N x_i - N \\mu&=0\\\\\n\\mu_{MLE}&=\\frac{1}{N} \\sum_{i=1}^N x_i\\\\\n\\end{aligned}\n$$\n\n\\subsection{Variance}\nSimilarly, we use maximum likelihood estimation to estimate the variance \n$\\sigma $\n\n$$\n\\begin{aligned}\n\\sigma_{MLE}\n&=argmax(p(X|\\theta))\\\\\n&=argmin(\\sum_{i=1}^N log(\\sigma)+\\frac{(x_i-\\mu)^2}{2\\sigma^2})\n\\end{aligned}\n$$\nSimilarly, we derive the formula:\n$$\n\\sum_{i=1}^N(\\frac{1}{\\sigma}-\\frac{(x_i-\\mu)^2}{\\sigma^3})=0\n$$\nFinally, we get the estimated value:\n$$\n\\sigma_{MLE}^2 = \\Sigma_{MLE} = \\frac{1}{N} \\sum_{i=1}^N (x_i-\\mu)^2\n$$\n\\subsection{Bias Estimation}\nTo verify whether an estimate is biased or unbiased, we only need to calculate the expectation of the estimate.\n\\subsubsection{$\\mu$}\n$$\n\\begin{aligned}\nE[\\mu_{MLE}]\n&=E[\\frac{1}{N}\\sum_{i=1}^N x_i]\\\\\n&=\\frac{1}{N}\\sum_{i=1}^N E[x_i]\\\\\n&=\\mu\n\\end{aligned}\n$$\nSo $\\mu_{MLE}$ is an unbiased estimation\n\\subsubsection{$\\sigma$}\nFirst we deform the estimate of $\\sigma$\n$$\n\\begin{aligned}\n\\sigma_{MLE}^2\n&=\\frac{1}{N} \\sum_{i=1} ^N (x_i - \\mu_{MLE})^2\\\\\n&=\\frac{1}{N} \\sum_{i=1} ^N (x_i^2 - 2x_i\\mu_{MLE} + \\mu_{MLE}^2)\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - 2(\\frac{1}{N} \\sum_{i=1} ^N x_i) \\mu_{MLE} + \\mu_{MLE}^2\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - 2\\mu_{MLE}^2 + \\mu_{MLE}^2\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - \\mu_{MLE}^2\\\\\n&=(\\frac{1}{N} \\sum_{i=1}^N x_i^2-\\mu^2) - (\\mu_{MLE}^2-\\mu^2)\\\\\n\\end{aligned}\n$$\nset $f_1=(\\frac{1}{N} \\sum_{i=1}^N x_i^2-\\mu^2)$ , $f_2=(\\mu_{MLE}^2-\\mu^2)$\\\\\\\\\nso:\n$$\n\\begin{aligned}\nE[f_1]\n&=E[\\frac{1}{N} \\sum_{i=1}^N x_i^2 - \\mu^2]\\\\\n&=E[\\frac{1}{N} \\sum_{i=1}^N (x_i^2 - \\mu^2)]\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N E[x_i^2] - E[\\mu^2]\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N E[x_i^2] - \\mu^2\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N E[x_i^2] - (E[x_i])^2\\\\\n&=\\sigma^2\n\\end{aligned}\n$$\nsimilarly:\n$$\n\\begin{aligned}\nE[f_2]\n&=E[\\mu_{MLE}^2 - \\mu^2]\\\\\n&=E[\\mu_{MLE}^2 - (E[\\mu_{MLE}])^2]\\\\\n&=Var[\\mu_{MLE}]\\\\\n&=Var[\\frac{1}{N} \\sum_{i=1} ^N 
\\subsection{Bias Estimation}\nTo verify whether an estimate is biased or unbiased, we only need to calculate the expectation of the estimate.\n\\subsubsection{$\\mu$}\n$$\n\\begin{aligned}\nE[\\mu_{MLE}]\n&=E[\\frac{1}{N}\\sum_{i=1}^N x_i]\\\\\n&=\\frac{1}{N}\\sum_{i=1}^N E[x_i]\\\\\n&=\\mu\n\\end{aligned}\n$$\nSo $\\mu_{MLE}$ is an unbiased estimate.\n\\subsubsection{$\\sigma$}\nFirst, we rewrite the estimate of $\\sigma^2$:\n$$\n\\begin{aligned}\n\\sigma_{MLE}^2\n&=\\frac{1}{N} \\sum_{i=1} ^N (x_i - \\mu_{MLE})^2\\\\\n&=\\frac{1}{N} \\sum_{i=1} ^N (x_i^2 - 2x_i\\mu_{MLE} + \\mu_{MLE}^2)\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - 2(\\frac{1}{N} \\sum_{i=1} ^N x_i) \\mu_{MLE} + \\mu_{MLE}^2\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - 2\\mu_{MLE}^2 + \\mu_{MLE}^2\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N x_i^2 - \\mu_{MLE}^2\\\\\n&=(\\frac{1}{N} \\sum_{i=1}^N x_i^2-\\mu^2) - (\\mu_{MLE}^2-\\mu^2)\\\\\n\\end{aligned}\n$$\nset $f_1=(\\frac{1}{N} \\sum_{i=1}^N x_i^2-\\mu^2)$ , $f_2=(\\mu_{MLE}^2-\\mu^2)$\\\\\\\\\nso, using $\\mu = E[x_i]$:\n$$\n\\begin{aligned}\nE[f_1]\n&=E[\\frac{1}{N} \\sum_{i=1}^N x_i^2 - \\mu^2]\\\\\n&=E[\\frac{1}{N} \\sum_{i=1}^N (x_i^2 - \\mu^2)]\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N \\left(E[x_i^2] - (E[x_i])^2\\right)\\\\\n&=\\frac{1}{N} \\sum_{i=1}^N Var[x_i]\\\\\n&=\\sigma^2\n\\end{aligned}\n$$\nsimilarly:\n$$\n\\begin{aligned}\nE[f_2]\n&=E[\\mu_{MLE}^2 - \\mu^2]\\\\\n&=E[\\mu_{MLE}^2] - (E[\\mu_{MLE}])^2\\\\\n&=Var[\\mu_{MLE}]\\\\\n&=Var[\\frac{1}{N} \\sum_{i=1} ^N x_i]\\\\\n&=\\frac{1}{N^2} \\sum_{i=1} ^N Var[x_i]\\\\\n&=\\frac{1}{N} \\sigma^2\n\\end{aligned}\n$$\nFinally, since $\\sigma_{MLE}^2 = f_1 - f_2$, subtracting the two expectations gives:\n$$\nE[\\sigma_{MLE}^2]=\\frac{N-1}{N} \\sigma^2\n$$\nSo the maximum likelihood estimate of $\\sigma^2$ is slightly smaller than the true value on average, i.e., it is biased.\\\\\\\\\nThe unbiased estimate of $\\sigma^2$ is $\\frac{1}{N-1}\\sum_{i=1}^N (x_i-\\mu_{MLE})^2$.\n\\end{document}", "meta": {"hexsha": "3785e136598789636b02086735963e6a091da63a", "size": 3605, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EN-TeX_files/Intro_Math/01_fundamentals-of-math_gaussian-distribution_expectation&variance.tex", "max_stars_repo_name": "btobab/Machine-Learning-notes", "max_stars_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-08-28T18:47:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T07:36:27.000Z", "max_issues_repo_path": "EN-TeX_files/Intro_Math/01_fundamentals-of-math_gaussian-distribution_expectation&variance.tex", "max_issues_repo_name": "btobab/Machine-Learning-notes", "max_issues_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EN-TeX_files/Intro_Math/01_fundamentals-of-math_gaussian-distribution_expectation&variance.tex", "max_forks_repo_name": "btobab/Machine-Learning-notes", "max_forks_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T18:47:22.000Z", "avg_line_length": 28.3858267717, "max_line_length": 126, "alphanum_fraction": 0.613592233, "num_tokens": 1574, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347362, "lm_q2_score": 0.8633916117313211, "lm_q1q2_score": 0.5980212781714784}}
{"text": "\\renewcommand{\\figurename}{Supplementary Figure}\n\n\n\\section*{\\underline{Supplementary Information}}\n\\tableofcontents\n\n\\section{Asteroseismic Model}\\label{s:seismo}\nIn order to extract signatures of stellar rotation from the asteroseismic p mode frequencies, we built a model that simultaneously treats the convective background ($B(\\nu)$), the oscillations ($O(\\nu)$), and the white noise ($W$), where $\\nu$ is frequency \\cite{davies+2015}. Our data, observed with \\kepler, are also subject to the apodization (attenuation) of signals in the frequency-domain, where we fit our model \\cite{chaplin+2011}. The apodization in power is given by\n\n\\begin{equation}\\label{eq:apodization}\n\t\\eta^2(\\nu) = \\rm{sinc}^2\\left(\\frac{\\pi}{2}\\frac{\\nu}{\\nu_{\\rm nyq}}\\right)\\, ,\n\\end{equation}\nwhere $\\nu_{\\rm nyq}$ is the Nyquist frequency for the \\kepler short cadence, which was treated as a free parameter in our model to account for gaps in the data. Apodization only affects signals with characteristic timescales, meaning that it does not affect the white noise level, only the oscillations and convective background components. Given the above, our comprehensive model for the power spectrum is\n\n\\begin{equation}\\label{eq:model}\n\tM(\\nu) = W + \\eta^2(\\nu)[O(\\nu) + B(\\nu)]\\, .\n\\end{equation}\n\n\\subsection{Convective Background ($B(\\nu)$)}\nTo model the convective background we used three Harvey components \\cite{harvey1985}, which express the background in power as Lorentzian-like functions centered on zero frequency. The Harvey components take the form \n\n\\begin{equation}\n\tH(\\nu, a, b, x) = \\frac{4a^2/b}{1 + (2\\pi b\\nu)^x}\\, ,\n\\end{equation}\n\n\\noindent where $a$ and $b$ are the free parameters in our model, and $x$ is fixed. The three Harvey components together form our background function as\n\n\\begin{equation}\\label{eq:background}\n\tB(\\nu) = H(\\nu, a, b, x=4) + H(\\nu, c, d, x=4) + H(\\nu, j, k, x=2)\\, ,\n\\end{equation}\n\n\\noindent where we have labeled parameters for the separate components. The $x = 2$ term here contributes to the background at high frequencies, whereas the $x=4$ terms contribute the background at low frequencies.\n\n\\subsection{Modes of Oscillation ($O(\\nu)$)}\nModes of oscillation appear in the power spectrum as Lorentzian peaks \\cite{chaplin+basu2017}. These peaks can be described by three values: the radial order ($n$, the overtone number of the oscillation), the angular degree ($\\ell$) and the azimuthal order ($m$). Due to stellar rotation, each mode with an angular degree of $\\ell > 0$ is split into its $(2\\ell +1)$ Lorentzian components, labeled by $m$. For all $\\ell=(0,1,2)$ modes identified for our targets in the  `Kages'  and LEGACY studies \\cite{davies+2016,lund+2017} we add a (set of) Lorentzian(s) to our model, building a composite model representing all visible modes. The construction of our oscillation model takes the form\n\n\\begin{equation}\n\tO(\\nu) = \\sum_n \\sum_\\ell \\sum_{m=-\\ell}^\\ell \\frac{H_{n,\\ell,m}}{1 + \\frac{4}{\\Gamma^2_{n,\\ell}}(\\nu - \\nu_{n,\\ell,m})^2}\\, ,\n\\end{equation}\n\n\\noindent where $H_{n,\\ell,m}$ is the height of the mode, $\\Gamma_{n,\\ell}$ is the linewidth of the mode (approximated to be equal for all split azimuthal orders at a single $n$ and $\\ell$) and $\\nu_{n, \\ell, m}$ is the frequency of the mode. 
\\subsubsection{Mode Frequencies and Rotational Splitting ($\\nu_{n,\\ell,m}$)}\\label{ssec:frequencies}\nThe mode frequencies of main sequence stars are described by the asymptotic expression \\cite{tassoul1980, vrard+2016}. The asymptotic expression defines the locations of the modes as regularly spaced, with structured deviation around \\numax, the frequency of maximum oscillation amplitude. The expression takes the form\n\n\\begin{equation}\\label{eq:asymptotic}\n\t\\nu_{n,\\ell,m} = \\dnu\\left(n + \\epsilon + \\delta\\nu_{0\\ell} + \\frac{\\alpha}{2}(n - \\frac{\\numax}{\\dnu} + \\epsilon)^2\\right) + m\\nu_{s}\\, ,\n\\end{equation}\n\n\\noindent where \\dnu is the large frequency separation between two consecutive radial orders $n$, $\\epsilon$ is a phase offset, $\\delta\\nu_{0\\ell}$ is the small frequency separation between two oscillation modes of different $\\ell$ at the same radial order, $\\alpha$ describes the curvature of the spacing around \\numax, and $\\nu_s$ is the rotational splitting. Note that here we have expressed the small separation $\\delta\\nu_{0\\ell}$ as a fraction of $\\dnu$. In order to improve the computational efficiency of this analysis, we fixed \\dnu to the values reported in LEGACY and Kages.\n\nInstead of calculating mode frequencies directly from Equation \\ref{eq:asymptotic} for the model, we treated the individual mode frequencies as parameters as well, drawn from Equation \\ref{eq:asymptotic}. This is called a `latent parameter' implementation \\cite{hogg2012, hall+2019}, as it forms a step between the parameters we want to draw inference on (also called hyperparameters) and our data. \nThe parameters $\\nu_{n,\\ell,m}$ were allowed to vary within an uncertainty $\\sigma_{\\ell}$, which has a single value for each angular degree and also varied as a free parameter.\nThis allowed us to account for small shifts in frequency due to sudden changes in the stellar structure \\cite{mazumdar+2014}. \n\nThe mode frequency latent parameters were drawn from a normal distribution using Equation \\ref{eq:asymptotic} as its mean function, as\n\n\\begin{equation}\n\t\\nu_{n, \\ell, m} \\sim \\mathcal{N}(\\nu_{n, \\ell, m}, \\sigma_{\\ell})\\, ,\n\\end{equation}\n\n\\noindent where the expression $\\nu_{n, \\ell, m}$ on the right hand side represents the contents of Equation \\ref{eq:asymptotic}, and $\\mathcal{N}$ represents a normal distribution with a mean equal to $\\nu_{n, \\ell, m}$ and a standard deviation equal to $\\sigma_{\\ell}$. The symbol `$\\sim$' indicates that the parameters on the left hand side of the equation are drawn from the probability distribution on the right hand side. This notation will be used throughout this work.\n\n
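A minimal sketch of Equation \\ref{eq:asymptotic} and of the latent-parameter draw (illustrative only; the numerical values below are placeholders, not fitted results):\n\\begin{verbatim}\nimport numpy as np\n\ndef asymptotic_nu(n, m, dnu, numax, eps, d0l, alpha, nu_s):\n    # Asymptotic mode frequency plus rotational splitting\n    curvature = 0.5 * alpha * (n - numax / dnu + eps) ** 2\n    return dnu * (n + eps + d0l + curvature) + m * nu_s\n\nrng = np.random.default_rng(0)\nmean_nu = asymptotic_nu(n=20, m=1, dnu=103.4, numax=2188.0,\n                        eps=1.4, d0l=0.0, alpha=0.002, nu_s=0.5)\nnu_latent = rng.normal(mean_nu, 0.1)  # 0.1 stands in for sigma_l\n\\end{verbatim}\n\n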
\\subsubsection{Mode Linewidth ($\\Gamma_{n,\\ell}$)}\nThe linewidths of asteroseismic p modes vary roughly as a function of mode frequency, and do so slowly relative to \\dnu. This can be expressed as an empirical relation \\cite{lund+2017, davies+2014, appourchaux+2016}. However, this relation has six free parameters, none of which are directly relevant to this work. Instead of fitting this relation, we chose to employ a more flexible Gaussian Process (GP) \\cite{rasmussen+williams2006} to act as a prior on the linewidths. This amounts to modelling the linewidths as correlated measurements, effectively loosely constraining how linewidth varies with frequency.\n\nA GP is defined by a covariance kernel (describing the degree of correlation between linewidths) and a mean function (describing a global trend with frequency). As this approach describes the mode linewidths relative to one another in frequency, the radial orders of each target $n$ were rescaled to be between 0 (for the lowest $n$) and 1 (for the highest $n$). The radial orders of $\\ell = 2$ modes were increased by $1$ to ensure this approximation applied to all modes. This approximation was used to describe the change in linewidth as a function of frequency without depending on the exact frequencies of the modes, which themselves were free parameters (see above). Given this, we defined our GP covariance kernel as a Squared Exponential Kernel to capture the slight periodicity of linewidth with frequency, as\n\n\\begin{equation}\\label{eq:gpkernel}\n\tK_{i,j} = \\rho^2 \\exp \\left[ -\\frac{(n_{\\textrm{f}, i} - n_{\\textrm{f}, j})^2}{2L^2} \\right]\\, ,\n\\end{equation}\n\n\\noindent where $n_{\\rm f}$ is the fractional radial order of a given mode and $K_{i,j}$ represents an element of the covariance matrix $\\underline{K}$, describing the covariance between two values of linewidth at different fractional radial orders. The GP kernel has two hyperparameters: $\\rho$, which determines the spread of the kernel in linewidth, and $L$, which determines the length scale in terms of $n_{\\rm f}$. The length scale $L$ was significantly larger than the large frequency separation (\\dnu) in all cases, and so we considered the use of fractional radial orders a valid approximation in this model.\n\nA linear function was used for the mean of the GP, as\n\n\\begin{equation}\\label{eq:gpmean}\n\t\\mu = m \\times n_{\\textrm{f}} + c\\, ,\n\\end{equation}\nwhere $m$ and $c$ are the slope and intercept of the line. The linewidth latent parameters were then drawn from the multivariate probability distribution\n\n\\begin{equation}\\label{eq:gammagp}\n\t\\Gamma \\sim \\mathcal{N}(\\mu, \\underline{K})\\, ,\n\\end{equation}\n\n\\noindent where $\\Gamma$ represents the linewidths of all the modes in the model. The parameters $m$, $c$ and $\\rho$ were marginalised over, whereas $L$ was fixed to a pre-determined value.\n\n
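The following fragment (a sketch, not the production code) draws one realisation of the linewidth prior from Equations \\ref{eq:gpkernel}--\\ref{eq:gammagp}, using the default hyperparameter values quoted later for the `Kages' stars:\n\\begin{verbatim}\nimport numpy as np\n\ndef sq_exp_kernel(nf, rho, L):\n    # Squared exponential covariance over fractional radial order\n    d = nf[:, None] - nf[None, :]\n    return rho**2 * np.exp(-d**2 / (2.0 * L**2))\n\nrng = np.random.default_rng(1)\nnf = np.linspace(0.0, 1.0, 12)\nK = sq_exp_kernel(nf, rho=0.1, L=0.3)\nmu = 1.0 * nf + 0.5  # linear mean: m * nf + c\ngamma = rng.multivariate_normal(mu, K + 1e-10 * np.eye(len(nf)))\n\\end{verbatim}\n\n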
\\subsubsection{Mode Heights and Angle of Inclination ($H_{n,\\ell,m}$)}\nThe height in power of each mode, $H_{n, \\ell, m}$, varies not only as a function of distance in frequency from \\numax, but also due to observation conditions, such as inclination angle and passband. In our model, we treated $H_{n, \\ell, m}$ as a deterministic parameter, as\n\n\\begin{equation}\\label{eq:height}\n\tH_{n, \\ell, m} =  \\varepsilon_{\\ell, m}(i) \\frac{2 (A_{n, \\ell})^2}{\\pi \\Gamma_{n, \\ell}}\\, ,\n\\end{equation}\n\n\\noindent where $\\varepsilon_{\\ell, m}(i)$ modulates the height as a function of inclination angle $i$ (see below), and $A_{n, \\ell}$ and $\\Gamma_{n, \\ell}$ are the mode amplitude and linewidth respectively for a given radial order and angular degree. Instead of modelling and modulating height directly, we sampled in amplitude and linewidth. This approach mitigates the correlations between height and linewidth in the sampling process \\cite{toutain+appourchaux1994}.\n\nAs done above for the mode frequencies and linewidths, the mode amplitudes $A_{n,\\ell}$ were also treated as latent parameters drawn from a probability distribution governed by hyperparameters. For this, we used a Gaussian function $G(\\nu)$, centered on $\\numax$, as\n\n\\begin{equation}\\label{eq:amplitude}\n\tG(\\nu) = A \\times \\exp\\left[-\\frac{(\\nu - \\numax)^2}{2w^2}\\right]\\, ,\n\\end{equation}\n\n\\noindent where $A$ is the modes' amplitude at \\numax, and $w$ is the width of the Gaussian, both free parameters in our model. \nThe mode amplitude latent parameters were then drawn from the probability distribution\n\n\\begin{equation}\\label{eq:amplitwod}\n\tA_{n, \\ell} \\sim \\mathcal{N}(G(\\nu_{n,\\ell}) \\times V_\\ell, \\sigma_{A})\\, ,\n\\end{equation}\n\n\\noindent where $V_\\ell$ is a free parameter for the mode visibility of different angular degrees, which should be consistent for all \\kepler observations. These parameters describe the difference in relative height between modes of different angular degree. The mode visibility for $V_0$ is fixed at 1, and $V_{1,2}$ are treated as free parameters. The parameter $\\sigma_{A}$, the uncertainty on the distribution, is also a free parameter, and takes the same value for all amplitudes regardless of angular degree.\\\\\n\nThe angle of inclination of a star with respect to Earth changes the net perturbation by a given mode when integrated across the stellar disc, changing the amplitudes of modes of different azimuthal orders. This is a geometric problem, and is expressed by $\\varepsilon_{\\ell, m}(i)$, which takes the form \\cite{gizon+solanki2003}\n\n\\begin{equation}\\label{eq:legendre}\n\t\\varepsilon_{\\ell, m}(i) = \\frac{(\\ell - |m|)!}{(\\ell + |m|)!}\\left[P_\\ell^{|m|}(\\cos(i))\\right]^2\\, ,\n\\end{equation}\n\n\\noindent where $P_\\ell^{|m|}$ are associated Legendre functions. For the first three angular degrees, Equation \\ref{eq:legendre} takes the form \\cite{handberg+campante2011}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\varepsilon_{0,0}(i) &= 1\\, ,\\\\\n\t\t\\varepsilon_{1,0}(i) &= \\cos^2(i)\\, ,    \\\\\n\t\t\\varepsilon_{1,\\pm1}(i) &= \\frac{1}{2}\\sin^2(i)\\, ,\\\\\n\t\t\\varepsilon_{2,0}(i) &= \\frac{1}{4}(3\\cos^2(i) - 1)^2\\, ,\\\\\n\t\t\\varepsilon_{2,\\pm1}(i) &= \\frac{3}{8}\\sin^2(2i)\\, ,\\\\\n\t\t\\varepsilon_{2,\\pm2}(i) &= \\frac{3}{8}\\sin^4(i)\\, ,\n\t\\end{split}\n\\end{equation}\n\n\\noindent where the sum of available components for a single $\\ell$ is normalized to one.\n\n
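As a compact sketch of these geometric factors (an illustration only), note that the components for each $\\ell$ do indeed sum to one:\n\\begin{verbatim}\nimport numpy as np\n\ndef mode_visibility(i):\n    # epsilon_{l,m} for l <= 2; i is the inclination in radians\n    s, c = np.sin(i), np.cos(i)\n    return {(0, 0): 1.0,\n            (1, 0): c**2,\n            (1, 1): 0.5 * s**2,\n            (2, 0): 0.25 * (3.0 * c**2 - 1.0) ** 2,\n            (2, 1): 3.0 / 8.0 * np.sin(2.0 * i) ** 2,\n            (2, 2): 3.0 / 8.0 * s**4}\n\neps = mode_visibility(np.radians(45.0))\nprint(eps[(2, 0)] + 2 * eps[(2, 1)] + 2 * eps[(2, 2)])  # 1.0\n\\end{verbatim}\n\n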
\\subsection{Likelihood Function for $M(\\nu)$}\\label{sec:like}\nIf data have Gaussian noise in the time domain, they will appear in the frequency domain with noise following a $\\chi^2$ distribution with two degrees of freedom \\cite[$\\chi^2_2$ hereafter]{appourchaux+1998}. The noise properties of $\\chi^2_2$ distributed data are multiplicative, and require a specific treatment when fitting a model. As our frequency bins can be approximated to be independent, we used the log-likelihood function \\cite{anderson+1990},\n\n\\begin{equation}\n\t\\ln p(\\mathcal{P} | M(\\nu)) = -\\sum_{j=0}^{N-1} \\left[\\ln[M_j(\\nu)] + \\frac{\\mathcal{P}_j}{M_j(\\nu)}\\right]\\, , \n\\end{equation}\n\n\\noindent where $\\mathcal{P}$ is the power spectral density (and thus our data), and $M(\\nu)$ represents our model. The subscript $j$ denotes an individual datum, for a total of $N$ data. We have omitted the dependence of the model $M(\\nu)$ on its parameters, for clarity. This equation is functionally equivalent to the evaluation of a gamma distribution of the form $\\gamma(\\mathcal{P} | 1, \\beta)$, where $\\beta = 1/M(\\nu)$, which is the implementation we used in the sampling process.\n\n
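A one-line sketch of this log-likelihood (illustrative; as noted above, our actual fits used the equivalent gamma-distribution implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef ln_likelihood(power, model):\n    # Whittle-type log-likelihood for chi^2 (2 d.o.f.) data\n    return -np.sum(np.log(model) + power / model)\n\\end{verbatim}\n\n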
\\subsection{Model preparation and hyperparameter priors}\n\\subsubsection{Fitting the convective background}\\label{sec:background}\nThe convective background, apodization and white noise components must be fit over the full range of the power spectrum in order to be accurately constrained. However, fitting a single model to the full range of frequencies is computationally inefficient when we are interested in the modes of oscillation, as these occupy a relatively small range of frequencies.\n\nIn order to speed up this process, the background was first fit independently to a subset of our data for each star. This subset was created by removing all frequencies within a range $0.1 \\times \\dnu$ below and above the minimum and maximum mode frequencies reported in LEGACY and Kages. For KIC 3427720 we also removed frequencies in the range $90\\, \\mu\\rm{Hz} < \\nu < 400\\, \\mu{\\rm{Hz}}$, where there were large peaks not of asteroseismic origin, skewing the background fit.\n\nFor each star the model function (see Eq. \\ref{eq:model}) was fit, as\n\n\\begin{equation}\n\tM_{B}(\\nu) = W + \\eta^2(\\nu)B(\\nu)\\, ,\n\\end{equation}\n\n\\noindent where $B(\\nu)$ is the background model described in Equation \\ref{eq:background}. The parameter components of our background fit are then $\\phi_B = \\{\\log(a), \\log(b), \\log(c), \\log(d), \\log(j), \\log(k), W, \\nu_{\\rm nyq}\\}$, where the parameters of the Harvey components were sampled in log space. The model was fit to the background data using \\texttt{PyStan} \\cite{vanhoey+2013}, run for 10,000 iterations on each star. \n\n\\subsubsection{Obtaining First Guesses and Prior Values}\nIn order to utilise some of the prior measurements of our targets without using them as hard constraints on our parameters, some of our model equations were fit to LEGACY and `Kages' data to obtain first guesses and mean values on hyperparameter priors.\n\nFor first guesses for parameters in the asymptotic expression, we fit Equation \\ref{eq:asymptotic}, \\textit{not} including the rotational component $m\\nu_s$, to the $\\ell = (0, 1, 2)$ mode frequencies reported in LEGACY and `Kages' for each star, using their reported uncertainties. This yielded estimates of $\\hat{\\epsilon}$, $\\widehat{\\delta\\nu}_{01}$, $\\widehat{\\delta\\nu}_{02}$ and $\\hat{\\alpha}$, where the hat symbol `\\, $\\widehat{}$\\, ' indicates a prior value (e.g. $\\hat{\\nu}_{\\rm max}$ is taken from LEGACY or `Kages'). While not precise, as we did not mitigate any perturbations due to acoustic glitches, these rough results act as functional first guesses and prior mean values. The relation was fit to each star using PyMC3 \\cite{salvatier+2016} with 5000 iterations on 4 chains.\n\nTo obtain first guesses for the parameters used to set the GP prior on linewidth, we fit a GP constructed as in Equation \\ref{eq:gammagp} to the linewidths of the $\\ell = 0$ modes reported in LEGACY. Linewidths were not reported for the other angular degrees in LEGACY, but the estimates may be generalised to other $\\ell$, as linewidth is a strong function of frequency. The relation was fit to each star using PyMC3 with 2500 iterations on 4 chains.\n\nFitting the LEGACY linewidths yielded rough estimates of $\\hat{m}$, $\\hat{c}$, $\\hat{\\rho}$ and $L$ for each star. As is noted in Equation \\ref{eq:gammagp}, $L$ was fixed to this fit value when fitting our full model to our data. For stars in `Kages', for which no linewidths were reported, we instead fixed these prior values to $\\hat{m} = 1$, $\\hat{c} = 0.5$, $\\hat{\\rho} = 0.1$, and the length scale to $L = 0.3$. These values were chosen to reflect those found for the LEGACY stars.\n\nFinally, we obtained prior values for the Gaussian function describing the distribution of mode amplitudes around \\numax (Eq. \\ref{eq:amplitude}). The mode amplitude of the highest peak in the spectrum was used for $\\hat{A}$, which was typically at or near \\numax. For the width of the Gaussian we used the empirical function \\cite{lund+2017}\n\n\\begin{equation}\n\t\\hat{w} = 0.25 \\times \\hat{\\nu}_{\\rm max}\\, .\n\\end{equation}\n\n\\noindent For the mode visibilities, we used $\\hat{V}_1 = 1.2$ and $\\hat{V}_2 = 0.7$, which reflect the results for these parameters reported in the LEGACY catalogue.\n\n\\subsubsection{Priors on our Hyperparameters}\nGiven our first guesses and measured prior values, we can define the prior probabilities of the hyperparameters on which our model depends. For the mode frequency hyperparameters (Eq. \\ref{eq:asymptotic}), these are\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\numax &\\sim \\mathcal{N}(\\hat{\\nu}_{\\rm max}, 10)\\, ,\\\\\n\t\t\\epsilon &\\sim \\mathcal{N}(\\hat{\\epsilon}, 1)\\, ,\\\\\n\t\t\\alpha &\\sim \\ln\\mathcal{N}(\\ln(\\hat{\\alpha}), 0.01)\\, ,\\\\\n\t\t\\delta\\nu_{01} &\\sim \\ln\\mathcal{N}(\\ln(\\widehat{\\delta\\nu}_{01}), 0.1)\\, ,\\\\\n\t\t\\delta\\nu_{02} &\\sim \\ln\\mathcal{N}(\\ln(\\widehat{\\delta\\nu}_{02}), 0.1)\\, ,\\\\\n\t\t\\sigma_{0,1,2} &\\sim \\mathcal{C}_{1/2}(\\beta = 2)\\, ,\n\t\\end{split}\n\\end{equation}\n\n\\noindent where $\\ln\\mathcal{N}$ represents a log-Normal distribution and $\\mathcal{C}_{1/2}$ represents a half-Cauchy distribution. The half-Cauchy distribution ensures the standard deviations do not inflate to large numbers, and is generally well-behaved close to zero in the case of stars with little deviation from Eq. \\ref{eq:asymptotic} \\cite{gelman2006}. Other symbols are as described above. All three hyperparameters $\\sigma_{0,1,2}$ describing the uncertainty on the latent parameters of different angular degree were subject to the same prior.\n\nFor the mode linewidths (Eq. \\ref{eq:gpkernel} and \\ref{eq:gpmean}), our hyperparameter priors took the form\n\n\\begin{equation}\n\t\\begin{split}\n\t\tm &\\sim \\mathcal{N}(\\hat{m}, 1)\\, ,\\\\\n\t\tc &\\sim \\mathcal{N}(\\hat{c}, 1)\\, ,\\\\\n\t\t\\rho &\\sim \\ln\\mathcal{N}(\\ln(\\hat{\\rho}), 0.1)\\, ,\\\\\n\t\\end{split}\n\\end{equation}\n\n\\noindent where the conventions are the same as above. For our mode amplitudes (Eq. 
\\ref{eq:amplitude} and \\ref{eq:amplitwod}), they took the form\n\n\\begin{equation}\n\t\\begin{split}\n\t\tw &\\sim \\ln\\mathcal{N}(\\ln(\\hat{w}), 10)\\, ,\\\\\n\t\tA &\\sim \\ln\\mathcal{N}(\\ln(\\hat{A}), 1)\\, ,\\\\\n\t\tV_1 &\\sim \\ln\\mathcal{N}(\\ln(\\hat{V}_1), 0.1)\\, ,\\\\\n\t\tV_2 &\\sim \\ln\\mathcal{N}(\\ln(\\hat{V}_2), 0.1)\\, ,\\\\\n\t\t\\sigma_A &\\sim \\mathcal{C}_{1/2}(\\beta = 1)\\, .\n\t\\end{split}\n\\end{equation}\n\nAs the convective background had already been fit to our data excluding the region where the modes are present, the results from that fit could be used as extremely informative priors on our fit to the region containing the modes, where there is little information present to constrain the background. To do so, we modeled the background parameters $\\phi_B$ in our full model as being drawn from a multivariate normal distribution as\n\n\\begin{equation}\n\t\\phi_{B}\\sim\\mathcal{N}(\\hat{\\phi}_{B},\\underline{\\Sigma}_{\\hat{\\phi}_{B}})\\, ,\n\\end{equation}\n\n\\noindent where $\\hat{\\phi}_B$ are the median values of our posterior distributions from our prior background fit, and $\\underline{\\Sigma}_{\\hat{\\phi}_{B}}$ is the full covariance matrix of all the posterior distributions from our prior background fit, taking into account the correlations between the different Harvey components.\n\nFinally, we defined the priors on the rotational parameters: the mode splitting ($\\nu_s$), and the inclination angle ($i$). In order to give these an appropriate treatment, we made two reparameterizations. First, we sampled the projected rotational splitting, $\\nu_{\\rm{s}}\\sin(i)$, which is more efficiently sampled due to the strong correlations between $i$ and $\\nu_{\\rm s}$ \\cite{ballot+2006,ballot+2008a}. A prior was applied over this as\n\n\\begin{equation}\n\t\\nu_s\\sin(i) \\sim \\ln\\mathcal{N}(\\ln(0.75), 0.75)\\, ,\n\\end{equation}\n\n\\noindent where conventions are as above. This subjective prior was chosen to reflect that most stars will have a solar-like rotation, with a long tail to allow for the fastest rotators. Second, we sampled in $\\cos(i)$, and gave this a prior of\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\cos(i) &\\sim \\mathcal{U}(0, 1)\\, ,\n\t\\end{split}\n\\end{equation}\nwhich is equivalent to stating that probability to observe an inclination angle $i$ is equal to $\\sin(i)$. Here, the $\\mathcal{U}(0,1)$ indicates a uniform prior between 0 and 1. Using a uniform prior on $\\cos(i)$ allowed us to account for the geometric effect that stars with a large inclination angle with respect to us are more common \\cite{chaplin+basu2017}.\n\n\\subsection{Fitting procedure}\nUsing our prior information and model described above, we fit Equation \\ref{eq:model} to our data $\\mathcal{P}$, using the likelihood function described in Section \\ref{sec:like}.\n\nIn order to speed up the fitting process, we only applied our model to the region of the power spectrum that contains visible modes of oscillation. We created this subset by removing all frequencies outside a range $0.25 \\times \\Delta\\nu$ below and above the minimum and maximum mode frequencies reported in LEGACY and Kages. This region overlaps minimally with the data used to fit for our prior information of the convective background (see Section \\ref{sec:background}), so that both model fits are independent.\n\nTo improve computational efficiency, we reduced the number of oscillation modes being fit in five targets. 
For 16 Cyg A \\& B, KIC 7970740 and KIC 8478994, we excluded any modes with a Bayes Factor ($\\ln(K)$) of less than 6, as reported in LEGACY \\cite{davies+2016,lund+2017,kass+raftery1995}. For KIC 8478994, which is reported without a value for $\\ln(K)$ in Kages, we only included modes of an overtone number that contained a detection for all of $\\ell = (0, 1, 2)$, retaining 5 sets of higher signal-to-noise overtones. We do not expect this reduced scope to bias our results, although they may reduce the precision on our measured rotation rates.\n\nWe fit our model to our power spectrum data with \\texttt{PyMC3}, using 2500 iterations each on 4 chains. An example of our model fit to an asteroseismic power spectrum of 16 Cyg A is shown in Supplementary Figure \\ref{fig:modelfit}.\n\n \\begin{figure}\n\t\\centering\n\t\\includegraphics[width=.99\\textwidth]{modelfit.pdf}\n\t\\caption{A power spectrum constructed from four years of \\kepler observations of 16 Cyg A (KIC 12069424). Plotted over the top is the model resulting from the fit to the data described in this work. The model implements both the mode frequencies, seen on the right hand side of the plot, and the convective background, the effects of which are seen on the left. Low frequencies have been cropped out for clarity. \\textit{Inset}: A zoom in on a radial (right) and quadrupole (left) ($\\ell = 0, 2$) pair of modes. The quadrupole mode is split into five components by the star's rotation. Due to the star's inclination angle with respect to us, two out of five peaks are more distinct. The height and spacing of the mode components is a function of the star's rotational splitting ($0.56\\, \\mu\\rm{Hz}$, equivalent to $P = 20.5\\, \\rm{days}$) and angle of inclination ($45^\\circ$).}\n\t\\label{fig:modelfit}\n\\end{figure}\n\n\n\\section{Verifying asteroseismic results}\n\\subsection{Priors on rotational parameters}\nIn our Bayesian analysis, we have placed weakly informative priors on our sampled rotational parameters, $\\nu_{\\rm s}\\sin(i)$ and $\\cos(i)$. The prior is especially important for the angle of inclination, which is hardest to infer from the data. We are able to validate the robustness of our asteroseismic results by confirming that their posterior distributions are data-dominated, and not prior-dominated. We can do so by comparing the $68\\%$ credible regions of the posterior estimates of $\\nu_{\\rm s}\\sin(i)$ and $i$ against the $68\\%$ credible regions of their priors.\n\nA comparison between prior and posterior is shown for 94 stars in $\\nu_{\\rm s}\\sin(i)$, $i$ and $P$ in Supplementary Figure \\ref{fig:priors}, arranged by age. In the Figure, results with means (symbols) and $68\\%$ credible regions (error bars) that are close to those same values for the prior distribution (where the horizontal line is the mean, and the shaded area is the credible region) can be interpreted as prior-dominated (i.e. poorly informed by the data). Cases where the means differ or the credible regions are smaller than the prior distribution are data-dominated. The projected splitting, $\\nu_{\\rm s}\\sin(i)$, is overall well constrained, with only one star being prior dominated. This is expected, as the projected splitting is what we observe on the star before decoupling inclination and rotation. The angle of inclination $i$, sampled as $\\cos(i)$, more closely follows the prior distribution in most cases. 
Combining the two, no star's rotation period $P$ directly corresponds to the effective prior on period, and the ensemble globally follows a trend with increasing age. The three outliers with fast rotation at late ages (KICs 6603624, 8760414 and 8938364) are discussed in more detail below.\n\nThe rotation rates as presented in this work are a product of our Bayesian sampling of both projected splitting and angle of inclination. As seen in the Figure, there are instances where $i$ or $\\nu_{\\rm s}\\sin(i)$ closely resemble the prior (and are therefore prior-dominated). There are no cases of this when looking at the resulting period measurements, as they will have been informed by at least one strongly data-driven parameter (commonly $\\nu_{\\rm s}\\sin(i)$). From this, we concluded that our ensemble of asteroseismic rotation is not strongly dominated by the priors imposed on projected splitting and inclination in our Bayesian analysis.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{priors.pdf}\n\t\\caption{\\textbf{Comparisons between posterior estimates of rotational parameters (data points and error bars) and the priors on these parameters (shaded regions).} The data are sorted from young stars (left) to old (right). Shown are projected splitting ($\\nu_{\\rm s}\\sin(i)$), inclination angle ($i$, sampled as $\\cos(i)$) and rotation period ($P$). Both the prior and values for $P$ are a transformation of the upper two parameters (see text). In all cases, the extent of the error bars and shaded regions indicates the 68\\% credible interval of the posterior and prior distributions respectively. The solid lines indicate the median of the prior distributions. In this figure, results with means and error bars that closely resemble the prior distribution can be interpreted as prior-dominated (i.e. poorly informed by the data). All stars are coloured and sorted by age. In the case of inclination angle $i$ and rotation period $P$, the displayed priors are transformed from the priors imposed on the sampled parameters from which their posteriors were derived.}\n\t\\label{fig:priors}\n\\end{figure}\n\n\\subsection{Comparisons to previous studies}\\label{ssec:litcomp}\nIn order to validate our results, we compared our rotational parameters to those obtained in the literature, as well as those resulting from the work presented in LEGACY and `Kages', which were unreported and received through private communication by the authors of the catalogue papers.\n\nComparisons with LEGACY and `Kages' are shown in Supplementary Figure \\ref{fig:legacykages} for projected splitting, inclination angle, and rotation period. In all three cases we show the fractional difference between the values obtained in this work and those from LEGACY and `Kages'. On the right of the Figure, we show the distribution of the fractional differences for the three parameters.\nThe projected splitting is in good agreement with both studies; however, LEGACY finds slightly lower $\\nu_{\\rm s}\\sin(i)$ for the faster rotators, deviating from our work by over $1\\sigma$. Neither LEGACY nor `Kages' used a spatially isotropic prior for the inclination angle in their analyses, instead opting for a uniform prior. As posterior estimates of inclination angle are only loosely data-driven, the introduction of an isotropic prior should result in our analysis reporting globally higher inclination angles. 
This effect is seen in the comparisons of both inclination angles and rotation rates for the LEGACY stars, where we find overall lower rotation rates compared to LEGACY for stars at very similar $\\nu_{\\rm s}\\sin(i)$.\n\nA number of stars are excluded from Supplementary Figure \\ref{fig:legacykages} and compared individually as extreme outliers: KICs 5094751, 6196457, 8349582, 8494142, 8554498, 105114430 and 11133306 all have fast rotation rates ($<5\\, \\rm days$) in `Kages', but are found to have a broader spread of rotation rates in this work. At similar values for $\\nu_{\\rm s}\\sin(i)$, `Kages' found much lower inclination angles with highly asymmetrical uncertainties. Based on a comparison between the summary statistics of these stars, we concluded that the results in this work have marginalised better over inclination angle, improving our measure of rotation.\n\nConversely, KICs 6603624, 8760414 and 8938364 have extremely slow rotation periods in LEGACY, but extremely \\textit{fast} ($<3\\, \\rm {days}$) in this work. KICs 8760414 and 8938364 are excluded from the gyrochronology analysis below based on checks for $\\hat{R}$ and the number of effective samples. Both stars have ages greater than $10\\, \\rm Gyr$, making their fast rotation rates highly unlikely under any model of rotational evolution. The posterior estimate of $P$ for KIC 6603624 is well-defined in our analysis, but with an age of $7.8\\, \\rm Gyr$, its measured rotation of $1.2\\, \\rm days$ is also highly unlikely under any model of rotational evolution. The LEGACY estimate of rotation is similarly extreme at $378\\, \\rm days$. These three stars have the lowest inclination angles in our sample ($< 10^\\circ$), at which point the power in the split components of the seismic modes is so low that it becomes difficult to probe the measure of splitting. The split components for these stars will have a height roughly 3\\% of the central mode. For comparison, for the next lowest inclined star at $17^\\circ$, this rises to $10\\%$. In cases such as these with lower signal-to-noise, spurious peaks may be interpreted as split components.\\\\\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{litcomp_alt2.pdf}\n\t\\caption{\\textbf{Comparisons between posterior estimates of rotational parameters from this work and the literature}. Literature values are taken from LEGACY and `Kages' \\cite[private communication]{davies+2016, lund+2017}. Shown are projected splitting ($\\nu_{\\rm s}\\sin(i)$), inclination angle ($i$, sampled as $\\cos(i)$) and rotation period ($P$). Fractional differences are plotted against stellar rotation obtained in this work. The $\\Delta$ indicates the fractional difference between this work and the literature (i.e. stars above the zero-line have higher values in this work). The right hand panels show the distribution of the fractional differences around the zero line. The colour legend is consistent throughout all panels. The x-axis units on the right hand panels are equivalent to the y-axis of the left hand panels. 10 stars have been omitted from this plot and are discussed in more detail in the text: KICs 5094751, 6196457, 8349582, 8494142, 8554498, 105114430 and 11133306 all have extremely low rotation periods in Kages, with high uncertainties. Conversely, KICs 6603624, 8760414 and 8938364 have extremely high rotation periods in LEGACY with low uncertainties. Error bars represent the 68\\% confidence intervals. 
In cases where stars had asymmetric error bars, the larger of the two was used when propagating uncertainty for the purposes of this figure.}\n\t\\label{fig:legacykages}\n\\end{figure}\n\nWe also compared our asteroseismic estimates of stellar rotation with similar studies in the literature, shown in Supplementary Figure \\ref{fig:literaturecomp}. These included: a study of the binary solar analogues 16 Cyg A \\& B \\cite{davies+2015}; a study of surface and seismic rotation with which our catalogue shares 5 stars \\cite{nielsen+2015}; and an asteroseismic study of differential rotation with which our catalogue shares 40 targets \\cite{benomar+2018}. For the latter, we used their reported splitting value $a_1$, which is equivalent to $\\nu_{\\rm s}$. \n\nOverall, Supplementary Figure \\ref{fig:literaturecomp} shows no strong disagreements between our asteroseismic measurements of stellar rotation and those from the literature. The scatter of the fractional differences lies cleanly around the zero line, with a mean and spread of $0.0_{-15.6}^{+16.4}\\, \\%$. The increase in uncertainty with period is due to more slowly rotating stars being more difficult to constrain using asteroseismology.\n\nIt is of note that 16 Cyg A was found to be rotating slightly faster in our analysis compared to the literature \\cite{davies+2015} (deviating within $2\\sigma$), despite the fit being performed on the same data. We found a slightly lower inclination angle for 16 Cyg A at a similar projected splitting, which would explain the shorter rotation period found in this work.\n\nThere are three outliers at low period in Supplementary Figure \\ref{fig:literaturecomp}: KICs 6603624, 8760414 and 8938364. These are the same targets found to be outliers in a comparison to the LEGACY and `Kages' measurements (see above), with anomalously fast rotation rates and low inclination angles. As these stars represent the lowest inclination angles in our sample, and were found to disagree with two independent studies, we opted to exclude them from the gyrochronology analysis, and to flag the rotation measurements for these three stars presented in this work.\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{seis_comparison_rot_alt2.pdf}\n\t\\caption{\\textbf{Fractional differences between posterior estimates of asteroseismic rotation period from this work and the literature.} Literature sources are: Davies et al. (2015) \\cite{davies+2015} (16 Cyg A \\& B), Nielsen et al. (2015) \\cite{nielsen+2015} (5 stars) and Benomar et al. (2018) \\cite{benomar+2018} (40 stars). We used the reported parameter $a_{1}$ from the latter, which represents the rotational splitting in the case of uniform latitudinal rotation in their model. The dashed line represents the median of the sample shown, with the dotted lines representing the $15.9^{\\rm{th}}$ and $84.1^{\\rm{st}}$ percentiles. Error bars represent the 68\\% confidence intervals. In cases where stars had asymmetric error bars, the larger of the two was used when propagating uncertainty for the purposes of this figure.}\n\t\\label{fig:literaturecomp}\n\\end{figure}\n\n\\subsection{Seismic vs spectroscopic rotation}\nA distinct difference between rotation rates obtained through different techniques may hold information about differential rotation (both latitudinal and radial) of near-surface layers, such as those we observe in the Sun \\cite{beck2000}.
A previous comparison of spectroscopic and seismic rotation rates, performed on a sample of 22 stars, found no significant radial differential rotation \\cite{benomar+2018}. In that work, they considered not only rotation rates from spots, but also spectroscopic measures of the projected surface rotation $\\textrm{v}\\sin(i)$, which they found to be more reliable.\n\nWith our expanded sample of asteroseismic rotation we can perform a similar analysis, to both validate our sample and probe radial differential rotation. Supplementary Figure \\ref{fig:vsinilit} shows a comparison to spectroscopic $\\textrm{v}\\sin(i)$ measurements as listed in LEGACY and `Kages' (left) and Benomar et al. (2015) \\cite{benomar+2015} (right). In these cases the asteroseismic $\\textrm{v}\\sin(i)$ has been calculated using our measure of asteroseismic $\\nu_s\\sin(i)$ and the known asteroseismic radii. Three stars (KICs 6603624, 8760414 and 8938364) have been excluded from this figure due to strong disagreements of their measured rotation rates with the literature (see above).
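\n\nThe conversion is direct: $\\textrm{v}\\sin(i) = 2\\pi R\\, \\nu_{\\rm s}\\sin(i)$. As a minimal sketch (our illustration, assuming splitting in $\\mu$Hz and radius in solar units):\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\nRSUN_KM = 6.957e5    # solar radius in km\nMUHZ_TO_HZ = 1e-6\n\ndef vsini_kms(nu_s_sin_i_muhz, radius_rsun):\n    # v sin(i) = 2 pi R * nu_s sin(i)\n    omega_sin_i = 2.0 * np.pi * nu_s_sin_i_muhz * MUHZ_TO_HZ  # rad/s\n    return omega_sin_i * radius_rsun * RSUN_KM\n\\end{lstlisting}\n\nFor solar-like values ($\\nu_{\\rm s}\\sin(i) \\sim 0.4\\, \\mu$Hz, $R = 1\\, R_\\odot$) this gives $\\sim 1.8\\, \\rm{km\\,s^{-1}}$, of the expected order for the Sun.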
\n\nFor the LEGACY and `Kages' sample, we find no strong deviation from the 1:1 line except at very low velocities, which is likely due to biases inherent to spectroscopic line broadening measurements \\cite{doyle+2014, tayar+2015}. For the Benomar et al. (2015) sample, the stars lie much closer to the 1:1 line.\n\nOverall, there appears to be a global offset where spectroscopic measurements of projected rotation appear faster than asteroseismic measures. Based on the LEGACY and `Kages' $v\\sin(i)$ values, this offset is roughly $18\\%$ (i.e. spectroscopic projected rotation rates are faster than asteroseismic rates). This offset is much smaller ($\\sim5\\%$) for the \\cite{benomar+2015} sample, albeit for far fewer stars. These offsets are within the typical disagreement between spectroscopic methods, based on comparisons of projected rotation measurements for red giant stars \\cite[see Figure 2]{tayar+2015}, especially at $< 5\\, \\rm{km\\,s^{-1}}$.\n\nIt is worth addressing the impact this comparison would have on our conclusions for gyrochronology, were we to take the spectroscopic rotation rates as the truth, even at velocities of $< 5\\, \\rm{km\\,s^{-1}}$. For the majority of cases, the seismic velocities are slower than the spectroscopic velocities, which would bias our ensemble towards favouring a standard evolution over a weakened magnetic braking scenario (as stars would overall appear to be rotating slower at late ages). Based on the comparison to spot rotation presented in the main body of this paper, and given known issues comparing seismic and spectroscopic rotation at low velocities, we concluded that the divergence seen in Supplementary Figure \\ref{fig:vsinilit} does not undermine our results.\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{vsini_comparison_new.pdf}\n\t\\caption{\\textbf{Comparisons between asteroseismic and spectroscopic measures of projected surface rotation, $\\textrm{v}\\sin(i)$.} All asteroseismic (x-axis) values are from this work, all spectroscopic (y-axis) values are from the literature. \\textit{Left}: comparisons to values reported for 81 stars in LEGACY and `Kages'. \\textit{Right}: comparisons to 16 stars observed by Benomar et al. (2015) \\cite{benomar+2015}. Asteroseismic values are transformed from projected splitting ($\\nu_s\\sin(i)$) using the asteroseismic radius measurements presented in LEGACY and `Kages'. Horizontal error bars represent the 68\\% confidence intervals. Vertical error bars represent the formal uncertainty on the published spectroscopic values. The solid lines indicate the 1:1 line, while the dash-dotted lines represent the 2:1 and 1:2 lines. The dashed lines indicate the location of $\\textrm{v}\\sin(i) = 5\\, \\rm{km\\,s^{-1}}$, the point at which surface and seismic measures of projected rotation begin to differ strongly \\cite{tayar+2015}.}\n\t\\label{fig:vsinilit}\n\\end{figure}\n\n\n\\subsection{Asteroseismic detection biases}\n\nStars that are rotating very slowly will show small rotational splittings, which may be indistinguishable from a non-rotating case, whereas stars that are spinning very fast may have splittings so wide that their modes cross over with split modes at different radial orders ($n$). Asteroseismic selection techniques may also introduce additional biases on a population level. We consider the limitations this places on our asteroseismic method; a short numerical sketch of the two limits follows the list below.\n\n\\begin{itemize}\n\t\\item \\textit{Fast rotating stars:} In principle, a star could rotate fast enough that the highest-frequency peak in a split quadrupole ($\\ell = 2$) mode would overlap with the radial ($\\ell = 0$) mode. This would occur if $2\\nu_{\\rm s} > \\delta\\nu_{02}$, the small separation. For the star in our sample with the smallest $\\delta\\nu_{02}$ (so the highest risk of this happening), this would occur at a rotation of 8.8 days or faster. For the star with the largest $\\delta\\nu_{02}$, this limit was roughly 2 days.\n\t\n\tIn practice, the model still properly fits a spectrum where this kind of mode-overlapping occurs, as the $\\ell = 0$ modes do not split, meaning that there are no degenerate solutions in this regime, and further constraints can be obtained from the other modes and split components in the spectrum. A more serious issue may be when the $\\ell = 1$ mode at lower frequencies overlaps with the $\\ell = 2$ mode as well, which would only occur at a rotation rate of 2 days or faster for the star with the smallest $\\delta\\nu_{01}$.\n\t\n\t\\item \\textit{Slow rotating stars:} Except for the most rapidly rotating cases, asteroseismic mode splitting is detected by a change in the distribution of the power of mode frequencies as the split components begin to separate from the central mode. If there is a hard limit at which this measurement becomes impossible, it should be the case that $\\nu_{\\rm s} < 2/T$, where $T$ is the observation time, and $1/T$ is the frequency bin width. For the star in our sample with the shortest observing time, this limit would be $\\sim 89$ days, which quickly rises for longer observing baselines. While this constitutes a hard limit, slower rotation rates will become harder and harder to constrain as the rate of splitting becomes smaller. A benefit of the Bayesian approach used in this paper is that it should accurately reflect the increased uncertainty associated with the small rotational splitting.\n\\end{itemize}\n\nWhile these two limits cause concern when studying the most extreme rotators, the results presented in this work are driven by stars with rotation rates in the regime of 5 to 30 days. Above, we demonstrated that results on period for these stars were data-driven (i.e. do not reflect the priors), indicating that the splitting is well resolved in this regime, and so we do not expect these limits to bias our conclusions.
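\n\nAs a minimal sketch of the two limits above (our illustration, not the analysis code; inputs are assumed to be in $\\mu$Hz and days):\n\n\\begin{lstlisting}[language=Python]\nMUHZ_TO_HZ = 1e-6\nSECONDS_PER_DAY = 86400.0\n\ndef fastest_safe_period_days(d_nu02_muhz):\n    # Overlap when 2 nu_s > d_nu02, i.e. when P < 2 / d_nu02\n    return 2.0 / (d_nu02_muhz * MUHZ_TO_HZ) / SECONDS_PER_DAY\n\ndef slowest_resolvable_period_days(t_obs_days):\n    # Unresolved when nu_s < 2/T, i.e. when P > T / 2\n    return t_obs_days / 2.0\n\\end{lstlisting}\n\nA small separation of $\\sim 2.6\\, \\mu$Hz roughly reproduces the 8.8-day fast limit quoted above, and a $\\sim 178$-day baseline the $\\sim 89$-day slow limit.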
\n\nFinally, we also consider bias in asteroseismic detections. The probability of an asteroseismic detection with \\kepler scales with temperature and radius (i.e. scales inversely with \\logg, and therefore main sequence age), and detections are unlikely below roughly $5200\\, \\rm{K}$ \\cite{chaplin+2011, schofield+2019}. Very young ($\\lesssim 2\\, \\rm{Gyr}$), very magnetically active stars may also have suppressed oscillation modes \\cite{mathur+2019}. Though these two effects impose a bias on asteroseismic populations, they do so outside the parameter range on which our comparison takes place. As it is older stars, undergoing magnetic braking, that are required to drive the distinction between stellar models, we conclude that our seismic sample can be assumed to be complete for the purposes of this work.\n\n\\section{Verifying consequences for gyrochronology}\n\\subsection{Limits of our stellar models}\\label{ssec:limits}\nThe rotational stellar models used in this work \\cite{vansaders+2019} are constructed for metallicities of $-0.4 < \\rm{[Fe/H]} < 0.4\\, \\rm{dex}$, in steps of $0.1\\, \\rm{dex}$. Our sample of 91 stars contained 4 stars with metallicities below $-0.4\\, \\rm{dex}$, which are shown as shaded symbols in Figure 3 in the main paper. Of these, 3 were included in our final sample of 73 stars used to evaluate our stellar models: KICs 7970740, 8684723 and 9965715. All three are classed as MS stars, with metallicities of $-0.54 \\pm 0.10$, $-0.42 \\pm 0.10$ and $-0.44 \\pm 0.18\\, \\rm{dex}$ respectively, placing them within $3\\sigma$ of the metallicity limits of our stellar models. KICs 8684723 and 9965715 strongly agree with the WMB model, whereas KIC 7970740 weakly prefers the standard model. Excluding these stars was not found to significantly alter the joint posterior distribution shown in Figure 4 of the main paper.\n\nA recent study \\cite{amard+matt2020} compared different rotational evolution models \\cite[of which we use the former in this work]{vansaders+pinsonneault2013, matt+2015} while studying the effect metallicity has on rotation. They found that metal-rich stars spin down significantly more effectively than metal-poor stars. The population of 73 stars used in our stellar model comparisons is roughly centered on a $\\rm{[Fe/H]}$ of $0\\, \\rm dex$, with a spread of $\\sim 0.16\\, \\rm dex$, and with no stars significantly above or below $\\pm 0.4\\, \\rm dex$ (as discussed above). While differences in stellar rotational evolution as a function of metallicity in this region are still somewhat pronounced, they are much less so than for more metal-rich or metal-poor stars \\cite[see Figure 2]{amard+matt2020}. However, for stars with $\\feh < 0$, the alternative model prescription \\cite{matt+2015} sees stars spin down more slowly than the models used in this work (i.e. they will rotate faster at later ages). This work only explores the presence of weakened magnetic braking for a single braking law, and a comparison to alternative braking models will be explored in a future paper.\n\nWhen constructing KDEs from our stellar model samples, we selected fixed resolutions (or band-widths) for the KDEs. In mass, the band-width of $0.02\\, M_\\odot$ was larger than the uncertainties of 24 of 73 stars used to construct our joint posterior. For the stars with the smallest uncertainties, this significantly limits the size of the KDE being evaluated (as subsections of the full stellar models are used to evaluate individual stars, for computational efficiency).
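\n\nA minimal sketch of such a fixed-band-width KDE in one dimension (our illustration, assuming scikit-learn; the actual analysis evaluates KDEs over multi-dimensional model samples):\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\nfrom sklearn.neighbors import KernelDensity\n\n# Hypothetical model samples in mass (solar units)\nrng = np.random.default_rng(0)\nmass_samples = rng.normal(1.0, 0.1, size=(5000, 1))\n\n# Fixed absolute band-width of 0.02 Msun\nkde = KernelDensity(kernel='gaussian', bandwidth=0.02).fit(mass_samples)\n\n# Log-density of the model evaluated at an observed mass\nlog_density = kde.score_samples(np.array([[1.02]]))\n\\end{lstlisting}\n\nWith a band-width of $0.02\\, M_\\odot$, observational uncertainties smaller than this are effectively smoothed over, which motivates the check below.\n\n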
In order to confirm that these stars do not significantly affect the ensemble's preference towards the WMB model, we recalculated the joint posterior distribution for $Q_{\\rm WMB}$, excluding stars with an uncertainty on mass smaller than the KDE band-width. While the 24 stars with small uncertainties do favour the WMB model, they do so very weakly, whereas the remaining 49 stars with larger uncertainties strongly favour the WMB model. Their removal from the total joint posterior probability does not significantly alter it from the distribution shown in Figure 4 of the main paper.\n\n\\subsection{Systematic uncertainties from asteroseismology}\nIn our model analysis, we used asteroseismic mass and age obtained using \\texttt{BASTA}, as reported in LEGACY and `Kages'. Asteroseismic properties obtained through stellar models can be subject to systematic errors, arising from differences in input physics and choice of stellar models, that are not included in the reported statistical uncertainties. A quantification of these different systematic effects can be found in the `Kages' catalogue paper \\cite{silvaaguirre+2015}. Combining their reported median systematic uncertainties due to input physics results in median uncertainties of $20\\%$ (up from $14\\%$) on age and $5\\%$ (up from $3\\%$) on mass for the `Kages' sample. For LEGACY, the median uncertainties are $18\\%$ (up from $10\\%$) and $5.6\\%$ (up from $4\\%$) for age and mass respectively.\n\nWe re-ran our model analysis after inflating uncertainties on mass and age. We increased uncertainties by the fractional difference between the \\texttt{BASTA} statistical uncertainties and the median full statistical and systematic uncertainties described above. For example, a LEGACY star with a mass of $2.0 \\pm 0.5\\, M_\\odot$ would have its uncertainty inflated by $1.6\\%$ of its mass, to $0.53\\, M_\\odot$.
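\n\nThis inflation is simple arithmetic; as a minimal check of the example above (our illustration):\n\n\\begin{lstlisting}[language=Python]\ndef inflate_uncertainty(value, stat_err, stat_frac, full_frac):\n    # Add the fractional difference between the statistical and the\n    # combined statistical+systematic median uncertainties.\n    return stat_err + (full_frac - stat_frac) * value\n\n# LEGACY mass: 4% -> 5.6% adds 1.6% of the mass to the error bar.\nassert abs(inflate_uncertainty(2.0, 0.5, 0.04, 0.056) - 0.532) < 1e-9\n\\end{lstlisting}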
\n\nTo further test the limits of this analysis, we also reran our mixture model fit, this time only shifting the asteroseismic ages younger by the systematic uncertainty, and retaining the statistical uncertainty (i.e. the ages of LEGACY and `Kages' stars were reduced by $8\\%$ and $6\\%$ respectively). In this scenario, where all asteroseismic ages are overestimated, `true' fast rotators at young ages would have been mistaken for fast rotators at old ages, suggesting the presence of weakened magnetic braking where none existed.\n\nThe results of these tests are discussed further in the main body of the paper.\n\n\\subsection{Binaries and Planet Hosts}\nFor stars to be good probes of existing gyrochronology relations, their rotational evolution must occur in isolation. If a star interacts with a close binary companion (through tides, or a merger), the natural angular momentum loss can be disturbed, causing gyrochronology to mispredict ages \\cite{leiner+2019, fleming+2019}. Between LEGACY and `Kages', we have 8 known binaries.\n\nFirst, KICs 8379927, 7510397, 10454113 and 9025370 are spectroscopic binaries. This does not affect the asteroseismic analysis, but may affect their rotational evolution. Of these, KICs 8379927, 7510397 and 9025370 were included in the gyrochronology analysis. None of them preferred one model strongly over the other, with all three finding flat posteriors for $Q_{\\rm WMB}$.\n\nSecond, the binary pairs KIC 9139151 \\& 9139163 and 16 Cyg A \\& B consist of individually observed components with wide orbital separations, so we do not expect their binarity to have affected their rotational evolution \\cite{halbwachs1986, white+2013}.\n\nWhile we chose not to account for star-planet tidal interactions in this work, we note that tidal forces may disturb the natural stellar rotational evolution \\cite{maxted+2015, gallet+delorme2019, benbakoura+2019}, although this has been disputed by observations of asteroseismic planet hosts \\cite{ceillier+2016}.\\\\\n\n%\\bibliographystyle{naturemag}\n%\\bibliography{library.bib} % if your bibtex file is called example.bib\n\\addcontentsline{toc}{section}{References}\n\\begin{thebibliography}{99}\n\t\\makeatletter\n\t\\addtocounter{\\@listctr}{78}\n\t\\makeatother\n\t\\expandafter\\ifx\\csname url\\endcsname\\relax\n\t\\def\\url#1{\\texttt{#1}}\\fi\n\t\\expandafter\\ifx\\csname urlprefix\\endcsname\\relax\\def\\urlprefix{URL }\\fi\n\t\\providecommand{\\bibinfo}[2]{#2}\n\t\\providecommand{\\eprint}[2][]{\\url{#2}}\n\t\n\\bibitem{hogg2012}\n\\bibinfo{author}{Hogg, D.~W.}\n\\newblock \\bibinfo{title}{Data analysis recipes: {{Probability}} calculus for\n\tinference}.\n\\newblock \\emph{\\bibinfo{journal}{arXiv e-prints}}\n\\bibinfo{pages}{arXiv:1205.4446} (\\bibinfo{year}{2012}).\n\n\\bibitem{davies+2014}\n\\bibinfo{author}{Davies, G.~R.}, \\bibinfo{author}{Chaplin, W.~J.},\n\\bibinfo{author}{Elsworth, Y.} \\& \\bibinfo{author}{Hale, S.~J.}\n\\newblock \\bibinfo{title}{{{BiSON}} data preparation: A correction for\n\tdifferential extinction and the weighted averaging of contemporaneous data}.\n\\newblock \\emph{\\bibinfo{journal}{Mon. Not. R. Astron. Soc.}}\n\\textbf{\\bibinfo{volume}{441}}, \\bibinfo{pages}{3009--3017}\n(\\bibinfo{year}{2014}).\n\n\\bibitem{appourchaux+2016}\n\\bibinfo{author}{Appourchaux, T.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{Oscillation mode linewidths and heights of 23\n\tmain-sequence stars observed by {{{\\emph{Kepler}}}}\n\t{\\emph{(}}{{{\\emph{Corrigendum}}}}{\\emph{)}}}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{595}}, \\bibinfo{pages}{C2} (\\bibinfo{year}{2016}).\n\n\\bibitem{rasmussen+williams2006}\n\\bibinfo{author}{Rasmussen, C.~E.} \\& \\bibinfo{author}{Williams, C. K.~I.}\n\\newblock \\emph{\\bibinfo{title}{Gaussian Processes for Machine Learning}}.\n\\newblock Adaptive Computation and Machine Learning (\\bibinfo{publisher}{{MIT\n\t\tPress}}, \\bibinfo{address}{{Cambridge, Mass}}, \\bibinfo{year}{2006}).\n\n\\bibitem{toutain+appourchaux1994}\n\\bibinfo{author}{Toutain, T.} \\& \\bibinfo{author}{Appourchaux, T.}\n\\newblock \\bibinfo{title}{Maximum likelihood estimators: {{An}} application to\n\tthe estimation of the precision of helioseismic measurements}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{289}}, \\bibinfo{pages}{649--658}\n(\\bibinfo{year}{1994}).\n\n\\bibitem{gizon+solanki2003}\n\\bibinfo{author}{Gizon, L.} \\& \\bibinfo{author}{Solanki, S.~K.}\n\\newblock \\bibinfo{title}{Determining the {{Inclination}} of the {{Rotation\n\t\t\tAxis}} of a {{Sun}}-like {{Star}}}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. 
J.}}\n\\textbf{\\bibinfo{volume}{589}}, \\bibinfo{pages}{1009} (\\bibinfo{year}{2003}).\n\n\\bibitem{handberg+campante2011}\n\\bibinfo{author}{Handberg, R.} \\& \\bibinfo{author}{Campante, T.~L.}\n\\newblock \\bibinfo{title}{Bayesian peak-bagging of solar-like oscillators using\n\t{{MCMC}}: A comprehensive guide}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{527}}, \\bibinfo{pages}{A56} (\\bibinfo{year}{2011}).\n\n\\bibitem{appourchaux+1998}\n\\bibinfo{author}{Appourchaux, T.}, \\bibinfo{author}{Gizon, L.} \\&\n\\bibinfo{author}{{Rabello-Soares}, M.-C.}\n\\newblock \\bibinfo{title}{The art of fitting p-mode spectra. {{I}}. {{Maximum}}\n\tlikelihood estimation}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys. Supplement Series}}\n\\textbf{\\bibinfo{volume}{132}}, \\bibinfo{pages}{107--119}\n(\\bibinfo{year}{1998}).\n\n\\bibitem{anderson+1990}\n\\bibinfo{author}{{Anderson}, E.~R.}, \\bibinfo{author}{{Duvall}, T.~L., Jr.}\n\\& \\bibinfo{author}{{Jefferies}, S.~M.}\n\\newblock \\bibinfo{title}{{Modeling of Solar Oscillation Power Spectra}}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. J.}}\n\\textbf{\\bibinfo{volume}{364}}, \\bibinfo{pages}{699} (\\bibinfo{year}{1990}).\n\n\\bibitem{gelman2006}\n\\bibinfo{author}{Gelman, A.}\n\\newblock \\bibinfo{title}{Prior distributions for variance parameters in\n\thierarchical models (comment on article by {{Browne}} and {{Draper}})}.\n\\newblock \\emph{\\bibinfo{journal}{Bayesian Analysis}}\n\\textbf{\\bibinfo{volume}{1}}, \\bibinfo{pages}{515--534}\n(\\bibinfo{year}{2006}).\n\n\\bibitem{ballot+2006}\n\\bibinfo{author}{Ballot, J.}, \\bibinfo{author}{Garcia, R.~A.} \\&\n\\bibinfo{author}{Lambert, P.}\n\\newblock \\bibinfo{title}{Rotation speed and stellar axis inclination from p\n\tmodes: How {{CoRoT}} would see other suns}.\n\\newblock \\emph{\\bibinfo{journal}{Mon. Not. R. Astron. Soc.}}\n\\textbf{\\bibinfo{volume}{369}}, \\bibinfo{pages}{1281--1286}\n(\\bibinfo{year}{2006}).\n\n\\bibitem{ballot+2008a}\n\\bibinfo{author}{Ballot, J.}, \\bibinfo{author}{Appourchaux, T.},\n\\bibinfo{author}{Toutain, T.} \\& \\bibinfo{author}{Guittet, M.}\n\\newblock \\bibinfo{title}{On deriving p-mode parameters for inclined solar-like\n\tstars}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{486}}, \\bibinfo{pages}{867--875}\n(\\bibinfo{year}{2008}).\n\\newblock \\eprint{0803.0885}.\n\n\\bibitem{kass+raftery1995}\n\\bibinfo{author}{Kass, R.~E.} \\& \\bibinfo{author}{Raftery, A.~E.}\n\\newblock \\bibinfo{title}{Bayes {{Factors}}}.\n\\newblock \\emph{\\bibinfo{journal}{Journal of the American Statistical\n\t\tAssociation}} \\textbf{\\bibinfo{volume}{90}}, \\bibinfo{pages}{773--795}\n(\\bibinfo{year}{1995}).\n\n\\bibitem{beck2000}\n\\bibinfo{author}{Beck, J.~G.}\n\\newblock \\bibinfo{title}{A comparison of differential rotation measurements -\n\t({{Invited Review}})}.\n\\newblock \\emph{\\bibinfo{journal}{Solar Physics}}\n\\textbf{\\bibinfo{volume}{191}}, \\bibinfo{pages}{47--70}\n(\\bibinfo{year}{2000}).\n\n\\bibitem{doyle+2014}\n\\bibinfo{author}{Doyle, A.~P.}, \\bibinfo{author}{Davies, G.~R.},\n\\bibinfo{author}{Smalley, B.}, \\bibinfo{author}{Chaplin, W.~J.} \\&\n\\bibinfo{author}{Elsworth, Y.}\n\\newblock \\bibinfo{title}{Determining stellar macroturbulence using\n\tasteroseismic rotational velocities from {{Kepler}}}.\n\\newblock \\emph{\\bibinfo{journal}{Mon. Not. R. Astron. 
Soc.}}\n\\textbf{\\bibinfo{volume}{444}}, \\bibinfo{pages}{3592--3602}\n(\\bibinfo{year}{2014}).\n\n\\bibitem{tayar+2015}\n\\bibinfo{author}{Tayar, J.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{Rapid {{Rotation}} of {{Low}}-mass {{Red Giants Using\n\t\t\tAPOKASC}}: {{A Measure}} of {{Interaction Rates}} on the\n\t{{Post}}-main-sequence}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. J.}}\n\\textbf{\\bibinfo{volume}{807}}, \\bibinfo{pages}{82} (\\bibinfo{year}{2015}).\n\n\\bibitem{schofield+2019}\n\\bibinfo{author}{Schofield, M.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{The {{Asteroseismic Target List}} for {{Solar}}-like\n\t{{Oscillators Observed}} in 2 minute {{Cadence}} with the {{Transiting\n\t\t\tExoplanet Survey Satellite}}}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. J. Supplement Series}}\n\\textbf{\\bibinfo{volume}{241}}, \\bibinfo{pages}{12} (\\bibinfo{year}{2019}).\n\n\\bibitem{mathur+2019}\n\\bibinfo{author}{{Mathur}, S.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{{Revisiting the impact of stellar magnetic activity\n\t\ton the detection of solar-like oscillations by Kepler}}.\n\\newblock \\emph{\\bibinfo{journal}{Frontiers in Astronomy and Space Sciences}}\n\\textbf{\\bibinfo{volume}{6}}, \\bibinfo{pages}{46} (\\bibinfo{year}{2019}).\n\\newblock \\eprint{1907.01415}.\n\n\\bibitem{amard+matt2020}\n\\bibinfo{author}{{Amard}, L.} \\& \\bibinfo{author}{{Matt}, S.~P.}\n\\newblock \\bibinfo{title}{{The Impact of Metallicity on the Evolution of the\n\t\tRotation and Magnetic Activity of Sun-like Stars}}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. J.}}\n\\textbf{\\bibinfo{volume}{889}}, \\bibinfo{pages}{108} (\\bibinfo{year}{2020}).\n\\newblock \\eprint{2001.10404}.\n\n\\bibitem{fleming+2019}\n\\bibinfo{author}{Fleming, D.~P.}, \\bibinfo{author}{Barnes, R.},\n\\bibinfo{author}{Davenport, J. R.~A.} \\& \\bibinfo{author}{Luger, R.}\n\\newblock \\bibinfo{title}{Rotation {{Period Evolution}} in {{Low}}-mass\n\t{{Binary Stars}}: {{The Impact}} of {{Tidal Torques}} and {{Magnetic\n\t\t\tBraking}}}.\n\\newblock \\emph{\\bibinfo{journal}{Astrophys. J.}}\n\\textbf{\\bibinfo{volume}{881}}, \\bibinfo{pages}{88} (\\bibinfo{year}{2019}).\n\n\\bibitem{halbwachs1986}\n\\bibinfo{author}{{Halbwachs}, J.~L.}\n\\newblock \\bibinfo{title}{{Common proper motion stars in the AGK 3.}}\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{66}}, \\bibinfo{pages}{131--148}\n(\\bibinfo{year}{1986}).\n\n\\bibitem{white+2013}\n\\bibinfo{author}{{White}, T.~R.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{{Interferometric radii of bright Kepler stars with\n\t\tthe CHARA Array: {\\ensuremath{\\theta}} Cygni and 16 Cygni A and B}}.\n\\newblock \\emph{\\bibinfo{journal}{Mon. Not. R. Astron. Soc.}}\n\\textbf{\\bibinfo{volume}{433}}, \\bibinfo{pages}{1262--1270}\n(\\bibinfo{year}{2013}).\n\\newblock \\eprint{1305.1934}.\n\n\\bibitem{maxted+2015}\n\\bibinfo{author}{Maxted, P. F.~L.}, \\bibinfo{author}{Serenelli, A.~M.} \\&\n\\bibinfo{author}{Southworth, J.}\n\\newblock \\bibinfo{title}{Comparison of gyrochronological and isochronal age\n\testimates for transiting exoplanet host stars}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{577}}, \\bibinfo{pages}{A90} (\\bibinfo{year}{2015}).\n\n\\bibitem{gallet+delorme2019}\n\\bibinfo{author}{{Gallet}, F.} \\& \\bibinfo{author}{{Delorme}, P.}\n\\newblock \\bibinfo{title}{{Star-planet tidal interaction and the limits of\n\t\tgyrochronology}}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. 
Astrophys.}}\n\\textbf{\\bibinfo{volume}{626}}, \\bibinfo{pages}{A120} (\\bibinfo{year}{2019}).\n\\newblock \\eprint{1905.06070}.\n\n\\bibitem{benbakoura+2019}\n\\bibinfo{author}{Benbakoura, M.}, \\bibinfo{author}{R{\\'e}ville, V.},\n\\bibinfo{author}{Brun, A.~S.}, \\bibinfo{author}{{Le Poncin-Lafitte}, C.} \\&\n\\bibinfo{author}{Mathis, S.}\n\\newblock \\bibinfo{title}{Evolution of star-planet systems under magnetic\n\tbraking and tidal interaction}.\n\\newblock \\emph{\\bibinfo{journal}{Astron. Astrophys.}}\n\\textbf{\\bibinfo{volume}{621}}, \\bibinfo{pages}{A124} (\\bibinfo{year}{2019}).\n\n\\bibitem{ceillier+2016}\n\\bibinfo{author}{Ceillier, T.} \\emph{et~al.}\n\\newblock \\bibinfo{title}{Rotation periods and seismic ages of {{KOIs}} -\n\tcomparison with stars without detected planets from {{Kepler}} observations}.\n\\newblock \\emph{\\bibinfo{journal}{Mon. Not. R. Astron. Soc.}}\n\\textbf{\\bibinfo{volume}{456}}, \\bibinfo{pages}{119--125}\n(\\bibinfo{year}{2016}).\n\t\n\\end{thebibliography}\n\n\n\\input{tabulate.tex}\n\n\n%\\end{document}", "meta": {"hexsha": "5aef4e9718ff8119b186ea3152033767545270d9", "size": 60552, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/natastron/ArXiV/si.tex", "max_stars_repo_name": "ojhall94/malatium", "max_stars_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-25T07:45:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-25T07:45:20.000Z", "max_issues_repo_path": "paper/natastron/ArXiV/si.tex", "max_issues_repo_name": "ojhall94/malatium", "max_issues_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/natastron/ArXiV/si.tex", "max_forks_repo_name": "ojhall94/malatium", "max_forks_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-19T09:38:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T09:38:35.000Z", "avg_line_length": 102.6305084746, "max_line_length": 1375, "alphanum_fraction": 0.7591821905, "num_tokens": 16551, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9334308147331957, "lm_q2_score": 0.6406358411176238, "lm_q1q2_score": 0.5979892351217098}}
{"text": "\\section{Neural Networks} \\label{ch:neural_networks}\n\nAs already mentioned, the name Neural Network has become widely accepted, although it is neither neural nor a network. % TODO citation needed (chollet)\nThey are used to create more complex mappings than the previous models could.\nIn the following we will discuss the motivation behind this concept and how networks are trained.\n\n\\subsection{Deep learning} \\label{ch:deep_learning}\n\nAlthough the universal approximation theorem shows that all continuous functions can be described with only one layer, in practice it is much more reasonable to add several layers to our model to keep the required resources low.\nRolnick and Tegmark \\cite{Rolnick2017} show that in single-layer models the number of neurons grows exponentially with the number of input variables, whereas in multi-layer neural networks it grows only linearly.\n\n% TODO add some clues for notation\n\\begin{figure}\n    \\centering\n    \\caption[Neural Network]{ A neural network with three input nodes, two hidden layers with five nodes each and an output layer with three nodes. The bias is added to each layer by prepending a node with value one. Every node is, as in the previous example, calculated as the sum of the products of the previous layer's values and their respective weights.  }\n    \\includegraphics[width=0.6\\textwidth]{images/2_nn_with_bias.png}\n    \\label{fig:nn}\n\\end{figure}\n\nEach node in figure~\\ref{fig:nn} can be addressed using the notation $z^l_i$.\n$l$ is the number of the layer in which the node is located and $i$ is the index of the node.\nEach weight $\\theta^l_{i, j}$ can also be addressed: $l$ is the layer to which the weight refers, $i$ is the index of the node in layer $l-1$ whose value is multiplied by the weight, and $j$ is the index of the node in layer $l$ whose value it influences.\nSo we can describe each node as the result of the following equation\\footnote{In literature the bias is often added separately.\nAccording to the convention of prefixing the input vector $x$ with a 1, it is always included as index 0 of each layer.\nPlease note that, mathematically, this is the same.}:\n\n\\begin{equation}\n    z^l_j = \\sum^n_{i=0}z^{l-1}_i\\theta^l_{i, j}\n    \\label{eq:node_value}\n\\end{equation}\n\n$n$ is the number of nodes in the respective layer. Equation~\\eqref{eq:node_value} can be vectorized to:\n\n\\begin{equation}\n    z^l = \\theta^l z^{l-1}\n    \\label{eq:node_value_vectorized}\n\\end{equation}
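\n\nAs a minimal sketch of equations~\\eqref{eq:node_value} and \\eqref{eq:node_value_vectorized} (our illustration; the bias convention of prepending a 1 is taken from the text, and activation functions are introduced in the next section):\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef forward(thetas, x):\n    # thetas[l] has one column per node of layer l-1, plus a bias column\n    z = np.asarray(x, dtype=float)\n    for theta in thetas:\n        z = np.concatenate(([1.0], z))  # prepend the bias node with value 1\n        z = theta @ z                   # z^l = theta^l z^{l-1}\n    return z\n\\end{lstlisting}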
\n\n\\subsection{Activation functions}\n\nOne problem with the values of the individual cells is that they can be of any size. Numerically, this is very difficult to control, so the nodes themselves are first given to another function; this function is aptly called an activation function.\n\nActivation functions are used in different ways and introduce further parameters that can be adjusted to increase model performance.\nPreviously we implicitly assumed the linear function shown in figure~\\ref{fig:activation_linear} with the mapping $g(a) = a$.\nThe logistic sigmoid is often used in practice, because it maps arbitrary numerical input into the interval $[0,1]$.\nThe rectified linear unit is much cheaper computationally and is recommended for use with most feedforward neural networks \\cite[p.169]{Goodfellow2017}\\cite{Glorot2011}.\n\n\\begin{figure}\n    \\centering\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\includegraphics[width=\\textwidth]{images/4_linear.png}\n        \\caption{Linear}\n        \\label{fig:activation_linear}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\includegraphics[width=\\textwidth]{images/4_sigmoid.png}\n        \\caption{Logistic sigmoid}\n        \\label{fig:activation_sigmoid}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\includegraphics[width=\\textwidth]{images/4_relu.png}\n        \\caption{ReLU}\n        \\label{fig:activation_relu}\n    \\end{subfigure}\n    \\caption{Activation functions}\n    \\label{fig:activation}\n\\end{figure}\n\nMathematical representations are shown in equation~\\eqref{eq:sigmoid_relu}.\n\n\\begin{equation}\n    \\begin{split}\n        \\text{Sigmoid: } f(x) & = \\frac{1}{1 + \\exp(-x)} \\\\ \\text{ReLU: } f(x) & = \\max(0, x)\n    \\end{split}\n    \\label{eq:sigmoid_relu}\n\\end{equation}\n\nThe use of the activation functions can also vary; for example, it is common for the logistic sigmoid to have a threshold, so that the node does not fire at all if a certain value (e.g. 0.5) is not reached.\n\nWith the introduction of activation functions, equations~\\eqref{eq:node_value} and \\eqref{eq:node_value_vectorized} become:\n\n\\begin{equation}\n    a^l_j = \\sigma(z^l_j) = \\sigma(\\sum^n_{i=0}z^{l-1}_i\\theta^l_{i, j})\n    \\label{eq:activation}\n\\end{equation}\n\nWith the corresponding vectorized equation:\n\n\\begin{equation}\n    a^l = \\sigma(z^l) = \\sigma(\\theta^l z^{l-1})\n    \\label{eq:activation_vectorized}\n\\end{equation}\n\nThere are a variety of different activation functions, these two being the most prominent.\nThe selection of the activation function and possibly a corresponding threshold value is another hyperparameter that must be selected before the training.
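\n\nA minimal sketch of the two activation functions in equation~\\eqref{eq:sigmoid_relu} (our illustration):\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sigmoid(x):\n    # maps any real input into the interval (0, 1)\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef relu(x):\n    # max(0, x), computationally cheap\n    return np.maximum(0.0, x)\n\\end{lstlisting}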
\n\n\\subsection{Backpropagation}\n\nWith gradient descent~\\eqref{eq:gradient_descent} alone, not all parameters in all layers can be updated: it only updates the parameters of the last layer, which it treats as independent of the previous layers.\nSince the last layer is a function of the previous layer, the chain rule can be applied to propagate the calculated error of the last layer back to the preceding one and to update its weights and biases. This applies to every layer, except the input layer (which does not need to be updated in the end).\nThe following equations are derived using \\cite[p.733]{StuartRussell2018}, \\cite[p.197]{Goodfellow2017} and \\cite[ch.2]{Nielsen2015}:\n\n\\begin{equation}\n    \\varDelta^L = \\nabla_a L \\odot \\sigma'(z^L)\n    \\label{eq:output_error}\n\\end{equation}\n\\begin{equation}\n    \\varDelta^l = ((\\theta^{l+1})^T \\varDelta^{l+1}) \\odot \\sigma'(z^l)\n    \\label{eq:hidden_error}\n\\end{equation}\n\\begin{equation}\n    \\theta_{i+1} := \\theta_i - \\eta \\varDelta\n    \\label{eq:backprop_update}\n\\end{equation}\n\nEquation~\\eqref{eq:output_error} describes the error calculated in the output layer. We define our network with $L$ layers, so the output layer is always described by the index $L$.\n$\\odot$ is the Hadamard operator. It is used to multiply tensors element by element, which is useful in this case as it prevents the vectors from having to be transformed into diagonal matrices.\nEquation~\\eqref{eq:output_error} is a direct result of applying the multivariate chain rule to the derivative of the loss function performed in the last layer.\nAs mentioned above, the gradient descent is performed by the derivative of the loss function in equation~\\eqref{eq:sgd_mse_here}.\nSince an activation function sits between the output of the previous layer and the input used in the loss function, the chain rule must also be applied to it.\n\n\\begin{equation}\n    \\frac{\\partial L}{\\partial z^L_j} = \\frac{\\partial L}{\\partial a^L_j}\\frac{\\partial a^L_j}{\\partial z^L_j}\n    \\label{eq:proof_loss_chain_rule}\n\\end{equation}\n\nBy pointing out that $\\frac{\\partial L}{\\partial a^L_j}$ is only a non-vectorized form of $\\nabla_a L$ and $\\frac{\\partial a^L_j}{\\partial z^L_j}$ of $\\sigma'(z^L)$, the equivalence becomes clear.\n\nEquation~\\eqref{eq:hidden_error} can be rewritten as follows:\n\n\\begin{equation}\n    \\varDelta^l_i = \\frac{\\partial L}{\\partial z^l_i} = \\sum_{j=0}^n (\\frac{\\partial L}{\\partial z^{l+1}_j}\\frac{\\partial z^{l+1}_j}{\\partial z^l_i})\n\\end{equation}\n\n$n$ is the number of nodes in the next layer. 
From $\\frac{\\partial L}{\\partial z^{l+1}_i} = \\varDelta^{l+1}_i$ follows:\n\n\\begin{equation}\n    \\frac{\\partial L}{\\partial z^l_i} = \\sum_{j=0}^n (\\varDelta^{l+1}_j \\frac{\\partial z^{l+1}_j}{\\partial z^{l}_i})\n    \\label{eq:hidden_error_intermediate}\n\\end{equation}\n\nThe following equation is also known:\n\n\\begin{equation}\n    \\begin{split}\n    z^{l+1}_j & = \\sum_{i=0}^n \\theta^{l+1}_{i,j} a^l_i = \\sum_{i=0}^n \\theta^{l+1}_{i, j} \\sigma(z^l_i) \\\\\n    \\frac{\\partial z_j^{l+1}}{\\partial z_i^l} & = \\theta^{l+1}_{i,j} \\sigma'(z^l_i)\n    \\end{split}\n\\end{equation}\n\nSo we can substitute the terms in equation~\\eqref{eq:hidden_error_intermediate}:\n\n\\begin{equation}\n    \\frac{\\partial L}{\\partial z^l_i} = \\sum_{j=0}^n (\\varDelta^{l+1}_j \\theta^{l+1}_{i,j} \\sigma'(z^l_i))\n\\end{equation}\n\nSince $\\sigma'(z^l_i)$ does not depend on $j$, it can be pulled out of the sum; this is again just a non-vectorized form of equation~\\eqref{eq:hidden_error}.\n\nEquation~\\eqref{eq:hidden_error} is then executed on each layer until $l=0$ is reached and a $\\varDelta$ is calculated for each parameter in $\\theta$; these are applied by equation~\\eqref{eq:backprop_update}.\nThis type of backward propagation gave this algorithm the prominent name of ``backpropagation'' (short for ``backward propagation of error'' \\cite{Rumelhart1986}).
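\n\nPutting the backward pass together, a minimal sketch of one update step (our illustration, assuming the sigmoid activation from above and a mean-squared-error loss, with learning rate $\\eta$):\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef sigmoid_prime(x):\n    s = sigmoid(x)\n    return s * (1.0 - s)\n\ndef backprop_step(thetas, x, y, eta=0.1):\n    # Forward pass, storing pre-activations z^l and activations a^l.\n    a = np.concatenate(([1.0], x))  # input layer with bias node\n    activations, zs = [a], []\n    for theta in thetas:\n        z = theta @ activations[-1]\n        zs.append(z)\n        activations.append(np.concatenate(([1.0], sigmoid(z))))\n    # Output error: Delta^L = grad_a L (.) sigma'(z^L); for MSE,\n    # grad_a L = a^L - y.\n    delta = (activations[-1][1:] - y) * sigmoid_prime(zs[-1])\n    # Backward pass: Delta^l = ((theta^{l+1})^T Delta^{l+1}) (.) sigma'(z^l),\n    # followed by the gradient-descent update of each theta^l.\n    for l in range(len(thetas) - 1, -1, -1):\n        grad = np.outer(delta, activations[l])  # dL/dtheta^l\n        if l > 0:\n            delta = (thetas[l].T @ delta)[1:] * sigmoid_prime(zs[l - 1])\n        thetas[l] = thetas[l] - eta * grad\n    return thetas\n\\end{lstlisting}\n\nFor a single hidden layer, \\texttt{thetas} would contain two matrices of shapes $(n_1, n_0{+}1)$ and $(n_2, n_1{+}1)$, following the bias convention above.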
\n\n\\subsection{Generalization}\n\nThe task of any algorithm for machine learning is to generalize.\nAfter the model has been trained with data, it should also achieve good results with data it has never seen before.\nThis distinguishes machine learning from optimization problems (which could be solved with the Newton-Raphson method, for example).\n\nHowever, generalization is not guaranteed. In particular, the phenomena of over- and underfitting pose problems.\nFor example, a model that has been trained for a classification task might make correct predictions for training examples, but not work reliably with new data (overfitting).\nThis indicates that the model has, so to speak, memorized the training examples.\nThere are some things which can be done to solve this issue.\nGiving the model more data for training is a solution shown by Banko and Brill \\cite{Banko2001} and commented on by Halevy, Norvig and Pereira in their article ``The Unreasonable Effectiveness of Data'' \\cite{Halevy2009}.\nAdding a lot of data is not always possible because generating data can be costly, so some compromises may have to be made to solve this problem.\n\nAnother possibility is to reduce the number of features in the training data; although seemingly unintuitive at first glance, an abundance of features does not necessarily add much information. For example, to recognize a handwritten digit, many more parameters have to be adjusted if a high-resolution image with hundreds of thousands of pixels is fed into the model, where a pixel count of less than a thousand would already be sufficient \\cite{Nielsen2015}.\n\nOn the other hand, underfitting is as problematic as overfitting.\nIf the model already fails to achieve sufficient results on the training data, the problem is called underfitting.\nIf this is the case, it is likely that the model or its parameters will need to be adjusted, or that the training data is unsuitable for the problem.\nFurthermore, for equations~\\eqref{eq:universal_approx} and \\eqref{eq:hypothesis} to be true for a reasonable model, a rule of thumb is that the mapping from the input to the expected output should at least be comprehensible to a human.\n\nIn order to train a model for the recognition of mechanical symbols using these techniques, the corresponding data must first be generated.\n", "meta": {"hexsha": "c7e63adb9ffec66452bbb791c30d7fd47017e6d5", "size": 11195, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/srp/sections/neural_networks.tex", "max_stars_repo_name": "klawr/deepmech", "max_stars_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-17T12:27:06.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-17T12:27:06.000Z", "max_issues_repo_path": "reports/srp/sections/neural_networks.tex", "max_issues_repo_name": "klawr/deepmech", "max_issues_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-27T13:13:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-27T13:13:17.000Z", "max_forks_repo_path": "reports/srp/sections/neural_networks.tex", "max_forks_repo_name": "klawr/deepmech", "max_forks_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.174863388, "max_line_length": 474, "alphanum_fraction": 0.7563197856, "num_tokens": 2941, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.798186787341014, "lm_q2_score": 0.7490872187162396, "lm_q1q2_score": 0.5979115205453308}}
{"text": "\\title{Probabilistic Decoder}\n\n\\subsection{Probabilistic Decoder}\n\nA probabilistic decoder is a reinterpretation of model likelihoods\nbased on coding theory. It is a distribution $p(\\mathbf{x}_n\\mid \\mathbf{z}_n)$ over each value\n$\\mathbf{x}_n\\in\\mathbb{R}^D$ given a code $\\mathbf{z}_n$. The latent\nvariables $\\mathbf{z}_n$ are interpreted as the hidden representation, or code, of the value\n$\\mathbf{x}_n$. The decoder is probabilistic because its generated\nvalues (decodings) for any given code are random\n\\citep{dayan1995helmholtz}.\n\n\\includegraphics[width=250px]{/images/decoder.png}\n\n{\\small\\textit{Graphical model of a probabilistic decoder, with model\nparameters $\\theta$.}}\n\nFor real-valued data,\nthe randomness in the decoder is given by a multivariate Gaussian\n\\begin{align*}\n  p(\\mathbf{x}_n\\mid\\mathbf{z}_n)\n  &=\n  \\text{Normal}(\\mathbf{x}_n\\mid [\\mu,\\sigma^2]=\\mathrm{NN}(\\mathbf{z}_n; \\mathbf{\\theta})),\n\\end{align*}\nwhere the probabilistic decoder is parameterized by a neural network\n$\\mathrm{NN}$ taking the code $\\mathbf{z}_n$ as input.\n\nFor binary data,\nthe randomness in the decoder is given by a Bernoulli\n\\begin{align*}\n  p(\\mathbf{x}_n\\mid\\mathbf{z}_n)\n  &=\n  \\text{Bernoulli}(\\mathbf{x}_n\\mid p=\\mathrm{NN}(\\mathbf{z}_n; \\mathbf{\\theta})).\n\\end{align*}\nProbabilistic decoders are typically used alongside a standard normal\nprior over the code\n\\begin{align*}\n  p(\\mathbf{z})\n  &=\n  \\text{Normal}(\\mathbf{z} \\mid \\mathbf{0}, \\mathbf{I}).\n\\end{align*}\n\nLet's build the model in Edward using\nKeras as an easy way to build neural networks. Here we use a\nprobabilistic decoder to model binarized 28 x 28\npixel images from MNIST.\n\\begin{lstlisting}[language=Python]\nimport tensorflow as tf\nfrom edward.models import Bernoulli, Normal\nfrom keras.layers import Dense\n\n# N data points and d latent dimensions are defined elsewhere\nz = Normal(loc=tf.zeros([N, d]), scale=tf.ones([N, d]))\nhidden = Dense(256, activation='relu')(z)\nx = Bernoulli(logits=Dense(28 * 28)(hidden))\n\\end{lstlisting}\nIt starts with a $d$-dimensional standard normal prior, one for each\ndata point. Then it applies a layer of 256 hidden units with\n\\texttt{relu} nonlinearity. The output layer is $28\\cdot 28$ units\nwithout a nonlinearity. This output\nparameterizes the Bernoulli's logits. Logits are a more numerically stable\nparameterization than probabilities constrained\nbetween 0 and 1.
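\n\nFor the real-valued case above, a sketch of the analogous Gaussian decoder (our illustration, not from the tutorial; $N$, $d$ and the data dimension $D$ are assumed to be defined, and the softplus output keeps the scale positive):\n\n\\begin{lstlisting}[language=Python]\nimport tensorflow as tf\nfrom edward.models import Normal\nfrom keras.layers import Dense\n\nz = Normal(loc=tf.zeros([N, d]), scale=tf.ones([N, d]))\nhidden = Dense(256, activation='relu')(z)\nloc = Dense(D)(hidden)                           # mean of the Gaussian\nscale = Dense(D, activation='softplus')(hidden)  # positive std. deviation\nx = Normal(loc=loc, scale=scale)\n\\end{lstlisting}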
\n\nAn example script using this model can be found at\n\\href{https://github.com/blei-lab/edward/blob/master/examples/vae.py}\n{\\texttt{examples/vae.py}} in the Github repository.\nAn example with a convolutional architecture can be found at\n\\href{https://github.com/blei-lab/edward/blob/master/examples/vae_convolutional_prettytensor.py}\n{\\texttt{examples/vae_convolutional_prettytensor.py}} in the Github repository.\n%We experiment with this model in the\n%\\href{/tutorials/variational-autoencoder}{variational auto-encoder} tutorial.\n\n\\subsubsection{Footnotes}\n\nThe neural network which parameterizes the probabilistic decoder is\nalso known as a generative network. It is in analogy to an\n\\href{/tutorials/inference-networks}{inference network}, which\ncan parameterize a variational model used for inference,\ninterpreted as a probabilistic encoder.\n\nTraditionally, a probabilistic encoder is the most common\nchoice of inference. This led to the coinage of the model-inference\ncombination known as the variational auto-encoder\n\\citep{kingma2014auto}, which is a probabilistic extension of\nauto-encoders.\nWe recommend against this terminology,\nin favor of making explicit the separation of model and inference.\nThat is, probabilistic decoders are a general class of\nmodels that can be used without an encoder.\nVariational\ninference is not necessary to infer probabilistic decoders, and\nvariational inference can also be done without an inference network.\n\n\\subsubsection{References}\\label{references}\n\n", "meta": {"hexsha": "bdc82dce255b1be91f8009ba2156336f7567c255", "size": 3786, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/tutorials/decoder.tex", "max_stars_repo_name": "xiangze/edward", "max_stars_repo_head_hexsha": "6419751d1d849c84c502e5ff3f7249b9bbc7b3aa", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-11T03:33:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-11T03:33:36.000Z", "max_issues_repo_path": "docs/tex/tutorials/decoder.tex", "max_issues_repo_name": "xiangze/edward", "max_issues_repo_head_hexsha": "6419751d1d849c84c502e5ff3f7249b9bbc7b3aa", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tex/tutorials/decoder.tex", "max_forks_repo_name": "xiangze/edward", "max_forks_repo_head_hexsha": "6419751d1d849c84c502e5ff3f7249b9bbc7b3aa", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-12-22T08:21:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-16T02:45:04.000Z", "avg_line_length": 39.8526315789, "max_line_length": 96, "alphanum_fraction": 0.7783940835, "num_tokens": 1040, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7981867825403177, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5979115080070846}}
{"text": "\\textit{\nProve that the topology of $C(\\Omega)$ does not depend on the particular \nchoice of $\\singleton{K_n}$, as long as this sequence satisfies the conditions \nspecified in section 1.44. Do the same for $C^\\infty(\\Omega)$ (Section 1.46).}\n%\n\\paragraph{Comment}This is an invariance property: \nThe test function topology only depends on the existence of the \nsupremum-seminorms $p_n$, and hence, ultimately, \nonly on the ambient space itself. \nThis should be regarded as an integral part of the textbook \\cite{FA}.\n%\nThe proof consists in combining trivial consequences of the local base \ndefinition with a well-known result (\\eg [2.6] in \\cite{BigRudin}) \nabout intersection of nonempty compact sets. \n\n\\paragraph{Lemma 1} {\\it %\nLet $X$ be a topological space with a countable local base %\n$\\mathit{\\set{V_n}{\\counting{n}}}$. \nIf \n%\n  $\\mathit{\\tilde{V}_{n} = V_1 \\cuts \\cdots \\cuts V_n}$, \n%\nthen every subsequence \n% \n  $\\mathit{\\singleton{\\tilde{V}_{\\rho(n)}}}$ \n%\nis a decreasing (\\ie \n%\n  $\\mathit{\\tilde{V}_{\\rho(n)} \\contains \\tilde{V}_{\\rho(n+1)}}$)\n%\nlocal base of $\\mathit{X}$.\n}\n%\n\\begin{proof}\nThe decreasing property is trivial. Now remark that \n%\n  $V_n \\contains \\tilde{V}_{n}$:\n%\nThis shows that \n%\n  $\\singleton{\\tilde{V}_{n}}$ \n% \nis a local base of $X$. Then so is \n%\n  $\\singleton{\\tilde{V}_{\\rho(n)}}$,\n% \nsince $\\tilde{V}_{n} \\contains \\tilde{V}_{\\rho(n)}$.\n\\end{proof}\n%\n\\noindent The following special case \n%\n  $V_{n} = \\tilde{V}_{n}$ \n% \nis one of the key ingredients:\n%: COROLLARY 1 OF LEMMA 1-----------------------------------------------------%\n\\paragraph{Corollary 1 (special case $V_{n} = \\tilde{V}_{n}$)}\n{\\it Under the same notations of Lemma 1, if $\\mathit{\\singleton{V_{n}}}$ %\nis a decreasing local base, then so is $\\mathit{\\singleton{V_{\\rho(n)}}}$.}\n%\n%: COROLLARY 2 OF LEMMA 1-----------------------------------------------------%\n\\paragraph{Corollary 2}{\\it %\nIf \n%\n  $\\mathit{\\singleton{Q_n}}$ \n%\nis a sequence of compact sets that satisfies the conditions specified \nin section 1.44, then every subsequence \n%\n  $\\mathit{\\singleton{Q_{\\rho(n)}}}$ \n%\nalso satisfies these conditions.\n%\nFurthermore, if $\\mathit{\\tau_{Q}}$ is the $\\mathit{C(\\Omega)}$'s \n(respectively $\\mathit{C^\\infty (\\Omega)}$'s) topology of the seminorms %\n$\\mathit{p_{n}}$, \nas defined in section 1.44 (respectively 1.46), then the seminorms \n%\n  $\\mathit{p_{\\rho(n)}}$ \n%\ndefine the same topology $\\mathit{\\tau_{Q}}$. %\n}\n%\n\\begin{proof}%\n%\nLet $X$ be $C(\\Omega)$ topologized by the seminorms $p_{n}$ \n(the case $X=C^\\infty(\\Omega)$ is proved the same way).\n%\nIf \n  %\n    $V_{n} = \\singleton{p_{n} < 1/n}$, \n  %\nthen \n  %\n    $\\singleton{V_{n}}$ \n  %\nis a decreasing local base of $X$.\n%\nMoreover,\n% \n  \\begin{align}\n    Q_{\\rho(n)} \n      \\subset \n    \\interior{Q}_{\\rho(n) + 1} \n      \\subset \n    Q_{\\rho(n) + 1} \n      \\subset \n    Q_{\\rho(n+1)}.\n  \\end{align}\n% \nThus,\n%\n  \\begin{align}\n    Q_{\\rho(n)} \n      \\subset \n    \\interior{Q}_{\\rho(n+1)}.\n  \\end{align}\n%\nIn other words, \n%\n  $Q_{\\rho(n)}$ satisfies the conditions specified in section 1.44.\n%\n%\n  $\\singleton{p_{\\rho(n)}}$\n% \nthen defines a topology $\\tau_{Q_\\rho}$ for which  \n% \n  $\\singleton{V_{\\rho(n)}}$ \n%\nis a local base. 
So, \n% \n  $\\tau_{Q_\\rho} \\subset \\tau_{Q}$.\n%\nConversely, the above corollary asserts that \n%\n  $\\singleton{V_{\\rho(n)}}$ \n%\nis a local base of $\\tau_{Q}$, which yields  \n%\n  $\\tau_{Q}\\subset \\tau_{Q_\\rho}$.\n%\n\\end{proof}\n%: LEMMA 2 -------------------------------------------------------------------%\n\\paragraph{Lemma 2}{ \\it \\label{1.16 Lemma 2}\nIf a sequence of compact sets $\\mathit{\\singleton{Q_n}}$ satisfies the conditions \nspecified in section 1.44, then every compact set $K$ lies in almost all \n%\n  $\\mathit{Q^{\\,\\circ}_n}$, \\ie\nthere exists $m$ such that \n%\n  \\begin{align}\\mathit{\n    K \\subset \n    \\interior{Q}_m \n      \\subset \n    \\interior{Q}_{m+1}\n      \\subset\n    \\interior{Q}_{m+2}\n      \\subset\n    \\cdots.}\n  \\end{align}\n}\n%\n\\begin{proof}\nThe following definition\n%\n  \\begin{align}\n    C_n \\Def K \\setminus \\interior{Q}_n \n  \\end{align}\n%\nshapes $\\singleton{C_n}$ as a decreasing sequence of compact\\footnote{\n  See (b) of 2.5 of \\cite{BigRudin}.\n} \nsets. We now suppose (to reach a contradiction) that \n% \n  no $C_n$ is empty \n% \nand so conclude\\footnote{\n  In every Hausdorff space, the intersection of a decreasing sequence of %\n  nonempty compact sets is nonempty. %\n  This is a corollary of 2.6 of \\cite{BigRudin}.\n} \nthat the $C_n$'s intersection contains a point that is not in any $Q^\\circ_n$. \nOn the other hand, the conditions specified in [1.44] force the \n% \n  $Q^\\circ_n$'s collection  \n%\nto be an open cover.\n% \nThis contradiction reveals that \n%\n  $C_m = \\emptyset$, \n    \\ie \n  $K \\subset Q^\\circ_m$, \n%  \nfor some $m$.\n%\nFinally,  \n%\n  \\begin{align}\n    K\\subset \n    \\interior{Q}_m\n      \\subset\n    Q_m\n      \\subset\n    \\interior{Q}_{m+1}\n      \\subset\n    Q_{m +1}\n      \\subset\n    \\interior{Q}_{m+2}\n      \\subset\n    \\cdots.\n  \\end{align}  \n%\n\\end{proof}\n%: THEOREM -------------------------------------------------------------------%\n\\noindent We are now in a fair position to establish the following:\n%\\newpage\n\\paragraph{Theorem}{\\it \nThe topology of %\n%\n$\\mathit{C(\\Omega)}$ %\n%\ndoes not depend on the particular choice of %\n%\n$\\mathit{\\singleton{K_n}}$, %\n%\nas long as this sequence satisfies the conditions specified in section 1.44. %\nNeither does the topology of %\n%\n$\\mathit{C^{\\,\\infty} (\\Omega)}$, %\n%\nas long as this sequence satisfies the conditions specified in section 1.44.\n}\n%\n\\begin{proof}%\nWith the second corollary's notations,\n% \n  $\\tau_{K} = \\tau_{K_\\lambda}$,\n%\nfor every subsequence $\\singleton{K_{\\lambda(n)}}$.\n% \nSimilarly, let \n%\n  $\\singleton{L_n}$ \n% \nbe another sequence of compact subsets of $\\Omega$ that satisfies \nthe condition specified in [1.44], \nso that \n%\n  $\\tau_{L} = \\tau_{L_\\kappa}$\n%\nfor every subsequence $\\singleton{L_{\\kappa(n)}}$. \n%\nNow apply the above Lemma 2 with $K_i$ ($\\counting{i}$) and so conclude that  \n%\n  $K_i \n    \\subset \n  L^\\circ_{m_i} \n    \\subset \n  L^\\circ_{m_{i}+1}\n    \\subset\n  \\cdots$\n%  \nfor some $m_i$. In particular, the special case $\\kappa_i = m_i + i$ is \n%\n  \\begin{align}\n    %\n    \\label{1_16. 
K subset interior L}\n    K_i\n      \\subset \n    \\interior{L}_{\\kappa_i}.\n  \\end{align} \n%\nLet us reiterate the above proof with $K_n$ and $L_n$ in exchanged roles, \nthen similarly find a subsequence $\\set{\\lambda_j}{\\counting{j}}$ such that \n%\n  \\begin{align}\n  %\n  \\label{1_16. L subset interior K}\n  %\n    L_j \\subset \\interior{K}_{\\lambda_j}.\n  \\end{align}\n%\nCombine \n%\n  (\\ref{1_16. K subset interior L}) with \n  (\\ref{1_16. L subset interior K}) \n%\nand so obtain\n%\n  \\begin{align}\n    K_1 \n      \\subset \n    \\interior{L}_{\\kappa_1} \n      \\subset \n    L_{\\kappa_1} \n      \\subset \n    \\interior{K}_{\\lambda_{\\kappa_1}}\n      \\subset \n    K_{\\lambda_{\\kappa_1}}\n      \\subset\n    \\interior{L}_{\\kappa_{\\lambda_{\\kappa_1}}}\n      \\subset\n    \\cdots, \n  \\end{align}\n%\nwhich means that the sequence \n%\n  $Q = (\n    K_1, \n    L_{\\kappa_1}, \n    K_{\\lambda_{\\kappa_1}}, \n    %L_{\\kappa_{\\lambda_{\\kappa_1}}},\n    \\dots\n  )$\n%\nsatisfies the conditions specified in section 1.44. \nIt now follows from Corollary 2 that \n%\n  \\begin{align}  \n    \\tau_{K} \n    = \n      \\tau_{K_\\lambda} \n    = \n      \\tau_{Q} \n    = \n      \\tau_{L_\\kappa} \n    = \\tau_{L}.\n  \\end{align} \n%\nSo ends the proof.\n\\end{proof}\n% END\n", "meta": {"hexsha": "10da767d0cb4d8b160bff64a95f0bdf851c981a6", "size": 7454, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter_1/1_16.tex", "max_stars_repo_name": "gitcordier/FunctionalAnalysis", "max_stars_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter_1/1_16.tex", "max_issues_repo_name": "gitcordier/FunctionalAnalysis", "max_issues_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter_1/1_16.tex", "max_forks_repo_name": "gitcordier/FunctionalAnalysis", "max_forks_repo_head_hexsha": "4d54af9cab1ce2bf512341cc1f2a0c81d7097754", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.3173652695, "max_line_length": 82, "alphanum_fraction": 0.6089348001, "num_tokens": 2515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7981867753392728, "lm_q1q2_score": 0.5979115070839269}}
{"text": "\n\\chapter{Universality, Adjoints and Closed Cartesian Categories}\n\\thispagestyle{empty}\nIn this chapter we are going to study universality and adjunctions. Despite a possible apparent difference in their definitions, both have the same use: they are tools that, given an arrow selected in a category, allow us to uniquely select an arrow in another category, generally fulfilling some special property. \\\\\n\nWe will start this section with the definition of universality, as well as the statement of the celebrated Yoneda's lemma. Then, we present the concept of adjunction. We will use these concepts to introduce new structures, in particular that of closed Cartesian category.\n\n\n\\section{Universality}\nIn this section we present the concept of universality. This concept is behind lots of mathematical properties. Intuitively, universality is an efficient way of expressing a one-to-one correspondence between arrows of different categories. This one-to-one relationship is usually expressed via ''given an arrow  $f$ it exists one and only one  arrow $\\overline f$ such that <insert your favorite universal property>''.\\\\\n\nProbably the first contact that any mathematician has with universality is defining a function  $f:\\mathbb R \\to \\mathbb R^2$. It is easy to see that defining such a function is equivalent to define two $g,h: \\mathbb R \\to \\mathbb R$. Furthermore, those $g,h$ are unique for each $f$. This uniqueness is the flavor that attempts to capture the concept of universality. Other examples of unique existence are those that occur in quotient groups or in bases of vector spaces. \\\\\n\n\\begin{definition}\\label{def:univ-arrow}\n  Let $S: D \\to C$ be a functor and $c \\in Ob(C)$. An \\emph{universal arrow}  from $c$ to $S$ is a pair $(d,u)$ with $d\\in Ob(D), u:c \\to Sd \\in Ar(C)$, such that for every $(e,f)$ with $e\\in Ob(D)$  and $f:c\\to Sd$ there exists an unique $f':r,d\\in Ar(D)$ such that $Sf'\\circ u = f$.\n\n\\end{definition}\nIn a diagram:\n\n\\[\n  \\begin{tikzcd}\n    c\\arrow[r, \"u\"] \\arrow[rd, \"f\", swap]      & Sd \\arrow[d, \"Sf'\", dashed]& d\\arrow[d,\"f'\",dashed]\\\\\n    &Se& e \n  \\end{tikzcd}\n\\]\n\nNote that an universal arrow $(d,u)$ induces the unique existence of an arrow in $D$, but with interesting properties via it relationship with $S$. Usually, to provide an universal arrow we will only define the functor $S:D\\to C$ and the arrow $u:c\\to Sd$, letting all other information be deduced from the context.\\\\\n\n\n\nThe idea of universality is often regard via the notion of \\emph{universal element}, in the special (and commonplace) case where $S:D\\to Set$.\n\n\\begin{definition}\n  Let $S: D \\to Set$, an \\emph{universal element}  for $S$ is a pair $(r,e)$ with $r \\in Ob(D), e \\in   Sr$, such that for every $(d,x)$ with $d\\in Ob(D), x\\in Sd$, there is an unique arrow $f:r\\to d\\in D$ such that $(Sf)e = x$. \n\\end{definition}\n\n\nNote that having an universal element is a particular case of having an universal arrow. Considering $*$ the set with one point (as a category) an universal element is an universal arrow $*\\to H$. 
This is clearly seen if we consider the diagram:\n% c\\arrow[r, \"u\"] \\arrow[rd, \"f\"]\n\\[\n  \\begin{tikzcd}\n    *\\arrow[r, \"u\"] \\arrow[rd, \"f\", swap] & Sr \\arrow[d, \"Sf\", dashed]& r\\arrow[d,\"f\", dashed]\\\\\n    &Sd& d \n  \\end{tikzcd}\n\\]\n\nwhere $*$ is helping us select the elements of $Sr$ and $Sd$ that enforce the property.\\\\\n\nConversely, if $D$ has small hom-sets, we can consider a universal arrow to be a particular case of a universal element. In the context of \\ref{def:univ-arrow}, $(d,u: c\\to Sd)$ is a universal arrow if, and only if, $(d, u\\in  \\hom_{C}(c, Sd))$ is a universal element for the functor $\\hom_C(c, S\\cdot): D\\to Set$.\\\\\n\nThe point of having two such similar definitions is to have different interfaces for the same concept. While most results can be easily adapted from one to the other, once we get to the examples it becomes clear when each one is more convenient. \n\n\\begin{example}\\ \n  \\begin{itemize}\n  \\item Quotient group: This property states that, for any group $G$ with a normal subgroup $N\\lhd G$, with $\\pi: G \\to G/N$ the canonical projection, and any group homomorphism $f:G\\to K$ such that $N\\subset \\ker f$, there exists a unique $\\overline f: G/N \\to K$ such that $\\overline f  \\circ \\pi = f$.\\\\\n    \n    The question now is: how is this a universal property? This is an example of a universal element. Consider the functor $H: Grp\\to Set$:\n    $$HG' = \\{f:G\\to G'\\in Ar(Grp) : f(N)=\\{e\\} \\}.$$\n    \n    Then, $(G/N, \\pi)$ is a universal element for $H$. From this property alone the three isomorphism theorems can be deduced. Therefore, we only have to prove this result to have the full power of these theorems in any context (e.g.\\ rings, $K$-algebras, or topological spaces).\n\n  \\item Tensor product: Given a right $R$-module $A$ and a left $R$-module $B$ one can consider the tensor product as an abelian group $A\\otimes_R B$ and an $R$-biadditive function\n    $$h:A\\times B\\to A \\otimes_R B,$$\n    such that  for every $R$-biadditive $f:A\\times B \\to G$ there exists a unique $\\overline f$ such that $\\overline f\\circ h=f$. It is easy to see that the tensor product can be reformulated as a universal arrow $\\varphi:A\\times B \\to A\\otimes_R B$ from $A\\times B$ to the identity functor in the category of $R$-additive groups along with $R$-additive functions.\n  \\end{itemize}\n\\end{example}\n\nDually, we can consider a \\emph{universal arrow from $S$ to $c$}, or simply a universal arrow from $S$:\n\\begin{definition}\\label{def:univ-arrow2}\n  Let $S: D \\to C$ be a functor and $c \\in Ob(C)$. A \\emph{universal arrow}  from $S$ to $c$ is a pair $(e,u)$ with $e\\in Ob(D), u:Se \\to c \\in Ar(C)$, such that for every $(d,f)$ with $d\\in Ob(D)$  and $f:Sd\\to c$ there exists a unique $f':d\\to e\\in Ar(D)$ such that $u\\circ Sf' = f$.\n\\end{definition}\nIn a diagram:\n\n\\[\n  \\begin{tikzcd}\n    c       & Sd \\arrow[swap, l, \"f\"]\\arrow[d, \"Sf'\", dashed]& d\\arrow[d,\"f'\", dashed]\\\\\n    &Se\\arrow[lu, \"u\"]& e \n  \\end{tikzcd}\n\\]\n\n\n\\begin{example}\\label{example:prod}\\ \n  \\begin{itemize}\n  \\item In $Set$  we have a particular construction: the product of sets for any two sets $a,b$, along with the projections $\\pi_1: a \\times b \\to a$ and $\\pi_2: a \\times b \\to b$. 
Then, we can define the functor $\\Delta: Set \\to Set\\times Set$ such that $\\Delta c = (c,c)$ for every $c\\in Ob(Set)$ and $\\Delta (f:c\\to b) = (f,f):(c,c)\\to (b,b)$ is the component-wise application of $f$.\\\\\n\n    Then, given any two sets $a,b\\in Ob(Set)$, the pair $(a\\times b,\\ u=(\\pi_1,\\pi_2))$ is a universal arrow from $\\Delta$ to $(a,b)$, as\n\n    \\[\n      \\begin{tikzcd}\n        (a,b)       & \\Delta d=(d,d) \\arrow[swap, l, \"f\"]\\arrow[d, \"\\Delta f'\", dashed]& d\\arrow[d,\"f'\", dashed]\\\\\n        &\\Delta (a\\times b)\\arrow[lu, \"u\"]& a\\times b \n      \\end{tikzcd}\n    \\]\n\n    This construction is reproducible in, among many others, $Grp$, $Top$ or Banach spaces with bounded linear transformations. \n  \\end{itemize}\n\\end{example}\n\n\nLastly, we will provide a characterization of universality:\n\\begin{proposition}\\label{Yoneda-proposition}\n  Let $S: D \\to C$ be a functor and $u:c\\to Sr\\in Ar(C)$. Then $(r,u)$ is a universal arrow if, and only if, the function $\\varphi:\\hom_D(r,\\cdot)\\to \\hom_C(c,S\\cdot)$ such that $\\varphi(d)(f)= Sf\\circ u$ for all $f\\in \\hom_D(r,d)$ is a natural bijection. Conversely, every such natural bijection is uniquely determined by a universal arrow $u:c\\to Sr$. \n\\end{proposition}\n\\begin{proof}\n  Let $u$ be universal. Then $\\varphi$ is clearly a bijection. For $\\varphi$ to be a natural transformation the diagram \n  \\[\n    \\begin{tikzcd}\n      \\hom_D(r,d)\\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\varphi(d)} & \\hom_C(c,Sd)\\arrow{d}{\\hom_C(c,Sg)}\\\\\n      \\hom_D(r,d')\\arrow{r}{\\varphi(d')} & \\hom_C(c,Sd')\n    \\end{tikzcd}\n  \\]\n\n  should commute for all $g\\in Ar(D)$. As this is a diagram in the category of sets, we can check the commutativity element-wise. For any $f \\in\\hom_D(r,d)$:\n  \\[\n    \\begin{tikzcd}\n      f\\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\varphi(d)} & Sf\\circ u \\arrow{d}{\\hom_C(c,Sg)}\\\\\n      g\\circ f \\arrow{r}{\\varphi(d')} & S(g\\circ f) \\circ u = Sg\\circ Sf \\circ u\n    \\end{tikzcd}\n  \\]\n\n  So the diagram commutes, and $\\varphi$ is natural.\\\\\n\n  Let us now assume that $\\varphi$ is a natural bijection. We define $u := \\varphi(r)(1_r)$ and check that $(r,u)$ is a universal arrow. As $\\varphi$ is natural we have that:\n\n  \\[\n    \\begin{tikzcd}\n      \\hom_D(r,r)\\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\varphi(r)} & \\hom_C(c,Sr)\\arrow{d}{\\hom_C(c,Sg)}\\\\\n      \\hom_D(r,d)\\arrow{r}{\\varphi(d)} & \\hom_C(c,Sd)\n    \\end{tikzcd}\n  \\]\n\n  Writing the diagram for the element $1_r\\in\\hom_D(r,r)$ and any $d\\in Ob(D), g:r\\to d\\in \\hom_D(r,d)$:\n\n  \\[\n    \\begin{tikzcd}\n      1_r \\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\varphi(r)} & u \\arrow{d}{\\hom_C(c,Sg)}\\\\\n      g\\arrow{r}{\\varphi(d)} & \\varphi(d)(g)= Sg\\circ u\n    \\end{tikzcd}\n  \\]\n\n  and since $\\varphi$ is a bijection, for every $f\\in \\hom_C(c,Sd)$ there is a unique arrow $f' = \\varphi(d)^{-1}(f)$ such that $Sf'\\circ u = f$, so $u$ is universal.\n\n\\end{proof}\n\n
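To see this characterization at work, we can spell it out (our own quick check, using the dual form of the proposition appropriate for universal arrows from $\\Delta$, in the notation of example \\ref{example:prod}) for the product of sets:\n$$\\hom_{Set}(c,\\ a\\times b) \\,\\cong\\, \\hom_{Set\\times Set}(\\Delta c,\\ (a,b)), \\qquad f \\,\\mapsto\\, u\\circ \\Delta f = (\\pi_1\\circ f,\\ \\pi_2\\circ f),$$\nnaturally in $c$: an arrow into the product is exactly a pair of arrows into the factors.\n\n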
This proposition, sometimes called Yoneda's proposition \\cite[p. 81]{mac2013categories}, is of capital importance. Later in this chapter, we will study \\emph{adjunctions}, and this proposition will enable us to fully understand the relationship between adjunctions and universality. From this result a definition arises:\n\n\\begin{definition}\n  Let $D$ be a category with small hom-sets. A representation of a functor $K:D\\to Set$ is a pair $(r,\\varphi)$ with $r \\in Ob(D)$ and $ \\varphi$ a natural isomorphism  such that\n  $$D(r,\\cdot) \\equiv_{\\varphi} K\\cdot.$$\n  A functor is said to be \\emph{representable} whenever it has a representation.\n\\end{definition}\n\nNote that therefore a universal arrow induces a natural isomorphism $D(r,\\cdot)\\equiv C(c,S\\cdot)$, and this induces a representation of the functor $C(c,S\\cdot): D\\to Set$; these three notions are equivalent.\n\n
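A classical example of a representable functor (a standard fact, included here for illustration): the forgetful functor $U:Grp\\to Set$ is represented by the group of integers, since a homomorphism out of $\\mathbb{Z}$ is determined by the image of $1$:\n$$\\hom_{Grp}(\\mathbb{Z}, G) \\,\\equiv\\, UG, \\qquad f \\,\\mapsto\\, f(1),$$\nnaturally in $G$.\n\n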
\\subsection{Yoneda's lemma}\nThis subsection  deals with Yoneda's lemma. Mac Lane \\cite{mac2013categories} states that the lemma first appeared in his private communication with Yoneda in 1954. With time, this result has become one of the most relevant ones in category theory. We will start by providing some intuition for it, followed by its proof and some use cases.\\\\\n\n\nThis result is due to the Japanese professor Nobuo Yoneda. We know about Yoneda's life thanks to the elegy that was written by Yoshiki Kinoshita \\cite{YonedaLife}. Yoneda was born in Japan in 1930, and received his doctorate in mathematics from Tokyo University in 1952. He was a reviewer for international mathematical journals. In addition to his contributions to the field of mathematics, he also devoted his research to computer science.\\\\\n\n\nThe idea behind the Yoneda lemma can be arid at first, if one does not have a prior understanding of its purpose and usefulness. In order to illustrate this idea we will introduce a (simplified) definition of moduli spaces, so that we have a geometric understanding of Yoneda's lemma.\\\\\n\nThe idea behind (some) moduli spaces is to classify algebraic curves up to isomorphism. In addition, moduli spaces allow us to control complex mathematical objects (such as a quotient space of unknown objects) by simpler objects or objects with better properties (such as a concrete variety). A canonical example of this type of classification is:  \n\\begin{align*}\n  \\{\\text{Vector spaces of finite dimension}\\}/\\text{isomorphism } &\\cong \\N\\\\\n  [V] &\\mapsto \\dim V.\n\\end{align*}\nHere a complex object is classified by another object of which we know more properties. We can start by defining:  \n$$\\mathcal{M} = \\{\\text{smooth complex non-singular curves} \\}/\\text{isomorphism}.$$\n\nFrom now on, when we talk about curves we will refer to smooth complex non-singular curves. Note that if two curves are isomorphic then they have the same genus. Therefore the function \n\\begin{align*}\n  \\gamma: \\mathcal{M} &\\to \\N\\\\\n  \\displaystyle  [V]&\\mapsto \\text{genus of } V \n\\end{align*}\n\nis well defined and we can define $\\mathcal{M}_g = \\gamma^{-1}(g)$. An interesting classification of $\\mathcal{M}_g$ is given when we consider that for every $g$ there exists a closed, connected, non-singular variety $U_g$ and a family $\\{C_t : t \\in U_g\\}$ such that every curve of genus $g$ appears as some fibre $C_t$.  Moreover there is a variety $M_g$ and a surjective morphism $\\varphi: U_g \\to M_g$ such that $\\varphi(t_1)=\\varphi(t_2)$ if and only if $C_{t_1} \\equiv C_{t_2}$. Therefore we are classifying the equivalence classes of $\\mathcal{M}_g$ by points of the variety $M_g$ (thus generating a moduli problem).\\\\\n\nSimilarly to these two examples, with the Yoneda lemma we will have a functor $F:D\\to Set$ and one representation of this functor. We will classify the natural transformations from a represented functor $\\hom_D(r,\\cdot)$ to $F$ by a set in the image of $F$! Interestingly enough, there will be applications where the complex object is not the space of natural transformations, but the images of $F$ (see \\ref{Cayleys} for an example). Let us proceed to state and prove the result.\n\n\\begin{theorem}\\cite[Section 3.2]{mac2013categories}\n  Let $D$ be a category with small hom-sets, $K:D\\to Set$ be a functor, and $r\\in Ob(D)$. Then there is a bijection\n  \\begin{align*}\n    \\tau:Nat(\\hom_D(r,\\cdot), K\\cdot) &\\equiv Kr\\\\\n    \\tau(\\alpha:\\hom_D(r,\\cdot)\\to K\\cdot)& = \\alpha(r)(1_r),\n  \\end{align*}\n  where $\\tau$ is natural in $K$ (as an object of $Set^{D}$) and in $r$.\n\\end{theorem}\n\\begin{proof}\n  As $\\alpha$ is a natural transformation we have that, for every $g:r\\to d\\in Ar(D)$: \n  \\[\n    \\begin{tikzcd}\n      \\hom_D(r,r)\\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\alpha(r)} & Kr\\arrow{d}{Kg}\\\\\n      \\hom_D(r,d)\\arrow{r}{\\alpha(d)} & Kd\n    \\end{tikzcd}\n  \\]\n  Writing $\\alpha(r)(1_r) = u$ we have that:\n  \\[\n    \\begin{tikzcd}\n      1_r\\arrow[swap]{d}{\\hom_D(r,g)}\\arrow{r}{\\alpha(r)} & u\\arrow{d}{Kg}\\\\\n      g\\circ 1_r = g\\arrow{r}{\\alpha(d)} & \\alpha(d)(g)=Kg\\circ u\n    \\end{tikzcd}\n  \\]\n\n  Therefore every natural transformation is uniquely identified by the value of $u$, so $\\tau$ is injective. Moreover, for every $u\\in Kr$ we can define a natural transformation following the previous diagram, so $\\tau$ is bijective.\\\\\n\n  To see in which sense $\\tau$ is natural, we have to consider for which functors it is natural. Consider the \\emph{evaluation} functor $E: Set^D\\times D\\to Set$ that maps each $(K,r)$ to $Kr$, and the functor $N:Set^D\\times D\\to Set$ that maps $(K,r)$ to the set of natural transformations $Nat(\\hom_D(r,\\cdot),K)$. Then $\\tau:N\\to E$ is a natural transformation.\n\\end{proof}\n\\begin{remark}\n  The existence of an evaluation will be revisited when we consider \\emph{closed Cartesian categories} in  \\ref{subsect:CCC}. These categories (in general, closed categories) generalize the idea of a category having an exponential object suitable for evaluation. In this case, we are inadvertently using that $Cat$ is a closed Cartesian category.\n\\end{remark}\n\nThe first functor to which we will want to apply this result is the $\\hom$ functor. But this functor is a bifunctor, so to get the full power of this lemma applied to the whole bifunctor we restate it for contravariant functors.\n\n\\begin{corollary}\n  Let $D$ be a category with small hom-sets, $F:D\\to Set$ be a contravariant functor, and $r\\in Ob(D)$. Then there is a bijection\n  \\begin{align*}\n    \\tau:Nat(\\hom_D(\\cdot,r), F\\cdot) &\\equiv Fr\\\\\n    \\tau(\\alpha:\\hom_D(\\cdot,r)\\to F\\cdot)& = \\alpha(r)(1_r),\n  \\end{align*}\n  \n  where $\\tau$ is natural in $F$ (as an object of $Set^{D^{op}}$) and in $r$.\n\\end{corollary}\n\\begin{proof}\n  We seek to use the Yoneda lemma on the functor $F':D^{op}\\to Set$ induced by $F$. Then we have that: \n  \\begin{align*}\n    \\tau:Nat(\\hom_{D^{op}}(r,\\cdot), F'\\cdot) &\\equiv F'r\\\\\n    \\tau(\\alpha:\\hom_{D^{op}}(r,\\cdot)\\to F'\\cdot)& = \\alpha(r)(1_r)\n  \\end{align*}\n\n  Taking into account that $F_{|Ob(D)}=F'_{|Ob(D^{op})}$, and that \n  $\\hom_{D^{op}}(r,\\cdot) = \\hom_D(\\cdot,r)$ we have the result.\n\\end{proof}\n\nAs we have seen, the Yoneda lemma is a direct generalization of the moduli problem. 
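As a quick illustration of the lemma (a standard computation, spelled out by us): take $K = \\hom_D(s,\\cdot)$ for some $s\\in Ob(D)$; the lemma then gives\n$$Nat(\\hom_D(r,\\cdot), \\hom_D(s,\\cdot)) \\,\\equiv\\, \\hom_D(s,r),$$\nso every natural transformation between represented functors comes from a unique arrow $s\\to r$. This is the computation underlying the Yoneda embedding defined below.\n\n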
In the same vein, Yoneda's lemma is the generalisation of other problems/theorems in mathematics, most notably Cayley's theorem. It states:\n\\begin{proposition} \\label{Cayleys}\n  Any group is isomorphic to a subgroup of a symmetric group.\n\\end{proposition}\n\nTo understand this, take a group $G$, seen as a single-object category, and name that object $e$. Then, the functor $\\hom_G(e, \\cdot): G \\to Set$ can be seen as a group action (see \\ref{group-action}). Then the Yoneda lemma states that:\n\n$$Nat(\\hom_G(e,\\cdot), \\hom_G(e,\\cdot)) \\equiv_\\varphi \\hom_G(e,e).$$\n\nTranslating this result to group theory:\n\\begin{itemize}\n\\item Remember that $\\hom_G(e,e)$ is the group $G$.\n\\item Every natural transformation is an equivariant map between $G$-sets.\n\\item These equivariant maps form a group under composition, which is a subgroup of the group of permutations.\n\\item The natural isomorphism $\\varphi$ defines a group isomorphism.\n\\end{itemize}\n\nSo we have the isomorphism of groups that is stated in Cayley's theorem.\\\\\n\n\n\nWe continue our exploration of the Yoneda lemma by defining the \\emph{Yoneda embedding}. For that we define the contravariant functor $h_a = \\hom_C(\\cdot, a)$. Then the contravariant Yoneda lemma tells us that:\n$$Nat(h_a,h_b) \\equiv_{\\tau_a} \\hom(a,b).$$\n\nWe can then define a fully faithful embedding $\\upsilon: C \\to Set^{C^{op}}$ such that \n\\begin{align*}\n  \\upsilon a  &= \\hom_C(\\cdot, a)\\qquad \\forall a \\in Ob(C), \\\\\n  \\upsilon f &= \\tau_a^{-1} (f)\\qquad\\qquad \\forall f:a\\to b\\in Ar(C).\n\\end{align*}\n\nThis functor allows us to view the category $C$ as a subcategory of the category of contravariant functors from $C$ to $Set$, which will be useful for determining ``heritable'' properties in $C$.\n\\subsection{Properties expressed in terms of universality}\n\nAfter the examples given, we define a few constructions that are commonplace in mathematics. We will outline the notions of limit, pullback and product, and the dual notions of colimit, pushout and coproduct.\\\\\n\nThe notions of product and pullback can be seen as particular cases of the notion of limit. To define limits we will introduce the concepts of co-cone and the diagonal functor. \\\\\n\n\\begin{definition}\\label{daigon-alley}\n  Let $C,J$ be categories. We can define the functor $\\Delta_J: C \\to C^J$ that maps each object $c$ to the constant functor from $J$ to $C$ with value $c$ (which sends every arrow of $J$ to $1_c$); on arrows, $\\Delta_J(f:c\\to c')$ is the natural transformation whose every component is $f$. \n\\end{definition}\n\nWhenever possible we will write only $\\Delta$, and let the information of the category be deduced from context. $J$ is usually small and often finite. We can now consider a natural transformation $\\tau: F \\to \\Delta c$. This can be represented as in the following diagram:\n\\[\n  \\begin{tikzcd}\n    Fx_j\\arrow[swap]{dr}{\\tau x_j}\\arrow{rr}{F g} &\n    & Fx_k\\arrow{dl}{\\tau x_k}\\\\\n    & c&\n  \\end{tikzcd}\n\\]\n\nwhich commutes for every $g:x_j\\to x_k\\in Ar(J)$; for that reason, such a natural transformation is usually called a co-cone. The dual notion, $\\tau: \\Delta c\\to F$, is called a cone and  is represented as:\n\\[\n  \\begin{tikzcd}\n    Fx_j\\arrow{rr}{F g}&\n    & Fx_k\\\\\n    & c\\arrow{ur}[swap]{\\tau x_k}\\arrow{ul}{\\tau x_j} &\n  \\end{tikzcd}\n\\]\n\n\nWe can now define the concepts of limit and colimit. We introduce first the concept of colimit. This definition is that of a universal arrow, only in a category of functors. 
Following this definition, we will define the limit as its dual concept.\n\\begin{definition}\n  A colimit is an object $r\\in Ob(C)$ together with a universal arrow $u:F\\to \\Delta r \\in Ar(C^J)$. The colimit is denoted by $$\\lim_{\\rightarrow} F = r = \\colim F.$$\n\\end{definition}\n\nThe notation $\\lim_{\\rightarrow}$ is intuitive: in the colimit the arrows go from $F$. To represent this as a diagram, we have a co-cone $u: F\\to \\Delta \\colim F$ such that for every other co-cone $\\tau: F\\to \\Delta l$, there exists a unique $f$ such that the following commutes for every $x_j,x_k\\in Ob(J)$:\n\n\\[\n  \\begin{tikzcd}\n    {} & l& & \\\\\n    & \\colim F   \\arrow[dashed]{u}[description]{f} \\\\\n    x_j \\arrow{ur}{u x_j} \\arrow[bend left]{uur}{\\tau x_j}\\arrow[swap]{rr}{Fg} & & \n    x_k \\arrow[swap]{dr}{u x_k}\\arrow[bend right,swap]{uul}{\\tau x_k}\\\\\n  \\end{tikzcd}\n\\]\n\n\nNow, thanks to the duality of categories, we can define what a limit is in a very synthetic way:\n\n\\begin{definition}\n  A limit is the dual concept of a colimit. It is denoted as\n  $$\\lim_{\\leftarrow} F = r = \\lim F.$$\n\\end{definition}\n\nA limit is represented by the following diagram:\n\\[\n  \\begin{tikzcd}\n    {} & l\n    \\arrow[bend right,swap]{ddl}{\\tau x_j}\n    \\arrow[bend left]{ddr}{\\tau x_k} \\arrow[dashed]{d}[description]{f}& & \\\\\n    & \\lim F \\arrow{dr}{u x_k} \\arrow{dl}[swap]{u x_j} \\\\\n    x_j \\arrow[swap]{rr}{Fg} & & \n    x_k \\\\\n  \\end{tikzcd}\n\\]\n\nAnalogously to the colimit notation, in the limit the arrows go to $F$, and thus the notation $\\lim_{\\leftarrow}$. Let us now focus on the notion of limit, in particular on two of its special cases: the product and the pullback. From these cases we will draw most of our examples. \\\\\n\nWe have already talked about the product of categories. The product is a limit where $J$ is the 2-element discrete category, that is, where every functor $F:J\\to C$ merely chooses two objects of $C$. \n\n\\begin{definition}\\label{prod-univ}\n  Let $C$ be a category, $J=\\{0,1\\}$ be the discrete category with two elements. The product $c_0\\times c_1$ of two objects $c_0,c_1\\in Ob(C)$ is the limit of the functor $F:J\\to C$ such that $F0 = c_0, F1= c_1$.\n\\end{definition}\n\nThis construction means that providing an arrow to each of $c_0,c_1$ determines a unique arrow to $c_0\\times c_1$, and vice versa by composition with $\\pi_i$. In this case, the arrows $u_0, u_1$ are usually called \\emph{projections} and denoted by $\\pi_0, \\pi_1$. The notation is due to the product being a generalization of the Cartesian product in the category $Set$. It is also sometimes denoted by $c_0 \\,\\Pi\\, c_1$, with $\\Pi$ being standard for the sequence product. Some examples of product objects in categories are:\n\\begin{example}\\ \n  \\begin{itemize}\n  \\item Products from Example \\ref{example:prod}.\n  \\item In Haskell, considering two integer types \\texttt{A} and \\texttt{B}, we can take the pair type \\texttt{(A,B)}, along with the projections \\texttt{fst} and \\texttt{snd}, to be a product (a small Haskell sketch follows right after this example). 
We can fairly easily check that, for any other integer type \\texttt{C} and any morphism \\texttt{f: C$\\to$ (A,B)} there exist a unique \\texttt{g} and \\texttt{h} such that:\n    \\[\n      \\begin{tikzcd}\n        {} & C \\arrow[bend right,swap,dashed]{dl}{g}\n        \\arrow[bend left,dashed]{dr}{h} \\arrow{d}[description]{f}& & \\\\\n        A  &(A,B) \\arrow{l}[swap]{fst} \\arrow{r}{snd} & \n        B \\\\\n      \\end{tikzcd}\n    \\]\n    Note that we have requested \\texttt{A} and \\texttt{B} to be integer types. Quite surprisingly, this construction cannot be generalized in Haskell to provide a product type for every object \\cite{wiki:hask}. This will later give a palpable difference between our theoretical $\\lambda$-calculus and its applicable version, Haskell.\\\\\n\n    These troubles arise from implementation details of bottom values. Why has nobody fixed this? Well, when functions have to terminate and only finite values are considered, this problem does not arise. This condition is fairly acceptable in any engineering scenario. \n  \\end{itemize}\n\\end{example}\n\n
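Here is the promised sketch: a minimal, hypothetical illustration of the pairing that witnesses the universal property (the names \\texttt{pair}, \\texttt{g} and \\texttt{h} are ours, and we ignore the bottom-value caveat above).\n\\begin{verbatim}\n-- pair g h is the unique arrow f :: c -> (a, b)\n-- with fst . f == g and snd . f == h.\npair :: (c -> a) -> (c -> b) -> c -> (a, b)\npair g h x = (g x, h x)\n\ng :: Int -> Int\ng = (+ 1)\n\nh :: Int -> Bool\nh = even\n\nmain :: IO ()\nmain = print (pair g h 3)   -- prints (4,False)\n\\end{verbatim}\nUp to argument order, \\texttt{pair} is the function \\texttt{(\\&\\&\\&)} from \\texttt{Control.Arrow}, specialized to functions.\n\n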
Analogously, we can define the coproduct, in which instead of arrows to $c_0, c_1$ we consider arrows \\emph{from} $c_0,c_1$. \n\\begin{definition}\n  The coproduct is the dual definition of the product. It is denoted by $c_0 \\sqcup c_1$.\n\\end{definition}\nIn this notation $\\sqcup$ plays the role of an inverted $\\Pi$, with the meaning of being the dual notion of the product.\n\\begin{example}\\ \\label{example:mitchell}\n  \\begin{itemize}\n  \\item In the category $Ab$ of abelian groups and group homomorphisms, finite products and coproducts coincide. This also happens in other categories, such as $R$-modules for a ring $R$ or vector spaces. Underlying this is a type of category that relates these constructions: these are all \\emph{abelian categories}, as in \\cite[Section 5.5]{rotman2008introduction}.\\\\\n\n    We will not delve further into this concept so as not to deviate from the subject at hand. However, for those readers who are familiar with the notation, we cannot resist including Mitchell's theorem \\cite[Chapter IV]{mitchell1965theory}.\n\n    \\begin{theorem}[Mitchell's Theorem] If $A$ is a small abelian category, then there is a covariant full, faithful and exact functor $F : A \\to Ab$.\n    \\end{theorem}\n\n    As a consequence, any statement ``$p$ implies $q$'', where $p$ and $q$ are categorical statements about a diagram in $A$, holds true in $A$ if it is true in $Ab$.\n  \\end{itemize}\n\\end{example}\n\n\n\n% From my personal experience I have to say that I have seen more difficulties learning this notion rather than learning the utterly similar notion of product, probably because we are more used to think in terms of arrays rather than in terms of universality for the product. I think that thinking in terms of universality should be the way in to these concept.\n\n\nAfter learning about the product and the coproduct, we now focus on the notion of pullback and its dual, the pushout. We will first define the category\n\n\\[\n  P = \\begin{tikzcd}\n    x\\arrow{r}{f} & z & y\\arrow[swap]{l}{g}\n  \\end{tikzcd}\n\\]\n\nThen we can define the pullback:\n\n\\begin{definition}\\label{def:pullback}\n  Let $C$ be a category and $F:P\\to C$ be a functor. Then the pullback of $Fx$ and $Fy$, denoted as $Fx\\times_{Fz}Fy$, is the limit of the functor $F$.\n\\end{definition}\n\nWe can represent this structure in the following diagram. For any object $q$ and arrows $f':q\\to Fx,g':q\\to Fy$ we have:\n\n\\[\n  \\begin{tikzcd}\n    q\n    \\arrow[bend left]{drr}{g'}\n    \\arrow[bend right,swap]{ddr}{f'}\n    \\arrow[dashed]{dr}[description]{u} & & \\\\\n    & Fx\\times_{Fz}Fy \\arrow{r}{p_2} \\arrow{d}[swap]{p_1}\n    & Fy \\arrow{d}{Fg} \\\\\n    & Fx \\arrow[swap]{r}{Ff}\n    & Fz\n  \\end{tikzcd}\n\\]\n\nAnalogously, we can define:\n\n\\[\n  CoP = \\begin{tikzcd}\n    x & z\\arrow{l}{f}\\arrow[swap]{r}{g} & y\n  \\end{tikzcd}\n\\]\n\nand define:\n\\begin{definition}\n  Let $C$ be a category and $F:CoP\\to C$ be a functor. Then the pushout of $Fx$ and $Fy$, denoted as $Fx\\sqcup_{Fz}Fy$, is the colimit of the functor $F$.\n\\end{definition}\n\n\\begin{example}\n  \\begin{itemize}\\label{fiber-product}\n  \\item Fiber product: The fiber product is the canonical pullback of the $Top$ category. Let $X,Y,Z$ be topological spaces and let $f:X\\to Z$ and $g: Y\\to Z$ be continuous functions. Then we define the fiber product as:\n    $$X\\times_ZY = \\{(x,y) \\in X\\times Y: fx=gy\\}.$$ Note that, although this is not usually reflected in the notation, $X\\times_Z Y$ depends on both $f$ and $g$. In a diagram:\n    \\[\n      \\begin{tikzcd}\n        X\\times_Z Y\\arrow{r}{\\pi_2}\\arrow[swap]{d}{\\pi_1} & Y\\arrow{d}{g}\\\\\n        X\\arrow{r}{f} & Z\n      \\end{tikzcd}\n    \\]\n    where the $\\pi_i$ are the projections inherited from $X\\times Y$. It is easy to check that, for any topological space $Q$ and any $f':Q\\to X, g': Q \\to Y$ such that\n    \\[\n      \\begin{tikzcd}\n        Q\\arrow{r}{g'}\\arrow[swap]{d}{f'} & Y\\arrow{d}{g}\\\\\n        X\\arrow{r}{f} & Z\n      \\end{tikzcd}\n    \\]\n    commutes, there is a unique $u:Q\\to X\\times_Z Y$ that maps $q\\mapsto (f'q,g'q)$ for every $q\\in Q$.\\\\\n  \\item Seifert-Van Kampen: This theorem is a classic result in algebraic topology, which can be found in any classical source, such as \\cite{munkres2000topology}. It was independently discovered by Seifert \\cite{seifert1931konstruktion} and Van Kampen \\cite{van1933connection}.  We present the result as in  Brown's paper \\cite{brown1967groupoids}.\\\\\n\n    We denote by $\\pi_1(X,A)$ the fundamental groupoid of the topological space $X$ over the set $A\\subset X$. This notion was first introduced by Brown. $\\pi_1(X,X)$ would be the classical full fundamental groupoid, and $\\pi_1(X,x_0)$ for some $x_0\\in X$ would be the fundamental group of $X$ over $x_0$. Let us present the result.\n    \\begin{theorem}[Seifert-Van Kampen]\n      Let the topological space $X$  be covered by the interiors of two subspaces $X_1,X_2$ and let $A$\n      be a set which meets each path component of $X_1, X_2$ and $X_0=X_1\\cap X_2$, and $i_i: X_0 \\to X_i,j_i: X_i \\to X $ be the canonical inclusions. Then $A$  meets each path component of $X$  and the following diagram is a pushout in the category of groupoids:\n      \\[\n        \\begin{tikzcd}\n          \\pi_1(X_0,A)\\arrow{r}{\\pi_1(i_1)}\\arrow[swap]{d}{\\pi_1(i_2)} & \\pi_1(X_1,A)\\arrow{d}{\\pi_1(j_1)}\\\\\n          \\pi_1(X_2,A)\\arrow{r}{\\pi_1(j_2)} & \\pi_1(X,A).\\\\\n        \\end{tikzcd}\n      \\]\n\n    \\end{theorem}\n\n\n  \\end{itemize}\n\\end{example}\n\nNote the notational similarity between pullback/pushout and product/coproduct. To understand it we have to consider the similarities between both constructions. 
We are going to focus on the similarities of product and pullback, leaving coproducts/pushouts as duals.\\\\\n\nIn both cases, the universal property consists of having an arrow to the constructed object exactly when we have arrows to each of its generators. In this line of reasoning, we can consider that the product is a pullback where we forget about the object $z$ and its arrows. One easy way to generate that case is to consider the construction when $Fz$ is a terminal object. In that case the existence of $Ff$ and $Fg$ is automatic, and we can consider \n\n\\[\n  \\begin{tikzcd}\n    & q\n    \\arrow[bend left]{dr}{q_2}\n    \\arrow[bend right,swap]{dl}{q_1}\n    \\arrow[dashed]{d}[description]{u} & & \\\\\n    Fx  & Fx\\times_{Fz}Fy \\arrow{r}{\\pi_1} \\arrow{l}[swap]{\\pi_0}& Fy \\\\\n  \\end{tikzcd}\n\\]\n\n% having a product structure.  One can proceed to consider an analogous consideration with pushout and coproduct.\\\\\n\n\n\n% One last property that can be expressed in terms of universality, and in particular in terms of limit/colimit is the \\emph{power/copower}. The first example of power that we have considered yet of such construction is, given $C$ a small categories an $J$ a discrete category, the category $C^J$ is the power of $C$ up to $J$. The notation is due to the standard tuple notation:  considering a set of  tuples in $\\R^n$, is in fact the same as considering the functions from  $n= {0,...,n-1}$ to $\\R$.\\\\\n\n% There are two intuitive notion behind this definition:\n% \\begin{itemize}\n% \\item The one more close to the formalism: to consider as a infinite product of the same element.\n% \\item To consider this \n% \\end{itemize}\n\n\n\n\\section{Adjoints}\n\nAdjointness is a fundamental theory within category theory. It relates the hom-sets of two functors $F:D\\to C, G:C\\to D$. The importance of adjoints, however, comes from their ubiquity in mathematics, often related to the ubiquity of universality. This notion was first presented by Daniel M. Kan in \\cite{kan1958adjoint}. For this section we follow the material presented in \\cite{mac2013categories}. Adjoints are also called adjunctions.\\\\\n\nWe  will start by providing some intuitive notions, before a formal definition. Take the forgetful functor $U:Grp\\to Set$, and the free group functor $\\mathcal{F}:Set\\to Grp$. We can compose $U$ and $\\mathcal{F}$ to make endofunctors. These endofunctors are far from being identities; nonetheless, there is still a tight relation between them: for every set $X$ and group $G$, there is a function $f: X \\to UG$ for each morphism $g:\\mathcal{F}X\\to G$. Conversely, any function from $X$ to $UG$ induces such a morphism. \\\\\n\nAny avid reader will detect by this point the taste of universality. But there is a bit more: we have a bijective relation between the corresponding hom-sets of both functors. This is the underlying property of adjointness: to relate hom-sets. Let us formalize this idea:\n\\begin{definition}\n  Let $D,C$ be categories. An adjunction is a triple $(F,G,\\varphi):D\\to C$ where $F,G$ are functors:\n  \\[\n    \\begin{tikzcd}\n      D\\arrow[r, shift left=.5ex, \"F\"{name=F}] &\n      C\\arrow[l, shift left=1ex, \"G\"{name=G}] \\\\\n    \\end{tikzcd}\n  \\]\n  while $\\varphi$ assigns to each $(c,d) \\in Ob(C\\times D)$ a bijection:\n  \\[\n    \\varphi_{c,d}:\\hom_C(Fd, c)\\equiv\\hom_D(d, Gc),\n  \\]\n  which is natural in $c$ and $d$. \n\\end{definition}\n\n
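Before unpacking the definition, let us instantiate it in the motivating example (our own spelling-out of the free/forgetful pair, with $F=\\mathcal{F}$ and $G=U$):\n$$\\varphi_{G,X}:\\hom_{Grp}(\\mathcal{F}X, G)\\,\\equiv\\,\\hom_{Set}(X, UG),$$\nsending a homomorphism $g:\\mathcal{F}X\\to G$ to its restriction to the generators $X$; naturality says that this restriction is compatible with pre- and post-composition.\n\n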
\n\\end{definition}\n\n\\begin{remark}\n  but we will stick with the notation of $F$ being the \\emph{left adjoint} and $G$ being the \\emph{right adjoint}, denoted as $F\\dashv_\\varphi G$ or just $F\\dashv G$. When a functor \\emph{is} a left (resp. right) adjoint it \\emph{has} a right (resp. left) adjoint.\\\\\n\\end{remark}\nWe have a bit to unpack in this definition. We will start to understand that $\\varphi:Hom_C(F\\cdot, \\cdot)\\to \\hom_D(\\cdot, G\\cdot)$ is a natural bijection in each variable. That is, for every $f:d\\to d'\\in C,k:c\\to c' \\in D$ the following diagrams commutes.\n\\[\n  \\begin{tikzcd}\n    \\hom_C(Fd,c) \\arrow{d}{\\hom_C(Fd,\\cdot)(g)}\\arrow{r}{\\varphi_{c,d}}\n    &\\hom_D(d,Gc)\\arrow{d}{\\hom_D(d,G\\cdot)(g)}&\n    \\hom_C(Fd,c) \\arrow{d}{\\hom_C(F\\cdot,c)(f)}\\arrow{r}{\\varphi_{c,d}}\n    &\\hom_D(d,Gc)\\arrow{d}{\\hom_D(\\cdot,Gc)(f)}\\\\\n    \\hom_C(Fd,c') \\arrow{r}{\\varphi_{c',d}}  &\\hom_D(d,Gc')&\n    \\hom_C(Fd,c') \\arrow{r}{\\varphi_{c,d'}}  &\\hom_D(d,Gc')\n  \\end{tikzcd}\n\\]\n\nThe definition of adjoint can be reworked as:\n\\begin{proposition}\\label{description:adjoint}\n  An adjoint can be described as a bijection that send each $f:Fx\\to a\\in A$ to $\\varphi(f):x\\to Ga$ such that, for every $h:x\\to x'\\in X, k: a\\to a'\\in A$:\n  $$\\varphi(k\\circ f) = Gk \\circ \\varphi(f), \\qquad \\varphi(f\\circ Fh) = \\varphi(f)\\circ h.$$\n\\end{proposition}\n\\begin{proof}\n  These conditions ensures the naturality of the bijection.\n\\end{proof}\n\\begin{remark}\n  It is equivalent to require $\\varphi^{-1}$ to be natural.\n\\end{remark}\n\nThis proposition, that can be obviated and is rarely used by itself, is really important as it encapsulate the idea behind the commonplace concepts of \\emph{unit} and \\emph{counit}. The key reasoning behind these objects is to consider what would happen whenever $f$ is the identity, following the notation of the previous proposition. This provide us of full control of the bijection based only on arrow $\\varphi(f)$. This is also the fundamental idea behind proposition \\ref{Yoneda-proposition}.\\\\\n\nWe have already suggested the relationship between universality and adjointness. We can summary that relation in the following property:\n\\begin{proposition}\\label{prop:univAdjoint}\n  An adjunction $(F,G,\\varphi): D\\to C $ determines:\n  \\begin{itemize}\n  \\item a natural transformation $\\eta: I_D \\to GF$ such that $\\eta(d):d\\to GFd$ is universal to $G$ from $d$ for every $d\\in Ob(D)$. Conversely, we can define:\n    $$\\varphi(f:Fd\\to c) = Gf\\circ \\eta(d): d\\to G c.$$\n  \\item a natural transformation $\\epsilon: FG\\to I_C$ such that $\\varepsilon(c):FGc\\to c$ is universal from $c$ to $F$ and we can define:\n    $$\\varphi(g:d\\to Gc) = \\varepsilon(c)\\circ Fg: Fd\\to c.$$\n  \\item We have that the natural transformation $\\eta_{G\\cdot} \\circ G(\\varepsilon_\\cdot)$ is the identity natural transformation.\n  \\end{itemize}\n\\end{proposition}\n\\begin{remark}\n  We can therefore consider an adjoint $(F,G,\\varphi)$ as a quintuple  $(F,G,\\varphi,\\eta,\\varepsilon)$. $\\eta$ is usually called the \\emph{unit} and $\\varepsilon$ is called the \\emph{counit}.\n\\end{remark}\n\n\\begin{proof}\n  \\begin{itemize}\n  \\item Let $a=Fd$, then we have a natural bijection $\\varphi$:\n    $$\\hom_C(a, c)\\equiv\\hom_D(d, Gc).$$\n    By proposition \\ref{Yoneda-proposition} we have an universal arrow $\\eta_d:d\\to Ga=GFd$ is provided, where $\\eta_d = \\varphi(1_a)$. 
\\begin{proof}\n  \\begin{itemize}\n  \\item Let $a=Fd$; then we have a natural bijection $\\varphi$:\n    $$\\hom_C(a, c)\\equiv\\hom_D(d, Gc).$$\n    By proposition \\ref{Yoneda-proposition} a universal arrow $\\eta_d:d\\to Ga=GFd$ is provided, where $\\eta_d = \\varphi(1_a)$. The assignment $d\\mapsto \\eta_d$ is natural as a transformation $I_D\\to GF$, as the following diagram commutes due to the naturality of $\\varphi$:\n    \\[\n      \\begin{tikzpicture}\n        \\node {\\begin{tikzcd}[column sep=20mm]\n            d\\ar[r,\"\\eta d\"]\\ar[d,\"h\"] & GFd\\ar[d,\"GFh\"]\\\\\n            d'\\ar[r,\"\\eta d'\"] & GFd'.\n          \\end{tikzcd}};\n      \\end{tikzpicture}\n    \\]\n\n  \\item Analogous.\n  \\item $1_{Ga} = G(\\varepsilon_a)\\circ \\eta_{Ga}$.\n  \\end{itemize} \n\\end{proof}\n\nFrom this proposition we can see that we may be able to define an adjoint based only on the universal arrows provided. We can summarize a few equivalent definitions of adjoint:\n\\begin{proposition}\\label{prop:equivdefinition}\n  Each adjoint $(F,G,\\varphi,\\eta,\\varepsilon):D\\to C$ is completely determined by the items in any one of the following list:\n  \\begin{enumerate}\n  \\item $F,G$ and a natural transformation $\\eta:1_D\\to GF$ such that $\\eta(d)$ is universal for every $d\\in Ob(D)$.\n  \\item $F,G$ and a natural transformation $\\varepsilon:FG\\to 1_C$ such that $\\varepsilon(c)$ is universal for every $c\\in Ob(C)$.\n  \\item $G$ and for each $d\\in Ob(D)$, an object $F_0(d)\\in Ob(C)$ and a universal arrow $\\eta(d):d \\to GF_0 d$.\n  \\item $F$ and for each $c\\in Ob(C)$, an object $G_0(c)\\in Ob(D)$ and a universal arrow $\\varepsilon(c):FG_0 c \\to c$.\n  \\item  $F,G$ and two natural transformations $\\eta: I_D\\to GF$, $\\varepsilon: FG\\to I_C$ such that both triangle composites $G\\varepsilon\\circ \\eta G$ and $\\varepsilon F\\circ F\\eta$ are identities.\n  \\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \n  \\item We need to define the natural bijection $\\varphi: \\hom_C(F\\cdot,\\cdot)\\to \\hom_D(\\cdot,G\\cdot)$. For each $f:Fd \\to c$ let:\n    $$\\varphi(f) = Gf\\circ \\eta(d): d \\to Gc.$$ This is well defined, and it is a bijection due to the universality of $\\eta(d)$.\\\\\n\n    It is natural in $d$ because $\\eta$ is natural, and natural in $c$ because $G$ is a functor.\n  \\item Dual of the previous.\n  \\item We will prove that there is only one functor $F$ with $F_0$ as object function such that $\\eta: I_D\\to GF$ is natural. Naturality and universality tell us that each $h:d\\to d'\\in Ar(D)$ induces two arrows:\n    \\[\n      \\begin{tikzcd}\n        F_0 d\\arrow[dashed]{d} & d\\arrow{d}{h}\\arrow{r}{\\eta (d)}&GF_0d\\arrow[dashed]{d}\\\\\n        F_0 d'& d'\\arrow{r}{\\eta (d')} & GF_0d'\n      \\end{tikzcd}\n    \\]\n    so the only possible choice of $Fh$ is the unique arrow satisfying $GFh\\circ \\eta(d) = \\eta(d')\\circ h$. We conclude applying 1.\n  \\item Dual of the previous.\n  \\item We use $\\eta$ and $\\varepsilon$ to define two functions:\n    \\[\n      \\begin{tikzcd}\n        \\hom_C(Fd,c) \\arrow[r, shift left=.5ex, \"\\tau\"{name=F}] &\n        \\hom_D(d,Gc)\\arrow[l, shift left=1ex, \"\\sigma\"{name=G}] \\\\\n      \\end{tikzcd}\n    \\]\n\n    such that $\\tau(f) = Gf\\circ \\eta_d$ and $\\sigma(g)=\\varepsilon_c \\circ Fg$. As the triangle composites are the identities, $\\tau \\circ \\sigma$ and $\\sigma \\circ \\tau$ are the identities. 
It is clearly natural due to proposition \\ref{description:adjoint}.\n  \\end{enumerate}\n\\end{proof}\nTo further illustrate this concept we present some examples:\n\\begin{example}\n  \\begin{itemize}\n  \\item Free category from a graph and forgetful functor:  Given a graph $G$, we can define the free category $\\mathcal{F}(G)$ by adding the arrows needed for the graph to satisfy the category axioms, that is:\n    \\begin{itemize}\n    \\item $\\mathcal{F}(G)$ has as objects all nodes in $G$.\n    \\item $\\mathcal{F}(G)$ has as arrows the formal identities, and every finite composition $f=f_1\\circ f_2\\circ \\dots \\circ f_n$ where each $f_i$ is either an identity or an arrow in $G$. Note that the concept of arrow composition in a graph can be applied directly from its categorical definition.\n    \\end{itemize}\n    \n    Then, we can define a functor $\\mathcal{F}:Graph \\to Cat$ that maps every (small) graph $G$ to $\\mathcal{F}(G)$ and every graph morphism $\\varphi$ to the functor that takes\n    $$f=f_1\\circ f_2\\circ \\dots \\circ f_n \\mapsto (\\mathcal{F}\\varphi) f = \\varphi (f_1)\\circ \\varphi (f_2)\\circ \\dots \\circ \\varphi (f_n).$$\n\n    We then have an adjunction $\\mathcal{F} \\dashv U$, where $U: Cat\\to Graph$ is the forgetful functor and the unit is given by the insertion of generators.\n  \\item Stone-\\v{C}ech compactification \\cite{wiki:stonecech}: We presented in \\ref{example:stone-cech} the Stone-\\v{C}ech functor $\\beta: Hauss \\to Comp$. If we consider the inclusion of categories $i: Comp \\to Hauss$, we can check that $\\beta \\dashv i$ is an adjunction.\n  \\end{itemize}\n\\end{example}\n\n% TODO: Lastly, we are going to study the  and prove a theorem that allow us to include parameters in adjoints.\n\n\n\n\\subsection{Equivalence of Categories}\nWe can define an isomorphism of categories the same way that we define isomorphisms in any other category: an isomorphism is an arrow (functor) with a two-sided inverse. This definition, while standard, is quite restrictive. We will need a laxer concept of equivalence of categories, in the sense that two equivalent categories, while not being exactly the same, are mostly the same. Formally:\n\n\\begin{definition}\n  A functor $F:C\\to D$ is an equivalence of categories if there exists a functor $G:D\\to C$  and two natural isomorphisms $\\varphi: I_C \\to GF$ and $\\zeta: I_D\\to FG$.\n\\end{definition}\n\n\nFor the canonical example we may introduce a really organic concept: the \\emph{skeleton} of a category. Quite often in mathematics we do not consider all objects of a certain type, but rather objects up to isomorphism. The skeleton category of a category $C$ is another category where we consider the objects up to isomorphism. Formally:\n\\begin{definition}\n  Let $C$ be a category. The skeleton of $C$, namely $ske(C)$, is a full subcategory  such that every object in $C$ is isomorphic to exactly one object in $ske(C)$.\n\\end{definition}\n\n
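A standard example (ours, for illustration): the category of finite-dimensional vector spaces over a field $K$ has a skeleton whose objects are the spaces $K^n$ for $n\\in\\N$, since every such space is isomorphic to exactly one of them; after choosing bases, arrows become matrices:\n$$\\hom_{ske}(K^n, K^m) \\,\\equiv\\, M_{m\\times n}(K).$$\n\n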
The definition of equivalence of categories previously outlined allows us, for example, to consider $C\\sim ske(C)$, even though they are not isomorphic. Continuing our discussion, it is not difficult to draw similarities between equivalences and adjoints. That motivates the following definition:\n\\begin{definition}\n  Let $(F,G,\\varphi,\\eta,\\varepsilon)$ be an adjoint. It is called an \\emph{adjoint equivalence} whenever $\\eta$ and $\\varepsilon$ are natural isomorphisms.\n\\end{definition}\nIt is clear that in an adjoint equivalence both $F$ and $G$ are equivalences of categories. We state this idea in the following proposition:\n\\begin{proposition}\\cite[Theorem 1, 4.4]{mac2013categories}\n  Let $F:C\\to D$ be a functor. Then the following properties are equivalent:\n  \\begin{itemize}\n  \\item[i)] $F$ is an equivalence of categories.\n  \\item[ii)] $F$ is part of an adjoint equivalence. \n  \\item[iii)] $F$ is full and faithful, and each object $d\\in D$ is isomorphic to $Fc$ for some object $c\\in C$.  \n  \\end{itemize}  \n\\end{proposition}\n\\begin{proof}\\\n  \\  \n  \\begin{itemize}\n  \\item[$ii)\\implies i)$] Given an adjoint equivalence $(F,G,\\varphi,\\eta,\\varepsilon)$, $F$ is an equivalence witnessed by $G,\\eta,\\varepsilon$. \n  \\item[$i)\\implies iii)$] The natural isomorphism $\\varphi: GF\\equiv I_C$ implies that for every $c\\in Ob(C)$ we have $c\\equiv GFc$. Then the natural isomorphism $\\zeta: FG\\equiv I_D$ states, for every $f:a\\to a'\\in Ar(D)$:\n    \\[\n      \\begin{tikzcd}\n        FGa \\arrow{d}{FG f}& a\\arrow{d}{f}\\arrow{l}{\\zeta^{-1} a}\\\\\n        FGa' \\arrow{r}{\\zeta a'}& a'\n      \\end{tikzcd}\n    \\]\n\n    so $f$ is determined by $FGf$, and we get that $G$ is faithful. Faithfulness of $F$ can be proved symmetrically. To see that $G$ is full we can consider any $h:Ga\\to Ga'$; defining $f=\\zeta a' \\circ Fh \\circ \\zeta^{-1} a$, naturality gives $FG f = Fh$, and since $F$ is faithful, $Gf=h$, so that $G$ is full. Proceed symmetrically with $F$.  \n  \\item[$iii)\\implies ii)$]\n\n\n    We need to construct $G$ so that we obtain an adjunction $G\\dashv F$. The idea is to define $\\eta$ and apply the previously developed characterizations (with the roles of the two functors interchanged).\\\\\n\n\n    By hypothesis, every $d\\in D$ is isomorphic to some $Fc$, so we can choose for every $d\\in D$ an object $c\\in C$ and an isomorphism $\\eta d: d \\to Fc$; moreover, for each $c'\\in C, f:d\\to Fc'\\in Ar(D)$ there is some $g\\in Ar(C)$ (which exists because $F$ is full, and is unique because $F$ is faithful) such that:\n    \\[\n      \\begin{tikzcd}\n        d\\arrow[leftrightarrow]{r}{\\eta d}\\arrow[swap]{dr}{f}& Fc\\arrow{d}{Fg}\\\\\n        & Fc'.\n      \\end{tikzcd} \n    \\]  \n\n    We define $G_0d = c$. Also note  that $\\eta d$ is universal from $d$ to $F$. Therefore, by proposition \\ref{prop:equivdefinition}, $F$ is part of an adjunction $(G,F,\\eta, \\varepsilon)$. As with every adjunction, $F (\\varepsilon c)\\circ \\eta (Fc) =1_{Fc}$. Thus $F(\\varepsilon c)$ is invertible (each $\\eta d$ is an isomorphism), and by the full faithfulness of $F$ so is $\\varepsilon c$, thus yielding an adjoint equivalence.\n\n\n    % We name $\\zeta: Ob(D)  \\to Ob(C)$ to the mapping  such that $d\\equiv F(\\zeta d = c)$. Naming $\\eta d: d \\to c$ to the denoted isomorphism, we can see that\n    % \\[\n    %   \\begin{tikzcd}\n    %     c=Fd& d\\arrow[swap]{l}{\\eta d}\\arrow{d}{f}\\\\\n    %     & d'\n    %   \\end{tikzcd}\n    % \\]\n    % Now, we can apply $G$ to the diagram, and as it is faithfull, the isomorphism will be maitained it will be maintained (see example \\ref{example:mitchell} for more on this idea): \n\n\n  \\end{itemize}\n\\end{proof}\n\n\nIn the next few subsections, we are going to introduce categories with some additional structure on them. These kinds of considerations are commonplace in category theory, and will prove useful.\n\n% \\subsection{Monad}\n\n\n\n\\subsection{Closed Cartesian Categories}\\label{subsect:CCC}\n\nThe notion of product seen in \\ref{prod-univ} seeks to capture the essence of what the Cartesian product is in the category $Set$. 
In this category it happens that for any two objects, one can consider the product without any hesitance about its existence. A \\emph{Cartesian category} will be a category that, in some sense, maintains this property (of being closed under finite products). Formally: \n\\begin{definition}\n  A Cartesian category is a category with a specified terminal object $T$ and for which every finite product exists.\n\\end{definition}\n\\begin{remark}\n  Having all finite products is equivalent to having binary products (together with the terminal object).\n\\end{remark}\n\nNonetheless, further on in the text we will need categories with even nicer properties. These categories are called \\emph{closed Cartesian categories}. We will define them formally first, and afterwards unpack the information that the definition contains:\n\n\\begin{definition}\\label{def:CCC}\n  A category $C$ is a closed Cartesian category, CCC for short, if each of the following functors:\n  \\begin{center}\n    \\begin{tabular}{p{0.3\\linewidth}p{0.3\\linewidth}p{0.3\\linewidth}}\n      $\\qquad F_1:C\\to 1$&$ F_2=\\Delta: C \\to C\\times C $&$ F_{3}^b:C \\to C$\\\\\n      $\\qquad \\qquad c\\mapsto e$&$ \\qquad \\qquad c \\mapsto  (c,c)$&$ \\qquad c \\mapsto c\\times b$\\\\\n    \\end{tabular}\n  \\end{center}\n  has a \\emph{specified} right adjoint, denoted by:\n  \\begin{center}\n    \\begin{tabular}{p{0.3\\linewidth}p{0.3\\linewidth}p{0.3\\linewidth}}\n      $\\qquad G_1:1\\to C$&$ G_2:  C\\times C\\to C $&$ G_{3}^b:C \\to C$\\\\\n      $\\qquad \\qquad e\\mapsto t$&$ \\  \\qquad (c,b)\\mapsto c \\times b$&$ \\qquad c \\mapsto c^b$\\\\\n    \\end{tabular}\n    where there is one $F_3^b\\dashv G_3^b$ for every $b\\in Ob(C)$.\n  \\end{center}\n\n  These adjoints provide us with a lot of information:\n  \\begin{itemize}\n  \\item From $F_1\\dashv G_1$ we get that $t$ is terminal, as for every $c\\in Ob(C)$:\n    $$\\{id_e\\} \\equiv \\hom_1(F_1(c), e) \\equiv \\hom_C(c,G_1e=t).   $$\n  \\item From $F_2\\dashv G_2$ we get that every pair $c',b'$ has a product, as it states the universal property of the product:\n    $$\\hom_{C\\times C}(F_2(c), (c',b'))\\equiv \\hom_{C\\times C}((c,c), (c',b')) \\equiv \\hom_C(c,c'\\times b'),$$\n    that is, for every object $c\\in C$, to provide a morphism $c\\to c'\\times b'$ is the same as providing two morphisms $c\\to c'$ and $c\\to b'$.\n  \\item First of all, as every product exists and is specified by $G_2$, we can define $F_3^b$ for every $b$. Then, its adjointness states that:\n    $$\\hom_C(c\\times b, d) \\equiv \\hom_C(c,d^b).   $$\n\n    This object $d^b$ is called the exponential object or map object. Note that so far the functor $G_3^b$ is defined only on objects; to fully define the adjoint we can take one of two steps:\n    \\begin{itemize}\n    \\item We can define how $G_3^b$ acts on arrows.\n    \\item We can  provide an arrow $\\varepsilon$:\n      $$\\varepsilon: F_3^bG_3^b(c) = c^b\\times b \\to c = I(c)$$\n      that is natural in $c$ and universal from $F_3^b$ to $c$.\n    \\end{itemize}\n\n    The intuitive notion is that this object represents in some way the arrows from $b$ to $c$, being a generalization of the set of functions between two small sets. The best way to understand this adjoint is through the natural transformation $\\varepsilon^b: c^b\\times b \\to c$. In this context, it is called the \\emph{evaluation arrow}. In the context of $Set$ it amounts to the classic evaluation of a function.\\\\\n\n  \\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n  Note that a closed Cartesian category grows out of the notion of a Cartesian category. 
It is in fact a Cartesian category in which we have a function object, with an evaluation arrow that is universal.\n\\end{remark}\n\nClosed Cartesian categories will be reformulated in Chapter 4, proposition \\ref{def2:CCC}. We will use these last ideas in order to present the concept equationally. For now, let us provide some examples of closed Cartesian categories:\n\n\n\n\n\\begin{example}\n  \\begin{itemize}\n  \\item $Set$. We have the set $\\{*\\}$ as a terminal object, and products as in example \\ref{example:prod}. Given two sets $C, B$ we can define the exponential object $C^B= \\hom (B,C)$. The adjointness is then implied by the currying process:\n    $$g: A\\times B \\to C \\ \\mapsto\\ f: A \\to C^B,$$\n    where $f(a)(\\cdot) = g(a,\\cdot)$.\n  \\item We will prove in chapter \\ref{chap:4} that $\\lambda$-calculus is, in some sense, a closed Cartesian category.\n  \\end{itemize}\n\\end{example}\n\n\nWe can consider the duals:\n\n\\begin{definition}\n  A \\emph{cocartesian} category is a category with a specified initial object $\\bot$ and for which every finite coproduct exists. A \\emph{bi-Cartesian} category is a category that is both Cartesian and cocartesian.\n\\end{definition}\n\n\\begin{definition}A \\emph{closed bi-Cartesian category} is a bi-Cartesian category that is also a closed Cartesian category.\n\\end{definition}\n\nAs with almost every structure, one can form the category whose objects are these structures and whose arrows are the structure-preserving functors.\n\n\\begin{definition}[Category of closed Cartesian categories]\n  We can define the \\emph{category of closed Cartesian categories} $Cart$ that has as objects all small closed Cartesian categories, and as arrows all functors that preserve the specified terminal object, products and exponential objects.\n\\end{definition}\n\nThe morphisms of closed Cartesian categories are often called \\emph{closed Cartesian functors}.\n", "meta": {"hexsha": "5b7a2147064835cb3deba35dbce8f7a271b3035f", "size": 50425, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapters/Chapter2.tex", "max_stars_repo_name": "pedrobn23/Master-thesis", "max_stars_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-02T13:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-20T16:41:28.000Z", "max_issues_repo_path": "thesis/Chapters/Chapter2.tex", "max_issues_repo_name": "pedrobn23/Master-thesis", "max_issues_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapters/Chapter2.tex", "max_forks_repo_name": "pedrobn23/Master-thesis", "max_forks_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.1892230576, "max_line_length": 609, "alphanum_fraction": 0.7006048587, "num_tokens": 15441, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7981867753392728, "lm_q1q2_score": 0.5979115070839269}}
{"text": "\\section{Theoretical Background}\n\nMoment of inertia of a rigid body about an axis is a quantitative\ncharacteristics that defines the body\u2019s resistance (inertia) to a change of\nangular velocity in rotation about that axis. \nThis characteristics of the rigid body rotating about a fixed axis is determined\nnot only by the mass of the body, but also by its distribution. \nThe moment of inertia of a rigid body about a certain rotation axis can be\ncalculated analytically. \nHowever, if the body has irregular shape or non-uniformly distributed mass, the\ncalculation may be di cult.\nExperimental methods turn out to be more useful in such cases.\n\n\\subsection{Laws of Physics Used}\n\nThere are mainly two laws of Physics been used in the experiment. Second law of\ndynamics for rotational motion can find the relationship between the rotational\nacceleration and the torque at the object.\n\n\\subsubsection{Second Law of Dynamics for Rotational Motion}\n\nThe rotational motion about a fixed axis relates with the component of the\ntorque about the axis of rotation with the moment of inertia about this axis.\n$$ \\tau_z = I\\beta_z$$\nTherefore, the moment of inertia I can be found once the torque and the\nresulting angular acceleration are measured.\n\nThe moment of inertia is an additive quantity, the moment of inertia of the\ncombined rigid body AB composed of A and B, about the same axis of rotation, is \n$$ I_{ab} = I_A + I_B $$\n\n\n\\subsubsection{Parallel Axis Theorem}\nIf the moment of inertia of a rigid body with mass m about an axis through the\nbody\u2019s center of mass is I0, then for any axis parallel to that axis, the moment\nof inertia is\n$$ I = I_0 + md^2 $$\nwhere $d$ is the distance between the axes.", "meta": {"hexsha": "72deafdb67cf09f861aec1b9990289cb83a8475c", "size": 1697, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "E1/part/background.tex", "max_stars_repo_name": "iamwrm/VP141", "max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z", "max_issues_repo_path": "E1/part/background.tex", "max_issues_repo_name": "iamwrm/VP141", "max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "E1/part/background.tex", "max_forks_repo_name": "iamwrm/VP141", "max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6578947368, "max_line_length": 80, "alphanum_fraction": 0.7878609311, "num_tokens": 385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5978571525694315}}
{"text": "\\chapter{Multivariate normality}\n\\label{mvn}\n\nThe multivariate normal distribution remains central to most work concerning multivariate continuous data.   Experience has suggested that it is usually at least an acceptable approximation, and of course one usually has recourse to the central limit theorem.   Although we will examine a few common alternatives, it is perhaps in the field of robust statistics where it's use has been modified most.\n\n\\section{Expectations and moments of continuous random functions}\n\\label{meancovar}\n\nThe mean and covariance can be defined in a similar way to to the univariate context\n\n\\input{defs/mvexpectation}\n\n\n\\section{Multivariate normality}\n\\label{mvndetail}\n\n\\input{defs/mvnpdf}\n\nAnd it can be shown that $E(\\boldsymbol{x}) = \\boldsymbol{\\mu}$, and that $Var(\\boldsymbol{x}) = \\boldsymbol{\\Sigma}$, hence we can use the notation:\n\n\\begin{displaymath}\n\\boldsymbol{y} \\sim MVN_{p}(\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})\n\\end{displaymath}\n\nFinding the maximum likelihood estimators for $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Sigma}$ is not trivial, there are perhaps at least three derivations.   We briefly recap results from one of the more popular derivations here.\n\n\\input{defs/mvnlike}\n\n\\subsection{\\textbf{R} estimation}\n\n\\verb+cov()+ and \\verb+var()+ (equivalent calls) both give the unbiased estimate for the variance-covariance matrix, i.e. $\\frac{1}{n-1} \\sum_{i=1}^{n}(\\boldsymbol{x}_{i} - \\bar{\\boldsymbol{x}})^{T}(\\boldsymbol{x}_{i} - \\bar{\\boldsymbol{x}})$.   It is worth nothing that by default, \\verb+cov.wt()+ (which will do a few other things as well) uses the divisor $\\frac{1}{n}$\n\n\n%\\section{Skewness}\n%\\label{skewness}\n\n%\\section{Outliers}\n%\\label{outliers}\n\n%\\section{Missing Data}\n%\\label{missing}\n\n\\section{Transformations}\n\\label{transform}\n\n\\cite{Box+Cox:1964} modified earlier proposals by \\cite{Tukey:1957} to yield the following transformation:\n\n\\input{defs/boxcox}\n\n\\input{defs/boxcoxmle}\n\n\nA range of tranformations have been considered for multivariate data, mainly of the Box-Cox type.   If the variables $\\boldsymbol{y} = (y_{1}, y_{2}, \\ldots, y_{p}$ are are smooth transformation of $\\boldsymbol{x}$, the frequency function for $\\boldsymbol{y}$ can be given by:\n\\begin{displaymath}\ng(\\boldsymbol{y}) = f(\\boldsymbol{x}(\\boldsymbol{y}))\\lvert \\frac{\\partial \\boldsymbol{x}}{\\partial \\boldsymbol{y}} \\rvert\n\\end{displaymath}\nwhere $\\boldsymbol{x}(\\boldsymbol{y})$ is $\\boldsymbol{x}$ expressed in terms of the elements of $\\boldsymbol{y}$, and $J = \\lvert \\frac{\\partial \\boldsymbol{x}}{\\partial \\boldsymbol{y}} \\rvert$ is the Jacobian % determinant of partial derivatives\nwhich ensures the density is mapped correctly. 
\n\n$\\prod_{j=1}^{p} \\prod_{i=1}^{n} x_{ij}^{\\lambda_{j}-1}$\n\n\\input{defs/boxcoxmvmle}\n\n\n\n\n\n%%% Local Variables: ***\n%%% mode:latex ***\n%%% TeX-master: \"../book.tex\"  ***\n%%% End: ***", "meta": {"hexsha": "e9510b297d5409c44b5af72f4d29c23b1f8c5d64", "size": 2872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/mvn.tex", "max_stars_repo_name": "phewson/mvstats", "max_stars_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/mvn.tex", "max_issues_repo_name": "phewson/mvstats", "max_issues_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-08-28T16:37:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-28T16:49:11.000Z", "max_forks_repo_path": "chapters/mvn.tex", "max_forks_repo_name": "phewson/mvstats", "max_forks_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4507042254, "max_line_length": 400, "alphanum_fraction": 0.7329387187, "num_tokens": 805, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5978571445855143}}
{"text": "\\chapter{The ORBM: A model of independent complex causes}\n\n\\section{A network equivalent to an RBM}\n\n\\subsection{Unrolling a Gibbs Chain, a different perspective on RBM inference}\n\n\\begin{wrapfigure}{r}{0.6\\textwidth}\n  \\begin{center}\n    \\includegraphics[width=0.5\\textwidth]{Assets/3_Layer_RBM.png}\n  \\end{center}\n  \\caption{A diagram illustrating `unrolling` an RBM by one Gibbs iteration. Note the connections between $H$ and $V$ layers are now directed.}\n  \\label{F:3-Layer-RBM}\n\\end{wrapfigure}\n\nBefore the ORBMs architecture and inference algorithm can be introduced, Gibbs sampling in an RBM must be presented in a different, yet equivalent way.\nHinton, when introducing the idea of training a Deep Belief Network showed that a Gibbs chain in an RBM is equivalent to a infinitely deep belief network, with an RBM on the top with tied weights~\\cite{hinton2006reducing}. An unrolling of a single Gibbs iteration is illustrated in figure \\ref{F:3-Layer-RBM}, the $U$ layer corresponding to the $V$ layer after one Gibbs iteration. The top two layers ($U$ and $H$) form a standard RBM, but the bottom connections ($V$ and $H$) form a Sigmoid Belief Network. To further clarify, the $U$ layer corresponds to $V_1$ in figure \\ref{F:Gibbs_Chain}. Note in figure \\ref{F:3-Layer-RBM} the weights between the $H$ and $U$ layer are shared between the $V$ and $H$ layers.\n\nWe can now show that Gibbs sampling in this RBM unrolled with a Sigmoid belief network is equivalent to Gibbs Sampling in a standard RBM.\\@\n\n\\subsubsection{Sampling in this equivalent network}\n\nSampling in the ORBM network behaves a similar way to RBM sampling, where a Gibbs chain is run between the top two layers, $U$ and $H$, until the last iteration. At the last iteration a hidden pattern is sampled and then is pushed through the Sigmoid belief network between $H$ and $V$. This is illustrated in figure \\ref{F:3-Layer-RBM-Gibbs}.\n\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 0.8\\textwidth]{Assets/ORBM-Gibbs-Chain.png}\n\\caption{A diagram showing sampling in the equivalent network, where normal sampling in the top 2 layers is performed until Gibbs iteration $t$, and the hidden state is pushed down through the bottom layers of the Sigmoid Belief Network.}\n\\label{F:3-Layer-RBM-Gibbs}\n\\end{center}\n\\end{figure}\n\nTo generate a sample from $h$ is equivalent we can use equation \\ref{eq:Hid-Gibbs-Update}, by running the Gibbs chain between $h$ and $u$. To sample from $v$ in the SBN is by definition:\n$$\nP(v_i = 1|h) = \\sigma_i(h)\n$$\nThus Gibbs sampling from a simple RBM ending in a sample for $v$ is equivalent to sampling $h$ from the same RBM and using a SBN for the last step. However another useful way to draw samples in such a network that we will leverage in the ORBM. The log likelihood of the joint can be written as:\n\\begin{equation}\\label{eq:Product-Joint}\n\\log P^\\star(h,v) = \\log P^\\star(h) + \\log P(v|h)\n\\end{equation}\nThe second term is defined in equation \\ref{eq:SBN-v-given-h}. 
To find the first term, $P^\\star(h)$, we need to marginalise the joint (eq~\\ref{eq:Product-Joint}) over all $\\mathbf{U}$ layer configurations:\n\n$$\n \\begin{aligned}\nP^\\star(h) &= \\sum_{v_1=0}^1 \\cdots \\sum_{v_n=0}^1 \\exp \\bigg[  \\log P^{\\star}(h,v) \\bigg] \\\\\n&= \\sum_{v_1=0}^1 \\cdots \\sum_{v_n=0}^1 \\exp \\bigg[  \\sum_i  \\sum_j h_j W_{ji} v_i \\;\\; + \\;\\; \\sum_i W_{0i} v_i \\;\\; + \\;\\; \\sum_j W_{j0} h_j \\bigg] \\\\\n&= \\sum_{v_1=0}^1 \\cdots \\sum_{v_n=0}^1 \\exp \\bigg[  \\sum_i v_i \\phi_i(h)  \\;\\; + \\;\\; \\sum_j W_{j0} h_j \\bigg] \\\\\n&= \\exp\\left[ \\sum_j h_j  W_{j0} \\right] \\;\\; \\times \\sum_{v_1=0}^1 \\cdots \\sum_{v_n=0}^1 \\prod_i \\exp\\bigg[ v_i \\phi_i(h) \\bigg] \\\\\n&= \\exp\\left[\\sum_j h_j  W_{j0}\\right] \\;\\; \\times \\prod_i \\bigg( 1 + e^{\\phi_i(h) } \\bigg) \\\\\n\\text{and taking logs,}\n\\log P^\\star(h) &= \\sum_j h_j  W_{j0} \\;\\; +  \\sum_i \\log \\bigg( 1 + e^{\\phi_i(h) } \\bigg)\n\\\\\n&= \\sum_j h_j  W_{j0} \\;\\; + \\; \\sum_i \\phi_i(h) \\;  - \\; \\sum_i \\log \\sigma_i(h)\n\\end{aligned}\n$$\n\n$\\log P^\\star(h)$ for the RBM that is the `top layer' has now been defined. Given this, another way to write $\\log P^\\star(h,v)$ is\n\\begin{equation}\\label{eq:equivalent-rbm-log-joint}\n\\log P^\\star(h,v) = \\underbrace{\\sum_j h_j  W_{j0} \\;\\; + \\; \\sum_i \\phi_i(h) \\;  - \\; \\sum_i \\log \\sigma_i(h)}_{\\log P^\\star(h)} \\;\\;+\\;\\; \\underbrace{\\sum_i v_i \\log \\sigma_i(h) + (1-v_i) \\log (1 - \\sigma_i(h))}_{\\log P(v \\mid h)}\n\\end{equation}\nBy collecting terms and simplifying, this matches the earlier form in equation~\\ref{eq:LogPJoint}.\n\n\n\\section{A New Approach, The ORBM}\n\n\\subsection{Architecture}\n\nFrean and Marsland extend this idea of a `one Gibbs iteration unrolled' RBM. They propose adding another pair of $U$ and $H$ layers to represent a second rich source of data, which then combines with the original $U$ and $H$ layers via an SBN to form the visible layer, $V$. This architecture is illustrated in figure \\ref{ORBM-Architecture}, where $A$ and $B$ are used to denote the two independent causes we are modeling. To clarify the ORBM's architecture: there are two RBMs, one for each cause. These RBMs are connected via an SBN to the visible layer, the weights of which are tied to the respective RBMs. Performing inference in this architecture is slightly simpler, as the $U$ layers are not involved.\n\nBy building on RBMs and an SBN we can leverage existing algorithms and work on RBMs, and there is potential for extending this work to deeper architectures. The architecture also supports `plug and play': existing trained RBMs can be plugged directly into it. The implications of this are exciting, in that an RBM that has been trained on handwritten digits could be combined with an RBM that has been trained on a paper texture (the bumps and ridges of paper). 
Then, using the ORBM, `clean' representations of the digit and the paper could be separated out, given an image of a digit on paper.\n\nThe ORBM, like the RBM, is difficult to evaluate empirically, but the same techniques that can be used to evaluate RBMs can be applied to the ORBM.\\@\n\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 1.2\\textwidth]{Assets/ORBM-Full-Architecture.png}\n\\caption{The full ORBM architecture, where $A$ and $B$ are the two causes that combine to form the data.}\n\\label{ORBM-Architecture}\n\\end{center}\n\\end{figure}\n\n\\subsection{Gibbs Sampling in the ORBM}\n\n\\subsubsection{Sampling in the ORBM}\n\nSampling from the generative model in an RBM (Section \\ref{S:Gibbs-Sampling-RBM}) involved running a Gibbs chain and then taking the last visible pattern in that chain. To perform sampling in the ORBM we can leverage the generative power of the two RBMs and then combine their outputs in a simple way.\nWe know that the joint in the unrolled RBM is expressed as eq~\\ref{eq:Product-Joint}.\nThat is, the ORBM probability of $v$ given $h^{A}$ and $h^{B}$ is defined by:\n$$ \\log P(v|h^A,h^B) = \\sum_i v_i \\log \\sigma (\\phi^A_i + \\phi^B_i) + (1-v_i) \\log (1 - \\sigma(\\phi^A_i + \\phi^B_i))$$\nwhere $\\phi_i^A$ and $\\phi_i^B$ are the weighted sums into the $i$-th visible unit from models A and B respectively. In this generative model a visible unit is created by taking the weighted sums from both sources, adding their contributions, and then passing the result through a sigmoid function to give a probability.\n\n\\subsubsection{Inference in the ORBM}\\label{S:ORBM-Inference}\n\nIn a standard RBM, sampling a hidden unit is given by $P(h_j=1) = \\sigma(\\psi_j)$, where $\\psi_j$ is defined in equation \\ref{psi-gibbs-update-rbm}. In an ORBM we cannot consider a single RBM alone when trying to find $P(h_j)$, as the hidden layers of the two RBMs in the ORBM are dependent given a visible layer $v$. This amounts to explaining away, as described in section \\ref{S:Explaining-Away}. A nice feature of the ORBM is that sampling the hidden representations ($h^A$ and $h^B$) given the visible layer requires no interaction with the $U$ layers. The $U$ layers are only needed to generate a composite $V$ pattern (or independent reconstructions).\n\nWe aim to use Gibbs sampling to generate $h^A$ and $h^B$ given a $v$.\nWe need to calculate\n$$\n\\psi^A_j = \\log P^\\star(h,v | h^A_j = 1) - \\log P^\\star (h,v| h^A_j = 0)\n$$\nWe will use the fact that the weighted sum into a visible unit $i$ when some hidden unit $h_j$ is on equals the weighted sum for the same configuration with $h_j$ off, plus the weight between these two units. This is true by definition, and is expressed below:\n$$\n\\phi^A_i(h | h^A_j=1) = \\phi^A_i(h | h^A_j=0) + W^A_{ji}\n$$\nWe will abbreviate $\\phi^A_i(h | h^A_j=0)$ to $\\phi^{Aj0}_i$. 
Given these we obtain:\n$$\n\\psi^A_j = \\sum_i v_i \\log \\left( \\frac{1+ e^{-\\phi^{Aj0}_i - \\phi^B_i}}{1+e^{-\\phi^{Aj0}_i - W_{ji} -\\phi^B_i}} \\frac{1+ e^{\\phi^{Aj0}_i + W_{ji} + \\phi^B_i}}{1+e^{\\phi_i^{Aj0} + \\phi^B_i}}\\right) \\;\\;+ \\;\\;\\sum_i \\log \\left(\\frac{1+e^{\\phi_i^{Aj0} + W_{ji}}}{1+ e^{\\phi_i^{Aj0}}}\n\\frac{1+e^{\\phi_i^{Aj0} + \\phi^B_i}}{1+ e^{\\phi_i^{Aj0} + W_{ji} + \\phi^B_i}} \\right)\n$$\n\nNow $\\phi = \\log \\frac{1+e^{\\phi}}{1+e^{-\\phi}} = \\log \\frac{\\sigma(\\phi)}{\\sigma(-\\phi)}$.\nSo the first term simplifies to\n$ \\sum_i v_i W_{ji}$, which is the same as that in an RBM. The second term can also be simplified, using the identity $\\log\\sigma(\\phi) = \\phi - \\log(1+e^\\phi)$. This leads to the following Gibbs sampler probability of the $j$-th hidden unit in network $A$ being 1: $p_j = \\sigma(\\psi_j^A)$ with\n\n$$\n\\psi_j^A = \\underbrace{\\sum_i W^A_{ji} v_i}_\\text{vanilla RBM} \\; + \\; \\underbrace{\\sum_i C^A_{ji}}_\\text{correction} $$\nwhere the `full' correction is:\n\\begin{equation}\\label{eq:Full-Correction}\n\\begin{aligned}\nC^A_{ji} \\; &= \\;\\log \\bigg[ \\frac{\\sigma (\\phi_i^{Aj0})}{\\sigma (\\phi_i^{Aj0} + W^A_{ji})} . \\frac{\\sigma (\\phi_i^{Aj0} + W_{ji}^A + \\phi_i^B) }{\\sigma (\\phi_i^{Aj0} + \\phi_i^B)} \\bigg]\n\\\\\n&= \\log \\sigma(\\phi_i^{Aj0})  \\; + \\; \\log \\sigma (\\phi_i^{Aj0} + W^A_{ji} + \\phi_i^B) \\;- \\log \\sigma (\\phi_i^{Aj0} + W^A_{ji})  \\; - \\; \\log \\sigma ( \\phi_i^{Aj0} + \\phi_i^B)\n\\\\\n&= \\log \\bigg[ \\frac{\\sigma(\\phi_i^{A} - h^A_j W^A_{ji})}{\\sigma (\\phi_i^{A} + (1-h^A_j) W^A_{ji})} \\bigg]  \\; - \\; \\log \\bigg[ \\frac{ \\sigma ( \\phi_i^{AB} - h^A_j W^A_{ji})}{\\sigma (\\phi_i^{AB} + (1-h^A_j) W^A_{ji})} \\bigg]\n\\end{aligned}\n\\end{equation}\nwhere $\\phi_i^{AB} = \\phi_i^{A} + \\phi_i^{B}$. Note that $v$ plays no role in the correction. It is clear that adding $\\phi^B_i$ has introduced a dependency among all the units of $h$. This means that a Gibbs chain will need to be run; in practice this chain proves to be short, even in higher-dimensional tasks like MNIST.\n\n\\subsubsection{Examining and approximating the Correction}\n\nThis `full' correction (eq \\ref{eq:Full-Correction}) is a non-trivial computation to make, in that for every weight a series of moderately large matrix operations has to be performed. As a result, Frean and Marsland propose an approximation to this `full' correction.\n\nIt is useful to see the correction for the $A$ model as contours on axes $\\phi^A_i$ versus $\\phi^{AB}_i$, for a positive weight $W^A_{ji}$.\nThere are two plots for the two cases $h^A_j=0$ and $h^A_j = 1$; these are plotted in figure \\ref{F:Correction-Plot}. This function has well-defined `plateaus', where the height of these plateaus is $\\pm$ the weight respectively. The correction essentially adjusts the weight between $v_i$ and $h_j$, allowing it to be turned off, turned on, intensified, de-intensified, or left unchanged. 
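\n\nAs a concrete reference for the computation involved, here is a minimal NumPy sketch of the full correction (our own illustrative code, using only the final form of eq \\ref{eq:Full-Correction}; the names and array shapes are assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef log_sigmoid(x):\n    # log(sigma(x)) = -log(1 + exp(-x)), computed stably\n    return -np.logaddexp(0.0, -x)\n\ndef full_correction_A(phi_A, phi_B, W_A, h_A):\n    # phi_A, phi_B: weighted sums into the visibles, shape (n_vis,)\n    # W_A: weights of model A, shape (n_hid, n_vis); h_A: shape (n_hid,)\n    phi_off = phi_A[None, :] - h_A[:, None] * W_A  # phi^{Aj0}_i, all (j, i)\n    phi_on = phi_off + W_A                         # phi^{Aj0}_i + W^A_{ji}\n    return (log_sigmoid(phi_off)\n            + log_sigmoid(phi_on + phi_B[None, :])\n            - log_sigmoid(phi_on)\n            - log_sigmoid(phi_off + phi_B[None, :]))\n\n# psi^A_j = (W_A @ v) + full_correction_A(phi_A, phi_B, W_A, h_A).sum(axis=1)\n\\end{verbatim}\n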
This plateau behaviour amounts to adjusting the visible activation based on what the two RBMs are doing, and hence Frean and Marsland propose the following approximation:\n\n\\begin{equation}\\label{eq:Approx-Correction}\n C^A_{ji} = \\sigma(\\phi^{Aj0}_i + W^A_{ji}) - \\sigma(\\phi^{Aj0}_i + W^A_{ji} + \\phi^B_i)\n\\end{equation}\n\n\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 0.8\\textwidth]{Assets/correction.png}\n\\caption{A diagram that shows the structure of the correction over the weighted sums into the visibles from one of the RBMs versus both. Note the plateaus that are $\\pm$ the weight in magnitude.}\n\\label{F:Correction-Plot}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{What happened to the biases?}\\label{S:Biases}\n\nThe ORBM utilises the hidden biases present on the RBMs when calculating the hidden states $h^A$ and $h^B$. However, the visible biases are not used when sampling the visible layer. The reasoning behind this is that an RBM's visible bias acts like a `background rate', i.e.\\ the visible reconstruction we would see from an all-zero hidden activation. As visible biases are not captured in the ORBM, it is important that RBMs plugged into its structure are trained without a visible bias.\n\n\\subsubsection{Reconstructions and Dreams in the ORBM}\n\nReconstructions in the ORBM are a natural extension of the inference algorithm.\n\\begin{itemize}\n  \\item First, hidden representations $h^A$ and $h^B$ are generated given an input $v$.\n  \\item The RBMs (RBM A and B) use their respective hidden representations to generate visible patterns independently. That is, we can perform Gibbs sampling in the same way we saw in equation \\ref{eq:Vis-Gibbs-Update}.\n\\end{itemize}\nThis means that for a visible pattern there are potentially two reconstructions, one from each model.\n", "meta": {"hexsha": "76fe25b1bd190670e0deb062b5296460f45b5068", "size": 13269, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Max/Report/orbm_derivation.tex", "max_stars_repo_name": "garibaldu/multicauseRBM", "max_stars_repo_head_hexsha": "f64f54435f23d04682ac7c15f895a1cf470c51e8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Max/Report/orbm_derivation.tex", "max_issues_repo_name": "garibaldu/multicauseRBM", "max_issues_repo_head_hexsha": "f64f54435f23d04682ac7c15f895a1cf470c51e8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Max/Report/orbm_derivation.tex", "max_forks_repo_name": "garibaldu/multicauseRBM", "max_forks_repo_head_hexsha": "f64f54435f23d04682ac7c15f895a1cf470c51e8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.93125, "max_line_length": 722, "alphanum_fraction": 0.7164066621, "num_tokens": 4131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5978059615072756}}
{"text": "\\chapterimage{chapter_head_2.pdf}\n\n\\chapter{Objects and Data Structures}\n\n\\section{Data Abstraction}\\index{Objects and Data Structures!Data Abstraction}\n\nData Abstraction represents more than just a data structure. The methods enforce an access policy. For example:\n\n\\begin{tcolorbox}[breakable, colback=red!10!white, colframe=red!85!black]\n\\begin{lstlisting}[language = java, basicstyle=\\small]\npublic interface Point {\n    double getX();\n    double getY();\n    void setCartesian(double x, double y);\n    double getR();\n    double getTheta();\n    void setPolar(double r, double theta);\n}\n\\end{lstlisting}\n\\end{tcolorbox}\n\nYou can read the individual coordinates independently, but \\textbf{you must set the coordinates together as an atomic operation}.\n\n\\section{\\textcolor{red}{Data/Object Anti-Symmetry(VERY IMPORTANT)}}\\index{Objects and Data Structures!Data/Object Anti-Symmetry}\n\nWe know Java is an \\textit{Object Oriented Language} as opposed to \\textit{Procedure Language}, such as C. Let's start with 3 data structures and 1 procedure used in procedure language. Listing~\\ref{object-and-data-structure-procedure-pair} show the difference between objects and data structures.\n\n\\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, sidebyside, label = object-and-data-structure-procedure-pair]\n\n\\begin{lstlisting}[language = java, basicstyle=\\small]\npublic class Square {\n    public Point topLeft;\n    public double side;\n}\n\npublic class Rectangle {\n    public Point topLeft;\n    public double height;\n    public double width;\n}\n\npublic class Circle {\n    public Point center;\n    public double radius;\n}\n\\end{lstlisting}\n\n\\tcblower\n\n\\begin{lstlisting}[language = java, basicstyle=\\small]\npublic class Geometry {\n    public final double PI = 3.141592653589793;\n\n    public double area(Object shape) throws NoSuchShapeException {\n        if (shape instanceof Square) {\n            Square s = (Square)shape;\n            return s.side * s.side;\n        } else if (shape instanceof Rectangle) {\n            Rectangle r = (Rectangle)shape;\n            return r.height * r.width;\n        } else if (shape instanceof Circle) {\n            Circle c = (Circle)shape;\n            return PI * c.radius * c.radius;\n        }\n        throw new NoSuchShapeException();\n    }\n}\n\\end{lstlisting}\n\\end{tcolorbox}\n\nObjects hide their data behind abstractions and expose functions that operate on that data. Data structure expose their data and have no meaningful functions.\n\nThe Geometry class operates on the three shape classes. The shape classes are simple data structures without any behavior. All the behavior is in the Geometry class.\n\nConsider what would happen if a \\inlinecode[green]{perimeter()} function were added to \\inlinecode[green]{Geometry}, you notice the following:\n\n\\begin{itemize}\n    \\item The shape classes would be unaffected. 
Any other classes that depended upon the shapes would also be unaffected.\n    \\item On the other hand, if I add a new shape, I must change all the functions in \\inlinecode[green]{Geometry} to deal with it.\n\\end{itemize}\n\nNow hold the points above in your memory for just a moment and let's consider the object-oriented solution in Listing~\\ref{object-and-data-structure-oo-solution}.\n\n\\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, label = object-and-data-structure-oo-solution]\n\\begin{lstlisting}[language = java, basicstyle=\\small]\npublic class Square implements Shape {\n    \n    private Point topLeft;\n    private double side;\n\n    public double area() {\n        return side*side;\n    }\n}\n\npublic class Rectangle implements Shape {\n    \n    private Point topLeft;\n    private double height;\n    private double width;\n\n    public double area() {\n        return height * width;\n    }\n}\n\npublic class Circle implements Shape {\n\n    public static final double PI = 3.141592653589793;\n    \n    private Point center;\n    private double radius;\n\n    public double area() {\n        return PI * radius * radius;\n    }\n}\n\\end{lstlisting}\n\\end{tcolorbox}\n\nWhen a \\inlinecode[green]{perimeter()} method is added to the \\inlinecode[green]{Shape} interface instead, you notice the following:\n\n\\begin{itemize}\n    \\item All of the shape classes would be affected.\n    \\item On the other hand, if I add a new shape, no existing function needs to change.\n\\end{itemize}\n\n\\textit{\\textbf{Procedural code (code using data structures) makes it easy to add new functions without changing the existing data structures. OO code, on the other hand, makes it easy to add new classes without changing existing functions.}}\n\nSo, the things that are hard for OO are easy for procedures, and the things that are hard for procedures are easy for OO.\n\nIn any complex system there are going to be times when we want to add new data\ntypes rather than new functions. For these cases objects and OO are most appropriate. On the other hand, there will also be times when we'll want to add new functions as opposed to data types. In that case procedural code and data structures will be more appropriate.\n\nMature programmers know that the idea that everything is an object is a myth. Sometimes you really do want simple data structures with procedures operating on them.\n\n\\begin{marker}\nWhen we want to add new data types rather than new functions, objects and OO are better.\n\nHowever, if we want to add new functions, procedural code and data structures will be more appropriate.\n\nMature programmers know that the idea that everything is an object is a myth. Sometimes you really do want simple data structures with procedures operating on them.\n\\end{marker}\n\n\\section{The Law of Demeter}\\index{Objects and Data Structures!The Law of Demeter}\n\nA module should not know about the innards of the objects it manipulates. 
An object should not expose its internal structure through accessors, because to do so is to expose, rather than to hide, its internal structure.\n\n\\begin{tcolorbox}[breakable, colback=green!10!white, colframe=green!85!black, center title, title = Law of Demeter]\n\\begin{center}\nA method f of a class C should only call the methods of these:\n\n\\begin{itemize}\n    \\item C\n    \\item An object created by f\n    \\item An object passed as an argument to f\n    \\item An object held in an instance variable of C\n    \\item The method should not invoke methods on objects that are returned by any of the allowed functions; e.g. \\inlinecode[green]{final String outputDir = ctxt.getOptions().getScratchDir().getAbsolutePath();} violates this rule\n\\end{itemize}\n\\end{center}\n\\end{tcolorbox}\n\n\\section{Data Transfer Objects}\\index{Objects and Data Structures!Data Transfer Objects}\n\nThe quintessential form of a data structure is a class with public variables and no functions. This is sometimes called a data transfer object, or DTO. \\textbf{Somewhat more common in Java is the \u201cbean\u201d form}.\n", "meta": {"hexsha": "09131e06e79c82e273bda32c2c84b56f4434a9a5", "size": 6810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/assets/pdf/review/parts/1/objects-and-data-structures.tex", "max_stars_repo_name": "QubitPi/jersey-fundamentals", "max_stars_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/assets/pdf/review/parts/1/objects-and-data-structures.tex", "max_issues_repo_name": "QubitPi/jersey-fundamentals", "max_issues_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/assets/pdf/review/parts/1/objects-and-data-structures.tex", "max_forks_repo_name": "QubitPi/jersey-fundamentals", "max_forks_repo_head_hexsha": "dd3b407e033f2504acb245bb3ce8465464c57620", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.0240963855, "max_line_length": 297, "alphanum_fraction": 0.748164464, "num_tokens": 1574, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7690802317779601, "lm_q1q2_score": 0.5978059573932077}}
{"text": "\\newpage\\section{Parallelogram Stuff}\n\n\t\n\t\\theo{http://forumgeom.fau.edu/FG2005volume5/FG200510.pdf}{Maximality of the Area of a Cyclic Quadrilateral}{Among all quadrilaterals with given side lengths, the cyclic one has maximal area.\n\t\t\\fig{.5}{quad-same_sides-diff_area}{The cyclic quad has the maximal area}\n\t}\n\t\t\n\t\t\n\n\t\\prob{https://artofproblemsolving.com/community/c6h1508225_simple_parallelogram_geo}{IOM 2017 P1}{E}{Let $ABCD$ be a parallelogram in which angle at $B$ is obtuse and $AD>AB$. Points $K$ and $L$ on $AC$ such that $\\angle ADL=\\angle KBA$(the points $A, K, C, L$ are all different, with $K$ between $A$ and $L$). The line $BK$ intersects the circumcircle  $\\omega$ of $ABC$ at points $B$ and $E$, and the line $EL$ intersects $\\omega$ at points $E$ and $F$. Prove that $BF\\parallel AC$.}\n\n\t\\hl{Simplify}: Make the diagram easier to draw.\n\n\n\t\\prob{https://artofproblemsolving.com/community/c6h148830p841255}{USA TST 2006 P6}{E}{Let $ABC$ be a triangle. Triangles $PAB$ and $QAC$ are constructed outside of triangle $ABC$ such that $AP = AB$ and $AQ = AC$ and $\\angle{BAP}= \\angle{CAQ}$. Segments $BQ$ and $CP$ meet at $R$. Let $O$ be the circumcenter of triangle $BCR$. Prove that $AO \\perp PQ.$}\n\t\n\t\t\t\\figdf{.5}{USATST2006P6}{USA TST 2006 P6, That is a prallelogram}", "meta": {"hexsha": "960d1b7955afe424c0015067a5b1f87c5f32977d", "size": 1285, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geo/sec9_parallel.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "geo/sec9_parallel.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geo/sec9_parallel.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 75.5882352941, "max_line_length": 486, "alphanum_fraction": 0.7151750973, "num_tokens": 438, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998611746911, "lm_q2_score": 0.76908023177796, "lm_q1q2_score": 0.5978059573932075}}
{"text": "\\section{The fundamental cover of the circle}\n\\index{circle!fundamental cover|(}\n\\index{fundamental cover!of the circle|(}\n\nIn this section we show that the loop space of the circle is equivalent to $\\mathbb{Z}$ by constructing the universal cover of the circle as an application of the univalence axiom. \n\n\\subsection{Families over the circle}\n\nThe type of small families over $\\sphere{1}$ is just the function type $\\sphere{1}\\to\\UU$, so in fact we may use the universal property of the circle to construct small dependent types over the circle. \nBy the universal property, small type families over $\\sphere{1}$ are equivalently described as pairs $(X,p)$ consisting of a type $X:\\UU$ and an identification $p:X=X$.\nThis is where the univalence axiom\\index{univalence axiom!families over $\\sphere{1}$} comes in. By the map\n\\begin{equation*}\n\\mathsf{eq\\usc{}equiv}_{X,X}:(\\eqv{X}{X})\\to (X=X)\n\\end{equation*}\nit suffices to provide an equivalence $\\eqv{X}{X}$.\n\n\\begin{defn}\\label{defn:circle_descent}\nConsider a type $X$ and every equivalence $e:\\eqv{X}{X}$.\nWe will construct a dependent type $\\mathcal{D}(X,e):\\sphere{1}\\to\\UU$ with an equivalence $x\\mapsto x_{\\mathcal{D}}:\\eqv{X}{\\mathcal{D}(X,e,\\base)}$ for which the square\n\\begin{equation*}\n\\begin{tikzcd}\nX \\arrow[r,\"\\eqvsym\"] \\arrow[d,swap,\"e\"] & \\mathcal{D}(X,e,\\base) \\arrow[d,\"\\mathsf{tr}_{\\mathcal{D}(X,e)}(\\lloop)\"] \\\\\nX \\arrow[r,swap,\"\\eqvsym\"] & \\mathcal{D}(X,e,\\base)\n\\end{tikzcd}\n\\end{equation*}\ncommutes. We also write $d\\mapsto d_{X}$ for the inverse of this equivalence, so that the relations\n\\begin{samepage}%\n\\begin{align*}\n(x_{\\mathcal{D}})_X & =x & (e(x)_{\\mathcal{D}}) & = \\mathsf{tr}_{\\mathcal{D}(X,e)}(\\lloop,x_{\\mathcal{D}}) \\\\\n(d_X)_{\\mathcal{D}} & =d & (\\mathsf{tr}_{\\mathcal{D}(X,e)}(d))_X & = e(d_X)\n\\end{align*}\n\\end{samepage}%\nhold.\n\nThe type $\\sm{X:\\UU}\\eqv{X}{X}$ is also called the type of \\define{descent data}\\index{descent data!for the circle} for the circle.\n\\end{defn}\n\n\\begin{constr}\n  An easy path induction argument reveals that\n\\begin{equation*}\n\\mathsf{equiv\\usc{}eq}(\\ap{P}{\\lloop})=\\mathsf{tr}_P(\\lloop)\n\\end{equation*}\nfor each dependent type $P:\\sphere{1}\\to\\UU$. Therefore we see that the triangle\\index{desc_S1@{$\\mathsf{desc}_{\\sphere{1}}$}}\n\\begin{equation*}\n\\begin{tikzcd}\n& (\\sphere{1}\\to \\UU) \\arrow[dl,swap,\"\\mathsf{gen}_{\\sphere{1}}\"] \\arrow[dr,\"\\mathsf{desc}_{\\sphere{1}}\"] \\\\\n\\sm{X:\\UU}X=X \\arrow[rr,swap,\"\\tot{\\lam{X}\\mathsf{equiv\\usc{}eq}_{X,X}}\"] & & \\sm{X:\\UU}\\eqv{X}{X}\n\\end{tikzcd}\n\\end{equation*}\ncommutes, where the map $\\mathsf{desc}_{\\sphere{1}}$ is given by $P\\mapsto\\pairr{P(\\base),\\mathsf{tr}_P(\\lloop)}$ and the bottom map is an equivalence by the univalence axiom and \\cref{thm:fib_equiv}.\nNow it follows by the 3-for-2 property that $\\mathsf{desc}_{\\sphere{1}}$ is an equivalence, since $\\mathsf{gen}_{\\sphere{1}}$ is an equivalence by \\cref{thm:circle_up}.\nThis means that for every type $X$ and every $e:\\eqv{X}{X}$ there is a type family $\\mathcal{D}(X,e):\\sphere{1}\\to\\UU$ such that\n\\begin{equation*}\n\\pairr{\\mathcal{D}(X,e,\\base),\\mathsf{tr}_{\\mathcal{D}(X,e)}(\\lloop)}=\\pairr{X,e}.\n\\end{equation*}\nEquivalently, we have $p:\\id{\\mathcal{D}(X,e,\\base)}{X}$ and $\\mathsf{tr}(p,{\\mathsf{tr}_{\\mathcal{D}(X,e)}(\\lloop)})=e$. 
Thus, we obtain $\\mathsf{equiv\\usc{}eq}(p):\\eqv{\\mathcal{D}(X,e,\\base)}{X}$, for which the square\n\\begin{equation*}\n\\begin{tikzcd}[column sep=huge]\n\\mathcal{D}(X,e,\\base)\\arrow[r,\"\\mathsf{equiv\\usc{}eq}(p)\"] \\arrow[d,swap,\"\\mathsf{tr}_{\\mathcal{D}(X,e)}(\\lloop)\"] & X \\arrow[d,\"e\"] \\\\\n\\mathcal{D}(X,e,\\base)\\arrow[r,swap,\"\\mathsf{equiv\\usc{}eq}(p)\"] & X\n\\end{tikzcd}\n\\end{equation*}\ncommutes.\n\\end{constr}\n\n\\begin{comment}\n\\begin{defn}\\label{defn:fiber_sequence}\nA \\define{fiber sequence} \n\\begin{equation*}\nF \\hookrightarrow E \\twoheadrightarrow B\n\\end{equation*}\nconsists of a \\define{base type} $B$ with a base point $b_0$ and a dependent type $P:B\\to\\type$, a type $F$ called the \\define{fiber} with an equivalence $\\eqv{P(b_0)}{F}$, and a type $E$ called the \\define{total space} with a map $p:E\\to B$ and an equivalence $e:\\eqv{(\\sm{b:B}P(b))}{E}$ such that the triangle\n\\begin{equation*}\n\\begin{tikzcd}\n\\Big(\\sm{b:B}P(b)\\Big) \\arrow[rr,\"e\"] \\arrow[dr,swap,\"\\proj 1\"] & & E \\arrow[dl,\"p\"] \\\\\n& B\n\\end{tikzcd}\n\\end{equation*}\ncommutes.\n\\end{defn}\n\\end{comment}\n\n\\subsection{The fundamental cover of the circle}\n\nThe \\emph{fundamental cover}\\index{fundamental cover!of the circle} of the circle is a family of sets over the circle with contractible total space.\nClassically, the fundamental cover is described as a map $\\mathbb{R}\\to\\sphere{1}$ that winds the real line around the circle.\nIn homotopy type theory there is no analogue of such a construction.\n\nRecall from \\cref{ex:succ_equiv} that the successor function $\\mathsf{succ}:\\Z\\to \\Z$ is an equivalence. Its inverse is the predecessor function defined in \\cref{ex:int_pred}. \n\n\\begin{defn}\nThe \\define{fundamental cover}\\index{fundamental cover!of the circle} of the circle is the dependent type $\\mathcal{E}_{\\sphere{1}}\\defeq\\mathcal{D}(\\Z,\\mathsf{succ}):\\sphere{1}\\to\\UU$.\\index{Z@{$\\Z$}!fundamental cover of S1@{fundamental cover of $\\sphere{1}$}}\\index{E_S1@{$\\mathcal{E}_{\\sphere{1}}$}}\n\\end{defn}\n\n\\begin{rmk}\n  The fundamental cover of the circle comes equipped with an equivalence\n  \\begin{equation*}\n    e:\\mathbb{Z} \\simeq \\mathcal{E}_{\\sphere{1}}(\\mathsf{base})\n  \\end{equation*}\n  and a homotopy witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      \\mathbb{Z} \\arrow[r,\"e\"] \\arrow[d,swap,\"\\mathsf{succ}\"] & \\mathcal{E}_{\\sphere{1}}(\\mathsf{base}) \\arrow[d,\"\\mathsf{tr}_{\\mathcal{E}_{\\sphere{1}}}(\\mathsf{loop})\"] \\\\\n      \\mathbb{Z} \\arrow[r,swap,\"e\"] & \\mathcal{E}_{\\sphere{1}}(\\mathsf{base})\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes.\n\n  For convenience, we write $k_{\\mathcal{E}}$ for the term $e(k):\\mathcal{E}_{\\sphere{1}}(\\mathsf{base})$, for any $k:\\mathbb{Z}$. \n\\end{rmk}\n\nThe picture of the fundamental cover is that of a helix\\index{helix} over the circle. This picture emerges from the path liftings of $\\mathsf{loop}$ in the total space. 
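\n\nConcretely, the commuting square in the remark above tells us how transport acts on the fibers; writing $\\mathsf{pred}$ for the predecessor function of \\cref{ex:int_pred}, we have the identifications\n\\begin{align*}\n\\mathsf{tr}_{\\mathcal{E}_{\\sphere{1}}}(\\lloop,k_{\\mathcal{E}}) & = \\mathsf{succ}(k)_{\\mathcal{E}}, &\n\\mathsf{tr}_{\\mathcal{E}_{\\sphere{1}}}(\\lloop^{-1},k_{\\mathcal{E}}) & = \\mathsf{pred}(k)_{\\mathcal{E}},\n\\end{align*}\nwhere the second follows since transport along $\\lloop^{-1}$ is inverse to transport along $\\lloop$.\n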
The segments of the helix connecting $k$ to $k+1$ in the total space are constructed in the following lemma.\n\n\\begin{lem}\nFor any $k:\\Z$, there is an identification\n\\begin{equation*}\n\\mathsf{segment\\usc{}helix}_k:(\\base,k_{\\mathcal{E}})=(\\base,\\mathsf{succ}(k)_{\\mathcal{E}})\n\\end{equation*}\nin the total space $\\sm{t:\\sphere{1}}\\mathcal{E}(t)$.\n\\end{lem}\n\n\\begin{proof}\nBy \\cref{thm:eq_sigma} it suffices to show that\n\\begin{equation*}\n\\prd{k:\\Z} \\sm{\\alpha:\\base=\\base} \\mathsf{tr}_{\\mathcal{E}}(\\alpha,k_{\\mathcal{E}})= \\mathsf{succ}(k)_{\\mathcal{E}}.\n\\end{equation*}\nWe just take $\\alpha\\defeq\\lloop$. Then we have $\\mathsf{tr}_{\\mathcal{E}}(\\alpha,k_{\\mathcal{E}})= \\mathsf{succ}(k)_{\\mathcal{E}}$ by the commuting square provided in the definition of $\\mathcal{E}$.\n\\end{proof}\n\n\\subsection{Contractibility of general total spaces}\nConsider a type $X$, a family $P$ over $X$, and a term $c:\\sm{x:X}P(x)$, and suppose our goal is to construct a contraction\n\\begin{equation*}\n  \\prd{t:\\sm{x:X}P(x)}c=t.\n\\end{equation*}\nOf course, the first step is to apply the induction principle of $\\Sigma$-types, so it suffices to construct a term of type\n\\begin{equation*}\n\\prd{x:X}{y:P(x)} c = (x,y).\n\\end{equation*}\nIn the case where $P$ is the fundamental cover of the circle, we are given an equivalence $e:\\eqv{\\Z}{\\mathcal{E}(\\base)}$. Using this equivalence, we obtain an equivalence\n\\begin{equation*}\n  \\Big(\\prd{y:\\mathcal{E}(\\base)}c=(\\mathsf{base},y)\\Big)\\to \\Big(\\prd{k:\\Z}c=(\\mathsf{base},k_{\\mathcal{E}})\\Big).\n\\end{equation*}\nMore generally, if we are given an equivalence $e:\\eqv{F}{P(x)}$ for some $x:X$, then we have an equivalence\n\\begin{equation}\n\\Big(\\prd{y:P(x)}c=(x,y)\\Big) \\to \\Big(\\prd{y:F}c=(x,e(y))\\Big)\n\\end{equation}\nby precomposing with the equivalence $e$. Therefore we can construct a term of type $\\prd{y:P(x)}c=(x,y)$ by constructing a term of type $\\prd{y:F}c=(x,e(y))$. \n\nFurthermore, if we consider a path $p:x=x'$ in $X$ and a commuting square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      F \\arrow[r,\"e\"] \\arrow[d,swap,\"f\"] & P(x) \\arrow[d,\"\\mathsf{tr}_P(p)\"] \\\\\n      F' \\arrow[r,\"{e'}\"] & P(x')\n    \\end{tikzcd}\n  \\end{equation*}\n  where $e$, $e'$, and $f$ are all equivalences, then we obtain a function\n  \\begin{equation*}\n    \\psi : \\Big(\\prd{y:F}c=(x,e(y))\\Big)\\to \\Big(\\prd{y':F'}c=(x',e'(y'))\\Big).\n  \\end{equation*}\n  The function $\\psi$ is constructed as follows. Given $h:\\prd{y:F}c=(x,e(y))$ and $y':F'$ we have the path $h(f^{-1}(y')):c=(x,e(f^{-1}(y')))$. 
Moreover, writing $H$ for the homotopy $\\mathsf{tr}_P(p)\\circ e\\htpy e'\\circ f$ witnessing that the square commutes, and $G$ for the homotopy $f\\circ f^{-1} \\htpy\\idfunc$, we have the path\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=huge]\n      {\\mathsf{tr}_P(p,e(f^{-1}(y')))} \\arrow[r,equals,\"{H(f^{-1}(y'))}\"] &\n      {e'(f(f^{-1}(y')))} \\arrow[r,equals,\"\\ap{e'}{G(y')}\"] &\n      {e'(y')}.\n    \\end{tikzcd}\n  \\end{equation*}\n  From this concatenated path we obtain the path\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=14em]\n      {(x,e(f^{-1}(y')))} \\arrow[r,equals,\"{\\mathsf{eq\\usc{}pair}(p,\\ct{H(f^{-1}(y'))}{\\ap{e'}{G(y')}})}\"] & {(x',e'(y'))}.\n    \\end{tikzcd}\n  \\end{equation*}\n  Now we define the function $\\psi$ by\n  \\begin{equation*}\n    h\\mapsto \\lam{y'}\\ct{h(f^{-1}(y'))}{\\mathsf{eq\\usc{}pair}(p,\\ct{H(f^{-1}(y'))}{\\ap{e'}{G(y')}})}.\n  \\end{equation*}\n  Note that $\\psi$ is an equivalence, since it is given as precomposition by the equivalence $f^{-1}$, followed by postcomposition by concatenation, which is also an equivalence. Now we state the main technical result of this section, which will help us prove the contractibility of the total space of the fundamental cover of the circle by computing transport in the family $x\\mapsto \\prd{y:P(x)}c=(x,y)$.\n\n  \\begin{defn}\n    Consider a path $p:x=x'$ in $X$ and a commuting square\n    \\begin{equation*}\n      \\begin{tikzcd}\n        F \\arrow[r,\"e\"] \\arrow[d,swap,\"f\"] & P(x) \\arrow[d,\"\\mathsf{tr}_P(p)\"] \\\\\n        F' \\arrow[r,\"{e'}\"] & P(x')\n      \\end{tikzcd}\n    \\end{equation*}\n    with $H:e'\\circ f \\htpy \\mathsf{tr}_P(p)\\circ e$, where $e$, $e'$, and $f$ are all equivalences. Then there is for any $y:F$ an identification\n    \\begin{equation*}\n      \\mathsf{segment\\usc{}tot}(y):(x,e(y))=(x',e'(f(y)))\n    \\end{equation*}\n    defined as $\\mathsf{segment\\usc{}tot}(y)\\defeq\\mathsf{eq\\usc{}pair}(p,H(y)^{-1})$.\n  \\end{defn}\n\n  \\begin{lem}\\label{lem:compute-tr-contraction}\n    Consider a path $p:x=x'$ in $X$ and a commuting square\n    \\begin{equation*}\n      \\begin{tikzcd}\n        F \\arrow[r,\"e\"] \\arrow[d,swap,\"f\"] & P(x) \\arrow[d,\"\\mathsf{tr}_P(p)\"] \\\\\n        F' \\arrow[r,\"{e'}\"] & P(x')\n      \\end{tikzcd}\n    \\end{equation*}\n    with $H:e'\\circ f \\htpy \\mathsf{tr}_P(p)\\circ e$, where $e$, $e'$, and $f$ are all equivalences. Furthermore, let\n    \\begin{align*}\n      h & : \\prd{y:F}c=(x,e(y)) \\\\\n      h' & : \\prd{y':F'}c=(x',e'(y')).\n    \\end{align*}\n    Then, writing $C$ for the family $x\\mapsto \\prd{y:P(x)}c=(x,y)$ and $\\varphi$ and $\\varphi'$ for the inverses of the precomposition equivalences $\\blank\\circ e$ and $\\blank\\circ e'$, there is an equivalence\n    \\begin{equation*}\n      \\Big(\\prd{y:F} h'(f(y))=\\ct{h(y)}{\\mathsf{segment\\usc{}tot}(y)}\\Big)\n      \\simeq \\Big(\\mathsf{tr}_C(p,\\varphi(h))= \\varphi'(h')\\Big).\n    \\end{equation*}\n  \\end{lem}\n\n  \\begin{proof}\n    We first note that we have a commuting square\n    \\begin{equation*}\n      \\begin{tikzcd}\n        \\prd{y:P(x)}c=(x,y) \\arrow[r,\"\\blank\\circ e\"] \\arrow[d,swap,\"\\mathsf{tr}_C(p)\"] & \\prd{y:F}c=(x,e(y)) \\\\\n        \\prd{y':P(x')}c=(x',y') \\arrow[r,swap,\"\\blank\\circ {e'}\"] & \\prd{y':F'}c=(x',e'(y')) \\arrow[u,swap,\"\\psi\"]\n      \\end{tikzcd}\n    \\end{equation*}\n    where $\\psi(h')=\\lam{y}\\ct{h'(f(y))}{\\mathsf{segment\\usc{}tot}(y)^{-1}}$. All the maps in this square are equivalences. In particular, the inverses of the top and bottom maps are $\\varphi$ and $\\varphi'$, respectively. 
The claim follows from this observation, but we will spell out the details.\n\n    Since any equivalence is an embedding, we see immediately that the type $\\mathsf{tr}_C(p)(\\varphi(h))=\\varphi'(h')$ is equivalent to the type\n    \\begin{equation*}\n      \\psi(\\mathsf{tr}_C(p)(\\varphi(h))\\circ e')=\\psi(\\varphi'(h')\\circ e').\n    \\end{equation*}\n    By the commutativity of the square, the left hand side is $h$. The right hand side is $\\psi(h')$. Therefore it follows that\n    \\begin{align*}\n      \\Big(\\mathsf{tr}_C(p)(\\varphi(h))=\\varphi'(h')\\Big)\n      & \\simeq \\Big(h= \\lam{y}\\ct{h'(f(y))}{\\mathsf{segment\\usc{}tot}(y)^{-1}}\\Big) \\\\\n      & \\simeq \\Big(h'\\circ f \\htpy \\lam{y}\\ct{h(y)}{\\mathsf{segment\\usc{}tot}(y)}\\Big).\\qedhere\n    \\end{align*}\n  \\end{proof}\n  \n  Applying these observations to the fundamental cover of the circle, we obtain the following lemma that we will use to prove that the total space of $\\mathcal{E}$ is contractible.\n  \n  \\begin{cor}\\label{cor:construct-contraction-fundamental-cover}\n    In order to show that the total space of $\\mathcal{E}$ is contractible, it suffices to construct a function\n    \\begin{equation*}\n      h : \\prd{k:\\Z}(\\base,0_{\\mathcal{E}})=(\\base,k_{\\mathcal{E}})\n    \\end{equation*}\n    equipped with a homotopy\n    \\begin{equation*}\n      H : \\prd{k:\\Z}h(\\mathsf{succ}(k))=\\ct{h(k)}{\\mathsf{segment\\usc{}helix}(k)}.\n    \\end{equation*}\n  \\end{cor}\n\n  In the next section we establish the dependent universal property of the integers, which we will use with \\cref{cor:construct-contraction-fundamental-cover} to show that the total space of the fundamental cover is contractible.\n  \n\n\\subsection{The dependent universal property of the integers}\n\n\\begin{lem}\\label{lem:elim-Z}\nLet $B$ be a family over $\\Z$, equipped with a term $b_0:B(0)$, and an equivalence\n\\begin{equation*}\ne_k : B(k)\\eqvsym B(\\mathsf{succ}(k))\n\\end{equation*}\nfor each $k:\\Z$. Then there is a dependent function $f:\\prd{k:\\Z}B(k)$ equipped with identifications $f(0)=b_0$ and\n\\begin{equation*}\nf(\\mathsf{succ}(k))=e_k(f(k))\n\\end{equation*}\nfor any $k:\\Z$.\n\\end{lem}\n\n\\begin{proof}\nThe map is defined using the induction principle for the integers, stated in \\cref{lem:Z_ind}. First we take\n\\begin{align*}\nf(-1) & \\defeq e_{-1}^{-1}(b_0) \\\\\nf(0) & \\defeq b_0 \\\\\nf(1) & \\defeq e_{0}(b_0).\n\\end{align*}\nFor the induction step on the negative integers we use\n\\begin{equation*}\n\\lam{n}e_{\\mathsf{neg}(S(n))}^{-1} : \\prd{n:\\N} B(\\mathsf{neg}(n))\\to B(\\mathsf{neg}(S(n)))\n\\end{equation*}\nFor the induction step on the positive integers we use\n\\begin{equation*}\n\\lam{n}e_{\\mathsf{pos}(n)} : \\prd{n:\\N} B(\\mathsf{pos}(n))\\to B(\\mathsf{pos}(S(n))).\n\\end{equation*}\nThe computation rules follow in a straightforward way from the computation rules of $\\Z$-induction and the fact that $e^{-1}$ is an inverse of $e$. \n\\end{proof}\n\n\\begin{eg}\nFor any type $A$, we obtain a map $f:\\Z\\to A$ from any $x:A$ and any equivalence $e:\\eqv{A}{A}$, such that $f(0)=x$ and the square\n\\begin{equation*}\n\\begin{tikzcd}\n\\Z \\arrow[d,swap,\"\\mathsf{succ}\"] \\arrow[r,\"f\"] & A \\arrow[d,\"e\"] \\\\\n\\Z \\arrow[r,swap,\"f\"] & A\n\\end{tikzcd}\n\\end{equation*}\ncommutes. In particular, if we take $A\\jdeq (x=x)$ for some $x:X$, then for any $p:x=x$ we have the equivalence $\\lam{q}\\ct{p}{q}:(x=x)\\to (x=x)$. 
This equivalence induces a map\n\\begin{equation*}\nk\\mapsto p^k : \\Z \\to (x=x),\n\\end{equation*}\nfor any $p:x=x$. This induces the \\define{degree $k$ map} on the circle\n\\begin{equation*}\n\\mathsf{deg}(k) : \\sphere{1}\\to\\sphere{1},\n\\end{equation*}\nfor any $k:\\mathbb{Z}$, see \\cref{ex:circle_degk}.\n\\end{eg}\n\nIn the following theorem we show that the dependent function constructed in \\cref{lem:elim-Z} is unique.\n\n\\begin{thm}\n  Consider a type family $B:\\mathbb{Z}\\to\\UU$ equipped with $b:B(0)$ and a family of equivalences\n  \\begin{equation*}\n    e:\\prd{k:\\Z} \\eqv{B(k)}{B(\\mathsf{succ}(k))}.\n  \\end{equation*}\n  Then the type\n  \\begin{equation*}\n    \\sm{f:\\prd{k:\\Z}B(k)}(f(0)=b)\\times\\prd{k:\\Z}f(\\mathsf{succ}(k))=e_k(f(k))\n  \\end{equation*}\n  is contractible.\n\\end{thm}\n\n\\begin{proof}\n  In \\cref{lem:elim-Z} we have already constructed a term of the asserted type.\n  Therefore it suffices to show that any two terms of this type can be identified.\n  Note that the type $(f,p,H)=(f',p',H')$ is equivalent to the type\n  \\begin{equation*}\n    \\sm{K:f\\htpy f'} (K(0)= \\ct{p}{(p')^{-1}})\\times \\prd{k:\\Z}K(\\mathsf{succ}(k))=\\ct{(\\ct{H(k)}{\\ap{e_k}{K(k)}})}{H'(k)^{-1}}. \n  \\end{equation*}\n  We obtain a term of this type by applying \\cref{lem:elim-Z} to the family $C$ over $\\Z$ given by $C(k)\\defeq f(k)=f'(k)$, which comes equipped with a base point\n  \\begin{equation*}\n    \\ct{p}{(p')^{-1}} : C(0),\n  \\end{equation*}\n  and the family of equivalences\n  \\begin{equation*}\n    \\lam{\\alpha:f(k)=f'(k)}\\ct{(\\ct{H(k)}{\\ap{e_k}{\\alpha}})}{H'(k)^{-1}}:\\prd{k:\\Z}\\eqv{C(k)}{C(\\mathsf{succ}(k))}.\\qedhere\n  \\end{equation*}\n\\end{proof}\n\nOne way of phrasing the following corollary is that $\\Z$ is the `initial type equipped with a point and an automorphism'.\n\n\\begin{cor}\n  For any type $X$ equipped with a base point $x_0:X$ and an automorphism $e:\\eqv{X}{X}$, the type\n  \\begin{equation*}\n    \\sm{f:\\Z\\to X}(f(0)=x_0)\\times ((f \\circ \\mathsf{succ})\\htpy(e\\circ f))\n  \\end{equation*}\n  is contractible.\n\\end{cor}\n\n\n\n\\subsection{The identity type of the circle}\n\n\\begin{lem}\\label{thm:circle_fundamental}\nThe total space $\\sm{t:\\sphere{1}}\\mathcal{E}(t)$ of the fundamental cover of $\\sphere{1}$ is contractible.\\index{circle!fundamental cover!total space is contractible}\n\\end{lem}\n\n\\begin{proof}\n  By \\cref{cor:construct-contraction-fundamental-cover} it suffices to construct\n  a function\n  \\begin{equation*}\n    h : \\prd{k:\\Z}(\\base,0_{\\mathcal{E}})=(\\base,k_{\\mathcal{E}})\n  \\end{equation*}\n  equipped with a homotopy\n  \\begin{equation*}\n    H : \\prd{k:\\Z}h(\\mathsf{succ}(k))=\\ct{h(k)}{\\mathsf{segment\\usc{}helix}(k)}.\n  \\end{equation*}\n  We obtain $h$ and $H$ by the elimination principle of \\cref{lem:elim-Z}. Indeed, the family $P$ over the integers given by $P(k)\\defeq (\\base,0_{\\mathcal{E}})=(\\base,k_{\\mathcal{E}})$ comes equipped with a term $\\refl{(\\base,0_{\\mathcal{E}})}:P(0)$, and a family of equivalences\n  \\begin{equation*}\n    \\prd{k:\\Z}P(k) \\simeq P(\\mathsf{succ}(k))\n  \\end{equation*}\n  given by $k,p\\mapsto \\ct{p}{\\mathsf{segment\\usc{}helix}(k)}$. \n\\end{proof}\n\n\\begin{comment}\n\\begin{proof}\nWe show that the total space satisfies singleton induction (i.e., we apply \\cref{thm:contractible}). Let $P$ be a family over the total space of the fundamental cover, and let $p_0:P(\\base,0_{\\mathcal{E}})$. 
Our goal is to construct a term of type\n\\begin{equation*}\n\\prd{t:\\sphere{1}}{x:\\mathcal{E}(t)} P(t,x).\n\\end{equation*}\nWe do this by induction. For the base case we must construct a term of type\n\\begin{equation*}\n\\prd{k:\\Z}P(\\base,k_{\\mathcal{E}}).\n\\end{equation*}\nSince we have the identifications $s_k: (\\base,k_{\\mathcal{E}})=(\\base,\\mathsf{succ}(k)_{\\mathcal{E}})$, we have the equivalences\n\\begin{equation*}\n\\mathsf{tr}_P(s_k) : \\eqv{P(\\base,k_{\\mathcal{E}})}{P(\\base,\\mathsf{succ}(k)_{\\mathcal{E}})}\n\\end{equation*}\nfor each $k:\\Z$. Thus we obtain a dependent function $f:\\prd{x:\\mathcal{E}(\\base)}P(\\base,x)$ satisfying $f(0_{\\mathcal{E}})=p_0$ and $f(\\mathsf{succ}(k)_{\\mathcal{E}})=\\mathsf{tr}_P(s_k,f(k_{\\mathcal{E}}))$, for each $k:\\Z$. \n\nFor the loop case we must show that\n\\begin{equation*}\n\\mathsf{tr}_Q(\\lloop,f)=f,\n\\end{equation*}\nwhere $Q$ is the family over $\\sphere{1}$ given by $Q(t)\\defeq \\prd{x:\\mathcal{E}(t)} P(t,x)$. By function extensionality it suffices to construct a homotopy, and the transport along $\\lloop$ in $Q$ computes as\n\\begin{equation*}\n\\mathsf{tr}_Q(\\lloop,f)(k_{\\mathcal{E}})= \\mathsf{tr}_P(s_k,f(\\mathsf{succ}^{-1}(k)_{\\mathcal{E}})). \n\\end{equation*}\nTherefore the following computation completes the proof:\n\\begin{align*}\n\\mathsf{tr}_Q(\\lloop,f)(k_{\\mathcal{E}})\n& = \\mathsf{tr}_P(s_k,f(\\mathsf{succ}^{-1}(k)_{\\mathcal{E}})) \\\\\n& = f(\\mathsf{succ}(\\mathsf{succ}^{-1}(k))_{\\mathcal{E}}) \\\\\n& = f(k_{\\mathcal{E}}).\\qedhere\n\\end{align*}\n\\end{proof}\n\\end{comment}\n\n\\begin{thm}\\label{thm:eq-circle}\n  The family of maps\n  \\begin{equation*}\n    \\prd{t:\\sphere{1}} (\\base=t)\\to \\mathcal{E}(t)\n  \\end{equation*}\n  sending $\\refl{\\base}$ to $0_{\\mathcal{E}}$ is a family of equivalences. In particular, the loop space of the circle is equivalent to $\\Z$.\n\\end{thm}\n\n\\begin{proof}\n  This is a direct corollary of \\cref{thm:circle_fundamental,thm:id_fundamental}. \n\\end{proof}\n\n\\begin{cor}\n  The circle is a $1$-type and not a $0$-type.\\index{circle!is a 1-type@{is a $1$-type}}\n\\end{cor}\n\n\\begin{proof}\n  To see that the circle is a $1$-type we have to show that $s=t$ is a $0$-type for every $s,t:\\sphere{1}$. By \\cref{ex:circle_connected} it suffices to show that the loop space of the circle is a $0$-type. This is indeed the case, because $\\Z$ is a $0$-type, and we have an equivalence $(\\base=\\base)\\simeq \\Z$.\n\n  Furthermore, since $\\Z$ is a $0$-type and not a $(-1)$-type, it follows that the circle is a $1$-type and not a $0$-type.\n\\end{proof}\n\n\\begin{exercises}\n\\exercise Show that the map\n  \\begin{equation*}\n    k\\mapsto \\lloop^k : \\Z\\to\\loopspace{\\sphere{1}}\n  \\end{equation*}\n  is a group homomorphism. 
Conclude that the loop space $\\loopspace{\\sphere{1}}$ as a group is isomorphic to $\\Z$.\n  \\exercise\n  \\begin{subexenum}\n  \\item Show that\n    \\begin{equation*}\n      \\prd{x:\\sphere{1}}\\neg\\neg(\\base=x).\n    \\end{equation*}\n  \\item On the other hand, use the fundamental cover of the circle to show that\n    \\begin{equation*}\n      \\neg\\Big(\\prd{x:\\sphere{1}}\\base=x\\Big).\n    \\end{equation*}\n  \\item Conclude that\n    \\begin{equation*}\n      \\neg\\Big(\\prd{X:\\UU} \\neg\\neg X\\to X\\Big)\n    \\end{equation*}\n    for any univalent universe $\\UU$ containing the circle.\n  \\end{subexenum}\n  \\exercise \\label{ex:circle_degk}\n\\begin{subexenum}\n\\item Show that for every $x:X$, we have an equivalence\n\\begin{equation*}\n\\eqv{\\Big(\\sm{f:\\sphere{1}\\to X}f(\\base)= x \\Big)}{(x=x)}\n\\end{equation*}\n\\item Show that for every $t:\\sphere{1}$, we have an equivalence\n\\begin{equation*}\n\\eqv{\\Big(\\sm{f:\\sphere{1}\\to \\sphere{1}}f(\\base)= t \\Big)}{\\Z}\n\\end{equation*}\nThe base point preserving map $f:\\sphere{1}\\to\\sphere{1}$ corresponding to $k:\\Z$ is called the \\define{degree $k$ map} on the circle, and is denoted by $\\mathsf{deg}(k)$.\n\\item Show that for every $t:\\sphere{1}$, we have an equivalence\n\\begin{equation*}\n\\eqv{\\Big(\\sm{e:\\eqv{\\sphere{1}}{\\sphere{1}}}e(\\base)= t \\Big)}{\\bool}\n\\end{equation*}\n\\end{subexenum}\n\\exercise \\label{ex:circle_double_cover} The \\define{(twisted) double cover} of the circle is defined as the type family $\\mathcal{T}\\defeq\\mathcal{D}(\\bool,\\mathsf{neg}):\\sphere{1}\\to\\UU$, where $\\mathsf{neg}:\\eqv{\\bool}{\\bool}$ is the negation equivalence of \\cref{ex:neg_equiv}.\n\\begin{subexenum}\n\\item Show that $\\neg(\\prd{t:\\sphere{1}}\\mathcal{T}(t))$.\n\\item Construct an equivalence $e:\\eqv{\\sphere{1}}{\\sm{t:\\sphere{1}}\\mathcal{T}(t)}$ for which the triangle\n\\begin{equation*}\n\\begin{tikzcd}[column sep=tiny]\n\\sphere{1} \\arrow[rr,\"e\"] \\arrow[dr,swap,\"\\mathsf{deg}(2)\"] & & \\sm{t:\\sphere{1}}\\mathcal{T}(t) \\arrow[dl,\"\\proj 1\"] \\\\\n\\phantom{\\sm{t:\\sphere{1}}\\mathcal{T}(t)} & \\sphere{1}\n\\end{tikzcd}\n\\end{equation*}\ncommutes.\n\\end{subexenum}\n\\exercise Construct an equivalence $\\eqv{(\\eqv{\\sphere{1}}{\\sphere{1}})}{\\sphere{1}+\\sphere{1}}$ for which the triangle\n\\begin{equation*}\n  \\begin{tikzcd}\n    (\\eqv{\\sphere{1}}{\\sphere{1}}) \\arrow[rr,\"\\simeq\"] \\arrow[dr,swap,\"\\evbase\"] & & (\\sphere{1}+\\sphere{1}) \\arrow[dl,\"\\fold\"] \\\\\n    & \\sphere{1}\n  \\end{tikzcd}\n\\end{equation*}\ncommutes. Conclude that a univalent universe containing a circle is not a $1$-type.\n\\exercise \\label{ex:is_invertible_id_S1}\n\\begin{subexenum}\n\\item Construct a family of equivalences\n\\begin{equation*}\n\\prd{t:\\sphere{1}} \\big(\\eqv{(t=t)}{\\Z}\\big).\n\\end{equation*}\n\\item Use \\cref{ex:circle_connected} to show that $\\eqv{(\\idfunc[\\sphere{1}]\\htpy\\idfunc[\\sphere{1}])}{\\Z}$.\n\\item Use \\cref{ex:idfunc_autohtpy} to show that\n\\begin{equation*}\n\\eqv{\\mathsf{has\\usc{}inverse}(\\idfunc[\\sphere{1}])}{\\Z},\n\\end{equation*}\nand conclude that ${\\mathsf{has\\usc{}inverse}}(\\idfunc[\\sphere{1}])\\not\\simeq{\\isequiv(\\idfunc[\\sphere{1}])}$. \n\\end{subexenum}\n\\exercise Consider a map $i:A \\to \\sphere{1}$, and assume that $i$ has a retraction. 
Construct a term of type\n  \\begin{equation*}\n    \\iscontr(A)+\\isequiv(i).\n  \\end{equation*}\n  \\exercise\n  \\begin{subexenum}\n  \\item Show that the multiplicative operation on the circle is associative, i.e.~construct an identification\n    \\begin{equation*}\n      \\assoc_{\\sphere{1}}(x,y,z) :\n      \\mulcircle(\\mulcircle(x,y),z)=\\mulcircle(x,\\mulcircle(y,z))\n    \\end{equation*}\n    for any $x,y,z:\\sphere{1}$.\n  \\item Show that the associator satisfies unit laws, in the sense that the following triangles commute:\n    \\begin{equation*}\n      \\begin{tikzcd}[column sep=-1em]\n        \\mulcircle(\\mulcircle(\\base,x),y) \\arrow[rr,equals] \\arrow[dr,equals] & & \\mulcircle(\\base,\\mulcircle(x,y)) \\arrow[dl,equals] \\\\\n        & \\mulcircle(x,y)\n      \\end{tikzcd}\n    \\end{equation*}\n    \\begin{equation*}\n      \\begin{tikzcd}[column sep=-1em]\n        \\mulcircle(\\mulcircle(x,\\base),y) \\arrow[rr,equals] \\arrow[dr,equals] & & \\mulcircle(x,\\mulcircle(\\base,y)) \\arrow[dl,equals] \\\\\n        & \\mulcircle(x,y)\n      \\end{tikzcd}\n    \\end{equation*}\n    \\begin{equation*}\n      \\begin{tikzcd}[column sep=-1em]\n        \\mulcircle(\\mulcircle(x,y),\\base) \\arrow[rr,equals] \\arrow[dr,equals] & & \\mulcircle(x,\\mulcircle(y,\\base)) \\arrow[dl,equals] \\\\\n        & \\mulcircle(x,y).\n      \\end{tikzcd}\n    \\end{equation*}\n  \\item State the laws that compute\n    \\begin{align*}\n      & \\assoc_{\\sphere{1}}(\\base,\\base,x) \\\\\n      & \\assoc_{\\sphere{1}}(\\base,x,\\base) \\\\\n      & \\assoc_{\\sphere{1}}(x,\\base,\\base) \\\\\n      & \\assoc_{\\sphere{1}}(\\base,\\base,\\base).\n    \\end{align*}\n    Note: the first three laws should be $3$-cells and the last law should be a $4$-cell. The laws are automatically satisfied, since the circle is a $1$-type.\n  \\end{subexenum}\n  \\exercise Construct the \\define{Mac Lane pentagon} for the circle, i.e.~show that the pentagon\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=-6em]\n      &[-6em] \\mulcircle(\\mulcircle(\\mulcircle(x,y),z),w) \\arrow[rr,equals] \\arrow[dl,equals] & & \\mulcircle(\\mulcircle(x,y),\\mulcircle(z,w)) \\arrow[dr,equals] &[-6em] \\\\\n      \\mulcircle(\\mulcircle(x,\\mulcircle(y,z)),w) \\arrow[drr,equals] & & & & \\mulcircle(x,\\mulcircle(y,\\mulcircle(z,w))) \\\\\n      & & \\mulcircle(x,\\mulcircle(\\mulcircle(y,z),w)) \\arrow[urr,equals]\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes for every $x,y,z,w:\\sphere{1}$.\n  \\exercise Recall from \\cref{ex:surjective-precomp} that if $f:A\\to B$ is a surjective map, then the precomposition map\n  \\begin{equation*}\n    \\blank\\circ f : (B\\to C)\\to (A\\to C)\n  \\end{equation*}\n  is an embedding for every set $C$. 
\n  Give an example of a surjective map $f:A\\to B$, such that the precomposition function\n    \\begin{equation*}\n      \\blank\\circ f:(B\\to \\sphere{1})\\to (A\\to \\sphere{1})\n    \\end{equation*}\n    is \\emph{not} an embedding, showing that the condition that $C$ is a set is essential.\n\\end{exercises}\n\n\\index{circle!fundamental cover|)}\n\\index{fundamental cover!of the circle|)}\n\\index{circle|)}\n\\index{inductive type!circle|)}\n
{"text": "\\subsection{Implementing Statistics Calculator in Pascal} % (fold)\n\\label{sub:implementing_statistics_calculator_in_pas}\n\n\\sref{sec:arrays_using_these_concepts} of this Chapter introduced the Statistics Calculator. A partial implementation of this program is shown in Listing \\ref{lst:pas-stats-calc}, with the logic in the \\texttt{max} and \\texttt{variance} functions still to be implemented. This program reads a number of values from the user into an array, and then calculates and outputs the \\textbf{sum}, \\textbf{mean}, \\textbf{variance}, and \\textbf{maximum} value from this data.\n\n\\straightcode{\\pascode{lst:pas-stats-calc}{Pascal code for the Statistics Calculator}{code/pascal/array/SimpleStats.pas}}\n\n\\mynote{\n\\begin{itemize}\n  \\item \\texttt{SysUtils} is used to give access to the \\texttt{TryStrToFloat} function. This \\emph{tries} to convert a string to a double, and returns a boolean to indicate if it succeeded.\n  \\item The arrays in this program are passed by reference using \\textbf{const} (for in only) or \\textbf{var} (for in and out).\n  \\item The \\texttt{Low(\\ldots)} function gives you the first index of the array, \\texttt{High(\\ldots)} gives you the last index of the array, and \\texttt{Length(\\ldots)} tells you the number of elements in the array.\n\\end{itemize}\n}\n\n% subsection implementing_statistics_calculator_in_c (end)\n", "meta": {"hexsha": "469f187e3726e804964fd266ca0b0708a4759234", "size": 1350, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "topics/arrays/pascal/pas-stats-calc.tex", "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_issues_repo_path": "topics/arrays/pascal/pas-stats-calc.tex", "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_forks_repo_path": "topics/arrays/pascal/pas-stats-calc.tex", "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "avg_line_length": 79.4117647059, "max_line_length": 465, "alphanum_fraction": 0.7807407407, "num_tokens": 355, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8354835330070839, "lm_q1q2_score": 0.5977249675952213}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{III}\n\n\\def\\ntitle{Differential Geometry}\n\\def\\nlecturer{A.\\ Kovalev}\n\n\\def\\nterm{Michaelmas}\n\\def\\nyear{2018}\n\n\\input{header}\n\n\\DeclareMathOperator{\\Gr}{Gr} % Grassmannian\n\\DeclareMathOperator{\\Lie}{Lie} % Lie functor\n\\DeclareMathOperator{\\supp}{supp} % support\n\\newcommand{\\w}{\\wedge}\n\\DeclareMathOperator{\\Ric}{Ric} % Ricci curvature\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Manifolds}\n\nWe want to generalise curves and surfaces in \\(\\R^2\\). A curve is a map \\(\\gamma: \\R \\supseteq I \\to \\R^2\\) or \\(\\R^3\\) that satisfies certain properties we'll find out in a minute. Clearly continuity is necessary but not sufficent, as evidenced by the famous Peano space-filling curve. Smoothness is not quite enough either, as \\(t \\mapsto (t^2, t^3)\\) has a cusp at the origin. The correct requirement will be that \\(\\gamma\\) has regular parameterisation, i.e.\\ \\(|\\dot \\gamma(t)| \\neq 0\\) for all \\(t\\).\n\nSimilarly, a surface should be defined as as a map \\(r: \\R^2 \\supseteq D \\to \\R^3\\) with a regular parameterisation, i.e.\\ \\(r \\in C^\\infty(D)\\) and \\(\\left| \\frac{\\p r}{\\p u} \\times \\frac{\\p r}{\\p v} \\right| \\neq 0\\) for all \\((u, v) \\in D\\).\n\nWe may follow this route and generalise to (hyper)surfaces in \\(\\R^n\\), which will be a generalisation of classical differential geometry of curves and surfaces in \\(\\R^3\\). The good thing is that we can readily apply calculus and it is easy to construct these objects. However, it does suffer from the prolblem of different parameterisations give rise to different geometric objects, as well as some surface requiring more than one parameterisation. Although these can be bypassed more or less, the more serious drawback is the extra technical complexity determined by the higher dimension of the ambient space.\n\nA better concept is \\emph{smooth manifolds}. We first begin with a review of topological structure, which every manifold possesses.\n\n\\begin{definition}[topological space]\n  A \\emph{topological space} \\(M\\) is a choice of class of the \\emph{open sets} such that\n  \\begin{enumerate}\n  \\item \\(\\emptyset\\) and \\(M\\) are open,\n  \\item if \\(U\\) and \\(U'\\) are open then so is \\(U \\cap U'\\),\n  \\item for anly collection of open sets, the union is open.\n  \\end{enumerate}\n\\end{definition}\n\nIn this course, we always require a topological space to be Hausdorff and second countable.\n\n\\begin{definition}[local coordinate chart]\\index{chart}\n  A \\emph{local coordinate chart} on a topological space \\(M\\) is a homeomorphism \\(\\varphi: U \\to V\\) where \\(U \\subseteq M\\) and \\(V \\subseteq \\R^d\\) are open. 
\\(U\\) is a \\emph{coordinate neighbourhood}.\n\\end{definition}\n\n\\begin{definition}[\\(C^\\infty\\)-differentiable structure]\\index{differentiable structure}\\index{atlas}\n  A \\emph{\\(C^\\infty\\)-differentiable structure} on a topological space \\(M\\) is a collection of charts \\(\\{\\varphi_\\alpha: U_\\alpha \\to V_\\alpha\\}\\) where \\(V_\\alpha \\subseteq \\R^d\\) for all \\(\\alpha\\) such that\n    \\begin{enumerate}\n    \\item \\(\\{U_\\alpha\\}\\) covers \\(M\\), i.e.\\ \\(M = \\bigcup U_\\alpha\\),\n    \\item compatibility condition: for all \\(\\alpha, \\beta\\), \\(\\varphi_\\beta \\compose \\varphi_\\alpha^{-1}\\) is \\(C^\\infty\\) wherever defined, i.e.\\ on \\(\\varphi_\\alpha(U_\\alpha \\cap U_\\beta)\\),\n    \\item maximality: if \\(\\varphi\\) is compatible with all the \\(\\varphi_\\alpha\\)'s then \\(\\varphi\\) is in the collection.\n    \\end{enumerate}\n    \n    The collection of charts is called an \\emph{atlas}.\n\\end{definition}\n\nNote that 2 implies that \\(\\varphi_\\beta \\compose \\varphi_\\alpha^{-1}\\) is a diffeomorphism.\n\n\\begin{definition}[manifold]\\index{manifold}\n  A \\emph{manifold} is a Hausdorff, second countable topological space with a \\(C^\\infty\\)-differentiable structure.\n\\end{definition}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item In practice, we almost never specify the topological structure. Instead, we can induce a topology from a \\(C^\\infty\\) structure by declaring \\(D \\subseteq M\\) open if and only if for all \\((\\varphi_\\alpha, U_\\alpha)\\), \\(\\varphi_\\alpha(U_\\alpha \\cap D)\\) is open in \\(\\R^d\\).\n  \\item We may replace \\(C^\\infty\\) by \\(C^k\\) for \\(k > 0\\) finite. If we set \\(k = 0\\), the objects become topological manifolds. On the other hand, using \\(\\C^n\\) and holomorphic maps we get complex manifolds.\n  \\item The requirements of being Hausdorff and second countable are there for rather technical reasons. In some cases we may drop the Hausdorffness requirement, and such examples do arise naturally. However in that case we lose uniqueness of limits. Similarly, non-second countable spaces, such as the disjoint union of uncountably many copies of \\(\\R^n\\), can acquire a manifold-like structure if we relax the definition.\n  \\end{enumerate}\n\\end{remark}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(\\R^d\\) covered by the single chart \\(\\varphi = \\id\\).\n  \\item Unit sphere \\(S^n = \\{\\V x = (x_0, \\dots, x_n) \\in \\R^{n + 1}: \\sum_{i = 0}^n x_i^2 = 1\\}\\). The charts are stereographic projections\n    \\begin{align}\n      \\varphi(\\V x) &= \\frac{1}{1 - x_0} (x_1, \\dots, x_n) \\\\\n      \\psi(\\V x) &= \\frac{1}{1 + x_0} (x_1, \\dots, x_n)\n    \\end{align}\n    where \\(\\varphi\\) is defined for all points on \\(S^n\\) except \\((1, 0, \\dots, 0)\\) and similarly for \\(\\psi\\). Suppose \\(\\varphi(P) = u, \\psi(P) = v\\), then by basic geometry \\(v = \\psi \\compose \\varphi^{-1}(u) = \\frac{u}{\\norm{u}^2}\\).\n  \\item Given \\(M_1, M_2\\) manifolds of dimensions \\(d_1\\) and \\(d_2\\), \\(M_1 \\times M_2\\) is a manifold of dimension \\(d_1 + d_2\\).\n
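    As a quick sketch of why (the notation here is ours, not fixed earlier): if \\(\\{(U_\\alpha, \\varphi_\\alpha)\\}\\) is an atlas on \\(M_1\\) and \\(\\{(W_\\beta, \\psi_\\beta)\\}\\) one on \\(M_2\\), then the product charts\n    \\[\n      \\varphi_\\alpha \\times \\psi_\\beta : U_\\alpha \\times W_\\beta \\to \\R^{d_1} \\times \\R^{d_2} = \\R^{d_1 + d_2}\n    \\]\n    cover \\(M_1 \\times M_2\\), and the transition maps \\((\\varphi_{\\alpha'} \\compose \\varphi_\\alpha^{-1}) \\times (\\psi_{\\beta'} \\compose \\psi_\\beta^{-1})\\) are \\(C^\\infty\\) since each factor is.\n    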
We define the \\emph{\\(n\\)-torus} to be \\(T_n = \\underbrace{S^1 \\times \\dots \\times S^1}_{n}\\).\n  \\item An open subset \\(U\\) of a manifold \\(M\\) is a manifold.\n  \\item The \\emph{real projective space},\n    \\[\n      \\R P^n = \\{\\text{all straight lines through \\(0\\) in } \\R^{n + 1}\\}.\n    \\]\n    The points in the space are \\(x_0 : \\dots : x_n\\) where \\(x_i\\)'s are not all zero. Note that\n    \\[\n      x_0 : \\dots : x_n = \\lambda x_0 : \\dots : \\lambda x_n\n    \\]\n    for all \\(\\lambda \\neq 0\\). As per a previous remark, we shall induce the topology by a \\(C^\\infty\\) structure. The charts are \\((U_i, \\varphi_i)\\) where \\(U_i = \\{x_i \\neq 0\\}\\), and\n    \\[\n      \\varphi_i(x_0 : \\dots : x_n) = (\\frac{x_0}{x_i}, \\dots, \\hat i, \\dots, \\frac{x_n}{x_i})\n    \\]\n    where \\(\\hat i\\) denotes that the \\(i\\)th coordinate is omitted. The \\(U_i\\)'s cover \\(\\R P^n\\) so we are left to check compatibility. For \\(i < j\\),\n    \\[\n      \\varphi_j \\compose \\varphi_i^{-1} : (y_1, \\dots, y_n)\n      \\mapsto y_1 : \\dots : \\underbrace{1}_{i\\text{th}} : \\dots : y_n\n      \\mapsto (\\frac{y_1}{y_j}, \\dots, \\frac{1}{y_j}, \\dots, \\hat j, \\dots, \\frac{y_n}{y_j})\n    \\]\n    is smooth. Since \\(i\\) and \\(j\\) are arbitrary, \\(\\R P^n\\) is an \\(n\\)-manifold.\n\n    Similarly we can check \\(\\C P^n\\) is a \\(2n\\)-manifold, and \\(\\H P^n\\) is a \\(4n\\)-manifold (but this is a bit tricky due to noncommutativity).\n  \\item Grassmannians (over \\(\\R\\) or \\(\\C\\)): we define \\(\\Gr(k, n)\\) to be all \\(k\\)-dimensional subspaces of the \\(n\\)-dimensional vector space, which generalises the projective space. For real vector spaces, for example, \\(\\R P^n = \\Gr(1, n + 1)\\). We can check \\(\\Gr(k, n)\\) is a manifold of dimension \\(k(n - k)\\).\n\n    The construction is a bit technical so we will give an example of one chart. Let \\(U\\) be the \\(k\\)-subspaces obtainable as the span of rows of \\(k \\times n\\) matrices of the form \\((I_k \\quad *)\\), and the local coordinate map sends such a \\(k\\)-subspace to the \\(k \\times (n - k)\\) block \\(*\\). Since the first \\(k\\) columns are linearly independent, we call \\(U = U_{1 < 2 < \\dots < k}\\). More generally the domains of charts take the form \\(U_{1 \\leq i_1 < \\dots < i_k \\leq n}\\). It is an exercise to check that this gives a valid \\(C^\\infty\\) structure.\n  \\item A non-example: define an equivalence relation \\((x, y) \\sim (\\lambda x, \\frac{y}{\\lambda})\\) for all \\(\\lambda \\neq 0\\) and define \\(X := \\R^2/\\sim\\). \\(\\{xy = c\\}\\) is one equivalence class if \\(c \\neq 0\\). If \\(c = 0\\), \\(\\{xy = 0\\}\\) is three classes \\(\\{(x, 0): x \\neq 0\\}, \\{(0, 0)\\}, \\{(0, y): y \\neq 0\\}\\). Thus\n    \\[\n      X \\cong (-\\infty, 0) \\cup \\{0', 0'', 0'''\\} \\cup (0, \\infty).\n    \\]\n    Define charts\n    \\[\n      \\varphi_i: (-\\infty, 0) \\cup 0^{(i)} \\cup (0, \\infty) \\to \\R\n    \\]\n    in the obvious ways.\n    
We can check that it gives \\(X\\) a valid \\(C^\\infty\\) structure, except that the induced topology is non-Hausdorff!\n  \\item For a non-second countable example, see example sheet 1 Q12.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{definition}[smooth map]\\index{smooth map}\n  Let \\(M, N\\) be manifolds. A continuous map \\(f: M \\to N\\) is \\emph{smooth} or \\(C^\\infty\\) if for any \\(p \\in M\\), there exist charts \\((U, \\varphi)\\) on \\(M\\), \\((V, \\psi)\\) on \\(N\\) such that \\(p \\in U, f(p) \\in V\\) and\n  \\[\n    \\psi \\compose f \\compose \\varphi^{-1}\n  \\]\n  is smooth wherever it is defined, i.e.\\ on \\(\\varphi(U \\cap f^{-1}(V)) \\subseteq \\R^n\\) where \\(n = \\dim M\\).\n\\end{definition}\n\n\\begin{remark}\n  Note that by continuity, \\(\\varphi(U \\cap f^{-1}(V))\\) is necessarily open. Of course we can do it differently by not requiring \\(f\\) to be continuous a priori but instead asking the above set to be open.\n\\end{remark}\n\nFor any charts \\(\\tilde \\varphi, \\tilde \\psi\\),\n\\[\n  \\tilde \\psi \\compose f \\compose \\tilde \\varphi^{-1} = (\\tilde \\psi \\compose \\psi^{-1}) \\compose (\\psi \\compose f \\compose \\varphi^{-1}) \\compose (\\varphi \\compose \\tilde \\varphi^{-1})\n\\]\nis a composition of smooth maps so is smooth. Thus smoothness of a map is independent of charts.\n\n\\begin{definition}[diffeomorphism]\\index{diffeomorphism}\n  A smooth map \\(f: M \\to N\\) is a \\emph{diffeomorphism} if \\(f\\) is bijective and \\(f^{-1}\\) is smooth. If such \\(f\\) exists, \\(M\\) and \\(N\\) are \\emph{diffeomorphic}.\n\\end{definition}\n\n\\begin{remark}\\leavevmode\n\\begin{enumerate}\n\\item Our definition of smoothness is a generalisation of that in calculus. More precisely, \\(f: \\R^n \\to \\R^m\\) is smooth if and only if it is smooth in the calculus sense.\n\\item Any chart \\(\\varphi: U \\to \\R^d\\) is a diffeomorphism onto its image.\n\\item Composition of smooth maps is smooth.\n\\end{enumerate}\n\\end{remark}\n\n\\section{Matrix Lie groups}\n\nWe can view \\(\\GL(n, \\R)\\) as an array of numbers and thus embed it in \\(\\R^{n^2}\\). Furthermore, by considering the determinant function we see it is an open subset, so it is an \\(n^2\\)-manifold. Matrix multiplication is obviously smooth. In the same vein we have \\(\\GL(n, \\C)\\), \\(\\SL(n, \\R)\\) etc.\\ as manifolds with smooth multiplications.\n\n\\begin{definition}[Lie group]\\index{Lie group}\n  A group \\(G\\) is a \\emph{Lie group} if \\(G\\) is a manifold and has a compatible group structure, i.e.\\ the map \\((\\sigma, \\tau) \\mapsto \\sigma\\tau^{-1}\\) is smooth.\n\\end{definition}\n\nBefore going further with Lie groups, let's have a short digression into analysis to discuss the exponential map. For a complex \\(n \\times n\\) matrix \\(A = (a_{ij})\\), define a norm\n\\[\n  |A| = n \\cdot \\max_{ij} |a_{ij}|\n\\]\nwhere the \\(n\\) in front is such that \\(|AB| \\leq |A| \\cdot |B|\\). We now define\n\\[\n  \\exp(A) := I + A + \\frac{1}{2} A^2 + \\dots + \\frac{1}{n!} A^n + \\dots\n\\]\n\nOf course we have to check the series makes sense: in fact \\(\\exp A\\) is absolutely convergent for all \\(A\\) as \\(\\left| \\frac{A^n}{n!} \\right| \\leq \\frac{|A|^n}{n!}\\). The series is also uniformly convergent on any compact set, by the Weierstrass \\(M\\)-test. Therefore \\(\\exp\\) is a well-defined continuous map.\n\nIn fact, the map is smooth although the proof is technical. A sketch of the proof is like this: note that \\(f: A \\mapsto A^m\\) is smooth for all \\(m \\in \\N\\).\n
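Indeed, by the product rule on matrices (a standard computation, spelled out here for convenience),\n\\[\n  df_A : H \\mapsto \\sum_{i = 0}^{m - 1} A^i H A^{m - 1 - i},\n\\]\na sum of \\(m\\) terms, each of norm at most \\(|A|^{m - 1} |H|\\).\n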
For \\(m = 2\\), \\(df_A: H \\mapsto HA + AH\\) so \\(\\norm{df_A} \\leq 2 |A|\\); so for all \\(m\\),\n\\[\n  \\norm{df_A} \\leq m |A|^{m - 1}.\n\\]\nWe can thus term-by-term differentiate \\(\\exp\\) and get a locally uniformly convergent series and use the above estimate to bound derivatives.\n\nIt is easy to check that:\n\\begin{enumerate}\n\\item \\(\\exp (A^t) = (\\exp A)^t\\) where \\(A^t\\) is the transpose of \\(A\\).\n\\item \\(\\exp (CAC^{-1}) = C (\\exp A) C^{-1}\\) for \\(C\\) nonsingular.\n\\item In general \\(\\exp (A + B) \\neq \\exp A \\exp B\\), though equality holds if \\(AB = BA\\).\n\\item \\(\\exp A \\exp (-A) = I\\) for any matrix \\(A\\).\n\\end{enumerate}\n\nThe second property prompts us to put \\(A\\) into Jordan normal form before computing \\(\\exp A\\).\n\nUsing the series\n\\[\n  \\log (I + A) = A - \\frac{A^2}{2} + \\dots + (-1)^{n + 1} \\frac{A^n}{n} + \\dots\n\\]\nwe can tackle this similarly to \\(\\exp\\) to check \\(\\log(I + A)\\) is smooth on \\(\\{|A| < 1\\}\\). We can also check\n\\[\n  \\exp (\\log A) = A\n\\]\nif \\(|A - I| < 1\\), with a proof given by manipulation of double indexed series, which is valid due to absolute convergence. The expression \\(\\log (\\exp A)\\) is more subtle. Clearly we need \\(|\\exp A - I| < 1\\). But it is not sufficient: if\n\\[\n  A_\\theta =\n  \\begin{pmatrix}\n    0 & -\\theta \\\\\n    \\theta & 0\n  \\end{pmatrix}\n\\]\nwhere \\(\\theta \\in \\R\\), then\n\\[\n  \\exp(A_\\theta) =\n  \\begin{pmatrix}\n    \\cos \\theta & - \\sin \\theta \\\\\n    \\sin \\theta & \\cos \\theta\n  \\end{pmatrix}.\n\\]\nPut \\(\\theta = 2\\pi\\), \\(\\exp A - I = 0\\) but\n\\[\n  \\log (\\exp A_{2\\pi}) = 0 \\neq A_{2\\pi}.\n\\]\nThe reason is that the series is no longer absolutely convergent. If we add the additional condition that \\(|A| < \\log 2\\) then\n\\[\n  \\log (\\exp A) = A.\n\\]\nIt is left as an exercise. (Hint: \\(|\\exp |A| - 1| < 1\\) implies absolute convergence.)\n\n\\begin{eg}[orthogonal group]\n  Recall that\n  \\[\n    O(n) = \\{A \\in \\GL(n, \\R), AA^t = I\\}.\n  \\]\n  Let \\(A \\in O(n)\\) with \\(|A - I| < 1\\). Let \\(B = \\log A\\) so \\(e^B = A\\). There exists \\(0 < \\varepsilon < 1\\) such that whenever \\(|A - I| < \\varepsilon\\), we have \\(|B| < \\log 2\\) using continuity of \\(\\log\\). Then\n  \\[\n    e^B e^{B^t} = AA^t = I\n  \\]\n  so\n  \\[\n    e^B = A = (A^t)^{-1} = (e^{B^t})^{-1} = e^{-B^t}.\n  \\]\n  Now \\(|B^t| = |B| < \\log 2\\). Taking \\(\\log\\), we find that \\(B = -B^t\\) so \\(B\\) is a skew-symmetric matrix.\n\n  Conversely, if \\(B = - B^t\\), \\(|B| < \\log 2\\) then\n  \\[\n    (e^B)^t = e^{B^t} = e^{-B} = (e^B)^{-1}\n  \\]\n  so \\(A = e^B \\in O(n)\\).\n\n\\begin{proposition}\n  \\(O(n)\\) has a \\(C^\\infty\\) structure making it a manifold and Lie group of dimension \\(\\frac{n(n - 1)}{2}\\).\n\\end{proposition}\n\n\\begin{proof}\n  Put\n  \\[\n    V_0 := \\{B: B \\text{ skew-symmetric}, |B| < \\log 2\\}\n  \\]\n  and \\(U := \\exp (V_0)\\), an open neighbourhood of \\(I \\in O(n)\\). Let\n  \\begin{align*}\n    h: U &\\to V_0 \\\\\n    A &\\mapsto \\log A\n  \\end{align*}\n  which is a well-defined homeomorphism onto \\(V_0\\), an open subset of the skew-symmetric matrices, which can be identified with \\(\\R^{n(n - 1)/2}\\).\n\n  Now we construct the charts \\((U_C, h_C)\\). For all \\(C \\in O(n)\\), put \\(U_C := \\{CA: A \\in U\\}\\), i.e.\\ left translation of \\(U\\) by \\(C\\).\n  
Define\n  \\begin{align*}\n    h_C: U_C &\\to V_0 \\\\\n    A &\\mapsto \\log (C^{-1}A)\n  \\end{align*}\n  which is a homeomorphism, being the composition of left translation by \\(C^{-1}\\) (a homeomorphism) with \\(h\\). To check they form an atlas, first note that \\(C \\in U_C\\) so \\(O(n) = \\bigcup_{C \\in O(n)} U_C\\). Furthermore\n  \\[\n    h_{C_2} \\compose h_{C_1}^{-1} (B) = h_{C_2} (C_1 e^B) = \\log (C_2^{-1}C_1 e^B)\n  \\]\n  which is smooth since it is the composition of smooth maps. Thus \\(O(n)\\) is a manifold.\n\n  To check compatibility of group axioms, define\n  \\begin{align*}\n    F: O(n) \\times O(n) &\\to O(n) \\\\\n    (A_1, A_2) &\\mapsto A_1A_2^{-1}\n  \\end{align*}\n  In local coordinates, it is\n  \\begin{align*}\n    &h_{A_1A_2^{-1}} (F(h_{A_1}^{-1}(B_1), h_{A_2}^{-1}(B_2))) \\\\\n    =& \\log [(A_1A_2^{-1})^{-1}A_1 e^{B_1} (A_2 e^{B_2})^{-1}] \\\\\n    =& \\log (A_2 e^{B_1} e^{-B_2} A_2^{-1})\n  \\end{align*}\n  which is smooth.\n\\end{proof}\n\\end{eg}\n\nThe same construction works for other classical groups of matrices. See example sheet 1 Q4.\n\n\\section{Tangent space to manifolds}\n\nConsider a curve in \\(\\R^n\\), defined by a smooth parameterisation\n\\[\n  x(t) = (x_i(t))_{i = 1}^n\n\\]\nsuch that \\(x(0) = p \\in \\R^n\\). Then the tangent to the curve at \\(p\\) is the velocity\n\\[\n  \\dot x(0) \\in T_p\\R^n \\cong \\R^n.\n\\]\nLet \\(y = y(x)\\) be a \\(C^\\infty\\) change of variables, i.e.\\ new local coordinates. Then\n\\[\n  \\frac{d}{dt} \\Big|_{t = 0} y(x(t)) = \\underbrace{\\frac{D y}{D x}}_{\\text{Jacobian}}(p) \\dot x(0)\n  = \\left( \\sum_{j = 1}^n \\frac{\\p y_i}{\\p x_j} a_j \\right)_{i = 1}^n\n\\]\nwhere \\(a_j = \\dot x_j(0)\\).\n\n\\begin{definition}\n  A \\emph{tangent vector} \\(a\\) to a manifold \\(M\\) at a point \\(p \\in M\\) is the assignment, to each chart \\((U, \\varphi)\\) with \\(p \\in U\\), of an \\(n\\)-tuple \\((a_1, \\dots, a_n) \\in \\R^n\\), where \\(n = \\dim M\\), so that for another chart \\((U', \\varphi')\\), \\(p \\in U'\\) with local coordinates \\((x_1, \\dots, x_n)\\) and \\((x_1', \\dots, x_n')\\), we have\n  \\[\n    a_i' = \\sum_{j = 1}^n \\frac{\\p x_i'}{\\p x_j} (p) a_j.\n  \\]\n\\end{definition}\n\nThis is sometimes known as the tensorial definition of tangent space. Other equivalent definitions involve flows or derivations on germs of smooth functions.\n\n\\begin{definition}[tangent space]\\index{tangent space}\n  The \\emph{tangent space} \\(T_pM\\) is the set of all tangent vectors to \\(M\\) at \\(p\\).\n\\end{definition}\n\nIt follows that \\(T_pM\\) is an \\(n\\)-dimensional real vector space. Thus \\(T_pM \\cong \\R^n\\) although this isomorphism is not canonical. But given a local coordinate chart with coordinates \\((x_1, \\dots, x_n)\\), the tuple \\((0, \\dots, 1, \\dots, 0)\\) with \\(1\\) in \\(i\\)th position in \\(\\R^n\\) has image \\(\\frac{\\p}{\\p x_i} (p)\\) under this isomorphism. Recall the chain rule for partial derivatives:\n\\[\n  \\frac{\\p}{\\p x_j'} = \\sum_{i = 1}^n \\frac{\\p x_i}{\\p x_j'} \\frac{\\p}{\\p x_i}\n\\]\nwhere \\((x_1', \\dots, x_n')\\) is another chart. This is precisely the reason we impose upon tangent vectors the transformation rule, namely so that tangent vectors become a well-defined derivation of smooth functions at \\(p\\): let \\(a = \\sum_i a_i \\frac{\\p}{\\p x_i} (p) \\in T_pM\\) where \\(x_i\\)'s are local coordinates around \\(p\\).\n
Then \\(a\\) defines a first order \\emph{derivation} at \\(p\\),\n\\begin{align*}\n  a: C^\\infty(M) &\\to \\R \\\\\n  f &\\mapsto \\sum_i a_i \\frac{\\partial f}{\\partial x_i}(p)\n\\end{align*}\n(where we use the same chart on the RHS), which is a well-defined map independent of the choice of coordinates. We can interpret, with the \\(x_i\\)'s,\n\\[\n  a(f) = \\frac{d}{dt} \\Big|_{t = 0} f(x(t))\n\\]\nfor all \\(x: (-\\varepsilon, \\varepsilon) \\to M\\) smooth, \\(x(0) = p\\) and \\(\\dot x(0) = a\\).\n\nNow for another choice \\(\\tilde x_i\\) of local coordinates,\n\\[\n  \\frac{d}{dt} \\Big|_{t = 0} f(\\tilde x(t))\n  = \\sum_j \\frac{\\partial f}{\\partial \\tilde x_j} (p) \\dot{\\tilde x}_j(0)\n  = \\sum_{j, i} \\frac{\\partial f}{\\partial \\tilde x_j}(p) \\frac{\\partial \\tilde x_j}{\\partial x_i}(p) \\dot x_i(0)\n\\]\nby the transformation law for tangent vectors.\n\nThe derivations satisfy the Leibniz rule, i.e.\\\n\\[\n  a(fg) = a(f)g(p) + f(p)a(g).\n\\]\nConversely, every linear map \\(C^\\infty(M) \\to \\R\\) satisfying the Leibniz rule at \\(p\\) arises from some \\(a \\in T_pM\\). This is left as an exercise.\n\n\\begin{eg}\n  An example from classical differential geometry. Consider a surface \\(r = r(u, v): D \\to S\\) where \\(D \\subseteq \\R^2\\) and \\(S = r(D) \\subseteq \\R^3\\). Then \\(S\\) is a manifold with \\(\\varphi = r^{-1}\\) as a chart. Then \\(r_u, r_v\\) at \\(p \\in S\\) correspond to \\(\\frac{\\partial}{\\partial u}, \\frac{\\partial}{\\partial v}\\) in our theory.\n\\end{eg}\n\n\\subsection{Lie algebra}\n\nFor a Lie group, the tangent spaces get an ``infinitesimal'' version of the group multiplication.\n\n\\begin{definition}\n  A \\emph{Lie algebra} is a vector space with a bilinear multiplication \\([\\cdot, \\cdot]\\), i.e.\\ a Lie bracket such that\n  \\begin{enumerate}\n  \\item anticommutativity: \\([a, b] = -[b, a]\\),\n  \\item Jacobi identity: \\([[a, b], c] + [[b, c], a] + [[c, a], b] = 0\\).\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{theorem}\n  Let \\(G\\) be a Lie group of \\(n \\times n\\) (real or complex) matrices such that \\(\\log\\) defines a coordinate chart near \\(I \\in G\\), i.e.\\ the image of \\(\\log\\) near \\(I\\) is an open set in some real vector subspace of \\(\\R^{n^2}\\). Identify \\(\\mathfrak g = T_IG\\) with this real vector subspace. Then \\(\\mathfrak g\\) is a Lie algebra with\n  \\[\n    [B_1, B_2] := B_1B_2 - B_2B_1\n  \\]\n  for \\(B_1, B_2 \\in \\mathfrak g\\).\n\\end{theorem}\n\n\\begin{proof}\n  Check that \\(\\mathfrak g\\) is a vector space and \\([\\cdot, \\cdot]\\) is anticommutative. The Jacobi identity holds for matrices (straightforward check).\n\n  What is left is to show that if \\(B_1, B_2 \\in \\mathfrak g\\) then \\([B_1, B_2] \\in \\mathfrak g\\). Consider\n  \\[\n    A(t) = \\exp (B_1 t) \\exp (B_2 t) \\exp(-B_1t) \\exp (-B_2t),\n  \\]\n  the commutator of two elements in \\(G\\). Then \\(A(0) = I\\). Expanding \\(\\exp\\), we get\n  \\[\n    A(t) = I + [B_1, B_2] t^2 + o(t^2)\n  \\]\n  as \\(t \\to 0\\) so\n  \\[\n    B(t) = \\log A(t) = [B_1, B_2] t^2 + o(t^2).\n  \\]\n  In addition \\(\\exp B(t) = A(t)\\) holds for \\(|t|\\) sufficiently small so \\(B(t) \\in \\mathfrak g\\) as it is in the image of the \\(\\log\\) chart. It follows that \\(\\frac{B(t)}{t^2} \\in \\mathfrak g\\) for \\(t \\neq 0\\) as \\(\\mathfrak g\\) is a vector space.\n  
Thus\n  \\[\n    [B_1, B_2] = \\lim_{t \\to 0} \\frac{B(t)}{t^2} \\in \\mathfrak g\n  \\]\n  as every vector subspace of \\(\\operatorname{Mat}(n, \\C)\\) is a closed subset.\n\\end{proof}\n\n\\begin{eg}\n  For \\(G = O(n)\\), we have \\(\\mathfrak g = \\mathfrak o(n) = \\{\\text{skew-symmetric \\(n \\times n\\) matrices}\\}\\) by previous work.\n\\end{eg}\n\n\\begin{definition}\n  \\(\\mathfrak g\\) is called the \\emph{Lie algebra} of \\(G\\), write \\(\\mathfrak g = \\Lie(G)\\).\n\\end{definition}\n\nIn fact we can show \\(\\Lie\\) is a functor but we won't pursue that direction.\n\n\\begin{definition}[tangent bundle]\\index{tangent bundle}\n  Let \\(M\\) be a smooth manifold. Then \\(TM = \\coprod_{p \\in M} T_pM\\) is the \\emph{tangent bundle} of \\(M\\).\n\\end{definition}\n\n\\begin{theorem}\n  \\(TM\\) has a natural \\(C^\\infty\\) structure, making it into a smooth manifold with \\(\\dim TM = 2 \\dim M\\).\n\\end{theorem}\n\n\\begin{proof}\n  We shall induce the topology from the \\(C^\\infty\\) structure. Let \\((U, \\varphi)\\) be a chart on \\(M\\). Consider \\(U_T = \\coprod_{p \\in U} T_pM\\) so \\(TM = \\bigcup U_T\\). For \\(a \\in T_pM\\), write \\(\\varphi(p) = (x_1, \\dots, x_n)\\) so that \\(a = \\sum_i a_i \\frac{\\partial  }{\\partial x_i}\\). Now define\n  \\begin{align*}\n    \\varphi_T: U_T &\\to \\R^n \\times \\R^n \\\\\n    a &\\mapsto (\\varphi(p), (a_i))\n  \\end{align*}\n\n  To show compatibility, suppose \\((U', \\varphi')\\) is another chart on \\(M\\) with local coordinates \\(x_i'\\) and define \\(\\varphi_T'\\) as above. Then\n  \\[\n    \\varphi_T' \\compose \\varphi_T^{-1}(x, a)\n    = (x', a')\n  \\]\n  where \\(x' = \\varphi' \\compose \\varphi^{-1}(x)\\) and \\(a'\\) is given by the transformation law\n  \\[\n    a_i' = \\sum_j \\frac{\\partial x_i'}{\\partial x_j} (x) a_j,\n  \\]\n  so is smooth wherever defined.\n\n  Hausdorffness and second countability follow from the corresponding properties of \\(M\\) and \\(\\R^n\\).\n\\end{proof}\n\n\\begin{note}\n  Some remarks on the final statement regarding topological properties:\n  \\begin{enumerate}\n  \\item \\(M\\) is Lindel\\\"of, i.e.\\ every open cover has a countable subcover (for second countable manifolds this is equivalent to \\(\\sigma\\)-compactness).\n  \\item A basis for the topology of \\(TM\\) is given by \\(\\{B_1 \\times B_2\\}\\) where \\(B_1\\) is open in some coordinate neighbourhood \\(U \\subseteq M\\) and \\(B_2\\) is open in \\(\\R^n\\).\n  \\end{enumerate}\n\\end{note}\n\n\\begin{corollary}\n  The projection\n  \\begin{align*}\n    \\pi: TM &\\to M \\\\\n    (p, a) &\\mapsto p\n  \\end{align*}\n  is smooth.\n\\end{corollary}\n\n\\begin{remark}\n  \\(TM\\) has locally a product structure but in general \\(TM\\) is not diffeomorphic to \\(M \\times \\R^n\\).\n\\end{remark}\n\n\\begin{definition}[vector field]\\index{vector field}\n  A \\emph{vector field} on a manifold \\(M\\) is a smooth map \\(X: M \\to TM\\) such that \\(\\pi \\compose X = \\id_M\\), i.e.\\ \\(X(p) \\in T_pM\\) for all \\(p \\in M\\).\n\\end{definition}\n\nNote that \\(X\\) is smooth at \\(p\\) if and only if for any coordinate neighbourhood \\(U\\) of \\(p\\) with local coordinates \\((x_1, \\dots, x_n)\\), \\(X = \\sum_{i = 1}^n a_i(x) \\frac{\\partial  }{\\partial x_i}\\) where \\(a_i \\in C^\\infty(U)\\) for all \\(i\\).\n\n\\begin{eg}\n  Every manifold has at least one vector field: sending every point to \\(0\\). This is not the most interesting example, however.\n\\end{eg}\n\n\\begin{theorem}\n  Suppose \\(\\dim M = n\\).\n  
If there exist smooth vector fields \\(X^{(1)}, \\dots, X^{(n)}\\) on \\(M\\) such that for all \\(p \\in M\\), \\(X^{(1)}(p), \\dots, X^{(n)}(p)\\) is a basis of \\(T_pM\\), then \\(TM\\) is isomorphic to \\(M \\times \\R^n\\).\n\\end{theorem}\n\nHere ``isomorphic'' means that there is a diffeomorphism \\(\\Phi: TM \\to M \\times \\R^n\\) such that \\(\\Phi|_{T_pM}: T_pM \\to \\{p\\} \\times \\R^n\\) is a linear isomorphism.\n\n\\begin{definition}[parallelisable]\\index{parallelisable}\n  A manifold satisfying the hypothesis is called \\emph{parallelisable}.\n\\end{definition}\n\n\\begin{proof}\n  Consider \\(p \\in M\\) so \\(\\pi(a) = p\\) for all \\(a \\in T_pM\\). Then \\(a = \\sum_{i = 1}^n a_iX^{(i)}(p)\\) for some unique \\(a_i \\in \\R\\). Put\n  \\[\n    \\Phi(a) := (\\pi(a), (a_1, \\dots, a_n)) \\in M \\times \\R^n\n  \\]\n  which is clearly a bijection and \\(\\Phi|_{T_pM}\\) is a linear isomorphism. Thus it suffices to check \\(\\Phi\\) is a diffeomorphism. We use a chart \\((U, \\varphi)\\) on \\(M\\) and let \\((\\pi^{-1}(U), \\varphi_T)\\) be the corresponding chart on \\(TM\\). Then\n  \\[\n    (\\varphi, \\id_{\\R^n}) \\compose \\Phi \\compose \\varphi_T^{-1}:\n    (x, (b_i)_{i = 1}^n) \\mapsto (x, (a_i)_{i = 1}^n)\n  \\]\n  such that\n  \\[\n    a = \\sum_i a_i X^{(i)}(p) = \\sum_j b_j \\frac{\\p}{\\p x_j}(p).\n  \\]\n  It is then obvious that these \\(a_i\\)'s and \\(b_j\\)'s differ by a change-of-basis transformation. Explicitly, write \\(X^{(i)}\\) in local basis as\n  \\[\n    X^{(i)}|_U = \\sum_j X_j^{(i)} (x) \\frac{\\partial  }{\\partial x_j}\n  \\]\n  and so\n  \\[\n    b_j = \\sum_i a_i X_j^{(i)}(x)\n  \\]\n  so \\(\\Phi\\) is smooth. To check the inverse, note that \\((X_j^{(i)}(x))\\) is a non-singular matrix smooth in \\(x\\) so \\(\\Phi^{-1}\\) also has a smooth local expression.\n\\end{proof}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item The converse is easily seen to be true, in which case vector fields on \\(M\\) are simply \\(C^\\infty(M, \\R^n)\\).\n  \\item The parallelisable hypothesis is quite restrictive. For example, it implies that each \\(X^{(i)}\\) is never-zero. Some manifolds, such as \\(S^2\\), do not have such vector fields at all. In fact, \\(S^n\\) is parallelisable if and only if \\(n = 1, 3, 7\\).\n  \\item Every orientable \\(3\\)-dimensional manifold is parallelisable.\n  \\end{enumerate}\n\\end{remark}\n\n\\begin{ex}\n  Show \\(S^{2n + 1}\\) has a never-zero vector field.\n\\end{ex}\n\n\\begin{definition}[differential]\\index{differential}\n  Let \\(F: M \\to N\\) be a smooth map. Then the \\emph{differential} of \\(F\\) at \\(p \\in M\\) is a linear map\n  \\[\n    dF_p: T_pM \\to T_{F(p)}N\n  \\]\n  such that if \\(x_i\\) are local coordinates near \\(p\\) and \\(y_i\\) are local coordinates near \\(F(p)\\), then\n  \\[\n    dF_p : \\frac{\\partial  }{\\partial x_i} \\mapsto \\sum_j \\frac{\\partial \\hat F_j}{\\partial x_i}(x(p)) \\frac{\\partial  }{\\partial y_j} (F(p))\n  \\]\n  where \\(\\hat F = \\psi \\compose F \\compose \\varphi^{-1}\\) is the coordinate representation of \\(F\\).\n\\end{definition}\n\nNow we have to check that it is independent of coordinate representation. Recall that\n\\[\n  \\frac{\\partial  }{\\partial x_k'}(p) = \\sum_i \\frac{\\partial x_i}{\\partial x_k'}(x'(p)) \\frac{\\partial  }{\\partial x_i}(p)\n\\]\nand similarly for \\(\\frac{\\partial  }{\\partial y_j}(F(p))\\).\n
So\n\\[\n  dF_p: \\frac{\\partial  }{\\partial x_k'}(p) \\mapsto \\sum_{i, j, \\ell} \\underbrace{\\frac{\\partial x_i}{\\partial x_k'} \\frac{\\partial y_j}{\\partial x_i} \\frac{\\partial y_\\ell'}{\\partial y_j}}_{\\frac{\\partial y_\\ell'}{\\partial x_k'} \\text{ by chain rule}} \\frac{\\partial  }{\\partial y_\\ell'} (F(p))\n\\]\nso \\(dF_p\\) is indeed invariantly defined.\n\n\\begin{ex}[geometer's chain rule]\n  For \\(M \\xrightarrow{F} N \\xrightarrow{G} Z\\), we have\n  \\[\n    d(G \\compose F)_p = dG_{F(p)} \\compose dF_p.\n  \\]\n\\end{ex}\n\nNow suppose \\(F: M \\to N\\) is a diffeomorphism and \\(X\\) is a vector field on \\(M\\). Then \\((dF)X\\) is a valid vector field on \\(N\\).\n\n\\begin{remark}\n  In general a vector field does not admit a pushforward. We need surjectivity so \\((dF)X\\) is everywhere defined and injectivity to avoid conflicting values on target points. Finally we need the inverse to be smooth to ensure \\((dF)X\\) is smooth.\n\\end{remark}\n\nEvery vector field \\(X\\) defines a linear map from \\(C^\\infty(M)\\) to itself. More precisely, it is a first order derivation with \\(p \\in M\\) varying. Locally, if \\(X = \\sum_i X_i(x) \\frac{\\partial  }{\\partial x_i}\\) then \\(Xh\\) is given by\n\\[\n  Xh = \\sum_i X_i(x) \\frac{\\partial h}{\\partial x_i}(x).\n\\]\nIt is an easy exercise to check that it is invariantly defined.\n\nOn the other hand, a smooth function on \\(N\\) can always be pulled back by \\(F\\). Suppose \\(f \\in C^\\infty(N)\\). Then \\(f \\compose F \\in C^\\infty(M)\\). Thus in any local coordinates \\(x_i\\) on \\(M\\), \\(y_i\\) on \\(N\\), we have\n\\[\n  \\frac{\\partial  }{\\partial x_i}(p) (f \\compose F) = \\sum_j \\frac{\\partial f}{\\partial y_j} (y(F(p))) \\frac{\\partial y_j}{\\partial x_i} (x(p)).\n\\]\nThus we have a coordinate-free formula\n\\[\n  X(f \\compose F) = ((dF)X f) \\compose F.\n\\]\nEquivalently, the following diagram commutes:\n\\[\n  \\begin{tikzcd}\n    C^\\infty(N) \\ar[r, \"F^*\"] \\ar[d, \"(dF) X\"] & C^\\infty(M) \\ar[d, \"X\"] \\\\\n    C^\\infty(N) \\ar[r, \"F^*\"] & C^\\infty(M)\n  \\end{tikzcd}\n\\]\n\nLet \\(X\\) and \\(Y\\) be two vector fields on \\(M\\) considered as first order linear differential operators. The composition \\(XY\\) is \\emph{not} a vector field. However,\n\\[\n  Z := [X, Y] := XY - YX\n\\]\nis a vector field. In local coordinates, it is\n\\[\n  \\sum_{i, k} \\left( X_i \\frac{\\partial Y_k}{\\partial x_i} - Y_i \\frac{\\partial X_k}{\\partial x_i} \\right) \\frac{\\partial  }{\\partial x_k}\n\\]\nwhere \\(X = \\sum_i X_i \\frac{\\p}{\\p x_i}, Y = \\sum_i Y_i \\frac{\\p}{\\p x_i}\\). Check that the second order derivatives cancel (because of the symmetry of mixed partials). We can check that \\(Z\\) is a vector field and as a map \\(C^\\infty(M) \\to C^\\infty(M)\\) it is linear over \\(\\R\\) and satisfies\n\\[\n  Z(fg) = (Zf) g + f Zg.\n\\]\n\nThus vector fields on \\(M\\) form a Lie algebra, which is infinite-dimensional. Denote it by \\(V(M)\\).\n\n\\subsection{Left-invariant vector field}\n\nLet \\(G\\) be a Lie group and \\(e \\in G\\) the identity element. Let \\(\\mathfrak g = T_eG\\).\n
Given \\(g \\in G\\), the left translation by \\(g\\)\n\\begin{align*}\n  L_g: G &\\to G \\\\\n  h &\\mapsto gh\n\\end{align*}\nis a smooth map and since it has inverse \\(L_{g^{-1}}\\), it is a diffeomorphism.\n\nGiven \\(\\xi \\in \\mathfrak g\\), we can define a vector field by\n\\[\n  X_\\xi(g) := (dL_g)_e(\\xi) \\in T_gG.\n\\]\nWe can do this because for every point there is a diffeomorphism sending \\(e\\) to it, and therefore we can construct a vector field using only its information at \\(e\\). We will soon find out there is a lot of symmetry involved.\n\n\\begin{lemma}\n  The map \\(X_\\xi: G \\to TG, g \\mapsto (dL_g)_e(\\xi)\\) is smooth so \\(X_\\xi\\) is a smooth vector field.\n\\end{lemma}\n\n\\begin{proof}\n  As usual, check smoothness in local coordinates. Consider group multiplication \\(L: G \\times G \\to G\\). Fix \\(g_0 \\in G\\), then around \\((g_0, e) \\in G \\times G\\), given charts \\(\\varphi_e\\) around \\(e\\) whose image is \\(V_e\\) and \\(\\varphi_{g_0}\\) around \\(g_0\\) whose image is \\(V_{g_0}\\), the local expression \\(\\hat L\\) is\n  \\begin{align*}\n    V_{g_0} \\times V_e &\\to V_{g_0}' \\\\\n    \\hat L &= \\varphi_{g_0} (L(\\varphi_{g_0}^{-1}(\\cdot), \\varphi_e^{-1}(\\cdot)))\n  \\end{align*}\n  Then \\(\\hat L_g = \\hat L(\\varphi_{g_0}(g), \\cdot): V_e \\to V_{g_0}'\\).\n  Thus \\(D_2\\hat L\\), the derivative with respect to the \\(V_e\\) variables, gives the coordinate expression for \\((dL_g)_e\\). But \\(D_2\\hat L\\) depends smoothly on the \\(V_{g_0}\\) variables as \\(L\\) is a \\(C^\\infty\\) map. Therefore \\(X_\\xi\\) is a smooth map around \\(g_0\\). As \\(g_0\\) is arbitrary \\(X_\\xi\\) is smooth.\n\\end{proof}\n\nWe have shown, identifying \\(\\mathfrak g = T_eG\\), that\n\\[\n  (dL_g)_e: \\mathfrak g \\to T_gG\n\\]\ndepends smoothly on \\(g\\). In fact, it is a linear isomorphism for each \\(g\\). Thus we have\n\n\\begin{proposition}\n  If \\(\\xi_1, \\dots, \\xi_n\\) are linearly independent (form a basis, respectively) in \\(\\mathfrak g\\) then for all \\(g \\in G\\), \\(X_{\\xi_1}(g), \\dots, X_{\\xi_n}(g)\\) are linearly independent (form a basis, respectively) in \\(T_gG\\).\n\\end{proposition}\n\nAs a consequence we have\n\n\\begin{theorem}\n  Every Lie group \\(G\\) is parallelisable, i.e.\\ \\(TG \\cong G \\times \\R^{\\dim G}\\), where the restriction of the diffeomorphism to \\(T_gG\\) is an isomorphism onto \\(\\{g\\} \\times \\R^{\\dim G}\\) for all \\(g \\in G\\).\n\\end{theorem}\n\nThere is another symmetry for a Lie group \\(G\\). For all \\(g, h \\in G\\), for all \\(\\xi \\in \\mathfrak g\\),\n\\[\n  (dL_g)_h X_\\xi(h)\n  = (dL_g)_h (dL_h)_e \\xi\n  = (dL_{gh})_e \\xi\n  = X_\\xi(gh),\n\\]\ni.e.\\\n\\[\n  \\label{eqn:left invariant vector field}\n  (dL_g) X_\\xi = X_\\xi \\compose L_g,\n  \\tag{\\ast}\n\\]\nwhich, if we view \\(X_\\xi: G \\to TG\\) as a global section of the projection map, may also be written as \\((dL_g) X_\\xi = X_\\xi\\).\n\n\\begin{definition}[left-invariant vector field]\\index{left-invariant}\n  A vector field \\(X\\) on a Lie group satisfying \\eqref{eqn:left invariant vector field} is called \\emph{left-invariant}. Denote the subspace of all left-invariant vector fields by \\(\\ell(G)\\).\n\\end{definition}\n\nOne important observation is that \\(\\ell(G)\\) is a finite-dimensional subspace of \\(V(G)\\): it is easy to see that for all \\(X \\in \\ell(G)\\), there exists a unique \\(\\xi \\in \\mathfrak g\\) such that \\(X = X_\\xi\\).\n
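Indeed (a one-line check, filled in here): necessarily \\(\\xi = X(e)\\), since evaluating left-invariance \\eqref{eqn:left invariant vector field} at \\(h = e\\) gives \\(X(g) = (dL_g)_e X(e) = X_{X(e)}(g)\\) for all \\(g \\in G\\).\n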
This induces an isomorphism \\(\\ell(G) \\cong \\mathfrak g\\) so \\(\\dim \\ell(G) = \\dim G\\). Even better, \\(\\ell(G)\\) is closed under Lie bracket so\n\n\\begin{theorem}\n  \\(\\ell(G)\\) is a Lie subalgebra of \\(V(G)\\).\n\\end{theorem}\n\n\\begin{proof}\n  One way to show this is to use\n  \\[\n    X(f \\compose F) = ((dF) X f) \\compose F\n  \\]\n  with \\(F = L_g, X = X_\\xi\\) and \\(f \\in C^\\infty(G)\\). See example sheet.\n\n  Alternatively, for any vector fields \\(X, Y\\) and diffeomorphism \\(F\\), in example sheet 1 Q6 we show that\n  \\[\n    (dF) [X, Y] = [(dF) X, (dF) Y].\n  \\]\n  Let \\(f \\in C^\\infty(G), g \\in G, \\xi, \\eta \\in \\mathfrak g\\). Then\n  \\begin{align*}\n    & ((dL_g) [X_\\xi, X_\\eta] f) \\compose L_g \\\\\n    =& ([(dL_g) X_\\xi, (dL_g) X_\\eta] f) \\compose L_g \\\\\n    =& ([X_\\xi \\compose L_g, X_\\eta \\compose L_g] f) \\compose L_g \\\\\n    =& (([X_\\xi, X_\\eta] \\compose L_g) f) \\compose L_g\n  \\end{align*}\n  which is saying\n  \\[\n    (dL_g) [X_\\xi, X_\\eta] = [X_\\xi, X_\\eta] \\compose L_g\n  \\]\n  which is precisely \\eqref{eqn:left invariant vector field}.\n\\end{proof}\n\nWe now have, in the case of ``good'' matrix Lie groups, two definitions making \\(\\mathfrak g\\) into a Lie algebra. In fact, they are equivalent.\n\n\\begin{theorem}\n  Let \\(G\\) be a matrix Lie group with \\(\\log\\) defining a chart around \\(e \\in G\\). Then\n  \\begin{align*}\n    T_eG &\\to \\ell(G) \\\\\n    \\xi &\\mapsto X_\\xi\n  \\end{align*}\n  is an isomorphism of the Lie algebras (using the ``matrix'' definition on the LHS).\n\\end{theorem}\nWe will prove the theorem in the next chapter.\n\n\\section{Submanifolds}\n\nLet \\(M\\) be a manifold and let \\(N \\subseteq M\\) be a subset which is itself a manifold (not a priori with the smooth structure restricted from \\(M\\)). Denote \\(\\iota: N \\to M\\) the inclusion map.\n\n\\begin{definition}[embedded submanifold]\\index{embedded submanifold}\n  If \\(\\iota\\) is smooth, \\(d\\iota_p: T_pN \\to T_pM\\) is injective for all \\(p \\in N\\) and \\(\\iota\\) is a homeomorphism onto its image, then we say \\(N\\) is an \\emph{embedded submanifold} of \\(M\\).\n\\end{definition}\n\n\\begin{definition}[immersed submanifold]\\index{immersed submanifold}\n  If we drop the requirement that \\(\\iota\\) is a homeomorphism then \\(N\\) is an \\emph{immersed submanifold} of \\(M\\).\n\\end{definition}\n\nAt this point, the topological requirement may seem a bit mysterious, and it may not be clear what it entails. It is equivalent to the statement that \\(D \\subseteq N\\) is open in \\(N\\) if and only if \\(D = U \\cap N\\) for some \\(U \\subseteq M\\) open. It excludes situations like mapping an open interval onto a figure 8.\n\nA variant of the definition may omit \\(N \\subseteq M\\) and instead require \\(\\psi: N \\to M\\) to be an embedding. If \\(\\psi\\) is injective and the three properties hold then \\(\\psi(N) \\subseteq M\\) is an embedded submanifold.\n\n\\begin{convention}\n  From now on submanifold means by default embedded submanifold.\n\\end{convention}\n\n\\begin{notation}\n  For manifolds \\(M\\) and \\(N\\) and \\(\\psi: N \\to M\\) an embedding, write \\(\\psi: N \\embed M\\), i.e.\\ \\(\\psi(N)\\) is a submanifold of \\(M\\).\n\\end{notation}\n\n\\begin{remark}\n  An immersion \\(\\psi: N \\to M\\) means that \\(d\\psi\\) is injective everywhere.\n  
In this way we may obtain \\(\\psi(N) \\subseteq M\\) ``immersed with self-intersections''.\n\\end{remark}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item For curves and surfaces in \\(\\R^3\\), \\(\\gamma: (0, 1) \\to \\R^3\\) and \\(r: U \\to \\R^3\\) where \\(U \\subseteq \\R^2\\), the conditions mean that\n    \\begin{enumerate}\n    \\item \\(\\gamma\\) and \\(r\\) are smooth,\n    \\item they are regular parameterisations,\n    \\item depending on what we are interested in, we may require them to be (topological) embeddings.\n    \\end{enumerate}\n  \\item The figure 8 example mentioned to distinguish embedded vs.\\ immersed submanifolds may seem contrived. However, immersed (and not embedded) manifolds emerge naturally from the \\emph{irrational twist flow}. Consider the map\n    \\begin{align*}\n      \\R &\\to S^1 \\times S^1 \\\\\n      t &\\mapsto (e^{it}, e^{i\\alpha t})\n    \\end{align*}\n    where \\(\\alpha \\in \\R \\setminus \\Q\\). We can check that the map is injective and the image is dense in \\(S^1 \\times S^1\\). In particular it is not a topological embedding so this is an immersion but not an embedding.\n  \\end{enumerate}\n\\end{eg}\n\nOne frequently asked question is: is a submanifold of \\(\\R^n\\) the same as \\(f^{-1}(0)\\) for some smooth map \\(f: \\R^n \\to \\R^k\\)? In general, no! \\(f^{-1}(0) \\subseteq \\R^n\\) is always closed, and one can check that for every closed \\(E \\subseteq \\R^2\\) there exists a smooth \\(f: \\R^2 \\to \\R\\) such that \\(f^{-1}(0) = E\\); such zero sets are in general not submanifolds.\n\n\\begin{definition}[regular value]\\index{regular value}\n  Let \\(f: M \\to Y\\) be a smooth map. \\(q \\in Y\\) is a \\emph{regular value} of \\(f\\) if for all \\(p \\in M\\) such that \\(f(p) = q\\), \\(df_p\\) is surjective.\n\\end{definition}\nNote that under this definition if \\(q \\notin f(M)\\) then \\(q\\) is vacuously a regular value of \\(f\\).\n\n\\begin{theorem}\n  Let \\(f: M \\to Y\\) be a smooth map and let \\(q \\in Y\\) be a regular value of \\(f\\). If \\(f^{-1}(q) \\neq \\emptyset\\) then \\(N = f^{-1}(q)\\) is an embedded submanifold with\n  \\[\n    \\dim N = \\dim M - \\dim Y.\n  \\]\n\\end{theorem}\nThis is the geometer's implicit function theorem, also known as the preimage theorem. We assume this theorem without proof, which can be found in the lecturer's online notes.\n\n\\begin{remark}\n  By a result in differential topology, suppose \\(M\\) is a manifold and \\(N \\subseteq M\\) is equipped with the subspace topology. If there exists a smooth structure on \\(N\\) such that \\(N \\subseteq M\\) is a submanifold then this structure is unique. Thus it makes sense to say \\(N\\) is or isn't a submanifold of \\(M\\).\n\\end{remark}\n\n\\begin{proposition}\n  Let \\(N \\embed M\\) and \\(p \\in N\\). Then there exists a neighbourhood \\(U\\) of \\(p\\) in \\(M\\) and a smooth map \\(f: U \\to \\R^d\\) where \\(d = \\dim M - \\dim N\\) such that \\(N \\cap U = f^{-1}(0)\\). \\(d\\) is called the \\emph{codimension}\\index{codimension} of \\(N\\) in \\(M\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(\\varphi: U_0 \\to \\R^n\\) be a chart on \\(M\\) where \\(\\varphi(p) = 0\\) with local coordinates \\((x_1, \\dots, x_n)\\). Let \\(\\psi: V_0 \\to \\R^\\ell\\) be a chart on \\(N\\) where \\(\\psi(p) = 0\\) with local coordinates \\((y_1, \\dots, y_\\ell)\\).\n  
Then the inclusion \\(\\iota: N \\to M\\) has local expression\n  \\[\n    x_i = x_i(y).\n  \\]\n  The derivative at \\(0\\)\n  \\[\n    \\left( \\frac{\\partial x_i}{\\partial y_j} (0) \\right)_{n \\times \\ell}\n  \\]\n  has rank \\(\\ell\\). Without loss of generality we may assume the top \\(\\ell \\times \\ell \\) submatrix is nonsingular. Then by the inverse function theorem,\n  \\[\n    y_j = y_j(x_1, \\dots, x_\\ell)\n  \\]\n  which is well-defined and smooth near \\(0\\). Then for \\(i > \\ell\\),\n  \\[\n    x_i = x_i(y(x_1, \\dots, x_\\ell))\n    = h_i(x_1, \\dots, x_\\ell).\n  \\]\n  Then\n  \\[\n    f_i(x) := x_i - h_i(x_1, \\dots, x_\\ell)\n  \\]\n  for \\(i > \\ell\\) gives the required \\(f: U \\to \\R^d\\) where \\(d = n - \\ell\\), with Jacobian\n  \\[\n    \\frac{\\partial f}{\\partial x} =\n    \\begin{pmatrix}\n      * & I_{n - \\ell}\n    \\end{pmatrix}\n  \\]\n  (a \\((n - \\ell) \\times n\\) block matrix with identity block in the last \\(n - \\ell\\) columns), which has full rank, so \\(0\\) is a regular value.\n\\end{proof}\n\nThis result cannot be improved: let \\(M = \\R P^2\\) and \\(N = \\{x_0 : x_1 : x_2 \\in \\R P^2 \\mid x_2 = 0\\}\\) which can be identified with \\(\\R P^1 \\cong S^1\\). But \\(N \\neq f^{-1}(q)\\) for all smooth \\(f: \\R P^2 \\to P\\) where \\(P\\) is a one-dimensional manifold, with \\(q\\) a regular value. A sketch of proof: if there exists such an \\(f\\) then there exists some chart \\(\\psi\\) around \\(q\\), giving \\(\\psi \\compose f: U \\to (-1, 1)\\) on some open \\(U \\supseteq N\\). Suppose for contradiction \\(N = \\{p: (\\psi \\compose f) (p) = 0\\}\\). \\(\\R P^2 \\setminus N\\) is homeomorphic to an open disk in \\(\\R^2\\). \\(\\psi \\compose f\\) has \\(0\\) as a regular value, implying that \\(\\psi \\compose f\\) takes both positive and negative values. But \\(\\R P^2 \\setminus N\\) is connected, a contradiction.\n\n\\begin{theorem}[Whitney embedding theorem]\\index{Whitney embedding theorem}\n  Every \\(n\\)-dimensional manifold \\(M\\) admits a smooth embedding into \\(\\R^{2n}\\).\n\\end{theorem}\n\nIt is a hard theorem but it is very easy to show that there exists \\(N\\) such that \\(M\\) is a submanifold of \\(\\R^N\\). In example sheet 1 Q9 we showed this for \\(M\\) compact. It is also not too difficult to set \\(N = 2n + 1\\), basically by embedding it in a sufficiently large space and whittling down the dimension. However the last step of improvement is truly an ingenious piece of work. The remarkable point of this theorem is not that the intrinsic definition of manifolds coincides with the extrinsic one, but the optimal dimension \\(\\R^{2n}\\) of the ambient space. This is a topological invariant measuring how ``complicated'' a geometric object is. For example, \\(S^n\\) can be embedded in \\(\\R^{n + 1}\\) but \\(\\R P^2\\) cannot be embedded in \\(\\R^3\\). The Klein bottle does not embed in \\(\\R^3\\) either.\n\nRestatement of an earlier theorem:\n\\begin{theorem}\n  Suppose \\(G \\subseteq \\GL(n, \\C)\\) is a subgroup and a Lie group with \\(C^\\infty\\) structure of \\(G\\) given by \\(\\log\\) charts (i.e.\\ \\(\\log\\) maps an open neighbourhood \\(U_I\\) of the identity \\(I\\) onto a neighbourhood of \\(0\\) in some real subspace \\(V_0\\) of \\(\\operatorname{Mat}(n, \\C)\\)), then\n  \\begin{align*}\n    \\mathfrak g &\\to \\ell(G) \\\\\n    \\xi &\\mapsto X_\\xi\n  \\end{align*}\n  is a Lie algebra isomorphism.\n  
Here \\(\\mathfrak g = T_IG \\subseteq \\operatorname{Mat}(n, \\C)\\) is the span of \\(V_0\\) over \\(\\R\\).\n\\end{theorem}\n\n\\begin{proof}\n  We have shown earlier\n  \\[\n    [X_\\xi, X_\\eta] = X_\\zeta\n  \\]\n  for some \\(\\zeta \\in \\mathfrak g\\). Now want to show that\n  \\[\n    \\zeta = [\\xi, \\eta] = \\xi\\eta - \\eta \\xi\n  \\]\n  as \\emph{matrices} in \\(\\mathfrak g\\). First \\(G = \\GL(n) = \\operatorname{Mat}(n)\\) over \\(\\R\\) or \\(\\C\\) so \\(\\mathfrak g = mat(n)\\). Then for \\(g \\in \\GL(n)\\), \\(L_g\\) is a linear map so \\((dL_g)_h\\) for all \\(g, h\\) in the usual matrix multiplication %?\n  The local expression for \\(L_g\\) around \\(I \\in \\GL(n)\\) is\n  \\[\n    (\\hat L_g) B = g \\cdot \\exp B = g \\cdot (I + B + \\frac{1}{2!} B^2 + \\dots)\n  \\]\n  so\n  \\[\n    (d \\hat L_g)_0 C = gC\n  \\]\n  Therfore for all \\(g = (g^i_j) \\in \\GL(n), A = (A^i_j) \\in \\mathfrak g = T_IGL(n)\\), we have\n  \\[\n    X_A(g)\n    = \\sum_{i, j} X^i_j(g) \\frac{\\partial  }{\\partial g^i_j}\n    = \\sum_{i, j, k} g^i_k A^k_j \\frac{\\partial  }{\\partial g^i_j}.\n  \\]\n  Now the chain rule follows from straightforard calculation. Using formula for Lie brackets of vector fields,\n  \\[\n    g^i_k \\left( A^k_j \\frac{\\partial  }{\\partial g^i_j} (g^\\ell_p B^p_q) - B^k_j \\frac{\\partial  }{\\partial g^i_j} (g^\\ell_p A^p_q) \\right) \\frac{\\partial  }{\\partial g^\\ell_q}\n    = g^i_k (AB - BA)^k_j \\compose \\frac{\\partial  }{\\partial g^i_k}.\n  \\]\n\n  For the general case \\(G \\subseteq \\GL(n)\\), note that the \\(\\log\\) chart hypothesis implies that \\(\\iota: G \\embed \\GL(n)\\). In fact we'll use \\(U_I \\embed \\GL(n)\\) where \\(U_I \\subseteq G\\). For all \\(g \\in G\\), \\(L_g: G \\to G\\) is the restriction of \\(L_g: \\GL(n) \\to \\GL(n)\\). Furthermore for all \\(h \\in G\\), \\((dL_g)_h: T_hG \\to T_{gh}G\\) is the corresponding restriction of \\((dL_g)_h\\) on \\(\\GL(n)\\). Therefore \\(X_\\xi \\in \\ell(G)\\) is a restriction of \\(X_\\xi \\in \\ell(\\GL(n))\\). Then\n  \\[\n    [X_\\xi|_G, X_\\eta|_G] = [X_\\xi, X_\\eta]|_G\n  \\]\n  which can be verified in ``adapted'' local coordinates using local test functions only depending on coordinates along \\(G\\) and constant in the normal direction (graph)\n\n  But on \\(\\GL(n)\\) we know \\([X_\\xi, X_\\eta] = X_{[\\xi, \\eta]}\\), also for all \\(\\xi, \\eta \\in \\mathfrak g\\), \\([\\xi, \\eta] \\in \\mathfrak g\\). Now the theorem for \\(G\\) follows too.\n\\end{proof}\n\n\\section{Differential forms}\n\n\\begin{definition}[cotangent space]\\index{cotangent space}\n  Suppose \\(M\\) is an \\(n\\)-dimensional manifold and \\(p \\in M\\). The dual of \\(T_pM\\), consisting of all linear funcitons \\(T_pM \\to \\R\\), is call the \\emph{cotangent space} at \\(p\\) and denoted by \\(T_p^*M\\).\n\\end{definition}\nIf \\(x_i\\)'s are local coordinates then we have \\(\\frac{\\p}{\\p x_i}\\Big|_p\\) as a basis for \\(T_pM\\). 
The \emph{dual basis} of \(T_p^*M\) is denoted by \((\mathrm d x_i)_p\), i.e.
\[
  \mathrm dx_i(\frac{\partial  }{\partial x_j}) = \delta_{ij}.
\]
Therefore every \(a \in T_p^*M\) can be expressed uniquely as
\[
  a = \sum_{i = 1}^n a_i (\mathrm dx_i)_p.
\]

Recall the transformation rule for the tangent space: if \(x_i'\) are other local coordinates then
\[
  \frac{\partial  }{\partial x_i'} = \sum \frac{\partial x_k}{\partial x_i'} \frac{\partial  }{\partial x_k}
\]
so by linear algebra
\[
  \mathrm dx_i = \sum \frac{\partial x_i}{\partial x_j'} \mathrm d x_j'
\]
so if a \(1\)-form is given in two local coordinates
\[
  a = \sum_i a_i \mathrm dx_i = \sum_j a_j' \mathrm dx_j'
\]
then we have the transformation law
\[
  a_j' = \sum_i \frac{\partial x_i}{\partial x_j'} a_i
\]
which is \emph{not} the same as that for the tangent space.

\begin{definition}[cotangent bundle]\index{cotangent bundle}
  The \emph{cotangent bundle} of a manifold \(M\) is defined by
  \[
    T^*M = \coprod_{p \in M} T^*_pM.
  \]
\end{definition}

\begin{theorem}
  \(T^*M\) is a smooth manifold of twice the dimension of that of \(M\). Moreover the natural projection map \(\pi: T^*M \to M\) is smooth.
\end{theorem}

\begin{proof}
  Similar to that for the tangent bundle.
\end{proof}

\begin{definition}[differential \(1\)-form]\index{differential form}
  A \emph{(smooth) differential \(1\)-form}, also known as a \(1\)-form, is a smooth map \(\alpha: M \to T^*M\) that is a section of \(\pi: T^*M \to M\), i.e.\ \(\alpha(p) \in T_p^*M\) for all \(p \in M\).
\end{definition}

Similar to vector fields, a \(1\)-form \(\alpha = \sum_i \alpha_i \mathrm dx_i\) is smooth if and only if \(\alpha_i\) is smooth in all local coordinates, if and only if for all \(X \in V(M)\), \(\alpha(X) \in C^\infty(M)\).

To define general \(k\)-forms, we need to take a crash course in multilinear algebra. For \(r = 0, 1, \dots\), the \(r\)th \emph{exterior product}
\[
  \Lambda^r T_p^*M = \{\text{alternating multilinear functions on } (T_pM)^r\}.
\]
It is a vector space with basis
\[
  \{\mathrm dx_{i_1} \w \dots \w \mathrm dx_{i_r}\}, 1 \leq i_1 < \dots < i_r \leq n
\]
where \(n = \dim M\) and \(x_i\)'s are local coordinates around \(p\). They are defined by
\[
  \mathrm dx_{i_1} \w \dots \w \mathrm dx_{i_r} (v_1, \dots, v_r) = \det( (\mathrm dx_{i_k}(v_\ell))_{k, \ell})
\]
where \(v_\ell = \frac{\partial  }{\partial x_{k_\ell}}\) and \(k_\ell \leq k_{\ell + 1}\) for all \(\ell\). This equals \(1\) if and only if \(v_k = \frac{\partial  }{\partial x_{i_k}}\) for all \(k\), and \(0\) otherwise.
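For a concrete illustration, when \(r = 2\) the determinant formula gives, for arbitrary tangent vectors \(v = \sum_i v^i \frac{\partial  }{\partial x_i}\) and \(w = \sum_i w^i \frac{\partial  }{\partial x_i}\),
\[
  \mathrm dx_1 \w \mathrm dx_2 (v, w)
  = \det
  \begin{pmatrix}
    \mathrm dx_1(v) & \mathrm dx_1(w) \\
    \mathrm dx_2(v) & \mathrm dx_2(w)
  \end{pmatrix}
  = v^1 w^2 - v^2 w^1,
\]
the signed area of the projection of \((v, w)\) to the \((x_1, x_2)\)-coordinate plane.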
\begin{notation}[multiindex notation]
  Let \(I = (i_1, \dots, i_r)\) be a multiindex, then write
  \[
    \mathrm dx_I = \mathrm dx_{i_1} \w \dots \w \mathrm dx_{i_r}.
  \]
  Using this notation, the transformation law for \(r\)-forms under change of coordinates from \(x_i\) to \(x_j'\) is
  \[
    \mathrm dx_J' = \sum_I \prod_{k = 1}^r \frac{\partial x_{j_k}'}{\partial x_{i_k}} \mathrm dx_I
  \]
  where the sum runs over all multiindices \(I = (i_1, \dots, i_r)\).
\end{notation}

Using the same method as in the construction of the smooth structure on \(TM\) and \(T^*M\), together with the transformation law for \(r\)-forms, we can make
\[
  \Lambda^rT^*M = \coprod_{p \in M} \Lambda^r T_p^*M
\]
into a manifold of dimension \(n + \binom{n}{r}\) with smooth projection \(\pi: \Lambda^rT^*M \to M\). It is called the \emph{bundle of differential \(r\)-forms}.

\begin{note}
  \(\Lambda^0T^*M = M \times \R\) and \(\Lambda^1T^*M = T^*M\).
\end{note}

\begin{definition}[differential form]\index{differential form}
  A \emph{smooth differential \(r\)-form}, or simply an \emph{\(r\)-form}, on \(M\) is a smooth map \(\alpha: M \to \Lambda^rT^*M\) that is a section of \(\pi: \Lambda^rT^*M \to M\). The space of all \(r\)-forms on \(M\) is denoted \(\Omega^r(M)\).
\end{definition}

Given local coordinates \(x_i\)'s, a differential form \(\alpha\) can be locally expressed as
\[
  \alpha = \sum_I \alpha_I(x) \mathrm dx_I
\]
where \(\alpha_I\)'s are smooth. They can be computed by \(\alpha_I = \alpha(\frac{\partial  }{\partial x_I})\). It can be easily checked that \(\alpha\) is smooth if and only if \(\alpha(X_1, \dots, X_r)\) is smooth for all smooth vector fields \(X_1, \dots, X_r\). Note that \(0\)-forms are just smooth functions \(M \to \R\) so \(\Omega^0(M) = C^\infty(M)\).

\subsection{Orientation of manifolds}

\begin{theorem}
  For an \(n\)-manifold, TFAE:
  \begin{enumerate}
  \item there exists a never-zero \(n\)-form on \(M\),
  \item there exists a family of charts \((\varphi_\alpha, U_\alpha)\) such that \(M = \bigcup U_\alpha\) and for all \(\alpha, \alpha'\) with local coordinates \(x_i, x_i'\) on \(U_\alpha, U_{\alpha'}\) respectively, we have
    \[
      \det \frac{\partial x_j'}{\partial x_i} > 0
    \]
    on \(U_\alpha \cap U_{\alpha'}\),
  \item \(\Lambda^nT^*M\) is parallelisable.
  \end{enumerate}
\end{theorem}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item \(1 \implies 2\): the transformation law when a differential form is expressed in local coordinates \(x_i\) and \(x_i'\) is
    \begin{align*}
      &\mathrm dx_1 \w \dots \w \mathrm dx_n \\
      =& \left( \sum_{i_1} \frac{\partial x_1}{\partial x_{i_1}'} \mathrm dx_{i_1}' \right) \w \dots \w \left( \sum_{i_n} \frac{\partial x_n}{\partial x_{i_n}'} \mathrm dx_{i_n}' \right) \\
      =& \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{k = 1}^n \frac{\partial x_k}{\partial x_{\sigma(k)}'} \, \mathrm dx_1' \w \dots \w \mathrm dx_n' \\
      =& \det \left( \frac{\partial x_i}{\partial x_j'} \right) \mathrm dx_1' \w \dots \w \mathrm dx_n'.
    \end{align*}
    Suppose we have \(\omega \in \Omega^n(M)\) never-zero. Let \(M = \bigcup U_\alpha\) where each \(U_\alpha\) is open connected, then
    \[
      \omega|_{U_\alpha} = f_\alpha(x^\alpha) \mathrm d x_1^\alpha \w \dots \w \mathrm dx_n^\alpha
    \]
    where \(f_\alpha\) is nowhere zero, hence of constant sign on the connected \(U_\alpha\). We can reorder \(x_i^\alpha\) such that \(f_\alpha > 0\).
Then all the determinants are \(f_\alpha/f_{\alpha'} > 0\).
  \item \(2 \implies 1\): we need \nameref{thm:partition of unity}. Given such a collection of charts, define on each \(U_\alpha\) a never-zero \(n\)-form
    \[
      \omega_\alpha := \mathrm d x_1^\alpha \w \dots \w \mathrm dx_n^\alpha.
    \]
    Let \(\{\rho_i\}\) be a partition of unity subordinate to the cover. Then
    \[
      \omega = \sum_i \rho_i (\omega_{\alpha_i})
    \]
    is a well-defined never-zero form in \(\Omega^n(M)\): at each point the nonzero terms are positive multiples of one another by positive compatibility, and at least one is nonzero.
  \item \(1 \Longleftrightarrow 3\): this involves the same idea as in the proof that the existence of a global (co)frame is equivalent to parallelisability of the (co)tangent bundle. In fact, this generalises to any bundle. We omit the details.
  \end{enumerate}
\end{proof}

\begin{theorem}[partition of unity]\index{partition of unity}
  \label{thm:partition of unity}
  For every open cover \(M = \bigcup U_\alpha\), there exists a countable collection \(\rho_i \in C^\infty(M)\) such that
  \begin{enumerate}
  \item for all \(i\), \(\cl{\supp{\rho_i}}\) is compact and is contained in \(U_\alpha\) for some \(\alpha\).
  \item locally finite: for every \(x \in M\), there exists an open neighbourhood \(W_x\) containing \(x\) such that \(\rho_i|_{W_x} \neq 0\) only for finitely many \(i\)'s.
  \item \(\rho_i \geq 0\) for all \(i\) and \(\sum_i \rho_i = 1\).
  \end{enumerate}
  Such a collection \(\{\rho_i\}\) is called a \emph{partition of unity} subordinate to \(\{U_\alpha\}\).
\end{theorem}

\begin{proof}
  Omitted.
\end{proof}

\begin{definition}[orientability]\index{orientability}
  A manifold satisfying any of the equivalent conditions above is called \emph{orientable}.

  If a manifold is orientable and connected then there are precisely two choices of \emph{orientation}.
\end{definition}

The orientation is induced, equivalently, by
\begin{enumerate}
\item \(\omega \in \Omega^n(M)\) up to a positive smooth function,
\item a coordinate cover up to ``positive compatibility'',
\item a choice of global trivialisation \(\phi\) of \(\Lambda^nT^*M\) up to composition \((x, a) \mapsto (x, h(x) a)\) where \(h \in C^\infty(M)\), \(h > 0\).
\end{enumerate}

\subsection{Exterior derivative}

Let \(M\) be a smooth manifold and \(f\) a smooth function on \(M\). The differential at a point \(p \in M\) is a linear map \(df_p: T_pM \to T_{f(p)}\R = \R\). Thus \(df_p\) can be thought of as an element of \(T_p^*M\). Therefore for all \(X \in V(M)\) we have
\[
  df_p(X_p) = (Xf)(p), \quad \text{i.e.}\ df(X) = Xf \in C^\infty(M).
\]
By checking the coefficients of \(df\) in local coordinates we find that \(df \in \Omega^1(M)\) is well-defined.

\begin{notation}
  For all local coordinates \(x_i\) defined on \(U \subseteq M\), \(\mathrm d x_i = dx_i \in \Omega^1(U)\).
\end{notation}

\begin{remark}
  This is what is called the gradient in vector calculus on \(\R^n\). However, in vector calculus we identify \(\R^n\) with its dual, which is not always the case for manifolds. Thus the gradient is a covector, or \(1\)-form.
\end{remark}
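For a concrete example, take \(f(x, y) = x^2 y\) on \(\R^2\). Then
\[
  \mathrm df = \frac{\partial f}{\partial x} \mathrm dx + \frac{\partial f}{\partial y} \mathrm dy
  = 2xy \, \mathrm dx + x^2 \, \mathrm dy,
\]
and indeed \(\mathrm df(X) = Xf\): for \(X = \frac{\partial  }{\partial x}\) both sides equal \(2xy\).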
\begin{theorem}[exterior derivative]\index{exterior derivative}
  There is a unique \(\R\)-linear map \(\mathrm d: \Omega^k(M) \to \Omega^{k + 1}(M)\) for all \(k = 0, 1, \dots\) such that
  \begin{enumerate}
  \item if \(f \in \Omega^0(M)\) then \(\mathrm d f\) is the differential \(df\).
  \item \(\mathrm d(\omega \w \eta) = (\mathrm d \omega) \w \eta + (-1)^{\deg \omega} \omega \w \mathrm d \eta\) where \(\deg \omega = k\) means that \(\omega \in \Omega^k(M)\).
  \item \(\mathrm d^2 = 0\).
  \end{enumerate}
  \(\mathrm d\) is called the \emph{exterior derivative}.
\end{theorem}

\begin{proof}
  Choose \(U \subseteq M\) with local coordinates \(x_i\) and suppose \(\omega|_U = f(x) \mathrm d x_I\). Then
  \begin{align*}
    &\mathrm d(f(x) \mathrm d x_I) \\
    =& (\mathrm d f) \w \mathrm d x_I + f(x) \sum_{k = 1}^r (-1)^{k + 1} \mathrm dx_{i_1} \w \dots \w \mathrm d(\mathrm d x_{i_k}) \w \dots \w \mathrm dx_{i_r} \\
    =& \sum_{i = 1}^n \frac{\partial f}{\partial x_i}(x) \mathrm d x_i \w \mathrm d x_I
  \end{align*}
  and extend to \(\Omega^r(U)\) by linearity. Then 1 clearly holds, 2 is an easy verification and 3 holds by symmetry of mixed partials. This shows the uniqueness of \(\mathrm d\).

  The above computation also shows that \(\mathrm d\) must be a local operator, i.e.\ \(\mathrm d\omega_p\) is determined by \(\omega|_U\) for any coordinate neighbourhood \(U\) of \(p\). Thus for existence of \(\mathrm d\) it suffices to check that, given other local coordinates \(x_i'\) on \(U\), the two local expressions agree. Suppose \(\mathrm d'\) is the exterior derivative given by another set of coordinates \(x_i'\) on \(U\). We aim to show \(\mathrm d' = \mathrm d\) on \(\Omega^r(U)\) for all \(r\).
  \[
    \mathrm d' (f \mathrm d x_I)
    = \mathrm d'f \w \mathrm d x_I + \sum_{k = 1}^r (-1)^{k + 1} f \mathrm d x_{i_1} \w \dots \w \mathrm d' \mathrm d x_{i_k} \w \dots \w \mathrm d x_{i_r}
  \]
  \(\mathrm d' f = \mathrm d f\) as \(f\) is a \(0\)-form and \(\mathrm d' x_{i_k} = \mathrm d x_{i_k}\) as \(x_{i_k}\) are smooth functions. Finally,
  \[
    \mathrm d' \mathrm d x_{i_k} = \mathrm d \mathrm d x_{i_k} = 0
  \]
  so \(\mathrm d (f \mathrm dx_I) = \mathrm d'(f \mathrm dx_I)\), hence \(\mathrm d = \mathrm d'\).
\end{proof}

\subsection{Pullback of differential forms}

\begin{definition}[pullback]\index{pullback}
  Let \(f: M \to N\) be a smooth map. Then the \emph{pullback} induced by \(f\) is
  \begin{align*}
    f^*: \Omega^r(N) &\to \Omega^r(M) \\
    \alpha &\mapsto f^*(\alpha)
  \end{align*}
  where
  \[
    (f^*(\alpha))_p (v_1, \dots, v_r) = \alpha_{f(p)} ((df_p) v_1, \dots, (df_p)v_r)
  \]
  for all \(p \in M, v_i \in T_pM\).
\end{definition}

\begin{note}\leavevmode
  \begin{enumerate}
  \item Unlike vector fields, which in general cannot be pushed forward unless \(f\) is a diffeomorphism, \(f^*\) is well-defined for all smooth maps \(f\).
  \item Chain rule for pullbacks: given \(Z \xrightarrow{g} M \xrightarrow{f} N\) then \((f \compose g)^* = g^* \compose f^*\), i.e.\ pullback is a contravariant functor.
  \item \(f^*(\alpha \w \beta) = f^*(\alpha) \w f^*(\beta)\) which is a straightforward exercise.
In particular for all \(h \in C^\infty(N)\),
    \[
      f^*(h \alpha) = (h \compose f) f^*(\alpha)
    \]
    which says that \(f^*\) is linear over smooth functions, with \(h \in C^\infty(N)\) acting on \(\Omega^r(M)\) through \(h \compose f\).
  \item Exterior derivative commutes with pullbacks, i.e.\ \(\mathrm d (f^*\alpha) = f^*(\mathrm d \alpha)\). For a proof, it suffices to check this on \(\R^n\) as \(\mathrm d\) is local and \(f^*\) is pointwise (i.e.\ algebraic). The outline is as follows: wlog \(M = U \subseteq \R^n\) and \(N = V \subseteq \R^m\) open. Let the local coordinates be \(x_i\) and \(y_j\) respectively. For \(\alpha = \mathrm dy_j, v = \frac{\partial  }{\partial x_k}\) and the local expression of \(f\) being \(y_j = y_j(x)\), we have
    \begin{align*}
      &(f^* (\mathrm d y_j)) (\frac{\partial  }{\partial x_k}) \\
      =& \mathrm d y_j ((df) \frac{\partial  }{\partial x_k}) \\
      =& \mathrm d y_j (\sum_\ell \frac{\partial y_\ell}{\partial x_k} \frac{\partial  }{\partial y_\ell}) \\
      =& \sum_\ell \frac{\partial y_\ell}{\partial x_k} \delta_{j \ell} \\
      =& \frac{\partial y_j}{\partial x_k}
    \end{align*}
    so \(f^*(\mathrm dy_j) = \mathrm d(y_j \compose f) = \mathrm d(f^* y_j)\), and the general case follows from this together with the product rule, since \(f^*\) is determined by its images on the \(\mathrm dy_j\).
  \end{enumerate}
\end{note}

\subsection{de Rham cohomology}

We can use exterior derivatives to form a sequence
\[
  \Omega^0(M) \xrightarrow{d_0} \Omega^1(M) \xrightarrow{d_1} \Omega^2(M) \xrightarrow{d_2} \dots
\]
with \(d_{n + 1} \compose d_n = 0\) for all \(n\). In fact this is a sequence of \(\R\)-vector space homomorphisms.

\begin{definition}[closed form, exact form]\index{closed form}\index{exact form}
  If \(\alpha \in \Omega^r(M)\) is such that \(\mathrm d \alpha = 0\) then \(\alpha\) is a \emph{closed form}. If \(\alpha = \mathrm d \beta\) for some \(\beta \in \Omega^{r - 1}(M)\) then \(\alpha\) is an \emph{exact form}.
\end{definition}

Clearly exact forms are closed but the converse is not true in general. Thus it makes sense to consider

\begin{definition}[de Rham cohomology]\index{de Rham cohomology}
  The quotient space
  \[
    H^r_{\text{dR}}(M) = \frac{\ker d_r}{\im d_{r - 1}},
  \]
  sometimes also denoted \(H^r(M)\), is called the \emph{de Rham cohomology} (of degree \(r\)) of \(M\).
\end{definition}

As \(f^*\) commutes with \(\mathrm d\), \(f^*\) induces a well-defined map \(f^*: H^r(N) \to H^r(M)\) and the chain rule holds. Hence for \(M\) diffeomorphic to \(N\), we have
\[
  H^r(M) \cong H^r(N)
\]
for all \(r\). The converse, however, is not true.

We can define
\[
  H^*_\text{dR}(M) = \bigoplus_{r = 0}^\infty H^r_{\text{dR}}(M).
\]
It is an easy exercise to check that wedge product induces a ring structure on \(H^*_{\text{dR}}(M)\) (the key point is that \((\mathrm d \alpha) \w \beta = \mathrm d(\alpha \w \beta)\) whenever \(\mathrm d \beta = 0\)). Moreover \(\Omega^r(M)\), and hence \(H^r_{\text{dR}}(M)\), vanishes for \(r > \dim M\), so the direct sum above is finite.

For all smooth \(f: M \to N\), the pullback \(f^*\) induces a well-defined ring homomorphism \(f^*: H^*(N) \to H^*(M)\). From previous work, we have

\begin{proposition}
  If \(M\) is diffeomorphic to \(N\) then
  \[
    H^*(M) \cong H^*(N).
  \]
\end{proposition}

Again the converse is false. Try to show that
\[
  H^*(\R P^3) \cong H^*(S^3).
\]
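To see that closed forms need not be exact, here is a standard example: on \(\R^2 \setminus \{0\}\) consider
\[
  \alpha = \frac{x \, \mathrm dy - y \, \mathrm dx}{x^2 + y^2} \in \Omega^1(\R^2 \setminus \{0\}).
\]
A direct computation gives \(\mathrm d\alpha = 0\), but \(\alpha\) is not exact (as we will be able to verify by integrating over the unit circle once integration is set up below), so \(H^1_{\text{dR}}(\R^2 \setminus \{0\}) \neq 0\).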
There are many types of (co)homologies on a space. The important result is

\begin{theorem}[de Rham]\index{de Rham theorem}
  Given a smooth manifold \(M\),
  \[
    H^r_{\text{dR}}(M) \cong H^r(M, \R)
  \]
  where \(H^r(M, \R)\) is the singular cohomology of \(M\) with coefficients in \(\R\).
\end{theorem}

This tells us that \(H^r_{\text{dR}}(M)\) recovers an invariant of the topological space underlying \(M\), using differential \(r\)-forms.

If \(M\) is connected then \(H^0_{\text{dR}}(M) \cong \R\), i.e.\ the constant functions on \(M\). For a more general result, we need

\begin{theorem}[Poincaré lemma]\index{Poincaré lemma}
  \label{thm:Poincaré lemma}
  Let \(U\) be the unit open ball in \(\R^n\). Then every closed \(k\)-form, where \(k > 0\), is exact.
\end{theorem}

\begin{corollary}
  \[
    H^k(U) =
    \begin{cases}
      0 & k > 0 \\
      \R & k = 0
    \end{cases}
  \]
\end{corollary}

In fact, we can replace \(U\) by \(\R^n\) and get the same result.

\begin{proof}[Proof of \nameref{thm:Poincaré lemma}]
  We give a sketch of proof. The key idea is that we can invert \(\mathrm d_k\)'s using integral operators
  \[
    h_k: \Omega^k(U) \to \Omega^{k - 1}(U)
  \]
  for \(k > 0\) such that
  \[
    h_{k + 1} \compose \mathrm d_k + \mathrm d_{k - 1} \compose h_k = \id_{\Omega^k(U)}.
  \]
  Explicitly, define
  \begin{align*}
    & h_k(a \mathrm dx_{i_1} \w \dots \w \mathrm dx_{i_k}) (v_1, \dots, v_{k - 1}) \\
    =& \left(\int_0^1 t^{k - 1} a(tx) dt \right) \mathrm d x_{i_1} \w \dots \w \mathrm dx_{i_k} \left(\sum_{i = 1}^k x_i \frac{\partial  }{\partial x_i}, v_1, \dots, v_{k - 1}\right).
  \end{align*}
\end{proof}

\subsection{Basic integration on manifolds}

In this section assume \(M\) is an oriented \(n\)-manifold, where the orientation is given by a choice of positively compatible charts \(\{(\varphi_\alpha, U_\alpha)\}\).

Let \(\omega \in \Omega^n(M)\) and define its support to be
\[
  \supp \omega = \{p \in M: \omega(p) \neq 0\}.
\]
Assume the closure \(\cl{\supp \omega}\) is compact. If \(\cl{\supp \omega} \subseteq U_\alpha\) for some \(\alpha\) with local coordinates \(x_i\) and
\[
  \omega|_{U_\alpha} = f(x) \mathrm dx_1 \w \dots \w \mathrm dx_n,
\]
then define
\[
  \int_M \omega = \int_{\varphi_\alpha(U_\alpha)} f(x) dx_1 \dots dx_n
\]
where the RHS is the Riemann integral on \(\R^n\). To show this is well-defined, suppose \(\cl{\supp \omega} \subseteq U_\beta\) for some \(\beta\) with coordinates \(y_j\), and
\[
  \omega|_{U_\alpha \cap U_\beta} = h(y) \mathrm dy_1 \w \dots \w \mathrm d y_n.
\]
Then
\begin{align*}
  \int_{\varphi_\beta(U_\beta)} h(y) dy_J
  = \int_{\varphi_\alpha(U_\alpha)} h(y(x)) \Bigg| \det \frac{D y}{D x} \Bigg| d x_I
  = \int_{\varphi_\alpha(U_\alpha)} f(x) dx_I
\end{align*}
where the last equality uses the transformation law \(f(x) = h(y(x)) \det \frac{Dy}{Dx}\) for \(n\)-forms together with \(\det \frac{Dy}{Dx} > 0\), by positive compatibility.

To define the integral of a differential form that is not supported on a single chart, we use a partition of unity.

\begin{definition}[integration on manifold]\index{integration on manifold}
  Suppose \(\{\rho_i\}\) is a partition of unity subordinate to the oriented cover. Then define
  \[
    \int_M \omega = \sum_i \int_{U_i} \rho_i \omega.
  \]
\end{definition}
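As an illustration, take \(M = S^1\) with the restriction of the closed form \(\alpha = \frac{x \, \mathrm dy - y \, \mathrm dx}{x^2 + y^2}\) from the earlier example. Covering \(S^1\) by the two positively compatible angle charts \(\theta \in (0, 2\pi)\) and \(\theta \in (-\pi, \pi)\), in each chart \(\alpha = \mathrm d\theta\), so for any subordinate partition of unity \(\rho_1 + \rho_2 = 1\),
\[
  \int_{S^1} \alpha = \int_0^{2\pi} \rho_1(\theta) \, d\theta + \int_{-\pi}^{\pi} \rho_2(\theta) \, d\theta = 2\pi
\]
since the two integrands sum to \(1\) at every point of \(S^1\).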
Some basic properties of the integral whose proofs will be left as exercises:
\begin{enumerate}
\item it is linear in \(\omega\),
\item it is additive over disjoint coordinate neighbourhoods,
\item it is independent of choice of partition of unity (hint: if \(\{\rho_i\}\) and \(\{\tilde \rho_j\}\) are two partitions of unity then show \(\{\rho_i \tilde \rho_j\}\) is another).
\end{enumerate}

\begin{theorem}[Stokes' theorem for manifolds without boundary]\index{Stokes' theorem}
  If \(\eta \in \Omega^{n - 1}(M)\) is compactly supported then
  \[
    \int_M \mathrm d \eta = 0.
  \]
\end{theorem}

\begin{proof}
  Let \(\{U_i\}_{i = 1}^N\) be positively compatible coordinate neighbourhoods covering \(\cl{\supp \eta}\) and choose charts on \(U_0 = M \setminus \cl{\supp \eta}\) positively compatible with \(U_i\) for \(1 \leq i \leq N\). Then \(\{U_i\}_{i = 0}^N\) define an orientation on \(M\). Let \(\{\rho_i\}\) be a partition of unity subordinate to \(\{U_i\}_{i = 0}^N\) and write the \(n\)-form \(\mathrm d\eta\) as
  \[
    \mathrm d \eta = \sum_{i = 0}^N \mathrm d(\rho_i \eta),
  \]
  using \(\sum_i \rho_i = 1\). It suffices to prove that for all \(i\),
  \[
    \int_M \mathrm d (\rho_i \eta) = 0.
  \]
  Fix \(i\) and let \(x_k\) be coordinates on \(U_i\). wlog
  \[
    \rho_i \eta = h \mathrm dx_2 \w \dots \w \mathrm dx_n
  \]
  (a general \((n - 1)\)-form is a sum of such terms, up to reordering coordinates) so
  \[
    \mathrm d(\rho_i \eta) = \frac{\partial h}{\partial x_1} \mathrm dx_1 \w \mathrm dx_2 \w \dots \w \mathrm dx_n.
  \]
  As \(\supp h\) is bounded, say by \(R\), we have
  \begin{align*}
    & \int_{\R^n} d(\rho_i \eta) \\
    =& \int_{\R^{n - 1}} \left(\int_{-R}^R \frac{\partial h}{\partial x_1} \mathrm dx_1\right) dx_2 \dots dx_n \\
    =& \int_{\R^{n - 1}} \underbrace{h(R, x_2, \dots, x_n)}_{= 0}- \underbrace{h(-R, x_2, \dots, x_n)}_{= 0} dx_2 \dots dx_n \\
    =& 0.
  \end{align*}
\end{proof}
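This already settles the earlier claim: for any \(f \in C^\infty(S^1)\) the theorem gives
\[
  \int_{S^1} \mathrm df = 0,
\]
while \(\int_{S^1} \alpha = 2\pi\) for the form \(\alpha\) above. Hence \(\alpha|_{S^1}\) is not exact, and therefore neither is \(\alpha\) on \(\R^2 \setminus \{0\}\), confirming \(H^1_{\text{dR}}(\R^2 \setminus \{0\}) \neq 0\).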
\begin{corollary}[integration by parts]
  Assume one of the forms \(\alpha, \beta\) is compactly supported on \(M\) and \(\deg \alpha + \deg \beta + 1 = \dim M\). Then
  \[
    \int_M \alpha \w \mathrm d\beta = (-1)^{1 + \deg \alpha} \int_M (\mathrm d\alpha) \w \beta.
  \]
\end{corollary}

\begin{proof}
  Apply Stokes' theorem to \(\eta = \alpha \w \beta\):
  \[
    0 = \int_M \mathrm d(\alpha \w \beta) = \int_M (\mathrm d\alpha) \w \beta + (-1)^{\deg \alpha} \int_M \alpha \w \mathrm d\beta.
  \]
\end{proof}

\section{Vector bundles}

\begin{definition}[submersion]\index{submersion}
  A smooth map \(f: M \to N\) is a \emph{submersion} if \(d f_x: T_xM \to T_{f(x)}N\) is surjective for all \(x \in M\).
\end{definition}

\begin{definition}[vector bundle]
  A \emph{vector bundle} over a manifold \(B\) is a smooth submersion \(\pi: E \to B\) of a manifold \(E\) onto \(B\) satisfying
  \begin{enumerate}
  \item there exists a vector space \(V\) such that for all \(p \in B\), \(E_p := \pi^{-1}(p)\) is a vector space isomorphic to \(V\);
  \item local trivialisation: for every \(p \in B\), there exists an open neighbourhood \(U\) of \(p\) and a diffeomorphism \(\Phi_U\) such that the diagram
    \[
      \begin{tikzcd}
        \pi^{-1}(U) \ar[r, "\Phi_U"] \ar[dr, "\pi"] & U \times V \ar[d] \\
        & U
      \end{tikzcd}
    \]
    commutes, where the map \(U \times V \to U\) is projection to first coordinate;
  \item \(\Phi_p := \Phi_U|_{E_p}: E_p \to \{p\} \times V\) is an isomorphism of vector spaces for all \(p \in U\).
  \end{enumerate}
  \(B\) is called the \emph{base} of the bundle and \(E\) is called the \emph{total space}. \(V\) is called the \emph{typical fibre} and \(\dim V\) is the \emph{rank} of the vector bundle. \(\pi\) is the bundle projection. \(\Phi_U\) is a \emph{local trivialisation} and \(U\) is a \emph{trivialising neighbourhood}.
\end{definition}

Vector bundles of rank \(1\) are also called \emph{line bundles}\index{line bundle}.

\begin{definition}[section]\index{section}
  A \emph{section} of a vector bundle \(E\) is a smooth map \(s: B \to E\) such that \(\pi \compose s = \id_B\).

  A \emph{local section} \(s: N \to E\), where \(N \subseteq B\), is a smooth map such that \(\pi \compose s = \id_N\).
\end{definition}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item For any manifold \(B\) and vector space \(V\), take \(E = B \times V\) and \(\pi\) to be the projection onto first coordinate. This is known as a \emph{trivial bundle} or \emph{product bundle}. Sections of this bundle are \(C^\infty(B, V)\), i.e.\ vector valued functions on \(B\).
  \item (Co)tangent bundles \(TM, T^*M\) are real vector bundles of rank \(\dim M\). Sections are \(V(M)\) and \(\Omega^1(M)\) respectively. More generally, \(\Lambda^rT^*M\) is a real bundle of rank \(\binom{n}{r}\) and sections are differential \(r\)-forms \(\Omega^r(M)\). If \(r = 0\) then the bundle is trivial.
  \item Tautological vector bundle\index{tautological vector bundle} over \(\R P^n, \C P^n\) and more generally Grassmannians: set \(B = \C P^n\) and \(E\) to be the disjoint union of all complex lines in \(\C^{n + 1}\) through \(0\) and \(\pi\) maps \(z \in E\) to the complex line containing \(z\) in \(\C P^n\). We'll check that this is a well-defined line bundle.
  \end{enumerate}
\end{eg}

\subsection{Structure group and transition functions}

Let \((U_\alpha, \varphi_\alpha)\) and \((U_\beta, \varphi_\beta)\) be two local trivialising neighbourhoods with \(U_\alpha \cap U_\beta \neq \emptyset\).
Then
\[
  \varphi_\beta \compose \varphi_\alpha^{-1}(b, v) = (b, \psi_{\beta\alpha}(b)(v))
\]
for all \((b, v) \in (U_\alpha \cap U_\beta) \times V\), where \(\psi_{\beta\alpha}: U_\alpha \cap U_\beta \to \GL(V)\) is smooth. We have
\begin{align*}
  \psi_{\alpha\alpha} &= \id_V \\
  \psi_{\alpha\beta} \cdot \psi_{\beta\alpha} &= \id_V \\
  \psi_{\alpha\beta} \cdot \psi_{\beta\gamma} \cdot \psi_{\gamma\alpha} &= \id_V
\end{align*}
which are called the \emph{cocycle} conditions. \(\psi_{\beta\alpha}\) are called the \emph{transition functions} of the vector bundle \(E\).

\begin{eg}
  When \(E = TM\) or \(T^*M\), \(\psi_{\beta\alpha}\) are given by the Jacobian matrices of a change of local coordinates (they are dual to each other).
\end{eg}

\begin{proposition}
  The data of \(B\) and \(\{(U_\alpha, \psi_{\beta\alpha})\}\) with \(B = \bigcup U_\alpha\) determines a vector bundle \(E\) uniquely up to isomorphism (which we will define formally later).
\end{proposition}

\begin{proof}
  The proof is called the \emph{Steenrod construction}. Define
  \[
    E := \left(\coprod_\alpha U_\alpha \times V\right) / \sim
  \]
  where the equivalence relation is the one generated by \((b, v) \sim (b, \psi_{\beta\alpha}(b)(v))\) for all \(b \in U_\alpha\) for all \(\alpha, \beta\). \(E\) is a manifold as the gluing maps \((b, v) \mapsto (b, \psi_{\beta\alpha}(b)(v))\) are diffeomorphisms. wlog \(U_\alpha \subseteq B\) are coordinate neighbourhoods with charts \(\varphi_\alpha\). Then \(\Phi_\alpha = (\varphi_\alpha, \id)\) are charts for \(E\). Finally, \(\pi: E \to B, (b, v) \mapsto b\) is a smooth submersion.
\end{proof}

\begin{definition}[\(G\)-structure]\index{\(G\)-structure}
  Let \(E\) be a real vector bundle over \(B\). Let \(G \leq \GL(k, \R)\). A collection of charts \(\{(U_\alpha, \varphi_\alpha)\}\) covering \(B\) such that the corresponding \(\psi_{\beta\alpha}(b) \in G\) for all \(\alpha, \beta\) is called a \emph{\(G\)-structure}. If \(E = TM\), we speak of a \(G\)-structure on the manifold \(M\).
\end{definition}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item If \(G\) is the trivial subgroup then a \(G\)-structure is a global trivialisation over \(B\), i.e.\ \(\Phi: E \to B \times \R^k\) is a local trivialisation with \(U = B\).
  \item If \(G = \GL_+(k, \R)\), the linear isomorphisms with positive determinant, then a \(G\)-structure is an orientation of \(E\). For example, if \(E = TM\) then this gives an orientation of \(M\).
  \item If \(G = O(k)\) (or \(G = U(k)\) in the case of a complex bundle) then it determines a choice of invariantly defined inner product on fibres of \(E\) smoothly varying with the fibre. \(\Phi_\alpha\) are then called \emph{orthogonal (respectively unitary) trivialisations}.
In this case given a trivialisation \(\Phi_{U_\alpha}: \pi^{-1}(U_\alpha) \to U_\alpha \times \R^n\) and \(b \in U_\alpha\), the linear isomorphism \(\Phi_{U_\alpha}|_b: \pi^{-1}(b) \to \{b\} \times \R^n\) is moreover a linear isometry.

    If we set instead \(G = SO(k) = O(k) \cap \GL_+(k, \R)\) we get an orthogonal trivialisation together with an orientation.
  \item Suppose \(E\) is a real vector bundle of rank \(k = 2n\) and take
    \[
      G = \GL(n, \C) \subseteq \GL(2n, \R),
    \]
    then for all \(p \in B\), there exists \(J_p \in \GL(E_p)\) such that \(J_p^2 = -\id_{E_p}\) which depends smoothly on \(p\). In other words, \((E_p, J_p)\) is isomorphic to a complex vector space of dimension \(n\). This \(G\)-structure makes \(E\) into a \emph{complex vector bundle}\index{vector bundle!complex}. Note that this is \emph{not} the same as a complex manifold, which has a stronger requirement. When \(E = TM\), a \(\GL(n, \C)\)-structure on \(E\) is called an \emph{almost complex structure}\index{almost complex structure}.
  \end{enumerate}
\end{eg}

Generally, if \(G \leq \GL(V)\) preserves some ``linear algebra objects'' on \(V\) then a \(G\)-structure on a vector bundle means there exists a well-defined family of such objects depending smoothly on the point in the base.

\subsection{Principal bundles}

Let \(G\) be a Lie group, \(1_G\) the identity element of \(G\). Let \(P, B\) be manifolds.

\begin{definition}[smooth free right action]\index{smooth free right action}
  A \emph{smooth free right action} is a right action of a Lie group on a manifold that is also free. Concretely, a smooth free right action of \(G\) on \(P\) is a smooth map
  \begin{align*}
    P \times G &\to P \\
    (p, h) &\mapsto ph
  \end{align*}
  satisfying
  \begin{enumerate}
  \item free action: for all \(p \in P\), \(ph = p\) if and only if \(h = 1_G\).
  \item right action: for all \(p \in P, h_1, h_2 \in G\), \((ph_1)h_2 = p(h_1h_2)\).
  \end{enumerate}
\end{definition}

\begin{note}
  The right action implies that for all \(h \in G\), \(p \mapsto ph\) is a diffeomorphism of \(P\) to itself.
\end{note}

\begin{definition}[principal bundle]\index{principal bundle}
  A \emph{principal \(G\)-bundle} of \(P\) over \(B\) is a smooth submersion \(\pi: P \to B\) with a smooth free right action of \(G\) on \(P\) such that the set of orbits is bijective with \(B\) ``naturally'': for all \(b \in B\), there exists an open neighbourhood \(U\) of \(b\) with a diffeomorphism \(\Phi_U\) such that
  \[
    \begin{tikzcd}
      \pi^{-1}(U) \ar[r, "\Phi_U"] \ar[dr, "\pi"] & U \times G \ar[d] \\
      & U
    \end{tikzcd}
  \]
  commutes, where \(U \times G \to U\) is projection to first coordinate. Moreover \(\Phi_U\) commutes with the \(G\)-action, i.e.
  \[
    \Phi_U(ph) = (b, gh)
  \]
  whenever \(b = \pi(p) \in U, \Phi_U(p) = (b, g)\).

  \(P\) is the \emph{total space}, \(B\) is the \emph{base} and \(G\) is the \emph{fibre}.
\end{definition}

\begin{note}
  A principal \(G\)-bundle is not the same as a locally trivial bundle with fibres being a Lie group. It also incorporates information about the group action.
\end{note}
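A simple example: \(G = O(1) = \{\pm 1\}\) acts freely on \(P = S^n\) by \(p \cdot (\pm 1) = \pm p\), with orbit space \(B = \R P^n\). The quotient map \(\pi: S^n \to \R P^n\) is a principal \(O(1)\)-bundle: over each \(U_i = \{x_i \neq 0\}\) a trivialisation is
\[
  \Phi_{U_i}(p) = (\pi(p), \operatorname{sgn}(p_i)),
\]
which is \(G\)-equivariant since \(\operatorname{sgn}((\pm p)_i) = \operatorname{sgn}(p_i)(\pm 1)\).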
Similar to vector bundles, given charts on two trivialising neighbourhoods, we have
\[
  \varphi_\beta \compose \varphi_\alpha^{-1} (b, g) = (b, \psi_{\beta\alpha}(b, g))
\]
where \(\psi_{\beta\alpha}(b, \cdot): G \to G\). The \(G\)-equivariance condition then implies that
\[
  \psi_{\beta\alpha}(b, g) = \psi_{\beta\alpha}(b) g = L_{\psi_{\beta\alpha}(b)} g
\]
where \(L_h : G \to G\) is left multiplication by an element \(h\) and \(\psi_{\beta\alpha}(b) = \psi_{\beta\alpha} (b, 1_G)\).
As before, the transition functions \(\psi_{\beta\alpha}: U_\alpha \cap U_\beta \to G\) are smooth maps satisfying the cocycle conditions.

We can also recover a principal \(G\)-bundle from the transition functions, up to \(G\)-bundle isomorphism:

\begin{theorem}
  Given \(\{(U_\alpha, \psi_{\beta\alpha})\}\) such that \(B = \bigcup U_\alpha\) and \(\psi_{\beta\alpha}\)'s satisfy the cocycle conditions, let
  \[
    P := \left(\coprod_\alpha U_\alpha \times G \right) / \sim
  \]
  where \((b, h) \sim (b', h')\) if and only if \(b = b', h' = \psi_{\beta\alpha}(b)h\). Then \(P\) is a principal \(G\)-bundle over \(B\).
\end{theorem}

\begin{proof}
  Same as before.
\end{proof}

\begin{remark}
  The orbit space can be made into a manifold with an additional assumption: if the action of \(G\) is also \emph{proper}, i.e.\ the map
  \begin{align*}
    G \times P &\to P \times P \\
    (g, p) &\mapsto (pg, p)
  \end{align*}
  is a proper map (preimage of compact set is compact) then one can show \(P/G\) is a smooth manifold. Then one may define \(B = P/G\).
\end{remark}

The consequence of the above theory is that we can start from a vector bundle \(E\) with a \(G\)-structure, and use \(\{\psi_{\beta\alpha}\}\) to obtain a principal \(G\)-bundle \(P\) with the same transition functions. Conversely, given a principal \(G\)-bundle \(P\), we say \(E\) is \emph{associated} to \(P\) via the action of \(G\) on \(\R^n\) where \(n\) is the rank of \(E\), i.e.\ this is a \emph{representation} \(G \to \GL(n, \R)\), an injective homomorphism.

In the case of a principal \(G\)-bundle, \(G\) acts on itself by \emph{left translation}.

\begin{eg}
  Let \(E = TM\), then \(P\) is the \emph{frame bundle}\index{frame bundle}, \(G = \GL(n, \R)\). A point \(p \in P\) is a basis (frame) of \(T_{\pi(p)}M\).

  If in addition \(TM\) has an \(O(n)\)-structure then we obtain \(P\) the \emph{orthonormal frames bundle}.
\end{eg}

\subsection{Hopf bundle}

The Hopf bundle arises from the tautological rank \(1\) complex vector bundle over \(\C P^1\). Recall that the fibre over \(z_1 : z_2 \in \C P^1\) is the line \(\C(z_1, z_2) \subseteq \C^2\).

We shall work out the transition functions to prove this vector bundle is well-defined. Let
\[
  U_i = \{z_i \neq 0\} \subseteq \C P^1
\]
and let \(z = \frac{z_2}{z_1}\) be a local coordinate on \(U_1\), \(\zeta = \frac{z_1}{z_2}\) on \(U_2\). Every point in the fibre over \(1: z \in U_1\) can be written as \((w, wz)\) where \(w \in \C\), and respectively as \((w\zeta, w)\) over \(\zeta: 1 \in U_2\).
The local trivialisations over \(U_i\) are
\begin{align*}
  \varphi_1: \pi^{-1}(U_1) &\to U_1 \times \C \\
  (w, wz) &\mapsto (1 : z, w \sqrt{1 + |z|^2}) \\
  \varphi_2: \pi^{-1}(U_2) &\to U_2 \times \C \\
  (w\zeta, w) &\mapsto (\zeta : 1, w \sqrt{|\zeta|^2 + 1})
\end{align*}
then
\[
  \varphi_1^{-1}(1: z, \tilde w) = \left( \frac{\tilde w}{\sqrt{1 + |z|^2}}, \frac{\tilde w}{\sqrt{1 + |z|^2}} z \right)
\]
and if \(z \neq 0\) then
\begin{align*}
  \varphi_2 \compose \varphi_1^{-1}(1 : z, \tilde w)
  &= \varphi_2 \left( \frac{\tilde w}{\zeta \sqrt{1 + 1/|\zeta|^2}} \zeta, \frac{\tilde w}{\zeta \sqrt{1 + 1/|\zeta|^2}} \right) \\
  &= (\zeta : 1, \frac{|\zeta|}{\zeta} \tilde w) \\
  &= (1 : z, \frac{z}{|z|} \tilde w)
\end{align*}
by noting that \(\zeta z = 1\). Thus the transition functions are
\begin{align*}
  \psi_{2, 1}(1 : z) & = \frac{z}{|z|} \\
  \psi_{1, 2}(\zeta: 1) &= \frac{|z|}{z} = \frac{\zeta}{|\zeta|}
\end{align*}
Observe
\[
  \psi_{2, 1}: U_1 \cap U_2 \to U(1) = \{a \in \C: |a| = 1\} \subseteq \C^* = \GL(\C)
\]
so the bundle is well-defined and admits a \(U(1)\)-structure. Thus it has an invariantly defined norm on the fibres, induced from the standard Euclidean norm on \(\C \subseteq \C^2\) via \(\varphi_1, \varphi_2\). More explicitly, on \(\pi^{-1}(U_1)\),
\[
  \norm{(w, wz)} = |w| \sqrt{1 + |z|^2}.
\]
We leave it as an exercise to write out the expression for \(\pi^{-1}(U_2)\), and verify that it gives the same norm.

The associated principal \(U(1)\)-bundle \(P\), also known as the \emph{Hopf bundle}\index{Hopf bundle}, is the bundle of unit vectors. Thus
\[
  P \cong \{(w_1, w_2) \in \C^2: |w_1|^2 + |w_2|^2 = 1\} = S^3
\]
and
\begin{align*}
  \pi: S^3 &\to \C P^1 \\
  (w_1, w_2) &\mapsto w_1 : w_2
\end{align*}
But recall from example sheet 1 Q5 that \(\C P^1 \cong S^2\) so \(\pi: S^3 \to S^2\). Together with some topological argument, we see that the Hopf bundle cannot be a product bundle: otherwise we would have
\[
  S^3 \cong S^2 \times S^1,
\]
which is false. See example sheet 2 Q5.

\begin{definition}[local section]\index{local section}\index{section!local}
  A \emph{local section} of a principal bundle \(\pi: P \to B\) is a smooth map \(s: W \to P\), where \(W \subseteq B\) open, such that \(\pi \compose s = \id_W\).
\end{definition}

There is nothing new in the definition here --- we have seen local and global sections for vector bundles before. However, note that although every vector bundle admits a global section, namely the zero section, this is not true for principal bundles. We cannot choose the ``identity element section'' as there is no distinguished point in a fibre. In fact, a principal bundle admits a global section if and only if it is trivial.
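A sketch of the last claim: given a global section \(s: B \to P\), the map
\begin{align*}
  B \times G &\to P \\
  (b, g) &\mapsto s(b) g
\end{align*}
is a \(G\)-equivariant diffeomorphism (bijective because the action is free and transitive on each fibre, smooth with smooth inverse by local triviality), i.e.\ a global trivialisation. Conversely a trivial bundle has the global section \(b \mapsto \Phi^{-1}(b, 1_G)\).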
\subsection{Pullback of a vector bundle or a principal bundle}

\begin{definition}[pullback of a vector bundle]\index{pullback}\index{vector bundle!pullback}\index{principal bundle!pullback}
  Let \(\pi: E \to B\) be a vector bundle and \(f: M \to B\) be a smooth map. The \emph{pullback} \(f^*E\) of \(E\) is a vector bundle over \(M\) with the same typical fibre \(V\) as \(E\) and such that there exists a smooth map \(F\) making the diagram
  \[
    \begin{tikzcd}
      f^*E \ar[r, "F"] \ar[d, "\tilde \pi"] & E \ar[d, "\pi"] \\
      M \ar[r, "f"] & B
    \end{tikzcd}
  \]
  commute, and the restriction \(F: (f^*E)_p \to E_{f(p)}\) to fibres is an isomorphism of vector spaces for all \(p \in M\).
\end{definition}

It follows that for all local trivialisations of \(E\) over \(U\), we must have
\[
  \begin{tikzcd}
    \pi^{-1}(U) \ar[r, "\Phi_U"] & U \times V \\
    \tilde \pi^{-1}(f^{-1}(U)) \ar[u, "F"] \ar[r, "\tilde \Phi_U"] & f^{-1}(U) \times V \ar[u, "f \times \id_V"]
  \end{tikzcd}
\]
It is left as an exercise to check that \(\tilde \Phi_U\) is a diffeomorphism.

\begin{eg}\leavevmode
  \begin{enumerate}
  \item If \(B = *\) then \(E \cong V\). We have
    \begin{align*}
      \tilde \Phi: f^*E &\to M \times V \\
      x &\mapsto (\tilde \pi(x), F(x))
    \end{align*}
    as a (global) trivialisation of \(f^*E\) over \(M\).
  \item As a variant, let \(M = B \times X\) with \(f: B \times X \to B\) the projection to first coordinate. Then \(f^*E\) is ``trivialised in the \(X\) direction'', i.e.\ \(f^*E \cong E \times X\) and \(\tilde \pi = \pi \times \id_X\).
  \item Suppose \(M = *\). Then \(f^*E\) is just a copy of the fibre \(V\) and \(F: V \embed E\) is an embedding as the fibre over \(f(M) \in B\).
  \end{enumerate}
\end{eg}

In general, \(f^*E\) may be determined by pulling back the transition functions
\[
  f^* \psi_{\beta\alpha} = \psi_{\beta\alpha} \compose f: f^{-1}(U_\alpha \cap U_\beta) \to \GL(V).
\]
This gives an equivalent but more formal way to define pullbacks of bundles.

The above discussion applies equally well to pullbacks of principal \(G\)-bundles \(f^*P\), with the obvious change of notation.

\subsection{Bundle morphism}

\begin{definition}[vector bundle morphism]\index{vector bundle!morphism}
  Let \(\pi: E \to B, \pi': E' \to B'\) be two vector bundles and \(f: B \to B'\) a smooth map. A smooth map \(F: E \to E'\) is a \emph{vector bundle morphism (covering \(f\))} if the diagram
  \[
    \begin{tikzcd}
      E \ar[r, "F"] \ar[d, "\pi"] & E' \ar[d, "\pi'"] \\
      B \ar[r, "f"] & B'
    \end{tikzcd}
  \]
  commutes and for all \(p \in B\), the restriction \(F_p = F|_{E_p}: E_p \to E'_{f(p)}\) is a linear map.
\end{definition}

It follows that if \(\Phi, \Phi'\) are local trivialisations of \(E\) and \(E'\) respectively over \(U \subseteq B, U' \subseteq B'\) with \(f(U) \subseteq U'\), then the local expression \(\hat F = \Phi' \compose F \compose \Phi^{-1}\) is given by
\[
  \hat F(b, v) = (f(b), h(b)(v))
\]
for all \(b \in U\), where \(h: U \to L(V, V')\) is a smooth map.

\begin{eg}\leavevmode
  \begin{enumerate}
  \item If \(\varphi: M \to N\) is a smooth map then \(d \varphi: TM \to TN\), given by \(d\varphi_p\) for all \(p \in M\), is a bundle morphism.
  \item Given \(f: M \to B\) and \(\pi: E \to B\) a vector bundle, the pullback \(F: f^*E \to E\) is a bundle morphism.
  \item Let \(B' = B\) and \(f: B \to B\) a diffeomorphism.
If for all \(p \in B\), \(F_p: E_p \to E_{f(p)}\) is a linear isomorphism then \(F\) is a \emph{vector bundle isomorphism}\index{vector bundle!isomorphism}. Furthermore if \(f = \id_B, E' = E, \pi' = \pi\) then we have the commutative diagram
    \[
      \begin{tikzcd}
        E \ar[r, "F"] \ar[dr, "\pi"] & E \ar[d, "\pi"] \\
        & B
      \end{tikzcd}
    \]
    In this case \(F\) is called an \emph{automorphism}\index{vector bundle automorphism} of \(E\) and we denote it by \(F \in \aut(E)\).
  \item For example if \(E \cong B \times V\) then \(\aut(E) = C^\infty(B, \GL(V))\). If \(E\) has a \(G\)-structure then \(\aut_G(E) \subseteq \aut(E)\) is well-defined. Locally
    \[
      \hat F(b, v) = (b, h(b)(v))
    \]
    where \(h(b) \in G \subseteq \GL(V)\), whenever \(\hat F\) is taken with respect to local trivialisations \(\Phi\) from this \(G\)-structure. In mathematical physics, one often writes \(\aut_G(E) = \mathcal G\), the \emph{group of gauge transformations}. For example, \(G = U(1), SU(2), SO(3), SU(3)\).
  \end{enumerate}
\end{eg}

\section{Connections}

Let \(\pi: E \to B\) be a vector bundle and \(s: B \to E\) a section. Locally
\begin{align*}
  \hat s: U &\to U \times V \\
  b &\mapsto (b, s(b))
\end{align*}
which is really just a map to \(V\), so \(d\hat s_b: T_bU \to T_{s(b)}V \cong V\). On the other hand, the ordinary differential of \(s\) is a map \(ds_b: T_b B \to T_{(b, s(b))}E\). We would like to identify \(T_{s(b)}E_b \cong E_b\) as a distinguished subspace of \(T_{s(b)}E\).

How do we define the differential of a section of a vector bundle so that it behaves reasonably, like a gradient?

Some conventions throughout: let \(\dim B = n\) and \(U \subseteq B\) a coordinate neighbourhood that is simultaneously a trivialising neighbourhood for \(E\), with coordinates \(x^k\) where \(k = 1, \dots, n\). Let \(a = (a^j)_{j = 1}^m \in \R^m\) be coordinates on the fibres of \(E\). We use \(i, j = 1, \dots, m\) and \(k = 1, \dots, n\) and use the summation convention for Roman indices (but not Greek indices).

Let \(\Phi_U: \pi^{-1}(U) \to U \times \R^m\) be a local trivialisation. Then \(T_pE\) is the span of \(\frac{\partial  }{\partial x^k}, \frac{\partial  }{\partial a^j}\). For \(p \in E\) with \(\pi(p) = b\), the fibre \(E_b = \pi^{-1}(b) \subseteq E\) is a submanifold and
\[
  T_pE_b = \ker (d\pi_p)
\]
which is the span of \(\frac{\partial  }{\partial a^j}\).

\begin{definition}
  The \emph{vertical subspace} at \(p \in E\) is
  \[
    Tv_pE = \ker(d\pi_p).
  \]

  A subspace \(S_p \subseteq T_pE\) is a \emph{horizontal subspace} if
  \begin{align*}
    S_p \cap Tv_p E &= 0 \\
    S_p \oplus Tv_p E &= T_pE
  \end{align*}
\end{definition}

Then \(\dim S_p = \dim B\).

How to choose a horizontal subspace? Every \(n\)-dimensional subspace of \(T_pE \cong \R^{m + n}\) is of the form \(\bigcap_{i = 1}^m \ker \theta^i\) for some linearly independent \(\theta^1, \dots, \theta^m \in (\R^{m + n})^* \cong T_p^*E\) where
\[
  \theta^i = f_k^i \mathrm d x^k + g_j^i \mathrm d a^j
\]
where \(f_k^i, g_j^i \in \R\). A vector \(C = B^k \frac{\partial  }{\partial x^k} + C^j \frac{\partial  }{\partial a^j} \in T_p E\) is vertical if and only if \(B^k = 0\) for all \(k\).
Thus for \(S_p\) to be horizontal we need: whenever a vertical vector satisfies
\[
  \theta_p^i \left(C^j \frac{\partial  }{\partial a^j}\right) = g^i_j C^j = 0
\]
for all \(i\), it must be that \(C^j = 0\) for all \(j\). Hence the \(m \times m\) matrix \((g_j^i)\) is invertible. Let its inverse be \((h_j^i)\). Put
\[
  \tilde \theta_p^i := h_j^i \theta_p^j = \mathrm d a^i + e^i_k \mathrm d x^k
\]
which determines the same subspace \(S_p \subseteq T_pE\), where
\[
  e_k^i = e_k^i(p) = e_k^i(x, a) \in C^\infty(U \times \R^m).
\]

\begin{proposition}
  Every field of horizontal subspaces \(S_p\), \(p \in E\), is locally expressed as
  \[
    S_p = \bigcap_{i = 1}^m \ker \theta_p^i
  \]
  where
  \[
    \theta_p^i = \mathrm d a^i + e_k^i(x, a) \mathrm dx^k
  \]
  for some unique functions \(e_k^i\) on \(U \times \R^m\).

  We say \(S = S_p\) is smooth if all \(e_k^i\) are smooth.
\end{proposition}

\begin{definition}[connection]\index{connection}
  A field of horizontal subspaces \(S = S_p\) is a \emph{connection} on \(E\) if all \(e_k^i\)'s are linear in \(a\). Write
  \[
    e_k^i(x, a) = \Gamma_{jk}^i (x) a^j
  \]
  where \(\Gamma_{jk}^i \in C^\infty(U)\) are the coefficients of this connection in a local trivialisation.
\end{definition}

Then
\[
  \theta^i = \mathrm d a^i + \Gamma_{jk}^i(x) a^j \mathrm dx^k
  = \mathrm d a^i + A^i_ja^j
\]
where \(A^i_j \in \Omega^1(U)\).

The transformation law for \(A^i_j\) is obtained as follows. Suppose \(U'\) is another trivialising neighbourhood with \(U \cap U' \neq \emptyset\), \(\Psi = (\Psi^{i'}_i)_{m \times m}\) is the transition function from \(\Phi_U\) to \(\Phi_{U'}\), and \(\Psi^{-1} = (\Psi^i_{i'})_{m \times m}\) the transition function from \(\Phi_{U'}\) to \(\Phi_U\). Note that
\[
  \Psi_{i'}^i \Psi_j^{i'} = \delta_j^i.
\]
Suppose \(a^{i'} = \Psi_i^{i'} a^i\), then
\[
  \mathrm d a^{i'} = (d \Psi_i^{i'}) a^i + \Psi_i^{i'} da^i.
\]
Suppose
\begin{align*}
  \theta^{i'} &= d a^{i'} + A_{j'}^{i'} a^{j'} \\
  \theta^i &= da^i + A_j^i a^j
\end{align*}
then rewrite
\[
  \theta^{i'} = (d \Psi_i^{i'}) a^i + \Psi_i^{i'} da^i + A_{j'}^{i'} a^{j'}.
\]
Comparing the coefficient of \(da^i\), we must have
\[
  \theta^i = \Psi_{i'}^i \theta^{i'}
\]
so
\[
  \theta^i = da^i + (\Psi_{i'}^i d\Psi_j^{i'} + \Psi_{i'}^i A_{j'}^{i'} \Psi_j^{j'}) a^j
\]
so the transformation law is
\[
  A_j^i = \Psi_{i'}^i A_{j'}^{i'} \Psi_j^{j'} + \Psi_{i'}^i d\Psi_j^{i'}.
\]
Use \(A^\Phi\) to denote the matrix \((A_j^i)\); we have \(A^{\Phi'} = A^{\Psi\Phi} = (A_{j'}^{i'})\). Then the transformation law is
\[
  A^{\Psi\Phi} = \Psi A^\Phi \Psi^{-1} - (d\Psi) \Psi^{-1}.
\]
Noting that
\[
  (d\Psi)\Psi^{-1} + \Psi d(\Psi^{-1}) = 0,
\]
equally
\[
  A^\Phi = \Psi^{-1}A^{\Psi\Phi} \Psi + \Psi^{-1}(d\Psi).
\]

\begin{theorem}
  Every choice of local matrices \((A_j^i)\) of \(1\)-forms satisfying the transformation law above, assigned to local trivialisations, defines a connection on \(E\).
\end{theorem}
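As a simple illustration of the transformation law: start from the trivial connection \(A^\Phi = 0\) in some trivialisation \(\Phi\), so that \(\theta^i = \mathrm da^i\). In any other trivialisation the same connection is represented by
\[
  A^{\Psi\Phi} = \Psi \cdot 0 \cdot \Psi^{-1} - (d\Psi) \Psi^{-1} = -(d\Psi) \Psi^{-1},
\]
which is in general nonzero (such matrices are sometimes called \emph{pure gauge}). In particular the coefficients \(\Gamma^i_{jk}\) have no invariant meaning on their own.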
Recall the transformation law for the connection matrices
\[
  A^{\Psi\Phi} = \Psi A^\Phi \Psi^{-1} - (d\Psi) \Psi^{-1}
\]
where \(\Psi\) is the transition function from local trivialisation \(\Phi\) to \(\Phi'\), i.e.\ \(\Psi = \Phi' \compose \Phi^{-1}\). Write \(A^\Phi = (A^i_j)_{i,j = 1, \dots, m}\) where \(m\) is the rank of \(E\),
\[
  A^i_j = \Gamma^i_{jk}(x) \mathrm d x^k \in \Omega^1(U).
\]
Similarly \(A^{\Psi\Phi} = (A^{i'}_{j'})\). The \((A^i_j)\)'s do \emph{not} patch together over trivialising neighbourhoods \(U \subseteq B\) to give a global matrix-valued \(1\)-form, essentially because of the \((d\Psi) \Psi^{-1}\) correction term in the transformation law.

Let \(\Phi_\alpha, \Phi_\beta: \pi^{-1}(U) \to U \times V\) be two local trivialisations over \(U\). If \(G_\alpha \in \End(V)\) is considered in the \(\Phi_\alpha\) trivialisation then with respect to \(\Phi_\beta\), \(G_\alpha\) becomes
\[
  G_\beta = \Psi_{\beta\alpha} G_\alpha \Psi_{\alpha\beta},
\]
note that there is no summation here! This defines a linear action of \(\GL(V)\) on \(\End(V)\). Thus we can apply the Steenrod construction to build a vector bundle \(\End(E)\) using \(U \times \End(V)\) and the above formula for the transitions. This is the \emph{endomorphism bundle}\index{endomorphism} of \(E\), a bundle associated to \(E\).

\begin{remark}
  We may also define a subbundle \(\GL(E) \subseteq \End(E)\) with typical fibre \(\GL(V) \subseteq \End(V)\). \(\GL(E)\) is \emph{not} a principal bundle. For example it always has a global section
  \begin{align*}
    B &\to \GL(E) \\
    b &\mapsto \id_V
  \end{align*}
  Sections of \(\GL(E)\) are precisely \(\aut(E)\). Incidentally this also provides a practical counterexample to the statement that ``every bundle with fibre a group and transition functions in that group is a principal bundle''.
\end{remark}

Similarly to \(\End(E)\) we may build a vector bundle with typical fibre
\[
  \Lambda^r(\R^n)^* \otimes V \cong \{f: \underbrace{\R^n \times \dots \times \R^n}_{r \text{ times}} \to V \text{ multilinear and antisymmetric}\}
\]
(where the isomorphism comes from the fact in linear algebra that \((\R^n)^* \otimes V \cong \{\text{linear maps } \R^n \to V\}\) and so on), where \(n = \dim B\).
Use the action induced by the Jacobian matrices of local coordinate transformations on \(B\), and use the \(\Psi_{\alpha\beta}\)'s on the \(V\) factor.

\begin{table}[h]
  \centering
  \begin{tabular}{|c|c|c|}
    \hline
    vector bundle & typical fibre & transition functions \\ \hline
    \(E\) & \(V\) & \(v \mapsto \Psi_{\beta\alpha}v\) \\ \hline
    \(\End(E)\) & \(\End(V)\) & \(G \mapsto \Psi_{\beta\alpha} G \Psi_{\alpha\beta}\) \\ \hline
    \(T^*B \otimes E\) & \(L(\R^n, V) \cong (\R^n)^* \otimes V\) & \(v_k \mathrm dx^k \mapsto (\Psi_{\beta\alpha} v_k) \frac{\partial x^k}{\partial x^{k'}} \mathrm d x^{k'}\) \\ \hline
    \(\Lambda^r(T^*B) \otimes E\) & \(\Lambda^r(\R^n)^* \otimes V\) & \(v_K \mathrm d x^K \mapsto (\Psi_{\beta\alpha}v_K) \frac{D x^K}{D x^{K'}} \mathrm dx^{K'}\) \\ \hline
    \(T^*B \otimes \End E\) & \((\R^n)^* \otimes \End V\) & \(G_k \mathrm dx^k \mapsto (\Psi_{\beta\alpha} G_k \Psi_{\alpha\beta}) \frac{\partial x^k}{\partial x^{k'}} \mathrm d x^{k'}\) \\ \hline
    \(\Lambda^rT^*B \otimes \End E\) & \(\Lambda^r(\R^n)^* \otimes \End V\) & \(G_K \mathrm dx^K \mapsto (\Psi_{\beta\alpha} G_K \Psi_{\alpha\beta}) \frac{D x^K}{D x^{K'}} \mathrm dx^{K'}\) \\ \hline
  \end{tabular}
\end{table}

where we multiply vectors and matrices of forms in \(\Lambda^r(\R^n)^* \otimes V\) and \(\Lambda^r(\R^n)^* \otimes \End V\) using the wedge product of forms.

(The good news is we mostly use \(r = 1, 2\).)

Connections are \emph{not} sections of \(T^*B \otimes \End E\), but if \(A\) and \(\tilde A\) are two connections then \(A - \tilde A\) \emph{is} a valid section of \(T^*B \otimes \End E\). Thus the space of all connections \(A\) on a given vector bundle \(E\) is an \emph{affine space} modelled on the space of sections of \(T^*B \otimes \End E\), written \(\Omega^1_B(\End E)\). (This means there is no canonical ``zero'' connection.)

\begin{notation}
  Use \(\Gamma(E)\) to denote all sections of \(E\) and \(\Omega^r_B(E), \Omega^r_B(\End E)\) to denote sections of \(\Lambda^r T^*B \otimes E\) and \(\Lambda^r T^*B \otimes \End E\). In particular \(\Omega^0_B(E) = \Gamma(E)\).
\end{notation}

Now we are ready to define calculus on vector bundles.

\begin{definition}[covariant derivative]\index{covariant derivative}
  A \emph{covariant derivative} on a real vector bundle \(E\) over \(B\) is an \(\R\)-linear map \(\nabla^E: \Gamma(E) \to \Omega^1_B(E)\) satisfying the Leibniz rule
  \[
    \nabla^E(fs) = \mathrm df \otimes s + f \nabla^E s
  \]
  for all \(s \in \Gamma(E), f \in C^\infty(B)\).
\end{definition}

\begin{eg}
  Let \(A\) be a connection on \(E\). In any local trivialisation we have \(A = (A_j^i)\) where \(A_j^i \in \Omega^1(U)\). Define a map \(\mathrm d_A\) which is locally given by
  \[
    \mathrm d_A s|_U = (\mathrm ds + As)|_U = (\mathrm ds^i + A_j^i s^j)_{i = 1, \dots, m}
  \]
  where \(m\) is the rank of \(E\). To check this is well-defined: if \(U\) is also a coordinate neighbourhood in \(B\) then
  \[
    (\mathrm d_A s)^i = \left( \frac{\partial s^i}{\partial x^k} + \Gamma_{jk}^i(x) s^j \right) \mathrm dx^k.
  \]
  Let \(\Phi'\) be another local trivialisation over \(U\).
Then
  \begin{align*}
    s &= \Psi s' \\
    A &= \Psi A' \Psi^{-1} - (d\Psi) \Psi^{-1}
  \end{align*}
  so
  \begin{align*}
    \mathrm d_A(s)
    &= \mathrm d s + As \\
    &= \mathrm d(\Psi s') + (\Psi A' \Psi^{-1} - (d \Psi) \Psi^{-1}) \Psi s' \\
    &= \Psi \mathrm ds' + (d\Psi) s' + \Psi A' s' - (d\Psi) s' \\
    &= \Psi(\mathrm d s' + A's') \\
    &= \Psi(\mathrm d_{A'} s')
  \end{align*}
  which is the correct transformation law for \(\Omega^1_B(E)\).
\end{eg}

\begin{theorem}
  Every covariant derivative \(\nabla^E\) arises as \(\nabla^E = d_A\) for some connection \(A\) on \(E\).
\end{theorem}

Recall that
\[
  d_As = ds + As
\]
where the RHS only makes sense with a choice of local trivialisation!

\begin{proof}
  Claim: every \(\nabla^E\) is a local operator, i.e.\ for all \(U \subseteq B\) open, if \(s_1|_U = s_2|_U\) then \(\nabla^E s_1|_U = \nabla^E s_2|_U\). (Remark by writer of notes: essentially a partition of unity subordinate to \(\{U, B \setminus \cl U_0\}\).)
  Indeed for all \(b \in U\) let
  \[
    b \in U_0 \subseteq \cl U_0 \subseteq U
  \]
  and let \(\alpha \in C^\infty(B)\) be such that \(0 \leq \alpha \leq 1\), \(\alpha|_{U_0} = 1\) and \(\alpha|_{B \setminus U} = 0\). Then \(\alpha(s_1 - s_2) = 0\) identically, so
  \[
    0 = \nabla^E (\alpha(s_1 - s_2))
    = d\alpha \otimes (s_1 - s_2)
    + \alpha \nabla^E(s_1 - s_2).
  \]
  At \(b\), \(d\alpha_b = 0, \alpha(b) = 1\) whence
  \[
    (\nabla^E s_1)(b) = (\nabla^E s_2)(b)
  \]
  so it suffices to work in trivialising neighbourhoods. Let \(\V s = (s^1, \dots, s^m) = s^i \V e_i\) where \(\V e_i\) is the standard basis of \(\R^m\) and \(s^i \in C^\infty(U)\). Put
  \[
    \Gamma_{jk}^i = \left[(\nabla^E \V e_j) \frac{\partial  }{\partial x^k} \right]^i
  \]
  where we assumed that \(U\) is also a coordinate neighbourhood in \(B\). \(\Gamma_{jk}^i \in C^\infty(U)\). Thus
  \[
    \nabla^E \V s = \nabla^E (s^i \V e_i)
    = (ds^i + s^j \Gamma^i_{jk}(x) dx^k) \otimes \V e_i
    = d_As.
  \]
  Then the previous example verifies that \(A_j^i = \Gamma_{jk}^i dx^k\) have the required transformation law.
\end{proof}

\begin{definition}[covariantly constant]\index{covariantly constant}
  A (local) section \(s: U \to E\) is \emph{covariantly constant} with respect to a connection \(A\) if \(d_As = 0\).
\end{definition}

This is the geometer's version of a constant vector-valued function.

\begin{proposition}
  \((d_As)(x) = 0\) if and only if \(ds_x (T_xB) \subseteq T_{s(x)}E\) is the horizontal subspace associated with \(A\), i.e.\ \(ds_x(T_xB) = S_{s(x)}\).
\end{proposition}

\begin{proof}
  Exercise.
\end{proof}

We can extend \(d_A\) as an \(\R\)-linear map \(d_A: \Omega_B^r(E) \to \Omega_B^{r + 1}(E)\) for \(r = 0, 1, \dots\) by requiring
\[
  d_A(\sigma \w \omega) = (d_A\sigma) \w \omega + (-1)^{\deg \sigma} \sigma \w d\omega
\]
for all \(\sigma \in \Omega_B^q(E), \omega \in \Omega^p(B)\). Locally
\[
  d_A(s_I dx^I) = (d_A s_I) \w dx^I = d(s_I dx^I) + (A s_I) \w dx^I,
\]
i.e.\
\[
  d_A\sigma = d\sigma + A \w \sigma.
\]
Again, the RHS only makes sense with a choice of local trivialisation. We can further extend \(d_A\) to
\[
  d_A: \Omega_B^r(\End E) \to \Omega_B^{r + 1}(\End E)
\]
via
\[
  (d_AC)s = d_A(Cs) - Cd_As
\]
for all \(C \in \Gamma(\End E), s \in \Gamma(E)\) (this is really just the product rule rearranged).
Also
\[
  (d_A\mu) \w \sigma = d_A(\mu \w \sigma) - (-1)^{\deg \mu} \mu \w d_A\sigma
\]
for \(\mu \in \Omega_B^p(\End E), \sigma \in \Omega_B^r(E)\). Finally
\[
  d_A(\mu_1 \w \mu_2) = (d_A\mu_1) \w \mu_2 + (-1)^{\deg \mu_1} \mu_1 \w d_A \mu_2
\]
for \(\mu_1, \mu_2 \in \Omega_B^*(\End E)\).

\begin{eg}
  For \(\mu \in \Omega_B^2(\End E)\), the above theory implies that in any local trivialisation,
  \[
    d_A\mu = d\mu + A \w \mu - \mu \w A.
  \]
\end{eg}

\subsection{Curvature}

We may repeatedly apply the covariant derivative \(d_A\) to get a chain
\[
  \Gamma(E) = \Omega_B^0(E) \xrightarrow{d_A} \Omega_B^1(E) \xrightarrow{d_A} \dots \xrightarrow{d_A} \Omega_B^n(E) \xrightarrow{d_A} 0.
\]
Unfortunately, in general \(d_A \compose d_A \neq 0\), so we don't get a cohomology. However, in a local trivialisation, for \(s \in \Omega_B^r(E)\) we have
\begin{align*}
  d_A d_As
  &= d(ds + A \w s) + A \w (ds + A \w s) \\
  &= dA \w s - A \w ds + A \w ds + A \w A \w s \\
  &= (d A + A \w A) \w s
\end{align*}
so \(d_Ad_A\) is multiplication by a \(2\)-form. The expression is a pointwise (i.e.\ algebraic) expression: if \(s_1(b) = s_2(b)\) then
\[
  (d_A d_A s_1)_b = (d_A d_A s_2)_b.
\]

\begin{note}
  One may wonder why \(A \w A\) does not vanish in the expression. In fact it is matrix valued:
  \[
    (A \w A)_j^i
    = \Gamma_{pk}^i dx^k \w \Gamma_{j\ell}^p dx^\ell
    = \Gamma_{pk}^i \Gamma_{j\ell}^p dx^k \w dx^\ell,
  \]
  which is nonvanishing in general unless \(E\) has rank \(1\). In summary, matrix multiplication is noncommutative.
\end{note}

\begin{definition}[curvature]\index{curvature}
  Given a connection \(A\), the \(2\)-form
  \[
    F(A) = dA + A \w A \in \Omega_B^2(\End E)
  \]
  is the \emph{curvature} of \(A\).
\end{definition}

Note that \(F(A)\) is well-defined (i.e.\ independent of local trivialisation) as \(d_Ad_A\) is so.

Locally \(F(A) = \frac{1}{2} F(A)_{j, k\ell}^i dx^k \w dx^\ell\) where \(\{F(A)_{j, k\ell}^i\}\) is a function of \(\{\Gamma_{jk}^i, \frac{\partial \Gamma_{jk}^i}{\partial x^\ell}\}\).

\begin{definition}[flat]\index{connection!flat}\index{vector bundle!flat}
  A connection is \emph{flat} if \(F(A) = 0\).

  A vector bundle \(E\) is \emph{flat} if it is endowed with a choice of flat connection.
\end{definition}

\begin{eg}
  Consider the product bundle \(E = B \times \R^m\). Then
  \[
    d_A = d: C^\infty(B, \R^m) \to \Omega^1(B) \otimes \R^m
  \]
  is the \emph{trivial} product connection. It is flat.

  The converse holds only locally and only if the connection is flat: a flat connection looks like the trivial product connection in suitable local trivialisations.
\end{eg}

\begin{theorem}[2nd Bianchi identity]\index{Bianchi identity!second}
  \[
    d_AF(A) = 0
  \]
  for every connection \(A\) on a vector bundle \(E\).
\end{theorem}

\begin{proof}
  Let \(s \in \Gamma(E)\).
Then
  \[
    d_A(F(A)s) = d_A(d_Ad_As) = (d_Ad_A)d_As = F(A) \w d_As.
  \]
  But the LHS may be rewritten as
  \[
    d_A(F(A))s + F(A) \w d_As.
  \]
  Substitute and cancel to get the result.
\end{proof}

\section{Riemannian geometry}

\begin{definition}
  A \emph{Riemannian metric} \(g\) on \(M\) is a field of positive-definite symmetric bilinear forms
  \[
    g_p : T_pM \times T_pM \to \R
  \]
  defined for all \(p \in M\) and smooth in \(p\).
\end{definition}

\begin{definition}[Riemannian manifold]\index{Riemannian manifold}
  A \emph{Riemannian manifold} is a manifold endowed with a Riemannian metric.
\end{definition}

\begin{remark}
  Smoothness in \(p\) means that for all \(X, Y \in V(M)\) we have \(g(X, Y) \in C^\infty(M)\). Equivalently, in each coordinate neighbourhood \(U\) with coordinates \(x^i\) we have
  \[
    g_{ij} = g\left(\frac{\partial  }{\partial x^i}, \frac{\partial  }{\partial x^j}\right) \in C^\infty(U).
  \]
  We thus obtain the local expression of \(g\),
  \[
    g_{ij} dx^idx^j.
  \]
  Technically it should be \(g_{ij} dx^i \otimes dx^j\), but we omit the tensor symbol. This means that if \(X = X^i \frac{\partial  }{\partial x^i}, Y = Y^i \frac{\partial  }{\partial x^i}\) then
  \[
    g(X, Y) = g_{ij} X^iY^j.
  \]
\end{remark}

Formally \(g \in \Gamma(T^*M \otimes T^*M)\).

\begin{eg}
  This is compatible with the more classical definition of a Riemannian metric in the differential geometry of curves and surfaces. Given \(r = r(u, v): \R^2_{u, v} \to \R^3\), the first fundamental form
  \[
    E \, du^2 + 2F \, dudv + G \, dv^2
  \]
  is a Riemannian metric with \(g_{11} = E, g_{12} = g_{21} = F, g_{22} = G\).
\end{eg}

\begin{theorem}
  Every manifold admits a Riemannian metric.
\end{theorem}

\begin{proof}
  Apply example sheet 3 question 2 (every vector bundle can be given a smooth inner product) to \(TM\).
\end{proof}

The definition of a pullback \(F^*\) for smooth \(F: M \to N\) is valid for bilinear forms on \(TN\). If \(g_N\) is a metric on \(N\) then \(F^*g_N\) is a symmetric, bilinear, smooth and non-negative definite form on the fibres of \(TM\).

If in addition \(dF_x\) is injective for all \(x \in M\), then \(F^*g_N\) is positive-definite on \(TM\) and is a valid Riemannian metric. This applies, for example, when \(M\) is an immersed submanifold or \(F\) is a diffeomorphism.

This gives us an alternative way to prove the theorem: embed in Euclidean space by Whitney's theorem and pull back the Euclidean metric.

\begin{definition}[connection]\index{connection}
  A \emph{connection on a manifold \(M\)} is a connection on \(TM\).
\end{definition}

Recall that the transition functions for \(TM\) are given by the Jacobian
\[
  \psi = (\psi^i_{i'}) = \left( \frac{\partial x^i}{\partial x^{i'}} \right).
\]
The coefficients \(\Gamma_{jk}^i\) of a connection on \(M\) are sometimes called \emph{Christoffel symbols}\index{Christoffel symbols}.
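\begin{eg}
  On \(\R^n\) with its standard coordinates, \(\Gamma^i_{jk} \equiv 0\) defines a connection on \(T\R^n\) (the trivial product connection from the earlier example). Note, however, that the condition \(\Gamma^i_{jk} \equiv 0\) is not coordinate-invariant: under a nonlinear change of coordinates the transformation law recalled next picks up an inhomogeneous second-derivative term.
\end{eg}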
Recall the transformation law for the connection matrices \((A_j^i) = (\Gamma_{jk}^i dx^k)\):
\[
  \Gamma^i_{jk} = \Gamma^{i'}_{j'k'} \psi^i_{i'} \psi^{j'}_j \frac{\partial x^{k'}}{\partial x^k} + \psi^i_{i'} \frac{\partial \psi^{i'}_j}{\partial x^k}.
\]
When \(E = TM\) this becomes
\[
  \Gamma^i_{jk} = \Gamma^{i'}_{j'k'} \frac{\partial x^i}{\partial x^{i'}} \frac{\partial x^{j'}}{\partial x^j} \frac{\partial x^{k'}}{\partial x^k} + \frac{\partial x^i}{\partial x^{i'}} \frac{\partial^2 x^{i'}}{\partial x^j \partial x^k}.
\]
Note that in this case the object \(\Gamma^i_{kj}\) obtained by switching \(j\) and \(k\) makes sense, as the two lower indices now live in the same space. It obeys the same transformation law (the inhomogeneous term is symmetric in \(j, k\)), so it also defines a connection.

The difference \(T^i_{jk} = \Gamma^i_{jk} - \Gamma^i_{kj}\) is the \emph{torsion}\index{torsion} of a connection on \(M\): \(T = (T^i_{jk}) \in \Omega^1_M(\End TM)\), locally \(T^i_j = T^i_{jk} dx^k\). Moreover, given \(X, Y \in V(M)\),
\[
  T(X, Y) = T^i_{jk} X^j Y^k \frac{\partial  }{\partial x^i} \in V(M).
\]
Then \(T(X, Y) = -T(Y, X)\), so \(T \in \Omega^2_M(TM)\).

\begin{definition}[symmetric connection]\index{connection!symmetric}
  A connection on a manifold \(M\) is \emph{symmetric} if the torsion vanishes, i.e.
  \[
    \Gamma^i_{jk} = \Gamma^i_{kj}
  \]
  in every local coordinate system.
\end{definition}

Denote the covariant derivative on \(M\) by
\[
  D: \Omega_M^0(TM) = V(M) \to \Omega_M^1(TM).
\]
Denote by \(D_XY\), where \(X, Y \in V(M)\), the evaluation of \(DY \in \Omega^1_M(TM)\) on \(X\). Thus \(D_XY \in V(M)\), and this is \(\R\)-bilinear in \(X\) and \(Y\).

In local coordinates,
\[
  (D_XY)^i \p_i = (X^k\p_k Y^i + \Gamma^i_{jk} Y^jX^k) \p_i.
\]
Consequently we obtain

\begin{proposition}
  A connection \(D\) on \(M\) is symmetric if and only if
  \[
    D_XY - D_YX = [X, Y]
  \]
  for all \(X, Y \in V(M)\).
\end{proposition}

\begin{theorem}[Levi-Civita connection]\index{Levi-Civita connection}
  \label{thm:Levi-Civita}
  On each Riemannian manifold \((M, g)\) there is a unique connection \(D\) such that
  \begin{enumerate}
  \item for every \(X, Y, Z \in V(M)\),
    \[
      Zg(X, Y) = g(D_ZX, Y) + g(X, D_ZY);
    \]
  \item \(D\) is symmetric.
  \end{enumerate}

  \(D\) is called the \emph{Levi-Civita connection} of \(g\).
\end{theorem}

Recall that given \(X \in V(M)\), \(D_X: V(M) \to V(M)\) is a covariant derivative if and only if
\begin{enumerate}
\item \(\R\)-linearity in \(Y\): \(D_X(\lambda Y) = \lambda D_XY\) for all \(\lambda \in \R, Y \in V(M)\),
\item Leibniz rule: \(D_X(hY) = (Xh) \cdot Y + hD_XY\) for all \(h \in C^\infty(M)\),
\item \(C^\infty(M)\)-linearity in \(X\): \(D_{fX}Y = fD_XY\) for all \(f \in C^\infty(M)\).
\end{enumerate}

\begin{proof}[Proof of \Cref{thm:Levi-Civita}]
  We do uniqueness first.
In a coordinate neighbourhood,
  \[
    D \p_i = \Gamma_{ik}^p \, dx^k \otimes \p_p.
  \]
  For condition 1 in the statement of the theorem, let \(X = \p_i, Y = \p_j, Z = \p_k\), so that \(g(X, Y) = g_{ij}\); we obtain the first of the equations
  \begin{align*}
    \p_k g_{ij} &= \Gamma_{ik}^p g_{pj} + \Gamma_{jk}^p g_{ip} \\
    \p_j g_{ki} &= \Gamma_{kj}^p g_{pi} + \Gamma_{ij}^p g_{kp} \\
    \p_i g_{jk} &= \Gamma_{ji}^p g_{pk} + \Gamma_{ki}^p g_{jp}
  \end{align*}
  and the next two are similarly obtained by cyclic permutation of \(i, j, k\).

  Define \((g^{iq}) = (g_{iq})^{-1}\), the inverse matrix. Using the fact that \(g_{iq}\) is symmetric,
  \[
    \Gamma_{jk}^p g_{pq} g^{iq} = \Gamma_{jk}^i.
  \]
  Summing the equations as \(1 + 2 - 3\), and using the symmetry \(\Gamma^p_{jk} = \Gamma^p_{kj}\) (condition 2), gives
  \[
    \p_k g_{ij} + \p_j g_{ki} - \p_i g_{jk}
    = 2\Gamma_{jk}^p g_{ip},
  \]
  which is the same as
  \[
    \Gamma_{jk}^p g_{qp} = \frac{1}{2} (\p_k g_{qj} + \p_j g_{kq} - \p_q g_{jk}).
  \]
  Multiply by \(g^{iq}\) to get
  \[
    \Gamma_{jk}^i = \frac{1}{2} g^{iq}(\p_k g_{qj} + \p_j g_{kq} - \p_q g_{jk}),
  \]
  which is to say, there is at most one way to define the Christoffel symbols.

  \begin{ex}
    Show that the equation has a coordinate-free formulation
    \begin{align*}
      g(D_XY, Z) &= \frac{1}{2}(Xg(Y, Z) + Yg(Z, X) - Zg(X, Y) \\
      &\quad - g(Y, [X, Z]) - g(Z, [Y, X]) - g(X, [Y, Z])).
    \end{align*}
  \end{ex}

  To check the requirements for a covariant derivative, 1 is clear from the coordinate-free formula. We sketch the verification of 2 and 3 with the following checkpoints:
  \begin{enumerate}
  \item 3: Let \(f \in C^\infty(M)\). Recall
    \[
      [fX, Z] = (fX)Z - Z(fX) = f[X, Z] - (Zf)X,
    \]
    so
    \begin{align*}
      g(D_{fX}Y, Z)
      &= \frac{1}{2}(f Xg(Y, Z) + Y(fg(Z, X)) - Z(fg(X, Y)) \\
      &\quad - f(g(Y, [X, Z]) + g(Z, [Y, X]) + g(X, [Y, Z])) \\
      &\quad + (Zf)g(X, Y) - (Yf)g(Z, X)) \\
      &\quad\vdots \\
      &= g(fD_XY, Z)
    \end{align*}
    after some cancellation, where we also expand \(Y(fg(Z, X))\) and \(Z(fg(X, Y))\) using
    \[
      Y(fh) = (Yf) \cdot h + f \cdot Yh.
    \]
  \item 2: Let \(h \in C^\infty(M)\). Then
    \begin{align*}
      g(D_X(hY), Z)
      &= \frac{1}{2}\big(X(h g(Y, Z)) + hYg(Z, X) - Z(hg(X, Y)) - hg(Y, [X, Z]) - g(Z, [hY, X]) + g(X, [Z, hY])\big) \\
      &= \frac{1}{2}\big((Xh)g(Y, Z) + h Xg(Y, Z) + hYg(Z, X) - (Zh) g(X, Y) - hZ g(X, Y) \\
      &\quad - hg(Y, [X, Z]) - hg(Z, [Y, X]) + (Xh) g(Z, Y) + hg(X, [Z, Y]) + (Zh) g(X, Y)\big) \\
      &= (Xh) g(Y, Z) + hg(D_XY, Z)
    \end{align*}
    for all \(Z\). Thus
    \[
      D_X(hY) = (Xh)Y + hD_XY.
    \]
  \end{enumerate}
  Thus \(D\) is a well-defined connection. For statement 1 of the theorem, we can trace the deduction of \(\Gamma_{jk}^i\) backwards.
\end{proof}
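\begin{eg}
  As a standard illustration of the formula just derived: for the Euclidean metric on \(\R^2 \setminus \{0\}\) in polar coordinates, \(g = dr^2 + r^2 d\theta^2\), so \(g^{rr} = 1\) and \(g^{\theta\theta} = r^{-2}\), and
  \[
    \Gamma^r_{\theta\theta} = -r, \qquad \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = \frac{1}{r},
  \]
  with all other Christoffel symbols zero.
\end{eg}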
\subsection{Geodesics}

Let \(\gamma\) be a curve in \(U\), a coordinate neighbourhood of \(M\). Let \(\pi: E \to M\) be a vector bundle equipped with a connection \(A\), \(A = (\Gamma_{jk}^i)\) on \(U\). A \emph{lift} of \(\gamma\) to \(E\) is a curve \(\gamma_E\) such that \(\pi \compose \gamma_E = \gamma\). If \(\gamma(t) = (x^k(t))\) in local coordinates, then
\[
  \gamma_E(t) = (x^k(t), a^i(t))
\]
in \(U \times V = \pi^{-1}(U)\). A \emph{horizontal lift} means that
\[
  \dot \gamma_E(t) \in S_{\gamma_E(t)}
\]
for all \(t\), where \(S\) denotes the horizontal subspace with respect to \(A\). In other words,
\[
  \theta^i(\dot \gamma_E(t)) = 0
\]
for \(i = 1, \dots, m\) where \(m\) is the rank of \(E\). Expanding,
\[
  (da^i + \Gamma_{jk}^i a^j dx^k) \left(\dot x^k(t) \frac{\partial  }{\partial x^k} + \dot a^j(t) \frac{\partial  }{\partial a^j}\right) = 0.
\]
Simplify to get
\[
  \dot a^i + \Gamma_{jk}^i(x) a^j \dot x^k = 0.
\]
This is a system of \(1\)st order linear ODEs, so by basic results in ODE theory it is solvable on any interval \(I \subseteq \R\) and the solution is uniquely determined by the \(a^i(0)\). Thus a horizontal lift \emph{always exists} through each point \(a \in E\).

(Missed a lecture on 17/11/18.)

Fix \(p \in M\). If \(\gamma(t, a) = \gamma_p(t, a)\) is a geodesic and \(\lambda \in \R\), then
\begin{align*}
  \frac{d}{dt} \gamma(\lambda t, a) &= \lambda \dot \gamma(\lambda t, a) \\
  \frac{d^2}{dt^2} \gamma(\lambda t, a) &= \lambda^2 \ddot \gamma(\lambda t, a),
\end{align*}
so \(t \mapsto \gamma(\lambda t, a)\) is again a geodesic, with initial velocity \(\lambda a\); hence
\[
  \gamma(\lambda t, a) = \gamma(t, \lambda a).
\]
Furthermore
\begin{enumerate}
\item for all \(a \in T_pM\), there exists \(\varepsilon = \varepsilon_a > 0\) such that
  \[
    \gamma(s, a) = \gamma(1, sa)
  \]
  exists for all \(|s| < \varepsilon\);
\item from the theory of ODEs, \(\varepsilon_a\) may be chosen continuously in \(a\): for \(|a|_g \leq \varepsilon_1\) we have \(\varepsilon_a \geq \varepsilon_2 > 0\). Hence if \(|a|_g < \varepsilon = \varepsilon_1 \varepsilon_2\) then \(\gamma(1, a)\) is defined.
\end{enumerate}

\begin{definition}[exponential map]\index{exponential map}
  Let \((M, g)\) be a Riemannian manifold and \(p \in M\). The \emph{exponential map} at \(p\) is
  \begin{align*}
    \exp_p: T_p M &\to M \\
    a &\mapsto \gamma_p(1, a)
  \end{align*}
\end{definition}
Thus for \(\varepsilon > 0\) sufficiently small, \(\exp_p: \text{Ball}_g(0, \varepsilon) \to M\) is a well-defined smooth map.

\begin{proposition}
  \[
    (d \exp_p)_0 = \id_{T_pM},
  \]
  identifying \(T_0(T_pM) \cong T_pM\).
\end{proposition}

\begin{proof}
  One approach is to Taylor expand in local coordinates,
  \[
    \gamma(t, a) = p + at + c_2 t^2 + \dots,
  \]
  and substitute into the geodesic ODE to get
  \[
    c_2^i = -\frac{1}{2} \Gamma_{jk}^i (p) a^ja^k.
  \]
  Then do some hands-on analysis.

  Another approach is to note \(\exp_p(0) = p\). For \(|a|_g < \varepsilon\), \(\gamma_p(t, a) = \gamma_p(1, ta)\) for \(-1 \leq t \leq 1\). Then
  \begin{align*}
    (d \exp_p)_0 a
    &= \frac{d}{dt} \exp_p(t a) \Big|_{t = 0} \\
    &= \frac{d}{dt} \gamma_p(1, ta) \Big|_{t = 0}
      = \frac{d}{dt} \gamma_p (t, a) \Big|_{t = 0} \\
    &= \dot \gamma_p(t, a) \Big|_{t = 0} \\
    &= a
  \end{align*}
\end{proof}

\begin{corollary}
  \(\exp_p: \text{Ball}_g(0, r) \to U\) is a diffeomorphism onto its image \(U\) for some \(r > 0\).
\end{corollary}

\begin{proof}
  Inverse mapping theorem.
\end{proof}

This means that \((\exp_p)^{-1}\) is a chart around \(p \in M\). The respective local coordinates are the \emph{normal coordinates}\index{normal coordinate}, aka \emph{geodesic coordinates}\index{geodesic coordinate}, at \(p\).

For example, in geodesic coordinates
\[
  \exp_p^{-1} (\gamma_p(t, a)) = ta
\]
for \(|t|\) sufficiently small.
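\begin{eg}
  For \(\R^n\) with the Euclidean metric, the Christoffel symbols vanish in standard coordinates, so geodesics are straight lines, \(\gamma_p(t, a) = p + ta\), and \(\exp_p(a) = p + a\). Normal coordinates at \(p\) are then just Cartesian coordinates centred at \(p\).
\end{eg}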
For \emph{geodesic polars}\index{geodesic polars}, identify \((T_pM, g(p)) \cong (\R^n, \text{Euclidean})\) and suppose \((0, \varepsilon) \times S^{n - 1}\) is included in the domain of \(\exp_p\). Then we can define a map
\begin{align*}
  f: (0, \varepsilon) \times S^{n - 1} &\to M \\
  (r, v) &\mapsto \exp_p(r v)
\end{align*}
Let \(\Sigma_r\) be the image under \(f\) of \(\{r\} \times S^{n - 1}\); then \(f\) restricts to an immersion (in fact an embedding) of \(\{r\} \times S^{n - 1}\) into \(M\), and \(\Sigma_r\) is a \emph{geodesic sphere} around \(p\).

\begin{theorem}[Gauss lemma]\index{Gauss lemma}
  \(\gamma_p(t, a)\) meets \(\Sigma_r\) orthogonally for all \(0 < r < \varepsilon\). Thus in geodesic polars,
  \[
    g = dr^2 + h(r, v)
  \]
  where \(h(r, \cdot) = g|_{\Sigma_r}\).
\end{theorem}

So we may write \(g\) as the block matrix
\[
  \begin{pmatrix}
    1 & 0 & \dots & 0 \\
    0 & & & \\
    \vdots & & h(r, v) & \\
    0 & & &
  \end{pmatrix}
\]

\begin{proof}
  Choose any \(X \in V(S^{n - 1})\), regarding \(S^{n - 1}\) as the unit sphere in \(T_pM\) with respect to \(g(p)\). Extend \(X\) to \(B \setminus \{0\}\), where \(B = \{a \in T_pM: |a|_{g(p)} \leq 1\}\), by setting \(\tilde X(r, v) = rX(v)\). Define
  \[
    Y(f(r, v)) = d(\exp_p)_{rv} \tilde X(r, v),
  \]
  a vector field on \(B' = \exp_p(B \setminus \{0\}) \subseteq M\). We want to show \(Y \perp \frac{\p}{\p r}\) at each point of \(B'\). Note that
  \[
    \frac{\p}{\p r} = \dot \gamma_p(t, a) \frac{1}{|a|_{g(p)}},
  \]
  so \(\dot \gamma(t, a)\) defines a vector field on \(B'\), for \(|a|_{g(p)} = 1\) and \(0 < t < \varepsilon\). Taking the limit as \(r \to 0\) we obtain \(g(\frac{\partial  }{\partial r}, \frac{\partial  }{\partial r}) = 1\).

  Thus, to show \(g(Y, \dot \gamma) = 0\): first,
  \begin{align*}
    D_{\dot \gamma} Y - D_Y \dot \gamma
    &= (df)\left(D_{\p/\p r} \tilde X - D_{\tilde X} \frac{\partial  }{\partial r}\right) \\
    &= (df) \left(\frac{d}{dr} \tilde X\right) \\
    &= (df)\, \frac{\tilde X}{r} \\
    &= \frac{Y}{r}
  \end{align*}
  by linearity of \(df\) at each point. Thus
  \begin{align*}
    \frac{d}{dr} g(Y, \dot \gamma)
    &= g(D_{\dot \gamma} Y, \dot \gamma) + g(Y, \underbrace{D_{\dot \gamma} \dot \gamma}_{= 0}) \\
    &= g\left(D_Y \dot \gamma + \frac{Y}{r}, \dot \gamma\right) \\
    &= \frac{1}{r} g(Y, \dot \gamma),
  \end{align*}
  using \(g(D_Y \dot\gamma, \dot\gamma) = \frac{1}{2} Y g(\dot\gamma, \dot\gamma) = 0\). Let \(G = g(Y, \dot \gamma)\); we have the ODE
  \[
    \frac{d}{dr} G = \frac{G}{r}.
  \]
  So \(G\) is linear in \(r\) and \(\frac{dG}{dr}\) is independent of \(r\). But
  \[
    \lim_{r \to 0^+} \frac{dG}{dr} = \lim_{r \to 0^+} g\left(X, \frac{\partial  }{\partial r}\right) = 0
  \]
  because \((d \exp_p)_0\) is an isometry. Hence \(G \equiv 0\), as required.
\end{proof}
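\begin{eg}
  (Standard example.) For the round unit sphere \(M = S^2\) and any \(p\), geodesic polars around \(p\) are defined for \(0 < r < \pi\), and the metric takes the form guaranteed by the theorem:
  \[
    g = dr^2 + \sin^2 r \, d\theta^2,
  \]
  i.e.\ \(h(r, v) = \sin^2 r \, d\theta^2\).
\end{eg}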
\subsection{Curvature of a Riemannian metric}

\begin{definition}[Riemannian curvature]\index{Riemannian curvature}
  Let \((M, g)\) be a Riemannian manifold. The \emph{(full) Riemannian curvature} \(R = R(g)\) of the metric \(g\) is the curvature of the Levi-Civita connection of \(g\).
\end{definition}

By the general theory, \(R(g) \in \Omega_M^2(\End TM)\). In local coordinates,
\[
  R = \frac{1}{2} R^i_{j, k \ell} dx^k \w dx^\ell
\]
where \(i, j, k, \ell\) all take values \(1, \dots, n\), \(n = \dim M\). \((R^i_{j, k \ell})\) is called the \emph{Riemann curvature tensor}\index{Riemann curvature tensor}.

Let \(X, Y \in V(M)\). Then \(R(X, Y) \in \Gamma(\End TM)\). Explicitly, if \(X = X^k \p_k, Y = Y^\ell \p_\ell\) then
\[
  R(X, Y) = (R^i_{j, k \ell} X^k Y^\ell)_{i, j}.
\]
We may define \(R_{k\ell} = R(\p_k, \p_\ell)\), local sections of \(\End TM\), so
\[
  R(X, Y) = X^k Y^\ell R_{k\ell}.
\]

In local coordinates, \(D = d + A\) where \(A = A_k dx^k = (\Gamma^i_{jk}) dx^k\). Denote
\[
  D_k = D_{\p_k} = \p_k + A_k.
\]
For all \(Z \in V(M)\) we have
\begin{align*}
  (- D \compose D) Z &= RZ = \left(\tfrac{1}{2} R_{k \ell}\, dx^k \wedge dx^\ell\right) Z \\
  R_{k \ell} Z &= (D_\ell D_k - D_k D_\ell) Z
\end{align*}
thus \(R_{k \ell} = - [D_k, D_\ell]\) and thus
\[
  R^i_{j, k \ell} = ((D_\ell D_k - D_k D_\ell) \p_j)^i.
\]
Write \(D_X = X^kD_k\); then we have
\begin{align*}
  -[D_X, D_Y]
  &= -[X^kD_k, Y^\ell D_\ell] \\
  &= - X^k (\p_k Y^\ell) D_\ell - X^k Y^\ell D_k D_\ell \\
  &\quad+ Y^k (\p_k X^\ell) D_\ell + Y^\ell X^k D_\ell D_k \\
  &= X^k Y^\ell R_{k \ell} - [X, Y]^\ell D_\ell
\end{align*}
This proves

\begin{lemma}
  \[
    R(X, Y) = D_{[X, Y]} - [D_X, D_Y].
  \]
\end{lemma}

Sometimes it is convenient to lower an index and consider
\[
  R_{ij, k \ell} = g_{iq} R^q_{j, k\ell},
\]
which corresponds to the quadrilinear form \((X, Y, Z, T) \mapsto g(R(X, Y)Z, T)\).

\begin{proposition}[symmetries of the curvature tensor]\leavevmode
  \begin{enumerate}
  \item \(R_{ij, \ell k} = - R_{ij, k\ell} = R_{ji, k \ell}\).
  \item 1st Bianchi identity\index{Bianchi identity!first}: \(R^i_{j, k \ell} + R^i_{k, \ell j} + R^i_{\ell, jk} = 0\).
  \item \(R_{ij, k \ell} = R_{k \ell, ij}\). 
  \end{enumerate}
\end{proposition}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item The first equality is clear from properties of (``scalar'') 2-forms.
For the second equality,
    \begin{align*}
      \frac{\partial g_{ij}}{\partial x^k}
      &= g(D_k\p_i, \p_j) + g(\p_i, D_k \p_j) \\
      \frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k}
      &= g(D_\ell D_k \p_i, \p_j) + g(D_k \p_i, D_\ell \p_j) + g(D_\ell \p_i, D_k \p_j) + g(\p_i, D_\ell D_k \p_j)
    \end{align*}
    so
    \begin{align*}
      0
      &= \frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k} - \frac{\partial^2 g_{ij}}{\partial x^k \partial x^\ell} \\
      &= g([D_\ell, D_k] \p_i, \p_j) + g(\p_i, [D_\ell, D_k] \p_j) \\
      &= g(R_{k\ell} \p_i, \p_j) + g(\p_i, R_{k\ell} \p_j) \\
      &= R_{ji, k\ell} + R_{ij, k\ell}.
    \end{align*}
  \item 
    \begin{align*}
      R^i_{j, k \ell} + R^i_{k, \ell j} + R^i_{\ell, jk}
      &= [D_\ell D_k \p_j - D_k D_\ell \p_j + D_j D_\ell \p_k - D_\ell D_j \p_k + \text{ two more terms}]^i.
    \end{align*}
    The six terms cancel in pairs, using that the connection is symmetric, so \(D_k \p_j = D_j \p_k\) (as \([\p_j, \p_k] = 0\)).
  \item See the handout on the octahedron trick.
  \end{enumerate}
\end{proof}

\begin{definition}[Ricci curvature]\index{Ricci curvature}
  The \emph{Ricci curvature} of \(g = g_{ij} dx^i dx^j\) at \(p \in M\) is
  \[
    \Ric_p(X, Y) = \tr(v \mapsto R_p(X, v) Y).
  \]
\end{definition}

Locally \(\Ric = (\Ric_{ij})\), so
\[
  \Ric(X, Y) = \Ric_{ij} X^iY^j.
\]
We can also write
\[
  \Ric_{ij} = R^q_{i, jq} = g^{pq} R_{pi, jq}.
\]
By the third identity above, \(\Ric\) is a symmetric bilinear form on \(T_pM\) for all \(p\).

\begin{definition}[scalar curvature]\index{scalar curvature}
  The \emph{scalar curvature} \(s = \operatorname{scal}(g) \in C^\infty(M)\) is the trace of \(\Ric\) with respect to \(g\), i.e.\ if \(g_{ij}(p) = \delta_{ij}\) then \(s = \Ric_{ii}\).
\end{definition}

In general,
\[
  s = g^{ij} \Ric_{ij} = g^{ij} g^{pq} R_{pi, jq} = g^{j k} R^\ell_{j, k \ell}.
\]

We can show, using linear algebra and representation theory, that \(\Ric\) and \(s\) are the only \(SO(n)\)-invariant components of the ``space of curvature tensors''.

(Missed a lecture on 24/11/18: Laplacian, Hodge star operator.)

Recall that \(\delta = (-1)^{n(p + 1) + 1} * d\, *\) on \(p\)-forms for \(p \geq 1\), and \(\delta = 0\) on functions.

\begin{proposition}
  For all \(\alpha \in \Omega^{p - 1}(M)\) and \(\beta \in \Omega^p(M)\) on a compact oriented Riemannian manifold \((M, g)\),
  \[
    \int_M \langle d\alpha, \beta\rangle_g\, \omega_g = \int_M \langle \alpha, \delta \beta \rangle_g\, \omega_g.
  \]
\end{proposition}

\begin{proof}
  Given \(\alpha \in \Omega^{p - 1}(M), \beta \in \Omega^p(M)\),
  \[
    d(\alpha \w * \beta)
    = d\alpha \w * \beta + (-1)^{p - 1} \alpha \w d (*\beta).
  \]
  Now
  \begin{align*}
    - * \delta \beta
    &= (-1)^{n (p + 1)} ** \underbrace{d* \beta}_{\in \Omega^{n - p + 1}} \\
    &= (-1)^{n(p + 1) + (n - p + 1)(p - 1)} d *\beta \\
    &= (-1)^{p - 1} d*\beta,
  \end{align*}
  computing the exponent mod \(2\). Thus
  \[
    d(\alpha \w *\beta)
    = d\alpha \w * \beta - \alpha \w * \delta \beta
    = \langle d\alpha, \beta\rangle_g \omega_g - \langle \alpha, \delta \beta \rangle_g \omega_g.
  \]
  The proposition now follows from Stokes' theorem.
\end{proof}

\begin{remark}
  We can define for \(\zeta, \eta \in \Omega^p(M)\)
  \[
    \langle \langle \zeta, \eta \rangle\rangle_{M, g} = \int_M \langle \zeta, \eta \rangle_g \omega_g,
  \]
  the \(L^2\) inner product of \(p\)-forms. The proposition then asserts that
  \[
    \langle\langle d\alpha, \beta \rangle\rangle = \langle \langle \alpha, \delta \beta\rangle\rangle,
  \]
  i.e.\ \(\delta = d^*\), the \emph{formal \(L^2\)-adjoint}\index{formal adjoint} of \(d\).
\end{remark}
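\begin{eg}
  (Sanity check on Euclidean \(\R^n\).) For a \(1\)-form \(\alpha = f_i \, dx^i\) we have \(p = 1\), so \(\delta = (-1)^{2n + 1} * d * = - * d\, *\), and working through the definition gives
  \[
    \delta \alpha = - \sum_i \frac{\partial f_i}{\partial x^i},
  \]
  minus the divergence of the coefficient vector field, consistent with \(\delta\) being adjoint to \(d\).
\end{eg}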
\begin{corollary}
  \[
    \Delta = d \delta + \delta d
  \]
  is formally self-adjoint.
\end{corollary}

\begin{definition}[Laplace--Beltrami/Hodge Laplacian]\index{Laplace-Beltrami}\index{Hodge Laplacian}
  \(\Delta\) is called the \emph{Laplace--Beltrami operator}, also known as the \emph{Hodge Laplacian}.
\end{definition}

We give a special name to the kernel of \(\Delta\):

\begin{definition}[harmonic \(p\)-form]\index{harmonic \(p\)-form}
  The space of \emph{harmonic \(p\)-forms} is defined as
  \[
    \mathcal H^p(M) = \{\alpha \in \Omega^p(M): \Delta \alpha = 0\}.
  \]
\end{definition}

\begin{note}
  \(* \Delta = \Delta *\), so \(*: \mathcal H^p(M) \to \mathcal H^{n - p}(M)\) is an isomorphism.
\end{note}

\begin{proposition}
  Assume \(M\) is compact. Then for all \(\alpha \in \Omega^p(M)\), \(\Delta \alpha = 0\) if and only if \(d \alpha = 0\) and \(\delta \alpha = 0\).
\end{proposition}

If \(\delta \alpha = 0\) we say \(\alpha\) is \emph{coclosed}\index{coclosed}.

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(\implies\):
    \begin{align*}
      0
      &= \langle\langle \Delta\alpha, \alpha \rangle\rangle \\
      &= \langle\langle d \delta \alpha, \alpha \rangle\rangle + \langle\langle \delta d \alpha, \alpha \rangle\rangle \\
      &= \norm{\delta \alpha}^2_{L^2} + \norm{d \alpha}^2_{L^2},
    \end{align*}
    but for all \(\zeta \in \Omega^*(M)\),
    \[
      \norm{\zeta}_{L^2} = \sqrt{\langle\langle \zeta, \zeta \rangle\rangle} \geq 0
    \]
    with equality if and only if \(\zeta = 0\), as \(\zeta\) is smooth. Thus \(d \alpha = 0\) and \(\delta \alpha = 0\).
  \item \(\impliedby\): obvious.
  \end{itemize}
\end{proof}

\begin{corollary}
  Let \(M\) be compact and connected. Then if \(f \in C^2(M)\) is harmonic it is constant.
\end{corollary}

\begin{note}
  Compactness is essential. For example, \((x, y) \mapsto e^y \cos x\) is a nonconstant harmonic function on the Euclidean plane \(\R^2\).

  Note that on \((\R^n, g)\), where \(g\) is not necessarily Euclidean,
  \[
    \Delta_g f = - \frac{1}{\sqrt{\det g}} \frac{\partial  }{\partial x^j} \left(\sqrt{\det g}\, g^{ij} \frac{\partial f}{\partial x^i} \right).
  \]
  When \(g\) is Euclidean, this simplifies to
  \[
    \Delta_{\text{Eucl}} = - \sum_i \frac{\partial^2}{(\partial x^i)^2}.
  \]
  It is an exercise to check that this sign convention gives non-negative eigenvalues.
\end{note}

\begin{theorem}[Hodge decomposition theorem]\index{Hodge decomposition theorem}
  Let \((M, g)\) be a compact oriented Riemannian manifold. Let \(0 \leq p \leq \dim M\).
Then
  \begin{enumerate}
  \item \(\dim \mathcal H^p(M)\) is finite.
  \item
    \begin{align*}
      \Omega^p(M)
      &= \mathcal H^p(M) \oplus \Delta \Omega^p(M) \\
      &= \mathcal H^p(M) \oplus d\delta\Omega^p(M) \oplus \delta d\Omega^p(M) \\
      &= \mathcal H^p(M) \oplus d \Omega^{p - 1}(M) \oplus \delta \Omega^{p + 1}(M)
    \end{align*}
    where all the direct sums are orthogonal with respect to \(L^2\).
  \end{enumerate}
\end{theorem}

\begin{proof}
  Too long. Omitted.
\end{proof}

\begin{corollary}
  Assume the hypotheses of the Hodge decomposition theorem. Then for all \(a \in H_{\text{dR}}^p(M)\) there exists a unique \(\alpha \in \mathcal H^p(M)\) such that \([\alpha] = a\).
\end{corollary}

\begin{proof}
  We do uniqueness first: suppose \(\alpha_1, \alpha_2 \in \mathcal H^p(M)\) with \(\alpha_1 - \alpha_2 = d\beta\). Then
  \[
    \norm{d\beta}^2
    = \langle\langle d\beta, \alpha_1 - \alpha_2 \rangle\rangle
    = \langle\langle \beta, \underbrace{\delta\alpha_1 - \delta\alpha_2}_0 \rangle\rangle
    = 0,
  \]
  so \(d\beta = 0\) and \(\alpha_1 = \alpha_2\).

  For existence, suppose \(a = [\tilde \alpha]\) where \(d \tilde \alpha = 0\) but \(\tilde \alpha\) is not necessarily coclosed. By the Hodge decomposition theorem we can write
  \[
    \tilde \alpha = \alpha + d\beta + \delta \gamma.
  \]
  Then \(d\delta \gamma = 0\), so
  \[
    0
    = \langle\langle d\delta\gamma, \gamma \rangle\rangle
    = \norm{\delta \gamma}^2,
  \]
  so \(\delta \gamma = 0\). It follows that \([\tilde\alpha] = [\alpha]\) with \(\alpha \in \mathcal H^p(M)\).
\end{proof}

What we have shown is that
\begin{align*}
  \mathcal H^p(M) &\to H_{\text{dR}}^p(M) \\
  \alpha &\mapsto [\alpha]
\end{align*}
is a linear isomorphism. The LHS is an analytic object; by de Rham theory, the RHS is the same as \(H^p(M)\), a topological object. The significance of the Hodge decomposition is this correspondence.

An intuitive explanation of the corollary: given \(a \in H_{\text{dR}}^p(M)\), define
\[
  B_a = \{\zeta \in \Omega^p(M): d \zeta = 0, [\zeta] = a\}
\]
and consider the function \(F(\zeta) = \norm \zeta^2\) on \(B_a\). If \(\alpha\) is a minimum of \(F\) then
\begin{align*}
  0 = \frac{d}{dt}\Big|_{t = 0} F(\alpha + td \beta)
  &= \frac{d}{dt}\Big|_{t = 0} \langle\langle \alpha + t d\beta, \alpha + t d\beta \rangle\rangle \\
  &= 2 \langle\langle \alpha, d\beta \rangle\rangle \\
  &= 2 \langle\langle \delta \alpha, \beta\rangle\rangle
\end{align*}
for all \(\beta\), so \(\delta \alpha = 0\); since also \(d\alpha = 0\), we get \(\alpha \in \mathcal H^p(M)\).

A remark about the second statement of the Hodge decomposition theorem: the last two lines follow from the first for a similar reason to the corollary.
If \(\Delta \alpha = \beta\) then \(\beta \perp \mathcal H^p(M)\), so \((\mathcal H^p(M))^\perp\) is well-defined and the projection of \(\beta\) to this space is given by
\[
  \beta - \sum_{i = 1}^{\dim \mathcal H^p(M)} \langle\langle \beta, \omega_i\rangle\rangle \omega_i
\]
where \(\{\omega_i\}\) is an orthonormal basis of \(\mathcal H^p(M)\). Thus \(\Delta \Omega^p(M) \subseteq (\mathcal H^p(M))^\perp\). Conversely, we can solve
\[
  \Delta \omega = \alpha \qquad \text{whenever } \alpha \perp \mathcal H^p(M):
\]
first define a ``weak solution'' of \(\Delta \omega = \alpha\), a bounded linear functional on \(\Omega^p(M)\); then argue, using a regularity theorem from analysis, that every weak solution is an actual (smooth) solution. The key idea is that \(\Delta\) is elliptic.
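\begin{eg}
  (Illustration of the corollary, on the flat torus.) On \(T^2 = \R^2/\Z^2\) with the flat metric, the \(1\)-forms \(dx\) and \(dy\) descend to \(T^2\) and are both closed and coclosed, hence harmonic. In fact they span \(\mathcal H^1(T^2)\), so \(\dim H^1_{\text{dR}}(T^2) = \dim \mathcal H^1(T^2) = 2\), the first Betti number of the torus.
\end{eg}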
\end{document}

% https://www.dpmms.cam.ac.uk/~agk22/teaching.html
{"text": "\\documentclass[letterpaper,final,12pt,reqno]{amsart}\n\n\\usepackage[total={6.0in,8.8in},top=1.3in,left=1.25in]{geometry}\n\n\\usepackage{times,bm,bbm,empheq,verbatim,fancyvrb,graphicx}\n\\usepackage[dvipsnames]{xcolor}\n\n\\usepackage[kw]{pseudo}\n\n\\pseudoset{left-margin=15mm,topsep=5mm,idfont=\\texttt}\n\n\\usepackage{tikz}\n\\usetikzlibrary{decorations.pathreplacing}\n\n% hyperref should be the last package we load\n\\usepackage[pdftex,\ncolorlinks=true,\nplainpages=false, % only if colorlinks=true\nlinkcolor=blue,   % ...\ncitecolor=Red,    % ...\nurlcolor=black    % ...\n]{hyperref}\n\n\\DefineVerbatimEnvironment{cline}{Verbatim}{fontsize=\\small,xleftmargin=5mm}\n\n\\renewcommand{\\baselinestretch}{1.05}\n\n\\newtheorem{lemma}{Lemma}\n\\newtheorem*{example}{Example}\n\n\\newcommand{\\Matlab}{\\textsc{Matlab}\\xspace}\n\\newcommand{\\eps}{\\epsilon}\n\\newcommand{\\lam}{\\lambda}\n\\newcommand{\\RR}{\\mathbb{R}}\n\n\\newcommand{\\grad}{\\nabla}\n\\newcommand{\\Div}{\\nabla\\cdot}\n\\newcommand{\\trace}{\\operatorname{tr}}\n\n\\newcommand{\\hbn}{\\hat{\\mathbf{n}}}\n\n\\newcommand{\\bb}{\\mathbf{b}}\n\\newcommand{\\be}{\\mathbf{e}}\n\\newcommand{\\bbf}{\\mathbf{f}}\n\\newcommand{\\bg}{\\mathbf{g}}\n\\newcommand{\\bn}{\\mathbf{n}}\n\\newcommand{\\bq}{\\mathbf{q}}\n\\newcommand{\\br}{\\mathbf{r}}\n\\newcommand{\\bs}{\\mathbf{s}}\n\\newcommand{\\bt}{\\mathbf{t}}\n\\newcommand{\\bu}{\\mathbf{u}}\n\\newcommand{\\bv}{\\mathbf{v}}\n\\newcommand{\\bw}{\\mathbf{w}}\n\\newcommand{\\bx}{\\mathbf{x}}\n\\newcommand{\\by}{\\mathbf{y}}\n\\newcommand{\\bz}{\\mathbf{z}}\n\n\\newcommand{\\bF}{\\mathbf{F}}\n\\newcommand{\\bV}{\\mathbf{V}}\n\\newcommand{\\bX}{\\mathbf{X}}\n\n\\newcommand{\\bxi}{\\bm{\\xi}}\n\n\\newcommand{\\blambda}{\\bm{\\lambda}}\n\\newcommand{\\bzero}{\\bm{0}}\n\n\\newcommand{\\rhoi}{\\rho_{\\text{i}}}\n\\newcommand{\\ip}[2]{\\left<#1,#2\\right>}\n\n\\newcommand{\\Rpr}{R_{\\text{pr}}}\n\\newcommand{\\Rin}{R_{\\text{in}}}\n\\newcommand{\\Rfw}{R_{\\text{fw}}}\n\n\n\\begin{document}\n\\title[Two balls, rigidly-connected]{Two balls, rigidly-connected: \\\\ a DAE case study}\n\n\\author{Ed Bueler}\n\n\\begin{abstract}\nThe problem of two equal-mass balls, rigidly connected by a massless rod, is described by an index-3 differential-algebraic equations (DAE) system.  An index-reduction procedure rewrites it as a stabilized index-2 DAE system.  Numerical solutions using implicit PETSc TS solvers are evaluated.\n\\end{abstract}\n\n\\maketitle\n\n%\\tableofcontents\n\n\\thispagestyle{empty}\n\\bigskip\n\n\\section{What are the cartesian equations of motion?}\n\nConsider the problem of two equal masses $m$, labeled ``$a$'' and ``$b$'', moving in the $(x,y)$ plane, with $x$ horizontal and $y$ vertical.  We may form a column vector from their cartesian coordinates,\n\\begin{equation}\n\\bq(t) = \\begin{bmatrix} q_1(t) \\\\ q_2(t) \\\\ q_3(t) \\\\ q_4(t) \\end{bmatrix} = \\begin{bmatrix} x_a(t) \\\\ y_a(t) \\\\ x_b(t) \\\\ y_b(t) \\end{bmatrix}. \\label{position}\n\\end{equation}\n\nNow suppose the two masses move according to two kinds of forces.  First, gravity acts vertically downwards.  Second, suppose the masses are connected by a rigid, massless rod with length $\\ell$.  The positions of the two masses are therefore constrained to satisfy $(x_a - x_b)^2 + (y_a - y_b)^2 = \\ell^2$ at all times.  Equivalently, a certain scalar function is identically zero:\n\\begin{equation}\n0 = g(\\bq) = \\frac{1}{2} \\Big((q_1 - q_3)^2 + (q_2 - q_4)^2 - \\ell^2\\Big). 
\label{constraint}
\end{equation}
(The overall constant $\frac{1}{2}$ is chosen for later convenience.)

Physically speaking, the rod exerts a tension or expansion force along the line between the masses, which varies during the motion.  We denote this scalar force by $\lambda$, positive when the rod is pulling the two masses together.  Newton's second law says, of course, that the \emph{total} force $\bm{\Phi}(\bq,\lambda)$ determines the accelerations $\ddot \bq$, namely
\begin{equation}
m \ddot \bq = \bm{\Phi}(\bq,\lambda) \label{newtonssecond}
\end{equation}
describes the motion.  This is all fine and good, except how are we to determine the total (vector) forces, including those enforcing the constraint?  In particular, how does $\bm{\Phi}$ depend on the tension $\lambda$, and how does the constraint \eqref{constraint} determine $\lambda$?


\section{As a rigid body: easy}

Classical mechanics does not regard this as a difficult problem.  It can be solved by a simple, planar application of the theory of \emph{rigid bodies} \cite[Vol.~1, Chapter 18]{Feynman2011i}.

Let $\xi(t),\eta(t)$ be the coordinates of the center of mass; this is the center of the rod.   Let $\theta(t)$ be the angle in radians between the rod and the horizontal axis, specifically the angle between the positive $x$-axis and the vector from the center of mass $(\xi,\eta)$ to the first mass $(x_a,y_a) = (q_1,q_2)$.  In terms of the three new variables $\xi,\eta,\theta$, we can write
\begin{equation}
\bq = \begin{bmatrix} \xi + (\ell/2) \cos\theta \\ \eta + (\ell/2) \sin\theta \\ \xi - (\ell/2) \cos\theta \\ \eta - (\ell/2) \sin\theta  \end{bmatrix}.  \label{cartesianrigid}
\end{equation}
Immediately, constraint \eqref{constraint} is satisfied.

In the theory of rigid bodies the motion of the center of mass is determined by the external forces, here just gravity.  On the other hand, the moment of inertia about the center of mass ($m\ell^2/2$ here) times the angular acceleration equals the total torque applied to the rigid body.  Here there are no torques at all, so the angular acceleration is zero.  Thus, now that we are using the ``correct'' coordinates, the equations of motion are
\begin{subequations}
\label{rigidequations}
\begin{align}
      2 m \ddot\xi &= 0 \\
     2 m \ddot\eta &= -2 m g_r \\
\frac{m \ell^2}{2} \ddot\theta &= 0
\end{align}
\end{subequations}
As a first-order system this has dimension 6.

This is a delightfully trivial ODE system, with exact solution
\begin{subequations}
\label{rigidsolution}
\begin{align}
   \xi(t) &= \xi(0) + \dot\xi(0) t \\
  \eta(t) &= \eta(0) + \dot\eta(0) t - \frac{1}{2} g_r t^2 \\
\theta(t) &= \theta(0) + \dot\theta(0) t
\end{align}
\end{subequations}

The problem is thus easy in the correct coordinates.  However, we are interested in the solution of the DAE system in the original cartesian coordinates, primarily because other more-complicated problems do not have a cute theory like rigid bodies.  The rigid body exact solution \eqref{rigidsolution} will be used to verify the numerical solution of the ``wrong coordinates'' DAE solution.
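For later verification it is convenient to have \eqref{rigidsolution} and \eqref{cartesianrigid} in executable form.  Here is a minimal sketch (Python with NumPy is assumed; the function name is ours, not from any library):
\begin{cline}
import numpy as np

def exact_q(t, rigid0, ell, gr=9.81):
    # rigid0 = (xi0, eta0, theta0, dxi0, deta0, dtheta0): rigid-body initial values
    xi0, eta0, th0, dxi0, deta0, dth0 = rigid0
    xi  = xi0  + dxi0 * t                      # center of mass: free horizontal motion
    eta = eta0 + deta0 * t - 0.5 * gr * t**2   # ... and vertical free fall
    th  = th0  + dth0 * t                      # zero angular acceleration
    c, s = 0.5 * ell * np.cos(th), 0.5 * ell * np.sin(th)
    return np.array([xi + c, eta + s, xi - c, eta - s])  # cartesian positions q
\end{cline}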
In such verification usage we will need to convert the cartesian initial conditions to the initial conditions for the rigid body solution \eqref{rigidsolution}.  The key equations say that the cartesian velocity of each mass is the sum of the center-of-mass velocity plus the cross product of the angular velocity and the position vector of the mass relative to the center of mass, for instance $\bv = \bV + \bm{\omega}\times \br$ in obvious notation.  Thus, to do the initial-value conversion, we use \eqref{cartesianrigid} plus these equations for initial velocity:
\begin{subequations}
\label{velocitycartesianrigid}
\begin{align}
\dot q_1 &= \dot\xi - \dot\theta (q_2 - \eta) \\
\dot q_2 &= \dot\eta + \dot\theta (q_1 - \xi) \\
\dot q_3 &= \dot\xi - \dot\theta (q_4 - \eta) \\
\dot q_4 &= \dot\eta + \dot\theta (q_3 - \xi)
\end{align}
\end{subequations}


\section{Construction of a DAE system using Lagrangian dynamics}

Returning to our naive, cartesian problem, it is well-known that Newton's laws are poorly-suited to describing the forces in such a constrained situation, but that we can find the motion via \emph{Lagrangian dynamics}.  One can derive the Lagrangian approach from the calculus of variations, via Hamilton's principle of least action or the principle of virtual work \cite{Lanczos1970}, but here we will simply use the derived Euler-Lagrange equations (below) as the source of our DAE system.  This Lagrangian analytical approach applies broadly, across all of classical dynamics \cite{Layton1998}.

First, for unconstrained, non-dissipative motion described by the position variables $\bq(t) \in \RR^n$ and velocities $\dot\bq(t)$, one defines the \emph{Lagrangian} $\mathcal{L}_0(\bq,\dot\bq) = T(\bq,\dot\bq) - U(\bq)$ as the difference of kinetic and potential energy.  The motion $\bq(t)$ then solves the \emph{Euler-Lagrange differential equations} ($i=1,\dots,n$):
\begin{equation}
\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot q_i} = \frac{\partial \mathcal{L}}{\partial q_i}. \label{eulerlagrange}
\end{equation}
For constrained motion we must, however, modify the Lagrangian.  Suppose $\bg(\bq) \in \RR^k$ is a column vector of the constraints, i.e.
\begin{equation}
\bg(\bq)=\bzero. \label{generalconstraints}
\end{equation}
Now we define a vector of Lagrange multipliers $\blambda(t) \in \RR^k$ of the same length and a modified Lagrangian \cite[equation (58.2)]{Lanczos1970}
\begin{align}
\mathcal{L}(\bq,\dot\bq,\blambda) &= \mathcal{L}_0(\bq,\dot\bq) - \blambda^\top \bg(\bq)  \notag \\
  &= T(\bq,\dot\bq) - U(\bq) - \blambda^\top \bg(\bq). \label{extendedlagrangian}
\end{align}
Then the motion $\bq(t),\blambda(t)$ satisfies the system \eqref{eulerlagrange} for Lagrangian $\mathcal{L}$, along with the constraint equations \eqref{generalconstraints}.  Physically, $\blambda$ is the (vector) force which enforces the constraints.

Because $\mathcal{L}$ does not depend on $\dot\blambda$, one may recover the constraints from the notional Euler-Lagrange equation $\bzero = d/dt(\partial \mathcal{L}/\partial \dot\blambda) = \partial \mathcal{L}/\partial\blambda = -\bg(\bq)$.  In this sense the multipliers are treated as coordinates and the Euler-Lagrange equations include the constraint equations.

In our two-masses problem with $n=4$ there is $k=1$ (scalar) constraint $g(\bq)$ and thus one Lagrange multiplier $\lambda$, and it is the rod tension which we seek.
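As a cross-check of the by-hand derivation that follows, the constrained Euler-Lagrange equations can also be generated mechanically.  The following sketch (Python with SymPy assumed; all names are ours) reproduces the system \eqref{rawsystem} written out below:
\begin{cline}
import sympy as sp

t = sp.symbols('t')
m, gr, ell = sp.symbols('m g_r ell', positive=True)
q = [sp.Function(f'q{i}')(t) for i in range(1, 5)]
lam = sp.Function('lam')(t)

T = sp.Rational(1, 2) * m * sum(sp.diff(qi, t)**2 for qi in q)  # kinetic energy
U = m * gr * (q[1] + q[3])                                      # potential energy
g = sp.Rational(1, 2) * ((q[0] - q[2])**2 + (q[1] - q[3])**2 - ell**2)
L = T - U - lam * g                                             # modified Lagrangian

# Euler-Lagrange equations: d/dt (dL/d qdot_i) - dL/d q_i = 0
for qi in q:
    print(sp.simplify(sp.diff(L, sp.diff(qi, t), t) - sp.diff(L, qi)), '= 0')
print(g, '= 0')  # plus the algebraic constraint
\end{cline}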
Based on the usual cartesian formula for kinetic energy and on gravitational potential energy, we have
\begin{equation}
T(\bv) = \frac{m}{2} \left(v_1^2+v_2^2+v_3^2+v_4^2\right), \qquad U(\bq) = m g_r \left(q_2+q_4\right), \label{energies}
\end{equation}
where $\bv(t) = \dot\bq(t)$ and $g_r>0$ is the acceleration of gravity.  The modified Lagrangian is
\begin{align}
\mathcal{L}(\bq,\bv,\lambda) &= T(\bv) - U(\bq) - \lambda g(\bq) \label{lagrangian} \\
  &= \frac{m}{2} \left(v_1^2+v_2^2+v_3^2+v_4^2\right) - m g_r \left(q_2+q_4\right) - \frac{\lambda}{2} \Big((q_1 - q_3)^2 + (q_2 - q_4)^2 - \ell^2\Big). \notag
\end{align}

From the Euler-Lagrange equations \eqref{eulerlagrange} and the constraint equation \eqref{constraint}, our two-masses problem is a system of 9 equations with (at most) first-order derivatives once we include the definition of the velocities:
\begin{subequations}
\label{rawsystem}
\begin{align}
  \dot q_1 &= v_1 \\
  \dot q_2 &= v_2 \\
  \dot q_3 &= v_3 \\
  \dot q_4 &= v_4 \\
m \dot v_1 &= - \lambda (q_1 - q_3) \\
m \dot v_2 &= - m g_r - \lambda (q_2 - q_4) \\
m \dot v_3 &= \lambda (q_1 - q_3) \\
m \dot v_4 &= - m g_r + \lambda (q_2 - q_4) \\
         0 &= \frac{1}{2} \Big((q_1 - q_3)^2 + (q_2 - q_4)^2 - \ell^2\Big) \label{rawsystem:constraint}
\end{align}
\end{subequations}

Next we rewrite system \eqref{rawsystem} using the vector notation from \cite[equation (9.30)]{AscherPetzold1998}:
\begin{subequations}
\label{system}
\begin{align}
\dot \bq &= \bv  \label{system:dotq} \\
m \dot \bv &= \bbf - \lambda\, G(\bq)^\top  \label{system:dotv} \\
0 &= g(\bq)  \label{system:constraint}
\end{align}
\end{subequations}
The column vector $\bbf = [0,-mg_r,0,-mg_r]^\top$ is the (constant) external force and the Jacobian matrix $G$ is a row vector
\begin{equation}
G(\bq) = \begin{bmatrix} {\displaystyle \frac{\partial g}{\partial q_i}} \end{bmatrix} = \begin{bmatrix} q_1-q_3, & q_2-q_4, & -(q_1-q_3), & -(q_2-q_4) \end{bmatrix}. \label{constraintjacobian}
\end{equation}
(In general $G$ is a $k\times n$ matrix.  Reference \cite{AscherPetzold1998} allows a nontrivial mass matrix $M(\bq)$, but here $M(\bq) = mI$.)  For future reference we calculate that $G(\bq) \bbf = 0$ and that
\begin{equation}
G(\bq) G(\bq)^\top = 2 (q_1-q_3)^2 + 2 (q_2 - q_4)^2 = 2 \ell^2 > 0.  \label{ggpd}
\end{equation}

System \eqref{system} would be suitable for numerical solution by a black-box ODE solver, e.g.~any adaptive explicit Runge-Kutta method \cite{AscherPetzold1998}, except that the constraint $0=g(\bq)$ is not, in fact, a differential equation at all.  Formulated as we have done it in cartesian coordinates, our problem is a \emph{differential-algebraic equation} (DAE) system.  Equations \eqref{system:dotq}, \eqref{system:dotv}, equivalently the first eight equations in system \eqref{rawsystem}, are differential equations, but \eqref{system:constraint} is algebraic.  Application of an implicit solver, one which consistently solves the discretized equations at each time step, will be necessary.


\section{DAE index}

Some DAE systems are more difficult to solve than others, but the various possibilities are neither simple to describe nor particularly easy to categorize.  One quantification which has proven value is the (\emph{differential}) \emph{index} of the DAE system.
This nonnegative integer is the minimum number of times that the algebraic equations, the constraints, must be differentiated in time before substitutions reveal an ODE system.  (An ODE system is thus an index-0 DAE system.)  While this is not obviously a rigorous definition, the linear case is, at least, precise \cite[Chapter IV.5]{HairerWanner1996}.

Our DAE problem \eqref{system} is a well-known type of mechanical system with equality constraints on the position variables (``holonomic constraints'' \cite{Lanczos1970}), a class of DAE problems with index 3.  Note that indices higher than 2 are traditionally called ``high index,'' because much less is known about reliable numerical solvers beyond index 2.

Let us show, by differentiating $g(\bq)=0$ with respect to time, why system \eqref{system} has index 3.  (The argument here will not exclude generating an ODE in fewer differentiations; it shows that the differential index is at most 3.)  Differentiating \eqref{system:constraint} once with respect to $t$ gives
\begin{align}
0 &= \frac{d}{dt} \bigg(\frac{1}{2} \Big((q_1 - q_3)^2 + (q_2 - q_4)^2 - \ell^2\Big)\bigg) \notag \\
  &= (q_1 - q_3)(v_1 - v_3) + (q_2 - q_4) (v_2 - v_4). \label{rawvelocityconstraint}
\end{align}
By substituting \eqref{system:dotq} and \eqref{constraintjacobian}, we may write this as a matrix-vector product:
\begin{equation}
0 = G(\bq) \bv. \label{velocityconstraint}
\end{equation}
This equation is called the \emph{velocity constraint}.

Differentiating again, and then substituting \eqref{system:dotv} and \eqref{system:constraint}, yields the \emph{acceleration constraint} \cite{Layton1998}
\begin{equation}
0 = \frac{d}{dt} \Big((q_1 - q_3)(v_1 - v_3) + (q_2 - q_4) (v_2 - v_4)\Big) = (v_1 - v_3)^2 + (v_2 - v_4)^2 - \lambda \frac{2 \ell^2}{m} \label{rawddconstraint}
\end{equation}
or equivalently
\begin{equation}
\lambda = \frac{m}{2\ell^2} \left((v_1 - v_3)^2 + (v_2 - v_4)^2\right). \label{rawlambda}
\end{equation}
Writing this using matrix-vector notation is not so obvious.  However, since $G(\bq)\bv = \sum_i G_{1i}(\bq) v_i$, substituting \eqref{system:dotq} and \eqref{system:dotv} yields
\begin{align}
0 &= \sum_{i,j=1}^4 \frac{\partial G_{1i}(\bq)}{\partial q_j} \dot q_j v_i + \sum_{i=1}^4 G_{1i}(\bq) \dot v_i = \bv^\top \frac{\partial G(\bq)}{\partial \bq} \bv + G(\bq) \frac{1}{m} \left(\bbf - \lambda\, G(\bq)^\top\right) \notag \\
  &= \frac{\partial (G(\bq)\bv)}{\partial \bq} \bv +  \frac{G(\bq)}{m} \bbf - \lambda \frac{G(\bq) G(\bq)^\top}{m} = \frac{\partial (G(\bq)\bv)}{\partial \bq} \bv - \lambda \frac{G(\bq) G(\bq)^\top}{m}.  \label{ddconstraint}
\end{align}
Here
\begin{equation}
\left(\frac{\partial G(\bq)}{\partial \bq}\right)_{ij} = \frac{\partial G_{1i}(\bq)}{\partial q_j} \qquad \text{and} \qquad
\left(\frac{\partial (G(\bq)\bv)}{\partial \bq}\right)_{j} = \sum_{i=1}^4 \frac{\partial G_{1i}(\bq)}{\partial q_j} v_i,
\end{equation}
with the latter regarded as a row vector.  In fact, let
\begin{equation}
H(\bv) = \frac{\partial (G(\bq)\bv)}{\partial \bq} = \begin{bmatrix} v_1-v_3, & v_2-v_4, & -(v_1-v_3), & -(v_2-v_4) \end{bmatrix}.
\label{velocityconstraintderiv}
\end{equation}
Applying \eqref{ggpd} in \eqref{ddconstraint} and solving for $\lambda$ gives \eqref{rawlambda} again:
\begin{equation}
\lambda = \frac{m}{2\ell^2} H(\bv) \bv = \frac{m}{2\ell^2} \left((v_1 - v_3)^2 + (v_2 - v_4)^2\right). \label{lambda}
\end{equation}

Differentiating \eqref{lambda}, and using this formula for $\dot\lambda$ to replace \eqref{system:constraint}, finally converts system \eqref{system} into an ODE system.  This \emph{unstabilized index reduction} shows that \eqref{system} has at most index 3.  However, we do not actually need or want the final ODE system, or even the differential equation for $\dot \lambda$.  Instead, we will use a stabilized index 2 DAE formulation as described in the next section.


\section{Stabilized index-2 formulation}

\begin{quote}
\emph{For a DAE of index greater than 2 it is usually best to use one of the index-reduction techniques \dots to rewrite the problem in lower-index form.} \, \cite[p 262]{AscherPetzold1998}
\end{quote}

The approach of Gear and others \cite{Gearetal1985} is to replace the original index-3 DAE system \eqref{system} with one which is more constrained, and has lower index, but which remains a DAE.  This involves two changes to system \eqref{system}.  First we append the velocity constraint \eqref{velocityconstraint} to \eqref{system}.  Then, to compensate, we add a corresponding Lagrange multiplier $\mu$, and use it to add a restoring constraint force to equation \eqref{system:dotq}:
\begin{subequations}
\label{stab}
\begin{align}
\dot \bq &= \bv - \mu\, G(\bq)^\top \label{stab:dotq} \\
m \dot \bv &= \bbf - \lambda\, G(\bq)^\top  \label{stab:dotv} \\
0 &= g(\bq)  \label{stab:qconstraint} \\
0 &= G(\bq) \bv  \label{stab:vconstraint}
\end{align}
\end{subequations}

As shown momentarily, system \eqref{stab} has index 2.  It is called the \emph{stabilized index-2 formulation} \cite[Exercise 9.10]{AscherPetzold1998} of our index-3 constrained mechanical system.  When such mechanical systems initially have $n$ position variables and $k$ constraints, thus total dimension $2n+k$, the dimension of the stabilized index-2 formulation becomes $2(n+k)$.  In our case $n=4$ and $k=1$.

It is easy to see that the exact solution is unchanged.  In fact, the solution of \eqref{stab} is a quadruple $\bq(t),\bv(t),\lambda(t),\mu(t)$ in which the triple $\bq(t),\bv(t),\lambda(t)$ solves \eqref{system} and $\mu(t)=0$ identically.  To show this, differentiate \eqref{stab:qconstraint} with respect to time and apply \eqref{ggpd}, \eqref{stab:dotq}, \eqref{stab:vconstraint}:
\begin{equation}
0 = G(\bq) \dot \bq = G(\bq) \left(\bv - \mu G(\bq)^\top\right) = - 2 \ell^2 \mu.
\end{equation}
When system \eqref{stab} is solved exactly it follows that $\mu(t)=0$.
However, the modified velocity equation \eqref{stab:dotq} will assist the numerical solver in staying on the constraint, making the numerical solution of \eqref{stab} superior to the unstabilized formulation \eqref{system}.

To further expose the structure of system \eqref{stab}, let
\begin{equation}
\bx = \begin{bmatrix} \bq \\ \bv \end{bmatrix} \qquad \text{and} \qquad \bz = \begin{bmatrix} \mu \\ \lambda \end{bmatrix}.
\end{equation}
In this notation, system \eqref{stab} has the form
\begin{subequations}
\label{hessen}
\begin{align}
\dot \bx &= \br(\bx,\bz) \label{hessen:differential} \\
  \bzero &= \bs(\bx) \label{hessen:algebraic}
\end{align}
\end{subequations}
The $\bx$ variables are \emph{differential}, as they have time derivatives, while the $\bz$ variables are \emph{algebraic}.  For such a DAE system one would convert to an ODE system by differentiating \eqref{hessen:algebraic} with respect to $t$ and then substituting \eqref{hessen:differential},
\begin{equation}
\bzero = \frac{\partial \bs}{\partial \bx} \dot \bx = \frac{\partial \bs}{\partial \bx} \br.
\end{equation}
This equation is still algebraic; note $\partial \bs/\partial \bx$ is a $2k\times 2n = 2\times 8$ matrix.  A second time-differentiation is now needed, yielding
\begin{equation}
\bzero = \frac{\partial^2 \bs}{\partial \bx^2}\left[\br, \br\right] + \frac{\partial \bs}{\partial \bx} \frac{\partial \br}{\partial \bx}\, \br + \frac{\partial \bs}{\partial \bx} \frac{\partial \br}{\partial \bz}\, \dot \bz  \label{hessen:indextwo}
\end{equation}
wherein the second derivative $\partial^2 \bs/\partial \bx^2$ is tensor-valued.  This gets us to the main point: equation \eqref{hessen:indextwo} can be solved for the needed time-derivative $\dot \bz$ if the matrix sitting beside it is invertible.  That is, system \eqref{hessen} has index 2 as long as
\begin{equation}
B = \frac{\partial \bs}{\partial \bx} \frac{\partial \br}{\partial \bz} \in \RR^{2k\times 2k} \quad \text{ is invertible}. \label{hessen:criterion}
\end{equation}
In general, when \eqref{hessen:criterion} holds we say system \eqref{hessen} is a \emph{Hessenberg} (or \emph{pure}) index-2 system.

In our two-balls problem one differentiates system \eqref{stab} to find $B$, a $2\times 2$ matrix which is in fact diagonal, by the constraints \eqref{stab:qconstraint} and \eqref{stab:vconstraint}:
\begin{equation}
B = \begin{bmatrix}  G(\bq) G(\bq)^\top & 0 \\ 2 G(\bq) \bv & m^{-1} G(\bq) G(\bq)^\top \end{bmatrix} = 2\ell^2 \begin{bmatrix}  1 & 0 \\ 0 & m^{-1} \end{bmatrix}.
\end{equation}
As this matrix is invertible, we have shown that system \eqref{stab} has index 2.  Thus our index-reduction and stabilization procedure has converted the original index-3 problem into a Hessenberg index-2 system.


\section{Numerical solutions}

\begin{quote}
\emph{[For] Hessenberg index-2 DAEs, the coefficients of the multistep methods must satisfy a set of order conditions which is in addition to the order conditions for ODEs, to attain order greater than 2.  It turns out that these additional order conditions are satisfied by BDF methods.} \, \cite[p 267]{AscherPetzold1998}
\end{quote}

Our numerical implementation in this section will solve initial-value problems for a first-order, index-2 DAE system with 10 scalar variables.
We will apply the TS (time-steppers) component of the PETSc library \cite{Balayetal2021,Bueler2021} to solve the DAE system.

To use the fully-implicit solvers in PETSc TS, specifically BDF, the problem is best put in the most general DAE form
\begin{equation}
\bF(t,\bu,\bu')=0. \label{fullyimplicit}
\end{equation}
We choose to order the components of $\bu$ as follows:
\begin{equation}
\bu = \begin{bmatrix} \bq \\ \bv \\ \mu \\ \lambda \end{bmatrix}
= \begin{bmatrix} q_1 & q_2 & q_3 & q_4 & v_1 & v_2 & v_3 & v_4 & \mu & \lambda \end{bmatrix}^\top
\end{equation}
The function $\bF$ is defined following system \eqref{stab}:
\begin{equation}
\bF(t,\bu,\dot\bu)
 = \begin{bmatrix}
\dot \bq - \bv + \mu\, G(\bq)^\top \\
m \dot \bv - \bbf + \lambda\, G(\bq)^\top \\
g(\bq) \\
G(\bq) \bv
 \end{bmatrix}
 = \begin{bmatrix}
  \dot q_1 - v_1 + \mu (q_1 - q_3) \\
  \dot q_2 - v_2 + \mu (q_2 - q_4) \\
  \dot q_3 - v_3 - \mu (q_1 - q_3) \\
  \dot q_4 - v_4 - \mu (q_2 - q_4) \\
m \dot v_1 + \lambda (q_1 - q_3) \\
m \dot v_2 + m g_r + \lambda (q_2 - q_4) \\
m \dot v_3 - \lambda (q_1 - q_3) \\
m \dot v_4 + m g_r - \lambda (q_2 - q_4) \\
\frac{1}{2} \Big((q_1 - q_3)^2 + (q_2 - q_4)^2 - \ell^2\Big) \\
(q_1 - q_3) (v_1 - v_3) + (q_2 - q_4) (v_2 - v_4)
\end{bmatrix}
\end{equation}
Note that $\bF$ does not explicitly depend on $t$.

At each step of an implicit method, a system of 10 nonlinear equations will be solved.  While this can be done using a finite-difference Jacobian (see the next section), it is most efficient using an exact, analytical Jacobian.  Effectively this means we must calculate two $10 \times 10$ matrices, namely the derivatives of $\bF$ with respect to $\bu$ and $\dot\bu$; we compute these next.

The matrix $\partial\bF/\partial\bu$ has nested block structure with a larger upper-left $8\times 8$ block associated to the $\bq,\bv$ variables and a smaller lower-right $2\times 2$ block which is identically zero:
\begin{equation}
\frac{\partial\bF}{\partial\bu} =
\left[\begin{array}{cc|cc}
 \mu C & -I     & G(\bq)^\top & \\
\lam C &        &             & G(\bq)^\top \\ \hline
G(\bq) &        &             & \\
H(\bv) & G(\bq) &             &
\end{array}\right].
\end{equation}
(Blank blocks/entries are zero.)
Here $I$ is the $4\\times 4$ identity matrix and\n\\begin{equation}\nC = \\begin{bmatrix}\n 1 &    & -1 & \\\\\n   &  1 &    & -1 \\\\\n-1 &    &  1 & \\\\\n   & -1 &    &  1\n\\end{bmatrix}.\n\\end{equation}\nRecall that $H(\\bv)$ is the $1\\times 4$ row matrix defined in \\eqref{velocityconstraintderiv}.\n\nBecause our problem is a nontrivial DAE, the matrix $\\partial\\bF/\\partial\\dot\\bu$ is singular (nullity $=2$), but also conveniently diagonal:\n\\begin{equation}\n\\frac{\\partial\\bF}{\\partial\\dot\\bu} = \\left[\\begin{array}{cc|cc}\nI &    &   & \\\\\n  & mI &   & \\\\ \\hline\n  &    & 0 & \\\\\n  &    &   & 0\n\\end{array}\\right].\n\\end{equation}\n\nThe PETSc implicit TS solvers use a callback for the function $\\bF(t,\\bu,\\dot\\bu)$ and, because implicit discrete methods generate linear systems based on a combined Jacobian \\cite[section 2.5]{Balayetal2021}, a callback for\n\\begin{equation}\nJ_\\sigma = \\sigma \\frac{\\partial\\bF}{\\partial\\dot\\bu} + \\frac{\\partial\\bF}{\\partial\\bu} = \\left[\\begin{array}{cc|cc}\n\\sigma I + \\mu C & -I     & G(\\bq)^\\top & \\\\\n\\lam C & \\sigma m I &             & G(\\bq)^\\top \\\\ \\hline\nG(\\bq) &        & 0           & \\\\\nH(\\bv) & G(\\bq) &             & 0\n\\end{array}\\right]\n\\end{equation}\nfor some scalar $\\sigma$.  A key point is that the DAE itself, and many implicit methods, will be uniquely solvable because $J_\\sigma$ is nonsingular for $\\sigma > 0$.
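For concreteness, here is a minimal Python sketch of the two callbacks, written with plain NumPy arrays so it could be wrapped for any implicit solver; the packing of $\\bu$ follows the ordering above, while the parameter values and function names are ours:\n\\begin{verbatim}\n# Residual F(t,u,udot) and combined Jacobian\n# J_sigma = sigma dF/dudot + dF/du, with u = [q1..q4, v1..v4, mu, lambda].\nimport numpy as np\n\nm, gr, ell = 0.058, 9.81, 0.5\n\ndef residual(t, u, udot):\n    q, v, mu, lam = u[:4], u[4:8], u[8], u[9]\n    d = np.array([q[0] - q[2], q[1] - q[3]])\n    G = np.array([d[0], d[1], -d[0], -d[1]])     # G(q) as a length-4 row\n    f = np.array([0.0, -m * gr, 0.0, -m * gr])   # gravity\n    return np.concatenate([\n        udot[:4] - v + mu * G,                   # stabilized velocity eqns\n        m * udot[4:8] - f + lam * G,             # Newton's law\n        [0.5 * (d @ d - ell**2)],                # g(q) = 0\n        [G @ v]])                                # G(q) v = 0\n\ndef J_sigma(u, sigma):\n    q, v, mu, lam = u[:4], u[4:8], u[8], u[9]\n    d = np.array([q[0] - q[2], q[1] - q[3]])\n    G = np.array([d[0], d[1], -d[0], -d[1]])\n    H = np.array([v[0] - v[2], v[1] - v[3],      # H(v), the derivative\n                  v[2] - v[0], v[3] - v[1]])     #   of G(q) v w.r.t. q\n    C = np.array([[ 1.,  0., -1.,  0.], [ 0.,  1.,  0., -1.],\n                  [-1.,  0.,  1.,  0.], [ 0., -1.,  0.,  1.]])\n    J = np.zeros((10, 10))\n    J[0:4, 0:4] = sigma * np.eye(4) + mu * C\n    J[0:4, 4:8] = -np.eye(4)\n    J[0:4, 8] = G\n    J[4:8, 0:4] = lam * C\n    J[4:8, 4:8] = sigma * m * np.eye(4)\n    J[4:8, 9] = G\n    J[8, 0:4] = G\n    J[9, 0:4] = H\n    J[9, 4:8] = G\n    return J\n\\end{verbatim}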
Next we need specific examples for testing purposes.\n\n\\subsection*{Example A}  Suppose the masses are tennis balls ($m=58\\,\\text{g}$), the rod has length $\\ell=50\\,\\text{cm}$, and we are on Earth ($g_r=9.81\\,\\text{m}\\,\\text{s}^{-2}$).  For Example A we take the initial positions to be $(q_1,q_2)=(0,1)\\,\\text{m}$ and $(q_3,q_4)=(0,1.5)\\,\\text{m}$, thus the two balls start one above the other, with the lower one meter off the ground.  The initial velocities are $(v_1,v_2)=(10,10)\\,\\text{m}\\,\\text{s}^{-1}$ and $(v_3,v_4)=(15,10)\\,\\text{m}\\,\\text{s}^{-1}$, so that the second, higher ball starts with the larger speed.  Observe that both constraints, $0=g(\\bq)$ and $0=G(\\bq)\\bv$, are satisfied for these initial conditions.  That is, the initial positions are distance $\\ell$ apart and the velocity constraint \\eqref{stab:vconstraint} is satisfied.\n\n\\subsection*{Example B}  Suppose instead that the balls fall straight down, side-by-side and $\\ell$ apart, so that $\\mu=\\lambda=0$ along the solution.  Then $\\partial\\bF/\\partial\\bu$ is clearly not invertible, and of course $\\partial\\bF/\\partial\\dot\\bu$ is also not invertible, but $J_\\sigma$ is invertible.\n\nFIXME start with BDF\n\n\\small\n\n\\bigskip\n\\bibliography{twoballs}\n\\bibliographystyle{siam}\n\n\\end{document}
{"text": "\\documentclass[]{article}\n\\begin{document}\n\n\\title{Homework 2 1.3: 1, 3, 7, 8, 11(c), 21(e)}\n\\author{Alex Gordon}\n\\date{\\today}\n\\maketitle\n\n\\section*{Homework}\n\\subsection*{1. A)}\n\n\\begin{tabular}{ l | c || r || r ||}\n p & q & p $\\wedge$ q are lying &$\\neg$p (A is lying) \\\\\n  \\hline                        \n  T & T & T & F\\\\\n  T & F & F & F\\\\\n  F & T & F & T\\\\\n  F & F & F & T\\\\\n  \\hline  \n\\end{tabular}\n\\\\ The only answer that can explain this is that A is lying, because Row 3 is the only row that is consistent with all the demands.\n\\subsection*{1. B)}\n\\begin{tabular}{ | c | c | c || r || r || r || }\n p & q & r & $\\neg$ p $\\vee$ $\\neg$ q &$\\neg$ r & p $\\wedge$ q \\\\\n  \\hline\n  T & T & T & F & F & T\\\\   \n  T & T & F & T & T & F\\\\   \n  T & F & T & T & F & T\\\\           \n  T & F & F & T & T & F\\\\ \n  F & T & T & F & F & F\\\\           \n  F & T & F & T & T & F\\\\           \n  F & F & T & T & F & F\\\\           \n  F & F & F & T & T & F\\\\                                                  \n  \\hline  \n\\end{tabular} \\\\\n\\\\ We can only know that A is telling the truth. We know that either B or C is truthful, but with this truth table we can not tell which. This is shown in the fact that row 2 and 3 are consistent within the table. \n\n\\subsection*{1. C)}\n\\begin{tabular}{ | c | c | c || r || r || r || }\n p & q & r & $\\neg$ p $\\vee$ $\\neg$ q &$\\neg$ r & p $\\wedge$ q \\\\\n  \\hline\n  T & T & T & F & F & F\\\\   \n  T & T & F & F & T & T\\\\   \n  T & F & T & F & F & T\\\\           \n  T & F & F & T & T & T\\\\ \n  F & T & T & F & T & T\\\\           \n  F & T & F & F & T & T\\\\           \n  F & F & T & F & T & T\\\\           \n  F & F & F & T & T & T\\\\                                                  \n  \\hline  \n\\end{tabular} \\\\\n\\\\ Because of row 5, we can see that A is lying and both B and C are telling the truth. This is because that row is the only consistent row within the truth table. \n\n\\subsection*{3. A)}\n$\\neg$ p $\\vee$ q\n\\subsection*{3. B)}\np $\\wedge$ q\n\\subsection*{3. C)}\np $\\wedge$ $\\neg$ q\n\\subsection*{7. A)}\n t $\\wedge$ d $\\wedge$ h\n\\subsection*{7. B)}\nt $\\wedge$ d $\\wedge$ $\\neg$ h\n\\subsection*{7. C)}\n(t $\\vee$ h)$\\wedge$ $\\neg$ (t $\\wedge$ h)\n\\subsection*{7. D)}\n$\\neg$ t $\\wedge$ $\\neg$ h\n\n\\subsection*{8. A)}\nBill is tall OR dark OR handsome AND he is NOT tall AND dark AND handsome\n\\subsection*{8. B)}\nBill is NOT tall and NOT dark\n\\subsection*{8. C)}\nBill is dark AND NOT tall AND NOT dark\n\\subsection*{8. D)}\nBill is tall AND dark OR NOT tall AND dark\n\\subsection*{11. C)}\n\\begin{tabular}{ | c | c || r || r || r || }\n p & q & p $\\vee$ q &$\\neg$p $\\vee$ q & (p $\\vee$ q) $\\wedge$ ($\\neg$p $\\vee$ q)  \\\\\n  \\hline\n  T & T & T & T & T\\\\   \n  T & F & T & F & F\\\\   \n  F & T & T & T & T\\\\           \n  F & F & F & T & F\\\\                                             \n  \\hline  \n\\end{tabular}\n\\subsection*{21. 
\\subsection*{21. E)}\n\\begin{tabular}{| c | c | c || r || r ||}\n p & q & r & (p $\\vee$ q) $\\wedge$ (q $\\vee$ r) &(p  $\\wedge$ r)  $\\vee$ q  \\\\\n  \\hline\n  T & T & T & T & T \\\\\n  T & T & F & T & T \\\\\n  T & F & T & T & T \\\\\n  T & F & F & F & F \\\\\n  F & T & T & T & T \\\\\n  F & T & F & T & T \\\\\n  F & F & T & F & F \\\\\n  F & F & F & F & F \\\\\n\n  \\hline\n\\end{tabular}\n\\end{document}
{"text": "\\section{Types of Rules}\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabularx}{\\textwidth}{| l | c |}\n        Terminal & $\\rarr a | \\epsilon$ \\\\\n        Empty & $\\rarr \\epsilon$ \\\\\n        Initial & $S \\rarr$ \\\\\n        Recursive & $A \\rarr \\alpha A \\beta$ \\\\\n        Left-Recursive & $A \\rarr A \\beta$ \\\\\n        Right-Recursive & $A \\rarr \\alpha A$ \\\\\n        Left-Right-Recursive & $A \\rarr A \\alpha A$ \\\\\n        Copy/Categorization & $A \\rarr B$ \\\\\n        Linear & $\\rarr u A v | w$ \\\\\n        Left-Linear & $\\rarr A v | w$ \\\\\n        Right-Linear & $\\rarr v A | w$ \\\\\n        Homogeneous Normal & $\\rarr A_1\\ldots A_n | a$ \\\\\n        Chomsky Normal & $\\rarr AB | a$ \\\\\n        Real Time Normal & $\\rarr a \\alpha | a$ \\\\\n        Greibach Normal & $\\rarr a A_1\\ldots A_n | b$ \\\\\n        Operator Normal & $\\rarr A a B$ \\\\\n    \\end{tabularx}\n\\end{table}\n", "meta": {"hexsha": "4da285751be09416bce529278b0f506b9de5423e", "size": 858, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "grammars/types-of-rules.tex", "max_stars_repo_name": "TiberioG/FLC-cheatsheet", "max_stars_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-01-13T14:36:20.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-18T16:22:18.000Z", "max_issues_repo_path": "grammars/types-of-rules.tex", "max_issues_repo_name": "TiberioG/FLC-cheatsheet", "max_issues_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "grammars/types-of-rules.tex", "max_forks_repo_name": "TiberioG/FLC-cheatsheet", "max_forks_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-21T11:05:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-17T14:59:50.000Z", "avg_line_length": 35.75, "max_line_length": 57, "alphanum_fraction": 0.506993007, "num_tokens": 283, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744939732855, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5976948522788269}}
{"text": "\\chapter{Introduction to Data Visualization}\n\nIt is difficult for the human mind to look at a list of numbers and\nidentify the patterns in them, so we often make pictures with\nnumbers. These pictures are called \\textit{graphs}, or\n\\textit{charts}, or \\textit{plots}. Often the right picture can make\nthe meaning in the data obvious. \\textit{Data visualization} is the\nprocess of making pictures from numbers.\n\n\\section{Common Types of Data Visualizations}\n\nDepending on the type of data and what you are trying to demonstrate\nabout it, you will use different types of data visualizations.  How\nmany types of data visualizations are there? Hundreds, but we will\nconcentrate on just four: The bar chart, the line graph, the pie\nchart, and the scatter plot.\n\n\\subsection{Bar Chart}\n\nHere is an example of a bar chart.\\index{bar chart}\n\n\\includegraphics[width=0.7\\textwidth]{CookieChart.png}\n\nEach bar represents the cookie sales of one person.  For example,\nCharlie has sold 6 boxes of cookies, so the bar goes over Charlie's\nname and reaches to the number 6.\n\nLooking at this chart, you probably think, ``Wow, Debra has sold a lot\nmore cookies than anyone else, and Francis has sold a lot fewer.''\n\nThe same data could be in a table like this:\n\n\\begin{tabular}{c | c}\n  Salesperson & Boxes Sold \\\\\n  \\hline\n  Allison & 4 \\\\\n  Becky & 5 \\\\\n  Charlie & 6\\\\\n  Debra & 12\\\\\n  Elias & 5\\\\\n  Francis & 1\\\\\n  Glenda & 7\n\\end{tabular}\n\nThe table (especially a large table) is often just a bunch of\nnumbers. A chart helps our brains understand what the numbers mean.\n\nBar charts can also go horizontally.\n\n\\includegraphics[width=0.7\\textwidth]{HorizontalBarCookies.png}\n\nSometimes we use colors to explain what contributed to the number.\n\n\\includegraphics[width=0.7\\textwidth]{TypesCookieBar.png}\n\nThis tells us that Becky sold more boxes of chocolate chip cookies\nthan boxes of oatmeal cookies.\n\n\\subsection{Line Graph}\n\nHere is a line graph.\\index{line graph}\n\n\\includegraphics[width=0.7\\textwidth]{SharksLine1.png}\n\nThese are often used to show trends over time. Here, for example, you\ncan see that the number of shark attacks has been increasing over\ntime.\n\nYou can have more than one line on a graph.\n\n\\includegraphics[width=0.7\\textwidth]{SharksVsMosquitoes.png}\n\n\\subsection{Pie Chart}\n\nYou use a pie chart when you are looking at the comparative size of numbers.\\index{pie chart}\n\n\\includegraphics[width=0.4\\textwidth]{AirPie.png}\n\n\\subsection{Scatter Plot}\n\nSometimes you have a bunch of data points with two values and you are\nlooking for a relationship between them.  For example, maybe you write\ndown the average temperature and the total sales for your lemonade\nstand on the 15th of every month:\\index{scatter plot}\n\n\\begin{tabular}{c | c | c}\n  Date &\tAvg. Temp. 
&\tTotal Sales \\\\\n  \\hline\n15 January 2022 & 2.6\u00ba C & \\$183.85 \\\\\n15 February 2022 & -4.2\u00ba C & \\$173.56\\\\\n15 March 2022 & 13.3\u00ba C & \\$195.22\\\\\n15 April 2022 & 26.2\u00ba C & \\$207.61\\\\\n15 May 2022 & 27.5\u00ba C & \\$210.88\\\\\n15 June 2022 & 31.3\u00ba C & \\$214.18\\\\\n15 July 2022 & 33.5\u00ba C & \\$215.23\\\\\n15 Aug 2022 & 41.7\u00ba C & \\$224.07\\\\\n15 September 2022 & 20.7\u00ba C & \\$198.94\\\\\n15 October 2022 & 17.2\u00ba C & \\$196.10\\\\\n15 November 2022 & 1.7\u00ba C & \\$185.10\\\\\n15 December 2022 & 0.2\u00ba C & \\$188.70 \\\\\n\\end{tabular}\n\nAnd you think ``I wonder if I sell more lemonade on hotter days?''\n\nYou might create a scatter plot.  For each day, you put a mark that\nrepresents that temperature and the sales that day:\n\n\\includegraphics[width=\\textwidth]{LemonadeScatter.png}\n\nFrom this scatter plat, you can easily see that you do sell more\nlemonade as the temperature goes up.\n\n\n\\section{Make Bar Graph}\n\nGo back to your compound interest spreadsheet and make a bar graph\nthat shows both balances over time:\n\n\\includegraphics[width=0.9\\textwidth]{InterestGraph.png}\n\nThe year column should be used as the x-axis. There are two series of\ndata that come from C4:C16 and E4:E16.  Tidy up the titles and legend\nas much as you like.\\index{spreadsheet!graphing multiple series}\n\nLooking at the graph, you can see the balances start the same, but\nbalance of the account with the larger interest rate quickly pulls\naway from the account with the smaller interest rate.\n\n", "meta": {"hexsha": "5461ecc51bdd41a5169653cd25e4ee79a633d4e2", "size": 4194, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Modules/Spreadsheets/intro_dataviz-en_US.tex", "max_stars_repo_name": "rajivjhoomuck/sequence", "max_stars_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-06-13T17:19:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-05T00:43:44.000Z", "max_issues_repo_path": "Modules/Spreadsheets/intro_dataviz-en_US.tex", "max_issues_repo_name": "rajivjhoomuck/sequence", "max_issues_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Modules/Spreadsheets/intro_dataviz-en_US.tex", "max_forks_repo_name": "rajivjhoomuck/sequence", "max_forks_repo_head_hexsha": "5b39f09b6350922867c3f88beaf3683425715676", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-05T00:43:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-05T00:43:58.000Z", "avg_line_length": 32.511627907, "max_line_length": 93, "alphanum_fraction": 0.7477348593, "num_tokens": 1169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489892, "lm_q2_score": 0.8175744739711883, "lm_q1q2_score": 0.5976948424466934}}
{"text": "\n\\subsection{Voronoi path planning}\n\nFind paths as far as way from obstacles as possible\n\nDivide plane in cells, with a cell around each obstacle\n\nWithin a cell, all points are closest to that obstacle\n\nWe move along the lines between cells\n\nOverly conservative\n\nHard to compute in 3d\n\nSmall environmental changes can significantly change the graph.\n\n", "meta": {"hexsha": "4adf9729fedd83ecf8b1d4bb97bc8b5c319871c4", "size": 351, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/ai/robotics/02-02-voronoi.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/ai/robotics/02-02-voronoi.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/ai/robotics/02-02-voronoi.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.5, "max_line_length": 63, "alphanum_fraction": 0.8005698006, "num_tokens": 75, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5976948392808671}}
{"text": "\\subsection{Logistic Regression}\n\nA key subroutine in logistic regression \\cite{Hastie2009} is improving the \nestimates of the\ncoefficients, \\(\\beta\\), using the Newton-Raphson algorithm.\nIn the course of this, we need to compute\n\\(\\frac{1}{1 + e^{-x}}\\) and \n\\(\\frac{1}{(1 + e^{-x})^2}\\).\nWe use this to motivate the ``conjoin'' feature of \\Q.\nPerforming common sub-expression elimination as in \nFigure~\\ref{sub_expr} yields a 2x speedup.\n\n\\begin{figure}\n\\centering\n\\fbox{\n\\begin{minipage}{8 cm}\n\\centering\n\\verbatiminput{sub_expr.lua}\n\\caption{Common sub-expression elimination}\n\\label{sub_expr}\n\\end{minipage}\n}\n\\end{figure}\n\n\n\\subsubsection{Lock step evaluation}\nHowever, the code in Figure~\\ref{sub_expr} has a subtle but\ncritical bug when the number of elements in \\(x\\) exceeds the chunk\nsize.  If we were to call {\\tt eval()} on \\(y\\), then we would\nend up consuming sucessive chunks of \\(t_3\\). Now, if we were to call\n{\\tt eval()} on \\(z\\), we would fail when requesting the first\nchunk of \\(t_3\\), since it has {\\bf not} been memo-ized. One solution\nis to ensure that \\(y\\) and \\(z\\) are evaluated in lock-step, after they have\nbeen created, as in Figure~\\ref{lock_step}.\n\\begin{figure}\n\\centering\n\\fbox{\n\\begin{minipage}{8 cm}\n\\centering\n\\verbatiminput{lock_step.lua}\n\\caption{Lock step evaluation}\n\\label{lock_step}\n\\end{minipage}\n}\n\\end{figure}\n\nThe problem with this solution is that the burden of lock-step evaluation falls\non the \\Q\\ programmer. This is remedied by the {\\tt conjoin} function\n(Figure~\\ref{conjoin})\n\n\\begin{figure}\n\\centering\n\\fbox{\n\\begin{minipage}{8 cm}\n\\centering\n\\verbatiminput{conjoin.lua}\n\\caption{Conjoined Vectors}\n\\label{conjoin}\n\\end{minipage}\n}\n\\end{figure}\n\n", "meta": {"hexsha": "2026cff93301f312d55e7e93f47732e800fc8c97", "size": 1704, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DOC/Q_PAPER/SIGMOD_2019/logreg.tex", "max_stars_repo_name": "subramon/qlu", "max_stars_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DOC/Q_PAPER/SIGMOD_2019/logreg.tex", "max_issues_repo_name": "subramon/qlu", "max_issues_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-07-29T16:48:25.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-26T23:47:22.000Z", "max_forks_repo_path": "DOC/Q_PAPER/SIGMOD_2019/logreg.tex", "max_forks_repo_name": "subramon/qlu", "max_forks_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-14T22:34:13.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-14T22:34:13.000Z", "avg_line_length": 27.0476190476, "max_line_length": 79, "alphanum_fraction": 0.7335680751, "num_tokens": 513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744806385543, "lm_q2_score": 0.7310585727705126, "lm_q1q2_score": 0.5976948329492145}}
{"text": "\\section{Canonical Transformations}\nIn Hamiltonian formalism,\n$p$ and $q$ are on an equal footing.\n\nWhat that means is we can make coordinate transformations that mix up the $p$\nand $q$.\n\\begin{align}\n    q_i &\\to Q_i = Q_i\\left( q, p, t  \\right)\\\\\n    p_i &\\to P_i = P_i\\left( q, p, t  \\right)\n\\end{align}\nNow in the Hamiltonian formalism,\n$p$ and $q$ are on an equal footing.\nSo in general you can think of a transformation that mixes the $p$ and $q$.\nBut there's a catch.\nNot all such transformations are allowed.\n\nOnly those transformations that are allowed are those that leave Hamiltonian's\nequations invariant,\nso that in general\n\\begin{align}\n    H(q, p) \\to K(Q, P, t)\n\\end{align}\nand what we need is for\n\\begin{align}\n    \\frac{\\partial K}{\\partial P_i} &= \\dot{Q}_i\n\\end{align}\nand\n\\begin{align}\n    \\frac{\\partial K}{\\partial Q_i} &= -\\dot{P}_i\n\\end{align}\nso you're allowed a much bigger set of transformations,\nbut the condition is that they still have to respect Hamilton's equations.\n\n\\begin{question}\n    How are $q$ and $\\dot{q}$ not on an equal footing in the Lagrangian\n    formalism?\n\\end{question}\nIf you know $q(t)$,\nyou know $\\dot{q}$ from the derivative.\nYou have second order differential equation for $q(t)$.\nHere,\nyou get two first-order equations for $p$ and $q$ separately.\nYou can vary $p$ and $q$ independently to get the equations.\nBut you can't do that in the Lagrangian formalism.\n\nSo anyway,\nthey shouldn't change the form of Hamilton's equations.\n\nTransformations that satisfy this condition are called canonical\ntransformations.\nThis is the set of allowed transformations that leave the form of Hamilton's\nequations invariant.\n\nUnfortunately,\nthis is a huge topic,\nquite a complicated one,\nand we don't have enough time to go into great detail.\nSo I will look at a subset of these transformations.\nI'm going to look at descriptive canonical transformations.\nThese are the most important subset,\nbut if you look in the book,\nyou'll see a more extended discussion.\nThe real difference is mostly the ones we're looking at are not going to\ninvolve time.\nSo if you have transformations that mix $p$ and $q$ that don't mix time.\nFor those,\nthere's a simplification that $H=K$.\nThat's not very clear,\nbut it's part of the bigger formalism.\n\n\\section{Restricted Canonical Transformations}\nSo they are\n\\begin{align}\n   q_i &\\to Q_i(q, p)\\\\\n   p_i &\\to P_i(q, p)\n\\end{align}\nand the $H$ and $K$ are the same\n\\begin{align}\n    H(q, p) = K(Q, P)\n\\end{align}\nThis is what defines the restricted set of canonical transformations.\nWhat we do is,\nwe begin by writing Hamilton's equations in a symmetric form.\nSo you put the $p$'s and $q$'s into a single vector.\n\\begin{align}\n    \\vec{z} &=\n    \\begin{pmatrix}\n        q_1\\\\\n        q_2\\\\\n        \\vdots\\\\\n        q_n\\\\\n        p_1\\\\\n        p_2\\\\\n        \\vdots\\\\\n        p_n\n    \\end{pmatrix}\n\\end{align}\nSo we just defined this giant vector $2n$ components\nwith all the $p$'s and the $q$'s together.\n\nAnd then there's a standard matrix.\nWe introduce a $2n\\times 2n$ matrix called $\\hat{J}$,\nwhich is of this form\n\\begin{align}\n    \\hat{J} &=\n    \\begin{pmatrix}\n        0_{n\\times n} & I_{n\\times n}\\\\\n        - I_{n\\times n} & 0_{n\\times n}\\\\\n    \\end{pmatrix}\n\\end{align}\nWhere the $0_{n\\times n}$ denotes an $n\\times n$ matrix of zeros,\nand $1_{n\\times n}$ is the $n\\times n$ 
identity matrix.\nIt has the property that its transpose is $-1$ times itself,\n$\\hat{J}^T = -\\hat{J}$.\n\nIn this notation,\nHamilton's equations are\n\\begin{align}\n    \\dot{\\vec{z}} &=\n    \\hat{J}\n    \\frac{\\partial H}{\\partial \\vec{z}}\n\\end{align}\nor alternatively\n\\begin{align}\n    \\dot{z}_i\n    &=\n    \\sum_{j}\n    \\hat{J}_{ij}\n    \\frac{\\partial H}{\\partial z_j}\n\\end{align}\nThis is just a rewrite.\n\nSo now we make our transformation\nfrom $z=(q_i, p_i)$ to $w=(Q_i, P_i)$.\nIn other words, we're going from\n\\begin{align}\n    z_i &\\to w_i = w_i(z)\n\\end{align}\nLet's keep going.\n\nSo now these $w$'s must satisfy an equation just like the equation for the\n$\\dot{z}$.\nThat is,\n\\begin{align}\n    \\dot{w}_i &=\n    \\sum_{j}\n    \\frac{\\partial w_i}{\\partial z_j}\n    \\dot{z}_j\n\\end{align}\nbut this $\\dot{z}_j$ satisfies\n\\begin{align}\n    \\dot{z}_j &=\n    \\sum_{k}\n    \\hat{J}_{jk}\n    \\frac{\\partial H}{\\partial z_k}\n\\end{align}\nso then the equation becomes\n\\begin{align}\n    \\dot{w}_i &=\n    \\sum_{j,k}\n    \\frac{\\partial w_i}{\\partial z_j}\n    \\hat{J}_{jk}\n    \\frac{\\partial H}{\\partial z_k}\n\\end{align}\nbut this last derivative can be written\n\\begin{align}\n    \\frac{\\partial H}{\\partial z_k}\n    &=\n    \\sum_{l}\n    \\frac{\\partial H}{\\partial w_l}\n    \\frac{\\partial w_l}{\\partial z_k}\n\\end{align}\nso putting it together\n\\begin{align}\n    \\dot{w}_i &=\n    \\sum_{l}\n    \\left( \n    \\sum_{j,k}\n    \\frac{\\partial w_i}{\\partial z_j}\n    \\hat{J}_{jk}\n    \\frac{\\partial w_l}{\\partial z_k}\n    \\right)\n    \\frac{\\partial H}{\\partial w_l}\n\\end{align}\nand you'll see why I put the brackets like that in a minute.\nThis expression in the brackets is\n\\begin{align}\n    \\sum_{j,k}\n    \\frac{\\partial w_i}{\\partial z_j}\n    \\hat{J}_{jk}\n    \\frac{\\partial w_l}{\\partial z_k}\n    &=\n    \\left( J \\hat{J} J^T \\right)_{il}\n\\end{align}\nwhere $J$ here is the Jacobian matrix of the transformation.\nSo when we transform from the $z$'s to the $w$'s,\nthere's a Jacobian matrix associated with that transformation.\nThe Jacobian matrix is the matrix whose elements are\n\\begin{align}\n    J_{ij} &=\n    \\frac{\\partial w_i}{\\partial z_j}\n\\end{align}\nThose are the elements of the Jacobian matrix.\nAnd you can see that with this definition,\nthis thing in the brackets is indeed just $\\left( J \\hat{J} J^T \\right)_{il}$.\n\nSo then this equation can be written as\n\\begin{align}\n    \\dot{\\vec{w}}\n    &=\n    \\left(\n    J \\hat{J} J^T\n    \\right)\n    \\frac{\\partial H}{\\partial \\vec{w}}\n\\end{align}\nThe condition for the transformation to be canonical is that this equation\nshould have the same form as the equation\n\\begin{align}\n    \\dot{\\vec{z}} &=\n    \\hat{J}\n    \\frac{\\partial H}{\\partial \\vec{z}}\n\\end{align}\nThat is, \nthe equation for $\\dot{w}$ should have the same form as the equation for\n$\\dot{z}$.\n\nSo the condition for Hamilton's equations to take the same form is that\n\\begin{align}\n    \\boxed{J \\hat{J} J^T = \\hat{J}}\n\\end{align}\nIf this condition is satisfied then we are done.\nThis is the condition you have to remember.\nIt's easy in the exam to get a question that asks to show that a transformation\nis canonical.\nThen you just calculate the Jacobian matrix,\nthen compute this to check that you get $\\hat{J}$ back.
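That recipe can even be machine-checked.  Here is a small check of the symplectic condition using Python's \\texttt{sympy} (our own snippet; the rotation map is just an illustrative example, not from the lecture):\n\\begin{verbatim}\n# Check J Jhat J^T = Jhat for Q = q cos(a) + p sin(a),\n#                            P = -q sin(a) + p cos(a)   (n = 1).\nimport sympy as sp\n\nq, p, a = sp.symbols('q p a', real=True)\nQ = q*sp.cos(a) + p*sp.sin(a)\nP = -q*sp.sin(a) + p*sp.cos(a)\n\nJ = sp.Matrix([Q, P]).jacobian(sp.Matrix([q, p]))\nJhat = sp.Matrix([[0, 1], [-1, 0]])\nprint(sp.simplify(J * Jhat * J.T - Jhat))  # zero matrix => canonical\n\\end{verbatim}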
If this condition holds,\nthen we say that the Jacobian is \\emph{symplectic}.\nIt's a technical term.\n\nIf you study group theory,\nyou'll find that the matrices that satisfy this equation form a group,\ncalled the symplectic group.\nIf you study classical mechanics in detail,\na large part of it is understanding properties of the symplectic group.\nThis is one of the fundamental classical continuous groups.\n\n\\begin{question}\n    Is this the same as the canonical group?\n\\end{question}\nNo. Symplectic is not the same as canonical.\nThis symplectic group is the real symplectic group,\n$Sp(2n, \\mathbb{R})$.\nBut in quantum theory,\nthe transformation also has to be unitary,\nand it's complex.\nFor example, in QM,\nyou have the extra condition of unitarity,\nand you're allowed to have a complex map.\nSo things can be symplectic without being canonical.\n\nThis $\\hat{J}$ matrix shows up everywhere beyond classical mechanics\nin the mathematics literature.\nUnfortunately,\neveryone calls it $J$,\nwithout the hat,\nbut the Jacobian is also called $J$,\nso I have to choose which one has the hat.\n\nSo we found this equation.\nWe now show that this condition is equivalent to the requirement that the new\ncoordinates satisfy\n\\begin{align}\n    \\left\\{ \n    Q_i, Q_j\n    \\right\\}\n    &=\n    \\left\\{ \n    P_i, P_j\n    \\right\\}\n    = 0\\\\\n    \\left\\{ \n    Q_i, P_j\n    \\right\\}\n    &=\n    \\delta_{ij}\n\\end{align}\nThe claim is that if the new coordinates satisfy these Poisson brackets,\nthen you are guaranteed the symplectic condition.\n\nSo let's show that.\nThis is just an exercise in linear algebra.\nThe Jacobian matrix by definition is\n\\begin{align}\n    \\frac{\\partial w_i}{\\partial z_j} &=\n    J_{ij}\\\\\n    &=\n    \\begin{pmatrix}\n        \\partial Q_i/ \\partial q_j & \\partial Q_i / \\partial p_j\\\\\n        \\partial P_i/ \\partial q_j & \\partial P_i / \\partial p_j\\\\\n    \\end{pmatrix}\n\\end{align}\nWhat we're doing is evaluating this condition,\nbut in terms of the $Q$'s and $P$'s rather than in $z$-space.\nJust blindly plugging it in.\nThen what we have is\n\\begin{align}\n    \\left( J \\hat{J} J^T \\right)_{il} &=\n    \\sum_{j} \\sum_{k}\n    \\frac{\\partial w_i}{\\partial z_j}\n    \\hat{J}_{jk}\n    \\frac{\\partial w_l}{\\partial z_k}\\\\\n    &=\n    \\begin{pmatrix}\n        \\frac{\\partial Q_i}{\\partial q_j} &\n        \\frac{\\partial Q_i}{\\partial p_j}\\\\\n        \\frac{\\partial P_i}{\\partial q_j} &\n        \\frac{\\partial P_i}{\\partial p_j}\\\\\n    \\end{pmatrix}\n    \\begin{pmatrix}\n        0 & \\delta_{jk}\\\\\n        - \\delta_{jk} & 0\n    \\end{pmatrix}\n    \\begin{pmatrix}\n        \\frac{\\partial Q_l}{\\partial q_k} &\n        \\frac{\\partial P_l}{\\partial q_k}\\\\\n        \\frac{\\partial Q_l}{\\partial p_k} &\n        \\frac{\\partial P_l}{\\partial p_k}\n    \\end{pmatrix}\n\\end{align}\nand as an exercise in linear algebra,\nif you multiply this out,\nyou get\n\\begin{align}\n    \\begin{pmatrix}\n        0 & \\delta_{jk}\\\\\n        - \\delta_{jk} & 0\n    \\end{pmatrix}\n    \\begin{pmatrix}\n        \\frac{\\partial Q_l}{\\partial q_k} &\n        \\frac{\\partial P_l}{\\partial q_k}\\\\\n        \\frac{\\partial Q_l}{\\partial p_k} &\n        \\frac{\\partial P_l}{\\partial p_k}\n    \\end{pmatrix}\n    &=\n    \\begin{pmatrix}\n        \\frac{\\partial Q_l}{\\partial p_j} &\n        \\frac{\\partial P_l}{\\partial p_j}\\\\\n        -\\frac{\\partial Q_l}{\\partial q_j} &\n        -\\frac{\\partial P_l}{\\partial q_j}\n    \\end{pmatrix}\n\\end{align}
If you do the multiplication,\nI claim that this is what you're going to find.\n\\begin{align}\n    \\left(J \\hat{J} J^T\\right)_{il}\n    &=\n    \\begin{pmatrix}\n        \\left\\{ Q_i, Q_l \\right\\} &\n        \\left\\{ Q_i, P_l \\right\\}\\\\\n        \\left\\{ P_i, Q_l \\right\\} &\n        \\left\\{ P_i, P_l \\right\\}\n    \\end{pmatrix}\n\\end{align}\nand so if this is really equal to $\\hat{J}_{il}$,\nthen we get the conditions\n\\begin{align}\n    \\left\\{ Q_i, Q_l \\right\\} &=\n    \\left\\{ P_i, P_l \\right\\} = 0\n\\end{align}\nand\n\\begin{align}\n    \\left\\{ Q_i, P_l \\right\\} &= \\delta_{il}\n\\end{align}\n\nI just want you to convince yourself that the $il$-th element\ncorresponds to that block.\nWhat it is depends on which block you're in.\n\n\\begin{question}\n    If we actually set out to compute the Jacobian,\n    we could just do it element by element right?\n\\end{question}\nI just wanted to show that the condition\n$J \\hat{J} J^T = \\hat{J}$\nis equivalent to\n$\\left\\{ Q_i, Q_l \\right\\} = \\left\\{ P_i, P_l \\right\\} = 0$\nand\n$\\left\\{ Q_i, P_l \\right\\} = \\delta_{il}$.\nThe two are completely equivalent mathematically.\nIn the exam you could show either.\n\n\\section{Poisson Bracket Invariance}\nThere's one more thing I want to show.\nThe Poisson bracket is invariant under canonical transformation.\nLet me explain what I mean by this.\nSuppose you have a transformation\n\\begin{align}\n    q &\\to Q(q, p)\\\\\n    p &\\to P(q, p)\n\\end{align}\nand suppose you have a Poisson bracket.\nThe claim is that it is the same in the old coordinates and the new\ncoordinates:\n\\begin{align}\n    \\left\\{ f, g \\right\\}\n    &=\n    \\sum_{i}\n    \\left( \n    \\frac{\\partial f}{\\partial q_i}\n    \\frac{\\partial g}{\\partial p_i}\n    -\n    \\frac{\\partial g}{\\partial q_i}\n    \\frac{\\partial f}{\\partial p_i}\n    \\right)\n    =\n    \\sum_{i}\n    \\left( \n    \\frac{\\partial f}{\\partial Q_i}\n    \\frac{\\partial g}{\\partial P_i}\n    -\n    \\frac{\\partial g}{\\partial Q_i}\n    \\frac{\\partial f}{\\partial P_i}\n    \\right)\n\\end{align}\nThat means,\nif you have a Poisson bracket,\nyou can evaluate it in any coordinates you like,\nprovided they are related by canonical transformation.\n\nLet's prove this.\nThis is one of those things where it proves useful to be good at linear algebra.\nTo prove this note that\n\\begin{align}\n    \\left\\{ f, g \\right\\}\n    &=\n    \\sum_i\n    \\left( \n    \\frac{\\partial f}{\\partial q_i}\n    \\frac{\\partial g}{\\partial p_i}\n    -\n    \\frac{\\partial f}{\\partial p_i}\n    \\frac{\\partial g}{\\partial q_i}\n    \\right)\\\\\n    &=\n    \\sum_{i}\n    \\sum_{j}\n    \\left( \n    \\frac{\\partial f}{\\partial z_i}\n    \\hat{J}_{ij}\n    \\frac{\\partial g}{\\partial z_j}\n    \\right)\n\\end{align}\nSo we're going to expand it out and show the claim that they are equal.\n\nUnder $z\\to w(z)$,\nwe have\n\\begin{align}\n    \\frac{\\partial f}{\\partial z_i}\n    &=\n    \\sum_{j}\n    \\left( \\frac{\\partial f}{\\partial w_j} \\right)\n    \\underbrace{\\left( \\frac{\\partial w_j}{\\partial z_i} \\right)}_{J_{ji}}\n\\end{align}\nwhere we notice the second factor is just the Jacobian matrix element.\n\nThen we have\n\\begin{align}\n    \\left\\{ f, g \\right\\}\n    &=\n    \\sum_i \\sum_j \\sum_k \\sum_l\n    \\frac{\\partial f}{\\partial w_k}\n    J_{ki}\n    \\hat{J}_{ij}\n    J_{lj}\n    \\frac{\\partial g}{\\partial w_l}\\\\\n    &=\n    \\sum_k \\sum_l\n    \\frac{\\partial f}{\\partial w_k}\n    \\left( J \\hat{J} J^T \\right)_{kl}\n    \\frac{\\partial g}{\\partial w_l}\n\\end{align}
But because the transformation is canonical,\n\\begin{align}\n    \\left( J \\hat{J} J^T \\right)_{kl} &= \\hat{J}_{kl},\n\\end{align}\nwe find\n\\begin{align}\n    \\left\\{ f, g \\right\\}\n    &=\n    \\sum_{k,l}\n    \\frac{\\partial f}{\\partial w_k}\n    \\hat{J}_{kl}\n    \\frac{\\partial g}{\\partial w_l}\n\\end{align}\nand we can immediately go back to the definition of $\\hat{J}$ and see\n\\begin{align}\n    \\left\\{ f, g \\right\\}\n    &=\n    \\sum_i\n    \\left( \n    \\frac{\\partial f}{\\partial Q_i}\n    \\frac{\\partial g}{\\partial P_i}\n    -\n    \\frac{\\partial g}{\\partial Q_i}\n    \\frac{\\partial f}{\\partial P_i}\n    \\right)\n\\end{align}\nand we are done.\n\nSo the Poisson bracket can be evaluated in any coordinate system,\nprovided the systems are related by canonical transformation.\nIt's a huge subject.\nThere is a chapter in Goldstein dedicated to canonical transformations;\nthe current edition is quite a bit better than the earlier ones.\nThese notes are mostly taken from David Tong's lectures,\nbut you can find this in Goldstein as well.\nNot in the first edition,\nbut it is in the second edition.\n\nNow I'm going to start a new topic.\n\n\\section{Action-Angle Variables}\nConsider a one-dimensional system $H(q, p)$\nwithout explicit time-dependence,\nso the total energy is conserved.\nWe assume that the motion is bounded,\nso there exist some $q_1$, $q_2$ such that\n\\begin{align}\n    q_1 \\le q \\le q_2\n\\end{align}\nSo then imagine an arbitrary potential.\nThe system undergoes periodic motion,\nwith turning points $q_1$ and $q_2$.\nTransform from\n$(p, q)$\nto new variables\n$(I, \\theta)$\nwhich have the property that $H$ is independent of $\\theta$,\nso the Hamiltonian is a function of $I$ alone,\n\\begin{align}\n    H = H(I).\n\\end{align}\nIn these variables,\nfrom Hamilton's equations\n\\begin{align}\n    -\\frac{dI}{dt} &= \\frac{\\partial H}{\\partial \\theta} = 0\n\\end{align}\nwhich means that $I$ is a constant of the motion.\nThen what about the other Hamilton's equation?\n\\begin{align}\n    \\frac{d\\theta}{dt} &=\n    \\frac{\\partial H}{\\partial I}\n\\end{align}\nwhich is some function of $I$.\nRemember $H$ does not contain $t$ and it does not contain $\\theta$,\nand $I$ we just showed to be a constant.\nSo this is also a constant.\n\nSo the equations of motion are very simple:\nthe first equation tells you $I$ is a constant,\nand the second can be integrated to find that $\\theta$ is just a constant\ntimes time.\n\nAnd so by convention,\n$I$ and $\\theta$ are normalized such that\n$\\dot{\\theta} = \\omega$,\nwhere $\\omega$ is the angular frequency of this oscillation.\n\n$I$ is called the \\emph{action variable}\nand $\\theta$ is called the \\emph{angle variable}.\nSo $I$ is like the momentum and $\\theta$ is like the position.\n\nThe point is that in terms of the action-angle variables,\nthe problem is very simple:\n$I$ is a constant and $\\theta = \\omega t$.\nThat's easy.\n\nThe hard work is figuring out the relation between $(I, \\theta)$\nand $(q, p)$.\nIf you can find the action-angle variables,\nyou can trivially solve the problem.\nThe challenge is how do you find the action-angle variables.\nIf you can find those variables,\nyou basically solve the problem.
I'm going to show you how to do this for a special case and then we'll talk\nabout how to potentially generalize it.\n\nAlmost all the solvable problems there are\ncan be reduced to action-angle variables:\nthe Kepler problem,\nthe harmonic oscillator,\netc.\nBut there are chaotic systems for which there are no such variables.\nIn more complicated systems,\nyou usually start from a limit that does have action-angle variables,\nand then you perturb.\nSo you might not have exact action-angle variables,\nbut then you start from a point where you do.\nIn this case,\nthey exist because there is a constant of motion,\nthe conserved energy,\nand you will see it is related in a trivial way to the action variable.\nOnce you have an action variable,\nthat's all you need.\nIf you have an $n$-dimensional system with conserved energy,\nthere is a theorem which says you do have action-angle variables.\nBut if energy is not conserved,\nyou may not have action-angle variables in general.\n\nConsider\n\\begin{align}\n    H &=\n    \\frac{p^2}{2m} + V(q)\n\\end{align}\nIf $I$ is a constant of the motion,\nit must be some function of the energy,\n\\begin{align}\n    H &= H(I) =  E\n\\end{align}\nThen\n\\begin{align}\n    \\dot{\\theta}\n    &=\n    \\frac{\\partial H}{\\partial I}\n    =\n    \\frac{d E}{dI}\n    = \\omega\n\\end{align}\nThis is because $H$ does not depend on $\\theta$,\nand this $\\omega$ is the angular frequency of oscillation\nbecause that's how we normalized the action-angle variables.\nSo this is the setting.\nFor this Hamiltonian,\nwe have to find the action variable,\nwhich is a constant of the motion,\nsome function of the energy,\nwith the property that $\\frac{dE}{dI} = \\omega$.\n\nThe claim is the following.\nThe correct choice of $I$ is\n\\begin{align}\n    I &=\n    \\frac{1}{2\\pi}\n    \\oint p\\, dq\n\\end{align}\nwhere the integral is the area in phase space enclosed by one orbit,\nso $I$ is that area divided by $2\\pi$.\n\nIn other words,\nin physical space,\nthis thing is just bouncing back and forth between $q_1$ and $q_2$.\nLet's think about what this means in phase space.\n\nThere is a turning point $q_2$ and there's a turning point $q_1$,\nwhere the horizontal axis is $q$ and the vertical axis is $p$.\nThe system traces out an orbit in phase space.\nThis has a reflection symmetry about the $q$ axis,\nbecause the system passes a given $q$ with a positive value of $p$\nand comes back with a negative value of $p$:\nthe square root below gives two solutions.\nSo this thing is the enclosed area,\nwhich is obviously a function of the energy.\nWith more energy you can go further apart.\nSo this is a function of the energy and this gives a relation between the energy\nand $I$.\nThis is just a claim; we haven't proved it yet.\n\nOkay,\nso let's now try to prove this claim.\nTo prove this,\nwe need to show that\n\\begin{align}\n    \\frac{d}{dE}\\left( \n    \\oint p\\, dq\n    \\right)\n    =\n    \\frac{2\\pi}{\\omega}\n\\end{align}\nwhere $\\omega$ is the frequency of oscillation.\nAlright, so this is what we're going to show.\nSo now we have a bit of work.\nHow do we evaluate the area of this curve for a given $E$?\n\\begin{align}\n    p &=\n    \\sqrt{2m\\left( E - V(q) \\right)}\n\\end{align}\nAs $E$ is changed, two things happen.\nFirstly,\nobviously the value of $p$ at every point $q$ is altered:\n\\begin{align}\n    p &\\to\n    p\n    +\n    \\left( \\frac{\\partial p}{\\partial E}\\right)_q\n    \\Delta E\n\\end{align}\nBy convention the square root is always positive.
So if we change the energy,\nat every point inside this integral,\nthe value of $p$ is going to change for a given $q$.\n\nSecondly,\nthe end points $q_1$ and $q_2$ are shifted.\nThat's because if you change $E$,\nyour $q_1$ and $q_2$ are not the same now.\nWe're going to see how much the integral changes if we change $E$.\nThat's what we're planning to do.\n\nThe value of that integral changes because $p$ is changing at every point,\nand the end points are shifting.\nWe're going to account for both these effects.\n
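As a quick worked check of the claim (added here for illustration; the computation is standard, not from the lecture), take the harmonic oscillator $H = \\frac{p^2}{2m} + \\frac{1}{2} m \\omega^2 q^2$.\nThe orbit of energy $E$ is an ellipse with semi-axes $\\sqrt{2mE}$ in $p$ and $\\sqrt{2E/(m\\omega^2)}$ in $q$,\nso the enclosed phase-space area is\n\\begin{align}\n    \\oint p\\, dq\n    &=\n    \\pi \\sqrt{2mE}\\,\\sqrt{\\frac{2E}{m\\omega^2}}\n    = \\frac{2\\pi E}{\\omega}\n\\end{align}\nHence $I = E/\\omega$ and $\\frac{dE}{dI} = \\omega$, exactly as required.\n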
{"text": "\n\\subsection{Visibility graphs}\n\nVisibility graph generates nodes from points on obstacles, current position and goal. generate graph.\n\nUse A* to solve\n\nThat algorithm is called VGRAPH\n\n\\subsection{Robots with volume}\n\nWhat if robot takes up space?\n\nExpand each obstacle by size of robot\n\nHow to grow each obstacle? draw shape of robot around each obstacle. shape is relative to point, so the growth is asymmetric if you choose a point for the robot which is not in the middle. eg a vertex.\n\n\\subsection{Rotation of robots}\n\nWhat about rotation? grow robot by shape of all rotations. but this is overly conservative. will miss paths\n\nCan do vgraph for each rotation, and switch between rotations. can choose 2 or 3 rotations\n\n\n", "meta": {"hexsha": "3bda397ba5996062aba2f54b1bdd5922b757b32d", "size": 727, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/ai/robotics/02-01-visibility.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/ai/robotics/02-01-visibility.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/ai/robotics/02-01-visibility.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.08, "max_line_length": 201, "alphanum_fraction": 0.788170564, "num_tokens": 155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744673038221, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5976948232007444}}
{"text": "Assuming that data is appropriately treated to eliminate redundant\nfeatures, we proceed to propose surrogate models and criteria\nused for their evaluation. The task all presented surrogates strive to solve can be\nformulated using the language of conventional regression problems. In this section,\nwe focus on interpretation in the scheme of supervised learning with decoupled\nand adaptive sampling.\n\nLabeling the expensive MC TBR model $f(x)$, a surrogate is a mapping\n$\\hat{f}(x)$ that yields similar images as $f(x)$. In other words, $f(x)$ and\n$\\hat{f}(x)$ minimise a selected dissimilarity metric. Furthermore, in order to\nbe considered \\textit{viable}, surrogates are required to achieve expected evaluation time\nlower than that of $f(x)$.\n\nIn the decoupled sampling approach that is further described\nin~\\cref{sec:supervised}, we first gather a sufficiently large\ntraining set of samples $\\mathcal{T}=\\left\\{\\left( x^{(i)},f\\left(x^{(i)}\\right) \\right)\\right\\}_{i=1}^N$\nto describe the behaviour of $f(x)$ across its domain.\nDepending on a specific model family and appropriate choice of its\nhyperparameters, surrogate models $\\hat{f}(x)$ are trained to minimise\nthe empirical risk~$R_{\\text{emp.}}$ with respect to $\\mathcal{T}$ and a model-specific\nloss function $\\mathcal{L}$, where the empirical risk is defined as\n$R_{\\text{emp.}}(\\hat{f}\\mid\\mathcal{T},\\mathcal{L})\n\t=\\frac{1}{N}\\sum_{i=1}^N\n\t\\mathcal{L}\\left(\\hat{f}(x^{(i)}),f(x^{(i)})\\right)$.\n\n\nThe adaptive sampling approach that we characterise in~\\cref{sec:adaptive} can be viewed as a more general problem.\nRather than fixing the training set $\\mathcal{T}$ for the entire duration of\ntraining, multiple sets $\\{\\mathcal{T}_k\\}_{k=0}^K$ are used, such that\n$\\mathcal{T}_{k-1}\\subset\\mathcal{T}_k$ for all $k>1$. The first set\n$\\mathcal{T}_0$ is initialised randomly to provide a \\textit{burn-in}, and is\nrepeatedly extended in epochs, whereby each epoch trains a new surrogate~$\\hat{f}_k(x)$ on\n$\\mathcal{T}_k$ using the supervised learning procedure, evaluates its\nperformance, and forms a new set $\\mathcal{T}_{k+1}$ by adding more samples to\n$\\mathcal{T}_k$. This permits the learning algorithm to condition the selection\nof new samples in~$\\mathcal{T}_{k+1}$ on the evaluation results\nof~$\\hat{f}_k(x)$ in order to maximise improvement of\nsurrogate performance over complex regions within the feature space.\n\n\n\\subsection{Metrics}\n\\label{sec:metrics}\n\nAiming to provide objective comparison of a diverse set of surrogate model\nfamilies, we define a multitude of metrics to be tracked during experiments.\nFollowing the motivation of this work, two desirable properties of surrogates\narise: (a) their capability to approximate the TBR MC model well and (b) their\nprediction time. An ideal surrogate would maximise\nthe former while minimising the latter.\n\n\\Cref{tbl:metrics} provides an exhaustive listing and description of metrics recorded\nin our experiments. For regression performance analysis, we include a selection\nof absolute metrics to assess the approximation capability of surrogates, and set\npractical bounds on the expected uncertainty of their predictions. 
In addition, we also track\nrelative measures that are better suited for comparison between this work and others, as\nthey maintain invariance with respect to the selected domain and image space.\nFor complexity analysis, surrogates are assessed in terms of wall\ntime\\footnote{Real time elapsed during computation measured by a\nchronometer, here by means of the Python~\\texttt{time} package.}\nelapsed during training and prediction. This is motivated by common practical use\ncases of our work, where models are trained and used as drop-in replacements for the\nexpensive MC TBR model. Since training set sizes remain to be determined, all times are\nreported per single datapoint. Even though some surrogates support acceleration\nby means of parallelisation, sequential processing of samples was ensured to\nachieve comparability between considered\nmodels.\\footnote{The only exception to this is artificial neural networks,\n\twhich require a considerable amount of processing power for training on conventional CPU architectures.}\n\n\\begin{table}[h]\n\t\\centering\n\t{\\footnotesize\n\t\t\\begin{tabular}{llrl}\n\t\t\\toprule\n\t\tRegression performance metrics\t& Mathematical formulation / description &\n\t\t\\multicolumn{2}{c}{Ideal value [units]} \\\\\n\t\t\\midrule\n\t\tMean absolute error (MAE)\t& $\\sum_{i=1}^N |y^{(i)}-\\hat{y}^{(i)}|/N$ & 0\n\t\t\t\t\t\t\t\t\t& [TBR] \\\\\n\t\tStandard error of regression $S$\t& $\\text{StdDev}_{i=1}^N\\left\\{ |y^{(i)} -\n\t\t\\hat{y}^{(i)}| \\right\\} $\t & 0 & [TBR] \\\\\n\t\tCoefficient of determination $R^2$\t& $1-\\sum_{i=1}^N \\left(y^{(i)}-\\hat{y}^{(i)} \\right)^2 /\n\t\t\\sum_{i=1}^N \\left( y^{(i)}-\\overline{y} \\right)^2 $ & 1 & [rel.] \\\\\n\t\tAdjusted $R^2$\t& $1-(1-R^2)(N-1)/(N-P-1)$\t& 1 & [rel.] \\\\\n\t\t\\midrule\n\t\tComplexity metrics\t& {} & {} & {} \\\\\n\t\t\\midrule\n\t\tMean training time $\\overline{t}_{\\text{trn.}}$\t& $(\\text{wall training time of\n\t\t$\\hat{f}(x)$})/N_0$ \t& 0 & [ms] \\\\\n\t\tMean prediction time $\\overline{t}_{\\text{pred.}}$\t& $(\\text{wall prediction time of\n\t\t$\\hat{f}(x)$})/N$\t& 0 & [ms] \\\\\n\t\tRelative speedup $\\omega$\t& $(\\text{wall evaluation time of $f(x)$}) /\n\t\t(N\\overline{t}_{\\text{pred.}})$\t&\n\t\t$\\to\\infty$ & [rel.] \\\\\n\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\caption{Metrics recorded in supervised learning experiments. In\n\tformulations, we work with a training set of size $N_0$ and a test set of\nsize $N$; TBR values $y^{(i)}=f(x^{(i)})$ and $\\hat{y}^{(i)}=\\hat{f}(x^{(i)})$\ndenote images of the $i$th testing sample in the expensive model and the surrogate\nrespectively. Furthermore, the mean $\\overline{y}=\\sum_{i=1}^N y^{(i)}/N$ and $P$ is the\nnumber of input features.}\n\t\\label{tbl:metrics}\n\\end{table}\n\nTo prevent undesirable bias in results due to training set selection, all metrics\nare collected in the scheme of $k$-fold cross-validation with a conventional choice of\n$k=5$. In this setting, a sample set is uniformly divided into 5 disjoint folds, each of which\nis used as a test set for models trained on the remaining 4.\\footnote{Unless explicitly stated otherwise, we use 1\nfold for testing and the 4 remaining folds for training. This gives an 80\\% to\n20\\% train-test ratio.} Having repeated the same experiment for each such run,\nthe overall value of individual metrics is reported in terms of their mean and\nstandard deviation over all folds.
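The regression metrics of \\cref{tbl:metrics} are straightforward to compute; the following NumPy sketch (our own helper, with stand-in data) implements them exactly as formulated:\n\\begin{verbatim}\n# MAE, standard error of regression S, R^2 and adjusted R^2\n# for test images y = f(x), yhat = fhat(x), with P input features.\nimport numpy as np\n\ndef metrics(y, yhat, P):\n    N = len(y)\n    abs_err = np.abs(y - yhat)\n    mae = abs_err.mean()\n    S = abs_err.std()\n    ss_res = np.sum((y - yhat) ** 2)\n    ss_tot = np.sum((y - y.mean()) ** 2)\n    r2 = 1.0 - ss_res / ss_tot\n    r2_adj = 1.0 - (1.0 - r2) * (N - 1) / (N - P - 1)\n    return mae, S, r2, r2_adj\n\nrng = np.random.default_rng(0)\ny = rng.uniform(0.5, 1.5, size=100)          # stand-in TBR values\nyhat = y + rng.normal(0.0, 0.05, size=100)   # stand-in predictions\nprint(metrics(y, yhat, P=20))\n\\end{verbatim}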
\\subsection{Decoupled Sampling}\n\\label{sec:supervised}\n\nIn our experiments, we evaluate and compare surrogates in an effort to\noptimise against the metrics described in~\\cref{sec:metrics}. To attain meaningful\nand practically usable results, we require a sufficiently large and diverse pool\nof surrogate families to review. This is described by the listing in~\\cref{tbl:surrogates}. The presented selection of models includes basic\ntechniques suitable for linear regression enhanced by the kernel trick or dimension\nlifting, methods driven by decision trees, instance-based learning models,\nensemble regressors, randomised algorithms, artificial neural networks and mathematical approaches\ndeveloped specifically for the purposes of surrogate modelling. For each of\nthese families, a state-of-the-art implementation was selected and adapted to\noperate with TBR samples.\n\n\\begin{table}[h]\n\t\\centering\n\t{\\footnotesize\n\t\t\\begin{tabular}{llll}\n\t\t\\toprule\n\t\tSurrogate & Acronym & Implementation & Hyperparameters \\\\\n\t\t\\midrule\n\t\tSupport vector machines~\\cite{fan2008liblinear}\t& SVM & SciKit Learn~\\cite{scikit-learn} & 3 \\\\\n\t\tGradient boosted trees~\\cite{friedman2001greedy,friedman1999stochastic,hastie2009elements}\t& GBT & SciKit Learn & 11 \\\\\n\t\tExtremely randomised trees~\\cite{geurts2006extremely}\t& ERT & SciKit Learn & 7 \\\\\n\t\tAdaBoosted decision trees~\\cite{drucker1997improving}\t& ABT & SciKit Learn & 3 \\\\\n\t\tGaussian process regression~\\cite{williams2006gaussian}\t& GPR & SciKit Learn & 2 \\\\\n\t\t$k$ nearest neighbours\t& KNN & SciKit Learn & 3 \\\\\n\t\tArtificial neural networks\t& ANN & Keras (TensorFlow)~\\cite{chollet2015keras} & 2 \\\\\n\t\tInverse distance weighting~\\cite{shepard1968two} & IDW & SMT~\\cite{SMT2019} & 1 \\\\\n\t\tRadial basis functions & RBF & SMT & 3 \\\\\n\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\caption{Considered surrogate model families.}\n\t\\label{tbl:surrogates}\n\\end{table}\n\nWhile some of the presented model families are clearly determined by an explicit\nchoice of a learning algorithm, others may vary considerably depending on\nhyperparameter configuration. A good example of the latter group is artificial neural\nnetworks, which in addition to conventional hyperparameters allow network\narchitecture to be controlled through selection from various parametric graph templates\n(illustrated in~\\cref{fig:nn-archs}). This allows us to realise a simplistic\nnetwork architecture search during the hyperparameter tuning procedure.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{subfigure}[b]{0.25\\textwidth}\n\t\t\\centering\n\t\t{\\scriptsize \\incfigscale{0.6}{1h3f}}\n\t\t\\caption{$\\text{1h3f}(W)$}\n\t\\end{subfigure}\\hfill%\n\t\\begin{subfigure}[b]{0.25\\textwidth}\n\t\t\\centering\n\t\t{\\scriptsize \\incfigscale{0.6}{kf}}\n\t\t\\caption{$\\text{df}(D,W)$}\n\t\\end{subfigure}\\hfill%\n\t\\begin{subfigure}[b]{0.25\\textwidth}\n\t\t\\centering\n\t\t{\\scriptsize \\incfigscale{0.6}{3pyramid}}\n\t\t\\caption{$\\text{3pyramid}(W)$}\n\t\\end{subfigure}\\hfill%\n\t\\begin{subfigure}[b]{0.25\\textwidth}\n\t\t\\centering\n\t\t{\\scriptsize \\incfigscale{0.6}{5diam}}\n\t\t\\caption{$\\text{5diamond}(W)$}\n\t\\end{subfigure}\n\n\t\\caption{Selected parametric neural network architectures. 
All layers except\n\t\tthe last use ReLU activation. Prediction information flow is indicated by\n\t\tarrows.}\n\t\label{fig:nn-archs}\n\end{figure}\n\n\subsubsection{Experiments}\n\label{sec:experiment-methodology}\n\nThe presented surrogate candidates are evaluated in four experimental cases:%\n\begin{enumerate}\n\t\item Hyperparameter tuning in a simplified domain.\n\n\t\item Hyperparameter tuning in the full domain.\n\n\t\item Scaling benchmark.\n\n\t\item Model comparison.\n\end{enumerate}\n\nThe aim of the initial experiments is to use a relatively small subset of\ncollected TBR samples to determine the hyperparameters of the considered surrogates.\nSince this process requires learning the behaviour of an unknown, possibly\nexpensive mapping -- here a function that assigns cross-validated metrics to a\npoint in the hyperparameter domain -- it mirrors the primary task of this work\nin many respects, with the notable extension that there is now an explicit utility\nto optimise. In order to avoid undesirable exponential slowdown in exhaustive\nsearches of a possibly high-dimensional parameter space, Bayesian\noptimisation~\cite{movckus1975bayesian} is employed as a standard hyperparameter tuning algorithm. We set\nits objective to maximise $R^2$ and perform\n1000~iterations.\footnote{Hyperparameter tuning of each surrogate family was\n\tterminated after 2~days. Instances that reached this limit may be identified\n\tin~\cref{tbl:exp1-detailed-results,tbl:exp2-detailed-results} in the\n\tAppendix.}\n\nIn the first experiment, efforts are made to maximise the chance of success\nfor surrogates that are prone to suboptimal performance in discontinuous spaces.\nThis follows the notion that, if desired, the performance of such models may be\nreplicated by training separate instances to model each continuous subregion of\nthe domain independently.\nTo this end, data are limited to a single slice from run~2, and discrete\nfeatures are completely withheld from the evaluated\nsurrogates. This is repeated for each of the four available slices to\ninvestigate variance in behaviour under different discrete feature assignments.\nThe second experiment conventionally measures surrogate performance on the full\nfeature space. Here, in extension of the previous case, surrogates work with\nsamples composed of discrete as well as continuous features.\n\nThe objective of the last two experiments is to exploit the information gathered by\nhyperparameter tuning. In the third experiment, the 20~best-performing\nhyperparameter configurations of each family (with respect to~$R^2$) are used to\nperform training on progressively larger sets to investigate their scaling\nproperties. Following that, the fourth experiment aims to produce surrogates\nsuitable for practical use by retraining selected well-scaling instances on large\ntraining sets to satisfy the goals of this work.\n\n\n\subsection{Adaptive Sampling}\n\label{sec:adaptive}\n\n\begin{wrapfigure}[25]{r}{0.55\textwidth}\n\t\centering\n\t\vspace{-3ex}\n\t\includegraphics[width=0.7\textwidth]{fig4_qassplan.png}\n\t\caption{Schematic of the QASS algorithm}\n\t\label{fig:qassplan}\n\end{wrapfigure}\n\nAll of the surrogate modelling techniques studied in this project face a common\nchallenge: their accuracy is limited by the quantity of training samples which\nare available from the expensive MC TBR model. Adaptive sampling procedures can\nimprove upon this limitation by taking advantage of statistical information\nwhich is accumulated during the training of any surrogate model. 
Rather than\ntraining the surrogate on a single sample set generated according to a fixed\nstrategy, sample locations are chosen periodically during training so as to best suit the model\nunder consideration.\n\nAdaptive sampling techniques appear frequently in the literature and have been\nspecialised for surrogate modelling. Garud's~\cite{Garud2016} ``Smart Sampling\nAlgorithm'' achieved notable success by incorporating surrogate quality and\ncrowding distance scoring to identify optimal new samples, but was only tested\non a single-parameter domain. We theorised that a nondeterministic sample\ngeneration approach, built around Markov Chain Monte Carlo methods (MCMC), would\nfare better for high-dimensional models by more thoroughly exploring all local\noptima in the feature space. MCMC produces a progressive chain of sample points,\neach drawn according to the same symmetric proposal distribution\footnote{An\nadaptive MCMC procedure~\cite{Zhang2012}, which adjusts an ellipsoidal proposal\ndistribution to fit the posterior, was also implemented but not fully tested.}\nfrom the preceding point. These sample points will converge to a desired posterior\ndistribution, so long as the acceptance probability for these draws has a\nparticular functional dependence on that posterior value (see~\cite{Zhou2018}\nfor a review).\n\n\nMany researchers have embedded surrogate methods into MCMC strategies for\nparameter optimisation~\cite{Zhang2020,Gong2017}, in particular the ASMO-PODE\nalgorithm~\cite{Ginting2011} which makes use of MCMC-based adaptive sampling to\nattain greater surrogate precision around prospective optima. Our novel approach\ndraws inspiration from ASMO-PODE, but instead uses MCMC to generate samples\nwhich increase surrogate precision throughout the entire parameter space.\n\nWe designed the Quality-Adaptive Surrogate Sampling algorithm (QASS,\n\cref{fig:qassplan}) to iteratively increment the training/test set with sample\npoints which maximise surrogate error and minimise a crowding distance metric\n(CDM)~\cite{Solonen2012} in feature space. Following an initial training of the surrogate on $N$ uniformly random samples, on each iteration the surrogate was retrained and its absolute error calculated. MCMC was then performed on the error function generated by performing nearest-neighbour interpolation on these test error points. The resultant samples were culled by $50\%$ according to the CDM, and then the $n$ highest-error candidates were selected for reintegration with the training/test set, beginning another training epoch. 
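\n\nTo make the loop concrete, the following minimal Python sketch implements one possible reading of this procedure on a cheap stand-in objective. The surrogate choice (a $k$-nearest-neighbours regressor), the random-walk Metropolis proposal scale, the chain length and the culling rule are illustrative assumptions of ours rather than elements fixed by QASS.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.interpolate import NearestNDInterpolator\nfrom sklearn.neighbors import KNeighborsRegressor\n\nrng = np.random.default_rng(0)\n\ndef f(X):  # cheap stand-in for the expensive MC TBR model\n    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])\n\nd, N, n = 2, 64, 8\nX = rng.uniform(size=(N, d))   # initial uniformly random sample set\ny = f(X)\nfor it in range(5):\n    surrogate = KNeighborsRegressor(n_neighbors=5).fit(X, y)\n    err = np.abs(y - surrogate.predict(X))      # absolute surrogate error\n    err_fn = NearestNDInterpolator(X, err)      # interpolated error field\n    x = X[np.argmax(err)].copy()                # random-walk Metropolis start\n    chain = []\n    for _ in range(400):\n        prop = np.clip(x + rng.normal(scale=0.05, size=d), 0.0, 1.0)\n        if err_fn(prop[None])[0] >= rng.uniform() * err_fn(x[None])[0]:\n            x = prop                            # accept moves towards high error\n        chain.append(x.copy())\n    cand = np.unique(np.array(chain), axis=0)\n    # cull the 50% of candidates that crowd existing samples most\n    dist = np.min(np.linalg.norm(cand[:, None] - X[None], axis=-1), axis=1)\n    cand = cand[np.argsort(dist)[len(cand) // 2:]]\n    cand = cand[np.argsort(err_fn(cand))[-n:]]  # n highest-error survivors\n    X, y = np.vstack([X, cand]), np.concatenate([y, f(cand)])\n\end{verbatim}\n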
Validation was also performed during each iteration on independent, uniformly-random sample sets.\n\n% Adaptive multi-chain MCMC:~\\cite{Zhang2012}\n\n\n\n", "meta": {"hexsha": "4098c79535f6f115aae4b48fee9fd18195bb2a1e", "size": 15616, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "final_report/methodology.tex", "max_stars_repo_name": "ukaea-group-project/Documentation", "max_stars_repo_head_hexsha": "fcc642a2969e86c532d03254bb4b162e9f23ee01", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-22T11:25:42.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-22T11:25:42.000Z", "max_issues_repo_path": "final_report/methodology.tex", "max_issues_repo_name": "ukaea-group-project/Documentation", "max_issues_repo_head_hexsha": "fcc642a2969e86c532d03254bb4b162e9f23ee01", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "final_report/methodology.tex", "max_forks_repo_name": "ukaea-group-project/Documentation", "max_forks_repo_head_hexsha": "fcc642a2969e86c532d03254bb4b162e9f23ee01", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-04T14:53:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-04T14:53:55.000Z", "avg_line_length": 53.1156462585, "max_line_length": 626, "alphanum_fraction": 0.7774718238, "num_tokens": 4033, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.867035752930664, "lm_q2_score": 0.6893056231680122, "lm_q1q2_score": 0.597652619982818}}
{"text": "\\chapter{Spiking Neural Networks}\n\\label{section:snn}\n\nFor an more in depth introduction to spiking neural networks see \\cite{Vreeken2003}\n\nNotes:\n\n\n\n\nSpiking neural networks are the third generation of neural network models. They are more realistic and Wolfgang Maass showed that they are in theory computationally more powerful than threshold and sigmoidal gates\\cite{Maass1997}.\n\nGenerations of neural networks:\n1. generation consisted of McCulloch-Pitts threshold neurons\nsimple model. neuron sends a binary 'high' signal if the sum of its weighted inputs rises above a threshold value\nused successfully in multi-layer perceptrons and Hopfield nets. Any function with boolean output can be computed by a multilayer perception with single hidden layer\n2. generation neuronal networks\nsuccessfully used in deep nn and others\n\nNeurons of the first two generations do not employ individual pulses but their output signals typically lie between 0 and 1. These signals can be seen as normalized firing rates of the neuron within a certain period of time. rate coding. there ,ist ne am averaging window and the outputs of all neurons can be calculated by using the firing rates instead of working with single spikes. Neurons of the second generation are biologically more realisic and powerful than neurons of the first generation. They use a continuous activation function withc allows us to apply a gradient descet learning algorithm like back propagation.\n\nThe third generation of neural network once again raises teh level of biological realism by using individual spikes. This allows incorporating spatial-temporal information in communication and computation. These neurons use pulse coding instead of rate coding. Humans can classyfy visual input in under 100ms it takes at leas 10 synapic steps from the retina to the temporal lobe this leaves about 10 ms of processing time per neuron. such a time window is much to little to allow an averagin mechanism like rate coding\n\nWhy use snn over other models?\nSNN raise the biological plausibility in comparison to ann since they simulate individual spikes instead of using averaged firing frequencys.\nOn robotic platforms like a snake like robot the energy and computing resources are limited. The human brain needs only 20 W of power  (Drubach, 2000). Yet is able to perform visual pattern analysis and classification in just 100 ms, despite it taking at least 10 synaptic stages (Thorpe et al., 2001).\nSNN can use the temporal information, precise timing of events can contain information\nSNN allow to invorporate spatial-temporal information that would be lost by averaging over pulse frequencies. This ability is essencial in a environment that is rich with temporal information. \n\n\\section{Leaky Integrate and Fire Neuron Model}\n% TODO rewrite\n% Izhikevich (2004). comparison of different models\nThe neuron model used in this work is the Leaky integrate and fire neuron model(LIF)\\cite{(Stein, 1965)}, a variant of the Integrate-and-Fire model from Burkitt\\cite{(Burkitt, 2006)}. It is a widely used model because it is a good trade off between biological plausibility and complexity. It is compared to other models simple since it can be explained using principles form electronics. The model is based on the assumption that timing of spikes rather than the specific shape carries neural information. 
The sequences of firing times are called spike trains and can be described as\n\n\begin{equation}\label{eq:spikeTrain}\nS\left(t\right) = \sum_{f}\delta\left(t - t^f\right)\n\end{equation}\n\nwhere $ f = 1, 2, \dots $ is the label of a spike and $\delta\left(\cdot\right) $ is a Dirac delta function. % TODO explain dirac ? alpha shaped function??\n\nAn incoming spike train $S_j$ from the $j$-th synapse triggers the synaptic electric current\n\n\begin{equation}\label{eq:current}\ni_j\left(t\right) = \int_{0}^{\infty} S_j\left(t - s\right)\exp\left(\frac{-s}{\tau_s}\right)ds\n\end{equation}\n\nThe post-synaptic current then charges the LIF neuron, increasing the potential $u$ according to\n\n\begin{equation}\label{eq:potential}\n\tau_m\frac{du}{dt}\left(t\right)=u_{rest} - u\left(t\right) + R\left(i_0\left(t\right) + \sum w_ji_j\left(t\right) \right)\n\end{equation}\n\nwhere $\tau_m=RC$ is the time constant of the neuron membrane, modeling the voltage leakage depending on the resistance $R$. The potential after a reset is $u_{rest}$. The external current $i_0\left(t\right)$ drives the neuron state, the input current $i_j\left(t\right)$ stems from the $j$-th synaptic input, and $w_j$ represents the strength of the connection. Once the membrane potential $u$ reaches a certain firing threshold $\vartheta$, the neuron fires a single spike and its membrane potential is set back to $u_{rest}$. This event is followed by a refractory period in which the neuron is inactive and cannot be charged.\n\n% iaf_psc_alpha - Leaky integrate-and-fire neuron model.\n% http://www.nest-simulator.org/helpindex/cc/iaf_psc_alpha.html\n\n% E. M. Izhikevich et. al. \"Simple Model of spiking Neurons\" 2003\n% Hodgking, Huxley 1952 A quantitive description of membrane current and its application to conduction and excition in nerves\n%\tTODO look up on which paper nest implementation is based\n\n% TODO choose wich one i want to use\n\begin{equation}\n\t\tau_m \frac{dv \left( t \right)}{dt} = - \left( v \left( t \right) - v_{rest} \right) + RI \left( t \right)\n\end{equation}\n\n\section{Spike-Timing-Dependent-Plasticity}\n\nSynaptic plasticity is a change in the synaptic processing of signals, so it is a different way of saying 'learning'. Hebbian plasticity is a local form of long term potentiation (LTP) and depression (LTD) of synapses based on the correlation of firing activity between pre- and postsynaptic neurons. 
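\n\nTo make the preceding dynamics concrete, the following minimal Python sketch integrates the membrane equation above with the forward Euler method and applies a pair-based exponential plasticity update of the kind formalised in the next section. All constants, the constant external drive and the synaptic time scale are illustrative assumptions, not values prescribed by the model.\n\begin{verbatim}\nimport numpy as np\n\ndt, T = 0.1, 100.0                     # time step and duration in ms\ntau_m, R, u_rest, theta = 10.0, 1.0, 0.0, 1.0\ntau_s = 5.0                            # synaptic current time constant\ntau_p, tau_n = 20.0, 20.0              # plasticity time constants\nA_p, A_n = 0.01, -0.012                # plasticity amplitudes (A_n < 0)\nw = 0.5                                # weight of a single synapse\npre_spikes = np.arange(5.0, T, 7.0)    # fixed presynaptic spike train\n\nu, t_pre, t_post = u_rest, -np.inf, -np.inf\nfor step in range(int(T / dt)):\n    t = step * dt\n    past = pre_spikes[pre_spikes <= t]\n    i_syn = w * np.sum(np.exp(-(t - past) / tau_s))     # decaying currents\n    u += dt / tau_m * (u_rest - u + R * (1.2 + i_syn))  # Euler step, i_0 = 1.2\n    if np.any(np.isclose(pre_spikes, t, atol=dt / 2)):  # presynaptic spike\n        t_pre = t\n        w += A_n * np.exp(-(t_pre - t_post) / tau_n)    # post before pre\n    if u >= theta:                                      # threshold crossing\n        u, t_post = u_rest, t                           # reset and fire\n        w += A_p * np.exp(-(t_post - t_pre) / tau_p)    # pre before post\nprint('final weight:', round(w, 3))\n\end{verbatim}\n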
Spike-timing-dependent synaptic plasticity (STDP) is a form of Hebbian learning that uses the exact spike timing information. The weight change depends on the time difference\n\n\begin{equation}\n\t\Delta t = t_{post} - t_{pre}\n\end{equation}\n\n\begin{equation}\n\t\Delta w_+ = A_+ e^{- \frac{\Delta t}{\tau_+}} \quad \text{if } \Delta t > 0\n\end{equation}\n\n\begin{equation}\n\t\Delta w_- = A_- e^{\frac{\Delta t}{\tau_-}} \quad \text{if } \Delta t < 0\n\end{equation}\n\n\section{Reward-Modulated Spike-Timing-Dependent-Plasticity}\n", "meta": {"hexsha": "cfd4a3578c1e373f8e592cd5a5a56aff915ebee3", "size": 6141, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapters/snn.tex", "max_stars_repo_name": "Aut0R3V/bachelorthesis", "max_stars_repo_head_hexsha": "69c5fdb9911ad85a84a3561c5e99d7ee06619f7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-07T11:21:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-31T05:41:10.000Z", "max_issues_repo_path": "thesis/chapters/snn.tex", "max_issues_repo_name": "Aut0R3V/bachelorthesis", "max_issues_repo_head_hexsha": "69c5fdb9911ad85a84a3561c5e99d7ee06619f7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/chapters/snn.tex", "max_forks_repo_name": "Aut0R3V/bachelorthesis", "max_forks_repo_head_hexsha": "69c5fdb9911ad85a84a3561c5e99d7ee06619f7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-07T11:56:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-07T11:56:57.000Z", "avg_line_length": 73.1071428571, "max_line_length": 627, "alphanum_fraction": 0.7848884546, "num_tokens": 1515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8670357460591569, "lm_q2_score": 0.6893056167854461, "lm_q1q2_score": 0.5976526097123366}}
{"text": "\\section{Reminder on Abstract Harmonic Analysis}\\label{app:theorem}\n\\label{sec:abstract_harmonic}\n\\subsection{Locally compact Abelian groups}\n\\begin{definition}[\\acf{LCA} group.]\n    A group $\\mathcal{X}$ endowed with a binary operation $\\groupop$ is said to\n    be a Locally Compact Abelian group if $\\mathcal{X}$ is a topological\n    \\emph{commutative} group \\acs{wrt}~$\\groupop$ for which every point has a\n    compact neighborhood and is Hausdorff (T2).\n\\end{definition}\nMoreover given a element $z$ of a \\ac{LCA} group $\\mathcal{X}$, we define the\nset $z\\groupop\\mathcal{X}=\\mathcal{X}\\groupop z=\\Set{z\\groupop x|\\forall\nx\\in\\mathcal{X}}$ and the set $\\mathcal{X}^{-1}=\\Set{x^{-1}|\\forall\nx\\in\\mathcal{X}}$.  We also note $e$ the neutral element of $\\mathcal{X}$ such\nthat $x\\groupop e=e \\groupop x= e$ for all $x\\in\\mathcal{X}$.  Throughout this\npaper we focus on positive definite function. Let $\\mathcal{Y}$ be a complex\nseparable Hilbert space. A function $f:\\mathcal{X}\\to\\mathcal{Y}$ is positive\ndefinite if for all $N\\in\\mathbb{N}$ and all $y\\in\\mathcal{Y}$,\n\\begin{dmath}\n    \\label{eq:positive_definite} \\sum_{i,j=1}^N\\inner*{y_i,\n    f\\left(x_j^{-1}\\groupop x_i\\right)y_j}_{\\mathcal{Y}}\\ge 0\n\\end{dmath}\nfor all sequences $(y_i)_{i\\in\\mathbb{N}_N^*}\\in\\mathcal{Y}^N$ and all sequences\n$(x_i)_{i\\in\\mathbb{N}_N^*}\\in\\mathcal{X}^N$. If $\\mathcal{Y}$ is real we add\nthe assumption that $f(x^{-1})=f(x)^*$ for all $x\\in\\mathcal{X}$\n\n\\subsection{Even and odd functions}\nLet $\\mathcal{X}$ be a \\ac{LCA} group and $\\mathbb{K}$ be a field viewed as an\nadditive group. We say that a function $f:\\mathcal{X}\\to\\mathbb{K}$ is even if\nfor all $x\\in\\mathcal{X}$, $f(x)=f\\left(\\inv{x}\\right)$ and odd if\n$f(x)=-f\\left(\\inv{x}\\right)$. The definition can be extended to\noperator-valued functions.\n\\begin{definition}[Even and odd operator-valued function on a \\ac{LCA} group]\n    Let $\\mathcal{X}$ be a measured \\ac{LCA} group and $\\mathcal{Y}$ be a\n    Hilbert space, and $\\mathcal{L}(\\mathcal{Y})$ the space of bounded linear\n    operators from $\\mathcal{Y}$ to itself viewed as an additive group. A\n    function $f:\\mathcal{X}\\to\\mathcal{L}(\\mathcal{Y})$ is (weakly) even if for\n    all $x\\in\\mathcal{X}$ and all $y$, $y'\\in\\mathcal{Y}$,\n    $\\inner{y,f\\left(\\inv{x}\\right)y'}_{\\mathcal{Y}} =\n    \\inner{y,f(x)y'}_{\\mathcal{Y}}$ and (weakly) odd if\n    $\\inner{y,f\\left(\\inv{x}\\right)y'}_{\\mathcal{Y}} =\n    -\\inner{y,f(x)y'}_{\\mathcal{Y}}$.\n\\end{definition}\nIt is easy to check that if $f$ is odd then\n$\\int_{\\mathcal{X}}\\inner{y,f(x)y'}_{\\mathcal{Y}}d\\Haar(x)=0$.  Besides the\nproduct of an even and an odd function is odd. Indeed for all $f$,\n$g\\in\\mathcal{F}(\\mathcal{X};\\mathcal{L}(\\mathcal{Y}))$, where $f$ is even and\n$g$ odd. Define $h(x)=\\inner{y,f(x)g(x)y'}$. Then we have\n$h\\left(\\inv{x}\\right) = \\inner{y, f\\left(\\inv{x}\\right)\ng\\left(\\inv{x}\\right)y'}_{\\mathcal{Y}}\n\\hiderel{=}\\inner{y,f(x)\\left(-g(x)\\right)y'}_{\\mathcal{Y}} =-h(x)$.\n\\subsection{Characters}\n\\label{subsec:character} \\acs{LCA} groups are central to the general definition\nof Fourier Transform which is related to the concept of Pontryagin\nduality~\\citep{folland1994course}.  Let $(\\mathcal{X}, \\groupop)$ be a \\ac{LCA}\ngroup with $e$ its neutral element and the notation, $\\inv{x}$, for the inverse\nof $x \\in \\mathcal{X}$. 
A \emph{character} is a complex continuous homomorphism\n$\omega:\mathcal{X}\to\mathbb{U}$ from $\mathcal{X}$ to the set of complex\nnumbers of unit modulus $\mathbb{U}$. The set of all characters of $\mathcal{X}$\nforms the Pontryagin \emph{dual group} $\dual{\mathcal{X}}$. The dual group of\nan \ac{LCA} group is an \ac{LCA} group so that we can endow\n$\dual{\mathcal{X}}$ with a \say{dual} Haar measure denoted $\dual{\Haar}$. Then\nthe dual group operation is defined by $(\omega_1 \groupop'\n\omega_2)(x)=\omega_1(x)\omega_2(x) \hiderel{\in} \mathbb{U}$.  The Pontryagin\nduality theorem states that $\dual{\dual{\mathcal{X}}}\cong \mathcal{X}$,\n\acs{ie}~there is a canonical isomorphism between any \ac{LCA} group and its\ndouble dual. To emphasize this duality the following notation is usually\nadopted: $\label{eq:paringdef} \omega(x) = \pairing{x, \omega} \hiderel{=}\n\pairing{\omega, x} \hiderel{=} x(\omega)$, where\n$x\in\mathcal{X}\cong\dual{\dual{\mathcal{X}}}$ and\n$\omega\in\dual{\mathcal{X}}$. The form $\pairing{\cdot,\cdot}$ defined in\n\cref{eq:paringdef} is called the (duality) pairing. Another important property\ninvolves the complex conjugate of the pairing which is defined as\n$\conj{\pairing{x, \omega}} = \pairing*{\inv{x}, \omega} \hiderel{=}\n\pairing*{x, \inv{\omega}}$.\n\begin{table}[t!]\n    \caption{Classification of \acl{FT}s in terms of their domain and transform\n    domain.}\n    \label{tab:dual_and_pairing}\n    \centering\n    \begin{tabularx}{\textwidth}{cccX}\n        \toprule\n            \multicolumn{1}{c}{$\mathcal{X}=$} &\n            \multicolumn{1}{c}{$\dual{\mathcal{X}}\cong$} &\n            \multicolumn{1}{c}{Operation} & \multicolumn{1}{l}{Pairing} \\\n        \cmidrule{1-4}\n            $\mathbb{R}^d$ & $\mathbb{R}^d$ & $+$ & $\pairing{x,\omega} =\n            \exp\left(\iu \inner{x, \omega}_2\right)$ \\ $\mathbb{R}^d_{*,+}$ &\n            $\mathbb{R}^d$ & $\cdot$ & $\pairing{x,\omega} =\exp\left( \iu\n            \inner{\log(x), \omega}_2 \right)$ \\ $(-c;+\infty)^d$ &\n            $\mathbb{R}^d$ & $\odot$ & $\pairing{x,\omega} =\exp\left( \iu\n            \inner{\log(x+c), \omega}_2 \right)$ \\\n        \bottomrule\n    \end{tabularx}\n\end{table}\nWe notice that for any pairing depending on $\omega$, there exists a function\n$h_{\omega}: \mathcal{X} \to \mathbb{R}$ such that $\pairing{x,\omega}= \exp(\iu\nh_{\omega}(x))$ since any pairing maps into $\mathbb{U}$. Moreover,\n$\pairing*{x \groupop \inv{z},\omega} = \omega(x)\omega\left(\inv{z}\right)\n\hiderel{=}\exp\left(+\iu h_{\omega}\left(x\right)\right)\exp\left(+\iu\nh_{\omega}\left(\inv{z}\right)\right) =\exp\left(+\iu\nh_{\omega}\left(x\right)\right)\exp\left(-\iu h_{\omega}\left(z\right)\right)$.\n\Cref{tab:dual_and_pairing} provides an explicit list of pairings for various\ngroups based on $\mathbb{R}^d$ or its subsets. 
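As a quick sanity check of the multiplicative row of \Cref{tab:dual_and_pairing} (an illustration we add here), one can verify directly that each $\omega$ defines a character: for all $x$, $z\in\mathbb{R}^d_{*,+}$,\n\begin{dmath*}\n    \pairing{x \cdot z, \omega} = \exp\left(\iu \inner{\log(x) + \log(z),\n    \omega}_2\right) \hiderel{=} \pairing{x, \omega}\pairing{z, \omega},\n\end{dmath*}\nso the pairing is indeed a continuous homomorphism into $\mathbb{U}$, here with $h_\omega(x)=\inner{\log(x),\omega}_2$.\n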
The interested reader can refer\nto~\citet{folland1994course} for a more detailed construction of \ac{LCA} groups,\nPontryagin duality and \acl{FT}s on \ac{LCA} groups.\n\n\subsection[The Fourier Transform]{The \acl{FT}}\nFor a function with values in a separable Hilbert space, $f\in\nL^1(\mathcal{X},\Haar;\mathcal{Y})$, we denote by $\FT{f}$ its \acf{FT}, which is\ndefined by\n\begin{dmath*}\n        \forall \omega \in \dual{\mathcal{X}},\enskip \FT{f}(\omega)\n        \hiderel{=}\int_{\mathcal{X}} \conj{\pairing{x,\omega}}f(x)d\Haar(x).\n\end{dmath*}\nThe \acf{IFT} of a function $g\in L^1(\dual{\mathcal{X}},\dual{\Haar};\n\mathcal{Y})$ is denoted $\IFT{g}$ and defined by $\forall x \in \mathcal{X},\enskip\n\IFT{g}(x) \hiderel{=}\int_{\dual{\mathcal{X}}} \pairing{x,\omega}g(\omega)\nd\dual{\Haar}(\omega)$. We also define the flip operator $\mathcal{R}$ by\n$(\mathcal{R}f)(x) \colonequals f\left(\inv{x}\right)$.\n\begin{theorem}[Fourier inversion]\n    \label{th:fourier_inversion} Given a measure $\Haar$ defined on\n    $\mathcal{X}$, there exists a unique suitably normalized dual measure\n    $\dual{\Haar}$ on $\dual{\mathcal{X}}$ such that for all $f \in\n    L^1(\mathcal{X}, \Haar;\mathcal{Y})$ and if $\FT{f} \in\n    L^1(\dual{\mathcal{X}}, \dual{\Haar}; \mathcal{Y})$ we have\n    \begin{dmath}\n        \label{fourier-l1} f(x) \hiderel{=} \int_{\dual{\mathcal{X}}}\n        \pairing{x, \omega} \FT{f}(\omega) d\dual{\Haar}(\omega) \condition{for\n        $\Haar$-almost all $x\in \mathcal{X}$.}\n    \end{dmath}\n    \acs{ie}~such that\n    $(\mathcal{R}\mathcal{F}\FT{f})(x)=\mathcal{F}^{-1}\FT{f}(x)=f(x)$ for\n    $\Haar$-almost all $x\in\mathcal{X}$. If $f$ is continuous this relation\n    holds for all $x\in\mathcal{X}$.\n\end{theorem}\nThus when a Haar measure $\Haar$ on $\mathcal{X}$ is given, the measure on\n$\dual{\mathcal{X}}$ that makes \cref{th:fourier_inversion} true is called the\ndual measure of $\Haar$, denoted $\dual{\Haar}$. Let $c\in\mathbb{R}_*$. If\n$c\Haar$ is the measure on $\mathcal{X}$, then $c^{-1}\dual{\Haar}$ is the dual\nmeasure on $\dual{\mathcal{X}}$. Hence one must replace $\dual{\Haar}$ by\n$c^{-1}\dual{\Haar}$ in the inversion formula to compensate. Whenever\n$\dual{\Haar}=\Haar$ we say that the Haar measure is self-dual. For the\nfamiliar case of a scalar-valued function $f$ on the \ac{LCA} group\n$(\mathbb{R}^d, +)$, we have for all $\omega\in\n\dual{\mathcal{X}}=\mathbb{R}^d$\n\begin{dmath}\n    \label{fourier-R-plus}\n    \FT{f}(\omega)\n    =\int_{\mathcal{X}} \conj{\pairing{x,\omega}}f(x)d\Haar(x)\n    \hiderel{=}\int_{\mathbb{R}^d} \exp(-\iu \inner{x,\omega}_2)f(x) d\Leb(x),\n\end{dmath}\nthe Haar measure being here the Lebesgue measure. Notice that the normalization\nfactor of $\dual{\Haar}$ on $\dual{\mathcal{X}}$ depends on the measure $\Haar$\non $\mathcal{X}$ \emph{and} the duality pairing. For instance let\n$\mathcal{X}=(\mathbb{R}^d, +)$. If one endows $\mathcal{X}$ with the\nLebesgue measure as the Haar measure, the Haar measure on the dual is defined\nfor all $\mathcal{Z}\in\mathcal{B}(\mathbb{R}^d)$ by\n\begin{dmath*}\n    \Haar(\mathcal{Z})\hiderel{=}\Leb(\mathcal{Z}),\n    \quad\text{and}\quad\n    \dual{\Haar}(\mathcal{Z})\hiderel{=}\frac{1}{(2\pi)^d}\Leb(\mathcal{Z}),\n\end{dmath*}\nin order to have $\mathcal{F}^{-1}\FT{f}=f$. 
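As an illustration of this normalization (a standard one-dimensional check added for concreteness), take $f(x)=\exp(-x^2/2)$ on $(\mathbb{R}, +)$ with $\Haar=\Leb$, so that $\FT{f}(\omega)=\sqrt{2\pi}\exp(-\omega^2/2)$. Then\n\begin{dmath*}\n    \int_{\mathbb{R}} \pairing{x,\omega}\FT{f}(\omega) d\dual{\Haar}(\omega)\n    \hiderel{=} \frac{1}{2\pi}\int_{\mathbb{R}} \exp(\iu x\omega)\sqrt{2\pi}\n    \exp\left(-\frac{\omega^2}{2}\right) d\Leb(\omega)\n    \hiderel{=} \exp\left(-\frac{x^2}{2}\right) = f(x),\n\end{dmath*}\nso the factor $(2\pi)^{-d}$ (here $d=1$) in $\dual{\Haar}$ is exactly the normalization required by the inversion formula.\n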
If one uses the cleaner equivalent\npairing $\pairing{x,\omega}=\exp(2\iu\pi \inner{x, \omega}_2)$ rather than\n$\pairing{x,\omega}=\exp(\iu \inner{x,\omega}_2)$, then\n$\dual{\Haar}(\mathcal{Z})=\Leb(\mathcal{Z})$.  The pairing\n$\pairing{x,\omega}=\exp(2\iu\pi \inner{x,\omega}_2)$ looks more attractive in\ntheory since it limits the messy factor outside the integral sign and makes the\nHaar measure self-dual. However it is of lesser use in practice since it yields\nadditional unnecessary computations when evaluating the pairing.  Hence, for\nsymmetry reasons on $(\mathbb{R}^d, +)$ and to reduce computations, we settle on\nthe Haar measure on $\mathbb{R}^d$ groups (additive and multiplicative) defined\nas $\dual{\Haar}(\mathcal{Z}) = \Haar(\mathcal{Z}) \hiderel{=}\n{\sqrt{2\pi}}^{-d} \Leb(\mathcal{Z})$.  We conclude this subsection by recalling\nthe injectivity property of the \acl{FT}.\n\begin{corollary}[\acl{FT} injectivity]\n    Given $\mu$ and $\nu$ two measures, if $\FT{\mu}=\FT{\nu}$ then $\mu=\nu$.\n    Moreover given two functions $f$ and $g\in\n    L^1(\mathcal{X},\Haar;\mathcal{Y})$, if $\FT{f}=\FT{g}$ then $f=g$.\n\end{corollary}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\section{Proofs}\nIn this section we give the proofs of our contributions stated in the main body\nof the paper.\n\subsection{Construction}\n\subsubsection{Proof of \texorpdfstring{\cref{lm:C_characterization}}{Lemma %\n\ref{lm:C_characterization}}}\n\begin{proof}\n    For any function $f$ on $(\mathcal{X},\groupop)$ define the flip operator\n    $\mathcal{R}$ by $(\mathcal{R}f)(x) \colonequals f\left(\inv{x}\right)$.\n    For any shift invariant $\mathcal{Y}$-Mercer kernel and for all\n    $\delta\in\mathcal{X}$,\n    $K_e(\delta)=K_e\left(\inv{\delta}\right)^\adjoint$. Indeed, from the\n    definition of a shift-invariant kernel,\n    $K_e\left(\inv{\delta}\right)=K\left(\inv{\delta},e\right)\n    \hiderel{=}K\left(e,\delta\right)\n    \hiderel{=}K\left(\delta,e\right)^\adjoint\n    \hiderel{=}K_e\left(\delta\right)^\adjoint$.\n    \paragraph{}\n    Item 1: taking the \acl{FT} yields\n    $\inner{y',C(\omega)y}_{\mathcal{Y}}=\IFT{\inner{y',\n    K_e(\cdot)y}_{\mathcal{Y}}}(\omega)\n        %=\IFT{\inner{y', (\mathcal{R}K_e(\cdot))^\adjoint\n        %y}_{\mathcal{Y}}}(\omega)\n        %=\IFT{\inner{\mathcal{R}K_e(\cdot)y', y}_{\mathcal{Y}}}(\omega)\n        %=\IFT{\mathcal{R}\inner{K_e(\cdot)y', y}_{\mathcal{Y}}}(\omega)\n    \hiderel{=}\mathcal{R}\IFT{\inner{K_e(\cdot)y', y}_{\mathcal{Y}}}(\omega)\n    \hiderel{=}\mathcal{R}\inner{C(\cdot)y',y}_{\mathcal{Y}}(\omega)\n    =\inner*{y',C\left(\inv{\omega}\right)^\adjoint y}_{\mathcal{Y}}$.  Hence\n    $C(\omega)=C\left(\inv{\omega}\right)^\adjoint$. Suppose that $\mathcal{Y}$\n    is a complex Hilbert space. For all $\omega\in\dual{\mathcal{X}}$,\n    $C(\omega)$ is bounded and non-negative, so $C(\omega)$ is self-adjoint.\n    Besides we have $C(\omega)=C\left(\inv{\omega}\right)^\adjoint $ so $C$\n    must be even.  Suppose that $\mathcal{Y}$ is a real Hilbert space. The\n    \acl{FT} of a real-valued function obeys\n    $\FT{f}(\omega)=\conj{\FT{f}\left(\inv{\omega}\right)}$. 
Therefore, since\n    $C(\omega)$ is non-negative for all $\omega\in\dual{\mathcal{X}}$,\n    $\inner{y', C(\omega)y} = \conj{\inner{y', C\left(\inv{\omega}\right)y}}\n    \hiderel{=}\inner{y, C\left(\inv{\omega}\right)^* y'} \hiderel{=}\inner{y,\n    C\left(\omega\right) y'}$.  Hence $C(\omega)$ is self-adjoint and thus $C$\n    is even.\n    \paragraph{}\n    Item 2: simply, for all $y$, $y'\in\mathcal{Y}$, $\inner{y,\n    C(\inv{\omega})y'}$ $=$ $\inner{y', C(\omega)y}$ thus $\IFT{\inner{y',\n    K_e(\cdot)y}_{\mathcal{Y}}}(\omega)=\inner{y',\n    C(\omega)y}\hiderel{=}\mathcal{R}\inner{y',\n    C(\cdot)y}(\omega)=\mathcal{R}\IFT{\inner{y',\n    K_e(\cdot)y}_{\mathcal{Y}}}(\omega) \hiderel{=} \FT{\inner{y',\n    K_e(\cdot)y}_{\mathcal{Y}}}(\omega)$.\n    \paragraph{}\n    Item 3: from Item 2 we have\n    $\IFT{\inner{y', K_e(\cdot)y}}$ $=$ $\mathcal{F}^{-1}\mathcal{R}{\inner{y',\n    K_e(\cdot)y}}$. By injectivity of the \acl{FT}, $K_e$ is even. Since\n    $K_e(\delta)=K_e(\inv{\delta})^\adjoint $, we must have\n    $K_e(\delta)=K_e(\delta)^\adjoint $.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{pr:spectral}}{Proposition %\n\ref{pr:spectral}}}\n\begin{proof}\n    This is a simple consequence of \cref{pr:inverse_ovk_Fourier_decomposition}\n    and \cref{lm:C_characterization}. By taking $\inner{y',C(\omega)y} =\n    \IFT{\inner{y', K_e(\cdot)y}}(\omega)=\FT{\inner{y', K_e(\cdot)y}}(\omega)$\n    we can write the following equality concerning the \acs{OVK} signature\n    $K_e$:\n    % Suppose that $\mu$ is absolutely continuous \acs{wrt}~$d\omega$. Then for\n    % all $\delta \in \mathcal{X}$ and for all $y,$ $y'$ in $\mathcal{Y}$\n    $\inner{y', K_e(\delta)y}=\n    \int_{\dual{\mathcal{X}}}\conj{\pairing{\delta, \omega}}\inner{y',\n    C(\omega)y}d\dual{\Haar}(\omega)\n    \hiderel{=}\int_{\dual{\mathcal{X}}}\conj{\pairing{\delta,\n    \omega}}\inner*{y',\n    \frac{1}{\rho(\omega)}C(\omega)y}\rho(\omega)d\dual{\Haar}(\omega)$.  It is\n    always possible to choose $\rho(\omega)$ such that\n    $\int_{\dual{\mathcal{X}}}\rho(\omega)d\dual{\Haar}(\omega)=1$. For\n    instance, choose\n    \begin{dmath*}\n        \rho(\omega)=\n        \frac{\norm{C(\omega)}_{\mathcal{Y},\n        \mathcal{Y}}}{\int_{\dual{\mathcal{X}}} \norm{C(\omega)}_{\mathcal{Y},\n        \mathcal{Y}} d\dual{\Haar}(\omega)}\n    \end{dmath*}\n    Since for all $y$, $y'\in\mathcal{Y}$, $\inner{y',C(\cdot)y}\in\n    L^1(\dual{\mathcal{X}},\dual{\Haar})$ and $\mathcal{Y}$ is a separable\n    Hilbert space, by the Pettis measurability theorem, $\int_{\dual{\mathcal{X}}}\n    \norm{C(\omega)}_{\mathcal{Y},\mathcal{Y}} d\dual{\Haar}(\omega)$ is finite\n    and so is $\norm{C(\omega)}_{\mathcal{Y},\mathcal{Y}}$ for all\n    $\omega\in\dual{\mathcal{X}}$.  
Therefore $\rho(\omega)$ is the density of\n    a probability measure $\probability_{\dual{\Haar},\rho}$, \acs{ie}~we conclude\n    by taking $\probability_{\dual{\Haar},\rho}(\mathcal{Z}) =\n    \int_{\mathcal{Z}}\rho(\omega)d\dual{\Haar}(\omega)$, for all\n    $\mathcal{Z}\in\mathcal{B}(\dual{\mathcal{X}})$.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{cr:ORFF-kernel}}{Proposition %\n\ref{cr:ORFF-kernel}}}\n\begin{proof}\n    %Let us first notice that for a given $D$, $\tilde{K}$ satisfies the\n    %properties of a shift-invariant $\mathcal{Y}$-Mercer kernel.  Second, F\n    Suppose that for all $y$, $y'\in\mathcal{Y}$, $\inner{y',\n    A(\omega)y}\rho(\omega)=\IFT{\inner{y', K_e(\cdot)y}}(\omega)$ where $\rho$\n    is a probability density (see \cref{pr:spectral}). From the strong law\n    of large numbers $\frac{1}{D} \sum_{j=1}^D\n    \conj{\pairing{x\groupop\inv{z},\omega_j}} A(\omega_j)\n    \converges{\acs{asurely}}{D \to \infty} \expectation_{\dual{\Haar},\n    \rho}[\conj{\pairing{x \groupop z^{-1}, \omega_j} }A(\omega)]$ where the\n    integral converges in the weak operator topology. Then by\n    \cref{pr:spectral} we recover $K_e$ when $D\to\infty$ since\n    $\expectation_{\dual{\Haar}, \rho}[\conj{\pairing{x \groupop z^{-1},\n    \omega_j}}A(\omega)] = K_e(x\groupop\inv{z})$.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{cr:ORFF-map-kernel}}{%\nProposition~\ref{cr:ORFF-map-kernel}}}\n\begin{proof}\n    Let $(\omega_j)_{j=1}^D$ be a sequence of $D\in\mathbb{N}^*$\n    \ac{iid}~random variables following the law\n    $\probability_{\dual{\Haar},\rho}$. For all $x$, $z \in \mathcal{X}$ and\n    all $y$, $y' \in \mathcal{Y}$,\n    \begin{dmath*}\n        \inner*{\tildePhi{\omega}(x)y,\tildePhi{\omega}(z)y'}_{\Vect_{j=1}^D\n        \mathcal{Y}'}\n        =\frac{1}{D}\inner*{\Vect_{j=1}^D \left(\pairing{x,\n        \omega_j}B(\omega_j)^\adjoint y\right), \Vect_{j=1}^D\left(\pairing{z,\n        \omega_j}B(\omega_j)^\adjoint y'\right)}\n    \end{dmath*}\n    By definition of the inner product in a direct sum of Hilbert spaces,\n    \begin{dmath*}\n        \frac{1}{D}\inner*{\Vect_{j=1}^D \left(\pairing{x,\n        \omega_j}B(\omega_j)^\adjoint y\right), \Vect_{j=1}^D\left(\pairing{z,\n        \omega_j}B(\omega_j)^\adjoint y'\right)}\n        = \frac{1}{D} \sum_{j=1}^D \inner*{y, \conj{\pairing{x,\n        \omega_j}}B(\omega_j)\pairing{z, \omega_j}B(\omega_j)^\adjoint\n        y'}_{\mathcal{Y}}\n        \hiderel{=} \inner*{y, \left(\frac{1}{D} \sum_{j=1}^D\n        \conj{\pairing{x\groupop \inv{z},\n        \omega_j}}A(\omega_j)\right)y'}_{\mathcal{Y}},\n    \end{dmath*}\n    %With similar reasoning about plug-in Monte-Carlo estimator, we get the\n    %proof.\n    Finally, apply \cref{cr:ORFF-kernel} to obtain the convergence of the\n    Monte-Carlo plug-in estimator to the true kernel $K$.\n\end{proof}\n\subsubsection{Proof of~%\n\texorpdfstring{\cref{pr:fourier_feature_map}}{Proposition~%\n\ref{pr:fourier_feature_map}}}\n\begin{proof}\n    For all $y$, $y'\in \mathcal{Y}$ and $x$, $z\in\mathcal{X}$,\n    \begin{dmath*}\n        \inner{y, \Phi_x^\adjoint \Phi_z y'}_{\mathcal{Y}} \hiderel{=}\n        \inner{\Phi_x y, \Phi_z\n        y'}_{L^2(\dual{\mathcal{X}},\dual{\mu};\mathcal{Y}')}\n        \hiderel{=} 
\int_{\dual{\mathcal{X}}}\conj{\pairing{x,\omega}}\inner{y,\n        B(\omega)\pairing{z,\omega}B(\omega)^\adjoint y'}d\dual{\mu}(\omega)\n        = \int_{\dual{\mathcal{X}}}\conj{\pairing{x \groupop\n        \inv{z},\omega}}\inner{y, B(\omega)B(\omega)^\adjoint\n        y'}d\dual{\mu}(\omega)\n        \hiderel{=} \int_{\dual{\mathcal{X}}}\conj{\pairing{x \groupop\n        \inv{z},\omega}}\inner{y,A(\omega)y'}d\dual{\mu}(\omega),\n    \end{dmath*}\n    which defines a $\mathcal{Y}$-Mercer kernel according to\n    \cref{pr:mercer_kernel_bochner} of~\citet{Carmeli2010}.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{pr:ORFF-map}}{Proposition~%\n\ref{pr:ORFF-map}}}\n\begin{proof}\n    %Let us first notice that for a given $D$, $\tilde{K}$ satisfies the\n    %properties of a shift-invariant $\mathcal{Y}$-Mercer kernel.  Second, F\n    From the strong law of large numbers $\frac{1}{D} \sum_{j=1}^D\n    \conj{\pairing{x\groupop\inv{z},\omega_j}} A(\omega_j)\n    \converges{\acs{asurely}}{D \to \infty} \expectation_{\dual{\Haar},\n    \rho}[\conj{\pairing{x \groupop z^{-1}, \omega_j} }A(\omega)]$ where the\n    integral converges in the weak operator topology. Then by\n    \cref{pr:mercer_kernel_bochner}, $\expectation_{\dual{\Haar},\n    \rho}[\conj{\pairing{x \groupop z^{-1}, \omega_j}}A(\omega)]\n    =K_e(x\groupop\inv{z})$.\n\end{proof}\n\subsubsection{Proof of~%\n\texorpdfstring{\cref{pr:orff_defines_kernel}}{Proposition~%\n\ref{pr:orff_defines_kernel}}}\n\begin{proof}\n    Apply \cref{pr:feature_operator} to $\tildePhi{\omega}$ considering the\n    Hilbert space $\tildeH{\omega}$ to show that $\tildeK{\omega}$ is an\n    \acs{OVK}. Then \cref{pr:kernel_signature} shows that $\tildeK{\omega}$ is\n    shift-invariant since\n    $\tildeK{\omega}(x,z)=\tildeK{\omega}_e\left(x\groupop z^{-1}\right)$.\n    Since $B(\omega)$ is a bounded operator, $\widetilde{K}$ is\n    $\mathcal{Y}$-Mercer because all the functions in the sum are continuous.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{pr:phitilde_phi_rel}}{%\nProposition~\ref{pr:phitilde_phi_rel}}}\n\begin{proof}[of \cref{pr:cv_feature_map_1}]\n    Since $(\omega_j)_{j=1}^D$ are \ac{iid}~random vectors, for all $y\in\n    \mathcal{Y}$ and for all $y'\in\mathcal{Y}'$, $\inner{y, B(\cdot)y'}\in\n    L^2(\dual{\mathcal{X}},\probability_{\dual{\Haar},\rho})$ and $g\in\n    L^2(\dual{\mathcal{X}},\probability_{\dual{\Haar},\rho};\mathcal{Y}')$,\n    \begin{dmath*}\n        (\tildeW{\omega} \theta)(x)=\tildePhi{\omega}(x)^\adjoint\n        \theta\hiderel{=}\frac{1}{D}\sum_{j=1}^D\n        \conj{\pairing{x,\omega_j}}B(\omega_j)g(\omega_j), \qquad \omega_j\n        \hiderel{\sim} \probability_{\dual{\Haar},\rho} \enskip\n        \text{\ac{iid}~} \\\n        \converges{\acs{asurely}}{D\to\infty}\n        \int_{\dual{\mathcal{X}}} \conj{\pairing{x,\omega}} B(\omega) g(\omega)\n        d\probability_{\dual{\Haar}, \rho}(\omega)\n        \hiderel{=} (Wg)(x)\n        \hiderel{\colonequals} \Phi_x^\adjoint g.\n    \end{dmath*}\n    from the strong law of large numbers.\n\end{proof}\n\n\begin{proof}[of \cref{pr:cv_feature_map_2}]\n    Again, since $(\omega_j)_{j=1}^D$ are \ac{iid}~random vectors and $g\in\n    L^2(\dual{\mathcal{X}},\probability_{\dual{\Haar},\rho};\mathcal{Y}')$,\n    \begin{dmath*}\n        
\norm{\theta}^2_{\tildeH{\omega}}\n        = \frac{1}{D}\sum_{j=1}^D\norm{g(\omega_j)}^2_{\mathcal{Y}'},\n        \qquad \omega_j \hiderel{\sim} \probability_{\dual{\Haar}, \rho}\n        \enskip \text{\ac{iid}~} \\ \converges{\acs{asurely}}{D\to\infty}\n        \int_{\dual{\mathcal{X}}}\n        \norm{g(\omega)}_{\mathcal{Y}'}^2 d\probability_{\dual{\Haar},\n        \rho}(\omega)\n        \hiderel{=} \norm{g}_{L^2\left(\dual{\mathcal{X}},\n        \probability_{\dual{\Haar},\rho};\n        \mathcal{Y}'\right)}^2.\n    \end{dmath*}\n    from the strong law of large numbers.\n\end{proof}\n\subsubsection{Proof of \texorpdfstring{\cref{pr:fourier_reg_ovk}}{%\nProposition~\ref{pr:fourier_reg_ovk}}}\n\begin{proof}\n    We first show how the \acl{FT} relates to the feature operator. Since\n    $\mathcal{H}_K$ is embedded into $\mathcal{H}=L^2(\dual{\mathcal{X}},\n    \probability_{\dual{\Haar},\rho}; \mathcal{Y}')$ by means of the feature\n    operator $W$, we have for all $f\in\mathcal{H}_K$, for all\n    $g\in\mathcal{H}$ and for all $x\in\mathcal{X}$\n    \begin{dgroup*}\n        \begin{dmath*}\n            \FT{\IFT{f}}(x)\n            \hiderel{=} \int_{\dual{\mathcal{X}}} \conj{\pairing{x, \omega}}\n            \IFT{f}(\omega) d\dual{\Haar}(\omega)\n            = f(x)\n        \end{dmath*}\n        \begin{dmath*}\n            (Wg)(x)\n            \hiderel{=}\int_{\dual{\mathcal{X}}} \conj{\pairing{x,\n            \omega}}\rho(\omega) B(\omega) g(\omega) d\dual{\Haar}(\omega)\n            = f(x).\n        \end{dmath*}\n    \end{dgroup*}\n    By injectivity of the \acl{FT},\n    $\IFT{f}(\omega)=\rho(\omega)B(\omega)g(\omega)$. From\n    \cref{pr:feature_operator} we have\n    \begin{dmath*}\n        \norm{f}^2_{K} = \inf \Set{\norm{g}^2_{\mathcal{H}} | \forall\n        g\hiderel{\in}\mathcal{H}, \enskip Wg\hiderel{=}f}\n        = \inf\Set{\int_{\dual{\mathcal{X}}}\n        \norm{g(\omega)}^2_{\mathcal{Y}'}d\probability_{\dual{\Haar},\rho}\n        (\omega) | \forall g\hiderel{\in}\mathcal{H},\enskip \IFT{f}\n        \hiderel{=}\rho(\cdot)B(\cdot)g(\cdot)}.\n    \end{dmath*}\n    Applying the pseudo-inverse of the operator $B(\omega)$ -- denoted $B(\omega)^\dagger$\n    -- yields the unique solution of the system\n    $\IFT{f}(\omega)=\rho(\omega)B(\omega)g(\omega)$ \acs{wrt}~$g(\omega)$ with\n    minimal norm\footnote{Note that since $B(\omega)$ is bounded the pseudo\n    inverse of $B(\omega)$ is well defined for $\dual{\Haar}$-almost all\n    $\omega$.}. 
Finally, $\norm{f}^2_K = \int_{\dual{\mathcal{X}}}\n    \frac{\norm{B(\omega)^\dagger\n    \IFT{f}(\omega)}_{\mathcal{Y}}^2}{\rho(\omega)^2}\n    d\probability_{\dual{\Haar}, \rho}(\omega)$. Using the fact that\n    $\IFT{\cdot}=\mathcal{F}\mathcal{R}[\cdot]$ and\n    $\mathcal{F}^2[\cdot]=\mathcal{R}[\cdot]$,\n    \begin{dmath*}\n        \norm{f}^2_K= \displaystyle\int_{\dual{\mathcal{X}}}\n        \frac{\norm{\mathcal{R} \left[B(\cdot)^\dagger\n        \rho(\cdot)\right](\omega)\n        \FT{f}(\omega)}^2_{\mathcal{Y}}}{\rho(\omega)^2} d\dual{\Haar}(\omega)\n        %= \displaystyle\int_{\dual{\mathcal{X}}}\n        %\frac{\norm{B(\omega)^\dagger \rho(\omega)\n        %\FT{f}(\omega)}^2_{\mathcal{Y}}}{\rho(\omega)^2} d\dual{\Haar}(\omega)\n        = \displaystyle\int_{\dual{\mathcal{X}}}\n        \frac{\inner{B(\omega)^\dagger \FT{f}(\omega),\n        B(\omega)^\dagger\FT{f}(\omega)}_{\mathcal{Y}}}{\rho(\omega)}\n        d\dual{\Haar}(\omega)\n        = \displaystyle\int_{\dual{\mathcal{X}}}\n        \frac{\inner{\FT{f}(\omega),\n        A(\omega)^\dagger\FT{f}(\omega)}_{\mathcal{Y}}}{\rho(\omega)}\n        d\dual{\Haar}(\omega).\n    \end{dmath*}\n\end{proof}\n\subsection{Convergence with high probability of the ORFF estimator}\n\label{subsec:concentration_proof}\nWe recall the notations $\delta=x\groupop z^{-1}$, for all $x$,\n$z\in\mathcal{X}$,\n$\tilde{K} (x,z) = {\tildePhi{\omega} (x)}^\adjoint \tildePhi{\omega} (z)$,\n$\tilde{K}^j (x,z) = {\Phi_x (\omega_j)}^\adjoint \Phi_z (\omega_j)$, where\n$\omega_j\sim\probability_{\dual{\Haar,\rho}}$ and $K_e (\delta)=K(x,z)$ and\n$\tilde{K}_e(\delta)=\tilde{K}(x,z)$. For the sake of readability, we use\nthroughout the proof the quantities: $F (\delta)\colonequals\tilde{K} (x,z) - K\n(x,z)$ and $F^j (\delta)\colonequals\frac{1}{D} \left(\tilde{K}^j (x,z) - K\n(x,z)\right)$.  We also view $\mathcal{X}$ as a metric space endowed with the\ndistance $d_{\mathcal{X}}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}_+$.\nCompared to the scalar case, the proof follows the same scheme as the one\ndescribed in \citep{Rahimi2007, sutherland2015}, but we consider an operator\nnorm as the measure of the error, and therefore concentration inequalities dealing\nwith this operator norm.  The main feature of \cref{pr:bound_approx_unbounded}\nis that it covers the case of bounded \acs{ORFF} as well as unbounded\n\acs{ORFF}. In the case of bounded \acs{ORFF}, a Bernstein inequality for\nmatrix concentration such as the one proved in \citet[Corollary\n5.2]{Mackey2014} or the formulation of \citet{Tropp} recalled in\n\citet{koltchinskii2013remark}~is suitable. However some kernels, like the curl-free\nand divergence-free kernels, do not have an obviously bounded\n$\norm{F^j}_{\mathcal{Y},\mathcal{Y}}$ but exhibit $F^j$ with subexponential\ntails. Therefore, we use an operator Bernstein concentration inequality adapted\nfor random matrices with subexponential norms.\n\subsubsection{Epsilon-net}\n\label{subsec:epsilon-net}\nLet $\mathcal{C}\subseteq\mathcal{X}$ be a compact subset of $\mathcal{X}$. Let\n$\mathcal{D}_{\mathcal{C}} = \Set{x\groupop z^{-1} | x, z\in\mathcal{C} }$ with\ndiameter at most $2\abs{\mathcal{C}}$ where $\abs{\mathcal{C}}$ is the diameter\nof $\mathcal{C}$. Since $\mathcal{C}$ is assumed to be compact, so is\n$\mathcal{D}_{\mathcal{C}}$. 
Since $\mathcal{D}_{\mathcal{C}}$ is a compact metric\nspace, it is totally bounded. Thus it\nis possible to find a finite $\epsilon$-net covering\n$\mathcal{D}_{\mathcal{C}}$. We call $T=\mathcal{N}(\mathcal{D}_{\mathcal{C}},\nr)$ the number of closed balls of radius $r$ required to cover\n$\mathcal{D}_{\mathcal{C}}$. For instance if $\mathcal{D}_{\mathcal{C}}$ is a\nsubset of a $d$-dimensional Banach space with diameter at most\n$2\abs{\mathcal{C}}$ it is possible to cover the space with at most\n$T={(4\abs{\mathcal{C}}/r)}^d$ balls of radius $r$\n(see \citet[proposition 5]{cucker2001mathematical}).  Let us call\n$\delta_i$, $i=1,\ldots,T$, the center of the $i$-th ball, also called an anchor of\nthe $\epsilon$-net. Denote by $L_{F}$ the Lipschitz constant of $F$. Let\n$\norm{\cdot}_{\mathcal{Y},\mathcal{Y}}$ be the operator norm on\n$\mathcal{L}(\mathcal{Y})$ (largest eigenvalue). We introduce the following\ntechnical lemma.\n\begin{lemma}\label{lm:error_decomposition}\n    $\forall \delta \in \mathcal{D}_{\mathcal{C}}$, if\n    \begin{dmath}\n        L_{F}\le\frac{\epsilon}{2r}\label{condition1}\n    \end{dmath}\n    and\n    \begin{dmath}\n        \norm{F (\delta_i)}_{\mathcal{Y},\mathcal{Y}}\n        \le\frac{\epsilon}{2}\condition{for all\n        $i\in\mathbb{N}^*_T$}\label{condition2}\n    \end{dmath}\n    then $\norm{F(\delta)}_{\mathcal{Y},\mathcal{Y}} \leq \epsilon$.\n\end{lemma}\n\begin{proof}\n    $\norm{F (\delta)}_{\mathcal{Y},\mathcal{Y}} = \norm{F (\delta) -\n    F(\delta_i) + F(\delta_i)}_{\mathcal{Y},\mathcal{Y} }\le \norm{F(\delta) -\n    F(\delta_i)}_{\mathcal{Y},\mathcal{Y}} +\n    \norm{F(\delta_i)}_{\mathcal{Y},\mathcal{Y}}$ for all $i\in\mathbb{N}^*_T$. Using the\n    Lipschitz continuity of $F$ we have $\norm{F(\delta) -\n    F(\delta_i)}_{\mathcal{Y},\mathcal{Y}} \le d_{\mathcal{X}}(\delta,\delta_i)\n    L_{F} \hiderel{\le} rL_{F}$, hence\n    $\norm{F(\delta)}_{\mathcal{Y},\mathcal{Y}} \le rL_{F} +\n    \norm{F(\delta_i)}_{\mathcal{Y},\mathcal{Y}} \hiderel{\le} \frac{r\epsilon}{2\n    r} + \frac{\epsilon}{2} \hiderel{=} \epsilon$.\n\end{proof}\nTo apply the lemma, we must bound the Lipschitz constant of the operator-valued\nfunction $F$ (\cref{condition1}) and\n$\norm{F(\delta_i)}_{\mathcal{Y},\mathcal{Y}}$, for all $i=1, \ldots, T$ as\nwell (\cref{condition2}).\n\subsubsection{Bounding the Lipschitz constant}\nThis proof is a slight generalization of \citet{minh2016operator} to arbitrary\nmetric spaces. It differs from our first approach \citep{brault2016random},\nbased on the proof of \citet{sutherland2015}, which was only valid for a finite\ndimensional input space $\mathcal{X}$ and imposed a twice-differentiability\ncondition on the considered kernel.\n\begin{lemma}\n    \label{lm:LipschitzK}\n    Let $H_\omega \in \mathbb{R}_+$ be the Lipschitz constant of\n    $h_\omega(\cdot)$ and assume that $\int_{\dual{\mathcal{X}}} H_\omega\n    \norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}d\probability_{\dual{\Haar},\n    \rho}(\omega) < \infty$.  
Then the operator-valued function\n    $K_e:\mathcal{X}\to\mathcal{L}(\mathcal{Y})$ is Lipschitz with\n    \begin{dmath}\n        \norm{K_e(x) - K_e(z)}_{\mathcal{Y},\mathcal{Y}}\le\n        d_{\mathcal{X}}(x,z) \int_{\dual{\mathcal{X}}} H_\omega\n        \norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}d\probability_{\dual{\Haar},\n        \rho}(\omega).\n    \end{dmath}\n\end{lemma}\n\begin{proof}\n    We use the fact that the cosine function is Lipschitz with constant $1$ and\n    $h_{\omega}$ Lipschitz with constant $H_\omega$. For all $x$,\n    $z\in\mathcal{X}$ we have\n    \begin{dmath*}\n        \norm{K_e(x) - K_e(z)}_{\mathcal{Y},\mathcal{Y}}\n        = \norm{\int_{\dual{\mathcal{X}}} \left(\cos h_\omega(x) - \cos\n        h_\omega(z)\right)A(\omega)d\probability_{\dual{\Haar},\rho}\n        }_{\mathcal{Y},\mathcal{Y}}\n        \le \int_{\dual{\mathcal{X}}} \abs{\cos h_\omega(x) - \cos\n        h_\omega(z)}\norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}\n        d\probability_{\dual{\Haar},\rho}\n        \le \int_{\dual{\mathcal{X}}} \abs{h_\omega(x) -\n        h_\omega(z)}\norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}\n        d\probability_{\dual{\Haar},\rho}\n        \le d_{\mathcal{X}}(x, z) \int_{\dual{\mathcal{X}}} H_{\omega}\n        \norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}\n        d\probability_{\dual{\Haar},\rho}\n    \end{dmath*}\n\end{proof}\nIn the same way, considering $\tilde{K}_e(\delta)=\frac{1}{D}\sum_{j=1}^D\cos\nh_{\omega_j}(\delta)A(\omega_j)$, where\n$\omega_j\sim\probability_{\dual{\Haar},\rho}$, we can show that $\tilde{K}_e$\nis Lipschitz with $\norm{\tilde{K}_e(x) -\n\tilde{K}_e(z)}_{\mathcal{Y},\mathcal{Y}} \le\nd_{\mathcal{X}}(x,z)\frac{1}{D}\sum_{j=1}^DH_{\omega_j}\n\norm{A(\omega_j)}_{\mathcal{Y},\mathcal{Y}}$.  Combining the Lipschitz\ncontinuity of $\tilde{K}_e$ and $K_e$ (\cref{lm:LipschitzK}) we obtain\n\begin{dmath*}\n    \norm{F(x)-F(z)}_{\mathcal{Y},\mathcal{Y}}\n    = \norm{\tilde{K}_e(x) - K_e(x) - \tilde{K}_e(z) +\n    K_e(z)}_{\mathcal{Y}, \mathcal{Y}}\n    \le \norm{\tilde{K}_e(x) -\n    \tilde{K}_e(z)}_{\mathcal{Y}, \mathcal{Y}} + \norm{K_e(x) -\n    K_e(z)}_{\mathcal{Y}, \mathcal{Y}}\n    \le d_{\mathcal{X}}(x,z)\left(\int_{\dual{\mathcal{X}}} H_\omega\n    \norm{A(\omega)}_{\mathcal{Y}, \mathcal{Y}}\n    d\probability_{\dual{\Haar},\rho} +\n    \frac{1}{D}\sum_{j=1}^DH_{\omega_j}\norm{A(\omega_j)}_{\mathcal{Y},\n    \mathcal{Y}} \right)\n\end{dmath*}\nTaking the expectation yields $\expectation_{\dual{\Haar},\rho}\left[ L_F\n\right] \le 2 \int_{\dual{\mathcal{X}}} H_\omega\n\norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}d\probability_{\dual{\Haar},\rho}\n(\omega)$. Thus by Markov's inequality,\n\begin{dmath}\n    \probability_{\dual{\Haar},\rho}\set{(\omega_j)_{j=1}^D | L_F \ge \epsilon}\n    \le \frac{\expectation_{\dual{\Haar},\rho}\left[ L_F \right]}{\epsilon}\n    \hiderel{\le} \frac{2}{\epsilon} \int_{\dual{\mathcal{X}}} H_\omega\n    \norm{A(\omega)}_{\mathcal{Y},\mathcal{Y}}\n    d\probability_{\dual{\Haar},\rho}.  
\label{eq:Lipschitz_constant}\n\end{dmath}\n\subsubsection[Bounding the error on a given anchor point]{Bounding $F$ on a\ngiven anchor point $\delta_i$}\nTo bound $\norm{F(\delta_i)}_{\mathcal{Y}, \mathcal{Y}}$, a Hoeffding inequality\nfor matrix concentration \citep{Mackey2014}~could be applied. We prefer\nhere to turn to tighter and refined inequalities such as Matrix Bernstein\ninequalities (as \citet{sutherland2015}~also pointed out for the scalar case).\nThe first non-commutative (matrix) concentration inequalities are due to the\npioneering work of \citet{Ahls2002}, using bounds on the moment generating\nfunction. This gave rise to many applications\n\citep{Tropp,oliveira2009concentration,koltchinskii2013remark}~ranging from\nthe analysis of randomized optimization algorithms to the analysis of random graphs and\ngeneralization bounds useful in machine learning.\n%The following inequality has\n%been proposed in \cite{koltchinskii2013remark}.\n%\begin{theorem}[Bounded non-commutative Bernstein]\label{th:Bernstein1}\n    %From Theorem 3 of \citet{koltchinskii2013remark}, consider a sequence\n    %$(X_{j})_{j=1}^D$ of $D$ independent Hermitian $p \times p$ random matrices\n    %acting on a finite dimensional Hilbert space $\mathcal{Y}$ that satisfy\n    %$\expectation X_{j} = 0$, and suppose that there exist some constant $U \ge\n    %\norm{X_{j}}_{\mathcal{Y},\mathcal{Y}}$ for each index $j$. Denote the\n    %proxy bound on the matrix variance\n    %\begin{dmath*}\n        %V \succcurlyeq \sum_{j=1}^D \expectation X_j^2.\n    %\end{dmath*}\n    %Then, for all $\epsilon \geq 0$,\n    %\begin{dmath*}\n        %\probability\Set{\norm{\sum_{j=1}^D X_{j}}_{\mathcal{Y}, \mathcal{Y}}\n        %\geq \epsilon } \leq p\n        %\exp\left(-\frac{\epsilon^2}{2\norm{V}_{\mathcal{Y},\mathcal{Y}} +\n        %2U\epsilon/3}\right)\n    %\end{dmath*}\n%\end{theorem}\nThe concentration inequality of \citet{koltchinskii2013remark} that we used in our\noriginal paper \citep{brault2016random}~has the drawback of growing linearly with\nthe dimension $p$ of the output space $\mathcal{Y}$. However if the evaluation\nof the operator-valued kernel at two points yields a low-rank matrix, this\nbound could be improved since only a few principal dimensions are relevant.\nMoreover this bound cannot be used when dealing with operator-valued kernels\nacting on infinite-dimensional Hilbert spaces. Recent results of\n\citet{minsker2011some} consider the notion of intrinsic dimension to avoid\nthis \say{curse of dimensionality} (see \cref{def:intdim} for the definition).\nWhen $A$ is approximately low-rank (\acs{ie}~many eigenvalues are small or decay\nquickly to zero), the intrinsic dimension can be much lower than the\ndimensionality.  Indeed, $1 \le\intdim(A) \hiderel{\le} \rank(A) \hiderel{\le}\n\dim(A)$.\n\begin{theorem}[Bounded non-commutative Bernstein with intrinsic dimension\n\citep{minsker2011some, tropp2015introduction}]\label{th:Bernstein2}\n    Consider a sequence ${(X_j)}_{j=1}^D$ of $D$ independent Hilbert-Schmidt\n    self-adjoint random operators acting on a separable Hilbert space $\mathcal{Y}$\n    that satisfy $\expectation X_j = 0$ for all $j\in\mathbb{N}^*_D$.\n    Suppose that there exists a constant $U \ge\n    2\norm{X_j}_{\mathcal{Y},\mathcal{Y}}$ almost surely for all\n    $j\in\mathbb{N}^*_D$. 
Define a semi-definite upper bound for the\n    operator-valued variance $V \succcurlyeq \sum_{j=1}^D \expectation X_j^2$.\n    Then for all $\epsilon \ge \sqrt{\norm{V}_{\mathcal{Y}, \mathcal{Y}}} +\n    U/3$,\n    \begin{dmath*}\n        \probability\Set{\norm{\sum_{j=1}^D X_{j}}_{\mathcal{Y}, \mathcal{Y}}\n        \ge \epsilon } \le 4\intdim(V)\exp\left(-\psi_{V,U}(\epsilon)\right)\n    \end{dmath*}\n    where $\psi_{V, U}(\epsilon)=\frac{\epsilon^2}{2\norm{V}_{\mathcal{Y},\n    \mathcal{Y}} + 2 U \epsilon / 3}$.\n\end{theorem}\nThe concentration inequality is restricted to the case where $\epsilon \ge\n\sqrt{\norm{V}_{\mathcal{Y}, \mathcal{Y}}} + U/3$ since the bound is\nvacuous otherwise. The assumption that the $X_j$'s are Hilbert-Schmidt\noperators comes from the fact that the product of two such operators yields a\ntrace-class operator, for which the intrinsic dimension is well defined.\n\paragraph{}\nHowever, to cover the general case including unbounded \acp{ORFF} like curl-free and\ndivergence-free \acp{ORFF}, we choose a version of the Bernstein matrix\nconcentration inequality proposed in~\cite{koltchinskii2013remark} that allows\nus to consider matrices that are not uniformly bounded but have subexponential\ntails.  In the following we use the notion of Orlicz norm to bound random\nvariables by their tail behavior rather than their value (see\n\cref{def:orlicz}).  For the sake of simplicity, we now fix\n$\psi(t)=\psi_1(t)=\exp(t)-1$. To obtain the sharpest bounds, however, the Orlicz\nnorm should be adapted to the tail of the distribution of the random operator we\nwant to quantify.  We also introduce two technical lemmas related to\nthe Orlicz norm. The first one relates the $\psi_1$-Orlicz norm to the moment\ngenerating function ($\MGF$).\n\begin{lemma}\label{lm:orlicz_mgf}\n    Let $X$ be a random variable with a strictly monotonic moment-generating\n    function. We have $\norm{X}_{\psi_1}^{-1}=\MGF_{\abs{X}}^{-1}(2)$.\n\end{lemma}\n\begin{proof}\n    We have\n    \begin{dmath*}\n        \norm{X}_{\psi_1}=\inf \Set{C \hiderel{>} 0 \,\, | \,\,\n        \expectation[\exp\left( \abs{X}/C \right)] \hiderel{\le} 2 }\n        \hiderel{=} \frac{1}{\sup \Set{t \hiderel{>} 0 \,\, | \,\,\n        \MGF_{\abs{X}}(t)\le 2 }}.\n    \end{dmath*}\n    $X$ has a strictly monotonic moment-generating function, thus\n    the supremum equals $\MGF^{-1}_{\abs{X}}(2)$. Hence\n    $\norm{X}_{\psi_1}^{-1}=\MGF^{-1}_{\abs{X}}(2)$.\n\end{proof}\nThe second lemma gives the Orlicz norm of a positive constant.\n\begin{lemma}\n    If $a\in\mathbb{R}_+$ then $\norm{a}_{\psi_1} = \frac{a}{\ln(2)}<2a$.\n    \label{lm:orlicz_cte}\n\end{lemma}\n\begin{proof}\n    We consider $a$ as a positive constant random variable, whose \ac{MGF} is\n    $\MGF_a(t)=\exp(at)$.  From \cref{lm:orlicz_mgf},\n    $\norm{a}_{\psi_1}=\frac{1}{\MGF_a^{-1}(2)}$.  Then\n    $\MGF^{-1}_{\abs{a}}(2)=\frac{\ln(2)}{\abs{a}}$, $a \neq 0$. If $a=0$ then\n    $\norm{a}_{\psi_1}=0$ by definition of a norm. 
Thus $\\norm{a}_{\\psi_1} =\n    \\frac{a}{\\ln(2)}$.\n\\end{proof}\nWe now turn our attention to \\citet{minsker2011some}'s theorem for unbounded\nrandom variables.\n\\begin{theorem}[Unbounded non-commutative Bernstein with intrinsic dimension]\n    \\label{th:Bernstein3} Consider a sequence $(X_j)_{j=1}^D$ of $D$\n    independent self-adjoint random operators acting on a finite dimensional\n    Hilbert space $\\mathcal{Y}$ of dimension $p$ that satisfy $\\expectation X_j\n    = 0$ for all $j\\in\\mathbb{N}^*_D$.  Suppose that there exists some constant\n    $U \\ge \\norm{\\norm{X_j}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi}$ for all\n    $j\\in\\mathbb{N}^*_D$. Define a semi-definite upper bound for the\n    operator-valued variance $V \\succcurlyeq \\sum_{j=1}^D \\expectation X_j^2$.\n    Then for all $\\epsilon > 0$,\n    \\begin{dmath*}\n        \\probability\\Set{\\norm{\\sum_{j=1}^D X_{j}}_{\\mathcal{Y}, \\mathcal{Y}}\n        \\ge \\epsilon } \\hiderel{\\le}\n        \\begin{cases}\n            2\\intdim(V)\\exp\\left(-\\frac{\\epsilon^2}{2\n            \\norm{V}_{\\mathcal{Y},\\mathcal{Y}}\\left(1 + \\frac{1}{p}\\right)}\n            \\right) r_{V}(\\epsilon) \\condition{$\\epsilon \\le\n            \\frac{\\norm{V}_{\\mathcal{Y},\\mathcal{Y}}}{2U}\\frac{1+1/p}{K(V,\n            p)}$} \\\\\n            2\\intdim(V)\\exp\\left(-\\frac{\\epsilon}{4UK(V,\n            p)}\\right)r_{V}(\\epsilon) \\condition{otherwise.}\n        \\end{cases}\n    \\end{dmath*}\n    where $K(V,p)=\\log\\left(16\\sqrt{2}p\\right)+\\log\\left(\\frac{D\n    U^2}{\\norm{V}_{\\mathcal{Y}, \\mathcal{Y}}}\\right)$ and $r_V(\\epsilon)= 1 +\n    \\frac{3}{\\epsilon^2\\log^2(1 + \\epsilon /\n    \\norm{V}_{\\mathcal{Y},\\mathcal{Y}})}$.\n\\end{theorem}\nLet $\\psi=\\psi_1$. To use \\cref{th:Bernstein3}, we set $X_j=F^j(\\delta_i)$. We\nindeed have $\\expectation_{\\dual{\\Haar},\\rho}[F^j(\\delta_i)] = 0$ since\n$\\tilde{K}(\\delta_i)$ is the Monte-Carlo approximation of $K_e(\\delta_i)$, and\nthe matrices $F^j(\\delta_i)$ are self-adjoint. We assume we can bound all the\nOrlicz norms of the $F^j(\\delta_i)=\\frac{1}{D}(\\tilde{K}^j(\\delta_i) -\nK_e(\\delta_i))$. 
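As a quick illustration of how such Orlicz norms are computed via \\cref{lm:orlicz_mgf} (a worked example added for concreteness; it is not needed in the sequel), consider a real random variable $X$ following an exponential distribution with rate $\\lambda>0$. Then $\\MGF_{\\abs{X}}(t)=\\lambda/(\\lambda-t)$ for $t<\\lambda$, which is strictly increasing, and $\\MGF_{\\abs{X}}(\\lambda/2)=2$, so that $\\norm{X}_{\\psi_1}=2/\\lambda$; computing the infimum in the definition of the Orlicz norm directly gives the same value. 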
In the following we use constants $u_i$ such that $u_i=D U$.\nUsing \\cref{lm:orlicz_cte} and the sub-additivity of the\n$\\norm{\\cdot}_{\\mathcal{Y},\\mathcal{Y}}$ and $\\norm{\\cdot}_{\\psi_1}$ norms,\n\\begin{dmath*}\n    u_i=2D\\max_{1\\le j\\le\n    D}\\norm{\\norm{F^j(\\delta_i)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1}\n    \\hiderel{\\le} 2\\max_{1\\le j\\le\n    D}\\norm{\\norm{\\tilde{K}^j(\\delta_i)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1} +\n    2\\norm{\\norm{K_e(\\delta_i)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1}\n    <4\\max_{1\\le j\\le\n    D}\\norm{\\norm{A(\\omega_j)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1} + 4\n    \\norm{K_e(\\delta_i)}_{\\mathcal{Y},\\mathcal{Y}}\n    \\hiderel{=}4\\left(\\norm{\\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1}\n    + \\norm{K_e(\\delta_i)}_{\\mathcal{Y},\\mathcal{Y}}\\right).\n\\end{dmath*}\nIn the same way we define constants $v_i=DV$, \\acs{ie}\n$v_i=\nD\\sum_{j=1}^D\\expectation_{\\dual{\\Haar,\\rho}} F^j(\\delta_i)^2\n\\hiderel{=}D\\variance_{\\dual{\\Haar},\\rho}\\left[ \\tilde{K}(\\delta_i) \\right]$.\nThen, applying \\cref{th:Bernstein3}, we get for all\n$i\\in\\mathbb{N}^*_{\\mathcal{N}(\\mathcal{D}_{\\mathcal{C}},r)}$ ($i$ is the index\nof each anchor)\n\\begin{dmath*}\n    \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n    \\norm{F(\\delta_i)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge \\epsilon } \\le\n    \\begin{cases}\n        4 \\intdim(v_i)\\exp\\left(-D\\frac{\\epsilon^2}{2\n        \\norm{v_i}_{\\mathcal{Y},\\mathcal{Y}}\\left(1 + \\frac{1}{p}\\right)}\n        \\right) r_{v_i/D}(\\epsilon) \\condition{$\\epsilon \\le\n        \\frac{\\norm{v_i}_{\\mathcal{Y},\\mathcal{Y}}}{2u_i}\\frac{1+1/p}{K(v_i,\n        p)}$} \\\\\n        4 \\intdim(v_i)\\exp\\left(-D\\frac{\\epsilon}{4u_iK(v_i,\n        p)}\\right)r_{v_i/D}(\\epsilon) \\condition{otherwise.}\n    \\end{cases}\n\\end{dmath*}\nwith\n\\begin{dmath*}\n    K(v_i,p)=\\log\\left(16 \\sqrt{2}\n    p\\right)+\\log\\left(\\frac{u_i^2}{\\norm{v_i}_{\\mathcal{Y},\n    \\mathcal{Y}}}\\right)\n\\end{dmath*}\nand\n\\begin{dmath*}\n    r_{v_i/D}(\\epsilon)= 1 + \\frac{3}{\\epsilon^2\\log^2(1 + D \\epsilon /\n    \\norm{v_i}_{\\mathcal{Y},\\mathcal{Y}})}.\n\\end{dmath*}\nTo unify the bound on each anchor we\ndefine two constants\n\\begin{dmath*}\n    u = 4\\left(\\norm{\\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1} +\n    \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    \\norm{K_e(\\delta)}_{\\mathcal{Y},\\mathcal{Y}}\\right)\n    \\hiderel{\\ge} \\max_{i=1,\\hdots,T} u_i\n\\end{dmath*}\nand\n\\begin{dmath*}\n    v = \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    D\\variance_{\\dual{\\Haar},\\rho}\\left[ \\tilde{K}_e(\\delta) \\right]\n    \\hiderel{\\ge} \\max_{i=1,\\hdots,T} v_i.\n\\end{dmath*}\n\\subsubsection{Union Bound and examples}\nTaking the union bound over the anchors yields\n\\begin{dmath}\n    \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n    \\bigcup_{i=1}^{\\mathcal{N}(\\mathcal{D}_{\\mathcal{C}}, r)}\n    \\norm{F(\\delta_i)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge \\epsilon\n    } \\le 4 \\mathcal{N}(\\mathcal{D}_{\\mathcal{C}}, r) r_{v/D}(\\epsilon)\n    \\intdim(v)\n    \\begin{cases}\n        \\exp\\left(-D\\frac{\\epsilon^2}{2\n        \\norm{v}_{\\mathcal{Y},\\mathcal{Y}}\\left(1 + \\frac{1}{p}\\right)}\n        \\right) \\condition{$\\epsilon \\le\n        \\frac{\\norm{v}_{\\mathcal{Y},\\mathcal{Y}}}{2u}\\frac{1+1/p}{K(v,\n        p)}$} \\\\\n        \\exp\\left(-D\\frac{\\epsilon}{4uK(v,\n        
p)}\\right)\\condition{otherwise.}\n    \\end{cases}\n    \\label{eq:anchor_bound}\n\\end{dmath}\nHence, combining \\cref{eq:Lipschitz_constant} and \\cref{eq:anchor_bound} and\nsumming up the hypotheses yields the following proposition.\n\\begin{proposition}\n    \\label{pr:bound_approx_unbounded}\n    Let $K:\\mathcal{X}\\times\\mathcal{X}\\to\\mathcal{L}(\\mathcal{Y})$ be a\n    shift-invariant $\\mathcal{Y}$-Mercer kernel, where $\\mathcal{Y}$ is a\n    finite dimensional Hilbert space of dimension $p$ and $\\mathcal{X}$ a\n    metric space. Moreover, let $\\mathcal{C}$ be a compact subset of\n    $\\mathcal{X}$, $A:\\dual{\\mathcal{X}}\\to\\mathcal{L}(\\mathcal{Y})$ and\n    $\\probability_{\\dual{\\Haar},\\rho}$ a pair such that $\\tilde{K}_e =\n    \\sum_{j=1}^D \\cos{\\pairing{\\cdot,\\omega_j}}A(\\omega_j) \\hiderel{\\approx}\n    K_e$, $\\omega_j\\sim\\probability_{\\dual{\\Haar}, \\rho}$ \\acs{iid}.  Let\n    $V(\\delta) \\succcurlyeq \\variance_{\\dual{\\Haar},\\rho} \\tilde{K}_e(\\delta)$,\n    for all $\\delta\\in\\mathcal{D}_{\\mathcal{C}}$ and $H_\\omega$ be the\n    Lipschitz constant of the function $h: x\\mapsto \\pairing{x,\\omega}$. If the\n    three following constants exist\n    \\begin{dmath*}\n        m \\ge \\int_{\\dual{\\mathcal{X}}} H_\\omega\n        \\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}} d\\probability_{\\dual{\\Haar},\n        \\rho} \\hiderel{<} \\infty\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        u \\ge 4\\left(\\norm{\\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1}\n        + \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{K_e(\\delta)}_{\\mathcal{Y},\\mathcal{Y}}\\right) \\hiderel{<} \\infty\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        v \\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}} D\n        \\norm{V(\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty.\n    \\end{dmath*}\n    Define $p_{int}\\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    \\intdim(V(\\delta))$. Then for all $r\\in\\mathbb{R}_+^*$ and all\n    $\\epsilon\\in\\mathbb{R}_+^*$,\n    \\begin{dmath}\n        \\label{eq:bound1}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\norm{\\tilde{K}-K}_{\\mathcal{C}\\times\\mathcal{C}} \\ge \\epsilon}\n        \\le 4\\left(\\frac{r m}{\\epsilon} +\n        p_{int} \\mathcal{N}(\\mathcal{D}_{\\mathcal{C}},r) r_{v/D}(\\epsilon)\n        \\\\\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8uK(v,\n            p)}\\right)\\condition{otherwise.}\n        \\end{cases}\\right)\n    \\end{dmath}\n    where\n    \\begin{dmath*}\n        K(v, p)=\\log\\left(16 \\sqrt{2}\n        p\\right)+\\log\\left(\\frac{u^2}{\\norm{v}_{\\mathcal{Y},\n        \\mathcal{Y}}}\\right)\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        r_{v/D}(\\epsilon)=1 + \\frac{3}{\\epsilon^2\\log^2(1 + D \\epsilon /\n        \\norm{v}_{\\mathcal{Y},\\mathcal{Y}})}.\n    \\end{dmath*}\n\\end{proposition}\n\\begin{proof}\n    Let $m=\\int_{\\dual{\\mathcal{X}}} H_\\omega\n    \\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}} d\\probability_{\\dual{\\Haar},\n    \\rho}$. From \\cref{lm:LipschitzK},\n    $\\probability_{\\dual{\\Haar},\\rho}\\Set{(\\omega_j)_{j=1}^D | L_F \\ge\n    \\frac{\\epsilon}{2r}} \\le \\frac{4 r m}{\\epsilon}$. 
Thus, from\n    \\cref{lm:error_decomposition}, for all $r\\in\\mathbb{R}_+^*$,\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{F(\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge \\epsilon\n        } \\hiderel{\\le} \\\\\n        \\probability_{\\dual{\\Haar},\\rho}\\Set{(\\omega_j)_{j=1}^D | L_F \\ge\n        \\frac{\\epsilon}{2r}} +\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\bigcup_{i=1}^{\\mathcal{N}(\\mathcal{D}_{\\mathcal{C}}, r)}\n        \\norm{F(\\delta_i)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge \\epsilon }\n        \\hiderel{\\le} 4\\frac{r m}{\\epsilon} + 4 \\mathcal{N}(\\mathcal{D}_{\\mathcal{C}}, r)\n        r_{v/D}(\\epsilon) \\intdim(v)\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            \\norm{v}_{\\mathcal{Y},\\mathcal{Y}}\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{\\norm{v}_{\\mathcal{Y},\\mathcal{Y}}}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8uK(v,\n            p)}\\right)\\condition{otherwise.}\n        \\end{cases}\n    \\end{dmath*}\n\\end{proof}\nWith minor modifications we can obtain a second inequality for the case where\nthe random operators $A(\\omega_j)$ are bounded almost surely. This second\nbound, with more restrictions on $A$, has the advantage of working in infinite\ndimensions as long as $A(\\omega_j)$ is a Hilbert-Schmidt operator.\n\\begin{proposition}\n    \\label{pr:bound_approx_bounded}\n    Let $K:\\mathcal{X}\\times\\mathcal{X}\\to\\mathcal{L}(\\mathcal{Y})$ be a\n    shift-invariant $\\mathcal{Y}$-Mercer kernel, where $\\mathcal{Y}$ is a\n    Hilbert space and $\\mathcal{X}$ a metric space. Moreover, let $\\mathcal{C}$\n    be a compact subset of $\\mathcal{X}$,\n    $A:\\dual{\\mathcal{X}}\\to\\mathcal{L}(\\mathcal{Y})$ and\n    $\\probability_{\\dual{\\Haar},\\rho}$ a pair such that $\\tilde{K}_e =\n    \\sum_{j=1}^D \\cos{\\pairing{\\cdot,\\omega_j}}A (\\omega_j) \\hiderel{\\approx}\n    K_e$, $\\omega_j\\sim\\probability_{\\dual{\\Haar}, \\rho}$ \\acs{iid}, where\n    $A(\\omega_j)$ is a Hilbert-Schmidt operator for all $j \\in \\mathbb{N}^*_D$.\n    Let $\\mathcal{D}_{\\mathcal{C}}=\\mathcal{C} \\groupop \\mathcal{C}^{-1}$ and\n    $V (\\delta) \\succcurlyeq\\variance_{\\dual{\\Haar},\\rho} \\tilde{K}_e\n    (\\delta)$, for all $\\delta\\in\\mathcal{D}_{\\mathcal{C}}$ and $H_\\omega$ be\n    the Lipschitz constant of the function $h: x\\mapsto \\pairing{x,\\omega}$. 
If\n    the three following constants exist\n    \\begin{dmath*}\n        m \\ge\\int_{\\dual{\\mathcal{X}}} H_{\\omega}\n        \\norm{A (\\omega)}_{\\mathcal{Y},\\mathcal{Y}}\n        d\\probability_{\\dual{\\Haar}, \\rho} \\hiderel{<} \\infty{}\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        u \\ge\\esssup_{\\omega\\in\\dual{\\mathcal{X}}}\n        \\norm{A (\\omega)}_{\\mathcal{Y}, \\mathcal{Y}} +\n        \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{K_e (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty{}\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        v \\ge\\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}} D\n        \\norm{V (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty.\n    \\end{dmath*}\n    Define $p_{int} \\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    \\intdim\\left(V(\\delta)\\right)$. Then for all $r\\in\\mathbb{R}_+^*$ and all\n    $\\epsilon>\\sqrt{\\frac{v}{D}} +\n    \\frac{u}{3D}$,\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{F (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge\\epsilon} \\le~4\n        \\left(\\frac{r m}{\\epsilon} + p_{int}\n        \\mathcal{N} (\\mathcal{D}_{\\mathcal{C}}, r)\n        \\exp\\left(-D\\psi_{v,u} (\\epsilon) \\right)\\right)\n    \\end{dmath*}\n    where $\\psi_{v,u}(\\epsilon)=\\frac{\\epsilon^2}{2(v + u\n    \\epsilon / 3)}$.\n\\end{proposition}\nWhen the covering number $\\mathcal{N}(\\mathcal{D}_{\\mathcal{C}}, r)$ of the\nmetric space $\\mathcal{D}_{\\mathcal{C}}$ has an analytical form, it is\npossible to optimize the bound over the radius $r$ of the covering balls. As an\nexample, we refine \\cref{pr:bound_approx_unbounded} and\n\\cref{pr:bound_approx_bounded} in the case where $\\mathcal{X}$ is a finite\ndimensional Banach space.\n\\begin{corollary}\n    Let $K:\\mathcal{X}\\times\\mathcal{X}\\to\\mathcal{L}(\\mathcal{Y})$ be a\n    shift-invariant $\\mathcal{Y}$-Mercer kernel, where $\\mathcal{Y}$ is a\n    finite dimensional Hilbert space of dimension $p$ and $\\mathcal{X}$ a\n    finite dimensional Banach space of dimension $d$. Moreover, let\n    $\\mathcal{C}$ be a closed ball of $\\mathcal{X}$ centered at the origin of\n    diameter $\\abs{\\mathcal{C}}$,\n    $A:\\dual{\\mathcal{X}}\\to\\mathcal{L}(\\mathcal{Y})$ and\n    $\\probability_{\\dual{\\Haar},\\rho}$ a pair such that $\\tilde{K}_e =\n    \\sum_{j=1}^D \\cos\\pairing{\\cdot,\\omega_j}A(\\omega_j) \\approx K_e$,\n    $\\omega_j\\sim\\probability_{\\dual{\\Haar}, \\rho}$ \\acs{iid}.  Let\n    $\\mathcal{D}_{\\mathcal{C}}=\\mathcal{C}\\groupop\\mathcal{C}^{-1}$ and\n    $V(\\delta) \\succcurlyeq \\variance_{\\dual{\\Haar},\\rho} \\tilde{K}_e(\\delta)$\n    for all $\\delta\\in\\mathcal{D}_{\\mathcal{C}}$. Let $H_\\omega$ be the\n    Lipschitz constant of $h_{\\omega}:x\\mapsto \\pairing{x, \\omega}$. 
If the\n    three following constants exist\n    \\begin{dmath*}\n        m \\ge \\int_{\\dual{\\mathcal{X}}} H_{\\omega}\n        \\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}} d\\probability_{\\dual{\\Haar},\n        \\rho} \\hiderel{<} \\infty\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        u \\ge 4\\left(\\norm{\\norm{A(\\omega)}_{\\mathcal{Y},\\mathcal{Y}}}_{\\psi_1}\n        + \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{K_e(\\delta)}_{\\mathcal{Y},\\mathcal{Y}}\\right) \\hiderel{<} \\infty\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        v \\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}} D\n        \\norm{V(\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty.\n    \\end{dmath*}\n    Define $p_{int}\\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    \\intdim(V(\\delta))$. Then for all $0 < \\epsilon \\le m \\abs{\\mathcal{C}}$,\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\norm{\\tilde{K}-K}_{\\mathcal{C}\\times\\mathcal{C}} \\ge \\epsilon}\n        \\le 8\\sqrt{2} \\left( \\frac{m\\abs{\\mathcal{C}}}{\\epsilon}\n        \\right)\n        {\\left(p_{int}r_{v/D}(\\epsilon)\\right)}^{\\frac{1}{d + 1}}\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v(d+1)\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8u(d+1)K(v,\n            p)}\\right)\\condition{otherwise,}\n        \\end{cases}\n    \\end{dmath*}\n    where $K(v, p)=\\log\\left(16 \\sqrt{2}\n    p\\right)+\\log\\left(\\frac{u^2}{v}\\right) $ and $r_{v/D}(\\epsilon)=1 +\n    \\frac{3}{\\epsilon^2\\log^2(1 + D \\epsilon / v)}$.\n\\end{corollary}\n\\begin{proof}\n    As we have seen in~\\cref{subsec:epsilon-net}, suppose that $\\mathcal{X}$ is\n    a finite dimensional Banach space. Let $\\mathcal{C}\\subset\\mathcal{X}$ be\n    a closed ball centered at the origin of diameter $\\abs{\\mathcal{C}}=C$;\n    then the difference ball centered at the origin\n    \\begin{dmath*}\n        \\mathcal{D}_{\\mathcal{C}}\n        = \\mathcal{C}\\groupop\\mathcal{C}^{-1}\n        \\hiderel{=} \\Set{x \\groupop~z^{-1} | \\norm{x}_{\\mathcal{X}}\n        \\hiderel{\\le} C / 2, \\norm{z}_{\\mathcal{X}} \\hiderel{\\le} C / 2, (x,\n        z)\\in\\mathcal{X}^2} \\hiderel{\\subset} \\mathcal{X}\n    \\end{dmath*}\n    is closed and bounded, hence compact, and has diameter\n    $\\abs{\\mathcal{D}_{\\mathcal{C}}}=2C$. 
It is\n    possible to cover it with $\\mathcal{N} (\\mathcal{D}_{\\mathcal{C}}, r)$\n    closed balls of radius $r$, where $\\log(\\mathcal{N}\n    (\\mathcal{D}_{\\mathcal{C}}, r)) = d\\log\\left(\\frac{2\\abs{\\mathcal{C}}}{r}\n    \\right)$. Plugging back into \\cref{eq:bound1} yields\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\norm{\\tilde{K}-K}_{\\mathcal{C}\\times\\mathcal{C}} \\ge \\epsilon}\n        \\le 4\\left(\\frac{r m}{\\epsilon} + p_{int} \\left(\\frac{2\\abs{\\mathcal{C}}}{r}\n        \\right)^d r_{v/D}(\\epsilon) \\\\\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8uK(v,\n            p)}\\right)\\condition{otherwise.}\n        \\end{cases}\\right)\n    \\end{dmath*}\n    The right-hand side of the equation has the form $ar+br^{-d}$ with $a =\n    \\frac{m}{\\epsilon}$ and\n    \\begin{dmath*}\n        b =  p_{int} {\\left(2 \\abs{\\mathcal{C}}\\right)}^d r_{v/D}(\\epsilon)\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8uK(v,\n            p)}\\right)\\condition{otherwise.}\n        \\end{cases}\n    \\end{dmath*}\n    Following \\cite{Rahimi2007, sutherland2015, minh2016operator}, we optimize\n    over $r$. The right-hand side is a continuous convex function of $r$ on\n    $\\mathbb{R}_+$ that achieves its minimum at\n    $r_*=\\left(\\frac{bd}{a}\\right)^{\\frac{1}{d+1}}$, with minimum value\n    $a^{\\frac{d}{d + 1}}b^{\\frac{1}{d + 1}}\\left( d^{\\frac{1}{d +\n    1}} + d^{-\\frac{d}{d+1}} \\right)$, hence\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\norm{\\tilde{K}-K}_{\\mathcal{C}\\times\\mathcal{C}} \\ge \\epsilon}\n        \\le C_d {\\left( \\frac{2m\\abs{\\mathcal{C}}}{\\epsilon}\n        \\right)}^{\\frac{d}{d + 1}}\n        {\\left(p_{int}r_{v/D}(\\epsilon)\\right)}^{\\frac{1}{d + 1}}\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v(d+1)\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8u(d+1)K(v,\n            p)}\\right)\\condition{otherwise,}\n        \\end{cases}\n        \\le 8\\sqrt{2} \\left( \\frac{m\\abs{\\mathcal{C}}}{\\epsilon}\n        \\right)\n        {\\left(p_{int}r_{v/D}(\\epsilon)\\right)}^{\\frac{1}{d + 1}}\n        \\begin{cases}\n            \\exp\\left(-D\\frac{\\epsilon^2}{8\n            v(d+1)\\left(1 + \\frac{1}{p}\\right)}\n            \\right) \\condition{$\\epsilon \\le\n            \\frac{v}{u}\\frac{1+1/p}{K(v,\n            p)}$} \\\\\n            \\exp\\left(-D\\frac{\\epsilon}{8u(d+1)K(v,\n            p)}\\right)\\condition{otherwise,}\n        \\end{cases}\n    \\end{dmath*}\n    where $C_d = 4 \\left( d^{\\frac{1}{d + 1}} + d^{-\\frac{d}{d+1}} \\right)$.\n    Finally, when $\\mathcal{X}$ is a Banach space, the Lipschitz constant of\n    $h_{\\omega}$ is the supremum of the gradient: $H_{\\omega} =\n    \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}} \\norm{(\\nabla h_{\\omega})\n    (\\delta)}_{\\dual{\\mathcal{X}}}$.\n\\end{proof}\nFollowing the same proof technique we 
obtain the second bound for bounded\n\\ac{ORFF}.\n\\begin{corollary}\n    Let $K:\\mathcal{X}\\times\\mathcal{X}\\to\\mathcal{L}(\\mathcal{Y})$ be a\n    shift-invariant $\\mathcal{Y}$-Mercer kernel, where $\\mathcal{Y}$ is a\n    Hilbert space and $\\mathcal{X}$ a finite dimensional Banach space of\n    dimension $d$. Moreover, let $\\mathcal{C}$ be a closed ball of\n    $\\mathcal{X}$ centered at the origin of diameter $\\abs{\\mathcal{C}}$,\n    $A:\\dual{\\mathcal{X}}\\to\\mathcal{L}(\\mathcal{Y})$\n    and $\\probability_{\\dual{\\Haar},\\rho}$ a pair such that $\\tilde{K}_e =\n    \\sum_{j=1}^D \\cos{\\pairing{\\cdot,\\omega_j}}A (\\omega_j) \\hiderel{\\approx}\n    K_e$, $\\omega_j\\sim\\probability_{\\dual{\\Haar}, \\rho}$ \\acs{iid}, where\n    $A(\\omega_j)$ is a Hilbert-Schmidt operator for all $j \\in \\mathbb{N}^*_D$.\n    Let $\\mathcal{D}_{\\mathcal{C}}=\\mathcal{C} \\groupop \\mathcal{C}^{-1}$ and\n    $V (\\delta) \\succcurlyeq\\variance_{\\dual{\\Haar},\\rho} \\tilde{K}_e (\\delta)$\n    for all $\\delta\\in\\mathcal{D}_{\\mathcal{C}}$    and $H_\\omega$ be the\n    Lipschitz constant of the function $h: x\\mapsto \\pairing{x,\\omega}$. If the\n    three following constants exist\n    \\begin{dmath*}\n        m \\ge\\int_{\\dual{\\mathcal{X}}} H_{\\omega}\n        \\norm{A (\\omega)}_{\\mathcal{Y},\\mathcal{Y}}\n        d\\probability_{\\dual{\\Haar}, \\rho} \\hiderel{<} \\infty{}\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        u \\ge\\esssup_{\\omega\\in\\dual{\\mathcal{X}}}\n        \\norm{A (\\omega)}_{\\mathcal{Y}, \\mathcal{Y}} +\n        \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{K_e (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty{}\n    \\end{dmath*}\n    and\n    \\begin{dmath*}\n        v \\ge\\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}} D\n        \\norm{V (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\hiderel{<} \\infty.\n    \\end{dmath*}\n    Define $p_{int} \\ge \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n    \\intdim\\left(V(\\delta)\\right)$. Then for all $\\sqrt{\\frac{v}{D}} +\n    \\frac{u}{3D} < \\epsilon < m\\abs{\\mathcal{C}}$,\n    \\begin{dmath*}\n        \\probability_{\\dual{\\Haar,\\rho}}\\Set{(\\omega_j)_{j=1}^D |\n        \\sup_{\\delta\\in\\mathcal{D}_{\\mathcal{C}}}\n        \\norm{F (\\delta)}_{\\mathcal{Y}, \\mathcal{Y}} \\ge\\epsilon} \\le~8\\sqrt{2}\n        \\left(\\frac{m\\abs{\\mathcal{C}}}{\\epsilon}\\right) p_{int}^{\\frac{1}{d +\n        1}} \\exp\\left(-D\\psi_{v,d,u} (\\epsilon) \\right)\n    \\end{dmath*}\n    where $\\psi_{v,d,u}(\\epsilon)=\\frac{\\epsilon^2}{2(d+1)(v + u\n    \\epsilon / 3)}$.\n\\end{corollary}\n\\subsubsection{Proof of the ORFF estimator variance bound\n(\\texorpdfstring{\\cref{pr:variance_bound}}{Proposition~%\n\\ref{pr:variance_bound}}).}\nWe use the notations $\\delta = x \\groupop z^{-1}$ for all $x, z\n\\in\\mathcal{X}$, $\\tilde{K}(x,z) = {\\tildePhi{\\omega}(x)}^\\adjoint\n\\tildePhi{\\omega}(z)$, $\\tilde{K}^j(x, z) = {\\Phi_x(\\omega_j)}^\\adjoint\n\\Phi_z(\\omega_j)$ and $K_e(\\delta)=K_e(x, z)$.\n\\begin{proof}\n    Let $\\delta\\in\\mathcal{D}_{\\mathcal{C}}$ be fixed. 
From the definition\n    of the variance of a random variable and using the fact that the\n    $(\\omega_j)_{j=1}^D$ are \\ac{iid} random variables,\n    \\begin{dmath*}\n        \\variance_{\\dual{\\Haar}, \\rho} \\left[ \\tilde{K}_e(\\delta) \\right]\n        = \\expectation_{\\dual{\\Haar}, \\rho}\\left[ \\frac{1}{D} \\sum_{j=1}^D\n        \\tilde{K}^j_e(\\delta) - K_e(\\delta) \\right]^2\n        \\hiderel{=} \\frac{1}{D^2} \\expectation_{\\dual{\\Haar}, \\rho}\\left[\n        \\sum_{j=1}^D \\left(\\tilde{K}^j_e(\\delta) - K_e(\\delta)\\right) \\right]^2\n        = \\frac{1}{D} \\expectation_{\\dual{\\Haar}, \\rho} \\left[\n        \\tilde{K}_e^j(\\delta)^2 - \\tilde{K}_e^j(\\delta)K_e(\\delta) -\n        K_e(\\delta)\\tilde{K}_e^j(\\delta) + K_e(\\delta)^2 \\right].\n    \\end{dmath*}\n    From the definition of $\\tilde{K}^j_e$, $\\expectation_{\\dual{\\Haar}, \\rho}\n    \\tilde{K}^j_e(\\delta) = K_e(\\delta)$, which leads to\n    \\begin{dmath*}\n        \\variance_{\\dual{\\Haar}, \\rho} \\left[ \\tilde{K}_e(\\delta) \\right]\n        = \\frac{1}{D} \\expectation_{\\dual{\\Haar}, \\rho} \\left[\n        \\tilde{K}^j_e(\\delta)^2 - K_e(\\delta)^2 \\right].\n    \\end{dmath*}\n    A trigonometric identity gives us $(\\cos\\pairing{\\delta,\n    \\omega})^2=\\frac{1}{2}\\left( \\cos\\pairing{2\\delta, \\omega} +\n    \\cos\\pairing{e, \\omega} \\right)$. Thus\n    \\begin{dmath*}\n        \\variance_{\\dual{\\Haar}, \\rho} \\left[ \\tilde{K}_e(\\delta) \\right]\n        = \\frac{1}{2D} \\expectation_{\\dual{\\Haar}, \\rho} \\left[ \\left(\n        \\cos\\pairing{2\\delta, \\omega} + \\cos\\pairing{e, \\omega} \\right)\n        A(\\omega)^2 - 2 K_e(\\delta)^2 \\right].\n    \\end{dmath*}\n    Also,\n    \\begin{dmath*}\n        \\expectation_{\\dual{\\Haar}, \\rho} \\left[ \\cos\\pairing{2\\delta, \\omega}\n        A(\\omega)^2 \\right]\n        = \\expectation_{\\dual{\\Haar}, \\rho}\\left[ \\cos\\pairing{2\\delta, \\omega}\n        A(\\omega) \\right] \\expectation_{\\dual{\\Haar}, \\rho}\\left[ A(\\omega)\n        \\right] + \\covariances_{\\dual{\\Haar}, \\rho}\\left[ \\cos\\pairing{2\\delta,\n        \\omega} A(\\omega), A(\\omega) \\right]\n        = K_e(2\\delta) \\expectation_{\\dual{\\Haar}, \\rho}\\left[ A(\\omega)\n        \\right] + \\covariances_{\\dual{\\Haar}, \\rho}\\left[ \\cos\\pairing{2\\delta,\n        \\omega} A(\\omega), A(\\omega) \\right].\n    \\end{dmath*}\n    Similarly, we obtain\n    \\begin{dmath*}\n        \\expectation_{\\dual{\\Haar}, \\rho}\\left[ \\cos\\pairing{e, \\omega}\n        A(\\omega)^2 \\right] = K_e(e)\\expectation_{\\dual{\\Haar}, \\rho}\\left[\n        A(\\omega) \\right] + \\covariances_{\\dual{\\Haar}, \\rho}\\left[\n        \\cos\\pairing{e, \\omega} A(\\omega), A(\\omega) \\right].\n    \\end{dmath*}\n    Therefore,\n    \\begin{dmath*}\n        \\variance_{\\dual{\\Haar}, \\rho} \\left[ \\tilde{K}_e(\\delta) \\right]\n        = \\frac{1}{2D} \\left( \\left( K_e(2\\delta) + K_e(e) \\right)\n        \\expectation_{\\dual{\\Haar}, \\rho}\\left[ A(\\omega) \\right] -\n        2K_e(\\delta)^2 + \\covariances_{\\dual{\\Haar}, \\rho}\\left[\n        \\left(\\cos\\pairing{2\\delta, \\omega} + \\cos\\pairing{e, \\omega}\\right)\n        A(\\omega), A(\\omega) \\right]\\right)\n        = \\frac{1}{2D} \\left( \\left( K_e(2\\delta) + K_e(e) \\right)\n        \\expectation_{\\dual{\\Haar}, \\rho}\\left[ A(\\omega) \\right] -\n        2K_e(\\delta)^2 + \\covariances_{\\dual{\\Haar}, \\rho}\\left[\n        \\left(\\cos\\pairing{\\delta, \\omega}\\right)^2\n        
A(\\omega), A(\\omega) \\right]\\right) \\\\\n        \\preccurlyeq \\frac{1}{2D} \\left( \\left( K_e(2\\delta) + K_e(e) \\right)\n        \\expectation_{\\dual{\\Haar}, \\rho}\\left[ A(\\omega) \\right] -\n        2 K_e(\\delta)^2 + \\variance_{\\dual{\\Haar}, \\rho}\\left[\n        A(\\omega) \\right]\\right).\n\\end{dmath*}\n\\end{proof}\n\\subsection{Learning}\n\\subsubsection{Proof of \\texorpdfstring{\\cref{th:representer}}{Theorem~%\n\\ref{th:representer}}}\n\\begin{proof}\n    Since $f(x)=K_x^*f$, the optimization problem reads\n    \\begin{dmath*}\n        f_{\\seq{s}} = \\argmin_{f\\in\\mathcal{H}_K}\n        \\frac{1}{N}\\displaystyle\\sum_{i=1}^N c(K_{x_i}^\\adjoint f, y_i) +\n        \\frac{\\lambda}{2}\\norm{f}^2_{K}.\n    \\end{dmath*}\n    Let $W_{\\seq{s}}:\\mathcal{H}_K\\to\\Vect_{i=1}^N\\mathcal{Y}$ be the\n    restriction\\footnote{$W_{\\seq{s}}$ is sometimes called the sampling or\n    evaluation operator as in \\citet{minh2016unifying}. However, we prefer\n    calling it the \\say{restriction operator} as in \\citet{rosasco2010learning}\n    since $W_{\\seq{s}}f$ is the restriction of $f$ to the points in $\\seq{s}$.}\n    linear operator defined as $W_{\\seq{s}}f = \\Vect_{i=1}^N K_{x_i}^\\adjoint\n    f$, with $K_{x_i}^\\adjoint:\\mathcal{H}_K\\to\\mathcal{Y}$ and\n    $K_{x_i}:\\mathcal{Y}\\to\\mathcal{H}_K$. Let\n    $Y=\\vect_{i=1}^Ny_i\\in\\mathcal{Y}^N$. We have\n    $\\inner{Y,W_{\\seq{s}}f}_{\\Vect_{i=1}^N\\mathcal{Y}} =\n    \\sum_{i=1}^N\\inner{y_i, K_{x_i}^\\adjoint f}_{\\mathcal{Y}}\n    \\hiderel{=}\\sum_{i=1}^N\\inner{K_{x_i} y_i, f}_{\\mathcal{H}_K}$.  Thus the\n    adjoint operator $W_{\\seq{s}}^\\adjoint :\n    \\Vect_{i=1}^N\\mathcal{Y}\\to\\mathcal{H}_K$ is $W_{\\seq{s}}^\\adjoint\n    Y=\\sum_{i=1}^NK_{x_i} y_i$, and the operator $W_{\\seq{s}}^* W_{\\seq{s}} :\n    \\mathcal{H}_K \\to \\mathcal{H}_K$ is $W_{\\seq{s}}^\\adjoint W_{\\seq{s}}f =\n    \\sum_{i=1}^NK_{x_i} K_{x_i}^\\adjoint f$.  Let $\\mathfrak{R}_{\\lambda}(f,\n    \\seq{s}) = \\underbrace{\\frac{1}{N}\\displaystyle\\sum_{i=1}^N c(f(x_i),\n    y_i)}_{=\\mathfrak{R}_c} + \\frac{\\lambda}{2}\\norm{f}^2_{K}$. To ensure that\n    $\\mathfrak{R}_{\\lambda}$ has a global minimizer we need the following\n    technical lemma (which is a consequence of the Hahn-Banach theorem for\n    lower semi-continuous functionals, see~\\citet{kurdila2006convex}).\n    \\begin{lemma}\n        \\label{lm:strongly_convex_is_coercive} Let $\\mathfrak{R}$ be a proper,\n        convex, lower semi-continuous functional, defined on a Hilbert space\n        $\\mathcal{H}$. If $\\mathfrak{R}$ is strongly convex, then\n        $\\mathfrak{R}$ is coercive.\n    \\end{lemma}\n    %\\begin{proof}\n        %Consider the convex function $G(f)\\colonequals\n        %\\mathfrak{R}(f)-\\lambda\\norm{f}^2$, for some $\\lambda>0$. Since\n        %$\\mathfrak{R}$ is by assumption proper, lower semi-continuous and\n        %strongly convex with parameter $\\lambda$, $G$ is proper, lower\n        %semi-continuous and convex.  Thus Hahn-Banach theorem apply, stating\n        %that $G$ is bounded by below by an affine functional. 
\\acs{ie}~there\n        %exists $f_0$ and $f_1\\in\\mathcal{H}$ such that\n        %\\begin{dmath*}\n            %G(f)\\ge G(f_0) + \\inner{f - f_0, f_1} \\condition{for all\n            %$f\\in\\mathcal{H}$.}\n        %\\end{dmath*}\n        %Then substitute the definition of $G$ to obtain\n        %\\begin{dmath*}\n            %\\mathfrak{R}(f)\\ge \\mathfrak{R}(f_0) +\n            %\\lambda\\left(\\norm{f}-\\norm{f_0}\\right) + \\inner{f - f_0, f_1}.\n        %\\end{dmath*}\n        %By the Cauchy-Schwartz inequality, $\\inner{f, f_1}\\ge -\n        %\\norm{f}\\norm{f_1}$, thus\n        %\\begin{dmath*}\n            %\\mathfrak{R}(f)\\ge \\mathfrak{R}(f_0) +\n            %\\lambda\\left(\\norm{f}-\\norm{f_0}\\right) - \\norm{f}\\norm{f_1} -\n            %\\inner{f_0, f_1},\n        %\\end{dmath*}\n        %which tends to infinity as $f$ tends to infinity. Hence $\\mathfrak{R}$\n        %is coercive\n    %\\end{proof}\n    Since $c$ is proper, lower semi-continuous and convex by assumption, the\n    term $\\mathfrak{R}_c$ is also proper, lower semi-continuous and convex.\n    Moreover the term $\\frac{\\lambda}{2}\\norm{f}^2_{K}$ is strongly convex.\n    Thus $\\mathfrak{R}_{\\lambda}$ is strongly convex. Apply\n    \\cref{lm:strongly_convex_is_coercive} to obtain the coercivity of\n    $\\mathfrak{R}_{\\lambda}$, and then Mazur-Schauder's theorem (see\n    \\citet{gorniewicz1999topological, kurdila2006convex}) to show that\n    $\\mathfrak{R}_{\\lambda}$ attains its minimum at a unique minimizer. Then let\n    $\\mathcal{H}_{K, \\seq{s}}=\\Set{\\sum_{j=1}^{N}K_{x_j}u_j|\n    (u_i)_{i=1}^{N} \\in\\mathcal{Y}^{N}}$.  For $f\\in\\mathcal{H}_{K,\n    \\seq{s}}^\\perp$\\footnote{$\\mathcal{H}_{K,\n    \\seq{s}}^\\perp\\oplus\\mathcal{H}_{K, \\seq{s}}=\\mathcal{H}_K$ because\n    $W_{\\seq{s}}$ is bounded.}, the operator $W_{\\seq{s}}$ satisfies $\\inner{Y,\n    W_{\\seq{s}}f}_{\\Vect_{i=1}^N\\mathcal{Y}} =\n    \\inner{\\underbrace{f}_{\\in\\mathcal{H}_{K, \\seq{s}}^\\perp},\n    \\underbrace{\\sum_{i=1}^{N}K_{x_i} y_i}_{\\in\\mathcal{H}_{K,\n    \\seq{s}}}}_{\\mathcal{H}_K} \\hiderel{=} 0$ for all sequences\n    $(y_i)_{i=1}^N$, since $y_i\\in\\mathcal{Y}$.  Hence,\n    \\begin{dmath}\n        \\label{eq:null1} (f(x_i))_{i=1}^{N}=0.\n    \\end{dmath}\n    In the same way, $\\sum_{i=1}^{N}\\inner{K_{x_i}^* f, u_i}_{\\mathcal{Y}}\n    \\hiderel{=} \\inner{\\underbrace{f}_{\\in\\mathcal{H}_{K, \\seq{s}}^\\perp},\n    \\underbrace{\\sum_{j=1}^{N}K_{x_j}u_j}_{\\in\\mathcal{H}_{K,\n    \\seq{s}}}}_{\\mathcal{H}_K} \\hiderel{=} 0$ for all sequences\n    $(u_i)_{i=1}^{N}\\in\\mathcal{Y}^{N}$. As a result,\n    \\begin{dmath}\n        \\label{eq:null2} (f(x_i))_{i=1}^{N}=0.\n    \\end{dmath}\n    Now for an arbitrary $f\\in\\mathcal{H_K}$, consider the orthogonal\n    decomposition $f = f^{\\perp} + f^{\\parallel}$, where $f^{\\perp} \\in\n    \\mathcal{H}_{K, \\seq{s}}^\\perp$ and $f^{\\parallel} \\in \\mathcal{H}_{K,\n    \\seq{s}}$. 
Then, since $\\norm{f^{\\perp} + f^{\\parallel}}_{\\mathcal{H}_K}^2\n    =\\norm{f^{\\perp}}_{\\mathcal{H}_K}^2 +\n    \\norm{f^{\\parallel}}_{\\mathcal{H}_K}^2$, \\cref{eq:null1} and\n    \\cref{eq:null2} show that if $\\lambda > 0$, then\n    $\\mathfrak{R}_{\\lambda}(f, \\seq{s}) = \\mathfrak{R}_{\\lambda}\\left(f^{\\perp}\n    + f^{\\parallel}, \\seq{s}\\right) \\hiderel{\\ge}\n    \\mathfrak{R}_{\\lambda}\\left(f^{\\parallel}, \\seq{s}\\right)$. The last\n    inequality is an equality only when $\\norm{f^{\\perp}}_{\\mathcal{H}_K}=0$,\n    that is when $f^{\\perp}=0$. As a result, since the minimizer of\n    $\\mathfrak{R}_{\\lambda}$ is unique and attained, it must lie in\n    $\\mathcal{H}_{K, \\seq{s}}$.\n\\end{proof}\n\\subsubsection{Proof of \\texorpdfstring{\\cref{th:orff_representer}}{Theorem~%\n\\ref{th:orff_representer}}}\n\\label{subsubsec:proof_feature_equiv}\n\\begin{proof}\n    Since $\\tildeK{\\omega}$ is an operator-valued kernel, from\n    \\cref{th:representer}, \\cref{eq:argmin_RKHS_rand} has a solution of the\n    form\n    \\begin{dmath*}\n        \\widetilde{f}_{\\seq{s}} = \\sum_{i=1}^{N} \\tildeK{\\omega}(\\cdot,\n        x_i)u_i \\hiderel{=} \\sum_{i=1}^N\n        \\tildePhi{\\omega}(\\cdot)^\\adjoint \\tildePhi{\\omega}(x_i)u_i \\hiderel{=}\n        \\tildePhi{\\omega}(\\cdot)^\\adjoint \\underbrace{\\left(\\sum_{i=1}^{N}\n        \\tildePhi{\\omega}(x_i) u_i \\right)}_{= \\theta \\in \\left( \\Ker\n        \\tildeW{\\omega}\\right)^\\perp \\subset \\tildeH{\\omega}},\n    \\end{dmath*}\n    where $u_i \\hiderel{\\in} \\mathcal{Y}$ and $x_i\\in\\mathcal{X}$. Let\n    $\\theta_{\\seq{s}}=\\argmin_{\\theta\\in\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp}\n    \\frac{1}{N}\\sum_{i=1}^Nc\\left(\\tildePhi{\\omega}(x_i)^\\adjoint \\theta,\n    y_i\\right) + \\frac{\\lambda}{2} \\norm{\\tildePhi{\\omega}(\\cdot)^\\adjoint\n    \\theta}^2_{\\tildeK{\\omega}}$.  Since $\\theta\\in(\\Ker\n    \\tildeW{\\omega})^\\perp$ and $W$ is an isometry from $(\\Ker\n    \\tildeW{\\omega})^\\perp\\subset \\tildeH{\\omega}$ onto\n    $\\mathcal{H}_{\\tildeK{\\omega}}$, we have\n    $\\norm{\\tildePhi{\\omega}(\\cdot)^\\adjoint\\theta}^2_{\\tildeK{\\omega}} =\n    \\norm{\\theta}^2_{\\tildeH{\\omega}}$. Hence\n        $\\theta_{\\seq{s}}=\\argmin_{\\theta\\in\\left(\\Ker\n        \\tildeW{\\omega}\\right)^\\perp}\n        \\frac{1}{N}\\sum_{i=1}^Nc\\left(\\tildePhi{\\omega}(x_i)^\\adjoint \\theta,\n        y_i\\right) + \\frac{\\lambda}{2}\\norm{\\theta}^2_{\\tildeH{\\omega}}$.\n    Finding a minimizer $\\theta_{\\seq{s}}$ over $\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp$ is not the same as finding a minimizer over\n    $\\tildeH{\\omega}$. Although in both cases Mazur-Schauder's theorem\n    guarantees that the respective minimizers are unique, they might not be the\n    same. Since $\\tildeW{\\omega}$ is bounded, $\\Ker \\tildeW{\\omega}$ is closed,\n    so that we can perform the decomposition $\\tildeH{\\omega}=\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp\\oplus \\left(\\Ker \\tildeW{\\omega}\\right)$. 
Then,\n    by linearity of $W$ and the fact that for all\n    $\\theta^{\\parallel}\\in\\Ker \\tildeW{\\omega}$,\n    $\\tildeW{\\omega}\\theta^{\\parallel}=0$, if $\\lambda > 0$ we have\n    $\\theta_{\\seq{s}}=\\argmin_{\\theta\\in\\tildeH{\\omega}}\n    \\frac{1}{N}\\sum_{i=1}^Nc\\left(\\tildePhi{\\omega}(x_i)^\\adjoint \\theta,\n    y_i\\right) + \\frac{\\lambda}{2}\\norm{\\theta}^2_{\\tildeH{\\omega}}$.\n    Indeed, $\\theta_{\\seq{s}} =\\argmin_{\\substack{\\theta^{\\perp}\\in\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp, \\\\ \\theta^{\\parallel}\\in\\Ker\n    \\tildeW{\\omega}}} \\frac{1}{N} \\sum_{i=1}^N c\\left(\\left( \\tildeW{\\omega}\n    \\theta^{\\perp} \\right)(x_i) + \\underbrace{\\left( \\tildeW{\\omega}\n    \\theta^{\\parallel} \\right)(x_i)}_{=0 \\enskip \\text{for all}\\enskip\n    \\theta^{\\parallel} }, y_i\\right) +\n    \\frac{\\lambda}{2}\\norm{\\theta^\\perp}^2_{\\tildeH{\\omega}} +\n    \\underbrace{\\frac{\\lambda}{2} \\norm{\\theta^{\\parallel} }^2_{%\n    \\tildeH{\\omega}} }_{=0 \\enskip\\text{only if}\\enskip \\theta^{\\parallel}=0}$.\n    Thus $\\theta_{\\seq{s}}=\\argmin_{\\theta^{\\perp}\\in\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp} \\frac{1}{N}\\sum_{i=1}^Nc\\left( \\left(\n    \\tildeW{\\omega} \\theta^{\\perp} \\right)(x_i), y_i \\right) + \\frac{\\lambda}{2}\n    \\norm{\\theta^\\perp}^2_{\\tildeH{\\omega}}$. Hence minimizing over $\\left(\\Ker\n    \\tildeW{\\omega}\\right)^\\perp$ or $\\tildeH{\\omega}$ is the\n    same when $\\lambda > 0$. Finally,\n    % Eventually for any outcome of $\\omega_j \\sim\n    % \\probability_{\\dual{\\Haar},\\rho}$ \\ac{iid},\n    $\\theta_{\\seq{s}}=\\argmin_{\\theta\\in\\tildeH{\\omega}}\n    \\frac{1}{N}\\sum_{i=1}^Nc\\left(\\tildePhi{\\omega}(x_i)^\\adjoint \\theta,\n    y_i\\right) + \\frac{\\lambda}{2}\\norm{\\theta}^2_{\\tildeH{\\omega}}$.\n\\end{proof}\n\n", "meta": {"hexsha": "04a1630e8d746e34f2d4798297ba01e5671db7af", "size": 75320, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendix.tex", "max_stars_repo_name": "RomainBrault/JMLR-ORFF", "max_stars_repo_head_hexsha": "b7e6d09e7d308d4cc9abeb5a554defe76ce3c19b", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "appendix.tex", "max_issues_repo_name": "RomainBrault/JMLR-ORFF", "max_issues_repo_head_hexsha": "b7e6d09e7d308d4cc9abeb5a554defe76ce3c19b", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendix.tex", "max_forks_repo_name": "RomainBrault/JMLR-ORFF", "max_forks_repo_head_hexsha": "b7e6d09e7d308d4cc9abeb5a554defe76ce3c19b", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.8190743338, "max_line_length": 80, "alphanum_fraction": 0.6204062666, "num_tokens": 28536, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619350028205, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.5976279554230269}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\graphicspath{ {./images/} }\n\\usepackage{subfig}\n\n\\title{CTA200 Assignment 2 Question 3}\n\\author{Caleb Lammers}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Question 1.}\n\n\\subsection*{Methods}\n\nTo create the graphs shown below, this program iterates\nthrough points c, determining for each whether $\\left| z \\right|$ diverges\nand plotting the point accordingly. In more detail, for $c = x + iy$, the\nprogram starts with $x = -2 +$incr (for some small incr) and checks the\ndivergence of $\\left| z \\right|$ for $y = -2 +$incr, $y = -2 +2\\cdot$incr,\n$...$, $y = -2 +n\\cdot$incr until $y = -2 +n\\cdot$incr $>= 2$. Then, the\nprogram repeats this whole process for $x = -2 + 2\\cdot$incr and so on up until $x = -2 +\nn\\cdot$incr (note that it would be more efficient to use np.meshgrid, however, this is more intuitive). To check the divergence of $\\left| z \\right|$ for a given point c, the program\nrecursively calculates $z$ by $z_{i+1} = z_i^2 + c$ with $z_0$ = 0. If at some point in the\nfirst 50 iterations $\\left| z \\right| > 10$ we conclude that $\\left| z\n\\right|$ will diverge. In fact, this is actually overkill as it is well known\n(and can be shown relatively easily) that if $\\left| z \\right| > 2$,\n$\\left| z \\right|$ certainly diverges.\\textsuperscript{1} The choice of 50\niterations is somewhat arbitrary but for high numbers of iterations there\nare few difference in the image, due to the limited resolution (there are few\nnoticeable differences with 100 or even 1000 iterations). For higher resolution images, a smaller incr value is required, however, this increases the amount of computation.\nUsing matplotlib.pyplot, in graph 1, the points for which $\\left| z \\right|$ diverged were plotted in white and those which did not were coloured black. In the second graph, points for which $\\left| z \\right|$ did not diverge were again plotted in black but those that did were coloured using a colourmap according to the number of iterations until $\\left| z \\right| > 10$. The third graph was obtained by zooming into graph 2. \\\\\n\n 1 - http://mathforum.org/library/drmath/view/68518.html\n\n\\newpage\n\\subsection*{Analysis}\n\n\\begin{figure}[h]\n  \\centering\n\\includegraphics[scale=0.5]{Black and White}\n\\caption{Black and white graph (incr $ = 0.002$)}\n\\end{figure}\n\nIn the black and white graph, we can see that the points for which $\\left| z\n\\right|$ did not diverge are around the origin, with most points having a negative\nreal part. With the recursive equation in mind, this makes sense as we are\nrepeatedly adding $c$. That is, if c itself has a very large magnitude, it is\nclear that the recursion equation will diverge. As we are squaring $\\left| z\n\\right|$, if $c$ has a negative real part, adding $c$ will decrease the magnitude of $z$\n(as long as $z$ does not have too large of an imaginary part). Furthermore, even if you do not recognize this as the Mandelbrot set, it is visually clear that the graph has additional interesting properties. 
These properties are especially noticeable after colouring the diverging points.\n\n\\begin{figure}[h]\n  \\centering\n\\includegraphics[scale=0.5]{Coloured}\n\\caption{Coloured graph (incr $ = 0.002$)}\n\\end{figure}\n\nWe see that a higher number of iterations is required to determine whether $\\left| z \\right|$ diverges for points near the perimeter of the figure and far fewer iterations are\nrequired for points far from the perimeter, which is to be expected. Additionally, the coloured graph is quite aesthetically pleasing. One important observation to note is that there appear to be smaller copies of the set branching from the original shape.\n\n\\begin{figure}[h]\n  \\centering\n\\includegraphics[scale=0.5]{Zoomed In}\n\\caption{Zoom in on Figure 2 (incr $ = 0.002$)}\n\\end{figure}\n\nIn the zoomed image, we can especially see the so-called ``self-similarity'' of\nthe Mandelbrot set that makes it a fractal. In fact, in this image we can even\nsee smaller similar shapes branching off again. This actually\ncontinues indefinitely and there are many videos on the internet zooming into the\nMandelbrot set at a specific point to a very high level of magnification. Indeed, the Mandelbrot set has become a staple of mathematical beauty.\n\n\\section*{Question 2.}\n\n\\subsection*{Methods}\n\nUsing the Scipy ODE solver and motivated choices of parameters, graphs\nrepresenting the idealized spreading of disease can be created. Using the SIR model\nODEs, the derivatives of $S$, $I$, and $R$ can each be calculated at time $t$.\nBy specifying $t_0$, $\\beta$, $\\gamma$, $t_{end}$, $dt$ and initial values for\n$S$, $I$, and $R$, Scipy's ODE solver is used to solve the system numerically.\nThen, graphs are created using matplotlib.pyplot. From equation\n(1), we see that $\\beta$ is controlling the rate at which the number of\nsusceptible members of the population is decreasing. That is, roughly,\n$\\beta$ is controlling how infectious the disease is. In equation (2),\nwe see that $\\gamma$ corresponds to the rate at which people recover\nfrom the disease. With these interpretations in mind, we will be able to choose\nreasonable values for $\\beta$ and $\\gamma$ depending on the situation.\nTo incorporate deaths ($D$) into the model, notice that in this model recovered\nand dead members of the population are analogous. So, the new equation is:\n\\[ \\frac{dD}{dt} = \\lambda I\\]\nOf course, the addition of this equation does not affect equation (3) or\nequation (1) directly (which is clear by the analogy to equation (3)). 
However,\nnow we have \\[ 1000 = S + I + R + D\\]\n\\[\\Rightarrow 0 = \\frac{dS}{dt} +\\frac{dI}{dt} + \\frac{dR}{dt} + \\frac{dD}{dt}\\]\n\\[\\Rightarrow \\frac{dI}{dt} = \\frac{\\beta S I}{N}  - \\gamma I - \\lambda I\\]\n\nThis makes sense, as people who die from the disease can no longer spread the\ndisease, which should decrease $I$.\n\n\\subsection*{Analysis}\n\nDepending on the values of $\\beta$ and $\\gamma$, over the course of $t = 0$ to\n$t = 200$, varying proportions of the population will be infected and\nrecover/not recover.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfloat[$\\beta = 0.3, \\gamma = 0.2$]{\\label{fig:a}\\includegraphics[width=0.46\\linewidth]{Graph 1}}\\qquad\n\\subfloat[$\\beta = 0.4, \\gamma = 0.03$]{\\label{fig:b}\\includegraphics[width=0.46\\linewidth]{Graph 2}}\\\\\n\\subfloat[$\\beta = 0.3, \\gamma = 0.01$]{\\label{fig:c}\\includegraphics[width=0.46\\textwidth]{Graph 3}}\\qquad\n\\subfloat[$\\beta = 0.05, \\gamma = 0.01$]{\\label{fig:d}\\includegraphics[width=0.46\\textwidth]{Graph 4}}\n\\caption{Plots of S, I, and R for different motivated $\\beta$ and $\\gamma$}\n\\label{fig:myfig}\n\\end{figure}\n\nDifferent physical situations require varying values of $\\beta$ and $\\gamma$.\nFigure 4. a) depicts a disease that has a high recovery rate and a somewhat low\ninfection rate. This is modelled by a medium $\\beta$ and a relatively high\n$\\gamma$. Figure 4. b) shows a disease that has a high infection rate and a\nrecovery rate that is high enough for everyone to recover but low enough that\neveryone is infected. This type of disease requires a relatively high $\\beta$ and a medium $\\gamma$. Next, Figure 4. c) represents a disease with a somewhat high infection rate but\na low recovery rate, modelled with a medium $\\beta$ and a small $\\gamma$.\nLastly, Figure 4. d) shows a disease with a low infection rate and a low\nrecovery rate, which requires a low $\\beta$ and $\\gamma$. Of course, there are\nmany other examples of physically possible situations that each require their own modifications to $\\beta$ and $\\gamma$.\n\n\\begin{figure}[h]\n  \\centering\n\\includegraphics[scale=0.4]{Graph 5}\n\\caption{Plot including deaths with $\\beta = 0.2$, $\\gamma = 0.07$, $\\lambda = 0.03$}\n\\end{figure}\n\nWhen also considering deaths, there are even more possible circumstances. Above is the\ngraph of a disease with a relatively low infection rate, high recovery rate,\nand medium mortality rate (the parameters required are as you would expect). 
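To make the integration step concrete, here is a minimal sketch of how the extended $S$, $I$, $R$, $D$ system can be solved with Scipy's \\texttt{solve\\_ivp} (this is not the assignment's actual code; the function name \\texttt{sir\\_d} is illustrative, the parameters match Figure 5, and the initial condition of 999 susceptible and 1 infected is an assumption for illustration):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\nimport matplotlib.pyplot as plt\n\nN = 1000                             # total population\nbeta, gamma, lam = 0.2, 0.07, 0.03   # infection, recovery, mortality\n\ndef sir_d(t, u):\n    # SIR model extended with deaths: dD/dt = lambda * I, with the\n    # extra -lambda * I term added to dI/dt as derived above.\n    S, I, R, D = u\n    dS = -beta * S * I / N\n    dI = beta * S * I / N - gamma * I - lam * I\n    dR = gamma * I\n    dD = lam * I\n    return [dS, dI, dR, dD]\n\nsol = solve_ivp(sir_d, (0, 200), [999, 1, 0, 0], dense_output=True)\nt = np.linspace(0, 200, 400)\nfor values, label in zip(sol.sol(t), ["S", "I", "R", "D"]):\n    plt.plot(t, values, label=label)\nplt.legend()\nplt.show()\n\\end{verbatim}\n\n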
Despite starting with idealized equations, it is interesting to see the utility they provide in modelling something as complex as the spread of a disease.\n\n\n\\end{document}\n", "meta": {"hexsha": "3664b12b40d6a62d46b1c00e34a2f857c1f5490c", "size": 8060, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignment2/Q3.tex", "max_stars_repo_name": "CalebLammers/CTA200", "max_stars_repo_head_hexsha": "2b8e442f10479b8f82a9b8c4558a45aa9e791118", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignment2/Q3.tex", "max_issues_repo_name": "CalebLammers/CTA200", "max_issues_repo_head_hexsha": "2b8e442f10479b8f82a9b8c4558a45aa9e791118", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment2/Q3.tex", "max_forks_repo_name": "CalebLammers/CTA200", "max_forks_repo_head_hexsha": "2b8e442f10479b8f82a9b8c4558a45aa9e791118", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.1631205674, "max_line_length": 430, "alphanum_fraction": 0.7449131514, "num_tokens": 2198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.8244619285331332, "lm_q1q2_score": 0.5976279458330899}}
{"text": "\\documentclass{article}\n%\\usepackage[utf8]{vietnam}\n\\usepackage{amsmath,amssymb}\n\\usepackage{tikz}\n \n\n\\begin{document}\n\n\\def\\firstcircle{(0,0) circle (2cm)}\n\\def\\secondcircle{(0:2.5cm) circle (2cm)}\n\n\\section{The Big/Little-\\texttt{\\{oh,omega,theta\\}} Notations}\nRelationships btw the sets $O(g(n)),\\; o(g(n)),\\; \\Omega(g(n)),\\; \\omega(g(n))\\; \\text{and}\\; \\Theta(g(n)):$ \n\\vskip 1cm\n  \\begin{tikzpicture}\n    %\\filldraw[color=green, fill=green!50, very thick] \\firstcircle (0,1.8) node[above,left] {$O(g(n))$};\n    %\\filldraw[color=red, fill=red!50, very thick] \\secondcircle (2cm,1.8) node[above,right] {$\\Omega(g(n))$};\n    %\\draw[very thick] \\firstcircle (0,1.8) node[above,left,black] {$O(g(n))$};\n    %\\draw[very thick] \\secondcircle (2cm,1.8) node[above,right,black] {$\\Omega(g(n))$};\n    \\draw \\firstcircle (-2cm,-1cm) node[above,left,black] {$O(g(n))$};\n    \\draw \\secondcircle (4.5cm,-1cm) node[above,right,black] {$\\Omega(g(n))$};\n    \\begin{scope}[fill opacity=0.5]\n      \\fill[red] \\firstcircle;\n      \\fill[green] \\secondcircle;\n    \\end{scope}\n    \\node[left] at (0,0) {$o(g(n))$};\n    \\node[right] at (0:2.5cm) {$\\omega(g(n))$};\n    \\node at (0:1.25cm) {$\\tiny{\\Theta(g(n))}$};\n  \\end{tikzpicture}\n\n  \\begin{align*}\n    %\\Omega(g(n)) &\\,\\cap\\, O(g(n)) \\,=\\, \\Theta\\left(g(n)\\right) \\\\\n    %o(g(n)) &\\,\\subset\\, O(g(n)) \\,-\\, \\Theta\\left(g(n)\\right)\n    %\\Omega(g(n)) \\,\\cap\\, O(g(n))\\; &=\\, \\Theta(g(n)) \\\\\n    \\Theta(g(n)) &=\\, \\Omega(g(n)) \\,\\cap\\, O(g(n)) \\\\\n    %o(g(n)) &\\subset\\, O(g(n)) \\,-\\, \\Theta\\left(g(n)\\right) \\\\\n    %o(g(n)) &\\subsetneq\\, O(g(n)) \\,\\setminus\\, \\Omega\\left(g(n)\\right) \\\\\n    O(g(n)) &= o(g(n)) \\sqcup \\Theta(g(n)) \\\\\n    \\Omega(g(n)) &= \\omega(g(n)) \\sqcup \\Theta(g(n)) \\\\\n    %% redundant\n    %o(g(n))      \\,\\cap\\, \\Omega(g(n)) &= \\emptyset\n  \\end{align*}\n\\vskip -1em\n\\noindent where $\\sqcup$ denotes disjoint union.\n\\vskip 2em\n\n\\noindent\n\\textbf{\\textsl{3.1-1}}\nLet $f(n)$ and $g(n)$ be asymptotically nonnegative functions. Using the basic definition of $\\Theta$-notation,\nprove that $\\max\\left(f(n), g(n)\\right) = \\Theta(f(n) + g(n))$.\n\\newline\n\\newline\n%\\hline\n\nIf we allow us to reframe the question. Define\n\\begin{align*}\n\th \\!:\\quad n \\;&\\mapsto\\quad \\max(f(n), g(n)) \\\\\n\tl \\!:\\quad n \\;&\\mapsto\\quad f(n) + g(n)\n\\end{align*}\nThen we are supposed to show that $h(n) = \\Theta(l(n))$.\n\n\\noindent\nSince $f$ and $g$ are asymtotically nonnegative, we have\n\\begin{align*}\n\t& \\exists\\; n_1 \\quad\\textrm{s.t.}\\quad f(n) \\ge 0 \\quad\\forall\\; n \\ge n_1 \\\\\n\t& \\exists\\; n_2 \\quad\\textrm{s.t.}\\quad g(n) \\ge 0 \\quad\\forall\\; n \\ge n_2\n\\end{align*}\nLet $n_3 = \\max(n_1, n_2)$. 
Then for the constants $c_1=\\frac{1}{2}, c_2=1, n_3$, we see that\n\\begin{align*}\n\tc_1 l(n) &\\le h(n) \\le c_2 l(n) \\quad\\forall\\; n \\ge n_3\\,, \\quad \\textrm{i.e.}\\\\\n\t\\frac{1}{2}(f(n) + g(n)) &\\le \\max\\left(f(n), g(n)\\right) \\le f(n) + g(n) \\quad\\forall\\; n \\ge n_3\\,.\n\\end{align*}\n\\hfill$\\blacksquare$\n\n\\end{document}\n", "meta": {"hexsha": "7dc4f8ca61f7f9a2057d61fb499ca0412c98e529", "size": 2938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CLRS/ch03-growth_of_fn/01-asymptotic_notation/notes.tex", "max_stars_repo_name": "phunc20/algorithms", "max_stars_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CLRS/ch03-growth_of_fn/01-asymptotic_notation/notes.tex", "max_issues_repo_name": "phunc20/algorithms", "max_issues_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CLRS/ch03-growth_of_fn/01-asymptotic_notation/notes.tex", "max_forks_repo_name": "phunc20/algorithms", "max_forks_repo_head_hexsha": "04674829311cde7bb173252b8a41620aae4b14ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6578947368, "max_line_length": 111, "alphanum_fraction": 0.5840707965, "num_tokens": 1199, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8244619263765707, "lm_q1q2_score": 0.5976279442698618}}
{"text": "\\chapterimage{head2.png} % Chapter heading image\n\\chapter{EM Algorithm}\n\n\\section{The General EM Algorithm}\n\\begin{definition}[The joint probability]\n    \\begin{equation}\n        p(\\textbf{x}, \\textbf{y})= p(x_0)\\prod_{t=1}^{T} p(x_{t}|x_{t-1}) p(y_t|x_t)\n    \\end{equation}\n    \\begin{center}\n        \\includegraphics[scale=0.6]{ch2/four_states_ex.png}   \n    \\end{center}\nIn this example\n    \\begin{equation}\n    \\begin{split}\n        p(\\textbf{x}, \\textbf{y})= &[p(x_0) p(x_1|x_0) p(y_1|x_1)] [p(x_2|x_1) p(y_2|x_2)]  [p(x_3|x_2) p(y_3|x_3)] \\\\\n        &[p(x_4|x_3) p(y_4|x_4)]\n    \\end{split}\n    \\end{equation}\n\\end{definition}\n\n\\begin{definition}[The likelihood functional]\n\\begin{equation}\n    L[\\theta] = p(\\textbf{y};\\theta) = \\int d\\textbf{x}p(\\textbf{x}, \\textbf{y})\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[The log-likelihood functional]\n\\begin{equation}\n    l[\\theta] = \\ln{(\\int d\\textbf{x}p(\\textbf{x}, \\textbf{y}))}\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[Solve Euler-Lagrange equation]\nMaximization for the optimal parameter set requires taking derivatives of the log-likelihood functional and solves the resulting Euler-Lagrange equation:\n\\begin{equation}\n        \\frac{\\partial l[\\theta]}{\\partial \\theta} = \\frac{\\partial}{\\partial \\theta} \\ln{(\\int d\\textbf{x}p(\\textbf{x}, \\textbf{y}))} = 0\n\\end{equation}\nWe can further get\n\\begin{equation}\n    \\frac{\\partial l[\\theta]}{\\partial \\theta} =  \\frac{\\partial \\ln{L[\\theta]}}{\\partial \\theta} = \\frac{1}{L[\\theta]}\\frac{\\partial L[\\theta]}{\\partial \\theta}\n\\end{equation}\nBecause we need to do maximization according to previous step($k$), we want to solve the\nfollowing equation\n\\begin{equation}\n    0 = \\frac{\\partial l^k[\\theta]}{\\partial \\theta} = \\frac{1}{L^k[\\theta]}\\frac{\\partial L^k[\\theta]}{\\partial \\theta}\n\\end{equation}\nAnd we can express the likelihood functional as\n\\begin{equation}\n    L^k[\\theta] = p(\\textbf{y}) = \\sum_{\\tau=0}^{T-1} \\left<\\alpha^k_{t_{\\tau}}\\right|  e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{\\tau+1}}\\right>\n\\end{equation}\nTherefore, the maximization problem becomes to solve\n\\begin{equation}\n    0 =  \\frac{\\partial l^k[\\theta]}{\\partial \\theta} = \\frac{1}{L^k[\\theta]} \\sum_{\\tau=0}^{T-1} \\frac{\\partial}{\\partial \\theta} \\left<\\alpha^k_{t_{\\tau}}\\right|  e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{\\tau+1}}\\right>\n\\end{equation}\n\\end{definition}\n\n\\section{The expected log of the complete likelihood}\n\\begin{definition}[$\\left<\\alpha^k_{t_{0}}\\right|  e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{1}}\\right>$]\n\\begin{equation}\n    \\left< \\alpha_{t_0} | e^{-\\textbf{H}\\Delta t} | \\beta_{t_1} \\right> = \\int p(x_0) p(x_1|x_0) p(y_2,y_3,y_4|x_1) dx_1= p(x_0,y_2,y_3,y_4)\n\\end{equation}\n    \\begin{center}\n        \\includegraphics[scale=0.25]{ch4/alpha_t0_e_beta_t1.pdf}   \n    \\end{center}\n\\end{definition}\n\n\\begin{definition}[$\\left<\\alpha^k_{t_{1}}\\right|  e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{2}}\\right>$]\n\\begin{equation}\n    \\left< \\alpha_{t_1} | e^{-\\textbf{H}\\Delta t} | \\beta_{t_2} \\right> = \\int p(x_1,y_1) p(x_2|x_1) p(y_3,y_4|x_2) dx_2= p(x_1,y_1,y_3,y_4)\n\\end{equation}\n\\begin{center}\n    \\includegraphics[scale=0.35]{ch4/alpha_t1_e_beta_t2.pdf}   \n\\end{center}\n\\end{definition}\n\n\\begin{definition}[$\\left<\\alpha^k_{t_{2}}\\right|  
e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{3}}\\right>$]\n\\begin{equation}\n    \\left< \\alpha_{t_2} | e^{-\\textbf{H}\\Delta t} | \\beta_{t_3} \\right> = \\int p(x_2,y_1,y_2) p(x_3|x_2) p(y_4|x_3) dx_3= p(x_2,y_1,y_2,y_4)\n\\end{equation}\n\\begin{center}\n    \\includegraphics[scale=0.35]{ch4/alpha_t2_e_beta_t3.pdf}   \n\\end{center}\n\\end{definition}\n\n\\begin{definition}[$\\left<\\alpha^k_{t_{3}}\\right|  e^{-\\textbf{H} \\Delta t} \\left|\\beta^k_{t_{4}}\\right>$]\n\\begin{equation}\n    \\left< \\alpha_{t_3} | e^{-\\textbf{H}\\Delta t} | \\beta_{t_4} \\right> = \\int p(x_3,y_1,y_2,y_3) p(x_4|x_3) dx_4= p(x_3,y_1,y_2,y_3)\n\\end{equation}\n\\begin{center}\n    \\includegraphics[scale=0.25]{ch4/alpha_t3_e_beta_t4.pdf}   \n\\end{center}\n\\end{definition}\n\n\\section{Update the equilibrium probability density}\n\\begin{definition}\n    The basis functions in Def.(\\ref{basisfunctions}) were used to update\n\\begin{align}\n    p_{\\rm eq}^{k+1}(x) &=\\mathbb{E}^k_{X|Y} \\left[ \\delta(x-X) \\right] \\\\\n    &= p^k_{\\rm{eq}}(x) \\sum_{i,j} \\mathbb{E}^k_{X|Y}[a_ib_j] \\phi_i(x)\\phi_j(x) \\\\\n    &= \\sum_{i,j} \\psi_i(x) \\mathbb{E}^k_{X|Y}[a_ib_j] \\psi_j(x).\n\\end{align}\n\\end{definition}\n\n\\begin{definition}[Update from matrix multiplication]\n\\begin{align}\n    p_{\\rm eq}^{k+1}(x) = \\mathrm{diag}(\\Psi \\mathbb{E}^k_{X|Y}[a_ib_j] \\Psi^{\\dagger})\n\\end{align}\n\\end{definition}\n\n\\begin{example}[$N_v=3$] In the matrix multiplication, I abbreviate $\\mathbb{E}^k_{X|Y}[a_ib_j]$ as $a_ib_j$, and the grids are set like the bottom figure. We will show\n\\begin{align*}\n    p_{\\rm eq}^{k+1}(x) &=\\sum_{i,j} \\psi_i(x) \\mathbb{E}^k_{X|Y}[a_ib_j] \\psi_j(x) \\\\\n    &= \\mathrm{diag}(\\Psi \\mathbb{E}^k_{X|Y}[a_ib_j] \\Psi^{\\dagger})\n\\end{align*}\n\\begin{center}\n    \\includegraphics[scale=1.5]{ch4/ex1_grids.pdf}   \n\\end{center}\n\\begin{align*} \n&\\Psi \\mathbb{E}^k_{X|Y}[a_ib_j] \\Psi^{\\dagger} = \\\\\n&\\begin{bmatrix}\n    \\psi_1(x_1) & \\psi_2(x_1) & \\psi_3(x_1)  \\\\\n    \\psi_1(x_2) & \\psi_2(x_2) & \\psi_3(x_2) \\\\\n    \\psi_1(x_3) & \\psi_2(x_3) & \\psi_3(x_3)  \\\\\n    \\psi_1(x_4) & \\psi_2(x_4) & \\psi_3(x_4)  \\\\\n    \\psi_1(x_5) & \\psi_2(x_5) & \\psi_3(x_5)  \n\\end{bmatrix}\n\\begin{bmatrix}\n    a_1 b_1 & a_1 b_2 & a_1 b_3  \\\\\n    a_2 b_1 & a_2 b_2 & a_2 b_3 \\\\\n    a_3 b_1 & a_3 b_2 & a_3 b_3\n\\end{bmatrix}\n\\begin{bmatrix}\n    \\psi_1(x_1) & \\psi_1(x_2) & \\psi_1(x_3) & \\psi_1(x_4) & \\psi_1(x_5) \\\\\n    \\psi_2(x_1) & \\psi_2(x_2) & \\psi_2(x_3) & \\psi_2(x_4) & \\psi_2(x_5) \\\\\n    \\psi_3(x_1) & \\psi_3(x_2) & \\psi_3(x_3) & \\psi_3(x_4) & \\psi_3(x_5) \n\\end{bmatrix} \\\\\n&=\\begin{bmatrix}\n    \\sum_{i=1}^{3}a_i b_1 \\psi_i(x_1) & \\sum_{i=1}^{3}a_i b_2 \\psi_i(x_1) & \\sum_{i=1}^{3}a_i b_3 \\psi_i(x_1) \\\\\n    \\sum_{i=1}^{3}a_i b_1 \\psi_i(x_2) & \\sum_{i=1}^{3}a_i b_2 \\psi_i(x_2) & \\sum_{i=1}^{3}a_i b_3 \\psi_i(x_2) \\\\\n    \\sum_{i=1}^{3}a_i b_1 \\psi_i(x_3) & \\sum_{i=1}^{3}a_i b_2 \\psi_i(x_3) & \\sum_{i=1}^{3}a_i b_3 \\psi_i(x_3) \\\\\n    \\sum_{i=1}^{3}a_i b_1 \\psi_i(x_4) & \\sum_{i=1}^{3}a_i b_2 \\psi_i(x_4) & \\sum_{i=1}^{3}a_i b_3 \\psi_i(x_4) \\\\\n    \\sum_{i=1}^{3}a_i b_1 \\psi_i(x_5) & \\sum_{i=1}^{3}a_i b_2 \\psi_i(x_5) & \\sum_{i=1}^{3}a_i b_3 \\psi_i(x_5) \n\\end{bmatrix}\\\\\n&\\begin{bmatrix}\n    \\psi_1(x_1) & \\psi_1(x_2) & \\psi_1(x_3) & \\psi_1(x_4) & \\psi_1(x_5) \\\\\n    \\psi_2(x_1) & \\psi_2(x_2) & \\psi_2(x_3) & \\psi_2(x_4) & \\psi_2(x_5) \\\\\n    \\psi_3(x_1) & 
\psi_3(x_2) & \psi_3(x_3) & \psi_3(x_4) & \psi_3(x_5) 
\end{bmatrix}\\
&=
\begin{bmatrix}
\sum_{j=1}^{3} b_j \psi_j(x_1) \sum_{i=1}^{3}a_i \psi_i(x_1) &  \sum_{j=1}^{3} b_j \psi_j(x_2) \sum_{i=1}^{3}a_i \psi_i(x_1) &
\cdots & \sum_{j=1}^{3} b_j \psi_j(x_5) \sum_{i=1}^{3}a_i \psi_i(x_1) \\
\sum_{j=1}^{3} b_j \psi_j(x_1) \sum_{i=1}^{3}a_i \psi_i(x_2) &  \sum_{j=1}^{3} b_j \psi_j(x_2) \sum_{i=1}^{3}a_i \psi_i(x_2) &
\cdots & \sum_{j=1}^{3} b_j \psi_j(x_5) \sum_{i=1}^{3}a_i \psi_i(x_2) \\
\sum_{j=1}^{3} b_j \psi_j(x_1) \sum_{i=1}^{3}a_i \psi_i(x_3) &  \sum_{j=1}^{3} b_j \psi_j(x_2) \sum_{i=1}^{3}a_i \psi_i(x_3) &
\cdots & \sum_{j=1}^{3} b_j \psi_j(x_5) \sum_{i=1}^{3}a_i \psi_i(x_3) \\
\sum_{j=1}^{3} b_j \psi_j(x_1) \sum_{i=1}^{3}a_i \psi_i(x_4) &  \sum_{j=1}^{3} b_j \psi_j(x_2) \sum_{i=1}^{3}a_i \psi_i(x_4) &
\cdots & \sum_{j=1}^{3} b_j \psi_j(x_5) \sum_{i=1}^{3}a_i \psi_i(x_4) \\
\sum_{j=1}^{3} b_j \psi_j(x_1) \sum_{i=1}^{3}a_i \psi_i(x_5) &  \sum_{j=1}^{3} b_j \psi_j(x_2) \sum_{i=1}^{3}a_i \psi_i(x_5) &
\cdots & \sum_{j=1}^{3} b_j \psi_j(x_5) \sum_{i=1}^{3}a_i \psi_i(x_5)
\end{bmatrix}
\end{align*}
\end{example}

\section{EM Test}
\begin{center}
    \includegraphics[scale=0.36]{ch4/forward_backward_v2.png}
\end{center}

\subsection{Initial Guess Test}
\begin{center}
    \includegraphics[scale=0.55]{ch4/initial_guess_test.pdf}
\end{center}

\subsection{Troubleshooting: Double Peak}
\begin{center}
    \includegraphics[scale=0.55]{ch4/double_peak_problem.pdf}
\end{center}
\subsubsection{Step 1: Detect abrupt change}
\begin{itemize}
    \item Use finite differences to obtain $\frac{d p^{[k]}_{eq}(x)}{dx}$ and $\frac{d^2 p^{[k]}_{eq}(x)}{dx^2}$.
    \item If $|\frac{d^2 p_{eq}(x)}{dx^2}|>1$, add the point to the list of ``change points''.
    \item If $\text{Number}(\text{Change points}) > 0$, apply smoothing; otherwise, do not smooth. A minimal sketch of this detection step follows the figures below.
\end{itemize}
\begin{center}
    \includegraphics[scale=0.4]{ch4/double_peak_problem_derivative.pdf}
\end{center}
\begin{center}
    \includegraphics[scale=0.5]{ch4/abrupt_detect.pdf}
\end{center}
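The following is a minimal Python sketch of the detection step above. The toy grid, the toy density, and the name \texttt{detect\_change\_points} are illustrative assumptions, not part of the actual implementation; only the finite-difference second derivative and the threshold of $1$ come from the procedure above.
\begin{verbatim}
import numpy as np

def detect_change_points(p_eq, x, threshold=1.0):
    """Flag grid points where |d^2 p_eq / dx^2| exceeds a threshold."""
    dx = x[1] - x[0]                              # uniform grid spacing (assumption)
    d2p = np.gradient(np.gradient(p_eq, dx), dx)  # finite-difference 2nd derivative
    return np.where(np.abs(d2p) > threshold)[0]

# Smooth only if any change points were detected, as in Step 1.
x = np.linspace(0.0, 1.0, 101)
p_eq = np.exp(-(x - 0.5)**2 / 0.005)              # toy density with a sharp peak
change_points = detect_change_points(p_eq, x)
needs_smoothing = change_points.size > 0
\end{verbatim}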
\subsubsection{Step 2: The cubic smoothing spline}
\begin{definition}[Smoothing Spline]
    Let $\{x_{i},Y_{i}:i=1,\dots ,n\}$ be a set of observations, modeled by the relation $Y_{i}=f(x_{i})+\epsilon _{i}$, where the $\epsilon _{i}$ are independent, zero-mean random variables (usually assumed to have constant variance). The cubic smoothing spline estimate $\hat{f}$ of the function $f$ is defined to be the minimizer (over the class of twice differentiable functions) of
    \begin{equation}
         \sum _{i=1}^{n}\{Y_{i}-{\hat {f}}(x_{i})\}^{2}+\lambda \int {\hat {f}}''(x)^{2}\,dx
    \end{equation}
    \begin{itemize}
        \item $\lambda \geq 0$ is a smoothing parameter, controlling the trade-off between fidelity to the data and roughness of the function estimate. It is often estimated by generalized cross-validation, or by restricted maximum likelihood (REML), which exploits the link between spline smoothing and Bayesian estimation (the smoothing penalty can be viewed as being induced by a prior on $f$).
        \item As $\lambda \to 0$, the smoothing spline converges to the interpolating spline.
        \item As $\lambda \to \infty$ (infinite smoothing), the roughness penalty becomes paramount and the estimate converges to a linear least-squares estimate.
    \end{itemize}
\end{definition}
\begin{center}
    \includegraphics[scale=0.45]{ch4/double_peak_smooth_afterEM.pdf}
\end{center}

\subsubsection{Result}
\begin{center}
    \includegraphics[scale=0.6]{ch4/aftersmooth_result.pdf}
\end{center}

\section{Optimize diffusion coefficient $D$}
Here, we use the following simulation data to understand the procedure for updating $D$.
\begin{center}
    \includegraphics[scale=0.45]{ch4/simu_ref_learn_D.png}
\end{center}
We fix $p_{eq}^{[k]}$ at the true $p_{eq}$ and vary $D$ to see its effect on the eigenvalues $\lambda_i$ and the log-likelihood $l[\theta]$.

\subsection{Hamiltonian and its eigenvalues scale linearly with diffusion coefficient}
\begin{align*}
    \frac{\partial \rho(x,t)}{\partial t} &= -\bm{H}^0 \rho(x,t), \\
    \bm{H}^0 &= -D\nabla^2  + V_{\rm eff}(x), \\
    V_{\rm eff}  &= \frac{DF'(x)}{2} + \frac{DF^2(x)}{4}.
\end{align*}
Since $V_{\rm eff}(x)$ is itself proportional to $D$, the operator factorizes as $\bm{H}^0 = D\left(-\nabla^2 + \frac{F'(x)}{2} + \frac{F^2(x)}{4}\right)$, so every eigenvalue scales exactly linearly with $D$.
\begin{center}
    \includegraphics[scale=0.45]{ch4/eigenvalue_linear_with_D.pdf}
\end{center}
Only the first six eigenvalues are shown; all of the remaining eigenvalues also scale linearly with $D$.

\subsection{Log-likelihood as a function of $D$}
\begin{center}
    \includegraphics[scale=0.45]{ch4/D_loglikelihood_broad_search_1.pdf} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/D_loglikelihood_broad_search_3.pdf}
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/D_loglikelihood_broad_search_2.pdf}
\end{center}

\section{Complete EM}
\subsection{Initial Guess by Gaussian Kernel Density Estimation}
\begin{center}
    \includegraphics[scale=0.45]{ch4/Gaussian_Kde_harmonic_well.pdf} 
\end{center}

\subsection{Algorithm}
\begin{algorithm}[H]
    \SetAlgoLined
    \KwResult{$p^{k}_{\rm eq}$, $D^k$}
     initialize $p^0_{\rm eq}$ as Gaussian KDE \;
     initialize $D^0=\frac{k_B T}{6\pi \eta a}$\;
     
     \While{$(\ell[\theta^k] - \ell[\theta^{k-1}] > \num{1e-1})$ \& $(k < k_{\text{max}})$}{
       Eigen-decompose the Hermitian $\bm H$ for $\Psi$ and $\Lambda$\;
       Evaluate $\ell [\theta^k] = \sum_{\tau} |\alpha_{\tau}|$ via normalization\;
       Infer the latent states $\langle \alpha(t)|,|\beta(t) \rangle$\;
       Collect statistics of $\mathbb{E}^k_{X|Y}[a_ib_j] = \sum_{\tau}\Gamma^{\tau}_{ij}$\;
       Update $p_{\rm{eq}}^{k+1} = p^k_{\rm{eq}}(x) \sum_{ij} \mathbb{E}^k_{X|Y}[a_ib_j] \phi_i(x)\phi_j(x) $\;
       %
      \If{$k~\%~ 5 == 0$}{
       Detect abrupt change in $p^{k+1}_{\rm eq}$\;
       \If{abrupt change exists}{
           $p^{k+1}_{\rm eq}$ = Smooth($p^{k+1}_{\rm eq}$) 
       }
       Line search $D^{k+1} = \argmax_D \ell[p^{k+1}_{\rm eq},D]$\;
       }
       k = k + 1\;
     }
     \caption{Expectation-Maximization Statistical Learning for $F(x)$ and $D$}
\end{algorithm}

\subsection{The data structures for storing results}
\begin{align*}
    \mathrm{p\_container} &= 
    \begin{bmatrix}
        p^{0}_{\rm eq} \\ p^{1}_{\rm eq} \\ \vdots \\ p^{k}_{\rm eq}
    \end{bmatrix}\\
    \mathrm{D\_records} &= 
    \begin{bmatrix}
        D^{0} \\ D^{1} \\ \vdots \\ D^{k}
    \end{bmatrix}\\
    \mathrm{log\_likelihood\_records} &= 
    \begin{bmatrix}
        \ell[\theta^{0}] \\ \ell[\theta^{1}] \\ \vdots \\ \ell[\theta^{k}]
    \end{bmatrix}   
\end{align*}

\subsection{Results}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_xavg_50.pdf} 
\end{center}

\section{Regularization by using Trajectory Entropy}
The proposed statistical learning problem of Langevin dynamics from smFRET data is naturally underdetermined because we attempt to extract a continuous profile from a finite number of photons. A Bayesian prior is thus required to break the degeneracy in the parameter set. With the prior, the posterior function, $p(\theta | \textbf{y})$, for parameter optimization becomes
\begin{equation}
    p(\theta | \textbf{y}) = \frac{p(\textbf{y}|\theta) p(\theta)}{p(\textbf{y})}
\end{equation}
A criterion for choosing the prior is to guide the optimization toward $F(x)$ profiles that imply the least amount of dynamical information. The goal is to prevent the statistical learning from overfitting and overinterpreting the measured data. As such, we select the prior based on maximum trajectory entropy. For Langevin dynamics at equilibrium, the trajectory entropy is 
\begin{equation}
    \textbf{S}[F(x), D] = S_{\rm eq} - t_{\rm obs} \frac{D}{4}\left< F^2(x)\right>
\end{equation}
In Kevin's EM 2013 JPCB paper,
\begin{align}
    D &= 500~~\text{s}^{-1} \\
    \eta_{F} &= 2 \times 10^{-7} \\ 
    \eta_{F}D &= 0.0001
\end{align}
Therefore, when including the prior in the update of $p_{\rm eq}$,
\begin{align*}
    p_{\rm eq}^{k+1}(x) &= \left(\mathbb{E}^k_{X|Y} \left[\delta(x-X) \right]\right)^{1/(1+\eta_FD)} \\
    &= \left(\mathbb{E}^k_{X|Y} \left[\delta(x-X) \right]\right)^{1/(1+0.0001)} \\
    &= \left(\mathbb{E}^k_{X|Y} \left[\delta(x-X) \right]\right)^{1/(1.0001)}
\end{align*}
Since the exponent $1/(1.0001) \approx 0.9999$, I think the regularization has no real effect in practice.

\section{Double Well}
\subsection{Case 1: 10000 photons}
$\Delta t = 10^{-9}~\text{s}$
\begin{center}
    \includegraphics[scale=0.45]{ch4/simu_doublewell_10000photons.png} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_doublewell_10000photons.pdf} 
\end{center}

\subsection{Case 2: 20000 photons}
$\Delta t = 10^{-9}~\text{s}$
\begin{center}
    \includegraphics[scale=0.45]{ch4/simu_doublewell_20000photons.png} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_doublewell_20000photons.pdf} 
\end{center}

\subsection{Gaussian mixture model}
\begin{center}
    \includegraphics[scale=0.45]{ch4/gmm_doublewell_1.png} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_gmm_doublewell_1.pdf} 
\end{center}

\section{Triple Well}
\subsection{Gaussian mixture model: 10000 photons}
\begin{center}
    \includegraphics[scale=0.45]{ch4/gmm_triplewell_1.png} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_gmm_triplewell_1.pdf} 
\end{center}

\subsection{Gaussian mixture model: 50000 photons}
\begin{center}
    \includegraphics[scale=0.45]{ch4/gmm_triplewell_2.png} 
\end{center}
\begin{center}
    \includegraphics[scale=0.45]{ch4/em_p0_kde_pref_gmm_triplewell_2.pdf} 
\end{center}
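As a quick numerical check of the identity used in the $p_{\rm eq}$ update, the Python sketch below (an illustration, not part of the original code) verifies that $\mathrm{diag}(\Psi A \Psi^{\dagger})$ reproduces the double sum $\sum_{i,j}\psi_i(x)A_{ij}\psi_j(x)$ on a small random example; the sizes ($3$ basis functions, $5$ grid points) mirror the $N_v=3$ example above, and the random matrix $A$ stands in for $\mathbb{E}^k_{X|Y}[a_ib_j]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_basis = 5, 3                          # 5 grid points, N_v = 3 basis functions
Psi = rng.standard_normal((n_grid, n_basis))    # Psi[m, i] = psi_i(x_m)
A = rng.standard_normal((n_basis, n_basis))     # stands in for E^k_{X|Y}[a_i b_j]

# Double sum: sum_{i,j} psi_i(x_m) A_ij psi_j(x_m) at each grid point x_m
double_sum = np.einsum('mi,ij,mj->m', Psi, A, Psi)

# Matrix form: diagonal of Psi A Psi^T (Psi is real here, so dagger = transpose)
diag_form = np.diag(Psi @ A @ Psi.T)

assert np.allclose(double_sum, diag_form)
\end{verbatim}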
{"text": "\\documentclass{article}\n\\usepackage[margin=0.5in]{geometry}\n\\usepackage{Sweave}\n\\begin{document}\n\\input{hw3_saket-concordance}\n\n%\\SweaveOpts{concordance=TRUE}\n\\title{HW\\#3 ||  MDS  Eigen Decomposition using pairwise distances of US cities}\n\\author{Saket Choudhary}\n\\maketitle\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(graphics)\n> library(ggplot2)\n\\end{Sinput}\n\\end{Schunk}\n\n\\textbf{Read Data:}\n\\begin{Schunk}\n\\begin{Sinput}\n> distanceMatrix <- read.csv(\"distance_matrix.csv\", header=T)\n> distanceMatrix\n\\end{Sinput}\n\\begin{Soutput}\n  BOST   NY   DC MIAM CHIC SEAT   SE   LA DENV\n1    0  206  429 1504  963 2976 3095 2979 1949\n2  206    0  233 1308  802 2815 2934 2786 1771\n3  429  233    0 1075  671 2684 2799 2631 1616\n4 1504 1308 1075    0 1329 3273 3053 2687 2037\n5  963  802  671 1329    0 2013 2142 2054  996\n6 2976 2815 2684 3273 2013    0  808 1131 1307\n7 3096 2934 2799 3053 2142  808    0  379 1235\n8 2979 2786 2631 1687 2054 1131  379    0 1059\n9 1949 1771 1616 2037  996 1307 1235 1059    0\n\\end{Soutput}\n\\end{Schunk}\n\n\\section{MDS}\n\\begin{figure}\n\n\\begin{Schunk}\n\\begin{Sinput}\n> mds <- cmdscale(distanceMatrix)\n> mdsPlot  <- qplot(x=mds[,1], \n+                   y=-mds[,2], \n+                   label=colnames(distanceMatrix)) \n> mdsPlot + \n+   geom_point(color='red') + \n+   geom_text(hjust=-.15) + \n+   xlab(\"X1\") + ylab(\"X2\") + \n+   ggtitle(\"MDS plot using 2 dimensions\")\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{hw3_saket-003}\n\\caption{MDS plot(2D)}\n\\end{figure}\n\nCoordinates of points are given by:\n\\begin{Schunk}\n\\begin{Sinput}\n> rownames(mds) <- colnames(distanceMatrix)\n> colnames(mds) <- c(\"X1\", \"X2\")\n> print(mds)\n\\end{Sinput}\n\\begin{Soutput}\n             X1          X2\nBOST -1388.4759   401.09561\nNY   -1235.5455   281.49474\nDC   -1110.4073   156.62867\nMIAM -1108.0546 -1273.17629\nCHIC  -435.5452   288.46936\nSEAT  1601.4477   651.49498\nSE    1708.7378    76.02141\nLA    1335.1428 -1065.44210\nDENV   546.1006    78.38664\n\\end{Soutput}\n\\end{Schunk}\n%\\end{figure}\n\n\\section{EigenValue Decomoposition}\n\\subsection{Using similarity matrix}\n\n\\begin{Schunk}\n\\begin{Sinput}\n> similarityMatrix <- apply(distanceMatrix, 1, function(x) exp(-x^2/3000.0^2))\n> similarityMatrix\n\\end{Sinput}\n\\begin{Soutput}\n          [,1]      [,2]      [,3]      [,4]      [,5]      [,6]      [,7]\nBOST 1.0000000 0.9952960 0.9797587 0.7777617 0.9020900 0.3737889 0.3447196\nNY   0.9952960 1.0000000 0.9939860 0.8268797 0.9310269 0.4145882 0.3842415\nDC   0.9797587 0.9939860 1.0000000 0.8794991 0.9512040 0.4491365 0.4187467\nMIAM 0.7777617 0.8268797 0.8794991 1.0000000 0.8218076 0.3041358 0.3549972\nCHIC 0.9020900 0.9310269 0.9512040 0.8218076 1.0000000 0.6374745 0.6006181\nSEAT 0.3737889 0.4145882 0.4491365 0.3041358 0.6374745 1.0000000 0.9300281\nSE   0.3449568 0.3842415 0.4187467 0.3549972 0.6006181 0.9300281 1.0000000\nLA   0.3730477 0.4221385 0.4634165 0.4483331 0.6257725 0.8675093 0.9841666\nDENV 0.6556903 0.7057505 0.7481425 0.6306268 0.8956335 0.8271200 0.8441125\n          [,8]      [,9]\nBOST 0.3730477 0.6556903\nNY   0.4221385 0.7057505\nDC   0.4634165 0.7481425\nMIAM 0.7289000 0.6306268\nCHIC 0.6257725 0.8956335\nSEAT 0.8675093 0.8271200\nSE   0.9841666 0.8441125\nLA   1.0000000 0.8828420\nDENV 0.8828420 1.0000000\n\\end{Soutput}\n\\end{Schunk}\n\nPerform PCA an similarityMatrix: \n\\begin{figure}\n\\begin{Schunk}\n\\begin{Sinput}\n> similarityMatrix.prcomp <- 
> similarityMatrix.prcomp <- prcomp(similarityMatrix, scale.=T)
> similarityMatrix.PCA <- similarityMatrix.prcomp$x[,1:2]
> similarityMatrix.plot <- qplot(x=similarityMatrix.PCA[,1],
+                           y=similarityMatrix.PCA[,2], 
+                           label=colnames(distanceMatrix))
> similarityMatrix.plot +
+   geom_point(color='red') +
+   geom_text(hjust=-.15) +
+   xlab("PC1") + ylab("PC2")+
+   ggtitle("First 2 principal components of similarity Matrix")
\end{Sinput}
\end{Schunk}
\includegraphics{hw3_saket-006}
\caption{PCA using Gaussian kernel (using similarity matrix)}
\end{figure}

\subsubsection{EigenValues}

\begin{Schunk}
\begin{Sinput}
> print(similarityMatrix.prcomp$sdev)
\end{Sinput}
\begin{Soutput}
[1] 2.763748e+00 1.011925e+00 5.609621e-01 1.317532e-01 6.715841e-02
[6] 3.342472e-02 6.485914e-03 1.019860e-03 1.743940e-16
\end{Soutput}
\end{Schunk}

\subsubsection{EigenVectors}
\begin{Schunk}
\begin{Sinput}
> print(similarityMatrix.prcomp$rotation[,1:2])
\end{Sinput}
\begin{Soutput}
             PC1         PC2
 [1,] -0.3552454 -0.15532942
 [2,] -0.3556963 -0.17183709
 [3,] -0.3555762 -0.18252044
 [4,] -0.3372374 -0.03256753
 [5,] -0.3160704 -0.47693301
 [6,]  0.3425956 -0.25501975
 [7,]  0.3540793 -0.19295181
 [8,]  0.3348572 -0.03848530
 [9,]  0.2287874 -0.76207529
\end{Soutput}
\end{Schunk}

\subsubsection{Principal Components (Scaled and Centered)}
\begin{Schunk}
\begin{Sinput}
> print(similarityMatrix.prcomp$x[,1:2])
\end{Sinput}
\begin{Soutput}
           PC1        PC2
BOST -2.763048  0.6631076
NY   -2.646744  0.1721873
DC   -2.517306 -0.2152715
MIAM -1.894466  1.3418267
CHIC -1.324643 -1.4856597
SEAT  3.240198  0.4969233
SE    3.553513  0.5559057
LA    3.222135  0.2212970
DENV  1.130362 -1.7503165
\end{Soutput}
\end{Schunk}




\subsection{Using distance matrix}

Perform PCA on distanceMatrix: 
\begin{figure}
\begin{Schunk}
\begin{Sinput}
> distanceMatrix
\end{Sinput}
\begin{Soutput}
  BOST   NY   DC MIAM CHIC SEAT   SE   LA DENV
1    0  206  429 1504  963 2976 3095 2979 1949
2  206    0  233 1308  802 2815 2934 2786 1771
3  429  233    0 1075  671 2684 2799 2631 1616
4 1504 1308 1075    0 1329 3273 3053 2687 2037
5  963  802  671 1329    0 2013 2142 2054  996
6 2976 2815 2684 3273 2013    0  808 1131 1307
7 3096 2934 2799 3053 2142  808    0  379 1235
8 2979 2786 2631 1687 2054 1131  379    0 1059
9 1949 1771 1616 2037  996 1307 1235 1059    0
\end{Soutput}
\begin{Sinput}
> distanceMatrix.prcomp <- prcomp(distanceMatrix, scale.=T)
> distanceMatrix.PCA <- distanceMatrix.prcomp$x[,1:2]
> distanceMatrix.plot <- qplot(x=distanceMatrix.PCA[,1],
+                           y=distanceMatrix.PCA[,2], 
+                           label=colnames(distanceMatrix))
> distanceMatrix.plot +
+   geom_point(color='red') +
+   geom_text(hjust=-.15) +
+   xlab("PC1") + ylab("PC2") +
+   ggtitle("First 2 principal components of distanceMatrix")
\end{Sinput}
\end{Schunk}
\includegraphics{hw3_saket-010}
\caption{PCA without using Gaussian kernel (using distance matrix)}
\end{figure}

\subsubsection{EigenValues}

\begin{Schunk}
\begin{Sinput}
> print(distanceMatrix.prcomp$sdev)
\end{Sinput}
\begin{Soutput}
[1] 2.666684e+00 1.049368e+00 7.363066e-01 3.395746e-01 2.999203e-01
[6] 1.640774e-01 1.070771e-01 4.272792e-02 1.600394e-17
\end{Soutput}
\end{Schunk}

\subsubsection{EigenVectors}
\begin{Schunk}
\begin{Sinput}
> print(distanceMatrix.prcomp$rotation[,1:2])
\end{Sinput}
\begin{Soutput}
            PC1         PC2
BOST  0.3609929  0.13053243
NY    0.3632938  0.14972372
DC    0.3651761  0.17155008
MIAM  0.2962816 -0.13553983
CHIC  0.2957988  0.54249894
SEAT -0.3516124  0.16804789
SE   -0.3676068  0.08877045
LA   -0.3577376  0.10405688
DENV -0.2057333  0.75596984
\end{Soutput}
\end{Schunk}


\subsubsection{Principal Components (Scaled and Centered)}

\begin{Schunk}
\begin{Sinput}
> print(distanceMatrix.prcomp$x[,1:2])
\end{Sinput}
\begin{Soutput}
            PC1         PC2
 [1,] -2.594727  0.49402847
 [2,] -2.566601  0.09812092
 [3,] -2.448209 -0.18194241
 [4,] -1.927579  1.48379129
 [5,] -1.307785 -1.43547852
 [6,]  3.143991  0.46313051
 [7,]  3.491687  0.53895796
 [8,]  2.883923  0.42915851
 [9,]  1.325301 -1.88976673
\end{Soutput}
\end{Schunk}

\section{Discussion}

As evident from Figure 1 and Figure 2, transforming the original data from pairwise distances to pairwise similarities using a Gaussian kernel does not seem to have an effect on the resulting PCA plots. The most probable reason is that the data is already linearly separable in its original 9 dimensions. With the Gaussian kernel trick, each row (a 9-D vector) is mapped to an infinite-dimensional space: for a Gaussian kernel with fixed $\sigma^2$, each original vector is sent to a higher-dimensional Gaussian blob centered at that point, and if two points were close in the original 9-D space, their images have a small angle between them in the higher-dimensional space. The usual motivation is that the higher-dimensional representation makes linear separation possible; in this case, however, linear separation is already guaranteed in the original 9 dimensions.

\subsection{Equivalence of MDS and PCA?}
MDS and PCA are expected to give the same results when Euclidean distances are used.
[Cox, Trevor F., and Michael A.A. Cox. Multidimensional Scaling. CRC Press, 2000.]
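For reference, classical MDS (what \texttt{cmdscale} computes) can be reproduced directly from the eigendecomposition of the double-centered squared-distance matrix. The NumPy sketch below is an illustration of that construction under the assumption of a symmetric distance matrix; it is not part of the graded R code, and the toy data and function name are hypothetical.
\begin{verbatim}
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points from a symmetric distance matrix D into k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]          # keep top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy example: three collinear points at 0, 3, and 5 on a line
D = np.array([[0.0, 3.0, 5.0],
              [3.0, 0.0, 2.0],
              [5.0, 2.0, 0.0]])
X = classical_mds(D, k=1)  # recovered 1-D coordinates (up to sign/translation)
\end{verbatim}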
\end{document}
{"text": "% !TeX spellcheck = en_US\n% !TeX root = DynELA.tex\n%\n% LaTeX source file of DynELA FEM Code\n%\n% (c) by Olivier Pantal\u00e9 2020\n%\n\\chapter{DynELA impact sample cases}\n\n\\startcontents[chapters]\n\\printmyminitoc[1]\\LETTRINE{T}his chapter deals with some numerical applications of\nthe \\DynELA~for impact applications in 2D, axi-symmetric and 3D\ncases. In the subsequent tests, if not specified, a Johnson-Cook constitutive\nlaw is used to model the behavior of the material. The Johnson-Cook\nhardening flow law is probably the most widely used flow law for the\nsimulation of high strain rate deformation processes taking into account\nplastic strain, plastic strain rate and temperature effects. Since\na lot of efforts have been made in the past to identify the constitutive\nflow law parameters for many materials, it is implemented in numerous\nFinite Element codes such as Abaqus \\cite{abaqus20146}. The general\nformulation of the Johnson-Cook law $\\sigma^{y}(\\overline{\\varepsilon}^{p},\\stackrel{\\bullet}{\\overline{\\varepsilon}^{p}},T)$\nis given by the following equation:\n\n\\begin{equation}\n\\sigma^{y}=\\left(A+B\\overline{\\varepsilon}^{p^{n}}\\right)\\left[1+C\\ln\\left(\\frac{\\stackrel{\\bullet}{\\overline{\\varepsilon}^{p}}}{\\stackrel{\\bullet}{\\overline{\\varepsilon}_{0}}}\\right)\\right]\\left[1-\\left(\\frac{T-T_{0}}{T_{m}-T_{0}}\\right)^{m}\\right]\\label{eq:Samples!Johnson-Cook}\n\\end{equation}\nwhere $\\stackrel{\\bullet}{\\overline{\\varepsilon}_{0}}$ is the reference\nstrain rate, $T_{0}$ and $T_{m}$ are the reference temperature and\nthe melting temperature of the material respectively and $A$, $B$,\n$C$, $n$ and $m$ are the five constitutive flow law parameters.\nA 42CrMo4 steel following the Johnson-Cook behavior law has been selected\nfor all those tests, and material properties are reported in Table\n\\ref{tab:Samples!JohnsonCookParameters}.\n\n\\begin{table}[h]\n\\begin{center}\\begin{tcolorbox}[width=.75\\textwidth,myTab,tabularx={C|C|C|C|C|C|C}]\n$E$ & $\\nu$ & $A$ & $B$ & $C$ & $n$ & $m$ \\\\\n\\small{($Gpa$)} &  & \\small{($MPa$)} & \\small{($MPa$)} &  &  & \\\\ \\hline\n$206.9$ & $0.3$ & $806$ & $614$ & $0.0089$ & $0.168$ & $1.1$ \\\\ \\hline\\hline\n$\\rho$ & $\\lambda$ & $C_{p}$ & $\\eta$ & $\\stackrel{\\bullet}{\\overline{\\varepsilon}_{0}}$ & $T_{0}$ & $T_{m}$ \\\\\n\\small{$(kg/m^{3})$} & \\small{$(W/m^{\\circ}C)$} & \\small{$(J/Kg^{\\circ}C)$} & & \\small{$(s^{-1})$} & \\small{$(^{\\circ}C)$} & \\small{$(^{\\circ}C)$} \\\\ \\hline\n$7830$ & $34.0$ & $460$ & $0.9$ & $1.0$ & $20$ & $1540$\n\\end{tcolorbox}\\end{center}\\caption{Material parameters of the Johnson-Cook behavior for the numerical\ntests\\label{tab:Samples!JohnsonCookParameters}}\n\\end{table}\n\n\n\\section{Taylor impact sample}\n\n\\subsection{Axisymmetric Taylor impact}\n\nThe performance of the proposed code is validated under high deformation\nrate with the simulation of the Taylor impact test \\cite{taylor_1946}.\nIn the Taylor impact test, a cylindrical specimen is launched to impact\na rigid target with a prescribed initial velocity. The numerical model,\nreported in Figure \\ref{fig:Samples!Impact!TaylorAxi} is established\nas axisymmetric. The height is $32.4\\,mm$ and the radius is $3.2\\,mm$.\nThe axial displacement is restrained on the right side of the specimen\nwhile the radial displacement is free (to figure a perfect contact\nwithout friction of the projectile onto the target). A predefined\nvelocity of $V_{c}=287\\,m/s$ is imposed on the specimen. 
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.75\columnwidth]{Figures/TaylorAxi}
\par\end{centering}
\caption{Numerical model for the Axisymmetric Taylor impact test\label{fig:Samples!Impact!TaylorAxi}}
\end{figure}

Figure \ref{fig:Samples!Impact!TaylorAxi-temp-contour} shows the temperature contour plot of the deformed rod for both \DynELA~and Abaqus. The temperature distributions are almost the same for both models. The maximum temperature $T$ is located in the center element of the model (the red element in Figure \ref{fig:Samples!Impact!TaylorAxi}), and the models give nearly the same results, as reported in Table \ref{tab:Samples!Impact!TaylorAxi-comparison}, for $\overline{\varepsilon}^{p}$, $T$ and the final dimensions of the specimen $L_{f}$ (final length) and $D_{f}$ (final diameter of the impacting face).

Figure \ref{fig:Samples!Impact!TaylorAxi-comparison} shows the evolution of the final dimensions of the specimen $L_{f}$ (final length) and $R_{f}$ (final radius of the impacting face), the equivalent plastic strain $\overline{\varepsilon}^{p}$, the temperature $T$, the von Mises stress $\overline{\sigma}$ and the time step $\Delta t$ for the different models for the element at the center of the impacting face (the red element in Figure \ref{fig:Samples!Impact!TaylorAxi}).

As reported in this figure, and according to the results presented in Table \ref{tab:Samples!Impact!TaylorAxi-comparison}, quite good agreement between the results is obtained.

\begin{figure}[h]
\begin{centering}
\includegraphics[height=0.5\columnwidth]{Figures/Samples/Impact/Taylor-Axi_temperatureCP}\hspace*{3cm}\includegraphics[height=0.5\columnwidth]{Figures/Samples/Impact/Taylor-Axi-Abaqus_Temperature}
\par\end{centering}
\caption{Temperature contour plots for the Axisymmetric Taylor impact test (DynELA left and Abaqus right)\label{fig:Samples!Impact!TaylorAxi-temp-contour}}
\end{figure}

\begin{table}[h]
\begin{center}\begin{tcolorbox}[width=.75\textwidth,myTab,tabularx={C|C|C|C|C}]
code & $L_f$ & $D_f$ & $T$ & $\overline{\varepsilon}^{p}$ \\
 & \small{($mm$)} & \small{($mm$)} & \small{($^{\circ}C$)} & \\ \hline\hline
DynELA & $26.52$ & $11.15$ & $582.34$ & $1.78$ \\ \hline
Abaqus & $26.56$ & $11.16$ & $590.96$ & $1.81$
\end{tcolorbox}\end{center}\caption{Comparison of numerical results for the Axisymmetric Taylor impact test\label{tab:Samples!Impact!TaylorAxi-comparison}}
\end{table}

\begin{figure}[h]
\begin{centering}
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_radius} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_height}\tabularnewline
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_temperature} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_plasticStrain}\tabularnewline
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_vonMises} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-Axi_timeStep}\tabularnewline
\end{tabular}
\par\end{centering}
\caption{Comparison of numerical and analytical results for the Axisymmetric Taylor impact test\label{fig:Samples!Impact!TaylorAxi-comparison}}
\end{figure}
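To make the constitutive model concrete, the following Python sketch (illustrative only, not part of the \DynELA~source) evaluates the Johnson-Cook flow stress of Eq.~(\ref{eq:Samples!Johnson-Cook}) with the 42CrMo4 parameters of Table~\ref{tab:Samples!JohnsonCookParameters}; the sample state at which it is evaluated is an arbitrary assumption.
\begin{verbatim}
import math

# 42CrMo4 Johnson-Cook parameters from the table above
A, B, C = 806e6, 614e6, 0.0089   # Pa, Pa, dimensionless
n, m = 0.168, 1.1
eps0_dot = 1.0                   # reference strain rate, 1/s
T0, Tm = 20.0, 1540.0            # reference and melting temperatures, deg C

def johnson_cook(eps_p, eps_p_dot, T):
    """Flow stress sigma_y(eps_p, eps_p_dot, T) in Pa."""
    hardening = A + B * eps_p**n
    rate = 1.0 + C * math.log(eps_p_dot / eps0_dot)
    thermal = 1.0 - ((T - T0) / (Tm - T0))**m
    return hardening * rate * thermal

# Example state (assumption): eps_p = 1.0, eps_p_dot = 1e4 1/s, T = 400 deg C
sigma_y = johnson_cook(1.0, 1.0e4, 400.0)
\end{verbatim}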
\subsection{3D Taylor impact}

The performance of the proposed code is validated under high deformation rates with the simulation of the Taylor impact test \cite{taylor_1946}. In the Taylor impact test, a cylindrical specimen is launched to impact a rigid target with a prescribed initial velocity. The numerical model, reported in Figure \ref{fig:Samples!Impact!Taylor3D}, is established as a full three-dimensional model. The height is $32.4\,mm$ and the radius is $3.2\,mm$. The axial displacement is restrained on the right side of the specimen while the radial displacement is free (to model a perfect, frictionless contact of the projectile onto the target). A predefined velocity of $V_{c}=287\,m/s$ is imposed on the specimen. The mesh consists of $4455$ elements ($55\times81$ elements). The total simulation time for the Taylor impact test is $t=8.0\times10^{-5}\,s$.

\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.5\columnwidth]{Figures/Samples/Impact/Taylor-3D_mesh}
\par\end{centering}
\caption{Numerical model for the 3D Taylor impact test\label{fig:Samples!Impact!Taylor3D}}
\end{figure}

Figure \ref{fig:Samples!Impact!Taylor3D-temp-contour} shows the temperature contour plot of the deformed rod for both \DynELA~and Abaqus. The temperature distributions are almost the same for both models. The maximum temperature $T$ is located in the center element of the model (the red element in Figure \ref{fig:Samples!Impact!Taylor3D}), and the models give nearly the same results, as reported in Table \ref{tab:Samples!Impact!Taylor3D-comparison}, for $\overline{\varepsilon}^{p}$, $T$ and the final dimensions of the specimen $L_{f}$ (final length) and $D_{f}$ (final diameter of the impacting face).

Figure \ref{fig:Samples!Impact!Taylor3D-comparison} shows the evolution of the final dimensions of the specimen $L_{f}$ (final length) and $R_{f}$ (final radius of the impacting face), the equivalent plastic strain $\overline{\varepsilon}^{p}$, the temperature $T$, the von Mises stress $\overline{\sigma}$ and the time step $\Delta t$ for the different models for the element at the center of the impacting face (the red element in Figure \ref{fig:Samples!Impact!Taylor3D}).

As reported in this figure, and according to the results presented in Table \ref{tab:Samples!Impact!Taylor3D-comparison}, quite good agreement between the results is obtained.

\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.8\columnwidth]{Figures/Samples/Impact/Taylor-3D_temperatureCP}
\par\end{centering}
\caption{Temperature contour plot for the 3D Taylor impact test\label{fig:Samples!Impact!Taylor3D-temp-contour}}
\end{figure}

\begin{table}[h]
\begin{center}\begin{tcolorbox}[width=.75\textwidth,myTab,tabularx={C|C|C|C|C}]
code & $L_f$ & $D_f$ & $T$ & $\overline{\varepsilon}^{p}$ \\
 & \small{($mm$)} & \small{($mm$)} & \small{($^{\circ}C$)} & \\ \hline\hline
DynELA & $26.52$ & $11.18$ & $597.10$ & $1.84$ \\ \hline
Abaqus & $26.55$ & $11.22$ & $597.01$ & $1.84$
\end{tcolorbox}\end{center}\caption{Comparison of numerical results for the 3D Taylor impact test\label{tab:Samples!Impact!Taylor3D-comparison}}
\end{table}
\begin{figure}[h]
\begin{centering}
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_radius} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_height}\tabularnewline
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_temperature} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_plasticStrain}\tabularnewline
\includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_vonMises} & \includegraphics[width=0.45\columnwidth]{Figures/Samples/Impact/Taylor-3D_timeStep}\tabularnewline
\end{tabular}
\par\end{centering}
\caption{Comparison of numerical and analytical results for the 3D Taylor impact test\label{fig:Samples!Impact!Taylor3D-comparison}}
\end{figure}
{"text": "\\documentclass[Main.tex]{subfiles}\n\n\\begin{document}\n\n\\chapter{1}{Introduction and time-domain signal representation}\n\n\\section{Signals}\n\\subsection{Definitions}\n\\begin{definitions}\n\\begin{itemize}\n    \\item Signal: Function of an independent variable, usually time $t$.\n    \\begin{itemize}\n        \\item This course will only be about 1D signals, e.g.\\ $f(t) = A \\cos(\\omega t + \\theta )$, $t \\in \\R$.\n        \\item 2D signals are things like black and white images, i.e.\\ $f(x,y)=$ intensity of pixel at $(x, y)$.\n    \\end{itemize}\n    \\item Continuous \\textbf{Time} Signals: A signal that has a domain $t \\in \\R$, e.g.\\ $f(t)= e^{-t}, t \\leq 0$.\n    \\item Discrete \\textbf{Time} Signals: A signal that has a domain $t \\in \\Z$.\n    \\item Digital Signals: A discrete signal that has been quantized further in $f(t)$.\n    \\item Periodic Signals: A signal is periodic when $f(t) \\equiv f(t+T)$, where T is as small as possible to satisfy the equation.\n    \\item Causal Signals:\n    \\begin{itemize}\n        \\item Causal: $f(t) = 0, \\forall t < 0$\n        \\item Anticausal: $f(t) = 0, \\forall t \\geq 0$\n        \\item Otherwise, noncausal.\n    \\end{itemize}\n\\end{itemize}\n\\end{definitions}\n\\subsection{Even and Odd signals}\n\\begin{itemize}\n    \\item Even: $f(t) = f(-t)$\n    \\item Odd: $f(t) = -f(t)$\n\\end{itemize}\n\n\\end{document}\n", "meta": {"hexsha": "4dda6be36b05651cfc1fdf2df71f6e1f44a87d63", "size": 1327, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/ECE240/Chapter1.tex", "max_stars_repo_name": "n30phyte/SchoolDocuments", "max_stars_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/ECE240/Chapter1.tex", "max_issues_repo_name": "n30phyte/SchoolDocuments", "max_issues_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/ECE240/Chapter1.tex", "max_forks_repo_name": "n30phyte/SchoolDocuments", "max_forks_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.9142857143, "max_line_length": 132, "alphanum_fraction": 0.6480783723, "num_tokens": 423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.8104789040926008, "lm_q1q2_score": 0.5974511304301856}}
{"text": "\\section{Discontinuous Galerkin Scheme}\n\\label{sec:dgMethod}\n\nWe assume a spacetime metric\n\\begin{equation}\n  ds^{2}=-\\alpha^{2}\\,dt^{2}+\\gamma_{ij}\\,dx^{i}\\,dx^{j},\n\\end{equation}\nand consider the system of conservation laws with sources\n\\begin{equation}\n  \\pd{}{t}\\big(\\sqrt{\\gamma}\\,\\bU\\big)+\\sum_{i=1}^{d}\\pd{}{i}\\big(\\alpha\\,\\sqrt{\\gamma}\\,\\bF^{i}(\\bU)\\big)=\\alpha\\,\\sqrt{\\gamma}\\,\\bG(\\bU),\n  \\label{eq:conservationLaws}\n\\end{equation}\nwhere\n\\begin{align}\n  \\bU\n  &=\\big(D,\\,S_{j},\\,\\tau\\big)^{\\mbox{\\tiny T}}\n  =\\big(\\rho\\,W,\\,\\rho\\,h\\,W^{2}\\,v_{j},\\,\\rho\\,W\\left(h\\,W-1\\right)-p\\big)^{\\mbox{\\tiny T}}, \\\\\n  \\bF^{i}(\\bU)\n  &=\\big(D\\,v^{i},\\,\\big)^{\\mbox{\\tiny T}}\n\\end{align}\n", "meta": {"hexsha": "7b37a3e16a5529a3fcf100e27e0137eaeb844cfc", "size": 682, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/Euler/SamsTexFiles/DGScheme.tex", "max_stars_repo_name": "srichers/thornado", "max_stars_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:16:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:31:21.000Z", "max_issues_repo_path": "Documents/Euler/SamsTexFiles/DGScheme.tex", "max_issues_repo_name": "srichers/thornado", "max_issues_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-07-10T20:13:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-11T13:21:00.000Z", "max_forks_repo_path": "Documents/Euler/SamsTexFiles/DGScheme.tex", "max_forks_repo_name": "srichers/thornado", "max_forks_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-11-14T01:13:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T02:08:20.000Z", "avg_line_length": 32.4761904762, "max_line_length": 139, "alphanum_fraction": 0.5982404692, "num_tokens": 300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273633016692238, "lm_q2_score": 0.6442251064863697, "lm_q1q2_score": 0.597430721769407}}
{"text": "\\section{Model Description}\n\n\n\t\\subsection{Derivation of Equations of Motion - Newtonian Mechanics}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[]{Figures/Flex_Slosh_Figure}\n\t\\caption{Frame and variable definitions used for formulation}\n\t\\label{fig:Flex_Slosh_Figure}\n\\end{figure} \n\n\\subsubsection{Rigid Spacecraft Hub Translational Motion}\n\nFollowing a similar derivation as in previous work \\cite{Allard2016rz}, the derivation begins with Newton's second law for the center of mass of the spacecraft.\n\\begin{equation}\n\t\\ddot{\\bm r}_{C/N} = \\frac{\\bm{F}}{m_{\\text{\\text{sc}}}}\n\t\\label{eq:Newtons1Law}\n\\end{equation}\nUltimately the acceleration of the body frame or point $B$ is desired\n\\begin{equation}\n\t\\ddot{\\bm r}_{B/N} = \\ddot{\\bm r}_{C/N}-\\ddot{\\bm c}\n\t\\label{eq:RcRbacc}\n\\end{equation}\nThe definition of $\\bm{c}$ can be seen in Eq. (\\ref{eq:c}).\n\\begin{equation}\n\t\\bm{c} = \\frac{1}{m_{\\text{sc}}}\\Big(m_{\\text{\\text{hub}}}\\bm{r}_{B_{c}/B} + \\sum_{j=1}^{N_{P}}m_j\\bm{r}_{P_{c,j}/B}\\Big)\n\t\\label{eq:c} \n\\end{equation}\nTo find the inertial time derivative of $\\bm{c}$, it is first necessary to find the time derivative of $\\bm{c}$ with respect to the body frame. A time derivative of any vector, $\\bm{v}$, with respect to the body frame is denoted by $\\bm{v}'$; the inertial time derivative is labeled as $\\dot{\\bm{v}}$. The first and second body-relative time derivatives of $\\bm{c}$ can be seen in Eqs. (\\ref{eq:cprime}) and (\\ref{eq:cdprime}).\n\n$\\bm{r}_{P_{c,j}/B}$ is defined in the following\n\\begin{equation}\n\t\\bm{r}_{P_{c,j}/B} = \\bm{r}_{P_{j}/B} + \\rho_j \\hat{\\bm p}_j \n\\end{equation}\nAnd, the first and second body time derivatives of $\\bm{r}_{P_{c,j}/B}$ are\n\\begin{align}\n\t\\bm{r}'_{P_{c,j}/B} &= \\dot{\\rho}_j \\hat{\\bm p}_j \n\t\\\\\n\t\\bm{r}''_{P_{c,j}/B} &= \\ddot{\\rho}_j \\hat{\\bm p}_j \n\\end{align}\n$\\bm{c}'$ and $\\bm{c}''$ are defined in the following equations\n\\begin{equation}\n\t\\bm{c}' = \\frac{1}{m_{\\text{sc}}}\\sum_{j=1}^{N_{P}}m_j\\dot{\\rho}_j \\hat{\\bm p}_j\n\t\\label{eq:cprime}\n\\end{equation}\n\\begin{equation}\n\t\\bm{c}'' = \\frac{1}{m_{\\text{sc}}}\\sum_{j=1}^{N_{P}}m_j\\ddot{\\rho}_j \\hat{\\bm p}_j\n\t\\label{eq:cdprime}\n\\end{equation}\nUsing the transport theorem\\cite{schaub} yields the following definition for $\\ddot{\\bm c}$\n\\begin{equation}\n\t\\ddot{\\bm c} = \\bm{c}'' + 2\\bm\\omega_{\\cal B/N}\\times\\bm{c}'+\\dot{\\bm\\omega}_{\\cal B/N}\\times\\bm{c}+\\bm\\omega_{\\cal B/N}\\times\\left(\\bm\\omega_{\\cal B/N}\\times\\bm{c}\\right)\n\t\\label{eq:cddot}\n\\end{equation}\nEq.~\\eqref{eq:RcRbacc} is updated to include Eq.~\\eqref{eq:cddot}\n\\begin{equation}\n\t\\ddot{\\bm r}_{B/N} = \\ddot{\\bm r}_{C/N}-\\bm{c}'' - 2\\bm\\omega_{\\cal B/N}\\times\\bm{c}'-\\dot{\\bm\\omega}_{\\cal B/N}\\times\\bm{c}-\\bm\\omega_{\\cal B/N}\\times\\left(\\bm\\omega_{\\cal B/N}\\times\\bm{c}\\right)\n\t\\label{eq:Rbddot}\n\\end{equation}\nSubstituting Eq.\\eqref{eq:cdprime} into Eq.\\eqref{eq:Rbddot} results in\n\\begin{equation}\n\t\\ddot{\\bm r}_{B/N} = \\ddot{\\bm r}_{C/N}-\\frac{1}{m_{\\text{sc}}}\\sum_{j=1}^{N_{P}}m_j\\ddot{\\rho}_j \\hat{\\bm p}_j = \n\t- 2\\bm\\omega_{\\cal B/N}\\times\\bm c'\n\t-\\dot{\\bm\\omega}_{\\cal B/N}\\times\\bm{c}-\\bm\\omega_{\\cal B/N}\\times\\left(\\bm\\omega_{\\cal B/N}\\times\\bm{c}\\right)\n\t\\label{eq:Rbddot2}\n\\end{equation}\nMoving second order terms to the left hand side and introducing the tilde 
matrix\\cite{schaub} to replace the cross product operators simplifies the equation to\n\\begin{equation}\n\tm_{\\text{sc}} \\ddot{\\bm r}_{B/N}-m_{\\text{sc}} [\\tilde{\\bm{c}}] \\dot{\\bm\\omega}_{\\cal B/N} +\\sum_{j=1}^{N_{P}}m_j \\hat{\\bm p}_j \\ddot{\\rho}_j = \\bm F_{\\text{ext}}\t- 2 m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm c'\n\t- m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}][\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c}\n\t\\label{eq:Rbddot3}\n\\end{equation}\n\nEquation~\\eqref{eq:Rbddot3} is the translational motion equation and is the first EOM needed to describe the motion of the spacecraft. The following section develops the rotational EOM.\n\n\\subsubsection{Rigid Spacecraft Hub Rotational Motion}\n\nStarting with Euler's equation when the body fixed coordinate frame origin is not coincident with the center of mass of the body\\cite{schaub}\n\\begin{equation}\n\t\\bm{\\dot{H}}_{\\text{sc},B} = \\bm{L}_B+m_{\\text{\\text{sc}}}\\ddot{\\bm r}_{B/N}\\times\\bm{c}\n\t\\label{eq:Euler}\n\\end{equation}\nwhere $\\bm{L}_B$ is the total external torque about point $B$. The definition of the angular momentum vector of the spacecraft about point $B$ is\n\\begin{equation}\n\t\\bm{H}_{\\text{sc},B} = [I_{\\text{hub},B_c}] \\bm\\omega_{\\cal B/N} + m_{\\text{\\text{hub}}}\\bm{r}_{B_{c}/B} \\times \\dot{\\bm{r}}_{B_{c}/B} + \\sum\\limits_{j=1}^{N_P}m_j \\bm r_{P_{c,j}/B}\\times \\dot{\\bm r}_{P_{c,j}/B}\n\t\\label{eq:Hb2}\n\\end{equation}\n\nNow the inertial time derivative of Eq. \\eqref{eq:Hb2} is taken and yields\n\\begin{equation}\n\t\\dot{\\bm{H}}_{\\text{sc},B} = [I_{\\text{hub},B_c}] \\dot{\\bm\\omega}_{\\cal B/N} + \\bm\\omega_{\\cal B/N} \\times [I_{\\text{hub},B_c}] \\bm\\omega_{\\cal B/N} + m_{\\text{\\text{hub}}}\\bm{r}_{B_{c}/B} \\times \\ddot{\\bm{r}}_{B_{c}/B} +  \\sum\\limits_{j=1}^{N_P}m_j \\bm r_{P_{c,j}/B}\\times \\ddot{\\bm r}_{P_{c,j}/B}\n\t\\label{eq:Hbdot}\n\\end{equation}\n$\\ddot{\\bm r}_{P_{c,j}/B}$ is\n\\begin{align}\n\t\\ddot{\\bm{r}}_{P_{c,j}/B}  &= \\bm{r}''_{P_{c,j}/B} +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  +\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\n\t\\label{eq:rpddot}\n\\end{align}\nIncorporating Eq.-~\\eqref{eq:rpddot} into Eq.~\\eqref{eq:Hbdot} results in\n\\begin{multline}\n\t\\dot{\\bm{H}}_{\\text{sc},B} = [I_{\\text{hub},B_c}] \\dot{\\bm\\omega}_{\\cal B/N} + \\bm\\omega_{\\cal B/N} \\times [I_{\\text{hub},B_c}] \\bm\\omega_{\\cal B/N} + m_{\\text{\\text{hub}}}\\bm{r}_{B_{c}/B} \\times \\ddot{\\bm{r}}_{B_{c}/B}\\\\ + \\sum\\limits_{j=1}^{N_P}m_j \\bm r_{P_{c,j}/B}\\times \\Bigl[\\bm{r}''_{P_{c,j}/B} +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B} +\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Bigr]\n\t\\label{eq:Hbdot2}\n\\end{multline}\nApplying the parallel axis theorem the following inertia tensor terms are defined as\n\\begin{align}\n\t[I_{\\text{sc},B}] &= [I_{\\text{hub},B}] + m_{\\text{hub}} [\\tilde{\\bm{r}}_{B_{c}/B}] [\\tilde{\\bm{r}}_{B_{c}/B}]^T + \\sum\\limits_{j=1}^{N_P} m_j [\\tilde{\\bm{r}}_{P_{c,j}/B}] [\\tilde{\\bm{r}}_{P_{c,j}/B}]^T\n\t\\label{eq:IscB}\n\\end{align}\nTaking the body-relative time derivative of Equation~\\eqref{eq:IscB} yields\n\\begin{equation}\n\t[I'_{\\text{sc},B}] =\n\t- \\sum\\limits_{j=1}^{N_P} m_j \\Bigl(  [\\tilde{\\bm{r}}'_{P_{c,j}/B}] 
[\\tilde{\\bm{r}}_{P_{c,j}/B}] + [\\tilde{\\bm{r}}_{P_{c,j}/B}] [\\tilde{\\bm{r}}'_{P_{c,j}/B}]  \\Bigr)\n\\end{equation}\nThe Jacobi Identity, $(\\bm a \\times \\bm b)\\times \\bm c = \\bm a \\times (\\bm b\\times \\bm c) - \\bm b \\times (\\bm a\\times \\bm c)$, is used to combine terms and produce the following simplified equation\n\\begin{multline}\n\t\\dot{\\bm{H}}_{\\text{sc},B} = [I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N} + \\bm\\omega_{\\cal B/N} \\times [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} + [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N}\n\t\\\\+ \\sum\\limits_{j=1}^{N_P}\\biggl[m_j \\bm r_{P_{c,j}/B}\\times\\bm{r}''_{P_{c,j}/B}\n\t+ m_j \\bm\\omega_{\\cal B/N} \\times \\Bigl(\\bm{r}_{P_{c,j}/B} \\times \\bm{r}'_{P_{c,j}/B}\\Bigr)\\biggr]\n\t\\label{eq:Hbdot4}\n\\end{multline}\n\nEqs. (\\ref{eq:Euler}) and (\\ref{eq:Hbdot4}) are equated and yield\n\\begin{multline}\n\t\\bm{L}_B+m_{\\text{sc}}\\ddot{\\bm r}_{B/N}\\times\\bm{c} = [I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N} + \\bm\\omega_{\\cal B/N} \\times [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} + [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N}\n\t\\\\+ \\sum\\limits_{j=1}^{N_P}\\biggl[m_j \\bm r_{P_{c,j}/B}\\times\\bm{r}''_{P_{c,j}/B}\n\t+ m_j \\bm\\omega_{\\cal B/N} \\times \\Bigl(\\bm{r}_{P_{c,j}/B} \\times \\bm{r}'_{P_{c,j}/B}\\Bigr)\\biggr]\n\t\\label{eq:Hbdot5}\n\\end{multline}\nFinally, using tilde matrix and simplifying yields the modified Euler equation, which is the second EOM necessary to describe the motion of the spacecraft.\n\\begin{multline}\n\t[I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N} = -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} - [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N}\t\\\\- \\sum\\limits_{j=1}^{N_P}\\biggl(m_j [\\tilde{\\bm r}_{P_{c,j}/B}]\\bm{r}''_{P_{c,j}/B}\n\t+ m_j [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm{r}}_{P_{c,j}/B}] \\bm{r}'_{P_{c,j}/B}\\biggr) \n\t+ \\bm{L}_B - m_{\\text{sc}} [\\tilde{\\bm{c}}] \\ddot{\\bm r}_{B/N}\n\t\\label{eq:Final5}\n\\end{multline}\nRearranging Eq.~\\eqref{eq:Final5} to be in the same form as the previous sections results in\n\\begin{multline}\n\tm_{\\text{sc}}[\\tilde{\\bm{c}}]\\ddot{\\bm r}_{B/N}+[I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N} + \\sum\\limits_{j=1}^{N_P}m_j [\\tilde{\\bm r}_{P_{c,j}/B}] \\hat{\\bm p}_j \\ddot{\\rho}_j = \\\\\n\t-[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} \n\t- [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} \t- \\sum\\limits_{j=1}^{N_P} m_j [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm{r}}_{P_{c,j}/B}] \\bm{r}'_{P_{c,j}/B} + \\bm{L}_B\n\t\\label{eq:Final6}\n\\end{multline} \n\n\\subsubsection{Fuel Slosh Motion}\nFigure~\\ref{fig:Flex_Slosh_Figure} shows that a single fuel slosh particle is free to move along its corresponding $\\hat{\\bm p}_j$ direction and this formulation is generalized to include $N_P$ number of fuel slosh particles. The derivation begins with Newton's law for each fuel slosh particle:\n\\begin{equation}\n\tm_j \\ddot{\\bm{r}}_{P_{c,j}/N} = \\bm F_G + \\bm F_C- k_j \\rho_j \\hat{\\bm p}_j - c_j \\dot{\\rho_j} \\hat{\\bm p}_j\n\t\\label{eq:sloshaccel}\n\\end{equation}\nWhere $\\bm F_C$ is the constraint force that maintains the fuel slosh mass to travel along the direction $\\hat{\\bm p}_j$. The forces due to the spring and damper are explicitly included in Eq.~\\eqref{eq:sloshaccel} and result in a restoring force and damping force. 
$\\ddot{\\bm{r}}_{P_{c,j}/N}$ is defined in the following equation.\n\\begin{equation}\n\t\\ddot{\\bm{r}}_{P_{c,j}/N} = \\ddot{\\bm{r}}_{B/N} + \\ddot{\\bm{r}}_{P_{c,j}/B}\n\\end{equation}\nThe inertial acceleration vector $\\ddot{\\bm{r}}_{P_{c,j}/B}$ is defined in Eq.~\\eqref{eq:rpddot}. Plugging this definition into Eq.~\\eqref{eq:sloshaccel} results in\n\\begin{multline}\n\tm_j \\Big[\\ddot{\\bm{r}}_{B/N} + \\ddot{\\rho}_j \\hat{\\bm p}_j  +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  +\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Big]\\\\\n\t=  \\bm F_G + \\bm F_C - k_j \\rho_j \\hat{\\bm p}_j - c_j \\dot{\\rho_j} \\hat{\\bm p}_j\n\t\\label{eq:sloshaccel3}\n\\end{multline}\t\n\nEquation~\\eqref{eq:sloshaccel3} is the dynamical equation for a fuel slosh particle, however, the constraint force, $\\bm F_C$, is undefined. Since the fuel slosh particle is free to move in the $\\hat{\\bm p_j}$ direction, the component of $\\bm F_C$ along the $\\hat{\\bm p_j}$ direction is zero. Leveraging this insight, Eq.~\\eqref{eq:sloshaccel3} is projected into the $\\hat{\\bm p_j}$ direction by multiplying both sides of the equation by $\\hat{\\bm p_j}^T$.\n\\begin{multline}\n\tm_j \\Bigl(\\hat{\\bm p_j}^T \\ddot{\\bm{r}}_{B/N} + \\ddot{\\rho}_j  +2 \\hat{\\bm p_j}^T  \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\hat{\\bm p_j}^T \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  + \\hat{\\bm p_j}^T \\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Bigr)\\\\\n\t= \\hat{\\bm p_j}^T \\bm F_G - k_j \\rho_j - c_j \\dot{\\rho_j}\n\t\\label{eq:sloshaccel4}\n\\end{multline}\t\nMoving the second order terms to the left hand side and introducing the tilde matrix notation yields the final equation needed to describe the motion of the spacecraft.\n\\begin{multline}\n\tm_j \\hat{\\bm p_j}^T \\ddot{\\bm{r}}_{B/N} - m_j \\hat{\\bm p_j}^T [\\tilde{\\bm{r}}_{P_{c,j}/B}] \\dot{\\bm\\omega}_{\\cal B/N} + m_j \\ddot{\\rho}_j  \\\\\n\t= \\hat{\\bm p_j}^T \\bm F_G - k_j \\rho_j - c_j \\dot{\\rho_j} - 2 m_j \\hat{\\bm p_j}^T  [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm{r}'_{P_{c,j}/B}  - m_j \\hat{\\bm p_j}^T [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm{r}_{P_{c,j}/B} \n\t\\label{eq:sloshaccel5}\n\\end{multline}\t\n\n\\subsection{Derivation of Equations of Motion - Kane's Method}\n\nThe choice of generalized coordinates and their respective generalized speeds are:\n\n\\begin{equation}\n\t\\bm q = \n\t\\begin{bmatrix}\n\t\t\\bm r_{B/N}\\\\\n\t\t\\bm \\sigma_{\\cal{B/N}}\\\\\n\t\t\\rho_1\\\\\n\t\t\\cdot\\\\\n\t\t\\rho_{N_P}\n\t\\end{bmatrix}\n\t\\quad\n\t\\bm u = \\begin{bmatrix}\n\t\t\\dot{\\bm r}_{B/N}\\\\\n\t\t\\bm \\omega_{\\cal{B/N}}\\\\\n\t\t\\dot{\\rho}_1\\\\\n\t\t\\cdot\\\\\n\t\t\\dot{\\rho}_{N_P}\n\t\\end{bmatrix}\n\\end{equation} \n\nThe necessary velocities needed to be defined are as follows\n\n\\begin{equation}\n\\dot{\\bm r}_{B_c/N} = \\dot{\\bm r}_{B/N} + \\bm \\omega_{\\cal{B/N}} \\times {\\bm r}_{B_c/B} = \\dot{\\bm r}_{B/N}  - [\\tilde{{\\bm r}}_{B_c/B}] \\bm \\omega_{\\cal{B/N}} \\\\\n\\end{equation}\n\n\\begin{equation}\n\t\\dot{\\bm r}_{C/N} = \\dot{\\bm r}_{B/N} + \\dot{\\bm c}\\\\\n\t\\label{eq:rDot_CN}\n\\end{equation}\n\n\\begin{equation}\n\t\\bm \\omega_{\\cal{B/N}} = \\bm \\omega_{\\cal{B/N}}\n\\end{equation}\n\n\\begin{equation}\n\t\\dot{\\bm r}_{P_{c,j}/N} = \\dot{\\bm r}_{B/N} + {\\bm r}'_{P_{c,j}/B} + 
\\bm \\omega_{\\cal{B/N}} \\times {\\bm r}_{P_{c,j}/B} = \\dot{\\bm r}_{B/N} + \\dot{\\rho}_j \\hat{\\bm p}_j  - [\\tilde{{\\bm r}}_{P_{c,j}/B}] \\bm \\omega_{\\cal{B/N}}\n\\end{equation}\n\nNow the following partial velocity table can be created:\n\n\\begin{table}[htbp]\n\t\\caption{Partial Velocity Table}\n\t\\label{tab:hub}\n\t\\centering \\fontsize{10}{10}\\selectfont\n\t\\begin{tabular}{ c | c | c | c } % Column formatting, \n\t\t\\hline\n\t\t$r$  & $\\bm v^{B_c}_{r}$  & $\\bm \\omega_{\\textit{r}}^{\\cal{B}}$ & $\\bm v^{P_c}_{r}$ \\\\\n\t\t\\hline\n\t\t$1-3$  & $[I_{3\\times 3}]$ & $[0_{3\\times 3}]$ & $[I_{3\\times 3}]$ \\\\\n\t\t$4-6$ & $- [\\tilde{{\\bm r}}_{B_c/B}]$ & $[I_{3\\times 3}]$ & $- [\\tilde{{\\bm r}}_{P_{c,j}/B}]$ \\\\\n\t\t$7-(6+N_P)$ &$[0_{3\\times 1}]$ & $[0_{3\\times 1}]$ & $\\hat{\\bm p}_j$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\nAn additional partial velocity that is needed is $[\\bm v^C_{1-3}]$ for the external force applied on the spacecraft, $\\bm F_{\\text{ext}}$. Using Eq.~\\eqref{eq:rDot_CN} the following is defined:\n\n\\begin{equation}\n\t[\\bm v^C_{1-3}] = [I_{3\\times 3}]\n\\end{equation}\n\nUsing these partial velocity definitions, the following sections will step through the formulation for the translational, rotational and slosh EOMs developed using Kane's method.\n\n\\subsubsection{Rigid Spacecraft Hub Translational Motion}\n\nStarting with the definition of a generalized active force:\n\n\\begin{equation}\n\t\\bm F_r = \\bm v_r^T \\bm F\n\t\\label{eq:genActive}\n\\end{equation}\nUsing this definition, the external force applied on the spacecraft for the translational equations is defined as:\n\n\\begin{equation}\n\t\\bm F_{1-3} = [\\bm v^C_{1-3}]^T \\bm F_{\\text{ext}} = \\bm F_{\\text{ext}}\n\\end{equation}\n\nUsing the definition of generalized inertia forces, where the sum runs over the $N$ bodies of the system,\n\\begin{equation}\n\t\\bm F^*_r = \\sum\\limits_{i}^{N}\\Big[\\bm \\omega_r^T \\bm T_i^* +  \\bm v_r^T (- m_i \\bm a_i)\\Big]\n\t\\label{eq:genInert}\n\\end{equation}\nthe inertia forces for the hub translational motion are defined as\n\n\\begin{equation}\n\t\\bm F^*_{1-3} = [\\bm v^{B_c}_{1-3}]^T (-m_{\\text{hub}} \\ddot{\\bm r}_{B_c/N}) + \\sum\\limits_{j}^{N_P}[\\bm v^{P_c}_{1-3}]^T (-m_j \\ddot{\\bm r}_{P_{c,j}/N}) = -m_{\\text{hub}} \\ddot{\\bm r}_{B_c/N} + \\sum\\limits_{j}^{N_P} -m_j \\ddot{\\bm r}_{P_{c,j}/N}\n\\end{equation}\n\nFinally, Kane's equation is:\n\n\\begin{equation}\n\t\\bm F_r + \\bm F^*_r = 0\n\t\\label{eq:KanesEq}\n\\end{equation}\ntherefore\n\\begin{equation}\n\t\\bm F_{\\text{ext}} -m_{\\text{hub}} \\ddot{\\bm r}_{B_c/N} + \\sum\\limits_{j}^{N_P} -m_j \\ddot{\\bm r}_{P_{c,j}/N} = 0\n\\end{equation}\nExpanding and rearranging results in\n\\begin{equation}\n\tm_{\\text{hub}} \\ddot{\\bm r}_{B_c/N} + \\sum\\limits_{j}^{N_P} m_j (\\ddot{\\bm r}_{B/N} + \\ddot{\\bm r}_{P_{c,j}/B}) = \\bm F_{\\text{ext}}\n\t\\label{eq:KanesTrans}\n\\end{equation}\nPlugging Eq.~\\eqref{eq:rpddot} into Eq.~\\eqref{eq:KanesTrans} results in\n\\begin{multline}\n\tm_{\\text{hub}} (\\ddot{\\bm r}_{B/N} + \\ddot{\\bm r}_{B_c/B}) + \\sum\\limits_{j}^{N_P} m_j \\Big[\\ddot{\\bm r}_{B/N} + \\ddot{\\rho}_j \\hat{\\bm p}_j  +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  \\\\\n\t+\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Big] = \\bm F_{\\text{ext}}\n\\end{multline}\nCombining like terms and rearranging results in\n\\begin{equation}\n\tm_{\\text{sc}} \\ddot{\\bm r}_{B/N}-m_{\\text{sc}} 
[\\tilde{\\bm{c}}] \\dot{\\bm\\omega}_{\\cal B/N} +\\sum_{j=1}^{N_{P}}m_j \\hat{\\bm p}_j \\ddot{\\rho}_j = \\bm F_{\\text{ext}}\t- 2 m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm c'\n\t- m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}][\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c}\n\t\\label{eq:Rbddot4}\n\\end{equation}\nThis result is identical to Eq.~\\eqref{eq:Rbddot3} found using Newtonian mechanics.\n\\subsubsection{Rigid Spacecraft Hub Rotational Motion}\n\nThe torque acting on the spacecraft, $\\bm L_B$, needs to be expressed as a generalized active force. Using Eq.~\\eqref{eq:genActive}, the active forces acting on the spacecraft for the rotational equations can be defined as:\n\n\\begin{equation}\n\t\\bm F_{4-6} = [\\bm \\omega_{4-6}^{\\cal{B}}]^T \\bm L_B = \\bm L_B\n\\end{equation}\n\nTo evaluate the generalized inertia forces using Eq.~\\eqref{eq:genInert}, the quantity $\\bm T^*$ first needs to be defined for a rigid body; applying it to the hub gives:\n\n\\begin{equation}\n\t\\bm T^* = -[I_{\\text{hub},B}] \\dot{\\bm\\omega}_{\\cal B/N}  -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{hub},B}] \\bm\\omega_{\\cal B/N}\n\\end{equation}\n\n\\begin{multline}\n\t\\bm F^*_{4-6} = [\\bm \\omega_{4-6}^{\\cal{B}}]^T \\bm T^* + [\\bm v^{B_c}_{4-6}]^T (-m_{\\text{hub}} \\ddot{\\bm r}_{B_c/N}) + \\sum\\limits_{j}^{N_P}[\\bm v^{P_c}_{4-6}]^T (-m_j \\ddot{\\bm r}_{P_{c,j}/N}) \\\\\n\t= -[I_{\\text{hub},B}] \\dot{\\bm\\omega}_{\\cal B/N}  -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{hub},B}] \\bm\\omega_{\\cal B/N} - m_{\\text{hub}} [\\tilde{{\\bm r}}_{B_c/B}] \\ddot{\\bm r}_{B_c/N}  + \\sum\\limits_{j}^{N_P}- [\\tilde{{\\bm r}}_{P_{c,j}/B}]^T (-m_j \\ddot{\\bm r}_{P_{c,j}/N})\n\\end{multline}\n\nUsing Kane's equation, Eq.~\\eqref{eq:KanesEq}, the following equations of motion for the rotational dynamics are defined:\n\n\\begin{equation}\n\t\\bm L_B  -[I_{\\text{hub},B}] \\dot{\\bm\\omega}_{\\cal B/N}  -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{hub},B}] \\bm\\omega_{\\cal B/N} -  m_{\\text{hub}} [\\tilde{{\\bm r}}_{B_c/B}] \\ddot{\\bm r}_{B_c/N}  + \\sum\\limits_{j}^{N_P}- [\\tilde{{\\bm r}}_{P_{c,j}/B}] (m_j \\ddot{\\bm r}_{P_{c,j}/N}) = 0\n\\end{equation}\nPlugging in the definition of $\\ddot{\\bm r}_{P_{c,j}/N}$ from Eq.~\\eqref{eq:rpddot} and rearranging yields\n\\begin{multline}\n\t[I_{\\text{hub},B}] \\dot{\\bm\\omega}_{\\cal B/N}  + m_{\\text{hub}} [\\tilde{{\\bm r}}_{B_c/B}] \\ddot{\\bm r}_{B_c/N}  + \\sum\\limits_{j}^{N_P} m_j [\\tilde{{\\bm r}}_{P_{c,j}/B}] \\Big[\\ddot{\\bm r}_{B/N} + \\ddot{\\rho}_j \\hat{\\bm p}_j  +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  \\\\\n\t+\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Big] = -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{hub},B}] \\bm\\omega_{\\cal B/N} + \\bm L_B\n\\end{multline}\nCombining like terms results in the same equation seen in Eq.~\\eqref{eq:Final6} found using Newtonian mechanics.\n\\begin{multline}\n\tm_{\\text{sc}}[\\tilde{\\bm{c}}]\\ddot{\\bm r}_{B/N}+[I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N} + \\sum\\limits_{j=1}^{N_P}m_j [\\tilde{\\bm r}_{P_{c,j}/B}] \\hat{\\bm p}_j \\ddot{\\rho}_j = \\\\\n\t-[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} \n\t- [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} \t- \\sum\\limits_{j=1}^{N_P} m_j [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm{r}}_{P_{c,j}/B}] \\bm{r}'_{P_{c,j}/B} + \\bm{L}_B\n\t\\label{eq:Final7}\n\\end{multline} \n\n\\subsubsection{Fuel Slosh Motion}\nFollowing the same pattern as the translational and 
rotational equations, the generalized active forces are defined as:\n\n\\begin{equation}\n\t\\bm F_{7} = \\bm v^{P_c}_{r} \\cdot (\\bm F_G + \\bm F_C -k_j\\rho_j \\hat{\\bm p}_j - c_j \\dot{\\rho}_j \\hat{\\bm p}_j) = \\hat{\\bm p}_j^T \\bm F_G -k_j\\rho_j - c_j \\dot{\\rho}_j\n\\end{equation}\nThe constraint force drops out because $\\hat{\\bm p}_j^T \\bm F_C = 0$. The generalized inertia forces are defined as: \n\n\\begin{equation}\n\t\\bm F^*_{7} = \\bm v^{P_c}_{r} \\cdot (-m_j \\ddot{\\bm r}_{P_{c,j}/N}) = \\hat{\\bm p}_j^T (-m_j \\ddot{\\bm r}_{P_{c,j}/N})\n\\end{equation}\nUsing Kane's equation, the following equations of motion are defined:\n\n\\begin{multline}\n\t\\hat{\\bm p}_j^T \\bm F_G -k_j\\rho_j - c_j \\dot{\\rho}_j - m_j \\hat{\\bm p}_j^T \\Big[\\ddot{\\bm r}_{B/N} + \\ddot{\\rho}_j \\hat{\\bm p}_j  +2 \\bm\\omega_{\\cal B/N} \\times \\bm{r}'_{P_{c,j}/B} + \\dot{\\bm\\omega}_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B}  \\\\\n\t+\\bm\\omega_{\\cal B/N} \\times (\\bm\\omega_{\\cal B/N} \\times \\bm{r}_{P_{c,j}/B})\\Big] = 0 \n\\end{multline}\nRearranging and combining like terms results in:\n\n\\begin{multline}\n\tm_j \\hat{\\bm p}_j^T \\ddot{\\bm{r}}_{B/N} - m_j \\hat{\\bm p}_j^T [\\tilde{\\bm{r}}_{P_{c,j}/B}] \\dot{\\bm\\omega}_{\\cal B/N} + m_j \\ddot{\\rho}_j  \\\\\n\t= \\hat{\\bm p}_j^T \\bm F_G - k_j \\rho_j - c_j \\dot{\\rho}_j - 2 m_j \\hat{\\bm p}_j^T  [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm{r}'_{P_{c,j}/B}  - m_j \\hat{\\bm p}_j^T [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm{r}_{P_{c,j}/B} \n\t\\label{eq:sloshaccel6}\n\\end{multline}\t\nThis is identical to Eq.~\\eqref{eq:sloshaccel5}, as expected.\n\n\n", "meta": {"hexsha": "8317a8ce7354ce3a121142ca59cdc0a666cf95f7", "size": 20814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secModelDescription.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secModelDescription.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secModelDescription.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1396648045, "max_line_length": 492, "alphanum_fraction": 0.6345728836, "num_tokens": 8771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5973454888663129}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 3.5 Commutation of covariant derivatives}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   \\nabla{#}::Derivative.\n\n   expr :=   \\nabla_{d}{\\nabla_{c}{A_{a} B_{b}}}\n           - \\nabla_{c}{\\nabla_{d}{A_{a} B_{b}}}.  # cdb(ex-0305.100,expr)\n\n   product_rule (expr)                             # cdb(ex-0305.101,expr)\n   distribute   (expr)                             # cdb(ex-0305.102,expr)\n   product_rule (expr)                             # cdb(ex-0305.103,expr)\n   factor_out   (expr,$A_{a?},B_{b?}$)             # cdb(ex-0305.104,expr)\n\n\\end{cadabra}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{\\cdb{ex-0305.100} = \\Cdb*{ex-0305.101}\n                             = \\Cdb*{ex-0305.102}\n                             = \\Cdb*{ex-0305.103}\n                             = \\Cdb*{ex-0305.104}}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "bb0ed86ad34fc7ba4283bc5cbf6b3b364877a412", "size": 1102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0305.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0305.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0305.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 32.4117647059, "max_line_length": 94, "alphanum_fraction": 0.4736842105, "num_tokens": 360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5973454803443835}}
{"text": "% Chapter 1\n\n\\chapter{Random Walk Approach} % Main chapter title\n\n\\label{Chapter3} % For referencing the chapter elsewhere, use \\ref{Chapter1} \n\n\\section{Aldous, Broder}\n\nAldous \\cite{aldous1990random} and Broder \\cite{63516} independently invented the following simple random walk based algortihm.\n\n\n\\begin{figure}[h!]\n  % \\centering\n  % \\scalebox{0.7}{\n  \\begin{algorithm}[H]\n    \\KwIn{$G = (V,E)$}\n    \\KwOut{A random spanning tree}\n    \n    Choose a starting vertex $s$ arbitrarily\n    \n    $T_V \\leftarrow \\{s\\}, T_E \\leftarrow \\emptyset$\n    \n    \\While{$|T_V| < |V$} {\n      \n      $next =_{u.a.r} N(s)$\n      \n      \\If{$next \\not\\in T_V$} {\n        $T_V = T_V \\cup \\{next\\}$\n        \n        $T_E = T_E \\cup \\{(s, next)\\}$\n      }\n      \n      $s = next$\n      \n    }\n    \n    \\KwRet{$T = (T_V, T_E)$}\n\n    \n    \\caption{Aldous-Broder Algorithm}\n  \\end{algorithm}\n  % }\n\\end{figure} \n\n\\subsection{Running Time}\n\n\n\\textbf{Cover Time}\n\n$cov_G(u) := $ The expected number of steps for a random walk starting at $u$ to visit all the vertices in $G$ \\\\\n\n\n\\textbf{Cover Time of $G$} \n\n$cov_G := \\max_{u \\in V_G} cov_G(u)$  \n\n\nIt is known that $cov_G = \\mathcal{O}(|V|\\ |E|) = \\mathcal{O}(|V|^3)$\n\n\n\\subsection{An Example}\n\n\\begin{figure}[h!]\n  \\begin{center}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!green] (A) [above =of C] {$A$};\n      \\node[state] (B) [above right =of C] {$B$};\n      \\node[state] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xxx}\n  \\end{subfigure}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!red] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!green] (B) [above right =of C] {$B$};\n      \\node[state] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xxy}\n  \\end{subfigure}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!green] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!red] (B) [above right =of C] {$B$};\n      \\node[state] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xxz}\n  
\\end{subfigure}\n\\end{center}\n\n  \\begin{center}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!red] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!red] (B) [above right =of C] {$B$};\n      \\node[state, fill=white!50!green] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick, blue] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xyx}\n  \\end{subfigure}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!red] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!green] (B) [above right =of C] {$B$};\n      \\node[state, fill=white!50!red] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick, blue] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xyy}\n  \\end{subfigure}\n  \\begin{subfigure}{.3\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state] (C){$C$};\n      \\node[state, fill=white!50!red] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!red] (B) [above right =of C] {$B$};\n      \\node[state, fill=white!50!green] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick, blue] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xyz}\n  \\end{subfigure}\n\\end{center}\n  \\begin{subfigure}{1.0\\textwidth}\n    \\centering\n    \\begin {tikzpicture}[auto ,node distance =3 cm and 3cm ,on grid ,\n      semithick ,\n      state/.style ={ circle  , \n        draw,black , text=black , minimum width =1 cm}]\n      \\node[state, fill=white!50!green] (C){$C$};\n      \\node[state, fill=white!50!red] (A) [above =of C] {$A$};\n      \\node[state, fill=white!50!red] (B) [above right =of C] {$B$};\n      \\node[state, fill=white!50!red] (D) [right =of C] {$D$};\n      \\path (C) edge[very thick, blue] (D);\n      \\path (B) edge[very thick] (D);\n      \\path (A) edge[very thick,blue] (B);\n      \\path (C) edge[very thick] (A);\n      \\path (D) edge[very thick, blue] (A);\n    \\end{tikzpicture}\n    \\caption{Iteration xzx}\n  \\end{subfigure}\n\\end{figure}\n\n% \\pagebreak\n% \n% \\section{Wilson's Algorithm}\n% \n% \\citet{10.1145/237814.237880} proposed a different variant of random walk based algorithm which runs faster than the cover time of the graph. 
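\n\nFor concreteness, the Aldous-Broder pseudocode above translates almost line by line into Python (a small sketch for experimentation, not part of the thesis measurements; the graph is assumed to be given as a dictionary of adjacency lists):\n\n\\begin{verbatim}\nimport random\n\ndef aldous_broder(adj, s):\n    # adj: dict mapping each vertex to a list of its neighbours\n    # returns the edge set of a uniformly random spanning tree\n    visited = {s}\n    tree_edges = set()\n    while len(visited) < len(adj):\n        nxt = random.choice(adj[s])   # step to a u.a.r. neighbour\n        if nxt not in visited:        # first visit: keep the entry edge\n            visited.add(nxt)\n            tree_edges.add((s, nxt))\n        s = nxt\n    return tree_edges\n\\end{verbatim}\n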
\n\n% ----------------------------------------------------------------------------------------\n\n\n% ----------------------------------------------------------------------------------------\n\n\n\n\n\n", "meta": {"hexsha": "839fb6f27bf2e9e5c4f030e5e442a5442921826e", "size": 6679, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapters/Chapter3.tex", "max_stars_repo_name": "severus-tux/masters-thesis", "max_stars_repo_head_hexsha": "c6d3856cccda06735a01699c91ad923590f08ab7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/Chapters/Chapter3.tex", "max_issues_repo_name": "severus-tux/masters-thesis", "max_issues_repo_head_hexsha": "c6d3856cccda06735a01699c91ad923590f08ab7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapters/Chapter3.tex", "max_forks_repo_name": "severus-tux/masters-thesis", "max_forks_repo_head_hexsha": "c6d3856cccda06735a01699c91ad923590f08ab7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-14T15:34:34.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-14T15:34:34.000Z", "avg_line_length": 31.5047169811, "max_line_length": 144, "alphanum_fraction": 0.5526276389, "num_tokens": 2305, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085758631159, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5973454765223861}}
{"text": "\\section{Technical Introduction}\nHere is a brief description of why Proposition FIXME(0.2) might be amenable to \ncomputer-assisted proof.\nIf a shortest geodesic $\\delta$ in a hyperbolic $3$-manifold $N$ does not have \na $\\ln(3)/2$\ntube then there is a 2-generator subgroup $G$ of $\\pi_1(N) = \\Gamma$\nwhich also does not have that property.\nSpecifically, take $G$ generated by $f$ and $w$,\nwith $f \\in \\Gamma$ a primitive hyperbolic isometry\nwhose fixed axis $\\delta_0 \\subset {\\bf H}^3$ projects to $\\delta$, and \nwith $w \\in \\Gamma$ a hyperbolic isometry\nwhich takes $\\delta_0$ to a nearest translate.\nThen, after identifying $N={\\bf H}^3/\\Gamma$ and letting \n$Z={\\bf H}^3/G$,\nwe see that the shortest geodesic in $Z$ (which corresponds to $\\delta$)\ndoes not have \na $\\ln(3)/2$ tube. \nThus, to understand solid tubes around shortest geodesics in hyperbolic \n$3$-manifolds, we need to understand appropriate 2-generator groups, and this \ncan be done by a parameter space analysis as follows.  (Parameter space \nanalyses are naturally amenable to \ncomputer proofs.)\n\nThe space of {\\it relevant} (see Definition FIXME(2.8)) 2-generator groups \nin ${\\rm Isom}_+({\\bf H}^3)$ is naturally\nparametrized by a subset ${\\cal P}$ of ${\\bf C}^3.$  \nEach parameter corresponds to a\n2-generator group $G$ with specified generators $f$ and $w$, and we call \nsuch a group a {\\it marked group}. \nThe marked groups of particular interest are those in which $G$ is\ndiscrete, torsion-free, parabolic-free, $f$ corresponds to a shortest\ngeodesic $\\delta$, and $w$ corresponds to a\ncovering translation of a particular lift of\n$\\delta$ to a nearest translate.  \nWe denote this set of particularly interesting marked groups by ${\\cal T}.$\nWe show that if tuberadius($\\delta) \\le \\ln(3)/2$ \nin a hyperbolic $3$-manifold $N,$ then \n$G$ must correspond to a parameter lying in one of seven small regions\n${\\cal R}_n,\\ n=0,\\ldots,6$ \nin ${\\cal P}$.  \nWith respect to this notation, we have:\n\n\\vglue8pt {\\elevensc Proposition FIXME(2.19)}.\n${\\cal T} \\cap ({\\cal P} - \\mathbold{\\cup}_{n=0,\\ldots,6}{\\cal R}_n) = \\emptyset.$\n\\vfill\n\nThe full statement of Proposition FIXME(2.19)\nexplicitly describes the seven small\nregions of the parameter space as well as some associated data.\n\nHere is the idea of the proof.\nRoughly speaking, we subdivide ${\\cal P}$ into a billion regions\nof varying sizes,\nand show that all but the seven exceptional regions cannot contain \na parameter corresponding to a  \n``shortest/nearest\" marked group.\nFor example we would know that \na region ${\\cal R}$ contained no such group if we knew that for each \npoint $\\rho\\in {\\cal R}$,\nRelength($f_\\rho) > {\\rm Relength}(w_\\rho).$  \n(Here Relength($f_\\rho$) (resp.\\ Relength($w_\\rho^{\\phantom{|}}$))\ndenotes the real translation length of the isometry of ${\\bf H}^3$ \ncorresponding to\nthe element $f$ (resp.\\ $w$) in the marked group with parameter~$\\rho.$)\nThis inequality would contradict the fact that $f$ corresponds to $\\delta$ \nwhich is a {\\it shortest} geodesic.  
Similarly, there are {\\it nearest}\ncontradictions.\n", "meta": {"hexsha": "a1c5aeb2ce2c5f31eddf0892587da713e55719f1", "size": 3065, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/TeX_and_Figures_Files/Chapter_1.tex", "max_stars_repo_name": "njt99/findingkillerwords", "max_stars_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/TeX_and_Figures_Files/Chapter_1.tex", "max_issues_repo_name": "njt99/findingkillerwords", "max_issues_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/TeX_and_Figures_Files/Chapter_1.tex", "max_forks_repo_name": "njt99/findingkillerwords", "max_forks_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0735294118, "max_line_length": 82, "alphanum_fraction": 0.7256117455, "num_tokens": 893, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430562234878, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.597254734617501}}
{"text": "% -*-latex-*-\n\n\\title{Orbiter Technical Notes: Distributed Vessel Mass}\n\\author{Martin Schweiger}\n\\date{September 27, 2005}\n\n\\documentclass[a4paper]{article}\n\\usepackage[dvips]{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{times}\n\\usepackage{cite}\n\n\\begin{document}\n\\bibliographystyle{unsrt}\n\n%\\renewcommand{\\vec}[1]{\\ensuremath{\\mathbf{#1}}}\n\n\\newcommand{\\vR}[1]{\\ensuremath{\\vec{R}_{#1}}}\n\\newcommand{\\nR}[1]{\\ensuremath{|\\vR{#1}|}}\n\\newcommand{\\mat}[1]{\\ensuremath{\\mathsf{#1}}}\n\n\\maketitle\n\n\\section{Introduction}\nA point mass $m$ placed at position $\\vec{r}_0$ in a gravitational field $\\vec{g}(\\vec{r})$ experiences a force $\\vec{F}_G = m\\vec{g}(\\vec{r}_0)$.\nFor an extended object with a density distribution $\\rho(\\vec{r})$ the resulting force can be obtained by integrating over its volume $V \\subset \\mathbb{R}^3$:\n\\begin{equation*}\n\\vec{F}_G = \\int_V \\vec{g}(\\vec{r})\\rho(\\vec{r}) d\\vec{r}\n\\end{equation*}\nFor numerical calculations it is sometimes useful to discretise the object into a rigid system of point masses $m_i$ whose relative positions are defined by their barycentric coordinates $\\vec{s}_i$. Then,\n\\begin{equation*}\n\\vec{F}_G = \\sum_i m_i \\vec{g}(\\vec{s}_i+\\vec{r}_\\text{CG})\n\\end{equation*}\nwhere $\\vec{r}_\\text{CG}$ is the position of the barycentre. For the calculation of the linear force $\\vec{F}_G$ Orbiter makes the assumption $\\vec{g}(\\vec{s}_i + \\vec{r}_\\text{CG}) = \\vec{g}(\\vec{r}_\\text{CG})$, i.e. the gravitational field is homogeneous over the volume of the object. This approximation is justified when calculating the gravitational force on a spacecraft which is small compared to its orbital radius vector, $|\\vec{s}_i| \\ll |\\vec{r}_\\text{CG}|$. With this assumption, we arrive back at the expression for a point mass:\n\\begin{equation*}\n\\vec{F}_G = \\vec{g}(\\vec{r}_\\text{CG}) \\sum_i m_i = m \\vec{g}(\\vec{r}_\\text{CG})\n\\end{equation*}\nHowever, an inhomogeneous potential will also induce an angular moment $\\vec{M}_G$ in an extended object, and this can generally not be neglected.\nIn the continuous case, $\\vec{M}_G$ is given by\n\\begin{equation}\\label{eq:cont_torque}\n\\vec{M}_G = \\int_V \\vec{g}(\\vec{r}) \\rho(\\vec{r}) \\times (\\vec{r}-\\vec{r}_\\text{CG}) d\\vec{r},\n\\end{equation}\nand after discretisation this becomes\n\\begin{equation}\\label{eq:torque}\n\\vec{M}_G = \\sum_i m_i \\vec{g}(\\vec{s}_i + \\vec{r}_\\text{CG}) \\times \\vec{s}_i\n\\end{equation}\n\n\\section{Symmetric gravitational potential}\\label{sec:sympot}\nIf we assume that\n\\begin{equation}\\label{eq:gravpot}\n\\vec{g}(\\vec{r}) = -GMr^{-3}\\vec{r},\n\\end{equation}\ni.e. 
the central body is a sphere of mass $M$ with homogeneous density distribution, and further that $|\\vec{r}| = |\\vec{r}_\\text{CG} + \\vec{s}| \\gg |\\vec{s}|$ over the volume $V$ of the spacecraft, then we can approximate~\\cite{wertz1978}:\n\\begin{equation}\\label{eq:radapprox}\nr^{-3} = (\\vec{r}\\cdot\\vec{r})^{-3/2} =\n\\left\\{ r_\\text{CG}^2 \\left[ 1 + \\frac{2\\vec{r}_\\text{CG}\\cdot\\vec{s}}{r_\\text{CG}^2} + \\frac{\\vec{s}^2}{r_\\text{CG}^2} \\right] \\right\\}^{-3/2} \\approx\nr_\\text{CG}^{-3} \\left[ 1 - \\frac{3 \\vec{r}_\\text{CG}\\cdot\\vec{s}}{r_\\text{CG}^2} \\right]\n\\end{equation}\nSubstituting Eqns.~\\ref{eq:gravpot} and \\ref{eq:radapprox} into Eq.~\\ref{eq:cont_torque} leads to\n\\begin{equation}\n\\vec{M}_G = \\frac{3GM}{r_\\text{CG}^3} \\int_V (\\hat{r}_\\text{CG} \\times \\vec{s}) (\\hat{r}_\\text{CG} \\cdot \\vec{s}) \\rho(\\vec{r}) d\\vec{r}\n\\end{equation}\nIf the vectors $\\vec{s}$ and $\\hat{r}_\\text{CG}$ are expressed in the vessel reference system, then $\\vec{M}_G$ can be written as\n\\begin{equation}\n\\vec{M}_G = \\frac{3GM}{r_\\text{CG}^3} [(\\mat{L}\\hat{r}_\\text{CG}) \\times \\hat{r}_\\text{CG}]\n\\end{equation}\nwhere $\\mat{L}$ is the vessel's inertia tensor expressed in the same frame. (Note that Orbiter currently assumes that the vessel frame of reference is orientated so that $\\mat{L}$ is diagonal.)\n\n\n\\section{Discrete point mass systems}\nOrbiter implements gravity gradient torque as discussed in Section~\\ref{sec:sympot}, assuming that the vessel's inertia tensor is known. In this section we give an alternative method that describes the vessel as a rigid system of point masses. This method is not currently implemented in Orbiter.\n\n\\subsection{2-point systems}\nIn the simplest case, a vessel can be expressed as a rigid system of two point masses. This makes it possible to simulate an angular moment resulting from a gradient in the gravitational field $\\vec{g}(\\vec{r})$.\n\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(100,50)(0,35)\n\\put(40,70){\\circle*{3}}\n\\put(40,70){\\vector(0,-1){25}}\n\\put(45,58){\\makebox(0,0){$\\vec{g}_\\text{CG}$}}\n\\put(42,74){\\makebox(0,0){CG}}\n\\put(40,70){\\vector(-3,1){20}}\n\\put(30,77){\\makebox(0,0){$\\vec{s}_1$}}\n\\put(40,70){\\vector(3,-1){41}}\n\\put(62,66){\\makebox(0,0){$\\vec{s}_2$}}\n\\put(19,77){\\vector(0,-1){30}}\n\\put(15,62){\\makebox(0,0){$\\vec{g}_1$}}\n\\put(82,56){\\vector(0,-1){15}}\n\\put(86,48){\\makebox(0,0){$\\vec{g}_2$}}\n\\put(19,77){\\circle*{2}}\n\\put(14,77){\\makebox(0,0){$m_1$}}\n\\put(82,56){\\circle*{2}}\n\\put(87,56){\\makebox(0,0){$m_2$}}\n\\end{picture}\n\n\\noindent We want to calculate the angular moment induced by the difference of the gravitational fields at $\\vec{s}_1$ and $\\vec{s}_2$. Let the ratio of masses be denoted by $m_1/m_2 = \\gamma$. 
Then $\\vec{s}_2 = -\\gamma \\vec{s}_1$, and the forces acting on the two mass points are\n\\begin{equation*}\n\\vec{F}_1 = m_1 \\vec{g}_1,\n\\qquad \\vec{F}_2 = m_2 \\vec{g}_2 = \\frac{m_1}{\\gamma} \\vec{g}_2.\n\\end{equation*}\nThe angular moment is obtained by adding both components,\n\\begin{equation}\\label{eq:moment}\n\\begin{split}\n\\vec{M}_G &= \\vec{F}_1 \\times \\vec{s}_1 + \\vec{F}_2 \\times \\vec{s}_2 \\\\\n&= m_1 \\vec{g}_1 \\times \\vec{s}_1 - \\frac{m_1}{\\gamma} \\vec{g}_2 \\times \\gamma \\vec{s}_1 \\\\\n&= m_1 \\left(\\vec{g}_1 - \\vec{g}_2\\right) \\times \\vec{s}_1\n\\end{split}\n\\end{equation}\nwhich is now expressed as a function of the local field gradient $\\vec{g}_1 - \\vec{g}_2$.\n\n\\subsection{Numerical implementation}\nThe field difference is usually small compared to the magnitude of the field acting on the vessel:\n\\begin{equation*}\n|\\vec{g}_1 - \\vec{g}_2| \\ll |\\vec{g}_{1,2}|\n\\end{equation*}\nresulting in a significant loss of precision when the field difference in Eq.~\\ref{eq:moment} is calculated directly.\n\nTo avoid this problem, we can simplify Eq.~\\ref{eq:moment} if the field $\\vec{g}(\\vec{r})$ can be considered to be generated by a single point mass $M$ at position $\\vec{R}_0$ relative to CG.\nLet $\\vR{1} = \\vR{0} - \\vec{s}_1$ and $\\vR{2} = \\vR{0} - \\vec{s}_2$ be the position of $M$ relative to $m_1$ and $m_2$, and let $\\vec{s} = \\vec{s}_2-\\vec{s}_1$.\n\n\\setlength{\\unitlength}{0.5mm}\n\\begin{picture}(100,120)(-50,-35)\n\\put(34,62){\\makebox(0,0){CG}}\n\\put(40,66.5){\\circle*{3}}\n\\put(19,77){\\circle*{2}}\n\\put(81,46){\\circle*{2}}\n\\put(19,77){\\vector(2,-1){61}}\n\\put(19,77){\\vector(0,-1){100}}\n\\put(40,67){\\vector(0,-1){89.5}}\n\\put(81,46){\\vector(0,-1){69}}\n\\put(51,66){\\makebox(0,0){$\\vec{s}$}}\n\\put(20,-28){\\makebox(0,0){$\\vR{1}$}}\n\\put(41,-28){\\makebox(0,0){$\\vR{0}$}}\n\\put(81,-28){\\makebox(0,0){$\\vR{2}$}}\n\\put(13,77){\\makebox(0,0){$m_1$}}\n\\put(88,46){\\makebox(0,0){$m_2$}}\n\\put(19,46){\\line(1,0){61}}\n\\put(15,60){\\makebox(0,0){$h$}}\n\\put(19,50){\\line(1,0){4}}\n\\put(23,46){\\line(0,1){4}}\n\\put(19,46){\\makebox(4,4){$\\cdot$}}\n\\end{picture}\n\n\\noindent Since $\\nR{0} \\gg |\\vec{s}|$, we can find the difference $h = \\nR{1} - \\nR{2}$ of the distances of the point masses $m_1$ and $m_2$ from $M$ by\n\\begin{equation*}\nh = |\\vec{s}| \\cos \\sphericalangle (\\vec{s},\\vR{0})\n= \\frac{\\vec{s} \\cdot \\vR{0}}{\\nR{0}}\n\\end{equation*}\nWe can now write the field difference $\\vec{g}_1 - \\vec{g}_2$ as\n\\begin{equation*}\n\\vec{g}_1 - \\vec{g}_2 =\nGM \\left( \\frac{\\vR{1}}{\\nR{1}^3} - \\frac{\\vR{2}}{\\nR{2}^3} \\right) \n\\end{equation*}\nand by substituting $\\vR{2} = \\vR{1} - \\vec{s}$ and $\\nR{2} = \\nR{1} - h$, and omitting higher-order terms,\n\\begin{equation*}\n\\begin{split}\n\\frac{\\vec{g}_1 - \\vec{g}_2}{GM} &= \\frac{\\vR{1}}{\\nR{1}^3} - \\frac{\\vR{1} - \\vec{s}}{(\\nR{1}-h)^3} \\\\\n&= \\frac{h (- 3\\nR{1}^2 + 3\\nR{1} h - h^2) \\vR{1} + \\nR{1}^3 \\vec{s}}\n{\\nR{1}^3 (\\nR{1}-h)^3} \\\\\n&= \\frac{3h(h - \\nR{1})\\vR{1} + \\nR{1}^2\\vec{s}}\n{\\nR{1}^4 (\\nR{1} - 3h)} + O(2)\n\\end{split}\n\\end{equation*}\nSubstituting into Eq.~\\ref{eq:moment} and utilising $\\vec{s} \\parallel \\vec{s}_1$ leads to\n\\begin{equation*}\n\\vec{M}_G \\approx 3 G M m_1 \\frac{h(h-\\nR{1})}{\\nR{1}^4(\\nR{1}-3h)}\n\\vR{1} \\times \\vec{s}_1\n\\end{equation*}\n\n\\subsection{Multi-point systems}\nFor objects composed of more than two mass points, the formulation must be 
somewhat extended. We split the gravitational field acting on mass point $m_i$ into a barycentric and a perturbation component: $\\vec{g}(\\vec{r}_\\text{CG} + \\vec{s}_i) = \\vec{g}_\\text{CG} + \\vec{\\gamma}_i$.\nThen Eq.~\\ref{eq:torque} becomes\n\\begin{equation}\\label{eq:multitorque}\n\\begin{split}\n\\vec{M}_G &= \\sum_i (\\vec{g}_\\text{CG} + \\vec{\\gamma}_i) \\times m_i \\vec{s}_i\\\\\n&= \\vec{g}_\\text{CG} \\times \\sum_i m_i \\vec{s}_i + \\sum_i \\vec{\\gamma}_i \\times m_i \\vec{s}_i\\\\\n&= \\sum_i \\vec{\\gamma}_i \\times m_i \\vec{s}_i\n\\end{split}\n\\end{equation}\nsince the definition of the barycentre demands $\\sum_i m_i \\vec{s}_i = 0$.\n\nThe perturbation components are calculated in the same way as in the 2-point problem, assuming that the gravitational field is generated by a single point mass $M$ at position $\\vec{R}_0$ in barycentric coordinates of the spacecraft, with $|\\vec{R}_0| \\gg |\\vec{s}_i|\\;\\forall i$.\nGiven $\\vec{R}_i = \\vec{R}_0 - \\vec{s}_i$, the difference $h_i$ between the distances of $m_i$ and the CG from $M$ is given by\n\\begin{equation*}\nh_i = |\\vec{s}_i| \\cos\\sphericalangle (-\\vec{s}_i, \\vec{R}_0)\n= -\\frac{\\vec{s}_i \\cdot \\vec{R}_0}{|\\vec{R}_0|}\n\\end{equation*}\nThen as before,\n\\begin{equation*}\n\\begin{split}\n\\frac{\\vec{\\gamma}_i}{GM} &= \\frac{\\vec{R}_0 - \\vec{s}_i}{(|\\vec{R}_0|+h_i)^3} - \\frac{\\vec{R}_0}{|\\vec{R}_0|^3} \\\\\n&= -\\frac{h_i(3|\\vec{R}_0|^2 + 3|\\vec{R}_0| h_i + h_i^2) \\vec{R}_0 + |\\vec{R}_0|^3 \\vec{s}_i}{(|\\vec{R}_0|+h_i)^3 |\\vec{R}_0|^3} \\\\\n&= -\\frac{3h_i(|\\vec{R}_0|+h_i) \\vec{R}_0 + |\\vec{R}_0|^2 \\vec{s}_i}{|\\vec{R}_0|^4 (|\\vec{R}_0| + 3h_i)} + O(2)\n\\end{split}\n\\end{equation*}\nand inserting into Eq.~\\ref{eq:multitorque} leads to\n\\begin{equation*}\n\\vec{M}_G = -\\frac{3GM}{|\\vec{R}_0|^4} \\vec{R}_0 \\times \\sum_i\n\\frac{m_i h_i(|\\vec{R}_0|+h_i)}{|\\vec{R}_0| + 3h_i} \\vec{s}_i\n\\end{equation*}\n\n\\section{Damping}\nThe model defined above has equilibrium states ($\\vec{M}_G = 0$) for the attitudes $\\vec{s} \\parallel \\vR{0}$ (vessel axis aligned with radius vector) and $\\vec{s} \\perp \\vR{0}$ (vessel axis perpendicular to radius vector). Only the first of these is stable: there, an attitude perturbation generates a restoring torque, leading to an undamped oscillation around the equilibrium attitude.\n\nOrbiter allows a damping term to be introduced,\n\\begin{equation*}\n\\vec{M}_D = -\\alpha\\vec{\\omega}_G\n\\end{equation*}\nwhere $\\vec{\\omega}_G$ is the angular velocity induced by torque $\\vec{M}_G$, and $\\alpha$ is a user-defined damping coefficient. 
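As a concrete numerical illustration of how the two torque contributions would be evaluated (a sketch based on the formulas above, not Orbiter's actual implementation; SI units, with $\\mat{L}$ the inertia tensor of Section~\\ref{sec:sympot} and all vectors expressed in the vessel frame):\n\\begin{verbatim}\nimport numpy as np\n\ndef gravity_gradient_torque(GM, r_cg, L):\n    # M_G = 3 GM / r^3 * (L r_hat) x r_hat,\n    # per the symmetric-potential formula above\n    r = np.linalg.norm(r_cg)\n    r_hat = r_cg / r\n    return 3.0 * GM / r**3 * np.cross(L @ r_hat, r_hat)\n\ndef damping_torque(alpha, omega_g):\n    # M_D = -alpha * omega_G\n    return -alpha * omega_g\n\\end{verbatim}\n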
The physical source for the damping term may be the deformation of the vessel by tidal forces, or redistribution of liquid propellants.\n\n\\bibliography{../ref}\n\n\\end{document}\n", "meta": {"hexsha": "7a75019cdaf39e7763d986faacc8713ba2a2cf45", "size": 11206, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/Technotes/distmass/distmass.tex", "max_stars_repo_name": "Ybalrid/orbiter", "max_stars_repo_head_hexsha": "7bed82f845ea8347f238011367e07007b0a24099", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1040, "max_stars_repo_stars_event_min_datetime": "2021-07-27T12:12:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-02T14:24:49.000Z", "max_issues_repo_path": "Doc/Technotes/distmass/distmass.tex", "max_issues_repo_name": "Ybalrid/orbiter", "max_issues_repo_head_hexsha": "7bed82f845ea8347f238011367e07007b0a24099", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 20, "max_issues_repo_issues_event_min_datetime": "2021-07-27T12:25:22.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-02T12:22:19.000Z", "max_forks_repo_path": "Doc/Technotes/distmass/distmass.tex", "max_forks_repo_name": "Ybalrid/orbiter", "max_forks_repo_head_hexsha": "7bed82f845ea8347f238011367e07007b0a24099", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2021-07-27T14:19:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-02T05:51:52.000Z", "avg_line_length": 51.8796296296, "max_line_length": 542, "alphanum_fraction": 0.6700874532, "num_tokens": 4350, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972717658209, "lm_q2_score": 0.6859494678483918, "lm_q1q2_score": 0.5971857352780267}}
{"text": "% Created 2021-12-09 Thu 23:56\n% Intended LaTeX compiler: pdflatex\n\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true,linkcolor=blue}\n\\date{\\today}\n\\title{On discontinuity of derivatives in Calculus}\n\\hypersetup{\n pdfauthor={Iman Alavi Fazel},\n pdftitle={On discontinuity of derivatives in Calculus},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 27.2 (Org mode 9.4.4)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\n\\section{Prologue}\n\\label{sec:orgbf28a93}\nDerivatives appear in almost every engineering subject.\nFor example, Faraday's law of induction which is written as:\n\n\\begin{equation}\n  \\varepsilon = -N\\frac{d\\phi}{dt}\n\\end{equation}\n\nOr the famous Newton's law:\n\n\\begin{equation}\n  F = m\\frac{dv}{dt}\n\\end{equation}\n\nA question that I was prompted with, was that how much discontinuous derivative function can be?\nBecause the fundamental theorem of Calculus (Part II) tells us that:\nif a function has derivatives at all point (in its domain), and the values of these derivatives are \\textbf{changing continuously}, then integral cancels the effect of differentiation and gives back the original function.\nBut does having the equation $$f(x) = \\frac{dy}{dx}$$ automatically implies that $$f(x)$$ can only be continuous? \nIf not, how much discontinuous our function can be?\n\nI've spent a good amount of time looking for this starightforward question, which doesn't get addressed in a standard Calculus course (at least in the engineering curriculums).\nThe answer turned out to be a bit tricky because we should first be clear that of what do we mean by \"how much discontinouos\".\nBut first, let's first start with a classic example where discontinuity of derivative occure at a \\emph{single} point.\n\n\\section{Discontinuity of derivative at a single point}\n\\label{sec:org7485e62}\nThe function below is a well-known function that has derivative at all points, and the value of these derivatives are all continuous except at a single point, \\(x = 0\\).\n\n\\begin{equation}\n\\[   \nf(x) = \n     \\begin{cases}\n       \\equation{x^2 sin(1/x)} &\\quad\\text{if }\\equation{x \\neq 0} \\\\\n       \\equation{0} &\\quad\\text{if }\\equation{x = 0} \\\\\n     \\end{cases}\n\\]\n\\end{equation}\n\nIt has the following graph:\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.9\\linewidth]{./img/figure1.png}\n\\caption{\\label{fig:org4e02768}}\n\\end{figure}\n\nThe derivative of this function (by plugging into the definition of derivative) is:\n\n\\begin{equation}\n\\[   \nf'(x) = \n     \\begin{cases}\n       \\equation{x^2 sin(1/x) - cos(1/x)} &\\quad\\text{if }\\equation{x \\neq 0} \\\\\n       \\equation{0} &\\quad\\text{if }\\equation{x = 0} \\\\\n     \\end{cases}\n\\]\n\\end{equation}\n\nWith the following graph:\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.9\\linewidth]{./img/figure2.png}\n\\label{fig:org3cf45b4}\n\\end{figure}\n\nWhich clearly shows that it is discontinuous at \\(x = 0\\).\nAs a side note, we may ask ourselves under which conditions does the discontinuity of derivative can occure?\nIs the discontinuity in above example a special type 
of discontinuity, or can it be of any type, such as a \"jump\" discontinuity?\n\nAs it turns out, even though derivatives can be discontinuous,\nthey still have to satisfy the \\emph{Intermediate Value theorem}.\n\n\\textbf{Theorem} Suppose \\(f\\) is a real-valued differentiable function on \\([a,b]\\).\nThen for every \\(y\\) value between \\(f'(a)\\) and \\(f'(b)\\), there exists an \\(x\\) in \\([a,b]\\) such that \\(f'(x) = y\\). \n\nWhat does that imply?\nIt implies that at any point where the derivative is discontinuous, the \\textbf{right-hand-side} limit, the \\textbf{left-hand-side} limit, or both must fail to exist\n(as with our function at \\(x = 0\\)).\n\\footnote{This kind of discontinuity is referred to as a discontinuity of \\emph{the second type}, as opposed to the more well-known \\emph{first type}, such as \"jumps\" in graphs.}\n\nNow, \\emph{how} discontinuous can a derivative function be?\nConsider, for example, the following function:\n\n\\begin{equation}\nf(x) = \n     \\begin{cases}\n       1 &\\quad\\text{if } x \\text{ rational} \\\\\n       0 &\\quad\\text{if } x \\text{ irrational}\n     \\end{cases}\n\\end{equation}\n\nIt has a second-type discontinuity at every point \\(x\\) in \\(\\mathbb{R}\\).\nCan this function be a derivative?\nThe answer is \\emph{no}, because every derivative is a \\emph{Baire class 1 function} (also written as \"Baire-1 function\") in the language of topology, and there are restrictions on how discontinuous Baire-1 functions can be.\nLet's first define what Baire-1 functions are.\n\n\\section{Baire-1 functions}\n\\label{sec:orgdcde0a7}\nThe Baire-1 functions that we mentioned in the previous section have the following definition:\n\\footnote{This section requires familiarity with functional sequences and what pointwise convergence means. I highly recommend watching \"Functional sequences (Part 1 of 2)\" by Rob Shone on YouTube; here I assume you already know what these terms mean.}\n\n\\textbf{Def.} If \\(f(x) = \\lim_{n\\to\\infty}{f_n(x)}\\) exists (as a real number) for each \\(x \\in \\mathbb{R}\\), where each \\(f_n\\) is continuous, then the function \\(f\\) is a Baire-1 function.\n\nDerivative functions are Baire-1 functions, because from elementary Calculus we know that:\n\n\\begin{equation}\n  f'(x) = \\lim_{h\\to0} \\frac{f(x + h) - f(x)}{h}\n\\end{equation}\n\nNow we have to express this as the limit of a sequence of functions. Taking \\(h = 1/n\\):\n\n\\begin{equation}\n  f'(x) = \\lim_{n\\to\\infty} \\frac{f(x + \\frac{1}{n}) - f(x)}{\\frac{1}{n}}\n\\end{equation}\n\n\\begin{equation}\n  f'(x) = \\lim_{n\\to\\infty} n [f(x + \\frac{1}{n}) - f(x)]\n\\end{equation}\n\nLastly, naming the whole expression inside the limit \\(f_n(x)\\), we get:\n\n\\begin{equation}\n  f'(x) = \\lim_{n\\to\\infty} {f_n(x)}\n\\end{equation}\n\nand \\(f_n(x)\\) is continuous for \\(n = 1, 2, 3, \\ldots\\) because our assumption was that \\(f\\) is differentiable at all points, and that implies continuity at all points as well.\n\nNow, we only need to mention one more theorem before we can answer our original question. 
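As a quick sanity check of the definition: a jump function such as \\(\\operatorname{sgn}(x)\\) is Baire-1, since\n\\begin{equation}\n  \\operatorname{sgn}(x) = \\lim_{n\\to\\infty} \\frac{2}{\\pi}\\arctan(nx)\n\\end{equation}\npointwise, with every approximant continuous. Note that \\(\\operatorname{sgn}\\) is nevertheless \\emph{not} a derivative: it jumps at \\(0\\), which the intermediate value theorem above forbids.\n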
But before that, we first have to define some terminology.\n\n\\emph{Topological space}: This term has a rigorous definition in mathematics, but here we are interested in a specific example of a topological space, namely a \"metric space\".\nThe real numbers are a metric space.\n\nNext we have the following definitions:\n\n\\emph{Closure}: the closure of a subset \\(S\\) of a topological space consists of all points in \\(S\\) together with all limit points of \\(S\\) (points that are arbitrarily close to \\(S\\)).\n\n\\emph{Nowhere dense subset}: A subset of a topological space is called \"nowhere dense\" if its closure has empty interior.\nFor instance, the integers are nowhere dense in the real numbers.\n\n\\emph{Dense subset}: a subset \\(A\\) of a topological space \\(X\\) is dense if every point \\(x \\in X\\) either belongs to \\(A\\) or is a limit point of \\(A\\).\nFor instance, the rational numbers are a dense subset of the real numbers, because every real number either is a rational number or has a rational number arbitrarily close to it.\n\n\\emph{Meagre set} (or set of first category): a subset of a larger space that is negligible compared to that space.\nMathematically, it is defined as \\emph{a countable union of nowhere dense sets}.\n\nAn example of a meagre set is the Cantor set with respect to the real numbers.\nThis set is constructed as below:\n\n\\begin{enumerate}\n\\item Take the interval \\([0,1]\\) and divide it into three equal pieces.\n\\item Remove the middle piece (excluding its endpoints).\n\\item Keep the union of the two remaining parts (\\([0,\\frac{1}{3}] \\cup [\\frac{2}{3},1]\\)).\n\\item \\textbf{Indefinitely} repeat the 2nd and 3rd steps on each remaining part.\n\\end{enumerate}\n\nThe construction of the Cantor set is visualized below:\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.9\\linewidth]{./img/figure3.png}\n\\caption{\\label{fig:org10ba7a6}Construction of the Cantor set.}\n\\end{figure}\n\n\\emph{Non-meagre set} (or second category): a set that is not meagre.\n\n\\textbf{Def.} \\emph{Residual set}: The complement of a meagre set is called a residual set; in other words, what is left of a (larger) set after removing a meagre set.\n\n\\section{Baire-Osgood theorem}\n\\label{sec:org77ec466}\n\nFinally, we shall mention a theorem that answers our original question.\n\n\\textbf{Theorem} Let \\(f\\) be a \\uline{Baire-1 function} on the \\uline{complete metric space} \\(X\\). Then \\(f\\) is continuous on a \\uline{residual subset} of \\(X\\).\n\\footnote{This theorem is a consequence of the Baire category theorem; a proof can be found in N.L. Carothers, \"Real Analysis\", p.~183.}\n\nTherefore, if our function is differentiable everywhere, the derivative function must be continuous on a dense set of points (a residual subset of \\(\\mathbb{R}\\) is necessarily dense, by the Baire category theorem).\n\nAnd moreover:\n\n\\textbf{Theorem} Let \\(f\\) be a real-valued function on \\(\\mathbb{R}\\). The set of points of discontinuity of \\(f\\) is of first category (a meagre set) if and only if \\(f\\) is continuous at a dense set of points.\n\nOne final thing to mention: even though the set of discontinuity points of a derivative is meagre, a derivative function can still fail to be integrable in the classical sense, i.e.\\ the 
Riemann integral.\nAn example of such a function is \\emph{Volterra's function}.\nThis function is differentiable everywhere, but its derivative is discontinuous on a nowhere dense set of positive measure.\nTherefore, the derivative is not Riemann integrable.\n\\end{document}\n", "meta": {"hexsha": "c8fe829860316f1b71d4621ea2bbec1cad694e1c", "size": 9690, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "1_Discontinuity_of_derivatives/discontinuity_of_derivatives.tex", "max_stars_repo_name": "DigitalNX/docs", "max_stars_repo_head_hexsha": "7bf9ca4ae054cd0816f45da67dff96000600420f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-01-16T19:10:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-16T19:10:55.000Z", "max_issues_repo_path": "1_Discontinuity_of_derivatives/discontinuity_of_derivatives.tex", "max_issues_repo_name": "DigitalNX/docs", "max_issues_repo_head_hexsha": "7bf9ca4ae054cd0816f45da67dff96000600420f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1_Discontinuity_of_derivatives/discontinuity_of_derivatives.tex", "max_forks_repo_name": "DigitalNX/docs", "max_forks_repo_head_hexsha": "7bf9ca4ae054cd0816f45da67dff96000600420f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.0666666667, "max_line_length": 245, "alphanum_fraction": 0.7393188854, "num_tokens": 2722, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8705972784807406, "lm_q1q2_score": 0.5971857287054797}}
{"text": "\\section{Cycles and Permutations}\r\n\\begin{definition}\r\n    Given a list $a_1,a_2,\\ldots,a_k$ of distinct elements of $\\{1,2,\\ldots,n\\}$, the $k$-cycle, written as $(a_1\\ a_2\\ \\ldots\\ a_k)$ is the permutation $\\pi\\in S_n$ defined by $\\pi(a_i)=a_{i+1}$ for $1\\le i\\le k-1$, $\\pi(a_k)=a_1$, and $\\pi(j)=j$ if $j\\notin\\{a_i:1\\le i\\le k\\}$.\r\n\\end{definition}\r\nIt is obvious that it is indeed a permutation, and we know that\r\n$$(a_1\\ a_2\\ \\ldots\\ a_k)^{-1}=(a_k\\ a_{k-1}\\ \\ldots\\ a_1)$$\r\nIn particular, there is a set of special cycles.\r\n\\begin{definition}\r\n    A $2$-cycle is called a transposition.\r\n\\end{definition}\r\nTwo different cycles $(a_1\\ a_2\\ \\ldots\\ a_k),(b_1\\ b_2\\ \\ldots\\ b_l)$ are called disjoint if $\\{a_i:1\\le i\\le k\\}\\cap\\{b_j:1\\le j\\le l\\}=\\varnothing$.\\\\\r\nWe can compose the permutations since they are in fact functions\r\n\\begin{example}\r\n    $((1\\ 2\\ 3\\ 4)\\circ(3\\ 2\\ 4))(1)=2$, and $((1\\ 2\\ 3\\ 4)\\circ(3\\ 2\\ 4))(2)=(1\\ 2\\ 3\\ 4)(4)=1$, similarly $((1\\ 2\\ 3\\ 4)\\circ(3\\ 2\\ 4))(3)=3,((1\\ 2\\ 3\\ 4)\\circ(3\\ 2\\ 4))(4)=4$, so it is a transposition $(1\\ 2)$.\\\\\r\n    By the same way, we have $(3\\ 2\\ 4)\\circ (1\\ 2\\ 3\\ 4)=(1\\ 4)$, so $(3\\ 2\\ 4)$ and $(1\\ 2\\ 3\\ 4)$ do not commute.\\\\\r\n    So $S_4$ is not abelian.\r\n    In fact, $S_n$ is not abelian for all $n\\ge 3$ since $S_n$ can be embedded into $S_{n+1}$ by fixing the last element.\r\n\\end{example}\r\n\\begin{lemma}\r\n    1. $(a_1\\ a_2\\ \\ldots\\ a_k)=(a_k\\ a_1\\ a_2\\ \\ldots\\ a_{k-1})$.\\\\\r\n    2. Disjoint cycles commute.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Trivial.\r\n\\end{proof}\r\n\\begin{theorem}\\label{disjoint_cycles}\r\n    Every permutation is a product of disjoint cycles (including $1$-cycles for convenience) and it is unique to write it as such a product up to the ambiguity stated in the preceding lemma.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Trivial but let us write this down.\r\n    We do strong induction on $n$.\r\n    The base case is trivial.\r\n    Now we consider the sequence $1, \\sigma(1),\\sigma^2(1),\\ldots$.\r\n    Since there is only finitely many numbers, at some point the sequence goes back to $1$.\r\n    Indeed, there must be some $p>q$ such that $\\sigma^p(1)=\\sigma^q(1)$, thus $\\sigma^{p-q}(1)=1$.\r\n    So we take the smallest $k\\ge 1$ such that $\\sigma^k(1)=1$, so $\\sigma^i(1)\\neq\\sigma^j(1)$ for $i\\neq j$ because if so then $\\sigma^{i-j}(1)=1$ (WLOG $i>j$) which contradicts the minimality of $k$.\\\\\r\n    Hence $\\sigma$ maps $S=\\{1,\\sigma(1),\\sigma^2(1),\\ldots\\}$ to itself, so it is a permutation on $T\\{1,2,\\ldots,n\\}\\setminus S$, thus $\\sigma|_T$ is a product of disjoint cycles by induction hypothesis, thus we must have $\\sigma=(1\\ \\sigma(1)\\ \\sigma^2(1)\\ \\ldots\\ \\sigma^{k-1}(1))\\sigma|_T$, which proves the existence.\\\\\r\n    The uniqueness is obvious since if $\\sigma$ is written as two cycles, then we can pick an element and by enumerating it we can show that the cycles containing that element are the same.\r\n    Hence the theorem is proved.\r\n\\end{proof}\r\n\\begin{lemma}\r\n    For any permutation $\\sigma$, write it as the product of disjoint cycles, that is,\r\n    $$\\sigma=(a_1^1\\ a_2^1\\ \\ldots\\ a_{k_1}^1)(a_1^2\\ a_2^2\\ \\ldots\\ a_{k_2}^2)\\cdots (a_1^r\\ a_2^r\\ \\ldots\\ a_{k_r}^r)$$\r\n    Then the order of $\\sigma$ is the LCM of $k_1,k_2,\\ldots,k_r$.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Note that\r\n 
   $$\\sigma^j=(a_1^1\\ a_2^1\\ \\ldots\\ a_{k_1}^1)^j(a_1^2\\ a_2^2\\ \\ldots\\ a_{k_2}^2)^j\\cdots (a_1^r\\ a_2^r\\ \\ldots\\ a_{k_r}^r)^j$$\r\n    since disjoint cycles commute.\r\n    Now a cycle $(a_1^i\\ a_2^i\\ \\ldots\\ a_{k_i}^i)$ has order $k_i$.\r\n    So if we let $\\ell$ be the LCM of $k_1,k_2,\\ldots,k_r$, we immediately have $\\sigma^\\ell=e$.\r\n    Suppose $\\sigma^m=e$ for some $m$, then we would have\r\n    $$(a_1^1\\ a_2^1\\ \\ldots\\ a_{k_1}^1)^m=((a_1^2\\ a_2^2\\ \\ldots\\ a_{k_2}^2)^m\\cdots (a_1^r\\ a_2^r\\ \\ldots\\ a_{k_r}^r)^m)^{-1}$$\r\n    But the RHS fixes $a_s^1$ for any $1\\le s\\le k_1$ due to disjointness, hence so does the LHS, thus the LHS must equal the identity, i.e. $k_1|m$.\r\n    Similarly $k_i|m$ for any $1\\le i\\le r$, thus $\\ell|m\\implies \\ell\\le m$, hence $\\ell$ is the order of $\\sigma$.\r\n\\end{proof}\r\n\\subsection{The Sign of a Permutation}\r\n\\begin{proposition}\r\n    Every permutation is a product of transpositions.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    By Theorem \\ref{disjoint_cycles}, it suffices to show that every cycle is a product of transpositions, but\r\n    $$(a_1\\ a_2\\ \\ldots\\ a_k)=(a_1\\ a_k)(a_1\\ a_{k-1})\\cdots (a_1\\ a_2)$$\r\n    As desired.\r\n\\end{proof}\r\n\\begin{proof}[Alternative proof]\r\n    We proceed by induction on the $n$ such that the permutation is in $S_n$.\r\n    For $n\\le 1$ there is nothing to show, as the only permutation is the identity (the empty product of transpositions).\r\n    More generally, let $\\sigma\\in S_n$ for $n>1$ and choose an $a\\in\\{1,2,\\ldots,n\\}$ with $\\sigma(a)\\neq a$ (if no such $a$ exists, $\\sigma$ is the identity); then $(a\\ \\sigma(a))\\sigma$ fixes $a$, hence is a permutation on at most $n-1$ elements, so the proof is done by induction.\r\n\\end{proof}\r\n\\begin{definition}\r\n    Define the sign of a permutation $\\sigma$ by\r\n    $$\r\n    \\operatorname{sgn}(\\sigma)=\r\n    \\begin{cases}\r\n        1\\text{, if $\\sigma$ is a product of an even number of transpositions}\\\\\r\n        -1\\text{, otherwise}\r\n    \\end{cases}\r\n    $$\r\n\\end{definition}\r\n\\begin{proposition}\r\n    The sign of a permutation is well-defined.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Let $\\sigma$ be written as the product of disjoint cycles (including $1$-cycles).\r\n    Suppose there are $\\ell(\\sigma)$ many such cycles.\r\n    This number is well-defined by Theorem \\ref{disjoint_cycles}.\\\\\r\n    Consider a transposition $(c\\ d)$.\r\n    $\\ell(\\sigma\\circ (c\\ d))=\\ell(\\sigma)+1$ if $c,d$ are in the same cycle in $\\sigma$.\r\n    Otherwise, it is $\\ell(\\sigma)-1$.\r\n    So in either case, $\\ell(\\sigma\\circ (c\\ d))\\equiv \\ell(\\sigma)+1\\pmod{2}$.\\\\\r\n    Now write $\\sigma=t_1t_2\\cdots t_k$ where the $t_i$ are transpositions.\r\n    Applying the above repeatedly starting from the identity, $\\ell(\\sigma)\\equiv\\ell(e)+k\\equiv n+k\\pmod{2}$, so if a permutation can be written as a product of $k$ transpositions and also as a product of $l$ transpositions, then $k\\equiv l\\pmod{2}$.\r\n    Therefore the sign is well-defined.\r\n    Indeed, $\\operatorname{sgn}(\\sigma)=(-1)^{\\ell(\\sigma)-n}$.\r\n\\end{proof}\r\nImmediately $\\operatorname{sgn}((a_1\\ a_2\\ \\ldots\\ a_r))=(-1)^{r-1}$.\r\n\\begin{corollary}\r\n    The function $\\operatorname{sgn}$ is a homomorphism $S_n\\to (\\{1,-1\\},\\times,1)$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Trivial.\r\n\\end{proof}\r\n\\begin{definition}\r\n    A permutation $\\sigma$ is even if $\\operatorname{sgn}(\\sigma)=1$ and it is odd otherwise.\r\n\\end{definition}\r\n\\begin{definition}\r\n    The alternating group $A_n\\le S_n$ is defined as $A_n=\\ker\\operatorname{sgn}$.\r\n    That is, $A_n$ consists of all even permutations.\r\n\\end{definition}\r\nNote that any 
transposition has sign $-1$ and the identity has sign $1$, thus $\\operatorname{sgn}$ is surjective, therefore the index of $A_n$ is $2$, hence it is normal.\r\n\\subsection{Conjugation in the Permutation Group}\r\n\\begin{proposition}\r\n    If we have a permutation $\\sigma$, then $\\sigma (a_1\\ a_2\\ \\ldots\\ a_r)\\sigma^{-1}=(\\sigma(a_1)\\ \\sigma(a_2)\\ \\ldots\\ \\sigma(a_r))$.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Trivial.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    $\\tau,\\tau'\\in S_n$ are conjugates if and only if, when written as a composition of disjoint cycles (in which every number appears, i.e. counting $1$-cycles), they have the same number of cycles of each length.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Follows directly from the formula in the preceding proposition and Theorem \\ref{disjoint_cycles}.\r\n\\end{proof}\r\nFor a permutation, we can produce an (unique) string $1^{a_1}2^{a_2}\\cdots n^{a_n}$ where $a_i$ is the number of cycles of length $i$.\r\nWe call such a string the ``cycles type'' of a permutation.\r\nSo the above corollary means that two permutations are in the same conjugacy class if and only if they have the same cycle type.\r\nIt is then curious to consider the size of each conjugacy class.\r\n\\begin{definition}\r\n    The stabiliser of an element $g$ under the conjugacy action is the centraliser, written as $C_G(g)$.\r\n\\end{definition}\r\n\\begin{lemma}\r\n    If $\\tau\\in S_n$ has cycle type $1^{a_1}2^{a_2}\\cdots n^{a_n}$, then\r\n    $$|C_G(\\tau)|=1^{a_1}(a_1)!2^{a_2}(a_2)!\\cdots n^{a_n}(a_n)!$$\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Obvious.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    The size of the conjugacy class containing $\\tau$ is\r\n    $$\\frac{n!}{1^{a_1}(a_1)!2^{a_2}(a_2)!\\cdots n^{a_n}(a_n)!}$$\r\n    where $a_i$ are defined as before.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Orbit-Stabiliser.\r\n\\end{proof}\r\n\\begin{example}\r\n    Consider $S_4$, then the conjugacy classes are of sizes $1$ (consisting of $e$ only and have cycle type $1^4$), $6$ (of type $1^22^1$), $3$ (of type $2^2$), $8$ (of type $1^13^1$), and $6$ (of type $4^1$) by the formula.\r\n    We do have $1+6+3+8+6=24=4!=|S_4|$.\r\n    Note that given the number of conjugacy classes that we expect, it is trivial to work out what are the elements.\r\n\\end{example}\r\n\\begin{corollary}\r\n    Any normal subgroup of a finite group $G$ is an union of conjugacy classes of $G$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    If $h\\in H$ is in one of the conjugacy class, then by normality $ghg^{-1}\\in H$ for any $g$, so the entire conjugacy class is in $H$.\r\n\\end{proof}\r\n\\begin{example}\r\n    We try to find the normal subgroups of $S_4$.\r\n    Let $H\\unlhd S_4$, then $H$ must contain the conjugacy class $\\{e\\}$.\r\n    If $H$ contains the conjugacy class $1^22^1$, then since transpositions generates $S_4$, $H$ is the entire group $S_4$.\\\\\r\n    If $H$ contains the conjugacy class $2^2$, then it contains the normal subgroup $K$ consisting of the conjugacy classes $1^4,2^2$ only.\r\n    We the case $H>K$ means that $H$ contains more than $2$ conjugacy classes, so we can discuss this case later by considering other conjugacy classes.\\\\\r\n    If $H$ contains the conjugacy class $3^1$, then it contains all $3$-cycles, which there are $8$ of them, so $|H|\\ge 9$, so we must have $|H|=12$ or $24$.\r\n    Note that $3$-cycles are even, so $H\\cap A_4$ contains all $3$-cycles, which have at least $9$ elements, thus $H\\cap A_4=A_4$, so 
$H=A_4$ or $H=S_4$.\\\\\r\n    If $H$ contains the conjugacy class $4^1$, but then it contains at least one $3$-cycles (e.g. $(1\\ 2\\ 3\\ 4)(1\\ 4\\ 2\\ 3)=(2\\ 4\\ 3)$), but since $H$ is normal it in fact contains all $3$-cycles, then it is just the same as the previous case (where, since $(1\\ 2\\ 3\\ 4)$ is odd, we have $H=S_4$).\\\\\r\n    So $H$ is one of $K,A_4,S_4$.\\\\\r\n    We have $S_4/S_4\\cong \\{e\\},S_4/A_4\\cong C_2, S_4/K\\cong S_3$.\r\n\\end{example}\r\nAs $A_n\\unlhd S_n$, for $\\sigma\\in A_n$, $\\operatorname{ccl}_{A_n}(\\sigma)\\subseteq\\operatorname{ccl}_{S_n}(\\sigma)$, but the equality may not hold.\r\nFor example, $(1\\ 2\\ 3)$ and $(1\\ 3\\ 2)$ are even and conjugates of each other in $S_3$, but they are not in $A_3$ which is abelian.\r\nOn the other hand, in $S_5$, we have, however, $[(2\\ 3)(4\\ 5)](1\\ 2\\ 3)[(2\\ 3)(4\\ 5)]^{-1}=(1\\ 3\\ 2)$, and $(2\\ 3)(4\\ 5)$ is even, so they are conjugate in $A_5$ in this case.\\\\\r\nFor $\\sigma\\in A_n$, we have\r\n$$|A_n|/|\\operatorname{ccl}_{A_n}(\\sigma)|=|C_{A_n}(\\sigma)|,|S_n|/|\\operatorname{ccl}_{S_n}(\\sigma)|=|C_{S_n}(\\sigma)|$$\r\nBut we also have $|S_n|=2|A_n|$, so either the conjugacy classes are the same, which implies $|C_{A_n}(\\sigma)|=|C_{S_n}(\\sigma)|/2$.\r\nOtherwise, $|\\operatorname{ccl}_{A_n}(\\sigma)|=|\\operatorname{ccl}_{S_n}(\\sigma)|/2$ and $C_{A_n}(\\sigma)=C_{S_n}(\\sigma)$.\\\\\r\nThus either the centraliser of $\\sigma$ contains an odd element and the conjugacy classes are the same or the centralisers of $\\sigma$ is contained in $A_n$ and the conjugacy class in $A_n$ is half the size of that in $S_n$.\r\n\\begin{example}\r\n    Consider $S_4$ and $A_4$.\r\n    We look at the conjugacy classes in $S_4$ and study whether they split in $A_4$.\r\n    $\\{e\\}$ itself constitutes a conjugacy class, so there is nothing to show.\r\n    The conjugacy class $1^22^1$ of transpositions is all odd, thus does not lie in $A_4$.\r\n    $2^2$ in $S_4$ are double transpositions, which are even, so it do lie in $A_4$, but there are only $3$ elements, so it cannot split.\r\n    On the other hand, $(1\\ 2)$ centralises $(1\\ 2)(3\\ 4)$, so the centraliser does contain an odd element.\\\\\r\n    The conjugacy class $1^13^1$ are even so do lie in $A_4$.\r\n    And the centraliser of it is contained in $A_4$, i.e. 
all even elements, so the conjugacy class splits into two.\r\n    Indeed, they splits to give $\\{(1\\ 2\\ 3),(1\\ 4\\ 2),(1\\ 3\\ 4),(2\\ 4\\ 3)\\}$ and $\\{(1\\ 3\\ 2),(1\\ 2\\ 4), (1\\ 4\\ 3),(2\\ 3\\ 4)\\}$.\\\\\r\n    The $4$-cycles in $S_4$ are odd, so do not lie in $A_4$, hence again there is nothing to show.\r\n\\end{example}\r\nWe can also use it to search for normal subgroups of $A_4$.\r\n\\begin{example}\r\n    By definition a normal subgroup of $A_4$ must be the union of conjugacy classes.\r\n    Then we could either have $\\{e\\}$, or $K$, constituting of $\\{e\\}$ with all the double transpositions.\r\n    Note that if the normal subgroup contains one of the $1^13^1$ conjugacy classes, it must contain the other one which constitutes the inverses of it.\r\n    Hence it must contain at least $9$ elements, but $|A_4|=12$, so it can only be $A_4$.\r\n    Therefore the normal subgroups are $\\{e\\},K,A_4$.\r\n\\end{example}\r\nIn particular, $A_4$ is not simple.\r\n\\begin{theorem}\r\n    $A_5$ is simple.\r\n\\end{theorem}\r\nIn fact, $A_n$ is simple for any $n\\neq 4$.\r\n\\begin{proof}\r\n    $S_5$ has $120$ elements, and its conjugacy classes can be summarized as $1^5,1^32^1,1^12^2,1^23^1,2^13^1,1^14^1,5^1$, we can have the following table\r\n    \\begin{center}\r\n        \\begin{tabular}{c|ccccccc}\r\n            Cycle type&$1^5$&$1^32^1$&$1^12^2$&$1^23^1$&$2^13^1$&$1^14^1$&$5^1$\\\\\r\n            Size&$1$&$10$&$15$&$20$&$20$&$30$&$24$\\\\\r\n            Sign&+&-&+&+&-&-&+\r\n        \\end{tabular}\r\n    \\end{center}\r\n    By looking at whether the conjugacy class split for a typical even permutation of each cycle type, we conclude the following sizes of conjugacy classes in $A_5$.\r\n    \\begin{center}\r\n        \\begin{tabular}{c|ccccc}\r\n            Cycle type&$1^5$&$2^2$&$3^1$&$5^1$&$5^1$\\\\\r\n            Size&$1$&$15$&$20$&$12$&$12$\r\n        \\end{tabular}\r\n    \\end{center}\r\n    Thus there is no way to sum them up (where we must of course add the first cycle type that is the identity) to produce a factor of $|A_5|=60$.\r\n    Therefore $A_5$ is simple.\r\n\\end{proof}", "meta": {"hexsha": "795e2287d3be1ba4110461d4d3735059ef133966", "size": 14168, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "9/sn.tex", "max_stars_repo_name": "david-bai-notes/IA-Groups", "max_stars_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9/sn.tex", "max_issues_repo_name": "david-bai-notes/IA-Groups", "max_issues_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9/sn.tex", "max_forks_repo_name": "david-bai-notes/IA-Groups", "max_forks_repo_head_hexsha": "98be673eb3a1fb62f01ba45168e1997eb1171541", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.8198198198, "max_line_length": 326, "alphanum_fraction": 0.6455392434, "num_tokens": 5117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8705972600147106, "lm_q1q2_score": 0.5971857272173589}}
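The class-equation bookkeeping above is easy to check by brute force. The following Python sketch (ours, not part of the original notes; all names are made up) enumerates $S_n$ for $n=4,5$, groups the permutations by cycle type, and verifies the class sizes against the centraliser formula $|C_{S_n}(\tau)|=1^{a_1}(a_1)!\cdots n^{a_n}(a_n)!$ from the lemma.

\begin{verbatim}
# Illustrative sanity check: conjugacy-class sizes in S_n match
# n! / (1^{a_1} a_1! ... n^{a_n} a_n!) for each cycle type.
from itertools import permutations
from math import factorial

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:   # follow the orbit of i under p
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

for n in (4, 5):
    counts = {}
    for p in permutations(range(n)):
        t = cycle_type(p)
        counts[t] = counts.get(t, 0) + 1
    for t, size in sorted(counts.items()):
        centraliser = 1
        for i in set(t):
            a_i = t.count(i)
            centraliser *= i ** a_i * factorial(a_i)
        assert size == factorial(n) // centraliser
        print(n, t, size)
\end{verbatim}

For $n=5$ this reproduces exactly the sizes $1,10,15,20,20,30,24$ from the $S_5$ table in the proof that $A_5$ is simple.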
{"text": "\\documentclass[letterpaper, twoside, 12pt]{book}\n\\usepackage{packet}\n\n\n\\begin{document}\n\n\\setcounter{chapter}{1}\n\n\\chapter{Part 2.4: Sections 14.7-14.8}\n\n\\setcounter{chapter}{14}\n\\setcounter{section}{6}\n\n\\section{Maximum and Minimum Values} %14.7\n\n\\begin{definition}\n  Let $f$ be a function of many variables defined near the point $P_0$.\n  Then $f$ has a \\textbf{local maximum} $f(P_0)$ at $P_0$ if $f(P_0)$ is the\n  largest value of $f$ near $P_0$, and\n  $f$ has a \\textbf{local minimum} $f(P_0)$ at $P_0$ if $f(P_0)$ is the smallest\n  value of $f$ near $P_0$.\n  (Local maxima and minimal are also known as local extreme values or\n  local extrema.)\n\\end{definition}\n\n\\begin{definition}\n  If $P_0$ is a point in the domain of $f$ and\n    \\[\n      \\nabla f(P_0) = \\vect{0} \\text{ or } \\nabla f(P_0) \\text{ DNE}\n    \\]\n  then $P_0$ is called a \\textbf{critical point}.\n\\end{definition}\n\n\\begin{theorem}\n  Critical points of a two-variable function occur when the tangent plane\n  is horizontal (because $\\<0,0,-1\\>$ is a normal vector)\n  or the tangent plane does not exist.\n\\end{theorem}\n\n\\begin{theorem}\n  The local maximum and minimum values of a function always\n  occur at critical points.\n\\end{theorem}\n\n          \\begin{problem}\n            Prove that $f(x,y)=x^2+16y^2$ has exactly one local extreme value\n            value by showing that $(0,0)$ is the only critical point\n            for $f$, and then showing that $f(0,0)$ is the minimum value\n            of the function.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  By plotting the graph of $f$ in the previous problem, you can see\n  that $(0,0)$ yields the lowest point on the surface.\n\\end{remark}\n\n\\begin{definition}\n  The \\textbf{saddle points} of $f$ are the critical points which don't yield local extreme values.\n\\end{definition}\n\n          \\begin{problem}\n            Prove that $(0,0)$ is a saddle point of the function\n            $f(x,y)=4x^2-9y^2$ by first showing that it is a critical point,\n            and then considering the function $f$ restricted to the curves $y=0$\n            and $x=0$ in the $xy$ plane.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  The term ``saddle point'' comes from the fact that the graph near a\n  saddle point often looks like a saddle (such as in the previous problem).\n\\end{remark}\n\n\\begin{definition}\n  The \\textbf{discriminant} of a differentiable two variable function\n  $f$ is the function\n  \\[\n    f_D\n      =\n    \\begin{array}{|cc|}f_{xx}&f_{xy}\\\\f_{yx}&f_{yy}\\end{array}\n      =\n    f_{xx}f_{yy} - f_{xy}^2\n  \\]\n\\end{definition}\n\n          \\begin{problem}\n            Compute the discriminant of the function\n            $f(x,y)=3x^2y-2y^3+4x$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{theorem}\n  The \\textbf{Second Derivative Test} for two-variable functions gives\n  a way to (sometimes) determine if a critical point either yields a\n  local maximum, a local minimum, or a saddle point.\n  Let $(a,b)$ be a critical point of of $f$ where $\\nabla f$ is defined.\n    \\begin{itemize}\n      \\item If $f_D(a,b)>0$ and $f_{xx}(a,b)<0$,\n            then $f(a,b)$ is a local maximum.\n      \\item If $f_D(a,b)>0$ and $f_{xx}(a,b)>0$,\n            then $f(a,b)$ is a local minimum.\n      \\item If $f_D(a,b)<0$,\n       
     then $f$ has a saddle point at $(a,b)$.\n      \\item If $f_D(a,b)=0$,\n            then the test is inconclusive.\n    \\end{itemize}\n\\end{theorem}\n\n          \\begin{problem}\n            Prove that $f_{xx}$ could be replaced with $f_{yy}$ in the\n            Second Derivative Test by showing that if $f_D(a,b)>0$,\n            then $f_{xx}(a,b)$ and $f_{yy}(a,b)$ are either both positive\n            or both negative.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            In an earlier problem we found that $f(x,y)=x^2+16y^2$ has exactly\n            one critical point $(0,0)$. Use the Second Derivative Test\n            to show that $f(0,0)$ is the minimum value of the function.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Identify all the critical points for\n            $f(x,y)=x^3-6xy+\\frac{3}{2}y^2-1$, then use the Second Derivative\n            Test to label each critical point as yielding a local minimum,\n            a local maximum, or a saddle point.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{definition}\n  Let $f$ be a function of many variables.\n  Then $f$ has an \\textbf{absolute maximum} $f(P_0)$ at $P_0$ if $f(P_0)$ is\n  the largest value in the range of $f$, and\n  $f$ has an \\textbf{absolute minimum} $f(P_0)$ at $P_0$ if $f(P_0)$ is\n  the smallest value in the range of $f$.\n  (Absolute maxima and minimal are also known as the absolute extreme values or\n  absolute extrema.)\n\\end{definition}\n\n\\begin{theorem}\n  A continuous function $f$ restricted to a closed and bounded domain $D$\n  always has an absolute minimum and absolute maximum value.\n\\end{theorem}\n\n\\begin{theorem}\n  The only possible points which can yield the absolute value of a function\n  $f$ of two variables $x,y$ on a restricted domain $D\\subseteq \\mathbb R^2$\n  are:\n  \\begin{itemize}\n    \\item Critical points of $f$ inside $D$\n    \\item Critical points of $f$ restricted to the boundary of $D$\n    \\item Corners on the boundary of $D$\n  \\end{itemize}\n  The absolute maximum and absolute minimum values may be computed by\n  plugging in all of these candidates into $f$.\n\\end{theorem}\n\n\\begin{remark}\n  The previous theorem works because it checks all the local extreme values\n  against each other to find the absolute largest and smallest value.\n\\end{remark}\n\n          \\begin{problem}\n            Find the absolute maximum and minimum value of\n            $f(x,y)=x^2+y^2-2x-2y$ restricted to the region bounded by\n            the triangle with vertices\n            $(0,0)$, $(2,4)$, and $(2,0)$. (Hint: this\n            triangle is given by the curves $y=0$, $y=2x$, and\n            $x=2$.)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Find the absolute maximum and minimum value of\n            $f(x,y)=2xy$ restricted to the region bounded by\n            the circle $x^2+y^2=4$. 
(Hint: find a vector equation $\\vect{r}(t)$\n            for this circle and find the critical points of $f(\\vect{r}(t))$.)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\n\\section{Lagrange Multipliers} %14.8\n\n          \\begin{problem}\n            A rancher wants to enclose a rectangular area using the\n            straight edge of a cliff on one side, and barbed wire on\n            the other three sides. If the rancher wants to maximize\n            the area of this rectangle, what are the dimensions of the\n            fence and the maximized area? In other words, find $x,y$ which\n            maximize $A(x,y)=xy$ given the constraint $2x+y=100$, and\n            the value of $A$ for those values.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Plot the constraint $2x+y=100$ from the previous problem in\n            the $xy$-plane, along with the level curves of $A(x,y)=xy$\n            for $k=750,1000,1250,1500$. (Make sure the image includes\n            $x\\in[-100,100]$ and $y\\in[-100,100]$.)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  Notice that the point $(x,y)$ which maximizes the value of $A(x,y)$\n  is where the constraint $2x+y=100$ and the level curve $xy=1250$\n  share the same tangent line (and therefore normal vectors).\n\\end{remark}\n\n\\begin{theorem}\n  A function\n  $f(P)$ of many variables, constrained by the requirement\n  $g(P)=k$ for some function $g$ and constant $k$, is maximized or\n  minimized at a point $P_0$ where the normal vector to the\n  level curve/surface $f(P)=f(P_0)$ is parallel to the normal vector\n  to the curve/surface $g(P)=k$. Put another way:\n  \\[\n    \\nabla f = \\lambda(\\nabla g)\n  \\]\n\\end{theorem}\n\n\\begin{theorem}\n  The \\textbf{Method of Lagrange Multiplers} for two-variable functions\n  states that to maximize/minimize $f(x,y)$ on the constraint $g(x,y)=k$,\n  you should solve the system of equations\n    \\[\n      f_x(x,y)=\\lambda g_x(x,y)\n    \\]\n    \\[\n      f_y(x,y)=\\lambda g_y(x,y)\n    \\]\n    \\[\n      g(x,y)=k\n    \\]\n  where $\\lambda$ is an unknown real number,\n  and testing all solutions of $x,y$ to find the maximum\n  and minimum values of $f$.\n\\end{theorem}\n\n          \\begin{problem}\n            Use the Method of Lagrange Multipliers to solve the first\n            problem of this section. (Tip: to start, use the first\n            two equations and eliminate the variable $\\lambda$ since\n            it's not needed for the solution.)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Find the minimum surface area of a right circular cylinder\n            with volume equal to $432\\pi$ cubic units.\n            (Hint: $V=\\pi r^2h$ and $SA=2\\pi r(r+h)$.)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            OPTIONAL.\n            A river of constant width $10\\sqrt{3}\\approx 17.32$ meters\n            flows $40$ meters per second from north to south. 
A swimmer\n            on the west side of the river can swim at a constant\n            $20$ meters per second through still waters, but since the flow\n            of the river is faster than her top speed, this swimmer will\n            unavoidably be pushed downstream if she tries to swim across.\n\n            Use the Method of Lagrange Mutlipliers to prove that if\n            the swimmer sets an angle of $\\frac{\\pi}{6}=30^\\circ$ north of\n            east, then she will minimize the distance she is pushed downstream\n            as she swims from the west riverbank to the east riverbank.\n            (Hint: define $x(\\theta,t)$ to be the distance she travels east\n            after $t$ seconds if she sets the angle $\\theta$, and define\n            $y(\\theta,t)$ to be the distance she travels south\n            after $t$ seconds if she sets the angle $\\theta$. Then your\n            constrant is that $x(\\theta,t)$ should be the width of the river,\n            although that's actually not needed to solve the puzzle...)\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{theorem}\n  The \\textbf{Method of Lagrange Multiplers} for three-variable functions\n  states that to maximize/minimize $f(x,y,z)$ on the constraint $g(x,y,z)=k$,\n  you should solve the system of equations\n    \\[\n      f_x(x,y,z)=\\lambda g_x(x,y,z)\n    \\]\n    \\[\n      f_y(x,y,z)=\\lambda g_y(x,y,z)\n    \\]\n    \\[\n      f_z(x,y,z)=\\lambda g_z(x,y,z)\n    \\]\n    \\[\n      g(x,y,z)=k\n    \\]\n  where $\\lambda$ is an unknown real number,\n  and testing all solutions of $x,y,z$ to find the maximum\n  and minimum values of $f$.\n\\end{theorem}\n\n          \\begin{problem}\n            Find the maximum volume of a rectangular box without a lid\n            which uses $48$ square units of material.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Find the highest and lowest points which lay on the curve of intersection for the cylinder $x^2+y^2=8$ and the plane\n            $2x+2y+z=16$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\end{document}", "meta": {"hexsha": "13b6425f9572fa39423a70303f1c89aa85c9177c", "size": 11648, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packet2_4.tex", "max_stars_repo_name": "StevenClontz/teaching-2015-spring", "max_stars_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packet2_4.tex", "max_issues_repo_name": "StevenClontz/teaching-2015-spring", "max_issues_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packet2_4.tex", "max_forks_repo_name": "StevenClontz/teaching-2015-spring", "max_forks_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.6274509804, "max_line_length": 128, "alphanum_fraction": 0.6118646978, "num_tokens": 3238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8705972566572503, "lm_q1q2_score": 0.5971857249143108}}
{"text": "\\section{Results}\\label{sec:analysis}\n\n\\begin{table}[t]\n\\centering\n\\begin{small}\n\\begin{tabular}{| c | p{2.44in} |}\\hline\n\\textbf{Parameter} & \\textbf{Definition}\t\\\\\\hline\nTraffic Demand per Subscriber (\\S~\\ref{subsec:behavior})\t& \\(\\dfrac{\\text{total bytes transferred in \nmeasurement int.}}{\\text{number of contributing subscribers}}\\)\t\\\\\nPeak Demand (\\S~\\ref{subsec:behavior})\t\t\t& Daily 95th percentile of bytes transferred in any \n15-minute interval \\\\ \nPrime-Time Ratio (\\S~\\ref{subsec:primetime}) \t& \\( \\dfrac{ \\text{avg usage in peak (prime-time) \nhour}}{ \\text{avg usage in off-peak hour}}\\) \t\t\\\\\nPeak-to-Average Ratio (\\S~\\ref{subsec:peakratio}) \t& \\(\\dfrac{\\text{95\\%-ile of daily traffic \ndemand}}{\\text{mean of daily traffic demand}}\\)\t\\\\\\hline\n\\end{tabular}\n\\end{small}\n\\caption{\\green{Evaluation Metrics.}}\n\\label{tab:eval-criteria}\n\\end{table}\n\n\\paragraph{Metrics}\nTable~\\ref{tab:eval-criteria} shows the metrics that we use to evaluate\nhow user demand responds to service-tier upgrades. The \\emph{traffic\n  demand} for a subscriber is defined as the total bytes transferred, in\nupstream or downstream, during a single sample measurement (15 minutes).\nWe use traffic demand to calculate the total demand per hour, and the\naverage and 95th percentile peak demand over a day. To compare the total\ntraffic of the control and treatment groups, we scale to a thousand\nsubscribers wherever applicable (Table~\\ref{tab:data-stats}).  We\ndefine \\emph{prime time} as 8:00 p.m. to 12:00 a.m., when Internet usage\ntends to be highest.  Indeed, we\nobserved that the total daily traffic consistently falls within 90th percentile\nduring this four-hour period. We define the \\emph{prime-time ratio} as\nthe ratio of traffic during an average prime-time hour, to the average\nhourly traffic outside the prime-time hour.  
This ratio conveys the\ndisparity between demand during the prime-time and the rest of the day.\nThe rest of this section explores the effects of a service-tier upgrade\non user traffic demand in the context of these metrics.\n\n\\input{behavior}\n\n\\input{primetime}\n\n\\input{peakratio}\n\n\n% PUT RESULTS TABLE HERE\n\n%\\input{prevalence}", "meta": {"hexsha": "452d13e621b3bfe0feda449202e3e36d22b6a3e4", "size": 2143, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Software-Projects/experimental/experimental--tv-data-analysis/comcast-analysis/writing/analysis.tex", "max_stars_repo_name": "briancabbott/xtrax", "max_stars_repo_head_hexsha": "3bfcbe1c2f5c355b886d8171481a604cca7f4f16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-07T17:32:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T09:14:01.000Z", "max_issues_repo_path": "Software-Projects/experimental/experimental--tv-data-analysis/comcast-analysis/writing/analysis.tex", "max_issues_repo_name": "briancabbott/xtrax", "max_issues_repo_head_hexsha": "3bfcbe1c2f5c355b886d8171481a604cca7f4f16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Software-Projects/experimental/experimental--tv-data-analysis/comcast-analysis/writing/analysis.tex", "max_forks_repo_name": "briancabbott/xtrax", "max_forks_repo_head_hexsha": "3bfcbe1c2f5c355b886d8171481a604cca7f4f16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.86, "max_line_length": 101, "alphanum_fraction": 0.7550163322, "num_tokens": 591, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711870587667, "lm_q2_score": 0.7025300636233415, "lm_q1q2_score": 0.5971303121224024}}
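To make the metrics in Table~\ref{tab:eval-criteria} concrete, the following Python sketch (ours; the synthetic gamma-distributed byte counts and all variable names are assumptions, not the paper's data or pipeline) computes the prime-time ratio and the peak-to-average ratio from one day of 15-minute samples.

\begin{verbatim}
# Illustrative computation of the evaluation metrics (sketch only).
import numpy as np

rng = np.random.default_rng(0)
demand = rng.gamma(shape=2.0, scale=5e6, size=96)  # 96 15-min samples

hourly = demand.reshape(24, 4).sum(axis=1)  # bytes per hour, hours 0..23
prime = hourly[20:24].mean()                # prime time: 8 p.m. to 12 a.m.
off_peak = hourly[:20].mean()               # average over the other hours
prime_time_ratio = prime / off_peak

peak_demand = np.percentile(demand, 95)     # daily 95th-percentile demand
peak_to_average = peak_demand / demand.mean()

print(prime_time_ratio, peak_to_average)
\end{verbatim}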
{"text": "\\section{Type theoretic universes}\n\nTo complete our specification of dependent type theory, we introduce type theoretic \\emph{universes}. Universes are types that consist of types. In other words, a universe is a type $\\UU$ that comes equipped with a type family $\\Ty$ over $\\UU$, and for any $X:\\UU$ we think of $X$ as an \\emph{encoding} of the type $\\Ty(X)$. We call this type family the \\emph{universal type family}.\n\nThere are several reasons to equip type theory with universes. One reason is that it enables us to define new type families over inductive types, using their induction principle. For example, since the universe is itself a type, we can use the induction principle of $\\bool$ to obtain a map $P:\\bool\\to\\UU$ from any two terms $X_0,X_1:\\UU$. Then we obtain a type family over $\\bool$ by substituting $P$ into the universal type family:\n\\begin{equation*}\n  x:\\bool\\vdash \\Ty(P(x))~\\mathrm{type}\n\\end{equation*}\nsatisfying $\\Ty(P(0_\\bool))\\jdeq \\Ty(X_0)$ and $\\Ty(P(1_\\bool))\\jdeq \\Ty(X_1)$.\n\nWe use this way of defining type families to define many familiar relations over $\\N$, such as $\\leq$ and $<$. We also introduce a relation called \\emph{observational equality} $\\mathsf{Eq}_\\N$ on $\\N$, which we can think of as equality of $\\N$. This relation is reflexive, symmetric, and transitive, and moreover it is the least reflexive relation. Furthermore, one of the most important aspects of observational equality $\\mathsf{Eq}_\\N$ on $\\N$ is that $\\mathsf{Eq}_\\N(m,n)$ is a type for every $m,n:\\N$, unlike judgmental equality. Therefore we can use type theory to reason about observational equality on $\\N$. Indeed, in the exercises we show that some very elementary mathematics can already be done at this early stage in our development of type theory.\n\nA second reason to introduce universes is that it allows us to define many types of types equipped with structure. One of the most important examples is the type of groups, which is the type of types equipped with the group operations satisfying the group laws, and for which the underlying type is a set. We won't discuss the condition for a type to be a set until \\cref{chap:hierarchy}, so the definition of groups in type theory will be given much later. Therefore we illustrate this use of the universe by giving simpler examples: pointed types, graphs, and reflexive graphs.\n\nOne of the aspects that make universes useful is that they are postulated to be closed under all the type constructors. For example, if we are given $X:\\UU$ and $P:\\Ty(X)\\to \\UU$, then the universe is equipped with a term\n\\begin{equation*}\n  \\check{\\Sigma}(X,P):\\UU\n\\end{equation*}\nsatisfying the judgmental equality $\\Ty(\\check{\\Sigma}(X,P)\\jdeq\\sm{x:\\Ty(X)}\\Ty(P(x))$. We will similarly assume that any universe is closed under $\\Pi$-types and the other ways of forming types. However, there is an important restriction: it would be inconsistent to assume that the universe is contained in itself. One way of thinking about this is that universes are types of \\emph{small} types, and it cannot be the case that the universe is small with respect to itself. We address this problem by assuming that there are many universes: enough universes so that any type family can be obtained by substituting into the universal type family of some universe.\n\n\\subsection{Specification of type theoretic universes}\n\nIn the following definition we already state that universes are closed under identity types. 
\\subsection{Specification of type theoretic universes}\n\nIn the following definition we already state that universes are closed under identity types. Identity types will be introduced in \\cref{chap:identity}.\n\n\\begin{defn}\n  A \\define{universe}\\index{universe|textbf} in type theory is a closed type $\\UU$\\index{U@{$\\UU$}|textbf} equipped with a type family $\\Ty$\\index{Ty@{$\\Ty$}} over $\\UU$ called the \\define{universal family}\\index{universal family}\\index{family!universal family|textbf}, together with the following structure:\n  \\begin{enumerate}\n  \\item $\\UU$ is closed under $\\Pi$, in the sense that it comes equipped with a function\n    \\begin{equation*}\n      \\check{\\Pi} :\\prd{X:\\UU}(\\Ty(X)\\to\\UU)\\to\\UU\n    \\end{equation*}\n    for which the judgmental equality\n    \\begin{equation*}\n      \\Ty\\big(\\check{\\Pi}(X,P)\\big)\\jdeq \\prd{x:\\Ty(X)}\\Ty(P(x))\n    \\end{equation*}\n    holds, for every $X:\\UU$ and $P:\\Ty(X)\\to\\UU$.\n  \\item $\\UU$ is closed under $\\Sigma$ in the sense that it comes equipped with a function\n    \\begin{equation*}\n      \\check{\\Sigma} :\\prd{X:\\UU}(\\Ty(X)\\to\\UU)\\to\\UU\n    \\end{equation*}\n    for which the judgmental equality\n    \\begin{equation*}\n      \\Ty\\big(\\check{\\Sigma}(X,P)\\big) \\jdeq \\sm{x:\\Ty(X)}\\Ty(P(x))\n    \\end{equation*}\n    holds, for every $X:\\UU$ and $P:\\Ty(X)\\to\\UU$.\n  \\item $\\UU$ is closed under identity types, in the sense that it comes equipped with a function\n    \\begin{equation*}\n      \\check{\\mathrm{I}} : \\prd{X:\\UU}\\Ty(X)\\to(\\Ty(X)\\to\\UU)\n    \\end{equation*}\n    for which the judgmental equality\n    \\begin{equation*}\n      \\Ty\\big(\\check{\\mathrm{I}}(X,x,y)\\big)\\jdeq (\\id{x}{y})\n    \\end{equation*}\n    holds, for every $X:\\UU$ and $x,y:\\Ty(X)$.\n  \\item $\\UU$ is closed under coproducts, in the sense that it comes equipped with a function\n    \\begin{equation*}\n      \\check{+}:\\UU\\to (\\UU\\to\\UU)\n    \\end{equation*}\n    that satisfies $\\Ty\\big(X\\check{+}Y\\big)\\jdeq \\Ty(X)+\\Ty(Y)$.\n  \\item $\\UU$ contains terms $\\check{\\emptyt},\\check{\\unit},\\check{\\N}:\\UU$\n    that satisfy the judgmental equalities\n    \\begin{align*}\n      \\Ty\\big(\\check{\\emptyt}\\big) & \\jdeq \\emptyt \\\\\n      \\Ty\\big(\\check{\\unit}\\big) & \\jdeq \\unit \\\\\n      \\Ty(\\check{\\N}) & \\jdeq \\N.\n    \\end{align*}\n  \\end{enumerate}\n  Given a universe $\\UU$, we say that a type $A$ in context $\\Gamma$ is \\define{small}\\index{small type|textbf} with respect to $\\UU$ if it occurs in the universe, i.e., if it comes equipped with a term $\\check{A}:\\UU$ in context $\\Gamma$, for which the judgment\n  \\begin{equation*}\n    \\Gamma\\vdash\\Ty\\big(\\check{A}\\big)\\jdeq A~\\mathrm{type}\n  \\end{equation*}\n  holds. If $A$ is small with respect to $\\UU$, we usually write simply $A$ for $\\check{A}$ and also $A$ for $\\Ty(\\check{A})$. In other words, by $A:\\UU$ we mean that $A$ is a small type. \n\\end{defn}\n\n\\begin{rmk}\n  Since ordinary function types are defined as a special case of dependent function types, we don't have to assume that universes are closed under ordinary function types. Similarly, it follows from the assumption that universes are closed under dependent pair types that universes are closed under cartesian product types.\n\\end{rmk}\n\n\\subsection{Assuming enough universes}\n  Most of the time we will get by with assuming one universe $\\UU$, and indeed we recommend on a first reading of this text to simply assume that there is one universe $\\UU$. However, sometimes we might need a second universe $\\mathcal{V}$ that contains $\\UU$ as well as all the types in $\\UU$. In such situations we cannot get by with a single universe, because the assumption that $\\UU$ is a term of itself would lead to inconsistencies like Russell's paradox.\n\n  Russell's paradox is the famous argument that there cannot be a set of all sets. If there were such a set $S$, then we could consider Russell's subset\n  \\begin{equation*}\n    R:=\\{x\\in S\\mid x\\notin x\\}.\n  \\end{equation*}\n  Russell then observed that $R\\in R$ if and only if $R\\notin R$, so we reach a contradiction. A variant of this argument reaches a similar contradiction when we assume that $\\UU$ is a universe that contains a term $\\check{\\UU}:\\UU$ such that $\\mathcal{T}\\big(\\check{\\UU}\\big)\\jdeq \\UU$. In order to avoid such paradoxes, Russell and Whitehead formulated the \\emph{ramified theory of types} in their book \\emph{Principia Mathematica}. The ramified theory of types is a precursor of Martin-L\\\"of's type theory that we are studying in this course.  \n\n  Even though the universe is not a term of itself, it is still convenient if every type, including any universe, is small with respect to \\emph{some} universe. Therefore we will assume that there are sufficiently many universes: we will assume that for every finite list of types\n\\begin{align*}\n  \\Gamma_1 & \\vdash A_1~\\mathrm{type} \\\\\n  & ~\\vdots \\\\\n  \\Gamma_n & \\vdash A_n~\\mathrm{type},\n\\end{align*}\nthere is a universe $\\UU$ that contains each $A_i$ in the sense that $\\UU$ comes equipped with a term\n\\begin{align*}\n  \\Gamma_i\\vdash \\check{A}_i:\\UU\n\\end{align*}\nfor which the judgment\n\\begin{equation*}\n  \\Gamma_i\\vdash \\Ty\\big(\\check{A}_i\\big)\\jdeq A_i~\\mathrm{type}\n\\end{equation*}\nholds. With this assumption it will rarely be necessary to work with more than one universe at the same time.\n\n\\begin{rmk}\n  Using the assumption that for any finite list of types in context there is a universe that contains those types, we obtain many specific universes:\n  \\begin{enumerate}\n  \\item There is a \\emph{base universe} $\\UU_0$ that we obtain using the empty list of types in context. This is a universe, but it isn't specified to contain any further types.\n  \\item Given a finite list\n    \\begin{align*}\n      \\Gamma_1 & \\vdash A_1~\\mathrm{type} \\\\\n      & ~\\vdots \\\\\n      \\Gamma_n & \\vdash A_n~\\mathrm{type},\n    \\end{align*}\n    of types in context, and a universe $\\UU$ that contains them, there is a universe $\\UU^+$ that contains all the types in $\\UU$ as well as $\\UU$. More precisely, it is specified by the finite list\n    \\begin{align*}\n      & \\vdash \\UU~\\mathrm{type} \\\\\n      X:\\UU & \\vdash \\mathcal{T}(X)~\\mathrm{type}.\n    \\end{align*}\n    Note that since the universe $\\UU^+$ contains all the types in $\\UU$, it also contains the types $A_1,\\ldots,A_n$. 
To see this, we derive that there is a code for $A_i$ in $\\UU^+$.\n    \\begin{prooftree}\n      \\AxiomC{$\\Gamma_i\\vdash\\check{A}_i:\\UU$}\n      \\AxiomC{$X:\\UU\\vdash\\check{\\mathcal{T}}(X):\\UU^+$}\n      \\UnaryInfC{$\\Gamma_i,X:\\UU\\vdash\\check{\\mathcal{T}}(X):\\UU^+$}\n      \\BinaryInfC{$\\Gamma_i\\vdash\\check{\\mathcal{T}}(\\check{A}_i):\\UU^+$}\n    \\end{prooftree}\n    We leave it as an exercise to derive the judgmental equality\n    \\begin{equation*}\n      \\mathcal{T}^+(\\check{\\mathcal{T}}(\\check{A}_i))\\jdeq A_i.\n    \\end{equation*}\n  \\item Given two finite lists\n    \\begin{align*}\n      \\Gamma_1 & \\vdash A_1~\\mathrm{type} & \\Delta_1 & \\vdash B_1~\\mathrm{type} \\\\\n      & ~\\vdots & & ~\\vdots \\\\\n      \\Gamma_n & \\vdash A_n~\\mathrm{type} & \\Delta_m & \\vdash B_m~\\mathrm{type}\n    \\end{align*}\n    of types in context, and two universes $\\mathcal{U}$ and $\\mathcal{V}$ that contain $A_1,\\ldots,A_n$ and $B_1,\\ldots,B_m$ respectively, there is a universe $\\UU\\sqcup\\mathcal{V}$ that contains the types of both $\\UU$ and $\\mathcal{V}$. The universe $\\UU\\sqcup\\mathcal{V}$ is specified by the finite list\n    \\begin{align*}\n      X:\\UU & \\vdash \\mathcal{T}_{\\mathcal{U}}(X)~\\mathrm{type} \\\\\n      Y:\\mathcal{V} & \\vdash \\mathcal{T}_{\\mathcal{V}}(Y) ~\\mathrm{type}.\n    \\end{align*}\n    With an argument similar to the previous construction of a universe, we see that the universe $\\UU\\sqcup\\mathcal{V}$ contains the types $A_1,\\ldots,A_n$ as well as the types $B_1,\\ldots,B_m$.\n\n    Note that we could also directly obtain a universe $\\mathcal{W}$ that contains the types $A_1,\\ldots,A_n$ and $B_1,\\ldots,B_m$. However, this universe might not contain all the types in $\\UU$ or all the types in $\\mathcal{V}$.\n  \\end{enumerate}\n  Since we don't postulate any relations between the universes, there are indeed very few of them. For example, the base universe $\\UU_0$ might contain many more types than it is postulated to contain. Nevertheless, there are some relations between the universes. For instance, there is a function $\\UU\\to\\UU^+$, since we can simply derive\n  \\begin{prooftree}\n    \\AxiomC{$X:\\UU\\vdash \\check{\\mathcal{T}}(X):\\UU^+$}\n    \\UnaryInfC{$\\vdash \\lam{X}\\check{\\mathcal{T}}(X) : \\UU\\to\\UU^+$}\n  \\end{prooftree}\n  Similarly, there are functions $\\UU\\to \\UU\\sqcup\\mathcal{V}$ and $\\mathcal{V}\\to \\UU\\sqcup\\mathcal{V}$ for any two universes $\\UU$ and $\\mathcal{V}$.\n\\end{rmk}\n\n\\subsection{Pointed types}\n\n\\begin{defn}\n  A \\define{pointed type} is a pair $(A,a)$ consisting of a type $A$ and a term $a:A$. The type of all pointed types in a universe $\\UU$ is defined to be\n  \\begin{equation*}\n    \\UU_\\ast \\defeq \\sm{X:\\UU}X.\n  \\end{equation*}\n\\end{defn}\n\n\\begin{defn}\n  Consider two pointed types $(A,a)$ and $(B,b)$. A \\define{pointed map} from $(A,a)$ to $(B,b)$ is a pair $(f,p)$ consisting of a function $f:A\\to B$ and an identification $p:f(a)=b$. We write\n  \\begin{equation*}\n    A\\to_\\ast B \\defeq \\sm{f:A\\to B}f(a)=b\n  \\end{equation*}\n  for the type of all pointed maps from $(A,a)$ to $(B,b)$, leaving the base point implicit.\n\\end{defn}\n\nSince we have a type $\\UU_\\ast$ of \\emph{all} pointed types in a universe $\\UU$, we can start defining operations on $\\UU_\\ast$. 
An important example of such an operation is to take the loop space of a pointed type.\n\n\\begin{defn}\n  We define the \\define{loop space} operation $\\Omega : \\UU_\\ast \\to \\UU_\\ast$ by\n  \\begin{equation*}\n    \\Omega(A,a)\\defeq \\big((a=a),\\refl{a}\\big).\n  \\end{equation*}\n\\end{defn}\n\nWe can even go further and define the \\emph{iterated loop space} of a pointed type. Note that this definition could not be given in type theory if we didn't have universes.\n\n\\begin{defn}\n  Given a pointed type $(A,a)$ and a natural number $n$, we define the $n$-th loop space $\\Omega^n(A,a)$ by induction on $n:\\N$, taking\n  \\begin{align*}\n    \\Omega^0(A,a) & \\defeq (A,a) \\\\\n    \\Omega^{n+1}(A,a) & \\defeq \\Omega(\\Omega^n(A,a)).\n  \\end{align*}\n\\end{defn}\n\n\\subsection{Relations on the natural numbers}\n\n\\begin{defn}\\label{defn:obs_nat}\nWe define the \\define{observational equality}\\index{observational equality!on N@{on $\\N$}} on $\\N$ as the binary relation $\\mathsf{Eq}_\\N:\\N\\to(\\N\\to\\UU)$\\index{Eq_N@{$\\mathsf{Eq}_\\N$}|textbf} satisfying\n\\begin{align*}\n\\mathsf{Eq}_\\N(\\zeroN,\\zeroN) & \\jdeq \\unit & \\mathsf{Eq}_\\N(\\succN(n),\\zeroN) & \\jdeq \\emptyt \\\\\n\\mathsf{Eq}_\\N(\\zeroN,\\succN(n)) & \\jdeq \\emptyt & \\mathsf{Eq}_\\N(\\succN(n),\\succN(m)) & \\jdeq \\mathsf{Eq}_\\N(n,m).\n\\end{align*}\n\\end{defn}\n\n\\begin{constr}\nWe define $\\mathsf{Eq}_\\N$ by double induction on $\\N$. By the first application of induction it suffices to provide\n\\begin{align*}\nE_0 & : \\N\\to\\UU \\\\\nE_S & : \\N\\to (\\N\\to\\UU)\\to(\\N\\to\\UU)\n\\end{align*}\nWe define $E_0$ by induction, taking $E_{00}\\defeq \\unit$ and $E_{0S}(n,X)\\defeq \\emptyt$. The resulting family $E_0$ satisfies\n\\begin{align*}\nE_0(\\zeroN) & \\jdeq \\unit \\\\\nE_0(\\succN(n)) & \\jdeq \\emptyt.\n\\end{align*} \nWe define $E_S$ by induction, taking $E_{S0}\\defeq \\emptyt$ and $E_{SS}(n,X,m)\\defeq X(m)$. 
The resulting family $E_S$ satisfies\n\\begin{align*}\nE_S(n,X,\\zeroN) & \\jdeq \\emptyt \\\\\nE_S(n,X,\\succN(m)) & \\jdeq X(m) \n\\end{align*}\nTherefore we have by the computation rule for the first induction that the judgmental equality\n\\begin{align*}\n\\mathsf{Eq}_\\N(\\zeroN,m) & \\jdeq E_0(m) \\\\\n\\mathsf{Eq}_\\N(\\succN(n),m) & \\jdeq E_S(n,\\mathsf{Eq}_\\N(n),m)\n\\end{align*}\nholds, from which the judgmental equalities in the statement of the definition follow.\n\\end{constr}\n\n\\begin{lem}\n  Suppose $R:\\N\\to(\\N\\to\\UU)$ is a reflexive relation on $\\N$, i.e., $R$ comes equipped with\n  \\begin{equation*}\n    \\rho : \\prd{n:\\N}R(n,n).\n  \\end{equation*}\n  Then there is a family of maps\n  \\begin{equation*}\n    \\prd{m,n:\\N}\\EqN(m,n)\\to R(m,n).\n  \\end{equation*}\n\\end{lem}\n\n\\begin{proof}\n  We will prove by induction on $m,n:\\N$ that there is a term of type\n  \\begin{equation*}\n    f_{m,n}:\\prd{e:\\EqN(m,n)}{R:\\N\\to(\\N\\to\\UU)}\\Big(\\prd{x:\\N}R(x,x)\\Big)\\to R(m,n)\n  \\end{equation*}\n  The dependent function $f_{m,n}$ is defined by\n  \\begin{align*}\n    f_{\\zeroN,\\zeroN} & \\defeq \\lam{\\ttt}{r}{\\rho}\\rho(\\zeroN) \\\\\n    f_{\\zeroN,\\succN(n)} & \\defeq \\indempty \\\\\n    f_{\\succN(m),\\zeroN} & \\defeq \\indempty \\\\\n    f_{\\succN(m),\\succN(n)} & \\defeq \\lam{e}{R}{\\rho}f_{m,n}(e,R',\\rho'),\n  \\end{align*}\n  where $R'$ and $\\rho'$ are given by\n  \\begin{align*}\n    R'(m,n) & \\defeq R(\\succN(m),\\succN(n)) \\\\\n    \\rho'(n) & \\defeq \\rho(\\succN(n)).\\qedhere\n  \\end{align*}\n\\end{proof}\n\nWe can also define observational equality for many other kinds of types, such as $\\bool$ or $\\Z$. In each of these cases, what sets the observational equality apart from other relations is that it is the \\emph{least} reflexive relation. \n\n\\subsection{The finite types}\n\n\\begin{defn}\\label{defn:fin}\nWe define the type family $\\mathsf{Fin}:\\N\\to\\UU$ of finite types\\index{Fin@{$\\mathsf{Fin}$}|textbf}\\index{finite types|textbf} by induction on $\\N$\\index{family!of finite types}, taking\n\\begin{align*}\n\\mathsf{Fin}(\\zeroN) & \\defeq \\emptyt \\\\\n\\mathsf{Fin}(\\succN(n)) & \\defeq \\mathsf{Fin}(n)+\\unit\n\\end{align*}\n\\end{defn}\n\n\\begin{defn}\n  For each $n:\\N$, we define a map\n  \\begin{equation*}\n    \\prd{m:\\N} \\big((m<n)\\to \\mathsf{Fin}(n)\\big)\n  \\end{equation*}\n\\end{defn}\n\n\\begin{proof}\n  We construct this map by induction on $n,m:\\N$. In the base case for $n$, the map is constructed by induction on the empty type, since the relation $m<\\zeroN$ never holds. For the inductive step\n\\end{proof}\n\n\\begin{exercises}\n\\item Let $A$ be a type.\n  \\begin{subexenum}\n  \\item Show that $(A+\\neg A)\\to(\\neg\\neg A\\to A)$.\n  \\item Show that $\\neg\\neg\\neg A \\to \\neg A$.\n  \\end{subexenum}\n\\item Construct a function\n  \\begin{equation*}\n    \\check{\\Pi}:\\prd{A:\\UU} (\\Ty(A)\\to\\UU)\\to \\UU\n  \\end{equation*}\n  such that\n  \\begin{equation*}\n    \\Ty(\\check{\\Pi}(A,B))\\jdeq \\prd{x:\\Ty(A)}\\Ty(B(x))\n  \\end{equation*}\n  holds for every $A:\\UU$ and $B:\\Ty(A)\\to\\UU$. 
\n  \n  \\emph{A similar exercise can be posed for $\\Sigma$ and $+$ (and for $\\to$ and $\\times$ as special cases of $\\Pi$ and $\\Sigma$).}\n\\item \\label{ex:obs_nat_eqrel}Show that observational equality on $\\N$\\index{observational equality!on N@{on $\\N$}!is an equivalence relation} is an equivalence relation\\index{equivalence relation!observational equality on N@{observational equality on $\\N$}}, i.e., construct terms of the following types:\n  \\begin{align*}\n    & \\prd{n:\\N} \\EqN(n,n) \\\\\n    & \\prd{n,m:\\N} \\EqN(n,m)\\to \\EqN(m,n) \\\\\n    & \\prd{n,m,l:\\N} \\EqN(n,m)\\to (\\EqN(m,l)\\to \\EqN(n,l)).\n  \\end{align*}\n\\item \\label{ex:obs_nat_least}\\index{observational equality!on N@{on $\\N$}!is least reflexive relation}Let $R$ be a reflexive binary relation\\index{reflexive relation}\\index{relation!reflexive} on $\\N$, i.e., $R$ is of type $\\N\\to (\\N\\to\\UU)$ and comes equipped with a term $\\rho:\\prd{n:\\N}R(n,n)$. Show that\n  \\begin{equation*}\n    \\prd{n,m:\\N} \\EqN(n,m)\\to R(n,m).\n  \\end{equation*}\n\\item \\index{observational equality!on N@{on $\\N$}!is preserved by functions}Show that every function $f:\\N\\to \\N$ preserves observational equality in the sense that\n  \\begin{equation*}\n    \\prd{n,m:\\N} \\EqN(n,m)\\to \\EqN(f(n),f(m)).\n  \\end{equation*}\n  \\emph{Hint: to get the inductive step going the induction hypothesis has to be strong enough. Construct by double induction a term of type}\n  \\begin{equation*}\n    \\prd{n,m:\\N}{f:\\N\\to\\N} \\EqN(n,m)\\to \\EqN(f(n),f(m)),\n  \\end{equation*}\n  \\emph{and pull out the universal quantification over $f:\\N\\to\\N$ by \\cref{ex:swap}.}\n\\item \n  \\begin{subexenum}\n  \\item Define the \\define{order relations}\\index{relation!order}\\index{order relation} $\\leq$ and $<$ on $\\N$.\n  \\item Show that $\\leq$ is reflexive and that $<$ is \\define{anti-reflexive}\\index{anti-reflexive}\\index{relation!anti-reflexive}, i.e., that $\\neg(n<n)$. \n  \\item Show that both $\\leq$ and $<$ are transitive, and that $n<S(n)$.\n  \\item Show that $k\\leq \\min(m,n)$ holds if and only if both $k\\leq m$ and $k\\leq n$ hold, and show that $\\max(m,n)\\leq k$ holds if and only if both $m\\leq k$ and $n\\leq k$ hold.\n  \\end{subexenum}\n\\item \\label{ex:obs_bool}\\index{observational equality!on 2@{on $\\bool$}}\n  \\begin{subexenum}\n  \\item Define observational equality $\\mathsf{Eq}_\\bool$\\index{Eq_bool@{$\\mathsf{Eq}_\\bool$}|textbf} on the booleans.\n  \\item Show that $\\mathsf{Eq}_\\bool$ is reflexive.\\index{observational equality!on 2@{on $\\bool$}!is reflexive}\n  \\item Show that for any reflexive relation $R:\\bool\\to(\\bool\\to \\UU)$ one has\\index{observational equality!on 2@{on $\\bool$}!is least reflexive relation}\n    \\begin{equation*}\n      \\prd{x,y:\\bool} \\mathsf{Eq}_\\bool(x,y)\\to R(x,y).\n    \\end{equation*}\n  \\end{subexenum}\n\\item \\label{ex:int_order}\n  \\begin{subexenum}\n  \\item Define the order relations\\index{relation!order}\\index{order relation} $\\leq$ and $<$ on $\\Z$.\n  \\item For $k:\\Z$, consider the type $\\Z_{\\geq k}\\defeq \\sm{n:\\Z}n\\geq k$. 
Construct\n    \\begin{align*}\n      b_k & : \\Z_{\\geq k} \\\\\n      s_k & : \\Z_{\\geq k}\\to\\Z_{\\geq k},\n    \\end{align*}\n    and show that $\\Z_{\\geq k}$ satisfies the induction principle of the natural numbers\\index{induction principle!of N@{of $\\N$}}:\n    \\begin{equation*}\n      \\ind{\\Z_{\\geq k}} : P(b_k)\\to \\Big(\\prd{n:\\Z_{\\geq k}} P(n)\\to P(s_k(n))\\Big)\\to \\Big(\\prd{n:\\Z_{\\geq k}} P(n)\\Big)\n    \\end{equation*}\n  \\end{subexenum}\n\\item\n  \\begin{subexenum}\n  \\item Show that $\\N$ satisfies \\define{strong induction}, i.e., construct for any type family $P$ over $\\N$ a function of type\n    \\begin{equation*}\n      P(\\zeroN) \\to \\Big(\\prd{k:\\N}\\Big(\\prd{m:\\N} (m\\leq k) \\to P(m)\\Big)\\to P(\\succN(k))\\Big) \\to \\prd{n:\\N}P(n).\n    \\end{equation*}\n  \\item Show that $\\N$ satisfies \\define{ordinal induction}, i.e., construct for any type family $P$ over $\\N$ a function of type\n    \\begin{equation*}\n      \\Big(\\prd{k:\\N} \\Big(\\prd{m:\\N} (m< k) \\to P(m)\\Big)\\to P(k)\\Big) \\to \\prd{n:\\N}P(n).\n    \\end{equation*}\n  \\end{subexenum}\n\\item\n  \\begin{subexenum}\n  \\item For each $i:\\mathsf{Fin}(\\succN(n))$, define a function\n    \\begin{equation*}\n      \\mathsf{skip}_i : \\mathsf{Fin}(n)\\to\\mathsf{Fin}(\\succN(n))\n    \\end{equation*}\n    that includes $\\mathsf{Fin}(n)$ in $\\mathsf{Fin}(\\succN(n))$ by skipping $i$.\n  \\item For each $i:\\mathsf{Fin}(n)$, define a function\n    \\begin{equation*}\n      \\mathsf{double}_i : \\mathsf{Fin}(\\succN(n))\\to\\mathsf{Fin}(n)\n    \\end{equation*}\n    that projects $\\mathsf{Fin}(\\succN(n))$ onto $\\mathsf{Fin}(n)$ by doubling at $i$. \n  \\end{subexenum}\n\\end{exercises}\n", "meta": {"hexsha": "55f687356a2eb04883e613e71ca64d8f5c2b1f2e", "size": 21953, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/universes-relations.tex", "max_stars_repo_name": "tadejpetric/HoTT-Intro", "max_stars_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/universes-relations.tex", "max_issues_repo_name": "tadejpetric/HoTT-Intro", "max_issues_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/universes-relations.tex", "max_forks_repo_name": "tadejpetric/HoTT-Intro", "max_forks_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.9808743169, "max_line_length": 762, "alphanum_fraction": 0.6804081447, "num_tokens": 7229, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711756575749, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5971302882315134}}
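As a computational companion to the definitions of $\mathsf{Eq}_\N$ and $\mathsf{Fin}$ above, here is a small Python sketch (ours, and only an analogy: $\unit$ and $\emptyt$ are modelled by \texttt{True} and \texttt{False}, and the coproduct by tagged values).

\begin{verbatim}
# Executable analogues of Eq_N and Fin (illustrative sketch only).

def eq_nat(m, n):
    # Eq(0,0) = unit, Eq(0,S n) = Eq(S m,0) = empty, Eq(S m,S n) = Eq(m,n)
    if m == 0 and n == 0:
        return True
    if m == 0 or n == 0:
        return False
    return eq_nat(m - 1, n - 1)

def fin(n):
    # Fin(0) = empty, Fin(S n) = Fin(n) + unit
    if n == 0:
        return []
    return [('inl', v) for v in fin(n - 1)] + [('inr', '*')]

# Eq_N agrees with equality, and Fin(n) has exactly n elements.
assert all(eq_nat(a, b) == (a == b) for a in range(8) for b in range(8))
assert [len(fin(n)) for n in range(5)] == [0, 1, 2, 3, 4]
\end{verbatim}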
{"text": "\\documentclass{tufte-handout}\n\n%\\geometry{showframe}% for debugging purposes -- displays the margins\n\n\\usepackage{amsmath}\n\n% Set up the images/graphics package\n\\usepackage{asymptote} % for asymptote graphics\n\\usepackage{graphicx}\n\\setkeys{Gin}{width=\\linewidth,totalheight=\\textheight,keepaspectratio}\n\\graphicspath{{graphics/}}\n\n\\title{Summing Approximations}\n\\author[Warren MacEvoy]{Warren MacEvoy}\n% \\date{24 January 2009}  % if the \\date{} command is left out, the current date will be used\n\n% The following package makes prettier tables.  We're all about the bling!\n\\usepackage{booktabs}\n\n\\hypersetup{colorlinks} % Comment this line if you don't wish to have colored links\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{listings}\n\\usepackage{siunitx} % align decimals in tables\n\\usepackage{nth}     % 1st, 2nd, ...\n\\usepackage{microtype} % Improves character and word spacing\n\\usepackage{asymptote} % for asymptote graphics\n\\usepackage{ccicons}\n\\usepackage{lipsum} % Inserts dummy text\n\\usepackage{booktabs} % Better horizontal rules in tables\n\\usepackage{graphicx} % Needed to insert images into the document\n\n% The units package provides nice, non-stacked fractions and better spacing\n% for units.\n\\usepackage{units}\n\n% The fancyvrb package lets us customize the formatting of verbatim\n% environments.  We use a slightly smaller font.\n\\usepackage{fancyvrb}\n\\fvset{fontsize=\\normalsize}\n\n% Small sections of multiple columns\n\\usepackage{multicol}\n\n% Provides paragraphs of dummy text\n\\usepackage{lipsum}\n\n% These commands are used to pretty-print LaTeX commands\n\\newcommand{\\doccmd}[1]{\\texttt{\\textbackslash#1}}% command name -- adds backslash automatically\n\\newcommand{\\docopt}[1]{\\ensuremath{\\langle}\\textrm{\\textit{#1}}\\ensuremath{\\rangle}}% optional command argument\n\\newcommand{\\docarg}[1]{\\textrm{\\textit{#1}}}% (required) command argument\n\\newenvironment{docspec}{\\begin{quote}\\noindent}{\\end{quote}}% command specification environment\n\\newcommand{\\docenv}[1]{\\textsf{#1}}% environment name\n\\newcommand{\\docpkg}[1]{\\texttt{#1}}% package name\n\\newcommand{\\doccls}[1]{\\texttt{#1}}% document class name\n\\newcommand{\\docclsopt}[1]{\\texttt{#1}}% document class option name\n\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}\n\\theoremstyle{example}\n\\newtheorem{example}{Example}\n\\theoremstyle{theorem}\n\\newtheorem{theorem}{Theorem}\n\\begin{document}\n\n\\maketitle% this prints the handout title, author, and date\n\n\\begin{abstract}\n\\noindent Understanding asymptotic behavior of a series or function is useful.\n\\end{abstract}\n\n%\\printclassoptions\n\n\\section{Estimating Sums}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.00\\linewidth]{graphics/intsum.pdf}\n\\end{center}\n\\caption{\\label{fig:intmid}Comparing sum (black rectangles) with integrals of increasing $f$ (red), $f$ shifted -1/2 (green), and $f$ shifted -1 (blue).}\n\\label{fig:intmid}\n\\end{figure}\n\nSumming is a linear operation, so\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} \\left[ \\alpha f(k) + \\beta g(k) \\right] = \\alpha \\sum_{k=0}^{n-1} f(k) + \\beta \\sum_{k=0}^{n-1} g(k) \\,.\n\\end{equation}\n\nSumming powers of $k$:\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} 1 = n \\,.\n\\end{equation}\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} k = \\frac{1}{2} n(n-1) = \\frac{1}{2} n^2 + O(n)\\,.\n\\end{equation}\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} k^2 = 
\\frac{1}{3}n(n-1/2)(n-1) = \\frac{1}{3} n^3 + O(n^2) \\,.\n\\end{equation}\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} k^3 = \\frac{1}{4}n^2(n-1)^2 = \\frac{1}{4} n^4 + O(n^3) \\,.\n\\end{equation}\n\nGenerally, for $p>0$,\n\n\\begin{equation}\n  \\sum_{k=0}^{n-1} k^p \\approx \\frac{1}{p+1} (n-1/2)^{p+1} = \\frac{1}{p+1} n^{p+1} +O(n^p) \\,.\n\\end{equation}\n\nEstimating sums with integrals.\n\nIf $f(x)$ is increasing, then\n\\begin{equation}\n\\int_{a-1}^{b} f(x) \\, dx \\leq \\sum_{k=a}^{b} f(k) \\approx \\int_{a-1/2}^{b+1/2} f(x) dx \\leq \\int_{a}^{b+1} f(x) \\, dx\n\\end{equation}\n\n\\end{document}\n", "meta": {"hexsha": "128c059aad19059976adc5a0284be3e3b71894b3", "size": 3895, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/sums/notes.tex", "max_stars_repo_name": "wmacevoy/algorithms-fall-2018", "max_stars_repo_head_hexsha": "232224ee94d1cef56bf60ec964953539cf0ace26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-08-22T17:14:25.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-22T17:14:33.000Z", "max_issues_repo_path": "notes/sums/notes.tex", "max_issues_repo_name": "wmacevoy/algorithms-fall-2018", "max_issues_repo_head_hexsha": "232224ee94d1cef56bf60ec964953539cf0ace26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/sums/notes.tex", "max_forks_repo_name": "wmacevoy/algorithms-fall-2018", "max_forks_repo_head_hexsha": "232224ee94d1cef56bf60ec964953539cf0ace26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-08-22T17:16:09.000Z", "max_forks_repo_forks_event_max_datetime": "2018-08-22T17:16:09.000Z", "avg_line_length": 31.6666666667, "max_line_length": 153, "alphanum_fraction": 0.7150192555, "num_tokens": 1346, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.8539127585282744, "lm_q1q2_score": 0.5971022823029317}}
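The approximations above are easy to test numerically. The following Python sketch (ours, not part of the handout) first checks the closed form for $\sum k^2$ exactly in integer arithmetic, and then reports the relative error of the midpoint estimate $\frac{1}{p+1}(n-1/2)^{p+1}$ for a few powers.

\begin{verbatim}
# Numeric sanity checks for the summing approximations (sketch).

def power_sum(n, p):
    return sum(k**p for k in range(n))  # sum_{k=0}^{n-1} k^p

# Exact check: sum k^2 = n(n-1/2)(n-1)/3, i.e. 6*sum = n(2n-1)(n-1).
for n in range(50):
    assert 6 * power_sum(n, 2) == n * (2 * n - 1) * (n - 1)

def midpoint(n, p):
    return (n - 0.5) ** (p + 1) / (p + 1)

n = 1000
for p in (1, 2, 3):
    s, m = power_sum(n, p), midpoint(n, p)
    print(p, s, m, abs(s - m) / s)  # relative error shrinks like O(1/n^2)
\end{verbatim}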
{"text": "\\chapter{An introduction to arbitrage pricing theory}\n\\label{chap:arbitrage}\n\nWe shall now review the basic setting and results of arbitrage pricing theory.\n\n\\section{Discrete one period model}\n\nIn order to gain an insight into the basic concepts in mathematical finance, an overview of discrete one period model is given in this chapter. This treatment is based on \\textcite[pp. 5--34]{bjork2004arbitrage} and \\textcite[pp. 3--12]{duffie2010dynamic}. \\added[comment={Added contribution}]{While the context follows these sources closely, the presentation should be unique. For example, we do not assume the ecistance of a risk-free rate.}. A more complete overview of discrete model with multiple time periods can be found in \\textcite[pp. 33--85]{musielarutkowski2005martingale}.\n\nLet $(\\Omega, \\Pf)$ be a discrete probability space with $\\Omega = \\{ \\omega_1, \\omega_2, \\ldots , \\omega_m \\}$ and we assume that $\\Pf (\\omega ) > 0$ for all $\\omega \\in \\Omega$. The discrete one period market model is the sample space $\\Omega$ coupled with the price process $S$. We assume that at the time $t=0$ the price vector $S_0 \\in \\R^n$ is a known constant and at the time $t=1$ it is a random vector $S_1 : \\Omega \\rightarrow \\R^n$. Thus our model consists of one time step with $n$ stochastic assets. \n\nA portfolio $w$ is a vector in $\\R^n$. Since we do not put any restrictions on $w$, we assume that there are no restriction on buying or short selling of assets in this market. We denote the starting value of portfolio $w$ by $V_0^w = w^{\\top} S_0$ and the final value of it is a random variable $V_1^w = w^{\\top} S_1$. \n\nWe say that the portfolio $w$ is an arbitrage portfolio if either\n  \\begin{enumerate}[labelindent=\\parindent, leftmargin=*]\n    \\item $V_0^w < 0$ and $V_1^w \\geq 0$ or\n    \\item $V_0^w \\leq 0$ and $V_1^w > 0$\n  \\end{enumerate}\n$\\Pf$-surely. We say that the market with starting prices is arbitrage free if there are no arbitrage portfolios. We denote\n  \\begin{align}\n    \\label{marketmatrixindiscretesetting}\n    M = \\begin{bmatrix} S_1 ( \\omega_1 ) & S_1 ( \\omega_2 ) & \\ldots & S_1  ( \\omega_m ) \\end{bmatrix}_{n \\times m} = \\begin{bmatrix} M_1^{\\top} \\\\ M_2^{\\top} \\\\ \\vdots \\\\ M_n^{\\top} \\end{bmatrix}_{n \\times m}\n  \\end{align}\nwhere $M_i^{\\top} \\in \\R_m$ is the vector for terminal prices of $i$th asset. Now the absence of arbitrage is equivalent to that neither\n  \\begin{enumerate}[labelindent=\\parindent, leftmargin=*]\n    \\item $w^{\\top} S_0 = V_0^w < 0$ and $w^{\\top} M \\geq 0$ nor\n    \\item $w^{\\top} S_0 = V_0^w \\leq 0$ and $w^{\\top} M > 0$\n  \\end{enumerate}\ndoes not hold for any portfolio $w$. We recall that for vectors, we use $x > 0$ to denote that every component of $x$ is positive. Likewise $x \\geq 0$ denotes component-wise non-negativity.\n\n\\subsection{The first fundamental theorem of asset pricing in discrete one period model}\n\nA state-price vector is a vector $x \\in \\R^m$ satisfying $S_0 = Mx$ and $x > 0$. This definition can be understood in the following context. If $x \\in \\R^m$ is a state-price vector and $x = \\beta y$, where $\\left| y \\right| = 1 $ and $\\beta = \\left| x \\right|$, then $S_0 = Mx = \\beta M y$. where $My$ is the expected value of $S_1$ under the probability measure where the state probabilities are given by the vector $y$. 
The probability vector of the original probability space $\\Pf$ does not have to be a state-price vector.\n\nIt may happen that a state-price vector does not exist, but if it does, then $V_0^{w} = w^{\\top} S_0 = w^{\\top} Mx$ for all portfolios $w$, and this implies that arbitrage does not exist. Suppose that $w \\in \\R^n$ and $x$ is a state-price vector. Now $x > 0$ and $w^{\\top} S_0 = w^{\\top} Mx < 0$ implies that $w^{\\top} M$ has a negative component. Likewise $x > 0$ and $w^{\\top} S_0 = w^{\\top} Mx = 0$ implies that not all elements of $w^{\\top} M$ are positive.\n\nWe shall now prove the reverse implication using a variant of the hyperplane separation theorem, which can be found in many standard textbooks on convex optimization.\n\n\\begin{thm}[Hyperplane separation theorem]\nLet $A$ and $B$ be disjoint closed convex subsets of $\\R^s$. If either of them is compact, then there exists $0 \\not = x \\in \\R^s$ such that\n  \\begin{align}\n    a^{\\top} x < b^{\\top} x\n  \\end{align}\nfor all $a \\in A$ and $b \\in B$.\n\\end{thm}\n\n\\begin{lemma}\n\\label{discreteonetimefirstfundamentallemma}\nThe market $(\\Omega, S)$ is arbitrage free if and only if a state-price vector exists.\n\\end{lemma}\n\n\\begin{proof}\n\\added[comment={Added reference}]{The argument presented here is essentially the same as the one found in} \\textcite[p. 4]{duffie2010dynamic}. We denote\n  \\begin{align}\n    A = \\{ \\ (-w^{\\top} S_0, w^{\\top} M ) \\in \\R^{m+1} \\ | \\ w \\in \\R^n \\ \\}\n  \\end{align}\nand $C = \\R_+^{m+1}$. Now the absence of arbitrage is equivalent to $A \\cap C = \\{ 0 \\}$. We only need to prove that the absence of arbitrage implies that a state-price vector exists.\n\nIt is clear that $A$ is a closed linear subspace of $\\R^{m+1}$. We consider the function $f$ defined by $z \\mapsto z / \\left| z \\right|$ for all $0 \\not = z \\in \\R^{m+1}$. Since $C$ is a convex subset, it is easy to see that the convex closure of $B = f(C \\setminus \\{ 0 \\})$ is a convex and compact subset of $C$. Moreover, this compact set is disjoint from $A$, since it lies in $C \\setminus \\{ 0 \\}$ while the absence of arbitrage gives $A \\cap C = \\{ 0 \\}$. \n\nBy the hyperplane separation theorem, there exists $0 \\not = y \\in \\R^{m+1}$ such that $a^{\\top} y < b^{\\top} y$ for all $a \\in A$ and $b \\in B$. Since $0 \\in A$, $0 < b^{\\top} y$ for all $b \\in B$, which implies that $0 < c^{\\top} y$ for all $0 \\not = c \\in C$. Coordinate vectors are in $C$ and this means that $y > 0$. Since $A$ is a subspace, $a \\in A$ implies that $\\lambda a \\in A$ for every $\\lambda \\in \\R$; as $a^{\\top} y$ is bounded above on $A$ by $\\inf_{b \\in B} b^{\\top} y$, this forces $a^{\\top} y = 0 $ for all $a \\in A$.\n\nIf $y = (y_1, y_2, \\ldots , y_{m+1})^{\\top} \\in \\R^{m+1}$ and $y^* = (y_2, \\ldots , y_{m+1})^{\\top} \\in \\R^{m}$, then\n  \\begin{align}\n    y_1 w^{\\top} S_0 = w^{\\top} M y^*\n  \\end{align}\nfor all $w \\in \\R^n$, which implies $x = y^* / y_1 > 0$ is a state-price vector.\n\\end{proof}\n\nIf the market is arbitrage free and $x = (x_1, \\ldots , x_m)^{\\top}$ is a state-price vector, then by denoting \n  \\begin{align}\n    \\beta = \\sum_{i=1}^m x_i\n  \\end{align}\nand $\\Pm (\\omega_i) = q_i = x_i / \\beta > 0$ for all $i = 1,2, \\ldots ,m$, we have a new probability measure $\\Pm$ on $\\Omega$. 
This measure has the property\n  \\begin{align}\n    \\label{martingalemeasureequationdiscrete}\n    S_0 = M x = \\beta M \\frac{x}{\\beta} = \\beta \\E_{\\Pm} \\left( S_1 \\right) .\n  \\end{align}\nSince we assumed that $\\Pf (\\omega) > 0$ for all $\\omega \\in \\Omega$, it is easy to see that $\\Pm (\\omega) > 0$ for all $\\omega \\in \\Omega$.\n  \nA deflator $d$ is a strictly positive process, so that $d_0 > 0$ and $d_1(\\omega) > 0$ for all $\\omega \\in \\Omega$. Now we may define relative price processes\n\\begin{align}\nS^d_0 &= \\frac{S_0}{d_0}, \\\\\nS^d_1 &= \\frac{S_1}{d_1}\n\\end{align}\nwith respect to the deflator $d$.\n\n\\begin{lemma}\n\t\\label{discreteonetimemartingalemeasurelemma}\n\tThe market has a state-price vector if and only if the market with relative prices has a state-price vector.\n\\end{lemma}\n\n\\begin{proof}\n\tLet\n\t\\begin{align}\n\tM^d = \\left[ \\ \\frac{S_1 ( \\omega_1 )}{ d_1 (\\omega_1) } \\ \\frac{S_1 ( \\omega_2 )}{ d_1 (\\omega_2) } \\ \\ldots \\ \\frac{S_1  ( \\omega_m )}{ d_1 (\\omega_m) } \\ \\right]_{n \\times m} .\n\t\\end{align}\n\tIf $x = (x_1, x_2, \\ldots , x_m)^{\\top} > 0$ and\n\t\\begin{align}\n\tx^d = d_0^{-1} ( x_1 d_1 (\\omega_1), x_2 d_1 (\\omega_2), \\ldots , x_m d_1 (\\omega_m) )^{\\top},\n\t\\end{align}\n\tthen $d_0 M^d x^d = Mx$. Therefore $S_0 = Mx$ if and only if $S^d_0 = M^d x^d$, where $x^d \\in \\R^m$ and $x^d > 0$.\n\\end{proof}  \n  \nInspired by these observations, we say that a measure $\\Pm$ satisfying \n  \\begin{enumerate}[labelindent=\\parindent, leftmargin=*]\n    \\item $\\Pf( \\omega ) = 0$ if and only if $\\Pm ( \\omega ) = 0$ (meaning that the measures share null sets) and\n    \\item there exists a deflator $d$ such that \n    \\begin{align}\n    \t\tS^d_0 = \\E_{\\Pm} \\left( S^d_1 \\right)\n   \t \\end{align}\n  \\end{enumerate}\nis an equivalent martingale measure (EMM) induced by the deflator $d$. By Lemma \\ref{discreteonetimemartingalemeasurelemma}, the original market is arbitrage-free if and only if the deflated market is arbitrage-free, which is equivalent to\n    \\begin{align}\n\t\tS^d_0 = \\beta \\E_{\\Pm} \\left( S^d_1 \\right)\n\t\\end{align}\nfor some $\\beta \\in \\R_+$ and measure $\\Pm$. We can define a new deflator $e$ by $e_0 = d_0$ and $e_1 = d_1 / \\beta$. Now\n    \\begin{align}\n\t\tS^e_0 = \\E_{\\Pm} \\left( S^e_1 \\right)\n\t\\end{align}\nmeaning that $\\Pm$ is an EMM induced by $e$. Thus we see that the market is arbitrage-free if and only if it has an EMM.\n\nThe curious aspect of the probabilities of the EMM is that they are not influenced by the original probability measure $\\Pf$ beyond the fact that they share the same sets of non-zero probabilities. In fact, the probabilities of the EMM are determined by the initial price vector $S_0$ and the matrix $M$ of terminal prices.\n\nAssume that $\\Pm$ is an equivalent measure. Since $\\Pf$ is a probability measure with no non-empty null sets, we may define a new random variable, called the Radon-Nikod\\'{y}m derivative, as\n\\begin{align}\n\\Lambda ( \\omega ) = \\frac{ \\Pm ( \\omega ) }{ \\Pf ( \\omega ) } \n\\end{align}\nfor all $\\omega \\in \\Omega$. If $X$ is a random variable, then\n\\begin{align}\n\\E_{\\Pf} (\\Lambda X) = \\sum_{i=1}^m \\Pf (\\omega_i) \\frac{ \\Pm ( \\omega_i ) }{ \\Pf ( \\omega_i ) } X( \\omega_i ) = \\E_{\\Pm} (X) \n\\end{align}\nso we see that the expectations under different measures are linked by the Radon-Nikod\\'{y}m derivative. 
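As a small numerical sanity check (with invented probabilities): if $m = 2$, $\\Pf = (1/2, 1/2)$ and $\\Pm = (2/3, 1/3)$, then $\\Lambda = (4/3, 2/3)$, and for the random variable $X = (3, 0)$ we get\n\\begin{align}\n\\E_{\\Pf} (\\Lambda X) = \\frac{1}{2} \\cdot \\frac{4}{3} \\cdot 3 + \\frac{1}{2} \\cdot \\frac{2}{3} \\cdot 0 = 2 = \\E_{\\Pm} (X) .\n\\end{align}\n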
By this we see that $X$ is a martingale under the measure $\\Pm$ if and only if $\\Lambda X$ is a martingale under the measure $\\Pf$.\n\nA num\\'{e}raire is any asset with only positive prices. If one of the assets is a num\\'{e}raire with initial price $s_0$ and terminal price $s_1( \\omega )$, then we use it as a deflator and define relative price processes\n  \\begin{align}\n    S_0^* &= \\frac{S_0}{s_0}, \\\\\n    S_1^* &= \\frac{S_1}{s_1}.\n  \\end{align}\nIf the market is arbitrage-free, then by Lemma \\ref{discreteonetimemartingalemeasurelemma} there exist $\\beta \\in \\R_+$ and a measure $\\Pm$ such that \n\\begin{align}\nS^*_0 = \\beta \\E_{\\Pm} \\left( S^*_1 \\right) .\n\\end{align}\nBut since $s$ is one of the assets, it must hold that $\\beta = 1$, meaning that the measure $\\Pm$ is an equivalent martingale measure.\n\nWe can now state the first fundamental theorem of asset pricing, which weaves these different concepts together.\n\n\\begin{thm}[First fundamental theorem of asset pricing for discrete one period model]\nThe following are equivalent in a discrete one period market model: \n  \\begin{enumerate}[labelindent=\\parindent, leftmargin=*]\n    \\item the market is arbitrage free,\n    \\item a state-price vector exists and\n    \\item an equivalent martingale measure $\\Pm$ exists.\n  \\end{enumerate}\nIf $\\Pm_d$ is an EMM induced by a deflator $d$ and $\\Pm_e$ is an EMM induced by a deflator $e$, then\n  \t\\begin{align}\n\t\tS_0 = d_0 \\E_{\\Pm_d} \\left( S^d_1 \\right) = e_0 \\E_{\\Pm_e} \\left( S^e_1 \\right) .\n\t\\end{align}\t\nIn particular, if the market has a num\\'{e}raire $s$ and there is no arbitrage, then a martingale measure induced by $s$ satisfies\n  \t\\begin{align}\n\t\tS_0 = s_0 \\E_{\\Pm} \\left( \\frac{S_1}{s_1} \\right).\n\t\\end{align}\n\\end{thm}\n\nA risk-free asset is a num\\'{e}raire with a constant terminal price. We assume that the risk-free asset has an initial price of $1$ and a terminal price of $1+r$, where $r > -1$. Thus if a risk-free asset exists and there is no arbitrage, an EMM $\\Pm$ induced by a risk-free asset satisfies\n\t\\begin{align}\n\t\tS_0 = \\E_{\\Pm} \\left( \\frac{S_1}{1+r} \\right) .\n\t\\end{align}\n\n\n\\subsection{The second fundamental theorem of asset pricing in discrete one period model}\n\n\\added[comment={Added reference}]{The arguments presented in this section are} essentially the same as the ones in \\textcite[pp. 31--34]{bjork2004arbitrage}.\n\nHow should we then price derivatives in this model? A contingent claim $X$ is a random variable $X : \\Omega \\rightarrow \\R$. If the original market is arbitrage-free, then the first fundamental theorem of asset pricing implies that an equivalent martingale measure $\\Pm$ induced by a deflator $d$ satisfies\n  \\begin{align}\n    S_0 & = d_0 \\E_{\\Pm} \\left( S^d_1 \\right) .\n  \\end{align}\nIt would be natural to define\n  \t\\begin{align}\n\t\tX_0 & = d_0 \\E_{\\Pm} \\left( X^d_1 \\right)\n\t\\end{align}\nto be the initial price of the claim $X$. But since there may be different EMMs, $X_0$ may not be well-defined. Thus it is vital to ask how many such measures there are. If the market is arbitrage free, then the equation $S_0 = M x$ has a solution by Lemma \\ref{discreteonetimefirstfundamentallemma}. By basic linear algebra, this solution is unique if and only if $\\Kernel{ ( M ) } = 0$. 
We also know that the null space is the orthogonal complement of the row space, meaning that\n  \\begin{align}\n    \\label{linearalgebraduality}\n    \\Kernel{ ( M ) } = \\Image{ (M^{\\top}) }^{\\perp} \n  \\end{align} \nand this suggests that we take a closer look at the image set $\\{ x^{\\top} M \\ | \\ x \\in \\R^n \\} $. But the image set is just the set of all terminal payoffs attainable by portfolios in the original market.\n\nIf there is a portfolio $w$ such that $w^{\\top} S_1 = X$ $\\Pf$-surely, then we say that $X$ is replicated by the portfolio $w$. This is equivalent to\n\t\\begin{align}\n\t\tX \\in \\{ x^{\\top} M \\ | \\ x \\in \\R^n \\} .\n\t\\end{align}\nBy the first fundamental theorem of asset pricing, we see that if the contingent claim $X$ can be replicated and the market has no arbitrage, then for any pair of an EMM $\\Pm$ and a replicating portfolio $w$, we have that \n  \\begin{align}\n    V_0^w = d_0 \\E_{\\Pm} \\left( w^{\\top} S^d_1 \\right) = d_0 \\E_{\\Pm} \\left( \\frac{X}{d_1} \\right),\n  \\end{align}\nwhere $d$ is a deflator inducing $\\Pm$. The left hand side depends only on the replicating portfolio $w$ and the right hand side depends only on the measure $\\Pm$ and the deflator. This implies that $V_0^{w_1} = V_0^{w_2}$ for any portfolios $w_1, w_2$ which replicate $X$. \n\nWe define $Z_0 = (w^{\\top} S_0, S_0^{\\top})^{\\top}$, $Z_1 = (w^{\\top} S_1, S_1^{\\top})^{\\top}$. If $\\Pm$ is an EMM with a deflator $d$, then\n  \t\\begin{align}\n\t\tZ_0 = d_0 \\E_{\\Pm} \\left( Z^d_1 \\right)\n\t\\end{align}\nif and only if \n  \t\\begin{align}\n\t\tw^{\\top} S_0 &= d_0 \\E_{\\Pm} \\left( w^{\\top} S^d_1 \\right) ,\\\\\n\t\tS_0 &= d_0 \\E_{\\Pm} \\left( S^d_1 \\right) .\n\t\\end{align}\nHence, if the market is arbitrage-free, then the arbitrage-free price of a replicated contingent claim $X$ is\n$w^{\\top} S_0$, where $w$ is any replicating portfolio.\n\nWe say that the market is complete if every contingent claim can be replicated. This means that if $M$ is defined as in Equation \\ref{marketmatrixindiscretesetting}, then the market is complete if and only if \n  \\begin{align}\n    \\Image{ (M^{\\top}) } = \\{ M^{\\top} w \\ | \\ w \\in \\R^n \\} = \\{ (w^{\\top} M)^{\\top} \\ | \\ w \\in \\R^n \\} = \\R^m\n  \\end{align} \nmeaning that the matrix $M$ has a rank of $m$. By the duality in Equation \\ref{linearalgebraduality}, we see that this is equivalent to $\\Kernel{ ( M ) } = 0$. Therefore the market is complete if and only if the equation in Lemma \\ref{discreteonetimefirstfundamentallemma} has a unique solution. Thus we have proved the second fundamental theorem of asset pricing.\n\n\\begin{thm}[Second fundamental theorem of asset pricing]\nAssume that the market is arbitrage free. The market is complete if and only if there is a deflator that induces a unique equivalent martingale measure. Then every EMM induced by a given deflator is unique. If the market has a num\\'{e}raire, then the market is complete if and only if the EMM induced by the num\\'{e}raire is unique.\n\\end{thm}\n\nCompleteness also implies that the market has $m$ linearly independent asset price processes, since the matrix $M$ has a rank of $m$. 
This means, in a sense, that every risk dimension is covered by tradable assets, and therefore every possible contingent claim can be replicated.\n\nSo we have identified three distinct scenarios, from best to worst:\n  \\begin{enumerate}[labelindent=\\parindent, leftmargin=*]\n    \\item The market model is arbitrage free and complete, which is equivalent to the existence of a deflator that induces a unique equivalent martingale measure. Every contingent claim can be given a unique price, which is the cost of a replicating portfolio. Equivalently, if $d$ is a deflator with induced EMM $\\Pm$, then the initial arbitrage-free price of a contingent claim $X$ is\n    \t\\begin{align}\n    \t\t\\label{deflatorinducedpricingformula}\n\t    \tX_0 = d_0 \\E_{\\Pm} \\left( \\frac{X}{d_1} \\right) .\n    \t\\end{align}\n    This price does not depend on the choice of the deflator $d$ as long as the deflator induces an EMM.\n    \\item The market is arbitrage free but not complete. Then any deflator that induces an equivalent martingale measure induces several of them. Every replicated contingent claim can be given a unique arbitrage-free price, which is the cost of a replicating portfolio. This price is given by Equation \\ref{deflatorinducedpricingformula}. Contingent claims that cannot be replicated may not be priced in the sense of Equation \\ref{deflatorinducedpricingformula}.\n    \\item The market has arbitrage, which makes pricing rather meaningless. \n  \\end{enumerate}\n\nThus, if the market does not allow arbitrage, then every replicated contingent claim can be given an arbitrage-free price. This price is the cost of a replicating portfolio $w$, which is equal to\n    \\begin{align}\n\t\tV^w_0 = d_0 \\E_{\\Pm} \\left( \\frac{V^w_1}{d_1} \\right) ,\n\t\\end{align}\nwhere $d$ is any deflator that induces an EMM $\\Pm$. If the market has a num\\'{e}raire, then we may use this as the deflator.\n  \n\n\n", "meta": {"hexsha": "e8bedd664afcd0219ca9437dddc56af312935642", "size": 17757, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "arbitrage.tex", "max_stars_repo_name": "mrytty/gradu-public", "max_stars_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "arbitrage.tex", "max_issues_repo_name": "mrytty/gradu-public", "max_issues_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "arbitrage.tex", "max_forks_repo_name": "mrytty/gradu-public", "max_forks_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.7745901639, "max_line_length": 585, "alphanum_fraction": 0.6933603649, "num_tokens": 5791, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.8539127566694177, "lm_q1q2_score": 0.5971022810031179}}
{"text": "% !TEX root = CSL 2021.tex\n\nTo illustrate our construction, we start from a relatively concrete example: we consider a simply-typed lambda calculus with a base type $\\mathsf{Real}$ and primitives for real numbers, and we follow the plan outlined in the introduction, which yields for each simple type a notion of approximate value, approximate function, diameter and distance between programs. Most definitions are straightforward and intuitive: the interesting, not immediately obvious point is that our construction does yield a \\emph{partial metric} on each type.\n\n\\emph{Simple types} are defined as follows: $\\mathsf{Real}$ is a simple type; if $A$ and $B$ are simple types, then $A \\to B$ and $A \\times B$ are simple types.\nFor all $n>0$, we fix a set $\\mathcal{F}_n$ of functions from $\\mathbb{R}^n$ to $\\mathbb{R}$. We consider the usual Curry-style simply-typed $\\lambda$-calculus over the types defined above (the left and right projection are denoted by $\\pi_L:A\\times B \\to A$ and  $\\pi_R:A\\times B \\to B$ respectively, and the constructor for pairs by $\\left\\langle-,-\\right\\rangle$), enriched with the following constants: for all $r \\in \\mathbb{R}$, a constant $r:\\mathsf{Real}$; for all $n>0$ and all $f\\in\\mathcal{F}_n$, a constant $f:\\mathsf{Real}\\to\\ldots\\to\\mathsf{Real}\\to\\mathsf{Real}$. We call this calculus $\\STLC$, and its terms are simply called \\emph{terms}. We write $t[x_1 := u_1, \\ldots, x_n := u_n]$ to denote the \\emph{simultaneous} substitution of $u_1, \\ldots, u_n$ for $x_1, \\ldots, x_n$ in $t$. For all types $A$, we denote by $\\Lambda_A$ the set of closed terms of type $A$. The relation of $\\beta$-reduction is enriched with the following rule, extended to all contexts: for all $n>0$, $f\\in\\mathcal{F}_n$, and $r_1,\\ldots,r_n\\in\\mathbb{R}$, $f r_1 \\ldots r_n \\to_\\beta s$, where $s = f(r_1, \\ldots, r_n)$. By standard arguments \\cite{abramsky:handbook-2}, this calculus has the properties of subject reduction, confluence and strong normalisation.\n\n\\begin{remark}\\label{rem:continuous}\nThe class of real-valued functions which can be computed in $\\STLC$ depends on the choice we make for $\\mathcal F_{n}$. With suitable choices (see for instance \\cite{TaylorReal, Di-Gianantonio:2013aa,Edalat:2000aa}) one can obtain that all programs of type $\\mathsf{Real}\\to \\mathsf{Real}$ compute \\emph{continuous} functions\\footnote{Note that for this to be possible, $\\mathcal F_{n}$ cannot contain the identity function over $\\mathsf{Real}$.}, that all such programs are \\emph{integrable} over closed intervals, or that all such programs are \\emph{continuously differentiable}.\n\n\n\n\\end{remark}\n\nIn addition to the usual notion of $\\beta$-equivalence between terms of $\\STLC$, we will exploit also a stronger equivalence: given two closed terms $t,u$ of type $A$,  we say that $t$ and $u$ are \\emph{observationally equivalent} and write $t \\oeq_A u$ if for all terms $C$ such that $x:A \\vdash C : \\mathsf{Real}$ is derivable, $C[x:=t]$ is $\\beta$-equivalent to $C[x:=u]$ (which amounts to saying that they both $\\beta$-reduce to the same real number).  It is clear that observational equivalence is a congruence and that two $\\beta$-equivalent terms are always observationally equivalent. 
%The notion of distance we define in the next paragraph will be a generalization of this notion of equivalence.\n\n\n\n\\subsection{Approximate Values and Approximate Programs}\n\nThe first step of our construction for $\\STLC$ is to associate to each simple type $A$ a set $\\intervals{A}$ whose elements are certain sets of programs of type $A$ that we call \\emph{approximate values of type $A$}. \nA closed term $t\\in \\Lambda_{A}$ represents a program with return type $A$ and no parameters, so an approximate value can be thought of as a specification of a program with return type $A$ and no parameters \\emph{up to} a certain degree of error or approximation.\n\nFor each simple type $A$, the set of approximate values $\\intervals{A}\\subseteq \\mathcal P(\\Lambda_A)$ is defined inductively as follows:\n\\begin{itemize}\n\\item $\\intervals{\\mathsf{Real}} = \\{ \\{ t \\in \\Lambda_\\mathsf{Real} \\mid \\exists r \\in I, t \\to_\\beta^* r \\} \\mid I \\subseteq \\mathbb{R} \\text{ is a compact interval or $\\emptyset$ or } \\mathbb{R} \\}$,\n\\item $\\intervals{A \\times B} = \\{ a \\times b \\mid a  \\in \\intervals{A}, b \\in \\intervals{B} \\}$, where $a \\times b = \\{ t \\in \\Lambda_{A \\times B} \\mid \\pi_L t \\in a \\text{ and } \\pi_R t \\in b \\}$,\n\\item $\\intervals{A \\to B} = \\{ \\{ t \\in \\Lambda_{A \\to B} \\mid \\forall u \\in \\Lambda_{A},~ tu \\in I(u) \\} \\mid I : \\Lambda_A \\to \\intervals{B} \\}$.\n\\end{itemize}\n\nThe approximate values of type $\\mathsf{Real}$ are sets of closed programs of type $\\mathsf{Real}$ which essentially coincide with the compact intervals of $\\mathbb R$, plus the empty set and $\\mathbb{R}$ itself. An approximate value in $\\intervals{A\\times B}$ is a ``rectangle'' $a \\times b$, with $a\\in \\intervals{A}$ and $b \\in \\intervals{B}$, while an approximate value in $\\intervals{A\\to B}$ is uniquely determined by a function \n$I$ from closed terms $u\\in \\Lambda_{ A}$ to approximate values $I(u)\\in \\intervals{B}$.\n\nFor example, any two terms $t,u\\in \\Lambda_{\\mathsf{Real}}$ with normal forms $q, r\\in \\mathbb{R}$ induce an approximate value $[t,u]_{\\mathsf{Real}}= \\{v\\in \\Lambda_{\\mathsf{Real}}\\mid  v\\to_{\\beta}^{*} s \\land (q\\leq s\\leq r \\vee q\\geq s\\geq r)\\}$ of type $\\mathsf{Real}$. Similarly, any two terms $t,u\\in \\Lambda_{\\mathsf{Real}\\to \\mathsf{Real}}$ induce an approximate value\n$[t,u]_{\\mathsf{Real}\\to \\mathsf{Real}}= \\{v\\in \\Lambda_{\\mathsf{Real}\\to\\mathsf{Real}}\\mid  \n\\forall r \\in \\Lambda_{\\mathsf{Real}} \\ vr\\in [ tr, ur]_{\\mathsf{Real}} \n\\}$. For instance, if $t= \\lambda x.\\sin(x)+1$ and $u=\\lambda x.\\cos(x)-1$, then \n$[t,u]_{\\mathsf{Real}\\to \\mathsf{Real}}$ contains all closed terms corresponding to maps oscillating between $\\cos(x)-1$ and $\\sin (x)+1$ (\\textit{e.g.} the program $\\lambda x. \\sin(x+1)$, as illustrated in Fig. 
\\ref{fig:interval1}).\n\n\\begin{figure}\n\\begin{subfigure}{0.42\\textwidth}\n\\parbox[h][3cm][c]{\\textwidth}{\n\\adjustbox{center}{$\n\\begin{tikzpicture}[domain=-5:5, scale=0.5]\n%\\draw[very thin,color=gray] (-0.1,-1.1) grid (3.9,3.9);\n\\draw[<->]   (-5,0) -- (5,0);\n\\draw[<->] (0,-3) -- (0,3); % node[above] {$f(x)$};\n\n   % \\node at (0,0)[circle,fill,inner sep=1pt]{};\n%\n\\draw[color=blue, samples=40] plot (\\x,{sin(\\x r)+1}) node[above] {\\tiny$\\sin(x)+1$};\n\\draw[color=orange, samples=40] plot (\\x,{cos(\\x r)-1}) node[below] {\\tiny$\\cos(x)-1$};\n\n\\draw[color=gray, dotted, thick, samples=40] plot (\\x,{sin(deg(\\x+1))}) node[below] {\\tiny$\\sin(x+1)$};\n\n\n\\end{tikzpicture}$}\n}\n\\caption{$\\lambda x.\\sin(x+1)$ is in $[\\lambda x.\\sin(x)+1, \\lambda x.\\cos(x)-1]_{\\mathsf{Real}\\to\\mathsf{Real}}$.}\n\\label{fig:interval1}\n\\end{subfigure} \\ \\ \\ \n%\\end{figure}\n%\\begin{figure}\n\\begin{subfigure}{0.52\\textwidth}\n\\parbox[h][3cm][c]{\\textwidth}{\n\\adjustbox{center}{$\n\\begin{tikzpicture}[domain=-4:4, scale=0.65]\n\n%\\draw[very thin,color=gray] (-0.1,-1.1) grid (3.9,3.9);\n\\draw[<->]   (-4,0) -- (4,0);\n\\draw[<->] (0,-2) -- (0,2); % node[above] {$f(x)$};\n\n\\draw[dashed, |-|] (-1,0) -- (1,0);\n\\node(a) at (-1,-0.3) {\\tiny$-1$};\n\\node(a) at (1,-0.3) {\\tiny$1$};\n\n\\node(a) at (0.3,-1.2) {\\tiny$-1$};\n\\node(a) at (-0.3,0.8) {\\tiny$1$};\n   % \\node at (0,0)[circle,fill,inner sep=1pt]{};\n\n\\draw[|-|] (4.8,0.4) -- node[right] {\\tiny$\\varepsilon$} (4.8,1.3);\n\n\\draw[|-|] (5.4,0.35) -- node[right] {\\tiny$\\delta$} (5.4,0.45);\n\n\n\\draw[color=red, domain=-4:4, samples=40] plot (\\x, {(1/2*sqrt(2*pi))*exp(-((\\x)^2)) } ) node[above] {\\tiny$u[x]$};\n%\\draw[color=violet, domain=-1.2:1.2] plot (\\x, { (\\x)^3 } );\n\n\\draw[color=blue, domain=-4:0] plot (\\x,-1);\n\\draw[color=blue, domain=0:4] plot (\\x,1) node[above] {\\tiny$t[x]$};\n\n\\draw[dotted] (-4,0.4) -- (4.8,0.4);\n\\draw[dotted] (-4,1.3) -- (4.8,1.3);\n\n\\node(z) at (-1,0.4) {\\tiny$\\bullet$};\n\\node(zz) at (1,0.4) {\\tiny$\\bullet$};\n\n\\node(z) at (0,-1) {\\tiny$\\bullet$};\n\\node(zz) at (0,1) {\\tiny$\\bullet$};\n\n\\node(r) at (0,0.4) {\\tiny$\\bullet$};\n\\node(rr) at (0.3,0.5) {\\tiny$r$};\n\n\n\n\\end{tikzpicture}$}\n}\n\\caption{$\\varepsilon=(\\partial(u)\\circ \\partial(t))([-1,1])$ is bigger than $\\delta=\\partial(u\\circ t)([-1,1])=[r,r]$.}\n\\label{fig:interval2}\n\\end{subfigure}\n\\caption{Examples of functional approximate values and of approximate programs. }\n\\end{figure}\n\n\nFor all $A$, the set $\\intervals{A}$ is a subset of $\\mathcal{P}(\\Lambda_A)$ closed under arbitrary intersections. We deduce that $\\intervals{A}$ has arbitrary meets (given by intersections) and arbitrary joins\n$\\bigvee_{i\\in I}a_{i}= \\bigcap \\{ a\\in \\intervals{A}\\mid \\forall i\\in I \\ a_{i}\\subseteq a\\}$, and thus $\\intervals{A}$ is a complete lattice. In particular, for all $t\\in\\Lambda_A$, there is a least element of $\\intervals{A}$ that contains $t$, which will be denoted by $\\tointerval{t}$. \nOne can check that $\\tointerval{t} = \\tointerval{u}$ if and only if $t\\oeq_{A} u$.\n\nMonotone functions from approximate values to approximate values represent \\emph{approximate programs}. 
They behave like a model of the simply-typed $\\lambda$-calculus in a weak sense, namely: \\begin{itemize}\n\\item for all monotone functions $\\vec{\\alpha} \\mapsto c[\\vec{\\alpha}] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to \\intervals {B \\to C}$ and $\\vec{\\alpha} \\mapsto b[\\vec{\\alpha}] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to \\intervals {B}$, we can define a monotone function $\\vec{\\alpha} \\mapsto (c[\\vec{\\alpha}]~ b[\\vec{\\alpha}]) = \\sup\\{\\tointerval{vu} \\mid v \\in c[\\vec{\\alpha}], u \\in b[\\vec{\\alpha}]\\} : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to \\intervals {C}$,\n\\item for all monotone functions $\\vec{\\alpha} \\mapsto c[\\vec{\\alpha}] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to \\intervals {C}$ and all $i \\leq n$, we can define a monotone function $(\\alpha_j)_{j \\neq i} \\mapsto (\\lambda \\alpha_i.~ c[\\vec{\\alpha}]) = \\{ v \\in \\Lambda_{A_i \\to C} \\mid \\forall t_i \\in \\Lambda_{A_i},~ v t_i \\in c[\\alpha_1, \\ldots, \\tointerval{t_i}, \\ldots, \\alpha_n] \\} : \\prod_{j \\neq i} \\intervals{A_j} \\to \\intervals{A_i \\to C}$,\n\\end{itemize}\nand these two constructions are weakly compatible with $\\beta$-reduction and $\\eta$-expansion:\n\n\\begin{proposition} \\label{prop:intervals-weak-model-lambda} For all monotone functions $(\\vec{\\alpha}, \\beta) \\mapsto c[\\vec{\\alpha}, \\beta] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\times  \\intervals {B} \\to \\intervals {C}$ and $\\vec{\\alpha} \\mapsto b[\\vec{\\alpha}] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to \\intervals {B}$, $\\left(\\vec{\\alpha} \\mapsto (\\lambda \\beta.~  c[\\vec{\\alpha}, \\beta])~ b[\\vec{\\alpha}]\\right) \\leq \\left(\\vec{\\alpha} \\mapsto c[\\vec{\\alpha}, b[\\vec{\\alpha}]]\\right)$, %$$\\begin{array}{ll} & \\vec{\\alpha} \\mapsto (\\lambda \\beta.~  c[\\vec{\\alpha}, \\beta])~ b[\\vec{\\alpha}] \\\\ \\leq & \\vec{\\alpha} \\mapsto c[\\vec{\\alpha}, b[\\vec{\\alpha}]]\\text{,}\\end{array}$$\nand for all monotone functions $\\vec{\\alpha} \\mapsto d[\\vec{\\alpha}] : \\intervals{A_1} \\times \\ldots \\times \\intervals {A_n} \\to  \\intervals {B \\to C}$,\n$\\left(\\vec{\\alpha} \\mapsto \\lambda \\beta.~  d[\\vec{\\alpha}]~ \\beta\\right) \\geq \\left(\\vec{\\alpha} \\mapsto d[\\vec{\\alpha}]\\right)$,\n%$$\\begin{array}{ll} & \\vec{\\alpha} \\mapsto \\lambda \\beta.~  d[\\vec{\\alpha}]~ \\beta \\\\ \\geq & \\vec{\\alpha} \\mapsto d[\\vec{\\alpha}]\\text{,}\\end{array}$$\nwhere functions are ordered by pointwise inclusion.  In other words, on approximate programs, $\\beta$-reduction and $\\eta$-expansion \\emph{discard} information, and conversely $\\beta$-expansion and $\\eta$-reduction \\emph{recover} some information.\n\\end{proposition}\n\n\\begin{proof} Without loss of generality, we can assume $n=0$.\nLet $v \\in \\lambda \\beta.~ c[\\beta]$ and $u \\in b$. By definition, $vu \\in c[\\tointerval{u}]$, so $\\tointerval{vu} \\subseteq c[\\tointerval{u}] \\subseteq c[b]$. Therefore, $(\\lambda \\beta.~ c[\\beta])~ b \\subseteq c[b]$.\nLet $v \\in d$. For all $u \\in \\Lambda_B$, by definition, $vu \\in d~\\tointerval{u}$. 
Therefore, $v \\in \\lambda \\beta.~ d~ \\beta$.\n\\end{proof}\n\nBeyond theoretical aspects (which will be made clearer in Section 5),\nProposition \\ref{prop:intervals-weak-model-lambda} is also important in practice because it implies that if we compute an approximation of a program from approximations of its parts and then simplify the resulting approximate program using $\\beta$-reduction and $\\eta$-expansion, what we obtain is still a valid approximation of the original program.\n\n\nWe can define a weak embedding from terms into approximate programs, by mapping each term to its tightest approximation: for all terms $t$ such that $\\alpha_{1}:A_{1},\\dots,\\alpha_{n}:A_{n}\\vdash t:B$, we define a monotone function $\\partial(t):\\intervals{A_{1}}\\times \\dots \\times \\intervals{A_{n}} \\to \\intervals{B}$ by $\\partial(t)(a_1, \\ldots, a_n) = \\sup \\{ \\tointerval{t u_1 \\ldots u_n} \\mid u_1 \\in a_1, \\ldots, u_n \\in a_n \\}$. \n\n\\begin{remark} \\label{remark:push-exp-stlc}\nThe map $\\partial$ is constant on classes of observational equivalence, and one can check that it is weakly compatible with the constructions of the $\\lambda$-calculus, in particular:\n\\begin{itemize}\n\\item $\\partial (\\alpha_{i})(a_{1},\\dots, a_{n})=a_{i}$,\n\\item $\\partial (tu)(a_{1},\\dots, a_{n}) \\subseteq \\partial (t)(a_{1},\\dots, a_{n}) ~ \\partial (u)(a_{1},\\dots, a_{n})$,\n\\item $\\partial (\\lambda \\beta. t)(a_{1},\\dots, a_{n}) \\subseteq \\lambda \\beta.~ \\partial (t)(\\beta, a_{1},\\dots, a_{n})$.\n\\end{itemize}\n\\end{remark}\n\nThis map $\\partial(t)$ can be taken as a measure of the \\emph{sensitivity} of $t$, as it maps an interval $a$, that is a quantifiably uncertain input, to a quantifiably uncertain output $\\partial(t)(a)$. \nFor instance, if we take the term $t[x]= \\sin(x)+1$ above, then $\\partial(t): \\intervals{\\mathsf{Real}}\\to \\intervals{\\mathsf{Real}}$ sends the interval $[-\\pi,\\pi]_{\\mathsf{Real}}$ into $[0,2]_{\\mathsf{Real}}$.\n\n\n\n\\begin{remark} \\label{remark:oplax-functor-stlc}\nWhen composing two maps $\\partial(t)$ and $\\partial(u)$, we might obtain a worse approximation than by computing $\\partial(t[u/x])$ directly.\nFor instance, let $t[x]$ and \n$u[x]$ be, respectively, the discontinuous and Gaussian functions illustrated in Fig. \\ref{fig:interval2}.  \nIf $a$ is the interval $[-1,+1]$, then $\\partial(t)(a)=[-1,1]$, and since $u[x:=-1]=u[x:=1]\\simeq_{\\beta} r$ for some $0<r<1$, we deduce that $\\partial(u)(\\partial(t)(a))=\\partial(u)([-1,1]) \\supsetneq [r,r]= \\partial (u[t/x])(a)$.\n\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\\subsection{A Partial Metric on Each Type}\n\\label{subsection:type-gpms}\n\nSo far, we have associated each type $A$ of $\\STLC$ with a complete lattice $\\intervals{A}\\subseteq \\mathcal P(\\Lambda_{A})$ of approximate values of type $A$, and each typed program $t:A\\to B$ with an approximate program $\\partial(t)$ (in fact, a monotone function) from approximate values of type $A$ to approximate values of type $B$.\nWe will now exploit this structure to define, for each type $A$ of $\\STLC$, a generalized partial metric on the closed (exact) programs of type $A$. 
% space $(\\Lambda_{A}/\\oeq_{A}, \\distances{A}, d_{A})$.\n\n% The definition of this space will exploit in an essential way the sets of approximate values $\\intervals{A}$.\n\n\nThe first step is to define, for every simple type $A$, a commutative integral quantale $(\\distances{A}, \\quantalegeq_A, \\quantaleop_A)$ of \\emph{distances of type $A$}:\n\\begin{itemize}\n\\item $(\\distances{\\mathsf{Real}}, \\quantalegeq_\\mathsf{Real}, \\quantaleop_\\mathsf{Real}) = ([0,\\infty], \\leq, +)$,\n\\item $\\distances{A \\times B} = \\distances{A} \\times \\distances{B}$,\n\\item $\\distances{A \\to B} =\\operatorname{Poset}(\\intervals{A}, \\distances{B})$.\n% \\{ \\text{monotonically decreasing functions from } \\intervals{A} \\text{ to } \\distances{B} \\}$.\n\\end{itemize}\nwhere, for two posets $Q,R$, $\\operatorname{Poset}(Q,R)$ denotes the set of monotone functions from $Q$ to $R$.\nObserve that the quantale $\\distances{A\\to B}$ is a set of functions over the approximate values of $A$.\n\nFor all simple types $A$, we now define a \\emph{distance function} $d_A : \\Lambda_A \\times \\Lambda_A \\to \\distances{A}$:\n\\begin{itemize}\n\\item $d_\\mathsf{Real}(t,u) = \\left\\vert r-s \\right\\vert$, where $r,s$ are the unique elements of $\\mathbb{R}$ such that $t \\to_\\beta^* r$ and $u \\to_\\beta^* s$,\n\\item $d_{A \\times B}(t,u) = (d_A(\\pi_L t, \\pi_L u), d_B(\\pi_R t, \\pi_R u))$,\n\\item $d_{A \\to B}(t,u) = a \\mapsto \\sup \\left\\{ d_B(rv, sw) \\mid r,s \\in \\{t,u\\}, v,w \\in a \\right\\}$.\n\\end{itemize}\n\nIt would be tempting to define $d_{A \\to B}(t,u)(a)$ simply as $\\sup \\left\\{ d_B(tv, uw) \\mid v,w \\in a \\right\\}$, but then the axiom ``$d_{A \\to B}(t,t) \\leq d_{A \\to B}(t,u)$'' of partial metric spaces would fail.\n\nThe maps $d_{A}$ are clearly compatible with observational equivalence (\\textit{i.e.} if $a \\oeq_A a'$ and $b \\oeq_A b'$, then $d_A(a,b) = d_A(a',b')$). \n\n\n\\begin{figure}\n\\adjustbox{center}{\n\\begin{tikzpicture}\n\n\n\\draw[|<->|, orange] (-2,0) to node[above] {$a$} (2,0);\n\\draw[|<->|, blue] (1,0) to node[above]  {$b$} (4,0);\n\\draw[violet] (1,0) to (2,0);\n\n\\draw[|<->|, dashed] (-2,0.8) to node[above] {\\tiny$\\delta(a\\cup b)$} (4,0.8);\n\\draw[|<->|, dashed] (1,0.5) to node[below] {\\tiny$\\delta(a\\cap b)$} (2,0.5);\n\\draw[|<->|, dashed] (-2,-0.5) to node[below] {\\tiny$\\delta(a)$} (2,-0.5);\n\\draw[|<->|, dashed] (1,-0.8) to node[below] {\\tiny$\\delta(b)$} (4,-0.8);\n\n\n\n\\end{tikzpicture}}\n\\caption{\\small The diameter function is \\emph{modular} over intersecting real intervals: \n$\\mathsf{diam}(a\\cup  b)+\\mathsf{diam}(a\\cap b)=\\mathsf{diam}(a)+\\mathsf{diam}(b)$ for all $a,b\\in [\\R]$ such that $a\\cap b\\neq \\emptyset$.\nThis property is at the heart of our generalization of diameters. Observe that this property fails when $a\\cap b$ is empty.}\n\\label{fig:modular}\n\\end{figure}\n\n\nOur objective is now to prove that $\\left(\\Lambda_A/\\oeq_A, \\distances{A}, d_A\\right)$ is a generalized partial metric space.\nTo this end, we define for all simple types $A$ a monotone \\emph{diameter function} $\\diam_A : \\intervals{A} \\to \\distances{A}$ by $\\diam_A(a) = \\sup \\{ d_A(t,u) \\mid t,u \\in a \\}$. 
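As a quick sanity check (in the notation introduced above): for terms $t,u\\in \\Lambda_{\\mathsf{Real}}$ with normal forms $q \\leq r$, the approximate value $[t,u]_{\\mathsf{Real}}$ has $\\diam_{\\mathsf{Real}}([t,u]_{\\mathsf{Real}}) = r - q$, and since $\\tointerval{t} \\vee \\tointerval{u} = [t,u]_{\\mathsf{Real}}$, we recover $d_{\\mathsf{Real}}(t,u) = \\left\\vert r - q \\right\\vert$ as the diameter of the smallest approximate value containing both terms.\n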
The key to our objective will be to prove that $\\diam_{A}$ is \\emph{sub-modular} on intersecting approximate values (henceforth, \\emph{quasi-sub-modular} -- see Proposition \\ref{prop:submodular}): this generalizes the fact that, on the (real-valued) metric space $\\mathbb{R}$, the diameter is \\emph{modular} over intersecting closed intervals (see Fig. \\ref{fig:modular}).\n %-- or rather, using the quantale ordering convention, supermodular. \n \n \n First, one can check that for all $t,u \\in \\Lambda_A$, $\\diam_A\\left(\\tointerval{t} \\vee \\tointerval{u}\\right) = d_A(t,u)$, and that:\n \\begin{itemize}\n\\item\n$\\diam_\\mathsf{Real}(a)   = \\sup\\{s-r \\mid s,r \\in \\mathbb{R} \\text{ such that } s,r \\in a\\}$,\n\\item \n$\\diam_\\mathsf{A \\times B}(p)  = \\left(\\diam_A\\left(\\sup\\left\\{\\tointerval{\\pi_L t} \\mid t \\in p\\right\\}\\right), \\diam_B\\left(\\sup\\left\\{\\tointerval{\\pi_R t} \\mid t \\in p\\right\\}\\right)\\right)$,\n\\item $\\diam_\\mathsf{A \\to B}(b)  = a \\mapsto \\diam_B\\left(\\sup\\left\\{\\tointerval{vt} \\mid t\\in a, v \\in b\\right\\}\\right)$.\n\\end{itemize}\nThis then leads to the following:\n%With that, we can prove that the diameter function $\\diam_{A}$ is super-modular over intersecting approximate values: \n%``quasi-supermodularity'':\n\\begin{proposition}[$\\diam_{A}$ is quasi-sub-modular]\\label{prop:submodular} For all simple types $A$ and all $a,b \\in \\intervals{A}$ such that $a \\wedge b \\neq \\emptyset$, $\\diam(a \\wedge b) \\quantaleop \\diam(a \\vee b) \\quantalegeq \\diam(a) \\quantaleop \\diam(b)$.\n\\end{proposition}\n\\begin{proof}\nWe proceed by induction on types.\n\nLet $a,b \\in \\intervals{\\mathsf{Real}}$ such that $a\\wedge b \\neq \\emptyset$. Let $I = \\{r \\in \\mathbb{R} \\mid r \\in a\\}$ and $J = \\{s \\in \\mathbb{R} \\mid s \\in b\\}$: then $I$ (respectively, $J$, $I \\cap J$, $I \\cup J$) is either $\\mathbb{R}$ or a non-empty compact interval of $\\mathbb{R}$, and its length in the usual sense is equal to $\\diam_\\mathsf{Real}(a)$ (respectively, $\\diam_\\mathsf{Real}(b)$, $\\diam_\\mathsf{Real}(a \\wedge b)$, $\\diam_\\mathsf{Real}(a \\vee b)$). Note that the only reason we know that $I \\cup J$ is an interval is because $a\\wedge b \\neq \\emptyset$ implies $I \\cap J \\neq \\emptyset$. The length of an interval of $\\mathbb{R}$ is equal to its Lebesgue measure, therefore $\\operatorname{length}(I \\cap J) + \\operatorname{length}(I \\cup J) = \\operatorname{length}(I) + \\operatorname{length}(J)$, so $\\diam_\\mathsf{Real}(a \\wedge b) \\quantaleop \\diam_\\mathsf{Real}(a \\vee b) = \\diam_\\mathsf{Real}(a) \\quantaleop \\diam_\\mathsf{Real}(b)$.\n\nLet $a,b \\in \\intervals{A_L \\times A_R}$ such that $a\\wedge b \\neq \\emptyset$. For all $c \\in \\intervals{A_L \\times A_R}$, let $c_L = \\sup\\{\\tointerval{\\pi_L t} \\mid t \\in c\\}$ and $c_R = \\sup\\{\\tointerval{\\pi_R t} \\mid t \\in c\\}$.\nOne can check that $(a \\wedge b)_L = a_L \\wedge b_L$, $(a \\wedge b)_R = a_R \\wedge b_R$, $(a \\vee b)_L = a_L \\vee b_L$ and $(a \\vee b)_R = a_R \\vee b_R$, so $\\diam(a \\wedge b) \\quantaleop \\diam(a \\vee b) = (\\diam(a_L \\wedge b_L) \\quantaleop \\diam(a_L \\vee b_L),  \\diam(a_R \\wedge b_R) \\quantaleop \\diam(a_R \\vee b_R)) \\quantalegeq (\\diam(a_L) \\quantaleop \\diam(b_L), \\diam(a_R) \\quantaleop \\diam(b_R)) = \\diam(a) \\quantaleop \\diam(b)$.\n\nLet $f,g \\in \\intervals{A \\to B}$ and $a \\in \\intervals{A}$. 
For all $h \\in \\intervals{A \\to B}$, let $ha = \\sup\\{\\tointerval{vt} \\mid v \\in h, t \\in a\\}$. One can check that $(f \\wedge g)a \\subseteq (f a) \\wedge (g a)$ and $(f \\vee g)a = (f a) \\vee (g a)$. As a result, $(\\diam(f \\wedge g)\\quantaleop \\diam(f \\vee g))(a) \\quantalegeq \\diam((fa) \\wedge (ga)) \\quantaleop \\diam((fa) \\vee (ga)) \\quantalegeq \\diam(fa) \\quantaleop \\diam(ga) = (\\diam(f)\\quantaleop\\diam(g))(a)$.\n\\end{proof}\n\nIt is well-known  \\cite{6845021} that any \nfunction $\\delta: L\\to [0,\\infty]$ on a lattice $L$ that is monotone and \\emph{sub-modular} induces a pseudo-metric $d^*: L\\times L \\to [0,\\infty]$ by letting $d^*(a,b)=2\\delta(a\\vee b)-\\delta(a)-\\delta(b)$. In fact, one can decompose this construction: first, one defines a partial pseudometric $d$ on $L$ by $d(a,b) = \\delta(a \\vee b)$, and then $d^*$ is just the distance given by equation \\eqref{eq:pmettomet}: $d^*(a,b) = 2d(a,b)-d(a,a)-d(b,b)$. We can use this way of reasoning to establish that the maps $d_{A}$ are indeed partial metrics:\n\n\\begin{corollary} \\label{corollary:stlc-metric} For all simple types $A$, $\\left(\\Lambda_A/\\oeq_A, \\distances{A}, d_A\\right)$ is a generalized partial metric space, that is to say:\n\\begin{enumerate}\n\\item for all $t,u \\in \\Lambda_A$, $d_A(t,t) \\quantalegeq d_A(t,u)$,\n\\item for all $t,u \\in \\Lambda_A$, if $d_A(t,t) = d_A(t,u) = d_A(u,u)$, then $t \\oeq_A u$,\n\\item for all $t,u \\in \\Lambda_A$, $d_A(t,u) = d_A(u,t)$,\n\\item for all $t,u,v \\in \\Lambda_A$, $d_A(t,v) \\quantaleop d_A(u,u) \\quantalegeq d_A(t,u) \\quantaleop d_A(u,v)$.\n\\end{enumerate}\n\\end{corollary}\n\\begin{proof}\nAs mentioned above, for all $t,u\\in\\Lambda_A$, $d_A(t,u) = \\diam_A(\\tointerval{t} \\vee \\tointerval{u})$, which immediately gives point 3. Since $\\diam_A$ is monotone and $\\tointerval{t} \\vee \\tointerval{t} \\leq \\tointerval{t} \\vee \\tointerval{u}$, we also get point 1.\n\nOne can check (by induction on types) that the restriction of $\\diam_A$ to the ideal generated by the $\\tointerval{t}$ (for $t \\in \\Lambda_A$) is \\emph{strictly} monotone. 
Therefore, if  $d_A(t,t) = d_A(t,u) = d_A(u,u)$, \\textit{i.e.} $\\diam_A(\\tointerval{t}) = \\diam_A(\\tointerval{t} \\vee \\tointerval{u}) = \\diam_A(\\tointerval{u})$,  then $\\tointerval{t} = \\tointerval{t} \\vee \\tointerval{u} = \\tointerval{u}$, so $t \\oeq_A u$.\n\nThe triangular inequality is an immediate consequence of the quasi-sub-modularity of $\\diam_A$: $d(t,v) \\quantaleop d(u,u) = \\diam(\\tointerval{t} \\vee \\tointerval{v}) \\quantaleop \\diam(\\tointerval{u}) \\quantalegeq \\diam((\\tointerval{t} \\vee \\tointerval{u}) \\vee (\\tointerval{u} \\vee \\tointerval{v})) \\quantaleop \\diam((\\tointerval{t} \\vee \\tointerval{u}) \\wedge (\\tointerval{u} \\vee \\tointerval{v})) \\quantalegeq \\diam(\\tointerval{t} \\vee  \\tointerval{u}) \\quantaleop \\diam(\\tointerval{u} \\vee  \\tointerval{v}) = d(t,u) \\quantaleop d(u,v)$.\n\\end{proof}\n\n\n\n", "meta": {"hexsha": "83b843350f56487d3f2581a97d3667ce55e12f94", "size": 25027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "section-stlc.tex", "max_stars_repo_name": "guillaume-geoffroy/pg-csl21", "max_stars_repo_head_hexsha": "bd46697cd614a7251356e9b3e7d7d1ead1576a6a", "max_stars_repo_licenses": ["LPPL-1.3c"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "section-stlc.tex", "max_issues_repo_name": "guillaume-geoffroy/pg-csl21", "max_issues_repo_head_hexsha": "bd46697cd614a7251356e9b3e7d7d1ead1576a6a", "max_issues_repo_licenses": ["LPPL-1.3c"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "section-stlc.tex", "max_forks_repo_name": "guillaume-geoffroy/pg-csl21", "max_forks_repo_head_hexsha": "bd46697cd614a7251356e9b3e7d7d1ead1576a6a", "max_forks_repo_licenses": ["LPPL-1.3c"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.146179402, "max_line_length": 1256, "alphanum_fraction": 0.6722339873, "num_tokens": 8957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127566694177, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5971022702995622}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\\usetikzlibrary{arrows,chains,matrix,positioning,scopes}\n\n\\makeatletter\n\\tikzset{join/.code=\\tikzset{after node path={%\n\\ifx\\tikzchainprevious\\pgfutil@empty\\else(\\tikzchainprevious)%\nedge[every join]#1(\\tikzchaincurrent)\\fi}}}\n\\makeatother\n\n\\tikzset{>=stealth',every on chain/.append style={join},\n         every join/.style={->}}\n\n%\\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}\n%\\institute{Rice University}\n%\\faculty{Faculty of Whatever Sciences}\n%\\department{Department of Mathematics}\n%\\title{Class Notes}\n%\\subtitle{Based on MATH xxx}\n%\\author{\\textit{Author}\\\\Gabriel \\textsc{Gress}}\n%\\supervisor{Linus \\textsc{Torvalds}}\n%\\context{Well, I was bored...}\n%\\date{\\today}\n\n%\\makeindex\n\n\\begin{document}\n\n% \\maketitle\n\n% Notes taken on 06/08/21\n\nConsider modules \\(A,C\\). One question worth exploring is if there exists a module \\(B\\) such that \\(A / B \\cong C\\); that is, \\(B\\) is an extension of \\(C\\) by \\(A\\). The tools we develop to understand this question are exact sequences. If \\(A\\) is isomorphic to a submodule of \\(B \\), there is an injective homomorphism from \\(A\\) to \\(B\\). And if \\(C\\) is isomorphic to the quotient, then there is a surjective homomorphism from \\(B\\) to \\(C\\). This will give us a chain\n\\begin{align*}\n\tA \\to B \\to C\n\\end{align*}\nwhere the homomorphisms are compatible with. We formalize this idea via exact sequences.\n\n\\begin{defn}[Exact Sequences]\n\tLet \\(\\alpha ,\\beta \\) be homomorphisms so that\n\t\\begin{align*}\n\t\tX \\to^{\\alpha } Y \\to^{\\beta }Z.\n\t\\end{align*}\n\tIf \\(\\textrm{Im}(\\alpha ) = \\textrm{Ker}(\\beta )\\), then we say the pair of homomorphisms are \\textbf{exact}.\\\\\n\n\tA sequence of homomorphisms\n\t\\begin{align*}\n\t\t\\ldots \\to X_{n-1} \\to X_n \\to X_{n+1} \\to \\ldots\n\t\\end{align*}\n\tis said to be an \\textbf{exact sequence} if it is exact at every \\(X_n\\) between a pair of homomorphisms.\n\\end{defn}\nHence, our goal is to see whether we can form an exact sequence \\(A\\to B\\to C\\). Our notions of injectivity and surjectivity correspond exactly to the notions of exactness.\n\\begin{prop}\n\tLet \\(A,B,C\\) form \\(R\\)-modules over some ring \\(R\\). Then the sequence\n\t\\begin{align*}\n\t\t0 \\to A \\to^{\\psi }B\n\t\\end{align*}\n\tis exact at \\(A\\) if and only if \\(\\psi \\) is injective. Likewise, the sequence\n\t\\begin{align*}\n\t\tB \\to^{\\varphi }\\to C \\to 0\n\t\\end{align*}\n\tis exact at \\(C\\) if and only if \\(\\varphi \\) is surjective.\n\\end{prop}\nCombining the two ideas, the sequence\n\\begin{align*}\n\t0 \\to A \\to^{\\psi }B \\to^{\\varphi }C \\to 0\n\\end{align*}\nis exact if and only if \\(\\psi \\) is injective, \\(\\varphi \\) is surjective, and \\(\\textrm{Im}(\\psi ) = \\textrm{Ker}(\\varphi )\\).\n\n\\begin{defn}\n\tAn exact sequence of the form\n\t\\begin{align*}\n\t\t0 \\to A \\to^{\\psi }B \\to^{\\varphi }C \\to 0\n\t\\end{align*}\n\tis called an \\textbf{short exact sequence}.\n\\end{defn}\nOur goal then is to determine if two modules admit a short exact sequence, and if so, how many.\\\\\n\nNotice that any exact sequence can be written as a succession of short exact sequences. 
For example, if\n\\begin{align*}\n\tX \\to^{\\alpha }Y \\to^{\\beta }Z\n\\end{align*}\nis exact at \\(Y\\), then equivalently\n\\begin{align*}\n\t0 \\to \\alpha (X) \\to Y \\to Y / \\textrm{Ker}(\\beta ) \\to 0\n\\end{align*}\nis a short exact sequence.\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\nFor fixed \\(A,C\\), there can be many extensions of \\(C\\) by  \\(A\\). Hence, we need to determine a notion of a homomorphism to distinguish exact sequences.\n\n\\begin{defn}[Homomorphism of Short Exact Sequences]\n\tLet\n\t\\begin{align*}\n\t\t0 \\to A \\to B \\to C \\to 0\\\\\n\t\t0 \\to A' \\to B' \\to C' \\to 0\n\t\\end{align*}\n\tbe two short exact sequences of modules. A \\textbf{homomorphism of short exact sequences} is a collection of module homomorphisms \\(\\alpha ,\\beta ,\\gamma \\) such that the following diagram commutes:\n\\begin{center}\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3em, column sep=3em]\n    { 0 & A  & B  & C  & 0 \\\\\n      0 & A' & B' & C' & 0 \\\\ };\n  { [start chain] \\chainin (m-1-1);\n    \\chainin (m-1-2);\n    { [start branch=A] \\chainin (m-2-2)\n        [join={node[right] {$\\alpha $}}];}\n    \\chainin (m-1-3) [join={node[above] {}}];\n    { [start branch=B] \\chainin (m-2-3)\n        [join={node[right] {$\\beta $}}];}\n    \\chainin (m-1-4) [join={node[above] {}}];\n    { [start branch=C] \\chainin (m-2-4)\n        [join={node[right] {$\\gamma $}}];}\n    \\chainin (m-1-5); }\n  { [start chain] \\chainin (m-2-1);\n    \\chainin (m-2-2);\n    \\chainin (m-2-3) [join={node[above] {}}];\n    \\chainin (m-2-4) [join={node[above] {}}];\n    \\chainin (m-2-5); }\n\\end{tikzpicture}\n\\end{center}\nThis is an \\textbf{isomorphism of short exact sequences} if \\(\\alpha ,\\beta ,\\gamma \\) are isomorphisms in which case the extensions \\(B,B'\\) are \\textbf{isomorphic extensions}.\\\\\n\nThe two exact sequences are called \\textbf{equivalent} if \\(A = A'\\), \\(C = C'\\), and there is an isomorphism between them where  \\(\\alpha ,\\gamma \\) are identity. 
In this case \\(B\\) and \\(B'\\) are \\textbf{equivalent extensions}.\n\\end{defn}\nEquivalence of extensions is stronger than just \\(R\\)-module isomorphism between \\(B\\) and \\(B'\\) -- it tells us that there is an \\(R\\)-module isomorphism between \\(B\\) and \\(B'\\) that restricts to an isomorphism from \\(A\\) to \\(A'\\) and induces an isomorphism between the quotients \\(C\\) and \\(C'\\).\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n\\begin{prop}[Short Five Lemma]\n\tLet \\(\\alpha ,\\beta ,\\gamma \\) be a homomorphism of short exact sequences\n\\begin{center}\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3em, column sep=3em]\n    { 0 & A  & B  & C  & 0 \\\\\n      0 & A' & B' & C' & 0 \\\\ };\n  { [start chain] \\chainin (m-1-1);\n    \\chainin (m-1-2);\n    { [start branch=A] \\chainin (m-2-2)\n        [join={node[right] {$\\alpha $}}];}\n    \\chainin (m-1-3) [join={node[above] {}}];\n    { [start branch=B] \\chainin (m-2-3)\n        [join={node[right] {$\\beta $}}];}\n    \\chainin (m-1-4) [join={node[above] {}}];\n    { [start branch=C] \\chainin (m-2-4)\n        [join={node[right] {$\\gamma $}}];}\n    \\chainin (m-1-5); }\n  { [start chain] \\chainin (m-2-1);\n    \\chainin (m-2-2);\n    \\chainin (m-2-3) [join={node[above] {}}];\n    \\chainin (m-2-4) [join={node[above] {}}];\n    \\chainin (m-2-5); }\n\\end{tikzpicture}\n\\end{center}\n\\begin{itemize}\n\t\\item If \\(\\alpha ,\\gamma \\) are injective then so is \\(\\beta \\) \n\t\\item If \\(\\alpha ,\\gamma \\) are surjective then so is \\(\\beta \\) \n\t\\item If \\(\\alpha ,\\gamma \\) are isomorphisms then so is \\(\\beta \\)\n\\end{itemize}\n\\end{prop}\nThese results also hold for short exact sequences of groups.\n\n\\begin{proof}[Proof of Short Five Lemma]\n\t\n\\end{proof}\nThere is always at least one extension of a module \\(C\\) by \\(A\\) given by \\(B = A \\oplus C\\).\n\n\\begin{defn}\n\tLet \\(R\\) be a ring and let\n\t\\begin{align*}\n\t\t0 \\to A \\to^{\\psi }B \\to^{\\varphi }C \\to 0\n\t\\end{align*}\n\tbe a short exact sequence of \\(R\\)-modules. We say the sequence is \\textbf{split} if there is an \\(R\\)-module complement to \\(\\psi (A)\\) in \\(B\\). If this holds, then \\(B = A \\oplus C\\) up to isomorphism by\n\t\\begin{align*}\n\t\tB = \\psi (A) \\oplus C'\n\t\\end{align*}\n\tfor some submodule \\(C'\\), where \\(\\varphi \\) maps \\(C'\\) isomorphically onto \\(C\\).\\\\\n\n\tWe say \\(B\\) is a \\textbf{split extension of \\(C\\) by \\(A\\)}.\n\\end{defn}\nThis is really just the question of existence of a complement to \\(\\psi (A)\\) in \\(B\\) that is isomorphic by \\(\\varphi \\) to \\(C\\).\n\n\\begin{prop}\n\tThe short exact sequence\n\t\\begin{align*}\n\t\t0 \\to A \\to^{\\psi }B \\to^{\\varphi }C \\to 0\n\t\\end{align*}\n\tof \\(R\\)-modules is split if and only if there is an \\(R\\)-module homomorphism \\(\\mu :C \\to B\\) such that \\(\\varphi \\circ \\mu = \\textrm{Id}_C\\).\\\\\n\n\tAny set map \\(\\mu :C\\to B\\) such that \\(\\varphi \\circ \\mu = \\textrm{Id}_C\\) is called a \\textbf{section} of \\(\\varphi \\). If \\(\\mu \\) is a homomorphism, then \\(\\mu \\) is called a \\textbf{splitting homomorphism} for the sequence.\n\\end{prop}\nA section of \\(\\varphi \\) is merely a choice of coset representatives in \\(B\\) for \\(B / \\textrm{Ker}\\varphi  \\cong C\\). 
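For instance (a standard example, phrased in the notation above): consider the short exact sequence of \\(\\mathbb{Z}\\)-modules\n\\begin{align*}\n\t0 \\to \\mathbb{Z} \\to^{\\psi } \\mathbb{Z} \\to^{\\varphi } \\mathbb{Z}/2\\mathbb{Z} \\to 0,\n\\end{align*}\nwhere \\(\\psi \\) is multiplication by \\(2\\) and \\(\\varphi \\) is reduction modulo \\(2\\). The set map \\(\\mu \\) with \\(\\mu (\\bar{0}) = 0\\) and \\(\\mu (\\bar{1}) = 1\\) is a section of \\(\\varphi \\), but its image \\(\\{0,1\\}\\) is not a submodule of \\(\\mathbb{Z}\\). In fact no section of \\(\\varphi \\) is a homomorphism: a homomorphism \\(\\mu \\) would need \\(2\\mu (\\bar{1}) = \\mu (\\bar{0}) = 0\\), forcing \\(\\mu = 0\\) and \\(\\varphi \\circ \\mu \\neq \\textrm{Id}\\). Hence this sequence does not split.\n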
A section is a homomorphism if the chosen set of coset representatives forms a submodule, in which case this submodule gives a complement to \\(\\psi (A)\\) in \\(B\\).\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n\\begin{prop}\n\tLet\n\t\\begin{align*}\n\t\t0 \\to A \\to^{\\psi }B \\to^{\\varphi }C \\to 0\n\t\\end{align*}\n\tbe a short exact sequence of modules. Then \\(B = \\psi (A) \\oplus C'\\) for some submodule \\(C'\\) of \\(B\\) with \\(\\varphi (C') \\cong C\\) if and only if there is a homomorphism \\(\\lambda :B\\to A\\) such that \\(\\lambda \\circ \\psi = \\textrm{Id}_A\\).\n\\end{prop}\nFor groups, this is a stronger notion. The existence of a splitting homomorphism on the left end of the sequence gives that the extension group is a direct product (instead of a semidirect product). Of course, in modules there is no distinction as the underlying groups are abelian.\n\n\\subsection{Projective Modules}\n\\label{sub:projective_modules}\n\nLet \\(R\\) be a ring and suppose that \\(\\prescript{}{R}M\\) is an extension of \\(N\\) by \\(L\\), so that\n\\begin{align*}\n\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0.\n\\end{align*}\nIf another \\(R\\)-module \\(D\\) has an \\(R\\)-module homomorphism into \\(L\\) or \\(N\\), does it induce a homomorphism from \\(D\\) into \\(M\\)?\\\\\n\nWe can see directly that if \\(f \\in \\textrm{Hom}_R(D,L)\\) then \\(f' = \\psi \\circ f\\) is an \\(R\\)-module homomorphism from \\(D\\) to \\(M\\):\n\\begin{align*}\n\t\\psi': \\textrm{Hom}_R(D,L) \\to \\textrm{Hom}_R(D,M)\\\\\n\tf \\mapsto f' = \\psi \\circ f.\n\\end{align*}\n\n\\begin{prop}\n\tLet \\(D,L,\\) and \\(M\\) form \\(R\\)-modules and let \\(\\psi :L \\to M\\) be an \\(R\\)-module homomorphism. Then the map\n\t\\begin{align*}\n\t\t\\psi': \\textrm{Hom}_R(D,L) &\\to \\textrm{Hom}_R(D,M)\\\\\n\t\tf &\\mapsto f' = \\psi \\circ f\n\t\\end{align*}\n\tis a homomorphism of abelian groups. If \\(\\psi \\) is injective, then \\(\\psi '\\) is injective. Hence, if\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M\n\t\\end{align*} is exact, then\n\t\\begin{align*}\n\t\t0 \\to \\textrm{Hom}_R(D,L) \\to^{\\psi '} \\textrm{Hom}_R(D,M)\n\t\\end{align*}\n\tis exact.\n\\end{prop}\n\nUnfortunately, if there is an \\(R\\)-module homomorphism \\(f:D \\to N\\), it isn't always the case that \\(f\\) \\textbf{lifts} to an \\(R\\)-module homomorphism \\(F:D\\to M\\) by\n\\begin{align*}\n\t\\varphi ': \\textrm{Hom}_R(D,M) &\\to \\textrm{Hom}_R(D,N)\\\\\n\tF &\\mapsto F' = \\varphi \\circ F\n\\end{align*}\nThis holds if and only if \\(f\\) is in the image of \\(\\varphi '\\).\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n\\begin{thm}\n\tLet \\(D,L,M,N\\) form \\(R\\)-modules. If\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis exact, then\n\t\\begin{align*}\n\t\t0 \\to \\textrm{Hom}_R(D,L) \\to^{\\psi '} \\textrm{Hom}_R(D,M) \\to^{\\varphi '} \\textrm{Hom}_R(D,N)\n\t\\end{align*}\n\tis exact.\\\\\n\n\tA homomorphism \\(f:D\\to N\\) lifts to a homomorphism \\(F:D\\to M\\) if and only if \\(f \\in \\textrm{Hom}_R(D,N)\\) is in the image of \\(\\varphi '\\). 
In general \\(\\varphi '\\) is surjective if and only if every homomorphism from \\(D\\) to \\(N\\) lifts to a homomorphism from \\(D\\) to \\(M\\), in which case the sequence above extends to a short exact sequence.\\\\\n\n\tThe sequence above is exact for all \\(R\\)-modules \\(D\\) if and only if\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N\n\t\\end{align*}\n\tis exact.\n\\end{thm}\nHence, by the theorem, the sequence\n\\begin{align*}\n\t0 \\to \\textrm{Hom}_R(D,L) \\to^{\\psi '}\\textrm{Hom}_R(D,M) \\to^{\\varphi '} \\textrm{Hom}_R(D,N) \\to 0\n\\end{align*}\nis in general not a short exact sequence, as \\(\\varphi '\\) may not be surjective. In fact, this sequence is short exact if and only if every \\(f \\in \\textrm{Hom}_R(D,N)\\) lifts to some \\(F \\in \\textrm{Hom}_R(D,M)\\) with \\(\\varphi '(F) = f\\), in which case the set of lifts of a fixed \\(f\\) is in bijection with \\(\\textrm{Hom}_R(D,L)\\) via\n\\begin{align*}\n\tg \\mapsto F + \\psi'(g).\n\\end{align*}\nNotice that if the original sequence is split exact, then the sequence of homomorphisms is also split exact.\n\\begin{prop}\n\tLet \\(D,L,N\\) form \\(R\\)-modules. Then\n\t\\begin{align*}\n\t\t\\textrm{Hom}_R(D,L\\oplus N) \\cong \\textrm{Hom}_R(D,L) \\oplus \\textrm{Hom}_R(D,N)\\\\\n\t\t\\textrm{Hom}_R(L \\oplus N, D) \\cong \\textrm{Hom}_R(L,D) \\oplus \\textrm{Hom}_R(N,D)\n\t\\end{align*}\n\\end{prop}\nOf course this extends by induction to any finite direct sum of \\(R\\)-modules. In other words, the group of module homomorphisms commutes with finite direct sums in either variable.\n\n\\begin{rmrk}\n\tFor infinite direct sums, this does not always hold. In the second variable, the direct sum must be replaced by a direct product on both sides:\n\t\\begin{align*}\n\t\t\\textrm{Hom}_R\\Big(D,\\prod_{i \\in I} N_i\\Big) \\cong \\prod_{i \\in I} \\textrm{Hom}_R(D,N_i)\n\t\\end{align*}\n\tIn the first variable, an arbitrary direct sum is allowed, but the direct sum on the right hand side must still be replaced by a direct product:\n\t\\begin{align*}\n\t\t\\textrm{Hom}_R\\Big(\\bigoplus_{i \\in I} N_i, D\\Big) \\cong \\prod_{i \\in I} \\textrm{Hom}_R(N_i,D)\n\t\\end{align*}\n\\end{rmrk}\n\nHence, a split short exact sequence of \\(R\\)-modules induces a split short exact sequence of abelian groups for every \\(R\\)-module \\(D\\). In fact, the converse holds:\n\\begin{hw}\n\tProve that if\n\t\\begin{align*}\n\t\t0 \\to \\textrm{Hom}_R(D,L) \\to^{\\psi '} \\textrm{Hom}_R(D,M) \\to^{\\varphi'} \\textrm{Hom}_R(D,N) \\to 0\n\t\\end{align*} is exact for every \\(R\\)-module \\(D\\), then\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis a split short exact sequence.\\\\\n\n\tThis implies that if the original homomorphism sequence is exact for every \\(D\\), then it is in fact split exact for every \\(D\\).\n\\end{hw}\n\n\\begin{prop}\n\tLet \\(P\\) be an \\(R\\)-module. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item Let \\(L,M,N\\) form \\(R\\)-modules. If\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\t\t\\end{align*}\n\t\t\tis a short exact sequence, then\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to \\textrm{Hom}_R(P,L) \\to^{\\psi '} \\textrm{Hom}_R(P,M) \\to^{\\varphi '} \\textrm{Hom}_R(P,N) \\to 0\n\t\t\t\\end{align*}\n\t\t\tis a short exact sequence.\n\t\t\\item Let \\(M,N\\) form \\(R\\)-modules. 
If\n\t\t\t \\begin{align*}\n\t\t\t\tM \\to^{\\varphi }N \\to 0\n\t\t\t\\end{align*}\n\t\t\tis exact, then every \\(R\\)-module homomorphism from \\(P\\) into \\(N\\) lifts to an \\(R\\)-module homomorphism into \\(M\\), so the following diagram commutes:\n\\begin{center}\n\\begin{tikzpicture}% Fix later by making mu dashed\n  \\matrix (m) [matrix of math nodes, row sep=3em, column sep=3em]\n    {  & P  &   \\\\\n      M & N & 0 \\\\ };\n  { [start chain];\n    \\chainin (m-1-2);\n    { [start branch=P] \\chainin (m-2-1)\n        [join={node[left] {$\\mu  $}}];}\n    { [start branch=P'] \\chainin (m-2-2)\n\t[join={node[right] {$f $}}];}}\n  { [start chain] \\chainin (m-2-1);\n    \\chainin (m-2-2) [join={node[above] {\\(\\varphi \\)}}];\n    \\chainin (m-2-3) ;}\n\\end{tikzpicture}\n\\end{center}\n\t\\item If \\(P\\) is a quotient of the \\(R\\)-module \\(M\\) then \\(P\\) is isomorphic to a direct summand of \\(M\\). That is, every short exact sequence\n\t\t\\begin{align*}\n\t\t\t0\\to L \\to M\\to P\\to 0\n\t\t\\end{align*}\n\t\tsplits.\n\t\\item \\(P\\) is a direct summand of a free \\(R\\)-module.\n\t\\end{itemize}\n\\end{prop}\n\n\\begin{defn}[Projective Modules]\n\tAn \\(R\\)-module \\(\\prescript{}{R}P\\) is called \\textbf{projective} if any module \\(M\\) that projects onto \\(P\\) has an isomorphic copy of \\(P\\) as a direct summand.\n\\end{defn}\nA projective module is merely one that satisfies any of the above equivalent conditions.\n\n\\begin{cor}\n\tFree modules are projective modules.\\\\\n\n\tA finitely generated module is projective if and only if it is the direct summand of a finitely generated free module.\\\\\n\n\tEvery module is a quotient of a projective (in fact free) module.\n\\end{cor}\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n% Category theory explanation and functors\n\n\\subsection{Injective Modules}\n\\label{sub:injective_modules}\n\nOf course we can also consider the reverse case-- when does an \\(R\\)-module homomorphism from \\(L\\) or \\(N\\) to \\(D\\) extend to a homomorphism from \\(M\\) to \\(D\\)?\\\\\n\nWe can see that an \\(R\\)-module map from \\(N\\) to \\(D\\) induces a map from \\(M\\) to \\(D\\) by composition:\n\\begin{align*}\n\t\\varphi' : \\textrm{Hom}_R(N,D) &\\to \\textrm{Hom}_R(M,D)\\\\\n\tf &\\mapsto f' = f \\circ \\varphi \n\\end{align*}\nwhich is injective, and hence if the sequence\n\\begin{align*}\n\tM\\to^{\\varphi }N \\to 0\n\\end{align*} is exact, then\n\\begin{align*}\n\t0 \\to \\textrm{Hom}_R(N,D) \\to^{\\varphi '} \\textrm{Hom}_R(M,D)\n\\end{align*} is exact.\\\\\n\nThe reverse does not hold in general.\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n\\begin{thm}\n\tLet \\(D,L,M,N\\) form \\(R\\)-modules. 
If\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis exact, then\n\t\\begin{align*}\n\t\t0 \\to \\textrm{Hom}_R(N,D) \\to^{\\varphi '} \\textrm{Hom}_R(M,D) \\to^{\\psi'} \\textrm{Hom}_R(L,D)\n\t\\end{align*} is exact.\\\\\n\n\tA homomorphism \\(f:L\\to D\\) lifts to a homomorphism \\(F:M\\to D\\) if and only if \\(f\\) is in the image of \\(\\psi '\\).\\\\\n\n\tThe sequence of homomorphisms is exact for all \\(R\\)-modules \\(D\\) if and only if\n\t\\begin{align*}\n\t\tL \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis exact.\n\\end{thm}\nHence the sequence\n\\begin{align*}\n\t0 \\to \\textrm{Hom}_R(N,D) \\to^{\\varphi '}\\textrm{Hom}_R(M,D) \\to^{\\psi '}\\textrm{Hom}_R(L,D) \\to 0\n\\end{align*} is not a short exact sequence in general, as \\(\\psi '\\) may not be surjective.\\\\\n\nOf course, this sequence is exact if the original exact sequence is a split exact sequence (in which case the sequence of homomorphisms is a split exact sequence for every \\(R\\)-module \\(D\\)).\n\n\\begin{hw}\n\tIf \n\t\\begin{align*}\n\t\t0 \\to \\textrm{Hom}_R(N,D) \\to^{\\varphi '}\\textrm{Hom}_R(M,D) \\to^{\\psi '}\\textrm{Hom}_R(L,D) \\to 0\n\t\\end{align*}\n\tis exact for every \\(R\\)-module \\(D\\), then\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis a split short exact sequence.\\\\\n\n\tThis implies that if the homomorphism sequence is exact for every \\(D\\), then it is split exact for every \\(D\\).\n\\end{hw}\n\n\\begin{prop}\n\tLet \\(Q\\) be an \\(R\\)-module. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item Let \\(L,M,N\\) form \\(R\\)-modules. If\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\t\t\\end{align*}\n\t\t\tis a short exact sequence, then\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to \\textrm{Hom}_R(N,Q) \\to^{\\varphi '} \\textrm{Hom}_R(M,Q) \\to^{\\psi '} \\textrm{Hom}_R(L,Q) \\to 0\n\t\t\t\\end{align*}\n\t\t\tis also a short exact sequence.\n\t\t\\item Let \\(L,M\\) form \\(R\\)-modules. If \\(0 \\to L\\to^{\\psi }M\\) is exact, then every \\(R\\)-module homomorphism from \\(L\\) to \\(Q\\) lifts to an \\(R\\)-module homomorphism of \\(M\\) into \\(Q\\). That is, the following diagram commutes:\n\\begin{center}\n\\begin{tikzpicture}% Fix later by making F dashed\n  \\matrix (m) [matrix of math nodes, row sep=3em, column sep=3em]\n    { 0 & L  & M  \\\\\n       & Q &  \\\\ };\n  { [start chain];\n    \\chainin (m-1-1);\n    \\chainin (m-1-2);\n    { [start branch=L] \\chainin (m-2-2)\n    [join={node[left] {\\(f\\) }}];}\n    \\chainin (m-1-3) [join={node[above] {\\(\\psi \\)}}];\n    { [start branch=M] \\chainin (m-2-2)\n\t[join={node[right] {$F $}}];}}\n\\end{tikzpicture}\n\\end{center}\n\\item If \\(Q\\) is a submodule of the \\(R\\)-module \\(M\\), then \\(Q\\) is a direct summand of \\(M\\). That is, every short exact sequence\n\t\\begin{align*}\n\t\t0 \\to Q \\to M \\to N \\to 0\n\t\\end{align*}\n\tsplits.\n\t\\end{itemize}\n\\end{prop}\n\n\\begin{defn}[Injective Modules]\n\tAn \\(R\\)-module \\(\\prescript{}{R}Q\\) is called \\textbf{injective} if for any module \\(M\\) that \\(Q\\) injects into, \\(M\\) has an isomorphic copy of \\(Q\\) as a direct summand.\n\\end{defn}\n\n\\begin{exmp}\n\t\t\n\\end{exmp}\n\n%Category theory stuff again\n\nUnfortunately, there is not a nice equivalent to the direct summand of a free \\(R\\)-module condition from projective modules. 
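For instance, \\(\\mathbb{Q}\\) is an injective \\(\\Z\\)-module since it is divisible, yet it is not a direct summand of any free \\(\\Z\\)-module: a nonzero free \\(\\Z\\)-module contains no divisible submodule other than \\(0\\). 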
Instead:\n\\begin{prop}[Baer's Criterion]\n\tLet \\(Q\\) form an \\(R\\)-module. \\(\\prescript{}{R}Q\\) is injective if and only if for every left ideal \\(I\\triangleleft R\\), any \\(R\\)-module homomorphism \\(g:I\\to Q\\) can be extended to an \\(R\\)-module homomorphism \\(G:R\\to Q\\).\n\\end{prop}\nIf \\(R\\) is a PID, then \\(Q\\) is injective if and only if \\(rQ = Q\\) for every nonzero \\(r \\in R\\). Hence, a \\(\\Z\\)-module is injective if and only if it is divisible. When \\(R\\) is a PID, quotient modules of injective \\(R\\)-modules are injective.\n\\begin{proof}\n\t\n\\end{proof}\n\n\\begin{cor}\n\tEvery \\(\\Z\\)-module is a submodule of an injective \\(\\Z\\)-module.\n\\end{cor}\nThis can be useful to prove the more general statement:\n\\begin{thm}\n\tLet \\(R\\) be a ring and \\(\\prescript{}{R}M\\) an \\(R\\)-module. Then \\(M\\) is contained in an injective \\(R\\)-module.\n\\end{thm}\n\n\\subsection{Flat Modules}\n\\label{sub:flat_modules}\n\nSuppose \\(D\\) forms a right \\(R\\)-module. For every homomorphism \\(f:X\\to Y\\) of left \\(R\\)-modules, we obtain a homomorphism\n\\begin{align*}\n\t1 \\otimes f: D \\otimes_R X \\to D \\otimes_R Y\n\\end{align*}\nof abelian groups. If \\(D \\) is also an \\((S,R)\\)-bimodule, then \\(1 \\otimes f\\) is a homomorphism of left \\(S\\)-modules.\n\n\\begin{thm}\n\tLet \\(D\\) form a right \\(R\\)-module, and \\(L,M,N\\) form left \\(R\\)-modules. If\n\t\\begin{align*}\n\t\t0 \\to L \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis exact, then\n\t\\begin{align*}\n\t\tD \\otimes_R L \\to^{1 \\otimes \\psi }D \\otimes_R M \\to^{1 \\otimes \\varphi }D \\otimes_R N \\to 0\n\t\\end{align*}\n\tis exact.\\\\\n\n\tIf \\(D\\) is an \\((S,R)\\)-bimodule then the sequence of abelian groups is an exact sequence of left \\(S\\)-modules. Hence, if \\(S = R\\) is commutative, then the sequence of abelian groups is an exact sequence of \\(R\\)-modules. The map \\(1 \\otimes \\psi \\) is not necessarily injective, so the sequence may not extend to a short exact sequence.\\\\\n\n\tThe sequence of abelian groups is exact for all right \\(R\\)-modules \\(D\\) if and only if\n\t\\begin{align*}\n\t\tL \\to^{\\psi }M \\to^{\\varphi }N \\to 0\n\t\\end{align*}\n\tis exact.\n\\end{thm}\n\n\\begin{prop}\n\tLet \\(A\\) form a right \\(R\\)-module. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item Let \\(L,M,N\\) form left \\(R\\)-modules. If\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to L \\to^{\\psi } M \\to^{\\varphi }N \\to 0\n\t\t\t\\end{align*}\n\t\t\tis a short exact sequence, then\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to A \\otimes_R L \\to^{1 \\otimes \\psi } A \\otimes_R M \\to^{1 \\otimes \\varphi } A \\otimes_R N \\to 0\n\t\t\t\\end{align*}\n\t\t\tis a short exact sequence.\n\t\t\\item Let \\(L,M\\) be left \\(R\\)-modules. If\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to L \\to^{\\psi }M\n\t\t\t\\end{align*} is an exact sequence of left \\(R\\)-modules, then\n\t\t\t\\begin{align*}\n\t\t\t\t0 \\to A \\otimes_R L \\to^{1 \\otimes \\psi } A \\otimes_R M\n\t\t\t\\end{align*} is an exact sequence of abelian groups.\n\t\\end{itemize}\n\\end{prop}\n\n\\begin{defn}\n\tA right \\(R\\)-module \\(A\\) is called \\textbf{flat} if either of the above equivalent conditions holds.\n\\end{defn}\n\n\\begin{cor}\n\tFree modules are flat. 
Moreover, projective modules are flat.\n\\end{cor}\n\n\\begin{exmp}\n\t\n\\end{exmp}\n\n\\begin{thm}\n\tLet \\(R,S\\) be rings, let \\(A\\) form a right \\(R\\)-module, let \\(B\\) form an \\((R,S)\\)-bimodule and let \\(C\\) form a right \\(S\\)-module. Then there is an isomorphism of abelian groups:\n\t\\begin{align*}\n\t\t\\textrm{Hom}_S(A \\otimes_R B, C) \\cong \\textrm{Hom}_R(A, \\textrm{Hom}_S(B,C))\n\t\\end{align*}\n\\end{thm}\nIf \\(R=S\\) is commutative this is an isomorphism of \\(R\\)-modules with the standard \\(R\\)-module structures.\n\n\\begin{cor}\n\tIf \\(R\\) is commutative then the tensor product of two projective \\(R\\)-modules is projective.\n\\end{cor}\n\n\\end{document}\n", "meta": {"hexsha": "518e7fec9e9156dc0a7737a7563dedfc3e089919", "size": 23023, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Module Theory/Notes/source/ExactSequences.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Module Theory/Notes/source/ExactSequences.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Module Theory/Notes/source/ExactSequences.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.6948275862, "max_line_length": 473, "alphanum_fraction": 0.6408374234, "num_tokens": 8431, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8539127603871312, "lm_q1q2_score": 0.5971022621956338}}
{"text": "\\section{Tree-based Methods}\n\\subsection{Decision Trees}\nDecision Trees are created via the Classification and Regression Trees (CART)\ntraining algorithm. They can be represented as binary trees where, at each node,\nthey partition the data space according to a threshold chosen to minimize either\nGini impurity or entropy across the two child nodes.\n\\subsection{Random Forest}\nRandom forests are a tree-based technique that uses a large number of decision\ntrees built from randomly selected subsets of features. Unlike a single decision\ntree, a random forest is hard to interpret, but its generally good performance\nmakes it a popular algorithm. It averages the decisions across many trees\nto combat overfitting. Random forests are a type of ensemble method.\n\\subsection{Boosting}\nThe idea of boosting methods is to combine several weak learners to form a\nstronger one. The main ones are summed up below:\n\\begin{enumerate}\n  \\item Adaptive boosting: Known as AdaBoost, it places higher weights on\n    misclassified examples to improve on them at the next boosting step.\n  \\item Gradient boosting: Weak learners are trained on the residual errors of\n    the ensemble built so far.\n\\end{enumerate}\nBoosting trains several models sequentially, each improving upon the previous ones.\n", "meta": {"hexsha": "3f411ff85b64f91e9a8a76e339a24e980587cb2a", "size": 1200, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "study_guide/sections/trees.tex", "max_stars_repo_name": "nextBillyonair/StudyGuide", "max_stars_repo_head_hexsha": "3fbb85c1f738878935c18280d728ca7e92aa1414", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-02-18T19:47:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-17T21:49:14.000Z", "max_issues_repo_path": "study_guide/sections/trees.tex", "max_issues_repo_name": "nextBillyonair/StudyGuide", "max_issues_repo_head_hexsha": "3fbb85c1f738878935c18280d728ca7e92aa1414", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "study_guide/sections/trees.tex", "max_forks_repo_name": "nextBillyonair/StudyGuide", "max_forks_repo_head_hexsha": "3fbb85c1f738878935c18280d728ca7e92aa1414", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.5454545455, "max_line_length": 82, "alphanum_fraction": 0.8133333333, "num_tokens": 246, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467675095294, "lm_q2_score": 0.6791787056691697, "lm_q1q2_score": 0.5970977636503566}}
{"text": "\\documentclass[t,usenames,dvipsnames]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, tikz, xcolor}\n\\usetikzlibrary{arrows.meta, calc}\n\n\\title{Circles}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Objectives}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}\n    \\titlepage\n\\end{frame}\n\n\\section{Write the Standard Form of the Equation of a Circle.}\n\n\\begin{frame}{}\nA \\alert{circle} is the set of all points $(x, y)$ in the plane whose distance (the \\textit{radius}) from a fixed point (the \\textit{center}) is constant.   \\newline\\\\\n\n\\begin{center}\n    \\begin{tikzpicture}[scale=0.8]\n    \\draw (0,0) circle (1in);\n    \\draw [fill=black] (0,0) circle (1pt) node [below] {$(h, k)$};\n    \\draw (0,0) -- (45:1in) node [above right] {$(x,y)$};\n    \\draw [fill=black] (45:1in) circle (1pt);\n    \\node at (45:0.5in) [above left] {$r$};\n    \\end{tikzpicture}\n\\end{center}    \n\\pause\nThe equation can be found by incorporating the distance formula (Pythagorean Theorem):\n\\[\nr^2 = (x-h)^2 + (y-k)^2\n\\]\n\\end{frame}\n\n\\begin{frame}{Example 1}\nWrite the standard form of the equation of a circle with center $(-2, 3)$ and radius 5.\n\\begin{align*}\n    \\onslide<2->{(x-(-2))^2 + (y-3)^2 = 5^2} \\\\[10pt]\n    \\onslide<3->{(x+2)^2 + (y-3)^2 = 25} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\nWrite the standard form of the equation of a circle which has $(-1, 3)$ and $(2, 4)$ as the endpoints of the diameter.  \\newline\\\\ \\pause\n\nThe center is the \\alert{midpoint} of the endpoints of the diameter:\n\\begin{align*}\n    \\onslide<3->{h = \\frac{-1+2}{2} \\quad & \\quad k = \\frac{3+4}{2}} \\\\[10pt]\n    \\onslide<4->{h = \\frac{1}{2} \\quad & \\quad k = \\frac{7}{2}} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n\\onslide<1->{The radius is half the length of the diameter.}\n\\begin{align*}\n    \\onslide<2->{d &= \\sqrt{(2-(-1))^2 + (4-3)^2}} \\\\[10pt]\n    \\onslide<3->{&= \\sqrt{10}} \\\\[10pt]\n    \\onslide<4->{r &= \\frac{\\sqrt{10}}{2}} \\\\[8pt]\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n$h = \\frac{1}{2} \\qquad k = \\frac{7}{2} \\qquad r = \\frac{\\sqrt{10}}{2}$\n\\begin{align*}\n\\onslide<1->{\\left(x-\\frac{1}{2}\\right)^2 + \\left(y-\\frac{7}{2}\\right)^2 &= \\left(\\frac{\\sqrt{10}}{2}\\right)^2}  \\\\[10pt]\n\\onslide<2->{\\left(x-\\frac{1}{2}\\right)^2 + \\left(y-\\frac{7}{2}\\right)^2 &= \\frac{10}{4}}    \\\\[10pt]\n\\onslide<2->{\\left(x-\\frac{1}{2}\\right)^2 + \\left(y-\\frac{7}{2}\\right)^2 &= \\frac{5}{2}} \\\\\n\\end{align*}\n\\end{frame}\n\n\\section{Find the Center and Radius of a Circle.}\n\n\\begin{frame}{Example 3}\nFind the center and radius of $(x+2)^2 + (y-1)^2 = 4$.   \\newline\\\\ \\pause\n\n$h = -2 \\qquad k = 1$   \\pause  \\newline\\\\\n\nCenter: $(-2, 1)$   \\pause  \n\n\\begin{align*}\n    \\onslide<4->{r^2 &= 4} \\\\\n    \\onslide<5->{r & = 2} \n\\end{align*}\n\n\\onslide<6->{Radius: 2}\n\\end{frame}\n\n\\begin{frame}{Finding Center and Radius When Not in Standard Form}\nWe can use the techniques of finding vertices of parabolas to find the center and radius of a circle not in standard form.   \\newline\\\\    \\pause \n\n\\begin{enumerate}\n    \\item Move any constants to the other side of the equation. 
\\newline\\\\ \\pause\n    \\item Find the $x$-coordinates of the vertices of the $x$ and $y$ terms. These are your $h$ and $k$, respectively.    \\newline\\\\ \\pause\n    \\begin{itemize}\n        \\item Use $x$'s as the variable when finding the vertex for the $y$'s parabola.  \\newline\\\\ \\pause\n    \\end{itemize}\n    \\item Add the absolute value of the $y$-coordinates of the vertices to the right side.\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}{Example 4a}\nFind the center and radius of each. \\newline\\\\\n(a) \\quad $3x^2 - 6x + 3y^2 +4y - 4 = 0$\n\\begin{align*}\n    \\onslide<2->{{\\color{red}3x^2-6x} + {\\color{blue}3y^2 + 4y} &= 4} \n\\end{align*}\n\\begin{minipage}{0.4\\textwidth}\n\\onslide<3->{\n\\begin{tikzpicture}[scale=0.6]\n\\draw[dashed,gray] (-1,-3) grid (3,2);\n\\draw[<->,>=stealth] (-1.5,0) -- (3.5,0) node [right] {$x$};\n\\draw[<->,>=stealth] (0,-3.5) -- (0,2.5) node [right] {$y$};\n\\draw[<->,>=stealth,color=red,line width=1.25,domain=-0.25:2.25] plot (\\x,{3*\\x*\\x - 6*\\x});\n\\draw[color=red,fill=red] (1,-3) circle (3pt) node [below right] {\\scriptsize $(1,-3)$};\n\\node[color=red, anchor=west] at (2.2,1.2) {\\scriptsize $3x^2-6x$};\n\\end{tikzpicture}\n}\n\\end{minipage}\n\\hspace{0.25cm}\n\\onslide<4->{\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}[scale=0.66]\n\\draw[dashed,gray] (-2,-2) grid (2,2);\n\\draw[<->,>=stealth] (-2.5,0) -- (2.5,0) node [right] {$x$};\n\\draw[<->,>=stealth] (0,-2.5) -- (0,2.5) node [right] {$y$};\n\\draw[<->,>=stealth,color=blue,line width=1.25,domain=-1.6:0.27] plot (\\x,{3*\\x*\\x + 4*\\x});\n\\draw[color=blue,fill=blue] (-0.667,-1.333) circle (3pt) node [below right] {\\scriptsize $\\left(-\\frac{2}{3},-\\frac{4}{3}\\right)$};\n\\node[color=blue, anchor=west] at (0.5,1.1) {\\scriptsize $3x^2+4x$};\n\\end{tikzpicture}\n\\end{minipage}\n}\n\\end{frame}\n\n\\begin{frame}{Example 4a}\n    \\begin{align*}\n    {\\color{red}3(x-1)^2} + {\\color{blue}3\\left(y+\\frac{2}{3}\\right)^2} &= 4 + {\\color{red}|-3|} + {\\color{blue}\\left|-\\frac{4}{3}\\right|} \\\\[10pt]\n    \\onslide<2->{3\\left(x-1\\right)^2 + 3\\left(y+\\frac{2}{3}\\right)^2 &= \\frac{25}{3}} \\\\[10pt]\n    \\onslide<3->{(x-1)^2 + \\left(y+\\frac{2}{3}\\right)^2 &= \\frac{25}{9}} \\\\\n    \\end{align*}\n    \n\\onslide<4->{Center: $\\left(1, -\\frac{2}{3}\\right)$} \\qquad \n\\onslide<5->{Radius: $\\sqrt{\\frac{25}{9}} = \\frac{5}{3}$}\n\\end{frame}\n\n\\begin{frame}{Example 4b}\n(b) \\quad $2x^2 + 5x + 2y^2 - 8y + 1 = 0$\n\\begin{align*}\n\\onslide<2->{{\\color{red}2x^2+5x} + {\\color{blue}2y^2-8y} &= -1}\n\\end{align*}\n\\begin{minipage}{0.5\\textwidth}\n\\onslide<3->{\n\\begin{tikzpicture}\n\\draw[dashed, gray] (-3,-3) grid (1,1);\n\\draw[<->,>=stealth] (-3.5,0) -- (1.5,0) node [right] {$x$};\n\\draw[<->,>=stealth] (0,-3.5) -- (0,1.5) node [above] {$y$};\n\\draw[<->,>=stealth,color=red,line width=1.25,domain=-2.75:0.25] plot (\\x, {2*\\x*\\x + 5*\\x}) node [right] {\\scriptsize $2x^2+5x$};\n\\draw[color=red,fill=red] (-1.25,-3.125) circle (3pt);\n\\node[color=red, anchor=north east] at (-1.25,-3.125) {\\scriptsize $\\left(-\\frac{5}{4}, -\\frac{25}{8}\\right)$};\n% \\node[color=red, anchor=west] at (0,-0.5) {\\scriptsize $2x^2+5x$};\n\\end{tikzpicture} }\n\\end{minipage}\n\\hspace{0.25cm}\n\\onslide<4->{\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}[scale=0.5]\n\\draw[dashed,gray] (-1,-8) grid (4,1);\n\\draw[<->,>=stealth] (-1.5,0) -- (4.5,0) node [right] {$x$};\n\\draw[<->,>=stealth,] (0,-8.5) -- (0,1.5) node [above] 
{$y$};\n\\draw[<->,>=stealth,color=blue,line width=1.25,domain=-0.15:4.15] plot (\\x, {2*\\x*\\x - 8*\\x}) node [right] {\\scriptsize $2x^2-8x$};\n\\draw[color=blue,fill=blue] (2,-8) circle (3pt) node [below] {\\scriptsize $(2,-8)$};\n\\end{tikzpicture}\n\\end{minipage}\n}\n\\end{frame}\n\n\\begin{frame}{Example 4b}\n\\begin{align*}\n    {\\color{red}2\\left(x+\\frac{5}{4}\\right)^2} + {\\color{blue}2\\left(y-2\\right)^2} &= -1 + {\\color{red}\\left|-\\frac{25}{8}\\right|} + {\\color{blue}|-8|} \\\\[8pt]\n    \\onslide<2->{2\\left(x+\\frac{5}{4}\\right)^2 + 2(y-2)^2 &= \\frac{81}{8}} \\\\[8pt]\n    \\onslide<3->{\\left(x+\\frac{5}{4}\\right)^2 + (y-2)^2 &= \\frac{81}{16}}   \\\\\n\\end{align*}\n\n\\onslide<4->{Center: $\\left(-\\frac{5}{4}, 2\\right)$} \\qquad\n\\onslide<5->{Radius: $\\sqrt{\\frac{81}{16}} = \\frac{9}{4}$}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "60352a4da3b4b174320e419e9fe8fafd36dce461", "size": 7326, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Circles(BEAMER).tex", "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Circles(BEAMER).tex", "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Circles(BEAMER).tex", "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "avg_line_length": 36.447761194, "max_line_length": 167, "alphanum_fraction": 0.601965602, "num_tokens": 3055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.845942439250491, "lm_q1q2_score": 0.5970535185051928}}
{"text": "\n\n    \\filetitle{!symbolic}{Attempt to solve the current equations block symbolically using the Symbolic Maths Toolbox}{sstatelang/symbolic}\n\n\t\\paragraph{Syntax}\n \n \\begin{verbatim}\n !equations\n     EQUATION;\n     EQUATION;\n     EQUATION;\n     ...\n \n     !symbolic\n     !solvefor\n         LIST_OF_VARIABLES\n \\end{verbatim}\n \n \\paragraph{Description}\n \n \\paragraph{Example}\n\n\n", "meta": {"hexsha": "acbd1d9254d26373ce7b7ce1c0a541bf6552e8ea", "size": 375, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/sstatelang/symbolic.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/sstatelang/symbolic.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/sstatelang/symbolic.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 15.625, "max_line_length": 138, "alphanum_fraction": 0.6773333333, "num_tokens": 101, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424373085146, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5970535014291611}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{WXY Octonion}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA WXY octonion has the form\n\\begin{equation}\n    a_{0} + a_{1} W + a_{2} X + a_{3} WX + a_{4} Y + a_{5} WY + a_{6} XY + a_{7} WXY\n\\end{equation}\nThese follow from a parabolic Cayley-Dickson construct on the WX quaternions.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Zero-Divisors}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "5940c5a1a5e46982adf9dc50999860066c726ddf", "size": 2322, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/K.tex", "max_stars_repo_name": "meirizarrygelpi/cdc", "max_stars_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/K.tex", "max_issues_repo_name": "meirizarrygelpi/cdc", "max_issues_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/K.tex", "max_forks_repo_name": "meirizarrygelpi/cdc", "max_forks_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.375, "max_line_length": 84, "alphanum_fraction": 
0.1916451335, "num_tokens": 267, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206686206199, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.5970019287769933}}
{"text": "\n\\subsection{Preservation of Value}\n\\label{sec:preservation-of-value}\n\nAs visualized in Figure~\\ref{fig:fund-preservation},\nthe total amount of lovelace in any given chain state\n$\\var{s}\\in\\ChainState$ is completely contained within the values of the six\nvariables:\n\n\\begin{tabular}{||l|l|l|l||}\\hline\\hline\n\n  \\textbf{Variable} & \\textbf{Name in Figure~\\ref{fig:fund-preservation}}\n                    & \\textbf{Nesting Inside Chain State} & \\textbf{Kind} \\\\ \\hline\n  utxo & circulation & s.nes.es.ls.utxoSt & Map over Lovelace Values  \\\\ \\hline\n  deposits & deposits &  s.nes.es.ls.utxoSt & Lovelace Value ($\\Coin$) \\\\ \\hline\n  fees & fees &  s.nes.es.ls.utxoSt & Lovelace Value ($\\Coin$) \\\\ \\hline\n  rewards & reward accounts & s.nes.es.ls.dpstate.dstate  & Map over Lovelace Values  \\\\ \\hline\n  treasury & treasury &  s.nes.es.acnt  & Lovelace Value ($\\Coin$) \\\\ \\hline\n  reserves & reserves & s.nes.es.acnt & Lovelace Value ($\\Coin$) \\\\ \\hline\n  \\hline\n\\end{tabular}\n\n\\noindent\nNotice that $\\var{deposits}$, $\\var{fees}$, $\\var{treasury}$, and $\\var{reserves}$\nare all single lovelace values, while $\\var{utxo}$ and $\\var{rewards}$ are\nmaps whose values are lovelace.\n\nWe define the \\emph{Lovelace Value} of a given chain state as:\n\\begin{definition}[Lovelace Value]\n  \\label{def:val}\n  \\begin{equation*}\n    \\Val(s~\\in~\\var{State}) =\n        \\Val(\\var{utxo}) +\n            \\Val(\\var{deposits}) +\n            \\Val(\\var{fees}) +\n            \\Val(\\var{reserves}) +\n            \\Val(\\var{treasury}) +\n            \\Val(\\var{rewards})\n  \\end{equation*}\n  where\n  \\begin{equation*}\n      \\Val(x \\in \\Coin) = x\n  \\end{equation*}\n  \\begin{equation*}\n      \\Val((\\wcard\\mapsto (y \\in \\Coin))^{*}) = \\sum y\n  \\end{equation*}\n\\end{definition}\n\n\\noindent\nFor any state that is used in a given subtransition of $\\mathsf{CHAIN}$,\nwe define $\\Val{}$ in an analogous way, setting the value of any variable that is not explicitly\nrepresented in the state to zero.\nFor example, given $\\var{utxoSt}\\in\\UTxOState$,\n\\begin{equation*}\n  \\Val(\\var{utxoSt}) =\n  \\left(\\sum_{\\wcard\\mapsto(\\wcard,~v)\\in\\var{utxo}}v\\right) + \\var{deposits} + \\var{fees}\n\\end{equation*}\n\n\\noindent\nThe key property that we want to prove is that no semantic transition changes the value that\nis captured in the state ($\\Val(s)$).\nThis property is easy to state: intuitively,\nthe \\emph{Lovelace Value} before the transition is the same as the\n\\emph{Lovelace Value} after that transition.\n\n\\begin{theorem}[Preservation of Value]\n  \\label{thm:chain-pres-of-value}\n  For all environments $e$, blocks $b$, and states $s$, $s'$, if\n  \\begin{equation*}\n    e\\vdash s\\trans{\\hyperref[fig:rules:chain]{chain}}{b}s'\n  \\end{equation*}\n  then\n  \\begin{equation*}\n    \\Val(s) = \\Val(s')\n  \\end{equation*}\n\\end{theorem}\n\n\\noindent\nWe will prove the soundness of Theorem~\\ref{thm:chain-pres-of-value} via a few lemmas.\n\n\\begin{lemma}\n  \\label{lemma:value-sum-pres-1}\n  For any mapping $m:A\\mapsto\\Coin$ and set $s\\in\\powerset{A}$,\n  \\begin{equation*}\n    \\Val(\\var{m}) = \\Val(s\\subtractdom m) + \\Val(s\\restrictdom m)\n  \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n  easy\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:value-sum-pres-2}\n  For any mappings $m_1, m_2:A\\mapsto\\Coin$,\n  if $\\dom{m_1}\\cap\\dom{m_2}=\\emptyset$,\n  then\n  \\begin{equation*}\n    
\\Val(m_1\\cup m_2) = \\Val(m_1) + \\Val(m_2)\n  \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n  easy\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:utxo-pres-of-value}\n  For all environments $e$, transactions $t$, and states $s$, $s'$, if\n  \\begin{equation*}\n    e\\vdash s\\trans{\\hyperref[fig:rules:utxo-shelley]{utxo}}{t}s'\n  \\end{equation*}\n  then\n  \\begin{equation*}\n    \\Val(s) + w = \\Val(s')\n  \\end{equation*}\n  where $w = \\fun{wbalance}~(\\fun{txwdrls}~{t})$.\n\\end{lemma}\n\n\\begin{proof}\n  The proof is essentially unfolding the definition of the predicate\n  \\begin{equation}\n    \\label{cons-is-prod}\n    \\consumed{pp}{utxo}{t} = \\produced{pp}{stpools}{t}\n  \\end{equation}\n  and applying a little algebra.\n%\nIf we let:\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      k & \\keyRefunds{pp}{stkCreds}{t} \\\\\n      f & \\txfee{t} \\\\\n      d & \\totalDeposits{pp}{stpools}{(\\txcerts{t})} \\\\\n    \\end{array}\n  \\end{equation*}\n  then equation~\\ref{cons-is-prod} can be rewritten as:\n  \\begin{equation*}\n    \\Val(\\txins{t} \\restrictdom{\\var{utxo}}) + w + k = \\Val(\\outs{t}) + f + d\n  \\end{equation*}\n  where $\\outs{}$ is defined in Figure~\\ref{fig:functions:utxo} and returns a value of type $\\UTxO$.\n  Therefore, moving $k$ to the right and adding $\\txins{t} \\subtractdom{\\var{utxo}}$ to each side,\n  \\begin{equation*}\n    \\Val(\\txins{t} \\restrictdom{\\var{utxo}}) + \\Val(\\txins{t} \\subtractdom{\\var{utxo}}) + w\n    = \\Val(\\outs{t}) + f + d - k + \\Val(\\txins{t} \\subtractdom{\\var{utxo}})\n  \\end{equation*}\n  (Though not needed for the proof at hand,\n  note that $d-k$ is non-negative since the deposits will always be large enough to cover\n  the current obligation. See Theorem~\\ref{thm:non-neg-deposits}.)\n%\n  It then follows that:\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}lr}\n      \\Val(\\var{utxo}) + w\n    & \\Val(\\outs{t}) + f + d - k + \\Val(\\txins{t} \\subtractdom{\\var{utxo}})\n    & \\text{(by Lemma~\\ref{lemma:value-sum-pres-1})}\n    \\\\\n    & \\Val((\\txins{t} \\subtractdom{\\var{utxo}})\\cup\\outs{t}) + (d - k) + f\n    & \\text{(by Lemma~\\ref{lemma:value-sum-pres-2})}\n    \\end{array}\n  \\end{equation*}\n  Note that in order to apply Lemma~\\ref{lemma:value-sum-pres-2} above,\n  it must be true that $(\\txins{t} \\subtractdom{\\var{utxo}})$ and $(\\outs{t})$\n  have disjoint domains, which follows from the uniqueness of the transaction IDs.\n\n  Therefore, by adding the deposits and fees from $s$ to the equality above,\n  it follows that $\\Val(s) + w = \\Val(s')$.\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:deleg-pres-of-value}\n  For all environments $e$, transactions $c$, and states $s$, $s'$, if\n  \\begin{equation*}\n    e\\vdash s\\trans{\\hyperref[fig:delegation-rules]{deleg}}{c}s'\n  \\end{equation*}\n  then\n  \\begin{equation*}\n    \\Val(s) = \\Val(s')\n  \\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n  The only variable with value in this transition is \\var{rewards}.\n  Only two of the rules in $\\mathsf{DELEG}$ can change \\var{rewards},\n  namely $\\mathsf{Deleg{-}Reg}$ and $\\mathsf{Deleg{-}Dereg}$.\n  However, $\\mathsf{Deleg{-}Reg}$ only adds a zero value,\n  and $\\mathsf{Deleg{-}Dereg}$ only removes a zero value.\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:delegs-pres-of-value}\n  For all environments $e$, certificates $\\Gamma$, and states $s$, $s'$, if\n  \\begin{equation*}\n    e\\vdash 
s\\trans{\\hyperref[fig:rules:delegation-sequence]{delegs}}{\\Gamma}s'\n  \\end{equation*}\n  then\n  \\begin{equation*}\n    \\Val(s) = \\Val(s') + w\n  \\end{equation*}\n  where $w = \\fun{wbalance}~(\\fun{txwdrls}~{t})$,\n  and $t$ is the transaction in the environment $e$.\n\\end{lemma}\n\n\\begin{proof}\n  The proof is by induction on the length of $\\Gamma$.\n  Note that the only variable with value in this transition is \\var{rewards}.\n\n  \\vspace{2ex}\n  \\noindent\n  \\emph{In the base case}, we look at the rule $\\mathsf{Seq{-}delg{-}base}$.\n  Since $\\var{wdrls}\\subseteq\\var{rewards}$, then\n  $\\var{rewards} = \\var{wdrls}\\cup\\var{(\\var{rewards}\\setminus\\var{wdrls})}$.\n%\n  Therefore\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}lr}\n      \\Val{(\\var{rewards})}\n      & \\Val{(\\var{rewards}\\setminus\\var{wdrls})} + \\Val{(\\var{wdrls})}\n      & \\text{by Lemma~\\ref{lemma:value-sum-pres-2}}\n      \\\\\n      & \\Val{(\\var{rewards}\\setminus\\var{wdrls})} + w\n      & \\text{by definition}\n      \\\\\n      & \\Val\\left(\\var{rewards}\\unionoverrideRight\\{(w, 0) \\mid w \\in \\dom \\var{wdrls}\\}\\right) + w\n    \\end{array}\n  \\end{equation*}\n  Therefore $\\Val(s) = \\Val(s') + w$.\n\n  \\vspace{2ex}\n  \\noindent\n  \\emph{In the inductive case}, we look at the rule $\\mathsf{Seq{-}delg{-}ind}$.\n  In this case, the lemma then follows directly from Lemma~\\ref{lemma:deleg-pres-of-value}.\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:poolreap-pres-of-value}\n  For all environments $e$, epoch $\\epsilon$, and states $s$, $s'$, if\n  \\begin{equation*}\n    e\\vdash s\\trans{\\hyperref[fig:rules:pool-reap]{poolreap}}{\\epsilon}s'\n  \\end{equation*}\n  then\n  \\begin{equation*}\n    \\Val(s) = \\Val(s')\n  \\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n  The value in the $\\mathsf{POOLREAP}$ transition is contained in\n  $\\var{deposits}$, $\\var{treasury}$, and $\\var{rewards}$.\n  Notice that $\\var{unclaimed}$ is added to $\\var{treasury}$\n  and subtracted from $\\var{deposits}$.\n  Moreover, $\\var{refunded}$ is subtracted from $\\var{deposits}$.\n  (Note that $\\var{deposits}-(\\var{unclaimed}+\\var{refunded})$\n  is non-negative by Theorem~\\ref{thm:non-neg-deposits}.)\n  It therefore suffices to show that\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n    \\Val(\\var{rewards}\\unionoverridePlus\\var{refunds})\n    & \\Val(\\var{rewards}) + \\Val(\\var{refunds})\n    \\\\\n    & \\Val(\\var{rewards}) + \\var{refunded}\n    \\end{array}\n  \\end{equation*}\n  But this is clear from the definition of $\\unionoverridePlus$.\n\\end{proof}\n\n\\begin{lemma}\n  \\label{lemma:ru-pres-of-value}\n  For every $(\\Delta t,~\\Delta r,~\\var{rs},~\\Delta f)$ in the range of $\\fun{createRUpd}$,\n  \\begin{equation*}\n    \\Delta t + \\Delta r + \\Val(rs) + \\Delta f = 0\n  \\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n  In the definition of $\\fun{createRUpd}$ in Figure~\\ref{fig:functions:reward-update-creation},\n  we see that:\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      \\var{rewardPot} & \\var{feeSS} + \\Delta r \\\\\n      \\var{R} & \\var{rewardPot} - \\Delta t_1 \\\\\n      \\Delta t_2 & R - \\Val(\\var{rs})\\\\\n      \\Delta t & \\Delta t_1 + \\Delta t_2 \\\\\n    \\end{array}\n  \\end{equation*}\n  Therefore\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      (\\var{feeSS} + \\Delta r) & \\var{rewardPot} = R + \\Delta t_1 = \\Delta t_2 + \\Val(rs) + \\Delta t_1  \\\\\n      0 & (\\Delta t_1 + \\Delta t_2 ) - \\Delta r + 
\\Val(rs)- \\var{feeSS} \\\\\n      0 & \\Delta t - \\Delta r + \\Val(rs)- \\var{feeSS} \\\\\n    \\end{array}\n  \\end{equation*}\n  It then suffices to notice that $\\fun{createRUpd}$ returns\n  $(\\Delta t,-~\\Delta r,~\\var{rs},~-\\var{feeSS})$.\n\\end{proof}\n\n\\noindent\n\nNote that Lemma~\\ref{lemma:ru-pres-of-value} is not strictly needed for the proof of\nTheorem~\\ref{thm:chain-pres-of-value}, since the $\\mathsf{NEWEPOCH}$ transition\nrequires that $\\Delta t + \\Delta r + \\Val(rs) + \\Delta f = 0$ holds.\nIt does, however, give us confidence that the $\\mathsf{CHAIN}$ transition can proceed.\n\nWe are now ready to prove Theorem~\\ref{thm:chain-pres-of-value}.\n\n\\begin{proof}\n  For a given transition $\\mathsf{TR}$, let \\POV{TR}\n  be the statement:\n\n  \\begin{tabular}{l}\n    for all environments $e$, signals $\\sigma$, and states $s$, $s'$,\\\\\n    $e\\vdash s\\trans{tr}{\\sigma}s'~\\implies~\\Val(s) = \\Val(s')$.\n  \\end{tabular}\n\n  \\noindent\n  Our goal is to prove \\POV{CHAIN}.\n  Lemmas~\\ref{lemma:utxo-pres-of-value} and \\ref{lemma:delegs-pres-of-value} imply \\POV{LEDGER},\n  since $\\mathsf{UTXOW}$ transforms state exactly as $\\mathsf{UTXO}$ does.\n  \\POV{LEDGERS} then follows by straightforward induction on the length of $\\Gamma$:\n  the base case is trivial;\n  and the inductive case follows directly from \\POV{LEDGER}.\n%\n  \\POV{SNAP} holds trivially, since it contains no value.\n  Similarly, \\POV{NEWPP} holds since $\\var{diff}$ is added to $\\var{reserves}$\n  and subtracted from $\\var{deposits}$.\n  Therefore \\POV{EPOCH} holds by Lemma~\\ref{lemma:poolreap-pres-of-value}.\n  \\POV{MIR} holds since\n  $\\Val(i_{rwd}')=\\var{tot}$ in Figure~\\ref{fig:rules:mir}.\n  Moreover, \\POV{NEWEPOCH} holds in the presence of $\\fun{applyRUpd}$\n  since the transition requires $\\Delta t + \\Delta r + \\Val(rs) + \\Delta f = 0$.\n  \\POV{CHAIN} easily follows from this.\n\\end{proof}\n\n\\subsection{Non-negative Deposit Pot}  % TODO - this section is out of date due to no decaying deposits\n\\label{sec:non-negative-deposit-pot}\n\nThe \\emph{deposit pot} (the variable $\\var{deposits}$ in the UTxO State)\nrepresents the amount of \\emph{lovelace} that is set aside by the system as a whole for refunding deposits.\nDeposits are added to this pot, which then decays exponentially over time,\nand is also depleted by any refunded deposits.\nAt an epoch boundary, the decayed parts of any deposits (including, possibly, deposits for any transactions that will complete in future epochs)\nwill be distributed as additional \\emph{rewards}, as described in~\\cite{delegation_design}.\nSince $\\var{deposits}$ is only used to record the value of future refunds or rewards whose costs have\nalready been incurred, both it and any reward value will always be non-negative.\nNote that there are two types of deposits which are recorded in the same pot: those for stake keys; and those for stake pools.\nStake keys are deregistered in the slot in which the deregistration certificates\nare processed. 
Stake pools, however, are staged for retirement on epoch boundaries.\n%\nThe following theorem ensures that the deposit pot is properly maintained\nand will always be large enough to meet all of its obligations.\n\n\n\\begin{figure}[h!]\n  \\begin{tabular}{||l|l|l|l||}\\hline\\hline\n\n    \\textbf{Variable} & \\textbf{Value}\n                      & \\textbf{Nesting Inside Chain State} & \\textbf{Kind} \\\\ \\hline\n    deposits & 0 &  s.nes.es.ls.utxoSt & $\\Coin$ \\\\ \\hline\n    stkCreds & $\\emptyset$ & s.nes.es.ls.dpstate.dstate.stkCreds\n             & $\\StakeCreds$ ($\\Credential\\mapsto\\Slot$)  \\\\ \\hline\n    stpools & $\\emptyset$ & s.nes.es.ls.dpstate.pstate.stpools\n            & $\\StakePools$ ($\\KeyHash\\mapsto\\Slot$)  \\\\ \\hline\n  \\end{tabular}\n  \\caption{Initial Chain State}\n  \\end{figure}\n\n\\begin{theorem}[Non-negative Deposit Pot]\n  \\label{thm:non-neg-deposits}\n  Let $n\\in\\N$, and let $c_0\\in\\ChainState$ be a chain state in which $\\var{deposits} ~=~0$, $\\var{stkCreds}~=~\\emptyset$ and $\\var{stPools}~=~\\emptyset$, as shown above:\n%  \\\\~\\\\\n  If\n  \\begin{equation*}\n    s_0\\vdash c_0\\trans{\\hyperref[fig:rules:chain]{chain}}{b_0}c_1,~~\n    s_1\\vdash c_1\\trans{\\hyperref[fig:rules:chain]{chain}}{b_1}c_2,~~\n    \\ldots,~~\n    s_n\\vdash c_n\\trans{\\hyperref[fig:rules:chain]{chain}}{b_n}c_{n+1},~~n \\ge 0\n  \\end{equation*}\n  is a sequence of valid $\\mathsf{CHAIN}$ transitions,\n  then $\\forall i, 0 \\le i \\le n, \\var{deposits}~(c_{i+1}) \\ge 0$.\n\\end{theorem}\n\n\\begin{proof}\n\n  We will prove a slightly stronger condition, namely that some stronger invariants hold\n  most of the time, and that when they do fail to hold, $\\var{deposits}$ is still non-negative.\n  These stronger invariants will require a few additional definitions.\n%\n  Given a slot $s$, let $\\ell(s)$ be the first slot of the epoch that $s$ occurs in,\n  that is $\\ell = \\fun{firstSlot}\\circ\\fun{epoch}$.\n  Given a mapping $m:\\mathsf{T}\\to\\Slot$ and a slot $s\\in\\Slot$,\n  let $\\fun{sep}$ be the function that separates $m$ into two maps,\n  those whose value is strictly less than $s$ and those whose value is at least $s$.\n  So,\n  \\begin{equation*}\n    \\fun{sep}~m~s =\n    \\left(\\{x\\mapsto t \\in m~\\mid~t<s\\},~\\{x\\mapsto t \\in m~\\mid~t\\geq s\\}\\right)\n  \\end{equation*}\n\n\n  \\noindent\n  If we assume that the \\emph{protocol parameters}, $pp$, are fixed\\footnote{Note that the\n    protocol parameters can only change in the $\\mathsf{NEWPP}$ transition.}, then we can provide convenience functions\n  $R_c$ and $R_p$ for the \\emph{stake credential} and \\emph{stake pool} refunds, respectively:\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      R_c~s_0~s_1 & \\refund{d_{val}}{d_{min}}{\\lambda_d}{s_1-s_0} \\\\\n      R_p~s_0~s_1 & \\refund{p_{val}}{p_{min}}{\\lambda_p}{s_1-s_0} \\\\\n    \\end{array}\n  \\end{equation*}\n  where $d_{val}$, $d_{min}$, $\\lambda_d$, $p_{val}$, $p_{min}$, $\\lambda_p$\n  are the protocol parameter values from $pp$, and $\\fun{refund}$ is defined in\n  Figure~\\ref{fig:functions:deposits-refunds}.\n  We let \\DBE{c}{s} (``Deposits (precisely) Big Enough'') be the following property:\n  \\begin{equation}\\tag{DBE}\\label{DBE}\n    \\var{deposits}\n    = \\left(\\sum_{\\wcard\\mapsto t\\in C_{old}}R_c~t~\\ell(s)\\right)\n    + |C_{new}|\\cdot d_{val}\n    + \\left(\\sum_{\\wcard\\mapsto t\\in P_{old}}R_p~t~\\ell(s)\\right)\n    + |P_{new}|\\cdot p_{val}\n  
\\end{equation}\n  where\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      C_{old},~C_{new} & \\fun{sep}~\\var{stkCreds}~{\\ell(s)} \\\\\n      P_{old},~P_{new} & \\fun{sep}~\\var{stpools}~{\\ell(s)},\n    \\end{array}\n  \\end{equation*}\n  for some slot, $s$, where $\\var{pp}$, $\\var{stkCreds}$, $\\var{stpools}$ are in the corresponding chain state, $c$.\n%\n  In other words, \\DBE{c}{s} asserts that the deposit pot is equal to the\n  sum of the deposit refunds that were available at the previous epoch boundary,\n  plus the sum of the initial deposit values for all the deposits from the current epoch.\n\n  Notice that for a chain state $c$ and slot $s$, if the range of\n  $\\var{stkCreds}$ and $\\var{stpools}$ contains only slots from the previous epoch,\n  then \\DBE{c}{s} is equivalent to\n  \\begin{equation}\\tag{DEO}\\label{DEO}\n    \\var{deposits} = \\obligation{pp}{stkCreds}{stpools}{\\ell(s)}\n  \\end{equation}\n  where $\\fun{obligation}$ is defined in Figure~\\ref{fig:funcs:epoch-helper-rewards}.\n%\n  It is generally true that \\DBE{c'}{s_i} holds after each subtransition of\n  $s_i\\vdash c_i\\trans{\\hyperref[fig:rules:chain]{chain}}{b_i}c_{i+1}$.\n  However, this invariant can fail to hold after the\n  $\\hyperref[fig:delegation-transitions]{\\mathsf{DELEG}}$ transition,\n  since this transition can add and remove stake credentials, and can also add stake pools,\n  but the deposit pot is not adjusted accordingly\n  until the next subtransition of $\\hyperref[fig:rules:ledger]{\\mathsf{LEDGER}}$,\n  namely $\\hyperref[fig:rules:utxo-shelley]{\\mathsf{UTXO}}$.\n%\n  The invariant can also fail to hold if the slot increases while the chain state remains the same.\n  That is, if \\DBE{c_{i+1}}{s_i} holds, then \\DBE{c_{i+1}}{s_{i+1}} can fail to hold if\n  $\\epoch{s_i} < \\epoch{s_{i+1}}$, since the value of the deposit pot\n  on the left hand side of equation~\\ref{DBE} remains the same, but the\n  refunded values become smaller\\footnote{Note that if $\\epoch{s_i} = \\epoch{s_{i+1}}$, then \\DBE{c_{i+1}}{s_{i+1}} is trivially true.}.\n  Therefore, in this situation we can consider the slightly weaker constraint:\n  \\begin{equation}\\tag{DGO}\\label{DGTO}\n    \\var{deposits} \\geq \\obligation{pp}{stkCreds}{stpools}{\\ell(s)}\n  \\end{equation}\n  The difference between the left and right hand sides of the inequality\n  corresponds to the lovelace value in $c_{i+1}$ that decays between $s_i$ and $s_{i+1}$.\n\n  There are four sub-transitions where $\\var{deposits}$ is changed:\n  $\\mathsf{SNAP}$ (Figure~\\ref{fig:rules:snapshot}),\n  $\\mathsf{POOLREAP}$ (Figure~\\ref{fig:rules:pool-reap}),\n  $\\mathsf{NEWPP}$ (Figure~\\ref{fig:rules:new-proto-param}),\n  $\\mathsf{UTXO}$ (Figure~\\ref{fig:rules:utxo-shelley}).\n  This ordering is also the order in which $\\var{deposits}$ is changed.\n  Of these sub-transitions, only $\\mathsf{UTXO}$ actually changes the value of $\\var{deposits}$\n  when $s_i$ does not cross the epoch boundary.\n  (We say that $s_i$ \\emph{crosses the epoch boundary} if the precondition of\n  Rule~\\ref{eq:new-epoch} in Figure~\\ref{fig:rules:new-epoch} is met,\n  namely if $\\epoch{s_i} \\ge e_\\ell+1$.)\n%\n  The proof then proceeds by induction on $n$, showing the following:\n  \\begin{itemize}\n    \\item\n      Let $c$ be the chain state after the $\\mathsf{SNAP}$ transition\n      in $s_i\\vdash c_i\\trans{\\hyperref[fig:rules:chain]{chain}}{b_i}c_{i+1}$.\n      If \\DGO{c_i}{s_i}, then \\DBE{c}{s_i} holds.\n    \\item $\\mathsf{POOLREAP}$ 
preserves \\ref{DBE}.\n    \\item $\\mathsf{NEWPP}$ preserves \\ref{DBE}.\n    \\item The property for $\\mathsf{UTXO}$ requires a bit of explanation.\n      Let $\\var{nes}\\in\\NewEpochState$ be the new epoch state in $c_i$.\n      Note that the property \\ref{DBE} makes sense for values of $\\NewEpochState$\n      since it contains all the relevant variables.\n      Similarly, \\ref{DBE} also makes sense for values of $\\UTxOState\\times\\PParams$.\n      Let\n      $$\n        {\\begin{array}{c}\n           \\var{gkeys} \\\\\n         \\end{array}}\n        \\vdash\\var{nes}\\trans{\\hyperref[fig:rules:tick]{tick}}{\\var{bh}}\\var{nes'}\n      $$\n      be the first sub-transition of\n      $s_i\\vdash c_i\\trans{\\hyperref[fig:rules:chain]{chain}}{b_i}c_{i+1}$.\n      If \\DBE{\\var{nes'}}{s_i} holds, then \\DBE{(us', pp)}{s_i} holds for every transaction\n      $tx$ in $b_i$, where:\n      $$\n      \\var{env}\\vdash \\var{us} \\trans{\\hyperref[fig:rules:utxow-shelley]{utxo}}{tx} \\var{us'},\n      $$\n      is a sub-transition of\n      $s_i\\vdash c_i\\trans{\\hyperref[fig:rules:chain]{chain}}{b_i}c_{i+1}$,\n      and $\\var{pp}$ is the protocol parameters in $\\var{nes'}$.\n  \\end{itemize}\n\n  \\noindent\n  Case $\\hyperref[fig:rules:snapshot]{\\mathsf{SNAP}}$.\n  We must show that\n  if $c$ is the chain state after the $\\mathsf{SNAP}$ transition\n  in $s_i\\vdash c_i\\trans{\\hyperref[fig:rules:chain]{chain}}{b_i}c_{i+1}$,\n  and \\DGO{c_i}{s_i} holds, then so does \\DBE{c}{s_i}.\n%\n  We can assume that $s_i$ crosses the epoch boundary,\n  since otherwise the $\\mathsf{SNAP}$ transition will not occur.\n  Since the $\\mathsf{SNAP}$ transition only happens within the $\\mathsf{TICK}$ transition\n  on the epoch boundary, it follows that\n  $c_i$ does not contain any stake credentials or pools from the current epoch,\n  and so \\ref{DBE} will be equivalent to \\ref{DEO} (the current epoch is $\\epoch{s_i}$).\n  However, \\DBE{c}{s_i} holds trivially, since $\\var{deposits}$ is set to the value determined by $\\fun{obligation}$.\n  \\\\~\\\\\n  Case $\\hyperref[fig:rules:pool-reap]{\\mathsf{POOLREAP}}$.\n  We must show that \\ref{DBE} is preserved.\n%\n  We again assume that $s_i$ crosses the epoch boundary.\n  The $\\mathsf{POOLREAP}$ transition does the following:\n  \\begin{enumerate}\n    \\item leaves $\\var{stkCreds}$ unchanged,\n    \\item removes $\\var{retired}$ from $\\var{stpools}$,\n    \\item subtracts $\\var{unclaimed}+\\var{refunded}$ from $\\var{deposits}$.\n  \\end{enumerate}\n%\n  Notice that the domain of $\\var{pr}$ is $\\var{retired}$,\n  and similarly the domain of $\\var{rewardAcnts}$ is also $\\var{retired}$\n  since the domains of $\\var{stpools}$ and $\\var{poolParams}$ are the same.\n  Therefore $\\var{retired}$ is the disjoint union of\n  $\\dom({\\var{refunds}})$ and $\\dom({\\var{mRefunds}})$, so that\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      \\var{unclaimed}+\\var{refunded}\n      &\n      \\left(\n        \\sum\\limits_{\\wcard\\mapsto t\\in\\var{refunds}}R_p~t~\\ell(s)\n      \\right)+\n      \\left(\n        \\sum\\limits_{\\wcard\\mapsto t\\in\\var{mRefunds}}R_p~t~\\ell(s)\n      \\right)\n      \\\\\n      &\n      \\sum\\limits_{\\wcard\\mapsto t\\in\\var{rewardAcnts'}}R_p~t~\\ell(s)\n      \\\\\n      &\n      \\left(\n        \\sum\\limits_{\\wcard\\mapsto t\\in\\var{stpools}}R_p~t~\\ell(s)\n      \\right)-\n      \\left(\n        \\sum\\limits_{\\wcard\\mapsto t\\in\\var{retired}\\subtractdom\\var{stpools}}R_p~t~\\ell(s)\n      
\\right)\n    \\end{array}\n  \\end{equation*}\n  Therefore, it follows that if \\ref{DEO} holds before $\\mathsf{POOLREAP}$, then it also holds afterwards.\n  \\\\~\\\\\n  Case $\\hyperref[fig:rules:new-proto-param]{\\mathsf{NEWPP}}$.\n  We must show that \\ref{DBE} is preserved.\n%\n  We again assume that $s_i$ crosses the epoch boundary.\n  In this transition $\\var{pp}$ can change, but $\\var{stkCreds}$, $\\var{stpools}$,\n  and $\\var{deposits}$ do not change.\n  As in the $\\mathsf{SNAP}$ case, \\DBE{c}{s_i} holds trivially,\n  since it is set to the value that is determined by $\\fun{obligation}$.\n  \\\\~\\\\\n  Case $\\hyperref[fig:rules:utxo-shelley]{\\mathsf{UTXO}}$.\n  We assume that \\DBE{\\var{nes'}}{s_i} holds, where $\\var{nes'}$\n  is the new epoch state after the $\\mathsf{TICK}$ transition.\n  We must show that \\ref{DBE} is preserved after each $\\mathsf{UTXO}$ transition.\n%\n  The $\\mathsf{DELEGS}$ transition can result in values being\n  added to or deleted from $\\var{stkCreds}$, and added to $\\var{stpools}$.\n  Let $A_s$ be the added stake credentials, $D_s$ be the deleted credentials, and\n  $A_p$ be the added stake pools, where $\\var{stkCreds}'$ is the stake credential mapping,\n   $\\var{stpools}'$ is the stake pool mapping, and $\\var{deposits}'$ is the deposit pot after $\\mathsf{DELEGS}$.\n  We have that\n  \\begin{equation*}\n    \\begin{array}{rcl}\n      \\var{D_s} & \\subseteq & \\var{stkCreds}\\cup\\var{A_s} \\\\\n      \\var{stkCreds}' & = & (\\var{stkCreds}\\cup\\var{A_s})\\setminus\\var{D_s} \\\\\n      \\var{stpools}' & = & \\var{stpools}\\cup\\var{A_p} \\\\\n    \\end{array}\n  \\end{equation*}\n  The slots in the range of $A_s$ will all be equal to $s_i$,\n  but the slots in the range of $D_s$\nmay either be from the current epoch or an earlier one, so we split them using $\\fun{sep}$:\n  \\begin{equation*}\n    (\\var{D_{s\\_old}},~\\var{D_{s\\_new}}) = \\fun{sep}~\\var{D_s}~\\ell(s_i)\n  \\end{equation*}\n  We must then show that\n  \\begin{equation*}\n    \\var{deposits}' = \\var{deposits}\n    + |A_s|\\cdot d_{val}\n    + |A_p|\\cdot p_{val}\n    - |D_{s\\_new}|\\cdot d_{val}\n    - \\left(\\sum_{\\wcard\\mapsto t\\in D_{s\\_old}}R_c~t~\\ell(s_i)\\right)\n  \\end{equation*}\n  Looking at the $\\mathsf{UTXO}$ transition in Figure~\\ref{fig:rules:utxo-shelley},\n  \\begin{equation*}\n    \\var{deposits}' = \\var{deposits} + \\totalDeposits{pp}{stpools}{(\\txcerts{tx})}\n    - (\\var{refunded} + \\var{decayed})\n  \\end{equation*}\n  The function $\\fun{totalDeposits}$ is defined in Figure~\\ref{fig:functions:deposits-refunds}\n  and it is clear that here it is equal to\n  $$|A_s|\\cdot d_{val} + |A_p|\\cdot p_{val}.$$\n  Recall that\n  \\begin{equation*}\n    \\begin{array}{r@{~=~}l}\n      \\var{refunded} & \\keyRefunds{pp}{stkCreds}~{tx} \\\\\n      \\var{decayed} & \\decayedTx{pp}{stkCreds}~{tx}\n    \\end{array}\n  \\end{equation*}\n  where $\\fun{keyRefunds}$ is defined in Figure~\\ref{fig:functions:deposits-refunds}.\n  This iterates $\\fun{keyRefund}$ from the same figure,\n  which in turn just looks up the creation slot for a stake credential and returns $R_c$.\n  The function to calculate the value of decayed deposits, $\\fun{decayedTx}$, is defined in Figure~\\ref{fig:functions:deposits-decay}.\n  This iterates $\\fun{decayedKey}$ from the same figure.\n  Therefore, to show that\n  \\begin{equation}\\label{deleted-is-refunds-plus-decayed}\n    |D_{s\\_new}|\\cdot d_{val} + \\sum_{\\wcard\\mapsto t\\in D_{s\\_old}}R_c~t~\\ell(s_i)\n    = 
\\var{refunded} + \\var{decayed},\n  \\end{equation}\n  and thus complete the proof for the $\\mathsf{UTXO}$ case,\n  it suffices to show that for a given $\\var{c}\\mapsto s\\in D_s$,\n  the $R_c$ value plus the $\\fun{decayedKey}$ value that is associated with the stake\n  credential $c$ is equal to $d_{val}$ if $\\epoch(s)=\\epoch(s_i)$, and is otherwise equal to $R_c~s~\\ell(s_i)$.\n  Looking at the definition of $\\fun{decayedKey}$, observe that if $\\epoch(s)=\\epoch(s_i)$\n  then $\\var{start}=\\var{created}$ and so the decayed value is $(R_c~s~s)-(R_c~s~s_i)$.\n  However, $R_c~s~s = d_{val}$, so the refund plus the decayed value is\n  $d_{val}-(R_c~s~s_i)+(R_c~s~s_i)=d_{val}$.\n  Otherwise, if $s$ is from a previous epoch, then $\\var{start}=\\ell(s_i)$, and so\n  the decayed value is $(R_c~s~\\ell(s_i))-(R_c~s~s_i)$.\n  The refund plus the decayed value is thus\n  $(R_c~s~\\ell(s_i))-(R_c~s~s_i)+(R_c~s~s_i)=(R_c~s~\\ell(s_i))$.\n  Therefore, equation~\\ref{deleted-is-refunds-plus-decayed} holds, and\n  consequently so also does \\DBE{c'}{s_i}.\n\n\\end{proof}\n\n", "meta": {"hexsha": "ff70007955e47e95cb55d3bf27824fc59a4a1c47", "size": 27222, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "shelley/chain-and-ledger/formal-spec/hand_proofs.tex", "max_stars_repo_name": "ilap/cardano-ledger-specs", "max_stars_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 108, "max_stars_repo_stars_event_min_datetime": "2019-03-24T02:26:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-30T05:27:16.000Z", "max_issues_repo_path": "shelley/chain-and-ledger/formal-spec/hand_proofs.tex", "max_issues_repo_name": "ilap/cardano-ledger-specs", "max_issues_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1266, "max_issues_repo_issues_event_min_datetime": "2019-03-18T20:23:28.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-04T12:50:51.000Z", "max_forks_repo_path": "shelley/chain-and-ledger/formal-spec/hand_proofs.tex", "max_forks_repo_name": "ilap/cardano-ledger-specs", "max_forks_repo_head_hexsha": "6474f68b24d05175fc3fd44a9bdfa95bda703a25", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 86, "max_forks_repo_forks_event_min_datetime": "2019-03-29T06:53:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T17:17:15.000Z", "avg_line_length": 42.1393188854, "max_line_length": 165, "alphanum_fraction": 0.6611196826, "num_tokens": 9487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577680977182187, "lm_q2_score": 0.6959583187272711, "lm_q1q2_score": 0.5969708431458611}}
{"text": "\\section{\\texorpdfstring{$\\mathcal{O}$}{O}-Notation}\n\n\\toclesssubsection{Motivation / Definition}\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Motivation}\n  \\textbf{We are interested in:}\n  \\begin{itemize}\n  \\item\n    Example: \\texttt{sorting}\n    \\begin{itemize}\n      \\item\n        Runtime of \\texttt{Minsort} \\enquote{is growing as}\n        $\\quad{}\\quad{}n^2$\n      \\item\n         Runtime of \\texttt{Heapsort} \\enquote{is growing as}\n         $\\quad{}n \\, \\log n$\n    \\end{itemize}\n    \\item<2- |handout:1>\n      Growth of a function in runtime $T(n)$\n      \\begin{itemize}\n      \\item the role of constants (e.g. $1ns$) is minor\n      \\item it is enough if relation holds for some $n\\geq \\ldots$\n      \\end{itemize}\n    \\item<3- |handout:1>\n      Describe the growth of the function \\textbf{more formally}\n    \\begin{itemize}\n      \\item\n        by the means of Landau-Symbols \\cite{wikipedia_big_o_notation}):\n            \\begin{itemize}\n                \\item\n                    {\\color{Mittel-Blau}$\\mathcal{O}(n)$} (Big O of $n$),\n                \\item\n                    {\\color{Mittel-Blau}$\\Omega (n)$} (Omega of $n$),\n                \\item\n                    {\\color{Mittel-Blau}$\\Theta (n)$} (Theta of $n$)\n            \\end{itemize}\n    \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Definition}\n  \\textbf{Big $\\mathcal{O}$-Notation:}\n  \\begin{itemize}\n    \\item<2- |handout:1>\n     Consider the function:\n      $f\\!: \\mathbb{N} \\to \\mathbb{R}, \\; n \\mapsto f(n)$\n      \\begin{itemize}\n        \\item\n          $\\mathbb{N}\\!:$ Natural numbers $\\rightarrow$ input size\n        \\item\n          $\\mathbb{R}\\!:$ Real numbers $\\rightarrow$ runtime\n      \\end{itemize}\n   \\item<3- |handout:1>\n     \\textbf{Example:}\n     \\begin{itemize}[<+- |handout:1>]\n       \\item\n         $f(n) = 3 \\, n$\n       \\item\n         $f(n) = 2 \\, n \\, \\log n$\n       \\item\n         $f(n) = \\frac{1}{10} n^2$\n       \\item\n         $f(n) = n^2 + 3 \\, n \\, \\log n - 4 \\, n$\n      \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Definition}\n  \\textbf{Big $\\mathcal{O}$-Notation:}\n  \\begin{itemize}\n    \\item<2- |handout:1>\n      Given two functions $f$ and $g$:\\\\\n      $f,g\\!: \\mathbb{N} \\to \\mathbb{R}$\n    \\item<3- |handout:1>\n      \\textbf{Intuitive:} $f$ is Big-O of $g$ ($f$ is $\\mathcal{O}(g)$)\n      \\begin{itemize}\n        \\item\n          $\\ldots$ if $f$ relative to $g$ does not grow faster than $g$\n        \\item\n          the growth rate matters, not the absolute values\n      \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n% For every picture that defines or uses external nodes, you'll have to\n% apply the 'remember picture' style. To avoid some typing, we'll apply\n% the style to all pictures.\n\\tikzstyle{every picture}+=[remember picture]\n\n% By default all math in TikZ nodes are set in inline mode. 
Change this to\n% displaystyle so that we don't get small fractions.\n%\\everymath{\\displaystyle}\n\n\\tikzstyle{na} = [baseline=-.5ex]\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Definition}\n  \\textbf{Big $\\mathcal{O}$-Notation:}%\n  \\begin{itemize}\n    \\item<2- |handout:1>\n      \\textbf{Informal:} $f = \\mathcal O(g)$\n      \\begin{itemize}\n        \\item<2- |handout:1>\n           \\enquote{$=$} corresponds to \\enquote{is}, not \\enquote{is equal to}\n        \\item<3- |handout:1>\n           $\\ldots$ if there is some value $n_0$ such that for all $n \\geq n_0$\n        \\item<4- |handout:1>\n           $f(n) \\leq C \\cdot g(n)$ for a constant $C$\n        \\item<5- |handout:1>\n            ($f = \\mathcal O(g)$: from some value $n_0$ on, for all\n            $n \\geq n_0$, $f(n) \\leq C \\cdot g(n)$)\n      \\end{itemize}\n\t\\item<6- |handout:1>\n      \\textbf{Formal:} $f \\in \\mathcal O(g)$\\\\\n  \\end{itemize}\n  \\begin{block}{\\textbf{Formal:} $f \\in \\mathcal O(g)$}<6- |handout:1>\n    \\begin{math}\n      \\mathcal O(g) = \\lbrace \\tikz[baseline] {\n        \\node (fun) [anchor=base] {f};\n      }: \\mathbb{N} \\to \\mathbb{R} ~ \\tikz[baseline] {\n        \\node (for) [anchor=base] {|};\n      }\\tikz[baseline] {\n        \\node (exone) [anchor=base] {$\\exists$};\n      }\\!\\!n_0 \\in \\mathbb{N},\n      \\tikz[baseline] {\n        \\node (extwo) [anchor=base] {$\\exists$};\n      }\\!\\!C > 0,\\tikz[baseline] {\n        \\node (forall) [anchor=base] {$\\forall$};\n      }\\!\\!n \\geq n_0\\!\\!\\tikz[baseline] {\n        \\node (such) [anchor=base] {:};\n      }\n      f(n) \\leq C \\cdot g(n)\\rbrace\n    \\end{math}\n  \\end{block}\n  \\tikz[baseline] {\\node (funtext) [anchor=north] {\n    {\\footnotesize\\onslide<7- |handout:1>{\\begin{tabular}[c]{@{}c@{}}\n          ``set of\\\\\n          all functions''\n        \\end{tabular}}\n  }};}\\quad{} \\tikz[baseline] {\\node (fortext) [anchor=north] {\n    {\\footnotesize\\onslide<8- |handout:1>{\\begin{tabular}[c]{@{}c@{}}\n        ``for which''\n        \\end{tabular}}\n  }};}\\quad{} \\tikz[baseline] {\\node (extext) [anchor=north] {\n    {\\footnotesize\\onslide<9- |handout:1>{\\begin{tabular}[c]{@{}c@{}}\n        ``there exists''\n        \\end{tabular}}\n  }};}\\quad{} \\tikz[baseline] {\\node (foralltext) [anchor=north] {\n    {\\footnotesize\\onslide<10- |handout:1>{\\begin{tabular}[c]{@{}c@{}}\n        ``for all''\n        \\end{tabular}}\n  }};}\\quad{} \\tikz[baseline] {\\node (suchtext) [anchor=north] {\n    {\\footnotesize\\onslide<11- |handout:1>{\\begin{tabular}[c]{@{}c@{}}\n        ``such that''\n        \\end{tabular}}\n  }};}\n  \\begin{tikzpicture}[>=stealth,overlay,thick]\n    \\path[->,color=red]<7- |handout:1> (funtext) edge [bend left]  (fun);\n    \\path[->,color=red]<8- |handout:1> (fortext) edge [bend left]  (for);\n    \\path[->,color=red]<9- |handout:1> (extext) edge [bend left]  (exone);\n    \\path[->,color=red]<9- |handout:1> (extext) edge [bend right]  (extwo);\n    \\path[->,color=red]<10- |handout:1> (foralltext) edge [bend left]  (forall);\n    \\path[->,color=red]<11- |handout:1> (suchtext) edge [bend left]  (such);\n  \\end{tikzpicture}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\subsection{Examples}\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Examples}\n  \\textbf{Illustration of the Big O-Notation:}\\\\[-1.0em]\n  \\begin{columns}\n    \\begin{column}{0.9\\textwidth}\n      \\begin{figure}[!h]\n        
\\includegraphics[width=\\linewidth]{Images/BigONotationRuntime.png}\n        \\caption{Runtime of two algorithms $f_1, f_2$}\n        \\label{fig:big_o_runtime_example}\n      \\end{figure}\n    \\end{column}\n    \\begin{column}{0.1\\textwidth}\n      \\vspace{-4.75em}\\\\\n      \\hspace*{-2.5em}$g(n)$\\\\[2.0em]\n      \\hspace*{-2.5em}$f_2 \\in \\mathcal{O}(g)$\\\\[2.5em]\n      \\hspace*{-2.5em}$f_1 \\in \\mathcal{O}(g)$\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Examples}\n  \\textbf{Example:}\n  \\begin{itemize}\n    \\item\n      $f(n) = 5 \\, n + 7, \\; g(n) = n$\\\\\n      $\\Rightarrow$ $5 \\, n + 7 \\in \\mathcal{O}(n)$\\\\\n      $\\Rightarrow f \\in \\mathcal{O}(g)$\n     \\item\n      \\textbf{Intuitive:}\\\\\n      $f(n) = 5 \\, n + 7$ $\\rightarrow$ linear growth\n  \\end{itemize}\n  \\begin{alertblock}{Attention}\n    $f(n) \\leq g(n)$ is not guaranteed; the definition only requires\n    $f(n) \\leq C \\cdot g(n) \\;\\; \\forall n \\geq n_0$.\n  \\end{alertblock}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Proof}\n  \\label{slide:proofone}\n  \\begin{proof}[\n    We have to prove:\n    \\begin{math}\n      \\exists n_0, \\, \\exists C, \\, \\forall n \\geq n_0 \\! : \\;\n        5 \\, n + 7 \\leq C \\cdot n\n    \\end{math}\n  ]\n    \\begin{eqnarray*}\n      \\onslide<2- |handout:1>{\n        &5 \\, n + 7 &\\leq \\;\\; 5 \\, n + n\n        \\hspace{1em} (\\text{for} ~ n \\geq 7)\n      }\\\\\n      \\quad{}\\onslide<3- |handout:1>{\n        && = \\;\\; 6 \\, n\n      }\n    \\end{eqnarray*}\n    \\onslide<4- |handout:1>{\n      $\\hspace{1.5em} \\Rightarrow n_0 = 7, \\, C = 6$ \\qedhere\n    }\n  \\end{proof}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Proof}\n  \\begin{proof}[Alternate proof:]\n    \\begin{eqnarray*}\n      \\onslide<2- |handout:1>{\n        &5 \\, n + 7 &\\leq \\;\\; 5 \\, n + 7 \\, n\n        \\hspace{1em} (\\text{for} ~ n \\geq 1)\n      }\\\\\n      \\quad{}\\onslide<3- |handout:1>{\n        && = \\;\\; 12 \\, n\n      }\n    \\end{eqnarray*}\n    \\onslide<4- |handout:1>{\n      $\\hspace{1.5em} \\Rightarrow n_0 = 1, \\, C = 12$ \\qedhere\n    }\n  \\end{proof}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Examples}\n  \\textbf{Big O-Notation:}\n  \\begin{itemize}\n    \\item\n      We are only interested in the highest-order term, i.e.\\ the\n      fastest-growing summand; the others are ignored\n    \\item\n      $f(n)$ is bounded {\\color{Mittel-Blau}from above} by $C \\cdot g(n)$\n  \\end{itemize}\n  \\textbf{Examples:}\n  \\begin{align*}\n     2 \\, n^2 + 7 \\, n - 20 \\in & \\,\\mathcal{O}(n^2)\\\\\n     2 \\, n^2 + 7 \\, n \\, \\log n - 20 \\in & {}\\\\\n     7 \\, n \\, \\log n - 20 \\in & {}\\\\\n     5 \\in & {}\\\\\n     2 \\, n^2 + 7 \\, n \\, \\log n + n^3 \\in & {}\\\\\n  \\end{align*}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Examples}\n  \\textbf{Harder Example:}\n  \\begin{itemize}\n    \\item Polynomials are simple\n    \\item More problematic: combinations of more complex functions\n      \\begin{displaymath}\n        2 \\sqrt{x} + 3 
\\ln x \\in \\mathcal{O} (??)\n      \\end{displaymath}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\section{\\texorpdfstring{$\\Omega$}{Omega}-Notation}\n\n\\begin{frame}{$\\Omega$-Notation}{Definition}\n  \\textbf{Omega-Notation:}\n  \\begin{itemize}\n    \\item\n      \\textbf{Intuitive}:\n      \\begin{itemize}\n        \\item\n          $f \\in \\Omega(g)$, $f$ is growing at least as fast as $g$\n        \\item\n          So the same as Big-O but with \\textit{at-least} and not\n          \\textit{at-most}\n      \\end{itemize}\n  \\end{itemize}\n  \\begin{block}{\\textbf{Formal:} $f \\in \\Omega(g)$}\n    \\begin{math}\n      \\Omega(g) = \\lbrace f: \\mathbb{N} \\to \\mathbb{R} ~ | ~\n        \\exists n_0 \\in \\mathbb{N}, \\, \\exists C > 0, \\,  \\forall n \\geq n_0:\\,\n        f(n) \\tikz[baseline] {\\node (geq) [anchor=base] {$\\geq$};}\n        C \\cdot g(n)\\rbrace\n    \\end{math}\n  \\end{block}\n  \\begin{center}\n    \\tikz[baseline] {\\node (geqtext) [anchor=base] {\n      {\\footnotesize\\onslide<1->{\\begin{tabular}[c]{@{}c@{}}\n        ``in $O(n)$\\\\\n        we had $\\leq$''\n      \\end{tabular}}\n    }};}\n  \\end{center}\n  \\begin{tikzpicture}[>=stealth,overlay,thick]\n    \\path[->,color=red] (geqtext) edge [bend right] (geq);\n  \\end{tikzpicture}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\Omega$-Notation}{Proof}\n  \\label{slide:prooftwo}\n  \\textbf{Example:}\\\\\n  \\begin{proof}[Proof of $f(n) = 5n + 7 \\in \\Omega(n)$:]\n    \\begin{eqnarray*}\n      &\\underbrace{5 \\, n + 7}_{f(n)} &\\geq \\;\\; \\underbrace{1 \\cdot n}_{g(n)}\n      \\hspace{1em} (\\text{for} ~ n \\geq 1)\n    \\end{eqnarray*}\n    $\\hspace{1.5em} \\Rightarrow n_0 = 1, \\, C = 1$ \\qedhere\n  \\end{proof}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\Omega$-Notation}{Examples}\n  \\textbf{Illustration of the Omega-Notation:}\\\\[-1.0em]\n  \\begin{columns}\n    \\begin{column}{0.9\\textwidth}\n      \\begin{figure}[!h]\n        \\includegraphics[width=\\linewidth]{Images/OmegaNotationRuntime.png}\n        \\caption{Runtime of two algorithms $f_1, f_2$}\n        \\label{fig:omega_o_runtime_example}\n      \\end{figure}\n    \\end{column}\n    \\begin{column}{0.1\\textwidth}\n      \\vspace{-4.75em}\\\\\n      \\hspace*{-2.5em}$f_2 \\in \\Omega(g)$\\\\[2.0em]\n      \\hspace*{-2.5em}$f_1 \\in \\Omega(g)$\\\\[2.5em]\n      \\hspace*{-2.5em}$g(n)$\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\Omega$-Notation}{Examples}\n  \\textbf{Big Omega-Notation:}\n  \\begin{itemize}\n    \\item\n      We are only interested in the highest-order term, i.e.\\ the\n      fastest-growing summand; the others are ignored\n    \\item\n      $f(n)$ is bounded {\\color{Mittel-Blau}from below} by\n      $C \\cdot g(n)$\n  \\end{itemize}\n  \\textbf{Examples:}\n  \\begin{align*}\n    2 \\, n^2 + 7 \\, n - 20 \\in & \\,\\Omega(n^2)\\\\\n    2 \\, n^2 + 7 \\, n \\, \\log n - 20 \\in & {}\\\\\n    7 \\, n \\, \\log n - 20 \\in & {}\\\\\n    5 \\in & {}\\\\\n    2 \\, n^2 + 7 \\, n \\, \\log n + n^3 \\in & {}\n  
\\end{align*}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\section{\\texorpdfstring{$\\Theta$}{Theta}-Notation}\n\n\\begin{frame}{$\\Theta$-Notation}{Definition}\n  \\textbf{Theta-Notation:}\n  \\begin{itemize}\n    \\item\n      \\textbf{Intuitive}: $f$ is Theta of $g$ $\\ldots$\n      \\begin{itemize}\n        \\item\n          $\\ldots$ if $f$ grows at the same rate as $g$\n        \\item\n          $f \\in \\Theta(g)$, $f$ is growing at the same speed as $g$\n       \\end{itemize}\n  \\end{itemize}\n  \\begin{block}{\\textbf{Formal:} $f \\in \\Theta(g)$}\n    $\\Theta(g) = \\underbrace{\\mathcal O(g) \\cap \\Omega(g)}_{\\text{intersection}}$\n  \\end{block}\n  \\textbf{Example:}\\\\\n  \\hspace*{1.5em}$f(n) = 5 \\, n + 7, \\;\n    f(n) \\in \\mathcal{O}(n), \\,\n    f(n) \\in \\Omega(n)$\\\\\n  \\hspace*{3.0em}$\\Rightarrow f(n) \\in \\Theta(n)$\\\\[0.5em]\n  \\begin{center}\n   \\textit{For the proofs of $f \\in \\mathcal{O}(g)$ and $f \\in \\Omega(g)$, see\n     slides~\\ref{slide:proofone} and~\\ref{slide:prooftwo}}\n  \\end{center}\n\\end{frame}\n\n\\begin{frame}{$\\Theta$-Notation}{Graphs}\n  \\begin{center}\n    \\includegraphics[width=0.8\\textwidth]{Images/lower-upper-bound.pdf}\n  \\end{center}\n  \\begin{itemize}\n    \\item $f$ and $g$ have the same ``growth''\n  \\end{itemize}\n\\end{frame}\n%-------------------------------------------------------------------------------\n\n\\section{Runtime}\n\n\\toclesssubsection{Summary}\n\n\\begin{frame}{Runtime}{Landau-Symbol Summary}\n  \\textbf{Big O-Notation} $\\mathcal{O}(n)$\\textbf{:}\\\\\n    \\begin{itemize}\n      \\item\n        $f$ is growing {\\color{Mittel-Blau}at most} as fast as $g$\n      \\item\n        $C \\cdot g(n)$ is the upper bound\n    \\end{itemize}\n  \\textbf{Big Omega-Notation} $\\Omega(n)$\\textbf{:}\\\\\n  \\begin{itemize}\n    \\item\n      $f$ is growing {\\color{Mittel-Blau}at least} as fast as $g$\n    \\item\n      $C \\cdot g(n)$ is the lower bound\n  \\end{itemize}\n  \\textbf{Big Theta-Notation} $\\Theta(n)$\\textbf{:}\\\\\n  \\begin{itemize}\n    \\item\n      $f$ is growing at {\\color{Mittel-Blau}the same} speed as $g$\n      \\begin{itemize}\n        \\item\n          $C_1 \\cdot g(n)$ is the lower bound\n        \\item\n          $C_2 \\cdot g(n)$ is the upper bound\n      \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime}{Common Runtimes}\n  \\label{slide:theta-examples}\n  \\begin{table}[!h]\n    \\begin{center}\n      \\caption{Common runtime types}\n      \\label{RuntimeTable}\n      \\begin{tabularx}{\\textwidth}{XX}\n        Runtime & Growth\\\\\n        \\hline\n        $f \\in \\Theta (1)$ & constant time\\\\\n        $f \\in \\Theta (\\log n) = \\Theta (\\log_k n)$ & logarithmic time\\\\\n        $f \\in \\Theta (n)$ & linear time\\\\\n        $f \\in \\Theta (n \\, \\log n)$ & n-log-n time (nearly linear)\\\\\n        \\hline\n        $f \\in \\Theta (n^2)$ & quadratic time\\\\\n        $f \\in \\Theta (n^3)$ & cubic time\\\\\n        $f \\in \\Theta (n^k)$ & polynomial time\\\\\n        \\hline\n        $f \\in \\Theta (k^n)$, $f \\in \\Theta(2^n)$ & exponential time\n      \\end{tabularx}\n    \\end{center}\n  \\end{table}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\subsection{Limit / Convergence}\n\n\\begin{frame}\n  \\begin{itemize}\n  \\item So far discussed:\n    \\begin{itemize}\n      
\\item Membership in $O(\\ldots)$ proved by hand:\\\\\n        {\\color{MainA}Explicit calculation of $n_0$ and $C$}\n\t  \\vspace{1em}\n      \\item\n\t    \\textbf{However:} Both hint at {\\color{MainA}limits} in calculus\n    \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\label{slide:limit}\n  \\textbf{Definition of \\enquote{Limit}}\n  \\begin{itemize}\n    \\item\n      The {\\color{MainA}limit $L$} exists for an infinite sequence\n\t  $f_1, f_2, f_3, \\ldots$\\\\\n      if for all $\\epsilon > 0$ there exists an $n_0 \\in \\mathbb{N}$,\n      such that for all\\\\\n      $n \\geq n_0$ the following holds true:\n      $\\left| f_n - L \\right| \\leq \\epsilon$\n    \\item\n      A function $f\\!: \\mathbb{N} \\rightarrow \\mathbb{R}$ can be written as a\n      sequence\\\\\n      $\\Rightarrow$ $\\lim\\limits_{n \\rightarrow \\infty} f_n = L$\n  \\end{itemize}\n  \\begin{block}<2- |handout:1>{The sequence converges to the limit:}\n    \\begin{math}\n      \\forall \\epsilon > 0 \\; \\exists n_0 \\in \\mathbb{N} \\;\\;\n      \\forall n \\geq n_0 \\! : \\; \\left| f_n - L \\right| \\leq \\epsilon\n    \\end{math}\n  \\end{block}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\begin{itemize}\n  \\item Example for the proof of a limit\n  \\item Function $f(n) = 2 + \\frac{1}{n}$ with limit $\\lim_{n \\to \\infty} f(n) = 2$\n  \\item \\enquote{Engineering} solution: use $n = \\infty{}$\n    \\begin{displaymath}\n      \\frac{1}{\\infty} = 0 \\Rightarrow \\lim_{n \\to \\infty} f(n)\n        = \\lim_{n \\to \\infty} 2 + \\dfrac{1}{n}\n        = 2\n    \\end{displaymath}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\begin{itemize}\n  \\item Now a more formal proof for $\\displaystyle\\lim_{n \\to \\infty} 2 +\n    \\dfrac{1}{n}  = 2$\n  \\item We need to show: for all given $\\epsilon$ there is an $n_0$ such\n    that for all $n \\geq n_0$\n    \\begin{displaymath}\n      \\left| 2 + \\dfrac{1}{n} - 2 \\right| =  \\left| \\dfrac{1}{n}  \\right| \\leq \\epsilon\n    \\end{displaymath}\n  \\item<2- |handout:1>\n    E.g.: for $\\epsilon=0.01$ we get $\\frac{1}{n} \\leq \\epsilon$\n    for $n\\geq 100$\n  \\item<3- |handout:1>\n    In general\n    \\begin{displaymath}\n      n_0 = \\left\\lceil \\dfrac{1}{\\epsilon} \\right\\rceil\n    \\end{displaymath}\n  \\item<4- |handout:1>\n    Then we get:\n    \\begin{eqnarray*}\n      \\left|\n        \\frac{1}{n} \\vphantom{\\dfrac{1}{\\frac{1}{\\epsilon}}}\n      \\right| = \\dfrac{1}{n}\n      \\; \\leq \\;\n      \\dfrac{1}{n_0} = \\dfrac{1}{\\left\\lceil\\frac{1}{\\epsilon}\\right\\rceil}\n      \\leq \\dfrac{1}{\\frac{1}{\\epsilon}}\n      = \\epsilon \\qed\n    \\end{eqnarray*}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\label{slide:fraction}\n  Let $f,g \\! 
: \\; \\mathbb{N} \\to \\mathbb{R}$ with an existing limit\n  \\[\\lim_{n \\to \\infty} \\dfrac{f(n)}{g(n)} = L\\]\n  Hence the following holds:\n  \\begin{align}\n    \\label{eq:big-O}\n    & f \\in \\mathcal{O}(g) & \\Leftrightarrow\n    && \\lim_{n \\to \\infty} \\dfrac{f(n)}{g(n)} < \\infty\\\\\n    & f \\in \\Omega(g) & \\Leftrightarrow\n    && \\lim_{n \\to \\infty} \\dfrac{f(n)}{g(n)} > 0\\\\\n    & f \\in \\Theta(g) & \\Leftrightarrow\n    && 0 < \\lim_{n \\to \\infty} \\dfrac{f(n)}{g(n)} < \\infty\n  \\end{align}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\[\n    f \\in \\mathcal{O}(g)\n    \\; \\Leftrightarrow \\;\n    \\lim\\limits_{n \\rightarrow \\infty} \\frac{f(n)}{g(n)} < \\infty\n  \\]\n  \\begin{proof}[Forward proof ($\\Rightarrow$):]\n     $f \\in \\mathcal{O}(g) \\stackrel{\\text{def. of }\\mathcal{O}(n)}{\\Rightarrow}\n     \\exists n_0, \\, C\\; \\forall n \\geq n_0:\n     \\; f(n) \\leq C \\cdot g(n)$\n     \\begin{alignat*}{2}\n       \\Rightarrow && \\exists n_0, \\, C\\; \\forall n \\geq n_0:\n       \\dfrac{f(n)}{g(n)} &\\leq \\; C\\\\\n       \\Rightarrow && \\displaystyle\n       \\lim_{n \\to \\infty}\\dfrac{f(n)}{g(n)} &\\leq \\; C\n       \\qedhere\n      \\end{alignat*}\n  \\end{proof}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{$\\mathcal{O}$-Notation}{Limit / Convergence}\n  \\begin{proof}[Backward proof ($\\Leftarrow$):]\n    \\begin{center}\n      \\vspace{-0.5em}\n      \\begin{math}\n        \\begin{aligned}\n          & {} & \\lim_{n \\to \\infty} \\frac{f(n)}{g(n)} &< \\; \\infty\\\\\n          & \\Rightarrow & \\lim_{n \\to \\infty} \\frac{f(n)}{g(n)} &= \\; C &&\n          \\hspace*{1.5em}\\text{For some } C \\in \\mathbb{R} \\text{ (Limit)}\n        \\end{aligned}\n      \\end{math}\n      \\vspace{1.0em}\\\\\n      \\begin{math}\n        \\begin{aligned}\n          \\stackrel{\\text{def. 
limit}}{\\Rightarrow} \\hspace*{0.5em} &&\n          \\exists n_0, \\, \\forall n \\geq n_0: \\hspace*{1.0em}&&\n          \\frac{f(n)}{g(n)} &\\leq \\; C + \\varepsilon \\;\\;\\;\n            (\\text{e.g. }\\varepsilon = 1)\\\\\n          \\Rightarrow \\; &&\n          \\exists n_0, \\, \\forall n \\geq n_0: \\hspace*{1.0em}&&\n          f(n) &\\leq \\; \\underbrace{(C + 1)}_{\\text{$\\mathcal{O}$-notation constant}} \\cdot g(n)\\\\\n          \\Rightarrow \\; &&\n          f \\in \\mathcal{O}(g)\n          \\qedhere\n        \\end{aligned}\n      \\end{math}\n    \\end{center}\n  \\end{proof}\n\\end{frame}\n", "meta": {"hexsha": "164983f6e8f4dceadae83d39e664b4d8e3c0587d", "size": 21422, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-3/Chapter/eng/010_O-Notation.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-3/Chapter/eng/010_O-Notation.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-3/Chapter/eng/010_O-Notation.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 33.1609907121, "max_line_length": 87, "alphanum_fraction": 0.5099430492, "num_tokens": 7553, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8577681031721325, "lm_q1q2_score": 0.5969708415323164}}
{"text": "\\section{Overview of statistics}\n\nOne characterization of statistics is that it is the science of learning\nabout natural phenomena. The process of learning about natural phenomena proceeds\nin a cyclical fashion. One first posits a ``model'' for a natural phenomenon.\nShe designs an experiment to measure a quantity related to the phenomenon of interest.\nShe then collects data, summarizes it as a statistic, and performs a test of\ncompeting hypotheses before updating her hypothesis about the natural phenomenon.\nThe newly updated hypothesis replaces the original hypothesis in the second\niteration of the cycle. \n\nFor example, we might wish to study the genetics of body weight in mice. \nWe first posit the existence of body weight quantitative trait loci (QTL), which\nare regions of the genome that affect mouse body weight. We quantitatively state the competing possibilities as the null hypothesis that there is no QTL against the alternative hypothesis of presence of a QTL. We then design an experiment, perhaps using genetically diverse mice, like those from the Diversity Outbred mouse population. We measure body weight in, say, 100 Diversity Outbred mice and obtain genome-wide genetic data from each. We then summarize our observations by performing multiple instances of a statistical technique called ``linear regression''. We perform statistical hypothesis tests to quantify the evidence against the null hypothesis.  \n\n\\section{Maximum likelihood methods}\n\n\n\n\n\\section{Restricted maximum likelihood methods}\n\n", "meta": {"hexsha": "790826e61a5ff6239ae84b15ccdea3b36b955495", "size": 1524, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "stats-primer.tex", "max_stars_repo_name": "fboehm/diss-latex", "max_stars_repo_head_hexsha": "0698f667ed776f2a5a6768eb92d7681e0ed8825a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "stats-primer.tex", "max_issues_repo_name": "fboehm/diss-latex", "max_issues_repo_head_hexsha": "0698f667ed776f2a5a6768eb92d7681e0ed8825a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "stats-primer.tex", "max_forks_repo_name": "fboehm/diss-latex", "max_forks_repo_head_hexsha": "0698f667ed776f2a5a6768eb92d7681e0ed8825a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.2608695652, "max_line_length": 662, "alphanum_fraction": 0.8182414698, "num_tokens": 282, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577680940822761, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5969708406153966}}
{"text": "\n\\section{Dynamic Type Checking}\n\nExpressions evaluate to numbers, boolean values, strings, arrays or function values. Implementations\nof Source generate error messages when unexpected values are used as follows.\n\nOnly function values can be applied using the syntax:\n\n\\begin{eqnarray*}\n \\textit{expression}    \n                                   & ::=   &  \\textit{name}\n                                               \\texttt{\\textbf{(}}\\  \\textit{expressions} \\\n                                               \\texttt{\\textbf{)}}\\\\ \n\\end{eqnarray*}\n\nFor compound functions, implementations need to check that the number of \\textit{expressions}\nmatches the number of parameters.\n\nThe following table specifies what arguments Source's operators\ntake and what results they return. Implementations need to check the types of arguments and\ngenerate an error message when the types do not match.\n\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\noperator & argument 1 & argument 2 & result\\\\ \\hline\n\\texttt{\\textbf{+}} & number   & number     & number\\\\\n\\texttt{\\textbf{+}} & string   & string     & string\\\\\n\\texttt{\\textbf{-}} & number   & number     & number\\\\\n\\texttt{\\textbf{*}} & number   & number     & number\\\\\n\\texttt{\\textbf{/}} & number   & number     & number\\\\\n\\texttt{\\textbf{\\%}} & number   & number     & number\\\\\n\\texttt{\\textbf{===}} & any  & any     & bool\\\\\n\\texttt{\\textbf{!==}} & any   & any     & bool\\\\\n\\texttt{\\textbf{>}} & number   & number     & bool\\\\\n\\texttt{\\textbf{>}} & string   & string     & bool\\\\\n\\texttt{\\textbf{<}} & number   & number     & bool\\\\\n\\texttt{\\textbf{<}} & string   & string     & bool\\\\\n\\texttt{\\textbf{>=}} & number   & number     & bool\\\\\n\\texttt{\\textbf{>=}} & string   & string     & bool\\\\\n\\texttt{\\textbf{<=}}    & number   & number     & bool\\\\\n\\texttt{\\textbf{<=}} & string   & string     & bool\\\\\n\\texttt{\\textbf{\\&\\&}} & bool & any & any\\\\\n\\texttt{\\textbf{||}}   & bool & any & any\\\\\n\\texttt{\\textbf{!}}    & bool &      & bool\\\\\n\\texttt{\\textbf{-}}    & number &    & number\n\\end{tabular}\n\\end{center}\n\nPreceding \\texttt{\\textbf{?}} and following \\texttt{\\textbf{if}}, Source only allows\nboolean expressions.\n\nIn array access \\texttt{\\textbf{arr[key]}},\nonly arrays are allowed as \\texttt{\\textbf{arr}} and\nonly integers are allowed as \\texttt{\\textbf{key}}.\n\nArray indices in Source are limited to integers $i$ in the range $0 \\le i < 2^{32} - 1$.\n\nPairs in Source are represented by arrays with two elements. 
Therefore,\n\\begin{lstlisting}\nis_pair([1, 2]);\n\\end{lstlisting}\nand\n\\begin{lstlisting}\nequal(pair(1, 2), [1, 2]);\n\\end{lstlisting}\nevaluate to \\texttt{true}.\n\nAccessing an array at an index to which no prior assignment has been\nmade returns \\texttt{undefined}.\n", "meta": {"hexsha": "76f0c8a3dfdfe138c4f7b32a5eba5c46fafdc4f2", "size": 2742, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/specs/source_typing_3.tex", "max_stars_repo_name": "jaesimin/js-slang", "max_stars_repo_head_hexsha": "153596c436998e4aa182a61be455febf77eb5510", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2018-07-09T06:16:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T09:40:24.000Z", "max_issues_repo_path": "docs/specs/source_typing_3.tex", "max_issues_repo_name": "jaesimin/js-slang", "max_issues_repo_head_hexsha": "153596c436998e4aa182a61be455febf77eb5510", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1117, "max_issues_repo_issues_event_min_datetime": "2018-07-09T08:08:25.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T14:47:08.000Z", "max_forks_repo_path": "docs/specs/source_typing_3.tex", "max_forks_repo_name": "jaesimin/js-slang", "max_forks_repo_head_hexsha": "153596c436998e4aa182a61be455febf77eb5510", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 80, "max_forks_repo_forks_event_min_datetime": "2018-08-24T08:55:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T08:56:48.000Z", "avg_line_length": 39.1714285714, "max_line_length": 100, "alphanum_fraction": 0.6269146608, "num_tokens": 798, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.5968930778108886}}
{"text": "\\clearpage\n\\subsection{Vector Rotation}\n\nThese instructions provide bitwise rotation support for the\nVector Extension, analogous to the scalar rotation instructions\\footnote{\nSee \\url{https://github.com/AndyGlew/Ri5-stuff/wiki/VROT---Vector-Rotate}\nfor the initial design discussion around vector rotations.\n}.\n\n\\begin{cryptoisa}\nvrot.vv     vd, vs2, vs1, vm        // Vector-Vector\n    vd.eew[i] = vm[i] ? rotate(EEW,vs2[i], vs1[i]) : 0\n\\end{cryptoisa}\n\n\\begin{figure}[h]\n\\lstinputlisting[language=sail,firstline=405,lastline=426]{../sail/riscv_insts_crypto_rvv_alu.sail}\n\\caption{\nSail specification for the vector-vector right rotate instruction.\nOther variants such as vector-scalar and vector-xreg can be found in\nthe Sail code.\n}\n\\label{fig:sail:vrot}\n\\end{figure}\n\nThe vector-vector variant splits \\vrd, \\vrs{1} and \\vrs{2} into\nEEW-bit wide elements.\nIf the corresponding mask bit in \\vm is set,\nthe EEW-bit element in \\vrs{2} is rotated\n{\\em right} by the value in \\vrs{1}.\nOnly the low $log_2(EEW)$ bits of \\vrs{1} are used as the rotation\namount; all other bits are {\\em ignored}.\nIf the mask bit in \\vm is clear, then the element of \\vrd is zeroed.\n\n\\question{The zeroing v.s. leave-unmodified semantics of the vector\nmask registers are an implementation option. Which should we specify?}\n\nVector-vector rotation is extremely important to Keccak-based algorithms \nSHA3 and SHAKE.\nIt is also important to the ChaCha20 stream cipher.\n\nOther forms of rotate instruction include:\nvector-scalar (\\texttt{.vs}),\nvector-immediate (\\texttt{.vi})\nand\nvector-xreg (\\texttt{.vx}).\nPresently, the vector-vector variant is the most important from a\ncryptographic perspective.\nWe defer requiring other forms until they are required.\n", "meta": {"hexsha": "81d9fc7a22343447a23032ee9ed3187812befed6", "size": 1731, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/old-tex/tex/sec-vector-rotate.tex", "max_stars_repo_name": "dingiso/riscv-crypto", "max_stars_repo_head_hexsha": "608f550ea2a791fb091133fe6050321545dfc547", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 199, "max_stars_repo_stars_event_min_datetime": "2020-08-13T15:48:37.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T13:57:34.000Z", "max_issues_repo_path": "doc/old-tex/tex/sec-vector-rotate.tex", "max_issues_repo_name": "dingiso/riscv-crypto", "max_issues_repo_head_hexsha": "608f550ea2a791fb091133fe6050321545dfc547", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 118, "max_issues_repo_issues_event_min_datetime": "2020-08-13T16:09:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T20:00:35.000Z", "max_forks_repo_path": "doc/old-tex/tex/sec-vector-rotate.tex", "max_forks_repo_name": "dingiso/riscv-crypto", "max_forks_repo_head_hexsha": "608f550ea2a791fb091133fe6050321545dfc547", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2020-08-28T16:09:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T10:10:58.000Z", "avg_line_length": 35.3265306122, "max_line_length": 99, "alphanum_fraction": 0.7631426921, "num_tokens": 472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569014, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5968930753770177}}
{"text": "\\documentclass[12pt]{article}\r\n\\usepackage{amssymb, amsmath,algorithmic,algorithm,natbib}\r\n\r\n% \\usepackage[dvipdfm,bookmarks=true,bookmarksnumbered=false,bookmarkstype=toc,hyperfootnotes=false, citebordercolor={1 1 1} ,linkbordercolor={1 1 1} ]{hyperref}\r\n\r\n\\def\\d{{\\mathrm{d}}}\r\n\\def\\E{{\\mathrm{E}}}\r\n\\def\\Var{{\\mathrm{Var}}}\r\n\\def\\Cov{{\\mathrm{Cov}}}\r\n\\def\\max#1{{\\underset{#1}{\\mathrm{max}}}}\r\n\\def\\min#1{{\\underset{#1}{\\mathrm{min}}}}\r\n\\def\\({\\left(}\r\n\\def\\){\\right)}\r\n\\def\\R{{\\mathbb{R}}}\r\n\\def\\pdiff#1#2{\\frac{\\partial #1}{\\partial #2}}\r\n\\def\\xb{{\\mathbf{x}}}\r\n\\def\\fb{{\\mathbf{f}}}\r\n\\def\\nub{{\\mathbf{\\nu}}}\r\n\\def\\gb{{\\mathbf{g}}}\r\n\\def\\gammab{{\\mathbf{\\gamma}}}\r\n\\def\\pb{{\\mathbf{p}}}\r\n\\def\\db{{\\mathbf{d}}}\r\n\\def\\0{{\\mathbf{0}}}\r\n\\def\\1b{{\\mathbf{1}}}\r\n\\def\\deltab{{\\boldsymbol{\\delta}}}\r\n\\newcommand{\\Gammab}{{\\mathbf{\\Gamma}}}\r\n\\newcommand{\\be}{\\begin{equation}}\r\n\\newcommand{\\ee}{\\end{equation}}\r\n\\setlength{\\oddsidemargin}{0 in}\r\n\\setlength{\\evensidemargin}{0 in}\r\n\\setlength{\\textwidth}{6.5 in}\r\n\\setlength{\\topmargin}{-.5 in}\r\n\\setlength{\\textheight}{9 in}\r\n\r\n\\pagestyle{plain}\r\n\\begin{document}\r\n\r\n\\begin{center}\r\n {\\Large A manual for the Modified Powell's Hybrid Method} {\\large by Yoki Okawa }\\\\[0.5cm]\r\n {\\large \\today}\\\\[0.5cm]\r\n\\end{center}\r\n\r\n\\section{Introduction}\r\n\r\n\r\nIn this document, I explain the modified Powell's Hybrid method to solve the system of nonlinear\r\nequations. The modified Powell's Hybrid method has several advantages: avaiability of public domain high quality code; and popularity in various packages. This method seems ``the method'' used in almost all packages I checked so far. It includes IMSL, NAG, Matlab,\r\nand Octave. The algorithm is based on \\cite{Powell1970, Powell1970a}. This\r\nalgorithm was implemented in MINPACK, a high-quality optimization package in public\r\ndomain\\footnote{You can access MINPACK from http://www.netlib.org/minpack/ }. It is implemented in\r\nFortran 77 without the detail of the algorithm. \r\n\r\nThis document is intended to provide the code and explanation at the same time. There are many good\r\nexplanations of the idea behind this algorithm. For example, see \\cite{NocedalWright2000}. Also,\r\nthere are high-quality free codes like MINPACK. But it is hard to connect two. It is easy to get confused with what is \"psi\" or \"flag3\" in the code. So I am trying to build a\r\nbridge between algorithm and code by creating a one-to-one mapping of code and documentation.\r\n\r\n\\section{Basic Idea}\r\nFor the basic idea of trust region method and nonlinear optimization in general,\r\n\\cite{NocedalWright2000} provide readable yet detailed explanation. 
\r\n\r\nWe consider the following problem.\r\n\[\r\n\\begin{pmatrix}\r\nf_1(x_1,\\dots,x_n)\\\\\r\nf_2(x_1,\\dots,x_n) \\\\\r\n\\vdots\\\\\r\nf_n(x_1,\\dots,x_n) \\\\\r\n\\end{pmatrix}\r\n= \r\n\\begin{pmatrix}\r\n0\\\\\r\n0 \\\\\r\n\\vdots\\\\\r\n0\\\\\r\n\\end{pmatrix}\r\n\]\r\nOr, equivalently,\r\n\\be \\label{eq:prob}\r\n\\fb (\\xb) = \\0.\r\n\\ee\r\n\r\nInstead of solving the equation directly, we consider the minimization of the residual function\r\n\[\r\nr(\\xb) = \\|\\fb(\\xb)\\|^2 = \\sum_{i=1}^n f_i(\\xb)^2\r\n\]\r\n\r\nThe nonlinear equations are solved by repeatedly solving the locally quadratic approximation of\r\nthe minimization problem of $r(\\xb)$ around the point $\\xb_i$: \r\n\r\n\\be \\label{eq:unconditionalQuadratic}\r\n\\min{\\pb} \\left[ r(\\xb_i) + \\pb^T J^T(\\xb_i)\\fb(\\xb_i)  + \r\n        \\frac{1}{2} \\pb^T J^T(\\xb_i) J(\\xb_i)\\pb\\right] \r\n\\ee\r\nwhere $J$ is the Jacobian matrix and\r\n\[\r\nJ_{kj} =\\pdiff{f_k(\\xb)}{x_j} \r\n\]\r\nThis is a quadratic approximation, which takes advantage of second-derivative information. The benefit of using second derivatives is substantial. First order convergence, using only the first derivative, will typically halve the error per iteration. This means a $10^{3}$ improvement requires about 10 iterations. With second order convergence, the error becomes the square of the previous error. If the previous error was $10^{-3}$, a $10^{3}$  improvement can be obtained in only one iteration. \r\n\r\nThe challenge for second order methods is calculating the second order derivative, or the Hessian. The nice point here is that we do not explicitly calculate the Hessian of $r(\\xb)$: because it is a sum of squares, the (Gauss-Newton) Hessian has the simple form $ J^T(\\xb_i)\r\nJ(\\xb_i)$.\r\nThe solution to the problem is\r\n\[\r\n\\pb_i = - J^{-1}\\fb(\\xb_i),\\qquad \\xb _{i+1} = \\xb_i + \\pb_i\r\n\]\r\nThis is the Newton-Raphson algorithm. \r\n\r\nThe Newton-Raphson update can fail spectacularly when the Jacobian changes fast. For illustration, let's\r\nconsider the one dimensional problem $f(x) = x^3+1 = 0$. It has a unique real solution at $x = -1 $. Note that $f'(x) = 3 x^2$, which is a strictly convex function. One may expect nice convergence features.   However,\r\nif we start from $x_0=0.01$ and apply the Newton-Raphson update, $x_1 $ is about $-3333$. It is not approaching the true solution at all.\r\n\r\nWhy did the Newton-Raphson update fail? Since $f'(0.01) = 0.0003$, the algorithm thinks $f(x)$ does not\r\nrespond sharply to changes in $x$. And $f(0.01)$ is $ 1.000001$, which is not close to zero. So\r\nthe algorithm thinks it has to take a large step to make $f(x)$ zero.  \r\n\r\nThe problem is that the Newton-Raphson algorithm is taking the locally quadratic approximation too\r\nseriously. The approximation is valid only in a region where the Jacobian is not moving too fast. In\r\nour example, $f'(-0.1) = 0.03$, which is a hundred times larger than at $x=0.01$. So it\r\nis not a good idea to move $x$ by more than three thousand. One solution is restricting the step size endogenously.  We modify \\eqref{eq:unconditionalQuadratic} in the following manner:\r\n\\begin{align}\r\n\\min{\\pb} &\\left[ r(\\xb_i) + \\pb^T J^{T}(\\xb_i)\\fb(\\xb_i)  + \r\n    \\frac{1}{2} \\pb^T J^T(\\xb_i) J(\\xb_i)\\pb\\right] \r\n\\label{eq:conditionalQuadratic} \\\\\r\n&\\mathrm{subject \\ to}\\qquad \\|\\pb\\| \\le \\Delta\r\n\\end{align}\r\n$\\Delta$ is our parameter for the size of the region in which we trust the quadratic approximation. We\r\nadjust $\\Delta$ smartly (details below). 
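\r\n\r\nTo make the failure and the remedy concrete, here is a minimal Python sketch (ours, for illustration only; it is not part of MINPACK) that applies one unrestricted Newton step and one step clamped to a trust region of size $\\Delta = 1$ to the example above:\r\n\\begin{verbatim}\r\ndef f(x):\r\n    return x**3 + 1.0      # root at x = -1\r\n\r\ndef fprime(x):\r\n    return 3.0 * x**2\r\n\r\ndef newton_step(x):\r\n    # Unrestricted Newton-Raphson update.\r\n    return x - f(x) / fprime(x)\r\n\r\ndef clamped_step(x, delta):\r\n    # Same direction, but the step length is limited to delta.\r\n    p = -f(x) / fprime(x)\r\n    if abs(p) > delta:\r\n        p = delta if p > 0 else -delta\r\n    return x + p\r\n\r\nx0 = 0.01\r\nprint(newton_step(x0))        # about -3333.3: the wild Newton step\r\nprint(clamped_step(x0, 1.0))  # -0.99: a sane move towards the root\r\n\\end{verbatim}\r\nThe clamp plays the role of the constraint $\\|\\pb\\| \\le \\Delta$; the full algorithm additionally adjusts $\\Delta$ from iteration to iteration. 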
\r\n\r\n\r\n\r\n\\section{Algorithm}\r\n\\subsection{Overview}\r\nAlgorithm~\\ref{al:main} shows the basic structure of the algorithm. \r\n\\begin{algorithm}[ht]\r\n\\caption{Main Algorithm (Subroutine Hybrid())}\r\n\\label{al:main}\r\n\\begin{algorithmic}[1]\r\n\\STATE Set Initial Value of $\\Delta$ and $J$\r\n\\STATE Do QR Decomposition of $J$: $QR = J$\r\n\\WHILE{$\\text{abs}(F(\\xb))>tol$}\r\n    \\STATE Calculate $\\pb$ given $Q,R$ and $\\Delta$ \r\n    \\IF{$F(\\xb+\\pb)<F(\\xb)$} \\label{ln:improvedF}\r\n        \\STATE $\\xb\\leftarrow \\xb+\\pb$\r\n        \\STATE Calculate $\\fb(\\xb+\\pb)$\r\n    \\ENDIF\r\n    \\STATE Update $\\Delta$ using $F(\\xb+\\pb),F(\\xb),J\\pb$\r\n    \\STATE Update $J$ using $f(\\xb)$\r\n    \\STATE Update $Q,R$ using $J$\r\n%   \\IF{$\\frac{\\|\\fb(\\xb)\\|-\\|\\fb(\\xb+\\pb)\\|}{\\|\\fb(\\xb)\\|-\\|\\fb(\\xb)+J\\xb\\|}<0.1$}\r\n\\ENDWHILE\r\n\\end{algorithmic}\r\n\\end{algorithm}\r\nFirst, we decompose the Jacobian as $J = QR$. \r\n\r\nThe first step of the loop is finding the possible increment $\\pb$. This is a compromise between the Newton\r\ndirection and the steepest descent direction of $F(\\xb)=\\sum_{k=1}^Nf_k(\\xb)^2$, called the dogleg method.\r\n\r\n$\\Delta $ is the size of the trust region.  We adjust $\\Delta$ based on how well the current Jacobian is\r\nfitting the data. We decrease $\\Delta$ if the current improvement is not as good as we hoped, because\r\nthat means our Jacobian is a poor proxy of the true nature of the objective function. We do not want\r\nto move a lot based on a poor estimate.  \r\n\r\nNext, we update the Jacobian. We usually use Broyden's Rank 1 update for fast calculation. If the last\r\nfew steps were bad moves, this might imply that our Jacobian contains substantial errors. So we recalculate\r\nthe Jacobian in that case. The last step of the iteration is an update of $Q, R$.  Because of their special\r\nstructure, it is relatively fast. There are many exceptional exits of the program. We explain them at\r\nthe end of the algorithm section.\r\n\r\n\\paragraph{Convergence} \r\nThis algorithm is designed so that if the Newton method fails to improve the residual function\r\n$r(\\xb)$, it chooses the Steepest Descent method with a gradually smaller step length. A small step\r\nin the Steepest Descent direction always improves the objective function value. So convergence\r\nto some locally optimal point is mathematically guaranteed. The mathematical theorem does not\r\nguarantee convergence in a reasonable time and precision. But this algorithm pays\r\nspecial attention to rounding errors and speed. So it is also likely to converge in practice, at least much more often than plain Newton-Raphson. 
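\r\n\r\nBefore turning to the individual updates, it may help to see the whole loop in one runnable place. The following Python sketch is ours, not MINPACK's Fortran: it recomputes the Jacobian by forward differences every iteration, solves the Newton correction directly instead of via $QR$, and finds the dogleg combination by bisection rather than by Powell's closed-form expression, so it is a simplified illustration of the hybrid idea rather than the real implementation.\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef fd_jacobian(fun, x, h=1e-7):\r\n    # Forward-difference Jacobian (the 'recalculate J' step).\r\n    fx = fun(x)\r\n    J = np.empty((fx.size, x.size))\r\n    for j in range(x.size):\r\n        xh = x.copy()\r\n        xh[j] += h\r\n        J[:, j] = (fun(xh) - fx) / h\r\n    return J\r\n\r\ndef dogleg(J, fx, delta):\r\n    # Simplified dogleg step (cf. Algorithm 2, without the QR machinery).\r\n    nu = np.linalg.solve(J, -fx)                # Newton correction\r\n    if np.linalg.norm(nu) <= delta:\r\n        return nu\r\n    g = -J.T @ fx                               # steepest descent direction\r\n    mu = (g @ g) / np.linalg.norm(J @ g) ** 2   # predicted bottom along g\r\n    if mu * np.linalg.norm(g) >= delta:\r\n        return (delta / np.linalg.norm(g)) * g\r\n    lo, hi = 0.0, 1.0                           # find theta by bisection\r\n    for _ in range(60):\r\n        th = 0.5 * (lo + hi)\r\n        if np.linalg.norm((1 - th) * mu * g + th * nu) < delta:\r\n            lo = th\r\n        else:\r\n            hi = th\r\n    return (1 - lo) * mu * g + lo * nu\r\n\r\ndef hybrid(fun, x, delta=1.0, ftol=1e-10, max_iter=100):\r\n    # Schematic main loop (cf. Algorithm 1): J is recomputed by finite\r\n    # differences every iteration; MINPACK instead uses Broyden/QR updates.\r\n    x = np.asarray(x, dtype=float)\r\n    for _ in range(max_iter):\r\n        fx = fun(x)\r\n        if np.linalg.norm(fx) / np.sqrt(fx.size) < ftol:\r\n            break\r\n        J = fd_jacobian(fun, x)\r\n        p = dogleg(J, fx, delta)\r\n        fxp = fun(x + p)\r\n        actual = fx @ fx - fxp @ fxp            # reduction of r(x)\r\n        predicted = fx @ fx - np.linalg.norm(fx + J @ p) ** 2\r\n        if predicted > 0 and actual >= 0.1 * predicted:\r\n            x = x + p                           # accept the step\r\n            delta = max(delta, 2.0 * np.linalg.norm(p))\r\n        else:\r\n            delta *= 0.5                        # shrink the trust region\r\n    return x\r\n\r\nprint(hybrid(lambda x: x**3 + 1.0, [0.01]))     # converges near x = -1\r\n\\end{verbatim}\r\nThe real code replaces the two expensive steps here, recomputing $J$ and solving the linear system, with the Broyden and $QR$ updates described in the following subsections. 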
\r\n\r\n\r\n\r\n\\subsection{Update of $\\pb$}\r\n \\begin{algorithm}[ht]\r\n\\caption{Calculate  $\\pb$, (subroutine dogleg)}\r\n \\label{al:dogleg}\r\n\\begin{algorithmic}[1]\r\n\\STATE INPUT: $R,Q,\\Delta,\\fb(\\xb),\\Psi$\r\n\\STATE OUTPUT: $\\pb$\r\n\\STATE Calculate Newton Correction by solving $R\\boldsymbol{\\nu} = - Q^{T} \\fb (\\xb)$\r\n\\IF{$\\|\\boldsymbol{\\nu}\\|\\le \\Delta$}\r\n    \\STATE  Accept Newton Correction : $\\pb \\leftarrow \\boldsymbol{\\nu}$\r\n\\ELSE\r\n    \\STATE /* Try Steepest descent Correction */\r\n    \\STATE Calculate steepest descent Correction: $\\boldsymbol{g} = - (QR)^{T} \\fb (\\xb)$\r\n    \\STATE Calculate the predicted bottom at the direction of $\\boldsymbol{g}$:\r\n             $\\mu = \\| \\boldsymbol{g} \\|^2 / \\| QR\\boldsymbol{g} \\|^2$\r\n    \\IF{$\\mu \\|\\Psi^{-1} \\gb\\| \\ge \\Delta$} \r\n        \\STATE Move along Steepest descent correction $\\pb = \\Delta \\gb/\\|\\gb\\|$\r\n    \\ELSE\r\n        \\STATE /* Accept neither. Try linear combination */\r\n        \\STATE Find $\\theta$ such that $\\|(1-\\theta)\\mu  \\gb + \\theta\\Psi \\nub \\| = \\Delta$\r\n        \\STATE $\\pb \\leftarrow (1-\\theta)\\mu \\gb + \\theta \\nub$\r\n    \\ENDIF\r\n\\ENDIF\r\n\\end{algorithmic}\r\n\\end{algorithm}\r\n\r\nThis is an update of $\\pb$ by solving \\eqref{eq:conditionalQuadratic}. We do not solve the problem\r\nprecisely because it takes time and the gain is not that big. \r\n\r\nFirst, we try the Newton Correction, which is an unconditional solution of \\eqref{eq:conditionalQuadratic}. The solution of the\r\nlinear equation takes just $O(n^2)$ operations because $R$ is an upper triangular matrix. We can apply\r\nback substitution. If it does not move more than $\\Delta$, it is the exact solution. \r\n\r\nIf not, we try the Steepest descent direction of $r(\\xb)=\\fb(\\xb)^T\\fb(\\xb)$, which is $-J^T\r\n\\fb(\\xb)$. \r\nIn a usual Steepest Descent method (or a more sophisticated algorithm like the Conjugate\r\ngradient method or a Quasi-Newton method), we perform a line search along the suggested direction to\r\nfind the minimum on the line. But we do not do that here, because the line search requires several\r\nevaluations of the objective function, which can be costly. \r\n\r\nInstead, we exploit the special structure of $r(\\cdot)$. Since $r(\\cdot)$ is a sum of squares, we can derive the Hessian without\r\nperforming error-prone finite differences. Namely, the Hessian is $J^TJ$. With the Hessian and\r\nJacobian, we can directly derive the minimum of the quadratic approximation of\r\nthe objective function along this direction, which is at $\\xb + \\mu \\gb$. In this way, we avoid a costly\r\nline search. If the minimum is outside of the permissible area, or $\\mu \\|\\gb\\| \\ge \\Delta$, we\r\naccept up to $\\Delta$.\r\n\r\nIf the minimum along $\\gb$ is closer than $\\Delta$, we move a bit towards $\\nub$ too. We take a\r\nlinear combination of both $\\boldsymbol{\\nu}$ and $\\gb$ so that $\\|\\pb\\| = \\Delta$. This can be\r\nattained by setting\r\n\[\r\n\\theta = \\frac{\\Delta^2 - \\|\\mu \\gb \\|^2}\r\n{(\\nub-\\mu\\gb,\\mu\\gb)+\\sqrt{\\{(\\nub,\\mu\\gb)-\\Delta^2\\}^2+\\{\\|\\nu\\|^2-\\Delta^2\\} \\{\\Delta^2-\\|\\mu\\gb\\|^2 \\}}}\r\n\]\r\n\r\n\\paragraph{Normalization}\r\nIn this algorithm, it is important to normalize equations so that the problem is well\r\nbehaved. Instead of considering the constraint in \\eqref{eq:conditionalQuadratic} as a ball, we\r\ncan consider an elliptical constraint. 
\r\n\\[\r\n \\|\\Psi \\pb\\| < \\Delta\r\n\\]\r\nwhere  $\\Psi$ is a diagonal normalization matrix. Let $\\Psi \\pb = \\tilde{\\pb}$. The problem\r\n\\eqref{eq:conditionalQuadratic} becomes\r\n\\begin{align}\r\n\\min{\\tilde{\\pb}} &\\left[ r(\\xb_i) +\\tilde{\\pb}^T\\Psi^{-1} J^T\\fb(\\xb_i)  + \r\n    \\frac{1}{2} \\tilde{\\pb}^T \\Psi^{-1}J^TJ\\Psi^{-1}\\tilde{\\pb}\\right] \r\n\\label{eq:conditionalQuadratic} \\\\\r\n&\\mathrm{subject \\ to}\\qquad \\|\\tilde{\\pb}\\| < \\Delta\r\n\\end{align}\r\nThe solution of the problem above, $\\tilde{\\pb}$ is equal to the solution of unconstrained dogleg\r\nproblem with $\\tilde{J} = J\\Psi ^{-1}$. So the dogleg problem with normalization is follows:\r\n\r\n\r\n\\begin{algorithm}[ht]\r\n\\caption{Calculate  $\\pb$ with normalization}\r\n \\label{al:delta}\r\n\\begin{algorithmic}[1]\r\n\\STATE INPUT: $R,Q,\\Delta,\\fb(\\xb),\\Psi $\r\n\\STATE OUTPUT: $\\pb$\r\n\\STATE $ \\tilde{R} \\leftarrow R \\Psi^{-1}$\r\n\\STATE Apply algorithm \\ref{al:dogleg}. Let its solution as $\\tilde{\\pb}$\r\n\\STATE $\\pb = \\Psi^{-1} \\tilde{\\pb} $\r\n\\end{algorithmic}\r\n\\end{algorithm}\r\n\r\n\r\n\r\n\\subsection{Revision of $\\Delta$ and Recalculating $J$}\r\nAlgorithm \\ref{al:delta2} is a update of the trust region size, $\\Delta$.\r\n\r\n\r\n\\begin{algorithm}[ht]\r\n\\caption{Update $\\Delta$ (subroutine UpdateDelta)}\r\n\\label{al:delta2}\r\n\\begin{algorithmic}[1]\r\n\\STATE Calculate predicted $f(\\xb+ \\delta)$: \\ $\\boldsymbol{\\phi} = \\fb(\\xb)+ J\\pb$\r\n\\STATE Calculate Predicted $r(\\xb +\\delta)$: \\  $\\Phi =\\|\\boldsymbol{\\phi}\\|  $\r\n\\IF{$\\frac{r(\\xb)- r(\\xb+\\pb)}{r(\\xb)-\\Phi} <0.1 $} \\label{al:delta2:l1}\r\n    \\STATE  Change of objective function is too small. Reduce $\\Delta$ by $\\Delta\r\n    \\leftarrow DeltaSpeed\\cdot \\Delta$ \\label{line1} \r\n    \\STATE GoodJacobian = 0\r\n    \\STATE BadJacobian = BadJacobian + 1\r\n\\ELSE\r\n    \\STATE BadJacobian = 0\r\n    \\STATE GoodJacobian = GoodJacobian +1 \r\n    \\IF{GoodJacobian$ > 1$ OR $\\frac{r(\\xb)- r(\\xb+\\pb)}{r(\\xb)-\\Phi} >0.5 $}\r\n        \\STATE $\\Delta = \\max{}(\\Delta, 2\\|\\pb\\| )$\r\n    \\ELSIF{ abs$\\big(\\frac{r(\\xb+\\pb)-\\Phi}{r(\\xb)-\\Phi}\\big)< 0.1 $}\r\n        \\STATE $\\Delta =2\\|\\pb\\| $\r\n    \\ENDIF  \r\n\\ENDIF\r\n\\IF {BadJacobian = 2}\r\n    \\STATE Recalculate Jacobian by forward differences.\r\n    \\STATE BadJacobian = 0\r\n\\ENDIF\r\n\\end{algorithmic}\r\n\\end{algorithm}\r\n\r\nLine \\ref{al:delta2:l1} checks if $J$ is good or not. If $J$ is a good prediction,\r\nthe predicted reduction of the objective function $r(\\xb)-\\Phi $ is almost equal to the\r\nactual reduction. If the actual reduction is less than 10\\% of the predicted reduction,\r\nthere is something wrong with $J$. It is either $\\fb(\\xb)$ is very nonlinear or\r\ncalculated $J$ contains too much error from the history. These errors generally become\r\nsmaller if the trust region $\\Delta$  shrinks.\r\n\r\nIf the reduction is relatively satisfactory, we start to weakly increase $\\Delta$. There is not\r\ntoo much justification for this increase. But Powell found that the result is numerically quite\r\nsatisfactory. Note that usually $\\|\\pb\\|$ is equal to $\\Delta$.  We do not want to modify\r\n$\\Delta$ all the time because we might fall into the oscillation of increase $\\rightarrow$\r\ndecrease $\\rightarrow $ increase $\\cdot \\cdot \\cdot$.  To avoid that, GoodJacobian is checking if\r\nit is the first attempt to increase $\\Delta$. 
BadJacobian counts the number of consecutive failures to predict the change of the objective
function. If the algorithm fails to predict the change twice consecutively, we suspect that
the current Jacobian contains a serious error inherited from previous updates; the previous
points may be far from the current $\xb$. We then recalculate the Jacobian from scratch by
forward differences, instead of updating it.

\subsection{Update of $J$}
We update the Jacobian using Broyden's rank 1 update (for details, see Numerical Recipes
Ch.~9.7, Broyden's method):
\begin{align*}
  J \leftarrow J +\frac{1}{\|\pb\|^2} (\fb(\xb+\pb)-\fb(\xb) -J\pb )\pb^T
\end{align*}
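In code, this is a single rank-1 correction. A minimal NumPy sketch (the function name is mine):

\begin{verbatim}
import numpy as np

def broyden_update(J, p, f_old, f_new):
    # Correct J so that it reproduces the observed secant condition
    # J_new @ p = f_new - f_old, while leaving J unchanged on the
    # orthogonal complement of p.
    return J + np.outer(f_new - f_old - J @ p, p) / np.dot(p, p)
\end{verbatim}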
\subsection{QR Decomposition}
QR decomposition factors a matrix $A$ into the product of an orthogonal matrix $Q$
($Q^{-1}=Q^T$) and an upper triangular matrix $R$, so that $A=QR$. It is useful because solving
$Ax=b$ takes $O(n^2)$ steps once we know the QR decomposition of $A$, which is faster than
$O(n^3)$. Although the QR decomposition itself takes $O(n^3)$, once we have calculated it, each
update takes only $O(n^2)$. So it is useful when we have to solve a linear equation repeatedly
with slightly different coefficients. That is the case when we calculate the Newton direction,
which requires solving $J\pb = -\fb(\xb)$ in every iteration. In this subsection, I assume that
the input is a square matrix. The Fortran code is written so that it accepts rectangular
matrices.

\subsubsection{Decomposition}
This is called at first, and whenever $J$ is recalculated using finite differences. The code
applies the Householder transformation repeatedly. Let $a_j$ be an arbitrary vector; we want to
find $P_j$ such that $P_j$ is orthogonal and $P_ja_j = (b_1, 0, \dots,0)^T$. We can set
\[
P_j = I - \frac{2}{\|u_j\|^2} u_ju_j^T, \qquad u_j = a_j - \|a_j\|e_1
\]
where $I$ is the identity matrix and $e_1= (1,0,\dots,0)^T$ is a unit vector. We apply this
repeatedly, column by column, until the matrix becomes upper triangular. Many books on numerical
algorithms cover QR decomposition with the Householder transformation.

\subsubsection{Update}
The update is called at the end of each iteration. Our goal is: given
\[
A^{next} = A + uv^T, \qquad A = QR,
\]
find $Q^{next}$ and $R^{next}$ such that $Q^{next}R^{next}=A^{next}$. For more information, see
\cite{BjorckDahlquist2008} section 8.4.

\paragraph{Transform a bit}
Let $w=Q^Tu$. Using this,
\[
A^{next} = Q(R+wv^T).
\]
We are going to make $R+wv^T$ an upper triangular matrix using Givens transformations.

\paragraph{Givens transformation of $w$}
We find a sequence of Givens transformations such that
\[
P_1\dots P_{n-1}w = \alpha e_1.
\]
A Givens transformation is very similar to a Jacobi transformation. It is a matrix defined by
three parameters $(k,l,\theta)$ such that its $(i,j)$ element is
\[
P(k,l,\theta)_{i,j} = \begin{cases}
\cos \theta & i=j=k \\
\cos \theta & i=j=l\\
-\sin \theta & i = k , j = l \\
\sin \theta & i = l, j = k \\
1 & i = j, \ i \neq k, \ i \neq l\\
0 & \text{otherwise}
\end{cases}
\]
It is zero except for the diagonal elements, the $(k,l)$ element, and the $(l,k)$ element; $P$ is
an orthogonal matrix. Consider $P(n-1,n,\theta) w$. The $n$th element of $P(n-1,n,\theta) w$ is
zero if
\[
\cos \theta = \frac{1}{\sqrt{t^2+1}}, \qquad
\sin \theta = t \cos \theta, \qquad t = \frac{w_n}{w_{n-1}}.
\]
We can repeat this until all elements other than the first are zero.
Let
\[
P^w = P_1\dots P_{n-1}.
\]

\paragraph{Transform the upper Hessenberg matrix to an upper triangular matrix}
Now,
\[
A^{next} = Q (P^w)^T(P^wR+P^wwv^T).
\]
Note that the elements of $P^wwv^T$ are zero outside the first row, and $P^wR$ is an upper
Hessenberg matrix. So $P^wR+P^wwv^T$ is an upper Hessenberg matrix. We are going to eliminate its
subdiagonal elements, from the $(2,1)$ element to the $(n,n-1)$ element, using Givens
transformations. Let $H =P^wR+P^wwv^T$. To eliminate the $(2,1)$ element, we need the Givens
transformation with
\[
\cos \theta = \frac{1}{\sqrt{t^2+1}}, \qquad
\sin \theta = t \cos \theta, \qquad t = \frac{h_{21}}{h_{11}}.
\]
We can repeat this until all lower triangular elements are zero. Let $P^H$ be the accumulation of
these transformations. We complete the algorithm by
\[
Q^{next} =  Q (P^w)^T(P^H)^T, \qquad R^{next} = P^H(P^wR+P^wwv^T).
\]
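The two Givens sweeps translate directly into code. Below is a compact NumPy sketch of the whole
$O(n^2)$ update (my own implementation of the scheme above, not the Fortran routine); in
practice one could also use a library routine such as \texttt{scipy.linalg.qr\_update}.

\begin{verbatim}
import numpy as np

def givens(a, b):
    # Return c, s with [[c, s], [-s, c]] @ [a, b] = [r, 0].
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_rank1_update(Q, R, u, v):
    # Recompute the QR factors of A + u v^T from A = Q R in O(n^2).
    n = R.shape[0]
    Q, R = Q.copy(), R.copy()
    w = Q.T @ u
    # Sweep 1: rotate w onto alpha * e1, turning R into upper Hessenberg.
    for k in range(n - 2, -1, -1):
        c, s = givens(w[k], w[k + 1])
        G = np.array([[c, s], [-s, c]])
        w[k:k+2] = G @ w[k:k+2]               # zeroes w[k+1]
        R[k:k+2, :] = G @ R[k:k+2, :]
        Q[:, k:k+2] = Q[:, k:k+2] @ G.T       # keep Q R invariant
    R[0, :] += w[0] * v                       # H = P^w R + (P^w w) v^T
    # Sweep 2: chase the subdiagonal of the Hessenberg matrix away.
    for k in range(n - 1):
        c, s = givens(R[k, k], R[k + 1, k])
        G = np.array([[c, s], [-s, c]])
        R[k:k+2, :] = G @ R[k:k+2, :]
        Q[:, k:k+2] = Q[:, k:k+2] @ G.T
    return Q, R
\end{verbatim}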
\subsection{Termination of the Algorithm}
There are several conditions for the termination of this algorithm.

\paragraph{Successful convergence}
We call the termination successful if
\[
\frac{1}{\sqrt{n}}\|\fb(\xb)\|=\sqrt{\frac{1}{n}\sum_{i=1}^n(f_i(\xb))^2} < ftol.
\]
This means that the root mean square residual is small. If this condition is satisfied, each
$|f_i(\xb)|$ is at most $\sqrt{n}\cdot ftol$.

\paragraph{Trust region size shrinks to the tolerance level}
The program stops execution if $\Delta < xtol(\|\xb\|+xtol)$, i.e.\ the relative size of the trust
region is smaller than $xtol$. This happens if the Jacobian changes rapidly and wildly around the
point where the program terminated. This is the most common type of unsuccessful termination. It
can happen for the following reasons:
\begin{itemize}
\item The problem you are trying to solve has large second order derivatives. In this case, the
  Jacobian is sensitive to where it is evaluated, and the local quadratic approximation is
  inappropriate. This might be the case if you have a highly discontinuous Jacobian, for example.
\item You set $ftol$ too small. You might get this message even if the algorithm has indeed found
  the solution: if $ftol$ is too small, the algorithm cannot attain it with the precision
  required.
\item You set the speed of shrinking $\Delta$ too small.
\item The subroutine to be solved contains bugs. My personal experience suggests this explains
  most occurrences of this error message.
\end{itemize}

Here are possible solutions:
\begin{itemize}
\item Check $\fb(\xb)$ and see whether it is small enough or not.
\item Increase \textit{DeltaSpeed}
\item Increase \textit{ftol}
\item Decrease \textit{JacobianStep}
\item Use different initial guesses
\item If you are using single precision real numbers, use double precision instead.
\end{itemize}

\paragraph{Too many function calls}
If the number of function calls exceeds MaxFunEval, the program stops the execution.

\paragraph{Algorithm reaches a local minimum}
If the program terminates for a reason other than ``Successful Convergence'', we recalculate the
gradient of the objective function, $\nabla r(\xb) = J^T\fb(\xb)$, by finite differences. If
$\|\nabla r(\xb)\| < gtol(\|\xb\|+gtol)$, we are at a locally optimal point, although the average
residual is not close enough to zero within the required tolerance. This happens if the algorithm
is trapped in a local minimum; the solution is to try a different initial guess. This can also
happen if $ftol$ is too small to attain. You can check that by inspecting $\fb(\xb)$ at the
output.

\section{Usage of the Subroutine}
I explain the inputs and outputs of this subroutine. Many arguments are optional.
\begin{description}
    \item[fun] Subroutine which returns the residual vector. It should take two arguments: the
      first argument should be $\xb$ and the second argument should be $\fb(\xb)$.
    \item[x0] Initial value of $\xb$
    \item[xout] Solution of the least square problem.
    \item[info] (Optional) Variable for information about the termination of the algorithm. For
      the details of the conditions, see the Termination of the Algorithm section.
    \begin{description}
        \item[info = 0] Improper input values
        \item[info = 1] Successful convergence.
        \item[info = 2] First order optimality satisfied
        \item[info = 3] Trust region size shrinks to the tolerance level
        \item[info = 4] Too many function calls
    \end{description}
    \item[fvalout] (Optional) Output vector of residuals at $\xb = xout$.
    \item[Jacobianout] (Optional) Output Jacobian matrix at $xout$
    \item[xtol] (Optional) Tolerance of $\xb$. If the trust region size is smaller than
      $xtol(\|\xb\|+xtol)$, the program terminates. If this is not provided, it is set to
      the default value of $10^{-8}$.
    \item[ftol] (Optional) Tolerance of the residual. The program terminates if
      $\sqrt{\sum_{i=1}^n(f_i(\xb))^2/n} < ftol$. If this is not provided, it is set to
      the default value of $10^{-8}$.
    \item[gtol] (Optional) Tolerance of the gradient. If $\|\nabla r(\xb)\| <
      gtol(\|\xb\|+gtol)$ holds, we regard first order optimality as satisfied. The default
      value is $10^{-8}$.
    \item[JacobianStep] (Optional) Relative step size for calculating the Jacobian by finite
      differences. The default value is $10^{-3}$.
    \item[display] (Optional) Option to specify the display preference. If display is 2, the
      algorithm shows the iterations. If display is 0, the algorithm shows nothing. The default
      value is 0.
    \item[maxFunctionCall] (Optional) Maximum number of function calls before
      terminating the algorithm. The default value is $100n$, where $n$ is the number of
      equations.
    \item[noupdate] (Optional) Dummy variable for Broyden's rank 1 update. If noupdate is 1,
      the algorithm recalculates the Jacobian by finite differences in each iteration. This
      almost always makes the algorithm converge in fewer iterations. But it does not mean a
      speedup in terms of CPU time, because in general the calculation of the Jacobian is
      costly if the evaluation of the objective function is costly or $n$ is large. Turning off
      the update can speed up the total computation time when $n$ is very small (say, less than
      5). The default value is 0, so Broyden's update is the default.
    \item[DeltaSpeed] (Optional) Controls the speed at which $\Delta$ shrinks. In Algorithm
      \ref{al:delta2}, line \ref{line1}, the trust region size $\Delta$ shrinks if the actual
      reduction does not match the predicted reduction. DeltaSpeed controls how fast it should
      shrink. It should always be less than one; the smaller the value, the faster
      $\Delta$ shrinks. The default value is $1/4$.
\end{description}
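The interface above mirrors MINPACK's \texttt{hybrd}, which is also what SciPy wraps. If you
just want to experiment with the method without touching Fortran, a minimal Python sketch using
SciPy's wrapper of the same MINPACK solver (not this subroutine; the toy system and tolerances
are mine) looks as follows:

\begin{verbatim}
import numpy as np
from scipy.optimize import root

# Residual function: the solver drives fun(x) to zero.
# This toy system has a solution at x = (1, 1).
def fun(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

sol = root(fun, x0=np.array([2.0, 0.5]), method='hybr',
           options={'xtol': 1e-8, 'maxfev': 200})
print(sol.success, sol.x, sol.fun)
\end{verbatim}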
\section{Other issues}

\subsection{Improving Performance}
The current implementation prefers readability of the code to performance. If you are dealing
with large scale problems, you might want to change some parts of the code. In general, this
code is written for the case where the number of parameters to be solved for is not that large
(for example, less than 50) and the evaluation of the objective function is costly. If this does
not describe your problem, you might be able to get better performance with the following
modifications.

\paragraph{QR factorization}
Other than the evaluation of the objective function, the $QR$ decomposition takes most of the
time in this algorithm. It is called at first, and called occasionally afterwards if the current
approximation is suspected to contain substantial errors. If $n$ or $m$ is large (more than a
few hundred for $n$, or more than 10,000 for $m$) and the evaluation of the objective function
is not expensive, MINPACK's slow implementation of the QR decomposition is likely to be a
bottleneck. You should consider using a high-quality linear algebra package like LAPACK. An
LAPACK routine would also improve the numerical stability of the QR decomposition.

\paragraph{Better solution of the locally quadratic problem}
Faster convergence can also be attained by replacing the dogleg method with a nearly exact
solution using Cholesky factorization. It takes more time than the dogleg method per iteration,
but the improvement per iteration is better. This is appropriate when the evaluation of the
function is costly and the number of variables is small.

\paragraph{Memory}
Memory should not be a concern on modern computers if $n$ is less than 1000.

If $n$ is larger than that, it might create a problem. This code stores a few $n$ by $n$
matrices. To give you an idea of how serious this constraint is, here is the memory required to
store such matrices: a 1,000 by 1,000 matrix with 8-byte (double precision) entries requires
about 8 Megabytes of memory. For a 5,000 by 5,000 matrix that is about 190 Megabytes, and for a
10,000 by 10,000 matrix about 762 Megabytes. In case memory binds, it is not too difficult to
decrease the memory usage by 50\% or more with a little modification. For more memory savings,
you probably need a substantial rewrite of the code based on the conjugate gradient method
instead of Newton-based updates, or the use of sparse matrices.
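The figures above are binary megabytes and easy to reproduce:

\begin{verbatim}
# Storage cost of one dense n-by-n matrix of 8-byte doubles.
for n in (1000, 5000, 10000):
    print(n, round(n * n * 8 / 2**20, 1), "MiB")   # 7.6, 190.7, 762.9
\end{verbatim}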
\subsection{Comparison with various methods}
Here is a comparison of this method with various alternatives.

\paragraph{Compared to the Newton-Raphson method with BFGS, DFP, or BHHH} The great advantage of
Newton-Raphson type methods is their speed. But they are unstable, even if we update the
Jacobian/Hessian smartly using BFGS, DFP, or BHHH. The Newton-Raphson method has only local
convergence: it might fall into a cycle or diverge, even if the function is smooth and globally
concave, unless the initial guess is sufficiently close to the true value. Here is the origin of
the agony of the initial guess. Also, Newton's method might take you to a saddle point (see
\cite{BjorckDahlquist2008} section 11.3 for more discussion). On the other hand, the modified
Powell Hybrid has a global convergence property: it always converges if the first derivative is
continuous and bounded, no matter what the initial point is. This is a great improvement in
terms of convergence. For speed, the Hybrid method tries to use the Newton-Raphson update
whenever it looks safe. The Hybrid method is as fast as Newton-Raphson if the initial guess is
good; if the initial guess is not so good, only the Hybrid method has guaranteed convergence.

\paragraph{Compared to Newton-Raphson with line search}
(See \cite{BjorckDahlquist2008} section 11.3 for more discussion.) There are two ways to obtain
global convergence. One is the trust region method, on which the Hybrid method relies, and the
other is the line search method. They differ in what to do after we get a direction of
improvement. A line search searches for the extremum along the direction, while a trust-region
method picks the step size without function evaluations, by a local quadratic approximation.

\paragraph{Compared to Conjugate Gradient and its derivatives}
The Conjugate Gradient method stops relying on the second order derivative entirely. The second
order derivative is an $n$ by $n$ matrix, which will not fit in memory if $n$ is very large, as
in deep learning; Powell's method cannot be applied in that case.

\paragraph{Compared to Amoeba/Downhill Simplex} Their advantage is the ability to deal with
discontinuity. If the objective function contains severe discontinuities near the solution, you
should consider this type. But it is VERY slow. In my experience, the Hybrid method is not that
fragile to mild discontinuity in either the objective function or its derivative, because of its
trust region feature, although there is no mathematics to support the claim. I suggest trying
the Hybrid method first because it is much faster. Its sophisticated termination criteria might
be fooled by the discontinuity, so you might want to restart it once it converges.

\paragraph{Compared to Grid Search/Simulated Annealing}
The global convergence property of the Hybrid method says only that it converges to a local
extremum. Like any derivative-based method, it does not guarantee any global property of the
point obtained. If there are many local extrema and you need a global solution, there is
probably no choice other than grid search or other globally convergent methods like simulated
annealing.
\paragraph{Compared to bisection/Brent's method} If the problem contains only one equation, we
have completely different algorithms available. Bisection and its refinement, Brent's method,
are regarded as robust algorithms that can be applied to highly discontinuous functions. The
problem is that they require two initial guesses which bracket the solution, and it is not
always easy to find them. I am not sure whether they are so robust or fast once we account for
the step of finding the bracket. The Hybrid algorithm is faster than these algorithms and needs
only one initial guess, but it may be trapped at a local extremum of the function. So it is your
call which to pick.

\paragraph{Fixed point approach} \cite{SuJudd2008} fiercely criticizes this approach, and it
makes some sense. But their solution, extensive use of canned packages, is not always a good
idea. It is not so hard to understand the algorithm, once we have good documentation.

\section{Glossary}
Here, $\fb(\xb)$ is the system of equations which we want to make zero, and $F(\xb)$ is a scalar
objective function which we want to minimize.
\begin{description}
    \item[Jacobian] The Jacobian is the matrix of first order derivatives of a vector
    valued function. \[
    J_{ij} =\pdiff{f_i(\xb)}{x_j}\qquad J = \begin{pmatrix}
    \pdiff{f_1(\xb)}{x_1} & \cdots & \pdiff{f_1(\xb)}{x_n} \\
    \vdots & \ddots & \vdots \\
    \pdiff{f_n(\xb)}{x_1} & \cdots & \pdiff{f_n(\xb)}{x_n} \\
     \end{pmatrix}
    \]
    \item[Gradient] The gradient is the vector of first order derivatives of a scalar
      valued function. Setting it to zero gives the first order conditions.  $\bigtriangledown F(\xb) =
      (\pdiff{F(\xb)}{x_1},\dots,\pdiff{F(\xb)}{x_N} )$
    \item[Hessian] The Hessian is the matrix of second order derivatives of a scalar
      valued function: $H = \bigtriangledown^2 F(\xb)$, $
      H_{ij}=\frac{\partial^2 F(\xb)}{\partial x_i \partial x_j}$. The Hessian is symmetric, and
      it is positive semidefinite at a local minimum.
    \item[Hessian and Jacobian] Since an optimization problem is the same as finding a zero of
      the first order conditions, the Hessian of the objective function is the Jacobian of the
      first order conditions. But the Jacobian of the first order conditions behaves better than
      an ordinary Jacobian, because the Hessian has several special features. Many variants of
      the Newton-Raphson algorithm exploit this fact for an efficient algorithm.
    \item[Newton-Raphson algorithm] This algorithm is both for solving a system of nonlinear
      equations and for optimization. If it is applied to solving a system of nonlinear
      equations, it uses a locally linear approximation of the equations; a linear approximation
      requires a slope, which is the Jacobian. If it is applied to an optimization problem, it
      uses a locally linear approximation to the first order conditions. The Jacobian of the
      first order conditions is the Hessian, so it requires the Hessian.
    \item[BFGS/DFP update] A smart way to calculate the Hessian in the Newton-Raphson algorithm
      for optimization. The BFGS update is weakly better than the DFP update, in the sense that
      BFGS and DFP perform almost identically for most problems, while there are some cases
      where BFGS is clearly better than DFP.
    \item[BHHH update] Another smart way to calculate the Hessian in the Newton-Raphson
      algorithm, for maximum likelihood problems. Although this is popular among
      econometricians, a plain-vanilla implementation of the BHHH update is usually inferior to
      the BFGS/DFP update.
    \item[Gauss-Newton algorithm] Yet another smart way to calculate the Hessian, applicable to
      nonlinear least square problems.
    \item[Steepest Descent algorithm/Cauchy algorithm] This algorithm is for solving an
      optimization problem. It simply picks the direction opposite to the gradient. If the step
      size is sufficiently small, this always improves the value of the objective function. For
      the determination of the step size, a line search is usually used. It is also called the
      Cauchy algorithm.
\end{description}

\bibliographystyle{aer}


\ifx\undefined\bysame
\newcommand{\bysame}{\leavevmode\hbox to\leftmargin{\hrulefill\,\,}}
\fi
\begin{thebibliography}{xx}

\harvarditem[Bjorck and Dahlquist]{Bjorck and
  Dahlquist}{2008}{BjorckDahlquist2008}
{\bf Bjorck, Ake and Germund Dahlquist}, {\it Numerical Methods in Scientific
  Computing, Volume II}, 2008.

\harvarditem[Nocedal and Wright]{Nocedal and Wright}{2000}{NocedalWright2000}
{\bf Nocedal, Jorge and Stephen Wright}, {\it Numerical Optimization},
  Springer, 2000.

\harvarditem[Powell]{Powell}{1970a}{Powell1970a}
{\bf Powell, Michael~J.D.}, ``A Fortran Subroutine For Solving Systems of
  Nonlinear Algebraic Equations,'' in Philip Rabinowitz, ed., {\it Numerical
  Methods for Nonlinear Algebraic Equations}, Gordon and Breach, Science
  Publishers Ltd., 1970, chapter~7.

\harvarditem[Powell]{Powell}{1970b}{Powell1970}
{\bf \bysame{}}, ``A Hybrid Method For Nonlinear Equations,'' in Philip
  Rabinowitz, ed., {\it Numerical Methods for Nonlinear Algebraic Equations},
  Gordon and Breach, Science Publishers Ltd., 1970, chapter~6.
\harvarditem[Su and Judd]{Su and Judd}{2008}{SuJudd2008}
{\bf Su, Che-Lin and Kenneth~L. Judd}, ``Constrained Optimization Approaches
  to Estimation of Structural Models,'' January 2008, (1460).

\end{thebibliography}

\end{document}
% ----- CHAPTER 3: NOTATION, DEFINITIONS AND BACKGROUND ----- %

%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Notation}\label{subsec:notation}

For the rest of the body of this text we set the following notation:
\begin{itemize}
\item $E$ is an elliptic curve over $\QQ$ given by the minimal Weierstrass equation
\begin{equation*}
y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6,
\end{equation*}
 where $a_1, a_3 \in \set{0,1}$, $a_2 \in \set{-1,0,1}$ and $a_4,a_6 \in \ZZ$.
\item $D(E)$, $N(E)$, $r_{al}(E)$ and $r_{an}(E)$ are the discriminant, conductor, algebraic rank and analytic rank of $E$ respectively. For ease of exposition, the dependence on $E$ will most often be indicated by a subscript $E$ instead, and when there is no ambiguity it may be dropped entirely. Also, since much of this body of work assumes the validity of the BSD conjecture, the algebraic and analytic ranks of a curve will most often be assumed to be equal, in which case the rank will just be denoted $r_E$.
\item $p$ is a (rational) prime number and $q$ is a prime power.
\item $s$ is the generic complex variable.
\item $L(E,s)$ and $\Lambda(E,s)$ are the standard and completed $L$-functions attached to $E$ respectively. Again, for ease of exposition we will in general subsume the $E$ into a subscript and write $\Les$ and $\Lams$.
\item $C(E) = C_E$ is the leading nonzero coefficient of the Taylor series of $\Lambda_E(s)$ about $s=1$; \\
$C\pr(E) = C\pr_E$ is the leading nonzero coefficient of the Taylor series of $L_E(s)$ about $s=1$.
\item $\gamma$ will always be used to denote the imaginary parts of nontrivial zeros of an $L$-function.
\item $\beta(E) = \beta_E$ is the bite of $E$, defined as $\beta_E = \sum_{\gamma\ne 0} \gamma^{-2}$, where $\gamma$ ranges over the noncentral nontrivial zeros of $\Les$.
\item $\eta$ is the Euler-Mascheroni constant $= 0.5772156649\ldots$
\item $\Gamma(s)$ is the standard Gamma function on $\CC$, and the digamma function $\digamma(s) = \frac{\Gamma\pr}{\Gamma}(s)$ is the logarithmic derivative of $\Gamma(s)$.
\end{itemize}

Furthermore, we define the following values associated to $E$ (in all cases the dependence on $E$ is understood):
\begin{itemize}
\item $b_2 = a_1^2 + 4a_2$
\item $b_4 = a_1 a_3 + 2a_4$
\item $b_6 = a_3^2 + 4a_6$
\item $b_8 = a_1^2 a_6 + 4 a_2 a_6 - a_1 a_3 a_4 + a_2 a_3^2 - a_4^2$
\item $c_4 = b_2^2 - 24 b_4$
\item $c_6 = -b_2^3 + 36 b_2 b_4  - 216 b_6$
\item $D = D(E) = -b_2^2 b_8 - 8 b_4^3 - 27 b_6^2 + 9 b_2 b_4 b_6$; this is the definition of the discriminant of $E$
\item $j = j(E) = \frac{c_4^3}{D}$ is the $j$-invariant of $E$
%\item $\omega = \frac{dx}{2y + a_1 x + a_3} = \frac{dy}{3x^2 + 2a_2 x + a_4 - a_1 y}$
\end{itemize}

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Definitions and Basic Results}

The rest of this chapter covers the basic definitions of, and results needed for, the rest of this work (namely big-Oh notation, elliptic curves and $L$-functions). Feel free to skip it if you are familiar with them.
%%%%%%%%%%%%
\subsection{Big-Oh Notation}

Given that the running time of various algorithms will be discussed over the course of this work, we recall the definitions of big-Oh and soft-Oh notation, at least in the context of how they will be used here.

\begin{definition}
Let $x$ be a positive input, and let $g(x)$ be some positive-valued reference function of $x$.
\begin{itemize}
\item We say a function $f(x) = O(g(x))$ (read ``$f$ is big-Oh of $g$''), if
\begin{equation}
\limsup_{x \to \infty} \left| \frac{f(x)}{g(x)}\right| < \infty.
\end{equation}
That is, $f(x) = O(g(x))$ if the asymptotic growth/decay rate of $f$ is bounded by some multiple of that of $g$.
\item We say a function $f(x) = \softO(g(x))$ (read ``$f$ is soft-Oh of $g$''), if there is some $m>0$ such that
\begin{equation}
\limsup_{x \to \infty} \left| \frac{f(x)}{g(x)\left(\log g(x)\right)^m}\right| < \infty.
\end{equation}
That is, $f(x) = \softO(g(x))$ if the asymptotic growth/decay rate of $f$ scales like that of $g$, up to the inclusion of log factors.
\end{itemize}
\end{definition}
Note that $f(x) = \softO(g(x))$ implies that $f(x) = O(g(x)^{1+\epsilon})$ for any $\epsilon>0$, but not vice versa; there are complexity classes strictly between the two. In this thesis we will work exclusively with soft-Oh time complexities, so there is no need to elaborate on those classes here.\\

\begin{definition}
Let $A$ be an algorithm which takes input of size $k$, where for simplicity we may think of $k$ as a positive integer. Let $t_A(k)$ be the running time of $A$, thought of as a function of the input size $k$.
\begin{itemize}
\item $A$ is said to have {\it polynomial time complexity} if there is some $m>0$ such that $t_A(k) = O(k^{m})$, i.e.\ the asymptotic running time of the algorithm scales like some polynomial function of $k$.
\item If $t_A(k) = O(k^{\epsilon})$ for any $\epsilon>0$, then $A$ is said to have {\it sub-polynomial time complexity}. Note that if $t_A(k) = \softO(1)$, then $A$ has sub-polynomial time complexity.
\item If no $m>0$ exists such that $t_A(k) = O(k^m)$, then $A$ is said to have {\it super-polynomial time complexity}. If there is some $m>1$ such that $t_A(k) = O(m^k)$, then $A$ is said to have {\it exponential time complexity}.
\end{itemize}
\end{definition}
Again, there are complexity classes strictly between polynomial and exponential complexity, but we won't consider them in this thesis. The same terminology can be applied to the space requirements of an algorithm, wherein we would replace the word `time' with `space'. \\

Note that in theoretical computer science $k$ is typically the number of bits needed to specify the input to the algorithm. However, in computational number theory the input itself is often a positive integer; many algorithms scale with some polynomial of the input's magnitude as opposed to the number of bits defining the input. We therefore highlight the distinction between ``polynomial time in the number of bits of the input'' and ``polynomial time in the magnitude of the input'': the former is asymptotically much faster than the latter. When discussing time complexities we will always be clear to delineate what the measure of complexity $k$ is. \\
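To make the bits-versus-magnitude distinction concrete, here is a small Python illustration (mine, purely illustrative, not an algorithm used in this thesis): trial division primality testing takes roughly $\sqrt{n}$ steps, which is polynomial in the magnitude of $n$ but exponential in its bit length, since adding one bit to $n$ multiplies the work by about $\sqrt{2}$.

\begin{verbatim}
def is_prime_trial_division(n):
    # About sqrt(n) iterations: polynomial in the magnitude of n,
    # exponential in the number of bits of n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime_trial_division(10**9 + 7))   # fast: ~31623 iterations
\end{verbatim}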
%%%%%%%%%%%%
\subsection{Elliptic curves}

\begin{definition}
An elliptic curve $E$ is a genus 1 smooth projective curve with a marked point $\cO$. $E$ is defined over a field $K$ if $E$ may be represented by the {\it Weierstrass equation} $y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$, where $a_1,\ldots, a_6 \in K$.
\end{definition}

For elliptic curves defined over $\QQ$, we may always find a model for $E$ such that $a_1, a_3 \in \set{0,1}$, $a_2 \in \set{-1,0,1}$ and $a_4,a_6 \in \ZZ$. Furthermore, there is the notion of {\it minimality} when it comes to models for elliptic curves. Without going into the definition thereof, unless stated otherwise we will assume that any given elliptic curve Weierstrass equation is specified by its global minimal model.

\begin{definition}
The set of $K$-rational points on $E$ is denoted $E(K)$. $E(K)$ comprises an abelian group, with the ``point at infinity'' $\cO$ acting as the group identity element.
\end{definition}

It is often useful to view an elliptic curve $E$ as the vanishing locus of the polynomial
\begin{equation}\label{eqn:E_poly}
f(x,y) = y^2 + a_1 xy + a_3 y - x^3 - a_2 x^2 - a_4 x - a_6.
\end{equation}
 That is, $E(K) = \set{(x,y) \in K^2: \; f(x,y) = 0}$, along with the point at infinity $\cO$. \\

For a rational elliptic curve $E/\QQ$, we may consider the reduced curve $\tilde{E}/\Fp$ for any prime $p$. If $E/\QQ$ is given by the global minimal model $y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$, then the reduced curve is given by $y^2 + \overline{a_1} xy + \overline{a_3} y = x^3 + \overline{a_2} x^2 + \overline{a_4} x + \overline{a_6}$, where $\overline{a_i}$ is $a_i$ reduced modulo $p$. For $p = 2$ or $3$ we may have to move to a different model for $E$ first to avoid the reduced curve being automatically singular.

\begin{definition}
A prime $p$ is called {\it good} if $\tilde{E}/\Fp$ is non-singular. The reduced curve is then an elliptic curve over $\Fp$ (by definition) which we denote by $E/\Fp$; $E$ is said to have {\it good reduction at $p$}. Otherwise, $p$ is said to be {\it bad}, the reduced (singular) curve is denoted $\tilde{E}/\Fp$, and $E$ is said to have {\it bad reduction at $p$}.
\end{definition}

\begin{theorem}
For any $E/\QQ$, the set of bad primes is finite and non-empty.
\end{theorem}

Singular reduced curves may be thought of as finite-field analogues of singular cubics over the rationals, for example those given by $y^2 = x^3$ and $y^2 = x^3+x^2$ as seen below. Singular curves have a (unique) {\it singular point}, which is by definition where the partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are both zero (here $f$ is as given by equation \ref{eqn:E_poly}).

\begin{figure}[!h]
    \centering
    \includegraphics[width=1.0\textwidth]{graphics/singular_cubics.png}
    \caption{An example of two singular cubics over the rationals. The singular point for both curves is at the origin; for the left curve the singular point is a {\it cusp}, and for the right curve it is a {\it node}.}
    \label{fig:singular_cubics}
\end{figure}

In the finite field setting the notion of partial derivatives still makes sense, so one may define singular points accordingly.
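Since the primes of bad reduction of a minimal model are exactly the primes dividing the minimal discriminant, they are easy to find in practice. A short Python sketch (the helper name is mine) using the $b$-invariant formulae from Section \ref{subsec:notation}, applied to the curve $y^2 + y = x^3 - x^2$:

\begin{verbatim}
def discriminant(a1, a2, a3, a4, a6):
    # b-invariants and discriminant, as defined in the Notation section
    b2 = a1 * a1 + 4 * a2
    b4 = a1 * a3 + 2 * a4
    b6 = a3 * a3 + 4 * a6
    b8 = a1 * a1 * a6 + 4 * a2 * a6 - a1 * a3 * a4 + a2 * a3 * a3 - a4 * a4
    return -b2 * b2 * b8 - 8 * b4**3 - 27 * b6 * b6 + 9 * b2 * b4 * b6

# y^2 + y = x^3 - x^2 has (a1, a2, a3, a4, a6) = (0, -1, 1, 0, 0)
print(discriminant(0, -1, 1, 0, 0))   # -11: bad reduction only at p = 11
\end{verbatim}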
Bad reduction at a prime may be classified into one of three types according to the nature of the tangent space at the singular point on $\tilde{E}/\Fp$.
\begin{definition}
Let $E$ have bad reduction at $p$; let $P$ be the singular point on $\tilde{E}/\Fp$, and let $T_P(E)$ be the tangent space at $P$.
\begin{itemize}
\item If $T_P(E)$ is one-dimensional, then $P$ is a cusp, and $E$ is said to have {\it additive reduction} at $p$.
\item Otherwise $T_P(E)$ is two-dimensional, and $P$ is then a node; $E$ is then said to have {\it multiplicative reduction} at $p$. Furthermore, multiplicative reduction can be decomposed into two cases:
\begin{itemize}
\item If $T_P(E)$ is defined over $\Fp$, then $E$ is said to have {\it split multiplicative reduction} at $p$;
\item Otherwise $T_P(E)$ is defined over a quadratic extension of $\Fp$, and $E$ is said to have {\it non-split multiplicative reduction} at $p$.
\end{itemize}
\end{itemize}
\end{definition}

Primes of bad reduction are packaged together into an invariant called the {\it conductor} of $E$:
\begin{definition}
The conductor of $E$, denoted by $N_E$, is a positive integer given by
\begin{equation}
N_E = \prod_{p} p^{f_p(E)},
\end{equation}
where $p$ ranges over all primes, and for $p \ne 2$ or $3$,
\begin{equation}
f_p(E) = \begin{cases} 0, & \text{$E$ has good reduction at $p$} \\ 1, & \text{$E$ has multiplicative reduction at $p$} \\ 2, & \text{$E$ has additive reduction at $p$.}\end{cases}
\end{equation}
For $p=2$ and $3$, the exponent $f_p(E)$ is still zero if $p$ is good; however the exponent may be as large as $8$ and $5$ respectively if $p$ is bad.
\end{definition}
The ``proper'' definition of the conductor is Galois representation-theoretic and is defined in terms of the representation of the inertia group at $p$ on the torsion subgroup of $E$; for $p\ne 2$ or $3$ this reduces to the definition given above, but for $2$ and $3$ there may be nontrivial wild ramification which increases the exponent up to the stated amounts. A full technical definition of the conductor is given in \cite[pp. 379-396]{Sil-1994}. In any case (including $2$ and $3$), the exponent $f_p(E)$ may be computed efficiently by Tate's algorithm, as detailed in the previous section of the same book \cite[pp. 361-379]{Sil-1994}. \\

%%%%%%%%%%%%
\subsection{Elliptic curve $L$-functions}

We now move on to the definition of the $L$-function attached to an elliptic curve. For this we must define the numbers $a_p(E)$:
\begin{definition}\label{def:a_p} \mbox{}
\begin{itemize}
\item For good primes $p$ (i.e. when $p \nmid N_E$), let
\begin{equation}
a_p(E) = p+1-\#\set{\EFp},
\end{equation}
where $\#\set{\EFp}$ is the number of points on $E/\Fp$;
\item For bad primes (when $p \mid N_E$), let
\begin{equation}
a_p(E) := \begin{cases}
+1 & \text{if $E$ has split multiplicative reduction at $p$} \\
-1 & \text{if $E$ has non-split multiplicative reduction at $p$} \\
0 & \text{if $E$ has additive reduction at $p$.}
\end{cases}
\end{equation}
\end{itemize}
\end{definition}

Hasse's theorem states that the number of points on $E$ modulo $p$ can never be too far from $p+1$:
\begin{theorem}[Hasse, 1936]
For all elliptic curves $E/\QQ$ and all primes $p$,
\begin{equation}
|a_p(E)| \le 2\sqrt{p}.
\end{equation}
\end{theorem}

For ease of notation, when $E$ is fixed we will let $a_p := a_p(E)$, letting the dependence on $E$ be understood. \\
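For a good prime, $a_p$ can be computed directly from Definition \ref{def:a_p} by brute-force point counting, which also lets one check Hasse's bound numerically. A naive $O(p^2)$ Python sketch (the function name is mine; real implementations use far faster methods such as Schoof's algorithm), again for the conductor-11 curve $y^2 + y = x^3 - x^2$:

\begin{verbatim}
import math

def a_p(a1, a2, a3, a4, a6, p):
    count = 1                                 # the point at infinity
    for x in range(p):
        for y in range(p):
            if (y*y + a1*x*y + a3*y) % p == (x**3 + a2*x*x + a4*x + a6) % p:
                count += 1
    return p + 1 - count

for p in (5, 7, 13):                          # good primes for this curve
    ap = a_p(0, -1, 1, 0, 0, p)
    assert abs(ap) <= 2 * math.sqrt(p)        # Hasse's theorem
    print(p, ap)
\end{verbatim}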
The Sato-Tate Conjecture, now a theorem thanks to Taylor and his collaborators, goes even further, giving an asymptotic distribution of the $a_p$:
\begin{theorem}[Taylor et al., 2006-]
For fixed $E/\QQ$, the set of normalized $a_p$ values $\set{\frac{a_p}{2\sqrt{p}}: p \text{ prime}}$ obeys a semicircular distribution on the interval $[-1,1]$. That is, for $-1\le a \le b \le 1$, the asymptotic proportion of primes for which $a \le \frac{a_p}{2\sqrt{p}} \le b$ is equal to the proportion of the area under the unit semicircle between $a$ and $b$.
\end{theorem}

\begin{definition} \mbox{}

The $L$-function attached to $E$ is a complex analytic function $L_E(s)$, defined initially on some right half-plane of the complex plane.
\begin{itemize}
\item The Euler product of the $L$-function attached to $E$ is given by
\begin{equation}
L(E,s) = \prod_{p} \frac{1}{1 - a_p p^{-s} + \epsilon(p)p^{1-2s}},
\end{equation}
where $\epsilon(p) = 0$ for bad $p$, and $1$ for good $p$.
\item The Dirichlet series for $L_E(s)$ is given by
\begin{equation}
L(E,s) = \sum_{n=1}^{\infty} a_n n^{-s},
\end{equation}
where for composite $n$, $a_n$ is defined to be the integer coefficient of $n^{-s}$ obtained by multiplying out the Euler product for $L(E,s)$.
\end{itemize}
\end{definition}
Again, we will often write $\Les$ or just $L(s)$ to simplify notation. \\

\begin{corollary} \mbox{}
\begin{itemize}
\item Hasse's Theorem implies that the Euler product and Dirichlet series for $\Les$ converge absolutely for $\Re(s) > \frac{3}{2}$.
\item Sato-Tate implies that the Euler product and Dirichlet series for $\Les$ converge conditionally for $\Re(s) > \frac{1}{2}$.
\end{itemize}
\end{corollary}
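Multiplying out the Euler product gives a concrete recursion for the $a_n$: they are multiplicative, and at prime powers satisfy $a_{p^{k+1}} = a_p a_{p^k} - \epsilon(p)\, p\, a_{p^{k-1}}$. A Python sketch (function names mine) producing $a_1,\dots,a_N$ from a table \texttt{ap} of $a_p$ values, which must contain every prime up to $N$ (e.g. from the point-counting sketch above):

\begin{verbatim}
def a_prime_power(ap, bad_primes, p, k):
    # a_{p^k} via a_{p^{k+1}} = a_p a_{p^k} - eps(p) p a_{p^{k-1}},
    # with eps(p) = 1 at good primes and 0 at bad primes.
    eps = 0 if p in bad_primes else 1
    prev, cur = 1, ap[p]                      # a_{p^0} = 1, a_{p^1} = a_p
    for _ in range(k - 1):
        prev, cur = cur, ap[p] * cur - eps * p * prev
    return cur

def dirichlet_coeffs(ap, bad_primes, N):
    a = [0] * (N + 1)
    a[1] = 1
    for n in range(2, N + 1):
        p = min(q for q in ap if n % q == 0)  # a prime factor of n
        k, m = 0, n
        while m % p == 0:                     # strip p^k from n
            m //= p
            k += 1
        # multiplicativity: a_{m p^k} = a_m * a_{p^k}, gcd(m, p) = 1
        a[n] = a[m] * a_prime_power(ap, bad_primes, p, k)
    return a
\end{verbatim}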
In this work we more often use the completed $L$-function attached to $E$:
\begin{definition}
The {\it completed $L$-function} attached to $E$ is given by
\begin{equation}
\Lambda_E(s) = (N_E)^{\frac{s}{2}}(2\pi)^{-s}\Gamma(s)\Les,
\end{equation}
where $N_E$ is the conductor of $E$ and $\Gamma(s)$ the usual Gamma function on $\CC$.
\end{definition}

Thanks to the modularity theorem, we may in fact analytically continue $\Les$ and $\Lams$ to entire functions defined on all of $\CC$.
\begin{theorem}[Breuil, Conrad, Diamond, Taylor, Wiles et al., 1995, 1999, 2001] \mbox{}\\
There exists an integral newform $f = \sum_n a_n q^n$ of weight $k=2$ and level $N_E$ such that $\Les = L_f(s)$.
\end{theorem}

The modularity theorem above is essentially the converse of a theorem of Shimura from the 1960s: if $f$ is a weight $2$ newform of level $N_E$ with rational Fourier coefficients, then there exists some elliptic curve $E/\Q$ of conductor $N_E$ such that $L_f(s) = L_E(s)$. Hence any theorem about elliptic curve $L$-functions is really a theorem about $L$-functions of weight 2 newforms in disguise. \\

\begin{corollary} \mbox{}
\begin{itemize}
\item $\Lams$ extends to an entire function on $\CC$. Specifically, $\Lams$ obeys the {\it functional equation}
\begin{equation}
\Lams = w_E \Lambda_E(2-s),
\end{equation}
where $w_E \in \set{-1, 1}$ is the eigenvalue of the Atkin-Lehner involution acting on the newform attached to $E$.
\item $\Les$ extends to an entire function on $\CC$ via the definition of $\Lams$ and the functional equation above.
\end{itemize}
\end{corollary}

We reproduce the analytic continuation for $\Lams$ explicitly below. Define the auxiliary function $\lambda_E(s)$ by
\begin{equation}\label{eqn:Lams_analytic_continuation}
\lambda_E(s) = \left(\frac{\sqrt{N_E}}{2\pi}\right)^{s} \sum_{n=1}^\infty a_n n^{-s}\Gamma \left(s,\frac{2\pi n}{\sqrt{N_E}}\right),
\end{equation}
where all the quantities are as defined previously, and $\Gamma(s,x)$ is the upper incomplete Gamma function on $\CC\cross \RR_{>0}$. The sum converges absolutely for any $s$, so $\lambda_E(s)$ is entire. Then
\begin{equation}
\Lambda_E(s) = \lambda_E(s) + w_E \lambda_E(2-s).
\end{equation}
Knapp goes through the proof of this formula in \cite[pp. 270-271]{Kna-1992}. \\
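Since most mathematical software provides the upper incomplete Gamma function, Equation \ref{eqn:Lams_analytic_continuation} can be evaluated directly. A hedged \texttt{mpmath} sketch (the function names and truncation strategy are mine): the terms decay like $e^{-2\pi n/\sqrt{N_E}}$, so a modest number of coefficients $a_n$ suffices at moderate precision.

\begin{verbatim}
from mpmath import mp, pi, sqrt, gammainc, power

mp.dps = 30    # working precision in decimal digits

def lam(s, a, N):
    # lambda_E(s) with the sum truncated at len(a) - 1 terms;
    # a[n] are the Dirichlet coefficients, a[0] is unused.
    C = sqrt(N) / (2 * pi)
    total = mp.mpf(0)
    for n in range(1, len(a)):
        total += a[n] * power(n, -s) * gammainc(s, 2 * pi * n / sqrt(N))
    return power(C, s) * total

def Lambda(s, a, N, w):
    # analytic continuation: Lambda_E(s) = lambda_E(s) + w_E lambda_E(2 - s)
    return lam(s, a, N) + w * lam(2 - s, a, N)

# For the conductor-11 curve above (w_E = +1, rank 0), Lambda(1, a, 11, 1)
# computed from a few dozen a_n should come out visibly nonzero.
\end{verbatim}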
\begin{definition}
$E$ is said to have {\it even parity} if $w_E = 1$, and {\it odd parity} if $w_E = -1$.
\end{definition}

The functional equation for $\Lams$ shows that it is either symmetric or antisymmetric about the line $\Re(s) = 1$; moreover, since all the constituent parts of $\Lams$ are defined over the reals, $\Lams$ is also conjugate symmetric about the real axis. It follows that $\Lams$ is highly symmetric about the point $s=1$. This is formalized in the following statement:
\begin{proposition}
As a function of $s$, $\Lambda_E(1+s)$ is even if $E$ has even parity, and odd if $E$ has odd parity.
\end{proposition}
This follows immediately from the functional equation. \\

\begin{definition} For elliptic curve $L$-functions:
\begin{itemize}
\item The point $s=1$ is called the {\it central point} or the {\it critical point}.
\item The vertical line of symmetry $\Re(s)=1$ is called the {\it critical line}.
\item The vertical strip $0 \le \Re(s) \le 2$ is called the {\it critical strip}.
\end{itemize}
\end{definition}

There is an oft-quoted anecdote that the way to differentiate analytic number theorists from algebraic number theorists is that for elliptic curve $L$-functions the former normalize so that the critical line lies at $\Re(s) = \frac{1}{2}$ (as is the case with $\zeta(s)$), while the latter keep the critical line at $\Re(s)=1$. In this thesis we work mostly with $L_E(1+s)$ and $\Lambda_E(1+s)$, which shifts the critical line to the imaginary axis; a move which is bound to antagonize both parties equally! \\

A standard result for $L$-functions of Hecke eigenforms (of which elliptic curve $L$-functions are a subset) is that ``all the interesting stuff happens inside the critical strip'':
\begin{proposition}
For any $E/\QQ$,
\begin{equation}
\Lambda_E(1+s) \ne 0 \;\;\mbox{when}\;\; |\Re(s)| > \frac{1}{2}.
\end{equation}
\end{proposition}
This can be proven by showing that the logarithmic derivative of $\Lambda_E(1+s)$ converges absolutely for $\Re(s) > \frac{1}{2}$; see the corollary to Proposition \ref{lem:ldLe_bound} for a proof. The statement can, with a bit more work, be strengthened to assert that all zeros are {\it strictly} inside the critical strip. In fact, the Generalized Riemann Hypothesis asserts that
\begin{equation}
\Lambda_E(1+s) \ne 0  \;\;\mbox{when}\;\; \Re(s) \neq 0.
\end{equation}
From the functional equation we get that $\Les$ has simple zeros at the nonpositive integers; these are denoted the {\it trivial} zeros of $\Les$. Zeros inside the critical strip are called {\it nontrivial}. The Generalized Riemann Hypothesis (formally stated in Section \ref{sec:conjectures}) asserts that all nontrivial zeros of $\Les$ lie on the critical line $\Re(s)=1$.

If $\Les$ has a zero at the central point, it may or may not have multiplicity greater than 1.
\begin{definition}
Let $E$ be an elliptic curve over $\Q$ and let $\Les$ be its $L$-series. The {\it analytic rank} of $E$, denoted $r_{an}(E)$ or just $r_{an}$, is the order of vanishing of $L_E(s)$ at the central point $s=1$. That is, if the Taylor series of $\Les$ about $s=1$ is
\begin{equation}
L_E(1+s) = a_0 + a_1 s + a_2 s^2 + \cdots,
\end{equation}
then $a_n = 0$ for $0 \le n < r_{an}$ and $a_{r_{an}} \ne 0$.
\end{definition}
We will work a lot with the leading coefficient of the $L$-series at the central point, so it's worth giving it a name. To this end:
\begin{definition} \mbox{}
\begin{itemize}
\item Let $C\pr_E$ (or just $C\pr$ when $E$ is fixed) be the leading coefficient of $L_E(s)$ at the central point (the constant $a_{r_{an}}$ in the definition above).
\item Let $C_E$ (or just $C$ when $E$ is fixed) be the leading coefficient of $\Lams$ at the central point.
\end{itemize}
\end{definition}
Observe that $C\pr_E = \frac{2\pi}{\sqrt{N_E}}\cdot C_E$. We will most often work with the latter, hence the notation. \\

We may use Equation \ref{eqn:Lams_analytic_continuation} to produce formulae for the value of $\Lams$ and its higher derivatives at the central point:
\begin{proposition} \mbox{}
\begin{enumerate}
\item \begin{equation}
\Lambda_E(1) = \begin{cases} \frac{\sqrt{N_E}}{\pi} \sum_{n=1}^{\infty}\frac{a_n}{n} e^{-\frac{2\pi}{\sqrt{N_E}}\cdot n}, & w_E = 1 \\ 0, & w_E = -1 \end{cases}.
\end{equation}
\item When $m$ has the same parity as $E$, the $m$th derivative of $\Lams$ at the central point is given by
\begin{equation}\label{eqn:central_derivatives}
\Lambda_E^{(m)}(1) = 2 \sum_{n=1}^\infty a_n \int_{1}^{\infty} \left(\log \frac{t}{\sqrt{N_E}}\right)^m e^{-\frac{2\pi n}{\sqrt{N_E}}\cdot t} \; dt.
\end{equation}
When $m$ is opposite in parity to $E$, then $\Lambda_E^{(m)}(1) = 0$.
\end{enumerate}
\end{proposition}

\begin{proof}
Observe that the series in equation \ref{eqn:Lams_analytic_continuation} converges uniformly over the interval of integration; we may therefore swap the integral and summation signs. After a change of variables we get
\begin{equation*}
\lambda_E(1+s) = N_E^{\frac{1+s}{2}}  \int_{\frac{1}{\sqrt{N_E}}}^{\infty} t^s f_E(it) \; dt  = N_E^{\frac{1+s}{2}}  \sum_{n=1}^\infty a_n \int_{\frac{1}{\sqrt{N_E}}}^{\infty} t^s e^{-2\pi nt} \; dt,
\end{equation*}
where $f_E$ is the cusp form attached to $E$. Both $t^s e^{-2\pi nt}$ and its derivative w.r.t. $s$ are continuous over the integration interval for any $n$, so by the Leibniz integration rule we may differentiate under the integral sign and evaluate at $s=0$ to get
\begin{equation}\label{eqn:lambda_derivs}
\lambda_E^{(m)}(1) = \sqrt{N_E}\cdot \sum_{n=1}^\infty a_n \int_{\frac{1}{\sqrt{N_E}}}^{\infty} (\log t)^m e^{-2\pi n t} \; dt.
\end{equation}
Equation \ref{eqn:central_derivatives} follows by substituting $t \mapsto \sqrt{N_E} \cdot t$. For $m=0$ the integrals may be evaluated directly: $\int_{1}^{\infty} e^{-\frac{2\pi n t}{\sqrt{N_E}}} \; dt = \frac{\sqrt{N_E}}{2\pi n} e^{-\frac{2\pi n}{\sqrt{N_E}}}$.
\end{proof}

Equation \ref{eqn:central_derivatives} allows us to establish bounds on the coefficients of the Taylor expansion of $\Lams$ about the central point.
For this we will need the following technical lemma:
\begin{lemma}\label{lem:central_deriv_int_bounds}
Let $N_E,n \in \ZZ_{>0}$, and suppose $m$ is a positive integer such that $m < \frac{1}{2}\log N_E$. Then
\begin{equation}
\left| \int_{\frac{1}{\sqrt{N_E}}}^{\infty} (\log t)^{m} e^{-2\pi n t} \; dt \right| < \frac{\left(\frac{1}{2} \log N_E\right)^{m}}{2\pi n}\left[ e^{-\frac{2\pi n}{\sqrt{N_E}}} + \frac{e^{-2\pi n\sqrt{N_E}}}{2\pi n \sqrt{N_E}} \right].
\end{equation}
\end{lemma}
\begin{proof}
We split the integral in two, dealing with the intervals $\frac{1}{\sqrt{N_E}}$ to $\sqrt{N_E}$ and $\sqrt{N_E}$ to $\infty$ separately. Now $(\log t)^{m}$ is at most $(\frac{1}{2}\log N_E)^m$ in magnitude on $[\frac{1}{\sqrt{N_E}},\sqrt{N_E}]$, so
\begin{equation*}
\left| \int_{\frac{1}{\sqrt{N_E}}}^{\sqrt{N_E}} (\log t)^{m} e^{-2\pi n t} \; dt \right| < \left(\frac{1}{2} \log N_E\right)^m \int_{\frac{1}{\sqrt{N_E}}}^{\sqrt{N_E}} e^{-2\pi n t} \; dt < \frac{\left(\frac{1}{2} \log N_E\right)^{m}}{2\pi n}\left(e^{-\frac{2\pi n}{\sqrt{N_E}}} - e^{-2\pi n\sqrt{N_E}}\right).
\end{equation*}
For the integral on $[\sqrt{N_E},\infty)$, we use integration by parts to get
\begin{equation*}
\int_{\sqrt{N_E}}^{\infty} \left(\log t \right)^{m} e^{-2\pi n t} \; dt = \frac{\left(\frac{1}{2} \log N_E\right)^{m}}{2\pi n}\cdot e^{-2\pi n\sqrt{N_E}} + \frac{m}{2\pi n} \int_{\sqrt{N_E}}^{\infty} \frac{\left(\log t \right)^{m-1}}{t} e^{-2\pi n t} \; dt.
\end{equation*}
If $m < \frac{1}{2}\log N_E$, then $\frac{\left(\log t \right)^{m-1}}{t}$ is decreasing for $t > \sqrt{N_E}$, so we have
\begin{equation*}
\frac{m}{2\pi n} \int_{\sqrt{N_E}}^{\infty} \frac{\left(\log t \right)^{m-1}}{t} e^{-2\pi n t} \; dt < \frac{m\left(\frac{1}{2} \log N_E\right)^{m-1}}{2\pi n\sqrt{N_E}} \int_{\sqrt{N_E}}^{\infty} e^{-2\pi n t} \; dt < \frac{\left(\frac{1}{2} \log N_E\right)^{m}}{(2\pi n)^2 \sqrt{N_E}} \cdot e^{-2\pi n \sqrt{N_E}}.
\end{equation*}
Adding up the two bounds gives the stated result.
\end{proof}
With the above lemma in hand, we establish an upper bound on the magnitude of the $m$th Taylor coefficient of $\Lams$ at the central point.
\begin{proposition}\label{prop:central_deriv_bounds}
Let $E$ have conductor $N_E$ and completed $L$-function $\Lams$. Then so long as $m<\frac{1}{2}\log N_E$, the $m$th derivative of $\Lams$ at the central point is bounded explicitly in terms of $N_E$ and $m$ by
\begin{equation}
\left| \Lambda_E^{(m)}(1)\right| < \frac{(\frac{1}{2}\log N_E)^m}{2\pi^2}\left(N_E + \frac{1}{e^{2\pi\sqrt{N_E}}-1} \right).
\end{equation}
That is, for fixed $m$ the $m$th Taylor coefficient of $\Lams$ is $O\left( N_E(\frac{1}{2}\log N_E)^m\right)$; the second term inside the final parentheses is negligible for $N_E\gg1$.
\end{proposition}

\begin{proof}
From Lemma \ref{lem:central_deriv_int_bounds} and Equation \ref{eqn:lambda_derivs} we have that
\begin{equation*}
\left| \Lambda_E^{(m)}(1)\right| < 2 \sqrt{N_E} \sum_{n=1}^{\infty} |a_n| \cdot \left[\frac{\left(\frac{1}{2} \log N_E\right)^{m}}{2\pi n}\left( e^{-\frac{2\pi n}{\sqrt{N_E}}} + \frac{e^{-2\pi n\sqrt{N_E}}}{2\pi n \sqrt{N_E}} \right)\right].
\end{equation*}
Using the bound $|a_n(E)| \le n$ for any $E$, we get
\begin{equation*}
\left| \Lambda_E^{(m)}(1)\right| < \frac{ \sqrt{N_E}\left(\frac{1}{2} \log N_E\right)^{m}}{\pi} \sum_{n=1}^{\infty} e^{-\frac{2\pi n}{\sqrt{N_E}}} + \frac{\left(\frac{1}{2} \log N_E\right)^{m}}{2\pi^2} \sum_{n=1}^{\infty} \frac{e^{-2\pi n\sqrt{N_E}}}{n}.
\end{equation*}
Now
\begin{equation*}
\sum_{n=1}^{\infty} e^{-\frac{2\pi n}{\sqrt{N_E}}} = \frac{1}{e^{\frac{2\pi}{\sqrt{N_E}}}-1}< \frac{\sqrt{N_E}}{2\pi},
\end{equation*}
while $\sum_{n=1}^{\infty} \frac{e^{-2\pi n\sqrt{N_E}}}{n} \le \sum_{n=1}^{\infty} e^{-2\pi n\sqrt{N_E}} = \frac{1}{e^{2\pi\sqrt{N_E}}-1}$.
\end{proof}

Note that for fixed $N_E$, if we allow $m \to \infty$, the $m$th derivative can in fact grow like $O\left(\frac{m!!}{(2\pi e)^{m/2}}\right)$, where $m!! = m(m-2)\cdots$ is the double factorial of $m$, i.e.\ faster than exponentially in $m$. However, this behavior only starts to show when $m\gg\log N_E$ -- hence our restriction on the magnitude of $m$. This will in practice never be an issue: we are primarily interested in the central derivatives in order to establish results about the analytic rank of $E$. Since the maximum analytic rank grows more slowly than $\log N_E$ (c.f. Corollary \ref{cor:logderiv_rank_bound}), we will never need to consider $\Lambda_E^{(m)}(1)$ for $m> \frac{1}{2}\log N_E$. \\

Crucially for this thesis, $L_E(s)$ and its derivatives can be provably computed to a given precision in time that scales with the square root of the conductor of $E$:
\begin{proposition}\label{prop:L_E_time_complexity}
When $m<\frac{1}{2} \log N_E$, the $m$th derivative of $L_E(s)$ at the central point can be provably computed to $k$ bits precision in $\softO(k\cdot \sqrt{N_E})$ time, where $N_E$ is the conductor of $E$.
\end{proposition}
This is proven in full in the PhD thesis of Robert Bradshaw \cite{Bra-2010}. The basic argument is as follows:
\begin{enumerate}
\item Since the two differ by an exponential and a Gamma factor, computing $L_E^{(m)}(1)$ takes the same order of magnitude of time as computing $\Lambda_E^{(m)}(1)$. This may be achieved, for example, by the formula given in Equation \ref{eqn:central_derivatives};
\item The integral $\int_{1}^{\infty} \left(\log \frac{t}{\sqrt{N_E}}\right)^m e^{-\frac{2\pi n}{\sqrt{N_E}}\cdot t} \; dt$ can be computed to $k$ bits precision in time that scales proportionally to $k$, is independent of $n$, and is subpolynomial in $N_E$;
\item The number of terms needed in the sum to achieve $k$ bits precision is $O\left( \log(N_E)^m\sqrt{N_E}\right)$;
\item Computing $a_n$ can be done in time polynomial in $\log n$;
\item Combining the above, the computation time is dominated by evaluating $O( \log(N_E)^m\sqrt{N_E})$ integrals and $a_n$ values. That is, the sum can be evaluated to $k$ bits precision in time scaling with $k \sqrt{N_E}$ times some power of $\log N_E$.
\end{enumerate}
We will use the result of Proposition \ref{prop:L_E_time_complexity} directly in the proof of Theorem \ref{thm:main_theorem}. In fact, the $\softO(\sqrt{N_E})$ time needed to evaluate central derivatives of $\Les$ is the computational bottleneck in Algorithm \ref{algo:compute_rank}; all other steps run in time subpolynomial in $N_E$.
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Three Big Conjectures}\label{sec:conjectures}

The main results in this thesis are contingent on the Birch and Swinnerton-Dyer conjecture, the Generalized Riemann Hypothesis and the ABC conjecture. We reproduce the three conjectures in full below; citations for the papers in which they first appeared or were fully formulated are listed at the top of each conjecture. \\

The Birch and Swinnerton-Dyer conjecture (BSD) is needed to establish a way to compute, and hence bound the magnitude of, the leading coefficient of $\Les$ at the central point.
\begin{conjecture}[Birch, Swinnerton-Dyer]\label{conj:BSD} \cite{BSD-1965}
\mbox{}
\begin{enumerate}
\item $r_{an} = r$; that is, the analytic rank of $E$ is equal to its algebraic rank.
\item The leading coefficient at the central point of $L_E(s)$ is given by
\begin{equation}\label{eqn:BSD_formula}
C\pr_E = \left(\frac{\Omega_E\cdot\Reg_E\cdot\#\Sha(E/\Q)\cdot\prod_p c_p}{(\#E_{\text{Tor}}(\Q))^2}\right),\end{equation}
where
\begin{itemize}
\item $r$ is the algebraic rank of $E(\Q)$,
\item $\Omega_E$ is the real period of (an optimal model of) $E$,
\item $\Reg_E$ is the regulator of $E$,
\item $\#\Sha(E/\Q)$ is the order of the Shafarevich-Tate group attached to $E/\Q$,
\item $\prod_p c_p$ is the product of the Tamagawa numbers of $E$, and
\item $\#E_{\text{Tor}}(\Q)$ is the number of rational torsion points on $E$.
\end{itemize}
\end{enumerate}
\end{conjecture}

For an excellent description of the conjecture and a breakdown of the arithmetic invariants mentioned above, see Andrew Wiles' official description of the BSD Conjecture on the Clay Math website \cite{Wil-BSD}. \\

The Generalized Riemann Hypothesis (GRH), as the name suggests, generalizes the famous conjecture first posed by Bernhard Riemann in 1859 \cite{Rie-1859}. The (standard) Riemann Hypothesis asserts that all nontrivial zeros of the Riemann zeta function $\zeta(s)$ occur on the vertical line $\Re(s)=\frac{1}{2}$. It is hard to track down where the Generalized Riemann Hypothesis was first formulated in its full generality (Conrey gives a good exposition in \cite{Con-2003}), but it asserts that for a large class of suitably defined $L$-functions, the nontrivial nonreal zeros all occur on a single vertical line in the complex plane (where the exact lateral placement of said line depends on the class of $L$-function being considered). We use GRH as it applies to elliptic curve $L$-functions:
\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The Three Big Conjectures}\\label{sec:conjectures}\n\nThe main results in this thesis are contingent on the Birch and Swinnerton-Dyer conjecture, the Generalized Riemann Hypothesis and the ABC conjecture. We reproduce the three conjectures in full below; citations for the papers in which they first appeared or were first fully formulated are listed at the top of each conjecture. \\\\\n\nThe Birch and Swinnerton-Dyer conjecture (BSD) is needed to establish a way to compute and hence bound the magnitude of the leading coefficient of $\\Les$ at the central point.\n\\begin{conjecture}[Birch, Swinnerton-Dyer]\\label{conj:BSD} \\cite{BSD-1965}\n\\mbox{}\n\\begin{enumerate}\n\\item $r_{an} = r$; that is, the analytic rank of $E$ is equal to its algebraic rank.\n\\item The leading coefficient at the central point in $L_E(s)$ is given by\n\\begin{equation}\\label{eqn:BSD_formula}\nC\\pr_E = \\left(\\frac{\\Omega_E\\cdot\\Reg_E\\cdot\\#\\Sha(E/\\Q)\\cdot\\prod_p c_p}{(\\#E_{\\text{Tor}}(\\Q))^2}\\right),\\end{equation}\nwhere\n\\begin{itemize}\n\\item $r$ is the algebraic rank of $E(\\Q)$,\n\\item $\\Omega_E$ is the real period of (an optimal model of) $E$,\n\\item $\\Reg_E$ is the regulator of $E$,\n\\item $\\#\\Sha(E/\\Q)$ is the order of the Shafarevich-Tate group attached to $E/\\Q$,\n\\item $\\prod_p c_p$ is the product of the Tamagawa numbers of $E$, and\n\\item $\\#E_{\\text{Tor}}(\\Q)$ is the number of rational torsion points on $E$.\n\\end{itemize}\n\\end{enumerate}\n\\end{conjecture}\n\nFor an excellent description of the conjecture and a breakdown of the arithmetic invariants mentioned above, see Andrew Wiles' official description of the BSD Conjecture on the Clay Math website \\cite{Wil-BSD}. \\\\\n\nThe Generalized Riemann Hypothesis (GRH), as the name suggests, generalizes the famous conjecture first posed by Bernhard Riemann in 1859 \\cite{Rie-1859}. The (standard) Riemann Hypothesis asserts that all nontrivial zeros of the Riemann zeta function $\\zeta(s)$ occur on the vertical line $\\Re(s)=\\frac{1}{2}$. It is hard to track down where the Generalized Riemann Hypothesis was first formulated in its full generality (Conrey gives a good exposition in \\cite{Con-2003}), but it asserts that for a large class of suitably defined $L$-functions, the nontrivial nonreal zeros will occur on a single vertical line in the complex plane (where the exact lateral placement of said line depends on the class of $L$-function being considered). We use GRH as it applies to elliptic curve $L$-functions:\n\\begin{conjecture}[Generalized Riemann Hypothesis for Elliptic Curves, version 1] \\cite{Rie-1859} \\cite{Con-2003}\n\\label{conj:GRH1}\n\\mbox{}\nLet $E$ be an elliptic curve over $\\Q$, and let $\\Les$ be its $L$-series. If $\\rho$ is a nontrivial zero of $\\Les$ with nonzero imaginary part, then $\\Re(\\rho) = 1$.\n\\end{conjecture}\nThat is, $\\Les$ is never zero outside of the {\\it critical line} $\\Re(s)=1$ and the nonpositive integers. There are numerous equivalent formulations of GRH; we will most often use the following, pertaining to the shifted completed $L$-function:\n\\begin{conjecture}[Generalized Riemann Hypothesis for Elliptic Curves, version 2] \\cite{Rie-1859} \\cite{Con-2003}\n\\label{conj:GRH2}\n \\mbox{}\n Let $E$ be an elliptic curve over $\\Q$, and let $\\Lams$ be the completed $L$-function attached to $E$. Then\n\\begin{enumerate}\n\\item $\\Lambda_E(1+s) = 0 \\Longrightarrow \\Re(s) = 0$.\n\\end{enumerate}\n\\end{conjecture}\n\nFinally, we will need a strong form of the ABC conjecture of Masser and Oesterl\\'{e} in order to establish lower bounds on the regulator and real period of $E$.\n\\begin{conjecture}[Masser-Oesterl\\'{e}] \\cite{Mas-1985} \\cite{Oes-1988}\n\\label{conj:ABC} \\\\\nLet $(a,b,c)$ be a triple of coprime positive integers such that $a+b = c$, and let $\\rad(abc) = \\prod_{p | abc} p$ be the product of all primes dividing $a$, $b$ and $c$. Then for any $\\epsilon > 0$ there is a constant $K_{\\epsilon}$ such that\n\\begin{equation}\nc < K_{\\epsilon}\\rad(abc)^{1+\\epsilon}.\n\\end{equation}\n\\end{conjecture}
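For a concrete feel for the statement (this numerical aside is ours, not drawn from \\cite{Mas-1985} or \\cite{Oes-1988}), the radical is cheap to compute, and small triples with $c > \\rad(abc)$ -- the ``ABC hits'' that make the conjecture nontrivial -- are easy to exhibit:\n\\begin{verbatim}\ndef rad(n):\n    # Product of the distinct primes dividing n, by trial division.\n    out, p = 1, 2\n    while p * p <= n:\n        if n % p == 0:\n            out *= p\n            while n % p == 0:\n                n //= p\n        p += 1\n    return out * n if n > 1 else out\n\n# 5 + 27 = 32, coprime; rad(5*27*32) = rad(4320) = 2*3*5 = 30 < 32 = c.\na, b, c = 5, 27, 32\nprint(rad(a * b * c), c)\n\\end{verbatim}\nThe conjecture says that once the radical is raised to any power $1+\\epsilon$, only finitely many such hits remain.\n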
\n\nThe ABC conjecture is famous for the large number of other results that it implies. Of these, we will need two that relate to elliptic curves. It is a relatively straightforward exercise to show that the conductor of an elliptic curve divides its minimal discriminant. Szpiro's conjecture, formulated in the 1980s, asserts that the latter cannot be too big in terms of the former:\n\\begin{conjecture}[Szpiro] \\cite{szp-1987}\n\\label{conj:Szpiro} \\\\\nLet $E$ be an elliptic curve over $\\QQ$ with conductor $N_E$ and minimal discriminant $D_E$. Then for any $\\epsilon > 0$ there is a constant $K_{\\epsilon}$ such that\n\\begin{equation}\n|D_E| < K_{\\epsilon}\\cdot (N_E)^{6+\\epsilon}.\n\\end{equation}\n\\end{conjecture}\n\nWe will also invoke an equivalent version of the above conjecture:\n\\begin{conjecture}[Modified Szpiro]\\label{conj:modified_szpiro}\nLet $c_4$ and $c_6$ be the $c$-invariants of a minimal model of $E/\\QQ$, as defined in Section \\ref{subsec:notation}. Then for any $\\epsilon>0$ there is a constant $K_{\\epsilon}$ independent of $E$ such that\n\\begin{equation}\n\\max\\set{|c_4|^3,|c_6|^2} \\le K_{\\epsilon}\\cdot (N_E)^{6+\\epsilon}.\n\\end{equation}\n\\end{conjecture}\n\nLang's conjecture posits that the canonical height of a nontorsion rational point on an elliptic curve cannot be too small in terms of the discriminant:\n\\begin{conjecture}[Lang] \\cite[pp. 73-74]{Lang-1997}\n\\label{conj:Lang} \\\\\nThere is a positive constant $M_0$ such that for any elliptic curve $E/\\QQ$ with minimal discriminant $D_E$, the N\\'{e}ron-Tate canonical height of any nontorsion point $P\\in E(\\QQ)$ obeys\n\\begin{equation}\n\\hat{h}(P) \\ge M_0\\log |D_E|.\n\\end{equation}\n\\end{conjecture}\n\n%\\begin{conjecture}[Hall]\\label{conj:Hall}\n%For any $\\epsilon > 0$ there is a constant $K_{\\epsilon}$ such that for any $x,y \\in \\ZZ$ such that $x^3-y^2 \\ne 0$, we have\n%\\begin{equation}\n%|x^3-y^2| \\ge K_{\\epsilon}\\cdot |x|^{\\frac{1}{2}-\\epsilon}.\n%\\end{equation}\n%That is, if $x^3-y^2 \\ne 0$, then it cannot be much smaller in magnitude than about $\\sqrt{|x|}$.\n%\\end{conjecture}\n", "meta": {"hexsha": "9f69846451196de1fe6c0f53b558c90cf0e24d9e", "size": 35273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/3_background.tex", "max_stars_repo_name": "haikona/thesis", "max_stars_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/3_background.tex", "max_issues_repo_name": "haikona/thesis", "max_issues_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/3_background.tex", "max_forks_repo_name": "haikona/thesis", "max_forks_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.5729386892, "max_line_length": 796, "alphanum_fraction": 0.7021234372, "num_tokens": 11514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8418256551882382, "lm_q1q2_score": 0.5968704968396332}}
{"text": "\\section{Analytic and Meromorphic Functions}\r\nWe have seen previously that there can be some multivalued functions in $\\mathbb C$ carries important meanings.\r\nMost important examples being $\\log$ and $\\sqrt[m]{\\cdot}$.\r\nWhat do we really mean by these multivalued functions?\r\nWhen we evaluate them, we always have to choose a branch cut, but it does not capture the relationship between different branch cuts.\r\nWe shall try to find a way to characterise these functions in a proper way that realises their properties as they deserve.\r\n\\subsection{Analytic Functions and their Zeros}\r\n\\begin{definition}\r\n    A domain is an open, connected subset $D\\subset\\mathbb C$.\r\n\\end{definition}\r\n\\begin{example}\r\n    Open disks, annuli, punctured disks are domains.\r\n\\end{example}\r\n\\begin{definition}\r\n    Let $D\\subset\\mathbb C$ be a domain.\r\n    A function $f:D\\to\\mathbb C$ is holomorphic or analytic if either it is $\\mathbb C$-differentiable everywhere on $D$ or it has a local Taylor series around every point in $D$.\r\n\\end{definition}\r\nWe know the two criteria above are equivalent from discussions back in IB Complex Analysis.\r\n\\begin{proposition}\r\n    Let $f:D\\to\\mathbb C$ be an analytic functions on a domain.\r\n    If $f(z_0)=0$, then either $f$ is identically zero or nowhere zero on a punctured disk centering at $z_0$.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Obvious and already discussed in Complex Analysis, but let's do it again.\r\n    Consider the Taylor series of $f$ in a neighbourhood $U$ of $z_0$.\r\n    So for $z\\in U$ we have\r\n    $$f(z)=\\sum_{n=0}^\\infty a_n(z-z_0)^n$$\r\n    If $f$ is not identically zero, then we can choose minimal $m$ such that $a_m\\neq 0$, so $f(z)=(z-z_0)^mg(z)$ where\r\n    $$g(z)=\\sum_{n=0}^\\infty a_{m+n}(z-z_0)^n\\neq 0$$\r\n    is nonzero at $z_0$, hence is nowhere zero in a punctured disk around $z_0$ in $U\\subset D$ by continuity.\r\n    $f$ is then nowhere zero in the same punctured disk.\r\n\\end{proof}\r\n\\begin{corollary}[Identity Principle]\r\n    Let $f,g$ be analytic functions defined on a domain $D\\subset\\mathbb C$.\r\n    If the subspace $\\{z\\in D|f(z)=g(z)\\}$ is not discrete, then $f\\equiv g$ on $D$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Immediate.\r\n\\end{proof}\r\n\\subsection{Meromorphic Functions and Singularities}\r\n\\begin{definition}\r\n    A function $f$ defined on a punctured disk around $z_0$ is said to have a isolated singularity at $z_0$.\r\n\\end{definition}\r\n\\begin{proposition}\r\n    If an analysic function $f$ has an isolated singularity at $z_0$, then $f$ has a Laurent series\r\n    $$f(z)=\\sum_{n=-\\infty}^\\infty a_n(z-z_0)^n$$\r\n    on a punctured disk around $z_0$.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Complex Analysis.\r\n\\end{proof}\r\n\\begin{definition}\r\n    If $a_n=0$ for $n<0$, then $z_0$ is said to be a removable singularity of $f$.\\\\\r\n    If $a_n=0$ for $n<-m<0$ and $a_{-m}\\neq 0$, then we say $z_0$ is a pole of order $m$.\\\\\r\n    Otherwise, we say $z_0$ is an essential singularity.\r\n\\end{definition}\r\n\\begin{theorem}\r\n    If $f$ is bounded near $z_0$, then $z_0$ has to be a removable singularity.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Complex Analysis.\r\n\\end{proof}\r\n\\begin{theorem}[Casorati-Weierstrass]\r\n    $z_0$ is an essential singularity iff $f(U)$ is dense in $\\mathbb C$ for any punctured neighbourhood of $z_0$.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Complex 
\\begin{definition}\r\n    If a holomorphic function $f:D\\setminus A\\to\\mathbb C$ for some domain $D$ and discrete $A$ has poles at the points of $A$, then $f$ is said to be meromorphic.\r\n\\end{definition}\r\n\\begin{example}\r\n    The function $f(z)=1/(e^{1/z}-1)$ is meromorphic where one takes $D$ to be the open upper half-plane and $A=\\{i/(2\\pi n):n\\in\\mathbb N\\}$.\r\n    In particular, the poles are all simple (i.e. of order $1$).\\\\\r\n    Note that the function can be extended to the whole of $\\mathbb C$ except at the set $\\{1/(2\\pi in):n\\in\\mathbb Z\\setminus\\{0\\}\\}\\cup\\{0\\}$, whose nonzero points are simple poles; the point $0$ is not even an isolated singularity, being a limit point of poles.\r\n\\end{example}\r\n\\subsection{Analytic Continuation}\r\n\\begin{definition}\r\n    A function element $F=(f,U)$ on a domain $D$ consists of a subdomain $U\\subset D$ and an analytic function $f:U\\to\\mathbb C$.\r\n\\end{definition}\r\n\\begin{lemma}[Direct Analytic Continuation]\r\n    Let $(f,U),(g,V)$ be function elements such that $U\\cap V\\neq\\varnothing$ and $f=g$ on $U\\cap V$. Then $f$ determines $g$.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Identity Principle.\r\n\\end{proof}\r\nWe write $(f,U)\\sim (g,V)$ for direct analytic continuation.\r\n\\begin{definition}\r\n    Analytic continuation is an iterated sequence of direct analytic continuations that gets from one function element to another.\r\n    We write $(f,U)\\approx (g,V)$ in this case.\r\n\\end{definition}\r\n\\begin{remark}\r\n    $\\approx$ is an equivalence relation.\r\n\\end{remark}\r\n\\begin{definition}\r\n    A $\\approx$-equivalence class $\\mathcal F$ of function elements on a domain $D$ is called a complete analytic function on $D$.\r\n\\end{definition}\r\n\\subsection{The Complex Logarithm}\r\nLet $\\mathbb C_\\star=\\mathbb C\\setminus\\{0\\}$.\r\nWe want to invert the exponential function $\\exp:\\mathbb C\\to\\mathbb C_\\star$, but $\\exp$ is not injective on $\\mathbb C$.\r\nWe used to see $\\log$ as a multivalued function to deal with this problem, but in fact, we can see it as a complete analytic function on $\\mathbb C_\\star$.\\\\\r\nIndeed, for $(\\alpha,\\beta)\\subset\\mathbb R$ with $\\beta-\\alpha<2\\pi$, we can define\r\n$$U_{(\\alpha,\\beta)}=\\{re^{i\\theta}|r>0,\\alpha<\\theta<\\beta\\},f_{(\\alpha,\\beta)}(z)=\\log r+i\\theta,r=|z|,\\theta\\in (\\alpha,\\beta)$$\r\nThen $F_{(\\alpha,\\beta)}=(f_{(\\alpha,\\beta)},U_{(\\alpha,\\beta)})$ is a collection of function elements.\r\nLet $I(n)=((n-1)\\pi/2,(n+1)\\pi/2)$ for $n\\in\\mathbb Z$.\r\n\\begin{proposition}\r\n    $F_{I(n)}\\sim F_{I(m)}$ iff $|m-n|\\le 1$.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Just do a case analysis based on $m-n\\bmod 4$.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    $F_{I(m)}\\approx F_{I(n)}$ for any $m,n\\in\\mathbb Z$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Immediate.\r\n\\end{proof}\r\nSo this characterises a complete analytic function that is the complex logarithm.\r\n\\begin{remark}\r\n    We know that $f_{I(0)}(1)=0$ but $f_{I(4)}(1)=2\\pi i$, so analytic continuation in this way is not unique.\r\n    But we have ``pasted'' them together to make it a complete analytic function.\r\n\\end{remark}
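The non-uniqueness in the remark can be mimicked numerically. The Python sketch below (ours, illustrative only) continues $\\log$ along a discrete loop around $0$ by always choosing, at each step, the branch value closest to the previous one -- a crude stand-in for direct continuation on overlapping sectors. After one full loop the value has gained $2\\pi i$:\r\n\\begin{verbatim}\r\nimport cmath\r\n\r\ndef continue_log(path):\r\n    w = cmath.log(path[0])\r\n    for z in path[1:]:\r\n        v = cmath.log(z)  # principal branch\r\n        # shift by a multiple of 2*pi*i to stay on the current branch\r\n        k = round((w - v).imag / (2 * cmath.pi))\r\n        w = v + 2j * cmath.pi * k\r\n    return w\r\n\r\n# walk once around the unit circle, starting and ending at z = 1\r\npath = [cmath.exp(2j * cmath.pi * t / 200) for t in range(201)]\r\nprint(continue_log(path))  # approximately 2*pi*i, not 0\r\n\\end{verbatim}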
\\begin{definition}\r\n    Let $\\gamma:[0,1]\\to D$ be a path.\r\n    We say $(f,U)\\approx_\\gamma (g,V)$ if there is some $0=t_0<t_1<\\cdots<t_n=1$ and function elements $(f=f_0,U=U_0),(f_1,U_1),\\ldots,(g=f_{n+1},V=U_{n+1})$ such that $(f_i,U_i)\\sim (f_{i+1},U_{i+1})$ for $i=0,\\ldots,n$ and $\\gamma(t_i)\\in U_i\\cap U_{i+1}$.\r\n    This is called analytic continuation along a path.\r\n\\end{definition}\r\nAnalytic continuation along a fixed path has the desired uniqueness property: the resulting function element at the endpoint does not depend on the choice of intermediate elements. Moreover, continuations along homotopic paths agree; this fact is known as the Classical Monodromy Theorem.", "meta": {"hexsha": "04bf30f2947608a8c7eeb0fa61134238c51b0fb6", "size": 6884, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "1/mero.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1/mero.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1/mero.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.3643410853, "max_line_length": 259, "alphanum_fraction": 0.6876815805, "num_tokens": 2150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.8198933403143929, "lm_q1q2_score": 0.5968622224703914}}
{"text": "\\section{Background on SMT-based $k$-induction}~\\label{sec:background} \nThe focus of our  investigation has  been on applying model checking to  prove\ninvariant properties of our monitors.   We  \nemploy a  technique known as $k$-induction~\\cite{Sheeran00,EenS03} for verifying inductive\nproperties of infinite state systems.   $k$-induction  has the\nadvantage that it is well suited  to  \\textsc{smt} \nbased bounded model checking. This section profiles the\nbasic concepts of the  $k$-induction proof technique needed in the\nremainder of the paper. In practice, we use tools that implement  enhancements of the basic procedure such as path compression~\\cite{dMRS03} that help the process scale, but are beyond the focus of the paper. \n\nConsider  a state transition system  $(S,I,T),$\nwhere $S$ is a set of states, $I \\subseteq S$ is the set of initial\nstates and $T \\subseteq S \\times S $ is a transition relation over\n$S.$ To show $P$ holds in the transition system one must show that (1)\nthe base case holds---that $P$ holds in all states reachable\nfrom an initial state in $k$ steps, and (2) the induction step holds---that if $P$ holds in states $s_0,\\ldots,s_{k-1}$ then it holds in\nstate $s_k.$ The $k$-induction principle is formally expressed in the\nfollowing two entailments:\n\\begin{eqnarray*}\nI(s_0) \\tland T(s_0,s_1) \\tland \\cdots \\tland T(s_{k-1},s_k) &\\models&\nP(s_k) \\\\\nP(s_0) \\tland \\cdots \\tland P(s_{k-1}) \\tland T(s_0,s_1) \\tland \\cdots \\tland T(s_{k-1},s_k) &\\models&\nP(s_k) \n\\end{eqnarray*} \nIf one cannot show the property to be true, the\nproperty is strengthened by either extending the formula or\nprogressively increasing the length of the reachable states\nconsidered.  \n\nProperty $P$ said to be a $k$-inductive property with respect to\n$(S,I,T)$ if there exists some $k \\in \\mathbb{N}^{0<}$ such that $P$\nsatisfies the $k$-induction principle. As $k$ increases, weaker\ninvariants may be proved. If $P$ is a safety property that does not hold, then the first entailment will break for a finite $k$ and a counterexample will be provided. The trick is to find an invariant that is\ntractable by the \\textsc{smt} solver yet weak enough to satisfy the desired\nproperty.  
\n", "meta": {"hexsha": "f760a01db804f7dfde9e2115d0c757c3fbdc69cc", "size": 2190, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RV2015/Background.tex", "max_stars_repo_name": "Copilot-Language/copilot-discussion", "max_stars_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2015-06-10T00:44:21.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-17T13:20:09.000Z", "max_issues_repo_path": "RV2015/Background.tex", "max_issues_repo_name": "Copilot-Language/copilot-discussion", "max_issues_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 30, "max_issues_repo_issues_event_min_datetime": "2019-04-01T20:24:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-07T22:34:17.000Z", "max_forks_repo_path": "RV2015/Background.tex", "max_forks_repo_name": "Copilot-Language/copilot-discussion", "max_forks_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.8333333333, "max_line_length": 209, "alphanum_fraction": 0.7484018265, "num_tokens": 642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933315126792, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5968622209017}}
{"text": "%!TEX root = ../dissertation.tex\n\n\\chapter{Background: State of the art}\n\\label{chapter:evaluation}\n\\section{Planning Optimal Grasps}\n\\label{section:LMRW}\n\nThis paper \\cite{ferrari1992planning} explains the metric of the \"Largest-minimum resisted wrench\" that attempts to find a largest resisted wrench in any direction. To do this, the metric takes into account the total fingers force and the maximum of all forces done by each finger.\n\\par\nIt is assumed that the grasp's configuration used is a Force Closure Grasp (FCG), whose definition is a grasp that is able to balance externally applied forces and torques on the object.\nIt is also assumed that the contacts between the object and the points of contact are \"Hard-Contact\", i.e. between the two contact surfaces (of the object and the contact point) exists friction.\n\\par\nThe forces and torques that acts on an object or on contact points can be represented in a 6 dimensions space (3 dimensions for the total momentum acting on the object and the other 3 dimensions to represent the force). This space is called the wrench space ($\\mathcal{W}$). To simplify, a wrench can be expressed as set of force ($F$) and torque ($\\tau$) vectors, and from a mathematical point of view, the wrench can be defined as $w = [F^T\\ \\tau^T]^T$ and its magnitude by $||w|| = \\sqrt{||F||^2 + \\lambda||\\tau||^2}$, where $\\lambda$ is a scaling value between force and torque.\n\n\\subsection{The quality of the grasp}\nWhen it comes time to choose the quality measure, there may be configurations that are better than others, i.e. there are configurations can balance these external wrenches without having to apply too much force. An intuitive way to quantify the grasp quality would be to use the ratio of the magnitude of the largest resisted wrench in any direction and the magnitude of the forces applied by the fingers.\n\n\\subsubsection{Representing the finger forces}\n\nIn order to compute the forces made by the fingers on the objects, it is necessary to ensure that there is no slipping or detachment of the object at that contact point.\nFor this to happen, it is necessary that the Coulomb's law should be verified.\n\nCoulomb's law can be defined as: $f_i^{\\bot} \\ge \\mu f_i^t$ where $f_i^{\\bot}$ represents the force along the normal to the object's surface on the contact point and $ f_i^t$ is the tangential force to the object surface. \nThis law can be graphically interpreted through a friction cone, that when the force exerted on the object is inside that cone, the contact point is stable, otherwise there is slippage of the object or its detachment.\n\nThe forces acting on the object can be decomposed as linear combinations of the extrema of the friction cone. 
Note that the normal component of the contact force alone is not sufficient to specify the contact force ($f_i$), since there are several contact forces satisfying Equation \\eqref{eq:primitive_forces} that have the same normal magnitude.\n\nLet a vector $g$ composed of the normal forces at the $n$ contact points be defined by $g= (f_1^{\\bot}, \\dots, f_n^{\\bot})^T $, and a predicate $A: \\mathcal{W} \\times \\mathcal{G} \\rightarrow \\{T, F\\}$, where $\\mathcal{G}$ is the set of all possible generalized forces $g$. $wAg$ is true if the wrench $w$ belongs to the set of wrenches that can be resisted through the generalized force $g$. \n\\par\nAlso, $wA$ can be defined as the set of generalized forces that can resist a wrench $w$, and $Ag$ is the set of wrenches $w$ that can be resisted through a generalized force $g$. This can be represented by the equation \\eqref{eq:defintion_Apredicate}.\n\\begin{equation}\\label{eq:defintion_Apredicate}\n    wA= \\Big\\{g\\ |\\ wAg\\ is\\ true \\Big\\} \\quad and   \\quad  Ag= \\Big\\{w\\ |\\ wAg\\ is\\ true \\Big\\}\n\\end{equation}\n\n\\subsubsection{Grasp quality measure}\nThe local quality measure ($LQ$) is the best ratio between the magnitude of a wrench and that of a generalized force resisting it. This measure can be given by \\eqref{eq:LQ_measure}.\n\\begin{equation}\\label{eq:LQ_measure}\n        LQ_w=   \\max_{g \\in wA}\\frac{||w||}{||g||}\n\\end{equation}\nWith this, the grasp quality measure ($Q$) can be given as the minimum over wrench directions of these maxima, which corresponds to the worst-case scenario in which the object can fall from the hand.\n\\begin{equation}\\label{eq:Q_LMRW}\n    Q= \\min_w LQ_w\n\\end{equation}\nDue to the scale invariance of $LQ_w$, minimizing over $w$ means minimizing over the directions of $w$. Thus \\eqref{eq:Q_LMRW} can be rewritten as in \\eqref{eq:Q_LMRW2}, where $\\hat{w} = w/||w||$ and $||w||$ is such that $||g||= 1 $.\n\\begin{equation}\\label{eq:Q_LMRW2}\n    Q=\\min_{\\hat{w}}\\ LQ_{\\hat{w}}\n\\end{equation}\nAlso, a set $B \\subset A$ can be defined as $B = \\{ (w,g)\\ |\\ wAg\\ is\\ true\\ and\\ ||g||=1 \\}$, which is equivalent to defining a predicate $B:\\mathcal{W} \\times \\mathcal{G} \\rightarrow \\{ T, F \\}$ such that $wBg$ is true iff $w \\in Ag$ and $||g|| = 1$.\nAs was done for predicate $A$ in Equation \\eqref{eq:defintion_Apredicate}, the same can be done with predicate $B$, as can be seen in Equation~\\eqref{eq:defintion_Bpredicate}.\n\\begin{equation}\\label{eq:defintion_Bpredicate}\n    wB= \\Big\\{g\\ |\\ wAg\\ is\\ true\\ and\\ ||g||=1 \\Big\\} \\quad and   \\quad  Bg= \\Big\\{w\\ |\\ wAg\\ is\\ true\\ and\\ ||g||=1 \\Big\\}\n\\end{equation}\nSo, $LQ_{\\hat{w}}$ can be rewritten as in \\eqref{eq:LQ2}, where $B\\mathcal{G}$ is the union of all sets $Bg$.\n\\begin{equation}\\label{eq:LQ2}\n    LQ_{\\hat{w}}= \\max_{w \\in B \\mathcal{G}} ||w||\n\\end{equation}\nTaking the maximum of the wrench magnitude over the set $B\\mathcal{G}$ means taking the wrenches that are on the boundary of that set.
And thus $LQ_{\\hat{w}}$ and $Q$ can be written as in \\eqref{eq:LQ_Q_new}.\n\\begin{equation}\\label{eq:LQ_Q_new}\n    LQ_{\\hat{w}} = \\Big\\{   ||w||\\ |\\ w\\ \\in Bd(B\\mathcal{G}) \\Big\\} \\quad and \\quad Q = \\min_{\\hat{w}} \\Big\\{ ||w||\\ |\\ w \\in Bd(B\\mathcal{G}) \\Big\\}\n\\end{equation}\n$Q$ can also be seen as the distance from the closest point of the boundary of $B\\mathcal{G}$ to the wrench space origin; in other words, $Q$ is the radius of the largest sphere centered on the origin and contained in $B\\mathcal{G}$. So $Q$ is given by \\eqref{eq:Q_lasgest_radius}.\n\n\\begin{equation}\\label{eq:Q_lasgest_radius}\n    Q= \\min_{w \\in Bd(B\\mathcal{G})} ||w||\n\\end{equation}\n\n\\subsubsection{Minimizing the maximum finger force}\nThe optimal grasp configuration is the one that can resist the largest wrench in any direction with the minimum force applied by each finger. Moreover, the force that a finger can apply is upper bounded, and the value considered for this upper limit is 1. This can be assumed because the metric considers the ratio between wrenches and applied forces, which scale linearly with each other.\n\nEquation \\eqref{eq:force_and_torque} gives the force $f_i$ at the $i$-th contact and the resulting torque $\\tau_i$, where $r_i$ is the vector pointing from the object's center of mass to the contact point.\n\\begin{equation}\\label{eq:force_and_torque}\n     f_i= \\sum_{j=1}^{m}\\alpha_{i,j}f_{i,j} \\quad and \\quad  \\tau_i= \\sum_{j=1}^{m}\\alpha_{i,j}( r_i \\times f_{i,j} )\n\\end{equation}\nThus the wrench ($w_i$) and the set of all possible wrenches originating at the contact $i$ ($W_i$) can be given by \\eqref{eq:conctact_point_wrench}.\n\\begin{equation}\\label{eq:conctact_point_wrench}\n    w_i= \\sum_{j=1}^{m}\\alpha_{i,j}w_{i,j} \\quad and \\quad \n    W_i=\\bigg\\{w_i | w_i=\\sum_{j=1}^{m}\\alpha_{i,j}w_{i,j}, \\alpha_{i,j}\\ge 0, \\sum_{j=1}^{m}\\alpha_{i,j}\\le 1 \\bigg\\}\n\\end{equation}\nAnd therefore the total wrench ($w$) applied to the object is the sum of the wrenches applied at each contact. So it is now possible to generate a set ($B\\mathcal{G}$) with all possible wrenches that can be applied to the object, as shown in the expression \\eqref{eq:total_wrench_all_possible_wrench}.\n\\begin{equation}\\label{eq:total_wrench_all_possible_wrench}\n    w=\\sum_{i=1}^n w_i \\quad and \\quad\n       B\\mathcal{G}=W_{L_{\\infty}}=W_1 \\oplus ... \\oplus W_n\n\\end{equation}
The set of all combinations of the wrenches that can be applied to the object can thus be obtained through the Minkowski sum of the convex sets $W_i$ of the contact points.\nThe Minkowski sum can also be exchanged with the Convex Hull operation, so the hull can be given by the expression \\eqref{eq:convex_hull}.\n\\begin{equation}\\label{eq:convex_hull}\n    W_{L_{\\infty}}= Convex Hull\\Bigg(\\bigoplus_{i=1}^{n} \\{ w_{i,1}, ..., w_{i,m} \\} \\Bigg)\n\\end{equation}\nThe advantage of using the Convex Hull lies in the efficient way to compute $W_{L_{\\infty}}$, starting from the primitive wrenches which describe each contact point.\nThe quality measure ($Q_{\\infty}$) can be seen as the distance of the nearest facet of the Convex Hull from the origin.\n\n\\subsubsection{Minimizing the total finger force}\nIn this case, it is assumed that the force exerted by all the fingers together is upper bounded, this bound being equal to 1, which is equivalent to saying $||g|| = 1$.\nThe goal is to minimize the force applied by all fingers on the object. The total force can be defined as in \\eqref{eq:total_force}; moreover, every $f_i$ lies in its friction cone, and thus the total force can be written as a convex combination of the vectors along the extrema of the friction cones.\n\\begin{equation}\\label{eq:total_force}\n    f= \\sum_{i=1}^{n}f_i \\quad and \\quad\n    f= \\sum_{i=1}^{n}\\sum_{j=1}^{m}\\alpha_{i,j}f_{i,j} \\quad where \\quad \\alpha_{i,j} \\ge 0 \\quad and \\quad \\sum_{i=1}^{n}\\sum_{j=1}^{m}\\alpha_{i,j} \\le 1\n\\end{equation}\nJust like the total force on the object, the total wrench can be expressed as \\eqref{eq:total_wrench}.\n\\begin{equation}\\label{eq:total_wrench}\n    w= \\sum_{i=1}^{n}\\sum_{j=1}^{m}\\alpha_{i,j}w_{i,j}\n\\end{equation}\nThe total wrench exerted on the object belongs to the set of wrenches generated by the primitive contact wrenches. So, the set of all possible wrenches can be given by \\eqref{eq:convex_hull2}, which can be computed through the Convex Hull of a finite set of points. The quality measure ($Q_1$) can be seen as the distance of the nearest facet of the Convex Hull from the origin.\n\\begin{equation}\\label{eq:convex_hull2}\n    W_{L_1}= Convex Hull\\Bigg(\\bigcup_{i=1}^{n}\\{w_{i,1},..., w_{i,m}\\}\\Bigg)\n\\end{equation}
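Both $Q_{\\infty}$ and $Q_1$ reduce to a ``distance from the origin to the nearest facet of a convex hull'' computation. A minimal numerical sketch of it follows (ours; it uses scipy and a toy 3-dimensional wrench space instead of the full 6-dimensional one); for a hull containing the origin, scipy's facet equations give each facet's distance from the origin directly.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial import ConvexHull\n\ndef grasp_quality(primitive_wrenches):\n    # Q = distance from the origin to the nearest facet of the hull of\n    # the primitive wrenches; meaningful only when the origin is inside\n    # the hull (i.e. the grasp achieves force closure).\n    hull = ConvexHull(primitive_wrenches)\n    # Each row of hull.equations is [unit normal, offset], with\n    # normal . x + offset <= 0 for points inside the hull.\n    offsets = hull.equations[:, -1]\n    if np.any(offsets > 0):\n        return 0.0  # origin outside the hull: no force closure\n    return float(np.min(-offsets))\n\n# Toy example: primitive wrenches at the vertices of a cube around the\n# origin; the nearest facet is at distance 1.\npts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])\nprint(grasp_quality(pts))\n\\end{verbatim}\n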
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% GRASP QUALITY EVALUATION IN UNDER-ACTUATED HANDS\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Grasp Quality Evaluation in Underactuated Robotic Hands}\nThere are several types of metrics based on the geometric location of the contact points and the configuration of the hand.\nIn the case of underactuated hands, these metrics are not straightforward to apply due to the hands' nature.\nBut because of the way the hand is designed, it can adapt its shape when grasping an object, and thus it is no longer necessary to find specific contact points or to pre-compute the configuration of the hand. \\par\nThe consideration of the limit of the forces that the hand can apply on the object facilitates the definition of contact robustness, i.e., how far the contact is from violating the contact constraints.\nThere are two grasp quality measures that can be defined through the effective capacity of the robotic hand to apply contact forces.\nThese metrics are: the Potential Contact Robustness (PCR) and the Potential Grasp Robustness (PGR).\n\n\\subsection{Measures based on the wrench space}\nDifferent quality measures analyze the Grasp Wrench Space (GWS) in order to find the best grasp. One of the best known of these metrics is the largest minimum resisted wrench.\nAs stated in Section \\ref{section:LMRW}, there are two ways to construct the set of wrenches that can be applied to the object. One limits the sum of all contact forces ($GWS_S$), while the other limits the individual contact forces ($GWS_F$).\nThe $GWS_S$ is more relevant for hands with a single actuator, while $GWS_F$ is more suitable for hands with multiple actuators. \\par\nIn the case of multi-fingered underactuated hands, there may be several contacts that are jointly controlled by one actuator, as well as contacts that are individually controlled by one actuator. \nTo take this into account, an actuation matrix $A  \\in \\rm I\\!R^{n_a \\times n_c}$ is defined, where $n_a$ is the number of actuators and $n_c$ is the number of contact points. For each actuator (i.e. each row), the entry for a contact point (column) is set to 1 if the actuator has direct influence on that contact, and to 0 otherwise.\\par\nThrough the actuation matrix, it is possible to know the set of combinations of wrenches affected by the same actuator. This set can be given by equation \\eqref{eq:atuator-wise}.\n\\begin{equation}\\label{eq:atuator-wise}\n    w_i^{d,a}= \\bigcup_{j=1}^{n_c}\n        \\begin{cases}\n        w_j^{d,c} & \\ \\text{if } \\texttt{A}_{i,j} = 1\\\\\n        \\textit{empty} &  \\text{otherwise}\n    \\end{cases}\n\\end{equation}\n\nAlso, all the sets of wrenches affected by each actuator can be grouped into a larger set $GWS_U$. This set can be given by the expression \\eqref{eq:minkowski_actuator_wise}\n\n\\begin{equation}\\label{eq:minkowski_actuator_wise}\n    GWS_U = CH\\Bigg[\n    \\mathcal{P}=\\bigcup_{i=1}^{n_a}\\Bigg(\\bigcup_{j=1}^{{}^iC_{n_a}} \\big( \\bigoplus \\mathcal{W}_j^a \\big) \\Bigg) \\Bigg]\n    \\quad with \\quad \\mathcal{W}^a_j \\in \n\\left( \\begin{array}{c}\n             \\mathcal{W}^a    \\\\\n              i  \n    \\end{array} \\right) \n\\end{equation}\n\nThe computation of the maximum contact forces realizable by each finger can be described as follows. The unit contact normal force $f_i$ of each contact $i$ can be represented by $\\mathfrak{f}_i$.  $\\mathfrak{F}_j$ is the set of all unit contact forces generated by the finger $j$. The torques $\\tau_j$ needed to achieve  $\\mathfrak{F}_j$ are calculated with the joint configuration $q_j$ and the corresponding body Jacobian $J(q_j)$, as can be seen in equation \\eqref{eq:torqueThroughforces}.\n\\begin{equation}\\label{eq:torqueThroughforces}\n    \\tau_j = J(q_j)^T \\cdot \\mathfrak{F}_j\n\\end{equation}\n\nThe finger torque $\\tau_j$ is scaled by a factor $k$ until one of the joints reaches its limit. 
Then the maximum realizable normal force at the contact point $i$ is given by $k\\mathfrak{f}_i$, and by using the realizable force for each contact, the quality measure represents the maximum-magnitude wrench that a finger can resist.\n\n\\subsection{Measures of Contact and Grasp Robustness}\n\n\\subsubsection{Potential Contact Robustness}\n\nSince the hand is underactuated, the contacts and joints are not perfectly rigid, and their stiffness can be linearly modeled by $K_s \\in \\rm I\\!R^{3n_c \\times 3n_c} $, the contact stiffness matrix, and $K_p \\in \\rm I\\!R^{n_q \\times n_q} $, the joint stiffness matrix.\nTherefore, the variation of the grasp variables can be written as in \\eqref{eq:force_torque_variation}, where $\\Delta q_r$ is a variation of the reference joint configuration.\n\n\\begin{equation}\\label{eq:force_torque_variation}\n     \\Delta f = K_s(J \\Delta q - G^T\\Delta u) \\quad and \\quad\n     \\Delta \\tau = K_p(\\Delta q_r - \\Delta q)\n \\end{equation}\n    \nThe vector $d$ (shown in \\eqref{eq:vetor_d}) represents how far a grasp is from violating the friction constraints and the maximum force limits.\n\\begin{equation}\\label{eq:vetor_d}\n    d(f) = [d_{1,c}, d_{1,f}, d_{1,max}, \\dots, d_{n_c,c}, d_{n_c,f}, d_{n_c,max}]\n\\end{equation}\nWhere $d_{i,c}$ is the normal component of the $i^{th}$ contact force ($d_{i,c} = f_{i,n} $), $d_{i,f}$ is the distance of $f_i$ from the boundary of the friction cone, and $d_{i,max} = f_{i,max} - ||f_i||$.\\par\n\nThen $||\\Delta f|| \\le d_{min}$, where $d_{min}$ is the minimum entry of $d$, is sufficient for a contact force perturbation $\\Delta f$ not to violate the constraints of Coulomb's law.\n\nThrough the constraint above, the bound on the external disturbance $\\Delta w$ acting on the object can be represented as in \\eqref{eq:deltaW}.\n \\begin{equation}\\label{eq:deltaW}\n     ||\\Delta w || \\le \\frac{d_{min}}{\\sigma_{max}(\\texttt{G}_K^R)}\n \\end{equation}\n Where $\\texttt{G}_K^R = \\texttt{KG}^T(\\texttt{GKG}^T)^{-1}$ is a weighted right inverse of the grasp matrix $\\texttt{G}$, $ \\texttt{K} =( \\texttt{K}_s^{-1} + \\texttt{JK}_p^{-1}\\texttt{J}^T)^{-1}$ is the grasp stiffness matrix, and $\\sigma_{max}(\\texttt{G}_K^R)$ is the maximum singular value of $\\texttt{G}_K^R$.\n Also, $f = -G_K^R w + Ey$ represents the contact forces, where $E$ is a basis for the subspace $\\mathcal{F}_a$ of the controllable internal forces and $y$ is a vector that parameterizes that subspace.\n \\par\n The Potential Contact Robustness (PCR) is defined in \\eqref{eq:PCR}, where $d_{min}^{\\mathcal{F}_a}= \\min\\{d(-G_K^R w+ Ey)\\}$ is a function of $y$.\n \\begin{equation}\\label{eq:PCR}\n    PCR = \\max_y\\frac{d_{min}^{\\mathcal{F}_a}}{\\sigma_{max}(G_K^R)}\n\\end{equation} \n \nAny disturbance wrench $\\Delta w$ with $||\\Delta w||< PCR$ can be resisted without slippage or detachment of the object, provided that the internal forces are actuated accordingly. \n\n\\subsubsection{Potential Grasp Robustness}\nGrasps that contain many points of contact (such as grasps performed by underactuated hands) can still be stable and able to resist external wrenches even if some contact points slip or detach. In such cases, PCR may underestimate the grasp quality.\n\nThe PGR extends the PCR by admitting that a grasp can still be stable even if there are contacts that do not satisfy the friction constraints. 
These friction constraints are given by \\eqref{eq:positive_contrain} and \\eqref{eq:comlumbLaw}.\n\n\\begin{equation}\\label{eq:positive_contrain}\n    f_{i,n} \\ge 0\n\\end{equation}\n\n\\begin{equation}\\label{eq:comlumbLaw}\n    \\sqrt{f_{i,t_1}^2+f_{i,t_2}^2} \\le \\mu_i  f_{i,n}\n\\end{equation}\nThus the PGR admits that each contact point is in one of the following 3 states:\n\\begin{itemize}\n    \\item State 1: both constraints \\eqref{eq:positive_contrain} and \\eqref{eq:comlumbLaw} are satisfied, and the contact force can be transmitted in any direction.\n    \\item State 2: only the constraint \\eqref{eq:positive_contrain} is satisfied, and force is transmitted only along the normal direction of the contact.\n    \\item State 3: neither constraint is satisfied; the contact is considered detached.    \n\\end{itemize}\nAssuming that there are $n_c$ contact points and that each contact may be in one of the 3 previous states, there are $3^{n_c}$ possible combinations $C_j$, each with its own global stiffness matrix.\nThe PGR can be seen as the maximum PCR index over all possible combinations $C_j$, and it can be computed as in \\eqref{eq:PGR}.\n\\begin{equation}\\label{eq:PGR}\n    PGR = \\max_{C_j,\\ j=1,\\dots, 3^{n_c}}PCR(C_j)\n\\end{equation}\nAlso, PCR can be seen as a specific case of PGR in which the grasp has all contact points in state 1.\\par\nThese metrics also take into account hands actuated by postural synergies. These synergies are highly coordinated movements, as coordinated by the brain in the human hand, which control the shape of the hand when grasping the object.\nThese synergies can be used to reduce the number of actuators without affecting the hand's performance. \\par\nWhen comparing the two metrics, one verifies that neither PGR nor PCR decreases as the number of synergies used increases. Moreover, the greater the number of synergies used, the greater the subspace of controllable internal forces. \\par\nIn conclusion, PGR is less conservative but more realistic than PCR, and thus it is better at detecting stable grasps.
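The combinatorial search implied by Equation \\eqref{eq:PGR} is simple to express in code. The sketch below is ours, and the per-combination PCR evaluation is a stub (the real one depends on the stiffness model above):\n\\begin{verbatim}\nfrom itertools import product\n\ndef pgr(n_c, pcr_of):\n    # Enumerate all 3^{n_c} assignments of states {1, 2, 3} to the n_c\n    # contacts and keep the best PCR value among them.\n    return max(pcr_of(combo) for combo in product((1, 2, 3), repeat=n_c))\n\n# Stub PCR: pretend state 1 contributes 1.0 per contact, state 2\n# contributes 0.5, and a detached contact (state 3) contributes 0.\ndemo = lambda combo: sum({1: 1.0, 2: 0.5, 3: 0.0}[s] for s in combo)\nprint(pgr(3, demo))  # 3.0: all three contacts in state 1\n\\end{verbatim}\n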
\n\\section{Physically Based Grasp Quality Evaluation Under\nPose Uncertainty}\nWhile there have been breakthroughs in robot grasp planning, grasps made by robotic hands are not as robust as those made by people.\nMost of the metrics focus on measuring the quality of the contacts after the grasp has been established, and on choosing the best one. But in reality, this best grasp as computed by these metrics is difficult to execute on the robotic hand. \\par\nThis paper explores the use of dynamic simulation to estimate the success or failure of the grasp in the real world. The investigated factors that affect the success of the grasp on the object are the object's dynamics and its pose uncertainty.\\par\nMost grasp quality metrics focus only on the final hand configuration, i.e. the situation where the robot is already grasping the object, and choose the best grasp to be performed on that basis.", "meta": {"hexsha": "39aa96e5350f6d598ec906d59058b711705fe3fd", "size": 20922, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/evaluation.tex", "max_stars_repo_name": "LuisMAALMEIDA/IIEEC", "max_stars_repo_head_hexsha": "479694d56de161909c614c90931fae9ef45e91c7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/evaluation.tex", "max_issues_repo_name": "LuisMAALMEIDA/IIEEC", "max_issues_repo_head_hexsha": "479694d56de161909c614c90931fae9ef45e91c7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/evaluation.tex", "max_forks_repo_name": "LuisMAALMEIDA/IIEEC", "max_forks_repo_head_hexsha": "479694d56de161909c614c90931fae9ef45e91c7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.1674008811, "max_line_length": 582, "alphanum_fraction": 0.7307618774, "num_tokens": 5951, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.596862220835367}}
{"text": "\\label{ch:diffusion}\n\n\\section{Thermal Diffusion}\n\nCastro incorporates explicit thermal diffusion into the energy equation.  \nIn terms of the specific internal energy, $e$, this appears as:\n\\begin{equation}\n\\rho \\frac{De}{Dt} + p \\nabla \\cdot \\ub = \\nabla \\cdot \\kth \\nabla T\n\\end{equation}\nwhere $\\kth$ is the thermal conductivity, with units\n$\\mathrm{erg~cm^{-1}~s^{-1}~K^{-1}}$.\n\nTo see the similarity to the thermal diffusion equation, consider the special\ncase of constant conductivity, $\\kth$, and density, and assume an\nideal gas, so $e = c_v T$, where $c_v$ is the specific heat at constant volume.\nFinally, ignore hydrodynamics, so $\\ub = 0$.  This gives:\n\\begin{equation}\n\\frac{\\partial T}{\\partial t} = D \\nabla^2 T\n\\end{equation}\nwhere $D \\equiv \\kth/(\\rho c_v)$.  Solving this equation\nexplicitly requires a timestep limiter of\n\\begin{equation}\n\\Delta t_\\mathrm{diff} \\le \\frac{1}{2} \\frac{\\Delta x^2}{D}\n\\end{equation}\n(this is implemented in \\code{ca\\_estdt\\_temp\\_diffusion} in {\\tt\n  Castro/Source/driver/timestep.F90}).\n\nSupport for diffusion must be compiled into the code by setting {\\tt\n  USE\\_DIFFUSION = TRUE} in your {\\tt GNUmakefile}.  It is treated\nexplicitly, by constructing the contribution to the evolution as a\nsource term.  This is time-centered to achieve second-order accuracy\nin time.\n\nThe following parameter affects diffusion:\n\\begin{itemize}\n\\item \\runparam{castro.diffuse\\_temp}:  enable thermal diffusion (0 or 1; default 0)\n\\end{itemize}\n\nA pure diffusion problem (with no hydrodynamics) can be run by setting\n\\begin{verbatim}\ncastro.diffuse_temp = 1\ncastro.do_hydro = 0\n\\end{verbatim}\n\nTo complete the setup, a thermal conductivity must be specified.  The\ninterface for the conductivity is:\n\\begin{lstlisting}[language=fortran]\n  subroutine thermal_conductivity(eos_state, therm_cond)\n    \n    use extern_probin_module, only: const_conductivity\n\n    type (eos_t), intent(in) :: eos_state\n    real (kind=dp_t), intent(inout) :: therm_cond\n\\end{lstlisting}\nThe density, temperature, and mass fractions come in through the {\\tt\n  eos\\_state} type.  An EOS call is done in \\castro\\ just before the\ncall to \\code{thermal\\_conductivity}, so you can assume that the entire\nstate is consistent.\n\nThere are two conductivity routines provided with \\castro\\ by default:\n\\begin{itemize}\n\\item {\\tt constant} : A simple constant thermal conductivity.  This can be \n  selected by setting \n\\begin{verbatim}\nConductivity_dir := constant\n\\end{verbatim}\nin your {\\tt GNUmakefile}.  To set the value of the conductivity (e.g., to\n$100$), you add to your {\\tt probin} file's {\\tt \\&extern} namelist:\n\\begin{verbatim}\nconst_conductivity = 100.0\n\\end{verbatim}\n\n\\item {\\tt constant\\_opacity} : A simple constant opacity.  This is\n  converted to an opacity as:\n  \\begin{equation}\n    \\kth = \\frac{16 \\sigma_B T^3}{3 \\kappa_\\mathrm{const} \\rho}\n  \\end{equation}\nwhere $\\kappa_\\mathrm{const}$ is the opacity, with units $\\mathrm{cm^2~g^{-1}}$.\nThis is selected by setting\n\\begin{verbatim}\nConductivity_dir := constant_opacity\n\\end{verbatim}\nin your {\\tt GNUmakefile}.  
Support for diffusion must be compiled into the code by setting {\\tt\n  USE\\_DIFFUSION = TRUE} in your {\\tt GNUmakefile}.  It is treated\nexplicitly, by constructing the contribution to the evolution as a\nsource term.  This is time-centered to achieve second-order accuracy\nin time.\n\nThe following parameter affects diffusion:\n\\begin{itemize}\n\\item \\runparam{castro.diffuse\\_temp}:  enable thermal diffusion (0 or 1; default 0)\n\\end{itemize}\n\nA pure diffusion problem (with no hydrodynamics) can be run by setting\n\\begin{verbatim}\ncastro.diffuse_temp = 1\ncastro.do_hydro = 0\n\\end{verbatim}\n\nTo complete the setup, a thermal conductivity must be specified.  The\ninterface for the conductivity is:\n\\begin{lstlisting}[language=fortran]\n  subroutine thermal_conductivity(eos_state, therm_cond)\n    \n    use extern_probin_module, only: const_conductivity\n\n    type (eos_t), intent(in) :: eos_state\n    real (kind=dp_t), intent(inout) :: therm_cond\n\\end{lstlisting}\nThe density, temperature, and mass fractions come in through the {\\tt\n  eos\\_state} type.  An EOS call is done in \\castro\\ just before the\ncall to \\code{thermal\\_conductivity}, so you can assume that the entire\nstate is consistent.\n\nThere are two conductivity routines provided with \\castro\\ by default:\n\\begin{itemize}\n\\item {\\tt constant} : A simple constant thermal conductivity.  This can be \n  selected by setting \n\\begin{verbatim}\nConductivity_dir := constant\n\\end{verbatim}\nin your {\\tt GNUmakefile}.  To set the value of the conductivity (e.g., to\n$100$), you add to your {\\tt probin} file's {\\tt \\&extern} namelist:\n\\begin{verbatim}\nconst_conductivity = 100.0\n\\end{verbatim}\n\n\\item {\\tt constant\\_opacity} : A simple constant opacity.  This is\n  converted to a conductivity as:\n  \\begin{equation}\n    \\kth = \\frac{16 \\sigma_B T^3}{3 \\kappa_\\mathrm{const} \\rho}\n  \\end{equation}\nwhere $\\kappa_\\mathrm{const}$ is the opacity, with units $\\mathrm{cm^2~g^{-1}}$.\nThis is selected by setting\n\\begin{verbatim}\nConductivity_dir := constant_opacity\n\\end{verbatim}\nin your {\\tt GNUmakefile}.  To set the value of the opacity, e.g., to\n0.2 (e.g., for electron scattering), set:\n\\begin{verbatim}\nconst_opacity = 0.2\n\\end{verbatim}\nin the {\\tt \\&extern} namelist of your {\\tt probin}.\n\n\\end{itemize}\n\nThe diffusion approximation breaks down at the surface of stars,\nwhere the density rapidly drops and the mean free path becomes \nlarge.  In those instances, you should use the flux limited diffusion\nmodule in Castro to evolve a radiation field.  However, if your\ninterest is only in the diffusion in the interior, you can use\nthe \\runparam{castro.diffuse\\_cutoff\\_density} parameter to specify a density\nbelow which diffusion is not modeled.  This is implemented in the\ncode by zeroing out the conductivity and skipping the estimation\nof the timestep limit in these zones.\n\nA simple test problem that sets up a Gaussian temperature profile \nand does pure diffusion is provided as {\\tt diffusion\\_test}.\n\n\n\n\\section{Enthalpy Diffusion}\n\n\\castro\\ can also diffuse enthalpy. \\MarginPar{these need to be documented}\n\nNote this uses the same interface for the transport coefficients as\nthermal diffusion, so the two cannot be used at the same time.\n\n\n\\section{Species Diffusion}\n\n\\castro\\ can also diffuse species.  \n\nNote this uses the same interface for the transport coefficients as\nthermal diffusion, so the two cannot be used at the same time.\n\n\n\n\\section{Viscosity}\n\n", "meta": {"hexsha": "90f7c3fa466634ab1169fdee2401b9ac8659ac61", "size": 4460, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Docs/Diffusion/CastroDiffusion.tex", "max_stars_repo_name": "yingtchen/Castro", "max_stars_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Docs/Diffusion/CastroDiffusion.tex", "max_issues_repo_name": "yingtchen/Castro", "max_issues_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Docs/Diffusion/CastroDiffusion.tex", "max_forks_repo_name": "yingtchen/Castro", "max_forks_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3968253968, "max_line_length": 84, "alphanum_fraction": 0.7533632287, "num_tokens": 1215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933359135361, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5968622192666758}}
{"text": "\\section{Replicated Growable Array}\n\\label{sect.rga}\n\nThe RGA, introduced by \\citet{Roh:2011dw}, is a replicated ordered list (sequence) datatype that supports \\emph{insert} and \\emph{delete} operations.\nIt can be used for collaborative editing of text by representing a string as an ordered list of characters.\n\nThe convergence of RGA has been proved by hand in previous work (see Section~\\ref{sect.related.verification}); we now present the first (to our knowledge) mechanised proof that RGA satisfies the specification of SEC from Section~\\ref{sect.abstract.convergence}.\nWe perform this proof within the causal broadcast model defined in Section~\\ref{sect.network}, and without making any assumptions beyond the six aforementioned network axioms.\nSince the axioms of our network model are easily justified, we have confidence in the correctness of our formalisation.\nOur proof makes extensive use of the general-purpose framework that we have established in the last two sections.\n\n\\subsection{Specifying Insertion and Deletion}\\label{sect.rga.spec}\n\nIn an ordered list, each insertion and deletion operation must identify the position at which the modification should take place.\nIn a non-replicated setting, the position is commonly expressed as an index into the list.\nHowever, the index of a list element may change if other elements are concurrently inserted or deleted earlier in the list; this is the problem at the heart of Operational Transformation (see Section~\\ref{sect.related.ot.crdts}).\nInstead of using indexes, the RGA algorithm assigns a unique, immutable identifier to each list element.\n\nInsertion operations place the new element \\emph{after} an existing list element with a given ID, or at the head of the list if no ID is given.\nDeletion operations refer to the ID of the list element that is to be deleted.\nHowever, it is not safe for a deletion operation to completely remove a list element, because then a concurrent insertion after the deleted element would not be able to locate the insertion position.\nInstead, the list retains \\emph{tombstones}: a deletion operation merely sets a flag on a list element to mark it as deleted, but the element actually remains in the list.\nA garbage collection process can be used to purge tombstones \\cite{Roh:2011dw}, but we do not consider it here.\n\nThe RGA state at each node is a list of elements.\nEach element is a triple consisting of the unique ID of the list element (of some type $\\isacharprime\\isa{id}$), the value inserted by the application (of some type $\\isacharprime\\isa{v}$), and a flag that indicates whether the element has been marked as deleted (of type $\\isa{bool}$):\n\\begin{isabelle}\n\\isacommand{type{\\isacharunderscore}synonym} {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ {\\isacharequal}\\ {\\isachardoublequoteopen}{\\isacharprime}id\\ {\\isasymtimes}\\ {\\isacharprime}v\\ {\\isasymtimes}\\ bool{\\isachardoublequoteclose}%\n\\end{isabelle}\n\nThe $\\isa{insert}$ function takes three parameters: the previous state of the list, the new element to insert, and optionally the ID of an existing element after which the new element should be inserted.\nIt returns the list with the new element inserted at the appropriate position, or $\\isa{None}$ on failure, which occurs if there was no existing element with the given ID.\nThe function iterates over the list, and for each list element $\\isa{x}$, it compares the ID (the first component of 
the $\\isacharprime\\isa{id} \\mathbin{\\isasymtimes} \\isacharprime\\isa{v} \\mathbin{\\isasymtimes} \\isa{bool}$ triple, written $\\isa{fst x}$) to the requested insertion position:\n\\begin{isabelle}\n~~~~{\\isachardoublequoteopen}insert\\ {\\isacharparenleft}x{\\isacharhash}xs{\\isacharparenright}\\ \\=e\\ {\\isacharparenleft}Some\\ i{\\isacharparenright}\\ \\={\\isacharequal}\\ {\\isacharparenleft}\\=\\kill\n\\isacommand{fun}\\ insert\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcolon}{\\isacharcolon}{\\isacharbraceleft}linorder{\\isacharbraceright}{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ {\\isasymRightarrow}\\ {\\isacharprime}id\\ option\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ option{\\isachardoublequoteclose}\\\\\n\\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}insert\\ xs\\ \\>e\\ None\\ \\>{\\isacharequal}\\ Some\\ {\\isacharparenleft}insert{\\isacharunderscore}body\\ xs\\ e{\\isacharparenright}{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}insert\\ {\\isacharbrackleft}{\\isacharbrackright}\\ \\>e\\ {\\isacharparenleft}Some\\ i{\\isacharparenright}\\ \\>{\\isacharequal}\\ None{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}insert\\ {\\isacharparenleft}x{\\isacharhash}xs{\\isacharparenright}\\ \\>e\\ {\\isacharparenleft}Some\\ i{\\isacharparenright}\\ \\>{\\isacharequal}\\ {\\isacharparenleft}\\>if\\ fst\\ x\\ {\\isacharequal}\\ i\\ then\\ Some\\ {\\isacharparenleft}x{\\isacharhash}insert{\\isacharunderscore}body\\ xs\\ e{\\isacharparenright}\\\\\n\\>\\>\\>else\\ insert\\ xs\\ e\\ {\\isacharparenleft}Some\\ i{\\isacharparenright}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}t{\\isachardot}\\ Some\\ {\\isacharparenleft}x{\\isacharhash}t{\\isacharparenright}{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nWhen the insertion position is found (or, in the case of insertion at the head of the list, immediately), the function $\\isa{insert-body}$ is invoked to perform the actual insertion:\n\\begin{isabelle}\n~~~~{\\isachardoublequoteopen}insert{\\isacharunderscore}body\\ {\\isacharparenleft}x{\\isacharhash}xs{\\isacharparenright}\\ \\=\\kill\n\\isacommand{fun} insert{\\isacharunderscore}body\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcolon}{\\isacharcolon}{\\isacharbraceleft}linorder{\\isacharbraceright}{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}insert{\\isacharunderscore}body\\ {\\isacharbrackleft}{\\isacharbrackright} \\>e\\ {\\isacharequal}\\ {\\isacharbrackleft}e{\\isacharbrackright}{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}insert{\\isacharunderscore}body\\ {\\isacharparenleft}x{\\isacharhash}xs{\\isacharparenright}\\>e\\ {\\isacharequal}\\ {\\isacharparenleft}if\\ fst\\ x\\ 
{\\isacharless}\\ fst\\ e\\ then\\ e{\\isacharhash}x{\\isacharhash}xs\\ else\\ x{\\isacharhash}insert{\\isacharunderscore}body\\ xs\\ e{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\nIn a non-replicated datatype it would be sufficient to insert the new element directly at the position found by the $\\isa{insert}$ function.\nHowever, a replicated setting is more difficult, because several nodes may concurrently insert new elements at the same position, and those insertion operations may be processed in a different order by different nodes.\nIn order to ensure that all nodes converge towards the same state (that is, the same order of list elements), we sort any concurrent insertions at the same position in descending order of the inserted elements' IDs.\nThis sorting is implemented in $\\isa{insert-body}$ by skipping over any elements with an ID that is greater than that of the newly inserted element (the $\\isa{fst x} > \\isa{fst e}$ case), and then placing the new element before the first existing element with a lesser ID (the $\\isa{fst x} < \\isa{fst e}$ case).\n\nNote that the type of IDs is specified as $\\isacharprime\\isa{id}\\isacharcolon\\isacharcolon\\isacharbraceleft\\isa{linorder}\\isacharbraceright$, which means that we require the type $\\isacharprime\\isa{id}$ to have an associated total (linear) order.\n$\\isa{linorder}$ is the name of a type class supplied by the Isabelle/HOL library.\nThis annotation is required in order to be able to perform the comparison $\\isa{fst x} < \\isa{fst e}$ on IDs.\nTo be precise, RGA requires the total order of IDs to be consistent with causality, which can easily be achieved using the logical timestamps defined by \\citet{Lamport:1978jq}.\n\nThe delete operation searches for the element with a given ID, and sets its flag to $\\isa{True}$ to mark it as deleted:\n\\begin{isabelle}\n~~~~{\\isachardoublequoteopen}delete\\ {\\isacharparenleft}{\\isacharparenleft}i{\\isacharprime}{\\isacharcomma}\\ v{\\isacharcomma}\\ flag{\\isacharparenright}{\\isacharhash}xs{\\isacharparenright}\\ \\=i\\ {\\isacharequal}\\ {\\isacharparenleft}\\=\\kill\n\\isacommand{fun}\\ delete\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcolon}{\\isacharcolon}{\\isacharbraceleft}linorder{\\isacharbraceright}{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ {\\isasymRightarrow}\\ {\\isacharprime}id\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ option{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}delete\\ {\\isacharbrackleft}{\\isacharbrackright}\\>i\\ {\\isacharequal}\\ None{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}delete\\ {\\isacharparenleft}{\\isacharparenleft}i{\\isacharprime}{\\isacharcomma}\\ v{\\isacharcomma}\\ flag{\\isacharparenright}{\\isacharhash}xs{\\isacharparenright} \\>i\\ {\\isacharequal}\\ {\\isacharparenleft}\\>if\\ i{\\isacharprime}\\ {\\isacharequal}\\ i\\ then\\ Some\\ {\\isacharparenleft}{\\isacharparenleft}i{\\isacharprime}{\\isacharcomma}\\ v{\\isacharcomma}\\ True{\\isacharparenright}{\\isacharhash}xs{\\isacharparenright}\\\\\n\\>\\>else\\ delete\\ xs\\ i\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}t{\\isachardot}\\ Some\\ 
{\\isacharparenleft}{\\isacharparenleft}i{\\isacharprime}{\\isacharcomma}v{\\isacharcomma}flag{\\isacharparenright}{\\isacharhash}t{\\isacharparenright}{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}%\n\\end{isabelle}\nNote that the operations presented here are deliberately inefficient in order to make them easier to reason about.\nOne can see our implementations of $\\isa{insert-body}$, $\\isa{insert}$, and $\\isa{delete}$ as functional specifications for RGAs, which could be optimised into more efficient algorithms using data refinement, if desired.\n\n\\subsection{Commutativity of Insertion and Deletion}\n\nRecall from Section~\\ref{sect.ops.commute} that in order to prove the convergence theorem we need to show that for the datatype in question, all its concurrent operations commute.\nIt is straightforward to demonstrate that $\\isa{delete}$ always commutes with itself, on concurrent and non-concurrent operations alike:\n\\begin{isabelle}\n\\isacommand{lemma}\\ delete{\\isacharunderscore}commutes{\\isacharcolon}\\\\\n~~~~{\\isachardoublequoteopen}delete\\ xs\\ i{\\isadigit{1}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ delete\\ ys\\ i{\\isadigit{2}}{\\isacharparenright}\\ {\\isacharequal}\\ delete\\ xs\\ i{\\isadigit{2}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ delete\\ ys\\ i{\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\nIt is a little more complex to demonstrate that two $\\isa{insert}$ operations commute.\nLet $\\isa{e1}$ and $\\isa{e2}$ be the two new list elements being inserted, each of which is a $\\isacharprime\\isa{id} \\mathbin{\\isasymtimes} \\isacharprime\\isa{v} \\mathbin{\\isasymtimes} \\isa{bool}$ triple.\nFurther, let $\\isa{i1} \\mathbin{\\isacharcolon\\isacharcolon} \\isacharprime\\isa{id option}$ be the position after which $\\isa{e1}$ should be inserted (either $\\isa{None}$ for the head of the list, or $\\isa{Some i}$ where $\\isa{i}$ is the ID of an existing list element), and similarly let $\\isa{i2}$ be the position after which $\\isa{e2}$ should be inserted.\nThen the two insertions commute only under certain assumptions:\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ \\=\\kill\n\\isacommand{lemma}\\ insert{\\isacharunderscore}commutes{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}fst\\ e{\\isadigit{1}}\\ {\\isasymnoteq}\\ fst\\ e{\\isadigit{2}}{\\isachardoublequoteclose}\\\\\n~~~~~~~~\\isakeyword{and}\\>{\\isachardoublequoteopen}i{\\isadigit{1}}\\ {\\isacharequal}\\ None\\ {\\isasymor}\\ i{\\isadigit{1}}\\ {\\isasymnoteq}\\ Some\\ {\\isacharparenleft}fst\\ e{\\isadigit{2}}{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~~~~~\\isakeyword{and}\\>{\\isachardoublequoteopen}i{\\isadigit{2}}\\ {\\isacharequal}\\ None\\ {\\isasymor}\\ i{\\isadigit{2}}\\ {\\isasymnoteq}\\ Some\\ {\\isacharparenleft}fst\\ e{\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}insert\\ xs\\ e{\\isadigit{1}}\\ i{\\isadigit{1}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ insert\\ ys\\ e{\\isadigit{2}}\\ i{\\isadigit{2}}{\\isacharparenright}\\ {\\isacharequal}\\ insert\\ xs\\ e{\\isadigit{2}}\\ i{\\isadigit{2}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ insert\\ ys\\ e{\\isadigit{1}}\\ i{\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent\nThat is, $\\isa{i1}$ 
cannot refer to the ID of $\\isa{e2}$ and vice versa, and the IDs of the two insertions must be distinct.\nWe prove later that these assumptions are indeed satisfied for all concurrent operations.\nFinally, $\\isa{delete}$ commutes with $\\isa{insert}$ whenever the element to be deleted is not the same as the element to be inserted:\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ \\=\\kill\n\\isacommand{lemma}\\ insert{\\isacharunderscore}delete{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}i{\\isadigit{2}}\\ {\\isasymnoteq}\\ fst\\ e{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}insert\\ xs\\ e\\ i{\\isadigit{1}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ delete\\ ys\\ i{\\isadigit{2}}{\\isacharparenright}\\ {\\isacharequal}\\ delete\\ xs\\ i{\\isadigit{2}}\\ {\\isasymbind}\\ {\\isacharparenleft}{\\isasymlambda}ys{\\isachardot}\\ insert\\ ys\\ e\\ i{\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\n\\subsection{Embedding RGA in the Network Model}\n\nIn order to obtain a proof of the strong eventual consistency of RGA, we embed the insertion and deletion operations in the network model of Section~\\ref{sect.network}.\nWe first define a datatype for operations (which are sent across the network in messages), and an interpretation function as introduced in Section~\\ref{sect.ops.interpretation}:\n\\begin{isabelle}\n~~~~{\\isachardoublequoteopen}interpret{\\isacharunderscore}opers\\ {\\isacharparenleft}Insert\\ e\\ n{\\isacharparenright}\\ \\=\\kill\n\\isacommand{datatype} {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ operation\\ {\\isacharequal} Insert\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt{\\isachardoublequoteclose}\\ {\\isachardoublequoteopen}{\\isacharprime}id\\ option{\\isachardoublequoteclose}\\ {\\isacharbar} Delete\\ {\\isachardoublequoteopen}{\\isacharprime}id{\\isachardoublequoteclose}\\\\[4pt]\n\\isacommand{fun} interpret{\\isacharunderscore}opers\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcolon}{\\isacharcolon}linorder{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ operation\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ option{\\isachardoublequoteclose}\\\\\n\\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}interpret{\\isacharunderscore}opers\\ {\\isacharparenleft}Insert\\ e\\ n{\\isacharparenright}\\>xs\\ \\ {\\isacharequal}\\ insert\\ xs\\ e\\ n{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}interpret{\\isacharunderscore}opers\\ {\\isacharparenleft}Delete\\ n{\\isacharparenright}\\>xs\\ \\ {\\isacharequal}\\ delete\\ xs\\ n{\\isachardoublequoteclose}\n\\end{isabelle}\n\nAs discussed above, the validity of operations depends on some assumptions: IDs of insertion operations must be unique, and whenever an insertion or deletion operation refers to an existing list element, that element must exist.\nAs introduced in Section~\\ref{sect.network.ops}, we can describe these requirements by using a predicate to specify what messages a node is allowed to 
broadcast when in a particular state:\n\\begin{isabelle}\n~~~~~~~~\\={\\isacharparenleft}i{\\isacharcomma}\\ Insert\\ e\\ {\\isacharparenleft}\\=Some\\ \\=pos{\\isacharparenright}\\={\\isacharparenright}\\ \\={\\isasymRightarrow}\\ fst\\ e\\ {\\isacharequal}\\ i\\ {\\isasymand}\\ \\=\\kill\n\\isacommand{definition}\\ valid{\\isacharunderscore}rga{\\isacharunderscore}msg\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ elt\\ list\\ {\\isasymRightarrow}\\ {\\isacharprime}id\\ {\\isasymtimes}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcolon}{\\isacharcolon}linorder{\\isacharcomma}\\ {\\isacharprime}v{\\isacharparenright}\\ operation\\ {\\isasymRightarrow}\\ bool{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}valid{\\isacharunderscore}rga{\\isacharunderscore}msg\\ list\\ msg\\ {\\isasymequiv}\\ case\\ msg\\ of\\\\\n\\>{\\isacharparenleft}i{\\isacharcomma}\\ Insert\\ e \\>None \\>\\>{\\isacharparenright}\\ \\>{\\isasymRightarrow}\\ fst\\ e\\ {\\isacharequal}\\ i\\ {\\isacharbar}\\\\\n\\>{\\isacharparenleft}i{\\isacharcomma}\\ Insert\\ e\\ {\\isacharparenleft}Some\\ pos{\\isacharparenright}{\\isacharparenright}\\ {\\isasymRightarrow}\\ fst\\ e\\ {\\isacharequal}\\ i\\ {\\isasymand}\\ pos\\ {\\isasymin}\\ set\\ {\\isacharparenleft}map\\ fst\\ list{\\isacharparenright}\\ {\\isacharbar}\\\\\n\\>{\\isacharparenleft}i{\\isacharcomma}\\ Delete \\>\\>pos \\>{\\isacharparenright} \\>{\\isasymRightarrow} \\>pos\\ {\\isasymin}\\ set\\ {\\isacharparenleft}map\\ fst\\ list{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nWe can now define RGA by extending $\\isa{network-with-constrained-ops}$. The interpretation function is instantiated with $\\isa{interpret-opers}$, the initial state with the empty list $\\isacharbrackleft\\,\\isacharbrackright$, and the validity predicate with $\\isa{valid-rga-msg}$:\n\\begin{isabelle}\n\\isacommand{locale}\\ rga\\ {\\isacharequal}\\ network{\\isacharunderscore}with{\\isacharunderscore}constrained{\\isacharunderscore}ops\\ {\\isacharunderscore}\\ interpret{\\isacharunderscore}opers\\ {\\isachardoublequoteopen}{\\isacharbrackleft}{\\isacharbrackright}{\\isachardoublequoteclose}\\ valid{\\isacharunderscore}rga{\\isacharunderscore}msg\n\\end{isabelle}\n\nWithin this locale, we prove that whenever an insertion or deletion operation $\\isa{op2}$ references an existing list element, there is always a prior insertion operation $\\isa{op1}$ that created the element being referenced:\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ \\={\\isachardoublequoteopen}n\\ {\\isacharequal}\\ None\\ {\\isasymor}\\ {\\isacharparenleft}{\\isasymexists}e{\\isacharprime}\\ n{\\isacharprime}{\\isachardot}\\ \\=\\kill\n\\isacommand{lemma}\\ allowed{\\isacharunderscore}insert{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}Broadcast\\ {\\isacharparenleft}Insert\\ e\\ n{\\isacharparenright}\\ {\\isasymin}\\ set\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}n\\ {\\isacharequal}\\ None\\ {\\isasymor}\\ {\\isacharparenleft}{\\isasymexists}e{\\isacharprime}\\ n{\\isacharprime}{\\isachardot}\\>n\\ {\\isacharequal}\\ Some\\ {\\isacharparenleft}fst\\ e{\\isacharprime}{\\isacharparenright}\\ {\\isasymand}\\\\\n\\>\\>Deliver\\ {\\isacharparenleft}Insert\\ e{\\isacharprime}\\ n{\\isacharprime}{\\isacharparenright}\\ 
{\\isasymsqsubset}\\isactrlsup i\\ Broadcast\\ {\\isacharparenleft}Insert\\ e\\ n{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\\\\[4pt]\n\\isacommand{lemma}\\ allowed{\\isacharunderscore}delete{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}Broadcast\\ {\\isacharparenleft}Delete\\ x{\\isacharparenright}\\ {\\isasymin}\\ set\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}{\\isasymexists}n{\\isacharprime}\\ v\\ b{\\isachardot}\\ Deliver\\ {\\isacharparenleft}Insert\\ {\\isacharparenleft}x{\\isacharcomma}\\ v{\\isacharcomma}\\ b{\\isacharparenright}\\ n{\\isacharprime}{\\isacharparenright}\\ {\\isasymsqsubset}\\isactrlsup i\\ Broadcast\\ {\\isacharparenleft}Delete\\ x{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nSince the network ensures causally ordered delivery, all nodes must deliver the insertion $\\isa{op1}$ before the dependent operation $\\isa{op2}$.\nHence we show that in all cases where operations do not commute, one operation happens before another.\nConversely, whenever operations are concurrent, we show that they commute:\n\\begin{isabelle}\n\\isacommand{theorem}\\ concurrent{\\isacharunderscore}operations{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ {\\isachardoublequoteopen}hb{\\isachardot}concurrent{\\isacharunderscore}ops{\\isacharunderscore}commute\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nFurthermore, although the type signature of the interpretation function allows an operation to fail by returning $\\isa{None}$, we can prove that this failure case is never reached in any execution of the network:\n\\begin{isabelle}\n\\isacommand{theorem}\\ apply{\\isacharunderscore}operations{\\isacharunderscore}never{\\isacharunderscore}fails{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ {\\isachardoublequoteopen}hb.apply{\\isacharunderscore}operations\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isacharparenright}\\ {\\isasymnoteq}\\ None{\\isachardoublequoteclose}\n\\end{isabelle}\nIt is now easy to show that the $\\isa{rga}$ locale satisfies all of the requirements of the abstract specification $\\isa{strong-eventual-consistency}$ (Section~\\ref{sect.abstract.sec.spec}), which demonstrates formally that RGA provides SEC.\n", "meta": {"hexsha": "b0cdbd6a4ca6f06264b95b8951eea5dee39dd5f4", "size": 22363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/rga.tex", "max_stars_repo_name": "trvedata/crdt-isabelle", "max_stars_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 62, "max_stars_repo_stars_event_min_datetime": "2017-07-09T22:49:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-28T07:18:10.000Z", "max_issues_repo_path": "paper/rga.tex", "max_issues_repo_name": "trvedata/crdt-isabelle", "max_issues_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-29T14:33:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-17T09:22:42.000Z", "max_forks_repo_path": "paper/rga.tex", "max_forks_repo_name": 
"trvedata/crdt-isabelle", "max_forks_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-03-03T00:36:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-08T02:18:48.000Z", "avg_line_length": 143.3525641026, "max_line_length": 584, "alphanum_fraction": 0.7765058355, "num_tokens": 6846, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8198933271118222, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5968622176979842}}
{"text": "\\hypertarget{group__numpp__structures__matrices__dense}{}\\section{Dense}\n\\label{group__numpp__structures__matrices__dense}\\index{Dense@{Dense}}\n\n\nModule containg constexpr dense matrix implementation.  \n\n\n\\subsection*{Classes}\n\\begin{DoxyCompactItemize}\n\\item \nclass \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense$<$ T, Rows, Columns $>$}\n\\item \nclass \\hyperlink{classtuple__size}{tuple\\+\\_\\+size}\n\\begin{DoxyCompactList}\\small\\item\\em \\hyperlink{classtuple__size}{tuple\\+\\_\\+size} specialization for \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense} \\end{DoxyCompactList}\\item \nclass \\hyperlink{classtuple__element}{tuple\\+\\_\\+element}\n\\begin{DoxyCompactList}\\small\\item\\em \\hyperlink{classtuple__element}{tuple\\+\\_\\+element} specialization for \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense} \\end{DoxyCompactList}\\end{DoxyCompactItemize}\n\\subsection*{Functions}\n\\begin{DoxyCompactItemize}\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Columns, Rows $>$ \\hyperlink{group__numpp__structures__matrices__dense_ga07f80d900be174247ad3a3c42e9244fd}{numpp\\+::matrix\\+::transpose} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_ga60cc656d6425b919b2876e520854d029}{std\\+::pow} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix, U \\&\\&exp)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_gaad33179f3c1ba4df7469b7e3b0da7dbf}{std\\+::log} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_ga5455e1c6ea8020055f4873c6c07d5810}{std\\+::exp} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_gaa1377585547eae134c6ce13018e08a6b}{std\\+::abs} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ 
\\hyperlink{group__numpp__structures__matrices__dense_gac89face89afdd17bfd0c07fda3997db2}{std\\+::sqrt} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_ga3cccd3987b35936ecd0368f9dc4d6832}{std\\+::sin} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\hyperlink{group__numpp__structures__matrices__dense_gaf4c3368b8cb2cdb05c0dd5a4d1790480}{std\\+::cos} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga9fdd727151e15fa8f6a806b9828d06d4}{numpp\\+::matrix\\+::operator+} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&first, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&second)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga1a99075b6985b2ccafb41a6d9ad91824}{numpp\\+::matrix\\+::operator-\\/} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&first, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&second)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows\\+Columns, std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga9f59a9112db4704ed081eaf0c2b738b5}{numpp\\+::matrix\\+::operator$\\ast$} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Rows\\+Columns $>$ \\&first, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows\\+Columns, Columns $>$ \\&second)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_gaeaace68ab41170931eb69e1cdb4fa381}{numpp\\+::matrix\\+::multiply} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&first, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&second)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga4ba3829a99216c5210cd95f49b703cf0}{numpp\\+::matrix\\+::operator$\\ast$} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&matrix, const U scalar)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto 
\\hyperlink{group__numpp__structures__matrices__dense_ga6eabf8b5b95a1aeea6b7dc2e4344619e}{numpp\\+::matrix\\+::operator$\\ast$} (const U scalar, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&matrix)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga47bac473f5d6fe8ca8e085f8897d3f81}{numpp\\+::matrix\\+::operator/} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&matrix, const U scalar)\n\\item \n{\\footnotesize template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr auto \\hyperlink{group__numpp__structures__matrices__dense_ga1a70f81ac03f3832f97467af1dd782fc}{numpp\\+::matrix\\+::operator/} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&first, const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&second)\n\\item \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr T \\& \\hyperlink{group__numpp__structures__matrices__dense_gaa9d630e00d9adfc653c428dad39d1b56}{numpp\\+::matrix\\+::get} (\\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&n)\n\\item \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr T \\& \\hyperlink{group__numpp__structures__matrices__dense_ga161ec12506af265f6c407b695b06974c}{numpp\\+::matrix\\+::get} (\\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&\\&n)\n\\item \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr const T \\& \\hyperlink{group__numpp__structures__matrices__dense_ga5587c6c4095d1147e35da4860e22df52}{numpp\\+::matrix\\+::get} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&n)\n\\item \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ }\\\\constexpr const T \\&\\& \\hyperlink{group__numpp__structures__matrices__dense_ga8a016cefdf00f72d041d313318e7a2fc}{numpp\\+::matrix\\+::get} (const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&\\&n)\n\\end{DoxyCompactItemize}\n\n\n\\subsection{Detailed Description}\nModule containing constexpr dense matrix implementation. 
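\nTwo different products are documented below\\+: {\\ttfamily operator$\\ast$} implements the usual row-\\/column matrix product, while {\\ttfamily multiply} is element-\\/wise (a Hadamard product). As a small worked $2 \\times 2$ illustration of the two semantics (the concrete numbers are chosen here for exposition only)\\+:\n\\[\n\\begin{pmatrix} 1 & 2 \\\\ 3 & 4 \\end{pmatrix}\n\\begin{pmatrix} 5 & 6 \\\\ 7 & 8 \\end{pmatrix} =\n\\begin{pmatrix} 19 & 22 \\\\ 43 & 50 \\end{pmatrix},\n\\qquad\n\\begin{pmatrix} 1 & 2 \\\\ 3 & 4 \\end{pmatrix} \\circ\n\\begin{pmatrix} 5 & 6 \\\\ 7 & 8 \\end{pmatrix} =\n\\begin{pmatrix} 5 & 12 \\\\ 21 & 32 \\end{pmatrix}.\n\\]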
\n\n{\\bfseries \\begin{DoxyWarning}{Warning}\nCompilation time of this matrix may offset any runtime speedup.~\\newline\nFor a discussion of the problem see \\textit{praca} \n\\end{DoxyWarning}\nThis matrix implementation uses heavy T\\+MP tricks, and C++17 support is required for it to work.}\n\n{\\bfseries {\\bfseries \\begin{DoxyWarning}{Warning}\nIf you are looking for more compile-\\/time/runtime balanced implementations, check Eigen, Armadillo or other libraries\n\\end{DoxyWarning}\n{\\bfseries Example\\+:} \n\\begin{DoxyCode}\n\\textcolor{comment}{//Declaration of matrix}\n\\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp::matrix::dense<double, 3, 3>} mat\\{\n                                              0,  2,   3.15,\n                                              4,  8,   12,\n                                              5., 7.7, 4.2\n                                              \\};\n\\textcolor{comment}{//Perform matrix multiplication}\n\\textcolor{comment}{//Other operators are overloaded as well}\n\\textcolor{keyword}{auto} product = mat*mat;\n\\textcolor{comment}{//Operation with a scalar}\n\\textcolor{keyword}{auto} divided = mat/3.14;\n\n\\textcolor{keyword}{auto} transposed = \\hyperlink{group__numpp__structures__matrices__dense_ga07f80d900be174247ad3a3c42e9244fd}{transpose}(mat);\n\n\\textcolor{comment}{//Uses iterators, so you can print it this way:}\n\\textcolor{keywordflow}{for}(\\textcolor{keyword}{const} \\textcolor{keyword}{auto}& row: mat)\\{\n  \\textcolor{keywordflow}{for}(\\textcolor{keyword}{const} \\textcolor{keyword}{auto}& elem: row)\n    std::cout << elem << \\textcolor{stringliteral}{\" \"};\n  std::cout << std::endl;\n\\}\n\\end{DoxyCode}\n}}\n\n{\\bfseries {\\bfseries \n\\begin{DoxyCode}\n\\textcolor{preprocessor}{#include\"numpp/structures/matrices/dense\"}\n\\end{DoxyCode}\n }}\n\n\\subsection{Function Documentation}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gaa1377585547eae134c6ce13018e08a6b}\\label{group__numpp__structures__matrices__dense_gaa1377585547eae134c6ce13018e08a6b}} \n\\index{Dense@{Dense}!abs@{abs}}\n\\index{abs@{abs}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{abs()}{abs()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::abs (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise abs function $|A|$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix whose elements will be replaced by their absolute values\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gaf4c3368b8cb2cdb05c0dd5a4d1790480}\\label{group__numpp__structures__matrices__dense_gaf4c3368b8cb2cdb05c0dd5a4d1790480}} \n\\index{Dense@{Dense}!cos@{cos}}\n\\index{cos@{cos}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{cos()}{cos()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::cos (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, 
Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise cos function $\\cos(A)$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix with elements passed through cos standard library function\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga5455e1c6ea8020055f4873c6c07d5810}\\label{group__numpp__structures__matrices__dense_ga5455e1c6ea8020055f4873c6c07d5810}} \n\\index{Dense@{Dense}!exp@{exp}}\n\\index{exp@{exp}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{exp()}{exp()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::exp (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise exp function $e^{A}$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix whose elements will be exponentiated\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gaa9d630e00d9adfc653c428dad39d1b56}\\label{group__numpp__structures__matrices__dense_gaa9d630e00d9adfc653c428dad39d1b56}} \n\\index{Dense@{Dense}!get@{get}}\n\\index{get@{get}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{get()}{get()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [1/4]}}\n{\\footnotesize\\ttfamily template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr T\\& numpp\\+::matrix\\+::get (\\begin{DoxyParamCaption}\\item[{\\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{n }\\end{DoxyParamCaption})}\n\nOverload of the standard get function for dense matrices\n\nElements are taken column-\\/wise. {\\bfseries Example\\+:} \n\\begin{DoxyCode}\n\\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp::matrix::dense<int, 2, 2>} foo\\{1,2,\n                                    3,4\\};\nstd::cout << get<0>(foo) << std::endl; \\textcolor{comment}{//Returns 1}\nstd::cout << get<2>(foo) << std::endl; \\textcolor{comment}{//Returns 3, jumps to the next column}\n\\end{DoxyCode}\n\n\n\\begin{DoxyReturn}{Returns}\nElement of matrix\n\\end{DoxyReturn}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga161ec12506af265f6c407b695b06974c}\\label{group__numpp__structures__matrices__dense_ga161ec12506af265f6c407b695b06974c}} \n\\index{Dense@{Dense}!get@{get}}\n\\index{get@{get}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{get()}{get()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [2/4]}}\n{\\footnotesize\\ttfamily template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr T\\& numpp\\+::matrix\\+::get (\\begin{DoxyParamCaption}\\item[{\\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&\\&}]{n }\\end{DoxyParamCaption})}\n\nThis is an overloaded member function, provided for convenience. 
It differs from the above function only in what argument(s) it accepts.\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga5587c6c4095d1147e35da4860e22df52}\\label{group__numpp__structures__matrices__dense_ga5587c6c4095d1147e35da4860e22df52}} \n\\index{Dense@{Dense}!get@{get}}\n\\index{get@{get}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{get()}{get()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [3/4]}}\n{\\footnotesize\\ttfamily template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr const T\\& numpp\\+::matrix\\+::get (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{n }\\end{DoxyParamCaption})}\n\nThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga8a016cefdf00f72d041d313318e7a2fc}\\label{group__numpp__structures__matrices__dense_ga8a016cefdf00f72d041d313318e7a2fc}} \n\\index{Dense@{Dense}!get@{get}}\n\\index{get@{get}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{get()}{get()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [4/4]}}\n{\\footnotesize\\ttfamily template$<$std\\+::size\\+\\_\\+t Index, typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr const T\\&\\& numpp\\+::matrix\\+::get (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&\\&}]{n }\\end{DoxyParamCaption})}\n\nThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gaad33179f3c1ba4df7469b7e3b0da7dbf}\\label{group__numpp__structures__matrices__dense_gaad33179f3c1ba4df7469b7e3b0da7dbf}} \n\\index{Dense@{Dense}!log@{log}}\n\\index{log@{log}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{log()}{log()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::log (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise logarithm $\\ln(A)$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix whose elements will be replaced by their natural logarithms\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gaeaace68ab41170931eb69e1cdb4fa381}\\label{group__numpp__structures__matrices__dense_gaeaace68ab41170931eb69e1cdb4fa381}} \n\\index{Dense@{Dense}!multiply@{multiply}}\n\\index{multiply@{multiply}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{multiply()}{multiply()}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::multiply (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{first,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&}]{second }\\end{DoxyParamCaption})}\n\nElement-\\/wise multiplication of two matrices of the same 
size\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga9f59a9112db4704ed081eaf0c2b738b5}\\label{group__numpp__structures__matrices__dense_ga9f59a9112db4704ed081eaf0c2b738b5}} \n\\index{Dense@{Dense}!operator$\\ast$@{operator$\\ast$}}\n\\index{operator$\\ast$@{operator$\\ast$}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator$\\ast$()}{operator*()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [1/3]}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows\\+Columns, std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator$\\ast$ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Rows\\+Columns $>$ \\&}]{first,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows\\+Columns, Columns $>$ \\&}]{second }\\end{DoxyParamCaption})}\n\nMatrix row-\\/column multiplication\n\n\\begin{DoxyReturn}{Returns}\nMatrix with appropriate size after multiplication (dense$<$common\\+\\_\\+type\\+\\_\\+t$<$\\+U,\\+T$>$, Rows, Columns$>$)\n\\end{DoxyReturn}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga4ba3829a99216c5210cd95f49b703cf0}\\label{group__numpp__structures__matrices__dense_ga4ba3829a99216c5210cd95f49b703cf0}} \n\\index{Dense@{Dense}!operator$\\ast$@{operator$\\ast$}}\n\\index{operator$\\ast$@{operator$\\ast$}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator$\\ast$()}{operator*()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [2/3]}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator$\\ast$ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{matrix,  }\\item[{const U}]{scalar }\\end{DoxyParamCaption})}\n\nMultiplication of dense matrix and scalar\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga6eabf8b5b95a1aeea6b7dc2e4344619e}\\label{group__numpp__structures__matrices__dense_ga6eabf8b5b95a1aeea6b7dc2e4344619e}} \n\\index{Dense@{Dense}!operator$\\ast$@{operator$\\ast$}}\n\\index{operator$\\ast$@{operator$\\ast$}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator$\\ast$()}{operator*()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [3/3]}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator$\\ast$ (\\begin{DoxyParamCaption}\\item[{const U}]{scalar,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nMultiplication of dense matrix and scalar\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga9fdd727151e15fa8f6a806b9828d06d4}\\label{group__numpp__structures__matrices__dense_ga9fdd727151e15fa8f6a806b9828d06d4}} \n\\index{Dense@{Dense}!operator+@{operator+}}\n\\index{operator+@{operator+}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator+()}{operator+()}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator+ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{first,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&}]{second 
}\\end{DoxyParamCaption})}\n\nElement-\\/wise addition of two matrices of the same size\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga1a99075b6985b2ccafb41a6d9ad91824}\\label{group__numpp__structures__matrices__dense_ga1a99075b6985b2ccafb41a6d9ad91824}} \n\\index{Dense@{Dense}!operator-\\/@{operator-\\/}}\n\\index{operator-\\/@{operator-\\/}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator-\\/()}{operator-()}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator-\\/ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{first,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&}]{second }\\end{DoxyParamCaption})}\n\nElement-\\/wise subtraction of two matrices of the same size\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga47bac473f5d6fe8ca8e085f8897d3f81}\\label{group__numpp__structures__matrices__dense_ga47bac473f5d6fe8ca8e085f8897d3f81}} \n\\index{Dense@{Dense}!operator/@{operator/}}\n\\index{operator/@{operator/}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator/()}{operator/()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [1/2]}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator/ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{matrix,  }\\item[{const U}]{scalar }\\end{DoxyParamCaption})}\n\nElement-\\/wise division of a matrix by a scalar\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga1a70f81ac03f3832f97467af1dd782fc}\\label{group__numpp__structures__matrices__dense_ga1a70f81ac03f3832f97467af1dd782fc}} \n\\index{Dense@{Dense}!operator/@{operator/}}\n\\index{operator/@{operator/}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{operator/()}{operator/()}\\hspace{0.1cm}{\\footnotesize\\ttfamily [2/2]}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr auto numpp\\+::matrix\\+::operator/ (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{first,  }\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ U, Rows, Columns $>$ \\&}]{second }\\end{DoxyParamCaption})}\n\nElement-\\/wise division of two matrices\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga60cc656d6425b919b2876e520854d029}\\label{group__numpp__structures__matrices__dense_ga60cc656d6425b919b2876e520854d029}} \n\\index{Dense@{Dense}!pow@{pow}}\n\\index{pow@{pow}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{pow()}{pow()}}\n{\\footnotesize\\ttfamily template$<$typename T , typename U , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+P\\+R \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::pow (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix,  }\\item[{U \\&\\&}]{exp }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix with element-\\/wise power $A^{exp}$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix whose elements will be taken to the exp 
power, element by element\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga3cccd3987b35936ecd0368f9dc4d6832}\\label{group__numpp__structures__matrices__dense_ga3cccd3987b35936ecd0368f9dc4d6832}} \n\\index{Dense@{Dense}!sin@{sin}}\n\\index{sin@{sin}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{sin()}{sin()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::sin (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise sin function $\\sin(A)$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix with elements passed through sin standard library function\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_gac89face89afdd17bfd0c07fda3997db2}\\label{group__numpp__structures__matrices__dense_gac89face89afdd17bfd0c07fda3997db2}} \n\\index{Dense@{Dense}!sqrt@{sqrt}}\n\\index{sqrt@{sqrt}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{sqrt()}{sqrt()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nC\\+O\\+N\\+S\\+T\\+E\\+X\\+PR \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$T, Rows, Columns$>$ std\\+::sqrt (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{numpp\\+::matrix\\+::dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nStandard library overload, matrix element-\\/wise sqrt function $\\sqrt{A}$\n\n\n\\begin{DoxyParams}{Parameters}\n{\\em matrix} & Matrix with elements passed through sqrt standard library function\\\\\n\\hline\n\\end{DoxyParams}\n\\mbox{\\Hypertarget{group__numpp__structures__matrices__dense_ga07f80d900be174247ad3a3c42e9244fd}\\label{group__numpp__structures__matrices__dense_ga07f80d900be174247ad3a3c42e9244fd}} \n\\index{Dense@{Dense}!transpose@{transpose}}\n\\index{transpose@{transpose}!Dense@{Dense}}\n\\subsubsection{\\texorpdfstring{transpose()}{transpose()}}\n{\\footnotesize\\ttfamily template$<$typename T , std\\+::size\\+\\_\\+t Rows, std\\+::size\\+\\_\\+t Columns$>$ \\\\\nconstexpr \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$T, Columns, Rows$>$ numpp\\+::matrix\\+::transpose (\\begin{DoxyParamCaption}\\item[{const \\hyperlink{classnumpp_1_1matrix_1_1dense}{dense}$<$ T, Rows, Columns $>$ \\&}]{matrix }\\end{DoxyParamCaption})}\n\nTransposes given matrix \\begin{DoxyReturn}{Returns}\nTransposed dense matrix\n\\end{DoxyReturn}\n", "meta": {"hexsha": "d1f64a1325dd58f852c7dad553cd79ba69c604dd", "size": 28231, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/group__numpp__structures__matrices__dense.tex", "max_stars_repo_name": "szymonmaszke/numpp", "max_stars_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-06-06T01:51:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-02T15:17:00.000Z", "max_issues_repo_path": "docs/group__numpp__structures__matrices__dense.tex", "max_issues_repo_name": "vyzyv/numpp", "max_issues_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_issues_repo_licenses": ["MIT"], 
"max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-11-28T12:15:46.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T00:03:38.000Z", "max_forks_repo_path": "docs/group__numpp__structures__matrices__dense.tex", "max_forks_repo_name": "szymonmaszke/numpp", "max_forks_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-08-06T13:58:27.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-06T06:45:22.000Z", "avg_line_length": 89.0567823344, "max_line_length": 475, "alphanum_fraction": 0.7413835854, "num_tokens": 10461, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.8198933315126791, "lm_q1q2_score": 0.5968622063854796}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\n\\title{An Example Document}\n\n\\author{John Smith}\n\n\\date{}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{The first section}\n\nThis is an example of a document formatted using \\LaTeX{}.\nThis is an example of a citation \\cite{gG07}.\nNow here is an example of an equation:\n\n\\begin{align}\ni\\hbar\\frac{\\partial}{\\partial t}\\Psi(r,t) = -\\frac{\\hbar^2}{2m}\\nabla^2\\Psi(r,t)+V(r)\\Psi(r,t)\n\\end{align}\n\n\\bibliographystyle{amsplain}\n\n\\bibliography{BibFile}\n\n\\end{document}\n", "meta": {"hexsha": "7cbbe32df95885858cb06479cd9e68c3f6062d82", "size": 508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/sample_input_files/testingBib.tex", "max_stars_repo_name": "willsower/latex2speech", "max_stars_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-03-17T22:13:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-30T20:35:39.000Z", "max_issues_repo_path": "Documentation/sample_input_files/testingBib.tex", "max_issues_repo_name": "willsower/latex2speech", "max_issues_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 50, "max_issues_repo_issues_event_min_datetime": "2021-03-15T23:03:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-14T14:22:45.000Z", "max_forks_repo_path": "Documentation/sample_input_files/testingBib.tex", "max_forks_repo_name": "willsower/latex2speech", "max_forks_repo_head_hexsha": "36a69bb5ee74e1ca362968604b4a554034c5f408", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-03-30T18:18:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-14T17:51:26.000Z", "avg_line_length": 16.9333333333, "max_line_length": 95, "alphanum_fraction": 0.7204724409, "num_tokens": 166, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7905303186696747, "lm_q1q2_score": 0.5967831938549105}}
{"text": "\\documentclass[gr-notes.tex]{subfiles}\n\n\\begin{document}\n\n\\setcounter{chapter}{6}\n\n\\chapter{Physics in a curved spacetime}\n\n\\setcounter{section}{5}\n\n\\section{Exercises}\n\n\n\\textbf{1}\nIf Equation 7.3 were the correct generalization of 7.1 in a curved spacetime, what are the implications? What would happen to the number of particles in a comoving volume of the fluid over time? May we experimentally distinguish between Equations 7.2 and 7.3?\n\nThe number of particles would change proportionally to the square of the Ricci scalar, which corresponds to the curvature of the manifold. Whether particles are created ($+$) or destroyed ($-$) would depend on the sign of $q$ in the equation.\n\nWe could set up some experiment which tests for a change in the number of particles in a moving fluid, in various gravitational fields, to verify whether the RHS of the equation is non-zero.\n\n\n\n\n\\textbf{2}\nCompute $g^{\\alpha\\beta}$ for the line element given by Equation 7.8, to first order in $\\phi$.\n\nBased on the line element, we can infer that the metric is\n%\n\\begin{align*}\n  (g_{\\alpha\\beta}) &\\underset{(t,x,y,z)}{\\to}\n  \\mqty(\\dmat[0]{-(1+2\\phi),(1-2\\phi),(1-2\\phi),(1-2\\phi)})\n  \\\\\n  (g^{\\alpha\\beta}) &\\underset{(t,x,y,z)}{\\to}\n  \\mqty(\\dmat[0]{-(1+2\\phi)^{-1},(1-2\\phi)^{-1},(1-2\\phi)^{-1},(1-2\\phi)^{-1}})\n  \\approx\n  \\mqty(\\dmat[0]{-(1-2\\phi),(1+2\\phi),(1+2\\phi),(1+2\\phi)})\n\\end{align*}\n\n\n\n\\textbf{3}\nCalculate the Christoffel sybmols for the metric given by Equation 7.8, to first order in $\\phi$, assuming $\\phi = \\phi(t,x,y,z)$.\n\nI do the following with the assistance of the free computer algebra system \\texttt{Maxima}. I used the exact form of the metric tensor, and then approximated the resulting Christoffel symbols to first order in $\\phi$.\n\n\\begin{align*}\n  \\tensor{\\Gamma}{^t_{t\\alpha}} &=\n  \\pdv{\\phi}{x^\\alpha} \\frac{1}{1 + 2\\phi} \\approx\n  \\pdv{\\phi}{x^\\alpha} (1 - 2\\phi)\n  &\n  \\tensor{\\Gamma}{^\\alpha_{t\\alpha}} &=\n -\\pdv{\\phi}{t} \\frac{1}{1 - 2\\phi} \\approx\n -\\pdv{\\phi}{t} (1 + 2\\phi)\n  \\\\\n  \\tensor{\\Gamma}{^i_{tt}} &=\n  \\pdv{\\phi}{x^i} \\frac{1}{1 - 2\\phi} \\approx\n  \\pdv{\\phi}{x^i} (1 + 2\\phi)\n  &\n  \\tensor{\\Gamma}{^t_{ii}} &=\n -\\pdv{\\phi}{t} \\frac{1}{1 + 2\\phi} \\approx\n  \\pdv{\\phi}{t} (1 - 2\\phi)\n  \\\\\n  \\tensor{\\Gamma}{^i_{jj}} =\n -\\tensor{\\Gamma}{^j_{ij}} =\n -\\tensor{\\Gamma}{^i_{ii}} &=\n  \\pdv{\\phi}{x^i} \\frac{1}{1 - 2\\phi} \\approx\n  \\pdv{\\phi}{x^i} (1 + 2\\phi)\n\\end{align*}\n\n\n\\textbf{5}\n\n(a) In the case of a perfect fluid, verify that the spatial components of Equation 7.6 reduce to\n%\n\\begin{displaymath}\n  \\dot{\\boldsymbol{v}} +\n  (\\boldsymbol{v} \\cdot \\grad) \\boldsymbol{v} +\n  \\grad p / \\rho +\n  \\grad \\phi =\n  0\n\\end{displaymath}\n%\nin the Newtonian limit and in the weak-field regime (the metric given by Equation 7.8).\n\n\\begin{align*}\n  T^{\\mu\\nu} &=\n  (\\rho + p) U^\\mu U^\\nu + p g^{\\mu\\nu}\n  \\\\ &\\approx\n  \\rho U^\\mu U^\\nu + p g^{\\mu\\nu}\n  \\\\ &\\approx\n  m U^\\mu (n U^\\nu) + p g^{\\mu\\nu}\n  \\\\\n  T^{i\\nu} &\\approx\n  m U^i (n U^\\nu) + p g^{i\\nu}\n  \\\\\n  \\tensor{T}{^{i\\nu}_{;\\nu}} &\\approx\n  m [ U^i (n U^\\nu) ]_{;\\nu} + [ p g^{i\\nu} ]_{;\\nu} =\n  m n U^\\nu \\tensor{U}{^i_{;\\nu}} + g^{i\\nu} p_{;\\nu} = 0\n  \\\\ \\implies\n  0 &=\n  U^\\nu \\tensor{U}{^i_{;\\nu}} + g^{i\\nu} p_{;\\nu} / \\rho\n  \\\\ &=\n  U^\\nu (\\tensor{U}{^i_{,\\nu}} + 
U^\\lambda \\tensor{\\Gamma}{^i_{\\lambda\\nu}}) +\n  g^{i\\nu} p_{,\\nu} / \\rho\n  \\\\ &=\n  U^0 \\tensor{U}{^i_{,0}} +\n  U^j \\tensor{U}{^i_{,j}} +\n  U^\\nu U^\\lambda \\tensor{\\Gamma}{^i_{\\lambda\\nu}} +\n  g^{i\\nu} p_{,\\nu} / \\rho\n  \\\\ &=\n  \\gamma \\dv{\\tau} (\\gamma v^i) +\n  \\gamma v^j (\\gamma v^i)_{,j} +\n  U^\\nu U^\\lambda \\tensor{\\Gamma}{^i_{\\lambda\\nu}} +\n  g^{ii} p_{,i} / \\rho\n  \\\\ &\\approx\n  \\dv{v^i}{\\tau} +\n  v^j v^i_{,j} +\n  (U^0)^2 \\tensor{\\Gamma}{^i_{00}} +\n  (1 - 2\\phi) p_{,i} / \\rho\n  \\\\ &\\approx\n  \\dv{v^i}{\\tau} +\n  v^j v^i_{,j} +\n  \\phi_{,i} +\n  p_{,i} / \\rho\n\\end{align*}\n%\nrewriting this in vector form, we get the original equation.\n\n\n(b) Now look at the time-component instead of the spatial component.\n\n\\begin{align*}\n  T^{0\\nu} &=\n  (\\rho + p) U^0 U^\\nu + p g^{0\\nu} \\approx\n  m U^0 (n U^\\nu) + p g^{0\\nu}\n  \\\\\n  \\tensor{T}{^{0\\nu}_{;\\nu}} &=\n  m [U^0 (n U^\\nu)]_{;\\nu} + [p g^{0\\nu}]_{;\\nu} =\n  m n U^\\nu \\tensor{U}{^0_{;\\nu}} + g^{00} p_{,\\nu} =\n  0\n  \\\\ \\implies\n  0 &=\n  U^\\nu \\tensor{U}{^0_{;\\nu}} + g^{00} \\dot{p} / \\rho\n  \\\\ &=\n  U^\\nu (\\tensor{U}{^0_{,\\nu}} + U^\\lambda \\tensor{\\Gamma}{^0_{\\lambda\\nu}}) +\n  g^{00} \\dot{p} / \\rho\n  \\\\ &=\n  U^0 \\tensor{U}{^0_{,0}} +\n  U^i \\tensor{U}{^0_{,i}} +\n  U^\\nu U^\\lambda \\tensor{\\Gamma}{^0_{\\lambda\\nu}} +\n  g^{00} \\dot{p} / \\rho\n  \\\\ &\\approx\n  \\frac{1}{2} \\dv{v^2}{\\tau} +\n  \\frac{1}{2} v^i \\dv{v^2}{x^i} +\n  U^\\nu U^0 \\tensor{\\Gamma}{^0_{0\\nu}} -\n  (1 + 2\\phi) \\dot{p} / \\rho\n  \\\\ &\\approx\n  \\frac{1}{2} \\dv{v^2}{\\tau} +\n  \\frac{1}{2} v^i \\dv{v^2}{x^i} +\n  U^\\nu U^0 \\tensor{\\Gamma}{^0_{0\\nu}} -\n  \\dot{p} / \\rho\n  \\\\ &\\approx\n  \\frac{1}{2} \\dv{v^2}{\\tau} +\n  \\frac{1}{2} v^i \\dv{v^2}{x^i} +\n  U^\\nu U^0 \\phi_{,\\nu} -\n  \\dot{p} / \\rho\n  \\\\ &\\approx\n  \\frac{1}{2} \\dv{v^2}{\\tau} +\n  \\frac{1}{2} v^i \\dv{v^2}{x^i} +\n  \\dot{\\phi} + v^i \\phi_{,i} -\n  \\dot{p} / \\rho\n\\end{align*}\n\n(c) A metric is static if there exist coordinates such that $\\vec{e}_0$ is timelike, $g_{i0} = 0$, and $g_{\\alpha\\beta,0} = 0$. Show from Equation 7.6 that a static fluid (i.e. 
$U^i = 0$, $p_{,0} = 0$, etc) obeys the relativistic equation of hydrostatic equilibrium (Equation 7.40):\n%\n\\begin{displaymath}\n  p_{,i} + (\\rho + p) \\qty[ \\frac{1}{2} \\ln(-g_{00}) ]_{,i} = 0.\n\\end{displaymath}\n\nWe start by writing out Equation 7.6 as\n%\n\\begin{align*}\n  \\tensor{T}{^{\\mu\\nu}_{;\\nu}} &=\n  [(\\rho + p) U^\\mu U^\\nu]_{;\\nu} + [p g^{\\mu\\nu}]_{;\\nu} =\n  0\n  \\\\ &=\n  [(\\rho + p) U^\\mu] \\tensor{U}{^\\nu_{;\\nu}} +\n  [(\\rho + p) U^\\nu] \\tensor{U}{^\\mu_{;\\nu}} +\n  U^\\nu U^\\mu (\\rho + p)_{,\\nu} +\n  g^{\\mu\\nu} p_{,\\nu} =\n  0\n  \\\\ &=\n  \\tensor{T}{^{00}_{;0}} +\n  \\tensor{T}{^{ij}_{;j}} +\n  \\tensor{T}{^{0i}_{;i}} +\n  \\tensor{T}{^{i0}_{;0}} =\n  0\n  \\\\\n  \\tensor{T}{^{00}_{;0}} &=\n  [(\\rho + p) U^0] \\tensor{U}{^0_{;0}} +\n  [(\\rho + p) U^0] \\tensor{U}{^0_{;0}} +\n  U^0 U^0 (\\rho + p)_{,0} +\n  g^{00} p_{,0}\n  \\\\ &=\n  2 (\\rho + p) U^0 \\tensor{U}{^0_{;0}}\n  \\\\ &=\n  2 (\\rho + p) U^0\n  [\\tensor{U}{^0_{,0}} + U^\\lambda \\tensor{\\Gamma}{^0_{0\\lambda}}]\n  \\\\ &=\n  2 (\\rho + p)\n  [U^0]^2 \\tensor{\\Gamma}{^0_{00}} =\n  0\n  \\\\\n  \\tensor{T}{^{ij}_{;j}} &=\n  [(\\rho + p) U^i] \\tensor{U}{^j_{;j}} +\n  [(\\rho + p) U^j] \\tensor{U}{^i_{;j}} +\n  U^j U^i (\\rho + p)_{,j} +\n  g^{ij} p_{,j}\n  \\\\ &=\n  g^{ij} p_{,j}\n  \\\\\n  \\tensor{T}{^{0i}_{;i}} &=\n  [(\\rho + p) U^0] \\tensor{U}{^i_{;i}} +\n  [(\\rho + p) U^i] \\tensor{U}{^0_{;i}} +\n  U^i U^0 (\\rho + p)_{,i} +\n  g^{0i} p_{,i}\n  \\\\ &=\n  [(\\rho + p) U^0] \\tensor{U}{^i_{;i}} =\n  (\\rho + p) U^0\n  [\\tensor{U}{^i_{,i}} + U^\\lambda \\tensor{\\Gamma}{^i_{i\\lambda}}]\n  \\\\ &=\n  (\\rho + p) [U^0]^2 \\tensor{\\Gamma}{^i_{i0}}\n  \\\\ &=\n  \\frac{1}{2} [U^0]^2 (\\rho + p)\n  g^{i\\alpha} (g_{\\alpha i,0} + g_{\\alpha 0,i} - g_{0i,\\alpha}) =\n  0\n  \\\\\n  \\tensor{T}{^{i0}_{;0}} &=\n  [(\\rho + p) U^i] \\tensor{U}{^0_{;0}} +\n  [(\\rho + p) U^0] \\tensor{U}{^i_{;0}} +\n  U^0 U^i (\\rho + p)_{,0} +\n  g^{i0} p_{,0}\n  \\\\ &=\n  [(\\rho + p) U^0] \\tensor{U}{^i_{;0}} =\n  (\\rho + p) U^0\n  [\\tensor{U}{^i_{,0}} + U^0 \\tensor{\\Gamma}{^i_{00}}]\n  \\\\ &=\n  \\frac{1}{2} (\\rho + p) [U^0]^2\n  g^{i\\alpha} (g_{\\alpha0,0} + g_{\\alpha0,0} - g_{00,\\alpha}) =\n -\\frac{1}{2} (\\rho + p) [U^0]^2 g^{ij} g_{00,j}\n  \\\\ &=\n  \\frac{1}{2} (\\rho + p) g^{ij} g_{00,j} / g_{00} =\n  \\frac{1}{2} (\\rho + p) g^{ij} \\ln(-g_{00})_{,j}\n  \\\\\n  \\tensor{T}{^{\\mu\\nu}_{;\\nu}} &=\n  g^{ij} p_{,j} + \\frac{1}{2} (\\rho + p) g^{ij} \\ln(-g_{00})_{,j} = 0\n  \\\\ &=\n  p_{,j} + (\\rho + p) \\qty[ \\frac{1}{2} \\ln(-g_{00})]_{,j} = 0\n\\end{align*}\n\n\n(d) This suggests that there is a relationship between $g_{00}$ and $\\exp(2\\phi)$ in the case of a static fluid in a Newtonian potential. 
\n\n(d) This suggests that there is a relationship between $g_{00}$ and $\\exp(2\\phi)$ in the case of a static fluid in a Newtonian potential. Show that Equation 7.8 and Exercise 4 are consistent with this.\n\nIn the Newtonian limit, the previous equation is unchanged when replacing $g_{00}$ with $-\\exp(2\\phi)$, as $\\ln(\\exp(2\\phi))_{,i} = 2 \\phi_{,i}$, and\n%\n\\begin{align*}\n  \\ln(-g_{00})_{,i} &=\n  \\ln(1 + 2\\phi)_{,i} =\n  \\frac{(1 + 2\\phi)_{,i}}{1 + 2\\phi}\n  \\\\ &=\n  2 \\phi_{,i} (1 + 2\\phi)^{-1} \\approx\n  2 \\phi_{,i} (1 - 2\\phi) \\approx\n  2 \\phi_{,i}.\n\\end{align*}\n%\nI'm not really sure how to relate this to Exercise 4, as it relates $\\phi_{,\\alpha}$ to four-momentum, while this relates it to pressure and density.\n\n\n\\textbf{7}\nConsider the (i) Minkowski, (ii) Schwarzschild, (iii) Kerr, and (iv) Robertson--Walker metrics.\n\n(a) Find the conserved components $p_\\alpha$ of the four-momentum of a particle in free-fall.\n\nFor this I will use Equation 7.29:\n%\n\\begin{displaymath}\n  m \\dv{p_\\beta}{\\tau} = \\frac{1}{2} g_{\\nu\\alpha,\\beta} p^\\nu p^\\alpha.\n\\end{displaymath}\n%\nWhat this tells us is that if $g_{\\alpha\\beta}$ is independent of $x^\\mu$, then $p_\\mu$ is constant along the trajectory.\n\nFor (i), the metric is independent of all coordinates $(t,x,y,z)$, and so all $p_\\alpha$ are conserved.\n\nFor (ii), the metric depends on coordinates $r$ and $\\theta$, but not $t$ and $\\phi$, so only $p_t$ and $p_\\phi$ are conserved.\n\nFor (iii) we have the same dependencies as (ii).\n\nFor (iv) there is an additional time dependence, and so only $p_\\phi$ is conserved.\n\n\n(b) Use the metric for a flat spacetime in spherical polar coordinates to argue that the Schwarzschild and Robertson--Walker metrics are spherically symmetric.\n\nOur metric in (i) can be expressed in spherical polars as\n%\n\\begin{displaymath}\n  \\dd{s}^2 =\n -\\dd{t}^2 + \\dd{r}^2 + r^2 (\\dd{\\theta}^2 + \\sin^2\\theta \\dd{\\phi}^2).\n\\end{displaymath}\n%\nThe Schwarzschild metric can be obtained from this by multiplying $\\dd{t}^2$ by $(1 - 2 M / r)$, and dividing $\\dd{r}^2$ by it. This newly introduced term only introduces a new radial dependence (the $r^{-1}$ term), not an angular one, so it retains spherical symmetry.\n\nThe Robertson--Walker metric can be obtained by dividing $\\dd{r}^2$ by $(1 - kr^2)$, and then multiplying everything \\emph{except} $\\dd{t}^2$ by $R^2(t)$. Again, the $(1 - kr^2)$ term only introduces a radial dependence in its $r^2$ term, and for a given time $t$, $R^2(t)$ is a constant, so spherical symmetry is retained.\n\n\n(c) For (i') and (ii)--(iv), a geodesic which at one point has $\\theta = \\pi/2$ and $p^\\theta = 0$ (i.e. tangent to the equatorial plane) conserves these quantities.
For (i'), (ii), and (iii), use $\\vec{p} \\cdot \\vec{p} = -m^2$ to find $p^r$ as a function of $m$, other conserved quantities, and known functions of position.\n\n(i')\n%\n\\begin{align*}\n  \\vec{p} \\cdot \\vec{p} &=\n  g_{\\alpha\\beta} p^\\alpha p^\\beta =\n  g_{\\alpha\\alpha} (p^\\alpha)^2 =\n  g_{tt} (p^t)^2 +\n  g_{rr} (p^r)^2 +\n  g_{\\theta\\theta} (\\cancelto{0}{p^\\theta})^2 +\n  g_{\\phi\\phi} (p^\\phi)^2\n  \\\\ &=\n  -(p^t)^2 + (p^r)^2 + r^2 \\cancelto{1}{\\sin^2(\\theta)} (p^\\phi)^2 =\n  -m^2\n  \\\\ \\implies\n  (p^r)^2 &=\n  (p^t)^2 - r^2 (p^\\phi)^2 - m^2 =\n  (p_t)^2 - \\frac{(p_\\phi)^2}{r^2} - m^2\n  \\\\ \\implies\n  p^r &=\n  \\pm\\sqrt{(p_t)^2 - (p_\\phi)^2 / r^2 - m^2},\n\\end{align*}\n%\nwhere we used $p^t = g^{tt} p_t = -p_t$ and $p^\\phi = g^{\\phi\\phi} p_\\phi = p_\\phi / r^2$ (at $\\theta = \\pi/2$).\n\n(ii)\n%\n\\begin{align*}\n  \\vec{p} \\cdot \\vec{p} &=\n  g_{tt} (p^t)^2 +\n  g_{rr} (p^r)^2 +\n  g_{\\phi\\phi} (p^\\phi)^2\n  \\\\ &=\n -(1 - 2M/r) (p^t)^2 +\n  (1 - 2M/r)^{-1} (p^r)^2 +\n  r^2 \\cancelto{1}{\\sin^2\\theta} (p^\\phi)^2 = -m^2\n  \\\\ \\implies\n  (p^r)^2 &=\n  (1 - 2M/r) [ (1 - 2M/r) (p^t)^2 - r^2 (p^\\phi)^2 - m^2 ]\n  \\\\ &=\n  (p_t)^2 - (1 - 2M/r) \\qty[ \\frac{(p_\\phi)^2}{r^2} + m^2 ]\n\\end{align*}\n%\n(iii)\n%\nThis metric gets a bit messy, so I will keep things more abstract. First, I will simplify the metric, utilizing the fact that $\\theta = \\pi/2$.\n%\n\\begin{gather*}\n  \\dd{s}^2 =\n -\\frac{\\Delta - a^2}{r^2} \\dd{t}^2 -\n  2 \\frac{2 M a}{r} \\dd{t} \\dd\\phi +\n  \\frac{(r^2 + a^2)^2 - a^2 \\Delta}{r^2} \\dd\\phi^2 +\n  \\frac{r^2}{\\Delta} \\dd{r}^2 +\n  r^2 \\dd\\theta^2\n  \\\\\n  g_{tt} = -\\frac{\\Delta - a^2}{r^2};\n  \\quad\n  g_{rr} = \\frac{r^2}{\\Delta};\n  \\quad\n  g_{\\theta\\theta} = r^2;\n  \\quad\n  g_{\\phi\\phi} = \\frac{(r^2 + a^2)^2 - a^2 \\Delta}{r^2};\n  \\quad\n  g_{t\\phi} = -\\frac{2 M a}{r},\n  \\\\\n  \\lambda \\equiv\n  a^6 - 2 (D - r^2) a^4 + (r^4 - 4 M^2 r^2 - 2 D r^2 + D^2) a^2 - D r^4\n  \\\\\n  g^{tt} = r^2 (a^4 - (D - 2 r^2) a^2 + r^4) / \\lambda;\n  \\quad\n  g^{rr} = \\frac{D}{r^2};\n  \\quad\n  g^{\\theta\\theta} = \\frac{1}{r^2};\n  \\quad\n  g^{\\phi\\phi} = r^2 (a^2 - D) / \\lambda;\n  \\quad\n  g^{t\\phi} = 2 a M r^3 / \\lambda,\n  \\\\\n  \\vec{p} \\cdot \\vec{p} =\n  g_{tt} (p^t)^2 +\n  g_{rr} (p^r)^2 +\n  g_{\\phi\\phi} (p^\\phi)^2 +\n  2 g_{t\\phi} (p^t p^\\phi) =\n -m^2\n  \\\\\n  p^r =\n  \\pm\\sqrt{\n   -g^{rr}\n    [g_{tt} (p^t)^2 +\n     g_{\\phi\\phi} (p^\\phi)^2 +\n     2 g_{t\\phi} (p^t p^\\phi) +\n     m^2]\n  }\n  \\\\\n  p^t =\n  g^{t\\alpha} p_\\alpha =\n  g^{tt} p_t + g^{t\\phi} p_\\phi\n  \\\\\n  p^\\phi =\n  g^{\\phi\\alpha} p_\\alpha =\n  g^{\\phi\\phi} p_\\phi + g^{t\\phi} p_t\n\\end{gather*}\n\n(d)\n\nWhen $k = 0$, the line element and metric become\n%\n\\begin{gather*}\n  \\dd{s}^2 =\n -\\dd{t}^2 +\n  R^2(t) [ \\dd{r}^2 + r^2 (\\dd\\theta^2 + \\sin^2\\theta \\dd\\phi^2) ]\n  \\\\\n  g_{tt} = -1;\n  \\quad\n  g_{rr} = R^2(t);\n  \\quad\n  g_{\\theta\\theta} = R^2(t) r^2;\n  \\quad\n  g_{\\phi\\phi} = R^2(t) r^2 \\sin^2\\theta.\n\\end{gather*}\n%\nEquation 7.29 with $\\beta = r$ then becomes\n%\n\\begin{displaymath}\n  m \\dv{p_r}{\\tau} =\n  \\frac{1}{2} g_{\\nu\\alpha,r} p^\\nu p^\\alpha =\n  \\frac{1}{2} [ g_{tt,r} (p^t)^2 + g_{rr,r} (p^r)^2 + g_{\\theta\\theta,r} (p^\\theta)^2 + g_{\\phi\\phi,r} (p^\\phi)^2 ].\n\\end{displaymath}\n%\nSince $g_{tt,r} = g_{rr,r} = 0$, only the angular terms survive; for a radially moving particle ($p^\\theta = p^\\phi = 0$) the RHS becomes zero, and so\n%\n\\begin{displaymath}\n  m \\dv{p_r}{\\tau} =\n  0 \\implies\n  \\text{$p_r$ is conserved}.\n\\end{displaymath}\n
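\n(Aside: plugging numbers into the Schwarzschild expression above is a convenient sanity check. A small Python sketch, with made-up values for the conserved quantities $p_t$ and $p_\\phi$:)\n%\n\\begin{verbatim}\nimport math\n\ndef pr_squared(r, M, pt, pphi, m):\n    # (p^r)^2 = (p_t)^2 - (1 - 2M/r) [ (p_phi)^2 / r^2 + m^2 ],\n    # from p.p = -m^2 in the equatorial plane (G = c = 1)\n    f = 1 - 2*M/r\n    return pt**2 - f*(pphi**2 / r**2 + m**2)\n\nM, r, m = 1.0, 10.0, 1.0     # hypothetical numbers\npt, pphi = -1.05, 3.8        # conserved energy E = -p_t, angular momentum L = p_phi\nval = pr_squared(r, M, pt, pphi, m)\nprint(math.sqrt(val) if val >= 0 else 'r is a turning point')\n\\end{verbatim}\n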
\n\n\\textbf{8}\nFor a coordinate system where $g_{\\alpha\\beta,\\mu} = 0$:\n\n(a) Show that $\\tensor{T}{^\\nu_{\\mu;\\nu}} = 0$ becomes\n%\n\\begin{displaymath}\n  \\frac{1}{\\sqrt{-g}} (\\sqrt{-g} \\tensor{T}{^\\nu_\\mu})_{,\\nu} = 0.\n\\end{displaymath}\n\nFor this, I will make mathematicians cry, and go from the \\emph{solution} backwards to the starting point. So I expand the final expression, first using the Leibniz rule:\n%\n\\begin{displaymath}\n  \\tensor{T}{^\\nu_{\\mu,\\nu}} +\n  \\frac{(\\sqrt{-g})_{,\\nu}}{\\sqrt{-g}} \\tensor{T}{^\\nu_\\mu} =\n  0,\n\\end{displaymath}\n%\nand then using Equation 6.40:\n%\n\\begin{displaymath}\n  \\tensor{T}{^\\nu_{\\mu,\\nu}} +\n  \\tensor{\\Gamma}{^\\alpha_{\\alpha\\nu}} \\tensor{T}{^\\nu_\\mu} =\n  0.\n\\end{displaymath}\n%\nJust pretend I did that backwards. Next I expand $\\tensor{T}{^\\nu_{\\mu;\\nu}}$, to show that the above expression makes it zero.\n%\n\\begin{align*}\n  \\tensor{T}{^\\nu_{\\mu;\\nu}} &=\n  \\tensor{T}{^\\nu_{\\mu,\\nu}} +\n  \\tensor{T}{^\\alpha_\\mu} \\tensor{\\Gamma}{^\\nu_{\\alpha\\nu}} -\n  \\tensor{T}{^\\nu_\\alpha} \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}}\n  \\\\ &=\n  \\tensor{T}{^\\nu_{\\mu,\\nu}} +\n  \\tensor{T}{^\\nu_\\mu} \\tensor{\\Gamma}{^\\alpha_{\\nu\\alpha}} -\n  \\tensor{T}{^\\nu_\\alpha} \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}}.\n\\end{align*}\n%\nNote that the positive terms are just the expression from before, which we showed was zero, so we're left with\n%\n\\begin{displaymath}\n  \\tensor{T}{^\\nu_{\\mu;\\nu}} =\n  -\\tensor{T}{^\\nu_\\alpha} \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}}.\n\\end{displaymath}\n%\nNow we expand this\n%\n\\begin{align*}\n  \\tensor{T}{^\\nu_{\\mu;\\nu}} &=\n  -\\frac{1}{2} \\tensor{T}{^\\nu_\\alpha} g^{\\alpha\\beta}\n  (g_{\\beta\\mu,\\nu} + \\cancel{g_{\\beta\\nu,\\mu}} - g_{\\mu\\nu,\\beta})\n  \\\\ &=\n  -\\frac{1}{2} \\tensor{T}{^{\\nu\\beta}}\n  (g_{\\beta\\mu,\\nu} - g_{\\mu\\nu,\\beta}) =\n  -\\frac{1}{2} \\tensor{T}{^{(\\nu\\beta)}}\n  A_{[\\nu\\beta]\\mu} =\n  0.\n\\end{align*}\n\n(b) Suppose $T^{\\alpha\\beta}$ is zero except in a bounded region of the space-like hypersurface $x^0 =$ constant. Show that Equation 7.41 implies that\n%\n\\begin{displaymath}\n  \\int_{x^0 = \\mathrm{const}} \\tensor{T}{^\\nu_\\mu} \\sqrt{-g} n_\\nu \\dd[3]{x}\n\\end{displaymath}\n%\ndoes not depend on $x^0$, so long as $n_\\nu$ is the unit normal to the hypersurface.\n\nUsing Equation 7.41 and the differential in Equation 6.18, we take the integral\n%\n\\begin{displaymath}\n  \\int {\n    \\frac{1}{\\sqrt{-g}} (\\sqrt{-g} \\tensor{T}{^\\nu_\\mu})_{,\\nu} \\sqrt{-g}\n  } \\dd[4]{x} =\n  \\int {\n    (\\sqrt{-g} \\tensor{T}{^\\nu_\\mu})_{,\\nu}\n  } \\dd[4]{x},\n\\end{displaymath}\n%\nwhich vanishes by Equation 7.41. Now we use Equation 6.44:\n%\n\\begin{align*}\n  0 =\n  \\int {\n    (\\sqrt{-g} \\tensor{T}{^\\nu_\\mu})_{,\\nu}\n  } \\dd[4]{x} &=\n  \\oint {\n    \\sqrt{-g} n_\\nu \\tensor{T}{^\\nu_\\mu}\n  } \\dd[3]{S}.\n\\end{align*}\n%\nTake the $4$-volume to be the region between the two hypersurfaces $x^0 = a$ and $x^0 = b$. Since $T^{\\alpha\\beta}$ vanishes outside a bounded region, the side faces of the boundary contribute nothing, and only the two slice integrals remain, with oppositely directed normals, so\n%\n\\begin{displaymath}\n  \\int_{x^0 = b} \\sqrt{-g} n_\\nu \\tensor{T}{^\\nu_\\mu} \\dd[3]{x} =\n  \\int_{x^0 = a} \\sqrt{-g} n_\\nu \\tensor{T}{^\\nu_\\mu} \\dd[3]{x},\n\\end{displaymath}\n%\nand the integral does not depend on $x^0$.\n\n(c) Now consider flat Minkowski space with a global inertial frame in spherical polar coordinates. Show that, from part (b), we have\n%\n\\begin{displaymath}\n  J =\n  \\int_{t=\\mathrm{const}} {\n    \\tensor{T}{^0_\\phi} r^2 \\sin\\theta\n  } \\dd{r} \\dd\\theta \\dd\\phi,\n\\end{displaymath}\n%\nwhich is independent of $t$. This is the system's total angular momentum.\n\nSince we are in flat Minkowski space, the unit-normal one form has components $\\tilde{n} \\to (1, 0, 0, 0)$, so only the $\\tensor{T}{^0_\\mu}$ term is retained.
We also have $x^0 \\to t$, so we can write the expression from (b) as\n%\n\\begin{displaymath}\n  \\int_{t = \\mathrm{const}} \\sqrt{-g} \\tensor{T}{^0_\\mu} \\dd[3]{x}.\n\\end{displaymath}\n%\nWe also know that $\\sqrt{-g} \\dd[3]{x}$ in spherical polars is $r^2 \\sin\\theta \\dd{r} \\dd\\theta \\dd\\phi$, so we can write this as\n%\n\\begin{displaymath}\n  \\int_{t = \\mathrm{const}} {\n    \\tensor{T}{^0_\\mu} r^2 \\sin\\theta\n  } \\dd{r} \\dd\\theta \\dd\\phi.\n\\end{displaymath}\n%\nTaking the $\\phi$ component of $\\tensor{T}{^0_\\mu}$, we get something which we call $J$:\n%\n\\begin{displaymath}\n  J =\n  \\int_{t = \\mathrm{const}} {\n    \\tensor{T}{^0_\\phi} r^2 \\sin\\theta\n  } \\dd{r} \\dd\\theta \\dd\\phi.\n\\end{displaymath}\n\n(d) Now express the previous integral in terms of the components of $T^{\\alpha\\beta}$ on the Cartesian basis, ultimately arriving at\n%\n\\begin{displaymath}\n  J =\n  \\int {\n    (x T^{y0} - y T^{x0})\n  } \\dd{x} \\dd{y} \\dd{z}\n\\end{displaymath}\n\nBelow, $\\tensor{\\Lambda}{^\\alpha_\\phi} = \\pdv{x^\\alpha}{\\phi}$, and from the third line on the Jacobian $r^2 \\sin\\theta$ is absorbed into the Cartesian volume element $\\dd[3]{x} = \\dd{x} \\dd{y} \\dd{z} = r^2 \\sin\\theta \\dd{r} \\dd\\theta \\dd\\phi$.\n\n\\begin{align*}\n  J &=\n  \\int_{t = \\mathrm{const}} {\n    \\tensor{T}{^0_\\phi} r^2 \\sin\\theta\n  } \\dd{r} \\dd\\theta \\dd\\phi\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} {\n    \\tensor{\\Lambda}{^\\alpha_\\phi} \\tensor{T}{^0_\\alpha} r^2 \\sin\\theta\n  } \\dd{r} \\dd\\theta \\dd\\phi\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} (\n    \\tensor{\\Lambda}{^x_\\phi} \\tensor{T}{^0_x} +\n    \\tensor{\\Lambda}{^y_\\phi} \\tensor{T}{^0_y} +\n    \\tensor{\\Lambda}{^z_\\phi} \\tensor{T}{^0_z}\n  ) \\dd[3]{x}\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} (\n    (-r \\sin\\theta \\sin\\phi) \\tensor{T}{^0_x} +\n    ( r \\sin\\theta \\cos\\phi) \\tensor{T}{^0_y} +\n    (0) \\tensor{T}{^0_z}\n  ) \\dd[3]{x}\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} (\n    x \\tensor{T}{^0_y} -\n    y \\tensor{T}{^0_x}\n  ) \\dd[3]{x}\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} (\n    \\eta_{yy} x \\tensor{T}{^{0y}} -\n    \\eta_{xx} y \\tensor{T}{^{0x}}\n  ) \\dd[3]{x}\n  \\\\ &=\n  \\int_{t = \\mathrm{const}} (\n    x \\tensor{T}{^{0y}} -\n    y \\tensor{T}{^{0x}}\n  ) \\dd[3]{x},\n\\end{align*}\n%\nwhich matches the stated form, since $T^{\\alpha\\beta}$ is symmetric.\n\n\n\\textbf{10}\n\n(a) Show that if the vector field $\\xi^\\alpha$ satisfies Killing's equation,\n%\n\\begin{displaymath}\n  \\grad_\\alpha \\xi_\\beta + \\grad_\\beta \\xi_\\alpha = 0,\n\\end{displaymath}\n%\nthen $p^\\alpha \\xi_\\alpha$ is constant along a geodesic.\n\nFor $p^\\alpha \\xi_\\alpha$ to be constant along a geodesic, we need $p^\\beta (p^\\alpha \\xi_\\alpha)_{;\\beta} = 0$; since the geodesic equation gives $p^\\beta \\tensor{p}{^\\alpha_{;\\beta}} = 0$, it suffices to show that $p^\\beta p^\\alpha \\xi_{\\alpha;\\beta} = 0$ follows from Killing's equation.\n\nKilling's equation can be rewritten as\n%\n\\begin{displaymath}\n  \\xi_{\\beta;\\alpha} + \\xi_{\\alpha;\\beta} = 0 \\implies\n  \\xi_{\\beta;\\alpha} = -\\xi_{\\alpha;\\beta}.\n\\end{displaymath}\n%\nContracting with the symmetric factor $p^\\alpha p^\\beta$,\n%\n\\begin{displaymath}\n  p^\\beta p^\\alpha \\xi_{\\beta;\\alpha} = -p^\\beta p^\\alpha \\xi_{\\alpha;\\beta},\n\\end{displaymath}\n%\nand relabelling the dummy indices $\\alpha \\leftrightarrow \\beta$ on the left turns it into $p^\\beta p^\\alpha \\xi_{\\alpha;\\beta}$, so this quantity equals its own negative and must vanish. And there we have it!\n\n(b) Find ten Killing fields for Minkowski spacetime.\n\nSince the basis vectors in Minkowski spacetime are all constant, $\\grad_\\beta \\vec{e}_\\alpha = 0$, and so we get four from $\\vec{e}_t$, $\\vec{e}_x$, $\\vec{e}_y$, $\\vec{e}_z$. According to part (c), we get a Killing field from any \\emph{constant} linear combination of these four, and so from that we may create an infinity of Killing fields. Schutz's solutions manual also lists expressions such as $x \\vec{e}_t - t \\vec{e}_x$ as Killing fields, which are linear combinations, but the coefficients are non-constant.
I give an attempted derivation below, although at the very last step it turns out not to work, and I pretend it does anyway. I claim that the general form of Schutz's expressions is: $x^\\alpha \\vec{e}_\\beta - x^\\beta \\vec{e}_\\alpha$.\n\n\\begin{align*}\n  \\grad_\\alpha (x^\\alpha \\vec{e}_\\beta) -\n  \\grad_\\alpha (x^\\beta \\vec{e}_\\alpha) +\n  \\grad_\\beta (x^\\alpha \\vec{e}_\\beta) -\n  \\grad_\\beta (x^\\beta \\vec{e}_\\alpha)\n  &=\n  \\tensor{x}{^\\alpha_{;\\alpha}} \\vec{e}_\\beta -\n  \\tensor{x}{^\\beta_{;\\alpha}} \\vec{e}_\\alpha +\n  \\tensor{x}{^\\alpha_{;\\beta}} \\vec{e}_\\beta -\n  \\tensor{x}{^\\beta_{;\\beta}} \\vec{e}_\\alpha\n  \\\\ &=\n  \\vec{e}_\\beta -\n  \\vec{e}_\\alpha -\n  \\tensor{x}{^\\beta_{,\\alpha}} \\vec{e}_\\alpha +\n  \\tensor{x}{^\\alpha_{,\\beta}} \\vec{e}_\\beta\n  \\\\ &=\n  \\vec{e}_\\beta -\n  \\vec{e}_\\alpha -\n  \\tensor{\\Lambda}{^\\beta_\\alpha} \\vec{e}_\\alpha +\n  \\tensor{\\Lambda}{^\\alpha_\\beta} \\vec{e}_\\beta\n  \\\\ &\n  \\text{(magnets at work here)}\n  \\\\ &=\n  \\vec{e}_\\beta -\n  \\vec{e}_\\alpha -\n  \\vec{e}_\\beta +\n  \\vec{e}_\\alpha =\n  0\n\\end{align*}\n\n\n(c) Prove that any \\emph{constant} linear combination of two Killing fields $\\vec\\xi$ and $\\vec\\eta$ is itself a Killing field.\n\n\\begin{align*}\n  &\n  \\grad_\\mu \\xi_\\nu + \\grad_\\nu \\xi_\\mu = 0\n  \\\\\n  &\n  \\grad_\\mu \\eta_\\nu + \\grad_\\nu \\eta_\\mu = 0\n  \\\\ &\n  \\grad_\\mu (\\alpha \\xi_\\nu + \\beta \\eta_\\nu) +\n  \\grad_\\nu (\\alpha \\xi_\\mu + \\beta \\eta_\\mu)\n  \\\\ =&\n  \\alpha \\grad_\\mu \\xi_\\nu + \\beta \\grad_\\mu \\eta_\\nu +\n  \\alpha \\grad_\\nu \\xi_\\mu + \\beta \\grad_\\nu \\eta_\\mu\n  \\\\ =&\n  \\alpha (\\grad_\\mu \\xi_\\nu + \\grad_\\nu \\xi_\\mu) +\n  \\beta (\\grad_\\mu \\eta_\\nu + \\grad_\\nu \\eta_\\mu) =\n  0\n\\end{align*}\n\n\n(d)\nShow that the Lorentz transforms of the fields in (b) are also Killing fields.\n\nApplying a Lorentz transform $\\tensor{\\Lambda}{^\\mu_\\nu}$ we get the expression $\\tensor{\\Lambda}{^\\mu_\\nu} \\qty(x^\\alpha \\vec{e}_\\beta - x^\\beta \\vec{e}_\\alpha)$.\n%\n\\begin{align*}\n  &\n  \\grad_\\alpha\n  [\n    \\tensor{\\Lambda}{^\\mu_\\nu}\n    (x^\\alpha \\vec{e}_\\beta - x^\\beta \\vec{e}_\\alpha)\n  ] +\n  \\grad_\\beta\n  [\n    \\tensor{\\Lambda}{^\\mu_\\nu}\n    (x^\\beta \\vec{e}_\\alpha - x^\\alpha \\vec{e}_\\beta)\n  ]\n  \\\\ =&\n  \\tensor{\\Lambda}{^\\mu_{\\nu;\\alpha}}\n  [\n    (x^\\alpha \\vec{e}_\\beta - x^\\beta \\vec{e}_\\alpha) +\n    (x^\\beta \\vec{e}_\\alpha - x^\\alpha \\vec{e}_\\beta)\n  ]\n  \\\\ =&\n  \\tensor{\\Lambda}{^\\mu_{\\nu;\\alpha}}\n  [\n    x^\\alpha \\vec{e}_\\beta - x^\\alpha \\vec{e}_\\beta +\n    x^\\beta \\vec{e}_\\alpha - x^\\beta \\vec{e}_\\alpha\n  ] = 0\n\\end{align*}\n\n\n(e) Use the results in Exercise 7(a) to find Killing vectors for the non-Minkowski metrics listed in (ii)--(iv).\n\n(ii) Since the conserved quantities are $p_t$ and $p_\\phi$, then the Killing fields are any constant linear combinations or Lorentz transforms of $\\vec{e}_t$, $\\vec{e}_\\phi$, and $\\phi \\vec{e}_t - t \\vec{e}_\\phi$.\n\n(iii) Same as (ii).\n\n(iv) Only $p_\\phi$ is conserved, so any constant multiple of $\\vec{e}_\\phi$ is a Killing field.\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "b622195b9efdc93cba04a473612e653641196514", "size": 22858, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/textbook/tex/gr-ch7-notes.tex", "max_stars_repo_name": "dwysocki/ASTP-760", "max_stars_repo_head_hexsha": 
"da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/textbook/tex/gr-ch7-notes.tex", "max_issues_repo_name": "dwysocki/ASTP-760", "max_issues_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/textbook/tex/gr-ch7-notes.tex", "max_forks_repo_name": "dwysocki/ASTP-760", "max_forks_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.3156498674, "max_line_length": 749, "alphanum_fraction": 0.5703911103, "num_tokens": 9493, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7549149868676284, "lm_q1q2_score": 0.5967831851369796}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{indentfirst}\n\\usepackage{amssymb}\n\\usepackage{fancyhdr}\n\\usepackage[margin=1in]{geometry}\n\\title{LX331: Assignment 4}\n\\author{Duy Nguyen}\n\\date{3 March 2017}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Nguyen \\thepage}\n\n\\begin{document}\n\\maketitle\n\n\\section{Exploring semantic ambiguity with propositional logic}\n\nSo in (3a) we have the ambiguous sentence: \\textit{I didn\u2019t go to Phonetics \\textbf{or} Syntax 1 today}.\n\nIn this sentence, let  $p =$ I went to Phonetics today and $q =$ I went to Syntax 1 today, then we can write our interpretations of the sentence in propositional logic as follow:\n\n(3b) $\\sim (p \\lor q)$\n\n(3c) $\\sim (p \\oplus q)$\n\\begin{center}\n\\begin{tabular}{cc|cc}\n    $p$ & $q$ &  $\\sim (p \\lor q)$ & $\\sim (p \\oplus q)$ \\\\ \\hline\n    T & T & F & T \\\\\n    T & F & F & F \\\\\n    F & T & F & F \\\\\n    \\textbf{F} & \\textbf{F} & \\textbf{T} & \\textbf{T} \\\\\n\\end{tabular}\n\\end{center}\n\nNote here that we can see (3b) entails (3c) as there is no row that $\\sim (p \\lor q)$ is true and $\\sim (p \\oplus q)$ is false. The entailment relationship between (3b) and (3c) show that there is not really two different meaning of \\textit{or}.\n\n\\section{The conversational implicatures of \\textit{or}-sentences}\n\nWith $p$ = You have small children, $q$ = You need special assistance, and $r$ = You may board the flight early; we have:\n\n(1) If you have small children, or you need special assistance, then you may board the flight early.\n\n$ p \\lor q \\rightarrow r$\n\n(7) If you have small children, and you need special assistance, then you may board the flight early.\n\n$ p \\& q \\rightarrow r$\n\\begin{center}\n\\begin{tabular}{ccc|cc|cc}\n    $p$     & $q$     & $r$   &  $p \\lor q$ &  $(p \\lor q) \\rightarrow r$          & $p \\& q$  & $(p \\& q) \\rightarrow r$               \\\\ \\hline\n    T       & T       & T     & T           & \\textbf{T}                                     & T         & \\textbf{T}                    \\\\\n    T       & T       & F     & T           & F                                     & T         & F                    \\\\\n    T       & F       & T     & T           & \\textbf{T}                                     & F         & \\textbf{T}                    \\\\\n    T       & F       & F     & T           & F                                     & F         & T                    \\\\\n    F       & T       & T     & T           & \\textbf{T}                                     & F         & \\textbf{T}                    \\\\\n    F       & T       & F     & T           & F                                     & F         & T                    \\\\\n    F       & F       & T     & F           & \\textbf{T}                                     & F         & \\textbf{T}                    \\\\\n    F       & F       & F     & F           & \\textbf{T}                                     & F         & \\textbf{T}                    \\\\\n\\end{tabular}\n\\end{center}\n\nFrom the truth table, we can see that (1) asymmetrically entails (7). This make sense, as the category that gets to board the flight early are: a. have small children; b. need special assistance; and c. both have small children and need special assistance. Thus the meaning of and both is include in (1). \n\nHowever, the construction like (1) and (7) will not work for the exclusive or meaning. That is an utterance of (4) will not entails (6). 
\n\n\\section{Presuppositions vs. Entailments}\n\n\\subsection*{(8)} From an utterance of (8a), we can easily see that it contains more information than either (8b) or (8c). We can see this through the redundancy test, where:\n\n\\# The woman who murdered Arturo was arrested. In fact, she was arrested.\n\n\\# The woman who murdered Arturo was arrested. In fact, she murdered Arturo.\n\nTherefore, sentence \\textbf{(8a) entails both (8b) and (8c)}.\n\nTo test for presupposition, we construct the s-family of (8a).\n\\begin{center}\n\\begin{tabular}{r|l}\n    $S$ & The woman who murdered Arturo was arrested. \\\\\n    $\\sim S$ & The woman who murdered Arturo was not arrested. \\\\\n    $S?$ & Was the woman, who murdered Arturo, arrested? \\\\\n    $if-S$ & If the woman, who murdered Arturo, was arrested, \\\\\n    & the victim's lawyer will press charges. \\\\\n    $maybe-S$ & Perhaps the woman, who murdered Arturo, was arrested.\\\\\n\\end{tabular}\n\\end{center}\nLooking at the s-family, we cannot be sure whether the woman was arrested, but we can say with much more confidence that Arturo was murdered, possibly by a woman. Therefore \\textbf{(8a) presupposes (8c) and does not presuppose (8b)}.\n\n\\subsection*{(9)} From an utterance of (9a), the listener will know two pieces of information: the woman was arrested, and Arturo was murdered. We again use the redundancy test here:\n\n\\# The woman who was arrested murdered Arturo. In fact, she was arrested.\n\n\\# The woman who was arrested murdered Arturo. In fact, she murdered Arturo.\n\nBoth show redundancy; thus, we can say that \\textbf{(9a) entails (9b) and (9c)}.\n\nLet's construct the s-family for (9a).\n\\begin{center}\n\\begin{tabular}{r|l}\n    $S$ & The woman who was arrested murdered Arturo. \\\\\n    $\\sim S$ & The woman who was arrested didn't murder Arturo. \\\\\n    $S?$ & Did the woman, who was arrested, murder Arturo? \\\\\n    $if-S$ & If the woman, who was arrested, murdered Arturo, \\\\\n    & then how did she escape prison in the first place? \\\\\n    $maybe-S$ & Perhaps the woman, who was arrested, murdered Arturo.\\\\\n\\end{tabular}\n\\end{center}\nIn contrast with (8), here we are not sure that Arturo was murdered. However, one thing that stays constant is that the woman was arrested. Thus we can see that \\textbf{(9a) presupposes (9b) but not (9c)}.\n\nFrom here we can also draw a general observation: the information contained in the relative clause survives across the s-family and serves as a presupposition trigger.\n\n\\subsection*{(10)} The utterance of (10a) tells us two pieces of information: John's baldness and his children's baldness. Let's try the non-deniability test on (10b) and (10c).\n\n\\# John is bald, and John's children are bald too. But he doesn't have any children.\n\n\\# John is bald, and John's children are bald too. But it's not true that a member of his family is bald.\n\nBoth show contradictions; thus we can say that \\textbf{(10a) entails (10b) and (10c)}.\n\nLet's construct the s-family for (10a).\n\\begin{center}\n\\begin{tabular}{r|l}\n    $S$         & John is bald, and John's children are bald too.       \\\\\n    $\\sim S$    & It's not true that John is bald, and his children are also bald.    \\\\\n    $S?$        & Is it true that John is bald and his children are also bald?
  \\\\\n    $if-S$      & If it's true that John is bald and his children are also bald,      \\\\\n                & then it's probably genetic.     \\\\\n    $maybe-S$   & Maybe John is bald and his children are also bald.\\\\\n\\end{tabular}\n\\end{center}\nHere, we are not too sure whether John or his children are bald, but we can tell that he has children. Thus we can say that \\textbf{(10a) presupposes (10b) but not (10c)}.\n\n\\subsection*{(11)} As with (8) and (9), we now switch some elements of the utterance around. We can tell that the entailments still hold, since we can reuse our non-deniability test.\n\n\\# John has children, and John\u2019s children are bald. But he doesn't have any children.\n\n\\# John has children, and John\u2019s children are bald. But it's not true that a member of his family is bald.\n\nTherefore \\textbf{(11a) entails (11b) and (11c)}.\n\nBefore we move on to constructing the s-family, we should note that in (11), we don't know whether John is bald. \n\\begin{center}\n\\begin{tabular}{r|l}\n    $S$         & John has children, and John's children are bald.       \\\\\n    $\\sim S$    & It's not true that John has children, and his children are bald.    \\\\\n    $S?$        & Is it true that John has children and his children are bald?     \\\\\n    $if-S$      & If it's true that John has children and his children are bald,      \\\\\n                & then it's still probably genetic.     \\\\\n    $maybe-S$   & Maybe John has children and his children are bald.\\\\\n\\end{tabular}\n\\end{center}\nThis case is interesting. As with the other sentences, one of the facts of the sentence becomes uncertain: whether John has children. However, in this case, $\\sim S$ in the s-family is semantically unsound: if John doesn't have children, then who is bald?\n\nI think in this case, if the utterance doesn't presuppose (11b) \\textit{John has children}, then it cannot presuppose (11c) \\textit{John's children are bald}. I think this is because (11c) depends on (11b), as (11c) entails (11b). Upon hearing \\textit{John's children ...} the listener will automatically understand (11b) \\textit{John has children}.
\n\nTherefore, I think \\textbf{(11a) does not presuppose (11b) or (11c)}.\n\n\\end{document}\n", "meta": {"hexsha": "7440b314c2511fb0d155600899f8e0c080fcbf40", "size": 9024, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "unnatural_rubber/hw/LX/LX331_A4.tex", "max_stars_repo_name": "zuik/stuff", "max_stars_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "unnatural_rubber/hw/LX/LX331_A4.tex", "max_issues_repo_name": "zuik/stuff", "max_issues_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "unnatural_rubber/hw/LX/LX331_A4.tex", "max_forks_repo_name": "zuik/stuff", "max_forks_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.049689441, "max_line_length": 350, "alphanum_fraction": 0.6264406028, "num_tokens": 2448, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303137346446, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5967831814114513}}
{"text": "% !TeX root = ../main.tex\n\\section{The Integration of Functions into Logic Programming}\nIn this section we give a simple implementation of the amalgamation of functional and logic programming languages, for a more detailed and extended discussion the reader can see \\cite{HANUS1994583}. The traditional approach to study a logic programming  is to define an \\textit{operational semantics} of an abstract interpreter for the language. This interpreter is an idealised machine that performs the most simple and (sometimes) atomic\\footnote{Example of atomic computation step of an idealised machine is the motion of the \\textit{reading head} of the Turing Machine, defined by Alan Turing.} operations called \\textit{computation}. The definition of computation is the core of the language because it says how one can reason about the behaviour, power and limitation of the machine itself. It is by a mathematical definition that we can predict properties of machines and study the \\textit{soundness} and \\textit{completeness} of this idealised interpreter.\n\nThe interest in the fusion of logic programming and functional programming language have the advantages from both the functional and logic point of view. In comparison with pure functional languages, functional logic languages have more expressive power due to the availability of features like function inversion, partial data structures, and logical variables. In comparison with pure logical languages, functional logic languages have a more efficient operational behaviour since functions allow a more deterministic evaluation principles than pure logic predicates. These two principal consideration are the main initial motivation for integrating these two types of languages.\n\nThe integration of functions into logic programming is very simple from a syntactic point of view. For this purpose, we have to extend a logic programming by the addition:\n\\begin{enumerate}\n\t\\item A method to define new functions.\n\t\\item A possibility to use these functions inside program clauses.\n\\end{enumerate}\n\nIn pure Prolog systems the only \\textit{equality predicate} was the built-in syntactic equality. It is by adding the support for terms to be equal \\textit{modulo} some equality predicate without being syntactically equal that one can define functional logic programs. For example, the following equational program (equivalent to the TRS defined in the Example \\ref{example:app-concat-standard-narrowing})\n\n\\begin{align*}\n    \\appFunc{\\nilList}{x} &= x. \\\\\n    \\appFunc{x \\cdot y}{z} &= x \\cdot \\appFunc{y}{z}.\n\\end{align*}\nUsing this clauses for equality, we can prove that the term $\\appFunc{[1,2]}{[3]}$ is equal to $[1,2,3]$.\n\nIn order to give the precise semantics of functional logic programs and to fix some basic notions of logic programming we give some definitions below, for an introduction to logic programming one is refereed to the classic book \\cite{Clocksin:1984:PP:2343}.\n\n\\begin{definition}\n    Let $\\mathcal{P}$ be a set of \\textit{predicate symbols} including the binary equality predicate $=$. A literal $p(t_1, \\dots, t_n)$ consists of an n-ary predicate symbol applied to $n$ argument terms. An logical equation is a literal with $=$ as predicate symbol.\n\\end{definition}\n\n\\begin{definition}\n    A clause has the form\n    $$L_0 \\leftarrow L_1, \\dots, L_n$$\n    $(n \\geq 0)$, where $L_0, L_1, \\dots, L_n$ are literals. 
\n\nIn order to give the precise semantics of functional logic programs and to fix some basic notions of logic programming, we give some definitions below; for an introduction to logic programming one is referred to the classic book \\cite{Clocksin:1984:PP:2343}.\n\n\\begin{definition}\n    Let $\\mathcal{P}$ be a set of \\textit{predicate symbols} including the binary equality predicate $=$. A literal $p(t_1, \\dots, t_n)$ consists of an n-ary predicate symbol applied to $n$ argument terms. A logical equation is a literal with $=$ as predicate symbol.\n\\end{definition}\n\n\\begin{definition}\n    A clause has the form\n    $$L_0 \\leftarrow L_1, \\dots, L_n$$\n    $(n \\geq 0)$, where $L_0, L_1, \\dots, L_n$ are literals. It is called a (conditional) equation if $L_0$ is an equation, and an unconditional equation if $L_0$ is an equation and $n = 0$.\n\\end{definition}\n\n\\begin{definition}\n    A \\textit{program objective} $G$ is a logic clause of the form:\n    $$\\leftarrow B_1, \\dots, B_m$$\n\\end{definition}\n\nA clause $C$ is a \\textit{variant} of another clause $D$ if it is an instance of $D$ by some variable renaming substitution $\\theta$, i.e. $C = D \\theta$.\n\n\\begin{definition}\n    A \\textit{functional logic program} $P$ is a finite set of clauses.\n\\end{definition}\n\nNote that if we have to evaluate a function applied to ground terms during a unification in a functional logic program, we can simply evaluate this function call as in functional languages, by applying appropriate rules to this call. This is called a rewrite step. But if the objective contains free variables, one needs to instantiate these variables to some terms and then perform a rewrite step. This is called \\textit{narrowing}, as we have seen earlier in this report.\n\nThis introduces the need for a narrowing-based strategy for the evaluation of functions defined inside a logic programming language. We have already established the conditions under which this strategy is indeed a terminating procedure for the equational theory defined by these equations (in Proposition \\ref{proposition:narrowing-as-E-unification}).\n\nThe computation in pure Prolog languages is by means of $SLD$-resolution. This is a complete and sound operational semantics for pure Prolog systems. This evaluation rule uses standard unification for solving the resolution of objectives with free variables. In order to extend logic programs we need to be able to reason with equations. This is done by incorporating a narrowing-based unification procedure into the resolution step.\n\nWe now see one example of this type of integration. (From the survey of Michael Hanus, see \\cite{HANUS1994583}.)\n\n\\subsection{Constructor-Based Programs}\nIn order to implement a functional logic language based on basic narrowing, we have to manage the set of basic positions in program clauses, and we try to apply all rules at all basic positions in each step. That yields a highly non-deterministic execution principle. On the other hand, pure functional languages deterministically select the position where the rules are applied next (innermost positions for \\textit{eager evaluation} languages and outermost positions for \\textit{lazy evaluation} languages). An approach to achieve a similar strategy for functional logic languages is the partition of the signature of the program into a disjoint set $\\constructors$ of constructors and a set $\\defFunctions$ of defined functions.\n\nConstructors are used to build data types, whereas defined functions operate on these data types. Constructor terms are always in normal form, whereas defined functions are defined by equational rules. This enables us to use narrowing techniques in the evaluation strategy. We call a term \\textit{innermost} if it has the form $f(t_1, \\dots, t_n)$, with $f \\in \\defFunctions$ and $t_1, \\dots, t_n$ terms built from $\\var$ and $\\constructors$. A functional logic program is \\textit{constructor-based} if the left-hand side of each rule is an innermost term.\n\nIn constructor-based functional logic programs, we can solve equations by an innermost narrowing strategy, as we have already established earlier in the report.
To define an evaluation principle for this kind of functional logic program, one needs to take care of some issues. For instance, innermost evaluation gives rise to an incomplete semantics, as the example below demonstrates.\n\\begin{example}\n    Consider the following rules where $a$ is a constructor:\n    \\begin{align*}\n        f(x) &= a. \\\\\n        g(a) &= a.\n    \\end{align*}\n    Since $f$ is a constant function mapping all inputs to $a$, the identity substitution $[]$ is a solution of the equation $f(g(x)) = a$. However, the only innermost narrowing derivation is\n    $$f(g(x)) = a \\narrow_{[x/a]} f(a) = a \\narrow_{[]} a = a \\narrow \\top$$\n    i.e. innermost narrowing computes only the more specific solution $[x / a]$.\n\\end{example}\n\nFor the operational semantics of constructor-based programs to be complete, one needs to impose several conditions on the form of the functions defined by the clauses. The most important one is: innermost narrowing evaluation is complete if all functions are totally defined, i.e. the only irreducible ground terms are constructor terms. The next example (\\cite{HANUS1994583}) shows the incompleteness of innermost narrowing in the presence of partial functions.\n\n\\begin{example}\n    Consider the following rules, where $a$ and $b$ are constructors:\n    \\begin{align*}\n        f(a, Z) &= a. \\\\\n        g(b) &= b.\n    \\end{align*}\n    If we want to solve the equation $f(X, g(X)) = a$, then there is the successful narrowing derivation\n    $$f(X, g(X)) = a \\narrow_{[X / a]} a = a$$\n    by applying the first rule to the term $f(X, g(X))$. However, this derivation is not innermost, and the only innermost narrowing derivation is not successful:\n    $$f(X,g(X)) = a \\narrow_{[X/b]} f(b,b) = a$$\n    Therefore, innermost narrowing cannot compute the solution.\n\\end{example}\n\n\\begin{theorem}[\\cite{HANUS1994583}]\n    If $E$ is a finite set of constructor-based unconditional equations such that the rewrite relation $\\contr_E$ is convergent and all functions are totally defined, then innermost narrowing is complete w.r.t.\\ ground substitutions.\n\\end{theorem}\n\nFor a precise description of this strategy (under the conditions of the above theorem) we represent an equational literal in a goal by a skeleton and an environment part. The skeleton is an equation composed of terms occurring in the original program, and the environment is a substitution which has to be applied to the equation in order to obtain the actual literal.\n\n\\begin{definition}\n     The initial equation $E$ is represented by the pair $\\pair{E}{[]}$. If $\\pair{E}{\\sigma}$ is a literal, then a derivation step in the innermost narrowing calculus takes one of the following two forms, where $p$ is an innermost position:\n     \\begin{enumerate}\n         \\item \\textit{Narrowing:} Let $l = r$ be a variant of a rule such that $\\restr{E}{p}\\sigma$ and $l$ are unifiable with $mgu$ $\\tau$. Then $\\pair{E[r]_p}{\\sigma \\tau}$ is the next literal derived by an innermost basic narrowing step.\n         \\item \\textit{Innermost Reflection:} Let $\\gamma$ be the substitution $[x / \\restr{E}{p}\\sigma]$, with the condition that $x$ is a fresh variable.
Then $\\pair{E[x]_p}{\\sigma \\gamma}$ is the next literal derived by an innermost reflection step.\n     \\end{enumerate}\n\\end{definition}\n\n", "meta": {"hexsha": "81c372a8b79398d0f761ab9293fa00924f3ab58a", "size": 10086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/functional_logic_programming.tex", "max_stars_repo_name": "deividrvale/report-narrowing", "max_stars_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/functional_logic_programming.tex", "max_issues_repo_name": "deividrvale/report-narrowing", "max_issues_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/functional_logic_programming.tex", "max_forks_repo_name": "deividrvale/report-narrowing", "max_forks_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 102.9183673469, "max_line_length": 964, "alphanum_fraction": 0.7672020623, "num_tokens": 2332, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7549149758396752, "lm_q1q2_score": 0.5967831764190482}}
{"text": "\\graphicspath{{sliding\\_puzzle/fig}}\r\n\\chapter{Solvability of a NxN sliding puzzle}\r\n\\label{chap:Solvability of a NxN sliding puzzle}\r\n\r\n{\\color{red} \\Huge{info in this chapter to still be integrated somewhere else into report if needed. Else will leave it out}}\r\n\r\n\\section{General puzzle description}\r\nLet us assume that we have an NxN puzzle, then we have NxN number of blocks. We can represent the puzzle as an NxN array, then we stack the array into a one dimensional array of 1 x (N*N). For example see the 4x4 puzzle in Figure \\ref{fig:sliding_puzzle} we have a 1 x 16 array as: \r\nArray = (12,7,8,13,4,9,2,11,3,6,15,14,5,1,10).\r\nBefore we describe the conditions for a sliding puzzle to be solvable, we first define the term \u201cinversion\u201d. Assuming the the first index of the 1xN 2 array starts at the left top corner (valued 12) in\r\nFigure \\ref{fig:sliding_puzzle}, and that it runs from [0,(N*N)-1]. Then an inversion occurs when Array[index] >\r\nArray[index+1] where index is an arbitrary integer between 0 and N*N-1. Hence in Figure \\ref{fig:sliding_puzzle} we have a\r\ntotal: sum of inversions(Array) = 11 + 6 + 6 + 8 + 3 + 5 + 1 + 5 + 1 + 2 + 4 + 3 + 1 + 0 = 56.\r\n\r\n\\begin{figure}[!htb]\r\n\t\\centering\r\n\t\\includegraphics[width=0.25\\linewidth]{sliding_puzzle/fig/puzzle.png}\r\n\t\\caption{Example of a sliding puzzle}\r\n\t\\label{fig:sliding_puzzle}\r\n\\end{figure}\r\n\r\n\\section{Conditions for solvability}\r\nEven and odd sized boards are analysed separately (where size = N).\r\n\r\nFor odd sized boards where N is odd we have the puzzle only being solvable if and only if the boards\r\nhas an even number of inversions. The proof for this can be deduced by looking at Figure 2 and noting that for every switch of the blank block we have an even change in the sum of inversions of the board. \\cite{princeton_8puzzle_assignment}\r\n\r\n\\begin{figure}[!htb]\r\n\t\\centering\r\n\t\\includegraphics[width=1\\linewidth]{sliding_puzzle/fig/princeton_odd_boards.png}\r\n\t\\caption{Odd boards with change in blank piece only having even inversion change \\cite{princeton_8puzzle_assignment}}\r\n\t\\label{fig:sol_odd_board}\r\n\\end{figure}\r\n\r\nFor even sized boards where N is even we have the board solvable if and only if the number of\r\ninversions plus the row of the blank square is odd. This is illustrated in Figure 3.\r\n\r\n\\begin{figure}[!htb]\r\n\t\\centering\r\n\t\\includegraphics[width=1\\linewidth]{sliding_puzzle/fig/princeton_even_board_solvability.png}\r\n\t\\caption{Even board solvability \\cite{princeton_8puzzle_assignment}}\r\n\t\\label{fig:sol_even_board}\r\n\\end{figure}\r\n\r\nHalf of all puzzle configurations are unsolvable. \\cite{Notes_15_puzzle} This means that we only have N! / 2 configurations\r\nthat are solvable for an NxN board. This was proven using parity in the paper in \\cite{Notes_15_puzzle}. Sliding puzzles\r\ncan be solved relatively quickly with today\u2019s processing of computers for puzzles for example an 5x5\r\npuzzle was solved in 205 tile moves in 2016. \\cite{Domain_cube_forum}\r\n\r\nThe issue more so lies in finding the shortest path to solving a puzzle. This specific problem of solving\r\nwith the least amount of tile moves of a sliding puzzle has been defined as NP (non-deterministic polynomial-time) hard. 
\r\nThe issue lies rather in finding the shortest path to solving a puzzle. This specific problem, solving a sliding puzzle with the least number of tile moves, has been shown to be NP (non-deterministic polynomial-time) hard, i.e. at least as hard as the hardest problems in NP. In computational complexity theory, NP is the class of problems for which a proposed solution can be verified in polynomial time by a deterministic Turing Machine. A Turing machine is a mathematical model defining an abstract machine which manipulates symbols according to a set of rules. \\cite{Computation_finite_and_infinite_machines}\r\n\r\nIn simpler terms, a problem is in NP if a candidate solution can be checked within a time that is a polynomial function of the input. For instance, if we define the time to check a solution as 'T' and the input data as 'D', then as long as T = polynomial function(D), the problem is in NP.", "meta": {"hexsha": "4476207b78067b6469d617e2952a6d1c2e75885b", "size": 3872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sliding_puzzle/sliding_puzzle.tex", "max_stars_repo_name": "umr-bot/sliding-puzzle-solver-bot", "max_stars_repo_head_hexsha": "826532a426f343bcc66034b241a42b3bd864e07c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/sliding_puzzle/sliding_puzzle.tex", "max_issues_repo_name": "umr-bot/sliding-puzzle-solver-bot", "max_issues_repo_head_hexsha": "826532a426f343bcc66034b241a42b3bd864e07c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sliding_puzzle/sliding_puzzle.tex", "max_forks_repo_name": "umr-bot/sliding-puzzle-solver-bot", "max_forks_repo_head_hexsha": "826532a426f343bcc66034b241a42b3bd864e07c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6271186441, "max_line_length": 283, "alphanum_fraction": 0.7675619835, "num_tokens": 1039, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8128673246376009, "lm_q1q2_score": 0.5967417757955732}}
{"text": "\n\\subsection{Introduction}\n\nTake physical disk drives, create logical disk drives.\n\n\\subsection{Striping}\n\nA single file is spread over multiple disks.\n\nCan be done at bit/byte/block level.\n\n\\subsection{Mirroring}\n\nSame data on multiple drives\n\n\\subsection{Parity}\n\nUsed for error detection.\n\nIn protocol for sending/saving we say that all bits must be even (eg 1100) (or odd)\n\nFor given bit of info, we add parity bit to guarantee that bit is indeed even/odd. 1100 becomes 11000; 1000 becomes 10000\n\nCannot correct errors, just detect them. only detects if odd number of errors.\n\nParity can be stored on dedicate disk, distributed.\n\nalternative to parity bit: hamming code\n\n\\subsection{RAID levels}\n\nRAID 0: Uses striping across disks, but no redunency. allows for improved read/write times. if any drive fails, all fail\nRAID 1: Data written identically to two drives. read times increased, as with raid 0, due to mirroring. writing is slower. no parity or striping\nRAID 2: bit level striping and hamming code. rarely used\nRAID 3: rarely used. byte level stripping. Dedicated parity disk\nRAID 4: dedicated parity disk\nRAID 5: block level striping, distributed parity\nRIAD 6: double distributed parity\n\n\\subsection{ZFS}\n\nRAID-Z: Under ZFC, similar to RAID 5\n\n\\subsection{Off-site backups}\n\n", "meta": {"hexsha": "137e77c342b344804f76802a5c33317f16d8fe9a", "size": 1290, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/engineering/RAID/01-01-Introduction.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/engineering/RAID/01-01-Introduction.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/engineering/RAID/01-01-Introduction.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0434782609, "max_line_length": 144, "alphanum_fraction": 0.7813953488, "num_tokens": 328, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5967417755320357}}
{"text": "\\def\\Relength{{\\rm Relength}}\n\\def\\length{{\\rm length}}\n\\def\\Arccosh{{\\rm Arccosh}}\n\\def\\trace{{\\rm trace}}\n\\def\\distance{{\\rm distance}}\n\n\\vglue-8pt\n\\section{Killerwords and the parameter space}\n\\vglue-4pt\n\n{\\it Notation and conventions} FIXME(1.1).\nA hyperbolic $3$-manifold is a Riemannian $3$-manifold of constant sectional curvature $-1$.  All hyperbolic $3$-manifolds under\nconsideration will be closed and orientable. We will work in the upper-half-space model for\nhyperbolic 3-space:  ${\\bf H}^3 = \\{(x,y,z): z > 0\\}$ with \nmetric ${\\rm ds_H} = {\\rm ds_E}/z.$ The distance between two points $w$ and $v$ in ${\\bf H}^3$ will be denoted $\\rho(w,v).$\n\nIt is well known that  \n${\\rm Isom}_+({\\bf H}^3) = {\\rm PSL(2,}{\\bf C}),$  \nwhere an element of\n${\\rm PSL(2,}{\\bf C})$ acts as a M\\\"obius transformation on the bounding (extended) complex plane and the extension to upper-half-space is the natural extension\n (see [Bea]).  \nIf $M$ is a hyperbolic $3$-manifold, then $M={\\bf H}^3/\\Gamma$ where $\\Gamma$ is a \ndiscrete, torsion-free subgroup of ${\\rm PSL(2,}{\\bf C})$.   \n\nFor computational convenience, we will often normalize so that the (positive) $z$-axis is the axis of an isometry.  As such, we set up some special notation.\nLet $B_{(0;\\infty)}$ denote the oriented geodesic $\\{(0,0,z): 0< z < \\infty \\}$,\nwith negative endpoint $(0,0,0).$  (An {\\it endpoint} of an axis refers to a limit point of the axis on $S^2_{\\infty}$.) Let $B_{(-1;1)}$ denote the\noriented geodesic with negative endpoint $(-1,0,0)$ and positive endpoint $(1,0,0)$.\n\nWhen working in a group $G$ generated by $f$ and $w$ and looking \nat words in $f,w, f^{-1}, w^{-1}$ we will often let $F$ and $W$ denote\n$f^{-1}$ and $w^{-1},$ respectively.\n\n\\advance\\theoremcount by 1\n \\numbereddemo{Definition}\n If $f$ is an isometry, then we define\n$${\\it Relength}(f)=\\inf\\{\\rho(w,f(w))\\mid w\\in {\\bf H}^3\\}.$$\nThus $\\Relength(f)=0$ if and only if $f$ is either a parabolic or elliptic\nisometry.  If $\\Relength(f) > 0,$ then $f$\nis hyperbolic and maps a unique geodesic $\\sigma$ in ${\\bf H}^3$ to itself. In that case\n$\\sigma$ is oriented (the negative end\nbeing the repelling fixed point on $S^2_\\infty$)  and the isometry $f$ is the\ncomposition of a  rotation of $t \\ ({\\rm mod}\\ 2\\pi)$\nradians along $\\sigma$ (the sign of the angle of rotation is determined by the right-hand rule) followed by a pure translation of ${\\bf H}^3$ along $\\sigma$ of\n$l = \\Relength(f)$.  We define\n${\\it length}(f)=l+it,$ and call $A_f = \\sigma$ the {\\it axis} of $f.$  Now,  $A_f$ is an oriented interval with endpoints in $S^2_{\\infty},$ the\norientation being induced from $\\sigma.$\n\nIf the geodesic $\\sigma$ is  given a fixed orientation, we define an $l+it$\ntranslation $f$ along $\\sigma$  to be a distance $l$ translation in the positive\ndirection, followed by a rotation of $\\sigma$ by $t$ radians.  Of course if\n$l < 0,$ then each point of $\\sigma$ gets moved $-l$ in the negative direction.\nAlso, via the right-hand rule, the orientation determines what is meant\nby a $t$-radian rotation.  
Thus if $l > 0,$ the orientation induced on\n$\\sigma$ by $f$ (as in the previous paragraph) equals the given orientation.\nIf $l < 0,$ then the induced orientation is opposite to the given orientation\nand $f$ is a $-(l+it)$ translation of $-\\sigma$ in the sense of the previous\nparagraph.\n\nIf $f$ is elliptic, then $f$ is a rotation of $t$ radians where \n$0 \\le t \\le \\pi$ about some oriented geodesic, and we define $\\length(f) = t i.$ \nIf $f$ is parabolic or the identity, we define $\\length(f) = 0 + i0.$\nSo, for all isometries we have that $\\Relength = {\\rm Re}(\\length).$\n\\enddemo\n\n\\numbereddemo{Definition}   If\n$G$ is a subgroup of ${\\rm Isom}_+({\\bf H}^3),$ then  we say that $f$ is a {\\it shortest} element  in $G$ if $f\\neq {\\rm id}$ and\n$\\Relength(f)\\le \\Relength(g)$\nfor all $g\\in G, \\ g\\neq {\\rm id}.$\n\\enddemo\n \n\\numbereddemo{Definition}\n  If $\\sigma,\\ \\tau$ are disjoint oriented geodesics  in \n${\\bf H}^3$ which do not meet at infinity,\nthen define\n${\\it distance}(\\sigma,\\tau)=\\length(w)$ where \n$w\\in {\\rm Isom}_+({\\bf H}^3)$ is the hyperbolic\nelement which translates ${\\bf H}^3$\nalong the unique common perpendicular between $\\sigma$ and $\\tau$ and which takes the\noriented geodesic $\\sigma$ to the\noriented geodesic $\\tau$.  The oriented common perpendicular from $\\sigma$ to $\\tau$ is\ncalled the {\\it orthocurve} between $\\sigma$\nand $\\tau$.  The {\\it ortholine} between $\\sigma$ and $\\tau$ is the complete oriented\ngeodesic in ${\\bf H}^3$ which contains the\northocurve between $\\sigma$ and~$\\tau$.\n\nIf $\\sigma$ and $\\tau$ intersect at one point in ${\\bf H}^3$ then \nthere is an elliptic isometry $w$ taking $\\sigma$ to $\\tau$ fixing $\\sigma \\cap \\tau.$  Again,  define $\\distance(\\sigma, \\tau) = \\length(w).$  In this case, the orthocurve is the point $\\sigma \\cap \\tau,$ and the ortholine $O$ from $\\sigma$ to $\\tau$ is oriented\nso that $\\sigma,\\ \\tau,\\ O$ form a right-handed frame.\n\nIf $\\sigma$ and $\\tau$ intersect at infinity, then there is no unique common perpendicular, hence no ortholine, and we define\n$\\distance(\\sigma,\\tau) = 0 + i0,$ or $0 + i\\pi$ depending on whether or not $\\sigma$ and $\\tau$ point in the same direction at their intersection point(s) at infinity.\n\nDefine ${\\it Redistance} = {\\rm Re} (\\distance).$\n\nAs defined, Redistance is nonnegative.\nIn Definition FIXME(1.8) and in Sections FIXME(2 and 3),\nit will be useful to have a broader definition. 
Given an oriented\ngeodesic $\\alpha$ in ${\\bf H}^3$ orthogonal to oriented geodesics $\\beta$ and $\\gamma$, define $d_\\alpha(\\beta,\\gamma) \\in {\\bf C}$ where a\n$d_\\alpha(\\beta,\\gamma)$ translation of ${\\bf H}^3$  along $\\alpha$ takes $\\beta$ to $\\gamma.$\n\\enddemo\n\n\\numbereddemo{Definition}\nA tube of radius $r$ about a geodesic $\\delta_0$ in ${\\bf H}^3$ is \n$\\{w \\in {\\bf H}^3 \\mid \\rho(w, v) \\le r$ for some $v \\in \\delta_0 \\}.$    \nIf $\\delta$\nis a simple closed geodesic in the hyperbolic $3$-manifold $N$ and if\n$\\{\\delta_i \\}$ is the set of pre-images of $\\delta$ in ${\\bf H}^3$, then define\n{\\it tuberadius}($\\delta)={1 \\over 2} \\min \n\\{{\\rm Redistance}(\\delta_i,\\delta_j) \\mid i\\neq j \\}.$\nIf $r = {\\rm tuberadius}(\\delta),$ \nthen define a {\\it maximal tube} about $\\delta$ to be the\nimage of a tube of radius $r$ about $\\delta_0.$  Note that\ntuberadius$(\\delta)=\n\\sup\\{r \\mid $ there exists an embedded tubular neighborhood of\nradius $r$ about $\\delta \\}.$\n\\enddemo\n\n\\numbereddemo{Definition}\nOur desire to understand tuberadii about closed geodesics, and especially about a simple closed geodesic $\\delta$, leads us to investigate certain 2-generator subgroups $G =\n\\langle f,w\\rangle$  of ${\\rm Isom}_+({\\bf H}^3)$ with the generator $f$ corresponding to a primitive isometry fixing $\\delta_0$ and the generator\n$w$ corresponding to an element taking $\\delta_0$ to its nearest covering translate.  We investigate these 2-generator groups by using certain\nsubsets of ${\\bf C}^3$ as parameter spaces.\n\nA {\\it marked {\\rm (2-}generator\\/{\\rm )} group} is a triple $\\{G,f,w\\}$ consisting of a 2-generator subgroup $G$ of ${\\rm Isom}_+({\\bf H}^3)$\nand an ordered pair of isometries $f,w$ of ${\\bf H}^3$ which generate $G$  such that $\\Relength(f) > 0$ and if $A_f$ is the axis of $f,$ then $w(A_f)\n\\cap A_f  = \\emptyset$  (here, intersection is taken in ${\\bf H}^3 \\cup S^2_\\infty$). Two marked groups $\\{G_1,f_1,w_1\\}$ and $\\{G_2,f_2,w_2\\}$\nare {\\it conjugate} if $G_1$ and $G_2$ are conjugate via an element of ${\\rm Isom}_+({\\bf H}^3)$ and this conjugating element takes $f_1$ to $f_2$\nand\n$w_1$ to $w_2.$  Within any conjugacy class of marked groups is a unique normalized element\n$\\{G,f,w\\}$ where $f$ is a\npositive translation  along the (oriented) geodesic $B_{(0;\\infty)},$  and the\northocurve from $w^{-1}(B_{(0;\\infty)})$ to $B_{(0;\\infty)}$ lies on \n$ B_{(-1;1)}$\non the negative side of $ B_{(-1;1)}\\cap B_{(0;\\infty)}.$  \nTo minimize notation, we will frequently equate a conjugacy class with its normal representative.\n\\enddemo\n\nGiven $(L,D,R)=(l+it, d+ib, r+ia)\\in{\\bf C}^3$ with $l > 0,\\ d > 0,$ one  can associate a group $G$ generated by  elements $f$ and $w$ as follows.  Define $f$ to be an $l+it$ translation along $B_{(0;\\infty)}$ and $w$ to be a $d+ib$ translation along $ B_{(-1;1)}$ followed by an $r+ia$ translation along $B_{(0;\\infty)}$ (here, $r$ can be negative, in which case this is equivalent to\na\n$-r-ia$ translation along $-B_{(0;\\infty)}$).  Conversely if $\\{G,f,w\\}$ is a normalized marked group then $f$ is an $L$ translation of $B_{(0;\\infty)}$\nand $w$ is a $D$ translation of \n$ B_{(-1;1)}$ followed by an $R$ translation of $B_{(0;\\infty)}.$ Thus\n${\\cal P}^\\prime = \\{(l+it,d+ib,r+ia)\\in {\\bf C}^3 | \\ l>0, d>0\\}$ \nparametrizes the\nset of conjugacy classes of marked groups.  
In particular, the parametrization is surjective and locally one-to-one.\n\nWe are primarily interested in the set \n${\\cal T}^{\\prime}\\subset {\\cal P}^{\\prime}$\nwhich parametrizes all conjugacy classes of \nmarked groups\n$\\{G,f,w\\}$ for which $f$ is a shortest element  of $G$ which (positively) translates $B_{(0;\\infty)}$ and $w\\in G$ takes $ B_{(0;\\infty)}$ to a\n{\\it nearest} translate $w(B_{(0;\\infty)})$ such that \n$-\\Relength(f)/2< {\\rm Re}\\bigl(d_{B_{(0;\\infty)}}({\\rm ortholine\\ from\\ } w^{-1}(B_{(0;\\infty)}) {\\rm \\ to\\ } B_{(0;\\infty)},\\ {\\rm ortholine\\ from\\ } B_{(0;\\infty)} {\\rm \\ to\\ } w(B_{(0;\\infty)}))\\bigr)\n\\le \\Relength(f)/2.$  \nSee Figure FIXME(1.1).\nNote that because $f$ is shortest and\n$\\Relength(f) > 0,$ it follows that $G$ must be \ndiscrete, torsion-free, and parabolic-free.\n \n\n\n\\numbereddemo{{R}emark}  ${\\cal T}^{\\prime}$ consists of those parameters corresponding to marked groups $\\{G,f,w\\}$ such that $l$ is the real\nlength of a shortest element of $G,\\  d$ is the real distance between $ B_{(0;\\infty)}$ and a nearest\ntranslate, and $-l/2<r\\le l/2.$  \nIn what follows, it is\nessential to remember that an element $\\alpha$ of ${\\cal P^{\\prime}}$ corresponds not only\nto a group $G,$ but to a marked group.  \nTo further establish the point, we note that,\nfor elements of ${\\cal T}^{\\prime},$\nthe parameter $l$ is an invariant of $G$ alone (that is, $l$ is the shortest real length of an element of $G$), while  the parameter $d$ is determined by $G$ and $f$ (that is, the notion of ``nearest\" used to define $w$ in the definition of ${\\cal T}^{\\prime}$ requires a choice of $f$).\n\nAs mentioned in the introduction to this paper, we are only interested in the subset of ${\\cal T}^{\\prime}$ corresponding to\nparameters $\\alpha$ with $d \\le \\ln(3).$  The\nfollowing two propositions imply this subset of ${\\cal T}^{\\prime}$ lives in a compact subset of ${\\cal P}^{\\prime}.$\n\\enddemo\n\n \\figin{fig1.1}{500}\n \\centerline{Figure FIXME(1.1)}\n\n\\numbereddemo{Definition} \\hglue-8pt Let ${\\cal P}\\subset{\\cal P}^{\\prime}$ be the set of those parameters $\\alpha\n= (l+it$, $d+ib, r+ia)$ such\nthat\n \\vglue3pt\n a)       $0.0978\\le l \\le 1.289785$,\n\\vglue2pt b)\\enspace      $l/2 \\le d\\le \\ln(3)$,\n\\vglue2pt c) \\enspace   $0\\le r \\le l/2$,\n\\vglue2pt d) \\enspace      $-\\pi \\le t \\le 0$,\n\\vglue2pt e) \\enspace\t$-\\pi \\le b \\le \\pi$,\n\\vglue2pt f) \\enspace\t$-\\pi \\le a \\le \\pi$.\n\\vglue2pt\\noindent \nDefine ${\\cal T}={\\cal T}^{\\prime}\\cap {\\cal P}.$\n\\enddemo\n\nThe point of the definition of ${\\cal P}$ and ${\\cal T}$ is as follows.  We want to analyze by computer the relationship between lengths of shortest geodesics  and their tuberadii in hyperbolic $3$-manifolds.  We were naturally led to the parameter space \n${\\cal P}^{\\prime}$ and its subset ${\\cal T}^{\\prime}.$  But \n${\\cal P}^{\\prime}$ is problematic from the computational viewpoint because it is noncompact.  We wish to replace \n${\\cal P}^{\\prime}$ by ${\\cal P}$ which is compact, and \n${\\cal T}^{\\prime}$ by ${\\cal T}$ in our computer analysis.  This is carried out in Lemma FIXME(1.13).  Note that we worked to make ${\\cal P}$ as small as reasonable to save computation time;  for example, the $t$ and $r$ restrictions above cut down the parameter space by a factor of 4 over the obvious $t$ and $r$ restrictions. 
\n\nBy [G; Lemma 5.9] (or see Example A.3 in the appendix) a closed orientable hyperbolic $3$-manifold $N$\nsatisfies the insulator condition provided that\ntuberadius$(\\delta) > \\ln(3)/2$ for some closed geodesic $\\delta\\subset N.$  Thus we\nare led to consider:\n\n\\numbereddemo{Definition}  A word $h$ in $w,f,w^{-1},f^{-1}$ for which statement a) (resp.\\ b)) in Remark FIXME(1.17) holds for each $\\beta\\in {\\cal P}_i$\nand for which $h_\\beta$ is not a power of $f_\\beta$ for each $\\beta\\in {\\cal P}_i$ is\ncalled a {\\it killerword} for ${\\cal P}_i$ with respect to contradiction a) (resp.\\ b)).\n\n\\numbereddemo{Summary}  With seven exceptions,  to each of the approximately one\nbillion regions partitioning ${\\cal P},$ we will\nassociate a killerword and a contradiction.  \\enddemo \n\n\\numbereddemo{{R}emark}  Computers are well suited for partitioning a set such as ${\\cal P}$\ninto many regions $\\{{\\cal P}_i \\},$ and finding a\nkillerword $h_i$ which eliminates all $\\alpha_i \\in {\\cal P}_i $ due to contradiction $C_i.$\nDepending on the contradiction, we find\ncomputable expressions  for approximations of the values of\n$\\Relength(h_\\beta)$ or Redistance$(h_\\beta(B_{(0;\\infty)}), B_{(0;\\infty)})$ and\nthus use the computer to eliminate all of~${\\cal P}_i .$ \n\\enddemo\n\n\\vglue8pt \n{\\it Definition} FIXME(1.22). Let \n\\vglue4pt\n\\centerline{$ {\\cal W} = \\{ (x_0,x_1,x_2,x_3,x_4,x_5) : |x_i| \\le 4 \\times 2^{(5 - i) /6} {\\rm \\ for\\ } i = 0,1,2,3,4,5 \\}$}\n\\begin{eqnarray*}\n\\noalign{\\vskip-12pt}\n \\supset \\exp({\\cal P}) = \n\\{(x_0,x_1,x_2,x_3,x_4,x_5)\\mid \\hskip-16pt&\\hskip-16pt & \nx_0 + i x_3=\\exp(e),  \\enspace x_1 + i x_4\n  = \\exp(f),  \\\\\n\\hskip-16pt&\\hskip-16pt& x_2 + i x_5 = \\exp(g) \\\n{\\rm where} (e,f,g)\\in {\\cal P}\\}\\\\\n\\noalign{\\vskip-28pt}\n\\end{eqnarray*}\n and let\n$${\\cal S}=\\exp({\\cal T}).$$\nAs we are taking $\\exp$ of the various complex co-ordinates, it is notationally convenient to replace our complex parameters \n$L = l+it,\\ D = d+ib,\\ R = r+ia$ by exponentiated versions.  That is, let \n\\begin{eqnarray*}\nL^\\prime &\\hskip-8pt=\\hskip-8pt& \\exp(L) = \\exp(l+it),\\enspace D^\\prime = \\exp(D) = \\exp(d + ib), \n\\\\ R^\\prime &\\hskip-8pt=\\hskip-8pt& \\exp(R) = \\exp(r+ia).\n\\end{eqnarray*}\n \n\n{\\it Remarks} FIXME(1.23).\ni) We work with ${\\cal W}$ instead of $\\exp({\\cal P})$ because we want our initial parameter space to be a (6-dimensional) box that is easily\nsubdivided.  This has the side effect that certain regions (sub-boxes)\n ${\\cal W}_i$ of ${\\cal W}$ will be eliminated because they are outside of $\\exp({\\cal P})$ not because of the analogues of conditions a) and b) in\nRemark FIXME(1.17).\nThe entire collection of conditions is given in Section FIXME(5).\n \nii)  The presence of the factor $2^{(5-i)/6}$ in the definition of ${\\cal W}$ is explained in Construction FIXME(5.3).\nBriefly, the main reason for including it\nis to make the shape of regions stay as uniform and ``round\" as possible under subdivision.\nThis makes the Taylor approximations efficient, hence fast.\n\\vglue4pt\niii)  We chose the co-ordinates of ${\\cal W}$  so that $L^\\prime = x_0 + i x_3,\\ D^\\prime = x_1 + i x_4,\\ R^\\prime = x_2 + i x_5\\ $ to gain a mild\ncomputer advantage.  
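\n\\vglue4pt\niv)  As a quick sanity check that ${\\cal W}$ really contains $\\exp({\\cal P})$, as asserted above: for $(e,f,g)\\in {\\cal P}$ we have $|x_0|,\\ |x_3| \\le |\\exp(e)| = e^l \\le e^{1.289785} < 3.64 < 4 \\times 2^{2/6},$ $|x_1|,\\ |x_4| \\le e^d \\le 3 < 4 \\times 2^{1/6},$ and $|x_2|,\\ |x_5| \\le e^r \\le e^{l/2} < 1.91 < 4 \\times 2^{0},$ so every exponentiated co-ordinate clears even the smaller of its two bounds $4 \\times 2^{(5-i)/6}$ with room to spare.\n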
\n\\advance\\theoremcount by 3\n\n\\proclaim{Lemma}  If $(L^\\prime, D^\\prime, R^\\prime)\\in {\\cal W}$ and  $\\{G,f,w\\}$ is the associated normalized \nmarked group{\\rm ,} then $f$ and $w$ have matrix representatives\n $$ f = \\left(\\matrix{\\sqrt{L^\\prime}&0\\cr \n0 & 1/\\sqrt{L^\\prime}\\cr}\\right), \\leqno{{\\rm a)}}$$\n\n  $$ w = \\left(\\matrix{\\sqrt{R^\\prime} * ch & \\sqrt{R^\\prime} * sh \\cr sh / \\sqrt{R^\\prime} & ch/\\sqrt{R^\\prime}\\cr}\\right) \\leqno{\\rm b)}$$ \nwhere \n$ch = (\\sqrt{D^\\prime} + 1/\\sqrt{D^\\prime})/2\\ \\ {\\rm and}\\ \\ \nsh = (\\sqrt{D^\\prime} - 1/\\sqrt{D^\\prime})/2.$\n\\endproclaim\n \n\\demo{Proof}  a)  In our set-up  the (oriented) axis of $f$ is $B_{(0;\\infty)}$.  \nAs such, $f$ corresponds to a diagonal matrix, with diagonal entries $p$ and $p^{-1},$  with $|p| >1.$ \nThe action of $f$ on the bounding complex plane is simply multiplication by $p^2.$  Extending this action to upper-half-space in the natural way rotates the $z$-axis by angle $\\arg(p^2)$ and sends $(0,0,1)$ to $(0,0,|p|^2).$ \n Thus, $${\\rm Im}(\\length(f)) = \\arg(p^2) = {\\rm Im}(\\ln(p^2))$$ and,\nusing the hyperbolic metric, \n$${\\rm Re}(\\length(f)) = \\ln(|p|^2) = {\\rm Re}(\\ln(p^2)).$$\nThat is, $\\length(f) = \\ln(p^2)$ and \n$$p = \\pm \\exp(\\length(f)/2) = \\pm \\sqrt{\\exp(\\length(f))} = \\pm \\sqrt{\\exp(L)} = \\pm \\sqrt{L^\\prime}.$$ Now, we take the positive square\nroot (taking the negative square root produces the other lift from \n${\\rm PSL(2,}{\\bf C})$ to ${\\rm SL(2,}{\\bf C})$).\n\nb)  $w = \\beta \\circ \\alpha$ where $\\beta$ is translation of distance $R$ along $ B_{(0;\\infty)}$ and $\\alpha$ is translation of distance $D$ along $ B_{(-1;1)}$.  Thus,\na matrix representative of $\\beta$ is $$ \\left(\\matrix{\\sqrt{R^\\prime} & 0 \\cr 0 & 1/\\sqrt{R^\\prime}\\cr}\\right)$$ and a matrix representative of $\\alpha$ can be computed to be $$\\left(\\matrix{\\cosh(D/2) & \\sinh(D/2) \\cr \\sinh(D/2) & \\cosh(D/2)\\cr}\\right).$$ But\n$\\cosh(D/2) = (\\exp(D/2) + \\exp(-D/2))/2 = \n(\\sqrt{D^\\prime} + 1/\\sqrt{D^\\prime})/2 = ch$ and similarly for $sh.$\nThus, $$\\alpha = \\left(\\matrix{ch & sh \\cr sh & ch\\cr}\\right)$$ and b) follows by matrix multiplication.\n\\enddemo\n\n\\proclaim{Lemma} If $h \\in {\\rm Isom}_+({\\bf H}^3)$ is represented by the matrix $$A = \\left(\\matrix{a &  b \\cr c & d\\cr}\\right)\\in {\\rm SL}(2,{\\bf\nC}),$$ then \n\\begin{itemize}\n\\ritem{a)} $\\exp(\\Relength (h))= |\\trace(A)/2 \\pm \\sqrt{(\\trace(A)/2)^2 - 1}|^2,$\n\n\\ritem{b)} $\\exp({\\rm Redistance} (h(B_{(0;\\infty)}), B_{(0;\\infty)}))=|{\\rm orthotrace}(A) \\pm$\\hfill \\noindent $\\sqrt{({\\rm\northotrace}(A))^2 - 1}|$  where \n{\\rm orthotrace(}$A) = ad + bc.$ $\\phantom{\\sum^\\int}$\n\\end{itemize}\nIn both cases{\\rm ,} the $+, -$ produce reciprocal values for the right\\/{\\rm -}\\/hand side{\\rm ,}\n  and we take the one producing the larger value{\\rm ,} unless the value is $1${\\rm ,} in\nwhich case there is no need to choose.\n\\endproclaim\n\n\\demo{Proof}  a)  If $A$ is elliptic or parabolic, the proof is straightforward (the trace of a parabolic is $\\pm 2$ while the trace of an elliptic\nis a real number between 2 and -2).  \n\nWe assume $A$ is hyperbolic.  
Because trace is a conjugacy invariant, we can assume the oriented axis of $A$ is $ B_{(0;\\infty)}.$  Thus $A$ is a diagonal matrix with $p$ and $p^{-1}$ along the diagonal with \n$|p| > 1,$ and, as in the proof of Lemma FIXME(1.24), we see that \n$\\exp({\\rm length}(h)) = p^2.$  Of course,  $\\trace(A)= p + p^{-1}$, and it is easy enough to solve for $p.$   Specifically, \n$p = \\trace(A)/2 \\pm \\sqrt{(\\trace(A)/2)^2 - 1}.$\nThus, \n\\begin{eqnarray*}\n\\exp(\\Relength (h))& =& |\\exp(\\length(h))|  = \n|p|^2\\\\[5pt]\n&=& |(\\trace(A)/2) \\pm \\sqrt{(\\trace(A)/2)^2 - 1}|^2.\n\\end{eqnarray*}\n\\vglue4pt\nb)  If $ B_{(0;\\infty)}$ and $h(B_{(0;\\infty)})$ intersect at infinity, then the proof is straightforward.  For example, \nif $h$ fixes the point $(0,0,0)$ at infinity, then $b = 0,\\ ad = 1$ and the formula holds.  Similarly for the other cases in which $ B_{(0;\\infty)}$ and $h(B_{(0;\\infty)})$ intersect at infinity.\n\nWe assume $ B_{(0;\\infty)}$ and $h(B_{(0;\\infty)})$ do not intersect at infinity. We will compute the length of $k$, the square of the transformation taking $ B_{(0;\\infty)}$ to $h(B_{(0;\\infty)})$ along their ortholine. \nLet $\\tau$ be the 180-degree rotation about $ B_{(0;\\infty)};$ then $(h \\circ \\tau \\circ h^{-1})$ is the 180-degree rotation about $h(B_{(0;\\infty)}),$ and\nwe have that  $k = (h \\circ \\tau \\circ h^{-1}) \\circ \\tau.$   \nNow, $\\tau$ and $h$ are represented by the matrices \n$$ \\left(\\matrix{ i & 0 \\cr \n                                0  & -i \\cr} \\right) {\\rm \\ \\ and\\ \\ } \n                         \\left(\\matrix{ a & b \\cr \n                                c  & d \\cr} \\right)  \\in {\\rm SL}(2,{\\bf C}).$$ \nHence,   $k = (h \\circ \\tau \\circ h^{-1}) \\circ \\tau$ can be computed to have matrix\nrepresentation $$\\left(\\matrix{ ad + bc & 2ab \\cr \n                                2cd & ad + bc \\cr} \\right).$$ \nThus,\n\\begin{eqnarray*}\n&&\\hskip-66pt\\exp({\\rm Redistance} (h(B_{(0;\\infty)}), B_{(0;\\infty)}))\\\\\n& = &\n\\exp(\\Relength(k)/2)= \\sqrt{|\\exp(\\length(k))|}\\\\\n& =&\n|(\\trace(k)/2) \\pm \\sqrt{(\\trace(k)/2)^2 - 1}|\\\\\n& =&\n|(ad + bc) \\pm \\sqrt{(ad + bc)^2 - 1}|. \\\\\n\\noalign{\\vskip-36pt}\n\\end{eqnarray*}\n\\enddemo\n\n\\numbereddemo{{R}emarks} i)  It follows from Lemma FIXME(1.25) that if $h$ is a word in $f,w,f^{-1},w^{-1},$ then for any\nparameter value $\\alpha\\in {\\cal W},$\n$$\\exp(\\Relength(h_\\alpha)), \\hbox{ and }\n  \\exp({\\rm Redistance}(h_\\alpha(B_{(0;\\infty)}), B_{(0;\\infty)}))$$ can be computed using only the\noperations $+, -, \\times, /, \\sqrt{}.$\n\\vglue4pt\n ii) During the course of the computer work needed to prove the main theorems, the parameter space\n${\\cal W}$ was decomposed into sub-boxes by computer via a recursive subdivision process:\nGiven a sub-box  being analyzed, either it can be {\\it killed directly} (that is, eliminated by a killerword and associated condition as described in\nRemark FIXME(1.17),\nor for the trivial reason described in Remark FIXME(1.23i)), or it cannot.  \nIf it cannot be killed directly, it is subdivided in half by a hyperplane\n$\\{x_i = c \\}$ (where $i$ runs through the various co-ordinate dimensions\ncyclically) and the two pieces are analyzed separately, and so on.
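\n\\vglue4pt\n(As a quick sanity check of Lemma FIXME(1.25)a) and of i) above: applied to the matrix of $f$ from Lemma FIXME(1.24), $\\trace(A)/2 = (\\sqrt{L^\\prime} + 1/\\sqrt{L^\\prime})/2,$ so $(\\trace(A)/2)^2 - 1 = ((\\sqrt{L^\\prime} - 1/\\sqrt{L^\\prime})/2)^2,$ and $\\trace(A)/2 \\pm \\sqrt{(\\trace(A)/2)^2 - 1}$ equals $\\sqrt{L^\\prime}$ or $1/\\sqrt{L^\\prime}.$  Taking the value of larger modulus gives $|\\sqrt{L^\\prime}|^2 = |L^\\prime| = e^l = \\exp(\\Relength(f)),$ as it should.)\n\\vglue4pt\n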
\n\nAs such, a sub-box of ${\\cal W}$ can be described by a sequence of 0's and 1's where 0 means ``take the lesser $x_i$ values\" and 1 means ``take the greater $x_i$ values.\"  \nFor the decomposition of ${\\cal W}$ into sub-boxes, all the \nsub-box descriptions could be neatly encoded into one tree (although in practice we found it preferable to use several trees to describe the entire\ndecomposition.  See \\S 5).\n\\vglue4pt\niii) In the following proposition, seven {\\it exceptional boxes} are described as sequences of 0's and 1's.  \nFour of the exceptional boxes---$X_0, X_4, X_5, X_6$---are each the union of two abutting sub-boxes, $X_0 = X_{0a} \\cup X_{0b}$ and so on.  It is a pleasant exercise to work through the fact that they abut.  \nIt should be noted that had the set-up for ${\\cal W}$ been different, more sub-boxes (or perhaps fewer) might have been needed to construct the seven\nexceptional regions. \n\nIt is also a pleasant exercise to calculate by hand the co-ordinate ranges of the various sub-boxes.  For example, the range of the last co-ordinate (i.e., $x_5$) of the sub-box \n\\eject\n\n\\noindent \n$X_{6a} = \n111000000001000111\\ \n111111110101001111\\ \n011111010111111111$\\hfill\n\\vglue4pt\n  \\hfill $  \n110001001011000111\\ 0$\n\\vglue4pt\\noindent \nis found by taking the 6th entry, the 12th entry, the 18th entry, and so on.  These entries are 011111111111.  The first entry (0) means take the\nlesser $x_5$ values, and produces the interval $[-4,0].$  The second entry (1) means take the greater $x_5$ values, and produces the interval\n$[-2,0].$  The third entry (1) produces $[-1,0].$  Continuing, we see that $X_{6a}^{\\phantom{|}}$ has $-2^{-9} \\le x_5 = {\\rm Im}(R') \\le 0.$  The other\nco-ordinates can be computed in the same fashion, although they must at the end be multiplied by the factor $2^{(5 - i)/6}$ (see the definition of the\ninitial box\n${\\cal W}$).  The range of co-ordinate values for each  exceptional  box $X_0, X_1, \\ldots, X_6$ is given in Table FIXME:1.1 (a limited number of significant\ndigits is given), and  then a range of co-ordinates for exceptional regions (in ${\\cal P}$) \n${\\cal R}_i \\supset \\exp^{-1}(X_i)$ is given (see Remarks FIXME:1.30i) and 1.30ii)) in Table FIXME:1.2. 
(Note that this use of the symbol ${\\cal R}_i$ differs\nslightly from the use in \\S 0.)\nFinally, two {\\it quasi-relators} are given in Proposition FIXME:1.28 for each exceptional box $X_0, X_1, \\ldots, X_6$ (see the next definition).\n \n\n\\numbereddemo{Definition} A {\\it quasi-relator} in a sub-box $X$ of ${\\cal W}$ is a word in $f,w,F=f^{-1},W=w^{-1}$ that is close to the identity\nthroughout\n$X$ and experimentally appears to be converging to the identity at some point in $X.$  In particular, a quasi-relator rigorously has Relength less than\nthat of $f$ at all points in $X.$\n\\enddemo\n\n\\proclaim{Proposition}  \nWithin the parameter space ${\\cal W}$ but outside the seven exceptional boxes there are no parameter points corresponding to \nmarked groups $\\{G,f,w\\}$ where $G$ is \ndiscrete{\\rm ,} torsion\\/{\\rm -}\\/free and  parabolic\\/{\\rm -}\\/free\\/{\\rm ; }\n$f$ corresponds to a shortest geodesic $\\delta$ of tuberadius $\\le\n\\ln(3)/2${\\rm ;} and $w$ takes a particular lift of $\\delta$ to \na nearest translate.\n Specifically{\\rm ,} \n${\\cal S} \\cap ({\\cal W} - \\bigcup_{n = 0,\\dots, 6} X_n)=\\emptyset$ where the $X_n$ are the exceptional boxes\n\n\\vglue4pt\n\n \\noindent $X_0 = X_{0a} \\cup X_{0b},$\n\\vglue8pt\n\\noindent $X_{0a} = \n001000110111110001\\ \n101001010101011001\\ \n011011010111101101$\\hfill \n\n \n \\hfill\n$100001101101000111\\ \n010001110101100101\\ \n1101110111110100,$  \n\\vglue8pt\n \\noindent $X_{0b} = \n001001110110110000\\ \n101000010100011000\\ \n011010010110101100$\n\n\\hfill\n$100000101100000110\\ \n010000110100100100\\ \n1101100111100100,$ \n \\vglue4pt\n\\noindent  $X_0\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\n\n$r_1 = fwFwwFwfww,$\n \n$r_2 = FwfwfWfwfw,$\n\n\\vglue8pt\n\\noindent $X_1 = \n001000110001110110\\ \n011101000110111110\\ \n100010110000100011$\\hfill \n\n\\hfill $\n101101001101001000\\ \n110101011000000100\\ \n000.$ \n\n\\noindent $X_1\\ \\ \n$ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\n\n$r_1 = FFwFWFWfWFWFwFFww,$\n \n$r_2 = FFwwFwfwfWfwfwFww,$\n\\vglue4pt\n\\noindent $X_2 = \n001000110101010010\\ \n101010110001100101\\ \n110111100001101010$\\hfill\n\n\\hfill $111100100000010001\\ \n111100,$\n\\vglue4pt\n\\noindent $X_2\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\n\n$r_1 = FwfwfWffWfwfwFww,$\n\n$r_2 = FFwFFwwFwfwfwFww,$\n\\vglue4pt\n\\noindent \n$X_3 = \n111000000001000110\\ \n011011101101011000\\ \n111101011110001100$\\hfill\n\n\\hfill  \n$111111100110110000\\ \n0000100010100010,$\n\\vglue4pt\n\\noindent $X_3\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\\/\n\n$r_1 = FFwfwFFwwFWFwFWfWFWffWFWfWFwFWFww,$ \n\n$r_2 = FFwfwFwfWfwfWWfwfWfwFwfwFFwwFWFww,$\n\\vglue4pt \n\\noindent $X_4 = X_{4a} \\cup X_{4b},$\n\\vglue4pt\n\\noindent $X_{4a} = \n111000000001000110\\ \n011001001111101010\\ \n011110110110111101$\\hfill\n\n\\hfill  \n$100011111110110110\\ \n10000111101,$\n\\vglue4pt\n\\noindent $X_{4b} = \n111000000001000110\\ \n011001001111101010\\ \n111110010110011101$\\hfill\n\n\\hfill  \n$000011011110010110\\ \n00000101101,$\n\\vglue4pt\n\\noindent $X_4\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\n\n$r_1 = FFwfwFwfWfwfWfwFwfwFFwwFWFwFWFww,$\n\n$r_2 = FFwfwFwfwFFwwFWFwFWfWFWfWFwFWFww,$\n\\vglue4pt\n\\noindent $X_5 = X_{5a} \\cup X_{5b},$\n\\vglue4pt\n\\noindent $X_{5a} = \n001000110001110111\\ \n001111000101111111\\ \n101111100111001111$\\hfill\n\n\\hfill \n$000001111011110111\\ 1,$\n\\vglue4pt\n\\noindent  $X_{5b} = \n001001110000110110\\ \n001110000100111110\\ \n101110100110001110$\\hfill\n\n\\hfill  
\n$000000111010110110\\ 1,$\n\\vglue4pt\n\\noindent $X_5\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\n\n$r_1 = FwFWFwFwfwfWfwfw,$\n\n$r_2 = FwfwfWfWFWfWfwfw,$\n\\vfil\n\\noindent $X_6 = X_{6a} \\cup X_{6b},$\n\\vfil\n\\noindent $X_{6a} = \n111000000001000111\\ \n111111110101001111\\ \n011111010111111111$\\hfill\n\n\\hfill  \n$110001001011000111\\ 0,$\n\\vglue4pt\n\n\\noindent $X_{6b} = \n111001000000000110\\ \n111110110100001110\\ \n011110010110111110$\\hfill\n\n\\hfill  \n$110000001010000110\\ 0,$\n\\vglue4pt\n\\noindent $X_6\\ \\ $ quasi\\/{\\rm -}\\/relators\\/{\\rm :}\\/\n\n$r_1 = FWFwFWfWFwFWFwfw,$\n\n$r_2 = FWFwfwFwfWfwFwfw.$\n\\endproclaim\n\n \n\\demo{Proof}  \nThe proof follows along the lines presented in Remark FIXME(1.17).  \nTwo computer files contain the data needed for the proof.   \nThe first computer file describes the partition of ${\\cal W}$ into\nsub-boxes and attaches an integer to each such sub-box, \nand the second file, called ``conditionlist\" is an ordered list of conditions and killerwords.\nThe integer associated to a sub-box in the first file describes the numbered condition/killerword from   conditionlist that will eliminate the sub-box\nin question (other than those corresponding to the $X_i).$ A computer program named {\\it verify} shows\nthat the conditions and killerwords in question actually do kill off their associated sub-boxes (see Section 5 for more details).  This computer\nprogram addresses the issues of Remark FIXME(1.20).  The code for {\\it\nverify} is available at the {\\it Annals} web site. \n\nIn addition, a mild modification of {\\it verify} showed that the listed words were quasi-relators for the given sub-boxes. \n\\enddemo\n\n\n", "meta": {"hexsha": "941438fed0c99f500f2c8b0cfeb50c4dea381097", "size": 28438, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/TeX_and_Figures_Files/Chapter_2.tex", "max_stars_repo_name": "njt99/findingkillerwords", "max_stars_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/TeX_and_Figures_Files/Chapter_2.tex", "max_issues_repo_name": "njt99/findingkillerwords", "max_issues_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/TeX_and_Figures_Files/Chapter_2.tex", "max_forks_repo_name": "njt99/findingkillerwords", "max_forks_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.5290102389, "max_line_length": 385, "alphanum_fraction": 0.6731837682, "num_tokens": 9924, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673133042217, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5967417722040136}}
{"text": "\\chapter{Resampling Methods}\n\\label{chp:crossvalidation}\nIn this chapter different types of resampling methods are explained. The validation set approach and cross validation is presented as two ways of estimating the test error for a model. Lastly Bootstrap is presented as a method for estimating properties of a model.\n\n\\section{The Validation Set Approach}\n\\label{sec:VSA}\nIn the validation set approach the available set of samples is randomly divided into two parts of approximately equal size. One is the training set and the other is the validation set. The idea is then to fit the model on the training set, and then apply the model to the validation set. The resulting prediction error of the validation set is then used as an estimate of the test error. This error rate is typically calculated as the mean squared error (MSE). \n\n\\myFigure{validationSetApproach.png}{Splitting data randomly}{fig:valid}{0.6} \n\nThe concept of randomly splitting the  data set is illustrated in figure \\ref{fig:valid}. The  data set is a series of observations from 1 to n, and the result is two almost equally sized parts, with the observations from the original  data set randomly distributed. \n\nThe blue half of the  data set is the training set, this is used for fitting the model. The orange half of the data set is the validation set, used for estimating the prediction error rate.\n\n\n\\subsection{Lab 5.3.1 - The Validation Set Approach}\nIn this exercise the goal is to use the validation set approach. So as described in the section above the first thing to do is to split the data randomly in two halves. The data used here is the \\emph{Auto}  data set, containing a list of car information. \n\n\\newpage\n\n\\begin{lstlisting}[language=Python, label=lst:fit_valid, caption=Fitting linear regression]\ntrain, test = train_test_split(df, test_size = 0.5)\n\nX_train = train[['horsepower']]\nX_test = test[['horsepower']]\n\nY_train = train['mpg']\nY_test = test['mpg']\n\nlm = linear_model.LinearRegression()\nlm.fit(X_train, Y_train)\n\\end{lstlisting}\n\nListing \\ref{lst:fit_valid} shows the \\emph{Auto} data as \\emph{df} being split randomly in half by the \\emph{train\\_test\\_split()} function. \n\\emph{horsepower} is used as a predictor and denoted X while \\emph{mpg} is the response value denoted Y. The model is then fitted on the training data \\emph{X\\_train} and \\emph{Y\\_train}.\n\n\\begin{lstlisting}[language=Python, label=lst:print_valid, caption=Printing MSE]\nprint(\"Mean squared error: %.2f\" \n\t\t% np.mean((lm.predict(X_test) - Y_test) ** 2))\n\\end{lstlisting}\n\nIn Listing \\ref{lst:print_valid} the MSE is computed by applying the fitted model to the test sets. The error is 24,42. \n\nBecause the relationship between \\emph{horsepower} and \\emph{mpg} is non-linear the model can be improved upon by using higher polynomial features for fitting the model. Using second degree polynomials is shown in Listing \\ref{lst:poly2_valid}. Third degree polynomials was also applied in the exercise.\n\n\\begin{lstlisting}[language=Python, label=lst:poly2_valid, caption=Polynomial features with degree = 2]\npoly = PolynomialFeatures(degree=2)\nX_poly_train_deg2 = poly.fit_transform(X_train)\nX_poly_test_deg2 = poly.fit_transform(X_test)\n\nlm.fit(X_poly_train_deg2, Y_train)\n\\end{lstlisting}\n\nUsing polynomials shows a clear improvement of the MSE, but not continuously. 
Using the second degree polynomial shows a drop in the MSE from 24.42 to 19.46, but using the third degree polynomial shows a slight increase from 19.46 to 19.55. \n\n\\section{Leave-One-Out Cross-Validation}\nIn the validation set approach, only half of the data is used for training the model. This might give an overestimate of the MSE compared to a model trained on the complete data set. The MSE might also vary a lot when doing random splits of the data set \\citep[pp. 54]{AIbook}. Leave-one-out cross-validation tries to address this drawback and is illustrated in Figure \\ref{fig:loocv}. It splits the data set into its n observations and then uses n-1 observations for training and the last observation as the validation set. It then iterates, so that every observation is used as validation once. This might address the drawbacks of the validation set approach, but it is also very time-consuming, as a model has to be fitted n times.\n\n\\myFigure{LOOCV.png}{Leave-one-out cross-validation}{fig:loocv}{0.4}\n\n\\subsection{Lab 5.3.2 - Leave-One-Out Cross-Validation}\nFor the LOOCV exercise, the \\emph{Auto} data is read just like in Lab 5.3.1, but not split into two random halves. The \\emph{horsepower} and \\emph{mpg} data parts are still assigned to \\emph{X\\_train} and \\emph{Y\\_train}, respectively. \n\n\\begin{lstlisting}[language=Python, label=lst:LOOCV, caption=Leave-one-out cross-validation loop]\nfor i in range(1, 6):\n\tlm = linear_model.LinearRegression()\n\tpoly = PolynomialFeatures(degree=i)\n\tX_poly_train = poly.fit_transform(X_train)\n\t\n\tkf = KFold(n_splits=X_train.shape[0]) \n\ttest = cross_val_score(lm, X_poly_train, Y_train, cv = kf, \n\t\t\tscoring = \"neg_mean_squared_error\")\n\t\n\tprint(np.mean(-test))\n\\end{lstlisting}\n\nIn Listing \\ref{lst:LOOCV} the LOOCV is performed. Lines 2-4 provide the same functionality as in Listing \\ref{lst:poly2_valid}, where polynomials are used for the validation set approach. However, here the degree increases iteratively up to a fifth degree polynomial.\n\nThe \\emph{KFold} function is used for LOOCV to split the data set into as many folds as there are observations, i.e. making 392 folds. Then everything is used as parameters for the \\emph{cross\\_val\\_score} function. It uses the model, the \\emph{horsepower} training set transformed to an i'th degree polynomial, the \\emph{mpg} training set and the folds to evaluate a score by cross-validation.\n\nThe mean squared error is lastly computed from the result of the cross-validation. Once again it is clear that there is improvement when using a second degree polynomial, but from there no clear improvement can be seen. Here the MSE is a bit lower than with the validation set approach. This can be caused by the previously discussed overestimation of the MSE when training on only half the data set. For the first degree polynomial the MSE is 24.23, for the second degree polynomial 19.25, and for the third degree polynomial the MSE was calculated to be 19.33. All of these error rates are lower than with the validation set approach.\n\n\n\\section{K-Fold Cross-Validation}\nK-fold cross-validation is similar to LOOCV; however, it surpasses LOOCV in some areas. Instead of using only one observation as the validation set, the data set is split into k roughly equal-sized parts, called \\emph{folds}, and one of these folds is the validation set. All other folds are used as the training set. Similar to LOOCV, the algorithm then iterates over which fold is the validation set. 
Since there are fewer iterations to go through, k-fold cross-validation has a computational advantage over LOOCV, but that is not all. One can also look at what is called the bias-variance trade-off. \n\nThe LOOCV gives an approximately unbiased estimate of the test error compared to the validation set approach, due to training on all but one observation. However, in an estimating procedure, another important aspect is the procedure's variance, and this is an aspect where k-fold cross-validation is superior to LOOCV \\citep[pp. 304-305]{datamining}. When performing LOOCV, the result is an average of \\emph{n} fitted models, and since the training sets for each model only vary by one observation, the outputs are highly correlated. In k-fold cross-validation the training sets of the models have a smaller overlap, and since the mean of highly correlated outputs has high variance, the LOOCV's results have higher variance than the k-fold cross-validation's results. So the bias-variance trade-off is associated with k in k-fold cross-validation: the higher the k, the more folds, and thereby the higher the correlation between the fitted models and the higher the variance. In practice, $k=5$ or $k=10$ gives a good result for estimating the test error \\citep[pp. 183]{ISLR} \\citep[pp. 54]{AIbook}.\n\n\\myFigure{K_fold.png}{Representation of K-fold cross-validation}{fig:K_fold}{0.4}\n\nThe idea behind k-fold cross-validation is illustrated in Figure \\ref{fig:K_fold}, where the blue folds are the training sets and the orange fold is the validation set. Note that the case where k is equal to the number of observations is what was just described as LOOCV. Having a lower value of k therefore provides a less computationally expensive way of doing cross-validation.\n\n\\FloatBarrier\n\\subsection{Lab 5.3.3 - k-Fold Cross-Validation}\nThe only difference from Lab 5.3.2 to this exercise is that now the number of folds is set by k, and not by the number of observations. In Listing \\ref{lst:K_fold} it can be seen that the number of splits is set to 10, as the exercise specifies $k=10$, and the loop now runs 10 times, up until a 10th degree polynomial. The rest is similar to Listing \\ref{lst:LOOCV}.\n\n\\begin{lstlisting}[language=Python, label=lst:K_fold, caption=K-fold cross-validation loop]\nfor i in range(1, 11):\n\t...\n\tk_fold = KFold(n_splits=10, shuffle=True) \n\t...\n\\end{lstlisting}\n\\FloatBarrier\n\nIn Table \\ref{table:kfold_polynomials_mse} the results of the k-fold cross-validation are shown for polynomials of degree 1 to 10. The mean squared error is lower than with the validation set approach. Compared to LOOCV it has a higher MSE for some polynomials, while for others the MSE is lower. 
As explained, this method is computationally less expensive than LOOCV, and this was also obvious when executing the implementation.\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\begin{subtable}[t]{2in}\n\t\t\\centering\n\t\t\\begin{tabular}{ p{2.5cm} p{1.5cm}  }\n\t\t\t\\textbf{Degree} & \\textbf{MSE} \\\\\n\t\t\t\\hline \n\t\t\t\\\\\n\t\t\t1 & 24.22 \\\\\\hline\n\t\t\t\\\\\n\t\t\t2 & 19.19 \\\\\\hline\n\t\t\t\\\\\n\t\t\t3 & 19.28 \\\\\\hline\n\t\t\t\\\\\n\t\t\t4 & 19.54  \\\\\\hline\n\t\t\t\\\\\n\t\t\t5 & 19.00  \\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:mse_validation}\n\t\\end{subtable}\n\t\\quad \n\t\\begin{subtable}[t]{2in}\n\t\t\\centering\n\t\t\\begin{tabular}{ p{2.5cm} p{1.5cm}  }\n\t\t\t\\textbf{Degree} & \\textbf{MSE} \\\\\n\t\t\t\\hline \n\t\t\t\\\\\n\t\t\t6 & 18.85 \\\\\\hline\n\t\t\t\\\\\n\t\t\t7 & 19.48  \\\\\\hline\n\t\t\t\\\\\n\t\t\t8 & 19.41  \\\\\\hline\n\t\t\t\\\\\n\t\t\t9 & 19.02  \\\\\\hline\n\t\t\t\\\\\n\t\t\t10 & 19.04  \\\\\\hline\n\t\t\\end{tabular}\n\t\t\\label{table:mse_cross}\n\t\\end{subtable}\n\t\\caption{Polynomials from 1 to 10 and calculated mean squared error.}\\label{table:kfold_polynomials_mse}\n\\end{table}\n\n\n\\section{Bootstrap}\nThis section is about the bootstrap method, which is a statistical tool that can be used to calculate uncertainty associated with estimators or statistical learning methods. It can, for example, be used to estimate the standard errors of the coefficients from a linear regression fit. The bootstrap is a simple approach that can be adapted and applied to many different statistical learning methods. It relies on random sampling with replacement. Replacement means that the same observation can occur more than once, or perhaps not at all, in a bootstrap data set. \n\n% Link to image: https://www.draw.io/?lightbox=1&highlight=0000ff&edit=_blank&layers=1&nav=1#G0Bw37nIXex8aXSl91SjQ4MjlvNXc\n\\myFigure{bootstrap_example.PNG}{A graphical example of the bootstrap approach}{fig:bootstrap}{0.6}\n\nFigure \\ref{fig:bootstrap} is a graphical example of the bootstrap approach on a sample containing 3 observations. These observations are randomly sampled with replacement from the original data set Z. For each bootstrap data set, $\\hat{\\alpha}$ can be calculated.\n\nWith all $\\hat{\\alpha}$ values estimated, the mean and standard deviation can be computed to reason about the accuracy of the estimator.\n\n\\subsection{Lab 5.3.4 - The Bootstrap}\n\nThe goal of Lab 5.3.4 is to show the use of the bootstrap on the example used in Section 5.2 \\citep[pp. 187-190]{ISLR}, which uses the \\emph{Portfolio} data set. The second task of Lab 5.3.4 is to estimate the accuracy of the linear regression model on the \\emph{Auto} data set.\n\n\\subsubsection{Estimating the Accuracy of a Statistic of Interest}\n\nInitially, a function for calculating $\\alpha$ is required. The function is shown in Listing \\ref{lst:alpha}. The function receives \\emph{data}, which is a vector with the properties X and Y. 
The second input is \\emph{index}, which is used to access the wanted X and Y values in the vector \\emph{data}.\nWith these X and Y values, $\\alpha$ can now be calculated and returned.\n\\newpage\n\\begin{lstlisting}[caption={Function for calculating $\\alpha$ in Python}, label=lst:alpha, mathescape=true]\ndef alpha(data,index):\n\tX = data['X'][index]\n\tY = data['Y'][index]\n\t# np.cov returns a 2x2 covariance matrix; the final [0,1]\n\t# selects the entry where the covariance term was used\n\treturn ((np.var(Y) - np.cov(X,Y)) / (np.var(X) + np.var(Y) - \n\t\t2*np.cov(X,Y)))[0,1]\n\\end{lstlisting}\n\nThe next step is to bootstrap the \\emph{Portfolio} data set.\nTo perform the bootstrap, a second function is needed, which is seen in Listing \\ref{lst:bootstrap}.\n\n\\begin{lstlisting}[caption={Bootstrap function in Python}, label=lst:bootstrap, mathescape=true]\ndef boot_python(data, function, num_of_iteration):\n\tn = data.shape[0]\n\tidx = np.random.randint(0, n, (num_of_iteration, n))\n\tstat = np.zeros(num_of_iteration)\n\tfor i in range(len(idx)):\n\t\tstat[i] = function(data, idx[i])\n\treturn {'Mean': np.mean(stat), 'std. error': np.std(stat)}\n\\end{lstlisting}\n\nThe bootstrap function receives three arguments, the first being \\emph{data}, which is the data that the bootstrap approach should be performed on. The next argument is the function for calculating $\\alpha$. The last argument is the number of bootstrap samples that should be created.  \n\nNow everything is set to perform a bootstrap on the \\emph{Portfolio} data set with \\emph{num\\_of\\_iteration} set to 1,000. The result is as follows.\n\n\\begin{center}\n\t$\\hat{\\alpha} = 0.5834$\n\\end{center}\n\n\\begin{center}\n\t$SE(\\hat{\\alpha}) = 0.09102$\n\\end{center}\n\n\n\\subsubsection{Estimating the Accuracy of a Linear Regression Model}\n\nIn the next part of Lab 5.3.4 the goal is to use the bootstrap approach to check the variability of $\\beta_0$ and $\\beta_1$, which are the intercept and slope terms of the linear regression model. The linear regression model in this exercise uses \\emph{horsepower} to make a prediction of \\emph{mpg}. \\emph{horsepower} and \\emph{mpg} are part of the \\emph{Auto} data set.\n\nThe first step is to create a function to calculate the intercept and slope; this is seen in Listing \\ref{lst:boot}. The function \\emph{boot} performs a linear regression fit on X and Y. \n\\newpage\n\\begin{lstlisting}[caption={Boot function in Python}, label=lst:boot, mathescape=true]\ndef boot(data, index):\n\tX = data['horsepower'][index]\n\tY = data['mpg'][index]\n\tslope, intercept, r_value, p_value, std_err = stats.linregress(X,Y)\n\treturn [intercept, slope]\n\\end{lstlisting}\n\nThe next step is to adjust the bootstrap function from the last part. The modified version is the same function as in Listing \\ref{lst:bootstrap}, except that it now returns the mean intercept, its standard error, the mean slope and its standard error.\n\nNow, with 1,000 bootstrap samples, the estimates of the standard errors for the intercept and slope can be computed.\nThe bootstrap estimates of these standard errors are shown below.\n\n\\begin{center}\n\t$SE(\\hat{\\beta_0}) = 0.8601$\n\\end{center} \n\\begin{center}\n\t$SE(\\hat{\\beta_1}) = 0.007335$\n\\end{center}\n\nThe standard error estimates $SE(\\hat{\\beta_0})$ and $SE(\\hat{\\beta_1})$ obtained from the library when performing a linear regression on the \\emph{Auto} data set are 0.717 for the intercept and 0.0064 for the slope. These are different from the results which the bootstrap estimates yielded. 
This is caused by the library relying on the assumption that the data is linear, while the relationship is in fact non-linear. In this case, the bootstrap is therefore more reliable. \n\nSince the \\emph{Auto} data set has a non-linear relationship, using a quadratic model to fit the data should yield results that are closer to the bootstrap estimates.\n\nSo a new function for finding the standard error is needed, and this function can be seen in Listing \\ref{lst:bootstrap_q}.\n\n\\begin{lstlisting}[caption={Function to calculate standard error with a quadratic model}, label=lst:bootstrap_q, mathescape=true]\ndef boot_fn2(data, index):\n\tformula = 'mpg ~ horsepower+I(horsepower**2)'\n\tmodel = smf.glm(formula=formula, data=data, subset=index)\n\tresult = model.fit()\n\treturn result.params\n\\end{lstlisting}\n\nThe results it yields with 1,000 bootstrap samples are:\n\n\\begin{center}\n\t$SE(\\hat{\\beta_0}) = 2.06$\n\\end{center}\n\\begin{center}\n\t$SE(\\hat{\\beta_1}) = 0.0328$\n\\end{center}\n\\begin{center}\n\t$SE(\\hat{\\beta_2}) = 0.00012$\n\\end{center}\n\nThe new values from the quadratic model show a closer correspondence between the estimates from the bootstrap and the estimates from the library.", "meta": {"hexsha": "6037185cef5ae5dd930aecd518ea55207bb6c708", "size": 16730, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter/cross_validation.tex", "max_stars_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_stars_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/chapter/cross_validation.tex", "max_issues_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_issues_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter/cross_validation.tex", "max_forks_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_forks_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6078431373, "max_line_length": 1060, "alphanum_fraction": 0.7656306037, "num_tokens": 4375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8128673178375734, "lm_q1q2_score": 0.5967417708035402}}
{"text": "\\RequirePackage{fix-cm} \t\t\t\t\t\t\t\t\t\t\t% \n\\documentclass[11pt, a4paper, parskip=half*, bibliography=totoc, cleardoublepage=empty, final,\nnumbers=noenddot]{scrbook}\n\\usepackage[ansinew]{inputenc}                                                                          \n\\usepackage[automark]{scrpage2}                                                                        \n\\usepackage{fixltx2e}                                                                                           \n\\usepackage[linktocpage]{hyperref}\n\\usepackage{breakurl}\n\\usepackage[english]{babel}\n\\usepackage{multicol} \n\\usepackage{multirow}\t\n\\usepackage{cite}\n\\usepackage{bm}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amssymb}\n\\usepackage{subcaption}\n\\usepackage{rotating}\n\\usepackage{fixmath}\n\\usepackage{braket}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\usepackage[margin=10pt,labelfont=bf]{caption}\n\\usepackage[top=2.5cm,left=3.5cm,right=2.5cm,bottom=3cm]{geometry}\n\\usepackage[onehalfspacing]{setspace}\n\\usepackage{lmodern}\n\\captionsetup{justification=raggedright,singlelinecheck=false}\n\\begin{document}\n\\chapter{Linear algebra}\nNote that we consider only real values.\n\\section{Gram-Schmidt orthogonalization}\nUsing orthogonal projections $\\frac{<v, u>} {<u, u>} u$ of vectors $v$ onto vectors $u$ and differences $v - \\frac{<v, u>}{<u, u>} u$, the Gram-Schmidt orthogonalization constructs from a set of vectors a set of orthogonal ones in an iterative procedure. Note that the difference $v - \\frac{<v, u>}{<u, u>} u$ is orthogonal to the projection (assuming $u$ is a unit vector):\n\\begin{align}\n\\langle <v, u> u, v- <v, u> u \\rangle &= \\langle <v, u> u, v \\rangle - \\langle <v, u> u, <v, u> u \\rangle \\\\\n&= <v, u>^2 - <v, u>^2 \\underbrace{<u, u>}_{=1} \\\\\n&= 0.\n\\label{eq:orthogonal-proj-zero}\n\\end{align}\nLet $S_V = \\{v_1, v_2, ..., v_n\\}$ be a set of vectors and $d$ be the dimension of the space $V$ spanned by the vectors in $S_V$. Then the Gram-Schmidt orthogonalization (or here orthonormalization) generates an orthonormal basis set $S_U = \\{u_1, u_2, ..., u_d\\}$ in $V$ by:\n\\begin{itemize}\n\\item[1.] $u_1 = \\frac{v_1}{\\| v_1\\|}$\n\\item[2.] For all $i>1$: first\n\\begin{equation}\n\\tilde{u}_i = v_i - \\sum_{k=1}^{i-1} <v_i, u_k> u_k     ,\n\\label{eq:gs-step}\n\\end{equation} \nthen $u_i = \\frac{\\tilde{u}_i}{\\| \\tilde{u}_i\\|}$.\n\\end{itemize}\nZero vectors, which occur when there is a linear dependency in $S_V$, are not added to $S_U$. The sum in Eq. \\ref{eq:gs-step} gives the orthogonal projection of the vector $v_i$ onto the hyperplane spanned by the $i-1$ vectors $u_k$. Note that in the code, we implement the orthogonalization also for functions (not only arrays), i.e. in ``ortho\\_basis.py''. If arrays (matrices) are used, the second step can be calculated via (in case of the standard scalar product)\n\\begin{equation}\n\\tilde{\\bm{u}}_i = \\bm{v}_i - \\bm{U}_{i-1}  \\bm{U}^T_{i-1}  \\bm{v}_i     ,\n\\end{equation} \nwhere $\\bm{U}_{i-1}$ denotes the matrix with columns $\\bm{u}_{1}, ..., \\bm{u}_{i-1}$. \\\\\nBetter numerical stability can be obtained by the \\textbf{modified Gram-Schmidt} orthogonalization which replaces the second step of the not-modified one by:\n\\begin{itemize}\n\\item[1.] Let $k=1$ and $\\tilde{u}^{(1)}_i = v_i$.\n\\item[2.] 
Calculate, first\n\\begin{equation}\n\\tilde{u}^{(k+1)}_i = \\tilde{u}^{(k)}_i -  <\\tilde{u}^{(k)}_i, u_k> u_k     ,\n\\end{equation} \nthen $u^{(k+1)}_i = \\frac{\\tilde{u}^{(k+1)}_i}{\\| \\tilde{u}^{(k+1)}_i\\|}$.\n\\item[3.] If $k=i-1$ stop and $u_i = u^{(k+1)}_i$. Otherwise, let $k = k+1$ and continue with step 2 (of this algorithm).\n\\end{itemize}\n\n\\section{QR decomposition}\nThe matrix $\\bm{A} \\in \\mathbb{R}^{m \\times n}$ is decomposed into an orthogonal matrix $\\bm{Q}$ and an upper triangular matrix $\\bm{R}$:\n\\begin{equation}\n\\bm{A}=\\bm{Q} \\bm{R}.\n\\end{equation}\nIn our implementation, $\\bm{Q}$ and $\\bm{R}$ do not need to be square matrices. \\\\\n\\subsection{Using Gram-Schmidt orthogonalization}\nUsing the Gram-Schmidt process an orthogonal basis set is found (for the column space of $\\bm{A}$) by transforming the columns of $\\bm{A}$. $\\bm{Q}$ is given by the new orthogonal vectors. Therefore, we use the size convention $\\bm{Q} \\in \\mathbb{R}^{m \\times \\text{rank}(A)}$ and $\\bm{R} \\in \\mathbb{R}^{\\text{rank}(A) \\times n}$. One way to obtain $\\bm{R}$ is by calculating $\\bm{R} = \\bm{Q}^T \\bm{A}$. However, in practice the whole matrix multiplication does not need to be carried out because some entries are known to become zero.\n\\subsection{Using Householder reflections}\nThe QR decomposition based on Householder reflections is known to be more numerically stable than the Gram-Schmidt process (however, I could not verify this statement through the cases I have looked into with the codes presented here.) The Householder reflection is a linear transformation that reflects a vector $x$ about some (hyper-)plane by\n\\begin{equation}\nx' = x - 2 <x, v> v, \n\\label{eq:housholder}\n\\end{equation}\nwhere $v$ is a unit vector orthogonal to the hyperplane. The reflected vector $x'$ has the same length as $x$:\n\\begin{align}\n\\| x' \\|^2 &= \\langle x - 2 <x, v> v, x - 2 <x, v> v \\rangle \\\\\n&= <x,x> - 2 \\langle x, 2 <x, v> v \\rangle + 4 <x, v>^2 \\underbrace{<v, v>}_{=1}\\\\\n&= <x, x> - 4 <x, v>^2 + 4 <x, v>^2 \\\\\n&= \\| x\\|^2.\n\\end{align} \nNow, let us define:\n\\begin{align}\nu &= x - \\| x \\| e, \\label{eq:proj-v1}\\\\\nv &= \\frac{u}{\\| u \\|},\n\\label{eq:proj-v2}\n\\end{align}\nwhere $e$ is a unit vector. Then, with $<e,e>=1$,\n\\begin{align}\nx' &= x - 2 \\frac{<x, x - \\| x \\| e> }{<x - \\| x \\|  e, x - \\| x \\|  e>} (x - \\| x \\|  e) \\\\\n&= x - 2 \\frac{<x, x> - \\| x \\| <x,  e> }{<x,x> - 2 \\| x \\| <x, e> + \\| x \\|^2 <e, e> } (x - \\| x \\|  e) \\\\\n&= x - 2 \\frac{ \\| x \\|^2 - \\| x \\| <x,  e> }{ \\| x \\|^2 - 2 \\| x \\| <x, e> + \\| x \\|^2  } (x - \\| x \\|  e) \\\\\n&= x - 2 \\frac{1}{2} (x - \\| x \\|  e) \\\\\n&=  \\| x \\| e.\n\\end{align}\nSo, interestingly, with the choice in Eq. \\ref{eq:proj-v1} and \\ref{eq:proj-v2} for $v$, the Householder reflection maps $x$ onto a multiple of the unit vector $e$. Crucially, if we choose for $e$ the vector $(1, 0, 0, ...)^T$, the Householder reflection based on this $v$ will transform $x$ into a vector where only one element is non-zero. As a consequence, we can build a clever iterative procedure to apply this reflection (partially) to every column of a matrix $\\bm{A}$ and transform $\\bm{A}$ to the targeted upper triangular form $\\bm{R}$ of the QR decomposition, as described in the following.\n\nConsider matrix form. We can rewrite Eq. 
\\ref{eq:housholder} to\n\\begin{align}\n\\bm{x}' = \\bm{Q} \\bm{x}\n\\end{align}\nwith the square matrix\n\\begin{align}\n\\bm{Q} = \\bm{I} - 2 \\bm{v} \\bm{v}^T.\n\\end{align}\nIn the first iteration step of the QR decomposition we define  \n\\begin{align}\n\\bm{u}_1 &= \\bm{A}_1 - \\| \\bm{A}_1 \\| \\bm{e}_1,\\\\\n\\bm{v}_1 &= \\frac{\\bm{u}_1 }{\\|\\bm{u}_1 \\|},\n\\end{align}\nwhere $\\bm{A}_1$ is the first column of $\\bm{A}$ and $\\bm{e}_1\\ = (1, 0,  ..., 0)^T$ a unit vector. Then, with $\\bm{Q}_1 = \\bm{I} - 2 \\bm{v}_1 \\bm{v}^T_1$,\n\\begin{align}\n\\bm{Q}_1 \\bm{A}_1 = \\begin{pmatrix}\n\\| \\bm{A}_1 \\| \\\\\n0 \\\\\n\\vdots\\\\\n0\n\\end{pmatrix}\n\\end{align}\nand\n\\begin{align}\n\\bm{Q}_1 \\bm{A} = \\begin{pmatrix}\n\\| \\bm{A}_1 \\| & \\star &\\hdots & \\star \\\\\n0 \\\\\n\\vdots & & \\bm{A}' & \\\\\n0\n\\end{pmatrix}.\n\\end{align}\nIn the next step this is repeated for the matrix $\\bm{A}'$ ($\\bm{A}$ with deleted first row and column), where now $\\bm{v}_2$ is based on the first column $\\bm{A}_1'$  of $\\bm{A}'$, resulting in a matrix $\\bm{Q}_2'$. To keep the original matrix sizes, $\\bm{Q}_k'$ of any iteration step $k$ is extended by: \n\\begin{align}\n\\bm{Q}_k  = \\begin{pmatrix}\n\\bm{I}_{k-1} & 0\\\\\n0 &  \\bm{Q}_k'\\\\\n\\end{pmatrix}.\n\\end{align}\nAfter $t=\\min(m-1, n)$ iterations, the transpose of the final orthogonal matrix $\\bm{Q}$ and the triangular matrix $\\bm{R}$ of the QR decomposition are given by\n\\begin{align}\n\\bm{Q}^T  = \\bm{Q}_t \\cdots \\bm{Q}_2 \\bm{Q}_1\n\\end{align}\nand \n\\begin{align}\n\\bm{R}  = \\bm{Q}^T \\bm{A}.\n\\end{align}\n\n\\section{Eigenvalues and -vectors }\nThe eigenvalue equation is written \n\\begin{equation}\n\\bm{A} \\bm{v} = \\lambda \\bm{v}, \n\\end{equation}\nwhere $\\bm{A}$ is a square matrix, the non-zero vector $\\bm{v}$ is the eigenvector, and the scalar $\\lambda$ is the eigenvalue. The eigenvectors yield the set of vectors which are not rotated in a linear transformation based on $\\bm{A}$ but only scaled or inverted in direction.\n\\subsection{QR algorithm}\nThe (practical) QR algorithm (without shifts) calculates the eigenvalues of $\\bm{A}$ using QR decompositions  iteratively. We initialize $\\bm{A}_0 = \\bm{A}$ and calculate at each iteration\n\\begin{equation}\n\\bm{A}_{k+1} = \\bm{R}_k \\bm{Q}_k,\n\\end{equation}\nwhere $\\bm{R}_k$ and $\\bm{Q}_k$ are obtained from the QR decomposition: $\\bm{A}_{k} = \\bm{Q}_k \\bm{R}_k$. Given that all $\\bm{A}_k$ are similar,\n\\begin{equation}\n\\bm{A}_{k+1} = \\bm{R}_k \\bm{Q}_k = \\bm{Q}^{-1}_k \\bm{Q}_k \\bm{R}_k \\bm{Q}_k = \\bm{Q}^{-1}_k \\bm{A}_k \\bm{Q}_k,\n\\end{equation}\nthey all have the same eigenvalues, as:\n\\begin{align}\n\\bm{A}_{k+1} \\bm{v} &= \\bm{Q}^{-1}_k \\bm{A}_k \\bm{Q}_k \\bm{v} \\\\\n\\lambda \\bm{v} &= \\bm{Q}^{-1}_k \\bm{A}_k \\bm{Q}_k \\bm{v} \\\\\n\\lambda \\bm{Q}_k \\bm{v} &=  \\bm{A}_k \\bm{Q}_k \\bm{v}.\n\\end{align}\nSo every eigenvalue $\\lambda$ of $\\bm{A}_{k+1}$ is also an eigenvalue of $\\bm{A}_{k}$, where the corresponding eigenvector of $\\bm{A}_{k}$ is  $\\bm{Q}_k \\bm{v}$. And one can similarly show that every eigenvalue of $\\bm{A}_{k}$ is an eigenvalue of $\\bm{A}_{k+1}$. \\\\\nUnder some conditions, the matrices $\\bm{A}_{k}$ will converge against a triangular matrix in the iterative procedure. The eigenvalues of a triangular matrix are given by its diagonal elements. 
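The iteration is short enough to state as a minimal sketch for a square matrix $\\bm{A}$ (assuming \\texttt{numpy}, and using its built-in QR factorization \\texttt{np.linalg.qr} rather than our own decompositions from above):
\\begin{verbatim}
import numpy as np

def qr_algorithm(A, iterations=500):
    # Unshifted QR algorithm: A_{k+1} = R_k Q_k where A_k = Q_k R_k.
    Ak = np.array(A, dtype=float)
    V = np.eye(Ak.shape[0])      # accumulates Q_0 Q_1 Q_2 ...
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q               # similar to A_k, so same eigenvalues
        V = V @ Q
    return np.diag(Ak), V        # approximate eigenvalues, accumulated Qs
\\end{verbatim}
The accumulated product of the $\\bm{Q}_k$ is returned as well; its role is explained next.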
If $\\bm{A}$ is symmetric, then the product of the matrices $\\bm{Q}_{0} \\bm{Q}_{1} \\bm{Q}_{2} \\cdots$ yields a matrix whose columns are the eigenvectors of $\\bm{A}$.\n\\section{Singular value decomposition}\nThe matrix $\\bm{A} \\in \\mathbb{R}^{m \\times n}$ is decomposed into:\n\\begin{equation}\n\\bm{A} = \\bm{U} \\bm{S} \\bm{V}^T,\n\\end{equation}\nwhere $\\bm{U} \\in \\mathbb{R}^{m \\times m}$ and $\\bm{V} \\in \\mathbb{R}^{n \\times n}$ are orthogonal matrices and  $\\bm{S} \\in \\mathbb{R}^{m \\times n}$ is a diagonal rectangular matrix. The diagonal entries of $\\bm{S}$ are given by the singular values of $\\bm{A}$, i.e. the square roots of the non-negative eigenvalues of $\\bm{A}^T \\bm{A}$. However, in this code, $\\bm{U}$ and $\\bm{V}$ are not necessarily square matrices but shaped by $\\text{rank}(\\bm{A})$ while $\\bm{S}$ is square. We compute the decomposition by:\n\\begin{itemize}\n\\item[1.] Calculate the eigenvalues and -vectors of $\\bm{A}^T \\bm{A}$.\n\\item[2.] Build the matrix $\\bm{V}$ of the eigenvectors as columns and the rectangular diagonal matrix $\\bm{S}$  with square roots of the non-zero eigenvalues.\n\\item[3.] Calculate $\\bm{U} = \\bm{A} \\bm{V} \\bm{S}^{-1}$.\n\\end{itemize}\nInterestingly, when defining a linear transformation $T$ with $\\bm{A}$ from $K^n$ to $K^m$, then \n\\begin{align}\nT(\\bm{V}_i) &= \\sigma_i \\bm{U}_i \\ \\text{ for } \\ i = 1,..., \\text{rank}(\\bm{A})\\\\\nT(\\bm{V}_i) &= 0 \\ \\text{ for } \\ i > \\text{rank}(\\bm{A}).\n\\end{align}\nThis means that one can find orthonormal bases of $K^n$ and $K^m$ such that $T$ maps the $i$-th basis vector of $K^n$ to a non-negative multiple of the $i$-th basis vector of $K^m$, and yields zero for the remaining basis vectors. \n\n\\section{Pseudoinverse}\nIn order to calculate the generalized inverse $\\bm{A}^+ \\in \\mathbb{R}^{n \\times m}$ of a matrix  $\\bm{A} \\in \\mathbb{R}^{m \\times n}$ that is non-square or not invertible, we use the singular value decomposition $\\bm{A} = \\bm{U} \\bm{S} \\bm{V}^T$:\n\\begin{equation}\n\\bm{A}^+ = \\bm{V} \\bm{S}^{-1} \\bm{U}^T.\n\\end{equation}\nThis definition fulfills the Moore-Penrose conditions that define a pseudoinverse:\n\\begin{itemize}\n\\item[1.] $\\bm{A} \\bm{A}^+ \\bm{A} = \\bm{A}$\n\\item[2.] $\\bm{A}^+ \\bm{A} \\bm{A}^+ = \\bm{A}^+$\n\\item[3.] $(\\bm{A} \\bm{A}^+)^T = \\bm{A} \\bm{A}^+$\n\\item[4.] $(\\bm{A}^+ \\bm{A})^T = \\bm{A}^+ \\bm{A}$.\n\\end{itemize}\n\n\\section{Cholesky decomposition} \\label{sec:cholesky}\nThe Cholesky decomposition of a positive-definite matrix $\\bm{A} \\in \\mathbb{R}^{n \\times n}$ is given by\n\\begin{equation}\n\\bm{A} = \\bm{L} \\bm{L}^T\n\\end{equation}\nwhere $\\bm{L} \\in \\mathbb{R}^{n \\times n}$ is a lower triangular matrix with positive diagonal elements. The decomposition can be shown using the QR decomposition.\nA positive-definite matrix $\\bm{A}$ can be written as a product of its square root matrix, $\\bm{A} = \\bm{B} \\bm{B}^T$. With $\\bm{B}^T = \\bm{Q} \\bm{R}$ and $\\bm{R} = \\bm{L}^T$, we have:\n\\begin{align}\n\\bm{A} = \\bm{B} \\bm{B}^T = \\bm{R}^T \\bm{Q}^T \\bm{Q} \\bm{R} = \\bm{R}^T \\bm{R} = \\bm{L} \\bm{L}^T. \n\\end{align}\nThe elements of $\\bm{L}$ are determined by:\n\\begin{align}\nL_{j,j} &= \\sqrt{A_{j, j} - \\sum_{k=1}^{j-1} L_{j,k}^2},\\\\\nL_{i,j} &= \\frac{1}{L_{j,j}} \\left( A_{i, j} - \\sum_{k=1}^{j-1} L_{i,k} L_{j,k}\\right) \\quad \\text{for} \\quad i>j.\n\\end{align}\nThe Cholesky decomposition is useful for solving $\\bm{A} \\bm{x} = \\bm{b}$. 
The equation is rewritten: $\\bm{L} \\bm{L}^T \\bm{x} = \\bm{b}$. Then one can solve first $\\bm{L}  \\bm{y} = \\bm{b}$ using forward substitution and next $ \\bm{L}^T \\bm{x} = \\bm{y}$ via back substitution.\n\n\n\\chapter{Quadratic programming} \\label{ch:qp}\nConsider the optimization problem:\n\\begin{align}\n\\min_{\\bm{x}} \\quad &\\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x}   \\\\\n\\text{subject to} \\quad &\\bm{A} \\bm{x} = \\bm{b} \\\\\n                  &\\bm{G} \\bm{x} \\leq \\bm{h} \\\\\n                  &\\bm{x} \\geq \\bm{0}.\n\\end{align} \nThis problem is solved by quadratic programming. Before that, we transform the inequality constraint $\\bm{G} \\bm{x} \\leq \\bm{h}$ to an equality constraint, by introducing slack variables $\\bm{\\xi} >\\bm{0}$ such that  \\mbox{$\\bm{G} \\bm{x} + \\bm{\\xi} = \\bm{h}$}. This transforms the constraints to:\n\\begin{align}\n\\begin{pmatrix}\n\\bm{A} & \\bm{0} \\\\\n\\bm{G} & \\bm{I}\n\\end{pmatrix} \n\\begin{pmatrix}\n\\bm{x} \\\\\n\\bm{\\xi}\n\\end{pmatrix} \n&= \n\\begin{pmatrix}\n\\bm{b} \\\\\n\\bm{h}\n\\end{pmatrix}, \\\\\n\\begin{pmatrix}\n\\bm{x} \\\\\n\\bm{\\xi}\n\\end{pmatrix} \n&\\geq \\bm{0}.\n\\end{align} \nFrom now on, we will assume that this is already taken care of (also the right vectorizations in the implementation) and that it is enough to consider a problem of the form:\n\\begin{align}\n\\min_{\\bm{x}} \\quad &\\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x}   \\\\\n\\text{subject to} \\quad &\\bm{A} \\bm{x} = \\bm{b} \\\\\n                  &\\bm{x} \\geq \\bm{0}.\n\\end{align}\nWe will solve this problem by the primal-dual interior-point method.\\\\\nFirst, we reformulate the problem from minimizing a function with constraints to minimizing a function with additional penalty terms, using the method of Lagrange multipliers and introducing a logarithmic barrier function for the non-negative variables to force almost all iterates to stay in the feasible set (the space of $\\bm{x}$ fulfilling the constraints). Define the Lagrange function:\n\\begin{align}\n\\mathcal{L} = \\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x} - \\bm{\\lambda}^T (\\bm{A} \\bm{x} - \\bm{b}) - \\mu \\sum_i \\log{x_i},\n\\end{align}\nwhere $\\bm{\\lambda} > \\bm{0}$ and $\\mu > 0$. For a given $\\mu$, the minimum point satisfies the Karush-Kuhn-Tucker conditions:\n\\begin{align}\n\\nabla_{\\bm{x}} \\mathcal{L} &=  \\bm{Q} \\bm{x} + \\bm{p} -  \\bm{A}^T \\bm{\\lambda} -  \\mu \\bm{\\chi} = 0,\\\\\n\\nabla_{\\bm{\\lambda}} \\mathcal{L} &=  \\bm{A} \\bm{x} - \\bm{b} = 0,\n\\end{align}\nwhere $\\chi_i = 1/x_i$. By defining $\\bm{s} = \\mu \\bm{\\chi}$ the equation to be solved becomes:\n\\begin{align}\n\\bm{f}(\\bm{x}, \\bm{\\lambda}, \\bm{s}) = \\bm{0}\n\\end{align} \nwith\n\\begin{align}\n\\bm{f}(\\bm{x}, \\bm{\\lambda}, \\bm{s}) = \n\\begin{pmatrix}\n\\bm{Q} \\bm{x} + \\bm{p} -  \\bm{A}^T \\bm{\\lambda} - \\bm{s} \\\\\n\\bm{A} \\bm{x} - \\bm{b} \\\\\n\\bm{X} \\bm{s} - \\bm{\\mu}\n\\end{pmatrix},\n\\end{align} \nwhere $\\bm{X} = \\text{diag}(\\bm{x})$ is a diagonal matrix with the elements of $\\bm{x}$ on its diagonal and $\\bm{\\mu}$ is a vector whose entries all equal $\\mu$. We will solve this problem using Newton's method, i.e. an iterative procedure to find the root, where at each step $x$ is updated by $\\Delta x$, with $f'(x) \\, \\Delta x = -f(x)$. 
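As a minimal illustration of this update rule (a plain-Python sketch for a scalar function, not part of the QP solver itself):
\\begin{verbatim}
# Newton's method for f(x) = x^2 - 2 with f'(x) = 2x:
# solve f'(x) dx = -f(x), then update x <- x + dx.
x = 1.0
for _ in range(6):
    x += -(x**2 - 2.0) / (2.0 * x)
print(x)  # ~1.414213..., i.e. sqrt(2)
\\end{verbatim}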
\n\n\\chapter{Quadratic programming} \\label{ch:qp}\nConsider the optimization problem:\n\\begin{align}\n\\min_{\\bm{x}} \\quad &\\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x}   \\\\\n\\text{subject to} \\quad &\\bm{A} \\bm{x} = \\bm{b} \\\\\n                  &\\bm{G} \\bm{x} \\leq \\bm{h} \\\\\n                  &\\bm{x} \\geq \\bm{0}.\n\\end{align}\nThis problem is solved by quadratic programming. Before that, we transform the inequality constraint $\\bm{G} \\bm{x} \\leq \\bm{h}$ into an equality constraint by introducing slack variables $\\bm{\\xi} \\geq \\bm{0}$ such that \\mbox{$\\bm{G} \\bm{x} + \\bm{\\xi} = \\bm{h}$}. This transforms the constraints to:\n\\begin{align}\n\\begin{pmatrix}\n\\bm{A} & \\bm{0} \\\\\n\\bm{G} & \\bm{I}\n\\end{pmatrix}\n\\begin{pmatrix}\n\\bm{x} \\\\\n\\bm{\\xi}\n\\end{pmatrix}\n&=\n\\begin{pmatrix}\n\\bm{b} \\\\\n\\bm{h}\n\\end{pmatrix}, \\\\\n\\begin{pmatrix}\n\\bm{x} \\\\\n\\bm{\\xi}\n\\end{pmatrix}\n&\\geq \\bm{0}.\n\\end{align}\nFrom now on we assume that this transformation has already been carried out (including the corresponding vectorizations in the implementation), so that it is enough to consider a problem of the form:\n\\begin{align}\n\\min_{\\bm{x}} \\quad &\\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x}   \\\\\n\\text{subject to} \\quad &\\bm{A} \\bm{x} = \\bm{b} \\\\\n                  &\\bm{x} \\geq \\bm{0}.\n\\end{align}\nWe will solve this problem by the primal-dual interior-point method.\\\\\nFirst we reformulate the constrained minimization as the minimization of a function with additional penalty terms, using the method of Lagrange multipliers and introducing a logarithmic barrier function for the non-negative variables, which forces (almost all) iterates to stay in the feasible set (the set of $\\bm{x}$ fulfilling the constraints). Define the Lagrange function:\n\\begin{align}\n\\mathcal{L} = \\frac{1}{2} \\bm{x}^T \\bm{Q} \\bm{x} + \\bm{p}^T \\bm{x} - \\bm{\\lambda}^T (\\bm{A} \\bm{x} - \\bm{b}) - \\mu \\sum_i \\log{x_i},\n\\end{align}\nwhere $\\bm{\\lambda}$ collects the multipliers of the equality constraints and $\\mu > 0$. For a given $\\mu$, the minimum point satisfies the Karush-Kuhn-Tucker conditions:\n\\begin{align}\n\\nabla_{\\bm{x}} \\mathcal{L} &=  \\bm{Q} \\bm{x} + \\bm{p} -  \\bm{A}^T \\bm{\\lambda} -  \\mu \\bm{\\chi} = 0,\\\\\n\\nabla_{\\bm{\\lambda}} \\mathcal{L} &=  \\bm{A} \\bm{x} - \\bm{b} = 0,\n\\end{align}\nwhere $\\chi_i = 1/x_i$. By defining $\\bm{s} = \\mu \\bm{\\chi}$ the equation to be solved becomes:\n\\begin{align}\n\\bm{f}(\\bm{x}, \\bm{\\lambda}, \\bm{s}) = \\bm{0}\n\\end{align}\nwith\n\\begin{align}\n\\bm{f}(\\bm{x}, \\bm{\\lambda}, \\bm{s}) =\n\\begin{pmatrix}\n\\bm{Q} \\bm{x} + \\bm{p} -  \\bm{A}^T \\bm{\\lambda} - \\bm{s} \\\\\n\\bm{A} \\bm{x} - \\bm{b} \\\\\n\\bm{X} \\bm{s} - \\bm{\\mu}\n\\end{pmatrix},\n\\end{align}\nwhere $\\bm{X} = \\text{diag}(\\bm{x})$ is a diagonal matrix with the elements of $\\bm{x}$ and $\\bm{\\mu}$ is a vector filled with $\\mu$. We solve this root-finding problem using Newton's method, i.e. an iterative procedure where, at each step, the current iterate is updated by a step $\\bm{d}$ obtained from the linearization $\\bm{J} \\bm{d} = -\\bm{f}$. The Jacobian matrix (the matrix of partial derivatives of $\\bm{f}$) is given by:\n\\begin{align}\n\\bm{J}(\\bm{x}, \\bm{\\lambda}, \\bm{s}) =\n\\begin{pmatrix}\n\\bm{Q} &  -\\bm{A}^T & - \\bm{I} \\\\\n\\bm{A} & \\bm{0} & \\bm{0} \\\\\n\\bm{S} & \\bm{0} & \\bm{X}\n\\end{pmatrix},\n\\end{align}\nwhere $\\bm{S}=\\text{diag}(\\bm{s})$.\nThen, at each iteration $k$, we solve the linear system\n\\begin{align}\n\\bm{J}(\\bm{t}^k) \\bm{d} = -\\bm{f}(\\bm{t}^k),\n\\end{align}\nwhere $\\bm{t}^k = (\\bm{x}^k, \\bm{\\lambda}^k, \\bm{s}^k)$, and then update $\\bm{t}^{k+1} = \\bm{t}^{k} + \\alpha \\bm{d}$. Note that this gives a minimum point $\\bm{x}_\\mu$ only for a given $\\mu$. However, by decreasing $\\mu$, $\\bm{x}_\\mu$ converges towards the solution of the original problem. In practice, $\\mu$ (and also $\\alpha$) can be updated at each iteration $k$ of the Newton(-like) method.
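\n\nA single iteration of this scheme can be written compactly. The following Python sketch assumes NumPy; it assembles $\\bm{f}$ and $\\bm{J}$ as above and takes one damped Newton step (a practical solver would additionally shrink $\\mu$ and choose $\\alpha$ such that $\\bm{x}, \\bm{s} > \\bm{0}$ is preserved):\n\\begin{verbatim}\nimport numpy as np\n\ndef ipm_step(Q, p, A, b, x, lam, s, mu, alpha=0.5):\n    # One Newton step J(t) d = -f(t) for the barrier problem at fixed mu.\n    n, m = len(x), len(lam)\n    f = np.concatenate([\n        Q @ x + p - A.T @ lam - s,  # stationarity\n        A @ x - b,                  # primal feasibility\n        x * s - mu,                 # perturbed complementarity\n    ])\n    J = np.block([\n        [Q,          -A.T,             -np.eye(n)],\n        [A,          np.zeros((m, m)), np.zeros((m, n))],\n        [np.diag(s), np.zeros((n, m)), np.diag(x)],\n    ])\n    d = np.linalg.solve(J, -f)\n    return x + alpha * d[:n], lam + alpha * d[n:n + m], s + alpha * d[n + m:]\n\\end{verbatim}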
\n\n\\chapter{Machine learning}\nWe consider a data set of $n$ input-output pairs $\\{(\\bm{x}_1, y_1), ..., (\\bm{x}_n, y_n)\\}$, where $y_i$ represents the value of a target property of data point $i$ that is to be predicted from a vectorial input (the representation of the data point) $\\bm{x}_i \\in \\mathbb{R}^m$. Often, the goal is to identify a function $f$ out of a function space $\\mathcal{F}$ in order to describe the input-output relationship. A general formulation of the ML optimization problem is given by \n\\begin{equation}\n\\argmin_{f \\in \\mathcal{F}} \\sum_{i=1}^n L(f(\\bm{x}_i), y_i)+\\lambda r(f),\n\\label{eq:opt-ml}\n\\end{equation}\nwhere $L$ is a loss function and $r(f)$ is a measure of the \\textit{complexity} of $f$. The parameter $\\lambda \\geq 0$ regulates the trade-off between the accuracy and a low complexity of the model. One reason for regularization is to avoid \\textit{overfitting} (fitting \\textit{unimportant fluctuations}) in order to increase the prediction accuracy on data points outside the training set. The choice of $\\mathcal{F}$, $L$ and $r$ determines the ML method. All supervised-learning methods used in the code are based on the squared error loss $ L(f(\\bm{x}_i),  y_i) = (f(\\bm{x}_i)- y_i)^2$.\\\\\n\nIn the following sections we write many equations in matrix form and collect the $n$ data points into a target vector $\\bm{y} \\in \\mathbb{R}^n$ and an input matrix $\\bm{X} \\in \\mathbb{R}^{n \\times m}$. We assume that the columns of $\\bm{X}$ are standardized to zero mean and unit variance, for numerical or conceptual reasons.\n\n\\section{Linear regression}\nThe most basic and widely used machine-learning method is linear regression. One reason is that many linear-regression problems can be solved by linear algebra and convex optimization. The least-squares problem\n\\begin{equation}\n\\argmin_{\\bm{c} \\in \\mathbb{R}^m} \\| \\bm{y} - \\bm{X} \\bm{c} \\|^2\n\\label{eq:ls}\n\\end{equation}\nforms the foundation of linear regression analysis, determining the regression coefficients $\\bm{c}$ of a linear model $f(\\bm{x}) = \\bm{x}^T \\bm{c}$. Its solution is given by the closed-form expression $\\bm{c}^* = (\\bm{X}^\\text{T}\\bm{X})^{-1}\\bm{X}^\\text{T} \\bm{y}$, which gives the orthogonal projection $\\text{Proj}_{\\mathcal{C}(\\bm{X})}(\\bm{y}) = \\bm{X} \\bm{c}^*$ of $\\bm{y}$ onto the column space $\\mathcal{C}(\\bm{X}) = \\{ \\bm{X} \\bm{c} \\in \\mathbb{R}^n  : \\bm{c} \\in \\mathbb{R}^m \\}$ of $\\bm{X}$.\\\\ \nThe fact that the orthogonal projection minimizes the Euclidean distance in Eq. \\ref{eq:ls} is clear because any other vector $\\bm{X} (\\bm{c}^* - \\bm{k})$ with nonzero $\\bm{X}\\bm{k}$ has a larger distance to $\\bm{y}$, due to the orthogonality of the residual to the column space,\n\\begin{equation}\n\\langle \\bm{y} - \\text{Proj}_{\\mathcal{C}(\\bm{X})}(\\bm{y}), \\bm{X} \\bm{k} \\rangle = 0,\n\\label{eq:orthogonal-proj-zero}\n\\end{equation}\nand the Pythagorean theorem:\n\\begin{align}\n\\| \\bm{y} -  \\text{Proj}_{\\mathcal{C}(\\bm{X})}(\\bm{y}) +\\bm{X}\\bm{k}\\|^2 &= \\| \\bm{y} -  \\text{Proj}_{\\mathcal{C}(\\bm{X})}(\\bm{y}) \\|^2 + \\| \\bm{X}\\bm{k} \\|^2 \\\\\n&> \\| \\bm{y} -  \\text{Proj}_{\\mathcal{C}(\\bm{X})}(\\bm{y}) \\|^2.\n\\end{align}\n\nWe can calculate the least-squares solution using the singular value decomposition (note that we use square $\\bm{S}$):\n\\begin{align}\n\\bm{c}^* &= (\\bm{X}^\\text{T}\\bm{X})^{-1}\\bm{X}^\\text{T} \\bm{y} \\\\\n&= (\\bm{V} \\bm{S} \\bm{U}^T \\bm{U} \\bm{S} \\bm{V}^T)^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= (\\bm{V} \\bm{S}\\bm{S} \\bm{V}^T)^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= (\\bm{V} \\bm{S}^{-1}\\bm{S}^{-1} \\bm{V}^T )\\bm{X}^\\text{T} \\bm{y}\\\\\n&= \\bm{V} \\bm{S}^{-1}\\bm{S}^{-1} \\bm{V}^T \\bm{V} \\bm{S} \\bm{U}^T \\bm{y} \\\\\n&= \\bm{V} \\bm{S}^{-1} \\bm{U}^T \\bm{y}. \n\\label{eq:ls-svd}\n\\end{align}\n\nIn order to avoid overfitting (and also numerical instability), the problem \\ref{eq:ls} is often extended by an $\\ell_2$ penalty \\mbox{$\\lambda \\| \\bm{c} \\|^2_2 = \\lambda \\| \\bm{c} \\|^2$} (linear ridge regression):\n\\begin{equation}\n\\argmin_{\\bm{c} \\in \\mathbb{R}^m} \\| \\bm{y} - \\bm{X} \\bm{c} \\|^2+ \\lambda \\| \\bm{c} \\|^2.\n\\label{eq:ridge}\n\\end{equation}\nThe solution to this problem is given by  $\\bm{c} = (\\bm{X}^\\text{T}\\bm{X}+\\lambda \\bm{I})^{-1}\\bm{X}^\\text{T} \\bm{y}$, where $\\bm{I} \\in \\mathbb{R}^{m \\times m}$ is the identity matrix.\nIn terms of the singular value decomposition, we can write the solution:\n\\begin{align}\n\\bm{c}^* &= (\\bm{X}^\\text{T}\\bm{X} + \\lambda \\bm{I})^{-1}\\bm{X}^\\text{T} \\bm{y} \\\\\n&= (\\bm{V} \\bm{S} \\bm{U}^T \\bm{U} \\bm{S} \\bm{V}^T + \\lambda \\bm{I})^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= (\\bm{V} \\bm{S}\\bm{S} \\bm{V}^T + \\lambda \\bm{I})^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= (\\bm{V} \\bm{S}\\bm{S} \\bm{V}^T + \\lambda \\bm{V} \\bm{V}^T)^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= (\\bm{V} [\\bm{S}\\bm{S} + \\lambda \\bm{I}] \\bm{V}^T)^{-1}\\bm{X}^\\text{T} \\bm{y}\\\\\n&= \\bm{V} (\\bm{S}\\bm{S} + \\lambda \\bm{I})^{-1} \\bm{V}^T \\bm{X}^\\text{T} \\bm{y}\\\\\n&= \\bm{V} (\\bm{S}\\bm{S} + \\lambda \\bm{I})^{-1} \\bm{V}^T \\bm{V} \\bm{S} \\bm{U}^T \\bm{y}\\\\\n&= \\bm{V} (\\bm{S}\\bm{S} + \\lambda \\bm{I})^{-1} \\bm{S} \\bm{U}^T \\bm{y}.\n\\label{eq:ridge-svd}\n\\end{align}\nBoth the least-squares and the ridge solution are of the form $\\bm{V} \\bm{D} \\bm{U}^T \\bm{y}$ and are distinguished only by the diagonal matrix $\\bm{D}$, whose elements are given by\n\\begin{equation}\nd_{ii} = \\frac{1}{s_{ii}}\n\\end{equation}\nand\n\\begin{equation}\nd_{ii} = \\frac{s_{ii}}{s_{ii}^2 + \\lambda},\n\\end{equation}\nrespectively. When the $s_{ii}$ come close to zero, the inverse singular values explode. If $\\lambda>0$, the ridge solution becomes not only more stable, but the coefficients along directions of small singular values, which arise from correlated columns (features) of $\\bm{X}$, are also shrunk more strongly. In particular, $s_{ii}$ is small when features are correlated, because close to collinearity $\\| \\bm{X} \\bm{v}_i \\|$ is small:\n\\begin{align}\n\\| \\bm{X} \\bm{v}_i \\|^2 &= \\bm{v}_i^T \\bm{X}^T \\bm{X}  \\bm{v}_i \\\\\n&= \\bm{v}_i^T \\bm{V} \\bm{S} \\bm{U}^T \\bm{U} \\bm{S} \\bm{V}^T  \\bm{v}_i \\\\ \n&= \\bm{v}_i^T \\bm{V} \\bm{S}\\bm{S} \\bm{V}^T  \\bm{v}_i \\\\\n&= s_{ii}^2.  \n\\end{align}
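\n\nBoth solutions can therefore be computed by one routine that only changes the diagonal matrix $\\bm{D}$. A minimal Python sketch, assuming NumPy (\\texttt{np.linalg.svd} returns the singular values as a vector):\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_ridge_svd(X, y, lam=0.0):\n    # Solve (X^T X + lam I) c = X^T y via the SVD X = U S V^T.\n    U, s, Vt = np.linalg.svd(X, full_matrices=False)\n    d = s / (s**2 + lam)   # lam = 0 recovers least squares, d = 1/s\n    return Vt.T @ (d * (U.T @ y))\n\\end{verbatim}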
\n\n\\section{Non-negative linear regression}\nAssume we want to solve the linear (ridge) regression problem of Eq. \\ref{eq:ridge}, however, with the constraint that the coefficients $\\bm{c}$ be non-negative:\n\\begin{equation}\n\\argmin_{\\bm{c} \\in \\mathbb{R}^m} \\| \\bm{y} - \\bm{X} \\bm{c} \\|^2+ \\lambda \\| \\bm{c} \\|^2\n\\quad \\text{subject to} \\quad   \\bm{c} \\geq \\bm{0}.\n\\label{eq:nonnegative-ridge}\n\\end{equation}\nRewriting the cost function in Eq. \\ref{eq:nonnegative-ridge} yields:\n\\begin{align}\n\\| \\bm{y} - \\bm{X} \\bm{c} \\|^2+ \\lambda \\| \\bm{c} \\|^2 \n&= (\\bm{y} - \\bm{X} \\bm{c})^T (\\bm{y} - \\bm{X} \\bm{c}) + \\lambda \\bm{c}^T \\bm{c}\\\\\n&= \\bm{y}^T \\bm{y} - \\bm{y}^T \\bm{X} \\bm{c} - (\\bm{X} \\bm{c})^T \\bm{y}  + (\\bm{X} \\bm{c})^T  \\bm{X} \\bm{c} + \\lambda \\bm{c}^T \\bm{c} \\\\\n&= \\bm{y}^T \\bm{y} -2 \\bm{y}^T \\bm{X} \\bm{c}  + \\bm{c}^T \\bm{X}^T  \\bm{X} \\bm{c} + \\lambda \\bm{c}^T \\bm{c}\\\\\n&= \\bm{y}^T \\bm{y} -2 \\bm{y}^T \\bm{X} \\bm{c}  + \\bm{c}^T (\\bm{X}^T  \\bm{X} + \\lambda \\bm{I}) \\bm{c} \\\\\n&= \\bm{y}^T \\bm{y} +2 \\bm{p}^T \\bm{c}  + \\bm{c}^T \\bm{Q} \\bm{c},\n\\label{eq:nonnegative-cost}\n\\end{align}\nwhere $\\bm{Q} = \\bm{X}^T  \\bm{X} + \\lambda \\bm{I} $ and $\\bm{p} =- \\bm{X}^T \\bm{y}$. Substituting Eq. \\ref{eq:nonnegative-cost} into Eq. \\ref{eq:nonnegative-ridge}, dropping the constant $\\bm{y}^T \\bm{y}$ and dividing by two gives the equivalent form:\n\\begin{equation}\n\\argmin_{\\bm{c} \\in \\mathbb{R}^m} \\frac{1}{2} \\bm{c}^T \\bm{Q} \\bm{c} +\\bm{p}^T \\bm{c} \\quad \\text{subject to} \\quad   \\bm{c} \\geq \\bm{0}.\n\\label{eq:nonnegative-qp}\n\\end{equation}\nThis problem can be solved by the quadratic programming method described in Chapter \\ref{ch:qp}.
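\n\nThe reduction to the QP standard form takes only a few lines. A minimal Python sketch, assuming NumPy; \\texttt{solve\\_qp} is a hypothetical placeholder for the interior-point solver of Chapter \\ref{ch:qp} (there are no equality constraints here, so the corresponding blocks are empty):\n\\begin{verbatim}\nimport numpy as np\n\ndef nonnegative_ridge(X, y, lam=0.0):\n    # min 1/2 c^T Q c + p^T c  s.t.  c >= 0,\n    # with Q = X^T X + lam I and p = -X^T y.\n    m = X.shape[1]\n    Q = X.T @ X + lam * np.eye(m)\n    p = -X.T @ y\n    return solve_qp(Q, p, A=np.zeros((0, m)), b=np.zeros(0))  # placeholder\n\\end{verbatim}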
\n\n\\section{Orthogonal matching pursuit}\nIn some applications it is desirable to find linear models $f(\\bm{x}) = \\bm{x}^T \\bm{c}$ where some, many, or most coefficients (elements of $\\bm{c}$) are zero. Apart from the least absolute shrinkage and selection operator (LASSO, $\\ell_1$ regularization), the orthogonal-matching-pursuit (OMP) algorithm provides a way to find such a model. It is a greedy algorithm: at each iteration $k$ it adds one further column of $\\bm{X}$ to the model, so that the model is always based on the columns selected so far, while all remaining columns implicitly carry a zero coefficient. The non-zero coefficients are determined via least-squares regression. The added column $\\bm{x}^r$ is that column among all columns $\\bm{x}^i$ of $\\bm{X}$ which is closest to the current residual vector $\\bm{R} = \\bm{y} - f_k$:\n\\begin{align}\n\\|\\bm{R} - \\bm{x}^r c_r \\|^2 \\leq \\|\\bm{R} - \\bm{x}^i c_i \\|^2 \\ \\quad \\forall i \n\\label{eq:omp-close}\n\\end{align}\nwhere $c_i$ denotes the corresponding least-squares coefficient in fitting $\\bm{R}$ with the single column $\\bm{x}^i$. First, let us add $\\|\\bm{x}^r c_r\\|^2+ \\|\\bm{x}^i c_i\\|^2$ to both sides of Eq. \\ref{eq:omp-close}:\n\\begin{align}\n\\|\\bm{R} - \\bm{x}^r c_r \\|^2 + \\|\\bm{x}^r c_r\\|^2+ \\|\\bm{x}^i c_i\\|^2 \\leq \\|\\bm{R} - \\bm{x}^i c_i \\|^2 + \\|\\bm{x}^r c_r\\|^2+ \\|\\bm{x}^i c_i\\|^2. \n\\end{align}\nRecall that the least-squares solution $c_i$ provides the orthogonal projection $\\bm{x}^i c_i$ of $\\bm{R}$ onto $\\bm{x}^i$. Thus, $\\bm{x}^i c_i$ and $\\bm{R} - \\bm{x}^i c_i$ are orthogonal, i.e. $\\langle \\bm{R} - \\bm{x}^i c_i , \\bm{x}^i c_i \\rangle = 0$, see Eq. \\ref{eq:orthogonal-proj-zero}. This allows us to use the Pythagorean theorem:\n\\begin{align}\n\\|\\bm{R} - \\bm{x}^r c_r + \\bm{x}^r c_r\\|^2+ \\|\\bm{x}^i c_i\\|^2  &\\leq \\|\\bm{R} - \\bm{x}^i c_i + \\bm{x}^i c_i\\|^2 + \\|\\bm{x}^r c_r\\|^2 \\\\\n\\Leftrightarrow \\|\\bm{R}\\|^2+ \\|\\bm{x}^i c_i\\|^2          &\\leq \\|\\bm{R}\\|^2 + \\|\\bm{x}^r c_r\\|^2 \\\\\n\\Leftrightarrow \\|\\bm{x}^i c_i\\|^2 &\\leq  \\|\\bm{x}^r c_r\\|^2 \\\\\n\\Leftrightarrow \\| \\bm{x}^i\\|^2 | c_i|^2  &\\leq  \\|\\bm{x}^r \\|^2 | c_r|^2.\n\\end{align}\nRecall that the orthogonal projection coefficient is given by \n\\begin{equation}\nc_i =  \\frac{\\langle \\bm{R}, \\bm{x}^i \\rangle}{\\|\\bm{x}^i\\|^2} . \n\\end{equation}\nUsing this, and assuming furthermore that all columns have the same length $\\|\\bm{x}^i\\|$ (which is the case if they are, for example, standardized), we obtain:\n\\begin{align}\n\\| \\bm{x}^i\\|^2 \\frac{|\\langle \\bm{R}, \\bm{x}^i \\rangle|^2}{\\|\\bm{x}^i\\|^4}  &\\leq  \\|\\bm{x}^r \\|^2  \\frac{|\\langle \\bm{R}, \\bm{x}^r \\rangle|^2}{\\|\\bm{x}^r\\|^4} \\\\\n\\Leftrightarrow  \\frac{|\\langle \\bm{R}, \\bm{x}^i \\rangle|^2}{\\|\\bm{x}^i\\|^2}  &\\leq   \\frac{|\\langle \\bm{R}, \\bm{x}^r \\rangle|^2}{\\|\\bm{x}^r\\|^2} \\\\\n\\Leftrightarrow  |\\langle \\bm{R}, \\bm{x}^i \\rangle|^2  &\\leq   |\\langle \\bm{R}, \\bm{x}^r \\rangle|^2 \\\\\n\\Leftrightarrow  |\\langle \\bm{R}, \\bm{x}^i \\rangle|  &\\leq   |\\langle \\bm{R}, \\bm{x}^r \\rangle|.\n\\end{align}\nThis means that in each iteration the closest column to the residual $\\bm{R}$ is the one with the highest absolute projection score $|\\langle \\bm{R}, \\bm{x}^i \\rangle|$, i.e. the computational expense at each iteration reduces to calculating $\\bm{X}^T \\bm{R}$ and determining the index with the highest absolute value.\\\\\nThe OMP algorithm is given by:\n\\begin{enumerate}\n\\item Initialize $\\bm{R}_1 =\\bm{y}$ and the set of selected columns $\\mathcal{X}=\\emptyset$. Choose a target number $q$ of selected columns (or nonzero coefficients) for the linear model and let the iteration counter be $k=1$.\n\\item Find the closest $\\bm{x}^i$ to $\\bm{R}_k$ and add it to $\\mathcal{X}$. \n\\item Build the matrix $\\tilde{\\bm{X}}$ with all $\\bm{x}^i \\in \\mathcal{X}$. Calculate $\\bm{c}^* = \\argmin_{\\bm{c} \\in \\mathbb{R}^{k}} \\| \\bm{y} - \\tilde{\\bm{X}} \\bm{c} \\|^2$. Let $\\bm{R}_{k+1} =\\bm{y} - \\tilde{\\bm{X}} \\bm{c}^*$.\n\\item If $k=q$, stop. Otherwise set $k=k+1$ and return to step 2.\n\\end{enumerate}
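\n\nThese four steps can be sketched directly in Python (assuming NumPy; the function name is illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef omp(X, y, q):\n    # Orthogonal matching pursuit: greedily select q columns of X.\n    n, m = X.shape\n    support = []        # indices of the selected columns\n    R = y.copy()        # residual R_1 = y\n    c = np.zeros(m)\n    for _ in range(q):\n        # Closest column = largest absolute projection score |<R, x^i>|.\n        scores = np.abs(X.T @ R)\n        scores[support] = -np.inf   # never select a column twice\n        support.append(int(np.argmax(scores)))\n        # Least-squares refit on all selected columns, then update R.\n        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)\n        R = y - X[:, support] @ coef\n    c[support] = coef\n    return c\n\\end{verbatim}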
\n\n\\section{Kernel ridge regression} \\label{sec:krr}\nKernel ridge regression is a generalization of the linear ridge regression \\ref{eq:ridge} towards nonlinear models $f$. Consider again the general optimization problem in Eq. \\ref{eq:opt-ml} with the squared error loss $ L(f(\\bm{x}_i),  y_i) = (f(\\bm{x}_i)- y_i)^2$ and, moreover, $r(f) = \\| f \\|^2_\\mathcal{F}$ for a reproducing kernel Hilbert space $\\mathcal{F}$\\footnote{A Hilbert space of functions is a reproducing kernel Hilbert space if the (linear) evaluation functionals are bounded. For more details we refer to the literature on functional analysis.}. Then, according to the representer theorem, the solution to the optimization problem has the form\n\\begin{equation}\nf(\\cdot) = \\sum_{i=1}^n \\alpha_i k(\\cdot, \\bm{x}_i)\n\\label{eq:krr-model} \n\\end{equation}\nwhere $k: \\mathcal{X} \\times \\mathcal{X} \\rightarrow \\mathbb{R}$ is the associated positive-definite real-valued kernel on our input space $\\mathcal{X}$. It can be shown that the coefficients $\\{ \\alpha_i \\}$ are given by the closed-form solution $\\bm{\\alpha} = (\\bm{K}+\\lambda \\bm{I})^{-1} \\bm{y}$ of the optimization problem (kernel ridge regression)\n\\begin{equation}\n\\argmin_{\\bm{\\alpha} \\in \\mathbb{R}^n} \\| \\bm{y} - \\bm{K} \\bm{\\alpha} \\|^2+ \\lambda  \\bm{\\alpha}^T \\bm{K} \\bm{\\alpha},\n\\label{eq:kernel-ridge}\n\\end{equation}\nwhere $\\bm{K} \\in \\mathbb{R}^{n \\times n}$ is the matrix with elements $K_{ij} = k(\\bm{x}_i, \\bm{x}_j)$. The fact that the nonlinear function \\ref{eq:krr-model} is learnt by a linear learning algorithm, i.e. by the closed-form expression, is referred to as the \\textit{kernel trick}. For a linear kernel $k(\\bm{x}_i, \\bm{x}_j) = \\bm{x}_i \\cdot  \\bm{x}_j$ the \\mbox{model \\ref{eq:krr-model}} becomes a linear function and equals the linear model that results from the optimization problem \\ref{eq:ridge} of the linear ridge regression. Alternatively, kernel ridge regression can be derived by kernelizing the linear ridge regression, introducing a potentially nonlinear feature map $\\phi$ such that $k(\\bm{x}_i, \\bm{x}_j) = < \\phi(\\bm{x}_i),   \\phi(\\bm{x}_j)>$. However, $\\phi$ does not need to be known; rather, the kernel is specified directly. A typical (nonlinear) kernel is the Gaussian kernel $k(\\bm{x}_i, \\bm{x}_j) = \\exp(- \\frac{\\|\\bm{x}_i -  \\bm{x}_j\\|^2}{2 \\sigma^2})$. Note that the kernel function is often viewed as a similarity measure between two data points.\\\\\nIn practice, we solve the linear system $(\\bm{K}+\\lambda \\bm{I}) \\bm{\\alpha}=\\bm{y}$ using the Cholesky decomposition and forward and back substitution as described in section \\ref{sec:cholesky}.
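\n\nFitting and predicting therefore only require building $\\bm{K}$ and one linear solve. A minimal Python sketch, assuming NumPy and a Gaussian kernel (for brevity, \\texttt{np.linalg.solve} stands in for the Cholesky-based solve of section \\ref{sec:cholesky}):\n\\begin{verbatim}\nimport numpy as np\n\ndef gaussian_kernel(A, B, sigma=1.0):\n    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all pairs of rows.\n    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)\n    return np.exp(-d2 / (2 * sigma ** 2))\n\ndef fit_krr(X, y, lam=1e-3, sigma=1.0):\n    K = gaussian_kernel(X, X, sigma)\n    return np.linalg.solve(K + lam * np.eye(len(y)), y)   # alpha\n\ndef predict_krr(X_train, alpha, X_new, sigma=1.0):\n    # f(x) = sum_i alpha_i k(x, x_i)\n    return gaussian_kernel(X_new, X_train, sigma) @ alpha\n\\end{verbatim}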
\n\n\\section{Logistic regression}\nConsider the binary classification problem with $y_i \\in \\{0, 1 \\}$. The logistic regression model\n\\begin{align}\nf(\\bm{x}, y) = P(y| \\bm{x}; \\bm{c})= P(Y=1| \\bm{x}; \\bm{c})^y  P(Y=0| \\bm{x}; \\bm{c})^{(1-y)} \n\\end{align}\ncalculates the probability that a data point $\\bm{x}$ has the label $y$, where\n\\begin{align}\n P(Y=1| \\bm{x}; \\bm{c}) &= \\frac{1}{1 + e^{- \\bm{x}^T \\bm{c}}} \\\\\nP(Y=0| \\bm{x}; \\bm{c}) &= 1 - P(Y=1| \\bm{x}; \\bm{c}) = 1 - \\frac{1}{1 + e^{- \\bm{x}^T \\bm{c}}}. \n\\end{align}\nAssuming that the data points are independent and identically distributed, the joint probability of the labels is given by\n\\begin{equation}\nP(y_1, y_2, ..., y_n| \\bm{x}_1, \\bm{x}_2, ..., \\bm{x}_n; \\bm{c}) = \\prod_{i=1}^n P(y_i| \\bm{x}_i; \\bm{c}).\n\\end{equation}\nThe right side is equal to the likelihood function $L(\\bm{c}; y_1, y_2, ..., y_n, \\bm{x}_1, \\bm{x}_2, ..., \\bm{x}_n)$ which, however, treats $\\bm{c}$ as the variable and the input-output pairs as given. In order to determine $\\bm{c}$ such that the observed data is most probable, $L$ is maximized or, in practice, $\\log L$:\n\\begin{align}\n \\log L(\\bm{c}; y_1, ..., y_n, \\bm{x}_1, ..., \\bm{x}_n) &= \\log \\prod_{i=1}^n P(y_i| \\bm{x}_i; \\bm{c}) \\\\\n &=  \\sum_{i=1}^n \\log P(y_i| \\bm{x}_i; \\bm{c}).\n\\end{align}\nWe solve the optimization problem\n\\begin{align}\n\\max_{\\bm{c} \\in \\mathbb{R}^m} \\sum_{i=1}^n \\log P(y_i| \\bm{x}_i; \\bm{c})\n\\end{align}\nvia gradient ascent (equivalently, by applying gradient descent to the negative log-likelihood).
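\n\nFor this model the gradient takes the simple form $\\nabla_{\\bm{c}} \\log L = \\bm{X}^T (\\bm{y} - \\sigma(\\bm{X}\\bm{c}))$ with the logistic function $\\sigma$. A minimal Python sketch of the ascent loop, assuming NumPy (learning rate and iteration count are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef fit_logistic(X, y, lr=0.1, n_iter=1000):\n    # Maximize the log-likelihood by gradient ascent.\n    c = np.zeros(X.shape[1])\n    for _ in range(n_iter):\n        c += lr * (X.T @ (y - sigmoid(X @ c)))\n    return c\n\\end{verbatim}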
\n\n\\section{Support vector machines}\nConsider the binary classification problem with $y_i \\in \\{ 1, -1 \\}$. Let us target the goal to find some hyperplane defined by\n\\begin{equation}\n\\bm{x} \\bm{w} - b =0\n\\label{eq:hyper}\n\\end{equation} \nthat separates the two classes in the input space of the data represented by $\\bm{X}$. In Eq. \\ref{eq:hyper}, $\\bm{w}$ denotes the normal vector to the hyperplane and $\\frac{b}{\\| \\bm{w} \\|}$ the offset of the hyperplane from the origin along $\\bm{w}$. Assume for the moment that the data is linearly separable. The SVM determines $\\bm{w}$ and $b$ such that the closest data points to the hyperplane (there is at least one with $y=1$ and one with $y=-1$) have a distance to it of $\\frac{1}{\\| \\bm{w} \\|}$, along $\\bm{w}$ if $y=1$ and along $-\\bm{w}$ if $y=-1$: \n\\begin{align}\n\\bm{x}_i \\bm{w} - b &\\geq 1 \\quad   \\text{if} \\quad   y_i=1,\\\\\n\\bm{x}_i \\bm{w} - b &\\leq -1 \\quad   \\text{if} \\quad   y_i=-1\n\\end{align}\nfor all data points $i$. This can be rewritten as:\n\\begin{align}\ny_i(\\bm{x}_i \\bm{w} - b) &\\geq 1.\n\\end{align}\nThat means the closest data points $\\bm{x}_i$ (support vectors) lie on two hyperplanes with the same normal vector $\\bm{w}$ which define a margin of $\\frac{2}{\\| \\bm{w} \\|}$ between the two classes. \\\\\nIn case the data is not linearly separable, we need to introduce a set of slack variables $\\xi_i \\geq 0$ which allow for misclassified data points:\n\\begin{align}\n\\bm{x}_i \\bm{w} - b &\\geq 1 - \\xi_i\\quad   \\text{if} \\quad   y_i=1,\\\\\n\\bm{x}_i \\bm{w} - b &\\leq -1 + \\xi_i \\quad   \\text{if} \\quad   y_i=-1\n\\end{align}\nfor all data points $i$. This can be rewritten as:\n\\begin{align}\ny_i(\\bm{x}_i \\bm{w} - b) &\\geq 1 - \\xi_i.\n\\end{align}\nTogether with the further criterion that the margin between the two classes should be maximal, the SVM optimization problem becomes:\n\\begin{align}\n\\min_{\\bm{w}, b, \\bm{\\xi}} \\frac{1}{2} \\| \\bm{w} \\|^2 + C \\sum_{i=1}^n \\xi_i \\quad \\text{subject to} \\quad   y_i(\\bm{x}_i \\bm{w} - b) &\\geq 1 - \\xi_i \\quad \\text{and} \\quad \\xi_i \\geq 0, \\quad  \\forall i.\n\\end{align}\nNote that the optimization problem is fully described by those data points that do not fulfill $y_i(\\bm{x}_i \\bm{w} - b) > 1$. A consequence of minimizing the slack variables $\\xi_i$ is that they will be zero for data points that are on the correct side of the margin, i.e. those with $y_i(\\bm{x}_i \\bm{w} - b) \\geq 1$. For misclassified data points, and also for points that are correctly classified but lie inside the margin, minimizing the $\\xi_i$ ensures that they become no larger than the distance (along $\\bm{w}$) of the data point to the correct side of the margin; since the side constraint forces $\\xi_i$ to be at least this distance, we have $\\xi_i = 1 - y_i(\\bm{x}_i \\bm{w} - b)$ for such data points. With a large $C$, we put more weight on minimizing the slack variables $\\xi_i$. As a result, increasing $C$ decreases the margin of the hyperplane, because the distance of the data points which are not on the correct side of the margin needs to become small (or, equivalently, because less weight is put on minimizing $\\| \\bm{w} \\|$, the inverse of the margin). If the model is flexible enough, e.g. when based on some (nonlinear) kernel (introduced a few paragraphs below) instead of the linear model discussed so far, increasing $C$ leads to a more \\textit{wiggly} decision boundary that decreases the number of misclassified data points, i.e. the number of $\\xi_i>0$. Decreasing $C$ leads to a \\textit{coarser} model that allows for misclassified data points (taken care of by sufficiently large $\\xi_i$) but might be more robust when predicting the labels of new data points. \\\\\nWe will solve this problem using the method of Lagrange multipliers, where the multipliers $\\alpha_i \\geq 0$ and $\\mu_i \\geq 0$ take care of the side constraints. The respective Lagrangian is given by:\n\\begin{align}\n\\mathcal{L} &= \\frac{1}{2} \\| \\bm{w} \\|^2 + C \\sum_{i=1}^n \\xi_i - \\sum_{i=1}^n \\alpha_i[ y_i(\\bm{x}_i \\bm{w} - b) -1 + \\xi_i] - \\sum_{i=1}^n \\mu_i \\xi_i \\label{eq:first}\\\\ \n&= \\frac{1}{2} \\| \\bm{w} \\|^2 + C \\sum_{i=1}^n \\xi_i - \\bm{w} \\sum_{i=1}^n \\alpha_i y_i \\bm{x}_i  + b \\sum_{i=1}^n \\alpha_i y_i + \\sum_{i=1}^n \\alpha_i - \\sum_{i=1}^n \\alpha_i \\xi_i - \\sum_{i=1}^n \\mu_i \\xi_i \\\\\n&= \\frac{1}{2} \\| \\bm{w} \\|^2 - \\bm{w} \\sum_{i=1}^n \\alpha_i y_i \\bm{x}_i  + b \\sum_{i=1}^n \\alpha_i y_i+ \\sum_{i=1}^n \\alpha_i + \\sum_{i=1}^n (C - \\alpha_i - \\mu_i) \\xi_i.\n\\label{eq:svm-lagrangian-primary}\n\\end{align}\nMinimizing the Lagrangian with respect to $\\bm{w}$, $b$, and $\\xi_i$ yields:\n\\begin{align}\n\\frac{\\partial \\mathcal{L}_\\text{P}}{\\partial \\bm{w}} = 0 \\quad  \\Rightarrow \\bm{w} = \\sum_{i=1}^n \\alpha_i y_i \\bm{x}_i, \\label{eq:svm-sc1}\\\\\n\\frac{\\partial \\mathcal{L}_\\text{P}}{\\partial b} = 0 \\quad  \\Rightarrow \\sum_{i=1}^n \\alpha_i y_i  = 0, \\label{eq:svm-sc2}\\\\\n\\frac{\\partial \\mathcal{L}_\\text{P}}{\\partial \\xi_i} = 0 \\quad  \\Rightarrow C - \\alpha_i - \\mu_i =0. \\label{eq:svm-sc3}\n\\end{align}
\nBy substituting Eqs. \\ref{eq:svm-sc1}, \\ref{eq:svm-sc2}, and \\ref{eq:svm-sc3} into \\ref{eq:svm-lagrangian-primary}, we obtain the dual Lagrangian:\n\\begin{align}\n\\mathcal{L}_\\text{D} &= \\frac{1}{2} \\| \\bm{w} \\|^2 - \\bm{w} \\sum_{i=1}^n \\alpha_i y_i \\bm{x}_i  + b \\underbrace{\\sum_{i=1}^n \\alpha_i y_i}_{=0}+ \\sum_{i=1}^n \\alpha_i + \\sum_{i=1}^n \\underbrace{(C - \\alpha_i - \\mu_i)}_{=0} \\xi_i\\\\\n&= \\frac{1}{2} \\| \\bm{w} \\|^2 - \\bm{w} \\sum_{i=1}^n \\alpha_i y_i \\bm{x}_i +\\sum_{i=1}^n \\alpha_i \\\\\n&= \\frac{1}{2} \\sum_{i,j} \\alpha_i \\alpha_j y_i y_j \\bm{x}_i \\bm{x}_j- \\sum_{i,j} \\alpha_i \\alpha_j y_i y_j \\bm{x}_i \\bm{x}_j  + \\sum_{i=1}^n \\alpha_i\\\\ \n&= -\\frac{1}{2} \\sum_{i,j} \\alpha_i \\alpha_j y_i y_j \\bm{x}_i \\bm{x}_j  + \\sum_{i=1}^n \\alpha_i\\\\\n&= -\\frac{1}{2} \\bm{\\alpha}^T \\bm{H} \\bm{\\alpha}  + \\sum_{i=1}^n \\alpha_i,\n\\label{eq:svm-lagrangian-dual}\n\\end{align}\nwhere $H_{ij} = y_i y_j \\bm{x}_i \\bm{x}_j $. With that and the fact that Eq. \\ref{eq:svm-sc3} together with $\\mu_i \\geq 0$ leads to $ 0 \\leq \\alpha_i \\leq C$, the optimization problem becomes the dual problem\n\\begin{align}\n\\max_{\\bm{\\alpha}} \\sum_{i=1}^n \\alpha_i -\\frac{1}{2} \\bm{\\alpha}^T \\bm{H} \\bm{\\alpha}  \\quad \\text{subject to} \\quad   0 \\leq \\alpha_i \\leq C \\ \\ \\forall i \\quad \\text{and} \\quad \\sum_{i=1}^n \\alpha_i y_i  = 0,\n\\end{align} \nwhich can be solved via quadratic programming, see Chapter \\ref{ch:qp}. Recall the Lagrange duality theory: the primal problem is given by\n\\begin{align}\n\\min_{\\bm{w}, b, \\bm{\\xi}}\\left( \\max_{\\bm{\\alpha}\\geq0, \\bm{\\mu}\\geq0} \\mathcal{L}(\\bm{w}, b, \\bm{\\alpha}, \\bm{\\mu}, \\bm{\\xi}) \\right) = \\min_{\\bm{w}, b, \\bm{\\xi}} \\mathcal{L_\\text{P}}\n\\end{align}\nand the dual one by\n\\begin{align}\n\\max_{\\bm{\\alpha}\\geq0, \\bm{\\mu}\\geq0}  \\left(  \\min_{\\bm{w}, b, \\bm{\\xi}} \\mathcal{L}(\\bm{w}, b, \\bm{\\alpha}, \\bm{\\mu}, \\bm{\\xi}) \\right) = \\max_{\\bm{\\alpha}\\geq0, \\bm{\\mu}\\geq0}  \\mathcal{L_\\text{D}}.\n\\end{align}\nNote that, when taking the side constraints into account, the maximum of $\\mathcal{L_\\text{D}}$ with respect to $\\bm{\\alpha}$ (and $\\bm{\\mu}$) equals the primal objective $\\frac{1}{2} \\|\\bm{w} \\|^2 + C \\sum_{i=1}^n \\xi_i$ (strong duality). Assume that, for a given $\\bm{w}$ and $b$, some data points fulfill $y_i(\\bm{x}_i \\bm{w} - b) > 1 -\\xi_i$ (and then also $y_i(\\bm{x}_i \\bm{w} - b) > 1$, because $\\xi_i$ is minimized). Then for the $\\alpha_i$-based term in Eq. \\ref{eq:first} we have:\n\\begin{align}\n - \\sum_{i=1}^n \\alpha_i[ y_i(\\bm{x}_i \\bm{w} - b) -1 + \\xi_i]  <0.\n\\end{align}\nMaximizing with respect to $\\alpha_i$ yields $\\alpha_i=0$ for these (non-support-vector) data points. \nWe define the set $S$ of support vectors $\\bm{x}_s$, which are characterized by $\\alpha_s>0$. The weight vector can then be written as a linear combination of the support vectors alone, i.e. we rewrite Eq. \\ref{eq:svm-sc1} to \n\\begin{align}\n\\bm{w} = \\sum_{s \\in S} \\alpha_s y_s \\bm{x}_s.\n\\end{align}\nGiven furthermore that support vectors with $0 < \\alpha_i < C$ lie exactly on the margin, i.e. fulfill $y_i(\\bm{x}_i \\bm{w} - b) = 1$, we obtain from\n\\begin{align}\ny_i &= \\underbrace{y_i^2}_{=1}(\\bm{x}_i \\bm{w} - b) \\\\\n& = \\bm{x}_i \\sum_{s \\in S} \\alpha_s y_s \\bm{x}_s - b \n\\end{align}\na way to calculate $b$:\n\\begin{align}\nb =  \\sum_{s \\in S} \\alpha_s y_s \\bm{x}_i \\bm{x}_s - y_i.\n\\end{align}
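\n\nTo make the connection to Chapter \\ref{ch:qp} concrete, the following Python sketch (assuming NumPy) builds the dual problem and recovers $\\bm{w}$ and $b$ from its solution; \\texttt{solve\\_box\\_qp} is a hypothetical placeholder for a quadratic-programming routine that handles the box constraints $0 \\leq \\alpha_i \\leq C$ and the equality constraint $\\sum_i \\alpha_i y_i = 0$:\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_svm_dual(X, y, C=1.0):\n    # Dual: max sum(alpha) - 1/2 alpha^T H alpha, H_ij = y_i y_j x_i.x_j.\n    H = (y[:, None] * y[None, :]) * (X @ X.T)\n    alpha = solve_box_qp(H, -np.ones(len(y)), y, C)  # placeholder solver\n    # w is a linear combination of the support vectors (alpha_s > 0).\n    S = alpha > 1e-8\n    w = (alpha[S] * y[S]) @ X[S]\n    # b from a margin support vector with 0 < alpha_i < C.\n    i = int(np.argmax((alpha > 1e-8) & (alpha < C - 1e-8)))\n    b = (alpha[S] * y[S]) @ (X[S] @ X[i]) - y[i]\n    return w, b\n\\end{verbatim}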
\n\nThe model can be extended to a nonlinear decision boundary by kernelizing the linear problem (see the section on kernel ridge regression), i.e. by introducing a potentially nonlinear feature map $\\phi(\\bm{x}_i)$ and a kernel given by an inner product $k(\\bm{x}_i ,\\bm{x}_j) = <\\phi(\\bm{x}_i) , \\phi(\\bm{x}_j)>$. Then $H_{ij} = y_i y_j k(\\bm{x}_i, \\bm{x}_j)$, and the original linear problem is recovered by considering the linear kernel $k(\\bm{x}_i ,\\bm{x}_j) = \\bm{x}_i \\bm{x}_j$. \n\n\n\n\n\n\n\n\n\\end{document}\n\n\n\n\n", "meta": {"hexsha": "3143facd1efad4c02378525366d1fa81271a2a1f", "size": 41204, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Theoretical-background.tex", "max_stars_repo_name": "ahmetcik/ML-and-LA-from-scratch", "max_stars_repo_head_hexsha": "9718ccb68c228be6cdcb0b14b97f9d0acdfc28ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-08T01:31:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-08T01:31:20.000Z", "max_issues_repo_path": "Theoretical-background.tex", "max_issues_repo_name": "ahmetcik/ML-and-LA-from-scratch", "max_issues_repo_head_hexsha": "9718ccb68c228be6cdcb0b14b97f9d0acdfc28ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Theoretical-background.tex", "max_forks_repo_name": "ahmetcik/ML-and-LA-from-scratch", "max_forks_repo_head_hexsha": "9718ccb68c228be6cdcb0b14b97f9d0acdfc28ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.1865008881, "max_line_length": 1768, "alphanum_fraction": 0.6403019124, "num_tokens": 15565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8615382040983515, "lm_q1q2_score": 0.596737541165232}}
{"text": "\\documentclass[\n  xhtml,%\n  use filename%\n]{internet}\n\n\\usepackage{tutorial}\n\\usepackage{hyperref}\n\n\\title{A Christmas Card}\n\\date{\\today}\n\\begin{document}\n\\maketitle\n\n\\section{Introduction}\n\nThere are several built-in shapes that you can use to make a scene on the screen, but sometimes you need a little more flexibility.\nIn this tutorial, we'll introduce \\verb+path+s and see how we can use them to make almost any shape.\n\n\\section{The Basics}\n\nA \\verb+path+ is what it sounds like: a path on the screen.\nIt is built up out of pieces like lines and arcs.\nTo make a path, you first give it a name.\nThen you add pieces to it.\n\nFor example, in the following code we define a simple shape in the \\verb+setup+ function.\n\n\\begin{verbatim}\nfunction setup()\n  print(\"hello world\")\n  shape = path()\n  shape:moveTo(200,200)\n  shape:lineTo(300,200)\n  shape:lineTo(200,300)\nend\n\nfunction draw()\n  background(40,40,50)\n  fill(150,200,30)\n  stroke(200,30,150)\n  strokeWidth(10)\n  rect(20,20,100,100)\n  shape:fill()\n  shape:draw()\nend\n\\end{verbatim}\n\nDefining the shape is the lines:\n\n\\begin{verbatim}\nshape = path()\nshape:moveTo(200,200)\nshape:lineTo(300,200)\nshape:lineTo(200,300)\n\\end{verbatim}\n\nThe first line names our path \\verb+shape+.\nThe second line \\emph{moves} our path to the point \\((200,200)\\).\nThis tells our path where to start.\nThe third line then adds a line to the path, starting at its current point (which is \\((200,200)\\)) and ending at \\((300,200)\\).\nEvery time we add a piece to the path, it starts where the last one ended.\nThen the last line adds another line from \\((300,200)\\) to \\((200,300)\\).\n\nThis sets up the shape.\nTo put something on the screen, we have the lines in the \\verb+draw+ function:\n\n\\begin{verbatim}\nshape:fill()\nshape:draw()\n\\end{verbatim}\n\nThe first line \\emph{fills} the path with the current \\verb+fill+ colour.\nAs our path is not closed, it joins it up to close it before filling it.\nThe second line draws the path with the current \\verb+stroke+ colour and \\verb+strokeWidth+.\n(The path isn't closed when it is drawn.)\n\n\\section{Drawing a Tree}\n\nWe can draw a Christmas tree using paths.\nLet's start with naming our path: put \\verb+tree = path()+ in the \\verb+setup+ function.\nWe'll define it so that the top of the tree is at the point \\verb+(0,0)+ and then use \\emph{transformations} to move it to where we want it.\nTo make our tree start at \\verb+(0,0)+, we write \\verb+tree:moveTo(0,0)+.\n\nWe want to start with a nice curve.\nWe can do that with an arc.\nTo add an arc, we need to fix a centre, a radius, and starting and ending angles.\nSo to draw a quarter circle starting at the top and sweeping down, we could use:\n\n\\begin{verbatim}\ntree:arc(-100,0,100,0,-90,true)\n\\end{verbatim}\n\nThis arc is drawn from a circle centred at \\verb+(-100,0)+ and has radius \\verb+100+.\n\nWe'll want another arc the other side.\n\n\\begin{verbatim}\ntree:arc(100,0,100,-90,-180,true)\n\\end{verbatim}\n\nWe'll just be filling the shape so don't need to worry about joining up the arcs.\n\nTo draw the shape, we need to move it and then fill it.\nIn the \\verb+draw+ function, we write:\n\n\\begin{verbatim}\ntree:use({transformShape = true, fill})\n\\end{verbatim}\n\nPutting all that together, we have:\n\n\\begin{verbatim}\nfunction setup()\n  print(\"hello world\")\n  tree = path()\n  tree:moveTo(0,0)\n  tree:arc(-100,0,100,0,-90,true)\n  
tree:arc(100,0,100,-90,-180,true)\nend\n\nfunction draw()\n  background(40,40,50)\n  fill(50,200,30)\n  translate(WIDTH/2,HEIGHT-50)\n  tree:use({transformShape = true, fill = true})\nend\n\\end{verbatim}\n\nWe can add another segment of the tree with more arcs:\n\n\\begin{verbatim}\ntree:arc(-200,0,200,0,-90,true)\ntree:arc(200,0,200,-90,-180,true)\n\\end{verbatim}\n\nWe can add a third segment by changing the \\verb+200+s to \\verb+300+s, but that doesn't look quite right.\nBetter is to use a smaller arc slightly lower down.\n\n\\begin{verbatim}\ntree:arc(-250,-50,250,0,-90,true)\ntree:arc(250,-50,250,-90,-180,true)\n\\end{verbatim}\n\nOur code at this point looks like the following:\n\n\\begin{verbatim}\nfunction setup()\n  print(\"hello world\")\n  tree = path()\n  tree:moveTo(0,0)\n  tree:arc(-100,0,100,0,-90,true)\n  tree:arc(100,0,100,-90,-180,true)\n  tree:arc(-200,0,200,0,-90,true)\n  tree:arc(200,0,200,-90,-180,true)\n  tree:arc(-250,-50,250,0,-90,true)\n  tree:arc(250,-50,250,-90,-180,true)\nend\n\nfunction draw()\n  background(40,40,50)\n  fill(50,200,30)\n  stroke(0,0,0)\n  strokeWidth(5)\n  translate(WIDTH/2,HEIGHT-50)\n  tree:use({transformShape = true, fill = true})\nend\n\\end{verbatim}\n\n\\section{The Trunk and Bucket}\n\nThe trunk and the bucket will be new paths, one for each.\nOne reason for this is that we will fill them with different colours.\nSo in \\verb+setup+ we need paths \\verb+trunk = path()+ and \\verb+bucket = path()+.\nWe'll actually draw the trunk first so it can overlap slightly with the tree and bucket.\nSo it just needs to be a rectangle.\nThe centre of the bottom of the tree is at \\verb+(0,-300)+, so we start just left of there:\n\n\\begin{verbatim}\ntrunk:moveTo(-10,-300)\ntrunk:lineTo(-10,-350)\ntrunk:lineTo(10,-350)\ntrunk:lineTo(10,-300)\n\\end{verbatim}\n\nWhen filling each path, we can specify the colour as part of its options.\nThis saves us having to remember which colour goes with which piece.\nWith that, our code now looks like this:\n\n\\begin{verbatim}\nfunction setup()\n  print(\"hello world\")\n  tree = path()\n  tree:moveTo(0,0)\n  tree:arc(-100,0,100,0,-90,true)\n  tree:arc(100,0,100,-90,-180,true)\n  tree:arc(-200,0,200,0,-90,true)\n  tree:arc(200,0,200,-90,-180,true)\n  tree:arc(-250,-50,250,0,-90,true)\n  tree:arc(250,-50,250,-90,-180,true)\n  trunk = path()\n  trunk:moveTo(-10,-300)\n  trunk:lineTo(-10,-350)\n  trunk:lineTo(10,-350)\n  trunk:lineTo(10,-300)\nend\n\nfunction draw()\n  background(40,40,50)\n  fill(50,200,30)\n  stroke(0,0,0)\n  strokeWidth(5)\n  translate(WIDTH/2,HEIGHT-50)\n  trunk:use({transformShape = true, fill = colour(\"brown\")})\n  tree:use({transformShape = true, fill = colour(50,200,30)})\nend\n\\end{verbatim}\n\nOf course, we can use other colours.\nSee the \\href{Style.xhtml}{Style} tutorial for more details.\n\nNow for the bucket.\nTo give it a more 3D look, we'll curve the top and bottom.\nSo in \\verb+setup+:\n\n\\begin{verbatim}\nbucket = path()\nbucket:moveTo(-30,-340)\nbucket:lineTo(-20,-400)\nbucket:curveTo(-10,-410,10,-410,20,-400)\nbucket:lineTo(30,-340)\nbucket:curveTo(20,-350,-20,-350,-30,-340)\n\\end{verbatim}\n\nAnd in \\verb+draw+:\n\n\\begin{verbatim}\ntrunk:use({transformShape = true, fill = colour(\"brown\")})\nbucket:use({transformShape = true, fill= colour(\"red\")})\ntree:use({transformShape = true, fill = colour(50,200,30)})\n\\end{verbatim}\n\nOur full code looks like this:\n\n\\begin{verbatim}\nfunction setup()\n  print(\"Merry Christmas!\")\n  tree = path()\n  
tree:moveTo(0,0)\n  tree:arc(-100,0,100,0,-90,true)\n  tree:arc(100,0,100,-90,-180,true)\n  tree:arc(-200,0,200,0,-90,true)\n  tree:arc(200,0,200,-90,-180,true)\n  tree:arc(-250,-50,250,0,-90,true)\n  tree:arc(250,-50,250,-90,-180,true)\n  trunk = path()\n  trunk:moveTo(-10,-300)\n  trunk:lineTo(-10,-350)\n  trunk:lineTo(10,-350)\n  trunk:lineTo(10,-300)\n  bucket = path()\n  bucket:moveTo(-30,-340)\n  bucket:lineTo(-20,-400)\n  bucket:curveTo(-10,-410,10,-410,20,-400)\n  bucket:lineTo(30,-340)\n  bucket:curveTo(20,-350,-20,-350,-30,-340)\nend\n\nfunction draw()\n  background(40,40,50)\n  translate(WIDTH/2,HEIGHT-50)\n  trunk:use({transformShape = true, fill = colour(\"brown\")})\n  bucket:use({transformShape = true, fill= colour(\"red\")})\n  tree:use({transformShape = true, fill = colour(50,200,30)})\nend\n\\end{verbatim}\n\n\\section{Decorations}\n\nOnce we have our basic tree, we can decorate it however we like.\nWe could use \\verb+ellipse+ or \\verb+circle+ to make baubles, and \\verb+rect+ to make presents.\nOr we could define new paths to make more complicated objects, like a star for the top.\n\nIf we're feeling very ambitious, we could even animate the decorations so that they move around the tree.\n\n\\end{document}\n", "meta": {"hexsha": "7f7a28196146031a66a0ba4d1969c7aa81042770", "size": 7904, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ChristmasTree.tex", "max_stars_repo_name": "loopspace/jsCanvas-Tutorials", "max_stars_repo_head_hexsha": "7bff26820a9fd3aa3b5abcb7584dd3ca37577054", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ChristmasTree.tex", "max_issues_repo_name": "loopspace/jsCanvas-Tutorials", "max_issues_repo_head_hexsha": "7bff26820a9fd3aa3b5abcb7584dd3ca37577054", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ChristmasTree.tex", "max_forks_repo_name": "loopspace/jsCanvas-Tutorials", "max_forks_repo_head_hexsha": "7bff26820a9fd3aa3b5abcb7584dd3ca37577054", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6363636364, "max_line_length": 140, "alphanum_fraction": 0.7083755061, "num_tokens": 2465, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.86153820232079, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5967375290018075}}
{"text": "\\documentclass[12pt,a4paper]{exam}\n\n% Edit these as appropriate\n\\newcommand\\hwnumber{1}                            % <-- homework number\n\\newcommand\\DateOfTutorium{\\DTMdate{2019-12-23}}   % <-- date of tutorium\n\n% Load Preamble where settings and styles of document is defined\n\\input{Preamble.tex} \n\n\\begin{document}\n\n\\section*{The Brachistochrone Problem}\n\n  In the following we are going to discuss the famous Brachistochrone Problem using the method of Lagrangian mechanics. \n\n  Our goal is to find the optimal path between two points in space. For this path the time it takes an ideal object to roll down should be as short as possible. This means we want to find the minimum of the integral\n  \n    \\begin{equation}\n      t_{12} = \\int_1^2 \\text{d}t = \\int_1^2 \\frac{1}{v} \\, \\text{d}s, \\label{eq:IntegralOne}\n    \\end{equation}\n  where $v$ is the velocity at every point and $\\text{d}s$ is the distance along the path. Using pythagoras we can write \n    \n    \\begin{equation}\n      \\text{d} s^2 = \\text{d} x^2 + \\text{d} y^2 = \\text{d} x^2 \\left(1 + \\left(\\frac{\\text{d} y}{\\text{d} x}\\right)\\right) =: \\text{d} x^2 \\left(1 + y'^2\\right). \\label{eq:Distance} \n    \\end{equation}      \n  In order to find the velocity we can use conservation of the total energy $E$\n    \n    \\begin{equation}\n      E = \\frac{1}{2}mv^2 + mgy \\overset{!}{=} 0 \n    \\end{equation}\n  which can be set equal to zero when choosing the origin of our coordinate system appropriately. Thus the velocity is\n  \n    \\begin{equation}\n      v = \\sqrt{2gy}. \\label{eq:Velocity}\n    \\end{equation} \n  Now we can plug in the expressions for $\\text{d}s$, \\eqref{eq:Distance} and the velocity, \\eqref{eq:Velocity} into the integral of time, \\eqref{eq:IntegralOne} which gets\n    \n    \\begin{equation}\n      t_{12} = \\int_1^2 \\frac{\\sqrt{1 + y'^2}}{\\sqrt{2gy}} \\text{d}x. \\label{eq:IntegralTwo}\n    \\end{equation}\n  \n  At this point we can use the tools of variational calculus. The goal is to find a function $y(x)$ which minimizes integral \\eqref{eq:IntegralTwo} and thus the time it takes the object to roll down the path. Notice that the path is described by the function $y(x)$. 
\n  \n  We can define the Lagrange function $\\mathcal{L}$ as the integrand of \\eqref{eq:IntegralTwo} \n  \n    \\begin{equation}\n      \\mathcal{L}\\left(y, y'\\right) := \\sqrt{\\frac{1 + y'^2}{2gy}} \\label{eq:Lagrangian}.\n    \\end{equation}\n  Usually solving the Euler-Lagrange equation\n  \n    \\begin{equation}\n      \\frac{\\partial \\mathcal{L}}{\\partial y} - \\frac{\\text{d}}{\\text{d} x} \\frac{\\partial \\mathcal{L}}{\\partial y'} \\overset{!}{=} 0 \\label{eq:EulerLagrange}\n    \\end{equation}\n  gives us the solution for $y(x)$, but since $\\mathcal{L}$ is independent of $x$ ($\\frac{\\partial \\mathcal{L}}{\\partial x} = 0$) we can compute the total derivative \n  \n    \\begin{equation}\n      \\begin{split}\n        \\text{d} \\mathcal{L} &= \\frac{\\partial \\mathcal{L}}{\\partial y} \\text{d} y + \\frac{\\partial \\mathcal{L}}{\\partial y'} \\text{d} y' + \\frac{\\partial \\mathcal{L}}{\\partial x} \\text{d} x \\\\\n        \\Rightarrow \\frac{\\text{d} \\mathcal{L}}{\\text{d} x} &= \\frac{\\partial \\mathcal{L}}{\\partial y} \\frac{\\text{d} y}{\\text{d} x} + \\frac{\\partial \\mathcal{L}}{\\partial y'} \\frac{\\text{d} y'}{\\text{d} x} + \\frac{\\partial \\mathcal{L}}{\\partial x} \\\\\n        \\Leftrightarrow \\frac{\\text{d} \\mathcal{L}}{\\text{d} x} &= \\frac{\\partial \\mathcal{L}}{\\partial y} y' + \\frac{\\partial \\mathcal{L}}{\\partial y'} y'' + \\frac{\\partial \\mathcal{L}}{\\partial x}.\n      \\end{split}\n    \\end{equation}\n  Now we can insert the expression for $\\frac{\\partial \\mathcal{L}}{\\partial y}$ from \\eqref{eq:EulerLagrange} and, after using the product rule to rearrange the equation, obtain\n  \n    \\begin{equation}\n      \\begin{split}\n        \\frac{\\text{d} \\mathcal{L}}{\\text{d} x} - y' \\frac{\\text{d}}{\\text{d} x} \\frac{\\partial \\mathcal{L}}{\\partial y'}  - \\frac{\\partial \\mathcal{L}}{\\partial y'} y'' - \\frac{\\partial \\mathcal{L}}{\\partial x} &= 0 \\\\\n        \\Leftrightarrow \\frac{\\text{d} \\mathcal{L}}{\\text{d} x} -  \\frac{\\text{d}}{\\text{d} x} \\left(y' \\frac{\\partial \\mathcal{L}}{\\partial y'}\\right) - \\frac{\\partial \\mathcal{L}}{\\partial x} &= 0 \\\\\n        \\Leftrightarrow \\frac{\\text{d} }{\\text{d} x} \\left(\\mathcal{L} -   y' \\frac{\\partial \\mathcal{L}}{\\partial y'}\\right) &= 0.\n      \\end{split}\n    \\end{equation}\n  Thus the term in brackets is a constant,\n  \n    \\begin{equation}\n      \\mathcal{L} -   y' \\frac{\\partial \\mathcal{L}}{\\partial y'} =: C\n    \\end{equation}\n  and calculating the derivative\n  \n    \\begin{equation}\n      \\frac{\\partial \\mathcal{L}}{\\partial y'} = \\frac{y'}{\\sqrt{2gy\\left(1+y'^2\\right)}}\n    \\end{equation}\n  gives \n    \\begin{equation}\n      \\begin{split}\n        \\sqrt{\\frac{1 + y'^2}{2gy}} - \\frac{y'^2}{\\sqrt{2gy\\left(1+y'^2\\right)}} &= C \\\\\n        \\Leftrightarrow \\frac{1}{\\sqrt{2gy\\left(1+y'^2\\right)}} &= C.\n      \\end{split}\n    \\end{equation}\n  Let's rearrange this expression to get\n  \n    \\begin{equation}\n      \\begin{split}\n        y \\left(1 + y'^2\\right) &= \\frac{1}{2gC^2} =: \\frac{A}{2} \\\\\n        \\Leftrightarrow y \\left(1 + \\left(\\frac{\\text{d} y}{\\text{d} x}\\right)^2\\right) &= \\frac{A}{2}  \\\\\n        \\Leftrightarrow \\frac{\\text{d} y}{\\text{d} x} &= \\sqrt{\\frac{A}{2y} - 1}.\n      \\end{split}\n    \\end{equation}\n    \n  This differential equation is solved by the cycloid\n    \n    \\begin{equation}\n      x = R \\left(\\phi - \\sin \\phi \\right), \\qquad y = R \\left(1 - \\cos \\phi \\right), \\qquad R = \\frac{A}{4},\n    \\end{equation}\n  where $R$ inherits the role of the free parameter $A$.
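\n  As a quick consistency check, one can verify by direct substitution that this parametrization indeed satisfies the differential equation:\n  \n    \\begin{equation}\n      \\frac{\\text{d} y}{\\text{d} x} = \\frac{\\text{d} y / \\text{d} \\phi}{\\text{d} x / \\text{d} \\phi} = \\frac{\\sin \\phi}{1 - \\cos \\phi} = \\sqrt{\\frac{1 + \\cos \\phi}{1 - \\cos \\phi}} = \\sqrt{\\frac{A}{2y} - 1},\n    \\end{equation}\n  where the last step uses $y = \\frac{A}{4}\\left(1 - \\cos \\phi\\right)$.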
\n  In the following is an inverted sketch of this optimized path (drawn for $R=1$).\n  \n  \\begin{center}\n    \\begin{tikzpicture}[scale=2.5]\n      \\tkzInit[xmin=0,xmax=3.3,xstep=1,ymin=0,ymax=2,ystep=1]\n      \\tkzAxeX[step=1]\n      \\tkzAxeY[step=1]\n      \\tkzFctPar[samples=100,domain=0:4*pi]{t-sin(t)}{1-cos(t)}\n    \\end{tikzpicture}  \n  \\end{center}\n  \n  \n  \n\\end{document}\n", "meta": {"hexsha": "8085b280e0ea68975743ff57533a2e0e2cda8f29", "size": 5814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DynamicSystems/BrachistochroneProblem/Template.tex", "max_stars_repo_name": "Progklui/physicsProjects", "max_stars_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DynamicSystems/BrachistochroneProblem/Template.tex", "max_issues_repo_name": "Progklui/physicsProjects", "max_issues_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DynamicSystems/BrachistochroneProblem/Template.tex", "max_forks_repo_name": "Progklui/physicsProjects", "max_forks_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.2711864407, "max_line_length": 267, "alphanum_fraction": 0.6245270038, "num_tokens": 2030, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117983401364, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5966736190700286}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% =================================================================================================\n\\section*{Compute $\\Gamma^{a}{}_{bcd}$ directly}\n\nHere we will compute one of generalised connections directly from the connection. That is, we will\ncompute\n\\begin{align}\n   \\Gamma^{a}{}_{bcd} = \\Gamma^{a}{}_{(bc,d)} - 2 \\Gamma^{a}{}_{p(c} \\Gamma^{p}{}_{bd)}\n\\end{align}\ngiven an explict expression for the RNC connection $\\Gamma^{a}{}_{bc}$.\n\nThis code was written as a check for the {\\tt genGamma.tex} code. I had a discrepency between my\nnewly created Cadabra v2.0 codes and my old Cadabra v1.0 codes (which were the basis of my lcb09-03 paper).\nI found that my new codes agreed with this code and thus my old codes were wrong. I found the errors\nin the old code (see the updated codes in v1.0/rnc-new/gen-gamma.cdbp, see also the file v1.0/rnc-new/NOTES.txt).\n\nThe head-on approach shown in this code works for this simple compuation but for the higher\ngeneralised connections it's almost certainly going to be way too slow to be use. So this code is\nonly a check of the results from {\\tt genGamma.tex} code. The good news is -- they agree.\n\n\\clearpage\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   D{#}::Derivative.\n   \\nabla{#}::Derivative.\n   \\partial{#}::PartialDerivative.\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.\n   g_{a}^{b}::KroneckerDelta.\n   g^{a}_{b}::KroneckerDelta.\n   \\delta^{a}_{b}::KroneckerDelta.\n   \\delta_{a}^{b}::KroneckerDelta.\n\n   R_{a b c d}::RiemannTensor.\n   R_{a b c d}::Depends(\\nabla{#}).\n\n   x^{a}::Depends(\\partial{#}).\n\n   \\Gamma^{a}_{b c}::Depends(\\partial{#}).\n   \\Gamma^{a}_{b c}::TableauSymmetry(shape={2}, indices={1,2}).\n\n   Q_{a b c d}::Weight(label=numR,value=2).\n   Q_{a b c d e}::Weight(label=numR,value=3).\n   Q_{a b c d e f}::Weight(label=numR,value=4).\n   Q_{a b c d e f g}::Weight(label=numR,value=5).\n\n   # note: keeping numbering as is (out of order) to ensure R appears before \\nabla R etc.\n   def product_sort (obj):\n       substitute (obj,$ A^{a}                            -> A001^{a}               $)\n       substitute (obj,$ x^{a}                            -> A002^{a}               $)\n       substitute (obj,$ g^{a b}                          -> A003^{a b}             $)\n       substitute (obj,$ \\nabla_{e f g h}{R_{a b c d}}    -> A008_{a b c d e f g h} $)\n       substitute (obj,$ \\nabla_{e f g}{R_{a b c d}}      -> A007_{a b c d e f g}   $)\n       substitute (obj,$ \\nabla_{e f}{R_{a b c d}}        -> A006_{a b c d e f}     $)\n       substitute (obj,$ \\nabla_{e}{R_{a b c d}}          -> A005_{a b c d e}       $)\n       substitute (obj,$ R_{a b c d}                      -> A004_{a b c d}         $)\n       sort_product   (obj)\n       rename_dummies (obj)\n       substitute (obj,$ A001^{a}                  -> A^{a}                         $)\n       substitute (obj,$ A002^{a}                  -> x^{a}                         $)\n       substitute (obj,$ A003^{a b}                -> g^{a b}                       $)\n       substitute (obj,$ A004_{a b c d}            -> R_{a b c d}                   $)\n       substitute (obj,$ A005_{a b c d e}          -> \\nabla_{e}{R_{a b c d}}       $)\n       substitute (obj,$ A006_{a b c d e f}        -> \\nabla_{e f}{R_{a b c d}}     $)\n       substitute (obj,$ A007_{a b c 
d e f g}      -> \\nabla_{e f g}{R_{a b c d}}   $)\n       substitute (obj,$ A008_{a b c d e f g h}    -> \\nabla_{e f g h}{R_{a b c d}} $)\n\n       return obj\n\n   def truncate (obj,n):\n\n   # I would like to assign different weights to \\nabla_{a}, \\nabla_{a b}, \\nabla_{a b c} etc. but no matter\n   # what I do it appears that Cadabra assigns the same weight to all of these regardless of the number of subscripts.\n   # It seems that the weight is assigned to the symbol \\nabla alone. So I'm forced to use the following substitution trick.\n\n       tmp := @(obj).\n\n       substitute (tmp, $\\nabla_{e f g}{R_{a b c d}} -> Q_{a b c d e f g}$)\n       substitute (tmp, $\\nabla_{e f}{R_{a b c d}} -> Q_{a b c d e f}$)\n       substitute (tmp, $\\nabla_{e}{R_{a b c d}} -> Q_{a b c d e}$)\n       substitute (tmp, $R_{a b c d} -> Q_{a b c d}$)\n\n       ans = Ex(0)\n\n       for i in range (0,n+1):\n          foo := @(tmp).\n          bah = Ex(\"numR = \" + str(i))\n          keep_weight (foo, bah)\n          ans = ans + foo\n\n       substitute (ans, $Q_{a b c d e f g} -> \\nabla_{e f g}{R_{a b c d}}$)\n       substitute (ans, $Q_{a b c d e f} -> \\nabla_{e f}{R_{a b c d}}$)\n       substitute (ans, $Q_{a b c d e} -> \\nabla_{e}{R_{a b c d}}$)\n       substitute (ans, $Q_{a b c d} -> R_{a b c d}$)\n\n       return ans\n\n   def get_term (obj,n):\n\n       bah := @(obj).\n       distribute (bah)\n\n       substitute (bah, $\\nabla_{e f g}{R_{a b c d}} -> Q_{a b c d e f g}$)\n       substitute (bah, $\\nabla_{e f}{R_{a b c d}} -> Q_{a b c d e f}$)\n       substitute (bah, $\\nabla_{e}{R_{a b c d}} -> Q_{a b c d e}$)\n       substitute (bah, $R_{a b c d} -> Q_{a b c d}$)\n\n       foo = Ex(\"numR = \" + str(n))\n       keep_weight (bah, foo)\n\n       substitute (bah, $Q_{a b c d e f g} -> \\nabla_{e f g}{R_{a b c d}}$)\n       substitute (bah, $Q_{a b c d e f} -> \\nabla_{e f}{R_{a b c d}}$)\n       substitute (bah, $Q_{a b c d e} -> \\nabla_{e}{R_{a b c d}}$)\n       substitute (bah, $Q_{a b c d} -> R_{a b c d}$)\n\n       return bah\n\n   def tidy (obj,number):\n      bah  = Ex(str(number))\n      tmp := @(bah) @(obj).\n      distribute  (tmp)\n      factor_out  (tmp,$A^{a?},x^{b?}$)\n      ans := @(tmp) / @(bah).\n      return ans\n\n   import cdblib\n\n   Gamma = cdblib.get ('Gamma','../connection.json')\n\n   defGamma := \\Gamma^{d}_{a b} -> @(Gamma).\n\n   genGam := A^{d} A^{b} A^{c} (\\partial_{d}{\\Gamma^{a}_{b c}} - 2 \\Gamma^{a}_{p c} \\Gamma^{p}_{b d}).\n\n   substitute     (genGam,defGamma)   # cdb (genGam.001,genGam)\n   distribute     (genGam)\n   unwrap         (genGam)            # cdb (genGam.002,genGam)\n   distribute     (genGam)            # cdb (genGam.003,genGam)\n   product_rule   (genGam)            # cdb (genGam.004,genGam)\n   distribute     (genGam)            # cdb (genGam.005,genGam)\n   substitute     (genGam,$\\partial_{a}{x^{b}}->\\delta_{a}^{b}$)   # cdb (genGam.006,genGam)\n   eliminate_kronecker (genGam)       # cdb (genGam.007,genGam)\n   genGam = truncate     (genGam,5)   # cdb (genGam.008,genGam)\n   genGam = product_sort (genGam)     # cdb (genGam.009,genGam)\n   rename_dummies (genGam)            # cdb (genGam.010,genGam)\n   canonicalise   (genGam)            # cdb (genGam.011,genGam)\n\n   # all done, now for some housekeeping\n\n   term3 = get_term (genGam,3)   # cdb (term3.001,term3)\n   term4 = get_term (genGam,4)   # cdb (term4.001,term4)\n   term5 = get_term (genGam,5)   # cdb (term5.001,term5)\n\n   term3 = tidy (term3,2)\n   term4 = tidy 
(term4,120)\n   term5 = tidy (term5,180)\n\n   genGam := @(term3) + @(term4) + @(term5).\n\n   scaledGamma := 360 @(genGam).  # cdb (scaledGamma.001,scaledGamma)\n\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{term3.001} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{term4.001} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{term5.001} \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n% =================================================================================================\n\\section*{Summary}\n\n\\begin{dgroup*}\n   \\begin{dmath*} 360 A^b A^c A^d \\Gamma^{a}_{b c d} = \\cdb{scaledGamma.001} \\end{dmath*}\n\\end{dgroup*}\n\n\\end{document}", "meta": {"hexsha": "9b2b987d10a5bbbeb42a20f4d4e65d63c6024871", "size": 7497, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/checks/check-genGamma.tex", "max_stars_repo_name": "leo-brewin/riemann-normal-coords", "max_stars_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-20T16:15:58.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-20T16:15:58.000Z", "max_issues_repo_path": "source/cadabra/checks/check-genGamma.tex", "max_issues_repo_name": "leo-brewin/riemann-normal-coords", "max_issues_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/checks/check-genGamma.tex", "max_forks_repo_name": "leo-brewin/riemann-normal-coords", "max_forks_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.8776595745, "max_line_length": 124, "alphanum_fraction": 0.5327464319, "num_tokens": 2572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706735, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5966736159885728}}
{"text": "% Define document class\n\\documentclass[modern]{aastex631}\n\n% Filler text\n\\usepackage{blindtext}\n\n% Math shortcuts\n\\newcommand{\\like}{\\mathcal{L}}\n\\newcommand{\\order}[1]{\\mathcal{O}\\left( #1 \\right)}\n\n% Begin!\n\\begin{document}\n\n% Title\n\\title{When and How Can we Fit Residuals?}\n\n% Author list\n\\author[0000-0003-1540-8562]{Will M. Farr}\n\\email{will.farr@stonybrook.edu}\n\\email{wfarr@flatironinstitute.org}\n\\affiliation{Department of Physics and Astronomy, Stony Brook University, Stony Brook NY 11794, USA}\n\\affiliation{Center for Computational Astrophysics, Flatiron Institute, New York NY 10010, USA}\n\n% Abstract with filler text\n\\begin{abstract}\n    I discuss the effect of fixing the \\emph{residuals} from a global fit in\n    LISA-like data when fitting broadband signals like the inspiral of a\n    high-redshift seed BBH.\n\\end{abstract}\n\n% Main body with filler text\n\\section{Introduction}\n\nThe joint likelihood for a single BBH merger waveform $h$ and some number of\nwhite dwarf monochromatic or near-monochromatic waveforms in LISA data $d$ is\n%\n\\begin{equation}\n    \\log \\like \\propto - \\Delta f \\sum_i \\frac{\\left| d_i - h_i - \\sum_\\alpha g_{\\alpha, i} \\right|^2}{2 S_i},\n\\end{equation}\n%\nwhere the sum runs over frequency-domain components and $S_i$ is the noise PSD\nat frequency $i$.  Expanding to quadratic order about $h_i = h_{i,0}$ in terms\nof the parameters $\\theta$ that control the waveform, we have\n%\n\\begin{multline}\n    \\log \\like \\sim - \\Delta f \\sum_i \\frac{1}{2 S_i} \\left( \\left| d_i - h_{i,0} - G_i \\right|^2 - 2 \\Re \\left( d_i - h_{i,0} - G_i \\right)^* \\frac{\\partial h_i}{\\partial \\theta^a} \\left( \\theta^a - \\theta^a_0 \\right)  \\right. \\\\ \\left. 
+ \\left(\\theta^a - \\theta_0^a \\right) \\left( \\frac{\\partial h_i^*}{\\partial \\theta^a} \\frac{\\partial h_i}{\\partial \\theta^b} + \\Re \\left( d_i - h_{i,0} - G_i \\right)^* \\frac{\\partial^2 h_i}{\\partial \\theta^a \\partial \\theta^b} \\right) \\left( \\theta^b - \\theta_0^b \\right) \\right),\n\\end{multline}\n%\nwhere\n%\n\\begin{equation}\n    G_i \\equiv \\sum_\\alpha g_{\\alpha,i}.\n\\end{equation}\n%\nCollecting terms, we see that we have a Gaussian likelihood for the parameters\n$\\theta$, with\n%\n\\begin{equation}\n    \\log \\like \\sim - \\left( A + B_a \\Delta \\theta^a + \\frac{1}{2} \\Delta \\theta^a C_{ab} \\Delta \\theta^b \\right),\n\\end{equation}\n%\nwith\n%\n\\begin{equation}\n    B_a = \\Delta f \\sum_i \\frac{1}{S_i} \\left( - \\Re \\left( d_i - h_{i,0} - G_i \\right)^* \\frac{\\partial h_i}{\\partial \\theta^a} \\right)\n\\end{equation}\n%\nand\n%\n\\begin{equation}\n    C_{ab} = \\Delta f \\sum_i \\frac{1}{S_i} \\left( \\frac{\\partial h_i^*}{\\partial \\theta^a} \\frac{\\partial h_i}{\\partial \\theta^b} + \\Re \\left( d_i - h_{i,0} - G_i \\right)^* \\frac{\\partial^2 h_i}{\\partial \\theta^a \\partial \\theta^b} \\right).\n\\end{equation}\n%\n\nNow we make the assumption that the sources making up $G$ are \\emph{narrowband},\nthat is that their S/N accumulates over a small number of bins around $i = i_0$,\nsuch that\n%\n\\begin{equation}\n\\frac{\\left| g_{\\alpha,i} \\right|^2}{S_i} \\sim \\begin{cases}\n    \\order{\\rho_\\alpha^2} & \\left| i - i_0 \\right| \\sim \\order{1} \\\\\n    0 & \\mathrm{otherwise}\n\\end{cases}.\n\\end{equation}\n%\nWe further assume that $h$ and therefore $h_0$ are \\emph{broadband}, so\n%\n\\begin{equation}\n\\frac{\\left| h_i \\right|^2}{S_i} \\sim \\frac{\\order{\\rho_h^2}}{N}\n\\end{equation}\n%\nwhere $N \\gg 1$ is the number of bins over which the S/N of $h$ accumulates.\n\nLet us suppose that the data are composed of signals like $h$ and $G$ plus colored noise $n$ with PSD $S$:\n%\n\\begin{equation}\n    d_i = \\bar{h}_i + \\bar{G}_i + n_i\n\\end{equation}\n%\n\n\n\n\\end{document}\n", "meta": {"hexsha": "2cdc9a62dd5c423b08c08da952e80e8fd6c067f7", "size": 3588, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/ms.tex", "max_stars_repo_name": "farr/AnalyzeTheResiduals", "max_stars_repo_head_hexsha": "e60bc29daf213446c51b6fa257cd3eb69f5e4dc7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/ms.tex", "max_issues_repo_name": "farr/AnalyzeTheResiduals", "max_issues_repo_head_hexsha": "e60bc29daf213446c51b6fa257cd3eb69f5e4dc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ms.tex", "max_forks_repo_name": "farr/AnalyzeTheResiduals", "max_forks_repo_head_hexsha": "e60bc29daf213446c51b6fa257cd3eb69f5e4dc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.5, "max_line_length": 518, "alphanum_fraction": 0.6819955407, "num_tokens": 1257, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772417253256, "lm_q2_score": 0.6825737408694988, "lm_q1q2_score": 0.5966221726933487}}
{"text": "\\section{two scatterer folding}\r\n\r\nTwo scatterers: incident is A, reflected is B, and transmitted is G. Between are C (forward) and D (backward).\r\n\r\n% see notes, 20080619\r\nFirst, boundary conditions match for the amplitudes and the derivatives, where n is the single incident channel, which scatters into channels m. For the first scatterer,\r\n\\begin{equation}\r\n\\begin{gathered}\r\nA_n \\delta_{mn} + B_m = C_m+D_m \\\\\r\n(i k_m C_m-i k_m D_m) - (i k_m \\delta_{nm}^{(1)} A_m-i k_m B_m) = \r\n\\sum_{m'} \\Gamma_{mm'} (A_n \\delta_{nm'}+B_{m'})\r\n\\end{gathered}\r\n\\end{equation}\r\nand similarly for the second scatterer, which is at x=a instead of zero,\r\n\\begin{equation}\r\n\\begin{gathered}\r\nC_m \\exp(i k_m a) + D_m \\exp(-i k_m a) = G \\exp(i k_m a) \\\\\r\n(i k G_m \\exp(i k_m a)-(i k_m C_m \\exp(i k_m a) - i k_m D_m \\exp(-i k_m a)) = \r\n\\sum_{m'} \\Gamma_{mm'}^{(2)} G_{m'} \\exp(i k_{m'} a)\r\n\\label{matchatxequalsa}\r\n\\end{gathered}\r\n\\end{equation}\r\nthe k variable is a matrix of diagonal elements indexed by channel number\r\n\\begin{equation}\r\n\\hat{k}_{nn} = \\sqrt{(\\frac{2 \\pi}{\\lambda})^2 -(\\frac{\\pi n}{w})^2}\r\n\\end{equation}\r\nwhere n goes from 1 to $N+N_c$\r\n\r\n\\begin{equation}\r\n\\begin{gathered}\r\nC_m +D_m = \\delta_{mn} A_m+B_m \\\\\r\nC_m -D_m = A_m \\delta_{mn} - B_m +\\frac{1}{i k_m} \\sum_{m'} \r\n\\Gamma_{mm'}^{(1)} (\\delta_{nm'} A_{m'}+B_{m'})\r\n\\end{gathered}\r\n\\end{equation}\r\nSolve for $C_m$ and $D_m$ at x = 0.\r\n\\begin{comment}\r\nuse\r\nx+y=A\r\nx-y=B\r\n==>  2x = A+B\r\n====> x = (A+B)/2\r\nrepeat for y\r\n\\end{comment}\r\n\r\n\\begin{equation}\r\n\\begin{gathered}\r\nC_m = A_m \\delta_{nm} + \\frac{1}{2 i k_m}\\sum_{m'}\\Gamma_{mm'}^{(1)} \r\n(\\delta_{nm'} A_{m'}+B_{m'}) \\\\\r\nD_m = B_m - \\frac{1}{2 i k_m}\\sum_{m'}\\Gamma_{mm'}^{(1)} \r\n(\\delta_{nm'} A_{m'}+B_{m'})\r\n\\end{gathered}\r\n\\end{equation}\r\n\\begin{equation}\r\n\\begin{gathered}\r\nA_m \\delta_{nm} \\exp(i k_m a)+B_m \\exp(-i k_m a) + \r\n\\frac{1}{k_m}(\\sum_{m'}\\Gamma_{mm'}^{(1)}(\\delta_{nm'} A_{m'}+B_{m'})) sin(k_m a) = \\\\\r\nG_m \\exp(i k_m a) \\\\\r\nG_m \\exp(i k_m a) - (A_m \\delta_{nm} \\exp(i k_m a)-B_m \\exp(-i k_m a) + \r\n\\frac{1}{i k_m}(\\sum_{m'}\\Gamma_{mm'}^{(1)}(\\delta_{nm'} A_{m'}+B_{m'})) cos(k_{m'} a) = \\\\\r\n\\frac{1}{i k_m}(\\sum_{m'}\\Gamma_{mm'}^{(2)} G_{m'} \\exp(i k_{m'} a)) \\\\\r\n\\end{gathered}\r\n\\end{equation}\r\ndivide everything by $A_m$ to get normalized incident wave\r\n\\begin{equation}\r\n\\begin{gathered}\r\n\\delta_{nm} \\exp(i k_m a)+r_{mn} \\exp(-i k_m a) + \r\n\\frac{1}{k_m}(\\sum_{m'}\\Gamma_{mm'}^{(1)}(\\delta_{nm'}+r_{nm'})) sin(k_m a) = \\\\\r\nt_{nm} \\exp(i k_m a) \\\\\r\nt_{nm} \\exp(i k_m a) - (\\delta_{nm} \\exp(i k_m a)-r_{mn} \\exp(-i k_m a) + \r\n\\frac{1}{i k_m}(\\sum_{m'}\\Gamma_{mm'}^{(1)}(\\delta_{nm'}+r_{nm'})) cos(k_{m'} a) = \\\\\r\n\\frac{1}{i k_m}(\\sum_{m'}\\Gamma_{mm'}^{(2)} t_{nm'} \\exp(i k_{m'} a)) \\\\\r\n\\end{gathered}\r\n\\end{equation}\r\nwhere \r\n\\begin{equation}\r\n\\begin{gathered}\r\n\\frac{B}{A} \\equiv r_{nm} \\\\\r\n\\frac{G}{A} \\equiv t_{nm}\r\n\\end{gathered}\r\n\\end{equation}\r\nNow we'll switch from indexed variable to matrix notation\r\n\r\n\\begin{equation}\r\n\\begin{gathered}\r\n(\\exp(-i \\hat{k} a)+\\hat{k}^{-1} sin(\\hat{k} a) \\hat{\\Gamma}) \\vec{r} + \r\n(-\\exp(i \\hat{k} a)) \\vec{t} = \\\\\r\n-(\\exp(i \\hat{k} a +\\hat{k}^{-1} sin(\\hat{k} a) \\hat{\\Gamma}) \\vec{h}\\\\\r\n(\\exp(-i \\hat{k} a)+i \\hat{k}^{-1} 
\r\nUsing this ``gamma combiner'' algorithm together with the above ``gamma reducer'' algorithm, we can numerically compute how the two differ, and how the separation (equivalently, the density when there are many scatterers) is affected as N varies and as $N_c$ is included or not.", "meta": {"hexsha": "ea940201ff6ae8c5e438b1fd97b9c7129cb6b1cc", "size": 5079, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_q1d_double_scatterer_derivation.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_q1d_double_scatterer_derivation.tex", "max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_q1d_double_scatterer_derivation.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.6796875, "max_line_length": 241, "alphanum_fraction": 0.5983461311, "num_tokens": 2131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.874077222043951, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5966221479712205}}
{"text": "\n\\section{The \\insertionsort algorithm}\n\\Label{sec:insertionsort}\n\nLike \\selectionsort,\nthe algorithm \\insertionsort traverses the given array \\inl{a[0..n-1]}\nleft to right, maintaining a left-adjusted, \nconstantly increasing range \\inl{a[0..i-1]} that is already in increasing order.\n\nUnlike \\selectionsort, however, \\insertionsort adds \\inl{a[i]} to the\ninitial segment in the \\inl{i}th step (see Figure~\\ref{fig:insertionsort-example}).\n%\nIt determines the (rightmost) appropriate position to insert \\inl{a[i]}\nby a call to \\specref{upperbound} and then uses \\specref{rotatei} to \nperform a \\emph{circular shift} to establish the insertion.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=0.65\\textwidth]{Figures/insertion_sort.pdf}\n\\caption{An iteration of \\insertionsort}\n\\Label{fig:insertionsort-example}\n\\end{center}\n\\end{figure}\n\n\\FloatBarrier\n\n\\subsection{Formal specification of \\insertionsort}\n\nThe following listing shows our (generic sorting) contract for \\insertionsort.\n\n\\input{Listings/insertion_sort.h.tex}\n\n\\clearpage\n\n\\subsection{Implementation of \\insertionsort}\n\nThe implementation of \\insertionsort is shown in the next listing.\n%\nWe used an \\acsl statement contract to specify those aspects of the\n\\rotatei contract that are needed here.\n%\nProperties related to the result of \\insertionsort being in increasing\norder are labelled \\inl{increasing}.\nProperties related to the rearrangement of elements are labelled \\inl{reorder} and,\nwhenever their order isn't changed, \\inl{unchanged}.\n\n\\input{Listings/insertion_sort.c.tex}\n\nWhen we originally \nimplemented and verified \\rotatei, we hadn't yet in mind to\nuse that function inside of \\insertionsort.\n%\nConsequently, the properties needed for the latter\naren't directly provided by the former.\n%\nOne approach to solve this problem is to add the new properties to\nthe contract of \\specref{rotatei} and repeat its verification proof.\nHowever, if \\rotatei is assumed to be part of a pre-verified library,\nthis approach isn't feasible, since \\rotatei's implementation may not\nbe available for re-verification.\n%\nTherefore, we used another approach, viz.\\ to prove that \\rotatei's\noriginal specification \\emph{implies} all the properties we need in\n\\insertionsort.\nThis is another use of the Hoare calculus' implication rule\n(\\S\\ref{sec:The Implication Rule}).\n%\nWe used several lemmas, shown below,\nto make the necessary implications explicit, and to help the provers to\nestablish them.\n%\nSome of them needed manual proofs by induction.\n\n\\clearpage\n\nLemma \\logicref{IncreasingEqual} in the following listing assumes an ordered range\n\\inl{a[m..n-1]} and claims that every (elementwise) equal range\nrange \\inl{a[m+p..n+p-1]} is ordered, too.\n%\nIt is needed to establish that the call to \\specref{rotatei} preserves the order of\nthose elements that are shifted upwards \n(cf.\\ Figure~\\ref{fig:insertionsort-example}).\n\nSimilarly, lemma \\logicref{CountEqual} says that two elementwise equal ranges\n\\inl{a[m..n-1]} and \\inl{a[p..p+n-m-1]} will result in the same occurrence count,\nfor each value \\inl{v}.\n%\nThis lemma is useful in the proof of the lemma\n\\logicref{CircularShiftMultisetReorder} (discussed below),\nsince the predicate \\logicref{MultisetReorder}\nis defined via the logic function \\logicref{Count}.\n\nLemma \\logicref{CircularShiftStrictLowerBound} in the next listing\nis used to prove that the range 
Lemma \\logicref{CircularShiftStrictLowerBound} in the next listing\nis used to prove that if the range \\inl{a[k..i-1]} has\n\\inl{a[i]} as a strict lower bound before our call to \\rotatei,\nthen it has \\inl{a[k]} as such a bound after the call.\nNote that this lemma reflects that \\rotatei is used as a \\emph{circular shift}\nat the call site.\n%\nSimilarly, lemma \\logicref{CircularShiftMultisetReorder} establishes that\na circular shift just reorders the range it is applied to.\n\n\\input{Listings/CircularShiftLemmas.acsl.tex}\n\n\\clearpage\n\n", "meta": {"hexsha": "93135872aeb4d82a97b14003a60d848e2562cd6b", "size": 3815, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/sorting/insertion_sort.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/sorting/insertion_sort.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/sorting/insertion_sort.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 35.6542056075, "max_line_length": 83, "alphanum_fraction": 0.7826998689, "num_tokens": 1011, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.8376199633332891, "lm_q1q2_score": 0.5965799062797315}}
{"text": "\\chapter{Additional Kolmogorov-Smirnov Two Sample Test Tables}\n\\label{appendix:ks-test}\nThis appendix lists the Kolmogorov-Smirnov tables comparing the features for intensity and texture.\n\n\\begin{table}[H]\n\\label{table:blob-texture-ks}\n\\centering\n\\primitiveinput{tables/texture_features_ks.tex}\n\\caption{Comparison of the Kolmogorov-Smirnov test results for each texture feature derived from patches of images defined by blobs across ten scales.}\n\\end{table}\n\n\\primitiveinput{tables/intensity_features_ks.tex}\n\n\\begin{table}[H]\n\\label{table:line-texture-ks}\n\\centering\n\\primitiveinput{tables/texture_features_ks_lines.tex}\n\\caption{Comparison of the Kolmogorov-Smirnov test results for each texture feature derived from patches of images defined by lines.}\n\\end{table}\n\n\\begin{table}[H]\n\\label{table:line-intensity-ks}\n\\centering\n\\primitiveinput{tables/line_intensity_features_ks.tex}\n\\caption{Comparison of the Kolmogorov-Smirnov test results for each intensity feature derived from patches of images defined by lines.}\n\\end{table}\n", "meta": {"hexsha": "163ecaba7a7e136b83ce52bae8579ef725e7392d", "size": 1033, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/final-report/Appendix3/appendix3.tex", "max_stars_repo_name": "samueljackson92/major-project", "max_stars_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2015-01-26T16:23:29.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-17T00:57:42.000Z", "max_issues_repo_path": "documents/final-report/Appendix3/appendix3.tex", "max_issues_repo_name": "samueljackson92/major-project", "max_issues_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 64, "max_issues_repo_issues_event_min_datetime": "2015-02-05T06:34:56.000Z", "max_issues_repo_issues_event_max_datetime": "2015-05-03T15:46:49.000Z", "max_forks_repo_path": "documents/final-report/Appendix3/appendix3.tex", "max_forks_repo_name": "samueljackson92/major-project", "max_forks_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2592592593, "max_line_length": 151, "alphanum_fraction": 0.8180058083, "num_tokens": 271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199714402812, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5965799018210759}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{subfigure}\n\\usepackage{float}\n\\usepackage{ulem}\n\\usepackage{bm}\n\\usepackage{anysize}\n\\usepackage{pythonhighlight}\n\n\\marginsize{2cm}{2cm}{0.9cm}{1.8cm}\n\n\\title{EECE 5639 Computer Vision\\\\ [2ex] \\begin{large} Homework \\#2 \\end{large} }\n\\author{Jiyu Tian}\n\\date{}\n\n\\begin{document}\n\\maketitle\n%%---------------------------------------------------------------\n%% Question 1\n%%---------------------------------------------------------------\n\\section{Solution:}\nAs shown in Figure \\ref{raw}, the estimated noise of generated images is 1.9462, which is very approximate to 2, the targeted $\\sigma$. And the worst case acquisition noise is 4.0258, around $2\\sigma$.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width = 1\\textwidth]{fig1.png}\n\\caption{Raw Images}\n\\label{raw}\n\\end{figure}\n%%---------------------------------------------------------------\n%% Question 2\n%%---------------------------------------------------------------\n\\section{Solution:}\nAs shown in Figure \\ref{filtered}, the estimated noise of the images after $3\\times3$ box filter is 0.6472, and the worst case acquisition noise is 1.3971. Compared to the original ones, the image noises are reduced significantly, which we can also easily tell from the two figures.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width = 1\\textwidth]{fig2.png}\n\\caption{Raw Images}\n\\label{filtered}\n\\end{figure}\n%%---------------------------------------------------------------\n%% Question 3\n%%---------------------------------------------------------------\n\\section{Solution:}\nThe 2D Gaussian filter mask can be expressed as:\n\\begin{equation*}\n    G(x,y) = \\frac{1}{2\\pi\\sigma^2}e^{-\\frac{x^2+y^2}{2\\sigma^2}}\n\\end{equation*}\nwhich can also be separated into two 1D Gaussian filters:\n\\begin{equation*}\n    G(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{x^2}{2\\sigma^2}}, G(y) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{y^2}{2\\sigma^2}}\n\\end{equation*}\nThe minimum filter size is $2\\times ceil(2\\sigma)+1=7$. With implementation in python, a $7\\times 7$ 2D Gaussian filter mask with standard deviation $1.4$ can be applied onto the sample figure, as shown in Figure \\ref{ouput}. Please refer to Appendix for detailed code.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width = 1\\textwidth]{output.png}\n\\caption{Output of images after applying Gaussian filter mask}\n\\label{ouput}\n\\end{figure}\n\n\\vfill\n\\clearpage\n%%---------------------------------------------------------------\n%% Question 4\n%%---------------------------------------------------------------\n\\section{Solution:}\n(a) With filter (a): \n\\begin{equation*}\n{\\left[ \\begin{array}{cccccccccc}\n6 & 8 & 10 & 16 & 22 & 28 & 34&  40 &  32& 24\n\\end{array} \\right]}\n\\end{equation*}\n\\noindent With filter (b):\n\\begin{equation*}\n{\\left[ \\begin{array}{cccccccccc}\n7 & 9 & 10 & 13 & 19 & 31 & 37 & 40 & 36 & 28\n\\end{array} \\right]}\n\\end{equation*}\n\\noindent (b) Consider a 1D image which is large enough to ignore the borders, with additive, uncorrelated Gaussian noise with zero mean and variance $\\sigma^2$. Since filter (b) has different weights upon each filtered pixel, its computation cost is higher than that of filter (a). 
As for the variance of the noise: with filter (a), a pixel in the output noise $O$ in terms of the input noise $I$ is:\n\\begin{equation*}\nO_i = \\frac{1}{5}\\sum^2_{j = -2}I_{i+j}\n\\end{equation*}\nThe expected value of a pixel after filtering is:\n\\begin{equation*}\nE[O] = E\\left[\\frac{1}{5}\\sum^2_{j = -2}I_{i+j}\\right] = \\frac{1}{5}\\sum^2_{j=-2}E\\left[I_{i+j}\\right] = 0\n\\end{equation*}\nThe variance of a pixel after filtering is:\n\\begin{equation*}\n\\begin{aligned}\nE[(O-E[O])^2] &= E[O^2] \\\\\n&= E\\left[\\frac{1}{25}\\left(\\sum^2_{j = -2}I_{i+j}\\right)^2\\right]\\\\\n&= \\frac{1}{25}E\\left[\\sum^2_{j=-2}I^2_{i+j} + \\sum^2_{\\substack{m,n = -2\\\\ m \\neq n}}I_{i+m}I_{i+n}\\right]\\\\\n&= \\frac{1}{25}\\sum^2_{j=-2}E\\left[I^2_{i+j}\\right]\\\\\n&=\\frac{1}{25}\\times5\\sigma^2 = \\frac{\\sigma^2}{5}\n\\end{aligned}\n\\end{equation*}\nWith filter (b):\n\\begin{equation*}\nO_i = \\frac{1}{10}\\left(I_{i-2} + 2I_{i-1} + 4I_{i} + 2I_{i+1} + I_{i+2}\\right)\n\\end{equation*}\nThe expected value of a pixel after filtering is:\n\\begin{equation*}\n\\begin{aligned}\nE[O] &= E\\left(\\frac{1}{10}\\left[I_{i-2} + 2I_{i-1} + 4I_{i} + 2I_{i+1} + I_{i+2}\\right]\\right)\\\\\n&= \\frac{1}{10}\\left[E[I_{i-2}] + E[2I_{i-1}] + E[4I_{i}] + E[2I_{i+1}] + E[I_{i+2}] \\right] \\\\\n&= 0\n\\end{aligned}\n\\end{equation*}\nThe variance of a pixel after filtering is:\n\\begin{equation*}\n\\begin{aligned}\nE[(O-E[O])^2] &= E[O^2] \\\\\n&=\\frac{1}{100}E\\left[ I^2_{i-2} + 4I^2_{i-1} + 16I^2_{i} + 4I^2_{i+1} + I^2_{i+2} +\\sum_l\\sum^2_{\\substack{m,n = -2\\\\ m \\neq n}}\\alpha_lI_{i+m}I_{i+n} \\right]\\\\\n&=\\frac{1}{100}\\left[ E[I^2_{i-2}] + 4E[I^2_{i-1}] + 16E[I^2_{i}] + 4E[I^2_{i+1}] + E[I^2_{i+2}] \\right]\\\\\n&=\\frac{1}{100}\\times26\\sigma^2=\\frac{13}{50}\\sigma^2> \\frac{\\sigma^2}{5}\n\\end{aligned}\n\\end{equation*}\nTherefore, after these two averaging filters, the expected value stays the same, but the variance after filter (b) is larger than that after filter (a).\n%%---------------------------------------------------------------\n%% Question 5\n%%---------------------------------------------------------------\n\\section{Solution:}\nWhen the operator is centered on the line of grey level 50, i.e. $I = [noise\\ \\ line\\ \\ noise]$:\n\\begin{equation*}\n\\begin{aligned}\n&P(I = \\left[ \\begin{array}{ccc}\nsalt & line & salt\n\\end{array} \\right]) = P(O = -100) = 0.49\\\\\n&P(I = \\left[ \\begin{array}{ccc}\nsalt & line & pepper\n\\end{array} \\right]) = P(O = 0) = 0.21\\\\\n&P(I = \\left[ \\begin{array}{ccc}\npepper & line & salt\n\\end{array} \\right]) = P(O = 0) = 0.21\\\\\n&P(I = \\left[ \\begin{array}{ccc}\npepper & line & pepper\n\\end{array} \\right]) = P(O = 100) = 0.09\n\\end{aligned}\n\\end{equation*}\nAnd thus the PDF of output value is:\n\\begin{equation*}\n    f(x) = \\frac{49}{100}\\delta(x+100) + \\frac{42}{100}\\delta(x) + \\frac{9}{100}\\delta(x-100)\n\\end{equation*}\nWhen the operator is centered on the pixel adjacent to the line, i.e. 
$I = [line\\ \\ noise\\ \\ noise]$:\n\\begin{equation*}\n\\begin{aligned}\n&P(I = \\left[ \\begin{array}{ccc}\nline & salt & salt\n\\end{array} \\right]) = P(O = 50) = 0.49\\\\\n&P(I = \\left[ \\begin{array}{ccc}\nline & salt & pepper\n\\end{array} \\right]) = P(O = 150) = 0.21\\\\\n&P(I = \\left[ \\begin{array}{ccc}\nline & pepper & salt\n\\end{array} \\right]) = P(O = -150) = 0.21\\\\\n&P(I = \\left[ \\begin{array}{ccc}\nline & pepper & pepper\n\\end{array} \\right]) = P(O = -50) = 0.09\n\\end{aligned}\n\\end{equation*}\nAnd thus the PDF of output value is:\n\\begin{equation*}\n    f(x) = \\frac{49}{100}\\delta(x-50) + \\frac{21}{100}\\delta(x-150) + \\frac{21}{100}\\delta(x+150) + \\frac{9}{100}\\delta(x+50)\n\\end{equation*}\n\n\\vfill\n\\clearpage\n%%---------------------------------------------------------------\n%% Question 6\n%%---------------------------------------------------------------\n\\section{Solution:}\nThe image I is:\n\\begin{equation*}\nI={\\left[ \\begin{array}{cccccccc}\n0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\\\\n1 & 0 & 1 & 2 & 3 & 4 & 5 & 6\\\\\n2 & 1 & 0 & 1 & 2 & 3 & 4 & 5\\\\\n3 & 2 & 1 & 0 & 1 & 2 & 3 & 4\\\\\n4 & 3 & 2 & 1 & 0 & 1 & 2 & 3\\\\\n5 & 4 & 3 & 2 & 1 & 0 & 1 & 2\\\\\n6 & 5 & 4 & 3 & 2 & 1 & 0 & 1\\\\\n7 & 6 & 5 & 4 & 3 & 2 & 1 & 0\\\\\n\\end{array} \\right]}\n\\end{equation*}\nAfter applying a $3\\times3$ median filter, the output is \n\\begin{equation*}\nO={\\left[ \\begin{array}{cccccccc}\n0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\\\\n1 & 1 & 1 & 2 & 3 & 4 & 5 & 6\\\\\n2 & 1 & 1 & 1 & 2 & 3 & 4 & 5\\\\\n3 & 2 & 1 & 1 & 1 & 2 & 3 & 4\\\\\n4 & 3 & 2 & 1 & 1 & 1 & 2 & 3\\\\\n5 & 4 & 3 & 2 & 1 & 1 & 1 & 2\\\\\n6 & 5 & 4 & 3 & 2 & 1 & 1 & 1\\\\\n7 & 6 & 5 & 4 & 3 & 2 & 1 & 0\\\\\n\\end{array} \\right]}\n\\end{equation*}\n\n%%---------------------------------------------------------------\n%% Question 7\n%%---------------------------------------------------------------\n\\section{Solution:}\nThe input image is:\n\\begin{equation*}\nI={\\left[ \\begin{array}{cccccccc}\n4 & 4 & 4 & 4 & 8 & 8 & 8 & 8\n\\end{array} \\right]}\n\\end{equation*}\nAfter applying the median filter assuming that the border pixels are not changed:\n\\begin{equation*}\nO={\\left[ \\begin{array}{cccccccc}\n4 & 4 & 4 & 4 & 8 & 8 & 8 & 8\n\\end{array} \\right]}\n\\end{equation*}\nAfter applying the averaging mask assuming that the border pixels are not changed:\n\\begin{equation*}\nO={\\left[ \\begin{array}{cccccccc}\n4 & 4 & 4 & 5 & 7 & 8 & 8 & 8\n\\end{array} \\right]}\n\\end{equation*}\nThe median filter keeps the step image unchanged, while the averaging mask smooths the step.\n\\vfill\n\\clearpage\n%%---------------------------------------------------------------\n%% Appendix\n%%---------------------------------------------------------------\n\\section*{Appendix}\n\\begin{python}\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport numpy as np\nimport matplotlib.pylab as plt\nimport matplotlib.image as mpimg\nplt.rcParams['figure.figsize'] = 15, 6\n#plt.rcParams['figure.dpi'] = 300\n\n\nNUMBER = 10\nIMG_SHAPE = (256, 256)\nGREY_LEVEL = 128\nMU = 0\nSIGMA = 2\n\nclass Homework2(object):\n    \n    def __init__(self):\n        pass\n    \n    @staticmethod\n    def gen_one_image():\n        \"\"\"Generate one image.\"\"\"\n        img = GREY_LEVEL * np.ones(IMG_SHAPE)\n        noise = np.random.normal(MU, SIGMA, IMG_SHAPE)\n        image = img + noise\n        \n        return image\n    \n    def gen_image(self, nimage):\n        \"\"\"Generate all required images.\"\"\"\n        tmp = [0] * 
nimage\n        for i in np.arange(nimage):\n            tmp[i] = self.gen_one_image()\n        \n        return np.array(tmp)\n\n    @staticmethod\n    def EST_NOISE(images):\n        \"\"\"Implementation of EST_NOISE in Chapter 2 of Trucco and Verri.\"\"\"\n        num = images.shape[0]\n        m_e_bar = sum(images)/num\n        m_sigma = np.sqrt(sum((images - m_e_bar)**2)/(num - 1))\n    \n        return m_sigma\n    \n    def solve_p1(self):\n        \"\"\"Solve problem #1.\"\"\"\n        all_image = self.gen_image(NUMBER)\n        result = self.EST_NOISE(all_image)\n        print(\"\\n\\nSolution for problem #1.\")\n        self.plot_result(all_image, result)\n        \n        self.images = all_image\n   \n    @staticmethod\n    def apply_2d_filter(bfilter, timage):\n        \"\"\"Apply given 2D filter onto an image.\n        \n        Parameters\n        ----------\n        bfilter: array-like\n            The filter\n        timage: array-like\n            Targeted image\n        \"\"\"\n        image_shape = timage.shape\n        ovrlay = int(bfilter.shape[0] / 2)\n        tmp_matrix = np.zeros(np.array(image_shape) + 2 * ovrlay)\n        tmp_matrix[ovrlay:-ovrlay, ovrlay:-ovrlay] = timage\n        res_matrix = np.zeros(timage.shape)\n        for i in np.arange(image_shape[0]) + ovrlay:\n            for j in np.arange(image_shape[1]) + ovrlay:\n                local_matrix = tmp_matrix[i - ovrlay:i + ovrlay + 1, \n                                          j - ovrlay:j + ovrlay + 1]\n                res_matrix[i - ovrlay, j - ovrlay] = sum(sum(local_matrix * bfilter))\n        return res_matrix\n    \n    def solve_p2(self):\n        \"\"\"Solve problem #2.\"\"\"\n        box_filter = np.ones((3, 3)) / 9\n        all_image = self.images\n        filtered_image = [0] * NUMBER\n        for i in np.arange(NUMBER):\n            filtered_image[i] = self.apply_2d_filter(box_filter, all_image[i])\n        filtered_image = np.array(filtered_image)\n        result = self.EST_NOISE(filtered_image)\n        print(\"\\n\\nSolution for problem #2.\")\n        self.plot_result(filtered_image, result)\n    \n    @staticmethod\n    def plot_result(images, res):\n        \"\"\"Plot results.\"\"\"\n        fig = plt.figure()\n        for i in np.arange(NUMBER):\n            ax = fig.add_subplot(2, 5, i + 1)\n            ax.axis('off')\n            ax.imshow(images[i], cmap=plt.cm.gray)\n        fig.suptitle(f\"Estimated Noise {res.mean():.4f}, Worst Case Noise {res.max():.4f}\",\n                     fontsize=12)\n        fig.show()\n    \n    def solve_p3(self, sigma=1.4, show=True):\n        \"\"\"Solve problem #3.\"\"\"\n        n = int(2 * np.ceil(2 * sigma) + 1)\n        # 2D Gaussian\n        ovrlay = int(n / 2)\n        inds = np.arange(-ovrlay, ovrlay + 1)\n        x, y = np.meshgrid(inds, inds)\n        mask = np.exp(-(x**2 + y**2)/(2*sigma**2))\n        mask = mask/sum(sum(mask))\n        \n        # two 1D Gaussians: normalize, then form the outer product\n        gaussian_1d = np.exp(-inds**2/(2 * sigma**2))\n        gaussian_1d = (gaussian_1d / sum(gaussian_1d)).reshape((-1, 1))\n        mask2 = gaussian_1d * gaussian_1d.T\n        \n        print(\"\\n\\nSolution for problem #3.\")\n              \n        if show:\n            Fsize = 16\n            test_img = mpimg.imread(r\".\\fig\\test.png\")\n            output_img1 = self.apply_2d_filter(mask, test_img)\n            output_img2 = self.apply_2d_filter(mask2, test_img)\n            fig = plt.figure()\n            ax1 = fig.add_subplot(131)\n            ax1.axis('off')\n
            ax1.imshow(test_img, cmap=plt.cm.gray)\n            ax1.set_title(\"Raw image\", fontsize=Fsize)\n            ax2 = fig.add_subplot(132, sharex=ax1, sharey=ax1)\n            ax2.axis('off')\n            ax2.imshow(output_img1, cmap=plt.cm.gray)\n            ax2.set_title(f\"2D {n}x{n} Gaussian\", fontsize=Fsize)\n            ax3 = fig.add_subplot(133, sharex=ax1, sharey=ax1)\n            ax3.axis('off')\n            ax3.imshow(output_img2, cmap=plt.cm.gray)\n            ax3.set_title(f\"Two 1D n={n} Gaussian\", fontsize=Fsize)\n            fig.show()\n\n        return mask, gaussian_1d\n\n    @staticmethod    \n    def apply_1d_filter(bfilter, timage):\n        \"\"\"Apply given 1D filter onto an image.\n        \n        Parameters\n        ----------\n        bfilter: array-like\n            The filter\n        timage: array-like\n            Targeted image\n        \"\"\"\n        image_length = len(timage)\n        ovrlay = int(bfilter.shape[0] / 2)\n        tmp_array = np.zeros(image_length + 2 * ovrlay)\n        tmp_array[ovrlay:-ovrlay] = timage\n        res_array = np.zeros(image_length)\n        for i in np.arange(image_length) + ovrlay:\n            local_matrix = tmp_array[i - ovrlay:i + ovrlay + 1]\n            res_array[i - ovrlay] = sum(local_matrix * bfilter)\n        return res_array\n    \n    @staticmethod\n    def apply_1d_median_filter(n, timage):\n        \"\"\"Apply an n-point median filter on the image I, assuming that the\n        border pixels are not changed.\n        \n        Parameters\n        ----------\n        n: int\n            Shape of median filter\n        timage: array-like\n            Targeted image\n        \"\"\"\n        image_shape = timage.shape\n        ovrlay = int(n / 2)\n        res_matrix = np.copy(timage)\n        for i in np.arange(image_shape[0])[1:-1]:\n            local_matrix = timage[i - ovrlay:i + ovrlay + 1] \n            median = np.median(local_matrix)\n            res_matrix[i] = median\n        return res_matrix\n    \n    def solve_p4(self):\n        \"\"\"Solve problem #4.\"\"\"\n        I = np.array([10] * 5 + [40] * 5)\n        filter1 = np.ones(5)/5\n        filter2 = np.array([1, 2, 4, 2, 1]) / 10\n        \n        O1 = self.apply_1d_filter(filter1, I).astype(int)\n        O2 = self.apply_1d_filter(filter2, I).astype(int)\n        print(\"\\n\\nSolution for problem #4.\")\n        print(\"Filter (a)\")\n        print(O1)\n        print(\"Filter (b)\")\n        print(O2)\n        return O1, O2\n\n    @staticmethod\n    def apply_2d_median_filter(n, timage):\n        \"\"\"Apply an nxn median filter on the image I, assuming that the\n        border pixels are not changed.\"\"\"\n        image_shape = timage.shape\n        ovrlay = int(n / 2)\n        res_matrix = np.copy(timage)\n        for i in np.arange(image_shape[0])[1:-1]:\n            for j in np.arange(image_shape[1])[1:-1]:\n                local_matrix = timage[i - ovrlay:i + ovrlay + 1, \n                                      j - ovrlay:j + ovrlay + 1]\n                median = np.median(local_matrix)\n                res_matrix[i, j] = median\n        return res_matrix\n    \n    def solve_p5(self, show=True):\n        \"\"\"Solve problem #5.\"\"\"\n        print(\"\\n\\nSolution for problem #5.\")\n        if show:\n            fig = plt.figure()\n            ax1 = fig.add_subplot(121)\n            ax1.stem([-100, 0, 100], np.array([49, 42, 9])/100)\n            ax1.set_ylabel(\"Probability\")\n            ax1.set_xlabel(\"Output\")\n
ax1.set_title(\"On the line\")\n            ax2 = fig.add_subplot(122)\n            ax2.stem([-150, -50, 50, 100], np.array([21, 9, 21, 49])/100)\n            ax2.set_ylabel(\"Probability\")\n            ax2.set_xlabel(\"Output\")\n            ax2.set_title(\"On the adjacent line\")\n            fig.show()\n\n    def solve_p6(self):\n        \"\"\"Solve problem #6.\"\"\"\n        I = np.zeros((8, 8)).astype(int)\n        for i in np.arange(8):\n            for j in np.arange(8):\n                I[i, j] = np.abs(i - j)\n        O = self.apply_2d_median_filter(3, I).astype(int)\n        print(\"\\n\\nSolution for problem #6.\")\n        print(I)\n        print(O)\n        return I, O\n    \n    def solve_p7(self):\n        \"\"\"Solve problem #7.\"\"\"\n        I = np.array([4] * 4 + [8] * 4)\n        O1 = self.apply_1d_median_filter(3, I)\n        avgfilter = np.array([1, 2, 1]) / 4\n        O2 = np.copy(I)\n        O2[1:-1] = self.apply_1d_filter(avgfilter, I)[1:-1]\n        \n        print(\"\\n\\nSolution for problem #7.\")\n        print(\"Median Filter\")\n        print(O1)\n        print(\"Average Mask\")\n        print(O2)\n        return O1, O2\n\nif __name__ == \"__main__\":\n    hw2 = Homework2()\n    hw2.solve_p1()\n    hw2.solve_p2()\n    hw2.solve_p3()\n    hw2.solve_p4()\n    hw2.solve_p5()\n    hw2.solve_p6()\n    hw2.solve_p7()\n    \n\\end{python}\n\\end{document}\n\n\\begin{equation*}\n\\begin{aligned}\nG(x, y) = &\\left[ \\begin{array}{ccccc}\n0.01214612 & 0.02610994 & 0.03369732 & 0.02610994 & 0.01214612\\\\\n0.02610994 & 0.0561273  & 0.07243752 & 0.0561273  & 0.02610994\\\\\n0.03369732 & 0.07243752 & 0.09348738 & 0.07243752 & 0.03369732\\\\\n0.02610994 & 0.0561273  & 0.07243752 & 0.0561273  & 0.02610994\\\\\n0.01214612 & 0.02610994 & 0.03369732 & 0.02610994 & 0.01214612\\\\\n\\end{array} \\right] = G(x)G(y)\\\\\n= &\\left[ \\begin{array}{c}\n0.11020946 \\\\\n0.23691201 \\\\\n0.30575706 \\\\\n0.23691201 \\\\\n0.11020946\n\\end{array} \\right] \\times \\left[ \\begin{array}{ccccc}\n0.11020946 & 0.23691201 & 0.30575706 & 0.23691201 & 0.11020946\n\\end{array} \\right]\n\\end{aligned}\n\\end{equation*}\n", "meta": {"hexsha": "d3ac1c7f39bedb9d6229d44c73e46aec4eb4c0df", "size": 18086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EECE5639-Computer-Vision/Homework-2/main.tex", "max_stars_repo_name": "tjyiiuan/Graduate-Courses", "max_stars_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EECE5639-Computer-Vision/Homework-2/main.tex", "max_issues_repo_name": "tjyiiuan/Graduate-Courses", "max_issues_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EECE5639-Computer-Vision/Homework-2/main.tex", "max_forks_repo_name": "tjyiiuan/Graduate-Courses", "max_forks_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7140115163, "max_line_length": 401, "alphanum_fraction": 0.5506469092, "num_tokens": 5857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8376199653600372, "lm_q1q2_score": 0.5965798974905304}}
{"text": "\\section{Model Functions}\nThe main functions in this model include converting from Keplerian orbital elements to Cartesian vectors and converting from Cartesian vectors to Keplerian orbital elements. Orbital elements used for this conversion include the semimajor axis, eccentricity, inclination, ascending node, argument of periapses, and true anomaly. The Cartesian parameters consist of position and velocity vectors. These conversions are able to be performed with a variety of inputs, so listed below are the orbit types available for accurate conversion.\n\n\\begin{itemize}\n\t\\item \\textbf{Elliptic Orbit} \\boldmath($0<e<1.0$, \\quad $a>0$)\\unboldmath\n\t\\begin{itemize}\n\t\t\\item Inclined: $i>0$, \\quad $\\Omega>0$\n\t\t\\item Equatorial: $i=0$, \\quad $\\Omega=0$\n\t\\end{itemize}\n\t\\item \\textbf{Circular Orbit} \\boldmath($e=0$, \\quad $a>0$, \\quad $\\omega=0$)\\unboldmath\n\t\\begin{itemize}\n\t\t\\item Inclined: $i>0$, \\quad $\\Omega>0$\n\t\t\\item Equatorial: $i=0$, \\quad $\\Omega=0$\n\t\\end{itemize}\n\t\\item \\textbf{Parabolic orbit} \\boldmath($e= 1.0$, \\quad $a=-r_p$)\\unboldmath\n\t\\begin{itemize}\n\t\t\\item Inclined: $i>0$, \\quad $\\Omega>0$\n\t\t\\item Equatorial: $i=0$, \\quad $\\Omega=0$\n\t\\end{itemize}\n\t\\item \\textbf{Hyperbolic orbit} \\boldmath($e>1.0$, \\quad $a<0$)\\unboldmath\n\t\\begin{itemize}\n\t\t\\item Inclined: $i>0$, \\quad $\\Omega>0$\n\t\t\\item Equatorial: $i=0$, \\quad $\\Omega=0$\n\t\\end{itemize}\n\\end{itemize}\n\n\\section{Model Assumptions and Limitations}\n\\subsection{Assumptions}\n\\begin{itemize}\n\t\\item The origin of the inertial frame is coincident with the geocentric equatorial system.\n\t\\item The attracting body is specified by the supplied gravity constant $\\mu [\\frac{km^3}{s^2}]$.\n\\end{itemize}\n\\subsection{Limitations}\n\\begin{itemize}\n\t\\item \\textbf{\\boldmath For input $e \\geq 1.0$}\n\t\\begin{itemize}\n\t\t\\item Rectilinear orbits are not supported with this module because their angular momentum equals zero. However, Cartesian vectors can be obtained from orbital elements because this only affects the Cartesian to element conversion. 
\\section{Model Assumptions and Limitations}\n\\subsection{Assumptions}\n\\begin{itemize}\n\t\\item The origin of the inertial frame is coincident with the geocentric equatorial system.\n\t\\item The attracting body is specified by the supplied gravity constant $\\mu [\\frac{km^3}{s^2}]$.\n\\end{itemize}\n\\subsection{Limitations}\n\\begin{itemize}\n\t\\item \\textbf{\\boldmath For input $e \\geq 1.0$}\n\t\\begin{itemize}\n\t\t\\item Rectilinear orbits are not supported with this module because their angular momentum equals zero. However, Cartesian vectors can still be obtained from orbital elements, because this only affects the Cartesian-to-element conversion. Regardless, this case will not be needed when using the orb\\_elem\\_convert module.\n\t\t\\item The semimajor axis input must be negative.\n\t\\end{itemize}\n\t\\item \\textbf{\\boldmath For input $e = 0$}\n\t\\begin{itemize}\n\t\t\\item The argument of periapsis input must be zero.\n\t\\end{itemize}\n\t\\item \\textbf{\\boldmath For input $i = 0$}\n\t\\begin{itemize}\n\t\t\\item The ascending node input must be zero.\n\t\\end{itemize}\n\t\\item \\textbf{\\boldmath For input $a<0$}\n\t\\begin{itemize}\n\t\t\\item The eccentricity must be greater than or equal to 1.0.\n\t\\end{itemize}\n\t\\item \\textbf{Orbital element angles ($i$, $\\Omega$, $\\omega$, $f$)}\n\t\\begin{itemize}\n\t\t\\item Must be greater than or equal to zero\n\t\t\\item Must be in radians\n\t\\end{itemize} \n\\end{itemize}", "meta": {"hexsha": "0a7695f421aa644d0a63b137c3218d606908898b", "size": 2743, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/simulation/dynamics/DynOutput/orbElemConvert/_Documentation/secModelFunctions.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/simulation/dynamics/DynOutput/orbElemConvert/_Documentation/secModelFunctions.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/simulation/dynamics/DynOutput/orbElemConvert/_Documentation/secModelFunctions.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.1228070175, "max_line_length": 534, "alphanum_fraction": 0.732045206, "num_tokens": 873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.837619947119304, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5965798793825349}}
{"text": "\\section{Server Farms}\n\\label{sec:server-farms}\n\nA server farm is a collection of servers that cooperates to handle incoming requests. \nA server farm is preferable to a super-fat server because it is cheaper and more flexible. This practical advantages have made server farms ubiquitous.\n\nA server farm can be modeled with a $M/M/k/k$ queue and with a $M/M/k$ queue.\n\nThese queues are both birth-death processes, thus they are both time-reversible.", "meta": {"hexsha": "0ee42d6c3d69c0acfff1bb24626be4f8ee302208", "size": 448, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "performance-modeling/sec/server-farms.tex", "max_stars_repo_name": "gmarciani/research", "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_issues_repo_path": "performance-modeling/sec/server-farms.tex", "max_issues_repo_name": "gmarciani/research", "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "performance-modeling/sec/server-farms.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 49.7777777778, "max_line_length": 151, "alphanum_fraction": 0.7857142857, "num_tokens": 100, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8872045877523148, "lm_q2_score": 0.6723317123102955, "lm_q1q2_score": 0.5964957796530637}}
{"text": "\\chapter{Experiments}\nIn this chapter, BVBMC is applied in a number of examples, as to show the algorithm behavior in different scenarios. The examples consists of estimating continuous probability densities in up to $10$ dimensions \\footnote{In more than $10$ dimensions, performance was very poor. What was seen is that the algorithm got \\enquote{stuck} in its first component proposal, and could not find new components to be proposed with acceptable weights.}.\n\n\\section{1-d Mixture of Gaussians}\n\nAs a showcase example, a one-dimensional mixture of Gaussians is considered\n\\begin{displaymath}\n f(x) = \\sum_{i=1}^{12} w_i \\mathcal{N}(x;\\mu_i,\\sigma^2_i),\n\\end{displaymath}\nwith $w_i = \\frac{1}{12}$, $\\mu_i \\sim \\mathcal{N}(0,\\sqrt{5})$ and $\\sigma^2_i = 1$. This results in a many-peaked distribution, with mean $\\mu_0=-1.6585$ and variance $\\sigma^2_0 = 25.0316$, whose density is shown in Figure \\ref{target1dmixture}. \n\nIn each example, the quality of the approximation by the BVBMC algorithm is measured, by calculating both the difference between the estimated mean $\\mu$ and the true mean $\\mu_0$, in $\\log_{10}|\\mu - \\mu_0|$, and the difference between the estimated variance $\\sigma^2$ and the true variance $\\sigma^2_0$, in $\\log_{10}(|\\sigma^2-\\sigma^2_0|/|\\sigma^2_0|)$. Code for this section can be found in \\url{https://github.com/DFNaiff/Dissertation/tree/master/tests_dissertation/illustrative}. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{figs/targetexil1a.png}\n\t\\caption{\\label{target1dmixture} Target distribution.}\n\\end{figure}\n\n\\subsection{Passive evaluation}\nIn the first two examples, the target evaluations of $\\log f(x)$ were done in a uniform grid from $-20$ to $20$ with $51$ points, resulting in the GP replicating almost exactly the target distribution. This is done to illustrate the algorithm's capacity to generate good approximate distributions, if the GP approximates closely the (unnormalized) target distribution. Output scaling was done by normalizing, as discussed in Section \\ref{outputscaling}. The GP mean was chosen to be constant, setting it with the heuristic corresponding to the \\textit{normalize} scaling, as discussed in Section \\ref{meansection}.\n\n\\subsubsection{Influence of kernel in approximation}\nIt was tested the kernel influence on the final approximation. For this, the tested kernels were $k_{\\text{PMat},1/2}$, $k_{\\text{PMat},2/2}$, $k_{\\text{PMat},5/2}$ and $k_{\\text{SQE}}$. The algorithm was run for 50 iterations, with joint parameter updating done every 10 steps. The results are shown in Figure \\ref{kernelcomparison}.\n\nThere is an interesting behavior to be seen, in that the GP approximation for the kernels $k_{\\text{PMat},1/2}$ and  $k_{\\text{PMat},3/2}$ were considerably more accurate than with kernels $k_{\\text{PMat},5/2}$ and $k_{\\text{SQE}}$. However, this accuracy in the GP approximation does not reflect in a better posterior approximation, that are seen with $k_{\\text{PMat},5/2}$ and $k_{\\text{SQE}}$. 
\nThe results are shown in Figure \\ref{trainingcomparison}. It can be seen that training routine B performs considerably better than routine C, while training routines A and B have comparable final performance, with routine A converging in fewer iterations. As for running time, it can be seen that for training routine A it increases quadratically with the number of iterations, while for training routine C it increases linearly. As for training routine B, the running time increases linearly, except for jumps corresponding to the joint parameter updates, with the size of each jump growing with the iteration number.\n\nIt is interesting to pause and consider the behavior of training routine B, since in the next set of examples this is the routine used, with some differences in the number of iterations and in the intervals between joint parameter updates. Notice that boosting improves accuracy considerably up to the first joint parameter update, at 10 iterations; after this, many of the intervals between two joint parameter updates show essentially no improvement. 
This suggests that a smarter training routine, involving boosting only in the initial steps, may be desirable. This idea was not explored further in this dissertation.\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[Routine A, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/dmcil1ati1.png}}\n\t\\subfloat[Routine A, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/convgraphil1ati1.png}}\n\t\\subfloat[Routine A, running time.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/timegraphil1ati1.png}}\n\n\t\\subfloat[Routine B, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/dmcil1ati10.png}}\n\t\\subfloat[Routine B, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/convgraphil1ati10.png}}\n\t\\subfloat[Routine B, running time.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/timegraphil1ati10.png}}\n\n\t\\subfloat[Routine C, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/dmcil1ati1000.png}}\n\t\\subfloat[Routine C, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/convgraphil1ati1000.png}}\n\t\\subfloat[Routine C, running time.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/timegraphil1ati1000.png}}\n\n\t\\caption[Accuracy analysis for different training routines.]{\\label{trainingcomparison} Accuracy analysis for different training routines. Each row corresponding to training routine A, B and C, respectively. The first column shows the accuracy of the mean (blue) and of the variance (red), the second column shows the predicted density, the true density and the GP approximation of the density, and the third column shows the algorithm running time at each step.}\n\\end{figure}\n\n\\subsection{Active evaluation}\nIn this example, we consider how BVBMC performs with active evaluation. For all examples, $5$ initial evaluation points are sampled randomly, with distribution $\\mathcal{N}(0,\\sqrt{10})$. Subsequently, at each iteration an evaluation point is chosen according to the running acquisition function. Four acquisition functions were tested: uncertainty sampling (US, \\eqref{us_vbmc}), prospective prediction (PROP, \\eqref{prospective_vbmc}), moment-matched log transform (MMLT, \\eqref{mmlt_vbmc}), and prospective moment-matched log transform (MMLT$_P$, \\eqref{mmltprop_vbmc}). The results are shown in Figure \\ref{acquisition}. 
There it can be seen that all acquisition functions performed well, although the GP approximation is better for the PROP and MMLT acquisitions.\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[US, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/dmcil1g_aq_uncertainty_sampling.png}}\n\t\\subfloat[US, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/convgraphil1g_aq_uncertainty_sampling.png}}\n\t\\subfloat[US, sampling.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t\t{figs/explopattern1g_aq_uncertainty_sampling.png}}\n\t\n\t\\subfloat[PROP, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t{figs/dmcil1g_aq_prospective.png}}\n\\subfloat[PROP, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/convgraphil1g_aq_prospective.png}}\n\\subfloat[PROP, sampling.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/explopattern1g_aq_prospective.png}}\n\t\n\t\\subfloat[MMLT, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t{figs/dmcil1g_aq_mmlt.png}}\n\\subfloat[MMLT, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/convgraphil1g_aq_mmlt.png}}\n\\subfloat[MMLT, sampling.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/explopattern1g_aq_mmlt.png}}\n\n\t\\subfloat[MMLT$_P$, moments.]{\\label{fig11a}\\includegraphics[width=0.35\\linewidth]\n\t{figs/dmcil1g_aq_mmlt_prospective.png}}\n\\subfloat[MMLT$_P$, final result.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/convgraphil1g_aq_mmlt_prospective.png}}\n\\subfloat[MMLT$_P$, sampling.]{\\label{fig11b}\\includegraphics[width=0.35\\linewidth]\n\t{figs/explopattern1g_aq_mmlt_prospective.png}}\n\t\n\t\\caption[Accuracy analysis for different acquisition functions.]{\\label{acquisition} Accuracy analysis for different acquisition functions. Each row corresponds to acquisitions US, PROP, MMLT and MMLT$_P$, respectively. The first column shows the accuracy of the mean and variance, the second column shows the predicted density, the true density and the GP approximation of the density, and the third column shows the places where the function was evaluated.}\n\\end{figure}\n\n\\newpage\n\\section{N-d toy examples}\nIn this section, we consider the algorithm performance on a set of toy examples, the same ones considered in \\cite{Acerbi_2018}. Code for this section can be found in \\url{https://github.com/DFNaiff/Dissertation/tree/master/tests_dissertation/toy}.\n\nThree classes of test cases were considered:\n\\begin{itemize}\n\\item \\textit{Lumpy}, a mixture of multivariate Gaussians\n\\begin{equation}\nf(x) = \\sum_{i=1}^{12} w_i \\mathcal{N}(x;\\mu_i,\\Sigma_i),\n\\end{equation}\nwith $(w_1,\\ldots,w_{12}) \\sim \\text{Dir}(1,\\ldots,1)$, $\\mu_i \\sim \\text{Unif}([0,1]^D)$ and $\\Sigma_i = \\text{diag}(\\sigma_1^2,\\ldots,\\sigma_D^2)$, with $\\sigma_i^2 \\sim \\text{Unif}(0.2,0.6)$. This distribution tests the algorithm performance in the presence of possible multimodality.\n\n\\item \\textit{Cigar}, an anisotropic Gaussian distribution\n\\begin{equation}\nf(x) = \\mathcal{N}(x;0,\\Sigma),\n\\end{equation}\nwhere $\\Sigma = Q \\Lambda Q^T$, with $\\Lambda = \\text{diag}(10.0,0.1,\\ldots,0.1)$, and $Q$ sampled from the uniform measure on the special orthogonal group. 
This distribution tests the algorithm performance in the presence of large anisotropy.\n\n\\item \\textit{Student-t}, a product of t distributions \n\\begin{equation}\nf(x) =  \\prod_{d=1}^D \\mathcal{T}(x_d;\\nu_d),\n\\end{equation}\nwith $\\nu_d \\sim \\text{Unif}(2.5,2+0.5D)$. This distribution tests the algorithm performance in the presence of heavy tails \\footnote{For both \\textit{Cigar} and \\textit{Student-t}, BVBMC was applied to an unnormalized density.} \\footnote{Originally, $\\nu_d = 2.5$ for every $d$, which assured reasonably heavy tails at every dimension. However, for comparison with \\cite{Acerbi_2018}, this latter case is not shown here.}.\n\\end{itemize}\n\nFor each case, dimensions $D = 2,6,10$ were tested, and the BVBMC algorithm was run for $100$ iterations, with $10D$ initial samples. The GP kernel used was $k_{\\text{PMat},\\nu=2.5}$, with active evaluation at each iteration, according to an acquisition function chosen at random from the pair $(\\alpha_\\text{PROP},\\alpha_\\text{MMLT})$. Every 20 steps, joint parameter updating was done, and pruning was done at each iteration, with $\\beta = 10^{-3}$.\n\nThe algorithm performance was compared by checking the divergence between the true mean $\\mu_0$ and the estimated mean $\\mu$, as $\\log_{10}||\\mu - \\mu_0||_2$, and between the true covariance $\\Sigma_0$ and estimated covariance $\\Sigma$, as $\\log_{10}||\\Sigma - \\Sigma_0||_F/||\\Sigma_0||_F$, where $||\\cdot||_F$ denotes the Frobenius norm. The results are shown in Figure \\ref{ndtoy}. For comparison with results given by the VBMC algorithm, shown in \\cite{Acerbi_2018}, the ``Gaussianized'' symmetric KL divergence (gsKL) between the true distribution and estimated distribution is computed, which is defined as the symmetric KL divergence between two multivariate Gaussians with mean and covariance equal to those of the true distribution and the estimated distribution, respectively \\footnote{In \\cite{Acerbi_2018}, 15 runs are done for each case, and the median is taken. A better approach here would be to do the same for BVBMC; however, due to time constraints this could not be done.}.
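\nWritten out (with the conventional factor of $\\frac{1}{2}$ in the symmetrization, an assumption of this sketch; $\\mathcal{N}_1$ and $\\mathcal{N}_2$ denote the Gaussians matching the moments of the true and the estimated distributions),\n\\begin{equation}\n\\text{gsKL} = \\frac{1}{2}\\left( D_{\\text{KL}}\\left(\\mathcal{N}_1 \\,\\|\\, \\mathcal{N}_2\\right) + D_{\\text{KL}}\\left(\\mathcal{N}_2 \\,\\|\\, \\mathcal{N}_1\\right) \\right).\n\\end{equation}\n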
\n\\begin{table}[]\n\t\\begin{tabular}{l|l|l|l|l|l|l|}\n\t\t\\cline{2-7}\n\t\t& \\multicolumn{2}{l|}{Lumpy} & \\multicolumn{2}{l|}{Cigar} & \\multicolumn{2}{l|}{Student-t} \\\\ \\cline{2-7} \n\t\t& BVBMC & VBMC & BVBMC & VBMC & BVBMC & VBMC \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{D=2}  & $3.12 \\times 10^{-3}$ & $6.5 \\times 10^{-4}$ & $8.12 \\times 10^{-3}$ & $2.1 \\times 10^{-1}$ & $2.9 \\times 10^{-1}$ & $2.0 \\times 10^{-3}$ \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{D=6}  & $6.59 \\times 10^{-2}$ & $3.5 \\times 10^{-2}$ & $5.56 \\times 10^{-1}$ & $1.07 \\times 10^{-1}$ & $1.14 \\times 10^{-1}$ & $2.3 \\times 10^{-1}$ \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{D=10} & $1.19 \\times 10^{-1}$ & $4.2 \\times 10^{-1}$ & $1.29$ & $1.0 \\times 10^{-1}$ & $2.56 \\times 10^{-1}$ & $2.7 \\times 10^{-1}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{gsKL divergence between the true and estimated distributions. The values for VBMC were taken from the graphs in \\cite{Acerbi_2018}.}\\label{toytable}\n\\end{table}\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[Lumpy, means accuracy.]{\\label{fig12a}\\includegraphics[width=0.5\\linewidth]\n\t\t{figs/ex1b_mean.png}}\n\t\\subfloat[Lumpy, covariances accuracy.]{\\label{fig12b}\\includegraphics[width=0.5\\linewidth]\n\t\t{figs/ex1b_cov.png}}\n\n\t\n\t\\subfloat[Cigar, means accuracy.]{\\label{fig12c}\\includegraphics[width=0.5\\linewidth]\n\t\t{figs/ex4b_mean.png}}\n\t\\subfloat[Cigar, covariances accuracy.]{\\label{fig12d}\\includegraphics[width=0.5\\linewidth]\n\t\t{figs/ex4b_cov.png}}\n\n\t\\subfloat[Student-t, means accuracy.]{\\label{fig12e}\\includegraphics[width=0.5\\linewidth]\n\t{figs/ex5b_mean.png}}\n\\subfloat[Student-t, covariances accuracy.]{\\label{fig12f}\\includegraphics[width=0.5\\linewidth]\n\t{figs/ex5b_cov.png}}\n\n\t\n\t\\caption[Accuracy analysis for different N-d examples.]{\\label{ndtoy} Accuracy analysis for different N-d examples. Each row corresponds to \\textit{Lumpy}, \\textit{Cigar} and \\textit{Student-t}, respectively. The first column shows the accuracy of the means, while the second column shows the accuracy of the covariances.}\n\\end{figure}\n\n\\section{Contamination source estimation}\nIn this example, a contamination source location problem was considered, inspired by a problem in \\cite{Bilionis_2013}. This problem is a toy example of an actual inverse problem, that is, given some sensor measurements of a contaminated field (the contamination may be, for instance, radiation), find where the contaminant is located, as well as its nature. Code for this section can be found at \\url{https://github.com/DFNaiff/Dissertation/tree/master/tests_dissertation/source1d}.\n\nThe example considers a one-dimensional domain $B = [0,1]$, in which a contamination source $q(x,t)$ is inserted from $t=0$ to $t=t_s$. 
This source is modeled as\n\\begin{equation}\nq(x,t) = q_0 \\exp \\left(-\\frac{(x-x_0)^2}{2 \\rho^2} \\right) \\mathbf{1}_{[0,t_s)}(t).\n\\end{equation}\nThe contaminant itself is assumed to follow the diffusion equation\n\\begin{equation}\n\\frac{\\partial}{\\partial t} u(x,t) = \\frac{\\partial^2}{\\partial x^2} u(x,t) + q(x,t), \\quad x \\in \\text{int} B.\n\\end{equation}\nMoreover, the initial contamination is considered to be $0$, while the walls are considered insulated, resulting in the boundary and initial value conditions\n\\begin{equation}\nu(x,0) = 0, \\; \\frac{\\partial}{\\partial x} u(0,t) = \\frac{\\partial}{\\partial x} u(1,t) = 0.\n\\end{equation}\nIt should be noticed that, for a general domain length $L$ and diffusion coefficient $k$, nondimensionalization reduces the general problem to the one above.\n\nIn this setting, at each wall $x=0,1$, five measurements of $u$ are made, for $t_m \\in T_m = \\{0.075,0.15,0.225,0.3,0.4\\}$, resulting in the data\n\\begin{displaymath}\n \\mathcal{D} = \\{\\hat{u}(x_m,t_m)\\}_{x_m \\in \\{0,1\\},t_m \\in T_m}.\n\\end{displaymath}\nMoreover, the measurements are assumed to be noisy, with $\\hat{u}(x_m,t_m) = u(x_m,t_m) + \\epsilon$, $\\epsilon \\sim \\mathcal{N}(0,\\sigma^2)$. The noise parameter $\\sigma^2$ is assumed to follow a prior $\\text{InvGamma}(\\alpha,\\beta)$. This allows us to marginalize out $\\sigma^2$, letting $\\hat{u}(x_m,t_m)$ be distributed according to the generalized t-distribution\\footnote{If $T_0$ follows a (standardized) t-distribution with $\\nu$ degrees of freedom, then $T = \\sigma T_0 + \\mu$ follows a generalized t-distribution, denoted by $\\mathcal{T}(\\mu,\\sigma^2,\\nu)$.} with $2\\alpha$ degrees of freedom, $\\mathcal{T}(u(x_m,t_m),\\beta/\\alpha,2 \\alpha)$. \n\nThe setting above results in a 4-dimensional inference problem, for the variables $(x_0,t_s,q_0,\\rho)$, with likelihood\n\\begin{equation}\n p(\\mathcal{D}|x_0,t_s,q_0,\\rho) = \\prod_{x_m \\in \\{0,1\\},t_m \\in T_m} \\mathcal{T}(\\hat{u}(x_m,t_m);u(x_m,t_m),\\beta/\\alpha,2 \\alpha).\n\\end{equation}\nGiven priors for $x_0$, $t_s$, $q_0$ and $\\rho$, the associated posterior distribution for the parameters becomes\n\\begin{equation}\\label{posteriorsource}\np(x_0,q_0,t_s,\\rho|\\mathcal{D}) \\propto p(\\mathcal{D}|x_0,q_0,t_s,\\rho) p(x_0)p(q_0)p(\\rho)p(t_s).\n\\end{equation}\n\nA synthetic data set $\\mathcal{D}$ was generated, with the parameters given in Table \\ref{sourcetable}, first row. The measurement noise was $\\sigma^2 = 10^{-2}$, and the equation was simulated using a finite differences routine. 
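To make the forward model concrete, the following is a minimal sketch of an explicit finite-difference solver for the diffusion equation above with insulated walls; the grid size and time step are illustrative assumptions, not the exact settings of the routine used here:\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_contamination(x0, ts, q0, rho, nx=101, dt=1e-5, t_end=0.4):\n    # Explicit scheme for u_t = u_xx + q(x,t) on [0,1].\n    x = np.linspace(0.0, 1.0, nx)\n    dx = x[1] - x[0]\n    u = np.zeros(nx)\n    t = 0.0\n    while t < t_end:\n        q = q0 * np.exp(-(x - x0)**2 / (2 * rho**2)) * (t < ts)\n        lap = np.empty(nx)\n        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2\n        # Insulated (Neumann) walls: mirror the interior neighbour.\n        lap[0] = 2 * (u[1] - u[0]) / dx**2\n        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2\n        u = u + dt * (lap + q)\n        t += dt\n    return x, u   # u can be read off at the walls at measurement times\n\\end{verbatim}\n\n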
The priors for the values to be inferred were set as\\footnote{The Half-Cauchy distribution was used here to represent a non-informative prior.}\n\\begin{equation}\n\\begin{split}\n& p(x_0) = \\text{Unif}(x_0;0,1) \\\\ \n& p(t_s) = \\text{Unif}(t_s;0,0.4) \\\\\n& p(q_0) = \\text{HalfCauchy}(q_0;10) \\\\ \n& p(\\rho) = \\text{HalfCauchy}(\\rho;0.1).\n\\end{split}\n\\end{equation}\nIn \\eqref{posteriorsource}, $u(x,t)$ is also calculated by finite differences.\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{l|l|l|l|l|}\n\t\t\\cline{2-5}\n\t\t& $x_0$ & $t_s$ & $q_0$  & $\\rho$ \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{True}  & 0.230 & 0.300 & 6.366  & 0.050  \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{BVBMC mean} & 0.328 & 0.213 & 5.435  & 0.140  \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{EMCEE mean} & 0.352 & 0.206 & 10.228 & 0.218  \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{BVBMC HPD 70\\%} & $(1.1 \\cdot 10^{-4},0.43)$ & $(0.12, 0.36)$ & $(1.0, 6.7)$ & $(2.7 \\cdot 10^{-3}, 0.1)$  \\\\ \\hline\n\t\t\\multicolumn{1}{|l|}{EMCEE HPD 70\\%} & $(3.4 \\cdot 10^{-4}, 0.45)$ & $(0.08, 0.33)$ & $(0.4,10.3)$ & $(2.1 \\cdot 10^{-3}, 0.1)$  \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{\\label{sourcetable} Comparison of the true parameters of the problem (first row), the estimated means using BVBMC (second row) and EMCEE (third row), and the 70\\% highest posterior density (HPD) intervals for BVBMC and EMCEE.}\n\\end{table}\n\nThe BVBMC algorithm assumes that the distribution to be estimated has support in $\\mathbb{R}^D$. This is not the case for the problem above, since $x_0$ and $t_s$ both have bounded support, while $q_0$ and $\\rho$ have positive support. In order to apply the BVBMC algorithm, inference was made on the warped variables $\\tilde{x}_0,\\tilde{t}_s,\\tilde{q}_0,\\tilde{\\rho}$, all of them with support in $\\mathbb{R}$, such that\\footnote{The 0.4 factor here is due to the fact that $t_s$ lies between 0 and 0.4.}:\n\\begin{equation}\n\\begin{split}\n & x_0 = \\text{sigmoid}(\\tilde{x}_0) \\\\\n & t_s = 0.4 \\times \\text{sigmoid}(\\tilde{t}_s)  \\\\\n & q_0 = \\exp(\\tilde{q}_0) \\\\\n & \\rho = \\exp(\\tilde{\\rho}), \\\\\n\\end{split}\n\\end{equation}\nwith\n\\begin{equation}\n\\text{sigmoid}(x) = \\frac{1}{1+e^{-x}}.\n\\end{equation}\nThis results in the posterior distribution for the warped variables\n\\begin{equation}\n p(\\tilde{x}_0,\\tilde{t}_s,\\tilde{q}_0,\\tilde{\\rho}|\\mathcal{D}) \\propto p(x_0,q_0,t_s,\\rho|\\mathcal{D}) \\text{sigmoid}'(\\tilde{x}_0) \\text{sigmoid}'(\\tilde{t}_s) \\exp(\\tilde{q}_0) \\exp(\\tilde{\\rho}).\n\\end{equation}\n\nThe BVBMC algorithm was applied to the problem, with the following setup:\n\\begin{itemize}\n\t\\item The kernel used was $k_{\\text{PMat},3/2}$ (both $k_{\\text{PMat},5/2}$ and $k_{\\text{SQE}}$ were also tested, with mixed results).\n\t\\item The algorithm was initialized with 40 samples from the prior. After this, before the training loop, 40 more evaluation points were chosen by using the $\\alpha_\\text{MMLT}$ acquisition function.\n\t\\item The algorithm was then run for 100 iterations, with an evaluation point chosen at each iteration, with the acquisition function chosen randomly between $\\alpha_\\text{MMLT}$ and $\\alpha_\\text{PROP}$. Every 20 iterations, the parameters were jointly optimized.\n\\end{itemize}\n
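\nIn practice, the warping amounts to running BVBMC on a log-density that includes the Jacobian terms above. A minimal sketch, assuming a generic \\texttt{log\\_posterior} function for \\eqref{posteriorsource} (the constant $\\log 0.4$ from the $t_s$ transform is dropped, which is harmless since the density is unnormalized):\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef warped_log_posterior(z, log_posterior):\n    # z holds the unconstrained variables (x0~, ts~, q0~, rho~).\n    s0, s1 = sigmoid(z[0]), sigmoid(z[1])\n    x0, ts = s0, 0.4 * s1\n    q0, rho = np.exp(z[2]), np.exp(z[3])\n    # log-Jacobian: log sigmoid'(z) = log s + log(1 - s) per sigmoid,\n    # plus z itself per exponential transform.\n    log_jac = (np.log(s0) + np.log(1 - s0)\n               + np.log(s1) + np.log(1 - s1) + z[2] + z[3])\n    return log_posterior(x0, ts, q0, rho) + log_jac\n\\end{verbatim}\n\n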
The predicted mean is shown in Table \\ref{sourcetable}, second row. It can be seen that the estimated source location was relatively accurate, compared to the original one, while the estimates for $\\rho$ and $q_0$ were reasonable, and the estimate for $t_s$ did not deviate far from the prior mean. The resulting marginal univariate and bivariate distributions are shown in Figure \\ref{sourcevbhistogram}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{figs/sourceproblemhistogramsvb.png}\n\t\\caption[KDE plots of estimated marginals with BVBMC.]{KDE plots of estimated marginals with BVBMC. On the diagonal, the marginal univariate distributions for $x_0$, $t_s$, $q_0$ and $\\rho$ are shown, while off-diagonal, the corresponding bivariate marginals for each pair are shown.}\n\t\\label{sourcevbhistogram} \n\\end{figure}\n\nFor comparison, the EMCEE algorithm \\cite{Foreman_Mackey_2013}, an MCMC algorithm commonly used for problems in astrophysics, was also tested. EMCEE was run with $10$ walkers and $10000$ steps for each walker, with the first $1000$ steps discarded as burn-in. The resulting estimated means are shown in Table \\ref{sourcetable}, third row. It can be seen that, in general, the estimates agree with those of the BVBMC algorithm, except for $q_0$, for which BVBMC seems to be more precise. The resulting marginal univariate and bivariate distributions are shown in Figure \\ref{sourceemceehistogram}, showing some resemblance to the results found with BVBMC.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{figs/sourceproblemhistogramsemcee.png}\n\t\\caption[KDE plots of estimated marginals with EMCEE.]{KDE plots of estimated marginals with EMCEE. On the diagonal, the marginal univariate distributions for $x_0$, $t_s$, $q_0$ and $\\rho$ are shown, while off-diagonal, the corresponding bivariate marginals for each pair are shown.}\n\t\\label{sourceemceehistogram} \n\\end{figure}\n\n\\section{Checking performance}\nIn practice, one does not know the true posterior to compare against, so there must be some way to check whether BVBMC arrived at a good posterior.\n\nTwo sources of error may arise between the BVBMC estimate and the true posterior: the variational proposal may not approximate the GP surrogate model well, or the GP surrogate model may not approximate the true unnormalized posterior well. The first case may be checked using a rough estimate of the KL divergence between the variational proposal and the surrogate model, which is implemented in the method \\textit{kl\\_vb\\_bmc} of the associated package. \n\nChecking whether the GP surrogate model resembles the true model may be harder. One option is to use leave-one-out testing \\cite{Rasmussen06}, but one must bear in mind that accuracy does not matter in places where the unnormalized posterior contributes negligible probability mass. 
A heuristic to address this problem was not developed by the author.\n\n\n\n", "meta": {"hexsha": "05311708ad9a96f8ecaddf02d7171b1b9a853e8e", "size": 26216, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex_copy/chapters/capituloF.tex", "max_stars_repo_name": "DFNaiff/Dissertation", "max_stars_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex_copy/chapters/capituloF.tex", "max_issues_repo_name": "DFNaiff/Dissertation", "max_issues_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex_copy/chapters/capituloF.tex", "max_forks_repo_name": "DFNaiff/Dissertation", "max_forks_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.9731543624, "max_line_length": 1656, "alphanum_fraction": 0.7436298444, "num_tokens": 7905, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5964256601075789}}
{"text": "\\input{preamble}\n\\begin{document}\n    \\section{Subjects}\n    \\begin{itemize}\n        \\item Bagging\n        \\item Boosting\n    \\end{itemize}\n    \n    \\section{Notes}\n    \n    \\subsection{Decision trees}\n    Decision trees simply split the feature space into a set of rectangles, and \n    then fit a simple model (e.g. a constant) in each one.\n    \n    So one might say that we take some input $x$ and ask a series of \n    questions. Is $x < 21$? Then go down the left path of the tree of \n    questions. So decision trees are very simple to understand but are \n    also very expressive. One of the major benefits of decision trees is that \n    they are very easy to read and understand. It's very easy to look at one and \n    figure out what questions it asks and how much weight it gives each question \n    (e.g. which pixels it looks at to understand what the image is, which \n    pixels are ``important'').\n    \n    If we just look at a very simple form of decision trees, i.e. binary trees, \n    then things will be a bit simpler. Then every question is of the form ``if \n    this then go left, otherwise go right'', so every node in the tree is a \n    single dividing line in the feature space. If we look at a tree where the \n    values of the regions are constant, then we can predict the value of \n    some input $x$ as follows:\n    \\begin{equation*}\n        h(x)=\\sum_{\\text{regions } r} 1_{[x\\in r]} \\cdot c_r\n    \\end{equation*}\n    That is, the constant value of the region that $x$ is in.\n    \n    \\subsubsection{Growing a regression tree}\n    Using the binary regression tree described earlier, with constant region \n    values, we will now describe how to grow (learn) a regression tree.\n    \n    Given some input of $N$ observations $(x_i,y_i)$ for $i=1,2,\\dots,N$ with \n    $x_i = (x_{i1}, x_{i2},\\dots,x_{id})$ and $y_i$ being the label or the \n    ``true'' value, we then need to figure out how to approximate $y$ for some \n    unknown $x$ outside of the training set. The algorithm then needs to decide \n    on the splitting variables and split points, as well as the topology of the \n    tree.\n    \n    Suppose we have a partition into $M$ regions $R_1, R_2, \\dots, R_M$ and we \n    model the response as mentioned earlier:\n    \\begin{equation*}\n        h(x)=\\sum_{\\text{regions } r}1_{[x\\in r]} \\cdot c_r\n    \\end{equation*}\n    As usual with regression, we can use the squared error measure $(h(x_i) \n    -y_i)^2$ to evaluate our performance. Then, we can easily see that the best \n    constant for region $R_m$ is simply the average of the $y_i$ which ended up in \n    region $R_m$, because the least-squares measure punishes distance from the target. \n    I.e. one point which is off by $2$ is punished more than two points that \n    are off by $1$, and thus the mean is the smallest distance on average:\n    \\begin{equation*}\n        \\hat{c}_m=\\frac{1}{N_m}\\sum_{(x,y) \\in D}1_{[x\\in R_m]} \\cdot y\n    \\end{equation*}\n    where $N_m$ is the number of training points that fall in region $R_m$.\n    \n    Now, actually finding the best partition of the feature space into the \n    $R_m$ optimal regions is, in general, computationally infeasible. 
Thus, we \n    will proceed with a greedy approximation algorithm.\n    \n    Consider some splitting variable $j$ and splitting point $s$; we can then \n    define the pair of half-planes:\n    \\begin{equation*}\n        R_1(j,s)=\\{X|X_j \\leq s\\} \\text{ and } R_2(j,s)=\\{X|X_j > s\\}\n    \\end{equation*}\n    Note here that the point $X$ can be any point in the feature space and is \n    not necessarily a point from $D$.\n    \n    We then seek to find the splitting variable $j$ and split point $s$ that \n    solve:\n    \\begin{equation*}\n        \\min_{j,s}\\left[\\min_{c_1}\\left[\\sum_{x_i\\in \n        R_1(j,s)}(y_i-c_1)^2\\right] + \\min_{c_2}\\left[\\sum_{x_i \\in \n        R_2(j,s)}(y_i-c_2)^2\\right]\\right]\n    \\end{equation*}\n    \n    If we use the mean, as mentioned earlier, we can simplify this to:\n    \\begin{equation*}\n    \\min_{j,s}\\left[ \\left(\\sum_{x_i \\in R_1(j,s)} (y_i - \\hat{c}_1)^2 \\right) \n    + \\left(\\sum_{x_i \\in R_2(j,s)} (y_i - \\hat{c}_2)^2 \\right) \\right]\n    \\end{equation*}\n    Or, stated differently:\n    \\begin{equation*}\n    \\min_{j,s}\\left[ \\left(\\sum_{(x, y) \\in D, x_j \\leq s} (y - \\hat{c}_1)^2 \n    \\right) + \\left(\\sum_{(x,y) \\in D, x_j > s} (y - \\hat{c}_2)^2 \\right) \n    \\right]\n    \\end{equation*}\n    For each splitting variable $j$, we can simply pick $s$ as the midpoints \n    between two consecutive $x_j$ values, which results in $|D| - 1$ different \n    split-points to consider (a code sketch of this search is given at the end \n    of this subsection).\n    \n    Now that we know how to compute the ``best'' split, we simply follow the \n    following greedy algorithm:\n    \\begin{itemize}\n        \\item Select the best ``split'' variable $j$, and the split-point $s$\n        \\item Make a new node in the tree with $(j,s)$ (i.e. a new split)\n        \\item Make a child for each new region (outcome of testing $j$)\n        \\item Split the training examples up between the two regions\n        \\item Call recursively on the new children (the new regions)\n        \\item Stop when done\n    \\end{itemize}\n    \n    \\subsubsection{Size of the tree}\n    How large should the tree become? If we make it too large, we might \n    overfit. Too small and we might not capture the structure of the data \n    properly.\n    \n    Tree size is a hyper-parameter, and it should be chosen adaptively based on \n    the data. We could, for example, stop if the decrease in error drops below \n    some threshold. This is fairly short-sighted however, as a \n    seemingly bad split at some level might lead to a crucial split later on.\n    \n    The usual strategy is to grow a large tree $T_0$, stopping when the size \n    of the nodes (i.e. how many data-points we have for each node) drops below \n    some set threshold (e.g. $5$), and then to prune this tree.\n    \n    One way to prune the tree is to compute, on some validation set, the \n    accuracy gained by removing a split, and to greedily delete the splits that \n    increase accuracy the most, repeating this as long as it helps.\n    \n    Limiting the size of the tree is the same as regularizing for decision \n    trees. So variance will go down and bias will go up, and so on.
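\n    \n    As a concrete illustration of the split search above, the following is a \n    minimal Python sketch (not part of the course material) of finding the \n    best pair $(j,s)$ by exhaustive search:\n    \\begin{verbatim}\nimport numpy as np\n\ndef best_split(X, y):\n    # Search all features j and midpoint split-points s, minimizing\n    # the summed squared error around the two region means.\n    n, d = X.shape\n    best = (None, None, np.inf)\n    for j in range(d):\n        xs = np.unique(X[:, j])\n        for s in (xs[:-1] + xs[1:]) / 2:   # midpoints\n            left, right = y[X[:, j] <= s], y[X[:, j] > s]\n            sse = (((left - left.mean())**2).sum()\n                   + ((right - right.mean())**2).sum())\n            if sse < best[2]:\n                best = (j, s, sse)\n    return best  # (feature index, split point, error)\n    \\end{verbatim}\n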
\n    \n    \\subsection{Classification trees}\n    For regression trees, we used the squared error impurity measure $Q_m(T)$:\n    \\begin{equation*}\n        Q_m(T)=\\frac{1}{N_m}\\sum_{x_i \\in R_m}(y_i - \\hat{c}_m)^2\n    \\end{equation*}\n    where $N_m = |\\{x_i | x_i\\in R_m\\}|$. If the target is a classification \n    outcome, taking values $1,2,\\dots,K$, then this measure is not suitable.\n    \n    Let $\\hat{p}_{mk}$ be the proportion of observations of class \n    $k$ in node $m$. We can then predict the class of points in region $m$ to \n    be $k(m)=\\arg\\max_k \\hat{p}_{mk}$, i.e. the majority class in \n    region $m$. Then, different measures $Q_m(T)$ of node impurity include \n    the following:\n    \\begin{description}\n        \\item[Misclassification error (0-1 Loss)]\n        \\begin{equation*}\n            \\frac{1}{N_m} \\sum_{(x,y) \\in R_m}1_{[k(m) \\neq y]} = 1- \n            \\hat{p}_{mk(m)}\n        \\end{equation*}\n        \\item[Gini index]\n        \\begin{equation*}\n            \\sum_{k\\neq k'} \\hat{p}_{mk}\\hat{p}_{mk'} = \n            \\sum_{k=1}^{K}\\hat{p}_{mk}(1-\\hat{p}_{mk})\n        \\end{equation*}\n        \\item[Cross-entropy]\n        \\begin{equation*}\n            - \\sum_{k=1}^{K} \\hat{p}_{mk} \\log \\hat{p}_{mk}\n        \\end{equation*}\n    \\end{description}\n    \n    \\subsubsection{Why binary splits?}\n    We can represent multiway splits by a series of binary splits (e.g. a \n    tertiary split can be represented by two binary splits), so we don't lose \n    generality. Furthermore, multiway splits split the data too quickly, so we \n    are left with too little information at the lower levels.\n    \n    \\subsection{Ensemble methods}\n    Decision trees have a high variance; just one data-point can completely \n    change the outcome of the tree. When we did the pruning and size-limitation \n    we sought to decrease the variance (through regularization). Another option \n    for reducing variance is ensemble methods.\n    \n    \\subsubsection{Bootstrapping datasets}\n    Bootstrapping is a general tool for assessing statistical accuracy. Suppose \n    we have our data-set $D$ as before. The basic idea is then to sample \n    points from $D$ with replacement, such that we have $B$ different data-sets \n    $D_i$ that each contain a random selection of points from $D$ (which may \n    overlap). We can use these data-sets $D_i$ as ``independent'' data-sets. \n    This method is often used to measure statistics like variance, confidence \n    bounds, standard errors, etc. in other, more statistically oriented, domains.\n    \n    \\subsubsection{Bagging}\n    Bagging uses bootstrapped data-sets in order to improve on our predictions. \n    What we do in bagging (bootstrap aggregation) is simply to get our $B$ \n    bootstrapped data-sets $D_i$, train our model on all the different \n    data-sets producing $h_1,\\dots,h_B$ (our ensemble of models). We can then \n    perform predictions by doing a majority vote (for classification) or \n    returning the mean of the predictions (for regression).\n    \\begin{equation*}\n        h_{\\text{bag}}(x)=\\frac{1}{B}\\sum_{i=1}^{B} h_i(x)\n    \\end{equation*}\n    This should bring down variance while not touching bias. Ideally we would \n    get:\n    \\begin{equation*}\n        Var(Bagging(T))=\\frac{Var(T)}{B}\n    \\end{equation*}\n    But in practice, the reduction is less since the bagged models are still \n    fairly correlated. Furthermore, some classifiers are better than others. \n    Let's look at the following example with a trivial target function $f(x)=1, \n    \\forall x$. If we bagged ``poor'' classifiers that return $1$ with \n    probability $0.4$, then the probability that the majority vote returns $1$ \n    would converge to $0$ as the number of classifiers increases, giving \n    $E_{out}=1$. If we instead bagged ``better'' classifiers which return $1$ \n    with probability $0.6$, then the opposite would happen. This has been \n    popularized as ``Wisdom of Crowds'', which states that as long as the \n    persons in the crowd are better than just guessing, then asking many people \n    will improve on the result, by taking either the mean or a majority vote \n    (depending on regression or classification).
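\n    \n    A minimal Python sketch of bagging, assuming a generic \\texttt{fit} \n    function that trains a base model and returns a callable predictor:\n    \\begin{verbatim}\nimport numpy as np\n\ndef bag(X, y, fit, B=50, seed=0):\n    # Train B models, each on a bootstrap sample of (X, y).\n    rng = np.random.default_rng(seed)\n    n = len(y)\n    models = []\n    for _ in range(B):\n        idx = rng.integers(0, n, size=n)   # sample with replacement\n        models.append(fit(X[idx], y[idx]))\n    return models\n\ndef predict_bagged(models, X):\n    # Mean of the ensemble's predictions (use a majority\n    # vote instead for classification).\n    return np.mean([m(X) for m in models], axis=0)\n    \\end{verbatim}\n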
\n    \n    In order to improve on this, we can use a process called ``boosting'', which \n    improves the quality of the individual learners, such that the learners we put \n    into our ``crowd'' perform better than just guessing.\n    \n    A final note: the main draw-back of bagging is that it mostly works on \n    unstable models, which are also often the models that are easy to \n    interpret. Bagging, however, ruins the simplicity of the model and makes it \n    harder to interpret, as the result will no longer be e.g. a single tree, \n    but a more complex structure.\n    \n    \\todo[inline]{Random forests?}\n    \n    \\subsubsection{Boosting}\n    Boosting works in a similar way to bagging, but the similarity is mostly \n    superficial. In boosting we, similar to bagging, produce $M$ modified \n    versions of the data-set, producing a sequence of weak classifiers \n    $h_1,h_2,\\dots,h_M$ (classifiers whose error rate is only slightly better \n    than random guessing), and then we find a weighted majority-vote to produce \n    the final prediction:\n    \\begin{equation*}\n        h_{boost}(x)=\\sign \\left(\\sum_{m=1}^{M}\\alpha_m h_m(x)\\right)\n    \\end{equation*}\n    Here, $\\alpha_1,\\alpha_2,\\dots,\\alpha_M$ are computed by the boosting \n    algorithm. The $\\alpha$'s are supposed to give the more accurate \n    classifiers in the sequence more influence.\n    \n    Now let's look at an example of a boosting algorithm, AdaBoost. The idea is \n    that at each boosting step, we produce the data-set $D_i$ by applying some \n    weights to each of the training points $(x,y)$. Initially all weights $w_i$ \n    are set to $w_i=\\frac{1}{N}$, so that the first step simply trains in the \n    usual manner. Then for the data-set $D_i$ at step $i$ we have modified the \n    weights such that those observations that were misclassified by \n    $h_{i-1}(x)$ have their weights increased, and those that were classified \n    correctly have their weights decreased, such that the observations that are \n    difficult to classify receive more and more influence. 
The algorithm for \n    AdaBoost (or rather AdaBoost.M1) is as follows:\n    \\begin{enumerate}\n        \\item Initialize the observation weights \n        \\begin{equation*}\n            w_i=\\frac{1}{N}, \\quad i=1,2,\\dots,N\n        \\end{equation*}\n        \\item For $m=1$ to $M$:\n        \\begin{enumerate}[a)]\n            \\item Fit a classifier $h_m(x)$ to the training data using weights \n            $w_i$.\n            \\item Compute:\n            \\begin{equation*}\n                err_m=\\frac{\\sum_{i=1}^{N}w_i \\cdot 1_{[y_i \\neq \n                h_m(x_i)]}}{\\sum_{i=1}^{N}w_i}\n            \\end{equation*}\n            \\item Compute:\n            \\begin{equation*}\n                \\alpha_m=\\log\\left(\\frac{1-err_m}{err_m}\\right)\n            \\end{equation*}\n            \\item Set:\n            \\begin{equation*}\n                w_i \\leftarrow w_i \\cdot \\exp\\left[\\alpha_m \\cdot 1_{[y_i \\neq \n                h_m(x_i)]}\\right], \\quad i=1,2,\\dots,N\n            \\end{equation*}\n        \\end{enumerate}\n        \\item Output:\n        \\begin{equation*}\n            h_{boost}(x)=\\sign \\left[\\sum_{m=1}^{M}\\alpha_m h_m(x)\\right]\n        \\end{equation*}\n    \\end{enumerate}\n    Note that this is the ``discrete'' version of AdaBoost; there are versions \n    of AdaBoost that work on regression problems as well.\n    \n    Since AdaBoost only needs weak classifiers, even a shallow tree of \n    depth $2$, for example, can produce impressive results after a lot of \n    iterations. Even classifiers with error rates as high as $45\\%$ quickly \n    improve their accuracy.
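\n    \n    A minimal Python sketch of AdaBoost.M1 following the steps above \n    (\\texttt{fit} is an assumed helper that trains a weak classifier on \n    weighted data and returns a predictor with outputs in $\\{-1,+1\\}$):\n    \\begin{verbatim}\nimport numpy as np\n\ndef adaboost(X, y, fit, M=50):\n    # y in {-1,+1}; fit(X, y, w) -> weak classifier h, h(X) in {-1,+1}.\n    N = len(y)\n    w = np.full(N, 1.0 / N)\n    hs, alphas = [], []\n    for _ in range(M):\n        h = fit(X, y, w)\n        miss = h(X) != y\n        err = np.sum(w * miss) / np.sum(w)\n        alpha = np.log((1 - err) / err)\n        w = w * np.exp(alpha * miss)   # up-weight misclassified points\n        hs.append(h)\n        alphas.append(alpha)\n    return lambda Xq: np.sign(\n        sum(a * h(Xq) for a, h in zip(alphas, hs)))\n    \\end{verbatim}\n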
\n    \n    \\subsubsection{Additive models}\n    An additive model is described as:\n    \\begin{equation*}\n        h(x) = \\sum_{m=1}^{M} \\alpha_m h_m(x;\\theta_m)\n    \\end{equation*}\n    where $\\alpha_m$ is the vote importance, $h_m$ is the base learner and \n    $\\theta_m$ are the parameters that define the specific version of the base \n    learner.\n    These models are typically fit by minimizing a loss function averaged over \n    the training data (e.g. squared-error or cross-entropy):\n    \\begin{equation*}\n        \\arg\\min_{\\alpha_j,\\theta_j,j=1,\\dots,M} \\sum_{x,y \\in D} \n        \\error\\left(y, \\sum_{m=1}^{M}\\alpha_m h(x, \\theta_m)\\right)\n    \\end{equation*}\n    \n    We can then formulate the forward stagewise additive modelling algorithm:\n    \\begin{enumerate}\n        \\item $h_0(x) = 0$\n        \\item For $m=1,\\dots,M$\n        \\begin{enumerate}[a)]\n            \\item Compute:\n            \\begin{equation*}\n                (\\beta_m,\\bar{h}_m)=\\arg\\min_{\\beta,h}\\sum_{i=1}^{N}\\error\\left(y_i,\n                 h_{m-1}(x_i)+\\beta h(x_i)\\right)\n            \\end{equation*}\n            \\item Compute:\n            \\begin{equation*}\n                h_m(x)=h_{m-1}(x)+\\beta_m \\bar{h}_m(x)\n            \\end{equation*}\n        \\end{enumerate}\n        \\item Output $h_M$\n    \\end{enumerate}\n    For example, given the squared error $\\error(y, h(x))=(y-h(x))^2$, we \n    have that at each step $m$ we minimize:\n    \\begin{equation*}\n        \\error(y_i, h_{m-1}(x_i)+\\beta_m h_m(x_i))=(y_i-h_{m-1}(x_i)-\\beta_m \n        h_m(x_i))^2\n    \\end{equation*}\n    Since $y_i-h_{m-1}(x_i)$ is simply the residual of the current model, we \n    see that we are repeatedly trying to fit the residuals of the current model.\n    \n    If we instead use the cost function:\n    \\begin{equation*}\n        \\error(y,h(x))=\\exp(-y\\cdot h(x))\n    \\end{equation*}\n    which is small if $y$ and $h(x)$ have the same sign, and large otherwise, \n    then it turns out that this is exactly what AdaBoost.M1 is minimizing; thus \n    AdaBoost.M1 is a forward-stagewise additive modeling approach.\n\\end{document}", "meta": {"hexsha": "533a5ab7bb394d9a12de2c39f2d18bf06f4d39e6", "size": 15988, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ML/Exam/DecisionTreesAndEnsembleMethods.tex", "max_stars_repo_name": "lukaspj/Uni-Notes", "max_stars_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-06-13T15:41:03.000Z", "max_stars_repo_stars_event_max_datetime": "2017-06-13T15:41:03.000Z", "max_issues_repo_path": "ML/Exam/DecisionTreesAndEnsembleMethods.tex", "max_issues_repo_name": "lukaspj/Uni-Notes", "max_issues_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML/Exam/DecisionTreesAndEnsembleMethods.tex", "max_forks_repo_name": "lukaspj/Uni-Notes", "max_forks_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.5957446809, "max_line_length": 84, "alphanum_fraction": 0.652489367, "num_tokens": 4478, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7931059438487662, "lm_q1q2_score": 0.596425650913792}}
{"text": "\n\\documentclass[letterpaper, 10 pt, conference]{ieeeconf}  \n\\IEEEoverridecommandlockouts                             \n\\usepackage{graphicx} \n\\usepackage{hyperref}\n\n\\overrideIEEEmargins\n\n\\title{\\Huge Image Mosaicing}\n\\author{Jiyu Tian} \n\n\\begin{document}\n\n\\maketitle\n\\thispagestyle{empty}\n\\pagestyle{empty}\n\n%-------------------------------------------------------------------------\n\n\\section{INTRODUCTION}\nIn this project we implement a technique for image mosaicing with feature matching. \nThe Harris corner detector is applied to find corners in the two images. With corresponding features, a homography between the two images can then be estimated. Finally, we warp one image into the coordinate system of the second one to produce a mosaic containing the union of all pixels in the two images.\n%-------------------------------------------------------------------------\n\\section{ALGORITHMS DESCRIPTION}\n\\subsection{Grayscale Conversion}\nAfter reading in the two images, we first convert them to grayscale. We apply the conversion given by the following equation:\n\\begin{equation}\nGray = 0.299 \\times R + 0.587 \\times G + 0.114 \\times B\n\\end{equation}\n\nIf, in the worst case, input images do not have all three $RGB$ channels, we simply regard the first channel as the grayscale value.\n\\subsection{Harris Corner Detector}\nThe image gradients $I_x$ and $I_y$ are computed using the horizontal and vertical components of a Prewitt/Sobel mask. Then at each pixel we compute the products of derivatives $I^2_x$, $I^2_y$ and $I^2_{xy}$. A window centered at the pixel is selected for averaging the sums of the products $S^2_x$, $S^2_y$ and $S^2_{xy}$.\nAt each pixel, a matrix $M$ is defined as \n\\begin{equation}\nM=\\left[ \\begin{array}{cc}\nS^2_x & S^2_{xy}\\\\\nS^2_{xy} & S^2_y\n\\end{array} \\right]\n\\end{equation}\n\nThe response $R$ can then be computed as\n\\begin{equation}\nR=det(M) - k\\times trace(M)^2\n\\end{equation}\n\n\\subsection{Correspondences}\nGiven the two sets of corners from the two images, we compute the normalized cross correlation ($NCC$) of image patches centered at each corner. \n\\begin{equation}\nN_{fg} = \\sum_{[i, j]\\in R}\\hat{f}(i, j)\\hat{g}(i, j),\\ \\ \\hat{f} = \\frac{f}{||f||}, \\ \\  \\hat{g} = \\frac{g}{||g||}\n\\end{equation}\n\nWe choose potential corner matches by finding pairs of corners (one from each image) with the highest $NCC$ value. We also set a threshold to keep only matched pairs that have a large $NCC$ score.\n
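A minimal Python sketch of this patch score (following the normalization in the equation above, without mean subtraction):\n\\begin{verbatim}\nimport numpy as np\n\ndef ncc(f, g):\n    # Normalized cross correlation of two equally sized patches.\n    fh = f / np.linalg.norm(f)\n    gh = g / np.linalg.norm(g)\n    return np.sum(fh * gh)\n\\end{verbatim}\n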
\\subsection{Homography Estimation}\nGiven a set of 4 point pairs $(x,y),\\ (x', y')$, we can compute the homography from\n\\begin{equation}\n\\left[ \\begin{array}{c}\nx'\\\\\ny'\\\\\n1\n\\end{array} \\right] = \\left[ \\begin{array}{ccc}\nh_{11} & h_{12} & h_{13} \\\\\nh_{21} & h_{22} & h_{23} \\\\\nh_{31} & h_{32} & h_{33} \n\\end{array} \\right]\\left[ \\begin{array}{c}\nx\\\\\ny\\\\\n1\n\\end{array} \\right] \n\\end{equation}\n\nWe use RANSAC to robustly estimate the homography from the noisy correspondences:\n\\begin{itemize}\n\\item Repeatedly sample 4 random point pairs\n\\item Compute a homography from these four points\n\\item Map all points using the homography and compare distances between predicted and observed locations to determine the number of inliers\n\\item At the end, compute a least-squares homography from \\textbf{all} the inliers in the largest set of inliers.\n\\end{itemize}\n\n\n\\subsection{Image Warp}\nAfter generating the homography, we can warp one image onto the other one, blending overlapping pixels together to create a single image that shows the union of all pixels from both input images. \n\nThe steps are as follows:\n\\begin{itemize}\n    \\item Determine how big to make the final output image so that it contains the union of all pixels in the two images\n    \\item Copy the image that does not have to be warped into the appropriate location in the output\n    \\item Warp the other image into the output image based on the estimated homography\n    \\item Blend pixels in the area of overlap between both images\n\\end{itemize}\n%-------------------------------------------------------------------------\n\\section{EXPERIMENTAL RESULTS}\nThe two raw images we adopt for this report are shown in Fig \\ref{raw}. They are converted into grayscale for the following steps. Please refer to Appendix A for a complete flowchart.\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{raw.png}\n\\caption{Raw Images}\n\\label{raw}\n\\end{figure}\n%--------------------------------------\n\\subsection{Corner Detection}\nAfter applying the Harris corner detector, the corner features within the two images are shown in Fig \\ref{corner}.\n\nThe gradient is computed with a Sobel mask. We utilize a $7\\times7$ Gaussian averaging window with $\\sigma=1.4$ to find the matrix $M$ at each pixel. A threshold of $R = 1\\times 10^8$ is set to filter out pixels with low response, and $k$ is set to $0.05$.\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{corner.png}\n\\caption{Corner Detection Result}\n\\label{corner}\n\\end{figure}\n\nWe apply non-maximum suppression to find local peaks within $3\\times3$ neighborhoods.\n\n%--------------------------------------\n\\subsection{Correspondences}\nFor each pair of corners from the two images, we compute their $NCC$ in a $3\\times3$ window. We select potential corner pairs in order from the highest to the lowest $NCC$ score, keeping those larger than $0.5$.\n\nThe corresponding pairs with outliers are shown in Fig \\ref{corr}. There are over $100$ pairs in total.\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{correspondence.png}\n\\caption{Corresponding Pairs with Outliers}\n\\label{corr}\n\\end{figure}\n\n\n%--------------------------------------\n\\subsection{Homography Estimation}\nWe run RANSAC to find the final point correspondences. 
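The loop can be sketched as follows (in Python; \\texttt{fit\\_h} is an assumed helper, e.g. a DLT solve, for fitting a homography to point pairs, and the parameters mirror the settings described next):\n\\begin{verbatim}\nimport numpy as np\n\ndef ransac_homography(p, q, fit_h, iters=10000, tol=2.0):\n    # p, q: (N, 2) arrays of matched points.\n    n = len(p)\n    ph = np.hstack([p, np.ones((n, 1))])   # homogeneous coords\n    best = np.zeros(n, dtype=bool)\n    for _ in range(iters):\n        idx = np.random.choice(n, 4, replace=False)\n        H = fit_h(p[idx], q[idx])\n        proj = ph @ H.T\n        proj = proj[:, :2] / proj[:, 2:3]\n        inliers = np.linalg.norm(proj - q, axis=1) < tol\n        if inliers.sum() > best.sum():\n            best = inliers\n    # Final least-squares fit on the largest inlier set.\n    return fit_h(p[best], q[best]), best\n\\end{verbatim}\n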
A total of $1\\times10^4$ iterations are executed. A projected point is considered to be an inlier if its Euclidean distance from the matched point is less than $2$.\n\nIn an iteration, if $10\\%$ of the total point pairs are inliers, this\n\\newpage\n\\noindent homography is good enough; otherwise a new set of four random point pairs is selected to compute a homography.\n\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{ransac.png}\n\\caption{Cleaned Corresponding Pairs}\n\\label{clean}\n\\end{figure}\n\n%--------------------------------------\n\\subsection{Final Mosaic}\nAs shown in Fig \\ref{mosaic}, image $1$ is warped into image $2$ with the computed homography. In this part we utilize OpenCV for the implementation.\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{mosaic.png}\n\\caption{Final Mosaic}\n\\label{mosaic}\n\\end{figure}\n\n%--------------------------------------\n\\subsection{Limitation}\nOne key limitation of our algorithm is its high computational cost. Plenty of time is spent on corner detection and correspondence computation, since they are of $O(n^2)$ complexity.\n\nWe also notice that the RANSAC process depends on the initial selection of point pairs. With a lucky initial selection, the number of iterations can be reduced significantly.\n\n%-------------------------------------------------------------------------\n\\section{CONCLUSION}\nIn this project, we explored various applications of feature matching, and creatively applied them to image mosaicing. This is a very useful technique for constructing panoramic image mosaics from sequences of images.\n\\vfill\n\n\\end{document}\n", "meta": {"hexsha": "f2d8e16344c68d60b95437cd7fcb2f2483518686", "size": 7200, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EECE5639-Computer-Vision/Project-2/Report/main.tex", "max_stars_repo_name": "tjyiiuan/Graduate-Courses", "max_stars_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EECE5639-Computer-Vision/Project-2/Report/main.tex", "max_issues_repo_name": "tjyiiuan/Graduate-Courses", "max_issues_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EECE5639-Computer-Vision/Project-2/Report/main.tex", "max_forks_repo_name": "tjyiiuan/Graduate-Courses", "max_forks_repo_head_hexsha": "7f8b018dc92431d8f054a38e1a7fd2c284e1cce0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.8571428571, "max_line_length": 313, "alphanum_fraction": 0.7066666667, "num_tokens": 1853, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.793105953629227, "lm_q1q2_score": 0.5964256494529334}}
{"text": "\\section{The \\protect\\EcoLab{} Model}\\label{model}\n\nThe \\EcoLab{} model is but one model implemented using the \\EcoLab{}\nsoftware. This section documents the model itself, and may be skipped\nif your intention is to use \\EcoLab{} for other models.\n\nWe start with a generalised form of the Lotka-Volterra equation \n\\begin{equation}\\label{lotka-volterra}\n\\dot{\\bn} = \\br*\\bn + \\bn*\\bbeta\\bn.\n\\end{equation}\nHere \\bn\\ is the population density, the component $n_i$ being the\nnumber of individuals of species $i$, \\br\\ is the difference\nbetween reproduction and death, and \\bbeta\\ is the interaction matrix,\nwith $\\beta_{ij}$ being the interaction between species $i$ and $j$, and *\nreferring to elementwise multiplication. (The {\\tt mutate} operator is\nintroduced in \\S\\ref{mutation}.)\n\n\\subsection{Lotka-Volterra Dynamics}\n\nThe most obvious thing about equation (\\ref{lotka-volterra}) is its\nfixed point \n\\begin{equation}\\label{fixed point}\n\\hat{\\bn} = -\\bbeta^{-1}\\br,\n\\end{equation}\nwhere $\\dot{\\bn}=0$. For this point to be biologically meaningful, all\ncomponents of $\\hat{\\bn}$ must be positive, giving rise to the following\ninequalities:\n\\begin{equation}\\label{positive species}\n\\hat n_i = -\\left(\\bbeta^{-1}\\br\\right)_i>0, \\forall i\n\\end{equation}\nThe stability of this point is related to the\nnegative definiteness of the derivative of $\\dot{\\bn}$ at $\\hat{\\bn}$. The\ncomponents of the derivative are given by\n\\begin{equation}\\label{derivative}\n\\frac{\\partial\\dot{n}_i}{\\partial n_j} =\n\\delta_{ij}\\left(r_i+\\sum_k\\beta_{ik}n_k\\right) + \\beta_{ij}n_i\n\\end{equation}\nSubstituting eq (\\ref{fixed point}) gives\n\\begin{equation}\n\\left.\\frac{\\partial\\dot{n}_i}{\\partial n_j}\\right|_{\\hat{\\bn}}=\n-\\beta_{ij}\\left(\\bbeta^{-1}\\br\\right)_i\n\\end{equation}\n\nStability of the fixed point requires that this matrix should be\nnegative definite. Since the $\\left(\\bbeta^{-1}\\br\\right)_i$ are\nall negative by virtue of (\\ref{positive species}), each minor\ndeterminant of this matrix is equal to a minor determinant of \\bbeta\\\nmultiplied by a positive number, so stability of the equilibrium is\nequivalent to \\bbeta\\ being negative definite.\n\nA weaker condition is to require that the system remain bounded with\ntime:\n\\begin{equation}\\label{boundedness}\n\\sum_i\\dot{n_i}=\\br\\cdot\\bn + \\bn\\cdot\\bbeta\\bn < 0, \\;\\forall \\bn:\n\\sum_in_i>N \\;\\exists N\n\\end{equation}\n\nAs \\bn\\ becomes large in any direction, this functional is dominated\nby the quadratic term, so this implies that  $\\bn\\cdot\\bbeta\\bn\\leq0\n\\; \\forall\\bn: n_i>0$. Negative definiteness of \\bbeta\\ is sufficient,\nbut not necessary for this condition. For example, the predator-prey\nrelations (heavily normalised) have the following matrix as \\bbeta:\n\\begin{math}\n\\bbeta=\\left(\\begin{array}{cc}\n-1 & 2\\\\\n-2 & 0\\\\\n\\end{array}\\right)\n\\end{math}\nwhose symmetric part has eigenvalues $0$ and $-1$, so \\bbeta\\ is not\nnegative definite. If we let $\\bn=(x,y), x,y\\geq0$, then\n$\\bn\\cdot\\bbeta\\bn=-x^2$, which is clearly non-positive\nfor all $x$.\n
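\nA quick numerical check of the fixed point formula and of this example\nmatrix (a standalone Python sketch, not part of \\EcoLab{}; the rates in\n\\br\\ are illustrative values chosen so that $\\hat{\\bn}$ is positive):\n\\begin{verbatim}\nimport numpy as np\n\nbeta = np.array([[-1.0, 2.0], [-2.0, 0.0]])\nr = np.array([-1.0, 2.0])         # illustrative rates\n\nnhat = -np.linalg.solve(beta, r)  # fixed point, here (1, 1)\nprint(np.linalg.eigvalsh((beta + beta.T) / 2))  # [-1, 0]\n\nx, y = 0.3, 0.7                   # a point in the positive orthant\nn = np.array([x, y])\nprint(n @ beta @ n, -x**2)        # both equal -0.09\n\\end{verbatim}\n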
\nConsider adding a new row and column to \\bbeta. What condition are the\nnew row and column required to satisfy such that equation\n(\\ref{boundedness}) remains satisfied? Break up \\bbeta\\ in the following\nway:\n\\begin{displaymath}\n\\left(\n  \\mbox{\n     \\begin{tabular}{c|c}\n       $\\begin{array}{ccc}\\ddots\\\\&{\\bf A}\\\\&&\\ddots\\end{array}$ & \n       $\\begin{array}{c}\\vdots\\\\{\\bf B}\\\\\\vdots\\end{array}$ \\\\\n       \\hline\n       $\\begin{array}{ccc}\\cdots&{\\bf C}&\\cdots\\end{array}$ & D\n     \\end{tabular}\n   }\n\\right)\n\\left(\n  \\mbox{\n     \\begin{tabular}{c}\n     $\\begin{array}{c}\\vdots\\\\{\\bf n_1}\\\\\\vdots\\end{array}$\\\\\n     \\hline\n     $n_2$\n     \\end{tabular}\n    }\n\\right)\n\\end{displaymath}\n\nCondition (\\ref{boundedness}) becomes:\n\\begin{equation}\\label{boundedness2}\n{\\bf n_1}\\cdot{\\bf A}{\\bf n_1} + {\\bf n_1}\\cdot({\\bf B}+{\\bf C})n_2 +\nDn_2^2 \\leq 0\n\\end{equation}\n\nLet \n\\begin{displaymath}\na=\\max_{\\|\\bn\\|=1} \\bn\\cdot A\\bn,\\mbox{ and } b=\\max_{i}B_i+C_i.\n\\end{displaymath}\n Then a sufficient but\nnot necessary condition for condition (\\ref{boundedness2}) is\n\\begin{displaymath}\nan_1^2+bn_1n_2+Dn_2^2\\leq0\n\\end{displaymath}\n\nThe maximum value with respect to $n_2$ is $an_1^2-(bn_1)^2/4D$, so,\nnoting that $a$ and $D$ are negative, this requires that\n\\begin{equation}\\label{boundedness3}\nb \\leq 2\\sqrt{aD}\n\\end{equation}\n\n\\subsection{Mutation}\\label{mutation}\n\nWith mutation, equation (\\ref{lotka-volterra}) reads\n\\begin{equation}\n\\dot{\\bn} = \\br*\\bn + \\bn*\\bbeta\\bn + {\\tt mutate}(\\bmu,\\br,\\bn).\n\\end{equation}\n\n\nThe difficulty with adding mutation to this model is how to define the\nmapping between genotype space and phenotype space, or in other words,\nwhat defines the {\\em embryology}. A few studies, including Ray's\nTierra world, do this with an explicit mapping from the genotype to\nsome particular organism property (e.g. interpreted as machine language\ninstructions, or as weights in a neural net). These organisms then\ninteract with one another to determine the population dynamics. In\nthis model, however, we are doing away with the organismal layer, and\nso an explicit embryology is impossible. The only possibility left is\nto use a statistical model of embryology. The mapping between\ngenotype space and the population parameters $\\br$,\n$\\bbeta{}$  is expected to look like a rugged\nlandscape; however, if two genotypes are close together (in a Hamming\nsense), then one might expect that the phenotypes are likely to be\nsimilar, as would be the population parameters. This I call {\\em random\nembryology with locality}.\n\nIn the simple case of point mutations, the probability $P(x)$ of any\nchild lying distance $x$ in genotype space from its parent follows a\nPoisson distribution. Random embryology with locality implies that the\nphenotypic parameters are distributed randomly about the parent\nspecies, with a standard deviation that depends monotonically on the\ngenotypic displacement. The simplest such model is to distribute the\nphenotypic parameters in a Gaussian fashion about the parent's values,\nwith standard deviation proportional to the genotypic displacement.\nThis constant of proportionality can be conflated with the species'\nintrinsic mutation rate, to give rise to another phenotypic parameter\n$\\bmu$.  It is assumed that the probability of a mutation generating a\npreviously existing species is negligible, and can be ignored. 
We also\nneed another arbitrary parameter $\\rho$, ``species radius'', or\n\\verb+ecolab.sp_sep+,\\index{sp\\_sep} which can be understood as the\nminimum genotypic distance separating species, conflated with the same\nconstant of proportionality as $\\bmu$.\n\nIn summary, the mutation algorithm is as follows:\n\\begin{enumerate}\n\\item The number of mutant species arising from species $i$ within a\ntimestep is $\\mu_i\\alpha_in_i/\\rho$. This number is rounded\nstochastically to the nearest integer, e.g. 0.25 is rounded up to 1\n25\\% of the time and down to 0 75\\% of the time.\n\n\\item Roll a random number from a Poisson distribution\n$e^{-x/\\mu+\\rho}$ to determine the standard deviation $\\sigma$ of phenotypic\nvariation. \n\n\\item Vary $\\br$ according to a Gaussian distribution about the\n  parents' values, with $\\sigma\\alpha_0$ as the standard deviation,\n  where $\\alpha_0$ is the range of values that $\\br$ is initialised to,\n  ie $\\alpha_0$=\\verb|ecolab.repro_max|\\index{ecolab.repro\\_max}$-$\n  \\verb|ecolab.repro_min|\\index{ecolab.repro\\_min}\n\n\\item The diagonal part of $\\bbeta$ must be negative, so vary $\\bbeta$\naccording to a log-normal distribution. This means that if the old\nvalue is $\\beta$, the new value becomes\n$\\beta'=-\\exp(\\log_e|\\beta|+\\sigma)$. These values cannot become\narbitrarily small, however, as this would imply that some species make\narbitrarily small demands on the environment, and will become infinite\nin number. In \\EcoLab{}, the diagonal interaction terms are prevented from\nbecoming larger than $-r/(.1*{\\tt INT\\_MAX})$.\n\n\\item The offdiagonal components of $\\bbeta$ are varied in a similar\nfashion to $\\br$. However, new connections are added, or old ones\nremoved, according to $\\lfloor 1/r\\rfloor$, where $r\\in(-2,2)$ is\nchosen from a stepped uniform distribution \n\\begin{displaymath}\nP(x)=\\left\\{\n\\begin{array}{ll}\n0.25(1-g) & \\mathrm{if}\\; x\\leq0\\\\\n0.25(1+g) & \\mathrm{if}\\; x>0\\\\\n\\end{array}\n\\right.\n\\end{displaymath}\nwhere $g\\in[-1,1]$ (default of 0) is specified by the TCL variable\n\\verb+generalization_bias+\\index{generalization\\_bias}. The values on\nthe new connections are chosen from the same initial distribution that\nthe offdiagonal values were originally set with, ie the range\n\\verb|ecolab.odiag_min|\\index{ecolab.odiag\\_min} to\n\\verb|ecolab.odiag_max|\\index{ecolab.odiag\\_max}. Since $a$ in\ncondition (\\ref{boundedness3}) is computationally expensive, we use a\nslightly stronger criterion that is sufficient, computationally\ntractable, yet still allows ``interesting'' non-definite matrix\nbehaviour, namely that the sum $\\beta_{ij}+\\beta_{ji}$ should be\nnonpositive.\n\n\n\\item $\\bmu$ must be positive, so it should evolve according to the\n  log-normal distribution, like the diagonal components of $\\bbeta$.\n  Similar to $\\bbeta$, it is a catastrophe to allow $\\bmu$ to become\n  arbitrarily large. In the real world, mutation normally exists at\n  some fixed background rate --- species can reduce the level of\n  mutation by improving their genetic repair algorithms. In \\EcoLab{},\n  this ceiling on $\\bmu$ is given by the\n  \\verb|ecolab.mut_max|\\index{ecolab.mut\\_max} variable.\n\n\\end{enumerate}\n
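\nAs an illustration of the stochastic rounding in step 1 and the\nlog-normal variation in step 4, here is a standalone Python sketch (not\n\\EcoLab{}'s C++ implementation; the Gaussian draw reflects one reading of\n``vary according to a log-normal distribution''):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng()\n\ndef stochastic_round(x):\n    # e.g. 0.25 -> 1 with probability 0.25, else 0\n    lo = np.floor(x)\n    return int(lo) + int(rng.random() < x - lo)\n\ndef mutate_diag(beta_ii, sigma):\n    # log-normal variation keeps the diagonal term negative\n    return -np.exp(np.log(abs(beta_ii)) + sigma * rng.standard_normal())\n\\end{verbatim}\n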
\\subsection{Input Parameters}\\label{input parameters}\n\nThe model's parameters are set by TCL variables in {\\tt model.tcl}.\nThe actual data structures of the model are initialised the first time\nthe model's generate step is called. An example input set is:\n\\begin{verbatim}\n# initial condition\necolab.species {1 2}\necolab.density {100 100} \necolab.create {0 0}\necolab.repro_rate {.1 -.1}\necolab.interaction.diag {-.0001 -1e-5}\necolab.interaction.val {-0.001 0.001}\necolab.interaction.row {0 1}\necolab.interaction.col {1 0}\necolab.migration {.1 .1}\n\n\n# mutation parameters\necolab.mutation {.01 .01}\necolab.sp_sep .1\necolab.repro_min -.1\necolab.repro_max .1\necolab.odiag_min -1e-3\necolab.odiag_max 1e-3\necolab.mut_max .01\n\\end{verbatim}\n\nModel variables define a TCL command of the same name as they appear\nin the C++ source. So in the \\EcoLab{} model, the C++ object {\\tt\n  ecolab} defines a set of TCL commands such as {\\tt ecolab.density}\nthat can be used for setting or querying the values of {\\tt ecolab}'s\nmembers.  If an argument is specified, then that argument is used to\nset the variable's value; otherwise, the variable's value is returned.\nArray members in the model are initialised by specifying a TCL list\nargument to the variable's name, and return TCL lists when no argument\nis specified. The above example starts the ecology off with a single\npredator and prey (based on {\\tt pred-prey.tcl}\\index{pred-prey.tcl}).\n\n\\subsection{\\protect\\EcoLab{} Model commands}\n\n\\subsubsection{generate}\\index{generate}\n\nThis implements the basic Lotka-Volterra equations:\n\n\\begin{math}\n\\dot{\\bn} = \\br*\\bn + \\bn*\\bbeta\\bn\n\\end{math}\n\nwith \\br\\ being the reproduction rate and \\bbeta\\ being the\ninterspecies interaction. This is implemented as a single line:\n\n\\begin{verbatim}\n  density += repro_rate * density + (interaction * density) * density;\n\\end{verbatim}\n\nThis command also increments the timestep counter {\\tt tstep}.\n\nAn optional argument specifies a number of timesteps to run the\ngenerate step. This improves the speed by amortising the real to\ninteger conversion operation over a number of timesteps. The downside\nis that the computation may fail if the problem is ill-conditioned\n(offdiagonal elements of \\bbeta\\ too large with respect to the\ndiagonal elements).\n\n\\subsubsection{condense}\\label{condense}\\index{condense}\n\nThis command compacts the system of equations by removing extinct\nspecies where $n_i=0$.\n\n\\subsubsection{mutate}\\label{mutate}\\index{mutate}\n\nThis applies the point mutations to the system. The precise algorithm\nis described in \\S\\ref{mutation}. \n\n\\subsubsection{migrate}\\label{migrate}\\index{migrate}\n\nThis operator implements migration within cellular \\EcoLab{}. This\nupdates density values according to the difference with the 4 nearest\nneighbours: $\\bn+=\\bgamma*(0.25(\\bn_n+\\bn_e+\\bn_s+\\bn_w)-\\bn)$, where the\n$n,e,s,w$ index the north, east, south and west neighbouring cells.\n
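\nA sketch of this stencil on a rectangular grid (standalone Python, with\nperiodic boundaries as a simplifying assumption; the actual cell topology\nis handled by the Graph):\n\\begin{verbatim}\nimport numpy as np\n\ndef migrate(density, gamma):\n    # density: (nx, ny) array per species; 4-nearest-neighbour average\n    neigh = 0.25 * (np.roll(density, 1, 0) + np.roll(density, -1, 0)\n                    + np.roll(density, 1, 1) + np.roll(density, -1, 1))\n    return density + gamma * (neigh - density)\n\\end{verbatim}\n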
\n\\subsubsection{maxeig}\\label{maxeig}\\index{maxeig}\n\n\\verb|maxeig| returns the maximum eigenvalue of \\bbeta. If this number\nis negative, the equilibrium point is stable; if positive, it is\nunstable. As reported in Standish (1994)\\cite{Standish94}, the mutation\ndrives the maximum eigenvalue slightly positive, then instabilities\nact to push the eigenvalue back to zero. This command requires LAPACK\\index{LAPACK}.\n\n\\subsubsection{lifetimes}\\label{lifetimes}\\index{lifetimes}\n\n\\verb|lifetimes| records the timestep when a\nspecies passes a threshold (hardwired at 10) in the {\\tt create}\\index{create}\niarray. If a species has yet to pass the threshold, or has gone\nextinct, the value in {\\tt create} is zero. Upon return, this routine\nreturns the lifetimes of the species that have gone extinct. These can\nthen be passed to a histogram routine, or written to a file.\n\n\\subsubsection{random\\_interaction}\\label{random_interaction}\n\\index{random\\_interaction}\n\nCalls \\hyperref{{\\protect\\tt sparse\\_mat::init\\_rand()}}{(See \\S}{)}{sparse_mat} to randomly initialise the nonzero pattern of\nthe offdiagonal elements. The average number of nonzeros per row is\n{\\tt conn}, and the standard deviation of the number of nonzeros is\n{\\tt sigma}.\n\n\\subsubsection{set\\_grid}\n\nSynopsis\n\n{\\tt ecolab.set\\_grid} {\\em x} {\\em y}\n\n\\noindent Set up an $x\\times y$ grid in spatial \\EcoLab. See\n\\verb+ecolab_spatial.tcl+ for an example using this.\n\n\\subsubsection{get}\n\nSynopsis\n\n{\\tt ecolab.get} {\\em x} {\\em y}\n\n\\noindent Create a TCL method for accessing the internals of cell $x$\n$y$. The new commands look like array elements, eg\n\\begin{verbatim}\necolab(1,0).density\n\\end{verbatim}\n\n\\subsubsection{forall}\n\nSynopsis\n\n{\\tt ecolab.forall ecolab.}{\\em command} {\\em args}\n\nRun {\\em command} on all cells.\n\n\\subsection{Spatial Variation}\\label{spatial}\n\nThe ecolab model can run in multicellular mode by calling\n\\verb+ecolab.set_grid+ from TCL, specifying the dimensions of the\ngrid\\index{set\\_grid}. See\n\\verb+ecolab_spatial.tcl+\\index{ecolab\\_spatial} for an example.\n\nOnly the population density varies between the cells --- all other\nvariables are members of the ecolab variable, so can be set or queried\nin the usual way.\n\nThe usual ecolab model methods (generate, mutate, condense and\nlifetimes)\\index{generate}\\index{mutate}\\index{condense}\\index{lifetimes}\ncan now be called, but operate on the entire grid. A new {\\tt\nmigrate}\\index{migrate} is defined to handle migration between\ncells. You can also call a method of the ecolab cell on all cells\nusing the \\verb+forall+\\index{forall} command. For instance, to set\nall cells to the same initial density, use:\n\\begin{verbatim}\necolab.forall ecolab.density [constant $nsp 100]\n\\end{verbatim}\n\nAccess to the individual cells can be obtained by creating a TCL\\_obj\nreferring to the cell, using the \\verb+ecolab.get+\\index{get} method, which\ncreates commands like {\\tt ecolab({\\em x},{\\em y})} to refer to the\ncell. These can be fed to visualisers in the usual way.\n\n\\subsubsection{Parallel Execution}\n\nSince the cells are pins in a \\hyperref{graphcode Graph}{graphcode\n  Graph (\\S}{)}{graphcode}\\index{graphcode}, they are distributed over\nparallel processes if available.\n\nRemember to call the \\verb+gather+\\index{gather} method to ensure node\n0 is updated before running a visualiser on global data.\n\n\\begin{description}\n\\item[ecolab.gather] Bring processor 0's data up to date with the rest\n  of the grid\n\\item[ecolab.distribute\\_cells] Broadcast processor 0's data out to the\n  rest of the grid.\n\\end{description}\n\n\\section{Palauan Jellyfish model}\\label{jellyfish}\n\nThis is a model being developed by Mike Dawson and Russell Standish to\nmodel the behaviour of jellyfish in a number of lakes on the island of\nPalau. Jellyfish are photosynthetic animals, so they have a preference\nfor the sun and avoid shadows. In this model, the jellyfish are\nrepresented by agents that have a position and velocity. If a\njellyfish moves into shadow, or bumps into the side of the lake, it\nwill reverse its direction. From time to time, the jellyfish will\nchange direction and speed. 
The random generators governing these are\nselectable at runtime through the experimental script. Also, jellyfish\ndo bump into each other. In the model display, a jellyfish will flash\ngreen if it bumps into another one.\n\nTo run the jellyfish model, run the jellyfish.tcl script (located in\nthe models directory), specifying the lake as an argument, e.g.:\n\\begin{verbatim}\njellyfish.tcl lakes/OTM\n\\end{verbatim}\n\nYou can choose whether to compile the 2D version of the model or the\n3D version, by (not) defining the preprocessor flag\n\\verb+THREE_D+\\index{THREE\\_D} (see Makefile).\n\nThe lake itself is represented by a Tk pixmap, with the blue component\nrepresenting water. The lake shapes were scanned into a GIF file, and\nedited with a run-of-the-mill paint program to produce the lakes.\n\nVisualising the lake involved creating a Tk canvas widget, displaying\nthe lake image in it, then overlaying it with shadow lines extending\nfrom the pixels lying on the boundary, and finally representing the\njellyfish with arrow symbols (to indicate position and velocity).\n\nThis model illustrates the use of probes. Mouse clicks in the canvas\nregion are bound to a short method that determines which agent is\nclosest to the mouse position. It then colours that agent red (for\ntracking purposes), creates a TCL\\_obj representing that agent, and\nreturns the name back to TCL. TCL then calls the object browser\n(\\S\\ref{object-browser}) on that TCL\\_obj. In all, 14 lines of C++\ncode, and 3 lines of TCL code. The result is very effective.\n\nThe jellyfish model is written to be run in parallel using\n\\hyperref{Graphcode}{Graphcode\n  (\\S}{)}{graphcode}\\index{graphcode}. The strategy is effectively a\n{\\em particle-in-cell} method. The lake is subdivided into a Cartesian\ngrid of cells, and each jellyfish only needs to consult the cell that\nit is in, as well as neighbouring cells to determine if it will\ncollide with any other jellyfish.\n\n", "meta": {"hexsha": "0968b6f17b38349b17a80d3eb093c049dae1467e", "size": 18245, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/model.tex", "max_stars_repo_name": "digiperfect/ecolab", "max_stars_repo_head_hexsha": "52751dfa805b67b775ea50e37c5d02c2735ed0d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2017-04-19T15:02:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-18T05:03:56.000Z", "max_issues_repo_path": "doc/model.tex", "max_issues_repo_name": "digiperfect/ecolab", "max_issues_repo_head_hexsha": "52751dfa805b67b775ea50e37c5d02c2735ed0d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2016-01-17T21:14:29.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-28T13:18:28.000Z", "max_forks_repo_path": "doc/model.tex", "max_forks_repo_name": "digiperfect/ecolab", "max_forks_repo_head_hexsha": "52751dfa805b67b775ea50e37c5d02c2735ed0d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-01-17T20:32:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-29T19:11:33.000Z", "avg_line_length": 40.2759381898, "max_line_length": 128, "alphanum_fraction": 0.7607563716, "num_tokens": 5048, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597971, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5964256439366613}}
{"text": "\\documentclass[11pt]{amsart}\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n%\\geometry{landscape}                % Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{epstopdf}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\n\\title{Discretization of Elliptic Linear PDE and Neural Network}\n%\\author{The Author}\n%\\date{}                                           % Activate to display a given date or no date\n\n\\begin{document}\n\\maketitle\n%\\section{}\n%\\subsection{}\n\n\\section{Problem setup}\n\\subsection{HJB}\nWe want to solve a d-dimensions linear PDE given below:\n\\begin{itemize}\n \\item Domain \n $$O = \\{x\\in \\mathbb R^{d}: 0<x_{i}< 1, i =1,2, \\ldots d\\}.$$\n \\item Equation on $O$: \n $$(\\frac 1 2 \\Delta -  \\lambda) v(x) + \n \\sum_{i=1}^db_i(x)  \\frac{\\partial v(x)}{\\partial x_i}  \n  + \\ell(x) = 0.$$\n \\item Dirichlet data on $\\partial O$:\n $$v(x) = g(x).$$\n\\end{itemize}\n\n\\subsection{Examples} \n\n\\subsubsection{Multidimensional PDE with quadratic function as its solution}\nConsider a class of PDE with coefficients satisfying,\n$$d - \\lambda \\|x - \\frac 1 2 {\\bf 1}\\|_{2}^{2} \n+ b(x)\\cdot (2x - {\\bf 1})+ \\ell(x) = 0,$$\nwhere ${\\bf 1}$ is an $\\mathbb R^{d}$-vector with each element being $1$.\nThe exact solution is \n$$\nv(x) = \\|x - \\frac 1 2 {\\bf 1}\\|_{2}^{2} = \\sum_{i=1}^{d} (x_{i} - \\frac 1 2)^{2}.\n$$\n\n\\section{Discretization}\n\n\\subsection{FDM}\nWe introduce some notions of finite difference operators.\nCommonly used first order finite difference operators \nare FFD, BFD, and CFD. \nForward Finite Difference (FFD) is\n$$\\frac{\\partial}{\\partial x_{i}}v(x) \\approx \\delta_{he_{i}} v(x) \n:= \\frac{v(x+he_{i}) - v(x)}{h}.$$\nBackward Finite Difference (BFD) is\n$$\\frac{\\partial}{\\partial x_{i}}v(x) \\approx \\delta_{-he_{i}} v(x) \n:= \\frac{v(x-he_{i}) - v(x)}{-h}.$$\nCentral Finite Difference (CFD) is\n$$\\frac{\\partial}{\\partial x_{i}}v(x) \\approx \n\\bar \\delta_{h e_{i}} v(x)\n:= \\frac 1 2 (\\delta_{-he_{i}} + \\delta_{he_{i}}) v(x)\n.$$\nIt can be verified that the CFD has the following explicit form:\n$$\\bar \\delta_{h e_{i}} v(x) = \\frac{v(x+he_{i}) - v(x-he_{i})}{2h}.$$\nSecond order finite difference operators are the followings:\n$$\n\\frac{\\partial^{2}}{\\partial x_{i}^{2}} v(x)\n\\approx\n\\delta_{-he_{i}} \\delta_{he_{i}} v(x)\n= \\frac{v(x+he_{i}) - 2 v(x) + v(x- he_{i})}{h^{2}}.\n$$\nAlthough the next operator will not be used below, we will write it for its completeness. 
Although the next operator will not be used below, we record it for completeness. If $i \\neq j$, we use\n$$\n\\frac{\\partial^{2}}{\\partial x_{i} \\partial x_{j}} v(x) \\approx\n\\frac 1 2 (\\delta_{he_{i}} \\delta_{-he_{j}} v(x) + \n\\delta_{he_{j}} \\delta_{-he_{i}} v(x)).\n$$\n\n\n\\subsection{CFD on PDE}\nThe approximations applied to the PDE are \n$$\n\\frac{\\partial v(x)}{\\partial x_i} \\leftarrow \n\\bar \\delta_{h e_{i}} v(x)\n$$\nand\n$$\n\\frac{\\partial^2 v(x)}{\\partial x_i^2} \\leftarrow\n\\delta_{-he_{i}} \\delta_{he_{i}} v(x).$$\nFor simplicity, if we set \n$$\n\\gamma = \\frac{d}{d+ h^{2} \\lambda}, \\\np^{h}(x \\pm he_{i}|x) = \\frac 1 {2d} (1 \\pm h b_{i}(x)), \\\n\\ell^{h}(x) = \\frac{h^{2} \\ell(x)}{d},\n$$\nthen this yields the dynamic programming principle (DPP)\n$$\nv (x) = \\gamma \n\\Big\\{ \\ell^{h}(x) + \n\\sum_{i=1}^{d} \np^{h}(x+he_{i}|x) v(x+he_{i})\n+ p^{h}(x-he_{i}|x) v(x-he_{i})\n\\Big\\}.\n$$\nNote that $p^{h}(\\cdot|x) \\ge 0$ requires $h\\,|b_{i}(x)| \\le 1$, i.e.\\ $h$ must be taken small enough.\n\n\\subsection{UFD on PDE}\nUpwind finite difference (UFD) is the following:\n$$\n\\frac{\\partial v(x)}{\\partial x_i} \\leftarrow \n\\delta_{ h e_{i}} v(x) \\cdot I(b_{i}(x)\\ge 0) +\n\\delta_{-he_{i}} v(x) \\cdot I(b_{i}(x) <0)\n$$\nand\n$$\n\\frac{\\partial^2 v(x)}{\\partial x_i^2} \\leftarrow\n\\delta_{-he_{i}} \\delta_{he_{i}} v(x).$$\nThen, with\n$$c = d+h\\sum_{i} |b_{i}(x)|, \\ \n\\gamma = \\frac{c}{c+h^{2}\\lambda}, \\ \n\\ell^{h}(x) = \\frac{\\ell(x) h^{2}}{c}, \\ \np^{h}(x \\pm he_{i}|x) = \\frac{1+ 2hb_{i}^{\\pm}(x)}{2c},\n$$\nthis yields the DPP\n$$\nv (x) = \\gamma \n\\Big\\{ \\ell^{h}(x) + \n\\sum_{i=1}^{d} \np^{h}(x+he_{i}|x) v(x+he_{i})\n+ p^{h}(x-he_{i}|x) v(x-he_{i})\n\\Big\\}.\n$$\n\n\n\n\\end{document}  ", "meta": {"hexsha": "9e936e604d58d316467062dbb70b077be0bf4c8b", "size": 4117, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/191222epde.tex", "max_stars_repo_name": "songqsh/foo1", "max_stars_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-14T03:04:24.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-14T03:04:24.000Z", "max_issues_repo_path": "doc/191222epde.tex", "max_issues_repo_name": "songqsh/foo1", "max_issues_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-07-01T20:35:39.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-04T22:07:50.000Z", "max_forks_repo_path": "doc/191222epde.tex", "max_forks_repo_name": "songqsh/foo1", "max_forks_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-08-25T00:50:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-25T20:06:32.000Z", "avg_line_length": 29.8333333333, "max_line_length": 111, "alphanum_fraction": 0.6026232694, "num_tokens": 1598, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5964256387982877}}
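As a numerical companion to the notes above (a sketch added for illustration, not part of the original document), the CFD-based DPP can be solved by fixed-point iteration on a grid. The drift, grid size, and iteration budget below are arbitrary choices; the quadratic example from the problem setup is used, and since central and second differences are exact on quadratics, the iteration converges to the exact solution.\n\\begin{verbatim}\nimport numpy as np\n\nd, lam, N = 2, 1.0, 11           # dimension, lambda, grid points per axis\nh = 1.0 / (N - 1)\nxs = np.linspace(0.0, 1.0, N)\nX1, X2 = np.meshgrid(xs, xs, indexing='ij')\nr2 = (X1 - 0.5)**2 + (X2 - 0.5)**2  # exact solution v(x) = |x - (1/2)1|^2\nb1 = b2 = 0.5 * np.ones_like(X1)    # arbitrary drift; h*|b_i| <= 1 holds\nell = lam * r2 - d - (b1*(2*X1 - 1) + b2*(2*X2 - 1))\n\ngamma = d / (d + h**2 * lam)\nv = r2.copy()                       # Dirichlet data g on the boundary\nv[1:-1, 1:-1] = 0.0                 # arbitrary interior initial guess\nfor it in range(100000):\n    vn = v.copy()\n    vn[1:-1, 1:-1] = gamma * (h**2 * ell[1:-1, 1:-1] / d\n        + (1 + h*b1[1:-1, 1:-1]) / (2*d) * v[2:, 1:-1]\n        + (1 - h*b1[1:-1, 1:-1]) / (2*d) * v[:-2, 1:-1]\n        + (1 + h*b2[1:-1, 1:-1]) / (2*d) * v[1:-1, 2:]\n        + (1 - h*b2[1:-1, 1:-1]) / (2*d) * v[1:-1, :-2])\n    if np.max(np.abs(vn - v)) < 1e-12:\n        break\n    v = vn\nprint('max error vs exact solution:', np.max(np.abs(v - r2)))\n\\end{verbatim}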
{"text": "\\documentclass{article}\n\\usepackage{amsthm, amsmath, amssymb}\n\\usepackage{graphicx}\n\\title{Second order graph convolutional networks}\n\\date{}\n\\author{Elnur Gasanov}\n\\parindent=0.0mm\n\\begin{document}\n\\maketitle\n\n\\section{Introduction}\n\nMany important business-oriented tasks (such as implicit social relation prediction, log data and paper classification) highly depend on graph-structured data which is a non-euclidean domain. For the given graph with the description of nodes we aim to classify each node. Solution of this problem related to, for example, citation graphs may enable automatic determination of conference section to which the given paper relates the most.\n\n\\section{Related work}\n\nThe idea to use spectral graph information in order to construct convolutional layers has been first proposed in~\\cite{first_paper}. Later, in~\\cite{CNN_LSF} authors proposed practical version of CNNs on graphs with fast, trainable filters. Further simplification to linearly approximated filters has lead to Graph Convolutional Network~\\cite{GCN}, which became a state-of-art technique in 2015. \n\\section{Second order graph convolutional network (SO~-~GCN)}\n\\subsection{Spectral graph convolutions}\n\nGeneral form of filtering is the following~\\cite{first_paper}:\n\\[\ng_\\theta \\star x = Ug_\\theta U^\\top x \n\\]\n\nChebyshev parametrization of the filter has been proposed in~\\cite{CNN_LSF}:\n\\begin{align}\ng_\\theta(L) = \\sum\\limits_{k=0}^{K-1} \\theta_k T_k(\\tilde{\\Lambda}), \\label{eq}\n\\end{align}\nwhere $\\tilde{\\Lambda} = \\frac{2}{\\lambda_{\\max}} \\Lambda - I$ and  $T_k$ is the Chebyshev polynomial of order $k$. \n\n$T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x), T_0(x) = 1, \\ T_1(x) = x$. \n\\subsection{Layer-wise quadratic model}\nSecond order model according to the formula~\\ref{eq} has got the following form:\n\\[\ng_\\theta \\star x = (\\theta_0' I + \\theta_1' \\tilde{L} + \\theta_2' (2 \\tilde{L}^2 - I))x, \n\\]\nwhere $\\tilde{L} = \\frac{2}{\\lambda_{\\max}} L - I$ and $L = I - D^{-\\frac12} A D^{-\\frac12}$ is the Laplacian matrix of the graph. We will relax first two terms as it has been done in~\\cite{GCN} (so that $\\theta_0' I + \\theta_1' \\tilde{L} = \\theta_1 \\tilde{D}^{-\\frac12} \\tilde{A} \\tilde{D}^{-\\frac12} $). For the sake of calculation simplification, we will assume $\\lambda_{\\max} = 2$, so quadratic term will get the form: $ 2 \\tilde{L}^2 - I = 2 D^{-\\frac12} A D^{-1} A D^{-\\frac12} - I$. In order to perform a kernel trick, as it has been done in~\\cite{GCN}, we will get rid of the identity matrix, so finally the quadratic term will have the following form:\n$$\n\\theta_2 (2 D^{-\\frac12} A D^{-1} A D^{-\\frac12} - I) \\approx \\theta_2' \\tilde{D}_2^{-\\frac12} A^2 \\tilde{D}_2^{-\\frac12},\n$$\nwhere $\\tilde{D}_{2, ii} = \\sum_j [A^2]_{ij}$. 
One layer's output is:\n\n\\begin{align*}\nZ = \\text{ReLU} (\\tilde{D}^{-\\frac12} \\tilde{A} \\tilde{D}^{-\\frac12} X W^{(1)} + \\tilde{D}_2^{-\\frac12} A^2 \\tilde{D}_2^{-\\frac12} X W^{(2)} )\n\\end{align*}\n\nLet us denote $\\hat{A}_1 = \\tilde{D}^{-\\frac12} \\tilde{A} \\tilde{D}^{-\\frac12}, \\ \\hat{A}_2 = \\tilde{D}_2^{-\\frac12} A^2 \\tilde{D}_2^{-\\frac12}$; the final classification model $Z$ is:\n$$\nZ' = \\text{ReLU} (\\hat{A}_1 X W^{(0, 1)} + \\hat{A}_2 X W^{(0, 2)})\n$$\n$$\nZ = \\text{SoftMax}(\\hat{A}_1 Z' W^{(1, 1)} + \\hat{A}_2 Z' W^{(1, 2)})\n$$\n\n\\section{Computational experiments}\n\n\\subsection{Datasets}\n\nWe use two groups of datasets: the citation networks CiteSeer and Cora~(\\cite{Sen}), and the web graph networks Texas and Wisconsin. We summarize information about the graphs in the table below.\n\\begin{table}[h]\n\\centering\n\\caption{Dataset statistics}\n~\\\\\n\\begin{tabular}{c c c c c c}\n{\\bf Datasets} & {\\bf Type} & {\\bf Nodes} & {\\bf Edges} & {\\bf Classes} & {\\bf Features}\\\\\n\\hline\nCiteseer & Citation network & 3327 & 4732 & 6 & 3703\\\\\nCora & Citation network & 2708 & 5429 & 7 & 1433 \\\\\nTexas & Web & 187 & 328 & 5 & 1703\\\\\nWisconsin & Web & 265 & 530 & 5 & 1703 \\\\\n\\end{tabular}\n\\end{table}\n\n\\subsection{Experimental setup}\n\nWe train a two-layer graph convolutional network with a dropout layer between the layers. The weights of the model were trained with the Adam optimizer with stepsize 0.01. Each model was trained for 200 epochs. For citation networks, training was performed on 140 nodes and testing on 1000 nodes. For web networks, training was performed on ten percent of the data and testing on 70 percent (the remaining 20 percent was used for validation).\n\n\\subsection{Results}\n\nClassification accuracy of each model is summarized in the table below. \n\n\\begin{table}[h]\n\\centering\n\\caption{Summary of results in terms of classification accuracy} \n~\\\\\n\\begin{tabular}{c c c c c}\n{\\bf Method} & {\\bf CiteSeer} & {\\bf Cora} & {\\bf Texas} & {\\bf Wisconsin}\\\\\n\\hline\nGCN & 61.4 & 82.6 & 51.9 & 47.3\\\\\nSO GCN & 64.1 & 83.6 & 56.5 & 48.9\\\\\n\\end{tabular}\n\\end{table}\n\nAs can be seen from the table, for the described experimental setup, SO GCN consistently outperforms GCN. \n\n\\section{Conclusion}\n\nAs the experiments show, second order GCN is a promising approach for node classification problems. 
Further extensions of this work may include increasing the order of the model, comparing the time needed to train the model against its performance, and applying more advanced regularization techniques.\n\n\\bibliography{references}\n\\bibliographystyle{plain}\n\\end{document}", "meta": {"hexsha": "86ea08ba96aeda2ceebacb0a8b85d4794a2284e0", "size": 5250, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/report.tex", "max_stars_repo_name": "gaseln/KAUST_CS340_GraphProject", "max_stars_repo_head_hexsha": "3a21e97a7b43e03993ffaa6522f01452fbca7058", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-22T08:58:27.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-22T08:58:27.000Z", "max_issues_repo_path": "doc/report.tex", "max_issues_repo_name": "gaseln/KAUST_CS340_GraphProject", "max_issues_repo_head_hexsha": "3a21e97a7b43e03993ffaa6522f01452fbca7058", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/report.tex", "max_forks_repo_name": "gaseln/KAUST_CS340_GraphProject", "max_forks_repo_head_hexsha": "3a21e97a7b43e03993ffaa6522f01452fbca7058", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.9708737864, "max_line_length": 661, "alphanum_fraction": 0.7081904762, "num_tokens": 1690, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105941403651, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5964256358512025}}
{"text": "\\documentclass{article}\n\\usepackage[minionint,mathlf,textlf]{MinionPro} % To gussy up a bit\n\\usepackage[margin=1in]{geometry}\n\\usepackage{graphicx} % For .eps inclusion\n%\\usepackage{indentfirst} % Controls indentation\n\\usepackage[compact]{titlesec} % For regulating spacing before section titles\n\\usepackage{adjustbox} % For vertically-aligned side-by-side minipages\n\\usepackage{array, mathrsfs, mhchem, amsmath} % For centering of tabulars with text-wrapping columns\n\\usepackage{hyper ref}\n\n\\pagenumbering{gobble} \n\\setlength\\parindent{0 cm}\n\\begin{document}\n\\large\n\\section*{Introduction}\nTransition from discussion of diffusion to cooperativity. Reminder of the scale imposed by diffusion and the dependence of the rate of oxygen transfer on the concentration gradient.\n\\[ c(x) = c(0) e^{-x\\sqrt{k/D}} \\]\n\n\\[ J = -D \\frac{\\partial c}{\\partial x} \\]\n\n\\section*{Heme and hemoglobin}\n\nSee slides.\n\n\\section*{Independent sites}\nIf the binding sites on each subunit of hemoglobin are truly independent, then we can treat each hemoglobin subunit separately. Consider the single site on one hemoglobin molecule. We'll represent the protein with $P$ and the oxygen molecule, its ligand, with an $L$ to emphasize the generality of the finding:\n\n\\begin{eqnarray*}\n\\ce{P + L <=>[K_D] PL}\n\\end{eqnarray*}\n\nThe dissociation constant $K_D$ is the ratio of the bound complex to its constituent parts:\n\\[ K_D = \\frac{\\left[ P \\right]\\left[ L \\right]}{\\left[ PL \\right]} \\]\n\nWe expect that as more ligand (oxygen) is added, the balance of this reaction will shift to a higher concentration of $[PL]$ than $[P]$. Therefore the fraction of bound sites should increase with $[L]$, and we would like to know more precisely how they vary together.\\\\\n\nThe total number of binding sites is equal to the total concentration of the protein, which is:\n\n\\[ \\left[ P \\right]_{tot} = \\left[ PL \\right] + \\left[ P \\right] \\]\n\nSo the fraction of sites bound, which by convention is called $Y$, will be:\n\n\\[ Y = \\frac{\\left[ PL \\right]}{\\left[ P \\right]_{tot}} = \\frac{\\left[ PL \\right]}{\\left[ PL \\right] + \\left[ P \\right]} \\]\n\nWe can rearrange the definition of the binding constant to find $[P] = K_D[PL]/[L]$ and plug in to eliminate $[P]$ from this expression:\n\n\\[ Y = \\frac{\\left[ PL \\right]}{\\left[ PL \\right] + K_D\\left[ PL \\right]/\\left[ L \\right]} = \\frac{1}{1 + K_D/\\left[ L \\right]} = \\frac{\\left[ L \\right]}{\\left[ L \\right] + K_D} \\]\n\nThis is a \\textit{hyperbolic} curve. In this form the relationship to the familiar $y=c/x$ is probably not apparent. However, consider the asymptotes of this function: when the ligand concentration is large, $Y \\to 1$; and although negative ligand concentrations are not physically meaningful, $Y \\to - \\infty$ as the ligand concentration goes to $-K_D$. Suppose that we shifted to the new coordinates $\\left[ L' \\right] = \\left[ L \\right] + K_D$ and $Y' = Y - 1$:\n\n\\begin{eqnarray*}\nY & = & \\frac{\\left[ L' \\right] - K_D}{\\left[ L' \\right]}\\\\\nY' & = & Y - 1 = \\frac{\\left[ L' \\right] - K_D}{\\left[ L' \\right]} - \\frac{\\left[ L' \\right]}{\\left[ L' \\right]} = \\frac{- K_D}{\\left[ L' \\right]}\n\\end{eqnarray*}\n\n$\\ldots$ which hopefully clarifies why we called this a hyperbolic function.\\\\\n\nPoint out that $K_D$ is equal to the concentration of ligand at half-saturation. 
Plot and point out how the approach to low binding fraction is quite slow, then suddenly abrupt at a ligand concentration very near zero.\n\n``How bad is it?\": show that you need an 81-fold change in concentration to go from 10\\% bound to 90\\% bound. Yeesh -- hopefully your tissues never come close to this!\n\n\\section*{Cooperativity}\nLet's suppose that the rate of oxygen binding or dissociation at any given site depends on the status of the other three sites. For example, once one oxygen molecule has been bound, it may be easier to bind a second one. (Later we'll discuss mechanistically how this could occur; but for now, we'll focus on demonstrating that this assumption is enough to give us the desired binding properties.) The ``on\" and ``off\" rates for ligand at a single site ($k_i$ and $k_{-i}$, respectively) may differ depending on how many ligands are currently bound to the protein:\n\n\\begin{eqnarray*}\n\\ce{P + L <=>[4k_1][k_{-1}] PL}\\\\\n\\ce{PL + L <=>[3k_2][2k_{-2}] PL_2}\\\\\n\\ce{PL_2 + L <=>[2k_3][3k_{-3}] PL_3}\\\\\n\\ce{PL_3 + L <=>[k_4][4k_{-4}] PL_4}\n\\end{eqnarray*}\n\nNote that each rate $k_i$ is scaled by the number of equivalent sites on the protein at which binding or dissociation could take place. For example, the singly-bound complex PL has three sites at which a second ligand could bind and therefore a binding rate of $3k_2$; PL has only one bound ligand, so the dissociation rate is simply $k_{-1}$. With these rate constants, we can relate the concentrations of proteins in each state:\n\n\\begin{eqnarray*}\n\\left[ \\textrm{PL} \\right] & = & \\frac{4k_1}{k_{-1}} \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right] = 4K_1 \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]\\\\\n\\left[ \\textrm{PL}_2 \\right] & = &  \\frac{3k_2}{2k_{-2}} \\left[ \\textrm{PL} \\right]\\left[ \\textrm{L} \\right] = \\frac{6k_1k_2}{k_{-1}k_{-2}} \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^2 = 6K_1K_2 \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^2\\\\\n\\left[ \\textrm{PL}_3 \\right] & = &  \\frac{2k_3}{3k_{-3}} \\left[ \\textrm{PL}_2 \\right]\\left[ \\textrm{L} \\right] = \\frac{4k_1k_2k_3}{k_{-1}k_{-2}k_{-3}} \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^3 = 4K_1K_2K_3 \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^3\\\\\n\\left[ \\textrm{PL}_4 \\right] & = &  \\frac{k_4}{4k_{-4}} \\left[ \\textrm{PL}_3 \\right]\\left[ \\textrm{L} \\right] = \\frac{k_1k_2k_3k_4}{k_{-1}k_{-2}k_{-3}k_{-4}} \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^4 = K_1K_2K_3K_4 \\left[ \\textrm{P} \\right]\\left[ \\textrm{L} \\right]^4\n\\end{eqnarray*}\n\nwhere we have introduced the constants $K_i = k_i/k_{-i}$ to simplify notation. 
Recall that the fraction of bound sites is:\n\n\\[ Y = \\frac{\\textrm{Number of sites bound by ligand}}{\\textrm{Total number of binding sites}} = \\frac{\\sum_{i=1}^4 i \\left[ \\textrm{PL}_i \\right]}{4 \\sum_{i=0}^4 \\left[ \\textrm{PL}_i \\right]} = \\frac{\\left[ \\textrm{PL} \\right] + 2 \\left[ \\textrm{PL}_2 \\right] + 3 \\left[ \\textrm{PL}_3 \\right] + 4 \\left[ \\textrm{PL}_4 \\right]}{4 \\left( \\left[ \\textrm{P} \\right] + \\left[ \\textrm{PL} \\right] + \\left[ \\textrm{PL}_2 \\right] + \\left[ \\textrm{PL}_3 \\right] + \\left[ \\textrm{PL}_4 \\right] \\right)}\\]\n\nPlugging in our expressions for the relative concentrations of proteins in each state:\n\n\\begin{eqnarray}\nY & = & \\frac{\\left[ P \\right] \\left( 4 K_1 \\left[ L \\right] + 12K_1K_2 \\left[ L \\right]^2 + 12K_1K_2K_3 \\left[ L \\right]^3 + 4K_1K_2K_3K_4 \\left[ L \\right]^4 \\right)}{4\\left[ P \\right] \\left( 1 + 4K_1 \\left[ L \\right] + 6K_1K_2 \\left[ L \\right]^2 + 4K_1K_2K_3 \\left[ L \\right]^3 + K_1K_2K_3K_4 \\left[ L \\right]^4\\right)} \\nonumber\\\\\nY & = & \\frac{K_1 \\left[ L \\right] + 3K_1K_2 \\left[ L \\right]^2 + 3K_1K_2K_3 \\left[ L \\right]^3 + K_1K_2K_3K_4 \\left[ L \\right]^4}{1 + 4K_1 \\left[ L \\right] + 6K_1K_2 \\left[ L \\right]^2 + 4K_1K_2K_3 \\left[ L \\right]^3 + K_1K_2K_3K_4 \\left[ L \\right]^4 \\label{adair}}\n\\end{eqnarray}\nEquation \\ref{adair} is the \\textit{Adair equation} for a molecule with four binding sites. The relationship between $Y$ and $L$ will depend on the $K_i$ in a potentially complicated way. However, if we assume that the protein has a relatively high affinity for the final ligand (i.e. $K_4 \\gg K_1, K_2, K_3$), this equation simplifies greatly to:\n\\begin{eqnarray*}\nY & \\approx & \\frac{K_1K_2K_3K_4 \\left[ L \\right]^4}{1 + K_1K_2K_3K_4 \\left[ L \\right]^4} = \\frac{\\left[ L \\right]^4}{K^4 + \\left[ L \\right]^4}\n\\end{eqnarray*}\nwhere $K = (K_1K_2K_3K_4)^{-1/4}$ functions as an effective dissociation constant averaged over all of the binding events. This is a specific instance of the Hill equation:\n\\begin{eqnarray}\nY & = & \\frac{\\left[ X \\right]^n}{K^n + \\left[ X \\right]^n\\label{hill}}\n\\end{eqnarray}\nwhere the exponent $n$ is called the \\textit{Hill coefficient}. Notice that when $n=1$, the function is hyperbolic: this is equivalent to the non-cooperative case. For $n > 1$, the function is instead \\textit{sigmoidal}, i.e. s-shaped.\\footnote{The term refers to the lowercase Greek sigma glyph $\\varsigma$, not $\\sigma$.} This sigmoidal shape indicates that binding is more ``switch-like\": it transitions from a low fractional saturation to a high fractional saturation at a ligand concentration potentially far from zero.\\\\\n\nWhy is this reaction scheme suggested by the Hill approximation not realistic for oxygen binding to hemoglobin?\n\\begin{eqnarray*}\n\\ce{P + 4L <=>[K] PL_4}\n\\end{eqnarray*}\n\nA good measure of how ``switch-like\" the behavior is can be given by the maximum slope of the curve, which we can of course find through differentiation:\n\n\\[ \\frac{dY}{d\\left[X\\right]} = \\frac{n[X]^{n-1}\\left(K^n + [X]^n\\right) - [X]^n \\cdot n [X]^{n-1}}{\\left([X]^n + K^n\\right)^2} = \\frac{nK^n [X]^{n-1}}{\\left([X]^n + K^n\\right)^2} \\]\n\nTaking a second derivative (or simply examining the original curve) shows that the curve is steepest near the half-saturation point $[X]=K$, where $dY/d[X] = n/4K$. 
In other words, the slope at the inflection point increases linearly with the Hill coefficient.\\\\\n\nEarlier we saw that for a hyperbolic curve, we would need an 81-fold difference in ligand concentration to go from 10\\% to 90\\% saturation. How much improvement have we bought with cooperativity? Let $A$ and $B$ be the concentrations of ligand at which $Y=0.1$ and $Y=0.9$, respectively. Then from the Hill equation:\n\\begin{eqnarray*}\n0.1 = \\frac{A^n}{K^n + A^n} & \\implies & A = K\\sqrt[n]{\\frac{1}{9}}\\\\\n0.9 = \\frac{B^n}{K^n + B^n} & \\implies & B = K\\sqrt[n]{9}\\\\\n\\frac{B}{A} & = & \\sqrt[n]{81}\n\\end{eqnarray*}\nSo for example if $n=4$, we need only a three-fold change in ligand concentration to move from 10\\% to 90\\% saturation, making it much easier to pick up and let go of, say, oxygen over a physiological range of partial pressures.\\\\\n\nIs the Hill function a good fit to the hemoglobin data? To address this we will want to fit both parameters, $n$ (to keep ourselves honest) and $K$. Remember that for oxygen/hemoglobin we do have experimental control over the partial pressure of oxygen ([$L$]) and we can measure the fraction of heme groups bound by oxygen ($Y$) by spectrophotometry. We'll use linear regression to fit the parameters, using an algebraic trick to transform the Hill equation:\n\\begin{eqnarray*}\nY = \\frac{\\left[ L \\right]^n}{K^n + \\left[ L \\right]^n} & \\implies & 1 - Y = \\frac{K^n}{K^n + \\left[ L \\right]^n}\\\\\n\\frac{Y}{1-Y} & = & \\frac{\\left[ L \\right]^n}{K^n}\\\\\n\\log \\left( \\frac{Y}{1-Y} \\right) & = & n \\log \\left[ L \\right] - n \\log K\n\\end{eqnarray*}\n\nNotice that this equation has the form of a line, with slope $n$ and y-intercept $-n\\log K$. We simply use our measurements of [$L$] and $Y$ to plot $\\log (Y/1-Y)$ vs. $\\log [L]$, then find $n$ and $K$\\footnote{Of course, we will need to fit $n$ and $-n\\log K$ first, then back out what the value of $K$ should be.} by linear regression (a numerical sketch of this fit follows these notes).\\\\\n\nWe see that the best-fit slope is $n=2.8$, which is lower than the $n=4$ we might have predicted naively. The reason is that our approximation ($K_4 \\gg K_1, K_2, K_3$) was unrealistic: cooperative binding of the final ligand is not so large that the other terms of the Adair equation can be fully ignored while assuming $n=4$. Relaxing the constraint on $n$ allowed us to obtain a good fit to the Hill equation anyway, by choosing a smaller value of $n$. The best-fit value of $n$ is still useful, however: it provides a lower bound on the number of ligand binding sites.\n\n\\section*{Monod-Wyman-Changeux Model}\nWe have now seen that cooperativity provides the switch-like binding behavior we hoped for, as well as the Adair and Hill equations which fit the data well. We still lack a mechanistic insight: how does hemoglobin achieve the change in binding coefficients? Which of the binding coefficients in the Adair equation are likely to differ and why?\\\\\n\nWe have a clue, of course: each heme group sits in a subunit of hemoglobin, and when oxygen binds, the surrounding subunit changes shape. (This change is at least partly mediated by the physical connection of an amino acid side chain to the heme group.) We have the structures for fully bound and unbound hemoglobin: this animation is just an inferred transition between them. 
But the concerted motion is a simplified model for what might be causing the transition from low to high affinity.\n\nLet's assume the ratio $K = [P_T]/[P_R]$ is relatively large: when no ligand is bound, the taut state is highly favored. However, the ratio $[P_TL_4]/[P_RL_4]$ for the fully bound complex is somewhat small, favoring the relaxed state. Each state has its own binding constant, $K_T$ or $K_R$, that does \\textit{not} vary depending on the number of ligands bound (modulo the multiplicities in the rate constants). Choose $K_T < K_R$ so that most hemoglobins will have either none or all sites bound. We'll see that these three parameters -- $K$, $K_T$, and $K_R$ -- are all we need to get cooperativity.\\\\\n\nLet's start by relating all of our concentrations, as we did before for the Adair equation, writing $\\alpha = K_T/K_R$:\n\\begin{eqnarray*}\n\\left[ P_RL \\right] = 4K_R \\left[ P_R \\right] \\left[ L \\right] &  & \\left[ P_TL \\right] = 4K_T \\left[ P_T \\right] \\left[ L \\right] = 4\\alpha K_R K \\left[ P_R \\right] \\left[ L \\right]\\\\\n\\left[ P_RL_2 \\right] = 6K_R^2 \\left[ P_R \\right] \\left[ L \\right]^2 &  & \\left[ P_TL_2 \\right] = 6\\alpha^2 K_R^2 K \\left[ P_R \\right] \\left[ L \\right]^2\\\\\n\\left[ P_RL_3 \\right] = 4K_R^3 \\left[ P_R \\right] \\left[ L \\right]^3 &  & \\left[ P_TL_3 \\right] = 4 \\alpha^3 K_R^3 K \\left[ P_R \\right] \\left[ L \\right]^3\\\\\n\\left[ P_RL_4 \\right] = K_R^4 \\left[ P_R \\right] \\left[ L \\right]^4 &  & \\left[ P_TL_4 \\right] = \\alpha^4 K_R^4 K \\left[ P_R \\right] \\left[ L \\right]^4\\\\\n\\end{eqnarray*}\n\nThis is all of the information needed to find the fraction of all sites bound:\n\n\\begin{eqnarray*}\n Y & = & \\frac{\\textrm{Number of sites bound by ligand}}{\\textrm{Total number of binding sites}} = \\frac{\\sum_{i=1}^4 i \\left[ \\textrm{P$_R$L}_i \\right] \\, + \\, \\sum_{i=1}^4 i  \\left[ \\textrm{P$_T$L}_i \\right]}{4 \\left(\\sum_{i=0}^4 \\left[ \\textrm{P$_R$L}_i \\right] \\, + \\, \\sum_{i=0}^4 \\left[ \\textrm{P$_T$L}_i \\right]\\right)}\\\\\n& = & \\frac{K_R \\left[ L \\right] \\left( 1 + 3 K_R\\left[ L \\right] + 3 K_R^2\\left[ L \\right]^2 + K_R^3\\left[ L \\right]^3\\right) + \\alpha K_R K \\left[ L \\right] \\left( 1 + 3 \\alpha K_R\\left[ L \\right] + 3 \\alpha^2 K_R^2\\left[ L \\right]^2 + \\alpha^3 K_R^3\\left[ L \\right]^3\\right)}{1 + 4 K_R\\left[ L \\right] + \\ldots +  K_R^4\\left[ L \\right]^4 + K\\left( 1 + 4 \\alpha K_R\\left[ L \\right] + \\ldots +  \\alpha^4 K_R^4\\left[ L \\right]^4 \\right)}\\\\\n& = & K_R \\left[ L \\right]  \\frac{\\left( 1 + K_R \\left[ L \\right]\\right)^3 + \\alpha K \\left( 1 + \\alpha K_R \\left[ L \\right]\\right)^3}{\\left( 1 + K_R \\left[ L \\right]\\right)^4 + K \\left( 1 + \\alpha K_R \\left[ L \\right]\\right)^4}\n\\end{eqnarray*}\nMore generally for $b$ binding sites:\n\\begin{eqnarray}\nY & = & K_R \\left[ L \\right]  \\frac{\\left( 1 + K_R \\left[ L \\right]\\right)^{b-1} + \\alpha K \\left( 1 + \\alpha K_R \\left[ L \\right]\\right)^{b-1}}{\\left( 1 + K_R \\left[ L \\right]\\right)^b + K \\left( 1 + \\alpha K_R \\left[ L \\right]\\right)^b}\n\\end{eqnarray}\nAlthough probably not obvious, this corresponds to a sigmoidal binding curve. What was the trick?\\\\\n\nYou can make a Hill plot of this function (numerically or, with even more gnarly algebra, through rearrangement) to get [INSERT GRAPHIC]. As you can see, the slope of the curve is one at both very large and very small concentrations of ligand. 
The y-intercepts are different for the two asymptotic lines, however: from before we know this must mean that the binding constants differ. In other words, the protein has sites that bind non-cooperatively (Hill coefficient $n=1$) but the protein's effective binding constant transitions from $K_T$ to $K_R$ as the concentration of ligand increases.\\\\\n\nThis model does a very good job of fitting the data and we may now feel we have a plausible explanation for the transition in affinity.\n\n\\section*{Koshland Model}\n...however we know that at least some of the conformational changes associated with oxygen binding really are induced by the presence of the ligand. For example, the histidine chain movement is caused by the change in coordination at the iron atom. It is not likely that all these changes really propagate from one subunit to all of the others or that the change is ``all-or-nothing.\" A more plausible model perhaps is that the nearest neighbors of a bound subunit change to a conformational state that is slightly more receptive to oxygen binding. The Koshland model -- which I'll introduce only philosophically here -- captures that intuition. The fit to the data is not, unfortunately, really any better upon accounting for differences in parameter number.\n\n\\section*{Another Mechanism: Tethering}\nWe have been trying to justify why the binding constants would be different using conformational changes in the protein. However sometimes the explanation is much simpler.\\\\\n\nAnother case where switch-like behavior is desirable is gene regulation, as you saw in section 1's discussion of the Lac operon. As a reminder this is a set of genes which are only required when lactose is present and are therefore normally repressed by the transcription factor LacI. LacI is a dimer-of-dimers: each tetramer is capable of binding to two binding sites called lac operators or lacO sites. There are three twenty-something basepair Lac operators in the roughly 4.6 Mbp \\textit{E. coli} genome and they are all within a few hundred basepairs of each other. For a LacI protein scanning the genome, the hard part is unquestionably finding one of the three Lac operators. However, once that ``first\" operator has been bound, the probability of binding a second operator substantially increases.\\\\\n\nThe reason is not due to a conformational change, but rather tethering. You can think of it like the cup-and-ball game. The other operators cannot be very far away from the first one that LacI found, since they are connected by a relatively short stretch of DNA. Eventually the random coil of DNA will bend such that LacI's other end encounters that site and binds it. 
So we have effectively changed the ``on\" rate at the second site by ensuring that other operators are close at hand.\n\n\\end{document}", "meta": {"hexsha": "4c0f6054ff438d431b0f0400189742e2b33b4af8", "size": 18449, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/Lecture 7 - Inhibition and Cooperativity/ls200 cooperativity lecture.tex", "max_stars_repo_name": "mewahl/intro-systems-biology", "max_stars_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-01-20T17:43:31.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-31T17:23:09.000Z", "max_issues_repo_path": "lectures/Lecture 7 - Inhibition and Cooperativity/ls200 cooperativity lecture.tex", "max_issues_repo_name": "mewahl/intro-systems-biology", "max_issues_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/Lecture 7 - Inhibition and Cooperativity/ls200 cooperativity lecture.tex", "max_forks_repo_name": "mewahl/intro-systems-biology", "max_forks_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-01-20T17:43:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-25T14:42:10.000Z", "avg_line_length": 106.6416184971, "max_line_length": 797, "alphanum_fraction": 0.711962708, "num_tokens": 5962, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019594, "lm_q2_score": 0.8056321913146127, "lm_q1q2_score": 0.5963083024793041}}
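A numerical companion to the Hill-plot fitting procedure described in the notes above (a sketch added for illustration, not part of the original notes; the data here are synthetic, generated from a Hill curve with arbitrarily chosen $n$ and $K$):\n\\begin{verbatim}\nimport numpy as np\n\nn_true, K_true = 2.8, 26.0     # arbitrary choices\nL = np.logspace(0.0, 2.0, 20)  # ligand concentrations\nY = L**n_true / (K_true**n_true + L**n_true)\n\n# Hill plot: log(Y/(1-Y)) = n log[L] - n log K is linear in log[L]\nn_fit, b = np.polyfit(np.log(L), np.log(Y / (1.0 - Y)), 1)\nK_fit = np.exp(-b / n_fit)     # back out K from the intercept -n log K\nprint(n_fit, K_fit)            # recovers n and K\n\\end{verbatim}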
{"text": "%auto-ignore\n\\providecommand{\\MainFolder}{..}\n\\documentclass[\\MainFolder/Text.tex]{subfiles}\n\\begin{document}\n\\section{Approximation using heat form}\\label{Sec:Hwe}\n\\allowdisplaybreaks\n\nGiven an oriented Riemannian manifold $(M,g)$, consider the \\emph{heat form}\n\\begin{equation}\\label{Eq:HK}\n\\KKer_t(x,y) = \\sum (-1)^{kn}e^{-\\lambda_i t} (\\star e_i)(x) \\wedge e_i(y),\n\\end{equation}\nwhere $(e_i)$ are eigenvectors of $\\Delta$ and $\\lambda_i$ the corresponding eigenvalues. It is equivalently the Schwartz form of the operator $\\exp(-\\Delta t)$ (see \\cite[Chapter~3]{Harris2004}) or a unique solution of the equation $\\Laplace \\KKer_t(x,y) = - \\frac{\\partial}{\\partial t}\\KKer_t(x,y)$ with $\\lim_{t\\to 0} \\KKer_t = \\DR_\\Id$, where $\\DR_\\Id$ is the Schwartz form of the identity (see \\cite{Hein2006}).\n%\n\\begin{Proposition}[Properties of the heat kernel]\\label{Prop:Heasd}\nLet $M$ be an oriented Riemannian manifold. The heat form $\\KKer_t(x,y)$ is smooth on $M\\times M \\times (0,\\infty)$ and satisfies\n$$ \\Dd \\KKer_t = 0, \\quad \\tau^*\\KKer_t = (-1)^n\\KKer_t\\quad\\text{and}\\quad \\frac{1}{2}\\Delta \\KKer_t = \\Diag_x \\KKer_t = - \\frac{\\partial }{\\partial t} \\KKer_t. $$\n\\end{Proposition}\n\\begin{proof}\nStraightforward computation and a nice combinatorial argument.\n\\end{proof}\n%\n%\\begin{proof}\n%Well-known.\n%%\\begin{description}\n%%\\item[E)]\n%%It suffices to prove it for $e^{-d(x,y)^2/t}$ by heat expansion.  Quantitively, i.e.\\ making no distinction between $x_i$'s, we have\n%%$$ \\partial^l e^{-r^2/t} = p_l(x,t) e^{-r^2/t}, $$\n%%where $p_l = (\\frac{x}{t} + \\partial_x)p_{l-1}$ with $p_0 = 1$, where \n%%$$ \\Abs{\\underbrace{\\frac{\\partial }{\\partial x_i} d^2(x,y)}_{\\sim x}} \\le C d(x,y), $$\n%%and hence in the estimate we can replace $x$ by $r$. For $t\\in (0,1)$ it holds $\\frac{1}{t^a} \\le \\frac{1}{t^b}$ iff $a\\le b$. In every term we have only\n%%$$ \\frac{r^a}{t^b} e^{-\\frac{r^2}{t}} = \\frac{1}{t^{b-\\frac{a}{2}}}(\\frac{r^2}{t})^{\\frac{a}{2}} e^{-\\frac{r^2}{t}} = \\frac{1}{t^{b-\\frac{a}{2}}}  \\underbrace{\\bigl[ 2^{\\frac{a}{2}}\\bigl(\\frac{r^2}{2t}\\bigr)^{\\frac{a}{2}} e^{-\\frac{r^2}{2t}} \\bigr]}_{=:g_a(x), y= \\frac{r^2}{2t}} e^{-\\frac{r^2}{2t}} $$\n%%for $a\\le b$. But $g_a(y), y\\ge 0$ is a bounded function. We have $p_l$ polynoms in $\\frac{r}{t}$. By the recursive formulas we get that the terms $\\frac{1}{t^{l/2}}$, resp. $\\frac{r}{t^{(l+1)/2}}$ for $l$ even resp. $l$ odd have biggest $b-a/2$ equal to $l/2$. Indeed, both operations $x/t \\cdot$ and $\\partial_x$ increase $a-b/2$ by maximally $1/2$. We start with $a = 0$, $b=0$ for $l=0$ and we proceed by induction that the maximum of $a-b/2$ is $l/2$ in $p_l$. \\qedhere\n%%\\end{description}\n%\\end{proof}\n%\n%We define\n%$$ \\begin{aligned} \n%\\GKer_t & = \\int_t^\\infty \\bigl(\\KKer_t - \\HKer\\bigr) \\\\ \n%\\Prpg_t & = \\int_t^\\infty \\CoDd\\KKer_t \n%\\end{aligned} $$\n%and denote $\\GKer = \\GKer_0$, $\\Prpg = \\Prpg_0$.\n%\n%\\begin{Lemma}[Integrals dependent on parameter] \\label{Lem:IntPar}\n%Let $I$, $J\\subset \\R$ be intervals and $f(a,u) : I \\times J \\rightarrow \\R$. 
Suppose that:\n%\\begin{enumerate}\n% \\item $I$ is open\n% \\item For all $a\\in I$ is $f(a,\\cdot)$ measurable in $J$.\n% \\item For almost all $u\\in J$ is $f(\\cdot,u)$ differentiable in $I$\n% \\item There is a $g\\in L^1(J)$ such that $\\Abs{\\frac{\\partial}{\\partial a} f(a,u)} \\le g(u)$ for almost all $u\\in J$ and all $a\\in I$\n% \\item There is an $a_0\\in I$ such that $f(a_0,\\cdot) \\in L^1(J)$  \n%\\end{enumerate}\n%Then $F(a) \\coloneqq \\int_J f(a,u) \\Diff{u}$ is finite and \n%$$ F'(a) = \\int_J \\frac{\\partial }{\\partial a} f(a,u) \\Diff{u} $$\n%\\end{Lemma}\n\\begin{Proposition}[Approximation using heat form]\\label{Prop:HeatKerFormulas}\nLet $M$ be an oriented Riemannian manifold and $\\KKer_t(x,y)$ the heat form.\nFor all $(t,x,y)\\in \\bigl([0,\\infty)\\times M \\times M\\bigr)\\backslash \\{0\\}\\times \\Diag =: D(\\KKer)$, define\n\\begin{equation}\\label{Eq:HeatKerApprox}\n\\begin{aligned} \n\\GKer_t(x,y) &\\coloneqq \\int_t^\\infty \\KKer_\\tau(x,y)\\Diff{\\tau}\\quad\\text{and}\\\\\n\\Prpg_t(x,y) &\\coloneqq (-1)^{n+1}\\int_t^\\infty (\\Id\\COtimes\\CoDd_y) \\KKer_\\tau(x,y)\\Diff{\\tau}.\n\\end{aligned}\n\\end{equation}\nThen:\n\\begin{ClaimList}\n\\item The forms $\\GKer_t$ and $\\Prpg_t$ are smooth on $D(\\KKer)$, the point-wise limits $\\GKer'$ and $\\Prpg'$ as $t\\to 0$ exist, and it holds $\\GKer_t \\darrow[t]\\GKer'$ and $\\Prpg_t\\darrow[t]\\Prpg'$ as $t\\to 0$ (uniform convergence) in $C^\\infty_{\\text{loc}}(M\\times M\\backslash\\Diag)$.\n\\item On $D(\\KKer)$, the following relations hold:\n\\begin{align*}\n\\Dd \\GKer_t &= 0 & \\Prpg_t &= (-1)^{n+1}\\frac{1}{2} \\CoDd\\GKer_t \\\\\n\\Laplace \\GKer_t &= \\KKer_t - \\HKer & \\Dd \\Prpg_t &= (-1)^n(\\HKer - \\KKer_t) \\\\\n\\tau^*\\GKer_t &=(-1)^n\\GKer_t  & \\tau^* \\Prpg_t &= (-1)^n \\Prpg_t.\n\\end{align*}\nIt follows that $\\GKer' = \\GKer$ is the (Laplace) Green form and $\\Prpg' = \\StdPrpg$ the standard Hodge propagator. \n\\end{ClaimList}\n\\end{Proposition}\n\n\\begin{proof}\nThe formal computation is clear. An honest proof uses the standard heat kernel estimates.\n\\ToDo[caption={Say more},noline]{Say more about the proofs!}\n%The relations follow from\n%$$ \\Dd \\KKer_t = 0\\quad\\text{and}\\quad (\\CoDd_x\\COtimes\\Id)\\KKer_t = (\\Id\\COtimes\\CoDd_y) = \\frac{1}{2}\\CoDd\\KKer_t. $$\n%Use properties of the heat kernel,... Also see Harris, Heine,.. We have that $\\CoDd = \\CoDd_x \\COtimes \\Id + \\Id \\COtimes \\CoDd_y$, it holds $(\\CoDd_x \\COtimes \\Id)\\tau^* = \\Id \\COtimes \\CoDd_y$.\n\\end{proof}\n\n\\begin{Proposition}[$\\StdPrpg$ is codifferential of $\\GKer$]\\label{Prop:StdCodifInt}\nLet $M$ be an oriented Riemannian manifold, and let $\\GKer\\in \\DR^n(M\\times M\\backslash\\Diag)$ be the Green form. 
Then the standard Hodge propagator $\\StdPrpg$ satisfies\n\\begin{equation}\\label{Eq:FormForPUsingG}\n\\StdPrpg(x,y)= (-1)^{n+1}(\\Id\\otimes \\CoDd_y)\\GKer(x,y),\n\\end{equation}\nwhere $\\Id\\otimes \\CoDd_y: \\DR^\\bullet(M\\times M) \\rightarrow \\DR^{\\bullet-1}(M \\times M)$ is the differential operator defined in local coordinates by commuting $\\CoDd$ over the first factor with the Koszul sign and applying it to the second factor.\n\\end{Proposition}\n\\begin{proof}\nAs for the signs, $(-1)^n$ comes from $\\TOp \\GOp \\omega(y) = (-1)^{nT}\\int_x(\\Id \\otimes \\TOp_y)\\GKer(x,y)\\omega(x)$ with $T=\\CoDd$ and $-1$ from $\\StdHtp = - \\CoDd\\GOp$.\nThe rest can be proven using the heat kernel approximation and standard heat kernel estimates.\nThere is another method using the asymptotic expansion of $\\GKer$, which was shown to the author by Prof.~Dr.~Christian~B\u00e4r.\n\\ToDo[caption={Say more},noline]{Say more about the proofs!}\n\\end{proof}\n%We have the following.\n%\n%\\begin{Proposition}[Asymptotic expansion of $\\KKer_t$ and $\\GKer$]\n%\\end{Proposition}\n%\n%\\begin{itemize}\n% \\item Asymptotic expansion of \n% \\item \n%\\end{itemize}\n\n%\n%Does $\\GKer$ extend to the blow-up? \n%\n%Can I use this to show that $\\CoDd \\GKer$ does too extend to blow-up?\n\n%\n%\\begin{Remark}[Operators not extending to blow-up]\n%Under the change $x=u+r\\omega$, $y=u-r\\omega$ and $\\tilde{f}(r,\\omega,u) = f(x,y)$, we compute the following\n%\\begin{align*}\n%\\frac{\\partial \\tilde{f}}{\\partial x^i} & = \\frac{1}{2} \\omega_i \\frac{\\partial \\tilde{f}}{\\partial r} + \\frac{1}{2r}\\sum_{j=1}^{n+1}(\\delta_{ij} - \\omega_i\\omega_j)\\frac{\\partial\\tilde{f}}{\\partial \\omega^j} + \\frac{1}{2}\\frac{\\partial\\tilde{f}}{\\partial u^i} \\\\\n%\\frac{\\partial \\tilde{f}}{\\partial y^i} & = -\\frac{1}{2} \\omega_i \\frac{\\partial \\tilde{f}}{\\partial r} - \\frac{1}{2r}\\sum_{j=1}^{n+1}(\\delta_{ij} - \\omega_i\\omega_j)\\frac{\\partial\\tilde{f}}{\\partial \\omega^j} + \\frac{1}{2}\\frac{\\partial\\tilde{f}}{\\partial u^i} \n%\\end{align*}\n%Because of the middle $\\frac{1}{r}$ these do not always descend. We also compute\n%\\end{Remark}\n\\end{document}\n", "meta": {"hexsha": "195cfc66da2f851df3ac1ccc1e7c5411c9ef74b7", "size": 7623, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Subfiles/GrKer_GrHeat.tex", "max_stars_repo_name": "p135246/phd-thesis", "max_stars_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Subfiles/GrKer_GrHeat.tex", "max_issues_repo_name": "p135246/phd-thesis", "max_issues_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Subfiles/GrKer_GrHeat.tex", "max_forks_repo_name": "p135246/phd-thesis", "max_forks_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.9756097561, "max_line_length": 476, "alphanum_fraction": 0.6599763872, "num_tokens": 2917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5963082915172969}}
{"text": "% Adapted from https://www.overleaf.com/latex/templates/problem-set-template/bdwzvbkxyjfg\n\n\\documentclass[11pt]{article}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[T1]{fontenc}\n\\usepackage{kpfonts}\n\\usepackage{graphicx} % for figures\n\\usepackage{grffile}\n\\usepackage{amsmath}  % for extended math markup\n\\usepackage{amssymb}\n\\usepackage{bm}\n\\usepackage{xfrac}\n\\usepackage{enumerate}\n\\usepackage{float}\n\\usepackage{multirow}\n\\usepackage[margin=1cm]{caption}\n\\usepackage[bookmarks=false]{hyperref} % for URL embedding\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\newcommand{\\Course}{CSE515T: Bayesian Methods in Machine Learning}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n% these are common math formatting commands that aren't defined by default\n\\newcommand{\\union}{\\cup}\n\\newcommand{\\isect}{\\cap}\n\\newcommand{\\ceil}[1]{\\ensuremath \\left\\lceil #1 \\right\\rceil}\n\\newcommand{\\floor}[1]{\\ensuremath \\left\\lfloor #1 \\right\\rfloor}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\norm}[1]{\\ensuremath \\mid\\mid #1 \\mid\\mid}\n\\newcommand{\\mc}[1]{\\mathcal{#1}}\n\\newcommand{\\data}{\\mc{D}}\n\n\\DeclareMathOperator*{\\argmin}{\\arg\\min}\n\\DeclareMathOperator*{\\argmax}{\\arg\\max}\n\n\\newenvironment{problem}[2][Problem]\n{\\ifnum\\value{page}>1 \\newpage\\else\\fi\\noindent{\\bfseries #1}  {\\bfseries #2.}}\n{\\mbox{}\\\\\\vspace{-.5em}\\noindent\\rule{\\textwidth}{0.4pt}\\hfil\\vspace{1em}}\n\n\\numberwithin{equation}{section}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{document}\n\\title{Project Report}\n\\author{Alexis Park, Jonathan Chen, Kevin Xie \\\\ \\Course}\n\n\\maketitle\n\n\\section{Data visualization}\n\nThe Branin unction is defined as follows:\n\n\\begin{equation}\n  f(\\bm{x}) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\\cos(x_1) + s\n  \\label{eq:Branin}\n\\end{equation}\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Data Visualization/trueBraninHeatmap.png}\n  \\caption{Heatmap of True Branin function values with domain $\\mc{X} = [-5, 10] \\times [0, 15]$  with 1000 values per dimension.}\n  \\label{fig:true_Branin_heatmap}\n\\end{figure}\n\nFrom figure \\ref{fig:true_Branin_heatmap}, we can see that the function's\nvalues fluctuate in a somewhat sinusoidal manner, with a large minimal\n``trench'' spanning diagonally across the center of the domain with steadily\nincreasing regions along the trench's sides. Therefore the function does not\nappear to be stationary.\n\nIn order to make the function more stationary, we tried a variety of\ntransformations. By analyzing the equation, we noted that the sinusoidal\nfluctuations can be attributed to both the added cosine term and the 4th\norder polynomial term. We thus attempted to pass the data through a\ntransformation that combined an inverse cosine function ($\\arccos$) with the\ncumulative density function of the normal probability distribution\n($\\Phi(x)$). However, taking the $\\log$ of the data gave us the most\nstationary result as shown in figure \\ref{fig:log_Branin_heatmap}.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Data Visualization/Log_Branin_Heatmap.png}\n  \\caption{Heatmap of log Branin function values with domain $\\mc{X} = [-5,\n  10] \\times [0, 15]$ with 1000 values per dimension. 
The plot shows that the\n  behavior of the Branin function is more uniform throughout the domain when\n  compared to the original function's behavior.}\n  \\label{fig:log_Branin_heatmap}\n\\end{figure}\n\nNext, we plotted a kernel density estimate (KDE) of LDA and SVM benchmarks\n(Fig. \\ref{fig:kde-lda-svm}A and \\ref{fig:kde-lda-svm}C). By analyzing the\nresultant probability density graphs, we identified that the KDE of the LDA\nvalues shows a log-normal-like distribution. Also, both estimates have\nrelatively similar behavior but on significantly different scales.\n\nSince the LDA KDE shows a log-normal-like distribution, the log of the LDA\nvalues should be approximately normally distributed, which indeed gave better\nbehavior (Fig. \\ref{fig:kde-lda-svm}B). As we expected, the (log) LDA KDE is\nmore spread out across the domain and less concentrated around the\npeak. However, taking the log of the SVM values made the distribution less\nnormal, as shown in figure \\ref{fig:kde-lda-svm}D.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/kde_lda_svm.eps}\n  \\caption{Kernel density estimates for LDA (green, A and B) and SVM (purple, C and D) datasets, along with the estimates for their logs.}\n  \\label{fig:kde-lda-svm}\n\\end{figure}\n\n\\section{Model fitting}\n\\subsection*{Model fitting with Branin function}\nFor model fitting, we used 32 Sobol sequence Branin function training points\nand fit the data to a Gaussian Process (GP) model with a constant mean\n(initial value 0) and a squared exponential covariance (initial length scale\nand output scale of 1). After optimization by maximizing the\nmarginal likelihood, we obtained the following values for the\nhyperparameters:\n\\begin{itemize}\n  \\item\n    mean: 137.19\n  \\item\n    length scale: 4.00\n  \\item\n    output scale: 114.05\n\\end{itemize}\nAlthough we assumed that there is no noise in the data, the exact Gaussian likelihood is parameterized and the optimization procedure also fit its value (1e-3). This is fine because the number is small, but also because this increases the numerical stability of the GP.\n\nBased on figure \\ref{fig:gp_post_mean_heatmap}, both the mean and the length\nscale values are reasonable. The range of the Branin function extends from\napproximately 0 to 350, with more mass below the approximate \"midpoint\" value\nof 175. Thus, an optimized function mean of 137.19 is also reasonable. The\noverall slowly oscillating behavior of the Branin function also appears to\nhave a \"periodicity\" of approximately six units; thus, a length scale of\n4.00 is acceptable. The oscillations themselves have some combination of\namplitudes that range from small towards the center of the function's domain\nto very large towards the edges; thus, an output scale of\n114.05 is acceptable.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Model Fitting/GP_Posterior_Mean.png}\n  \\caption{Heatmap of GP posterior mean}\n  \\label{fig:gp_post_mean_heatmap}\n\\end{figure}\n\nComparing the predicted GP posterior mean to the true Branin function values,\nfigure \\ref{fig:gp_postmean_and_trueval} shows that the training points lie in\nthe darkest spots on the heatmap, where the predicted and\ntrue values are almost identical. 
Similarly, as the predictive mean's\nlocation departs from the areas where the training values are known, the absolute\ndifferences increase.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Model Fitting/GP_Posterior_Mean_vs_True_Branin.png}\n  \\caption{Heatmap of GP posterior mean and True Branin values}\n  \\label{fig:gp_postmean_and_trueval}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Model Fitting/GP_Posterior_s2_With_Training_Points.png}\n  \\caption{Heatmap of the GP posterior standard deviation}\n  \\label{fig:gp_post_std}\n\\end{figure}\nAs shown in figure \\ref{fig:gp_post_std}, as you approach the training data points, the standard deviation approaches zero; areas far from the training points therefore have significantly higher uncertainty. Furthermore, the scale of the standard deviation extends from 0 to 25. This behavior is expected: the GP model should be certain near the training points (where it knows the underlying function passes through), and more uncertain as it gets farther from these points. The scale of this uncertainty is similarly reasonable. As previously mentioned, the Branin function has both small and large variations. Thus, the GP must return a relatively high standard deviation at the locations where it is uncertain in order to properly accommodate the large range of possible fluctuations.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Model Fitting/Z_Score_KDE.png}\n  \\caption{Kernel density estimate of the z-scores of the residuals between the GP posterior mean and true values.}\n  \\label{fig:zscore_kde}\n\\end{figure}\nBased on figure \\ref{fig:zscore_kde}, the KDE of the Z-score distribution follows an approximately standard normal distribution with more concentrated density at the peak.\n\n\\subsection*{Model fitting with log transformation of Branin function}\nWe repeated model fitting using a log transformation of the Branin function. After maximizing the marginal likelihood under the same conditions as the previous fitting attempt, we obtained the following hyperparameter values:\n\\begin{itemize}\n  \\item\n    mean: 3.62\n  \\item\n    length scale: 3.32\n  \\item\n    output scale: 1.63\n\\end{itemize}\nAgain, the optimization procedure also fit the likelihood, resulting in a hyperparameter value of 0.001.\n\nThe log Branin posterior mean ranges from approximately 0.5 to 6 as shown in figure \\ref{fig:gp_post_mean_log}, with more area above the midpoint value of 3.25. Thus, a mean hyperparameter slightly above the \"midpoint\" matches our expectation. The range of the function also supports an output scale of 1.63, which indicates a relatively \"flat\" function, as expected. Finally, the oscillatory behavior of the log Branin function appears to be slightly less frequent; thus, a length scale of 3.32 is reasonable. 
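\n\nFor reference, the GP predictive posterior used throughout this section can be written in a few lines of NumPy (a minimal sketch, not the implementation actually used for the experiments; treating the fitted output scale as the kernel variance and the likelihood value as the noise variance are assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef se_kernel(A, B, ell, sf2):\n    # squared exponential covariance\n    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)\n    return sf2 * np.exp(-0.5 * d2 / ell**2)\n\ndef gp_posterior(Xtr, ytr, Xte, mean, ell, sf2, sn2):\n    K = se_kernel(Xtr, Xtr, ell, sf2) + sn2 * np.eye(len(Xtr))\n    Ks = se_kernel(Xte, Xtr, ell, sf2)\n    alpha = np.linalg.solve(K, ytr - mean)\n    mu = mean + Ks @ alpha                    # posterior mean\n    var = sf2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)\n    return mu, np.sqrt(np.maximum(var, 0.0))  # mean and standard deviation\n\\end{verbatim}\nFor the log Branin model, this would be called with mean 3.62 and length scale 3.32, together with the fitted output scale and noise values.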
\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Log Fits/GP_Posterior_Mean_Log.png}\n  \\caption{Heatmap of GP posterior mean of log transformed Branin function values}\n  \\label{fig:gp_post_mean_log}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Log Fits/GP_Posterior_vs_Log_True.png}\n  \\caption{Heatmap of GP posterior mean and true log transformed Branin function values}\n  \\label{fig:gp_post_mean_true_log}\n\\end{figure}\nThe absolute difference between the predicted values and the true values is nearly zero as shown in the darkest areas of figure \\ref{fig:gp_post_mean_true_log}, as in figure \\ref{fig:gp_postmean_and_trueval}. Additionally, the values farther away from the training points have increasing absolute differences, including areas which show very high absolute differences.\n\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Log Fits/GP_Posterior_S2_and_Training_points.png}\n  \\caption{Heatmap of GP posterior standard deviation of log transformed Branin function values}\n  \\label{fig:gp_post_std_log}\n\\end{figure}\nAs shown in figure \\ref{fig:gp_post_std_log}, the standard deviation again approaches zero as the predicted values approach the training points. The scale of the standard deviation extends from 0 to approximately 0.7. As explained previously, this behavior is expected: the GP model should be more confident the closer it is to a training point. The application of the log function to the Branin model should also reduce the scale of the standard deviation, as seen. \n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{figures/Log Fits/Z_Score_KDE.png}\n  \\caption{Kernel density estimate of z-score of log transformed Branin function values}\n  \\label{fig:zscore_kde_log}\n\\end{figure}\nBased on figure \\ref{fig:zscore_kde_log}, the KDE of the Z-score distribution is similar to a \"focused\" Gaussian distribution.\n\nUsing a log transformation of the Branin function improves the marginal likelihood and the calibration: the absolute differences between predicted and true values are much lower in figure \\ref{fig:gp_post_mean_true_log} than in figure \\ref{fig:gp_postmean_and_trueval}, which indicates that the model trained on log-transformed data has a higher marginal likelihood.\n\n\\subsection*{Bayesian Information Criterion (BIC)}\nIn order to compare candidate models, we used the BIC\nto evaluate how well each choice fits a given dataset. To compute the BIC,\nwe found the values of the hyperparameters maximizing the (log) marginal\nlikelihood using \\ref{eq:hyperparameter-estimate}, and then applied \\ref{eq:bic},\nwhere $\\mid \\theta \\mid$ is the total number of hyperparameters and $\\mid\n\\data \\mid$ is the number of observations. We used three for $\\mid \\theta\n\\mid$ and 32 for $\\mid \\data \\mid$.\n\nFrom equation \\ref{eq:bic}, it can be seen that the first term should be\nminimized to prevent overfitting, while the second term should be maximized,\nsince it corresponds to the model evidence. 
Overall, lower BIC values imply\nbetter model fit.\n\n\begin{equation}\n  \hat \theta = \argmax_\theta \log p(y \mid X, \theta)\n  \label{eq:hyperparameter-estimate}\n\end{equation}\n\n\begin{equation}\n  \text{BIC} = \mid \theta \mid \log \mid \data \mid - 2 \log p(y \mid X, \hat \theta)\n  \label{eq:bic}\n\end{equation}\n\n\begin{table}[H]\n  \centering\n  \begin{tabular}{| l | l |}\n    \hline\n    Data       & BIC score \\\n    \hline\n    Branin     & 304.3118  \\\n    \hline\n    log Branin & 61.4710   \\\n    \hline\n  \end{tabular}\n  \caption{BIC score of data from model fitting (Branin and (log) Branin)}\n  \label{tab:basic_bic}\n\end{table}\n\nNow considering BIC as a function of the choice of mean and covariance functions $(\mu, K)$, we used MeanConst and four different covariance functions: Squared Exponential (SE), Rational Quadratic (RQ), sum of SE and RQ, and product of SE and RQ. Table \ref{tab:bic_score} shows the best model and BIC scores we found for each function.\n\n\begin{table}[H]\n  \centering\n  \begin{tabular}{| l | l | l |}\n    \hline\n    Data       & Model                & BIC score \\\n    \hline\n    Branin     & Rational Quadratic   & 299.367   \\\n    \hline\n    log Branin & Squared Exponential  & \textbf{61.471}    \\\n    \hline\n    LDA        & Product of SE and RQ & 6.777e6 \\\n    \hline\n    log LDA    & Rational Quadratic   & \textbf{17.298}    \\\n    \hline\n    SVM        & Product of SE and RQ & \textbf{-138.894}  \\\n    \hline\n    log SVM    & Rational Quadratic   & -77.547   \\\n    \hline\n  \end{tabular}\n  \caption{Computed BIC scores and best model for different functions and their log transformations. The best value for each function is shown in bold.}\n  \label{tab:bic_score}\n\end{table}\n\n\section{Bayesian Optimization}\n\nThe best-fitting models were selected from the previous experiments. We then used the Expected Improvement (EI) acquisition function, defined as:\n\begin{equation}\n  a_{EI}(\bm{x}) = (f' - \mu(\bm{x}))\Phi(f';\mu(\bm{x}),K(\bm{x},\bm{x})) + K(\bm{x},\bm{x})\mathcal{N}(f';\mu(\bm{x}),K(\bm{x},\bm{x})),\n  \label{eq:Expected Improvement}\n\end{equation}\nwhere $\Phi$ is the cumulative distribution function of the Normal distribution, and $f'$ is the minimum value of the current observations.\nNote that the EI acquisition function combines both exploitation and exploration by using the posterior mean and known minimum value, and the posterior standard deviation, respectively. \n\nUsing the previously selected 32 points, a GP model was fit using the aforementioned optimized settings and the following heatmaps of the posterior mean and standard deviation of the log Branin function were created:\n\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=0.9\textwidth]{figures/Optimization Plots/EI_FMu_Log.png}\n  \caption{Predictive posterior mean of the log Branin function, calculated using a previously optimized GP model trained on 32 Sobol sequence points. Warmer colors indicate higher values. 
Note the predicted minimum areas in dark blue at the top left and bottom right corners of the plot.}\n  \label{fig:ei-fmu-log}\n\end{figure}\n\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=0.9\textwidth]{figures/Optimization Plots/EI_S_Log.png}\n  \caption{Predictive posterior standard deviation of the log Branin function, calculated using a previously optimized GP model trained on 32 Sobol sequence points. Warmer colors indicate higher deviations and imply greater uncertainty in the prediction.}\n  \label{fig:ei-fs-log}\n\end{figure}\n\nThe EI function values were then calculated using the posteriors, and the point $(7.658, 0)$ was identified as the optimal point to observe next. \nA heatmap of the EI values is shown below with the optimal observation location marked as a black point:\n\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=0.9\textwidth]{figures/Optimization Plots/EI_Log_Optimal_point.png}\n  \caption{Expected Improvement acquisition values of the optimized GP log Branin model trained on 32 Sobol sequence points. Warmer colors indicate higher expected improvement. The maximum expected improvement is marked as a black point and denotes the optimal location for the next observation.}\n  \label{fig:ei-optimalPoint-log}\n\end{figure}\n\nAnalyzing figures \ref{fig:ei-fmu-log}, \ref{fig:ei-fs-log}, and \ref{fig:ei-optimalPoint-log} above, we argue that the proposed point is a sensible choice for the next observation. From the posterior mean, it can be seen that the point $(7.658, 0)$ is within a region of predicted minimal values. Furthermore, from the posterior standard deviation, it can be seen that the point is also within a region of higher uncertainty and therefore reduced predictive confidence. Thus, it is plausible that the EI acquisition function would seek to test this area and this specific point in order to identify a possible global minimum and improve confidence in an area of uncertainty. \n\nThe following Bayesian active learning experiment was then applied independently to the log Branin, log LDA, and SVM functions:\n\begin{enumerate}\n  \item Five initial observations were randomly selected, constituting the initial dataset $\mathcal{D}$.\n  \item For the log Branin function only, a dense, evenly spaced grid of 250,000 points was generated within the domain of the function for use in calculating the GP predictive posterior.\n  \item A GP model using the respective aforementioned optimized settings was fit to $\mathcal{D}$.\n  \item A new point \emph{x} was found using the EI acquisition function and the GP predictive posterior.\n  \item The function value \emph{f(x)} was calculated and the point (\emph{x}, \emph{f(x)}) was added to $\mathcal{D}$.\n  \item Steps 3-5 were repeated 30 times, resulting in a final dataset $\mathcal{D}$ of 35 points.\n\end{enumerate}\n\nThe performance of each of the above experiments was evaluated using the \"gap\" measure, defined for minimization as:\n\begin{equation}\n  \text{gap} = \dfrac{f(\text{best found}) - f(\text{best initial})}{f(\text{minimum}) - f(\text{best initial})}\n  \label{eq:gap}\n\end{equation}\nThe gap can be interpreted as a measure of the accuracy or performance of the optimization; as the gap value approaches one, the attempted optimization approaches the true global minimum. \n\nThe gaps for the log Branin, log LDA, and SVM models were calculated to be 1.00, 0.79, and 0.96, respectively. \nThis implies that EI successfully found a global minimum of the log Branin function, but missed the global minimum of the log LDA and SVM functions.
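\n\nTo make these quantities concrete, here is a minimal sketch of the EI acquisition of eq. \ref{eq:Expected Improvement} and the gap of eq. \ref{eq:gap} (ours, independent of whichever toolbox was actually used; all names are our own), given posterior mean and variance arrays over a candidate grid:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef expected_improvement(mu, var, f_best):\n    # (f' - mu) Phi(f'; mu, K) + K N(f'; mu, K), which simplifies to\n    # (f' - mu) Phi(z) + sigma phi(z) with z = (f' - mu) / sigma.\n    sigma = np.sqrt(var)\n    z = (f_best - mu) / sigma\n    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)\n\ndef gap(best_found, best_initial, true_minimum):\n    # Equals 1 exactly when the true global minimum has been reached.\n    return (best_found - best_initial) / (true_minimum - best_initial)\n\n# Picking the next point from a candidate grid, assuming the fitted GP\n# supplies posterior arrays mu and var over `candidates`:\n#   x_next = candidates[np.argmax(expected_improvement(mu, var, y.min()))]\n\end{verbatim}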
\n\nThe above Bayesian active learning experiment was then modified as follows:\n\begin{enumerate}\n  \item A seed for the random number generator (RNG) was chosen.\n  \item Five initial observations were randomly selected, constituting the initial dataset $\mathcal{D}$.\n  \item For the log Branin function only, a dense, evenly spaced grid of 250,000 points was generated within the domain of the function for use in calculating the GP predictive posterior.\n  \item A GP model using the respective aforementioned optimized settings was fit to $\mathcal{D}$.\n  \item A new point \emph{x} was found using the EI acquisition function and the GP predictive posterior.\n  \item The function value \emph{f(x)} was calculated and the point (\emph{x}, \emph{f(x)}) was added to $\mathcal{D}$.\n  \item Steps 4-6 were repeated 150 times, resulting in a final dataset $\mathcal{D}$ of 155 points.\n\end{enumerate}\nThe above procedure was then duplicated with the Random Search (RS) acquisition function, which selects a point at random, used in place of EI.\nThese experiments were repeated 20 times with 20 different RNG seeds to create different random initializations for the log Branin, log LDA, and SVM functions, resulting in a total of 60 experiments. \nThe learning curves of these 60 trials using only the first 30 new observations for both EI and RS acquisition functions are shown in the figures below:\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=\textwidth]{figures/optimization_log_Branin.eps}\n  \caption{Learning curves for 20 log Branin function experiments. The EI acquisition function is shown in blue, and the RS acquisition function is shown in orange. Observe that EI consistently outperforms RS in both rate of gap increase and final gap convergence at 30 observations.}\n  \label{fig:ei-v-rand-branin}\n\end{figure}\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=\textwidth]{figures/optimization_log_lda.eps}\n  \caption{Learning curves for 20 log LDA function experiments. The EI acquisition function is shown in blue, and the RS acquisition function is shown in orange. Note that EI and RS both consistently converge towards the same gap value at 30 observations.}\n  \label{fig:ei-v-rand-lda}\n\end{figure}\n\begin{figure}[H]\n  \centering\n  \includegraphics[width=\textwidth]{figures/optimization_norm_svm.eps}\n  \caption{Learning curves for 20 SVM function experiments. The EI acquisition function is shown in blue, and the RS acquisition function is shown in orange. It can be seen that EI performs at least as well as RS in the majority of experiments, but can get stuck in a local minimum, as seen in Seed 18.}\n  \label{fig:ei-v-rand-svm}\n\end{figure}\nFrom figures \ref{fig:ei-v-rand-branin} and \ref{fig:ei-v-rand-svm} above, it can be seen that EI significantly outperforms RS on both the log Branin and the SVM functions.\nIt is evident that in these experiments, EI often converges to its final gap value in significantly fewer observations than RS does, and that its final gap values at 30 observations are generally higher than those of RS. This behavior was expected, as EI combines both exploitation and exploration together to identify locations with higher expected benefit, whereas RS only applies (random) exploration. 
The addition of exploitation and non-random exploration in EI allows it to actually navigate towards optima, whereas RS can only approach optima through chance. \nHowever, there are cases where EI and RS have similar performance: from figure \ref{fig:ei-v-rand-lda}, it can be seen that the two acquisition functions perform similarly to each other in the majority of experiments at 30 observations. \nWe believe that EI's weaker performance on the log LDA function can be attributed to a large number of local minima; EI can often become stuck in local minima, thereby decreasing its performance, whereas RS is purely exploratory and therefore unhindered by the presence of any local optima.\n\nThe difference in performance between EI and RS can be further seen when one considers their mean gaps at 30, 60, 90, 120 and 150 observations, as shown in the table below:\n\begin{table}[H]\n  \centering\n  \begin{tabular}{| c | c | c | c | c | c | c |}\n    \hline\n    \# Observations & EI (Branin) & RS (Branin) & EI (LDA) & RS (LDA) & EI (SVM) & RS (SVM)\\\ \n    \hline\n    30 & 0.9072 & 0.3409 & 0.4877 & 0.7122 & 0.9106 & 0.3728 \\\ \n    \hline\n    60 & 0.9532 & 0.6086 & 0.9611 & 0.7944 & 0.9415 & 0.5676 \\\ \n    \hline\n    90 & 0.9532 & 0.7124 & 0.9753 & 0.9174 & 0.9592 & 0.9190 \\\ \n    \hline\n    120 & 0.9532 & 0.9096 & 0.9753 & 0.9633 & 0.9018 & 0.9716 \\\ \n    \hline\n    150 & 0.9532 & 0.9202 & 0.9473 & 0.9016 & 0.9018 & 0.9737 \\\ \n    \hline\n   \end{tabular}\n   \caption{Mean Gaps of EI and RS on the log Branin, log LDA, and SVM functions at 30, 60, 90, 120 and 150 observations. Note that EI reaches a higher gap than RS at 150 observations on the log Branin and log LDA functions, while on the SVM function RS eventually edges ahead. Furthermore, for the log Branin and SVM functions, EI achieves high gap values significantly faster than RS does.}\n   \label{tab:ei-rs-observations}\n\end{table}\n\nIt is evident that EI outperforms RS on the log Branin and log LDA functions at 150 observations; on the SVM function, EI leads until roughly 90 observations, after which RS overtakes it. \nAdditionally, EI generally requires fewer observations to achieve higher gap values, implying a faster learning rate than RS. We also note that EI likely spends considerable time early on navigating local minima in the log LDA function, \nas evidenced by its noticeably worse performance at 30 observations when compared to that of RS. However, EI for log LDA rapidly improves following this local-minima exploration and quickly outperforms RS. It is also important to note that on average, no method was able to find the global minimum. This result was expected for RS, which would need to be extraordinarily lucky to find the global minimum, but was somewhat surprising for EI. It appears that in the limit, EI may become entrenched in local minima and thus converges towards a suboptimal result. \n\nWe more rigorously compare the performance of EI and RS on all three functions using a paired t-test and evaluate the null hypothesis that the gap values attained for EI and RS come from the same distribution (i.e., that the performance of EI and RS is equivalent). 
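\nConcretely, the comparison can be carried out by pairing the per-seed gap values; a sketch of the computation (the variable names and placeholder values are ours):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ttest_rel\n\n# One final gap value per RNG seed, paired by seed so that each\n# comparison shares its random initialization (placeholder values).\ngaps_ei = np.array([0.91, 0.88, 0.95, 0.97, 0.90])\ngaps_rs = np.array([0.34, 0.41, 0.29, 0.52, 0.37])\n\nt_stat, p_value = ttest_rel(gaps_ei, gaps_rs)\n# A small p-value rejects the null hypothesis that EI and RS\n# achieve the same mean gap.\n\end{verbatim}\n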
We begin the test using only 30 observations for both EI and RS and calculate the corresponding p-values, as seen in the table below:\n\n\begin{table}[H]\n  \centering\n  \begin{tabular}{| c | c |}\n    \hline\n    Function & P-Value  \\\n    \hline\n    Log Branin   & 5.39e-4 \\\n    \hline\n    Log LDA      & 0.38  \\\n    \hline\n    SVM      & 0.039   \\\n    \hline\n  \end{tabular}\n  \caption{P-Values comparing the similarity of performance between EI and RS on the log Branin, log LDA and SVM functions. Higher p-values indicate more similar performance.}\n  \label{tab:ei-rs-pvals}\n\end{table}\n\nFrom the table above, it is quite obvious that at 30 observations, EI and RS have significantly different performance on the log Branin and SVM functions; however, the two acquisition functions have similar performance on the LDA function. Such behavior was supported visually in figures \ref{fig:ei-v-rand-branin}, \ref{fig:ei-v-rand-lda} and \ref{fig:ei-v-rand-svm}.\n\nTo identify the number of observations required for RS and EI to exhibit similar performance, we then started RS at one observation and incremented the number of observations for RS, holding EI at 30 observations, until the p-value exceeded 0.05. The resultant number of observations before RS attains possibly similar performance to EI is shown below:\n\n\begin{table}[H]\n  \centering\n  \begin{tabular}{| c | c | c |}\n    \hline\n    Function & Observations Required & P-Value \\\n    \hline\n    Log Branin & 59 & 0.0624 \\\n    \hline\n    Log LDA & 10 & 0.0599 \\\n    \hline\n    SVM & 3 & 0.0951 \\\n    \hline\n  \end{tabular}\n  \caption{Number of observations required for RS before it achieves possibly similar performance to EI with 30 observations for the log Branin, log LDA, and SVM functions.}\n  \label{tab:ei-rs-pvals-to-equivalence}\n\end{table}\n\nLooking at tables \ref{tab:ei-rs-pvals} and \ref{tab:ei-rs-pvals-to-equivalence}, one can see that in the log Branin function, EI significantly outperforms RS. Specifically, 59 points must be observed before RS and EI show possibly similar performance.\nHowever, it is also evident that RS likely performs as well as EI on the log LDA function. This was expected, given their similar learning curves above in figure \ref{fig:ei-v-rand-lda}.\nPerhaps more interestingly, the performance of RS varies significantly relative to that of EI for the SVM function: with fewer observations, the two acquisition methods, on average, demonstrate similar performance. However, as the number of observations increases, the two methods begin to differ before gradually converging with each other again. Another iterative paired t-test experiment was performed, and it was determined that such oscillatory behavior continued until 44 new observations were attained, after which the performance of EI and RS remained consistently comparable.\n\n\section{Bonus}\nFor the bonus section, we implemented two more acquisition functions: max\nvariance and lower confidence bound (LCB). We also tried optimizing hyperparameters with the addition of each observation to the data set, dubbed ``online optimization'' (OO) for fun, and injected more exploration by, with some probability, randomly sampling the next point instead of using the point suggested by the acquisition function.\n\n\subsection*{More Acquisition Functions}\nWe chose to implement max variance (eq. 
\ref{eq:max-variance}) as one of our\nacquisition functions because it seemed like an intuitive way to explore a\nfunction's values and shape. If our objective is to build a model of a target\nfunction by actively searching the parameter space, then it makes sense to\nuse our current belief about $f$ (the posterior) to pick the next point to\nsample. In the case of max variance, we would sample where our model is least\ncertain. \emph{A priori}, we wouldn't expect max variance to be optimal for\nidentifying the minima of $f$ since the objective it is optimizing only\nincentivizes exploration and not exploitation.\n\begin{equation}\n  \alpha_{\max \text{Var}}(x) = K(x, x)\n  \label{eq:max-variance}\n\end{equation}\n\nWe also selected the minimization formulation of the upper confidence bound acquisition function, dubbed lower confidence bound (eq. \ref{eq:lcb}). It trades off exploitation and exploration directly, mixing naive greedy exploitation (minimizing the posterior mean) with the max variance term from before.\n\begin{equation}\n  \alpha_{LCB}(x; \beta) = \beta \sigma(x) - \mu(x), \text{ where } \sigma(x) = \sqrt{K(x, x)}\n  \label{eq:lcb}\n\end{equation}\nLooking at its formulation, we can see that it is the difference between the\nstandard deviation and the mean function value of the posterior. It is also\nparameterized by $\beta$, which allows us to control the amount of\nexploration we want to do. We can see that the function will be greatest when\nthe mean function value is low and when the variance is high. LCB effectively\naddresses the limitation of the max variance acquisition function by\nextending it to do exploitation as well.\n\nWe evaluated the performance of each acquisition function on a 300-by-300\ngrid from the Branin function using a procedure similar to the one used in\nSection 3: starting with five randomly selected points, we fit a GP, computed\nacquisition values, used the maximum of those values to choose our next\npoint, and computed the gap value of the new dataset. We did ten trials with\n100 iterations of active observation for each function and averaged the gap\nvalues to get those shown in Figure \ref{fig:bonus-gaps} as solid lines.\n\nOn their own, max variance and LCB are slower to find the global minimum than\nexpected improvement. Although not shown in the figure, max variance\nperformed similarly to random search. This is what we expected since max\nvariance does not do any exploitation. LCB performs relatively better, but is\nslower to move out of local minima. This is likely because of its\nnaive-greedy nature and is somewhat expected.\n\n\begin{figure}[h]\n  \centering\n  \includegraphics[width=0.9\textwidth]{figures/bonus-gaps-by-acqfn.eps}\n  \caption{A busy plot showing gap as a function of the number of\n  observations made.}\n  \label{fig:bonus-gaps}\n\end{figure}\n\n\subsection*{Forcing More Exploration}\nIt is possible for our acquisition functions to get stuck in local minima for too long. To mitigate this, we force each acquisition function to do more exploration by using a randomly sampled point with probability $p = 0.1$ instead of the one suggested by the acquisition function. From Figure \ref{fig:bonus-gaps}, we can see that occasionally picking random samples did not help our models converge. It is possible that this is true only for the Branin function, but we can't say from this experiment alone.
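\n\nA minimal sketch of these two pieces, the LCB rule of eq. \ref{eq:lcb} and the forced-exploration step (all names are ours, and $\beta = 2$ is an arbitrary illustrative choice):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef lcb(mu, var, beta=2.0):\n    # beta * sigma(x) - mu(x); larger values are better candidates\n    # for minimization. beta = 2.0 is an illustrative default.\n    return beta * np.sqrt(var) - mu\n\ndef choose_next(candidates, acq_values, p=0.1):\n    # With probability p, ignore the acquisition function and\n    # explore: return a uniformly random candidate.\n    if rng.random() < p:\n        return candidates[rng.integers(len(candidates))]\n    return candidates[np.argmax(acq_values)]\n\end{verbatim}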
\n\n\subsection*{Online Optimization}\nWe also tried optimizing the model hyperparameters after each iteration. This\nis representative of how Bayesian optimization is performed in practice. It\nmakes sense to do this for acquisition functions that rely on the model posterior. If we are evaluating an expectation, like in EI, then it makes sense to use the posterior of the most-likely model. For LCB, we use the mean and variance of the posterior directly, which means that its performance will improve if we have better estimates of both from a more likely model. We didn't expect max variance to improve because its lack of exploitation still constrains its performance even with optimized hyperparameters. These hypotheses are confirmed in Figure \ref{fig:bonus-gaps}. The gap values with online optimization converge to one faster for both EI and LCB. Here we see that LCB actually converges faster than EI with online optimization.\n\end{document}\n", "meta": {"hexsha": "bb1ce08e2ffc76ad64bb8b5ef24d6ed0e643bfdb", "size": 33019, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report.tex", "max_stars_repo_name": "TheBayesians/Report", "max_stars_repo_head_hexsha": "def7bff00a82d17a0c80177c41f42c9791b57e77", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report.tex", "max_issues_repo_name": "TheBayesians/Report", "max_issues_repo_head_hexsha": "def7bff00a82d17a0c80177c41f42c9791b57e77", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report.tex", "max_forks_repo_name": "TheBayesians/Report", "max_forks_repo_head_hexsha": "def7bff00a82d17a0c80177c41f42c9791b57e77", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.4980769231, "max_line_length": 804, "alphanum_fraction": 0.7654380811, "num_tokens": 8364, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.8056321866478979, "lm_q1q2_score": 0.5963082897902056}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{Sheet 5}\n\n\\subsection{Alternative derivation of the contracted Bianchi identities}\n\nThe form of the Riemann tensor in a LIF was derived in section \\ref{sec:LIF-form-Riemann-tensor}.\n\n\\subsubsection{Bianchi identities of the Riemann tensor}\n\nIn the LIF, given that \\(  R_{\\mu \\nu \\rho \\sigma} = g_{\\mu [\\sigma |, \\nu |\\rho ]} - g_{\\nu  [\\sigma |, \\mu | \\rho ]} \\), we want to show that \\(R_{\\mu \\nu [\\rho \\sigma ; \\alpha  ]} = R_{\\mu \\nu [\\rho \\sigma , \\alpha ]} =0\\).\n\nThis is equivalent to the formulation of the Bianchi identities given in the problem sheet, because of the antisymmetry of the Riemann tensor in its last two indices: there are six terms in the antisymmetrization of \\(R_{\\mu \\nu [\\rho \\sigma , \\alpha ]}\\), but they are pairwise equal: the term \\(R_{\\mu \\nu \\rho \\sigma , \\alpha }\\) is equal to \\(- R_{\\mu \\nu \\sigma \\rho , \\alpha }\\) by antisymmetry, and these are exactly the pairs of terms which appear in the three-index antisymmetrization.\n\nWhat we need to do is to take the derivative of the Riemann tensor in the LIF: \n%\n\\begin{align}\n    R_{\\mu \\nu \\rho \\sigma , \\alpha } = \n    g_{\\mu [\\sigma |, \\nu |\\rho ] \\alpha } - g_{\\nu  [\\sigma |, \\mu | \\rho ] \\alpha }\n\\,,\n\\end{align}\n%\nand permute the three indices \\(\\rho \\sigma  \\alpha \\) cyclically. Writing all the terms out we get (up to a factor \\(2\\), which is irrelevant since we will find that all the terms cancel and everything is equal to 0):  \n%\n\\begin{subequations}\n\\begin{align}\n    \\begin{split}\n        +&g_{\\mu \\sigma , \\nu \\rho \\alpha } - g_{\\nu  \\sigma , \\mu  \\rho  \\alpha }\n        - g_{\\mu \\rho  , \\nu \\sigma \\alpha } + g_{\\nu  \\rho  , \\mu  \\sigma  \\alpha } + \\\\\n        +&g_{\\mu \\alpha , \\nu \\sigma  \\rho  } - g_{\\nu  \\alpha  , \\mu  \\sigma \\rho  }\n        - g_{\\mu \\sigma  , \\nu \\alpha \\rho  } + g_{\\nu  \\sigma   , \\mu  \\alpha \\rho  } + \\\\\n        + &g_{\\mu \\rho  , \\nu \\alpha \\sigma  } - g_{\\nu  \\rho  , \\mu  \\alpha \\sigma  }\n        - g_{\\mu \\alpha   , \\nu \\rho \\sigma  } + g_{\\nu  \\alpha   , \\mu \\rho \\sigma  } \n        \\,,\n    \\end{split}\n\\end{align}\n\\end{subequations}\n%\nso we have \\(6 \\) terms with a + sign, and \\(6\\) with a \\(-\\) sign: they cancel pairwise, since the partial  derivatives commute.\n\n\\subsubsection{Contracting the identities}\n\nWe start by contracting \\(2 R_{\\mu \\nu [ \\rho \\sigma ; \\alpha ]}\\) with \\(g^{\\mu \\rho }\\): we get \n%\n\\begin{align}\n    0=\n  g^{\\mu \\rho } \\qty(R_{\\mu \\nu \\rho \\sigma ; \\alpha } + R_{\\mu \\nu \\alpha \\rho ; \\sigma } + R_{\\mu \\nu \\sigma \\alpha ; \\rho }) = R_{\\nu \\sigma ; \\alpha } - R_{\\nu \\alpha ; \\sigma } + g^{\\mu \\rho } R_{\\mu \\nu \\sigma \\alpha ; \\rho }\n\\,,\n\\end{align}\n%\nwhere, in the second term, we used the antisymmetry of the first two indices of the Riemann tensor in order to get the form which allowed us to use the definition of the Ricci tensor \\(R_{\\mu \\nu } = g^{\\rho \\sigma } R_{\\rho \\mu \\sigma \\nu }\\).\nAlso, we brought the metric inside the covariant derivatives since it is covariantly constant.\nThen, we contract the expression we found with \\(g^{\\nu \\sigma }\\): \n%\n\\begin{subequations}\n\\begin{align} \n  0=\n  g^{\\nu \\sigma } \\qty(R_{\\nu \\sigma ; \\alpha } - R_{\\nu \\alpha ; \\sigma } + g^{\\mu \\rho } R_{\\mu \\nu \\sigma \\alpha ; \\rho 
})\n  &= R_{;\alpha  } - \tensor{R}{^{\sigma }_{\alpha; \sigma  }} - g^{\mu \rho} R_{\mu \alpha ; \rho }  \\\n  &= R_{;\alpha  } - \tensor{R}{^{\sigma }_{\alpha; \sigma  }} - \tensor{R}{^{\sigma }_{\alpha ; \sigma }}\n\,,\n\end{align}\n\end{subequations}\n%\nwhere we used the same properties as before and the definition of the scalar curvature \(R = g^{\mu \nu }R_{\mu \nu }\). So, we have the contracted Bianchi identities \(0 = R_{;\alpha } - 2 \tensor{R}{^{\sigma }_{\alpha ; \sigma }}\).\nRaising an index with the inverse metric \(g^{\alpha \beta }\) and relabeling \(\sigma \) to \(\alpha \) in the second term (after having raised the index), these can be written as \n%\n\begin{align}\n  \nabla_{\alpha } \qty(R g^{\alpha \beta } - 2R^{\alpha \beta }) = 0\n\,.\n\end{align}\n%\n\n\subsection{Weak-field geodesic equation}\n\nThis was already treated in section \ref{sec:weak-field-gravitational-potential}.\n\n\subsection{Hyperbolic plane geodesics}\n\nOur coordinates are \((x, y)\), and our metric is \(g_{ij } = y^{-2} \delta_{ij }\), with inverse \(g^{ij} = y^2 \delta^{ij}\).\n\nSo, we can calculate the Christoffel symbols as: \n%\n\begin{align}\n  \Gamma^{i}_{jk} = \frac{1}{2} g^{im} \qty(g_{mj,k} + g_{m k, j } - g_{jk,m})\n\,.\n\end{align}\n%\nThe calculation is simplified by the fact that the only nonvanishing derivatives of the metric are \(g_{00,1}=g_{11,1} = -2 y^{-3} \). If the index \(i\) in \(\Gamma^{i}_{jk}\) is zero, then the last term in the sum vanishes since it corresponds to a derivative with respect to \(x\).\nWith these we get: \n%\n\begin{subequations}\n\begin{align}\n  \Gamma^{0}_{00} &=  \frac{1}{2} y^2 \qty(2g_{00,0})= 0 \\\n  \Gamma^{0}_{01} &=  \frac{1}{2} y^2 \qty(g_{00,1})= - \frac{1}{y}  \\\n  \Gamma^{0}_{11} &=  \frac{1}{2} y^2 \qty(g_{01,1} + g_{01,1} - g_{11,0})= 0  \\\n  \Gamma^{1}_{00} &=  \frac{1}{2} y^2 \qty(- g_{00,1})= \frac{1}{y}  \\\n  \Gamma^{1}_{01} &=  \frac{1}{2} y^2 \qty(g_{10,0} + g_{11,0} - g_{01,1})= 0   \\\n  \Gamma^{1}_{11} &=  \frac{1}{2} y^2 \qty(g_{11,1}+g_{11,1}-g_{11,1})= -\frac{1}{y}  \n\,,\n\end{align}\n\end{subequations}\n%\nand the geodesic equation \(u^{\mu } \nabla_{\mu } u^{\nu }= 0 \) is written with respect to these.\n\n\subsubsection{Vertical lines}\n\nFirst of all we want to parametrize these vertical lines: we choose our parameter so that the length of the velocity vector is everywhere equal to one.\n\nSince the lines are vertical, we want the position \(x^{i}\) with respect to the parameter \(s\) to look something like \(x^{i}(s) = (x_0, y(s))\).\n\nWe use the arclength parameter: \n%\n\begin{align}\n  s = \int \sqrt{g_{ij} u^{i} u^{j}} \dd{\lambda }\n\,,\n\end{align}\n%\nwhere \(u^{i} = \dv*{x^{i}}{\lambda }\) and \(\lambda \) is an arbitrary parameter.\n\nWe can rewrite this integral with respect to the Euclidean norm \(\norm{u}_E^2 = \delta_{ij} u^{i} u^{j}\): we get \n%\n\begin{align}\n  s = \int \frac{1}{y} \norm{u}_E \dd{\lambda }\n\,,\n\end{align}\n%\nso we can see that we get \(s = \int \dd{\lambda }\), or \(s = \lambda \), iff \(\norm{u}_E = y\): so, let us drop the distinction between \(s\) and \(\lambda \) and apply this condition.\nThe velocity vector is \n%\n\begin{align}\n  u^{i} = \dv{x^{i}}{s} = \qty(0, \dv{y}{s})\n\,,\n\end{align}\n%\nwhose Euclidean norm is just (the absolute value of) \(\dv*{y}{s}\). 
\nSo, we must impose the condition \\(y = \\dv*{y}{s}\\), which can be solved by separation of variables to yield \\(s = \\log y\\), or \\(y = e^{s}\\). \n\nSo our parametrization for the curve is \\(s \\rightarrow x^{i} = (x_0, e^{s})\\), the velocity is \\(u^{i} = (0, e^{s})\\) and the derivative of velocity with respect to \\(s\\) is again \n%\n\\begin{align}\n  \\dv[]{u^{i}}{s} = (0, e^{s})\n\\,.\n\\end{align}\n\nNow we can plug these into our geodesic equation, which is simplified by the fact that \\(u^{0} = 0\\), therefore there is only one relevant term in the Christoffel sum: \n%\n\\begin{align}\n  \\dv[]{ u^{i}}{s} + \\Gamma^{i}_{11} u^{1} u^{1} = 0\n\\,,\n\\end{align}\n%\nwhose components are \n%\n\\begin{align}\n  \\underbrace{\\dv{ u^{0}}{s}}_{0} + \\underbrace{\\Gamma^{0}_{11}}_{0} y^2 = 0\n\\,\n\\end{align}\n%\nand \n%\n\\begin{subequations}\n  \\begin{align}\n    \\dv{ u^{1}}{s} + \\Gamma^{1}_{11} u^{1} u^{1} &= 0  \\\\\n    y + \\qty(- \\frac{1}{y}) y^2 &=0\n    \\,,\n  \\end{align}\n\\end{subequations}\n%\nwhich are identities, therefore vertical lines are indeed geodesics in this hyperbolic plane.\n\n\\subsubsection{More solutions}\n\nWe have the Killing vector field \\(\\xi = (1,0)\\) corresponding to the symmetry with respect to translations along the \\(x\\) axis: then, the quantity \\(\\vec{\\xi} \\cdot \\vec{u}\\) is conserved: so \\(\\xi^{\\mu } u^{\\nu } g_{\\mu \\nu }=\\dot{x} g_{00} = \\dot{x} y^{-2}\\) is constant along the trajectory (here we denote derivatives with respect to the parameter \\(s\\) by a dot).\n\nWe also know that, if we parametrize with arclength, \\(\\norm{u}_{E} / y \\equiv 1 \\), which means \n%\n\\begin{align}\n  y = \\norm{u}_{E} = \\sqrt{\\dot{x}^2 + \\dot{y}^2} = \\dot{x} \\sqrt{\\qty(y^{\\prime })^2 + 1}\n\\,,\n\\end{align}\n%\nwhere \\(y^{\\prime } = \\dv*{y}{x} = \\dot{y} / \\dot{x}\\).\n\nThen, the conserved quantity can be rewritten as \n%\n\\begin{align}\n  \\dot{x} y^{-2} = y^{-2} \\frac{y}{\\sqrt{y^{\\prime 2} +1}}\n  = \\frac{1}{y \\sqrt{y^{\\prime 2} + 1}}\n\\,,\n\\end{align}\n%\nwhich the exercise sheet calls \\(A\\) but I will call it \\(1/R\\): so \\(R = y \\sqrt{y^{\\prime 2}+1}\\). We can square this to write the conservation equation \n%\n\\begin{align}\n  y^2 \\qty(y^{\\prime 2} + 1 ) = R^2 = \\const\n\\,.\n\\end{align}\n%\n\nAlternatively, once one has found the equations of motion\n%\n\\begin{subequations}\n\\begin{align}\n  y \\ddot{y} &= \\dot{y}^2 - \\dot{x}^2  \\\\\n  y \\ddot{x} &= 2 \\dot{x} \\dot{y} \n\\,,\n\\end{align}\n\\end{subequations}\n\nto prove the constancy of \\(\\dot{x}\n y^{-2}\\) one can observe that \n%\n\\begin{align}\n  y^2 \\dv{}{s} \\qty(\\frac{\\dot{x}}{y^2})\n  = \\ddot{x} - 2 \\frac{\\dot{x} \\dot{y}}{y}\n\\,,\n\\end{align}\n%\n\nso \\(\\dot{x} y^{-2}\\) being a constant is equivalent to the second equation of motion holding.\n\n% \\subsubsection{Old, ugly solution}\n\n% Before finding the aforementioned derivation of the first integral, here is what I wrote. 
\n\n% I find the solution which follows quite ugly and unjustified, I have yet to find a geometric justification for these manipulations, which were derived by reverse-engineering the first integral.\n% Anyhow, these manipulations work\\dots\n\n% We insert the Christoffel symbols into the geodesic equations and multiplying through by \\(y\\) we get the following system, in which we denote differentiation with respect to \\(s\\) by a dot: \n% %\n% \\begin{subequations}\n% \\begin{align}\n%   y \\ddot{y} &= \\dot{y}^2 - \\dot{x}^2  \\\\\n%   y \\ddot{x} &= 2 \\dot{x} \\dot{y} \n% \\,.\n% \\end{align}\n% \\end{subequations}\n\n% Now, we will denote the derivative of \\(y \\) with respect to \\(x\\) as \n% %\n% \\begin{align}\n%   y' = \\dv[]{y}{x} = \\frac{\\dot{y}}{\\dot{x}}\n% \\,.\n% \\end{align}\n\n% We want to find a first integral. We start with the definition of \\(y'\\): \n% %\n% \\begin{align}\n%   \\dot{y} - y^{\\prime } \\dot{x} = 0\n% \\,,\n% \\end{align}\n% %\n% and add and subtract the quantity \\(\\dot{y} y^{\\prime 2}\\): \n% %\n% \\begin{align}\n%   \\dot{y} + (\\dot{y} y^{\\prime } - \\dot{x}) y^{\\prime } - \\dot{y} y^{\\prime 2} =0\n% \\,,\n% \\end{align}\n% %\n% which can also be written as \n% %\n% \\begin{align} \\label{eq:hyperbolic-plane-geodesics-integral-step1}\n%   \\dot{y} + (\\dot{y} y^{\\prime } - \\dot{x}) y^{\\prime } + \\dot{y} y^{\\prime 2} - 2 \\dot{y} y^{\\prime 2} = 0\n% \\,.\n% \\end{align}\n\n% Now, we want to substitute in our equations of motion: we will need them in the form \\(y \\ddot{y} / \\dot{x} = \\dot{y} y^{\\prime } - \\dot{x}\\) and \\(y \\ddot{x} / \\dot{x} = 2 \\dot{y}\\).\n% We recognize the LHS of both of these in \\eqref{eq:hyperbolic-plane-geodesics-integral-step1} and plug them in: it becomes\n% %\n% \\begin{align} \\label{eq:hyperbolic-plane-geodesics-integral-step2}\n%   \\dot{y} + \\frac{y \\ddot{y}}{\\dot{x}} y^{\\prime } + \\dot{y} y^{\\prime 2} - \\frac{y \\ddot{x}}{\\dot{x}} y^{\\prime 2} = 0\n% \\,.\n% \\end{align}\n\n% We can recognize that the second derivative terms look similar to the derivative of \\(y'\\), which is:\n% %\n% \\begin{align}\n%   \\dv[]{}{s} y^{\\prime } \n%   = \\frac{\\ddot{y}}{\\dot{x}} - \\frac{\\dot{y}}{\\dot{x}\n%   } \\ddot{x}\n%   = \\frac{\\ddot{y}}{\\dot{x}} - y^{\\prime } \\ddot{x}\n% \\,,\n% \\end{align}\n% %\n% so equation \\eqref{eq:hyperbolic-plane-geodesics-integral-step2} becomes:\n% %\n% \\begin{align}\n%   \\dot{y} + \\dot{y} y^{\\prime 2} + y y^{\\prime } \\dv{y'}{s}=0\n% \\,,\n% \\end{align}\n% %\n% which is \\(1/(2y)\\) times the derivative of \\(y^2 (y^{\\prime 2} + 1)\\), which comes out to be: \n% %\n% \\begin{align}\n%   \\frac{1}{2y}\n%   \\dv{}{s} \\qty(y^2 (y^{\\prime 2} + 1))\n%   = \\frac{1}{2y}\\qty(2 y \\dot{y} \n%   (y^{\\prime 2} + 1)\n%   + y^2 2 y^{\\prime } \\dv{}{s} y^{\\prime })\n% \\,,\n% \\end{align}\n% %\n% exactly what we had before.\n\n% So, \\(y^2 (y^{\\prime 2} + 1) = \\const\\), and we can call this constant \\(R^2\\) (which is \\(1/A^2\\) in the homework notation: I find this notation to be more suggestive).\n\n\\subsubsection{Solving the equation}\n\nThe equation \\(y^2 (y^{\\prime 2}+1) = R^2\\) can be rewritten as \n%\n\\begin{align}\n  \\dv{y}{x} = \\pm \\sqrt{\\frac{R^2}{y^2} - 1}\n\\,,\n\\end{align}\n%\nwhich means that if we fix \\(R\\) the value of the derivative of the curve can only attain two opposite values. 
Do note that we can go from one branch to the other with the transformation \(x \rightarrow -x\), a mirror symmetry around some center.\nThen, we can just make the gauge choice \(y^{\prime }>0\) and integrate by separation of variables: \n%\n\begin{align}\n  \int \frac{y \dd{y}}{\sqrt{R^2-y^2}} = \int \dd{x}\n\,,\n\end{align}\n%\nwhich can be solved with the substitution \( y = R \sin(\theta )\), with \(\dd{y} = R \cos(\theta ) \dd{\theta }\). Inserting this, we find: \n%\n\begin{align}\n  x - x_0  = \int \frac{R \sin\theta R \cos\theta  \dd{\theta }}{R \sqrt{1 - \sin^2 \theta }} \n  = R \int \sin \theta  \dd{\theta } = - R \cos \theta \n\,,\n\end{align}\n%\nwhich can be squared to find \((x- x_0 )^2 = R^2 (1 - \sin^2 \theta ) = R^2 - y^2\), or \n%\n\begin{align}\n  R^2 = (x-x_0  )^2 + y^2\n\,,\n\end{align}\n%\nthe equation of a circle.\nWe can then confirm that this solution also holds in the other branch, up to a change of integration constant, by rewriting \((x- x_0 )^2 = ((-x) - x_1 )^2\): this is solved by \(x_0 = - x_1 \). \n\nGeometrically, we are looking at circles with centers on the \(x\) axis and radius \(R\). If \(y\) and \(R\) are fixed, then there are only two possible circles, which can be found by connecting a certain point at height \(y\) to the \(x\) axis with a segment of length \(R\).\nOne can then see that the right halves of the circles follow from the left halves by symmetry.\n\n\end{document}", "meta": {"hexsha": "04255dcade4a46f3a33804ebe30ca2ec74d24682", "size": 13868, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_first_semester/gr_exercises/sheet5.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_first_semester/gr_exercises/sheet5.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_first_semester/gr_exercises/sheet5.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 39.9654178674, "max_line_length": 494, "alphanum_fraction": 0.6119123161, "num_tokens": 5006, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390162, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5963082846089314}}
{"text": "\\chapter{Affine schemes: the sheaf}\n\\label{ch:spec_sheaf}\n\nWe now complete our definition of $X = \\Spec A$ by\ndefining the sheaf $\\OO_X$ on it, making it into a ringed space.\nThis is done quickly in the first section.\n\nHowever, we will then spend the next several chapters\ntrying to convince the reader to \\emph{forget}\nthe definition we gave, in practice.\nThis is because practically,\nthe sections of the sheaves are best computed by not using\nthe definition directly, but by using some other results.\n\nAlong the way we'll develop some related theory:\nin computing the stalks we'll find out the definition of a local ring,\nand in computing the sections we'll find out about distinguished open sets.\n\nA reminder once again: \\Cref{ch:spec_examples} has\nmany more concrete examples.\nIt's not a bad idea to look through there for more examples\nif anything in this chapter trips you up.\n\n\\section{A useless definition of the structure sheaf}\n\\prototype{Still $\\CC[x_1, \\dots, x_n] / I$.}\n\nWe have now endowed $\\Spec A$ with the Zariski topology,\nand so all that remains is to put a sheaf $\\OO_{\\Spec A}$ on it.\nTo do this we want a notion of ``regular functions'' as before.\n\nThis is easy to do since we have localizations on hand.\n\\begin{definition}\n\tFirst, let $\\SF$ be the pre-sheaf of ``globally rational'' functions:\n\ti.e.\\ we define $\\SF(U)$ to be the localization\n\t\\[\n\t\t\\SF(U) = \\left\\{\n\t\t\t\\frac fg \\mid f, g \\in A\n\t\t\t\\text{ and } g(\\kp) \\neq 0 \\; \\forall \\kp \\in U\n\t\t\\right\\}\n\t\t= \\left(A \\setminus \\bigcup_{\\kp \\in U} \\kp \\right)\\inv A.\n\t\\]\n\tWe now define the structure sheaf on $\\Spec A$.\n\tIt is\n\t\\[ \\OO_{\\Spec A} = \\SF\\sh \\]\n\ti.e.\\ the sheafification of the $\\SF$ we just defined.\n\\end{definition}\n\\begin{exercise}\n\tCompare this with the definition for $\\OO_V$\n\twith $V$ a complex variety, and check that they essentially match.\n\\end{exercise}\nAnd thus, we have completed the transition to adulthood,\nwith a complete definition of the affine scheme.\n\nIf you really like compatible germs,\nyou can write out the definition:\n\\begin{definition}\n\tLet $A$ be a ring.\n\tThen $\\Spec A$ is made into a ringed space by setting\n\t\\[ \\OO_{\\Spec A}(U)\n\t\t= \\left\\{ (f_\\kp \\in A_\\kp)_{\\kp \\in U}\n\t\t\\text{ which are locally quotients} \\right\\}. 
\]\n\tThat is, it consists of sequences $(f_\kp)_{\kp \in U}$, with\n\teach $f_\kp \in A_\kp$, such that for every point $\kp$ there\n\tis an open neighborhood $U_\kp$ and elements $f,g \in A$ such that\n\t$f_\kq = \frac fg \in A_\kq$ for all $\kq \in U_\kp$.\n\end{definition}\n\nWe will now \textbf{basically forget about this definition},\nbecause we will never use it in practice.\nIn the next two sections, we will show you:\n\begin{itemize}\n\t\ii that the stalks $\OO_{\Spec A, \kp}$ are just $A_\kp$, and\n\t\ii that the sections $\OO_{\Spec A}(U)$\n\tcan be computed, for any open set $U$,\n\tby focusing only on the special case where $U = D(f)$\n\tis a distinguished open set.\n\end{itemize}\nThese two results will be good enough for all of our purposes,\nso we will be able to not use this definition.\n(Hence the lack of examples in this section.)\n\n\section{The value of distinguished open sets (or: how to actually compute sections)}\n\prototype{$D(x)$ in $\Spec \CC[x]$ is the punctured line.}\n\nWe will now really hammer in the importance of\nthe distinguished open sets.\nThe definition is analogous to before:\n\begin{definition}\n\tLet $f \in A$.\n\tThen $D(f)$ is the set of $\kp$ such that $f(\kp) \neq 0$,\n\ta \vocab{distinguished open set}.\n\end{definition}\nDistinguished open sets will have three absolutely crucial properties,\nwhich build on each other.\n\n\subsection{A basis of the Zariski topology}\nThe first is a topological observation:\n\begin{theorem}\n\t[Distinguished open sets form a base]\n\t\label{thm:distinguished_base}\n\tThe distinguished open sets $D(f)$\n\tform a basis for the Zariski topology:\n\tany open set $U$ is a union of distinguished open sets.\n\end{theorem}\n\begin{proof}\n\tLet $U$ be an open set;\n\tsuppose it is the complement of closed set $V(I)$.\n\tThen verify that \[ U = \bigcup_{f \in I} D(f). \qedhere \]\n\end{proof}\n\n\subsection{Sections are computable}\nThe second critical fact is that the sections\non distinguished open sets can be computed explicitly.\n\begin{theorem}\n\t[Sections of $D(f)$ are localizations away from $f$]\n\tLet $A$ be a ring and $f \in A$.\n\tThen \[ \OO_{\Spec A}(D(f)) \cong A[1/f]. \]\n\end{theorem}\n\begin{proof}\n\tOmitted, but similar to\n\t\Cref{thm:reg_func_distinguish_open}.\n\end{proof}\n\n\begin{example}\n\t[The punctured line is isomorphic to a hyperbola]\n\tThe ``hyperbola effect'' appears again:\n\t\[ \OO_{\Spec \CC[x]} (D(x))\n\t\t= \CC[x, x\inv]\n\t\t\cong \CC[x,y] / (xy-1). \]\n\end{example}
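\nThe same computation works just as well in arithmetic settings:\n\begin{example}\n\t[Inverting $2$]\n\tFor $A = \ZZ$ and $f = 2$, the theorem gives\n\t\[ \OO_{\Spec \ZZ}(D(2)) \cong \ZZ[1/2], \]\n\tthe rational numbers whose denominator is a power of $2$;\n\tthese are the ``functions'' on the set of primes other than $(2)$.\n\end{example}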
\\]\n\\end{example}\n\nOn a tangential note,\nwe had better also note somewhere that $\\Spec A = D(1)$\nis itself distinguished open, so the global sections can be recovered.\n\\begin{corollary}\n\t[$A$ is the ring of global sections]\n\tThe ring of global sections of $\\Spec A$ is $A$.\n\\end{corollary}\n\\begin{proof}\n\tBy previous theorem, $\\OO_{\\Spec A}(\\Spec A)\n\t= \\OO_{\\Spec A}(D(1)) = A[1/1] = A$.\n\\end{proof}\n\n\\subsection{They are affine}\n\\label{subsec:distinguished_open_affine}\nWe know $\\OO_X(D(f)) = A[1/f]$.\nIn fact, if you draw $\\Spec A[1/f]$,\nyou will find that it looks exactly like $D(f)$.\nSo the third final important fact is that\n$D(f)$ will actually be \\emph{isomorphic} to $\\Spec A[1/f]$\n(just like the line minus the origin is isomorphic to the hyperbola).\nWe can't make this precise yet,\nbecause we have not yet discussed morphisms of schemes,\nbut it will be handy later (though not right away).\n%However, you can already see this at the level of topological spaces;\n%see \\Cref{prob:homeomorphism}.\n%Since distinguished open sets form a base,\n%though, this means that open sets of affine schemes\n%are, at least locally, themselves affine schemes:\n%given any open set $U \\subseteq \\Spec A$, and point $\\kp \\in U$,\n%there is some open neighborhood $V \\ni p$ contained in $U$\n%which is itself affine.\n\n\\subsection{Classic example: the punctured plane}\n\\label{subsec:punctured_plane}\nWe now give the classical example of a computation which shows\nhow you can forget about sheafification,\nif you never liked it.\\footnote{This perspective is\n\tso useful that some sources, like Vakil \\cite[\\S4.1]{ref:vakil}\n\twill \\emph{define} $\\OO_{\\Spec A}$\n\tby requiring $\\OO_{\\Spec A}(D(f)) = A[1/f]$,\n\trather than use sheafification as we did.}\nThe idea is that:\n\\begin{moral}\n\tWe can compute any section $\\OO_X(U)$ in practice\n\tby using distinguished open sets and sheaf axioms.\n\\end{moral}\n\nLet $X = \\Spec \\CC[x,y]$,\nand consider the origin, i.e.\\ the point $\\km = (x,y)$.\nThis ideal is maximal, so it corresponds to a closed point,\nand we can consider the open set $U$\nconsisting of all the points other than $\\km$.\nWe wish to compute $\\OO_X(U)$.\n\n\\begin{center}\n\\begin{asy}\n\tgraph.xaxis(\"$\\mathcal{V}(y)$\", red);\n\tgraph.yaxis(\"$\\mathcal{V}(x)$\", red);\n\tfill(box( (-3,-3), (3,3) ), opacity(0.2)+lightcyan);\n\topendot(origin, blue+1.5);\n\tlabel(\"$\\mathfrak m = (x,y)$\", origin, dir(45), blue);\n\\end{asy}\n\\end{center}\n\nUnfortunately, $U$ is not distinguished open.\nBut, we can compute it anyways by writing $U = D(x) \\cup D(y)$:\nconveniently, $D(x) \\cap D(y) = D(xy)$.\nBy the sheaf axioms,\nwe have a pullback square\n\\begin{center}\n\\begin{tikzcd}\n\t\\OO_X(U) \\ar[r] \\ar[d] & \\OO_X(D(x)) = \\CC[x,y,x\\inv] \\ar[d] \\\\\n\t\\OO_X(D(x)) = \\CC[x,y,y\\inv]_{y} \\ar[r] & \\OO_X(D(xy)) = \\CC[x,y,x\\inv, y\\inv].\n\\end{tikzcd}\n\\end{center}\nIn other words, $\\OO_X(U)$ consists of pairs\n\\begin{align*}\n\tf &\\in \\CC[x,y,x\\inv] \\\\\n\tg &\\in \\CC[x,y,y\\inv]\n\\end{align*}\nwhich agree on the overlap:\n$f = g$ on $D(x) \\cap D(y)$.\nWell, we can describe\n$f$ as a polynomial with some $x$'s in the denominator, and\n$g$ as a polynomial with some $y$'s in the denominator.\nIf they match, the denominator is actually constant.\nPut crudely,\n\\[ \\CC[x,y,x\\inv] \\cap \\CC[x,y,y\\inv] = \\CC[x,y]. \\]\nIn conclusion,\n\\[ \\OO_X(U) = \\CC[x,y]. 
\]\nThat is, we get no additional functions.\n\n\n\n\section{The stalks of the structure sheaf}\n\prototype{The stalk of $\Spec \CC[x,y]$ at $\km = (x,y)$\nconsists of rational functions defined at the origin.}\n\nDon't worry, this one is easier than last section.\n\n\subsection{They are localizations}\n\begin{theorem}\n\t[Stalks of $\Spec A$ are $A_\kp$]\n\tLet $A$ be a ring and let $\kp \in \Spec A$.\n\tThen \[ \OO_{\Spec A, \kp} \cong A_\kp. \]\n\tIn particular $X$ is a locally ringed space.\n\end{theorem}\n\begin{proof}\n\tSince sheafification preserves stalks,\n\tit's enough to check it for $\SF$ the pre-sheaf\n\tof globally rational functions in our definition.\n\tThe proof is basically the same as \Cref{thm:stalks_affine_var}:\n\tthere is an obvious map $\SF_\kp \to A_\kp$ on germs by\n\t\[ \left(U, f/g \in \SF(U) \right)\n\t\t\mapsto f/g \in A_\kp . \]\n\t(Note the $f/g$ on the left lives in $\SF(U)$\n\tbut the one on the right lives in $A_\kp$).\n\tWe show injectivity and surjectivity:\n\t\begin{itemize}\n\t\t\ii Injective: suppose $(U_1, f_1 / g_1)$ and $(U_2, f_2 / g_2)$\n\t\tare two germs with $f_1/g_1 = f_2/g_2 \in A_\kp$.\n\t\tThis means $h(f_1 g_2 - f_2 g_1) = 0$ in $A$, for some $h \notin \kp$.\n\t\tThen both germs identify with\n\t\tthe germ $(U_1 \cap U_2 \cap D(h), f_1 / g_1)$.\n\t\t\ii Surjective: let $U = D(g)$. \qedhere\n\t\end{itemize}\n\end{proof}\n\n\begin{example}\n\t[Denominators not divisible by $x$]\n\tWe have seen this example so many times\n\tthat I will only write it in the new notation,\n\tand make no further comment:\n\tif $X = \Spec \CC[x]$ then\n\t\[ \OO_{X, (x)} = \CC[x]_{(x)}\n\t\t= \left\{  \frac fg \mid g(0) \ne 0 \right\}. \]\n\end{example}\n\begin{example}\n\t[Denominators not vanishing at the origin]\n\tLet $X = \Spec \CC[x,y]$ and let $\km = (x,y)$ be the origin.\n\tThen\n\t\[ \CC[x,y]_{(x,y)}\n\t\t= \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. \]\n\end{example}\n\nIf you want more examples,\ntake any of the ones from \Cref{sec:localize_prime_ideal},\nand try to think about what they mean geometrically.\n\n\subsection{Motivating local rings: germs should package values}\nLet's return to our well-worn example $X = \Spec \CC[x,y]$\nand consider $\km = (x,y)$ the origin.\nThe stalk was\n\[ \OO_{X, \km} = \CC[x,y]_{(x,y)}\n\t= \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. 
\]\nSo let's take some section like $f = \frac{1}{xy + 4}$,\nwhich is a section of $U = D(xy+4)$ (or some smaller open set,\nbut we'll just use this one for simplicity).\nWe also have $U \ni \km$, and so $f$ gives a germ at $\km$.\n\nOn the other hand, $f$ also has a value at $\km$:\nit is $f \pmod{\km} = \frac 14$.\nAnd in general, the ring of possible values of a section\nat the origin $\km$ is $\CC[x,y] / \km \cong \CC$.\n\nNow, you might recall that I pressed the point of view\nthat a germ might be thought of as an ``enriched value''.\nThen it makes sense that if you know the germ of a section $f$ at\na point $\km$ --- i.e., you know the ``enriched value'' ---\nthen you should be able to recover the value as well.\nWhat this means is that we ought to have some map\n\[ A_\km \to A/\km \]\nsending germs to their associated values.\n\nIndeed you can, and this leads us to\dots\n\n\section{Local rings and residue fields:\nlinking germs to values}\n\prototype{The residue field of $\Spec \CC[x,y]$ at $\km = (x,y)$\nis $\CC$.}\n\n\subsection{Localizations give local rings}\nThis notation is about to get really terrible, but bear with me.\n\begin{theorem}\n\t[Stalks are local rings]\n\t\label{thm:stalks_local_ring}\n\tLet $A$ be a ring and $\kp$ any prime ideal.\n\tThen the localization $A_\kp$ has exactly one maximal ideal,\n\tgiven explicitly by\n\t\[ \kp A_\kp =\n\t\t\left\{ \frac fg \mid f \in \kp,\; g \notin \kp \right\}. \]\n\end{theorem}\nThe ideal $\kp A_\kp$ thus captures the idea\nof ``germs vanishing at $\kp$''.\footnote{The notation $\kp A_\kp$ really means\nthe set of $f \cdot h$ where $f \in \kp$\n(viewed as a subset of $A_\kp$ by $f \mapsto \frac f1$) and $h \in A_{\kp}$.\nI personally find this more confusing than helpful,\nso I'm footnoting it.}\n\nProof in a moment;\nfor now let's introduce some words so we can\ngive our examples in the proper language.\n\begin{definition}\n\tA ring $R$ with exactly one maximal ideal $\km$\n\twill be called a \vocab{local ring}.\n\tThe \vocab{residue field} is the quotient $R / \km$.\n\end{definition}\n\begin{ques}\n\tAre fields local rings?\n\end{ques}\n\nThus what we find is that:\n\begin{moral}\n\tThe stalks consist of the possible enriched values (germs);\n\tthe residue field is the set of (un-enriched) values.\n\end{moral}\n\n\begin{example}\n\t[The stalk at the origin of {$\Spec \CC[x,y]$}]\n\tAgain set $A = \CC[x,y]$, $X = \Spec A$ and $\kp = (x,y)$\n\tso that $\OO_{X,\kp} = A_\kp$.\n\t(I switched to $\kp$ for the origin,\n\tto avoid confusion with the maximal ideal $\kp A_{\kp}$\n\tof the local ring $A_\kp$.)\n\tAs we said many times already,\n\t$A_{\kp}$ consists of rational functions whose denominators do not vanish at the origin,\n\tsuch as $f = \frac{1}{xy+4}$.\n\n\tWhat is the unique maximal ideal $\kp A_\kp$?\n\tAnswer: it consists of the rational functions\n\twhich \emph{vanish} at the origin:\n\tfor example, $\frac{x}{x^2+3y}$, or $\frac{3x+5y}{2}$,\n\tor $\frac{-xy}{4(xy+4)}$.\n\tIf we allow ourselves to mod out by such functions,\n\twe get the residue field $\CC$,\n\tand $f$ will have the value $\frac14$, since\n\t\[ \frac{1}{xy+4} -\n\t\t{\underbrace{\frac{-xy}{4(xy+4)}}_{\text{vanishes at origin}}}\n\t\t= \frac14. 
\]\n\n\tMore generally, suppose $f$ is any section of some open\n\tset containing $\kp$.\n\tLet $c \in \CC$ be the value $f(\kp)$, that is, $f \pmod \kp$.\n\tThen $f - c$ is going to be another section\n\twhich vanishes at the origin $\kp$,\n\tso as promised, $f \equiv c \pmod{\kp A_{\kp}}$.\n\end{example}\n\nOkay, we can write down a proof of the theorem now.\n\begin{proof}\n\t[Proof of \Cref{thm:stalks_local_ring}]\n\tOne may check that the set $I = \kp A_\kp$ is an ideal of $A_\kp$.\n\tMoreover, $1 \notin I$, so $I$ is proper.\n\n\tTo prove it is maximal and unique,\n\tit suffices to prove that any $f \in A_\kp$ with $f \notin I$\n\tis a \emph{unit} of $A_\kp$.\n\tThis will imply $I$ is maximal: there are no more non-units to add.\n\tIt will also imply $I$ is the only maximal ideal:\n\tbecause any proper ideal can't contain units, so is contained in $I$.\n\n\tThis is actually easy.\n\tAn element of $A_\kp$ not in $I$ must be $x = \frac fg$\n\tfor $f,g \in A$ and $f,g \notin \kp$.\n\tFor such an element, $x\inv = \frac gf$ lies in $A_\kp$ too,\n\tsince $f \notin \kp$.\n\tSo $x$ is a unit. End proof.\n\end{proof}\n\nEven more generally:\n\begin{moral}\n\tIf a sheaf $\SF$ consists of ``field-valued functions'',\n\tthe stalk $\SF_p$ probably has a maximal ideal\n\tconsisting of the germs vanishing at $p$.\n\end{moral}\n\n\begin{example}[Local rings in non-algebraic geometry sheaves]\nLet's go back to the example of $X = \RR$ and $\SF(U)$ the smooth functions,\nand consider the stalk $\SF_{p}$, where $p \in X$.\nDefine the ideal $\km_p$ to be the set of germs $(s,U)$ for which $s(p) = 0$.\n\nThen $\km_p$ is maximal: we have an exact sequence\n\[ 0 \to \km_p \to \SF_p \taking{(s,U) \mapsto s(p)} \RR \to 0 \]\nand so $\SF_p / \km_p \cong \RR$, which is a field.\n\nIt remains to check there are no other maximal ideals.\nNow note that if $s \notin \km_p$,\nthen $s$ is nonzero in some open neighborhood of $p$,\nand one can construct the function $1/s$ on it.\nSo \textbf{every element of $\SF_p \setminus \km_p$ is a unit};\nand again $\km_p$ is in fact the only maximal ideal!\n\nThus the stalks of each of the following types of sheaves\nare local rings, too.\n\begin{itemize}\n\t\ii Sheaves of continuous real/complex functions on a topological space\n\t\ii Sheaves of smooth functions on any manifold\n\t\ii etc.\n\end{itemize}\n\end{example}\n\n\subsection{Computing values: a convenient square}\nVery careful readers might have noticed something\na little uncomfortable in our extended example\nwith $A = \CC[x,y]$ and $\kp = (x,y)$ the origin.\nLet's consider $f = \frac{1}{xy+4}$.\nWe took $f \pmod{x,y}$ in the original ring $A$ in order\nto decide the value ``should'' be $\frac14$.\nHowever, all our calculations actually\ntook place not in the ring $A$, but instead in the ring $A_\kp$.\nDoes this cause issues?\n\nThankfully, no, nothing goes wrong, even in a general ring $A$.\n\n\begin{definition}\n\tWe let the quotient $A_\kp / \kp A_\kp$,\n\ti.e.\ the \vocab{residue field} of the stalk of $\Spec A$ at $\kp$,\n\tbe denoted by $\kappa(\kp)$.\n\end{definition}\n\nThe following result from commutative algebra\nshows that the order doesn't matter.\n\begin{theorem}\n\t[The germ-to-value square]\n\tLet $A$ be a ring and $\kp$ a prime ideal.\n\tThe following diagram commutes:\n\t\begin{center}\n\t\begin{tikzcd}\n\t\tA \ar[r, "\text{localize}"] \ar[d, "\bmod \kp"']\n\t\t\t& A_\kp \ar[d, "\bmod \kp"] \\\n\t\tA/\kp 
\\ar[r, \"\\Frac(-)\"] & \\kappa (\\kp)\n\t\\end{tikzcd}\n\t\\end{center}\n\tIn particular, $\\kappa(\\kp)$\n\tcan also be described as $\\Frac(A/\\kp)$.\n\\end{theorem}\nSo for example, if $A = \\CC[x,y]$ and $\\kp = (x,y)$,\nthen $A/\\kp = \\CC$ and $\\Frac(A_\\kp) = \\Frac(\\CC) = \\CC$, as we expected.\nIn practice, $\\Frac(A/\\kp)$ is probably the easier way\nto compute $\\kappa(\\kp)$ for any prime ideal $\\kp$.\n\n\n\n\\section{Recap}\nTo recap the last two chapters, let $A$ be a ring.\n\\begin{itemize}\n\t\\ii We define $X = \\Spec A$ to be the set of prime ideals of $A$.\n\t\\begin{itemize}\n\t\t\\ii The maximal ideals are the ``closed points'' we are used to,\n\t\tbut the prime ideals are ``generic points''.\n\t\\end{itemize}\n\n\t\\ii We equip $\\Spec A$ with the Zariski topology by declaring\n\t$\\VV(I)$ to be the closed sets, for ideals $I \\subseteq A$.\n\t\\begin{itemize}\n\t\t\\ii The distinguished open sets $D(f)$,\n\t\tform a topological basis.\n\t\t\\ii The irreducible closed sets are exactly the closures of points.\n\t\\end{itemize}\n\n\t\\ii Finally, we defined a sheaf $\\OO_X$.\n\tWe set up the definition such that\n\t\\begin{itemize}\n\t\t\\ii $\\OO_{X}(D(f)) = A[1/f]$:\n\t\tat distinguished open sets $D(f)$,\n\t\twe get localizations too.\n\t\t\\ii $\\OO_{X,\\kp} = A_\\kp$:\n\t\tthe stalks are localizations at a prime.\n\t\\end{itemize}\n\tSince $D(f)$ is a basis,\n\tthese two properties lets us explicitly compute $\\OO_X(U)$\n\tfor any open set $U$,\n\tso we don't have to resort to the definition using sheafification.\n\\end{itemize}\n\n\\section{Functions are determined by germs, not values}\n\\prototype{The functions $0$ and $x$ on $\\Spec \\CC[x]/(x^2)$.}\n\nWe close the chapter with a word of warning.\nIn any ringed space, a section is determined by its germs;\nso that on $\\Spec A$ a function $f \\in A$ is determined\nby its germ in each stalk $A_\\kp$.\nHowever, we now will mention that an $f \\in A$ is \\emph{not}\ndetermined by its value $f(\\kp) = f \\pmod \\kp$ at each point.\n\nThe famous example is:\n\\begin{example}\n\t[On the double point, all multiples of $x$ are zero at all points]\n\tThe space $\\Spec \\CC[x] / (x^2)$ has only one point, $(x)$.\n\tThe functions $0$ and $x$ (and for that matter $2x$, $3x$, \\dots)\n\tall vanish on it.\n\tThis shows that functions are not determined uniquely\n\tby values in general.\n\\end{example}\n\nFortunately, we can explicitly characterize\nwhen this sort of ``bad'' behavior happens.\nIndeed, we want to see when $f(\\kp) = g(\\kp)$ for every $\\kp$,\nor equivalently, $h = f-g$ vanishes on every prime ideal $\\kp$.\nThis is equivalent to having\n\\[ h \\in \\bigcap_{\\kp} \\kp = \\sqrt{(0)} \\]\nthe radical of the \\emph{zero} ideal.\nThus in the prototype, the failure was caused by the fact that $x^n = 0$\nfor some large $n$.\n\n\\begin{definition}\n\tFor a ring $A$, the radical of the zero ideal, $\\sqrt{(0)}$,\n\tis called the \\vocab{nilradical} of $A$.\n\tElements of the nilradical are called \\vocab{nilpotents}.\n\tWe say $A$ is \\vocab{reduced} if $0$ is the only nilpotent,\n\ti.e.\\ $\\sqrt{(0)} = (0)$.\n\\end{definition}\n\\begin{ques}\n\tAre integral domains reduced?\n\\end{ques}\n\nThen our above discussion gives:\n\\begin{theorem}\n\t[Nilpotents are the only issue]\n\tTwo functions $f$ and $g$ have the same value\n\ton all points of $\\Spec A$ if and only if $f-g$ is nilpotent.\n\\end{theorem}\nIn particular, when $A$ is a reduced ring,\neven the values $f(\\kp)$ as $\\kp \\in \\Spec A$\nare enough to 
\n\\section{\\problemhead}\n\\Cref{ch:spec_examples} contains many\nexamples of affine schemes to train your intuition;\nit's likely worth reading even before attempting these problems.\n\n\\begin{dproblem}\n\t[Spectrums are quasicompact]\n\t\\gim\n\tShow that $\\Spec A$ is quasicompact for any ring $A$.\n\\end{dproblem}\n\n\\begin{problem}\n\t[Punctured gyrotop, communicated by Aaron Pixton]\n\tThe gyrotop is the scheme $X = \\Spec \\CC[x,y,z] / (xy,z)$.\n\tWe let $U$ denote the open subset obtained\n\tby deleting the closed point $\\km = (x,y,z)$.\n\tCompute $\\OO_X(U)$.\n\t\\begin{hint}\n\t\t$k[x,y] \\times k[z,z\\inv]$.\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}\n\tShow that a ring $R$ is a local ring\n\tif and only if the following property is true:\n\tfor any $x \\in R$,\n\teither $x$ or $1-x$ is a unit.\n\\end{problem}\n\n\\begin{problem}\n\tLet $R$ be a local ring, and $\\km$ be its maximal ideal.\n\tDescribe $R_\\km$.\n\t\\begin{hint}\n\t\tIt's isomorphic to $R$!\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}\n\tLet $A$ be a ring, and $\\km$ a maximal ideal.\n\tConsider $\\km$ as a point of $\\Spec A$.\n\tShow that $\\kappa(\\km) \\cong A/\\km$.\n\\end{problem}\n\n\n", "meta": {"hexsha": "8302733adf9d74efe3ba20aa78abefb8a80b96bd", "size": 20999, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/alg-geom/spec-sheaf.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/alg-geom/spec-sheaf.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/alg-geom/spec-sheaf.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.9400998336, "max_line_length": 85, "alphanum_fraction": 0.6867946093, "num_tokens": 6728, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8333245911726382, "lm_q1q2_score": 0.5961804189138009}}
{"text": "%!TEX root = forallxcam.tex\n\\part{Truth tables}\n\\label{ch.TruthTables}\n\n\\chapter{Characteristic truth tables}\\label{s:CharacteristicTruthTables}\nAny non-atomic sentence of TFL is composed of atomic sentences with sentential connectives. The truth value of the compound sentence depends only on the truth value of the atomic sentences that comprise it. In order to know the truth value of `$(D \\eand E)$', for instance, you only need to know the truth value of `$D$' and the truth value of `$E$'. \n\nI introduced five connectives in chapter \\ref{ch.TFL}. So I just need to explain how they map between truth values. For convenience, I abbreviate `True' with `T' and `False' with `F'. (But, to be clear, the two truth values are True and False; the truth values are not \\emph{letters}!)\n\n\\paragraph{Negation.} For any sentence \\meta{A}: If \\meta{A} is true, then \\enot\\meta{A} is false; and if \\enot\\meta{A} is true, then \\meta{A} is false. We can summarize this in the \\emph{characteristic truth table} for negation:\n\\begin{center}\n\\begin{tabular}{c|c}\n\\meta{A} & \\enot\\meta{A}\\\\\n\\hline\nT & F\\\\\nF & T \n\\end{tabular}\n\\end{center}\n\n\\paragraph{Conjunction.} For any sentences \\meta{A} and \\meta{B}, \\meta{A}\\eand\\meta{B} is true if and only if both \\meta{A} and \\meta{B} are true. We can summarize this in the {characteristic truth table} for conjunction:\n\\begin{center}\n\\begin{tabular}{c c |c}\n\\meta{A} & \\meta{B} & $\\meta{A}\\eand\\meta{B}$\\\\\n\\hline\nT & T & T\\\\\nT & F & F\\\\\nF & T & F\\\\\nF & F & F\n\\end{tabular}\n\\end{center}\nNote that conjunction is \\emph{symmetrical}. The truth value for $\\meta{A} \\eand \\meta{B}$ is always the same as the truth value for $\\meta{B} \\eand \\meta{A}$.  \n\n\\paragraph{Disjunction.} Recall that `$\\eor$' always represents inclusive or. So, for any sentences \\meta{A} and \\meta{B}, $\\meta{A}\\eor \\meta{B}$ is true if and only if either \\meta{A} or \\meta{B} is true. We can summarize this in the {characteristic truth table} for disjunction:\n\\begin{center}\n\\begin{tabular}{c c|c}\n\\meta{A} & \\meta{B} & $\\meta{A}\\eor\\meta{B}$ \\\\\n\\hline\nT & T & T\\\\\nT & F & T\\\\\nF & T & T\\\\\nF & F & F\n\\end{tabular}\n\\end{center}\nLike conjunction, disjunction is symmetrical. \n\n\\paragraph{Conditional.} I'm just going to come clean and admit it: conditionals are a big old mess in TFL. Exactly how much of a mess they are is a matter of \\emph{philosophical} contention. I shall discuss a few of the subtleties  in \\S\\S\\ref{s:IndicativeSubjunctive} and \\ref{s:ParadoxesOfMaterialConditional}. For now, I am going to stipulate the following: $\\meta{A}\\eif\\meta{B}$ is false if and only if \\meta{A} is true and \\meta{B} is false. We can summarize this with a characteristic truth table for the conditional.\n\\begin{center}\n\\begin{tabular}{c c|c}\n\\meta{A} & \\meta{B} & $\\meta{A}\\eif\\meta{B}$\\\\\n\\hline\nT & T & T\\\\\nT & F & F\\\\\nF & T & T\\\\\nF & F & T\n\\end{tabular}\n\\end{center}\nThe conditional is \\emph{asymmetrical}. 
You cannot swap the antecedent and consequent without changing the meaning of the sentence; $\\meta{A}\\eif\\meta{B}$ and $\\meta{B} \\eif \\meta{A}$ have different truth tables.\n\n\\paragraph{Biconditional.} Since a biconditional is to be the same as the conjunction of a conditional running in both directions, the truth table for the biconditional should be:\n\\begin{center}\n\\begin{tabular}{c c|c}\n\\meta{A} & \\meta{B} & $\\meta{A}\\eiff\\meta{B}$\\\\\n\\hline\nT & T & T\\\\\nT & F & F\\\\\nF & T & F\\\\\nF & F & T\n\\end{tabular}\n\\end{center}\nUnsurprisingly, the biconditional is symmetrical. \n\n\\chapter{Truth-functional connectives}\\label{s:TruthFunctionality}\n\n\\section{The idea of truth-functionality}\nI want to introduce an important idea. \n\t\\factoidbox{\n\t\tA connective is \\define{truth-functional} iff the truth value of a sentence with that connective as its main logical operator is uniquely determined by the truth value(s) of the constituent sentence(s).\n\t}\nEvery connective in TFL is truth-functional. The truth value of a negation is uniquely determined by the truth value of the unnegated sentence. The truth value of a conjunction is uniquely determined by the truth value of both conjuncts. The truth value of a disjunction is uniquely determined by the truth value of both disjuncts. And so on. To determine the truth value of some TFL sentence, we only need to know the truth value of its components. \n\nThis is what gives TFL its name: it is \\emph{truth-functional logic}.\n\nMany languages use connectives that are not truth-functional. In English, for example, we can form a new sentence from any simpler sentence by prefixing it with `It is necessarily the case that\\ldots'. The truth value of this new sentence is not fixed solely by the truth value of the original sentence. For consider two true sentences:\n\t\\begin{earg}\n\t\t\\item $2 + 2 = 4$\n\t\t\\item Shostakovich wrote fifteen string quartets\n\t\\end{earg}\nWhereas it is necessarily the case that $2 + 2 = 4$, it is not \\emph{necessarily} the case that Shostakovich wrote fifteen string quartets. If Shostakovich had died earlier, he would have failed to finish Quartet no.\\ 15; if he had lived longer, he might have written a few more. So `It is necessarily the case that\\ldots' is not \\emph{truth-functional}.\n\n\n\\section{Symbolising versus translating}\nAll of the connectives of TFL are truth-functional. But more than that: they really do nothing \\emph{but} map us between truth values.  \n\nWhen we symbolise a sentence or an argument in TFL, we ignore everything \\emph{besides} the contribution that the truth values of a component might make to the truth value of the whole. There are subtleties to our ordinary claims that far outstrip their mere truth values. Sarcasm; poetry; snide implicature; emphasis; these are important parts of everyday discourse. But none of this is retained in TFL. As remarked in \\S\\ref{s:TFLConnectives}, TFL cannot capture the subtle differences between the following English sentences:\n\t\\begin{earg}\n\t\t\\item Jon is fat and Jon is quick\n\t\t\\item Although Jon is fat, Jon is quick\n\t\t\\item Despite being fat, Jon is quick\n\t\t\\item Jon is quick, albeit fat\n\t\t\\item Jon's fatness notwithstanding, he is quick\n\t\\end{earg}\nAll of the above sentences will be symbolised with the same TFL sentence, perhaps `$F \\eand Q$'.\n\nNow, I keep saying that we use TFL sentences to \\emph{symbolise} English sentences. 
Many other textbooks talk about \\emph{translating} English sentences into TFL. But a good translation should preserve certain facets of meaning, and---as we just saw---TFL cannot do that. This is why I speak of \\emph{symbolising} English sentences, rather than of \\emph{translating} them.\n\nThis affects how you should understand symbolisation keys. Consider a key like:\n\t\\begin{ekey}\n\t\t\\item[F] Jon is fat.\n\t\t\\item[Q] Jon is quick.\n\t\\end{ekey}\nOther textbooks will understand this as a stipulation that the TFL sentence `$F$' should \\emph{mean} that Jon is fat, and that the TFL sentence `$Q$' should \\emph{mean} that Jon is quick. But TFL just is totally unequipped to deal with \\emph{meaning}. The preceding symbolisation key is doing no more nor less than stipulating truth values for the TFL sentences `$F$'  and `$Q$'. We are laying down that `$F$' should be true if Jon is fat (and false otherwise), and that `$Q$' should be true if Jon is quick (and false otherwise.) \n\t\\factoidbox{\n\t\tWhen we treat a TFL sentence as \\emph{symbolising} some English sentence, we are simply stipulating a truth value for that TFL sentence.\n\t}\n\n\n\\section{Indicative versus subjunctive conditionals}\\label{s:IndicativeSubjunctive}\nTo bring home the point that TFL can \\emph{only} deal with truth functions, I want to say a little bit about conditionals. When I introduced the characteristic truth table for the material conditional in \\S\\ref{s:CharacteristicTruthTables}, I did not say anything to justify it. Let me now offer a justification, which follows Dorothy Edgington.\\footnote{Dorothy Edgington, `Conditionals', 2014, in the \\emph{Stanford Encyclopedia of Philosophy} (\\url{http://plato.stanford.edu/entries/conditionals/}).} \n\nSuppose that Lara has drawn some shapes on a piece of paper, and coloured some of them in. I have not seen them, but I claim:\n\t\\begin{quote}\n\t\tIf any shape is grey, then that shape is also circular.\n\t\\end{quote}\nAs it happens, Lara has drawn the following:\n\\begin{center}\n\\begin{tikzpicture}\n\t\\node[circle, grey_shape] (cat1) {A};\n\t\\node[right=10pt of cat1, diamond, phantom_shape] (cat2)  { } ;\n\t\\node[right=10pt of cat2, circle, white_shape] (cat3)  {C} ;\n\t\\node[right=10pt of cat3, diamond, white_shape] (cat4)  {D};\n\\end{tikzpicture}\n\\end{center}\nIn this case, my claim is surely true.  Shapes C and D are not grey, and so can hardly present \\emph{counterexamples} to my claim. Shape A \\emph{is} grey, but fortunately it is also circular. So my claim has no counterexamples. It must be true. And that means that each of the following \\emph{instances} of my claim must be true too:\n\t\\begin{ebullet}\n\t\t\\item If A is grey, then it is circular \\hfill (true antecedent, true consequent)\n\t\t\\item If C is grey, then it is circular\\hfill (false antecedent, true consequent)\n\t\t\\item If D is grey, then it is circular \\hfill (false antecedent, false consequent)\n\t\\end{ebullet}\nHowever, if Lara had drawn a fourth shape, thus:\n\\begin{center}\n\\begin{tikzpicture}\n\t\\node[circle, grey_shape] (cat1) {A};\n\t\\node[right=10pt of cat1, diamond, grey_shape] (cat2)  {B};\n\t\\node[right=10pt of cat2, circle, white_shape] (cat3)  {C};\n\t\\node[right=10pt of cat3, diamond, white_shape] (cat4)  {D};\n\\end{tikzpicture}\n\\end{center}\nthen my claim would have been false. 
So this claim must also be false:\n\t\\begin{ebullet}\n\t\t\\item If B is grey, then it is circular \\hfill (true antecedent, false consequent)\n\t\\end{ebullet}\nNow, recall that every connective of TFL has to be truth-functional. This means that the mere truth value of the antecedent and consequent must uniquely determine the truth value of the conditional as a whole. Thus, from the truth values of our four claims---which provide us with all possible combinations of truth and falsity in antecedent and consequent---we can read off the truth table for the material conditional.\n\nWhat this argument shows is that `$\\eif$' is the \\emph{only} candidate for a truth-functional conditional. Otherwise put, \\emph{it is the best conditional that TFL can provide}. But is it any good, as a surrogate for the conditionals we use in everyday language? Consider two sentences:\n\t\\begin{earg}\n\t\t\\item[\\ex{brownwins1}] If Hillary Clinton had won the 2016 US election, then she would have been the first female president of the US.\n\t\t\\item[\\ex{brownwins2}] If Hillary Clinton had won the 2016 US election, then she would have turned into a helium-filled balloon and floated away into the night sky.\n\t\\end{earg}\nSentence \\ref{brownwins1} is true; sentence \\ref{brownwins2} is false. But both have false antecedents and false consequents. (Hillary did not win; she did not become the first female president of the US; and she did not fill with helium and float away.) So the truth value of the whole sentence is not uniquely determined by the truth value of the parts. \n\nThe crucial point is that sentences \\ref{brownwins1} and \\ref{brownwins2} employ \\emph{subjunctive} conditionals, rather than \\emph{indicative} conditionals. They ask us to imagine something contrary to fact---after all, Hillary Clinton lost the 2016 election---and then ask us to evaluate what \\emph{would} have happened in that case. Such considerations simply cannot be tackled using `$\\eif$'.\n\nI shall say more about the difficulties with conditionals in \\S\\ref{s:ParadoxesOfMaterialConditional}. For now, I shall content myself with the observation that `$\\eif$' is the only candidate for a truth-functional conditional, but that many English conditionals cannot be represented adequately using `$\\eif$'. TFL is an intrinsically limited language. And you should not blithely assume that you can adequately symbolise an English `if\\ldots, then\\ldots' with TFL's `$\\eif$'. \n\n\n\\chapter{Complete truth tables}\\label{s:CompleteTruthTables}\nSo far, I have used symbolisation keys to assign truth values to TFL sentences \\emph{indirectly}. For example, we might say that the TFL sentence `$B$' is to be true iff Big Ben is in London. Since Big Ben \\emph{is} in London, this symbolisation would make `$B$' true. But we can also assign truth values \\emph{directly}. We can simply stipulate that `$B$' is to be true, or stipulate that it is to be false. Such stipulations are called \\emph{valuations}:\n\t\\factoidbox{\n\t\tA \\define{valuation} is any assignment of truth values to particular atomic sentences of TFL.\n\t}\nThe power of truth tables lies in the following. Each row of a truth table represents a possible valuation. The complete truth table represents all possible valuations. And the truth table provides us with a means to calculate the truth value of complex sentences, on each possible valuation. But all of this is easiest to explain by example.\n\n\\section{A worked example}\nConsider the sentence `$(H\\eand I)\\eif H$'.
There are four possible ways to assign True and False to the atomic sentences `$H$' and `$I$'---four valuations---which we can represent as follows:\n\\begin{center}\n\\begin{tabular}{c c|d e e e f}\n$H$&$I$&$(H$&\\eand&$I)$&\\eif&$H$\\\\\n\\hline\n T & T\\\\\n T & F\\\\\n F & T\\\\\n F & F\n\\end{tabular}\n\\end{center}\nTo calculate the truth value of the entire sentence `$(H \\eand I) \\eif H$', we first copy the truth values for the atomic sentences and write them underneath the letters in the sentence:\n\\begin{center}\n\\begin{tabular}{c c|d e e e f}\n$H$&$I$&$(H$&\\eand&$I)$&\\eif&$H$\\\\\n\\hline\n T & T & {T} & & {T} & & {T}\\\\\n T & F & {T} & & {F} & & {T}\\\\\n F & T & {F} & & {T} & & {F}\\\\\n F & F & {F} & & {F} & & {F}\n\\end{tabular}\n\\end{center}\nNow consider the subsentence `$(H\\eand I)$'. This is a conjunction, $(\\meta{A}\\eand\\meta{B})$, with `$H$' as \\meta{A} and with `$I$' as \\meta{B}. The characteristic truth table for conjunction gives the truth conditions for \\emph{any} sentence of the form $(\\meta{A}\\eand\\meta{B})$, whatever $\\meta{A}$ and $\\meta{B}$ might be. It summarises the point that a conjunction is true iff both conjuncts are true. In this case, our conjuncts are just `$H$' and `$I$'. They are both true on (and only on) the first line of the truth table. Accordingly, we can calculate the truth value of the conjunction on all four rows.\n\\begin{center}\n\\begin{tabular}{c c|d e e e f}\n & & \\meta{A} & \\eand & \\meta{B} & & \\\\\n$H$&$I$&$(H$&\\eand&$I)$&\\eif&$H$\\\\\n\\hline\n T & T & T & {T} & T & & T\\\\\n T & F & T & {F} & F & & T\\\\\n F & T & F & {F} & T & & F\\\\\n F & F & F & {F} & F & & F\n\\end{tabular}\n\\end{center}\nNow, the entire sentence that we are dealing with is a conditional, $\\meta{A}\\eif\\meta{B}$, with `$(H \\eand I)$' as \\meta{A} and with `$H$' as \\meta{B}. On the second row, for example, `$(H\\eand I)$' is false and `$H$' is true. Since a conditional is true when the antecedent is false, we write a `T' in the second row underneath the conditional symbol. We continue for the other three rows and get this:\n\\begin{center}\n\\begin{tabular}{c c| d e e e f}\n & &  & \\meta{A} &  &\\eif &\\meta{B} \\\\\n$H$&$I$&$(H$&\\eand&$I)$&\\eif&$H$\\\\\n\\hline\n T & T &  & {T} &  &{T} & T\\\\\n T & F &  & {F} &  &{T} & T\\\\\n F & T &  & {F} &  &{T} & F\\\\\n F & F &  & {F} &  &{T} & F\n\\end{tabular}\n\\end{center}\nThe conditional is the main logical connective of the sentence. And the column of `T's underneath the conditional tells us that the sentence `$(H \\eand I)\\eif H$' is true regardless of the truth values of `$H$' and `$I$'. They can be true or false in any combination, and the compound sentence still comes out true. Since we have considered all four possible assignments of truth and falsity to `$H$' and `$I$'---since, that is, we have considered all the different \\emph{valuations}---we can say that `$(H \\eand I)\\eif H$' is true on every valuation.\n\nIn this example, I have not repeated all of the entries in every column in every successive table. When actually writing truth tables on paper, however, it is impractical to erase whole columns or rewrite the whole table for every step.
Although it is more crowded, the truth table can be written in this way:\n\\begin{center}\n\\begin{tabular}{c c| d e e e f}\n$H$&$I$&$(H$&\\eand&$I)$&\\eif&$H$\\\\\n\\hline\n T & T & T & {T} & T & \\TTbf{T} & T\\\\\n T & F & T & {F} & F & \\TTbf{T} & T\\\\\n F & T & F & {F} & T & \\TTbf{T} & F\\\\\n F & F & F & {F} & F & \\TTbf{T} & F\n\\end{tabular}\n\\end{center}\nMost of the columns underneath the sentence are only there for bookkeeping purposes. The column that matters most is the column underneath the \\emph{main logical operator} for the sentence, since this tells you the truth value of the entire sentence. I have emphasised this, by putting this column in bold. When you work through truth tables yourself, you should similarly emphasise it (perhaps by underlining).\n\n\\section{Building complete truth tables}\nA \\define{complete truth table} has a line for every possible assignment of True and False to the relevant atomic sentences. Each line represents a \\emph{valuation}, and a complete truth table has a line for all the different valuations. \n\nThe size of the complete truth table depends on the number of different atomic sentences in the table. A sentence that contains only one atomic sentence requires only two rows, as in the characteristic truth table for negation. This is true even if the same letter is repeated many times, as in the sentence\n`$[(C\\eiff C) \\eif C] \\eand \\enot(C \\eif C)$'.\nThe complete truth table requires only two lines because there are only two possibilities: `$C$' can be true or it can be false. The truth table for this sentence looks like this:\n\\begin{center}\n\\begin{tabular}{c| d e e e e e e e e e e e e e e f}\n$C$&$[($&$C$&\\eiff&$C$&$)$&\\eif&$C$&$]$&\\eand&\\enot&$($&$C$&\\eif&$C$&$)$\\\\\n\\hline\n T &    & T &  T  & T &   & T  & T & &\\TTbf{F}&  F& &   T &  T  & T &   \\\\\n F &    & F &  T  & F &   & F  & F & &\\TTbf{F}&  F& &   F &  T  & F &   \\\\\n\\end{tabular}\n\\end{center}\nLooking at the column underneath the main logical operator, we see that the sentence is false on both rows of the table; i.e., the sentence is false regardless of whether `$C$' is true or false. It is false on every valuation.\n\nThere will be four lines in the complete truth table for a sentence containing two atomic sentences, as in the characteristic truth tables, or the truth table for `$(H \\eand I)\\eif H$'.\n\nThere will be eight lines in the complete truth table for a sentence containing three atomic sentences, e.g.:\n\\begin{center}\n\\begin{tabular}{c c c|d e e e f}\n$M$&$N$&$P$&$M$&\\eand&$(N$&\\eor&$P)$\\\\\n\\hline\n%           M        &     N   v   P\nT & T & T & T & \\TTbf{T} & T & T & T\\\\\nT & T & F & T & \\TTbf{T} & T & T & F\\\\\nT & F & T & T & \\TTbf{T} & F & T & T\\\\\nT & F & F & T & \\TTbf{F} & F & F & F\\\\\nF & T & T & F & \\TTbf{F} & T & T & T\\\\\nF & T & F & F & \\TTbf{F} & T & T & F\\\\\nF & F & T & F & \\TTbf{F} & F & T & T\\\\\nF & F & F & F & \\TTbf{F} & F & F & F\n\\end{tabular}\n\\end{center}\nFrom this table, we know that the sentence `$M\\eand(N\\eor P)$' can be true or false, depending on the truth values of `$M$', `$N$', and `$P$'.\n\nA sentence containing four different atomic sentences needs a truth table with 16 lines. Five atomic sentences, 32 lines. Six atomic sentences, 64 lines. And so on. 
To be perfectly general: a complete truth table with $n$ atomic sentences must have $2^n$ lines.\n\nIn order to fill in the columns of a complete truth table, begin with the right-most atomic sentence and alternate between `T' and `F'. In the next column to the left, write two `T's, write two `F's, and repeat. For the third atomic sentence, write four `T's followed by four `F's. This yields an eight line truth table like the one above. For a 16 line truth table, the next column of atomic sentences should have eight `T's followed by eight `F's. For a 32 line table, the next column would have 16 `T's followed by 16 `F's. And so on.\n\n\n\\section{More bracketing conventions}\\label{s:MoreBracketingConventions}\nConsider these two sentences:\n\t\\begin{align*}\n\t\t((A \\eand B) \\eand C)\\\\\n\t\t(A \\eand (B \\eand C))\n\t\\end{align*}\nThese have the same truth table. Consequently, it will never make any difference from the perspective of truth value---which is all that TFL cares about (see \\S\\ref{s:TruthFunctionality})---which of the two sentences we assert (or deny). And since the order of the brackets does not matter, I shall allow us to drop them.  In short, we can save some ink and some eyestrain by writing:\n\t\\begin{align*}\n\t\tA \\eand B \\eand C\n\t\\end{align*}\nThe general point is that, if we just have a long list of conjunctions, we can drop the inner brackets. (I already allowed us to drop outermost brackets in \\S\\ref{s:TFLSentences}.) The same observation holds for disjunctions. Since the following sentences have exactly the same truth table:\n\t\\begin{align*}\n\t\t((A \\eor B) \\eor C)\\\\\n\t\t(A \\eor (B \\eor C))\n\t\\end{align*}\nwe can simply write:\n\t\\begin{align*}\n\t\tA \\eor B \\eor C\n\t\\end{align*}\nAnd generally, if we just have a long list of disjunctions, we can drop the inner brackets. \\emph{But be careful}. These two sentences have \\emph{different} truth tables:\n\t\\begin{align*}\n\t\t((A \\eif B) \\eif C)\\\\\n\t\t(A \\eif (B \\eif C))\n\t\\end{align*}\nSo if we were to write:\n\t\\begin{align*}\n\t\tA \\eif B \\eif C\n\t\\end{align*}\nit would be dangerously ambiguous. So we must not do the same with conditionals. Equally, these sentences have different truth tables:\n\t\\begin{align*}\n\t\t((A \\eor B) \\eand C)\\\\\n\t\t(A \\eor (B \\eand C))\n\t\\end{align*}\nSo if we were to write:\n\t\\begin{align*}\n\t\tA \\eor B \\eand C\n\t\\end{align*}\nit would be dangerously ambiguous. \\emph{Never write this.} The moral is: you can drop brackets when dealing with a long list of conjunctions, or when dealing with a long list of disjunctions. 
But that's it.\n\n\\practiceproblems\n\\problempart\nOffer complete truth tables for each of the following:\n\\begin{earg}\n\\item $A \\eif A$ %taut\n\\item $C \\eif\\enot C$ %contingent\n\\item $(A \\eiff B) \\eiff \\enot(A\\eiff \\enot B)$ %tautology\n\\item $(A \\eif B) \\eor (B \\eif A)$ % taut\n\\item $(A \\eand B) \\eif (B \\eor A)$  %taut\n\\item $\\enot(A \\eor B) \\eiff (\\enot A \\eand \\enot B)$ %taut\n\\item $\\bigl[(A\\eand B) \\eand\\enot(A\\eand B)\\bigr] \\eand C$ %contradiction\n\\item $[(A \\eand B) \\eand C] \\eif B$ %taut\n\\item $\\enot\\bigl[(C\\eor A) \\eor B\\bigr]$ %contingent\n\\end{earg}\n\\problempart\nCheck all the claims made in introducing the new notational conventions in \\S\\ref{s:MoreBracketingConventions}, i.e.\\ show that:\n\\begin{earg}\n\t\\item `$((A \\eand B) \\eand C)$' and `$(A \\eand (B \\eand C))$' have the same truth table\n\t\\item `$((A \\eor B) \\eor C)$' and `$(A \\eor (B \\eor C))$' have the same truth table\n\t\\item `$((A \\eor B) \\eand C)$' and `$(A \\eor (B \\eand C))$' do not have the same truth table\n\t\\item `$((A \\eif B) \\eif C)$' and `$(A \\eif (B \\eif C))$' do not have the same truth table\n\\end{earg}\nAlso, check whether:\n\\begin{earg}\n\t\\item[5.] `$((A \\eiff B) \\eiff C)$' and `$(A \\eiff (B \\eiff C))$' have the same truth table\n\\end{earg}\nIf you want additional practice, you can construct truth tables for any of the sentences and arguments in the exercises for the previous chapter.\n\n\n\\chapter{Semantic concepts}\\label{s:semanticconcepts}\nIn the previous section, I introduced the idea of a valuation and showed how to determine the truth value of any TFL sentence, on any valuation, using a truth table. In this section, I shall introduce some related ideas, and show how to use truth tables to test whether or not they apply.\n\n\n\\section{Tautologies and contradictions}\nIn \\S\\ref{s:BasicNotions}, I explained \\emph{necessary truth} and \\emph{necessary falsity}. Both notions have surrogates in TFL. We shall start with a surrogate for necessary truth.\n\t\\factoidbox{\n\tA sentence is a \\define{tautology} iff it is true on every valuation.\n\t}\nWe can use truth tables to decide whether a sentence is a tautology. If the sentence is true on every line of its complete truth table, then it is true on every valuation, so it is a tautology. In the example of \\S\\ref{s:CompleteTruthTables}, `$(H \\eand I) \\eif H$' is a tautology. \n\nThis is only, though, a surrogate for necessary truth. There are some necessary truths that we cannot adequately symbolise in TFL. One example is `$2 + 2 = 4$'. This \\emph{must} be true, but if we try to symbolise it in TFL, the best we can offer is an atomic sentence, and no atomic sentence is a tautology. Still, if we can adequately symbolise some English sentence using a TFL sentence which is a tautology, then that English sentence expresses a necessary truth.\n\nWe have a similar surrogate for necessary falsity:\n\t\\factoidbox{\n\t\tA sentence is a \\define{contradiction} iff it is false on every valuation.\n\t}\nWe can use truth tables to decide whether a sentence is a contradiction. If the sentence is false on every line of its complete truth table, then it is false on every valuation, so it is a contradiction. 
In the example of \\S\\ref{s:CompleteTruthTables}, `$[(C\\eiff C) \\eif C] \\eand \\enot(C \\eif C)$' is a contradiction.\n\n\n\\section{Tautological equivalence}\nHere is a similar, useful notion:\n\t\\factoidbox{\n\t\tSentences are \\define{tautologically equivalent} iff they have the same truth value on every valuation.\n\t}\nWe have already made use of this notion, in effect, in \\S\\ref{s:MoreBracketingConventions}; the point was that `$(A \\eand B) \\eand C$' and  `$A \\eand (B \\eand C)$' are tautologically equivalent. Again, it is easy to test for tautological equivalence using truth tables. Consider the sentences `$\\enot(P \\eor Q)$' and `$\\enot P \\eand \\enot Q$'. Are they tautologically equivalent? To find out, we construct a truth table.\n\\begin{center}\n\\begin{tabular}{c c|d e e f |d e e e f}\n$P$&$Q$&\\enot&$(P$&\\eor&$Q)$&\\enot&$P$&\\eand&\\enot&$Q$\\\\\n\\hline\n T & T & \\TTbf{F} & T & T & T & F & T & \\TTbf{F} & F & T\\\\\n T & F & \\TTbf{F} & T & T & F & F & T & \\TTbf{F} & T & F\\\\\n F & T & \\TTbf{F} & F & T & T & T & F & \\TTbf{F} & F & T\\\\\n F & F & \\TTbf{T} & F & F & F & T & F & \\TTbf{T} & T & F\n\\end{tabular}\n\\end{center}\nLook at the columns for the main logical operators: negation for the first sentence, conjunction for the second. On the first three rows, both are false. On the final row, both are true. Since they match on every row, the two sentences are tautologically equivalent.\n\n\n\\section{Consistency}\nIn \\S\\ref{s:BasicNotions}, I said that sentences are jointly consistent iff it is possible for all of them to be true at once. We can offer a surrogate for this notion too:\n\t\\factoidbox{\n\tSentences are \\define{jointly tautologically consistent} iff there is some valuation which makes them all true.\n\t}\nDerivatively, sentences are jointly tautologically inconsistent iff no valuation makes them all true. Again, it is easy to test for joint tautological consistency using truth tables. \n\n\\section{Tautological entailment and validity}\nThe following idea is closely related to that of joint consistency:\n\t\\factoidbox{\n\t\t$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ \\define{tautologically entail}  $\\meta{C}$ iff no valuation of the relevant atomic sentences makes all of $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ true and $\\meta{C}$ false.\n\t}\nAgain, it is easy to test this with a truth table. To check whether `$\\enot L \\eif (J \\eor L)$' and `$\\enot L$' tautologically entail `$J$', we simply need to check whether there is any valuation which makes both `$\\enot L \\eif (J \\eor L)$' and `$\\enot L$' true whilst making `$J$' false. So we use a truth table: \n\\begin{center}\n\\begin{tabular}{c c|d e e e e f|d f| c}\n$J$&$L$&\\enot&$L$&\\eif&$(J$&\\eor&$L)$&\\enot&$L$&$J$\\\\\n\\hline\n%J   L   -   L      ->     (J   v   L)\n T & T & F & T & \\TTbf{T} & T & T & T & \\TTbf{F} & T & \\TTbf{T}\\\\\n T & F & T & F & \\TTbf{T} & T & T & F & \\TTbf{T} & F & \\TTbf{T}\\\\\n F & T & F & T & \\TTbf{T} & F & T & T & \\TTbf{F} & T & \\TTbf{F}\\\\\n F & F & T & F & \\TTbf{F} & F & F & F & \\TTbf{T} & F & \\TTbf{F}\n\\end{tabular}\n\\end{center}\nThe only row on which both `$\\enot L \\eif (J \\eor L)$' and `$\\enot L$' are true is the second row, and that is a row on which `$J$' is also true. So `$\\enot L \\eif (J \\eor L)$' and `$\\enot L$' tautologically entail `$J$'.
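\nFor contrast, it is worth seeing a test that \\emph{fails}. (This is a supplementary mini-example; to keep it short, I record only the truth value of each whole sentence, not the bookkeeping columns.) Do `$J \\eor L$' and `$\\enot J$' tautologically entail `$\\enot L$'? We check:\n\\begin{center}\n\\begin{tabular}{c c|c|c|c}\n$J$&$L$&$J \\eor L$&$\\enot J$&$\\enot L$\\\\\n\\hline\n T & T & T & F & F\\\\\n T & F & T & F & T\\\\\n F & T & T & T & F\\\\\n F & F & F & T & T\n\\end{tabular}\n\\end{center}\nOn the third row, `$J \\eor L$' and `$\\enot J$' are both true whilst `$\\enot L$' is false. That single row shows that `$J \\eor L$' and `$\\enot J$' do \\emph{not} tautologically entail `$\\enot L$'.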
\nThe notion of tautological entailment is deeply connected to the notion of validity:\\footnote{Recall from \\S\\ref{s:UseMention} the use of `$\\therefore$' here. Without it, the information in this box could be written as follows. If $\\meta{A}_1, \\ldots, \\meta{A}_n$ tautologically entail $\\meta{C}$, then the argument with premises $\\meta{A}_1, \\ldots, \\meta{A}_n$ and conclusion $\\meta{C}$ is valid.}\n\t\\factoidbox{\n\t\tIf $\\meta{A}_1, \\ldots, \\meta{A}_n$ tautologically entail $\\meta{C}$, then $\\meta{A}_1, \\ldots, \\meta{A}_n \\therefore \\meta{C}$ is valid.\n\t}\nHere's why. If $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ tautologically entail $\\meta{C}$, then no valuation makes all of $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ true whilst making $\\meta{C}$ false. So it is \\emph{logically impossible} for $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ all to be true whilst $\\meta{C}$ is false. And this guarantees the validity of the argument with premises $\\meta{A}_1, \\ldots, \\meta{A}_n$ and conclusion $\\meta{C}$. \n\nIn short, we have a way to test for the validity of English arguments. First, we symbolise them in TFL; then we test for tautological entailment using truth tables. \n\n\n\\section{The limits of these tests}\\label{s:ParadoxesOfMaterialConditional}\nThis is an important milestone: a test for the validity of arguments! But we should not get carried away just yet. It is important to understand the \\emph{limits} of our achievement. I shall illustrate these limits with three examples.\n\nFirst, consider the argument: \n\t\\begin{earg}\n\t\t\\item Daisy has four legs. So Daisy has more than two legs.\n\t\\end{earg}\nTo symbolise this argument in TFL, we would have to use two different atomic sentences -- perhaps `$F$' and `$T$' -- for the premise and the conclusion respectively. Now, it is obvious that `$F$' does not tautologically entail `$T$'. But the English argument is surely valid!\n\nSecond, consider the sentence:\n\t\\begin{earg}\n\\setcounter{eargnum}{1}\n\t\t\\item\\label{n:JanBald} Jan is neither bald nor not-bald.\n\t\\end{earg}\nTo symbolise this sentence in TFL, we would offer something like `$\\enot J \\eand \\enot \\enot J$'. This is a contradiction (check this with a truth table). But sentence \\ref{n:JanBald} does not itself seem like a contradiction; for we might have happily added `Jan is on the borderline of baldness'!\n\nThird, consider the following sentence:\n\t\\begin{earg}\n\\setcounter{eargnum}{2}\t\n\t\t\\item\\label{n:GodParadox}\tIt's not the case that, if God exists, then She answers malevolent prayers.\n\t\\end{earg}\nSymbolising this in TFL, we would offer something like `$\\enot (G \\eif M)$'. Now, `$\\enot (G \\eif M)$' tautologically entails `$G$' (again, check this with a truth table). So if we symbolise sentence \\ref{n:GodParadox} in TFL, it seems to entail that God exists. But that's strange: surely even the atheist can accept sentence \\ref{n:GodParadox}, without contradicting herself!\n\nIn different ways, these three examples highlight some of the limits of working with a language (like TFL) that can \\emph{only} handle truth-functional connectives. Moreover, these limits give rise to some interesting questions in philosophical logic. The case of Jan's baldness (or otherwise) raises the general question of what logic we should use when dealing with \\emph{vague} discourse.
The case of the atheist raises the question of how to deal with the (so-called) \\emph{paradoxes of material implication}. Part of the purpose of this course is to equip you with the tools to explore these questions of \\emph{philosophical logic}. But we have to walk before we can run; we have to become proficient in using TFL, before we can adequately discuss its limits, and consider alternatives. \n\n\\section{The double-turnstile}\nIn what follows, we will use the notion of tautological entailment rather often. It will help us, then, to introduce a symbol that abbreviates it. Rather than saying that the TFL sentences $\\meta{A}_1, \\meta{A}_2, \\ldots$ and $\\meta{A}_n$ together tautologically entail $\\meta{C}$, we shall abbreviate this by:\n\t$$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\entails \\meta{C}$$\nThe symbol `$\\entails$' is known as \\emph{the double-turnstile}, since it looks like a turnstile with two horizontal beams.\n\nLet me be very clear about something. `$\\entails$' is not a symbol of TFL. Rather, it is a symbol of our metalanguage, augmented English (recall the difference between object language and metalanguage from \\S\\ref{s:UseMention}). So the metalanguage sentence:\n\t\\begin{ebullet}\n\t\t\\item $P, P \\eif Q \\entails Q$\n\t\\end{ebullet}\nis \\emph{just} an abbreviation for this metalanguage sentence: \n\t\\begin{ebullet}\n\t\t\\item The TFL sentences `$P$' and `$P \\eif Q$' tautologically entail `$Q$'\n\t\\end{ebullet}\nNote that there is no limit on the number of TFL sentences that can be mentioned before the symbol `$\\entails$'. Indeed, we can even consider the limiting case:\n\t$$\\phantom{\\meta{A}}\\entails \\meta{C}$$\nThis says that there is no valuation which makes all the sentences mentioned on the left side of `$\\entails$' true whilst making $\\meta{C}$ false. Since \\emph{no} sentences are mentioned on the left side of `$\\entails$' in this case, this just means that there is no valuation which makes $\\meta{C}$ false. Otherwise put, it says that every valuation makes $\\meta{C}$ true. Otherwise put, it says that $\\meta{C}$ is a tautology. Equally, to say that $\\meta{A}$ is a contradiction, we can write:\n\t$$\\meta{A} \\entails\\phantom{\\meta{C}}$$\nFor this says that no valuation makes $\\meta{A}$ true. \n\nSometimes, we will want to deny that there is a tautological entailment, and say something of this shape: \n\\begin{center}\n\tit is \\emph{not} the case that $\\meta{A}_1, \\ldots, \\meta{A}_n \\entails \\meta{C}$\n\\end{center}\nIn that case, we can just slash the turnstile through, and write: \n$$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\nentails\\meta{C}$$\nThis means that \\emph{some} valuation makes all of $\\meta{A}_1, \\ldots, \\meta{A}_n$ true whilst making $\\meta{C}$ false. (But note that it does \\emph{not} immediately follow that $\\meta{A}_1,\\ldots, \\meta{A}_n \\entails \\enot \\meta{C}$, for that would mean that \\emph{every} valuation makes all of $\\meta{A}_1, \\ldots, \\meta{A}_n$ true whilst making $\\meta{C}$ false.)\n\n\\section{`$\\entails$' versus `$\\eif$'}\nI now want to compare and contrast `$\\entails$' and `$\\eif$'. \n\nObserve: $\\meta{A} \\entails \\meta{C}$ iff no valuation of the atomic sentences makes $\\meta{A}$ true and $\\meta{C}$ false. \n\nObserve: $\\meta{A} \\eif \\meta{C}$ is a tautology iff no valuation of the atomic sentences  makes $\\meta{A} \\eif \\meta{C}$ false. 
Since a conditional is true except when its antecedent is true and its consequent false, $\\meta{A} \\eif \\meta{C}$ is a tautology iff no valuation makes $\\meta{A}$ true and $\\meta{C}$ false. \n\nCombining these two observations, we see that $\\meta{A} \\eif \\meta{C}$  is a tautology iff  $\\meta{A} \\entails \\meta{C}$. But there is a really, really important difference between `$\\entails$' and `$\\eif$':\n\t\\factoidbox{`$\\eif$' is a sentential connective of TFL.\\\\ `$\\entails$' is a symbol of augmented English.\n\t}\nIndeed, when `$\\eif$' is flanked with two TFL sentences, the result is a longer TFL sentence. By contrast, when we use `$\\entails$', we form a metalinguistic sentence that \\emph{mentions} the surrounding TFL sentences. \n\n\n\\practiceproblems\n\\problempart\nRevisit your answers to \\S\\ref{s:CompleteTruthTables}\\textbf{A}. Determine which sentences were tautologies, which were contradictions, and which were neither tautologies nor contradictions.\n\n\\\n\n\\problempart\nUse truth tables to determine whether these sentences are jointly consistent, or jointly inconsistent:\n\\begin{earg}\n\\item $A\\eif A$, $\\enot A \\eif \\enot A$, $A\\eand A$, $A\\eor A$ %consistent\n\\item $A\\eor B$, $A\\eif C$, $B\\eif C$ %consistent\n\\item $B\\eand(C\\eor A)$, $A\\eif B$, $\\enot(B\\eor C)$  %inconsistent\n\\item $A\\eiff(B\\eor C)$, $C\\eif \\enot A$, $A\\eif \\enot B$ %consistent\n\\end{earg}\n\n\\problempart\nUse truth tables to assess the following:\n\\begin{earg}\n\\item $A\\eif A \\therefore A$ %invalid\n\\item $A\\eif(A\\eand\\enot A) \\therefore \\enot A$ %valid\n\\item $A\\eor(B\\eif A) \\therefore\\enot A \\eif \\enot B$ %valid\n\\item $A\\eor B, B\\eor C, \\enot A \\therefore B \\eand C$ %invalid\n\\item $(B\\eand A)\\eif C, (C\\eand A)\\eif B \\therefore (C\\eand B)\\eif A$ %invalid\n\\end{earg}\n\n\\problempart\nAnswer each of the questions below and justify your answer.\n\\begin{earg}\n\\item Suppose that \\meta{A} and \\meta{B} are tautologically equivalent. What can you say about $\\meta{A}\\eiff\\meta{B}$?\n%\\meta{A} and \\meta{B} have the same truth value on every line of a complete truth table, so $\\meta{A}\\eiff\\meta{B}$ is true on every line. It is a tautology.\n\\item Suppose that $(\\meta{A}\\eand\\meta{B})\\eif\\meta{C}$ is neither a tautology nor a contradiction. What can you say about this: $\\meta{A}, \\meta{B} \\entails\\meta{C}$?\n%The sentence is false on some line of a complete truth table. On that line, \\meta{A} and \\meta{B} are true and \\meta{C} is false. So the argument is invalid.\n\\item Suppose that $\\meta{A}$, $\\meta{B}$ and $\\meta{C}$  are jointly tautologically inconsistent. What can you say about this: $(\\meta{A}\\eand\\meta{B}\\eand\\meta{C})$?\n\\item Suppose that \\meta{A} is a contradiction. What can you say about this: $\\meta{A}, \\meta{B} \\therefore \\meta{C}$?\n%Since \\meta{A} is false on every line of a complete truth table, there is no line on which \\meta{A} and \\meta{B} are true and \\meta{C} is false. So the argument is valid.\n\\item Suppose that \\meta{C} is a tautology. What can you say about this: $\\meta{A}, \\meta{B}\\entails \\meta{C}$?\n%Since \\meta{C} is true on every line of a complete truth table, there is no line on which \\meta{A} and \\meta{B} are true and \\meta{C} is false. So the argument is valid.\n\\item Suppose that \\meta{A} and \\meta{B} are tautologically equivalent. What can you say about $(\\meta{A}\\eor\\meta{B})$?\n%Not much. 
$(\\meta{A}\\eor\\meta{B})$ is a tautology if \\meta{A} and \\meta{B} are tautologies; it is a contradiction if they are contradictions; it is contingent if they are contingent.\n\\item Suppose that \\meta{A} and \\meta{B} are \\emph{not} tautologically equivalent. What can you say about this: $(\\meta{A}\\eor\\meta{B})$?\n%\\meta{A} and \\meta{B} have different truth values on at least one line of a complete truth table, and $(\\meta{A}\\eor\\meta{B})$ will be true on that line. On other lines, it might be true or false. So $(\\meta{A}\\eor\\meta{B})$ is either a tautology or it is contingent; it is \\emph{not} a contradiction.\n\\end{earg}\n\\problempart \nConsider the following principle:\n\t\\begin{ebullet}\n\t\t\\item Suppose $\\meta{A}$ and $\\meta{B}$ are tautologically equivalent. Suppose an argument contains $\\meta{A}$ (either as a premise, or as the conclusion). The validity of the argument would be unaffected, if we replaced $\\meta{A}$ with $\\meta{B}$.\n\t\\end{ebullet}\nIs this principle correct? Explain your answer.\n\n\n\n\\chapter{Truth table shortcuts}\nWith practice, you will quickly become adept at filling out truth tables. In this section, I want to provide (and justify) some shortcuts which will help you along the way. \n\n\\section{Working through truth tables}\nYou will quickly find that you do not need to copy the truth value of each atomic sentence, but can simply refer back to them. So you can speed things up by writing:\n\\begin{center}\n\\begin{tabular}{c c|d e e e e f}\n$P$&$Q$&$(P$&\\eor&$Q)$&\\eiff&\\enot&$P$\\\\\n\\hline\n T & T &  & T &  & \\TTbf{F} & F\\\\\n T & F &  & T &  & \\TTbf{F} & F\\\\\n F & T &  & T & & \\TTbf{T} & T\\\\\n F & F &  & F &  & \\TTbf{F} & T\n\\end{tabular}\n\\end{center}\nYou also know for sure that a disjunction is true whenever one of the disjuncts is true. So if you find a true disjunct, there is no need to work out the truth values of the other disjuncts. Thus you might offer:\n\\begin{center}\n\\begin{tabular}{c c|d e e e e e e f}\n$P$&$Q$& $(\\enot$ & $P$&\\eor&\\enot&$Q)$&\\eor&\\enot&$P$\\\\\n\\hline\n T & T & F & & F & F& & \\TTbf{F} & F\\\\\n T & F &  F & & T& T& &  \\TTbf{T} & F\\\\\n F & T & & &  & & & \\TTbf{T} & T\\\\\n F & F & & & & & &\\TTbf{T} & T\n\\end{tabular}\n\\end{center}\nEqually, you know for sure that a conjunction is false whenever one of the conjuncts is false. So if you find a false conjunct, there is no need to work out the truth value of the other conjunct. Thus you might offer:\n\\begin{center}\n\\begin{tabular}{c c|d e e e e e e f}\n$P$&$Q$&\\enot &$(P$&\\eand&\\enot&$Q)$&\\eand&\\enot&$P$\\\\\n\\hline\n T & T &  &  & &  & & \\TTbf{F} & F\\\\\n T & F &   &  &&  & & \\TTbf{F} & F\\\\\n F & T & T &  & F &  & & \\TTbf{T} & T\\\\\n F & F & T &  & F & & & \\TTbf{T} & T\n\\end{tabular}\n\\end{center}\nA similar short cut is available for conditionals. You immediately know that a conditional is true if either its consequent is true, or its antecedent is false. Thus you might present:\n\\begin{center}\n\\begin{tabular}{c c|d e e e e e f}\n$P$&$Q$& $((P$&\\eif&$Q$)&\\eif&$P)$&\\eif&$P$\\\\\n\\hline\n T & T & &  & & & & \\TTbf{T} & \\\\\n T & F &  &  & && & \\TTbf{T} & \\\\\n F & T & & T & & F & & \\TTbf{T} & \\\\\n F & F & & T & & F & &\\TTbf{T} & \n\\end{tabular}\n\\end{center}\nSo `$((P \\eif Q) \\eif P) \\eif P$' is a tautology. 
In fact, it is an instance of \\emph{Peirce's Law}, named after Charles Sanders Peirce.\n\n\\section{Testing for validity and entailment}\nIn \\S\\ref{s:semanticconcepts}, we saw how to use truth tables to test for validity. In that test, we look for \\emph{bad} lines: lines where the premises are all true and the conclusion is false. Now:\n\\begin{earg}\n\t\\item[\\textbullet] If the conclusion is true on a line, then that line is not bad. (And we don't need to evaluate anything \\emph{else} on that line to confirm this.)\n\t\\item[\\textbullet] If any premise is false on a line, then that line is not bad. (And we don't need to evaluate anything \\emph{else} on that line to confirm this.)\n\\end{earg}\nWith this in mind, we can speed up our tests for validity quite considerably. \n\nLet's consider how we might test the following:\n$$\\enot L \\eif (J \\eor L), \\enot L \\therefore J$$\nThe \\emph{first} thing we should do is evaluate the conclusion. If we find that the conclusion is \\emph{true} on some line, then that is not a bad line. So we can simply ignore the rest of the line. So, after our first stage, we are left with something like this:\n\\begin{center}\n\t\\begin{tabular}{c c|d e e e e f |d f|c}\n\t\t$J$&$L$&\\enot&$L$&\\eif&$(J$&\\eor&$L)$&\\enot&$L$&$J$\\\\\n\t\t\\hline\n\t\t%J   L   -   L      ->     (J   v   L)\n\t\tT & T & &&&&&&&& {T}\\\\\n\t\tT & F & &&&&&&&& {T}\\\\\n\t\tF & T & &&?&&&&?&& {F}\\\\\n\t\tF & F & &&?&&&&?&& {F}\n\t\\end{tabular}\n\\end{center}\nwhere the blanks indicate that we won't bother with any more investigation (since the line is not bad), and the question-marks indicate that we need to keep digging. \n\nThe easiest premise to evaluate is the second, so we do that next, and get:\n\\begin{center}\n\t\\begin{tabular}{c c|d e e e e f |d f|c}\n\t\t$J$&$L$&\\enot&$L$&\\eif&$(J$&\\eor&$L)$&\\enot&$L$&$J$\\\\\n\t\t\\hline\n\t\t%J   L   -   L      ->     (J   v   L)\n\t\tT & T & &&&&&&&& {T}\\\\\n\t\tT & F & &&&&&&&& {T}\\\\\n\t\tF & T & &&&&&&{F}&& {F}\\\\\n\t\tF & F & &&?&&&&{T}&& {F}\n\t\\end{tabular}\n\\end{center}\nNote that we no longer need to consider the third line on the table: it is certainly not bad, because some premise is false on that line. And finally, we complete the truth table:\n\\begin{center}\n\t\\begin{tabular}{c c|d e e e e f |d f|c}\n\t\t$J$&$L$&\\enot&$L$&\\eif&$(J$&\\eor&$L)$&\\enot&$L$&$J$\\\\\n\t\t\\hline\n\t\t%J   L   -   L      ->     (J   v   L)\n\t\tT & T & &&&&&&&& {T}\\\\\n\t\tT & F & &&&&&&&& {T}\\\\\n\t\tF & T & &&&&&&{F}& & {F}\\\\\n\t\tF & F & T &  & \\TTbf{F} &  & F & & {T} & & {F}\n\t\\end{tabular}\n\\end{center}\nThe truth table has no bad lines, so the argument is valid. Any valuation which makes every premise true makes the conclusion true.\n\nIt's probably worth illustrating the tactic again. Consider this argument:\n$$A\\eor B, \\enot (B\\eand C) \\therefore (A \\eor \\enot C)$$\nAgain, we start by evaluating the conclusion. Since this is a disjunction, it is true whenever either disjunct is true, so we can speed things along a bit.\n\\begin{center}\n\\begin{tabular}[t]{c c c| c|c|d e e f }\n$A$ & $B$ & $C$ & $A\\eor B$ & $\\enot (B \\eand C)$ & $(A$ &$\\eor $& $\\enot $ & $C)$\\\\\n\\hline\nT & T & T &  &  & & \\TTbf{T} & & \\\\\nT & T & F &  &  & & \\TTbf{T} & & \\\\\nT & F & T &  &  & & \\TTbf{T} & & \\\\\nT & F & F &  &  & & \\TTbf{T} & & \\\\\nF & T & T & ? & ? & & \\TTbf{F} &F & \\\\\nF & T & F &  &  && \\TTbf{T} & T& \\\\\nF & F & T & ? & ? 
&& \\TTbf{F} & F& \\\\\nF & F & F &  &  & & \\TTbf{T} & T& \\\\\n\\end{tabular}\n\\end{center}\n We can now ignore all but the two lines where the sentence after the turnstile is false. Evaluating the two sentences on the left of the turnstile, we get:\n \\begin{center}\n \t\\begin{tabular}[t]{c c c| c|d e e f |d e e f }\n \t\t$A$ & $B$ & $C$ & $A\\eor B$ & $\\enot ($&$B$&$ \\eand$&$ C)$ & $(A$ &$\\eor $& $\\enot $ & $C)$\\\\\n \t\t\\hline\n \t\tT & T & T &  & &&& & & \\TTbf{T} & & \\\\\n \t\tT & T & F &  & &&& & & \\TTbf{T} & & \\\\\n \t\tT & F & T &  & &&& & & \\TTbf{T} & & \\\\\n \t\tT & F & F &  & &&& & & \\TTbf{T} & & \\\\\n \t\tF & T & T & \\textbf{T} & \\textbf{F}&&T& & & \\TTbf{F} &F & \\\\\n \t\tF & T & F & &&& & && \\TTbf{T} & T& \\\\\n \t\tF & F & T & \\textbf{F} & &&& & & \\TTbf{F} & F& \\\\\n \t\tF & F & F & &&&& && \\TTbf{T} & T& \\\\\n \t\\end{tabular}\n \\end{center}\nSo the entailment holds! And our shortcuts saved us a \\emph{lot} of work. \n \nI have been discussing shortcuts in testing for  validity. But exactly the same shortcuts can be used in testing for tautological entailment. By employing a similar notion of bad lines, you can save yourself a huge amount of work.\n \n\\practiceproblems\n\\problempart\nUsing shortcuts, check whether each sentence is a tautology, a contradiction, or neither. \n\\begin{earg}\n\t\\item $\\enot B \\eand B$ %contra\n\t\\item $\\enot D \\eor D$ %taut\n\t\\item $(A\\eand B) \\eor (B\\eand A)$ %contingent\n\t\\item $\\enot[A \\eif (B \\eif A)]$ %contra\n\t\\item $A \\eiff [A \\eif (B \\eand \\enot B)]$ %contra\n\t\\item $\\enot(A\\eand B) \\eiff A$ %contingent\n\t\\item $A\\eif(B\\eor C)$ %contingent\n\t\\item $(A \\eand\\enot A) \\eif (B \\eor C)$ %tautology\n\t\\item $(B\\eand D) \\eiff [A \\eiff(A \\eor C)]$%contingent\n\\end{earg}\n\n\n\n\n\n\\chapter{Partial truth tables}\\label{s:PartialTruthTable}\n\nSometimes, we do not need to know what happens on every line of a truth table. Sometimes, just a single line or two will do. \n\n\\paragraph{Tautology.} \nIn order to show that a sentence is a tautology, we need to show that it is true on every valuation. That is to say, we need to know that it comes out true on every line of the truth table. So we need a complete truth table. \n\nTo show that a sentence is \\emph{not} a tautology, however, we only need one line: a line on which the sentence is false. Therefore, in order to show that some sentence is not a tautology, it is enough to provide a single valuation---a single line of the truth table---which makes the sentence false. \n\nSuppose that we want to show that the sentence `$(U \\eand T) \\eif (S \\eand W)$' is \\emph{not} a tautology. We set up a \\define{partial truth table}:\n\\begin{center}\n\\begin{tabular}{c c c c |d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n   &   &   &   &    &   &    &\\TTbf{F}&    &   &   \n\\end{tabular}\n\\end{center}\nI have only left space for one line, rather than 16, since we are only looking for one line, on which the sentence is false (hence, also, the `F'). \n\nThe main logical operator of the sentence is a conditional. In order for the conditional to be false, the antecedent must be true and the consequent must be false. 
So we fill these in on the table:\n\\begin{center}\n\\begin{tabular}{c c c c |d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n   &   &   &   &    &  T  &    &\\TTbf{F}&    &   F &   \n\\end{tabular}\n\\end{center}\nIn order for `$(U\\eand T)$' to be true, both `$U$' and `$T$' must be true.\n\\begin{center}\n\\begin{tabular}{c c c c|d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n   & T & T &   &  T &  T  & T  &\\TTbf{F}&    &   F &   \n\\end{tabular}\n\\end{center}\nNow we just need to make `$(S\\eand W)$' false. To do this, we need to make at least one of `$S$' and `$W$' false. We can make both `$S$' and `$W$' false if we want. All that matters is that the whole sentence turns out false on this line. Making an arbitrary decision, we finish the table in this way:\n\\begin{center}\n\\begin{tabular}{c c c c|d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n F & T & T & F &  T &  T  & T  &\\TTbf{F}&  F &   F & F  \n\\end{tabular}\n\\end{center}\nSo we now have a partial truth table, which shows that `$(U \\eand T) \\eif (S \\eand W)$' is not a tautology. Put otherwise, we have shown that there is a valuation which makes `$(U \\eand T) \\eif (S \\eand W)$' false, namely, the valuation which makes `$S$' false, `$T$' true, `$U$' true and `$W$' false. \n\n\\paragraph{Contradiction.}\nShowing that something is a contradiction requires a complete truth table: we need to show that there is no valuation which makes the sentence true; that is, we need to show that the sentence is false on every line of the truth table. \n\nHowever, to show that something is \\emph{not} a contradiction, all we need to do is find a valuation which makes the sentence true, and a single line of a truth table will suffice. We can illustrate this with the same example.\n\\begin{center}\n\\begin{tabular}{c c c c|d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n  &  &  &  &   &   &   &\\TTbf{T}&  &  &\n\\end{tabular}\n\\end{center}\nTo make the sentence true, it will suffice to ensure that the antecedent is false. Since the antecedent is a conjunction, we can just make one of its conjuncts false. Making an arbitrary choice, let's make `$U$' false; we can then assign any truth value we like to the other atomic sentences.\n\\begin{center}\n\\begin{tabular}{c c c c|d e e e e e f}\n$S$&$T$&$U$&$W$&$(U$&\\eand&$T)$&\\eif    &$(S$&\\eand&$W)$\\\\\n\\hline\n F & T & F & F &  F &  F  & T  &\\TTbf{T}&  F &   F & F\n\\end{tabular}\n\\end{center}\n\n\\paragraph{Tautological equivalence.}\nTo show that two sentences are tautologically equivalent, we must show that the sentences have the same truth value on every valuation. So this requires a  complete truth table.\n\nTo show that two sentences are \\emph{not} tautologically equivalent, we only need to show that there is a valuation on which they have different truth values. So this requires only a one-line partial truth table: make the table so that one sentence is true and the other false.\n\n\\paragraph{Consistency.}\nTo show that some sentences are jointly consistent, we must show that there is a valuation which makes all of the sentences true. So this requires only a partial truth table with a single line. \n\nTo show that some sentences are jointly inconsistent, we must show that there is no valuation which makes all of the sentences true. So this requires a complete truth table: you must show that on every row of the table at least one of the sentences is false.
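\nFor example (a supplementary illustration, again recording only whole-sentence truth values for brevity): to show that `$A \\eif B$' and `$\\enot B$' are jointly consistent, a single line suffices:\n\\begin{center}\n\\begin{tabular}{c c|c|c}\n$A$&$B$&$A \\eif B$&$\\enot B$\\\\\n\\hline\n F & F & \\TTbf{T} & \\TTbf{T}\n\\end{tabular}\n\\end{center}\nThe valuation which makes `$A$' false and `$B$' false makes both sentences true. So they are jointly consistent.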
\n\\paragraph{Validity/entailment.}\nTo show that an argument is valid, we must show that there is no valuation which makes all of the premises true and the conclusion false. So this requires a complete truth table. (Likewise for entailment.)\n\nTo show that an argument is \\emph{invalid}, we must show that there is a valuation which makes all of the premises true and the conclusion false. So this requires only a one-line partial truth table on which all of the premises are true and the conclusion is false. (Likewise for a failure of entailment.)\n\n\n\\\n\\\\This table summarises what is required:\n\n\\begin{center}\n\\begin{tabular}{l l l}\n%\\cline{2-3}\n & \\textbf{Yes} & \\textbf{No}\\\\\n \\hline\n%\\cline{2-3}\ntautology? & complete truth table & one-line partial truth table\\\\\ncontradiction? &  complete truth table  & one-line partial truth table\\\\\nequivalent? & complete truth table & one-line partial truth table\\\\\nconsistent? & one-line partial truth table & complete truth table\\\\\nvalid? & complete truth table & one-line partial truth table\\\\\nentailment? & complete truth table & one-line partial truth table\\\\\n\\end{tabular}\n\\end{center}\n\\label{table.CompleteVsPartial}\n\n\n\\practiceproblems\n\\problempart\nUse complete or partial truth tables (as appropriate) to determine whether these pairs of sentences are tautologically equivalent:\n\\begin{earg}\n\\item $A$, $\\enot A$ %No\n\\item $A$, $A \\eor A$ %Yes\n\\item $A\\eif A$, $A \\eiff A$ %Yes\n\\item $A \\eor \\enot B$, $A\\eif B$ %No\n\\item $A \\eand \\enot A$, $\\enot B \\eiff B$ %Yes\n\\item $\\enot(A \\eand B)$, $\\enot A \\eor \\enot B$ %Yes\n\\item $\\enot(A \\eif B)$, $\\enot A \\eif \\enot B$ %No\n\\item $(A \\eif B)$, $(\\enot B \\eif \\enot A)$ %Yes\n\\end{earg}\n\n\\problempart\nUse complete or partial truth tables (as appropriate) to determine whether these sentences are jointly tautologically consistent, or jointly tautologically inconsistent:\n\\begin{earg}\n\\item $A \\eand B$, $C\\eif \\enot B$, $C$ %inconsistent\n\\item $A\\eif B$, $B\\eif C$, $A$, $\\enot C$ %inconsistent\n\\item $A \\eor B$, $B\\eor C$, $C\\eif \\enot A$ %consistent\n\\item $A$, $B$, $C$, $\\enot D$, $\\enot E$, $F$ %consistent\n\\end{earg}\n\n\\problempart\nUse complete or partial truth tables (as appropriate) to determine whether each argument is valid or invalid:\n\\begin{earg}\n\\item $A\\eor\\bigl[A\\eif(A\\eiff A)\\bigr] \\therefore A$ %invalid\n\\item $A\\eiff\\enot(B\\eiff A) \\therefore A$ %invalid\n\\item $A\\eif B, B \\therefore A$ %invalid\n\\item $A\\eor B, B\\eor C, \\enot B \\therefore A \\eand C$ %valid\n\\item $A\\eiff B, B\\eiff C \\therefore A\\eiff C$ %valid\n\\end{earg}", "meta": {"hexsha": "ac6caa4a338787bc72f8c2d80ce1b0d4a7249171", "size": 53583, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "forallx-cam-truthtables.tex", "max_stars_repo_name": "OpenLogicProject/forallx-cam", "max_stars_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-02-19T01:39:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-04T05:59:31.000Z", "max_issues_repo_path": "forallx-cam-truthtables.tex", "max_issues_repo_name": "ryanmichaelhebert/forallx-cam", "max_issues_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_issues_repo_licenses":
["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "forallx-cam-truthtables.tex", "max_forks_repo_name": "ryanmichaelhebert/forallx-cam", "max_forks_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-09-08T05:09:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T10:13:13.000Z", "avg_line_length": 65.2655298417, "max_line_length": 792, "alphanum_fraction": 0.6933728981, "num_tokens": 17155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8333245911726382, "lm_q1q2_score": 0.5961804189138009}}
{"text": "\\documentclass[journal,hidelinks]{IEEEtran}\n\\usepackage[utf8]{inputenc}\n\\usepackage[\n  pdftitle={Assignment \\#3},\n  pdfauthor={Andrei Purcarus},\n  pdfsubject={ECSE-543 -- Numerical Methods in EE}\n]{hyperref}\n\\usepackage{graphicx}\n\\usepackage[all]{hypcap}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{cleveref}\n\\usepackage{indentfirst}\n\\usepackage[per-mode=symbol]{siunitx}\n\\usepackage{listings}\n\\lstset{showstringspaces=false}\n\n\\title{ECSE-543 \\\\ Numerical Methods in EE \\\\ Assignment \\#3}\n\\author{Andrei~Purcarus,~260631911,~\\IEEEmembership{McGill~University}}\n\n\\begin{document}\n\\sloppy\n\n\\maketitle\n\n\\section*{Code Listings and Unit Testing}\n\nThe source code used for this assignment is listed in the appendices. In order to save space, we did not include the unit tests. For the full code, see the \\href{https://github.com/Gripnook/ECSE543-F17-A3}{GitHub repository}.\n\n\\Cref{sec:main} defines the main function, which contains the core code that produces answers to the questions in this assignment. \\Cref{sec:polynomial-h,sec:polynomial-cpp} define a polynomial class. \\Cref{sec:interpolation-h,sec:interpolation-cpp} define functions that perform interpolation on data. \\Cref{sec:solver-h,sec:solver-cpp} define nonlinear equation solvers. \\Cref{sec:matrix} defines a matrix library. \\Cref{sec:cholesky} defines functions that perform Cholesky decomposition using banded and non-banded methods. \\Cref{sec:matrix-solver} defines a generic solver for systems of equations that have a positive-definite coefficient matrix. Finally, \\Cref{sec:integral-h,sec:integral-cpp} define functions that perform numerical integration.\n\n\\section*{Question 1}\n\n\\subsection*{(a)}\n\nWe started by performing interpolation on the first $6$ points in the B-H data for M19 steel using a Lagrange polynomial on the entire domain. To do this, we used the Lagrange interpolation function defined in \\Cref{sec:interpolation-h,sec:interpolation-cpp}. The result is shown in \\Cref{fig:q1a}. From this figure, we can see that the interpolated curve matches the data well over this range, since it is monotonically increasing and does not have any significant ``wiggles''.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=\\columnwidth]{question-1/q1a.eps}\n  \\caption{A Lagrange interpolation of the first $6$ points in the B-H data for M19 steel.}\n  \\label{fig:q1a}\n\\end{figure}\n\n\\subsection*{(b)}\n\nNext, we performed the same Lagrange polynomial interpolation on the $6$ points at $B = \\SI{0.0}{\\tesla}$, $B = \\SI{1.3}{\\tesla}$, $B = \\SI{1.4}{\\tesla}$, $B = \\SI{1.7}{\\tesla}$, $B = \\SI{1.8}{\\tesla}$, and $B = \\SI{1.9}{\\tesla}$.  The results are shown in \\Cref{fig:q1b}. From this figure, we can see that the result matches the data well for $B \\ge \\SI{1.3}{\\tesla}$, but has a large deviation between $B = \\SI{0.0}{\\tesla}$ and $B = \\SI{1.3}{\\tesla}$ that does not seem plausible for the given data. 
\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-1/q1b.eps}
  \caption{A Lagrange interpolation of the $6$ points at $B = \SI{0.0}{\tesla}$, $B = \SI{1.3}{\tesla}$, $B = \SI{1.4}{\tesla}$, $B = \SI{1.7}{\tesla}$, $B = \SI{1.8}{\tesla}$, and $B = \SI{1.9}{\tesla}$ in the B-H data for M19 steel.}
  \label{fig:q1b}
\end{figure}

\subsection*{(c)}

As an alternative to full-domain Lagrange interpolation, we also used cubic Hermite interpolation on each of the $5$ sub-domains between the $6$ points used in the previous section, using the Hermite interpolation function shown in \Cref{sec:interpolation-h,sec:interpolation-cpp}. This required values for the derivatives of the function at the given points, and the simplest approach is to estimate those derivatives numerically from the given data. Therefore, we used the slope between the points immediately before and after a given point to estimate the derivative at that point. For $B = \SI{0.0}{\tesla}$ and $B = \SI{1.9}{\tesla}$, we instead used a one-sided estimate of the derivative, since points were not available on both sides. The results are shown in \Cref{fig:q1c}. From this figure, we can see that the interpolation is better than the one using Lagrange polynomials. However, it still has some problems: due to the large gap in the data, the curve initially dips below $\SI{0.0}{\ampere\per\meter}$, which is not a realistic scenario.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-1/q1c.eps}
  \caption{A piecewise cubic Hermite interpolation of the $6$ points at $B = \SI{0.0}{\tesla}$, $B = \SI{1.3}{\tesla}$, $B = \SI{1.4}{\tesla}$, $B = \SI{1.7}{\tesla}$, $B = \SI{1.8}{\tesla}$, and $B = \SI{1.9}{\tesla}$ in the B-H data for M19 steel.}
  \label{fig:q1c}
\end{figure}

\section*{Question 2}

\subsection*{(a)}

We started by deriving an equation for the flux $\Psi$ in the magnetic circuit. From Ampere's law, we have
\begin{equation}
N I = H_c l_c + H_g l_g
\end{equation}
Since the cross-sectional area $S$ is uniform in the solenoid, so are the magnetic flux $\Psi$ and the magnetic flux density $B$. Given that $H_c$ depends on $B$ according to the non-linear M19 steel data, and that $H_g$ depends on $B$ in a linear fashion, we can write
\begin{align}
N I &= H_c(B) l_c + B l_g / \mu_0 \\
N I &= H_c(\Psi / S) l_c + \Psi l_g / (\mu_0 S)
\end{align}
Thus, the equation to solve is $f(\Psi) = 0$, where
\begin{align}
f(\Psi) &= H_c(\Psi / S) l_c + \Psi l_g / (\mu_0 S) - N I \\
f'(\Psi) &= H_c'(\Psi / S) l_c / S + l_g / (\mu_0 S)
\end{align}

\subsection*{(b)}

With the equations derived in the previous section, we used the Newton-Raphson method to solve for the flux $\Psi$. The code that does this is shown in \Cref{sec:solver-h,sec:solver-cpp}; a language-agnostic sketch of the iteration is given below. We used a piecewise linear interpolation for $H_c(B)$ and $H_c'(B)$. We then started the solver with $\Psi = \SI{0.0}{\tesla\meter^2}$ and stopped when $|f(\Psi) / f(0)| < 10^{-6}$. The resulting output of the program is shown in \Cref{fig:q2b}, which shows that the final result was $\Psi = \SI{0.000161269}{\tesla\meter^2}$, corresponding to a flux density of $B = \SI{1.61269}{\tesla}$.
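The Newton-Raphson update used here is $\Psi_{k+1} = \Psi_k - f(\Psi_k)/f'(\Psi_k)$. The following minimal Python sketch of the loop is our own illustration, not the C++ solver from the appendices; the stopping rule mirrors the relative criterion above, and for the magnetic circuit $f$ and its derivative would wrap the piecewise linear $H_c$ interpolant.

\begin{lstlisting}[language=Python]
def newton_raphson(f, fprime, x0=0.0, rel_tol=1e-6, max_iters=100):
    # Stop when |f(x)| has dropped below rel_tol * |f(x0)|.
    f0 = abs(f(x0))
    x = x0
    for _ in range(max_iters):
        fx = f(x)
        if abs(fx) <= rel_tol * f0:
            return x
        x = x - fx / fprime(x)  # Newton step
    raise RuntimeError("Newton-Raphson did not converge")
\end{lstlisting}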
\Cref{fig:q2b} also shows that convergence took only $3$ iterations.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=0.8\columnwidth]{question-2/q2b.png}
  \caption{The output of the program for the magnetic circuit solver using the Newton-Raphson method.}
  \label{fig:q2b}
\end{figure}

\subsection*{(c)}

We next tried solving the same magnetic circuit using successive substitution. We first modified the equation to be in the form $\Psi = f(\Psi)$, as
\begin{align}
\Psi l_g / (\mu_0 S) &= N I - H_c(\Psi / S) l_c \\
\Psi &= \mu_0 S / l_g (N I - H_c(\Psi / S) l_c)
\end{align}

We tried using the successive substitution solver shown in \Cref{sec:solver-h,sec:solver-cpp}, but the problem failed to converge to a solution. In order to make the problem converge, we performed an inversion of the given equation, as
\begin{align}
H_c(\Psi / S) l_c &= N I - \Psi l_g / (\mu_0 S) \\
H_c(\Psi / S) &= (N I - \Psi l_g / (\mu_0 S)) / l_c \\
\Psi &= S H_c^{-1}((N I - \Psi l_g / (\mu_0 S)) / l_c)
\end{align}

This reformulation of the problem did converge using successive substitution, and the results are shown in \Cref{fig:q2c}. This figure shows that the solver converged to the same solution of $\Psi = \SI{0.000161269}{\tesla\meter^2}$, but convergence was much slower, taking $14$ iterations.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=0.8\columnwidth]{question-2/q2c.png}
  \caption{The output of the program for the magnetic circuit solver using the successive substitution method.}
  \label{fig:q2c}
\end{figure}

\section*{Question 3}

\subsection*{(a)}

We next attempted to solve the non-linear electric circuit shown in \Cref{fig:q3-circuit} by using a node voltage method to derive a system of equations.
We defined the node voltages $v_1$ and $v_2$ as shown in the figure, and derived the system $\\boldsymbol{f}(\\boldsymbol{v}) = \\boldsymbol{0}$, where\n\\begin{align}\n\\boldsymbol{f}(\\boldsymbol{v}) &=\n\\begin{bmatrix}\n(v_1 - E) / R + I_{sA} (e^{(v_1 - v_2)/v_t} - 1) \\\\\n-I_{sA} (e^{(v_1 - v_2)/v_t} - 1) + I_{sB} (e^{v_2/v_t} - 1) \\\\\n\\end{bmatrix}\n\\end{align}\nNote that we used $v_t = kT/q = \\SI{25}{\\milli\\volt}$.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.5\\columnwidth]{question-3/q3-circuit.pdf}\n  \\caption{The non-linear electric circuit used in Question 3.}\n  \\label{fig:q3-circuit}\n\\end{figure}\n\n\\subsection*{(b)}\n\nIn order to solve the system of equations using the Newton-Raphson method, we also derived a formula for the Jacobian of $\\boldsymbol{f}$ as\n\\begin{align}\n\\boldsymbol{J}(\\boldsymbol{v}) &=\n\\begin{bmatrix}\n\\partial f_1 / \\partial v_1 & \\partial f_1 / \\partial v_2 \\\\\n\\partial f_2 / \\partial v_1 & \\partial f_2 / \\partial v_2 \\\\\n\\end{bmatrix}\n\\end{align}\nwhere\n\\begin{align}\n\\partial f_1 / \\partial v_1 &= 1 / R + I_{sA} e^{(v_1 - v_2)/v_t} / v_t \\\\\n\\partial f_1 / \\partial v_2 &= -I_{sA} e^{(v_1 - v_2)/v_t} / v_t \\\\\n\\partial f_2 / \\partial v_1 &= -I_{sA} e^{(v_1 - v_2)/v_t} / v_t \\\\\n\\partial f_2 / \\partial v_2 &= I_{sA} e^{(v_1 - v_2)/v_t} / v_t + I_{sB} e^{v_2/v_t} / v_t\n\\end{align}\n\nSince this Jacobian is positive definite, we used the Cholesky decomposition based solver we implemented in Assignment \\#1 to solve the update equation given by\n\\begin{align}\n\\boldsymbol{J}(\\boldsymbol{v}^{k}) \\boldsymbol{v}^{k+1} = \\boldsymbol{J}(\\boldsymbol{v}^{k}) \\boldsymbol{v}^{k} - \\boldsymbol{f}(\\boldsymbol{v}^{k})\n\\end{align}\nThe code that does this is shown in \\Cref{sec:matrix-solver}. In addition, the Newton-Raphson solver for matrix equations is shown in \\Cref{sec:solver-h,sec:solver-cpp}. We defined the error $\\epsilon_k$ at each iteration as $\\epsilon_k = \\sum_{i} |f_i(\\boldsymbol{v}^{k})| / \\sum_{i} |f_i(\\boldsymbol{v}^{0})|$, and stopped when $\\epsilon_k < 10^{-6}$.\n\nThe results of the solver are shown in \\Cref{fig:q3-voltages,fig:q3-residual,fig:q3-error,fig:q3-error-ratio}. In \\Cref{fig:q3-voltages}, we can see that the voltages across the diodes converged to values of $V_A = \\SI{0.1075633}{\\volt}$ and $V_B = \\SI{0.0905707}{\\volt}$ in only $6$ iterations. \\Cref{fig:q3-residual} shows the values of the function $\\boldsymbol{f}$ across these $6$ iterations. Finally, \\Cref{fig:q3-error} shows the error for each iteration.\n\nIn order to evaluate if the convergence was quadratic, we plotted $\\epsilon_k / \\epsilon_{k-1}^2$ in \\Cref{fig:q3-error-ratio}. This figure shows that starting with iteration $2$, this ratio tended asymptotically towards a constant value. This indicates that the convergence is indeed quadratic after a few iterations. Unfortunately, the convergence of the function was so quick that we could not observe the effect in more detail.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=\\columnwidth]{question-3/voltages.eps}\n  \\caption{Diode voltages vs. iteration for the Newton-Raphson solver used to solve the circuit in \\Cref{fig:q3-circuit}.}\n  \\label{fig:q3-voltages}\n\\end{figure}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=\\columnwidth]{question-3/residual.eps}\n  \\caption{Residual vs. 
iteration for the Newton-Raphson solver used to solve the circuit in \Cref{fig:q3-circuit}.}
  \label{fig:q3-residual}
\end{figure}

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-3/error.eps}
  \caption{Error vs. iteration for the Newton-Raphson solver used to solve the circuit in \Cref{fig:q3-circuit}.}
  \label{fig:q3-error}
\end{figure}

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-3/error_ratio.eps}
  \caption{Error ratio vs. iteration for the Newton-Raphson solver used to solve the circuit in \Cref{fig:q3-circuit}.}
  \label{fig:q3-error-ratio}
\end{figure}

\section*{Question 4}
\subsection*{(a)}

Next, we performed numerical integration on the function $f(x) = \cos(x)$ from $x = 0$ to $x = 1$ using one-point Gauss-Legendre integration over $N$ equal segments. The code that performs this computation is shown in \Cref{sec:integral-h,sec:integral-cpp}. The results are shown in \Cref{fig:q4a}, which shows the natural logarithm of the absolute error plotted against the natural logarithm of the number of segments. This figure shows a clear linear trend, with a mean slope of $-2.0031$. This indicates that the error is approximately proportional to $1/N^2$, as we would expect since the one-point method integrates linear functions exactly and thus the error should be quadratic.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-4/q4a.eps}
  \caption{$\ln(E)$ vs. $\ln(N)$ for the integral of $f(x) = \cos(x)$ from $x = 0$ to $x = 1$ using one-point Gauss-Legendre integration over $N$ equal segments.}
  \label{fig:q4a}
\end{figure}

\subsection*{(b)}

Next, we applied the same method to the function $f(x) = \ln(x)$ from $x = 0$ to $x = 1$. The results are shown in \Cref{fig:q4b}, which shows the natural logarithm of the absolute error plotted against the natural logarithm of the number of segments. This figure shows a clear linear trend, with a mean slope of $-0.9981$. This indicates that the error is approximately proportional to $1/N$. Thus the error for this function has a lower order than for $f(x) = \cos(x)$. This can be explained by the fact that $f(x) = \ln(x)$ is unbounded on the integration interval. Indeed, since the second derivative tends to $-\infty$ as $x \to 0$, we cannot use a simple Taylor series to show that the error should be quadratic on this interval.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-4/q4b.eps}
  \caption{$\ln(E)$ vs. $\ln(N)$ for the integral of $f(x) = \ln(x)$ from $x = 0$ to $x = 1$ using one-point Gauss-Legendre integration over $N$ equal segments.}
  \label{fig:q4b}
\end{figure}

\subsection*{(c)}

Finally, we attempted to integrate the function $f(x) = \ln(x)$ as accurately as possible using a one-point Gauss-Legendre method with only $10$ segments. We wrote a function that accepts a vector of segment lengths and performs this computation. The code for this is shown in \Cref{sec:integral-h,sec:integral-cpp}.

In order to perform a methodical study, we decided to divide the lengths according to a geometric ratio, which would make the segments close to the singularity at $x = 0$ shorter compared to the segments close to $x = 1$. Thus, we used segments of the form $\alpha$, $\alpha r$, $\ldots$, $\alpha r^9$. We varied $r$ from $1$ to $2$ and computed $\alpha$ using
\begin{align}
\alpha (1 + r + \cdots
+ r^{9}) &= 1 \\
\alpha (r^{10} - 1) / (r - 1) &= 1 \\
\alpha &= (r - 1) / (r^{10} - 1)
\end{align}

The results of this approach are shown in \Cref{fig:q4c}. This figure shows that the error reaches a minimum of $0.0106114$ at $r = 1.45$, which corresponds to $\alpha = 0.0112262$.

\begin{figure}[!htb]
  \centering
  \includegraphics[width=\columnwidth]{question-4/q4c.eps}
  \caption{$E$ vs. $r$ for the integral of $f(x) = \ln(x)$ from $x = 0$ to $x = 1$ using one-point Gauss-Legendre integration over $10$ unequal segments.}
  \label{fig:q4c}
\end{figure}

\newpage
\onecolumn

\begin{appendices}

\section{Main.cpp}
\label{sec:main}
\lstinputlisting[language=C++]{src/main.cpp}
\newpage

\section{Polynomial.h}
\label{sec:polynomial-h}
\lstinputlisting[language=C++]{src/polynomial.h}
\newpage

\section{Polynomial.cpp}
\label{sec:polynomial-cpp}
\lstinputlisting[language=C++]{src/polynomial.cpp}
\newpage

\section{Interpolation.h}
\label{sec:interpolation-h}
\lstinputlisting[language=C++]{src/interpolation.h}
\newpage

\section{Interpolation.cpp}
\label{sec:interpolation-cpp}
\lstinputlisting[language=C++]{src/interpolation.cpp}
\newpage

\section{Solver.h}
\label{sec:solver-h}
\lstinputlisting[language=C++]{src/solver.h}
\newpage

\section{Solver.cpp}
\label{sec:solver-cpp}
\lstinputlisting[language=C++]{src/solver.cpp}
\newpage

\section{Matrix.h}
\label{sec:matrix}
\lstinputlisting[language=C++]{src/matrix.h}
\newpage

\section{Cholesky.h}
\label{sec:cholesky}
\lstinputlisting[language=C++]{src/cholesky.h}
\newpage

\section{Matrix-solver.h}
\label{sec:matrix-solver}
\lstinputlisting[language=C++]{src/matrix-solver.h}
\newpage

\section{Integral.h}
\label{sec:integral-h}
\lstinputlisting[language=C++]{src/integral.h}
\newpage

\section{Integral.cpp}
\label{sec:integral-cpp}
\lstinputlisting[language=C++]{src/integral.cpp}
\newpage

\end{appendices}

\end{document}
{"text": "\\documentclass[PHIL101-Textbook.tex]{subfiles}\n\n\\begin{document}\n\n\\chapter{Quick Reference}\n\\label{app.solutions}\n\n\n\\section*{Truth Table Connectives}\n\\label{app.CharacteristicTTs}\n\n\\indent\n\\begin{tabular}{c|c}\n\\script{A} & \\enot\\script{A}\\\\\n\\hline\n\\vT & \\vF\\\\\n\\vF & \\vT \n\\end{tabular}\n\\hfill\n\\begin{tabular}{c|c|c|c|c|c}\n\\script{A} & \\script{B} & (\\script{A}\\eand\\script{B}) & (\\script{A}\\eor\\script{B}) & (\\script{A}\\eif\\script{B}) & (\\script{A}\\eiff\\script{B})\\\\\n\\hline\n\\vT & \\vT & \\vT & \\vT & \\vT & \\vT\\\\\n\\vT & \\vF & \\vF & \\vT & \\vF & \\vF\\\\\n\\vF & \\vT & \\vF & \\vT & \\vT & \\vF\\\\\n\\vF & \\vF & \\vF & \\vF & \\vT & \\vT\n\\end{tabular}\n\\hfill\n\n%\\vfill\n%\\newpage\n\n\n\\section*{Symbolisation}\n%\\begin{center}\n\\label{app.symbolisation}\n\\begin{tabular*}{\\textwidth}{rl}\n\\multicolumn{2}{c}{\\textsc{Connectives} (chapter \\ref{ch.TFL})}\\\\ \\\\\nIt is not the case that $P$. & $\\enot P$\\\\\nEither $P$, or $Q$. & $(P \\eor Q)$\\\\\nNeither $P$, nor $Q$. & $\\enot(P \\eor Q)$\\ or \\ $(\\enot P \\eand \\enot Q)$\\\\\nBoth $P$, and $Q$. & $(P \\eand Q)$\\\\\nIf $P$, then $Q$. $Q$ if $P$. & $(P \\eif Q)$\\\\\n$P$ only if $Q$. Only if $Q$, $P$. & $(P \\eif Q)$\\ or \\ $(\\enot Q \\eif \\enot P)$\\\\\n$P$ if and only if $Q$. & $(P \\eiff Q)$\\\\\nUnless $P$, $Q$. $Q$ unless $P$. & $(\\enot P \\eif Q)$\\\\\n\\\\\n\\multicolumn{2}{c}{\\label{SymbolizingPredicates}\\textsc{Predicates} (chapter \\ref{ch.PL})}\\\\ \\\\\nAll $F$s are $G$s. & $\\forall x(Fx \\eif Gx)$\\\\\nSome $F$s are $G$s. & $\\exists x(Fx \\eand Gx)$\\\\\nNot all $F$s are $G$s. & $\\enot\\forall x(Fx \\eif Gx)$\\ or\\ $\\exists x(Fx \\eand \\enot Gx)$\\\\\nNo $F$s are $G$s. & $\\forall x(Fx \\eif\\enot Gx)$\\ or\\ $\\enot\\exists x(Fx \\eand Gx)$\\\\\n\n\\end{tabular*}\n%\\end{center}\n\n\n\\newpage\n\n\\section*{Truth Trees}\n\n\\begin{tabular}{cc}\n\\textbf{Conjunction}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\meta{A}\\eand\\meta{B}, checked\n\t\t[\\meta{A} , just={$i$ \\eand}\n\t\t[\\meta{B} , just={$i$ \\eand}\n\t\t]]\n\t]\n\t\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot(\\meta{A}\\eand\\meta{B}), checked\n\t\t\t[\\enot\\meta{A} , just={$i$, \\enot\\eand}]\n\t\t\t[\\enot\\meta{B} , just={$i$, \\enot\\eand}]\n\t]\n\t\\end{prooftree}\n}\\\\\n\\textbf{Disjunction}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\meta{A}\\eor\\meta{B}, checked\n\t\t[\\meta{A} , just={$i$, \\eor}]\n\t\t[\\meta{B} , just={$i$, \\eor}]\n\t]\n\t\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot(\\meta{A}\\eor\\meta{B}), checked\n\t\t[\\enot\\meta{A} , just={$i$, \\enot\\eor}\n\t\t[\\enot\\meta{B} , just={$i$, \\enot\\eor}\n\t\t]\n\t\t]\n\t]\n\t\\end{prooftree}\n}\\\\\n\\textbf{Conditional}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\meta{A}\\eif\\meta{B}, checked\n\t\t[\\enot\\meta{A} , just={$i$, \\eif}]\n\t\t[     \\meta{B} , just={$i$, \\eif}]\n\t]\n\t\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot(\\meta{A}\\eif\\meta{B}), checked\n\t\t[     \\meta{A} , just={$i$, \\enot\\eif}\n\t\t[\\enot\\meta{B} , just={$i$, \\enot\\eif}\n\t\t]]\n\t]\n\t\\end{prooftree}\n}\\\\\n\\textbf{Negation}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot\\enot\\meta{A}, checked\n\t\t[\\meta{A} , just={$i$, \\enot\\enot}]\n\t]\n\t\\end{prooftree}\n}\\\\\n\\textbf{Biconditional}\\\\\n{\\begin{prooftree}\n\t{not line 
numbering}\n\t[\\meta{A}\\eiff\\meta{B}, checked\n\t\t[\\meta{A}, just={$i$, \\eiff}\n\t\t[\\meta{B}, just={$i$, \\eiff}\n\t\t]]\n\t\t[\\enot\\meta{A}\n\t\t[\\enot\\meta{B}\n\t\t]]\n\t]\n\t\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot(\\meta{A}\\eiff\\meta{B}), checked\n\t\t[     \\meta{A}, just={$i$, \\enot\\eiff}\n\t\t[\\enot\\meta{B}, just={$i$, \\enot\\eiff}\n\t\t]]\n\t\t[\\enot\\meta{A}\n\t\t[     \\meta{B}\n\t\t]]\n\t]\n\t\\end{prooftree}\n}\\\\\n\\textbf{Negated Quantifier}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot\\exists\\script{x}\\ \\meta{A}\n\t\t[\\forall\\script{x}\\ \\enot\\meta{A} , just={$i$ \\enot\\esome}]\n\t]\n\t\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\enot\\forall\\script{x}\\ \\meta{A}\n\t\t[\\exists\\script{x}\\ \\enot\\meta{A} , just={$i$ \\enot\\eall}]\n\t]\n\\end{prooftree}\n}\\\\\n\\textbf{Quantifier}\\\\\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\exists\\script{x}\\meta{A}, checked={\\script{a}}\n\t [\\meta{A}\\substitution{x}{a}, just={$i$ \\esome \\script{a}}\n \t  [ , just={(where \\script{a} is new})]\n\t]]\n\\end{prooftree}\n}\n&\n{\\begin{prooftree}\n\t{not line numbering}\n\t[\\forall\\script{x}\\meta{A}, subs={\\script{a}}\n\t [\\meta{A}\\substitution{x}{a}, just={$i$ \\eall \\script{a}}\n \t  [ , just={(where \\script{a} is old})]\n\t]]\n\\end{prooftree}\n}\n\\end{tabular}\n\n\\newpage\n\n\n\\section*{Identity}\n%\\begin{center}\n\\begin{tabular*}{\\textwidth}{rl}\n\\multicolumn{2}{c}{\\textsc{Identity} (section \\ref{sec.identity})}\\\\ \\\\\nOnly $j$ is $G$. & $\\forall x(Gx \\eiff x=j)$\\\\\nEverything except $j$ is $G$. & $\\forall x(x \\neq j \\eif Gx)$\\\\\n%$j$ is more $R$ than anyone else. & $\\forall x(x\\neq j \\eif Rjx)$\\\\\nThe $F$ is $G$. & $\\exists x(Fx \\eand \\forall y(Fy \\eif x=y) \\eand Gx)$\\\\\n%\\multicolumn{2}{l}{`The F is not G' can be translated two ways:} \\\\\n%It is not the case that the F is G. (wide)& $\\enot\\exists x(Fx \\eand \\forall y(Fy \\eif x=y) \\eand Gx)$\\\\\nThe $F$ is non-$G$. (narrow) & $\\exists x(Fx \\eand \\forall y(Fy \\eif x=y) \\eand \\enot Gx)$\n\n\\end{tabular*}\n\n\\subsection*{There are at least \\blank\\ $F$s.}\n\\label{summary.atleast}\n\n\\begin{ekey}\n\\item[one] $\\exists xFx$\n\\item[two] $\\exists x_1\\exists x_2(Fx_1 \\eand Fx_2 \\eand x_1 \\neq x_2)$\n\\item[three] $\\exists x_1\\exists x_2\\exists x_3(Fx_1 \\eand Fx_2 \\eand Fx_3 \\eand x_1 \\neq x_2 \\eand x_1 \\neq x_3 \\eand x_2 \\neq x_3)$\n%\\item[four] $\\exists x_1\\exists x_2\\exists x_3\\exists x_4 (Fx_1 \\eand Fx_2 \\eand Fx_3 \\eand Fx_4 \\eand x_1 \\neq x_2 \\eand x_1 \\neq x_3 \\eand x_1 \\neq x_4 \\eand x_2 \\neq x_3 \\eand x_2 \\neq x_4 \\eand x_3 \\neq x_4)$\n\\item[n] $\\exists x_1\\cdots\\exists x_n(Fx_1 \\eand\\cdots\\eand Fx_n \\eand x_1 \\neq x_2 \\eand\\cdots\\eand x_{n-1}\\neq x_n)$ \n\\end{ekey}\n\n\\subsection*{There are at most \\blank\\ $F$s.}\n\\label{summary.atmost}\n\nOne way to say `at most $n$ things are $F$' is to put a negation sign in front of one of the symbolizations above and say `NOT at least $n+1$ things are $F$.' 
Equivalently:\n\\begin{ekey}\n\\item[one] $\\forall x_1\\forall x_2\\bigl[(Fx_1 \\eand Fx_2) \\eif x_1=x_2\\bigr]$\n\\item[two] $\\forall x_1\\forall x_2\\forall x_3\\bigl[(Fx_1 \\eand Fx_2 \\eand Fx_3) \\eif (x_1=x_2 \\eor x_1=x_3 \\eor x_2=x_3)\\bigr]$\n%\\item[three] $\\forall x_1\\forall x_2\\forall x_3\\forall x_4\\bigl[(Fx_1 \\eand Fx_2 \\eand Fx_3 \\eand Fx_4) \\eif (x_1=x_2 \\eor x_1=x_3 \\eor x_1=x_4 \\eor x_2=x_3 \\eor x_2=x_4 \\eor x_3=x_4)\\bigr]$\n\\item[n]$\\forall x_1\\cdots\\forall x_{n+1}\n\\bigl[(Fx_1\\eand \\cdots \\eand Fx_{n+1}) \\eif (x_1=x_2 \\eor \\cdots \\eor x_n=x_{n+1})\\bigr]$ \n\\end{ekey}\n\n\\subsection*{There are exactly \\blank\\ $F$s.}\n\\label{summary.exactly}\n\nOne way to say `exactly $n$ things are $F$' is to say `at least $n$ things are $F$ AND at most $n$ things are $F$.' The following is shorter:\n\\begin{ekey}\n\\item[zero] $\\enot \\exists x Fx$\n\\item[one] $\\exists x\\bigl[Fx \\eand \\enot\\exists y(Fy \\eand x\\neq y)\\bigr]$\n\\item[two] $\\exists x_1\\exists x_2\\bigl[(Fx_1 \\eand Fx_2) \\eand (x_1 \\neq x_2) \\eand \\enot\\exists y\\bigl(Fy \\eand y\\neq x_1 \\eand y \\neq x_2\\bigr) \\bigr]$\n%\\item[three] $\\exists x_1\\exists x_2\\exists x_3\\bigl[Fx_1 \\eand Fx_2 \\eand Fx_3 \\eand x_1 \\neq x_2 \\eand x_1 \\neq x_3 \\eand x_2 \\neq x_3 \\eand\\\\ \\enot\\exists y(Fy \\eand y \\neq x_1 \\eand y \\neq x_2 \\eand y\\neq x_3) \\bigr]$\n\\item[n] $\\exists x_1\\cdots\\exists x_n\\bigl[(Fx_1 \\eand\\cdots\\eand Fx_n)  \\eand (x_1 \\neq x_2 \\eand\\cdots\\eand x_{n-1}\\neq x_n) \\eand\\\\\n \\enot\\exists y(Fy \\eand y\\neq x_1 \\eand \\cdots \\eand y\\neq x_n)\\bigr]$ \n%\\item[one] $\\exists x\\forall y\\bigl[Fx \\eand (Fy \\eif y = x)\\bigr]$\n%\\item[two] $\\exists x\\exists y\\forall z\\Bigl(Fx \\eand Fy \\eand \\bigl[Fz \\eif (z=x \\eor z=y)\\bigr] \\eand x \\neq y\\Bigr)$\n%\\item[three] $\\exists x_1\\exists x_2\\exists x_3\\forall y\\Bigl(Fx_1 \\eand Fx_2 \\eand Fx_3 \\eand [Fy \\eif (y=x_1 \\eor y=x_2 \\eor y=x_3)] \\eand x_1 \\neq x_2 \\eand x_1 \\neq x_3 \\eand x_2 \\neq x_3\\Bigr)$\n%\\item[n] $\\exists x_1\\cdots\\exists x_n\\forall y\\Bigl(Fx_1 \\eand\\cdots\\eand Fx_n \\eand \\bigl[Fy \\eif (y=x_1 \\eor \\cdots \\eor y=x_n)\\bigr] \\eand x_1 \\neq x_2 \\eand\\cdots\\eand x_{n-1}\\neq x_n\\Bigr)$ \n\\end{ekey}\n\n\n\\newpage\n\n\n% Tree Rules\n\n% Tree uses \n\n\\section*{Proving Logical Properties}\n%\\begin{table}\n\tSometimes we should use a truth tree, and sometimes a model. \n\t\\begin{center}\n\t\\begin{tabular*}{\\textwidth}{p{10em}|p{10em}|p{10em}|}\n\t\\cline{2-3}\n\t & {\\centerline{YES}} & {\\centerline{NO}}\\\\\n\t\\cline{2-3}\n\tIs \\script{A} a tautology? & close tree with root $\\enot \\script{A}$ & give a model in which \\script{A} is false\\\\\n\t\\cline{2-3}\n\tIs \\script{A} a contradiction? &  close tree with root $\\script{A}$ & give a model in which \\script{A} is true\\\\\n\t\\cline{2-3}\n\tIs \\script{A} contingent? & give a model in which \\script{A} is true and one in which \\script{A} is false & close tree with root $\\script{A}$ or root $\\enot\\script{A}$\\\\\n\t\\cline{2-3}\n\tAre \\script{A} and \\script{B} equivalent? & close tree with root $\\script{A}\\eiff\\script{B}$ & give a model in which \\script{A} and \\script{B} have different truth values\\\\\n\t\\cline{2-3}\n\tIs the set $\\{\\script{A}_1,\\ldots, \\script{A}_n\\}$ consistent? 
& give a model in which all $\script{A}_1,\ldots, \script{A}_n$ are true & close tree with root $\script{A}_1,\ldots, \script{A}_n$\\
	\cline{2-3}
	Is the argument $\script{A}_1,\ldots, \script{A}_n \therefore\ \script{C}$ valid? & close tree with root $\script{A}_1,\ldots, \script{A}_n, \enot\script{C}$ & give a model in which $\script{A}_1,\ldots, \script{A}_n$ are all true and \script{C} is false\\
	\cline{2-3}
	\end{tabular*}
	\end{center}

\vfill

\end{document}
{"text": "\\section{VariableGroups}\n\\label{sec:VariableGroups}\n\nThe \\xmlNode{VariableGroups} block is an optional input for the convenience of the user.  It allows the\npossibility of creating a collection of variables instead of re-listing all the variables in places throughout\nthe input file, such as DataObjects, ROMs, and ExternalModels.\n%\nEach entry in the \\xmlNode{VariableGroups} block has a distinct name and list of each constituent variable in\nthe group.\n%\nAdditionally, set operations can be used to construct variable groups from other variable groups, by listing\nthem in node text along with the operation to perform.\nThe following types of\nset operations are included in RAVEN:\n\\begin{itemize}\n  \\item \\texttt{+}, Union, the combination of all variables in the \\xmlString{base} set and listed set,\n  \\item \\texttt{-}, Complement, the relative complement of the listed set in the \\xmlString{base} set,\n  \\item \\texttt{\\^}, Intersection, the variables common to both the \\xmlString{base} and listed set,\n  \\item \\texttt{\\%}, Symmetric Difference, the variables in only either the \\xmlString{base} or listed set,\n    but not both.\n\\end{itemize}\nMultiple set operations can be performed by separating them with commas in the text of the group node, whether\nthey be variable groups or single variables. In the event a circular dependency loop is detected, an error will be\nraised. VariableGroups are evaluated in the order of entries listed in their node text.\n\nWhen using the variable groups in a node, they can be listed alone or as part of a comma-separated list.  The\nvariable group name will only be substituted in the text of nodes, not attributes or tags.\n\nEach \\xmlNode{Group} node has the following attributes:\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\xmlAttr{name}, \\xmlDesc{required string attribute}, user-defined name\n  of the group. This is the identifier that will be used elsewhere in the RAVEN input.\n  %\n\\end{itemize}\n\\vspace{-5mm}\n\nAn example of constructing and using variable groups is listed here.  
The variable groups \xmlString{x\_odd},
\xmlString{x\_even}, \xmlString{x\_first}, and \xmlString{y\_group} are constructed independently, and the
remainder are examples of other operations.
\begin{lstlisting}[style=XML,morekeywords={name,file}]
<Simulation>
  ...
  <VariableGroups>
    <Group name="x_odd"     >x1,x3,x5</Group>
    <Group name="x_even"    >x2,x4,x6</Group>
    <Group name="x_first"   >x1,x2,x3</Group>
    <Group name="y_group"   >y1,y2</Group>
    <Group name="add_remove">x_first,-x1,+ x4,+x5</Group>
    <Group name="union"     >x_odd,+x_even</Group>
    <Group name="complement">x_odd,-x_first</Group>
    <Group name="intersect" >x_even,^x_first</Group>
    <Group name="sym_diff"  >x_odd,% x_first</Group>
  </VariableGroups>
  ...
  <DataObjects>
    <PointSet name="dataset">
      <Input>union</Input>
      <Output>y_group</Output>
    </PointSet>
  </DataObjects>
  ...
</Simulation>
\end{lstlisting}
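As a supplement, the listing below is a minimal Python sketch of the resolution semantics described above (left-to-right evaluation of comma-separated operations, with circular-dependency detection). It is an illustration only, not RAVEN source code; all function names are ours, and well-formed group definitions are assumed.

\begin{lstlisting}[language=Python]
# Sketch of the VariableGroups set semantics (not RAVEN source code).
def resolve(groups):
    """Expand definitions like 'x1,+x2,-x3,^g1,% g2' into ordered lists."""
    resolved = {}
    def expand(name, seen=()):
        if name in resolved:
            return resolved[name]
        if name in seen:
            raise ValueError(f"circular dependency involving '{name}'")
        result = []
        for tok in (t.strip() for t in groups[name].split(',')):
            op, ref = (tok[0], tok[1:].strip()) if tok[0] in '+-^%' else ('+', tok)
            vals = expand(ref, seen + (name,)) if ref in groups else [ref]
            if op == '+':    # union
                result += [v for v in vals if v not in result]
            elif op == '-':  # relative complement
                result = [v for v in result if v not in vals]
            elif op == '^':  # intersection
                result = [v for v in result if v in vals]
            else:            # '%': symmetric difference
                result = ([v for v in result if v not in vals]
                          + [v for v in vals if v not in result])
        resolved[name] = result
        return result
    return {name: expand(name) for name in groups}

groups = {
    'x_odd': 'x1,x3,x5', 'x_even': 'x2,x4,x6', 'x_first': 'x1,x2,x3',
    'add_remove': 'x_first,-x1,+ x4,+x5', 'intersect': 'x_even,^x_first',
}
print(resolve(groups)['add_remove'])  # ['x2', 'x3', 'x4', 'x5']
print(resolve(groups)['intersect'])   # ['x2']
\end{lstlisting}

Running it on the example groups reproduces the results implied by the definitions above.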
{"text": "\\section{Spectral clustering}\n\\label{sec:spectralclustering}\n% Basic\nSpectral clustering algorithm is a graph-based clustering algorithm of $n$ data points. \nIt make use of an $n \\times n$ affinity matrix which is constructed after measuring pairwise similarity and distance between data points to form edges. \nUnlike the other clustering algorithms, it can determine the number of clusters $k$ by using eigengap algorithm. \n%It make use of similarity matrix which represents data points as vertices. \nAfter that, it takes $k$ eigenvectors of corresponding to the $k$ largest eigenvalues of an affinity matrix as columns, \nthen uses $k$-way clustering algorithm in $k$ dimensions. \nIn short, it performs dimensionality reduction since $k < n$ then produces clusters in fewer dimensions. \nNote that the number of cluster $k$ is the same as the number of column $k$ of reduced space. \n\nThe assumptions to the methods are that \\begin{inparaenum}[\\itshape a\\upshape)]\n%It involves taking top eigenvectors of an affinity matrix, which is transformed by calculating the similarity between objects in dataset, and then use graph cut algorithm. \n\\item minimizing normalized cut is equivalent to minimizing the probability that a random walk on the graph.\n\\item the larger the gap between eigenvalues of the graph, %$\\lambda_{k}$ and $\\lambda_{k+1}$, \nthe closer the piecewise-constant eigenvectors will be\n\\end{inparaenum}. \\newline \n%In this section, I describe the detail of the spectral clustering algorithm. More details can be found in \\cite{ulrike07}.\\newline\n%In Section~\\ref{subsec:eigengap}, I describes the eigengap algorithm.\\newline\n%In Section~\\ref{subsec:multiclass}, I describes the multiclass spectral clustering algorithm.\n\n%So it may be applied to image data but not to real network connection data. \n% Although its performance is generally good, it is weak at noise. \n% In what way they differ from what we do here\n% Why is our approach better than what has done before. does it solve a partly new aspect of the problem or does it simply perform better\n%It will measure pairwise similarity of data points. %like the other spectral clustering. \n%In our implementation, we follow Shi's multiclass normalized cut algorithm \\cite{jianbo03}. \n\n\\subsection{Eigengap algorithm}\n\\label{subsec:eigengap}\n% Important concept - k -  to a reviewer first \nThe eigengap is a difference between two consecutive eigenvalues. \nWe can find the value of $k$ that maximizes the eigengap by following equation. \n\\begin{equation}\n\\Delta_{max} = \\operatorname*{arg\\,max}_{k} \\Delta_k, \\text{~~} \\Delta_k = | \\lambda_k - \\lambda_{k-1} |\n\\end{equation}\nwhere $\\lambda$ is an eigenvalue and $k$ is a desirable choice of the number of clusterings \\cite{ulrike07}.\nThis is important to understand that $k$ is the number of top eigenvalues determined by eigengap and is also the number of clusters. \n\n\\subsection{Multiclass spectral clustering algorithm}\n\\label{subsec:multiclass}\n%\\subsection{Algorithm}\n%\\subsection{Normalized Laplacian Matrix}\n% Then, spectral clustering\nSpectral clustering algorithms attempt to find $k$ subsets of data points which have the property that data points within a cluster are similar to each other than to data points in other clusters. \n%In order to do this, a spectral clustering builds embedded space from the eigenvectors corresponding to the $k$ eigenvalues before it does $k$-way clustering. 
Let $A \in \mathbb{R}^{n \times n}$ be the affinity matrix, where $n$ is the number of data points. It is defined as 
\begin{equation}
\label{eq:sim}
A_{i,j} = \left\{ 
  \begin{array}{l l}
    sim(x_i, x_j) & \quad \text{for $x_j$ a data point in the neighbourhood of $x_i$}\\
    0 & \quad \text{otherwise}
  \end{array} \right.
\end{equation}
where the neighbourhood is the 8-nearest neighbourhood, that is, the 8 closest data points around the point $x_i$, 
and $sim$ is a cosine similarity function. 
Note that we assume the graph is undirected, which makes $A$ symmetric, and that the weight of an edge is a non-negative measure of similarity between the two vertices, 
so a higher weight implies greater similarity. 
We then obtain the normalized Laplacian matrix $L \in \mathbb{R}^{n \times n}$ as follows: 
\begin{equation}
L = D^{-1/2} A D^{-1/2}
\end{equation}
where $D \in \mathbb{R}^{n \times n}$, with $d_{i,i} = \sum_{j} A_{i,j}$, is the diagonal degree matrix. 

We obtain $k$ from $L$ with the eigengap algorithm. 
Then we form the matrix $V \in \mathbb{R}^{n \times k}$ with the corresponding eigenvectors as columns.
Spectral clustering interprets the rows of $V$ as new data points $Z_i \in \mathbb{R}^k$, where $i \in \{1, \cdots, n\}$, 
and we can apply known outlier detection techniques \cite{knorr00} to the reduced $\mathbb{R}^{n \times k}$ space to produce $k$ clusters. 

A spectral clustering has three issues. 
First of all, the eigengap algorithm helps us to pick a proper value of $k$, but no such algorithm can be perfect, especially if there is too much noise. 
Secondly, the choice of similarity function has an important effect on the accuracy. 
Thirdly, we need to choose a proper clustering method.
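Since the similarity function matters so much (the second issue above), the following is a minimal NumPy sketch of the 8-nearest-neighbour cosine affinity construction of equation \ref{eq:sim}. It is our own illustration, not the implementation used in this work; symmetrizing the $k$-NN graph with an element-wise maximum is our choice, made because nearest-neighbour relations are not symmetric on their own.

\begin{verbatim}
# Sketch (ours) of the 8-nearest-neighbour cosine affinity matrix.
import numpy as np

def affinity_matrix(X, n_neighbours=8):
    # Cosine similarity between all pairs of rows of X.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    A = np.zeros_like(S)
    for i in range(len(X)):
        # Keep only the n_neighbours most similar *other* points.
        nbrs = np.argsort(S[i])[::-1]
        nbrs = nbrs[nbrs != i][:n_neighbours]
        A[i, nbrs] = S[i, nbrs]
    return np.maximum(A, A.T)  # symmetrize: the graph is undirected
\end{verbatim}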
\begin{figure}[ht]
\label{fig:spectralclustering}
\begin{mdframed}
\begin{enumerate}
\item[Input] : Similarity matrix $S \in \mathbb{R}^{n \times n}$, number $k$ of clusters. 
\item[Step 1] : Construct the normalized graph Laplacian $L = D^{-1/2} A D^{-1/2}$.
\item[Step 2] : Find the $k$ eigenvectors $u_1, \cdots, u_k$ corresponding to the smallest $k$ eigenvalues that solve the generalized eigenvector problem $L u = \lambda D u$.
\item[Step 3] : Let $U \in \mathbb{R}^{n \times k}$ be the matrix containing the eigenvectors $u_i$ as columns, and let $y_i \in \mathbb{R}^k$ be the $i$th row of $U$.
\item[Step 4] : Cluster the points $(y_i)_{i=1,\cdots,n}$ in $\mathbb{R}^k$ using $k$-means clustering into clusters $C_1,\cdots,C_k$.
\item[Output] : Clusters $C_1, \cdots, C_k$
\end{enumerate}
\end{mdframed}
\caption{Overview of the multiclass spectral clustering algorithm according to Shi}
\end{figure}
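As a concrete companion to the text, the following is a minimal NumPy sketch of the pipeline as described in the prose above (top eigenvectors of $L = D^{-1/2} A D^{-1/2}$, with $k$ chosen by the eigengap); the boxed algorithm's generalized eigenproblem is a closely related variant. This is our illustration, not the code used for the experiments, and the $k$-means step is deliberately naive.

\begin{verbatim}
# Sketch (ours) of spectral clustering with the eigengap choice of k.
import numpy as np

def spectral_clustering(A, k=None):
    # A: symmetric, non-negative n x n affinity matrix.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = D_inv_sqrt @ A @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)                # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # largest first
    if k is None:
        # Eigengap: largest gap between consecutive eigenvalues.
        k = int(np.argmax(np.abs(np.diff(eigvals)))) + 1
    V = eigvecs[:, :k]  # rows of V are the new points Z_i in R^k
    return k, kmeans(V, k)

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1),
                           axis=1)
        centers = np.stack([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels
\end{verbatim}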
{"text": "\\section{Value-Based}\n\\label{section:value-based}\nValue-based methods typically aim at finding $Q^*$ first, which then gives the optimal policy $\\pi^*(s) = \\argmax_a Q^*(s,a)$. For this, they require finite action spaces.\n% Like other model-free approaches, the agent needs to interact with the environment to learn.\n\n\\subsection{Tabular environments}\nIn the case of tabular environments (i.e. $S$ is finite as well), $Q^\\pi$ can actually be represented by a matrix.\n\n\\paragraph{Tabular Q-learning}\nStarting from a random value-action function $Q$, tabular Q-learning consists in interacting with the environment and applying equation \\ref{eq:bellman-q*} to converge to $Q^*$ (algorithm \\ref{algo:tabular-q-learning}). Q-learning is an \\emph{off-policy} method, i.e. learns the value of the optimal policy independently of the agent's actions, and $\\pi$ can be any policy as long as all state-action pairs are visited enough.\n\n\\begin{algorithm}[H]\n\\DontPrintSemicolon\n\\For{a number of episodes}{\n    initialize $s_0$ \\;\n    \\While{episode not done}{\n        take action $a_t \\sim \\pi(s_t)$, observe $r_t$ and $s_{t+1}$ \\;\n        $Q(s_t,a_t) \\leftarrow Q(s_t,a_t) + \\alpha \\left[ r_t + \\gamma \\max_a Q(s_{t+1}, a) \\right]$ \\;\n    }\n}\n\\caption{Tabular Q-learning}\n\\label{algo:tabular-q-learning}\n\\end{algorithm}\n\n\\paragraph{SARSA} The SARSA algorithm is the \\emph{on-policy} equivalent of Q-learning as it learns the value of the current agent's policy $\\pi$. Instead of using Bellman equation \\ref{eq:bellman-q*}, it uses equation \\ref{eq:bellman-q} for the update step, and converges to the optimal policy $\\pi^*$ as long as policy $\\pi$ becomes greedy in the limit.\n\n\\paragraph{Exploration strategies} \\label{paragraph:exploration-strategies} Used by value-based algorithms (SARSA, Q-learning) to balance between exploration and exploitation when interacting with the environment.\n\\begin{itemize}\n    \\item \\emph{$\\epsilon$-greedy strategy}\n        \\[\n          \\pi(s)=\\begin{cases}\n            \\argmax_a Q(s,a) & \\text{be greedy with probability $\\epsilon$}\\\\\n            \\textit{a random action}, & \\text{otherwise}\n          \\end{cases}\n        \\]\n    \\item \\emph{Boltzmann selection}\n        \\[\n          \\pi(a\\mid s) = \\frac{\\exp(Q(s,a)/\\tau)}{\\sum_{a'} \\exp(Q(s,a')/\\tau)}\n        \\]\n    \\item Others: UCB-1, pursuit strategy. Comparison in \\cite{tijsma2016comparing} (for stochastic mazes).\n\\end{itemize}\n\n\\paragraph{Temporal Difference (TD) learning} TD methods combine Monte-Carlo (MC) sampling \\footnote{Since we don't know the MDP, we have to approximate the expectation using trajectory samples} with the \\emph{bootstrapping} of DP methods \\footnote{Bootstrapping: using our current approximation of $V^{\\pi}(s')$ to estimate $V^{\\pi}(s)$}, and are used to learn $V^\\pi$ or $Q^\\pi$. The following quantity\n\\[\n    \\delta_t = r_{t+1} + \\gamma V(s_{t+1}) - V(s_t)\n\\]\nis called the \\emph{TD-error}.\n\n\\begin{itemize}\n    \\item \\emph{TD(0) learning} is the most straightforward method. After acting according to $\\pi$, the value function is updated with: \n    \\[\n        V(s_t) \\leftarrow V(s_t) + \\alpha [r_t + \\gamma V(s_{t+1}) - V(s_t)]\n    \\]\n    Since $\\E[r_t + \\gamma V^\\pi(s_{t+1})] = V^\\pi(s_t)$, $V$ will eventually converge to $V^\\pi$. \n    % todo: understand the difference with TD(1)\n    \n    \\item \\emph{Multi-steps learning}. 
When the reward is delayed and is observed several steps after the decisive action, TD(0) is slow, since values only propagate from one state to the previous one. With
    \[
        G_t^{(n)} = r_t + \gamma r_{t+1} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^n V^\pi(s_{t+n})
    \]
    the \emph{$n$-step return}, it is also true that $\E[G_t^{(n)}] = V^\pi(s_t)$. Therefore, the update step becomes: 
    \[
        V(s_t) \leftarrow V(s_t) + \alpha [G_t^{(n)} - V(s_t)]
    \]
    Multi-steps learning can converge faster depending on the problem. As $n \rightarrow \infty$, $G_t^{(n)}$ approaches the unbiased MC estimate of $V^\pi$, at the cost of higher variance.

    \item \emph{TD($\lambda$) learning}. Instead of choosing the right $n$, a better way to navigate between TD(0) and MC updates is to average all $n$-step returns with a decay $\lambda$. For $\lambda \in [0,1)$, the \emph{$\lambda$-return} is defined as
    \[
        G_t^\lambda = (1-\lambda) \sum_{n=1}^\infty \lambda^{n-1} G_t^{(n)}
    \]
    and also verifies $\E[G_t^\lambda] = V^\pi(s_t)$, so it can be used in the update step. Choosing $\lambda=0$ is equivalent to TD(0) updates, and $\lambda \rightarrow 1$ approaches MC updates.
\end{itemize}

\paragraph{Eligibility traces} Eligibility traces $e_t$ are an equivalent but more convenient way of implementing TD($\lambda$) learning. Updates are done online (\emph{backward view}) instead of having to wait for the end of the trajectory (\emph{forward view}).

\begin{algorithm}[H]
\DontPrintSemicolon
\For{a number of episodes}{
    initialize $s_0$ and set $e_0(s)=0, \forall s$\;
    \While{episode not done}{
        take action $a_t=\pi(s_t)$, observe $r_t$ and $s_{t+1}$ \;
        \For{all $s$}{
          $e_t(s) \leftarrow \lambda \gamma e_{t-1}(s) + \mathbbm{1}_{s_t=s}$ \tcp*{this leads to $e_t(s) = \sum_{k=0}^t(\lambda \gamma)^{t-k} \mathbbm{1}_{s_k=s}$}
          $V(s) \leftarrow V(s) + \alpha e_t(s) \delta_t$ \;
        } 
    }
}
\caption{Eligibility traces, also called online TD($\lambda$), tabular version}
\label{algo:eligibility-traces}
\end{algorithm}

\begin{figure}[H]
    \centering
    \begin{subfigure}{0.3\textwidth}
        \includegraphics[width=\linewidth]{figures/backup-td.png}
        \caption{TD(0) Learning (sampling and bootstrapping)}
        \label{fig:backup-td}
    \end{subfigure}
    \begin{subfigure}{0.3\textwidth}
        \includegraphics[width=\linewidth]{figures/backup-mc.png}
        \caption{Monte-Carlo (sampling, no bootstrapping)}
        \label{fig:backup-mc}
    \end{subfigure}
    \begin{subfigure}{0.3\textwidth}
        \includegraphics[width=\linewidth]{figures/backup-dp.png}
        \caption{Dynamic Programming (bootstrapping, no sampling)}
        \label{fig:backup-dp}
    \end{subfigure}
    \caption{Different approaches to learning value functions.
By using an estimate in the update, bootstrapping methods do not need to go as deep and can be faster, although they introduce bias. Unless the MDP is known, sampling is necessary. Figures taken from David Silver notes \\cite{silver2015}, lecture 4 on Model-Free prediction.}\n\\end{figure}\n\n\n\\subsection{Approximate Q-learning}\nTabular learning approaches have trouble learning in large environments since there is no generalization between similar situations, and storing Q or V can even be an issue. Approximate approaches allow working with large finite environments and even with continuous state space $S$, by considering\n\\[\n    Q(s,a) \\approx Q_\\theta(\\phi(s),a)\n\\]\nwith $\\phi(s) \\in \\mathbb{R}^d$ a vector of features. Features $\\phi$ can be hand-crafted, but in Deep Reinforcement Learning (DRL) they are typically learned using neural networks and we simply note $Q(s,a) \\approx Q_\\theta(s,a)$.\n\n\\paragraph{Deep Q-Network (DQN) \\cite{mnih2013playing}\\cite{mnih2015human}} DQN was the first successful attempt at applying DRL on high-dimensional state spaces, and uses 2D convolutions with an MLP to extract features. It overcomes stability issues thanks to:\n\\begin{itemize}\n    \\item \\emph{Experience-replay}. Within a trajectory, transitions are strongly correlated but gradient descent algorithms typically assume independent samples, otherwise gradient estimates might be biased. Storing transitions and sampling from a memory buffer $D$ reduces correlation, and even allows mini-batch optimization to speed up training. Re-using past transitions also limits the risk of catastrophic forgetting.\n    \\item \\emph{Target network}. Based on eq. \\ref{eq:bellman-q*}, values $Q_\\theta(s,a)$ can be learned by minimizing the error\\footnote{This can be the mean square error, or Huber loss for more stability} to the target $r + \\gamma \\max_{a'}Q_\\theta(s',a')$. In this case, $\\theta$ is continuously updated and we are chasing a non-stationary target. Learning can be stabilized by using a separate network $Q_{\\theta^-}$ called the target network, with weights $\\theta^-$ updated every $k$ steps to match $\\theta$.\n\\end{itemize}\n\n\\begin{algorithm}[H]\n\\DontPrintSemicolon\n\\SetKwInput{Init}{Init}\n\\Init{replay memory $D$ with capacity $M$, $Q_\\theta$ with random weights, $\\theta^- = \\theta$}\n\\For{a number of episodes}{\n    initialize $s_0$ \\;\n    \\While{episode not done}{\n        take action $a_t \\sim \\pi(s_t)$, observe $r_t$ and $s_{t+1}$ \\tcp*{$\\pi$ can be $\\epsilon$-greedy for instance}\n        store transition $(s_t, a_t, r_t, s_{t+1})$ in $D$ \\;\n        sample random mini-batch of transitions $\\left((s_j, a_j, r_j, s_{j+1})\\right)_{j=1,\\dots,N}$ from $D$ \\;\n        set $y_j=\\begin{cases}\n            r_j & \\text{if $s_{j+1}$ is a terminal state}\\\\\n            r_j + \\gamma \\max_{a'} Q_{\\theta^-}(s_{j+1},a') & \\text{otherwise}\n          \\end{cases}$ \\;\n        $\\theta \\leftarrow \\theta - \\alpha \\nabla_\\theta \\frac{1}{N} \\sum_{j=1}^N (y_j - Q_\\theta(s_j,a_j))^2$ \\;\n        every $k$ steps, update $\\theta^- = \\theta$ \\;\n    }\n}\n\\caption{Deep Q-learning with experience replay and target network (DQN)}\n\\label{algo:DQN}\n\\end{algorithm}\n\n\\paragraph{Prioritized Experience Replay (PER) \\cite{schaul2015prioritized}}\nImprove experience replay: rather than sampling transitions $t_i$ uniformly from the memory buffer, prioritize the ones that are the most informative, i.e. 
with the largest error.\n\\[\ni \\sim P(i) = \\frac{p_i^\\alpha}{\\sum_k p_k^\\alpha}\n\\quad \\text{with} \\quad\np_i = |\\delta_i| + \\epsilon\n\\]\nwhere $\\delta_t = r_t + \\gamma \\max_a Q(s_{t+1},a) - Q(s_t, a_t)$ is again the TD-error. In practice when implementing PER, errors $\\delta_i$ are stored in memory with their associated transition $t_i$ and are only updated with the current $Q_\\theta$ when $t_i$ is sampled. A SumTree structure can be used to efficiently sample from $P(i)$, in $O(\\log |D|)$. Since PER introduces a bias\\footnote{This \\href{https://danieltakeshi.github.io/2019/07/14/per/}{blog post} goes into more details}, the authors use \\emph{importance sampling} and correct the loss with a term $w_i = 1 / (N P(i))^\\beta$.\n\n\\paragraph{Double DQN \\cite{van2016deep}}\nBecause it uses the same value function $Q_\\theta$ for both action selection and evaluation in the target $y_t = r_t + \\gamma \\max_a Q_\\theta(s_{t+1}, a)$, Q-learning tends to overestimate action values as soon as there is any estimation error. This is particularly the case when the action space is large (figure \\ref{fig:double-dqn}). Double Q-learning \\cite{hasselt2010double} decouples action selection and evaluation using two parallel networks $Q_\\theta$ and $Q_{\\theta'}$. Double DQN \\cite{van2016deep} improves DQN performance and stability by doing it simply with the online network $Q_\\theta$ and target network $Q_{\\theta^-}$:\n\\[\n    y_t = r_t + \\gamma Q_{\\theta^-}(s_{t+1}, \\argmax_a Q_\\theta(s_{t+1}, a))\n\\]\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.5\\linewidth]{figures/double-dqn.png}\n    \\caption{Red bars (resp. blue bars) show the bias in a single Q-learning update (resp. double Q-learning update) when considering i.i.d. Gaussian noise. Figure taken from \\cite{van2016deep}}\n    \\label{fig:double-dqn}\n\\end{figure}\n\n\\paragraph{Rainbow \\cite{hessel2018rainbow}}\nRainbow studies and combines a number of improvements to the DQN algorithm: multi-step returns, prioritized experience replay, Double Q-learning, as well as dueling networks (using a value function $V_\\theta(s)$ and an advantage function $A_\\theta(s,a)$ to estimate $Q(s,a)$), noisy nets (adaptive exploration by adding noise to the parameters of the last layer, instead of being $\\epsilon$-greedy) and distributional RL (approximate the distribution of returns instead of the expected return).\n\n\\paragraph{Deep Recurrent Q-Network \\cite{hausknecht2015deep}}\nBy using an RNN to learn the features (e.g. after the convolutional layers), DRQN keeps in memory a representation of the world according to previous observations. 
This makes it possible to go beyond the Markov property and work with POMDPs, but it can be harder to train.\n", "meta": {"hexsha": "2fca87de007e6282b62f118928d9b32d7591ff33", "size": 12753, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/4_value_based.tex", "max_stars_repo_name": "alexandrethm/rl-cheatseet", "max_stars_repo_head_hexsha": "1f1d1f51b66eb48981d07f991ebfef8f76292138", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2021-06-18T23:54:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T11:54:02.000Z", "max_issues_repo_path": "sections/4_value_based.tex", "max_issues_repo_name": "alexandrethm/rl-cheatsheet", "max_issues_repo_head_hexsha": "1f1d1f51b66eb48981d07f991ebfef8f76292138", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/4_value_based.tex", "max_forks_repo_name": "alexandrethm/rl-cheatsheet", "max_forks_repo_head_hexsha": "1f1d1f51b66eb48981d07f991ebfef8f76292138", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.6885245902, "max_line_length": 639, "alphanum_fraction": 0.7061083667, "num_tokens": 3635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631541, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5960173132253557}}
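\\paragraph{Code sketch: Double DQN target} A minimal NumPy sketch of the Double DQN target described above, added for illustration; the function and argument names are assumptions, not taken from the cited papers.\n\\begin{verbatim}\nimport numpy as np\n\ndef double_dqn_target(r, done, q_online_next, q_target_next, gamma=0.99):\n    # r, done: (N,) rewards and terminal flags of the mini-batch\n    # q_online_next, q_target_next: (N, |A|) values of Q_theta and\n    # Q_theta- at the next states s_{j+1}\n    a_star = np.argmax(q_online_next, axis=1)          # selection: online net\n    q_eval = q_target_next[np.arange(len(r)), a_star]  # evaluation: target net\n    return r + gamma * (1.0 - done) * q_eval           # terminal: keep only r_j\n\n# usage sketch with random values\nrng = np.random.default_rng(0)\ny = double_dqn_target(rng.random(4), np.array([0.0, 0.0, 1.0, 0.0]),\n                      rng.random((4, 3)), rng.random((4, 3)))\nprint(y)\n\\end{verbatim}\n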
{"text": "\\chapter{\\Pelectron \\Ppositron annihilation}\nThe annihilation process \\HepProcess{\\Pelectron \\Ppositron \\to \\Pmuon \\APmuon} is related to the scattering process \\HepProcess{\\Pelectron \\Pmuon \\to \\Pelectron \\Pmuon} as can be seen by comparing their diagrams. The diagram for the annihilation process is\n\\begin{figure}[th]\n\\centering\n\\include{figures/annihilate}\n\\caption{Diagram for the s-channel electron annihilation process to positrons.}\n\\end{figure}\nwhich is the same as for t-channel elastic scattering, only rotated by $90^\\circ$. The rotation is equivalent to exchanging $k^\\prime$ with $-p$. The consequence of this transformation is simply that the roles of $s$ and $t$ are swapped in the probability $\\abs{T_{fi}}^2$, so the differential cross section for the annihilation process is\n\\begin{equation}\\boxed{\n\\dv{\\sigma}{\\Omega} = \\frac{e^4}{32\\pi^2s} \\frac{t^2+u^2}{s^2}\n}\\end{equation}\n\n\\section{Total cross section}\nWe wish to integrate the differential cross section in the limit where the masses of the particles involved become zero.\n\nWrite the kinematic Mandelstam variables in term of energies:\n\\begin{align}\ns = (k + p)^2 &\\approx 2k \\cdot p \\nonumber \\\\\n&= 2 \\mqty(E_{\\Pelectron} \\\\ E_{\\Pelectron}) \\cdot \\mqty(E_{\\Ppositron} \\\\ -E_{\\Ppositron}) \\nonumber \\\\\n&= 4E_{\\Pelectron} E_{\\Ppositron}\n\\end{align}\n\\begin{align}\nt = (k - k^\\prime)^2 &\\approx -2k\\cdot k^\\prime \\nonumber \\\\\n&= -2 \\mqty(E_{\\Pelectron} \\\\ E_{\\Pelectron}) \\cdot \\mqty(E_{\\Pmuon} \\\\ E_{\\Pmuon} \\cdot \\cos\\theta) \\nonumber \\\\\n&= -2 E_{\\Pelectron}E_{\\Pmuon}(1-\\cos\\theta)\n\\end{align}\n\\begin{align}\nu = (k-p^\\prime)^2 &\\approx -2k\\cdot p^\\prime \\nonumber \\\\\n&= -2 \\mqty(E_{\\Pelectron} \\\\ E_{\\Pelectron}) \\cdot \\mqty(E_{\\APmuon} \\\\ -E_{\\APmuon}\\cdot \\cos\\theta) \\nonumber \\\\\n&= -2 E_{\\Pelectron}E_{\\APmuon} (1+\\cos\\theta)\n\\end{align}\n\nThen the differential cross section becomes\n\\begin{align}\n\\dv{\\sigma}{\\Omega} &= \\frac{e^4}{32\\pi^2s} \\frac{4E_{\\Pmuon}^2(1-\\cos\\theta)^2 + 4E_{\\APmuon}^2(1+\\cos\\theta)^2}{16 E_{\\Ppositron}^2} \\nonumber \\\\\n&= \\frac{e^4}{128\\pi^2s} \\frac{E_{\\Pmuon}^2(1-\\cos\\theta)^2 + E_{\\APmuon}^2(1+\\cos\\theta)^2}{E_{\\Ppositron}^2}\n\\end{align}\n\nNow we choose the centre of momentum frame where $E_{\\Pelectron} = E_{\\Pmuon} = E_{\\APelectron} = E_{\\APmuon}$ for massless particles. 
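Explicitly, in this frame every particle carries energy $E = \\sqrt{s}/2$, so the Mandelstam variables above reduce to\n\\begin{equation*}\ns = 4E^2, \\qquad t = -\\frac{s}{2}(1-\\cos\\theta), \\qquad u = -\\frac{s}{2}(1+\\cos\\theta),\n\\end{equation*}\nand hence $t^2 + u^2 = \\frac{s^2}{2}\\left(1+\\cos^2\\theta\\right)$.\n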
Then the differential cross section becomes\n\\begin{equation}\n\\dv{\\sigma}{\\Omega} = \\frac{e^4}{64\\pi^2s} \\left( 1 + \\cos^2\\theta \\right)\n\\end{equation}\n\nwhich can be easily integrated:\n\\begin{align}\n\\sigma &= \\frac{e^4}{64\\pi^2s}\\int(1+\\cos^2\\theta) \\, \\dd{\\Omega} \\nonumber \\\\\n&= \\frac{e^4}{64\\pi^2s} \\left( 4\\pi + 2\\pi\\int\\limits_{-1}^1 \\cos^2\\theta \\, \\dd(\\cos{\\theta}) \\right) \\nonumber \\\\\n&= \\frac{e^4}{64\\pi^2s} \\left( 4\\pi + \\frac{4\\pi}{3} \\right) \\nonumber \\\\\n&= \\frac{e^4}{12\\pi s}.\n\\end{align}\n\nFinally we use that the fine structure constant $\\alpha = \\frac{e^2}{4\\pi}$ (see \\eqref{eq:fineStructure}) to get\n\\begin{equation}\\boxed{\n\\sigma = \\frac{4\\pi\\alpha^2}{3s}.\n}\\end{equation}\n\n\\section{$R$ at \\Pelectron \\Ppositron colliders}\nThe process $\\HepProcess{\\Pelectron\\Ppositron \\to \\Pmuon\\APmuon}$ may be generalised to include other allowed particles in the final state, shown in Figure~\\ref{fig:generalAnnihilation}.\n\\begin{figure}[th]\n\\centering\n\\include{figures/generalAnnihilation}\n\\caption{The diagram for general $s$-channel \\Pelectron \\Ppositron annihilation with final state fermions in QED.\\label{fig:generalAnnihilation}}\n\\end{figure}\nThe process is exclusively $s$-channel except in the case of final-state electrons, where $t$- and $u$-channels contribute, and in fact dominate at low energies. Also, muons are simpler to detect. For these reasons, we use the process with final-state muons as a `standard candle' and define the ratio $R$,\n\\begin{equation}\\boxed{\nR = \\frac{\\sigma( \\Pelectron\\Ppositron \\to \\Pquark\\APquark )}{\\sigma ( \\Pelectron\\Ppositron \\to \\Pmuon\\APmuon )}.\n}\\end{equation}\n\nAt low energies, only \\Pup and \\Pdown quarks are accessible in the final state. They instantly hadronise into pions or $\\rho$ mesons, for example. The cross section is proportional to the square of the quark charge and we simply add the cross sections for distinguishable final states,\n\\begin{equation}\nR_{\\Pup\\Pdown} = \\left[\\left(\\frac{2}{3}\\right)^2 + \\left(\\frac{-1}{3}\\right)^2 \\right] \\times 3 = \\frac{5}{3}\n\\end{equation}\nwhere the factor of 3 is due to colour multiplicity of the quark states. In this simple treatment, $R$ increases in steps to $\\frac{6}{3}$, $\\frac{10}{3}$, $\\frac{11}{3}$ up to the CM energy $\\sqrt{s} = 2m_{\\Pbottom} \\simeq \\SI{9.3}{\\giga\\electronvolt}$. When the final state quarks combine to form resonances (e.g.~\\Prho: \\SI{770}{\\mega\\electronvolt}, \\PJpsi: \\SI{3.1}{\\giga\\electronvolt}, \\PUpsilonFourS: \\SI{10.6}{\\giga\\electronvolt}) $R$ increases above these predictions due to the enhanced cross section.\n\nThe BaBar \\Pelectron \\Ppositron collider operated at the \\PUpsilonFourS resonance, allowing measurement of $R$ at about 3.84 (expected 3.67). The higher measured value can be accounted for by higher-order $O(\\alpha_S)$ QCD corrections due to gluon radiation.\n\nSimilarly, the LEP collider operated at the $\\PZ$ resonance $\\sqrt{s} = \\SI{91}{\\giga\\electronvolt}$. An analogous quantity $R_Z$ is defined\n\\begin{equation}\nR_Z = \\frac{\\sigma( \\PZ \\to \\Pquark\\APquark )}{\\sigma ( \\PZ \\to \\Pmuon\\APmuon )}.\n\\end{equation}\nTo lowest order, $R_Z = 20.09$ and it is measured to be $20.79 \\pm 0.04$. 
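Indeed, a leading-order estimate of the correction, multiplying the lowest-order value by $(1+\\alpha_S/\\pi)$ with $\\alpha_S(m_{\\PZ}) \\approx 0.12$, gives $20.09 \\times 1.04 \\approx 20.9$, close to the measured value.\n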
The $3.5\\%$ discrepancy is fully explained by higher order QCD corrections involving gluons.\n\\begin{figure}[hb]\n\\centering\n\\include{figures/Zgluonrad}\n\\caption{$\\PZ$ decay into quark final states with gluon radiation. This diagram contributes to the $O(\\alpha_S)$ QCD correction to $R_Z$.\\label{fig:Zgluonrad}}\n\\end{figure}\n\n\\section{Helicity conservation at high energies}\nRecall the chiral projection operators from Section \\ref{sec:gammaMatrix},\n\\begin{equation*}\nP_R = \\frac{1}{2}\\left( 1 + \\gamma^5 \\right), \\quad P_L = \\frac{1}{2}\\left( 1 - \\gamma^5 \\right).\n\\end{equation*}\nAs discussed above, in the limit where $m \\rightarrow 0$, helicity and chirality have the same eigenstates. The left-handed chiral state is given, for example, by $U_L = P_L U$.\n\nUsing the fact that $\\gamma^5$ -- and hence $P_L$ and $P_R$ -- is Hermitian and $\\gamma^\\mu \\gamma^5 = -\\gamma^5 \\gamma^\\mu$, we see that $P_L\\gamma^\\mu = \\gamma^\\mu P_R$ and therefore\n\\begin{equation}\n\\overline{U}_L = (P_L U)^\\dagger \\gamma^0 = U^\\dagger P_L \\gamma^0 = U^\\dagger \\gamma_0 P_R = \\overline{U} P_R.\n\\end{equation}\nSimilarly, $\\overline{U}_R = \\overline{U}P_L$.\n\nNow consider the QED current\n\\begin{equation}\n\\overline{U}\\gamma^\\mu{U} = (\\overline{U}_L + \\overline{U}_R)\\gamma^\\mu(U_L + U_R)\n\\end{equation}\nand look at a cross-term specifically,\n\\begin{align}\n\\overline{U}_L \\gamma^\\mu U_R &= \\overline{U} P_R \\gamma^\\mu P_R U \\nonumber \\\\\n&= \\overline{U} \\gamma^\\mu P_L P_R U \\nonumber \\\\\n&= 0\\end{align}\nbecause $P_LP_R=0$. This means there are no chirality-changing currents in QED and hence QED conserved chirality, or equivalently helicity in the high energy limit.\n", "meta": {"hexsha": "2557c4dca7aaed22bb6203603f02f53a067b0b50", "size": 7082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/11_e_e_annihilation.tex", "max_stars_repo_name": "adambozson/Standard-Model-I", "max_stars_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/11_e_e_annihilation.tex", "max_issues_repo_name": "adambozson/Standard-Model-I", "max_issues_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/11_e_e_annihilation.tex", "max_forks_repo_name": "adambozson/Standard-Model-I", "max_forks_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.8018018018, "max_line_length": 510, "alphanum_fraction": 0.7137814177, "num_tokens": 2417, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.5960173115709062}}
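A worked check of the step values of $R$ quoted above, using the same $3\\sum_q e_q^2$ counting: with \\Pup, \\Pdown and \\Pstrange accessible, $R = 3\\left[\\frac{4}{9}+\\frac{1}{9}+\\frac{1}{9}\\right] = \\frac{6}{3}$; adding \\Pcharm gives $\\frac{6}{3} + 3\\cdot\\frac{4}{9} = \\frac{10}{3}$; adding \\Pbottom gives $\\frac{10}{3} + 3\\cdot\\frac{1}{9} = \\frac{11}{3}$.\n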
{"text": "\\documentclass[12pt]{article}\n\\input{physics1}\n\\begin{document}\n\n\\section*{NYU Physics I---the spacetime interval}\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nDraw a spacetime diagram showing the four events\n$A=(c\\,t_A,x_A)=0\\,\\m,0\\,\\m)$, $B=(1\\,\\m,1\\,\\m)$, $C=(1\\,\\m,0\\,\\m)$,\nand $D=(0\\,\\m,1\\,\\m)$,\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nCompute all six spacetime intervals between all pairs of events.\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nNow Lorentz Transform these events to a new frame using the Lorentz\nTransformation\n\\begin{eqnarray}\\displaystyle\nc\\,t' & = & \\gamma\\,c\\,t - \\beta\\,\\gamma\\,   x \\nonumber\\\\\n   x' & = & \\gamma\\,   x - \\beta\\,\\gamma\\,c\\,t \\quad ,\n\\end{eqnarray}\nwith $\\beta=(3/5)$.\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nDraw a spacetime diagram of the four events in the new frame.\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nRe-compute all six spacetime intervals in this new frame.  Which are\nnull, which are spacelike, and which are timelike?\n\n\\paragraph{\\theproblem}\\refstepcounter{problem}%\nNow Lorentz Transform the events again using the opposite velocity, or\n\\begin{eqnarray}\\displaystyle\nc\\,t'' & = & \\gamma\\,c\\,t' + \\beta\\,\\gamma\\,   x' \\nonumber\\\\\n   x'' & = & \\gamma\\,   x' + \\beta\\,\\gamma\\,c\\,t' \\quad .\n\\end{eqnarray}\nIs everything okay?\n\n\\end{document}\n", "meta": {"hexsha": "eb446494fe75f0c64a6ea0bb495a67a2d7d7cd03", "size": 1337, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/worksheet_interval.tex", "max_stars_repo_name": "davidwhogg/Physics1", "max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z", "max_issues_repo_path": "tex/worksheet_interval.tex", "max_issues_repo_name": "davidwhogg/Physics1", "max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 29, "max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z", "max_forks_repo_path": "tex/worksheet_interval.tex", "max_forks_repo_name": "davidwhogg/Physics1", "max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.425, "max_line_length": 70, "alphanum_fraction": 0.6978309648, "num_tokens": 463, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5960173087684671}}
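A small Python sketch of the boost used in this worksheet ($\\beta = 3/5$, hence $\\gamma = 5/4$); the event coordinates are the ones listed above, the helper names are illustrative assumptions:\n\\begin{verbatim}\nimport numpy as np\n\nbeta = 3 / 5\ngamma = 1 / np.sqrt(1 - beta**2)  # = 5/4 for beta = 3/5\n\n# events (c t, x) in metres, as listed in the worksheet\nevents = {"A": (0.0, 0.0), "B": (1.0, 1.0), "C": (1.0, 0.0), "D": (0.0, 1.0)}\n\ndef boost(ct, x):\n    # ct' = gamma ct - beta gamma x ; x' = gamma x - beta gamma ct\n    return gamma * ct - beta * gamma * x, gamma * x - beta * gamma * ct\n\ndef interval2(e1, e2):\n    # squared interval (c dt)^2 - (dx)^2, invariant under the boost\n    dct, dx = e2[0] - e1[0], e2[1] - e1[1]\n    return dct**2 - dx**2\n\nprimed = {name: boost(ct, x) for name, (ct, x) in events.items()}\nprint(primed)\nprint(interval2(events["A"], events["B"]),\n      interval2(primed["A"], primed["B"]))\n\\end{verbatim}\n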
{"text": "\n\\chapter{Total boundness of reloids}\n\n\n\\section{Thick binary relations}\n\\begin{defn}\n\\index{alpha-thick@$\\alpha$-thick}I will call \\emph{$\\alpha$-thick} and denote $\\thick_{\\alpha}(E)$\na $\\mathbf{Rel}$-endomorphism $E$ when there exists a finite cover $S$\nof $\\Ob E$ such that $\\forall A\\in S:A\\times A\\subseteq\\GR E$.\n\\end{defn}\n\n\\begin{defn}\n$\\CS(S)=\\bigcup\\setcond{A\\times A}{A\\in S}$ for a collection $S$\nof sets.\\end{defn}\n\\begin{rem}\n$\\CS$ means ``Cartesian squares''.\\end{rem}\n\\begin{obvious}\nA $\\mathbf{Rel}$-endomorphism is $\\alpha$-thick iff there exists\na finite cover $S$ of $\\Ob E$ such that $\\CS(S)\\subseteq\\GR E$.\\end{obvious}\n\\begin{defn}\n\\index{beta\n-thick@$\\beta$-thick}I will call \\emph{$\\beta$-thick} and denote $\\thick_{\\beta}(E)$\na $\\mathbf{Rel}$-endomorphism $E$ when there exists a finite set\n$B$ such that $\\rsupfun{\\GR E}B=\\Ob E$.\\end{defn}\n\\begin{prop}\n$\\thick_{\\alpha}(E)\\Rightarrow\\thick_{\\beta}(E)$.\\end{prop}\n\\begin{proof}\nLet $\\thick_{\\alpha}(E)$. Then there exists a finite cover $S$ of\nthe set $\\Ob E$ such that $\\forall A\\in S:A\\times A\\subseteq\\GR E$.\nWithout loss of generality assume $A\\ne\\emptyset$ for every $A\\in S$.\nSo $A\\subseteq\\rsupfun{\\GR E}\\{x_{A}\\}$ for some $x_{A}$ for every\n$A\\in S$. So\n\\[\n\\rsupfun{\\GR E}\\setcond{x_{A}}{A\\in S}=\\bigcup\\setcond{\\rsupfun{\\GR E}\\{x_{A}\\}}{A\\in S}=\\Ob E\n\\]\nand thus $E$ is $\\beta$-thick.\\end{proof}\n\\begin{obvious}\nLet $X$ be a set, $A$ and $B$ be $\\mathbf{Rel}$-endomorphisms\non $X$ and $B\\sqsupseteq A$. Then:\n\\begin{itemize}\n\\item $\\thick_{\\alpha}(A)\\Rightarrow\\thick_{\\alpha}(B)$;\n\\item $\\thick_{\\beta}(A)\\Rightarrow\\thick_{\\beta}(B)$.\n\\end{itemize}\n\\end{obvious}\n\\begin{example}\nThere is a $\\beta$-thick Rel-morphism which is not $\\alpha$-thick.\\end{example}\n\\begin{proof}\nConsider the $\\mathbf{Rel}$-morphism on $[0;1]$ with the graph on\nfigure~\\ref{fig:thick-counterexample}:\n\\[\n\\Gamma=\\setcond{(x,x)}{x\\in[0;1]}\\cup\\setcond{(x,0)}{x\\in[0;1]}\\cup\\setcond{(0,x)}{x\\in[0;1]}.\n\\]\n\\begin{figure}[ht]\n\\begin{tikzpicture}\n\\draw[thin,gray,text=black,<->] (0,1.4) node[left] {$y$}\n|- (1.4,0) node[below] {$x\\vphantom{1}$};\n\\draw[densely dotted,gray] (1,0) |- (0,1);\n\\draw[thick] (0,1) node[left] {$1$} \n-- (0,0) node[below left] {$0$} \n-- (1,0) node[below] {$1$}\n(0,0) -- (1,1);\n\\end{tikzpicture}\n\n\\caption{\\label{fig:thick-counterexample}Thickness counterexample graph}\n\\end{figure}\n\n\n$\\Gamma$ is $\\beta$-thick because $\\rsupfun{\\Gamma}\\{0\\}=[0;1]$.\n\nTo prove that $\\Gamma$ is not $\\alpha$-thick it's enough to prove\nthat every set $A$ such that $A\\times A\\subseteq\\Gamma$ is finite.\n\nSuppose for the contrary that $A$ is infinite. Then $A$ contains\nmore than one non-zero points $y$, $z$ ($y\\ne z$). Without loss\nof generality $y<z$. So we have that $(y,z)$ is not of the form\n$(y,y)$ nor $(0,y)$ nor $(y,0)$. 
Therefore $A\\times A$ isn't a\nsubset of $\\Gamma$.\n\\end{proof}\n\n\\section{Totally bounded endoreloids}\n\nThe below is a straightforward generalization of the customary definition\nof totally bounded sets on uniform spaces (it's proved below that\nfor uniform spaces the below definitions are equivalent).\n\\begin{defn}\n\\index{alpha\n-totally bounded@$\\alpha$-totally bounded}An endoreloid $f$ is \\emph{$\\alpha$-totally bounded} ($\\totBound_{\\alpha}(f)$)\nif every~$E\\in\\up f$ is $\\alpha$-thick.\n\\end{defn}\n\n\\begin{defn}\n\\index{beta\n-totally bounded@$\\beta$-totally bounded}An endoreloid $f$ is \\emph{$\\beta$-totally bounded} ($\\totBound_{\\beta}(f)$)\nif every~$E\\in\\up f$ is $\\beta$-thick.\\end{defn}\n\\begin{rem}\nWe could rewrite the above definitions in a more algebraic way like\n$\\up f\\subseteq\\thick_{\\alpha}$ (with $\\thick_{\\alpha}$ would be\ndefined as a set rather than as a predicate), but we don't really\nneed this simplification.\\end{rem}\n\\begin{prop}\nIf an endoreloid is $\\alpha$-totally bounded then it is $\\beta$-totally\nbounded.\\end{prop}\n\\begin{proof}\nBecause $\\thick_{\\alpha}(E)\\Rightarrow\\thick_{\\beta}(E)$.\\end{proof}\n\\begin{prop}\nIf an endoreloid $f$ is reflexive and $\\Ob f$ is finite then $f$\nis both $\\alpha$-totally bounded and $\\beta$-totally bounded.\\end{prop}\n\\begin{proof}\nIt enough to prove that $f$ is $\\alpha$-totally bounded. Really,\nevery $E\\in\\up f$ is reflexive. Thus $\\{x\\}\\times\\{x\\}\\subseteq\\GR E$\nfor $x\\in\\Ob f$ and thus $\\setcond{\\{x\\}}{x\\in\\Ob f}$ is a sought\nfor finite cover of $\\Ob f$.\\end{proof}\n\\begin{obvious}\n~\n\\begin{itemize}\n\\item A principal endoreloid induced by a $\\mathbf{Rel}$-morphism $E$ is\n$\\alpha$-totally bounded iff $E$ is $\\alpha$-thick.\n\\item A principal endoreloid induced by a $\\mathbf{Rel}$-morphism $E$ is\n$\\beta$-totally bounded iff $E$ is $\\beta$-thick.\n\\end{itemize}\n\\end{obvious}\n\\begin{example}\nThere is a $\\beta$-totally bounded endoreloid which is not $\\alpha$-totally\nbounded.\\end{example}\n\\begin{proof}\nIt follows from the example above and properties of principal endoreloids.\n\\end{proof}\n\n\\section{Special case of uniform spaces}\n\nRemember that \\emph{uniform space} is essentially the same as symmetric,\nreflexive and transitive endoreloid.\n\\begin{thm}\nLet $f$ be such an endoreloid that $f\\circ f^{-1}\\sqsubseteq f$.\nThen $f$ is $\\alpha$-totally bounded iff it is $\\beta$-totally\nbounded.\\end{thm}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{$\\Rightarrow$}] Proved above.\n\\item [{$\\Leftarrow$}] For every $\\epsilon\\in\\up f$ we have that $\\rsupfun{\\GR\\epsilon}\\{c_{0}\\},\\dots,\\rsupfun{\\GR\\epsilon}\\{c_{n}\\}$\ncovers the space. 
$\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\times\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\subseteq\\GR(\\epsilon\\circ\\epsilon^{-1})$\nbecause for $x\\in\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}$ (the same as $c_{i}\\in\\rsupfun{\\GR\\epsilon}\\{x\\}$)\nwe have \n\\[\n\\rsupfun{\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\times\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}}\\{x\\}=\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\subseteq\\rsupfun{\\GR\\epsilon}\\rsupfun{\\GR\\epsilon^{-1}}\\{x\\}=\\rsupfun{\\GR(\\epsilon\\circ\\epsilon^{-1})}\\{x\\}.\n\\]\nFor every $\\epsilon'\\in\\up f$ exists $\\epsilon\\in\\up f$ such that\n$\\epsilon\\circ\\epsilon^{-1}\\sqsubseteq\\epsilon'$ because $f\\circ f^{-1}\\sqsubseteq f$.\nThus for every $\\epsilon'$ we have $\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\times\\rsupfun{\\GR\\epsilon}\\{c_{i}\\}\\subseteq\\GR\\epsilon'$\nand so $\\rsupfun{\\GR\\epsilon}\\{c_{0}\\},\\dots,\\rsupfun{\\GR\\epsilon}\\{c_{n}\\}$\nis a sought for finite cover.\n\\end{description}\n\\end{proof}\n\\begin{cor}\nA uniform space is $\\alpha$-totally bounded iff it is $\\beta$-totally\nbounded.\\end{cor}\n\\begin{proof}\nFrom the theorem and the definition of uniform spaces.\n\\end{proof}\nThus we can say about just \\emph{totally bounded} uniform spaces (without\nspecifying whether it is $\\alpha$ or $\\beta$).\n\n\n\\section{Relationships with other properties}\n\\begin{thm}\nLet $\\mu$ and $\\nu$ be endoreloids. Let $f$ be a principal $\\continuous'(\\mu,\\nu)$\ncontinuous, monovalued, surjective reloid. Then if $\\mu$ is $\\beta$-totally\nbounded then $\\nu$ is also $\\beta$-totally bounded.\\end{thm}\n\\begin{proof}\nLet $\\varphi$ be the monovalued, surjective function, which induces\nthe reloid~$f$.\n\nWe have $\\mu\\sqsubseteq f^{-1}\\circ\\nu\\circ f$.\n\nLet $F\\in\\up\\nu$. Then there exists $E\\in\\up\\mu$ such that $E\\subseteq\\varphi^{-1}\\circ F\\circ\\varphi$.\n\nSince $\\mu$ is $\\beta$-totally bounded, there exists a finite typed\nsubset $A$ of $\\Ob\\mu$ such that $\\rsupfun{\\GR E}A=\\Ob\\mu$.\n\nWe claim $\\rsupfun{\\GR F}\\rsupfun{\\varphi}A=\\Ob\\nu$.\n\nIndeed let $y\\in\\Ob\\nu$ be an arbitrary point. Since $\\varphi$ is\nsurjective, there exists $x\\in\\Ob\\mu$ such that $\\varphi x=y$. Since\n$\\rsupfun{\\GR E}A=\\Ob\\mu$ there exists $a\\in A$ such that $a\\mathrel{(\\GR E)}x$\nand thus $a\\mathrel{(\\varphi^{-1}\\circ F\\circ\\varphi)}x$. So $(\\varphi a,y)=(\\varphi a,\\varphi x)\\in\\GR F$.\nTherefore $y\\in\\rsupfun{\\GR F}\\rsupfun{\\varphi}A$.\\end{proof}\n\\begin{thm}\nLet $\\mu$ and $\\nu$ be endoreloids. Let $f$ be a principal $\\continuous''(\\mu,\\nu)$\ncontinuous, surjective reloid. Then if $\\mu$ is $\\alpha$-totally\nbounded then $\\nu$ is also $\\alpha$-totally bounded.\\end{thm}\n\\begin{proof}\nLet $\\varphi$ be the surjective binary relation which induces the\nreloid $f$.\n\nWe have $f\\circ\\mu\\circ f^{-1}\\sqsubseteq\\nu$.\n\nLet $F\\in\\up\\nu$. Then there exists $E\\in\\up\\mu$ such that $\\varphi\\circ E\\circ\\varphi^{-1}\\subseteq F$.\n\nThere exists a finite cover $S$ of $\\Ob\\mu$ such that $\\bigcup\\setcond{A\\times A}{A\\in S}\\subseteq\\GR E$.\n\nThus $\\varphi\\circ\\left(\\bigcup\\setcond{A\\times A}{A\\in S}\\right)\\circ\\varphi^{-1}\\subseteq\\GR F$\nthat is $\\bigcup\\setcond{\\rsupfun{\\varphi}A\\times\\rsupfun{\\varphi}A}{A\\in S}\\subseteq\\GR F$.\n\nIt remains to prove that $\\setcond{\\rsupfun{\\varphi}A}{A\\in S}$ is\na cover of $\\Ob\\nu$. 
It is true because $\\varphi$ is a surjection\nand $S$ is a cover of $\\Ob\\mu$.\n\\end{proof}\nA stronger statement (principality requirement removed):\n\\begin{conjecture}\nThe image of a uniformly continuous entirely defined monovalued surjective\nreloid from a ($\\alpha$-, $\\beta$-)totally bounded endoreloid is\nalso ($\\alpha$-, $\\beta$-)totally bounded.\n\\end{conjecture}\nCan we remove the requirement to be entirely defined from the above\nconjecture?\n\\begin{question}\nUnder which conditions it's true that join of ($\\alpha$-, $\\beta$-)\ntotally bounded reloids is also totally bounded?\n\\end{question}\n\n\\section{Additional predicates}\n\nWe may consider also the following predicates expressing different\nkinds of what is intuitively is understood as boundness. Their usefulness\nis unclear, but I present them for completeness.\n\\begin{itemize}\n\\item $\\totBound_{\\alpha}(f)$\n\\item $\\totBound_{\\beta}(f)$\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\alpha}(E^{n})$\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\beta}(E^{n})$\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\alpha}(E^{0}\\sqcup\\ldots\\sqcup E^{n})$\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\beta}(E^{0}\\sqcup\\ldots\\sqcup E^{n})$\n\\item $\\exists n\\in\\mathbb{N}:\\totBound_{\\alpha}(f^{n})$\n\\item $\\exists n\\in\\mathbb{N}:\\totBound_{\\beta}(f^{n})$\n\\item $\\exists n\\in\\mathbb{N}:\\totBound_{\\alpha}(f^{0}\\sqcup\\ldots\\sqcup f^{n})$\n\\item $\\exists n\\in\\mathbb{N}:\\totBound_{\\beta}(f^{0}\\sqcup\\ldots\\sqcup f^{n})$\n\\item $\\totBound_{\\alpha}(S(f))$\n\\item $\\totBound_{\\beta}(S(f))$\n\\end{itemize}\nSome of the above defined predicates are equivalent:\n\\begin{prop}\n~\n\\begin{itemize}\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\alpha}(E^{n})\\Leftrightarrow\\exists n\\in\\mathbb{N}:\\totBound_{\\alpha}(f^{n})$.\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\beta}(E^{n})\\Leftrightarrow\\exists n\\in\\mathbb{N}:\\totBound_{\\beta}(f^{n})$.\n\\end{itemize}\n\\end{prop}\n\\begin{proof}\nBecause for every $E\\in\\up f$ some $F\\in\\up f^{n}$ is a subset of $E^{n}$, we have\n\\[\\forall E\\in\\up f:\\thick_{\\alpha}(E^{n})\\Leftrightarrow\\forall F\\in\\up f^n:\\thick_{\\alpha}(F)\\]\nand likewise for $\\thick_{\\beta}$.\\end{proof}\n\\begin{prop}\n~\n\\begin{itemize}\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\alpha}(E^{0}\\sqcup\\ldots\\sqcup E^{n})\\Leftrightarrow\\exists n\\in\\mathbb{N}:\\totBound_{\\alpha}(f^{0}\\sqcup\\ldots\\sqcup f^{n})$\n\\item $\\exists n\\in\\mathbb{N}\\forall E\\in\\up f:\\thick_{\\beta}(E^{0}\\sqcup\\ldots\\sqcup E^{n})\\Leftrightarrow\\exists n\\in\\mathbb{N}:\\totBound_{\\beta}(f^{0}\\sqcup\\ldots\\sqcup f^{n})$\n\\end{itemize}\n\\end{prop}\n\\begin{proof}\nIt's enough to prove\n\\begin{align}\n\\label{thk-1}&\\forall E\\in\\up f\\exists F\\in\\up(f^0\\sqcup\\dots\\sqcup f^n):F\\sqsubseteq E^{0}\\sqcup\\ldots\\sqcup E^{n} \\text{ and}\\\\\n\\label{thk-2}&\\forall F\\in\\up(f^0\\sqcup\\dots\\sqcup f^n)\\exists E\\in\\up f:E^{0}\\sqcup\\ldots\\sqcup E^{n}\\sqsubseteq F.\n\\end{align}\n\nFor the formula~\\eqref{thk-1} take $F=E^0\\sqcup\\dots\\sqcup E^n$.\n\nLet's prove~\\eqref{thk-2}. Let $F\\in\\up(f^0\\sqcup\\dots\\sqcup f^n)$. 
Using the fact that $F\\in\\up f^i$ take\n$E_i\\in\\up f$ for $i=0,\\dots,n$ such that $E_i^i\\sqsubseteq F$ (exercise~\\ref{rld-fn} and properties of generalized filter bases) and then $E=E_{0}\\sqcap\\dots\\sqcap E_{n}\\in\\up f$. We have\n$E^{0}\\sqcup\\ldots\\sqcup E^{n}\\sqsubseteq F$.\n\\end{proof}\n\\begin{prop}\nAll predicates in the above list are pairwise equivalent in the case\nif $f$ is a uniform space.\\end{prop}\n\\begin{proof}\nBecause $f\\circ f=f$\nand thus $f^n=f^0\\sqcup\\dots\\sqcup f^n=S(f)=f$.\\end{proof}\n\n", "meta": {"hexsha": "dfe50c18754fc38b9393588e0b866c9ea75d0726", "size": 12062, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-bound.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-bound.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-bound.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.0785714286, "max_line_length": 221, "alphanum_fraction": 0.691096004, "num_tokens": 4483, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.795658090372256, "lm_q2_score": 0.7490872187162396, "lm_q1q2_score": 0.5960173059660276}}
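A concrete finite illustration of the two thickness notions (an added example in the notation above): take $\\Ob E = \\{0,1,2\\}$ and $\\GR E = \\CS(S)$ for the finite cover $S = \\{\\{0,1\\},\\{1,2\\}\\}$; then $E$ is $\\alpha$-thick by construction, and it is also $\\beta$-thick with $B = \\{1\\}$, since $\\rsupfun{\\GR E}\\{1\\} = \\{0,1,2\\} = \\Ob E$, in accordance with the proposition that $\\thick_{\\alpha}(E)\\Rightarrow\\thick_{\\beta}(E)$.\n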
{"text": "\n\n\n\\subsection{Physical system and governing equations}\n\nWe consider a thin layer of fluid over a flat bottom and with a free\nsurface.\n%\nUnder the assumption that the characteristic horizontal scale of the\nflow $L$ is much larger than the mean layer thickness $H_0$, it can be\nshown that the pressure is only a function of the local layer\nthickness $H(x, y)$, i.e.\\ that the flow is hydrostatic.\n%\nFurther assuming that the horizontal flow is vertically invariant, it\ncan be shown that the shallow-water system is governed by the\nSaint-Venant equations \\cite[see for example][]{VallisLIVRE2006}\n\\begin{eqnarray}\n(\\p_t + \\uu\\cdot \\bnabla ) \\uu   \n&=& - c^2 \\bnabla h - f \\eez \\wedge \\uu + \\nu \\bnabla^2 \\uu,   \\label{eq_uu}\\\\\n\\p_t   h    &=& - \\bnabla \\cdot (h \\uu).  \\label{eq_h}\n\\end{eqnarray}\nwhere $\\uu$ is the horizontal velocity, %\n$h = H/H_0$ the normalized layer thickness, %\n$\\bnabla$ the horizontal gradient, %\n$\\nu$ the kinetic viscosity, %\n$f$ the Coriolis parameter %\nand %\n$c = \\sqrt{gH_0}$, with $g$ the gravitational acceleration.\n%\nIn the following, the Coriolis parameter is assumed to be spatially\nhomogeneous.\n\n\n\n\\subsection{Eigenmodes of the linearized equations}\n\n\\label{subsection_eigen}\n\nNeglecting the non-linear and the dissipative terms, the governing\nequations can be rewritten as\n\\begin{equation}\n\\p_t q = 0, \\mbox{\\hspace{1cm}} \n\\p_t d = a \\mbox{\\hspace{6mm} and \\hspace{6mm}} \n\\p_t a = - c^2\\varkappa^2 d, \n\\end{equation}\nwhere $q = \\zeta - f\\eta$ is the linear Charney potential vorticity, %\n$d = \\bnabla \\cdot \\uu$ the horizontal divergence, %\n$a = -c^2\\bnabla^2 \\eta + f\\zeta$ an ageostrophic variable and %\n$\\varkappa^2 = {k_d}^2 - \\bnabla^2$ the Helmholtz operator, %\nwith $k_d = f/c$ the deformation wave number. 
%\nHere, $\\eta = h-1$ is the normalized surface displacement and %\n$\\zeta = \\eez \\cdot (\\bnabla \\wedge \\uu)$ is the vertical component of\nthe vorticity.\n%\nThe dispersion relation is $\\omega^2 = {\\omega_l}^2$, where\n\\begin{equation}\n \\omega_l(\\kk) \\equiv c  \\sqrt{ {k_d}^2 + |\\kk|^2 } = \\sqrt{ f^2 + c^2|\\kk|^2 },\n\\end{equation}\nwhich implies that in the non-rotating case the waves are\nnon-dispersive.\n%\nThe eigenfunctions of the linear operator are the prograde and\nretrograde linear waves with positive and negative frequencies,\nrespectivelly.\n%\nDenoting the prograde quantities by the index $+$, the retrograde\nquantities by the index $-$ and the temporal and spatial Fourier\ntransform by a tilde, we have by definition %\n$a = a_+ + a_-$, $d = d_+ + d_-$, %\n$\\widetilde{\\p_t a_\\pm} = \\mp i \\omega_l \\tilde a_\\pm$ and %\n$\\widetilde{\\p_t d_\\pm} = \\mp i \\omega_l \\tilde d_\\pm$.\n%\nThe linearized governing equations for the waves can be rewritten as %\n$i \\tilde a_\\pm = \\pm \\omega_l \\tilde d_\\pm$, which gives %\n$ \\tilde a_\\pm = \\tilde a /2 \\pm \\omega_l \\tilde d /(2i) .$\n\n\n\n\n\\subsection{Conserved quantities}\n\n\nThe non-dissipative one-layer shallow-water equations conserve a local\nquantity along the trajectories of the fluid particles, the Ertel\npotential vorticity $\\ErtelPV = \\zeta_a/h$, where $\\zeta_a = \\zeta +\nf$ is the absolute vorticity.\n%\nNote that since the lagrangian derivative of the Ertel potential\nvorticity is $\\D_t \\ErtelPV = 0$, the space-averaged Ertel potential\nvorticity is not conserved $\\p_t \\meanx{\\ErtelPV} = \\meanx{d \\ErtelPV}\n\\neq 0 $, where the brackets $\\meanx{}$ denote the space average.\n%\nIn the case of two-dimensional turbulence, it can be shown that the\nenergy can not cascade towards small scales due to the \nconstraints that the enstrophy $\\meanx{ \\zeta^2/2 }$ is conserved and\nthat the spectra of energy and enstrophy are proportional\n\\cite[]{Kraichnan1967}.\n%\nThis result was further extended by \\cite{Charney1971} to the case of\nquasi-geostrophic turbulence using the analogy between the\nenstrophy and the Charney linear potential enstrophy $\\meanx{ q^2/2\n}$, where $q$ is the linear potential vorticity equal in the shallow\nwater case to $q = \\zeta - f \\eta$.\n%\nNote, however, that the analogy does not hold for the nonlinear Ertel\npotential vorticity $\\ErtelPV$ and that the local conservation of\n$\\ErtelPV$ is a much weaker constraint for the dynamics of the flows.\n\n\nThe non-dissipative one-layer shallow-water equations also conserve\nsome space-averaged quantities.\n%\nFor example, since\n\\begin{equation}\n(\\D_t+d) \\zeta_a = 0 \\Rightarrow \\p_t \\meanx{\\zeta_a} = 0 ,\n\\end{equation}\nthe space-averaged absolute vorticity $\\meanx{\\zeta_a }$ is conserved.\nSince the governing equation for the thickness $h$ and the mass flux\n$\\JJ = h \\uu$ are\n\\begin{eqnarray}\n(\\D_t+d) h \n&=& 0, \\label{eq_h_2}\\\\\n(\\D_t+d) \\JJ \n&=& - \\bnabla E_P - f \\eez \\wedge \\JJ, \\label{eq_JJ}\n\\end{eqnarray}\nwhere $E_P = c^2 h^2/2$ is the total potential energy, the\nspace-averaged thickness $\\meanx{ h }$ is conserved and the kinetic\nmomentum $\\meanx{ \\JJ }$ is conserved in the absolute not-rotating\nreference frame.\n%\nNote that in contrast to the case of the Navier-Stokes equations, \nthe dissipative shallow-water equations do not conserve $\\meanx{ \\JJ }$,\neven with a Newtonian viscous operator.\n\nThe non-dissipative one-layer shallow-water equations also 
conserve\nthe space-averaged total energy, the local energy being the sum of the\nlocal potential energy (PE), $E_P = c^2 h^2/2$, and the local kinetic\nenergy (KE), $E_K = \\JJ\\cdot\\uu/2$.  Using (\\ref{eq_uu}),\n(\\ref{eq_h_2}) and (\\ref{eq_JJ}), it is straightforward to show that\n\\begin{eqnarray}\n(\\D_t + d) E_K\n&=&  \n- \\uu \\cdot \\bnabla E_P, \\\\\n(\\D_t + d) E_P\n&=&  \n- E_P \\bnabla \\cdot \\uu.\n\\end{eqnarray}\n%\nThe space-averaged conversion from potential energy to kinetic energy\nis equal to $C = -\\meanx{ \\uu \\cdot \\bnabla E_P } = \\meanx{ E_P\n\\bnabla \\cdot \\uu }$.\n%\nThe total potential energy can be split in three parts\n\\begin{equation}\nE_P = c^2/2 + c^2\\eta+ c^2 \\eta^2/2.\n\\end{equation}\nThe first term corresponds the potential energy of the state with null\nsurface displacement.  The second term is a contribution to the\npotential energy which is zero in average.  We call the third term\n$E_A = c^2\\eta^2/2$ the available potential energy (APE).  Its\nspace-average is indeed equal to the space-averaged APE.\n", "meta": {"hexsha": "ee533437ff741feebda49cce4f19c9edbacfa542", "size": 6129, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Old/section_basic_theory.tex", "max_stars_repo_name": "ashwinvis/augieretal_jfm_2019_shallow_water", "max_stars_repo_head_hexsha": "88d97c2bd5df0795ca636306c1d795ef1d3a8949", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-23T11:06:53.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-23T11:06:53.000Z", "max_issues_repo_path": "Old/section_basic_theory.tex", "max_issues_repo_name": "ashwinvis/augieretal_jfm_2019_shallow_water", "max_issues_repo_head_hexsha": "88d97c2bd5df0795ca636306c1d795ef1d3a8949", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-08-23T13:00:31.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-23T13:00:31.000Z", "max_forks_repo_path": "Old/section_basic_theory.tex", "max_forks_repo_name": "ashwinvis/augieretal_jfm_2019_shallow_water", "max_forks_repo_head_hexsha": "88d97c2bd5df0795ca636306c1d795ef1d3a8949", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8333333333, "max_line_length": 80, "alphanum_fraction": 0.7245880241, "num_tokens": 1919, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.5960173043115786}}
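A small numeric sketch of the inertia--gravity dispersion relation above; the parameter values are illustrative assumptions, not taken from the text:\n\\begin{verbatim}\nimport numpy as np\n\ng, H0 = 9.81, 1.0e3         # gravity [m/s^2], mean layer thickness [m]\nf = 1.0e-4                  # Coriolis parameter [1/s]\nc = np.sqrt(g * H0)         # gravity-wave speed c = sqrt(g H0)\nk_d = f / c                 # deformation wavenumber k_d = f/c\n\ndef omega_l(k):\n    # omega_l(k) = sqrt(f^2 + c^2 k^2); non-dispersive when f = 0\n    return np.sqrt(f**2 + c**2 * np.asarray(k)**2)\n\nk = np.logspace(-7, -3, 5)  # wavenumbers [1/m]\nprint(k_d, omega_l(k))\n\\end{verbatim}\n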
{"text": "\\documentclass{article}\n    % General document formatting\n    \\usepackage[margin=0.7in]{geometry}\n    \\usepackage[parfill]{parskip}\n    \\usepackage[utf8]{inputenc}\n    \\usepackage{mathrsfs}\n    \\usepackage{amsmath}\n    \\usepackage{amssymb}\n    \\usepackage{tikz}\n    \\usepackage{fancyhdr}\n    \\usepackage{multicol}\n\n    \\usetikzlibrary{positioning}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Edgar Jacob Rivera Rios - A01184125}\n\n\\begin{document}\n\\section*{2.5.3. Linear Temporal Logic (LTL)}\nProve $\\models \\neg \\diamond \\neg p \\rightarrow \\Box p$ (the converse direction (the sufficiency) of Theorem 13.14 in Ben-Ari, M.).\n\\begin{align*}\n    \\models \\neg \\diamond \\neg p &\\rightarrow \\Box p\\\\\n    \\neg \\diamond \\neg p &= True\\\\\n    \\diamond \\neg p &= False\\\\\n    \\forall s \\in S&,\\ \\forall S_{i} \\in \\mathscr{I}(s)\\\\\n    S_{i}(P) &= True \\equiv \\Box p\n\\end{align*}\n\\begin{center}\n    \\begin{tikzpicture}[sibling distance=13em, every node/.style = {align=center}]\n      \\node {$\\neg(\\neg \\diamond \\neg p \\rightarrow \\Box p)$}\n        child {node {$\\neg \\diamond \\neg p, \\neg \\Box p$}\n          child {node {$\\exists s \\in S,\\ S_{i} \\in \\mathscr{I}(j)$}\n            child {node {$p, \\neg p$\\\\$\\times$}\n            edge from parent [solid] node [right] {}}\n          edge from parent [dashed] node [right] {instantiation}}\n        edge from parent node [right] {$\\alpha$}};\n    \\end{tikzpicture}\n\\end{center}\nBy contradiction of the inverse, it's proved true\n\n\\section*{Exercise 2.5.4. Linear Temporal Logic (LTL)}\nProve Theorem 13.15 from Ben-Ari, M.:  $\\models \\Box (p \\rightarrow q) \\rightarrow (\\Box p \\rightarrow \\Box q)$.\n\\begin{align*}\n    \\models \\Box (p \\rightarrow q) \\rightarrow (\\Box p \\rightarrow \\Box q)\\\\\n    \\Box (p \\rightarrow q) = True\\\\\n    \\forall s \\in S,\\ \\forall S_{i} \\in \\mathscr{P}(s)\n\\end{align*}\n\\begin{center}\n    \\begin{tikzpicture}[sibling distance=13em, every node/.style = {align=center}]\n      \\node {$\\neg(\\Box (p \\rightarrow q) \\rightarrow (\\Box p \\rightarrow \\Box q))$}\n        child { node {$\\Box (p \\rightarrow q), \\neg (\\Box p \\rightarrow \\Box q)$}\n          child {node {$\\Box (p \\rightarrow q), \\Box p, \\neg \\Box q$}\n            child {node {$\\exists s \\in S,\\ S_{i} \\in \\mathscr{I}(j)$}\n              child {node {$\\neg p, p, \\neg q$\\\\$\\times$}\n              edge from parent [solid] node [right] {}}\n              child {node {$q, p, \\neg q$\\\\$\\times$}\n              edge from parent [solid] node [right] {}}\n            edge from parent [dashed] node [right] {instantiation}}\n          edge from parent node [right] {$\\alpha$}}\n        edge from parent node [right] {$\\alpha$}};\n    \\end{tikzpicture}\n\\end{center}\nBy contradiction of the inverse, it's proved true\n\\end{document}", "meta": {"hexsha": "8641d9ed7c60d17ab074868b2496067d079e67e5", "size": 2715, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/Homework2_5.tex", "max_stars_repo_name": "edjacob25/Applied-Maths", "max_stars_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/Homework2_5.tex", "max_issues_repo_name": "edjacob25/Applied-Maths", "max_issues_repo_head_hexsha": 
"0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/Homework2_5.tex", "max_forks_repo_name": "edjacob25/Applied-Maths", "max_forks_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7692307692, "max_line_length": 131, "alphanum_fraction": 0.6114180479, "num_tokens": 875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.5959827193950393}}
{"text": "\\documentclass{standalone}\n\n\n\\begin{document}\n\t\n\t\\chapter{Series}\n\t\n\t\\section{Sequences}\n\t\n\tConsider the following set of numbers: \n\t\\begin{center}\n\t\t$2,4,6,8,...$\\\\\n\t\t$1,2,4,8,16,...$\\\\\n\t\t$4,9,16,25,...$\\\\\n\t\\end{center}\n\tIn each of the above cases, the numbers are written in a particular order and there is a clear rule for obtaining the next number, and hence as many further numbers in the list as we wish.\\\\\n\t\n\tThe above are all examples of sequences where a sequence is a set of terms in a defined order with a rule for obtaining the terms.\n\t\n\t\\section{Summation Series}\n\t\n\tWhen the terms of a sequence are added, a summation series is formed:\n\t\n\t\\begin{multicols}{2}\n\t\t\\begin{center}\n\t\t\t$2+4+6+8+10+...$\\\\\n\t\t\t$1+2+4+8+16+...$\\\\\n\t\t\t$4+9+16+25+36+...$\\\\\n\t\t\\end{center}\n\t\t\\begin{center}\n\t\t\t~\\\\\n\t\t\tare all examples of series.\n\t\t\t~\\\\\n\t\t\\end{center}\n\t\\end{multicols}\n\tA series can be finite or infinite, where a finite series consists of a fixed number of terms, whereas an infinite series has an infinite number of terms.\\\\\n\t\n\tConsidering the following series, \\[1+\\frac{1}{2}+ \\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+...\\] we can notice that the general term of this series is $\\frac{1}{2^r}$. The general term of a series is not unique; it depends on the initial value of $r$. Thus, the general term $\\frac{1}{2^{r-1}}$ also corresponds to the above series but we take the initial value of $r$ to be 1.\\\\\n\t\n\tA summation series can be defined in a concise way using the Greek letter $\\Sigma$ denoting the \\textit{summation of terms}. The above series may be expressed as \\[\\sum_{r=0}^\\infty \\frac{1}{2^r}\\] which is equivalent to \\[\\sum_{r=1}^\\infty \\frac{1}{2^{r-1}} \\quad r \\in \\mathbb{Z}\\]\n\t\n\t\\section{Arithmetic Progressions}\n\t\\subsection{Definition}\n\tAn arithmetic progression is a sequence of numbers starting with term $a$, in which successive terms are obtained by adding the same constant, denoted by $d$, referred to as the \\textbf{common difference}.\\\\\n\t\n\t\\subsection{General term}\n\tLet us consider the general A.P. with first term $a$ and common difference $d\\colon$ \\[a+(a+d)+(a+2d)+(a+3d)+(a+4d)+(a+5d)+...\\] By observing the coefficient of $d$ and the position of the term, we can conclude that the general term can be obtained by the equation: \\[T_n = a+(n-1)d\\]\n\t\\subsection{Sum of the first n terms}\n\tConsidering an A.P. with $n$ terms, let the first term be $a$, the common difference to be $d$\tand the last term to be $l$. Adding the two expressions for $S_n$ below term by term, each of the $n$ pairs of terms sums to $a+l$:\n\t\n\t\\begin{align*}\n\t\t\t\tS_n &= a+(a+d)+(a+2d)+(a+3d)+(a+4d)+(a+5d)+...+(l-d)+l\\\\\n\\implies\tS_n &= l+(l-d)+(l-2d)+(l-3d)+(l-4d)+(l-5d)+...+(a+d)+a\\\\\n\t\\implies\t2S_n &= n(a+l)\\\\\n \\implies   S_n  &= \\frac n2 (a+l)\\\\\n\t   \t\t&= \\frac n2 (2a +(n-1)d)\n\t\\end{align*}\n\t\\newpage\n\t\\section{Geometric progressions}\n\t\\subsection{Definition}\n\tA geometric progression is a sequence of numbers starting with term $a$, in which successive terms are obtained by multiplying by the same constant, denoted by $r$, referred to as the \\textbf{common ratio}.\\\\\n\t\n\t\\subsection{General term}\n\t\n\tLet us consider a general G.P. 
with first term $a$ and a common ratio $r$: \\[a+ar+ar^2+ar^3+ar^4+ar^5+...+ar^{n-1}+\\ldots\\]\n\tBy observing the exponent of $r$ and the position of the term we can conclude that the general term can be obtained by the equation: \\[\\mathrm{T_n} = ar^{n-1}\\]\n\t\n\t\\subsection{Sum of the first n terms}\n\tConsidering a G.P. with first term $a$, common ratio $r$ and its sum denoted by $S_n$:\n\t\n\t\\begin{align*}\n\t\tS_n = a+&ar+ar^2+ar^3+ar^4+ar^5+\\ldots+ar^{n-1}\\\\\n\t\trS_n = \\phantom{a+}&ar+ar^2+ar^3+ar^4+ar^5+\\ldots+ar^n\\\\\n\t\tS_n - rS_n =\\phantom{a+}& a-ar^n\\\\\n\t\t(1-r)S_n = \\phantom{a+}&a(1-r^n)\\\\\n\t\tS_n = \\phantom{a+}&  \\frac{a(1-r^n)}{1-r} \\qed\n\t\t\\end{align*}\n\t\n\t\n\t\\section{Convergence of a series}\n\t\n\tFor any\\footnote{A.P.'s by definition \\textbf{do not} converge.} given G.P., we can always add more and more terms to the end of the series, but what happens to its sum? Provided that $\\abs r < 1$, the sum gets arbitrarily close to a definite number.\\\\\n\t\n\tConsidering the G.P. $\\frac 12 + \\frac 14 + \\frac 18 + \\frac 1 {16} + \\ldots$ we can observe that if you keep on adding terms and summing them, the sum will approach 1. In mathematical language we say \\[\\text{as } n \\rightarrow \\infty\\,,\\,S_n \\rightarrow 1\\] or \\[\\lim_{n\\to\\infty}S_n = 1\\]\n\t\n\t\\section{Binomial theorem}\n\t\n\tThe binomial theorem states that \\[(x+y)^n = \\sum_{k=0}^n\\begin{pmatrix}n\\\\k\\end{pmatrix} x^ky^{n-k}\t\\]\n\t\n\t\\section{Maclaurin Series}\n\t\\subsection{Derivation}\n\tLet $f(x)$ be any function of $ x $ and suppose that $ f(x) $ can be expanded as a series of ascending powers of $ x $ and that this series can be differentiated $ w.r.t. x $\n\t\\begin{center}\n\t\t$\tf(x) \\equiv a_0   + a_{1}x  + a_{2}x^2 + a_{3}x^3 + a_{4}x^4 + ... 
+\t a_{r}x^r$\\\\\n\t\t\\text{where $a_n$ are constants to be found}\n\t\\end{center}\n\tThus, inputting $0$ into $f(x)$ returns:\n\t$$\\boxed{f(0) = a_{0}}$$\\\\\n\tDifferentiating  $f(x)$ $w.r.t.x\\colon$ \n\t$$f'(x) \\equiv a_{1} + 2a_{2}x + 3a_{3}x^2 + 4a_4x^3 + \\cdots + ra_{r}x^{r-1} + \\cdots$$\n\tInputting $0$ into $f'(x)\\colon$\n\t$$\\boxed{f'(0) = a_1}$$\\\\\n\tDifferentiating $f'(x)$ $w.r.t.x\\colon$\n\t$$f''(x) \\equiv 2a_{2} + 6a_{3}x + 12a_4x^2 + \\cdots + (r-1)(r)a_{r}x^{r-2} + \\cdots$$\n\tInputting $0$ into $f''(x)\\colon$\n\t$$\\boxed{f''(0) = 2a_2}$$\\\\\n\tDifferentiating $f''(x)$ $w.r.t.x\\colon$\n\t$$f'''(x) \\equiv 6a_{3} +24a_4x + \\cdots + (r-2)(r-1)(r)a_{r}x^{r-3}+ \\cdots$$\n\tInputting $0$ into $f'''(x)\\colon$\n\t$$\\boxed{f'''(0) = (2)(3)a_3}$$\n\t$$\\vdots$$\n\t\\newpage\n\tBy the above calculation we can conclude that: \n\t$$\\boxed{a_r = \\frac{f^{(r)}(0)}{r!}}$$\n\tConsidering all of the above: \n\t\n\t\\tcbset{\n\t\tenhanced,\n\t\tcolback=red!5!white,\n\t\tboxrule=0.1pt,\n\t\tcolframe=black!75!black,\n\t\tfonttitle=\\bfseries\n\t}\n\t\\begin{center}\n\t\t\\begin{tcolorbox}[center title,hbox,    %%<<---- here\n\t\t\tlifted shadow={1mm}{-2mm}{3mm}{0.1mm}%\n\t\t\t{black!50!white}]\n\t\t\t\\begin{varwidth}{\\textwidth}\n\t\t\t\t\\begin{center}\n\t\t\t\t\t$f(x) \\equiv f(0) + f'(0)x + \\dfrac{f''(0)x^2}{2!} + \\dfrac{f'''(0)x^3}{3!} + \\cdots + \\dfrac{f^{(r)}(0)x^r}{r!} + \\cdots$\\\\\n\t\t\t\t\t\\bigskip\n\t\t\t\t\t$\\displaystyle \\therefore \\quad f(x) \\equiv \\sum_{r=0}^{\\infty} \\dfrac{f^{(r)}(0)\\,x^r}{r!}$\n\t\t\t\t\\end{center}\n\t\t\t\\end{varwidth}\n\t\t\\end{tcolorbox} \n\t\\end{center}\n\t\n\t\n\tThis is known as Maclaurin's Theorem, and can be applied if and only if every derivative $f^{(r)}(0) \\in \\mathbb{R}$ exists. In the following examples we use Maclaurin's Theorem to obtain the series expansions of some standard functions. 
The range of validity of each expansion is left as an exercise to the reader.\n\t\\subsection{Examples}\n\t\\begin{example}\n\t\t\\emph{Express} $e^x$ \\emph{as a series expansion using the Maclaurin theorem.}\n\t\\end{example}\n\tLet $f(x) = e^x$\n\t\\begin{alignat*}{5}\n\t\t&   & f(x)    & = & e^x \\quad & \\Rightarrow\\quad & f(0) = 1    \\\\\n\t\t&   & f'(x)   & = & e^x \\quad & \\Rightarrow\\quad & f'(0) = 1   \\\\\n\t\t&   & f''(x)  & = & e^x \\quad & \\Rightarrow\\quad & f''(0) = 1  \\\\\n\t\t&   & f'''(x) & = & e^x \\quad & \\Rightarrow\\quad & f'''(0) = 1 \n\t\\end{alignat*}\n\t\n\t$$\\therefore \\quad e^x  = 1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} +\\ldots + \\frac{x^r}{r!} + \\ldots $$\n\t\\hrulefill\n\t\\begin{example}\n\t\t\\emph{Express} $\\cos x$ \\emph{as a series expansion using the Maclaurin theorem.}\n\t\\end{example}\n\t\\begin{alignat*}{5}\n\t\t&   & f(x)    & =           & \\cos (x) \\quad & \\Rightarrow\\quad & f(0) = 1    \\\\\n\t\t&   & f'(x)   & = \\text{ -} & \\sin (x) \\quad & \\Rightarrow\\quad & f'(0) = 0   \\\\\n\t\t&   & f''(x)  & = \\text{ -} & \\cos(x) \\quad  & \\Rightarrow\\quad & f''(0) = -1  \\\\\n\t\t&   & f'''(x) & =           & \\sin (x) \\quad & \\Rightarrow\\quad & f'''(0) = 0 \n\t\\end{alignat*}\n\t$$\\therefore \\quad \\cos (x)  = 1 - \\frac{x^2}{2!} + \\frac{x^4}{4!} -\\frac{x^6}{6!}+\\ldots + (-1)^r\\times\\frac{x^{2r}}{(2r)!} + \\ldots $$\\\\\n\tThe above expansion justifies the fact that when $x$ is very small and thus high powers of $x$ may be neglected, then: \\boxed{\\cos x \\approx 1- \\frac{x^2}{2}}\\\\\n\t\\begin{example}\n\t\t\\emph{Express} $\\ln(1+x)$ \\emph{as a series expansion using the Maclaurin theorem.}\n\t\\end{example}\n\t\n\t\\begin{alignat*}{5}\n\t\t&   & f(x)    & = & \\ln(1+x)      \\quad & \\Rightarrow  \\quad & f(0)    & = 0  \\\\\n\t\t&   & f'(x)   & = & (x+1)^{-1} \\quad    & \\Rightarrow  \\quad & f'(0)   & = 1  \\\\\n\t\t&   & f''(x)  & = & -(1+x)^{-2}  \\quad  & \\Rightarrow \\quad  & f''(0)  & = -1 \\\\\n\t\t&   & f'''(x) & = & 2(1+x)^{-3} \\quad   & \\Rightarrow  \\quad & f'''(0) & = 2  \n\t\\end{alignat*}\n\t$$\\therefore \\quad \\ln(1+x)  = x - \\frac{x^2}{2} + \\frac{x^3}{3} - \\ldots + (-1)^{r+1}\\times\\frac{x^r}{r} + \\ldots $$\\\\\n\t\\hrulefill\n\t\n\t\\begin{example}\n\t\t\\emph{Expand} $\\arcsin(x)$ \\emph{up to the term in $x^3$. 
By putting $x=\\frac{1}{2}$, find an approximate value for $\\pi$}\n\t\\end{example}\n\t\n\t\\begin{alignat*}{5}\n\t\t& f(x)    & = & \\arcsin(x)      \\quad                                   & \\Rightarrow  \\quad & f(0)    & = 0 \\\\\n\t\t& f'(x)   & = & (1-x^2)^{\\frac{-1}{2}} \\quad                            & \\Rightarrow  \\quad & f'(0)   & = 1 \\\\\n\t\t& f''(x)  & = & x(1-x^2)^{\\frac{-3}{2}}\\quad                            & \\Rightarrow \\quad  & f''(0)  & = 0 \\\\\n\t\t& f'''(x) & = & 3x^2(1-x^2)^{\\frac{-5}{2}} + (1-x^2)^{\\frac{-3}{2}} \\quad & \\Rightarrow  \\quad & f'''(0) & = 1 \n\t\\end{alignat*}\n\t$$\\therefore \\quad \\arcsin(x)   = x + \\frac{x^3}{3!} + \\ldots $$\\\\\n\tPutting $x = \\frac{1}{2}$\\\\\n\t\\begin{alignat*}{2}\n\t\t&          & f\\left(\\frac{1}{2}\\right) & = \\frac{\\pi}{6}                                \\\\\n\t\t& \\implies & \\pi                       & \\approx 6\\left(\\frac{1}{2}+\\frac{1}{48}\\right) \\\\\n\t\t& \\implies & \\pi                       & \\approx \\frac{25}{8}                          \n\t\\end{alignat*}\n\t\\newpage\n\t\\subsection{Expanding compound functions using standard functions}\n\t\\begin{example}\n\t\tExpand $a)\\quad \\dfrac{e^{2x}+ e^{-2x}}{e^x}\\quad b)\\quad \\ln\\left( \\dfrac{1-2x}{(1+2x)^2}\\right)$ as series of ascending powers of $x$ up to the term in $x^4$. Give the general term in each case and the range of values of $x$ for which each expansion is valid.\n\t\\end{example}\n\tFor $a)$, note that $\\dfrac{e^{2x}+e^{-2x}}{e^x} = e^{x} + e^{-3x}$.\n\t\\begin{alignat*}{2}\n\t\ta) &                  & e^x                                    & = 1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} +\\ldots                                                                                                  \\\\\n\t\t&                  & e^{-3x}                                & = 1 + (-3)x + \\frac{(-3x)^2}{2!} + \\frac{(-3x)^3}{3!} + \\frac{(-3x)^4}{4!} +\\ldots                                                                                  \\\\\n\t\t& \\therefore \\quad & e^{x} + e^{-3x}                        & = \\left (1 + x + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!}\\right ) + \\left (1 + (-3)x + \\frac{(-3x)^2}{2!} + \\frac{(-3x)^3}{3!} + \\frac{(-3x)^4}{4!}\\right ) \\\\\n\t\t&                  &                                        & = 2 -2x + \\frac{10x^2}{2!} - \\frac{26x^3}{3!} + \\frac{82x^4}{4!}                                                                                                    \\\\\n\t\t\\bigskip\\\\\n\t\tb) &                  & \\ln\\left(\\dfrac{1-2x}{(1+2x)^2}\\right) & = \\ln(1-2x) -2\\left (\\ln(1+2x)\\right)                                                                                                                               \n\t\t\\intertext{\\emph{Consider $\\ln(1-2x)\\colon$}}\n\t\t&                  & \\ln\\left(1+(-2x)\\right)                & = -2x + -2x^2-\\frac{8x^3}{3}-4x^4 + \\ldots + \\frac{(-1)^{r-1}(-2x)^r}{r}+ \\ldots                                                                                    \\\\\n\t\t\\intertext{\\emph{Consider $\\ln(1+2x)\\colon$}}\n\t\t&                  & \\ln\\left(1+2x\\right)                   & = 2x + -2x^2+\\frac{8x^3}{3}-4x^4 +\\ldots + \\frac{(-1)^{r-1}(2x)^r}{r} + \\ldots                                                                                    \\\\ \n\t\t& \\therefore       & \\ln\\left(\\dfrac{1-2x}{(1+2x)^2}\\right) & = \\left( -2x + -2x^2-\\frac{8x^3}{3}-4x^4\\right) -2\\left(2x + -2x^2+\\frac{8x^3}{3}-4x^4\\right)
\\\\\n\t\t&                  &                                        & = -6x+2x^2-8x^3\t+4x^{4}                                                                                                                                             \\\\\n\t\\end{alignat*}\n\t\\text{General term and range of validity:}\n\t\\begin{multicols}{2}\n\t\t\\begin{center}\n\t\t\t$$\t\\quad\\frac{(-1)^{r-1}(-2x)^r}{r} - \\frac{2(-1)^{r-1}(2x)^r}{r}$$\\\\\n\t\t\t$$  =\\frac{(-1)^{r-1}(-1)^r(2x)^r+2(-1)^r(2x)^r}{r}$$\\\\\n\t\t\t$$\t=\\frac{(-1)^{2r-1}(2x)^r+2(-1)^r(2x)^r}{r}$$\\\\\n\t\t\t$$\t=\\frac{\\left( (-1)^{2r-1}+2(-1)^r\\right) (2x)^r}{r}$$\\\\\n\t\t\t$$  =\\frac{ \\left(-1+2(-1)^r\\right)(2x)^r}{r}$$\n\t\t\t$$ =\\frac{2^r(2(-1)^r -1)x^r}{r}$$\n\t\t\t$$ \\text{valid for } \\abs{x} < \\tfrac{1}{2}$$\n\t\t\\end{center}\n\t\\end{multicols}\n\t\n\t\\begin{example}\n\t\tExpand  $\\ln\\left(\\frac{x+1}{x}\\right)\\quad$ as a series of ascending powers of $\\frac{1}{x}$. Give the general term and the range of values of $x$ for which the expansion is valid.\n\t\\end{example}\n\t\n\t$$f(x) = \\ln\\left(\\frac{x+1}{x}\\right)  = \\ln\\left(1+\\frac{1}{x}\\right ) $$\n\tSince $f$ is not defined at $x=0$, it has no Maclaurin expansion in powers of $x$; instead we substitute $\\frac{1}{x}$ into the standard expansion of $\\ln(1+x)$ obtained above:\n\t\n\t\\begin{center}\n\t\t$=\\frac{1}{x} - \\frac{1}{2x^2} + \\frac{1}{3x^3} - \\frac{1}{4x^4} + \\ldots + \\frac{(-1)^{r+1}}{rx^r} + \\ldots \\qquad \\text{valid for } x \\geq 1 \\text{ or } x < -1$\n\t\\end{center}\n\t\\hrulefill\n\t\\begin{example}\n\t\tExpand $\\sin^2{x}$ using Maclaurin's series up to $x^4$\n\t\\end{example}\n\t\\bigskip\n\t$$\\sin^2(x)\\equiv\\frac{1-\\cos(2x)}{2}$$\n\t\\begin{alignat*}{2}\n\t\t\\intertext{\\emph{Consider} $\\cos(2x)\\colon$} &                 &           & =1-\\frac{(2x)^{2}}{2!} + \\frac{(2x)^4}{4!}  - \\cdots + \\frac{(-1)^r(2x)^{2r}}{(2r)!}+\\cdots           \\\\\n\t\t&                 &           & =1-2x^2+\\frac{2x^4}{3}-\\cdots+\\frac{(-1)^r(2x)^{2r}}{(2r)!}+\\cdots                                    \\\\\n\t\t& \\therefore\\quad & \\sin^2(x) & \\equiv \\frac{1}{2}\\left( 1-(1-2x^2+\\frac{2x^4}{3}-\\cdots+\\frac{(-1)^r(2x)^{2r}}{(2r)!}+\\cdots)\\right) \\\\\n\t\t&                 &           & =\\frac{1}{2}\\left(2x^2 - \\frac{2x^4}{3} +\\cdots+\\frac{(-1)^{r+1}(2x)^{2r}}{(2r)!}+\\cdots\\right)       \\\\\n\t\t&                 &           & =x^2-\\frac{x^4}{3} +\\cdots+\\frac{(-1)^{r+1}(2x)^{2r}}{2(2r)!}+\\cdots                                   \n\t\\end{alignat*}\n\t\\newpage\n\t\\begin{example}\n\t\tGiven that $e^{2x}\\ln(1+ax) = px + \\frac{3}{2}x^2 + qx^3 + \\cdots$, find the possible values of $p$ and $q$.\n\t\\end{example}\n\t\\begin{alignat*}{2}\n\t\t\\intertext{\\emph{Consider} $e^{2x}\\colon$}\n\t\t&                 & e^{2x}                & = 1+2x + \\frac{(2x)^2}{2!} + \\frac{(2x)^{3}}{3!} +\\cdots +  \\frac{(2x)^r}{r!} + \\cdots           \\\\\n\t\t\\intertext{\\emph{Consider} $\\ln(1+ax)\\colon$}\n\t\t&                 & \\ln(1+ax)             & = ax - \\frac{(ax)^2}{2} + \\frac{(ax)^3}{3} - \\cdots + \\frac{(-1)^{r+1}(ax)^r}{r} + \\cdots        \\\\\n\t\t& \\therefore\\quad & e^{2x}\\cdot \\ln(1+ax) & = \\left(1+2x+2x^2+\\frac{4x^3}{3}\\right) \\left(ax - \\frac{a^2x^2}{2} + \\frac{a^3x^3}{3}\\right) \\\\\n\t\t&                 &                       & 
=ax-\\frac{a^2x^2}{2}+\\frac{a^3x^3}{3}+2ax^2-2a^2x^3                                           \\\\\n\t\t&                 &                       & =ax-\\left(\\frac{a^2}{2}+2a\\right) x^2+\\left( \\frac{a^3}{3}-2a^2\\right) x^3                    \n\t\\end{alignat*}\n\t~\\\\\n\t\\begin{equation*}\n\t\t\\therefore\\left.\n\t\t\\begin{alignedat}{2}\n\t\t\t\\hspace{0.5 in}\t&&p&=a\\\\\n\t\t\t&&\t\\frac{a^2}{2}+2a&=\\frac{-3}{2}\\\\\n\t\t\t&&\t\\frac{a^3}{3}-2a^2&=q\n\t\t\\end{alignedat}\t\\right\\}\n\t\\end{equation*}\n\t\n\t\\begin{center}\n\t\t$$ p = -3,-1 $$\n\t\t$$ q= -27, -\\dfrac{7}{3}$$\n\t\\end{center}\n\t\\hrulefill\n\t\\newpage\n\t\\section{Summation of Series}\n\t\\subsection{Method 1: Generating differences}\n\t\n\t\\begin{example}\n\t\tSimplify $f(r)-f(r+1)$, when $f(r) = \\frac{1}{r^2}$. Hence, find the sum up to $n$ terms of:\\\\ $$\\sigma_1 = \\frac{3}{1^2\\cdot 2^2} + \\frac{5}{2^2\\cdot 3^2} + \\frac{7}{3^2\\cdot 4^2} + \\ldots$$\n\t\\end{example}\n\t~\\\\\n\tSimplifying $f(r) - f(r+1)\\colon$\n\t\\begin{alignat*}{2}\n\t\t&   & f(r)-f(r+1) & = \\frac{1}{r^{2}} - \\frac{1}{(r+1)^{2}} \\\\\n\t\t&   &             & =\\frac{(r+1)^{2}-r^2}{r^2(r+1)^{2}}     \\\\\n\t\t&   &             & =\\frac{2r+1}{r^2(r+1)^2}                \n\t\\end{alignat*}\n\t\\textit{Generating series and adding}\\\\\n\t\\textit{quantitatively equivalent terms:}\n\t\\begin{center}\n\t\t$\\frac{1}{1^2} - \\cancel{\\frac{1}{2^2}}$\n\t\t$$\\cancel{\\frac{1}{2^2}} - \\cancel{\\frac{1}{3^2}}$$\n\t\t$$\\cancel{\\frac{1}{3^2}} - \\cancel{\\frac{1}{4^2}}$$\n\t\t$$\\vdots$$\n\t\t$$\\cancel{\\frac{1}{n^2}} - \\frac{1}{(n+1)^2}$$\n\t\t~\\\\\n\t\t$$\\boxed{\\therefore \\quad \\sigma_1 = 1 - \\frac{1}{(n+1)^2}}$$\n\t\\end{center}\n\t\\hrulefill\n\t\\begin{example}\n\t\tIf $f(r) = r(r+1)!$, simplify $f(r) - f(r-1)$. Hence sum the series:\\\\\n\t\\end{example}\n\t\\begin{center}\n\t\t$\\sigma_1 = 5\\cdot 2! + 10\\cdot 3! + 17\\cdot 4! +  ... + (n^2+1)n!$\\\\\n\t\\end{center}\n\t\\hrulefill\n\t\\begin{alignat*}{2}\n\t\t&   & f(r)-f(r-1) & =r(r+1)! - (r-1)r!  \\\\\n\t\t&   &             & =r(r+1)r! - (r-1)r! \\\\\n\t\t&   &             & =r!(r^2+r-r+1)      \\\\\n\t\t&   &             & =r!(r^2+1)          \n\t\\end{alignat*}\n\t\\newpage\n\t\\textit{Generating series and adding}\\\\\n\t\\textit{quantitatively equivalent terms:}\n\t\\begin{center}\n\t\t$\\bcancel{f(2)} - f(1)$\n\t\t$$\\bcancel{f(3)} - \\bcancel{f(2)}$$\n\t\t$$\\bcancel{f(4)} - \\bcancel{f(3)}$$\n\t\t$$\\vdots$$\n\t\t$$\\bcancel{f(n-1)} - \\bcancel{f(n-2)}$$\n\t\t$$f(n) - \\bcancel{f(n-1)}$$\\\\\n\t\t$$\\boxed{\\therefore \\quad \\sigma_1 = f(n) - f(1) = n(n+1)! - 2}$$\n\t\\end{center}\n\t\\begin{example}\n\t\tIf $f(r) = \\cos2r\\theta$, simplify $f(r) - f(r+1)$. 
Hence find $\\sin3\\theta + \\sin5\\theta + \\sin7\\theta + \\ldots + \\sin(2n+1)\\theta$\n\t\\end{example}\n\t\\hrulefill\n\t\\begin{alignat*}{2}\n\t\t&   & f(r)-f(r+1) & =\\cos(2r\\theta)- \\cos(2(r+1)\\theta)                                                                            \\\\\n\t\t&   &             & =-2\\sin\\left(\\frac{2r\\theta + (2r+2)\\theta}{2}\\right)\\cdot \\sin\\left(\\frac{ 2r\\theta - 2(r+1)\\theta}{2}\\right) \\\\\n\t\t&   &             & =-2\\sin(2r\\theta+ \\theta)\\sin(-\\theta)                                                                         \\\\\n\t\t&   &             & =2\\sin(\\theta[2r+1])\\sin\\theta                                                                                 \n\t\\end{alignat*}\n\t\\textit{Generating series and adding}\\\\\n\t\\textit{quantitatively equivalent terms:}\\\\\n\t\\begin{center}\n\t\t\n\t\t\\begin{tabular}{ccccc}\n\t\t\t$r=1$    &   & $2\\sin(3\\theta)\\sin(\\theta)$ & $=$ & $f(1)\\cancel{-f(2)}$          \\\\\n\t\t\t$r=2$    &   & $2\\sin(5\\theta)\\sin(\\theta)$ & $=$ & $\\cancel{f(2)}\\cancel{-f(3)}$ \\\\\n\t\t\t$r=3$    &   & $2\\sin(7\\theta)\\sin(\\theta)$ & $=$ & $\\cancel{f(3)}\\cancel{-f(4)}$ \\\\\n\t\t\t$\\vdots$ &   & $\\vdots$                     & $=$ & $\\vdots$                      \\\\\n\t\t\t$r=n$    &   & $2\\sin((2n+1)\\theta)\\sin(\\theta)$    & $=$ & $\\cancel{f(n)}-f(n+1)$        \n\t\t\\end{tabular}\\\\\n\t\t~\\\\\n\t\t\\begin{alignat*}{2}\n\t\t\t&   & \\sum_{r=1}^{n}2\\sin((2r+1)\\theta)\\sin\\theta & = f(1) - f(n+1)                                \\\\\n\t\t\t&   &               & =\\cos(2\\theta)- \\cos(2(n+1)\\theta) \\\\\n\t\t\t& \\therefore\\quad & \\sum_{r=1}^{n}\\sin((2r+1)\\theta) & =\\frac{\\cos(2\\theta)- \\cos(2(n+1)\\theta)}{2\\sin(\\theta)}     \\\\\n\t\t\t&   &               & =\\frac{\\sin((n+2)\\theta)\\sin(n\\theta)}{\\sin(\\theta)}      \n\t\t\\end{alignat*}\n\t\\end{center} \n\t\n\t\n\t\\subsection{Method 2: Using partial fractions}\n\tA special case of the previous method arises when the general term can be split by a partial fraction decomposition.\n\t\n\t\\begin{example}\n\t\tDecompose $\\frac{1}{r(r+1)}$. 
Hence find the sum of $$\\sigma_1 = \\frac{1}{1\\cdot 2} + \\frac{1}{2\\cdot 3} + \\frac{1}{3\\cdot 4} + \\cdots$$\n\t\\end{example}~\\\\\n\t\\textit{Decomposing:}\n\t$$\\frac{1}{r(r+1)} \\equiv \\frac{1}{r} - \\frac{1}{r+1}$$\n\t\\textit{Generating series and adding}\\\\\n\t\\textit{quantitatively equivalent terms:}\\\\\n\t\\begin{center}\n\t\t\\begin{tabular}{ccccc}\n\t\t\t$r=1$    &   & $\\frac{1}{1}$          & $-$ & $\\cancel{\\frac{1}{2}}$\\linebreak \\\\\n\t\t\t&&&\\\\\n\t\t\t$r=2$    &   & $\\cancel{\\frac{1}{2}}$ & $-$ & $\\cancel{\\frac{1}{3}}$           \\\\\n\t\t\t&&&\\\\\n\t\t\t$r=3$    &   & $\\cancel{\\frac{1}{3}}$ & $-$ & $\\cancel{\\frac{1}{4}}$           \\\\\n\t\t\t&&&\\\\\n\t\t\t$\\vdots$ &   & $\\cancel{\\vdots}$      & $-$ & $\\cancel{\\vdots}$                \\\\\n\t\t\t&&&\\\\\n\t\t\t$r=n$    &   & $\\cancel{\\frac{1}{n}}$ & $-$ & $\\frac{1}{n+1}$                  \\\\\n\t\t\\end{tabular}\\\\\n\t\\end{center}\n\t\\begin{alignat*}{2}\t\n\t\t& \\therefore \\quad & \\sigma_1 & = 1 - \\frac{1}{n+1}                                  \\\\\n\t\t\\intertext{Finding the convergent value: }\n\t\t&                  & \\lim_{n \\to \\infty} \\left( 1 - \\frac{1}{n+1}\\right) & = 1 \n\t\\end{alignat*}\n\t\\hrulefill\n\t\\newpage\n\t\\begin{example}\n\t\tFind $\\quad \\sum_{r=3}^n \\frac{2}{(r-1)(r+1)}$\n\t\\end{example}\n\t\\begin{alignat*}{2}\t\n\t\t\\intertext{Consider $\\frac{2}{(r-1)(r+1)}\\colon$}\n\t\t&   & \\frac{2}{(r-1)(r+1)}                                      & \\equiv \\frac{1}{r-1} - \\frac{1}{r+1}                           \\\\\n\t\t&   & \\therefore \\quad \\sum_{r=3}^n \\dfrac{2}{(r-1)(r+1)} \\quad & \\equiv \\quad  \\sum_{r=3}^n \\left(\\dfrac{1}{r-1} - \\dfrac{1}{r+1}\\right) \\\\\n\t\\end{alignat*}\n\t\\textit{Generating series and adding}\\\\\n\t\\textit{quantitatively equivalent terms:}\\\\\n\t\\subsection{Method 3: Using standard results}\n\t\n\t\\subsection{Method 4: Comparing to standard results}\n\t\n\\end{document}", "meta": {"hexsha": "442145497ba48bbc5f34a7b08dd4ffe74556764a", "size": 20998, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pure Mathematics/Series.tex", "max_stars_repo_name": "Girogio/My-LaTeX", "max_stars_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-12T11:45:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-30T21:47:25.000Z", "max_issues_repo_path": "Pure Mathematics/Series.tex", "max_issues_repo_name": "Girogio/My-LaTeX", "max_issues_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pure Mathematics/Series.tex", "max_forks_repo_name": "Girogio/My-LaTeX", "max_forks_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.4656862745, "max_line_length": 376, "alphanum_fraction": 0.4929993333, "num_tokens": 8471, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8152324983301568, "lm_q1q2_score": 0.5959827114822323}}
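A quick numerical check of the two telescoping closed forms derived in the notes above, using exact rational arithmetic (the helper names are illustrative; Python is used here purely as a calculator):

\begin{lstlisting}
from fractions import Fraction

# sum of (2r+1)/(r^2 (r+1)^2) for r = 1..n, which telescopes to 1 - 1/(n+1)^2
def sigma_squares(n):
    return sum(Fraction(2*r + 1, r**2 * (r + 1)**2) for r in range(1, n + 1))

# sum of 1/(r(r+1)) for r = 1..n, which telescopes to 1 - 1/(n+1)
def sigma_partial_fractions(n):
    return sum(Fraction(1, r * (r + 1)) for r in range(1, n + 1))

for n in (1, 5, 50):
    assert sigma_squares(n) == 1 - Fraction(1, (n + 1)**2)
    assert sigma_partial_fractions(n) == 1 - Fraction(1, n + 1)
print("both closed forms check out")
\end{lstlisting}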
{"text": "\\section{Concluding Locality! and CW-complexes}\nAgain recall what $\\sca$-small covers, etc. are. We want to prove that:\n\\begin{theorem}\n$S^\\sca_\\ast(X)\\hookrightarrow S_\\ast(X)$ is a quasi-isomorphism, i.e., an isomorphism on homology.\n\\end{theorem}\nWe developed the subdivision operator $\\$^k:S_\\ast(X)\\to S_\\ast(X)$, and proved that it's a chain map. We showed that $T_k:\\$^k\\sim 1$.\n\\begin{proof}[Proof of locality]\nWe want to prove surjectivity of $ H_n(S^\\sca_\\ast(X))\\to H_n(S_\\ast(X))= H_n(X)$. Let $c\\in Z_n(C_\\bullet)(X)$. We want to find an $\\sca$-small $n$-cycle that is homologous to $c$. There's only one thing to do. Pick $k$ such that $\\$^k c$ is $\\sca$-small. This is a cycle because $d\\$^k c=\\$^k dc=0$ because $\\$^k$ is a chain map. I want to compare this new cycle with $c$. Consider the chain homotopy $T_k$; then: $dT_k c+T_kdc=\\$^kc-c$. But $dc=0$, so $\\$^k c - c=dT_k c$, so they differ by a boundary, and they're homologous.\n\nNow for injectivity. Suppose $c\\in S^\\sca_n(X)$ with $dc=0$. Suppose that $c=db$ for some $b\\in S_{n+1}(X)$, not necessarily $\\sca$-small. We want $c$ to be a boundary of an $\\sca$-small chain. Well:\n\\begin{align*}\n& dT_kb+T_kdb=\\$^k b-b\\\\\n\\Rightarrow& dT_kb+T_kc=\\$^k b-b\\\\\n\\Rightarrow& d(dT_kb+T_kc)=\\$^kb-b=dT_kc=d\\$^k b-c\\\\\n\\Rightarrow& c=d\\$^kb-dT_kc=d(\\$^k b-T_kc)\n\\end{align*}\nNow, $\\$^k$ is $\\sca$-small. Is $T_kc$ also $\\sca$-small? I claim that it is. Why? It is enough to show that $T_k\\sigma$ is $\\sca$-small if $\\sigma$ is. We know that $\\sigma=\\sigma_\\ast\\iota_n$. Because $\\sigma$ is $\\sca$-small, we know that $\\sigma:\\Delta^n\\to X$ is the composition $i_\\ast\\overline{\\sigma}$ where $\\overline{\\sigma}:\\Delta^n\\to A$ and $i:A\\to X$ is the inclusion for some $A\\in\\sca$. This means that $T_k\\sigma=T_ki_\\ast\\overline{\\sigma}=i_\\ast T_k\\overline{\\sigma}$, which certainly is $\\sca$-small.\n\\end{proof}\n``Are you happy? You should be very happy, because we've finished our first portion of this course. We now have a whole package of homology.''\n\\subsection{CW-complexes}\nSimplicial complexes are rigid and combinatorial. But manifolds are smooth. In between, you have CW-complexes. (A lot of advertisement for this.) We want to ``glue'' things. This is the pushout construction. Namely, if you have $i:A\\hookrightarrow B$ and $f:A\\to X$, then you define $X\\cup_f B$ (or $X\\cup_A B$) via:\n\\begin{equation*}\n\\xymatrix{A\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]\\\\\nB\\ar[r] & X\\cup_f B}\n\\end{equation*}\ndefined by $X\\cup_f B=X\\sqcup B/\\sim$ where $\\forall a\\in A$, $f(a)\\sim a$. This is $X$ with $B$ attached along $f$. There are two kinds of equivalence classes, namely elements of $B-A$, because anything not in $A$ is just a singleton. The other is $\\{x\\}\\cup f^{-1}(x)$ for $x\\in X$, because anything that's not in $\\img f$ is a singleton, but if something is in $\\img f$, you identify it with its preimage. This is what it is as a set. It has a universal property. Suppose you have another space $Y$.\n\\begin{equation*}\n\\xymatrix{A\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]^j\\ar[ddr]^{\\overline{j}} & \\\\\nB\\ar[r]\\ar[drr]_{\\overline{g}} & X\\cup_f B\\ar@{-->}[dr] & \\\\\n & & Y}\n\\end{equation*}\nsuch that $\\overline{j}f=\\overline{g}i$. The topology is right too because that's what the quotient topology does for you. 
As I wrote before, this is called a \\emph{pushout} of the following diagram:\n\\begin{equation*}\n\\xymatrix{A\\ar[r]^f\\ar@{^(->}[d]_i & X\\\\\nB &}\n\\end{equation*}\n\\begin{example}\nLet $X=\\ast$. Then you have a pushout:\n\\begin{equation*}\n\\xymatrix{A\\ar[r]^f\\ar@{^(->}[d]_i & \\ast\\ar[d]\\\\\nB\\ar[r] & \\ast\\cup_f B}\n\\end{equation*}\nSo then $\\ast\\cup_f B=B/A$.\n\\end{example}\n\\begin{example}\n\\begin{equation*}\n\\xymatrix{\\emptyset\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]\\\\\nB\\ar[r] & X\\cup_f B}\n\\end{equation*}\nIt's then clear that this is exactly $X\\sqcup B$.\n\\end{example}\n\\begin{example}\nCombining the two previous examples, take $X=\\ast$ and $A=\\emptyset$:\n\\begin{equation*}\n\\xymatrix{\\emptyset\\ar[r]^f\\ar@{^(->}[d]_i & \\ast\\ar[d]\\\\\nB\\ar[r] & \\ast\\cup_f B}\n\\end{equation*}\nSo $B/\\emptyset=\\ast\\sqcup B$. For example, $\\emptyset/\\emptyset=\\ast$. This is ``creation from nothing''. ``We won't get into the religious ramifications.''\n\\end{example}\n\\begin{example}[Attaching a cell, the most important]\nConsider:\n\\begin{equation*}\n\\xymatrix{S^{n-1}\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]\\\\\nD^n\\ar[r] & X\\cup_f D^n}\n\\end{equation*}\nThis is called attaching a ``cell''. The $D^n$ is what's called a cell. You're attaching a contractible space. You might want to generalize this a little bit:\n\\begin{equation*}\n\\xymatrix{\\coprod_{\\alpha\\in A}S^{n-1}_\\alpha\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]\\\\\n\\coprod_{\\alpha\\in A}D^n_\\alpha\\ar[r] & X\\cup_f \\coprod_{\\alpha\\in A}D^n_\\alpha}\n\\end{equation*}\n\\end{example}\nWhat are some examples? When $n=0$, the declaration is that $S^{-1}=\\emptyset$, so this is:\n\\begin{equation*}\n\\xymatrix{\\emptyset\\ar[r]^f\\ar@{^(->}[d]_i & X\\ar[d]\\\\\n\\coprod_{\\alpha\\in A}\\ast\\ar[r] & X\\cup_f \\coprod_{\\alpha\\in A}\\ast}\n\\end{equation*}\nYou're just adding a bunch of points to $X$. This is a little more interesting. What about:\n\\begin{equation*}\n\\xymatrix{S^0\\sqcup S^0\\ar[r]^f\\ar@{^(->}[d]_i & \\ast\\ar[d]\\\\\nD^1\\sqcup D^1\\ar[r] & \\ast\\cup_f (D^1\\sqcup D^1)}\n\\end{equation*}\nThen $\\ast\\cup_f(D^1\\sqcup D^1)$ is a figure $8$, because you have two $1$-disks, where you identify the four boundary points together. If we consider $(X,\\ast),(Y,\\ast)$, then $X\\vee Y:= X\\sqcup Y/\\ast\\sim \\ast$. So $\\ast\\cup_f(D^1\\sqcup D^1)=S^1\\vee S^1$. More interestingly:\n\\begin{equation*}\n\\xymatrix{S^1\\ar[r]^{aba^{-1}b^{-1}}\\ar@{^(->}[d]_i & S^1\\vee S^1\\ar[d]\\\\\nD^2\\ar[r] & (S^1\\vee S^1)\\cup_f D^2}\n\\end{equation*}\nThis is exactly the torus, i.e., $(S^1\\vee S^1)\\cup_f D^2=T^2$.\n\\begin{definition}\nA \\emph{CW-complex} is a space $X$ with a sequence of subspaces $\\emptyset=X_{-1}\\subseteq X_0\\subseteq X_1\\subseteq\\cdots\\subseteq X$ (could be an infinite sequence) such that for all $n$, there is a pushout diagram like this:\n\\begin{equation*}\n\\xymatrix{\\coprod_{\\alpha\\in A_n}S^{n-1}_\\alpha\\ar[r]^f\\ar@{^(->}[d]_i & X_{n-1}\\ar[d]\\\\\n\\coprod_{\\alpha\\in A_n}D^n_\\alpha\\ar[r] & X_{n}}\n\\end{equation*}\nAnd $X=\\bigcup X_n$, topologically (i.e. $A\\subseteq X$ is open if and only if $A\\cap X_n$ is open for all $n$). Often, $X_n$ is written $\\mathrm{Sk}_n(X)$ and called the $n$-skeleton of $X$ (in honor of Halloween, coming right up!).\n\\end{definition}\n\\begin{example}\nThe torus is $\\emptyset\\subseteq T^2_0\\subseteq T^2_1\\subseteq T^2$. Here, $T^2_0=\\ast$ and $T^2_1=S^1\\vee S^1$.\n\\end{example}\n\\begin{definition}\nA CW-complex is \\emph{finite-dimensional} if $X_n=X$ for some $n$. 
Say that $X$ is of \\emph{finite type} if each $A_n$ is finite, i.e., finitely many cells in each dimension. Say that $X$ is \\emph{finite} if it's finite-dimensional and of finite type.\n\\end{definition}\nIn CW, the C is for cell, and the W is for weak, because of the topology on a CW-complex. This definition is due to J. H. C. Whitehead. Some people say that the ``CW'' comes from his name.\n\\begin{theorem}\n\\begin{enumerate}\n\\item Any CW-complex is Hausdorff, and it's compact if and only if it's finite.\n\\item Any compact smooth manifold admits a CW structure.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nNot going to do this.\n\\end{proof}\nNote that there could be multiple CW-structures on something.\n", "meta": {"hexsha": "c56c2bcc757f0d0a6ca7cf582e0fd093e58ebdca", "size": 7228, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-905/lec-14-ending-locality-and-cw-complexes.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "old-905/lec-14-ending-locality-and-cw-complexes.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "old-905/lec-14-ending-locality-and-cw-complexes.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 63.9646017699, "max_line_length": 529, "alphanum_fraction": 0.6759822911, "num_tokens": 2690, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110203, "lm_q2_score": 0.8152324915965392, "lm_q1q2_score": 0.5959826970058664}}
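The set-level description of the pushout in the lecture above can be made concrete on finite models. Below is a minimal Python sketch (not from the lecture; the function name and the three-point model of $D^1$ are illustrative assumptions):

\begin{lstlisting}
# Finite-model sketch of the pushout X u_f B = (X | | B)/~, gluing each
# a in A (a subset of B) to f(a) in X. Points are tagged by their piece.
def pushout(X, B, A, f):
    points = [("X", x) for x in X] + [("B", b) for b in B]
    parent = {p: p for p in points}      # union-find over the disjoint union

    def find(p):
        while parent[p] != p:
            p = parent[p]
        return p

    for a in A:                          # impose the relation f(a) ~ a
        parent[find(("B", a))] = find(("X", f[a]))

    classes = {}
    for p in points:
        classes.setdefault(find(p), set()).add(p)
    return list(classes.values())

# X = *, A = S^0 = the two endpoints of a 1-cell B = D^1 (modelled by 3
# points): both endpoints collapse to *, giving B/A, a finite model of S^1.
print(pushout({"*"}, {"l", "m", "r"}, {"l", "r"}, {"l": "*", "r": "*"}))
\end{lstlisting}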
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amsfonts}\n\\usepackage{graphicx}\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{enumitem}\n\\usepackage{listings}\n\\usepackage{soul}\n\\usepackage{tcolorbox}\n\n\\title{Problem Set 3: Free probability}\n\n\\date{Depth First Learning Week 4}\n\n\\begin{document}\n\\maketitle\n\n\n\\section*{Problem 1: Why we need free probability}\n\nRelevant readings: Livan textbook, chapter 17: \\emph{Born to Be Free}\n\nIn the upcoming lectures, we will encounter the concept of free independence of random matrices.  As a reminder, in standard probability theory (of scalar-valued random variables), two random variables $X$ and $Y$ are said to be independent if their joint pdf is simply the product of the individual marginals, i.e.\n\\begin{equation}\n    p_{X,Y}(x,y) = p_X(x) p_Y(x)\n\\end{equation}\nWhen we have independent scalar random variables $X$ and $Y$, then in principle it is possible to calculate the distribution of any function of these variables, say the sum $X + Y$ or the product $XY$. \n\nWhen it comes to random matrices, we are often interested in calculating the spectral density (the probability density of eigenvalues) of the sum or product of random matrices.  In the \\emph{Resurrecting the Sigmoid} paper, for example, we will calculate the spectral density of the network's input-output Jacobian, which is the product of several matrices for each layer.  So we need an analogue of independent variables for matrices (this condition is known as \\emph{free independence}), such that if we know the spectral densities of each one, we can calculate spectral densities of sums and products.\n\nThe simplest condition we might imagine under which two matrix-valued random variables (or, equivalently, two matrix ensembles) being freely independent is that all of the entries of each matrix are mutually independent.  However, it turns out that this condition is not good enough! In other words, independent entries sometimes are not enough to destroy all possible angular correlations between the eigenbases of two matrices. Instead, the property that generalizes statistical independence to random matrices is stronger and known as \\textit{freeness}.\n\n In this problem, we will see a concrete example of matrix ensembles with mutually independent entries, yet knowing the eigenvalue spectral density of each ensemble is not enough to determine the eigenvalue spectral density of the sum. 
\n\nDefine two different ensembles of 2 by 2 matrices:\n\n\\begin{itemize}\n    \\item \\textbf{Ensemble 1:} To sample a matrix from ensemble 1, sample a standard Gaussian scalar random variable $z$ and multiply it by each element in the matrix $\\sigma_z$, where \n    \\begin{equation}\n        \\sigma_z = \\left( \\begin{array}{cc} 1 & 0 \\\\ 0 & -1 \\end{array} \\right)\n    \\end{equation}\n    Thus the sampled matrix will be $z \\sigma_z$.\n    \\item \\textbf{Ensemble 2:} To sample a matrix from ensemble 2, sample a standard Gaussian  scalar random variable $z$ and multiply it by each element in the matrix $\\sigma_x$, where \n    \\begin{equation}\n        \\sigma_x = \\left( \\begin{array}{cc} 0 & 1 \\\\ 1 & 0 \\end{array} \\right)\n    \\end{equation}\n    Thus the sampled matrix will be $z \\sigma_x$.\n\\end{itemize}\n\n\\begin{enumerate}[label=(\\alph*)]\n\\item What is the spectral density $\\rho_1(x)$ of eigenvalues of matrices sampled from ensemble 1?\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nThe eigenvalues of $\\sigma_z$ are $\\pm 1$,  so the spectral density $\\rho_1(x)$ will be identical to the probability density of $z$ except for a factor of $2$ (because it has to integrate to $2$, the number of eigenvalues, instead of $1$), namely\n\\begin{equation}\n    \\rho_1(x) = \\frac{\\sqrt{2}}{\\sqrt{\\pi}} e^{-x^2/2}\n\\end{equation}\n\\end{tcolorbox}\n\n\\item What is the spectral density $\\rho_2(x)$ of eigenvalues of matrices sampled from ensemble 2?\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nThe eigenvalues of $\\sigma_x$ are exactly the same as those of $\\sigma_z$, namely $\\pm 1$,  so the spectral density $\\rho_2(x)$ is the same as $\\rho_1(x)$:\n\\begin{equation}\n    \\rho_2(x) = \\frac{\\sqrt{2}}{\\sqrt{\\pi}} e^{-x^2/2}\n\\end{equation}\n\\end{tcolorbox}\n\nYou should have found above that the spectral densities of both ensembles are the same.  However, we will see now that simply knowing the spectral density is not enough to determine the spectral density of the sum.  \n\\item Let $A$ and $B$ be two matrices independently sampled from ensemble 1.  Calculate \\emph{analytically} the spectral density of the sum, $A + B$.\n\\begin{tcolorbox}\n\\textbf{Solution:}\nWe can write this matrix as $(z_1 + z_2)\\sigma_z$, where $z_1$ and $z_2$ are standard normal variables.  The eigenvalues are thus $\\pm (z_1 + z_2)$.  Since $z_1$ and $z_2$ are independent, their sum will be a zero-mean Gaussian with variance $2$.  So the spectral density of eigenvalues will be twice that of such a Gaussian, namely:\n\\begin{equation}\n    \\rho_{A+B}(\\lambda) = \\frac{1}{\\sqrt{\\pi}} e^{-\\lambda^2/4}\n\\end{equation}\n\n\\end{tcolorbox}\n\n\\item Now let $C$ be a matrix sampled from ensemble 2.  In the next part, you will calculate the spectral density of the sum $A + C$, where $A$ is drawn from ensemble 1 and $C$ is drawn from ensemble 2.  However, to see immediately that the distributions of $A+B$ and $A+C$ will be different, consider the behavior of the spectral density of $A+C$ at zero.  Based on your knowledge of avoided crossings from the previous problem set, \\textbf{describe the spectral density of $A+C$ at $\\lambda =0$ and contrast this to the spectral density of $A+B$}.  \n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nNotice that the matrix $A+C$ will have the same form as the matrix considered in the previous problem set, and we found that for such a matrix the presence of the off-diagonal term caused there to be a level repulsion.  
So, the eigenvalue spectral density should go to zero as $\\lambda$ approaches zero, for the matrix $A+C$.  However, in the above part, we calculated that for matrices $A+B$, there is no avoided crossing and the pdf is finite at $\\lambda = 0$.\n\\end{tcolorbox}\n\n\\item Now let $C$ be a matrix sampled from ensemble 2.  Calculate the spectral density of the sum, $A + C$.  Make sure this is consistent with what you argued above about the behavior at $\\lambda = 0$.\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nWriting $z_1$ and $z_2$ as standard independent Gaussian random variables, the matrix we are after is $z_1 \\sigma_z + z_2\\sigma_x$, or\n\\begin{equation}\n    \\left( \\begin{array}{cc} z_1 & z_2 \\\\ z_2 & -z_1 \\end{array} \\right)\n\\end{equation}\nDiagonalizing, we get $\\lambda = \\pm \\sqrt{z_1^2 + z_2^2}$.  Then we can find the pdf of $\\lambda$ using:\n\\begin{equation}\n    p_{|\\Lambda|}(\\lambda) = \\frac{1}{2\\pi} \\int~dz_1 ~\\int~dz_2 e^{-z_1^2/2}e^{-z_2^2/2} \\delta \\left(\\lambda - \\sqrt{z_1^2 + z_2^2}\\right)\n\\end{equation}\nConvert this to polar coordinates:\n\\begin{equation}\n    p_{|\\Lambda|}(\\lambda) = \\frac{1}{2\\pi} \\int_0^\\infty ~2\\pi r ~dr~ e^{-r^2/2}\\delta(\\lambda - r)\n\\end{equation}\nSo $p_{|\\Lambda|}(\\lambda) = \\lambda e^{-\\lambda^2/2}$.  That was for the positive eigenvalue, and since they come in pairs we get \n\\begin{equation}\n    p_\\Lambda(\\lambda) = \\frac{1}{2}|\\lambda| e^{-\\lambda^2/2}.\n\\end{equation}\n\nAs expected, this has a zero at $\\lambda=0$, and is markedly different from the Gaussian pdf we got previously.  \n\\end{tcolorbox}\n\n\nNotice that the answers you got in the previous two parts were different, even though the underlying matrices that were being added had the same spectral density and independent entries.  \n\n\\end{enumerate}\n\n\\section*{Problem 2: Using the tools of free probability theory}\n\nRelevant readings: Livan textbook, chapter 17: \\emph{Born to Be Free}\n\nFrom the last problem, you learned that if you're given two different random matrix ensembles, and you know the spectral density of the eigenvalues of each one, that might not be enough to determine the eigenvalue distribution of the sum (or product) of the two random matrices, \\emph{even if all of the entries of the two matrices are mutually independent!}   As we mentioned in the last problem, the (stronger) condition that we are after is known as \\emph{free independence}.  In general, proving that two matrix ensembles are ``free\" (freely independent) is quite tough, so we will not do that here.  Instead, we will look at the tools we use to do calculations \\emph{assuming} we have random matrix ensembles which are freely independent.\n\nSpecifically, we will show that the sum of two freely independent random matrices, each of whose spectral density is given by a semicircle, is also described by the semicircle distribution.\n\n\\begin{enumerate}[label=(\\alph*)]\n\\item Recall that the spectral density of the Gaussian orthogonal ensemble (in the large $N$ limit) is given by the semicircle law:\n\\begin{equation}\n    \\rho_{sc}(x) = \\frac{1}{\\pi}\\sqrt{2-x^2}\n\\end{equation}\n(sometimes you see this  with a $4$ or $8$ in the square root and a different factor accompanying $\\pi$ in the denominator.  
This is just a matter of choosing which Gaussian ensemble---orthogonal, unitary, or symplectic---to use, and doesn't really matter for this problem)\n\nIn a previous problem set, you calculated the Stieltjes transform associated with the spectral density for the Gaussian \\emph{unitary} ensemble.  Recall that the Stieltjes transform, $G(z)$, is defined via the relation\n\\begin{equation}\n     G(z) = \\int_\\mathbb{R}~dt \\frac{\\rho(t)}{z - t}\n\\end{equation}\n\n(In the previous problem set, this was called $s_{\\mu_N}(z)$.  In literature you often see the $G(z)$ notation, since the Stieltjes transform is also known as the \\emph{resolvent} or \\emph{Green's function}.)\n\nYou should have calculated in the last problem set that under the Stieltjes transform, \n\\begin{equation}\n    \\frac{1}{2\\pi}\\sqrt{4-x^2} \\mapsto \\frac{z - \\sqrt{z^2 - 4}}{2}\n\\end{equation}\n\n\\textbf{Use the above fact to calculate the Stieltjes transform of the GOE semicircle given at the beginning of this problem (part (a)).  This is the first step to calculating the spectral density of the sum.}\n\n\\begin{tcolorbox}\n\\textbf{Solution}: \nDefine \n\\begin{eqnarray}\n    f(x) &=& \\frac{1}{2\\pi}\\sqrt{4 - x^2} \\\\\n    g(x) &=& \\frac{1}{\\pi}\\sqrt{2 - x^2}\n\\end{eqnarray}\nNotice that \n\\begin{equation}\n    g(x) = \\sqrt{2} f(x\\sqrt{2}).\n\\end{equation}\nIf we define $G_g(z)$ and $G_f(z)$ as the Green's functions corresponding to $g(x)$ and $f(x)$, respectively, then we can get a relation between the two:\n\\begin{eqnarray}\n    G_g(z) &=& \\int~dt~\\frac{g(t)}{z - t} \\\\\n    &=& \\sqrt{2}\\int~dt~\\frac{f(t\\sqrt{2})}{z - t} \\\\\n    &=& \\sqrt{2}\\int~\\frac{dy}{\\sqrt{2}} \\frac{f(y)}{z - y/\\sqrt{2}} \\\\\n    &=& \\sqrt{2}\\int~dy~\\frac{f(y)}{z\\sqrt{2} - y} \\\\\n    &=& \\sqrt{2}G_f(z\\sqrt{2}).\n\\end{eqnarray}\nSince we have previously calculated that \n\\begin{equation}\n    G_f(z) = \\frac{z - \\sqrt{z^2 - 4}}{2},\n\\end{equation}\nthis immediately gives us that\n\\begin{equation}\n    G_g(z) = z - \\sqrt{z^2 - 2}.\n\\end{equation}\n\\end{tcolorbox}\n\n\\item  We have calculated the Stieltjes transform or Green's function of the semicircle.  Now we proceed to calculate the so-called Blue's function, which is just defined as the functional inverse of the Green's function.  That is, the Green's function $G(z)$ and the Blue's function $B(z)$ satisfy \n\\begin{equation}\n    G(B(z)) = B(G(z)) = z.\n\\end{equation}\n\\textbf{Calculate the Blue's function corresponding to the semicircle Green's function you derived above.}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nThe inverse function is defined by the relation \n\\begin{equation}\n    z = B - \\sqrt{B^2 - 2}.\n\\end{equation}\nThen \n\\begin{eqnarray}\n    \\sqrt{B^2 - 2} &=& B - z \\\\\n    B^2 - 2 &=& B^2 - 2Bz + z^2 \\\\\n    2Bz &=& z^2 + 2 \\\\\n    B &=& \\frac{z}{2} + \\frac{1}{z}\n\\end{eqnarray}\n\\end{tcolorbox}\n\n\\item You should have noticed that the Blue's function you calculated had a singularity at the origin, that is, a term given by $1/z$.  
The $R$-transform is defined as the Blue's function minus that singularity; that is, \n\\begin{equation}\n    R(z) = B(z) - \\frac{1}{z}.\n\\end{equation}\n\\textbf{What is the $R$-transform of the GOE semicircle?}\n\n\\begin{tcolorbox}\nSince\n\\begin{equation}\n    B = \\frac{z}{2} + \\frac{1}{z},\n\\end{equation}\nwe can immediately write\n\\begin{equation}\n    R(z) = \\frac{z}{2}\n\\end{equation}\n\\end{tcolorbox}\n\n\\item Finally we come to the law of addition of freely independent random matrices:  If we are given freely independent random matrices $X$ and $Y$, whose $R$-transforms are $R_X(z)$ and $R_Y(z)$, respectively, then the $R$-transform of the sum (or more precisely, the $R$-transform of the spectral density of the sum $X + Y$) is simply given by $R_X(z) + R_Y(z)$.  \n\nAssume that two standard GOE matrices, say $H_1$ and $H_2$, are freely independent. \\textbf{What is the $R$-transform of the spectral density of the sum $H_+ = H_1 + H_2$?}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\n$R_{H_+}(z) = z$\n\\end{tcolorbox}\n\n\\item \\textbf{Using the results above, argue that the sum of two freely-independent ensembles described by the semicircular law is also described by the semicircular law.}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\n\nThe $R$-transform of the sum ($z$) has the same functional form as the individual $R$-transforms ($z/2$), so it seems plausible that this means that when we invert it, we get a semicircle.  Let's make sure of this fact.\n\nWe should first figure out how scaling the matrix affects the $R$-transform describing it.  We can guess that this amounts to a simple scaling of the matrix itself.  Under the scaling of a general matrix $H \\mapsto cH$, the eigenvalue distribution goes from $\\rho(\\lambda) \\mapsto \\rho(\\lambda/c) /c $.  Then, by the same logic we used in part (a) of this problem, the Green's function goes $G \\mapsto G(z/c)/c$ (in part (a), $c$ was $\\sqrt{2}$).  To figure out the change in the Blue's function, we can write:\n\\begin{eqnarray}\n    G_{pH}(B_{pH}(z)) = \\frac{1}{p} G_{H}(B_{pH}(z)/p) = z \\\\\n     G_{H}(B_{pH}(z)/p) = pz \\\\\n     B_{pH}(z) = p B_H(pz) \\\\\n\\end{eqnarray}\nAnd finally we can get the scaling of the $R$-transform:\n\\begin{eqnarray}\n    R_{pH}(z) &=& pB_H(pz) - \\frac{1}{z} \\\\\n    &=& p\\left( R_H(pz) + \\frac{1}{pz} \\right) - \\frac{1}{z} \\\\\n    &=& p R_H(pz)\n\\end{eqnarray}\nWith this result, we know that if we multiply the GOE matrix by $\\sqrt{2}$, the $R$-transform goes from $z/2$ to $z$.  This means that the spectral density of a sum of two GOE matrices is still semicircular, just with a $\\sqrt{2}$ scaling.  Another way of saying this is that the semicircular law is stable under free addition.\n\\end{tcolorbox}\n\n\\end{enumerate}\n\n\\section*{Problem 3: Another toy example of free addition}\n\nRelevant readings: Partial freeness of random matrices, section 1.3\n\nThough proving that two random matrices are freely independent is tough (and only strictly defined in the limit that the matrix dimension goes to infinity), as the reading suggests, we can approximate two matrices as freely independent if their eigenbases are oriented uniformly randomly with respect to each other.  
In this problem, we'll see a toy example of such a situation, and get more experience using the tools of free probability.\n\n\\begin{enumerate}[label=(\\alph*)]\n\n\\item Take random matrix $A$ to be the following:\n\\begin{equation}\n    A(t) = U(t) \\sigma_z U(-t),\n\\end{equation}\nwhere $\\sigma_z$ was defined in problem 1 above and $U(t)$ is a rotation matrix given by \n\\begin{equation}\n        U(t) = \\left( \\begin{array}{cc} \\cos{t} & \\sin{t} \\\\ -\\sin{t} & \\cos{t} \\end{array} \\right).\n\\end{equation}\nThe randomness here comes from $t$, which is sampled uniformly from the interval $\\left[0, 2\\pi\\right)$.\n\n\\textbf{What is the spectral density of eigenvalues $\\rho(x)$ of this random matrix ensemble?}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nAs described in problem 1, the matrix $\\sigma_z$ has eigenvalues $\\pm 1$, and we know that conjugation by a unitary operator does not change the eigenvalues.  So the eigenvalue spectral density is given by \n$$\\rho(x) = \\delta(x - 1) + \\delta(x + 1),$$\nwith $\\delta(x)$ denoting the Dirac delta function.\n\\end{tcolorbox}\n\n\\item Take `random' matrix $B$ to be the following: \\begin{equation}\n    B(t) = \\sigma_z,\n\\end{equation}\nwith the same definitions as above.  \n\n\\textbf{What is the spectral density of eigenvalues $\\rho(x)$ of this random matrix ensemble?}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nSince the matrix is deterministic and has eigenvalues at $\\pm 1$, we can immediately write \n$$\\rho(x) = \\delta(x - 1) + \\delta(x + 1)$$\n\\end{tcolorbox}\n\n\\item If we sample from $A$, calculate the eigenbasis of the sampled matrix, then sample from $B$, and again calculate the eigenbasis of the sampled matrix, we can define the angle between the eigenbases of the two matrices by the angle which would be required to rotate one eigenbasis into the other. Equivalently, we can call it the angle of the rotation matrix required to transform one matrix into the other. \\textbf{For matrix ensembles $A$ and $B$, what is the distribution of the angle between their eigenbases?}\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\nThe angle of $B$ is always fixed (since the eigenvectors of $\\sigma_z$ are always the constant vectors $(1 ~ 0)^T$ and $(0 ~  1)^T$), so let's call it zero.  The angle of $A$ is $t$, which is uniformly distributed along  $\\left[0, 2\\pi\\right)$.  So this is also the distribution of the difference in angles. \n\\end{tcolorbox}\n\n\\item You should have found that the spectral densities of both ensembles $A$ and $B$ are identical and that the distribution of angles between their eigenbases is Uniform($[0,2\\pi)$). Remember, according to the readings, we can call $A$ and $B$ freely independent if their eigenbases are oriented uniformly randomly with respect to each other.\n\nNow we imagine adding the random variables $A$ and $B$.  In part (c), you found that the eigenbases of $A$ and $B$ are rotated uniformly randomly with respect to each other. As per the readings, we can now treat $A$ and $B$ as if they are freely independent and consequently use the property of freely independent random matrices that their $R$-transforms add under free addition.  
\n\n(i) \\textbf{Go through the same steps as in the previous problem to calculate the $R$-transform corresponding to that spectral density.} As a reminder, here's the recipe:\n    \\begin{itemize}\n        \\item Calculate Green's function via \n        \\begin{equation}\n         G(z) = \\int_\\mathbb{R}~dt \\frac{\\rho(t)}{z - t}\n        \\end{equation}\n        \n        \\item Calculate the Blue's function via the relation\n        \\begin{equation}\n            G(B(z)) = B(G(z)) = z.\n        \\end{equation}\n        \n        \\item Check that the Blue's function has a $1/z$ singularity at the origin, and subtract it to get the $R$-transform:\n        \\begin{equation}\n            R(z) = B(z) - 1/z.\n        \\end{equation}\n    \\end{itemize}\n    \n\\begin{tcolorbox}\n\\textbf{Solution:}\nSince $\\rho(x) = \\frac{1}{2}(\\delta(x+1) + \\delta(x-1))$\\footnote{For the Green's functions we use spectral densities normalized as if they were a probability distribution, i.e. which integrate to unity}, we have that \n\\begin{eqnarray}\nG(z) &=& \\frac{0.5}{z-1} + \\frac{0.5}{z+1} \\\\\n&=& \\frac{z}{z^2 - 1}\n\\end{eqnarray}\nSince the Blue's and Green's functions are inverses of each other, this means \n\\begin{equation}\n    z = \\frac{B(z)}{B^2(z)-1},\n\\end{equation}\nor\n\\begin{equation}\n    -z + zB^2 - B = 0.\n\\end{equation}\nUsing the quadratic formula gives\n\\begin{equation}\n    B = \\frac{1 \\pm \\sqrt{1 + 4 z^2}}{2z}.\n\\end{equation}\n\nKnowing that we need the Blue's function to have a $1/z$ singularity at the origin tells us that we need to take the positive root, so \n\\begin{equation}\n    B(z) = \\frac{1 + \\sqrt{1 + 4 z^2}}{2z}.\n\\end{equation}\nThis makes the $R$-transform \n\\begin{equation}\n    R(z) = \\frac{-1 + \\sqrt{1 + 4 z^2}}{2z}.\n\\end{equation}\n\n\\end{tcolorbox}    \n\n(ii) Let's find the spectral density of the sum $A$ + $B$. \\textbf{Calculate the $R$-transform of the sum of $A$ and $B$}.\n\\begin{tcolorbox}\nThe $R$-transform of the sum's spectral density is the sum of the individual $R$-transforms, namely \n\\begin{equation}\n    R(z) = \\frac{-1 + \\sqrt{1 + 4 z^2}}{z}.\n\\end{equation}\n\n\\end{tcolorbox}\n\n(iii) We can now reverse the steps you did in (i) above to get the spectral density of the sum.  That is, from the $R$-transform you just calculated, you can calculate the Blue's function, then the Green's function, and finally the spectral density.  You will need the formula for going from the Green's function to the spectral density, given in chapter 8 of Livan's book:\n\\begin{equation}\n\\rho(x) = \\frac{1}{\\pi}\\lim_{\\epsilon\\rightarrow 0^+} \\mathrm{Im}~ G(x-i\\epsilon)\n\\end{equation}\n\\textbf{Calculate the spectral density of the sum of the random matrices $A$ and $B$}\n\n\\begin{tcolorbox}\nWe add $1/z$ to the $R$-transform to get the Blue's function:\n\\begin{equation}\n    B(z) = \\frac{\\sqrt{1 + 4 z^2}}{z}.\n\\end{equation}\nNow we invert to get the Green's function:\n\\begin{equation}\n    z = \\frac{\\sqrt{1 + 4 G^2}}{G}.\n\\end{equation}\n\\begin{equation}\n    (z^2 - 4) G^2 = 1.\n\\end{equation}\n\\begin{equation}\n    G = \\pm\\frac{1}{\\sqrt{z^2 - 4}}\n\\end{equation}\nTo get the correct sign, we look for the $1/z$ behavior at infinity, so we take the plus sign.  \nThus \n\\begin{equation}\n    G(z) = \\frac{1}{\\sqrt{z^2 - 4}}\n\\end{equation}\nTo get the spectral density, let $z = x - i\\epsilon$. 
Then \n\\begin{eqnarray}\n    G(z) &=& \\frac{1}{\\sqrt{(x - i\\epsilon)^2 - 4}} \\\\\n    &=& \\frac{1}{\\sqrt{x^2 - 2ix\\epsilon - \\epsilon^2 - 4}}.\n\\end{eqnarray}\nIn the limit that $\\epsilon$ goes to zero, the imaginary part will only be non-zero when $|x|$ is less than 2.  We thus arrive at \n\\begin{equation}\n    \\rho(x) = \\frac{1}{\\pi\\sqrt{4-x^2}}\n\\end{equation}\n\\end{tcolorbox}\n\n\\item The spectral density of $A+B$ you calculated in closed form in exercise (d) part (iii) assumes $A$ and $B$ are freely independent, a fact which we attributed to the uniformly-random angle between their eigenbases. \\textbf{Verify using simulations that the spectral density you calculated does in fact describe the sum $A$ + $B$.}\n\nWhat does this result tell you about the free independence or partial free independence of $A$ and $B$?\n\n\\begin{tcolorbox}\n\\textbf{Solution:}\n\nThe following code will work:\n\n\\begin{lstlisting}\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom functools import reduce\n\ndef sample_ensemble(n_samples):\n    samples = []\n    sigma_z = np.diag([1, -1])\n    \n    for i in range(n_samples):    \n        theta = np.random.uniform(low=0., high=2*np.pi)\n        \n        rot_theta = np.array([[np.cos(theta), -np.sin(theta)],\n                              [np.sin(theta), np.cos(theta)]])\n\n        rot_theta_dag = np.array([[np.cos(theta), np.sin(theta)],\n                              [-np.sin(theta), np.cos(theta)]])\n\n        # A(t) + B: the conjugated sigma_z plus the fixed sigma_z.\n        sample = reduce(np.dot, [rot_theta, sigma_z, rot_theta_dag]) + sigma_z\n        \n        evals = np.linalg.eigvals(sample)\n        samples.append(evals)\n    \n    return np.array(samples)\n\nn_samples = 100000\nensemble_samples = sample_ensemble(n_samples)\n\nempirical_density, bins = np.histogram(ensemble_samples, \n                                       density=True, \n                                       bins=50)\nbin_centers = (bins[:-1] + bins[1:])/2.\n\ntheoretical_density = 1./(np.pi*np.sqrt(4-np.power(bin_centers, 2)))\n\nplt.scatter(bin_centers, empirical_density)\nplt.plot(bin_centers, theoretical_density)\n\nplt.title('Spectral density')\nplt.xlabel('Eigenvalue')\nplt.ylabel('Density')\n\\end{lstlisting}\n\nRunning this code (or equivalent) should confirm that the distribution calculated for the matrices $A$ + $B$ is accurate.  This result is evidence of the connection between the uniformly random rotation between the eigenbases of $A$ and $B$, and their free independence.\n\\end{tcolorbox}\n\n\\item You just saw that the matrices $A$ and $B$ were freely independent, and we attributed this to the fact that in the coordinate system defined by the eigenbasis of one of the matrices, the eigenvectors of the other were uniformly distributed across the circle.  \\textbf{Use this fact to argue that two matrices sampled from the Gaussian Orthogonal Ensemble or Gaussian Unitary Ensemble are freely independent.}\n\n\\begin{tcolorbox}\n\\textbf{Solution:} \nOne of the defining properties of the two listed ensembles is that they are rotationally invariant, meaning that the probability of any eigenvector is uniform.  This means that the rotation matrix between eigenvectors of two different realizations of (say) GOE matrices will also be uniformly distributed (this is known as being Haar-random, if you've seen the term before).  Thus we should expect that matrices sampled independently (i.e. with all entries mutually independent) from any of these ensembles will be freely independent.  
\n\\end{tcolorbox}\n\n\\end{enumerate}\n\n\n\\end{document}", "meta": {"hexsha": "f89467891d766aa929137051c221788b7ab598d5", "size": 25008, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/sigmoid/problem-sets/source/3/problems_and_solutions.tex", "max_stars_repo_name": "skroon/depthfirstlearning.com", "max_stars_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 192, "max_stars_repo_stars_event_min_datetime": "2018-06-06T16:01:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T16:56:14.000Z", "max_issues_repo_path": "assets/sigmoid/problem-sets/source/3/problems_and_solutions.tex", "max_issues_repo_name": "skroon/depthfirstlearning.com", "max_issues_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 39, "max_issues_repo_issues_event_min_datetime": "2018-06-05T02:46:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T08:43:34.000Z", "max_forks_repo_path": "assets/sigmoid/problem-sets/source/3/problems_and_solutions.tex", "max_forks_repo_name": "skroon/depthfirstlearning.com", "max_forks_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 27, "max_forks_repo_forks_event_min_datetime": "2018-06-07T06:08:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-26T09:25:16.000Z", "avg_line_length": 54.1298701299, "max_line_length": 743, "alphanum_fraction": 0.7109325016, "num_tokens": 7234, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8221891239865619, "lm_q1q2_score": 0.5959804573737207}}
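The stability of the semicircle under free addition (Problem 2, part (e)) can be checked by simulation in the same spirit as the code in Problem 3. Below is a minimal sketch (my own, not part of the problem set), assuming a GOE normalization chosen so that a single matrix has spectral density $\frac{1}{\pi}\sqrt{2-x^2}$:

\begin{lstlisting}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def goe(n):
    # Normalized so the empirical spectral density tends to (1/pi) sqrt(2 - x^2).
    a = rng.standard_normal((n, n))
    return (a + a.T) / (2 * np.sqrt(n))

n = 2000
evals = np.linalg.eigvalsh(goe(n) + goe(n))   # sum of two independent GOEs

x = np.linspace(-1.99, 1.99, 400)
plt.hist(evals, bins=60, density=True)
plt.plot(x, np.sqrt(4 - x**2) / (2 * np.pi))  # semicircle rescaled by sqrt(2)
plt.xlabel('Eigenvalue')
plt.ylabel('Density')
plt.show()
\end{lstlisting}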
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{September 3, 2014}\n\\maketitle\n\\section*{$\\epsilon \\Delta$ definition of limit}\nlet $\\{a_n\\}_{n=1}^{\\infty}$ be a sequence of real numbers. We say that L is the limit of the sequence $\\{a_n\\}_{n=1}^{\\infty}$ if $\\forall\\epsilon>0$ $\\exists N\\in\\mathbb{N}$ such that $\\left\\lvert a_n-L\\right\\rvert < \\epsilon \\forall n\\ge N$\n\\subsection*{example 1}\n$a_n=\\frac{1}{n},n\\in\\mathbb{N}$\n\nprove that\n\\[\\lim_{n\\to\\infty}\\frac{1}{n}=0\\]\nby the definition\nfix $\\epsilon>0$. we need to find $N$ such that $\\left\\lvert \\frac{1}{n}-0\\right\\rvert<\\epsilon$  if $n\\ge N$. find $N$ such that $1\\le\\epsilon n,\\forall n\\ge N$.\n\n\\subsubsection*{archimedean property}\ngiven $x>0, y\\in\\mathbb{R}, \\exists N\\in\\mathbb{N},N>0$ such that $Nx>y$.\n\napply this to $x=\\epsilon, y=1$, $\\exists N$ such that $N\\epsilon>1$ if $n\\ge N, n\\epsilon\\ge N\\epsilon>1$ so the archimediean property gives us the $N$ for the given $\\epsilon$\n\n\\subsection*{example 2}\nfind\n\\begin{align*}\n\\lim_{n\\to\\infty}\\left(\\sqrt{n+1}-\\sqrt{n}\\right)&=\n\\lim_{n\\to\\infty}\\left(\\frac{\\sqrt{n+1}-\\sqrt{n}}{\\sqrt{n+1}+\\sqrt{n}}\\sqrt{n+1}+\\sqrt{n}\\right)\\\\\n&=\\lim_{n\\to\\infty}\\left(\\frac{(n+1)-(n)}{\\sqrt{n+1}+\\sqrt{n}}\\right)\\\\\n&=\\lim_{n\\to\\infty}\\left(\\frac{1}{\\sqrt{n+1}+\\sqrt{n}}\\right)\\\\\n&=0\n\\end{align*}\ngiven $\\epsilon>0$ we need to find $N$ such that $\\left\\lvert (\\sqrt{n+1}-\\sqrt{n}-0\\right\\rvert<\\epsilon\\forall n\\ge N\\rightarrow\\frac{1}{\\sqrt{n+1}+\\sqrt{n}}<\\epsilon\\forall n\\ge N$.\n\nenough to find N such that $\\frac{1}{2\\sqrt{n}}<\\epsilon\\forall n\\ge N$\n\ngiven $\\epsilon$ and $\\frac{1}{2}$$\\exists N\\in\\mathbb{N}$ such that $\\frac{1}{2}<N\\epsilon$. for any $n\\in N$ such that $\\sqrt{n}>N$ (or $n>N^2$). $\\frac{1}{2}<\\sqrt{n}\\epsilon\\rightarrow\\frac{1}{2\\sqrt{n}}<\\epsilon\\forall n>N^2$\n\n\\subsection*{example 3}\nnot convergent\nlet $a_n=2^n$.\n\\subsubsection*{proof}\nassum $\\exists L\\in \\mathbb{N}$ such that $\\forall \\epsilon>0\\exists N\\in\\mathbb{N}$ such that $\\left\\lvert 2^n-L\\right\\rvert<\\epsilon\\forall n\\ge N$. Take any $n>m>1, n,m\\in\\mathbb{N}$\n\\begin{align*}\n  \\left\\lvert2^n-2^m\\right\\rvert&=2^m\\left(2^{n-m}-1\\right)>1\\text{ or }2\n\\end{align*}\nconsider $\\left\\lvert a_n-L\\right\\rvert+\\left\\lvert a_m-L\\right\\rvert\\ge\\left\\lvert(a_n-L)+(L-a_m)\\right\\rvert$ (triangle inequality) leads to $\\left\\lvert a_n-a_m\\right\\rvert>1$. so either $\\left\\lvert a_n-L\\right\\rvert>\\frac{1}{2}$ or $\\left\\lvert a_m-L\\right\\rvert>\\frac{1}{2}$ (pigeonhole). take $\\epsilon>\\frac{1}{2}$. Since $n,m$ are arbitrary natural numbers, we cannot find $N$ such that $\\left\\lvert a_n-L\\right\\rvert<\\epsilon\\forall n\\ge N,$ and hence the sequence does not converge.\n\ncan use this argument with $\\left\\lvert a_n-a_m\\right\\rvert>\\alpha, \\alpha>0$\n\n\\section*{squeeze theorem (2.4.6)}\nlet $\\{a_n\\}_{n=1}^\\infty,\\{b_n\\}_{n=1}^\\infty,\\{c_n\\}_{n=1}^\\infty$ be sequences of reals such that $a_n\\le b_n\\le c_n$ for each $n\\in\\mathbb{N}$. 
Assume\n\\[\\lim_{n\\to\\infty}a_n=L=\\lim_{n\\to\\infty}c_n\\]\nThen \n\\[\\lim_{n\\to\\infty}b_n=L\\]\n\\subsection*{proof}\nGiven $\\epsilon>0$ we need to find $N\\in\\mathbb{N}$ such that $\\left\\lvert b_n-L\\right\\rvert<\\epsilon\\ \\forall n\\ge N$.\n\nWe have: an $\\epsilon>0$. Hypothesis: $\\lim a_n=L$, $\\lim c_n=L$.\n\nSince we have $\\epsilon>0$ and $\\lim a_n=L$, by definition of limit $\\exists N_1$ such that $\\forall n\\ge N_1$, $\\left\\lvert L-a_n\\right\\rvert<\\epsilon$. For the same $\\epsilon>0$, $\\exists N_2$ such that $\\forall n\\ge N_2$, $\\left\\lvert L-c_n\\right\\rvert<\\epsilon$.\n\\begin{align*}\n  L-\\epsilon&< a_n<L+\\epsilon\\\\\n  L-\\epsilon&< c_n<L+\\epsilon\\\\\n  L-\\epsilon&< a_n\\le b_n\\le c_n<L+\\epsilon\\\\\n\\end{align*}\n\nif $n\\ge\\max\\{N_1,N_2\\}$. Hence if $n\\ge\\tilde N=\\max\\{N_1,N_2\\}$, then $\\left\\lvert b_n-L\\right\\rvert<\\epsilon$.\n\n\\subsection*{example 2.4.7}\nA variation of the squeeze theorem; read it, please.\n\n\\section*{properties of limits}\nAssume $\\lim_{n\\to\\infty}a_n,\\lim_{n\\to\\infty}b_n$ exist.\n\\begin{enumerate}\n\\item\n\\begin{align*}\n  \\lim_{n\\to\\infty}(a_n+b_n)&=\\lim a_n+\\lim b_n\\\\\n  \\lim ra_n&=r\\lim a_n\\quad\\forall r\\in\\mathbb{R}\\\\\n  \\lim a_nb_n&=\\lim a_n \\lim b_n\\\\\n  \\lim \\frac{a_n}{b_n}&=\\frac{\\lim a_n}{\\lim b_n}\\quad\\text{(provided }\\lim b_n\\ne 0\\text{)}\n\\end{align*}\n\n\\end{enumerate}\nLet $A$ be the limit of $a_n$ and $B$ the limit of $b_n$. Given $\\epsilon>0$ we need to find $N\\in \\mathbb{N}$ such that $\\forall n\\ge N$ $\\left\\lvert a_nb_n-AB\\right\\rvert<\\epsilon$. For this given $\\epsilon$:\n\n$\\exists N_1$ such that $\\left\\lvert a_n-A\\right\\rvert<\\epsilon$ if $n\\ge N_1$\n\n$\\exists N_2$ such that $\\left\\lvert b_n-B\\right\\rvert<\\epsilon$ if $n\\ge N_2$\n\n$\\left\\lvert a_nb_n-AB\\right\\rvert=\\left\\lvert a_nb_n-a_nB+a_nB-AB\\right\\rvert\\le|a_n||b_n-B|+|a_n-A||B|<\\epsilon\\left(\\max\\{|A+1|,|A-1|\\}+|B|\\right)$ provided $\\epsilon\\le 1$ and $n\\ge\\max\\{N_1,N_2\\}$ (since then $|a_n|<|A|+1$)\n\\end{document}\n", "meta": {"hexsha": "a9214b180c9e685dee9dd954d061a4882393d526", "size": 4837, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-09-03.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-09-03.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-09-03.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.8585858586, "max_line_length": 493, "alphanum_fraction": 0.6752119082, "num_tokens": 1952, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8688267796346599, "lm_q1q2_score": 0.595971255986918}}
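The $\epsilon$-$N$ bookkeeping in Example 2 of the notes above can also be checked numerically. A short Python sketch (the particular $\epsilon$ and the sample values of $n$ are arbitrary choices):

\begin{lstlisting}
import math

# From Example 2: if N*eps > 1/2, then every n > N^2 satisfies
# |sqrt(n+1) - sqrt(n)| = 1/(sqrt(n+1) + sqrt(n)) < 1/(2*sqrt(n)) < eps.
eps = 1e-3
N = math.floor(1 / (2 * eps)) + 1   # smallest integer with N*eps > 1/2
for n in (N**2 + 1, 4 * N**2, 100 * N**2):
    assert abs(math.sqrt(n + 1) - math.sqrt(n)) < eps
print("epsilon-N bound verified for eps =", eps)
\end{lstlisting}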
{"text": "\n\\subsection{Text classification using Naive Bayes}\n\n\\subsubsection{Naive Bayes and text classification}\n\nNaive Bayes can be used to classify text documents. The \\(x\\) variables can be appearances of each word, and \\(y\\) can be the document classification.\n\n", "meta": {"hexsha": "1a770cfc22586fb7b24c6cfdda3dacec621b104a", "size": 258, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/supervisedNaiveBayes/01-03-naiveBayesText.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/supervisedNaiveBayes/01-03-naiveBayesText.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/supervisedNaiveBayes/01-03-naiveBayesText.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.25, "max_line_length": 150, "alphanum_fraction": 0.7829457364, "num_tokens": 58, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8774767746654976, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5959635400720704}}
{"text": "\\documentclass[journal]{IEEEtran}\n\\usepackage[utf8]{inputenc}\n\\usepackage{minted}\n\\usepackage{booktabs}\n\\usepackage{color}\n\\usepackage{mathtools}\n\\usepackage{mathrsfs}\n\\usepackage{multirow}\n\\usepackage{clrscode3e}\n\n\\newcommand{\\lagr}{\\mathscr{L}}\n\n\\usepackage[binary-units=true]{siunitx}\n\n\\newcommand{\\scala}[1]{\\mintinline{scala}{#1}}\n\n\\title{Advanced Algorithms (\\texttt{LINGI2266}) \\\\ Assignment 2: Weighted Set Cover Problem}\n\\author{Gilles Peiffer}\n\\date{November 22, 2019}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n\tThe present document contains an explanation of the solution to the weighted set cover problem using the method of Lagrangian relaxation.\n\tUsing this relaxation, one is able to find a consistent lower bound on the cost of a solution, which, coupled with the feasibility of the intermediate solutions, leads to an efficient technique to find an optimal assignment of variables, or a good lower bound on the cost of this assignment.\n\t\n\tSome code fragments in Scala are also showcased.\n\\end{abstract}\n\n\\section{Introduction}\n\\label{sec:intro}\nThe \\emph{weighted set cover problem} is concerned with selecting the minimal subset of a list of \\(m\\) sets containing elements from \\(\\{1, \\ldots n\\}\\) such that the union of this minimal subset contains all the elements at least once.\nMinimality is taken with respect to the costs of the sets which are chosen.\nIf no such subset exists, the problem is infeasible.\n\n\\section{Weighted Set Cover Problem}\nWe use \\(\\mathscr{N}\\) and \\(\\mathscr{M}\\) to denote the \\(n\\) first and \\(m\\) first strictly positive integers, respectively.\n\n\\subsection{Initial Formulation}\nThe weighted set cover problem can be written as an optimization problem as follows:\n\\[\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{array}{lrcl@{\\quad}l}\n\\textnormal{minimize} & \\sum_{i \\in \\mathscr{M}} w_i x_i, &&&\\\\\n\\textnormal{subject to} & \\sum_{i : j \\in \\mathcal{S}_i} x_i & \\ge & 1, & \\forall j \\in \\mathscr{N},\\\\\n& x_i &\\in& \\{0, 1\\}, & \\forall i \\in \\mathscr{M},\n\\end{array}\n\\]\nwhere \\(w_i\\) is the cost of the \\(i\\)th set, \\(x_i\\) is a binary variable determining whether the \\(i\\)th set is taken, and \\(\\mathcal{S}_i\\) is the \\(i\\)th set.\n\n\\subsection{Lagrangian Relaxation}\nThe \\emph{Lagrangian relaxation} of this problem is then found by relaxing the covering constraint by modifying the objective function so that it takes into account constraint violations.\nThe problem then becomes\n\\[\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{array}{lrcl@{\\quad}l}\n\\textnormal{minimize} & \\multicolumn{4}{l}{\\sum_{i \\in \\mathscr{M}} w_i x_i - \\sum_{j \\in \\mathscr{N}} \\lambda_j \\left(1 - \\sum_{i : j \\in \\mathcal{S}_i} x_i \\right),}\\\\\n\\textnormal{subject to} & x_i &\\in& \\{0, 1\\}, & \\forall i \\in \\mathscr{M},\n\\end{array}\n\\]\nwhere \\(\\lambda_j\\) is a nonnegative Lagrange multiplier pertaining to the \\(j\\)th element.\nOne can observe the fundamental property of this modified problem, which is that its optimal value for a feasible assignment \\(x\\) is always less than or equal to the optimal value of the original problem.\nIt is also much easier to solve than the original problem.\nThis then leads to the following observation: if one finds the assignment \\(\\lambda\\) which maximizes the lower bound, this can be used as an efficient pruning function in a branch and bound search; this assignment is only 
concerned with applying it at the root of the search tree.\n\n\\subsection{Finding the Lower Bound: Subgradient Algorithm}\nWe denote the lower bound associated with a given assignment \\(\\lambda\\) by \\(\\mathscr{L}\\).\nThe problem of finding the optimal lower bound \\(\\mathscr{L}^*\\) then corresponds to the following:\n\\[\n\\mathscr{L}^* = \\max_\\lambda \\min_{x} \\left(\\sum_{i \\in \\mathscr{M}} w_i x_i + \\sum_{j \\in \\mathscr{N}} \\lambda_j \\left(1 - \\sum_{i : j \\in \\mathcal{S}_i} x_i \\right)\\right).\n\\]\n\nIn order to find this optimal lower bound, one can use a \\emph{subgradient algorithm} to find the optimal assignment of \\(\\lambda\\):\n\\begin{enumerate}\n\t\\item Start with an initial assignment \\(\\lambda^{(0)}\\) (in this case, \\(0\\)) and an initial step size \\(\\mu^{(0)}\\) (in this case, \\(\\mu^{(0)} = 1\\)).\n\t\\item Let \\(x^{(k)}\\) be the variable assignment at iteration \\(k\\).\n\tThe update rule then becomes\n\t\\[\n\t\\lambda_j^{(k+1)} = \\max\\left\\{0, \\lambda_j^{(k)} + \\mu^{(k)} \\left(1 - \\sum_{i : j \\in \\mathcal{S}_i} x_i^{(k)}\\right)\\right\\}.\n\t\\]\n\tThis is done by the following bit of code.\n\\begin{minted}[fontsize=\\scriptsize]{scala}\n// Update the lambdas using the update rule.\nfor (j <- lambda.indices) {\n  var mul: Int = 1\n  for (i <- setsContaining(j) if x(i) == 1)\n    mul -= 1 // Find the sets in which j appears.\n  lambda(j) = Math.max(0, lambda(j) + mu * mul)\n}\n\\end{minted}\n\n\tSince the relaxed objective separates over the variables, the coefficient of \\(x_i\\) is the reduced cost \\(w_i - \\sum_{j \\in \\mathcal{S}_i} \\lambda_j\\), and the minimizing assignment sets \\(x_i = 1\\) exactly when this reduced cost is negative.\n\tThe variable assignment for the next iteration is thus computed as follows.\n\\begin{minted}[fontsize=\\scriptsize]{scala}\nval coeff: Array[Double] = new Array[Double](m)\n// Coefficients of the decision variables.\nx = new Array[Int](m)\nfor (i <- coeff.indices) {\n  coeff(i) = w(i)\n  for (j <- elementsIn(i))\n    coeff(i) -= lambda(j) // Update coefficients.\n  if (coeff(i) < 0.0)\n    x(i) = 1 // Decide which sets to take.\n}\n\\end{minted}\n\t\n\tFinally, one must also update \\(\\mu^{(k)}\\) in a way which satisfies the following convergence hypotheses:\n\t\\begin{align*}\n\t\\lim_{k \\to \\infty} \\mu^{(k)} &= 0, \\\\\n\t\\sum_{k = 0}^{+\\infty} \\mu^{(k)} &= +\\infty.\n\t\\end{align*}\n\tIn this case, the harmonic sequence \\(\\mu^{(k)} = 1/(k+1)\\) was chosen.\n\\end{enumerate}\nThe algorithm ends after a fixed number of iterations, though other stopping criteria can be used, such as a threshold on the step size \\(\\mu\\) or on the optimality gap \\(\\mathscr{G}\\), defined as follows:\n\\[\n\\mathscr{G}^{(k)} = \\frac{c_{x^{(k)}} - \\lagr^*}{\\lagr^*},\n\\]\nwhere \\(c_{x^{(k)}}\\) is the cost of assignment \\(x^{(k)}\\).\n\nUsing this procedure, the best Lagrangian lower bound found so far, \\(\\mathscr{L}^*\\), is nondecreasing over the iterations.\n\nIn the code, the lower bound is updated by the following bit of code.\n\\begin{minted}[fontsize=\\scriptsize]{scala}\nlb = 0.0\nfor (i <- x.indices if x(i) == 1) lb += w(i)\nfor (j <- lambda.indices) {\n  var mul: Int = 1\n  for (i <- setsContaining(j) if x(i) == 1)\n    mul -= 1 // Find the sets in which j appears.\n  lb += lambda(j) * mul\n}\nlb = Math.ceil(lb)\n\\end{minted}\nSection~\\ref{sec:ceil} explains why the last line is allowed.\n\n\\subsection{Complete Algorithm}\nThe complete algorithm for solving the weighted set covering problem can then be written as follows.\n\\begin{codebox}\n\t\\Procname{\\(\\proc{Weighted-Set-Cover}\\)}\n\t\\li \\(\\lagr^* \\gets -\\infty, \\mu^{(0)} \\gets 1, \\lambda^{(0)} \\gets 0\\)\n\t\\li \\If \\(\\bigcup_i \\mathcal{S}_i \\ne \\mathscr{N}\\)\n\t\\li 
\t\\Then \\Return ``infeasible''\n\t\\End\n\t\\li \\For \\(k \\gets 1\\) \\To maxiter \\Do\n\t\\li \tcompute the optimal assignment \\(x^{(k)}\\) \\Indentmore\n\t\\zi \t\tusing the modified objective function \\End\n\t\\li \tset \\(\\lagr^{(k)}\\) to the objective value\n\t\\li \t\\If \\(\\lagr^{(k)} \\ge \\lagr^*\\)\n\t\\li \t\t\\Then\n\t\\(\\lagr^* \\gets \\lagr^{(k)}\\)\n\t\\li\t\t\t\\If \\(x^{(k)}\\) is feasible\n\t\\li \t\t\t\t\\Then\n\t\\(x^* \\gets x^{(k)}\\)\n\t\\End\n\t\\End\n\t\\li \tupdate \\(\\lambda^{(k)}\\) and \\(\\mu^{(k)}\\) for the next iteration\n\t\\End\n\\end{codebox}\n\n\\section{I/O}\nThe program starts by reading the input and detecting whether the problem is feasible; if not, it prints ``infeasible'' and returns.\nThe function used to test for feasibility is the following, where \\scala{y} is the candidate assignment of sets.\n\n\\begin{minted}[fontsize=\\scriptsize]{scala}\ndef isFeasible(): Boolean = {\n  // See which elements are covered by the assignment y.\n  val elemsTaken: Array[Int] = new Array[Int](n)\n  for (i <- y.indices if y(i) == 1; j <- elementsIn(i))\n    elemsTaken(j) = 1 // Set to 1 when element is taken.\n  !elemsTaken.contains(0)\n}\n\\end{minted}\n\nIf the problem is feasible, the program solves the problem and outputs according to the type of solution and the corresponding format specified in the problem statement.\n\n\\section{Improvements}\n\\label{sec:improvements}\nSeveral improvements to the algorithm were made in order to pass the various test cases on INGInious.\n\n\\subsection{Avoiding Lower Bounds With No Solution}\nOnce feasibility has been established, one can immediately see that taking all the sets is a (very naive) feasible solution to the problem; its associated cost is then the sum of all the set costs.\nBy doing this at the start, the program never runs into the case where a lower bound with no solution has been found.\n\n\\subsection{Integrality of the Cost}\n\\label{sec:ceil}\nSince the cost of a solution is a sum of integers, it must itself be an integer.\nThis means that fractional lower bounds can be rounded up to the nearest integer, which tightens the bound and thus improves the efficiency of the search.\n\nBy doing this, one also gets rid of some of the problems due to numerical instability.\n\n\\section{Submission on INGInious}\nBy adding the improvements detailed in Section~\\ref{sec:improvements}, the program (written in Scala) was able to pass all the test instances without needing other modifications.\n\n\n\\end{document}", "meta": {"hexsha": "f44b1d555ce9e585c6e8a30f7d97707eb40a9f99", "size": 8972, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/assignment2/report.tex", "max_stars_repo_name": "Peiffap/lingi2266-implementations", "max_stars_repo_head_hexsha": "f069d8cc7872c7d8b22cf69b4c77d4d7bb83fd16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/assignment2/report.tex", "max_issues_repo_name": "Peiffap/lingi2266-implementations", "max_issues_repo_head_hexsha": "f069d8cc7872c7d8b22cf69b4c77d4d7bb83fd16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/assignment2/report.tex", "max_forks_repo_name": "Peiffap/lingi2266-implementations", "max_forks_repo_head_hexsha": "f069d8cc7872c7d8b22cf69b4c77d4d7bb83fd16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null,
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.7755102041, "max_line_length": 292, "alphanum_fraction": 0.7076460098, "num_tokens": 2667, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7718435083355188, "lm_q1q2_score": 0.5957423932580699}}
{"text": "\\documentclass[11pt,a4paper]{report}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm,epsfig,epstopdf,titling,url,array}\n\\usepackage{enumitem}\n\\usepackage{changepage}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem*{cor}{Corollary}\n\\theoremstyle{definition}\n\\newtheorem{defn}{Definition}[section]\n\\newtheorem{conj}{Conjecture}[section]\n\\newtheorem{exmp}{Example}[section]\n\\newtheorem{exercise}{Exercise}[section]\n\\theoremstyle{remark}\n\\newtheorem*{rem}{Remark}\n\\newtheorem*{note}{Note}\n\\def\\changemargin#1#2{\\list{}{\\rightmargin#2\\leftmargin#1}\\item[]}\n\\let\\endchangemargin=\\endlist \n\\begin{document}\n\n\n\\section*{Problem}\nEvaluate $lim_{x\\to 0} sin(x)/x$.\n\n\\section*{Solution}\nThis can be evaluated using l'Hospital's rule, which states that if $f$ and $g$ are continuously differentiable at $a$, and  $\\lim_{x \\to a} f(x) = 0 =  \\lim_{x \\to a} g(x)$ then $\\lim_{x \\to a} {f(x)/g(x)} = f'(0) / g'(0)$.  So $lim_{x\\to 0} sin(x)/x = \\lim_{x \\to 0} {cos(0) / 1} = 1.$\n\nBut why is l'Hospital's rule true?  This follows nicely from the definition of the derivative.  Recall that the derivative of $f$ at $x$ is defined by\n$$f'(x) = \\lim_{h \\to 0} \\frac{f(x + h) - f(x)}{h}.$$\nSo $$\\frac{f'(0)}{g'(0)} = $$ $$\\frac{\\mathop{\\lim}\\limits_{h \\to 0} \\frac {f(0 + h) - f(0)} {h}} {\\mathop{\\lim}\\limits_{h \\to 0} \\frac {g(0 + h) - g(0)} {h}} = $$ $$\\lim_{h \\to 0} \\frac {f(h) - f(0)} {g(h) - g(0)}$$.\n\\end{document}", "meta": {"hexsha": "8c036f57d6412acc33b1e92e338daa318f3ad697", "size": 1526, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lhospital/lhospital.tex", "max_stars_repo_name": "psteitz/problems", "max_stars_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lhospital/lhospital.tex", "max_issues_repo_name": "psteitz/problems", "max_issues_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-03T21:08:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-03T21:08:11.000Z", "max_forks_repo_path": "lhospital/lhospital.tex", "max_forks_repo_name": "psteitz/problems", "max_forks_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.8823529412, "max_line_length": 287, "alphanum_fraction": 0.6716906946, "num_tokens": 582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.8519527963298947, "lm_q1q2_score": 0.595731770046039}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage[nodayofweek]{datetime}\n\n\\usepackage{minted}\n\n% Set size of text area with total parameter\n\\usepackage[a4paper, total={155mm, 255mm}]{geometry}\n\n\\title{Higher Probability Q14}\n\\author{Dyson}\n\\date{\\today}\n\n\\newcommand{\\oo}[1]{\\frac{1}{#1}}\n\n\\begin{document}\n\n\\maketitle\n\n% Set paragraph spacing here to avoid messing with title\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{1em}\n\n\\section{The Question}\n\nHigher Probability Question 14 in the A Level Summer Work is quite an interesting question. It says:\n\nDavid has designed a game.\\\\\nHe uses a fair 6-sided dice and a fair 5-sided spinner.\\\\\nThe dice is numbered 1 to 6.\\\\\nThe spinner is numbered 1 to 5.\n\nEach player rolls the dice once and spins the spinner once.\\\\\nA player can win \u00a35 or \u00a32.\n\nFor a player to win \u00a35, they must roll a 5 \\textbf{and} spin a 5.\\\\\nFor a player to win \u00a32, they must roll a 1 \\textbf{or} spin a 1 \\textbf{or} both.\n\nDavid expects 30 people will play his game.\\\\\nEach person will pay David \u00a31 to play the game.\n\nWork out how much profit David can expect to make.\n\n\\hspace*{\\fill}(4 marks)\n\n\\section{The Original Working}\n\nInitially, my thinking was: each of the 30 people pays \u00a31 to play, so we have a \u00a330 gross profit. Then, the probability of rolling a 5 is $\\oo{6}$, the probability of spinning a 5 is $\\oo{5}$, so the probability of both is $\\oo{6} \\times \\oo{5} = \\oo{30}$. This means we can expect 1 of the 30 people to win \u00a35, so our net profit at this point becomes \u00a325.\n\nThe probability of rolling a 1 is $\\oo{6}$, the probability of spinning a 1 is $\\oo{5}$, and the probability of both is therefore $\\oo{6} \\times \\oo{5} = \\oo{30}$. Then, to get the probability of A \\textbf{or} B \\textbf{or} C, I added these probabilities together to get $\\frac{2}{5}$. This is $\\frac{12}{30}$, meaning 12 people win \u00a32. This means we subtract \u00a324 from our net profit and get a final profit of \u00a31.\n\nNow, this seemed off to me. It just didn't feel right that David would only make \u00a31 profit. So, I wrote some Julia code to simulate it.\n\n\\newpage\n\n\\section{The Improved Working}\n\n\\begin{minted}{julia}\nusing Statistics\n\nfunction game()\n\ttotal = 0\n\n\tfor _ in 1:30\n\t\t# Add one to the total for each person\n\t\ttotal += 1\n\n\t\t# Roll a die and spin a spinner\n\t\td = rand(1:6)\n\t\ts = rand(1:5)\n\n\t\t# Check the conditions and subtract the relevant amounts from the total\n\t\tif d == 5 && s == 5\n\t\t\ttotal -= 5\n\t\telseif d == 1 || s == 1 || (d == 1 && s == 1)\n\t\t\ttotal -= 2\n\t\tend\n\tend\n\n\treturn total\nend\n\nprintln(mean([game() for _ in 1:10000]))\n\\end{minted}\n\nThis code consistently gave me a mean of \u00a35. I spent about an hour carefully checking through my code and making sure that everything was correct and eventually concluded that \u00a35 was the real answer and my maths must have been wrong.\n\nAfter some thinking, I realised that the \\textit{both} part of the second condition is irrelevant. If a person both rolls a 1 \\textbf{and} spins a 1, then they must have already rolled a 1 \\textbf{or} spun a 1. We can show this with some boolean algebra.\n\nLet $R$ be whether the person has rolled a 1 and let $S$ be whether the person has spun a 1. Then, our expression is $R \\vee S \\vee (R \\wedge S)$. 
By associativity of $\\vee$, we can write this as $R \\vee (S \\vee (R \\wedge S))$. Then, by absorption, $S \\vee (R \\wedge S) = S$, so our expression gets simplified to $R \\vee S$. Therefore, the \\textit{both} part of the second condition can be ignored.\n\nChanging the code confirms that removing the check for both doesn't change the answer. However, this probability of both \\textit{must} be removed from the probability calculation. Doing only that gives us $P(\\text{\u00a3}2) = \\frac{11}{30}$, so 11 people win \u00a32 and we get a final total of \u00a33. This is still wrong.\n\nI had to do some research on combining probabilities and learned that $P(A \\vee B)$ only equals $P(A) + P(B)$ if $A$ and $B$ are mutually exclusive events, meaning they cannot both happen: if $A$ happens, then $B$ cannot. This is not the case with this question, so we cannot simply add the probabilities of events that can occur together like this.\n\nIn the platonic world of perfect probabilities with 30 people playing the game, 5 people roll a 1 and 6 people spin a 1. This does \\textbf{not} mean that 11 people rolled a 1 \\textbf{or} spun a 1. This is because $P(R \\wedge S) = P(R) \\times P(S) = \\oo{6} \\times \\oo{5} = \\oo{30}$, since $R$ and $S$ are independent. That means that 1 person rolled a 1 \\textbf{and} spun a 1, so this person gets double counted when we simply add the probabilities.\n\n\\newpage\n\nThis means that for any two events $A$ and $B$, $P(A \\vee B) = P(A) + P(B) - P(A \\wedge B)$; for independent events, the overlap term is $P(A \\wedge B) = P(A)P(B)$. Thus, $$P(R \\vee S \\vee (R \\wedge S)) = P(R \\vee S)$$\n$$= P(R) + P(S) - P(R \\wedge S)$$\n$$= \\oo{6} + \\oo{5} - \\oo{30} = \\oo{3}$$\n\nThis means that $\\oo{3}$ of the 30 people win \u00a32, meaning we only take \u00a320 from our \u00a325 net profit, finally getting the \u00a35 answer.\n\nQuite frankly, this question was a bitch.\n\n\\end{document}\n", "meta": {"hexsha": "53d18c651bd9c31c24b3eaac147b5b97e5135742", "size": 4987, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Questions/Higher_Probability_Q14.tex", "max_stars_repo_name": "DoctorDalek1963/LaTeX", "max_stars_repo_head_hexsha": "e91a79837bff80f9d361b921acb870a9fcfc3e0b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Questions/Higher_Probability_Q14.tex", "max_issues_repo_name": "DoctorDalek1963/LaTeX", "max_issues_repo_head_hexsha": "e91a79837bff80f9d361b921acb870a9fcfc3e0b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Questions/Higher_Probability_Q14.tex", "max_forks_repo_name": "DoctorDalek1963/LaTeX", "max_forks_repo_head_hexsha": "e91a79837bff80f9d361b921acb870a9fcfc3e0b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.3363636364, "max_line_length": 413, "alphanum_fraction": 0.7052336074, "num_tokens": 1522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8519528019683106, "lm_q1q2_score": 0.5957317633097381}}
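\nAs an added sanity check (my addition, not part of the original write-up), the expected profit can also be computed exactly in Python by enumerating the 30 equally likely (dice, spinner) outcomes; no simulation is needed, and the names used here are illustrative.\n\n\\begin{minted}{python}\nfrom fractions import Fraction\n\nprofit = Fraction(0)\nfor d in range(1, 7):        # fair 6-sided dice\n    for s in range(1, 6):    # fair 5-sided spinner\n        p = Fraction(1, 30)  # each (d, s) outcome is equally likely\n        if d == 5 and s == 5:\n            payout = 5       # win \u00a35\n        elif d == 1 or s == 1:\n            payout = 2       # win \u00a32\n        else:\n            payout = 0\n        profit += p * (1 - payout)  # \u00a31 stake minus any payout\n\nprint(30 * profit)  # expected profit over 30 players: 5\n\\end{minted}\n\nThis agrees with both the Julia simulation and the corrected calculation: $30 \\times \\left(1 - \\frac{5}{30} - 2 \\times \\frac{10}{30}\\right) = 5$.\n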
{"text": "\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\n\\begin{document}\n\\logo\n\\rulename{Average Directional Movement Index} %Argument is name of rule\n\\tblofcontents\n\n\\ruledescription{The Average Directional Index (ADX) introduce by Wells Wilder, is a technical indicator that is used to help measure the overall strength of the trend. This indicator attempts to measure the strength of price movement in positive and negative direction using the $+$DMI and $-$DMI indicators along with the ADX. A trend is considered strong when the value of the ADX is above 25 and considered weak when the value is below 25. A trend is also considered bullish when the $+$DMI is above the $-$DMI and considered bearish when the $+$DMI is below the $-$DMI.\n}\n\n\\howtotrade\n{The strategy is to identify the trend and the strength of the trend using the values of ADX, $+$DMI and $-$DMI. \\\\\nBullish Trend - When the $+$DMI is above the $-$DMI and the ADX value is above 25. \\\\\nBearish Trend - When the $+$DMI is below the $-$DMI and ADX is below 25.\n}\n\n\\ruleparameters %You can include however many arguments (in groups of 4) as you want!\n{Look Back Length}{14}{Number of timestamps used to calculate ADX.}{$\\lookbacklength$}\n\\stoptable %must be included or Tex engine runs infinitely\n\n\\newpage\n\\section{Equation}\nBelow are the equations which govern how this specific trading rule calculates a trading position.\n\n\\begin{equation}\n    TR_{\\currenttime} = \\max{\\{P_{\\currenttime}^{h}-P_{\\currenttime}^{l},\\ |P_{\\currenttime}^{h} - P_{\\currenttime-1}^{c}|, \\\n    |P_{\\currenttime-1}^{c} - P_{\\currenttime}^{h}|}\\}\n\\end{equation}\n\\begin{equation}\n    ATR_{\\currenttime} = \\frac{1}{\\lookbacklength} \\sum_{i=1}^{\\lookbacklength} TR_{i}\n\\end{equation}\n\\begin{equation}\n    +DM_{\\currenttime} = \\begin{cases}\n    0 & P_{\\currenttime}^{h}-P_{\\currenttime-1}^{h} < 0 \\\\\n    P_{\\currenttime}^{h}-P_{\\currenttime-1}^{h} & else\n    \\end{cases}\n\\end{equation}\n\\begin{equation}\n    -DM_{\\currenttime} = \\begin{cases}\n    0 & P_{\\currenttime-1}^{l}-P_{\\currenttime}^{l} < 0 \\\\\n    P_{\\currenttime-1}^{l}-P_{\\currenttime}^{l} & else\n    \\end{cases}\n\\end{equation}\n\\begin{equation}\n    +/-DM_{\\currenttime}^{s} = \\sum_{i=1}^{\\lookbacklength} DM_{i} + \\sum_{i=1}^{\\lookbacklength} \\frac{DM_{i}}{\\lookbacklength} + DM_{c}\n\\end{equation}\n\\begin{equation}\n    +DI_{\\currenttime} = \\frac{+DM_{\\currenttime}^{s}}{ATR_{\\currenttime}} \\times 100\n\\end{equation}\n\\begin{equation}\n    -DI_{\\currenttime} = \\frac{-DM_{\\currenttime}^{s}}{ATR_{\\currenttime}} \\times 100\n\\end{equation}\n\\begin{equation}\n    DX_{\\currenttime} = \\Big( \\frac{|+DI_{\\currenttime} - -DI_{\\currenttime}|}{|+DI_{\\currenttime} + -DI_{\\currenttime}|}\\Big) \\times 100\n\\end{equation}\n\\begin{equation}\nADX_{t} =\n    \\begin{cases}\n        \\frac{1}{L} \\sum_{i=1}^{\\lookbacklength} DX_{i} & \\currenttime = 0 \\\\ \\\\\n        \\frac{(ADX_{\\currenttime-1} \\times (\\lookbacklength -1)) + DX_{\\currenttime}}{\\lookbacklength} & \\currenttime > 0\n    \\end{cases}\n\\end{equation}\n\\\\ % creates some space after equation\nwhere:\n\n$P_{\\currenttime}^{h}$: is the asset's high price at time \\currenttime.\n\n$P_{\\currenttime}^{c}$: is the asset's close price at time \\currenttime.\n\n$P_{\\currenttime}^{l}$: is the asset's low price at time 
\\currenttime.\n\n$TR_{\\currenttime}$: is the True Range at time \\currenttime.\n\n$ATR_{\\currenttime}$: is the Average True Range at time \\currenttime.\n\n$DM_{c}$: is the Directional Movement at current time.\n\n$+/-DM_{\\currenttime}$: is the Directional Movement at time \\currenttime.\n\n$+/-DM_{\\currenttime}^{s}$: is the Smoothed Directional Movement at time \\currenttime.\n\n$+/-DI_{\\currenttime}$: is the Directional Index at time \\currenttime.\n\n$DX_{\\currenttime}$: is the Directional Movement Index at time \\currenttime.\n\n$ADX_{\\currenttime}$: is the Average Directional Movement Index at time \\currenttime.\n\n\n\\keyterms\n\\furtherlinks %The footer\n\\end{document}", "meta": {"hexsha": "1366443889ce256650a3bfc345035802d854fcf5", "size": 3966, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/AverageDirectionalIndex.tex", "max_stars_repo_name": "parthgajjar4/infertrade", "max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_issues_repo_path": "docs/strategies/tex/AverageDirectionalIndex.tex", "max_issues_repo_name": "parthgajjar4/infertrade", "max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/AverageDirectionalIndex.tex", "max_forks_repo_name": "parthgajjar4/infertrade", "max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 42.1914893617, "max_line_length": 574, "alphanum_fraction": 0.7062531518, "num_tokens": 1268, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8519528019683106, "lm_q1q2_score": 0.5957317526307496}}
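\nTo make the recursion above concrete, here is a short Python sketch of the computation (an added illustration; the rule document itself prescribes no implementation). It assumes \\texttt{high}, \\texttt{low} and \\texttt{close} are equal-length lists of floats with enough history, uses a look back length of 14, and follows Wilder's convention that a directional movement only counts when it exceeds the opposite move, a slightly stricter condition than the $\\pm DM$ cases above.\n\n\\begin{verbatim}\ndef adx(high, low, close, L=14):\n    trs, pdms, ndms = [], [], []\n    for t in range(1, len(close)):\n        trs.append(max(high[t] - low[t],\n                       abs(high[t] - close[t - 1]),\n                       abs(low[t] - close[t - 1])))\n        up, down = high[t] - high[t - 1], low[t - 1] - low[t]\n        pdms.append(up if up > down and up > 0 else 0.0)\n        ndms.append(down if down > up and down > 0 else 0.0)\n\n    def wilder(xs):              # Wilder's recursive smoothing\n        s = sum(xs[:L])\n        out = [s]\n        for x in xs[L:]:\n            s = s - s / L + x\n            out.append(s)\n        return out\n\n    atr, pdm, ndm = wilder(trs), wilder(pdms), wilder(ndms)\n    dxs = []\n    for a, p, n in zip(atr, pdm, ndm):\n        pdi = 100 * p / (a or 1e-12)\n        ndi = 100 * n / (a or 1e-12)\n        dxs.append(100 * abs(pdi - ndi) / ((pdi + ndi) or 1e-12))\n    adxs = [sum(dxs[:L]) / L]    # first ADX: simple average of L DX values\n    for dx in dxs[L:]:\n        adxs.append((adxs[-1] * (L - 1) + dx) / L)\n    return adxs\n\\end{verbatim}\n\nA reading such as \\texttt{adx(high, low, close)[-1] > 25} with $+$DI above $-$DI would then flag a strong bullish trend in the sense described above.\n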
{"text": "\\chapter{detection}\nThe \\textit{quantum efficiency} of a detector is the fraction of the incident photons that contributes to the output of the detecting system.  This is the same as saying that the quantum efficiency is equal to the probability that a single incident x-ray photon will be detected.  \n\\textit{detective quantum efficiency} is the fraction of incident photons that would have to be detected without additional noise to yield the same signal-to-noise ratio as it actually observed by the detector in question.\nDQE $\\le$ QE $\\le$ 1 \\newline\n\nFrom a pragmatic (dealing with things sensibly and realistically in a way that is based on practical rather than theoretical considerations) viewpoint it does not matter what the QE is: the limits of detectability of a signal are set by the SNR, which is determined by DQE and not QE.  The QE is useful only for determining the amount of gain required to bring the signal up to some desired level.\n\nNoise factor (NF) of the detector is the decrease in the signal-to-noise ratio that accompanies the detection process. Given by:\n\\begin{equation}\n\\begin{split}\nNF = &(DQE)^{-1/2} \\\\\nNF = &\\frac{1}{\\sqrt{DQE}}\n\\end{split}\n\\end{equation}\n\nIn radiology the speed of the film is usually stated as the reciprocal of the exposure in roentgens required to produce a final film density of a stated amount, usually in the range of 0.7-1.0.  The source spectrum (or equivalently, the necessary details concerning tube target, accelerating voltage, and beam filtration), the development conditions, and the fluorescent screens used (if any) must be stated for the speed measure to be a valid and useful quantity.  \n\n\\subsection{spectral analysis BM}\nFourier transform of a random process is another random process, and we are usually more interested in averages than in properties of individual samples.  In particular, with finite-power processes, we often want to know how the average power is distributed as a function of frequency.  Frequency-domain description of the average power is known as \\textit{spectrum, power spectrum} or \\textit{power spectral density}.\n\\newline\nIn the limit of an infinite number of realizations, \\textit{Wiener-Khinchin theorem} amounts to incorporating a statistical average in the definition of the spectrum.  Khinchin's definition was:\n\\begin{equation}\nS_{ac}(v) = \\mathcal{F} \\{ R(\\Delta t) \\} = \\int_{-\\infty}^{\\infty} d \\Delta t \\langle f(t + \\Delta t) f^{\\ast} (t) \\rangle exp(-2 \\pi i\\nu \\Delta t), \n\\end{equation}\nwhere the subscript $\\mathit{ac}$ indicates that this version of the spectrum is derived from the autocorrelation function $\\mathit{R(\\Delta t)}$ of a stationary random process.  The spectrum defined this way is well behaved mathematically and universally used.\n\n\\noindent\n$\\mathit{Spatial power spectra}$\nIf we use the stationary spatial random process, then the spatial version of the Wiener-Khinchin theorem is:\n\\begin{equation}\nS(\\mathbf{\\rho}) = \\int_{\\infty} d^q \\Delta r \\: R(\\Delta \\mathbf{r}) \\, exp(-2 \\pi i \\mathbf{\\rho} \\cdot \\Delta \\mathbf{r}).\n\\end{equation}\n\n\\noindent\n$\\mathit{Stochastic Wigner distribution function}$\nA general way of applying Fourier analysis to non-stationary random processes is to make use of the Wigner distribution function.  
For a spatial random process $f(\\mathbf{r})$, we define the stochastic Wigner function by:\n\n\\begin{equation}\nW_f (\\mathbf{r}_0, \\mathbf{\\rho}) = \\int_{\\infty} d^{q} \\Delta r \\; \\langle f(\\mathbf{r}_0 + \\frac{1}{2} \\Delta \\mathbf{r}) f^{\\ast}(\\mathbf{r}_0 - \\frac{1}{2} \\Delta \\mathbf{r}) \\rangle\n\\exp(-2 \\pi i \\mathbf{\\rho} \\cdot \\Delta \\mathbf{r}).\n\\end{equation}\n\n\\noindent The Wigner distribution can be thought of as a sliding-window Fourier transform, where the function itself serves as the window.\nWe can use the Wigner distribution function for a spatial random process and compare it to the Wiener-Khinchin theorem:\n\\begin{equation}\n\\begin{split}\nS(\\mathbf{\\rho}) =& \\int_{\\infty} d^q \\Delta r \\langle f(r + \\Delta r) f^*(r) \\rangle \\exp(-2 \\pi i \\rho \\cdot \\Delta r) \\\\\n= & \\int_{\\infty} d^q \\Delta r \\langle f(r_0 + \\frac{1}{2} \\Delta r) f^*(r_0 - \\frac{1}{2} \\Delta r) \\rangle \\exp(-2\\pi i \\rho \\cdot \\Delta r), \n\\end{split}\n\\end{equation}\nso if the process is stationary, the stochastic Wigner function is independent of $\\mathbf{r}_0$ and is precisely the power spectral density.\n\nFor non-stationary processes, however, $W_f(r_0, \\rho)$ is a function of $r_0$ as well as $\\rho$; it can be interpreted as the spectral content associated with point $r_0$.  \nThe Wigner distribution is just the Fourier transform of the short-range part of the autocorrelation function, modulated by the shift-variant strength of the slowly varying component at $r_0$.\n\n\\subsection{Notes from Handbook of medical imaging-Image quality metrics for Digital Systems (Chap3)}\nThe increments between exposure values should be chosen so that a curve can be adequately fitted to the data on both logarithmic and linear scales. For example, 0.03, 0.3, 3, and 30 mR, which covered the majority of the usable range of clinical exposures to be encountered.  The shape of the characteristic curve for most systems is not highly dependent on beam spectrum.  A reasonable compromise spectrum might be 70 $kV_p$ with 0.5-mm Cu filtration, because that is the spectrum that has been used for a number of MTF and DQE measurements~\\citep{Dobbins1995}, \\citep{Bradford1999}, \\citep{Hillen1987}. \nIt is important to make multiple measurements of the exposure (with the dosimeter; additional image data are not needed) for the low-exposure range of the curve.  The need for multiple measurements of exposure is because of the limited precision of most exposure meters at low exposure.\n\n\\subsubsection{Signal-to-noise ratio}\nFor a given number of detected photons, N, and a stochastic signal fluctuation, $\\sigma$, the maximum available signal-to-noise ratio (SNR) is\n\\begin{equation}\n\\frac{S}{\\sigma} = \\frac{N}{\\sqrt{N}} = \\sqrt{N},\n\\end{equation}\nbecause, for Poisson-distributed x-ray quanta, the standard deviation of measured pixel values goes as the square root of the mean number of quanta.  
This is the ``raw'' signal-to-noise ratio, because all detected quanta are included in ``signal.''  Often of more interest is the differential signal-to-noise ratio, $SNR_{diff}$, which uses the difference in signal behind the object of interest relative to its background.\n\\begin{equation}\nSNR_{diff} = \\frac{\\Delta S}{\\sigma} = \\frac{C \\; S}{\\sigma} = \\frac{C \\; N}{\\sqrt{N}} = C \\sqrt{N},\n\\end{equation}\nwhere C is the contrast, $\\Delta S / S$.\nIf an object has area A, and there are $N_a$ photons per unit area detected, then the measured $SNR_{diff}$ of the object is:\n\\begin{equation}\n\\begin{split}\n& SNR_{diff} = C \\; \\sqrt{N} = C \\; \\sqrt{N_a A}.\\\\\n& N_a = \\frac{SNR_{diff}^2}{C^2 A}\n\\end{split}\n\\end{equation}\n\n\\subsubsection{MTF - handbook of medical imaging chap 3}\nThe MTF has traditionally been described mathematically in one of two ways:\n\\begin{enumerate}\n\\item as the ratio of frequency content output vs frequency content input:\n\\begin{equation}\nMTF(u,v) = \\frac{|FT_{out}(u,v)|}{|FT_{in}(u,v)|}\n\\end{equation}\n\n\\item as the Fourier amplitude of the response to a delta-function input to the system:\n\\begin{equation}\nMTF(u,v) = |OTF(u,v)|\n\\end{equation}\nwhere OTF(u,v) is the optical transfer function, the Fourier transform (FT) of the PSF.  Both of these descriptions are equivalent for systems without aliasing.  However, with aliasing, the two give different results.\n\\end{enumerate}\nFor digital imaging systems, the MTF is essentially always undersampled.\nThere are three methods of measuring the MTF: square wave, edge, and slit.  The square-wave method is the least accurate, but good for a quick test.  The slit is good for measuring high frequencies and the edge is good for low frequencies; ideally one should use both the edge and the slit.\n\n\\subsubsection{Noise-power spectrum}\nThe noise-power spectrum may be understood in several ways.  First, it may be thought of as the variance of image intensity (i.e., image noise) divided among the various frequency components of the image.  Alternatively, the NPS may be pictured as the variance (per frequency bin) of a given spatial-frequency component in an ensemble of measurements of that spatial-frequency.  These concepts are equivalent: the NPS is the pixel variance spread over all frequencies.\nThere are two options for computing the NPS in the presence of a background signal.  One may either subtract the background signal from the image prior to NPS analysis, or compute the ensemble-averaged squared Fourier transform of the noise+signal and then subtract the square of the Fourier transform of the background signal.  Numerical simulation reveals that both are equivalent if an infinite number of values are averaged in the ensemble.  However, the certainty in measurement is better when one first subtracts off the background signal.\n\n\\begin{equation}\nNPS(0) = (N \\Delta x) \\sigma^2_{ROImean}\n\\end{equation}\nwhere N is the number of points in the region of interest, $\\Delta x$ is the sample interval (pixel size), and $\\sigma^2_{ROImean}$ is the variance inside the region of interest.\n\n\\comment{still need to read up on NPS, and noise aliasing, I just can't concentrate anymore}\n\n\\textbf{Measuring NPS experimentally}\nThe appropriate x-ray spectrum for the study must be selected.  Ideally, the same x-ray spectrum should be used for NPS measurement as was used for measurement of MTF.  There is unfortunately no accepted standard x-ray spectrum for NPS measurements in the literature.  
Many measurements have been made at 70 kV (including screen-film and some CR measurements).  Several beam filtrations have been quoted with 70 kV: 0.5 mm Cu (\\citep{Dobbins1995}, \\citep{Bradford1999}, \\citep{Hillen1987}, \\citep{Chotas1997}) and 19 mm Al (\\citep{Samei1997}).\nOnce the spectrum has been selected, flat-field images over a range of exposures must be acquired.  Exposure values covering the usable dynamic range of the device should be used because the NPS is often exposure dependent (exposure in terms of mR).  The characteristic curve of the device should also be measured, to establish linearity.\nOnce the flat-field images have been acquired, the next thing to be determined for NPS measurement is the size of the ROI to be used for the analysis, and the number of ROIs to be averaged.    In practice, it is necessary to select the best possible values of the ROI size (as given by N) and the number of ROIs in the ensemble average (as given by M), under the constraint of a finite amount of available data.  \n\\begin{enumerate}\n\\item First choose N such that there is minimal distortion of the NPS.\n\\item Then compute what certainty in the estimate of the NPS is required; this gives the number of ensemble averages, M, required.\n\\end{enumerate}\n\n\\textbf{Fixed pattern noise}\nIt is important philosophically to include this structured noise, such as would occur from phosphor granularity in a phosphor screen, in the estimate of the NPS.  It is included by default in all screen-film NPS calculations.  It is important to include structured noise because it is part of the spatially-stochastic intensity variation that affects the ability of the observer to pick out a signal from the noise.   \n\\subsubsection{NEQ and DQE}\n\nFrom Handbook of medical imaging Volume 1, by Myers, chapter 9.\nNEQ($\\nu$) can be interpreted as the frequency-dependent density of quanta at the input of a perfect detector that would yield the same output noise as the real detection system under evaluation.\n\\begin{equation}\nSNR^2_{Ideal} = \\int d \\nu |\\Delta \\tilde{\\bar{g}}(\\nu)|^2 \\; NEQ(\\nu)\n\\end{equation}", "meta": {"hexsha": "0643891fc7915bb6ef808eb4451a17973ed34d7c", "size": 11551, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/detection.tex", "max_stars_repo_name": "hfan36/dissertation", "max_stars_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/detection.tex", "max_issues_repo_name": "hfan36/dissertation", "max_issues_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/detection.tex", "max_forks_repo_name": "hfan36/dissertation", "max_forks_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.1532258065, "max_line_length": 600, "alphanum_fraction": 0.7672063025, "num_tokens": 2960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.847967769904032, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5957228460599638}}
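\nAs a worked illustration of the ensemble-average definition of the NPS (an added sketch, not from the handbook), the following Python fragment estimates a 2-D NPS from a stack of flat-field ROIs. The array names and the normalization shown ($\\Delta x^2 / N^2$ per ROI, averaged over $M$ ROIs) are assumptions reflecting one common convention; conventions vary across the literature.\n\\begin{verbatim}\nimport numpy as np\n\ndef nps_2d(rois, dx):\n    # rois: M x N x N stack of linearized flat-field ROIs; dx: pixel pitch.\n    M, N, _ = rois.shape\n    acc = np.zeros((N, N))\n    for roi in rois:\n        noise = roi - roi.mean()          # subtract the background signal\n        acc += np.abs(np.fft.fft2(noise)) ** 2\n    return (dx * dx / (N * N)) * acc / M  # variance per frequency bin\n\\end{verbatim}\nSubtracting the mean before transforming corresponds to the first of the two background-handling options described above, which was noted to give better measurement certainty.\n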
{"text": "\\chapter{On-Policy Prediction with Approximation}\n\n\\section{Summary}\n\n\\subsection{Function approximation}\nIn the previous chapters tables were used to represent the value function. This means that every state is updated separately. If the number of states goes up, this becomes infeasible. The state function can be approximated by $\\hat{v}(s, w)$, with $w \\in \\Re^d$, the dimension of w is much smaller then the number of states $d << |s|$. $\\hat{v}$ generalizes the update of one state over many.\n\n\\subsection{The prediction objective $\\overline{VE}$}\n\n\\begin{itemize}\n\t\\item $\\mu(s)$: state distribution/on-policy distribution, indicates how important a certain state is.\n\t\\item $h(s)$: probability that the episode began in state $s$\n\t\\item $\\eta(s)$: number of timesteps spend on average in state $s$\n\\end{itemize}\n\nThe value function updates don't update the state estimation directly but have to adjust the weights of a function. A common metric used is the \\textbf{mean squared value error} illustrated in equation~\\ref{eq:mean squared value error}.\n\n\\begin{equation}\n\\overline{VE}(w) = \\sum_{s \\in S} \\mu(s)\\big[v_{\\pi}(s) - \\hat{v}(s, w) \\big]^2\n\\label{eq:mean squared value error}\n\\end{equation}\n\nWhen training a episodic task on-policy $\\mu$ can be derived from $\\eta$ using equation~\\ref{eq:on-policy average time spend in state system of equations}. Solving the sytem of equations results in $\\eta$, which can be used to find $\\mu(s)$ $\\mu = \\frac{\\mu(s)}{\\sum_{s'}\\mu(s')}$.\n\n\\begin{equation}\n\\eta(s) = h(s) + \\sum_{\\bar{s}} \\eta(\\bar{s}) \\sum_a \\pi(a | \\bar{s})p(s | \\bar{s}, a)\n\\label{eq:on-policy average time spend in state system of equations}\n\\end{equation}\n\nThe value function $\\overline{VE}$ might not always be the the best option, but it's a common used one. The updates of the approximation sometime lead to a local optimum, and not to a global one.\n\n\\section{Stochastic gradient and semi-gradient methods}\n\n\\begin{itemize}\n\t\\item $w = (w_1, w_2, ... w_d)^T$: weights\n\t\\item $\\hat{v}(s, w)$: approximation function\n\t\\item $w_t$: update of $w$ at timestep $t$\n\\end{itemize}\n\nOften there will be no $w$ that gets all the states right. So the $w$ with the lowest error is picked by stochastic gradient descent (SGD). If the actual value function is present the update of the approximation is equation~\\ref{eq:gradient descent update rule}. However $v_{\\pi}$ is not known in practice, so take $v_{\\pi} \\approx U_t$ and the update rule then becomes equation~\\ref{eq:stochastic gradient descent update rule}.\n\n\\begin{equation}\n\\begin{split}\nw_{t+1} & = w_t - \\frac{1}{2} \\alpha \\nabla \\big[ v_{\\pi}(S_t) - \\hat{v}(S_t, w_t) \\big]^2 \\\\\n& = w_t + \\alpha \\big[ v_{\\pi}(S_t) - \\hat{v}(S_t, w_t) \\big] \\hat{\\nabla}(S_t, w_t)\n\\end{split}\n\\label{eq:gradient descent update rule}\n\\end{equation}\n\n\\begin{equation}\nw_{t+1} = w_t + \\alpha \\big[ U_t - \\hat{v}(S_t, w_t) \\big] \\hat{\\nabla}(S_t, w_t)\n\\label{eq:stochastic gradient descent update rule}\n\\end{equation}\n\n\\textbf{Monte carlo methods} approximate $U_t=G_t$, the update rule then becomes equation~\\ref{eq:stochastic gradient descent monte carlo update rule}. We know that $G_t$ is an unbiased estimate of $v_{\\pi}$.($\\EX[U_t | S_t = s]=v_{\\pi}(s)$). 
With \\textbf{bootstrapping methods} (equation~\\ref{eq:bootstrapping Ut approximation}) this is no longer the case: since $U_t$ relies on the weights $w_t$, it is biased. They can however converge much more quickly, though they are not as robust. We call these \\textbf{semi-gradient methods}. \n\n\\begin{equation}\nw_{t+1} = w_t + \\alpha \\big[ G_t - \\hat{v}(S_t, w_t) \\big] \\nabla \\hat{v}(S_t, w_t)\n\\label{eq:stochastic gradient descent monte carlo update rule}\n\\end{equation}\n\n\\begin{equation}\nU_t = \\sum_{a,s',r} \\pi(a|S_t)p(s', r| S_t, a)[r+ \\gamma\\hat{v}(s', w)]\n\\label{eq:bootstrapping Ut approximation}\n\\end{equation}\n\nSo the TD(0) algorithm with approximation is a \\textbf{semi-gradient method}; the update is illustrated in equation~\\ref{eq:TD(0) stochastic gradient update rule}.\n\n\\begin{equation}\nw_{t+1} = w_t + \\alpha \\big[ R_{t+1} + \\gamma \\hat{v}(S_{t+1}, w_t) - \\hat{v}(S_t, w_t) \\big] \\nabla \\hat{v}(S_t, w_t)\n\\label{eq:TD(0) stochastic gradient update rule}\n\\end{equation}\n\n\\textbf{State aggregation} is a simplified version of general function approximation. The states are grouped, and one component of $w$ represents the whole group.\n\n\\subsection{Linear methods}\n\\begin{itemize}\n\t\\item $x(s)$: feature vector, representing basis functions\n\t\\item $x_i(s)$: value of the $i$th basis function at state $s$\n\t\\item $\\hat{v}(s, w) = w^Tx(s) = \\sum_{i=1}^{d}w_ix_i(s)$: linear value function\n\t\\item $\\nabla \\hat{v}(s, w) = x(s)$: gradient of the value function\n\\end{itemize}\n\nMonte Carlo with linear approximation has the update rule of equation~\\ref{eq:monte carlo update gradient descent linear approximation}. It converges robustly to the global optimum.\n\n\\begin{equation}\nw_{t+1} = w_t + \\alpha \\big[ G_t - \\hat{v}(S_t, w_t) \\big]x(S_t)\n\\label{eq:monte carlo update gradient descent linear approximation}\n\\end{equation}\n\nTD(0) has the update rule of equation~\\ref{eq:TD(0) update semi-gradient descent linear approximation}, which converges to the TD fixed point $w_{TD}$ rather than to the global optimum of $\\overline{VE}$. \n\n\\begin{equation}\n\\begin{split}\nw_{t+1} & = w_t + \\alpha \\big( R_{t+1} + \\gamma w_t^Tx_{t+1} - w_t^T x_t\\big)x_t\\\\\n& = w_t + \\alpha \\big[R_{t+1}x_t - x_t ( x_t - \\gamma x_{t+1} )^Tw_t\\big]\n\\end{split}\n\\label{eq:TD(0) update semi-gradient descent linear approximation}\n\\end{equation}\n\nThe expected TD(0) weight vector is derived from equation~\\ref{eq:TD(0) update semi-gradient descent linear approximation} in equation~\\ref{eq:TD(0) linear approximation expected weight vector}. The TD(0) steady-state vector $w_{TD}$ can be solved from this as a fixed point $w_{TD} = A^{-1}b$.\n\n\\begin{equation}\n\\begin{split}\n& \\EX[w_{t+1}| w_t] = w_t + \\alpha (b-Aw_t) \\\\\n& b = \\EX[R_{t+1}x_t] \\in \\Re^d \\\\\n& A = \\EX[x_t(x_t - \\gamma x_{t+1})^T]\n\\end{split}\n\\label{eq:TD(0) linear approximation expected weight vector}\n\\end{equation}\n\nAn upper bound on the $\\overline{VE}$ of the TD fixed point is given by equation~\\ref{eq:upper bound error TD(0) linear approximation}; the Monte Carlo solution attains $\\min_w \\overline{VE}$. When $\\gamma$ is near one the bound blows up, but because it's a TD method, it has low variance. \n\n\\begin{equation}\n\\overline{VE}(w_{TD}) \\leq \\frac{1}{1-\\gamma}\\min_w \\overline{VE}(w)\n\\label{eq:upper bound error TD(0) linear approximation}\n\\end{equation}\n\n\\subsection{Feature construction for Linear Methods}\nA big limitation of the linear form is that no interactions between features can be modeled, 
such as the presence of a feature being good only in the absence of another feature.\n\n\\subsubsection{Polynomials}\nPolynomials are a simple way to model interpolation and regression; the calculation of the weights $w$ is still a linear problem.\n\n\\begin{itemize}\n\t\\item high dimensional feature vector $x(s)= (1, s_1, s_2, s_1s_2, s_1^2,s_2^2,s_1^2s_2,s_1s_2^2,s_1^2s_2^2)$\n\\end{itemize}\n\n\\begin{equation}\n\\begin{split}\nx_i(s) = \\prod_{j=1}^{k}s_j^{c_{i,j}} \\\\\nc_{i,j} \\in \\{0, 1, \\ldots, n\\} \\\\\n(n+1)^k \\text{ features} \\\\\n\\end{split}\n\\label{eq:n ordered polynomial basis feature}\n\\end{equation}\n\n\\subsubsection{Fourier basis}\nA Fourier series (a linear combination of cosine basis functions) can serve as a basis for the function approximation. It works well with SARSA on smooth functions, but has some problems with discontinuities.\n\n\\begin{equation}\n\\begin{split}\n& x_i(s) = \\cos(\\pi s^Tc^i), s \\in [0,1]^k \\\\\n& c^i = (c_1^i, ... c_k^i)  \\\\\n& c_j^i \\in \\{0, ..., n\\} \\\\\n& (n+1)^k \\text{ features}\n\\end{split}\n\\label{eq:fourier basis}\n\\end{equation}\n\n\\begin{equation}\n\\alpha_i = \\frac{\\alpha}{\\sqrt{(c_1^i)^2 + ... + (c_k^i)^2}}\n\\label{eq:step size TD with fourier approximation}\n\\end{equation}\n\n\\subsubsection{Coarse Coding}\nThe state space is covered by overlapping shapes (typically circles). A state is a point in the state space; all the features whose shapes contain this point become 1, the others zero. The shape/width of the shapes determines the generalization: small shapes give very little generalization, while big ones give a lot of generalization.\n\n\\subsubsection{Tile coding}\nSimilar to coarse coding, but the shapes (such as rectangles) do not overlap. Only points in the same tile are generalized with each other. In order to get the same kind of generalization as in coarse coding, several offset layers of tiles (tilings) can be made. This is faster to compute than coarse coding, with equally good results. The step size is also more intuitive: $\\alpha = \\frac{1}{n}$ would move one tile per iteration, or, if there are multiple layers, $\\alpha = \\frac{w}{n}$, with $w$ the offset between the layers.\n\n\\subsubsection{Radial basis functions}\nRadial basis functions (RBFs) are the natural generalization of coarse coding to continuous-valued features. A feature can take any value in $[0,1]$, whereas a coarse-coding feature is Boolean.\n\n\\begin{itemize}\n\t\\item $c_i$: center of the RBF curve\n\t\\item $\\sigma_i$: width of the RBF curve\n\\end{itemize}\n\n\\begin{equation}\nx_i(s) = \\exp \\left( -\\frac{|| s - c_i ||^2}{2 \\sigma_i^2} \\right)\n\\end{equation}\n\n\\subsection{Selecting step-size parameters manually}\n\nThe classical choice for the step size is $\\alpha = \\frac{1}{t}$; this works well with MC methods, but not with TD methods.\n\nIn the tabular case, $\\alpha=1$ would eliminate the error after just one step, and $\\alpha=\\frac{1}{\\tau}$ eliminates it after about $\\tau$ steps. This rule can be generalized to the linear approximation as $\\alpha = (\\tau \\EX[x^Tx])^{-1}$, with $x$ a random feature vector. This does assume that the size of $x^Tx$ is about constant.\n\n\\subsection{Nonlinear function approximation: Artificial Neural network}\n\nThe text in the book is rather vague; the only relevant takeaway is that TD errors are likewise used to update the value function if you use an ANN.\n\n\\subsection{Least-Squares TD}\nEarlier in the chapter we used least squares to do linear approximation. 
We ended up with a fixed point $w_{TD}=A^{-1}b$ where $A=\\EX[x_t(x_t - \\gamma x_{t+1})^T]$ and $b=\\EX[R_{t+1}x_t]$. Instead of iterating towards the fixed point, $w_{TD}$ can be estimated in one go.  This is illustrated in equation~\\ref{eq:TDLS update equation}; a more detailed algorithm can be found on page 230 of the book.\n\n\\begin{equation}\n\\begin{split}\n\\hat{A}_t & = \\sum_{k=0}^{t-1}x_k(x_k - \\gamma x_{k+1})^T + \\epsilon I \\\\\n\\hat{b}_t & = \\sum_{k=0}^{t-1} R_{k+1} x_k \\\\\nw_t & = \\hat{A}_t^{-1}\\hat{b}_t\n\\end{split}\n\\label{eq:TDLS update equation}\n\\end{equation}\n\nIf all the samples are used in the update, we don't get the forgetting of old samples that we had before. This can cause some issues if the target policy $\\pi$ is updated. There are some extra steps required to add forgetting to the algorithm. There is however no need to pick a step size, but you do have to pick the size of $\\epsilon$, so this is not really a win.\n\n\\subsection{Memory-based function approximation}\n\nInstead of using the training examples to find the proper values of function parameters, the training data itself can be saved; when the approximation is queried at a certain state, it could return the answer of \\textbf{the nearest neighbor}. Alternatively \\textbf{the weighted average} of a set of neighbors could be returned, or a surface could be fit to the neighbors and the value approximated using this surface; this is called \\textbf{locally weighted regression}.\n\nThe two major advantages of this kind of approximation are:\n\\begin{enumerate}\n\t\\item No parameters.\n\t\\item The effect on the value function of adding a sample to the set is immediate.\n\\end{enumerate}\nMemory-based functions can however become slow in execution as the number of samples grows. This can be mitigated by using \\textbf{k-d trees} to speed up the lookups.\n\n\\subsection{Kernel-based function approximation}\nAny linear parametric regression method that uses feature vectors can be recast as kernel regression with $k(s,s')=x(s)^Tx(s')$ (the kernel trick).\n\\begin{itemize}\n\t\\item $g(s)$: Target for the state s.\n\t\\item $D$: Set of stored samples.\n\\end{itemize}\n\n\\begin{equation}\n\\hat{V}(s, D) = \\sum_{s' \\in D} k(s, s')g(s')\n\\end{equation}\n\nThe text on pages 232--233 is very vague on this method; it might require some further investigation.\n\n\\subsection{Looking deeper at on-policy learning: Interest and Emphasis}\nNot all states are equally relevant to the RL problem. For example, states that only occur after a series of very poor choices don't need as accurate a value function as those frequently visited by the greedy policy.\n\n\\begin{itemize}\n\t\\item $I_t$: interest, the degree to which we are interested in accurately valuing the state. An interest of $0$ means there is no interest, and a value of $1$ means it's of maximum interest\n\t\\item $M_t$: emphasis, which multiplies the learning rate\n\t\\item $M_t = I_t + \\gamma^nM_{t-n}$\n\\end{itemize}\n\n\\begin{equation}\nw_{t+n} = w_{t+n-1} + \\alpha M_t\\big[ G_{t:t+n} - \\hat{v}(s_t, w_{t+n-1}) \\big] \\nabla \\hat{v}(s_t, w_{t+n-1})\n\\end{equation}\n\n\\section{Exercises}\n\\subsection{Exercise 9.1 page 209}\n\\textbf{Show that tabular methods such as presented in Part I of the book are a special case of linear function approximations. What would the feature vectors be?}\n\nThe weights $w$ would be the values that used to be in the table. 
The feature vector $x(s)$ would be a one-hot encoding: 1 at the index of the active state and 0 elsewhere, so with six states numbered from 0 and $S_t=3$, $x(3) = (0, 0, 0, 1, 0, 0)^T$. This means that the value function is the weight at the index of that state: $\\hat{v}(s{=}3, w)=w^Tx(3)=w[3]$.\n\n\\subsection{Exercise 9.2 page 211}\n\\textbf{Why does (9.17 in the book) define $(n+1)^k$ distinct features for dimension k?}\nEach exponent $c_{i,j}$ can take one of the $n+1$ values $0, 1, \\ldots, n$, and there are $k$ dimensions, so there are $(n+1)^k$ combinations.\n\n\\subsection{Exercise 9.3 page 211}\n\\textbf{What n and $c_{i,j}$ produce the feature vectors $x(s)=(1, s_1, s_2, s_1s_2, s_1^2,s_2^2,s_1^2s_2,s_1s_2^2,s_1^2s_2^2)$?}\n\nThese basis functions have degree at most 2 in each variable, and a dimension of 2 $(k=2)$. A max degree of 2 leads to $n=2$.\n\n\\begin{itemize}\n\t\\item $c_{i,j}$: exponent vector\n\t\\item $i$: index of the basis function\n\t\\item $j$: index of the state dimension\n\\end{itemize}\n\nEach basis function is a product of at most two factors, one per dimension. The table below shows the powers used on the terms to construct each basis function.\n\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\ni & basis function & $c_{i,1}$ & $c_{i,2}$ \\\\\n\\hline\n0 & $1$ & 0 & 0 \\\\\n1 & $s_1$ & 1 & 0 \\\\\n2 & $s_2$ & 0 & 1 \\\\\n3 & $s_1s_2$ & 1 & 1 \\\\\n4 & $s_1^2$ & 2 & 0 \\\\\n5 & $s_2^2$ & 0 & 2 \\\\\n6 & $s_1^2s_2$ & 2 & 1 \\\\\n7 & $s_1s_2^2$ & 1 & 2 \\\\\n8 & $s_1^2s_2^2$ & 2 & 2 \\\\\n\\end{tabular}\n\\end{center}\n\n\\subsection{Exercise 9.4 page 221}\n\\textbf{Suppose we believe that one of two state dimensions is more likely to have an effect on the value function than is the other, that generalization should be primarily across the dimension rather than along it. What kind of tilings could be used to take advantage of this prior knowledge}\n\nThis is mentioned on page 220: a form of tiling that is elongated along the less relevant dimension would work well, since it generalizes broadly along that dimension while discriminating finely along the relevant one.\n\n\\subsection{Exercise 9.5 page 223}\n\n\\begin{itemize}\n\t\\item 7 dimensions\n\t\\item 8 strip tilings per dimension\n\t\\item $7 \\times 8 = 56$ dimension-independent strip tilings\n\t\\item $\\binom{7}{2}=21$ pairs of dimensions, each with 2 two-dimensional tilings\n\t\\item total tilings $= 21 \\times 2 + 56 = 98$\n\\end{itemize}\n\nFor any state, exactly one tile is active in each tiling: one in each of the 56 strip tilings and one in each of the $21 \\times 2 = 42$ pairwise tilings. The feature vector therefore has 98 non-zero entries equal to 1, and $\\EX[x^Tx] = 98$. 
This leads to $\\alpha = (\\tau\\EX[x^Tx])^{-1}=(98\\tau)^{-1}$.\n\n\\subsection{Exercise 9.6 page 223}\n\\textbf{If $\\tau=1$ prove that (9.19) together with (9.7) results in the error being reduced to zero in one update}\n\n\\begin{itemize}\n\t\\item equation 9.19 from book page 223: $\\alpha = (\\tau \\EX[x^Tx])^{-1}$\n\t\\item equation 9.7 from book page 202: $w_{t+1} = w_t + \\alpha \\big[ U_t - \\hat{v}(S_t, w_t)\\big]\\nabla \\hat{v}(S_t, w_t)$\n\\end{itemize}\n\nI am not sure how to continue here; it seems $\\bar{x}$ must be $1$ for this to work out.\n\n\\begin{equation}\n\\begin{split}\nw_{t+1} & = w_t + \\alpha \\big[ U_t - \\hat{v}(S_t, w_t)\\big]\\nabla \\hat{v}(S_t, w_t) \\\\\n& = w_t +  \\EX[x^Tx]^{-1}  e x \\\\\n& = w_t +  (x^Tx)^{-1} ex \\\\\n& = w_t +  e\\bar{x} \\\\\ne & =  U_t - \\hat{v}(S_t, w_t) \\\\\ne & = w_{t+1} - w_{t} \\\\\n\\bar{x} & = \\frac{x}{x^Tx} \\\\\n\\nabla \\hat{v}(S_t, w_t) & = x\n\\end{split}\n\\end{equation}", "meta": {"hexsha": "edbe44f3ffa0f33421e3503ccd196b67978f39b5", "size": 16421, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RL/notes/TeX_files/chapter09.tex", "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RL/notes/TeX_files/chapter09.tex", "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RL/notes/TeX_files/chapter09.tex", "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.142394822, "max_line_length": 516, "alphanum_fraction": 0.7156689605, "num_tokens": 5160, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677583778258, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5957228379624576}}
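\nTo tie the summary together, here is a minimal Python sketch of semi-gradient TD(0) with linear function approximation (an added illustration, not from the book). The environment interface (\\texttt{reset}, \\texttt{step}) and the \\texttt{features} function mapping a state to $x(s)$ are assumed, as is an episodic task.\n\\begin{verbatim}\nimport numpy as np\n\ndef semi_gradient_td0(env, policy, features, d,\n                      alpha=0.01, gamma=0.99, episodes=1000):\n    # w parameterizes the linear estimate v(s) = w @ features(s).\n    w = np.zeros(d)\n    for _ in range(episodes):\n        s, done = env.reset(), False\n        while not done:\n            s2, r, done = env.step(policy(s))\n            x = features(s)\n            # Bootstrapped target R + gamma * v(S'); 0 beyond terminal.\n            target = r + (0.0 if done else gamma * (w @ features(s2)))\n            w += alpha * (target - w @ x) * x  # semi-gradient TD(0) update\n            s = s2\n    return w\n\\end{verbatim}\nNote that the gradient is taken only through $\\hat{v}(S_t, w)$ and not through the bootstrapped target, which is exactly what makes this a semi-gradient rather than a true gradient method.\n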
{"text": "\\documentclass[11pt,a4paper]{article}\n\\usepackage[top=1cm, bottom=1cm, left=2cm, right=2cm, includefoot, heightrounded]{geometry}\n\\usepackage{fontspec}\n\\setmainfont{Liberation Serif}\n\n\\title{\\bf{Golang Workshop Exercises}}\n\\author{Baiju Muthukadan \\\\ Architect, ZeOmega}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\centerline{\\LARGE\\bf Exercises}\n\\section*{Exercise 1}\nWrite a program to print \"Hello, World!\" and save this in a\nfile named {\\it helloworld.go}.  Compile the program and run\nit like this: {\\it./helloworld}\n\n\\section*{Exercise 2}\nWrite a program to print whether the number given as the command line\nargument is even or odd\n\n\\section*{Exercise 3}\nWrite a program to print sum of all numbers below 50 completely\ndivisible by 3 or 5 (i.e., remainder 0)\n\n\\section*{Exercise 4}\n\nWrite a program to print area and perimeter of a given circle and\nrectangle.  Use {\\it struct} to represent circle and rectangle.  The\ntype of shape and dimensions can be read from command line as given\nhere:\n\n\\begin{verbatim}\n$ ./shape circle 2\nArea: 12.56\nPerimeter: 12.56\n$ ./shape rectangle 2 3\nArea: 6\nPerimeter: 8\n\\end{verbatim}\n\n\\section*{Exercise 5}\n\nUpdate the previous program (Exercise 4) to use interfaces.\n\n\\section*{Exercise 6}\n\nRewrite the program in Exercise 2 using goroutine and channel.  The\ntask is to print sum of all numbers below 50 completely divisible by 3\nor 5 (i.e., remainder 0)\n\n\\section*{Exercise 7}\n\nWrite a program to download a list of web pages concurrently using\nGoroutines.\n\n\\noindent\nHint: Use this tool for serving junk content for testing:\nhttps://github.com/baijum/lipsum\n\n\\newpage\n\n\\centerline{\\LARGE\\bf Answers}\n\\section*{Answer 1}\n\n\\begin{enumerate}\n\\item Content of {\\it helloworld.go}:\n\\begin{verbatim}\npackage main\n\nimport \"fmt\"\n\nfunc main() {\n        fmt.Println(\"Hello, World!\")\n}\n\\end{verbatim}\n\n\\item Build program:\n\\begin{verbatim}\n$ go build helloworld.go\n\\end{verbatim}\n\n\\item Run program and verify output like this:\n\\begin{verbatim}\n$ ./helloworld\nHello, World!\n\\end{verbatim}\n\\end{enumerate}\n\n\\section*{Answer 2}\n\\begin{enumerate}\n\\item Content of the file {\\it evenodd.go}:\n\\begin{verbatim}\npackage main\n\nimport (\n        \"fmt\"\n        \"os\"\n        \"strconv\"\n)\n\nfunc main() {\n        i := os.Args[1]\n        n, err := strconv.Atoi(i)\n        if err != nil {\n                fmt.Println(\"Not a number:\", i)\n                os.Exit(1)\n        }\n\n        if n%2 == 0 {\n                fmt.Println(\"even number:\", n)\n        } else {\n                fmt.Println(\"odd number:\", n)\n        }\n}\n\\end{verbatim}\n\n\\item Run program and verify output like this:\n\\begin{verbatim}\n$ go run evenodd.go 2\neven number: 2\n$ go run evenodd.go 3\nodd number: 3\n\\end{verbatim}\n\n\\end{enumerate}\n\n\\section*{Answer 3}\n\\begin{enumerate}\n\\item Content of the file {\\it sum.go}:\n\\begin{verbatim}\npackage main\n\nimport \"fmt\"\n\nfunc main() {\n        sum := 0\n        for i := 1; i < 50; i++ {\n                if i%3 == 0 {\n                        sum = sum + i\n                } else {\n                        if i%5 == 0 {\n                                sum = sum + i\n                        }\n                }\n        }\n        fmt.Println(sum)\n}\n\\end{verbatim}\n\n\\item Run program and verify output like this:\n\\begin{verbatim}\n$ go run 
sum.go\n543\n\\end{verbatim}\n\n\\end{enumerate}\n\n\\section*{Answer 4}\n\\begin{enumerate}\n\\item Content of the file {\\it shapes.go}:\n\\begin{verbatim}\npackage main\n\nimport (\n        \"fmt\"\n        \"os\"\n        \"strconv\"\n)\n\ntype Rectangle struct {\n        length float64\n        width  float64\n}\n\ntype Circle struct {\n        radius float64\n}\n\nfunc (r Rectangle) Area() float64 {\n        return r.length * r.width\n}\n\nfunc (r Rectangle) Perimeter() float64 {\n        return 2 * (r.length + r.width)\n}\n\nfunc (c Circle) Area() float64 {\n        return 3.14 * c.radius * c.radius\n}\n\n// Circumference\nfunc (c Circle) Perimeter() float64 {\n        return 2 * 3.14 * c.radius\n}\n\nfunc main() {\n\n        shape := os.Args[1]\n\n        if shape == \"circle\" {\n\n                r := os.Args[2]\n                radius, _ := strconv.Atoi(r)\n\n                circle := Circle{float64(radius)}\n\n                area := circle.Area()\n                fmt.Println(\"Area:\", area)\n\n                perimeter := circle.Perimeter()\n                fmt.Println(\"Perimeter:\", perimeter)\n\n        } else {\n                w := os.Args[2]\n                h := os.Args[3]\n\n                width, _ := strconv.Atoi(w)\n                height, _ := strconv.Atoi(h)\n\n                rectangle := Rectangle{float64(width), float64(height)}\n\n                area := rectangle.Area()\n                fmt.Println(\"Area:\", area)\n\n                perimeter := rectangle.Perimeter()\n                fmt.Println(\"Perimeter:\", perimeter)\n        }\n}\n\\end{verbatim}\n\n\\item Run program and verify output like this:\n\\begin{verbatim}\n$ go run shapes.go circle 4\nArea: 50.24\nPerimeter: 25.12\n$ go run shapes.go rectangle 2 3\nArea: 6\nPerimeter: 8\n\\end{verbatim}\n\n\\end{enumerate}\n\n\\section*{Answer 5}\n\\begin{enumerate}\n\\item Change the previous source file ({\\it shapes.go}) to include the below code:\n\\begin{verbatim}\ntype Geometry interface {\n        Area() float64\n        Perimeter() float64\n}\n\nfunc Measure(g Geometry) {\n        area := g.Area()\n        fmt.Println(\"Area:\", area)\n\n        perimeter := g.Perimeter()\n        fmt.Println(\"Perimeter:\", perimeter)\n}\n\nfunc main() {\n\n        shape := os.Args[1]\n\n        if shape == \"circle\" {\n\n                r := os.Args[2]\n                radius, _ := strconv.Atoi(r)\n\n                circle := Circle{float64(radius)}\n                Measure(circle)\n\n        } else {\n                w := os.Args[2]\n                h := os.Args[3]\n\n                width, _ := strconv.Atoi(w)\n                height, _ := strconv.Atoi(h)\n\n                rectangle := Rectangle{float64(width), float64(height)}\n                Measure(rectangle)\n        }\n}\n\\end{verbatim}\n\n\\end{enumerate}\n\n\n\\section*{Answer 6}\n\\begin{enumerate}\n\\item Content of the file {\\it newsum.go}:\n\\begin{verbatim}\npackage main\n\nimport \"fmt\"\n\nfunc Sum(s chan int) {\n        sum := 0\n        for i := 1; i < 50; i++ {\n                if i%3 == 0 || i%5 == 0 {\n                        sum = sum + i\n                }\n        }\n        s <- sum\n}\n\nfunc main() {\n        t := make(chan int)\n        go Sum(t)\n        fmt.Println(\"Sum:\", <-t)\n}\n\\end{verbatim}\n\n\\item Run program and verify output like this:\n\\begin{verbatim}\n$ go run 
newsum.go\nSum: 543\n\\end{verbatim}\n\n\\end{enumerate}\n\n\\section*{Answer 7}\n\\begin{enumerate}\n\\item Content of the file {\\it download.go}:\n\\begin{verbatim}\npackage main\n\nimport (\n        \"io/ioutil\"\n        \"log\"\n        \"net/http\"\n        \"net/url\"\n        \"sync\"\n)\n\nfunc main() {\n        urls := []string{\n                \"http://localhost:9999/1.txt\",\n                \"http://localhost:9999/2.txt\",\n                \"http://localhost:9999/3.txt\",\n                \"http://localhost:9999/4.txt\",\n        }\n        var wg sync.WaitGroup\n        for _, u := range urls {\n                wg.Add(1)\n                go func(u string) {\n                        defer wg.Done()\n                        ul, err := url.Parse(u)\n                        if err != nil {\n                                log.Println(err, u)\n                                return\n                        }\n                        // use the path (without leading slash) as file name\n                        fn := ul.Path[1:]\n                        res, err := http.Get(u)\n                        if err != nil {\n                                log.Println(err, u)\n                                return\n                        }\n                        defer res.Body.Close()\n                        content, err := ioutil.ReadAll(res.Body)\n                        if err != nil {\n                                log.Println(err, u)\n                                return\n                        }\n                        ioutil.WriteFile(fn, content, 0644)\n                }(u)\n        }\n        wg.Wait()\n}\n\\end{verbatim}\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "d494c78e41629d2b25d1736cdff4e13e64106b50", "size": 7748, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises.tex", "max_stars_repo_name": "vsrecio/presentation", "max_stars_repo_head_hexsha": "92236fe43fadef713cd16685a1109e819d3fb408", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises.tex", "max_issues_repo_name": "vsrecio/presentation", "max_issues_repo_head_hexsha": "92236fe43fadef713cd16685a1109e819d3fb408", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises.tex", "max_forks_repo_name": "vsrecio/presentation", "max_forks_repo_head_hexsha": "92236fe43fadef713cd16685a1109e819d3fb408", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.9405405405, "max_line_length": 91, "alphanum_fraction": 0.5486577181, "num_tokens": 1970, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.8479677526147223, "lm_q1q2_score": 0.5957228339137042}}
{"text": "% ***********************************************************************************\n% Pure LaTeX part to be inserted in a document (be careful of depencies of packages & commands)\n% Prepared by XXX and YYY under the supervision of Arnaud de La Fortelle\n% Fall 2017\n% 12 random walk subsection of the simulation part\n% ***********************************************************************************\n\n\\subgroup{4}{Yue Hu, Carlin Liao and Robert Ruigrok}\n\n\\paragraph{Model presentation}\nIn this example we simulate the random walk of a particle in a 2D space. A random walk is a mathematical object, known as a stochastic or random process, that describes a path that consists of a succession of random steps. In order to simulate this process, we let a particle move over a discretized grid where its motion is drawn from a set of possible directions. In this simulation we are interested in finding expected distribution of particles after a certain number of time steps as well as the position where they hit the boundaries of the spatial grid.\\newline\n\nA particle starts at a specified initial position, from where it begins moving through the grid. In our code, we used a coordinate system to represent the location of a particle as provided in figure \\ref{fig:RandomWalkGrid}. The outer border of the grid is enclosed by a ``wall''. When the particle hits this wall, its motion stops and the location where it makes contact is registered.\n\n\\begin{figure}[htb]\n    \\label{fig:RandomWalkGrid}\n\t\\centering\n\t\\includegraphics[width=6cm]{RandomWalkGrid.png}       \n\t\\caption{Coordinate representation in spatial grid}\n\\end{figure}\n\nThe dynamics of a particle are relatively straightforward and can be described by equation \\ref{eq:RandomWalkDynamics}. A particle has 5 options for its motion: moving up, right, down, left or no motion (options are depicted in figure \\ref{fig:RandomWalkMotion}). Every motion has a certain probability $p$ to occur. These probabilities can be given as input and must add up to 1.\n\n\\begin{equation}\n\\label{eq:RandomWalkDynamics}\nX_{k+1}(x,y) = X_k(x,y) +  \\begin{cases}\n(0,1) &\\text{with probability $p\\uparrow$}\\\\\n(1,0) &\\text{with probability $p\\rightarrow$}\\\\\n(0,-1) &\\text{with probability $p\\downarrow$}\\\\\n(-1,0) &\\text{with probability $p\\leftarrow$}\\\\\n(0,0) &\\text{with probability $p\\ \\bullet$}\n\\end{cases}\n\\end{equation}\n\n\n\n\\begin{figure}[htb]\n    \\label{fig:RandomWalkMotion}\n\t\\centering\n\t\\includegraphics[width=3cm]{RandomWalkMotion.png}       \n\t\\caption{Potential motion per time step}\n\\end{figure}\n\n\n\\paragraph{Implementation} \nAt the top of the file, it is possible to set the grid size, starting position of particles, \\# of particles, simulation horizon and motion direction probabilities. The script will also generate snapshots of the particle distribution at different moments in time during the simulation. By selecting the number of subplots, you can determine how many instances you would like to see. This will provide insights into how the particle distribution develops over time. \\newline\n\nParticles are simulated one at a time. The simulation runs until the time horizon $T$ is reached or the particle hits the wall. Their position is saved in a 3-dimensional array (time, x-position, y-position) at specified moments in time only; this way the amount of required memory is kept to a minimum. 
%You do not save the position of every particle at every time step, but you only ``count'' their positions at time steps that you are interested in. Information about individual particles gets lost, but that is fine.\n\\newline\n\nInformation about where particles hit the wall is included in the same arrays. This data ``circumvents'' the $n \\times n$ data about the particle distribution within the grid. As a result, when we plot the full array we can see the distribution of particles in the grid at their respective location, and the distribution of particles at the wall directly ``behind'' the wall.\\newline\n\n\\begin{lstlisting}[language = python, caption = 2D Random Walk Simulation]\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom random import random\nimport pylab\n\n############## START INPUT #################\n\nGridSize = 20 # will create a 20x20 square grid\nPos_init = np.ceil(np.array([0.4,0.4])*GridSize).astype(int)  # start position\nT = 0.3*np.power(GridSize, 2)  # total # of time steps\nn_particles = 10000 # number of particles, 10000+ recommended\n# specify subplots for intermediate time snap shots\nsubplot_row = 2\nsubplot_column = 2\n#define the probabilities of motion: ([up,right,down,left,0])\nmotion_prob = np.array([0.2,0.2,0.2,0.2,0.2])\n\n############## END INPUT #################\n\nx_grid = GridSize\ny_grid = GridSize\nn_subplot = subplot_row*subplot_column\n# calculate time step for snapshots of process\nPlot_interval = np.floor((T-1)/(n_subplot-1))\n# Normalize probabilities in case they don't add up to 1\nmotion_prob = motion_prob/np.sum(motion_prob)\n# define the change in coordinates of every motion:\nmotion_xy = np.array([[0,1],[1,0],[0,-1],[-1,0],[0,0]])\nmotion_prob_sum = np.cumsum(motion_prob)\n# this is used later to draw from with randomizer\n\n# make an empty data grid to sum the particle positions\nData = np.zeros((x_grid+2,y_grid+2))  # +2 for wall data\n# Some resizing here and later is for plotting purposes\nData_resized = np.zeros((Data.shape[0]+1,Data.shape[1]+1))\n# Create a new empty data set for the intermediate plots:\nData_Snap=np.zeros((n_subplot,Data_resized.shape[0],Data_resized.shape[1]))\n\n# construct some arrays for plotting later on:\nxx, yy = pylab.meshgrid(\n    pylab.linspace(-1,x_grid+1,x_grid+3),\n    pylab.linspace(-1,y_grid+1,y_grid+3))\n\n# start loop over all the particles\nfor i in range(1, n_particles+1):\n    # Initialize simulation\n    t = 0\n    HitWall = False\n    Pos = Pos_init\n    Subplot = 1\n\n    # Simulation Process\n    while t < T and not HitWall:\n        MotionRandom = random()\n        IndexMotion = np.argmax(motion_prob_sum>MotionRandom)\n        Pos = Pos + motion_xy[IndexMotion,:]\n\n        # Now check for hitting the wall (either side, on both axes)\n        if (Pos[0] == 0 or Pos[0] == x_grid+1 or\n                Pos[1] == 0 or Pos[1] == y_grid+1):\n            if Subplot <= n_subplot:\n                Data_Snap[Subplot-1,Pos[0],Pos[1]] = \\\n                Data_Snap[Subplot-1,Pos[0],Pos[1]]+1\n            HitWall = True\n\n        # Record the position for each time snap\n        if t % Plot_interval==0 and Subplot <= n_subplot and not(HitWall):\n            Data_Snap[Subplot-1,Pos[0],Pos[1]] =\\\n            Data_Snap[Subplot-1,Pos[0],Pos[1]]+1\n            Subplot = Subplot+1\n\n        t = t+1\n\n    #Save the final results\n    Data[Pos[0],Pos[1]] = Data[Pos[0],Pos[1]] + 1\n    Data_resized[:-1,:-1] = Data\n\n# Add the particles that hit the wall in earlier time steps\n# to the 
later plots\nData_Snap_New = np.cumsum(Data_Snap,axis=0)\nData_Snap_New[:,1:x_grid+1,1:y_grid+1] = Data_Snap[:,1:x_grid+1,1:y_grid+1]\n\n# Visualize the outcomes at each snap of process\nplt.figure()\nfor j in range(1, n_subplot+1):\n    pylab.subplot(subplot_row, subplot_column, j)\n    pylab.pcolor(xx,yy,np.transpose(Data_Snap_New[j-1,:,:]))\n    Str = 'Distribution at t = ' + str((j-1)*Plot_interval+1)\n    pylab.title(Str)\n    # add a color bar\n    pylab.colorbar()\n    pylab.plot([0, x_grid],[0, 0], 'r',\n               [0, x_grid],[y_grid, y_grid], 'r',[0, 0],[0, y_grid], 'r',\n               [x_grid, x_grid],[0, y_grid], 'r')\n    pylab.plot(Pos_init[0]-0.5,Pos_init[1]-0.5,'ro')\n\npylab.show()\n\n# Plot the final distribution\nplt.figure()\npylab.pcolor(xx,yy,np.transpose(Data_resized))\npylab.title('Final distribution at t = %d, including hitting walls' %T)\n# add a color bar\npylab.colorbar()\npylab.plot([0, x_grid],[0, 0], 'r',\n           [0, x_grid],[y_grid, y_grid], 'r',[0, 0],[0, y_grid], 'r',\n           [x_grid, x_grid],[0, y_grid], 'r')\npylab.plot(Pos_init[0]-0.5,Pos_init[1]-0.5,'ro')\npylab.show()\n\\end{lstlisting}\n\n\n\n\\paragraph{Results}\nIn this section we include two simulations, one for a $10 \\times 10$ grid and one for a $20 \\times 20$ grid, with different simulation time horizons $T$. Both simulations use 10,000 particles and have a motion probability of $0.2$ in all directions. \\newline\n\nIt is clearly visible how the particles spread out and make their way to the walls over time. The more particles that are simulated per grid resolution, the smoother the distribution becomes. You can clearly see that the 10,000 simulated particles lead to a clean distribution in the smaller plot of figure \\ref{fig:RandomWalk10}, while the larger plot of figure \\ref{fig:RandomWalk20} shows a grainier distribution.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=14cm]{figure10x10.png}\n\t\\caption{Particle distribution for $10 \\times 10$ grid at different time steps}\n\t\\label{fig:RandomWalk10}\n\\end{figure}\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=14cm]{figure20x20.png}\n\t\\caption{Particle distribution for $20 \\times 20$ grid at different time steps}\n\t\\label{fig:RandomWalk20}\n\\end{figure}\n\n\n \n\\paragraph{Interpretation}\n{\\it Relate these quantities to the model and to theoretical knowledge of the course.}\\newline\n\n{\\it I think this motion is described by Fick's law in two dimensions. The concentration at a specific point changes over time, depending on the concentration of its surroundings. Should we derive why the second derivative matters? $\\phi$ is the concentration, $D$ the diffusion coefficient.}\n\n\\begin{equation}\n\\label{eq:FicksLaw}\n\\frac{\\partial\\phi}{\\partial t} = D\\nabla^2\\phi = D \\big(\\frac{\\partial^2\\phi}{\\partial x^2} + \\frac{\\partial^2\\phi}{\\partial y^2}\\big)\n\\end{equation}\n\n\n \\paragraph{Conclusion}\n \\textit{What have we learned? Is everything aligned (theory and practice)? What was difficult? Provide perspectives.}\n \n The dynamics of the random walk were easy to model. The challenge in this simulation was to save the data for the intermediate time steps. Since the particles are simulated one by one for the full time horizon, we had to write some non-intuitive code to save the location of every particle at the relevant intermediate time steps. \\newline\n \n From the lecture we recall that diffusion distance scales with the square root of time. 
Here we tried to simulate that. When doubling the grid size and taking a four times higher simulation horizon, the distribution looks similar. However, the scaling does not appear to hold exactly: at the end of the scaled simulation horizon, the smaller grid seems to retain relatively more particles in the interior than the larger grid does. What could cause this difference?\n \n", "meta": {"hexsha": "6b93770c2870d12d73d323fe4e0e7cd4f8296b48", "size": 10617, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "simulation-2DRandomWalk.tex", "max_stars_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_stars_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-01-08T02:54:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-29T06:19:28.000Z", "max_issues_repo_path": "simulation-2DRandomWalk.tex", "max_issues_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_issues_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simulation-2DRandomWalk.tex", "max_forks_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_forks_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-16T17:29:03.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-16T17:29:03.000Z", "avg_line_length": 51.2898550725, "max_line_length": 568, "alphanum_fraction": 0.7120655552, "num_tokens": 2745, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5957167767265948}}
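Regarding the scaling question at the end: a minimal side experiment (my own sketch, independent of the code above; unbounded grid, symmetric step probabilities) can check that the root-mean-square displacement indeed grows like sqrt(t), which is the scaling the conclusion relies on:

\begin{lstlisting}[language = python, caption = RMS displacement scaling check]
import numpy as np

rng = np.random.default_rng(0)
# up, right, down, left, stay -- each drawn with probability 0.2
steps = np.array([[0, 1], [1, 0], [0, -1], [-1, 0], [0, 0]])

n_particles, T = 5000, 400
choices = rng.integers(0, 5, size=(n_particles, T))
paths = steps[choices].cumsum(axis=1)   # positions at t = 1..T, no walls

for t in (100, 200, 400):
    rms = np.sqrt((paths[:, t - 1] ** 2).sum(axis=1).mean())
    print(t, rms, rms / np.sqrt(t))     # last column is roughly constant
\end{lstlisting}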
{"text": "\\documentclass[12pt]{article}\n\\usepackage{amsfonts,amsmath}\n\\usepackage{fourier}\n\n\\title{Problem Description}\n\n\\begin{document}\n\\maketitle \n\n\\section{Introduction}\n\nIn this document, we aim to detail the implementation of a Fortran project using ...\n\n\n\\section{Problem Description}\n\nGiven some matrix $P \\in {\\mathbb{R}}^{4\\times 4}$, let $x=(x_1,x_2,x_3,x_4)$, $D = \\text{diag}(x)$ and $\\varepsilon = (\\varepsilon_{ij}) \\in \\mathbb{R}^{4\\times 4}$. We are interested in solving the following problem:\n\n\\begin{align*}\n\t\\text{minimize} & \\qquad \\sum_{i=1}^4\\sum_{j=1}^4 \\varepsilon_{ij}^2\\\\\n\t\\text{subject to} & \\qquad x_4=1\\\\\n    & \\qquad D(P+\\varepsilon) \\text{ must be persymmetric.}\n\\end{align*}\nThis can be written as the following problem:\n\\begin{align*}\n\t\\text{minimize} & \\quad \\sum_{i=1}^4\\sum_{j=1}^4 \\varepsilon_{ij}^2\\\\\n\t\\text{subject to} & \\quad x_4=1\\\\\n\t& \\quad x_i (P_{ij}+\\varepsilon_{ij}) - x_{\\alpha(j)} (P_{\\alpha(j)\\alpha(i)}+\\varepsilon_ {\\alpha(j)\\alpha(i)}) = 0,\\\\ \n    & \\quad \\text{for } i=1, 2, 3, j=1, \\ldots, 4-i \\text{ where } \\alpha(i)=5-i.\n\\end{align*}\n\n\n\\section{Solving}\n\n\\section{Results}\n\\end{document}", "meta": {"hexsha": "11e4eab83b03797c5f1828514efeeb9bd58ebc66", "size": 1136, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/problemdescription.tex", "max_stars_repo_name": "melissawm/findingaleph", "max_stars_repo_head_hexsha": "381f742d61bd7d5e5538934a19c1e7bea405b963", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/problemdescription.tex", "max_issues_repo_name": "melissawm/findingaleph", "max_issues_repo_head_hexsha": "381f742d61bd7d5e5538934a19c1e7bea405b963", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/problemdescription.tex", "max_forks_repo_name": "melissawm/findingaleph", "max_forks_repo_head_hexsha": "381f742d61bd7d5e5538934a19c1e7bea405b963", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5555555556, "max_line_length": 218, "alphanum_fraction": 0.6672535211, "num_tokens": 438, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5957167725292029}}
{"text": "% TeX Source\n%\n% Author: Tetsuya Ishikawa <tiskw111gmail.com>\n% Date  : October 16, 2021\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE START %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[twocolumn, a4paper, 10pt]{article}\n\\usepackage{tiskw}\n\n\\begin{document}\n\n\\title{Random Fourier Features for Gaussian Process Model}\n\\author{Tetsuya Ishikawa \\\\ \\normalsize\\texttt{tiskw111@gmail.com}}\n\\maketitle\n\n\\section*{Abstract}\\titlebar\n% {{{\n\nThis article describes the procedure for applying random Fourier features~\\cite{Rasmussen2006}\nto the Gaussian process model~\\cite{Rahimi2007}. This makes it possible to speed up the training\nand inference of the Gaussian process model, and to apply the model to larger data.\n\nThe Gaussian process model~\\cite{Rahimi2007} is one of the supervised machine learning frameworks\ndesigned on a probability space, and is widely used for regression and classification tasks, like\nsupport vector machine and random forest. The major difference between Gaussian process models and\nother machine learning models is that the Gaussian process model is a \\textit{stochastic} model.\nIn other words, since the Gaussian process model is formulated as a stochastic model,\nit can provide not only the predicted value but also a measure of uncertainty for the prediction.\nThis is a very useful property that can improve the explainability of machine learning model.\n\nOn the other hand, the Gaussian process model is also known for its high computational cost\nof training and inrefence. If the total number of training data is $N \\in \\mathbb{Z}^{+}$,\nthe computational cost required for training the Gaussian process model is $O(N^3)$, and\ncomputational cost required for inference is $O(N^2)$, where $O$ is the\n\\textit{Bachmann\u2013Landau notation}. The problem is that the computational cost is given by\na power of the total number of training data $N$, which can be an obstacle when appliying\nthe model to large-scale data. This comes from the fact that the Gaussian process model has\nthe same mathematical structure as the kernel method, in other words,\nthe kernel support vector machine also has the same problem.\n\nOne of the methods to speed up the kernel method is random Fourier features~\\cite{Rasmussen2006}\n(hereinafter abbreviated as RFF). This method can significantly reduces the computational cost\nwhile keeping the flexibility of the kernel method by approximating the kernel function as the inner\nproduct of finite dimensional vectors. Specifically, the compurational cost required for training\ncan be reduced to $O(N D^2)$, and the amount of calculation required for inference can be reduced\nto $O(D^2)$. However, $D \\in \\mathbb{Z}^{+}$ is a hyperparameter of RFF and can be specified\nindependently of the total number of training data $N$.\n\nSince the Gaussian process model has the same mathematical structure as the kernel method,\nRFF can be applied to the Gaussian process model as well. This evolves the Gaussian process model\ninto a more powerful, easy-to-use, and highly reliable ML tool.\n\nHowever, when applying RFF to a Gaussian process model, some mathematical techniques are required\nthat are not straightforward. 
Unfortunately, there seem to be no articles\nthat mention these difficulties and their solutions, so I have written up the procedure here.\n\nIf you prefer the Japanese version of this document, see this repository\n\\footnote{\\texttt{https://github.com/tiskw/mathematical-articles}}.\n\n% }}}\n\n\\section{Gaussian Process Model Revisited}\\titlebar\n% {{{\n\nThis section gives an overview of the Gaussian process model. Unfortunately, this document\ndoes not cover details such as the formulation and derivation of Gaussian process models,\nso if you are interested in the details, please refer to \\cite{Rasmussen2006}.\n\nLet $\\mathcal{D} = \\{(\\bs{x}_n, y_n)\\}_{n=1}^{N}$ be the training data, and $\\sigma \\in \\mathbb{R}^{+}$\nbe the standard deviation of the label observation error, where $\\bs{x}_n \\in \\mathbb{R}^M$, $y_n \\in \\mathbb{R}$.\nThe Gaussian process model describes the prediction as a random variable that follows a normal distribution.\nIf the test data is $\\bs{\\xi} \\in \\mathbb{R}^M$, the expectation of the prediction is given by:\n\\begin{equation}\n    m (\\bs{\\xi}) = \\widehat{m} (\\bs{\\xi}) + \\left( \\bs{y} - \\widehat{\\bs{m}} \\right)\\tran\n    \\left( \\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{k} (\\bs{\\xi}),\n    \\label{eqn:gp_exp}\n\\end{equation}\nand the covariance of the test data $\\bs{\\xi}_1$, $\\bs{\\xi}_2$ is given by:\n\\begin{equation}\n    v (\\bs{\\xi}_1, \\bs{\\xi}_2) = k (\\bs{\\xi}_1, \\bs{\\xi}_2)\n    - \\bs{k} (\\bs{\\xi}_1)\\tran \\left( \\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{k} (\\bs{\\xi}_2),\n    \\label{eqn:gp_cov}\n\\end{equation}\nwhere the function $k: \\mathbb{R}^M \\times \\mathbb{R}^M \\to \\mathbb{R}$ is a kernel function,\nthe matrix $\\bs{K} \\in \\mathbb{R}^{N \\times N}$ is a kernel matrix defined as\n\\begin{equation}\n    \\bs{K} = \\begin{pmatrix}\n        k (\\bs{x}_1, \\bs{x}_1) & \\cdots & k (\\bs{x}_1, \\bs{x}_N) \\\\\n        \\vdots & \\ddots & \\vdots \\\\\n        k (\\bs{x}_N, \\bs{x}_1) & \\cdots & k (\\bs{x}_N, \\bs{x}_N) \\\\\n    \\end{pmatrix},\n\\end{equation}\nand the vectors $\\bs{k} (\\bs{\\xi}) \\in \\mathbb{R}^N$ and $\\bs{y} \\in \\mathbb{R}^N$ are defined as\n\\begin{equation}\n    \\bs{k} (\\bs{\\xi}) = \\begin{pmatrix}\n        k (\\bs{\\xi}, \\bs{x}_1) \\\\\n        \\vdots \\\\\n        k (\\bs{\\xi}, \\bs{x}_N) \\\\\n    \\end{pmatrix},\n    \\hspace{10pt}\n    \\bs{y} = \\begin{pmatrix}\n        y_1 \\\\ \\vdots \\\\ y_N \\\\\n    \\end{pmatrix},\n\\end{equation}\nrespectively.\nAlso, $\\widehat{m} (\\bs{\\xi})$ is the prior distribution of the prediction, and\n$\\widehat{\\bs{m}} = (\\widehat{m} (\\bs{x}_1), \\ldots, \\widehat{m} (\\bs{x}_N))\\tran$ is\nthe prior distribution of the predicted values of the training data. If you don't need to set a\nprior distribution, it's common to set $\\widehat{m} (\\cdot) = 0$ and $\\widehat{\\bs{m}} = \\bs{0}$.\n\nYou can compute the variance of the prediction of the test data $\\bs{\\xi}$\nby substituting $\\bs{\\xi}_1 = \\bs{\\xi}_2 = \\bs{\\xi}$ into the equation (\\ref{eqn:gp_cov}),\n\\begin{equation}\n    v (\\bs{\\xi}, \\bs{\\xi}) = k (\\bs{\\xi}, \\bs{\\xi})\n    - \\bs{k} (\\bs{\\xi})\\tran \\left( \\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{k} (\\bs{\\xi}).\n\\end{equation}\n\n% }}}\n\n\\section{RFF Revisited}\\titlebar\n% {{{\n\nIn this section, we revisit random Fourier features. 
Unfortunately, this article doesn't have\nenough space to explain the details, so if you would like to know more,\nplease refer to the original paper \\cite{Rahimi2007}.\n\nLet the function $k: \\mathbb{R}^M \\times \\mathbb{R}^M \\to \\mathbb{R}$ be the kernel function.\nIn RFF, the kernel function can be approximated as\n\\begin{equation}\n    k (\\bs{x}_1, \\bs{x}_2) \\simeq \\bs{\\phi} (\\bs{x}_1)\\tran \\bs{\\phi} (\\bs{x}_2),\n    \\label{eqn:rff_kernel_approx}\n\\end{equation}\nwhere the dimension $D$ of the vector $\\bs{\\phi} (\\bs{x})$ is a hyperparameter of RFF.\nThe larger the dimension $D$, the higher the approximation accuracy of the equation\n(\\ref{eqn:rff_kernel_approx}), but also the greater the computational cost.\n\nFor example, in the case of the RBF kernel\n\\begin{equation}\n    k (\\bs{x}_1, \\bs{x}_2) = \\exp \\left (- \\gamma \\| \\bs{x}_1 - \\bs{x}_2 \\|^2 \\right),\n\\end{equation}\nwhich is one of the best known kernel functions, the vector $\\bs{\\phi} (\\bs{x})$ is given by\n\\begin{equation}\n    \\bs{\\phi} (\\bs{x}) = \\sqrt{\\frac{2}{D}} \\begin{pmatrix}\n        \\cos \\bs{Wx} \\\\\n        \\sin \\bs{Wx}\n    \\end{pmatrix},\n\\end{equation}\nwhere the matrix $\\bs{W} \\in \\mathbb{R}^{D/2 \\times M}$ is a random matrix in which each element\nis sampled independently from the normal distribution $\\mathcal{N} (0, 2\\gamma)$.\n\n% }}}\n\n\\section{Gaussian process model and RFF}\\titlebar\n% {{{\n\nIn this section, we apply RFF to the Gaussian process model\nand theoretically confirm the effect of the speedup.\n\n\\subsection{Computational complexity of Gaussian process model before applying RFF}\n\nFirst, let's check the computational cost required for training and inference of a plain Gaussian\nprocess model. As a premise, it is assumed that the number of training data $N \\in \\mathbb{Z}^{+}$\nis sufficiently larger than the dimension $M \\in \\mathbb{Z}^{+}$ of the input vector and the\ndimension $D \\in \\mathbb{Z}^{+}$ which is a hyperparameter of RFF. Here, the bottleneck of the\ntraining computational cost is obviously the calculation of the inverse matrix\n$\\left (\\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1}$ in the formulas (\\ref{eqn:gp_exp}) and (\\ref{eqn:gp_cov}).\nSince the size of this matrix is $N \\times N$, the computational cost for training is $O(N^3)$.\n\nNext, the bottleneck of the inference is the matrix products\n$\\left (\\bs{y} -\\widehat{\\bs{m}} \\right)\\tran \\left( \\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1}$ and\n$\\bs{k} (\\bs{\\xi}_1)\\tran \\left( \\bs{K} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{k} (\\bs{\\xi}_2)$;\nthe former vector can be cached at training time, so the expectation costs $O(N)$ per test point,\nwhile the quadratic form for the covariance costs $O(N^2)$.\n\n\\subsection{Applying RFF to expectation of prediction}\n\nNow, let's apply RFF to the Gaussian process model. First of all, if you substitute the RFF\napproximation formula (\\ref{eqn:rff_kernel_approx}) into the formula of the expectation of the prediction\nin the Gaussian process (\\ref{eqn:gp_exp}), you'll get\n\\begin{equation}\n    m (\\bs{\\xi}) = \\widehat{m} (\\bs{\\xi}) + \\left( \\bs{y} - \\widehat{\\bs{m}} \\right)\\tran\n    \\left( \\bs{\\Phi}\\tran \\bs{\\Phi} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{\\Phi}\\tran \\bs{\\phi} (\\bs{\\xi}),\n    \\label{eqn:rffgp_exp_naive}\n\\end{equation}\nwhere the matrix $\\bs{\\Phi} \\in \\mathbb{R}^{D \\times N}$ is defined as\n$\\bs{\\Phi} = (\\bs{\\phi} (\\bs{x}_1), \\ldots, \\bs{\\phi} (\\bs{x}_N))$.\nHowever, this does not yet give a speedup. 
The complexity bottleneck of the above expression\n(\\ref{eqn:rffgp_exp_naive}) is still the inverse of the $N \\times N$ matrix\n$\\left( \\bs{\\Phi}\\tran \\bs{\\Phi} + \\sigma^2 \\bs{I} \\right)^{-1}$.\n\nNow let us manipulate the equation (\\ref{eqn:rffgp_exp_naive}) a little.\nFirst, let us introduce the \\textit{matrix inversion lemma} (also referred to as the\n\\textit{binomial inverse lemma}), which is a useful formula for expanding a matrix inverse.\n\n\\begin{theorem}[Matrix Inversion Lemma]\n    Let\n    $\\bs{A} \\in \\mathbb{R^{N \\times N}}$,\n    $\\bs{B} \\in \\mathbb{R^{N \\times M}}$,\n    $\\bs{C} \\in \\mathbb{R^{M \\times N}}$,\n    and\n    $\\bs{D} \\in \\mathbb{R^{M \\times M}}$\n    be real matrices. Then the equation\n    \\begin{equation}\n        \\left( \\bs{A} + \\bs{BDC} \\right)^{-1} = \\bs{A}^{-1} - \\bs{A}^{-1} \\bs{B}\n        \\left( \\bs{D}^{-1} + \\bs{CA}^{-1} \\bs{B} \\right)^{-1} \\bs{CA}^{-1}\n        \\label{eqn:matrix_inversion_lemma}\n    \\end{equation}\n    holds, where the matrices $\\bs{A}$ and $\\bs{D}$ are regular matrices.\n\\end{theorem}\n\nThe proof of the matrix inversion lemma is given at the end of this article;\nlet us move on to applying the lemma to the equation (\\ref{eqn:rffgp_exp_naive}).\n\nBy replacing $\\bs{A} = \\sigma^2 \\bs{I}$, $\\bs{B} = \\bs{\\Phi}\\tran$, $\\bs{C} = \\bs{\\Phi}$,\nand $\\bs{D} = \\bs{I}$ in the equation (\\ref{eqn:matrix_inversion_lemma}),\nwe obtain the following equation:\n\\begin{equation}\n    \\left( \\bs{\\Phi}\\tran \\bs{\\Phi} + \\sigma^2 \\bs{I} \\right)^{-1}\n    = \\frac{1}{\\sigma^2} \\left (\\bs{I} - \\bs{\\Phi}\\tran\n    \\left( \\bs{\\Phi \\Phi}\\tran + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{\\Phi} \\right),\n    \\label{eqn:rffgp_exp_solving}\n\\end{equation}\nwhere we write $\\bs{P} = \\bs{\\Phi \\Phi}\\tran \\in \\mathbb{R}^{D \\times D}$.\nThen, multiplying the above equation (\\ref{eqn:rffgp_exp_solving}) by $\\bs{\\Phi}\\tran$ from the right,\nwe get\n\\begin{equation}\n    \\left( \\bs{\\Phi}\\tran \\bs{\\Phi} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{\\Phi}\\tran\n    = \\frac{1}{\\sigma^2} \\bs{\\Phi}\\tran\n    \\left( \\bs{I} - \\left( \\bs{P} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{P} \\right).\n    \\label{eqn:rff_key_eqn}\n\\end{equation}\nTherefore, the expression (\\ref{eqn:rffgp_exp_naive}) can be written as\n\\begin{equation}\n    m (\\bs{\\xi}) = \\widehat{m} (\\bs{\\xi}) + \\frac{1}{\\sigma^2}\n    \\left( \\bs{y} - \\widehat{\\bs{m}} \\right)\\tran \\bs{\\Phi}\\tran \\bs{S} \\bs{\\phi} (\\bs{\\xi}),\n    \\label{eqn:rffgp_exp}\n\\end{equation}\nwhere\n\\begin{equation}\n    \\bs{S} = \\bs{I} - \\left( \\bs{P} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{P}.\n    \\label{eqn:rffgp_exp_cache}\n\\end{equation}\n\nClever readers will have already noticed that the bottleneck has been resolved.\nThe inverse matrix $(\\bs{\\Phi}\\tran \\bs{\\Phi} + \\sigma^2 \\bs{I})^{-1}$, which was the bottleneck of\nthe expression (\\ref{eqn:rffgp_exp_naive}), became $(\\bs{P} + \\sigma^2 \\bs{I})^{-1}$\nin the expressions (\\ref{eqn:rffgp_exp}) and (\\ref{eqn:rffgp_exp_cache}), where the size of\nthe inverse matrix is $D \\times D$. 
Normally, the RFF dimension $D$ is set sufficiently\nsmaller than the number of training data $N$, so the inverse matrix\n$(\\bs{P} + \\sigma^2 \\bs{I})^{-1}$ is no longer a bottleneck of the computational cost.\nThe bottleneck of the expressions (\\ref{eqn:rffgp_exp}) and (\\ref{eqn:rffgp_exp_cache})\nis the matrix product $\\bs{P} = \\bs{\\Phi \\Phi}\\tran$, whose computational cost is $O(ND^2)$.\nTherefore we've achieved a considerable speedup of the training of the Gaussian process model\nby applying RFF, because the computational cost before RFF is $O(N^3)$.\n\n\\subsection{Applying RFF to covariance of prediction}\n\nNext, we apply RFF to the covariance of the prediction (\\ref{eqn:gp_cov}).\nBy substituting the RFF approximation and the key equation (\\ref{eqn:rff_key_eqn}) into the expression (\\ref{eqn:gp_cov}),\nwe obtain\n\\begin{align}\n    v (\\bs{\\xi}_1, \\bs{\\xi}_2)\n    & = \\bs{\\phi} (\\bs{\\xi}_1)\\tran \\bs{\\phi} (\\bs{\\xi}_2)\n    - \\frac{1}{\\sigma^2} \\bs{\\phi} (\\bs{\\xi}_1)\\tran \\bs{PS} \\bs{\\phi} (\\bs{\\xi}_2) \\notag \\\\\n    & = \\bs{\\phi} (\\bs{\\xi}_1)\\tran\n    \\left( \\bs{I} - \\frac{1}{\\sigma^2} \\bs{PS} \\right)\n    \\bs{\\phi} (\\bs{\\xi}_2).\n    \\label{eqn:rffgp_cov}\n\\end{align}\nThe bottleneck of the expression (\\ref{eqn:rffgp_cov}) is, as for the expectation of the\nprediction, the matrix product $\\bs{P} = \\bs{\\Phi \\Phi}\\tran$, whose computational cost is $O(ND^2)$.\n\nThe procedure of training and inference of the Gaussian process model after applying RFF is\ndescribed as pseudocode in Algorithms \\ref{alg:rffgp_train} and \\ref{alg:rffgp_infer}.\nNote that the prior distribution of the Gaussian process model is set to 0 for the sake of\nsimplicity in Algorithms \\ref{alg:rffgp_train} and \\ref{alg:rffgp_infer}.\n\n\\begin{algorithm}[t]\n    \\caption{\\bf Training of the GP model after RFF}\n    \\label{alg:rffgp_train}\n    \\KwData{$\\mathcal{D} = \\left\\{ (\\bs{x}_n, y_n) \\right\\}_{n=1}^{N}$, \\, $\\sigma \\in \\mathbb{R}^{+}$}\n    \\KwResult{$\\bs{c}_\\mathrm{m} \\in \\mathbb{R}^D$, \\, $\\bs{C}_\\mathrm{v} \\in \\mathbb{R}^{D \\times D}$}\n    $\\bs{y} \\gets (y_1, \\ldots, y_N)\\tran$ \\\\\n    $\\bs{\\Phi} \\gets (\\bs{\\phi}(\\bs{x}_1), \\ldots, \\bs{\\phi}(\\bs{x}_N))$ \\\\\n    $\\bs{P} \\gets \\bs{\\Phi\\Phi}\\tran$ \\\\\n    $\\bs{S} \\gets \\bs{I} - \\left( \\bs{P} + \\sigma^2 \\bs{I} \\right)^{-1} \\bs{P}$ \\\\\n    $\\bs{c}_\\mathrm{m} \\gets \\frac{1}{\\sigma^2} \\bs{y}\\tran \\bs{\\Phi}\\tran \\bs{S}$\n    \\hfill\\Comment{{\\footnotesize Cache for expectation}\\hspace*{-48pt}\\mbox{}}\n    $\\bs{C}_\\mathrm{v} \\gets \\bs{I} - \\frac{1}{\\sigma^2} \\bs{PS}$\n    \\hfill\\Comment{{\\footnotesize Cache for covariance}\\hspace*{-45pt}\\mbox{}}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n    \\caption{\\bf Inference of the GP model after RFF}\n    \\label{alg:rffgp_infer}\n    \\KwData{$\\bs{\\xi} \\in \\mathbb{R}^M$, \\, $\\bs{c}_\\mathrm{m} \\in \\mathbb{R}^D$, \\, $\\bs{C}_\\mathrm{v} \\in \\mathbb{R}^{D \\times D}$}\n    \\KwResult{$\\mu \\in \\mathbb{R}$, $\\eta \\in \\mathbb{R}$}\n    $\\bs{z} \\gets \\bs{\\phi}(\\bs{\\xi})$ \\\\\n    $\\mu \\gets \\bs{c}_\\mathrm{m} \\bs{z}$\n    \\hfill\\Comment{{\\footnotesize Inference of expectation}\\hspace*{-70pt}\\mbox{}}\n    $\\eta \\gets \\bs{z}\\tran \\bs{C}_\\mathrm{v} \\bs{z}$\n    \\hfill\\Comment{{\\footnotesize Inference of covariance}\\hspace*{-58pt}\\mbox{}}\n\\end{algorithm}\n\nFinally, the computational cost after applying RFF is summarized in 
table\n\\ref{tab:gp_complexity}, where $N \\in \\mathbb{Z}^{+}$ is the number of training data\nand $D \\in \\mathbb{Z}^{+}$ is the dimension of RFF.\n\n\\begin{table}[t]\n    \\caption{Computational cost of the GP model before/after RFF}\n    \\label{tab:gp_complexity}\n    \\begin{center}\\begin{tabular}{ccc}\n        \\hline\n         & Training & Inference \\\\\n        \\hline\n        Before RFF & $O(N^3)$   & $O(N)$   \\\\  \n        After RFF  & $O(N D^2)$ & $O(D^2)$ \\\\\n        \\hline\n    \\end{tabular}\\end{center}\n\\end{table}\n\n% }}}\n\n\\appendix\n\n\\section{Appendices}\\titlebar\n% {{{\n\n\\subsection{Proof of matrix inversion lemma}\n\nThe matrix inversion lemma is reprinted and proved.\n\n\\begin{theorem}[Matrix Inversion Lemma]\n    Let\n    $\\bs{A} \\in \\mathbb{R^{N \\times N}}$,\n    $\\bs{B} \\in \\mathbb{R^{N \\times M}}$,\n    $\\bs{C} \\in \\mathbb{R^{M \\times N}}$,\n    and\n    $\\bs{D} \\in \\mathbb{R^{M \\times M}}$\n    be real matrices. Then the equation\n    \\begin{equation}\n        \\left( \\bs{A} + \\bs{BDC} \\right)^{-1} = \\bs {A}^{-1} - \\bs{A}^{-1} \\bs{B}\n        \\left( \\bs{D}^{-1} + \\bs{CA}^{-1} \\bs{B} \\right)^{-1} \\bs{CA}^{-1}\n    \\end{equation}\n    holds, where the matrix $\\bs{A}$ and $\\bs{D}$ are regular matrices.\n\\end{theorem}\n\\begin{proof}\nThe following equation holds:\n\\begin{align*}\n    \\begin{pmatrix}\n        \\bs{A} & \\bs{B} \\\\\n        \\bs{C} & \\bs{D}\n    \\end{pmatrix}^{-1}\n    &= \\begin{pmatrix}\n        \\bs{A}^{-1} + \\bs{A}^{-1} \\bs{BSCA}^{-1} & - \\bs{A}^{-1} \\bs{BS} \\\\\n        - \\bs {SCA}^{-1}                         & \\bs {S}\n    \\end{pmatrix} \\\\\n    &= \\begin{pmatrix}\n        \\bs{T}                & -\\bs{TBD}^{-1} \\\\\n        - \\bs{D}^{-1} \\bs{CT} & \\bs{D}^{-1} + \\bs{D}^{-1} \\bs{CTBD}^{-1}\n    \\end{pmatrix},\n\\end{align*}\nwhere\n\\begin{align}\n    \\bs{T} &= \\left( \\bs{D} - \\bs {CA}^{-1} \\bs{B} \\right)^{-1}, \\\\\n    \\bs{S} &= \\left( \\bs{A} - \\bs {BD}^{-1} \\bs{C} \\right)^{-1}.\n\\end{align}\nIt is easy to verify the above equation from a direct calculation.\nBy comparing the corresponding parts of the above block matrix, we get\n\\begin{align}\n    \\bs{T} &= \\bs{A}^{-1} + \\bs{A}^{-1} \\bs{BSCA}^{-1},\n    \\label{eqn:binomial_theorem_proof_1} \\\\\n    \\bs{S} &= \\bs{D}^{-1} + \\bs{D}^{-1} \\bs{CTBD}^{-1}, \\\\\n    - \\bs{A}^{-1} \\bs{BS} &= - \\bs{TBD}^{-1}, \\\\\n    - \\bs{SCA}^{-1} &= - \\bs{D}^{-1} \\bs{CT},\n\\end{align}\nBy replacing with\n\\begin{center}\n    $\\bs{A} \\to \\bs{D}^{-1}$, \\hspace{5pt}\n    $\\bs{B} \\to -\\bs{C}$, \\hspace{5pt}\n    $\\bs{C} \\to \\bs{B}$, \\hspace{5pt}\n    $\\bs{D} \\to \\bs{A}$,\n\\end{center}\nin the equation (\\ref{eqn:binomial_theorem_proof_1}), we get the formula to be proved.\n\\end{proof}\n\n% }}}\n\n\\pagebreak\n\n\\begin{thebibliography}{9}\n% {{{\n\n    \\bibitem{Rahimi2007}\n    A.~Rahimi and B.~Recht, \n    ``Random Features for Large-Scale Kernel Machines'',\n    Neural Information Processing Systems, 2007.\n\n    \\bibitem{Rasmussen2006}\n    C.~Rasmussen and C.~Williams, ``Gaussian Processes for Machine Learning'', MIT Press, 2006.\n\n% }}}\n\\end{thebibliography}\n\n\\end{document}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE FINISH %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% vim: expandtab tabstop=4 shiftwidth=4\n", "meta": {"hexsha": "a7d225b94621016ed4aaa5e3c13fe0c5deea6903", "size": 18910, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"documents/rff_for_gaussian_process.tex", "max_stars_repo_name": "tiskw/random-fourier-features", "max_stars_repo_head_hexsha": "4a12185e44d1f9aba594f8a7569042e73675d6cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 24, "max_stars_repo_stars_event_min_datetime": "2021-02-14T12:05:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-12T07:29:25.000Z", "max_issues_repo_path": "documents/rff_for_gaussian_process.tex", "max_issues_repo_name": "tiskw/RandomFourierFeatures", "max_issues_repo_head_hexsha": "4a12185e44d1f9aba594f8a7569042e73675d6cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-19T09:53:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-03T03:04:30.000Z", "max_forks_repo_path": "documents/rff_for_gaussian_process.tex", "max_forks_repo_name": "tiskw/random-fourier-features", "max_forks_repo_head_hexsha": "4a12185e44d1f9aba594f8a7569042e73675d6cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2021-03-07T12:08:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T09:47:36.000Z", "avg_line_length": 46.2347188264, "max_line_length": 133, "alphanum_fraction": 0.6491274458, "num_tokens": 6379, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7690802317779601, "lm_q1q2_score": 0.5957167686249158}}
{"text": "\\chapter{Visualize spectral distances}\n\\label{ch:visualize-spectral-distances}\n\n\\begin{wrapfigure}{o}{0.65\\textwidth}\n\t\\vspace{-1.5 cm}\n    \\includegraphics[scale=0.6]{ch-visualize_spectral_distances-fig1.png}\n\\end{wrapfigure}\n\n\\newthought{Let's consider a spectrum}, or any other data entry for that matter, as a point in a multidimensional space. We can define distance metrics between these points and visualize the distance values from one another or from a selected reference point or reference spectrum. By doing so, we can explore how similar our measurements are to a selected reference. We can do this on a series of spectra or even on hyperspectral maps!\n\nIf you build the workflow shown above this paragraph in \\mutation\\ you will be able to explore various distance metrics. First, let's use the \\textit{'Liver cirrhosis - spectral image'} dataset provided by the \\widget{Datasets} widget, then calculate the \\textit{Euclidean distances} from the average spectrum with the \\widget{Neighbors} widget and visualize them in \\widget{Hyperspectra}. \n\n\\medskip\nCan you reproduce the results below? Pay attention to the color scheme.\n\n\\begin{figure*}[h]\n\\centering\n\\infinitewidthbox{\n  \\stackinset{r}{-0.5\\linewidth}{t}{+0.1\\linewidth}\n  {\\includegraphics[scale=0.4]{ch-visualize_spectral_distances-fig3.png}}\n  {\\includegraphics[scale=0.6]{ch-visualize_spectral_distances-fig2.png}}\n  \\hspace{8cm}\n  }\n%\\caption{Try changing the parameters!}\n\n\\end{figure*}\n\nExplore different distance metrics, inspect distances in a \\widget{Data Table} widget. Don't forget, you can select points on the top map and see the corresponding spectra on the bottom in \\widget{Hyperspectra}. \n\n% I would like to have a special environment called notes for example that we can switch on or off to print or hide lecturer notes.\n\\lecnotes{Possibility for discussion of the general mathematical properties of distance functions. See \\url{https://en.wikipedia.org/wiki/Metric_(mathematics)}}", "meta": {"hexsha": "23ab5e83386ea82949e711b71a6a99493ade911c", "size": 1970, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/spec-022-visualize-distances/visualize-distances.tex", "max_stars_repo_name": "markotoplak/orange-lecture-notes", "max_stars_repo_head_hexsha": "b83343f4390c25c7fe4094323b1d3adf2b571b59", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/spec-022-visualize-distances/visualize-distances.tex", "max_issues_repo_name": "markotoplak/orange-lecture-notes", "max_issues_repo_head_hexsha": "b83343f4390c25c7fe4094323b1d3adf2b571b59", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/spec-022-visualize-distances/visualize-distances.tex", "max_forks_repo_name": "markotoplak/orange-lecture-notes", "max_forks_repo_head_hexsha": "b83343f4390c25c7fe4094323b1d3adf2b571b59", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-24T23:35:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-24T23:35:37.000Z", "avg_line_length": 63.5483870968, "max_line_length": 436, "alphanum_fraction": 0.7868020305, "num_tokens": 489, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5956733500515689}}
{"text": "\\documentclass{article}\n%\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{color}\n\\usepackage{tabu}\n\\usepackage{longtable}\n\\usepackage{mathrsfs}\n\\usepackage{enumerate}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{Final 02}\n\\rhead{Jon Allen}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\\newcommand{\\degree}{\\ensuremath{^\\circ}}\n\n\\begin{document}\n\\subsubsection*{PDE A.}\n\\begin{align*}\n  \\text{PDE.}&&\\frac{\\partial u}{\\partial t}&=\\frac{\\partial^2u}{\\partial x^2}&&\\text{for}&0&<x<1,&0&<t<\\infty\\\\\n  \\text{BC.}&&u_x(0,t)&=0=u_x(1,t)&&\\text{for}&&&0&<t<\\infty\\\\\n  \\text{IC.}&&u(x,0)&=f(x)&&\\text{for}&0&<x<1\n\\end{align*}\n\nFor PDE A, determine the full solution for the general initial condition $f(x)$. State how orthogonality is used and how coefficients in the series expansion are determined using $f(x)$\n\\begin{align*}\n  u_0(x,t)&=c_0\\\\\n  u_n(x,t)&=c_ne^{-n^2\\pi^2t}\\cos(n\\pi x)&n=1,2,3,\\dots\\\\\n  u(x,t)&=c_0+\\sum\\limits_{n=1}^\\infty{c_ne^{-n\\pi^2t}\\cos(n\\pi x)}\\\\\n  u(x,t)&=\\sum\\limits_{n=0}^\\infty{c_ne^{-n\\pi^2t}\\cos(n\\pi x)}\\\\\n  f(x)&=u(x,0)=\\sum\\limits_{n=0}^\\infty{c_n\\cos(n\\pi x)}&\\text{let }m\\in\\mathbb{Z}\\\\\n  \\int_{0}^1{f(x)\\cos(m\\pi x)\\,\\mathrm{d}x}&=\\int_{0}^1{\\cos(m\\pi x)\\sum\\limits_{n=0}^\\infty{c_n\\cos(n\\pi x)}\\,\\mathrm{d}x}\\\\\n  \\text{let }n&=m=0\\\\\n  \\int_{0}^1{c_0\\cos(0)^2\\,\\mathrm{d}x}&=c_0\\\\\n  \\text{let }n&=m\\ne 0\\\\\n  \\int_{0}^1{c_m\\cos(m\\pi x)^2\\,\\mathrm{d}x}&=\\frac{1}{2}c_m\\int_0^1{2\\cos(m\\pi x)^2\\,\\mathrm{d}x}\\\\\n  &=\\frac{1}{2}c_m\\int_0^1{1+\\cos(2m\\pi x)\\,\\mathrm{d}x}\\\\\n  &=\\frac{1}{2}c_m\\left\\lvert x+\\frac{1}{2m\\pi}\\sin(2m\\pi x)\\right\\rvert_0^1\\\\\n  &=\\frac{1}{2}c_m+\\frac{1}{2}c_m\\frac{\\sin(2m\\pi)}{2m\\pi}=\\frac{1}{2}c_m&m\\in\\mathbb{Z}\\to\\sin(2m\\pi)=0\n  \\intertext{and to establish orthogonality let $n\\ne m$}\n  \\int_{0}^1{c_m\\cos(n\\pi x)\\cos(m\\pi x)\\,\\mathrm{d}x}&=\\frac{1}{2}c_m\\int_0^1{2\\cos(n\\pi x)\\cos(m\\pi x)\\,\\mathrm{d}x}\\\\\n  &=\\frac{1}{2}c_m\\int_0^1{\\cos(n\\pi x-m\\pi x)+\\cos(n\\pi x+m\\pi x)\\,\\mathrm{d}x}\\\\\n  &=\\frac{1}{2}c_m\\left[\\frac{1}{(n-m)\\pi x}\\sin((n-m)\\pi x)\\right.\\\\\n  &\\qquad\\quad\\left.+\\frac{1}{(n+m)\\pi x}\\sin((n+m)\\pi x)\\right]_0^1\\\\\n  &=\\frac{1}{2}c_m\\left[\\frac{1}{(n-m)\\pi}\\sin((n-m)\\pi)\\right.\\\\\n  &\\qquad\\quad\\left.+\\frac{1}{(n+m)\\pi}\\sin((n+m)\\pi)\\right]&n-m,n+m\\in\\mathbb{Z}\\\\\n  &=\\frac{1}{2}c_m\\cdot 0=0&\\sin((m\\pm n)\\pi)=0\\\\\n\\end{align*}\nAnd now we have everything we need to determine the full solution. 
since we can find the coefficient of the $m$th term by multiplying by $\\cos(m\\pi x)$ and integrating, letting orthogonality kill off all the extra terms:\n\\begin{align*}\n  u(x,t)&=\\sum\\limits_{n=0}^\\infty{c_ne^{-n^2\\pi^2t}\\cos(n\\pi x)}\\\\\n  c_0&=\\int_0^1{f(x)\\,\\mathrm{d}x}\\\\\n  c_m&=2\\int_{0}^1{f(x)\\cos(m\\pi x)\\,\\mathrm{d}x}&\\text{where }m&=1,2,3,\\dots\n\\end{align*}\n\\end{document}\n", "meta": {"hexsha": "97230a3706a5de1fb7dceceec7d30cf9924eb31c", "size": 2878, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-final-02.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-final-02.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-final-02.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.1803278689, "max_line_length": 222, "alphanum_fraction": 0.6292564281, "num_tokens": 1365, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.8080672181749422, "lm_q1q2_score": 0.5956733458303118}}
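A numerical sanity check of these coefficient formulas (my own sketch, with the arbitrary choice f(x) = x; not part of the exam solution): the truncated series should reproduce f at t = 0.

\begin{verbatim}
import numpy as np

n_pts = 2000
x = (np.arange(n_pts) + 0.5) / n_pts       # midpoints for the integrals
dx = 1.0 / n_pts
f = x                                      # test initial condition f(x) = x

c0 = np.sum(f) * dx                        # c_0 = int_0^1 f(x) dx
u0 = np.full_like(x, c0)
for n in range(1, 200):
    cn = 2 * np.sum(f * np.cos(n * np.pi * x)) * dx
    u0 += cn * np.cos(n * np.pi * x)

print(np.max(np.abs(u0 - f)))              # small truncation error
\end{verbatim}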
{"text": "\n\n\nAs a setting to illustrate computer techniques for describing variability, take\nthe data that Galton collected on the heights of adult children\nand their parents. \\datasetGalton\nThe file \\texttt{galton.csv} stores these data in a modern,\ncase/variable format.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> require(mosaic)\n> galton = fetchData(\"galton.csv\")\n\\end{Sinput}\n\\end{Schunk}\n\n\\subsection{Simple Statistical Calculations}\n\n\\index{C}{descriptive statistics!computing|(}\n\nSimple numerical descriptions are easy to compute.  Here are the mean,\nmedian, standard deviation and variance of the children's heights (in\ninches).\n\\index{P}{Descriptive Statistics!mean@\\texttt{mean}}\n\\index{P}{Descriptive Statistics!sd@\\texttt{sd}}\n\\index{P}{Descriptive Statistics!median@\\texttt{median}}\n\\index{P}{mean@\\texttt{mean}}\n\\index{P}{median@\\texttt{median}}\n\\index{P}{sd@\\texttt{sd}}\n\\index{P}{var@\\texttt{var}}\n\\index{C}{variance!sqrt of standard deviation}\n\\begin{Schunk}\n\\begin{Sinput}\n> mean(height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 66.8\n\\end{Soutput}\n\\begin{Sinput}\n> median(height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 66.5\n\\end{Soutput}\n\\begin{Sinput}\n> sd(height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 3.58\n\\end{Soutput}\n\\begin{Sinput}\n> var(height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 12.8\n\\end{Soutput}\n\\end{Schunk}\n\nNotice that the variance function (\\function{var}) returns the square\nof the standard deviation (\\function{sd}).\nHaving both is merely a convenience.\n\n\\index{P}{Descriptive Statistics!pdata@\\texttt{pdata}}\n\\index{P}{Descriptive Statistics!qdata@\\texttt{qdata}}\n\\index{C}{percentile!quantile operator}\n\\index{P}{quantile@\\texttt{quantile}}\n\\index{P}{qdata@\\texttt{qdata}*}\n\\index{P}{pdata@\\texttt{pdata}*}\nA percentile tells where a given value falls in a distribution.  For example, a height of 63 inches is on the short side in Galton's data:\n\\begin{Schunk}\n\\begin{Sinput}\n> pdata(63, height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 0.192\n\\end{Soutput}\n\\end{Schunk}\nOnly about 19\\% of the cases have a height less than or equal to 63 inches.  The \\code{pdata} operator takes one or more values as a first argument and finds where they fall in the distribution of values in the second argument.\n\nA quantile refers to the same sort of calculation, but inverted.  Instead of giving a value in the same units as the distribution, you give a probability: a number between 0 and 1.  
The \\code{qdata} operator then calculates the value that falls at the given percentile:\n\\begin{Schunk}\n\\begin{Sinput}\n> qdata(0.20, height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n 20% \n63.5 \n\\end{Soutput}\n\\end{Schunk}\nRemember that the probability is given as a number between 0 and 1, so use 0.50 to indicate that you want the value which falls at the 50th percentile.\n\n\\begin{itemize}\n\\index{C}{quantiles!and coverage intervals}\n\\index{C}{coverage interval!quantiles}\n\\item The 25th and 75th percentile in a single command --- in other\n  words, the 50 percent coverage interval:\n\\begin{Schunk}\n\\begin{Sinput}\n> qdata(c(0.25, 0.75), height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n 25%  75% \n64.0 69.7 \n\\end{Soutput}\n\\end{Schunk}\n\n\\item The 2.5th and 97.5th percentile --- in other words, the 95\n  percent coverage interval:\n\\begin{Schunk}\n\\begin{Sinput}\n> qdata(c(0.025, 0.975), height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n 2.5% 97.5% \n   60    73 \n\\end{Soutput}\n\\end{Schunk}\n\\end{itemize}\n\nThe interquartile range is the width of the 50 percent coverage\ninterval:\n\\begin{Schunk}\n\\begin{Sinput}\n> IQR(height, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 5.7\n\\end{Soutput}\n\\end{Schunk}\n\nSome other useful operators are \\function{min}, \\function{max}, and\n\\function{range}.  \n\\index{P}{min@\\texttt{min}}\n\\index{P}{max@\\texttt{max}}\n\\index{P}{range@\\texttt{range}}\n\n\\index{C}{descriptive statistics!computing|)}\n\n\n\\subsection{Simple Statistical Graphics}\n\n\\index{C}{graphics!statistical|(}\n\\index{C}{statistical graphics|(}\nThere are several basic types of statistical graphics to display\nthe distribution of a variable: histograms, density plots, and\nboxplots.  These are easily mastered by example. \n\n\\subsubsection{Histograms and Distributions}\n\n\\index{C}{histogram!drawing}\n\\index{P}{histogram@\\texttt{histogram}}\n\\index{P}{Plotting \\& Graphics!histogram@\\texttt{histogram}}\n\nConstructing a histogram involves dividing the range of a variable up\ninto bins and counting how many cases fall into each bin.  This is\ndone in an almost entirely automatic way:\n\\begin{Schunk}\n\\begin{Sinput}\n> histogram( galton$height )\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var1-hist}\n\nWhen constructing a histogram, R makes an automatic but sensible\nchoice of the number of bins.  If you like, you can control this\nyourself.  For instance:\n\\begin{Schunk}\n\\begin{Sinput}\n> histogram( galton$height, breaks=25 )\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var2-hist}\n\nThe horizontal axis of the histogram is always in the units of the variable.\nFor the histograms above, the horizontal axis is in ``inches'' because that\nis the unit of the \\code{galton$height} variable.       % $\n\nThe vertical axis is conventionally drawn in one of three ways,\ncontrolled by an optional argument named \\code{type}.\n\\begin{description}\n\\item[Absolute Frequency or Counts]  A simple count of the number of cases that\n  fall into each bin.  This mode is set with\n  \\code{type=\"count\"} as in \n\\begin{Schunk}\n\\begin{Sinput}\n> histogram( galton$height, type=\"count\")\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var3-hist}\n\n\\item[Relative Frequency] The vertical axis is scaled so that the \nheight of the bar gives the proportion of cases that fall into the\nbin.  
This is the default.\n\\item[Density] The vertical axis \n{\\em area} of the bar gives the relative proportion of cases that fall\ninto the bin.  Set \\code{type=\"density\"} as in \\code{histogram(galton$height,type=\"density\")} .   % $ -kill the math mode\n\\end{description}\nIn a density plot, areas can be interpreted as probabilities and the\narea under the entire histogram is equal to 1.\n\nOther useful optional arguments set the labels for the axes and the\ngraph as a whole and color the bars.  For example,\n\\begin{Schunk}\n\\begin{Sinput}\n> histogram(galton$height, type=\"density\",\n     xlab=\"Height (inches)\", \n     main=\"Distribution of Heights\",\n     col=\"gray\")\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var4-hist}\n\nThe above command is so long that it has been broken into several\nlines for display purposes.  R ignores the line breaks, holding off on\nexecuting the command until it sees the final closing parentheses.\nNotice the use of quotation marks to delimit the labels and names like\n\\code{\"blue\"}.\n\n\\subsubsection{Density Plots}\n\n\\index{P}{densityplot@\\texttt{densityplot}}\n\\index{P}{Plotting \\& Graphics!densityplot@\\texttt{densityplot}}\n\n\\index{C}{density plot!drawing}\nA \\newword{density plot} avoids the need to create bins and plots out\nthe distribution as a continuous curve.  Making a density plot\ninvolves two operators.  The \\newword{density} operator performs the\nbasic computation which is then displayed using either the \\code{plot}\nor the \\code{lines} operator.  For example:\n\\begin{Schunk}\n\\begin{Sinput}\n> densityplot(galton$height) \n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var5-density}\n\nIf you want to suppress the rug-like plotting of points at the bottom\nof the graph, use \\code{densityplot(galton$height,plot.points=FALSE)}. % $\n\n\\subsubsection{Box-and-Whisker Plots}\n\nBox-and-whisker plots are made with the \\code{bwplot} command:\n\\begin{Schunk}\n\\begin{Sinput}\n> bwplot(galton$height)\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var6-bw}\n\nThe median is represented by the heavy dot in the middle.  Outliers,\nif any, are marked by dots outside the whiskers.\n\\index{C}{outlier!in box plots}\n\\index{C}{box plot!outliers}\n\n\\index{C}{box plot!drawing}\n\\index{P}{bwplot@\\texttt{bwplot}}\n\\index{P}{Plotting \\& Graphics!bwplot@\\texttt{bwplot}}\n\\index{C}{box and whisker plots|see{box plots}}\nThe real power of the box-and-whisker plot is for comparing\ndistributions.  
This will be raised again more systematically in later\nchapters, but just to illustrate, here is how to compare the heights\nof males and females:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> bwplot(height ~ sex, data=galton)\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{Figures/variation-var7-bw}\n\n\\subsection{Displays of Categorical Variables}\n\n\\index{C}{counts!computing}\n\\index{C}{proportions!computing}\n\\index{P}{counts@\\texttt{counts}}\n\\index{P}{props@\\texttt{props}}\n\\index{P}{Descriptive Statistics!props@\\texttt{props}}\n\\index{C}{table!of counts}\nFor categorical variables, it makes no sense to compute descriptive\nstatistics such as the mean, standard deviation, or variance.\nInstead, look at the number of cases at each level of the variable.\n\\begin{Schunk}\n\\begin{Sinput}\n> counts(sex, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n  F   M \n433 465 \n\\end{Soutput}\n\\end{Schunk}\nProportions can be found in a similar way:\n\\begin{Schunk}\n\\begin{Sinput}\n> props(sex, data=galton)\n\\end{Sinput}\n\\begin{Soutput}\n    F     M \n0.482 0.518 \n\\end{Soutput}\n\\end{Schunk}\n\n\n\\index{C}{graphics!statistical|)}\n\\index{C}{statistical graphics|)}\n\n\n", "meta": {"hexsha": "682585a245152b2c8611fbdb709d21713504edae", "size": 9294, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ComputationalTechnique-Orig/DescribingVariation/computer-describing-variation.tex", "max_stars_repo_name": "dtkaplan/SM3", "max_stars_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-01T01:28:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T01:28:07.000Z", "max_issues_repo_path": "ComputationalTechnique-Orig/DescribingVariation/computer-describing-variation.tex", "max_issues_repo_name": "BriannaBarry/SM3", "max_issues_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ComputationalTechnique-Orig/DescribingVariation/computer-describing-variation.tex", "max_forks_repo_name": "BriannaBarry/SM3", "max_forks_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-02-14T05:22:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T12:42:15.000Z", "avg_line_length": 30.2736156352, "max_line_length": 271, "alphanum_fraction": 0.7538196686, "num_tokens": 2755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672181749423, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5956733411642255}}
{"text": "%!TEX root = ../notes.tex\n\\section{March 11, 2022}\n\\subsection{Quadratic Sieve \\emph{continued}}\nWe resume our discussion of the example of \\cref{example:quadratic-sieve}. We want to find an optimal $B$ for the algorithm, and we do this by analyzing runtimes.\n\nThe runtime of $X$ is first approximately $B\\cdot u^u$ where \\[u = \\frac{\\log \\sqrt{N}}{\\log B}.\\]\n\nAnd in addition, solving a $B\\cdot B$ system of linear equations. It takes $B^2$ operations to zero out one column ($B$ rows and subtracting a column takes $B$ operations), then we do this for $B$ columns. So the runtime is $B^3$.\n\nWe now have $u^u$ is decreasing in $B$ and $B^3$ is increasing in $B$. Minimizing the runtime, we set $B^3 \\sim B\\cdot u^u$, i.e. $B^2\\sim u^u$. Using very sketchy mathematics,\n\\begin{align*}\n    u^u                        & \\sim B                             \\\\\n    u\\log u                    & \\sim B                             \\\\\n    \\frac{log N}{\\log B}\\sim u & \\sim B                             \\\\\n    \\log B                     & \\sim \\sqrt{\\log N}                 \\\\\n    B                          & \\sim e^{\\sqrt{\\log N}}             \\\\\n    u                          & \\sim \\frac{\\log{\\sqrt{N}}}{\\log B}\n    \\sim \\frac{\\log N}{\\log B} \\sim \\sqrt{\\log N}\n\\end{align*}\nis a \\emph{loose} guess. But we can use this to make a more approximate guess.\n\nBeing more rigorous, $u^u= B^2$ gives $u\\log u = 2\\log B$. Then using our estimate for $u$ from above, we have\n\\begin{align*}\n    u\\cdot \\log\\left[\\sqrt{\\log N}\\right] & = 2\\log B                                    \\\\\n    \\frac{1}{2}\\frac{\\frac{1}{2}\\log N}{\\log B}\\log\\log N\n    = \\frac{1}{2}u\\log\\log N              & = 2\\log B                                    \\\\\n    (\\log B)^2                            & = \\frac{1}{8}\\log N\\log\\log N                \\\\\n    \\Rightarrow B                         & \\sim e^{\\sqrt{\\frac{1}{8}\\log N \\log\\log N}}\n\\end{align*}\nwhere we note the difference of a factor of $\\frac{1}{8}\\log\\log N$ isn't that far off from $e^{\\sqrt{\\log N}}$. So total runtime is around $B^3$ which is $e^{\\sqrt{\\frac{9}{8}\\log N\\log\\log N}}$. \\emph{It's not super fast but not totally stupid.}\n\nRealistically, the $B^3$ can be reduced to $B^2$ in solving our $B\\times B$ system, but we can get rid of our factors of $2$'s from before. This lets us get rid of the factor of $\\frac{9}{8}$ so our total runtime is like\n\\[e^{\\sqrt{\\log N\\log\\log N}}\\]\nThere are even faster algorithms, namely the number field sieve. It replaces the square root from above into a cube root and has runtime\n\\[e^{\\sqrt[3]{c\\cdot \\log N(\\log\\log N)^2}}\\]\n\n\\subsection{Index Calculus \\& Discrete Logs}\nThe discrete log problem is solving $x$ given $g^x = a$ and we know $g$ and $a$. We can do a similar strategy to the quadratic sieve.\nWe calculate\n\\begin{align*}\n    g^1 \\pmod p & = 2                   \\\\\n    g^2 \\pmod p & = \\text{(not smooth)} \\\\\n    g^3 \\pmod p & = 2\\cdot 3            \\\\\n\\end{align*}\nwhere we pick some smoothness bound $B$. We then find $g^i\\equiv \\text{(B-smooth)}\\mod p$. We find enough $B$ smooth things such that by linear algebra, we can solve $g^x = 2, 3, 5, 7, \\dots$.\n\nWe take\n\\begin{align*}\n    a\\mod p             & = \\text{(not smooth)} \\\\\n    a\\cdot g^{-1}\\mod p & = \\text{(not smooth)} \\\\\n    a\\cdot g^{-2}\\mod p & = 3\n\\end{align*}\nwhich is smooth. 
So we find $a\\cdot g^{-i}\\equiv \\text{(B-smooth)}\\mod p$. Then by linear algebra we can solve for $a$ given lots of smooth $g^i$.\n\n\\begin{ques*}What is the runtime of this? \\end{ques*}\n\nWe need to go up to $g^x$ where $x$ is about $B\\cdot u^u$ ($u^u$ is the probability of being $B$-smooth). Linear algebra will take runtime $B^2$. Similar to just now, this is exactly the same problem as above except $\\sqrt{N}$ is replaced with $P$. Before (in the quadratic sieve), we had\n\\[B\\sim \\exp\\left(\\frac{1}{4}\\log N\\log\\log N\\right)\\]\nand\n\\[\\text{Runtime}\\sim \\exp\\left(\\log N\\log\\log N\\right)\\]\nReplacing $\\log{N}$ with $2\\log P$, our bound becomes\n\\[B\\sim \\exp\\left(\\frac{1}{2}\\log p\\log\\log p\\right)\\]\n\\[\\text{Runtime}\\sim \\exp\\left(2\\log p\\log\\log p\\right)\\]\nWhat if $g\\in$ subgroup of order $q$? Babystep-Giantstep changes from $\\sqrt{p}$ to $\\sqrt{q}$. With index calculus, there is no advantage of knowing that $g$ is im a smaller subgroup. We don't have a decrease in runtime. ", "meta": {"hexsha": "70aa29bbe7233bdb6c7552616c5c5ac60163662f", "size": 4307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-03-11.tex", "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_issues_repo_path": "lectures/2022-03-11.tex", "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-03-11.tex", "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.296875, "max_line_length": 288, "alphanum_fraction": 0.5874158347, "num_tokens": 1376, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5956733407193964}}
{"text": "\t%to force start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Trigonometry}\\label{trigonometry}\n\t\\lettrine[lines=4]{\\color{BrickRed}T}rigonometry is part of the science of geometry. Geometry whose etymological root is \"measure of the earth\", trigonometry has etymological root \"measuring of body with three angles (trine)\". Thus the trigonometry is a branch of mathematics that studies relations involving lengths and angles of triangles.\n\t\n\tThere are currently four known trigonometries (defined) commonly used in mathematics: the \"\\NewTerm{circle trigonometry}\\index{circle trigonometry}\" (assimilated with the study of \"circular functions\"), the  \"\\NewTerm{hyperbolic trigonometry}\\index{hyperbolic trigonometry}\" and  \"\\NewTerm{spherical trigonometry}\\index{spherical trigonometry}\" and a more exotic \"\\NewTerm{elliptical trigonometry}\\index{elliptical trigonometry}\" (related to the elliptic integrals and their related elliptic function). We propose in this text an attempt to a fairly rigorous approach to all the most famous relations of the three first type of trigonometries (the last one - elliptical trigonometry - reserved to another section of this book).\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} We will not deal here on the quadratic and rhombic trigonometries that are used by the electricians and have little or no interest in theoretical physics. The same remark is valid for the lemniscatique trigonometry which is related to pure mathematics and in particular to the Riemann zeta function.\\\\\n\t\n\t\\textbf{R2.} The reader who is looking for the proofs of derivatives and integrals of trigonometric functions defined below should read the Differential and Integral Calculus section (\\SeeChapter{see chapter Algebra page \\pageref{differential and integral calculus}}) where the derivatives and integrals of common trigonometric functions are all proven.\n\t\\end{tcolorbox}\t\n\n\tThe purpose of this section will be to determine the most common relations in trigonometry and that are used extensively in all sections of this book (Mechanics, Astronomy, Statistics, Atomistic, etc.). Note that the majority of the relations (but not all!) that we will define and prove were determined in the 16th century by algebraists like Vi\u00e8te.\n\n\t\\subsection{Radian}\\label{radian}\n\tWhen we speak about trigonometry, the first thing that should come to mind and emerge as standard for plan angles measurements (see the section of Euclidean Geometry chapter for the definition of the concept of angle) is the notion of \"radians \".\n\n\t\\textbf{Definition (\\#\\mydef):} $1$ \"\\NewTerm{radian}\\index{radian}\" (denoted sometimes [rad]) is the angle described by a plane secant to a circle passing through its center as the arc thus defined by the horizontal axis passing through the center of the circle and the secant is equal in length to the radius of this circle.\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe radian is widely used in physics, as we will see it in the corresponding chapters of this book, when angular measurements are required. For example, angular velocity is typically measured in radians per second ([rad/s]). 
One revolution per second is equal to $2\\pi$ radians per second.\n\t\\end{tcolorbox}\n\t\n\tFor example, for a circle of radius $R=1$ therefore of circumference (or \"perimeter\") $P=2\\pi R=2\\pi$ the length of the arc defined by a secant having an angle of 1 radian with respect to the horizontal passing through the center of the circle will be equal 1.\n\n\tHence it follows that the angle for a \"round\" of the circle will be:\n\t\n\tThis can be resume in the following drawing:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.6]{img/geometry/radians.eps}\n\t\t\\caption[A chart to convert between degrees and radians]{A chart to convert between degrees and radians (source: Wikipedia)}\n\t\\end{figure}\n\tThe previous example can be generalized to any circle of radius $R$ as the angle for a complete tour will always be for $2\\pi [\\text{rad}]$ a whole turn-around, $\\pi [\\text{rad}]$  for a half turn-around and $\\dfrac{\\pi}{2} [\\text{rad}]$ for fourth turn-around...\n\t\n\tUnfortunately in high-schools most teachers still teach children to measure angles in degrees. Fortunately conversion to do is not too difficult ... (it's a simple rule of three):\n\t\t\n\tLet $r$ be the measurement of an angle in radians, $d$ the measurement of the same angle in degrees, $g$ and the measurement of the same angle in gradients (old unit) we have by definition:\n\t\n\tAstronomers and astrophysicists like to talk in minutes or seconds of arc such as:\n\t\n\tFor example if the Moon has a diameter angle of approximately $29'$ ($0.5^\\circ$) in the sky and if our eyes would have a deep exposure capability we would be able to see that Andromeda (M31) has a diameter angle of $178'$ ($3^\\circ$), therefore about $6$ times as wide as the Moon!\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=1]{img/geometry/angular_diameter.jpg}\n\t\t\\caption[]{Earth's sky as seen if human eyes had deep exposure capabilities}\n\t\\end{figure}\n\tHere is a  nice visual summarizing this concept of sub-angles units:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.8]{img/geometry/degrees_minutes_seconds.jpg}\n\t\t\\caption[Degrees, minutes of arc, seconds of arc]{Degrees, minutes of arc, seconds of arc (author: ?)}\n\t\\end{figure}\n\n\t\\pagebreak\n\t\\subsection{Circle Trigonometry}\\label{circle trigonometry}\n\tConsider the figure below showing any circle centered at the origin in a direct basis:\t\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.8]{img/geometry/trigonometric_circles.eps}\n\t\t\\caption{Construction idea of elementary (circle) trigonometric functions}\n\t\\end{figure}\n\n\tFirst, by applying the Pythagorean theorem (\\SeeChapter{see section Euclidean Geometry page \\pageref{pythagorean theorem}}) in the visible right triangle above, we get:\n\t\nwhere $R$ is the radius of the circle.\n\n\tTo define the trigonometric functions for the angle $\\alpha$, start with any right triangle that contains the angle $\\alpha$. The three sides of the triangle are named as follows:\n\t\\begin{enumerate}\n\t\t\\item The \"\\NewTerm{hypotenuse}\\index{hypotenuse}\" is the side opposite the right angle, in this case side it is the segment $\\overline{\\text{O}M}$. 
The hypotenuse is always the longest side of a right-angled triangle.\n\t\t\n\t\t\\item The \"\\NewTerm{opposite side}\\index{opposite side}\" is the side opposite to the angle we are interested in.\n\t\t\n\t\t\\item The \"\\NewTerm{adjacent side}\\index{adjacent side}\" is the side adjacent to the angle we are interested in.\n\t\\end{enumerate}\n\nFrom this representation we can define mathematical entities named \"\\NewTerm{trigonometric functions of the circle}\\index{trigonometric functions of the circle}\", also sometimes named by the ancient (...) \"\\NewTerm{cyclometrics functions}\\index{cyclometrics functions}\\label{cyclometrics functions}\" such as (for the most known one)\\label{definition trigonometric functions}\n\n\n\nBe careful because depending on the authors and on the context (and that's our case!) $\\arccos, \\arcsin, \\arctan, $ are respectively denoted by $\\cos^{-1}, \\sin^{-1}, \\tan^{-1}$.\n\nRead \"\\NewTerm{cosine}\\index{cosine}\" for \"cos\", \"\\NewTerm{sine}\\index{sine}\" for \"sin\", \"\\NewTerm{tangent}\\index{tangent}\" for \"tan\", \"\\NewTerm{cotangent}\\index{cotangent}\" for \"cot\", \"\\NewTerm{secant}\\index{secant}\" for \"sec\", \"\\NewTerm{cosecant}\\index{cosecant}\" for \"csc\".\n\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} When the context allows it, and that there can not be any ambiguity, the parentheses after the name of the trigonometric function may be omitted (this is often the case in physics).\\\\\n\t\n\t\\textbf{R2.} The arc... functions are obviously the reciprocal functions of the trigonometric functions!\n\t\\end{tcolorbox}\n\nIn practice (school or high level engineering) you have to take care to two important properties: \n\t\\begin{enumerate}\n\t\t\\item Trigonometric functions are far from being bijective. Indeed, they tend to repeat themselves periodically as we will see further below. Nonetheless with a little restriction on their definition domain, they may acquire this quality. In this context, the perspective of a reciprocal becomes possible.\n\t\t\\item If the sign of the bracket of the arc tangent is negative, then we do not know exactly in which quadrant (I, II, III or IV) we are so we do not know the sign of the numerator or denominator! It is for this reason that most calculators and softwares have two arc tangent functions: \\texttt{arctan} and \\texttt{arctan2}. The first gives an angle between $\\left[-\\dfrac{\\pi}{2},+\\dfrac{\\pi}{2}\\right]$ and therefore we can not know precisely in which quadrant we are. The second gives us an angle between $\\left[-\\pi,\\pi\\right]$ so we can know in which quadrant we are but we must provide the sign of the numerator or denominator.\n\t\\end{enumerate}\n\n\tFrom these functions, we can make some combinations and define simple but remarkable relations but whom usefulness is debatable (and which are very rarely used) such as:\n\t\n\tbut you will never meet these functions in this book because we personally never use such notations (it is rather customary in some U.S. books).\n\t\n\tHere is a nice figure... that resume all that stuff:\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.75]{img/geometry/trigonometry_resume.eps}\n\t\\caption{Design principle of common circle trigonometric functions}\n\t\\end{figure}\n\t\n\t\\textbf{Definition (\\#\\mydef):} We name \"\\NewTerm{phase-shift}\\index{phase-shift}\" and angle $\\delta$ that shift any trigonometric function argument of a given value. 
For example as illustrated below:\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\begin{tikzpicture}\n\t  \t\\begin{axis}[\n\t    trig format plots=rad,\n\t    axis lines = middle,\n\t    enlargelimits,\n\t    clip=false\n\t    ]\n\t    \\addplot[domain=-2*pi:2*pi,samples=200,blue] {sin(x)};\n\t    \\addplot[domain=-2*pi:2*pi,samples=200,red] {sin(x-2)};\n\t    \\draw[dotted,blue!40] (axis cs: 0.5*pi,1.1) -- (axis cs: 0.5*pi,0);\n\t    \\draw[dotted,red!40] (axis cs: 0.5*pi+2,1.1) -- (axis cs: 0.5*pi+2,0);\n\t    \\draw[dashed,olive,<->] (axis cs: 0.5*pi,1.05) -- node[above,text=black,font=\\footnotesize]{$\\delta$}(axis cs: 0.5*pi+2,1.05);\n\t  \t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\\end{figure}\n\t\n\tOK for people reading this book on a computer with Adobe Flash player installed and activated, here is an animated version (otherwise see here: \\url{https://vimeo.com/575748635}):\n\t\n\t\\begin{center}\n\t\\centering\n\t\t\\includemedia[activate=pageopen,width=\\textwidth,height=500pt,\n\t]{}{swf/Trigonometric_circle.swf}\n\t\\end{center}\n\n\tLet us see now some very simple angular properties of these common functions:\n\t\\begin{enumerate}\n\t\t\\item[P1.] if we remain in the study of the circle, hence named \"\\NewTerm{trigonometric circle}\\index{trigonometric circle}\\label{trigonometric circle}\", we must put for the above definitions above $R=1$. Thus, it appears more clearly the physical meaning of these definitions and it will result in a number of properties and applications directly usable in theoretical physics and pure mathematics.\n\t\t\n\t\tIndeed, if $R=1$ we trivially have (\\SeeChapter{see section Vector Calculus page \\pageref{polar coordinates}}):\n\t\t\n\t\tand by applying the Pythagorean theorem (\\SeeChapter{see section Euclidean Geometry page \\pageref{pythagorean theorem}}):\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\t\\item[P2.] If $\\alpha$ is a real number, and for $\\forall k \\in \\mathbb{N}$, the real numbers $\\alpha$ and $\\alpha + 2k\\pi$ are associated with the same point $M$ by the periodicity of the unit circle. Indeed, $\\alpha$ and $\\alpha + 2k\\pi$ are two measures of the same oriented angle. So:\n\t\t\n\t\tDitto for all trigonometric functions arising from the definition of sine and cosine functions.\n\t\\end{enumerate}\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn the measure of \"oriented angles\", we say that two measures are congruent modulo $2k\\pi [rad]$ if and only if their difference is a multiple of $2k\\pi [rad]$. This characterize two measurements of the same angle.\n\t\\end{tcolorbox}\n\t\n\tBy definition, the sine and cosine of any real number part of the \n interval $[-1,+1]$. Specifically, the position of the point $M$ allows us to learn more about the cosine and sine equation. 
So:\n \n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/trigonometry_angles.eps}\n\\caption{Remarkable angles of common circle trigonometric functions}\n\\end{figure}\n\nThere is also another representation of the trigonometric functions of the circle, a bit more technical but almost important to understand well later, at least visually, for the study of wave mechanics (see section of the same name page \\pageref{wave mechanics}):\n\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=1.5]\n\t\t\\begin{axis}[domain=-3*pi:3*pi,samples=200,grid=major,\n\t\t    restrict y to domain=-3:3, \n\t\t    legend style={font=\\fontsize{4}{5}\\selectfont},\n\t\t    legend pos=north west,\n\t\t    ytick={-2,-1,0,1,2},\n\t\t    xtick={-6.28,-4.71,-3.14,-1.57,0,1.57,3.14,4.71,6.28},\n\t     \txticklabels={-$2\\pi\\,\\,\\,$,-$\\frac{3}{2}\\pi\\,\\,\\,$,-$\\pi\\,$,-$\\frac{\\pi}{2}$,$0$, +$\\frac{\\pi}{2}$,+$\\pi$,+$\\frac{3}{2}\\pi$,+$2\\pi$},\n\t     \taxis lines=middle,\n\t     \tx tick label style={font=\\tiny},y tick label style={font=\\tiny}\n\t     \t]\n\t\t  \\addplot[domain=-2*pi:2*pi,samples=200,red]{sin(deg(x))};\n\t      \\addplot[domain=-2*pi:2*pi,samples=200,blue]{cos(deg(x))};\n\t      \\addplot[domain=-2*pi:2*pi,samples=200,green]{tan(deg(x))};\n\t      \\addplot[domain=-2*pi:2*pi,samples=200,purple]{cot(deg(x))};\n\t\t\\legend{$\\sin(x)$, $\\cos(x)$, $\\tan(x)$, $\\cot(x)$}\n\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\\end{figure}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=1.5]\n\t\t\\begin{axis}[domain=-1:1,\n\t\tsamples=500,\n\t\taxis lines=middle,\n\t\tlegend style={font=\\fontsize{4}{5}\\selectfont},\n\t\txtick={-2,1.5,-1,-0.5,0,0.5,1,1.5,2},\n\t\tytick={-6.28,-4.71,-3.14,-1.57,0,1.57,3.14,4.71,6.28},\n\t\tyticklabels={-$2\\pi\\,\\,\\,$,-$\\frac{3}{2}\\pi\\,\\,\\,$,-$\\pi\\,$,-$\\frac{\\pi}{2}$,$0$, +$\\frac{\\pi}{2}$,+$\\pi$,+$\\frac{3}{2}\\pi$,+$2\\pi$},\n\t\tx tick label style={font=\\tiny},y tick label style={font=\\tiny},\n\t\tlegend pos=north east,\n\t\tgrid=major\n\t\t]\n    \t\\addplot[color = red]  {rad(asin(x))};\n    \t\\addplot[color = blue]  {rad(acos(x))};\n    \t\\addplot[domain=-2:2,color = green]  {rad(atan(x))};\n\t\t\\legend{$\\arcsin(x)$,$\\arccos(x)$,$\\arctan(x)$}\n\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{Plots of some common circle trigonometric functions}\n\t\\end{figure}\n\nThe reader should notice at this point without too much difficulties the following properties (often used in physics!) with $n \\in \\mathbb{Z}$\\label{periodicity trigonometric functions}:\n\t\nand easily recognize that the sine is an odd function and the cosine an even function (observation that we often will be useful in various mathematical developments on trigonometric series).\n\nAs we saw at the beginning of this section, following the definition of the trigonometric functions we obviously have:\n\t\n\tand also:\n\t\n\tIn exactly the same way we prove that:\n\t\n\tFrom previous equalities we found without too much difficulty (elementary algebra):\n\t\n\tWe have identically:\n\t\n\tBy reverse reasoning we get just as easily:\n\t\n\tIt could comes without difficulty by observing the unit circle with its remarkable angles that\\label{remarkable angles}:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe minus-plus sign \"$\\mp$\" is generally used in conjunction with the \"$\\pm$\" sign in a same relation. 
So when the first is at the \"$+$\" level, then the other one is at \"$-$\" level and conversely.\n\t\\end{tcolorbox}\n\tHere are the figures that summarize how to analyse some of these properties (for other relations, the method is the same):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.65]{img/geometry/trigonometry_remarkable_angles.jpg}\n\t\t\\caption{Visual equivalences of some trigonometric relations}\n\t\\end{figure}\n\tHere are some remarkable angles associated with the values of cosine and sine functions that many students must memorize during their studies:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.65]{img/geometry/trigonometry_angles_identity.jpg}\n\t\t\\caption{Some identity angles and conventional trigonometric associated values for sine and cosine}\n\t\\end{figure}\n\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.99]{img/geometry/trigonometric_angles.jpg}\n\t\t\\caption{Some identity angles and conventional trigonometric associated values table}\n\t\\end{figure}\n\tLet us now introduce one last definition concerning circle trigonometric function relation that we will meet again in the sections of Wave Optics or during our study of Fourier transforms in the section of Sequences and Series which is the \"\\NewTerm{sine cardinal}\\index{sine cardinal}\\label{sinc cardinal}\":\n\t\n\trepresented by (with Maple 4.00b):\n\t\n\t\\texttt{>with(plots):\\\\\n\t>sinc:=sin(x)/x;\\\\\n\t>plot([sinc(x)], x=-5*Pi..5*Pi, Sinc);}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.8]{img/geometry/sinus_cardinal_2d.jpg}\n\t\t\\caption{2D plot of the sinc function with Maple 4.00b}\n\t\\end{figure}\n\tEven if engineers know very well the previous figure it is especially its pseudo-3D representation which is known by common peoples because often used for marketing reasons as reminiscent of a drop of water falling into water (figure made with Maple 4.00b) and it is always pretty to look at ...:\n\n\\texttt{>plot3d(sin(sqrt(x\\string^2+y\\string^2))/(sqrt(x\\string^2+y\\string^2)),x=-20..20,y=-20..20);}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.8]{img/geometry/sinus_cardinal_3d.jpg}\n\\caption{Pseudo 3D plot of the sinc function with Maple 4.00b}\n\\end{figure}\nIn either case, the value at $x = 0$ is defined to be the limiting value $\\mathrm{sinc(0)} = 1$ that is really easy to prove using Hospital's rule (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{Hospital rule}}).\n\nKeep in mind that \"sinc\" function is particularly important in signal processing and is the Fourier transform of a rectangular pulse and also in some Optics Phenomenon.\n\n\t\\subsubsection{Remarkable trigonometric triangle identities}\\label{remarkable trigonometric identities}\n\tThe drawing below will allow us to build relations that will help to solve equations involving trigonometric functions (all these relations are of primary importance in physics to simplify problem solving!).\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics{img/geometry/trigonometric_triangle.jpg}\n\\caption{Basic construction for determining remarkable circle trigonometric identities for triangles}\n\\end{figure}\n\n\tFirst we will note on the above figure the following relation:\n\t\n\tTherefore:\n\t\n\tTo resume:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe hesitated to put the main triangle identities in boxes (frames) but in fact all the results above and below are important 
and therefore everything will have been into boxes... So we apology to the reader if the absence of boxes (frames) disturbed.\n\t\\end{tcolorbox}\n\tThis implies trivially if $\\alpha=\\beta$:\n\t\n\tand:\n\t\n\tTherefore:\n\t\n\tWe also have:\n\t\n\tTherefore:\n\t\n\tThis implies trivially if $\\alpha=\\beta$:\n\t\n\tWith the already proven identity $\\cos^2(\\alpha)+\\sin^2(\\alpha)=1$ we also get the very important following relations:\n\t\n\tRelations with which we get very easily relations that seems to be named \"\\NewTerm{Carnot's Formula}\\index{Carnot's Formula}\":\n\t\n\tand: \n\t\n\tWe also have:\n\t\n\tThis, to get the relation:\n\t\n\twhich implies when $\\alpha=\\beta$:\n\t\n\tand obviously:\n\t\n\tTherefore to summarize:\n\t\n\tWe also get trivially (ask us if you have problems!) from previous relations (we do a little mixing and we shake...):\n\t\n\tWe also have using above results:\n\t\n\tusing the relation proved far above :\n\t\n\tTherefore:\n\t\n\tExactly similarly we get (ask us if you need the details):\n\t\n\tUsing always:\n\t\n\tTherefore:\n\t\n\tNow let us determine additional trigonometric formulas that seems to be name \"\\NewTerm{Simpson's Formula}\\index{Simpson's Formula}\" or \"\\NewTerm{Trigonometric addition formulas}\\index{trigonometric addition formulas}\" which express the sum of sine and / or cosine in product of sine and / or cosine.\n\t\n\tGiven the identities already previously proved:\n\t\n\tThat can also be summarized as:\n\t\n\tLet us put $\\alpha+\\beta=p$ and $\\alpha-\\beta=q$ and this also give us:\n\t\n\tWe get by summing (1) and (2):\n\t\n\tand by subtracting:\n\t\n\tIn the same way we but starting from:\n\t\n\tThat can also be summarized as:\n\t\n\tWe get first by summing:\n\t\n\tand by subtracting:\n\t\n\tand re-injecting the angles we easily (normally but if you have difficulties tell us!) 
we fall back on the following \"\\NewTerm{Simpson's inverse formulas}\\index{Simpson's inverse formulas}\":\n\t\n\t\n\tWe also have starting from:\t\n\t\n\tand we divide by $\\cos(\\alpha)\\cos(\\beta)$ to get:\n\t\n\tAnd exactly in the same way we get:\n\t\n\t\n\tAll these relations will help us in our study of general physics (especially in the section of Wave Mechanics and therefore in the sections of Electrodynamics and Acoustics) and particularly in the case of integral calculations or superposition of waves.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIt helps perhaps to resume that the following relations are the Simpson's relations:\n\t\n\t\\end{tcolorbox}\n\tLet us also derive the following identity that will be useful to us in the section of Differential and Integral Calculus:\n\t\n\tand respectively:\n\t\n\t\n\t\\paragraph{Laws of Cosines}\\label{law of cosines}\\mbox{}\\\\\\\\\\\n\tLet us prove now the cosine theorem that will be very useful to us in the sections of Geometry, Cosmology, Electrodynamics and Mechanics.\n\t\n\t\\begin{theorem}\n\tIn any triangle, the square of one of the three sides is equal to the sum of the squares of the two others reduced by the product of these two sides by the cosine of the angle between them:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/law_of_cosines.jpg}\n\t\\caption{Construction for the proof of the law of cosines}\n\t\\end{figure}\n\t\\end{theorem}\n\t\\begin{dem}\n\tOne possible proof has the following path by using the Pythagorean theorem:\n\t\n\tBut in the $ABH$ triangle, rectangle in $H$, we have the relation $n/c=\\cos(\\chi)$ seen above therefore:\n\t\n\tSo we get one of the relations of \"\\NewTerm{cosine theorem}\\index{cosine theorem}\":\n\t\n\tBy circular permutation, we get the other two known relations of the cosine theorem.\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe cosine theorem is sometimes named  \"\\NewTerm{Al-Kashi's formula}\\index{Al-Kashi's formula}\". And you can note that if $a$ is the hypotenuse and its opposite angle a right angle such that $\\cos(\\chi)$ is zero, so we fall back on the Pythagorean theorem. 
\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe cosine theorem is sometimes named \"\\NewTerm{Al-Kashi's formula}\\index{Al-Kashi's formula}\". And you can note that if $a$ is the hypotenuse and its opposite angle is a right angle, so that $\\cos(\\chi)$ is zero, then we fall back on the Pythagorean theorem. Here is why Al-Kashi's formula is sometimes named the \"\\NewTerm{generalized Pythagorean formula}\\index{generalized Pythagorean formula}\".\n\t\\end{tcolorbox}\n\tAn interesting case of application of the cosine theorem is to determine the distance $l$ between a point $b$ on the circumference of the circle and a point $a$ situated inside the circle, depending on the direction (angle) in which the eye is directed:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/law_of_cosines_distance_point_circle.jpg}\n\t\\caption{Distance from a point inside a circle at its circumference}\n\t\\end{figure}\n\tWe can therefore apply the cosine theorem, which gives us (knowing $d, R$ and $\\beta$):\n\t\\[R^2=d^2+l^2-2dl\\cos(\\beta)\\]\n\tTherefore:\n\t\\[l^2-2d\\cos(\\beta)\\,l+(d^2-R^2)=0\\]\n\tIt is therefore an equation of the second degree, whose discriminant is (\\SeeChapter{see section Calculus page \\pageref{discriminant}}):\n\t\\[\\Delta=4d^2\\cos^2(\\beta)-4(d^2-R^2)\\]\n\tand whose two solutions are (\\SeeChapter{see section Calculus page \\pageref{double root}}):\n\t\\[l=d\\cos(\\beta)\\pm\\sqrt{d^2\\cos^2(\\beta)-d^2+R^2}\\]\n\tWe will also see an important application in the section of Geometric Shapes, to calculate the area of any triangle (Heron's formula), page \\pageref{heron formula}.\n\t\n\t\\pagebreak\n\t\\paragraph{Laws of Sines}\\label{law of sines}\\mbox{}\\\\\\\\\\\n\t\\index{law of sines}Consider the triangle below, for which we represent the two heights:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/law_of_sines.jpg}\n\t\\caption{Construction for the proof of the law of sines}\n\t\\end{figure}\n\tIn the triangle above we have the relations:\n\t\n\twhich lead us to the expression:\n\t\n\tTherefore:\n\t\n\tBy similar reasoning we have:\n\t\n\tThat gives, in the same way:\n\t\n\tAll combined, this provides us with the \"sine theorem\":\n\t\\[\\frac{a}{\\sin(\\hat{A})}=\\frac{b}{\\sin(\\hat{B})}=\\frac{c}{\\sin(\\hat{C})}\\]\n\tThe most beautiful example of application in this book is certainly the determination of the Lagrange points L4 and L5 in the section Astronomy.\n\t\n\tObviously, we have not listed here all the existing circle trigonometric relations, as we have already said, but at least the most important ones, which you will need when you study physical systems.\n\t\n\t\\pagebreak\n\t\\subsection{Hyperbolic Trigonometry}\\label{hyperbolic trigonometry}\n\tWe have shown in the section of Functional Analysis that any function $f(x)$ can be decomposed into an even and an odd function as:\n\t\\[f(x)=\\underbrace{\\frac{f(x)+f(-x)}{2}}_{\\text{even}}+\\underbrace{\\frac{f(x)-f(-x)}{2}}_{\\text{odd}}\\]\n\tThus, for the function $f(x)=e^x$, we get:\n\t\\[e^x=\\frac{e^x+e^{-x}}{2}+\\frac{e^x-e^{-x}}{2}\\]\n\tRemember that in our study of complex numbers (\\SeeChapter{see section Numbers page \\pageref{de Moivre and Euler formulas}}) we proved the following Euler formulas:\n\t\\[\\cos(\\phi)=\\frac{e^{\\mi\\phi}+e^{-\\mi\\phi}}{2},\\quad \\sin(\\phi)=\\frac{e^{\\mi\\phi}-e^{-\\mi\\phi}}{2\\mi}\\]\n\tand we talked about the fact that even if $\\phi$ should most of the time be in $\\mathbb{R}$, it can also be in $\\mathbb{C}$; we therefore generalize the definition of the sine and cosine, by analogy, with the hyperbolic sine and cosine (we will demonstrate the origin of this term below):\n\t\\[\\cosh(z)=\\frac{e^{z}+e^{-z}}{2},\\quad \\sinh(z)=\\frac{e^{z}-e^{-z}}{2}\\]\n\twhere $z \\in \\mathbb{C}$. Therefore we can also obviously define:\n\t\\[\\tanh(z)=\\frac{\\sinh(z)}{\\cosh(z)},\\quad \\coth(z)=\\frac{1}{\\tanh(z)}\\]\n\tAnd so on... like for circle trigonometry.\n\n\tWe can therefore write:\n\t\n\ta relation that we can again put in analogy with:\n\t\n\tInterestingly, we can now work with complex angles in trigonometry!!! Indeed, if we put $a+\\mi b$ then we have:\n\t\n\tBut as we know (\\SeeChapter{see section Numbers page \\pageref{power rules calculations}}):\n\t\n\tTherefore we can write the following equality:\n\t\n\tTherefore the hyperbolic trigonometric function of a complex angle exists, and the image is also a complex number. 
We can see abusively hyperbolic trigonometry as a kind of generalization of trigonometry circle with real and complex angles.\n\t\n\tAs opposed to the circle trigonometry where we have proved that:\n\t\n\tIn hyperbolic trigonometry we have:\n\t\n\tThe proof is easy:\n\t\n\tBy rearranging we get:\n\t\n\tBefore we prove why this is named \"hyperbolic trigonometric\" let us see some important identities that we will use in various sections of this book.\n\t\n\tFirst we will look for a closed form of the reciprocal hyperbolic sine and cosine functions. For this purpose remember that:\n\t\n\tand that the search for the inverse function is always to isolate the variable (in this case: $z$).\n\t\n\tTherefore:\n\t\n\tThat is to say (multiplying by $-e^z$):\n\t\n\tby solving the polynomial this second degree polynomial on $e^{z}$ we get:\n\t\n\tAs $e^{z}>0$ we must reject the solution with the sign \"$-$\" (yes! otherwise with a $z \\in \\mathbb{R}$ you could have a negative $e^z$). It comes then:\n\t\n\tTherefore:\n\t\n\tUsing the same for:\n\t\n\tTherefore:\n\t\n\tThat is to say (multiplying by $-e^z$):\n\t\n\tby solving the polynomial this second degree polynomial on $e^{z}$ we get:\n\t\n\tAs $e^{z}>0$ we must reject the solution with the sign \"-\" (yes! otherwise with a $z \\in \\mathbb{R}$ you could have a negative $e^z$). It comes then:\n\t\n\tTherefore:\n\t\n\tFinally \\label{inverse hyperbolic to logarithm}:\n\t\n\t\n\t\\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]\n\t\\bcbombe Caution! Following the authors $\\text{arccosh}$ and $\\text{arcsinh}$ are denoted by $\\text{argcosh}, \\text{argsinh}$ or $\\cosh^{-1}, \\sinh^{-1}$ (the two last notations are confusing with the corresponding inverse hyperbolic function!).\n\t\\end{tcolorbox}\n\t\n\t\n\tTo study now a simple important geometric representation of that stuff let us now write:\n\t\n\tWith a restriction $\\alpha,z in \\mathbb{R}$ we have:\n\t\n\tThis can be represented as follow:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/hyperbolic_functions.jpg}\n\t\\caption[Hyperbolic functions domain]{Hyperbolic functions domain (source: Wikipedia)}\n\t\\end{figure}\n\tWith this new notation, we have:\n\t\n\tBut as we shall see in our study on conical in the section of Analytical Geometry:\n\t\\begin{enumerate}\n\t\t\\item The first of these two relations is for the entire given domain of definition, a circle of unit radius centered on the origin. The reader will note that it is curious enough for circle trigonometry to get a circle \\Winkey\n\t\t\\item The second of these two relations is for the entire domain of a definition, an equilateral hyperbola oriented along the $X$ axis whose apex is $S (1,0)$ and of focus $F(\\sqrt{2},0)$. 
The reader will note again that he is curious enough for hyperbolic trigonometry to get a hyperbola \\Winkey\n\t\\end{enumerate}\n\tFor the second case this take us to have the possibility to represent the two main hyperbolic trigonometric functions as:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.7]{img/geometry/hyperbola_hyperbolic_trigonometry.jpg}\n\t\\caption[Equivalence of hyperbolic functions and hyperbola]{Equivalence of hyperbolic functions and hyperbola (source: Wikipedia)}\n\t\\end{figure}\n\tThis figure is very interesting because as we will prove it now, from the calculation of the red area $a/2$, where $a$ is assimilated to a \"\\NewTerm{hyperbolic angle}\\index{hyperbolic angle}\", we can also define the hyperbolic trigonometric functions.\n\t\n\tLet us denote by $x$ and $y$ the abscissa and ordinate of the red triangle (by imagining this is \\underline{full} a triangle). We therefore have (\\SeeChapter{see section Geometric Shapes page \\pageref{unspecified triangle}}) therefore for its area (base multiplied by high divided by $2$):\n\t\n\tBut we see that we have to subtract a part of an area under the hyperbola of equation $X^2-Y^2=1$. That is to say of equation:\n\t\n\tAnd the area to subtract is therefore given by:\n\t\n\tTherefore the read area is given by:\n\t\n\tUsing the primitive proved in the section of Differential and Integral Calculus we get:\n\t\n\tTherefore:\n\t\n\tBut we can rewrite this:\n\t\n\tTherefore:\n\t\n\tFinally:\n\t\n\tWe recognize here the first relation of the both that we proved a little sooner above if we put $z=a,y=x$:\n\n\tSo the interpretation is very interesting: the hyperbolic angle can be seen as the double of the red triangle surface (or if you prefer: the half of the hyperbolic angle is equal to the red surface)!\n\t\n\tThese two relations:\n\t\n\t\n\tshould help, we hope to understand the origin of the name of the hyperbolic trigonometry and note that the study of hyperbolic trigonometry, is analogous to the study of circle trigonometry on the circle.\n\t\n\tIf we represent the trigonometric circle and the trigonometric hyperbole and we add some additional information, here is what we get:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/circle_hyperbolic_trigonometry_resume.jpg}\n\t\\caption{Visual definition of circle and hyperbolic trigonometric functions}\n\t\\end{figure}\n\tExplanations: \n\n\tTo trace with the ruler and compass the point P$(x, y)$ of the hyperbole, we give us a $x$, that is to say the point $A(x, 0)$. We draw the tangent to the circle (C) through A (x, 0) which gives us the tangent point T. We trace the circle (G) of center A$(x, 0)$ and passing through T. This circle intersects the hyperbola at the point P$(x, y)$ perpendicular to the A$(x, 0)$ to $\\text{O}x$.\n\t\n\tWe see appear on the figure several values of the hyperbolic functions corresponding to $x=\\cosh(t)=\\text{ch}(t),y=\\sinh(t)=\\text{sh}(t)$ but also $\\tanh(t)=\\text{th}(t),\\coth(t)$, etc. 
Among others, the circle (L) intersects the axis $\\text{O}x$ in two points whose abscissas are $e^t$ and $e^{-t}$.\n\t\n\tIf the reader wants to check thanks to the figure, it can control that in all points of the  hyperbole, the relations (among others):\n\t\n\tare always verified.\n\t\n\tWe get for the readers who wants to play with software using Maple 4.00b:\\\\\\\\\n\t\t\\texttt{>plot([sinh(x),cosh(x),tanh(x)],x=-2..2,color=[red,black,blue]);}\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/maple_hyperbolic_functions.jpg}\n\t\\caption{Visual definition of hyperbolic trigonometric functions with Maple 4.00b}\n\t\\end{figure}\n\tWe will see again the $\\cosh(x)$ in the section of Civil Engineering for example in the context of suspended cables. We also see again $\\sinh (x)$ and $\\tanh (x)$ as part of the study of gravity waves in the section of Marine and Weather Engineering.\n\t\n\t\\subsubsection{Remarkable hyperbolic identities}\n\t\n\tThe purpose now is as for the circle trigonometry to prove some remarkable identities but for hyperbolic trigonometry.\n\t\n\tSo we start with the definition:\n\t\n\tand:\n\t\n\tFrom these definitions and using the basic algebra operations we can determine the following outstanding relations (this is much easier than determining the remarkable relations of circle trigonometry, so unless requested we give most of these relations without proof).\n\t\n\tFirst let us begin with the very easiest one:\n\t\n\tWe also have:\n\t\n\tAnd we have the addition relations:\n\t\n\tFollowing the request of a reader, let us prove the first and third above relations.\n\t\n\tFor the first:\n\t\n\tand for the third:\n\t\n\tAlso let us mention other remarkable identities (once again don't hesitate to request us the detail proof if you need help):\n\t\n \tand finally (we stop here because we don't need anything else to study the other chapters and sections of this book):\n \t\n\t\n\t\\pagebreak\n\t\\subsection{Spherical Trigonometry}\\label{spherical trigonometry}\n\tThe aim of spherical trigonometry is to determine remarkable relations between the angles and sides of projected forms (also named \"geodetic forms\" because following the curvature of space) on the surface of a sphere. To determine this relations, we will look at the special case of a sphere of radius unity and to the relations between the sides of a triangle (planar elementary surface) and the various existing angles. We will see that the results are completely independent of the radius of the sphere.\n\t\n\tConsider the figure in which is located a geodesic triangle with vertices $A, B, C$ of respective opening angles $\\hat{A}, \\hat{B}, \\hat{C}$ and opposite sides $a, b, c$ and three unit vectors $\\vec{i}, \\vec{j}, \\vec{k}$ such as $\\vec{i} \\perp \\vec{k}, \\vec{j} \\perp \\vec{k}$ and that the end of $\\vec{k}$ is merged with the vertices $A$:\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/spheric_trigonometric_construction.jpg}\n\t\\caption{Construction to introduce the spherical trigonometry}\n\t\\end{figure}\n\tThe angle between points $B$ and $C$, denoted by $\\alpha$, could not be shown in the diagram above because of lack of space.\n\t\n\tLet us recall that the perimeter of a unit circle on the sphere of radius unity is obviously $P=2\\pi R$ (\\SeeChapter{see section Euclidean Geometry page \\pageref{circle}}). 
The perimeter of the circle as a function of the opening angle of the latter being given by (relation very often used in physics !!!)\\label{arc length trigonometry}:\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{img/geometry/length_arc_circle.jpg}\n\t\\end{figure}\n\tIf the circle is of radius $R=1$ (as is the case for our sphere), the calculation of the arc length is simplified and becomes:\n\t\n\tWe will keep this constraint of a unitary radius subsequently to simplify the expressions we'll get later on.\n\t\n\tConsequently regarding the points on our sphere; the sides of the triangle are given by:\n\t\n\tNow consider the scalar product (\\SeeChapter{see section Vector Calculus page \\pageref{dot product}}):\n\t\n\tand as $\\overrightarrow{OB}= \\overrightarrow{OC}$ (unit radius) we have:\n\t\n\tIf we decompose the two vectors $\\overrightarrow{OB}$ and $\\overrightarrow{OC}$ vectors on the orthonormal vectors we have:\n\t\n\tThis gives us:\n\t\n\tgiving (distributive property of scalar product):\n\t\n\tAs $\\vec{k}\\circ \\vec{k}=1$ and $\\vec{k}\\circ \\vec{i}=\\vec{k}\\circ \\vec{j}=0$ the above relation reduces to:\n\t\n\tAnd as:\n\t\n\tWe get:\n\t\n\trelation named \"\\NewTerm{fundamental relation}\\index{trigonometric fundamental relation}\" or \"\\NewTerm{cosine formula}\\index{cosine formula}\\label{cosine formula}\" that we can therefore as well write (because of the unitary radius assumption):\n\t\n\tAlso often rearranged as following (especially in statistics to introduce the partial correlation coefficient):\n\t\n\tThe latter relation is invariant by circular permutation of the variables $a,b,c,\\hat{A}$. It is also interesting to note before we continue that if the spherical triangle has right angle at $A$, the previous relation simplifies to:\n\t\n\tand if the triangle is sufficiently small relative to the radius and we make a second order Taylor expansion (\\SeeChapter{see section Sequences and Series page \\pageref{taylor series}}) near $0$  for each of the terms we get:\n\t\n\tTherefore:\n\t\n\tAfter simplification:\n\t\n\twe find back the Pythagorean theorem (\\SeeChapter{see section Euclidean Geometry page \\pageref{pythagorean theorem}}). Therefore:\n\t\n\tis the spherical geometry (non-Euclidean geometry) equivalent  of the Pythagorean theorem in plane geometry (Euclidean geometry)!!!\n\t\n\tThis  parenthesis now closed, let's us go back on our main business. The sine of all angles being positive (as all angles are $[0,\\pi]$), we can write:\n\t\n\tThe latter relation is of course also invariant by circular permutation of the variables $a,b,c,\\hat{A}$. So we get a remarkable relation of the spherical triangle, named \"\\NewTerm{sines relations}\\index{sines relations}\" or \"\\NewTerm{sines formula}\\index{sines formula}\" and given by:\n\t\n\tAs spherical trigonometry is often used for land trails, often with two very specific and orthogonal circles: the Earth's equator and a meridian or any parallel, this case is of particular interest! 
Therefore in the case of a right triangle in $A$ we obviously get:\n\t\n\tAll relations we have determined so far allow us in case of to get very interesting relation for geophysics:\n\t\n\tFor sure, we have not presented here all the relations or remarkable identities existing about spherical trigonometry, but at least the most important one that should be know to read the other sections of this book.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe define the  \"surplus\" or \"spherical excess\" by the number:\n\t\n\t\\end{tcolorbox}\n\tWhile we're at it, let us calculate a classic problem which is that of the area of a triangle on a sphere. Consider the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/spherical_triangle.jpg}\n\t\t\\caption[]{Construction for the study of the surface of a spherical triangle}\n\t\\end{figure}\n\tIf we extend the geodesic arcs $AB$ and $AC$ until $A_1$ we get a slice of a sphere whose surface is proportional to the angle $\\alpha$ in $A$. If this angle was equal to $2\\pi$, we would have the whole sphere and the surface would be equal to $4\\pi R^2$ (\\SeeChapter{see section Geometric Shapes page \\pageref{sphere}}). As the angle is equal to $\\alpha$, the proportionality told us that $S_1$ is equal to:\n\t\n\tSimilarly, if we extend the arcs $BC$ and $BA$ up to $B_1$ and if we extend the arcs $CA$ and $CB$ until $C_1$, we get two slices with surfaces $S_2$ and $S_3$ are equal to:\n\t\n\tNow suppose we add these three areas:\n\t\n\twe then get the half of the sphere $S_\\odot$ (look to the figure to represent it yourself mentally) more two times the geodesic triangle area $S$ in pink on the figure (as counted two times in excess).\n\t\n\tWe must subtract twice the blue triangle area to get the surface of the hemisphere:\n\t\n\tTherefore:\n\t\n\tas $S_\\odot=4\\pi R^2$, we get:\n\t\n\tAfter simplification we deduce that the area $S$ of the triangle $ABC$ is:\n\t\n\twhere $\\varepsilon$ is a solid angle (see below definition). That latter relation is sometimes named \"\\NewTerm{Girard's theorem}\\index{Girard's theorem}\". More commonly written:\n\t\n\tand $K=1/R^2$ is defined as the \"curvature\" (it indicates how much the geometry of the sphere deviates from the Euclidean plane geometry where $K=0$). On the basis of its definition $K$ seems to depend on the radius of the sphere considered as a two dimensional manifold embedded in the ambient Euclidean space $\\mathbb{R}^3$. The boxed relation above however tells us that the curvature $K$ can be evaluated by just performing measures of angles and areas on the two-dimensional surface of the sphere. It follows that $K$ has an intrinsic geometric meaning, that is independent of its embedding in the flat 3-dimensional space, since it can be computed by only using measures on the spherical surface. \n\t\n\tIt is fairly simple to generalize this concept to other forms following the same philosophy (especially those composed of triangles...).\n\t\n\t\\subsection{Solid Angle}\\label{solid angle}\n\tIn spatial geometry, arise the problem of opening angle of a portion of the space (in extension to the so named \"plane angle\" that applies to $\\mathbb{R}^2$). 
, We define the \"\\NewTerm{solid angle}\\index{solid angle}\" by measuring the portion of space delimited by a conical surface of apex $\\text{O}$ and we express it in \"\\NewTerm{steradian}\\index{steradian}\", given by the ratio (logical extension of the definition of the plane angle):\n\t\n\t$S$ being the area of the cap cutted by cone on a sphere of radius $r$.\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/solid_angle_configuration.jpg}\n\t\t\\caption{Configuration for the definition of the solid angle}\n\t\\end{figure}\n\tIf $\\alpha$ is the half plane-angle of the cone, we get for this ratio (for calculation of the cap of a spherical surface see the section Geometric Shapes):\n\t\n\tHence we conclude that the total solid angle is by definition on the whole space:\n\t\n\tWe can also calculate the \"\\NewTerm{elementary solid angle}\\index{elementary solid angle}\" as shown below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/elementary_solid_angle_configuration.jpg}\n\t\t\\caption{Configuration for the definition of the elementary solid angle}\n\t\\end{figure}\n\tGiven an elementary solid angle $\\mathrm{d}\\Omega$ and $\\overline{\\text{O}M}$ the cone axis. We put:\n\t\n\tWe consider any surface $\\Sigma$ passing through the point $M$. $\\mathrm{d}\\Omega$ cut on this surface a portion $\\mathrm{d}\\Sigma$.\n\t\n\tIf we draw the sphere $S$ of center $\\text{O}$ and radius $r$, this solid angle cut on the sphere a cap of area $\\mathrm{d}S$:\n\t\n\tGiven $MN$ the normal to $\\mathrm{d}\\Sigma$ which make an angle $\\theta$ with $OM$. We have assimilating $\\mathrm{d}S$ and $\\mathrm{d}\\Sigma$ to portions of a plane:\n\t\n\tTherefore:\n\t\n\tThis concept of solid angle will be very useful especially in the field of theoretical physics that deals with the thermal radiation (see sections on Optics page \\pageref{solid angle optics} and Thermodynamics page \\pageref{solid angle black body} and Atomic Physics page \\pageref{solid angle atomic physics}).\n\t\n\tWe can still calculate from the previous concepts, the elementary solid angle of revolution as presented in the figure below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/solid_angle_of_revolution_configuration.jpg}\n\t\t\\caption{Configuration for the definition of the elementary solid angle of revolution}\n\t\\end{figure}\n\tIt is between two solid angles of revolution whose half-angles at the apex differ of $\\mathrm{d}\\alpha$:\n\t\n\tTherefore:\n\t\n\t\\begin{dem}\n\tIn the section dealing with Geometric Shapes (\\SeeChapter{see section Geometric Shapes page \\pageref{sphere}}) we proved the different ways to calculate the area of a sphere. 
We can still calculate, from the previous concepts, the elementary solid angle of revolution as presented in the figure below:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/solid_angle_of_revolution_configuration.jpg}\n\t\t\caption{Configuration for the definition of the elementary solid angle of revolution}\n\t\end{figure}\n\tIt is comprised between two solid angles of revolution whose half-angles at the apex differ by $\mathrm{d}\alpha$:\n\t\n\t\[ \mathrm{d}\Omega=2\pi\sin\alpha\,\mathrm{d}\alpha \]\n\t\n\tTherefore:\n\t\n\t\[ \Omega=\int_0^{\alpha}2\pi\sin\alpha'\,\mathrm{d}\alpha'=2\pi(1-\cos\alpha) \]\n\t\n\t\begin{dem}\n\tIn the section dealing with Geometric Shapes (\SeeChapter{see section Geometric Shapes page \pageref{sphere}}) we proved the different ways to calculate the area of a sphere. From these calculations it was concluded that the elementary surface at constant radius $R$ was:\n\t\n\t\[ \mathrm{d}S=R^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi \]\n\t\n\tand since:\n\t\n\t\[ \mathrm{d}\Omega=\frac{\mathrm{d}S}{R^2} \]\n\t\n\tthe elementary solid angle is then:\n\t\n\t\[ \mathrm{d}\Omega=\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi \]\n\t\n\tThus, the solid angle defined by a cone of revolution, with a half-angle at the apex equal to $\alpha$, is given by:\n\t\n\t\[ \Omega=\int_0^{2\pi}\int_0^{\alpha}\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi=2\pi(1-\cos\alpha) \]\n\t\n\t\begin{flushright}\n\t\t$\blacksquare$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\begin{flushright}\n\t\begin{tabular}{l c}\n\t\circled{50} & \pbox{20cm}{\score{3}{5} \\ {\tiny 186 votes,  80.54\%}} \n\t\end{tabular} \n\t\end{flushright}\n\t\n\t%to force start on odd page\n\t\newpage\n\t\thispagestyle{empty}\n\t\mbox{}\t\n\t\section{Euclidean Geometry}\label{euclidean geometry}\n\n\tThe purpose of \"\NewTerm{Euclidean geometry}\index{Euclidean geometry}\" (more commonly named \"\NewTerm{plane geometry}\index{plane geometry}\") is, in principle, the study of the forms and properties of natural bodies. Geometry is however not an experimental science, since its purpose is not to study certain aspects of nature, but a necessarily arbitrary reproduction of it.\n\nWe will present in this section, implicitly at first, the five postulates of Euclidean geometry (the first four are nowadays regarded as axioms) and then develop around these the basic geometry that will be necessary for the reader to study the rest of this book. Once this is done, we will summarize our study by presenting explicitly the five postulates of Euclid and then Hilbert's axioms.\n\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nWe tried to preserve Euclid's notations as best as possible, although with a slightly more modern approach to certain concepts, and to present only those that are useful to Applied Mathematics.\n\t\end{tcolorbox}\n\t\n\t\subsection{Objects of Euclidean Geometry}\n\t\n\tBefore stating the five Euclid's postulates, it seems good to set some intuitive concepts first:\n\t\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item[D1.] The simplest experimental concept is that of \"\NewTerm{volume}\index{volume}\". We say that a body occupies a given volume when it occupies a certain place in three-dimensional space (for spaces with higher dimensions, we talk about \"\NewTerm{hyper-volumes}\index{hyper-volume}\").\n\t\t\n\t\t\item[D2.] We assume as obvious that a volume is limited by a \"\NewTerm{surface}\index{surface}\"; but if the existence of the volume is physically controllable and measurable, the surface is a creation of the mind; it is something similar to a balloon, for example, wrapping any volume, but only similar. It is a two-dimensional geometric entity without thickness.\n\t\t\n\t\t\item[D3.] When a surface is limited, this limit is a \"line\". Again, the line is a creation of the mind; a line has no experimental existence; it is something similar to the figure formed by a wire. Still a geometric entity, but without width (otherwise it would be a surface)... only a length.\n\t\t\n\t\t\item[D4.] A \"\NewTerm{straight-line}\index{straight-line}\" is defined as the unbounded line of shortest distance joining two points on a surface. In other words, a straight line is a line with no beginning and no end, and therefore infinitely long.\n\t\t\n\t\t\item[D5.] When a line is limited (bounded), its limit is a \"\NewTerm{point}\index{point}\": the point is something analogous to the intersection of two stretched wires. This is still a creation of the mind, a geometric being (because it is supposed to have no width, no length, no height... 
just no dimensions).\n\n\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIt is customary in geometry to represent a point by a letter $A, B, ...$; a line or a surface by a letter in brackets (but this is rarely respected because we often assume the reader knows what we're talking about). Then we write, for example: the line $(L)$, the surface $(S)$.\n\t\end{tcolorbox}\t\t\n\t\t\n\t\t\item[D6.] The term \"\NewTerm{segment $\overline{AB}$}\index{segment}\" generally designates a line bounded by the points $A$ and $B$. To say that a point $M$ is on the segment $\overline{AB}$ is to translate the following fact: any segment $\overline{AB}$ can be split in an infinity of ways into two pieces limited by $A$ and $M$ on one hand, and by $M$ and $B$ on the other - a fact actually inspired by the experimental possibility of cutting a piece of wire in half, and that in an infinity of ways (we will come back to this later).\n\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tThe sentence: «The line $(L)$ is drawn on a surface $(S)$» means that the surface $(S)$ could be divided into several pieces, so that the line $(L)$ is the boundary or part of a boundary of one of these pieces. This definition is based on the fact that it is possible to cut a fabric, for example by following with scissors any line drawn on this fabric.\n\t\end{tcolorbox}\n\nWhen a line $(L)$ is drawn on a surface $(S)$, any point $M$ which is located on the line $(L)$ is, by definition, also located on the surface $(S)$. Then we say that it is a \"point of this surface\".\n\n\t\t\item[D7.] A \"\NewTerm{semi-straight line}\index{semi-straight line}\" is a line which has a boundary (or vertex) on one side and is infinite on the other side.\n\t\t\n\t\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/lines_type.jpg}\n\t\t\caption{Difference between line, segment and semi-straight line}\n\t\t\end{figure}\n\t\t\n\t\t\item[D8.] We name \"\NewTerm{angle}\index{angle}\" (or \"\NewTerm{plane angle}\index{plane angle}\") or more strictly \"\NewTerm{straight angle}\index{straight angle}\" the plane portion limited by two half-lines (see further below the definition of a half-line).\n\t\end{enumerate}\n\t\n\t\subsubsection{Dimensions}\label{dimensions}\n\nWe talked before about volume, surface and line, to which we can associate dimensions. But what is a dimension? \n\nWe will try to define it as best as possible but first, it is important to know that there are several types of dimensions in geometry. The best known and most common one is the one we call \"\NewTerm{topological dimension}\index{topological dimension}\".\n\nFor example, the point (a mathematical and geometric abstraction) has a topological dimension of $0$, the curve (a continuous line of zero thickness) a dimension of $1$, a surface a dimension of $2$, a volume a dimension of $3$ and a hyper-volume a dimension of $4$ (to represent a hyper-volume take a volume drawn on a paper (...) make it a translation and connect the vertices). 
These are all integer values by definition:\n\n{\centering\n\begin{table}[H]\n\begin{tabular}{cc}\n\begin{subfigure}{0.5\textwidth}\centering\n\textbf{Dimension: $0$}\\\n\includegraphics[width=0.5\columnwidth]{img/geometry/point.eps}\caption{A point}\end{subfigure}&\n\begin{subfigure}{0.5\textwidth}\centering\n\textbf{Dimension: $1$}\\\n\includegraphics[width=0.5\columnwidth]{img/geometry/line.eps}\caption{A line (straight or curved)}\end{subfigure}\\\n\newline\n\begin{subfigure}{0.5\textwidth}\centering\n\textbf{Dimension: $2$}\\\n\includegraphics[width=0.5\columnwidth]{img/geometry/surface.eps}\caption{Plane figure/Surface (polygon, ellipse, etc.)}\end{subfigure}&\n\begin{subfigure}{0.5\textwidth}\centering\n\textbf{Dimension: $3$}\\\n\includegraphics[width=0.5\columnwidth]{img/geometry/volume.eps}\caption{A solid or body (sphere, parallelepiped, etc.)}\end{subfigure}\\\n\end{tabular}\n\caption{Objects, representations and corresponding dimensions}\n\end{table}\n}\n\nIn physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus a line has a dimension of one because only one coordinate is needed to specify a point on it. \n\n\tIn mathematics, the dimension of an object is an intrinsic property independent of the space in which the object is embedded. For example, a point on the unit circle in the plane can be specified by two Cartesian coordinates, but a single polar coordinate (the angle) would be sufficient (for more details on coordinate systems see the section of Vector Calculus), so the circle is 1-dimensional even though it exists in the 2-dimensional plane. This intrinsic notion of dimension is one of the chief ways the mathematical notion of dimension differs from its common usages.\n\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/geometry/co_ordinates_systems.jpg}\n\t\t\caption[Example of some co-ordinate systems]{Example of some co-ordinate systems (source: Wikipedia)}\n\t\end{figure}\n\n\tTo calculate the size of certain objects, we will use the method of plane metric geometry by taking a standard measure of this object, that is to say the object itself but smaller, and carrying it over our object a given number of times:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/standard_measurement_1d.jpg}\n\t\t\caption{Concept of one-dimensional standard}\n\t\end{figure}\n\tLet $L$ be the total length of the segment. We will take a standard segment of length $n$ that we will report on the big segment. This standard will be reported $L / n$ times. We will write this:\n\t\n\t\[ N=\frac{L}{n} \]\n\t\n\tWe can apply the same reasoning to a surface:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/standard_measurement_2d.jpg}\n\t\t\caption{Concept of two-dimensional standard}\n\t\end{figure}\n\tLet $L^2$ be the total area of the square. We will take a standard square of area $n^2$ that we will report into the big square. This standard will be reported $L^2 / n^2$ times. We note that:\n\t\n\t\[ N=\frac{L^2}{n^2}=\left(\frac{L}{n}\right)^2 \]\n\t\n\tWe can apply the same reasoning to a volume:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/standard_measurement_3d.jpg}\n\t\t\caption{Concept of three-dimensional standard}\n\t\end{figure}\n\tLet $L^3$ be the total volume of the cube. We will take a standard cube of volume $n^3$ that we will report into the big cube. 
This standard will be reported $L^3 / n^3$ times. We note that:\n\t\n\t\[ N=\frac{L^3}{n^3}=\left(\frac{L}{n}\right)^3 \]\n\t\n\tand so on...\n\t\n\tThrough these three examples, we make appear the number $1$ for the segment, the number $2$ for the square, the number $3$ for the volume. These numbers are the \"dimension\" of the object (the same applies to a sphere, for example).\n\t\n\tSo, to summarize:\n\t\begin{enumerate}\n\t\t\item Lines are of dimension $1$, because to measure a length with a division of a standard segment finer by a factor $n$, the number of subdivisions will be multiplied by the same factor. So there is a power-one relation between the subdivision and the measurement.\n\t\t\n\t\t\item Surfaces are two-dimensional, as for measuring a surface with a division of a standard square finer by a factor $n$, the number of subdivisions will be given by the square power of $n$. Therefore there is a power-of-two relation between the subdivision and the measurement (if we take a square two times smaller to cover a surface, we will need four times the standard).\n\t\t\n\t\t\item Volumes are 3-dimensional, as for measuring a volume with a division of a standard cube finer by a factor $n$, the number of subdivisions will be given by the cubic power of $n$. So there is a power-of-three relation between the subdivision and the measurement (if we take cubes two times smaller to fill in a volume, we will need eight times the standard).\n\t\end{enumerate}\n\tLet us generalize: let $N$ be the number of times we report the standard of length $n$ on our object of length $L$, and given $d$ the dimension of the object, we have:\n\t\n\t\[ N=\left(\frac{L}{n}\right)^d \]\n\t\n\tIn the case of fractals (\SeeChapter{see section Fractals page \pageref{fractals}}) the dimensions are variable and fractional. Consider the Von Koch curve (for example) after one iteration of the sequence defining it:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/vonKoch_whole.jpg}\n\t\t\caption{von Koch curve}\n\t\end{figure}\n\tLet $L$ be its size such that $L=1$. To calculate its dimension we choose the fundamental element of the curve in red below (yes... we could for sure also take the straight segment...):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/vonKoch_standard.jpg}\n\t\t\caption{Standard choice for Von Koch curve}\n\t\end{figure}\n\tLet $n$ be the size of this standard such that $n=1/3$. We see very well that we can report it $4$ times on the curve. Therefore:\n\t\n\t\[ 4=\left(\frac{1}{1/3}\right)^d=3^d \qquad\Rightarrow\qquad d=\frac{\ln 4}{\ln 3}\cong 1.26 \]\n\t\n\tThe dimension of the Von Koch fractal therefore has a fractional value, and it is more a curve (because it is close to $1$) than a surface (which is of dimension $2$). Cauliflower is for example a fractal with dimension $2.33$...\n\t\n\tSo we can calculate the dimension of any fractal object on condition of knowing its standard measurement.
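As a quick consistency check of this formula against the integer dimensions seen above, divide a cube of side $L$ into standard cubes of side $n=L/2$: the standard is reported $N=8$ times and we indeed recover:\n\t\n\t\[ d=\frac{\ln N}{\ln(L/n)}=\frac{\ln 8}{\ln 2}=3 \]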
We do not need to venture fetching complex objects in some galaxies, since the most famous fractal is in your plate (the second being your lungs...). Eh yes! Cauliflower is a fractal (as well as your lungs)! You've probably noticed that when we cut a cauliflower (not something recommended to try with your lungs...), we break it instead of cutting it, and it gives lots of small cauliflowers, which themselves may give other smaller cauliflowers. This characteristic of self-similarity at different scales is what makes the cauliflower a fractal.\n\t\n\tLet us calculate now the fractal dimension of cauliflower. When we break a cauliflower, we get between 12 and 14 branches that resemble the whole cauliflower up to a given scale... This scale expansion is, if we calculate it, of a factor 3. So according to the formula above, the fractal dimension of the cauliflower (taking 13 branches) is approximately:\n\t\n\t\[ d=\frac{\ln 13}{\ln 3}\cong 2.33 \]\n\t\n\tSo the cauliflower is closer to a surface than a volume.\n\tLet us do one last example with the Sierpinski triangle fractal (\SeeChapter{see section Fractals page \pageref{sierpinski fractal}}):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/sierpinski_triangle.jpg}\n\t\t\caption{Sierpinski triangle fractal}\n\t\end{figure}\n\tFirst, it is clear that we need one square to cover it (round it) completely. With squares two times smaller, we will need three squares:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/sierpinski_triangle_three_squares.jpg}\n\t\t\caption{Three squares to cover the Sierpinski fractal}\n\t\end{figure}\n\tIf we again divide the size of the squares by two, we will need $9$ squares to cover the triangle:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/sierpinski_triangle_nine_squares.jpg}\n\t\t\caption{Nine squares to cover the Sierpinski fractal}\n\t\end{figure}\n\tIf we divide the size of the squares by two once more we will need $27$ of them:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/sierpinski_triangle_twentyseven_squares.jpg}\n\t\t\caption{Twenty seven squares to cover the Sierpinski fractal}\n\t\end{figure}\n\tAnd so we see that:\n\t\n\t\[ 3=2^d,\qquad 9=4^d,\qquad 27=8^d \]\n\t\n\tAnd therefore:\n\t\n\t\[ d=\frac{\ln 3}{\ln 2}\cong 1.58 \]\n\t\n\tSo the Sierpinski fractal is closer to a surface than a curve.\n\t\n\tThere also exist other dimensions. Take for example the \"\NewTerm{homothetic dimensions}\index{homothetic dimensions}\". Here are some simple examples (see further below the strict definition of a \"homothetic transformation\"):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/homothetic_dimension.jpg}\n\t\t\caption{Representation of homothetic dimensions}\n\t\end{figure}\n\tThe segment (far left), of dimension $1$, has by homothetic transformation seen its length doubled, and we note that:\n\t\n\t\[ 2=2^1 \]\n\t\n\tThe square (at the center), of dimension $2$, has by homothetic transformation seen its surface quadrupled, and we note that:\n\t\n\t\[ 4=2^2 \]\n\t\n\tThe cube (far right), of dimension $3$, has by homothetic transformation seen its volume octupled, and we note that:\n\t\n\t\[ 8=2^3 \]\n\t\n\tThe duplication of the scale factor (homothety) therefore always gives:\n\t\n\t\[ N=2^d \]\n\t\n\tAs you can see, $d$ is always an integer value here, but it is a different kind of dimension.
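To summarize the three fractal examples above in a single expression (rewriting the general relation with $s=L/n$ the scale reduction factor):\n\t\n\t\[ d=\frac{\ln N}{\ln s}:\qquad d_{Koch}=\frac{\ln 4}{\ln 3}\cong 1.26,\qquad d_{Sierpinski}=\frac{\ln 3}{\ln 2}\cong 1.58,\qquad d_{cauliflower}\cong\frac{\ln 13}{\ln 3}\cong 2.33 \]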
The concept of dimensions having been introduced, let us look now at Euclid's postulates, which may seem vague at first but will be detailed as we go further in our reading of this section.\n\t\n\t\pagebreak\n\t\subsection{Euclid's Constructions}\label{euclid's postulates}\n\tThe construction of planar Euclidean geometry is based on the five following postulates (the first four are today regarded as axioms, as we have already mentioned):\n\t\begin{enumerate}\n\t\t\item[P1.] To draw a straight line from any point to any point.\n\t\t\n\t\tIn a modern way we would say that through two distinct points $A$ and $B$ there passes a straight line, and only one.\n\t\t\n\t\tIn other words, two straight lines $(D)$ and $(D')$ which have two common points are coincident: any point of one is a point of the other and vice versa.\n\t\t\n\t\tIt follows from this postulate that two lines $(D)$ and $(D')$ either have no common point, or have a single common point named \"\NewTerm{intersection}\index{intersection point}\" and are then \"\NewTerm{intersecting}\" and \"\NewTerm{distinct}\index{distinct lines}\", or have more than one common point and are then \"\NewTerm{combined}\index{combined lines}\".\n\t\t\n\t\t\item[P2.] To extend indefinitely, in its direction, a finite straight line.\n\t\t\n\t\tIn a modern way we will say that any finite segment $\overline{AB}$ can be extended into a straight line passing through $A$ and $B$ (given the first axiom, it is unique in Euclidean geometry).\n\t\t\n\t\t\item[P3.] From any point and with any interval, to describe a circle circumference.\n\t\t\n\t\tIn a modern way we will say that for any point $A$ and any point $B$ distinct from $A$, we can draw a circle with center $A$ passing through $B$.\n\t\t\n\t\t\item[P4.] All right angles are equal.\n\t\t\n\t\tUnder modern form we say that to every angle $\widehat{xyz}$ of the plane corresponds its measurement $\theta$ (also denoted in high school by $\measuredangle AOB$), performed with a unit chosen once and for all, where $\theta$ is a positive number less than $2\pi$. Conversely, let $\theta$ be any positive number between $0$ and $2\pi$; we shall assume that there is an infinite number of angles $\widehat{xyz}$, equal to one another, whose measurement with the chosen angle unit is $\theta$.\n\t\t\n\t\t \item[P5.] If a straight line, falling on two straight lines, makes the interior angles on the same side smaller than two right angles, these lines, extended to infinity, will meet on the side where the angles are smaller than two right angles.\n\t\t \n\t\t In a modern way we will say that given a straight line and a point outside it, there is a unique line through this point not intersecting the initial line.\t\t\n\t\end{enumerate}\n\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/euclids_postulates.jpg}\n\t\t\caption{Summary of Euclid's postulates}\n\t\end{figure}\n\tEuclid's postulates will allow us to develop the concepts of measurement of a length, an area, a volume and an angle, as we shall see below.\n\t\n\tThe two fundamental theorems of Euclidean geometry are the Pythagorean theorem and Thales' theorem, as we will prove later. A bit of analysis permits us to go further with trigonometry, which we have already developed in the previous section.\n\t\n\t\subsubsection{Segments and Lines}\n\tFirst, the simplest geometric figure (except the point...) in Euclidean geometry is the \"straight line\", and it is directly concerned by the first two Euclid's postulates.\n\t\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item[D1.] The \"\NewTerm{straight line}\index{straight line}\" is the image given by a tensioned wire of zero thickness and infinite length.\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tWe can also define the \"straight line\" as an infinity of points placed next to each other in the same direction on a plane.\n\t\t\end{tcolorbox}\n\t\t\n\t\t\item[D2.] 
We name \"\\NewTerm{half-line}\\index{half-line}\" the portion of a line limited to a point O named \"\\NewTerm{origin}\\index{origin}\".\n\t\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tThe expression the \"\\NewTerm{half-line $\\overline{\\text{O}A}$}\\index{half-line}\" means the half-line of origin O, point named the \"first\", which contains the point $A$.\n\t\t\\end{tcolorbox}\n\t\t\n\t\t\\item[D3.] We say that two half-lines $\\overline{\\text{O}A}$, $\\overline{\\text{O}B}$ are \"\\NewTerm{opposing half-lines}\\index{opposing half-lines}\" when they constitute the whole line $\\overline{AB}$.\n\t\t\n\t\t\\item[D4.] We name \"\\NewTerm{segment $\\overline{AB}$}\\index{segment}\" the portion of a right straight line limited by two points $A$ and $B$. These points are called the \"extremities\" of the segment.\n\t\\end{enumerate}\n\t\n\t\\pagebreak\n\t\\paragraph{Quantities of the same type}\\mbox{}\\\\\\\\\\\n\tWe say that geometric figures (meaning straight lines) are of the \"\\NewTerm{same type quantities}\\index{identical geometric quantities}\" when it is possible to define:\n\t\\begin{enumerate}\n\t\t\\item In which case a figure $(A)$ is said to be equal to a figure $(B)$ and, if they are unequal, which is the smallest.\n\t\t\n\t\t\\item What we need to understand when we sum a figure $(A)$ and a figure $(B)$.\n\t\\end{enumerate}\n\tThe definitions should be chosen such that if $(A)$ is smaller than said $(B)$ and $(B)$ smaller than $(C)$, then $(A)$ must be reported to be small than $(C)$.\n\t\n\tIt is necessary, moreover, that the figure resulting of the sum of $(A)$ and $(B)$ is equal to that which is named: sum of $(B)$ and $(A)$.\n\t\n\tFinally, a substitution in a comparison, or an equal amount, of a figure by an identical figure should not affect the result of operations.\n\t\n\tTo understand what are the quantities of the same type, take the example of line segments:\n\t\\begin{itemize}\n\t\t\\item We admit that it is possible to determine the equality of two segments $\\overline{AB}$ and $\\overline{A'B'}$ when we can make them coincide.\n\t\t\n\t\t\\item We also admit that it is possible to replace the segment $\\overline{A'B'} \\in AB$ by an equal segment $\\overline{AD}$ and putt on the line $AB$.\n\t\t\n\t\t\\item Finally, we will assume that it is possible to distinguish between three points $A, B, C$, taken at random on a line or segment and to know which is between the other two.\n\t\\end{itemize}\n\t\n\tWe then agree to say that the segment $\\overline{A'B'}$ is smaller than the segment $\\overline{AB}$, which is written formally \n(\\SeeChapter{see section Operators page \\pageref{comparators}}):\n\t\n\twhen the point $C$, obtained by putting it on the segment $\\overline{AB}$ a segment  $\\overline{AB}$ equal to $\\overline{A'B'}$, falls between $A$ and $B$.\n\t\n\tIf the point $C$ was on $B$, the segments $\\overline{AB}$ and $\\overline{A'B'}$  would be equal and then we would write:\n\t\n\tWe agree to named \"\\NewTerm{sum of two segments}\\index{sum of two segments}\" $\\overline{AB}$, $\\overline{A'B'}$, the segment $\\overline{AC}$ obtained by translating on the half-line opposed to the segment $\\overline{BA}$ a segment $\\overline{AB}$ equal to $\\overline{A'B'}$. We translate this by writing:\n\t\n\tLet us still consider quantities of the same species ... Add therebetween several of these quantities is to add one of them to another, the sum determined to another third, etc. 
For example, to add the segments $\overline{AB}, \overline{BC}, \overline{CD}$ is to add $\overline{AB}$ and $\overline{BC}$, which gives $\overline{AC}$, then $\overline{AC}$ and $\overline{CD}$, which gives $\overline{AD}$. We summarize the operation by writing:\n\t\n\t\[ \overline{AD}=\overline{AB}+\overline{BC}+\overline{CD} \]\n\t\n\tTo multiply a quantity by an integer $n$ is to add $n$ such quantities together. For example, if we have $\overline{AB} = \overline{BC} =\overline{CD}$, the above relation would be written:\n\t\n\t\[ \overline{AD}=3\overline{AB} \]\n\t\n\tWe will now define what we name \"comparing two quantities $(A)$ and $(B)$ of the same kind\". For this let us arbitrarily select a quantity $(C)$ of the same kind as $(A)$ and $(B)$ and smaller than each of them. Let us build a sequence of quantities such that:\n\t\n\t\[ (C_1)=(C),\quad (C_2)=2(C),\quad \ldots,\quad (C_p)=p(C),\ \ldots \]\n\t\n\tWe find that the quantity $(A)$ is intercalated between two quantities $(C_p)$ and $(C_{p+1})$ and that $(B)$ is located between two others $(C_q)$ and $(C_{q+1})$ by construction.\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tConsider then, for example, any two but different segments $\overline{AB}, \overline{A'B'}$ (e.g. $1.2\;[\text{cm}]$ and $3.5\;[\text{cm}]$):\\\n\t\n\tTo perform the previous operation, let us use a graduated ruler whose unit will be the quantity $(C)$, an arbitrary segment (for example $1 \;[\text{cm}]$).\\\n\t\n\tWe apply the zero of the ruler on $A$, and $B$ will by design fall between two graduations of the ruler numbered $p$ and $p + 1$, unless the quantity $(C)$ fits exactly in $\overline{AB}$... (with the choice taken as an example, $B$ falls between the $1$st and the $2$nd graduation).\\\n\t\n\tFor $\overline{A'B'}$, we also apply the zero of the ruler on $A'$, and $B'$ likewise falls between two graduations of the ruler, numbered $q$ and $q + 1$, unless the quantity $(C)$ fits exactly in $\overline{A'B'}$... (with the choice taken as an example, $B'$ falls between the $3$rd and the $4$th one).\\\n\t\n\tWe will express the results of these measurements and their ratio by writing:\n\t\n\t\[ \frac{p}{q+1}<\frac{\overline{AB}}{\overline{A'B'}}<\frac{p+1}{q} \]\n\t\n\twhere the term at the left extremity is named a \"\NewTerm{default measurement}\index{default measurement}\" and the one at the opposite extremity a \"\NewTerm{measurement by excess}\index{measurement by excesses}\".\\\n\t\n\tSo with the measures taken as an example we have:\n\t\n\t\[ \frac{1}{4}<\frac{\overline{AB}}{\overline{A'B'}}<\frac{2}{3} \]\n\t\n\t\end{tcolorbox}\n\t\textbf{Definition (\#\mydef):} And in physics, we name \"\NewTerm{measure of a quantity}\index{measure of a quantity}\" $(A)$ the positive number which measures the ratio of this quantity to a quantity $(U)$ arbitrarily chosen as reference and which we name the \"\NewTerm{unit}\index{unit}\", the measure of the unit being \"$1$\" by definition.\n\t\n\tWe can show that if $a$ is the measure of $(A)$ and $b$ that of $(B)$, both evaluated with the same unit $(U)$, the ratio $(A) / (B)$ is equal to the ratio $a / b$. This ratio is (and this should be obvious for the reader at this level) independent of the chosen unit of measurement.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe say that $(B)$ is an \"\NewTerm{aliquot part}\index{aliquot part}\" of $(A)$ if the ratio $(A)/(B)$ is an integer.\n\t\end{tcolorbox}\n\tWe will agree, once and for all, that in geometry all quantities of the same species that occur in a given figure are measured with the same unit.\n\t\n\tGiven $(A), (B), (C), ..., (S)$ quantities of the same species and $(A'), (B'), (C'), ..., (S')$ quantities of a same species, but not necessarily of the same kind as the previous ones. 
We say that these quantities are \"\NewTerm{homologous}\index{homologous}\" if we can group them in pairs, $(A')$ homologous to $(A)$, $(B')$ homologous to $(B)$, ..., etc., so that the following conditions are met:\n\t\begin{itemize}\n\t\t\item If $(A)$ is equal to $(B)$, $(A')$ is equal to $(B')$.\n\t\t\item If $(A)$ is smaller than $(B)$, $(A')$ is smaller than $(B')$.\n\t\t\item If $(S)$ is the sum of $(A)$ and $(B)$, $(S')$ is the sum of $(A')$ and $(B')$.\n\t\end{itemize}\n\tTo calculate the ratio $(A) / (B)$, let us build the previous sequence $(C_1),(C_2),...,(C_p)$ and, to calculate the ratio $(A')/(B')$, let us build the sequence $(C_1^{\prime}),(C_2^{\prime}),...,(C_p^{\prime})$, obtained as the previous one, but from the quantity $(C')$ homologous to $(C)$.\n\t\n\tIt is obvious that if $(A)$ can be intercalated between $(C_p)$ and $(C_{p+1})$, $(A')$ will also be intercalated between $(C_p^{\prime})$ and $(C_{p+1}^{\prime})$; similarly if $(B)$ can be intercalated between $(C_q)$ and $(C_{q+1})$. The ratios $(A) / (B)$ and $(A') / (B')$ will then be bounded by the same numbers:\n\t\t\n\t\[ \frac{p}{q+1}<\frac{(A)}{(B)}<\frac{p+1}{q} \qquad\text{and}\qquad \frac{p}{q+1}<\frac{(A')}{(B')}<\frac{p+1}{q} \]\n\t\n\tConsequently, the ratio between two quantities $(A)$ and $(B)$ is equal to the ratio of the homologous quantities $(A')$ and $(B')$.\n\n\tIf, in particular, the quantities $(A),(B),...$ are measured with a unit $(U)$, and if the quantities $(A'),(B'),...$ are measured with the unit $(U')$, homologous to $(U)$, the equal ratios:\n\t\n\t\[ \frac{(A)}{(U)}=\frac{(A')}{(U')} \]\n\t\n\tare simply the measurements of $(A)$ and of $(A')$. Consequently:\n\t\n\tThe measurements of two homologous quantities $(A)$ and $(A')$ are equal on the condition that the units chosen to measure them are also homologous quantities.\n\n\tLet us now consider a point $M$ on a half-line along the $\text{O}x$-axis. Given $x$ the measurement (following the previous definition) of $\overline{\text{O}M}$. To every point $M$ of the half-line corresponds a unique positive $x$. We will admit that to any positive value $x$ randomly chosen corresponds a unique point $M$ on the half-line.\n\t\n\tA consequence of this assumption is that there is a point, and only one, $C$, which divides the segment $\overline{\text{O}M}$ into two equal parts. This point is the point of the half-line $\overline{\text{O}M}$ such that:\n\t\n\t\[ \overline{\text{O}C}=\frac{x}{2} \]\n\t\n\tWe name this point the \"\NewTerm{middle of the segment}\index{middle of a segment}\" $\overline{\text{O}M}$.\n\t\begin{theorem}\n\tThere exists a point $M$, and only one, located on the segment $\overline{AB}$ such that the ratio $\overline{MA}/\overline{MB}$ is equal to a given positive number $\lambda$.\n\t\end{theorem}\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tObviously if $\lambda=1$, this point is the middle of the segment.\n\t\end{tcolorbox}\n\t\begin{dem}\n\tGiven $M$ any point of the segment $\overline{AB}$ on a plane; given $x$ the measurement of $\overline{AM}$ and $a$ the measurement of $\overline{AB}$: the measurement of $\overline{MB}$ will be equal to $a-x$ since $M$ is placed between $A$ and $B$. We will have:\n\t\n\t\[ \frac{\overline{MA}}{\overline{MB}}=\frac{x}{a-x} \]\n\t\n\tFor this ratio to be equal to $\lambda$, it is necessary and sufficient that $x$ is a solution of the equation:\n\t\n\t\[ \frac{x}{a-x}=\lambda \]\n\t\n\tThis equation has for unique solution:\n\t\n\t\[ x=\frac{\lambda a}{1+\lambda} \]\n\t\n\tTo this positive value of $x$, which is less than $a$, corresponds a point $M$, and only one, of the half-line $\overline{AB}$ such that $\overline{MA} = x$. 
This point $M$ satisfies, and it alone satisfies, the imposed conditions.\n\t\begin{flushright}\n\t\t$\blacksquare$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\begin{theorem}\n\tThere is a point $M$, and only one, of the line $\overline{AB}$, located outside the segment $\overline{AB}$, such that the ratio $\overline{MA}/\overline{MB}$ equals a given positive number $\lambda$ different from $1$.\n\t\end{theorem}\n\t\begin{dem} Let us prove this existence and uniqueness in two steps:\n\t\begin{enumerate}\n\t\t\item Let us suppose $\lambda>1$. Given $M$ any point of the line $\overline{AB}$ located outside the segment $\overline{AB}$: either $A$ is on the segment $\overline{MB}$, or $B$ is on the segment $\overline{MA}$. If $A$ is on the segment $\overline{MB}$, we have necessarily $\overline{MB}>\overline{MA}$, thus:\n\t\t\n\t\t\[ \frac{\overline{MA}}{\overline{MB}}<1<\lambda \]\n\t\t\n\t\tThe point $M$ thus does not satisfy the required condition.\n\t\t\n\t\tIf $B$ is on the segment $\overline{MA}$, we denote $\overline{MA} = x, \overline{AB} = a, \overline{MB}=x-a$. So we have:\n\t\t\n\t\t\[ \frac{\overline{MA}}{\overline{MB}}=\frac{x}{x-a} \]\n\t\t\n\t\tFor this ratio to be equal to $\lambda$, it is necessary and sufficient that $x$ is a solution of the equation:\n\t\t\n\t\t\[ \frac{x}{x-a}=\lambda \]\n\t\t\n\t\tThis equation admits as unique solution:\n\t\t\n\t\t\[ x=\frac{\lambda a}{\lambda-1} \]\n\t\t\n\t\twhich gives a value of $x$ that is always positive and greater than $a$. To this positive value of $x$, greater than $a$, corresponds a point $M$, and only one, of the half-line $\overline{AB}$. This point $M$, and it alone, satisfies the imposed conditions.\n\t\t\n\t\t\item Let us suppose that $\lambda<1$. We will seek the point $M$ for which (we simply reversed the ratio):\n\t\t\n\t\t\[ \frac{\overline{MB}}{\overline{MA}}=\frac{1}{\lambda} \]\n\t\t\n\t\tis a number greater than $1$. There is one and only one such point, from point (1). It is the only one that satisfies the imposed conditions.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere is no point $M$ located outside the segment $\overline{AB}$ for which $\overline{MA} / \overline{MB} = 1$. Indeed, if $A$ is on the segment $\overline{MB}$, we have, regardless of $M$, $\overline{MA}/\overline{MB}<1$, and if $B$ is on the segment $\overline{MA}$, $\overline{MA}/\overline{MB}>1$....\n\t\end{tcolorbox}\n\t\t\n\t\end{enumerate}\n\t\begin{flushright}\n\t\t$\blacksquare$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}
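For example, taking $\lambda=2$ on a segment of measurement $a$, the two previous theorems give respectively, for the interior and exterior dividing points:\n\t\n\t\[ x_{int}=\frac{2a}{1+2}=\frac{2a}{3} \qquad\text{and}\qquad x_{ext}=\frac{2a}{2-1}=2a \]\n\t\n\tand we check at once that $\overline{MA}/\overline{MB}$ is worth $\frac{2a/3}{a/3}=2$ in the first case and $\frac{2a}{a}=2$ in the second.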
\pagebreak\n\t\subsection{Plane Geometry}\n\tLet us now study geometrical objects of greater dimension than the straight line: the plane and the surface.\n\t\n\tA plane is a flat, two-dimensional surface that extends infinitely far. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. Planes can arise as subspaces of some higher-dimensional space, as with a room's walls extended infinitely far, or they may enjoy an independent existence in their own right, as in the setting of Euclidean geometry.\n\t\n\tWhen working exclusively in two-dimensional Euclidean space, the definite article is used, so \"the plane\" refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory and graphing are performed in a two-dimensional space, or in other words, in \"the plane\".\n\t\n\tLet us now consider a finite surface $(S)$ and two points $A$ and $B$ of this surface. At least two simple cases are possible:\n\t\n\t\begin{enumerate}\n\t\t\item There are points of the straight segment $\overline{AB}$ that are not on the surface $(S)$. We say in this case that \"the segment $\overline{AB}$ intersects the surface\"; the common points to the segment $\overline{AB}$ and the surface $(S)$ are the intersection points of the surface $(S)$ and of the segment $\overline{AB}$. Among these common points there are in particular the points $A$ and $B$.\n\t\t\n\t\t\item All points of the segment $\overline{AB}$ are points of the surface $(S)$. We say then that \"the segment $\overline{AB}$ is on the surface $(S)$\".\n\t\end{enumerate}\n\n\t\textbf{Definition (\#\mydef):} We name \"\NewTerm{plane}\index{plane}\" a surface such that any segment $\overline{AB}$ joining two points arbitrarily chosen on the surface is on the surface.\n\t\n\tWe will assume that such a surface exists and that, in a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following:\t\n\t\begin{itemize}\n\t\t\item Three non-collinear points (points not on a single line).\n\t\t\item A line and a point not on that line.\n\t\t\item Two distinct intersecting lines.\n\t\t\item Two parallel lines.\n\t\end{itemize}\n\t\n\tThe following properties hold in three-dimensional Euclidean space but not in higher dimensions, though they have higher-dimensional analogues:\n\t\begin{enumerate}\n\t\t\item[P1.] Two distinct planes are either parallel or they intersect in a line.\n\t\t\item[P2.] A line is either parallel to a plane, intersects it at a single point, or is contained in the plane.\n\t\t\item[P3.] Two distinct lines perpendicular to the same plane must be parallel to each other.\n\t\t\item[P4.] Two distinct planes perpendicular to the same line must be parallel to each other.\n\t\end{enumerate}\n\t\n\tThe study of planes will be made later in this section (the mathematical vectorial analysis of the plane is done in the section of Analytical Geometry). We will now focus on the study of geometric figures drawn in a given plane, figures named \"\NewTerm{plane figures}\index{plane figures}\". Their study is related to the field of \"\NewTerm{plane geometry}\index{plane geometry}\".\n\t\n\tAll the shapes below exist in a flat plane. A plane can be thought of as a flat sheet with no thickness, which goes on forever in all directions. It is absolutely flat and infinitely large, which makes it hard to draw.\n\t\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} The study of geometry can be broken into two broad types: plane geometry, which deals with only two dimensions, and solid geometry, which allows all three. The world around us is obviously three-dimensional, having width, depth and height. Solid geometry deals with objects in that space such as cubes and spheres.\\\n\t\n\t\textbf{R2.} In practice, the figures are drawn either on a sheet of paper or on the surface of the blackboard (or on a computer screen).\n\t\end{tcolorbox}\n\t\n\t\subsubsection{Displacements and Turnarounds}\n\tGiven $(F)$ a drawing made on a plane board. Imagine we make, on a transparent layer whose front side is applied on the board, a copy $(C)$ of the drawing $(F)$. 
Let us perform afterwards, by moving this transparent layer to another place on the plane board, a new drawing $(F')$ identical to $(F)$.\n\t\n\tTwo major cases have to be considered mathematically speaking:\n\t\begin{enumerate}\n\t\t\item If the recto side of the transparent layer has remained applied to the plane board, the drawing $(F')$ is obtained from $(F)$ by an operation named \"\NewTerm{displacement}\index{displacement}\" or more commonly \"\NewTerm{translation}\index{translation}\".\n\t\t\n\t\t\item If, however, the transparent layer was flipped horizontally or vertically, so that it is the verso that is applied to the plane board, the operation is named a \"\NewTerm{turnaround}\index{turnaround}\" or more commonly a \"\NewTerm{vertical or horizontal symmetry}\index{vertical symmetry}\index{horizontal symmetry}\".\n\t\end{enumerate}\n\tIn the figure below the red triangle is a translation of the blue one. The green triangle is a horizontal symmetry of the blue one plus a translation.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/translation_symmetry.jpg}\n\t\t\caption{Illustrated example of a translation and a symmetry}\n\t\end{figure}\n\t\n\t\textbf{Definition (\#\mydef):} We commonly say that the drawing $(F)$ is \"\NewTerm{stackable}\index{stackable}\" on the drawing $(F')$ and that these drawings represent equal figures.\n\n\t\subsubsection{Plane angles}\n\tWe have already defined the concept of \"\NewTerm{angle}\index{angle}\" in the previous section on Trigonometry (circle angle and spherical angle). We now also have the fourth Euclid's postulate at our disposal regarding the concept of plane angle.\n\t\n\tWe will now come back in more detail to the concept of plane angle and see the underlying notions that will enable us to address further below a particularly useful object which is the bisecting line!\n\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item[D1.] We name \"\NewTerm{angle}\index{angle}\" (or \"\NewTerm{plane angle}\index{plane angle}\") or more strictly \"\NewTerm{linear angle}\" the plane portion limited by two half-lines O$A$, O$B$, for example. The point O is called the \"\NewTerm{top}\index{top}\" of the angle, the half-lines O$A$, O$B$ are named the \"\NewTerm{sides}\index{sides}\" of the angle.\n\n\t\tIn high school, angles are measured with an instrument named a \"\NewTerm{protractor}\index{protractor}\":\n\t\t\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics[scale=0.375]{img/geometry/protractor.jpg}\n\t\t\t\caption{A half circle protractor marked in degrees ($180^\circ$)}\n\t\t\end{figure}\n\n\t\t\item[D2.] We name \"\NewTerm{angle between two half-lines $AB$, $AC$}\index{angle between two segments}\" the angle of top $A$ whose sides are the half-lines $AB$, $AC$.\n\n\t\t\item[D3.] 
The half-lines O$A$, O$B$ divide the plane into two regions: they therefore define two angles:\n\t\t\begin{enumerate}\n\t\t\t\item One, consisting of the hatched area (see figure on the far left below), is named the \"\NewTerm{salient angle}\index{salient angle}\".\n\t\n\t\t\t\item The other, consisting of the remaining hatched region of the plane (see figure in the center below), is named the \"\NewTerm{reflex angle}\index{reflex angle}\".\n\t\t\end{enumerate}\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_families.jpg}\n\t\t\t\caption{Salient angle, reflex angle, adjacent angle}\n\t\t\end{figure}\n\t\tThe notation $\widehat{BOA}$ or $\widehat{AOB}$ denotes one of the two angles: the letter that designates the top should be (usually) written in the middle (often we do not mention the top if the context is clear). Where no precision accompanies this notation, it is by definition the salient angle!\n\n\t\t\item[D4.] We name \"\NewTerm{adjacent angles}\index{adjacent angle}\" two angles which have the top and one side in common and which are placed on either side of the common side. In the figure above on the far right, the salient angles $\widehat{AOB}$, $\widehat{BOC}$ are adjacent.\n\t\t\n\t\tGiven $\widehat{AOB}$ and $\widehat{A'O'B'}$ two angles of a same plane. We have assumed previously that there is a displacement that brings $O'$ on $O$ and $A'$ on a point $A$ of the line $OA$. This displacement brings $O'B'$ either on $OB_1$, so that the two angles $(\widehat{AOB},\widehat{AOB_1})$ are not adjacent, or on $OB_2$, so that $(\widehat{AOB},\widehat{AOB_2})$ are adjacent. \n\n\t\tIn the latter case, an additional half turn around $OA$ will bring $\widehat{AOB_2}$ into the position $\widehat{AOB_1}$. This displacement and this rotation, if it occurs, replace $\widehat{A'O'B'}$ by an angle $\widehat{AOB_1}$ that is equal to it by definition.\n\t\t\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_deplacement.jpg}\n\t\t\t\caption{Representation of the movement of two angles}\n\t\t\end{figure}\n\t\tIf $OB_1$ is merged with $OB$, the two angles $\widehat{AOB_1}$ and $\widehat{AOB}$ are equal, any point of one being a point of the other; we say, in this case, that the angles $\widehat{AOB}$, $\widehat{A'O'B'}$ are \"\NewTerm{equal angles}\index{equal angles}\", which is expressed by writing:\n\t\t\n\t\t\[ \widehat{AOB}=\widehat{A'O'B'} \]\n\t\t\n\t\tIf $OB_1$ is not merged with $OB$, there are points of one of the two angles, $\widehat{AOB_1}$ for example, which are not points of $\widehat{AOB}$. On the figure above, the angle $\widehat{AOB}$ is covered by hatching, the angle $\widehat{AOB_1}$ also. The points we are speaking about are those of the angle $\widehat{BOB_1}$ covered only once by hatching. We will agree to say that the angle $\widehat{AOB_1}$ and, therefore, the angle $\widehat{A'O'B'}$ equal to it are, in this case, greater than the angle $\widehat{AOB}$, which is expressed by the inequality:\n\t\t\n\t\t\[ \widehat{A'O'B'}>\widehat{AOB} \]\n\t\t\n\t\tNow that we are able to compare angles, let us study how we can sum them up (and respectively subtract them).\n\t\t\n\t\tLet us study first the case of the sum of two adjacent angles $\widehat{AOB}$, $\widehat{BOC}$. Two cases can occur depending on whether the angles are salient or reflex:\n\t\t\begin{enumerate}\n\t\t\t\item Given $\widehat{AOB}$ and $\widehat{BOC}$ the two angles to sum (see left figure below). 
By definition, the sum of these angles is the angle $\widehat{AOC}$, which we express by the equality:\n\t\t\t\n\t\t\t\[ \widehat{AOB}+\widehat{BOC}=\widehat{AOC} \]\n\t\t\t\n\t\t\t\item Given $\widehat{AOB}$ a salient angle to be summed with the reflex angle $\widehat{BOC}$. If we cover with hatching successively both angles (see right figure below), the salient angle $\widehat{AOC}$ is covered twice:\n\t\t\end{enumerate}\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_sum.jpg}\n\t\t\t\caption{Sum of two adjacent angles}\n\t\t\end{figure}\n\t\tIn this case, the sum of the angles $\widehat{AOB}$ (salient) and $\widehat{BOC}$ (reflex) is therefore equal to $\widehat{AOC}$ plus two \"flat angles\" (see below for the definition), which is expressed by writing:\n\t\t\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tThis may seem confusing to some, but those who have already read the section of Trigonometry already know that the angles of the trigonometric circle are equal to themselves modulo $2\pi$ [rad].\n\t\t\end{tcolorbox}\n\t\tLet us now study the case of the sum of any two angles:\n\t\tThe sum of two angles $\widehat{AOB}$, $\widehat{A'O'B'}$ is, by definition, equal to the sum of the angle $\widehat{AOB}$ and of an angle $\widehat{BOC}$ equal to the angle $\widehat{A'O'B'}$ and adjacent to the angle $\widehat{AOB}$.\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_sum_any.jpg}\n\t\t\t\caption{Sum of any two angles}\n\t\t\end{figure}\n\t\tSuch an angle is obtained by a displacement which brings $O'$ on $O$ and $A'$ on a point of the half-line O$A$, followed or not by a rotation around O$A$.\n\t\t\n\t\tLet us study as a last case the sum of more than two angles:\n\t\t\n\t\tThe sum of several angles $\widehat{AOB}$, $\widehat{A'O'B'}$, etc., is by definition equal to the sum obtained by adding the first to the second, this sum to the third, and so on.\n\t\t\n\t\tGiven $\widehat{AOB}$ the first angle, $\widehat{BOC}$ the second angle, $\widehat{KOL}$ an angle equal to the last angle to add and adjacent to the previous one. The result of the sum will be $\widehat{AOL}$ augmented by as many times two flat angles as the plane was covered during this operation. We easily see that this result does not depend on the order of the angles $\widehat{AOB},\widehat{A'O'B'}$, etc., to add.\n\t\n\t\t\item[D5.] Two angles formed by two lines cut by a secant are named \"\NewTerm{alternate angles}\index{alternate angles}\" if:\n\t\t\begin{enumerate}\n\t\t\t\item They are located on either side of the secant\n\t\n\t\t\t\item They are located between the two lines\n\t\t\t\n\t\t\t\item They are not adjacent angles\n\t\t\end{enumerate}\n\t\tIn the example below, the straight lines $(X)$ and $(Y)$ are cut respectively in $A$ and $B$ by the secant $(S)$:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/alternate_angle.jpg}\n\t\t\t\caption{Example of alternate angles}\n\t\t\end{figure}\n\t\tand the two shown angles are alternate angles. In the case where $(X)$ and $(Y)$ are parallel, the alternate angles are equal. 
\n\n\t\tIt is also this property which allowed Eratosthenes to deduce the circumference of the Earth in a purely geometrical way (so it is not useless!).\n\t\t\n\t\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\t\textbf{{\Large \ding{45}}Example:}\\\\\n\t\tIndeed Eratosthenes used the observation he made of the shadow of two objects in two different places, Syene (now Aswan) and Alexandria, considered to be on the same meridian, on June 21 (the summer solstice at his time) at local solar noon. It is at this precise time of the year that, in the northern hemisphere, the Sun holds the highest position above the horizon. In a previous observation, Eratosthenes had noticed that there was no shadow in a water well at Syene. Thus, at that moment, the Sun was vertical and its light shone directly on the bottom of the water well. Eratosthenes noticed however that the same day at the same hour, an obelisk located in Alexandria formed a shadow. The Sun was no longer vertical and the obelisk had an off-centered shadow. Eratosthenes considered the light rays of the Sun to be parallel at any point on the Earth (considered as spherical). He deduced that the angle between the sunlight and the vertical was of $7.2$ degrees.\\\n\t\t\n\t\tEratosthenes then estimated the distance between Syene and Alexandria by using a bématiste, extrapolating from the duration of the camel-day journey between the two cities: the distance was estimated at $787.5$ kilometers (a measure very close to reality).\\\n\t\t\n\t\tBy the geometric theory of congruent alternate angles Eratosthenes proposed a simple figure: it consisted of a single circle having a central angle of $7.2$ degrees which intercepts an arc (connecting Syene to Alexandria) of $800$ kilometers.\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/erasthoten_alternate_angle.jpg}\n\t\t\t\caption{Alternate angle principle as used by Eratosthenes}\n\t\t\end{figure}\n\t\tBy the ratios in the circle (already known at that time), he calculated by a simple rule of three that the circumference of the Earth was equal to $39,375$ kilometers, which was a remarkably accurate measurement for that time (current measures give an average of $40,075.02$ kilometers).\n\t\t\end{tcolorbox}\n\t\end{enumerate}
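Explicitly, the rule of three mentioned above reads (with the estimated distance of $787.5\;[\text{km}]$):\n\t\n\t\[ C=\frac{360^\circ}{7.2^\circ}\cdot 787.5\;[\text{km}]=50\cdot 787.5\;[\text{km}]=39\,375\;[\text{km}] \]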
Let us now summarize the main types of angles that the reader must remember:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/angle_types.jpg}\n\t\t\caption{Angle types summary}\n\t\end{figure}\n\t\n\t\pagebreak\n\t\paragraph{Angle Measurements}\mbox{}\\\\\n\tWe have defined the equality and the sum of two or more angles. These definitions meet the conditions on quantities of the same species we have already seen previously.\n\n\tSo let us choose arbitrarily an angle of the plane $\widehat{aob}$, which will be the unit angle for the plane. The measurement of the ratio:\n\t\n\t\[ \frac{\widehat{AOB}}{\widehat{aob}} \]\n\t\n\tobtained by proceeding as explained previously during our study of the quantities of the same species, will be a positive number, denoted most of the time by the Greek letter $\alpha$ or $\theta$, named by definition the \"\NewTerm{angle measurement $\widehat{AOB}$, with the selected unit $\widehat{aob}$}\index{angle measurement}\".\n\t\n\tWe denote by the lowercase Greek letter \"pi\" the irrational number:\n\t\n\t\[ \pi=3.14159... \]\n\t\n\twhich is by definition the measurement value of a \"\NewTerm{flat angle}\index{flat angle}\" or also named \"\NewTerm{plane angle}\index{plane angle}\" (we have already seen where this value comes from in the section of Trigonometry).\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAs all angles of the plane are smaller than two flat angles, the number $\theta$ (which is the measurement of $\widehat{AOB}$) must be less than $2\pi$.\n\t\end{tcolorbox}\n\tHaving defined the flat angle, we can now define other types of commonly used angles:\n\t\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item[D1.] Two angles are \"\NewTerm{perpendicular angles}\index{perpendicular angles}\", denoted by the symbol $\perp$, when they are both right and adjacent.\n\n\t\t\item[D2.] We name \"\NewTerm{oriented angle}\index{oriented angle}\" or \"\NewTerm{vector angle}\index{vector angle}\" the angle between two vectors or lines (\SeeChapter{see section Vector Calculus page \pageref{vector}}) having the same origin, whose value measured counter-clockwise will be taken as positive, and negative if taken in the clockwise direction.\n\t\t\n\t\tThe best known example of oriented angles is the trigonometric unit circle:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_positive_oriented_or_null.jpg}\n\t\t\t\caption{Positive or null oriented angle}\n\t\t\end{figure}\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/angle_negative_oriented_or_null.jpg}\n\t\t\t\caption{Negative or null oriented angle}\n\t\t\end{figure}\n\t\t\n\t\t\item[D3.] We say that two angles are \"\NewTerm{additional angles}\index{additional angles}\" (supplementary angles) when their sum is equal to two right angles (i.e. a flat angle).\n\n\t\t\item[D4.] We say that two angles are \"\NewTerm{complementary angles}\index{complementary angles}\" when their sum is equal to a right angle (thus two complementary angles are always acute...).\n\t\t\n\t\tThe \"\NewTerm{internal angle}\index{internal angle}\" and \"\NewTerm{external angle}\" of a regular polygon are additional angles by construction:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics[scale=0.5]{img/geometry/internal_external_angle.jpg}\n\t\t\t\caption[Internal + External angle]{Internal + External angle (source: Wikipedia)}\n\t\t\end{figure}\n\t\t\n\t\end{enumerate}\n\tLet us now consider the symbols $\alpha,\beta,\gamma$ as the measurements of several angles $\widehat{AOB}$, $\widehat{BOC}$, $\widehat{COD}$.\n\n\tWe will not insist on the obvious fact that the equalities $\widehat{AOB}=\widehat{BOC}$ or $\alpha=\beta$ are equivalent, and likewise the inequalities $\widehat{AOB}<\widehat{BOC}$ or $\alpha<\beta$. 
These common sense remarks apply whenever quantities of the same species have been measured, of course with the same unit!\n\t\n\tHowever, we insist that, according to the definition of the sum of several angles, $\alpha+\beta+\gamma$ is the measure of the angle $\widehat{AOD}$ increased by as many times two flat angles $\pi$ as the plane was covered during the operations of addition:\n\t\n\t\[ \alpha+\beta+\gamma=\widehat{AOD}+2n\pi \]\n\t\n\tThe relative integer $n$ which is introduced in this calculation has a value that can be specified, but that has no importance for the mathematician or the physicist, as we will see later. We thus decide not to write the $2n\pi$ in geometry (as it will always be implicit). Also, we decide by convention to write:\n\t\n\t\[ \widehat{AOD}=\alpha+\beta+\gamma=\theta \]\n\t\n\tSo we have:\n\t\n\t\[ \widehat{AOD}=\theta \]\n\t\n\twhere the notation convention means that $\theta$ is the measurement of the angle $\widehat{AOD}$ if we have $0\leq \theta <2\pi$.\n\t\n\tIf $\theta$ is greater than $2\pi$, this equality means that the measurement of the angle $\widehat{AOD}$ is $\theta-2n\pi$, $n$ being a relative integer such that we have $0 \leq \theta-2n\pi <2\pi$.\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} If $\theta>2\pi$ there is one positive integer $n$, and only one, such that we have $0\leq \theta-2n\pi<2\pi$, that is to say:\n\t\n\t\[ \frac{\theta}{2\pi}-1<n\leq\frac{\theta}{2\pi} \]\n\t\n\tbecause the two numbers $(\theta/2\pi)-1$ and $\theta/2\pi$ are positive and differ by $1$.\\\n\t\n\t\textbf{R2.} Saying that $\theta$ is the measurement of the angle $\widehat{AOD}$ supposes that we have $0\leq \theta<2\pi$ and leads to the equality $\widehat{AOD}=\theta$. But writing $\widehat{AOD}=\theta$ does not necessarily mean that $\theta$ is the measurement of $\widehat{AOD}$; it is necessary for this that we have $0\leq \theta<2\pi$.\n\t\end{tcolorbox}\n\t\n\t\paragraph{Units of Angle Measurements}\mbox{}\\\\\n\tWe have defined the flat angle to be equal to $\pi$ without specifying the unit. This is what we will now proceed to do. There are (still and sadly) several units of angle measurement, the most important and common of which are listed below:\n\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item[D1.] We name \"\NewTerm{degree}\index{degree}\" the $180$th part of a flat angle.\n\t\t\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tThe original motivation for choosing the degree as a unit of rotations and angles is unknown.\n\t\t\end{tcolorbox}\n\t\t\n\t\tMany old measurements were made in degrees (and still are today in some areas of physics/astronomy). The sub-multiples of the degree are the \"\NewTerm{Sexagesimal minute}\index{Sexagesimal minute}\", equal to the $60$th of 1 degree, and the \"\NewTerm{Sexagesimal second}\index{Sexagesimal second}\", equal to the $60$th of the sexagesimal minute.\n\t\t\n\t\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\t\textbf{{\Large \ding{45}}Example:}\\\\\n\t\tThe value:\n\t\t\n\t\t\[ 236^\circ 20' 42'' \]\n\t\t\n\t\twill be read as two hundred thirty six degrees, twenty minutes and forty two seconds.\\\n\t\t \n\t\tThe conversion from degrees to fractional decimal angles is quite easy:\n\t\t\n\t\t\[ 236^\circ 20' 42''=\left(236+\frac{20}{60}+\frac{42}{3600}\right)^\circ\cong 236.345^\circ \]\n\t\t\n\t\t\end{tcolorbox}\n\t\tMost countries around the world sadly still use degrees in kindergarten and high schools, but without making use of the notation of minutes and seconds (not very convenient for pedagogical reasons). \n\n\t\t\item[D2.] 
As we have already seen in the section of Trigonometry, we name \"\NewTerm{radian}\index{radian}\" (denoted [rad]) the plane angle described by a secant to a circle, passing through its center, such that the arc defined by the horizontal axis through the center of the circle and the secant is of length equal to the radius of the circle:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/radian_definition.jpg}\n\t\t\t\caption{Geometric definition of the radian}\n\t\t\end{figure}\n\n\t\tIn most mathematical work beyond practical geometry, angles are typically measured in radians rather than degrees. This is for a variety of reasons; for example, the trigonometric functions have simpler and more \"natural\" properties when their arguments are expressed in radians (\SeeChapter{see section Trigonometry page \pageref{radian}}). \n\t\t\n\t\tOne complete turn ($360^\circ $) is equal to $2\pi$ radians, so a half turn of $180^\circ$ is equal to $\pi$ radians, or equivalently, the degree is a mathematical constant given by:\n\t\t\n\t\t\[ 1^\circ=\frac{\pi}{180}\;[\text{rad}] \]\n\t\t\n\t\tThus, in radians, a flat angle is equal to $\pi$ and all other angles are real multiples of this constant.\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/degrees_radians.jpg}\n\t\t\t\caption[Degrees-Radians conversions figure]{Degrees-Radians conversions figure (source: Wikipedia)}\n\t\t\end{figure}\n\n\t\t\item[D3.] We name \"\NewTerm{grad}\index{grad}\" the $200$th part of the flat angle.\n\t\t\n\t\tThe grad is also an old angle unit. Its sub-multiples are: the \"centesimal minute\", equal to the $100$th of one grad, and the \"centesimal second\", equal to one hundredth of the centesimal minute.\n\t\t\n\t\tThe value:\n\t\t\n\t\twill be read as two hundred thirty six grads, eighteen minutes and five seconds.\n\t\end{enumerate}
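For example, the correspondence between these three units for some common angles is:\n\t\n\t\[ 180^\circ=\pi\;[\text{rad}]=200\;[\text{grad}],\qquad 90^\circ=\frac{\pi}{2}\;[\text{rad}]=100\;[\text{grad}],\qquad 45^\circ=\frac{\pi}{4}\;[\text{rad}]=50\;[\text{grad}] \]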
	\pagebreak
	\paragraph{Bisector}\mbox{}\\\\
	Now that we know how to compare, add and measure angles, we will be able to look at an important concept in geometry, that of "bisection", and some related properties that we will reuse later for quite important theorems.

	\textbf{Definitions (\#\mydef):}
	\begin{enumerate}
		\item[D1.] We name "\NewTerm{bisector}\index{bisector}" the straight line that divides an angle into two equal parts.

		\item[D2.] We name "\NewTerm{half-bisector}\index{half-bisector}" the segment that divides an angle into two equal parts and starts at the origin of the angle.

		\item[D3.] We name "\NewTerm{line segment bisector}\index{line segment bisector}" the (infinite) straight line that passes through the midpoint of a segment.

		Particularly important is the "\NewTerm{perpendicular bisector}\index{perpendicular bisector}" of a segment, which, according to its name, meets the segment at right angles.
	\end{enumerate}

	Two straight lines $AB$ and $CD$ intersecting at a point $O$ form, as we already know intuitively, four angles:

	The angles $\widehat{AOC},\widehat{BOD}$, as well as the angles $\widehat{COB},\widehat{DOA}$, whose sides are opposite, are known as "\NewTerm{opposing angles by the top}\index{opposing angles by the top}" (vertically opposite angles).
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/opposite_angles.jpg}
	\end{figure}
	Obviously, if $\theta$ is the measurement of the angle $\widehat{AOC}$, the measurement of the adjacent angle $\widehat{COB}$ is $\pi-\theta$. That of the angle $\widehat{DOB}$ adjacent to the previous one is $\pi-(\pi-\theta)=\theta$, and that of $\widehat{DOA}$ is $\pi-\theta$.

	Given $OE$ the half-bisector of the angle $\widehat{AOC}$, $OG$ that of the angle $\widehat{COB}$, $OH$ that of the angle $\widehat{BOD}$, and $OF$ that of the angle $\widehat{DOA}$, we have:

	and consequently:

	We would likewise have:

	We summarize all these results, as well as the measurement properties, as follows (see the example box just after this list):
	\begin{enumerate}
		\item[P1.] Two opposite angles that share the same vertex are equal.
		\item[P2.] Two opposite angles that share the same vertex have the same bisector.
		\item[P3.] The bisectors of two supplementary adjacent angles are perpendicular.
		\item[P4.] The bisectors of the angles formed by two secants are two perpendicular lines (they form right angles).
	\end{enumerate}
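	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical sketch of the measurements above, with the notations of the figure of the two intersecting lines: if the measurement of $\widehat{AOC}$ is $\theta=\pi/6$ ($30^\circ$), then:

	$$\widehat{COB}=\pi-\frac{\pi}{6}=\frac{5\pi}{6}\ (150^\circ),\qquad \widehat{BOD}=\frac{\pi}{6}\ (30^\circ),\qquad \widehat{DOA}=\frac{5\pi}{6}\ (150^\circ)$$

	and the four measurements indeed add up to $2\pi$ ($360^\circ$).
	\end{tcolorbox}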
	\begin{figure}[H]
		\centering
		\includegraphics[width=0.8\textwidth]{img/geometry/useful_triangle_angles_in_statics.jpg}
		\caption{Typical useful triangle angles in the study of Statics}
	\end{figure}

	\pagebreak
	\subsubsection{Triangles}
	We have studied so far the concepts of dimension, point, straight segment, straight line, angle and the open (infinite) plane. However, a plane may be delimited by several lines, thereby giving geometric plane shapes, the simplest of which may be regarded as triangles (for many other shapes see the section Geometrical Shapes).

	\textbf{Definition (\#\mydef):} We name "\NewTerm{triangle}\index{triangle}" a figure (see below) formed by three segments (or lines) $AB$, $BC$, $CA$, the points $A$, $B$, $C$ being not aligned. The segments $AB$, $BC$, $CA$ are the "\NewTerm{sides}\index{sides}" of the triangle. The points $A$, $B$, $C$ are the "\NewTerm{vertices}\index{vertices}" of the triangle. The salient angle $\widehat{BAC}$, which contains all the points of $BC$, is named the angle $\hat{A}$ of the triangle, and $BC$ is then named the "\NewTerm{opposite side}\index{opposite side}".

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	We use the notation $\hat{A}$ when no confusion is possible. If necessary we use the notation $\widehat{BAC}$ with the same meaning.
	\end{tcolorbox}
	There are $6$ elements in a triangle, namely: three angles $\hat{A}$, $\hat{B}$, $\hat{C}$ and three finite sides $\overline{AB}$, $\overline{BC}$, $\overline{CA}$, most of the time simply denoted by $AB$, $BC$, $CA$.

	We will designate by $a=BC$, $b=AC$, $c=AB$ the lengths of the sides, measured with the same unit of measurement, and obviously by $\hat{A}$, $\hat{B}$, $\hat{C}$ the measurements of the angles.

	\begin{theorem}
	The sum of the angles of a plane triangle is always equal to $\pi$ radians (or $180^\circ$). The proof is quite simple and this result is named the "\NewTerm{angle sum theorem}\index{angle sum theorem}\label{angle sum theorem}".
	\end{theorem}
	\begin{dem}
	In the figure below, $ABC$ is any triangle, and $D$ is the line parallel to $\overline{BC}$ passing through $A$. We observe:
	\begin{enumerate}
		\item The blue angles have the same amplitude (value) because they are alternate angles (the segment $\overline{BC}$ and the line $D$ being parallel)

		\item Also, the green angles have the same amplitude (value) because they are alternate angles.

		\item We notice that the sum of the blue $+$ red $+$ green angles is equal to a flat angle at $A$.
	\end{enumerate}
	From the equalities seen in (1) and (2) above, we deduce that:

	$$\hat{A}+\hat{B}+\hat{C}=\pi$$

	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/triangle_sum_angles.jpg}
		\caption{Sum of angles of a plane triangle}
	\end{figure}
	This proof is valid regardless of the type of triangle drawn in the plane.
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	As for all other shapes that we will see in this section, the calculations of the perimeter and area (surface) are given in the section Geometric Shapes page \pageref{known surfaces}.
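	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical sketch of the angle sum theorem: if a triangle has $\hat{A}=90^\circ$ ($\pi/2$) and $\hat{B}=35^\circ$, then necessarily:

	$$\hat{C}=180^\circ-90^\circ-35^\circ=55^\circ$$

	so the third angle never needs to be measured once the other two are known.
	\end{tcolorbox}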
	\pagebreak
	\paragraph{Equal Triangles (congruent triangles)}\mbox{}\\\\
	\textbf{Definition (\#\mydef):} We say that two triangles are "\NewTerm{equal triangles}\index{equal triangles}" when we can, by a movement or by a reversal or both combined, superimpose all the vertices of the first triangle with those of the second. We then also say that the triangles are "\NewTerm{homologous triangles}\index{homologous triangles}" or "\NewTerm{congruent triangles}\index{congruent triangles}".

	If a triangle $ABC$ is congruent to a triangle $DEF$, the relation can be written mathematically as:

	$$\triangle ABC\cong\triangle DEF$$

	From this definition, it follows that two triangles are congruent when either of the following two properties is satisfied:
	\begin{enumerate}
		\item[P1.] They have one equal side and two equal angles.
		\begin{dem}
		So we state that two triangles having an equal side $\overline{BC} = \overline{B'C'}$ between two equal angles $\hat{B}=\hat{B}'$, $\hat{C}=\hat{C}'$ are equal.

		In other words, if two triangles have two angles equal pairwise, then the third angles are equal too (by the angle sum theorem). That said, we can take as the equal side the one located between the two equal angles without loss of generality.

	Since $\overline{BC} = \overline{B'C'}$ (see figure below), there is a movement which brings $B'$ to $B$ and $C'$ to $C$. This movement brings $A'$ either onto $A_1$, located on the same side relatively to the segment $\overline{BC}$, or onto $A_2$, symmetric of $A_1$ relatively to this same segment.

	The two segments $\overline{BA}$ and $\overline{BA_1}$ form by hypothesis, with $\overline{BC}$, the same angle $\hat{B}=\hat{B}'$. As they are by construction on the same side of $\overline{BC}$, they are merged. Both segments $\overline{CA}$ and $\overline{CA_1}$ are merged for the same reason, because $\hat{C}=\hat{C}'$. $A_1$ is then merged with $A$. The two triangles $ABC$, $A'B'C'$ are then indeed congruent.
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/congruent_triangles_one_side_equal_two_angles_equal.jpg}
			\caption{Two triangles having equal side and two equal angles}
		\end{figure}
		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}
		This result is sometimes named the "\NewTerm{angle-side-angle theorem}\index{angle-side-angle theorem}".

		\item[P2.] They have a same angle between two sides of equal length.
		\begin{dem}
		So we state that two triangles with an equal angle $\hat{A}=\hat{A}'$ between two equal sides $\overline{AB} = \overline{A'B'}$ and $\overline{AC}= \overline{A'C'}$ are equal.

		Since $\overline{AB} = \overline{A'B'}$ (see figure below), there is a movement which brings $A'$ to $A$ and $B'$ to $B$; it brings $C'$ either onto a point $C_1$ located on the same side of $\overline{AB}$ as $C$, or onto $C_2$, symmetric of $C_1$ with respect to $\overline{AB}$. If it brought it onto $C_2$, a half turn around $\overline{AB}$ would take it onto $C_1$. The segments $\overline{AC}$ and $\overline{AC_1}$, located on the same side of $\overline{AB}$, form by hypothesis the same angle with $\overline{AB}$, since $\hat{A}=\hat{A}'$. They are then merged. The hypothesis $\overline{AC}=\overline{AC_1}$ then makes us conclude that $C_1$ and $C$ coincide. The two triangles $ABC$, $A'B'C'$ are then congruent.
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/congruent_triangles_one_equal_angle_two_equal_sides.jpg}
			\caption{Two triangles having equal angles and two equal sides}
		\end{figure}
		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}
	\end{enumerate}
	\begin{corollary}
	As a corollary we have that two triangles are congruent if their three sides are equal. This is sometimes named the "\NewTerm{SSS (side-side-side) theorem}\index{side-side-side theorem}".
	\end{corollary}
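	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small sketch of how these criteria are used in practice: if two triangles both have sides of lengths $3$, $4$ and $5$ (in the same unit), the SSS theorem lets us conclude that they are congruent without checking any angle. Likewise, two triangles each having a side of length $5$ comprised between angles of $40^\circ$ and $60^\circ$ are congruent by the angle-side-angle theorem, the third angle then being automatically $80^\circ$ in both.
	\end{tcolorbox}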
	Now, about drawing conventions, it is common to mark equal angles or equal sides of triangles with the following symbols:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/congruent_triangle_1.jpg}
		\caption{First typical notation for congruent triangles}
	\end{figure}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/congruent_triangle_2.jpg}
		\caption{Second typical notation for congruent triangles}
	\end{figure}

	\pagebreak
	\paragraph{Isosceles Triangles}\mbox{}\\\\
	\textbf{Definition (\#\mydef):} We say that $ABC$ is an "\NewTerm{isosceles triangle}\index{isosceles triangle}" when two of its sides $AB$ and $AC$ are equal ("iso" meaning "same"). The third side $\overline{BC}$ is then named the "\NewTerm{base}\index{base (triangle)}" of the triangle.

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	We say that a triangle is "\NewTerm{scalene}\index{scalene triangle}" when it has three unequal sides.
	\end{tcolorbox}
	\textbf{Definition (\#\mydef):} We name "\NewTerm{mediator}\index{mediator of a triangle}\label{mediator}" of a segment $\overline{BC}$ the perpendicular to $\overline{BC}$ at the point $H$ of this segment, the midpoint of $\overline{BC}$.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mediator_isosceles_triangles.jpg}
		\caption{Representation of the mediator in isosceles triangle}
	\end{figure}
	\begin{theorem}
	In an isosceles triangle $ABC$ as shown above, the angles $\hat{B}$ and $\hat{C}$ opposed to the equal sides are equal.
	\end{theorem}

	\begin{dem}
	The two (right) triangles $BAH$ and $CAH$ defined by the bisector of $\widehat{A}$ have an equal angle $\widehat{BAH}=\widehat{CAH}$ by construction between two equal sides: $\overline{AH}$, which is the shared (common) side, and $\overline{AB} = \overline{AC}$ by hypothesis. As the angles $\widehat{BHA}=\widehat{CHA}$ are right and equal, and as the sum of the angles of a triangle is equal to a flat angle ($180^\circ$ or $\pi$ radians), the angles $\widehat{HBA}$ and $\widehat{ACH}$ are equal.
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	This last proof is the one referred to in the comic Logicomix on page $55$, and it is this type of logical deduction that is said to have brought the young Bertrand Russell to become interested in logic and later to become one of the most famous logicians in modern history.
	\end{tcolorbox}

	\pagebreak
	\begin{theorem}
	In an isosceles triangle $ABC$ as shown above, the mediator of $\overline{BC}$ and the bisector of the angle $\hat{A}$ coincide (figure below).
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/locus_mediator_bisector.jpg}
	\end{figure}
	In other words, the locus of the points $M$ equidistant (at the same distance) from two given points $B$ and $C$ is the mediator $(D)$ of the segment $\overline{BC}$.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	In geometry, a "\NewTerm{locus}\index{locus}" is a set of points (commonly, a line, a line segment, a curve or a surface) whose location satisfies or is determined by one or more specified conditions. More formally, we name "locus" of a point $M$, subject to some constraints (conditions), all the positions occupied by this point $M$ that still satisfy the constraints (conditions).
	\end{tcolorbox}
	\end{theorem}
	\begin{dem}
	We will prove first that any point of the locus is on the line $(D)$, and afterwards that any point of $(D)$ is a point of the locus.
	\begin{enumerate}
		\item Any point of the locus is on the line $(D)$:

		In other words, in the figure above, the assumption that $\overline{MB} =\overline{MC}$ implies that $M$ is on the mediator of $\overline{BC}$. Indeed, if $\overline{MB} =\overline{MC}$, the triangle $MBC$ is isosceles and the apex $M$ is on the mediator of $BC$ (which coincides, as we know, with the bisector).

		\item Any point of $(D)$ is a point of the locus:

		This means that if $M$ is on the mediator of $\overline{BC}$, we have $\overline{MB} = \overline{MC}$. Indeed, if $M$ is on the line $(D)$, which meets the line $BC$ at $H$, the midpoint of $\overline{BC}$, the triangles $MHB$, $MHC$ are then equal (second case of equality: $\widehat{BHM}=\widehat{CHM}$ because these angles are right angles, $\overline{HM}$ is common, and $\overline{HB} = \overline{HC}$ because $H$ is the midpoint of $\overline{BC}$): the sides $\overline{MB}$, $\overline{MC}$ are then also equal and we have indeed $\overline{MB} = \overline{MC}$.
	The point $M$ is then a point of the locus.

	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	Using this theorem we can state another one:
	\begin{theorem}
	Through a point $A$ taken outside a line $BC$, we can draw a single perpendicular to this line (or, in other words: through a given point outside of a given line we can draw one and only one perpendicular line).

	This is equivalent to Euclid's fifth postulate and gives a way to build an isosceles triangle, as the apex is on the perpendicular of the base.
	\end{theorem}
	\begin{dem}
	The proof is done in two steps:
	\begin{enumerate}
		\item There exists a perpendicular line:

		Indeed, given a triangle $ABC$, let us subject this triangle to half a turn around $BC$ (horizontal symmetry), as in the figure below:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/isoceles_triangle_simetry.jpg}
		\end{figure}
		The apex $A$ comes to $A'$, symmetric by definition relative to $\overline{BC}$. Since the figures $ABC$ and $A'BC$ are equal, $\overline{AB}=\overline{A'B}$ and $\overline{AC} = \overline{A'C}$.

		$\overline{BC}$ is then the mediator of $\overline{AA'}$ and the lines $BC$ and $AA'$ are perpendicular. $\overline{AA'}$ is therefore indeed a perpendicular to $\overline{BC}$ passing through $A$.

		\item The perpendicular line is unique:

		Given $\overline{AH}$, a perpendicular drawn from $A$ to $\overline{BC}$, it meets the line $BC$ at a point $H$.

		Let us suppose, for instance, that $H$ is different from $B$. The angles $\widehat{AHB}$ and $\widehat{A'HB}$, which are deduced from each other by the half turn, are equal, and as each of them is a right angle, the angle $\widehat{AHA'}$ is a flat angle. The line $AH$ thus coincides with the straight line $AA'$ and is therefore the only perpendicular to $\overline{BC}$ which passes through the apex $A$.

	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\textbf{Definitions (\#\mydef):}
	\begin{enumerate}
		\item[D1.] We name "\NewTerm{orthogonal projection}\index{orthogonal projection}" of a point $A$ on a line $BC$ the point $H$ where the perpendicular drawn from $A$ to this line meets it. The point $H$ is also named the "\NewTerm{foot}\index{foot of a perpendicular}" of the perpendicular.

		We have already seen the vector version of this projection in the section on Vector Calculus, where it was denoted by $\text{proj}_{\overline{BC}} A$.

		\item[D2.] We name "\NewTerm{geometrical distance}\index{geometrical distance}" from $A$ to $BC$ the length of the segment $\overline{AH}$.
	\end{enumerate}
	Since in the previous figure $\overline{BC}$ is the mediator of $\overline{AA'}$, $H$ is the midpoint of $\overline{AA'}$. So a point $A$ and its symmetric point $A'$ with respect to a straight line $(D)$ are characterized by the following two properties, which we will not prove as we consider them sufficiently intuitive:
	\begin{enumerate}
		\item[P1.] $\overline{AA'}$ is perpendicular to $(D)$
		
		\item[P2.]  
	The middle of $\overline{AA'}$ is on $(D)$
	\end{enumerate}
	The line $\overline{AB}$, which connects the apex $A$ to a point of the straight line $BC$ other than the foot $H$ of the perpendicular drawn from $A$ to this line, is named an "\NewTerm{oblique line}\index{oblique line}", the point $B$ being named the "\NewTerm{oblique foot}\index{oblique foot}".

	\pagebreak
	\paragraph{Equilateral Triangles}\mbox{}\\\\
	\textbf{Definition (\#\mydef):} A triangle $ABC$ is said to be an "\NewTerm{equilateral triangle}\index{equilateral triangle}\label{equilateral triangle}" when all its three sides are equal. In the familiar Euclidean geometry, equilateral triangles are also equiangular; that is, all three internal angles $\hat{A},\hat{B},\hat{C}$ are also congruent to each other and are each equal to $\pi/3$ ($60^\circ$). They are regular polygons (see the definition further below), and can therefore also be referred to as "\NewTerm{regular triangles}\index{regular triangle}":
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/equilateral_triangle_definition.jpg}
		\caption{Equilateral Triangle}
	\end{figure}
	As for all other shapes that we will see in this section, the calculation of the perimeter and area (surface) is given in the section Geometric Shapes.

	\paragraph{Right Triangle}\mbox{}\\\\
	\textbf{Definition (\#\mydef):} A "\NewTerm{right triangle}\index{right triangle}" (American English) or "\NewTerm{right-angled triangle}\index{right-angle triangle}" (British English) is a triangle in which one angle is a right angle (that is, a $90^\circ$ angle or $\pi/2$ [rad]). The relation between the sides and angles of a right triangle is the basis of trigonometry.

	The side opposite the right angle is named the "\NewTerm{hypotenuse}\index{hypotenuse}" (side $c$ in the figure). The sides adjacent to the right angle are named "\NewTerm{legs}\index{legs}". Side $a$ may be identified as "the side adjacent to angle $\hat{B}$" and "opposed to (or opposite) angle $\hat{C}$", while side $b$ is the side adjacent to angle $\hat{C}$ and opposed to angle $\hat{B}$.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/right_angle_triangle_shape.jpg}
		\caption{Right triangle}
	\end{figure}
	To say that the triangle is right at $A$ means that the right angle is at $\hat{A}$.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	In a right triangle, the longest side is always the side opposite the right angle. We will prove this property with the Pythagorean theorem (see further below).
	\end{tcolorbox}

	\paragraph{Right Isosceles Triangle}\mbox{}\\\\
	\textbf{Definition (\#\mydef):} A "\NewTerm{right isosceles triangle}\index{right isosceles triangle}" $ABC$ is both right and isosceles, meaning that it has both a right angle and two sides of equal length.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/right_isoceles_triangle.jpg}
		\caption{Right isosceles triangle}
	\end{figure}
	The main vertex (apex) corresponds to the right angle ($A$).
	Indeed, as the hypotenuse $\overline{BC}$ must be the greatest side, the two sides $\overline{AB}$ and $\overline{AC}$ have the same (smaller) length!

	So far we have, as a summary:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/type_of_triangles.jpg}
	\end{figure}

	\pagebreak
	\paragraph{Inequalities in the triangles}\mbox{}\\\\
	Let us now see some interesting inequalities (properties) in the triangle.
	\begin{enumerate}
		\item[P1.] First let us prove that in any triangle, a side opposite to a right or obtuse angle (greater than or equal to $90^\circ$ or $\pi/2$ [rad]) is greater than each of the other two sides of the triangle.

		\begin{dem}
		Consider the triangle $ABC$ below, where $\hat{A}\geq \pi/2$, and let $Bx$ be the half-line supporting $\overline{BC}$. Let us mark, on the half-line $Bx$, a length $\overline{BD}=\overline{BA}$ to construct an isosceles triangle inside the original triangle:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/property1_triangle.jpg}
		\end{figure}
		Therefore the triangle $BAD$ is an isosceles triangle whose angle at the base $\widehat{BAD}$ is obviously acute (less than $\pi/2$, i.e. less than $90^\circ$).

		So by construction $\widehat{BAD}<\widehat{BAC}$. The half-line $AD$ is by construction interior to the angle $\widehat{BAC}$, so $D$ lies between $B$ and $C$, and it follows:

		$$\overline{BC}>\overline{BD}$$

		and as $\overline{BD}=\overline{BA}$ by construction we have:

		$$\overline{BC}>\overline{BA}$$

		which finishes our proof, as the reasoning is the same to prove that $\overline{BC}>\overline{CA}$.
		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}

		\item[P2.] In any triangle whose sides have strictly increasing lengths, one side is always less than the sum of the other two.
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/property2_triangle.jpg}
		\end{figure}
		\begin{dem}
		Let $D$ be the point of the side $\overline{BC}$ such that $\overline{BD}=\overline{BA}=c$ are the two equal sides of the isosceles triangle $ABD$. We get:

		$$a=\overline{BC}=\overline{BD}+\overline{DC}=c+\overline{DC}$$

		The triangle $ABD$ being isosceles, the angle at the base $\widehat{ADB}$ is acute and its supplement $\widehat{ADC}$ is obtuse. In the triangle $ADC$, we then get from the property P1 above:

		$$b=\overline{AC}>\overline{DC}$$

		hence:

		$$a=c+\overline{DC}<c+b$$

		This is the famous "\NewTerm{triangle inequality}\index{triangle inequality}" in its geometrical version. We will see it again in many other sections of this book, in more abstract spaces and mathematical concepts.

		The property is immediate for the other sides $b$ and $c$ by permutation of the method:

		$$b<a+c,\qquad c<a+b$$

		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}


		\item[P3.] In any triangle any side is greater than the difference of the other two.
		\begin{dem}
		Suppose we have $a>b>c$.
		Therefore, by P2, we also have:

		$$a<b+c$$

		Subtracting $c$ from both sides of the inequality gives:

		$$a-c<b$$

		The property is immediate for the other sides $b$ and $c$ by permutation of the method:

		$$b-c<a,\qquad a-b<c$$

		And finally, as:

		$$a>b>c$$

		for any triangle with increasing sides we have:

		$$a-b<c<b<a<b+c$$

		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}
	\end{enumerate}
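	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical sketch of these inequalities: lengths $a=7$, $b=5$, $c=3$ can be the sides of a triangle, since:

	$$7<5+3,\qquad 7-5<3,\qquad 7-3<5$$

	whereas lengths $6$, $3$ and $2$ cannot, since $6>3+2$.
	\end{tcolorbox}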
	\paragraph{Triangles remarkable interior lines}\label{triangles remarkable interior lines}\mbox{}\\\\
	A little summary of the remarkable interior lines of triangles is probably necessary at this point (before continuing with other theorems on triangles), to avoid too much confusion, as we have given a lot of definitions so far:
	\begin{itemize}
		\item The "\NewTerm{perpendicular bisector}\index{perpendicular bisector}" of a segment is the line which is perpendicular to the segment and which passes through its center.

		\item The "\NewTerm{height}\index{height}" or "\NewTerm{altitude}\index{altitude}" of a triangle is a line which passes through a vertex of the triangle and which intercepts the opposite edge at a right angle.

		\item The "\NewTerm{angle bisector}\index{angle bisector}" of an angle is the half-line that divides the angle into two equal angles.

		\item The "\NewTerm{median}\index{median}" of a triangle passes through a vertex and intercepts the opposite edge at its center.

		\item The "\NewTerm{mediator}\index{mediator}" of a side is the line perpendicular to this side passing through its midpoint (its perpendicular bisector).
	\end{itemize}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/bissectors_median_mediator_height.jpg}
		\caption{Bisectors, Median, Mediator, Height in a triangle}
	\end{figure}
	Obviously, in some triangles part or all of the lines above are not distinguishable.
	In a triangle:
	\begin{itemize}
		\item The intersection of the $3$ altitudes is named the "\NewTerm{orthocenter}\index{orthocenter}".
		\item The intersection of the $3$ medians is named the "\NewTerm{centroid}\index{centroid}".
		\item The intersection of the $3$ angle bisectors is the center of the inscribed circle.
		\item The intersection of the $3$ perpendicular bisectors is the center of the circumscribed circle.
	\end{itemize}

	\pagebreak
	\paragraph{Pythagorean theorem}\label{pythagorean theorem}\mbox{}\\\\
	Now that we have seen what a triangle is and reviewed some of its properties, as well as the concept of angle, we can prove the famous "Pythagorean theorem", which gives the relation that must be satisfied by the three numbers representing the sides of a right triangle in Euclidean Geometry, and which also makes circle trigonometry possible (\SeeChapter{see section Trigonometry page \pageref{circle trigonometry}}).

	\begin{theorem}
	The square of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides.
	\end{theorem}
	The theorem has been given numerous proofs, possibly the most of any mathematical theorem. They are very diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and indeed to objects that are not triangles at all but $n$-dimensional solids.

	Among all possible proofs we chose to present the following one, as it seemed to us the easiest for readers (mainly students in this case):
	\begin{dem}
	Given a square (a polygon with four right angles) inside which is inscribed another square of smaller side, we can calculate the surface of the inscribed square thanks to the right triangles filling the empty spaces between the two squares, as presented below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/pythagorean_theorem.jpg}
	\end{figure}
	The surface of the white square is obviously:

	$$(a+b)^2$$

	and that of the gray square:

	$$c^2$$

	To get the surface of the gray square we can subtract from the white square the surface of the four right triangles, each having for surface (\SeeChapter{Geometric Shapes}):

	$$\frac{ab}{2}$$

	The surface of the gray square is finally:

	$$c^2=(a+b)^2-4\,\frac{ab}{2}=a^2+2ab+b^2-2ab=a^2+b^2$$

	We then get the final result of the famous "\NewTerm{Pythagorean theorem}\index{Pythagorean theorem}":

	$$c^2=a^2+b^2$$

	In a right triangle, the square of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the sides of the right angle.
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	In the particular case where we have three integers $a$, $b$ and $c$ satisfying the Pythagorean theorem (there are infinitely many such integer combinations), we speak of a "\NewTerm{Pythagorean triple}\index{Pythagorean triple}".

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	We owe this proof to the Chinese mathematician Zhao Shuang (2nd century).
	\end{tcolorbox}
	The converse of the Pythagorean theorem is often mentioned in lower grades; it states: in a triangle, if the square of one side is equal to the sum of the squares of the other two sides, then the triangle is a right triangle and the hypotenuse is the longest side of the triangle.
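	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	The best known Pythagorean triple is $(3,4,5)$:

	$$3^2+4^2=9+16=25=5^2$$

	so, by the converse above, a triangle with sides $3$, $4$ and $5$ is necessarily a right triangle. Another classic triple is $(5,12,13)$, since $25+144=169$.
	\end{tcolorbox}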
	\paragraph{Thales' Theorem (intercept theorem)}\mbox{}\\\\
	The "\NewTerm{intercept theorem}\index{intercept theorem}", also known as "\NewTerm{Thales' theorem}\index{Thales' theorem}\label{thales theorem}" (not to be confused with another theorem with that name), is an important theorem in elementary geometry about the ratios of various line segments that are created when two intersecting lines are intercepted by a pair of parallels.

	Having just proved the Pythagorean theorem, and now that the concepts of parallels, segments, angles and others are quite well known to us, we can finally prove Thales' theorem, of which we give here a possible proof that first requires two lemmas:
	\begin{lemma}
	Triangles with same surface: if two triangles have a common side and if the third vertices are on a parallel to this common side, then they have the same surface!
	\end{lemma}
	\begin{dem}
	Given the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/thales_theorem_lemma_1.jpg}
		\caption[]{First construction for Thales' theorem proof}
	\end{figure}
	We have:

	$EFGH$ is a rectangle, because its sides are parallel in pairs and it has at least two right angles. So its opposite sides have the same length: $\overline{EH} = \overline{FG}$.

	$\overline{EH}$ is the height relative to $\overline{AB}$ in the triangle $EAB$ and $\overline{FG}$ is the height relative to $\overline{AB}$ in the triangle $FAB$.

	The surface of a triangle depends only on the length of a side and the length of the height relative to this side (\SeeChapter{see section Geometric Shapes page \pageref{unspecified triangle}}). For both triangles $EAB$ and $FAB$ these lengths are equal, so they have the same surface!

	So we have indeed that if two triangles have a common side and if the third vertices are on a parallel to this common side, then they have the same surface!
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}

	\begin{lemma}
	We have:

	$$\frac{a}{b}=\frac{c}{d}\quad\Rightarrow\quad \frac{a}{b}=\frac{a+c}{b+d}$$

	\end{lemma}
	\begin{dem}
	Given the proportions ratio ("proportional calculus" or "cross product"):

	$$\frac{a}{b}=\frac{c}{d}$$

	then:

	$$ad=bc$$

	If $ad=bc$, then:

	$$ab+ad=ab+bc$$

	where we added a same (positive or negative) number to the two members of the equality. Thus, after factorization:

	$$a(b+d)=b(a+c)$$

	and by applying conversely the cross product:

	$$\frac{a}{b}=\frac{a+c}{b+d}$$

	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	Let us now state Thales' theorem.
	\begin{theorem}
	If two intersecting lines are cut by parallel lines, the line segments cut by the parallel lines from one of the lines are proportional to the corresponding line segments cut by them from the other line, and from this we can also deduce other remarkable identities.
	\end{theorem}
	\begin{dem}
	Given the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/thales_theorem_first_approach.jpg}
		\caption{First approach of Thales' theorem}
	\end{figure}
	With $AB$ parallel to $CD$:

	We have proven previously that if two triangles have a common side and if the third vertices are on a parallel, then they have the same surface.
	So the triangles $\Delta ACD$ and $\Delta BCD$ have the same surface!

	Adding to each of these two surfaces that of the triangle $\Delta OCD$, we get that the triangles $\Delta ODA$ and $\Delta OCB$ have the same surface!

	Using the cross product, we then deduce that:

	Given $h_1$ the height issued from $D$ and $h_2$ the height issued from $C$ in the triangle $\Delta OCD$:

	Conclusion:

	Given now the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/thales_theorem_second_approach.jpg}
		\caption{Second approach of Thales' theorem}
	\end{figure}
	The triangles $\Delta IJD$ and $\Delta IDB$ have the same surface by our first lemma, and likewise for the triangles $\Delta OJD$ and $\Delta OIB$; therefore:

	hence:

	therefore:

	In the same way, in the triangles $OIA$ and $OCJ$, we get:

	From Lemma 2, as:

	Then:

	By applying to the first triangle $OJB$ what we did during the first approach, we also get:

	So finally, by gathering all the previous results, we get:

	which constitutes the "\NewTerm{Thales' theorem}" of ratios.
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}

	Here is an excellent summary of what we have seen so far about Thales' theorem:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/thales_summary.jpg}
		\caption[Thales' theorem]{Thales' theorem (source: Wikipedia)}
	\end{figure}
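	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical sketch of the theorem, with generic notations of our own (not those of the figures above): let two secants issued from a point $O$ be cut by two parallels, the first parallel cutting them at $A$ and $A'$, the second at $B$ and $B'$. If $\overline{OA}=2$, $\overline{OB}=3$ and $\overline{OA'}=4$, then the proportionality of the ratios gives:

	$$\frac{\overline{OA}}{\overline{OB}}=\frac{\overline{OA'}}{\overline{OB'}}\quad\Rightarrow\quad \overline{OB'}=\frac{3\times 4}{2}=6$$

	\end{tcolorbox}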
	\pagebreak
	\subsubsection{Parallelism}
	\textbf{Definition (\#\mydef):} We name "\NewTerm{parallels}\index{parallels}" two lines (not necessarily straight ones!) equally distant from one another over their entire length.

	This concept is directly connected to Euclid's fifth postulate (the parallel postulate) and is often considered the most important of them, it having been proved that it cannot be deduced from the other axioms and that it is not respected in non-Euclidean geometries (\SeeChapter{see section Non-Euclidean Geometry page \pageref{non-euclidean geometry}}).

	The consequences of this postulate are, as we know, the following in a Euclidean geometry:
	\begin{enumerate}
		\item If two lines $AB$ and $CD$ are parallel, any straight line that cuts one cuts the other one.
		\begin{dem}
		Let $F$ be the point common to the straight line $CD$ and the straight line $E'F'$: if the straight line $E'F'$ did not cut the straight line $AB$, it would be parallel to it, and through the point $F$ two straight lines $CD$ and $E'F'$ would pass parallel to a third one $AB$, which is not possible. So the straight line $E'F'$ cuts the line $AB$.
		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}

		\item Two straight lines $AB$ and $CD$ parallel to a third line $E'F'$ are parallel to each other.
		\begin{dem}
		If the straight line $CD$ were not parallel to the straight line $AB$, it would cut (cross) it. It would then also cut (cross) the straight line $E'F'$, which is parallel to the straight line $AB$, and it would therefore not be parallel to $E'F'$.
		\begin{flushright}
			$\blacksquare$  Q.E.D.
		\end{flushright}
		\end{dem}
	\end{enumerate}
	\begin{theorem}
	If two parallel lines are cut by a secant then:
	\begin{enumerate}
		\item The alternate interior angles are equal;
		\item The alternate exterior angles are equal;
		\item The corresponding angles are equal.
	\end{enumerate}
	\end{theorem}
	\begin{dem}
	Consider two (coplanar) parallels $AB$ and $CD$ and a secant $EF$:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/two_parallels_with_a_secant.jpg}
	\end{figure}
	\begin{enumerate}
		\item Through the middle O of $\overline{EF}$ let us draw the perpendicular $\overline{GH}$ to $\overline{AB}$, which is also perpendicular to $\overline{CD}$. The right triangles $E$O$G$ and $F$O$H$ have an equal acute angle $\widehat{G\text{O}E}=\widehat{F\text{O}H}$ and an equal hypotenuse: O$F = $O$E$. They are therefore equal, and the angles $\widehat{\text{O}FH}$ and $\widehat{GE\text{O}}$ are equal.

		\item The alternate exterior angles $\widehat{E'EB}$ and $\widehat{F'FC}$ are equal, because $\widehat{E'EB}$ is opposed by the apex to the angle $\widehat{GE\text{O}}$, which is an alternate interior angle with the angle $\widehat{\text{O}FH}$.
	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	... and vice versa: if two straight lines are cut by a secant forming with these straight lines:
	\begin{enumerate}
		\item Two equal alternate interior angles;
		\item Two equal alternate exterior angles;
		\item Two equal corresponding angles;
	\end{enumerate}
	then these two straight lines are parallel.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	To prove the parallelism of two straight lines, it is necessary and sufficient that the alternate interior angles, the alternate exterior angles or the corresponding angles, formed by these two straight lines with a secant transversal, are equal (for example, all equal to $\pi/2$ when the secant is perpendicular to one of the lines).
	\end{tcolorbox}

	\pagebreak
	\subsubsection{Circle}\label{circle}
	\textbf{Definition (\#\mydef):} We name "\NewTerm{circle}\index{circle}" the locus of the points $M$ of the plane that are at a given distance $R$, named the "\NewTerm{radius}\index{radius}" of the circle, from a fixed point O, named the "\NewTerm{center of the circle}\index{center of the circle}". Or, using the concept of "locus" defined earlier: a circle is defined as the locus of a point that is at a given distance from a fixed point, the center of the circle:

	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circle_definition.jpg}
		\caption{Circle and Radius}
	\end{figure}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The word "radius" means either the segment $\overline{\text{O}M}$ or its measurement $R$.
	\end{tcolorbox}
	Circles are directly concerned by the third Euclid postulate that we stated earlier above.

	We name "\NewTerm{diameter}\index{diameter}", denoted $\varnothing$, of a circle every line passing through the center O of the circle. Any diameter meets the circle at two points $A$ and $B$ such that by construction:

	$$\overline{AB}=2R$$

	points which we name the "\NewTerm{extremities of the diameter}\index{extremities of the diameter}".
	We reserve the expression "\NewTerm{diameter $\overline{AB}$}" for the diameter of extremities $A$ and $B$. We say that two points of a circle are "\NewTerm{diametrically opposed}\index{diametrically opposed points}" when they are the two ends of the same diameter:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circle_diameter.jpg}
		\caption{Circle and Diameter}
	\end{figure}

	A circle divides the plane into two parts: one which contains the center, which we name the "\NewTerm{inner region}\index{inner region}", and one that does not contain it, which we name the "\NewTerm{outer region}\index{outer region}":
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circle_inner_outer.jpg}
	\end{figure}

	\begin{theorem}
	The necessary and sufficient condition for a point $P$ to be strictly inside a circle $(C)$, of center O and radius $R$, is $\overline{\text{O}P}<R$.
	\end{theorem}
	\begin{dem}
	There are two cases to consider for the proof:
	\begin{enumerate}
		\item The condition is necessary: if, by hypothesis, $P$ is inside the circle $(C)$, it is located either on O, or between the ends $A$ and $B$ of the diameter defined by the locus of the points $M$. If it is on O the proposition is obvious; if it is not on O, then it is between O and $A$ for example, and we have $\overline{\text{O}P}<\overline{\text{O}A}$, that is to say $\overline{\text{O}P}<R$.

		\item The condition is sufficient: if, by hypothesis, $\overline{\text{O}P}<R$, then $P$ is between the extremities $A$ and $B$ of the locus defined by the points $M$, so inside the circle $(C)$.
	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{corollary}
	The necessary and sufficient condition for a point $P$ to be outside a circle $(C)$ is $\overline{\text{O}P}>R$.
	\end{corollary}

	We name "\NewTerm{string of a circle}\index{string of a circle}" (more commonly named a chord) the segment $\overline{CD}$ whose extremities $C$ and $D$ are on the circle (on the locus of the points $M$), as visible in the figure below.

	\begin{theorem}
	The mediator of a string $\overline{CD}$ is a diameter and, obviously, $C\text{O}D$ is an isosceles triangle.
	\end{theorem}
	\begin{dem}
	The mediator $\Delta$ of $\overline{CD}$ (see figure below), a string of the circle $(C)$ of center O and radius $R$, passes through the point O because, as we proved during our study of triangles, we have $\overline{\text{O}C}=\overline{\text{O}D}=R$.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circle_string.jpg}
		\caption[]{Mediator of circle string}
	\end{figure}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{corollary}
	The perpendicular drawn from the center O of a circle to any string of this circle passes by construction through the middle $H$ of this string.
	\end{corollary}
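	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small sketch combining this corollary with the Pythagorean theorem proved earlier: in a circle of radius $R=5$, consider a string whose middle $H$ is at distance $\overline{\text{O}H}=3$ from the center O. The triangle $\text{O}HC$ being right at $H$, the half-string is:

	$$\overline{HC}=\sqrt{R^2-\overline{\text{O}H}^2}=\sqrt{25-9}=4$$

	so the string has length $8$.
	\end{tcolorbox}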
	\pagebreak
	\paragraph{Circumscribed circle theorem}\mbox{}\\\\
	\begin{theorem}
	Through three non-aligned points $A$, $B$, $C$ we can draw one and only one circle. This is the "\NewTerm{circumscribed circle theorem}\index{circumscribed circle theorem}".

	Or in other words: all triangles are cyclic, i.e. every triangle has a circumscribed circle or "\NewTerm{circumcircle}\index{circumcircle}".
	\end{theorem}
	\begin{dem}
	First, we draw the mediator $D$ of the segment $\overline{AB}$ and the mediator $\Delta$ of the segment $\overline{AC}$. If $D$ and $\Delta$ were parallel, the line $AB$, perpendicular to $D$, would be perpendicular also to $\Delta$, and would thus coincide with $AC$. The points $A$, $B$, $C$ would then be aligned. Therefore $D$ and $\Delta$ are non-parallel and intersect at a point O:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circumscribed_circle_theorem.jpg}
	\end{figure}
	\begin{enumerate}
		\item There is one circle going through $A$, $B$, $C$:

		The point O being on $D$, mediator of $\overline{AB}$, we have $\overline{\text{O}A} = \overline{\text{O}B}$ by construction; the point O being on $\Delta$, mediator of $\overline{AC}$, we have $\overline{\text{O}A} = \overline{\text{O}C}$. The circle $(C)$, of center O and of radius $\overline{\text{O}A}$, passes through $B$ (since $\overline{\text{O}A} = \overline{\text{O}B}$) and also through $C$ (since $\overline{\text{O}A} = \overline{\text{O}C}$). It therefore passes through $A$, $B$, $C$.

		\item Only one circle goes through the points $A$, $B$, $C$:

		If a circle different from $(C)$, of center O and radius $\overline{\text{O}A}$, were going through $A$, $B$, $C$, its center O' would be on the mediators of $\overline{AB}$ and $\overline{AC}$, which are two strings of that circle; it would then coincide with O!
	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The mediator of $\overline{BC}$, a string of the circle $(C)$ above, also passes through the point O. We can say (important result!) that the three mediators of the sides of a triangle $ABC$ are concurrent at a same point, the center of the circumscribed circle (not to be confused with the orthocenter, which is the intersection of the three altitudes).
	\end{tcolorbox}
	We will see further below that, using the law of sines, it is quite easy to get the diameter of the circumscribed circle, and therefore its surface, if we know the interior angles of the triangle. But if we do not know the interior angles, we need the central angle theorem, which we will prove further below, to calculate its surface.

	\paragraph{Inscribed circle theorem}\mbox{}\\\\
	\begin{theorem}
	It is the counterpart of the previous theorem and states that a unique circle can be inscribed in any triangle, i.e. 
	every triangle has an incircle tangent to its three sides.
	\end{theorem}
	\begin{dem}
	Given $ABC$, bisect the angles at the vertices $A$ and $B$.

	These angle bisectors must intersect at a point O:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/inscribed_circle_theorem.jpg}
	\end{figure}

	We locate afterwards the points $D$, $E$ and $F$ on the sides $\overline{AB}$, $\overline{BC}$ and $\overline{CA}$ respectively so that:

	$$\overline{\text{O}D}\perp\overline{AB},\qquad \overline{\text{O}E}\perp\overline{BC},\qquad \overline{\text{O}F}\perp\overline{CA}$$

	Observe that the triangles $\Delta A\text{O}D$ and $\Delta A\text{O}F$ are congruent, and also the triangles $\Delta B\text{O}D$ and $\Delta B\text{O}E$, by the angle-side-angle theorem.

	Since corresponding sides of congruent triangles are equal, we also know that:

	$$\overline{\text{O}D}=\overline{\text{O}F}\quad\text{and}\quad \overline{\text{O}D}=\overline{\text{O}E}$$

	Hence:

	$$\overline{\text{O}D}=\overline{\text{O}E}=\overline{\text{O}F}=R$$

	So the circle with center O and radius $R$ is an incircle for the triangle.
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	We may be interested in calculating the surface of the incircle of radius $R$ of the triangle $\Delta ABC$ knowing only the distances between the points $A$, $B$ and $C$.

	So, following the above figure, considering the triangle $\Delta ABC$ with incircle having center at O, we notice that:

	$$S_{\Delta ABC}=S_{\Delta A\text{O}B}+S_{\Delta B\text{O}C}+S_{\Delta C\text{O}A}$$

	Thus:

	$$S_{\Delta ABC}=\frac{c\,R}{2}+\frac{a\,R}{2}+\frac{b\,R}{2}$$

	That is:

	$$S_{\Delta ABC}=\frac{R}{2}\left(a+b+c\right)$$

	So:

	$$R=\frac{2\,S_{\Delta ABC}}{a+b+c}$$

	Therefore:

	$$R=\frac{2\,S_{\Delta ABC}}{P}$$

	where $P$ is the perimeter of the triangle.
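	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical sketch with the right triangle of sides $3$, $4$, $5$ seen earlier: its surface is $S=3\cdot 4/2=6$ and its perimeter is $P=12$, so the radius of its incircle is:

	$$R=\frac{2S}{P}=\frac{12}{12}=1$$

	and the surface of the incircle is $\pi R^2=\pi$.
	\end{tcolorbox}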
	\pagebreak
	\paragraph{Thales' theorem of the circle}\mbox{}\\\\
	\begin{theorem}
	If $A$, $B$ and $C$ are points on a circle where the segment $\overline{AC}$ is a diameter of the circle, then the angle $\widehat{ABC}$ is a right angle. This is the "\NewTerm{Thales' theorem for circle}\index{Thales' theorem for circle}".
	\end{theorem}
	\begin{dem}
	Given the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/thales_circle.jpg}
	\end{figure}	
	Since $\overline{\text{O}A} = \overline{\text{O}B} = \overline{\text{O}C}$, the triangles $\Delta \text{O}BA$ and $\Delta \text{O}BC$ are isosceles triangles, and by the equality of the base angles of an isosceles triangle, $\widehat{\text{O}BC} = \widehat{\text{O}CB}$ and $\widehat{BA\text{O}} = \widehat{AB\text{O}}$.

	Let $\alpha = \widehat{BA\text{O}}$ and $\beta = \widehat{\text{O}BC}$. The three internal angles of the triangle $ABC$ are $\alpha$, $(\alpha + \beta)$ and $\beta$. Since the sum of the angles of a triangle is equal to $\pi$ [rad] ($180^\circ$), we have:

	$$\alpha+(\alpha+\beta)+\beta=\pi\quad\Rightarrow\quad \alpha+\beta=\frac{\pi}{2}$$

	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{corollary}
	The circumcenter lies at the middle of the hypotenuse if and only if the inscribed triangle is a right triangle.
	\end{corollary}

	\pagebreak
	\paragraph{Central angle theorem}\mbox{}\\\\
	The "\NewTerm{central angle theorem}\index{central angle theorem}" is very useful in solving questions that deal with angles within circles, as we will see just after. The use of this theorem often simplifies a complicated situation into a rather simple one.

	\begin{theorem}
	The central angle theorem states that the central angle from two chosen points $A$ and $B$ on the circle is always twice the inscribed angle from those two points. The inscribed angle can be defined by any point $P$ along the outer arc $AB$ and the two points $A$ and $B$.
	\end{theorem}

	\begin{dem}
	Consider the following figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/center_angle_theorem.jpg}
	\end{figure}
	If we denote $\widehat{AP\text{O}}=\alpha$ and $\widehat{BP\text{O}}=\beta$, then it follows, as $\Delta P\text{O}A$ and $\Delta P\text{O}B$ are isosceles triangles, that:

	$$\widehat{PA\text{O}}=\alpha\quad\text{and}\quad \widehat{PB\text{O}}=\beta$$

	Using the fact that the sum of the angles of a triangle is equal to $\pi$ (or $180^\circ$), then:

	$$\widehat{A\text{O}P}=\pi-2\alpha\quad\text{and}\quad \widehat{B\text{O}P}=\pi-2\beta$$

	Since all the angles at the center O must add up to $2\pi$ (or $360^\circ$), we have:

	$$\widehat{A\text{O}B}=2\pi-(\pi-2\alpha)-(\pi-2\beta)=2(\alpha+\beta)$$

	Therefore:

	$$\widehat{A\text{O}B}=2\,\widehat{APB}$$

	Do not forget that this result is valid whatever the position of $P$ on the outer arc!
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	Using this theorem we can now prove that the relation that appears in the law of sines (\SeeChapter{see section Trigonometry page \pageref{law of sines}}) is the diameter of the circumcircle of the triangle $ABC$.

	\begin{theorem}

	$$\frac{a}{\sin \hat{A}}=\frac{b}{\sin \hat{B}}=\frac{c}{\sin \hat{C}}=2R$$

	where $R$ is the radius of the circumcircle of the triangle $ABC$.
	\end{theorem}
	\begin{dem}
	For the proof, consider the following figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/law_of_sines_diameter.jpg}
	\end{figure}
	Given the triangle $ABC$, let O denote the center of its circumcircle.

	Observe that $\widehat{B\text{O}C}=2\widehat{BAC}$ by the central angle theorem.

	That is:

	$$\widehat{B\text{O}C}=2\hat{A}$$

	Let $M$ be the midpoint of $\overline{BC}$.

	Then the triangle $\Delta B\text{O}M$ is congruent to the triangle $\Delta C\text{O}M$ by the side-side-side theorem.

	So we have that:

	$$\widehat{B\text{O}M}=\widehat{C\text{O}M}$$

	since congruent angles in congruent triangles are equal.

	By construction we have:

	$$\overline{BM}=\overline{CM}=\frac{a}{2}$$

	Let us now write:

	$$\widehat{B\text{O}M}=\frac{1}{2}\widehat{B\text{O}C}=\hat{A}$$

	Now in the right triangle $B\text{O}M$ we see that:

	$$\sin\widehat{B\text{O}M}=\frac{\overline{BM}}{\overline{B\text{O}}}=\frac{a/2}{R}$$

	Therefore:

	$$\frac{a}{\sin \hat{A}}=2R$$

	Hence, by applying the law of sines, we conclude that:

	$$\frac{a}{\sin \hat{A}}=\frac{b}{\sin \hat{B}}=\frac{c}{\sin \hat{C}}=2R$$

	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	So we can easily calculate the surface of the circumscribed circle knowing the interior angles. But consider now that we do not know the interior angles. Then we again use the central angle theorem and the following figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/circumscribed_circle_radius.jpg}
	\end{figure}
	From the triangle $\Delta BD\text{O}$ we have:

	$$\sin \hat{A}=\frac{a/2}{R}$$

	As we will prove in the section Geometric Shapes, the surface of the internal triangle is:

	$$S=\frac{1}{2}\,\text{base}\times\text{height}$$

	So if we choose for base of the triangle $b$, and knowing that therefore:

	$$h=c\,\sin \hat{A}$$

	we get:

	$$S=\frac{1}{2}\,b\,c\sin \hat{A}=\frac{1}{2}\,b\,c\,\frac{a}{2R}=\frac{abc}{4R}$$

	Therefore:

	$$R=\frac{abc}{4S}$$

	and we can then easily get the surface of the circumscribed circle from its radius.
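	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	A small numerical cross-check with the $3$, $4$, $5$ right triangle again ($S=6$):

	$$R=\frac{abc}{4S}=\frac{3\cdot 4\cdot 5}{24}=2.5$$

	which is indeed half the hypotenuse, as demanded by Thales' theorem of the circle (the circumcenter of a right triangle lies at the middle of the hypotenuse), and the surface of the circumscribed circle is $\pi R^2=6.25\,\pi$.
	\end{tcolorbox}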
	\pagebreak
	\subsection{Hilbert's Axioms}\label{hilbert axioms}
	Euclid gathered all the geometric knowledge of his time in the form of $5$ postulates. He left his name to Euclidean geometry, which uses his fifth postulate, to non-Euclidean geometry, which does not use it, and to Euclidean spaces.

	This base of postulates is, however, imperfect: to rigorously prove the theorems associated with this geometry, it is necessary to accept as true additional implicit assumptions. David Hilbert built an axiomatic system corresponding to the idea that Euclid had of geometry by adding the necessary ad hoc assumptions (mathematicians make definitions with coffee...).

	Hilbert's axioms are grouped into five categories: association, order, congruence, continuity and parallels.

	Hilbert's axiom system is constructed with $6$ primitive notions: three primitive terms and three primitive relations.

	The three primitive terms are:
	\begin{enumerate}
		\item Point
		\item Line
		\item Plane
	\end{enumerate}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Points, lines and planes are considered distinct by default.
	\end{tcolorbox}
	The three primitive relations are:
	\begin{enumerate}
		\item That of association, which defines the word "\NewTerm{contains}" in geometry! It corresponds to the ideas "is part of" and "is included in" of set theory.

		\item That of "\NewTerm{order}\index{order}", which is a relation between a couple of points and one point; it appears in the term "between" and is used to define the segments.

		\item That of "\NewTerm{congruence}\index{congruence}", which corresponds to the three "equivalence relations" for couples of points, triangles and angles.
	\end{enumerate}

	\pagebreak	
	Here are the "\NewTerm{Hilbert's axioms}\index{Hilbert's axioms}":

	\subsubsection{Incidence Axioms (axioms of association)}
		\begin{enumerate}
			\item[A.A1.] Given two points, there is a line going through these two points (containing them both).
	
			\item[A.A2.] Given two points, there is one line and only one passing through these two points (namely the line described in A.A1).
	
			\item[A.A3.] A line contains at least two points, and for a given line there exists at least one point not contained in that line.
	
			\item[A.A4.] Given three points that are not contained in a line, there is a plane containing the three points. Any plane contains at least one point.
	
			\item[A.A5.] Given three points that are not contained in a line, there is only one plane containing the three points.
	
			\item[A.A6.] If two points contained in a line $D$ are contained in a plane $A$, then $A$ contains all the points of $D$.
	
			\item[A.A7.] If two planes $A$ and $B$ both contain a point $C$, then the intersection of $A$ and $B$ contains at least one other point.
	
			\item[A.A8.] There are at least four non-coplanar points.
		\end{enumerate}
		
	\subsubsection{Order Axioms}
	\begin{enumerate}
		\item[A.O1.] If a point $B$ is between the points $A$ and $C$, then $B$ is also between the points $C$ and $A$, and there is a line containing the three points $A$, $B$, $C$.

		\item[A.O2.] Given two points $A$ and $C$, there is a point $B$ on the line $AC$ such that $C$ is between $A$ and $B$.

		\item[A.O3.] Given three points contained in a line, one and only one of them is located between the other two.

		\item[A.O4.] 
	("\NewTerm{Pasch's axiom}\index{Pasch's axiom}") Given three non-collinear points $A$, $B$, $C$, and given a line $D$ in the plane $ABC$ containing none of the points $A$, $B$, $C$: if $D$ contains a point of the segment $\overline{AB}$, then $D$ contains either a point of the segment $\overline{AC}$ or a point of the segment $\overline{BC}$.
	\end{enumerate}

	\pagebreak
	\subsubsection{Congruence Axioms}\label{congruence axioms}
	First let us recall that, intuitively, "\NewTerm{congruent}\index{congruent}" means in geometry "\NewTerm{stackable}\index{stackable}" (exactly the same size and shape), as opposed to "\NewTerm{similar}" (only one of the two properties is satisfied).

	\begin{enumerate}
		\item[A.G1.] Given two points $A$, $B$ and a point $A'$ on a line $d$, there are two and only two points $C$ and $D$ such that $A'$ is between $C$ and $D$, and $\overline{AB}$ is congruent to $\overline{A'C}$ and $\overline{AB}$ is congruent to $\overline{A'D}$.

		\item[A.G2.] The congruence relation is transitive, that is to say, if $\overline{AB}$ is congruent to $\overline{CD}$ and if $\overline{CD}$ is congruent to $\overline{EF}$, then $\overline{AB}$ is congruent to $\overline{EF}$.	

		\item[A.G3.] Given a straight line $d$ containing the adjacent segments $\overline{AB}$ and $\overline{BC}$, and given a straight line containing the adjacent segments $\overline{A'B'}$ and $\overline{B'C'}$: if $\overline{AB}$ is congruent to $\overline{A'B'}$ and $\overline{BC}$ is congruent to $\overline{B'C'}$, then $\overline{AC}$ is congruent to $\overline{A'C'}$.	

		\item[A.G4.] Given an angle $\widehat{ABC}$ and a half-line $\overline{B'C'}$, there are two and only two half-lines, $\overline{B'D}$ and $\overline{B'E}$, such that the angle $\widehat{DB'C'}$ is congruent to the angle $\widehat{ABC}$ and the angle $\widehat{EB'C'}$ is congruent to the angle $\widehat{ABC}$.
 	
		\item[A.G5.] Given two triangles $ABC$ and $A'B'C'$ such that $\overline{AB}$ is congruent to $\overline{A'B'}$, $\overline{AC}$ is congruent to $\overline{A'C'}$, and the angle $\widehat{BAC}$ is congruent to the angle $\widehat{B'A'C'}$, then the triangle $ABC$ is congruent to the triangle $A'B'C'$.
	\end{enumerate}
	We therefore see that these axioms are used to compare segments and angles, to define the center of a segment and orthogonal lines, and to talk about equilateral triangles, isosceles triangles, etc. They also enable one to strictly define the translations that were so often used by Euclid without ever having been defined.

	\subsubsection{Continuity Axioms}
	\begin{enumerate}
		\item[A.C1.] Also named "\NewTerm{Archimedes' axiom}\index{Archimedes' axiom}", this axiom assumes that, given two segments $\overline{AB}$ and $\overline{CD}$, there is always a finite sequence of points $A_1,\ldots, A_n$ belonging to the line containing the segment $\overline{AB}$, with $A<A_1<\ldots<A_{n-1}<B<A_n$, satisfying $\overline{AA_1}\equiv \overline{A_1A_2}\equiv\ldots\equiv \overline{A_{n-1}A_n}\equiv \overline{CD}$.
	
		\item[A.C2.] 
	Also named "\NewTerm{Cantor's Axiom}\index{Cantor's Axiom}", it assumes that if $(A_n)$ and $(B_n)$ are two infinite sequences of points located on a line $L$, and if we build nested line segments $\overline{A_iB_i}$ such that $\overline{A_{k}B_{k}} \subset \overline{A_iB_i}$ if $i<k$ (also sometimes written $]A_{i+1},B_{i+1}[\subset]A_i,B_i[$), and such that $\forall \overline{CD},\exists i: \overline{A_iB_i}<\overline{CD}$, then there exists a point $X$ belonging to all the segments $\overline{A_iB_i}$. In other words, given a sequence of nested segments whose lengths tend to $0$, there is a point common to all the segments.
	\end{enumerate}
	It is not obvious, but the Archimedean axiom is the result of the experience of a large number of measurements and guarantees the measurability of line segments: to an arbitrary line segment we can attach a real number, called the length of the line segment. The axiom does not guarantee the inverse relation, namely that an arbitrary real number can be related to some line segment as the length of this line segment. This fact is guaranteed by Cantor's axiom.


	\subsubsection{Parallels Axioms}
	\begin{enumerate}
		\item[A.P1.] Also known as the "\NewTerm{Euclidean axiom}\index{Euclidean axiom}", it assumes that, given a straight line $L$ and a point $P$ not belonging to $L$, one and only one straight line $L'$ parallel to $L$ passes through $P$.
		
		Another equivalent formulation is:
	
		\item[A.P1'.] Given a straight line $L$ and a point $P$ not included in $L$, there exists a plane containing $L$ and $P$. This plane contains one and only one straight line containing $P$ and containing no point of $L$.
		
		Any non-empty set of points (in the plane or in space) satisfying the axiomatic system together with the Euclidean postulate forms Euclidean geometry (the Euclidean plane $\mathcal{E}^2$, or the Euclidean space $\mathcal{E}^3$), which is also called "\NewTerm{parabolic geometry}\index{parabolic geometry}". One of the best known theorems following directly from the Euclidean axiom of parallelism is that the sum of the interior angles of an arbitrary triangle equals $\pi$.
	\end{enumerate}
	
	Up to now, we cannot prove that all these axioms together are non-contradictory. However, we know two things if we make an analogy with what we studied in the chapters Arithmetic and Algebra of the book (especially the sections on Set Theory, Functional Analysis and Sequences and Series):
   \begin{enumerate}
      \item If these axioms are contradictory, then the theory of real numbers is also contradictory.

      \item If the system of axioms obtained by removing Cantor's axiom is contradictory, then the theory of rational numbers is contradictory.
   \end{enumerate}
   Thus, the confidence we have in the strength of these axioms rests on the confidence we have in the theory of real numbers, which is already also quite strong.
   
   \pagebreak
   \subsection{Barycentre (centroid)}\label{barycenter}
   Now that we have discussed the minimum of the construction of Euclid and Hilbert geometry, we can move on to a higher level: the analysis of the properties of geometric shapes.
We will begin by studying the concept of \"\\NewTerm{center of gravity}\", also, though more rarely, referred to as \"\\NewTerm{centroid}\" or \"\\NewTerm{barycentre}\", which is very important not only in geometry but also in classical mechanics, astronomy and many other domains where multiple masses must be reduced to a single virtual point.\n   \n   \\textbf{Definition (\\#\\mydef):} We name \"\\NewTerm{center of gravity}\\index{center of gravity}\" or \"\\NewTerm{centroid}\\index{centroid}\" or \"\\NewTerm{geometric center}\\index{barycentre}\" of the points $A_i$ of the plane or of the space affected respectively by the coefficients $\\alpha_1,\\alpha_2,\\ldots,\\alpha_n$ (where the $\\alpha_i$ are real such that $\\sum \\alpha_i\\neq 0$) the single point $G$ (also sometimes denoted CM for \"Center of Mass\") such that:\n\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{GA_i}=\\vec{0}$$\n\tThe couple denoted $(A_i,\\alpha_i)$ is named \"\\NewTerm{weighted point}\\index{weighted point}\" or \"\\NewTerm{massive point}\\index{massive point}\" in the context of the study of physics, where $\\alpha_i>0$ represents a mass. If all the $\\alpha_i$ have the same value, then the centroid is obviously just the mean position of all the points in each of the coordinate directions. Therefore, if a physical object has uniform density, then its center of mass is the same as the centroid of its shape!\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/centroid_triangle.jpg}\n\t\t\\caption[Example of planar centroid]{Example of planar centroid (source: Wikipedia)}\n\t\\end{figure}\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1}. In physics, the \"\\NewTerm{center of inertia}\\index{center of inertia}\" of a body corresponds to the centroid of the particles that make up the body in question, each particle being weighted by its own mass. So this is the point from which the mass is evenly distributed. If the density is constant, the center of mass coincides with the centroid, as already mentioned.\\\\\n\n\t\\textbf{R2}. The \"\\NewTerm{center of gravity}\\index{center of gravity}\" of a body corresponds to the centroid of the particles that make up the body in question, each particle being weighted by its own weight! Very often in mechanics, the size of the body being small compared with the roundness of the Earth, we consider a uniform gravity field. Under this assumption, the center of gravity and the center of mass coincide.\\\\\n\n\t\\textbf{R3}. 
When for all massive points $(A_i,\\alpha_i)$ we have $\\alpha_i=\\alpha_j\\;\\forall (i,j)\\in \\mathbb{N}^{*}$, then we speak of a \"barycentre\".\n\t\\end{tcolorbox}\n\tFor an arbitrary point O, we obviously have by simple vector addition:\n\t$$\\vec{0}=\\sum_{i=1}^n \\alpha_i\\overrightarrow{GA_i}=\\sum_{i=1}^n \\alpha_i\\left(\\overrightarrow{G\\text{O}}+\\overrightarrow{\\text{O}A_i}\\right)=\\left(\\sum_{i=1}^n\\alpha_i\\right)\\overrightarrow{G\\text{O}}+\\sum_{i=1}^n \\alpha_i\\overrightarrow{\\text{O}A_i}$$\n\thence the major result:\n\t$$\\overrightarrow{\\text{O}G}=\\frac{\\displaystyle\\sum_{i=1}^n \\alpha_i\\overrightarrow{\\text{O}A_i}}{\\displaystyle\\sum_{i=1}^n \\alpha_i}$$\n\tPassing to the limit, if the domain is continuous (or can be considered as such), we have:\n\t$$\\overrightarrow{\\text{O}G}=\\frac{\\displaystyle\\int \\overrightarrow{\\text{O}M}\\,\\mathrm{d}m}{\\displaystyle\\int \\mathrm{d}m}$$\n\tWe may as well work with surface or volume elements (to mention only the more trivial cases) to determine the centroid:\n\t$$x_G=\\frac{\\displaystyle\\int x\\,\\mathrm{d}S}{\\displaystyle\\int \\mathrm{d}S}\\qquad\\text{or}\\qquad x_G=\\frac{\\displaystyle\\int x\\,\\mathrm{d}V}{\\displaystyle\\int \\mathrm{d}V}$$\n\tand similarly for the other coordinates.\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tTo calculate for example the centroid of the area between the $\\text{O}x$ axis and the parabola $y=kx^2$ for $-a\\leq x\\leq a$, we have (do not forget that when we calculate the centroid along the $y$-axis we have to take the right side of the curve!):\n\t\n\tIn the first component we have $\\mathrm{d}S=y\\mathrm{d}x$ so the double integral becomes a simple integral!\\\\\n\t\n\tFor the second component we can't use $\\mathrm{d}S=x\\mathrm{d}y$ as it represents an element of area between the $y$-axis and the parabola curve. Indeed, we want the outside area (above the parabola curve when seen from the point of view of the $y$-axis!). Therefore the idea is to translate the parabola by $a$ so that the parabola surface relative to $y$ will be the expected one. Therefore we have $\\mathrm{d}S=(x-a)\\mathrm{d}y$.\\\\\n\t\n\tAnd it comes:\n\t\n\t\\end{tcolorbox}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWhen we calculate the center of gravity of a function as in the example above, then we speak of a \"\\NewTerm{weighted curve}\\index{weighted curve}\".\n\t\\end{tcolorbox}\n\tIn space with a reference frame $(\\text{O},\\vec{i},\\vec{j},\\vec{k})$, writing $(x_i,y_i,z_i)$ for the coordinates of the weighted points $(A_i,\\alpha_i)$ and $(x_G,y_G,z_G)$ for those of $G$, we then have (in the discrete case):\n\t$$x_G=\\frac{\\displaystyle\\sum_{i=1}^n \\alpha_i x_i}{\\displaystyle\\sum_{i=1}^n \\alpha_i},\\qquad y_G=\\frac{\\displaystyle\\sum_{i=1}^n \\alpha_i y_i}{\\displaystyle\\sum_{i=1}^n \\alpha_i},\\qquad z_G=\\frac{\\displaystyle\\sum_{i=1}^n \\alpha_i z_i}{\\displaystyle\\sum_{i=1}^n \\alpha_i}$$\n\tLet us now give and prove some properties of the centroid (center of mass/gravity):\n\t\\begin{enumerate}\n\t\t\\item[P1.] Given $(A_1,\\alpha_1),(A_2,\\alpha_2),\\ldots,(A_n,\\alpha_n)$, $n$ weighted points. If $\\sum\\alpha_i\\neq 0$, we then have for any point $M$:\n\t\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{MA_i}=\\left(\\sum_{i=1}^n \\alpha_i\\right)\\overrightarrow{MG}$$\n\t\t\\begin{dem}\n\t\t\n\t\tAs by definition of the barycentre we have:\n\t\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{GA_i}=\\vec{0}$$\n\t\tThen we have indeed:\n\t\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{MA_i}=\\sum_{i=1}^n \\alpha_i\\left(\\overrightarrow{MG}+\\overrightarrow{GA_i}\\right)=\\left(\\sum_{i=1}^n \\alpha_i\\right)\\overrightarrow{MG}$$\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\n\t\t\\item[P2.] For any $k\\in\\mathbb{R}^{*}$, the weighted points $(A_1,\\alpha_1),(A_2,\\alpha_2),\\ldots,(A_n,\\alpha_n)$ and $(A_1,k\\alpha_1),(A_2,k\\alpha_2),\\ldots,(A_n,k\\alpha_n)$ have the same barycentre because (invariance of the barycentre):\n\t$$\\frac{\\displaystyle\\sum_{i=1}^n k\\alpha_i\\overrightarrow{\\text{O}A_i}}{\\displaystyle\\sum_{i=1}^n k\\alpha_i}=\\frac{\\displaystyle\\sum_{i=1}^n \\alpha_i\\overrightarrow{\\text{O}A_i}}{\\displaystyle\\sum_{i=1}^n \\alpha_i}=\\overrightarrow{\\text{O}G}$$\n\t\t\\begin{dem}\n\t\tThe proof is from our point of view immediate (but if you don't see it don't hesitate, as always, to contact us and we will write it).\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\n\t\t\\item[P3.] 
The barycentre $G$ of $n$ weighted points is invariant when we replace $p$ of them by their barycentre $G'$ assigned the coefficient $\\sum_{i=1}^p \\alpha_i$, on the condition that $\\sum_{i=1}^p \\alpha_i\\neq 0$: $G$ is then the barycentre of:\n\t\t$$(G',\\textstyle\\sum_{i=1}^p \\alpha_i),(A_{p+1},\\alpha_{p+1}),\\ldots,(A_n,\\alpha_n)$$\n\t\t\\begin{dem}\n\t\tIf $G$ is the barycentre of the $n$ weighted points:\n\t\t\n\t\tFor the particular case where $M = G$:\n\t\t\n\t\tBut $G$ being the barycentre of the $n$ weighted points $(A_1,\\alpha_1),(A_2,\\alpha_2),\\ldots,(A_n,\\alpha_n)$ we have then:\n\t\t\n\t\tAs:\n\t\t\n\t\tthe previous equality proves that $G$ is the barycentre of the weighted points:\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\n\t\t\\item[P4.] If $\\sum \\alpha_i=0$, for any points $M,N$:\n\t\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{MA_i}=\\sum_{i=1}^n \\alpha_i\\overrightarrow{NA_i}$$\n\t\t\\begin{dem}\n\t\tFor $M\\neq N$ let us calculate:\n\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{MA_i}=\\sum_{i=1}^n \\alpha_i\\left(\\overrightarrow{MN}+\\overrightarrow{NA_i}\\right)=\\left(\\sum_{i=1}^n \\alpha_i\\right)\\overrightarrow{MN}+\\sum_{i=1}^n \\alpha_i\\overrightarrow{NA_i}$$\n\t\tas $\\sum_{i=1}^n \\alpha_i=0$, we then have:\n\t\t$$\\sum_{i=1}^n \\alpha_i\\overrightarrow{MA_i}=\\sum_{i=1}^n \\alpha_i\\overrightarrow{NA_i}$$\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\\end{enumerate}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWhen a body has a certain symmetry, the calculations are simplified because the center of gravity (barycentre) should coincide with the element of symmetry. If a body such as a sphere, parallelepiped, etc., has a center of symmetry, the centroid coincides with it. If the body only has one axis of symmetry, the centroid is then on this axis.\n\t\\end{tcolorbox}\n\t\n\t\\subsection{Geometric Transformations}\\label{geometric transformations}\n\tThe transformations in the plane (and spaces of more dimensions) are usually strictly defined using group theory (\\SeeChapter{see section Set Algebra page \\pageref{set algebra}}). But in the context of Euclidean geometry, this group-theoretic approach does not interest us. So in this subsection we will only make a simple (and therefore more intuitive) formal approach to the elementary transformations in the plane, which are: translation, scaling, rotation and skew (shear). For the translation we will also see the equivalent in space, as it is very important for our study of Physics (especially for the study of the gyroscope).\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tBy definition an \"\\NewTerm{isometry}\\index{isometry}\" is a transformation that preserves distances (a distance-preserving injective map between metric spaces) and therefore also areas. As we will see below, the translation, rotation and reflection are isometries. The scaling is obviously not an isometry, neither in the plane nor in space.\n\t\\end{tcolorbox}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/transformations_type_01.jpg}\n\t\t\\includegraphics{img/geometry/transformations_type_02.jpg}\n\t\\end{figure}\n\t\n\t\\pagebreak\n\t\\subsubsection{Translation}\n\tGiven a straight line in a plane $P$ on which two points $A$ and $B$ define a segment of line denoted $\\overline{AB}$.\n\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{translation $T$}\\index{translation}\" (\"displacement in a given direction\" as said Euclid) of this line segment associates with each point $A$ and $B$ new points $A'$ and $B'$ such that $\\overline{AB}=\\overline{A'B'}$. So we can restrict the notion of translation to a single point if we want, such that mathematically we can write:\n\t\n\tIn other words, a transformation function (application) of the type translation $T$ of the entire plane in itself associates with each pre-image not more than a single image. Translation is therefore a bijective function. 
So we can define a unique reciprocal transformation application denoted $T^{-1}$ (recall what was seen in the chapter Arithmetic):\n\t\n\tWe say by definition that a point is \"\\NewTerm{translationally invariant}\\index{translationally invariant point}\" if and only if:\n\t$$T(A)=A$$\n\tIn another type of formalism, the displacement of the point $A$ to the point $B$ following the vector $\\overrightarrow{AB}$ is named the \"\\NewTerm{translation vector $\\overrightarrow{AB}$}\\index{translation vector}\" (\\SeeChapter{see section Vector Calculus page \\pageref{translation vector}}). It is expressed mathematically by the sum of the point coordinates and the coordinates of the vector (see just below for the details).\n\t\n\tFor example in a three-dimensional space:\n\t$$\\begin{pmatrix}x'\\\\y'\\\\z'\\end{pmatrix}=\\begin{pmatrix}x\\\\y\\\\z\\end{pmatrix}+\\begin{pmatrix}t_x\\\\t_y\\\\t_z\\end{pmatrix}$$\n\tSince the translation is not a linear transformation (\\SeeChapter{see section Linear Algebra page \\pageref{linear application}}), it cannot be represented by multiplication by a square matrix, as we will see is the case for the other transformations that follow (scaling and rotation).\n\n\tFor this purpose (to be able to express this transformation as a square-matrix linear application) we must use a workaround consisting in using a system named \"\\NewTerm{homogeneous coordinates}\\index{homogeneous coordinates}\" (\\SeeChapter{see section Projective Geometry page \\pageref{homogeneous coordinates}}), where the points of the plane are represented by a vector having $3$ components (and respectively those of space by a vector with $4$ components):\n\t$$(x,y)\\rightarrow(x,y,w)\\qquad\\text{and}\\qquad(x,y,z)\\rightarrow(x,y,z,w)$$\n\tIn the context of the study of translation we put $w=1$ because in this case:\n\t$$\\begin{pmatrix}x'\\\\y'\\\\z'\\\\1\\end{pmatrix}=\\begin{pmatrix}1&0&0&t_x\\\\0&1&0&t_y\\\\0&0&1&t_z\\\\0&0&0&1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\\\z\\\\1\\end{pmatrix}=\\begin{pmatrix}x+t_x\\\\y+t_y\\\\z+t_z\\\\1\\end{pmatrix}$$\n\tThis system of homogeneous coordinates is applicable to all the other transformations we will see later, by adding each time a coordinate (\\SeeChapter{see section Projective Geometry page \\pageref{homogeneous coordinates}}).\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tTranslation sends a line onto a line parallel to the original line, of course!\t\n\t\\end{tcolorbox}\n\n\t\\subsubsection{Homothety (scaling)}\\label{scaling}\n\tGiven any shape in the plane or space (point, line, oval, polygon, etc.), a number $R$, and a point $C$ placed at a predefined location.\n\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{homothetic transformation $H$}\\index{homothetic transformation}\" (also named \"\\NewTerm{scaling}\\index{scaling}\" or rarely \"\\NewTerm{enlargement}\") of ratio $R$ and center $C$ is the application that with every point $M$ of the shape associates a new point $M'$, collinear with $C$ and $M$, but arranged at a greater or smaller distance of ratio $R$ from the center $C$, such that $\\overline{CM'}=R\\cdot \\overline{CM}$.\n\n\tWe can restrict the concept of scaling to a line segment such that we can write mathematically:\n\t\n\tIn other words, a transformation of the type scaling of the entire plane in itself associates with each pre-image not more than a single image. Dilation is therefore, as for the translation, a bijective function. We can then define a reciprocal transformation application denoted $H^{-1}$ such that:\n\t\n\tIf $R\\neq 1$, then $C$ is the only invariant point. If $R=1$, then all points are invariant and the scaling is said to be the \"\\NewTerm{scaling identity}\\index{scaling identity}\". If $R=-1$, then we say that we have a \"\\NewTerm{central symmetry}\\index{central symmetry}\". 
A central symmetry is then a rotation of $180^\\circ$ (or $\\pi$ [rad]).\n\t\n\tIn another type of formalism, a scaling of center O and ratio $k$ associates with the point $A$ a point $B$ such that $\\overline{\\text{O}B}=k\\overline{\\text{O}A}$. The point $B$ is located on the straight line $\\text{O}A$ and at a distance $\\overline{\\text{O}B}=k\\overline{\\text{O}A}$. The sign of $k$ determines the position of $B$ with respect to O:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/scaling.jpg}\\\\\n\t\t$\\overline{\\text{O}B}=k\\overline{\\text{O}A}\\Leftrightarrow (x',y',z')=k(x,y,z)$\n\t\t\\caption{Example of scaling of center O}\n\t\\end{figure}\n\tWe now make a jump into spatial geometry (the jump is not very large and requires just the knowledge of elementary vector calculus and linear algebra).\n\n\tWe can also replace the scalar $k$ by a square matrix such that:\n\t$$\\begin{pmatrix}x'\\\\y'\\\\z'\\end{pmatrix}=\\begin{pmatrix}a&b&c\\\\d&e&f\\\\g&h&i\\end{pmatrix}\\begin{pmatrix}x\\\\y\\\\z\\end{pmatrix}$$\n\tA trivial solution for a scaling is to put $b=c=d=f=g=h=0$ and $a=e=i=k$, from which the obvious diagonal matrix form of ratio $k$:\n\t$$\\begin{pmatrix}k&0&0\\\\0&k&0\\\\0&0&k\\end{pmatrix}$$\n\tThis matrix is named the \"\\NewTerm{transformation matrix by scaling of center O (in this case the origin of the reference frame) and ratio $k$}\\index{transformation matrix}\", and the scaling, being a scalar (diagonal) matrix, commutes with any linear application.\n\t\n\tObviously for the two-dimensional case we have immediately:\n\t$$\\begin{pmatrix}W&0\\\\0&H\\end{pmatrix}$$\n\twhere $W$ stands for \"width\" and $H$ for \"height\".\n\t\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIt should be noticed that, as many planar shapes have a surface that depends on the product of two of their geometrical parameters, a scaling of factor $k$ has a squared effect $k^2$ on the surface. The same principle applies to a volume, but with a cubic rather than a squared amplitude.\n\t\\end{tcolorbox}\n\tIn the case presented above, the scaling conserves the forms in all directions (the geometry is invariant under this transformation) if we indeed use the same factor $k$ for all axes (we then speak of \"\\NewTerm{uniform scaling}\\index{uniform scaling}\"). But we may also use the following matrix:\n\t$$\\begin{pmatrix}k_x&0&0\\\\0&k_y&0\\\\0&0&k_z\\end{pmatrix}$$\n\tthat deforms the shape according to a different factor for each axis.\n\tThe inverse transformation of the scaling is obviously the scaling of center O and ratio $k^{-1}$, thus in matrix form:\n\t$$\\begin{pmatrix}k^{-1}&0&0\\\\0&k^{-1}&0\\\\0&0&k^{-1}\\end{pmatrix}$$\n\tWhen the scaling center does not coincide with the origin of the selected coordinate system (which happens almost all the time), the procedure for calculating the coordinates of the image point is very simple. It is necessary to:\n\t\\begin{enumerate}\n\t\t\\item Carry out a translation to match the center of scaling with the origin of the frame and apply this translation to all points involved.\n\n\t\t\\item Make the scaling as described above (the center being the origin of the reference frame).\n\n\t\t\\item Make the reverse translation to take the center and the image back to their original place.\n\t\\end{enumerate}\n\tTo conclude, notice that the successive operations of translation and scaling are not commutative in the general case, as the small sketch below illustrates. 
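\n\t\n\tTo make the three-step procedure concrete, here is a minimal sketch in Python (the book's own examples use Maple, but the idea is identical); the helper names \\texttt{translation} and \\texttt{scaling} are ours, and the matrices follow the homogeneous-coordinates convention introduced above:\n\t\\begin{verbatim}\nimport numpy as np\n\ndef translation(tx, ty):\n    # Homogeneous 3x3 translation matrix in the plane\n    return np.array([[1, 0, tx],\n                     [0, 1, ty],\n                     [0, 0, 1]], dtype=float)\n\ndef scaling(k):\n    # Homogeneous 3x3 uniform scaling of ratio k, center at the origin\n    return np.array([[k, 0, 0],\n                     [0, k, 0],\n                     [0, 0, 1]], dtype=float)\n\n# Scaling of ratio k about an arbitrary center C = (cx, cy):\n# translate C to the origin, scale, then translate back.\ncx, cy, k = 2.0, 1.0, 3.0\nH_C = translation(cx, cy) @ scaling(k) @ translation(-cx, -cy)\n\nM = np.array([3.0, 2.0, 1.0])   # the point (3, 2) in homogeneous coordinates\nprint(H_C @ M)                  # -> [5. 4. 1.], i.e. C + k*(M - C)\n\n# Translation and scaling do not commute in general:\nA = translation(1, 0) @ scaling(2)\nB = scaling(2) @ translation(1, 0)\nprint(np.allclose(A, B))        # -> False\n\\end{verbatim}\n\t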
In practice, we often perform first the scaling and then the translation, as described in the three steps above.\n\t\n\t\\subsubsection{Shear (skew) transformation}\n\t\\textbf{Definition (\\#\\mydef):} In plane geometry, a \"\\NewTerm{shear mapping}\\index{shear mapping}\" is a linear application that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction. This type of mapping is also named \"\\NewTerm{shear transformation}\\index{shear transformation}\", \"\\NewTerm{transvection}\\index{transvection}\", or just \"\\NewTerm{shearing}\\index{shearing}\".\n\t\n\tBefore formalizing the general case of shear, let us consider the following special case of a shearing factor $k$ along the $x$ axis:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/shearing_example.jpg}\n\t\t\\caption{Scaling with shear}\n\t\\end{figure}\n\tOK let us generalize that stuff! In fact for the shearing we must consider the case of shearing through the $x$-axis, whose situation can be summarized by:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/shearing_x.jpg}\n\t\t\\caption{Shearing through $x$-axis}\n\t\\end{figure}\n\tSo it is immediate that we have:\n\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}1&k\\\\0&1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}=\\begin{pmatrix}x+ky\\\\y\\end{pmatrix}$$\n\tand the case of shearing through the $y$-axis, whose situation can be summarized by:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/shearing_y.jpg}\n\t\t\\caption{Shearing through $y$-axis}\n\t\\end{figure}\n\tSo it is immediate that we have:\n\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}1&0\\\\k&1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}=\\begin{pmatrix}x\\\\y+kx\\end{pmatrix}$$\n\t\n\t\\pagebreak\n\t\\subsubsection{Rotation}\\label{rotation}\n\tGiven any shape in the plane (point, line, oval, polygon, etc.), a real number $\\alpha$, and a point $C$ located at a predefined location.\n\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{rotation $R$}\\index{rotation}\" of angle $\\alpha$ and of center $C$ is the application that with every point $M$ of the shape associates a new point $M'$, obtained by rotating the segment $\\overline{CM}$ by a positive or negative angle $\\alpha$ around the center $C$, such that $\\overline{CM}$ and $\\overline{CM'}$ have the same length but not the same direction.\n\t\n\tFrom this definition, it is apparent that the axis of rotation of an object is the locus of points of the object which remain stationary!\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe rotation is also, in a more academic way, a bijective mapping in the plane, so we can also define a reciprocal transformation application denoted $R^{-1}$. The rotation in space is bijective too; what is not one-to-one, as we will see further below with the gimbal lock, is the correspondence between the three rotation angles and the resulting rotation.\n\t\\end{tcolorbox}\n\tIf $\\alpha\\neq 2k\\pi$ (with $k\\in\\mathbb{Z}$), then $C$ is the only invariant point. If $\\alpha=2k\\pi$, then all points are invariant and the rotation is said to be the \"\\NewTerm{identity rotation}\\index{identity rotation}\". If we choose an appropriate system of perpendicular axes such that their intersection coincides with $C$ and the angle is $\\pm \\pi$, then $R$ is named a \"\\NewTerm{rotation of central symmetry}\\index{rotation of central symmetry}\".\n\n\tIn another type of formalism, the rotation is expressed much more rigorously. We will use the drawing of a unit circle (in the plane) to study this type of transformation. 
We first consider the case where the center of rotation coincides with the origin of the reference frame: \n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/rotation_transformation_in_the_plane.jpg}\n\t\t\\caption{Example of rotation in the plane}\n\t\\end{figure}\n\twhere $A'$ is the image of $A$ by the rotation of center O and angle $\\theta$.\n\n\tWe have in the plane for the point $A$ (\\SeeChapter{see section Trigonometry page \\pageref{circle trigonometry}}):\n\t$$x=\\cos(\\alpha),\\qquad y=\\sin(\\alpha)$$\n\tand identically for the point $A'$:\n\t$$x'=\\cos(\\beta),\\qquad y'=\\sin(\\beta)$$\n\twith $\\beta=\\alpha+\\theta$.\n\n\tWhich brings us to write:\n\t$$x'=\\cos(\\alpha+\\theta),\\qquad y'=\\sin(\\alpha+\\theta)$$\n\tTherefore using the trigonometric identities proved in the section Trigonometry page \\pageref{remarkable trigonometric identities}:\n\t$$x'=\\cos(\\alpha)\\cos(\\theta)-\\sin(\\alpha)\\sin(\\theta)=x\\cos(\\theta)-y\\sin(\\theta)$$\n\tIdentically (based on the fact that the basic trigonometric identities presented in the section Trigonometry of this book are known), we find:\n\t$$y'=\\sin(\\alpha)\\cos(\\theta)+\\cos(\\alpha)\\sin(\\theta)=y\\cos(\\theta)+x\\sin(\\theta)$$\n\tThis allows us to write the rotation matrix in the plane (imagining that the $z$-axis comes out of the sheet/screen)\\label{rotation matrix in the plane}:\n\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}\\cos(\\theta)&-\\sin(\\theta)\\\\\\sin(\\theta)&\\cos(\\theta)\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}$$\n\tTherefore the rotation matrix in the plane is given by:\n\t$$R(\\theta)=\\begin{pmatrix}\\cos(\\theta)&-\\sin(\\theta)\\\\\\sin(\\theta)&\\cos(\\theta)\\end{pmatrix}$$\n\tThe inverse transformation is simply the rotation of center O (same as before) but of angle $-\\theta$, that is (we use obviously again the trigonometric relations of opposite angles):\n\t$$R(-\\theta)=\\begin{pmatrix}\\cos(\\theta)&\\sin(\\theta)\\\\-\\sin(\\theta)&\\cos(\\theta)\\end{pmatrix}$$\n\tAn easy way to see this is to notice that it is immediate that:\n\t$$R(\\theta)R(-\\theta)=\\begin{pmatrix}1&0\\\\0&1\\end{pmatrix}$$\n\tWhen we want to make a rotation of an object around its barycentre (centroid) we should proceed as for the scaling, that is to say: bring the barycentre of the shape back to the origin O of the reference frame with a translation, apply the rotation on the shape, and only after take the shape back to its original position with a translation.\n\t\n\t\\label{3d rotation matrix around}During the rotation of an object in space (we take the opportunity to introduce this now... because we need it in several physics concepts that will follow in this book), the transformation is quite similar to the previous one, except that the product of two rotations about different axes is in general no longer a rotation about one of the coordinate axes, and that the product is not commutative when the number of dimensions is strictly greater than $2$ (we can make the detailed calculation on request).\n\t\n\tIndeed, during a rotation of angle $\\phi$ around the $z$-axis, the coordinate $z$ does not change. 
This brings us to write, relatively to the plane $x-y$:\n\t$$x'=x\\cos(\\phi)-y\\sin(\\phi),\\qquad y'=x\\sin(\\phi)+y\\cos(\\phi),\\qquad z'=z$$\n\tOK for people reading this book on a computer with Adobe Flash player installed and activated, here is an animated version (otherwise see here: \\url{https://vimeo.com/575753732}):\n\t\\begin{center}\n\t\\centering\n\t\t\\includemedia[activate=pageopen,width=125pt,height=125pt,\n\t]{}{swf/Z_rotation.swf}\n\t\\end{center}\n\tSo therefore the rotation matrix application around the $z$ axis of angle $\\phi$ is:\n\t$$R_Z(\\phi)=\\begin{pmatrix}\\cos(\\phi)&-\\sin(\\phi)&0\\\\\\sin(\\phi)&\\cos(\\phi)&0\\\\0&0&1\\end{pmatrix}$$\n\t So we can see that $R_Z(\\phi)$ contains the rotation matrix in the plane $x-y$ that we proved earlier above, but with one more component, according to the idea of homogeneous coordinates!\n\t \n\t The philosophy is then always the same relative to the other axes.\n\t \n\t The three-dimensional rotation around the $x$-axis (relatively to the plane $z-y$) of angle $\\theta$:\n\t$$y'=y\\cos(\\theta)-z\\sin(\\theta),\\qquad z'=y\\sin(\\theta)+z\\cos(\\theta),\\qquad x'=x$$\n\tOK for people reading this book on a computer with Adobe Flash player installed and activated, here is an animated version (otherwise see here: \\url{https://vimeo.com/575753223}):\n\t\\begin{center}\n\t\\centering\n\t\t\\includemedia[activate=pageopen,width=125pt,height=125pt,\n\t]{}{swf/X_rotation.swf}\n\t\\end{center}\n\tSo therefore the rotation matrix application around the $x$ axis of angle $\\theta$ is \\label{3d rotation matrix around x}:\n\t$$R_X(\\theta)=\\begin{pmatrix}1&0&0\\\\0&\\cos(\\theta)&-\\sin(\\theta)\\\\0&\\sin(\\theta)&\\cos(\\theta)\\end{pmatrix}$$\n\t \n\t And finally the three-dimensional rotation around the $y$-axis (relatively to the plane $z-x$) of angle $\\gamma$:\n\t$$x'=x\\cos(\\gamma)+z\\sin(\\gamma),\\qquad z'=-x\\sin(\\gamma)+z\\cos(\\gamma),\\qquad y'=y$$\n\tOK for people reading this book on a computer with Adobe Flash player installed and activated, here is an animated version (otherwise see here: \\url{https://vimeo.com/575753505}):\n\t\\begin{center}\n\t\\centering\n\t\t\\includemedia[activate=pageopen,width=125pt,height=125pt,\n\t]{}{swf/Y_rotation.swf}\n\t\\end{center}\n\tSo therefore the rotation matrix application around the $y$ axis of angle $\\gamma$ is:\n\t$$R_Y(\\gamma)=\\begin{pmatrix}\\cos(\\gamma)&0&\\sin(\\gamma)\\\\0&1&0\\\\-\\sin(\\gamma)&0&\\cos(\\gamma)\\end{pmatrix}$$\n\t We finally have three rotation matrices $R_X(\\theta)$, $R_Y(\\gamma)$, $R_Z(\\phi)$, each corresponding to one of the planes $(xy,yz,xz)$ of three-dimensional space. The three angles referred to above are named \"\\NewTerm{Euler angles}\\index{Euler angles}\" and sadly there is no convention in the notation of the angles; they depend on the authors and teachers.\n\n\tThese three matrices are part of the group of orthogonal matrices of order three with determinant $1$, denoted \"\\NewTerm{SO($3$)}\" and named by physicists and mathematicians the \"\\NewTerm{group of spatial rotations SO($3$)}\\index{group of spatial rotations}\", as seen in the section of Set Algebra. Any rotation can be represented by the product matrix resulting from the product of these three (orthogonal) matrices (whose determinant is equal to $1$ as proved in the section of Linear Algebra).\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere is not only one eigenvector for a rotation matrix. There can be one, but also a maximum of three. Indeed, consider a rotation of an object around the $z$-axis by $\\pi$ [rad]: then the vector $(0,0,1)$ along $z$ is obviously an eigenvector, with eigenvalue $+1$, but in this special case $(1,0,0)$ (and likewise $(0,1,0)$) is also an eigenvector, with an eigenvalue of $-1$.\n\t\\end{tcolorbox}\n\tAny rotation is a composition of these three rotations, but it is important that the reader remembers the section of Linear Algebra where we saw that matrix multiplication is not commutative. 
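\n\t\n\tAs a quick numerical check of this non-commutativity, here is a small Python sketch of our own (the matrices are the standard active-convention ones given above, angles in radians):\n\t\\begin{verbatim}\nimport numpy as np\n\ndef R_x(t):  # rotation of angle t around the x-axis\n    c, s = np.cos(t), np.sin(t)\n    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])\n\ndef R_z(t):  # rotation of angle t around the z-axis\n    c, s = np.cos(t), np.sin(t)\n    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])\n\na = np.pi / 2   # 90 degrees\n# Rotating first around x then around z differs from the reverse order:\nprint(np.allclose(R_z(a) @ R_x(a), R_x(a) @ R_z(a)))   # -> False\n# Both products are nevertheless proper rotations\n# (orthogonal matrices of determinant +1):\nP = R_z(a) @ R_x(a)\nprint(np.allclose(P @ P.T, np.eye(3)), np.isclose(np.linalg.det(P), 1.0))\n\\end{verbatim}\n\t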
Thus, a rotation about the $x$-axis of $90^\\circ$ ($\\pi/2$ [rad]) followed by a rotation about the $z$-axis of $90^\\circ$ ($\\pi/2$ [rad]) is not equivalent to rotating first around the $z$-axis and then around the $x$-axis by the same angles, as shown in the figure below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/rotation_transformation_non_commutativity.jpg}\n\t\t\\caption{Example of non-commutativity of the rotation matrix}\n\t\\end{figure}\n\tFinally, to get the matrix that makes up a particular composition of the three rotations, here for example are the commands to provide in Maple 4.00b:\n\t\n\t\\texttt{>X:=array([[1,0,0],[0,cos(theta),sin(theta)],[0,-sin(theta),cos(theta)]]);\\\\\n\t>Y:=array([[cos(lambda),0,-sin(lambda)],[0,1,0],[sin(lambda),0,cos(lambda)]]);\\\\\n\t>Z:=array([[cos(phi),sin(phi),0],[-sin(phi),cos(phi),0],[0,0,1]]);\\\\\n\t>evalm(X\\&*Y\\&*Z);\\\\\n\t}\\\\\n\tIt is not very interesting to provide our reader with the expression of $R_{XYZ}=R_XR_YR_Z$, as the rotations are not commutative and therefore (except for some special cases) we have $R_{XYZ}\\neq R_{XZY}\\neq R_{YZX}\\neq R_{YXZ}\\neq R_{ZXY}\\neq R_{ZYX}$. This is why most books don't give them (except some PDFs on the Internet).\n\t\n\tBut as a reader requested it for $R_{ZYX}$, let us give it:\n\t\n\t\n\tIf we are looking to achieve the composition of a translation $T$, a rotation $R$ and a scaling $H$, the transformation matrix will typically be:\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} Let us recall (\\SeeChapter{see section Linear Algebra page \\pageref{non-commutativity matrices}}) that the multiplication of two matrices is generally not commutative.\\\\\n\t\n\t\\textbf{R2.} The direct similarity of center $C$, ratio $R$ and angle $\\alpha$ is the composite of the scaling of center $C$ with ratio $R$ and a rotation of center $C$ and angle $\\alpha$. We refer the reader to the section Numbers to review how complex numbers allow us to operate formally with similarities using the operations of addition and multiplication (direct or retrograde).\\\\\n\t\n\t\\textbf{R3.} We can make much more powerful and flexible rotations using quaternion numbers (or \"hypercomplex numbers\"). For more information about this type of numbers the reader should refer to the section Numbers, where the quaternions are introduced in great detail, which makes clearer why they are sometimes preferred in 3D computing for rotations instead of the rotation matrices.\n\t\\end{tcolorbox}\n\t\n\t\\paragraph{Gimbal lock}\\mbox{}\\\\\\\\\n\tGimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, \"locking\" the system into rotation in a degenerate two-dimensional space.\n\t\n\tThe problem of gimbal lock appears when one uses Euler angles in applied mathematics. Developers of 3D computer programs, such as 3D modelling, embedded navigation systems, robotics and video games, must take care to avoid it.\n\t\n\tBefore going into the math stuff, let us show an example of gimbal lock. Consider the following gyroscope-like structure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/gimbal_lock_01.jpg}\n\t\\end{figure}\n\tIn the above configuration we can rotate the jet to any angle (which means that the jet will know its orientation wherever it goes). Indeed, the inner disc can rotate independently, and the same goes for the middle and outer discs. 
But now if the following configuration happens to occur:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/gimbal_lock_02.jpg}\n\t\\end{figure}\n\tThe reader can then see that the inner disc can only rotate around the same axis as the outer disc. So... yes!!! We lost one degree of freedom.\n\t\n\tA lot of confusion sometimes results from the term \"lock\" in this issue. Indeed, we see from the above figure that the gimbals don't actually \"lock\". They're all still free to spin about like they normally would; no ring is being held up or forcibly locked into position. It's only that if two of the three axes are aligned we can no longer tell the difference between rotating about one axis or the other. We can't because they're effectively the same axis in this special position. Both gimbals would rotate in the same direction at the same time.\n\n\tSo in certain orientations the yaw, pitch or roll axis aligns with one of the others. At that point, a change in the first can't be distinguished from a change in the other. That's a big problem if we need to keep track of our orientation accurately. \n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIt is said that a well-known gimbal lock incident happened in the Apollo 11 Moon mission.\n\t\\end{tcolorbox}\n\tIn formal language, gimbal lock occurs because the map from Euler angles to rotations is degenerate at certain points. Indeed, Euler angles provide a means for giving a numerical description of any rotation in three-dimensional space using three numbers, but not only is this description not unique, there are also some points where not every change in the target space (rotations) can be realized by a change in the source space (Euler angles). This is a topological constraint - there is no covering map from the 3-torus to the 3-dimensional real projective space; the only (non-trivial) covering map is from the 3-sphere, as in the use of quaternions.\n\n\tA rotation in 3D space can be represented numerically with matrices in several ways. One of these representations is, as we have just seen:\n\t\n\tLet us observe what happens if $\\theta=0$ (that is to say we block the rotation around the $x$-axis):\n\t\n\tCarrying out the matrix multiplication:\n\t\n\tAnd finally using some of our trigonometric identities (\\SeeChapter{see section Trigonometry page \\pageref{remarkable trigonometric identities}}):\n\t\n\tChanging the values of $\\phi$ and $\\gamma$ in the above matrix has the same effect: the rotation angle $\\phi-\\gamma$ changes, but the rotation axis remains in the $z$ direction! Indeed, we know that in a rotation matrix, when the first row and first column are fixed we have a rotation around the $x$-axis, when the second row and second column are fixed we have a rotation around the $y$-axis, and when the third row and third column are fixed we have a rotation around the $z$-axis. Here the third column is fixed, but instead of the third row it is the first row that is fixed: whatever the values of $\\phi$ and $\\gamma$, the last column and the first row of the matrix won't change. The only solution for $\\phi$ and $\\gamma$ to recover different roles is to change $\\theta$.\n\t\n\tIn this specific situation, the gimbal lock behaviour can NOT be avoided. A quaternion can uniquely identify a rotation, but when it is converted into an Euler rotation, it also loses one degree of freedom. 
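\n\t\n\tTo see the loss of the degree of freedom numerically, here is a small Python sketch (our own illustration, not the book's Maple; we use the Tait-Bryan composition $R_Z(\\phi)R_Y(\\gamma)R_X(\\theta)$ with the middle gimbal blocked at $\\gamma=\\pi/2$, an ordering convention that differs from the computation above but exhibits exactly the same phenomenon): adding the same amount to $\\phi$ and to $\\theta$ leaves the final orientation unchanged, so only the combination $\\theta-\\phi$ survives.\n\t\\begin{verbatim}\nimport numpy as np\n\ndef R_x(t):\n    c, s = np.cos(t), np.sin(t)\n    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])\n\ndef R_y(t):\n    c, s = np.cos(t), np.sin(t)\n    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])\n\ndef R_z(t):\n    c, s = np.cos(t), np.sin(t)\n    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])\n\ndef rot(phi, gamma, theta):\n    # Composition R_Z(phi) R_Y(gamma) R_X(theta)\n    return R_z(phi) @ R_y(gamma) @ R_x(theta)\n\nphi, theta, delta = 0.3, 0.8, 0.5\n# Middle gimbal blocked at 90 degrees: shifting phi and theta together\n# changes nothing, so one degree of freedom has effectively been lost.\nprint(np.allclose(rot(phi, np.pi/2, theta),\n                  rot(phi + delta, np.pi/2, theta + delta)))   # -> True\n# Away from the singular position the two orientations differ:\nprint(np.allclose(rot(phi, 0.7, theta),\n                  rot(phi + delta, 0.7, theta + delta)))       # -> False\n\\end{verbatim}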
\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.6]{img/geometry/rotation_vocabulary_in_aeronautic_science.jpg}\n\t\t\\caption{Rotation vocabulary in aeronautic science}\n\t\\end{figure}\n\t\n\t\\pagebreak\n\t\\paragraph{Euler angles}\\mbox{}\\\\\\\\\n\tLet us come back to the Euler angles for an important result for our later study of the gyroscope effect in Classical Mechanics (especially the \"nutation\" effect)!\n\t\t\n\t\tSo we characterize a general orientation of a body with respect to the inertial system $xyz$ (see figure below) in terms of the following $3$ rotations, respectively, by tradition in the following order:\n\t\\begin{enumerate}\n\t\t\\item Rotation by angle $\\phi$ around the $z$-axis\n\t\t\\item Rotation by angle $\\theta$ around the new $x_1'$ axis, which we name the \"\\NewTerm{line of nodes}\\index{line of nodes}\".\n\t\t\\item Rotation by angle $\\gamma$ about the new $x_3$-axis\n\t\\end{enumerate}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.8]{img/geometry/euler_angles_rotation_convention.jpg}\n\t\t\\caption{Euler angles rotation convention}\n\t\\end{figure}\n\tFormally, as we know, this is given by:\n\t\n\tFortunately, most of the time we do not have to use the matrix resulting from the three matrix multiplications above. In most kinematic problems what we really need is to be able to write the components of the angular velocity $\\vec{\\Omega}$ in both systems of coordinates. Since $\\vec{\\Omega}$ describes precisely how fast the angles vary in time, we have:\n\t$$\\vec{\\Omega}=\\dot{\\vec{\\phi}}+\\dot{\\vec{\\theta}}+\\dot{\\vec{\\gamma}}=\\dot{\\phi}\\,\\vec{e}_z+\\dot{\\theta}\\,\\vec{e}_{1'}+\\dot{\\gamma}\\,\\vec{e}_3$$\n\tsince the three rotations are about these particular axes.\n\n\tLet us analyze each contribution to $\\vec{\\Omega}$:\n\t\\begin{enumerate}\n\t\t\\item $\\dot{\\vec{\\phi}}=\\vec{e}_z\\dot{\\phi}$ (with respect to $xyz$ system). Following the rotations, we find that with respect to the $123$ system we have first (the reader can take the extreme situations where $\\theta=0$ and $\\theta=\\pi/2$ to see that this is accurate):\n\t\t$$\\vec{e}_z=\\cos(\\theta)\\,\\vec{e}_3+\\sin(\\theta)\\,\\vec{e}_{2'}$$\n\t\tbut we notice that we also have, with the same idea:\n\t\t$$\\vec{e}_{2'}=\\sin(\\gamma)\\,\\vec{e}_1+\\cos(\\gamma)\\,\\vec{e}_2$$\n\t\tTherefore we have:\n\t\t$$\\vec{e}_z=\\sin(\\theta)\\sin(\\gamma)\\,\\vec{e}_1+\\sin(\\theta)\\cos(\\gamma)\\,\\vec{e}_2+\\cos(\\theta)\\,\\vec{e}_3$$\n\t\tHence:\n\t\t$$\\dot{\\vec{\\phi}}=\\dot{\\phi}\\left(\\sin(\\theta)\\sin(\\gamma)\\,\\vec{e}_1+\\sin(\\theta)\\cos(\\gamma)\\,\\vec{e}_2+\\cos(\\theta)\\,\\vec{e}_3\\right)$$\n\t\twith respect to the $123$ system.\n\t\t\n\t\t\\item $\\dot{\\vec{\\theta}}=\\vec{e}_{1'}\\dot{\\theta}$ (with respect to $1'2'3'$ system). Following the rotations, we find that with respect to the $123$ system we have obviously (the reader can take the extreme situations where $\\gamma=0$ and $\\gamma=\\pi/2$ to see that this is accurate):\n\t\t$$\\vec{e}_{1'}=\\cos(\\gamma)\\,\\vec{e}_1-\\sin(\\gamma)\\,\\vec{e}_2$$\n\t\tTherefore:\n\t\t$$\\dot{\\vec{\\theta}}=\\dot{\\theta}\\left(\\cos(\\gamma)\\,\\vec{e}_1-\\sin(\\gamma)\\,\\vec{e}_2\\right)$$\n\t\twith respect to $123$ system.\n\n\t\t\\item $\\dot{\\vec{\\gamma}}=\\vec{e}_{3}\\dot{\\gamma}$. Following the rotations, we have then immediately with respect to $123$:\n\t\t$$\\dot{\\vec{\\gamma}}=\\dot{\\gamma}\\,\\vec{e}_3$$\n\t\\end{enumerate}\n\tTherefore summing the relations:\n\t$$\\vec{\\Omega}=\\dot{\\vec{\\phi}}+\\dot{\\vec{\\theta}}+\\dot{\\vec{\\gamma}}$$\n\tTherefore the sum, neatly regrouped, gives:\n\t$$\\vec{\\Omega}=\\left(\\dot{\\phi}\\sin(\\theta)\\sin(\\gamma)+\\dot{\\theta}\\cos(\\gamma)\\right)\\vec{e}_1+\\left(\\dot{\\phi}\\sin(\\theta)\\cos(\\gamma)-\\dot{\\theta}\\sin(\\gamma)\\right)\\vec{e}_2+\\left(\\dot{\\phi}\\cos(\\theta)+\\dot{\\gamma}\\right)\\vec{e}_3$$\n\twith respect to $123$ system. Many times denoted:\n\t$$\\vec{\\Omega}=\\Omega_1\\vec{e}_1+\\Omega_2\\vec{e}_2+\\Omega_3\\vec{e}_3$$\n\tThe system:\n\t$$\\begin{array}{l}\\Omega_1=\\dot{\\phi}\\sin(\\theta)\\sin(\\gamma)+\\dot{\\theta}\\cos(\\gamma)\\\\\\Omega_2=\\dot{\\phi}\\sin(\\theta)\\cos(\\gamma)-\\dot{\\theta}\\sin(\\gamma)\\\\\\Omega_3=\\dot{\\phi}\\cos(\\theta)+\\dot{\\gamma}\\end{array}$$\n\tis named the \"\\NewTerm{kinematic Euler equations}\\index{kinematic Euler equations}\" and is very useful to express the rotation speed of a body in its own reference frame (most of the time relative to its center of mass).\n\t\n\t\\pagebreak\n\t\\subsubsection{Reflection}\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{reflection}\\index{reflection}\", also named \"\\NewTerm{axial symmetry}\\index{axial symmetry}\", denoted $S_\\Delta$ (in geometry) relatively to the straight line $\\Delta$ (in space the reflection is done relatively to a plane), is the application that associates with each point $M$ outside $\\Delta$ the point $M'$ such that $\\Delta$ is the perpendicular bisector (mediator) of $\\overline{MM'}$. 
If $M$ belongs to $\\Delta$, then $M=M'$.\n\n\tMathematically it is written:\n\t\n\tIn other words, a transformation function of the type reflection of the entire plane in itself associates with each pre-image not more than a single image. Reflection is then a bijective function. So we can define a reciprocal transformation denoted $S_\\Delta^{-1}$, and since a reflection is an involution:\n\t$$S_\\Delta^{-1}=S_\\Delta$$\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAll points of $\\Delta$ are trivially invariant under reflection in the plane.\n\t\\end{tcolorbox}\n\tIn matrix form the reflections in the plane are extremely simple to formalize using linear algebra (see section of the same name page \\pageref{linear algebra}) as shown in the situations below:\n\t\\begin{itemize}\n\t\t\\item Reflection relatively to the $y$-axis:\n\t\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}-1&0\\\\0&1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}$$\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/reflection_y.jpg}\n\t\t\t\\caption{Reflection around $y$-axis}\n\t\t\\end{figure}\n\n\t\t\\item Reflection relatively to the $x$-axis:\n\t\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}1&0\\\\0&-1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}$$\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/reflection_x.jpg}\n\t\t\t\\caption{Reflection around $x$-axis}\n\t\t\\end{figure}\n\n\n\t\t\\item Reflection relatively to the origin O:\n\t\t$$\\begin{pmatrix}x'\\\\y'\\end{pmatrix}=\\begin{pmatrix}-1&0\\\\0&-1\\end{pmatrix}\\begin{pmatrix}x\\\\y\\end{pmatrix}$$\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/reflection_origin.jpg}\n\t\t\t\\caption{Reflection around origin O}\n\t\t\\end{figure}\n\t\tThe reader has probably noticed that the reflection through the origin is in fact a special case of the plane rotation matrix (a rotation of angle $\\pi$), and this was not difficult to guess, as we can see on the figure that this reflection is only a rotation around a special center of rotation. The axial reflections, for their part, can be seen as rotations of angle $\\pi$ performed in space around the corresponding axis.\n\t\t\n\t\\end{itemize}\n\t\n\tHere is a quite good summary of what we have seen so far:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/affine_transformation_matrix_summary.jpg}\n\t\t\\caption[Summary of affine transformations]{Summary of affine transformations (source: Wikipedia)}\n\t\\end{figure}\n\t\n\t\n\t\\begin{flushright}\n\t\\begin{tabular}{l c}\n\t\\circled{70} & \\pbox{20cm}{\\score{4}{5} \\\\ {\\tiny 31 votes,  71.61\\%}} \n\t\\end{tabular} \n\t\\end{flushright}\n\t\n\t%to make section start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\t\n\t\\section{Non-Euclidean Geometry}\\label{non-euclidean geometry}\n\t\\lettrine[lines=4]{\\color{BrickRed}O}ne may recall from the section of Euclidean Geometry that the sum of the angles in a triangle is 180 degrees. This comes from the Euclidean or \"flat\" geometry, which includes something called the \"parallel postulate\", which states that if you were to draw two points next to one another, then extend from those points two lines that are parallel to one another, those lines could be extended to infinity without ever becoming closer together or further apart. This makes perfect intuitive sense; Euclidean geometry seems to be the structure of our world. However, our senses often deceive us. It turns out that gravity literally warps the geometry of space-time itself, as we will see in the section of General Relativity. That's right: because of the gravitational field, space-time is non-Euclidean (and there is some amount of gravity everywhere, since it is a force with infinite range). 
If not Euclidean, what else can geometry even be? As illustrated below, geometry on curved surfaces is a little different from geometry on flat (Euclidean) surfaces:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/curvatures.jpg}\n\t\t\\caption{Positive (elliptic), negative (hyperbolic) and flat (Euclidean) curvatures}\n\t\\end{figure}\n\t\n\tIn a negatively curved (\"hyperbolic\") geometry, the angles in a triangle add up to less than $\\pi$ radians, and parallel lines diverge from one another. In a positively curved geometry (also known as an \"elliptic\" geometry), the angles in a triangle add up to more than $\\pi$ radians, as proved in the section of Trigonometry, and parallel lines converge towards one another. A \"flat\" geometry, the Euclidean case, has zero curvature. It is a special case that is the exact point in between hyperbolic and elliptic geometries. Another illustration below shows the failure of the parallel postulate in non-Euclidean spaces:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/curvatures_plane_view.jpg}\n\t\\end{figure}\n\tTherefore we see that non-Euclidean geometries are all geometries that do not necessarily satisfy all of Hilbert's axioms (\\SeeChapter{see section Euclidean Geometry page \\pageref{hilbert axioms}}) but are free of any contradiction among themselves (unlike the old axioms of Euclid, particularly the one on parallels).\\\\\n\t\n\tA particular representation of this type of geometry consists in defining the points as being distributed over the surface of a sphere (they are the intersections of the diameters of the sphere with the surface), and the lines, generalizing the concept of straight line (we now say \"geodesic\"), as the intersections of the surface of the sphere with the planes containing the center of the sphere. Two points then uniquely define a line and a point is always given by the intersection of two lines. However, in this geometry, if we give ourselves a line $AB$ and an external point $P$ to the line, there is no line passing through $P$ and not intersecting $AB$. So Euclid's fifth postulate is not satisfied because through $P$ we cannot draw any parallel to $AB$.\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/euclid_paralle_violation.jpg}\n\t\t\\caption{Illustrated example of the violation of Euclid's fifth postulate}\n\t\\end{figure}\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tBefore starting to read this section, we strongly recommend the reader to read, and if possible to understand, the content of the sections on Trigonometry, Euclidean Geometry and Tensor Calculus, as we will use many results, not necessarily trivial, that we have proved in them.\n\t\\end{tcolorbox}\n\t\n\tIn the section on Euclidean Geometry, we studied a number of theorems related to planes. Let us insist on the fact that the \"plane\" is a two-dimensional figure whose curvature is null, immersed in a 3-dimensional space (therefore the plane can be moved in it). This said, it is perhaps now necessary to define a little more rigorously the intuitive concept of \"curvature(s)\".\n\t\n\t\\subsection{Curvature(s)}\n\t\\textbf{Definition (\\#\\mydef):}\tA figure is said to be \"\\NewTerm{curved}\\index{curved}\" if there is at least one point, lying on the line or lines defining the bounds of the figure (surfaces, volumes, etc.), 
whose tangent to the bound does not cross the bound elsewhere.\n\t\n\tIt was Carl Friedrich Gauss who in 1824 formulated the possibility that there are alternatives to Euclidean geometry. We distinguish between geometries with \"\\NewTerm{negative curvature}\\index{negative curvature}\", like the one of the Russian Nikolai Lobachevsky (1829) and of Bolyai (1832) (sum of the angles of a triangle below $\\pi$, infinite number of possible parallels to a line through a point), and geometries with \"\\NewTerm{positive curvature}\\index{positive curvature}\" such as the Riemann (1867) one (sum of the angles of a triangle greater than $\\pi$, parallels joining at the poles).\n\t\n\tWe will see in this section various non-Euclidean geometries, the best known being the \"\\NewTerm{Riemannian geometries}\\index{Riemannian geometries}\" (constant curvature) and the \"Lobachevsky geometries\" (of hyperbolic type with non-constant curvature).\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe geometry commonly named \"Riemann geometry\" is a three-dimensional spherical space, a finite but boundless space, with regular curvature, an alternative to the Euclidean postulate on parallels.\n\t\\end{tcolorbox}\n\t\n\tThe interest in the study of these geometries is that we cannot determine whether the Universe we live in is of one type of geometry rather than another because, given our (physical) size, immersed as we are in a geometry of low curvature, any space seems locally Euclidean to us (two parallel lines do not intersect). However, General Relativity, which makes extensive use of tensor calculus (the generalization of any geometry, as seen in the section on Tensor Calculus), shows that there are regions of space where the geometry is strongly curved and therefore locally non-Euclidean, and only the study of this kind of geometries allows us to draw up theories explaining observations that cannot be exploited with human intuition alone.\n\t\n\tA distinction has also to be made between \"\\NewTerm{intrinsic curvature}\\index{curvature!intrinsic curvature}\" and \"\\NewTerm{extrinsic curvature}\\index{curvature!extrinsic curvature}\\footnote{A common question on the Internet is whether a space can only be curved in relation to an absolute \"straight space\", since otherwise we would never know that it is curved; must there be an absolutely straight space geometry underlying the curved space?}\". The easy to visualize curvature is extrinsic. The physically meaningful curvature is intrinsic! Intrinsic curvature is what General Relativity deals with!\n\t\n\tImagine a sheet of paper laid flat on a table. On this paper is a bug. The bug walks along the shortest path on the paper from point $A$ to point $B$. We can all agree that the paper is flat and that the line is straight. Now roll up the paper into a tube. Again the bug walks along the shortest path from point $A$ to point $B$. The bug will have walked across the exact same path on the paper (assuming that the shortest path does not cross the seam). From the point of view of the paper, the path is straight. From our external point of view, the paper is curved and the path follows the paper. The above is an example of extrinsic curvature. We have a two dimensional space (the paper) and a larger three dimensional space (our ordinary three dimensional geometry) in which it is embedded. The curvature of the path depends on how we roll up the paper, i.e. how we choose to do the embedding. 
For an inhabitant living on the surface of the paper using measurements made only on that surface, extrinsic curvature is not detectable.\n\t\n\t Extrinsic curvature is a property of the embedding. The bug traces out the same path on the paper, whether it is rolled or unrolled. It is only from our external viewpoint that we can see any \"curvature\". In the case of a sphere, things are not so simple. We cannot roll up a flat sheet of paper into a sphere without stretching or wrinkling it. If the bug on a sphere follows shortest paths and keeps careful track of distances, he can notice that the surface he lives on is not Euclidean. The interior angles of a triangle will sum to more than $180$ degrees, as we have already proved in the section of Trigonometry. The above is an example of intrinsic curvature. For an inhabitant living on the surface using measurements made only on that surface, intrinsic curvature is detectable.\n\t\n\tHere comes the hard part... In the examples above, we talked about a two dimensional surface embedded within a three dimensional Euclidean space. That was just an aid to visualization. We can talk about the geometry of a two (or three or four or more) dimensional space without requiring that it be embedded in a higher dimensional space at all. A common way to do that is to imagine that the inhabitants of the space are able to measure distances. From any point in the space to any other point in the space, they can measure the distance between them. The distance measurement is the mathematical notion of a \"metric\". A space for which a metric exists is called a \"metric space\", as we know. Given a metric space, one can define intrinsic curvature in terms of the metric.\n\t\n\tA standard way of handling a metric space in physics is to divide it up (if needed) into pieces and equip each piece with a Cartesian coordinate system. There are some rules about the boundaries and how to handle the seams between the pieces that we need not concern ourselves with. The result is a \"manifold\". A surface of a sphere can be modelled as a two dimensional manifold. There is no embedding in three dimensional space and no meaningful notion of extrinsic curvature. There is still non-zero intrinsic curvature for this space.\n\t\n\tBefore we tackle certain non-Euclidean geometries formally and in an abstract manner, we will first make a pragmatic and specific introduction of certain concepts that are not totally foreign to us, because they have already been covered theoretically in other sections. Once this introduction is made, which will be very useful pedagogically, we will address these concepts more rigorously.\n\t\n\t\\subsection{Axioms of Non-Euclidean Geometry}\n\tTo obtain a non-Euclidean geometry, the parallel postulate A.P1 (or its equivalent) that we have seen during our study of Euclidean Geometry must be replaced by its negation (A.NP1). Everything that follows is related to the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/curvatures_plane_view.jpg}\n\t\\end{figure}\t\n\tThen the axiom of parallels becomes:\n\t\\begin{enumerate}[leftmargin=2cm]\n\t\t\\item[A.NP1.] 
This axiom, also named the \"\\NewTerm{Lobachevsky axiom}\\index{Lobachevsky axiom}\" or \"\\NewTerm{Hyperbolic Parallel Postulate (HPP)}\\index{Hyperbolic Parallel Postulate}\", assumes that there exist at least two lines passing through a given point $P$ not located on a given line and parallel to this line (we speak then of \"two diverging ultra-parallel lines\").\n\t\\end{enumerate}\n\tAn alternative and easier formulation is:\n\t\\begin{enumerate}[leftmargin=2cm]\n\t\t\\item[A.NP1'.] There exist a line $L$ and a point $P$ not on $L$ such that at least two distinct lines parallel to $L$ pass through $P$. In other words: through a point not on a line there is more than one line parallel to the given line!!!\n\t\\end{enumerate}\n\tThe above form of the axiom of parallelism ensures the existence of an infinite number of parallel lines satisfying the given conditions. The geometric space defined by the axiomatic system including the Lobachevsky axiom is named \"\\NewTerm{hyperbolic non-Euclidean geometry}\\index{hyperbolic non-Euclidean geometry}\". The geometry of the curved surface of a hyperboloid can be described as a hyperbolic geometry. In the hyperbolic space, the sum of the interior angles in an arbitrary triangle is less than $\\pi$.\n\t\n\tA second possible form of the negation of the axiom of parallelism is as follows:\n\t\\begin{enumerate}[leftmargin=2cm]\n\t\t\\item[A.NP1''.] There exists no line passing through a given point not located on a given line and parallel to this line.\n\t\\end{enumerate}\n\tThe geometric space defined by this last axiomatic system is named \"\\NewTerm{elliptic non-Euclidean geometry}\\index{elliptic non-Euclidean geometry}\". The geometry of the sphere (spherical geometry) can be regarded as an elliptic geometry. In the elliptic space, the sum of the interior angles in an arbitrary triangle is greater than $\\pi$ (see proof in the section Trigonometry).\n\t\n\t\\subsection{Geodesic and Metric Equation}\\label{geodesic and metric equation}\n\tLet us come back to the concepts of geodesic and curvature, which we have often mentioned in the section of Tensor Calculus (not having read the section of Tensor Calculus is normally not a problem for understanding what follows).\n\t\n\tConsider, as an introductory example, the two-dimensional surface of a sphere of radius $R$. Given two points $B$ and $C$ diametrically opposed, we seek the shortest distance $s$ measured on the sphere between $B$ and $C$. The curve we get is, as we know, a \"geodesic\", a concept that generalizes, for an arbitrary surface, the notion of the straight lines for planes:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/geodesic.jpg}\n\t\t\\caption{Illustration of the problem to research the geodesic on a sphere}\n\t\\end{figure}\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe assume as intuitive that the length of a curve in three-dimensional Euclidean space is always greater than or equal to the length of any planar projection of this curve. 
The geodesic curve is therefore necessarily a plane curve.\n\t\\end{tcolorbox}\n\t\n\tThe radius between the axis $\\text{O}z$ and one of the points $B$ and $C$ is trivially given by some elementary trigonometry (\\SeeChapter{see section Trigonometry page \\pageref{remarkable trigonometric identities}}):\n\t$$r=R\\sin(\\theta)$$\n\tAnd therefore the half circumference of the circle at the height of $B$ and $C$ is given by:\n\t$$s_2=\\pi r=\\pi R\\sin(\\theta)$$\n\tAnd we have proved in the chapter of trigonometry that the perimeter of a circle depending on the opening angle of the latter was given by ($\\alpha$ must be in radians!):\n\t$$s=\\alpha r$$\n\tIt therefore comes automatically, for the great circle arc of opening angle $2\\theta$:\n\t$$s_1=2\\theta R$$\n\tFinally:\n\t$$\\frac{s_2}{s_1}=\\frac{\\pi\\sin(\\theta)}{2\\theta}$$\n\tA plot or Taylor approximation easily shows that $\\pi\\sin(\\theta)\\geq 2\\theta$ on the interval $[0,\\pi/2]$, so on the same interval $s_2 \\geq s_1$ (there is equality at $\\theta=0$ and $\\theta=\\pi/2$).\n\t\n\tThe geodesics of the sphere are therefore the great circle arcs $s_1$, paths taken by aircraft for intercontinental flights (in empty space, without wind and on a perfectly spherical planet), and correspond to the lines obtained between the surface of the sphere and a plane passing through the center thereof!\n\t\n\tThe geometrical properties of figures drawn on the surface of a sphere are obviously no longer those of Euclidean Geometry. Thus, the shortest path from point $B$ to point $C$ on the spherical surface consists of a great circle passing through the points $B$ and $C$, plotted on a plane passing through the center of the sphere. Great circle arcs play the same role for the sphere as lines in the plane. These are the \"\\NewTerm{geodesics}\\index{geodesic}\" of the sphere.\n\t\n\tNow consider two two-dimensional surfaces: the surface of the sphere and that of the cylinder. Given two points $B$ and $C$, we trace the geodesic curve between these points:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/some_geodesics.jpg}\n\t\t\\caption{Some geodesics of trivial surface on trivial volumes}\n\t\\end{figure}\n\t\n\tThe cylinder may be cut parallel to its axis and unfolded flat. The geodesic thus appears as a straight line of the plane! We say then that the cylinder is \"inherently flat\" (even if its topology is different from the plane; we must especially avoid the cut crossing the geodesic). This is intuitively obviously not the case for the surface of the sphere.\n\t\n\tIn the case of the cylindrical surface, we can define the Cartesian coordinates of the plane $B(y_1,z_1)$ and $C(y_2,z_2)$, which permits writing the length $s$ of the curve (straight line) $BC$ with the Pythagorean theorem:\n\t$$s^2=(y_2-y_1)^2+(z_2-z_1)^2$$\n\tThe metric of the plane is Euclidean and under infinitesimal form we get the \"\\NewTerm{Euclidean metric equation}\\index{Euclidean metric equation}\":\n\t$$\\mathrm{d}s^2=\\mathrm{d}y^2+\\mathrm{d}z^2$$\n\tOn the cylinder, the change of variable $y=r\\theta$ (with the angle in radians!) gives:\n\t$$s^2=r^2(\\theta_2-\\theta_1)^2+(z_2-z_1)^2$$\n\tOr under local form:\n\t$$\\mathrm{d}s^2=r^2\\,\\mathrm{d}\\theta^2+\\mathrm{d}z^2$$\n\tThe surface of the cylinder can thus be represented by Cartesian coordinates similar to those of the plane, the metric of the cylinder surface being Euclidean under infinitesimal form and under global form.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe previous relation is what we got in the section of Tensor Calculus for the metric equation in polar coordinates.\n\t\\end{tcolorbox}\n\tWe can now turn to the problem of writing the analogue of the Pythagorean theorem on a spherical surface. 
The impossibility of cutting the ball and flattening it to fit a plane suggests some difficulties...\n\t\n\tThat is why the equation of the metric can't be written in a general form like the Pythagorean theorem. Indeed, we proved in the section of Tensor Calculus that the latter was given for a spherical surface by:\n\t$$\\mathrm{d}s^2=R^2\\,\\mathrm{d}\\theta^2+R^2\\sin^2(\\theta)\\,\\mathrm{d}\\phi^2$$\n\tHowever, locally (that is to say in a small region of small dimension relative to the radius of the sphere), the properties of the sphere can be described by Cartesian coordinates of a plane tangent to the surface (the essential property of Riemann spaces), as the metric equation is locally Euclidean!:\n\t$$\\mathrm{d}s^2=\\mathrm{d}\\xi^2+\\mathrm{d}\\eta^2$$\n\tBy writing:\n\t$$\\mathrm{d}\\xi=R\\,\\mathrm{d}\\theta,\\qquad \\mathrm{d}\\eta=R\\sin(\\theta)\\,\\mathrm{d}\\phi$$\n\tWe therefore have:\n\t$$\\mathrm{d}s^2=R^2\\,\\mathrm{d}\\theta^2+R^2\\sin^2(\\theta)\\,\\mathrm{d}\\phi^2$$\n\twith:\n\t$$g_{\\theta\\theta}=R^2,\\qquad g_{\\phi\\phi}=R^2\\sin^2(\\theta),\\qquad g_{\\theta\\phi}=g_{\\phi\\theta}=0$$\n\tWhile $(\\theta,\\phi)$ are the \"\\NewTerm{Gauss coordinates}\\index{Gauss coordinates}\", $(\\xi,\\eta)$ are the \"\\NewTerm{Riemann coordinates}\\index{Riemann coordinates}\\label{riemann coordinates}\" of the locally tangent plane. Thus, we notice that Euclidean space is a special case of Gauss coordinates for which we have (\\SeeChapter{see section Tensor Calculus page \\pageref{interval invariant}}):\n\t$$\\mathrm{d}s^2=\\sum_i(\\mathrm{d}x^i)^2$$\n\tand thus the metric tensor is an identity matrix (the diagonal elements are equal to $1$, the rest being zero):\n\t$$g_{ij}=\\delta_{ij}=\\begin{pmatrix}1&0\\\\0&1\\end{pmatrix}$$\n\tThe reader will perhaps have noticed that if we take the vector basis of the polar coordinates as seen in the section of Vector Calculus but without normalizing the vectors to unit length (that is to say without dividing the first vector by $r$ and the second by $\\sin(\\theta)$) we have:\n\t$$g_{ij}=\\vec{e}_i\\cdot\\vec{e}_j$$\n\twith $i,j\\in\\{\\theta,\\phi\\}$, and so on for every other coordinate system.\n\t\n\t\\pagebreak\n\t\\subsection{Riemann Spaces}\\label{riemann spaces}\n\tTo better understand what a Riemann space is, we will now go through a small example of a two-dimensional surface (a classic example):\n\t\n\tConsider a sphere of radius $r$, of surface $S$, located in the ordinary three-dimensional space. The Cartesian coordinates $x, y, z$ of a point $M$ on the surface $S$ can be expressed, for example, depending on the spherical coordinates $(r,\\theta,\\phi)$ (\\SeeChapter{see section Vector Calculus page \\pageref{spherical coordinates}}). The sphere is fully described for a given radius $r$ with $0\\leq \\phi < 2\\pi$ and $0 \\leq \\theta < \\pi$.\n\t\n\tThree such parameters, giving the possibility to determine a point on the surface of a sphere, are as we know (\\SeeChapter{see section Tensor Calculus page \\pageref{curvilinear coordinates tensor calculus}}) curvilinear coordinates of the surface, also named \"Gaussian coordinates\" (Gauss being one of the first mathematicians interested in the study of bodies immersed in non-Euclidean spaces). Other arbitrary parameters $u, v, w$ may of course be chosen as curvilinear coordinates on the surface.\n\t\n\tThe linear element $\\mathrm{d}s^2$ of the surface, the squared distance between two infinitely close points $M, M'$, is written, based on spherical coordinates, as we have seen in the section of Tensor Calculus and just above:\n\t$$\\mathrm{d}s^2=\\mathrm{d}r^2+r^2\\,\\mathrm{d}\\theta^2+r^2\\sin^2(\\theta)\\,\\mathrm{d}\\phi^2$$\n\tWe thus obtain an expression of the linear element based only on the three Gauss coordinates $(r,\\theta,\\phi)$. 
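\n\t\n\tAs a cross-check of these relations, here is a small Python/SymPy sketch (our own, not the book's Maple) that recomputes the induced metric of the sphere of fixed radius $R$ directly from the embedding $x=R\\sin(\\theta)\\cos(\\phi)$, $y=R\\sin(\\theta)\\sin(\\phi)$, $z=R\\cos(\\theta)$:\n\t\\begin{verbatim}\nimport sympy as sp\n\nR, theta, phi = sp.symbols('R theta phi', positive=True)\n\n# Embedding of the sphere of radius R in ordinary three-dimensional space\nX = sp.Matrix([R*sp.sin(theta)*sp.cos(phi),\n               R*sp.sin(theta)*sp.sin(phi),\n               R*sp.cos(theta)])\n\nu = [theta, phi]                  # Gauss coordinates on the surface\nX_u = [X.diff(q) for q in u]      # tangent vectors\n\n# Metric tensor g_ij = (dX/du^i) . (dX/du^j)\ng = sp.Matrix(2, 2, lambda i, j: sp.simplify(X_u[i].dot(X_u[j])))\nprint(g)   # -> Matrix([[R**2, 0], [0, R**2*sin(theta)**2]])\n\\end{verbatim}\n\tWe indeed recover $\\mathrm{d}s^2=R^2\\,\\mathrm{d}\\theta^2+R^2\\sin^2(\\theta)\\,\\mathrm{d}\\phi^2$, the metric equation recalled above.\n\t\n\t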
We could of course restrict ourselves to a local study (tangent plane) so that the linear element is a function of $(\theta,\phi)$ only, as we have seen above:
	
	Written using the three parameters, the surface of the sphere (considered as a two-dimensional space) is an example of a two-dimensional Riemann space whose linear element takes the well-known general form (see the chapter of Tensor Calculus):
	
	where the $\mathrm{d}u^i,\mathrm{d}u^k$ are the contravariant components of the vector $\mathrm{d}\vec{M}=\overrightarrow{MM'}$ relative to the frame denoted by $(M,\vec{e}_j)$.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The study of figures on Riemannian surfaces is part of differential geometry, to which we devote a whole section in this chapter.
	\end{tcolorbox}
	Now consider any surface with coordinates $u^1,u^2$. The Cartesian coordinates $x, y, z$ of the ordinary space in which this surface is embedded are written in general terms with the Gauss coordinates:
	
	Remember also that the metric equation, which can be written in tensorial notation:
	
	can be written in expanded form as follows (this is proved in detail with a geometric approach in the section of Differential Geometry):
	
	with:
	
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} The expression given above for the linear element $\mathrm{d}s$ is named the "\NewTerm{quadratic fundamental form}\index{quadratic fundamental form}" of the surface considered. The coefficients $E, F, G$ are functions of the curvilinear coordinates. Generally this surface, considered as a two-dimensional space, will be an example of a Riemann space, for arbitrary curvilinear coordinates.\\
	
	\textbf{R2.} The different Riemann spaces are what we name in general (because there are not only Riemannian spaces of constant curvature) a "\NewTerm{variety}\index{variety}" equipped with a Riemannian metric. A variety may be defined (not formally), for example, as a set of points in an ambient space. Generally a surface gives the idea of a two-dimensional variety. The sphere and the torus are two-dimensional varieties without borders. A cylinder of revolution and a hyperbolic paraboloid are two-dimensional open varieties with borders at infinity. But we can also consider abstract varieties. This is the case for example of a configuration space. This is then an $n$-dimensional space represented by a set $q^i$ (or denoted by $u^i$) of generalized coordinates (see the introduction to the Lagrangian formalism in the section of Analytical Mechanics page \pageref{lagrangian formalism}), the latter taking their values in a finite domain or not.
	\end{tcolorbox}
	
	We can now define a little better what a Riemann space is.
	
	\textbf{Definition (\#\mydef):}	A "\NewTerm{Riemann space}\index{Riemann space}" is a variety to which we attach a metric. This means that in every part of the variety analytically represented by a coordinate system $u^i$, we give ourselves a quadratic differential form:
	
	which is the metric of the space.
	
	The coefficients $g_{ij}$ are not entirely arbitrary and must verify, as we have shown in the section of Tensor Calculus, the following conditions:
	\begin{enumerate}
		\item[C1.] The components are symmetric, i.e. $g_{ij}=g_{ji}$
		
		\item[C2.] The determinant of the matrix of the $g_{ij}$ is not zero
		
		\item[C3.]
The differential form of the linear element $\mathrm{d}s^2$, and therefore the concept of distance defined by the $g_{ij}$, is invariant under any change of coordinates.
		
		\item[C4.] All second-order partial derivatives of the $g_{ij}$ exist and are continuous; the $g_{ij}$ are therefore of class $\mathcal{C}^2$.
		\end{enumerate}
		A Riemann space is a space of points, each identified by a system of $n$ coordinates $u^i$, with a metric such that the differential form of the linear element satisfies the above conditions. This metric is therefore named a "\NewTerm{Riemannian metric}\index{Riemannian metric}".
		
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} If the metric is positive definite, that is to say if $g_{ij}v^iv^j>0$ for any non-zero vector $\vec{v}$, we say that the space is "\NewTerm{strictly Riemannian}\index{strictly Riemannian}". In this case, the determinant of the matrix $g_{ij}$ is strictly positive and all the eigenvalues of the matrix are strictly positive (\SeeChapter{see section Analytical Geometry page \pageref{classification of conical by the determinant}}).\\
	
	\textbf{R2.} By definition, we say that a metric space is Euclidean when the fundamental tensor of this space may be reduced, by an appropriate change of coordinates, to the form given by the canonical orthonormal basis (\SeeChapter{see section Vector Calculus page \pageref{canonical basis}}), that is to say $g_{ij} =\delta_{ij}$.\\
	
	\textbf{R3.} The definition of Riemannian spaces shows that Euclidean space is a very special case of these spaces. So there exists only one Euclidean space, but we can create an infinite number of Riemannian spaces.
	\end{tcolorbox}
	It is important to see that:
	
	can be rewritten as:
	
	which we denote by tradition:
	
	Therefore:
	
	Hence the length of a curve on any differentiable surface:
	
	
	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	Let us consider the parametric equation of a special hyperbolic surface:
	
	We now take the parametric equation of a circle of radius $1$ in the plane:
	
	That is to say, visually, with Maple 4.00b:\\
	
	\texttt{>x:=u;y:=v;z:=u*v;\\
	>with(plots):\\
	>surface:=plot3d([x,y,z],u=-1.5..1.5,v=-1.5..1.5,grid=[30,30],\\
	scaling=constrained,axes=boxed,style=PATCH):\\
	>curve:=spacecurve([cos(t),sin(t),cos(t)*sin(t)],t=-4*Pi..4*Pi,\\
	numpoints=1000,scaling=CONSTRAINED,orientation=[50,60],style=PATCH,\\
	axes=NORMAL,color=black,thickness=3):\\
	>display(surface,curve,orientation=[-20,70]);
	}
	
	Which gives:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/curvilinear_length.jpg}
	\end{figure}
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	Therefore:
	
	Then we have:
	
	And we want to calculate the perimeter of the circle on the surface.
Therefore:
	
	We make the change of variable $\tau=2t$, and therefore:
	
	and therefore:
	
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	As the integrand is even:
	
	We then have:
		
	With:
	
	which is referred to as the "\NewTerm{complete elliptic integral of second kind}\index{elliptic integral!complete elliptic integral of second kind}\label{elliptic integral riemann space}" (\SeeChapter{see section Differential and Integral Calculus page \pageref{elliptic integrals}}). This integral cannot be calculated in closed form. But a numerical integration gives:
	
	\end{tcolorbox}
	
	\begin{flushright}
	\begin{tabular}{l c}
	\circled{30} & \pbox{20cm}{\score{4}{5} \\ {\tiny 26 votes,  80.77\%}} 
	\end{tabular} 
	\end{flushright}
	
	%to force start on odd page
	\newpage
	\thispagestyle{empty}
	\mbox{}			
	\section{Projective Geometry}\label{projective geometry}
	\lettrine[lines=4]{\color{BrickRed}S}ince the time of Chasles and until the 1930s, Projective Geometry was often synonymous with "higher geometry". It was opposed to Euclidean Geometry, which was said to be elementary and analytical. At the time of Monge, Carnot and von Staudt, we also spoke of "geometry of position" or "geometry of situation". These geometries study figures from the point of view of their respective positions and of the invariant properties that bind them under a geometric transformation (rotation, symmetry, scaling, etc.), homographic in particular. Besides the harmonic division, a basic concept, it uses the famous anharmonic ratio (cross ratio), the inverse, the involution, the transformation by reciprocal polars, the stereographic projection\footnote{i.e. projection on a plane}, the correlation, the homology, the duality, the conics, etc. (see later in this text for the definitions).
	
	\textbf{Definition (\#\mydef):} "\NewTerm{Projective geometry}\index{projective geometry}" is a topic of mathematics. It is the study of geometric properties that are invariant with respect to projective transformations. This means that, compared to elementary geometry, projective geometry has a different setting, the "\NewTerm{projective space}\index{projective space}", and a selective set of basic geometric concepts. 
	
	Projective geometry is quite abstract apart from some general and basic principles outlined below (undergraduate level). It is therefore relatively difficult to understand and requires, before studying it: a good knowledge of elementary geometry, including in space; a perfect mastery of three-dimensional space in the observation-representation-interpretation cycle; and a command of complex analysis for the advanced topics. This will lead us to introduce some concepts that do not normally have their place in this section, but that, we think, can greatly help the reader to understand this branch of mathematics.
	
	First, we will discuss the basic concepts of perspective, with a particular focus on the concept of "projective" representation (there are other, empirical perspective methods: cavalier, cabinet, isometric, military, ... but these latter have no exact mathematical meaning even if they represent volumetric objects quite properly).
Then we will study the mathematical representations of some three-dimensional objects, with some computer applications, to finally go and study "hard" and "pure" projective geometry.
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	"\NewTerm{Descriptive geometry}\index{descriptive geometry}" is a rigorous artistic form of projective geometry but not a formal one (in the sense that it is not mathematically formalized... at least as far as we know...).
	\end{tcolorbox}
	
	\pagebreak
	\subsection{Conical Perspective (Central Perspective)}
	One problem with the study of three-dimensional volumes and their representation is the concept of "\NewTerm{perspective}\index{perspective}". Indeed, human beings cannot directly see the $3$ dimensions of an object; it is the brain that interprets the shadows and reflections of an object so that we can perceive it as having a volume (there are optical illusions that go in this direction...: the "trompe l'oeil").

	Here is an animated example of one of the most famous optical illusions, named the Shepard tabletop illusion, which shows that we should, statistically speaking, not trust our intuitions too much...:
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=\textwidth,height=400pt,
	]{}{swf/shepard_tabletops.swf}
	\end{center}
	The animation above will run for people having a PDF reader with Adobe Flash player installed and activated (otherwise see here: \url{https://vimeo.com/575741013}).
	
	We will focus in the following paragraphs on the "\NewTerm{conical perspective}\index{conical perspective}", also named "\NewTerm{central perspective}\index{central perspective}" or "\NewTerm{linear perspective}\index{linear perspective}".
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	In the field of projective geometry we do not talk about "\NewTerm{conical perspective}\index{conical perspective}" but about "\NewTerm{conical projection}\index{conical projection}".
	\end{tcolorbox}
	
	\textbf{Definition (\#\mydef):} The "\NewTerm{conical perspective}\index{conical perspective}" or "\NewTerm{perspective projection}\index{perspective projection}" (not to be confused with the "conical projection"!) is by construction the closest representation of our human visual perceptions; in particular it makes a sphere appear as a circle, as human vision is based on a visual cone which, when looking at a sphere, is equivalent to seeing a circle, as illustrated below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/visual_cone.jpg}
		\caption{Visual cone and corresponding projection circle}
	\end{figure}
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The conical perspective is that of Renaissance painters. It is also the one that appears on photos.
	\end{tcolorbox}
	The difficulty of the representation in perspective is to translate into a plane (for example that of the paper or of the computer screen, as our eyes make a projection of the 3D world into a 2D image in our brain) a construction (object) that is defined - fairly simply, by the way - in space using mathematical tools.
	
	A conical perspective of the conventional $3$-dimensional space is a projective transformation that sends all the points of space onto a same plane of this space.
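	As a small illustration (a sketch of ours, not from the original text), such a transformation is easy to compute: each point $M$ is sent to the intersection of the visual ray joining the eye $O$ to $M$ with the chosen projection plane:
	\begin{verbatim}
import numpy as np

def central_projection(M, O, P0, n):
    """Send M to the intersection of the line OM with the plane
    through P0 of normal n (conical / central projection)."""
    M, O, P0, n = (np.asarray(v, float) for v in (M, O, P0, n))
    d = M - O                       # direction of the visual ray
    denom = n @ d
    if abs(denom) < 1e-12:
        raise ValueError("ray parallel to the projection plane")
    t = n @ (P0 - O) / denom        # ray parameter at the intersection
    return O + t * d

# Eye at distance 2 behind the plane z = 0, looking along z:
m = central_projection(M=(1, 1, 2), O=(0, 0, -2), P0=(0, 0, 0), n=(0, 0, 1))
print(m)                            # [0.5 0.5 0. ]
	\end{verbatim}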
Such a perspective requires the data of a point O (equivalent to the position of the observer's eye) and of a projection plane named the "\NewTerm{table}\index{table}" or "\NewTerm{Dürer glass}\index{Dürer glass}" (the equivalent of our retina), the name referring to a quite old copy painting method:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/durer_plane.jpg}
		\caption{Origin of the Dürer glass method illustration}
	\end{figure}
	We will see during the theoretical mathematical developments that, unlike affine projections (see the following subsection), the conical projection does not preserve the barycentre (and therefore the ratios of lengths on a given straight line), but it does preserve alignment and the cross ratio.
	
	When we speak of conical perspective, we use a few special planes and straight lines of three-dimensional space (see figure below).
	\textbf{Definitions (\#\mydef):}
	\begin{enumerate}
		\item[D1.] The "\NewTerm{picture plane}\index{picture plane}" or "\NewTerm{Dürer glass}\index{Dürer glass}", denoted T, is the plane on which we are drawing (the projection plane).

		\item[D2.] The "\NewTerm{ground plane}\index{ground plane}", denoted S, is a fixed plane, perpendicular to the picture plane T.
	
		\item[D3.] The "\NewTerm{point of view}\index{point of view}" (or "\NewTerm{center of projection}\index{center of projection}"), denoted O, is a point outside of T and of S: this is the point where the eye must be placed so that the drawing on the picture plane $T$ coincides with the real image.
	
		\item[D4.] The "\NewTerm{horizon plane}\index{horizon plane}", denoted H, is the plane parallel to the ground plane S passing through the point of view O.
	
		\item[D5.] The "\NewTerm{horizon line}\index{horizon line}", denoted h, is the intersection of the horizon plane H with the picture plane T.
		
		\item[D6.] The "\NewTerm{Earth line}\index{Earth line}", denoted EL, is the intersection of the ground plane S with the picture plane $T$.
		
		\item[D7.] A plane or a straight line parallel to the ground plane S is named "\NewTerm{horizontal}\index{horizontal}".
		
		\item[D8.] A plane or a straight line perpendicular to the ground plane S is named "\NewTerm{vertical}\index{vertical}".
		
		\item[D9.] A plane or a straight line parallel to the picture plane T is named "\NewTerm{front}\index{front}".
		
		\item[D10.]
A plane or a straight line perpendicular to the picture plane T is named "\NewTerm{end}\index{end}".
	\end{enumerate}
	Here is a diagram showing these different concepts:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.7]{img/geometry/perspective.jpg}
		\caption{Definition of the planes and lines of projective geometry}
	\end{figure}
	
	\pagebreak
	\subsubsection{Images of Points}
	Any volumetric object (whose volume can only be ascertained by touch or at the level of mathematical abstraction), composed of a set of points $M$, is for our brain the image of a plane projection $m$ whose support is a surface in the space between the observed object and our eye.
	
	In mathematics, this surface, which as we know is named the "table", is defined in its central view by the horizon line (where the vanishing points are located) and by a physical reference line named the earth line (see figure below, where the observer is at a higher point than the observed object):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/projective_table.jpg}
		\caption{Example of what a "table" is in projective geometry}
	\end{figure}
	The height between the ground line and the point of view is named the "\NewTerm{horizon height}\index{horizon height}" and is denoted by $h$.
	
	The objective, from this figure, is to mathematically determine a representation of a solid object on a surface (on a plane in the simple case, on an arbitrary surface otherwise) by knowing the equation of the line between the point $M$ (a point of the observed object) and the point $V$ (the point of view), in order to determine the coordinates of the intersection between this line and the table.

	From the above diagram we can draw the following relations using Thales' theorem (\SeeChapter{see section Euclidean Geometry}) in the different triangles:
	
	with $\Delta\geq 0$.

	If we put $y'=0$, according to the figure above, the coordinates of $m$ become:
	
	and:
	
	With these relations, the problem of a plane representation of a volumetric shape is completely solved, since we can always project a point (or the distance between two points) onto a table from the coordinates of the original.
	
	The term $\Delta$ is commonly named the "\NewTerm{focal length}\index{focal length}" from the point of view to the table, and optical specialists usually denote it with the letter $f$.

	To understand this result, we can put ourselves in the context of a two-dimensional study where the observer is at the height of the ground line ($h = 0$), arranged to represent a person looking at the table (comparable to a TV screen, a computer screen or any other screen), in which we place the conventional $x$ and $y$ axes in the screen plane and the $z$-axis perpendicular to it (thus, relative to the last figure, $Y$ becomes $Z$ and vice versa).

	Thus, the above relation of ratios:
	
	becomes, with this change of axes:
	
	and, as $h = 0$ (which is often the case in front of screens):
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	This relation is a special form of what we name the "\NewTerm{homographic transformations}\index{homographic transformations}".
We will come back to the latter further below and prove some of their properties.
	\end{tcolorbox}
	If the table is placed on the reference axis (the projection table is assimilated to the screen), then we have $z'=0$, which gives us:
	
	and, proceeding in the same way:
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The last two relations are those that we use to make 3D animations programmed in Macromedia Flash 6.0 (see example further below), in Javascript (see example also further below) or in any other programming language.
	\end{tcolorbox}
	
	
	We see in the last two relations an identical term:
	
	this term corresponds to the "\NewTerm{depth}\index{depth}" of the perspective.

	In some works, this depth is denoted by (simple factorization):
	
	If we consider two points ($x_1,x_2$ or $y_1,y_2$) visible on the surface of a volume seen by an observer, and their respective distance $\Delta x$ or $\Delta y$, these quantities are preserved if the two points lie in the plane of the table, because we then have:
	
	as $z=0,\forall \Delta$.

	It is interesting to study what the value of the focal length should be for $z\neq 0$ in order to have $\Delta x'=\Delta x$ or $\Delta y'=\Delta y$. Thus, if we take the limit:
	
	by applying L'Hôpital's rule (derivative of the numerator and of the denominator, as seen in the section of Differential and Integral Calculus) and remembering that $z$ is fixed, then:
	
	From this result we can conclude the following:
	
	For the real distance between two points that do not coincide in the table but lie in a same plane to have an equal projected distance, it is necessary that the two lines that determine their intersection with the table be parallel. This implies, since the observer is a point of convergence, that we have to move the observer away to an infinite distance from the plane in order to keep the quantities projected on the table: this is the "\NewTerm{orthogonal parallel projection}\index{orthogonal parallel projection}", also sometimes named "\NewTerm{orthographic parallel projection}\index{orthographic parallel projection}".
	
	A very good way to see these results is to program in pseudo-3D on a computer.
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	There are many ways to do pseudo-3D with computers. The best known are technically OpenGL or DirectX and C++, but they are not very easy to introduce ...
so we'll see how to rotate a pseudo-sphere in the projective space with Macromedia Flash 6.0, in order to show how to apply the different theoretical elements presented above, but also to show that these are not the only tools available.\\
	
	For this purpose, open the Macromedia Flash 6.0 software and save the new animation with the name \textit{Circle.fla}:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_gui.jpg}
		\caption[]{Macromedia Flash 6.0 GUI}
	\end{figure}
	With the \textbf{Circle} tool in the \textbf{Drawing} toolbar, draw a disc of respectable size in the animation area:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/mm_flash_disc.jpg}
		\caption[]{Disc in the animation area}
	\end{figure}
	\end{tcolorbox}
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	After having selected the circle, with the \textbf{Fill} tool \includegraphics{img/geometry/mm_flash_fillin_tool.jpg} choose a Radial Gradient:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_gradient_color.jpg}
		\caption[]{Color palette to fill in the circle}
	\end{figure}
	Right-click on the circle, choose the option \textbf{Convert to Symbol} and enter the information as presented below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_convert_to_symbol.jpg}
		\caption[]{Conversion dialogue box from Object to Symbol}
	\end{figure}
	Rename the layer where your animation clip lies with the name \textit{3d clip}:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_rename_layer.jpg}
		\caption[]{Renaming the animation layer}
	\end{figure}
	Afterwards, double-click on your circle to enter your animation clip.\\
	
	There, select the circle again, right-click it, select \textbf{Convert to Symbol} and enter the information as presented below:
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_convert_to_symbol_circle.jpg}
	\end{figure}
	Then, in the properties of the circle, enter the name \textit{Point}:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_symbole_name.jpg}
	\end{figure}
	Now, in the animation clip named \textit{Cercle}, we will insert three frames: the first to define the mathematical functions necessary for recalculating the variables, the second calling the functions, the third allowing us to loop back to the second frame indefinitely.\\
	
	To do things in an almost clean way, we will create a second layer (renaming the first one, which contains our circle, with the name \textit{Cercle}), and we will name this new layer \textit{Code}:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_convert_create_layes.jpg}
	\end{figure}
	Do a right-click on the third frame of the layer containing our circle:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_insert_first_frame.jpg}
	\end{figure}
	and choose the option \textbf{Insert Frame}; for the layer \textit{Code} do almost
the same, but choosing \textbf{Insert Key Frame}. You should then get the following display:
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_frame_and_keyframe.jpg}
	\end{figure}
	Then, selecting the first frame of the layer \textit{Code}, activate the display of the Actions panel to insert the following code:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_action_script_01.jpg}
	\end{figure}	
	Repeat the same with the second frame of the layer \textit{Code}, but putting this time inside it:
	\begin{figure}[H]
		\includegraphics{img/geometry/mm_flash_action_script_02.jpg}
	\end{figure}
	and finally do the same with the third frame of the same layer, but putting inside it:
	\begin{figure}[H]
		\includegraphics{img/geometry/mm_flash_action_script_03.jpg}
	\end{figure}
	We then get the following animated result (the animation is only visible on a PDF reader with Adobe Flash installed and activated, otherwise go see here \url{https://vimeo.com/578782166}):	
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=360pt,height=360pt,
	]{}{swf/Cercle1.swf}
	\end{center}
	We will now bring in the $z$-axis by fixing $y$ (and therefore by moving $z$). Obviously, we will not see anything happen with regard to $z$ until we define the homographic projection, because a computer screen is basically unable to display the concept of depth... The code is then written:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_action_script_04.jpg}
	\end{figure}
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	The calculation of \texttt{Zpos} will enable us further below to calculate the depth of the movement of the object along the $z$-axis, and this is where the homographic projection will play a role.\\
	
	We then get a pseudo-sphere which rotates around an axis in a plane perpendicular to the screen. This is why we see the pseudo-sphere make left/right trips (the concept of distance is not yet present, for lack of a depth factor):
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=300pt,height=300pt,
	]{}{swf/Cercle2.swf}
	\end{center}
	The animation above will run for people having a PDF reader with Adobe Flash player installed and activated (otherwise see here: \url{https://vimeo.com/578782447}).\\
	
	Now we will use the relations:
	
	proved earlier.
If we seek to represent the depth of any point of the projection table, it is the distance ratio between two points of the table that will interest us in order to determine the scaling factor:
	
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	We will therefore have to apply this result as defining the scale of the projection table.\\

	We then have the following code, where the depth $P$ acts on the height and width of the animated surface of the instance \textit{Point} of our pseudo-sphere:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/mm_flash_action_script_05.jpg}
	\end{figure}
	Which gives us:
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=245pt,height=245pt,
	]{}{swf/Cercle3.swf}
	\end{center}
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	and obviously it works regardless of the parametric equation of the path followed! We can then copy this instance animation and change the height and the starting angle to get the $4$ vertices of a cube rotating in space.\\
	
	The animation above will run for people having a PDF reader with Adobe Flash player installed and activated (otherwise see here: \url{https://vimeo.com/578783079}).\\

	This is the next step:\\

	Indeed, let us change our code as shown below for $4$ pseudo-spheres rotating around an imaginary $z$-axis coming out of the screen (we use the rotation matrices proved in the section of Euclidean Geometry); and sorry for the comments in French, as the code was first written for French readers:\\
	
	\includegraphics{img/geometry/mm_flash_action_script_06.jpg}
	\includegraphics{img/geometry/mm_flash_action_script_07.jpg}\\
	
	This gives us ...
four charming pseudo-spheres turning around a common center (the animation below will run for people having a PDF reader with Adobe Flash player installed and activated, otherwise see here: \url{https://vimeo.com/578783905}):
	\end{tcolorbox}
	
	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=180pt,height=180pt,
	]{}{swf/Cercle4.swf}
	\end{center}
	Now we swap $y$ and $z$ again and apply the homographic projection:\\
	
	\includegraphics{img/geometry/mm_flash_action_script_08.jpg}
	\includegraphics{img/geometry/mm_flash_action_script_09.jpg}
	\end{tcolorbox}
	
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	and we get (the animation below will run for people having a PDF reader with Adobe Flash player installed and activated, otherwise see here: \url{https://vimeo.com/578784812}):
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=350pt,height=350pt,
	]{}{swf/Cercle5.swf}
	\end{center}
	For the remaining part, we will generate $8$ pseudo-spheres and, instead of always turning them about the same axis, we will rotate them around the three axes $x$, $y$ or $z$ using the variables \texttt{XAngle}, \texttt{YAngle} or \texttt{ZAngle} and the rotation matrices about each of these respective axes:\\
	
	\includegraphics{img/geometry/mm_flash_action_script_10.jpg}
	\end{tcolorbox}
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\includegraphics{img/geometry/mm_flash_action_script_11.jpg}
	\includegraphics{img/geometry/mm_flash_action_script_12.jpg}
	
	and here is the final result (the animation below will run for people having a PDF reader with Adobe Flash player installed and activated, otherwise see here: \url{https://vimeo.com/578785768}):
	\begin{center}
	\centering
		\includemedia[activate=pageopen,width=213pt,height=213pt,
	]{}{swf/Cercle6.swf}
	\end{center}
	\end{tcolorbox}
	Since Adobe Flash technology has become virtually dead since the time the code above was written... an anonymous reader rewrote the above code in WebGL, and here is the corresponding code (if you want to contribute the same example in other languages, don't hesitate):
	
	\includegraphics{img/geometry/webgl_pseudo_sphere_01.jpg}
	
	\includegraphics{img/geometry/webgl_pseudo_sphere_02.jpg}
	
	\includegraphics{img/geometry/webgl_pseudo_sphere_03.jpg}
	
	\includegraphics{img/geometry/webgl_pseudo_sphere_04.jpg}
	
	\includegraphics{img/geometry/webgl_pseudo_sphere_05.jpg}
	
	To see this code at work or to copy it, you can visit the following URL:
	\begin{center}
	\url{http://www.sciences.ch/htmlen/cube.htm}
	\end{center}
	
	\subsubsection{Images of Straight Lines}
	Let us determine, from the previous results, the image of a straight line parallel to the ground line (hence to the $x$-axis) of the table $XY$. In this case, we have:
	
	where $a$ and $b$ are constants, and this for any value of $h$. This gives us, from the relations obtained previously:
	
	So, as we could expect, a straight line parallel to the ground line remains in perspective a straight line, but at a height $y'$ in the $XY$ plane parallel to our screen (it was quite intuitive to guess...).

	For any line parallel to the $z$-axis of the screen (and therefore in its "depth"), we have:
	
	where $a$ and $b$ are constants, for any value of $h$.
Which gives us:
	
	The straight lines of equation:
	
	all pass through the point $P(x'=0,y'=h)$ when $z\rightarrow +\infty$, which is the "main vanishing point", and through the point $P(x'=a,y'=b)$ when $z\rightarrow  0$, as shown below in the projection made in Adobe Photoshop (the horizontal lines corresponding to the ground line have been added to give the effect of perspective):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/perspective_unique_vanishing_point.jpg}
		\caption{Example of a single vanishing point}
	\end{figure}
	From the figure above, we can define the concept of "\NewTerm{departure angle}\index{departure angle}" given by the figure below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/departure_angle.jpg}
		\caption{Departure angle}
	\end{figure}
	Another geometric representation can perhaps help to better understand the result. Let us recall that the vanishing point of a line $(D)$ is the intersection point $F$ of the table plane $T$ with the line parallel to $(D)$ passing through $O$. Two parallel lines $(D)$ and $(D')$ therefore have the same vanishing point (from the perspective point of view, obviously!). As shown below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/parallel_lines_conical_perspective.jpg}
	\end{figure}
	If we denote by $A$ the intersection point of $(D)$ with $T$, the perspective drawing of $(D)$ is the straight line $\overline{AF}$, intersection of $T$ with the plane containing $O$ and $(D)$. Since two parallel lines have, from the conical perspective point of view, the same vanishing point $F$, they are therefore represented by two straight lines intersecting at $F$.
	
	For any straight line lying in the $XZ$ plane of the screen (i.e. in its "depth"), we have:
	
	and for $y=0$. Which gives us:
	
	From the latter equation we deduce:
	
	By putting $z$ into the expression of $x$:
	
	we replace $x$ and $y$ in the equation of the straight line $z=mx+p$ and, once the calculations and simplifications are made, we find:
	
	Let us consider the particular case of straight lines passing through the opposite vertices of a tile, that is to say inclined at $\pm\pi/4$, therefore with a director coefficient (slope) of $\pm 1$.
	The images of these straight lines are then given by:
	
	If $y'=h$, then we have, following the previous relation:
	
	This means that the projections of all lines of slope $\pm 1$ lying in the ground plane constitute, for all $x'$, secondary vanishing points situated on the same remote horizon line at an equal distance from the main vanishing point, as shown below (in the context of an Adobe Photoshop training, we added a cube in this perspective):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/secondary_vanishing_point.jpg}
		\caption{Example of secondary vanishing point}
	\end{figure}
	The figure above clearly shows the symmetry about the vertical axis and the two vanishing points generated by the tiles.

	Now let us consider the straight lines parallel to the $y$-axis of the table (display) and their projection equation if $x=a$ and $z=b$:
	
	The images of these lines thus remain straight lines parallel to the $y$-axis.
In other words, the image lines remain parallel to the "object" lines, as shown below (for different positions of the observation point):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/perspective_vertical_lines.jpg}
		\caption{Different perspectives based on the values of the constants}
	\end{figure}
	As an example of this last result, let us take a segment of height $H_i$ whose foot is on the horizontal plane (the ground):
	
	The height of the segment is given by:
	
	Let us now consider $x=a$, that is to say vertical columns of height $H$ aligned on the line $x=a$:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/perspective_columns.jpg}
		\caption{Aligned columns perspective problem}
	\end{figure}
	Let us calculate the coordinates of the vertices of their images:
	
	This is obviously still the equation of a straight line, and let us notice that all these lines pass through the point of coordinates $(x'=0,y'=h)$, which is just the main vanishing point!
	
	So, as we experience in real life, if we consider the above figure corresponding to the previous developments: when $H<h$ the height of the columns seems to decrease as they get farther from the observer, and when $H>h$ their height seems to increase.
	
	From the previous discussion, we therefore also obviously have a new method for rotating an object in three-dimensional space. Instead of rotating the object around different axes, we can imagine using the above equations to rotate the viewer around the object (that is a point of view...).

	We have considered so far only the projective perspective on a plane. By the way, to work with any projection method (sphere on a plane, plane on a sphere, sphere on sphere, anything on anything), it is simply enough to extend the analysis that we made above to a coordinate system suitable for the studied system (polar coordinates, cylindrical coordinates, spherical coordinates, ...).
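	To make the preceding recipe concrete outside of Flash or WebGL, here is a minimal sketch of ours (assuming, as derived above for $h=0$, projection relations of the form $x' = fx/(f+z)$ and $y' = fy/(f+z)$, with the screen at $z'=0$ and the observer on the $z$-axis at focal length $f$): the vertices of a cube are rotated around the $y$-axis and then projected homographically, exactly the scheme used in the animations above.
	\begin{verbatim}
import numpy as np

def rotate_y(points, angle):
    """Rotate an array of 3D points (one per row) around the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T

def project(points, f):
    """Homographic projection onto the screen z' = 0 (observer at focal
    length f, horizon height h = 0): x' = f x/(f+z), y' = f y/(f+z)."""
    depth = f / (f + points[:, 2])          # the "depth" factor of the text
    return points[:, :2] * depth[:, None]

# The 8 vertices of a cube, animated over a few frames:
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)
for frame in range(3):
    screen = project(rotate_y(cube, 0.1 * frame), f=4.0)
    print(np.round(screen, 3))              # 2D screen coordinates per frame
	\end{verbatim}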
	Extending the analysis in this way is probably how some 3D simulation software projects an image onto a reflective surface such as a semi-transparent corrugated glass, with the difference that, on a computer, a surface does not have an infinite number of points but a limited number of pixels, and the projection therefore needs some smoothing through special interpolation techniques.
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	From the results we have obtained above, we can extract an interesting and intuitive conclusion: for the observer of a picture or of a painting to see the image as it was originally, the eye must be placed at the specific coordinates of the point of view relative to the table (the picture or the painting).
	\end{tcolorbox}
	Software such as Adobe Illustrator offers, since the early 2010s, a tool to create perspectives with one vanishing point, two vanishing points or three vanishing points:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/adobe_illustrator_one_vanishing_point.jpg}
	\end{figure}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/adobe_illustrator_two_vanishing_point.jpg}
	\end{figure}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/adobe_illustrator_three_vanishing_point.jpg}
		\caption{Adobe Illustrator perspective tools with one, two and three vanishing points}
	\end{figure}
	And to finish, a wink to all our favourite flat earthers (...):
	\begin{figure}[H]
		\centering
		\includegraphics[width=1\textwidth]{img/geometry/vanishing_point_flat_earth.jpg}
	\end{figure}
	

	\pagebreak
	\subsection{Affine projections}
	Although the best method of perspective representation is the conical perspective method, we cannot always afford a large quantity of computation to represent a volume. Thus, it is possible to define projection techniques that result from an approximation of the mathematical results obtained previously, giving two new techniques (there are more, but the two perspectives we will see further below are by far the most used) which we see daily on many technical papers or artistic works. These two techniques are respectively the "\NewTerm{isometric projection}\index{isometric projection}" and the "\NewTerm{orthogonal projection}\index{orthogonal projection}", which are part of the family of "\NewTerm{affine projections}\index{affine projection}" (in fact the word "projection" must be understood as "projection on the table of the observer", so it is also a perspective).
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/conical_perspective_vs_isometric_projection.jpg}
	\end{figure}
	An example of a projective transformation that is not affine is photography through a camera:
	 
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/not_affine_camera.jpg}
	\end{figure}
	Like all affine transformations of space, an affine projection preserves (see the numerical check below):
	\begin{itemize}
		\item The parallelism between the lines

		\item The barycentre, that is, all the proportions existing on a given line
	\end{itemize}
		Only the lengths and angles within a plane parallel to the projection plane are kept.
	
	Let us present briefly these two techniques, because they must be part of the general culture of the engineer.
	
	Like all projections and all perspectives, the loss of the third dimension and the departure from the real mathematical transformation induce possible errors of interpretation.
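	Here is the announced numerical check (our own sketch, not from the original text): a parallel projection preserves the barycentre (e.g. the midpoint of a segment), while the central projection of the previous subsection does not:
	\begin{verbatim}
import numpy as np

def parallel_proj(M, d=(0.0, 0.0, 1.0)):
    """Affine (parallel) projection onto the plane z = 0 along direction d."""
    M, d = np.asarray(M, float), np.asarray(d, float)
    return M - (M[2] / d[2]) * d

def central_proj(M, f=4.0):
    """Central (conical) projection onto z = 0, eye at (0, 0, -f)."""
    M = np.asarray(M, float)
    s = f / (f + M[2])
    return np.array([M[0] * s, M[1] * s, 0.0])

A, B = np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])
mid = (A + B) / 2
for proj in (parallel_proj, central_proj):
    preserved = np.allclose(proj(mid), (proj(A) + proj(B)) / 2)
    print(proj.__name__, preserved)
# parallel_proj True   -> the midpoint is still the image midpoint
# central_proj  False  -> the barycentre is not preserved
	\end{verbatim}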
	Such errors of interpretation have been extensively used by the artist M. C. Escher to create impossible situations:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.4]{img/geometry/mc_escher_waterfall.jpg}
	\end{figure}
	\textbf{Definition (\#\mydef):} An "\NewTerm{affine projection}\index{affine projection}" of the usual 3-dimensional space is an affine transformation that sends all the points of this space onto a same plane of this space. If the point $M (x, y, z)$ is not on the projection plane, it and its image $m (x', y', z')$ determine a line whose direction is constant: we name it the "\NewTerm{projection direction}\index{projection direction}". The corresponding perspective is named "\NewTerm{parallel perspective}\index{parallel perspective}" or "\NewTerm{cylindrical perspective}\index{cylindrical perspective}".
	
	
	\pagebreak
	\subsubsection{Isometric perspective}
	Isometric projection is a method for visually representing three-dimensional objects in two dimensions in technical and engineering drawings. It is a parallel projection (the object is rotated along one or more of its axes relative to the plane of projection) in which the three coordinate axes appear equally foreshortened (hence the "iso") and the angle between any two of them is approximately $120$ degrees ($2\pi/3$ [rad]):
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.5]{img/geometry/isometric_angles.jpg}
		\caption[Isometric angles]{Isometric angles (source: Wikipedia)}
	\end{figure}
	Indeed, as we will see during our study of Analytical Geometry, the angle of $30^\circ$ above is only an approximation of the angle between the $xy$-plane and the (isometric) plane perpendicular to the vector $\vec{n}=(1,1,1)$ passing through the corner of a cube of equal sides of length $1$. In fact the choice of the angle is empirical, and for example in computer games it varies between $30^\circ$ and $60^\circ$ depending on the game editor.
		
	Let us consider some 3D shapes drawn using the isometric method. In the figure below, coming from Wikipedia (year 2016), the black dimensions are supposed to be true lengths, as found in an orthographic projection (see further below). The red dimensions are those used when drawing with the isometric drawing method. The same 3D shapes drawn in isometric projection would appear smaller: an isometric projection shows the object's sides foreshortened by approximately $80\%$.
	
	In fact we will see that the figure below, coming from Wikipedia (year 2016), is actually quite wrong, or at least can bring the reader to some misunderstanding!
	
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.7]{img/geometry/isometric_projection_volumes.jpg}
		\caption[Isometric volumes]{Isometric volumes (source: Wikipedia)}
	\end{figure}
	Let us look more closely at the figure above. To prove where these values come from, we need to use the change of basis mathematical tools (\SeeChapter{see section Linear Algebra page \pageref{change of basis}}).
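	Before going through the full derivation, here is a quick numerical check (a sketch of ours, assuming an orthogonal projection onto the plane perpendicular to $\vec{n}=(1,1,1)$) of the foreshortening factor behind the "approximately $80\%$" quoted above: each canonical basis vector is shortened by the same factor $\sqrt{2/3}\approx 0.8165$.
	\begin{verbatim}
import numpy as np

# Orthogonal projection onto the isometric plane perpendicular to (1, 1, 1).
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # unit normal
P = np.eye(3) - np.outer(n, n)                  # projection matrix

for i, e in enumerate(np.eye(3)):
    print(f"|P e_{i + 1}| = {np.linalg.norm(P @ e):.6f}")
# All three lines print 0.816497 = sqrt(2/3): the axes are
# *equally* foreshortened, which is exactly the "iso" in isometric.
	\end{verbatim}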
	This will be a very nice application of Linear Algebra and Vector Calculus!
	
	So let us consider the following situation, which we already used in the section of Analytical Geometry page \pageref{isometric plane}:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/isometric_perspective_plane.jpg}
		\caption{Isometric plane to find the cube dimensions}
	\end{figure}	
	To simplify the determination of the new basis in the plane, we make our plane pass through the origin (isometric affine plane), such that its equation therefore becomes:
	

	Obviously, as our plane is perpendicular to $\vec{n}=(1,1,1)$, it follows immediately that a first basis vector of our orthonormal 2D basis will be the vector:
	
	We can easily check that this vector indeed belongs to our isometric affine plane of equation:
	
	by just replacing the values and seeing that the equality holds.

		Now we are looking for the second orthonormal vector of our plane basis. So we first simply use the cross product:
	
	We can also easily check that this vector indeed belongs to our isometric affine plane.
	
	And we must normalize it to the unit, so finally:\\
	
	Now we calculate the images of the vectors of the canonical basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$ under the orthogonal projection onto our isometric affine plane $P$:
	
	Finally, the projection matrix in the canonical basis is:
	
	Now let us consider the projection of the origin $(0,0,0)$; we then obviously have:
	
	and now one of the $X$ or $Y$ unit vectors:
	
	So we get, for the norm of a projected unit vector of the original basis of the $XY$-plane:
	
	And finally the unit vector along the $z$-axis:
	
	So we get, for the norm of a projected unit vector of the original basis along the $Z$-axis:
	
	
	\pagebreak
	So now let us come back to Wikipedia's figure:
	\begin{itemize}
		\item First the cube... 
		\begin{figure}[H]
			\centering
			\includegraphics[scale=0.75]{img/geometry/isometric_cube.jpg}
		\end{figure}
		The real sides of a cube of side length equal to $1$ transform in the same way as the basis vectors seen just before. That is to say, an isometric transformation through $Z$ gives for the sides in or parallel to the $XY$ plane:
		
		and along the $Z$-axis:
		
		So when Wikipedia says that the distances are shortened, this is true... but they omit to say that this holds only in two space directions. Also, when the figure shows that $1$ in real dimensions is equal to $1$ for the sides in isometric perspective, this is completely false.
		
		The distance between the two horizontal top vertices is given by the calculation of the norm of the projection:
		
		So:
		
		So the norm is:
		
		And therefore the original diagonal transforms to:
		
		So once again Wikipedia is wrong. In fact, on Wikipedia they take for the original side size the inverse of $\sqrt{2/3}$, that is to say $\sqrt{3/2}$. Indeed, we have in this case:
		
		So the norm is:
		
		So we fall back on the value given by Wikipedia...
		
		\item Second the cylinder...
		\begin{figure}[H]
			\centering
			\includegraphics[scale=0.75]{img/geometry/isometric_cylinder.jpg}
		\end{figure}
		The height of the cylinder follows exactly the same procedure as for the cube, and the remark relative to Wikipedia's figure remains the same!
		So what will interest us here is especially the "isometric ellipse" (also named "isocircle") visible in the figure above, and we will focus on the minor and major axes of the latter. 
		
		First, to find these values, we need to project the circle onto the isometric plane. The figure (where we added a circle on the cube) is now given by (\SeeChapter{see section Analytical Geometry page \pageref{isometric plane}}):
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/isometric_plane_isocircle.jpg}
			\caption{Isometric plane to find the isocircle dimensions}
		\end{figure}
		\StickyNote[2.5cm]{\LARGE To finish depending on donations}[6.5cm]
		
	\end{itemize}
	
	\subsubsection{Oblique perspective}
	Let us take a view (e.g. the front view), and let us name the axes $x$ (horizontal) and $y$ (vertical) as usual, the $z$-axis being the axis perpendicular to the view (also as usual).

	In oblique projection, we plot the $z$-axis at an angle relative to the $x$-axis (for example $\pi/6$ ($30^\circ$) or sometimes $\pi/4$ ($45^\circ$)), and we then report the distances by multiplying them by a coefficient less than $1$, which is either empirical or obtained by using basic trigonometric rules.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The oblique projections for all angles other than $\pi/4$ are named "\NewTerm{Cavalier projection}\index{Cavalier projection}". When the angle is equal to $\pi/4$ we then speak of "\NewTerm{Cabinet projection}\index{Cabinet projection}".	
	\end{tcolorbox}
	
	Oblique projection is a simple type of technical drawing of graphical projection used for producing two-dimensional images of three-dimensional objects. The objects are not in perspective, so they do not correspond to any view of an object that can be obtained in practice, but the technique does yield somewhat convincing and useful images:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/oblique_pictorial.jpg}
	\end{figure}

	In the oblique pictorial coordinate system, only one axis is at an angle. The angle may range between $0$ and $90$ degrees ($\pi/2$ [rad]); however, the most commonly used angle is $45$ degrees.

 	It is thus often used when a figure must be drawn by hand, e.g. on a blackboard (lesson, oral examination).
 	
 	\pagebreak
 	\subsubsection{Orthogonal projection}
	If the projection direction is orthogonal to the plane of projection, then the perspective transforms a sphere into a simple circle. This is a type of perspective drawing used as an alternative to the conical perspective (with which it also coincides when the eye of the observer is placed infinitely far away from the "table").

	The simplest orthogonal projection to express is obviously the one that sends space onto a plane parallel to the table; this is the "\NewTerm{parallel orthogonal projection}\index{parallel orthogonal projection}" or "\NewTerm{orthographic projection}\index{orthographic projection}", of distance $d$.
In other words, such that, for example for the "above view", we have for any point of the volume:
	
	So we trivially obtain all the coordinates in an orthonormal 2D frame proper to this plane itself, where $x=x'$ and $y=y'$:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/orthographic_view.jpg}
		\caption{Orthogonal projection in real life...}
	\end{figure}
	Nowadays, software packages like AutoCAD, Inventor, Catia and so on... automatically generate orthogonal projections from an isometric view of a drawing.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/technical_drawing_perspective.jpg}
		\caption[]{Orthogonal projection with a software package}
	\end{figure}
	In computer graphics, one of the most common matrices used for orthographic projection is defined by a $6$-tuple (left, right, bottom, top, near, far), which defines the clipping planes (that is, the region of eye space that contains all the geometry we want to display). These planes form a box with its minimum corner at (left, bottom, -near) and its maximum corner at (right, top, -far). The box is then translated so that its center is at the origin, and it is then scaled to the unit cube, which is defined by a minimum corner at $(-1,-1,-1)$ and a maximum corner at $(1,1,1)$.

	The orthographic transform to get to this canonical volume can be given by the following matrix:
	
	which is in fact obviously a scaling followed by a translation of the form (the positions are now abbreviated by their first letter):
	
	As illustrated in the figure below:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/canonical_view_orthogonal_projection.jpg}
		\caption[]{Canonical transformation for orthogonal projection}
	\end{figure}
	If the origin of the matrix $C_V$ is not obvious, here are the details to get it:
	
	The easiest approach may be to consider each of the three axes separately, and to compute how to map points along that axis from the original view volume into the canonical view volume. We begin with the $x$-coordinate. A point within our view volume will have an $x$-coordinate in the range $[l, r]$, and we want to transform it to the range $[-1, 1]$!

	So first we see that we have:
	
	Now, in preparation for scaling the range down to the size we want, we subtract $l$ from all terms to produce a zero on the left-hand side. Another approach we could take here would be to translate the range so that it centers on zero, rather than having one of its endpoints at zero, but the algebra is a bit neater this way, so we'll do it like this for the sake of readability:
	
	Now that one end of our range is positioned at zero, we can scale it down to the size we want. We want the range of $x$-values to be two units wide, from $-1$ to $1$, so we multiply through by $2/(r - l)$. Notice that $r - l$ is the width of our view volume and is thus always a positive number, so we don't have to worry about the inequalities changing direction:
	
	Next, we subtract one from all terms to produce our desired range of $[-1, 1]$:
	
	A bit of basic algebra allows us to write the center term as a single fraction:
	
	Finally, we split the center term into two fractions so that it takes the form $px + q$.
For this, we need to group our terms in such a way that the equations we derive can be easily translated into matrix form:
	
	The center term of this inequality now gives us the equation we need to transform $x$ into the canonical view volume:
	
	The steps required to obtain a formula for $y$ are exactly the same. We just have to substitute $y$ for $x$, $t$ for $r$, and $b$ for $l$. So rather than repeating them here, we will just show the result:
	
	Finally, we need to derive a relation for $z$. It's a little different in this case, because we are mapping $z$ to the range $[0, 1]$ rather than $[-1, 1]$, but this should look very familiar. Here is our starting condition, a $z$-coordinate in the range $[n, f]$:
	
	We subtract $n$ from all terms so that the lower end of the range is positioned at zero:
	
	And now, all that's left is to divide through by $f - n$ to produce a final range of $[0, 1]$. As before, we notice that $f - n$ is the depth of our viewing volume and thus will never be negative:
	
	Finally, we split this into two fractions so that it takes the form $pz + q$:
	
	This gives us our formula for transforming $z$:
	
	Now we are ready to write our centering orthographic projection matrix. To recap our work thus far, here are the three projection equations we have derived:
	
	Once the canonical volume is obtained, we apply for example the simple orthographic projection onto the plane $z = 0$, which is intuitively defined by the following matrix:
	
	For each point $(x, y, z)$, the transformed point would be:
	
	Often, it is more useful to use homogeneous coordinates. The transformation above can be represented in homogeneous coordinates as:
	
	For each homogeneous vector $(x,y,z)$, the transformed vector will obviously be:
	
	Although this is the method most used in industry on paper, it has a drawback: the real effect of depth is lost entirely; this is why the orthogonal projection is often accompanied by an isometric projection view of the object of interest.
	
	\pagebreak
	\subsection{Spherical projections}
	Spherical projections\index{spherical projections} are an important field in engineering, topography and geography (and therefore navigation). As far as we know, there exist more than $200$ different techniques of spherical projection. 
	
	Indeed, it is impossible to render paths on a sphere onto a flat surface in such a way that all distances remain the same. In drawing a map of a sphere, therefore, some compromises must be made. Most maps adopt one of two possible strategies: either areas are preserved or angles are preserved. Stereographic projection (i.e. projection on a plane) is one way of making maps, and it preserves angles. It has been used since ancient times for this purpose, and its basic geometrical properties were known even then.

	We will focus in this book only on the best-known spherical projections, and also without detailing the problems of projecting images from a sphere to a plane when the number of points (pixels) is not infinite...
	
	For example, here is a summary of some well-known spherical projections\footnote{the choice of the projection used in school depends mainly on the country's government, as some projections give more importance to some countries than others...
But anyway, any student can now see an Earth globe in class, and this is the only existing $1:1$ projection!}:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.53\textwidth]{img/geometry/spherical_projections.jpg}\n\t\t\caption[Various spherical projections]{Various spherical projections like \\ Mercator, Gall-Peters, Mollweide, Robinson, etc. (author: Mike Bostock)}\n\t\end{figure}\n\tOr with a more detailed idea of each technique and its corresponding name:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.45]{img/geometry/projections_list.jpg}\n\t\t\caption{Technical view of various spherical projection techniques}\n\t\end{figure}\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAn exhaustive list of spherical projections is given on the projection list web page of the Geocart software, available here: \t\n\t\begin{center}\n\t\url{https://www.mapthematics.com/ProjectionsList.php}\n\t\end{center}\n\t\end{tcolorbox}\n\t\n\t\subsubsection{Stereographic projection}\n\tThe \"\NewTerm{stereographic projection}\index{stereographic projection}\" (in the classical sense of the term), or more precisely the \"\NewTerm{azimuthal normal aspect projection}\index{azimuthal normal aspect projection}\", is one way of projecting the points that lie on a spherical surface onto a plane. Such projections are commonly used in Earth and space mapping, where the geometry is often inherently spherical and needs to be displayed on a flat surface such as paper or a computer display. Any attempt to map a sphere onto a plane requires distortion; stereographic projections are no exception, and indeed they are not an ideal approach if minimal distortion is desired. \n\n\tA physical model of stereographic projection is to imagine a transparent sphere sitting on a plane. If we name the point at which the sphere touches the plane the \"south pole\", then we place a light source at the \"north pole\". Each ray from the light passes through a point on the sphere and then strikes the plane; this is the stereographic projection of the point on the sphere. \n\t\n\tIn order to derive the formula for the projection of a point $(x,y,z)$ lying on the sphere, assume the sphere is centered at the origin and is of radius $r$. The plane is the set of all points with $z = -r$, and the light source is at the point $(0,0,r)$. The cross section of this arrangement is shown below in what is commonly named a \"\NewTerm{Schlegel diagram}\index{Schlegel diagram}\", first in pseudo-perspective:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/schlegal_diagram_pseudo_perspective.jpg}\n\t\t\caption{Schlegel diagram in pseudo-perspective}\n\t\end{figure}\n\tand in front view:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/schlegal_diagram_pseudo_front_view.jpg}\n\t\t\caption[]{Schlegel diagram in front view}\n\t\end{figure}\n\tAs we can see above, the projection of $(0,0,-r)$ obviously gives $(0,0,-r)$ itself, and using Thales's theorem (or \"intercept theorem\") as proved in the section of Euclidean Geometry we have immediately that the projection of $(-r,0,0)$ is $(-2r,0,-r)$.\n\t\n\tLet us consider now the equation of the line from $P_1=(0,0,r)$ through a point $P_2=(x,y,z)$ on the sphere.
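In parametric form, using the standard parametrization of a line through two points, this line reads:\n\t\[P(a) = P_1 + a\,(P_2-P_1) = \big(a\,x,\; a\,y,\; r+a(z-r)\big)\]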
Therefore we have for the projected point $P$, which lies on the plane $z=-r$:\n\t\[P = \big(a\,x,\; a\,y,\; -r\big)\qquad\text{with}\qquad r+a(z-r) = -r\]\n\tSolving the $z$ component for $a$ yields:\n\t\[a(z-r) = -2r\]\n\tor:\n\t\[a = \frac{2r}{r-z}\]\n\tFinally:\n\t\[P = \left(\frac{2rx}{r-z},\; \frac{2ry}{r-z},\; -r\right)\]\n\tNotice that:\n\t\begin{itemize}\n\t\t\item The South pole is at the center of the projected points\n\n\t\t\item Lines of latitude project to concentric circles about $(0,0,-r)$\n\n\t\t\item Lines of longitude project to rays from the point $(0,0,-r)$\n\n\t\t\item There is little distortion near the South pole\n\n\t\t\item The equator projects to a circle of radius $2r$\n\n\t\t\item The distortion increases the closer one gets to the north pole, finally becoming infinite at the north pole\n\t\t\n\t\t\item As we can see in the image below, angles are preserved (since latitude and longitude are still perpendicular on the projected plane)\n\t\end{itemize}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/earth_stereographic_projection.jpg}\n\t\t\caption[Earth's stereographic projection]{Earth's stereographic projection (source: ?)}\n\t\end{figure}\n\tSome authors take the projection plane through the center of the sphere (the equatorial plane $z=0$), keeping the light source at the north pole. The previous relations then become:\n\t\[P = \left(\frac{rx}{r-z},\; \frac{ry}{r-z},\; 0\right)\]\n\tand take radius $r=1$. Then we have:\n\t\[P = \left(\frac{x}{1-z},\; \frac{y}{1-z},\; 0\right)\]\n\tAnd if we denote the components of the projected points by $a,b,c$ instead of $x,y,z$, we get:\n\t\[a = \frac{x}{1-z},\qquad b = \frac{y}{1-z},\qquad c = 0\]\n\tAnd as the last coordinate is always $0$ we only write:\n\t\[\pi(x,y,z) = \left(\frac{x}{1-z},\; \frac{y}{1-z}\right)\]\n\tAnd when the projection plane is identified with the complex plane, we get the relation that we can find most of the time in books:\n\t\[\zeta = \frac{x+\mathrm{i}\,y}{1-z}\]\n\t\begin{theorem}\n\tThe stereographic projection is a conformal projection\index{conformal projection} (i.e. it conserves angles). That is to say:\n\t\n\tIf we consider the stereographic projection $\pi:S^2 \setminus\{\vec{e}_3\}\to\mathbb{R}^2$ as we have proved it, defined by:\n\t\[\pi(x,y,z) = \left(\frac{x}{1-z},\; \frac{y}{1-z}\right)\]\n\twith $x^2+y^2+z^2=1$ and $\vec{e}_3=(0,0,1)^T$.\n\n\tGiven two paths $\varphi$ and $\gamma$ of type $\mathcal{C}^1$ on the sphere (that is to say, two differentiable maps whose derivative is continuous, for recall...) such that (empirical choice):\n\t\n\tand such that $\varphi(0)=\gamma(0)\neq \vec{e}_3$, we want to check that (we use the bracket notation for the dot product so as not to confuse it with the round symbol of function composition):\n\t\[\frac{\langle \varphi'(0),\gamma'(0)\rangle}{\|\varphi'(0)\|\,\|\gamma'(0)\|} = \frac{\langle (\pi\circ\varphi)'(0),(\pi\circ\gamma)'(0)\rangle}{\|(\pi\circ\varphi)'(0)\|\,\|(\pi\circ\gamma)'(0)\|}\]\n\twhere $\varphi'(0)$ and $\gamma'(0)$ are the tangent vectors of $\varphi$ and $\gamma$ at the point $0$ and $(\pi\circ\varphi)'(0)$ and $(\pi\circ\gamma)'(0)$ are their corresponding tangent vectors on the projection plane.\n\t\end{theorem}\n\t\begin{dem}\n\tTo do the proof, let us start by developing the expressions $(\pi\circ\varphi)'(0)$ and $(\pi\circ\gamma)'(0)$. Let us write $\varphi=(\varphi_x,\varphi_y,\varphi_z)^T$. 
We have:\n\t\n\tLet us simplify the denominator by putting:\n\t\n\tThen we get:\n\t\n\tHence:\n\t\n\tIn the same way, by putting:\n\t\n\twe get:\n\t\n\tLet us now develop $\langle (\pi\circ\varphi)'(0),(\pi\circ\gamma)'(0)\rangle$:\n\t\n\tThe change of variables takes us immediately to:\n\t\n\tTherefore:\n\t\n\tAs by construction:\n\t\n\tand as we also have:\n\t\n\tand identically:\n\t\n\tthen the above dot product becomes:\n\t\n\tBut:\n\t\n\tsince by construction $\varphi_x^2+\varphi_y^2+\varphi_z^2=1$.\n\t\n\tSo finally our numerator is given by:\n\t\n\tLet us now calculate the first term of the denominator $||(\pi\circ\varphi)'(0)||$:\n\t\n\tsince, for recall (see the proof just before), $\langle u,u'\rangle=-{u'}_z$.\n\t\n\tBut we also have:\n\t\n\tTherefore the previous expression becomes:\n\t\n\tSimilarly:\n\t\n\tTo finish, we put:\n\t\n\tinside:\n\t\n\tand we get:\n\t\n\t\begin{flushright}\n\t\t$\blacksquare$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\pagebreak\n\t\subsubsection{Cylindrical projection}\n\tThe \"\NewTerm{cylindrical projection}\index{cylindrical projection}\" (in the classical sense of the term), or more precisely named the \"\NewTerm{cylindrical normal aspect projection}\index{cylindrical normal aspect projection}\", is one where lines of longitude are projected to equally spaced parallel lines and lines of latitude are projected onto not necessarily equally spaced parallel lines. The figure below illustrates the basic projection: a line is projected from the center of the sphere through each point on the sphere until it intersects a cylinder tangent to the sphere at the equator.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.85]{img/geometry/cylindrical_projection_mathematical_aspect.jpg}\n\t\end{figure}\n\tAs we can see from the above figure, it is quite obvious first that:\n\t\[y' = R\tan(\beta)\]\n\twhere $\beta$ is the latitude, and for $x'$ we have obviously:\n\t\[x' = R\,\alpha\]\n\twhere $\alpha$ is the longitude, with, for recall:\n\t\n\tSo finally:\n\t\[(x',y') = \big(R\,\alpha,\; R\tan(\beta)\big)\]\n\tSchematically the steps of construction are the following:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/cylindrical_projection_cylinder_closed.jpg}\n\t\t\caption{Earth's cylindrical projection}\n\t\end{figure}\n\twhich results in:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/cylindrical_projection_cylinder_open.jpg}\n\t\end{figure}\n\tIt should be quite obvious (without proof) that this projection does not conserve areas; note that, despite appearances, it does not conserve angles either (among the cylindrical maps, only the Mercator variant below is conformal)!\n\t\n\t\subsubsection{Mercator projection}\n\tA \"\NewTerm{Mercator projection}\index{Mercator projection}\" is similar in appearance to a cylindrical projection but has a different distortion in the spacing of the lines of latitude (the same idea applies for the so-named \"Gall stereographic projection\" or \"Gall–Peters projection\", which we will not introduce in this book). \n\n\tLike in the cylindrical projection, north and south are always vertical; we cannot therefore represent the poles, because the mathematics has a singularity there where the map ordinate becomes infinite.\n\n\tThe Mercator projection is the most common projection used in mapping the Earth onto a flat surface. 
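To make the singularity just mentioned concrete (using the Mercator ordinate $y(\beta)=R\ln\left[\tan\left(\frac{\beta}{2}+\frac{\pi}{4}\right)\right]$ that we will derive a few paragraphs below): at latitude $\beta=60^\circ$ we get $y\approx 1.317\,R$, at $80^\circ$ already $y\approx 2.436\,R$, and $y\to\infty$ as $\beta\to 90^\circ$.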
\n\t\n\tThe result of such a projection is:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.14]{img/geometry/mercator_earth_projection.jpg}\n\t\t\caption{Earth's Mercator projection}\n\t\end{figure}\n\tAs we have seen previously, the cylindrical map projection is specified by formulas linking the geographic coordinates of latitude $\beta$ and longitude $\alpha$ to Cartesian coordinates on the map, with origin on the equator and $x$-axis along the equator. By construction, all points on the same meridian lie on the same generator of the cylinder at a constant value of $x$, but the distance $y$ along the generator (measured from the equator) is an arbitrary function of latitude, $y(\beta)$. In general this function does not describe the geometrical projection (as of light rays onto a screen) from the center of the globe to the cylinder, which is only one of an unlimited number of ways to conceptually project a cylindrical map.\n\t\n\tSince the cylinder is tangential to the globe of radius $R$ at the equator, the scale factor between globe and cylinder is obviously unity on the equator but nowhere else. In particular, since the radius of a parallel, or circle of latitude, is $R\cos(\beta)$, the corresponding parallel on the map must have been stretched by a factor of:\n\t\[\frac{1}{\cos(\beta)} = \sec(\beta)\]\n\tThis scale factor on the parallel is conventionally denoted by $k$ and the corresponding scale factor on the meridian is denoted by $h$.\n\t\n\tThe relations between $y(\beta)$ and properties of the projection, such as the transformation of angles and the variation in scale, follow from the geometry of corresponding small elements on the globe and map. The figure below shows a point $P$ at latitude $\beta$ and longitude $\alpha$ on the globe and a nearby point $Q$ at latitude $\beta + \delta \beta$ and longitude $\alpha + \delta \alpha$. The vertical segments $\wideparen{PK}$ and $\wideparen{MQ}$ are arcs of meridians of length $R\delta \beta$ (see the proof in the section Trigonometry). The horizontal segments $\wideparen{PM}$ and $\wideparen{KQ}$ are arcs of parallels of length $R\cos(\beta)\delta\alpha$ (still following from the same proof of the section Trigonometry). The corresponding points on the projection define a rectangle of width $\delta x$ and height $\delta y$:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.8]{img/geometry/mercator_construction_principle.jpg}\n\t\t\caption{Mercator projection function construction}\n\t\end{figure}\n\tFor small elements, the angle $\widehat{PKQ}$ is approximately a right angle and therefore\n\t\[\tan(\lambda) = \frac{R\cos(\beta)\,\delta\alpha}{R\,\delta\beta}\]\n\tThe previously mentioned scaling factors from globe to cylinder are given by:\n\t\begin{itemize}\n\t\t\item Parallel scale factor: \n\t\t\[k = \frac{\delta x}{R\cos(\beta)\,\delta\alpha}\]\n\n\t\t\item Meridian scale factor:\n\t\t\[h = \frac{\delta y}{R\,\delta\beta}\]\n\t\end{itemize}\n\tConsider now that the projection plane we take has for $x$-coordinates the range $[-\pi R,+\pi R]$ (the whole width being then equal to $2\pi R$). Then for every meridian:\n\t\[x = R(\alpha-\alpha_0)\]\n\tIndeed, it is easy to see that if $(\alpha-\alpha_0)=2\pi$ we have indeed $x=2\pi R$ when we take $\alpha_0=0$. But in the general case we have obviously:\n\t\[\delta x = R\,\delta\alpha\]\n\tTherefore we can write:\n\t\[k = \frac{R\,\delta\alpha}{R\cos(\beta)\,\delta\alpha} = \sec(\beta)\]\n\twhere we have considered the two relations above.\n\t\n\tThe choice of the function $y(\beta)$ for the Mercator projection is determined by the demand that the projection be conformal, a condition which can be defined as the equality of angles as proved earlier above, or equivalently as 
the condition that a sailing course of constant azimuth $\lambda$ on the globe is mapped into a constant grid bearing $\gamma$ on the map. Setting $\lambda = \gamma$ in:\n\t\[\tan(\lambda) = \frac{R\cos(\beta)\,\delta\alpha}{R\,\delta\beta}\qquad\text{and}\qquad \tan(\gamma) = \frac{\delta x}{\delta y} = \frac{R\,\delta\alpha}{\delta y}\]\n\tgives immediately after rearranging:\n\t\[\frac{\mathrm{d}y}{\mathrm{d}\beta} = R\sec(\beta)\]\n\twith $y(0) = 0$; by using the primitives proved in the section of Differential and Integral Calculus we have immediately:\n\t\[y(\beta) = R\ln\left[\tan\left(\frac{\beta}{2}+\frac{\pi}{4}\right)\right]\]\n\tSo finally:\n\t\[x = R(\alpha-\alpha_0),\qquad y = R\ln\left[\tan\left(\frac{\beta}{2}+\frac{\pi}{4}\right)\right]\]\n\tIn the first equation $\alpha_0$ is the longitude of an arbitrary central meridian: usually, but not always, that of Greenwich (i.e., zero).\n\t\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.5]{img/geometry/mercator_1569.jpg}\n\t\t\caption[Mercator world map as seen in 1569]{Mercator world map as seen in 1569... (source: Wikipedia)}\n\t\end{figure}\n\t\n\t\subsubsection{Lambert's equivalent projection (Peters projection)}\n\tThe idea behind the \"\NewTerm{Lambert's equivalent projection}\index{Lambert's equivalent projection}\", also named \"\NewTerm{Lambert's cylindrical equal-area projection}\index{Lambert's cylindrical equal-area  projection}\" or sometimes \"\NewTerm{Peter's projection}\index{Peter's projection}\" (it must not be confused with the \"Gall–Peters projection\"!), is quite easy, as illustrated in the figure below:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.7]{img/geometry/lambert_spherical_projection.jpg}\n\t\t\caption{Lambert's equivalent projection (or Lambert's equal area projection)}\n\t\end{figure}\n\tSuppose we have a terrestrial globe as visible above. It is enough to wrap a sheet of paper around the globe as in the figure above and to project horizontally each point of the sphere onto the cylinder. For that, if $A$ is a point of the sphere, we take the line passing through $A$ and perpendicular to the axis of the poles. This line intersects the cylinder at $A'$: the point $A'$ represents the point $A$ on the map.\n\t\n\tOr more explicitly:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/lambert_projection_earth.jpg}\n\t\end{figure}\n\tNow using the following figure, let us prove that Lambert's cylindrical projection conserves areas:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.7]{img/geometry/lambert_projection_equal_area.jpg}\n\t\end{figure}\n\tFor this proof, we will look at a small region of the sphere delimited by the parallels of latitude $\theta_1$ and $\theta_2$ and by the meridians of longitude $\phi_1$ and $\phi_2$. We will denote by $R$ the radius of the sphere and we will assume that $\theta_2-\theta_1$ and $\phi_2-\phi_1$ are positive and small.\n\n\tAccording to this assumption, the area of this region is that of a small rectangle (see figure above). At the latitude $\theta_1$, the radius of the parallel is obviously $R\cos(\theta_1)$. Then by a simple rule of three, the length of the arc of angle $\phi_2-\phi_1$ is equal to $R(\phi_2-\phi_1)\cos(\theta_1)$, as the length of an arc is the product of the radius by the angle in radians (\SeeChapter{see section Trigonometry page \pageref{arc length trigonometry}}).\n\t\n\tWhat is now the height of this rectangle? As the meridian is a circle of radius $R$ and the height of the rectangle is subtended by the angle $\theta_2-\theta_1$, this height is given approximately by $R(\theta_2-\theta_1)$. Then the area of our small region is approximately equal to:\n\t\[R(\phi_2-\phi_1)\cos(\theta_1)\cdot R(\theta_2-\theta_1) = R^2(\phi_2-\phi_1)(\theta_2-\theta_1)\cos(\theta_1)\]\n\tLet us calculate now the area of its projection on the cylinder. This projection is now a real rectangle. 
The parallel has been projected on a circle of radius $R$. As the corresponding angle has remained the same, that is $\phi_2-\phi_1$, the width of the rectangle is $R(\phi_2-\phi_1)$. For the height, the reader must notice that the tangent to the meridian at the latitude $\theta_1$ makes an angle of:\n\t\[\theta_1\]\n\twith the horizontal. The projection on the vertical of a segment of length $R(\theta_2-\theta_1)$ is then equal to:\n\t\[R(\theta_2-\theta_1)\cos(\theta_1)\]\n\tThe area on the map is then equal to:\n\t\[R(\phi_2-\phi_1)\cdot R(\theta_2-\theta_1)\cos(\theta_1) = R^2(\phi_2-\phi_1)(\theta_2-\theta_1)\cos(\theta_1)\]\n\tThus the same area as that of the small region on the sphere. This finishes the proof that Lambert's cylindrical projection conserves areas.\n\t\n\t\subsubsection{Mollweide projection}\n\tThe \"\NewTerm{Mollweide projection}\index{Mollweide projection}\label{Mollweide projection}\" is a very famous equal-area, pseudocylindrical map projection generally used for global maps of the world or of the night sky (it is also known as the \"Babinet projection\", \"homalographic projection\", \"homolographic projection\" and \"elliptical projection\"). The projection trades accuracy of angle and shape for accuracy of proportions in area, and as such is used where that property is needed, such as maps depicting global distributions. \n\t\n\tThe text that follows is entirely inspired by the paper \cite{lapaine2011mollweideova}, where the method is derived in detail using three different proofs. Here we will focus on what seems to us the simplest of the three derivations.\n\t\n\tPseudocylindrical map projections have in common straight parallel lines of latitude and curved meridians. Until the 19th century the only pseudocylindrical projection with important properties was the sinusoidal, or Sanson-Flamsteed. The sinusoidal has equally spaced parallels of latitude, true scale along parallels, and equivalency or equal-area. As a world map, it has the disadvantage of high distortion at latitudes near the poles, especially those far from the central meridian:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.7\textwidth]{img/geometry/sanson_flamsteed.jpg}\n\t\t\caption{Sanson-Flamsteed or sinusoidal projection}\n\t\end{figure} \n\tIn 1805, Mollweide announced an equal-area world map projection that is aesthetically more pleasing than the sinusoidal, because the world is placed in an ellipse with axes in a $2:1$ ratio and all the meridians are equally spaced semi-ellipses. The Mollweide projection was the only new pseudocylindrical projection of the nineteenth century to receive much more than academic interest:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.7\textwidth]{img/geometry/mollweide_projection.jpg}\n\t\t\caption{Mollweide projection}\n\t\end{figure} \n\tO'Connor and Robertson state that Mollweide produced the map projection to correct the distortions in the Mercator projection, first used by Gerardus Mercator in 1569. While the Mercator projection is well adapted for sea charts, its very great exaggeration of land areas in high latitudes makes it unsuitable for most other purposes. In the Mercator projection the angles of intersection between the parallels and meridians, and the general configuration of the land, are preserved, but as a consequence areas and distances are increasingly exaggerated as one moves away from the equator. To correct these defects, Mollweide drew his elliptical projection; but in preserving the correct relation between areas he was compelled to sacrifice configuration and angular measurement. 
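To quantify that exaggeration with the scale factor derived earlier: on the Mercator map both scale factors equal $\sec(\beta)$ (the projection being conformal), so at latitude $60^\circ$ for instance:\n\t\[k = h = \sec(60^\circ) = 2\]\n\tand areas are therefore inflated by the factor $k\,h = 4$ compared to the equator; this is why Greenland appears so disproportionately large on Mercator maps.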
The Mollweide projection lay relatively dormant until J. Babinet reintroduced it in 1857 under the name \"homalographic projection\".\n\n\tThe well known equations of the Mollweide projection, which we will prove further below, read as follows:\n\t\[x = \sqrt{2}\,R\sin(\beta),\qquad y = \frac{2\sqrt{2}}{\pi}R\,\lambda\cos(\beta),\qquad 2\beta+\sin(2\beta) = \pi\sin(\varphi)\]\n\tIn these formulas $x$ and $y$ are rectangular coordinates in the plane of projection, $\varphi$ (latitude angle) and $\lambda$ (longitude angle) are geographic coordinates of the points on the sphere and $R$ is the radius of the sphere to be mapped. The angle $\beta$ is an auxiliary angle that is connected with the latitude $\varphi$ by the relation $2\beta+\sin(2\beta)=\pi\sin(\varphi)$. For a given latitude $\varphi$, the equation $2\beta+\sin(2\beta)=\pi\sin(\varphi)$ is a transcendental equation in $\beta$. In the past, it was solved by using tables and interpolation. Nowadays, it is usually solved by using some iterative numerical method, like bisection or the Newton-Raphson method.\n\t\begin{dem}\n\tGiven the earth's radius $R$, suppose the equatorial aspect of an equal-area projection with the following properties:\n\t\begin{itemize}\n\t\t\item A world map is bounded by an ellipse twice broader than tall\n\t\t\item Parallels map into parallel straight lines with uniform scale\n\t\t\item The central meridian is a part of a straight line; all other ones are semi-elliptical arcs.\n\t\end{itemize}\n\tSuppose an earth-sized map; let us define two regions, $S_{1}$ on the map and $S_{2}$ on the earth, both bounded by the equator and a parallel (the figure depicts a 2D representation, but the left geometry is an ellipse - not an ellipsoid! - and the right one a sphere):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.7\textwidth]{img/geometry/derivation_mollweide_projection_equations.jpg}\n\t\end{figure}\n\tThe equal-area property can be used to calculate $x$ for a given $\varphi$. Given $x$ and $\lambda$ (longitude), $y$ can be calculated immediately from the ellipse equation, since the horizontal scale is constant. The equation of an ellipse centered at the origin, with major axis on the $y$-axis, is as we already know:\n\t\[\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1\qquad(b>a)\]\n\tFor $0 \leq x \leq a$:\n\t\[y = \frac{b}{a}\sqrt{a^2-x^2}\]\n\tThe area between the $y$-axis and the parallel mapped into $x=x_{1}$ is:\n\t\[S_1 = 2\int_0^{x_1} y\,\mathrm{d}x = \frac{2b}{a}\int_0^{x_1}\sqrt{a^2-x^2}\,\mathrm{d}x\]\n\tLet us introduce the auxiliary angle:\n\t\[x = a\sin(\beta)\]\n\tand:\n\t\[\mathrm{d}x = a\cos(\beta)\,\mathrm{d}\beta\]\n\tthen:\n\t\[S_1 = 2ab\int_0^{\beta}\cos^2(u)\,\mathrm{d}u\]\n\tSince:\n\t\[\cos^2(u) = \frac{1+\cos(2u)}{2}\]\n\tThen:\n\t\[\int_0^{\beta}\cos^2(u)\,\mathrm{d}u = \frac{\beta}{2}+\frac{\sin(2\beta)}{4}\]\n\tTherefore:\n\t\[S_1 = \frac{ab}{2}\big(2\beta+\sin(2\beta)\big)\]\n\tfor some $0 \leq \beta \leq \pi/2$, corresponding to $x_{1}=a \sin (\beta)$ and because of $a b \pi=4 R^2 \pi$.\n\t\n\tOn a sphere, the area between the equator and the parallel $\varphi$ is:\n\t\[S_2 = 2\pi R^2\sin(\varphi)\]\n\tBut as $S_{1}=S_{2}$, then:\n\t\[\frac{ab}{2}\big(2\beta+\sin(2\beta)\big) = 2\pi R^2\sin(\varphi)\]\n\tHence, using $ab=4R^2$:\n\t\[2R^2\big(2\beta+\sin(2\beta)\big) = 2\pi R^2\sin(\varphi)\]\n\tSimplifying we fall back on the third relation of the Mollweide projection:\n\t\[2\beta+\sin(2\beta) = \pi\sin(\varphi)\]\n\tThe auxiliary angle $\beta$, as already mentioned, must be found by interpolation or successive approximation. \n\t\n\tFinally, since the horizontal scale is uniform, from the two equalities (remember that the left $ab\pi$ is the surface of the elliptical 2D projection and the right $4R^2\pi$ is the surface of the projected sphere!):\n\t\[ab\pi = 4R^2\pi\qquad\text{and}\qquad b = 2a\]\n\tWe get:\n\t\[a = \sqrt{2}\,R,\qquad b = 2\sqrt{2}\,R\]\n\tand therefore we can fall back on the first relation of the Mollweide projection:\n\t\[x = \sqrt{2}\,R\sin(\beta)\]\n\tNow, remember that $\lambda$ is the longitude starting from $0$ [rad] and going to $-\pi$ on the West and $+\pi$ on the East. As the map is symmetrical about the meridian $0$ [rad], having the value of the angle going West or East is enough information to make the mapping on the ellipse (as it is symmetrical!). 
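In passing, here is what the Newton-Raphson approach mentioned above looks like for this transcendental equation (our own illustration, obtained by differentiating $2\beta+\sin(2\beta)-\pi\sin(\varphi)$ with respect to $\beta$):\n\t\[\beta_{k+1} = \beta_k - \frac{2\beta_k+\sin(2\beta_k)-\pi\sin(\varphi)}{2+2\cos(2\beta_k)}\]\n\tA convenient starting value is $\beta_0=\varphi$; a few iterations suffice, except very near the poles, where the denominator vanishes.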
Returning to the construction: the clever idea now is to shrink the elliptical projection so that when the longitude is near zero we get a very small projection ellipse, as visible in red below, and when it is equal to $+\pi$ we get the whole projection ellipse, in green below (remember we do that in the horizontal direction of $y$). So multiplying the elliptic function $y$ by the ratio $\lambda/\pi$ will produce the expected shrinkage:\n\t\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.7\textwidth]{img/geometry/latitude_shrinkage_mollweide.jpg}\n\t\end{figure}\n\tTherefore:\n\t\[y = \frac{\lambda}{\pi}\,b\cos(\beta)\]\n\tThen we fall back on the second relation of the Mollweide projection:\n\t\[y = \frac{2\sqrt{2}}{\pi}R\,\lambda\cos(\beta)\]\n\tWe have all three relations and this finishes the proof!\n\t\begin{flushright}\n\t\t$\blacksquare$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tAlthough the applied condition was that one half of the sphere has to be mapped onto a disc, the final projection equations hold for the whole sphere and give its image situated inside an ellipse.\n\t\n\tLet us consider the shape of the Mollweide projection of the whole sphere. From:\n\t\[x = \sqrt{2}\,R\sin(\beta),\qquad y = \frac{2\sqrt{2}}{\pi}R\,\lambda\cos(\beta)\]\n\tit is easy to get the equation of a meridian in the projection:\n\t\[\frac{x^2}{2R^2}+\frac{\pi^2 y^2}{8R^2\lambda^2} = 1\]\n\tIt is obvious that for a given $\lambda$ this is the equation of an ellipse. It follows that the semi-axis $a$ is constant, while $b$ depends on the longitude $\lambda$. If we take $\lambda=\pi$, then $b=2 \sqrt{2} R$, and $a: b=1: 2$, and that is the ratio of semi-axes in the Mollweide projection. The question arises: is it possible to find a pseudocylindrical equal-area projection that will give the whole world in an arbitrary ellipse satisfying any given ratio $a: b$ or $b: a$? The answer is yes, and this leads to the \"Generalized Mollweide projection\". But as the proof is of no interest in this book, the reader can refer to \cite{lapaine2011mollweideova} for a detailed proof of it.\n\n\tSo the Mollweide is a pseudocylindrical projection in which the equator is represented as a straight horizontal line perpendicular to a central meridian of one-half its length. The parallels compress near the poles, while the meridians are equally spaced at the equator. The meridians at 90 degrees east and west form a perfect circle, and the whole earth is depicted in a proportional 2:1 ellipse.\n\t\n\tNotice that the Mollweide and Hammer projections are occasionally confused, since they are both equal-area and share the elliptical boundary; however, the latter design has curved parallels and is not pseudocylindrical:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[width=0.7\textwidth]{img/geometry/hammer_projection.jpg}\n\t\t\caption{Hammer projection}\n\t\end{figure}\n\t\n\t\subsection{Other perspectives}\n\tWe can also choose any other angle of perspective, one that does not keep anything equal but that is just well adapted to the technology or to human perception. For example, the games Diablo II or Age of Empires use an angle of $53.130102^\circ$ (rounded to $53$), since for old $4:3$\footnote{The origin of the $4:3$ ratio is interesting and useful, as the ratio forms a Pythagorean triple, meaning that the length of the diagonal (hypotenuse) is an exact integer value (of pixels...).} screens it gives as result that an object having a vanishing line through the upper left corner of the $4:3$ screen will also pass through the bottom right corner of the screen. 
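As a quick arithmetic check of that angle and of the footnote's claim:\n\t\[\arctan\left(\frac{4}{3}\right) \approx 53.130102^\circ\qquad\text{and}\qquad 3^2+4^2 = 5^2\]\n\tso that, for example, a $4:3$ screen of $640\times 480$ pixels has a diagonal of exactly $800$ pixels.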
It is therefore just a choice of comfort.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.8]{img/geometry/diablo_II_perspective_grid.jpg}\n\t\t\caption[]{Diablo II perspective grid}\n\t\end{figure}\n\tAlternatively, there are also a few games which use a trimetric projection where one axis is 1:2 and the other is 2:1, as below:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/trimetric_projection.jpg}\n\t\t\caption[]{Example of trimetric projection}\n\t\end{figure}\n\tTo summarize, we have seen so far the following projections (the spherical projections are not listed in the figure below for the moment!):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/projections_summary.jpg}\n\t\t\caption{Non-exhaustive chart of various projection methods}\n\t\end{figure}\n\t\n\t\subsection{Homogeneous Coordinates (projection coordinates)}\n\tIn mathematics, \"\NewTerm{homogeneous coordinates}\index{homogeneous coordinates}\label{homogeneous coordinates}\", also named \"\NewTerm{projection coordinates}\", introduced by August Ferdinand Möbius, make calculations possible in projective space just as Cartesian coordinates do in Euclidean space. Homogeneous coordinates are used extensively in computer graphics, and more particularly for the representation of three-dimensional (3D) scenes, because they are adapted to projective geometry and allow the transformations of space to be characterized in an algorithmically optimal form. The matrix notation is particularly used in 3D graphics programming libraries such as OpenGL and Direct3D.\n\n\tWe proved in our study of transformations in the plane and space (\SeeChapter{see section Euclidean Geometry page \pageref{geometric transformations}}) that the translation, scaling, rotation or reflection could not be represented in (real) matrix form without a trick, which was to add a dummy extra dimension to the vector of coordinates and to the associated transformation matrix; but we did it only for the translation. 
So let us see it now for all the other classical transformations.\n\t\n\t\subsubsection{$\mathcal{P}^2$ Projective Space}\n\tThus, we saw that the translation in $\mathbb{R}^2$ could be written in matrix form (with translation components $t_x,t_y$):\n\t\[\begin{pmatrix}x'\\y'\\1\end{pmatrix} = \begin{pmatrix}1&0&t_x\\0&1&t_y\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\]\n\tSo always in $\mathbb{R}^2$, a scaling of factor $k$ which was written:\n\t\[\begin{pmatrix}x'\\y'\end{pmatrix} = \begin{pmatrix}k&0\\0&k\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\]\n\tbecomes in homogeneous coordinates:\n\t\[\begin{pmatrix}x'\\y'\\1\end{pmatrix} = \begin{pmatrix}k&0&0\\0&k&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\]\n\tSo always in $\mathbb{R}^2$, a rotation of angle $\theta$ in the $x-y$ plane:\n\t\[\begin{pmatrix}x'\\y'\end{pmatrix} = \begin{pmatrix}\cos(\theta)&-\sin(\theta)\\\sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\]\n\tbecomes (and as we proved it, this is indeed a rotation around the $z$-axis):\n\t\[\begin{pmatrix}x'\\y'\\1\end{pmatrix} = \begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\\sin(\theta)&\cos(\theta)&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\]\n\tSo always in $\mathbb{R}^2$, the reflection along the $x$-axis that was written:\n\t\[\begin{pmatrix}x'\\y'\end{pmatrix} = \begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\]\n\tbecomes:\n\t\[\begin{pmatrix}x'\\y'\\1\end{pmatrix} = \begin{pmatrix}1&0&0\\0&-1&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\1\end{pmatrix}\]\n\tand so on for the other transformations that we have proved and that were summarized in the following figure, which we have already seen and where the homogeneous coordinates are in gray:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.9]{img/geometry/affine_transformation_matrix_summary.jpg}\n\t\t\caption{Summary of homogeneous coordinates transformations}\n\t\end{figure}\n\tMathematicians then say that when we use the above transformations, or compositions of some of them, we place ourselves in the projective space $\mathcal{P}^2$.\n\t\n\t\subsubsection{$\mathcal{P}^3$ Projective Space}\n\tVerbatim, we can do the same with a point $P$ of $\mathbb{R}^3$, which will then in $\mathcal{P}^3$ be represented by a $4\times 1$ coordinate vector:\n\t\[P = \begin{pmatrix}x\\y\\z\\1\end{pmatrix}\]\n\tThus we have in space the following transformation matrices:\n\t\begin{itemize}\n\t\t\item For translations:\n\t\t\[\begin{pmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{pmatrix}\]\n\t\t\item For rotations it is enough for sure in 3D to have, as we proved it:\n\t\t\n\t\tBut if we want to work with projection coordinates, this becomes obviously:\n\t\t\[\begin{pmatrix}R&\mathbf{0}\\\mathbf{0}^{T}&1\end{pmatrix}\qquad\text{where } R \text{ is the } 3\times 3 \text{ rotation matrix}\]\n\t\t\item For homotheties (scaling), obviously:\n\t\t\[\begin{pmatrix}k_x&0&0&0\\0&k_y&0&0\\0&0&k_z&0\\0&0&0&1\end{pmatrix}\]\n\t\t\item In space, one can push in two coordinate axis directions and keep the third one fixed. The following is the shear transformation in both the $x$- and $y$-directions with shearing factors $a$ and $b$, respectively, keeping the $z$-coordinate the same (so it is only one of the possible special cases):\n\t\t\[\begin{pmatrix}1&0&a&0\\0&1&b&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\]\n\t\tThus, a point $(x, y, z)$ in space is transformed to $(x + az, y + bz, z)$. Therefore, the $z$-coordinate does not change, while $(x, y)$ is \"pushed'' in the direction of $(a, b, 0)$ with a factor $z$.\n\t\end{itemize}\n\tMathematicians then say that when we use the above transformations, or compositions of some of them, we place ourselves in the projective space $\mathcal{P}^3$.\n\t\n\tWe have proved above that in the case of the conical projection, we had the following homographic transformations:\n\t\[x' = \frac{\Delta\,x}{z+\Delta},\qquad y' = \frac{\Delta\,y}{z+\Delta}\]\n\twhere $x',y',z'$ are the projections of the points $x,y,z$ onto the picture plane, with a focal distance $\Delta$, for recall...\n\n\tWhat is traditionally written by putting $z:=z+\Delta$ and $f:=\Delta$:\n\t\[x' = \frac{f\,x}{z},\qquad y' = \frac{f\,y}{z}\]\n\twhere we see that if the focal distance $f$ is infinite, the projection coincides with the $XY$ plane (or if the object is infinitely far away, as it is equivalent...).\n\t\n\tLet us put:\n\t\n\tWe can then write:\n\t\n\tOr better:\n\t\n\tWe then use the following matrix for the conical projection:\n\t\[\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&\frac{1}{f}&0\end{pmatrix}\]\n\tand then we normalize the coordinates by the ratio $z/f$. 
Indeed (a reader requested the details):\n\t\[\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&\frac{1}{f}&0\end{pmatrix}\begin{pmatrix}x\\y\\z\\1\end{pmatrix} = \begin{pmatrix}x\\y\\z\\\frac{z}{f}\end{pmatrix}\qquad\text{and, normalizing,}\qquad \begin{pmatrix}x\\y\\z\\\frac{z}{f}\end{pmatrix}\sim\begin{pmatrix}\frac{fx}{z}\\\frac{fy}{z}\\f\\1\end{pmatrix}\]\n\t\n\t\begin{flushright}\n\t\begin{tabular}{l c}\n\t\circled{80} & \pbox{20cm}{\score{4}{5} \\ {\tiny 35 votes,  82.29\%}} \n\t\end{tabular} \n\t\end{flushright}\n\t\n\t%to make section start on odd page\n\t\newpage\n\t\thispagestyle{empty}\n\t\mbox{}\t\t\n\t\section{Analytical Geometry}\n\n\t\lettrine[lines=4]{\color{BrickRed}T}he \"\NewTerm{analytic geometry}\index{analytic geometry}\" is the branch of geometry that deals with the study of geometric shapes and their properties using the advanced tools of Calculus, such as functional analysis, vector calculus and linear algebra. Its boundary lies in the tools used, and it originated with the work of René Descartes in the early 17th century. \"Vector geometry\" is a subset of analytical geometry and we will also use it in this section, as in that of Differential Geometry.\n\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWhen we use for these same subjects the differential and integral calculus, we then say that we do \"differential geometry\" (see the section of the same name, page \pageref{differential geometry}).\n\t\end{tcolorbox}\n\t\n\tAnalytic geometry is a very broad area (like everything else in this book), so... we discuss here only the elements indispensable for the study of physics (especially astronomy and particle physics) and engineering (space engineering). These elements are also often studied in early classes and are (cited in order of study at high school): the conics, the equations of the line, of the plane, of the sphere, etc., their intersections, their tangent planes and many others.\t\n\t\n\tThe reader will notice that we begin this section with conics and not straight lines, simply because we will use many more properties of conics than of straight lines in this book. For the analytical geometric properties of straight lines, see the subsection after the conics.\n\n\t\subsection{Conics}\label{conics}\n\tIt has been very difficult for us to choose whether to put the study of conics in the chapter on Algebra or on Geometry. We finally decided to put this study in this chapter (hence Geometry...), which presumes that the reader who has made a linear reading of this book has already covered all chapters and sections presenting the mathematical tools necessary for the study of conics. We hope that our choice will be the best one for the reader.\n\t\n\tThe study of conics will be very useful in the section on Astronomy (we owe to Kepler many results from the study of conics) and also in the sections on Geometrical Optics, Statistics and Industrial Engineering. It is therefore appropriate to go into the details of this subject.\n\n\t\subsubsection{Algebraic approach}\n\tConsider $(\text{O},\vec{i},\vec{j})$ an orthonormal frame of the plane. The simplest algebraic curves that we can find after the lines, whose general form equations are (reminder):\n\t\nare the curves of the second degree, named \"\NewTerm{analytical conics}\index{analytical conics}\", that is to say by extension:\n\t\nwith $\alpha,\beta,\gamma$ all non-zero.\n\n\tThe corresponding second-degree curves are named \"conics\" (also named \"quadrics\index{quadrics}\label{quadrics}\" because of the presence of a quadratic term). 
The conic will be named a \"proper conic\" if it is an ellipse, a parabola or a hyperbola, as we shall see further below.\n\nThis last relation can also be written in matrix form (very important in the practice of Numerical Methods, as we will see in the section of the same name, and in the classification of differential equations, as discussed in the section of Differential and Integral Calculus):\n\t\na notation for which there exist several variants...:\n\t\n\n\tIt gives for different values of $g$ the following plots:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.75]{img/geometry/conics_plane_plots.eps}\n\t\t\caption{Conics with respectively $b=0.5, 1.5$ and $1$ and several values of $g$}\n\t\end{figure}\n\tOur first task will be to obtain, by translation and rotation of the coordinate system in which this relation is expressed, a much simpler reduced equation, by eliminating the term $xy$. Indeed, let us choose a new frame deduced from the previous one just by a rotation of angle $\theta$. Let $x'$ and $y'$ be the coordinates of the new points. We have (\SeeChapter{see section Euclidean Geometry page \pageref{rotation matrix in the plane}}):\n\t\[x = x'\cos(\theta)-y'\sin(\theta),\qquad y = x'\sin(\theta)+y'\cos(\theta)\]\n\tThus:\n\t\nThe equation becomes:\n\t\nSo we want all the terms $x'y'$ to be grouped as:\n\t\nSince (\SeeChapter{see section Trigonometry page \pageref{remarkable trigonometric identities}}):\n\t\[\cos^2(\theta)-\sin^2(\theta) = \cos(2\theta),\qquad 2\sin(\theta)\cos(\theta) = \sin(2\theta)\]\nsubstituting, we get:\n\t\nTo have the terms in $x'y'$ cancel, we just have to choose the rotation angle $\theta$ such that:\n\t\nTherefore we will consider now the equation:\n\t\n\t\begin{enumerate}\n\t\t\item If $\alpha=0$ and $\beta\neq 0$, dividing by $\beta$ we can bring ourselves back to an equation of the following type:\n\t\t\n\tWhere:\n\t\t\begin{itemize}\n\t\t\t\item If $\gamma\neq 0$, we end up with an equation describing the figure of a \"\NewTerm{parabola}\index{parabola}\" of axis parallel to $\text{O}X$.\n\t\t\t\item If $\gamma = 0$, it is a degenerate case (a degenerate conic is a conic that reduces to two lines, which may or may not be parallel, or to a single line).\n\t\t\end{itemize}\n\t\item If $\alpha \neq 0$ and $\beta = 0$, the situation is treated exactly as before.\n\t\n\t\item If $\alpha \neq 0$ and  $\beta \neq 0$, we can remove the $\delta x$ and $\varepsilon y$ terms as follows:\n\t\n\t\n\tAnd by a simple change of frame via translations, we arrive at an equation that looks like the following:\n\t\t\n\t\t\begin{itemize}\n\t\t\t\item If $\gamma=0$ the previous relation simplifies to:\n\t\t\t\t\[\alpha X^2+\beta Y^2 = 0\]\n\t\t\t\tAnd therefore it is obvious that if $\sgn(\alpha)=\sgn(\beta)$ there is only the trivial solution $X=Y=0$ if we stay in $\mathbb{R}$. Instead, if $\sgn(\alpha) \neq \sgn(\beta)$, we have an infinity of solutions represented by two straight lines.\n\t\t\t\item If $\gamma \neq 0$, let us put:\n\t\t\t\t\n\t\t\tand let us divide by $\mid \gamma \mid$ such that:\n\t\t\t\t\n\t\t\tIf we put now:\n\t\t\t\n\t\t\tWe get:\n\t\t\t\n\t\t\tSo we therefore have several possible situations:\n\t\t\t\n\t\t\tTwo terms above are impossible in $\mathbb{R}^2$, and this is why we crossed them out (the sum of two positive numbers cannot be negative and vice versa).\n\t\t\t\n\t\t\tThere are several cases of interesting figures:\t\t\t\n\t\t\t\begin{enumerate}\n\t\t\t\t\item For\label{equation of a circle}:\n\t\t\t\t\t\[\frac{X^2}{a^2}+\frac{Y^2}{b^2} = 1\]\n\t\t\t\tand $a=b=1$, we have a unit circle. 
The reader can have fun testing this with the following commands in Maple 4.00b:\\\n\t\t\t\t\n\t\t\t\t\texttt{>with(plots):}\\\n\t\t\t\t\texttt{>a:=1;b:=1;}\\\n\t\t\t\t\texttt{>implicitplot(x\string^2/a\string^2+y\string^2/b\string^2=1,x=-10..10,y=-10..10);}\n\t\t\t\t\n\t\t\t\t\item For:\n\t\t\t\t\t\[\frac{X^2}{a^2}+\frac{Y^2}{b^2} = 1\]\n\t\t\t\tand $\forall a,b \in \mathbb{R}^{*}$ we have an ellipse\label{analytical expression ellipse} (which therefore contains the specific case of the circle) with half-axes $a$ and $b$ (for the visual representation of an ellipse see further below or otherwise see the section of Geometric Forms).\\\n\t\t\t\t\n\t\t\t\tThe reader can have fun testing this with the following commands in Maple 4.00b:\\\n\t\t\t\t\n\t\t\t\t\texttt{>with(plots):}\\\n\t\t\t\t\texttt{>a:=4;b:=8;}\\\n\t\t\t\t\texttt{>implicitplot(x\string^2/a\string^2+y\string^2/b\string^2=1,x=-10..10,y=-10..10);}\n\t\t\t\t\n\t\t\t\t\item For:\n\t\t\t\t\[\frac{X^2}{a^2}-\frac{Y^2}{b^2} = 1\]\n\t\t\t\tand $\forall a,b \in \mathbb{R}^{*}$, we get hyperbolas whose symmetry axis is parallel to $\text{O}x$ or $\text{O}y$ (for a visual representation see further below). We say that the hyperbola is an \"\NewTerm{equilateral hyperbola}\index{hyperbola}\label{hyperbola}\" when $a = b$.\\\n\t\t\t\t\n\t\t\t\tThe reader can have fun testing this with the following commands in Maple 4.00b:\\\n\t\t\t\t\n\t\t\t\t\texttt{>with(plots):}\\\n\t\t\t\t\texttt{>a:=4;b:=4;}\\\n\t\t\t\t\texttt{>implicitplot(x\string^2/a\string^2-y\string^2/b\string^2=1,x=-10..10,y=-10..10);}\t\n\t\t\t\end{enumerate}\n\t\t\end{itemize}\n\t\tThe term \"conic\" comes from the fact that one of the first definitions of conics consisted of the intersection of a cone and a plane.\n\t\t\n\t\tIndeed, given:\t\t \n\t\t\t\t\[z^2 = x^2+y^2\]\n\t\tthe equation of a cone having a half-angle at the apex of $\pi/4$ (i.e. a full opening angle of $\pi/2$). This is simply an application of the Pythagorean theorem...\n\t\t\n\t\tAlso given a plane oriented with an angle $\theta$ relative to the $Z$ axis:\n\t\t\[z\cos(\theta)+y\sin(\theta) = h\]\n\t\twhere the reader can see that if $\theta=0$ we get $z=h$ for $\forall x,y$: the plane is therefore parallel to $x\text{O}y$; if $\theta=\pi/2$ we get $y=h$ for $\forall x,z$: the plane is parallel to $x\text{O}z$. 
In both situations, and for every $\theta$, the plane never intersects the $X$ axis, and we have a normal vector that always lies in the plane $y\text{O}z$.\n\t\t\n\t\tThis normal vector has almost obviously the following coordinates (we use direction-cosine notation):\n\t\t\[\vec{n} = (0,\; \sin(\theta),\; \cos(\theta))\]\n\t\tConsider now the rotation matrix in the space $\mathbb{R}^3$ relative to the axis $\text{O}z$ (\SeeChapter{see section Euclidean Geometry page \pageref{3d rotation matrix around}}):\n\t\t\n\t\tSo we have for the expression of the rotation of our plane:\n\t\t\n\t\tAfter simplification, $\forall \theta$:\n\t\t\n\t\tThis just means that when we apply the rotation matrix to our plane with the same angle $\theta$, we take it back to its \"initial\" position.\n\t\tIdentically, for the cone, a rotation according to the $Z$ axis (so not much happens):\n\t\t\n\t\tAfter development and simplification:\n\t\t\n\t\tan equation that gives a horizontal cone for $\theta=0$ and a vertical one for $\theta=\frac{\pi}{2}$.\n\t\tThus, we have one possible general system:\n\t\t\n\t\tWe see then that for:\n\t\t\begin{itemize}\n\t\t\t\item $\theta \in \left]\dfrac{\pi}{4},\dfrac{\pi}{2}\right]$ we get an intersection between the plane and the cone giving an ellipse\n\t\t\t\item $\theta=\dfrac{\pi}{4}$ we get a parabola\n\t\t\t\item $\theta \in \left[0,\dfrac{\pi}{4}\right[$ we get a hyperbola\n\t\t\end{itemize}\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/cone_plane_intersections.jpg}\n\t\t\t\caption{Various intersections between a cone and a plane}\n\t\t\end{figure}\n\t\tand a possible real-world application, a neighbourhood noise impact analysis:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics[scale=0.4]{img/geometry/show_wave_cone_hyperbola.jpg}\n\t\t\t\caption[A shock wave intersecting the ground]{A shock wave intersecting the ground (source: OpenStax)}\n\t\t\end{figure}\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tWe also give the curve of equation $xy=1$ the name hyperbola because, with a change of variables:\n\t\t\[x = \frac{X-Y}{\sqrt{2}},\qquad y = \frac{X+Y}{\sqrt{2}}\]\n\t\twhich brings us to:\n\t\t\[\frac{X^2}{2}-\frac{Y^2}{2} = 1\]\n\t\twhich, as we have seen, is the equation of a hyperbola.\n\t\t\end{tcolorbox}\n\t\tFor those who have Maple 4.00b or later, here are some non-trivial commands to have fun producing a parabola:\n\t\t\n\t\t\texttt{>restart: with(plots):}\\\n\t\t\texttt{>c:=sqrt(x\string^2+y\string^2):}\\\n\t\t\texttt{>p:=y+3:}\\\n\t\t\texttt{>Y:=solve(c=p,y);}\\\n\t\t\texttt{>intsect:=subs(y=Y,c);}\\\n\t\t\texttt{>P1:=plot3d(c,x=-5..5,y=-5..5,axes=normal,color=red,numpoints=2000}\n\t\t\texttt{,view=[-5..5,-5..5,0..5],style=wireframe):}\\\n\t\t\texttt{>P2:=plot3d(p,x=-5..5,y=-5..5,axes=normal,color=yellow,}\\\n\t\t\texttt{numpoints=2000,view=[-5..5,-5..5,0..5],style=patchnogrid):}\\\n\t\t\texttt{>display(P1,P2,scaling=constrained, orientation=[-10,75]);}\\\n\t\t\n\t\tthat gives:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/plot_maple_parabola.jpg}\n\t\t\t\caption{Maple 4.00b plot to obtain a parabola by the intersection of a cone and a plane}\n\t\t\end{figure}\n\t\tor to obtain a hyperbola:\n\t\t\n\t\t\texttt{>restart: 
with(plots):}\\\n\t\t\texttt{>c:=sqrt(x\string^2+y\string^2):}\\\n\t\t\texttt{>p:=4*y+5:}\\\n\t\t\texttt{>Y:=solve(c=p,y);}\\\n\t\t\texttt{>intsect:=subs(y=Y[1],c);}\\\t\texttt{>P1:=plot3d(c,x=-5..5,y=-5..5,axes=normal,color=red,numpoints=2000}\\\n\t\t\texttt{,view=[-5..5,-5..5,0..5],style=wireframe):}\\\t\texttt{>P2:=plot3d(p,x=-5..5,y=-5..5,axes=normal,color=yellow,numpoints=2000}\\\n\t\t\texttt{,view=[-5..5,-5..5,0..5],style=patchnogrid):}\\\t\texttt{>P3:=spacecurve([x,Y[1],intsect],x=-5..5,color=black,thickness=3):}\\\n\t\t\texttt{>display(P1,P2,P3,scaling=constrained);}\\\n\t\t\n\t\tthat gives:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/plot_maple_hyperbola.jpg}\n\t\t\t\caption{Maple 4.00b plot to obtain a hyperbola by the intersection of a cone and a plane}\n\t\t\end{figure}\n\t\tor to obtain an ellipse:\n\t\t\n\t\t\texttt{>restart: with(plots):}\\\n\t\t\texttt{>c:=sqrt(x\string^2+y\string^2):}\\\n\t\t\texttt{>p:=y/3+3:}\\\n\t\t\texttt{>Y:=solve(c=p,y); }\\\n\t\t\texttt{>E1:=subs(y=Y[1],c); E2:=subs(y=Y[2],c);}\\\n\t\t\texttt{>P1:=plot3d(c,x=-5..5,y=-5..5,axes=normal,color=red,numpoints=2000,}\\\n\t\t\texttt{view=[-5..5,-5..5,0..5],style=wireframe):}\\\n\t\t\texttt{>P2:=plot3d(p,x=-5..5,y=-5..5,axes=normal,color=yellow,numpoints=2000,}\\\n\t\t\texttt{view=[-5..5,-5..5,0..5],style=patchnogrid):}\\\n\t\t\texttt{>P3:=spacecurve({[x,Y[1],E1],[x,Y[2],E2]},x=-5..5,color=black,thickness=3,numpoints=2000):}\\\n\t\t\texttt{>display(P1,P2,P3,scaling=constrained);}\\\n\t\t\n\t\tthat gives:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/plot_maple_ellipse.jpg}\n\t\t\t\caption{Maple 4.00b plot to obtain an ellipse by the intersection of a cone and a plane}\n\t\t\end{figure}\n\t\tor finally to obtain a circle:\n\t\t\n\t\t\texttt{>restart: with(plots):}\\\n\t\t\texttt{>c:=sqrt(x\string^2+y\string^2):}\\\n\t\t\texttt{>p:=3:}\\\n\t\t\texttt{>Y:=solve(c=p,y);}\\\n\t\t\texttt{>circ1:=subs(y=Y[1],c); circ2:=subs(y=Y[2],c);}\\\n\t\t\texttt{>P1:=plot3d(c,x=-5..5,y=-5..5,axes=normal,color=red,numpoints=2000,}\\\n\t\t\texttt{view=[-5..5,-5..5,0..5],style=wireframe):}\\\n\t\t\texttt{>P2:=plot3d(p,x=-5..5,y=-5..5,axes=normal,color=yellow,numpoints=2000,}\\\n\t\t\texttt{view=[-5..5,-5..5,0..5],style=patchnogrid):}\\\n\t\t\texttt{>P3:=spacecurve({[x,Y[1],circ1],[x,Y[2],circ2]},\nx=-5..5,color=black,thickness=3,numpoints=2000):}\\\n\t\t\texttt{>display(P1,P2,P3);}\\\n\n\t\tthat gives:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/plot_maple_circle.jpg}\n\t\t\t\caption{Maple 4.00b plot to obtain a circle by the intersection of a cone and a plane}\n\t\t\end{figure}\t\t\n\t\end{enumerate}\n\tHowever, the conics also have a geometrical definition:\n\t\n\t\pagebreak\n\t\subsubsection{Geometric Approach}\n\tLet $F$ be a point in the plane, $D$ a line not containing $F$ (at a non-zero distance from it) and $e$ a positive real number. 
We are interested in the set of points $M$ defined by:\n\t\[d(M,F) = e\;d(M,D)\]\n\twhere $F$ is named the \"\NewTerm{focus}\index{focus}\", $D$ the \"\NewTerm{direction of the conical}\index{direction of the conical}\" (or directrix) and $e$ the \"\NewTerm{eccentricity}\index{eccentricity}\label{eccentricity}\":\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/geometry/focus_direction_excentricity.jpg}\n\t\t\caption{Definition of the focus, direction of the conical and eccentricity}\n\t\end{figure}\n\tWe will subsequently arrange to always have $F$ as the origin of the frame for the conic; therefore $D$ will have by definition the equation:\n\t\[D:\; x = -h\]\n\twith $h>0$. We denote by $d(M, D)$ the distance between the point $M$ and the line $D$.\n\t\n\tTherefore:\n\t\[\sqrt{x^2+y^2} = e\,|x+h| \quad\Longleftrightarrow\quad x^2+y^2 = e^2(x+h)^2\]\n\tWe fall back on the equation of a conic, since the latter is a special case of:\n\t\n\tWe can now consider several specific cases:\n\t\begin{enumerate}\n\t\t\item Case where $e=1$:\n\t\tThe equation:\n\t\t\[x^2+y^2 = (x+h)^2\]\n\t\tthen reduces trivially to:\n\t\t\[y^2 = 2hx+h^2\]\n\t\tIt is therefore a parabola of axis orthogonal to $D$, whose vertex $\Omega$ is the midpoint of the segment $\overline{FK}$, where $K$ is the projection of $F$ on $D$ (see figure below).\n\t\t\n\t\tIf we rewrite the last relation in the form:\n\t\t\[y^2 = 2h\left(x+\frac{h}{2}\right)\]\n\t\tand redefine the origin with respect to $\Omega$ by a translation of $h/2$, the generating focus of the parabola will be at $F=(h/2,0)$ and the latter equation is then reduced to:\n\t\t\[y^2 = 2hx\]\n\t\twhere $h$ is named the \"\NewTerm{parameter of the parabola}\index{parameter of the parabola}\" and, relatively to $\Omega$, the focus will be given by the coordinates $F=(h/2,0)$ and the direction line by the equation $D:x=-h/2$. Written in a different way, we fall back on the famous relation of the parabola in functional analysis (the reader can refer to a practical calculation example for an antenna in the section about Optics):\n\t\t\n\t\tThat is:\n\t\t\n\t\tAs shown in the figure below, the distance from the direction line to $\Omega$ is imposed by the conditions of the model:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/conical_parabola_eccentricity_equal_one.jpg}\n\t\t\t\caption{Representation of the parameters of the conical for the parabola}\n\t\t\end{figure}\n\t\n\t\t\item Case where $e<1$:\n\t\t\n\t\tIt is an ellipse. Indeed, it is not easy to see it, but by rearranging the terms of the equation:\n\t\t\[x^2+y^2 = e^2(x+h)^2\]\n\t\twe have (and this independently of whether $e<1$ or not):\n\t\t\[(1-e^2)x^2-2e^2hx+y^2 = e^2h^2\]\n\t\tThe last term of the relation before last can be recovered as follows after expansion:\n\t\t\n\t\tLet us define:\n\t\t\[x_\Omega = \frac{e^2h}{1-e^2}\]\n\t\twhich is the origin of the ellipse on $X$ (the $x$ and $y$ must now be taken relative to this new origin!!!). The previous equation can therefore be simplified and becomes (this always regardless of whether $e<1$ or not):\n\t\t\[\frac{x^2}{\left(\dfrac{eh}{1-e^2}\right)^2}+\frac{y^2}{\dfrac{e^2h^2}{1-e^2}} = 1\]\n\t\tTherefore if $e<1$, the denominators are both positive and we fall back on the reduced equation of an ellipse, since $a\neq b$.\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tYou must notice that this definition cannot include the circle among the ellipses, otherwise there is a singularity, because the denominators would be zero (the eccentricity being equal to zero for the circle).\n\t\t\end{tcolorbox}\n\t\tTo get the semi-major axis of the ellipse we just have to put $y=0$. 
Thus, we have:\n\t\t\[x = \pm\frac{eh}{1-e^2}\]\n\t\tthen the semi-major axis is equal to:\n\t\t\[a = \frac{eh}{1-e^2}\]\n\t\tIn the same way, we get for the semi-minor axis:\n\t\t\[b = \frac{eh}{\sqrt{1-e^2}}\]\n\t\tPutting $p=eh$, named the \"\NewTerm{parameter of the ellipse}\index{parameter of the ellipse}\label{parameter of the ellipse}\" or \"\NewTerm{focal parameter of the ellipse}\index{focal parameter of the ellipse}\label{focal parameter of the ellipse}\", we get:\n\t\t\[a = \frac{p}{1-e^2},\qquad b = \frac{p}{\sqrt{1-e^2}}\]\n\t\twhose first relation will be very useful in the section of Astronomy and also in the section of General Relativity.\n\t\t\n\t\tNotice also by the way that we have then:\n\t\t\[b^2 = a\,p\]\n\t\tIt is customary to denote the distance from the center $\Omega$ of the ellipse to the focus $F$ with the letter $c$, such that:\n\t\t\[c = \frac{e^2h}{1-e^2} = a\,e\]\n\t\tWe then have:\n\t\t\[c^2 = a^2-b^2\]\n\t\tThere are therefore two foci of the ellipse, at equal distances from the center $\Omega$ but on opposite sides. We therefore define the eccentricity of an ellipse by the ratio:\n\t\t\[e = \frac{c}{a}\]\n\t\tThe reader will perhaps also notice that if we put $x = 0$ in:\n\t\t\n\t\twe get for the ordinate the product $eh$, and it can be expressed by the ellipse parameters (even if this may seem odd at the units level, it is implicitly correct):\n\t\t\[p = eh = \frac{b^2}{a}\]\n\t\tWe can then also prove a relation that is very common in formula compilation books about ellipses:\n\t\t\n\t\tThat is to say:\n\t\t\n\t\tThis property is then proved, and it allows us to write the eccentricity using only the conventional parameters of the ellipse:\n\t\t\[e = \sqrt{1-\frac{b^2}{a^2}}\]\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics{img/geometry/ellipse_parameters.jpg}\n\t\t\t\caption{Representation of the parameters of the conical for the ellipse}\n\t\t\end{figure}\n\t\tHere are some well known elliptic eccentricities:\n\t\t\begin{table}[H]\n\t\t\t\centering\n\t\t\t\begin{tabular}{|l|c|}\n\t\t\t\hline\n\t\t\t\rowcolor[HTML]{9B9B9B} \n\t\t\t\multicolumn{1}{|c|}{\cellcolor[HTML]{9B9B9B}\textbf{Celestial object orbit}} & \textbf{Orbit eccentricity $\pmb{e}$} \\ \hline\n\t\t\tMercury & $0.206$ \\ \hline\n\t\t\tVenus & $0.007$ \\ \hline\n\t\t\tEarth & $0.017$ \\ \hline\n\t\t\tMars & $0.093$ \\ \hline\n\t\t\tJupiter & $0.048$ \\ \hline\n\t\t\tSaturn & $0.056$ \\ \hline\n\t\t\tUranus & $0.047$ \\ \hline\n\t\t\tNeptune & $0.009$ \\ \hline\n\t\t\tPluto & $0.250$ \\ \hline\n\t\t\tHubble Telescope & $0.0002842$ \\ \hline\n\t\t\tInternational Space Station & $0.0005309$ \\ \hline\n\t\t\tHalley's comet & $0.970$ \\ \hline\n\t\t\t\end{tabular}\n\t\t\t\caption{Some elliptic eccentricity values}\n\t\t\end{table}\n\t\t\n\t\tWe can also find out where the director lines $D$ of the ellipse are with respect to its border by using the definition of eccentricity when $y=0$. 
Then we have:\n\t\t\[d(\Omega,D) = \frac{a}{e}\]\n\t\t\n\t\ttherefore the factor multiplying $a$ is $1/e$, which is greater than unity and can grow to $+\infty$; this means that the director lines will be situated (at least in the special case presented here, where the eccentricity is positive and strictly less than unity) always outside of the ellipse border, at best tangential to this same border (but in all cases they will be outside of the ellipse).\n\t\t\n\t\tA useful and obvious parametric representation of the ellipse is an extension of the trigonometric circle (\SeeChapter{see section Trigonometry page \pageref{trigonometric circle}}):\n\t\t\n\t\tIndeed, if we consider the Cartesian equation of the ellipse proved above:\n\t\t\[\frac{X^2}{a^2}+\frac{Y^2}{b^2} = 1\]\n\t\tand putting $R=X/a$ and $S=Y/b$ we get:\n\t\t\[R^2+S^2 = 1\]\n\t\tIf we remember the unit circle in trigonometry, this equation admits as solutions $R=\cos(t)$ and $S=\sin(t)$. It then follows\label{parametric equation of an ellipse}:\n\t\t\[X = a\cos(t),\qquad Y = b\sin(t)\]\n\t\tThat's it! And obviously if $a=b\neq 0$ we fall back on the parametric equation of a circle!\n\t\t\n\t\tNow let us prove an important property of ellipses: the sum of the distances from the two foci to a point $M$ on the border of the ellipse is constant and equal to the major axis.\n\t\t\n\t\t\begin{dem}\n\t\tWe consider the center, which is the midpoint of the foci, at $\Omega(0,0)$. The foci lie on the major axis, of equation $y=0$; similarly, the minor axis has equation $x=0$.\n\t\t\n\t\tNow, if the lengths of the major and minor axes are $2a,2b$ respectively, with eccentricity $e$, the equation of the ellipse is:\n\t\t\[\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1\]\n\t\tAny point $M$ on the ellipse can be written, as we just saw before:\n\t\t\[M = \big(a\cos(\theta),\; b\sin(\theta)\big)\]\n\t\tSo, the distance between a point $M(a\cos(\theta),b\sin(\theta))$ and one of the foci, of coordinates $(ae,0)$, is:\n\t\t\[d_1 = \sqrt{(a\cos(\theta)-ae)^2+b^2\sin^2(\theta)} = a\big(1-e\cos(\theta)\big)\]\n\t\tSimilarly (as always, you can request the details if necessary), the distance between $(a\cos(\theta),b\sin(\theta))$ and $(-ae,0)$ is equal to $a(1+e\cos(\theta))$.\n\t\t\n\t\tTherefore the sum gives:\n\t\t\[d_1+d_2 = a\big(1-e\cos(\theta)\big)+a\big(1+e\cos(\theta)\big) = 2a\]\n\t\t\begin{flushright}\n\t\t\t$\blacksquare$  Q.E.D.\n\t\t\end{flushright}\n\t\t\end{dem}\n\t\tIt is this result that authorizes us to say that an ellipse can be constructed in the famous following way:\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics[scale=0.7]{img/geometry/ellipse_hand_made_construction.jpg}\n\t\t\t\caption[Hand-made construction of an ellipse]{Hand-made construction of an ellipse (source: OpenStax)}\n\t\t\end{figure}\n\t\tIt is interesting also to see the opposite reasoning! We take the construction above as a definition and we build from it the equation of the ellipse centered at the origin!\n\t\t\n\t\tFor this we begin with the foci $\left(-c,0\right)$ and $\left(c,0\right)$. The ellipse is then by definition the set of all points $\left(x,y\right)$ such that the sum of the distances from $\left(x,y\right)$ to the foci is constant, as shown in the figure above.\n\t\t\n\t\tIf $\left(a,0\right)$ is a vertex of the ellipse, the distance from $\left(-c,0\right)$ to $\left(a,0\right)$ is:\n\t\t\[a-(-c) = a+c\]\n\t\tThe distance from $\left(c,0\right)$ to $\left(a,0\right)$ is $a-c$. The sum of the distances from the foci to the vertex is:\n\t\t\[(a+c)+(a-c) = 2a\]\n\t\tIf $\left(x,y\right)$ is a point on the ellipse, then we can define the following variables:\n\t\t\[d_1 = \sqrt{(x+c)^2+y^2},\qquad d_2 = \sqrt{(x-c)^2+y^2}\]\n\t\tBy the definition of an ellipse, the \underline{sum} of the distances ${d}_{1}$, ${d}_{2}$ from the foci to any point $(x,y)$ of the ellipse is constant. We know that the sum of these distances is $2a$ for the vertex $(a,0)$. 
It follows that:\n\t\t\[d_1+d_2 = 2a\]\n\t\tfor any point on the ellipse. We will begin the derivation by applying the distance formula. The rest of the derivation is algebraic:\n\t\t\n\t\tThus, the standard equation of an ellipse is:\n\t\t\[\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1\]\n\t\tThe equation defines an ellipse centered at the origin, with for recall:\n\t\t\begin{itemize}\n\t\t\t\item If $a>b$, the ellipse is stretched further in the horizontal direction\n\t\t\t\item If $b>a$, the ellipse is stretched further in the vertical direction\n\t\t\t\item The length of the major axis is $2a$\n\t\t\t\item The coordinates of the vertices are $(\pm a,0)$\n\t\t\t\item The length of the minor axis is $2b$\n\t\t\t\item The coordinates of the co-vertices are $(0,\pm b)$\n\t\t\t\item The coordinates of the foci are $(\pm c,0)$\n\t\t\t\item We have the identity $c^2=a^2-b^2$\n\t\t\end{itemize}\n\t\t\label{centered offset ellipse}The standard form of the equation of an ellipse with center $(h,k)$ is then obviously:\n\t\t\[\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2} = 1\]\n\t\n\t\tHowever, there is another form of the equation of the ellipse, much more important, which is found in the sections of mechanics, astrophysics and quantum physics of this book.\n\t\t\n\t\tFirst let us recall that:\n\t\t\n\t\tIn polar coordinates, this gives:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tafter identification:\n\t\t\n\t\tWe obtain two different equations, but it is actually the same curve that describes the radius of the ellipse from one of its two foci. And we can see that:\n\t\t\n\t\tSince $p=eh$ is defined as the parameter of the conic, the polar equation of the ellipse is given by:\n\t\t\[r = \frac{p}{1\pm e\cos(\varphi)}\]\n\t\tNotice the three special values\index{focal parameter}\index{pericenter}\index{apocenter}:\n\t\t\[r\left(\frac{\pi}{2}\right) = p,\qquad r_{\min} = \frac{p}{1+e},\qquad r_{\max} = \frac{p}{1-e}\]\n\t\tand we get obviously:\n\t\t\[r_{\min} = a(1-e),\qquad r_{\max} = a(1+e),\qquad r_{\min}+r_{\max} = 2a\]\n\t\tand since $e=\pm c/a$ we also get another famous relation for the eccentricity:\n\t\t\[e = \frac{r_{\max}-r_{\min}}{r_{\max}+r_{\min}}\]\n\t\tThe pericenter is better known as the \"\NewTerm{perigee}\index{perigee}\" in astronomy, as well as the apocenter, which is better known as the \"\NewTerm{apogee}\index{apogee}\"; they are used a lot in astronomy and astrodynamics.\n\t\t\begin{figure}[H]\n\t\t\t\centering\n\t\t\t\includegraphics[scale=0.8]{img/geometry/apogee_perigee.jpg}\n\t\t\t\caption{Apogee and Perigee}\n\t\t\end{figure}\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tIn fact, \"apogee\" and \"perigee\" are reserved words for when the Earth is at the focus. If it is the Sun that is at the focus, we then speak of \"aphelion\" (which the Earth reaches at the beginning of July) and \"perihelion\" (at the beginning of January).\n\t\t\end{tcolorbox}\n\t\tIn the general case, $D$ may make any angle with the axis of polar angles, and the general equation is then (a very important relation in astronomy and aerospace engineering!):\n\t\t\n\t\t\n\t\t\item Case where $e>1$:\n\t\tThis is a hyperbola (same reasoning as for the ellipse, the computation above being valid whether $e>1$ or not):\n\t\t\n\t\tWhere we put again:\n\t\t\[x_\Omega = \frac{e^2h}{1-e^2}\]\n\t\twhich is the origin of the hyperbola. 
The above equation simplifies and becomes (so far, we end up with exactly the same expression as for the ellipse):

But as $e>1$, the denominator of the second term will be negative, and we therefore fall back on the reduced equation of a hyperbola!

We have therefore, for the semi-major axis and semi-minor axis (same reasoning as for the ellipse):

and:

and the corresponding figure:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/hyperbole_parameters.jpg}
	\caption{Representation of the parameters of the conic for the hyperbola}
\end{figure}
\end{enumerate}
where we presented the two asymptotes using a simple technique of passing to the limit:

If the hyperbola is equilateral, then it is obvious that the two asymptotes are perpendicular, since their slopes are respectively $+1$ and $-1$. In the equilateral case, we also have the eccentricity, which is immediately given by:

\[
e=\sqrt{2}
\]

We can also determine where the directrix lines $D$ of the hyperbolas are located with respect to their border, by using the definition of the eccentricity when $y=0$ and proceeding exactly as for the ellipse. Then we have:

therefore the factor of $a$ lies between $0$ and $1$ (tending to $0$ as $e$ becomes very large), meaning that the directrix lines will be located (in the case here where the eccentricity is strictly greater than unity) closest to the $y$-axis, tangent to the hyperbola in the limiting case (but in all cases they will be between the two branches of the hyperbola).

There is another nice way to build the equation of the hyperbola, by assuming that the hyperbola is the set of all points $(x,y)$ such that the \underline{difference} of the distances $d_1$, $d_2$ from $(x,y)$ to the foci is constant.

For this proof, let us consider $(-c,0)$ and $(c,0)$ to be the foci of a hyperbola centered at the origin.

If $(a,0)$ is a vertex of the hyperbola, the distance from $(-c,0)$ to $(a,0)$ is:

\[
a-(-c)=a+c
\]

The distance from $(c,0)$ to $(a,0)$ is $c-a$. The difference of the distances from the foci to the vertex is:

\[
(a+c)-(c-a)=2a
\]

If $(x,y)$ is a point on the hyperbola, we can define the following variables:

By assuming that, for a hyperbola, $d_2-d_1$ is constant for any point $(x,y)$ on the hyperbola, it follows for the vertex $(a,0)$ that:

\[
d_2-d_1=2a
\]

and therefore the same value must hold for the whole hyperbola. As with the derivation of the equation of an ellipse seen just previously, we will begin by applying the distance formula. The rest of the derivation is algebraic.
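As a brief sketch of that algebra (assuming, as in the conventions above, that $d_2$ is measured from the focus $(-c,0)$ and $d_1$ from the focus $(c,0)$):

\[
d_2-d_1=\sqrt{(x+c)^2+y^2}-\sqrt{(x-c)^2+y^2}=2a
\]

Isolating one radical, squaring, simplifying and squaring once more leads to:

\[
(c^2-a^2)x^2-a^2y^2=a^2(c^2-a^2)
\]

and putting $b^2=c^2-a^2$, then dividing both sides by $a^2b^2$, gives:

\[
\frac{x^2}{a^2}-\frac{y^2}{b^2}=1
\]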
Compare this derivation with the one from the previous section for ellipses.

This equation defines a hyperbola centered at the origin with vertices $(\pm a,0)$ and co-vertices $(0,\pm b)$.

Therefore the standard form of the equation of a hyperbola with center $(0,0)$ and transverse axis on the $x$-axis is:

\[
\frac{x^2}{a^2}-\frac{y^2}{b^2}=1
\]

where:
\begin{itemize}
	\item The length of the transverse axis is $2a$
	\item The coordinates of the vertices are $(\pm a,0)$
	\item The length of the conjugate axis is $2b$
	\item The coordinates of the co-vertices are $(0,\pm b)$
	\item The distance between the foci is $2c$
	\item We have the identity $c^2=a^2+b^2$
	\item The coordinates of the foci are $(\pm c,0)$
	\item The equations of the asymptotes are $y=\pm\displaystyle \dfrac{b}{a}x$
\end{itemize}
And for the standard form equation of a hyperbola with center $(h,k)$ and transverse axis parallel to the $x$-axis we have:

\[
\frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1
\]

\pagebreak
\subsubsection{Dandelin Theorem (Dandelin spheres)}
In geometry, the "\NewTerm{Dandelin spheres}\index{Dandelin spheres}" are one or two spheres that are tangent both to a plane and to a cone that intersects the plane. The intersection of the cone and the plane is a conic section, and the point at which either sphere touches the plane is a focus of the conic section, so the Dandelin spheres are also sometimes called focal spheres.

The Dandelin spheres can be used to prove at least two important theorems:
\begin{enumerate}
	\item The first theorem is that a closed conic section (i.e. an ellipse) is the locus of points such that the sum of the distances to two fixed points (the foci) is constant.

	As we have already proved this result previously, in a way that we consider the simplest and prettiest, we will not do it again using Dandelin spheres.

	\item The second theorem is that for any conic section, the distance from a fixed point (the focus) is proportional to the distance from a fixed line (the directrix), the constant of proportionality being named the "eccentricity".

	As we have already proved this result previously, in a way that we consider the simplest and prettiest, we will also not do it again using Dandelin spheres.
\end{enumerate}

\tikzset{
MyPersp/.style={scale=1.8,x={(-0.8cm,-0.4cm)},y={(0.8cm,-0.4cm)},
    z={(0cm,1cm)}},
%  MyPersp/.style={scale=1.5,x={(0cm,0cm)},y={(1cm,0cm)},
%    z={(0cm,1cm)}}, % uncomment the two lines to get a lateral view
MyPoints/.style={fill=white,draw=black,thick}
}

\begin{figure}[H]
	\begin{center}
\begin{tikzpicture}[MyPersp,font=\large, scale=0.8]
	% the base circle is the unit circle in plane Oxy
	\def\h{2.5}% Height of the ellipse center (on the axis of the cylinder)
	\def\a{35}% angle of the section plane with the horizontal
	\def\aa{35}% angle that defines position of generatrix PA--PB
	\pgfmathparse{\h/tan(\a)}
  \let\b\pgfmathresult
	\pgfmathparse{sqrt(1/cos(\a)/cos(\a)-1)}
  \let\c\pgfmathresult %Center Focus distance of the section ellipse.
	\pgfmathparse{\c/sin(\a)}
  \let\p\pgfmathresult % Position of Dandelin spheres centers
                       % on the Oz axis (\h +/- \p)
	\coordinate (A) at (2,\b,0);
	\coordinate (B) at (-2,\b,0);
	\coordinate (C) at (-2,-1.5,{(1.5+\b)*tan(\a)});
	\coordinate (D) at (2,-1.5,{(1.5+\b)*tan(\a)});
	\coordinate (E) at (2,-1.5,0);
	\coordinate (F) at (-2,-1.5,0);
	\coordinate (CLS) at (0,0,{\h-\p});
	\coordinate (CUS) at (0,0,{\h+\p});
	\coordinate (FA) at (0,{\c*cos(\a)},{-\c*sin(\a)+\h});% Foci
	\coordinate (FB) at (0,{-\c*cos(\a)},{\c*sin(\a)+\h});
	\coordinate (SA) at (0,1,{-tan(\a)+\h}); % Vertices of the
                                           % great axes of the ellipse
	\coordinate (SB) at (0,-1,{tan(\a)+\h});
	\coordinate (PA) at ({sin(\aa)},{cos(\aa)},{\h+\p});
	\coordinate (PB) at ({sin(\aa)},{cos(\aa)},{\h-\p});
	\coordinate (P) at ({sin(\aa)},{cos(\aa)},{-tan(\a)*cos(\aa)+\h});
     % Point on the ellipse on generatrix PA--PB

	\draw (A)--(B)--(C)--(D)--cycle;
	\draw (D)--(E)--(F)--(C);
	\draw (A)--(E) (B)--(F);
	\draw[blue,very thick] (SA)--(SB);

%	\coordinate (O) at (0,0,0);
%	\draw[->] (O)--(2.5,0,0)node[below left]{x};
%	\draw[->] (O)--(0,3,0)node[right]{y};
%	\draw[->] (O)--(0,0,6)node[left]{z};

	\foreach \t in {20,40,...,360}% generatrices
		\draw[magenta,dashed] ({cos(\t)},{sin(\t)},0)
      --({cos(\t)},{sin(\t)},{2.0*\h});
	\draw[magenta,very thick] (1,0,0) % lower circle
		\foreach \t in {5,10,...,360}
			{--({cos(\t)},{sin(\t)},0)}--cycle;
	\draw[magenta,very thick] (1,0,{2*\h}) % upper circle
		\foreach \t in {10,20,...,360}
			{--({cos(\t)},{sin(\t)},{2*\h})}--cycle;
	\fill[blue!15,draw=blue,very thick,opacity=0.5]
     (0,1,{\h-tan(\a)}) % elliptical section
		\foreach \t in {5,10,...,360}
			{--({sin(\t)},{cos(\t)},{-tan(\a)*cos(\t)+\h})}--cycle;

	\foreach \i in {-1,1}{%Spheres!
		\foreach \t in {0,15,...,165}% meridians
			{\draw[gray] ({cos(\t)},{sin(\t)},\h+\i*\p)
				\foreach \rho in {5,10,...,360}
					{--({cos(\t)*cos(\rho)},{sin(\t)*cos(\rho)},
          {sin(\rho)+\h+\i*\p})}--cycle;
			}
		\foreach \t in {-75,-60,...,75}% parallels
			{\draw[gray] ({cos(\t)},0,{sin(\t)+\h+\i*\p})
				\foreach \rho in {5,10,...,360}
					{--({cos(\t)*cos(\rho)},{cos(\t)*sin(\rho)},
          {sin(\t)+\h+\i*\p})}--cycle;
			}
				\draw[orange,very thick] (1,0,{\h+\i*\p})% Equators
		\foreach \t in {5,10,...,360}
			{--({cos(\t)},{sin(\t)},{\h+\i*\p})}--cycle;
		}
	\draw[red,very thick] (PA)--(PB);
	\draw[red,very thick] (FA)--(P)--(FB);
%	\fill[MyPoints] (CLS) circle (1pt);% center of lower sphere
%	\fill[MyPoints] (CUS) circle (1pt);% center of upper sphere
	\fill[MyPoints] (FA) circle (1pt)node[right]{$F_1$};
	\fill[MyPoints] (FB) circle (1pt)node[left]{$F_2$};
	\fill[MyPoints] (SA) circle (1pt);
	\fill[MyPoints] (SB) circle (1pt);
	\fill[MyPoints] (P) circle (1pt)node[below left]{$P$};
	\fill[MyPoints] (PA) circle (1pt)node[below right]{$P_1$};
	\fill[MyPoints] (PB) circle (1pt)node[above right]{$P_2$};
\end{tikzpicture}
\end{center}
	\caption[Dandelin spheres]{Dandelin spheres (source: Hugues Vermeiren)}
\end{figure}
We apologize to the reader who likes the Dandelin proof, but we do not want to lose time rewriting proofs just in another way, especially in the \LaTeX{} language script... (even if the proofs are small).

\subsubsection{Classification of conics by the determinant}\label{classification of conical by the determinant}
For the purposes of the classification of the partial differential equations, which we shall study in the section of Differential and Integral Calculus, let us return to the general equation of conics (with a factor $2$ on some terms, to simplify the developments that will follow):

with $(a,b,c)\neq (0,0,0)$.

We will try to classify the conics by using a purely matrix property, drawing inspiration from what has been seen previously.

As we know, any curve $\Gamma$ of $\mathbb{R}^2$ admitting an equation of the above form is named a "conic".

We have also shown that we can rewrite the equation above as:

with:

and let us recall that we have proved in the Linear Algebra chapter that if $A$ is symmetric, there exists an orthogonal matrix $S$ (thus satisfying $S^TS=\mathds{1}$) such that:

where $\lambda$ and $\mu$ are the eigenvalues of $A$.

Before proceeding, let us notice that, by using one of the properties of the determinant demonstrated in the section of Linear Algebra, we have:

because:

Therefore:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Had we not introduced the factor $2$ at the beginning, this would have been written:

This is why a lot of textbooks classify the conics by multiplying the determinant by $4$ and, to make an analogy with the discriminant of a polynomial (easier for the student to remember), they multiply it by $-1$, getting therefore the "\NewTerm{discriminant of conic sections}\index{discriminant of conic section}":

\end{tcolorbox}
Now let us observe what happens as a function of the value of this determinant, and therefore, by extension, as a function of the components of the matrix $A$.

There are four cases to consider, of which the last three are mutually exclusive:
\begin{itemize}
	\item Case of a non-null determinant $\det(A)\neq 0$:
	In the case where $\det(A)\neq 0$, the matrix $A$ is invertible (\SeeChapter{see demonstration Linear Algebra}).
In order to simplify the equation:

and make the linear term disappear, in order to obtain something well known to us, we are going to seek to translate the curve (the choice of starting with a translation being the most trivial one). By a small trial and error, we find that it is enough to translate each point $\vec{x}$ of the curve by a vector $\vec{y}$ worth:

We then have, since $(\vec{x}+\vec{y})\in \Gamma$:

After simplification we get, by using the associativity of multiplication, the distributivity of matrix addition, and by remembering that for a symmetric matrix (\SeeChapter{see section Linear Algebra page \pageref{symmetric matrix}}):

and also remembering that (\SeeChapter{see section Linear Algebra page \pageref{transposed matrix}}):

Then we have:

To continue, remember that $\vec{x}$ and $B$ are vectors; then $\vec{x}^TB$ and $B^T\vec{x}$ are scalars that are equal and therefore cancel. It then remains:

We can then say that the translated curve admits as equation:

where obviously:

or, what is explicitly equivalent:

We thus succeeded in simplifying the equation, in the case where $A$ is invertible, by removing the term in $\vec{x}$.

The type of curve (ellipse, parabola, hyperbola, etc.) is obviously not changed by a translation, which is why we consider the previous equation to be the simplest equation.

More generally, the type of curve is not changed by an orthogonal matrix (easy to see, because the image of an orthonormal basis by an orthogonal matrix is still an orthonormal basis). Thus, by choosing to apply to $\vec{x}$ an orthogonal transformation $S$ (which, for recall, is an orthogonal matrix), we are then led to write:

given that, for recall:

we deduce that:

In conclusion, we can assert that the curve defined by the equation above is of the same type as the original curve $\Gamma$.

	\item Case of a strictly positive determinant $\det(A)>0$:

	In the case where $\det(A)>0$, that is to say $ac-b^2>0$, or equivalently, as we mentioned in the previous remark, $b^2-4ac<0$, $\lambda$ and $\mu$ have the same sign since, by construction and for recall:

	The relation:

	still remains valid in this situation.

	But we have proved earlier how to interpret this equation depending on the value of $C_1$. That is, for recall (do not forget that we are working over the real numbers!):
	\begin{itemize}
		\item If $C_1$ has the same sign as $\lambda$ and $\mu$, the equation has no solution and $\Gamma=\varnothing$

		\item If $C_1=0$, we see that the equation reduces to a point

		\item If $C_1$ has a sign opposite to $\lambda$ and $\mu$, then we recognize the equation of an ellipse
	\end{itemize}

	\item Case of a strictly negative determinant $\det(A)<0$:

	In the case where $\det(A)<0$, that is to say $ac-b^2<0$, or equivalently, as we mentioned in the previous remark, $b^2-4ac>0$, $\lambda$ and $\mu$ have opposite signs since, by construction and for recall:

	The relation:

	still remains valid in this situation.

	But we have proved earlier how to interpret this equation depending on the value of $C_1$.
That is, for recall (do not forget that we are working over the real numbers!):
\begin{itemize}
	\item If $C_1=0$, then $\Gamma$ is the union of two secant lines

	\item If $C_1\neq 0$, we recognize the equation of a hyperbola
\end{itemize}

	\item Case of a null determinant $\det(A)=0$:

	In the case where $\det(A)=0$, that is to say $ac-b^2=0$, or equivalently $b^2-4ac=0$, one of the eigenvalues is equal to zero, say $\mu=0$ (only one of them, because we must have $A\neq 0$). As we know:

	and by the equation:

	we then have:

	Therefore:

	By developing explicitly, we get:

	The latter equation can be rewritten as:

	with obviously:

	If $e=0$ we can have three situations:
	\begin{itemize}
		\item If $C_2=0$, then $\Gamma$ is a straight line

		\item If $C_2$ and $\lambda$ have the same sign, then $\Gamma=\varnothing$

		\item If $C_2$ and $\lambda$ are of opposite signs, then $\Gamma$ is the union of two parallel distinct straight lines
	\end{itemize}
	If $e\neq 0$, then $\Gamma$ is a parabola.
\end{itemize}
Let us make a summary of this classification\label{type of conics determinant}:
\begin{itemize}
	\item If $\det(A)=ac-b^2>0$ (or $b^2-4ac<0$), then $\Gamma$ is either empty, reduced to a point, or an ellipse

	\item If $\det(A)=ac-b^2<0$ (or $b^2-4ac>0$), then $\Gamma$ is either the union of two secant straight lines, or a hyperbola

	\item If $\det(A)=ac-b^2=0$ (or $b^2-4ac=0$), then $\Gamma$ is either empty, a single straight line, two parallel straight lines, or a parabola
\end{itemize}
And we can now also introduce (considering $b=0$, just to simplify the conceptual representation for the human brain)\label{type of conics matrix approach}:
\begin{itemize}
	\item "\NewTerm{Positive-definite matrices}\index{positive-definite matrix}", which are such that:

	and that will look like ellipsoids. For this, $a$ and $c$ must both be positive, satisfying then $-4ac<0$.

	\item "\NewTerm{Positive-semidefinite matrices}\index{positive-semidefinite matrix}", which are such that:

	and that look like half-cylinders (see it as a surface generated by parabolas). For this we must have $a$ or $c$ equal to zero, satisfying then $-4ac=0$.
Obviously, we see that positive-definite matrices are then a special case of positive-semidefinite matrices (and we could likewise define negative-semidefinite matrices).

	\item "\NewTerm{Indefinite matrices}\index{indefinite matrix}", which are such that $\vec{x}^TA\vec{x}$ is negative or positive depending on the value of $\vec{x}$, since $a\neq 0$, $c\neq 0$ and $\mathrm{sgn}(a)\neq\mathrm{sgn}(c)$ (i.e. they must be of opposite signs and non-null), and that look like a saddle (a surface generated by hyperbolas).
\end{itemize}
\begin{figure}[H]
	\centering
	\includegraphics[width=\textwidth]{img/geometry/positive_postive_semidefinite_indefinite_matrices.jpg}
	\caption{Visual representation of positive-definite, positive-semidefinite and indefinite matrices}
\end{figure}

\pagebreak
\subsection{Parametrizations}
Parametrization is the process of deciding and defining the parameters necessary for a complete or relevant specification of a model or geometric object.

Parametrization is also the process of finding parametric equations of a curve, a surface or, more generally, a manifold or a variety defined by an implicit equation. The inverse process is called "\NewTerm{implicitization}\index{implicitization}".

Most often, parametrization is a mathematical process involving the identification of a complete set of effective coordinates or degrees of freedom of the system, process or model, without regard to their utility in some design. Parametrization of a line, surface or volume, for example, implies identification of a set of coordinates that allows one to uniquely identify any point (on the line, surface, or volume) with an ordered list of numbers. Each of the coordinates can be defined parametrically in the form of a parametric curve (one-dimensional) or a parametric equation (2+ dimensions).

Generally, the minimum number of parameters required to describe a model or geometric object is equal to its dimension, and the scope of the parameters (within their allowed ranges) is the parameter space.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Parametrizations are not unique. The ordinary three-dimensional object can be parametrized (or "coordinatized") equally efficiently with Cartesian coordinates, cylindrical polar coordinates, spherical coordinates or other coordinate systems (\SeeChapter{see section Vector Calculus page \pageref{system of coordinates}}).
\end{tcolorbox}
As noticed above, for some of the forms described below it is possible to select coordinates other than Cartesian, such as, for example, cylindrical or spherical coordinates, which are in some cases much simpler to work with.
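As a minimal illustration of the passage between an implicit equation and a parametrization (a sketch using only the trigonometric circle recalled earlier), consider the unit circle:

\[
x^2+y^2=1
\quad\Longleftrightarrow\quad
(x,y)=(\cos(t),\sin(t)),\qquad t\in[0,2\pi[
\]

The single parameter $t$ matches the dimension of the curve, and the parametrization is indeed not unique: for example $(\cos(2t),\sin(2t))$ with $t\in[0,\pi[$ traces the same circle.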
We will try, as far as possible, to present the most important ones.

\subsubsection{Equation of the Plane}\label{equation of the plane}
Given a plane $P$ for which we know a normal unit vector $\vec{n}(a,b,c)$ but not the equation, and $A(x,y,z)$ a point of the plane $P$.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If we do not have $\vec{n}$ but instead three points of the plane $P_1(x_1,y_1,z_1),P_2(x_2,y_2,z_2),P_3(x_3,y_3,z_3)$, we easily get $\vec{n}$ by calculating the cross product:

\end{tcolorbox}

For a point $M$ with coordinates $(x, y, z)$ to belong to the plane $P$, it is necessary and sufficient that the vectors $\overrightarrow{AM}$ and $\vec{n}$ be orthogonal.

So, given the vector $\overrightarrow{AM}$ of coordinates:

if $\overrightarrow{AM}$ is perpendicular to $\vec{n}$, then the dot product must be zero:

This can also be written as:

such that we get the general Cartesian equation of the plane:

This equation, where $(a,b,c)\neq (0,0,0)$, satisfied by the coordinates of any point $M(x,y,z)$ belonging to the plane $P$, is therefore named the "\NewTerm{Cartesian equation of the plane $P$}\index{Cartesian equation of a plane}".

If we write the equation with the direction cosines of $\vec{n}$ (\SeeChapter{see section Vector Calculus page \pageref{cosines directions}}), we therefore also have:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
To get a cube in space, we only need six planes delimited by conditions such as $x\leq...,y\leq...,z\leq...$.
\end{tcolorbox}	
It is relatively easy to go, case by case, from the Cartesian equation of the plane to the parametric equation of the plane. We can (with the usual precautions...) take again the equation:

rewrite it as follows:

and therefore the parametric equation of the plane in the space of dimension $3$ will be written:

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With Maple 4.00b:\\

\texttt{>a:=3:b:=-2:c:=1:d:=5:\\
>plot3d([x,y,(-d-a*x-b*y)/c],x=-2..2,y=-2..2, orientation=[-87,81],style=PATCH,
axes=NORMAL);
}
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/plane.jpg}
	\caption{Example of plane with Maple 4.00b}
\end{figure}
\end{tcolorbox}
\begin{theorem}
The vector $\vec{n}=(a,b,c)$ is normal\index{normal vector} to the plane of equation:

\end{theorem}
\begin{dem}
Take two points $P_1=(x_1,y_1,z_1)$ and $P_2=(x_2,y_2,z_2)$.
Then:

As $\vec{P}_2-\vec{P}_1=\overrightarrow{P_1P_2}$ is in the plane $ax+by+cz=d$, by subtracting both equations we have:

which is equivalent to:

This proves indeed that\label{vector normal plane}:

is perpendicular to the plane.
\begin{flushright}
	$\blacksquare$  Q.E.D.
\end{flushright}
\end{dem}
In Projective Geometry (see section of the same name page \pageref{projective geometry}), an important case to consider is the plane perpendicular to $\vec{n}=(1,1,1)$, which gives the isometric perspective, in which the edges of a cube keep equal apparent lengths (this plane being assimilated to the drawing table of the observer)\label{isometric plane}:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/isometric_perspective_plane.jpg}
\end{figure}

So, using the previous relation, we get:

Now, to calculate the angle between this plane and the $xy$-plane, we calculate the dot product between the respective normal vectors and subtract $\pi$, such that:

Thus:

Therefore:

Therefore the angle $\beta$ between the two planes is, in degrees (as it is usage in perspective geometry to work in degrees):

This is why perspective geometry angles are rounded to the famous $30^\circ$.

\pagebreak	
\subsubsection{Equation of the Straight Line}\label{equation of the straight line}
As we saw in the section of Functional Analysis, a straight line in the plane can be described by the function:

and we have also proved how to determine the equation of the mediator (perpendicular bisector) of two points in the plane (we had also mentioned that that proof belonged more to Analytical Geometry than to Functional Analysis).

The general Cartesian equation of the straight line is then simply given by:

or sometimes:

Indeed, simplifying, we fall back on the "\NewTerm{reduced Cartesian equation}\index{reduced Cartesian equation of a line}":

with, by definition:

We also saw in the section Calculus that straight lines with inequalities, rather than strict equalities, give the possibility to define special areas in the plane:
\begin{center}
\begin{tikzpicture}

    \draw[gray!50, thin, step=0.5] (-1,-3) grid (5,4);
    \draw[very thick,->] (-1,0) -- (5.2,0) node[right] {$x_1$};
    \draw[very thick,->] (0,-3) -- (0,4.2) node[above] {$x_2$};

    \foreach \x in {-1,...,5} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x};
    \foreach \y in {-3,...,4} \draw (-0.05,\y) -- (0.05,\y) node[right] {\tiny\y};

    \fill[blue!50!cyan,opacity=0.3] (8/3,1/3) -- (1,2) -- (13/3,11/3) -- cycle;

    \draw (-1,4) -- node[below,sloped] {\tiny$x_1+x_2\geq3$} (5,-2);
    \draw (1,-3) -- (3,1) -- node[below left,sloped] {\tiny$2x_1-x_2\leq5$} (4.5,4);
    \draw (-1,1) -- node[above,sloped] {\tiny$-x_1+2x_2\leq3$} (5,4);

\end{tikzpicture}
\end{center}
\textbf{Definition (\#\mydef):} We name "\NewTerm{direction vector}\index{direction vector}" of a line $(D)$ any non-zero vector in the same direction as the straight line.

Let us now prove two small friendly theorems:
\begin{theorem}
If a line has for equation $y=ax+b$, then the vector:

is a direction vector for this line.
\end{theorem}
\begin{dem}
Given $D:y=ax+b$ and $A, B$ two points of this line, taken such as ${A(0,b),B(1,a+b)}$.
Since $A, B$ are two points of $D$, $\overrightarrow{AB}$ is a direction vector of $D$; then:

Along the way, a small interesting corollary that has an application in physics!:

If a straight line $D_1$ has as direction vector:

and another straight line $D_2$ has as direction vector:

then their scalar product (\SeeChapter{see section Vector Calculus page \pageref{dot product}}) is zero:

which shows that two straight lines for which the product of the slopes (the second coordinates of the direction vectors) is equal to $-1$ are perpendicular!
\begin{flushright}
	$\blacksquare$  Q.E.D.
\end{flushright}
\end{dem}
\begin{theorem}
If a line has for equation $ax+by+c=0$, then the vector:

is a direction vector for this line.
\end{theorem}
\begin{dem}
Given:

therefore:

then the vector:

is a direction vector of $D$, and so is any vector:

with $\alpha\in \mathbb{R}$.
Thus, there are infinitely many ways to define the same line, because the line is composed of an infinite number of points (all of which can serve as an anchor point) and there is an infinity of multiples of the direction vector!
\begin{flushright}
	$\blacksquare$  Q.E.D.
\end{flushright}
\end{dem}

\pagebreak
\paragraph{Distance from a line to a point}\mbox{}\\\\
Often we seek the distance between a line and a point external to it. Thus, consider the following figure:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/point_to_line.jpg}
	\caption{Representation of the search of the distance from a point to a line}
\end{figure}
with $H$ the orthogonal projection of $A$ on the straight line $d$, $P$ an arbitrary point of the straight line $d$, and $\vec{n}$ any vector orthogonal (normal) to $d$.

We have (\SeeChapter{see section Vector Calculus page \pageref{dot product}}):

as $\alpha=0$ or $\alpha=\pi$. Therefore:

where $\delta$ is the distance (we cannot denote the distance with the letter $d$ as we did at the beginning of this chapter, otherwise there would be a confusion with the $d$ chosen to represent the straight line in this development).

So we get the relation:

Consider now explicitly the point $A(x_0,y_0)$ and the straight line of equation $d:ax+by+c=0$.

Let us choose a point $P\in d:P(0,-c/b)$ and a vector $\vec{n}\begin{pmatrix}a \\ b\end{pmatrix}$ normal to $d$. Indeed, as we have proved just before that:

is a direction vector of a straight line $D$, we have indeed:

and thus $\vec{v}$ and $\vec{n}$ are perpendicular.

Thus, by applying the previous relation, we have:

and therefore:

\paragraph{Line defined by the intersection of planes}\mbox{}\\\\
If we now consider two non-parallel planes in space, their intersection is a straight line. Indeed, consider two planes of respective equations:

and $D$ their straight line of intersection.

Obviously, a point $M(x_0,y_0,z_0)$ in space is located on the straight line $D$ if and only if the point $M$ satisfies the system of equations:

An example with images and numerical application is given in the section of Theoretical Computing.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
While in $\mathbb{R}^2$ a straight line is fully characterized by an equation of the type $ax+by+c=0$, in space a single equation of the form $ax+by+cz+d=0$ obviously characterizes a plane.
To characterize a straight line outside of the planes of the axes, it is necessary (parametric equation apart) to have two equations of planes.
\end{tcolorbox}	

\paragraph{Parametric equation of a line in $\mathbb{R}^3$}\mbox{}\\\\
It is relatively trivial (but we will still prove it) that the parametric equation of a straight line in $\mathbb{R}^3$ is a system of equations of the kind:

Thus, each component increases linearly with respect to the same variable, up to a constant and a factor. This can also be written in the (more traditional) vector form:

The vector $\vec{v}=(a,b,c)$ is obviously named the "direction vector".
\begin{dem}
So we have the system of equations (two equations with three unknowns, therefore one unknown will remain indeterminate):

Let us eliminate one of the variables (arbitrarily, we start with $z$):

where $\alpha=c/c'$, therefore:

So (it is a little silly to write it but...):

Similarly with $y$, such that $\beta=b/b'$, we have:

Therefore:

Finally we have:

The direction vector and the vector of ordinates have all constant components. This allows us to write, more generally:

\begin{flushright}
	$\blacksquare$  Q.E.D.
\end{flushright}
\end{dem}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} The equation of a straight line is almost one of the most important things in the synthesis of 3D images, because from it we can build polygons and assemble them to build more complex three-dimensional shapes.\\

\textbf{R2.} As already seen before with an example, to determine if a line is perpendicular to a plane we must take at least two intersecting lines in that plane, perform the cross product of their direction vectors, and then calculate the dot product between the result of the cross product and the straight line for which we seek the orthogonality. Indeed, a single straight line of the plane does not permit to determine the orientation of the latter; we need at least two straight lines.
\end{tcolorbox}

\subsubsection{Equation of a Square}
In 1992, Manuel Fernández Guasti introduced an algebraic equation for representing an intermediate shape between the circle and the square. His equation included a parameter $s$ that can be used to blend the circle and the square smoothly, thanks to a quite simple equation named the "\NewTerm{FG-squircle}\index{FG-squircle}\label{fg squircle}" and given by:

\[
x^2+y^2-\frac{s^2}{k^2}x^2y^2=k^2
\]

The squareness parameter $s$ can have any value between $0$ and $1$. When $s = 0$, the equation produces a circle with radius $k$. When $s = 1$, the equation produces a square with a side length of $2k$. In between, the equation produces a smooth curve that interpolates between the two shapes.

We will focus here on the case $k=s=1$, which gives therefore:

\[
x^2+y^2-x^2y^2=1
\]

and when $-1<x<1$ and $-1<y<1$ this corresponds to a square centred at the origin.
Indeed with Maple:

\texttt{>with(plots):\\
>implicitplot(x\string^2+y\string^2-x\string^2*y\string^2 = 1,x=-1..1,y=-1..1);
}
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/fg_squircle.jpg}
	\caption{FG-squircle representation in Maple 4.00}
\end{figure}

This implicit equation of the square will be useful to us to study the isomorphism of a circle and a square in the section of Topology.

\subsubsection{Equation of a Cycloid}\label{cycloid curve}
A cycloid is the curve traced by a point on the rim of a circular wheel as the wheel rolls along a straight line without slippage. It is an example of a roulette, a curve generated by a curve rolling on another curve.

The cycloid is an important curve in physics! Indeed, as we will prove in the section of Classical Mechanics, the cycloid with the cusps pointing upward is the curve of fastest descent under constant gravity, and is also the form of a curve for which the period of an object in descent on the curve does not depend on the object's starting position. It is also a mathematical curve on which we will fall back during our detailed study of the Friedmann-Lemaître-Robertson-Walker Universe Models in the section of Cosmology (based on the mathematics of General Relativity)!

\begin{center}
  \begin{tikzpicture}[scale=1.8]
  \coordinate (O) at (0,0);
  \coordinate (A) at (0,3);
  \def\r{1} % radius
  \def\c{1.4} % center
  \coordinate (C) at (\c, \r);


  \draw[-latex] (O) -- (A) node[anchor=south] {$y$};
  \draw[-latex] (O) -- (2.6*pi,0) node[anchor=west] {$x$};
  \draw[red,domain=-0.5*pi:2.5*pi,samples=50, line width=1] 
       plot ({\x - sin(\x r)},{1 - cos(\x r)});
  \draw[blue, line width=1] (C) circle (\r);
  \draw[] (C) circle (\r);

  % coordinate x 
  \def\x{0.4} % coordinate x
  \def\y{0.83} % coordinate y
  \def\xa{0.3} % coordinate x for arc left
  \def\ya{1.2} % coordinate y for arc left
  \coordinate (X) at (\x, 0 );
  \coordinate (Y) at (0, \y );
  \coordinate (XY) at (\x, \y );

  \node[anchor=north] at (X) {$x$} ;

  % draw center of circle
  \draw[fill=blue] (C) circle (1pt);

  % draw radius of the circle
  \draw[] (C) -- node[anchor=south] {\; $a$} (XY);

  % bottom of circle, radius to the bottom
  \coordinate (B) at (\c, 0);
  \draw[] (C) -- (B) node[anchor=north] {$a \, \theta$};

  % projections of point XY
  \draw[dotted] (XY) -- (X);
  \draw[dotted] (XY) -- (Y) node[anchor=east, xshift=1mm] {$\quad y$};

  % arc theta
  % start arc
  \coordinate (S) at (\c, 0.4);
  \draw[->] (S) arc (-90:-165:0.6);
  \node[xshift=-2mm, yshift=-2mm] at (C) {\scriptsize $\theta$};

  % arc above
  \coordinate (AA) at (\xa, \ya);
  \draw[-latex, rotate=25] (AA) arc (-220:-260:1.3);

  % arc below
  \def\xb{2.5} % coordinate x for arc bottom
  \def\yb{0.8} % coordinate y for arc bottom
  \coordinate (AB) at (\xb, \yb);
  \draw[-latex, rotate=-10] (AB) arc (-5:-45:1.3);

  % XY dot
  \draw[fill=black] (XY) circle (1pt);

  % top label
  \coordinate (T) at (pi, 2);
  \node[anchor=south] at (T)  {$(\pi a, 2 a )$} ;
  \draw[fill=black] (T) circle (1pt);

  % equations
  \coordinate (E) at ( 4,1.2);
  \coordinate (F) at ( 4,0.9);
  \node[] at (E) { $x=a(\theta - \sin(\theta))$};
  \node[] at (F) { $y=a(1 - \cos(\theta))$};

  % label 2pi a
  \coordinate (TPA) at (2*pi, 0);
  \node[anchor=north] at (TPA) {$2 \pi a$};

  \end{tikzpicture}
\end{center}
We see on the figure above that an arc of angle $\theta$ of the circle of radius $a$ has a length of $a\theta$. This corresponds to the abscissa distance in the range $[0,2\pi a]$.

But to get the $x$ value of the tracked point we must subtract the horizontal projection $a'$ of the radius $a$ on the abscissa, and as:

Therefore:

And the same procedure applies for $y$.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Notice that this means that if $\theta$ does a whole turn ($\theta=2\pi$), then the horizontal travelled distance will be $2\pi a$.
\end{tcolorbox}

So we finally get:

\[
x=a(\theta-\sin(\theta)),\qquad y=a(1-\cos(\theta))
\]

Notice that the complete arc length $L$ of one arch of the cycloid is given by:

\[
L=8a
\]

\subsubsection{Equation of an Epicycloid}
In geometry, an epicycloid is a plane curve produced by tracing the path of a chosen point on the circumference of a circle, named an "epicycle", which rolls without slipping around a fixed circle. It is a particular kind of roulette.
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/epicycloid_path.jpg}
	\caption[Epicycloid path]{Epicycloid path (source: Wikipedia, Sam Derbyshire)}
\end{figure}
As we will see in the section of Mechanical Engineering page \pageref{epicyclic gears}, the epicycloid is used to build efficient gearing geometry shapes!

For the needs of the section of Mechanical Engineering we need to determine the parametric equation of the epicycloid. For this, consider the following figure:
\begin{figure}[H]
	\centering
	\includegraphics[scale=0.6]{img/geometry/epicycloid_sketch_for_proof.jpg}
	\caption[Epicycloid sketch for proof]{Epicycloid sketch for proof (source: Wikipedia, Sam Derbyshire)}
\end{figure}
We assume that the position of $P$ is what we want to solve for, that $\alpha$ is the angle in radians from the tangential point to the moving point $P$, and that $\theta$ is the angle in radians from the starting point to the tangential point.

Since there is no sliding between the two circles, we have that:

By the definition of the radian (which is the ratio of arc over radius), we have that:

From these two conditions, we get the identity:

By rearranging, we get the relation between $\alpha$ and $\theta$, which is:

From the figure, we see that the position of the point $P$ is clearly given by:

\subsubsection{Equation of a Spiral}
A "\NewTerm{spiral}\index{spiral}" is a curve in the plane or in space which runs around a center in a special way. It is sometimes a useful shape in some industrial fields (robotics, watchmaking, text design, etc.).

We can make a spiral by two motions of a point: a uniform motion in a fixed direction and a motion in a circle with constant speed. Both motions start at the same point. This construction is named the "\NewTerm{Archimedean Spiral}\index{Archimedean Spiral}" (also known as the "arithmetic spiral").

We get the Archimedean Spiral by analogy with the parametric equations of the circle. Let us recall that the parametric equation of a circle is given by:

Let us rewrite it as follows:

where $a$ is a chosen constant.

Now, for the Archimedean Spiral, the idea is to make $R$ also proportional to $t$.
Therefore the parametric equation is:

\[
x(t)=at\cos(t),\qquad y(t)=at\sin(t)
\]

Below is an example of an Archimedean Spiral, with polar equation $r=a\theta$:
\begin{figure}[H]
	\centering
	\begin{tikzpicture}
    \begin{polaraxis}
      [no marks,samples=201,smooth,domain=0:4]
      \addplot+ (4*180*x,x);
    \end{polaraxis}
	\end{tikzpicture}
	\caption{Archimedean Spiral}
\end{figure}

The distances between the spiral branches are the same. More exactly: the distances between the intersection points along a line through the origin are the same.

\subsubsection{Equation of a Hypocycloid}
In geometry, a hypocycloid is a special plane curve generated by the trace of a fixed point on a small circle that rolls within a larger circle. It is comparable to the cycloid or the epicycloid, but instead of the circle rolling along a line, it rolls within a circle.
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/hipocycloid_path.jpg}
	\caption[Hypocycloid path]{Hypocycloid path (source: Wikipedia, Sam Derbyshire)}
\end{figure}
When we know the parametric equation of the epicycloid, that of the hypocycloid is immediate:

Now, for the section of Mechanical Engineering, it will help if the reader compares side by side an epicycloid and a hypocycloid:
\begin{figure}[H]
	\centering
	\begin{subfigure}{.4\textwidth}
	  \centering
	  \includegraphics[scale=0.9]{img/geometry/epicycloid.jpg}
	  \caption[Epicycloid]{Epicycloid (source: Wikipedia)}
	\end{subfigure}
	\begin{subfigure}{.4\textwidth}
	  \centering
	  \includegraphics[scale=1]{img/geometry/hipocycloid.jpg}
	  \caption{Hypocycloid}
	\end{subfigure}
\end{figure}

\pagebreak
\subsubsection{Surface of revolution}
More generally, many surfaces (including some of those we have seen before, such as the sphere, the torus and the cylinder) can be described by taking a primary form of smaller size and then revolving it by rotation.

\textbf{Definition (\#\mydef):} A "\NewTerm{surface of revolution}\index{surface of revolution}" is a surface obtained by rotating a plane curve (e.g. $z=f(x)$), named the "\NewTerm{generatrix}\index{generatrix}", around the $z$-axis (for example!).
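As a small sketch of the idea, with the generatrix and axis just named: every point of the rotated surface at distance $r=\sqrt{x^2+y^2}$ from the $z$-axis inherits the height of the generatrix at that distance, so the surface admits the implicit description:

\[
z=f\left(\sqrt{x^2+y^2}\right)
\]

For instance, the generatrix $z=x^2$ generates the paraboloid of revolution $z=x^2+y^2$, which we will meet again below.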
With such a rotation, we pass from a plane of $\mathbb{R}^2$ to a basis of $\mathbb{R}^3$; the $x$-axis then generates a plane, which has become the $y$O$z$ plane.
\begin{figure}[H]
	\centering
	\includegraphics[scale=0.85]{img/geometry/revolution_surface.jpg}
\end{figure}
Let us see seven famous cases:

\paragraph{Cone}\label{cone of revolution}\mbox{}\\\\
Consider a circular cone of vertex O (the origin) with a circle of center $(0,0, h)$ (where $h$ is a positive real number corresponding to the height of the cone) and radius $R$:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/cone.jpg}
	\caption{Representation of a cone}
\end{figure}
A parametric representation of the circle at the height $h$ is given, as we already proved earlier, by:

where $t$ belongs to the interval $[-\pi,\pi]$.

By extension, the parametric representation of the cone is:

which can also be written in the (more traditional) vector form:

In other words, the circle propagates linearly in all directions.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With Maple 4.00b:\\

\texttt{>r:=1:h:=4:\\
>plot3d([k*r*cos(t),k*r*sin(t),k*h],k=0..10,t=0..2*Pi,\\
orientation=[50,60],style=PATCH,axes=NORMAL);
}
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/cone_maple.jpg}
	\caption{Parametric representation of a cone with Maple 4.00b}
\end{figure}
\end{tcolorbox}
This suggests also the following Cartesian equation for this cone:

As we have $\tan(\alpha)=r/h$, we can write:

Therefore:

which is the Cartesian equation of a cone in space, which we will see again in the section of Special Relativity in our study of light cones.

\paragraph{Sphere}\label{sphere}\mbox{}\\\\
Consider the orthonormal basis ${\vec{i},\vec{j},\vec{k}}$ and the sphere $S^2$ of center $\Omega(a,b,c)$ and of radius $r$:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/sphere.jpg}
	\caption{Representation of a sphere}
\end{figure}
$M(x,y,z)$ belongs to the sphere $S^2$ of radius $r$ if and only if:

that is to say, by applying the Pythagorean theorem:

Hence the "\NewTerm{Cartesian equation of the sphere}\index{Cartesian equation of the sphere}" in the basis ${\vec{i},\vec{j},\vec{k}}$:

\[
(x-a)^2+(y-b)^2+(z-c)^2=r^2
\]

There is another way to describe the sphere, using the parametric equation.
Indeed, we saw in the section of Vector Calculus that the transformation from Cartesian coordinates to spherical coordinates is given by the following curvilinear coordinates:

Therefore we have well (by a change of variable this also represents a sphere not centered at the origin):

So the parametric equation of the sphere is indeed:

We fall back therefore on the Cartesian equation of a sphere, up to a given translation constant.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With Maple 4.00b, for a sphere of unit radius:\\

\texttt{>plot3d([sin(theta)*cos(phi),sin(theta)*sin(phi),cos(theta)],\\
theta=0..Pi,phi=-Pi..Pi,scaling=CONSTRAINED,orientation=[50,60],\\
style=PATCH,axes=NORMAL);
}
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/sphere_maple.jpg}
	\caption{Parametric representation of a sphere with Maple 4.00b}
\end{figure}
\end{tcolorbox}

\subparagraph{Great circle of a sphere}\mbox{}\\\\
We will introduce here a concept that will be useful to us when we study the geodesics of a sphere in the section of Analytical Mechanics.

\textbf{Definition (\#\mydef):}  A "\NewTerm{great circle}\index{great circle}", also known as an "\NewTerm{orthodrome}\index{orthodrome}" or "\NewTerm{Riemannian circle}\index{Riemannian circle}", of a sphere is the intersection of the sphere and a plane that passes through the center point of the sphere. This special case of a circle of a sphere is opposed to a "\NewTerm{small circle}\index{small circle}", the intersection of the sphere and a plane that does not pass through the center. Any diameter of any great circle coincides with a diameter of the sphere, and therefore all great circles have the same circumference as each other, and have the same center as the sphere. A great circle is the largest circle that can be drawn on any given sphere.
Every circle in Euclidean 3-space is a great circle of exactly one sphere.
\begin{figure}[H]
	\centering
	\includegraphics[scale=0.5]{img/geometry/great_circle.jpg}
	\caption{A great circle divides the sphere into two equal hemispheres}
\end{figure}
Let us now choose, using Euler angles, a unit normal vector to a plane passing through the origin:

where we have rotated our axes such that the component $n_2$ is equal to $0$ (as it is always possible to do so, whatever the plane we choose that passes through the origin!).

If we choose a sphere of radius $r=1$, such that its parametric equation is written as:

then, as the set of points that are at the intersection of the plane and the sphere are by construction such that:

we therefore get:

After rearranging:

Let us put:

and, using the definition of the cotangent:

This is therefore the "\NewTerm{equation of a great circle}\index{equation of a great circle}".

\subparagraph{$n$-Sphere volume}\mbox{}\\\\
Before we deal with the subject, we want to thank professor Howard Haber, who provided us with the major part of the \LaTeX{} source code of the text below, which will be useful to us in the sections of Statistical Mechanics (especially to derive the Sackur-Tetrode equation) and of General Relativity!

An $n$-dimensional hypersphere of radius $R$ consists of the locus of points such that the distance from the origin is less than or equal to $R$.

A point in an $n$-dimensional Euclidean space is designated by $(x_1\,,\,x_2\,,\,\ldots\,,\,x_n)$.

In equation form, the hypersphere corresponds then by definition to the set of points such that:

To compute the volume of this hypersphere, we simply integrate the infinitesimal volume element $\mathrm{d}V=\mathrm{d}x_1 \mathrm{d}x_2\cdots \mathrm{d}x_n$ over the region of $n$-dimensional space indicated by \ref{eqsphere}.  We wish to compute this volume $V_n(R)$. Explicitly:

The factor of $R^n$ is a consequence of dimensional analysis.

The surface area of the $n$-dimensional hypersphere defined above will be denoted in what follows by $S_{n-1}(R)$.

The surface of the hypersphere corresponds to the locus of points such that:

We can construct the volume $V_n(R)$ by adding infinitely thin spherical shells of radius $0\leq r\leq R$.  In equation form, this reads:

And as we know:

where we have used \ref{vn}.  Thus, the only remaining task is to compute $C_n$.

In order to obtain a better intuition of the meaning of $C_n$, let us equate the two expressions we have for $V_n(R)$, namely \ref{vn} and \ref{vds}.

In the latter, $S_{n-1}(r)$ is determined by \ref{sn}.  Thus:

This last relation is simply the evaluation of an $n$-dimensional integral either in rectangular coordinates or in hyperspherical coordinates.  In $n$ dimensions, we would write:

where $\mathrm{d}\Omega_{n-1}$ contains all of the angular factors.  For example, for $n=2$, $\mathrm{d}\Omega_1=\mathrm{d}\theta$;  for $n=3$, $\mathrm{d}\Omega_2=\sin\theta\, \mathrm{d}\theta\, \mathrm{d}\phi$, etc.

One could explicitly define the $n-1$ angular variables in $n$ dimensions.

However, if we are integrating over a spherically symmetric function, then we will simply integrate over $\mathrm{d}\Omega_{n-1}$.
Comparing \ref{polar} and \ref{omega}, we see that:

We will present a trick for computing $C_n$ without ever explicitly parametrizing the angles of the hyperspherical coordinate system.  For this purpose, consider the function:

If we integrate this function over the full $n$-dimensional space in both rectangular and hyperspherical coordinates, we obtain:

Since the integrand on the right-hand side depends only on $r$ (there is no angular dependence), we can immediately perform the integral over $\mathrm{d}\Omega_{n-1}$.   Using \ref{ncn}:

All the integrals above can be evaluated exactly using the Gauss integral (\SeeChapter{see section Statistics page \pageref{Gauss integral}}) and the Euler Gamma integral (\SeeChapter{see section Differential and Integral Calculus page \pageref{gamma euler function}}):

Inserting these results into \ref{intrel}, we get:

where we used the property of the Gamma function, $x\Gamma(x)=\Gamma(x+1)$, at the final step.  Solving for $C_n$, we get:

Although we chose a particular function $f(x_1\,,\,x_2\,,\,\ldots\,,\,x_n)$ to get the final result for $C_n$, it is clear that the value of $C_n$ is independent of this function.  After all, the defining equation for $C_n$ [\ref{ncn}] makes no reference to any function.  As a check, let us evaluate $n\,C_n$ for $n=2$ and $n=3$:

In the last computation, we used $\Gamma\left(\frac{5}{2}\right)=\frac{3}{2}\Gamma\left(\frac{3}{2}\right)=(\frac{3}{2})\,(\frac{1}{2})\,\Gamma\left(\frac{1}{2}\right)=\frac{3}{4}\sqrt{\pi}$.  Indeed, we have reproduced the correct values for the integration over the angular factors in two and three dimensions.

We are finally ready to compute the volume and surface area of an $n$-dimensional hypersphere.  Inserting \ref{cn} into \ref{vn} and \ref{sn}, we find:

The expression for $S_n(R)$ can be slightly simplified by writing $\Gamma\left(1+\dfrac{n}{2}\right)=\dfrac{n}{2}\,\Gamma\left(\dfrac{n}{2}\right)$. This yields:

You should check that you get the expected results for $n=2$ and $n=3$.  For example, $V_3(R)=\frac{4}{3}\pi R^3$ and $S_2(R)=4\pi R^2$.

Finally, it is amusing to note that $\lim_{n\to\infty} V_n(R)=0$ and $\lim_{n\to\infty} S_n(R)=0$.

To understand this peculiar behaviour, consider a hypercube in $n$-dimensional space measuring one unit on each side.  The total volume of this hypercube is $1$. We can fit a hypersphere of diameter $1$ (or radius $\frac{1}{2}$) inside the hypercube, such that the surface of the hypersphere just touches each of the walls of the hypercube. Then $1-V_n(\frac{1}{2})$ is the volume inside the cube but outside the hypersphere.

In particular, as $n$ becomes large, $1-V_n(\frac{1}{2})$ rapidly approaches $1$, which is consistent with the assertion that $\lim_{n\to\infty} V_n(R)=0$.  This simply means that as the number of dimensions becomes larger and larger, the amount of space outside the hypersphere (but inside the cube) becomes relatively more and more important.  This is already happening as you go from $2$ to $3$ dimensions.  So you can check your intuition by inscribing a circle in a unit square and a sphere in a unit cube and computing the total volume in three dimensions (area in two dimensions) outside the sphere (circle) but inside the cube (square).
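Carrying out this small check explicitly:

\[
V_2\left(\tfrac{1}{2}\right)=\pi\left(\tfrac{1}{2}\right)^2=\frac{\pi}{4}\approx 0.785,
\qquad
V_3\left(\tfrac{1}{2}\right)=\frac{4}{3}\pi\left(\tfrac{1}{2}\right)^3=\frac{\pi}{6}\approx 0.524
\]

so the inscribed disk fills about $78.5\%$ of the unit square, while the inscribed ball fills only about $52.4\%$ of the unit cube.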
If you take the ratio of the volumes (areas) of the sphere (circle) to that of the cube (square), this ratio actually \textit{decreases} as you go from $2$ dimensions to $3$ dimensions!

The table below also shows that the integer dimension for which the volume of the unit-radius hypersphere is maximal is $n = 5$.

\begin{table}[H]
	\centering
	\begin{tabular}{|c|c|c|}
	\rowcolor[HTML]{C0C0C0} 
	\textbf{Dimension} & \textbf{Volume} & \textbf{Volume at $\pmb{r=1}$} \\ \hline
	$2$ & $\pi r^2$ & $3.14159$ \\ \hline
	$3$ & $4/3\,\pi r^3$ & $4.18879$ \\ \hline
	$4$ & $1/2\, \pi^2 r^4$ & $4.93480$ \\ \hline
	$5$ & $8/15\, \pi^2 r^5$ & $5.26379$ \\ \hline
	$6$ & $1/6\, \pi^3 r^6$ & $5.16771$ \\ \hline
	$7$ & $16/105\, \pi^3 r^7$ & $4.72477$ \\ \hline
	$8$ & $1/24\, \pi^4 r^8$ & $4.05871$ \\ \hline
	$9$ & $32/945\, \pi^4 r^9$ & $3.29851$ \\ \hline
	$10$ & $1/120\, \pi^5 r^{10}$ & $2.55016$\\ \hline
	\end{tabular}
\end{table}

\pagebreak
\paragraph{Ellipsoid (spheroid)}\mbox{}\\\\
We saw in our study of conics that the Cartesian equation of an ellipse in the plane was given by:

with $a, b$ the two axes of the ellipse (the small and the large one).

Thus, without rigorous proof, we can verify by hand or using computers that the Cartesian equation:

is an ellipsoid:
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/ellipsoid.jpg}
	\caption{Representation of an ellipsoid}
\end{figure}
However, there is another way of describing an ellipsoid, using also curvilinear coordinates:

So the parametric equation of an ellipsoid will be:

We can see that this is simply the parametric equation of a sphere, but with different radii along the axes of the chosen basis.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With Maple 4.00b:\\

\texttt{>a:=100:b:=2:c:=20:\\
>plot3d([a*cos(theta)*cos(lambda),b*cos(theta)*sin(lambda),\\
c*sin(theta)],lambda=0..Pi,theta=-Pi..Pi,orientation=[50,60],\\
style=PATCH,axes=NORMAL);
}
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/ellipsoid_maple.jpg}
	\caption{Parametric representation of an ellipsoid with Maple 4.00b}
\end{figure}
\end{tcolorbox}
So we have:

hence:

Finally:

In astronomy (\SeeChapter{see section Astronomy page \pageref{astronomy}}) the ellipsoid is mainly named an "\NewTerm{oblate spheroid}\index{oblate spheroid}".
\begin{figure}[H]
	\centering
	\includegraphics{img/geometry/spheroid.jpg}
	\caption[Left oblate spheroid, right prolate spheroid]{Left oblate spheroid, right prolate spheroid (source: Wikipedia)}
\end{figure}
Technically, we have an oblate spheroid when $c<a$ and a prolate one when $c>a$. Obviously the case $a = c$ reduces to a sphere.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Because of the combined effects of gravity and rotation, the Earth's shape is not quite a sphere, but instead is slightly flattened in the direction of its axis of rotation. For that reason, in cartography the Earth is often approximated by an oblate spheroid instead of a sphere.
\n\t\\pagebreak\n\t\\paragraph{Cylinder}\\label{cylinder}\\mbox{}\\\\\\\\\n\tIt goes without saying that the parametric equation of a cylinder of radius $r$ is given by:\n\t\n\tWe can see that the components $x, y$ satisfy the Cartesian equation of a circle for every $z$ since:\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tWith Maple 4.00b a cylinder of radius $r=1$:\\\\\n\n\t\\texttt{>plot3d([cos(phi),sin(phi),z],phi=-Pi..Pi,z=0..2,orientation=[50,60],\\\\\n\tstyle=PATCH,axes=NORMAL);\n\t}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cylinder.jpg}\n\t\t\\caption{Parametric representation of a cylinder with Maple 4.00b}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\tIt is obvious that the parametric equation of an elliptical-base cylinder is given by:\n\t\n\twhich also satisfies the parametric equation of an ellipse in the plane:\n\t\n\t\n\t\\pagebreak\n\t\\paragraph{Paraboloid}\\mbox{}\\\\\\\\\n\tGiven the equation of the parabola around the $z$-axis:\n\t\n\twhere, as a reminder:\n\t\n\tWe obviously have (by cutting the paraboloid with a horizontal plane, which therefore gives a circle of radius $r$) the relation:\n\t\n\tnamed \"\\NewTerm{cylinder equation}\\index{cylinder equation}\". But we also have, by applying the Pythagorean theorem in the circle above:\n\t\n\tTherefore:\n\t\n\tHence we get the \"\\NewTerm{Cartesian equation of the paraboloid (of revolution)}\\index{Cartesian equation of the paraboloid }\":\n\t\n\tWe build the parametric equation of the paraboloid exactly in the same way as for the cone, with the difference that the evolution along the $z$-axis is not linear in a parameter $k$ but quadratic in it. Therefore we have:\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tWith Maple 4.00b a paraboloid with $h=1,p=1$:\\\\\n\n\t\\texttt{>p:=1:h:=1:\\\\\n\t>plot3d([k*1/(2*p)*cos(t),k*1/(2*p)*sin(t),k\\string^2*h],k=0..10,\\\\\n\tt=0..2*Pi,orientation=[50,60],style=PATCH,axes=NORMAL); \n\t}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/paraboloid.jpg}\n\t\t\\caption{Parametric representation of a paraboloid with Maple 4.00b}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\paragraph{Hyperboloid}\\label{hyperboloid}\\mbox{}\\\\\\\\\n\tGiven the linear function:\n\t\n\tthat we turn around the $z$-axis. As the line passes through the origin, the revolution around the $z$-axis is similar to two cones facing each other with their apex at the origin O. Therefore:\n\t\n\twhich obviously gives us:\n\t\n\t\\textbf{Definition (\\#\\mydef):} Any surface generated by a line is a \"\\NewTerm{ruled surface}\\index{ruled surface}\". More explicitly, a surface $S$ is ruled if through every point of $S$ there is a straight line that lies on $S$. A ruled surface $S$ is a curved surface which can be generated by the continuous motion of a straight line in space along a space curve named the \"\\NewTerm{directrix}\\index{directrix}\".
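\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tTo make this definition concrete, here is a minimal Maple check anticipating the one-sheet hyperboloid studied just below: with $a=b=c=1$ its Cartesian equation is $x^2+y^2-z^2=1$, and the whole straight line through the point $(\\cos v,\\sin v,0)$ with direction vector $(-\\sin v,\\cos v,1)$ (a standard choice of ruling, used here purely as an illustration) lies on the surface:\\\\\n\n\t\\texttt{>x:=cos(v)-t*sin(v):y:=sin(v)+t*cos(v):z:=t:\\\\\n>simplify(x\\string^2+y\\string^2-z\\string^2);\n\t}\n\t\n\tMaple returns $1$ for every $t$ and $v$: the line stays entirely on the surface, so the surface is indeed ruled.\n\t\\end{tcolorbox}\n\t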
Let us take an important example (nuclear power plant chimney shape, gears, etc.): the one-sheet hyperboloid of equation:\n\t\n\tIf we put $a=b=c=1$ we therefore have (a rotated line around the $z$-axis, but not going through the origin O):\n\t\n\twhich can also be written as the product of the equations of two lines such that:\n\t\n\twhich we can also rewrite with $k \\in \\mathbb{R}$:\n\t\n\tby identification:\n\t\n\tIt is not trivial, but these two lines belong to the same surface, and any point belonging to one of these two lines is contained in it. The figures below illustrate this fact well; every point belongs to these two lines:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/hyperbolic_ruled_surface.jpg}\n\t\t\\caption{Hyperbolic ruled surface}\n\t\\end{figure}\n\tWe could also describe this surface by circles such that (this is the traditional way on computers):\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/hyperbolic_circles.jpg}\n\t\t\\caption{Hyperboloid described by circles}\n\t\\end{figure}\n\tHere is also a very interesting figure that highlights the difference between a one-sheet and a two-sheet hyperboloid\\label{two sheets hyperboloid}.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.7]{img/geometry/hyperboloid_one_two_sheets.jpg}\n\t\t\\caption{Two hyperboloids (of one sheet and two sheets) which are asymptotic to an identical elliptic cone}\n\t\\end{figure}\n\t\n\tWe would now like to share a very nice and instructive summary of the most well-known quadric surfaces that we have studied so far, provided by OpenStax:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=1]{img/geometry/common_quadric_surfaces_01.jpg}\n\t\\end{figure}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=1]{img/geometry/common_quadric_surfaces_02.jpg}\n\t\t\\caption[Common conical surfaces]{Common conical surfaces (source: OpenStax)}\n\t\\end{figure}\n\t\n\t\n\t\n\t\\pagebreak\n\t\\paragraph{Torus}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} In geometry, a \"\\NewTerm{torus}\\index{torus}\\label{torus}\" is a surface of revolution generated by revolving a circle in three-dimensional space about an axis coplanar with the circle. If the axis of revolution does not touch the circle, the surface has a ring shape and is named a \"\\NewTerm{torus of revolution}\\index{torus of revolution}\".\n\t\n\tWe know that to generate a circle in the plane $x\\text{O}y$, a possible parametric equation is:\n\t\n\tTo offset this circle to the right (in the direction of the positive $x$), we just add a strictly positive constant to the $x$ component:\n\t\n\tIn space, to draw such a shifted circle in the $x\\text{O}z$-plane, we therefore have:\n\t\n\twhose traditional notation, in the context of the study of the torus, is:\n\t\n\tIf we want to generate a torus, we rotate this circle around the $z$-axis, making it follow a circle in the plane $x\\text{O}y$. We then have the \"parametric equation of the torus of revolution\":\n\t
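\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tBefore the Maple plot below, a quick sanity check of this parametrization (writing, as in that plot, $R$ for the large radius and $r$ for the small one): the distance from a point of the surface to the $z$-axis is $R+r\\cos(\\varphi)$, so every point lies at distance $r$ from the central circle of radius $R$:\\\\\n\n\t\\texttt{>x:=(R+r*cos(phi))*cos(theta):\\\\\n>y:=(R+r*cos(phi))*sin(theta):z:=r*sin(phi):\\\\\n>simplify(x\\string^2+y\\string^2);\\\\\n>simplify((sqrt(x\\string^2+y\\string^2)-R)\\string^2+z\\string^2,symbolic);\n\t}\n\t\n\twhich Maple reduces to $(R+r\\cos(\\varphi))^2$ and $r^2$ respectively, i.e. the implicit equation of the torus $\\left(\\sqrt{x^2+y^2}-R\\right)^2+z^2=r^2$.\n\t\\end{tcolorbox}\n\t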
\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tWith Maple 4.00b a torus with $r=1,R=4$:\\\\\n\n\t\\texttt{>r:=1:R:=4:\\\\\n\t>plot3d([(R+r*cos(phi))*cos(theta),(R+r*cos(phi))*sin(theta),r*sin(phi)],\\\\\n\ttheta=-Pi..Pi,phi=-Pi..Pi,scaling=CONSTRAINED,orientation=[50,60],\\\\\n\tstyle=PATCH,axes=NORMAL); \n\t}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/torus_maple.jpg}\n\t\t\\caption{Parametric representation of a torus with Maple 4.00b}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\t\\begin{flushright}\n\t\\begin{tabular}{l c}\n\t\\circled{80} & \\pbox{20cm}{\\score{4}{5} \\\\ {\\tiny 14 votes,  67.14\\%}} \n\t\\end{tabular} \n\t\\end{flushright}\n\t\n\t%to force start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Differential Geometry}\\label{differential geometry}\n\t\\lettrine[lines=4]{\\color{BrickRed}A}s we have already mentioned in the section of Non-Euclidean Geometry, differential geometry is the branch of geometry that aims to study the local and intrinsic properties (in the neighbourhood of a point) of curves and non-Euclidean surfaces (as a generalization of Euclidean surfaces!).\n\t\n\tDifferential geometry takes its name from the fact that it was born from the possibility of a kinematic interpretation that infinitesimal calculus brings to the study of curves. The topics that we will discuss here will also serve in the study of Classical Mechanics, as well as in Complex Analysis as applied to many areas of physics.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tBefore we tackle the very formal and abstract manner of addressing differential geometry with the tools of Topology (the usual method used by mathematicians), we have chosen to first present the essential elements in the most simple and enjoyable way, as is done in some engineering schools. Purists will therefore perhaps excuse us... or wait for something better...\n\t\\end{tcolorbox}\t\n\t\n\t\\subsection{Parametric Curves}\\label{parametric curves}\n\t\\textbf{Definition (\\#\\mydef):} We will identify the \"physical space\" with $\\mathbb{R}^3$ and will suppose that it includes a frame of reference $R=(\\text{O},\\vec{i},\\vec{j},\\vec{k})$; we will denote by $B$ the basis $(\\vec{i},\\vec{j},\\vec{k})$.\n\t\n\tLet us consider a set $I \\subset \\mathbb{R}$ and a function $\\Gamma=f(I)$ such that:\n\t\n\tthat is, parametric equations that define a group of quantities as functions of one or more independent \"parameters\".\n\t\n\tParametric curves are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labelled $t$. However, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience.\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} If $f$ is continuous, then $\\Gamma$ is a curve in space named \"curve in one piece\".\\\\\n\t\n\t\\textbf{R2.} A parabola or a sinusoid are curves named \"\\NewTerm{flat curves}\\index{flat curves}\". An ellipse or a circle are named \"\\NewTerm{closed planar curves}\\index{closed planar curves}\". For these examples, all points of the considered curves are located in a same plane. Conversely, a curve is named a \"\\NewTerm{left curve}\\index{left curve}\" if it is not so.\n\t\\end{tcolorbox}
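\n\tAs a quick illustration of this vocabulary, we can let Maple draw a closed planar curve and a left curve side by side (a small sketch; \\texttt{spacecurve} is provided by the \\texttt{plots} package):\\\\\n\n\t\\texttt{>with(plots):\\\\\n>spacecurve([cos(t),sin(t),0],t=0..2*Pi,axes=NORMAL);\\\\\n>spacecurve([cos(t),sin(t),t/5],t=0..4*Pi,axes=NORMAL);\n\t}\n\t\n\tThe first curve (a circle) stays in the plane $z=0$; the second one (a helix, which we will meet again below) leaves every fixed plane and is therefore a left curve.\n\t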
\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.7]{img/geometry/parametric_curve_parametric_surface.jpg}\n\t\t\\caption{Parametric curve and parametric surface}\n\t\\end{figure}\n\tLet us choose $t_0 \\in I$ and put $M_0=f(t_0)$, which we will denote by $M(t_0)$; then we can state the following definition: the pair $(f, I)$, where $f$ is a continuous function, is named a \"\\NewTerm{parametric arc}\\index{parametric arc}\". $\\Gamma$ is named the \"\\NewTerm{support}\\index{support}\" of $(f, I)$ and $t_0$ is an \"\\NewTerm{origin}\\index{origin}\" of $(f, I)$.\n\t\n\tNote that parametric representations are generally non-unique, so the same quantities may be expressed by a number of different parametrizations.\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} Abusively, and more frequently, we also say that $(f, I)$ is a \"parametrization\" of $\\Gamma$.\\\\\n\t\n\t\\textbf{R2.} It is easy to define other parametrized arcs also admitting $\\Gamma$ as support. To do this, it is sufficient to give ourselves a bijective function $\\varphi$ from $I$ to $J\\subset \\mathbb{R}$ and to consider the parametrization $x\\mapsto f(\\varphi^{-1}(x))$.\n\t\\end{tcolorbox}\n\tBefore continuing, remember that in differential geometry the \"\\NewTerm{curvilinear abscissa}\\index{curvilinear abscissa}\" or \"\\NewTerm{line-element}\\index{line-element}\" is a kind of algebraic variation of the length of an arc (it is therefore the analogue, on a curve, of the abscissa on an oriented straight line).\n\t\n\tLet us now consider the following curvilinear abscissa (\\SeeChapter{see section Special Relativity page \\pageref{interval invariant}}):\n\t\n\twe have already seen that in a canonical Euclidean space (in $\\mathbb{R}^n$) the curvilinear abscissa is then written:\n\t\n\twith $i,j=1,2,3$ and, as we have $\\delta_{ij}=0,i\\neq j$, it remains in $\\mathbb{R}^3$:\n\t\n\tIn the Cartesian system:\n\t\n\tso it comes:\n\t\n\twhich is therefore the linear differential element of a Euclidean space (the shortest path or the \"\\NewTerm{geodesic}\\index{geodesic}\" or \"\\NewTerm{differential curvilinear abscissa}\\index{differential curvilinear abscissa}\") that we have already met many times in various sections of this book. So this is nothing new or surprising!\n\t\n\tIf we restrict ourselves to the plane, the differential curvilinear abscissa of a plane curve is then obviously:\n\t\n\tWe already know how to use this equation (we used it in the section of Analytical Mechanics). But as a reminder never hurts, let us do examples with a straight line, a parabola and an ellipse (the choice is not innocent!).\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1. Consider the general equation of a line in the plane (as a reminder, it is not a curved line but a straight flat one!):\n\t\n\tIt then comes immediately:\n\t\n\tTherefore:\n\t\n\tE2. Consider now the following equation of a parabola in the plane:\n\t\n\tIt then comes immediately:\n\t\n\tTherefore:\n\t
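\n\tAs a numerical cross-check of E1 and E2 (taking, for concreteness, a line of slope $a$ and assuming the parabola $y=x^2$, both over $x\\in[0,1]$), we can let Maple integrate the differential curvilinear abscissa $\\mathrm{d}s=\\sqrt{1+y'^2}\\,\\mathrm{d}x$ directly:\\\\\n\n\t\\texttt{>int(sqrt(1+a\\string^2),x=0..1);\\\\\n>evalf(Int(sqrt(1+4*x\\string^2),x=0..1));\n\t}\n\t\n\twhich returns $\\sqrt{1+a^2}$ and approximately $1.4789$, respectively.\\\\\n\t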
E3. Consider the equation of an ellipse (\\SeeChapter{see section Analytical Geometry page \\pageref{analytical expression ellipse}}) written as:\n\t\n\tafter rearrangement:\n\t\n\tIt then comes immediately:\n\t\n\tHence:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tNotice that by making a Maclaurin approximation (around $x=0$, which corresponds to the study of the pole of the ellipse) we get (\\SeeChapter{see section Sequences and Series page \\pageref{usual maclaurin developments}}):\n\t\n\tFollowing the request of a reader, here are the details of the development of the previous result. First recall the Taylor series (\\SeeChapter{see section Sequences and Series page \\pageref{taylor series}}):\n\t\n\tIf we put $x_0=0$, we get the Maclaurin series:\n\t\n\tTherefore, proceeding in two stages, we get:\n\t\n\t\\end{tcolorbox}\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tby putting $x_0=0$ we get:\n\t\n\tIt then comes immediately:\n\t\n\tTherefore:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tWe then see that the curvilinear abscissa of an ellipse in the plane becomes that of a parabola when we make a Maclaurin series expansion of the equation of the ellipse at the pole.\\\\\n\t\n\tWe could do the same with a hyperbola and end up with the same form of differential curvilinear abscissa, usually denoted by tradition:\n\t\n\twhere $k_x$ is named the \"\\NewTerm{osculator parameter of the parabola}\\index{osculator parameter of the parabola}\".\n\t\\end{tcolorbox}\n\t\\label{curvature parameter} What is very important to notice with the developments above (we could redo them identically for the hyperbola, which is just the elliptic case with a different sign, and can detail this on readers' request) is that we finally have (remember that the ellipse is a general case of the circle!):\n\t\n\twhere the last three cases are generally denoted:\n\t\n\tWe then understand better why many books on General Relativity or Differential Geometry say that:\n\t\\begin{itemize}\n\t\t\\item When $K=0$ all cases reduce to a flat space\n\t\t\n\t\t\\item When $K=+1$ (or more generally just positive) we are in a spherical space (in fact it is ellipsoidal)\n\t\t\n\t\t\\item When $K=-1$ (or more generally just negative) we are in a hyperbolic space\n\t\\end{itemize}\n\tThese examples being closed, let us continue with the theory. We can obviously rewrite our differential curvilinear abscissa, by dividing both sides of the equality by $\\mathrm{d}t$, as:\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\t\\label{curvilinear abscissa helix}Let us see an application of the parametrized differential curvilinear abscissa with a helix (the examples are pretty in differential geometry and therefore nice to see...), which is a typical left curve:\\\\\n\t\n\tLet $t,r,h\\in \\mathbb{R},t>0,r>0,h>0$ and the function:\n\t\n\twith $M(x,y,z)$ and parametric coordinates:\n\t\n\tWe then have with Maple 4.00b, by taking $r$ and $h$ as equal to $1$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.6]{img/geometry/helix.jpg}\n\t\t\\caption{Parametric representation of a helix with Maple 4.00b}\n\t\\end{figure}\n\tThe function $f$ is a parametric arc whose support is named a \"\\NewTerm{helix}\\index{helix}\", $r$ is the radius and $h$ is the step. Taking $t_0=0$ as the origin, the curvilinear abscissa of this helix (one piece) is given by:\n\t\n\tTherefore:\n\t\n\tand hence by integration:\n\t\n\t\\end{tcolorbox}
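\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tAs a cross-check, assuming the usual parametric coordinates $(r\\cos(t),r\\sin(t),ht)$ for this helix, Maple recovers the same arc length:\\\\\n\n\t\\texttt{>x:=r*cos(t):y:=r*sin(t):z:=h*t:\\\\\n>simplify(sqrt(diff(x,t)\\string^2+diff(y,t)\\string^2+diff(z,t)\\string^2));\\\\\n>int(sqrt(r\\string^2+h\\string^2),t=0..T);\n\t}\n\t\n\tThe speed $\\Vert f'(t)\\Vert$ is the constant $\\sqrt{r^2+h^2}$, so the curvilinear abscissa grows linearly: $s(T)=T\\sqrt{r^2+h^2}$.\n\t\\end{tcolorbox}\n\t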
\n\t\\subsection{Isolines}\\label{isoline}\n\tLet us now look at isolines, a very important topic in mathematics but also in medical engineering, astrophysics and meteorology (among many other areas).\n\t\n\tBefore addressing the subject in a purely mathematical form, we suggest that the reader open MATLAB\u2122 5.0.0.473 (we have also done pretty much the same example with Maple 4.00b in the section of Functional Analysis) and type in it:\n\t\n\t\\texttt{>>[xx,yy,z]=peaks;\\\\\n\t>>figure(1);mesh(xx,yy,z);title('peaks')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_01.jpg}\n\t\t\\caption[]{Initial plot in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\tthen, for aesthetic reasons, to write:\n\t\n\t\\texttt{>>figure(2);surf(xx,yy,z);title('surf')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_02.jpg}\n\t\t\\caption[]{Plot rendering modification in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\t\n\tThen we would like MATLAB\u2122 to plot for us some level curves (the points where the value of the function $f(x, y)$ is constant), named by mathematicians \"\\NewTerm{isolines}\\index{isoline}\" or \"\\NewTerm{iso-level curves}\\index{iso-level curves}\". For this we must write:\n\t\n\t\\texttt{>>figure(3);contour3(xx,yy,z);title('contour')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_03.jpg}\n\t\t\\caption{Plot of the isolines in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\tWe will then ask the software to project the isolines onto the $X, Y$ plane. This is done with:\n\t\n\t\\texttt{>>figure(3);contour(xx,yy,z);title('contour')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_04.jpg}\n\t\t\\caption{Projection of the isolines in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\tAnd it is these curves that will interest us. We would like to determine their algebraic expression in the plane. 
But first let's have fun with MATLAB\u2122 by writing again:\n\t\n\t\\texttt{>>figure(3);contour(xx,yy,z);title('contour')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_05.jpg}\n\t\t\\caption{Plane representation of isolines with coloured gradients \\\\in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\t\n\tbut we can do even better by removing the grid with the command:\n\t\n\t\\texttt{>>shading interp}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isoclines_plot_06.jpg}\n\t\t\\caption[]{Plane representation of isolines with coloured gradients \\\\and without grid in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\tThen, without closing the above, now add the line:\n\t\n\t\n\t\\texttt{>>hold on; contour(xx,yy,z,'k')}\n\t\n\tWe get:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[]{img/geometry/isoclines_plot_07.jpg}\n\t\t\\caption{Isolines overlaid on the colour gradients \\\\in MATLAB\u2122 5.0.0.473}\n\t\\end{figure}\n\t\\textbf{Definition (\\#\\mydef):} In mathematics, a level set of a real-valued function $f$ of $n$ real variables is a set of the form:\n\t\n\tthat is, a set where the function takes on a given constant value $c^{te}$.\n\t\n\tWhen the number of variables is two, a level set is generically a curve, named a \"\\NewTerm{level curve}\\index{level curve}\", \"\\NewTerm{contour line}\\index{contour line}\", or \"\\NewTerm{isoline}\\index{isoline}\". So a level curve is the set of all real-valued solutions of an equation in two variables $x_1$ and $x_2$. When $n = 3$, a level set is named a \"\\NewTerm{level surface}\\index{level surface}\" or \"\\NewTerm{isosurface}\\index{isosurface}\", and for higher values of $n$ the level set is named a \"\\NewTerm{hypersurface}\\index{hypersurface}\". So a \"\\NewTerm{level surface}\" is the set of all real-valued roots of an equation in three variables $x_1, x_2$ and $x_3$, and a \"\\NewTerm{level hypersurface}\\index{level hypersurface}\" is the set of all real-valued roots of an equation in $n\\; (n > 3)$ variables.\n\t\n\tTo determine the equation of the isolines, let us now consider a bivariate function $f(x,y)$ that we will require to be differentiable on $\\mathbb{R}^2$.\n\t\n\tThe relation:\n\t\n\tdefines a plane curve $\\Gamma$, therefore named an \"isoline\". It is a curve such that, when $x$ varies, $y$ varies in exactly the way needed for $f$ to remain constant.\n\t\n\tWe saw in the section of Differential and Integral Calculus that the differential of $f$, for any infinitesimal variations of $x$ and $y$, is:\n\t\n\tNow, if we want the value of the function $f$ not to change when $x$ varies by $\\mathrm{d}x$, then $\\mathrm{d}y$ cannot be arbitrary: it must be such that the variation $\\mathrm{d}f$ is zero. In other words:\n\t\n\talong $\\Gamma$. 
This equation seems useless as such, but it fixes the ratio of the derivatives, i.e. the slope of the isoline in the plane:\n\t\n\twhich gives us the slope of the tangent to $\\Gamma$ and thus, after integration, the desired function itself!!!\n\t\n\tIt goes without saying that the tangent vector to the curve $\\Gamma$ is a vector parallel to the one having for components (by correspondence with the above relation):\n\t\n\tthat we will denote by:\n\t\n\tAlso recall that the gradient is given by (\\SeeChapter{see section Vector Calculus page \\pageref{gradient scalar field}}):\n\t\n\tWe note that the latter two vectors are perpendicular (a result which will be very useful in the section of Complex Analysis). Indeed, calculating the dot product (\\SeeChapter{see section Vector Calculus page \\pageref{dot product}}):\n\t\n\tIn other words, the vector $\\vec{\\nabla}(f)$ defines the lines orthogonal to the curve $\\Gamma$.\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tConsider the equation of a special paraboloid in $\\mathbb{R}^3$:\n\t\n\tSo we have the isolines, which are given by:\n\t\n\thence their equation in the plane:\n\t\n\tThat is to say, circles in the plane whose radius is equal to the square root of the constant selected for the height $z$ of $f$!\n\t\n\tLet us now calculate the slope of the tangent $\\Gamma$ to these circles:\n\t\n\tThis is consistent with the simple derivative of:\n\t\n\tWe also have:\n\t\n\tWe see that on $x=0$ this vector is equal to:\n\t\n\twhich is intuitively consistent with the vector tangent to the circle that we have at this point of the $x$-axis.\n\t\\end{tcolorbox}
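\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tA compact Maple verification of this example (assuming, as the circular isolines suggest, the paraboloid $f(x,y)=x^2+y^2$): the slope of the isolines and the perpendicularity of the tangent vector to the gradient come out in two lines:\\\\\n\n\t\\texttt{>f:=x\\string^2+y\\string^2:\\\\\n>dydx:=-diff(f,x)/diff(f,y);\\\\\n>simplify(-diff(f,y)*diff(f,x)+diff(f,x)*diff(f,y));\n\t}\n\t\n\tMaple returns $-x/y$, the implicit slope of the circles $x^2+y^2=c^{te}$, and $0$ for the dot product of the tangent vector $(-\\partial f/\\partial y,\\partial f/\\partial x)$ with the gradient $(\\partial f/\\partial x,\\partial f/\\partial y)$.\n\t\\end{tcolorbox}\n\t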
\n\t\\pagebreak\n\t\\subsection{Frenet Frame}\\label{frenet frame}\n\tOne of the most important tools used to analyse a curve is the Frenet frame, a moving frame that provides a coordinate system at each point of the curve that is \"best adapted\" to the curve near that point.\n\t\n\tThe Frenet frame is a tool for the study of the local behaviour of curves. More precisely, it is a local coordinate system associated with a point describing a curve $\\Gamma$. Its construction method differs depending on whether the ambient space is $2$-dimensional (plane curve) or $3$-dimensional (left curve).\n\t\n\tThe Frenet frame, and the Frenet formulas (giving the derivatives of the vectors of this frame), give the opportunity to conduct in a systematic way calculations of the curvature and twisting of left curves, and to introduce interesting geometric concepts associated with curves.\n\t\n\tLet us first consider a curve $\\Gamma$ with its curvilinear abscissa $s(t)$ and $M_0=s(t_0)$ its origin. We denote by definition:\n\t\n\tthe tangent to the parametrized curve $\\Gamma$ of parameter $t$ near a point $M$, with respect to a frame placed at O, with $\\mathrm{d}s$ calculated as we have shown previously.\n\t\n\tIt is interesting to note that if $t$ is interpreted as being the time (do not forget that this book is mainly targeted at engineering applications), then we get a tangential speed:\n\t\n\tand thus the $\\vec{T}$ vector is obviously directed in the direction of movement.\n\t\n\tMoreover, by construction and definition of the curvilinear abscissa, we always have:\n\t\n\tand therefore the tangent vector $\\vec{T}$ at the point $M$ is unitary (and not zero!).\n\t\n\tNow, without knowing yet exactly what it will be useful for, let us look closer at the vector:\n\t\n\tKnowing trivially from the foregoing that:\n\t\n\tThen we have:\n\t\n\ttherefore, first, $\\mathrm{d}\\vec{T}/\\mathrm{d}s$ is in general not unitary, and $\\vec{T}$ is perpendicular to it (results that will serve us several times afterwards, so they must be remembered!).\n\t\n\tTherefore:\n\t\n\t\n\tLet us put:\n\t\n\tGiven the previous result, $\\vec{N}$ is the unit vector perpendicular to $\\vec{T}$ at $M$, named the \"\\NewTerm{curvature vector}\\index{curvature vector}\". We say that this couple of vectors $\\left(\\vec{T},\\vec{N}\\right)$ is \"\\NewTerm{direct orthonormal}\\index{direct orthonormal vectors}\" and $C$ is by definition the \"curvature\".\n\t\n\tWe can also approach the curvature $C$ in a more geometric way, rather than through the previous formal and abstract definition. Let us see how:\n\t\n\tWe know at this step of our study that at a point $M_0$ of a curve $\\Gamma$ (differentiable at least once at every point, that is to say of class $\\mathcal{C}^1$), there is a non-zero vector $\\vec{T}$ which is tangent.\n\t\n\tAt any neighbouring point $M$ (of curvilinear abscissa $s$), the tangent vector $\\vec{T}$ can be written in approximation:\n\t\n\tif the curve is locally in the same plane (for now we are studying the curvature and not the twisting of a curve)!\n\t\n\tThe two normals at $M$ and $M_0$ thus intersect at a point $\\Omega$; the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/curvature_tangent_vector_decomposition.jpg}\n\t\t\\caption{Decomposition of the tangent vector of the curve path}\n\t\\end{figure}\n\tshows that at first order in $\\mathrm{d}s$, the point $M$ can be considered locally as deduced from the point $M_0$ by a rotation of center $\\Omega$.\n\t\n\tThe circle thus defined, of radius:\n\t\n\tis the one that is locally the best tangent to the curve at the point $M_0$. Its radius is derived from the figure (two similar triangles at the limit):\n\t\n\thence, since $\\vec{T}$ is unitary, the definition of the value of the curvature for a plane curve:\n\t\n\tand that's it!\n\t\n\tIt is possible to interpret the concept of curvature as the speed of rotation of the Frenet frame relative to a fixed direction.\n\t\n\tThe pair of vectors $(\\vec{T}, \\vec{N})$ is named a \"\\NewTerm{Frenet frame}\\index{Frenet Frame}\" and the basis vectors the \"\\NewTerm{Frenet vectors}\\index{Frenet vectors}\".\n\t\n\tThe Frenet frame is a moving frame and the elements of this frame change depending on the point in question. In physics, we must not confuse it with the frame of reference: the Frenet vectors move with the point!
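\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tA minimal Maple check on the circle of radius $r$, assuming the standard arc-length parametrization $M(s)=(r\\cos(s/r),r\\sin(s/r))$: $\\vec{T}$ is unitary, $\\mathrm{d}\\vec{T}/\\mathrm{d}s$ is perpendicular to it, and the curvature is $1/r$, as intuition demands:\\\\\n\n\t\\texttt{>x:=r*cos(s/r):y:=r*sin(s/r):\\\\\n>T:=[diff(x,s),diff(y,s)]:\\\\\n>simplify(T[1]\\string^2+T[2]\\string^2);\\\\\n>simplify(T[1]*diff(T[1],s)+T[2]*diff(T[2],s));\\\\\n>simplify(sqrt(diff(T[1],s)\\string^2+diff(T[2],s)\\string^2),symbolic);\n\t}\n\t\n\tMaple returns $1$, $0$ and $1/r$: the curvature of a circle is constant and its curvature radius is indeed $R=r$.\n\t\\end{tcolorbox}\n\t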
\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe definition of $C$ above holds in the context of a choice of positive curvature. This convention is taken from a mechanical point of view, but it is not necessary in pure mathematics.\n\t\\end{tcolorbox}\t\n\tIf $C\\neq 0$, then as seen above we can now write:\n\t\n\twhere $R$ is named the \"\\NewTerm{curvature radius}\\index{curvature radius}\\label{curvature radius}\".\n\t\n\tThe relation:\n\t\n\tis named the \"\\NewTerm{first Frenet formula}\\index{first Frenet formula}\\label{first Frenet formula}\" and obviously shows that $\\vec{N}$ and $\\mathrm{d}\\vec{T}/\\mathrm{d}s$ are collinear, and hence their cross product is zero (a result that will be used later).\n\t\n\tThese relations are justified by the analogy with mechanics. Indeed, remember that we have shown above that:\n\t\n\t\n\tLet us now calculate the acceleration:\n\t\n\twe fall back on the result obtained in the section of Classical Mechanics in our study of the osculating plane.\n\t\n\tTo give a more accurate geometric interpretation of the curvature, we first define, through $\\Omega$, the \"\\NewTerm{center of curvature}\\index{center of curvature}\\label{center of curvature}\" of the \"\\NewTerm{osculating circle}\\index{osculating circle}\" (placed in the osculating plane) or \"\\NewTerm{curvature circle}\\index{curvature circle}\" of radius $R$, the circle that is locally the best tangent to the curve $\\Gamma$ in the Frenet frame:\n\t\n\tTo clarify geometrically what the osculating circle is, take a curve and a point $M$ on that curve. Then draw the normal at this point of the locally plane curve and take a point $\\Omega$ on the normal. The circle of center $\\Omega$ passing through the point $M$ is then tangent to the curve. But all the circles tangent to the curve are not tangent in the same way! Indeed, if $\\Omega$ is far from $M$, the circle will be located rather outside of the curve (blue circle in the figure below). If $\\Omega$ is close to $M$, the circle will be located rather inside of the curve (pink circle in the figure below). The limit radius between being \"inside the curve\" and being \"outside the curve\" is conventionally the \"radius of curvature\" we defined above. The circle of that radius is then the famous \"osculating circle\"!\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/osculating_circle.jpg}\n\t\t\\caption{Representation of the osculating circle}\n\t\\end{figure}\n\tIn the particular case where $\\vec{T}$ is a constant vector we obviously have:\n\t\n\tand therefore $C=0$, implying that $R$ is no longer defined. We sometimes say in this case that the radius of curvature of $\\Gamma$ is infinite (a straight line then has a zero curvature at any point).\n\t\n\tLet us now consider the vector perpendicular to the osculating plane, defined and denoted by:\n\t\n\tWe can already say, since $\\vec{T}$ and $\\vec{N}$ are of unit norm, that $\\vec{B}$ is also of unit norm (which will serve us later)!\n\t\n\tWe can see that $\\mathrm{d}\\vec{B}/\\mathrm{d}s$ is obviously orthogonal to $\\vec{T}$. 
Indeed:\n\t\n\twhere we took the particular case $\\mathrm{d}\\vec{T}/\\mathrm{d}s=\\vec{0}$ (but in any case, in general, $\\mathrm{d}\\vec{T}/\\mathrm{d}s$ and $\\vec{N}$ are collinear, as we proved earlier, and therefore the cross product of these two vectors is always zero).\n\t\n\tMoreover, since $\\vec{B}$ is of unit norm, $\\mathrm{d}\\vec{B}/\\mathrm{d}s$ is also perpendicular to $\\vec{B}$; this can be proven using exactly the same trick as when we proved that $\\mathrm{d}\\vec{T}/\\mathrm{d}s$ is perpendicular to $\\vec{T}$.\n\t\n\tBeing perpendicular to both $\\vec{T}$ and $\\vec{B}$, the vector $\\mathrm{d}\\vec{B}/\\mathrm{d}s$ is therefore collinear with $\\vec{N}$.\n\t\n\tTherefore:\n\t\n\t\n\tLet us put:\n\t\n\tThis relation is the \"\\NewTerm{second Frenet formula}\\index{second Frenet formula}\" where, by definition, $\\vec{B}$ is the \"\\NewTerm{bi-normal vector}\\index{bi-normal vector}\" of $\\Gamma$ at the point $M$, $\\tau$ the \"\\NewTerm{twist}\\index{twist}\" and $R'$ the \"\\NewTerm{torsion radius}\\index{torsion radius}\".\n\t\n\tWe can now establish the third Frenet formula. For this we start from:\n\t\n\tfrom which we get:\n\t\n\tBut, by the properties of the cross product (\\SeeChapter{see section Vector Calculus page \\pageref{cross product}}):\n\t\n\thence the \"\\NewTerm{third Frenet formula}\\index{third Frenet formula}\":\n\t\n\tWe name \"\\NewTerm{Frenet trihedron}\\index{Frenet trihedron}\" or \"\\NewTerm{Frenet-Serret frame}\\index{Frenet-Serret frame}\" associated to the curve $\\Gamma$ at the point $M$ the natural orthonormal space frame $(M,\\vec{T},\\vec{N},\\vec{B})$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.75]{img/geometry/frenet_frame.jpg}\n\t\t\\caption{Representation of the trihedron}\n\t\\end{figure}\n\twhere, in mechanics, the vector $\\vec{T}$ is collinear to the velocity and to the tangential acceleration, and $\\vec{N}$ is collinear with the normal acceleration.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe radius of curvature $R$ is in the osculating plane (the plane formed by the tangent and normal vectors to the curve), which is the plane that locally best contains the curve. So the radius of curvature gives, at a point (locally), the best (\"truest\") radius of the curve. The twist, by contrast, gives us the tendency of the curve to go out of the osculating plane (verbatim: if the curve is contained in a plane, the torsion is zero).\n\t\\end{tcolorbox}\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1. We consider the plane parametric equation of a parabola:\n\t\n\tTherefore we have:\n\t\n\tHence:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tTherefore we have:\n\t\n\tBy the way, you will notice that we indeed have:\n\t\n\tThus, the curvature (inverse of the radius of curvature) is given by:\n\t\n\tAnd therefore:\n\t\n\tAt $t=0$ the parabola therefore has a radius of curvature $R(0)=0.5$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/osculating_circle_parabola.jpg}\n\t\t\\caption[]{Osculating circle to the parabola at the point $t=0$}\n\t\\end{figure}\n\t
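\n\tAssuming, as the result $R(0)=0.5$ suggests, that the parabola is parametrized as $(t,t^2)$, the standard parametric curvature formula $C=|x'y''-y'x''|/(x'^2+y'^2)^{3/2}$ gives the same value in Maple:\\\\\n\n\t\\texttt{>x:=t:y:=t\\string^2:\\\\\n>C:=abs(diff(x,t)*diff(y,t,t)-diff(y,t)*diff(x,t,t))\\\\\n/(diff(x,t)\\string^2+diff(y,t)\\string^2)\\string^(3/2);\\\\\n>subs(t=0,1/C);\n\t}\n\t\n\tMaple returns $C=2/(1+4t^2)^{3/2}$ and hence $R(0)=1/2$, in agreement with the osculating circle drawn above.\\\\\n\t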
E2. Let us seek the radius and the center of curvature at any point $M$ of our helix defined above, as a practical example. Recall that its parametric function is given by:\n\t\n\tand that:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tTherefore we have:\n\t\n\tAlong the way, the reader will probably have noticed that we have:\n\t\n\tThus, the curvature (the inverse of the radius of curvature) is given by:\n\t\n\tTherefore the radius of curvature is equal to:\n\t\n\tThis is consistent with intuition because when the step $h$ of the helix is equal to zero, the radius of curvature is equal to $r$ (the radius of the corresponding circle), and when $h$ tends to infinity the radius of curvature tends to infinity and the curvature tends to zero. This example is a famous engineering case, applied to industrial smoke-evacuation chimneys, which are surrounded by a spiral whose objective is to guide the air flow upward (the difficulty being to identify the radius $R$ of the metal plate to cut so that it follows the desired curvature at best... at least locally, knowing the radius of the chimney and the height $h$ of the step of the helix):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.7]{img/geometry/helix_curvature_cheminee_figure.jpg}\n\t\t\\caption[]{Basic principle of an industrial chimney with a spiral (source: Frank Morgan, Riemannian Geometry)}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tOr the real corresponding version:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.8]{img/geometry/helix_curvature_cheminee_photo.jpg}\n\t\t\\caption[]{Industrial evacuation chimney with spiral}\n\t\\end{figure}\n\tBut in this engineering case, the height $h$ is the one reached after a complete rotation. Therefore the radius of curvature will be written:\n\t\n\tAnyway, to get back to our example and finish it, the first Frenet formula gives the following normal vector:\n\t\n\twhose extremity points towards the $z$-axis (meeting it) of our helix, regardless of the height $h$! The $z$-component of this vector is zero since the normal is taken with respect to a point $M$ of the curve already at a given implicit height.\n\t\n\tBy the third Frenet formula, with the binormal vector:\n\t\n\tand the torsion radius given by the relation:\n\t\n\twe therefore have:\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tAnd as we got the three following relations:\n\t\n\tWe deduce the torsion radius:\n\t\n\t
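\n\tTo summarize the helix results in one place (using the standard formulas $C=\\Vert\\vec{f}\\,'\\times\\vec{f}\\,''\\Vert/\\Vert\\vec{f}\\,'\\Vert^3$ and, up to sign conventions, $\\tau=(\\vec{f}\\,'\\times\\vec{f}\\,'')\\cdot\\vec{f}\\,'''/\\Vert\\vec{f}\\,'\\times\\vec{f}\\,''\\Vert^2$, applied to the assumed parametrization $(r\\cos(t),r\\sin(t),ht)$), we have:\n\t\\[\n\tC=\\frac{r}{r^2+h^2}\\;\\Rightarrow\\; R=\\frac{r^2+h^2}{r},\\qquad\n\t\\tau=\\frac{h}{r^2+h^2}\\;\\Rightarrow\\; R'=\\frac{r^2+h^2}{h}\n\t\\]\n\tBoth limits behave as described above: $h=0$ gives back the circle ($R=r$ and a zero twist), while $h\\to\\infty$ flattens the helix towards a straight line of infinite curvature radius.\\\\\n\t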
E3. Let us now determine the important case of the explicit expression of the radius of curvature in Cartesian coordinates (a result used in the section of Civil Engineering and useful in many other areas of physics!). Consider for this purpose the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cartesian_curvature.jpg}\n\t\t\\caption[]{Illustrated approach of the \"1D\" curvature expression}\n\t\\end{figure}\n\tSo we have the radius of curvature, which is intuitively given by the following relation if we do not make use of vector analysis:\n\t\n\tWe also have:\n\t\n\tand as:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\ttherefore we have:\n\t\n\tand therefore:\n\t\n\tWe have proved in the section of Differential and Integral Calculus the following derivative:\t\n\t\n\tWe then have immediately, by the chain rule for composed derivatives:\n\t\n\tHaving done this we also need $\\mathrm{d}x/\\mathrm{d}s$. But we have already proved, with different approaches, in the sections of Analytical Mechanics and Geometric Shapes (among others), by simply using the Pythagorean theorem, that:\n\t\n\tBy grouping everything we finally have:\n\t\n\tSo it comes that the radius of the local osculating circle of a Cartesian function in the plane (taking the absolute value of the second derivative to avoid having a negative radius...) is given by\\label{radius of curvature}:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\subsection{Surface Patches}\n\tThe study of surface patches is strongly related to the properties of parametric surface expressions, as seen at the end of the section of Analytical Geometry. We are therefore interested in calculating the tangent plane and the curvature of a surface at a given point. It is therefore an extension of the previous subjects. For example, the study of the isolines of a surface that we have seen earlier can also be considered as belonging to the field of surface patch study.\n\t \n\tTo begin, let us now consider $D \\subset \\mathbb{R}^2$:\n\t\n\twith:\n\t\n\tsuch that $h(I)\\subset D$.\n\n\tWe can define $g\\circ h$:\n\t\n\tIf we assume $h$ continuous, it is clear that $g\\circ h$ is a parametric arc. Let us denote by $\\Gamma$ its support; we have $\\Gamma \\subset \\Sigma$ and we say that $\\Gamma$ is a \"\\NewTerm{plotted curve}\\index{plotted curve}\" or \"\\NewTerm{inscribed curve}\\index{inscribed curve}\" on $\\Sigma$, defined by the \"\\NewTerm{Gauss Coordinates}\\index{Gauss coordinates}\" $u$ and $v$ (already seen in the section of Non-Euclidean Geometry).\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe always assume from now on that $D=I\\times J$.\n\t\\end{tcolorbox}\n\tGiven $M_0 \\in \\Sigma, M_0=g(u_0,v_0)$, let us look at the two curves drawn on $\\Sigma$ defined by the following parametrized arcs:\n\t\n\t$g_{u_0}$ and $g_{v_0}$ are the two functions named \"\\NewTerm{partial functions}\\index{partial functions}\" of $g$ at $(u_0,v_0)$.\n\t\n\tThe supports of $(g_{u_0},J)$ and $(g_{v_0},I)$ are named \"\\NewTerm{coordinate-curves}\\index{coordinate-curve}\" of $\\Sigma$ at $M_0$ relatively to the parametrization $(g,D)$. We denote them respectively $\\Gamma_{u_0}$ and $\\Gamma_{v_0}$ (see figure below). 
We also name $\\Gamma_{u_0}$ the \"\\NewTerm{first coordinate-curve}\\index{first coordinate-curve}\" and $\\Gamma_{v_0}$ the \"\\NewTerm{second coordinate-curve}\\index{second coordinate-curve}\".\n\t\n\tIt is of course quite obvious (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{partial derivative}}) that:\n\t\n\tis tangent to $\\Gamma_{u_0}$ at $M_0$ and that $\\partial \\overrightarrow{\\text{O}M}/\\partial u$ is tangent to $\\Gamma_{v_0}$ at $M_0$.\n\t\n\tAs we have already proved in the section of Tensor Calculus, this expression is independent of the surface patch, as the infinitesimal length element $\\mathrm{d}s$ is independent of the parametrization of $\\Sigma$. This quadratic form is therefore an invariant that represents the metric of $\\Sigma$. It is also denoted as follows by tradition:\n\t\n\tor even in a more condensed form using tensor notation:\n\t\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/surface_patch.jpg}\n\t\t\\caption{Representation of a surface patch}\n\t\\end{figure}\n\t\n\t\\subsubsection{Metric of a Surface Patch}\n\tGiven again:\n\t\n\twith:\n\t\n\tLet us write $\\mathrm{d}g=(\\mathrm{d}x,\\mathrm{d}y,\\mathrm{d}z)$, or in other words:\n\t\n\tWe also have (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{total exact differential}}):\n\t\n\tand we have proved at the beginning of this section that the curvilinear abscissa in a Cartesian space (in Riemann coordinates) was given by:\n\t\n\tWe therefore have, after substitution in Gaussian coordinates:\n\t\n\twhich is equivalent to writing (take care not to read there the square of a vector: it is the dot product of the vector with itself!):\n\t\n\tIn a more traditional way, with the notation:\n\t\n\twe get a relation named the \"\\NewTerm{first fundamental quadratic form}\\index{first fundamental quadric form}\" (we will not prove the second one in this book):\n\t\n\talso named the \"\\NewTerm{first Gauss differential form}\\index{first Gauss differential form}\". It is interesting to write this last relation in the form:\n\t\n\tand we see that for $\\mathrm{d}s^2$ to be positive, $E$ and also $EG-F^2$ must be positive!\n\t\n\tThe first fundamental form may also be represented as a symmetric matrix:\n\t\n\tThe first fundamental form is often written in the modern notation of the metric tensor. The coefficients may then be written as $g_{ij}$:\n\t\n\tand we will prove in the section of Tensor Calculus, during our study of Gram's determinant, that this relation can be used to calculate the surface of any (regular) surface patch!!\n\t\n\t\\pagebreak\n\t\\paragraph{Regularity of a Surface}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A point $M$ belonging to the surface $\\Sigma$ is said to be (quite logically...) a \"\\NewTerm{regular point}\\index{regular point}\" if and only if:\n\t\n\tA surface patch is logically named a \"\\NewTerm{smooth surface}\\index{smooth surface}\" if and only if all its points are regular (if the cross product is zero then there is somewhere a \"fold\" at $\\pi/2$... and this is quite annoying for continuity purposes).
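\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tTo make this machinery concrete before the examples that follow, here is a small Maple computation of $E$, $F$, $G$ for the torus met earlier, assuming its standard parametrization $g(u,v)=((R+r\\cos(v))\\cos(u),(R+r\\cos(v))\\sin(u),r\\sin(v))$:\\\\\n\n\t\\texttt{>X:=[(R+r*cos(v))*cos(u),(R+r*cos(v))*sin(u),r*sin(v)]:\\\\\n>Xu:=map(diff,X,u):Xv:=map(diff,X,v):\\\\\n>E:=simplify(Xu[1]\\string^2+Xu[2]\\string^2+Xu[3]\\string^2);\\\\\n>F:=simplify(Xu[1]*Xv[1]+Xu[2]*Xv[2]+Xu[3]*Xv[3]);\\\\\n>G:=simplify(Xv[1]\\string^2+Xv[2]\\string^2+Xv[3]\\string^2);\n\t}\n\t\n\tMaple returns $E=(R+r\\cos(v))^2$, $F=0$ and $G=r^2$, hence $\\mathrm{d}s^2=(R+r\\cos(v))^2\\,\\mathrm{d}u^2+r^2\\,\\mathrm{d}v^2$: the coordinate-curves of the torus are orthogonal and, for $R>r$, every point is regular.\n\t\\end{tcolorbox}\n\t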
\n\tLet us notice that:\n\t\n\tThe angle $\\alpha$ between the two coordinate-curves $\\Gamma_{u_0}$ and $\\Gamma_{v_0}$ of $\\Sigma$ at $M_0$ is given by the definition of the dot product:\n\t\n\tHence the expression:\n\t\n\tTherefore a necessary and sufficient condition for the coordinate-curves $\\Gamma_{u_0}$ and $\\Gamma_{v_0}$ to be perpendicular on $\\Sigma$ at $M_0$ is that $F$ is equal to zero. In this particular case, we say that the curvilinear coordinates $u$, $v$ on the surface are orthogonal coordinates!!!\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1. Let us consider the parametrization of the Cartesian plane. Then we have:\n\t\n\thence:\n\t\n\tTherefore:\n\t\n\tWe fall back on the same differential curvilinear abscissa as that seen in the sections of Tensor Calculus and General Relativity with the diagonal metric of flat space.\n\t\n\tWe also have:\n\t\n\tSo the surface is indeed regular. \\\\\n\n\tWe also have:\n\t\n\tSo the two coordinate-curves are perpendicular!\\\\\n\t\n\tE2. Let us consider the parametrization of the cylinder surface. We then have (\\SeeChapter{see section Vector Calculus page \\pageref{cylindrical coordinates}}):\n\t\n\thence:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tTherefore:\n\t\n\tWe fall back on the same differential curvilinear abscissa as that seen in the sections of Tensor Calculus and General Relativity with the diagonal metric in polar coordinates.\\\\\n\t\n\tWe also have:\n\t\n\tTherefore the surface is regular as long as $r$ is not zero. We also have:\n\t\n\tSo the two coordinate-curves are perpendicular on the cylinder.\\\\\n\t\n\tE3. Let us consider the parametrization of the sphere of origin O and radius $r$\\label{metric two sphere}. Then we have (\\SeeChapter{see section Vector Calculus page \\pageref{spherical coordinates}}):\n\t\n\thence:\n\t\n\tTherefore:\n\t\n\tWe fall back on the same differential curvilinear abscissa as that seen in the sections of Tensor Calculus and General Relativity with the diagonal metric in spherical coordinates.\\\\\n\t\n\tWe also have:\n\t\n\tTherefore the surface is regular as long as $r$ is not zero. We also have:\n\t\n\tSo the two coordinate-curves are perpendicular on the sphere.\\\\\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE4. Let us consider the parametrization of the hyperboloid. Then we have (\\SeeChapter{see section Analytical Geometry page \\pageref{hyperboloid}}):\n\t\n\thence:\n\t\n\tTherefore:\n\t\n\tLet us take $b$ as being equal to zero. Then we have:\n\t\n\tTherefore the surface is regular as long as $a$ is not equal to zero. We also have:\n\t\n\tSo the two coordinate-curves are perpendicular on the hyperboloid.\\\\\n\t\\end{tcolorbox}\n\t\n\t\\begin{flushright}\n\t\\begin{tabular}{l c}\n\t\\circled{80} & \\pbox{20cm}{\\score{4}{5} \\\\ {\\tiny 35 votes,  74.86\\%}} \n\t\\end{tabular} \n\t\\end{flushright}\n\t\n\t%to make section start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Geometric Shapes}\n\n\\lettrine[lines=4]{\\color{BrickRed}W}e have already defined, at the beginning of the section on Euclidean Geometry, the concept of topological dimension, and what a zero-dimensional point and a one-dimensional curve were. 
We will not come back on these concepts; we will focus here mostly on shapes of higher dimensions that we will use a lot in the chapters related to theoretical Physics.\n\nThe purpose of this section is to categorize, with proofs, some remarkable mathematical properties of well-known geometric shapes and bodies (area, volume, center of mass, moment of inertia). Indeed, there are many mathematical books listing those properties without proofs, but few books, or simply none at all, prove them all (at least we have never seen such a book to this day...). The list below is at this date far from exhaustive (since there is an infinite number of geometric shapes), but it will be completed in time.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/delucq_shoot_or_point.eps}\n\\end{figure}\n\nThe few shapes that we wanted to present in this section easily allow one to find the remarkable properties of a large number of shapes not listed here, by assembly or decomposition.\n\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} The trigonometric relations and remarkable integrals that will be used for the study of the geometric shapes below are not proven in this section. They have all already been proved in the sections dealing specifically with Trigonometry and with Differential and Integral Calculus.\\\\\n\t\n\t\\textbf{R2.} We understand by \"center of gravity\" the \"barycentre\", as discussed in the section of Euclidean Geometry.\n\t\\end{tcolorbox}\t\n\t\n\t\\subsection{Usual Surfaces (Areas)}\\label{known surfaces}\n\tThere are several definitions of the concept of surface (area): one due to Euclid and another, modern one, due to topology (see the sections of the corresponding names).\n\n\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A \"\\NewTerm{plane surface}\\index{plane surface}\" is a closed area that can be identified approximately by a height and a width.\n\t\t\\item[D2.] A \"\\NewTerm{surface}\\index{surface}\" is a topological variety of dimension 2.\n\t\\end{enumerate}\n\tDepending on the authors, a surface or area is denoted by the symbols $A, S, \\mathcal{A}, \\mathcal{S}$.\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nWe will focus initially only on the properties (perimeter, area, center of gravity, etc.) of surfaces embedded in the Euclidean plane and space.\n\t\\end{tcolorbox}\t\n\n\t\\subsubsection{Polygons}\\label{polygon}\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{polygon}\\index{polygon}\" is a plane figure bounded by straight line segments (i.e. with a closed polyline).\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/polygone.eps}\n\\caption{Example of arbitrary plane polygon}\n\\end{figure}\n\n\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{quadrilateral}\\index{quadrilateral}\", \"\\NewTerm{pentagon}\\index{pentagon}\", \"\\NewTerm{hexagon}\\index{hexagon}\", \"\\NewTerm{heptagon}\\index{heptagon}\"... are polygons with respectively four, five, six, seven... sides.\n\nWe distinguish three main families (but they are not the only ones!) 
of polygons: \n\t\\begin{enumerate}\n\t\t\\item Crossed polygons\n\t\t\\item Concave polygons\n\t\t\\item Convex polygons\n\t\\end{enumerate}\nWe will find these last two families in different sections of this book (for details see below).\n\nBefore going any further, we would like to clarify to the reader that there are, at least as far as we know, mathematical relations to calculate the surface only for simple polygons. Although in practice we almost always meet non-simple polygons, we considered it useless to dwell on the determination of a relation that would reduce the calculation of the surface of any polygon to that of simple polygons, or on the use of Monte Carlo methods (\\SeeChapter{see section of Numerical Methods page \\pageref{monte carlo simulations}}).\n\n\t\\textbf{Definition (\\#\\mydef):} A polygon is named a \"\\NewTerm{crossed polygon}\\index{crossed polygon}\" if at least two of its sides intersect, that is to say, if at least two of its sides cross each other. \n\nThis is the case of the pentagon $ABCDE$ below:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/crossed_polygon.eps}\n\\caption{Example of a crossed polygon}\n\\end{figure}\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nThe \"\\NewTerm{envelope}\\index{envelope of a polygon}\" of a polygon is the polygon obtained by following its outer contour. For example, the envelope of the previous polygon is a decagon whose vertices are the five vertices of the pentagon and the five intersections of its sides.\n\t\\end{tcolorbox}\t\n\n\t\\textbf{Definition (\\#\\mydef):} A polygon is named a \"\\NewTerm{concave polygon}\\index{concave polygon}\" if it is not crossed and if one or more of its diagonals are not entirely within the area bounded by the polygon.\n\nThis is the case of the pentagon $ABCDE$ below:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/concave_polygon.eps}\n\\caption{Example of a concave polygon}\n\\end{figure}\n\n\t\\textbf{Definition (\\#\\mydef):} A polygon is named a \"\\NewTerm{convex polygon}\\index{convex polygon}\" if it is not crossed and if all its diagonals are entirely within the area bounded by the polygon. \n\nThe hexagon $MNOPQR$ below is an example of a convex polygon:\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/geometry/convex_polygon.eps}\n\\caption{Example of a convex polygon}\n\\end{figure}\n\nWith respect to the definitions given above, where the diagonals were identified, let us see if there is a relation giving the number of diagonals relative to the number of edges of a polygon.\n\nFor this let us start from a polygon of $n$ sides (note that it also has $n$ vertices):\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.3]{img/geometry/pentagon.eps}\n\\caption{Example of a polygon (pentagon with $5$ edges and $5$ vertices)}\n\\end{figure}\n\nWe now define the total number of segments $S$ as the number of sides (edges) $E$ plus the number of diagonals $D$, such that:\n\t\nWith our example we can see that $S=10$:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.7]{img/geometry/pentagon_segments.eps}\n\\caption{Pentagon with all its $10$ segments}\n\\end{figure}\n\n\tNow take a first point of our pentagon. 
We see that we can reach all $n$ points (vertices) except the chosen one $(-1)$; therefore we can create $n-1$ segments, as shown in the figure below:\n\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\includegraphics[scale=0.6]{img/geometry/pentagon_first_step_segments.eps}\n\t\\caption{Example of segments starting from a chosen point}\n\t\\end{figure}\n\n\tWith the second point, we can also reach all $n$ points, except the chosen point itself $(-1)$, and we must remove the first point already taken in the previous step $(-1)$. We therefore create $n-2$ segments:\n\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\includegraphics{img/geometry/pentagon_second_step_segments.jpg}\n\t\\caption{Approach with the 2nd point}\n\t\\end{figure}\n\tWith the third one, we can also reach all the $n$ points, except the considered point $(-1)$ and minus the two points already seen $(-2)$, that is to say the formation of $n-3$ segments:\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\includegraphics{img/geometry/pentagon_third_step_segments.jpg}\n\t\\caption{Approach with the 3rd point}\n\t\\end{figure}\n\tWe continue with the other points: the 4th gives $n - 4$ segments, the 5th gives $n - 5$ segments... Therefore we see that the $(n - 2)$th point thus gives $n - (n - 2)$ segments, etc.\n\t\n\tSo we finally have for:\n\t\n\tBy simplifying we therefore find:\n\t\n\tWe find ourselves with two relations:\n\t\n\tTherefore it follows that:\n\t\n\t$D=\\dfrac{n(n-1)}{2}-n=\\dfrac{n(n-3)}{2}$\n\t\n\twhich indeed gives $D=5$ diagonals for our pentagon ($n=5$), in agreement with the figure above.\n\t\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{regular polygon}\\index{regular polygon}\\label{regular polygon}\" is a convex polygon in which every side has the same length and every angle is the same. For any natural number $n\\geq 3$, we can draw a regular polygon with $n$ sides; sometimes we will call this polygon a \"\\NewTerm{regular $n$-gon}\\index{regular $n$-gon}\" to emphasize that it has $n$ sides. Below are the regular $n$-gons for $n = 3, 4, ..., 12$. As $n$ gets larger, the regular $n$-gons look more and more like a perfect circle:\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\includegraphics{img/geometry/regular_polygons.jpg}\n\t\\caption{Regular polygons (subfamily of convex polygons)}\n\t\\end{figure}\n\t\n\t\\subsubsection{Rectangle}\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{rectangle}\\index{rectangle}\" is a special case of the quadrilateral in the sense that its sides $L$ and $H$ (notation according to length and height, as we can see in the figure below) are equal in pairs and meet at right angles (in other words, $L$ is not necessarily equal to $H$).\n\t\n\tOther possible definitions include that a rectangle is a parallelogram with a right angle, or a quadrilateral with four right angles...\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe rectangle can be seen as the composition of two (or more) triangles (see below for the definition). 
To build a rectangle, it would be sufficient to take a single right triangle and make it undergo a double reflection and rotation relative to a well-chosen axis (\\SeeChapter{see section Euclidean Geometry page \\pageref{geometric transformations}}).\n\t\\end{tcolorbox}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/rectangle_example.jpg}\n\t\t\\caption{Example of rectangle}\n\t\\end{figure}\n\tBy Euclid's axioms (\\SeeChapter{see section Euclidean Geometry page \\pageref{euclid's postulates}}), the perimeter of a rectangle is given by:\n\t\n\tAnd by definition its surface by:\n\t\n\tand the length of its diagonal (an application of the Pythagorean theorem):\n\t\n\tThe position of the rectangle's center of gravity, if we put the Cartesian coordinate system in the lower left corner of this geometry, is trivially given by:\n\t\n\tLet us also indicate that, if we were living beings in a two-dimensional space, the rectangle is what we would perceive if a cuboid crossed our universe parallel to its faces.\n\t\n\tFinally, let us notice, without proof, that the sum of the angles of a plane rectangle is equal to $4\\cdot \\pi/2=2\\pi$.\n\t\n\t\\subsubsection{Square}\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{square}\\index{square}\" is a special case of the rectangle in the sense that its four sides are equal, such that $L=H$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/square_example.jpg}\n\t\t\\caption{Example of square}\n\t\\end{figure}\n\tBy Euclid's axioms (\\SeeChapter{see section Euclidean Geometry  page \\pageref{euclid's postulates}}), the perimeter of the square is given by:\n\t\n\tSo it comes for the surface:\n\t\n\tand for its diagonal:\n\t\n\tThe position of the square's center of gravity, if we put the Cartesian coordinate system in the lower left corner of the shape, is trivially given by:\n\t\n\tFinally, let us indicate that if we were living beings in a two-dimensional space, the square is what we would perceive if a cube crossed our universe parallel to its faces.\n\t\n\tAlso notice, without proof, that the sum of the angles of a plane square is equal to $4\\cdot \\pi/2=2\\pi$.\n\t\n\t\\pagebreak\n\t\\subsubsection{Unspecified Triangle}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{unspecified triangle}\\index{unspecified triangle}\\label{unspecified triangle}\" is a polygon with three sides and includes as particular cases the isosceles, equilateral and right triangles.  A triangle with vertices $A, B$, and $C$ is denoted $\\triangle ABC$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/unspecified_triangle_example.jpg}\n\t\t\\caption{Example of unspecified triangle with inscribed circle and median}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tFor details about the circumcircles and incircles of triangles we send the reader to the section of Euclidean Geometry.\n\t\\end{tcolorbox}\t\n\tBy Euclid's axioms (\\SeeChapter{Euclidean Geometry}), the perimeter of an unspecified triangle is given by:\n\t\n\tAn unspecified plane triangle can always be decomposed into two right triangles. 
\\pagebreak\n\t\\subsubsection{Unspecified Triangle}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{unspecified triangle}\\index{unspecified triangle}\\label{unspecified triangle}\" is a polygon with three sides and includes as particular cases the isosceles, equilateral and right triangles. A triangle with vertices $A, B$, and $C$ is denoted $\\triangle ABC$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/unspecified_triangle_example.jpg}\n\t\t\\caption{Example of unspecified triangle with inscribed circle and median}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tFor the details about the Circumcircles and Incircles of triangles we send the reader to the section of Euclidean Geometry.\n\t\\end{tcolorbox}\t\n\tBy Euclid's axioms (\\SeeChapter{Euclidean Geometry}), the perimeter of an unspecified triangle is given by:\n\t$$P=a+b+c$$\n\tAn unspecified plane triangle can always be decomposed into two right angle triangles. Thus, the one in the above figure can be divided into two right angle triangles:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/unspecified_triangle_decomposition.jpg}\n\t\t\\caption{Unspecified triangle divided into two right triangles}\n\t\\end{figure}\n\tof respective bases $a_1$ and $a_2$ (defined by the orthogonal projection of the apex onto the opposite side $a$) such that:\n\t$$a=a_1+a_2$$\n\tThe surface of each of these two right angle triangles is, as we have already said implicitly in our study of the rectangle, half the area of a rectangle of the same length and same height. Therefore:\n\t$$S_1=\frac{a_1 h}{2},\qquad S_2=\frac{a_2 h}{2}$$\n\tTherefore, the sum of these surfaces gives us the surface of the unspecified triangle:\n\t$$S=S_1+S_2=\frac{(a_1+a_2)h}{2}=\frac{a\,h}{2}$$\n\tWe can say from this latter equality that the surface of any triangle is equal to half of the area of a rectangle of length $L=a$ and height $H=h$.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWhatever the base $a, b, c$ and the respective height $h_a,h_b,h_c$ chosen, the previous reasoning of course remains completely correct.\n\t\\end{tcolorbox}\n\tThat said ... it is nice and pretty, but there are very many practical cases where we do not know the height but only the lengths of the three sides. So, what do we do? Well, we will use the cosine theorem (Al-Kashi's theorem) proved in the section Trigonometry, which gives us, as a reminder, for any unspecified triangle of the type:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cosine_theorem_triangle.jpg}\n\t\t\\caption{Reminder of the construction for the cosine's theorem proof}\n\t\\end{figure}\n\t$$c^2=a^2+b^2-2\,a\,b\cos(\gamma)$$\n\twhere $\gamma$ is the angle opposite the side $c$, and also the relation resulting in the triangle relative to the surface:\n\t$$S=\frac{1}{2}\,a\,b\sin(\gamma)$$\n\tTherefore we have:\n\t\n\tTherefore after rearrangement we finally get:\n\t\n\tThis relation is commonly named \"\\NewTerm{Heron's formula}\\index{Heron's formula}\\label{heron formula}\", also sometimes named \"\\NewTerm{Hero's formula}\\index{Hero's formula}\", best known in the following form when we put $\Sigma:=\dfrac{1}{2}\left(a+b+c\right)$:\n\t$$S=\sqrt{\Sigma\cdot(\Sigma-a)\cdot(\Sigma-b)\cdot(\Sigma-c)}$$\n\t
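As a sanity check of Heron's formula (our own worked example): for the right triangle with sides $a=3$, $b=4$, $c=5$ we have $\Sigma=\frac{1}{2}(3+4+5)=6$ and therefore\n\t$$S=\sqrt{6\cdot(6-3)\cdot(6-4)\cdot(6-5)}=\sqrt{36}=6$$\n\twhich indeed equals $\frac{3\cdot 4}{2}$, the half-product of the two legs.\n\t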
The determination of the center of gravity (or centroid) $G$ (\\SeeChapter{see section Euclidean Geometry page \\pageref{barycenter}}) is a little less intuitive than in the case of the rectangle...\n\t\n\t\\begin{theorem}\n\tGiven a triangle $\\triangle ABC$. We name $A'$ the \"middle of the segment\" $\overline{BC}$, $B'$ that of $\overline{AC}$ and $C'$ that of $\overline{AB}$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/triangle_center_of_gravity.jpg}\n\t\t\\caption{Starting point for determining the center of gravity}\n\t\\end{figure}\n\tWe will prove that the unique point $G$ satisfying (\\SeeChapter{section Euclidean Geometry}):\n\t$$\overrightarrow{GA}+\overrightarrow{GB}+\overrightarrow{GC}=\vec{0}$$\n\tis the point of intersection of the three medians of triangle $\\triangle ABC$. This proof will be made in two stages, via two propositions (at the end, we can conclude by the theorem):\n\t\\begin{enumerate}\n\t\t\\item[P1.] If $\\triangle ABC$ is a triangle then there exists one and only one center of gravity $G$ such that $\overrightarrow{GA}+\overrightarrow{GB}+\overrightarrow{GC}=\vec{0}$.\n\t\t\\begin{dem}\n\t\tLet $G$ be a point in the plane such that $\overrightarrow{GA}+\overrightarrow{GB}+\overrightarrow{GC}=\vec{0}$. Then, introducing an arbitrary origin $\text{O}$, we can write that:\n\t\t$$(\overrightarrow{\text{O}A}-\overrightarrow{\text{O}G})+(\overrightarrow{\text{O}B}-\overrightarrow{\text{O}G})+(\overrightarrow{\text{O}C}-\overrightarrow{\text{O}G})=\vec{0}$$\n\t\tTherefore:\n\t\t$$\overrightarrow{\text{O}G}=\frac{1}{3}\left(\overrightarrow{\text{O}A}+\overrightarrow{\text{O}B}+\overrightarrow{\text{O}C}\right)$$\n\t\tThis vector equality ensures that the point $G$ exists and is unique, and it even tells us how $G$ moves when we move the vertices of the triangle!\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\t\\item[P2.] The three medians of a triangle are concurrent. Their intersection is the point $G$.\n\t\t\\begin{dem}\n\t\tTo prove that the three medians are concurrent, we will prove that $G$ belongs to each of the three medians.\n\t\t\n\t\tJust before, we proved that $G$ satisfies the equality:\n\t\t$$\overrightarrow{GA}+\overrightarrow{GB}+\overrightarrow{GC}=\vec{0}$$\n\t\tAs $A'$ is the midpoint of $\overline{BC}$, then we can write that:\n\t\t$$\overrightarrow{GB}+\overrightarrow{GC}=2\,\overrightarrow{GA'}$$\n\t\tTherefore we get:\n\t\t$$\overrightarrow{GA}+2\,\overrightarrow{GA'}=\vec{0}\quad\Leftrightarrow\quad \overrightarrow{AG}=\frac{2}{3}\overrightarrow{AA'}$$\n\t\tThe vectors $\overrightarrow{AG}$ and $\overrightarrow{AA'}$ are collinear! Therefore the points $A, G, A'$ are aligned. In other words, the point $G$ is part of the median $\overline{AA'}$ of the triangle $\\triangle ABC$. We can even say that it is placed at two thirds of the segment $\overline{AA'}$ from the vertex $A$.\n\t\t\n\t\tWhat we have just shown with the median $AA'$ is of course also true for the other two medians. Therefore:\n\t\t$$\overrightarrow{BG}=\frac{2}{3}\overrightarrow{BB'},\qquad \overrightarrow{CG}=\frac{2}{3}\overrightarrow{CC'}$$\n\t\tTo summarize, the point $G$ is therefore part of the three medians $\overline{AA'}$, $\overline{BB'}$ and $\overline{CC'}$. These three lines are concurrent and the point $G$ is their intersection point. This result will be useful later in our study of polyhedra.\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\\end{enumerate}\n\t\\end{theorem}\n\tFinally, let us indicate that if we were living beings in a two-dimensional space, the triangle is what we should perceive if geometric shapes composed of at least three joined sides crossed our universe through one of their vertices.\n\t\n\tAlso let us recall that the sum of the angles of any plane triangle is equal to $\pi$ ($180^\circ$), as proved in the section of Euclidean Geometry page \\pageref{angle sum theorem}.\n\t\n\tWe will stop here this analogy with a two-dimensional space, which can be generalized to each geometric shape that we present thereafter (circle and sphere, ellipse and ellipsoid, etc.). The idea was mainly to present the conception that the volumes we experience in our daily lives can also be seen as 4-dimensional space shapes crossing our 3-dimensional space.\n\t
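To make the centroid result proved above concrete, here is a quick coordinate check (our own): for the triangle with vertices $A=(0,0)$, $B=(6,0)$ and $C=(0,3)$ we get\n\t$$G=\frac{1}{3}(A+B+C)=(2,1)$$\n\tand the reader can verify that $G$ indeed lies at two thirds of each median from the corresponding vertex.\n\t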
\\subsubsection{Isosceles Triangle}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{isosceles triangle}\\index{isosceles triangle}\" is a triangle that has two sides of equal length. Sometimes it is specified as having two and only two sides of equal length, and sometimes as having at least two sides of equal length, the latter version thus including the equilateral triangle as a special case.\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/iscoles_triangle.jpg}\n\t\t\\caption{Example of isosceles triangle}\n\t\\end{figure}\n\tIn an isosceles triangle, the two equal sides are named \"\\NewTerm{legs}\\index{legs}\", and the remaining side is named the \"\\NewTerm{base}\\index{base (triangle)}\". The opposite angle to the base is named the \"\\NewTerm{vertex angle}\\index{vertex angle}\", and the point associated with that angle is named the \"\\NewTerm{apex}\\index{apex}\". The two equal angles are named the \"\\NewTerm{isosceles angles}\\index{isosceles angles}\".\n\t\n\tThe perimeter of such a triangle remains:\n\t$$P=a+b+c$$\n\tbut as it has two equal sides ($b=c$), we can always simplify as follows:\n\t$$P=a+2b$$\n\tThe surface, as we have proved above in the general case, remains:\n\t$$S=\frac{a\,h}{2}$$\n\tAnd if we don't know the height but we know the internal angles, we can apply elementary trigonometry if needed (\\SeeChapter{see section Trigonometry page \\pageref{circle trigonometry}}).\n\t\n\tAnd the center of gravity remains, as we have proved in the general case (unspecified triangle), at the position:\n\t$$\overrightarrow{\text{O}G}=\frac{1}{3}\left(\overrightarrow{\text{O}A}+\overrightarrow{\text{O}B}+\overrightarrow{\text{O}C}\right)$$\n\tA remarkable property of an isosceles triangle is that the bisector and the median of the third side, the one not equal in length to the two others, are indistinguishable and therefore obviously of equal length (\\SeeChapter{see section Euclidean Geometry page \\pageref{triangles remarkable interior lines}}).\n\t\n\tA famous isosceles triangle is the right angle isosceles triangle, because its angles are canonical: we get immediately $\theta=\pi/4$, and we also have immediately the length of the hypotenuse by applying the Pythagorean theorem:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/isosceles_right_angle_triangle.jpg}\n\t\t\\caption{Right angle isosceles triangle}\n\t\\end{figure}\n\tAlso let us recall that the sum of the angles of any plane triangle is equal to $\pi$ ($180^\circ$), as proved in the section of Euclidean Geometry page \\pageref{angle sum theorem}.\n\t
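As a small worked example (ours): for a right isosceles triangle with legs of length $L$, the Pythagorean theorem mentioned above gives for the hypotenuse\n\t$$c=\sqrt{L^2+L^2}=\sqrt{2}\,L\approx 1.414\,L$$\n\twith the two isosceles angles both equal to $\theta=\pi/4$ ($45^\circ$), as expected.\n\t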
\\pagebreak\n\t\\subsubsection{Equilateral Triangle}\\label{equilateral triangle}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{equilateral triangle}\\index{equilateral triangle}\" is a triangle in which all three sides are equal. In the familiar Euclidean geometry, equilateral triangles are also \"\\NewTerm{equiangular}\\index{equiangular}\"; that is, all three internal angles are also congruent to each other and are each equal to $\pi/3$. They are regular polygons, and can therefore also be referred to as regular triangles.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/equilateral_triangle.jpg}\n\t\t\\caption{Equilateral triangle}\n\t\\end{figure}\n\t\\begin{theorem}\n\tEach angle of an equilateral triangle is $\pi/3$ [rad] ($60^\circ$).\n\t\\end{theorem}\n\t\\begin{dem}\n\tFirst let us recall that by the angle sum theorem (page \\pageref{angle sum theorem}) we have:\n\t\n\tWe also know that $\Delta abc$ is equilateral, then:\n\t\n\tThen as $\angle\overline{ab}$ and $\angle\overline{ac}$ are congruent, we have:\n\t\n\tAnd proceeding identically:\n\t\n\tand:\n\t\n\tHence we can write that:\n\t\n\tAnd using the angle sum theorem:\n\t\n\tWe get:\n\t\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem} \n\tThe perimeter of such a triangle remains:\n\t$$P=a+b+c$$\n\tbut as it has three equal sides, we can always simplify as follows:\n\t$$P=3a$$\n\tThe surface, as we have proved above in the general case, remains:\n\t$$S=\frac{a\,h}{2}$$\n\tApplying the Pythagorean theorem, we get if we don't know $h$:\n\t$$h=\sqrt{a^2-\left(\frac{a}{2}\right)^2}=\frac{\sqrt{3}}{2}a\quad\Rightarrow\quad S=\frac{\sqrt{3}}{4}a^2$$\n\tIt can be shown that a triangle with given perimeter has the maximum possible area if it is equilateral.\n\t\\begin{dem}\n\tRemember Heron's formula proved above ($\Sigma=\dfrac{1}{2}(a+b+c)$):\n\t$$A=\sqrt{\Sigma\cdot(\Sigma-a)\cdot(\Sigma-b)\cdot(\Sigma-c)}$$\n\tTo find the conditions for the maximum area, we want to compute derivatives with respect to $a$, $b$ and $c$, but since they are not independent variables, we first substitute the semi-perimeter $\Sigma$ into Heron's equation to get rid of one degree of freedom:\n\t\n\tTherefore:\n\t\n\tSetting the derivative with respect to $a$ to zero:\n\t\n\tSince $A\neq 0$ we have $b\neq \Sigma$, so $b=\Sigma$ is not a solution.\n\t\n\tSimilarly, setting the derivative with respect to $b$ to zero yields:\n\t\n\tSolving simultaneously gives\n\t\n\tand substituting back gives:\n\t\n\tSo $a=b=c$ and the triangle is equilateral.\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tAnd the center of gravity remains, as we have proved in the general case (unspecified triangle), at the position:\n\t$$\overrightarrow{\text{O}G}=\frac{1}{3}\left(\overrightarrow{\text{O}A}+\overrightarrow{\text{O}B}+\overrightarrow{\text{O}C}\right)$$\n\tA remarkable property of an equilateral triangle is that all its bisectors and medians are indistinguishable and obviously of the same length (\\SeeChapter{see section Euclidean Geometry page \\pageref{triangles remarkable interior lines}})!\n\t
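As a quick numerical check (our own): for an equilateral triangle of side $a=2$ the relations above give\n\t$$h=\frac{\sqrt{3}}{2}\cdot 2=\sqrt{3}\approx 1.732,\qquad S=\frac{\sqrt{3}}{4}\cdot 2^2=\sqrt{3}\approx 1.732$$\n\tand each angle is of course $\pi/3$ ($60^\circ$).\n\t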
Let us now see the important little \"\\NewTerm{Viviani's theorem}\\index{Viviani's theorem}\", regularly used for graphical representation in the field of materials engineering (mixtures) or statistics (mixture designs, or the distribution of three frequencies of data whose sum is constant).\n\t\n\tConsider the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/viviani_theorem.jpg}\n\t\t\\caption{Illustrative figure for Viviani's theorem}\n\t\\end{figure}\n\t\\begin{theorem}\n\tIf we place a point in an equilateral triangle, and from this point draw a line towards each side so that each line is perpendicular to the corresponding side, then no matter where we place the point, the sum of the perpendicular distances between the point and the sides is equal to the height of the triangle!\n\t\\end{theorem}\n\t\\begin{dem}\n\tFor the proof, let us denote by $d_a,d_b,d_c$ the distances from $M$ to the sides, by $a$ the length of one side, and by $h$ the height. Then, joining $M$ to the three vertices and summing the surfaces of the three triangles so formed, we have:\n\t$$S=\frac{a\,d_a}{2}+\frac{a\,d_b}{2}+\frac{a\,d_c}{2}=\frac{a}{2}(d_a+d_b+d_c)\qquad\text{and}\qquad S=\frac{a\,h}{2}$$\n\tand we therefore indeed have:\n\t$$d_a+d_b+d_c=h$$\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem} \n\tAlso let us recall that the sum of the angles of any plane triangle is equal to $\pi$ ($180^\circ$), as proved in the section of Euclidean Geometry page \\pageref{angle sum theorem}.\n\t\n\t\\subsubsection{Right Triangle}\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{right triangle}\\index{right triangle}\" (American English) or \"\\NewTerm{right-angled triangle}\\index{right-angled triangle}\" (British English) is a triangle in which one angle is a right angle (that is, a $\pi/2$ [rad], or $90^\circ$, angle). The relation between the sides and angles of a right triangle is the basis for trigonometry (see corresponding section).\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/right_angle_triangle.jpg}\n\t\t\\caption{Example of right angle triangle}\n\t\\end{figure}\n\tThe side opposite the right angle is named the \"\\NewTerm{hypotenuse}\\index{hypotenuse}\".\n\t\n\tThe perimeter of such a triangle remains:\n\t$$P=a+b+c$$\n\tThe surface, as we have proved above in the general case, remains:\n\t$$S=\frac{a\,h}{2}$$\n\t(where, for the right triangle, the two legs can directly play the role of base and height).\n\t\n\tAnd the center of gravity remains, as we have proved in the general case (unspecified triangle), at the position:\n\t$$\overrightarrow{\text{O}G}=\frac{1}{3}\left(\overrightarrow{\text{O}A}+\overrightarrow{\text{O}B}+\overrightarrow{\text{O}C}\right)$$\n\tA remarkable property of the right triangle: it has the unique property that we can directly apply to it the Pythagorean theorem (\\SeeChapter{see section Euclidean Geometry page \\pageref{pythagorean theorem}}).\n\t\n\tAlso let us recall that the sum of the angles of any plane triangle is equal to $\pi$ ($180^\circ$), as proved in the section of Euclidean Geometry page \\pageref{angle sum theorem}.\n\t\n\t\\subsubsection{Trapezoid}\n\t\\textbf{Definition (\\#\\mydef):} A convex quadrilateral with at least one pair of parallel sides is referred to as a \"\\NewTerm{trapezoid}\\index{trapezoid}\" in American and Canadian English, but as a \"\\NewTerm{trapezium}\\index{trapezium}\" in English outside North America. The parallel sides are named the \"\\NewTerm{bases}\\index{bases}\" of the trapezoid and the other two sides are named the \"\\NewTerm{legs}\\index{legs}\" or the lateral sides (if they are not parallel; otherwise there are two pairs of bases). A \"\\NewTerm{scalene trapezoid}\\index{scalene trapezoid}\" is a trapezoid with no sides of equal measure, in contrast to the special cases below.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/trapeze.jpg}\n\t\t\\caption{Example of trapezoid}\n\t\\end{figure}\n\tCalculating the perimeter of the trapezoid is obvious:\n\t$$P=a+b+c+d$$\n\tIts surface is calculated using the decomposition into two triangles and a rectangle:\n\t\n\tFinally, denoting by $B$ and $b$ the two bases and by $h$ the height:\n\t$$S=\frac{(B+b)\,h}{2}$$\n\t
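As a small numerical illustration of the previous relation (ours, with hypothetical dimensions): a trapezoid with bases $B=8$ and $b=4$ and height $h=3$ has surface\n\t$$S=\frac{(8+4)\cdot 3}{2}=18$$\n\t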
When the two parallel sides have the same length, we get the special cases of the square, the rectangle, the rhombus or the parallelogram (here listed from the more specific to the more general; we could also have put the rhombus in second place):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/trapezoid_classification.jpg}\n\t\t\\caption{Trapezoid classification}\n\t\\end{figure}\n\tAlso, a common use is to retain a more restrictive definition, in order not to take into account these particular figures. We add in this case that the lengths of the two parallel sides are not equal (this allows students of high-school classes to avoid confusion arising from the existence of two names for the same object, such as rhombus and trapezium).\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere is a particular case of the trapezoid, the \"\\NewTerm{isosceles trapezoid}\\index{isosceles trapezoid}\", as we can see in the previous figure, whose two non-parallel sides are of the same length (we can add: as these two sides are not parallel, it is not a parallelogram).\n\t\\end{tcolorbox}\n\t\n\t\\subsubsection{Parallelogram}\\label{parallelogram}\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{parallelogram}\\index{parallelogram}\" is a special case of the quadrilateral, and a very important one (in the context of the analysis of shapes in physics), where the sides are parallel in pairs and the opposite angles are equal (opposite angles are congruent):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/parallelogram.jpg}\n\t\t\\caption{Parallelogram example}\n\t\\end{figure}\n\tThe rhombus is a particular case of the parallelogram, as we already know.\n\t\n\tThe perimeter of the parallelogram is obviously given by:\n\t$$P=2\cdot(a+b)$$\n\tFor the surface, as this is a special case of the trapezoid (with two equal bases $B=b$), it comes immediately (as we will use only this relation in the book, we will not prove other methods to calculate the surface):\n\t$$S=\frac{(b+b)\,h}{2}=b\cdot h$$\n\tGiven the above figure, it is clear that if we cut the parallelogram into two as below, we see that this is twice the same triangle:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/parallelogram_cutted.jpg}\n\t\t\\caption{Parallelogram cut in two}\n\t\\end{figure}\n\tThanks to this we will have a very useful property for the physical analysis of static forces or phasors in the studies of wave superposition (\\SeeChapter{see section Wave Mechanics page \\pageref{superposition wave principle}}).\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAs we already know, parallelograms are therefore special trapezoids (see the classification figure above for a refresher).\n\t\\end{tcolorbox}\t\n\tAlso let us recall that, as the sum of the angles of any plane triangle is equal to $\pi$ ($180^\circ$) as proved in the section of Euclidean Geometry page \\pageref{angle sum theorem}, and as a parallelogram is made of two identical triangles, the sum of the angles of a parallelogram is then equal to $2\pi$.\n\t
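As a small numerical illustration (ours, with hypothetical dimensions): a parallelogram with base $b=5$, lateral side $a=3$ and height $h=2$ has\n\t$$P=2\cdot(5+3)=16,\qquad S=5\cdot 2=10$$\n\t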
\\subsubsection{Hexagon}\t\n\tThe hexagon is a quite interesting case. Indeed, it is used for satellite signal tiling of the Earth, and also because the \"Honeycomb conjecture\" states that the hexagonal tiling is the best way to divide a surface into regions of equal area with the least total perimeter.\t\n\t\n\tThis structure exists naturally in the form of graphite, where each sheet of graphene resembles chicken wire, with strong covalent carbon bonds. Tubular graphene sheets have been synthesised; these are known as carbon nanotubes. They have many potential applications, due to their high tensile strength and electrical properties. Silicene is similar.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.3]{img/geometry/hexagons.jpg}\n\t\t\\caption[A regular hexagonal grid]{A regular hexagonal grid (source: Wikipedia)}\n\t\\end{figure}\n\tSome applications (the reader can also take a look at the European Extremely Large Telescope):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{minipage}{.5\\textwidth}\n\t\t  \\centering\n\t\t  \\includegraphics[width=7cm,height=7cm]{img/geometry/james_watt_space_telescope}\n\t\t  \\captionof{figure}{James Webb Space Telescope}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.5\\textwidth}\n\t\t  \\centering\n\t\t  \\includegraphics[width=7cm,height=7cm]{img/geometry/reflector_antenna.jpg}\n\t\t  \\captionof{figure}{TV broadcast reflector antenna}\n\t\t\\end{minipage}\n\t\\end{figure}\n\t\n\t\\textbf{Definition (\\#\\mydef):} In geometry, a hexagon is a $6$-sided polygon or $6$-gon. The total of the internal angles of any hexagon is $4\pi$. A \"\\NewTerm{regular hexagon}\\index{regular hexagon}\" is defined as a hexagon that is both equilateral and equiangular.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/hexagon.jpg}\n\t\t\\caption[Regular hexagon details]{Regular hexagon details (source: Wikipedia)}\n\t\\end{figure}\n\tFirst, the perimeter is obviously:\n\t$$P=6a$$\n\tThe surface of a regular hexagon of side length $a$ is obviously given by the sum of the surfaces of the $6$ equilateral triangles of side $a$ that compose it:\n\t$$S=6\cdot\frac{\sqrt{3}}{4}a^2$$\n\tTherefore:\n\t$$S=\frac{3\sqrt{3}}{2}a^2$$\n\t\n\tNow let us come back to our honeycomb conjecture!\n\t\n\tTo begin our search, we had three ideas: the structure of the honeycomb cells should be strong, stable and economical.\n\t\n\tThe cells are solid if they can be nested and leave no holes. We are therefore interested in tilings.\n\t\n\tIt can be considered intuitive that all cells must have the same form, and that they will be stable if their form is regular. So we will study the regular polygons.\n\t\n\tLet us recall that a regular polygon is a geometric figure that has all sides of the same length and all angles of equal measure.\n\t\n\tOf all the regular polygons, we have sought those that realize a tiling.\n\t\n\tThis problem amounts to asking how it is possible to assemble identical regular polygons around their vertices.\n\t\n\tFor our study, we needed to know two things:\n\t\\begin{enumerate}\n\t\t\\item The sum of the measures of the three angles of a triangle is equal to $\pi$\n\n\t\t\\item A full turn is equal to $2\pi$\n\t\\end{enumerate}\n\tLet us try with different shapes:\n\t\\begin{enumerate}\n\t\t\\item Equilateral triangles ($\pi/3$)\n\n\t\tThe three corners of an equilateral triangle are equal, and their measure is $\pi/3$ ($60^\circ$). And as $6\cdot (\pi/3) = 2\pi$, we can assemble six equilateral triangles around each vertex and thus realize a tiling:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{img/geometry/tiling_equilateral_triangle.jpg}\n\t\t\t\\caption[]{Equilateral Triangle Tiling}\n\t\t\\end{figure}\n\t\t\n\t\t\\item Squares ($2\pi/4$)\n\n\t\tThe corner angles of a square measure $\pi/2$. 
And as $4\\cdot (\\pi/2)=2\\pi$, we can assembled four square around the vertices and so achieve a tiling:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{img/geometry/tiling_equilateral_square.jpg}\n\t\t\t\\caption[]{Square Tiling}\n\t\t\\end{figure}\n\t\t\n\t\t\\item Regular pentagons ($2\\pi/5$)\n\t\t\n\t\tThe internal angles of a regular pentagon is equal to $2\\pi/5$ therefore the internal vertices angles are equal to $\\pi-2\\pi/5=3\\pi/5$.\n\t\t\n\t\tIf we assembled three of theme around a node this gives holes as $3\\times (3\\pi/5)<2\\pi$, and obviously add one more pentagon will not solve the problem!\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{img/geometry/tiling_equilateral_pentagon.jpg}\n\t\t\t\\caption[]{Pentagon Tiling}\n\t\t\\end{figure}\n\t\tWe therefore can not provide a tiling with pentagons.\n\t\t\n\t\t\\item Regular hexagons ($2\\pi/6$)\n\t\t\n\t\tThe internal angles of a regular pentagon is equal to $2\\pi/6$ therefore the internal vertices angles are equal to $\\pi-2\\pi/6=2\\pi/3$.\n\t\t\n\t\tAnd as $3\\cdot (2\\pi/3)=2\\pi$, we can assembled three regular hexagons around the vertices and so achieve a tiling:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{img/geometry/tiling_equilateral_hexagon.jpg}\n\t\t\t\\caption[]{Hexagon Tiling}\n\t\t\\end{figure}\n\t\t\n\t\t\\item Regular polygons with seven sides and more\n\t\t\n\t\tFrom polygon with seven sides and more, tiling is impossible because the internal angles are not integer dividers from $2\\pi$.\n\t\\end{enumerate}\n\t\n\tThe general relation to get the measurement of the angles of a regular polygon with $n$ sides is:\n\t\n\tTherefore, the only regular polygons that realize a tiling are equilateral triangles, squares and regular hexagons!!\n\t\n\tFinally, we thought that bees must use the least amount of wax to make their cells, while having the maximum space inside.\n\t\n\tTherefore the question is: Among the equilateral triangle, the square and the regular hexagon, which he has the greatest area for the smallest perimeter?\n\t\n\tIn other word we seek to maximize the ratio $A/P$ (or the smallest ratio $P/A$).\n\t\n\tSo we have for each shape using our previous results:\n\t\\begin{enumerate}\n\t\t\\item Equilateral triangle:\n\t\t\t\n\t\t\\item Square:\n\t\t\t\n\n\t\t\\item Regular hexagon:\n\t\t\t\n\t\\end{enumerate}\n\tAmong the equilateral triangle, the square and the regular hexagon it is therefore the regular hexagon which gives for the maximum ratio \"area on perimeter\".\n\t\n\tSo that the cell structure is solid, stable and economical, bees are incentive to build basic regular hexagons. We can see that this is what they do naturally (after a lot of trial and errors during their evolution)!\n\t\n\t\\subsubsection{Rhombus}\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{rhombus}\\index{rhombus}\" is a simple (non-self-intersecting) quadrilateral all of whose four sides have the same length. Another name is \"\\NewTerm{equilateral quadrilateral}\\index{equilateral quadrilateral}\", since equilateral means that all of its sides are equal in length. 
\\subsubsection{Rhombus}\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{rhombus}\\index{rhombus}\" is a simple (non-self-intersecting) quadrilateral all of whose four sides have the same length. Another name is \"\\NewTerm{equilateral quadrilateral}\\index{equilateral quadrilateral}\", since equilateral means that all of its sides are equal in length. The rhombus is often named a \"\\NewTerm{diamond}\\index{diamond}\", after the diamonds suit in playing cards, which resembles the projection of an octahedral diamond, though the former sometimes refers specifically to a rhombus with a $\pi/3$ angle and the latter sometimes refers specifically to a rhombus with a $\pi/4$ angle.\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/rhombus.jpg}\n\t\t\\caption{Example of rhombus}\n\t\\end{figure}\n\tCalculating the perimeter and the surface of the rhombus follows immediately from those of the trapezoid.\n\t\n\t\\pagebreak\n\t\\subsubsection{Circle}\n\tThere are several possible definitions of the circle. Let us look at least at two.\n\t\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A \"\\NewTerm{circle}\\index{circle}\" is a special case of a polygon with an infinite number of sides.\n\t\t\n\t\t\\item[D2.] A \"\\NewTerm{circle}\\index{circle}\" is a flat curve all of whose points are equidistant from a fixed point named the \"\\NewTerm{center}\\index{center}\".\n\t\\end{enumerate}\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tA circle may also be defined as a special ellipse in which the two foci are coincident and the eccentricity is $0$.\n\t\\end{tcolorbox}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/circle.jpg}\n\t\t\\caption{Example of circle}\n\t\\end{figure}\n\tWe will prove in the section of Theoretical Computing that the perimeter of a circle of radius $R$, and therefore of diameter $\varnothing =2R$, is given by:\n\t$$P=2\pi R=\pi\varnothing$$\n\tThe relation for the surface (of the corresponding \"disc\"\\label{surface of a disc}) can be obtained in two common ways:\n\t\\begin{enumerate}\n\t\t\\item By finding the antiderivative of the perimeter $P$ with respect to the radius, which gives us:\n\t\t$$S=\int_0^R 2\pi r\,dr=\pi R^2$$\n\t\t\n\t\t\\item The second method is more aesthetic and uses the parametric equation of the circle, trivially given by the orthogonal projections of the Cartesian coordinates (\\SeeChapter{see section Vector Calculus page \\pageref{polar coordinates}}):\n\t\t$$x=R\cos(\theta),\qquad y=R\sin(\theta)$$\n\t\tWe know that the area described by a function $f (x)$ is given by (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{definite integral}}):\n\t\t$$S=\int f(x)\,dx$$\n\t\tWe just have to substitute in this integral the parametrized variables:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tThe limits of integration are obviously $\theta=[0,2\pi]$, then we have:\n\t\t\n\t\tWe therefore also get by this method:\n\t\t$$S=\pi R^2$$\n\t\\end{enumerate}\n\tThe arc length $l$ cut out by an opening angle $\alpha$ on a circle of radius $R$ is obviously given by:\n\t\t$$l=\alpha R$$\n\t\tand the surface $S_t$ of the circular sector of opening angle $\alpha$ of a circle of radius $R$ identically by:\n\t\t$$S_t=\frac{\alpha R^2}{2}$$\n\t\tLet us now determine the surface $S_d$ of a cut part of the disc, that is to say:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/circle_cutted_part.jpg}\n\t\t\t\\caption{Surface $S_d$ of a cut part of the disc}\n\t\t\\end{figure}\n\t\tWe have, subtracting from the sector the isosceles triangle formed by the two radii and the chord:\n\t\t$$S_d=\frac{\alpha R^2}{2}-\frac{R^2\sin(\alpha)}{2}=\frac{R^2}{2}\left(\alpha-\sin(\alpha)\right)$$\n\t
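As a small numerical check (ours): for a circle of radius $R=2$ and an opening angle $\alpha=\pi/2$ (a quarter of the circle), the relations above give\n\t$$l=\alpha R=\pi\approx 3.14,\qquad S_t=\frac{\alpha R^2}{2}=\pi\approx 3.14$$\n\twhich is indeed one quarter of the perimeter $2\pi R=4\pi$ and of the surface $\pi R^2=4\pi$.\n\t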
\\subsubsection{Ellipse}\\label{ellipse}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{ellipse}\\index{ellipse}\" is a closed curve where every point $P$ is such that the sum of its distances from two fixed points named \"\\NewTerm{foci $F_1,F_2$}\\index{foci}\" is constant (as we see in detail in the section of Analytical Geometry, the ellipse can also be seen as an affine transformation of the circle).\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/ellipse_foci.jpg}\n\t\t\\caption{Example of ellipse with its visible foci and eccentricity $e$}\n\t\\end{figure}\n\tThe shape of an ellipse (how 'elongated' it is) is represented by its \"\\NewTerm{eccentricity $e$}\\index{eccentricity}\", which for an ellipse can be any number from $0$ (the limiting case of a circle) to arbitrarily close to, but less than, $1$.\n\t\n\tFor the mathematical developments that will follow we will rather use the notation of the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/ellipse_technical.jpg}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe reader must therefore not forget that the most important mathematical developments about the ellipse are in the section of Analytical Geometry.\n\t\\end{tcolorbox}\n\tTo start, let us introduce a small text relative to the calculation of the perimeter of the ellipse!\n\t\n\tGiven the parametric equation in Cartesian coordinates of an ellipse (\\SeeChapter{see section Vector Calculus page \\pageref{polar coordinates}}):\n\t$$x=a\cos(\theta),\qquad y=b\sin(\theta)$$\n\tThe distance between the center of the ellipse and its perimeter is then given by the Pythagorean theorem:\n\t$$r(\theta)=\sqrt{x^2+y^2}=\sqrt{a^2\cos^2(\theta)+b^2\sin^2(\theta)}$$\n\tAn arc element is then given by:\n\t$$ds=\sqrt{dx^2+dy^2}=\sqrt{a^2\sin^2(\theta)+b^2\cos^2(\theta)}\,d\theta$$\n\tThe perimeter of the ellipse is then given by the integral:\n\t$$P=\int_0^{2\pi}\sqrt{a^2\sin^2(\theta)+b^2\cos^2(\theta)}\,d\theta$$\n\tand then it starts to get tougher... This type of integral cannot easily be calculated using the usual primitives, integration by parts or changes of variables. This is what we name a \"\\NewTerm{second-order elliptic integral $J$}\\index{elliptic integral!second-order elliptic integral}\\label{elliptic integral ellipse perimeter}\" for $0<b<a$ (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{elliptic integrals}}):\n\t\n\tLong developments, which we present in the section of Differential and Integral Calculus, give for the perimeter the following limited series expansion:\n\t\n\tThe surface of the ellipse can be obtained in a very similar manner to that of the circle, and the calculations are surprisingly much simpler than those of its perimeter. Remember that the parametric equation of the ellipse is:\n\t$$x=a\cos(\theta),\qquad y=b\sin(\theta)$$\n\tWe know that the area described by a function $f(x)$ is given by:\n\t$$S=\int f(x)\,dx$$\n\tWe just have to substitute in this integral the parametrized variables:\n\t\n\tTherefore:\n\t\n\tThe limits of integration being obviously $\theta=[0,-2\pi]$, we have:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tBe careful, with this kind of calculation, with the order of the integration bounds. Indeed, if we had taken the bounds as $[0,2\pi]$ (instead of $[0,-2\pi]$), one must imagine that the integrated function browses the perimeter in the negative direction of the $x$-axis. 
So the integral would necessarily be negative.\n\t\\end{tcolorbox}\n\tWe therefore also have for this method:\n\t$$S=\pi a b$$\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} We assume as obvious (and therefore without proof) that the ellipse's center of gravity coincides with its center.\\\\\n\t\n\t\\textbf{R2.} We send the reader to the study of conics (\\SeeChapter{see section Analytical Geometry page \\pageref{conics}}) for the calculation of the area of an ellipse from its \"ellipse parameter\" and its \"eccentricity\" (everything is proved there).\n\t\\end{tcolorbox}\t\n\t\n\t\\paragraph{Ellipse section surface}\\mbox{}\\\\\\\\\t\t\t\nNow let us focus on a common question on Internet forums: what is the area of a section of an ellipse? To answer that question we start again from:\n\t\n\tbut we will write it in a slightly different way:\n\t\n\tAn element of surface in polar coordinates is given by:\n\t$$dS=r\,dr\,d\theta$$\n\tTherefore the area of a section has the expression:\n\t\n\tThus:\n\t\n\tBut using what we have seen in the section of Analytical Geometry we can write the latter as:\n\t\n\tTherefore:\n\t\n\tThus:\n\t\n\tTo continue, let us make a simple change of variables:\n\t\n\tPulling out $a'^2\cos^2(\theta)$ we get:\n\t\n\tThis gives us:\n\tNow we make a more serious change of variable:\n\t\n\tThus, according to the usual derivatives (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{usual derivatives}}):\n\t\n\tThen the integral becomes:\n\t\n\tWe know according to the usual integrals that (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{usual primitives}}):\n\t\n\tSo finally:\n\t\t\n\t\n\t\\pagebreak\n\t\\subsection{Usual Volumes}\n\tThere are several definitions of the concept of volume (a body bounded by a surface): one due to Euclid and another from the field of Topology (see the section of the same name).\n\t\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A \"\\NewTerm{volume}\\index{volume}\" is an entity that has a length, a width and a height.\n\t\t\n\t\t\\item[D2.] A \"\\NewTerm{volume}\\index{volume}\" is a topological 3-manifold.\n\t\\end{enumerate}\n\tThe surfaces that surround a body may be planar or curved:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/volumes.jpg}\n\t\t\\caption{Sample volumes bounded by a surface}\n\t\\end{figure}\n\tOn the left, the body is limited only by planar surfaces, in the middle by one and only one curved surface, and on the right by a curved surface and two planar surfaces.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe will focus initially only on the properties (area, volume, center of gravity, moment of inertia ...) of volumes embedded in Euclidean geometry.\n\t\\end{tcolorbox}\t\n\n\t\\pagebreak\n\t\\subsubsection{Polyhedron}\n\tThe study of polyhedra (especially Platonic polyhedra) is very important in physics (e.g. crystallography) and also in mathematics, because it gives a nice application of finite groups (\\SeeChapter{see section Set Algebra page \\pageref{finite group}}). It is therefore appropriate to carefully read what follows.\n\t\n\tFurthermore, the study of polyhedra is also a very educational and aesthetic way to see the implementation of several geometric, trigonometric and vector algebra theorems.\n\t\n\tIt should be noted first of all that the various polyhedra will deliberately not be presented equally in their study. 
Thus, we will focus only on given properties for some and not for others.\n\t\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A \"\\NewTerm{polyhedron}\\index{polyhedron}\" is a solid whose boundary is formed of planes or plane portions. The plane portions that enclose the polyhedron are the faces; each face, being limited by its intersections (edges) with the neighbouring faces, is a polygon. The sides of this polygon are the edges of the polyhedron. We name \"\\NewTerm{vertices}\\index{vertices}\" of a polyhedron the vertices of any of its faces.\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/polyhedront_vocabulary.jpg}\n\t\t\t\\caption{Polyhedron vocabulary}\n\t\t\\end{figure}\n\t\t\n\t\t\\item[D2.] A \"\\NewTerm{regular polygon}\\index{regular polygon}\" is a polygon whose sides and angles are all equal (this definition will be useful for regular polyhedra further below).\n\t\\end{enumerate}\n\t\n\t\\paragraph{Parallelepiped}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} In geometry, a \"\\NewTerm{parallelepiped}\\index{parallelepiped}\" is a three-dimensional figure formed by six parallelograms (the term rhomboid is also sometimes used with this meaning). By analogy, it relates to a parallelogram just as a cube relates to a square or as a cuboid to a rectangle.\n\t\n\tThree equivalent definitions of the parallelepiped are:\n\t\\begin{itemize}\n\t\t\\item A polyhedron with six faces (hexahedron), each of which is a parallelogram.\n\t\n\t\t\\item A hexahedron with three pairs of parallel faces.\n\t\t\n\t\t\\item A prism of which the base is a parallelogram.\n\t\\end{itemize}\n\tThe rectangular cuboid (six rectangular faces), the cube (six square faces), and the rhombohedron (six rhombus faces) are all specific cases of the parallelepiped (see further below for details about these polyhedra).\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/parallelepiped.jpg}\n\t\t\\caption{General parallelepiped}\n\t\\end{figure}\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{itemize}\n\t\t\\item[D1.] The parallelepiped is a \"\\NewTerm{cube}\\index{cube}\" if and only if all its edges have the same length and all its angles are right angles.\n\t\t\n\t\t\\item[D2.] The parallelepiped is a \"\\NewTerm{cuboid}\\index{cuboid}\" (rectangular parallelepiped) if and only if all its angles are right angles.\n\t\t\n\t\t\\item[D3.] The parallelepiped is a \"\\NewTerm{rhombohedron}\\index{rhombohedron}\" if and only if all its edges have the same length (its six faces are then rhombi).\n\t\t\n\t\\end{itemize}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cube_cuboid_rhomboid.jpg}\n\t\t\\caption{Cube, Cuboid and Rhombohedron}\n\t\\end{figure}\n\t\n\t\\pagebreak\n\t
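Let us also recall in passing (our addition, not proved here; the mixed product is treated in the section of Vector Calculus) that if the parallelepiped is spanned by three vectors $\vec{u},\vec{v},\vec{w}$, its volume is given in full generality by the absolute value of their mixed product:\n\t$$V=\left|(\vec{u}\times\vec{v})\cdot\vec{w}\right|$$\n\t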
The volume of the parallelepiped is simply obtained by using trigonometric properties. We can consider mainly two situations:\n\t\\begin{itemize}\n\t\t\\item We know $H,b,c,\alpha$, and the volume is obviously given by:\n\t\t\n\t\tObviously we then get for a cube, where $a=b=c$:\n\t\t\n\t\t\\item  If we know only $\alpha,\beta,\gamma$ and $H,W,L$:\n\t\t\n\t\\end{itemize}\n\tAbout its surface: it is simply the sum of the surfaces of each of its parallelograms, without anything special (we have already proved earlier how to calculate the surface of a parallelogram), and therefore we need to know all the deformation angles of each parallelogram.\n\t\n\t\\subparagraph{Moment of Inertia of a rectangular plate}\\label{moment of inertia of a rectangular plate}\\mbox{}\\\\\\\\\n\tNow let us calculate the moment of inertia of a plate (rectangular parallelepiped) of cross-sectional area $S$ whose axis of rotation is the $y$-axis:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/inertia_plate.jpg}\n\t\t\\caption[]{Search for the moment of inertia of a plate along the section}\n\t\\end{figure}\n\tA volume element of the rectangle (in gray) is given by:\n\t\n\tand (\\SeeChapter{see section Classical Mechanics page \\pageref{moment of inertia}}):\n\t\n\tUsing Steiner's theorem (\\SeeChapter{see section Classical Mechanics page \\pageref{steiner theorem}}) we can easily get the moment of inertia of this plate when the rotation axis is collinear with one of its borders:\n\t\n\tand let us now focus on the moment of inertia of the plate relative to the $z$-axis (perpendicular to $x$ and $y$ therefore); let us place the axes so that we have:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/inertia_plate_perpendicular.jpg}\n\t\t\\caption[]{Search for the moment of inertia of a plate along the perpendicular axis}\n\t\\end{figure}\n\tWe have:\n\t$$I_z=\int r^2\,dm$$\n\twhere $r$ is in the $x$ and $y$ plane.\n\t\n\tWith:\n\t$$r^2=x^2+y^2$$\n\tTherefore:\n\t$$I_z=\int x^2\,dm+\int y^2\,dm=I_x+I_y$$\n\tIf the plate is square of side $L$:\n\t\n\t\n\t\\pagebreak\n\t\\subparagraph{Moment of Inertia of a triangular plate}\\mbox{}\\\\\\\\\n\tWe will now prove that it is possible to calculate the moment of inertia of an equilateral triangle plate and of a right triangle plate, for an axis still passing through the centroid of the latter.\n\t\n\tThe method we propose here is direct and almost... brutal.... Indeed, we consider the moment of inertia always from the same axis, but for half the rectangle:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/triangle_inertia_rectangle_centroid.jpg}\n\t\\end{figure}\t\n\tFirst we now obviously have:\n\t\n\tBut we want the moment of inertia of this triangle through its centroid! That is to say:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/triangle_inertia_centroid.jpg}\n\t\\end{figure}\n\tTherefore, as the center of gravity is now placed at one third of the median starting from the rectangle's center of gravity (see proof previously), and making use of Steiner's theorem (\\SeeChapter{see section Classical Mechanics page \\pageref{steiner theorem}}), we get:\n\t\n\tIf $a=b=L$ we have the famous special case:\n\t\n\tIf we have $a=b=L$ then the rectangle has a height $h=L$; this is why we most often find the previous relation written as:\n\t\n\twhich is the moment of inertia of an equilateral triangle plate for the special case of an axis passing through the center of mass (gravity).\n\t
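As a small worked check of the repeated use of Steiner's theorem above (our own, assuming the standard result $I_G=\frac{1}{12}mL^2$ for a plate of width $L$ rotating about the central in-plane axis): shifting the axis to a border means a translation of $d=L/2$, therefore\n\t$$I=I_G+m\,d^2=\frac{1}{12}mL^2+m\left(\frac{L}{2}\right)^2=\frac{1}{3}mL^2$$\n\t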
For sure, as for other geometries, we can choose any other axis and the possibilities are limitless... But to end, we will calculate the moment of inertia of a right triangle plate of dimensions $a,b$ whose rotation axis is along the $y$-axis but translated by $b/2$ (that is to say: the rotation axis is collinear with the base of length $a$ of the right triangle plate):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/triangle_inertia_vertical.jpg}\n\t\\end{figure}\n\tRemember for this that we proved just above that the moment of inertia of a rectangular plate rotating about one of its borders is:\n\t\n\tApplying once again Steiner's theorem AND for half a plate (therefore for a right triangle plate), we get immediately:\n\t\n\tAnd if we put as usual $L:=h$, then we get another famous result:\n\t\n\tAs we can see, the moment of inertia in this configuration is the same as the one of a rectangular plate having the rotation axis through the centroid along the $y$-axis.\n\t\n\t\\paragraph{Pyramid}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{pyramid}\\index{pyramid}\" is a polyhedron whose base is a polygon and whose side faces are triangles connected at a single point named the \"\\NewTerm{apex}\\index{apex}\". The pyramid is not, in the general case, a regular polyhedron! A \"\\NewTerm{right pyramid}\\index{right pyramid}\" has its apex directly above the centroid of its base. Non-right pyramids are named \"\\NewTerm{oblique pyramids}\\index{oblique pyramid}\". A \"\\NewTerm{regular pyramid}\" has a regular polygon base and is usually implied to be a right pyramid. When unspecified, a pyramid is usually assumed to be a \"\\NewTerm{regular square pyramid}\\index{regular square pyramid}\".\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/pyramid.jpg}\n\t\t\\caption{Example of pyramid (irregular basis) but non-oblique}\n\t\\end{figure}\n\tConsider a surface $S(t)$ of the section of the pyramid with the plane $z=t$ (that is to say, for $t=0$ we have the surface of the base of the pyramid); then the volume $V$ we are looking for will be equal to:\n\t$$V=\int_0^h S(t)\,dt$$\n\tWe speak of a plane equation even though there is no defined basis for now. In fact, in the integral, $t$ varies between $0$ and $h$. This implies that we take a basis centered on $H$ (the projection of the apex of the pyramid on the base), with the axis therefore oriented towards $\overline{HO}$. The other two axes are chosen arbitrarily in the plane of the base of the pyramid.\n\t\n\tWe must now clarify what $S(t)$ is depending on $t$:\n\t\n\tLet $S(0)=S$ be the area of the base of the pyramid. The section of the pyramid by the plane of equation $z=t$ is deduced by the scaling of center $O$ and ratio $t/h$. So the integral is written:\n\t$$V=\int_0^h S\left(\frac{t}{h}\right)^2 dt$$\n\twhere the square of $t/h$ is due to the fact that each inner term of $S$ is the product of two terms (by the definition of a surface, which is the multiplication of two lengths), each of homothetic ratio $t/h$.\n\t\n\tThus we have:\n\t$$V=\frac{S}{h^2}\left[\frac{t^3}{3}\right]_0^h=\frac{1}{3}S\,h$$\n\t\n\t\\subparagraph{Moment of Inertia of a regular square pyramid}\\mbox{}\\\\\\\\\n\tWe have along the axis of symmetry of the regular square pyramid of height $h$ and side $a$:\n\t\n\tAs:\n\t\n\tWe have for the mass of the pyramid:\n\t$$m=\rho V=\rho\,\frac{a^2 h}{3}$$\n\tSo:\n\t\n\tTherefore:\n\t$$I=\frac{1}{10}m\,a^2$$\n\t
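As a quick numerical sanity check (our own, with hypothetical dimensions): a regular square pyramid with side $a=3$ and height $h=4$ has volume $V=\frac{1}{3}\cdot 3^2\cdot 4=12$, and taking a unit density $\rho=1$, hence $m=12$, the previous relation gives $I=\frac{1}{10}\cdot 12\cdot 3^2=10.8$.\n\t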
\\paragraph{Right Prism}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} The \"\\NewTerm{(right) prism}\\index{right prism}\" is a polyhedron whose bases are two equal polygons lying in parallel planes (they have the same surface!); the side faces are parallelograms. Therefore, the prism is not a regular polyhedron! The two parallel and equal faces are named the bases of the (right) prism.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/prism.jpg}\n\t\t\\caption{Example of right prism}\n\t\\end{figure}\n\tTo calculate the volume $V$ of a right prism, we simply multiply the area of its base $B$ by its height $h$:\n\t$$V=B\cdot h$$\n\tAs its base is a polygon — that is to say, it can be a triangle, a quadrilateral or a pentagon ... — the reader must know how to calculate these areas to calculate the volume of the right prism.\n\t\n\tA right prism with a triangular base is named a \"triangular prism\", a right prism with a rectangular base is named a... \"cuboid\", a right prism with a square base is named a... \"cube\", and a right prism with a circular base is named a... \"cylinder\"...\n\t\n\t\\paragraph{Regular Polyhedron}\\mbox{}\\\\\\\\\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A \"\\NewTerm{regular polyhedron}\\index{regular polyhedron}\" is made of identical regular faces. A common equivalent definition is that a regular polyhedron is such that its faces are congruent regular polygons which are assembled in the same way around each vertex.\n\t\t\n\t\t\\item[D2.] A \"\\NewTerm{convex polyhedron}\\index{convex polyhedron}\" is such that each point of a line segment joining any two points belongs to the (inside of the) polyhedron.\n\t\\end{enumerate}\n\tThere are nine regular polyhedra, of which five are convex and are known as the Platonic Solids. Sometimes only the Platonic solids are named regular polyhedra, and it is these that will interest us here.\n\t\n\t\\begin{theorem}\n\tWe will prove that there are only five convex regular polyhedra, which are therefore named the \"\\NewTerm{five Platonic solids}\\index{five Platonic solids}\" (the other columns of the table below will be proved and explained a little further). A regular polyhedron is identified by its \"\\NewTerm{Schläfli symbol}\\index{Schläfli symbol}\" of the form $(m,n)$, where $m$ is the number of sides of each face and $n$ the number of faces meeting at each vertex.\n\t\n\tFor the proof we will denote by $A$ the number of edges, $S$ the number of nodes (vertices) and $F$ the number of faces.\n\t\\end{theorem}\n\t\\begin{table}[H]\n\t\\begin{center}\n\t\t\\begin{tabular}{|>{\\centering\\arraybackslash}m{2.2cm} |>{\\centering\\arraybackslash}m{2.3cm} |>{\\centering\\arraybackslash}m{3.5cm} |>{\\centering\\arraybackslash}m{0.8cm} |>{\\centering\\arraybackslash}m{0.8cm} |>{\\centering\\arraybackslash}m{0.8cm} |>{\\centering\\arraybackslash}m{2.5cm} |}\n\t\t\t  \\hline\n\t\t\t  \\rowcolor[gray]{0.75}Name & Schläfli $(m,n)$ & Image & $S$ & $A$ & $F$ & $F-A+S$ \\\\ \\hline\n\t\t\t  Tetrahedron & $(3, 3)$ & \\includegraphics{img/geometry/platon_solid_tetrahedron.jpg} & $4$ & $6$ & $4$ & $2$\\\\ \\hline\n\t\t\t  Cube & $(4, 3)$  & \\includegraphics{img/geometry/platon_solid_cube.jpg} & $8$ & $12$ & $6$ & $2$ \\\\ \\hline\n\t\t\t  Octahedron & $(3, 4)$ & \\includegraphics{img/geometry/platon_solid_octahedron.jpg} & $6$ & $12$ & $8$ & $2$  \\\\ \\hline\n\t\t\t  Dodecahedron & $(5, 3)$ & \\includegraphics{img/geometry/platon_solid_dodecahedron.jpg} & $20$ & $30$ & $12$ & $2$  \\\\ \\hline\n\t\t\t  Icosahedron & $(3, 5)$ & \\includegraphics{img/geometry/platon_solid_icosahedron.jpg} & $12$ & $30$ & $20$ & $2$ \\\\\\hline\n\t\t\\end{tabular}\n\t\t\\end{center}\n\t\t\\caption{Five regular polyhedra (Platonic 
Solids)}\n\t\\end{table}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tA Platonic hydrocarbon is a hydrocarbon (molecule) whose structure matches one of the five Platonic solids, with carbon atoms replacing its vertices, carbon-carbon bonds replacing its edges, and hydrogen atoms as needed. As far as we know, not all Platonic solids have molecular hydrocarbon counterparts.\n\t\\end{tcolorbox}\n\t\\begin{dem}\n\tLet $m$ be the number of sides of each face of a regular polyhedron, $n$ the number of edges that meet at each vertex. We therefore have that each angle of any face is given by:\n\t$$2\beta=\frac{(m-2)\pi}{m}=\pi-\frac{2\pi}{m}$$\n\t\n\t\\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]\n\t\\bcbombe Caution!!! It is the angle $2\beta$ that defines the angle of a face and not just $\beta$!\n\t\\end{tcolorbox}\n\t\n\t\n\tWhich is deduced from the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/polyhedron_edges.jpg}\n\t\t\\caption{Angles between the edges of a polyhedron}\n\t\\end{figure}\n\twhere we have:\n\t\n\tand:\n\t\n\tBut the sum of the $n$ angles grouped around a vertex is smaller than the sum of $n$ angles which cut a plane into equal parts (we take this as intuitive)! Each of them is therefore less than:\n\t$$\frac{2\pi}{n}$$\n\ttherefore:\n\t$$\pi-\frac{2\pi}{m}<\frac{2\pi}{n}$$\n\thence:\n\t$$\frac{1}{m}+\frac{1}{n}>\frac{1}{2}$$\n\tThe numbers $m$ and $n$ are both at least equal to $3$ (the smallest polygon being a triangle). It follows that the only possible cases are:\n\t$$(m,n)\in\{(3,3),(3,4),(4,3),(3,5),(5,3)\}$$\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\n\tLet us now denote by $F$ the number of faces, $A$ the number of edges and $S$ the number of vertices (nodes). Let us recall that we have proved in the section of Graph Theory the \"\\NewTerm{Euler's formula}\\index{Euler's formula}\" (also named \"\\NewTerm{Descartes-Euler's theorem}\\index{Descartes-Euler's theorem}\" or \"\\NewTerm{polyhedral formula}\\index{polyhedral formula}\") such that:\t\n\t$$F-A+S=2$$\n\tand it is of course also valid for the flattening of a polyhedron in the plane (and therefore for a polyhedron in space).\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe representation as a flattened graph of a polyhedron is named a \"\\NewTerm{Schlegel diagram}\\index{Schlegel diagram}\" and is used in some fields of organic chemistry.\n\t\\begin{center}\n\t\t\\includegraphics{img/geometry/schlegel_diagram.jpg}\n\t\\end{center}\n\t\\end{tcolorbox}\t\n\tIn the case of regular polyhedra, each face has $m$ edges, so that $m\cdot F$ is the whole number of edges of all faces, and as each edge meets exactly two faces, we have the equality (take an example to be convinced if necessary!):\n\t$$m\cdot F=2A$$\n\tand as $n$ is the number of edges that meet at each vertex, and each edge connects two vertices, we also have:\n\t$$n\cdot S=2A$$\n\tTherefore:\n\t$$F=\frac{2A}{m},\qquad S=\frac{2A}{n}$$\n\tBy injecting into Euler's formula, we have then:\n\t$$\frac{2A}{m}-A+\frac{2A}{n}=2\quad\Leftrightarrow\quad \frac{1}{m}+\frac{1}{n}=\frac{1}{2}+\frac{1}{A}>\frac{1}{2}$$\n\tand we fall back on the inequality of the previous theorem.\n\t
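As a quick verification (our own): for the cube, $(m,n)=(4,3)$, so $\frac{1}{4}+\frac{1}{3}=\frac{7}{12}>\frac{1}{2}$ and\n\t$$A=\left(\frac{1}{m}+\frac{1}{n}-\frac{1}{2}\right)^{-1}=\left(\frac{7}{12}-\frac{6}{12}\right)^{-1}=12,\qquad F=\frac{2A}{m}=6,\qquad S=\frac{2A}{n}=8$$\n\tin agreement with the table, and indeed $F-A+S=6-12+8=2$.\n\t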
Let us now take up our calculation again:\n\t$$\frac{1}{m}+\frac{1}{n}=\frac{1}{2}+\frac{1}{A}$$\n\tfrom which we get:\n\t$$A=\frac{2mn}{2m+2n-mn},\qquad F=\frac{2A}{m}=\frac{4n}{2m+2n-mn},\qquad S=\frac{2A}{n}=\frac{4m}{2m+2n-mn}$$\n\tWe can now undertake the classification of regular polyhedra:\n\t\\begin{enumerate}\n\t\t\\item The tetrahedron $(m,n)=(3,3)$:\n\t\t$$A=6,\quad F=4,\quad S=4$$\n\t\t\n\t\t\\item The octahedron $(m,n)=(3,4)$:\n\t\t$$A=12,\quad F=8,\quad S=6$$\n\t\t\n\t\t\\item The hexahedron or cube $(m,n)=(4,3)$:\n\t\t$$A=12,\quad F=6,\quad S=8$$\n\t\t\n\t\t\\item The icosahedron $(m,n)=(3,5)$:\n\t\t$$A=30,\quad F=20,\quad S=12$$\n\t\t\n\t\t\\item The dodecahedron $(m,n)=(5,3)$:\n\t\t$$A=30,\quad F=12,\quad S=20$$\n\t\\end{enumerate}\n\tThis completes our classification.\n\t\n\tLet us now prove that any polyhedron composed only of pentagons and hexagons necessarily has exactly twelve pentagons (and therefore there is no constraint on the number of hexagons). This is a priori not very intuitive!\n\t\n\tThis proof will explain to us, for example, why the soccer ball and the fullerene molecule have twelve pentagons:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/pentagone_rule.jpg}\n\t\t\\caption{Known polyhedra made of $12$ pentagons}\n\t\\end{figure}\n\t\\begin{dem}\n\tFor the proof, we start from Euler's formula:\n\t$$F-A+S=2$$\n\tGiven $M$ the number of pentagons and $N$ the number of hexagons, we have the number of faces equal to:\n\t$$F=M+N$$\n\tIn addition, each vertex is shared by three polygons in the case of the soccer ball and of the fullerene. Therefore, the number of vertices is equal to:\n\t$$S=\frac{5M+6N}{3}$$\n\tEach edge is shared by two polygons. Therefore, the number of edges is:\n\t$$A=\frac{5M+6N}{2}$$\n\tThen, injecting into Euler's formula:\n\t$$(M+N)-\frac{5M+6N}{2}+\frac{5M+6N}{3}=\frac{M}{6}=2$$\n\tTherefore, to satisfy Euler's formula, $M$, the number of pentagons, must be equal to $12$.\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\n\t\\paragraph{Regular Tetrahedron}\\mbox{}\\\\\\\\\n\tA tetrahedron is a polyhedron composed of $4$ triangular faces, $6$ straight edges, and $4$ vertex corners. The tetrahedron is the simplest of all the ordinary convex polyhedra and the only one that has fewer than $5$ faces.\n\t\n\tThe tetrahedron is one kind of pyramid, which is a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron the base is a triangle (any of the four faces can be considered the base), so a tetrahedron is also known as a \"\\NewTerm{triangular pyramid}\\index{triangular pyramid}\".\n\t\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{regular tetrahedron}\\index{regular tetrahedron}\" is one in which all $4$ faces are equilateral triangles. It is one of the five regular Platonic solids, which have been known since antiquity.\n\t\n\tWe have shown that for the tetrahedron $F=4,m=3$, and it is relatively easy to guess that such a polyhedron is formed by $4$ identical equilateral triangles, as shown in the figure below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_tetrahedron.jpg}\n\t\t\\caption{Example of regular tetrahedron}\n\t\\end{figure}\n\tFor this purpose let us start by studying the following equilateral triangle:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/equilateral_triangle_for_tetrahedron_study.jpg}\n\t\t\\caption[]{Equilateral triangle to start with to calculate the volume of the tetrahedron}\n\t\\end{figure}\n\tIn this equilateral triangle, $a$ is the side, $h$ is the height. The perpendicular bisectors are $h$, $h'$, $h''$ of the respective sides $\overline{BC}$, $\overline{AB}$, $\overline{AC}$.\n\n\tThe segments $h$ and $h'$ intersect at a point $P$ (barycentre). 
By construction of the equilateral triangle, we have $\overline{PC}=\overline{PB}=\overline{PA}$ (just apply the Pythagorean theorem to prove it if you want).\n\t\n\tFurthermore, we have already proved in our study of the triangle that the barycentre of the latter is always at $2/3$ of the median (here also the height). As medians and bisectors are merged in the case of the equilateral triangle, we then have $\overline{AP}=2/3 \cdot h$.\n\t\n\tNow let us draw a straight line passing through the point $P$ and perpendicular to the plane in which the triangle is located. Given $D$ a point on this line, as we have $\overline{PB}=\overline{PC}=\overline{PA}=2/3\cdot h$, we will of course have $\overline{BD}=\overline{AD}=\overline{CD}$ (again, just apply the Pythagorean theorem!).\n\t\n\tWe only have to find a way such that $\overline{BD}=a$ and we will have the regular tetrahedron we wanted. We then calculate:\n\t$$\overline{BD}^2=\overline{PD}^2+\overline{PB}^2$$\n\tand therefore, imposing $\overline{BD}=a$ and using $\overline{PB}=\frac{2}{3}h=\frac{a}{\sqrt{3}}$:\n\t$$\overline{PD}^2=a^2-\frac{a^2}{3}$$\n\ttherefore\n\t$$\overline{PD}=a\sqrt{\frac{2}{3}}$$\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/tetrahedron_variables_summary.jpg}\n\t\t\\caption[]{Tetrahedron variables summary}\n\t\\end{figure}\n\tThe bisector $\overline{BD}$ (on the right figure) passing through $M$ intersects $H$ at a point O, which is nothing more than the center of the sphere circumscribed to the tetrahedron. Indeed, by construction, we have $\overline{\text{O}B}=\overline{\text{O}C}=\overline{\text{O}A}$ and the bisector gives us $\text{O}B=\text{O}D$.\n\t\n\tThe Thales theorem also gives us:\n\t\n\tand for the developments that follow we will put: $H=\overline{PD}$, $\overline{\text{O}D}=R$.\n\t\n\tNow let us calculate the total area. It will necessarily be given by the surface of one face multiplied by the number of faces, and as we have proved above how to calculate the area of a triangle, it comes immediately:\n\t$$S=4\cdot\frac{\sqrt{3}}{4}a^2=\sqrt{3}\,a^2$$\n\tFor the volume, it is also just as simple, since we proved earlier above that it is equal to that of a pyramid. It then comes immediately:\n\t$$V=\frac{1}{3}\,S_{base}\,H=\frac{1}{3}\cdot\frac{\sqrt{3}}{4}a^2\cdot a\sqrt{\frac{2}{3}}$$\n\tTherefore:\n\t$$V=\frac{\sqrt{2}}{12}a^3$$\n\tand:\n\t\n\t\n\t\\pagebreak\n\t\\paragraph{Regular hexahedron (cube)}\\mbox{}\\\\\\\\\n\tThe cube is the regular polyhedron that is probably the most familiar to us; it has $6$ faces and its construction does not need to be presented.\n\t\n\tThe cube is also a square parallelepiped, an equilateral cuboid and a right rhombohedron. It is a regular square prism in three orientations, and a trigonal trapezohedron in four orientations.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cube.jpg}\n\t\t\\caption{Example of regular hexahedron (cube)}\n\t\\end{figure}\n\tSince all sides are of length $a$, the surface is simply given by the sum of the surfaces of the $6$ faces. So:\n\t$$S=6a^2$$\n\tand the volume:\n\t$$V=a^3$$\n\t\n\t\\paragraph{Regular octahedron}\\mbox{}\\\\\\\\\n\tWe have shown that for the octahedron $F=8,m=3$, and it is relatively easy to guess that the regular octahedron is formed (by definition!!!) of $8$ identical equilateral triangles.\n\t\n\t A regular octahedron is a square bi-pyramid in any of three orthogonal orientations. It is also a triangular anti-prism in any of four orientations.\n\t\n\tTo build it, and to demonstrate that it is possible to build such a polyhedron, we put as before its side equal to $a$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_octahedron.jpg}\n\t\t\\caption{Example of regular octahedron }\n\t\\end{figure}\n\tThen we denote by O the point of intersection of the two diagonals. 
\\paragraph{Regular octahedron}\\mbox{}\\\\\\\\\n\tWe have shown that for the octahedron $F=8,m=3$ and it is relatively easy to guess that the regular octahedron is formed (by definition!!!) of $8$ identical equilateral triangles.\n\t\n\tA regular octahedron is a square bi-pyramid in any of three orthogonal orientations. It is also a triangular anti-prism in any of four orientations.\n\t\n\tTo build it, and to demonstrate that it is possible to build such a polyhedron, we put as before that its side is equal to $a$.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_octahedron.jpg}\n\t\t\\caption{Example of regular octahedron }\n\t\\end{figure}\n\tThen we denote by O the point of intersection of the two diagonals. We have therefore, by applying the Pythagorean theorem:\n\t\n\tand:\n\t\n\tOn the line perpendicular to the plane containing our square and passing through O, we add two vertices $E$, $F$ at a distance that we calculate as follows:\n\t\n\tfrom which we get:\n\t\n\tTherefore:\n\t\n\tOur polyhedron is indeed formed by $8$ identical equilateral triangles. At each vertex, $4$ edges and $4$ faces meet, which allows us to affirm that it is regular, and this ends our construction.\n\t\n\tThe surface of the regular octahedron is:\n\t\n\twhere $h$ is the height of the equilateral triangle of side $a$ that we have already calculated above. \n\n\tFor the volume, this is still based on that of the pyramid. So:\n\t\n\tAnd we will assume that it is obvious to the reader that our octahedron is inscribed in a sphere of radius $R$ whose center is the point $O$. For $R$, we have:\n\t\n\tLet us now show that we can build the regular icosahedron from the octahedron, and that the former therefore exists and is constructible.\n\n\tFor this purpose, we will first consider a reference frame of the octahedron placed at the origin O corresponding to its barycentre, as visible in the figure below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/octahedron.jpg}\n\t\t\\caption{Example of the octahedron}\n\t\\end{figure}\n\tWe then have:\n\t\n\tAfter this, let us consider the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/tetrahedron_equilateral_triangle_search.jpg}\n\t\t\\caption{Search for the position of the equilateral triangle}\n\t\\end{figure}\n\tIn the figure above, $A'$ is a point that starts from $A$ and arrives at $B$, $B'$ is a point that starts from $C$ and arrives at $B$, and finally $E'$ is a point that starts from $B$ and arrives at $E$. These three points move together at the same speed. If we follow these three points, which form a triangle $A'B'E'$, we feel intuitively that there is a position such that $A'B'E'$ is an equilateral triangle.\n\t\n\tLet us determine this position:\n\t\n\tand therefore:\n\t\n\tand we want:\n\t\n\tTherefore:\n\t\n\tThat is to say:\n\t\n\tWhich simplifies to:\n\t\n\tand as $0\\leq \\mu \\leq 1$, solving this polynomial of the second degree (\\SeeChapter{see section Calculus page \\pageref{second order polynomials}}), we get the only acceptable solution:\n\t\n\t$\\mu=\\dfrac{\\sqrt{5}-1}{2}$\n\t\n\twhere the reader may perhaps have noticed that this is the inverse of the golden ratio...\n\t\n\tAccording to the figure below, if we put:\n\t\n\tthen we fall back on the same value for $\\mu$ (the reader can check this himself, or we can on request add the detailed calculations here), and ditto for all other points:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_icosahedron_construction.jpg}\n\t\t\\caption{Construction of a regular icosahedron}\n\t\\end{figure}\n\tOur new polyhedron therefore has one face per face of the octahedron and one face per edge of the octahedron. We therefore have $20$ faces composed of identical equilateral triangles. In addition, $5$ edges and $5$ faces meet at each vertex. We then get a regular icosahedron!!!\n\t\n\tWe then have for the coordinates of each vertex (the reader should observe on the figure that the vertices are opposite in pairs):\n\t
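\n\tWith the standard normalization (an assumption on the scale only, not on the shape), these $12$ vertices can be taken as the cyclic permutations of $(0,\\pm 1,\\pm \\varphi)$, where $\\varphi$ is the golden ratio. A minimal Python sketch checking that each vertex then has exactly $5$ nearest neighbours, all at the same distance $2$ (edges of equal length):\n\\begin{verbatim}\nimport itertools, math\n\nphi = (1 + math.sqrt(5)) / 2\n# the 12 vertices: cyclic permutations of (0, +-1, +-phi)\nV = [p for s1 in (1, -1) for s2 in (1, -1)\n     for p in [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]]\n\nedges = [(u, v) for u, v in itertools.combinations(V, 2)\n         if math.isclose(math.dist(u, v), 2.0)]\nprint(len(V), len(edges))   # 12 vertices, 30 edges -> 5 edges per vertex\n\\end{verbatim}\n\t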
\\paragraph{Regular Icosahedron}\\mbox{}\\\\\\\\\n\tWe saw earlier how to build the regular icosahedron. Thus it exists!\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_icosahedron.jpg}\n\t\t\\caption{Representation of the regular icosahedron}\n\t\\end{figure}\n\tKnowing the coordinates of the different vertices, let us now calculate the surface and volume of the regular icosahedron.\n\t\n\tThe calculation of the surface is simple since it consists of $20$ equilateral triangles. We then have:\n\t\n\ttherefore:\n\t\n\tHence:\n\t\n\tTherefore:\n\t\n\tThe calculation of the volume is quite a bit more subtle! Let's go for it...\n\t\n\tThe icosahedron is built around the pentagon and the golden section, as we were able to notice in our previous study of the octahedron.\n\t\n\tIf ever the reader is not convinced, here is an additional figure where we see that each edge of the icosahedron is an edge of a pentagon ($AFECB$, $LGHJK$, $DAJKC$, $DEGHA$, $BJILC$, $FELIH$,...):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/pentagon_in_icosaedron.jpg}\n\t\t\\caption{Pentagons in the regular icosahedron}\n\t\\end{figure}\n\tUsing the method of the pyramids, we have $20$ equilateral triangles, each of which forms the base of a pyramid whose apex is at the origin $O$ of the icosahedron (the origin coinciding with the center of the corresponding circumscribed or inscribed sphere).\n\t\n\tTake for example the base $ABD$, with the intersection of the perpendicular bisectors located at the point $M$, as shown below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/icosahedron_pyramid.jpg}\n\t\t\\caption{Representation of one of the pyramids of the regular icosahedron}\n\t\\end{figure}\n\tAs we know, we proved that the volume of a pyramid is:\n\t\n\tThe base surface $b$ is in our situation that of the equilateral triangle $\\Delta ADB$, and the height $h$ is the segment $\\overline{\\text{O}M}$.\n\n\tIf we denote by $a$ the triangle side, then the surface is given by:\n\t\n\tTo find $h$, we know by construction of the point $M$ that the triangles $OMA$, $OMB$, $OMD$ are right triangles.\n\n\tLet us work arbitrarily with the triangle $OMD$. First, let us determine the length $\\overline{DM}$. We have proved in our study of the bisector of length $H$ of the equilateral triangle (\\SeeChapter{see section Euclidean Geometry page \\pageref{equilateral triangle}}) that $\\overline{DM}$ is then equal to:\n\t\n\tBut:\n\t\n\tSo finally:\n\t\n\tTo find $h$ we must find the length $\\overline{OD}=r$ in terms of the length of the edges $a$ of the icosahedron. For this, we must recognize one of the fundamental properties of the icosahedron.\n\n\tBefore going further, let us show a property of the pentagon below, with its diagonals $d$ and sides $c$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/parallelogram_inside_pentagon.jpg}\n\t\t\\caption{Parallelogram in a pentagon}\n\t\\end{figure}\n\t$BSEA$ is a parallelogram. Indeed, the diagonal $\\overline{BD}$ is parallel to the side $\\overline{AE}$ (for example, because both are perpendicular to the axis of symmetry passing through $\\overline{OC}$). Since $S$ is on $\\overline{BD}$, it follows that $\\overline{BS}$ and $\\overline{AE}$ are parallel. We show in the same manner that $\\overline{AB}$ and $\\overline{SE}$ are parallel.\n\t\n\tWe deduce that:\n\t\n\tand the same for $\\overline{CS}$:\n\t\n\tLet us continue... we have the equality $\\widehat{BAE}=\\widehat{CSD}$. Furthermore, as $CD$ and $BE$ are parallel, the triangles $SCD$ and $ABE$ are similar. Therefore, the ratios of their sides are preserved (Thales):\n\t\n\thence the relation:\n\t\n\tAfter some manipulations:\n\t\n\tIf $\\mu$ now designates the ratio $d/c$, the above relation becomes:\n\t\n\t$\\mu^2=\\mu+1$\n\t\n\tand $\\mu$ being strictly positive, we have already seen during our study of the octahedron that the only positive root is the golden ratio:\n\t\n\t$\\mu=\\dfrac{1+\\sqrt{5}}{2}$\n\t\n\tWe have just finished proving that a diagonal of a pentagon is equal to the golden ratio multiplied by the length of an edge of this same pentagon.\n\t
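\n\tThis can be checked numerically on an explicit regular pentagon (a minimal sketch of our own; the vertices on the unit circle are just a convenient parametrization): the ratio diagonal/side indeed equals $2\\cos(\\pi/5)=\\varphi$, the positive root of $x^2=x+1$:\n\\begin{verbatim}\nimport cmath, math\n\n# regular pentagon inscribed in the unit circle\nP = [cmath.exp(2j * math.pi * k / 5) for k in range(5)]\nside = abs(P[1] - P[0])\ndiag = abs(P[2] - P[0])\nphi = (1 + math.sqrt(5)) / 2\n\nprint(math.isclose(diag / side, phi))                        # True\nprint(math.isclose(phi**2, phi + 1))                         # root of x^2 = x+1\nprint(math.isclose(diag / side, 2 * math.cos(math.pi / 5)))  # = 2 cos(36 deg)\n\\end{verbatim}\n\t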
Thus we have in the pentagons $AFECB$ and $LGHJK$ of our icosahedron:\n\t\n\tLet us notice also the rectangle $FBGK$, whose barycentre (centroid) coincides with that of the icosahedron. Moreover, $\\overline{FK}$ and $\\overline{BG}$ represent by construction the diameter of the sphere circumscribed to the icosahedron, and therefore $\\overline{OF}=\\overline{OK}$ is the radius $r$ that we are looking for.\n\n\tWe have:\n\t\n\tTherefore:\n\t\n\thence:\n\t\n\tNow, we can calculate $h$:\n\t\n\tBut:\n\t\n\tas the golden ratio is a root of $x^2=x+1$.\n\n\tTherefore:\n\t\n\tFinally:\n\t\n\tand:\n\t\n\tTherefore the volume of one pyramid of the icosahedron is:\n\t\n\tAs there are $20$ pyramids:\n\t
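\n\tTo sanity-check this pyramid method numerically (a sketch of our own, assuming the classical closed forms $A=5\\sqrt{3}a^2$ and $V=\\frac{5}{12}(3+\\sqrt{5})a^3$):\n\\begin{verbatim}\nimport math\n\na = 1.0\nphi = (1 + math.sqrt(5)) / 2\nr = a / 2 * math.sqrt(1 + phi**2)   # circumradius: FK/2, FK^2 = a^2 + (phi a)^2\nDM = a / math.sqrt(3)               # distance from a face vertex to its centre M\nh = math.sqrt(r**2 - DM**2)         # height OM of one pyramid\nbase = math.sqrt(3) / 4 * a**2      # area of one equilateral face\n\nA = 20 * base\nV = 20 * base * h / 3\nprint(math.isclose(A, 5 * math.sqrt(3) * a**2))             # True\nprint(math.isclose(V, 5 / 12 * (3 + math.sqrt(5)) * a**3))  # True\n\\end{verbatim}\n\t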
\\paragraph{Regular Dodecahedron}\\mbox{}\\\\\\\\\n\tHaving failed to find in the literature an aesthetic and simple proof of the feasibility of the construction of the dodecahedron, we will do without one for now (it is possible to live without it...).\n\t\n\tLet us simply notice that the dodecahedron consists of $12$ pentagons and that its volume is comparable to that of a cube on each face of which we have placed a sort of little roof, which ultimately will give the pentagons (you can see below the cube in light grey with its $6$ roofs):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regular_dodecahedron.jpg}\n\t\t\\caption{Regular dodecahedron}\n\t\\end{figure}\n\tFor our study of the dodecahedron, we will only focus on determining its surface and volume.\n\t\n\tFor this, let us consider first the regular pentagon below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/regual_pentagone_for_study_of_dodecahedron.jpg}\n\t\t\\caption{Regular pentagon for the study of the volume of the regular dodecahedron}\n\t\\end{figure}\n\tWe will first have to determine the lengths $h$ and $b$.\n\n\tLet us first recall that we have already proved, during our study of the icosahedron, that the diagonal of a pentagon is connected to the length of its sides by the relation:\n\t\n\t$d=\\mu\\, c$\n\t\n\twhere $\\mu$ is the golden ratio. It then remains to determine $h$.\n\n\tIt is first obvious that $\\beta=2\\pi/10=\\pi/5=\\alpha/2$ and that:\n\t\n\tBut we have two missing pieces of information here: the angle and $c$. Let us begin by determining the value of the cosine without using a calculator (you will understand why...).\n\n\tWe have first, by the identity (\\SeeChapter{see section Trigonometry page \\pageref{remarkable trigonometric identities}}):\n\t\n\tthat:\n\t\n\tWhich can also be written:\n\t\n\tBut this can also be written, always using the same trigonometric identity:\n\t\n\tThus after simplification:\n\t\n\tBy a change of variable and rearranging the different terms:\n\t\n\tWe see that $-1$ and $1/2$ are two obvious roots, so we get (\\SeeChapter{see section Calculus page \\pageref{double root}}):\n\t\n\tWe just have to solve a simple equation of the second degree; the solution is trivial using the methods seen in the section Calculus and we get:\n\t\n\tBy taking the only feasible solution, we then have:\n\t\n\tSo we fall back on the golden ratio again! And this leads us directly to write:\n\t\n\tIt remains for us to determine $c$. We have:\n\t\n\tand as $\\mu^2-\\mu-1=0$ we have:\n\t\n\tand therefore:\n\t\n\thence:\n\t\n\tWe therefore have, for the calculation of the area of the dodecahedron, a surface composed of $12$ pentagons, each of which is composed of $5$ triangles of base $a$ and height $h$:\n\t\t\n\tTherefore:\n\t\n\tTo calculate the volume, we are going to use the trick mentioned at the beginning: that is to say, we first cut the dodecahedron into a parallelepiped of side:\n\t\n\t(since the side of the parallelepiped is a diagonal of the pentagon of side $s$) and into $6$ small roofs (which are clearly visible in the figure of the dodecahedron given previously).\n\t\n\tEach little roof, seen from two different views, has the following lengths (where for some edges we obviously fall back on those of the pentagons $s$ or on their diagonals $c$): \n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/dodecahedron_roofs.jpg}\n\t\t\\caption[]{Dodecahedron roofs}\n\t\\end{figure}\n\tFor each small roof, we treat the two ends separately by cutting them off and merging them together. Finally, we have two pieces to study: the main part of the roof, visible on the left in the figure below, and the secondary part of the roof on the right, which is nothing other than the merging of the two extremities of one small roof:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/dodecahedron_roofs_decomposition.jpg}\n\t\t\\caption[]{Dodecahedron roofs decomposition for volume calculation}\n\t\\end{figure}\n\tWe must therefore determine $x$, $l$ and $h$, since $c$ and $s$ are already known to us.\n\n\tFirst, we see trivially that:\n\t\n\tFrom the Pythagorean theorem, we then have:\n\t\n\tTherefore:\n\t\n\tHence:\n\t\n\tWe can now calculate the volume $V_p$ of the $6$ small roofs:\n\t\n\tTherefore, the total volume of the dodecahedron is finally the volume of the $6$ small roofs summed with the volume of the central parallelepiped:\n\t\n\tFinally:\n\t
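\n\tHere again a quick numerical cross-check (a sketch of our own; the closed forms $A=3\\sqrt{25+10\\sqrt{5}}\\,a^2$ and $V=\\frac{15+7\\sqrt{5}}{4}a^3$, with $a$ the pentagon side, are the classical ones):\n\\begin{verbatim}\nimport math\n\na = 1.0\n# area of a regular pentagon of side a: 5 a^2 / (4 tan(pi/5))\npentagon = 5 * a**2 / (4 * math.tan(math.pi / 5))\nA = 12 * pentagon\nV = (15 + 7 * math.sqrt(5)) / 4 * a**3   # classical closed form\n\nprint(math.isclose(A, 3 * math.sqrt(25 + 10 * math.sqrt(5)) * a**2))  # True\nphi = (1 + math.sqrt(5)) / 2\nprint(V > phi**3 * a**3)   # True: inner cube (side phi*a) plus the 6 roofs\n\\end{verbatim}\n\t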
\\subsubsection{Usual Solids of Revolution}\n\tLet us recall, as seen in the section of Analytical Geometry, that a "\\NewTerm{solid of revolution}\\index{solid of revolution}\\label{solid of revolution}" is a solid figure obtained by rotating a plane curve around some straight line (the axis) that lies in the same plane, and that, more formally, a surface of revolution is a surface obtained by rotating a plane curve (e.g. $y=f(x)$), named the "\\NewTerm{generatrix}\\index{generatrix}", around the $y$-axis (for example!). So we pass from a plane curve in $\\mathbb{R}^2$ to a surface in $\\mathbb{R}^3$.\n\t\n\tIndeed, many surfaces (including some that we have seen in the section of Analytical Geometry, such as the sphere, torus and cylinder) can be described by a primary generating form of smaller dimension and a rotation.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.85]{img/geometry/revolution_surface.jpg}\n\t\\end{figure}\n\tBefore going further, let us see a general method for determining the area of a body of revolution, that is to say the surface of the body generated by the rotation of a curve of finite length about an axis:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/body_of_revolution_idea.jpg}\n\t\\end{figure}\n\tFor this, when the curve is given by a function $y=f(x)$, we notice by Pythagoras (see figure below) that the element of length $\\mathrm{d}l$ satisfies (a relation that we have already met in other sections of this book):\n\t\n\tTherefore:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/body_of_revolution_formal_figure.jpg}\n\t\\end{figure}\n\tThus, the surface element generated by the rotation of the element of length $\\mathrm{d}l$ is given by:\n\t\n\tThe area of the surface of revolution generated by a continuously differentiable function $f:[a,b]\\to \\mathbb{R}$ is then given by the relation:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn the section Mechanics we will use another approach that can be used to calculate the surface (and volume) of bodies of revolution and that is named "Pappus's second centroid Theorem". In fact, as we will see, it is used in practice more to determine the centroid position than the surface (or volume) of a body of revolution.\n\t\\end{tcolorbox}\n\t
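\n\tTo illustrate this relation on a case where everything is known (a minimal numerical sketch of our own, not one of the book's worked examples), take $f(x)=\\sqrt{R^2-x^2}$ on $[-R,R]$: the integrand $2\\pi f\\sqrt{1+f'^2}$ simplifies to the constant $2\\pi R$, and the integral returns the surface $4\\pi R^2$ of the sphere of radius $R$:\n\\begin{verbatim}\nimport math\n\nR = 2.0\ndef f(x):  return math.sqrt(R**2 - x**2)\ndef fp(x): return -x / math.sqrt(R**2 - x**2)   # derivative f'(x)\n\n# midpoint Riemann sum of 2*pi*f(x)*sqrt(1 + f'(x)^2) on [-R, R]\nn = 100_000\ndx = 2 * R / n\nS = sum(2 * math.pi * f(x) * math.sqrt(1 + fp(x)**2)\n        for i in range(n) for x in [-R + (i + 0.5) * dx]) * dx\nprint(S, 4 * math.pi * R**2)   # both ~ 50.265\n\\end{verbatim}\n\t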
\\pagebreak\n\t\\paragraph{Cylinder}\\label{cylinder}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{cylinder}\\index{cylinder}" is a surface generated by a line that moves parallel to a fixed direction while meeting a fixed plane curve (a circle), the plane of which cuts the given direction.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cylinder_revolution.jpg}\n\t\t\\caption{Example of a cylinder}\n\t\\end{figure}\n\tThe volume of a cylinder of revolution of radius $r$ and height $h$ is calculated by the method of discs, knowing that the surface of a circle (disc) is $\\pi r^2$:\n\t\n\tTherefore:\n\t\n\tThe surface of a cylinder is simply the sum of the surfaces of the two discs and of the surface of the rolled-up rectangle of height $h$ and length $2\\pi r$:\n\t\n\tNow let us calculate the moment of inertia of a solid cylinder with respect to its vertical symmetry axis (revolution axis):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cylinder_inertia.jpg}\n\t\\end{figure}\n\tWe have:\n\t\n\tTherefore:\n\t\n\tGiven now $G$ the cylinder's center of gravity, the axis $G_z$ coincides with the cylinder's axis of revolution. The axes $G_y$ and $G_x$ play identical roles. The moments of inertia $J_{G_x}$ and $J_{G_y}$ with respect to these axes are equal and are written:\n\t\n\thence:\n\t\n\thence:\n\t\n\tThe first integral is in fact the moment of inertia of the cylinder relative to the axis $G_z$, and we know it is equal to:\n\t\n\tThe second integral is easily calculated by cutting the cylinder into slices of thickness $\\mathrm{d}z$ perpendicular to the axis $G_z$. The mass of the elementary slice is $\\mathrm{d}m=\\pi r^2\\rho\\,\\mathrm{d}z$, thus:\n\t\n\tThe moment of inertia of a cylinder relative to an axis perpendicular to its axis of revolution is thus written:\n\t\n\tIn many high-school problems $R\\ll h$, and we then find the latter relation in the form:\n\t\n\tNow let us find the moment of inertia of a cylinder rotating around an axis perpendicular to its revolution axis and at a distance $h$ from one of its ends. To calculate it, we consider that:\n\t\\begin{itemize}\n\t\t\\item The cylinder is cut into infinitely many infinitesimally thin slices\n\n\t\t\\item Each of these slices has a mass $\\mathrm{d}m$ and length $\\mathrm{d}x$\n\n\t\t\\item We need a variable over which to sum. E.g. in this problem we are summing from the left of the axis to the right of it; the variable is $x$.\n\t\\end{itemize}\n\tNow, we first write our formula for the calculation of the moment of inertia:\n\t\n\tRecall that we are using $x$ to sum. Hence, we have to force a $\\mathrm{d}x$ into the equation for the moment of inertia. Now, let us find an expression for $\\mathrm{d}m$. Since the rod is uniform, the mass varies linearly with distance, such that:\n\t\n\tUsing the relation for $\\mathrm{d}m$, we substitute it into the first relation. Hence, we have:\n\t\n\tNow we obviously know that:\n\t\n\tSubstituting $\\mathrm{d}J$ (and writing the appropriate limits):\n\t\n\twhere the lower limit is $-h$ because the left end of the cylinder is $h$ units away from the axis of rotation (we take right as positive). This is in fact the tricky part...\n\n\tSolving the integration, we have the moment of inertia of a uniform rigid cylinder for any perpendicular rotation axis:\n\t\n\tNow in the special case $h=0$ (used in many high-school problems to study a falling chimney) we find the famous relation that can be found in many textbooks:\n\t\n\tIf the rotation axis passes through the center of mass of the rod (remember the rod is uniform, hence $h=L/2$), we fall back on:\n\t\n\t\n\tNow let us also calculate the moment of inertia of a tube, or hollow cylinder of non-zero thickness (always provided in formula summary books). \n\t\n\tThe moment of inertia of a tube relative to its axis of revolution is a great classic of the treatment of the cylinder's moment of inertia. Thus, let us consider a tube of outer radius $r_e$ and inner radius $r_i$. As (\\SeeChapter{see section Classical Mechanics page \\pageref{sums of moments of inertia}}):\n\t\n\tthe moment of inertia of a tube can be seen as the moment of inertia of a cylinder of radius equal to the external radius of the tube, decreased by the moment of inertia of a cylinder of radius equal to the internal radius of the tube. So:\n\t\n\tand if $r_e\\cong r_i\\cong\\bar{r}$ (thin tube), then we get the classical relation available in many physics textbooks:\n\t
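\n\tA short symbolic sketch of this subtraction (our own cross-check, assuming the classical $J=\\frac{1}{2}\\rho\\pi h\\,r^4=\\frac{1}{2}Mr^2$ for a full cylinder of radius $r$): the tube inertia comes out as $\\frac{1}{2}M(r_e^2+r_i^2)$, which indeed tends to $M\\bar{r}^2$ for a thin tube:\n\\begin{verbatim}\nimport sympy as sp\n\nre, ri, h, rho = sp.symbols('r_e r_i h rho', positive=True)\nJ_full = lambda r: sp.Rational(1, 2) * rho * sp.pi * h * r**4\nJ_tube = J_full(re) - J_full(ri)            # outer cylinder minus inner one\nM_tube = rho * sp.pi * (re**2 - ri**2) * h  # mass of the tube\n\n# J_tube = (1/2) M (r_e^2 + r_i^2):\nprint(sp.simplify(2 * J_tube / M_tube))     # r_e**2 + r_i**2\n\\end{verbatim}\n\t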
\\pagebreak\n\t\\paragraph{Cone}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{cone}\\index{cone}" is a three-dimensional geometric shape that tapers smoothly from a flat base (frequently, though not necessarily, circular) to a point named the "\\NewTerm{apex}\\index{apex}" or "\\NewTerm{vertex}\\index{vertex}".\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/cone_shape.jpg}\n\t\t\\caption{Cone}\n\t\\end{figure}\n\tThe line passing through the points $(r,0)$ (edge of the base disc of the cone) and $(0,h)$ (apex of the cone) is obviously given by:\n\t\n\tIndeed, when $x=0$ we have $y=h$, and when $x=r$ we have $y=0$.\n\t\n\tThe rotation of this line around the $y$-axis gives the volume of the cone:\n\t\n\tFinally:\n\t\n\tTo calculate the lateral surface of a cone, we will parametrize the straight line that goes from the apex of the cone $(0,0)$ to $(r, h)$, so this is a different parametrization than for the volume (this makes it possible to simplify some calculations). Then we have:\n\t\n\tand therefore:\n\t\n\tTherefore, the total surface of the cone (base + lateral surface) is:\n\t\n\tFinally:\n\t\n\tLet us now calculate the moment of inertia of a cone with respect to its axis of revolution.\n\n\tFor these calculations we will use the moment of inertia of the cylinder $J_{z,\\text{cyl}}$ and consider the cone as a stack of infinitesimal cylinders.\n\t\n\tTherefore:\n\t
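\n\tA numerical sketch of these cone results (our own check, assuming the classical closed forms $V=\\frac{1}{3}\\pi r^2 h$, $S_{lat}=\\pi r\\sqrt{r^2+h^2}$ and $J_z=\\frac{3}{10}Mr^2$), obtained here by stacking thin discs exactly as in the text:\n\\begin{verbatim}\nimport math\n\nr, h, rho = 1.0, 2.0, 1.0\nn = 100_000\ndz = h / n\nV = J = 0.0\nfor i in range(n):\n    z = (i + 0.5) * dz\n    rz = r * (1 - z / h)               # disc radius at height z\n    dm = rho * math.pi * rz**2 * dz    # mass of the thin disc\n    V += math.pi * rz**2 * dz\n    J += dm * rz**2 / 2                # disc about its axis: (1/2) dm r^2\n\nM = rho * V\nprint(math.isclose(V, math.pi * r**2 * h / 3, rel_tol=1e-6))  # True\nprint(math.isclose(J, 3 / 10 * M * r**2, rel_tol=1e-6))       # True\nprint(math.pi * r * math.hypot(r, h))                         # lateral surface\n\\end{verbatim}\n\t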
\\pagebreak\n\t\\paragraph{Sphere}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{sphere}\\index{sphere}\\label{sphere}" is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball. Like a circle, which geometrically is a two-dimensional object, a sphere is defined mathematically as the set of points that are all at the same distance $r$ (radius) from a given point, but in three-dimensional space. The longest straight line through the ball, connecting two points of the sphere, passes through the center and its length is thus twice the radius (as for the circle); it is the diameter $\\varnothing$ of the sphere.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/sphere_shape.jpg}\n\t\t\\caption{Sphere}\n\t\\end{figure}\n\tAs the figure above is quite academic, here is a more friendly one if it can help:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.3]{img/geometry/sphere_grid.jpg}\n\t\t\\caption[Sphere Grid]{Sphere Grid (source: Wikipedia)}\n\t\\end{figure}\n\tObviously we can see a sphere of radius $R$ as a surface formed by rotating a semicircle around its diameter. The function describing a semicircle is (\\SeeChapter{see section Analytical Geometry page \\pageref{equation of a circle}}):\n\t\n\tThe sphere can be dissected into a sum of discs of thickness $\\Delta x$, the discs being perpendicular to the $x$-axis and of width $\\Delta x$ at the position $x_k$ (see figure below).\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/sphere_disc_decomposition.jpg}\n\t\t\\caption{Calculation of the volume of a sphere by a stack of discs}\n\t\\end{figure}\n\tThen we have:\n\t\n\tThe volume of a disc (cylinder) being given by (passing to the limit):\n\t\n\tand the radius $r_k$ being given by the function:\n\t\n\twe then have:\n\t\n\tBy integrating between $x=[-R,R]$, we then have:\n\t\n\tWe can also take the bounds as $x=[0,R]$; up to a factor $2$ it gives the same:\n\t\n\tFinally:\n\t\n\tThe expression of the surface being given by differentiation with respect to the element generating the surface, we get (it is a bit borderline to say this...)\\label{surface of a sphere}:\n\t\n\tThere is another, more rigorous way to approach both of these calculations (especially the last one...). Indeed, in the section of Differential and Integral Calculus we have introduced the concept of the Jacobian, which allows changing the coordinate system based on the integration variables we are working with (for a detailed definition the reader should refer to the section Differential and Integral Calculus page \\pageref{jacobian}):\n\t\n\tand we have proved (again in the section of Differential and Integral Calculus page \\pageref{jacobian spherical coordinates}) that the Jacobian in spherical coordinates is:\n\t\n\tSo, just as $\\mathrm{d}x\\mathrm{d}y\\mathrm{d}z$ is a differential element of volume, we can convert this element into spherical coordinates and make a differential element of volume of the sphere of radius $r$ appear verbatim. We then just have to integrate correctly over the whole sphere.\n\n\tTherefore we have:\n\t\n\tand for the surface (for which the radius is constant):\n\t\n\tIf necessary, we can find the element of surface geometrically rather than through the Jacobian, because the latter is not very educational in small classes...\n\t\n\tThen, remembering that in the section of Trigonometry we proved that the length of an element of circular arc is given by:\n\t\n\tit becomes quite easy to complete the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/element_of_surface_of_sphere.jpg}\n\t\t\\caption{Representation of a surface element in spherical coordinates}\n\t\\end{figure}\n\tand then we see immediately that (the most important surface element for us in this book so far, for the sections treating Physics and Engineering!)\\label{infinitesimal element of a surface of a sphere}:\n\t\n\twhich is obviously more fun...\n\n\tLet us now calculate the moment of inertia of a homogeneous solid ball of mass $M$ and density $\\rho$. For this, the ball having maximal symmetry, it is more convenient to first calculate the polar moment of inertia (\\SeeChapter{see section Classical Mechanics page \\pageref{polar moment of inertia}}), and then determine the axial moment of inertia using the first result:\n\t\n\twhere we used the fact that:\n\t\n\tAs $J_x,J_y,J_z$ are equal by the symmetry of the ball, it follows that\\label{inertia momentum ball}:\n\t
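\n\tThe disc stacking and the symmetry argument can be condensed into a few lines of numerical check (a sketch of our own, assuming the classical results $V=\\frac{4}{3}\\pi R^3$, $S=4\\pi R^2$ and $J=\\frac{2}{5}MR^2$):\n\\begin{verbatim}\nimport math\n\nR, rho = 1.0, 1.0\nn = 100_000\ndx = 2 * R / n\nV = J = 0.0\nfor i in range(n):\n    x = -R + (i + 0.5) * dx\n    r2 = R**2 - x**2                  # squared radius of the disc at x\n    dm = rho * math.pi * r2 * dx\n    V += math.pi * r2 * dx\n    J += dm * r2 / 2                  # disc about the x-axis: (1/2) dm r^2\n\nM = rho * V\nprint(math.isclose(V, 4 / 3 * math.pi * R**3, rel_tol=1e-6))  # True\nprint(math.isclose(J, 2 / 5 * M * R**2, rel_tol=1e-6))        # True\nprint(4 * math.pi * R**2)                                     # surface\n\\end{verbatim}\n\t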
\\subparagraph{Sphere packing}\\mbox{}\\\\\\\\\n\tIn geometry, a sphere packing is an arrangement of non-overlapping spheres within a containing space. The spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. A typical sphere packing problem is to find an arrangement in which the spheres fill as much of the space as possible. The proportion of space filled by the spheres is named the density of the arrangement. \n\t\n\tMany problems in the chemical and physical sciences can be related to sphere packing problems where more than one size of sphere is available. Here there is a choice between separating the spheres into regions of close-packed equal spheres, or combining the multiple sizes of spheres into a compound or interstitial packing (when many sizes of spheres - or a distribution - are available, the problem quickly becomes intractable).\n\t\n\tFor the very special case of equal spheres in three dimensions, the densest packing uses approximately $74\\%$ of the volume. \n\t\n\t\\begin{dem}\n\tLet us check this for the two famous cases:\n\t\\begin{itemize}\n\t\t\\item FCC (face-centered cubic) structure:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/sphere_packing_fcc.jpg}\n\t\t\\end{figure}\n\t\tFirst, as the diagonal of a face of the cube is equal to four times a sphere radius ($4R$), we see using the Pythagorean theorem that:\n\t\t\n\t\t\n\t\tTherefore, denoting $V_s$ the volume of the spheres and $V_e$ the volume of the cube:\n\t\t\n\t\n\t\t\\item HCP (hexagonal close packed):\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/sphere_packing_hcp.jpg}\n\t\t\\end{figure}\n\t\tFirst we see obviously that:\n\t\t\n\t\tTherefore:\n\t\t\n\t\\end{itemize}\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tCarl Friedrich Gauss proved in 1831, in a non-trivial and non-formal way, that these packings have the highest density amongst all possible lattice packings (we won't provide the proof here as it is quite inelegant from our personal and subjective point of view).\n\t\n\tIn 1611 Johannes Kepler conjectured that this is the maximum possible density amongst both regular and irregular arrangements; this became known as the Kepler conjecture. In 1998, Thomas Callister Hales, following the approach suggested by L\u00e1szl\u00f3 Fejes T\u00f3th in 1953, announced a proof of the Kepler conjecture (for a good presentation of the proof see \\cite{hales2005proof}). Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees said that they were "$99\\%$ certain" of the correctness of Hales' proof. On 10 August 2014 Hales announced the completion of a formal proof using automated proof checking, removing any doubt.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSome other lattice packings are often found in physical systems. These include the cubic lattice with a density of $\\frac{\\pi}{6} \\cong 0.5236$, the hexagonal lattice with a density of $\\frac{\\pi}{3 \\sqrt{3}} \\cong 0.6046$, the tetrahedral lattice with a density of $\\frac{\\pi \\sqrt{3}}{16} \\cong 0.3401$, and the loosest possible lattice packing at a density of $0.0555$.\n\t\\end{tcolorbox}\n\t\n\tFinally, notice that many people like to compute how many times the Earth can be contained inside Jupiter or inside the Sun. Dividing the volume of the Sun, for example, by that of the Earth obviously gives a value ($\\sim 1,300,000$) that assumes that the Earth can be cut into infinitesimal volumes. But if we consider that we can't cut the Earth into infinitesimal volumes (sphere packing problem), then the number is approximately $\\sim 932,884$, a value to compare with $1,300,000\\cdot 74.05\\%=962,624$.\n\t
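\n\tA minimal numerical sketch of the FCC computation used above (relying on the standard facts that the face diagonal of the cubic cell equals $4R$ and that the cell contains $4$ spheres: $8$ eighths at the corners and $6$ halves on the faces):\n\\begin{verbatim}\nimport math\n\nR = 1.0\na = 4 * R / math.sqrt(2)                # face diagonal: a*sqrt(2) = 4R\nn_spheres = 8 * (1 / 8) + 6 * (1 / 2)   # 4 spheres per cubic cell\nVs = n_spheres * 4 / 3 * math.pi * R**3\nVe = a**3\n\nprint(Vs / Ve)                          # 0.7404804896...\nprint(math.pi / (3 * math.sqrt(2)))     # same closed form: pi/(3*sqrt(2))\n\\end{verbatim}\n\t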
\\pagebreak\n\t\\paragraph{Torus}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{torus}\\index{torus}" is the surface generated by rotating a circle $c$ of "\\NewTerm{minor radius}\\index{torus!minor radius}" $r$ around a straight line located in its plane at a distance $R$, named the "\\NewTerm{major radius}\\index{torus!major radius}", from its center, but not passing through its center. In other words, it is a surface of revolution generated by revolving a circle in three-dimensional space about an axis coplanar with the circle and that does not touch the circle:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/torus.jpg}\n\t\t\\caption{Torus}\n\t\\end{figure}\n\tWe see obviously in the figure above that: $a-b=2R$.\n\t\n\tTo calculate the volume we could obviously use Pappus's centroid theorem, as proved in the section of Classical Mechanics. But the latter method is not very elegant and quite "physics" oriented. We will therefore look for a more purely mathematical approach.\n\t\n\tGiven the equation of a semicircle of center $(0, c)$ (\\SeeChapter{see section Analytical Geometry page \\pageref{centered offset ellipse}}):\n\t\n\tIn order to write $y$ as a function of $x$, let us isolate $y$ in this equation:\n\t\n\tThe circle is then made of the graphs of the following functions:\n\t\\begin{itemize} \n\t\t\\item Top semicircle:\n\t\t\n\n\t\t\\item Bottom semicircle:\n\t\t\n\t\\end{itemize}\n\tLet us now denote, by the usual convention in the study of the torus, $R:=c$.\n\t\n\tThe requested volume is the difference between the volumes generated by the rotation around the $x$-axis of the surfaces defined by the area between the graph of the circle function in question and the abscissa axis, for $x$ between $-r$ and $r$.\n\n\tBy applying the integration relation of solids of revolution we get:\n\t\n\tLet us calculate the latter integral by the classical substitution $x=r\\sin(t)$, thus:\n\t\n\tif $x=-r$:\n\t\n\tif $x=r$:\n\t\n\tTherefore:\n\t\n\tLet us linearise that expression again using the trigonometric identities proved in the section Trigonometry (Carnot's formula):\n\t\n\tSo the volume of a torus of minor radius $r$ and major radius $R$ is given by:\n\t\n\tand the surface (by derivation of the generating surface element... but there are many other elementary ways to get this result):\n\t\n\tThe moment of inertia of the torus in relation to its axis of revolution is calculated as follows.\n\n\tFirst we start from the torus volume, changing the notations a little bit:\n\t\n\tThe volumetric density of the torus is given by (mass over volume):\n\t\n\tIn cylindrical coordinates, we know that we have:\n\t\n\tTherefore:\n\t\n\tThe moment of inertia is given by:\n\t\n\tLet us put $s=r-b$; then:\n\t\\begin{enumerate}\n\t\t\\item The limits of integration become $-a$, $+a$, as we bring all integration points to the origin by putting $s=r-b$.\n\n\t\t\\item Trivially, since $r=s+b$, we have $\\mathrm{d}s=\\mathrm{d}r$.\n\t\\end{enumerate}\n\tWhich gives:\n\t\n\tAs we have proved in the section of Differential and Integral Calculus, the integral of an odd function between two symmetric bounds (here products of an even and an odd function) is zero, so the integrals:\n\t\n\tare equal to zero.\n\n\tThen we have to calculate:\n\t\n\tNow let us put $s=a\\sin(t)$ and therefore $\\mathrm{d}s=a\\cos(t)\\mathrm{d}t$. It then follows that:\n\t\n\tBut as:\n\t\n\tTherefore:\n\t\n\tTherefore (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{usual primitives}}):\n\t\n\tSo finally:\n\t
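\n\tA numerical sketch cross-checking the torus volume and surface obtained above (our own illustration, assuming the classical $V=2\\pi^2 R r^2$ and $S=4\\pi^2 R r$, which is also what Pappus's theorem gives in one line):\n\\begin{verbatim}\nimport math\n\nR, r = 3.0, 1.0   # major and minor radii\nn = 100_000\ndx = 2 * r / n\n# washers: pi*[(R + y)^2 - (R - y)^2] = 4*pi*R*y with y = sqrt(r^2 - x^2)\nV = sum(4 * math.pi * R * math.sqrt(r**2 - x**2)\n        for i in range(n) for x in [-r + (i + 0.5) * dx]) * dx\n\nprint(math.isclose(V, 2 * math.pi**2 * R * r**2, rel_tol=1e-4))  # True\nprint(4 * math.pi**2 * R * r)   # surface, e.g. by Pappus: (2 pi r)(2 pi R)\n\\end{verbatim}\n\t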
\\paragraph{Ellipsoid (spheroid)}\\mbox{}\\\\\\\\\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] An "\\NewTerm{ellipsoid}\\index{ellipsoid}" is a surface of the second degree of a three-dimensional Euclidean space. It is therefore one of the quadrics (\\SeeChapter{see section Analytical Geometry page \\pageref{quadrics}}), given by the equation:\n\t\t\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/ellipsoid.jpg}\n\t\t\t\\caption{Ellipsoid}\n\t\t\\end{figure}\n\n\t\t\\item[D2.] A "\\NewTerm{spheroid}\\index{spheroid}", or "\\NewTerm{ellipsoid of revolution}\\index{ellipsoid of revolution}", is a quadric surface obtained by rotating an ellipse about one of its principal axes; in other words, an ellipsoid with two equal semi-diameters.\n\t\t\n\tThe equation of a spheroid with $z$ as the symmetry axis is given by setting $a = b$:\n\t\n\n\tIf the ellipse is rotated about its major axis, the result is a "\\NewTerm{prolate (elongated) spheroid}\\index{prolate (elongated) spheroid}", like an American football or rugby ball. If the ellipse is rotated about its minor axis, the result is an "\\NewTerm{oblate (flattened) spheroid}\\index{oblate (flattened) spheroid}", like a lentil:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/spheroid.jpg}\n\t\t\t\\caption[Oblate spheroid on the left, prolate one on the right]{Oblate spheroid on the left, prolate one on the right (source: Wikipedia)}\n\t\t\\end{figure}\n\t\\end{enumerate}\n\n\tTo calculate the volume delimited by an ellipsoid, we take the equation that we have determined during our study of conics:\n\t\n\tThe section by a plane parallel to the plane $YZ$ and lying at a distance $x$ from the latter gives the ellipse:\n\t\n\tor:\n\t\n\twith semi-axes:\n\t\n\tBut as we proved earlier, the surface of an ellipse is equal to $\\pi b_1c_1$. Therefore:\n\t\n\tThe volume of the ellipsoid is therefore equal to:\n\t\n\tTherefore:\n\t\n\tIf $a=b=c=R$ we fall back on the volume of a sphere:\n\t\n\t\n\tLet us now calculate the surface and volume of an oblate spheroid. Remember that its Cartesian equation is:\n\t\n\tThe ellipticity of an oblate spheroid is defined by:\n\t\n\tThe surface area of an oblate spheroid can be computed as a surface of revolution about the $z$-axis,\n\t\n\twith the radius as a function of $z$ given by:\n\t\n\tTherefore:\n\t\n\tTherefore:\n\t\n\tIf we put:\n\t\n\twe can then write the integral as follows:\n\t\n\tAnd as we have seen in the section of Differential and Integral Calculus, and as $-\\arcsin(\\alpha)=\\arcsin(-\\alpha)$:\n\t\n\tTherefore:\n\t
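\n\tBefore turning to the moment of inertia, the cross-section argument used above for the volume translates directly into a short numerical check (a sketch of our own; the result $V=\\frac{4}{3}\\pi abc$ is the classical one, reducing to the sphere when $a=b=c=R$):\n\\begin{verbatim}\nimport math\n\na, b, c = 3.0, 2.0, 1.0\nn = 100_000\ndx = 2 * a / n\n# slice at abscissa x: ellipse of semi-axes b*s and c*s, s = sqrt(1 - x^2/a^2),\n# hence area pi*b*c*(1 - x^2/a^2)\nV = sum(math.pi * b * c * (1 - x**2 / a**2)\n        for i in range(n) for x in [-a + (i + 0.5) * dx]) * dx\n\nprint(math.isclose(V, 4 / 3 * math.pi * a * b * c, rel_tol=1e-6))  # True\n\\end{verbatim}\n\t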
The calculation of the moment of inertia of an ellipsoid is very important in astrophysics, since a large number of stars or planets rotating about their own axis are deformed at the equator by the centrifugal force and take, to a first approximation, this shape.\n\t\n\tFor an ellipsoid, define $C$ as the moment of inertia about the $c$-axis, $A$ as the moment of inertia about the $a$-axis, and $B$ as the moment of inertia about the $b$-axis.\n\n\tTo begin, let us consider the moment of inertia about the $c$-axis, which we will identify with the $z$-axis. Thus, in Cartesian coordinates, we have:\n\t\n\tBy making the following substitution, we implicitly turn the previous integral into an integral over a normalized ellipsoid:\n\t\n\twhich gives us for our integral (so we transform the volume $V$ of the ellipsoid into the volume $V'$ of a sphere):\n\t\n\tWe can now move from Cartesian coordinates to spherical coordinates (\\SeeChapter{see section Vector Calculus page \\pageref{spherical coordinates}}) without forgetting to use the Jacobian (\\SeeChapter{see section Differential and Integral Calculus page \\pageref{jacobian}}), which we have proved to be given in spherical coordinates by:\n\t\n\tTherefore (we use again the common primitives proved in the section of Differential and Integral Calculus):\n\t\n\tBy injecting, for the ellipsoid:\n\t\n\twe then get:\n\t\n\tand by symmetry, we get the following obvious results:\n\t\n\tThe inertia matrix (\\SeeChapter{see section Classical Mechanics page \\pageref{inertia matrix}}) is therefore:\n\t\n\t\n\t\\paragraph{Paraboloid}\\label{paraboloid}\\mbox{}\\\\\\\\\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{paraboloid}\\index{paraboloid}" is, as we already know, a quadric surface of a special kind. There are two kinds of paraboloids: elliptic and hyperbolic.\n\t\n\tThe elliptic paraboloid is shaped like an oval cup and can have a maximum or minimum point. In a suitable coordinate system with three axes $x$, $y$, and $z$, it can be represented by the equation:\n\t\n\twhere $a$ and $b$ are constants that dictate the level of curvature in the $xz$ and $yz$ planes respectively. This is an elliptic paraboloid which opens upward for $c > 0$ and downward for $c < 0$.\n\t\n\tThe latter relation is more commonly written as:\n\t\n\t\n\tThe hyperbolic paraboloid - not to be confused with a hyperboloid (\\SeeChapter{see section Analytical Geometry page \\pageref{hyperboloid}}) - is a doubly ruled surface shaped like a saddle. In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{minipage}{.45\\linewidth}\n\t\t  \\includegraphics[width=7cm,height=7cm]{img/geometry/paraboloid_of_revolution.jpg}\n\t\t  \\captionof{figure}{Paraboloid of revolution}\n\t\t  \\label{img1}\n\t\t\\end{minipage}\n\t\t\\hspace{.05\\linewidth}\n\t\t\\begin{minipage}{.45\\linewidth}\n\t\t  \\includegraphics[width=7cm,height=7cm]{img/geometry/hyperbolic_paraboloid.jpg}\n\t\t  \\captionof{figure}{Hyperbolic paraboloid}\n\t\t  \\label{img2}\n\t\t\\end{minipage}\n\t\\end{figure}\n\tWe will only calculate the volume of the paraboloid of revolution, as it is the only result that we need in the other chapters of this book at this date. For this purpose let us consider the following figure:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/paraboloid_of_revolution_volume.jpg}\n\t\\end{figure}\n\tgiven by:\n\t\n\tLooking at the figure above, we see that the parameters $a$ and $b$ depend on the value of $z$. So, as the surface of an ellipse is given by $\\mathrm{d}S=\\pi a b$, we have for every height of the paraboloid:\n\t\n\tSo in the $xz$ plane, as we have $y=0$, then:\n\t\n\tand likewise for the plane $yz$:\n\t\n\tAnd we know that when $z=h$ we must have $a(h)=a$, and when $z=0$ we must have $a(0)=0$ (same for $b$). So we have to write:\n\t\n\tTherefore:\n\t\n\tHence:\n\t
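\n\tNumerically (a sketch of our own: with $a(z)=a\\sqrt{z/h}$ and $b(z)=b\\sqrt{z/h}$ as found above, the classical result is $V=\\frac{1}{2}\\pi abh$, half the circumscribed cylinder):\n\\begin{verbatim}\nimport math\n\na, b, h = 2.0, 1.0, 3.0\nn = 100_000\ndz = h / n\n# stack of elliptic slices of area pi * a(z) * b(z) = pi*a*b*z/h\nV = sum(math.pi * (a * math.sqrt(z / h)) * (b * math.sqrt(z / h))\n        for i in range(n) for z in [(i + 0.5) * dz]) * dz\n\nprint(math.isclose(V, math.pi * a * b * h / 2, rel_tol=1e-6))  # True\n\\end{verbatim}\n\t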
\\paragraph{Gabriel's horn}\\mbox{}\\\\\\\\\n\t"\\NewTerm{Gabriel's horn}\\index{Gabriel's horn}" (also named "\\NewTerm{Torricelli's trumpet}\\index{Torricelli's trumpet}") is a geometric figure which has infinite surface area but finite volume. The name refers to the Abrahamic tradition identifying the Archangel Gabriel as the angel who blows the horn to announce Judgement Day, associating the divine, or infinite, with the finite. The properties of this figure were first studied by the Italian physicist and mathematician Evangelista Torricelli in the 17th century.\n\t\n\tGabriel's horn is formed by taking the graph of:\n\t\n\twith the domain $x\\geq 1$ and rotating it in three dimensions about the $x$-axis. With Maple 4.00b we can plot it using (notice that it has the same shape as a gravitational or electric potential):\n\t\n\t\\texttt{>plot3d(1/x, theta=0..2*Pi, x = 1..20, coords=cylindrical)}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/gabriel_horn.jpg}\n\t\t\\caption{Gabriel's horn with Maple 4.00b}\n\t\\end{figure}  \t\n\tThe discovery was made using Cavalieri's principle before the invention of calculus, but today calculus can be used to calculate the volume and surface area of the horn between $x = 1$ and $x = a$, where $a > 1$. Using integration (see Solid of revolution and Surface of revolution for details), it is possible to find the volume $V$ (using the method of discs again for the volume) and the surface area $S$:\n\t\n\t$a$ can be as large as required, but it can be seen from the equation that the volume of the part of the horn between $x = 1$ and $x = a$ will never exceed $\\pi$; however, it will get closer and closer to $\\pi$ as $a$ becomes larger. Mathematically, the volume approaches $\\pi$ as $a$ approaches infinity. Using the limit notation of calculus:\n\t\n\tThe surface area formula above gives a lower bound for the area as $2\\pi$ times the natural logarithm of $a$. There is no upper bound for the natural logarithm of $a$ as $a$ approaches infinity. That means, in this case, that the horn has an infinite surface area. That is to say:\n\t\n\tThe apparent paradox formed part of a dispute over the nature of infinity involving many of the key thinkers of the time, including Thomas Hobbes, John Wallis and Galileo Galilei.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSince the horn has finite volume but infinite surface area, there is an apparent paradox that the horn could be filled with a finite quantity of paint, and yet that paint would not be sufficient to coat its inner surface. The "paradox" is resolved by realizing that a finite amount of paint can in fact coat an infinite surface area - it simply needs to get thinner at a fast enough rate.\n\t\\end{tcolorbox}\n\t
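\n\tA short numerical sketch of both limits (our own illustration): the truncated volume $V(a)=\\pi(1-1/a)$ stays below $\\pi$, while the lower bound $2\\pi\\ln a$ of the surface grows without bound:\n\\begin{verbatim}\nimport math\n\nfor a in (10, 1_000, 100_000):\n    V = math.pi * (1 - 1 / a)          # volume of the horn from x=1 to x=a\n    S_low = 2 * math.pi * math.log(a)  # lower bound of the lateral surface\n    print(a, round(V, 6), round(S_low, 1))\n# V -> pi = 3.141593...  while the surface bound 2*pi*ln(a) -> infinity\n\\end{verbatim}\n\t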
The converse phenomenon of Gabriel's horn \u2013 a surface of revolution that has a finite surface area but an infinite volume \u2013 cannot occur (there is a proof, but it is not a very aesthetic one, so we will not present it here).\n\t\n\t\\paragraph{Wine Barrel with Circular Section}\\mbox{}\\\\\\\\\n\tNow let us look, for fun, at a volume well known to winemakers (and not only them!):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/wine_barrel.jpg}\n\t\t\\caption[]{Wine Barrel with circular section}\n\t\\end{figure}\n\tLet us consider that the lateral curvature of the barrel is a parabola:\n\t\n\tLet us put:\n\t\n\tGiven the way we have placed the $x, y$-axes, it is relatively easy to determine the coefficients of the polynomial. The coefficient $c$ is the simplest:\n\t\n\tWe also have:\n\t\n\tAlso:\n\t\n\tTherefore we have:\n\t\n\tThe radius of a horizontal section of ordinate $x$ is $r=y$ and its surface is therefore:\n\t\n\tor:\n\t\n\tLet us develop this:\n\t\n\tThe volume of liquid for a height $h$ will then be:\n\t\n\tTo calculate the inner surface of the wine barrel, we consider the outer curve given by a parabolic arc as shown below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/wine_barrel_vertical_section.jpg}\n\t\t\\caption[]{Vertical section of the wine barrel}\n\t\\end{figure}\n\tTo calculate the lateral area of the barrel, we must first determine the expression of the above parabola.\n\t\n\tLooking at the figure, we get:\n\t\n\twhich is a system of three equations with the unknowns $a,b,c$.\n\t\n\tAfter resolution, we get:\n\t\n\tThe side surface of the barrel, including the surface of the two discs at the ends, is then given by:\n\t\n\tBy doing the change of variable $u=2ax+b$ we get:\n\t\n\tThe latter integral can be calculated using the following relations (the second has been proved in the section of Differential and Integral Calculus; the first one needs to be detailed when we have the time...):\n\t\n\tWhere, as a reminder (\\SeeChapter{see section Trigonometry page \\pageref{hyperbolic trigonometry}}):\n\t\n\tWe will not go further because the resulting formula would be huge and not very interesting from our point of view (but on request we will do it).\t\n\t
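\n\tBefore the numerical application, here is a symbolic sketch of the volume computation (our own cross-check, assuming the parabolic profile $y(x)=R-\\frac{4(R-r)}{H^2}x^2$ for $x\\in[-H/2,H/2]$ and a full barrel, i.e. $h=H$; the dimensions are those used just below):\n\\begin{verbatim}\nimport sympy as sp\n\nx, r, R, H = sp.symbols('x r R H', positive=True)\ny = R - 4 * (R - r) / H**2 * x**2   # parabolic profile of the barrel\nV = sp.integrate(sp.pi * y**2, (x, -H / 2, H / 2))\n\nclosed_form = sp.pi * H * (8 * R**2 + 4 * R * r + 3 * r**2) / 15\nprint(sp.simplify(V - closed_form))            # 0: V = pi*H*(8R^2+4Rr+3r^2)/15\nprint(V.subs({r: 20, R: 30, H: 60}).evalf())   # ~ 135717 cm^3, i.e. ~ 135.7 l\n\\end{verbatim}\n\t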
Nevertheless, here is a numerical application. Suppose $r=20$ [cm], $R=30$ [cm], $H=60$ [cm].\n\t\n\tWe calculate:\n\t\n\tTherefore:\n\t\n\tThe first integral is equal to:\n\t\n\tThe second integral is equal to:\n\t\n\tSo finally we get:\n\t\n\t\n\t\n\t\\begin{flushright}\n\t\\begin{tabular}{l c}\n\t\\circled{80} & \\pbox{20cm}{\\score{4}{5} \\\\ {\\tiny 171 votes,  74.86\\%}} \n\t\\end{tabular} \n\t\\end{flushright}\n\t\n\t\\pagebreak\n\t%to force start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Graph Theory}\\label{graph theory}\n\n\\lettrine[lines=4]{\\color{BrickRed}T}he history of graph theory (also named the theory of "complex cellulars") may have started with the work of Euler in the 18th century; it has its origins in the study of certain problems, such as the bridges of K\u00f6nigsberg (the inhabitants of K\u00f6nigsberg wondered if it was possible, starting from any part of the city, to cross all the bridges without crossing the same bridge twice and return to their starting point), the knight's tour on the chessboard, the colouring problem and the shortest path between two points.\n\nGraph theory was then developed in various disciplines such as chemistry (isomers), biology, social sciences (transport networks, social networks), project management (critical path method), IT (network topology, computational complexity, transfer protocols), quantum physics, spectral clustering, etc. Since the early 20th century, it has been a full branch of mathematics, thanks to the works of K\u00f6nig, Menger, Cayley and then Berge and Erd\u00f6s. This branch of mathematics has seen a great resurgence of interest following the emergence of Internet social networks, whose connections between "friends" and "followers" are graphs whose topological and statistical analysis can teach us many things.\n\nIn general, a graph is used to represent the structure and the connections of a complex set by expressing the links between its components: communication networks, road networks, interactions of various animal species, electrical networks, etc.\n\nGraphs are therefore a way of thinking that allows us to model a wide variety of problems by reducing them to the study of vertices and edges.\n\nRecent work in graph theory is often done by computer, because of the importance of the algorithmic aspect of the field (see the beginning of the section of Theoretical Computing for some small examples).\n\nIndeed, the purpose is essentially to model problems. We express the problem in terms of graphs so that it reduces to a problem of graph theory that we usually know how to solve, because it falls within a class of known problems.\n\nThe solutions to graph problems can be easy and effective (the time required to process them computationally being reasonable, because it depends polynomially on the number of vertices of the graph) or difficult (the processing time is then exponential), in which case we use a heuristic, that is to say a search process for a solution (not necessarily the best one).\n\n\tIf graph theory has enjoyed quite a lot of interest since the 1980s, maybe it is because its elementary concepts do not require a considerable mathematical background. Actually, just going through the sections of Probability, Set Theory, Linear Algebra and Topology presented in this book is enough to already feel comfortable with the different definitions.\n\n\tWe will introduce the basic vocabulary of graph theory.
The terms used are those of the common language of Euclidean geometry (and unfortunately they are numerous...).\n\n\t\\pagebreak\n\t\\subsection{Type of Graphs and Structures}\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\\item[D1.] A "\\NewTerm{graph}\\index{graph}" (or "\\NewTerm{polygraph}\\index{polygraph}") $G$ is a pair $G(X,E)$ consisting of a non-empty and finite set $X$ (the vertices/nodes) and a set $E$ (the edges/links) of ordered pairs of elements of $X$ connected by a line segment, or, said another way (...), a part of the Cartesian product $X^2$ (\\SeeChapter{see section Set Theory page \\pageref{cartesian product}}).\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.75]{img/geometry/graph_vocabular.eps}\n\t\\end{figure}\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tA graph is often denoted in English $G = (V, E)$, where $E$ is the first letter of the word "edges" and $V$ the first letter of the word "vertices".\n\t\\end{tcolorbox}\t\n\t\n\t\\item[D2.] Let $(a_1, b_1)$ and $(a_2, b_2)$ be pairs. Then the characteristic (or defining) property of the "\\NewTerm{ordered pair}\\index{ordered pair}" is:\n\t\t\n\t\t$(a_1,b_1)=(a_2,b_2) \\Leftrightarrow a_1=a_2\\ \\text{and}\\ b_1=b_2$\n\t\t\n\t\tThe order in which the objects appear in the pair is significant: the ordered pair $(a, b)$ is different from the ordered pair $(b, a)$ unless $a = b$. \n\t\t \n\t\tIn graph theory it is useful to specify whether a graph $G$ is made of ordered pairs; in this way we can, if necessary, work with edge orientations to distinguish edges sharing the same pair of vertices but not having the same orientation.\n\t\n\t\\item[D3.] A "\\NewTerm{multigraph}\\index{multigraph}" is a graph $G$ which is permitted to have several edges sharing the same pair of vertices, as well as edges looping on a single vertex.\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.75]{img/geometry/multigraph.eps}\n\t\\caption{Example of a multigraph that will be used much later}\n\t\\end{figure}\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nThis is a very common case in Markov chain analysis (see the sections on Probabilities page \\pageref{markov chains} or Games and Decision Theory page \\pageref{markov decision process}) and, as we will see further below, the K\u00f6nigsberg bridge problem is a multigraph!\n\t\\end{tcolorbox}\t \n\n\t\\item[D4.] The elements of $X$ are the "\\NewTerm{vertices}\\index{vertices}" of the graph $G$, those of $E$ are the "\\NewTerm{edges}\\index{edges}" of the graph $G$ (indeed an edge - link - is composed of two vertices joined by a line segment, hence the reference to pairs of elements in the previous definition). The set of vertices of a graph $G$ is denoted by $G(X)$ and the set of edges by $G(E)$.\n\t\n\t\\item[D5.] The plane in which a graph is immersed is a "\\NewTerm{face $F$}\\index{face}", and each closed surface (area) in this plane defined by a closed path of edges is also a face $F$. \n\n\t
\\item[D6.] A graph is named a "\\NewTerm{planar graph}\\index{planar graph}" when we can represent it in a plane without intersecting edges.\n\t\n\tThe following are some examples of (connected) planar graphs:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/planar_graphs.jpg}\n\t\t\\caption{Example of planar graphs}\n\t\\end{figure}\n\n\\begin{theorem}\n\tNow let us show that if $F$ is the number of faces of a planar graph, $A$ the number of edges and $S$ the number of nodes, then we have, using an old traditional notation:\n\t\n$S-A+F=2$\n\t\nwhich is the relation known under the name "\\NewTerm{Euler's formula}\\index{Euler's formula}" or "\\NewTerm{Descartes-Euler's theorem}\\index{Descartes-Euler's theorem}" (proof after the example below) and which will be useful many times in this book (in this section and during our study of polyhedra in the section of Geometrical Shapes). So keep in mind that any graph that does not respect this relation cannot be a planar graph!\n\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tA graph with 2 faces (the light gray side is an infinite outside face), 4 vertices and 4 edges:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.4]{img/geometry/euler_theorem.eps}\n\t\t\\caption{Graph with 2 faces, 4 vertices and 4 edges}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\\end{theorem}\n\\begin{dem}\nWe prove this theorem by performing a recurrence (\\SeeChapter{see section Proof Theory page \\pageref{proof by recurrence}}) on the difference $A - S$ (this is the trick!).\n\nFirst, it is obvious that the theorem is true for:\n\t\n$A-S=-1$\n\t\nbecause in this case the graph is a tree, so it only has one face (the outer infinite face), thus $F=1$.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.4]{img/geometry/tree_one_edge_two_vertices.eps}\n\\caption{Tree with one edge and two vertices}\n\\end{figure}\nTherefore:\n\t\n$S-A+F=S-(S-1)+1=2$\n\t\nThen take a connected graph $G$ (see definition further below) containing at least one cycle (the figure below is an example of a graph with $3$ cycles, that is to say the two small inner ones and the global one):\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{img/geometry/graph_with_at_least_one_cycle.eps}\n\\caption{Graph with at least one cycle}\n\\end{figure}\n\nIf we remove one edge $e$ from this cycle, then we should recursively be able to apply the same relation to the graph $G-\\left\\lbrace e \\right\\rbrace$ if it is correct. Indeed, the graph without the edge $e$ will have $F-1$ faces (the two faces on either side of $e$ merge into one), $S$ vertices and $A-1$ edges, and thus, going down recursively to the simple tree (which had no cycles), we can use the relation proved above:\n\t\nIf we put the edge back in place, both the number of edges and the number of faces increase by one, and we can therefore write:\n\t\n\t\n\t\\begin{flushright}\n\t\t$\\blacksquare$  Q.E.D.\n\t\\end{flushright}\n\\end{dem}\n\tRegular polygons (\\SeeChapter{see section Geometric Shapes page \\pageref{regular polygon}}) with their diagonals drawn in can be viewed as graphs (with a vertex at each intersection):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/planar_graphs_regular_polygons.jpg}\n\t\t\\caption{Regular polygons seen as graphs}\n\t\\end{figure}\n\t If we make every intersection point a vertex, then these pictures automatically become connected, planar graphs!
If we know the number of intersection points and the number of segments, we can then use Euler's formula to find the number of regions (for example, drawing the two diagonals of a square results in $5$ intersections, $4$ regions, and $8$ segments. The $5$ diagonals of a regular pentagon make $10$ intersections, $11$ regions, and $20$ segments). \n\t \n\t \\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tLet us consider $K_5$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.4]{img/geometry/euler_formula_k5.jpg}\n\t\\end{figure}\n\tWe take Euler's formula:\n\t\n\t$S-A+F=2$\n\t\n\t$K_5$ has $5$ vertices and $10$ edges, so we get:\n\t\n\t$F=2-S+A=2-5+10=7$\n\t\n\twhich says that if the graph could be drawn without any edges crossing, there would be $F=7$ faces. And there is no way to do that with $K_5$.\n\t\\end{tcolorbox}\n\t \n\t\\item[D7.] Given $e=\\{x,y\\}$ an edge of the graph $G$, we say that the vertices $x, y$, which are the "ends" of the edge of $G$, are "\\NewTerm{adjacent}\\index{adjacent}" or "\\NewTerm{neighbours}\\index{neighbours}" in the graph $G$, and that the edge $e$ is "\\NewTerm{incident}\\index{incident}" to the vertices $x, y$.\n\t\n\t\\item[D8.] If two edges $e$ and $e'$ have one end in common, we say that they are "\\NewTerm{incidental edges}\\index{incidental edges}", otherwise that they are "\\NewTerm{independent edges}\\index{independent edges}".\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIf $e$ is an edge of a graph $G$, we denote by $G-e$ the subgraph (see definition just below) $G'=(X,E-\\left\\lbrace e \\right\\rbrace)$. If $X'$ is a subset of $X$, we denote by $G-X'$ the graph $G$ without the vertices $X'$.\n\t\\end{tcolorbox}\n\t\\item[D9.] What we name the "\\NewTerm{order of a graph $\\mathcal{O}$}\\index{order of a graph}" is the number of its vertices (nodes).\n\t\n\t\\item[D10.] A "\\NewTerm{complete graph}\\label{complete graph}" or "\\NewTerm{strongly connex graph}\\index{strongly connex graph}" is a simple graph in which every pair of distinct vertices is connected by a unique edge.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\t\\includegraphics[width=180pt,height=180pt]{img/geometry/euler_theorem.eps}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\t\\includegraphics[width=180pt,height=180pt]{img/geometry/graph_with_at_least_one_cycle.eps}\n\t\t\\end{subfigure}\t\t\n\t\t\\caption{Difference between a non-complete graph (left) and a complete graph (right)}\t\t\n\t\\end{figure}\n\t\n\t\\item[D11.] What we name the "\\NewTerm{size of a graph}\\index{size of a graph}" is the number of its edges.\n\t\n\tLet $G$ be a complete graph (and not a multigraph!) of order $n$; the set $E$ of edges must by definition be chosen as the subset of all pairs of elements of the set $X$, therefore it is a set of cardinal:\n\t\n\t$\\dfrac{n(n-1)}{2}$\n\t\n\tThis is a relatively trivial result, since each vertex is linked to all the other vertices except itself (hence the numerator), and we divide by two simply so as not to count neighbouring vertices twice (and they are all neighbours when we go through them all in a complete graph!).\n\t\n\tMore explicitly (if needed), $n(n-1)/2$ comes from a simple counting argument. Label the vertices $1,2,\\ldots,n$. The first vertex is now joined to $n-1$ other vertices.
	This result is illustrated by some examples further below.
	
	Consequently, there are (see section Probabilities on the arrangements of $n$ indistinguishable elements in pairs of two page \pageref{simple arrangements with repetitions}):
	\[2^{\frac{n(n-1)}{2}}\]
	possible choices for $E$, and so many graphs admitting $X$ as set of vertices. 
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	We take a triangular graph - with three points - as example, where we have $n(n-1)/2=3$ and therefore $2^{\frac{n(n-1)}{2}}=8$:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/subset_graphs.jpg}
		\caption{Example of graphs subsets}
	\end{figure}
	\end{tcolorbox}
	
	 Some of these graphs are, as we consider their vertices as indistinguishable, "\NewTerm{automorphic}\index{automorphic}" (see the definition of this term a little further below in this section).
	
	This result means that there are about $2$ million graphs on $7$ vertices and almost $4\cdot 10^{105}$ graphs on $27$ vertices - a number to be compared with the estimated $10^{80}$ atoms in the observable universe...
	
	\item[D12.] The "\NewTerm{neighbourhood}\index{neighbourhood of a vertex}" of a vertex is the set $V$ of all its neighbours.
	
	\item[D13.]  We name the "\NewTerm{degree}\index{degree of a vertex}" of a vertex, and denote by $D$, the number of its neighbours (the cardinal of $V$), which is also the number of edges that are incident to it (a degree-$0$ vertex being named an "\NewTerm{isolated vertex}\index{isolated vertex}" and a degree-$1$ vertex a "\NewTerm{pending vertex}\index{pending vertex}").
	
	Properties (without proof):
	\begin{enumerate}
		\item[P1.] The sum of the degrees of the vertices is equal to twice the number of edges.
		
		
		\item[P2.] In any graph, the number of vertices with odd degree is always even (handshaking lemma).
		
		Indeed, the sum of all the degrees is equal to twice the number of edges, hence even. Since the sum of the degrees of the vertices with even degree is even, the sum of the degrees of the vertices with odd degree must be even too. And if the sum of the degrees of vertices with odd degree is even, there must be an even number of those vertices.
	\end{enumerate}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	A "\NewTerm{regular graph}\index{regular graph}" is a graph in which all vertices have the same degree $k$. We say then that the graph is "\NewTerm{$k$-regular}\index{$k$-regular graph}".
	\end{tcolorbox}
	
	\item[D14.] We say that a graph $G'=(X',E')$ is a "\NewTerm{subgraph}\index{subgraph}" of a graph $G=(X,E)$ when $X' \subseteq X$ and $E' \subseteq E$.
We say that $G'=(X',E')$ is an "\NewTerm{induced subgraph}\index{induced subgraph}" of a graph $G=(X,E)$ provided $X'\subseteq X$ and $E'$ contains all edges of $E$ which are subsets of $X'$.
	
	If $G'$ is a subgraph of $G$ and $u$ and $w$ are vertices of $G'$, then by the definition of a subgraph, $u$ and $w$ are also vertices of $G$. However, if $u$ and $w$ are adjacent in $G$ (i.e., there is an edge of $G$ joining them), the definition of a subgraph does not require that the edge joining them in $G$ is also an edge of $G'$. If the subgraph $G'$ has the property that whenever two of its vertices are joined by an edge in $G$, this edge is also in $G'$, then we say that $G'$ is an induced subgraph. 
	
	Notice that every induced subgraph is also an ordinary subgraph, but not conversely. Think of a subgraph as the result of deleting some vertices and edges from the larger graph. For the subgraph to be an induced subgraph, we can still delete vertices, but now we only delete those edges that were incident to the deleted vertices.
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	Consider the graphs:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.7]{img/geometry/subset_graph_induced_graph.jpg}
		\caption{Example of subgraphs and induced graphs}
	\end{figure}
	Here both $G_2(X_2,E_2)$ and $G_3(X_3,E_3)$ are subgraphs of $G_1(X_1,E_1)$. But only $G_2$ is an induced subgraph. Every edge in $G_1$ that connects vertices in $G_2$ is also an edge in $G_2$. In $G_3$, the edge $\{a,b\}$ is in $E_1$ but not $E_3$, even though vertices $a$ and $b$ are in $X_3$.\\
	
	The graph $G_4(X_4,E_4)$ is not a subgraph of $G_1$, even though it looks like all we did is remove vertex $e$. The reason is that in $E_4$ we have the edge $\{c,f\}$ but this is not an element of $E_1$, so we don't have the required $E_4\subseteq E_1$.
	\end{tcolorbox}
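	The "keep all surviving edges" rule is mechanical, which makes it easy to state as code. Here is a minimal sketch (our own, on a hypothetical five-vertex graph, not the one in the figure) that extracts the subgraph induced by a chosen set of vertices:
\begin{verbatim}
def induced_subgraph(edges, kept_vertices):
    # keep every edge of the big graph whose two ends both survive
    kept = set(kept_vertices)
    return {e for e in edges if set(e) <= kept}

E1 = {("a","b"), ("b","c"), ("c","d"), ("d","a"), ("a","c")}
print(sorted(induced_subgraph(E1, {"a", "b", "c"})))
# [('a','b'), ('a','c'), ('b','c')]: only the edges touching d vanish
\end{verbatim}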
	\item[D15.] A "\NewTerm{covering subgraph}\index{covering}" of a graph $G=(X,E)$ is a subgraph $G'=(X,E')$, that is to say a subgraph that keeps all the vertices of $G$ and from which only edges may have been removed.
	
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/covering_graph.jpg}
		\caption[]{The graph on the right is a covering graph of the graph on the left}
	\end{figure}
	
	\item[D16.] For a graph of order $n$, there are two extreme cases for its edges: either the graph has no edges, or all possible edges connecting the vertices by pairs are present. In the latter case the graph is named a "\NewTerm{complete graph}\index{complete graph}".
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	Here are some complete graphs, for which we have indeed:
	\[\frac{n(n-1)}{2}\]
	edges. We notice that the first four graphs are planar (indeed you can see how it is possible, by moving a vertex in the plane, to transform the fourth graph $K_4$ into $K_4^{\prime}$ in a way such that there are no more intersections). The fifth graph $K_5$ is non-planar (we cannot find movements avoiding crossings):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/complete_graphs.jpg}
		\caption{Example of complete graphs}
	\end{figure}
	A complete graph is therefore a graph where each vertex is connected to every other. The complete graph of order $n$ is denoted by $K_n$. In this type of graph each vertex is of degree $n-1$.
	\end{tcolorbox}
	
	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	A nice case to be examined is the "Star of David":
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/complete_graph_david_star.jpg}
		\caption{Star of David}
	\end{figure}
	which by definition becomes a complete graph if and only if we join all the vertices between them (we then lose the geometry of the star but we get a graph $K_6$):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/complete_graph_david_star_corrected.jpg}
		\caption{Star of David completed}
	\end{figure}
	\end{tcolorbox}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	This result is interesting in the field of communication management in business projects and in IT security (see for the latter the section Cryptography). For example, if you manage a project with $10$ stakeholders (corresponding to $n$), then there are $n(n-1)/2$ possible communication channels (email or phone), that is to say $45$ (and in the field of IT security there would be $45$ symmetric encryption keys to generate). Hence the importance in project management of establishing clear communication rules (in the form of a graph) if one does not want to be flooded by emails or phone calls unnecessarily (and, in the field of security, the interest of implementing asymmetric systems). We will also encounter this result in the liquid drop model of the nucleus in the Nuclear Physics section of the chapter Atomistic.
	\end{tcolorbox}
	
	\item[D17.] A "\NewTerm{stable graph}\index{stable graph}" or "\NewTerm{independent set}\index{independent graph}" is a subgraph without edges, and a "\NewTerm{clique}\index{clique}" a complete subgraph.
	
	In other words, an independent vertex set of a graph $G$ is a subset of the vertices such that no two vertices in the subset are joined by an edge of $G$. 
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/stable_graph.jpg}
		\caption{Stable subgraph example (in blue)}
	\end{figure}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	A "\NewTerm{maximum independent}\index{maximum independent}" set is an independent set of largest possible size for a given graph $G$. The problem of finding such a set is an NP-hard optimization problem (\SeeChapter{see section Numerical Methods page \pageref{np completude}}). As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph; a brute-force sketch is shown just below.
	\end{tcolorbox}
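	To make the combinatorial explosion concrete, here is a brute-force Python sketch (our own illustration): it tries subsets from the largest downwards, so its running time grows exponentially with the number of vertices, which is precisely what NP-hardness leads us to expect:
\begin{verbatim}
from itertools import combinations

def maximum_independent_set(vertices, edges):
    # try subsets from largest to smallest: exponential, but certain
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            s = set(subset)
            if not any(set(e) <= s for e in edges):
                return s
    return set()

cycle5 = [("a","b"), ("b","c"), ("c","d"), ("d","e"), ("e","a")]
print(sorted(maximum_independent_set("abcde", cycle5)))  # ['a', 'c']
\end{verbatim}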
	\item[D18.] In a graph, it is natural to want to move from vertex to vertex following the edges. Such a walk through $n$ vertices is named a "\NewTerm{string $P_n$}\index{string (graph theory)}" or "\NewTerm{path}\index{path}":
	
	A path is a list $P_k=(x_1,...,x_k)$ of vertices such that there exists an edge in the graph between every pair of successive vertices: $\forall i=1,...,k-1\; (x_i,x_{i+1})\in E$. The path length corresponds to the number of edges traversed: $k-1$.
	
	A path is named a "\NewTerm{simple path}\index{simple path}" if each edge of the path is taken only once. Here for example is a simple path with $5$ vertices:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/simple_path.jpg}
		\caption{Simple path example}
	\end{figure}
	Thus, we also define a "\NewTerm{cycle}\index{cycle}":
	\[C_k=(x_1,...,x_k)\]
	as being a simple path ending at its starting point, that is with $x_1=x_k$. 
	
	\item[D19.] A "\NewTerm{simple cycle}\index{simple cycle}" is a cycle in which all the edges are different.
	
	\item[D20.] A "\NewTerm{directed graph}\index{directed graph}" or "\NewTerm{oriented graph}\index{oriented graph}\label{oriented graph}" is a graph whose edges have a direction and are therefore named "arcs" (as opposed to an undirected graph).
	
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} The terms "path" and "circuit" are normally reserved for directed graphs. For undirected graphs we mainly use the words "string" or "cycle". However, the formal definition is exactly the same in both cases; only the structure (directed graph or not) on which they are defined changes.\\
	
	\textbf{R2.} An undirected graph is a symmetrical directed graph. Indeed, if an arc connects the vertex $a$ to the vertex $b$ and another arc the vertex $b$ to the vertex $a$, then it is usual to trace only a single line between $a$ and $b$, which we call ... an edge.
	\end{tcolorbox}
	
	 \item[D21.] A path $P_k=(x_1,...,x_k)$ is named an "\NewTerm{elementary path}\index{elementary path}" if each of the vertices of the path is visited only once: $\forall i,j=1,...,k \quad i\neq j,x_i \neq x_j$. An elementary path is therefore a simple path without cycle.
	 
	 In a graph $G$ of order $n$ we have the following properties:
	 \begin{enumerate}
	 	\item[P1.] Every elementary path is of length at most $n-1$. Indeed, an elementary path visiting each vertex of the graph at most once, its length (number of edges) cannot exceed $n-1$.
	 	
	 	\item[P2.] The number of elementary paths in a graph is finite. Indeed, the number of paths of length $k$ $(k=0,1,...,n-1)$ is at most the number of arrangements of a sequence of $k+1$ distinguishable vertices chosen among $n$. There are therefore (\SeeChapter{see section Probabilities page \pageref{simple arrangements without repetitions}}):
	 	\[A_n^{k+1}=\frac{n!}{(n-k-1)!}\]
		possible paths.
		
		Elementary paths are the natural restriction of the concept of path. The question is whether we lose something by considering only elementary paths in a graph: can we always replace a path in a graph by an elementary path?
	 \end{enumerate}
	 \begin{lemma}
	 "\NewTerm{König's lemma}\index{König's lemma}" answers this question affirmatively: from any path we can extract an elementary sub-path.
	 
	 In other words: if there is a path between two vertices $x$ and $y$, then there exists an elementary path between $x$ and $y$.
	 \end{lemma}
	 \begin{dem}
	 The idea of the proof is to choose a particular path between $x$ and $y$ and show that it is elementary. Which path to choose? If a path has a circuit, this circuit is a detour on the road from $x$ to $y$. A good candidate to be an elementary path therefore appears to be a shortest path.
	 
	 Among all the paths connecting $x$ to $y$, let us choose a path:
	 \[P_k=(x_1,...,x_k) \quad \text{with } x_1=x,\; x_k=y\]
	 with the fewest possible edges. Suppose by contradiction that $P_k$ is not elementary.
There exists then in this path a vertex $z$ appearing at least two times along the path $P_k$.
	 
	 Let $i, j$ be the first two indices such that $x_i=z$ and $x_j=z$ (with $i<j$).
	  
	 To get the contradiction, we just delete the cycle between $x_i=z$ and $x_j=z$. So we have a new path:
	 \[P'=(x_1,...,x_i,x_{j+1},...,x_k)\]
	 which is a path connecting $x$ to $y$. Its length is strictly less than that of $P_k$: 
	 \[(k-1)-(j-i)<k-1\]
	 which contradicts our initial choice of a shortest path.
	 \begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	 \end{dem}
	 
	 \item[D22.] A graph $G$ is named a "\NewTerm{connected graph}\index{connected graph}" or "\NewTerm{connex graph}\index{connex graph}" if and only if there is at least one simple path between each pair of vertices (the path being therefore not necessarily direct: it can pass through one or more intermediate vertices).
	 
	 Accordingly, a "\NewTerm{connected component}\index{connected component}" $G'=(X',E')$ is a subgraph of $G$ in which any two vertices are connected to each other by paths, and which is connected to no additional vertex in the supergraph.
	 \begin{figure}[H]
		\centering
		\includegraphics[scale=0.75]{img/geometry/sug_connected_component.jpg}
		\caption[A graph with three connected components]{A graph with three connected components (source: Wikipedia)}
	\end{figure}
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} A graph with only one connected component is simply a... connected graph. \\
	
	\textbf{R2.} An isolated node (of degree $0$) always constitutes a connected component on its own.\\
	
	\textbf{R3.} The relation on the nodes "there is a path between ..." is an equivalence relation (reflexive, symmetric and transitive). The connected components of a graph correspond to the equivalence classes of this relation.
	\end{tcolorbox}
	We will take as intuitive that a connected graph of order $n$ always has at least $n-1$ edges.
	
	\item[D23.] A "\NewTerm{tree}\index{tree}" is a connected (undirected) graph with no cycles (acyclic), and a "\NewTerm{spanning tree}\index{spanning tree}" of a connected graph is a tree that contains all of its vertices. In a tree the number of edges is equal to the number of vertices $- 1$:
	 \begin{figure}[H]
		\centering
		\includegraphics{img/geometry/spanning_tree.jpg}
		\caption{Example of a spanning tree}
	\end{figure}
	 A spanning tree of a valuated graph whose total edge weight is the minimum possible while still reaching all the nodes is named a "\NewTerm{minimum spanning tree}\index{minimum spanning tree}":
	 \begin{figure}[H]
		\centering
		\includegraphics[scale=0.5]{img/geometry/minimum_spanning_tree.jpg}
		\caption[Example of a minimum spanning tree]{Example of a minimum spanning tree (source: Wikipedia)}
	\end{figure}
	If you imagine that every point is a city, and that a country does not for now have enough money for the maintenance of all the existing routes between the cities, the minimum spanning tree gives mathematically (without any other considerations) the roads to clean and repair so as to minimize costs while still connecting all the towns together.
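	Such a minimum spanning tree can be computed greedily; the classic approach is Kruskal's algorithm (not described in the text above, so the sketch below is our own illustration): sort the edges by weight and keep an edge exactly when it joins two components that are still separate.
\begin{verbatim}
def kruskal(vertices, weighted_edges):
    # greedy minimum spanning tree with a tiny union-find structure
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v
    tree = []
    for w, a, b in sorted(weighted_edges):  # scan by increasing weight
        ra, rb = find(a), find(b)
        if ra != rb:                        # edge joins two components
            parent[ra] = rb
            tree.append((a, b, w))
    return tree                             # |V| - 1 edges if connected

roads = [(1,"a","b"), (4,"a","c"), (2,"b","c"), (5,"c","d"), (3,"b","d")]
print(kruskal("abcd", roads))  # [('a','b',1), ('b','c',2), ('b','d',3)]
\end{verbatim}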
	\item[D24.] A "\NewTerm{valuated tree}\index{valuated tree}" or "\NewTerm{weighted graph}\index{weighted graph}" is a tree (respectively a graph) where the edges carry positive values (weights). The sum of all the values on the browsed edges is then named the "\NewTerm{cost tree value}\index{cost tree value}" (respectively the "\NewTerm{cost graph value}\index{cost graph value}").
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Valuated trees are used in many fields. These include computer networks, in which we seek to optimize the number of interconnections between machines to avoid duplication of data packets, or project management (see the example below).
	\end{tcolorbox}	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
		An excellent practical example of a connected, weighted and directed graph (abbreviated to a "\NewTerm{digraph}\index{digraph}") is the one used in project management to calculate the critical path (of a project without constraints). This is a graph representing the dependencies between the $n$ intermediate tasks required to complete a project, commonly named a "\NewTerm{Gantt chart}\index{Gantt chart}\label{gantt chart}" or a "\NewTerm{scheduling graph}\index{scheduling graph}". The length (weight) of each task is the value carried by the arcs leaving the corresponding node. Edges represent the sequencing of the tasks. We always add an initial and a final node (in the world of project management people rather talk about "milestones"...). The first node is connected by a zero-value arc to all nodes without predecessors, and all nodes without successors are connected to the final node. The resulting graph should obviously be acyclic.\\
		
		A "\NewTerm{critical path}\index{critical path}\label{critical path}" is a path of maximum length between the two extreme nodes (milestones). There may possibly be more than one of the same length. Any task located on a critical path cannot be delayed without affecting the total duration of the project. In other words, it has zero "\NewTerm{total slack}\index{total slack}" (we then also say that its earliest finish date is strictly equal to its latest finish date). Furthermore, we also define in project management the "\NewTerm{free slack}\index{free slack}", which indicates the time over which a task can slide without moving/impacting the successor task. The free slack is calculated as the difference between the earliest start of the successor task and the earliest start of the task itself summed with its duration.
By construction the free slack is always smaller than or equal to the total slack.\\
		
		Let us take for example a project that consists of the following tasks:
	\begin{table}[H]
	\begin{center}
		\begin{tabular}{|c|c|c|}
		  \hline
		  \rowcolor[gray]{0.75}Tasks & Predecessor Tasks & Duration \\ \hline
		  A & E & $3$ \\ \hline
		  B & K,C & $4$ \\ \hline
		  C & - & $3$ \\ \hline
		  D & E,J & $2$ \\ \hline
		  E & - & $2$ \\ \hline
		  F & G,L & $3$ \\ \hline
		  G & - & $4$ \\ \hline
		  H & A, M, R & $2$ \\ \hline
		  J & E & $2$ \\ \hline
		  K & C & $2$ \\ \hline
		  L & G & $5$ \\ \hline
		  M & C & $4$ \\ \hline
		  N & G & $3$ \\ \hline
		  R & J & $2$ \\ \hline
		\end{tabular}
	\end{center}
	\caption[]{Data for critical path example}
	\end{table}			
	The associated valued directed graph corresponding to this table, once the definition of the critical path is applied (using the earliest start dates), is the following:	
	\end{tcolorbox}
	
	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/pert_diagram.jpg}
		\caption{PERT (Program Evaluation Review Technique) diagram}
	\end{figure}
	We see well in this graph that the set of tasks $\left\lbrace \text{Start}, \text{G}, \text{L}, \text{F}, \text{Fin}\right\rbrace$ is critical.\\
	
	A great tool for working with such graphs is (among others) Microsoft Project, whose corresponding Gantt diagram for the example above is:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/gantt_example.jpg}
		\caption{Same graph but seen in Microsoft Project 97}
	\end{figure}
	\end{tcolorbox}
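	Here is a hedged sketch of the computation behind such a diagram (our own code, assuming the task table above, where we read the self-referencing task row as "D"): the earliest finish of a task is its duration plus the largest earliest finish among its predecessors, and the project length is the largest value overall.
\begin{verbatim}
import functools

# task -> (duration, predecessors), transcribed from the table above
tasks = {
    "A": (3, ["E"]),      "B": (4, ["K", "C"]),      "C": (3, []),
    "D": (2, ["E", "J"]), "E": (2, []),              "F": (3, ["G", "L"]),
    "G": (4, []),         "H": (2, ["A", "M", "R"]), "J": (2, ["E"]),
    "K": (2, ["C"]),      "L": (5, ["G"]),           "M": (4, ["C"]),
    "N": (3, ["G"]),      "R": (2, ["J"]),
}

@functools.lru_cache(maxsize=None)
def earliest_finish(task):
    duration, preds = tasks[task]
    return duration + max((earliest_finish(p) for p in preds), default=0)

print(max(earliest_finish(t) for t in tasks))  # 12, via Start-G-L-F-Fin
\end{verbatim}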
	\item[D25.] A "\NewTerm{cycle}\index{cycle}" is a simple path looping back on itself. A graph in which there are no cycles is known as "\NewTerm{acyclic}\index{acyclic}"\label{acyclic graph}.
	
	Non-connex acyclic graphs, made of trees, are an interesting class of graphs with remarkable properties and a name: "\NewTerm{forests}\index{forests}" (a term often used in computer networks).
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/graph_connex_cyclic.jpg}
		\caption{Example of a connex graph containing a cycle}
	\end{figure}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/graph_connex_acyclic.jpg}
		\caption{Example of a connex graph containing no cycles}
	\end{figure}
	Below is an example of a forest (composed of trees, each tree being connex but the whole forming an acyclic and non-connex graph):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/graph_forest.jpg}
		\caption{Example of a forest (3 connex graphs)}
	\end{figure}
	Thus we see that a forest is a graph whose components are trees. The vertices of degree $1$ are named the "\NewTerm{leaves}\index{leaves}" of the tree.
	
	Properties:
	\begin{enumerate}
		\item[P1.] If in a graph $G$ every vertex is of degree greater than or equal to $2$, then $G$ has at least one cycle (trivial).
		\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
		This simple property implies that a graph without cycles has at least one vertex of degree $0$ or $1$.
	\end{tcolorbox}	
		
		\item[P2.] An acyclic graph $G$ with $n$ vertices has at most $n-1$ edges, as we already know.
	\end{enumerate}

	\item[D26.] An "\NewTerm{Eulerian cycle}\index{Eulerian cycle}" is a cycle passing once and only once through each \underline{edge} of the graph and returning to the starting point (we will see further on the properties required of a graph for such a cycle to exist on it), and a graph having an Eulerian cycle is named an "\NewTerm{Eulerian graph}\index{Eulerian graph}".
	
	\item[D27.] A "\NewTerm{Hamiltonian cycle}\index{Hamiltonian cycle}\label{hamiltonian cycle}" is a simple cycle passing through all the \underline{vertices} of the graph once and only once, and a graph having a Hamiltonian cycle is named a "\NewTerm{Hamiltonian graph}\index{Hamiltonian graph}". For a Hamiltonian cycle to exist, the graph must be connex and must not have any isolated vertex.
	
	To better understand the difference between the two previous definitions, the reader can see in the figure below some graphs that are Eulerian and Hamiltonian, Eulerian but not Hamiltonian, and so on... (the four possibilities):
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.7]{img/geometry/euler_vs_hamilton.jpg}
		\caption{Example of Hamiltonian or Non-Hamiltonian graphs}
	\end{figure}
	It is appropriate now to open a parenthesis on one of the most famous problems in graph theory: the Königsberg bridges.
	
	Leonhard Euler liked to take a walk in his city of Königsberg. According to the legend, he especially liked to walk across the seven bridges that cross the river. With age coming (and mathematical knowledge too ...), he wondered whether, without sacrificing his walk, he could shorten its length by crossing each bridge only once. This problem, the existence of a chain passing once and only once through each edge, is probably one of the oldest known in graph theory (sadly, today only four of the bridges remain standing).
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/konigsberg.jpg}
		\caption{Antique map of the city of Königsberg}
	\end{figure}
	The river divides the city of Königsberg into four parts: $a, b, c, d$. Each bridge connects two such parts. We can then represent our problem by a graph with four vertices, wherein each edge represents one of the seven bridges of Königsberg. In this example the graph is not a simple graph:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/konigsberg_graph.jpg}
		\caption{Königsberg equivalent graph}
	\end{figure}
	The question here is whether this graph is Eulerian or not. If it is, we need to exhibit an Eulerian cycle, which does not seem easy. But what if it is not?
Euler gave a strong characterization of Eulerian graphs, given by the following statement:
	\begin{theorem}[Euler Theorem]
	A connected graph is Eulerian if and only if all its vertices have even degree (so there is an even number of edges arriving at each vertex, half of them being used to arrive at the vertex, the others to leave it). If we only ask for an Eulerian chain, not necessarily closed, at most two vertices may have odd degree (these two exceptions being the departure vertex and the arrival one).
	
	Specifically, for a connex graph:
	\begin{itemize}
		\item If the graph has no odd vertices, then it is Eulerian (and the chain is cyclic).
		
		\item The graph cannot have exactly one odd vertex, by the property (already stated above) that in a graph the number of vertices of odd order is always even.
		
		\item If the graph has two odd vertices, then these vertices are the ends of the Eulerian chain.
	\end{itemize}
	As a corollary we have that a graph with more than two odd vertices never has an Eulerian chain.
	\end{theorem}
	With this characterization (which we will prove just after), the vertices $a,b,c,d$ being of odd order, we know immediately that it is impossible to cross all the Königsberg bridges exactly once during a single walk.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/euler_konigsberg.jpg}
	\end{figure}
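	Euler's criterion is a pure degree count, so it is easy to test mechanically. The sketch below (our own illustration; the connectivity test is omitted for brevity) applies it to the seven Königsberg bridges:
\begin{verbatim}
from collections import Counter

def has_eulerian_cycle(edges):
    # Euler's criterion for a connected multigraph: every degree is even
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# the seven bridges between the four land parts a, b, c, d
bridges = [("a","b"), ("a","b"), ("a","c"), ("a","c"),
           ("a","d"), ("b","d"), ("c","d")]
print(has_eulerian_cycle(bridges))  # False: all four degrees are odd
\end{verbatim}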
	\pagebreak
	\begin{dem}
	The proof will be done in three steps:
	\begin{enumerate}
		\item Let us suppose that the graph $G$ has an Eulerian chain. Then there exists a chain $c$ browsing once and only once each edge (until here it's easy). Of course, in the case of an open chain, the vertices located at the ends of the chain are of odd order, and there are only two of them.
		
		\item Consider now a vertex $x$ and let us suppose this time that we have not merely an Eulerian chain but an Euler cycle. During the course of this cycle, each time we go through the vertex $x$, two of its edges are consumed: one to arrive at $x$ and one to leave it. The edges incident to $x$ are thus used in pairs, so the vertex $x$ is of even order; and since $x$ can be any vertex of the cycle, all the vertices are of even degree.
		
		\item For the converse, we proceed by induction:
		\begin{itemize}
			\item If $G$ is reduced to a single isolated vertex, it is obviously Eulerian. Otherwise all the vertices of $G$ are of degree greater than or equal to $2$. This implies that there is a cycle $\phi$ on $G$.
			
			\item Consider the partial graph $H$ consisting of the edges outside the cycle $\phi$. The vertices of $H$ are also of even degree, the cycle containing an even number of edges incident to each vertex. By induction each connex component $H_i$ of $H$ is an Eulerian graph, and so has an Eulerian cycle $\phi_i$. To rebuild an Euler cycle on $G$, we only need to merge the cycle $\phi$ with the different cycles $\phi_i$. For this, we travel the cycle $\phi$ from an arbitrary vertex; when we meet for the first time a vertex $x$ of $H_i$, we substitute for it the cycle $\phi_i$. The resulting cycle is an Euler cycle in $G$, the cycle $\phi$ and the cycles $\phi_i$ forming a partition of the edges.
			\begin{figure}[H]
				\centering
				\includegraphics{img/geometry/parties_eulerian_graph.jpg}
			\end{figure}
		\end{itemize}
	\end{enumerate}
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	This principle of decomposing a graph into connex parts and merging the corresponding cycles permits us to build a recursive algorithm for determining whether a graph is Eulerian or not.
	\end{tcolorbox}
	
	\item[D28.] Two graphs $G$ and $G'$ are "\NewTerm{complementary graphs}\index{complementary graphs}" when they satisfy the following conditions:
	\begin{enumerate}
		\item They have the same set of vertices.
	
		\item Two vertices $x, y$ are neighbours in $G'$ if and only if they are not neighbours in $G$.
	\end{enumerate}
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/complementary_graph.jpg}
		\caption{Example of complementary graph}
	\end{figure}
	A "\NewTerm{self-complementary}\index{self-complementary}" graph is a graph which is isomorphic to its complement. The simplest non-trivial self-complementary graphs are the 4-vertex path graph and the 5-vertex cycle graph.
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/self_complementary.jpg}
		\caption[Example of self-complementary graph]{Example of self-complementary graph (source: Wikipedia)}
	\end{figure}
	
	\item[D29.] A "\NewTerm{bipartite graph}\index{bipartite graph}" is a graph such that we can partition the set of all its vertices into two classes, of cardinal $p$ and $q$ respectively, so that each edge has one end in each of the two classes.
		\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
		\textbf{{\Large \ding{45}}Example:}\\\\
		Here is a representation of the classical complete bipartite graph $K_{3,3}$. It is the famous problem of the supply of three houses by three suppliers (water, electricity, gas) without any two service lines ("pipelines") being allowed to cross. We can see that this graph is non-planar.
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/biparti_graph.jpg}
			\caption{Example of a $K_{3,3}$ bipartite graph}
		\end{figure}
		\end{tcolorbox}
		\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
		The complete bipartite graph $K_{p,q}$ is a bipartite graph with $n=p+q$ vertices, configured in such a way that each vertex of a class is adjacent to all the vertices of the other class and only to those.
		\end{tcolorbox}	
		
		\item[D30.] Two graphs are "\NewTerm{isomorphic graphs}\index{isomorphic graphs}" if there exists a bijection $f$ from $X$ into $X'$ such that for any vertices $x$ and $y$ of $G$:
		\[\left\lbrace x,y \right\rbrace \in E \iff \left\lbrace f(x),f(y) \right\rbrace \in E'\]
		We also say that $f$ is an "\NewTerm{isomorphism of $G$ on $G'$}".
		
		More formally, an isomorphism of graphs $G_1$ and $G_2$ is a bijection $f:V(G_1)\mapsto V(G_2)$ that preserves adjacency. That is to say:
		\[\left\lbrace x,y \right\rbrace \in E(G_1) \iff \left\lbrace f(x),f(y) \right\rbrace \in E(G_2)\]
	
		For example, in the figure below we have three isomorphic graphs:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/isomorphic_graphs.jpg}
			\caption{Isomorphic graphs}
		\end{figure}
		\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]
		\bcbombe Caution!!! Sometimes we talk about graphs that are equivalent "up to a given isomorphism". This means, more clearly, that with the exception of a single violation among the set of all edges, the graphs are isomorphic.
		\end{tcolorbox}
		
		If there exists a bijection $f$ of $X$ into itself such that:
		\[\left\lbrace x,y \right\rbrace \in E \iff \left\lbrace f(x),f(y) \right\rbrace \in E\]
		then we say that $f$ is an "\NewTerm{automorphism of graph}\index{automorphism of graph}" of $G$ (by permutation of the vertices there are many possible examples ...).
		
		As an isomorphism in the case of graphs goes from one set to another of the same size $n$ (an $n$ to $n$ mapping), the number of different possible bijections is (\SeeChapter{see section Probabilities page \pageref{distinguishable elements}}):
		\[n!\]
		This means there is a maximum of $n!$ graphs which can be grouped in a same equivalence class. Consequently, the number $N$ of equivalence classes is at least (lower bound) the ratio of the total number of graphs to the cardinal of the greatest possible equivalence class (which does not necessarily exist!):
		\[N \geq \frac{2^{\frac{n(n-1)}{2}}}{n!}\]
		Using the gross majoration $n!<n^n$, we have:
		\[N > \frac{2^{\frac{n(n-1)}{2}}}{n^n}\]
		Therefore:
		\[\log_2(N) > \frac{n(n-1)}{2} - n\log_2(n)\]
		Given that $n\log_2(n)$ is negligible compared to $n^2$ when $n$ approaches infinity, $\log_2(N)$ thus admits a lower bound of order $n^2$.
	\end{enumerate}
	The reader should also keep in mind that some graphs are used more than others, and get special names:
	\begin{itemize}
		\item $K_n$: The complete graph on $n$ vertices
		\item $K_{m,n}$: The complete bipartite graph with sets of $m$ and $n$ vertices
		\item $C_n$: The cycle on $n$ vertices, just one big loop
		\item $P_n$: The path on $n$ vertices, just one long path
	\end{itemize}
	\begin{figure}[H]
		\centering
		\includegraphics[width=1.0\textwidth]{img/geometry/some_typical_graphs.jpg}
	\end{figure}
	
	\pagebreak
	\subsection{Graph Adjacency Matrix}\label{adjacency matrix}
	Formally, a graph is also a set on which we have defined a binary, anti-reflexive (no element is related to itself) and symmetric (if $x$ is related to $y$, then $y$ is related to $x$) relation. The graph structure may then seem particularly poor.
	
	But we can also associate to a graph $G$ with vertices $x_1,\ldots, x_n$ a square matrix $M$ of dimensions $n\times n$ named the "\NewTerm{adjacency matrix}\index{adjacency matrix}" of the graph, whose elements are $0$ or $1$.
Denoting by $m_{ij}$ the term located at the intersection of row number $i$ (representing the vertex $x_i$) and column $j$ (representing the vertex $x_j$), $M$ is defined as:
	\[m_{ij}=\begin{cases}1 & \text{if } \left\lbrace x_i,x_j \right\rbrace \in E\\ 0 & \text{otherwise}\end{cases}\]
	Let us recall that such a matrix (for an undirected graph) is said to be a "\NewTerm{symmetrical matrix}\index{symmetrical matrix}" (\SeeChapter{see section Linear Algebra page \pageref{symmetric matrix}}).
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	We also know that graphs can be represented by matrices in the context of the study of Markov chains in the field of probabilities (\SeeChapter{see section Probabilities page \pageref{markov chains}}).
	\end{tcolorbox}
	Let us see an example that is abstract but also easily applicable to many fields of industry, sociology and biology (there are also applications of the adjacency matrix in Text Mining, as you can see in the R companion book). Consider the following directed graph and observe that the corresponding relation is neither symmetric nor antireflexive:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/markov_graph.jpg}
		\caption{Example of Markov chain (neither antireflexive nor symmetric)}
	\end{figure}
	As we just said, we can represent this diagram (a connected, oriented graph) in the form of a table in which we denote by "$1$" a possible transition from the state mentioned at the top of the column to the state mentioned at the beginning of the row, and "$0$" otherwise:
	\begin{table}[H]
		\begin{center}
		\begin{tabular}{>{\columncolor[gray]{0.75}}c||c|c|c|c|c|c|c|}
	\hline
	\rowcolor[gray]{0.75}$\nearrow $ & E1 & E4 & E2 & E3 & E5 & E7 & E6 \\
	  \hline \hline
	E1. & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\ \hline
	E4. & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ & $0$\\ \hline
	E2. & $1$ & $1$ & $1$ & $1$ & $0$ & $0$ & $0$\\ \hline
	E3. & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ & $0$\\ \hline
	E5. & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$\\ \hline
	E7. & $0$ & $1$ & $1$ & $1$ & $0$ & $0$ & $0$\\ \hline
	E6. & $0$ & $1$ & $1$ & $1$ & $0$ & $0$ & $0$\\ \hline

		\end{tabular}
		\end{center}
		\caption[]{Adjacency matrix of the connected graph}
	\end{table}
	It is essential to fully understand the meaning of the values in this table! But at this level of the book that should not pose major difficulties to the reader.
	
	Now let us evaluate the number of ways to go:
	\begin{enumerate}
		\item From E2 to E2 in two stages
		\item From E3 to E4 in two stages
		\item From E2 to E7 in two stages
	\end{enumerate}
	It is easy in the particular case above to count these possibilities. But in the case of a more complex graph it becomes difficult, or even impossible, for a human being within a reasonable time (and also for some computers at this time).
	
	We then have to use the following theorem:
	\begin{theorem}
	Given an oriented graph with vertices $x_1,x_2,\ldots, x_n$ and adjacency matrix $M$. For any integer $k$, the number of paths of length $k$ from a vertex $x_i$ to a vertex $x_j$ is given by:
	\[m_{ij}^{(k)}\]
	where the superscript refers to $M^k$, the power $k$ of the adjacency matrix.
	\end{theorem}
	\begin{dem}
	Let us perform an induction on $k$. For $k=1$:
	\[m_{ij}^{(1)}=m_{ij}\]
	designates the number of paths of length $1$ going from $x_i$ to $x_j$ (where $i$ and $j$ can be equal).
Let us suppose the result true for the integer $k-1$, with $k\geq 1$. As:
	\[M^{k}=M^{k-1}M\]
	we then have (by construction of matrix multiplication):
	\[m_{ij}^{(k)}=\sum_{l=1}^{n} m_{il}^{(k-1)}m_{lj}\]
	By the induction hypothesis, $m_{il}^{(k-1)}$ is the number of paths of length $k-1$ from $x_i$ to $x_l$, and $m_{lj}$ is, as we know (by construction!), the number of paths of length $1$ from $x_l$ to $x_j$; in particular it is equal to $1$ if $(x_l,x_j)$ is an edge of the graph and equal to $0$ otherwise!
	
	So the product:
	\[m_{il}^{(k-1)}m_{lj}\]
	gives, for a given value of $l$, the number of paths of length $k$ from $x_i$ to $x_j$ whose last edge is $(x_l,x_j)$.

	The sum:
	\[\sum_{l=1}^{n} m_{il}^{(k-1)}m_{lj}\]
	therefore gives all the possibilities (paths) of length $k$ from $x_i$ to $x_j$, regardless of the starting point of the last edge!
	\begin{flushright}
		$\blacksquare$  Q.E.D.
	\end{flushright}
	\end{dem}
	Thus, in our example, the adjacency matrix $M$ is given (in the state order E1, E4, E2, E3, E5, E7, E6 of the table) by:
	\[M=\begin{pmatrix}
	0&0&0&0&0&0&0\\
	0&0&1&0&0&0&0\\
	1&1&1&1&0&0&0\\
	0&0&1&0&0&0&0\\
	0&0&0&1&0&0&0\\
	0&1&1&1&0&0&0\\
	0&1&1&1&0&0&0
	\end{pmatrix}\]
	and is neither symmetric nor antireflexive, as we have already mentioned.
	
	So if we raise this matrix to the power $k$, each component $m_{ij}^{(k)}$ will give all the possibilities (paths) of length $k$; note that with the reading convention of our table (from the state of the column to the state of the row), the entry in row $i$ and column $j$ counts the paths going from $x_j$ to $x_i$. Thus we have for example with $k=2$:
	\[M^2=\begin{pmatrix}
	0&0&0&0&0&0&0\\
	1&1&1&1&0&0&0\\
	1&1&3&1&0&0&0\\
	1&1&1&1&0&0&0\\
	0&0&1&0&0&0&0\\
	1&1&3&1&0&0&0\\
	1&1&3&1&0&0&0
	\end{pmatrix}\]
	using for example the function \texttt{MMULT( )} available in most spreadsheet software.
	
	We then have the answer to all three of our initial questions by reading the above matrix:
	\begin{enumerate}
		\item From E2 to E2 in two stages there are $3$ possibilities

		\item  From E3 to E4 in two stages there is $1$ possibility

		\item From E2 to E7 in two stages there are $3$ possibilities
	\end{enumerate}
	It is possible to save some time in such calculations. If we denote by $C_i$ the $i$-th column of the matrix $M$, then $M^{k-1}C_i$ is the $i$-th column of $M^k$. So we always get the number of paths of length $k$ from a given starting point corresponding to the column $i$.
	
	This example has a very interesting counterpart in some areas studying the behaviour of individuals in different situations (shopping, tourism, accidents, etc.).
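	The same computation takes a few lines of Python with \texttt{numpy} (our own sketch, using the matrix reconstructed above and the column-to-row reading convention of the table):
\begin{verbatim}
import numpy as np

states = ["E1", "E4", "E2", "E3", "E5", "E7", "E6"]
M = np.array([[0,0,0,0,0,0,0],
              [0,0,1,0,0,0,0],
              [1,1,1,1,0,0,0],
              [0,0,1,0,0,0,0],
              [0,0,0,1,0,0,0],
              [0,1,1,1,0,0,0],
              [0,1,1,1,0,0,0]])

M2 = np.linalg.matrix_power(M, 2)  # entry (row, col): 2-step paths col -> row
i = states.index
print(M2[i("E2"), i("E2")])        # 3 ways from E2 to E2
print(M2[i("E4"), i("E3")])        # 1 way  from E3 to E4
print(M2[i("E7"), i("E2")])        # 3 ways from E2 to E7
\end{verbatim}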
	If instead of writing in the matrix $M$ the number of possible paths from one vertex to another, we write the probability (the fraction of the total number of individuals) of going from one vertex to another, we get for example (in practice the values are imposed by experimental observation!) a matrix:
	
	which is already the transposed stochastic matrix of the graph visible below (according to the theory of Markov chains, as we saw in the section of Probabilities, we need to take the transpose of the stochastic matrix to calculate the probabilities of the states).
	
	Considering that these probabilities do not change over time, we then have a homogeneous Markov chain (without cycles). We then see that:
	\begin{enumerate}
		\item The sum of the transition probabilities going out of each vertex (state) must always logically be equal to $1$ (which we already mentioned in the section of Probabilities)

		\item Everyone starts from the first vertex E1

		\item Some stagnate (stop) at some stage

		\item Those arriving at an end stage E5, E6 or E7 stay there and do not retrace their steps (absorbing states).
	\end{enumerate}
	The equivalent graph is therefore:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/markov_graph_probabilities.jpg}
		\caption[]{Markov probabilistic transition graph}
	\end{figure}
	Denoting by $N$ (instead of $M$, to avoid confusion) the matrix constructed from the graph above, and by $C_1 = N e_1$ its first column, then for example:
	\[N\,C_1 = N^2 e_1\]
	gives the probabilities with which what starts from E1 arrives after $2$ steps respectively in E1 ($0$), E2 ($0.662$), E3 ($0.218$) ... (the distribution of the initial vector may be arbitrary as long as the sum of its values is equal to $1$)! Likewise:
	\[N^2 C_1 = N^3 e_1\]
	gives the probabilities with which what starts from E1 arrives after $3$ steps respectively in E1 ($0$), E2 ($0.691$), ... and so on. We can therefore know the probability that an individual arriving at E2 reaches one of the terminal nodes (E5, E6 or E7) in a finite number of steps.

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Remember that the sum of the entries of the resulting columns is always equal to $1$ for the transpose of the stochastic matrix.
	\end{tcolorbox}
	Continuing like this for a long time ... we find that the equilibrium measure $\pi$, which satisfies (\SeeChapter{see section Probabilities page \pageref{stationary measure}}):
	\[N\pi=\pi\]
	is:
	\[\pi=(0,\;0,\;0,\;0,\;0.45,\;0.32,\;0.23)\]
	(in the state order E1, E4, E2, E3, E5, E7, E6), regardless of the distribution of the starting vector. This property is named "\NewTerm{ergodic}\index{ergodic}" in the field of Markov chains.

	Which means $45\%$ probability of being in E5, $32\%$ probability of being in E7 and $23\%$ probability of being in E6. Another way to look at it is to say that if a cohort of $100$ individuals starts at node E1, with transition probabilities constant in time, at equilibrium we will have $45$ people in E5, $32$ people in E7 and $23$ people in E6.

	\subsection{Categories}
	The introduction of categories through "\NewTerm{category theory}\index{category theory}", by Eilenberg and MacLane in 1942, aimed to transform difficult problems of topology into more tractable algebra problems. Later, the theory of categories has grown significantly, both for itself and for its applications in the most diverse fields of mathematics (e.g. differential geometry). Although some of its autonomous development has sometimes been criticized, categories are now recognized as a powerful language to develop a universal semantics of mathematical structures. They are also used in logic and more recently in physics, and a fruitful collaboration seems to be developing between computer science and category theorists.
	
	\textbf{Definitions (\#\mydef):}
	\begin{enumerate}
		\item[D1.] Intuitively, a "\NewTerm{category}\index{category}" is just a directed graph on which we have given a law to compose consecutive arrows, satisfying certain axioms.
An \"\\NewTerm{oriented graph}\\index{oriented graph}\" is formed by a set of nodes of graph, with links between them, represented by arrows from a node $A$ to a node $B$, what we denote by $f:A\\mapsto B$. We say then that $A$ is the \"source\" of the arrow, and $B$ and its \"goal\". There may be several arrows having same source and the same goal (we say then that they are \"parallel\") and there can be \"closed\" arrows, which source and target coincide.\n\n\t\t\\item[D3.] Two arrows $f$, $g$ are say to be \"\\NewTerm{consecutive arrow}\\index{consecutive arrow}\" if the target of the first is also the source of the second:\n\t\t\n\t\\end{enumerate}\n\tthen we say that they form a path of length $2$ from $A$ to $C$.\n\n\tA category is then a graph in which we define a composition of arrows, associating to any path $(f, g)$ of length $2$ from $A$ to $C$ an arrow of the graph from $A$ to $A$, named \"\\NewTerm{composed of the path}\\index{composed of a path}\", and denoted $fg$:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/graph_category.jpg}\n\t\t\\caption{Example of graph of categories}\n\t\\end{figure}\n\tThis composition satisfies the following axioms:\n\t\\begin{enumerate}\n\t\t\\item[A1.] Associativity: If $fgh$ is a path of length $3$, the both compositions $f(gh)$ and $(fg)h$ that we deduce are associative. It follows that to any path of length $n$ is also associated only a single composition of vertices (invariance of the path).\n\n\t\t\\item[A2.] Identities: At any node $A$ is associated a closed arrow from $A$ to $A$, obviously named \"identity\" of $A$ and denoted $\\text{Id}_A$, from which the composition with an arrow of source or target $A$ is equal to this other arrow.\n\t\\end{enumerate}\n\tMore generally, a path (of length $n$) from $A$ to $A_n$ is a sequence of $n$ consecutive arrows:\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} The graph nodes are named \"\\NewTerm{objects}\\index{objects}\" of category and the arrows the \"\\NewTerm{morphisms}\\index{morphisms}\" (or simply \"\\NewTerm{links}\\index{links}\") within the framework of category theory.\\\\\n\t\n\t\\textbf{R2.} An arrow $f$ is an isomorphism (\\SeeChapter{see section Set Theory page \\pageref{isomorphism}}) if there exists an arrow $g$ (named \"\\NewTerm{inverse}\\index{inverse}\") such that the compositions $fg$ and $gf$ are identities (this inverse is therefore unique).\n\t\\end{tcolorbox}\t\n\tThus, a category is formed by objects (the nodes of the graph) and by the links between them (the arrows or morphisms), but the essential idea is to emphasize the links relatively to the objects. In fact, the success of the categories in the most varied fields is due to the quantity of information on objects that can be inferred from the mere consideration of links and operations on them, regardless of the nature and anatomy of these objects.\n\t\n\tIn the next few lines, we will explain how to read the directed graphs we can sometimes encounter in math books. 
This will be a good example of category theory, because we have already encountered such graphs, without describing them, in the sections Numbers and Cryptography for example.
	
	For simplicity we will explain these graphs when the basic objects are sets (which is the most common case in this whole book anyway).
	Let us consider three sets $A$, $B$, and $C$ and three applications such that:
	\[f:A\mapsto B, \quad h:B\mapsto C, \quad g:A\mapsto C\]
	and let us rewrite this as follows:
	\[A \xrightarrow{f} B \xrightarrow{h} C, \qquad A \xrightarrow{g} C\]
	We can therefore consider the applications $f$, $g$ and $h$ as arrows that connect the objects (sets) $A$, $B$, and $C$ to form a triangle:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/commutative_graph.jpg}
		\caption{Example commutative diagram}
	\end{figure}
	where obviously $g=h\circ f$, or more explicitly, for those who forgot the meaning of this notation, for any $a\in A$ we have $h(f(a))=g(a)$.
	
	\textbf{Definition (naive version \#\mydef):} We say that an oriented graph is a "\NewTerm{commutative diagram}\index{commutative diagram}" if all the paths we can take to get from one object (set) to another one represent the same application.
	
	Or let us consider:
	
	such that each application $f,g,h$ has a kernel equal to the set of neutral elements of its starting set. Therefore:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/geometry/category_diagram.jpg}
	\end{figure}
	We can complicate such diagrams almost without limit by considering more sets and arrows (applications) connecting them. For example:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/commutative_graph_more_complex.jpg}
	\end{figure}
	This diagram is commutative if and only if $h\circ f=r\circ g$.
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} Usually in the mathematical literature such diagrams are implicitly commutative.\\
	
	\textbf{R2.} As already mentioned, the objects of these diagrams can more generally be groups, rings, fields, topological spaces, etc. In these cases, the arrows are no longer arbitrary applications but group homomorphisms, ring homomorphisms, continuous applications, etc.
	\end{tcolorbox}
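	Since commutativity is just a pointwise equality of composed maps, it can be checked mechanically when the objects are finite sets. Here is a small hypothetical sketch (our own names and maps, not the book's figures):
\begin{verbatim}
# Does the triangle A --f--> B --h--> C commute with g : A --> C ?
A = [0, 1, 2, 3]
f = lambda a: a % 2                            # f : A -> B = {0, 1}
h = lambda b: "even" if b == 0 else "odd"      # h : B -> C
g = lambda a: "even" if a % 2 == 0 else "odd"  # g : A -> C

print(all(h(f(a)) == g(a) for a in A))  # True: the diagram commutes
\end{verbatim}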
	\begin{flushright}
	\begin{tabular}{l c}
	\circled{90} & \pbox{20cm}{\score{4}{5} \\ {\tiny 28 votes,  70.00\%}} 
	\end{tabular} 
	\end{flushright}

	 %to make section start on odd page
	\newpage
	\thispagestyle{empty}
	\mbox{}
	 \section{Knot Theory}
	 
	\lettrine[lines=4]{\color{BrickRed}R}ené Descartes had imagined a system of the World in which the universe was animated by vortices. At the end of the 19th century, Tait and Kelvin resurrected this theory and interpreted chemical bonds by imagining knotted molecules that interlace.\\\\
	
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} The name "\NewTerm{Knot theory}\index{Knot theory}" that has emerged in the scientific community is quite unfortunate. Some (French) mathematicians rightly use instead the name "\NewTerm{theory of intertwinings and braids}\index{theory of intertwining and braids}", which is more correct and general (an intertwining may have several components, a knot being the one-component case).\\
	
	\textbf{R2.} Most of the text and figures below are a reproduction, with agreement, of Professor Michael Eisermann's course at the University of Stuttgart (\url{http://www.igt.uni-stuttgart.de/eiserm}).
	\end{tcolorbox}	
	
		Mathematical knot theory was pioneered by the work of Little and Kirkman, who tried to give a basis to the physical ideas of Kelvin and Tait. The first knot classifications used a wire image of the knots, without thickness, and a planar projection, like the shadow cast on a screen. One immediately noticeable characteristic concerns the crossings, where we distinguish the strand above from the strand below. Canings and braided strips are based on the consideration of the crossings, and the number of such crossings will be the first classifying element.
		
		\subsection{Braids Representation}
		Here is an example of a braid (a set of strands) and the corresponding string (a string is a braid that has been closed):
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/strand_and_string.jpg}
			\caption{Braid and corresponding string (knot diagram)}
		\end{figure}
		Let us focus first on the braid on the left of the figure above, and let us ask ourselves what would be the best way of putting things in order to compare braids. Let us first attempt the following representation of a particular braid:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/particular_braid_representation.jpg}
			\caption{Particular representation of a braid}
		\end{figure}
		As the strands of a braid are flexible and can move, we notice immediately that this representation has a problem: all the braids are equal.
Indeed:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/issue_of_braid_representation.jpg}
			\caption{Example of the problem of the braid representation}
		\end{figure}
		A better model then is perhaps to secure the ends on both sides:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/better_particular_braid_representation.jpg}
			\caption{Other choice of representation}
		\end{figure}
		Of course, with this model the middle strands may still move, as below, and the braid remains invariant:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braid_first_invariant_example.jpg}
			\caption{First example of invariant braid}
		\end{figure}
		or also like this:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braid_second_invariant_example.jpg}
			\caption{Second example of invariant braid}
		\end{figure}
		We can also translate the crossings and the braid remains invariant:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braid_tranlsation_invariance.jpg}
			\caption{Translation of crossing and preserving the trivial invariance}
		\end{figure}
		The length is not a variable in this model:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braid_length_invariance.jpg}
			\caption{Length invariance}
		\end{figure}
		
		\subsubsection{Braids Group}
		We define the multiplication of two braids as the operation that consists schematically of placing them one after the other:
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braids_multiplication.jpg}
			\caption{Example of multiplication of two braids}
		\end{figure}
		We can ask ourselves whether this forms a commutative group.
		
		Remember first that we defined the group structure in the section of Set Theory as follows: we refer to a set by the term "group" if its internal composition law meets three conditions: it is associative, it admits a neutral element, and every element admits an inverse (symmetric) element.
	
	If, furthermore, the internal law is also commutative, then we say that the group is an "\NewTerm{abelian group}\index{abelian group}" or simply a "\NewTerm{commutative group}\index{commutative group}".
	
	 \begin{enumerate}
	 	\item Let us start with the first check.
Is this multiplication associative, and is it commutative?
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braids_associative_check.jpg}
			\caption{Check of associativity and commutativity of braids (by the multiplication)}
		\end{figure}
		Placing braids one after the other is clearly associative; but for commutativity the answer is NO beyond two strands (because with $2$ strands it is commutative)!
		
		 \item Does it admit a neutral element?
		 \begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braid_neutral_check.jpg}
			\caption{Check of the existence of a neutral element for braids (by the multiplication)}
		\end{figure}
		The answer is YES, whatever the number of strands!
		
		\item Are there inverse elements (symmetric ones)?
		\begin{figure}[H]
			\centering
			\includegraphics{img/geometry/braids_symetrical_check.jpg}
			\caption{Check of the existence of an inverse element for braids (by the multiplication)}
		\end{figure}
		The answer is YES, whatever the number of strands!		
	 \end{enumerate}
	 
	 Therefore the braids with $n$ strands form a non-commutative group, denoted by $(\mathcal{B}_n,\cdot)$, where the stylized $B$ represents the word "braids".
	 
	 Let us observe something interesting about the braids with two strands. We index their tendrils by integer numbers as below (imagine holding the braid from the middle and rotating the ends; this gives all the other left and right tendrils):
	 \begin{figure}[H]
		\centering
		\includegraphics{img/geometry/braid_spin_tendrils.jpg}
		\caption{Indexing braids tendrils}
	\end{figure}
	with the convention of a positive tendril and a negative tendril:
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/braid_tendril_indexation_convention.jpg}
		\caption{Indexation convention}
	\end{figure}
	Beware of the following trap! (try to find it ... it is not always easy at first glance):
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/braid_tendril_trap.jpg}
		\caption{Indexation tendril trap}
	\end{figure}
	Indeed, the right crossing above in the second braid is precisely not .... a crossing! We then say that the number of crossings is not an invariant. It is for this reason that what matters is the "tendril": it does not change under the movements of the braid, contrary to a simple crossing!!!
	\begin{figure}[H]
		\centering
		\includegraphics{img/geometry/braid_difference_trindle_cross.jpg}
		\caption{Difference between tendril and cross}
	\end{figure}
	We then notice that, with regard to the multiplication of braids (\SeeChapter{see section Set Theory page \pageref{homomorphism of group}}) and their indexing by their (signed) number of tendrils $f$, we have the following property:
	\[f(b_1 \cdot b_2) = f(b_1) + f(b_2)\]
	So counting tendrils maps the multiplicative group of braids with $2$ strands into the additive group of relative integers by a homomorphism of groups. Moreover, we see that the application $f$ is trivially bijective. We therefore have an isomorphism of groups!
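	Here is a tiny sketch of this isomorphism $\mathcal{B}_2 \simeq (\mathbb{Z},+)$ (our own illustration): a 2-strand braid is fully described by its signed tendril count, and concatenation becomes integer addition.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Braid2:
    tendrils: int                 # signed number of tendrils (twists)
    def __mul__(self, other):     # braid multiplication = concatenation
        return Braid2(self.tendrils + other.tendrils)

sigma = Braid2(1)                 # one positive twist
assert sigma * Braid2(-1) == Braid2(0)         # inverse: untwisting
assert sigma * Braid2(5) == Braid2(5) * sigma  # B_2 is commutative
\end{verbatim}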
With 3 or more strands, we saw that the group is no longer commutative!\n\t\n\t\\subsection{Knot Representation}\n\tLet us return to the first figure presented above:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/strand_and_string.jpg}\n\t\t\\caption{Braid and corresponding string (knot diagram)}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAny knot or any link can be obtained from a braid (stated without proof).\n\t\\end{tcolorbox}\n\tThere exist many possible views, and different appearances, of the same knot. Are two knots with different projections distinct (not isotopic)? In the first published knot classification table (that of Tait and Little) the minimum number of crossings is used as the classification principle (up to ten crossings), and it took almost a century to detect one duplication (see the table further below): two identical knots had been taken for different ones.\n\t\n\tIn practice a knot rather looks like this:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/node_in_real_life.jpg}\n\t\t\\caption{Example of a knot in real life}\n\t\\end{figure}\n\tBut mathematicians customarily connect both ends of the string to get this representation of a knot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/node_usage_representation.jpg}\n\t\t\\caption{Knot representation by mathematicians}\n\t\\end{figure}\n\tTherefore a knot is always a loop that closes on itself. In other words, an open knot is always equivalent to a closed knot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/equivalence_open_closed_knot.jpg}\n\t\t\\caption{Equivalence between an open and a closed knot}\n\t\\end{figure}\n\tA closed knot may look different depending on the angle from which we look at it. Thus, the two knots below are two representations of the same type of knot, named the \"trefoil knot\":\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/clover_knot.jpg}\n\t\t\\caption{Two different perspectives of the same trefoil knot}\n\t\\end{figure}\n\tWe can also ask ourselves whether the left trefoil knot and its mirror image on the right are the same:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/clover_knot_left_right_equality.jpg}\n\t\t\\caption{Searching for equality between the trefoil knot and its mirror image}\n\t\\end{figure}\n\tor the same question when the two knots are closed:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/clover_knot_closed_left_right_equality.jpg}\n\t\t\\caption[]{The same situation with the knots closed}\n\t\\end{figure}\n\tMore difficult: K. A. Perko showed that the two knots below, named the \"\\NewTerm{Perko pair}\\index{Perko pair}\", are in fact the same knot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_parko_par.jpg}\n\t\t\\caption{Example of the Perko pair}\n\t\\end{figure}\n\tIt is therefore not trivial to know whether two physical objects represent basically the same knot. But that is what interests mathematicians!
Specifically, they want to classify the knots, that is to say, determine all the types of knots that are fundamentally different, and not just in appearance (the same idea as in topology, where mathematicians managed to prove that every volume can be reduced to three primary volumes).\n\t\n\t\\subsubsection{Knots Group}\n\tJust as we did for braids, let us check whether knots form a group.\n\t\n\tSo we define the multiplication of two knots as the operation that is schematically:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_multiplication_proposition.jpg}\n\t\t\\caption{Definition of the multiplication of two knots}\n\t\\end{figure}\n\tWe can ask ourselves whether this gives a commutative group.\n\t\n\t\\begin{enumerate}\n\t\t\\item Let us start with the first check. Is this operation associative?:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_check_associative.jpg}\n\t\t\t\\caption{Check of the associativity of knots (under multiplication)}\n\t\t\\end{figure}\n\t\tThe answer is YES, whatever the knots!\n\t\t\n\t\t\\item Second check: does it have a neutral element?:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_check_neutral_element.jpg}\n\t\t\t\\caption{Check of the existence of a neutral element for knots (under multiplication)}\n\t\t\\end{figure}\n\t\tThe answer is YES, whatever the knots!\n\t\t\n\t\t\\item Third check: is it commutative?:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_check_commutativity.jpg}\n\t\t\t\\caption{Check of the commutativity of knots (under multiplication)}\n\t\t\\end{figure}\n\t\tSo, unlike braids, the answer is YES!\n\t\t\n\t\t\\item Fourth and last check: are there inverse (symmetric) elements?:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_check_inverse_element.jpg}\n\t\t\t\\caption{Check of the existence of an inverse element for knots (under multiplication)}\n\t\t\\end{figure}\n\t\tAnd that is the million dollar question!\t\n\t\\end{enumerate}\n\tWith time and the efforts of mathematicians, three essential mathematical perspectives for studying knots emerged, each solving different problems:\n\t\\begin{enumerate}\n\t\t\\item Topological, where the knot is conceived as the union of a finite number of closed curves, defined up to a given deformation.\n\t\t\n\t\t\\item Algebraic, where we count the crossings, for example, and where we associate groups to knots.\n\t\t\n\t\t\\item Geometrical, where we take into account the shape of the knot, measuring lengths or angles. In particular, the ideas of twisting number and number of interlacings appear, and these concepts are fundamental to the study of DNA in molecular biology.\n\t\\end{enumerate}\n\t\n\tThe link between these views is tricky. Some statements look evident but are hard to prove rigorously. We think especially of the Jordan theorem, asserting that a closed plane curve without crossings defines an interior and an exterior! It took almost two centuries to properly define the concept of a curve, and there are wild knots to eliminate before any classification. We must take care to properly define the deformation of a knot, which mathematicians name \"\\NewTerm{isotopies}\\index{isotopies (knot theory)}\".\n\t\n\tThe most effective mathematical idea for the study of knots is probably that of the \"\\NewTerm{knot invariant}\\index{knot invariant}\".
An invariant of a knot is a characteristic (integer, real number, polynomial, group, etc.) which remains unchanged under deformation. If we have found an invariant, we can say that two knots are really different when the invariant does not take the same value on both knots. But if two knots have the same invariant, we cannot say that they are of the same type (deformable into each other). A typical example of two knots having the same invariant, and for which it is not trivial to say whether they are isotopic, is the pair of right and left trefoil knots when the chosen invariant is the number of crossings, denoted $c(D)$. To settle such questions we would need a complete system of invariants.\n\t\n\tHere, invariants are indispensable, and mathematicians have come up with dozens, each distilling different features of knots. But these invariants tend to have blind spots.\n\t\n\tTake, for example, the invariant named \"\\NewTerm{tricolorability}\\index{tricolorability}\". A knot diagram is tricolorable if there is a way to color its strands red, blue and green so that at every crossing, the three strands that meet are either all the same color or all different colors. Mathematicians have shown that even when you move the strands of a knot around, its tricolorability (or lack thereof) is unchanged. In other words, tricolorability is an innate feature of a knot.\n\n\tThe trefoil is tricolorable:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/tricolorability.jpg}\n\t\t\\caption[]{source: Quanta Magazine}\n\t\\end{figure}\n\tBut the \"unknot\" (a loop that has no actual knots, even if it appears tangled) is not tricolorable, providing an instant proof that the trefoil is not just the unknot in disguise. But while tricolorability enables us to distinguish some knots from the unknot, it is not a perfect tool for this purpose: knots that are tricolorable are definitely knotted, but knots that are not tricolorable are not necessarily unknotted. For instance, the figure-eight knot is not tricolorable, yet it is genuinely knotted. This knot falls into tricolorability's blind spot: it is as if the invariant were saying \"the figure-eight knot is unknotted as far as I can tell\".\n\t\n\tThe Conway knot, an 11-crossing knot discovered by John Horton Conway more than 50 years ago, is extraordinarily skilled at fooling knot invariants!\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/conway_knot.jpg}\n\t\t\\caption[]{source: Wikipedia, author: Saung Tadashi}\n\t\\end{figure}\n\tThe Russian mathematician V. Vassiliev introduced in 1990 a new class of invariants. It remains to make them explicitly computable and to prove that they form a complete system.\n\t\n\tHenri Poincar\u00e9 introduced in 1900 the concept of the fundamental group of a space, which describes the possible paths returning to their starting point. Applied to the space outside a knot, this provides the \"\\NewTerm{knot group}\\index{knot group}\" and the \"\\NewTerm{Alexander polynomial}\\index{Alexander polynomial}\" linked to it (see the mathematical formalization below).\n\t\n\tThe same construction applied to what is named the \"configuration space\" provides an effective definition of the \"braid groups\". These groups were introduced in an intuitive form, circa 1920, by the Viennese mathematician Emil Artin, one of the fathers of modern algebra.
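The tricolorability rule described above is easy to check by machine. Here is a minimal sketch (not from the original text: the diagram encoding is our assumption), where a diagram is a list of crossings, each listing the three arcs that meet there, and where the rule \"all the same or all different\" for colors in $\\{0,1,2\\}$ is equivalent to the sum of the three colors being $0 \\pmod 3$:\n\t\\begin{verbatim}\nfrom itertools import product\n\ndef tricolorable(n_arcs, crossings):\n    # Brute force: look for a coloring of the arcs with colors {0, 1, 2},\n    # using at least two colors, that satisfies the rule at every crossing.\n    for coloring in product(range(3), repeat=n_arcs):\n        if len(set(coloring)) < 2:\n            continue  # constant colorings do not count\n        if all((coloring[a] + coloring[b] + coloring[c]) % 3 == 0\n               for a, b, c in crossings):\n            return True\n    return False\n\n# Trefoil: 3 arcs and 3 crossings, each crossing meeting all three arcs.\nprint(tricolorable(3, [(0, 1, 2), (0, 1, 2), (0, 1, 2)]))  # True\n# Unknot drawn without crossings: a single arc cannot use two colors.\nprint(tricolorable(1, []))  # False\n\\end{verbatim}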
In 1937 the Russian mathematician Markov connected knots and braids, and gave a theoretical method for defining knot invariants by means of braids.\n\t\n\tAt the beginning of the 1990s, braid groups were a curiosity and their complexity off-putting. Then, suddenly, they became a central theme of scientific research. Let us give an idea of the diversity of views that lead to braid groups:\n\t\\begin{enumerate}\n\t\t\\item In geometry, the Russian mathematician Vladimir Arnold classified, under the name of \"catastrophes\", the singularities of geometric configurations, which leads to the configuration spaces.\n\t\t\n\t\t\\item In algebra, specifically in group theory, two avatars of braid groups play a central role: Coxeter groups and Hecke algebras.\n\t\t\n\t\t\\item In statistical mechanics, the study of exactly solvable models is done through the use of the Yang-Baxter relations and quantum groups. The link with the braid groups is profound.\n\t\t\n\t\t\\item Two-dimensional physics has recently gained great importance, and a model of high-temperature superconductivity is expected from it. The usual classification of particles into fermions and bosons becomes more complicated in two dimensions. The new concept is that of the anyon, whose mathematical model is linked to braid groups.\n\t\t\n\t\t\\item Quantum field theory is the mathematical model of elementary particles. Edward Witten built the bridge between this theory and braid groups, via the Jones polynomial.\n\t\\end{enumerate}\n\tKnot theory is therefore an active interface between physics and mathematics. Knots and braids now provide an effective modeling tool from the physics of polymers to liquid crystals, by way of molecular biology. In the opposite direction, ideas imported from physics have sparked a revolution in mathematics: a somewhat marginal topic at the beginning of the 1990s, knot theory had become, by the beginning of the 2000s, one of the great mathematical subjects.\n\t\n\t\\subsection{Tait's Knots}\n\tThe main aim of knot theory is to classify all knots; it has its origin in physics in the 19th century, as we have already mentioned. Why?\n\t\n\tRecall that at that time atoms were a mystery: why did they seem indestructible, why do they exist in so many varieties, and how can they combine to give countless other compounds?\n\t\n\tAt that time the most beautiful equations of physics (which are often good candidates to explain things we do not understand...) were the Maxwell equations, so it was natural (or tempting) for physicists to try to explain atomic mechanics in terms of electromagnetism, although we know today that this path was destined to fail. In the mid 19th century, electromagnetic waves were largely conceptualized as vibrations of a medium named at the time the \"luminiferous ether\". The rest frame of this ether was then defined (or at least suspected) to be an absolute reference... But later, the experimenters Michelson and Morley showed that the relative motion of our planet through this ether was undetectable (and worse... they measured the absolute constancy of the speed of light!).
Their experiments also led Albert Einstein and Henri Poincar\u00e9 to develop the famous theory of Special Relativity (\\SeeChapter{see section Special Relativity page \\pageref{special relativity}}):\n\t\n\twhich is, we must admit, similar to fluid mechanics (see the section of the same name page \\pageref{fluid mechanics}) when we have an incompressible fluid without viscosity and without vortices:\n\t\n\tMore generally, if the curl $\\vec{ \\nabla}\\times \\vec{v}$ is not zero, Helmholtz proved in 1858 that the vortex field lines - defined by the lines of $\\vec{ \\nabla}\\times \\vec{v}$ - move in the direction of $\\vec{v}$ as if they had an independent existence (and here is the opening... of course!). These lines can have no ends, but they can form loops.\n\t\n\tIn 1867 the physicist Peter Guthrie Tait (assistant of Hamilton and a champion of quaternions) found an ingenious way to demonstrate this effect by cutting a circular hole in a box, filling the box with smoke, and then expelling the smoke through compression of the air in the box, thereby forming smoke rings. He also showed this to his friend Kelvin, who noted the analogy with electromagnetism and proposed a theory in which atoms were vortices (knots) in the ether! He hypothesized that the different types of atoms correspond to different types of knotted vortices (yes, it is a bit far-fetched, but...)!\n\t\n\tTait subsequently tried to classify (see the table below) knotted lines according to:\n\t\\begin{enumerate}\n\t\t\\item The number of crossings when they are projected onto a plane\n\t\t\n\t\t\\item The representation of only the \"\\NewTerm{prime knots}\\index{prime knots}\"\n\t\\end{enumerate}\n\t\\textbf{Definition (\\#\\mydef):} A knot can be complicated because it is a succession of simple knots:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_open_succession_example.jpg}\n\t\t\\caption{Example of a succession of knots on an open knot}\n\t\\end{figure}\n\tThese simple knots (on one strand) can be separated by cutting the string:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_separation.jpg}\n\t\t\\caption{Separation of the knot}\n\t\\end{figure}\n\tA prime knot is a knot that cannot be separated into simpler knots: cutting the string unties the knot.\n\t\n\tTo classify knots is to try to determine the building blocks: all the prime knots. The list Tait initially obtained was the following (in the hope of obtaining a periodic table of the elements, \"ether\" version...)
and, if necessary, we recommend you take a piece of string to check that they are prime (sometimes it is mentally difficult):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_tait_classification.jpg}\n\t\t\\caption{Tait's classification}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} This table is very important and we will very often refer to it for application examples using the proposed nomenclature.\\\\\n\t\n\t\\textbf{R2.} The corresponding values in the above table, named \"\\NewTerm{knot complexity measures}\\index{knot complexity measures}\", are denoted in general $c(D)_{\\text{Id}}^{c(B)}$, where $c(D)$ is the number of crossings, $c(B)$ the number of strands, and $\\text{Id}$ the identifier of the knot within the class $c(D)$.\n\t\\end{tcolorbox}\n\tThe beauty of this theory of \"vortex atoms\" was that it reconciled the continuous appearance of the wonderful world of fluids (or, by extension, of Maxwell's equations) with the discreteness of the different types of atoms. A difficulty with this theory, however, was the remarkable stability of atoms. Kelvin admitted in 1905, after many years of failing to prove that the motion of Helmholtz's vortex rings was stable, that these rings were essentially unstable and should therefore dissipate. Curiously, it is the extreme stability of atoms that later became one of the puzzle pieces in building corpuscular quantum physics.\n\t\n\tMoreover, with the theory of Special Relativity, the concept of ether that was precious to Maxwell and his contemporaries, especially regarding knot theory, became synonymous with a useless concept, and with the arrival of quantum theory the vortex atom was completely forgotten. Knot theory resurfaced because of some conjectures that Tait was not able to prove (we will come back to this later) and that were proved only in the 1980s, via an unexpected detour through theoretical physics.\n\t\n\tWhat physicists abandoned intrigues mathematicians. The basic question remains the same: how can we tell whether two knots are \"isotopic\" to each other (we will define further below what this means exactly)? This question is closely related to the famous Tait conjectures. To attack these conjectures and the basic problem of knot resemblance, topologists developed knot invariants. A well-known and very successful example of a knot invariant is the Alexander polynomial, discovered by J. W. Alexander in 1927 (see further below). Thus, if the polynomials of two knots are different, the knots are not isotopic.
Unfortunately, there are knots with equivalent Alexander polynomials that are not isotopic to each other...\n\t\n\tMathematical knot theory then developed for some fifty years and was somewhat forgotten until the thunderclap of Jones's 1984 discovery of a new knot invariant (the Jones polynomial). Jones's discovery is quite exemplary from a scientific point of view, and invites reflection on the current organization of science. Jones was not at all an expert on knots. He was interested in the classification of factors in von Neumann algebras (functional analysis). He obtained matrix algebras whose commutation relations (the Yang-Baxter equations) were close relatives of those of the braid group. From braids to knots there is only one step, which Jones took with the help of Joan Birman, a specialist in knots.\n\t\n\tThen came an explosion of discoveries: purely combinatorial versions, new polynomials, etc. In 1989, Witten proved that the Jones polynomial can be obtained from quantum field theory thanks to a Feynman integral, giving the first definition that does not use the planar projections of a knot. In some way the Jones-Witten theory is a non-commutative extension of the work of Gauss. The Lie group (\\SeeChapter{see section Set Algebra page \\pageref{lie group}}) which acts in magnetism is $\\text{U}(1)$, while the Witten invariant is a Feynman integral over a space of $\\text{SU}(2)$-connections.\n\t\n\t\\pagebreak\n\t\\subsection{Mathematical Formalization}\n\tA knot is mathematically modelled by an injective, differentiable map with nowhere-vanishing derivative from the circle into oriented three-dimensional space.\n\t\n\tThe two central problems of knot theory are to decide, in a computable way, whether a knot is trivial (can be undone without cutting the string...) or not (a problem that, as far as we know, is not fully resolved at this beginning of the 21st century), and whether two knots are really equivalent.\n\t\n\tThe first type of problem is well illustrated by asking whether the following knot is knotted or not...?:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.7]{img/geometry/knotted_knot.jpg}\n\t\t\\caption{Knotted knot?}\n\t\\end{figure}\n\tThe answer is no, as shown in the figure below (read from left to right and top to bottom):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.7]{img/geometry/knot_not_knotted.jpg}\n\t\t\\caption[]{It is not...!}\n\t\\end{figure}\n\tThe approach to the second problem is to associate to knots computable mathematical objects (polynomials, numbers), named \"\\NewTerm{knot invariants}\\index{knot invariants}\", that are insensitive to deformations of the knot. If the invariant is not equal to that of the trivial knot, which is:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_trivial.jpg}\n\t\t\\caption{Trivial Knot}\n\t\\end{figure}\n\twe are then sure that the knot is not trivial. For example, the simplest non-trivial knot is the trefoil knot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_trefoil.jpg}\n\t\t\\caption{Trefoil Knot}\n\t\\end{figure}\n\tThe problem is to find sufficiently fine invariants.
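As a foretaste of the linking number that heads the list below, here is a minimal sketch (not from the original text: the encoding and the sign convention are our assumptions). An oriented two-component link diagram is reduced to the list of signs, $+1$ or $-1$, of the crossings where one component crosses the other; the linking number is half the sum of these signs, a quantity unchanged by the Reidemeister moves:\n\t\\begin{verbatim}\ndef linking_number(inter_component_signs):\n    # Half the sum of the signs of the crossings between the two\n    # oriented components of a link diagram.\n    total = sum(inter_component_signs)\n    assert total % 2 == 0  # inter-component crossings come in pairs\n    return total // 2\n\n# Hopf link drawn with its two crossings, both positive:\nprint(linking_number([+1, +1]))  # 1\n# Two disjoint circles (no inter-component crossings):\nprint(linking_number([]))        # 0\n\\end{verbatim}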
We will describe two of them:\n\t\\begin{enumerate}\n\t\t\\item The linking number (number of interlacings) of two knots, an idea of Gauss, which occurs in electromagnetism\n\t\t\n\t\t\\item The Jones polynomial (introduced in 1985 by Vaughan Jones, 1990 Fields Medal), which is subtle enough to distinguish, for example, the right trefoil knot from the left one.\n\t\\end{enumerate}\n\t\n\tWe will also describe a general class of invariants: the finite type, or Vassiliev, invariants. These invariants, defined in a rather non-constructive way, are perhaps complete invariants, but at this beginning of the 21st century we do not know for sure. We will describe the linking number of 2 knots as a combinatorial invariant computable from a knot diagram. We will then write down the classical integral formula related to magnetism (Gauss) to calculate it. We will then make a small detour through the global differential geometry of curves in three-dimensional space to show the \"White formula\", which connects three geometric invariants associated with a ribbon. We will then describe the new polynomial invariant from a combinatorial point of view. The point of view of Feynman integrals will also be discussed.\n\t\n\tVassiliev introduced a general family of invariants that contains most known invariants, and we say that they are of finite type. We will describe the principle.\n\t\n\tIn biology, RNA and DNA strands and also amino-acid filaments are wound into complex three-dimensional shapes (closed braids which, as we know, are a more general setting than knots). However, often, through the microscope, we see only a two-dimensional projection. Invariants make it possible to recover three-dimensional information from the 2D views we have:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/arn_or_adn.jpg}\n\t\t\\caption{Example of an RNA or DNA strand}\n\t\\end{figure}\n\t\n\tOn the other hand, biologists have observed knotted DNA molecules and found that the topological nature of the DNA molecule, that is to say the knot type formed by the molecule, affects its operational mechanism in the cells by conditioning some of its chemical properties. Some viruses attack cells by modifying the long DNA molecules, interlacing them in different ways. Indeed, by means of enzymes named topoisomerases, viruses cut and reattach different strands of the DNA molecule so that it takes the form of a knot which can be very complex. It turns out that the type of knot obtained is, in a way, the signature of the virus. To fight viruses effectively, it is imperative to recognize their signature through their action on DNA. Therefore, to identify different viruses, we must be able to recognize the different types of knots, and this is where knot theory can help the biologist.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe identification of knot types has been turned by research centers into an online game, the aim being to give people a fun way to contribute to finding solutions. This seems to work: since late 2011 some players seem to have made relevant discoveries regarding the analysis of proteins.\n\t\\end{tcolorbox}\n\t\n\tLet us now start with some definitions. We have deliberately chosen not to list a certain number of definitions (as with many in Graph Theory...) that the reader can easily find in the literature or on the Internet.
We allow ourselves to omit them because they will not be helpful in the application of knot theory to quantum field physics (and, except for those who like making small drawings..., they are useless from our point of view).\n\t\n\t\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A knot may be defined (there are several possible definitions, and some have small but bothersome defects...) as the image of the circle, denoted $S^1$, under a continuous map (the string is assumed to be such) that is injective (this prevents the string from passing through itself):\n\t\t\n\t\tin other words, it is a curve without double points, drawn in three-dimensional Euclidean space.\n\n\t\tA knot is therefore represented by an injective map $f\\in \\mathcal{C}^1 ([0,1],\\mathbb{R}^3)$ (imposing knots of class $\\mathcal{C}^1$ avoids having overly wild curves...) satisfying $f(0)=f(1)$ (closed knot). The image of $f$ is sometimes named the \"\\NewTerm{support of the knot $f$}\\index{support of a knot}\": this is the \"physical\" realization of the knot in space.\n\t\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tAll knots will subsequently be denoted by the letter $N$.\n\t\t\\end{tcolorbox}\n\t\t\n\t\tTo summarize crudely..., a knot is a string whose ends we have welded together.\n\t\t\n\t\t\\item[D2.] We say that a knot is a \"\\NewTerm{trivial knot}\\index{trivial knot}\" if the map $f$ that defines it extends to a continuous, still injective, map of the disc $D^2 \\rightarrow \\mathbb{R}^3$: a trivial knot is therefore a knot that bounds an embedded disc of $\\mathbb{R}^3$.\n\t\t\n\t\t\\item[D3.] Two knots $\\gamma_1, \\gamma_2$ are equivalent if there exists a continuous map:\n\t\t\n\t\tsuch that $F(1)=\\gamma_1$ and $F(2)=\\gamma_2$. The point is to find an $F$ that is a curve in the set of curves.\n\t\t\n\t\t\\item[D4.] The crossing number $c(K)$ of a knot $K$ is the natural number giving the minimum number of crossings over all diagrams of the knot type (a natural measure of complexity).\n\t\t\n\t\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\t\tThe knot $0_1$ has $c(K)=c(0_1)=0$, hence zero. There exist no knots with $c(K)$ equal to one or two. A proof consists in listing all possible diagrams with one or two crossings and checking that they are in fact knots equivalent to the type $0_1$, or mere interlacings. The trefoil knot (knot $3_1$) has $c(K)=c(3_1)=3$.\n\t\t\\end{tcolorbox}\n\t\t\n\t\t\\item[D5.] The mirror image $\\bar{K}$ of a knot $K$ is obtained by reflection in a plane of $\\mathbb{R}^3$. The knot $\\bar{K}$ can be constructed by inverting the crossings of the knot diagram:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_mirror_image.jpg}\n\t\t\t\\caption{Mirror image by crossing inversion}\n\t\t\\end{figure}\n\t\t\n\t\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\t\tWith the trefoil knot (\"left trefoil\" and \"right trefoil\"):\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_trefoil_mirror.jpg}\n\t\t\t\\caption{Mirror of the trefoil knot}\n\t\t\\end{figure}\n\t\t\\end{tcolorbox}\n\t\t\n\t\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tWe also notice that in the Tait classification table, mirror knots are not represented!\n\t\t\\end{tcolorbox}\n\t\t\n\t\t\\item[D6.]
Two knots are said to be \"\\NewTerm{isotopic knots}\\index{isotopic knots}\" if we can pass from one to the other by continuous manipulations named \"\\NewTerm{isotopies}\\index{isotopies (knots)}\" (the left trefoil knot and the right trefoil knot, for example, are not isotopic!), and as we know this is problem number one in knot theory: detecting isotopic knots.\n\t\t\n\t\tIn other words, an isotopy is a movement which does not change the knot.\n\t\t\n\t\tIn the plane, the isotopies give three particular types of moves named \"\\NewTerm{Reidemeister movements}\\index{Reidemeister movements}\":\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_reidemaster_movements.jpg}\n\t\t\t\\caption{Reidemeister moves}\n\t\t\\end{figure}\n\t\tThese are three simple operations for changing a part of a knot without changing the nature of the knot itself.\n\t\t\n\t\tSo two diagrams, or knots, are isotopic if we can pass from one to the other by a finite sequence of Reidemeister moves. Thus, with the example below, we show that the initial knot is equivalent to the trivial knot:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/reidemeister_example.jpg}\n\t\t\t\\caption{Simple application example of the Reidemeister moves}\n\t\t\\end{figure}\n\t\t\n\t\t\\item[D7.] Two knots are said to be \"\\NewTerm{equivalent knots}\\index{equivalent knot}\" if they are isotopic OR if one is isotopic to the mirror image of the other. According to the above definitions, each knot is necessarily equivalent to its own mirror image, but only \"\\NewTerm{amphicheiral}\\index{amphicheiral}\" knots are isotopic to their mirror image. The figure-eight knot $4_1$ is a good example of this rather rare kind of knot:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_isotopie.jpg}\n\t\t\t\\caption{Knot isotopy}\n\t\t\\end{figure}\n\t\t\n\t\tThe Reidemeister moves are not always obvious to guess. Here are the details:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_isotopie_details.jpg}\n\t\t\\end{figure}\n\t\t\n\t\t\\item[D8.] An \"\\NewTerm{interlacing}\\index{interlacing}\" is a compact (\\SeeChapter{see section Topology page \\pageref{compact}}) submanifold (\\SeeChapter{see section Topology page \\pageref{subvariety}}) of class $\\mathcal{C}^\\infty$ and of dimension $1$.\n\t\t\n\t\t\\item[D9.] The \"\\NewTerm{number of connected components}\\index{number of connected components}\" is denoted by $c(E)$. If $c(E)=1$ we say that $E$ is a knot.\n\t\t\n\t\tMost of the time, interlacings are oriented (\\SeeChapter{see section Graph Theory page \\pageref{oriented graph}}) and we identify isotopic interlacings. We then represent an interlacing in the plane by projecting it and specifying the type of each crossing point.\n\t\t\n\t\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\t\tAn interlacing with three connected components (left) and the trefoil knot (right):\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/knot_interlaces.jpg}\n\t\t\t\\caption{Interlacing with three connected components}\n\t\t\\end{figure}\n\t\t\\end{tcolorbox}\n\t\tIndeed, two interlacings are isotopic if and only if we can pass from one to the other by a finite sequence of Reidemeister moves.\n\t\t\n\t\t\\item[D10.]
Three interlacings $E_+,E_{-},E_0$ are named \"\\NewTerm{associated interlacings}\\index{associated interlacings}\" if they differ only at one crossing point, where they are in one of the following configurations:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/interlaces_associated_example.jpg}\n\t\t\t\\caption{Example of associated interlacings}\n\t\t\\end{figure}\n\t\tWe denote by $\\mathcal{E}$ the set of all interlacing classes, and we will now focus on functions of the type $P:\\mathcal{E}\\rightarrow A$ where $A$ is a commutative ring (the reader must not forget that, as we have seen, the coefficients of a polynomial are elements of a ring).\n\t\t\n\t\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tWe should also remember that a knot is a curve and that any curve can be represented by a polynomial: hence the idea!\n\t\t\\end{tcolorbox}\n\t\t\n\t\t\\item[D11.] $P$ is said to be \"\\NewTerm{invariant by association}\\index{invariant by association}\" (by associated interlacings...) if:\n\t\t\n\t\tand if there exist invertible $a_+,a_{-},a_0 \\in A$ such that for any associated interlacing triplet $(E_+,E_{-},E_0)$ we have (!):\n\t\t\n\t\tWe can already prove, in a fairly elementary way, that if such a polynomial exists, then it is uniquely determined by the coefficients of the above equality. We summarize this in the following theorem:\n\t\t\n\t\t\\begin{theorem}\n\t\t\tIf $P$ is invariant by association, then it is uniquely determined by the coefficients  $a_+,a_{-},a_0$.\n\t\t\\end{theorem}\n\t\t\\begin{dem}\n\t\tLet us notice first that:\n\t\t\n\t\twhere $\\bigcirc^r$ denotes the interlacing consisting of $r$ non-interlaced circles. Indeed, the relation $P(\\bigcirc)=1$ and the invariance property of $P$ applied to the following interlacings:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/geometry/non_interlaces_notation_example.jpg}\n\t\t\t\\caption{Example of non-interlacing notation}\n\t\t\\end{figure}\n\t\tgive:\n\t\t\n\t\tTherefore we get:\n\t\t\n\t\tThus, by induction on $r$, we get:\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\blacksquare$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\tIn other words, the function $P$ is determined by its coefficients alone! So we can now try to see whether a knot full of crossings can always be reduced to $P(\\bigcirc)$ recursively.\n\t\\end{enumerate}\n\t\n\t\\subsubsection{Planar Representation}\n\tWe assume that the knot lives in oriented three-dimensional Euclidean space. We look at projections of the knot onto the plane $z=0$. It is intuitively clear that we can assume that the knot has no vertical tangent and that the crossing points are only double and transversal. Such a projection will be named a \"\\NewTerm{good projection}\\index{good projection}\".\n\t\n\tWe then indicate, at each crossing point, which strand passes over the other. Such a drawing represents a knot in an unambiguous way: we say that we have a \"\\NewTerm{knot graph}\\index{knot graph}\". Of course, two equivalent knots have good projections that in general give different knot diagrams (herein lies one of the problems as well).
It is therefore important to be able to read the equivalence of two knots directly from their diagrams.\n\t\n\tHere is an example with the trefoil knot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/geometry/knot_diagram_representation.jpg}\n\t\t\\caption{Trefoil knot and its diagram}\n\t\\end{figure}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe trefoil knot is the non-trivial knot having the minimum number of crossings, namely $3$; it actually exists in two mirror-image forms (images of one another by reflection, as a reminder...). It is in fact a simple knot whose ends have been welded. The trefoil knot is the boundary of a M\u00f6bius band with three half-twists, and is also the torus knot of order $(3,2)$ (3 windings around the core over two revolutions), as well as the one of order $(2,3)$.\n\t\\end{tcolorbox}\n\t\n\tAs a drawing is not necessarily easy to transmit, or to draw on a computer, we can also encode the diagram of a knot by a matrix with integer coefficients having three rows and as many columns as there are double points.\n\t\n\tWe number the $2n$ passages through the crossing points along the closed curve (circle) in the order in which they occur. We note that the pairs are always formed of an even number and an odd number. We then build the first two rows of the matrix by putting in each column the two numbers corresponding to the same crossing of the projection: the odd one in the first row, the even one in the second. We then add to each column a third entry $\\pm 1$ indicating the orientation of the two (oriented) strands ($+1$ if the bottom strand passes from right to left when we traverse the top one, $-1$ otherwise). For example, the matrix associated with the trefoil knot of the previous figure is:
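Since the above construction is entirely mechanical, here is a minimal sketch of how such a coding matrix can be built (not from the original text: the passage numbers and signs below are hypothetical placeholders, not the data of the figure above), each crossing being given as (odd passage, even passage, sign):\n\t\\begin{verbatim}\ndef coding_matrix(crossings):\n    # 3 x n matrix: odd passage numbers, even passage numbers, signs.\n    odd  = [o for o, _, _ in crossings]\n    even = [e for _, e, _ in crossings]\n    sign = [s for _, _, s in crossings]\n    return [odd, even, sign]\n\n# Hypothetical 3-crossing diagram: passages paired as (1,4), (3,6), (5,2),\n# all crossings carrying the same orientation sign.\nfor row in coding_matrix([(1, 4, -1), (3, 6, -1), (5, 2, -1)]):\n    print(row)\n\\end{verbatim}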
{"text": "\\documentclass{article}\n\\usepackage{theorem}\n\n\\newcommand{\\implies}{\\Rightarrow}\n\\newcommand{\\have}{{.~}}\n\\newcommand{\\bstate}{{\\mbox{\\cal S}}}\n\\newcommand{\\blocks}{{\\mbox{\\cal B}}}\n\\newcommand{\\tabtop}{{\\mbox{\\sc t}}}\n\\newcommand{\\tblocks}{{\\blocks_\\tabtop}}\n\\newcommand{\\bclear}{{\\mbox{\\sc clear}}}\n\\newcommand{\\bon}{{\\mbox{\\sc on}}}\n\\newcommand{\\st}{~|}\n\n\\begin{document}\n\n\\section{Definitions}\n\n\\begin{itemize}\n\\item Let\n  $$ \\blocks = \\{ 1, \\ldots, N \\} $$\nrepresent a set of $N$ blocks in a blocks-world problem.\nWe represent the table top with the symbol $\\tabtop$, and\ndefine $$ \\tblocks \\equiv \\blocks \\cup \\{ \\tabtop \\} $$\n  \n\\item Call any \n  $$ \\bstate \\subseteq \\blocks \\times \\tblocks $$\na {\\em part-state} of the blocks-world problem. The pairs of blocks in\n$\\bstate$ represent {\\em on} relations.\n\n\\item A part-state $\\bstate$ is {\\em legal} iff\n  \\begin{itemize}\n  \\item No block is on two things: $$\n    \\forall i \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not \\exists j' \\in \\tblocks \\have\n\tj \\neq j' \\wedge \\langle i , j' \\rangle \\in \\bstate\n  $$\n  \\item No two blocks are on the same block: $$\n    \\forall j \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not \\exists i' \\in \\blocks \\have\n\ti \\neq i' \\wedge \\langle i' , j \\rangle \\in \\bstate\n  $$\n  \\end{itemize}\n\n\\item A part-state is a {\\em total state} or just a {\\em state}\niff it is legal and every block is on something: $$\n    \\forall i \\in \\blocks \\have\n      \\exists j \\in \\tblocks \\have\n\t\\langle i , j \\rangle \\in \\bstate\n$$\n\n\\item A legal part-state $\\bstate^1$ is {\\em simpler} than\na legal part-state $\\bstate^2$ iff every block in $\\bstate^2$\nis either on the same block as on $\\bstate^1$ or on the\ntable: $$\n  \\forall \\langle i, j \\rangle \\in \\bstate^2 \\have\n    j = \\tabtop ~\\vee~ \\langle i, j \\rangle \\in \\bstate^1\n$$\nWe will justify this terminology in Lemma \\ref{simpler-states}\nbelow.\n  \n\\item A block $j$ is {\\em clear} in a part-state $\\bstate$ iff no block\nis above it: $$\n  \\not \\exists i \\in \\blocks \\have \\langle i , j \\rangle \\in \\bstate\n$$\nWe will define the set of blocks clear in a part-state $\\bstate$ by $$\n  \\bclear(\\bstate) = \\{ j \\in \\bstate \\st\n  \\not \\exists i \\have \\langle i , j \\rangle \\in \\bstate \\}\n$$\n\n\\item A block $i$ is {\\em above} a block $i'$ in a\npart-state $\\bstate$ if there is a sequence\nof blocks whose {\\em on} relationships lead from $i$ to $i'$: $$\n  \\exists \\{ j_1 , j_2 , \\ldots , j_k \\} \\subseteq \\blocks \\have\n    \\{ \\langle i , j_1 \\rangle , \\langle j_1 , j_2 \\rangle ,\n       \\ldots , \\langle j_k , i' \\rangle \\} \\subseteq \\bstate\n$$\n\n\\item A part-state $\\bstate$ is {\\em realizable} iff it\nis legal and no block is above itself.\n\n\\item {\\em Moves} are functions $m_{ij}$ which take a realizable part-state in\nwhich blocks $i$ and $j$ are clear to a\nrealizable part-state in which $i$ is on $j$.\n\nLet $i \\in \\blocks$ and $j \\in \\tblocks$, and let $C(ij)$ be the set\nof realizable part-states\nin which $i$ is clear, and either $j = \\tabtop$ or $j$ is clear.\nThen the domain of $m_{ij}$ is\n$C(ij)$, and the range is the set of all realizable part-states.\n\nDefine the partial functions\n$\\bon_i(\\bstate)$ mapping legal part-states to\nblocks such 
that\n$\\bon_i(\\bstate) = j$ when $\\langle i, j \\rangle \\in \\bstate$;\n$\\bon_i$ is undefined elsewhere.\n(In other words, $\\bon_i$ says\nwhich block block $i$ is on in a given state.)\nBy the definition of a legal part-state, these functions are well-defined.\nWe then define\n$m_{ij}$ by $$\n  m_{ij}(\\bstate) \\equiv\n    \\left ( \\bstate - \\{ \\langle i, \\bon_i(\\bstate) \\rangle \\} \\right )\n    \\cup \\{ \\langle i, j \\rangle \\}\n$$\nwhen $i$ and $j$ are clear in $\\bstate$; $m_{ij}$ is undefined\nelsewhere.  A move is {\\em legal} iff $m_{ij}$ is defined.\n\n\\item A {\\em move sequence} $M$\nof length $k$ from state $\\bstate^0$ to state $\\bstate^1$ consists of\na sequence of pairs $$\n  \\langle i_1, j_1 \\rangle , \\langle i_2 , j_2 \\rangle , \\ldots ,\n  \\langle i_k , j_k \\rangle \\in \\left ( \\blocks \\times \\tblocks \\right ) ^ \\ast\n$$\nsuch that $$\n  \\left ( m_{i_k j_k} \\circ \\cdots \\circ m_{i_2 j_2} \\circ\n  m_{i_1 j_1} \\right ) (\\bstate^0) = \\bstate^1\n$$\nWe allow ourselves the liberty of writing $M(\\bstate^0) = \\bstate^1$.\nA move sequence $M$ is {\\em legal} for a state $\\bstate$ iff\n$M(\\bstate)$ is defined.\n\n\\item A {\\em primitive-blocks-world} (PBW) {\\em problem} consists of two\nrealizable states: an\ninitial state $I$ and a goal state $G$.\nA {\\em solution} to the problem is a move sequence $M$ such that $M(I) = G$.\nWe say that a solution is {\\em optimal} if there is no solution\nof shorter length.\n\n\\item A {\\em stack} is a sequence of blocks $i_1, i_2, \\ldots, i_k$\nin a realizable\npart-state $\\bstate$ such that $$\n  \\{ \\langle i_1, i_2 \\rangle, \\langle i_2, i_3 \\rangle, \\ldots,\n  \\langle i_{k-1}, i_k \\rangle \\} \\subseteq \\bstate\n$$\nand such that $$\n  \\forall i \\in \\{ i_1, i_2, \\ldots, i_{k-1} \\} \\have\n    i \\neq \\tabtop\n$$\nA {\\em tower} is a stack such that $i_k = \\tabtop$.\n\n\\end{itemize}\n\n\\section{Solution Optimization}\n\nIn the search for an algorithm giving optimal solutions for PBW\nproblems, we make use of the following lemmas, all of which\nhave been proved previously elsewhere.  
All of these lemmas are\nproved using essentially the same reasoning:  If $\\bstate$ is a\nsolution, and we can construct from $\\bstate$ a solution $\\bstate'$ which\nis the same as $\\bstate$ except for deletion of one or more elements,\nthen $\\bstate$ is not an optimal solution.\n\n\\begin{lemma}{simpler-states}{Simpler States}\n\\given\nStates $\\bstate^1$ and $\\bstate^1$ such that $$\n\\bclear(\\bstate^1) \\subseteq \\bclear(\\bstate^2)\n$$\nA legal move sequence $M$ starting at $\\bstate^1$.\n\\prove\n$M$ is also a legal move sequence starting at $\\bstate^2$.\n\\proof\nConsider the states just after the first move $\\langle i, j \\rangle$ in $M$.\n\\begin{eqnarray*}\n  \\bstate^1_1 = m_{i j}(\\bstate^1) &=&\n  \\left ( \\bstate^1 - \\{ \\langle i, \\bon_i(\\bstate^1) \\rangle \\} \\right )\n  \\cup \\{ \\langle i, j \\rangle \\} \\\\\n  \\bstate^2_1 = m_{i j}(\\bstate^2) &=&\n  \\left ( \\bstate^2 - \\{ \\langle i, \\bon_i(\\bstate^2) \\rangle \\} \\right )\n  \\cup \\{ \\langle i, j \\rangle \\}\n\\end{eqnarray*}\nThis first move is legal, since it is legal in $\\bstate^1$ and\n$\\bclear(\\bstate^1) \\subseteq \\bclear(\\bstate^2)$, and the set of\nclear blocks after the move has the property that\n\\begin{eqnarray*}\n  \\bclear(\\bstate^1_1) &=& \\bclear (\n  \\left ( \\bstate^1 - \\{ \\langle i, \\bon_i(\\bstate^1) \\rangle \\} \\right )\n  \\cup \\{ \\langle i, j \\rangle \\} ) \\\\\n  &=&  \\left ( \\bstate^1 - \\{ \\langle i, \\bon_i(\\bstate^1) \\rangle \\} \\right )\n  - j\n\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{lemma}{table-to-table}{Table-To-Table Moves}\n\\given\nA PBW problem $\\langle I, G \\rangle$.\nAn optimal solution $M$ to $\\langle I, G \\rangle$.\n\\prove\nNo block moves from the table to the table in $M$.\n\\proof\nAssume the contrary of the lemma, i.e., that there is\nsome table-to-table move in $M$.\nUse this move to split $M$ \n$$\n  M = M^1 \\cdot \\langle i, \\tabtop \\rangle \\cdot M^2\n$$\nwhere $$\n  \\bon_i(M^1(I)) = \\tabtop\n$$\nLet $\\bstate^1 = M^1(I)$.\nApplying the definitions of the previous section yields\n\\begin{eqnarray*}\n  (\\langle i, \\tabtop \\rangle)(\\bstate^1)\n  &\\equiv& m_{i \\tabtop}(\\bstate^1) \\\\\n  &=& \\left ( \\bstate^1 - \\{ \\langle i, \\bon_i(\\bstate^1) \\rangle \\} \\right )\n  \\cup \\{ \\langle i, \\tabtop \\rangle \\} \\\\\n  &=& \\left ( \\bstate^1 - \\{ \\langle i, \\tabtop \\rangle \\} \\right )\n  \\cup \\{ \\langle i, \\tabtop \\rangle \\} \\\\\n  &=& \\bstate^1\n\\end{eqnarray*}\nThis says that the state at the start of the table-to-table\nmove is the same as the state at the end, and thus by the\ndefinition of a solution, the sequence obtained by deleting\nthis move from $M$ is also a solution.  But by the definition\nof optimality $M$ is therefore not optimal, and thus we have\na condradiction.\n\nTherefore, no such move can be in $M$.\n\\end{lemma}\n\n\n\\begin{lemma}{tabled}{Correctly Tabled Blocks}\n\\given\nA PBW problem $\\langle I, G \\rangle$, such that\na block $i$ is on the table in both $I$ and $G$.\nAn optimal solution $M$ to $\\langle I, G \\rangle$.\n\\prove\n$i$ doesn't move in $M$, i.e. 
$$\n  \\not \\exists j \\in \\tblocks \\have\n    M = \\ldots, \\langle i, j \\rangle, \\ldots\n$$\n\\proof\nAssume the contrary, i.e., that $i$ moves in $M$.\n\nWe first split $M$ such that $$\n  M = M^1 \\cdot \\langle i, j \\rangle \\cdot M^2 \\cdot\n  \\langle i, \\tabtop \\rangle \\cdot M^3\n$$\nwhere $M^1$ contains no moves of $i$, and\n$M^2$ contains no moves of $i$ to $\\tabtop$.\nTo see that this split is possible, we first note\nthat since by assumption $i$ moves in $M$, the first\nmove in the split exists.  Further, by Lemma\n\\ref{table-to-table}, $j \\neq \\tabtop$.\nLet \\begin{eqnarray*}\n\\bstate^1 &=& M^1(I) \\\\\n\\bstate^2 &=& (\\langle i, j \\rangle)(\\bstate^1) \\\\\n\\bstate^3 &=& (M^2 \\cdot \\langle i, \\tabtop \\rangle)(\\bstate^2)\n\\end{eqnarray*}\nSince\n$\\bon_i(G) = \\tabtop$, but $\\bon_i(\\bstate^2) \\neq \\tabtop$,\nthen the second move in the split exists as well.\n\nNow consider the subproblem $\\langle \\bstate^1, \\bstate^3$ and its known\nsolution $$\n  \\langle i, j \\rangle \\cdot M^2 \\cdot \\langle i, \\tabtop \\rangle\n$$\nWe will construct a shorter solution to the problem from this\nsolution.\n\nFor any solution $M$ to $\\langle I, G \\rangle$,\nlet $M_i$ be the sequence of moves obtained as follows:\nConsider any move $m = \\langle j, i \\rangle$ in $M$, and the\nstate $\\bstate$ just before that move.  Replace $m$ with the two-move\nsequence $$\n  m' = \\langle i, tabtop \\rangle, \\langle j, \\bon_i(\\bstate) \\rangle\n$$\nWe note that $M_i$ is still a sequence of\nlegal moves by Lemma \\ref{simpler-states}.\n\n\\end{lemma}\n\n\\end{document}\n", "meta": {"hexsha": "306c9b98e697a38081b978335ae37a2ca069e30e", "size": 9625, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/thm/bak/thm-1.tex", "max_stars_repo_name": "BartMassey/blocks", "max_stars_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/thm/bak/thm-1.tex", "max_issues_repo_name": "BartMassey/blocks", "max_issues_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/thm/bak/thm-1.tex", "max_forks_repo_name": "BartMassey/blocks", "max_forks_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-15T18:45:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-15T18:45:13.000Z", "avg_line_length": 35.0, "max_line_length": 79, "alphanum_fraction": 0.6582857143, "num_tokens": 3341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388167733099, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5956707905132381}}
{"text": "% Created 2019-09-14 Sat 01:34\n% Intended LaTeX compiler: pdflatex\n\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\author{Subhajit Sahu (2018801013)}\n\\date{\\today}\n\\title{Arithmetic Lang}\n\\hypersetup{\n pdfauthor={Subhajit Sahu (2018801013)},\n pdftitle={Arithmetic Lang},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.1.9)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\n\n\n\\section{Imports}\n\\label{sec:orgdbb9a84}\n\nI needed to use \\texttt{isPrefixOf} for doing string replacement. It is part of\nmodule \\texttt{Data.List}. In the \\texttt{main} function, using \\texttt{putStr} does not flush\nthe string to \\texttt{stdout} immediately. For that, module \\texttt{System.IO} nneded\nto be imported.\n\n\\begin{verbatim}\nimport Data.List\nimport System.IO\n\\end{verbatim}\n\n\n\n\\section{Values}\n\\label{sec:org0f91885}\n\nUnlike the arithmetic language interpreter (using racket), discussed in\nclass, this arithmetic lang needs to support both numbers and booleans.\nTo support that, i used a boxing type, here called \\texttt{Value}. To keep the\ncode simple, i made it an instance of \\texttt{Eq}, \\texttt{Show}, \\texttt{Num}, and \\texttt{Fractional}.\nNummbers are called \\texttt{Numv}, and booleans \\texttt{Boolv}, to distinguish them from\nbuiltin typeclass / type.\n\n\\begin{verbatim}\ndata Value =\n  Numv  Float |\n  Boolv Bool\n  deriving (Eq)\n\ninstance Show Value where\n  show (Numv x)  = show x\n  show (Boolv x) = show x\n\ninstance Num Value where\n  (Numv x) + (Numv y) = Numv $ x + y\n  (Numv x) * (Numv y) = Numv $ x * y\n  abs (Numv x) = Numv $ abs x\n  signum (Numv x) = Numv $ signum x\n  fromInteger x = Numv $ fromInteger x\n  negate (Numv x) = Numv $ negate x\n\ninstance Fractional Value where\n  (Numv x) / (Numv y) = Numv $ x / y\n  fromRational x = Numv $ fromRational x\n\\end{verbatim}\n\n\n\n\\section{Abstract Syntax Tree}\n\\label{sec:org187dc02}\n\nThe AST is as expected, and again numbers and booleans are called differently.\n\n\\begin{verbatim}\ndata Ast =\n  Numa   Float   |\n  Boola  Bool    |\n  Add    Ast Ast |\n  Mul    Ast Ast |\n  Sub    Ast Ast |\n  Div    Ast Ast |\n  Equals Ast Ast |\n  IsZero Ast\n  deriving (Eq, Read, Show)\n\\end{verbatim}\n\n\n\n\\section{Run}\n\\label{sec:org36d132d}\n\nThere is a \\texttt{main} function which provides the REPL. It simply accepts a line\nand shows the output \\texttt{Value} of \\texttt{run} function. Use an empty (null) line to\nterminate. After displaying the \"arithmetic: \" prompt, it was necessary to\nflush to \\texttt{stdout}.\n\n\\begin{verbatim}\nmain = do\n  putStr \"arithmetic: \"\n  hFlush stdout\n  exp <- getLine\n  if null exp\n    then return ()\n    else do\n      putStrLn (show . run $ exp)\n      main\n\\end{verbatim}\n\nThe run function is simply parses and evaluates a string.\n\n\\begin{verbatim}\nrun :: String -> Value\nrun = eval . parse\n\\end{verbatim}\n\n\n\n\\section{Evaluator}\n\\label{sec:org48765be}\n\nThe \\texttt{eval} function pattern matches with all constructors of AST, and does the\ndesired action. It finally returns a \\texttt{Value}. 
Since \\texttt{Value} is already an instance\nof \\texttt{Eq}, \\texttt{Num}, \\texttt{Fractional} we can directly use arithmetic operators. However,\nequals operator is an exception, since it always returns a \\texttt{Bool} and thus it\nhad to be boxed in a \\texttt{Boolv}.\n\n\\begin{verbatim}\neval :: Ast -> Value\neval (Numa  x) = Numv  x\neval (Boola x) = Boolv x\neval (Add x y) = (eval x) + (eval y)\neval (Mul x y) = (eval x) * (eval y)\neval (Sub x y) = (eval x) - (eval y)\neval (Div x y) = (eval x) / (eval y)\neval (Equals x y) = Boolv $ (eval x) == (eval y)\neval (IsZero x)   = Boolv $ (eval x) == Numv 0\n\\end{verbatim}\n\n\n\n\\section{Parser}\n\\label{sec:org1edeaab}\n\nI wanted to depend upon the \\texttt{read} function to generate the AST. But a simple\n\\texttt{read} on the string would not work since it would read numbers as \\texttt{Num} and\nnot \\texttt{Numa} (AST needs that). Same is true for the boolean values. It would\nalso not work with operators.\n\nHowever if the input string is transformed into a suitable format, using our\nAST constructors, read should work fine. So, as you can see we split the\ninput into words, transform it, and rejoin it back, before feeding it\nback to read, which then directly gives us the AST.\n\nAlso, since brackets and operators, or brackets and numbers are not separated\nby a space, it causes problems for word splitting, so it needed to be done\nbeforehand.\n\n\\begin{verbatim}\nparse :: String -> Ast\nparse s = (read . unwords . map token . words $ bpad) :: Ast\n  where bpad = replace \"(\" \" ( \" . replace \")\" \" ) \" $ s\n\\end{verbatim}\n\nHere is the token replacement strategy.\n\n\\begin{verbatim}\ntoken :: String -> String\ntoken \"+\" = \"Add\"\ntoken \"*\" = \"Mul\"\ntoken \"-\" = \"Sub\"\ntoken \"/\" = \"Div\"\ntoken \"=\" = \"Equals\"\ntoken \"zero?\" = \"IsZero\"\ntoken t\n  | isFloat t  = \"(Numa \"  ++ t ++ \")\"\n  | isBool  t  = \"(Boola \" ++ t ++ \")\"\n  | otherwise  = t\n\\end{verbatim}\n\nAnd, here are a few utility functions we are using.\n\n\\begin{verbatim}\nreplace :: (Eq a) => [a] -> [a] -> [a] -> [a]\nreplace _ _ [] = []\nreplace from to all@(x:xs)\n  | from `isPrefixOf` all = to ++ (replace from to . 
drop (length from) $ all)\n  | otherwise             = x : replace from to xs\n\nisFloat :: String -> Bool\nisFloat s = case (reads s) :: [(Float, String)] of\n  [(_, \"\")] -> True\n  _         -> False\n\nisBool :: String -> Bool\nisBool s = case (reads s) :: [(Bool, String)] of\n  [(_, \"\")] -> True\n  _         -> False\n\\end{verbatim}\n\n\n\n\\section{This is where you put it all together}\n\\label{sec:orgfadf544}\n\n\\begin{verbatim}\nimport Data.List\nimport System.IO\n\n\ndata Value =\n  Numv  Float |\n  Boolv Bool\n  deriving (Eq)\n\ninstance Show Value where\n  show (Numv x)  = show x\n  show (Boolv x) = show x\n\ninstance Num Value where\n  (Numv x) + (Numv y) = Numv $ x + y\n  (Numv x) * (Numv y) = Numv $ x * y\n  abs (Numv x) = Numv $ abs x\n  signum (Numv x) = Numv $ signum x\n  fromInteger x = Numv $ fromInteger x\n  negate (Numv x) = Numv $ negate x\n\ninstance Fractional Value where\n  (Numv x) / (Numv y) = Numv $ x / y\n  fromRational x = Numv $ fromRational x\n\n\ndata Ast =\n  Numa   Float   |\n  Boola  Bool    |\n  Add    Ast Ast |\n  Mul    Ast Ast |\n  Sub    Ast Ast |\n  Div    Ast Ast |\n  Equals Ast Ast |\n  IsZero Ast\n  deriving (Eq, Read, Show)\n\n\nmain = do\n  putStr \"arithmetic: \"\n  hFlush stdout\n  exp <- getLine\n  if null exp\n    then return ()\n    else do\n      putStrLn (show . run $ exp)\n      main\n\nrun :: String -> Value\nrun = eval . parse\n\neval :: Ast -> Value\neval (Numa  x) = Numv  x\neval (Boola x) = Boolv x\neval (Add x y) = (eval x) + (eval y)\neval (Mul x y) = (eval x) * (eval y)\neval (Sub x y) = (eval x) - (eval y)\neval (Div x y) = (eval x) / (eval y)\neval (Equals x y) = Boolv $ (eval x) == (eval y)\neval (IsZero x)   = Boolv $ (eval x) == Numv 0\n\nparse :: String -> Ast\nparse s = (read . unwords . map token . words $ bpad) :: Ast\n  where bpad = replace \"(\" \" ( \" . replace \")\" \" ) \" $ s\n\ntoken :: String -> String\ntoken \"+\" = \"Add\"\ntoken \"*\" = \"Mul\"\ntoken \"-\" = \"Sub\"\ntoken \"/\" = \"Div\"\ntoken \"=\" = \"Equals\"\ntoken \"zero?\" = \"IsZero\"\ntoken t\n  | isFloat t  = \"(Numa \"  ++ t ++ \")\"\n  | isBool  t  = \"(Boola \" ++ t ++ \")\"\n  | otherwise  = t\n\n\nreplace :: (Eq a) => [a] -> [a] -> [a] -> [a]\nreplace _ _ [] = []\nreplace from to all@(x:xs)\n  | from `isPrefixOf` all = to ++ (replace from to . 
drop (length from) $ all)\n  | otherwise             = x : replace from to xs\n\nisFloat :: String -> Bool\nisFloat s = case (reads s) :: [(Float, String)] of\n  [(_, \"\")] -> True\n  _         -> False\n\nisBool :: String -> Bool\nisBool s = case (reads s) :: [(Bool, String)] of\n  [(_, \"\")] -> True\n  _         -> False\n\\end{verbatim}\n\\end{document}\n", "meta": {"hexsha": "297de22060abc17f5ce8b86859d5778b9d6967ea", "size": 7905, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "arithmetic.tex", "max_stars_repo_name": "interpreterz/arithmetic2", "max_stars_repo_head_hexsha": "af596646ff985924532507d234f16e15d6c55f4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "arithmetic.tex", "max_issues_repo_name": "interpreterz/arithmetic2", "max_issues_repo_head_hexsha": "af596646ff985924532507d234f16e15d6c55f4c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "arithmetic.tex", "max_forks_repo_name": "interpreterz/arithmetic2", "max_forks_repo_head_hexsha": "af596646ff985924532507d234f16e15d6c55f4c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.9369085174, "max_line_length": 104, "alphanum_fraction": 0.6521189121, "num_tokens": 2530, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7772998663336157, "lm_q1q2_score": 0.5956399463430159}}
{"text": "\\section{Sorts}\n\\label{sec-sort}\n\n\\subsection{Introduction}\n\nThe {\\GF} logic is sorted, although you are not forced to make any\nsort declarations.\nIf you never use any of the system's sort commands, you can use the \nlogic as if it were unsorted. {\\GF} makes sure that everything works correctly\nwith appropriate default sorts\n(see later in this section).\n\nSorts in {\\GF} are specially-handled unary predicates: that is, aside from the\nusual features of unary predicates, they possess some additional ones.\nThis choice won't be deeply justified here.\nThe resulting logic can, however, be proved to be \nconsistent and to allow more compact axiomatizations and proofs\nthan the unsorted one.\n\n\\subsection{Defining the sort hierarchy}\nWe present here the commands for the declaration of new sorts, for the\ndefinition of the generality relations between sorts, for the definition\nof a most general sort and for the declaration sort extensions.\n\n\nThe generality relations can be intuitively specified in this way:\nstating that sort $S_1$ is {\\it weakly more general} than sort $S_2$\nis equivalent to\nspecifying an axiom of the form $\\forall x(S_2(x) \\imp S_1(x))$.\nIn this case we also say that $S_2$ is {\\it weakly less general} than $S_1$.\nTwo sorts are said to be {\\it equivalent} when they are mutually\n(weakly) more general.\nWhen $S_1$ is {\\it weakly more (less) general} than $S_2$ and it is not\nequivalent to it, then we say that $S_1$ is {\\it strictly more (less)\ngeneral} than $S_2$.\nFinally, we say that $S$ is a {\\em most general sort} if it is weakly more\ngeneral than any sort. {\\GF} has a default most general sort, {\\tt\nUNIVERSAL}, that is used when no explicit sort information is provided to\nthe system.\n\n\\subsection{Sorted declarations}\n\nA sort is associated to every {\\indsym} at the moment of its declaration.\nIn unsorted\ndeclarations, when no sort is specified, the default most general sort\n{\\tt UNIVERSAL} is taken to be the sort of the symbol.\n\nThe sort information associated with a {\\funsym} is a set of so-called \n{\\it fmaps},\nthat is of $n+1$-tuples of sorts, where $n$ is the arity of the \\funconst.\nThe fmaps for $f$ specify the sort of a term whose external {\\funsym} is \n$f$, depending on the sort of its arguments. 
When a {\\funsym} is declared\nthe $(n+1)$-tuple ({\\tt UNIVERSAL} \\ldots {\\tt UNIVERSAL}) is always added to its\n{\\it fmaps}.\n\nUsing the sort information of symbols, we now give the\ndefinition of the useful notion of the {\\it sort of a term}:\nwe say that $t$ is a term of sort $S$ \nif and only if\n%\n\\begin{itemize}\n\\item\n  $t$ is an individual symbol, and its sort is weakly less general than $S$,\n  or\n\\item\n  $t=f(t_1,\\dots,t_n)$, $f$ has an {\\it fmap} of the form $(S_1,\\dots,S_n,\n  S_f)$, $S_f$ is weakly less general than $S$ and all $t_i$ are terms of sort\n  $S_i$.\n\\end{itemize}\n", "meta": {"hexsha": "c327dce80188822f1d62c1bd438d3cd99f85452c", "size": 2825, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user/language/introsort.tex", "max_stars_repo_name": "getfol/GETFOL", "max_stars_repo_head_hexsha": "b861b00f2301b826f058010b42555789e2a9401d", "max_stars_repo_licenses": ["DOC", "Unlicense"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-08-25T01:02:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-05T05:17:17.000Z", "max_issues_repo_path": "doc/user/language/introsort.tex", "max_issues_repo_name": "getfol/GETFOL", "max_issues_repo_head_hexsha": "b861b00f2301b826f058010b42555789e2a9401d", "max_issues_repo_licenses": ["DOC", "Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user/language/introsort.tex", "max_forks_repo_name": "getfol/GETFOL", "max_forks_repo_head_hexsha": "b861b00f2301b826f058010b42555789e2a9401d", "max_forks_repo_licenses": ["DOC", "Unlicense"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-25T02:05:54.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-25T02:05:54.000Z", "avg_line_length": 40.3571428571, "max_line_length": 79, "alphanum_fraction": 0.7447787611, "num_tokens": 779, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5956399423897648}}
{"text": "\\chapter{Convergence of Random Variables}\n\n% 1\n\\begin{ex}\n  \\begin{enumerate}[(a)]\n    \\item[]\n    \\item Note that since\n          \\begin{align*}\n            \\E{\\overline{X}_n}\n             & =\\E{\\frac{1}{n}\\sum_{i=1}^n X_i}\n            =\\frac{1}{n}\\sum_{i=1}^n \\E{X_i}\n            =\\mu,                                 \\\\\n            \\var{\\overline{X}_n}\n             & =\\var{\\frac{1}{n}\\sum_{i=1}^n X_i}\n            =\\frac{1}{n^2}\\sum_{i=1}^n \\var{X_i}\n            =\\frac{\\sigma^2}{n},\n          \\end{align*}\n          we have\n          \\begin{align*}\n            \\E{S_n^2}\n             & =\\E{\\frac{1}{n-1}\\sum_{i=1}^n(X_i-\\overline{X}_n)^2}                                                           \\\\\n             & =\\frac{1}{n-1}\\E{\\sum_{i=1}^n\\left[X_i^2-2X_i\\overline{X}_n+\\overline{X}_n^2\\right]}                           \\\\\n             & =\\frac{1}{n-1}\\E{\\sum_{i=1}^n X_i^2-2\\left(\\sum_{i=1}^n X_i\\right)\\overline{X}_n+\\sum_{i=1}^n\\overline{X}_n^2} \\\\\n             & =\\frac{1}{n-1}\\E{\\sum_{i=1}^nX_i^2-2n\\overline{X}_n^2+n\\overline{X}_n^2}                                       \\\\\n             & = \\frac{n}{n-1}\\left(\\E{X_1^2}-\\E{\\overline{X}_n^2}\\right)                                                     \\\\\n             & = \\frac{n}{n-1}\\left(\\var{X_1}+\\E{X_1}^2-\\var{\\overline{X}_n}-\\E{\\overline{X}_n}^2\\right)                      \\\\\n             & = \\frac{n}{n-1}\\left(\\sigma^2+\\mu^2-\\frac{\\sigma^2}{n}-\\mu^2 \\right)                                           \\\\\n             & =\\sigma^2.\n          \\end{align*}\n    \\item We have\n          \\begin{align*}\n            S^2\n             & =\\frac{1}{n-1}\\sum_{i=1}^n(X_i-\\overline{X}_n)^2                                  \\\\\n             & =\\frac{1}{n-1}\\left(\\sum_{i=1}^nX_i^2-2n\\overline{X}_n^2+n\\overline{X}_n^2\\right) \\\\\n             & =\\frac{n}{n-1}n^{-1}\\sum_{i=1}^nX_i^2-\\frac{n}{n-1}\\overline{X}_n^2,\n          \\end{align*}\n          where $\\frac{n}{n-1}\\to 1$ as $n\\to\\infty$. Note that then\n          $\\overline{X}_n\\to \\mu$ in probability by the law of large numbers,\n          and therefore by Theorem 5.5, part (d), $\\overline{X}_n^2\\to \\mu^2$\n          in probability. Likewise, we can apply the law of large numbers to\n          conclude that $n^{-1}\\sum_{i=1}^nX_i^2\\xrightarrow{P} \\E{X_1^2}$\n          where $\\E{X_1^2}=\\sigma^2+\\mu^2$. 
Combining all of these results, it\n          follows that\n          \\[\n            S^2\\xrightarrow{P}\\sigma^2+\\mu^2-\\mu^2=\\sigma^2.\n          \\]\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}\n  Note that\n  \\begin{align*}\n    \\E{[X_n-b]^2}\n     & =\\E{X_n^2-2bX_n+b^2}                 \\\\\n     & =\\var{X_n}+\\E{X_n}^2-2b\\E{X_n}+b^2   \\\\\n     & =\\var{X_n}+\\left[\\E{X_n}-b\\right]^2.\n  \\end{align*}\n  If $\\var{X_n}\\to 0$ and $\\E{X_n}\\to b$ as\n  $n\\to \\infty$, it follows that $\\E{[X_n-b]^2}\\to 0$ as well and hence\n  $X_n$ converges to $b$ in quadratic mean.\n\n  Conversely, if $\\E{[X_n-b]^2}\\to 0$ as $n\\to\\infty$, it follows that\n  \\[\n    0=\\lim_{n\\to\\infty}\\left(\\var{X_n}+\\left[\\E{X_n}-b\\right]^2\\right),\n  \\]\n  or, since both summands are non-negative, that\n  \\[\n    \\lim_{n\\to\\infty}\\var{X_n}=0,\\text{ and }\n    \\lim_{n\\to\\infty}\\left[\\E{X_n}-b\\right]^2=0.\n  \\]\n  If $\\lim_{n\\to\\infty}\\left[\\E{X_n}-b\\right]^2=0$, however, it follows that\n  $\\lim_{n\\to\\infty}\\E{X_n}=b$.\n\\end{ex}\n\n\\begin{ex}\n  Let $\\var{X_i}=\\sigma^2$. Let $Y_i=X_i-\\mu$. Note that then\n  \\[\n    \\mu_Y=\\E{Y_i}=\\E{X_i-\\mu}=0,\n  \\]\n  \\[\n    \\sigma^2_Y=\\var{Y_i}=\\var{X_i-\\mu}=\\sigma^2,\n  \\]\n  \\begin{align*}\n    \\E{(\\overline{X}_n-\\mu)^2}\n     & =\\E{\\left[\\frac{1}{n}\\sum_{i=1}^n(X_i-\\mu)\\right]^2}            \\\\\n     & =\\frac{1}{n^2}\\E{\\left(\\sum_{i=1}^n Y_i\\right)^2}               \\\\\n     & =\\frac{1}{n^2}\\E{\\sum_{i=1}^n Y_i^2 +\n      \\sum_{j=1}^n\\sum_{\\substack{k=1                                  \\\\ j\\neq k}}^n Y_jY_k} \\\\\n     & =\\frac{1}{n^2}\\left[n(\\sigma_Y^2+\\mu_Y^2)+n(n-1)\\mu_Y^2 \\right] \\\\\n     & =\\frac{\\sigma^2}{n},\n  \\end{align*}\n  and $\\E{(\\overline{X}_n-\\mu)^2}\\to 0$ as $n\\to\\infty$. Therefore,\n  $\\overline{X}_n$ converges to $\\mu$ in quadratic mean.\n\\end{ex}\n\n\\begin{ex}\n  Fix an $\\epsilon>0$. Note that for any random variable $X$,\n  \\begin{align*}\n    \\lim_{n\\to\\infty}\\P{|X_n-X|>\\epsilon}\n     & =\\lim_{n\\to\\infty}\\left[\\P{\\left|\\frac{1}{n}-X\\right|>\\epsilon}\\P{X_n=\\frac{1}{n}}\n      +\\P{|n-X|>\\epsilon}\\P{X_n=n}\\right]                                                    \\\\\n     & =\\lim_{n\\to\\infty}\\P{\\left|\\frac{1}{n}-X\\right|>\\epsilon}\\left(1-\\frac{1}{n^2}\\right)\n    +\\lim_{n\\to\\infty}\\P{|n-X|>\\epsilon}\\frac{1}{n^2}                                        \\\\\n     & =\\P{|X|>\\epsilon},\n  \\end{align*}\n  and thus, $X_n$ converges to $X$ in probability if and only if $X$ is a point\n  mass at $0$.\n\n  Recall that if $X_n$ converges in quadratic mean to $X$, by Theorem 5.4,\n  $X_n$ also converges to $X$ in probability. However, we know that $X_n$\n  only converges to $0$ in probability, and thus it suffices to check whether\n  $X_n$ converges to $0$ in quadratic mean as well. 
Since\n  \\begin{align*}\n    \\lim_{n\\to\\infty}\\E{(X_n-0)^2}\n     & =\\lim_{n\\to\\infty}\\E{X_n^2}                                                              \\\\\n     & =\\lim_{n\\to\\infty}\\left[\\frac{1}{n^2}\\P{X_n=\\frac{1}{n}}+n^2\\P{X_n=n}\\right]             \\\\\n     & =\\lim_{n\\to\\infty}\\left[\\frac{1}{n^2}\\left(1-\\frac{1}{n^2}\\right)+\\frac{n^2}{n^2}\\right] \\\\\n     & =\\lim_{n\\to\\infty}\\left[1+\\frac{1}{n^2}-\\frac{1}{n^4}\\right]                             \\\\\n     & = 1,\n  \\end{align*}\n  $X_n$ does not converge in quadratic mean.\n\\end{ex}\n\n% 5\n\\begin{ex}\n  First, note that if $X_i$ is a Bernoulli random variable, $X_i^2=X_i$. Next,\n  recall that by Theorem 5.4, if a sequence of random variables converges to a\n  distribution in quadratic mean, it converges to the same distribution in\n  probability as well. Thus, it suffices to establish convergence in quadratic\n  mean. However,\n  \\begin{align*}\n    \\E{\\left[\\frac{1}{n}\\sum_{i=1}^nX_i^2 -p\\right]^2}\n     & =\\frac{1}{n^2}\\E{\\left[\\sum_{i=1}^n(X_i -p)\\right]^2}                   \\\\\n     & =\\frac{1}{n^2}\\E{\\sum_{i=1}^n(X_i-p)^2 +\\sum_{j=1}^n\\sum_{\\substack{k=1 \\\\ j\\neq k}}^n (X_j-p)(X_k-p)} \\\\\n     & =\\frac{1}{n^2}\\E{\\sum_{i=1}^n(X_i-\\E{X_i})^2}                           \\\\\n     & =\\frac{1}{n^2}\\left[n\\left(\\var{X_1} \\right) \\right]                    \\\\\n     & =\\frac{p(1-p)}{n},\n  \\end{align*}\n  which converges to $0$ as $n$ goes to infinity.\n\\end{ex}\n\n\\begin{ex}\n  Let $\\overline{X}$ be the average height of the $100$ random men. By the\n  central limit theorem, we know that we can approximate $\\overline{X}$ by $Y$,\n  an $N(68, 2.6^2/100)$ distribution. Hence,\n  \\begin{align*}\n    \\P{\\overline{X}\\geq 72}\n    \\approx \\P{Y\\geq 72}\n    = \\P{\\frac{Y-68}{0.26}\\geq 15.385}\n    =1-\\Phi(15.385)\n    \\approx 0.\n  \\end{align*}\n\n  \\noindent\n  \\textit{Note: In the second printing of the book, the problem asks for the\n    probability that the average height of the men will be at least 68 inches.\n    This is trivial, since that is also the population mean. The problem is\n    corrected in the errata on the author's website to 72 inches instead. }\n\\end{ex}\n\n\\begin{ex}\n  Let $\\lambda_n=1/n$ for $n=1,2,\\ldots$. Let\n  $X_n\\sim\\text{Poisson}(\\lambda_n)$. Then\n  \\begin{enumerate}\n    \\item Fix an $\\epsilon>0$. Then,\n          \\begin{align*}\n            \\lim_{n\\to\\infty}\\P{|X_n|>\\epsilon}\n             & \\leq \\lim_{n\\to\\infty}\\P{|X_n|>\\min\\{\\epsilon, 1\\}} \\\\\n             & \\leq 1-\\lim_{n\\to\\infty}\\P{X_n=0}                   \\\\\n             & =1-\\lim_{n\\to\\infty}e^{-1/n}                        \\\\\n             & =0.\n          \\end{align*}\n    \\item Let $Y_n=nX_n$. Then, for $\\epsilon>0$,\n          \\[\n            \\lim_{n\\to\\infty}\\P{|Y_n|>\\epsilon}\n            \\leq 1-\\lim_{n\\to\\infty}\\P{Y_n=0}\n            =1-\\lim_{n\\to\\infty}e^{-1/n}\n            =0.\n          \\]\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}\n  Recall that the variance of a Poisson distribution is equal to its mean. By\n  the central limit theorem, we can approximate\n  $\\frac{1}{n}\\sum_{i=1}^n X_i$ by an $N(1, 1/n)$ distribution, and therefore\n  can approximate $Y$ by an $N(n, n)$ distribution. Hence, with $n=100$,\n  \\[\n    \\P{Y<90}\n    =\\P{Z<\\frac{90-n}{\\sqrt{n}}}\n    =\\P{Z<-1}\n    \\approx 0.15865.\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  Let $0<\\epsilon<e$. 
Note that\n  \\[\n    \\P{X-X_n=x}=\\begin{cases}\n      \\frac{n-1}{n} & x=0,     \\\\\n      \\frac{1}{n}   & x=e^{n},\n    \\end{cases}\n  \\]\n  and that therefore\n  \\[\n    \\P{|X_n-X|>\\epsilon}=1/n,\n  \\]\n  which goes to $0$ as $n$ goes to infinity. Hence, $X_n$ converges to $X$ in\n  probability. By Theorem 5.4 it follows that $X_n$ converges to $X$ in\n  distribution as well.\n\n  Finally, note that\n  \\[\n    \\E{(X-X_n)^2}=\\frac{e^{2n}}{n},\n  \\]\n  which does not converge to $0$ as $n\\to\\infty$. Therefore, $X_n$ does not\n  converge to $X$ in quadratic mean.\n\\end{ex}\n\n% 10\n\\begin{ex}\n  Note that\n  \\[\n    \\P{|Z|>t}\n    =\\P{|Z|^k>t^k}\n    \\leq \\frac{\\E{|Z|^k}}{t^k}\n  \\]\n  by applying Markov's inequality to $|Z|^k$. We proved that $\\E{|Z|^k}$\n  exists for any non-negative integer $k$ in Exercise 4.6, and note that we can\n  extend this to all $k>0$, since\n  \\begin{align*}\n    \\E{|Z|^k}\n     & =\\int_0^\\infty\\! 2z^k \\phi(z)\\,\\d{z}                      \\\\\n     & \\leq \\int_0^1\\! 2z^{\\lfloor k\\rfloor} \\phi(z)\\,\\d{z}\n    + \\int_1^\\infty\\! 2z^{\\lceil k\\rceil} \\phi(z)\\,\\d{z}         \\\\\n     & \\leq \\int_0^\\infty\\! 2z^{\\lfloor k\\rfloor} \\phi(z)\\,\\d{z}\n    + \\int_0^\\infty\\! 2z^{\\lceil k\\rceil} \\phi(z)\\,\\d{z}         \\\\\n     & =\\E{|Z|^{\\lfloor k\\rfloor}} + \\E{|Z|^{\\lceil k\\rceil}}.\n  \\end{align*}\n\n  By Mill's inequality,\n  \\[\n    \\P{|Z|>t}\\leq \\sqrt{\\frac{2}{\\pi}}\\frac{e^{-t^2/2}}{t}.\n  \\]\n\n  Note that this bound is $O(e^{-t^2/2}/t)$ instead of $O(t^{-k})$ and that\n  therefore, for $t$ sufficiently large, it provides a tighter bound.\n\\end{ex}\n\n\\begin{ex}\n  To show convergence in distribution, note that $Y_n=\\sqrt{n} X_n$ is a\n  standard normal random variable, and that therefore\n  \\[\n    F_{X_n}(x)=F_{Y_n}(\\sqrt{n} x)=\\Phi(\\sqrt{n} x),\n  \\]\n  and so\n  \\[\n    \\lim_{n\\to\\infty} F_{X_n}(x)\n    =\\lim_{n\\to\\infty} \\Phi(\\sqrt{n}x)\n    =\\begin{cases}\n      0   & x < 0, \\\\\n      1/2 & x = 0, \\\\\n      1   & x > 0.\n    \\end{cases}\n  \\]\n  The disagreement with the CDF of a point mass at $0$ when $x=0$ does not matter,\n  since $x=0$ is not a continuity point of the limiting CDF.\n\n  To show convergence in probability, fix an $\\epsilon>0$. Then\n  \\begin{align*}\n    \\P{|X_n-X|>\\epsilon}\n     & \\leq \\P{|X_n|+|X|>\\epsilon}                                      \\\\\n     & \\leq \\P{|X_n|>\\epsilon/2}+\\P{|X|>\\epsilon/2}                     \\\\\n     & = 1 - \\P{-\\epsilon/2\\leq X_n\\leq \\epsilon/2}\n    + 1 - \\P{-\\epsilon/2\\leq X\\leq \\epsilon/2}                          \\\\\n     & =1-\\left[\\Phi\\left(\\sqrt{n}\\epsilon/2\\right)-\\Phi(-\\sqrt{n}\\epsilon/2)\\right]\n    +1 - F(\\epsilon/2) + F(-\\epsilon/2)                                 \\\\\n     & =1-\\left[\\Phi\\left(\\sqrt{n}\\epsilon/2\\right)-\\Phi(-\\sqrt{n}\\epsilon/2)\\right],\n  \\end{align*}\n  which goes to $0$ as $n$ goes to infinity since the expression between the\n  square brackets is the probability that a standard normal random variable is\n  between $-\\sqrt{n}\\epsilon/2$ and $\\sqrt{n}\\epsilon/2$. Thus, $X_n$ converges\n  to $X$ in probability.\n\n\\end{ex}\n\n\\begin{ex}\n  Let $X,X_1,X_2,\\ldots$ be random variables that are positive and integer\n  valued. Let $F$ be the CDF of $X$ and $F_i$ the CDF of $X_i$. Suppose that\n  $X_n\\rightsquigarrow X$. 
Then, since the points $k\\pm\\tfrac{1}{2}$ are continuity points of $F$ (as $X$ is\n  integer valued),\n  \\begin{align*}\n    \\lim_{n\\to\\infty}\\P{X_n=k}\n     & =\\lim_{n\\to\\infty}\\left[F_n(k+\\tfrac{1}{2})-F_n(k-\\tfrac{1}{2})\\right]     \\\\\n     & =\\lim_{n\\to\\infty}F_n(k+\\tfrac{1}{2})-\\lim_{n\\to\\infty}F_n(k-\\tfrac{1}{2}) \\\\\n     & =F(k+\\tfrac{1}{2}) - F(k-\\tfrac{1}{2})                                     \\\\\n     & =\\P{X=k}.\n  \\end{align*}\n\n  Conversely, suppose that\n  \\[\n    \\lim_{n\\to\\infty}\\P{X_n=k}=\\P{X=k},\n  \\]\n  for all $k$. Then, for $k$ a positive integer,\n  \\begin{align*}\n    \\lim_{n\\to\\infty} F_n(k)\n     & =\\lim_{n\\to\\infty}\\sum_{i=1}^k \\P{X_n=i} \\\\\n     & =\\sum_{i=1}^k\\lim_{n\\to\\infty} \\P{X_n=i} \\\\\n     & =\\sum_{i=1}^k \\P{X=i}                    \\\\\n     & =F(k),\n  \\end{align*}\n  and, since the random variables can only take on positive integer values, it\n  follows that $X_n\\rightsquigarrow X$.\n\\end{ex}\n\n\\begin{ex}\n  Note that\n  \\begin{align*}\n    \\P{X_n\\leq x}\n     & =\\P{n\\min\\{Z_1,\\ldots, Z_n\\}\\leq x}              \\\\\n     & =\\P{\\min\\{Z_1,\\ldots, Z_n\\}\\leq x/n}             \\\\\n     & =1 - \\P{Z_1>x/n}\\cdots\\P{Z_n> x/n}               \\\\\n     & =1-\\left(1-\\int_{0}^{x/n}\\!f(t)\\,\\d{t}\\right)^n,\n  \\end{align*}\n  where the lower limit of integration follows from the fact that $\\P{Z_i>0}=1$.\n  Let $F(x)=\\int_0^x\\!f(t)\\,\\d{t}$ and note that then\n  \\begin{align*}\n    \\lim_{n\\to\\infty}\\P{X_n\\leq x}\n     & =1-\\lim_{n\\to\\infty}\\left[1-F\\left(\\frac{x}{n}\\right)\\right]^n \\\\\n     & =1-\\lim_{n\\to\\infty}\\left[1-F(0)-F'(0)\\left(\\frac{x}{n}\\right)\n      -h\\left(\\frac{x}{n}\\right)\\left(\\frac{x}{n}\\right)\\right]^n\n     & (\\text{Taylor's theorem})                                      \\\\\n     & =1-\\lim_{n\\to\\infty}\\left[\\left(1-\\frac{\\lambda x}{n}\\right)\n      -h\\left(\\frac{x}{n}\\right)\\left(\\frac{x}{n}\\right)\\right]^n     \\\\\n     & =1-\\lim_{n\\to\\infty}\\left(1-\\frac{\\lambda x}{n}\\right)^n,\n  \\end{align*}\n  by binomial expansion and using the fact that the $h(x)$ error term from\n  Taylor's theorem goes to $0$ as $x$ goes to $0$.\n\n  Finally, taking the limit gives us\n  \\begin{align*}\n    \\lim_{n\\to\\infty}\\P{X_n\\leq x}=1-e^{-\\lambda x},\n  \\end{align*}\n  the CDF of an $\\text{Exponential}(\\lambda)$ distribution.\n\\end{ex}\n\n\\begin{ex}\n  Let $X_1,\\ldots,X_n\\sim\\text{Uniform(0,1)}$. Then\n  \\[\n    \\sqrt{12n}(\\overline{X}_n - 1/2) \\rightsquigarrow N(0,1)\n  \\]\n  by the central limit theorem. Note that $g(x)=x^2$ is a differentiable\n  function with $g'(x)=2x$. 
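Recall the delta method: if\n  $\\sqrt{n}(Y_n-\\mu)/\\sigma\\rightsquigarrow N(0,1)$ and $g$ is differentiable with\n  $g'(\\mu)\\neq 0$, then\n  $\\sqrt{n}\\left(g(Y_n)-g(\\mu)\\right)/\\left(|g'(\\mu)|\\sigma\\right)\\rightsquigarrow N(0,1)$.\n  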
Therefore, $g'(\\mu)=1\\neq 0$ and thus it follows by\n  the delta method,\n  \\[\n    \\sqrt{12n}\\left(\\overline{X}_n^2-\\frac{1}{4}\\right)\n    \\rightsquigarrow N(0, 1),\n  \\]\n  i.e.\\ that\n  \\[\n    \\overline{X}_n^2\\approx N\\left(\\frac{1}{4}, \\frac{1}{12n}\\right).\n  \\]\n\\end{ex}\n\n% 15\n\\begin{ex}\n  Let $g(y)=y_1/y_2$ and note that then\n  \\[\n    \\nabla g(y) = \\begin{pmatrix}\n      1/y_2 \\\\ -y_1/y_2^2\n    \\end{pmatrix}.\n  \\]\n  By the multivariate central limit theorem, we have\n  \\[\n    \\sqrt{n}\\left[\\begin{pmatrix}\n        \\overline{X}_{1} \\\\ \\overline{X}_{2}\n      \\end{pmatrix}-\\begin{pmatrix}\n        \\mu_1 \\\\ \\mu_2\n      \\end{pmatrix}\\right]\\rightsquigarrow N(0, \\Sigma),\n  \\]\n  and therefore it follows by the multivariate delta method that\n  \\[\n    \\sqrt{n}\\left(\n    \\overline{X}_{1}/\\overline{X}_{2} - \\mu_1/\\mu_2\n    \\right)\\rightsquigarrow N(0, \\nabla^T_\\mu\\Sigma\\nabla_\\mu),\n  \\]\n  where\n  \\[\n    \\nabla^T_\\mu\\Sigma\\nabla_\\mu\n    =\\begin{pmatrix}\n      \\mu_2^{-1} & -\\mu_1\\mu_2^{-2} \\\\\n    \\end{pmatrix}\n    \\begin{pmatrix}\n      \\sigma_{11} & \\sigma_{12} \\\\\n      \\sigma_{12} & \\sigma_{22}\n    \\end{pmatrix}\n    \\begin{pmatrix}\n      \\mu_2^{-1} \\\\ -\\mu_1\\mu_2^{-2}\n    \\end{pmatrix}\n    =\\mu_2^{-2}\\sigma_{11}\n    -2\\mu_1\\mu_2^{-3}\\sigma_{12}\n    +\\mu_1^2\\mu_2^{-4}\\sigma_{22}.\n  \\]\n  Therefore,\n  \\[\n    \\overline{X}_{1}/\\overline{X}_{2}\\approx\n    N(\\mu_1/\\mu_2,\n    \\mu_2^{-2}\\sigma_{11}\n    -2\\mu_1\\mu_2^{-3}\\sigma_{12}\n    +\\mu_1^2\\mu_2^{-4}\\sigma_{22}\n    ).\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  Let $X_1, X_2, \\ldots$ be a sequence of random variables such that\n  $X_n\\sim N(0, 1)$ and let $Y_n=-X_n$ for all $n$. Note that since the standard\n  normal distribution is symmetric about the origin, $Y_n\\sim N(0, 1)$. Hence,\n  both $X_n$ and $Y_n$ converge in distribution to an $N(0, 1)$ distribution,\n  so one might expect $X_n+Y_n$ to converge in distribution to an $N(0, 2)$\n  distribution; but since $X_n+Y_n=X_n+(-X_n)=0$, in fact\n  $X_n+Y_n\\rightsquigarrow 0$.\n\\end{ex}", "meta": {"hexsha": "a6bd7d18a74584136679f76ea6ddaeb86eafdd21", "size": 15398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/ch05.tex", "max_stars_repo_name": "dtrifuno/all-of-stats-solutions", "max_stars_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/ch05.tex", "max_issues_repo_name": "dtrifuno/all-of-stats-solutions", "max_issues_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/ch05.tex", "max_forks_repo_name": "dtrifuno/all-of-stats-solutions", "max_forks_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8373205742, "max_line_length": 128, "alphanum_fraction": 0.5218859592, "num_tokens": 6120, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7772998560157663, "lm_q1q2_score": 0.5956399384365135}}
{"text": "\\section{Clustering}\n\\label{sec:clustering}\n\nWe compared three clustering algorithms applied to the original\nsubstitute vectors using many-to-one accuracy on the 45-tag 24K word\ntest corpus.  Hierarchical agglomerative clustering with complete\nlinkage (HAC) starts with each instance in its own cluster and\niteratively combines the two closest groups (measured by their most\ndistant points) at each step \\cite{manning2008introduction}.\nK-medoids minimizes sum of pairwise distances between each datapoint\nto the exemplar at the center of its cluster\n\\cite{kaufman2005finding}.  Spectral clustering\\footnote{We used the\n  implementation in \\cite{chen2011parallel} with a symmetric sparse\n  affinity matrix of 550 nearest neighbors.} uses the eigenvalues of\nthe graph Laplacian $L=D^{-1/2} W D^{-1/2}$ to reduce the number of\ndimensions (similar to Laplacian eigenmaps) and uses simple k-means\nclustering on the resulting representation \\cite{ng2002spectral}.  All\nthree algorithms accept the distance matrix based on the KL2 distance\n(see Section~\\ref{sec:dist}) as input.\n\n\\begin{figure}[h]\n\\includegraphics[width=.5\\textwidth]{clustering_graph_mono.eps}\n\\caption{Many-to-one score for three clustering algorithms on the\n  45-tag 24K word corpus.}\n\\label{fig:clustering}\n\\end{figure}\n\n% #cluster kmedoid spectral hac\n% 2          0.20795 0.17781 0.135762\n% 4          0.29413 0.33439 0.155579\n% 8          0.36615 0.39550 0.159076\n% 16        0.43830 0.46116 0.263988\n% 32        0.43318 0.54192 0.329850\n% 45        0.46245 0.59413 0.401832\n% 64        0.46965 0.59151 0.438843\n% 128      0.48235 0.63106 0.506703\n% 256      0.53709 0.65000 0.558659\n% 512      0.58464 0.66520 0.616653\n% 1024     0.62985 0.67523 0.658576\n% 2048     0.66107 0.68414 0.696878\n% 4096     0.69842 0.71286 0.744338\n\n\nFigure~\\ref{fig:clustering} plots the many-to-one score versus number\nof clusters for the three algorithms on the 45-tag 24K word test\ncorpus.  The many-to-one score naturally increases as we approach the\none cluster per word limit, however we find the evolution of the\ncurves informative.  At the high end (more than 2000 clusters) HAC\nperforms best with its conservative clusters, but its performance\ndegrades fast as we reduce the number of clusters because it cannot\nreverse the accumulating mistakes.  At the low end (less than 16\nclusters) k-medoids and spectral have similar performance.  
However,\nfor the region of interest (between 16 and 2000 clusters) spectral\nclustering is clearly superior, with \\spectralResult\\% many-to-one\naccuracy at 45 clusters.\n\n", "meta": {"hexsha": "847925f53dd8fdc4884a44b03124f7aa813f887f", "size": 2559, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/cl2012/acl12/clustering.tex", "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_issues_repo_path": "papers/cl2012/acl12/clustering.tex", "max_issues_repo_name": "ai-ku/upos", "max_issues_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/cl2012/acl12/clustering.tex", "max_forks_repo_name": "ai-ku/upos", "max_forks_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "avg_line_length": 44.8947368421, "max_line_length": 70, "alphanum_fraction": 0.7624071903, "num_tokens": 784, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.5956399382411373}}
{"text": "\\documentclass{article}\n\\usepackage{latexsym}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\begin{document}\n\\title{Measure Theory notes}\n\\author{Dave Neary}\n\n\\maketitle\n\n\\section{Motivation for Lebesgue integral}\n\n\\subsection{Overview of the Riemann integral}\n\nThe Riemann integral of a continuous function $f:\\mathbb{R} \\rightarrow \\mathbb{R}$ is defined as:\n\n\\[ \\int_{a}^{b}f(x) dx = \\lim_{n \\rightarrow \\infty}\\sum_{i=1}^{n}f(x_{i}) \\Delta x \\]\n \nwhere $x_{i} = a + (i-1)\\Delta x$ and $\\Delta x=\\frac{b-a}{n}$\n\nIn other words, we partition the domain of the function into small slices, \nand calculate the area under the curve by multiplying the width of the slices \nby the value of the function at the beginning of the slice.\n\nThis works well for a certain class of functions, called Riemann-integrable\nfunctions. These functions must satisfy the condition that the domain is $\\mathbb{R}$,\nand that limit above exists.\n\nMore generally, we can calculate an upper Riemann sum $U(f)$ by summing the \nareas using $\\sup{f(x)}$ on each partition, and a lower sum $L(f)$ by using $\\inf{f(x)}$\nfor each interval. The function $f$ is Riemann integrable when $\\lim L(f) = \\lim U(x)$. \n\n\\subsection{Limitations of the Riemann integral}\n\nIt is possible to generalize the Riemann integral to two\nor more dimensions, but the problem of finding an appropriate partition of the domain\nmeans that for dimensions of the real numbers which are higher than $\\mathbb{R}^2$,\nthe Riemann integral is limited.  In addition, we would like to consider other classes\nof domains than the reals for functions - for example, probability spaces or generic\nHilbert spaces - where some alternative idea of the area under the curve (or more \ngenerally, the volume of a set) may make sense. Another limitation of the Riemann\nintegral is that there are useful classes of functions for which it does not converge,\nbut for which a reasonable value for the integral exists.\n \nAnother limitation of the Riemann integral is that there is only a very limited set of\nfunctions for which  it is possible to say \n\\[\\int \\sum_n f_n(x) dx = \\sum_n \\int f_n(x) dx \\]\n\nNamely, $f_n(x)$ must converge uniformly to $f(x)$, which is a very strong constraint.\n\nAs a result of these limitations, the idea of the Lebesgue integral is to partition\nthe function range instead of the domain. We then identify the subsets of the domain for\nspecific values of $f(x)$, and calculate their volume using a generic measure function\n$\\mu$. By taking finer and finer intervals of the range, we can get better and better \nestimates of the volume under the function with respect to the domain and the measure.\n\nThe remainder of this document will describe the characteristics of a domain, the\nconstraints required for a measure, which types of functions we can integrate, and\na precise definition of the Lebesgue integral. 
We will also include a selection of\nproofs and problems which we can use the Lebesgue integral to solve.\n\n\\section{Lebesgue Measure}\n\nWorking backwards, to define what we mean by an integrable function, we will need to\nfirst define how to measure the volume of a subset of a domain (a measure), and to\ndefine a measure, we must first define the types of sets which will be measurable.\n\n\\subsection{Measurable spaces}\n\nStarting from a set $X$, a collection of subsets of $X$, $\\mathcal{A}$, is called a\n$\\sigma$-algebra if it satisfies the following conditions:\n\n\\begin{enumerate}\n\t\t\\item $X \\in \\mathcal{A}$\n\t\t\\item For each $A \\in \\mathcal{A}$, $X \\setminus A \\in \\mathcal{A}$\n\t\t\\item For a countable sequence of subsets $(A_n)_{n \\in \\mathbb{N}} \\in \\mathcal{A}$,\n\t\t\t\\[\\bigcup_{n} A_n \\in \\mathcal{A} \\]\n\\end{enumerate}\n\nWe will see when we define a measure why this is called a $\\sigma$-algebra.\n\nThe pair $(X, \\mathcal{A})$ is called a measurable space.\n\nGiven any collection $\\mathcal{C}$ of subsets of $X$, we can generate a \nsmallest $\\sigma$-algebra which contains $\\mathcal{C}$. That is, there is a $\\sigma$-algebra \n$\\mathcal{A}$ containing $\\mathcal{C}$ such that if $\\mathcal{B}$ is a $\\sigma$-algebra containing $\\mathcal{C}$, then \n$\\mathcal{A} \\subseteq \\mathcal{B}$. We say that such a $\\sigma$-algebra $\\mathcal{A}$ is\ngenerated by $\\mathcal{C}$.\n\n\\subsubsection{Examples}\n\n\\begin{enumerate}\n\t\\item \\textbf{Exercise:} If $X=\\{1,2,3,4\\}$, and the $\\sigma$-algebra $\\mathcal{A}$ is\n\t\tgenerated by $\\{\\{1,2\\},\\{2,3\\}\\}$, what are the other members of \n\t\t$\\mathcal{A}$? \\\\\n\t\t\\textbf{Answer:} By condition 1 above, $X=\\{1,2,3,4\\} \\in \\mathcal{A}$,\n\t\tand by condition 2, since $X \\in \\mathcal{A}$, $X \\setminus X = \\emptyset \n\t\t\\in \\mathcal{A}$. Similarly, since $\\{1,2\\}$ and $\\{2,3\\} \\in \\mathcal{A}$,\n\t\t$X \\setminus \\{1,2\\} = \\{3,4\\}$ and $X \\setminus \\{2,3\\} = \\{1,4\\} \\in \n\t\t\\mathcal{A}$. By rule 3, $\\{1,2\\} \\cup \\{2,3\\} = \\{1,2,3\\}$ and $\\{2,3\\} \\cup \n\t\t\\{3,4\\} = \\{2,3,4\\} \\in \\mathcal{A}$. And by rule 2 again, $X \\setminus \n\t\t\\{1,2,3\\} = \\{4\\}$ and $X \\setminus \\{2,3,4\\} = \\{1\\} \\in \\mathcal{A}$. Back to \n\t\trule 3, $\\{4\\} \\cup \\{1,2\\} = \\{1,2,4\\}$ and $\\{1\\} \\cup \\{3,4\\} = \\{1,3,4\\} \\in\n\t\t\\mathcal{A}$. Finally, $X \\setminus \\{1,2,4\\} = \\{3\\}$ and $X \\setminus \n\t\t\\{1,3,4\\} = \\{2\\} \\in \\mathcal{A}$. Since each of the individual elements of \n\t\t$X$ is in a subset on its own, we can now create all possible subsets \n\t\tof $X$. $\\mathcal{A} = \\mathcal{P}(X)$, the power set of all subsets of $X$.\n\t\\item \\textbf{Exercise:} Prove that for a countable sequence of subsets \n\t\t$(A_n)_{n \\in \\mathbb{N}} \\in \\mathcal{A}$, a $\\sigma$-algebra on $X$, that \n\t\t\\[\\bigcap_{n} A_n \\in \\mathcal{A} \\]\n\t\t\\textbf{Answer:} Define a sequence of sets $B_n = X \\setminus A_n$. Then, \n\t\tby condition 2, $B_n \\in \\mathcal{A}$ for all $n$. By condition 3, \n\t\t\\[ B = \\bigcup_n B_n \\in \\mathcal{A} \\] \n\t\tand so, by condition 2 again, \n\t\t\\[ X \\setminus B \\in \\mathcal{A} \\]\n\t\tBy De Morgan's laws, \n\t\t\\[\n\t\t\tX \\setminus \\bigcup_n B_n = \\bigcap_n (X \\setminus B_n) = \\bigcap_n A_n\n\t\t\\]\n\t\tSo $\\bigcap_n A_n \\in \\mathcal{A}$. 
QED.\n\t\\item \\textbf{Exercise:} $(X, \\mathcal{A})$ is a measurable space, with $Y \\subset X$.\n\t\tProve that $(Y,\\mathcal{A^\\prime})$ is a measurable space, where\n\t\t$\\mathcal{A}^\\prime = \\{A \\bigcap Y | A \\in \\mathcal{A}\\}$\\\\\n\t\t\\textbf{Answer:} Since $\\emptyset \\in \\mathcal{A}$, $\\emptyset \\bigcap Y =\n\t\t\\emptyset \\in \\mathcal{A}^\\prime$. Similarly, since $X \\in \\mathcal{A}$, \n\t\t$X \\bigcap Y = Y \\in \\mathcal{A}^\\prime$. \\\\\n\t\tFor any $A \\in \\mathcal{A}$, $X \\setminus A \\in \\mathcal{A}$, and $(X \\setminus A) \n\t\t\\bigcap Y = (X \\bigcap Y) \\setminus (A \\bigcap Y) = Y \\setminus (A \\bigcap Y)$.\n\t\tSo if $A \\bigcap Y \\in \\mathcal{A}^\\prime$, then $Y \\setminus (A \\bigcap Y) \\in\n\t\t\\mathcal{A}^\\prime$\\\\\n\t\tFinally, let $(A_i)_{i \\in \\mathbb{N}}$ be a sequence of sets in $\\mathcal{A}$. Then\n\t\t\\[\\bigcup_{i \\in \\mathbb{N}} A_i \\in \\mathcal{A} \\]\n\t\tDefine a sequence $(B_i)_{i \\in \\mathbb{N}}$ with  $B_i = Y \\bigcap A_i$ for all $i$.\n\t\tThen\n\t\t\\begin{equation}\n\t\t\t\\bigcup_{i \\in \\mathbb{N}} B_i = \\bigcup_{i \\in \\mathbb{N}} (Y \\bigcap A_i) \\\\\n\t\t\t= Y \\bigcap \\left(\\bigcup_{i \\in \\mathbb{N}} (A_i)\\right) \\in \\mathcal{A}^\\prime\n\t\t\\end{equation}\n\t\tTherefore, $(Y, \\mathcal{A^\\prime})$ is a measurable space.\n\\end{enumerate}\n\n\\subsection{Measures}\n\nThe extended real numbers are the set $\\overline{\\mathbb{R}} = \\mathbb{R} \\bigcup \\{-\\infty, \\infty\\}$.\n\n\\textbf{Definition:} A measure is a function $\\mu:\\mathcal{A} \\rightarrow \\overline{\\mathbb{R}_{0}^{+}}$ on a\nmeasurable space $(X, \\mathcal{A})$ which satisfies the conditions:\n\n\\begin{enumerate}\n\t\\item $\\mu(\\emptyset) = 0$\n\t\\item if $(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$ is a sequence of pairwise disjoint\n\t\tsets (that is, $A_i \\bigcap A_j = \\emptyset$ if $i \\ne j$), then:\n\t\t\\[ \\mu \\left( \\bigcup_{i =1}^{\\infty} A_i \\right) = \\sum_{i=1}^{\\infty} \\mu\n\t\t\\left( A_i \\right) \\]\n\\end{enumerate}\n\nThis characteristic of being able to turn a countable union of disjoint sets into a sum is why\n$\\mathcal{A}$ is called a $\\sigma$-algebra.\n\nIn general, we can think of the measure of a set as its volume, or (for real functions) as\nthe area under the curve.\n\nA measurable space $(X, \\mathcal{A})$ with a measure $\\mu$ is called a measure space, and\nis written $(X, \\mathcal{A}, \\mu)$.\n\nWe can deduce a number of lemmas from this definition:\n\n\\textbf{Lemma:} If $A \\subseteq B$ and $A, B \\in \\mathcal{A}$, then $\\mu(A) \\le \\mu(B)$\n\n\\textbf{Lemma:} For a sequence of sets $(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$ \n\\[ \\mu \\left( \\bigcup_{i =1}^{\\infty} A_i \\right) \\le \\sum_{i=1}^{\\infty} \\mu\n                \\left( A_i \\right) \\]\n\n\\textbf{Lemma:} For a sequence of sets $(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$ with \n$A_i \\subseteq A_j$ if $i<j$ then \n\\[ \\mu \\left( \\bigcup_{i =1}^{\\infty} A_i \\right)= \\lim_{i \\rightarrow \\infty} \\mu(A_i) \\]\n\nSimilarly, for a sequence where $A_i \\supseteq A_j$ for $i<j$ and $\\mu(A_1) < \\infty$, \n$\\mu \\left( \\bigcap_{i =1}^{\\infty} A_i \\right)= \\lim_{i \\rightarrow \\infty} \\mu(A_i)$\n\nSome examples of measures are the trivial measure $\\mu(A)=0$ for all $A \\in \\mathcal{A}$,\nthe counting measure $\\mu(A) = |A|$ if A is finite, or $\\infty$ if it is infinite, and the\nDirac measure $\\delta_a(S) = 1$ if $a \\in S$ or 0 otherwise.\n\n\\textbf{Exercise:} Prove that the trivial, counting, and Dirac functions are 
measures.\n\n\\begin{itemize}\n\t\\item \\textbf{Trivial measure:} if $\\mu(A)=0$ for all $A$, then \n\t\t$\\mu(\\emptyset)=0$, and for a pairwise disjoint collection of\n\t\tsets $(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$, \n\t\t\\[ \\mu\\left( \\bigcup_{i =1}^{\\infty} A_i \\right) = 0 \\]\n\t\tand since $\\mu \\left( A_i \\right) = 0$ for all $i$,\n\t\t\\[ \\sum_{i=1}^{\\infty} \\mu \\left( A_i \\right) = 0 = \n\t\t\\mu\\left( \\bigcup_{i=1}^{\\infty} A_i \\right) \\]\n\t\tTherefore, $\\mu$ is a measure.\n\t\\item \\textbf{Counting measure:} $\\mu(\\emptyset) = |\\emptyset|=0$.\n\t\tFor a collection of pairwise disjoint sets \n\t\t$(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$, if \n\t\t\\[ \\mu\\left( \\bigcup_{i =1}^{\\infty} A_i \\right) < \\infty \\]\n\t\tthen the union is a finite set, so each $A_i$ is finite and only\n\t\tfinitely many of the $A_i$ are non-empty. Each element of \n\t\t$\\bigcup_{i =1}^{\\infty} A_i$ is also an element of exactly one $A_i$\n\t\tand each element of each $A_i$ is also an element of \n\t\t$\\bigcup_{i =1}^{\\infty} A_i$ by definition.\n\t\tIf \\[ \\mu\\left( \\bigcup_{i =1}^{\\infty} A_i \\right) = \\infty \\] then, \n\t\tsince every $a \\in \\bigcup_{i =1}^{\\infty} A_i$ lies in exactly one $A_i$, \n\t\tthe sum of the $|A_i|$ must be infinite as well.\n\t\tTherefore, in either case,\n\t\t\\[ \\sum_{i=1}^{\\infty} \\mu \\left( A_i \\right) = \n\t\t\\sum_{i=1}^{\\infty} |A_i| = \n\t\t\\mu\\left( \\bigcup_{i =1}^{\\infty} A_i \\right) \\]\n\t\tTherefore, the counting measure $\\mu(A)=|A|$ is a measure.\n\t\\item \\textbf{Dirac measure:} For an element $a \\in X$, $\\delta_a(A)=0$\n\t\tif $a \\notin A$.\n\t\tSince $a \\notin \\emptyset$, $\\delta_a(\\emptyset) = 0$.\n\t\tConsider a pairwise disjoint collection of sets \n\t\t$(A_i)_{i \\in \\mathbb{N}} \\in \\mathcal{A}$. If \n\t\t$a \\in \\bigcup_{i =1}^{\\infty} A_i $ then\n\t\t\\[\\delta_a \\left(\\bigcup_{i=1}^{\\infty} A_i \\right) = 1 \\]\n\t\tand $a \\in A_i$ for some $i \\in \\mathbb{N}$, and since \n\t\t$(A_i)$ are pairwise disjoint, $\\delta_a(A_i)=1$ and\n\t\t$\\delta_a(A_j)=0$ for $j \\ne i$. Then\n\t\t\\[ \\sum_{i=1}^{\\infty} \\delta_a \\left( A_i \\right) = 1 = \n                \\delta_a\\left( \\bigcup_{i=1}^{\\infty} A_i \\right) \\]\n\t\tIf $a \\notin \\bigcup_{i =1}^{\\infty} A_i $ then\n\t\t$a \\notin A_i$ for all $i$, and\n\t\t\\[ \\sum_{i=1}^{\\infty} \\delta_a \\left( A_i \\right) = 0 =\n                \\delta_a\\left( \\bigcup_{i=1}^{\\infty} A_i \\right) \\]\n\t\tTherefore, $\\delta_a$ is a measure.\n\\end{itemize}\n\n\\textbf{Exercise:} Prove the lemmas above.\n\n\\subsection{Measurable functions}\n\nLet $\\left(\\mathcal{X}, \\mathcal{A}\\right)$ and $\\left(\\mathcal{Y}, \\mathcal{B}\\right)$\nbe measurable spaces. A function $f:\\mathcal{X} \\rightarrow \\mathcal{Y}$ is measurable with \nrespect to the $\\sigma$-algebras $\\mathcal{A}, \\mathcal{B}$ if, for each subset\n$B \\in \\mathcal{B}$, $f^{-1}(B) \\in \\mathcal{A}$ (where $f^{-1}(B)$ is the pre-image of\nthe set $B$ under the function $f$, $\\{x \\in X : f(x) \\in B\\}$).\n\nThinking about useful collections of sets for a measure space, for functions mapping onto \n$\\overline{\\mathbb{R}}$, we can generate a $\\sigma$-algebra from the set of open intervals\nin $\\mathbb{R}$, plus $\\infty$.\nFor the open interval $A=(a,b)$, we define the measure $\\lambda (A) = b-a$. This measure\nis called the Lebesgue measure.\n\nA $\\sigma$-algebra $\\mathcal{A}$ generated from all of the open subsets of $\\mathcal{X}$\nis called the Borel $\\sigma$-algebra. 
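For example, every closed interval of $\\mathbb{R}$ is a Borel set, since\n$[a,b] = \\bigcap_{n} \\left(a - \\frac{1}{n}, b + \\frac{1}{n}\\right)$ is a countable intersection\nof open sets, and we showed above that $\\sigma$-algebras are closed under countable\nintersections. 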
It is a useful concept, because the Borel\n$\\sigma$-algebra contains all of the open subsets of $\\mathcal{X}$, so it is compatible\nwith the topology of $\\mathcal{X}$, and we can draw on the useful\ntheorems of topology too.\n\n\\textbf{Reminder:} A topological space is a nonempty set $X$ plus a set of subsets $A$ possessing\nthe properties:\n\n\\begin{enumerate}\n\\item $X, \\emptyset \\in A$\n\\item If $O_1 \\in A$ and $O_2 \\in A$, then $O_1 \\bigcap O_2 \\in A$\n\\item For an arbitrary collection of sets $\\left(O_i\\right)_{i\\in I} \\subseteq A$, the\n\tunion $\\bigcup_{i\\in I} O_i \\in A$\n\\end{enumerate}\n\n\\textbf{Exercise:}\n\n\\textbf{Exercise:}\n\n\\subsection{Lebesgue Integral}\n\nWe can now pull all of these ideas together to define the Lebesgue integral.\n\nWe define the characteristic function $\\chi_E(x)$ of\nthe set $E$:\n\\[ \\chi_E(x)=\\left\\{ \n\\begin{array}{ll}\n1 & x \\in E\\\\\n0 & x \\notin E\n\\end{array} \\right.\n\\]\n\nA linear combination\n\\[ \\phi(x) = \\sum_{i=1}^{n}a_i\\chi_{E_i}(x) \\]\nis called a simple function if $\\phi$ is measurable with respect to the $\\sigma$-algebra\ngenerated by the sets $(E_i)$, and assumes only a finite number of values\n$\\{a_1,a_2,...,a_n\\}$. \n\nThe Lebesgue integral of a simple function $\\phi$ is\n\\[\\int_X \\phi \\, d\\mu = \\int_X \\left( \\sum a_i \\chi_{A_i}(x)\\right) d\\mu = \\sum a_i \\mu(A_i), \\]\nwhere $A_i = \\phi^{-1}(a_i)$ (that is,\n$A_i = \\{x:\\phi(x)=a_i\\}$) for each of the values $a_i$ that $\\phi$ assumes. One consequence\nof this definition is that $\\left(A_i\\right)$ is a sequence of pairwise disjoint sets.\n\nWe can define the Lebesgue integral for a measurable non-negative function $f:X \\rightarrow \n\\overline{\\mathbb{R}^{+}}$ with respect to a measure space $(X, \\mathcal{A},\\mu)$ as:\n\\[ \\int_X f(x) d\\mu = \\sup\\left\\{\\int_X \\phi(x) d\\mu: 0 \\le \\phi(x) \\le f(x), \\phi \\textrm{ a\nsimple function} \\right\\} \\]\nIn other words, we look over all of the simple functions that are less than $f$, and take\nthe supremum across all of them.\n\nFor functions which are not non-negative, we split $f(x)$ into\n\\[ g(x) = \\max(f(x),0) \\]\nand\n\\[ h(x) = - \\min(f(x),0) \\]\n\nThen \n\\[\\int_X f(x) d\\mu = \\int_X g(x) d\\mu - \\int_X h(x) d\\mu \\]\n\nIn other words, we split $f(x)$ into two non-negative functions, one representing the positive\npart of $f$, and one representing the absolute value of the negative part of $f$, and we can\ncalculate the final integral by removing the negative area from the positive area.\n\nFor any non-negative measurable function $f: X \\rightarrow \\overline{\\mathbb{R}^{+}}$, we can construct\na sequence of simple functions $\\{f_n(x)\\}$ which converges pointwise to $f(x)$ as follows.\nFor $f_n(x)$, partition the range into $2^{2n}+1$ disjoint intervals $\\{I_{n,i}\\}$ with \n\\[ I_{n,i}  = \\left\\{ \n\\begin{array}{ll}\n\t\\left[\\frac{i-1}{2^n},\\frac{i}{2^n}\\right) & 1 \\le i \\le 2^{2n} \\\\[3pt]\n\t\\left[\\frac{i-1}{2^n},\\infty\\right) & i=2^{2n} + 1 \n\\end{array} \\right. \\]\n\nThen define $A_{n,i} = f^{-1}(I_{n,i})$ for $i \\le 2^{2n} + 1$, the preimage of $I_{n,i}$.\nThe collection $\\{I_{n,i}\\}$ covers $[0,\\infty)$ for all $n$. 
The simple functions \n\\[ f_n(x) = \\sum_{i=1}^{2^{2n}+1}\\frac{(i-1)}{2^n}\\chi_{A_{n,i}}(x) \\]\nthen form an increasing sequence which converges pointwise to $f(x)$.\n\n\n\\textbf{Exercise:} Prove that the sequence $f_n(x)$ above converges pointwise to $f(x)=x^2$ for all \n$x \\in \\overline{\\mathbb{R}^{+}}$.\n\n\n\\section{Lebesgue Integrals and Probability Theory}\n\nProbability distributions all share some common characteristics which make the application of measure\ntheory useful. Given a sample space $\\Omega$ of possible outcomes, an event space \n$\\mathcal{A}$, which is a $\\sigma$-algebra, and a probability measure $P$, the measure space\n$(\\Omega, \\mathcal{A}, P)$ is called a probability space if:\n\\begin{enumerate}\n\\item $P(\\emptyset)=0$\n\\item $P(\\Omega)=1$\n\\item if $\\{A_{i}\\}_{i=1}^{\\infty } \\subseteq {\\mathcal{A}}$ is a countable collection of \n\tpairwise disjoint sets, then:\n\t\t\\[ P(\\bigcup _{i=1}^{\\infty }A_{i}) = \\sum_{i=1}^{\\infty} P(A_{i}) \\]\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "5e36757e64f56022b76ba886c34b41380b788f2a", "size": 16363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "measure_theory.tex", "max_stars_repo_name": "dneary/math", "max_stars_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "measure_theory.tex", "max_issues_repo_name": "dneary/math", "max_issues_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "measure_theory.tex", "max_forks_repo_name": "dneary/math", "max_forks_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5668604651, "max_line_length": 102, "alphanum_fraction": 0.6629591151, "num_tokens": 5835, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.5956399344832622}}
{"text": "\\chapter{Verification}\n\\label{chapter:verification}\n\nWe improve the Halide simplifier term rewriting system by ensuring its soundness in\ntwo ways: first, we verify that each individual rule is \\emph{correct}, meaning that the\nrewrite preserves semantics. Then we verify that the term rewriting system is\n\\emph{guaranteed to terminate} on all inputs by ensuring that no sequence of\nrule applications, on any input expression, can form a cycle.\n\n\\section{Correctness}\n\\label{sec:ruleverification}\nWe verify each individual rule is correct by modeling Halide\nexpressions in SMT2 and using the SMT solver Z3~\\citep{de2008z3} to\nprove that the rule's left- and right-hand sides are equivalent. Most Halide expression\nsemantics map cleanly to SMT2 formulas. The functions \\hmax{} and\n\\hmin{} are defined in the usual way, and \\hsel{} in\nHalide is equivalent to the SMT2 operator \\texttt{ite}. Division and\nmodulo are given the Euclidean definitions in both Halide and\nSMT2~\\citep{boute1992euclidean}, though division and modulo by zero is handled\ndifferently (in Halide both evaluate to zero).\n%If a variable appears in the LHS of a rule as a divisor in a\n%division or modulo operation, it is assumed to be non-zero. %The Halide\n%expressions do not have a true boolean type (true and false are represented by\n%unsigned integers of 1 bitwidth), so expressions must be typed as either\n%\\texttt{Int} or \\texttt{Bool} when translated into SMT2. The Halide expression\nAll integer and boolean operators can be coerced to vector operators on vector inputs of the appropriate type. \nHalide's TRS uses two vector-constructing operators, \\texttt{broadcast} and \\texttt{ramp}. \n\\texttt{broadcast(x, l)} projects some value $x$ to a vector of length $l$; because of\nthe type coercion, we can simply represent \\texttt{broadcast(x, l)} as the variable\n\\texttt{x} in SMT2. \\texttt{ramp(x, s, l)} creates a vector of length $l$\nwhose initial entry has the value $x$ and all subsequent entries increase with\nstride $s$. In SMT2, we represent this term as the symbolic expression $x + l *\ns$, where $l$ must be zero or positive.\n\nGiven this modeling, for each rule, we assert any predicate guards are true, then\nask Z3 to search for a variable assignment that makes the LHS and RHS not\nequivalent.  If Z3 indicates no such assignment exists, the LHS must be equivalent to\nthe RHS and the rule must be correct. We implemented an SMT2 printer for \nHalide rewrite rules that automatically constructs an SMT2 verification problem for each rule.\nRule verification using Z3 is fully automated\nand can be run for the current set of rewrite rules used in the compiler via a script.\n\nHowever, for 123\nrules, Z3 either timed out or returned unknown. Nearly all of these rules used\neither division or modulo. We used the proof assistant Coq to manually prove the\ncorrectness of these remaining rules. In the course of these proofs, we\ndiscovered we were also able to relax the predicate guards of \\NumPredicatesRelaxed\nrules; for example, in some cases a rule\nwith a guard requiring some constant to be positive would be equally valid\nif the constant was non-zero.\n\n\nThis mostly-automated approach to verification assists with changing\nthe language semantics. Our initial work on verification was not on\nthe semantics described above: division or modulo by zero was originally\nconsidered undefined behavior. 
Since we had already\nmodeled Halide semantics in SMT2, it was easy to alter the\ndefinitions of division and modulo and re-run the verification scripts.\nWe proved \\NumZdivCoqProvedRules rules manually in Coq after Z3 failed to verify them; \nsince in the previous round all Coq proofs\nincluded the assumption that all divisors were non-zero, in most cases\nwe had only to add a case to show that the rule was true when the\ndivisor was zero as well. In the course of reviewing\nthese proofs, we identified \\NumZdivRelaxedPredicates rules whose\npredicates included the condition that a divisor be non-zero and where\nthat condition could safely be lifted. We found that\n\\NumZdivFalseRules rules were not correct under the new semantics and\nsubmitted a patch to amend them.\n\n\n\n\n\\section{Termination}\n\\label{sec:termination}\n\n\\modified{Under the umbrella goal of simplifying expressions, the Halide TRS uses\nmany strategies: it may attempt to make expressions as short as possible; it may factor out\nvector operations or more expensive operations such as division; it may attempt to\ncanonicalize subexpressions so they can cancel or be shown equivalent. These\nstrategies are not necessarily aligned and may even undo each other. Crafting new rules \ncan thus require a detailed understanding of the ruleset and its various applications. \nIn this section we formalize the Halide expression simplification strategy that was\npreviously only encoded in the ruleset itself. In doing so, we also prove that since \neach rule strictly makes progress in accordance with this strategy, the Halide TRS always terminates.}\n\nConsider a term\nrewriting system containing only one rule: $x + y \\rewrites y + x$. The term\n$3 + 5$ matches the LHS of the rule and is rewritten to $5 + 3$, which can again\nbe matched to the rule and rewritten to $3 + 5$, and so on. Termination failures in the Halide TRS have occurred in the past\\footnote{See for example https://github.com/halide/Halide/pull/1525}, causing unbounded recursion and eventually a stack overflow in the compiler. This is tricky to debug, and may not always be reported by users, since the error is fairly opaque. To show that this type of error has been eliminated, we must prove that there is no expression in the Halide expression language that can be infinitely rewritten by some sequence of rules that form a cycle.\n\nIntuitively, we can think of Halide expressions as existing in some multi-dimensional space; when an expression is rewritten by a rule, it moves from one point in that space to another. If each rule always rewrites expressions such that they move monotonically in some direction through the expression space, then no sequence of rules can form a cycle. These directions correspond to our intuition about why certain rules are useful (like the examples at the beginning of this section). We can consider each of these directions as a dimension in the expression space. If we formalize this desirable ordering and show that all rewrites from one expression to another strictly obey it, then we will have a proof of termination.\n\nWe provide this formalism and prove that the Halide term rewriting system must terminate by constructing a \\emph{reduction order}, a strict order with properties that ensure that, for an order $>$ and a rule $l \\rewrites r$, if $l > r$, then for any expression $e_1$ that matches $l$ and is rewritten by $l \\rewrites r$ into $e_2$, it must be true that $e_1 > e_2$. 
Crucially, this order is evaluated over rule terms, and not over all expressions that those terms may match. We take the definition of a reduction order and the next two theorems from~\\citep{baader1999term}.\n\n\\begin{theorem}\\label{theorem:terminates}\nA term rewriting system $R$ terminates iff there exists a reduction order $>$ that satisfies $l > r$ for all $l \\rewrites r \\in R$.\n\\end{theorem}\n\nA reduction order is a strict order that must be well-founded, meaning that every non-empty set has a least element with regard to the order, to prevent infinitely descending chains. It must be \\emph{compatible with $\\Sigma$-operations}: for all expressions $s_1, s_2$, all $n \\geq 0$, and all $f \\in \\Sigma$:\n\\[\ns_1 > s_2 \\implies f(t_1,...t_{i-1},s_1,t_{i+1},...,t_n) > f(t_1,...t_{i-1},s_2,t_{i+1},...,t_n)\n\\]\nfor all $i, 1 \\leq i \\leq n$ and all expressions $t_1,...t_{i-1},t_{i+1},...,t_n$. This property means that if a rewrite rule transforms a subtree in some expression $e$, the $>$ relation is preserved between the original expression $e$ and the rewritten expression $e'$. Finally, a reduction order is \\emph{closed under substitution}: for all expressions $s_1, s_2$ and all substitutions $\\sigma \\in \\mathcal{S}ub(T(\\Sigma,V))$, \n$s_1 > s_2 \\implies \\sigma(s_1) > \\sigma(s_2)$. When we match some left-hand side term $l$ to some expression $e$, we are defining a substitution for each of the variables in $l$ with some subtree in $e$; we then use that substitution to rewrite $e$ to $e'$. If our order is closed under substitutions, we know that for any expression we match to $l$, the resulting rewritten expression will obey the ordering.\n\nChoosing a single monotonic direction in which to rewrite expressions would be overly restrictive. \nThe Halide TRS is used both to prove expressions true and to simplify them; when using it as a prover, we want to put both sides of an equality into some normal form, but it doesn't particularly matter what that form is. When using the TRS to simplify expressions, on the other hand, reducing the size of an expression has important performance benefits. Since we need an ordering that covers the full Halide simplification strategy, we make use of the following theorem:\n\n\\begin{theorem}\nThe lexicographic product of two terminating relations is again terminating.\n\\end{theorem}\n\nThus, our strategy in finding a reduction order to cover the handwritten ruleset is to pick an order $>_a$ such that for all rules $l \\rewrites r$, either $l >_a r$ or $l =_a r$. Then, we pick another order $>_b$ such that for all rules $l \\rewrites r$ where $l =_a r$, either $l >_b r$ or $l =_b r$. We continue in this way until a sequence of orders has been found such that for their product $>_{\\times}$, $l >_{\\times} r$ holds for the entire ruleset.  Our final ordering consists of 13 component orders, given in full in appendix~\\ref{a:sreductionorder}.\n\nMany of our component orders are defined using measure functions that count the number of particular operations or other features in a term. We say that $s > t$ when $s$ has more vector operations than $t$, then when $s$ has more division, modulo and multiplication operations, and so on. As a sample proof sketch of this flavor of order, consider an order $s_1 >_* s_2$ that holds when the number of multiplication operations is greater in $s_1$ than in $s_2$. 
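To make this concrete, the following is a small sketch of such a counting measure (in Haskell, over a toy expression type; it is purely illustrative, not the project's implementation):\n\n\\begin{verbatim}\n-- A toy expression type standing in for Halide IR terms.\ndata Expr = Var String\n          | Const Int\n          | Add Expr Expr\n          | Mul Expr Expr\n\n-- The measure |s|_* : count the multiplication nodes in a term.\ncountMul :: Expr -> Int\ncountMul (Mul a b) = 1 + countMul a + countMul b\ncountMul (Add a b) = countMul a + countMul b\ncountMul _         = 0\n\\end{verbatim}\n\n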
We represent this through a measure function $|s_1|_*$ that returns the count of multiplication operations in $s_1$; since this function maps a term to a natural number, the order is clearly well-founded. The order is also compatible with $\\Sigma$-operations; we compute our measure function as follows:\n\n\n\\[\n|f(t_1,...,t_n)|_* = \\sum_{i=1}^n |t_i|_* + \\begin{cases} 1 & \\textrm{if } f = * \\\\\n                                                      0 & \\textrm{otherwise}\n                                        \\end{cases}\n\\]\n\nIt clearly follows that given $|s_1|_* > |s_2|_*$, it must be true that:\n\n\\[\n|f(t_1,...t_{i-1},s_1,t_{i+1},...,t_n)|_* > |f(t_1,...t_{i-1},s_2,t_{i+1},...,t_n)|_*\n\\]\n\nTo ensure the order is closed under substitution, we need to add one more constraint. Imagine a rule $x * 2 \\rewrites x + x$. Although there are fewer $*$ symbols in the right-hand term than on the left, that would not be true for a substitution $\\sigma = \\{x \\mapsto (z * z)\\}$. We add the condition that every variable present in $s_1$ must occur no more times in $s_2$ than it does in $s_1$. With this constraint there is no possible substitution that increases the value of the measure function in $s_2$ that would not result in an increase by an equivalent or greater amount in $s_1$. This gives us the order:\n\n\\[\ns_1 >_* s_2 \\textrm{ iff } |s_1|_* > |s_2|_* \\wedge \\forall x \\in \\mathcal{V}ar(s_1) . |s_1|_x \\geq |s_2|_x\n\\]\n\nMost of the component orders in the full reduction order take the form above. These orders guarantee termination no matter in what sequence rewrite rules are applied to an expression. However, in a few exceptional cases, we were obliged to take into account the order in which rules are applied in the Halide TRS algorithm.\n\nFor example, one existing rule is the canonicalization $(c_0 - x) + y \\rewrites (y - x) + c_0$ where $c_0$ is a constant. If $y$ is also a constant, this rule forms a cycle with itself, and could not possibly obey any reduction order. Fortunately, the rule immediately before it in the TRS handles that specific case ($(c_0 - x) + c_1 \\rewrites \\texttt{fold}(c_0 + c_1) - x$), so by this sort of non-local reasoning we know that $y$ is not a constant, and therefore the rule strictly decreases a measure which counts the number of constants on the right-hand side of an addition.\n\n%The handwritten ruleset had many rules that eliminated the occurrence of a variable, such as $x + x \\rewrites x * 2$. It seems natural to define an order based on the measure function $|s_1|_x$, but for a substitution $\\sigma = \\{x \\mapsto 3\\}$, $|3 + 3|_x \\not > |3 * 2|_x$. However, the simplifier algorithm always attempts constant folding before any other rule, so we know that the rule $x + x \\rewrites x * 2$ can only be invoked if $x$ is not a ground term.\n\n%Similarly, we have several rules that factor out an occurrence of a variable and introduce the constant 0 into the expression. We define an order on the occurrences of the constant 0 by defining a measure function that takes the count of terminals or leaves in the expression and subtracts the count of the constant 0; if terms $s_1$ and $s_2$ have the same number of leaves, but more of the leaves of $s_2$ are the constant 0, then $s_1 > s_2$.\n\n%\\[\n%s_1 >_0 s_2 \\textrm { iff } |s_1|_{leaf} - |s_1|_0 > |s_2|_{leaf} - |s_2|_0 \\wedge |s_1|_{leaf} = |s_2|_{leaf} \\wedge \\forall x \\in \\mathcal{V}ar(s_1) . 
|s_1|_x \\geq |s_2|_x\n%\\]\n\n% For the rule $\\texttt{max}(x + y, x) \\rewrites \\texttt{max}(y, 0) + x$, the order will not hold for the substitution $\\sigma = \\{x \\mapsto 0\\}$. However, we know the rule $0 + x \\rewrites x$ will be invoked before this one, so the rule cannot be evaluated on the expression $\\texttt{max}(0 + y, 0)$.\n\nRelying on non-local reasoning makes our order more brittle; if the simplifier algorithm were to be changed, the termination guarantee could be lost. However, we use only a small number of basic rules in this way, which are unlikely to be changed.\n\nBesides giving a termination guarantee, the reduction order is necessary if we want to synthesize new rewrite rules. If we do not constrain newly-synthesized rules to obey a consistent reduction order with the existing human-written ones, they may form cycles with the existing rules and cause infinite recursion in the TRS. Additionally, the reduction order is the formal encoding of the types of transformations we find desirable, so the reduction order limits synthesis to rules that rewrite expressions in a useful direction; this is discussed in greater detail in chapter~\\ref{chapter:synthfromscratch}.\n\nIn constructing the reduction order, we found \\NumOrderingProblems rules that contradicted a desirable ordering, and submitted patches to either delete or modify them. With this amendment, the reduction order can be shown to hold over the entire Halide ruleset, and the guarantee of termination is complete. To ensure this guarantee is preserved, we built a script that automatically checks the full set of rules in the compiler to ensure they respect the reduction order. \n\n\\subsection{Evaluation of verification work}\n\nChanges to the simplifier ruleset undergo a stringent development process: new rules are peer reviewed after they are proven on paper, and fuzz-testing is used to discover bugs. It is thus reasonable to ask whether mechanized verification can add any value. Our verification discovered \\NumRulesFixed previously-unknown correctness bugs and \\NumPredicatesRelaxed instances of rules whose predicates were overly restrictive. The former bugs eluded the fuzzer; the latter are deemed too hard to detect so the fuzzer does not look for them.  Some of the corrected rules are listed in more detail in table~\\ref{tab:verfirstround}.\n\n%The first use of verification took place when the TRS had not yet been merged into the Halide master branch. We ran the verification pipeline and discovered \\NumRulesFixed incorrect rewrite rules, listed in Table~\\ref{tab:verfirstround}. The rules that could not be checked with Z3 were proved true using the Coq proof assistant (none of the manually proved rules were found to be incorrect). While these bugs were found automatically the fixes were performed by hand, as the synthesis pipeline did not yet exist. \n\nFurthermore, because the verification infrastructure was in place, it was possible to verify a change of division semantics without much additional effort, identifying 44 rules that were incorrect under the new semantics. This change to the semantics of Halide may not have even been attempted without the verifier. In this change, Halide defined division or modulo by zero to evaluate to zero, instead of being undefined behavior, in response to an issue discovered by Alex Reinking~\\citep{reinkingthesis}. Existing tests and real uses of Halide were useless as a test of this change, as \\emph{they were all carefully written to never divide by zero}. 
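As an illustration of how such a semantic change surfaces bugs mechanically, the following minimal sketch (using the z3 Python bindings rather than our SMT2 scripts, and a deliberately simplified encoding) asks Z3 for a counterexample to the familiar identity $(x/y)*y + x\\;\\%\\;y = x$ once division and modulo by zero are defined to evaluate to zero:\n\n\\begin{verbatim}\nfrom z3 import Int, If, Solver\n\nx, y = Int('x'), Int('y')\n# Simplified total semantics: division or modulo by zero evaluates to zero.\nhdiv = If(y == 0, 0, x / y)\nhmod = If(y == 0, 0, x % y)\n\ns = Solver()\ns.add(hdiv * y + hmod != x)  # negate the old rule (x/y)*y + x%y ~> x\nprint(s.check())  # sat: the rule is no longer valid in the new semantics\nprint(s.model())  # e.g. a model with y = 0 and x nonzero\n\\end{verbatim}\n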
Within the TRS, this change required rechecking every rewrite rule that involves the division and modulo operators. Whereas previously each rule assumed that a denominator on the LHS could not be zero, now it was necessary to either show that the rule was still correct in the case where a denominator was zero, or constrain the rule to only trigger when the denominator was known to be non-zero. This was done by encoding the new semantics into the verifier, and reverifying all rules. Because division and modulo are involved, these rules cannot always be mechanically verified. \n\\NumZdivCoqProvedRules rules were reverified with a human in the loop by revisiting and modifying existing Coq proofs. The mechanical re-verification was all but push-button; the manual effort for updating the Coq proofs was non-trivial, but about half of the effort of writing the original proofs from scratch. In this process, \\NumZdivFalseRules existing rules were found to be incorrect in the new semantics and fixed\\footnote{\\url{https://github.com/halide/Halide/pull/4439}}. \nTwo of them were in fact not related to division, but were instead the first discovery of the bugs injected in a patch after the initial ruleset verification. \nThe remaining 42 rules were modified to only trigger when the denominator was known to be non-zero, either by adding a predicate to the rule, or by exploiting the TRS\u2019s ability to track constant bounds and alignment of subexpressions. Three examples of now-incorrect rules were:\n\n\\begin{align*}\n(x/y)*y + (x\\;\\%\\;y) \\rewrites & x \\\\\n  -1 / x \\rewrites & \\hsel(x < 0, 1, -1) \\\\\n(x + y)/x \\rewrites & y/x + 1\n\\end{align*}\n\nThe first was modified to:\n\\[\n(x/c_0)*c_0 + (x\\;\\%\\;c_0) \\rewrites x \\pred c_0 \\neq 0\n\\]\nand the other two were constrained to only trigger when the denominator is known to be non-zero via other means.\n\nThe cases discussed in Section~\\ref{sub:bugfixes} all concern fixing existing problems while not introducing new ones. By giving a proof of soundness, showing that the ruleset is correct and that the rules are cycle-free, we also remove two entire classes of future bugs. For reference, over the life of the Halide project there have been 14 pull requests that fix incorrect rules, and 3 pull requests that modify rules in order to avoid cycles. Fixing a reduction order also guarantees that no new cycles can be introduced as long as new rules obey this order; without such a guide, it is possible to introduce a rule that would close a loop in some sequence of existing rule applications and cause a cycle, resulting in infinite recursion during compilation. 
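The automated ordering check mentioned above has the following flavor (a minimal Python sketch over a toy term encoding invented here for illustration; the actual script handles all 13 component orders lexicographically), here for the multiplication-counting order $>_*$ defined earlier:\n\n\\begin{verbatim}\n# Illustrative only: a toy term encoding, not the Halide checker itself.\ndef count_op(term, op):\n    # |term|_op: occurrences of operator op in term.\n    if not isinstance(term, tuple):\n        return 0\n    return (term[0] == op) + sum(count_op(t, op) for t in term[1:])\n\ndef count_var(term, v):\n    # |term|_v: occurrences of variable v (variables are strings here).\n    if term == v:\n        return 1\n    if not isinstance(term, tuple):\n        return 0\n    return sum(count_var(t, v) for t in term[1:])\n\ndef variables(term):\n    if isinstance(term, str):\n        return {term}\n    if not isinstance(term, tuple):\n        return set()\n    return set().union(*(variables(t) for t in term[1:]))\n\ndef mul_order_holds(lhs, rhs):\n    # lhs >_* rhs iff |lhs|_* > |rhs|_* and no variable occurs more\n    # often in rhs than in lhs (rules satisfy Var(rhs) within Var(lhs)).\n    return (count_op(lhs, '*') > count_op(rhs, '*')\n            and all(count_var(lhs, v) >= count_var(rhs, v)\n                    for v in variables(lhs)))\n\n# x * 2 ~> x + x has fewer multiplications on the right, but x occurs\n# twice there, so the variable condition correctly rejects this order.\nprint(mul_order_holds(('*', 'x', 2), ('+', 'x', 'x')))  # False\n\\end{verbatim}\n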
\n\n\\begin{table*}\n\\caption{Rules corrected through the first round of verification.}\n{\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{r|l|l}\n& Rule & Counterexample \\\\\n\\hhline{=|=|=}\nWrong &  $\\frac{x * c_0}{c_1} \\rewrites \\frac{x}{(c_1 / c_0)} \\pred c_1 \\;\\%\\; c_0 = 0 \\wedge c_1 > 0$ & $c_0 = -1, c_1 = 2, x = 1$\\\\\nFixed & $\\frac{x * c_0}{c_1} \\rewrites \\frac{x}{(c_1 / c_0)} \\pred c_1 \\;\\%\\; c_0 = 0 \\wedge c_0 > 0 \\wedge \\frac{c_1}{c_0} \\neq 0$ & \\\\\n\\hhline{=|=|=}\nWrong & $(\\frac{x + c_0}{c_1})*c_1 - x \\rewrites x \\;\\%\\; c_1 \\pred c_1 > 0 \\wedge c_0 + 1 = c_1$ & $c_0 = 2, c_1 = 3, x = -5$\\\\\nFixed & $(\\frac{x + c_0}{c_1})*c_1 - x \\rewrites {-x} \\;\\%\\; c_1 \\pred c_1 > 0 \\wedge c_0 + 1 = c_1$ & \\\\\n\\hhline{=|=|=}\nWrong & $x - (\\frac{x + c_0}{c_1})*c_1 \\rewrites -(x \\;\\%\\; c_1) \\pred c_1 > 0 \\wedge c_0 + 1 = c_1$ & $c_0 = 2, c_1 = 3, x = -5$\\\\\nFixed & $x - (\\frac{x + c_0}{c_1})*c_1 \\rewrites ((x + c_0) \\;\\%\\; c_1) + {-c_0} \\pred c_1 > 0 \\wedge c_0 + 1 = c_1$ & \\\\\n\n\\end{tabular}\n}\n\\label{tab:verfirstround}\n\\end{table*}\n\n\n\\section{Related Work}\n\nAn \\textit{equivalence graph} or e-graph (introduced by \\citep{nelson1980techniques}, see \\citep{willsey2021egg} for a recent treatment) is a data structure used to compute applications of the rules of a term rewriting system. The algorithm builds up equivalence classes by successively applying all rules to all expressions within those classes, then queries to see if two expressions are equivalent by checking if they are present in the same class. Like our algorithm, it does not backtrack, but the e-graph can require significant amounts of memory, which our algorithm avoids.\n\nAProVE~\\citep{giesl2004automated, giesl2017analyzing} is a tool that automatically generates proofs of termination for term rewriting systems (as well as programs in Java, C, Haskell, and so on). It employs a variety of techniques for doing so. It may prove that a TRS terminates directly, by finding a reduction order that fits all rules in the TRS (as we do in our work), searching classes of orders including path orders, Knuth-Bendix orders, and polynomial orders. It may also prove termination through dependency pairs (finding all instances in which terms of RHSs can unify with rule LHSs, then showing that a weakly monotonic ordering holds over all dependency pairs) or by abstracting rules by their effect on term height and proving that rule application must cause term height to decrease. It also employs techniques to remove portions of the ruleset that have no effect on termination and to reduce the size of the ruleset to make termination proof search more efficient. \nA proof of termination by term height abstraction would not be useful for synthesis, since it encodes no information about progress towards a goal state. A proof of termination through dependency pairs could do so, and since it uses weakly rather than strongly monotonic orders, it could permit rewriting strategies that our technique cannot. However, this method requires reasoning over the full ruleset rather than individual rules, so using it as a specification for synthesis would result in a significantly more complicated synthesis task. Nevertheless, it would be interesting to see if AProVE can find a direct termination proof for the Halide simplifier and handwritten variable solver TRSs and if those direct termination proofs are human-interpretable. 
If AProVE can find reduction orders to fit, it would also be interesting to see what rulesets could be synthesized using them as an input to our synthesis pipeline.", "meta": {"hexsha": "8d358e9906792178d3f0aefd66c02f6f156e1c9f", "size": 23270, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/03-verification.tex", "max_stars_repo_name": "jn80842/UWThesis", "max_stars_repo_head_hexsha": "39a2749980fd32fce4ef2a8363a3e10eded55071", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/03-verification.tex", "max_issues_repo_name": "jn80842/UWThesis", "max_issues_repo_head_hexsha": "39a2749980fd32fce4ef2a8363a3e10eded55071", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/03-verification.tex", "max_forks_repo_name": "jn80842/UWThesis", "max_forks_repo_head_hexsha": "39a2749980fd32fce4ef2a8363a3e10eded55071", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.4154589372, "max_line_length": 1233, "alphanum_fraction": 0.7666523421, "num_tokens": 5893, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5956399344832622}}
{"text": "% Womelsdorf Lab burst library user guide - Algorithms\n% Written by Christopher Thomas.\n\n\\chapter{Segmentation and Parametric Representation}\n\\label{sect-math}\n\n%\n%\n%\n\\section{Segmentation -- Detecting Burst Events}\n\\label{sect-math-segment}\n\n``Segmentation'' is the process of deciding which portions of an input\nsignal contain oscillatory bursts. Historically, this has usually been done\nby performing a Hilbert transform to get an analytic signal with known\nmagnitude and phase, and looking for magnitude excursions above some\nthreshold (usually $3\\sigma$ with respect to the mean magnitude across all\nparts of the waveform and all trials). Many variants of this approach exist,\nand several other approaches have been used, but these are beyond the scope\nof this document.\n\nThe wlBurst library provides an implementation of magnitude-threshold\ndetection, using the algorithm illustrated in Figure \\ref{fig-math-algmag}.\nRather than using a fixed threshold, this implementation uses a\nlow-pass-filtered version of the magnitude as its ``DC'' level, and looks\nfor magnitude excursions with respect to that (comparing to a local rather\nthan global mean). The intention is that this is less sensitive to drift\nin signal properties.\n\nA typical event segmented by magnitude thresholding is shown in Figure\n\\ref{fig-math-evmag}. Notice that the analytic magnitude (orange envelope\nin the top pane) is only well-behaved for the band-pass-filtered signal.\n\n\\figdef{\\begin{center}\n\\includegraphics[width=6in]{figs/alg-detmag-v1.png}\\end{center}}\n{Segmentation by Magnitude Thresholding}\n{fig-math-algmag}\n\n\\figdef{\\begin{center}\n\\includegraphics[height=3.5in]{plots/multi-evmag-be-vsbpf-0051.png} \\\\\nBandpass \\\\\n\\vspace{2\\baselineskip}\n\\includegraphics[height=3.5in]{plots/multi-evmag-be-vsraw-0051.png} \\\\\nWideband\n\\end{center}}\n{Typical event segmented by magnitude thresholding.}\n{fig-math-evmag}\n\n\\FloatBarrier\n\nIn addition to magnitude-threshold event detection, the wlBurst library\nprovides an implementation of ``frequency-stability'' event detection. This\nalgorithm looks at the frequency of the analytic signal (middle panes in\nFigure \\ref{fig-math-evmag}. The analytic frequency is defined as the\nderivative of the phase of the analytic signal (which is provided directly\nby the Hilbert transform).\n\nDuring oscillation events, the frequency is expected to be well-defined\n(staying within a narrow range with few or no excursions). Outside of\noscillation events, when the neural signal is not oscillating with a\nwell-defined frequency, excursions in analytic frequency are expected to\noccur. These two situations are difficult to distinguish in the Hilbert\ntransform of the band-pass-filtered waveform, but are visually apparent\nin the Hilbert transform of the wideband signal.\n\nTo perform automated segmentation, high-frequency noise with known\nproperties is added to the band-pass-filtered signal. The Hilbert transform\nof this ``noisy'' signal is computed, and the variance of the resulting\nanalytic frequency signal is measured. This algorithm is illustrated\nin Figure \\ref{fig-math-algfreq}.\n\nA typical event segmented by frequency stability detection is shown in\nFigure \\ref{fig-math-evfreq}. 
Notice that the wideband signal shows\nout-of-band events, and that the band-pass-filtered signal with added\nnoise shows only in-band events and has more consistently different variance\nin the frequency signal between event and non-event regions. The ``noisy''\nsignal is only used for detection -- parameter extraction uses the clean\nband-pass-filtered signal.\n\n\\figdef{\\begin{center}\n\\includegraphics[height=7.5in]{figs/alg-detfreq-v1.png}\\end{center}}\n{Segmentation by Frequency Stability}\n{fig-math-algfreq}\n\n\\figdef{\\begin{center}\n\\includegraphics[height=3.5in]{plots/multi-evfreq-al-vsraw-0026.png} \\\\\nWideband \\\\\n\\vspace{2\\baselineskip}\n\\includegraphics[height=3.5in]{plots/multi-evfreq-al-vsnoisy-0026.png} \\\\\nBPF Plus Noise\n\\end{center}}\n{Typical event segmented by frequency stability.}\n{fig-math-evfreq}\n\n\\FloatBarrier\n\n%\n%\n%\n\\section{Feature Extraction -- Curve-Fitting Oscillatory Bursts}\n\\label{sect-math-param}\n\n``Feature extraction'' or ``parameter extraction'' is the process of\nbuilding a description of a burst event that contains all desired information\nabout the burst. This is usually done as a separate step after segmentation\n(burst detection). The model used to represent oscillatory bursts in the\nwlBurst library is shown in Figure \\ref{fig-math-params}, and is described\nin \\texttt{EVENTFORMAT.txt} (as the ``chirpramp'' model). Future versions\nof the wlBurst library will explicitly support additional models; for the\ntime being, custom models may be used provided care is taken to not call\nanalysis or plotting functions that assume use of the ``chirpramp'' model.\n\n\\figdef{\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{figs/synth-burst.png}\n\\end{center}}\n{Parametric burst model (``\\texttt{chirpramp}'' model).}\n{fig-math-params}\n\n``Chirpramp'' model fits are performed through several steps:\n\n\\begin{itemize}\n%\n\\item Endpoints of the burst are assumed. These may either be the times\nwhen the detection criterion (absolute magnitude or frequency variance)\ncrossed its threshold, or where the detection criterion crosses some other\nlower threshold (for ``dual'' variants of the magnitude-threshold and\nfrequency-stability detection algorithms).\n%\n\\item The magnitude envelope is curve-fit. This involves choosing roll-on\nand roll-off times, and performing exponential and linear fits to the\nanalytic magnitude and choosing the best match. Choosing roll-on and roll-off\ntimes may be done via a grid search (fast but less accurate) or by simulated\nannealing (slow but more accurate).\n%\n\\item The frequency and phase are curve-fit. This involves performing\nexponential and linear fits to the analytic frequency and choosing the best\nmatch, followed by testing possible starting phases and choosing the one that\nbest matches modeled burst phase and the analytic phase across the duration\nof the burst.\n%\n\\item Optionally, a simulated annealing algorithm may perturb all components\nof the parametric burst to try to better match the input waveform. 
This is\nslow but slightly improves accuracy.\n%\n\\end{itemize}\n\nUser-specified segmentation and parameter extraction algorithms may\nalternatively be used, per \\linebreak\n\\texttt{SEGCONFIG.txt} and \\texttt{PARAMCONFIG.txt}.\n\n\n%\n% This is the end of the file.\n", "meta": {"hexsha": "6a66a0d44e54e91394a589251d4eb3cac3d12148", "size": 6433, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/wlburst-guide-math.tex", "max_stars_repo_name": "att-circ-contrl/wlBurst_v2", "max_stars_repo_head_hexsha": "e73f05692c12ce7317f576de093be9906dca3d0e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-12-28T07:54:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-28T19:38:11.000Z", "max_issues_repo_path": "manual/wlburst-guide-math.tex", "max_issues_repo_name": "att-circ-contrl/wlBurst_v2", "max_issues_repo_head_hexsha": "e73f05692c12ce7317f576de093be9906dca3d0e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/wlburst-guide-math.tex", "max_forks_repo_name": "att-circ-contrl/wlBurst_v2", "max_forks_repo_head_hexsha": "e73f05692c12ce7317f576de093be9906dca3d0e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7727272727, "max_line_length": 77, "alphanum_fraction": 0.7985387844, "num_tokens": 1507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438951143326726, "lm_q2_score": 0.7057850340255386, "lm_q1q2_score": 0.5956085419832712}}
{"text": "\\documentclass[11pt, oneside]{amsart}   \t% use \"amsart\" instead of \"article\" for AMSLaTeX format\n\\usepackage[top=1.3cm, bottom=1.3cm]{geometry}                \t\t% See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   \t\t% ... or a4paper or a5paper or ... \n%\\geometry{landscape}                \t\t% Activate for rotated page geometry\n%\\usepackage[parfill]{parskip}    \t\t% Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\t\t\t\t% Use pdf, png, jpg, or eps\u00a7 with pdflatex; use eps in DVI mode\n\\usepackage{bm}\t\t\t\t\t\t\t% TeX will automatically convert eps --> pdf in pdflatex\t\t\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{subcaption}\n\\usepackage{float}\n\\pagenumbering{gobble}\n%SetFonts\n\n\\newcommand\\indep{\\protect\\mathpalette{\\protect\\indeP}{\\perp}}\n \\def\\indeP#1#2{\\mathrel{\\rlap{$#1#2$}\\mkern2mu{#1#2}}}\n\n\n%SetFonts\n\n\n\\title{MVA-DM2 Probabilistic graphical models}\n\\author{Hugo Cisneros}\n\\date{}\t\t\t\t\t\t\t% Activate to display a given date or no date\n\n\\begin{document}\n\\maketitle\n\\section{Conditional independence and factorizations}\n \\vfill\n\\textbf{1.}  The implied factorization of a distribution $p\\in \\mathcal{L}(G)$ is\n\\begin{align*} \np(x, y, z, t) = p(t|z)p(z|x, y)p(x)p(y)\n\\end{align*}\nFor two real valued independent r.v $X$ and $Y$ that follow the distribution $\\mathcal{U}(0, 2)$, and $Z = X+Y$, $T = \\mathbf{1}_{(Z \\geq 3)}$,$\\boxed{\\text{the statement } X\\indep Y |\u00a0T \\text{ is not true}}$.\nIndeed, knowing that $X+Y \\geq 3$ makes the two random variables not independent ($p(x| y=2, z=1) = p(x \\geq 1) \\neq p(x|z=1)$).\n\\vfill\n \n \\textbf{2.a.} If $Z$ is a binary variable, let $p = p(z=1) = 1 - p(z=0)$. We have \n \\begin{align*}\n \tp(x, y) &= p(x)p(y) = p(x, y|z=1)p + p(x, y|z=0)(1-p) & (X\\indep Y ) \\\\\n\t& = p(x|z=1)p(y|z=1)p + p(x|z=0)p(y|z=0)(1-p) & (X\\indep Y |\u00a0Z)\\\\\n\t&  =\u00a0p(x)p(y)\\left(\\frac{1}{p} p(z=1|x)p(z=1|y) + \\frac{1}{1-p}p(z=0|x)p(z=0|y)\\right)\n \\end{align*}\n If we write $p_x = p(z=1|x) = 1-p(z=0|x)$ and $p_y = p(z=1|y)$, the equality becomes $p^2 - (p_x + p_y)p + p_xp_y\n = 0$ which yields $ \\left(p(z=1) = \\right)\\ p = p_x\\ ( =  p(z=1|x) )$ or $p = p_y$. Therefore either $\\boxed{X\u00a0\\indep Z \\text{ or } Y \\indep Z}$.\n\\vfill\n \n \\textbf{2.b.} Let $\\mathcal{A}$ be the finite space $\\{0, 2\\}$, $X_0$ and $Y_0$ two independent random variables with uniform distributions on $\\mathcal{A}$ , \n we define $X =\\begin{bmatrix} X_0 \\\\\u00a00 \\end{bmatrix}$, $Y = \\begin{bmatrix} 0 \\\\\u00a0Y_0 \\end{bmatrix}$ and $Z = \\begin{bmatrix} X_0 \\\\\u00a0Y_0 \\end{bmatrix}$. 
\n \\\\\n \n $X$, $Y$ and $Z$ are three random variables with $X \\indep Y$ and $ X \\indep Y | Z$\n \\begin{itemize}\n\\item  $\\mathbb{E}[XY] =  \\begin{bmatrix} 0 \\\\\u00a00 \\end{bmatrix} = \\mathbb{E}[X]\\mathbb{E}[Y]$\n\\item $\\forall (k,l) \\in \\mathcal{A}\\times\\mathcal{A},  \\mathbb{E}[X|Z=(k,l)] =  \\begin{bmatrix} k \\\\\u00a00 \\end{bmatrix}$ and similarly for $Y$, therefore $\\mathbb{E}[XY|Z] =  \\begin{bmatrix} 0 \\\\\u00a00 \\end{bmatrix} = \\mathbb{E}[X|Z]\\mathbb{E}[Y|Z]$\n \\end{itemize}\n However, $X$ and $Y$ are not independent from $Z$, since  $\\mathbb{E}[XZ] = \\begin{bmatrix} \\mathbb{E}[X_0^2] \\\\\u00a00 \\end{bmatrix}$, $\\mathbb{E}[X]\\mathbb{E}[Z] = \\begin{bmatrix} \\mathbb{E}[X_0]^2 \\\\\u00a00 \\end{bmatrix}$ and $1 = \\boxed{\\mathbb{E}[X_0]^2 \\neq \\mathbb{E}[X_0^2] }= 2$.\n \\vfill\n \n \\clearpage\n \\section{Distributions factorizing in a graph}\n  \\vfill\n \\textbf{1.} Let $p\\in\\mathcal{L}(G)$, $p(x) = \\prod_{s\\in V} p(x_s|x_{\\pi_s})$. Since $\\{i\\rightarrow j\\}$ is a covered edge,  $\\pi_j = \\pi_i \\cup \\{i\\}$. Therefore,\n \\begin{align*}\n p(x) = p(x_i|x_{\\pi_i})\\cdot p(x_j|x_{\\pi_i}, x_i) \\cdot\\prod_{s\\in V-\\{ i, j \\}} p(x_s|x_{\\pi_s})\n \\end{align*}\n \n And \n \n \\begin{align*}\n p(x_i|x_{\\pi_i})\\cdot p(x_j|x_{\\pi_i}, x_i) &= \\frac{p(x_i, x_{\\pi_i})}{p(x_{\\pi_i})} \\frac{p(x_i, x_{\\pi_i}, x_j)}{p(x_{\\pi_i}, x_i)} =p(x_i,  x_j |x_{\\pi_i})\\cdot \\frac{p(x_j, x_{\\pi_i})}{p(x_j, x_{\\pi_i})}\\\\[+5pt]\n & = p(x_i | x_{\\pi_i}, x_j)\\cdot p(x_j|x_{\\pi_i})\n \\end{align*}\n \\\\\n In $G'$, $\\{i\\rightarrow j\\}$ has been replaced by $\\{j\\rightarrow i\\}$, thus $\\pi_i' = \\pi_i \\cup \\{j\\}$ and $\\pi_j' = \\pi_i $, the equation above shows that $p$ can be factorized in this new graph, hence $p\\in \\mathcal{L}(G')$. From the same equality above, if $p\\in \\mathcal{L}(G')$, also $p\\in \\mathcal{L}(G)$. Therefore $\\boxed{ \\mathcal{L}(G) =  \\mathcal{L}(G')}$.\n \\vfill\n \n \\textbf{2.} Since $G$ is a directed tree, it doesn't contain any v-structure and all nodes have either 0 or 1 parent (only the root of the tree doesn't have a parent). Moreover, all cliques or $G'$ contain at most 2 elements because any clique of 3 or more elements would contain a 3-clique. That 3-clique in the directed graph must contain either a v-structure or a cycle, which is not possible in a directed tree. \n \\\\\n \n Hence if $p\\in \\mathcal{L}(G)$, $p(x) = \\prod_i p(x_i\u00a0|\u00a0x_{\\pi_i}) = \\prod_i \\Psi_i(x_i, x_{\\pi_i})$, where $\\Psi_i: (x_i, x_{\\pi_i})\\rightarrow p(x_i|x_{\\pi_i})$ is a function of an element and its parent (a 2-clique of the graph), or of the root of the tree only (a 1-clique). We have $\\boxed{\\mathcal{L}(G) \\subset \\mathcal{L}(G')}$.\n \\\\\n \n For $p\\in \\mathcal{L}(G')$, we have for all $x$, $p(x) = \\frac{1}{Z} \\prod_{c\\in\\mathcal{C}} \\psi_c(x_c)$ where $\\mathcal{C}$ is the set of maximal cliques of $G'$ and $Z = \\sum_x  \\prod_{c\\in\\mathcal{C}} \\psi_c(x_c)$. From the argument above, we know that all maximal cliques of $G'$ have size 2 except for the root of the corresponding directed tree. Therefore, $p(x) = \\prod_i f(x_i, x_{\\pi_i})$. Since the $f_i$ are defined up to a multiplicative constant, we can make them equal to a conditional probability by properly normalizing. 
Hence  $\\boxed{\\mathcal{L}(G') \\subset \\mathcal{L}(G)}$.\n\n \\vfill\n \n \\clearpage\n\\section{Implementation - Gaussian mixtures}\n \\vfill\n \\textbf{3.a.} The histogram below shows the distribution of distortions for 1000 random initializations of K-means in a 10 by 10 square centered on the global mean of the dataset. It clearly shows that the algorithm might converge to several local minima, where distortion scores differ slightly. The different cluster centers estimated by the algorithm are represented by blue stars on the next page's plot of the K-means algorithm.\n\\begin{figure}[h!]\n\\centering\n \\includegraphics[width=0.7\\linewidth]{Distortions.pdf}\n \\caption{Histogram of distortions for 1000 random initializations of the K-means algorithm}\n\\end{figure}\n\\vfill\n\n\\textbf{3.b.} When we write $\\Sigma_{j, t} = \\sigma_{j, t} I$, the part of the likelihood to be maximized that depends on $\\sigma$ is written\n\\begin{align*}\n\\sum_{i=1}^n \\sum_{j=1}^k \\tau_i^j\\left(-\\frac{n}{2}\\log(\\sigma_{j, t}) - \\frac{1}{2\\sigma_{j,t}} ||x_i - \\mu_{j, t}||^2\\right)\n\\end{align*}\nFor each $j$, the expression above is equal to $-\\frac{n}{2}\\log(\\sigma_{j,t})\\sum_i \\tau_i^j - \\frac{1}{2\\sigma_{j,t}}\\sum_i \\tau_i^j ||x_i - \\mu_{j,t}||^2$. They can be maximized separately. The gradient of this expression with respect to $\\sigma_{j}$ reads: \n\\begin{align*}\n-\\frac{n}{2\\sigma_{j}}\\sum_i \\tau_i^j + \\frac{1}{2\\sigma_{j}^2} \\sum_i\\tau_i^j ||x_i - \\mu_{j}||^2 = 0 \\implies \\boxed{\\sigma_{j, t+1} = \\frac{\\sum_i\\tau_i^j ||x_i - \\mu_{j, t+1}||^2}{n \\sum_i \\tau_i^j}}\n\\end{align*}\nThe other derivations ($\\mu$ and $\\pi$) are the same as for the complete Gaussian mixture model detailed in the course notes.\n\\vfill\n\n\\textbf{3.c.} From the course notes we have $\\boxed{\\Sigma_{j,t+1} = \\dfrac{\\sum_i\\tau_i^j(x_i-\\mu_{j,t+1})(x_i-\\mu_{j,t+1})^T}{ \\sum_i\\tau_i^j}}$\n\\vfill\n\n\\textbf{3.d.} For the isotropic Gaussian model, the log-likelihood is -2649.9 on the training set and -2639 on the test set. Visually, the results for clustering are close to the results of K-means, which can be understood by the fact that the distance function used in K-means is the isotropic norm L2. However, the ``equivalent'' distance function in the isotropic Gaussian mixture model is non-quadratic and therefore penalizes points far from a cluster center differently, depending on the covariance of the cluster. \n\nThe mixture of full Gaussians allows for dealing with much more complex dataset shapes, which in the case of this dataset yields higher log-likelihoods: -2340.2 for the training set and -2431 for the test set. 
The estimated densities more closely resemble the training data and are more accurate than the isotropic model, as the test results show.\n\\vfill\n \\clearpage\n \\begin{figure}[t!]\n\\centering\n\\begin{subfigure}{.49\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{KMeans.pdf}\n\n\\end{subfigure} \n\\begin{subfigure}{.49\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{Isotropic.pdf}\n\n\\end{subfigure}\n\\\\[+5pt]\n\\begin{subfigure}{.49\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{Full.pdf}\n\n\\end{subfigure} \n\n\\end{figure}\n \n \\end{document}", "meta": {"hexsha": "920918cc5183580fd40ebe496f6a9fb7f145ff7d", "size": 8806, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework 2/MVA_DM2_Cisneros.tex", "max_stars_repo_name": "hugcis/Homeworks-PGM", "max_stars_repo_head_hexsha": "fdc392a30809f4ea876261c43587c181ee5d6371", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework 2/MVA_DM2_Cisneros.tex", "max_issues_repo_name": "hugcis/Homeworks-PGM", "max_issues_repo_head_hexsha": "fdc392a30809f4ea876261c43587c181ee5d6371", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework 2/MVA_DM2_Cisneros.tex", "max_forks_repo_name": "hugcis/Homeworks-PGM", "max_forks_repo_head_hexsha": "fdc392a30809f4ea876261c43587c181ee5d6371", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.3525179856, "max_line_length": 595, "alphanum_fraction": 0.6681807858, "num_tokens": 3204, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850402140659, "lm_q2_score": 0.8438950966654774, "lm_q1q2_score": 0.5956085347364969}}
{"text": "% -----------------------------*- LaTeX -*------------------------------\n\\documentclass[12pt]{report}\n\\usepackage{scribe_hgen486}\n\\usepackage{amsmath}\n\\begin{document}\n\n\\scribe{Frank Wen}\t\t% required\n\\lecturenumber{9}\t\t\t% required, must be a number\n\\lecturedate{February 2}\t\t% required, omit year\n\\lecturer{John Novembre} \n\n\\maketitle\n\n% please leave this comment \n\\framebox[.95\\textwidth]{\\parbox{.93\\textwidth}{ {{\\bf Note:}} These\nlecture notes are still rough, and have only have been mildly\nproofread.  }}\n\\vspace*{.1in}\n\n\n% feel free to delete content below this line \n% ----------------------------------------------------------------------\n\n\n\\section{Continuous time processes (the exponential distribution)}\nThe exponential distribution often appears as the distribution of waiting times until an event (i.e. arrival times). \n\n\\begin{align*}\nf(x) = \\lambda e^{-\\lambda x}\n\\end{align*}\n\nLet $T$ be a random variable representing the waiting time until an event happens. Then for $T\\sim exp(\\lambda)$ with expected value $E(T) = \\frac{1}{\\lambda}$. The cumulative distribution function is given by\n\n\\begin{align*}\nF(X) = P(T\\le X) &= 1-e^{-\\lambda x}\\\\\nP(T > X) &= e^{-\\lambda x}\n\\end{align*}\n\nAn important property of the exponential distribution is that it is \\textit{memoryless}. Suppose that $T$ is the waiting time until an event occurs. Then, for any given waiting times $t$ and $s$,\n\n\\begin{align*}\nP(T > t+s | T>t ) &= P(T>s)\n\\end{align*}\n\nThis is to say that if we have already waited $t$ minutes for an event to occur, then the remaining time $s$ for the event to occur is the same as if we hadn't waited the first $t$ minutes to begin with. To show this, we use our definition of conditional probability:\n\\begin{align*}\nP(T > t+s | T>t ) &= \\frac{P(T > t+s , T>t )}{P(T>t)}\n\\end{align*}\nSince we have already waited $t$ minutes, $P(T > t+s , T>t ) = P(T > t+s)$. So,\n\\begin{align*}\nP(T > t+s | T>t ) &= \\frac{P(T > t+s )}{P(T>t)}\\\\\n&= \\frac{e^{-\\lambda(t+s)}}{ e^{-\\lambda t} }\\\\\n&= e^{-\\lambda s} = P(T>s)\n\\end{align*}\n\\section{Discrete time processes (the geometric distribution)}\n\nThe geometric distribution can be thought of as the discrete analog of the exponential distribution. It is the the probability distribution of $X$ number of Bernoulli trials with success probability $p$ before one success is obtained.\n\n\\begin{equation*}\nGeom(p) = P(X= k) = (1-p)^{k-1}p\n\\end{equation*}\n\nBecause the trials are independent, the geometric distribution is also memoryless.\n\n\n\\section{Poisson processes}\n\\subsection{Counting process and the poisson distribution}\nA poisson process can be defined as a counting process, where we are interested in the number of events (arrivals) that have occurred up until some time $N(t)$. Four properties of this counting process are the following\\\\\n1) $N(0) = 0$\\\\\n2) $N(t), t\\ge0$ has independent increments, or what happens in one time interval is independent of what happens in any other interval\\\\\n3) $P(N(t_0+t) - N(t_0) \\ge2) = o(t) $ as $t\\rightarrow0$\\\\\nAs the time between events gets smaller, the probability of observing two or more events in that time step goes to zero. In other words, events do not happen simultaneously.\\\\\n4) $P(N(t_0+t) - N(t_0) =1 ) = \\lambda t + o(t) $ as $t\\rightarrow0$ \\\\\ni.e. the number of events in an interval of length $t$ is Poisson distributed with rate $\\lambda t$. 
We write this as\n\\begin{equation*}\nP(N(t) = k) = \\frac{(\\lambda t)^k e^{-\\lambda t} }{k!}\n\\end{equation*}\n\nHere, $f(x) = o(g(x))$ means that for every $k > 0$, $|f(x)| \\le k|g(x)|$ for all sufficiently small $x$. Intuitively this means that $f(x)$ vanishes faster than $g(x)$ as $x \\rightarrow 0$.\n\n\\subsection{Relationship to the binomial distribution}\nThe Poisson distribution can be derived as the limit of a binomial distribution as the number of trials $n$ goes to infinity, and the probability of success $p$ goes to zero, such that $np = \\lambda$. To show this, we take an interval $(s, s+t]$ and divide it into $n$ intervals of size $t/n$. The number of events in the $i$th interval is given by $N_i$. Since events do not happen simultaneously,\n\\begin{align*}\nP(N_i\\ge2) = o(t/n),  t/n\\rightarrow0\n\\end{align*}\nThis is equivalent to saying that $N_i \\sim Bernoulli(p)$, where $p = \\lambda(t/n) + o(t/n)$.\\\\\\\\\nThe total number of events in the interval is then binomially distributed\n\\begin{align*}\n N(t+s) - N(s) \\sim Binom(n,p) = Binom(n, \\lambda t/n)\n\\end{align*}\n So, as $n$ approaches infinity, $np$ approaches $\\lambda t$, and $N(t+s) - N(s) \\sim Pois(\\lambda t)$.\n\n\\subsection{Inter-arrival times}\nWaiting times between events (inter-arrival times) in a Poisson process are exponentially distributed. To show this, we consider the waiting time $T_1$ until the first arrival and the waiting time $T_2$ between the first and second arrivals.\n\\begin{align*}\nP(T_1 > t) &= P(N(t) = 0)\\\\\n&= \\frac{(\\lambda t)^0 e^{-\\lambda t}}{0!} = e^{-\\lambda t}\n\\end{align*}\nSo the waiting time until the first arrival is exponentially distributed. For the second arrival event, given the first arrival occurred at time $s$, the waiting time is\n\\begin{align*}\nP(T_2 > t) &= \\int_s P(T_2 > t | T_1 = s) P(T_1 = s)\\,ds\\\\\n&= \\int_s P(0 \\text{ events in }(s, s+t] | T_1 = s) P(T_1 = s)\\,ds \n\\end{align*}\nBy independence,\n\\begin{align*}\n&=\\int_s  P(0 \\text{ events in } (s, s+t] ) P(T_1 = s)\\,ds \\\\\n&= e^{-\\lambda t}\\int_s P(T_1 = s)\\,ds \\\\\n&= e^{-\\lambda t}\n\\end{align*}\nBy the same logic, the inter-arrival time (waiting time in between events) for any $i$th interval is also exponentially distributed $T_i \\sim exp(\\lambda)$. Then, the waiting time until the $n$th arrival is the sum of the first $n$ inter-arrival times, $S_n = \\sum_{i=1}^n T_i$. Since the inter-arrival times are exponentially distributed, the $n$th arrival time is gamma distributed $S_n \\sim Gamma(\\lambda, n)$.\n\nIf we know the number of events in some given interval $(0,t]$, then the arrival times, conditioned on the number of events, are uniformly distributed on $(0,t]$. We show this for the simple case with one event.\n\\begin{align*}\nP(T_1 > s | N(t) = 1) &= \\frac{P(T_1 > s , N(t) = 1)}{P(N(t) = 1)}\\\\\n&=\\frac{P(0 \\text{ events in } (0,s], 1 \\text{ event in } (s, t] )}{P(N(t) = 1)}\\\\\n&= \\frac{e^{-\\lambda s} \\lambda (t-s) e^{-\\lambda(t-s)} }{\\lambda t e^{-\\lambda t}}\\\\\n&= \\frac{t-s}{t}\n\\end{align*}\nTo get to the uniform distribution, \n\\begin{align*}\nP(T_1 \\le s | N(t) = 1) &= 1- P(T_1 > s |N(t) =1)\\\\\n&=\\frac{s}{t}\n\\end{align*}\n\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\n\\subsection{Splitting and superposition}\nIn order to develop point processes suitable for certain models, one can employ mathematical operations that remove points from a Poisson process (splitting i.e. 
thinning), or combining points from multiple Poisson processes (superposition).\n\n\\subsubsection{Superposition}\nIf we combine the points from two Poisson processes with rates $r_1$ and $r_2$, the result is a Poisson process with rate $r_1 + r_2$. More formally, suppose we have two Poisson processes of rate $\\lambda_1$ and $\\lambda_2$, given by $\\{N_1(t) ; t>0\\}$ and $\\{N_2(t) ; t>0\\}$. The two are independent if for all $t_1, ..., t_n$, the random variables $N_1(t_1),...,N_1(t_n)$ are independent of $N_2(t_1),...,N_2(t_n)$. Suppose $N(t) = N_1(t) + N_2(t)$ for all $t>0$. Then $\\{N(t); t>0\\}$ is a Poisson process with rate $\\lambda_1+\\lambda_2$.\n\n\\subsubsection{Splitting}\nNow suppose we have a Poisson process $\\{N(t); t>0\\}$ with rate $\\lambda$. Suppose each arrival is switched to $\\{N_1(t) ; t>0\\}$ with probability $p$ and switched to $\\{N_2(t) ; t>0\\}$ with probability $(1-p)$. Then $\\{N_1(t) ; t>0\\}$ is a Poisson process with rate $\\lambda p$ and $\\{N_2(t) ; t>0\\}$ is a Poisson process with rate $\\lambda (1-p)$.\n\n\\subsection{Compound Poisson processes}\nCompound Poisson processes assign a random value or weight to each point in a Poisson process.\n\n\\subsection{Non-homogeneous Poisson processes}\nSo far, we have considered Poisson processes where the rate parameter $\\lambda$ is constant for all time $t > 0$. A nonhomogeneous Poisson process is one where the `rate,' referred to as an intensity function $\\lambda(t)$, can vary over time. $\\{N(t); t>0\\}$ is a nonhomogeneous or inhomogeneous Poisson process if\\\\\n1. $N(0) = 0$\\\\\n2. $\\{N(t), t\\ge 0 \\}$ has independent increments\\\\\n3. $P(N(t+h) - N(t) \\ge 2) = o(h)$\\\\\n4. $P(N(t+h) - N(t) = 1) = \\lambda(t)h + o(h)$\\\\\nNote that a nonhomogeneous Poisson process with $\\lambda(t) = \\lambda$ for all $t>0$ is a homogeneous Poisson process.\\\\\nAlso note that for the nonhomogeneous Poisson process, the interarrival times are no longer exponentially distributed, nor are they independent. 
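A standard way to simulate such a process is by thinning a homogeneous process, which is a direct application of the splitting property above: generate candidate arrivals at a constant rate that bounds $\\lambda(t)$ from above, then keep each candidate at time $t$ with probability $\\lambda(t)$ divided by that bound. A minimal Python sketch (NumPy; the intensity function and horizon below are illustrative choices, not from the lecture):\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef thinned_poisson(intensity, rate_max, t_end):\n    # Homogeneous candidates at rate rate_max (>= intensity everywhere),\n    # each kept with probability intensity(t) / rate_max.\n    t, arrivals = 0.0, []\n    while True:\n        t += rng.exponential(1.0 / rate_max)  # exponential inter-arrivals\n        if t > t_end:\n            return np.array(arrivals)\n        if rng.uniform() < intensity(t) / rate_max:\n            arrivals.append(t)\n\n# Intensity lambda(t) = 2 + sin(t), bounded above by rate_max = 3.\narrivals = thinned_poisson(lambda t: 2.0 + np.sin(t), 3.0, 1000.0)\nprint(len(arrivals))  # close to the integral of lambda over (0, 1000]\n\\end{verbatim}\n\nBy the splitting property, the kept arrivals form a process whose local rate is $\\lambda(t)$.\n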
\n\\subsection{Compound or mixed `Poisson' processes}\n\n\\end{document}\n\n", "meta": {"hexsha": "528dedac435cea51970ef5bd2862f42d4f0fc176", "size": 8580, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/scribe_notes_2016/lec9.tex", "max_stars_repo_name": "stephens999/hgen48600", "max_stars_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-03-19T18:02:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:15:28.000Z", "max_issues_repo_path": "docs/scribe_notes_2016/lec9.tex", "max_issues_repo_name": "stephens999/hgen48600", "max_issues_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-02-05T00:34:09.000Z", "max_issues_repo_issues_event_max_datetime": "2017-03-07T20:15:19.000Z", "max_forks_repo_path": "docs/scribe_notes_2016/lec9.tex", "max_forks_repo_name": "stephens999/hgen48600", "max_forks_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:59:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T23:09:29.000Z", "avg_line_length": 56.821192053, "max_line_length": 544, "alphanum_fraction": 0.6821678322, "num_tokens": 2643, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8438951025545426, "lm_q1q2_score": 0.5956085284479753}}
{"text": "\\documentclass[12pt]{scrartcl}\n\n\\input{preamble}\n\n\\makeatletter\n\\title{Hack 11.0}\\let\\Title\\@title\n\\subtitle{Computer Science I -- Java\\\\\nEncapsulation\\\\\n{\\small\n\\vskip1cm\nDepartment of Computer Science \\& Engineering \\\\\nUniversity of Nebraska--Lincoln}\n\\vskip-3cm}\n%\\author{Dr.\\ Chris Bourke}\n\\date{~}\n\\makeatother\n\n\\begin{document}\n\n\\maketitle\n\n\\hrule\n\n\\input{instructions.tex}\n\n\\section*{Problem Statement}\n\nThere are thousands of commercial, military, and local airports in the US and\naround the world.  The International Civil Aviation Organization maintains a\ndatabase of current and inactive airports around the world.  The database \nuniquely identifies each airport by an alphanumeric GPS code.  Further, each\nrecord contains the following pieces of data on each airport:\n\\begin{itemize}\n  \\item The name of the airport\n  \\item Its latitude in degrees in the range $[-90, 90]$ with negative values corresponding to the southern hemisphere\n  \\item Its longitude in degrees in the range $[-180, 180]$ with negative values corresponding to the western hemisphere\n  \\item The type of airport \n  \\item Its elevation in (whole) feet above sea level\n  \\item Its municipality and its country\n\\end{itemize}\n\nDesign a Java class to encapsulate these attributes to model an\nairport record from the ICAO database.  Also design several methods\nto support your class including a constructor, getters, a \\mintinline{java}{toString()}\nmethod, etc.  You will also implement several utility methods that \nuse your class to compute the air\ndistance(s) between airport locations using their latitude and longitude.\nRecall that the air distance $d$ between two latitude/longitude points can be \nestimated using the Spherical Law of Cosines.\n\n $$d = \\arccos{(\\sin(\\varphi_1) \\cdot \\sin(\\varphi_2) + \\cos(\\varphi_1) \\cos(\\varphi_2) \\cos(\\Delta) )} \\cdot R$$\nwhere\n\\begin{itemize}\n  \\item $\\varphi_1$ is the latitude of location $A$, $\\varphi_2$ is the latitude of location $B$\n  \\item $\\Delta$ is the difference between location $B$'s longitude and location $A$'s longitude\n  \\item $R$ is the (average) radius of the earth, 6,371 kilometers\n\\end{itemize}\nThis formula assumes that latitude and longitude are in radians \n$r$, $-\\pi \\leq r \\leq \\pi$.  To convert from degrees $d$ ($-180 \\leq d \\leq 180$) \nto radians $r$, you can use the simple formula:\n  $$r = \\frac{d}{180} \\pi$$\n\nMore details have been provided in a starter file, \\mintinline{text}{Airport.java}.\nYou will need to design your class and implement all of the specified \nmethods.\n\n\\section*{Instructions}\n\n\\begin{itemize}\n\n  \\item In addition, you must create a main test driver program that \n  demonstrates at least 3 cases per non-trivial function.  Name this file \n  \\mintinline{text}{AirportTester.java} and hand it in.\n\n  \\item You are encouraged to collaborate any number of students \n  before, during, and after your scheduled hack session.  \n\n  \\item You may (in fact are encouraged) to define any additional\n  ``helper'' methods that may help you.\n\n  \\item Include the name(s) of everyone who worked together on\n  this activity in your source file's header.\n\n  \\item Turn in all of your files via webhandin, making sure that \n  it runs and executes correctly in the webgrader.  
Each individual \n  student will need to hand in their own copy and will receive \n  their own individual grade.\n\\end{itemize}  \n\n\n\\end{document}\n", "meta": {"hexsha": "9fd37a92707591ea290a82d6e812fa73a7a96055", "size": 3381, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "honors/hacks/hack11.0.tex", "max_stars_repo_name": "bobbys131/ComputerScienceI", "max_stars_repo_head_hexsha": "93e22289e966386f208c477ee319837877bbe62a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 68, "max_stars_repo_stars_event_min_datetime": "2018-05-14T20:29:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T10:05:16.000Z", "max_issues_repo_path": "honors/hacks/hack11.0.tex", "max_issues_repo_name": "hrithik125/ComputerScienceI", "max_issues_repo_head_hexsha": "40be47f15817a50497f6c6f7cdca9ee1db429b00", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-05-11T01:30:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-02T04:34:10.000Z", "max_forks_repo_path": "honors/hacks/hack11.0.tex", "max_forks_repo_name": "hrithik125/ComputerScienceI", "max_forks_repo_head_hexsha": "40be47f15817a50497f6c6f7cdca9ee1db429b00", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 204, "max_forks_repo_forks_event_min_datetime": "2018-10-17T18:35:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T16:51:50.000Z", "avg_line_length": 36.75, "max_line_length": 120, "alphanum_fraction": 0.7539189589, "num_tokens": 873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8438951005915208, "lm_q1q2_score": 0.5956085270625039}}
{"text": "%!TEX root = maths.tex\n\\usetikzlibrary{patterns,intersections,arrows,backgrounds}\n\n\\newcommand{\\degre}{\\ensuremath{^\\circ}}\n\\pgfplotsset{compat=1.15}\n\\usetikzlibrary{arrows}\n\n\n\\newcommand{\\realPlot}[2]{\n\t\\draw[domain=#1,scale=1,samples=500 ] plot (\\x:{#2}); %Curve with parameter\n}\n\n\n\\newenvironment{fullPlot}[4]\n{\\begin{center}\n\t\t\\resizebox{0.5\\textwidth}{!}{%\t\n\t\t\t\\begin{tikzpicture}\n\t\t\t\\begin{axis}[\n\t\t\tx=1cm,y=1cm,\n\t\t\taxis lines=middle,\n\t\t\tymajorgrids=true,\n\t\t\txmajorgrids=true,\n\t\t\txmin=#1,\n\t\t\txmax=#2,\n\t\t\tymin=#3,\n\t\t\tymax=#4,\n\t\t\txtick={-20,-19,...,20},\n\t\t\tytick={-20,-19,...,20},]\n\t\t\t\\coordinate (origin) at (0,0);\n\t\t\t\\end{axis}  \n\t\t\t\\begin{scope}[shift={(origin)}]\n\t\t\t\\clip (#1,#3) rectangle (#2,#4);\n\t\t\t\t%Dashed lines loop (increments of 15 degrees)\n\t\t\t\\foreach \\i in {0,15,...,75,105,120,...,180}{\n\t\t\t\t\\draw[red,dashed, thick, samples=500] plot(\\x, {tan(\\i) * \\x});}\n\t\t\t\t\\draw[red, dashed, thick, samples=500] plot(0,\\x);\t\t %Line x=0 due to tan() math err\n\t\t}\n\t\n\t\n\t\t{ \t\t\t\n\t\t\t%Green crosses loop (incremenets of 15 degress with upper bound parameter)\n%\t\t\t\\foreach \\x in {0,15,...,75,105,120,...,{#6}}{\n%\t\t\t\t\\pic[line width=1pt,green] at ({\\x}:{#5}) {mycross};}\n\t\t\t\n\t\t\t%\t\\pic[line width=1pt,green] at (0,5) {mycross};\n\t\t\t%\t\\pic[line width=1pt,green] at (0,-5) {mycross};\n\t\t\t\\end{scope}\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\\end{center}\n\n}\t\n\n\n\n\\graphicspath{pictures/Polar_Curves}\n\\begin{document}\n\t\\chapter{Polar Curves}\n\t\\section{Introduction}\n\tThe position of a point on a plane can be described in sever ways. With respect to some origin $O$\n\twe can locate a point $P$ on a plane by noting a horizontal distance followed by a vertical distance.\\\\\n\t\n\t\\definecolor{uququq}{rgb}{0.25,0.25,0.25}\n\t\\definecolor{xdxdff}{rgb}{0.49,0.49,1}\n\t\\definecolor{qqqqff}{rgb}{0,0,1}\n\t\\definecolor{xdxdff}{rgb}{0.49,0.49,1}\n\t\\definecolor{qqqqff}{rgb}{0,0,1}\n\t\\begin{center}\n\t\t\\definecolor{xdxdff}{rgb}{0.49,0.49,1}\n\t\t\\definecolor{qqqqff}{rgb}{0,0,1}\n\t\t\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\t\t\\draw[->,color=black] (-1,0) -- (5,0);\n\t\t\\foreach \\x in {-1,1,2,3,4}\n\t\t\\draw[shift={(\\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\\footnotesize $\\x$};\n\t\t\\draw[->,color=black] (0,-1) -- (0,4);\n\t\t\\foreach \\y in {-1,1,2,3}\n\t\t\\draw[shift={(0,\\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\\footnotesize $\\y$};\n\t\t\\draw[color=black] (0pt,-10pt) node[right] {\\footnotesize $0$};\n\t\t\\clip(-1,-1) rectangle (5,4);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (0,3)-- (4,3);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (2.08,3) -- (2,2.9);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (2.08,3) -- (2,3.1);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (4,0)-- (4,3);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (4,1.58) -- (4.1,1.5);\n\t\t\\draw [dash pattern=on 1pt off 1pt] (4,1.58) -- (3.9,1.5);\n\t\t\\begin{scriptsize}\n\t\t\\fill [color=red] (4,3) circle (1.5pt);\n\t\t\\draw[color=black] (4.31,3.19) node {$P$ ($x$, $y$)};\n\t\t\\fill [color=qqqqff] (4,0) circle (1.5pt);\n\t\t\\draw[color=black] (4.2,0.2) node {$X$};\n\t\t\\fill [color=qqqqff] (0,3) circle (1.5pt);\n\t\t\\draw[color=black] (0.2,3.19) node {$Y$};\n\t\t\\end{scriptsize}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\tHowever the point $P$ can be located on a plane 
with respect to the origin $O$, a horizontal line and the distance of $P$ from $O$.\\\\\n\t\n\t\\definecolor{ttttff}{rgb}{0.2,0.2,1}\n\t\\definecolor{qqqqff}{rgb}{0,0,1}\n\t\\definecolor{cqcqcq}{rgb}{0.75,0.75,0.75}\n\t\\definecolor{ttttff}{rgb}{0.2,0.2,1}\n\t\\definecolor{qqqqff}{rgb}{0,0,1}\n\t\\begin{center}\n\t\t\n\t\t\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\t\t\\draw[->,color=black] (-1,0) -- (6,0);\n\t\t\\foreach \\x in {-1,1,2,3,4,5}\n\t\t\\draw[shift={(\\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\\footnotesize $\\x$};\n\t\t\\draw[->,color=black] (0,-1) -- (0,6);\n\t\t\\foreach \\y in {-1,1,2,3,4,5}\n\t\t\\draw[shift={(0,\\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\\footnotesize $\\y$};\n\t\t\\draw[color=black] (0pt,-10pt) node[right] {\\footnotesize $0$};\n\t\t\\clip(-1,-1) rectangle (6,6);\n\t\t\\draw [shift={(0,0)},color=ttttff,fill=ttttff,fill opacity=0.1] (0,0) -- (0:0.71) arc (0:45:0.71) -- cycle;\n\t\t\\draw (0,0)-- (3,3);\n\t\t\\begin{scriptsize}\n\t\t\\fill [color=qqqqff] (0,0) circle (1.5pt);\n\t\t\\draw[color=black] (-0.4,-0.3) node {Pole};\n\t\t\\fill [color=red] (3,3) circle (1.5pt);\n\t\t\\draw[color=black] (3.52,3.21) node {$P$ ($r$, $\\theta$)};\n\t\t\\draw[color=black] (1.71,1.44) node {$r$};\n\t\t\\draw[color=black] (0.46,0.2) node {$\\theta$};\n\t\t\\end{scriptsize}\n\t\t\\end{tikzpicture}\n\t\t\n\t\\end{center}\n\t\n\tThis is the polar coordinate system where we refer to the origin as the pole and the horizontal line as the initial line. The anticlockwise angle is usually measured in the principal range $-\\pi < \\theta \\leq \\pi$.\n\t\\newpage\n\t\\section{Relationship between Polar and Cartesian Coordinates}\n\t\n\tConsider the point $P$ on the plane, represented both in Cartesian coordinates $(x, y)$ and polar coordinates $(r, \\theta)$. The two representations are related by\n\t\\[\n\tx = r\\cos\\theta, \\qquad y = r\\sin\\theta, \\qquad r^2 = x^2 + y^2, \\qquad \\tan\\theta = \\frac{y}{x}.\n\t\\]\n\tThe above relationships can be used to convert from one form to another.\n\t\\begin{example}\n\t\tFind the polar equation of the curve given by $x^2+y^2=2x$\n\t\\end{example}\n\t\n\t\\begin{alignat*}{2}\n\t& \\qquad \\quad x^2+y^2  &   & = 2x  \\\\    \n\t& \\implies 2r\\cos\\theta &   & = r^2 \\\\\n\t& \\implies 2\\cos\\theta  &   & = r   \n\t\\end{alignat*}\n\t\n\t\n\t\\hrulefill\n\t\\begin{example}\n\t\tFind the Cartesian equation corresponding to the curves \\textbf{a)} $r=4(1+\\cos\\theta)$ and \\textbf{b)} $3 = r\\sin 2\\theta$.\n\t\\end{example}\n\t\\begin{equation*}\n\t\\begin{split}\n\tr &= 4(1+\\cos\\theta)\\\\\n\tr^2 &= 4r + 4r\\cos\\theta\\\\\n\tx^2+y^2 &= 4\\sqrt{x^2+y^2} + 4x\n\t\\end{split}\n\t\\qquad \\qquad \\qquad\n\t\\begin{split}\n\t3 &= r\\sin2\\theta\\\\\n\t3 &= 2r\\sin\\theta\\cos\\theta\\\\\n\t3 &= \\frac{2xy}{r} = \\frac{2xy}{\\sqrt{x^2+y^2}}\n\t\\end{split}\n\t\\end{equation*}\n\t\n\t\\newpage\n\t\\section{Sketching Polar Curves}\n\t\\begin{example}\n\t\tSketch the polar curve $r=2+2\\cos\\theta$.\n\t\\end{example}\n\tObserving the following sketch, we see that a line of symmetry is present in the $x$-axis for the \\emph{cosine} function family.\n\t\\begin{center}\n\t\t\\resizebox{300pt}{!}{%\n\t\t\t\\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n\t\t\t\t\n\t\t\t\t$\\displaystyle\\theta$ & $0$ & $\\frac\\pi6$ & $\\frac\\pi4$ & $\\frac\\pi3$ & $\\frac\\pi2$ & $\\frac{2\\pi}3$ & $\\frac{3\\pi}4$ & $\\frac{5\\pi}6$ & $\\pi$ \\\\ \\hline\n\t\t\t\t$r$                   & 4   & 3.7         & 3.4         & 3           & 2           & 1              & 0.6            & 0.3            & 0     \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}[scale=1.7]\n\t\t\\draw[thick,->,>=latex] 
(-3,0)--(5,0) node[above] {$x$};\n\t\t\\draw[thick,->,>=latex] (0,-3)--(0,4) node[left] {$y$};\n\t\t\\draw[domain=0:540,scale=1,samples=1000] plot (\\x:{2+(2*cos(\\x))});\n\t\t\\clip (-3,-3) rectangle (6,4);\n\t\t\n\t\t% Draw dotted lines and crosses\n\t\t\\foreach \\i in {0, {pi/6},{pi/4}, {pi/3}, {2* pi / 3}, {3 * pi / 4}, {5 *pi / 6}, {pi}}{\n\t\t\t\\draw[red, dashed, samples=500] plot(\\x, {tan( deg(\\i)) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({deg(\\i)}:{2+(2*cos(deg(\\i)))}) {mycross};}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the graph with polar equation $r=3-2\\cos\\theta$.\n\t\\end{example}\n\tSimilarly to the above example, a line of symmetry is present in the $x$-axis.\n\t\\begin{center}\n\t\t\\resizebox{300pt}{!}{%\n\t\t\t\\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n\t\t\t\t$\\displaystyle\\theta$ & $0$ & $\\frac\\pi6$ & $\\frac\\pi4$ & $\\frac\\pi3$ & $\\frac\\pi2$ & $\\frac{2\\pi}3$ & $\\frac{3\\pi}4$ & $\\frac{5\\pi}6$ & $\\pi$ \\\\ \\hline\n\t\t\t\t$r$                   & 1   & 1.3         & 1.6         & 2           & 3           & 4              & 4.4            & 4.7            & 5     \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\n\t\n\t\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\draw[thick,->,>=latex] (-6,0)--(4,0) node[above] {$x$};\n\t\t\\draw[thick,->,>=latex] (0,-4)--(0,4) node[left] {$y$};\n\t\t\\draw[domain=0:540,scale=1,samples=500] plot (\\x:{3-(2*cos(\\x))});\n\t\t\\clip (-5,-5) rectangle (6,4);\n\t\t\n\t\t%\tDraw dotted lines and crosses\n\t\t\\foreach \\i in {0, {pi/6},{pi/4}, {pi/3}, {2* pi / 3}, {3 * pi / 4}, {5 *pi / 6}, {pi}}{\n\t\t\t\\draw[red, dashed, thick, samples=500] plot(\\x, {tan( deg(\\i)) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({deg(\\i)}:{3-(2*cos(deg(\\i)))}) {mycross};}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the graph with polar equation $r=5+2\\sin\\theta$.\n\t\\end{example}\n\t\n\tIt is noted that since the graph is part of the \\emph{sine} function family, the line of symmetry is now present in the $y$-axis.\n\t\n\t\\begin{center}\n\t\t\\resizebox{300pt}{!}{%\n\t\t\t\\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n\t\t\t\t$\\theta$ & $\\frac{-\\pi}2$ & $\\frac{-\\pi}3$ & $\\frac{-\\pi}4$ & $\\frac{-\\pi}6$ & 0   & $\\frac\\pi6$ & $\\frac\\pi4$ & $\\frac\\pi3$ & $\\frac\\pi2$ \\\\ \\hline\n\t\t\t\t$r$      & 3              & $3.3$          & $3.6$          & $4$            & $5$ & $6$         & $6.4$       & $6.7$       & $7$         \n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{center}\n\t\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\draw[thick,->,>=latex] (-8,0)--(6,0) node[above] {$x$};\n\t\t\\draw[thick,->,>=latex] (0,-4)--(0,9) node[left] {$y$};\n\t\t\\draw[domain=0:540,scale=1,samples=500] plot (\\x:{5+(2*(sin(\\x)))});\n\t\t\n\t\t\\clip (-9,-4) rectangle (8,8);\n\t\t\n\t\t%\tDraw dotted lines and crosses\n\t\t\\foreach \\i in {{-pi/6},{-pi/4}, {-pi/3}, 0, {pi/6},{pi/4}, {pi/3}}{\n\t\t\t\\draw[red, dashed, thick, samples=500] plot(\\x, {tan( deg(\\i)) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({deg(\\i)}:{5+(2*(sin(deg(\\i))))}) {mycross};}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the polar curve $r=3+7\\sin\\theta$ and the circle $r=5$. 
Find the polar coordinates of their points of intersection.\n\t\\end{example}\n\t\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\draw[thick,->,>=latex] (-7,0)--(7,0) node[above] {$x$};\n\t\t\\draw[thick,->,>=latex] (0,-6)--(0,11) node[left] {$y$};\n\t\t\\draw[domain=0:540,scale=1,samples=1000] plot (\\x:{3+(7*sin(\\x))}); %Curve\n\t\t\\draw[domain=0:540,scale=1,samples=500] plot (\\x:5);\t\t\t\t%Circle\n\t\t\\clip (-7,-2) rectangle (6,10);\n\t\t\n\t\t%Draw dotted lines and crosses\n\t\t\\foreach \\i in {{-pi/6},{-pi/4}, {-pi/3}, 0, {pi/6},{pi/4}, {pi/3}}{\n\t\t\t\\draw[red, dashed, thick, domain=-7:7, samples=500] plot(\\x, {tan( deg(\\i)) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({deg(\\i)}:{3+(7*sin(deg(\\i)))}) {mycross};}\n\t\t\n\t\t\\draw[red, dashed, thick, domain=-7:12, samples=500] plot(0,\\x);\n\t\t%Tangential Asymptotes\n\t\t\\pic[line width=1pt,green] at ({deg(pi/2)}:10) {mycross};\n\t\t\\pic[line width=1pt,green] at ({deg(-pi/2)}:-4) {mycross};\n\t\t\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\t\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the polar curve $r=8\\sin^2\\theta$ and the line $r=\\csc\\theta$. Find the polar coordinates of their points of intersection.\n\t\\end{example}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\tx=1cm,y=1cm,\n\t\taxis lines=middle,\n\t\tymajorgrids=true,\n\t\txmajorgrids=true,\n\t\txmin=-4,\n\t\txmax=4,\n\t\tymin=-9,\n\t\tymax=9,\n\t\txtick={-4,-3,...,4},\n\t\tytick={-9,-8,...,9},]\n\t\t\\draw[domain=0:540,scale=1,samples=1000] plot (\\x:{8*(sin(\\x)^2)}) node[above] {$r=8\\sin^2\\theta$}; %Curve\n\t\t\\draw[domain=-5:5,scale=1,samples=1000] plot (\\x, 1) node[above] {$r=\\csc\\theta$}; %Curve\n\t\t%\t\\clip (-7,-2) rectangle (6,10);\n\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\t\\section{Area partly bounded by a polar curve}\n\tConsider the following diagram showing the graph of $r=f(\\theta)$.\n\t\n\t\\begin{center}\n\t\t\\resizebox{300pt}{!}{%\n\t\t\t\\definecolor{ccffww}{rgb}{0.8,1,0.4}\n\t\t\t\\definecolor{qqttcc}{rgb}{0,0.2,0.8}\n\t\t\t\\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}\n\t\t\t\\definecolor{xdxdff}{rgb}{0.49019607843137253,0.49019607843137253,1}\n\t\t\t\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]\n\t\t\t\\begin{axis}[\n\t\t\tx=1cm,y=1cm,\n\t\t\taxis lines=middle,\n\t\t\tymajorgrids=true,\n\t\t\txmajorgrids=true,\n\t\t\txmin=-4.087350135407929,\n\t\t\txmax=12.075704876608706,\n\t\t\tymin=-1.9400069331551724,\n\t\t\tymax=8.089791136292845,\n\t\t\txtick={-4,-3,...,12},\n\t\t\tytick={-1,0,...,8},]\n\t\t\t\\clip(-4.087350135407929,-1.9400069331551724) rectangle (12.075704876608706,8.089791136292845);\n\t\t\t\n\t\t\t\\fill[line width=0.8pt,color=ccffww,fill=ccffww,fill opacity=0.55] (1.6469997622170334,5.285200045490499) -- (1.8018963690587861,5.303857839349265) -- (2.057802642713001,5.344452622890168) -- (2.2425420008780867,5.375610046949744) -- (2.4085986469696503,5.401170598203379) -- (2.6317350137162325,5.427157717856224) -- (2.826543364850297,5.438046959825773) -- (2.9930508460166525,5.436092904437624) -- (3.1583701235903847,5.422111858769122) -- (3.394065928081652,5.378740328890088) -- (3.5780745473050493,5.323957494370168) -- (3.7684491850962933,5.246759511526517) -- (3.909043353636018,5.175946821869229) -- (4.096867913840399,5.062840104074621) -- (4.339673854011711,4.885528774754539) -- (4.70860996427174,4.552297629385634) -- (5.130951567039364,4.088855310480258) -- 
(5.420878217861827,3.7324711921147538) -- (5.752127039672099,3.3035617372125) -- (5.8996282307282275,3.1102010591021587) -- (0,0) -- cycle;\n\t\t\t\\draw [line width=2pt,dash pattern=on 1pt off 1pt,domain=-4.087350135407929:12.075704876608706] plot(\\x,{(-0--4.099788120747726*\\x)/1.2775959286106953});\n\t\t\t\\draw [line width=2pt,dash pattern=on 1pt off 1pt,domain=-4.087350135407929:12.075704876608706] plot(\\x,{(-0--4.87618*\\x)/9.24945});\n\t\t\t\\draw[line width=2pt,color=xdxdff,smooth,samples=100,domain=-4.087350135407929:12.075704876608706] plot(\\x,{0.014635180803888441*(\\x)^(4)-0.23126329831793696*(\\x)^(3)+1.0699408938350827*(\\x)^(2)-1.8059409257011467*(\\x)+6.282771603030379});\n\t\t\t\\draw (2.6126764256158723,7.846889871240623) node[anchor=north west] {$\\theta = \\alpha$};\n\t\t\t\\draw (7.440339068528793,5.195217727753861) node[anchor=north west] {$\\theta = \\beta$};\n\t\t\t\\draw [line width=2.4pt,color=qqttcc,fill=qqttcc,fill opacity=0.1] (0,0) -- (27.797567846322046:0.9614841741650471) arc (27.797567846322046:72.69166717480417:0.9614841741650471) -- cycle;\n\t\t\t\\draw [color=xdxdff](-2.5287336846561685,8.960187336063308) node[anchor=north west] {$ r=f({\\theta})$};\n\t\t\t\\begin{scriptsize}\n\t\t\t\\draw [fill=uuuuuu] (1.6469997622170334,5.285200045490499) circle (2pt);\n\t\t\t%\t\t\t\t\\draw[color=uuuuuu] (1.74734066886733,5.524146524178746) node {$G$};\n\t\t\t\\draw [fill=uuuuuu] (0,0) circle (2pt);\n\t\t\t\\draw [fill=uuuuuu] (5.8996282307282275,3.1102010591021587) circle (2pt);\n\t\t\t\\draw[color=uuuuuu] (6.038596351456593,3.3937000119498784) node {$D_1$};\n\t\t\t\\draw[color=qqttcc] (1.0743017469517973,0.5143079324766583) node {$\\omega$};\n\t\t\t\\end{scriptsize}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\\end{center}\n\tIt can be shown that the area bounded by the curve $r=f(\\theta)$, from $\\theta = \\alpha$ to $\\theta = \\beta$ is given by \\[A = \\frac12\\int_\\alpha^\\beta r^2\\,d\\theta\\]\n\t\\hrulefill\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the curve with polar equation $r=2(1-\\cos\\theta)\\sqrt{\\sin\\theta}$ for $0 \\leq \\theta\\leq \\pi$ and find the area it encloses.\n\t\\end{example}\n\t\\begin{center}\n\t\t\\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}\n\t\t\t$\\theta$ & 0   & $\\frac{\\pi}{12}$ & $\\frac{\\pi}6$ & $\\frac{\\pi}4$ & $\\frac{\\pi}3$ & $\\frac{5\\pi}{12}$ & $\\frac\\pi2$ & $\\frac{7\\pi}{12}$ & $\\frac{2\\pi}3$ & $\\frac{3\\pi}4$ & $\\frac{5\\pi}6$ & $\\frac{11\\pi}{12}$ & $\\pi$ \\\\ \\hline\n\t\t\t$r$      & $0$ & $0.03$           & $0.2$         & $0.5$         & $0.9$         & $1.5$             & $2$         & $2.5$             & $2.8$          & $2.9$          & $2.6$          & $2$                & $0$   \n\t\t\\end{tabular}\n\t\\end{center}\n\t\n\t\\begin{fullPlot}{-3}{2}{-2}{3}\n\t\\realPlot{0:180}{2*(1-cos(\\x))*(sqrt(sin(\\x)))}\n\t\\end{fullPlot}\n%\t\\fullPlot{-3}{2}{-2}{3}{2*(1-cos(\\x))*(sqrt(sin(\\x)))}{180}\n\t\\begin{align*}\n\tA & =\\frac12\\int_0^\\pi \\left(2(1-\\cos\\theta)\\sqrt{\\sin\\theta}\\right)^2\\,d\\theta             \\\\\n\t& = \\frac12\\int_0^\\pi 4(1-\\cos\\theta)^2\\sin(\\theta)\\,d\\theta                                \\\\\n\t& = 2\\int_0^\\pi \\sin(\\theta)(1-2\\cos(\\theta)+\\cos^2\\theta)\\,d\\theta\\\\\n\t&= 2\\int_0^\\pi \\sin\\theta - 2\\sin\\theta\\cos\\theta+\\sin\\theta\\cos^2\\theta\\,d\\theta\\\\\n\t&= 2\\left(\\left.-\\cos\\theta + \\cos^2\\theta - \\frac{\\cos^3\\theta}{3}\\right|_0^\\pi\\right)\\\\\n\t&= 2\\left(\\frac73 - \\left(-\\frac13\\right)\\right) = \\frac{16}{3}\n\t\\end{align*}\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the 
curve with polar equation $r=3-4\\cos\\theta$ and find the area enclosed by the inner loop.\n\t\\end{example}\n\t\n\t\\fullPlot{-5}{5}{-5}{5}{3-4*(cos(\\x))}{360}\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the curve $r=5\\sin3\\theta$ and the circle $r=5$. Find the area of the region which lies inside the circle but outside the curve.\n\t\\end{example}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\tx=1cm,y=1cm,\n\t\taxis lines=middle,\n\t\tymajorgrids=true,\n\t\txmajorgrids=true,\n\t\txmin=-6,\n\t\txmax=6,\n\t\tymin=-6,\n\t\tscale=1,\n\t\tymax=6,\n\t\txtick={-6,-5,...,6},\n\t\tytick={-6,-5,...,6},]\n\t\t\\coordinate (origin) at (0,0);\n\t\t\\end{axis}  \n\t\t\\begin{scope}[shift={(origin)}]\n\t\t\\clip (-6,-6) rectangle (6,6);\n\t\t\\draw[domain=0:360,scale=1,samples=500 ] plot (\\x:{5*sin(3*\\x)}); %Curve \n\t\t\\draw[domain=0:360,scale=1,samples=500 ] plot (\\x:5); %Circle \n\t\t\\foreach \\i in {0,15,...,75,105,120,...,180}{\n\t\t\t\\draw[red, dashed, thick, domain=-5:5, samples=500] plot(\\x, {tan(\\i) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({\\i}:{5*sin(3*\\i)}) {mycross};\n\t\t}\n\t\t\\draw[red, dashed, thick, domain=-5:5, samples=500] plot(0, \\x);\n\t\t\\pic[line width=1pt,green] at (0,5) {mycross};\n\t\t\\pic[line width=1pt,green] at (0,-5) {mycross};\n\t\t\\end{scope}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\t\n\t\\begin{multicols}{2}\n\t\t\\noindent\n\t\t\\begin{align*}\n\t\tA & = \\frac12 \\int_0^\\frac\\pi3 (5\\sin(3\\theta))^2\\,d\\theta                        \\\\\n\t\t& = \\frac{25}{2} \\int_0^\\frac\\pi3 \\sin^2(3\\theta)\\,d\\theta                      \\\\\n\t\t& = \\frac{25}{2} \\int_0^\\frac\\pi3 \\frac{1-\\cos(6\\theta)}{2}\\,d\\theta            \\\\\n\t\t& = \\frac{25}{4} \\left[\\theta - \\frac{\\sin(6\\theta)}{6}\\right]_0^\\frac\\pi3      \\\\\n\t\t& = \\frac{25\\pi}{12}\n\t\t\\end{align*}\n\t\t\n\t\t\\begin{align*}\n\t\tA & = \\pi r^2 \\\\\n\t\t& = 25\\pi   \\\\~\\\\~\\\\~\\\\~\\\\\n\t\t\\end{align*}\n\t\\end{multicols}\n\tThe first integral gives the area of one petal; since the curve has three equal petals, the required area is $25\\pi - 3\\cdot\\frac{25\\pi}{12} = \\frac{75\\pi}{4}$.\n\t\n\t\\newpage\n\t\\begin{example}\n\t\tSketch the curve $r=3(1+\\sqrt{2}\\cos\\theta)$ and the line $r=3\\sqrt{2}\\sec\\theta$. Find the polar coordinates of their points of intersection. 
Find the area of the region which lies inside the curve but to the right of the line.\n\t\\end{example}\n\t\n\t\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\tx=1cm,y=1cm,\n\t\taxis lines=middle,\n\t\tymajorgrids=true,\n\t\txmajorgrids=true,\n\t\txmin=-2,\n\t\txmax=8,\n\t\tymin=-6,\n\t\tscale=1,\n\t\tymax=6,\n\t\txtick={-2,-1,...,8},\n\t\tytick={-6,-5,...,6},]\n\t\t\\coordinate (origin) at (0,0);\n\t\t\\end{axis}  \n\t\t\\begin{scope}[shift={(origin)}]\n\t\t\\clip (-2,-6) rectangle (8,6);\n\t\t\\draw[domain=0:360,scale=1,samples=500 ] plot (\\x:{3*(1+(sqrt(2)*cos(\\x)))}); %Curve \n\t\t\\draw[domain=-55:55,scale=1,samples=500 ] plot (\\x:{3*sqrt(2)*sec(\\x)}); %Line x = 3*sqrt(2) \n\t\t\\foreach \\i in {0,15,...,75,105,120,...,180}{\n\t\t\t\\draw[red, dashed, thick, domain=-5:8, samples=500] plot(\\x, {tan(\\i) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({\\i}:{3*(1+(sqrt(2)*cos(\\i)))}) {mycross};\n\t\t}\n\t\t%\t\\draw[red, dashed, thick, domain=-5:5, samples=500] plot(0, \\x);\n\t\t%\t\\pic[line width=1pt,green] at (0,5) {mycross};\n\t\t%\t\\pic[line width=1pt,green] at (0,-5) {mycross};\n\t\t\\end{scope}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\newpage\n\t\n\t\\begin{example}\n\t\tSketch the curve $r=5+3\\sin\\theta$ and the line $r\\sin\\theta = 8$.\n\t\\end{example}\n\t\n\t\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\tx=1cm,y=1cm,\n\t\taxis lines=middle,\n\t\tymajorgrids=true,\n\t\txmajorgrids=true,\n\t\txmin=-7,\n\t\txmax=7,\n\t\tymin=-3,\n\t\tscale=1,\n\t\tymax=9,\n\t\txtick={-7,-6,...,7},\n\t\tytick={-3,-2,...,9},]\n\t\t\\coordinate (origin) at (0,0);\n\t\t\\end{axis}  \n\t\t\\begin{scope}[\n\t\tshift={(origin)}]\n\t\t\\clip (-7,-3) rectangle (7,9);\n\t\t\\draw[domain=0:360,scale=1,samples=500 ] plot (\\x:{5+(3*sin(\\x))}); %Curve \n\t\t\\draw[domain=-7:7,scale=1,samples=500 ] plot (\\x,8); %Line y=8 \n\t\t\\foreach \\i in {0,15,...,75,105,120,...,180}{\n\t\t\t\\draw[red, domain=-7:7,dashed, thick, samples=500] plot(\\x, {tan(\\i) * \\x});\n\t\t\t\\pic[line width=1pt,green] at ({\\i}:{5+(3*sin(\\i))}) {mycross};\n\t\t}\n\t\t%\t\\draw[red, dashed, thick, domain=-5:5, samples=500] plot(0, \\x);\n\t\t%\t\\pic[line width=1pt,green] at (0,5) {mycross};\n\t\t%\t\\pic[line width=1pt,green] at (0,-5) {mycross};\n\t\t\\end{scope}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\n\t\n\\end{document}", "meta": {"hexsha": "d8674850d0787282d3dac67d2f63137806858f10", "size": 19982, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pure Mathematics/Polar_Curves.tex", "max_stars_repo_name": "Girogio/My-LaTeX", "max_stars_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-12T11:45:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-30T21:47:25.000Z", "max_issues_repo_path": "Pure Mathematics/Polar_Curves.tex", "max_issues_repo_name": "Girogio/My-LaTeX", "max_issues_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pure Mathematics/Polar_Curves.tex", "max_forks_repo_name": "Girogio/My-LaTeX", "max_forks_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 39.6468253968, "max_line_length": 914, "alphanum_fraction": 0.6004904414, "num_tokens": 8606, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5955582830988848}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% ============================================================================================\n\\section*{Checking the 2nd and 3rd order terms of Calzetta etal.}\n\nThe following calculations show that my results for the RNC connection agree with those of Calzetta etal. to third order terms.\n\nNote that I take $\\nabla_{ab}$ to be $\\nabla_a\\left(\\nabla_b\\right)$.\n\nNote also that $(LCB)\\>R_{abcd} = - (Calzetta)\\> R_{abcd}$. Consequently, I replace $R_{abcd}$ with\n$-R_{abcd}$ in the Calzetta expressions (done as a Cadabra substitution rule).\n\nThis is relatively straightforward. We just apply a few carefully chosen applications of the first and second Bianchi identities.\n\nNote that in this example, 2nd and 3rd order refer to the powers of $x$ in the expression. This differs from\nthe usage elsewhere in these examples (3rd and 4th order with respect to $R$ and its derivatives).\n\n\\clearpage\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,u,v,w#}::Indices(\"latin\",position=independent).\n   {\\mu,\\nu,\\rho,\\sigma,\\tau,\\lambda,\\xi#}::Indices(\"greek\",position=independent).\n\n   \\nabla{#}::Derivative.\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.\n   g^{a b}::Weight(label=gnum,value=1).\n\n   \\delta{#}::KroneckerDelta.\n\n   R_{a b c d}::RiemannTensor.\n   R_{a b c d}::Depends(\\nabla{#}).\n\n   x^{a}::Weight(label=xnum,value=1).\n\n   def add_tags (obj,tag):\n\n      n = 0\n      ans = Ex('0')\n\n      for i in obj.top().terms():\n         foo = obj[i]\n         bah = Ex(tag+'_{'+str(n)+'}')\n         ans := @(ans) + @(bah) @(foo).\n         n = n + 1\n\n      return ans\n\n   def clear_tags (obj,tag):\n\n      ans := @(obj).\n      foo  = Ex(tag+'_{a?} -> 1')\n      substitute (ans,foo)\n\n      return ans\n\n   def get_xterm (obj,n):\n\n       foo := @(obj).\n       bah  = Ex(\"xnum = \" + str(n))\n       distribute  (foo)\n       keep_weight (foo, bah)\n\n       return foo\n\n   def get_gterm (obj,n):\n\n       foo := @(obj).\n       bah  = Ex(\"gnum = \" + str(n))\n       distribute  (foo)\n       keep_weight (foo, bah)\n\n       return foo\n\n   # note: keeping numbering as is (out of order) to ensure R appears before \\nabla R etc.\n   def product_sort (obj):\n       substitute (obj,$ g^{a b}                   -> A001^{a b}                $)\n       substitute (obj,$ x^{a}                     -> A002^{a}                  $)\n       substitute (obj,$ z^{a}                     -> A003^{a}                  $)\n       substitute (obj,$ \\nabla_{e f}{R_{a b c d}} -> A006_{a b c d e f}        $)\n       substitute (obj,$ \\nabla_{e}{R_{a b c d}}   -> A005_{a b c d e}          $)\n       substitute (obj,$ R_{a b c d}               -> A004_{a b c d}            $)\n       sort_sum       (obj)\n       sort_product   (obj)\n       rename_dummies (obj)\n       substitute (obj,$ A001^{a b}                -> g^{a b}                   $)\n       substitute (obj,$ A002^{a}                  -> x^{a}                     $)\n       substitute (obj,$ A003^{a}                  -> z^{a}                     $)\n       substitute (obj,$ A004_{a b c d}            -> R_{a b c d}               $)\n       substitute (obj,$ A005_{a b c d e}          -> \\nabla_{e}{R_{a b c d}}   $)\n       substitute (obj,$ A006_{a b c d e f}        -> \\nabla_{e f}{R_{a b c d}} $)\n\n       return obj\n\n   def reformat (obj,scaleA,scaleB):\n\n      foo  = Ex(str(scaleA))\n      moo  
= Ex(str(scaleB))\n      bah := @(foo) @(obj) / @(moo).\n\n      distribute     (bah)\n      bah = product_sort (bah)\n      rename_dummies (bah)\n      canonicalise   (bah)\n      factor_out     (bah,$g^{c? d?}$)\n      factor_out     (bah,$x^{a?},z^{b?}$)\n      ans := @(moo) @(bah) / @(foo).\n\n      return ans\n\n   # ==========================================================================\n   # LCB\n\n   import cdblib\n   Gamma  = cdblib.get ('Gamma','../connection.json')             # cdb(ex-12.100,Gamma)\n\n   # note that the next two lines require careful inspection of the free indices on Gamma\n   # expecting Gamma = \\Gamma^{d}_{ab}\n   Gamma := z^{a} z^{b} @(Gamma).\n\n   # lower index ^{d} to _{v}\n\n   Gamma := g_{v d} @(Gamma).\n\n   distribute (Gamma)\n   substitute (Gamma, $g_{a d} g^{d b} -> \\delta_{a}^{b}$)\n   eliminate_kronecker (Gamma)                                    # cdb(ex-12.101,Gamma)\n\n   # change free index _{v} to _{a}\n\n   foo := tmp_{v} -> @(Gamma).                                    # cdb(ex-12.191,foo)\n   bah := tmp_{a}.                                                # cdb(ex-12.192,bah)\n   substitute (bah, foo)                                          # cdb(ex-12.193,bah)\n\n   Gamma := @(bah).                                               # cdb(ex-12.102,Gamma)\n\n   Gamma = product_sort (Gamma)                                   # cdb(ex-12.103,Gamma)\n\n   gam1  = get_xterm (Gamma,1)                                    # cdb(ex-12.200,gam1)\n   gam2  = get_xterm (Gamma,2)                                    # cdb(ex-12.201,gam2)\n   gam3  = get_xterm (Gamma,3)                                    # cdb(ex-12.202,gam3)\n\n   gam30 = get_gterm (gam3,0)                                     # cdb(ex-12.203,gam30)\n   gam31 = get_gterm (gam3,1)                                     # cdb(ex-12.204,gam31)\n\n   gam1  = reformat (gam1, 3,1)                                   # cdb(ex-12.300,gam1)\n   gam2  = reformat (gam2,12,1)                                   # cdb(ex-12.301,gam2)\n\n   gam30 = reformat (gam30,40,1)                                  # cdb(ex-12.302,gam30)\n   gam31 = reformat (gam31,45,2)                                  # cdb(ex-12.303,gam31)\n\n   gam3 := @(gam30) + @(gam31).                                   # cdb(ex-12.304,gam3)\n\n   Gamma := @(gam1) + @(gam2) + @(gam3).                          
# cdb(ex-12.305,Gamma)\n\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.100}}\n   \\Dmath*{\\cdb*{ex-12.191}}\n   \\Dmath*{\\cdb*{ex-12.192}}\n   \\Dmath*{\\cdb*{ex-12.193}}\n   \\Dmath*{\\cdb*{ex-12.101}}\n   \\Dmath*{\\cdb*{ex-12.102}}\n   \\Dmath*{\\cdb*{ex-12.103}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.200}}\n   \\Dmath*{\\cdb*{ex-12.201}}\n   \\Dmath*{\\cdb*{ex-12.202}}\n   \\Dmath*{\\cdb*{ex-12.203}}\n   \\Dmath*{\\cdb*{ex-12.204}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.300}}\n   \\Dmath*{\\cdb*{ex-12.301}}\n   \\Dmath*{\\cdb*{ex-12.302}}\n   \\Dmath*{\\cdb*{ex-12.303}}\n   \\Dmath*{\\cdb*{ex-12.304}}\n   \\Dmath*{\\cdb*{ex-12.305}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{cadabra}\n   # ==========================================================================\n   # Calzetta\n   # note: \\nabla_{a b} defined as \\nabla_{a}\\nabla_{b}\n\n   GammaBar := z^{\\nu} z^{\\rho} (\n                 (2/3) R^{\\mu}_{\\nu\\rho\\sigma} x^{\\sigma}\n               + (1/12) (5 \\nabla_{\\lambda}{R^{\\mu}_{\\nu\\rho\\sigma}}\n                         + \\nabla_{\\rho}{R^{\\mu}_{\\sigma\\nu\\lambda}}) x^{\\sigma} x^{\\lambda}\n               + (1/6) (  (9/10) \\nabla_{\\tau\\lambda}{R^{\\mu}_{\\rho\\nu\\sigma}}\n                        + (3/20) (  \\nabla_{\\tau\\rho}{R^{\\mu}_{\\sigma\\nu\\lambda}}\n                                  + \\nabla_{\\rho\\tau}{R^{\\mu}_{\\sigma\\nu\\lambda}} )\n                        + (1/60) (  21 R^{\\mu}_{\\lambda\\xi\\rho} R^{\\xi}_{\\sigma\\nu\\tau}\n                                  + 48 R^{\\mu}_{\\xi\\rho\\lambda} R^{\\xi}_{\\sigma\\nu\\tau}\n                                  - 37 R^{\\mu}_{\\sigma\\xi\\lambda} R^{\\xi}_{\\nu\\rho\\tau} ) ) x^{\\sigma} x^{\\lambda} x^{\\tau} ).\n                                                                  # cdb(ex-12.400,GammaBar)\n\n   # convert from Greek to Latin indices\n\n   distribute (GammaBar)\n   rename_dummies (GammaBar,\"greek\",\"latin\")                      # cdb(ex-12.401,GammaBar)\n\n   # lower the \\mu index\n\n   GammaBar := \\delta_{a \\mu} @(GammaBar).                        
# cdb(ex-12.402,GammaBar)\n   distribute (GammaBar)                                          # cdb(ex-12.403,GammaBar)\n   eliminate_kronecker (GammaBar)                                 # cdb(ex-12.404,GammaBar)\n\n   # sort products\n\n   GammaBar = product_sort (GammaBar)                             # cdb(ex-12.405,GammaBar)\n\n   # Replace R with - R (Calzetta uses the non-MTW convention for Riemann)\n\n   substitute (GammaBar, $R_{a b c d} -> - R_{a b c d}$)          # cdb(ex-12.406,GammaBar)\n   substitute (GammaBar, $R^{a}_{b c d} -> - R^{a}_{b c d}$)      # cdb(ex-12.407,GammaBar)\n\n   substitute (GammaBar, $R^{a}_{b c d} -> g^{a e} R_{e b c d}$)  # cdb(ex-12.408,GammaBar)\n\n   cal1 = get_xterm (GammaBar,1)                                  # cdb(ex-12.500,cal1)\n   cal2 = get_xterm (GammaBar,2)                                  # cdb(ex-12.501,cal2)\n   cal3 = get_xterm (GammaBar,3)                                  # cdb(ex-12.502,cal3)\n\n   cal1 = reformat (cal1,3,1)                                     # cdb(ex-12.600,cal1)\n   cal2 = reformat (cal2,12,1)                                    # cdb(ex-12.601,cal2)\n   # cal3 = reformat (cal3,360,1)                                   # cdb(ex-12.602,cal3)\n\n   cal30 = get_gterm (cal3,0)                                     # cdb(ex-12.602,cal30)\n   cal31 = get_gterm (cal3,1)                                     # cdb(ex-12.603,cal31)\n\n   cal1  = reformat (cal1, 3,1)                                   # cdb(ex-12.604,cal1)\n   cal2  = reformat (cal2,12,1)                                   # cdb(ex-12.605,cal2)\n\n   cal30 = reformat (cal30,40,1)                                  # cdb(ex-12.606,cal30)\n   cal31 = reformat (cal31,360,1)                                 # cdb(ex-12.607,cal31)\n\n   cal3 := @(cal30) + @(cal31).                                   # cdb(ex-12.608,cal3)\n\n   GammaBar := @(cal1) + @(cal2) + @(cal3).                       # cdb(ex-12.409,GammaBar)\n\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.400}}\n   \\Dmath*{\\cdb*{ex-12.401}}\n   \\Dmath*{\\cdb*{ex-12.402}}\n   \\Dmath*{\\cdb*{ex-12.403}}\n   \\Dmath*{\\cdb*{ex-12.404}}\n   \\Dmath*{\\cdb*{ex-12.405}}\n   \\Dmath*{\\cdb*{ex-12.406}}\n   \\Dmath*{\\cdb*{ex-12.407}}\n   \\Dmath*{\\cdb*{ex-12.408}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.500}}\n   \\Dmath*{\\cdb*{ex-12.501}}\n   \\Dmath*{\\cdb*{ex-12.502}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.600}}\n   \\Dmath*{\\cdb*{ex-12.601}}\n   \\Dmath*{\\cdb*{ex-12.602}}\n   \\Dmath*{\\cdb*{ex-12.603}}\n   \\Dmath*{\\cdb*{ex-12.604}}\n   \\Dmath*{\\cdb*{ex-12.605}}\n   \\Dmath*{\\cdb*{ex-12.606}}\n   \\Dmath*{\\cdb*{ex-12.607}}\n   \\Dmath*{\\cdb*{ex-12.608}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.409}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\def\\GammaBar{{\\bar{\\Gamma}}}\n\n% -----------------------------------------------------------------------------\n\\subsection*{The fun begins $\\Gamma - \\GammaBar$}\n\nIt's now time to compute the difference $\\Gamma-\\GammaBar$. 
Here it is.\n\n\\begin{cadabra}\n   def reformat_diff (obj):\n\n       distribute (obj)\n\n       obj1  = get_xterm (obj,1)\n       obj2  = get_xterm (obj,2)\n       obj3  = get_xterm (obj,3)\n\n       obj30 = get_gterm (obj3,0)\n       obj31 = get_gterm (obj3,1)\n\n       obj1  = reformat (obj1, 3,1)\n       obj2  = reformat (obj2,12,1)\n\n       obj30 = reformat (obj30,40,1)\n       obj31 = reformat (obj31,360,1)\n\n       obj3 := @(obj30) + @(obj31).\n\n       ans := @(obj1) + @(obj2) + @(obj3).\n\n       return ans\n\n   # We could use reformat_diff here but instead we'll do it one step at a time so that\n   # we can see exactly what's going on. Later on we will use reformat_diff to do the job.\n\n   diff := @(Gamma) - @(GammaBar).                                # cdb(ex-12.diff.100,diff)\n   distribute (diff)\n\n   diff1  = get_xterm (diff,1)                                    # cdb(ex-12.diff.200,diff1)\n   diff2  = get_xterm (diff,2)                                    # cdb(ex-12.diff.201,diff2)\n   diff3  = get_xterm (diff,3)                                    # cdb(ex-12.diff.202,diff3)\n\n   diff30 = get_gterm (diff3,0)                                   # cdb(ex-12.diff.203,diff30)\n   diff31 = get_gterm (diff3,1)                                   # cdb(ex-12.diff.204,diff31)\n\n   diff1  = reformat (diff1, 3,1)                                 # cdb(ex-12.diff.300,diff1)\n   diff2  = reformat (diff2,12,1)                                 # cdb(ex-12.diff.301,diff2)\n\n   diff30 = reformat (diff30,40,1)                                # cdb(ex-12.diff.302,diff30)\n   diff31 = reformat (diff31,360,1)                               # cdb(ex-12.diff.303,diff31)\n\n   diff3 := @(diff30) + @(diff31).                                # cdb(ex-12.diff.304,diff3)\n\n   diff := @(diff1) + @(diff2) + @(diff3).                        # cdb(ex-12.diff.305,diff)\n\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.100}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.200}}\n   \\Dmath*{\\cdb*{ex-12.diff.201}}\n   \\Dmath*{\\cdb*{ex-12.diff.202}}\n   \\Dmath*{\\cdb*{ex-12.diff.203}}\n   \\Dmath*{\\cdb*{ex-12.diff.204}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.300}}\n   \\Dmath*{\\cdb*{ex-12.diff.301}}\n   \\Dmath*{\\cdb*{ex-12.diff.302}}\n   \\Dmath*{\\cdb*{ex-12.diff.303}}\n   \\Dmath*{\\cdb*{ex-12.diff.304}}\n   \\Dmath*{\\cdb*{ex-12.diff.305}}\n\\end{dgroup*}\n\n\\clearpage\n\n% -----------------------------------------------------------------------------\n\\subsection*{Second order terms}\n\n\\begin{cadabra}\n   diff2  = get_xterm (diff,2)\n   diff2 := 12 @(diff2).                                                        
# cdb (ex-12.701,diff2)\n   distribute (diff2)                                                           # cdb (ex-12.702,diff2)\n\n   diff2 = add_tags (diff2,'\\\\mu')                                              # cdb (ex-12.711,diff2)\n\n   # swap indices on middle term, then apply 2nd Bianchi identity\n\n   zoom       (diff2, $\\mu_{1} Q??$)                                            # cdb (ex-12.712,diff2)\n   substitute (diff2, $\\nabla_{b}{R_{a d c e}} -> - \\nabla_{b}{R_{d a c e}}$)   # cdb (ex-12.713,diff2)\n   unzoom     (diff2)\n\n   substitute (diff2, $\\mu_{1} -> \\mu_{0}, \\mu_{2} -> \\mu_{0}$)                 # cdb (ex-12.714,diff2)\n   substitute (diff2, $\\mu_{0} -> 0$)                                           # cdb (ex-12.715,diff2)\n\n   diff2 = clear_tags (diff2,'\\\\mu')                                            # cdb (ex-12.716,diff2)\n\n   diff2 := @(diff2) / 12 .\n\n   diff := @(diff1) + @(diff2) + @(diff3).\n\n   diff  = reformat_diff (diff)                                                 # cdb(ex-12.diff.306,diff)\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.701}}\n   \\Dmath*{\\cdb*{ex-12.702}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.711}}\n   \\Dmath*{\\cdb*{ex-12.712}}\n   \\Dmath*{\\cdb*{ex-12.713}}\n   \\Dmath*{\\cdb*{ex-12.714}}\n   \\Dmath*{\\cdb*{ex-12.715}}\n   \\Dmath*{\\cdb*{ex-12.716}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.306}}\n\\end{dgroup*}\n\n\\clearpage\n\n% -----------------------------------------------------------------------------\n\\subsection*{Third order terms, commute $\\nabla\\nabla R$ terms}\n\n\\begin{cadabra}\n   diff3  = get_xterm (diff,3)\n   diff3 := 360 @(diff3).                              # cdb (ex-12.801,diff3)\n   distribute (diff3)                                  # cdb (ex-12.802,diff3)\n\n   # commutation rule for covariant derivs on Rabcd, see exrecise 3.6\n   # note: \\nabla_{a b} defined as \\nabla_{a}\\nabla_{b}\n   CommuteNablaRiemann := \\nabla_{f e}(R_{a b c d}) -> \\nabla_{e f}(R_{a b c d})\n                                                     + g^{u v} R_{u a e f} R_{v b c d}\n                                                     + g^{u v} R_{u b e f} R_{a v c d}\n                                                     + g^{u v} R_{u c e f} R_{a b v d}\n                                                     + g^{u v} R_{u d e f} R_{a b c v}.\n\n   diff3 = add_tags (diff3,'\\\\mu')                     # cdb (ex-12.901,diff3)\n\n   # commute derivs on Rabcd so that each double deriv is of the form \\nabla_{b*}\n\n   substitute (diff3, $\\mu_{3} -> \\mu_{1}$)            # cdb (ex-12.902,diff3)\n\n   zoom       (diff3, $\\mu_{1} Q??$)                   # cdb (ex-12.903,diff3)\n   substitute (diff3, CommuteNablaRiemann)             # cdb (ex-12.904,diff3)\n   unzoom     (diff3)\n\n   diff3 = clear_tags (diff3,'\\\\mu')\n   diff3 := @(diff3) / 360 .\n\n   distribute   (diff3)\n   canonicalise (diff3)                                # cdb (ex-12.905,diff3)\n\n   diff := @(diff1) + @(diff2) + @(diff3).\n\n   diff  = reformat_diff (diff)                        # cdb(ex-12.diff.307,diff)\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.801}}\n   \\Dmath*{\\cdb*{ex-12.802}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.901}}\n   \\Dmath*{\\cdb*{ex-12.902}}\n   \\Dmath*{\\cdb*{ex-12.903}}\n   \\Dmath*{\\cdb*{ex-12.904}}\n   \\Dmath*{\\cdb*{ex-12.905}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   
\\Dmath*{\\cdb*{ex-12.diff.307}}\n\\end{dgroup*}\n\n\\clearpage\n\n% -----------------------------------------------------------------------------\n\\subsection*{Third order terms, use 2nd Bianchi identity on $\\nabla\\nabla R$ terms}\n\n\\begin{cadabra}\n   diff3  = get_xterm (diff,3)\n   diff3 := 360 @(diff3).                                                           # cdb (ex-12.910,diff3)\n   distribute (diff3)                                                               # cdb (ex-12.911,diff3)\n\n   diff3 = add_tags (diff3,'\\\\mu')                                                  # cdb (ex-12.912,diff3)\n\n   # swap indices on middle second deriv term, then apply 2nd Bianchi identity\n\n   zoom       (diff3, $\\mu_{1} Q??$)                                                # cdb (ex-12.913,diff3)\n   substitute (diff3, $\\nabla_{b c}{R_{a e d f}} -> - \\nabla_{b c}{R_{e a d f}}$)   # cdb (ex-12.914,diff3)\n   unzoom     (diff3)\n\n   substitute (diff3, $\\mu_{1} -> \\mu_{0}, \\mu_{2} -> \\mu_{0}$)                     # cdb (ex-12.915,diff3)\n   substitute (diff3, $\\mu_{0} -> 0$)                                               # cdb (ex-12.916,diff3)\n\n   diff3 = clear_tags (diff3,'\\\\mu')\n   diff3 := @(diff3) / 360 .\n\n   distribute   (diff3)\n   canonicalise (diff3)                                                             # cdb (ex-12.917,diff3)\n\n   diff := @(diff1) + @(diff2) + @(diff3).\n\n   diff  = reformat_diff (diff)                                                     # cdb(ex-12.diff.308,diff)\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.910}}\n   \\Dmath*{\\cdb*{ex-12.911}}\n   \\Dmath*{\\cdb*{ex-12.912}}\n   \\Dmath*{\\cdb*{ex-12.913}}\n   \\Dmath*{\\cdb*{ex-12.914}}\n   \\Dmath*{\\cdb*{ex-12.915}}\n   \\Dmath*{\\cdb*{ex-12.916}}\n   \\Dmath*{\\cdb*{ex-12.917}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.308}}\n\\end{dgroup*}\n\n\\clearpage\n\n% -----------------------------------------------------------------------------\n\\subsection*{Third order terms, use 1st Bianchi identity on $R R$ terms}\n\n\\begin{cadabra}\n   diff3  = get_xterm (diff,3)\n   diff3 := 360 @(diff3).\n   distribute (diff3)\n\n   diff3 = add_tags (diff3,'\\\\mu')                                              # cdb (ex-12.921,diff3)\n\n   # swap indices on middle term, then apply 1st Bianchi identity\n\n   zoom       (diff3, $\\mu_{1} Q??$)                                            # cdb (ex-12.922,diff3)\n   substitute (diff3, $R_{a d b g} R_{c e f h} -> - R_{a d g b} R_{c e f h}$)   # cdb (ex-12.923,diff3)\n   unzoom     (diff3)\n\n   substitute (diff3, $\\mu_{1} -> \\mu_{0}, \\mu_{2} -> \\mu_{0}$)                 # cdb (ex-12.924,diff3)\n   substitute (diff3, $\\mu_{0} -> 0$)                                           # cdb (ex-12.925,diff3)\n\n   diff3 = clear_tags (diff3,'\\\\mu')                                            # cdb (ex-12.926,diff3)\n\n   diff := @(diff1) + @(diff2) + @(diff3).\n\n   diff  = reformat_diff (diff)                                                 # cdb(ex-12.diff.309,diff)\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.921}}\n   \\Dmath*{\\cdb*{ex-12.922}}\n   \\Dmath*{\\cdb*{ex-12.923}}\n   \\Dmath*{\\cdb*{ex-12.924}}\n   \\Dmath*{\\cdb*{ex-12.925}}\n   \\Dmath*{\\cdb*{ex-12.926}}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\Dmath*{\\cdb*{ex-12.diff.309}}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "733bfe8e2ceb202e12bacaa4317af919da08d5fd", "size": 19974, "ext": "tex", 
"lang": "TeX", "max_stars_repo_path": "source/cadabra/checks/check-calzetta.tex", "max_stars_repo_name": "leo-brewin/riemann-normal-coords", "max_stars_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-20T16:15:58.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-20T16:15:58.000Z", "max_issues_repo_path": "source/cadabra/checks/check-calzetta.tex", "max_issues_repo_name": "leo-brewin/riemann-normal-coords", "max_issues_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/checks/check-calzetta.tex", "max_forks_repo_name": "leo-brewin/riemann-normal-coords", "max_forks_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.5570934256, "max_line_length": 129, "alphanum_fraction": 0.4599979974, "num_tokens": 6581, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.746138993030751, "lm_q1q2_score": 0.5955582785931008}}
{"text": "\\section{Periodogram-based Methods Applied to Real\u2013World Data}\n\n\\begin{enumerate}[label=\\alph*), leftmargin=*]\n%% a)\n\\item\n%\n\nThe sunspot time series and its Chebyshev-windowed periodogram method are provided at figure \\ref{fig:1_4_a}.\nThe raw time series (blue) spectral estimates are compared to two preprocessed series:\n\\begin{enumerate}\n    \\item a centered and detrended series (red, \\texttt{mean \\& detrend})\n    \\item a logarithmic centered series (yellow, \\texttt{log \\& mean})\n\\end{enumerate}\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/a/sunspots-raw}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/a/sunspots-Chebyshev}\n    \\end{subfigure}\n    \\caption{Sunspots: Chebyshev-windowed periodogram method, to raw and preprocessed series.}\n    \\label{fig:1_4_a}\n\\end{figure}\n\nSubtraction of the \\texttt{mean}, results in attenuation of the DC component ($f = 0$), while \\texttt{detrend} removed low-frequency components.\nConsequently, the spectral estimate of the centered and detrended series is almost identical to the raw sunspots time series for frequencies $f \\gtrapprox 0.02$ (in $rad/sample$),\nwhile the lower frequency components are eliminated.\n\nTo avoid logarithms of zero, the logarithmic series is obtained by first adding a small constant to the raw sunspots time series (MATLAB $\\mathtt{eps} = 2.2204e-16$) and then\ntaking natural \\texttt{log} and removing the \\texttt{mean}. Similarly, the DC component is also removed and the peaks at the same frequencies are observed, but now they are more noticeable\n(only peaks of interest are above $0dB$).\n\n%% b)\n\\item\n%\n\nThe Standard and Bartlett method periodograms of the EEG signal are provided at figure \\ref{fig:1_4_b_1}. The latter is easier to interpret since its peaks are accentuated.\nThe peaks distinguished from the periodograms are at frequencies $f = 13,\\ 26,\\ 39,\\ 50\\ Hz$ as well as a wider peak at $8-10\\ Hz$.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/b/eeg-periodogram-standard}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/b/eeg-periodogram-averaged-bartlett}\n    \\end{subfigure}\n    \\caption{EEG: standard and Bartlett method periodogram.}\n    \\label{fig:1_4_b_1}\n\\end{figure}\n\nAs suggested by the instructions, the components in $8-10\\ Hz$ are due to the tiredness of the subject during the recording, adequately captured by the Bartlett method periodograms,\nregardless the window length $\\Delta t$.\n\nThe peak at $f_{0}^{SSEVP} = 13\\ Hz$ corresponds to the fundamental frequency of the SSEVP, whose harmonics can be noticed at frequencies $f_{1}^{SSEVP} = 26\\ Hz$ and\n$f_{2}^{SSEVP} = 39\\ Hz$. 
Its third harmonic $f_{3}^{SSEVP} = 52\\ Hz$ is hardly visible in the standard periodogram and in the Bartlett method periodogram with $\\Delta t = 10\\ s$, since it is masked by\nthe strong $f^{PLI} = 50\\ Hz$ component due to the power-line interference.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/b/eeg-periodogram-averaged-bartlett-dt_10}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/assets/b/eeg-periodogram-averaged-bartlett-dt_1}\n    \\end{subfigure}\n    \\caption{EEG: Bartlett method periodogram averaging window $\\Delta t$.}\n    \\label{fig:1_4_b_2}\n\\end{figure}\n\nFigure \\ref{fig:1_4_b_2} depicts the comparison between the standard and Bartlett method periodograms for averaging window lengths $\\Delta t = 10\\ s$ and $\\Delta t = 1\\ s$.\nFor $\\Delta t = 10\\ s$, the periodogram has reduced variance compared to the standard periodogram, but inevitably reduced resolution. Nonetheless, the peaks of interest\n(harmonics of the SSEVP, the power-line interference frequency and the $8-10\\ Hz$ band) are observable. On the other hand, for $\\Delta t = 1\\ s$, despite the further reduction in variance\n(by a factor of 10), the resolution is insufficient to capture the $3^{rd}$ harmonic of the SSEVP, but the rest of the peaks are visible.\n\nOverall, the trade-off between variance and resolution is illustrated: from the standard periodogram, with the best resolution and maximum variance, to the Bartlett method periodogram\nwith $\\Delta t = 1\\ s$, with the worst resolution and the least variance.\n\n%\n\\end{enumerate}", "meta": {"hexsha": "a6223c837aa7e20dab12d76f1f804824bbf89aca", "size": 5049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/index.tex", "max_stars_repo_name": "filangel/ASPMI", "max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z", "max_issues_repo_path": "tex/report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/index.tex", "max_issues_repo_name": "AmjadHisham/ASPMI", "max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/report/spectrum-estimation/periodogram-based-methods-applied-to-real-world-data/index.tex", "max_forks_repo_name": "AmjadHisham/ASPMI", "max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z", "avg_line_length": 55.4835164835, "max_line_length": 195, "alphanum_fraction": 0.7470786294, "num_tokens": 1393, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.798186775339273, "lm_q1q2_score": 0.5955582768021075}}
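The Bartlett estimate discussed in the report above is plain segment averaging. A minimal sketch in Python/NumPy (not the report's MATLAB code; the sampling rate and segment lengths in the usage note are illustrative placeholders, not values taken from the report):

import numpy as np

def bartlett_periodogram(x, fs, seg_len):
    # Average the periodograms of K non-overlapping segments:
    # the variance of the estimate drops roughly as 1/K, while the
    # frequency resolution coarsens to fs/seg_len -- the trade-off
    # between variance and resolution described above.
    k = len(x) // seg_len
    segs = np.reshape(x[:k * seg_len], (k, seg_len))
    psd = np.zeros(seg_len)
    for seg in segs:
        psd += np.abs(np.fft.fft(seg)) ** 2 / (seg_len * fs)
    psd /= k
    freqs = np.fft.fftfreq(seg_len, d=1.0 / fs)
    half = seg_len // 2
    return freqs[:half], psd[:half]

# For a hypothetical fs = 1200 Hz recording, Delta t = 1 s corresponds to
# seg_len = 1200 and Delta t = 10 s to seg_len = 12000: a larger seg_len
# gives finer resolution but fewer segments to average over.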
{"text": "\\documentclass{article}\n\\input{../homework-problems}\n\n\\toggletrue{solutions}\n%\\togglefalse{solutions}\n\\toggletrue{answers}\n\\newtheorem{problem}{Problem}\n\n\\newcommand{\\hide}[1]{}\n\\renewcommand{\\fcProblemRef}{\\theproblem.\\theenumi}\n\\renewcommand{\\fcSubProblemRef}{\\theenumi.\\theenumii}\n\n\n\\begin{document}\n\n\\begin{center}\n\\Large\nMaster Problem Sheet \\\\\nVersion \\today\n\\\\ Calculus III \\\\ \\normalsize Instructor: Todor Milev\n\n\\end{center}\n\nThis master problem sheet contains all freecalc problems on the topics studied in Calculus III. For a list of contributors/authors of the freecalc project (and in particular, the present problem collection) see file contributors.tex.\n\n\n\\fcLicenseContent\n\n\n\n\\tableofcontents\n\n\\section{Distances and coordinates}\n\\begin{problem}\n\\input{../../modules/coordinate-systems/homework/distance-between-points}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/coordinate-systems/homework/recognizing-spheres}\n\\end{problem}\n\n\\section{Vectors}\n\\subsection{Vector basics}\n\\begin{problem}\n\\input{../../modules/vectors/homework/basic-operations-in-coordinates-1}\n\\end{problem}\n\\subsection{Dot product}\n\\begin{problem}\n\\input{../../modules/vectors/homework/dot-product}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/vectors-orthogonal}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/vectors-angles}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/angle-between-vertices-and-origin-regular-tetrahedron}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/vector-projection}\n\\end{problem}\n\n\\subsection{Cross product}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/triangle-in-space-area}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/find-orthogonal-vector}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/find-tetrahedron-volume}\n\\end{problem}\n\n\\input{../../modules/vectors/homework/find-tetrahedron-volume-solutions}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/points-coplanar}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/jacobi-identity}\n\\end{problem}\n\n\\section{Lines, planes, points and relationships between them}\n\\subsection{Lines}\n\\subsubsection{Line from point and direction}\n\\begin{problem}\n\\input{../../modules/vectors/homework/line-from-point-and-direction}\n\\end{problem}\n\n\\subsubsection{Line from two points}\n\\begin{problem}\n\\input{../../modules/vectors/homework/line-from-two-points}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/line-from-two-points-lines-from-cube-vertices}\n\\end{problem}\n\\input{../../modules/vectors/homework/line-from-two-points-lines-from-cube-vertices-solution}\n\n\\subsection{Planes}\n\\subsubsection{Plane from point and normal}\n\\begin{problem}\n\\input{../../modules/vectors/homework/plane-from-point-and-normal}\n\\end{problem}\n\\input{../../modules/vectors/homework/plane-from-point-and-normal-solution}\n\n\\subsubsection{Plane from point and two directions}\n\\begin{problem}\n\\input{../../modules/vectors/homework/plane-from-point-and-two-directions}\n\\end{problem}\n\n\\subsubsection{Plane from three points}\n\\begin{problem}\n\\input{../../modules/vectors/homework/plane-from-three-points}\n\\end{problem}\n\n\\subsection{Distances between points, lines, 
planes}\n\\subsubsection{Distance between line and point}\n\\begin{problem}\n\\input{../../modules/vectors/homework/distance-line-point}\n\\end{problem}\n\\subsubsection{Distance between plane and point}\n\\begin{problem}\n\\input{../../modules/vectors/homework/distance-plane-point}\n\\end{problem}\n\n\\subsubsection{Distance between lines}\n\\begin{problem}\n\\input{../../modules/vectors/homework/distance-between-edges-regular-tetrahedron}\n\\end{problem}\n\\begin{problem}\n\\input{../../modules/vectors/homework/distance-between-lines}\n\\end{problem}\n\\input{../../modules/vectors/homework/distance-between-lines-solutions}\n\n\n\\subsection{Angles (in space)}\n\\subsubsection{Angles between lines}\n\\begin{problem}\n\\input{../../modules/vectors/homework/angle-between-edges-regular-tetrahedron}\n\\end{problem}\n\n\\subsubsection{Angle between plane and line}\n\\begin{problem}\n\\input{../../modules/vectors/homework/angle-betwen-plane-and-line}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/vectors/homework/angle-betwen-tetrahedron-edge-and-base}\n\\end{problem}\n\n\\subsubsection{Angles between planes}\n\\begin{problem}\n\\input{../../modules/vectors/homework/angle-between-faces-tetrahedron}\n\\end{problem}\n\n\\section{Polar, Cylindrical and Spherical coordinates}\n\\subsection{Polar coordinates}\n\\begin{problem}\n\\input{../../modules/polar-coordinates/homework/equation-of-line-in-polar}\n\\end{problem}\n\n\\input{../../modules/polar-coordinates/homework/equation-of-line-in-polar-solutions}\n\n\\begin{problem}\n\\input{../../modules/polar-coordinates/homework/equation-of-circle-in-polar}\n\\end{problem}\n\\subsection{Cylindrical coordinates - basics}\n\n\\begin{problem}\n\\input{../../modules/cylindrical-coordinates/homework/equation-of-plane-in-cylindrical}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/cylindrical-coordinates/homework/equation-of-sphere-in-cylindrical}\n\\end{problem}\n\\subsection{Spherical coordinates - basics}\n\n\\begin{problem}\n\\input{../../modules/spherical-coordinates/homework/equation-of-plane-in-spherical}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/spherical-coordinates/homework/equation-of-sphere-in-spherical}\n\\end{problem}\n\n\\section{Quadratic surfaces}\n\n\\begin{problem}\n\\input{../../modules/quadratic-surfaces/homework/name-quadratic-surfaces-and-give-examples}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/quadratic-surfaces/homework/find-surface-type-from-equation}\n\\end{problem}\n\\input{../../modules/quadratic-surfaces/homework/find-surface-type-from-equation-solutions}\n\n\\section{Curves in space}\n\\subsection{Curvature}\n\\begin{problem}\n\\input{../../modules/parametric-curves/homework/curvature-computation-1}\n\\end{problem}\n\\input{../../modules/parametric-curves/homework/curvature-computation-1-solutions}\n\n\\subsection{Curve length}\n\\begin{problem}\n\\input{../../modules/parametric-curves/homework/space-curve-length-1}\n\\end{problem}\n\\input{../../modules/parametric-curves/homework/space-curve-length-1-solutions}\n\\begin{problem}\n\\input{../../modules/parametric-curves/homework/space-curve-write-length-integral-1}\n\\end{problem}\n\n\n\\section{Multivariable limits}\n\n\n\\begin{problem}\n\\input{../../modules/continuity-multivariable/homework/limits-multivariable-1}\n\\end{problem}\n\n\\input{../../modules/continuity-multivariable/homework/limits-multivariable-1-solutions}\n\\section{Tangent 
planes}\n\n\n\\begin{problem}\n\\input{../../modules/partial-derivatives/homework/compute-tangent-plane-graph-function-1}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/partial-derivatives/homework/compute-tangent-plane-surface-implicit-equation-1}\n\\end{problem}\n\\input{../../modules/partial-derivatives/homework/compute-tangent-plane-surface-implicit-equation-1-solutions}\n\n\\section{Partial derivatives}\n\n\\begin{problem}\n\\input{../../modules/partial-derivatives/homework/compute-partial-derivative-1}\n\\end{problem}\n\n\\input{../../modules/partial-derivatives/homework/compute-partial-derivative-1-solutions}\n\n\\subsection{Variable changes in differential operators}\n\n\\begin{problem}\n\\input{../../modules/partial-derivatives/homework/variable-change-differential-operators-1}\n\\end{problem}\n\\input{../../modules/partial-derivatives/homework/variable-change-differential-operators-1-solutions}\n\\section{Multivariable optimization (min/max)}\n\n\\begin{problem}\n\\input{../../modules/optimization-multivariable/homework/find-min-max-1}\n\\end{problem}\n\\input{../../modules/optimization-multivariable/homework/find-min-max-1-solutions}\n\\subsection{Lagrange multipliers}\n\n\\begin{problem}\n\\input{../../modules/optimization-multivariable/homework/min-max-with-lagrange-multipliers-1}\n\\end{problem}\n\\input{../../modules/optimization-multivariable/homework/min-max-with-lagrange-multipliers-1-solutions}\n\n\\section{Double integrals}\n\n\\begin{problem}\n\\input{../../modules/integration-multivariable/homework/double-integrals-1}\n\\end{problem}\n\n\\begin{problem}\n\\input{../../modules/integration-multivariable/homework/double-integrals-2}\n\\end{problem}\n\\input{../../modules/integration-multivariable/homework/double-integrals-2-solutions}\n\\subsection{Double integrals solved via changing integration order}\n\n\\begin{problem}\n\\input{../../modules/integration-multivariable/homework/double-integrals-change-variable-order-1}\n\\end{problem}\n\\input{../../modules/integration-multivariable/homework/double-integrals-change-variable-order-1-solutions}\n\\section{Triple Integrals}\n\\begin{problem}\n\\input{../../modules/integration-multivariable/homework/triple-integrals-over-curvilinear-trapezoids-1}\n\\end{problem}\n\\input{../../modules/integration-multivariable/homework/triple-integrals-over-curvilinear-trapezoids-1-solutions}\n\n\\section{Variable Changes in Multivariable Integrals}\n\\begin{problem}\n\\textbf{Problem \\ref{problemVolumeRaisedHorn} is of higher difficulty than the problem you will get on the exam.}\n\n\\input{../../modules/integration-multivariable-variable-change/homework/volumes-1}\n\\end{problem}\n\\input{../../modules/integration-multivariable-variable-change/homework/volumes-1-solutions}\n\n\\subsection{Integrals in Polar, Cylindrical and Spherical coordinates}\n\\subsubsection{Polar coordinates}\n\\begin{problem}\n\\input{../../modules/integration-multivariable-variable-change/homework/integrals-polar-1}\n\\end{problem}\n\\subsubsection{Cylindrical coordinates}\n\\begin{problem}\n\\input{../../modules/integration-multivariable-variable-change/homework/integrals-cylindrical-1}\n\\end{problem}\n\\subsubsection{Spherical coordinates}\n\\begin{problem}\n\\input{../../modules/integration-multivariable-variable-change/homework/integrals-spherical-centroids-centers-of-mass-1}\n\\end{problem}\n\\section{2D Field 
Potential}\n\\begin{problem}\n\\input{../../modules/integration-line-integrals/homework/find-2D-scalar-potential}\n\\end{problem}\n\\section{Green's Theorem}\n\\subsection{Line Integrals via Greens Theorem}\n\\begin{problem}\n\\input{../../modules/integration-line-integrals/homework/line-integrals-via-greens-theorem}\n\\end{problem}\n\\section{Surface Integrals}\n\\subsection{Surface area}\n\\begin{problem}\n\\input{../../modules/integration-surface-integrals/homework/surface-area-1}\n\\end{problem}\n\\subsection{Flux}\n\\begin{problem}\n\\input{../../modules/integration-surface-integrals/homework/find-flux-1}\n\\end{problem}\n\\subsection{Divergence Theorem}\n\\begin{problem}\n\\input{../../modules/integration-surface-integrals/homework/divergence-theorem-1}\n\\end{problem}\n\n\\end{document}\n\n", "meta": {"hexsha": "35bbdd975da479779b7164fd544924fd85b52447", "size": 10603, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "homework/referenceallproblemsbycourse/calculusiiimasterproblemsheet.tex", "max_stars_repo_name": "tmilev/freecalc", "max_stars_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-07-12T11:15:57.000Z", "max_stars_repo_stars_event_max_datetime": "2017-07-12T11:15:57.000Z", "max_issues_repo_path": "homework/referenceallproblemsbycourse/calculusiiimasterproblemsheet.tex", "max_issues_repo_name": "tmilev/freecalc", "max_issues_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework/referenceallproblemsbycourse/calculusiiimasterproblemsheet.tex", "max_forks_repo_name": "tmilev/freecalc", "max_forks_repo_head_hexsha": "b34b2ad2946fc0b6a9b403d4399a0cf9d23c19dd", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-09-21T13:51:45.000Z", "max_forks_repo_forks_event_max_datetime": "2017-09-21T13:51:45.000Z", "avg_line_length": 31.2772861357, "max_line_length": 233, "alphanum_fraction": 0.7783646138, "num_tokens": 2849, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7981867705385762, "lm_q1q2_score": 0.5955582732201204}}
{"text": "% jam 2004-08-28\n\n\\subsection{Functions of faces}\n\\label{sec:faces}\n\n\n%------------------------------------------------------------------\n\n\\subsubsection{Corner angles}\n\\label{sec:corner_angles}\n\n\\begin{figure}[!htp]\n\\centering\n\\begin{verbatim}\n\n          V2/p2\n            o\n           /U\\\n          / g2\\\n     E20 /     \\ E12\n        / F012  \\\n       /)g0   g1(\\\nV0/p0 o-----------o V1/p1\n           E01\n\n\\end{verbatim}\n\\caption{Face labeling.\n\\label{fig:face_labeling}}\n\\end{figure}\n\nSuppose face $\\F_{012}$ has vertices $\\V_0, \\V_1, \\V_2$,\nat points $\\p_0, \\p_1, \\p_2$,\nand edges $\\E_{01}, \\E_{12}, \\E_{20}$,\nas labeled in figure \\ref{fig:face_labeling}.\n\nThe {\\em corner angle} $\\gamma_i$ in face $\\F_{012}$ of vertex $\\V_i$ is\nthe angle between the two edges in $\\F_{012}$ that meet at $\\V_i$:\n\\begin{eqnarray}\n\\gamma_i\n& = & \\gamma(\\F_{012},\\V_i)\n\\\\\n& = & \\gamma(\\p_i,\\p_{(i+1) \\% 3},\\p_{(i+2) \\% 3})\n\\nonumber\n\\\\\n& = & \\theta(\\p_{(i+1) \\% 3} - \\p_i,\\p_{(i+2) \\% 3} - \\p_i)\n\\nonumber\n\\\\\n& = &\n\\cos^{-1}\n\\left[\n\\frac{ \\left( \\p_{(i+1) \\% 3} - \\p_i \\right)\n  \\bullet\n  \\left( \\p_{(i+2) \\% 3} - \\p_i \\right) }\n{ \\| \\p_{(i+1) \\% 3} - \\p_i \\|\n  \\| \\p_{(i+2) \\% 3} - \\p_i \\| }\n\\right]\n\\nonumber\n\\end{eqnarray}\n\nCorner angles vary between $0$ and $\\pi$, with both extremes\ncorresponding to singular, zero-area faces.\nThe sum of corner angles for a given face is always $\\pi$.\n\nUsing equation \\ref{eq:angle_gradient},\nit's easy to see that the partial gradients are:\n\\begin{eqnarray}\n\\Gc{\\p_0}{\\gamma(\\p_0,\\p_1,\\p_2)}{\\q}\n& = &\n\\frac{\n\\left[ (\\q_1 -\\q_0) \\perp (\\q_2 - \\q_0) \\right]\n+\n\\left[ (\\q_2 -\\q_0) \\perp (\\q_1 - \\q_0) \\right]\n}\n{ \\sqrt{\\|\\q_1 - \\q_0\\|^2\\|\\q_2 - \\q_0\\|^2 -\n\\left( (\\q_1 - \\q_0) \\bullet (\\q_2 - \\q_0) \\right)^2 }}\n\\\\\n\\Gc{\\p_1}{\\gamma(\\p_0,\\p_1,\\p_2)}{\\q}\n& = &\n\\frac{-(\\q_1 -\\q_0) \\perp (\\q_2 - \\q_0)}\n{ \\sqrt{\\|\\q_1 - \\q_0\\|^2\\|\\q_2 - \\q_0\\|^2 -\n\\left( (\\q_1 - \\q_0) \\bullet (\\q_2 - \\q_0) \\right)^2 }}\n\n\\nonumber\n\\\\\n\\Gc{\\p_2}{\\gamma(\\p_0,\\p_1,\\p_2)}{\\q}\n& = &\n\\frac{-(\\q_2 -\\q_0) \\perp (\\q_1 - \\q_0)}\n{ \\sqrt{\\|\\q_1 - \\q_0\\|^2\\|\\q_2 - \\q_0\\|^2 -\n\\left( (\\q_1 - \\q_0) \\bullet (\\q_2 - \\q_0) \\right)^2 }}\n\\nonumber\n\\end{eqnarray}\n\n%------------------------------------------------------------------\n\n\\subsubsection{Functions of face normals}\n\\label{sec:normals}\n\nA number of important functions of triangular meshes,\nsuch as surface area,\nare based on face normal vectors.\n\n%------------------------------------------------------------------\n\n\\paragraph{Area-weighted face normal}\n\\label{sec:areanormal}\n\nSuppose we have a face whose 3 vertices are at $\\p = (\\p_0, \\p_1, \\p_2)$,\nwhere $\\p_i \\in \\Reals^3; i=0,1,2.$\n(Note that the order of the $\\p_i$ determines the orientation of the face.\nWith a face as labeled in figure \\ref{fig:face_labeling},\nthe normal points out of the page.)\n\nThe {\\it area-weighted normal} vector is\n\\nopagebreak\n\\begin{eqnarray}\n\\a (\\p) & = & (\\p_0 \\times \\p_1) \\ + \\ (\\p_1 \\times \\p_2) \\ + \\ (\\p_2 \\times \\p_0) \\\\\n        & = & (\\p_1 - \\p_0) \\ \\times \\ (\\p_2 - \\p_0) \\nonumber \\\\\n        & = & (\\p_2 - \\p_1) \\ \\times \\ (\\p_0 - \\p_1) \\nonumber \\\\\n        & = & (\\p_0 - \\p_2) \\ \\times \\ (\\p_1 - \\p_2) 
\\nonumber\n\\end{eqnarray}\n\nThe 'partial' derivatives of the area-weighted normal are:\n\\begin{eqnarray}\n\\Dd{\\p_0}{\\a}{\\q}{\\r_0} \\\n& = \\ (\\r_0 \\times \\q_1) \\ + \\ (\\q_2 \\times \\r_0) & = (\\q_2 - \\q_1) \\times \\r_0 \\\\\n\\Dd{\\p_1}{\\a}{\\q}{\\r_1} \\\n& = \\ (\\r_1 \\times \\q_2) \\ + \\ (\\q_0 \\times \\r_1) & = (\\q_0 - \\q_2) \\times \\r_1 \\nonumber \\\\\n\\Dd{\\p_2}{\\a}{\\q}{\\r_2} \\\n& = \\ (\\r_2 \\times \\q_0) \\ + \\ (\\q_1 \\times \\r_2) & = (\\q_1 - \\q_0) \\times \\r_2 \\nonumber\n\\end{eqnarray}\n\nNote that $\\Df{\\p}{\\a}$ is {\\it skew-symmetric}, that is,\n$\\Df{\\p}{\\a}^{\\dagger} = -\\Df{\\p}{\\a}.$\n\n%------------------------------------------------------------------\n\n\\paragraph{Face area}\n\\label{sec:facearea}\n\nThe area of a face is half the length of the area-weighted normal:\n\\begin{eqnarray}\nA(\\p)\n& = & \\frac{1}{2} \\| \\ \\a(\\p) \\ \\|  \\\\\n& = & \\frac{1}{2} \\| \\ (\\p_0 \\times \\p_1) \\ + \\ (\\p_1 \\times \\p_2) \\ + \\ (\\p_2 \\times \\p_0) \\ \\|.\n\\nonumber\n\\end{eqnarray}\n\nIt follows from equation \\ref{eq:norm_derivative}\nthat the first 'partial' derivative of the face area is:\n\\begin{eqnarray}\n\\label{eq:area_partial_derivative}\n\\Dd{\\p_0}{A}{\\q}{\\r_0}\n& = &\n\\frac{\\a(\\q)^\\dagger}{2\\|\\a(\\q)\\|}\n{\\Dd{\\p_0}{\\a}{\\q}{\\r_0}}  \\\\\n& = &\n\\frac{\\a(\\q)}{2\\|\\a(\\q)\\|}\n\\bullet\n\\left[(\\r_0 \\times \\q_1) + (\\q_2 \\times \\r_0)\\right] \\nonumber \\\\\n& = &\n\\frac{\\a(\\q)}{2\\|\\a(\\q)\\|}\n\\bullet\n\\left[(\\q_2 - \\q_1) \\ \\times \\ \\r_0\\right] \\nonumber \\\\\n& = &\n\\frac{\\a(\\q) \\times (\\q_2 - \\q_1)}{2\\|\\a(\\q)\\|}\n\\bullet\n\\r_0 \\nonumber\n\\end{eqnarray}\n\nThe last identity follows from equation \\ref{eq:dot_cross}.\n\nThe 'partial' gradients of the face area are then:\n\\begin{eqnarray}\n\\Gc{\\p_0}{A}{\\q} & = & \\frac{\\a(\\q) \\times (\\q_2 - \\q_1)}{2\\|\\a(\\q)\\|} \\\\\n\\Gc{\\p_1}{A}{\\q} & = & \\frac{\\a(\\q) \\times (\\q_0 - \\q_2)}{2\\|\\a(\\q)\\|} \\nonumber \\\\\n\\Gc{\\p_2}{A}{\\q} & = & \\frac{\\a(\\q) \\times (\\q_1 - \\q_0)}{2\\|\\a(\\q)\\|} \\nonumber\n\\end{eqnarray}\n\nMore simply, using the face unit normal \\( \\n(\\p)  =  \\frac{\\a(\\p)}{\\| \\a(\\p) \\|} \\):\n\\begin{eqnarray}\n\\label{eq:area_gradient}\n\\Gc{\\p_0}{A}{\\q} & = & \\frac{\\n(\\q)}{2} \\times (\\q_2 - \\q_1) \\\\\n\\Gc{\\p_1}{A}{\\q} & = & \\frac{\\n(\\q)}{2} \\times (\\q_0 - \\q_2) \\nonumber \\\\\n\\Gc{\\p_2}{A}{\\q} & = & \\frac{\\n(\\q)}{2} \\times (\\q_1 - \\q_0) \\nonumber\n\\end{eqnarray}\n\n%------------------------------------------------------------------\n\n\\paragraph{Face unit normal vector}\n\\label{sec:facenormal}\n\nThe unit vector normal to a face whose vertices are at\n$\\p = (\\p_0,\\p_1,\\p_2)$ is just the area-weighted normal (see \\autoref{sec:areanormal})\nadjusted to length 1:\n\\begin{equation}\n\\n(\\p)  =  \\frac{\\a(\\p)}{\\| \\a(\\p) \\|}\n\\end{equation}\n\nFollowing equation \\ref{eq:normalized_function_derivative}, the derivative is:\n\n\\begin{eqnarray}\n\\label{eq:unit_normal_derivative}\n\\Dc{\\n}{\\p}{\\q}\n&  =\n& \\frac{\\| \\a(\\p) \\|^2 \\I_{\\Reals^3}  -  \\a(\\p) \\otimes \\a(\\p)}\n{\\| \\a(\\p) \\|^3}\n\\; \\Dc{\\a}{\\p}{\\q}\n \\\\\n& & \\nonumber\\\\\n&  =\n& \\frac{\\| \\a(\\p) \\|^2 \\I_{\\Reals^3}  -  \\a(\\p) \\otimes \\a(\\p)}\n{\\| \\a(\\p) \\|^3} \\ast\n\\nonumber \\\\\n&    &\n\\left[ \\left( \\q_0 \\times \\p_1 \\right) + \\left( \\p_2 \\times \\q_0 \\right)\n+\n\\left( \\q_1 
\\times \\p_2 \\right) + \\left( \\p_0 \\times \\q_1 \\right)\n+\n\\left( \\q_2 \\times \\p_0 \\right) + \\left( \\p_1 \\times \\q_2 \\right) \\right]\n\\nonumber \\\\\n& & \\nonumber\\\\\n&  =\n& \\frac{\\| \\a(\\p) \\|^2 \\I_{\\Reals^3}  -  \\a(\\p) \\otimes \\a(\\p)}{\\| \\a(\\p) \\|^3} \\ast\n\\nonumber \\\\\n&    &\n\\left[ \\left( (\\p_2 - \\p_1) \\times \\q_0 \\right)\n+\n\\left( (\\p_0 - \\p_2) \\times \\q_1 \\right)\n+\n\\left( (\\p_1 - \\p_0) \\times \\q_2 \\right) \\right]\n\\nonumber \\\\\n& & \\nonumber\\\\\n&  =\n& \\frac{\\I_{\\Reals^3}  -  \\n(\\p) \\otimes \\n(\\p)}{\\| \\a(\\p) \\|} \\ast \\; \\Dc{\\a}{\\p}{\\q}\n\\nonumber\n\\end{eqnarray}\n\n%------------------------------------------------------------------\n\n\\subsubsection{Aspect Ratio}\n\\label{sec:aspect_ratio}\n\nMinimizing a measure of face aspect ratio can help maintain\na well-conditioned mesh.\nMaximizing it may help in discovering collapsible edges.\n\n%------------------------------------------------------------------\n\n\\paragraph{Squared edge lengths over area}\n\\label{sec:Squared-edge-lengths-over-area}\n\nOne measure of the deviation of a face from equilaterality is:\n\\begin{equation}\n{\\mathrm L2A}(\\p_0,\\p_1,\\p_2)\n=\n\\frac{ \\| \\p_0 - \\p_1 \\|^2 + \\| \\p_1 - \\p_2 \\|^2 + \\| \\p_2 - \\p_0 \\|^2 }\n{A(\\p)}\n\\end{equation}\n\nUsing \\ref{eq:area_gradient}, it follows that the\npartial gradients of L2A are:\n\\begin{eqnarray}\n\\label{eq:L2A_gradient}\n\\Gc{\\p_0}{L2A}{\\q}\n& =\n&\n\\left(\n\\frac{2\\left[ \\left( \\q_0 - \\q_1 \\right) + \\left( \\q_0 - \\q_2 \\right) \\right]}\n{A(\\q)}\n\\right)\n\\\\\n& - &\n\\left(\n\\frac{ \\| \\q_0 - \\q_1 \\|^2 + \\| \\q_1 - \\q_2 \\|^2 + \\| \\q_2 - \\q_0 \\|^2 }\n{2 A^2(\\q)}\n\\left[ \\n(\\q) \\times (\\q_2 - \\q_1)\n\\right]\n\\right)\n\\nonumber \\\\\n\\Gc{\\p_1}{L2A}{\\q}\n& =\n&\n\\left(\n\\frac{2\\left[ \\left( \\q_1 - \\q_2 \\right) + \\left( \\q_1 - \\q_0 \\right) \\right]}\n{A(\\q)}\n\\right)\n\\nonumber\n\\\\\n& - &\n\\left(\n\\frac{ \\| \\q_0 - \\q_1 \\|^2 + \\| \\q_1 - \\q_2 \\|^2 + \\| \\q_2 - \\q_0 \\|^2 }\n{2 A^2(\\q)}\n\\left[ \\n(\\q) \\times (\\q_0 - \\q_2) \\right]\n\\right)\n\\nonumber\n\\\\\n\\Gc{\\p_2}{L2A}{\\q}\n& =\n&\n\\left(\n\\frac{2\\left[ \\left( \\q_2 - \\q_0 \\right) + \\left( \\q_2 - \\q_1 \\right) \\right]}\n{A(\\q)}\n\\right)\n\\nonumber\n\\\\\n& - &\n\\left(\n\\frac{ \\| \\q_0 - \\q_1 \\|^2 + \\| \\q_1 - \\q_2 \\|^2 + \\| \\q_2 - \\q_0 \\|^2 }\n{2 A^2(\\q)}\n\\left[ \\n(\\q) \\times (\\q_1 - \\q_0) \\right]\n\\right)\n\\nonumber\n\\end{eqnarray}\n", "meta": {"hexsha": "9c0891ec7bc8e885f7de169149f0c72bee18f95e", "size": 8492, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/old/fosm/faces.tex", "max_stars_repo_name": "palisades-lakes/les-elemens", "max_stars_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/old/fosm/faces.tex", "max_issues_repo_name": "palisades-lakes/les-elemens", "max_issues_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/old/fosm/faces.tex", "max_forks_repo_name": "palisades-lakes/les-elemens", "max_forks_repo_head_hexsha": 
"970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.0445859873, "max_line_length": 107, "alphanum_fraction": 0.5250824305, "num_tokens": 3671, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.763483758172699, "lm_q1q2_score": 0.5955119108340287}}
{"text": "\\section{Introduction}\r\n\r\nSome of the important questions in cellular biology have to do with how the sizes of different parts of the cell, the organelles, are regulated.\r\nHere we concentrate on modeling the growth of the Eukaryotic flagellum, an organelle that protrudes from the cell and is usually used for propulsion.\r\nThe choice of studying a flagellum has the advantage of restricting the problem to growth in only one dimension.\r\n~\\cite{bressloff}\r\n\r\n\\begin{figure}[!ht]\r\n\\centering\r\n\\parbox[t][2.3in][t]{4in}{\r\n\\hspace{1in}IFT Particles\\\\\r\n\\includegraphics[width=3in]{IFT_diagram.png}\r\n\\raisebox{1.75in}[1pt][1pt]{\\hspace{2.2in}Kinesin-II}\\\\\r\n\\raisebox{1.39in}[1pt][1pt]{\\hspace{3.04in}Microtubules}\\\\\r\n\\raisebox{1.5in}[1pt][1pt]{($-$)\\hspace{2.2in}($+$)}\\\\\r\n\\raisebox{1.05in}[1pt][1pt]{\\hspace{-0.15in}Dynein-1b}\\\\\r\n\\raisebox{0.83in}[1pt][1pt]{\\hspace{2.95in}Flagellar membrane}\r\n}\r\n\\caption{The interflagellar transport machinery.~\\cite{rosenbaum2}\r\nDuring intraflagellar transport,\r\nlinear arrays of IFT particles (yellow) are anterograde,\r\ntowards the ($+$) ends of the flagellar microtubules (blue)\r\nby kinesin-II (pink), and retrograde (towards the ($-$) ends)\r\nby cytoplasmic dynein 1b (green).\r\nThe IFT particles carry precursors that are necessary for the assembly of\r\nthe flagellar axoneme.}\r\n\\end{figure}\r\n\r\nEukaryotic flagella grow using the mechanism of intraflagellar transport. This process describes the movement of particles (transporters) carrying building blocks (precursor proteins) along the microtubule and depositing them at the flagellum tip.\r\nThe transporters use different motors for anterograde (towards tip) movement and retrograde (towards cell body) movement.\r\n~\\cite{rosenbaum}\r\n\r\nIn order to express the length of the flagellum mathematically we define the relevant variables as follows:\r\n$L$ represents the flagellar length, $a$ represents the effective size of a precursor protein transported by a transporter, $v_+$ represents a transporter's anterograde speed, $v_-$ represents a transporter's retrograde speed, and $M$ represents the number of transporters in the flagellum.\r\nWe also suppose that a flagellum degrades at a constant speed of $V$.\r\nUsing these parameters it is possible to arrive at an ordinary differential equation (ODE) for the flagellum length, at the microscopic level, however, a stochastic model may be more representative of the behavior of the motors and transporters.\r\nOur goal is to represent flagellar growth as a stochastic process, and compare and contrast the stochastic model with the deterministic model.\r\nWe are also interested in analyzing the behavior that cannot be predicted by the deterministic model, such as the variation of the flagellum length.\r\nThe deterministic ODE for the flagellum length is:\r\n\r\n\\begin{equation*}\r\n\\frac{dL}{dt} = \\frac{a \\bar{v} M}{2L} - V\r\n\\end{equation*} \r\n\r\nWhere $\\bar{v}$ is the harmonic mean of $v_+$ and $v_-$.\r\nThe assembly rate, which is the positive right-hand-side term, is arrived at by noting that a transporter adds a segment of length $a$ to the flagellum every time it completes one round trip, to the tip of the flagellum and back.\r\nThe time it takes to complete this trip is the time taken to reach the tip from the base, $L/v_+$, added to the time it takes to reach the base from the tip, $L/v_-$, adding up to $2L/\\bar{v}$.\r\nThe disassembly rate is given by $V$ and is 
length-independent.\r\n\r\nThe values we use for the parameters above in our simulations are taken from the paper by Paul C. Bressloff, \\emph{Stochastic model of intraflagellar transport }~\\cite{bressloff}: $v_+$, $v_-$, and $V$ are taken to be $2.0 \\mu$m$/$s, $3.5 \\mu$m$/$s, $0.01 \\mu$m$/$s respectively, $M$ is taken to be 10, and $a$ is taken to be $10$nm.\r\n", "meta": {"hexsha": "0e2fbcfa44bf98473990efe6dea6fb077be6ee2b", "size": 3756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Senior Thesis/Intro.tex", "max_stars_repo_name": "sverchkov/flagellar-growth-model", "max_stars_repo_head_hexsha": "b1886180d7be11932882b6a95d7a4e4d12c965a5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Senior Thesis/Intro.tex", "max_issues_repo_name": "sverchkov/flagellar-growth-model", "max_issues_repo_head_hexsha": "b1886180d7be11932882b6a95d7a4e4d12c965a5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Senior Thesis/Intro.tex", "max_forks_repo_name": "sverchkov/flagellar-growth-model", "max_forks_repo_head_hexsha": "b1886180d7be11932882b6a95d7a4e4d12c965a5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.6470588235, "max_line_length": 334, "alphanum_fraction": 0.7585197018, "num_tokens": 1040, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257126, "lm_q2_score": 0.7634837635542924, "lm_q1q2_score": 0.5955119072224203}}
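Since the deterministic model above is fully specified by a handful of parameters, it is easy to integrate numerically. The following is a minimal sketch (ours, not from the source text) that advances the length ODE with forward Euler, using the Bressloff parameter values just quoted; it also prints the stable fixed point $L^* = a\bar{v}M/(2V)$ for comparison.

\begin{verbatim}
# Units: micrometers and seconds; a = 10 nm = 0.01 um.
v_plus, v_minus, V = 2.0, 3.5, 0.01                  # um/s
M, a = 10, 0.01                                      # transporters, um
vbar = 2.0 * v_plus * v_minus / (v_plus + v_minus)   # harmonic mean

L, dt = 0.5, 1.0                                 # initial length (um), step (s)
for _ in range(200_000):                         # ~2.3 days of growth
    L += dt * (a * vbar * M / (2.0 * L) - V)     # dL/dt = a*vbar*M/(2L) - V

L_star = a * vbar * M / (2.0 * V)                # balance point of the ODE
print(L, L_star)                                 # both ~12.7 um
\end{verbatim}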
{"text": "\\documentclass{beamer}\n\\usepackage[utf8]{inputenc}\n% \\usetheme{Warsaw}  %% Themenwahl\n\n\\input{../../shared_slides.tex}\n\n\\title{Minimum enclosing ball problem}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\\frame{\\tableofcontents}\n\n\\section{Intro}\n\n\\begin{frame}\n  \\frametitle{Intro} %%Folientitel\n  \\begin{definition} %%Definition\n    Given a set of vectors $A = \\{a_1, \\dots, a_m\\}\\in \\R^d$ we want to find the smallest ball containing all points of $A$.\n    % A \\subseteq B_{c,\\rho} := \\{x \\in R^d : \\Vert x - c \\Vert \\le \\rho \\}\n    I.e.\n    \\begin{align}\n      \\min_{c,\\rho}\\,  \\rho \\\\\n      s.t. \\Vert a_i -c  \\Vert \\le \\rho \\quad \\forall i=1, \\dots, m\n    \\end{align}\n  \\end{definition}\n  Used in clustering, Data classification, facility location, computer graphics.\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Rewriting the problem}\n  Square constraints for smoothness\n    \\begin{align}\n      \\min_{c,\\rho} \\, \\rho \\\\\n      s.t. \\Vert a_i \\Vert^2 - 2 a_i^T c + c^T c  \\le \\rho \\quad \\forall i=1, \\dots, m\n    \\end{align}\n\n    Build Lagrangian Dual:\n    \\begin{equation}\n      L(c, \\rho, u) = \\rho + \\sum_{i=1}^m u_i *  (\\Vert a_i -c  \\Vert^2- \\rho^2)\n    \\end{equation}\n    and the dual function:\n    \\begin{equation}\n      \\Phi(u) = \\inf_{c, \\rho} L(c, \\rho, u) = \\sum_{i=1}^m \\Vert a_i \\Vert^2u_i - \\sum_{i=1}^{m} u_i a_i^T c\n    \\end{equation}\n    if $\\sum_{i=1}^{m} u_i = 1$ with $u_i \\ge 0$. Yields the unit simplex.\n\n    sparsity (dual is typically sparse because the dual solution is combination of only a few points)\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Solving via Frank-Wolfe}\n\n\\end{frame}\n\\end{document}\n", "meta": {"hexsha": "5962d3f87e7ba3fcb5205f43cc1bdfa0cb6ee700", "size": 1621, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/Minimum-enclosing-ball-problem/Minimum-enclosing-ball-problem.tex", "max_stars_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_stars_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-10-03T14:40:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:34:36.000Z", "max_issues_repo_path": "slides/Minimum-enclosing-ball-problem/Minimum-enclosing-ball-problem.tex", "max_issues_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_issues_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-10-21T13:02:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-06T19:50:32.000Z", "max_forks_repo_path": "slides/Minimum-enclosing-ball-problem/Minimum-enclosing-ball-problem.tex", "max_forks_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_forks_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-10-05T21:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T15:38:30.000Z", "avg_line_length": 28.9464285714, "max_line_length": 124, "alphanum_fraction": 0.6304750154, "num_tokens": 589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5955119066364241}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{Impulse and Momentum}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Preliminary}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nLinear momentum (or just \\textbf{momentum}) is a physical quantity that incorporates velocity and mass:\n\\begin{equation}\n    p = m v\n\\end{equation}\nDue to the second law of motion, a net \\textbf{force} causes an acceleration (a change in the velocity of an object). This in turn leads to a change in the momentum of an object. The precise relationship between momentum and forces includes a quantity called \\textbf{impulse}.\n\nFormally, impulse is defined as the area under the curve in a \\textbf{force versus time} graph. In principle, we need calculus to find the area under a curve. In practice we use a simple approximation: we approximate the curve as a \\textbf{flat line}. The area under a flat line has the same shape as a rectangle. The area of a rectangle is the length multiplied by the width. Which flat line to use? Since force is in the vertical axis, a flat horizontal line corresponds to a fixed value of force along time. An appropriate choice for the fixed value is the \\textbf{time-averaged force} $\\bar{F}$. This would correspond to the width of the rectangle. The length of the rectangle is the amount of time $\\Delta t$ that the force acts. The ``area'' is thus given by\n\\begin{equation}\n    \\text{``area'' } = \\bar{F} \\times \\Delta t = \\text{ impulse}\n\\end{equation}\nImpulse has units of force multiplying time. If force is in newtons (N) and time in seconds (s), then impulse has units of N s. However, the newton is equivalent to\n\\begin{equation}\n    1 \\text{ N} = 1 \\text{ kg m/s}^{2}\n\\end{equation}\nThus, the N s is equivalent to\n\\begin{equation}\n    1 \\text{ N s} = 1 \\text{ kg m/s}\n\\end{equation}\nIf you look carefully, these are the same units of momentum. This does not mean that impulse is the same as momentum. Indeed, impulse can also be defined as the \\textbf{change in momentum}:\n\\begin{equation}\n    \\text{impulse } = p_{2} - p_{1}\n\\end{equation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Experiment}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nIn order to verify the relationship between momentum, force, and impulse, we need to use a mechanical system where momentum and force change with time. A cart, moving along a track and colliding with a force sensor is perfect for this. Different attachment on the force sensor allow us to study different kinds of collisions. We also use a photogate to measure the speed of the cart before and after the collision. 
In this way, we collect data on force and momentum, and the impulse can be determined in two different ways.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Analysis}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nWe are going to determine the amount of impulse in two different ways.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Impulse from Change in Momentum}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe photogate measures the velocity of the object before the collision ($v_{1}$) and after the collision ($v_{2}$). Strictly speaking, \\textbf{one of the velocity values should be negative} because there is \\textbf{a change in the direction} of motion. We are going to multiply the value for $v_{2}$ by $-1$ in order to \\textbf{make it negative}.\n\nWith the values of velocity and the mass of the object, you can find the momentum of the object before the collision,\n\\begin{equation}\n    p_{1} = m v_{1}\n\\end{equation}\nand the momentum after the collision,\n\\begin{equation}\n    p_{2} = m v_{2}\n\\end{equation}\nThe change in momentum is\n\\begin{equation}\n    \\Delta p = p_{2} - p_{1}\n\\end{equation}\nThis is the first estimate of the amount of impulse during the collision.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Impulse from Force Acting Over Time}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe force sensor measures the amount of force on the sensor during the collision. First you need to determine when the collision is taking place. This involves making a force versus time graph and finding the time $t_{1}$ when the collision begins and the time $t_{2}$ when it ends. With these times you can compute the amount of time that the collision lasts:\n\\begin{equation}\n    \\Delta t = t_{2} - t_{1}\n\\end{equation}\nThen you delete all the force values before and after the collision in the force column. Finally, you can compute the average force during this time period. This gives you the value of $\\bar{F}$. With $\\bar{F}$ and $\\Delta t$ you can compute the impulse:\n\\begin{equation}\n    \\text{impulse } = \\bar{F} \\times \\Delta t\n\\end{equation}\nThis is the second estimate of the amount of impulse during the collision.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{My Data}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThere are two experiments: elastic and inelastic collisions. The difference between these experiments is that in the inelastic case, the amount of force during the collision is larger, and the velocity (and momentum) after the inelastic collision is zero. Besides that, the analysis is almost identical.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Elastic Collision}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nIn my elastic collision data you will find 12 runs (3 for each amount of extra mass). I analyzed runs 1, 4, 7, and 10 (corresponding to no extra mass, 100 g, 200 g, and 300 g). Here are some of the steps I followed to find my results, which are in table \\ref{table.08.results.elastic}.\n\nThe two values for the velocity are scattered along the velocity column. 
With the velocity values you can compute the momentum values, and thus the change in momentum.\n\nThe force versus time graph has to be restricted to the region where the collision is happening. This region consists of the deep well in the middle of the graph. There seem to be small oscillations after the deep well, but these are due to the fact that after the cart leaves the spring, the spring vibrates due to its elasticity. These oscillations are not part of the collision.\n\nIn order to quantify the agreement between the two values of impulse found, you can compute a modified version of the percent difference. If $I_{1}$ corresponds to the value of impulse determined from the change in momentum, and $I_{2}$ corresponds to the value of impulse determined from the average force and the length of the time interval, then the percent difference is given by\n\\begin{equation}\n    \\text{percent difference } = 200 \\times \\left\\vert \\frac{I_{1} - I_{2}}{I_{1} + I_{2}} \\right\\vert\n\\end{equation}\n(The long vertical lines mean taking the absolute value and thus ignoring the sign of the fraction inside). In my case, the percent difference was always below 2\\%.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{table}\n    \\centering\n    \\begin{tabular}{|r|r|r|r|r|}\\hline\n        Extra mass & $\\Delta p$ & Average $\\bar{F}$ & $\\Delta t$ & Impulse \\\\ \\hline\n        0 kg & -0.339 kg m/s & -0.804 N & 0.430 s & -0.346 kg m/s \\\\\n        0.1 kg & -0.357 kg m/s & -0.764 N & 0.476 s & -0.364 kg m/s \\\\\n        0.2 kg & -0.528 kg m/s & -1.079 N & 0.496 s & -0.535 kg m/s \\\\\n        0.3 kg & -0.570 kg m/s & -1.096 N & 0.528 s & -0.579 kg m/s \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Summary of results for the elastic collision}\n    \\label{table.08.results.elastic}\n\\end{table}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Inelastic Collision}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nIn the inelastic collisions, both the cart and the force sensor had a sticky clay attachment that enabled the cart to stick to the force sensor during the collision. Everyone has the same data, so I will not spoil the surprise. But the percent differences for the inelastic collision came out to be very small (see table \\ref{table.08.results.inelastic}).\n\nThe velocity column for each inelastic collision has a single velocity measurement. This value corresponds to $v_{1}$. The velocity after the inelastic collision is zero.\n\nThere is a very important difference between the inelastic collisions and the elastic collisions: In the force versus time graph, after the initial deep well (which is deeper in the inelastic case than in the elastic case), there are peaks and wells that are significant in magnitude and cannot be regarded as noise. For example, the significant region in the force versus time graph for run 1 is in figure \\ref{figure.08.run.1}. 
Beyond this region in time, there are further oscillations, but they are not large enough to be significant and are thus regarded as noise.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{table}\n\t\\centering\n    \\begin{tabular}{|r|r|}\\hline\n        Extra mass & Percent difference \\\\ \\hline\n        0 kg & 0.355\\% \\\\\n        0.1 kg & 0.554\\% \\\\\n        0.2 kg & 0.408\\% \\\\\n        0.3 kg & 0.146\\% \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Summary of results for the inelastic collision}\n    \\label{table.08.results.inelastic}\n\\end{table}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.71]{image/08-impulse/run1.png}\n    \\caption{}\n    \\label{figure.08.run.1}\n\\end{figure}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Your Data}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nYou collected data for elastic collisions. Everyone is going to use the same data for the inelastic case.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Your Lab Report}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nIn your lab report you should include:\n\\begin{enumerate}\n    \\item A table like table \\ref{table.08.results.elastic} for both the elastic and the inelastic collisions. Use these tables to answer the questions below.\n    \\item A table like table \\ref{table.08.results.inelastic} for both the elastic and the inelastic collisions.\n    \\item How does the average force during the elastic collisions compare to the average force during the inelastic collision?\n    \\item How does the collision duration $\\Delta t$ for the elastic collisions compare to the inelastic collisions?\n\\end{enumerate}", "meta": {"hexsha": "3f9cf174de1d043fe54c3dc973e19ec792ef45dc", "size": 11086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/08-impulse.tex", "max_stars_repo_name": "meridethfrey/phys-207L", "max_stars_repo_head_hexsha": "372e7c08b2782171fd56a5204dda7cbdb2d066b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/08-impulse.tex", "max_issues_repo_name": "meridethfrey/phys-207L", "max_issues_repo_head_hexsha": "372e7c08b2782171fd56a5204dda7cbdb2d066b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/08-impulse.tex", "max_forks_repo_name": "meridethfrey/phys-207L", "max_forks_repo_head_hexsha": "372e7c08b2782171fd56a5204dda7cbdb2d066b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.0704225352, "max_line_length": 764, "alphanum_fraction": 0.6141078838, "num_tokens": 2419, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911056, "lm_q2_score": 0.7799928951399098, "lm_q1q2_score": 0.5955119027318176}}
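To make the two estimates concrete, here is a tiny sketch of the arithmetic described above (illustrative only: the mass and the two photogate speeds are assumed values chosen to land near row 1 of table \ref{table.08.results.elastic}; they are not lab data).

\begin{verbatim}
m = 0.5                      # cart mass in kg (assumed)
v1 = 0.345                   # speed before the collision, m/s (assumed)
v2 = -0.337                  # speed after, multiplied by -1 by hand

p1, p2 = m * v1, m * v2
I1 = p2 - p1                 # estimate 1: change in momentum

F_bar, dt = -0.804, 0.430    # average force (N), collision duration (s)
I2 = F_bar * dt              # estimate 2: average force times duration

percent_difference = 200 * abs((I1 - I2) / (I1 + I2))
print(I1, I2, percent_difference)   # ~ -0.341, -0.346, ~1.4%
\end{verbatim}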
{"text": "% !TEX root = ../bachlor-arbeit.tex\nTo implement the desired algorithm we need to be able to calculate the optical behavior of stacked metasurfaces. The mathematical framework we will use is called scattering matrix calculus and this section will give some insight into its physical origin and how to use it. We will start with Maxwell's equations in matter.\n\\paragraph{Maxwell Equations}~\\\\\n\n\\begin{tabular*}{\\textwidth}{ll}\n\\begin{minipeqn}\n    \\curl{\\vb{E}(\\vb{r}, t)} = - \\pdv{t} \\vb{B}(\\vb{r}, t)\n\\end{minipeqn}&\n\\begin{minipeqn}[c]\n    \\div \\vb{D}(\\vb{r}, t) = \\rho_\\s{ext}(\\vb r, t)\n\\end{minipeqn}\\\\\n\\begin{minipeqn}\n    \\curl \\vb H(\\vb r, t) = \\vb j(\\vb r, t) + \\pdv{t} \\vb D(\\vb r, t)\n\\end{minipeqn}&\n\\begin{minipeqn}[c]\n    \\div \\vb B(\\vb r, t) = 0\n\\end{minipeqn}\n\\end{tabular*}\n\\\\\n\\\\\n\n\nThe four involved fields are:\n$\\vb E$ electric field, $\\vb B$ magnetic flux density, $\\vb D$ electric flux density and $\\vb H$ magnetic field and the sources are the external charges $\\rho_\\s{ext}$ and the macroscopic currents $\\vb j$. All the material properties are captured by the $\\vb D$ and $\\vb H$ fields which are defined as\n\n\n\\begin{equation}\\label{eq:bg:D}\n\\begin{aligned}\n    \\vb D(\\vb r, t) &= \\varepsilon_0 \\vb E (\\vb r, t) + \\vb P (\\vb r, t) \n    \\qq{and}\\\\\n    \\vb H(\\vb r, t) &= \\frac{1}{\\mu_0} \\qty[\\vb B(\\vb r, t) - \\vb M(\\vb r, t)],\n\\end{aligned}\n\\end{equation}\n\n\nwhere $\\vb P$ is the dielectric polarization and $\\vb M$ is the magnetic polarization. One can read equation \\eqref{eq:bg:D} in the following way:\nwhen the electric field $\\vb E$ interacts with matter, it exerts a force on all its charges and displaces them by a small amount. This separation of charges results in a counter field $\\vb P$ and the total field $\\vb D$ is now a superposition of $\\vb E$ and $\\vb P$.\nThis set of equations describes the whole electromagnetic spectrum. However, in this work we are only interested in visible (VIS) and near infrared (NIR) light where we can make some simplifications. Generally in optics, materials are non-magnetizable so $\\vb M = 0$ and there are no free charges $\\rho_\\s{ext}$ = 0. Inserting these assumptions into maxwell's equation gives\n\\\\\n\n\n\\noindent\n\\begin{tabular*}{\\textwidth}{ll}\n\\begin{minipeqn}\\label{eq:bg:M1}\n    \\curl{\\vb{E}(\\vb{r}, t)} = - \\mu_0 \\pdv t \\vb H(\\vb r, t)\n\\end{minipeqn}&\n\\begin{minipeqn}[c]\\label{eq:bg:M2}\n    \\varepsilon_0 \\div \\vb{E}(\\vb{r}, t) = - \\div \\vb P(\\vb r, t)\n\\end{minipeqn}\\\\\n\\begin{minipeqn}\\label{eq:bg:M3}\n    \\curl \\vb H(\\vb r, t) = \\vb j(\\vb r, t) + \\pdv{t} \\vb P(\\vb r, t)\n    + \\varepsilon_0 \\pdv{t} \\vb E(\\vb r, t)\n\\end{minipeqn}&\n\\begin{minipeqn}[c]\n    \\div \\vb H(\\vb r, t) = 0\n\\end{minipeqn}\n\\end{tabular*}\n\n\n\\paragraph{Light in Vacuum}~\\\\\nNow we can derive the wave equation by applying $\\curl$ to \\eqref{eq:bg:M1} which yields\n\n\n\\begin{equation}\n\\hspace{3cm}\n\\begin{aligned}\n    \\phantom{\\Longleftrightarrow} \\quad\n    \\curl \\bigg [\\curl \\vb E \\bigg ] &= \\curl[- \\mu_0 \\pdv t \\vb H]\n    \\\\\n    \\Longleftrightarrow \\quad\n    \\grad(\\div \\vb E) - \\Delta \\vb E &=\n    - \\mu_0 \\pdv t \\curl \\vb H\n    \\qquad \\bigg | \\qq{subs. 
\\eqref{eq:bg:M3} and \\eqref{eq:bg:M2}}\n    \\\\\n    \\Longleftrightarrow \\quad \\ \\\n    \\frac{1}{c^2} \\pdv[2] t \\vb E - \\Delta \\vb E\n    &= -\\mu_0 \\pdv t \\vb j - \\mu_0 \\pdv[2] t \\vb P +\n    \\frac{1}{\\varepsilon_0} \\grad(\\div \\vb P)\n\\end{aligned}\n\\end{equation}\n\nIn vacuum ($\\vb P = 0$ and $\\vb j = 0$), the right side of this equation vanishes and we are left with\\\\\n$\\frac{1}{c^2} \\pdv[2] t \\vb E - \\Delta \\vb E = 0$ which is solved by the plane wave $\\vb E = \\vb E_0 e^{i(\\vb k \\vb r - \\omega t)}$ where $\\frac{\\omega}{k} = c$. This describes the propagation of light through empty space: a sinusoidal oscillation in time and space along the $\\vb k$ direction where $\\vb E, \\, \\vb B$ and $\\vb k$ are all perpendicular to each other.\n\n\\paragraph{Light in homogeneous isotropic materials}~\\\\\n\\label{par:light_in_materials}The next question we can answer is how light propagates through a homogeneous and isotropic material. For us, the dielectric polarization is some linear function of the electric field so $\\vb P(\\vb r, t) = \\hat{\\chi}(\\omega, \\vb r) \\vb E(\\vb r, t)$. An isotropic material behaves the same for all orientations of $\\vb E$ therefor $\\hat{\\chi}(\\omega, \\vb r)$ becomes a scalar function $\\chi(\\omega, \\vb r)$. If the material is additionally homogeneous, that is the same everywhere independent of $\\vb r$, then $\\grad \\chi(\\omega, \\vb r) = 0$. With equation \\eqref{eq:bg:M2} this gives us $\\div \\vb P = 0$\nand the wave equation simplifies to\n\\begin{equation}\n\\begin{aligned}\n    &\\frac{\\varepsilon}{c^2} \\pdv[2] t \\vb E - \\Delta \\vb E = 0,\\\\\n    \\qq*{where} &\\varepsilon \\, \\vb E := \\qty(1 + \\chi) \\vb E = \\vb E + \\vb P.\n\\end{aligned}\n\\end{equation}\n\nThis implies that light behaves in these materials exactly as it would in vacuum, we only have to account for a decreased speed of light\n$c = {c_0} / {\\sqrt{\\varepsilon}} =: {c_0} / {n}$\nwith the refractive index $n$.\nThis is equivalent to a decreased wavelength\n$\\lambda =: {\\lambda_0} / {n}$\nor an increased wave number\n$k = {2 \\pi} / {\\lambda} = n \\, k_0$,\nwhere $c_0$, $\\lambda_0$ and $k_0$ are the quantities in vacuum.\nThe wave equation is also solved by a complex valued $n := \\eta + i \\kappa$ which describes the behavior of the exponentially decaying field commonly found in metals \n\n\\begin{equation}\n    \\vb E = \\vb E_0 e^{i(n \\vb k \\vb r - \\omega t)}\n    = \\vb E_0 \\\n    \\underbrace{e^{-\\kappa \\vb k \\vb r}}_\\s{\n    decay} \\\n    \\underbrace{e^{i(\\eta \\vb k \\vb r - \\omega t)}}_\\s{\n    oscillation}\n\\end{equation}\n\n\\paragraph{Interfaces}~\\\\\nThe metasurface stacks we want to understand are obviously not one homogeneous material. Rather, they contain many interfaces between different materials and we can again use the maxwell equations to predict how light will behave at such an interface.\n\\\\\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{bg_interface}\n    \\caption{An interface of two materials with different refractive indices $n_1$ and $n_2$ and a closed contour $C$ which is tangent to the interface between the points $a$ and $b$. 
A field vector within the depicted plane can be decomposed into a tangential and a orthogonal component to the interface, \n    $\\vb E^\\parallel$ and $\\vb E^\\bot$, respectively.}\n    \\label{fig:bg:interface}\n\\end{figure}\n\nLet us consider a closed contour $C$ around an interface as seen in Figure \\ref{fig:bg:interface} and integrate the first maxwell equation \\eqref{eq:bg:M1} over the surface $A$ enclosed by that contour\n\n\\begin{equation}\n\\begin{aligned}\n    \\int_A \\curl \\vb E(\\vb r, t) \\, \\dd \\vb A\n    &= - \\mu_0 \\pdv t \\int_A \\vb H(\\vb r, t) \\dd \\vb A\\\\\n    \\overset{\\s{Stokes}}{\\Longleftrightarrow} \\hspace{3em}\n    \\int_{C = \\partial A} \\vb E(\\vb r, t) \\dd \\vb r\n    &= - \\mu_0 \\pdv t \\int_A \\vb H(\\vb r, t) \\dd \\vb A\\\\\n\\end{aligned}\n\\end{equation}\n\nBut now we can bring the contour infinitesimally close to the interface and thus reduce the right hand side of the equation, the total magnetic flux through the surface, to zero. That leaves us with\n\n\\begin{equation}\n\\begin{aligned}\n    \\int_a^b \\vb E_1 \\, \\dd \\vb r \\ &+ \\int_b^a \\vb E_2 \\,  \\dd \\vb r = 0\\\\\n    \\Longleftrightarrow \\hspace{3em}\n    \\int_a^b \\vb E_1 \\, \\dd \\vb r &= \\int_a^b \\vb E_2 \\,  \\dd \\vb r \\\\\n\\end{aligned}\n\\end{equation}\n\nBecause $a$ and $b$ were chosen arbitrarily, that means that the transverse field components along the path need to be continuous so\n$\\vb E_1^{\\parallel} = \\vb E_2^{\\parallel}$.\nThe analog expression\n$\\vb H_1^{\\parallel} = \\vb H_2^{\\parallel}$\ncan be shown by starting with the third maxwell equation \\eqref{eq:bg:M3} instead. We can now use these continuity conditions to learn more about the behavior of light at these interfaces.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{bg_fresnel}\n    \\caption{Interface between two materials of different refractive indices $n_1$ and $n_2$. The incident light has a wave vector $\\vb k^\\s{i}$ and is partially transmitted and partially reflected. The fields are shown in transverse electric (TE) polarization where $\\vb H$ is in the $x$-$z$ plane and $\\vb E$ oscillates in the $y$ direction.\n    In this polarization state the $\\vb E$ field is fully tangential to the interface and therefor \n    $\\vb E = \\vb E^\\parallel$ and $\\vb E^\\bot = 0$.\n    For the $\\vb H$ field only the $x$ component is tangential to the interface and the $z$ component is orthogonal to it, so \n    $\\vb H_x = \\vb H^\\parallel$ and $\\vb H_z = \\vb H^\\bot$.}\n    \\label{fig:bg:fresnel}\n\\end{figure}\n\nLet us consider the same interface from before and have an incident field\n$\\vb E^\\s{i}$ interact with it. From figure \\ref{fig:bg:fresnel} we can see\n\n\\begin{equation} \\label{eq:bg:k}\n    \\vb k^\\s{i} = k_0 n_1\n    \\begin{pmatrix}\n        -\\sin \\varphi_\\s{i}\\\\ 0\\\\ -\\cos \\varphi_\\s{i}\n    \\end{pmatrix}\n    , \\\n    \\vb k^\\s{r} = k_0 n_1\n    \\begin{pmatrix}\n        \\phantom{-} \\sin \\varphi_\\s{r} \\\\ 0\\\\ -\\cos \\varphi_\\s{r}\n    \\end{pmatrix}\n    \\qq{and}\n    \\vb k^\\s{t} = k_0 n_2\n    \\begin{pmatrix}\n        \\sin \\varphi_\\s{t}\\\\ 0\\\\ \\cos \\varphi_\\s{t}\n    \\end{pmatrix}.\n\\end{equation} \n\\\\\n\nFor the derivation it is useful to decompose the fields into orthogonal polarization components. Here we are going to use the transversal electric (TE) and transversal magnetic (TM) polarizations. 
The TE component can be seen in figure \\ref{fig:bg:fresnel} and in TM polarization $\\vb E$ and $\\vb H$ fields are simply interchanged.\nIf we apply the continuity condition from before to the TE component, we get that $\\vb E_1 = \\vb E_2$ at $z = 0$ therefor\n\n\\begin{equation}\n\\begin{aligned}\n    &\\vb E^\\s{i} + \\vb E^\\s{r} = \\vb E^\\s{t} \\\\\n    \\Longleftrightarrow \\quad\n    &E^\\s{i} e^{i(k^\\s{i}_x x)} \\, \\va e_y +\n    E^\\s{r} e^{i(k^\\s{r}_x x)} \\, \\va e_y = \n    E^\\s{t} e^{i(k^\\s{t}_x x)} \\, \\va e_y,\n\\end{aligned}\n\\end{equation}\n\nwhere $E^\\s i, E^\\s r, E^\\s t \\in \\mathds R$ are the measurable real field magnitudes. This is only possible for all $x$ if\n\n\\begin{equation} \\label{eq:bg:x}\n    E^\\s{i} + E^\\s{r} = E^\\s{t} \\qq{and}\n    k^\\s{i}_x = k^\\s{r}_x = k^\\s{t}_x.\n\\end{equation}\n\nTwo basic laws of optics lie in this relation: The law of reflection\n\n\\begin{equation}\n    k^\\s{i}_x = k^\\s{r}_x\n    \\quad \\Rightarrow \\quad\n    k_0 n_1 \\sin \\varphi_i =  k_0 n_1 \\sin \\varphi_r\n    \\quad \\Rightarrow \\quad\n    \\varphi_i = \\varphi_r := \\varphi_1\n\\end{equation}\n\nand the Snells law of refraction\n\n\\begin{equation}\n    k^\\s{i}_x = k^\\s{t}_x, \\\n    \\varphi_t := \\varphi_2\n    \\quad \\Rightarrow \\quad\n    k_0 n_1 \\sin \\varphi_1 =  k_0 n_2 \\sin \\varphi_2\n    \\quad \\Rightarrow \\quad\n    n_1 \\sin \\varphi_1 = n_2 \\sin \\varphi_2.\n\\end{equation}\n\n\\paragraph{Fresnel Equations}~\\\\\nTo fully describe an interface we also need to know the transmitted and the reflected field amplitudes,  \n$t := E^t / E^i$ \nand \n$r := E^r / E^i$, \nrespectively.\nAgain using the TE polarization shown in figure \\ref{fig:bg:fresnel} we see\n\n\\begin{equation} \\label{eq:bg:H}\n    \\vb H^\\s{i} = H^\\s{i}\n    \\begin{pmatrix}\n        -\\cos \\varphi_\\s{i}\\\\ 0\\\\ -\\sin \\varphi_\\s{i}\n    \\end{pmatrix}\n    , \\\n    \\vb H^\\s{r} = H^\\s{r}\n    \\begin{pmatrix}\n        \\cos \\varphi_\\s{r} \\\\ 0\\\\ \\sin \\varphi_\\s{r}\n    \\end{pmatrix}\n    \\qq{and}\n    \\vb H^\\s{t} = H^\\s{t}\n    \\begin{pmatrix}\n        -\\cos \\varphi_\\s{t}\\\\ 0\\\\ \\sin \\varphi_\\s{t}\n    \\end{pmatrix}.\n\\end{equation}\n\nThe $H_x$ component is tangent to the interface, so it also needs to be continuous across the boundary\n\n\\begin{equation}\\label{eq:bg:Hx}\n\\begin{aligned}\n     H^\\s{i}_x +  H^\\s{r}_x &=  H^\\s{t}_x \\\\\n    \\Longleftrightarrow \\qquad\n    H^\\s i \\cos \\varphi_1 - H^\\s r \\cos \\varphi_1 &= H^\\s t \\cos \\varphi_2\n    \\qquad  \\qquad\n\\end{aligned}\n\\end{equation}\n\nIf we assume the described plane waves to be\n$\\vb E = \\vb E_0 e^{i(n \\vb k \\vb r - \\omega t)}$\nand \n$\\vb H = \\vb H_0 e^{i(n \\vb k \\vb r - \\omega t)}$,\nMaxwell's first equation \\eqref{eq:bg:M1} allows us to connect the magnitudes of $\\vb H$ and $\\vb E$ via the refractive index $H \\sim n \\, E$. This changes \\eqref{eq:bg:Hx} to\n\\begin{equation}\n    n_1 \\, E^\\s i \\cos \\varphi_1 - n_1 \\, E^\\s r \\cos \\varphi_1 =\n    n_2 \\, E^\\s t \\cos \\varphi_2.\n\\end{equation}\n\nSubstituting $E^\\s r$ or $E^\\s i$ with \\eqref{eq:bg:x} and rearranging we get\n\n\\begin{equation}\n    t = \\frac{2 n_1 \\cos \\varphi_1}{n_1 \\cos \\varphi_1 + n_2 \\cos \\varphi_2}\n    \\qq{and}\n    r = \\frac{n_1 \\cos \\varphi_1 - n_2 \\cos \\varphi_2}{\n    n_1 \\cos \\varphi_1 + n_2 \\cos \\varphi_2}.\n\\end{equation}\n\nThese are called the Fresnel coefficients for the TE component. 
The TM component can be treated analog which yields\n\n\\begin{equation}\n    t = \\frac{2 n_1 \\cos \\varphi_1}{n_2 \\cos \\varphi_1 + n_1 \\cos \\varphi_2}\n    \\qq{and}\n    r = \\frac{n_2 \\cos \\varphi_1 - n_1 \\cos \\varphi_2}{\n    n_2 \\cos \\varphi_1 + n_1 \\cos \\varphi_2}.\n\\end{equation}\n\nFor perpendicular incident light at $\\varphi = 90^\\circ$ these should describe the same situation as we can no longer differentiate between TE and TM components and indeed for this angle they are equivalent if we consider that the TE coefficients describe the electric field and the TM coefficients describe the magnetic field. The TE coefficients become\n\n\\begin{equation}\n    t = \\frac{2 n_1}{n_1 + n_2} \\qq{and} r = \\frac{n_1 - n_2}{n_1 + n_2}.\n\\end{equation}\n\n\\paragraph{Stacking}~\\\\\nWe can take this one step further by including interfaces which have incident light from both sides. Let $\\vb E_\\s{out}$ be the light coming out of the interface and $\\vb E_\\s{in}$ be the light going into the interface as seen in figure \\ref{fig:bg:both}. Using the factors from the Fresnel equations we get\n\n\\begin{equation}\n    E^b_\\s{out} = E^f_\\s{in} \\, t^f + E^b_\\s{in} \\, r^b\n    \\qq{and}\n    E^f_\\s{out} = E^b_\\s{in} \\, t^b + E^f_\\s{in} \\, r^f\n\\end{equation}\n\\vspace{0.5mm}\n\nand this can be expressed more concisely using matrices and vectors\n\n\\begin{equation}\n\\begin{pmatrix}\n    E^f_\\s{out} \\\\\n    E^b_\\s{out}\n\\end{pmatrix} =\n\\underbrace{\n\\begin{pmatrix}\n    t^f & r^b \\\\\n    r^f & t^b\n\\end{pmatrix}\n}_{\n =: \\hat{S}\n}\n\\begin{pmatrix}\n    E^f_\\s{in} \\\\\n    E^b_\\s{in}\n\\end{pmatrix}.\n\\end{equation}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.6\\linewidth]{bg_both_directions}\n    \\caption{Interface between two materials of different refractive indices $n_1$ and $n_2$ where incident light is coming from the front and the back.}\n    \\label{fig:bg:both}\n\\end{figure}\n\nWe call these matrices, that map field-in to field-out, scattering matrices or short $S$-matrices. They can be used to describe more than just interfaces. As shown in the paragraph \\hyperref[par:light_in_materials]{Light in materials} when light propagates through homogeneous isotropic material just the phase changes by a factor of $e^{i k_0 n d}$, where $d$ is the distance traveled and $\\vb k_0$ the wave vector in vacuum. We can express this propagation by allowing complex valued $t$ and $r$\n\n\\begin{equation}\n    \\hat S_{n, d} =\n    \\begin{pmatrix}\n        e^{i k_0 n d} & 0 \\\\\n        0 & e^{i k_0 n d}\n    \\end{pmatrix}\n\\end{equation}\n\nand analog the $S$-matrix for an interface from $n_1$ to $n_2$ is\n\n\\begin{equation}\n    \\hat S_{n_1, n_2} =\n    %increase matrix spacing for the fractions\n    \\begingroup\n    \\renewcommand*{\\arraystretch}{1.5}\n        \\begin{pmatrix}\n            \\frac{2 n_1}{n_1 + n_2} & \\frac{n_2 - n_1}{n_1 + n_2} \\\\\n            \\frac{n_1 - n_2}{n_1 + n_2} & \\frac{2 n_1}{n_1 + n_2}\n        \\end{pmatrix}\n    \\endgroup.\n\\end{equation}\n\\\\\n\n\nSay we have a stack of different homogeneous isotropic materials. 
We are now able to write down an $S$-matrix for every part of the stack, that is for every interface and every propagation between interfaces as seen in figure \\ref{fig:bg:star_prod}, but we cannot predict the behavior of the stack as a whole.\n\\\\\n\n\\indent\nThe combined $S$-matrix $\\hat S$ consisting of $\\hat S_1$ and $\\hat S_2$ cannot be obtained by simply using the matrix product. The result would be\n\n\\begin{equation}\n    \\hat S_1 \\, \\hat S_2 =\n    \\begin{pmatrix}\n        t^f_1 \\, t^f_2 + r^b_1 \\, r^f_2 & t^f_1 \\, r^b_2 + r^b_1 \\, t^b_2 \\\\\n        r^f_1 \\, t^f_2 + t^b_1 \\, r^f_2 & r^f_1 \\, r^b_2 + t^b_1 \\, t^b_2\n    \\end{pmatrix} \\neq\n    \\begin{pmatrix}\n        t^f & r^b \\\\\n        r^f & t^b\n    \\end{pmatrix} =\n    \\hat S\n\\end{equation}\n\nand this would imply that the transmission from the front $t^f$ increases when the internal reflections of the system $r^b_1$ and $r^f_2$ increase which is the opposite of the behavior we expect. In the 1960s Redheffer \\cite{Redheffer1960} developed a linear transformation called \\textit{star product} $\\star$ which yields the correct combined $S$-matrix\n\n\\begin{equation}\\label{eq:bg:star}\n    \\hat S_1 \\star \\hat S_2 :=\n    \\begin{pmatrix}\n        t^f_2 (1 - r^b_1 r^f_2)^{-1} t^f_1 &\n        r^b_2 + t^f_2 r^b_1 (1 - r^f_2 r^b_1)^{-1} t^b_2\\\\\n        r^f_1 + t^b_1 r^f_2 (1 - r^b_1 r^f_2)^{-1} t^f_1 &\n        t^b_1 (1 - r^f_2 r^b_1)^{-1} t^b_2\n    \\end{pmatrix}.\n\\end{equation}\n\\\\\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.6\\linewidth]{bg_starprod}\n    \\caption{A slab of one homogeneous isotropic material with thickness $d$ in air $(n_0)$. This setup is fully described by two interface $S$-matrices \n    $\\hat S_{n_0,n}, \\, \\hat S_{n,n_0}$\n    and one propagation $S$-matrices\n    $\\hat S_{n,d}$.}\n    \\label{fig:bg:star_prod}\n\\end{figure}\n\nFor this operation to produce physically valid results, certain conditions have to be met. We will discuss these in detail in the paragraph \\hyperref[par:conditions]{Conditions} of section \\ref{sec:SASA}.\nApplied to the example of figure \\ref{fig:bg:star_prod}, the star product gives\n\n\\begin{equation}\n    \\hat S = \\hat S_{n_0, n} \\star \\hat S_{n, d} \\star \\hat S_{n, n_0}.\n\\end{equation}\n\n\\paragraph{Polarization}~\\\\\nUp to this point the $S$-matrix is made of the four complex numbers $t^f$, $r^b$, $r^f$ and $t^b$. This means when a $S$-matrix is applied to the input vector containing the total incoming light  \n$\n\\begin{pmatrix}\n    E^f_\\s{in} \\\\\n    E^b_\\s{in}\n\\end{pmatrix}\n$,\n$E^f_\\s{in}$ and $E^b_\\s{in}$ are also restricted to the complex plane and can only describe linear polarized light in one direction. The last generalization we want to make is to extend the scattering matrix calculus to all polarizations. 
As shown in the paragraph \\hyperref[par:light_in_materials]{Light in materials} a planar light wave propagating along the $z$ axis through a homogeneous isotropic material can be described as\n\n\\begin{equation}\n   \\vb E =\n   \\begin{pmatrix}\n       E_x e^{i(kz - \\omega t + \\varphi_x)}\\\\\n       E_y e^{i(kz - \\omega t + \\varphi_y)}\\\\\n       0\\\\\n   \\end{pmatrix}\n   =\n   \\qty(E_x \\, e^{i \\varphi_x} \\, \\va{e_x} +\n        E_y \\, e^{i \\varphi_y} \\, \\va{e_y})\n       e^{i(kz - \\omega t)}.\n\\end{equation}\n\\\\\n\n\\noindent\nThe waves polarization is determined by the scaling factors of $\\va{e_x}$ and $\\va{e_y}$ and can be expressed as a \\textit{Jones Vector} $\\vb j \\in \\mathds{C}^2$\n\n\\begin{equation}\n   \\vb j = \\frac{1}{\\sqrt{E_x^2 + E_y^2}}\n   \\begin{pmatrix}\n       E_x\\\\\n       E_y \\, e^{i \\delta}\n   \\end{pmatrix},\n   \\qq{with}\n   \\delta := \\varphi_y - \\varphi_x.\n\\end{equation}\n\n\\noindent\nNow all linear operations on the polarization are matrices $\\hat{M} \\in \\mathds{C}^{2 \\times 2}$. This means all passive components have a corresponding matrix. Examples for components aligned with the $x$ axis are\n\n\n\\begin{equation}\n\\begin{split}\n   \\qq*{linear polarizer:} \\hat{M} &=\n   \\begin{pmatrix}\n       1 & 0\\\\\n       0 & 0\n   \\end{pmatrix},\\\\\n   \\qq*{$\\lambda / 4$ plate:} \\hat{M} &=\n   \\begin{pmatrix}\n       1 & 0\\\\\n       0 & i\n   \\end{pmatrix}\n   e^{-\\frac{i \\pi}{4}}\n   \\qq{and}\\\\\n   \\qq*{$\\lambda / 2$ plate:} \\hat{M} &=\n   \\begin{pmatrix}\n       -i & 0\\\\\n       0 & i\n   \\end{pmatrix}.\\\\\n\\end{split}\n\\end{equation}\n\nWe can now generalize the $S$-matrix calculus simply by allowing Jones matrices in place of the Fresnel coefficients: $t \\rightarrow \\hat T$ and $r \\rightarrow \\hat R$. 
The star product of two $S$-matrices becomes\n\n\\begin{equation}\\label{eq:bg:star}\n    \\hat S_1 \\star \\hat S_2 =\n    \\begin{pmatrix}\n        \\hat T^f_1 & \\hat R^b_1 \\\\\n        \\hat R^f_1 & \\hat T^b_1\n    \\end{pmatrix}\n    \\star\n    \\begin{pmatrix}\n        \\hat T^f_2 & \\hat R^b_2 \\\\\n        \\hat R^f_2 & \\hat T^b_2\n    \\end{pmatrix}\n    =\n    \\begin{pmatrix}\n        \\hat T^f_2 (\\mathds 1 - \\hat R^b_1 \\hat R^f_2)^{-1} \\hat T^f_1 &\n        \\hat R^b_2 + \\hat T^f_2 \\hat R^b_1 (\\mathds 1 - \\hat R^f_2 \\hat R^b_1)^{-1} \\hat T^b_2\\\\\n        \\hat R^f_1 + \\hat T^b_1 \\hat R^f_2 (\\mathds 1 - \\hat R^b_1 \\hat R^f_2)^{-1} \\hat T^f_1 &\n        \\hat T^b_1 (\\mathds 1 - \\hat R^f_2 \\hat R^b_1)^{-1} \\hat T^b_2.\n    \\end{pmatrix}\n\\end{equation}\n\nSo the generalized $S$-matrix is a $2 \\times 2$ block matrix of four Jones matrices and the input vector $\\quad $\n$\n\\begin{pmatrix}\n    \\vb E^f_\\s{in} \\\\\n    \\vb E^b_\\s{in}\n\\end{pmatrix}\n$\nis a $2 \\times 1$ block vector of two Jones vectors.\n", "meta": {"hexsha": "f12e49ff107a40798e9e97f99d8899454e7d806c", "size": 20999, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/background/s_mats.tex", "max_stars_repo_name": "TimLucaTuran/bachlor-arbeit", "max_stars_repo_head_hexsha": "f6c1eb502d7e99a4fdf2f3b0110677578f8eff95", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/background/s_mats.tex", "max_issues_repo_name": "TimLucaTuran/bachlor-arbeit", "max_issues_repo_head_hexsha": "f6c1eb502d7e99a4fdf2f3b0110677578f8eff95", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-17T15:04:05.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-17T15:04:05.000Z", "max_forks_repo_path": "tex/background/s_mats.tex", "max_forks_repo_name": "TimLucaTuran/bachlor-arbeit", "max_forks_repo_head_hexsha": "f6c1eb502d7e99a4fdf2f3b0110677578f8eff95", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7475149105, "max_line_length": 633, "alphanum_fraction": 0.6547930854, "num_tokens": 7375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.872347368040789, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.5954413894425645}}
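The whole calculus above is straightforward to code. The sketch below is our illustration, not the thesis implementation: it implements the scalar star product of the previous equations and the two elementary $S$-matrices, then composes them for the slab of figure \ref{fig:bg:star_prod}. The backward transmission uses $2n_2/(n_1+n_2)$, consistent with the interface matrix above. For a lossless slab the printed transmittance and reflectance sum to one, and with $nd = 2.5\,\lambda_0$ the slab is exactly transparent.

\begin{verbatim}
# Conventions as above: S = [[tf, rb], [rf, tb]], scalar (one polarization).
import numpy as np

def star(S1, S2):
    (tf1, rb1), (rf1, tb1) = S1
    (tf2, rb2), (rf2, tb2) = S2
    d = 1.0 / (1.0 - rb1 * rf2)          # internal multiple reflections
    e = 1.0 / (1.0 - rf2 * rb1)
    return [[tf2 * d * tf1,             rb2 + tf2 * rb1 * e * tb2],
            [rf1 + tb1 * rf2 * d * tf1, tb1 * e * tb2]]

def interface(n1, n2):
    # normal-incidence Fresnel coefficients for the field amplitudes
    return [[2 * n1 / (n1 + n2), (n2 - n1) / (n1 + n2)],
            [(n1 - n2) / (n1 + n2), 2 * n2 / (n1 + n2)]]

def propagation(n, d, k0):
    ph = np.exp(1j * k0 * n * d)         # phase accumulated over distance d
    return [[ph, 0.0], [0.0, ph]]

# Glass slab, n = 1.5, d = 1 um, in air, at lambda0 = 600 nm:
k0 = 2 * np.pi / 0.6                     # rad/um
S = star(star(interface(1.0, 1.5), propagation(1.5, 1.0, k0)),
         interface(1.5, 1.0))
T, R = abs(S[0][0])**2, abs(S[1][0])**2
print(T, R, T + R)                       # ~1.0, ~0.0, 1.0
\end{verbatim}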
{"text": "For the moment, assume that we are running in comoving coordinates, \nwith dark matter particles only (no hydro) and that the particles all exist at level 0.   These assumptions are\nencapsulated in the following lines in the inputs file: \\\\\n\n\\noindent {\\bf nyx.use\\_comoving = t} \\\\\n\\noindent {\\bf nyx.do\\_dm\\_particles = 1} \\\\\n\\noindent {\\bf amr.max\\_level =  0} \\\\\n\\noindent {\\bf nyx.do\\_hydro = 0} \\\\\n\\noindent {\\bf nyx.do\\_react = 0} \\\\\n\\noindent {\\bf nyx.do\\_grav =  1} \n\n\\section{Equations}\n\nIf we define ${\\mathbf x}_i$ and ${\\bf u}_i$ as the location and velocity of particle $i$, respectively, then we wish\nto solve\n\\begin{eqnarray}\n\\frac{d {\\mathbf x}_i}{d t} &=& \\frac{1}{a} {\\mathbf u}_i \\\\\n\\frac{d (a {\\mathbf u}_i) }{d t} &=& {\\mathbf g}_i\n\\end{eqnarray}\nwhere ${\\mathbf g}_i$ is the gravitational force evaluated at the location of particle $i$, i.e., \n${\\mathbf g}_i = {\\mathbf g}({\\mathbf x}_i,t).$\n\n\\section{Initializing the Particles}\n\n\\noindent There are several different ways in which particles can currently be initialized:\n\n\\subsection{Read from an ASCII file}\n\nTo enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_init\\_type = AsciiFile} \\\\\n\\noindent {\\bf nyx.ascii\\_particle\\_file =}{\\em particle\\_file}\n\nHere {\\em particle\\_file} is the user-specified name of the file.  The first line in this file is\nassumed to contain the number of particles.  Each line after that contains  \\\\\n\nx y z mass xdot ydot zdot \\\\\n\nNote that the variable that we call the particle velocity, ${\\mathbf u} = a {\\bf \\dot{x}}$, \nso we must multiply ${\\bf \\dot{x}}$, by $a$ when we initialize the particles.\n\n\\subsection{Read from a binary file}\n\nTo enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_init\\_type = BinaryFile} \\\\\n\\noindent {\\bf nyx.binary\\_particle\\_file =}{\\em particle\\_file} \\\\\n\nAs with the ASCII read, the first line in this file is\nassumed to contain the number of particles.  Each line after that contains  \\\\\n\nx y z mass xdot ydot zdot \\\\\n\nNote that the variable that we call the particle velocity, ${\\mathbf u} = a {\\bf \\dot{x}}$, \nso we must multiply ${\\bf \\dot{x}}$, by $a$ when we initialize the particles.\n\n\\subsection{Read from a binary \"meta\" file}\n\nThis option allows you to read particles from a series of files rather than \njust a single file.   To enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_init\\_type = BinaryMetaFile} \\\\\n\\noindent {\\bf nyx.binary\\_particle\\_file =}{\\em particle\\_file} \\\\\n\nIn this case the {\\em particle\\_file} you specify is an ASCII file specifying a\nlist of file names with full paths.   Each of the files in this list is assumed\nto be binary and is read sequentially (individual files are read in parallel) in \nthe order listed.\n\n\\subsection{Reading SPH particles}\n\nFor some applications it is useful to initialize the grid data with SPH-type\nparticles.   To enable this option, you must set \\\\\n\n\\noindent {\\bf nyx.do\\_santa\\_barbara = 1} \\\\\n\\noindent {\\bf nyx.init\\_with\\_sph\\_particles = 1} \\\\\n\n\\noindent The SPH-type particles can then be read in by setting \\\\\n\n\\noindent {\\bf nyx.sph\\_particle\\_file =}{\\em sph\\_particle\\_file} \\\\\n\n\\noindent where {\\em sph\\_particle\\_file} is the user-specified name of the file\ncontaining the SPH particles.  
The type of {\\em sph\\_particle\\_file} \nmust be the same (ASCII, Binary or BinaryMeta) as the dark matter particle \nfile as specified by   \\\\\n\n\\noindent {\\bf nyx.particle\\_init\\_type = } \\\\\n\n\\noindent The SPH particles will be discarded by the code once the grid data has been initialized.\n\n\\subsection{Random placement}\n\nTo enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_init\\_type = Random} \\\\\n\n\\noindent There are then a number of parameters to set, for example: \\\\\n\n\\noindent {\\bf nyx.particle\\_initrandom\\_count = 100000}\n\n\\noindent {\\bf nyx.particle\\_initrandom\\_mass  = 1}\n\n\\noindent {\\bf nyx.particle\\_initrandom\\_iseed = 15}\n\n\\subsection{Cosmological}\n\nUsing cosmological initial conditions is a three-step process:\n\\begin{enumerate}\n\t\\item Generating a transfer function (e.g. with \\texttt{camb})\n\t\\item Generating an initial displacement field (with \\texttt{nyx-ic})\n\t\\item Starting Nyx\n\\end{enumerate}\n\nIn the following we look at each step more closely.\n\n\\subsubsection{Generating a transfer function}\n\n\tThe transfer function is used in \\texttt{nyx-ic} to generate the power\n\tspectrum. The usual way is to use \\texttt{camb}\\footnote{See \\url{http://camb.info/}}\n\tto calculate it for the desired universe. A sample \\texttt{camb.ini} is\n\tprovided with \\texttt{nyx-ic}. The important options are: \n\t\\begin{itemize}\n\t\t\\item {\\bf transfer\\_redshift(1) = 50}\n\t\t\\item {\\bf transfer\\_matterpower(1) = tf} \t\t\n\t\\end{itemize}\n\n\twhich determine the initial time for the simulation. You should make sure\n\tthat you catch all necessary wave numbers for the considered box length and\n\tresolution.\n\t\n\tFrom the \\texttt{camb} output you have to note the values of \\texttt{sigma\\_8}\n\tat redshift zero and at the initial redshift. We need these to compute\n\tthe right normalization.\n\t\n\\subsubsection{Setting up the initial displacements}\n\t\n\tWe calculate the initial displacements with a stand-alone program called\n\t\\texttt{nyx-ic}. This takes a transfer function and some cosmological parameters\n\tas arguments and outputs an \"init\" directory which contains the initial\n\tdisplacements for every grid point in an AMReX MultiFab. Furthermore, the MultiFab \n\tcontains a fourth field containing the density contrast as initial condition\n\tfor the baryonic matter. \\\\\n\t\\texttt{nyx-ic} is started with an ``inputs''\n\tfile similar to the one from Nyx. A sample one is provided. 
The options are\n\\begin{verbatim}\n#Omega_{Matter}\ncosmo.omegam = 0.272\n#Omega_{Lambda}\ncosmo.omegax = 0.728\n\n#equation of state parameter w_{effective}\ncosmo.weff = -0.980\n\n#Omega_{baryon}*Hubble^2 \ncosmo.ombh2 = 0.0226\n#Hubble/100km/s\ncosmo.hubble = 0.704\n#scalar spectral index\ncosmo.enn = 0.963\n# initial z\ncosmo.z_init = 50\n\n#sidelength of the box (in Mpc)\ncosmo.boxside = 90.14\n#seed of the rng\ncosmo.isd = 100\n#resolution of the box\ncosmo.gridpoints = 256\n#the output file name\ncosmo.initDirName = init\n\n#choose the source of the transferfunction\ncosmo.transferfunction = CAMB\n\n#some tabulated transferfunction generated with camb (compare camb-ini-file)\ncosmo.tabulatedTk = tf\n# sigma8 for the input tf at z=0 and initial z (to calc the growthfactor)\ncosmo.init_sigma8_0 = 0.7891368\ncosmo.init_sigma8_init = 2.0463364E-02\n\\end{verbatim} \n\n\tThe code solves the equation\n\t\\begin{align}\n\tP(k,a) = 2\\pi^2\\delta^2_H \\frac{k^n}{H_0^{n+3}}T^2(k)\\left( \\frac{D(a)}{D(a=1)} \\right)^2\n\t\\end{align}\n\tto calculate $P$ and, from it, Gaussian-distributed density perturbations\n\t$\\delta$ that follow this spectrum. Particle displacements are then calculated\n\tas Zel'dovich displacements.\n\t\n\tNon-Gaussian effects as well as neutrino contributions are planned for the\n\tfuture.\n\t\n\\subsubsection{Using Nyx with cosmological initial conditions}\n\t\\begin{itemize}\n\t\t\\item {\\bf nyx.particle\\_init\\_type = Cosmological} \\\\ \n\t\t\tset the \\emph{right} init type\n\t\t\\item {\\bf cosmo.initDirName = init} \\\\\n\t\t\tset the name of the displacements directory (AMReX format)\n\t\t\\item {\\bf cosmo.particle\\_mass = 0.19178304E+10} \\\\\n\t\t\tsets the mass [$M_\\odot$] of each particle\n\t\t\\item {\\bf cosmo.omegam = 0.272} \\\\\n\t\t\tset $\\Omega_{Matter}$\n\t\t\\item {\\bf cosmo.omegax = 0.728} \\\\\n\t\t\tset $\\Omega_\\Lambda$\n\t\t\\item {\\bf cosmo.hubble = 0.704} \\\\\n\t\t\tset the reduced Hubble constant $h$\n\t\\end{itemize}\n\t\n\tWe will generate a particle of mass \\textbf{particle\\_mass} in every grid cell\n\tdisplaced from the center by the value found in the \\textbf{initDirName} for\n\tthat cell. 
Velocities are calculated in the Zel'dovich approximation by\n\t\\begin{align}\n\t\t\\vec{v} = \\Delta{\\vec{x}} \\times 100 \\text{km/s} \\times a \\sqrt{\\Omega_M/a^3+\\Omega_\\Lambda} \\times L_{\\text{box}}\n\t\\end{align}\n\twhere $\\Delta{\\vec{x}}$ is the displacement of the particle.\n\t\n\\section{Time Stepping}\n\n\\noindent There are currently two different ways in which particles can be moved:\n\n\\subsection{Random}\n\n\\noindent To enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_move\\_type = Random} \\\\\n\n\\noindent This updates the particle positions at the end of each coarse time step using a \nrandom number between 0 and 1 multiplied by $0.25\\,\\Delta x$.\n\n\\subsection{Motion by Self-Gravity}\n\n\\noindent To enable this option, set \\\\\n\n\\noindent {\\bf nyx.particle\\_move\\_type = Gravitational} \\\\\n\n\\subsubsection{Move-Kick-Drift Algorithm}\n\nIn each time step:\n\\begin{itemize}\n\\item Solve for ${\\mathbf g}^n$  (only if multilevel, otherwise use ${\\mathbf g}^{n+1}$ from previous step)\n\\item ${\\mathbf u}_i^{\\nph} = \\frac{1}{a^{\\nph}} ( (a^n {\\mathbf u}^n_i) + \\frac{\\dt}{2} \\; {\\mathbf g}^n_i )$\n\\item ${\\mathbf x}_i^{n+1} = {\\mathbf x}^n_i +  \\frac{\\dt}{a^{\\nph}}  {\\mathbf u}_i^{\\nph}$\n\\item Solve for ${\\mathbf g}^{n+1}$ using ${\\mathbf x}_i^{n+1}$\n\\item ${\\mathbf u}_i^{n+1} = \\frac{1}{a^{n+1}} ( (a^{\\nph} {\\mathbf u}^{\\nph}_i) + \\frac{\\dt}{2} \\; {\\mathbf g}^{n+1}_i )$\n\\end{itemize}\n\nNote that at the end of the timestep ${\\bf x}_i^{n+1}$ is consistent with ${\\bf g}^{n+1}$ because\nwe have not advanced the positions after computing the new-time gravity.  This has the benefit that\nwe perform only one gravity solve per timestep (in a single-level calculation with no hydro) because\nthe particles are only moved once.\n\n\\subsubsection{Computing {\\bf g}}\n\nWe solve for the gravitational vector as follows:\n\\begin{itemize}\n\\item Assign the mass of the particles onto the grid in the form of density, $\\rho_{DM}$.  \nThe mass of each particle is assumed to be uniformly distributed over a cube of side $\\Delta x$, \ncentered at what we call the position of the particle.    We distribute the mass of each\nparticle to the cells on the grid in proportion to the volume of the intersection of each cell\nwith the particle's cube.   We then divide these cell values by $\\Delta x^3$ so that the\nright-hand side of the Poisson solve will be in units of density rather than mass.  \nNote that this is the {\\it comoving} density.\n\n\\item Solve $\\nabla^2 \\phi = \\frac{4 \\pi G}{a} \\rho_{DM}$.\nWe discretize with the standard 7-point Laplacian (5-point in 2D) \nand use multigrid with Gauss-Seidel red-black relaxation to solve the equation for $\\phi$ at cell centers.\n\n\\item Compute the normal component of ${\\bf g} = -\\nabla \\phi$ at cell faces by differencing the adjacent values of $\\phi,$\ne.g. 
if $\\gb = (g_x, g_y, g_z),$ then we define $g_x$ on cell faces with a normal in the x-direction by computing\n$g_{x,i-\\myhalf,j,k} = -(\\phi_{i,j,k} - \\phi_{i-1,j,k}) / \\Delta x.$\n\n\\item  Interpolate each component of ${\\bf g}$ from normal cell faces onto each particle position using \nlinear interpolation in the normal direction.\n\n\\end{itemize}\n\n%\\subsection{Predictor-Corrector}\n%\n%An alternative to the above algorithm would be the following predictor-corrector approach:\n%\n%\\begin{itemize}\n%\\item Solve for ${\\bf g}^n$\n%\\item ${\\bf v}_i^{n+1,*} = {\\bf v}^n_i + \\dt \\; {\\bf g}^n$\n%\\item ${\\bf x}_i^{n+1,*} = {\\bf x}^n_i + \\dt \\; {\\bf v}_i^n$\n%\\item Solve for ${\\bf g}^{n+1}$ using ${\\bf x}_i^{n+1,*}$\n%\\item ${\\bf v}_i^{n+1} = {\\bf v}^{n+1,*}_i + \\frac{\\dt}{2} \\; ({\\bf g}^{n+1} - {\\bf g}^n)$\n%\\item ${\\bf x}_i^{n+1} = {\\bf x}^{n+1,*}_i + \\frac{\\dt}{2} \\; ({\\bf v}_i^{n+1} - {\\bf v}_i^n)$\n%\\end{itemize}\n%\n%\\noindent This has two issues:\n%\\begin{itemize}\n%\\item First, the gravity at the end of the timestep is not consistent with the particle positions at the end of the timestep.\n%Thus this will require an additional solve per timestep because we move the particles twice per timestep.\n%\\item Second, this increases the memory required per particle because we would need to keep both ${\\bf v}^n_i$\n%and ${\\bf v}^{n+1,*}_i$ over the course of a timestep.\n%\\end{itemize}\n\n\\section{Output Format}\n\n\\subsection{Checkpoint Files}\n\nThe particle positions and velocities are stored in a binary file in each checkpoint directory.  \nThis format is designed for being read by the code at restart rather than for diagnostics. \\\\\n\nWe note that the value of $a$ is also written in each checkpoint directory, \nin a separate ASCII file called {\\em comoving\\_a}, containing only the single value. \\\\\n\n\\subsection{Plot Files}\n\nIf {\\bf particles.write\\_in\\_plotfile =} 1 in the inputs file \nthen the particle positions and velocities will be written in a binary file in each plotfile directory.  \n\nIn addition, we can also\nvisualize the particle locations as represented on the grid.  There are two ``derived quantities''\nwhich represent the particles.  Setting \\\\\n\n\\noindent {\\bf amr.derive\\_plot\\_vars = particle\\_count particle\\_mass\\_density} \\\\\n\\noindent {\\bf amr.plot\\_vars = NONE} \\\\\n\n\\noindent in the inputs file will generate plotfiles with only two variables.  \n{\\bf particle\\_count} represents the number of particles in a grid cell; \n{\\bf particle\\_mass\\_density} is the density on the grid resulting from the particles.\n\nWe note that the value of $a$ is also written in each plotfile directory, \nin a separate ASCII file called {\\em comoving\\_a}, containing only the single value. \\\\\n\n\\subsection{ASCII Particle Files}\n\nTo generate an ASCII file containing the particle positions and velocities, \none needs to restart from a checkpoint\nfile but doesn't need to run any steps.  For example, if chk00350 exists, then one can set: \\\\\n\n\\noindent {\\bf amr.restart = chk00350} \\\\\n\\noindent {\\bf max\\_step = 350} \\\\\n\\noindent {\\bf particles.particle\\_output\\_file =} {\\em particle\\_output} \\\\\n\n\\noindent which would tell the code to restart from chk00350, not to take any further time steps, and to write an ASCII-format \nfile called {\\em particle\\_output}. 
\\\\\n\n\\noindent This file has the same format as the ASCII input file: \\\\\n\n\\noindent number of particles \\\\ \nx y z mass xdot ydot zdot \\\\\n\n\\subsection{Run-time Data Logs}\n\nIf you set \\\\\n\n\\noindent {\\bf amr.data\\_log = }{\\em log\\_file}  \\\\\n\n\\noindent in the inputs file, then at run-time the code will write a log file with entries every coarse\ngrid time step, containing \\\\\n\n\\noindent nstep  time   dt   redshift   a\n\n\n\\subsection{Run-time Screen Output}\n\nThere are a number of flags that control the verbosity written to the screen at run-time.  These are:\n\n\\noindent {\\bf amr.v } \\\\\n\\noindent {\\bf nyx.v } \\\\\n\\noindent {\\bf gravity.v } \\\\\n\\noindent {\\bf mg.v } \\\\\n\\noindent {\\bf particles.v } \\\\\n\nThese control printing  about the state of the calculation (time, value of $a$, etc) as well as\ntiming information.\n", "meta": {"hexsha": "8884b5b02b51f4ea5bfc7aa70dd96f3d74eea4e0", "size": 14473, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "UsersGuide/Particles/Particles.tex", "max_stars_repo_name": "luminosa42/Nyx", "max_stars_repo_head_hexsha": "e359f698fc27eb09cce87712808a7e5b4201b446", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "UsersGuide/Particles/Particles.tex", "max_issues_repo_name": "luminosa42/Nyx", "max_issues_repo_head_hexsha": "e359f698fc27eb09cce87712808a7e5b4201b446", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "UsersGuide/Particles/Particles.tex", "max_forks_repo_name": "luminosa42/Nyx", "max_forks_repo_head_hexsha": "e359f698fc27eb09cce87712808a7e5b4201b446", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2222222222, "max_line_length": 127, "alphanum_fraction": 0.7226559801, "num_tokens": 4247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637577007393, "lm_q2_score": 0.6926419894793248, "lm_q1q2_score": 0.5954392154171123}}
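For example, an inputs file might enable moderate verbosity with settings such as the following (the values are illustrative; larger values typically produce more output):

\begin{verbatim}
amr.v       = 1
nyx.v       = 1
gravity.v   = 1
mg.v        = 0
particles.v = 1
\end{verbatim}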
{"text": "\n\\subsection{Association rules}\n\n\\subsubsection{The data}\n\nWe have a transaction dataset, \\(D\\).\n\nThis includes transactions of items in \\(I\\).\n\nAny subset of \\(I\\) is an itemset.\n\nA subset of size \\(k\\) is a \\(k\\)-itemset.\n\nTransactions are a \\(k\\)-itemset with a unique id, tid.\n\nThe set of all transactions is \\(T\\).\n\nA tidset is a subset of \\(T\\).\n\n\\subsubsection{Forming a lattice}\n\nWe have a total order on the items.\n\nAn itemset \\(ab\\) is greater than \\(a\\), for examplle.\n\nThe two points of the lattice are the nullset, and \\(I\\).\n\n\\subsubsection{Mappings}\n\nWe have a mapping from \\(I\\) to \\(T\\) called \\(t\\).\n\nWe have another mapping from \\(T\\) to \\(I\\) called \\(i\\).\n\n\\subsubsection{Frequency}\n\nWe define the frequency of an itemset as the number of transactions it appears in.\n\nWe can write the frequency of \\(A\\) as \\(\\sum A\\).\n\n", "meta": {"hexsha": "d2213d9e37dc9e38ef67626560ce9c71daad368e", "size": 841, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/association/01-01-association.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/association/01-01-association.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/association/01-01-association.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.025, "max_line_length": 82, "alphanum_fraction": 0.6848989298, "num_tokens": 229, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637648915617, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5954392149435649}}
{"text": "\\documentclass{article}\n\\usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry}\n\\usepackage{amsmath, amsthm, verbatim, enumerate, xcolor, mathpartir}\n\n\\input{macros}\n\n\\title{IIT CS595: Topics \\& Applications in Programming Languages\\\\\n  {\\large Homework 0: Rule induction, expressions, type safety}}\n\\author{Dustin Van Tate Testa}\n\n\\begin{document}\n\\maketitle\n\n\\section{Logistics}\n\n\\paragraph{Writing up Solutions.}\nIf you're looking at this file, you're probably (at least thinking about)\nwriting up your solutions in LaTeX. Nice! If you need help, see the writeup\nPDF for a link to resources and other options.\n\n\\paragraph{Collaboration.}\nPlease see the collaboration policy on the course website.\n\n\\paragraph{Submission.}\nSubmit your answers as a single .pdf or .doc file (see above) on Blackboard\nunder\nthe correct assignment. \n\n\n\\section{Evaluation}\n\\begin{task}\n  Evaluate the following expression using the dynamics semantics from class,\n  showing each step:\n\n  (Put your answers in the ... and copy and paste the line to add more lines\n  as necessary).\n  \n  \\[\n  \\begin{array}{l l}\n    &\n    \\kwlet{x}{\\kwn{2}}{x+(\\kwlet{x}{\\kwn{1}}{\\kwlet{y}{x+\\kwn{1}}{x+y}})}\\\\\n    \\step & \\kwn{2}+(\\kwlet{x}{\\kwn{1}}{\\kwlet{y}{x+\\kwn{1}}{x+y}})\\\\\n    \\step & \\kwn{2}+(\\kwlet{y}{\\kwn{1}+\\kwn{1}}{\\kwn{1}+y}\\\\\n    \\step & \\kwn{2}+(\\kwlet{y}{\\kwn{2}}{\\kwn{1}+y}\\\\\n    \\step & \\kwn{2}+(\\kwn{1}+\\kwn{2})\\\\\n    \\step & \\kwn{2}+\\kwn{3}\\\\\n    \\step & \\kwn{5}\\\\\n  \\end{array}\n  \\]\n\\end{task}\n\n\\section{Red-Black Trees}\nIn data structures, a red-black tree is a binary tree whose nodes are colored\neither red or black.\n%\nRed-black trees must obey the following rules:\n\\begin{itemize}\n\\item Leaf nodes must be black.\n\\item No red node has red children.\n\\item Any path from the root to a leaf contains the same number of black nodes.\n\\end{itemize}\n\nWe can express red-black trees using the following small language\n(note that this syntax doesn't enforce the rules above; we'll do that later).\n\n\\[\n\\begin{array}{r l l l}\n  Trees & \\tree & \\bnfdef & \\leaf \\bnfalt \\bnode{\\tree}{\\tree} \\bnfalt\n  \\rnode{\\tree}{\\tree}\\\\\n  Colors & \\colr & \\bnfdef & \\bc \\bnfalt \\rc\\\\\n\\end{array}\n\\]\nwhere~$\\leaf$ is a leaf and~$\\bnode{\\tree_1}{\\tree_2}$\nand~$\\rnode{\\tree_1}{\\tree_2}$ represent black and red nodes, respectively,\nwith children~$\\tree_1$ and~$\\tree_2$.\n\nWe will define a judgment~$\\tree\\istree{\\colr}{n}$, meaning that~$\\tree$ is\na valid red-black tree whose root node has the color~$\\colr$ (which is\neither~$\\bc$ or~$\\rc$) and all of whose paths from root to leaf have\nexactly~$n$ black nodes.\n\n{\n  \\centering\n  \\def \\MathparLineskip {\\lineskip=0.43cm}\n  \\begin{mathpar}\n\n    \\Rule{RBT-1}\n         {\\strut}\n         {\\leaf \\istree{\\bc}{1}}\n    \\and\n    \\Rule{RBT-2}\n         {\\tree_1 \\istree{\\colr_1}{n}\\\\\n           \\tree_2 \\istree{\\colr_2}{n}}\n         {\\bnode{\\tree_1}{\\tree_2} \\istree{\\bc}{n+1}}\n    \\and\n    \\Rule{RBT-3}\n         {\\tree_1 \\istree{\\bc}{n}\\\\\n           \\tree_2 \\istree{\\bc}{n}}\n         {\\rnode{\\tree_1}{\\tree_2} \\istree{\\rc}{n}}\n  \\end{mathpar}\n}\n\nRule~(RBT-1) says that leaves are valid trees as long as they are\nblack: a leaf by definition has~1 black node on any path from root to leaf.\n%\nRule~(RBT-2) allows a black node with two children as long as both\nchildren are valid 
red-black trees with the same number of black nodes on any\npath (regardless of what color their roots are); the result is a tree with a\nblack root and~$n+1$ black nodes on any path.\n%\nRule~(RBT-3) enforces that red nodes do not have red children by\nrequiring in the premises that both children have black roots.\n\nDefine~$\\nodes{\\tree}$ to be the number of nodes, including leaves, in a\ntree:\n\n\\[\n\\begin{array}{l l l}\n  \\nodes{\\leaf} & = & 1\\\\\n  \\nodes{\\bnode{\\tree_1}{\\tree_2}} & = & 1 + \\nodes{\\tree_1} + \\nodes{\\tree_2}\\\\\n  \\nodes{\\rnode{\\tree_1}{\\tree_2}} & = & 1 + \\nodes{\\tree_1} + \\nodes{\\tree_2}\n\\end{array}\n\\]\n\n\\begin{task}\n  Prove by rule induction that for any tree~$\\tree$,\n  color~$\\colr \\in \\{\\rc, \\bc\\}$ and~$n \\geq 1$,\n  if~$\\tree \\istree{\\colr}{n}$, then~$\\nodes{\\tree} \\geq 2^{n-1}$.\n\n  {\\em You must} use rule induction, not any other proof technique.\n\n  \\textbf{Note:} This, together with one or two other properties of\n  red-black trees, proves that a valid red-black tree is approximately\n  balanced.\n\\end{task}\n\n\\textbf{Answer:}\n\nWe use the shorthand $\\kwlen{\\tree}$ for the value of $n$ (the black height) of a tree $\\tree$. \\\\\n\nBase case ($n = 1$): \n\\begin{itemize}\n    \\item all cases when $n = 1$:\n        \\begin{itemize}\n            \\item $\\tree = \\leaf$: Rule~(RBT-1) confirms that when $\\tree$ is a single leaf, $n = 1$.\\\\\n            The proposition holds for this case as $\\nodes{\\leaf} = 1 \\geq 2^{n-1} = 2^{1-1} = 1$.\n\n            \\item $\\tree = \\rnode{\\leaf}{\\leaf}$: rule~(RBT-3) confirms that when $\\tree$ is a red node,\n            $n$ is the same as that of its children, which, when both are leaves, makes $n = 1$.\\\\\n            The proposition holds for this case as follows:\\\\\n            LHS: $\\nodes{\\rnode{\\leaf}{\\leaf}} = 1 + \\nodes{\\leaf} + \\nodes{\\leaf} = 1 + 1 + 1 = 3$\\\\\n            RHS: $2^{n-1} = 2 ^ {1 - 1} = 2^0 = 1$\\\\\n            giving $3 \\geq 1$ \n        \\end{itemize}\n\\end{itemize}\n\nInductive case, assuming $\\nodes{\\tree_1} \\geq 2^{\\kwlen{\\tree_1}-1}$ and $\\nodes{\\tree_2} \\geq 2^{\\kwlen{\\tree_2}-1}$:\n\\begin{itemize}\n    \\item Case when $\\tree = \\bnode{\\tree_1}{\\tree_2}$ -- the root is a black node. By rule (RBT-2), $\\kwlen{\\tree_1} = \\kwlen{\\tree_2} = \\kwlen{\\tree} - 1$, so\n        \\[\n        \\begin{array}{l l}\n            \\nodes{\\tree} = 1 + \\nodes{\\tree_1} + \\nodes{\\tree_2} & \\text{definition of nodes}\\\\\n            \\geq 1 + 2 ^ {\\kwlen{\\tree} - 2} + 2 ^ {\\kwlen{\\tree} - 2} = 1 + 2^{\\kwlen{\\tree}-1} & \\text{inductive hypothesis}\\\\\n            \\geq 2^{\\kwlen{\\tree}-1} & \\\\\n        \\end{array}\n        \\]\n    \n    \\item Case when $\\tree = \\rnode{\\tree_1}{\\tree_2}$ -- the root is a red node. By rule (RBT-3), $\\kwlen{\\tree_1} = \\kwlen{\\tree_2} = \\kwlen{\\tree}$, so\n        \\[\n        \\begin{array}{l l}\n            \\nodes{\\tree} = 1 + \\nodes{\\tree_1} + \\nodes{\\tree_2} & \\text{definition of nodes}\\\\\n            \\geq 1 + 2 ^ {\\kwlen{\\tree} - 1} + 2 ^ {\\kwlen{\\tree} - 1} & \\text{inductive hypothesis}\\\\\n            \\geq 2^{\\kwlen{\\tree}-1} & \\\\\n        \\end{array}\n        \\]\n\\end{itemize}\n\n\\section{Booleans}\n\nIn this task, you'll extend the {\\Elang} language from class with Booleans:\n\n\\[\n\\begin{array}{r l l l}\n  \\mathit{Types} & \\tau & \\bnfdef &\n  \\kwint \\bnfalt \\kwstring \\bnfalt \\kwbool\\\\\n  \\mathit{Expressions} & e & \\bnfdef &\n  x \\bnfalt\n  \\kwn{n} \\bnfalt\n  \\kws{s} \\bnfalt\n  \\kwtrue \\bnfalt\n  
\\kwfalse \\bnfalt\n  e = e \\bnfalt\n  e + e \\bnfalt\n  e \\cat e \\bnfalt\n  \\kwlen{e} \\bnfalt\n  \\kwlet{x}{e}{e} \\bnfalt\n  \\kwif{e}{e}{e}\n\\end{array}\n\\]\n\nThe expressions~$\\kwtrue$ and~$\\kwfalse$ are values and have their usual\nmeanings.\n%\nThe expression~$\\kwif{e_1}{e_2}{e_3}$ should evaluate~$e_1$. If it evaluates\nto~$\\kwtrue$, it should continue evaluating~$e_2$, otherwise it should\ncontinue evaluating~$e_3$.\n%\nThe other branch {\\em should not} be evaluated (this isn't possible to express\nin {\\Elang}, but consider the code $\\kwif{x = \\kwn{0}}{\\kwn{0}}{\\kwn{42} / x}$: we certainly\ndon't want to evaluate the else branch when the condition is true!).\n%\nWe've also added an integer equality test~$e_1 = e_2$ to have an interesting\nway of producing Booleans (this is an operation on integers only, not on\nBooleans or strings).\n%\nRecall the dynamic semantics from class, now extended with the rules\nfor~$e_1 = e_2$.\n\n\\textbf{Note:} There's a lot here, but don't panic:\nthe only new rules here are (S-11) through (S-14). The rest are just there\nas a reminder. Your job is only to add Booleans to the language.\n\n{\n  \\centering\n  \\def \\MathparLineskip {\\lineskip=0.43cm}\n  \\begin{mathpar}\n    \\Rule{V-1}\n         {\\strut} % This leaves the proper spacing above the line of an axiom\n         {\\kwn{n}\\val}\n    \\and\n    \\Rule{V-2}\n         {\\strut}\n         {\\kws{s}\\val}\n    \\and\n    \\Rule{S-1}\n         {\\strut}\n         {\\kwn{n_1} + \\kwn{n_2} \\step \\kwn{n_1 + n_2}}\n    \\and\n    \\Rule{S-2}\n         {\\strut}\n         {\\kws{s_1} \\cat \\kws{s_2} \\step \\kws{s_1s_2}}\n    \\and\n    \\Rule{S-3}\n         {\\strut}\n         {\\kwlen{\\kws{s}} \\step \\kwn{\\kwlen{s}}}\n    \\and\n    \\Rule{S-4}\n         {e_1 \\step e_1'}\n         {e_1 + e_2 \\step e_1' + e_2}\n    \\and\n    \\Rule{S-5}\n         {e_2 \\step e_2'}\n         {\\kwn{n_1} + e_2 \\step \\kwn{n_1} + e_2'}\n    \\and\n    \\Rule{S-6}\n         {e_1 \\step e_1'}\n         {e_1 \\cat e_2 \\step e_1' \\cat e_2}\n    \\and\n    \\Rule{S-7}\n         {e_2 \\step e_2'}\n         {\\kws{s_1} \\cat e_2 \\step \\kws{s_1} \\cat e_2'} \n    \\and\n    \\Rule{S-8}\n         {e \\step e'}\n         {\\kwlen{e} \\step \\kwlen{e'}}\n    \\and\n    \\Rule{S-9}\n         {e_1 \\step e_1'}\n         {\\kwlet{x}{e_1}{e_2} \\step \\kwlet{x}{e_1'}{e_2}}\n    \\and\n    \\Rule{S-10}\n         {v \\val}\n         {\\kwlet{x}{v}{e_2} \\step \\sub{v}{x}{e_2}}\n    \\and\n    \\Rule{S-11}\n         {e_1 \\step e_1'}\n         {e_1 = e_2 \\step e_1' = e_2}\n    \\and\n    \\Rule{S-12}\n         {e_2 \\step e_2'}\n         {\\kwn{n_1} = e_2 \\step \\kwn{n_1} = e_2'}\n    \\and\n    \\Rule{S-13}\n         {\\strut}\n         {\\kwn{n} = \\kwn{n} \\step \\kwtrue}\n    \\and\n    \\Rule{S-14}\n         {n_1 \\neq n_2}\n         {\\kwn{n_1} = \\kwn{n_2} \\step \\kwfalse}\n  \\end{mathpar}\n}\n\n\\begin{task}\\label{task:dyn}\n  Write the new inference rules for the dynamic semantics of Booleans and the\n  if-else construct.\n  You should have 2 new rules for the judgment~$e\\val$ and 3 new rules\n  for the judgment~$e \\step e$.\\\\\n  (\\textbf{Hint:} only one of these will be a search rule.)\n\\end{task}\n\n\\textbf{Answer:}\n\n{\n    \\centering\n    \\def \\MathparLineskip{\\lineskip=0.43cm}\n    \\begin{mathpar}\n        \\Rule{V-3}\n            {\\strut}\n            {\\kwfalse \\val}\n        \\and\n        \\Rule{V-4}\n            {\\strut}\n            {\\kwtrue \\val}\n        
\\and\n        \\Rule{S-15}\n            {e_0 \\step e_0'}\n            {\\kwif{e_0}{e_1}{e_2} \\step \\kwif{e_0'}{e_1}{e_2}}\n        \\and\n        \\Rule{S-16}\n            {\\strut}\n            {\\kwif{\\kwtrue}{e_1}{e_2} \\step e_1}\n        \\and\n        \\Rule{S-17}\n            {\\strut}\n            {\\kwif{\\kwfalse}{e_1}{e_2} \\step e_2}\n    \\end{mathpar}\n}\n\nWe also extend the typing rules with the new rules for equality testing and\nBooleans.\n\n{\n  \\centering\n  \\def \\MathparLineskip {\\lineskip=0.43cm}\n  \\begin{mathpar}\n    \\Rule{T-8}\n         {\\typed{\\ctx}{e_1}{\\kwint}\\\\\n           % Use \\\\ to separate multiple premises\n           \\typed{\\ctx}{e_2}{\\kwint}}\n         {\\typed{\\ctx}{e_1 = e_2}{\\kwbool}}\n    \\and\n    \\Rule{T-9}\n         {\\strut}\n         {\\typed{\\ctx}{\\kwtrue}{\\kwbool}}\n    \\and\n    \\Rule{T-10}\n         {\\strut}\n         {\\typed{\\ctx}{\\kwfalse}{\\kwbool}}\n    \\and\n    \\Rule{T-11}\n         {\\typed{\\ctx}{e_1}{\\kwbool}\\\\\n           \\typed{\\ctx}{e_2}{\\tau}\\\\\n           \\typed{\\ctx}{e_3}{\\tau}}\n         {\\typed{\\ctx}{\\kwif{e_1}{e_2}{e_3}}{\\tau}}\n  \\end{mathpar}\n}\n\nNote that rule (T-11) doesn't require that the two branches of the conditional\nhave a particular type (e.g.~$\\kwint$).\n%\nThe use of the same metavariable~$\\tau$ {\\em does}, however, mean that the\ntwo branches~$e_2$ and~$e_3$ must have the {\\em same} type, which is then the\ntype of the whole expression\n(this makes sense: how could we possibly give a type to the expression\n$\\kwif{n = \\kwn{0}}{\\kwn{42}}{\\kws{\\text{Oops}}}$?).\n\n\\begin{task}\n  Prove the cases of the Preservation theorem for the new rules you\n  added in Task~\\ref{task:dyn}.\n\\end{task}\n\n\\textbf{Answer:}\n\nProposition: if $\\typed{\\ctx}{e}{\\tau}$ and $e \\step e'$ then $\\typed{\\ctx}{e'}{\\tau}$\n\\begin{itemize}\n    \\item S-16: When $e = \\kwif{\\kwtrue}{e_1}{e_2} \\step e' = e_1$:\\\\\n        by inversion on rule (T-11), $\\typed{\\ctx}{e_1}{\\tau}$.\\\\\n        Since $e' = e_1$, we have $\\typed{\\ctx}{e'}{\\tau}$ directly; no induction is needed.\n        \n    \\item S-17: When $e = \\kwif{\\kwfalse}{e_1}{e_2} \\step e' = e_2$:\\\\\n        by inversion on rule (T-11), $\\typed{\\ctx}{e_2}{\\tau}$.\\\\\n        Since $e' = e_2$, we have $\\typed{\\ctx}{e'}{\\tau}$ directly.\n    \n    \\item S-15: When $e = \\kwif{e_0}{e_1}{e_2} \\step e' = \\kwif{e_0'}{e_1}{e_2}$ with $e_0 \\step e_0'$:\\\\\n        by inversion on rule (T-11), $\\typed{\\ctx}{e_0}{\\kwbool}$, $\\typed{\\ctx}{e_1}{\\tau}$, \n            and $\\typed{\\ctx}{e_2}{\\tau}$\\\\\n        by the inductive hypothesis applied to $e_0 \\step e_0'$, $\\typed{\\ctx}{e_0'}{\\kwbool}$\\\\\n        thus applying (T-11) gives $\\typed{\\ctx}{\\kwif{e_0'}{e_1}{e_2}}{\\tau}$.\n\\end{itemize}\n\n\\begin{task}\\label{task-cf}\n  State (you do not have to prove it) the new case of the Canonical Forms\n  lemma for Booleans.\n\\end{task}\n\n\\textbf{Answer:}\nIf $e \\val$ and $\\typed{}{e}{\\kwbool}$ then $e$ is either $\\kwtrue$ or $\\kwfalse$.\n\n\n\\begin{task}\n  Prove the cases of the Progress theorem for the new rules\n  (T-9) through (T-11). 
You may (and should) use the new case of Canonical\n  Forms from Task~\\ref{task-cf}.\n\\end{task}\n\n\\textbf{Answer:}\n\nProposition: if $\\typed{}{e}{\\tau}$ then $e \\val$ or $e \\step e'$\n\\begin{itemize}\n    \\item T-9: $e = \\kwtrue$, $\\tau = \\kwbool$\\\\\n        by V-4, $e \\val$\n    \\item T-10: $e = \\kwfalse$, $\\tau = \\kwbool$\\\\\n        by V-3, $e \\val$\n    \\item T-11: $e = \\kwif{e_1}{e_2}{e_3}$\\\\\n        by inversion on rule T-11, $\\typed{}{e_1}{\\kwbool}$\\\\\n        by induction, either $e_1 \\val$ or $e_1 \\step e_1'$\\\\\n        if $e_1 \\step e_1'$, then by (S-15), $e \\step \\kwif{e_1'}{e_2}{e_3}$\\\\\n        if $e_1 \\val$, then by the Canonical Forms lemma from Task~\\ref{task-cf}, $e_1$ is either \\kwtrue\\ or \\kwfalse:\n        \\begin{itemize}\n            \\item \\kwtrue : S-16 confirms that $\\kwif{\\kwtrue}{e_2}{e_3} \\step e_2$ and thus $e' = e_2$\n            \\item \\kwfalse : S-17 confirms that $\\kwif{\\kwfalse}{e_2}{e_3} \\step e_3$ and thus $e' = e_3$\n        \\end{itemize}\n\n\\end{itemize}\n\n\\end{document}\n", "meta": {"hexsha": "2d53b459814a118fd4b8f49cd6d2c3a25f770adf", "size": 13851, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW0/hw0.tex", "max_stars_repo_name": "dvtate/CS595-PLT", "max_stars_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW0/hw0.tex", "max_issues_repo_name": "dvtate/CS595-PLT", "max_issues_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW0/hw0.tex", "max_forks_repo_name": "dvtate/CS595-PLT", "max_forks_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.3621495327, "max_line_length": 146, "alphanum_fraction": 0.5914374413, "num_tokens": 5029, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347362, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.5954392037370674}}
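As an aside (not part of the homework), the judgment defined by rules (RBT-1) through (RBT-3) is easy to encode and test mechanically. The sketch below, in Python with an invented tuple encoding of trees, checks validity and spot-checks the bound nodes$(T) \geq 2^{n-1}$ from the red-black tree task:

\begin{verbatim}
# Python sketch: 'L' is a leaf; ('B', l, r) / ('R', l, r) are black/red nodes.

def check(tree):
    # Return (color, n) if tree satisfies (RBT-1)-(RBT-3), else None.
    if tree == 'L':                        # (RBT-1): a leaf is black, n = 1
        return ('B', 1)
    color, left, right = tree
    cl, cr = check(left), check(right)
    if cl is None or cr is None or cl[1] != cr[1]:
        return None                        # children must agree on n
    if color == 'B':                       # (RBT-2)
        return ('B', cl[1] + 1)
    if color == 'R' and cl[0] == 'B' and cr[0] == 'B':
        return ('R', cl[1])                # (RBT-3): red needs black children
    return None

def nodes(tree):
    return 1 if tree == 'L' else 1 + nodes(tree[1]) + nodes(tree[2])

t1 = ('B', ('R', 'L', 'L'), 'L')
result = check(t1)                         # ('B', 2)
assert result is not None and nodes(t1) >= 2 ** (result[1] - 1)
\end{verbatim}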
{"text": "\\chapter{Governing Equations}\n\\label{cha:governing:equations}\n\nThis chapter presents the solution schemes we use for solving\nvariations of the elasticity equation using the finite-element\nmethod. In all of our derivations, we use the notation described in\nTable \\vref{tab:notation}.\n\n\\begin{table}[htbp]\n  \\caption{Mathematical notation}\n  \\label{tab:notation}\n  \\begin{tabular}{cp{3in}}\n    \\toprule\n    {\\bf Symbol} & {\\bf Description} \\\\\n    \\midrule\n    $\\vec{a}$ & Vector field a \\\\\n    $\\tensor{a}$ & Second order tensor field a \\\\\n    $\\vec{u}$ & Displacement vector field \\\\\n    $\\vec{v}$ & Velocity vector field \\\\\n    $\\vec{{d}}$ & Fault slip vector field \\\\\n    $\\vec{f}$ & Body force vector field \\\\\n    $\\vec{\\tau}$ & Traction vector field \\\\\n    $\\tensor{\\sigma}$ & Stress tensor field \\\\\n    $\\vec{n}$ & Normal vector field \\\\\n    $\\rho$ & Mass density scalar field \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\\input{governingeqns/elasticity_derivation.tex}\n\\input{governingeqns/formulation.tex}\n\n\\input{governingeqns/elasticity_infstrain.tex}\n\\input{governingeqns/elasticity_infstrain_prescribedslip.tex}\n\\input{governingeqns/incompressible_elasticity.tex}\n\\input{governingeqns/poroelasticity.tex}", "meta": {"hexsha": "ce1340c3a5ac32a2e33ce660ff006abb408c8bc7", "size": 1217, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/userguide/governingeqns/governingeqns.tex", "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z", "max_issues_repo_path": "doc/userguide/governingeqns/governingeqns.tex", "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 277, "max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z", "max_forks_repo_path": "doc/userguide/governingeqns/governingeqns.tex", "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z", "avg_line_length": 33.8055555556, "max_line_length": 67, "alphanum_fraction": 0.7091207888, "num_tokens": 350, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637433190939, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5954392000015679}}
{"text": "\\section{Displacement and Area}\\label{sec:AreaProb}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\begin{example}{Object Moving in a Straight Line}{ObjectMovingStraightLine}\r\nAn object moves in a straight line so that\r\nits speed at time $t$ is given by $v(t)=3t$ in, say, cm/sec. If the\r\nobject is at position $10$ on the straight line when $t=0$, where is\r\nthe object at any time $t$? \r\n\\end{example}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\begin{solution} \r\nThere are two reasonable ways to approach this problem. If $s(t)$ is\r\nthe position of the object at time $t$, we know that\r\n$s'(t)=v(t)$. Based on our knowledge of derivatives, we\r\ntherefore know that $\\ds s(t)=3t^2/2+k$, and because $s(0)=10$ we easily\r\ndiscover that $k=10$, so $\\ds s(t)=3t^2/2+10$. For example, at $t=1$ the\r\nobject is at position $3/2+10=11.5$.\r\nThis is certainly the easiest way to deal with this problem. Not all\r\nsimilar problems are so easy, as we will see; the second approach to\r\nthe problem is more difficult but also more general.\r\n\r\nWe start by considering how we might approximate a solution. We know\r\nthat at $t=0$ the object is at position 10. How might we approximate\r\nits position at, say, $t=1$? We know that the speed of the object at\r\ntime $t=0$ is $0$; if its speed were constant then in the first second\r\nthe object would not move and its position would still be 10 when\r\n$t=1$. In fact, the object will not be too far from 10 at $t=1$, but\r\ncertainly we can do better. Let's look at the times $0.1$, $0.2$,\r\n$0.3$, \\dots, $1.0$, and try approximating the location of the object\r\nat each, by supposing that during each tenth of a second the object is\r\ngoing at a constant speed. Since the object initially has speed 0, we\r\nagain suppose it maintains this speed, but only for a tenth of second;\r\nduring that time the object would not move. During the tenth of a\r\nsecond from $t=0.1$ to $t=0.2$, we suppose that the object is\r\ntraveling at $0.3$ cm/sec, namely, its actual speed at $t=0.1$. In\r\nthis case the object would travel $(0.3)(0.1)=0.03$ centimeters: $0.3$\r\ncm/sec times $0.1$ seconds. Similarly, between $t=0.2$ and $t=0.3$ the\r\nobject would travel $(0.6)(0.1)=0.06$ centimeters.  Continuing, we get\r\nas an approximation that the object travels\r\n$$ \r\n  (0.0)(0.1)+(0.3)(0.1)+(0.6)(0.1)+\\cdots+(2.7)(0.1)=1.35\r\n$$ \r\ncentimeters, ending up at position 11.35. This is a better\r\napproximation than 10, certainly, but is still just an\r\napproximation. (We know in fact that the object ends up at position\r\n$11.5$, because we've already done the problem using the first\r\napproach.) Presumably, we will get a better approximation if we divide\r\nthe time into one hundred intervals of a hundredth of a second each,\r\nand repeat the process:\r\n$$\r\n  (0.0)(0.01)+(0.03)(0.01)+(0.06)(0.01)+\\cdots+(2.97)(0.01)=1.485.\r\n$$\r\nWe thus approximate the position as $11.485$. Since we know the exact\r\nanswer, we can see that this is much closer, but if we did not already\r\nknow the answer, we wouldn't really know how close.\r\n\r\nWe can keep this up, but we'll never really know the exact answer if\r\nwe simply compute more and more examples. Let's instead look at a\r\n``typical'' approximation. Suppose we divide the time into $n$ equal\r\nintervals, and imagine that on each of these the object travels at a\r\nconstant speed. Over the first time interval we approximate the\r\ndistance traveled as $(0.0)(1/n)=0$, as before. 
During the second time\r\ninterval, from $t=1/n$ to $t=2/n$, the object travels approximately\r\n$\\ds 3(1/n)(1/n)=3/n^2$ centimeters. During time interval number $i$, the\r\nobject travels approximately $\\ds (3(i-1)/n)(1/n)=3(i-1)/n^2$\r\ncentimeters, that is, its speed at time $(i-1)/n$, $3(i-1)/n$, times\r\nthe length of time interval number $i$, $1/n$.\r\nAdding these up as before, we approximate the distance traveled as\r\n$$\r\n  (0){1\\over n}+3{1\\over n^2}+3(2){1\\over n^2}+\r\n  3(3){1\\over n^2}+\\cdots+3(n-1){1\\over n^2}\r\n$$\r\ncentimeters. What can we say about this? At first it looks rather less\r\nuseful than the concrete calculations we've already done, but in fact\r\na bit of algebra reveals it to be much more useful. We can factor out\r\na 3 and $\\ds 1/n^2$ to get\r\n$$\r\n  {3\\over n^2}(0+1+2+3+\\cdots+(n-1)),\r\n$$\r\nthat is, $\\ds 3/n^2$ times the sum of the first $n-1$ positive\r\nintegers. Now we make use of a fact you may have run across before, Gauss's Equation:\r\n$$\r\n  1+2+3+\\cdots+k={k(k+1)\\over2}.\r\n$$\r\nIn our case we're interested in $k=n-1$, so\r\n$$\r\n  1+2+3+\\cdots+(n-1)={(n-1)(n)\\over2}={n^2-n\\over2}.\r\n$$\r\nThis simplifies the approximate distance traveled to \r\n$$\r\n  {3\\over n^2}{n^2-n\\over2}={3\\over2}{n^2-n\\over n^2}=\r\n  {3\\over2}\\left({n^2\\over n^2}-{n\\over n^2}\\right)=\r\n  {3\\over2}\\left(1-{1\\over n}\\right).\r\n$$\r\nNow this is quite easy to understand: as $n$ gets larger and larger\r\nthis approximation gets closer and closer to $(3/2)(1-0)=3/2$, so that\r\n$3/2$ is the exact distance traveled during one second, and the final\r\nposition is $11.5$.\r\n\r\nSo for $t=1$, at least, this rather cumbersome approach gives the same\r\nanswer as the first approach. But really there's nothing special about\r\n$t=1$; let's just call it $t$ instead. In this case the approximate\r\ndistance traveled during time interval number $i$ is $\\ds\r\n3(i-1)(t/n)(t/n)=3(i-1)t^2/n^2$, that is, speed $3(i-1)(t/n)$ times\r\ntime $t/n$, and the total distance traveled is approximately\r\n$$\r\n  (0){t\\over n}+3(1){t^2\\over n^2}+3(2){t^2\\over n^2}+\r\n  3(3){t^2\\over n^2}+\\cdots+3(n-1){t^2\\over n^2}.\r\n$$\r\nAs before we can simplify this to\r\n$$\r\n  {3t^2\\over n^2}(0+1+2+\\cdots+(n-1))={3t^2\\over n^2}{n^2-n\\over2}=\r\n  {3\\over2}t^2\\left(1-{1\\over n}\\right).\r\n$$ \r\nIn the limit, as $n$ gets larger, this gets closer and closer to $\\ds\r\n(3/2)t^2$ and the approximated position of the object gets closer and\r\ncloser to $\\ds (3/2)t^2+10$, so the actual position is $\\ds\r\n(3/2)t^2+10$, exactly the answer given by the first approach to the\r\nproblem.\r\n\\end{solution}\r\n\r\n\\begin{example}{Area under the Line}{area under line} \r\nFind the area under the\r\ncurve $y=3x$ between $x=0$ and any positive value $x$. \r\n\\end{example}\r\n\r\n\\begin{solution} \r\nThere is here\r\nno obvious analogue to the first approach in the previous example, but\r\nthe second approach works fine. (Since the function $y=3x$ is so\r\nsimple, there is another approach that works here, but it is even more\r\nlimited in potential application than is approach number one.)  How\r\nmight we approximate the desired area? We know how to compute areas of\r\nrectangles, so we approximate the area by rectangles. Jumping straight\r\nto the general case, suppose we divide the interval between 0 and $x$\r\ninto $n$ equal subintervals, and use a rectangle above each\r\nsubinterval to approximate the area under the curve. 
There are many\r\nways we might do this, but let's use the height of the curve at the\r\nleft endpoint of the subinterval as the height of the rectangle, as in\r\nfigure~\\xrefn{fig:approximating area by rectangles}. The height of\r\nrectangle number $i$ is then $3(i-1)(x/n)$, the width is $x/n$, and\r\nthe area is $\\ds 3(i-1)(x^2/n^2)$. The total area of the rectangles is\r\n$$\r\n  (0){x\\over n}+3(1){x^2\\over n^2}+3(2){x^2\\over n^2}+\r\n  3(3){x^2\\over n^2}+\\cdots+3(n-1){x^2\\over n^2}.\r\n$$\r\nBy factoring out $\\ds 3x^2/n^2$ this simplifies to \r\n$$\r\n  {3x^2\\over n^2}(0+1+2+\\cdots+(n-1))={3x^2\\over n^2}{n^2-n\\over2}=\r\n  {3\\over2}x^2\\left(1-{1\\over n}\\right).\r\n$$\r\nAs $n$ gets larger this gets closer and closer to $\\ds 3x^2/2$, which must\r\ntherefore be the true area under the curve.\r\n\\end{solution}\r\n\r\n\\figure[H]\r\n\\centerline{\\vbox{\\beginpicture\r\n\\normalgraphs\r\n%\\ninepoint\r\n\\setcoordinatesystem units <0.5truecm,0.2truecm>\r\n\\setplotarea x from 0 to 10, y from 0 to 30\r\n\\axis left shiftedto x=0 /\r\n\\axis bottom shiftedto y=0 /\r\n\\put {$\\ldots$} at 6.5 8\r\n\\setlinear\r\n\\plot 0 0 10 30 /\r\n\\setdashes <2pt>\r\n\\putrule from 1 0 to 1 3\r\n\\putrule from 2 0 to 2 6\r\n\\putrule from 3 0 to 3 9\r\n\\putrule from 4 0 to 4 9\r\n\\putrule from 9 0 to 9 27\r\n\\putrule from 10 0 to 10 27\r\n\\putrule from 1 3 to 2 3\r\n\\putrule from 2 6 to 3 6\r\n\\putrule from 3 9 to 4 9\r\n\\putrule from 9 27 to 10 27\r\n\\endpicture}}\r\n\\caption{Approximating the area under $y=3x$ with rectangles. \\label{fig:approximating area by rectangles}}\r\n\\endfigure\r\n\r\nWhat you will have noticed, of course, is that while the problem in\r\nthe second example appears to be much different than the problem in\r\nthe first example, and while the easy approach to problem one does not\r\nappear to apply to problem two, the ``approximation'' approach works\r\nin both, and moreover the {\\it calculations are identical.} As we will\r\nsee, there are many, many problems that appear much different on the\r\nsurface but turn out to be the same as these problems, in the\r\nsense that when we try to approximate solutions we end up with\r\nmathematics that looks like the two examples, though of course the\r\nfunction involved will not always be so simple.\r\n\r\nEven better, we now see that while the second problem did not appear\r\nto be amenable to approach one, it can in fact be solved in the same\r\nway. The reasoning is this: we know that problem one can be solved\r\neasily by finding a function whose derivative is $3t$. We also know\r\nthat mathematically the two problems are the same, because both can be\r\nsolved by taking a limit of a sum, and the sums are\r\nidentical. Therefore, we don't really need to compute the limit of\r\neither sum because we know that we will get the same answer by\r\ncomputing a function with the derivative $3t$ or, which is the same\r\nthing, $3x$.\r\n\r\nIt's true that the first problem had the added complication of the\r\n``10'', and we certainly need to be able to deal with such minor\r\nvariations, but that turns out to be quite simple. The lesson then is\r\nthis: whenever we can solve a problem by taking the limit of a sum of\r\na certain form, instead of computing the (often nasty) limit we can\r\nfind a new function with a certain derivative.\r\n\r\n\r\n\\subsection{Sigma Notation}\r\n\\vskip\\baselineskip\r\nTo refine the area approximations we use more rectangles. 
The notation can become unwieldy, as we add up longer and longer lists of numbers. For this reason we introduce \\textbf{sigma notation}. \\index{sigma!notation}\\\\\r\n\r\n\r\nSuppose we wish to add up a list of numbers $a_1$, $a_2$, $a_3$, \\ldots, $a_9$. Instead of writing $$a_1+a_2+a_3+a_4+a_5+a_6+a_7+a_8+a_9,$$ we use sigma notation and write \r\n\\begin{center}\r\n\\includegraphics[scale=.7]{figures/figrie_notation}\r\n\\caption{Understanding sigma notation.}\\label{fig:rie_notation}\r\n\\end{center}\r\n\r\nThe upper case sigma represents the term ``sum.'' The index of summation in this example is $i$; any symbol can be used. By convention, the index takes on only the integer values between (and including) the lower and upper bounds. \r\n\r\nLet's practice using this notation.\\\\\r\n\r\n\\begin{example}{Using sigma notation}{ex_rie3}{\r\nLet the numbers $\\{a_i\\}$ be defined as $a_i = 2i-1$ for integers $i$, where $i\\geq 1$. So $a_1 = 1$, $a_2 = 3$, $a_3 = 5$, etc. (The outputs are the positive odd integers.) Evaluate the following summations:\r\n$$ 1.\\ \\sum_{i=1}^6 a_i \\qquad\\qquad\\qquad 2.\\ \\sum_{i=3}^7 (3a_i-4)\\qquad\\qquad \\qquad 3.\\ \\sum_{i=1}^4 (a_i)^2$$\r\n}\r\n{\\begin{enumerate}\r\n    \\item \\noindent\\vskip-45pt\r\n        \\begin{align*}\r\n        \\sum_{i=1}^6 a_i &= a_1+a_2+a_3+a_4+a_5+a_6\\\\\r\n                         &= 1+3+5+7+9+11 \\\\\r\n                         &= 36.\r\n        \\end{align*}\r\n    \\item Note the starting value is different from 1:\r\n        \\begin{align*}\r\n        \\sum_{i=3}^7 (3a_i-4) &= (3a_3-4)+(3a_4-4)+(3a_5-4)+(3a_6-4)+(3a_7-4) \\\\\r\n                              &= 11+17+23+29+35 \\\\\r\n                              &= 115.\r\n        \\end{align*}\r\n    \\item \\noindent\\vskip-45pt\r\n        \\begin{align*}\r\n        \\sum_{i=1}^4 (a_i)^2 &= (a_1)^2+(a_2)^2+(a_3)^2+(a_4)^2\\\\\r\n                             &= 1^2+3^2+5^2+7^2 \\\\\r\n                             &= 84.\r\n        \\end{align*}\r\n\\end{enumerate}\r\n\\vskip-3\\baselineskip\r\n}\r\n\\end{example}\r\n\r\n\r\nIt might seem odd to stress a new, concise way of writing summations only to write each term out as we add them up. It is. The following theorem gives some of the properties of summations that allow us to work with them without writing individual terms. The first three properties are typically referred to as the \\textit{linearity properties}. 
Examples will follow.\r\n\r\n\\begin{theorem}{Properties of Summations}{summation}\r\n{\\noindent\\begin{minipage}[t]{200pt}\\index{summation!properties}\r\n\\begin{enumerate}\r\n\t\t\\item\t\t$\\ds \\sum_{i=1}^n c = c\\cdot n$, where $c$ is a constant.\r\n\t\t\\item\t\t$\\ds \\sum_{i=m}^n (a_i\\pm b_i) = \\sum_{i=m}^n a_i \\pm \\sum_{i=m}^n b_i$\r\n\t\t\\item\t\t$\\ds \\sum_{i=m}^n c\\cdot a_i = c\\cdot\\sum_{i=m}^n a_i$\r\n\t\t\\item\t\t$\\ds \\sum_{i=m}^j a_i + \\sum_{i=j+1}^n  a_i = \\sum_{i=m}^n a_i$\r\n%\t\t\\item\t\t$\\ds \\sum_{i=1}^n i = \\frac{n(n+1)}2$\r\n%\t\t\\item\t\t$\\ds \\sum_{i=1}^n i^2 = \\frac{n(n+1)(2n+1)}6$\r\n%\t\t\\item\t\t$\\ds \\sum_{i=1}^n i^3 = \\left(\\frac{n(n+1)}2\\right)^2$\r\n\t\\end{enumerate}\r\n\\end{minipage}\r\n\\begin{minipage}[t]{200pt}\r\n\\begin{enumerate}\\addtocounter{enumi}{4}\r\n%\t\t\\item\t\t$\\ds \\sum_{i=1}^n c = c\\cdot n$, where $c$ is a constant.\r\n%\t\t\\item\t\t$\\ds \\sum_{i=m}^n (a_i\\pm b_i) = \\sum_{i=m}^n a_i \\pm \\sum_{i=m}^n b_i$\r\n%\t\t\\item\t\t$\\ds \\sum_{i=1}^n c\\cdot a_i = c\\cdot\\sum_{i=1}^n a_i$\r\n%\t\t\\item\t\t$\\ds \\sum_{i=m}^j a_i + \\sum_{i=j+1}^n  a_i = \\sum_{i=m}^n a_i$\r\n\t\t\\item\t\t$\\ds \\sum_{i=1}^n i = \\frac{n(n+1)}2$\r\n\t\t\\item\t\t$\\ds \\sum_{i=1}^n i^2 = \\frac{n(n+1)(2n+1)}6$\r\n\t\t\\item\t\t$\\ds \\sum_{i=1}^n i^3 = \\left(\\frac{n(n+1)}2\\right)^2$\r\n\t\\end{enumerate}\r\n\\end{minipage}\r\n}\r\n\r\n\\end{theorem}\r\n\r\n\\begin{example}{Evaluating summations using Theorem \\ref{thm:summation}}{ex_rie4}\r\n{Revisit Example \\ref{exa:ex_rie3} and, using Theorem \\ref{thm:summation}, evaluate $$\\sum_{i=1}^6 a_i = \\sum_{i=1}^6 (2i-1).$$\r\n}\r\n\\end{example}\r\n\\begin{solution}\r\n{\\begin{align*}\r\n\t\t\\sum_{i=1}^6 (2i-1) & = \\sum_{i=1}^6 2i - \\sum_{i=1}^6 (1)\\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t&=\t\\left(2\\sum_{i=1}^6 i \\right)- 6 \\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t&= 2\\frac{6(6+1)}{2} - 6 \\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t&= 42-6 = 36\r\n \\end{align*}\r\n We obtained the same answer without writing out all six terms. When dealing with small sizes of $n$, it may be faster to write the terms out by hand. However, Theorem \\ref{thm:summation} is incredibly important when dealing with large sums as we'll soon see.\r\n }\r\n \\end{solution}\r\n\r\n\r\n\\subsection{Approximating the Area of a Plane Region}\r\nAs we have observed above, if $ f(t) $ is a positive velocity function, then the area under the graph of $ f(x) $ over the interval $ [t_1,t_2] $ is the distance travelled over the same time interval.  Note that if $ f(t) $ is allowed to be negative, then the area provides the displacement over the interval.\\\\  \r\n\r\n\r\n\r\n\r\n% left hand sums\r\n%\\begin{tikzpicture}[/pgf/declare function={f=4/x;}]\r\n%\\begin{axis}[\r\n%        xmin=0,xmax=9,ymin=0,ymax=4,\r\n%    domain=0:10,\r\n%    samples=100,\r\n%    axis lines=middle\r\n%]\r\n%\\addplot [thick, red] {f};\r\n%\\addplot [\r\n%    black!80,fill=green,opacity=.3,\r\n%    left segments=7,\r\n%    left=1:8\r\n%] {f};\r\n%\\end{axis}\r\n%\\end{tikzpicture}\r\n\r\n\r\n\r\n\r\n\r\nFor the rest of this section, we assume that $f(x)$ is {\\bf{continuous and positive}}, so that the graph lies above the $x$-axis. Our goal is to compute the area ``under the graph\", that is, the area between the graph and the $x$-axis.  
As a first step, we approximate the area using rectangles.\\\\\r\n\r\nThere are three common ways to determine the height of these rectangles: the \\textbf{Right Hand Rule} (RHR), the \\textbf{Left Hand Rule} (LHR), and the \\textbf{Midpoint Rule} (MPR). The \\textbf{Right Hand Rule} says to evaluate the function at the right-hand endpoint of the subinterval and make the rectangle that height. \r\n\r\nThe \\textbf{Left Hand Rule} says the opposite: on each subinterval, evaluate the function at the left endpoint and make the rectangle that height. \r\n\r\nThe \\textbf{Midpoint Rule} says that on each subinterval, evaluate the function at the midpoint and make the rectangle that height. \r\n\r\n\r\nSuppose we wish to find the area under $y = x^2$ between $x = 0$ and $x = 1$. \r\n\r\n\\begin{tikzpicture}[/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle\r\n]\r\n\\addplot [thick, red, name path=A] {f};\r\n \\addplot [draw=none,name path=B] {0};     % \"fictional\" curve\r\n  \\addplot [green] fill between[of=A and B,soft clip={domain=0:1}]; % filling\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\r\n\r\n\r\n\r\nWe can first approximate the area using the RHR. Divide $[0,1]$ into three strips of width $\\frac{1}{3}$, and draw rectangles in those strips, the heights of which are the same as the height of the function at the right end of that strip. Four strips give a better approximation, and five give an even better one.\\\\\r\n\\begin{figure}\r\n% right hand sums\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n     xtick parsed={0, 1/3, 2/3, 1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=3,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n     xtick parsed={0, 1/4, 1/2, 3/4, 1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=4,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n     xtick parsed={0, 1/5, 2/5, 3/5, 4/5, 1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=5,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\r\n\\caption{Approximating the area under $ f(x)=x^2 $ using the RHR, $ n=3, 4,$ and $ 5 $ (right-hand) rectangles, respectively. 
\\label{fig:ApproxAreaRHR1}}\r\n\\end{figure}\r\n\r\n\r\n\r\nIf we use more and more rectangles we get better and better approximations.\r\n\r\n\\begin{figure}\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     ]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=10,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     ]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=20,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.75,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.1,ymin=0,ymax=1.5,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n     ]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    right segments=40,\r\n    right=0:1,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\r\n\\caption{Approximating the area under $ f(x)=x^2 $ using the RHR, $ n=10, 20,$ and $ 40 $ (right-hand) rectangles, respectively. \\label{fig:ApproxAreaRHR2}}\r\n\\end{figure}\r\n\r\n\r\n Alternatively, we could use the LHR to determine the heights of the rectangles.\r\n\r\n\\begin{figure}\r\n\\begin{tikzpicture}[scale=.7,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n    xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n         xtick parsed={0, 1/3, 2/3, 1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green, opacity=.2,\r\n    left segments=3,\r\n    left=0:1\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.7,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n    xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n         xtick parsed={0, 1/4, 1/2, 3/4,1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green, opacity=.2,\r\n    left segments=4,\r\n    left=0:1\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\begin{tikzpicture}[scale=.7,/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle,\r\n    xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n         xtick parsed={0, 1/5, 2/5, 3/5,4/5,1}\r\n]\r\n\\addplot [\r\n    black!80,fill=green, opacity=.2,\r\n    left segments=5,\r\n    left=0:1\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\caption{Approximating the area under $ f(x)=x^2 $ using the LHR, $ n=3, 4,$ and $ 5 $ (left-hand) rectangles, respectively. 
\\label{fig:ApproxAreaLHR}}\r\n\\end{figure}\r\n\r\n\r\nWe could also use the midpoint rule (MPR).\r\n\r\n\\begin{figure}\r\n% mid point\r\n\\begin{tikzpicture}[/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    midpoint segments=3,\r\n    midpoint=0:1,\r\n] {f};\r\n\\addplot [thick,red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n% mid point\r\n\\begin{tikzpicture}[/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    midpoint segments=4,\r\n    midpoint=0:1,\r\n] {f};\r\n\\addplot [thick,red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n% mid point\r\n\\begin{tikzpicture}[/pgf/declare function={f=x^2;}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=1.2,ymin=0,ymax=2,\r\n    domain=0:1.1,\r\n    samples=100,\r\n    axis lines=middle\r\n]\r\n\\addplot [\r\n    black!80,fill=green,opacity=.3,\r\n    midpoint segments=5,\r\n    midpoint=0:1,\r\n] {f};\r\n\\addplot [thick,red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\r\n\r\n\\caption{Approximating the area under $ f(x)=x^2 $ using the MPR, $ n=3, 4,$ and $ 5 $ (mid-point) rectangles, respectively. \\label{fig:ApproxAreaMP}}\r\n\\end{figure}\r\n\r\n\r\n\\begin{example}{Using the Left Hand, Right Hand and Midpoint Rules}{ex_rie2}\r\n \r\nApproximate the value of $\\int_0^4 (4x-x^2)\\ dx$ using the Left Hand Rule, the Right Hand Rule, and the Midpoint Rule, using 4 equally spaced subintervals.\r\n \r\n We break the interval $[0,4]$ into four subintervals. In Figure \\ref{fig:rie2a} we see 4 rectangles drawn on $f(x) = 4x-x^2$ using the Left Hand Rule. \r\n\r\n\\begin{minipage}[t]{\\linewidth}\r\n%\\begin{figure}\r\n\\begin{tikzpicture}[/pgf/declare function={f=(4*x-x^2);}]\r\n\\begin{axis}[\r\n        xmin=0,xmax=4.1,ymin=0,ymax=4.5,\r\n    domain=0:4,\r\n    samples=100,\r\n    axis lines=middle,\r\n    xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},\r\n         xtick parsed={0, 1, 2, 3, 4}\r\n]\r\n\\addplot [\r\n    black!80,fill=green, opacity=.2,\r\n    left segments=4,\r\n    left=0:4,\r\n] {f};\r\n\\addplot [thick, red] {f};\r\n\\end{axis}\r\n\\end{tikzpicture}\r\n\\caption{Approximating the area under $f(x)= 4x-x^2$ on $ [0,4] $ using the Left Hand Rule \\label{fig:rie2a}}\r\n\\end{minipage}\r\n%\\end{figure}\r\n\r\n\\noindent Note how in the first subinterval, $[0,1]$, the rectangle has height $f(0)=0$. We add up the areas of each rectangle (height$\\times$ width) for our Left Hand Rule approximation:\r\n\t\\begin{align*} f(0)\\cdot 1 + f(1)\\cdot 1+ f(2)\\cdot 1+f(3)\\cdot 1 &=\\\\\r\n\t0+3+4+3&= 10.\r\n\t\\end{align*}\r\n\t\r\nFigure \\ref{fig:rie2b} shows 4 rectangles drawn under $f$ using the Right Hand Rule; note how the $[3,4]$ subinterval has a rectangle of height 0. 
\begin{minipage}[t]{\linewidth}
\begin{tikzpicture}[/pgf/declare function={f=4*x-x^2;}]
\begin{axis}[
        xmin=0,xmax=4.1,ymin=0,ymax=4.5,
            domain=0:4,
            samples=100,
            axis lines=middle,
            xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},
                 xtick parsed={0, 1, 2, 3, 4}
]
\addplot [
    black!80,fill=green, opacity=.2,
    right segments=4,
    right=0:4
] {f};
\addplot [thick, red] {f};
\end{axis}
\end{tikzpicture}
\caption{Approximating the area under $f(x)= 4x-x^2$ on $ [0,4] $ using the Right Hand Rule \label{fig:rie2b}}
\end{minipage}

\noindent In this example, these rectangles appear to be the mirror images of those found in Figure \ref{fig:rie2a}. (This is because of the symmetry of our shaded region.) Our approximation gives the same answer as before, though calculated a different way:
	\begin{align*} f(1)\cdot 1 + f(2)\cdot 1+ f(3)\cdot 1+f(4)\cdot 1 &=\\
	3+4+3+0&= 10.
	\end{align*}

Figure \ref{fig:rie2c} shows 4 rectangles drawn under $f$ using the Midpoint Rule.

\begin{minipage}[t]{\linewidth}
\begin{tikzpicture}[/pgf/declare function={f=4*x-x^2;}]
\begin{axis}[
       xmin=0,xmax=4.1,ymin=0,ymax=4.5,
           domain=0:4,
           samples=100,
           axis lines=middle,
           xticklabel style={/pgf/number format/frac, /pgf/number format/frac shift=2},
                xtick parsed={0, 1, 2, 3, 4}
]
\addplot [
    black!80,fill=green, opacity=.2,
    midpoint segments=4,
    midpoint=0:4
] {f};
\addplot [thick, red] {f};
\end{axis}
\end{tikzpicture}
\caption{Approximating the area under $f(x)= 4x-x^2$ on $ [0,4] $ using the Midpoint Rule \label{fig:rie2c}}
\end{minipage}

\noindent This gives an approximation of
\begin{align*} f(0.5)\cdot 1 + f(1.5)\cdot 1+ f(2.5)\cdot 1+f(3.5)\cdot 1 &=\\
	1.75+3.75+3.75+1.75&= 11.
	\end{align*}
Our three methods produced two distinct approximations of the area under $f(x)=4x-x^2$ on $[0,4]$: 10 and 11.
\end{example}

% % % % % % % % % % % % % % % % % % % % % % % % %

\subsection{Riemann Sums}

For now, we continue to focus on determining an accurate estimate of area through the use of a sum of the areas of rectangles, doing so in the setting where $f(x) \ge 0$ on $[a,b]$.  Throughout, unless otherwise indicated, we also assume that $f$ is continuous on $[a,b]$.

The first choice we make in any such approximation is the number of rectangles.
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_2_Interval}
\caption{Subdividing the interval $[a,b]$ into $n$ subintervals of equal length $\triangle x$.} \label{F:4.2.Interval}
\end{center}
\end{figure}
If we say that the total number of rectangles is $n$, and we desire $n$ rectangles of equal width to subdivide the interval $[a,b]$, then each rectangle must have width $\triangle x = \frac{b-a}{n}$. We observe further that $x_1 = x_0 + \triangle x$, $x_2 = x_0 + 2 \triangle x$, and thus in general $x_{i} = a + i\triangle x,$ as pictured in Figure~\ref{F:4.2.Interval}.

We use each subinterval $[x_i, x_{i+1}]$ as the base of a rectangle, and next must choose how to decide the height of the rectangle that will be used to approximate the area under $y = f(x)$ on the subinterval.
The three standard choices are the left endpoint, right endpoint, or the midpoint of each.  These are precisely the options encountered in the previous section.  We next explore how these choices can be reflected in sigma notation.\r\n\r\nIf we now consider an arbitrary positive function $f$ on $[a,b]$ with the interval subdivided as shown in Figure~\\ref{F:4.2.Interval}, and choose to use left endpoints, then on each interval of the form $[x_{i}, x_{i+1}]$, the area of the rectangle formed is given by\r\n$$A_{i+1} = f(x_i) \\cdot \\triangle x,$$\r\nas seen in Figure~\\ref{F:4.2.LeftSum}.\r\n\\begin{figure}[h]\r\n\\begin{center}\r\n\\includegraphics{figures/4_2_LeftSum}\r\n\\caption{Subdividing the interval $[a,b]$ into $n$ subintervals of equal length $\\triangle x$ and approximating the area under $y = f(x)$ over $[a,b]$ using left rectangles.} \\label{F:4.2.LeftSum}\r\n\\end{center}\r\n\\end{figure}\r\nIf we let $L_n$ denote the sum of the areas of rectangles whose heights are given by the function value at each respective left endpoint, then we see that\r\n\\begin{eqnarray*}\r\nL_n & = & A_1 + A_2 + \\cdots + A_{i+1} + \\cdots + A_n \\\\\r\n\t& = & f(x_0) \\cdot \\triangle x + f(x_1) \\cdot \\triangle x + \\cdots + f(x_i) \\cdot \\triangle x + \\cdots + f(x_{n-1}) \\cdot \\triangle x.\r\n\\end{eqnarray*}\r\nIn the more compact sigma notation, we have \r\n$$L_n = \\sum_{i = 0}^{n-1} f(x_i) \\triangle x.$$\r\nNote particularly that since the index of summation begins at $0$ and ends at $n-1$, there are indeed $n$ terms in this sum.  We call $L_n$ the \\emph{left Riemann sum} \\index{Riemann sum} \\index{Riemann sum!left} for the function $f$ on the interval $[a,b]$.\r\n\r\nThere are now two fundamental issues to explore:  the number of rectangles we choose to use and the selection of the pattern by which we identify the height of each rectangle.  It is best to explore these choices dynamically, and the applet\\footnote{Marc Renault, Geogebra Calculus Applets.} found at \\href{http://gvsu.edu/s/a9}{\\texttt{http://gvsu.edu/s/a9}} is a particularly useful one.  There we see\r\n\\begin{figure}[h]\r\n\\begin{center}\r\n\\scalebox{0.4}{\\includegraphics{figures/4_2_RenaultAppletRS.pdf}}\r\n\\caption{A snapshot of the applet found at \\href{http://gvsu.edu/s/a9}{\\texttt{http://gvsu.edu/s/a9}}.} \\label{F:4.2.RenaultAppletRS}\r\n\\end{center}\r\n\\end{figure}\r\nthe image shown in Figure~\\ref{F:4.2.RenaultAppletRS}, but with the opportunity to adjust the slider bars for the left endpoint and the number of subintervals.  By moving the sliders, we can see how the heights of the rectangles change as we consider left endpoints, midpoints, and right endpoints, as well as the impact that a larger number of narrower rectangles has on the approximation of the exact area bounded by the function and the horizontal axis.  
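The formula for $L_n$ translates directly into code.  The following short Python sketch (an illustration only; the helper name \texttt{left\_riemann\_sum} is ours, not from any particular library) evaluates $L_n$ exactly as defined above:

\begin{verbatim}
def left_riemann_sum(f, a, b, n):
    """L_n = sum_{i=0}^{n-1} f(x_i) * dx, with x_i = a + i*dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# Example: f(x) = x^2 on [0, 1]; the exact area is 1/3.
print(left_riemann_sum(lambda x: x**2, 0, 1, 10))    # about 0.285
print(left_riemann_sum(lambda x: x**2, 0, 1, 100))   # about 0.32835
\end{verbatim}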
To see how the Riemann sums for right endpoints and midpoints are constructed, we consider Figure~\ref{F:4.2.RightMidSum}.
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_2_RightMidSum}
\caption{Riemann sums using right endpoints and midpoints.} \label{F:4.2.RightMidSum}
\end{center}
\end{figure}
For the sum with right endpoints, we see that the area of the rectangle on an arbitrary interval $[x_i, x_{i+1}]$ is given by
$$B_{i+1} = f(x_{i+1}) \cdot \triangle x,$$
so that the sum of all such areas of rectangles is given by
\begin{eqnarray*}
R_n & = & B_1 + B_2 + \cdots + B_{i+1} + \cdots + B_n \\
	& = &  f(x_1) \cdot \triangle x + f(x_2) \cdot \triangle x + \cdots + f(x_{i+1}) \cdot \triangle x + \cdots + f(x_{n}) \cdot \triangle x \\
	& = & \sum_{i=1}^{n} f(x_i) \triangle x.
\end{eqnarray*}
We call $R_n$ the \emph{right Riemann sum} \index{Riemann sum!right} for the function $f$ on the interval $[a,b]$.  For the sum that uses midpoints, we introduce the notation
$$\overline{x}_{i+1} = \frac{x_{i} + x_{i+1}}{2}$$
so that $\overline{x}_{i+1}$ is the midpoint of the interval $[x_i, x_{i+1}]$.  For instance, for the rectangle with area $C_1$ in Figure~\ref{F:4.2.RightMidSum}, we now have
$$C_1 = f(\overline{x}_1) \cdot \triangle x.$$
Hence, the sum of all the areas of rectangles that use midpoints is
\begin{eqnarray*}
M_n & = & C_1 + C_2 + \cdots + C_{i+1} + \cdots + C_n \\
	& = &  f(\overline{x}_1) \cdot \triangle x + f(\overline{x}_2) \cdot \triangle x + \cdots + f(\overline{x}_{i+1}) \cdot \triangle x + \cdots + f(\overline{x}_{n}) \cdot \triangle x \\
	& = & \sum_{i=1}^{n} f(\overline{x}_i) \triangle x,
\end{eqnarray*}
and we say that $M_n$ is the \emph{middle Riemann sum} \index{Riemann sum!middle} for $f$ on $[a,b]$.

When $f(x) \ge 0$ on $[a,b]$, each of the Riemann sums $L_n$, $R_n$, and $M_n$ provides an estimate of the area under the curve $y = f(x)$ over the interval $[a,b]$; momentarily, we will discuss the meaning of Riemann sums in the setting when $f$ is sometimes negative.  We also recall that in the context of a nonnegative velocity function $y = v(t)$, the corresponding Riemann sums are approximating the distance traveled on $[a,b]$ by the moving object with velocity function $v$.

There is a more general way to think of Riemann sums, and that is to not restrict the choice of where the function is evaluated to determine the respective rectangle heights.  That is, rather than saying we'll always choose left endpoints, or always choose midpoints, we simply say that a point $x_{i+1}^*$ will be selected at random in the interval $[x_i, x_{i+1}]$ (so that $x_i \le x_{i+1}^* \le x_{i+1}$), in which case the Riemann sum is given by
$$f(x_1^*) \cdot \triangle x + f(x_2^*) \cdot \triangle x + \cdots + f(x_{i+1}^*) \cdot \triangle x + \cdots + f(x_n^*) \cdot \triangle x = \sum_{i=1}^{n} f(x_i^*) \triangle x.$$
At \href{http://gvsu.edu/s/a9}{\texttt{http://gvsu.edu/s/a9}}, the applet noted earlier and referenced in Figure~\ref{F:4.2.RenaultAppletRS}, by unchecking the ``relative'' box at the top left, and instead checking ``random,'' we can easily explore the effect of using random point locations in subintervals on a given Riemann sum.  In computational practice, we most often use $L_n$, $R_n$, or $M_n$, while the random Riemann sum is useful in theoretical discussions.
\subsection*{When the function is sometimes negative}

For a Riemann sum such as
$$L_n = \sum_{i=0}^{n-1} f(x_i) \triangle x,$$
we can of course compute the sum even when $f$ takes on negative values.  We know that when $f$ is positive on $[a,b]$, the corresponding left Riemann sum $L_n$ estimates the area bounded by $f$ and the horizontal axis over the interval.
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_2_NegF}
\caption{At left and center, two left Riemann sums for a function $f$ that is sometimes negative; at right, the areas bounded by $f$ on the interval $[a,d]$.} \label{F:4.2.NegF}
\end{center}
\end{figure}
For a function such as the one pictured in Figure~\ref{F:4.2.NegF}, where in the first figure a left Riemann sum is being taken with 12 subintervals over $[a,d]$, we observe that the function is negative on the interval $b \le x \le c$, and so for the four left endpoints that fall in $[b,c]$, the terms $f(x_i) \triangle x$ have negative function values.  This means that those four terms in the Riemann sum produce an estimate of the \emph{opposite} of the area bounded by $y = f(x)$ and the $x$-axis on $[b,c]$.

In Figure~\ref{F:4.2.NegF}, we also see evidence that by increasing the number of rectangles used in a Riemann sum, the approximation of the area (or the opposite of the area) bounded by a curve appears to improve.  For instance, in the middle graph, we use $ 24 $ left rectangles, and from the shaded areas, it appears that we have decreased the error from the approximation that uses 12.  When we proceed to the next section, we will discuss the natural idea of letting the number of rectangles in the sum increase without bound.

For now, it is most important for us to observe that, in general, any Riemann sum of a continuous function $f$ on an interval $[a,b]$ approximates the difference between the area that lies above the horizontal axis on $[a,b]$ and under $f$ and the area that lies below the horizontal axis on $[a,b]$ and above $f$.  In the notation of Figure~\ref{F:4.2.NegF}, we may say that
$$L_{24} \approx A_1 - A_2 + A_3,$$
where $L_{24}$ is the left Riemann sum using $ 24 $ subintervals shown in the middle graph, and $A_1$ and $A_3$ are the areas of the regions where $f$ is positive on the interval of interest, while $A_2$ is the area of the region where $f$ is negative.  We will also call the quantity $A_1 - A_2 + A_3$ the \emph{net signed area} \index{net signed area} bounded by $f$ over the interval $[a,d]$, where by the phrase ``signed area'' we indicate that we are attaching a minus sign to the areas of regions that fall below the horizontal axis.

Finally, we recall that in the context where the function $f$ represents the velocity of a moving object, the total sum of the areas bounded by the curve tells us the total distance traveled over the relevant time interval, while the total net signed area bounded by the curve computes the object's change in position on the interval.

%\nin \framebox{\hspace*{3 pt}
%\parbox{6.25 in}{
Summary:
\begin{itemize}
\item A Riemann sum is simply a sum of products of the form $f(x_i^*) \triangle x$ that estimates the area between a positive function and the horizontal axis over a given interval.
If the function is sometimes negative on the interval, the Riemann sum estimates the difference between the areas that lie above the horizontal axis and those that lie below the axis.\r\n\\item The three most common types of Riemann sums are left, right, and middle sums, plus we can also work with a more general, random Riemann sum.  The only difference among these sums is the location of the point at which the function is evaluated to determine the height of the rectangle whose area is being computed in the sum.  For a left Riemann sum, we evaluate the function at the left endpoint of each subinterval, while for right and middle sums, we use right endpoints and midpoints, respectively.\r\n\\item The left, right, and middle Riemann sums are denoted $L_n$, $R_n$, and $M_n$, with formulas\r\n$$L_n = f(x_0) \\triangle x + f(x_1) \\triangle x + \\cdots + f(x_{n-1}) \\triangle x = \\sum_{i = 0}^{n-1} f(x_i) \\triangle x,$$\r\n$$R_n = f(x_1) \\triangle x + f(x_2) \\triangle x + \\cdots + f(x_{n}) \\triangle x = \\sum_{i = 1}^{n} f(x_i) \\triangle x,$$\r\n$$M_n = f(\\overline{x}_1) \\triangle x + f(\\overline{x}_2) \\triangle x + \\cdots + f(\\overline{x}_{n}) \\triangle x = \\sum_{i = 1}^{n} f(\\overline{x}_i) \\triangle x,$$\r\nwhere $x_0 = a$, $x_i = a + i\\triangle x$, and $x_n = b$, using $\\triangle x = \\frac{b-a}{n}$.  For the midpoint sum, $\\overline{x}_{i} = (x_{i-1} + x_i)/2$.\r\n\\end{itemize}\r\n\r\n\\subsection{The Definite Integral}\r\n\r\nIn the previous examples it appears that as the number of rectangles got larger and larger, the values of $L_n$, $M_n$, and $R_n$ all grew closer and closer to the same value.  It turns out that this occurs for any continuous function on an interval $[a,b]$, and even more generally for a Riemann sum using any point $x_{i+1}^*$ in the interval $[x_i, x_{i+1}]$.  Said differently, as we let $n \\to \\infty$, it doesn't really matter where we choose to evaluate the function within a given subinterval, because\r\n$$\\lim_{n \\to \\infty} L_n = \\lim_{n \\to \\infty} R_n = \\lim_{n \\to \\infty} M_n = \\lim_{n \\to \\infty} \\sum_{i=1}^{n} f(x_i^*) \\triangle x.$$  \r\nThat these limits always exist (and share the same value) for a continuous\\footnote{It turns out that a function need not be continuous in order to have a definite integral.  For our purposes, we assume that the functions we consider are continuous on the interval(s) of interest.  It is straightforward to see that any function that is piecewise continuous on an interval of interest will also have a well-defined definite integral.} function $f$ allows us to make the following definition.\r\n\\begin{definition} \\label{D:4.3.DefInt}\r\nThe \\emph{definite integral} of a continuous function $f$ on the interval $[a,b]$, denoted $\\ds \\int_a^b f(x) \\, dx$, is the real number given by\r\n$$\\int_a^b f(x) \\, dx = \\lim_{n \\to \\infty} \\sum_{i=1}^{n} f(x_i^*) \\triangle x,$$\r\nwhere $\\triangle x = \\frac{b-a}{n}$, $x_i = a + i\\triangle x$ (for $i = 0, \\ldots, n$), and $x_i^*$ satisfies $x_{i-1} \\le x_i^* \\le x_i$ (for $i = 1, \\ldots, n$).\r\n\\end{definition}\r\nWe call the symbol $\\int$ the \\emph{integral sign}\\index{integral sign}, the values $a$ and $b$ the \\emph{limits of integration}\\index{limits of integration}, and the function $f$ the \\emph{integrand}\\index{integrand}.  The process of determining the real number $\\int_a^b f(x) \\, dx$ is called \\emph{evaluating the definite integral}.  
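Before we explore what this number means, it is worth watching the limit at work numerically.  The following Python sketch (an illustration only) computes $L_n$, $R_n$, and $M_n$ for $f(x) = x^2$ on $[0,1]$ and shows all three closing in on a single value, namely $1/3$:

\begin{verbatim}
f = lambda x: x**2
for n in (10, 100, 1000):
    dx = 1 / n
    L = sum(f(i * dx) for i in range(n)) * dx
    R = sum(f((i + 1) * dx) for i in range(n)) * dx
    M = sum(f((i + 0.5) * dx) for i in range(n)) * dx
    print(n, L, R, M)
# At n = 1000: L is about 0.33283, R about 0.33383, M about 0.33333.
\end{verbatim}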
While we will come to understand that there are several different interpretations of the value of the definite integral, for now the most important is that $\\int_a^b f(x) \\, dx$ measures the net signed area bounded by $y = f(x)$ and the $x$-axis on the interval $[a,b]$.  \r\n\\begin{figure}[h]\r\n\\begin{center}\r\n\\includegraphics{figures/4_3_DefIntInterp}\r\n\\caption{A continuous function $f$ on the interval $[a,d]$.} \\label{F:4.3.DefIntInterp}\r\n\\end{center}\r\n\\end{figure}\r\nFor example, in the notation of the definite integral, if $f$ is the function pictured in Figure~\\ref{F:4.3.DefIntInterp} and $A_1$, $A_2$, and $A_3$ are the exact areas bounded by $f$ and the $x$-axis on the respective intervals $[a,b]$, $[b,c]$, and $[c,d]$, then\r\n$$\\int_a^b f(x) \\, dx = A_1, \\ \\int_b^c f(x) \\, dx = -A_2, \\ \\int_c^d f(x) \\, dx = A_3,$$\r\nand\r\n$$\\int_a^d f(x) \\, dx = A_1 - A_2 + A_3.$$\r\nWe can also use definite integrals to express the change in position and distance traveled by a moving object.  In the setting of a velocity function $v$ on an interval $[a,b]$, it follows from our work above and in preceding sections that the change in position, $s(b) - s(a)$, is given by\r\n$$s(b) - s(a) = \\int_a^b v(t) \\, dt.$$\r\nIf the velocity function is nonnegative on $[a,b]$, then $\\int_a^b v(t) \\,dt$ tells us the distance the object traveled.  When velocity is sometimes negative on $[a,b],$ the areas bounded by the function on intervals where $v$ does not change sign can be found using integrals, and the sum of these values will tell us the distance the object traveled. \r\n\r\nIf we wish to compute the value of a definite integral using the definition, we have to take the limit of a sum.  While this is possible to do in select circumstances, it is also tedious and time-consuming; moreover, computing these limits does not offer much additional insight into the meaning or interpretation of the definite integral.  Instead, in the next section, we will learn the Fundamental Theorem of Calculus, a result that provides a shortcut for evaluating a large class of definite integrals.  This will enable us to determine the exact net signed area bounded by a continuous function and the $x$-axis in many circumstances.\r\n\r\nFor now, our goal is to understand the meaning and properties of the definite integral, rather than how to actually compute its value using ideas in calculus.  Thus, we temporarily rely on the net signed area interpretation of the definite integral and observe that if a given curve produces regions whose areas we can compute exactly through known area formulas, we can thus compute the exact value of the integral.  
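Returning briefly to the velocity interpretation, a quick numerical check (a Python sketch added for illustration) confirms that a fine midpoint sum of a velocity function recovers the change in position.  For example, with $v(t) = t^2 + 2$ on $[0,2]$, the exact change in position is $\int_0^2 (t^2+2)\,dt = 20/3$:

\begin{verbatim}
v = lambda t: t**2 + 2
a, b, n = 0, 2, 1000
dt = (b - a) / n
print(sum(v(a + (i + 0.5) * dt) for i in range(n)) * dt)
# about 6.66667, i.e. 20/3
\end{verbatim}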
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_3_TrapArea}
\caption{The area bounded by $f(x)=2x+1$ and the $x$-axis on the interval $[1,4]$.} \label{F:4.3.TrapArea}
\end{center}
\end{figure}
For instance, if we wish to evaluate the definite integral $\int_1^4 (2x+1) \, dx$, we can observe that the region bounded by this function and the $x$-axis is the trapezoid shown in Figure~\ref{F:4.3.TrapArea}, and by the known formula for the area of a trapezoid, its area is $A = \frac{1}{2}(3+9) \cdot 3 = 18$, so
$$\int_1^4 (2x+1) \, dx = 18.$$

\subsection*{Some properties of the definite integral}

With the perspective that the definite integral of a function $f$ over an interval $[a,b]$ measures the net signed area bounded by $f$ and the $x$-axis over the interval, we naturally arrive at several different standard properties of the definite integral.  In addition, it is helpful to remember that the definite integral is defined in terms of Riemann sums that fundamentally consist of the areas of rectangles.

If we consider the definite integral $\int_a^a f(x) \, dx$ for any real number $a$, it is evident that no area is being bounded because the interval begins and ends with the same point.  Hence,

\vspace*{5pt}
\noindent \framebox{\hspace*{3 pt}
\parbox{6.25 in}{
If $f$ is a continuous function and $a$ is a real number, then $\ds \int_a^a f(x) \,dx = 0.$
} \hspace*{3 pt}}
\vspace*{1pt}

\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_3_AdditiveProp}
\caption{The area bounded by $y=f(x)$ on the interval $[a,c]$.} \label{F:4.3.AdditiveProp}
\end{center}
\end{figure}

Next, we consider the results of subdividing a given interval. In Figure~\ref{F:4.3.AdditiveProp}, we see that
$$\int_a^b f(x) \, dx = A_1, \ \int_b^c f(x) \, dx = A_2, \ \mbox{and} \ \int_a^c f(x) \, dx = A_1 + A_2,$$
which is indicative of the following general rule.

\vspace*{5pt}
\noindent \framebox{\hspace*{3 pt}
\parbox{6.25 in}{
If $f$ is a continuous function and $a$, $b$, and $c$ are real numbers, then $$\ds \int_a^c f(x) \,dx = \int_a^b f(x) \,dx + \int_b^c f(x) \,dx.$$
} \hspace*{3 pt}}
\vspace*{1pt}

\noindent While this rule is most apparent in the situation where $a < b < c$, it in fact holds in general for any values of $a$, $b$, and $c$.  This result is connected to another property of the definite integral, which states that if we reverse the order of the limits of integration, we change the sign of the integral's value.

\vspace*{5pt}
\noindent \framebox{\hspace*{3 pt}
\parbox{6.25 in}{
If $f$ is a continuous function and $a$ and $b$ are real numbers, then $\ds \int_b^a f(x) \,dx = -\int_a^b f(x) \,dx.$
} \hspace*{3 pt}}
\vspace*{1pt}

\noindent This result makes sense because if we integrate from $a$ to $b$, then in the defining Riemann sum $\triangle x = \frac{b-a}{n}$, while if we integrate from $b$ to $a$, $\triangle x = \frac{a-b}{n} = -\frac{b-a}{n}$, and this is the only change in the sum used to define the integral.

There are two additional properties of the definite integral that we need to understand.  Recall that when we worked with derivative rules in Chapter~\ref{C:2}, we found that both the Constant Multiple Rule and the Sum Rule held.
The Constant Multiple Rule tells us that if $f$ is a differentiable function and $k$ is a constant, then
$$\frac{d}{dx} [kf(x)] = kf'(x),$$
and the Sum Rule states that if $f$ and $g$ are differentiable functions, then
$$\frac{d}{dx}[f(x) + g(x)] = f'(x) + g'(x).$$
These rules are useful because they enable us to deal individually with the simplest parts of certain functions and take advantage of the elementary operations of addition and multiplying by a constant.  They also tell us that the process of taking the derivative respects addition and multiplying by constants in the simplest possible way.

It turns out that similar rules hold for the definite integral.  First, let's consider the situation pictured in Figure~\ref{F:4.3.ConstMult},
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_3_ConstMult}
\caption{The areas bounded by $y = f(x)$ and $y = 2f(x)$ on $[a,b]$.} \label{F:4.3.ConstMult}
\end{center}
\end{figure}
where we examine the effect of multiplying a function by a factor of 2 on the area it bounds with the $x$-axis.  Because multiplying the function by 2 doubles its height at every $x$-value, we see that if we consider a typical rectangle from a Riemann sum, the difference in area comes from the changed height of the rectangle:  $f(x_i)$ for the original function, versus $2f(x_i)$ in the doubled function, in the case of a left sum.  Hence, in Figure~\ref{F:4.3.ConstMult}, we see that for the pictured rectangles with areas $A$ and $B$, it follows that $B = 2A$.  As this will happen in every such rectangle, regardless of the value of $n$ and the type of sum we use, we see that in the limit, the area of the red region bounded by $y = 2f(x)$ will be twice that of the area of the blue region bounded by $y = f(x)$.  As there is nothing special about the value $2$ compared to an arbitrary constant $k$, it turns out that the following general principle holds.

\vspace*{5pt}
\noindent \framebox{\hspace*{3 pt}
\parbox{6.25 in}{
{\bf Constant Multiple Rule:}\index{definite integral!constant multiple rule} If $f$ is a continuous function and $k$ is any real number, then $$\ds \int_a^b k \cdot f(x) \,dx = k \int_a^b f(x) \,dx.$$
} \hspace*{3 pt}}
\vspace*{1pt}

Finally, we see a similar situation geometrically with the sum of two functions $f$ and $g$.
\begin{figure}[h]
\begin{center}
\includegraphics{figures/4_3_Sum}
\caption{The areas bounded by $y = f(x)$ and $y = g(x)$ on $[a,b]$, as well as the area bounded by $y = f(x) + g(x)$.} \label{F:4.3.Sum}
\end{center}
\end{figure}
In particular, as shown in Figure~\ref{F:4.3.Sum}, if we take the sum of two functions $f$ and $g$, at every point in the interval, the height of the function $f+g$ is given by $(f+g)(x_i) = f(x_i) + g(x_i)$, which is the sum of the individual function values of $f$ and $g$ (taken at left endpoints).  Hence, for the pictured rectangles with areas $A$, $B$, and $C$, it follows that $C = A + B$, and because this will occur for every such rectangle, in the limit the area of the gray region will be the sum of the areas of the blue and red regions.
Stated in terms of definite integrals, we have the following general rule.

\vspace*{5pt}
\noindent \framebox{\hspace*{3 pt}
\parbox{6.25 in}{
{\bf Sum Rule:}\index{definite integral!sum rule} If $f$ and $g$ are continuous functions, then $$\ds \int_a^b [f(x) + g(x)] \,dx = \int_a^b f(x) \,dx + \int_a^b g(x) \,dx.$$
} \hspace*{3 pt}}
\vspace*{1pt}

More generally, the Constant Multiple and Sum Rules can be combined to make the observation that for any continuous functions $f$ and $g$ and any constants $c$ and $k$,
$$\ds \int_a^b [c f(x) \pm k g(x)] \,dx = c \int_a^b f(x) \,dx \pm k \int_a^b g(x) \,dx.$$

In summary, we have the following:

\begin{formulabox}[Properties of Definite Integrals]
Some properties are as follows:
$$\mbox{Order of limits matters:}\qquad\int_a^b f(x)\,dx=-\int_b^a f(x)\,dx$$
$$\mbox{If interval is empty, integral is zero:}\qquad\int_a^a f(x)\,dx=0$$
$$\mbox{Constant Multiple Rule:}\qquad\int_a^b cf(x)\,dx=c\int_a^b f(x)\,dx$$
$$\mbox{Sum/Difference Rule:}\qquad\int_a^b [f(x)\pm g(x)]\,dx=\int_a^b f(x)\,dx\pm\int_a^b g(x)\,dx$$
$$\mbox{Can split up interval $[a,b]=[a,c]\cup[c,b]$:}\qquad\int_a^b f(x)\,dx=\int_a^c f(x)\,dx+\int_c^b f(x)\,dx$$
$$\mbox{The variable does not matter:}\qquad\int_a^b f(x)\,dx=\int_a^b f(t)\,dt$$
\end{formulabox}

The reason for the last property is that a definite integral is a \ifont{number}, not a function, so the variable is just a placeholder that won't appear in the final answer.

Some additional properties are \ifont{comparison} types of properties.

\begin{formulabox}[Comparison Properties of Definite Integrals]
$$\mbox{If $f(x)\geq 0$ for $x\in[a,b]$, then:}\qquad\int_a^b f(x)\,dx\geq 0.$$
$$\mbox{If $f(x)\geq g(x)$ for $x\in[a,b]$, then:}\qquad\int_a^b f(x)\,dx\geq \int_a^b g(x)\,dx.$$
$$\mbox{If $m\leq f(x)\leq M$ for $x\in[a,b]$, then:}\qquad m(b-a)\leq \int_a^b f(x)\,dx\leq M(b-a).$$
\end{formulabox}

\begin{example}{Properties of Definite Integrals}{PropertiesDefiniteIntegrals}
Suppose $\ds{\int_a^b f(x)\,dx=7}$ and $\ds{\int_a^b g(x)\,dx=3}$. Find:
\begin{multicols}{2}
\begin{enumerate}
	\item	$\ds\int_a^b [2f(x)-3g(x)]\,dx$.
	\item	$\ds\int_{b}^{a} 2g(x)\,dx$.
	\item	$\ds\int_a^a f(x)\cdot g(x)\,dx$.
	\item	$\ds\int_a^c f(x)\,dx+\int_c^b f(x)\,dx$.
\end{enumerate}
\end{multicols}
\vspace{5mm}
\end{example}
\begin{solution}
\begin{enumerate}
	\item	$\ds\int_a^b [2f(x)-3g(x)]\,dx=\ds 2\int_a^b f(x)\,dx-3\int_a^b g(x)\,dx=2(7)-3(3)=5$.
	\item	$\ds\int_{b}^{a} 2g(x)\,dx=\ds -2\int_{a}^{b} g(x)\,dx=-2(3)=-6$.
	\item	$\ds\int_a^a f(x)\cdot g(x)\,dx=0$.
	\item	$\ds\int_a^c f(x)\,dx+\int_c^b f(x)\,dx=\ds\int_a^b f(x)\,dx=7$.
\end{enumerate}
\end{solution}


\subsection{How the definite integral is connected to a function's average value} \index{average value of a function}

One of the most valuable applications of the definite integral is that it provides a way to meaningfully discuss the average value of a function, even for a function that takes on infinitely many values.
Recall that if we wish to take the average of $n$ numbers $y_1$, $y_2$, $\\ldots$, $y_n$, we do so by computing\r\n$$\\mbox{Avg} = \\frac{y_1 + y_2 + \\cdots + y_n}{n}.$$\r\n\r\nSince integrals arise from Riemann sums in which we add $n$ values of a function, it should not be surprising that evaluating an integral is something like averaging the output values of a function.  Consider, for instance, the right Riemann sum $R_n$ of a function $f$, which is given by\r\n$$R_n = f(x_1) \\triangle x + f(x_2) \\triangle x + \\cdots + f(x_n) \\triangle x = (f(x_1) + f(x_2) + \\cdots + f(x_n))\\triangle x.$$\r\nSince $\\triangle x = \\frac{b-a}{n}$, we can thus write \r\n\\begin{equation} \\label{E:RAvg}\r\nR_n = (f(x_1) + f(x_2) + \\cdots + f(x_n))\\cdot \\frac{b-a}{n} = (b-a) \\frac{f(x_1) + f(x_2) + \\cdots + f(x_n)}{n}.\r\n\\end{equation}\r\nHere, we see that the right Riemann sum with $n$ subintervals is the length of the interval $(b-a)$ times the average of the $n$ function values found at the right endpoints.  And just as with our efforts to compute area, we see that the larger the value of $n$ we use, the more accurate our average of the values of $f$ will be.  Indeed, we will define the average value of $f$ on $[a,b]$ to be \r\n$$f_{\\mbox{\\tiny{AVG}}[a,b]} = \\lim_{n \\to \\infty} \\frac{f(x_1) + f(x_2) + \\cdots + f(x_n)}{n}.$$  But we also know that for any continuous function $f$ on $[a,b]$, taking the limit of a Riemann sum leads precisely to the definite integral.  That is, $\\ds \\lim_{n \\to \\infty} R_n = \\int_a^b f(x) \\, dx$, and thus taking the limit as $n \\to \\infty$ in Equation~(\\ref{E:RAvg}), we have that\r\n\\begin{equation} \\label{E:RAvg2}\r\n\\int_a^b f(x) \\, dx = (b-a) \\cdot f_{\\mbox{\\tiny{AVG}}[a,b]}.\r\n\\end{equation}\r\nSolving Equation~(\\ref{E:RAvg2}) for $f_{\\mbox{\\tiny{AVG}}[a,b]}$, we have the following general principle.\r\n\r\n\\begin{definition}{The Average Value of a Function}{avg_value}\r\nIf $f$ is a continuous function on $[a,b]$, then its average value on $[a,b]$ is given by the formula\r\n$$f_{\\mbox{\\tiny{AVG}}[a,b]} = \\frac{1}{b-a} \\cdot \\int_a^b f(x) \\, dx.$$\r\n\\end{definition}\r\n\r\nObserve that Equation~(\\ref{E:RAvg2}) tells us another way to interpret the definite integral:  the definite integral of a function $f$ from $a$ to $b$ is the length of the interval $(b-a)$ times the average value of the function on the interval.  In addition, Equation~(\\ref{E:RAvg2}) has a natural visual interpretation when the function $f$ is nonnegative on $[a,b]$.  \r\n\\begin{figure}[h]\r\n\\begin{center}\r\n\\includegraphics{figures/4_3_AvgVal}\r\n\\caption{A function $y = f(x)$, the area it bounds, and its average value on $[a,b]$.} \\label{F:4.3.AvgVal}\r\n\\end{center}\r\n\\end{figure}\r\nConsider Figure~\\ref{F:4.3.AvgVal}, where we see at left the shaded region whose area is $\\int_a^b f(x) \\, dx$, at center the shaded rectangle whose dimensions are $(b-a)$ by $f_{\\mbox{\\tiny{AVG}}[a,b]}$, and at right these two figures superimposed.  Specifically, note that in dark green we show the horizontal line $y = f_{\\mbox{\\tiny{AVG}}[a,b]}$.  Thus, the area of the green rectangle is given by $(b-a) \\cdot f_{\\mbox{\\tiny{AVG}}[a,b]}$, which is precisely the value of $\\int_a^b f(x) \\, dx$.  Said differently, the area of the blue region in the left figure is the same as that of the green rectangle in the center figure; this can also be seen by observing that the areas $A_1$ and $A_2$ in the rightmost figure appear to be equal.  
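The same relationship is easy to test numerically.  In the following Python sketch (an added illustration), a fine midpoint approximation of the integral, divided by the length of the interval, recovers the average value; for $f(x)=x^2$ on $[0,3]$ the exact average is $\frac{1}{3}\int_0^3 x^2\,dx = 3$:

\begin{verbatim}
f = lambda x: x**2
a, b, n = 0, 3, 1000
dx = (b - a) / n
integral = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
print(integral / (b - a))   # about 3.0
\end{verbatim}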
Ultimately, the average value of a function enables us to construct a rectangle whose area is the same as the value of the definite integral of the function on the interval.  The Java applet\footnote{David Austin, \href{http://gvsu.edu/s/5r}{\texttt{http://gvsu.edu/s/5r}}.} at \href{http://gvsu.edu/s/az}{\texttt{http://gvsu.edu/s/az}} provides an opportunity to explore how the average value of the function changes as the interval changes, through an image similar to that found in Figure~\ref{F:4.3.AvgVal}.


Summary:

\begin{itemize}
\item Any Riemann sum of a continuous function $f$ on an interval $[a,b]$ provides an estimate of the net signed area bounded by the function and the horizontal axis on the interval.  Increasing the number of subintervals in the Riemann sum improves the accuracy of this estimate, and letting the number of subintervals increase without bound results in the values of the corresponding Riemann sums approaching the exact value of the enclosed net signed area.
\item When we take the limit of Riemann sums just described, we arrive at what we call the definite integral of $f$ over the interval $[a,b]$.  In particular, the symbol $\int_a^b f(x) \, dx$ denotes the definite integral of $f$ over $[a,b]$, and this quantity is defined by the equation
$$\int_a^b f(x) \, dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*) \triangle x,$$
where $\triangle x = \frac{b-a}{n}$, $x_i = a + i\triangle x$ (for $i = 0, \ldots, n$), and $x_i^*$ satisfies $x_{i-1} \le x_i^* \le x_i$ (for $i = 1, \ldots, n$).
\item The definite integral $\int_a^b f(x) \,dx$ measures the exact net signed area bounded by $f$ and the horizontal axis on $[a,b]$; in addition, the value of the definite integral is related to what we call the average value of the function on $[a,b]$: $f_{\mbox{\tiny{AVG}}[a,b]} = \frac{1}{b-a} \cdot \int_a^b f(x) \, dx.$  In the setting where we consider the integral of a velocity function $v$, $\int_a^b v(t) \,dt$ measures the exact change in position of the moving object on $[a,b]$; when $v$ is nonnegative, $\int_a^b v(t) \,dt$ is the object's distance traveled on $[a,b]$.
\item The definite integral is a sophisticated sum, and thus has some of the same natural properties that finite sums have.  Perhaps most important of these is how the definite integral respects sums and constant multiples of functions, which can be summarized by the rule
$$\ds \int_a^b [c f(x) \pm k g(x)] \,dx = c \int_a^b f(x) \,dx \pm k \int_a^b g(x) \,dx$$
where $f$ and $g$ are continuous functions on $[a,b]$ and $c$ and $k$ are arbitrary constants.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Opensolutionfile{solutions}[ex]
\section*{Exercises for Section \ref{sec:AreaProb}}

\begin{enumialphparenastyle}

%%%%%%%%%%
\begin{ex}
 Suppose an object moves in a straight line so that its speed at
time $t$ is given by $v(t)=2t+2$, and that at $t=1$ the object is at
position 5. Find the position of the object at $t=2$.
\begin{sol}
 10
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
 Suppose an object moves in a straight line so that its speed at
time $t$ is given by $\ds v(t)=t^2+2$, and that at $t=0$ the object is at
position 5.
Find the position of the object at $t=2$.
\begin{sol}
 $35/3$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
  Find the area under $y=2x$ between $x=0$ and any
  positive value for $x$.
\begin{sol}
 $\ds x^2$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
  Find the area under $y=4x$ between $x=0$ and any
  positive value for $x$.
\begin{sol}
 $\ds 2x^2$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
  Find the area under $y=4x$ between $x=2$ and any
  positive value for $x$ bigger than 2.
\begin{sol}
 $\ds 2x^2-8$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
  Find the area under $y=4x$ between any two positive
  values for $x$, say $a<b$.
\begin{sol}
 $\ds 2b^2-2a^2$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
 Let $\ds f(x)=x^2+3x+2$. Approximate the area under the curve
between $x=0$ and $x=2$ using 4 rectangles and also using 8
rectangles (use left endpoints).
\begin{sol}
 4 rectangles: $41/4=10.25$,
8 rectangles: $183/16= 11.4375$
\end{sol}
\end{ex}

%%%%%%%%%%
\begin{ex}
 Let $\ds f(x)=x^2-2x+3$. Approximate the area under the curve
between $x=1$ and $x=3$ using 4 rectangles (use left endpoints).
\begin{sol}
 $ 23/4$
\end{sol}
\end{ex}

\end{enumialphparenastyle}
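For readers who wish to check the rectangle-approximation exercises by machine, the following Python sketch (an addition; it assumes left endpoints, the convention matching the printed solutions) reproduces the stated answers:

\begin{verbatim}
f = lambda x: x**2 + 3*x + 2
for n in (4, 8):
    dx = 2 / n
    print(sum(f(i * dx) for i in range(n)) * dx)   # 10.25, then 11.4375

g = lambda x: x**2 - 2*x + 3
dx = (3 - 1) / 4
print(sum(g(1 + i * dx) for i in range(4)) * dx)   # 5.75 = 23/4
\end{verbatim}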
{"text": "\\label{sec:image-fitting}\n\nIn this section, I discuss fitting a polygon to a binary classification\nimage, or a monochrome 'probability' image.\nIn the first case, the pixel values are either $0$ or $1$,\nindicating assignment to 1 of 2 possible classes.\nIn the monochrome case, I take the the value ($0--255$) to indicate\nthe probability that the pixel belong to class $1$,\nor some other score, such as the fraction of the pixel area that belongs\nto class $1$.\n\nThe basic idea is to optimize the polygon's vertex positions\nto maximize the agreement of the inside-outside pixel classification\ninduced by the polygon with the classification, or class score\ngiven by the image.\n\n\\subsection{Inside-Outside}\n\\label{sec:inside-outside}\n\nI use a non-standard definition of polygon inside-outside,\nbased on projection:\n\n\\begin{figure}[!htp]\n\\centering\n\\begin{verbatim}\n                            .p\n                           .\n                   e01    .\n             v0 o--------o\n                       v1 \\\n                           \\ e12\n                            \\\n                             o v2\n\\end{verbatim}\n\\caption{Convex vertex.\n\\label{fig:convex-vertex}}\n\\end{figure}\n\n\\begin{figure}[!htp]\n\\centering\n\\begin{verbatim}\n                             o v2\n                            /\n                           / e12\n                          /\n             v0 o--------o v1\n                   e01    .\n                           .\n                            .p\n\\end{verbatim}\n\\caption{Concave vertex.\n\\label{fig:concave-vertex}}\n\\end{figure}\n\nSuppose we have a polygon, $\\M$, consisting of a set of vertices,\n$\\{\\v_i\\}$, and consistently directed edges $\\{\\e_{i,i+1}:(\\v_i,\\v_{i+1})\\}$.\nTo determine whether a point $\\p$ is inside or outside,\nfind the projection of (the closest point to) $\\p$ on the polygon.\nIf the projection is on a convex vertex (see \\ref{fig:convex-vertex}),\nthen the point is outside.\nIf the projection is on a concave vertex (see \\ref{fig:concave-vertex}),\nthen the point is inside.\nIf the projection is to the interior of edge $\\e_{i,i+1}$,\nthen the point is outside if the cross product\n$(\\v_i-\\p) \\times (\\v_{i+1}-\\p) > 0$.\n\nThe {\\em signed distance,} $\\d(\\p_i,\\M)$, from $\\p$ to the polygon $\\M$,\nis the distance to the closest point, times $-1$ if $\\p$ is inside $\\M$.\nI choose this sign so that the positive part of the signed distance\nis the same as the distance to the set corresponding to the polygon interior.\nCorrespondingly, I define a classified pixel's sign $\\sign(\\p)$ to be +1\nif the pixel is in class 0 and -1 if the pixel is in class 1,\nbecause it's most common to consider the interior of the shape\nto be class 1.\n\n\n\\subsection{Counting penalties}\n\\label{sec:counting-penalties}\n\n\n\\subsection{Signed distance penalties}\n\\label{sec:signed-distance-penalties}\n\nIn this section, I describe a class of penalties the depend on the\nsigned distance of a classified pixel to the polygon.\n\nThese penalties sum over (a sample of) the pixels ($\\p_i$) in the class image:\n\\begin{equation}\nf(\\M) = \\sum_{\\p_i} \\phi( \\sign(\\p_i) \\ast \\d(\\p_i,\\M) ) ,\n\\end{equation}\n\nThe function $\\phi(x)$ is typically zero for all $x \\le 0$,\nmonotone increasing in $x$, and bounded by $1$ as $x\\rightarrow\\infty$.\n$\\frac{d\\phi}{dx}\\mid_{x=0} = 0$ and $\\frac{d\\phi}{dx}\\mid_{x} \\rightarrow 0$\nas $x\\rightarrow\\infty$.\n\nExamples 
\begin{eqnarray}
\phi_s(x) & = 3 \left(\frac{x}{r}\right)^2 - 2 \left(\frac{x}{r}\right)^3 & {x \geq 0} \\
          & = 0 & {x \leq 0}
\end{eqnarray}

\begin{eqnarray}
\phi_g(x) & = 1 - e^{ \frac{-x^2}{2\pi r^2} } & x \geq 0 \\
          & = 0                               & x \leq 0 \nonumber
\end{eqnarray}
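As a concrete sketch of how such a penalty might be evaluated (an illustrative Python fragment; the clamping of $\phi_s$ at $x = r$, and the assumption that per-pixel signs and signed distances have already been computed, are additions, since the text above leaves those details open):

\begin{verbatim}
import numpy as np

def phi_s(x, r):
    # Smoothstep-style penalty: 0 for x <= 0, rising to 1 at x = r,
    # and held at 1 beyond r so that phi stays bounded by 1 (an assumption).
    u = np.clip(x / r, 0.0, 1.0)
    return 3 * u**2 - 2 * u**3

def penalty(signs, signed_dists, r):
    # f(M) = sum_i phi(sign(p_i) * d(p_i, M)); signed_dists holds
    # d(p_i, M) (negative inside the polygon), and signs is -1 for
    # class-0 pixels and +1 for class-1 pixels.
    return np.sum(phi_s(signs * signed_dists, r))

# Three pixels: class 1 well inside, class 1 slightly outside,
# class 0 inside (the last two are misclassified).
print(penalty(np.array([1, 1, -1]),
              np.array([-2.0, 0.5, -1.0]), r=1.0))   # 0 + 0.5 + 1.0 = 1.5
\end{verbatim}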
{"text": "\\documentclass{homework}\n\\course{Math 5522H}\n\\author{Jim Fowler}\n\\input{preamble}\n\\DeclareMathOperator{\\Res}{Res}\n\n\\begin{document}\n\\maketitle\n\n\\begin{inspiration}\n  Luck is the residue of design\n  \\byline{Branch Rickey}\n\\end{inspiration}\n\n\\section{Terminology}\n\n\\begin{problem}\n  Define the \\textbf{residue} of $f$ at the point $z$, which we write $\\Res(f,z)$.\n\\end{problem}\n\n\\section{Numericals}\n\n\\begin{problem}\n  Evaluate $\\Res(f,z)$ for the function $f(z) = \\displaystyle\\frac{e^z}{z^2-1}$.\n\\end{problem}\n\n\\begin{problem}\\label{residues-all-one}Evaluate $\\Res(f,z)$ for the\n  function $f(z) = \\pi \\cot (\\pi z)$.\n\\end{problem}\n\n\\begin{problem}\\label{residue-coth}Compute $\\Res(f,\\pm bi)$ for the\n  function\n  \\[\n    f(z) = \\frac{\\pi \\cot(\\pi z)}{z^2 + b^2}.\n  \\]\n\\end{problem}\n\n\\begin{problem}\n  Evaluate the integrals\n  \\[\n    \\int_{-\\infty}^\\infty \\frac{\\sin x}{1+x^2} \\, dx \\mbox{ and }\n    \\int_{-\\infty}^\\infty \\frac{\\cos x}{1+x^2} \\, dx.\n  \\]\n\\end{problem}\n\n\\begin{problem}\\label{integral-for-euler-reflection}For a real number\n  $\\lambda \\in (0,1)$, evaluate\n  \\[\n    \\int_{-\\infty}^\\infty \\frac{e^{\\lambda x}}{1 + e^x} \\, dx.\n  \\]\n\\end{problem}\n\n\\begin{problem}\n  Evaluate the integral $\\displaystyle\\int_{0}^{2\\pi} \\frac{1}{3 + \\sin^2 x} \\, dx$.\n\\end{problem}\n\n\\begin{problem}\n  Evaluate the integral $\\displaystyle \\int_0^\\pi \\log \\sin x \\, dx$.\n\\end{problem}\n\n\\begin{problem}\n  Evaluate the integral $\\displaystyle\\int_{-\\infty}^{\\infty} \\frac{1-x}{1-x^7} \\, dx$.\n\\end{problem}\n\n\\begin{problem}\n  Show that $f(z) = z^5 + 15z - 1$ has five zeros in $B_2(0)$, and one zero in $B_{1/15}(0)$.\n\\end{problem}\n\n\\begin{problem}\\label{form-at-infinity}Set $w = 1/z$ and compute $f(w) \\, dw$ in terms of $f(z) \\, dz$.\n\\end{problem}\n\n\\section{Exploration}\n\n\\begin{problem}\n  Fix $w \\in \\R$ with $w>0$.  By the intermediate value theorem, the\n  polynomial $f(z) = z^4 + 4w^3 z - 1$ has a real root in the interval\n  $(-\\infty,0)$ and another in $(0,1)$, along with two complex roots\n  $a \\pm bi$.  Use Gauss-Lucas (\\ref{gauss-lucas}) and Rouch\\'e's\n  (\\ref{rouches-theorem}) theorem to describe a subset of $\\C$\n  containing $a\\pm bi$.\n\\end{problem}\n\n\\begin{problem}\n  What is the correct definition of $\\Res(f,\\infty)$?  Generally, we\n  would shift the viewport by replacing $f$ with the function\n  $g(z) = f(1/z)$ and then we study $g$ near $z = 0$ in order to\n  investigate ``$f$ near $\\infty$.''  Does \\ref{form-at-infinity} help\n  here?\n\\end{problem}\n\n\\begin{problem}\\label{rouches-theorem}Suppose $U$ is an open set containing the closed disk $D_r(z_0)$ and\n  $f, g : U \\to \\C$ are holomorphic functions satisfying\n  \\[\n    \\abs{f(z)} > \\abs{g(z)}\n  \\]\n  for $z \\in \\partial D_r(z_0)$.  Prove \\textbf{Rouch\\'e's theorem}\n  that $f$ and $f+g$ have the same number of zeros, counted with\n  multiplicity, in $B_r(z_0)$.  (This is sometimes called the dog\n  leash theorem---can you see why?)\n\\end{problem}\n\n\\begin{problem}\n  Use \\ref{rouches-theorem} to give another proof of the Fundamental Theorem of Algebra.\n\\end{problem}\n\n\\begin{problem}\n  It is important to recognize patterns in how residues equip us to evaluate integrals, so that, when presented with a fresh integration problem, we have some ideas about the best tool for the task.  
Develop a tool for evaluating
  \[
    \int_0^{2\pi} f(\cos \theta,\sin \theta) \, d\theta
  \]
  via residues.  Here $f$ is a rational function; explain how to make
  the substitution $z = e^{i\theta}$ to rewrite
  $f(\cos \theta,\sin \theta)$ in terms of $z$, and
  similarly rewrite $d\theta$ in terms of
  $dz$ to reduce the given integral to a certain contour integral.
\end{problem}

\begin{problem}\label{summation-theorem}Sometimes the residue calculus
  permits us to sum series.

  Suppose $f$ is meromorphic with poles $z_1,\ldots,z_N$ with $z_j \not\in \Z$.  Our goal is a formula
  \[
    \sum_{n=-\infty}^{\infty} f(n) = - \sum_{j=1}^N \Res(g, z_j)
  \]
  where $g(z) = \pi \cot(\pi z) f(z)$.  \textit{Hint:} Consider a square contour $R$ centered at the origin with side-length $2N+1$.  You may assume that $\pi \cot(\pi z)$ is bounded on $R$ independent of $N$.  You will then need to impose some bound on $f$ on the contour $R$.
\end{problem}

\begin{problem}
  Apply \ref{residue-coth} and \ref{summation-theorem} to evaluate
  \[
    \sum_{n=-\infty}^\infty \frac{1}{n^2 + b^2}.
  \]
\end{problem}

\section{Prove or Disprove and Salvage if Possible}

\begin{problem}\label{zero-residue-not-enough}Suppose $f : D_r(z_0) \to \C$ is holomorphic.  The residue
  $\Res(f,z_0)$ vanishes if and only if the singularity $z_0$ is
  removable.
\end{problem}

\begin{problem}\label{open-mapping-theorem}If $U \subset \C$ is open
  and $f : U \to \C$ is holomorphic, then $f(U)$ is
  open. % missing nonconstant, can be proved from rouche
\end{problem}

\end{document}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{Sheet 11}\n\n\\subsection{Wave equation}\n\n\\subsubsection{General travelling impulse}\n\nWe want to show that any twice-differentiable function \\(F(z \\pm v t) = f(z, t)\\) solves the wave equation: \n%\n\\boxalign{\n\\begin{align} \\label{eq:wave-equation}\n\\qty(-\\partial_{t}^2 + v^2 \\partial_{z}^2) f(t, z) = 0\n\\,. \n\\end{align}}\n\nWe denote \\(\\alpha (t,z) = z \\pm vt\\), so that \\(f = F \\circ \\alpha \\) and: \n%\n\\begin{subequations}\n\\begin{align}\n\\pdv[2]{f}{t} = \\pdv{}{t }\\qty( \\dv{F}{\\alpha } \\pdv{\\alpha }{t}) &= \\dv{F}{\\alpha } \\pdv[2]{\\alpha }{t} +  \\pdv{\\alpha }{t} \\pdv{}{t} \\qty(\\dv{F}{\\alpha })  \\\\\n&= \\dv{F}{\\alpha } \\pdv[2]{\\alpha }{t}\n+ \\pdv{\\alpha }{t}\\dv{}{\\alpha } \\qty(\\dv{F}{\\alpha } \\pdv[]{\\alpha }{t}) \\marginnote{Commuted the derivatives}\\\\\n&= \\dv{F}{\\alpha } \\pdv[2]{\\alpha }{t}\n+ \\pdv{\\alpha }{t} \\qty( \\dv[2]{F}{\\alpha } \\pdv[]{\\alpha }{t} \n+ \\dv{F}{\\alpha } \\cancelto{}{\\pdv{}{t} \\dv{\\alpha }{\\alpha }}) \\marginnote{Derivative of a constant}  \\\\\n&= \\dv{F}{\\alpha } \\pdv[2]{\\alpha }{t}\n+ \\dv[2]{F}{\\alpha } \\qty(\\pdv[]{\\alpha }{t})^2  \\label{eq:faa-di-bruno}\\\\\n&= v^2 \\dv[2]{F}{\\alpha }\n\\,,\n\\end{align}\n\\end{subequations}\n%\nwhere the expression we derived is fully general up to equation \\eqref{eq:faa-di-bruno}, while the last passage comes from the fact that \\(\\pdv*{\\alpha }{t} = \\pm v\\), so its square is \\(v^2\\) while the second derivative is zero since \\(v\\) is a constant. By the same reasoning, \n%\n\\begin{subequations}\n\\begin{align}\n\\pdv[2]{f}{z} &= \\dv{F}{\\alpha } \\pdv[2]{\\alpha }{z} \n+ \\dv[2]{F}{\\alpha } \\qty(\\pdv{\\alpha }{z})^2  \\\\\n&= \\dv[2]{F}{\\alpha }\n\\,.\n\\end{align}\n\\end{subequations}\n\nTherefore, \n%\n\\begin{align}\n\\qty(-\\partial_{t}^2 + v^2 \\partial_{z}^2) f(t, z) \n= (-v^2 + v^2) \\dv[2]{F}{\\alpha } \\equiv 0 \n\\,.\n\\end{align}\n\n\\subsubsection{Gaussian pulse}\n\nAt any fixed time \\(t\\), the function \n%\n\\begin{align}\nF(\\alpha ) = \\exp(- \\alpha^2) = \\exp(- \\qty(z - v t)^2)\n\\,\n\\end{align}\n%\nlooks like a rescaled Gaussian, centered around \\(z = vt\\). As \\(t\\) increases it travels rightward with speed \\(v\\). A plot of this, with \\(v = 6\\) and \\(t = 0, 1, 2\\) is shown in figure \\ref{fig:moving-gaussian-pulse}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\textwidth]{figures/gauss.pdf}\n\\caption{Pulse for \\(t = 0, 1, 2\\).}\n\\label{fig:moving-gaussian-pulse}\n\\end{figure}\n\n\\subsubsection{Harmonic wave solution}\n\nWe consider the harmonic solution \n%\n\\begin{align}\nf(t,z) \n= A \\cos(2 \\pi \\qty(\\frac{z}{\\lambda } - \\frac{t}{T})) \n= A \\cos(\\frac{2\\pi}{\\lambda } \\alpha (t,z)) \n\\,,\n\\end{align}\n%\nwhere \\(\\alpha  = z - \\lambda t /T\\). 
Writing the harmonic solution this way allows us to see its properties easily: we have \(v = \lambda / T\) by comparison with the formula from before (\(\alpha = z - vt\)), while the periodicity follows from that of the cosine, starting from the fact that \(\cos(x) = \cos(x \pm 2 \pi )\).

If we map \(z \rightarrow z + \lambda \) we get \(\alpha \rightarrow \alpha + \lambda \), and
%
\begin{align}
\cos(\frac{2 \pi }{\lambda } \alpha ) \rightarrow \cos(\frac{2 \pi }{\lambda }\qty(\alpha + \lambda )) = \cos(\frac{2 \pi }{\lambda } \alpha  + 2 \pi ) = \cos(\frac{2 \pi }{\lambda } \alpha )
\,.
\end{align}

Similarly, if we map \(t \rightarrow t + T\) we get \(\alpha \rightarrow \alpha - \lambda T/T = \alpha - \lambda \), so we can apply the same reasoning. This means that the wave has spatial period \(\lambda \) and temporal period \(T\).

\subsubsection{Lightspeed waves}

The equation \(\square f = 0\) is analogous to \eqref{eq:wave-equation} for \(v = 1 (= c)\). The difference is the three-dimensionality of the spatial derivatives; however, this does not present an issue.
We still consider solutions in the form \(f (x^{\mu }) = F(\alpha (x^{\mu }) )\), with \(\alpha (x^{\mu }) = k_{\mu } x^{ \mu }\) for a constant covector \(k_{\mu }\). We will have the following derivatives
%
\begin{align}
\pdv[2]{f}{(x^{\mu })}
= \dv[2]{F}{\alpha }
\qty( \pdv{\alpha }{(x^{\mu})} )^2
= \dv[2]{F}{\alpha }
\qty(k_{\mu })^2
\,,
\end{align}
%
where the index \(\mu \) can take the conventional values from \(0\) to \(3\).
So, the equation \(\square f = 0\) reads
%
\begin{align}
\dv[2]{F}{\alpha } \qty(-\qty(k_{0})^2 + \sum_{i=1 }^{3} \qty(k_{i })^2) = 0
\,,
\end{align}
%
which is always satisfied if we take for \(k_{\mu }\) a null vector with respect to the Minkowski metric, \(k_{\mu } \eta^{\mu \nu } k_{\nu } = 0 \).
In order for the wave to be propagating forwards in time we need to set \(k_{0} < 0\), or equivalently \(k^{0}>0\).

So, the wave equation does indeed have travelling waves as solutions, which propagate with speed 1 (which can be gathered from the factor multiplying the Laplacian).

\subsection{Gravitational waves}

\subsubsection{Weak-field wave equation}

We consider a perturbed metric \(g_{\mu \nu } = \eta_{\mu \nu } + h_{\mu \nu }\), to first order in \(h_{\mu \nu }\).
\n\nFirst of all we need to compute the Christoffel symbols: they are given by \n%\n\\begin{subequations}\n\\begin{align}\n\\Gamma^{\\mu }_{\\nu \\rho } \n&= \\frac{1}{2} g^{\\mu \\alpha } \\qty(g_{\\alpha \\nu , \\rho } + g_{\\alpha \\rho , \\nu } - g_{\\nu \\rho , \\alpha })  \\\\\n&= \\frac{1}{2} g^{\\mu \\alpha } \\qty(h_{\\alpha \\nu , \\rho } + h_{\\alpha \\rho , \\nu } - h_{\\nu \\rho , \\alpha })  \\marginnote{\\(\\eta_{\\mu \\nu }  \\) is constant}  \\\\\n&= \\frac{1}{2} \\eta^{\\mu \\alpha } \\qty(h_{\\alpha \\nu , \\rho } + h_{\\alpha \\rho , \\nu } - h_{\\nu \\rho , \\alpha })   + \\mathcal{O}(h^2) \\marginnote{We only consider the first order in \\(h\\)}  \\\\\n&= \\frac{1}{2} \\qty(h^{\\mu }_{\\nu , \\rho } + h^{\\mu }_{\\rho , \\nu } - \\tensor{h}{_{\\nu \\rho , }^{\\alpha }}) + \\mathcal{O}(h^2) \n\\,.\n\\end{align}\n\\end{subequations}\n\nNow, let us look at the Ricci tensor: it is given \n%\n\\begin{subequations}\n\\begin{align}\nR_{\\mu \\nu } = R^{\\alpha }_{\\mu \\alpha \\nu } \n&= -2 \\qty(\\Gamma^{\\alpha }_{\\mu [\\alpha , \\nu ]}\n+ \\Gamma^{\\beta }_{\\mu [\\alpha } \\Gamma^{\\alpha }_{\\nu ], \\beta })  \\\\\n&= \\Gamma^{\\alpha }_{\\mu \\nu , \\alpha } - \\Gamma^{\\alpha }_{\\mu \\alpha , \\nu } + \\mathcal{O}(h^2)  \\\\\n&= \\frac{1}{2} \\qty(\\partial_{\\alpha } \\qty(h^{\\alpha }_{\\mu , \\nu } + h^{\\alpha }_{\\nu , \\mu } - \\tensor{h}{_{\\mu \\nu , }^{\\alpha }}) - \\partial_{\\nu } \\qty(h^{\\alpha }_{\\mu , \\alpha } + h^{\\alpha }_{\\alpha , \\mu } - \\tensor{h}{_{\\mu \\alpha ,}^{\\alpha }}))  \\\\\n&= \\frac{1}{2} \\qty(\\cancelto{}{h^{\\alpha }_{\\mu , \\nu \\alpha }} + h^{\\alpha }_{\\nu , \\mu \\alpha } - \\tensor{h}{_{\\mu \\nu , }^{\\alpha }_{\\alpha }} - \\cancelto{}{h^{\\alpha }_{\\mu , \\alpha \\nu }} - h^{\\alpha }_{\\alpha , \\mu \\nu } + \\tensor{h}{_{\\mu \\alpha ,}^{\\alpha }_{\\nu }}) \\marginnote{Commuting derivatives}  \\\\\n&= - \\frac{1}{2} \\square h_{\\mu \\nu } + \\frac{1}{2} \\qty(h^{\\alpha }_{\\nu , \\mu \\alpha } + \\tensor{h}{_{\\mu \\alpha, }^{\\alpha }_{\\nu }} - h_{,\\mu \\nu } ) \\marginnote{Introduced \\(\\square = \\partial^{\\alpha } \\partial_{\\alpha }\\) and  \\(h = h^{\\alpha }_{\\alpha }\\)}  \\\\\n&= - \\frac{1}{2} \\square h_{\\mu \\nu } + \\frac{1}{2} \n\\partial_{\\mu } \\qty(h_{\\nu, \\alpha  }^{\\alpha } - \\frac{1}{2} h_{, \\nu } ) \n+ \\frac{1}{2 } \\partial_{\\nu } \\qty(h_{\\mu, \\alpha  }^{\\alpha } - \\frac{1}{2} h_{, \\mu }) \\marginnote{Split \\(h_{, \\mu \\nu } \\) in two, swapped the positions of two contracted indices}\n\\,,\n\\end{align}\n\\end{subequations}\n%\nwhich is our final expression. \n\nFor brevity\\footnote{Which is the formal way to say that I do not feel like writing it all out.} I omit the proof of the fact that setting \\(2 h^{\\alpha }_{\\nu , \\alpha } - h_{, \\nu } = 0\\) (the harmonic gauge condition) is actually a valid gauge choice, it can be found in the lecture notes. Making this gauge choice, the two terms in parentheses cancel so we find \n%\n\\boxalign{\n\\begin{align}\nR_{\\mu \\nu } = - \\frac{1}{2} \\square h_{\\mu \\nu }\n\\,,\n\\end{align}}\n%\nwhich means that, if \\(T_{\\mu \\nu } = 0\\), the Einstein Field Equations read \\(\\square h_{\\mu \\nu } = 0\\), which is precisely the wave equation for a wave propagating with speed \\(c\\). 
\n\n\\subsubsection{Gravitational waves polarizations}\n\nWe are considering a wave in the form \n%\n\\begin{align}\nh_{\\mu \\nu } = \\exp(i k_{\\beta } x^{\\beta }) \\epsilon_{\\mu \\nu } = \\exp(i \\qty(- k t + kz)) \\epsilon_{\\mu \\nu }\n\\,,\n\\end{align}\n%\nfor a constant matrix \\(\\epsilon_{\\mu \\nu }\\) and for \\(k_{\\mu } = (-k, 0,0,k)\\). We have already shown in the exercise before that this satisfies the wave equation. We want to impose the conditions: \n\\begin{enumerate}\n    \\item \\(\\epsilon_{0 \\nu } = 0\\);\n    \\item \\(2\\partial_{\\alpha } h^{\\alpha }_{\\nu } - \\partial_{\\nu } h = 0\\). \n\\end{enumerate}\n\nThe first of these also implies \\(\\epsilon_{\\nu 0} = 0\\) by symmetry (the symmetry of \\(g_{\\mu \\nu } \\) implies the symmetry of \\(h_{\\mu \\nu }\\), which in turn implies the symmetry of \\(\\epsilon_{\\mu \\nu }\\), since the exponential is a scalar).\n\nCan we actually impose these conditions? Recall: our gauge transformations are \\(h_{\\mu \\nu } \\rightarrow h_{\\mu \\nu } - \\partial_{(\\mu  } \\xi_{\\nu )}\\) for some covector \\(\\xi_{ \\nu }\\). \nFirst of all, we impose the harmonic gauge condition. We need to show that we can use a gauge transformation to transform a generic perturbed metric \\(\\widetilde{h}_{\\mu \\nu }\\) into a perturbed metric \\(h_{\\mu \\nu }\\) such that \\(2 \\partial^{\\alpha } h_{\\alpha\\nu } - \\partial_{\\nu }  h = 0\\).\n\nWriting out the condition, we have \n%\n\\begin{subequations}\n\\begin{align}\n2 \\partial^{\\alpha } \\qty(\\widetilde{h}_{ \\alpha  \\nu } - \\partial_{(\\alpha } \\xi_{\\nu )}) - \\partial_{\\nu } \\qty(\\eta^{\\rho \\sigma } \\qty(\\widetilde{h}_{\\rho \\sigma } - \\partial_{ (\\rho } \\xi_{\\sigma )}))  &\\overset{?}{=}  0  \\marginnote{The symmetrization in the second term is redundant since \\(\\eta_{\\mu \\nu }\\) is already symmetric}\\\\\n2 \\partial^{\\alpha } \\widetilde{h}_{\\alpha \\nu } - \\partial_{\\nu } \\widetilde{h} &\\overset{?}{=} \n\\square \\xi_{\\nu } + \\partial_{\\nu } \\qty(\\partial^{\\alpha } \\xi_{\\alpha }) - \\partial_{\\nu }\\qty( \\partial \\cdot \\xi )  \\\\\n2 \\partial^{\\alpha } \\widetilde{h}_{\\alpha \\nu } - \\partial_{\\nu } \\widetilde{h} &\\overset{?}{=} \n\\square \\xi_{\\nu }\n\\,,\n\\end{align}\n\\end{subequations}\n%\nso we can use the value of \\(2 \\partial^{\\alpha } \\widetilde{h}_{\\alpha \\nu } - \\partial_{\\nu } \\widetilde{h}\\) as a source term \\(\\rho_{\\nu } \\) for the four PDEs \\(\\rho_{\\nu } = \\square \\xi_{\\nu }\\). These always have a solution, which can be written explicitly using the Green function \\(G(x)\\), which is defined by \n%\n\\begin{align}\n\\square G(x - x') = \\delta (x - x')\n\\,,\n\\end{align}\n%\nwhere \\(x\\) and \\(x'\\) are generic points in spacetime and \\(\\delta \\) is a 4-dimensional Dirac delta function. Then, the solution will be written as \n%\n\\begin{align}\n\\xi_{\\nu } (x) = \\int_{ \\mathbb{R}^{4}} G(x - y) \\rho_{\\nu }(y) \\dd[4]{y}\n\\,.\n\\end{align}\n\nThen, we know that we can fix the harmonic gauge: for an arbitrary \\(\\widetilde{h}_{\\mu \\nu }\\) we can choose a \\(\\xi_{\\nu }\\) whose d'Alembertian \\(\\square \\xi_{\\nu }\\) is exactly the required source term. 
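\n\nFor concreteness: with the mostly-plus signature used here, \\(\\square = - \\partial_t^2 + \\nabla^2\\), and the standard retarded choice for this Green function is (up to overall sign conventions, which vary between references) \n%\n\\begin{align}\nG(x - x') = - \\frac{\\delta \\qty(t - t' - \\abs{\\vec{x} - \\vec{x}'})}{4 \\pi \\abs{\\vec{x} - \\vec{x}'}}\n\\,,\n\\end{align}\n%\nso that \\(\\xi_{\\nu }\\) at a point is determined by the source \\(\\rho_{\\nu }\\) evaluated on that point's past light cone. \n\n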
A thing to notice is the fact that if we change \\(\\xi_{\\nu } \\rightarrow \\xi_{\\nu } + r_{\\nu }\\), where \\(\\square r_{\\nu } =0\\), then the d'Alembertian of the gauge vector field does not change, therefore the vector field we have \\emph{still} allows us to set the harmonic gauge condition for our starting metric \\(\\widetilde{h}_{\\mu \\nu }\\). Then, we say that \\(r_{\\nu }\\) is our residual gauge freedom. \n\nSo, we assume that we have already fixed the primary gauge, and we restrict ourselves to the residual one. We can use this residual gauge freedom to impose the conditions \\(h_{0 \\mu } =0\\): let us start with \\(h_{00} =0\\). \nWe start from \n%\n\\begin{align}\nh_{00} = \\widetilde{h}_{00} - \\partial_{0} r_{0} \\overset{!}{=} 0 \n\\,,\n\\end{align}\n%\nso if we set \\(\\partial_{0} r_{0} = \\widetilde{h}_{00}\\) we can impose this gauge condition. This is compatible with \\(\\square r_0 = 0\\): we will need to also have \\(\\nabla^2 r_0 = \\partial_0 \\partial_0 r_0 = \\partial_0 \\widetilde{h}_{00}\\), and we note that this fixes \\(r_0 \\) to be the integral in \\(\\dd{x^{0}}\\) of \\(\\widetilde{h}_{00}\\) at each spatial point.\n\nNow, we show that we can also set \\(h_{0i}=0\\): it amounts to \n%\n\\begin{align}\nh_{0i} = \\widetilde{h}_{0i} - \\frac{1}{2} \\qty(\\partial_{i} r_0 + \\partial_0 r_{i}) \\overset{!}{=}  0\n\\,,\n\\end{align}\n%\nso if we set each of the spatial components \\(r_{i}\\) to \n%\n\\begin{align}\nr_{i} = \\int \\dd{x^{0}} \\qty(2 \\widetilde{h}_{0i} - \\partial_{i} r_0 )\n\\,\n\\end{align}\n%\nwe will have the desired gauge. Since we also need to set \\(\\square r_{i}=0\\), we will need to also have \\(\\nabla^2 r_{i} = \\partial_0 \\partial_0 r_{i} = \\partial_0 \\qty(2 \\widetilde{h}_{0i} - \\partial_{i} r_0 )\\). \n\n% We can show this for plane wave solutions: we do not actually lose generality since they form a basis for any solution, the so-called Fourier basis. \n\n\nWhat do these conditions actually mean for the matrix \\(\\epsilon_{\\mu \\nu }\\)?\nWe can write the trace of \\(h_{\\mu \\nu }\\) as \n%\n\\begin{subequations}\n\\begin{align}\nh= \\eta^{\\mu \\nu } h_{\\mu \\nu } &= \\exp(i k_{\\beta } x^{\\beta } ) \\epsilon_{\\mu \\nu } \\eta^{\\mu \\nu }  \\\\\n&= \\exp(i k_{\\beta } x^{\\beta } ) \\epsilon_{ij} \\delta^{ij} \\marginnote{The components with a 0 vanish, and \\(\\eta_{ij} = \\delta_{ij}\\)}\n\\,,\n\\end{align}\n\\end{subequations}\n%\nso \\(h\\) depends on the sum of the diagonal entries of \\(\\epsilon_{ij}\\), which is a constant; we will denote it as \\(\\epsilon = \\epsilon_{ij} \\delta^{ij}\\). \nLet us now think through the possible values of \\(\\nu \\) in the second condition. If \\(\\nu =0\\) we find: \n%\n\\begin{align}\n2 \\partial_{\\alpha } h^{\\alpha }_{0} - \\partial_{0} h = 0\n\\,,\n\\end{align}\n%\nbut the first term vanishes since all the components of \\(\\epsilon_{\\mu \\nu }\\) with an index \\(0\\) vanish, so the same holds for \\(h_{\\mu \\nu }\\). Therefore we get \\(\\partial_{0} h = 0\\), which can be expanded into \n%\n\\begin{align}\n- i k \\exp(ik_{\\beta } x^{\\beta }  ) \\epsilon = 0\n\\,,\n\\end{align}\n%\nbut \\(k \\) cannot be 0 for a nontrivial solution, and the exponential always has norm 1, so we must conclude that \\(\\epsilon = 0\\); this implies \\(h = 0\\).  
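\n\nNote that, since \\(h = 0\\), the perturbation is now traceless, so it coincides with its trace-reverse: \\(h_{\\mu \\nu } - \\frac{1}{2} \\eta_{\\mu \\nu } h = h_{\\mu \\nu }\\). Together with \\(\\epsilon_{0 \\nu } = 0\\) and the transversality condition derived next, this is why the gauge we are constructing is called \\emph{transverse-traceless}. 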
\n\nMoving on to \\(\\nu = j \\): we get \n%\n\\begin{align}\n0 = \\partial_{\\alpha } h^{\\alpha }_{j} = \\partial_{i } h_{ij} = \\partial_{3} h_{3 j} = ik \\exp(ik_{\\beta } x^{\\beta } ) \\epsilon_{3j}\n\\,,\n\\end{align}\n%\nbut, similarly to before, neither \\(k\\) nor the exponential can be zero, therefore \\(\\epsilon_{3j}=0\\), and by symmetry also \\(\\epsilon_{j3}=0\\). This means that the matrix \\(\\epsilon_{\\mu \\nu }\\) looks like \n%\n\\begin{subequations}\n\\begin{align}\n\\epsilon_{\\mu \\nu } = \\left[\\begin{array}{cccc}\n0 & 0 & 0 & 0 \\\\ \n0 & \\epsilon_{11} & \\epsilon_{12} & 0 \\\\ \n0 & \\epsilon_{21} & \\epsilon_{22} & 0 \\\\ \n0 & 0 & 0 & 0\n\\end{array}\\right]\n\\,,\n\\end{align}\n\\end{subequations}\n%\nso at this point we are left with two possible degrees of freedom: \\(\\epsilon_{12} = \\epsilon_{21}\\), and we found before that \\(\\epsilon_{\\mu \\nu }\\) must be traceless, which means \\(\\epsilon_{11} = - \\epsilon_{22}\\). \n\nFor simplicity I will now write only the smaller \\(2 \\times 2\\) matrix \\(\\widetilde{\\epsilon}_{mn}\\), where \\(m\\) and \\(n\\) range from 1 to 2. Two possible real matrices which satisfy our conditions are the Pauli matrices \\(\\widetilde{\\epsilon}=   \\sigma_{x} \\) and \\( \\widetilde{\\epsilon} = \\sigma_{z}\\): \n%\n\\begin{subequations}\n\\begin{align}\n\\sigma_{x} = \\left[\\begin{array}{cc}\n0 & 1 \\\\ \n1 & 0\n\\end{array}\\right] \\qquad \\text{and} \\qquad\n\\sigma_{z} = \\left[\\begin{array}{cc}\n1 & 0 \\\\ \n0 & -1\n\\end{array}\\right]\n\\,.\n\\end{align}\n\\end{subequations}\n\nThese two basis matrices represent the two basis \\emph{linear} polarizations of a gravitational wave, and we can get any linearly polarized GW along the \\(z\\) direction by considering linear combinations of these with real coefficients. \n\nAn interesting thing to note, however, is the fact that we can consider \\emph{complex} combinations of these basis matrices: the matrices \\((\\sigma_{z} \\pm i \\sigma_{x} ) / \\sqrt{2}\\) represent circularly polarized waves, and they form a basis over \\(\\mathbb{C}\\) for all of the polarizations. \n\n\\end{document}\n
\\chapter{Semantics for Propositional Logic}\n\n\\section{Valuations and Truth}\n\n\t\\begin{enumerate}[\\thesection.1]\n\t\n\t\t\\item In this chapter, we turn our attention to semantics. Remember from the introduction that in semantics, we make the idea of validity as truth-preservation formally precise. We will do this by introducing the notion of a \\emph{model} for a propositional language and defining what it means for a formula to be \\emph{true in a model}. We then obtain our account of validity for inferences couched in a propositional language by saying that such an inference is valid iff in every model where the premises are true, the conclusion is true as well. So, intuitively, models play the role of situations. But how can we make this precise?\n\t\t\n\t\t\\item What do we want our formal situation-counterparts to do? Well, note that all that matters for our account of validity is which statements are true in a given situation. So, at the very least, we want our formal situation-counterpart to tell us which sentence letters are true in the situation we're modeling. As it turns out, this is already enough. Once we know the truth-values of the sentence letters, we can calculate the values of every formula by means of function recursion. But we're getting ahead of ourselves.  Let's figure out how a model can tell us which sentence letters are true in the situation it models. Remember that in classical logic, we assume that every sentence is either true or false and never both---this is the principle of bivalence. So, in classical logic, there are only two truth-values, \\emph{true} and \\emph{false}. If we now assign to each sentence letter its truth value in a given situation, the result will be a \\emph{function}. Why so? Well, by bivalence, every sentence is true or false in the situation---so every sentence gets assigned \\emph{a} value. And, also by bivalence, no sentence is both true and false in the situation---so every sentence gets assigned a \\emph{unique} value. We call a function that assigns a truth-value to every sentence letter a \\emph{valuation}. We will use valuations as models for propositional languages. Intuitively, the idea is that a valuation models the situation in which the true sentences are precisely the ones which the valuation assigns \\emph{true}. From a mathematical perspective, it's convenient to model the truth-values as numbers, 0 for \\emph{false} and 1 for \\emph{true}. We can now give the formal definition of a model or valuation for a propositional language: \n\t\t\n\t\t\\begin{itemize}\t\t\n\t\t\t\t\t\t\n\t\t\t\\item Let $\\mathcal{L}$ be a propositional language. A \\emph{valuation} for $\\mathcal{L}$ is a function $v:\\mathcal{P}\\to\\{0,1\\}$. \n\t\t\t\n\t\t\\end{itemize}\n\t\t\t\n\t\\item \\emph{Examples}. Let $\\mathcal{P}=\\{p,q,r\\}$. The following are \\emph{all} the valuations $v:\\{p,q,r\\}\\to \\{0,1\\}$:\n\t\n\t\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $v(p)=0, v(q)=0,$ and $v(r)=0$\n\t\t\t\\item $v(p)=0, v(q)=0,$ and $v(r)=1$\n\t\t\t\\item $v(p)=0, v(q)=1,$ and $v(r)=0$\n\t\t\t\\item $v(p)=0, v(q)=1,$ and $v(r)=1$\n\t\t\t\\item $v(p)=1, v(q)=0,$ and $v(r)=0$\n\t\t\t\\item $v(p)=1, v(q)=0,$ and $v(r)=1$\n\t\t\t\\item $v(p)=1, v(q)=1,$ and $v(r)=0$\n\t\t\t\\item $v(p)=1, v(q)=1,$ and $v(r)=1$\n\t\t\t \t\n\t\t\\end{enumerate}\n\t\tMore generally, if there are $n$ elements in $\\mathcal{P}$, there are $2^n$ different valuations for $\\mathcal{L}$. \n\t\t\n\t\t\\item \\emph{More Examples}. 
But even if $\\mathcal{P}$ is infinite, for example $\\{p_i:i\\in\\mathbb{N}\\}$, we can reasonably define valuations (but note that there'll be infinitely many, so we can't write them all down). Here are some examples for definitions of valuations $v:\\{p_i:i\\in\\mathbb{N}\\}\\to\\{0,1\\}$:\n\t\t\n\t\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $v(p_i)=0$ for all $i\\in \\mathbb{N}$\n\t\t\n\t\t\t\\item $v(p_i)=\\begin{cases} 0 & \\text{if }i\\text{ is odd}\\\\1 & \\text{if }i\\text{ is even}\\end{cases}$\n\t\t\t\n\t\t\t\\item $v(p_i)=\\begin{cases} 0 & \\text{if }i\\text{ is even}\\\\1 & \\text{if }i\\text{ is odd}\\end{cases}$\n\t\t\t\n\t\t\t\\item $v(p_i)=\\begin{cases} 1 & \\text{if }i\\text{ is prime}\\\\0 & \\text{ otherwise}\\end{cases}$\n\t\t\t\n\t\t\t\\item $v(p_i)=1$ for all $i\\in \\mathbb{N}$\n\t\t\t\n\t\t\t\\item For $X\\subseteq \\mathbb{N}$ a set of numbers, we set $v(p_i)=1$ iff $i\\in X$.\n\t\t\t\n\t\t\t\\item For $\\phi\\in\\mathcal{L}$ a formula, we set $v(p_i)=0$ iff $p_i\\in sub(\\phi)$, for all $i\\in\\mathbb{N}$.\n\t\t\t\n\t\t\t \t\n\t\t\\end{enumerate}\n\t\t\n\t  \\item Above we said that once we know the truth-values of the sentence letters under a given valuation, we can calculate the truth-values of all formulas using function recursion.\n\t\t\tIn order to do so, we need to know how the truth-value of a complex formula depends on the truth-values of its immediate sub-formulas.\n\t\t\tLet's begin by guiding our intuitions.\n\t\t\tThe following principles seem plausible for all $\\phi,\\psi\\in\\mathcal{L}$, given that $\\neg$ means \\emph{not}, $\\land$ means \\emph{and}, and $\\lor$ means \\emph{or}:\n\t\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\t\t\t\t\n\t\t\t\t\\item $\\neg\\phi$ is true iff $\\phi$ is false\n\t\t\t\t\t\n\t\t\t\t\\item $(\\phi\\land\\psi)$ is true iff $\\phi$ is true and $\\psi$ is true\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\\item $(\\phi\\lor\\psi)$ is true iff $\\phi$ is true or $\\psi$ is true\t\t\t\t\t\t\t\n\t\t\t\t\\end{enumerate}\n\t\t\tHow about a formula of the form $(\\phi\\to\\psi)$?\n\t\t\tWhen is a formula of this form true?\n\t\t\tThe standard answer to this question is actually not so easy to see.\n\t\t\tWe will try to motivate it ``indirectly'' by thinking about when $(\\phi\\to\\psi)$ should be \\emph{false}.\n\t\t\tSurely, if $\\phi$ is true and $\\psi$ is false, then $(\\phi\\to\\psi)$ should be false.\n\t\t\tTo say that if the ball is red, then it's scarlet should exclude that the ball is red and not scarlet.\n\t\t\tThe standard account for the truth of $(\\phi\\to\\psi)$ says that this is the \\emph{only} way in which $(\\phi\\to\\psi)$ can be false: the only way in which it can turn out to be false that if the ball is red, then it's scarlet is if the ball is red but not scarlet.\n\t\t\tSo, $(\\phi\\to\\psi)$ is false if \\emph{and only if} $\\phi$ is true and $\\psi$ is false.\n\t\t\tNote that, by bivalence, this gives us an answer to when $(\\phi\\to\\psi)$ is true: by bivalence, $(\\phi\\to\\psi)$ is false iff $(\\phi\\to\\psi)$ is not true; so $(\\phi\\to\\psi)$ is true iff it's not the case that $\\phi$ is true and $\\psi$ is false; and there are two ways in which it can not be the case that $\\phi$ is true and $\\psi$ is false: one is that $\\phi$ is not true, and so false, and the other is that $\\psi$ is not false, and so true.\n\t\t\tWe arrive at:\n\t\t\t\t\\begin{enumerate}[(i)]\n\t\t\t\t\\setcounter{enumii}{3}\t\n\t\t\t\t\\item $(\\phi\\to\\psi)$ is true iff $\\phi$ is false or $\\psi$ is 
true\t\t\t\t\t\t\t\n\t\t\t\t\\end{enumerate}\nThe last case, $(\\phi\\leftrightarrow\\psi)$, is somewhat easier, given that $\\leftrightarrow$ means \\emph{if and only if}. We want to say that $(\\phi\\leftrightarrow\\psi)$ is true iff $(\\phi\\to\\psi)$ is true (if) and $(\\psi\\to\\phi)$ is true (only if). With a bit of fiddling using (iv), we get:\n \t\t\\begin{enumerate}[(i)]\n\t\t\t\t\\setcounter{enumii}{4}\t\n\t\t\t\t\\item $(\\phi\\leftrightarrow\\psi)$ is true iff either $\\phi$ and $\\psi$ are both true or $\\phi$ and $\\psi$ are both false.\t\t\t\n\t\t\\end{enumerate} \t\t\n\t\t\n\t\t\\item The reading of $\\to$ given by (iv) is something worth dwelling on for a moment. Remember that in the introduction we said that the treatment of the conditional is something that characterizes classical logic. Condition (iv) is this treatment. The operator $\\to$ with the reading given by (iv) is what's called the \\emph{material} conditional. A peculiar feature of the material conditional is that $(\\phi\\to\\psi)$ is true if $\\phi$ is false (regardless of whether $\\psi$ is true). So, given we're thinking about the actual world, the sentence ``if Utrecht is in the US, then I'm the king of Germany'' is true, if understood in terms of $\\to$. This might sound weird, so let's think about it. Why should ``if Utrecht is in the US, then I'm the king of Germany'' be true (about the actual world)? Well, because it's not false: for it to be false, it would need to be the case that Utrecht is in the US but I'm not the king of Germany---and it's simply not true that Utrecht is in the US. Note that this is, essentially, the same reasoning we used to show in the introduction that in classical logic, an inference with inconsistent premises is always valid (1.3.3). Here classical logic and the reading of $\\to$ go hand-in-hand. There are many \\emph{non}-classical logics in which the conditional is not material. For example, in relevant logic, $(\\phi\\to\\psi)$ can only be true if $\\phi$ and $\\psi$ have something in common. This would render ``if Utrecht is in the US, then I'm the king of Germany'' false since the country Utrecht is in and my royal pedigree have absolutely nothing to do with each other. But, as we said in the introduction, we will only deal with classical logic in this course.\n\t\t\n\t\t\\item Let's see how we can use (i--v) to determine the truth value of formulas under a valuation. In order to do so, we first try to capture the meaning of the operators by means of so-called \\emph{truth-functions}. An $n$-ary truth function (for $n\\in\\mathbb{N}$) is a function from $\\{0,1\\}^n$ to $\\{0,1\\}$. To each of the clauses (i--v), there corresponds a truth-function that ``mirrors'' the influence of the operator on the truth-value of the sentence. These truth-functions, in a sense, give the \\emph{meaning} of their corresponding operator. 
So, we have functions $f_\\neg:\\{0,1\\}\\to\\{0,1\\}$ and $f_\\circ:\\{0,1\\}^2\\to\\{0,1\\}$ for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$ given by the following definitions:\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item \\emph{Negation}: $f_\\neg(x)=1-x$\n\t\t\t\n\t\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c | c}\n\n\t\t\t$f_\\neg$ & \\\\\\hline\n\n\t\t\t0 & 1\\\\\n\n\t\t\t1 & 0\n\n\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\n\t\t\t\\item \\emph{Conjunction}: $f_\\land(x,y)=min(x,y)$\\footnote{The function $min$ is defined analogously to $max$ (remember 4.4.4) by: \\[min(x,y)=\\begin{cases}x & \\text{if }x<y\\\\y&\\text{if }y<x\\\\ x & \\text{if }x=y\\end{cases}.\\]}\n\t\t\n\t\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c | c c}\n\t\t\t\n\t\t\t$f_\\land$ & 0 & 1\\\\\\hline\n\t\t\t\n\t\t\t0 & 0 & 0 \\\\\n\t\t\t\n\t\t\t1 & 0 & 1\n\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\n\t\t\t\\item \\emph{Disjunction}: $f_\\lor(x,y)=max(x,y)$\n\t\t\t\n\t\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c | c c}\n\t\t\n\t\t\t$f_\\lor$ & 0 & 1\\\\\\hline\n\t\t\t\n\t\t\t0 & 0 & 1 \\\\\n\n\t\t\t1 & 1 & 1\n\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\n\t\t\t\\item \\emph{Conditional}: $f_\\to(x,y)=max(1-x,y)$\n\t\t\t\n\t\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c | c c}\n\t\t\t\n\t\t\t$f_\\to$ & 0 & 1\\\\\\hline\n\t\t\t\n\t\t\t0 & 1 & 1 \\\\\n\n\t\t\t1 & 0 & 1\n\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\n\t\t\t\\item \\emph{Biconditional}: $f_\\leftrightarrow(x,y)=min(max(1-x,y), max(1-y,x))$.\n\t\t\t\n\t\t\t\\begin{center}\n\t\t\t\\begin{tabular}{c | c c}\n\t\t\t\n\t\t\t$f_\\leftrightarrow$ & 0 & 1\\\\\\hline\n\t\t\n\t\t\t0 & 1 & 0 \\\\\n\t\t\n\t\t\t1 & 0 & 1\n\t\t\n\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\n\t\t\\end{enumerate}\n\t\tLet's think this through in the case of $f_\\land$. Studying the function-table for $f_\\land$, we can see that $f_\\land(x,y)=1$ iff $x=1$ and $y=1$---the only case in which $f_\\land$ assigns the output one is if both inputs are one. Since one means \\emph{true} and zero means \\emph{false}, this just means that $f_\\land$ assigns the output \\emph{true} iff both inputs are \\emph{true}---which is precisely what (5.1.5.ii) says. The truth-functions given by (i--v) are also known as the \\emph{Boolean functions} or simply \\emph{Booleans}.\n\t\t\n\t\t\\item The Booleans $f_\\to$ and $f_\\leftrightarrow$ are a bit hard to wrap your head around, so let's think about them for a second. First, $f_\\to$. There is another way of writing down the same function, which can be found by looking at the table. Note that there are four possible inputs: $(0,0), (0,1),(1,0),$ and $(1,1)$. But in only one of these cases does $f_\\to$ assign zero, viz. $(1,0)$. Remember that $(\\phi\\to\\psi)$ is false iff $\\phi$ is true and $\\psi$ is false. So, we can use definition by cases to write down the definition of $f_\\to$:\n\t\t\t\t\\[f_\\to(x,y)=\\begin{cases} 0 & \\text{if } x=1\\text{ and }y=0\\\\1&\\text{otherwise}\\end{cases}\\]\n\t\t\t\tSimilarly, $f_\\leftrightarrow$ looks almost threatening. But look at the table! Only for the inputs $(0,0)$ and $(1,1)$ does $f_\\leftrightarrow$ assign the output $1$. 
So, we get the following useful definition by cases of $f_\\leftrightarrow$:\t\n\t\t\t\t\\[f_\\leftrightarrow(x,y)=\\begin{cases} 1 & \\text{if } x=y\\\\0&\\text{otherwise}\\end{cases}\\]\n\t\tThis, I hope, is already much more transparent.\n\t\t\n\t\t\\item We will now use the Booleans to define the truth-value $\\llbracket\\phi\\rrbracket_v$ of a formula $\\phi\\in\\mathcal{L}$ under a valuation $v$. More concretely, we will define the function $\\llbracket\\cdot\\rrbracket_v:\\mathcal{L}\\to\\{0,1\\}$ by the following recursion:\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item  $\\llbracket p\\rrbracket_v=v(p)$ for all $p\\in\\mathcal{P}$\n\t\t\t\n\t\t\t\\item \\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\\item  $\\llbracket\\neg \\phi\\rrbracket_v=f_\\neg(\\llbracket\\phi\\rrbracket_v)$\n\t\t\t\t\n\t\t\t\t\\item  $\\llbracket(\\phi\\circ \\psi)\\rrbracket_v=f_\\circ( \\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v)$ for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$\n\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\\end{enumerate}\n\t\t\t\n\t\tThis is very compact, so let's unfold it a bit by applying the definitions of the truth-functions:\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item  $\\llbracket p\\rrbracket_v=v(p)$ for all $p\\in\\mathcal{P}$\n\t\t\t\n\t\t\t\\item \\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\\item  $\\llbracket\\neg \\phi\\rrbracket_v=1-\\llbracket\\phi\\rrbracket_v$\n\t\t\t\t\n\t\t\t\t\\item  $\\llbracket(\\phi\\land \\psi)\\rrbracket_v=min(\\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v)$\n\t\t\t\t\\item[] $\\llbracket(\\phi\\lor \\psi)\\rrbracket_v=max(\\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v)$\t\t\n\t\t\t\t\\item[] $\\llbracket(\\phi\\to \\psi)\\rrbracket_v=max(1-\\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v)$\t\t\n\t\t\t\t\n\t\t\t\t\\item[] $\\llbracket(\\phi\\leftrightarrow \\psi)\\rrbracket_v=\\begin{cases} 1 & \\text{if } \\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v\\\\0&\\text{otherwise}\\end{cases}$\t\t\n\t\n\t\t\t\\end{enumerate}\t\t\t\n\t\t\\end{enumerate}\nBy virtue of function recursion, we get a truth-value for \\emph{every} formula from (i--ii). In fact, we can calculate this value \\emph{step-by-step}. Let's consider a couple of concrete examples. Let $v$ be the valuation given in 5.1.3.f, i.e. $v(p)=1,v(q)=0,$ and $v(r)=1$. For $\\neg (p\\land (r\\lor q))$, we get:\n\t\t\\begin{align*}\n\t\t\\llbracket \\neg (p\\land (r\\lor q))\\rrbracket_v &=1-\\llbracket (p\\land (r\\lor q))\\rrbracket_v\\tag{ii.a}\\\\\n\t\t&=1-min(\\llbracket p\\rrbracket_v, \\llbracket (r\\lor q)\\rrbracket_v)\\tag{ii.b}\\\\\n\t\t&=1-min(\\llbracket p\\rrbracket_v, max(\\llbracket r\\rrbracket_v,\\llbracket q\\rrbracket_v))\\tag{ii.b}\\\\\n\t\t&=1-min(v(p), max(v(r), v(q)))\\tag{i}\\\\\n\t\t&=1-min(1, max(1,0))\\tag{5.1.3.f}\\\\\n\t\t&=1-min(1,1)\\\\\n\t\t&=1-1\\\\\n\t\t&=0\n\t\t\\end{align*}\nNext, consider the formula $((p\\to q)\\lor (q\\to r))$. 
We get:\n\t\t\\begin{align*}\n\t\t\\llbracket ((p\\to q)\\lor (q\\to r))\\rrbracket_v &=max(\\llbracket (p\\to q)\\rrbracket_v, \\llbracket (q\\to r)\\rrbracket_v)\\\\\n\t\t&=max(max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v), max(1-\\llbracket q\\rrbracket_v, \\llbracket r\\rrbracket_v))\\\\\n\t\t&=max(max(1-v(p), v(q)), max(1-v(q), v(r)))\\\\\n\t\t&=max(max(1-1, 0), max(1-0, 1))\\\\\n\t\t&=max(max(0, 0), max(1,1))\\\\\n\t\t&=max(0,1)\\\\\n\t\t&=1\n\t\t\\end{align*}\n\t\t\t\t\n\t\t\\item Note that since for every valuation $v$, $\\llbracket \\cdot\\rrbracket_v$ is a \\emph{function} from $\\mathcal{L}$ to $\\{0,1\\}$, it follows immediately that for each $\\phi\\in\\mathcal{L}$, we have that either $\\llbracket\\phi\\rrbracket_v=1$ or $\\llbracket\\phi\\rrbracket_v=0$ (and never both). In other words, the law of bivalence holds for our semantics.\n\t\t\n\t\t\\item It's useful to look at the definition of truth under a valuation from another angle, to look at it as a \\emph{property} of formulas. The idea is that, instead of defining the truth-value of a formula using function recursion as we did in 5.1.9, we could also have defined a property of formulas using an inductive definition---just like we inductively define sets.\\footnote{Remember that mathematically speaking a property actually \\emph{is} just the set of objects satisfying the property.} For $v$ a valuation and $\\phi\\in\\mathcal{L}$ a formula, we write $v\\vDash \\phi$ to say that $\\phi$ is true under the valuation $v$, and we write $v\\nvDash\\phi$ to say that $\\phi$ is not true under $v$. To obtain an inductive definition of truth under a valuation, we would now simply postulate the following inductive clauses, which are derived from clauses (i--v) of 5.1.5:\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\t\t\\item $v\\vDash p$ iff $v(p)=1$, for $p\\in\\mathcal{P}$\n\t\t\t\t\n\t\t\t\t\t\\item $v\\vDash \\neg\\phi$ iff $v\\nvDash\\phi$\n\t\t\t\t\t\n\t\t\t\t\t\\item $v\\vDash(\\phi\\land\\psi)$ iff $v\\vDash\\phi$ and $v\\vDash\\psi$\n\t\t\t\t\t\\item $v\\vDash(\\phi\\lor\\psi)$ iff $v\\vDash\\phi$ or $v\\vDash\\psi$\n\t\t\t\t\t\\item $v\\vDash(\\phi\\to\\psi)$ iff $v\\nvDash\\phi$ or $v\\vDash\\psi$\n\t\t\t\t\t\\item $v\\vDash(\\phi\\leftrightarrow\\psi)$ iff   either $v\\vDash\\phi$ and $v\\vDash\\psi$, or $v\\nvDash\\phi$ and $v\\nvDash\\psi$.\n\t\t\t\t\n\t\t\t\t\\end{enumerate}\nThis definition would have worked equally well---in fact, we shall prove in a moment that the two definitions coincide. Which of the two definitions (5.1.9 or this one) to prefer is mainly a question of taste. Some logicians prefer 5.1.9 and some logicians prefer the above definition. In the following, we will mainly work with definition 5.1.9 (so guess which kind of logician I am). \n\t\t\n\t\\item Let's look at our two examples from before. Let $v$ be again the valuation given in 5.1.3.f, i.e. $v(p)=1,v(q)=0,$ and $v(r)=1$. 
For $\\neg (p\\land (r\\lor q))$, we can argue:\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item $v\\vDash \\neg (p\\land (r\\lor q))$ iff $v\\nvDash (p\\land (r\\lor q))$ (by (ii))\n\t\t\t\n\t\t\t\\item $v\\nvDash (p\\land (r\\lor q))$ iff $v\\nvDash p$ or $v\\nvDash (r\\lor q)$ (by (iii) and contrapositive reasoning)\n\t\t\t\n\t\t\t\\item $v\\nvDash (r\\lor q)$ iff $v\\nvDash r$ and $v\\nvDash q$ (by (iv) and contrapositive reasoning)\n\t\t\t\n\t\t\t\\item But we know that $v(p)=1, v(q)=0,$ and $v(r)=1$ (by 5.1.3.f).\n\t\t\t\n\t\t\t\\item So, $v\\vDash p$, $v\\nvDash q$ and $v\\vDash r$ (by (i) and in the case of $q$ contrapositive reasoning).\n\t\t\t\n\t\t\t\\item So, since we have $v\\vDash p$, we don't have $v\\nvDash p$ (on pain of contradiction). And since we have $v\\vDash r$, we don't have $v\\nvDash r$ and $v\\nvDash q$ (again, on pain of contradiction). \n\t\t\t\n\t\t\t\\item Hence, we neither have $v\\nvDash p$, nor do we have $v\\nvDash r$ and $v\\nvDash q$.\n\t\t\t\n\t\t\t\\item But that just means that we don't have $v\\nvDash (p\\land (r\\lor q))$, and so we don't have $v\\vDash \\neg (p\\land (r\\lor q))$. In other words, $v\\nvDash \\neg (p\\land (r\\lor q))$.\n\t\t\n\t\t\\end{itemize}\nFor $((p\\to q)\\lor (q\\to r))$, instead, we argue as follows:\n\t\\begin{itemize}\n\t\n\t\t\\item $v\\vDash((p\\to q)\\lor (q\\to r))$ iff $v\\vDash (p\\to q)$ or $v\\vDash (q\\to r)$ (by (iv)) \n\t\t\n\t\t\\item $v\\vDash (p\\to q)$ iff $v\\nvDash p$ or $v\\vDash q$ (by (v))\n\t\t\n\t\t\\item $v\\vDash (q\\to r)$ iff $v\\nvDash q$ or $v\\vDash r$ (by (v))\n\t\n\t\t\\item Since we have $v(r)=1$, we have $v\\vDash r$ (by (i)).\n\t\t\n\t\t\\item So we have $v\\nvDash q$ or $v\\vDash r$.\n\t\t\n\t\t\\item So we have $v\\vDash (q\\to r)$.\n\t\t\n\t\t%\\item So we have $v\\nvDash p$ or $v\\vDash q$.\n\t\t\n\t\t\\item So we have $v\\vDash((p\\to q)\\lor (q\\to r))$.\n\t\n\t\\end{itemize}\nWe get precisely the same results as we did using definition 5.1.9: $\\neg (p\\land (r\\lor q))$ is false under $v$ and $((p\\to q)\\lor (q\\to r))$ is true under $v$. This is not by chance, as we'll prove in the following proposition.\n\n\t\\item We show:\n\t\n\t\t\\begin{proposition}\n\t\tLet $v$ be a valuation and $\\phi\\in\\mathcal{L}$. Then $\\llbracket\\phi\\rrbracket_v=1$ iff $v\\vDash \\phi$.\n\t\t\\end{proposition}\n\t\t\\begin{proof}\n\t\tWe prove this claim by induction on formulas.\n\t\t\n\t\t\t\\begin{enumerate}[(i)]\n\t\t\t\n\t\t\t\t\\item \\emph{Base case}. We need to show that $\\llbracket p\\rrbracket_v=1$ iff $v\\vDash p$ for $p\\in \\mathcal{P}$. But $\\llbracket p\\rrbracket_v=1$ and $v\\vDash p$ are both defined as $v(p)=1$, hence the claim trivially holds.\n\t\t\t\t\n\t\t\t\t\\item \\emph{Induction steps}. \n\t\t\t\t\n\t\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\t\n\t\t\t\t\t\\item Assume the induction hypothesis that $\\llbracket\\phi\\rrbracket_v=1$ iff $v\\vDash \\phi$. We need to show that $\\llbracket\\neg \\phi\\rrbracket_v=1$ iff $v\\vDash \\neg\\phi$. This is a biconditional, so we have to show both directions:\n\t\t\t\t\t\\begin{itemize}\n\t\n\t\t\t\t\t\t\\item \\emph{Left-to-right}. Suppose that $\\llbracket\\neg \\phi\\rrbracket_v=1$. We need to derive that $v\\vDash\\neg \\phi$. Since $\\llbracket\\neg \\phi\\rrbracket_v=1-\\llbracket \\phi\\rrbracket_v$, we can infer that $\\llbracket \\phi\\rrbracket_v=0$. But by the induction hypothesis, we have that $\\llbracket\\phi\\rrbracket_v=1$ iff $v\\vDash\\phi$. Hence $\\llbracket\\phi\\rrbracket_v=0$ iff $v\\nvDash\\phi$. 
So, we can conclude from $\\llbracket \\phi\\rrbracket_v=0$ that $v\\nvDash\\phi$. But $v\\vDash\\neg \\phi$ iff $v\\nvDash\\phi$, and so $v\\vDash\\neg \\phi$, as desired.\n\t\t\t\t\t\t\n\t\t\t\t\t\t\\item \\emph{Right-to-left}. Suppose that $v\\vDash\\neg \\phi$. Since $v\\vDash\\neg \\phi$ iff $v\\nvDash\\phi$, it follows that  $v\\nvDash\\phi$. The induction hypothesis states that $\\llbracket\\phi\\rrbracket_v=1$ iff $v\\vDash\\phi$, and so $\\llbracket\\phi\\rrbracket_v=0$ iff $v\\nvDash\\phi$. Hence, we get to $\\llbracket\\phi\\rrbracket_v=0$. But since $\\llbracket\\neg \\phi\\rrbracket_v=1-\\llbracket \\phi\\rrbracket_v$, it follows that $\\llbracket\\neg \\phi\\rrbracket_v=1$, as desired.\n\t\t\t\t\n\t\t\t\t\t\\end{itemize}\n\t\t\t\t\t\n\t\t\t\t\\item Assume the induction hypotheses that $\\llbracket\\phi\\rrbracket_v=1$ iff $v\\vDash \\phi$ and $\\llbracket\\psi\\rrbracket_v=1$ iff $v\\vDash \\psi$. We need to show four different cases, $\\llbracket (\\phi\\circ\\psi)\\rrbracket_v=1$ iff $v\\vDash(\\phi\\circ\\psi)$ for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$. We will only prove the case for $\\circ=\\land$ and leave the remaining cases as exercises.\n\t\t\t\t\n\t\t\t\tSo, we need to derive that $\\llbracket (\\phi\\land\\psi)\\rrbracket_v=1$ iff $v\\vDash(\\phi\\land\\psi)$. So we have to show both directions:\n\t\t\t\t\t\\begin{itemize}\n\t\n\t\t\t\t\t\t\\item \\emph{Left-to-right}. \tSuppose that $\\llbracket (\\phi\\land\\psi)\\rrbracket_v=1$. Since $\\llbracket (\\phi\\land\\psi)\\rrbracket_v=min(\\llbracket \\phi\\rrbracket_v,\\llbracket \\psi\\rrbracket_v)$, it follows that $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\psi\\rrbracket_v=1$. By the induction hypothesis, it follows that $v\\vDash \\phi$ and $v\\vDash \\psi$. But  $v\\vDash(\\phi\\land\\psi)$ iff $v\\vDash\\phi$ and $v\\vDash\\psi$ by definition, and so $v\\vDash(\\phi\\land\\psi)$, as desired.\n\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\\item \\emph{Right-to-left}. Suppose that $v\\vDash(\\phi\\land\\psi)$. Since $v\\vDash(\\phi\\land\\psi)$ iff $v\\vDash\\phi$ and $v\\vDash\\psi$, it follows that $v\\vDash\\phi$ and $v\\vDash\\psi$. By the induction hypothesis, we have that $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\psi\\rrbracket_v=1$. But since $\\llbracket (\\phi\\land\\psi)\\rrbracket_v=min(\\underbrace{\\llbracket \\phi\\rrbracket_v}_{=1},\\underbrace{\\llbracket \\psi\\rrbracket_v}_{=1})$, it follows that $\\llbracket (\\phi\\land\\psi)\\rrbracket_v=1$.\n\t\t\t\t\n\t\t\t\t\t\\end{itemize}\n\t\t\t\t\t\n\t\t\t\t\\end{enumerate}\n\t\t\t\\end{enumerate}\n\t\t\\end{proof}\t\t\t\t\n\t\t\n\t\n\t\t\t\t\n\t\\end{enumerate}\n\t\n\\section{Consequence and the Deduction Theorem}\t\n\n\t\\begin{enumerate}[\\thesection.1]\n\n\t\t\\item We've now arrived at the point where we can formulate the main result of our theoretical inquiry into valid reasoning: a formal account of validity. Remember that the standard account of validity states that an inference is valid iff in every situation where the premises are true, the conclusion is true as well. We can now understand this idea \\emph{precisely} as truth-preservation across all models. Assuming that an inference is formulated in terms of a propositional language, we can say that an inference is valid iff under every valuation, if the premises are all true under the valuation, then the conclusion is, too. 
We'll spend this section working this idea out in a bit more detail.\n\t\t\n\t\t\\item Note that what underlies our (formal) notion of validity is a relation among formulas, the relation that holds between a set of formulas and another formula just in case every valuation that makes the set of formulas true also makes the other formula true. This notion is known in the literature as \\emph{logical consequence}. If $\\Gamma$ is a set of formulas (and we typically use $\\Gamma,\\Delta,\\Sigma, \\mathellipsis$ as variables for sets of formulas) and $\\phi$ a single formula, we write $\\Gamma\\vDash \\phi$ to say that $\\phi$ is a consequence of the $\\Gamma$'s; and we write $\\Gamma\\nvDash \\phi$ to say that $\\phi$ is \\emph{not} a consequence of the $\\Gamma$'s. Formally, we define the relation $\\vDash$ on the formulas as follows:\t\t\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item $\\Gamma\\vDash\\phi$ iff for all valuations $v$, if $\\llbracket\\psi\\rrbracket_v=1$, for all $\\psi\\in\\Gamma$, then $\\llbracket\\phi\\rrbracket_v=1$.\n\t\t\n\t\t\\end{itemize}\nAlternatively, using the property of truth in a model, we can define the relation as follows:\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item $\\Gamma\\vDash\\phi$ iff for all valuations $v$, if $v\\vDash\\psi$, for all $\\psi\\in\\Gamma$, then $v\\vDash\\phi$.\n\t\t\n\t\t\\end{itemize}\nIt's an almost immediate consequence of Proposition 5.1.13 that these two definitions are equivalent. So, we're not going to bother proving it here. \n\nWhen $\\Gamma\\vDash \\phi$, we also say that the $\\Gamma$'s \\emph{entail} $\\phi$ or that the $\\Gamma$'s \\emph{imply} $\\phi$. Ultimately, the notion of logical consequence is going to be an \\emph{auxiliary} notion in our account of valid inference. The idea is that an inference $\\Gamma\\therefore \\phi$, couched in a formal language, is valid iff $\\Gamma\\vDash \\phi$. Note that $\\Gamma\\therefore \\phi$ is \\emph{not} a formula of $\\mathcal{L}$ (Ask yourself: Is $\\therefore$ a symbol of $\\mathcal{L}$? No! So simply apply Proposition 4.2.5 to conclude that $\\Gamma\\therefore \\phi\\notin\\mathcal{L}$.) The expression $\\Gamma\\therefore \\phi$ is an inference couched in a formal language: it's a model for a natural language inference, a piece of reasoning, and not itself a sentence. What we do is to give an account of when $\\Gamma\\therefore \\phi$ is valid (iff $\\Gamma\\vDash\\phi$), which we use as a model for the validity of natural language inferences.\n\n\t\\item Let's consider some examples. When we're writing out claims of consequence, it's common to leave out the set-brackets before the $\\vDash$ for ease of exposition. So, instead of the more proper $\\{p,q\\}\\vDash p\\land q$, we'll typically write $p,q\\vDash p\\land q$.\n\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item \\emph{Claim}. $p,q\\vDash p\\land q$\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe need to prove that for each valuation $v$, if $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=1$, then $\\llbracket p\\land q\\rrbracket_v=1$. So, let $v$ be an arbitrary valuation such that $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=1$. Consider $\\llbracket p\\land q\\rrbracket_v$. We know that $\\llbracket p\\land q\\rrbracket_v=min(\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)$. 
But since $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=1$, we have that $min(\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)=min(1,1)=1$, which is what we needed to show.\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}. $p\\land q\\vDash p$ and $p\\land q\\vDash q$\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe only show $p\\land q\\vDash p$, since the proof for  $p\\land q\\vDash q$ is strictly analogous. We need to show that for each valuation $v$, if $\\llbracket p\\land q\\rrbracket_v=1$, then $\\llbracket p\\rrbracket_v=1$. So, assume that $v$ is a valuation and $\\llbracket p\\land q\\rrbracket_v=1$. We know that $\\llbracket p\\land q\\rrbracket_v=min(\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)$. But the only way that $min(\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)=1$ is that both $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=1$. Hence $\\llbracket p\\rrbracket_v=1$, as desired.\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}. $p\\vDash p\\lor q$ and $q\\vDash p\\lor q$.\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe only show $p\\vDash p\\lor q$, since the proof for $q\\vDash p\\lor q$ is strictly analogous. We need to show that for each valuation $v$, if $\\llbracket p\\rrbracket_v=1$, then $\\llbracket p\\lor q\\rrbracket_v=1$. So let $v$ be a valuation such that $\\llbracket p\\rrbracket_v=1$. Consider $\\llbracket p\\lor q\\rrbracket_v$. We know that $\\llbracket p\\lor q\\rrbracket_v=max(\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v)$. But since $\\llbracket p\\rrbracket_v=1$, $max(\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v)=max(1, \\llbracket q\\rrbracket_v)$. Now note that whatever the value of $\\llbracket q\\rrbracket_v$, we have that $max(1, \\llbracket q\\rrbracket_v)=1$, which is what we needed to show. If we want to be \\emph{super} precise, we can argue using distinction by cases. There are two exhaustive cases (a) $\\llbracket q\\rrbracket_v=1$ and (b) $\\llbracket q\\rrbracket_v=0$. In case (a), we have $max(1, \\llbracket q\\rrbracket_v)=max(1,1)=1$. And in case (b), we have $max(1, \\llbracket q\\rrbracket_v)=max(1,0)=1$. So, either way, $max(1, \\llbracket q\\rrbracket_v)=1$.\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}. $p\\lor q, \\neg p\\vDash q$ (remember inference (1) from \\S1)\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe need to prove that for each valuation $v$, if $\\llbracket p\\lor q\\rrbracket_v=1$ and $\\llbracket \\neg p\\rrbracket_v=1$, then also $\\llbracket q\\rrbracket_v=1$. So let $v$ be a valuation and suppose that $\\llbracket p\\lor q\\rrbracket_v=1$ and $\\llbracket \\neg p\\rrbracket_v=1$. Since $\\llbracket \\neg p\\rrbracket_v=1-\\llbracket p\\rrbracket_v$, it follows that $\\llbracket p\\rrbracket_v=0$. We furthermore know that $\\llbracket p\\lor q\\rrbracket_v=max(\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)$. Since  $\\llbracket p\\rrbracket_v=0$, it follows that $\\llbracket p\\lor q\\rrbracket_v=max(0, \\llbracket q\\rrbracket_v)$. But $\\llbracket p\\lor q\\rrbracket_v=1$, by assumption, and we can only have $max(0, \\llbracket q\\rrbracket_v)=1$ if $\\llbracket q\\rrbracket_v=1$. So we can conclude that $\\llbracket q\\rrbracket_v=1$, as desired.\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}. $p,p\\to q\\vDash q$\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe need to prove that for each valuation $v$, if $\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=1$, then also $\\llbracket q\\rrbracket_v=1$. 
We could prove this directly, using similar reasoning as in (iii), but we'll use another argument form for illustrative purposes---we're going to prove the claim indirectly. So assume that it's \\emph{not} the case that for each valuation $v$, if $\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=1$, then also $\\llbracket q\\rrbracket_v=1$. This would mean that there \\emph{exists} $v$, such that  $\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=1$, but $\\llbracket q\\rrbracket_v=0$. Remember that $\\llbracket p\\to q\\rrbracket_v=max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)$. By assumption $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=0$. So we can infer that $max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)=max(1-1, 0)=max(0, 0)=0$. But also by assumption, $\\llbracket p\\to q\\rrbracket_v=max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)=1$. So, we have arrived at a contradiction: $\\llbracket p\\to q\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=0$. So, we can conclude that our assumption for indirect proof---that there exists $v$, such that  $\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=1$, but $\\llbracket q\\rrbracket_v=0$---is false. But this just means that for \\emph{each} valuation $v$, if $\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=1$, then also $\\llbracket q\\rrbracket_v=1$.\n\t\t\t\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}. $\\neg p, p\\leftrightarrow q\\vDash \\neg q$. \n\t\t\t\\begin{proof}\n\t\t\t  We need to prove that for each valuation $v$,\n\t\t\t  if\n\t\t\t  $\\llbracket \\neg p\\rrbracket_v=1$\n\t\t\t  and\n\t\t\t  $\\llbracket p\\leftrightarrow q\\rrbracket_v=1$,\n\t\t\t  then also\n\t\t\t  $\\llbracket \\neg q\\rrbracket_v=1$.\n\t\t\t  So let $v$ be a valuation and suppose that\n\t\t\t  $\\llbracket \\neg p\\rrbracket_v=1$\n\t\t\t  and\n\t\t\t  $\\llbracket p\\leftrightarrow q\\rrbracket_v=1$.\n\t\t\t  Since\n\t\t\t  $\\llbracket \\neg p\\rrbracket_v=1-\\llbracket p\\rrbracket_v$,\n\t\t\t  it follows that\n\t\t\t  $\\llbracket p\\rrbracket_v=0$.\n\t\t\t  We furthermore know that \\[\\llbracket(p\\leftrightarrow q)\\rrbracket_v=\\begin{cases} 1 & \\text{if } \\llbracket p\\rrbracket_v=\\llbracket q\\rrbracket_v\\\\0&\\text{otherwise}\\end{cases}.\\]\n\t\t\t  So,\n\t\t\t  since\n\t\t\t  $\\llbracket p\\leftrightarrow q\\rrbracket_v=1$,\n\t\t\t  it follows that\n\t\t\t  $\\llbracket q\\rrbracket_v=\\llbracket p\\rrbracket_v=0$.\n\t\t\t  But since\n\t\t\t  $\\llbracket \\neg q\\rrbracket_v=1-\\llbracket q\\rrbracket_v$,\n\t\t\t  we get\n\t\t\t  $\\llbracket \\neg q\\rrbracket_v=1$,\n\t\t\t  as desired.\n\t\t\t\n\t\t\t\\end{proof}\t\t\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t  \\item Note that each of the examples in 5.2.3 is a claim about \\emph{concrete} formulas, i.e.\\ specific formulas of a fixed language.\n\t\t\tBut an important aspect of logic is to figure out logical \\emph{laws}: \\emph{patterns} of valid inferences.\n\t\t\tAn example of such a pattern is that $\\phi,\\psi\\therefore(\\phi\\land\\psi)$ is valid for all $\\phi,\\psi\\in\\mathcal{L}$.\n\t\t\tRemember from 5.2.2 that the idea is that $\\phi,\\psi\\therefore(\\phi\\land\\psi)$ is valid iff $\\phi,\\psi\\vDash (\\phi\\land\\psi)$.\n\t\t\tSo, what we have to prove in order to prove the logical law is that for all $\\phi,\\psi\\in\\mathcal{L}$, $\\phi,\\psi\\vDash(\\phi\\land\\psi)$.\n\t\t\tHow would we do something like this?\n\t\t\tWell, even 
though this is a claim about formulas, we don't need to use induction to establish this.\n\t\t\tWe can simply use the reasoning from 5.2.3.(i) and replace $p$ with $\\phi$ and $q$ with $\\psi$.\n\t\t\tTo be perfectly explicit, here is the argument:\n\t\t\t\\begin{proof}\n\t\t\t  Let $\\phi,\\psi\\in\\mathcal{L}$ be two formulas.\n\t\t\t  We need to prove that for each valuation $v$, if $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\psi\\rrbracket_v=1$, then $\\llbracket \\phi\\land \\psi\\rrbracket_v=1$.\n\t\t\t  So, let $v$ be an arbitrary valuation such that $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\psi\\rrbracket_v=1$.\n\t\t\t  Consider $\\llbracket \\phi\\land \\psi\\rrbracket_v$.\n\t\t\t  We know that $\\llbracket \\phi\\land \\psi\\rrbracket_v=min(\\llbracket \\phi\\rrbracket_v, \\llbracket \\psi\\rrbracket_v)$.\n\t\t\t  But since $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\psi\\rrbracket_v=1$, we have that $min(\\llbracket \\phi\\rrbracket_v, \\llbracket \\psi\\rrbracket_v)=min(1,1)=1$, which is what we needed to show.\n\t\t\t\\end{proof}\nIn a similar way, we can also transform the other examples from 5.2.3 into logical laws. \n\nThere is, however, another kind of logical law, which requires a slightly more general approach. Suppose that you know that $\\Gamma$ entails $\\phi$ and that $\\phi$ together with $\\Delta$ entails $\\psi$, for some formulas $\\phi,\\psi\\in\\mathcal{L}$ and sets of formulas $\\Gamma,\\Delta\\subseteq\\mathcal{L}$. Can it be that $\\Gamma$ and $\\Delta$ together \\emph{don't} entail $\\psi$? The answer seems to be: obviously not! If $\\phi$ is true whenever the $\\Gamma$'s are true and $\\psi$ is true whenever $\\phi$ and the $\\Delta$'s are true, then it seems to follow that $\\psi$ is true whenever the $\\Gamma$'s and the $\\Delta$'s are true. This is the law of the \\emph{transitivity} of logical consequence. It's an example of a more general kind of logical law, which we need to prove in a slightly more complicated way. Let's do this as an example:\n\t\t\n\t\t\\begin{proposition}\n\t\tFor all $\\Gamma,\\Delta\\subseteq\\mathcal{L}$ and $\\phi,\\psi\\in\\mathcal{L}$, if $\\Gamma\\vDash\\phi$ and $\\{\\phi\\}\\cup\\Delta\\vDash\\psi$, then $\\Gamma\\cup\\Delta\\vDash\\psi$.\n\t\t\\end{proposition}\n\t\t\\begin{proof}\n\t\tLet $\\Gamma,\\Delta\\subseteq\\mathcal{L}$ and $\\phi,\\psi\\in\\mathcal{L}$ be arbitrary. We need to prove that  if $\\Gamma\\vDash\\phi$ and $\\{\\phi\\}\\cup\\Delta\\vDash\\psi$, then $\\Gamma\\cup\\Delta\\vDash\\psi$. So, suppose, for conditional proof, that $\\Gamma\\vDash\\phi$ and $\\{\\phi\\}\\cup\\Delta\\vDash\\psi$. This means that (a) for each valuation $v$, if $\\llbracket\\theta\\rrbracket_v=1$ for all $\\theta\\in\\Gamma$, then $\\llbracket \\phi\\rrbracket_v=1$, and (b) for each valuation $v$, if $\\llbracket\\theta\\rrbracket_v=1$ for all $\\theta\\in\\Delta\\cup\\{\\phi\\}$, then $\\llbracket \\psi\\rrbracket_v=1$. We want to show that $\\Gamma\\cup\\Delta\\vDash\\psi$, i.e. for all $v$, if $\\llbracket \\theta\\rrbracket_v=1$ for all $\\theta\\in\\Gamma\\cup\\Delta,$ then $\\llbracket\\psi\\rrbracket_v=1$. So, let $v$ be an arbitrary valuation and suppose that (c) $\\llbracket \\theta\\rrbracket_v=1$ for all $\\theta\\in\\Gamma\\cup\\Delta$. Since $\\Gamma\\subseteq \\Gamma\\cup \\Delta$, we can infer that $\\llbracket \\theta\\rrbracket_v=1$ for all $\\theta\\in\\Gamma$. By (a), this means that $\\llbracket \\phi\\rrbracket_v=1$. 
And since also $\\Delta\\subseteq \\Gamma\\cup \\Delta$, we can infer from (c) that $\\llbracket \\theta\\rrbracket_v=1$ for all $\\theta\\in\\Delta$. Putting the last two observations together, we get that $\\llbracket\\theta\\rrbracket_v=1$ for all $\\theta\\in\\Delta\\cup\\{\\phi\\}$. Finally by (b), we can infer from this that $\\llbracket\\psi\\rrbracket_v=1$, as desired.\n\t\t\\end{proof}\nWhat is this more general law good for? Well, it allows you to infer consequence claims from other consequence claims that you've already proven. We know, for example, that $p, p\\to q\\vDash q$ (5.2.3.v) and that $p,q\\vDash p\\land q$ (5.2.3.i). So, using our proposition, we can infer that $p,p\\to q\\vDash p\\land q$. Note that we implicitly made use of our set notation here: strictly speaking  $p, p\\to q\\vDash q$ should be written  $\\{p, p\\to q\\}\\vDash q$ and $p,q\\vDash p\\land q$ should be written $\\{p,q\\}\\vDash p\\land q$. So, what we can infer using our proposition is that $\\{p, p\\to q\\}\\cup \\{p\\}\\vDash p\\land q$. But $\\{p, p\\to q\\}\\cup \\{p\\}=\\{p,p\\to q\\}$ (remember, with sets, repetition doesn't matter). So, $\\{p, p\\to q\\}\\cup \\{p\\}\\vDash p\\land q$ can be written $p,p\\to q\\vDash p\\land q$.\n\n\t\t\\item If two formulas $\\phi$ and $\\psi$ are consequences of each other, i.e. if $\\phi\\vDash\\psi$ and $\\psi\\vDash\\phi$, then we say that they are \\emph{logically equivalent}. We write $\\phi\\equi \\psi$ to say that $\\phi$ and $\\psi$ are equivalent. For example, we have that $p\\equi \\neg\\neg p$. To see this, note that for any valuation $v$, we have that $\\llbracket \\neg\\neg p\\rrbracket_v=1-\\llbracket \\neg p\\rrbracket_v=1-(1-\\llbracket p\\rrbracket_v)=\\llbracket p\\rrbracket_v$. So, $p$ and $\\neg\\neg p$ have the \\emph{same} truth-value under every valuation. As a consequence, we have that both $p\\vDash \\neg\\neg p$ and $\\neg \\neg p\\vDash p$, i.e. $p\\equi \\neg\\neg p$. This point actually generalizes: two formulas are logically equivalent iff they have the same truth-value under each valuation:\n\t\t\\begin{proposition}\n\t\tLet $\\phi,\\psi\\in\\mathcal{L}$ be formulas. Then $\\phi\\equi\\psi$ iff for all valuations $v$, $\\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v$.\n\t\t\\end{proposition}\n\t\t\\begin{proof}\n\t\tWe prove the two directions in turn:\n\t\t\n\t\t\t\\begin{itemize}\n\t\t\t\n\t\t\t\t\\item (\\emph{Left-to-right}): Suppose that $\\phi\\equi\\psi$, i.e. both $\\phi\\vDash\\psi$ and $\\psi\\vDash\\phi$. We need to show that for all valuations $v$, $\\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v$. So, let $v$ be an arbitrary valuation. There are two exhaustive possibilities: (a) $\\llbracket\\phi\\rrbracket_v=1$ and (b) $\\llbracket \\phi\\rrbracket_v=0$. In case (a), we can infer that $\\llbracket\\psi\\rrbracket_v=1$ from the fact that $\\phi\\vDash\\psi$ (i.e. for all $v$, if $\\llbracket\\phi\\rrbracket_v=1$, then $\\llbracket\\psi\\rrbracket_v=1$). Hence $\\llbracket \\phi\\rrbracket_v=1=\\llbracket \\psi\\rrbracket_v$, as desired. Can it be in case (b) that $\\llbracket\\psi\\rrbracket_v=1$? Well, if this were the case, then by $\\psi\\vDash\\phi$, we'd have that $\\llbracket\\phi\\rrbracket_v=1$ contrary to our assumption that $\\llbracket\\phi\\rrbracket_v=0$. Hence, by indirect proof, we have to have $\\llbracket\\psi\\rrbracket_v=0$. So $\\llbracket \\phi\\rrbracket_v=0=\\llbracket \\psi\\rrbracket_v$, as desired. 
\n\t\t\t\t\n\t\t\t\t\\item (\\emph{Right-to-left}): Suppose that for all valuations $v$, $\\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v$. We need to show that $\\phi\\equi\\psi$, i.e. both $\\phi\\vDash\\psi$ and $\\psi\\vDash\\phi$. We only show $\\phi\\vDash\\psi$, since the proof for  $\\psi\\vDash\\phi$ is strictly analogous. To prove that $\\phi\\vDash\\psi$, we need to show that for all valuations $v$, if $\\llbracket\\phi\\rrbracket_v=1$, then $\\llbracket\\psi\\rrbracket_v=1$. So, let $v$ be an arbitrary valuation with $\\llbracket\\phi\\rrbracket_v=1$. But since, by assumption, $\\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v$, it follows immediately that $\\llbracket\\psi\\rrbracket_v=1$, as desired.\n\t\t\t\\end{itemize}\n\t\t\n\t\t\\end{proof}\nThe concept of equivalence is of fundamental importance for logical theory. To see this, note that all that matters for validity is the truth or falsity of statements: an inference is valid iff in every model where the premises are true, so is the conclusion. But then, if two statements are logically equivalent, they can be substituted for each other in all reasoning contexts without destroying validity: if an inference is valid, then so is any inference where some formulas have been replaced with logically equivalent ones. For example, since $p,q\\therefore p\\land q$ is valid, so is $\\neg\\neg p,q\\therefore p\\land q$.\n\n\t\t\\item We shall now state a series of logical laws, not all of which we're going to prove---well, you will as an exercise. It's important that you've seen these laws and understand them since they are often alluded to in formal reasoning, inside and outside the context of logic. \n\t\t\n\t\t\\begin{lemma}[Propositional Laws]\n\t\tThe following laws hold for all $\\phi,\\psi,\\theta\\in\\mathcal{L}$ and all $\\Gamma,\\Delta\\subseteq\\mathcal{L}$:\n\t\t\n\t\t\t\\begin{enumerate}[(i)]\n\t\t\t\n\t\t\t\t\\item $\\phi\\vDash \\phi$ \\hfill (Reflexivity)\n\t\t\t\t\n\t\t\t\t\\item If $\\Gamma\\vDash\\phi$ and $\\{\\phi\\}\\cup\\Delta\\vDash\\psi$, then $\\Gamma\\cup\\Delta\\vDash\\psi$.\\hfill (Transitivity)\n\t\t\t\n\t\t\t\t\\item If $\\Gamma\\vDash\\phi$, then $\\Gamma\\cup\\Delta\\vDash\\phi$. \\hfill(Monotonicity)\n\t\t\t\t\n\t\t\t\t\\item $\\phi,\\psi\\vDash\\phi\\land \\psi$ \\hfill(Conjunction Introduction)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\land\\psi\\vDash\\phi$ and $\\phi\\land\\psi\\vDash\\psi$ \\hfill (Conjunction Elimination)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\vDash\\phi\\lor\\psi$ and $\\psi\\vDash\\phi\\lor \\psi$ \\hfill (Disjunction Introduction)\n\t\t\t\t\n\t\t\t\t\\item If $\\phi\\vDash\\theta$ and $\\psi\\vDash\\theta$, then $\\phi\\lor\\psi\\vDash\\theta$. 
\\hfill (Disjunction Elimination)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\land\\phi\\equi\\phi$ \\hfill (Idempotence)\n\t\t\t\t\n\t\t\t\t\\item  $\\phi\\lor\\phi\\equi\\phi$ \\hfill (Idempotence)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\land\\psi\\equi \\psi\\land\\phi$ \\hfill (Commutativity)\n\t\t\t\t\\item $\\phi\\lor\\psi\\equi \\psi\\lor\\phi$ \\hfill (Commutativity)\n\t\t\t\t\n\t\t\t\t\\item $(\\phi\\land\\psi)\\land\\theta\\equi \\phi\\land(\\psi\\land\\theta)$ \\hfill (Associativity)\n\t\t\t\t\n\t\t\t\t\\item $(\\phi\\lor\\psi)\\lor\\theta\\equi \\phi\\lor(\\psi\\lor\\theta)$ \\hfill (Associativity)\n\t\t\t\n\t\t\t\t\\item $\\phi\\land(\\psi\\lor\\theta)\\equi (\\phi\\land\\psi)\\lor(\\phi\\land\\theta)$ \\hfill (Distributivity)\n\t\t\t\t\\item $\\phi\\lor(\\psi\\land\\theta)\\equi (\\phi\\lor\\psi)\\land(\\phi\\lor\\theta)$ \\hfill (Distributivity)\n\t\t\t\t\n\t\t\t\t\\item $\\neg\\neg \\phi\\equi \\phi$ \\hfill (Double Negation)\n\t\t\t\t\n\t\t\t\t\\item $\\neg(\\phi\\land\\psi)\\equi \\neg\\phi\\lor\\neg\\psi$ \\hfill (De Morgan's Law)\n\t\t\t\t\n\t\t\t\t\\item  $\\neg(\\phi\\lor\\psi)\\equi \\neg\\phi\\land\\neg\\psi$ \\hfill (De Morgan's Law)\n\t\t\t\t\t\t\n\t\t\t\t\\item $\\neg\\phi,\\phi\\lor \\psi\\vDash\\psi$ \\hfill (Disjunctive Syllogism)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\to\\psi\\equi \\neg\\phi\\lor\\psi$ \\hfill (Conditional Definition)\n\t\t\t\n\t\t\t\t\\item $\\phi\\to\\psi\\equi\\neg\\psi\\to\\neg \\phi$ \\hfill (Contraposition)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\to\\psi,\\phi\\vDash\\psi$ \\hfill (Modus Ponens)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\to\\psi,\\neg\\psi\\vDash\\neg\\phi$ \\hfill (Modus Tollens)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\leftrightarrow\\psi\\equi (\\phi\\to\\psi)\\land(\\psi\\to\\phi)$ \\hfill (Biconditional Introduction)\n\t\t\t\t\n\t\t\t\t\\item $\\phi\\leftrightarrow\\psi\\equi \\neg\\phi\\leftrightarrow\\neg\\psi$ \\hfill (Biconditional Contraposition)\n\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\tWe leave all but (vii) as an exercise. We prove (vii) here because it allows us to understand the idea of proof by cases better. \n\t\t\n\t\tWe want to show that if $\\phi\\vDash\\theta$ and $\\psi\\vDash\\theta$, then $\\phi\\lor\\psi\\vDash\\theta$. So, suppose that $\\phi\\vDash\\theta$ and $\\psi\\vDash\\theta$. This means that (a) for all valuations $v$, if $\\llbracket \\phi\\rrbracket_v=1,$ then $\\llbracket\\theta\\rrbracket_v=1$; and (b) for all valuations $v$, if $\\llbracket \\psi\\rrbracket_v=1,$ then $\\llbracket\\theta\\rrbracket_v=1$.\tIn order to derive $\\phi\\lor\\psi\\vDash\\theta$, we need to show that for all valuations $v$, if $\\llbracket \\phi\\lor\\psi\\rrbracket_v=1,$ then $\\llbracket\\theta\\rrbracket_v=1$. So, let $v$ be a valuation such that $\\llbracket \\phi\\lor\\psi\\rrbracket_v=1$. Since $\\llbracket \\phi\\lor\\psi\\rrbracket_v=1$ and $\\llbracket \\phi\\lor\\psi\\rrbracket_v=max(\\llbracket \\phi\\rrbracket_v,\\llbracket \\psi\\rrbracket_v)$, we can distinguish two exhaustive cases (c) $\\llbracket \\phi\\rrbracket_v=1$ and (d) $\\llbracket \\psi\\rrbracket_v=1$. We show that in each case $\\llbracket\\theta\\rrbracket_v=1$.\n\t\t\\begin{enumerate}[(a)]\n\t\t\\setcounter{enumii}{2}\n\t\t\t\\item Let $\\llbracket \\phi\\rrbracket_v=1$. By (a) this directly  gives us $\\llbracket\\theta\\rrbracket_v=1$.\n\t\t\t\n\t\t\t\\item Let $\\llbracket \\psi\\rrbracket_v=1$. 
By (b), we can directly infer that $\\llbracket\\theta\\rrbracket_v=1$.\n\t\t\n\t\t\\end{enumerate}\n\t\tHence, either way, assuming that $\\llbracket \\phi\\lor\\psi\\rrbracket_v=1$, we get that $\\llbracket\\theta\\rrbracket_v=1$, which is what we needed to show.\n\t\t\t\t\n\t\t\\end{proof}\n\t\t\n\t\tNote that the law of Associativity justifies our notational convention of leaving out parentheses in sequences of $\\land$'s and sequences of $\\lor$'s (4.5.4).\n\t\t\n\t\t\\item A nice thing about the previously stated laws is that you can use them to derive further laws. For example, you might suspect that $\\phi\\land\\psi\\equi \\neg(\\neg\\phi\\lor\\neg \\psi)$. You can actually derive this law from the laws of Double Negation and de Morgan's Law. Here's how (for the derivation, note that the laws hold for \\emph{all} $\\phi$):\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item By Double Negation and the fact that we can replace logical equivalents, we get $\\phi\\land\\psi\\equi \\neg\\neg\\phi\\land\\neg\\neg\\psi$. \t\t\t\n\t\t\t\\item By de Morgan's law (and replacing logical equivalents), we get that $ \\neg\\neg\\phi\\land\\neg\\neg\\psi\\equi \\neg(\\neg\\phi\\lor\\neg\\psi)$.\n\t\t\t\t\t\n\t\t\t\\item Using transitivity a bunch of times, we get  $\\phi\\land\\psi\\equi\\neg(\\neg\\phi\\lor\\neg\\psi)$.\t\t\n\t\t\\end{itemize}\n\t\tThese kinds of derivations will be the topic of the next chapter, on proof theory. Here the point simply is that the laws given above are, essentially, laws of reasoning. And we can \\emph{prove them} to be correct. \n\t\t\t\t\n\t\t\\item Note that it's a simple consequence of the definition of $\\vDash$ that $\\Gamma\\nvDash\\phi$ iff there exists a valuation $v$, such that $\\llbracket\\psi\\rrbracket_v=1$, for all $\\psi\\in\\Gamma$, but $\\llbracket\\phi\\rrbracket_v=0$. This gives us the standard method for showing that some formulas $\\Gamma$ \\emph{don't} entail a given conclusion $\\phi$: we produce a \\emph{countermodel}, i.e. a valuation $v$, such that $\\llbracket\\psi\\rrbracket_v=1$, for all $\\psi\\in\\Gamma$, but $\\llbracket\\phi\\rrbracket_v=0$. Here are some examples. For simplicity, we assume that $\\mathcal{P}=\\{p,q,r\\}$.\n\t\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item \\emph{Claim}. $p\\lor q, p\\nvDash \\neg q$\n\t\t\t\n\t\t\t\\emph{Countermodel}. Any $v$ such that $v(p)=1$ and $v(q)=1$. If $v(p)=1$ and $v(q)=1$, then both $\\llbracket p\\rrbracket_v=1$ and $\\llbracket q\\rrbracket_v=1$. And $\\llbracket p\\lor q\\rrbracket_v=max(\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v)=max(1, 1)=1$. But $\\llbracket q\\rrbracket_v=1$ and $\\llbracket \\neg q\\rrbracket_v=1-\\llbracket q\\rrbracket_v$, so $\\llbracket \\neg q\\rrbracket_v=0$.\n\t\t\n\t\t\t\\item \\emph{Claim}. $p\\to q, q\\nvDash p$  (remember inference (2) from the introduction)\n\t\t\t\n\t\t\t\\emph{Countermodel}. Any $v$ such that $v(p)=0$ and $v(q)=1$. If $v(p)=0$ and $v(q)=1$, then $\\llbracket p\\rrbracket_v=0$ and $\\llbracket q\\rrbracket_v=1$. Since $\\llbracket p\\to q\\rrbracket_v=max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)$, we have that $\\llbracket p\\to q\\rrbracket_v=max(1-0,1)=max(1,1)=1$. But $\\llbracket p\\rrbracket_v=0$.\n\t\t\n\t\t\t\\item \\emph{Claim}. $p\\to q, \\neg p\\nvDash \\neg q$\n\t\t\t\n\t\t\t\\emph{Countermodel}: Any $v$ such that $v(p)=0$ and $v(q)=1$. If $v(p)=0$, then $\\llbracket p\\rrbracket_v=0$.
So $\\llbracket \\neg p\\rrbracket_v=1-\\llbracket p\\rrbracket_v=1$ and $\\llbracket p\\to q\\rrbracket_v=max(1-\\llbracket p\\rrbracket_v, \\llbracket q\\rrbracket_v)=max(1-0, \\llbracket q\\rrbracket_v)=1$. But since $v(q)=1$, $\\llbracket q\\rrbracket_v=1$, and so  $\\llbracket \\neg q\\rrbracket_v=1-\\llbracket q\\rrbracket_v=0$.\n\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item Corresponding to the positive laws of logic we discussed above---laws about what follows from what---there are also \\emph{negative} laws of logic---what \\emph{doesn't} follow from what. These are also known as \\emph{logical fallacies}, mistakes of reasoning. The examples (i--iii) of the previous point are instances of the most commonly known propositional fallacies. The examples we gave show that:\n\t\t\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item There are formulas $\\phi,\\psi\\in\\mathcal{L}$, such that $\\phi\\lor\\psi,\\phi\\nvDash\\neg\\psi$. So you can't necessarily infer from $\\phi\\lor\\psi$ and $\\phi$ that $\\neg\\psi$. If you do so anyway in a case where $\\phi\\lor\\psi,\\phi\\nvDash\\neg\\psi$, you've committed the fallacy of \\emph{affirming the disjunct}.\n\t\t\t\n\t\t\t\\item There are formulas $\\phi,\\psi\\in\\mathcal{L}$, such that $\\phi\\to\\psi,\\psi\\nvDash\\phi$. So you can't necessarily infer from $\\phi\\to\\psi$ and $\\psi$ that $\\phi$. If you do so anyway in a case where $\\phi\\to\\psi,\\psi\\nvDash\\phi$, you've committed the fallacy of \\emph{affirming the consequent}.\n\t\t\t\n\t\t\t\\item There are formulas $\\phi,\\psi\\in\\mathcal{L}$, such that $\\phi\\to\\psi,\\neg\\phi\\nvDash\\neg\\psi$. So you can't necessarily infer from $\\phi\\to\\psi$ and $\\neg\\phi$ that $\\neg\\psi$. If you do so anyway in a case where $\\phi\\to\\psi,\\neg\\phi\\nvDash\\neg\\psi$, then you've committed the fallacy of \\emph{denying the antecedent}.\n\t\t\n\t\t\\end{itemize}\n\t\t\n\tNote that the fallacies are not formulated as neatly as the positive laws. The reason is that it's \\emph{not} the case, for example, that for all $\\phi,\\psi\\in\\mathcal{L}$, we have that $\\phi\\lor\\psi,\\phi\\nvDash\\neg\\psi$. To give a somewhat silly example, which nonetheless makes the point very clear, let $\\phi=p$ and $\\psi=\\neg p$. Then $\\phi\\lor\\psi,\\phi\\nvDash\\neg\\psi$ becomes $p\\lor\\neg p, p\\nvDash \\neg\\neg p$. But it's easy to see, using the laws of logic, that this is false. By Double Negation, $p\\vDash \\neg\\neg p$. And by Monotonicity, from this we get $p\\lor\\neg p, p\\vDash \\neg\\neg p$. So, \\emph{in this specific case}, you can reason by affirming the disjunct. The point is that, in contrast to the laws of logic, you can't \\emph{always} reason like this. \n\t\n\t\\item Remember from the introduction that as a consequence of bivalence, in classical logic there are \\emph{logical truths}---statements that are true in every possible situation---and \\emph{logical falsehoods}---statements that are false in every possible situation. We'll now make these notions precise. Note that we defined the expression $\\Gamma\\vDash \\phi$ for \\emph{any} set $\\Gamma$ and formula $\\phi$. But what if $\\Gamma=\\emptyset$? Well, let's see what happens when we apply the definition:\n\t\\begin{itemize}\n\t\n\t\t\\item $\\emptyset\\vDash\\phi$ iff for all valuations $v$, if $\\llbracket\\psi\\rrbracket_v=1$ for all $\\psi\\in\\emptyset$, then $\\llbracket\\phi\\rrbracket_v=1$. \n\t\n\t\\end{itemize}\nBut wait a moment, $\\emptyset$ has no members. So there is no $\\psi$ such that $\\psi\\in \\emptyset$.
What does this mean for us? Well, that \\emph{every} valuation $v$ is such that $\\llbracket\\psi\\rrbracket_v=1$ for all $\\psi\\in\\emptyset$. To see this, let's think about what it would mean for it to be false under some valuation $v$ that $\\llbracket\\psi\\rrbracket_v=1$ for all $\\psi\\in\\emptyset$. Well, it would mean that there is a $\\psi\\in\\emptyset$ such that $\\llbracket\\psi\\rrbracket_v=0$. But there is no $\\psi\\in\\emptyset$. So, it can't be false that $\\llbracket\\psi\\rrbracket_v=1$ for all $\\psi\\in\\emptyset$. But that means that all we require for $\\emptyset\\vDash\\phi$ is that $\\llbracket\\phi\\rrbracket_v=1$, for all valuations $v$. This is the notion of logical truth made precise: a formula is a \\emph{logical truth} iff it is a consequence of the empty set. In formal logic, we also call logical truths \\emph{validities}. So, a formula that is true under each valuation can also be called a \\emph{valid} formula. As a matter of notation, note that $\\emptyset$ can also be written $\\{\\}$. So, $\\emptyset\\vDash\\phi$ can also be written $\\{\\}\\vDash\\phi$. But we've said that we typically leave out the set-braces in consequence claims, so $\\emptyset\\vDash\\phi$ can simply be written as $\\vDash\\phi$.\n\n\t\\item Let's consider some examples:\n\t\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item \\emph{Claim}: $\\vDash p\\lor\\neg p$\n\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe need to prove that for all valuations $v$, we have that $\\llbracket p\\lor\\neg p\\rrbracket_v=1$. So let $v$ be an arbitrary valuation. Since $\\llbracket p\\lor\\neg p\\rrbracket_v=max(\\llbracket p\\rrbracket_v,\\llbracket\\neg p\\rrbracket_v)$ and $\\llbracket \\neg p\\rrbracket_v=1-\\llbracket p\\rrbracket_v$, we get that $\\llbracket p\\lor\\neg p\\rrbracket_v=max(\\llbracket p\\rrbracket_v,1-\\llbracket p\\rrbracket_v)$. Now we can distinguish two exhaustive cases: (a) $\\llbracket p\\rrbracket_v=1$ and (b) $\\llbracket p\\rrbracket_v=0$. In case (a), we have $\\llbracket p\\lor\\neg p\\rrbracket_v=max(\\llbracket p\\rrbracket_v,1-\\llbracket p\\rrbracket_v)=max(1,1-1)=max(1,0)=1$. And in case (b), we have $\\llbracket p\\lor\\neg p\\rrbracket_v=max(\\llbracket p\\rrbracket_v,1-\\llbracket p\\rrbracket_v)=max(0, 1-0)=max(0,1)=1$. So, either way, $\\llbracket p\\lor\\neg p\\rrbracket_v=1,$ which is what we needed to show.\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\item \\emph{Claim}: $\\vDash (p\\to q)\\lor (q\\to p)$\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe need to prove that for all valuations $v$, we have that $\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v=1$. So, let $v$ be an arbitrary valuation and consider $\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v$. We know that $\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v=max(\\llbracket (p\\to q)\\rrbracket_v, \\llbracket (q\\to p)\\rrbracket_v)$. And since $ \\llbracket (p\\to q)\\rrbracket_v=max(1-\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v)$ and $ \\llbracket (q\\to p)\\rrbracket_v=max(1-\\llbracket q\\rrbracket_v,\\llbracket p\\rrbracket_v)$, we can conclude that: \\[\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v=max(max(1-\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v),max(1-\\llbracket q\\rrbracket_v,\\llbracket p\\rrbracket_v))\\] \n\t\t\tWe can again distinguish two exhaustive cases: (a) $\\llbracket p\\rrbracket_v=1$ and (b) $\\llbracket p\\rrbracket_v=0$.
In case (a), we get:\n\t\t\t\\begin{align*}\n\t\t\t\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v&=max(max(1-\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v),max(1-\\llbracket q\\rrbracket_v,\\llbracket p\\rrbracket_v))\\\\\n\t\t\t&=max(max(1-1,\\llbracket q\\rrbracket_v),max(1-\\llbracket q\\rrbracket_v,1))\\\\\n\t\t\t&=max(max(0,\\llbracket q\\rrbracket_v),\\underbrace{max(1-\\llbracket q\\rrbracket_v,1)}_{=1})\\tag{$\\ast$}\\\\\n\t\t\t&=max(max(0,\\llbracket q\\rrbracket_v),1)\\\\\n\t\t\t&=1\n\t\t\t\\end{align*}\nTo see the critical identity ($\\ast$), simply note that $max(x,1)$ for $x\\in\\{0,1\\}$ is always going to be $1$. In case (b), the reasoning is analogous:\n\\begin{align*}\n\t\t\t\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v&=max(max(1-\\llbracket p\\rrbracket_v,\\llbracket q\\rrbracket_v),max(1-\\llbracket q\\rrbracket_v,\\llbracket p\\rrbracket_v))\\\\\n\t\t\t&=max(max(1-0,\\llbracket q\\rrbracket_v),max(1-\\llbracket q\\rrbracket_v,0))\\\\\n\t\t\t&=max(\\underbrace{max(1,\\llbracket q\\rrbracket_v)}_{=1},max(1-\\llbracket q\\rrbracket_v,0))\\\\\n\t\t\t&=max(1,max(1-\\llbracket q\\rrbracket_v,0))\\\\\n\t\t\t&=1\n\t\t\t\\end{align*}\n\tSo, in either case, $\\llbracket (p\\to q)\\lor (q\\to p)\\rrbracket_v=1$, which is what we needed to show.\n\t\t\t\\end{proof}\n\t\tNote that in each of the two proofs we used a distinction by cases on $\\llbracket p\\rrbracket_v=1$ or $\\llbracket p\\rrbracket_v=0$---that is, we've used bivalence. This is not by accident. There is a precise sense (that we're not going to go into here) in which all validities of classical logic ultimately depend on bivalence. The takeaway is that if you want to prove that a formula is valid, it's always a good idea to try to use bivalence in the proof. The usual way to use bivalence is to make a distinction by cases as we did in the proof, but sometimes you can also use bivalence for proof by contradiction---deriving that both $\\llbracket \\phi\\rrbracket_v=1$ and $\\llbracket \\phi\\rrbracket_v=0$ for some $\\phi$ is a good aim when trying to prove something indirectly.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item So much for logical truth. How about logical \\emph{falsehood}? Well, remember that a sentence is a logical falsehood iff it's false in all possible situations. Formally, this means that a formula gets value zero under all valuations. But wait, we can already express this using our terminology. Note that for all valuations $v$, $\\llbracket\\phi\\rrbracket_v=0$ iff $\\llbracket \\neg\\phi\\rrbracket_v=1$---this follows directly from $\\llbracket \\neg\\phi\\rrbracket_v=1-\\llbracket \\phi\\rrbracket_v$. But that just means that $\\llbracket\\phi\\rrbracket_v=0$ for all valuations $v$ iff $\\llbracket\\neg \\phi\\rrbracket_v=1$ for all valuations $v$. But that just means that $\\neg\\phi$ is valid! So, we can formally understand logical falsehood in terms of logical truth: $\\phi$ being logically false simply means $\\vDash\\neg\\phi$. \n\t\t\n\t\t\\item Let's consider an example. Since proving that something is a logical falsehood amounts to proving that something (namely its negation) is a logical truth, we don't expect any new methods to be necessary. We'll simply show that $(p\\land\\neg p)$ is a logical falsehood. In order to show this, we establish\n\t\t\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item \\emph{Claim}. 
$\\vDash\\neg (p\\land\\neg p)$\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\tWe will actually not prove this from definitions, but rather using the laws of logic from 5.2.6 and the result $\\vDash p\\lor\\neg p$ from 5.2.11. For note that by de Morgan's law, we have that $\\neg (p\\land\\neg p)\\equi \\neg p\\lor \\neg\\neg p$. By Double Negation, $\\neg\\neg p\\equi p$ and so $\\neg p\\lor \\neg\\neg p\\equi \\neg p\\lor p$. By Commutativity, we have that $\\neg p\\lor p\\equi p\\lor \\neg p$. So, putting all of this together and using Transitivity a bunch of times, we get that $\\neg (p\\land\\neg p)\\equi p\\lor\\neg p$. And we know that $\\vDash p\\lor \\neg p$ from 5.2.11. By definition, this means that $\\llbracket p\\lor\\neg p\\rrbracket_v=1$ for all valuations $v$. Since $\\neg (p\\land\\neg p)\\equi p\\lor\\neg p$, by Proposition 5.2.5, we have that for every valuation $v$, $\\llbracket p\\lor\\neg p\\rrbracket_v=\\llbracket \\neg(p\\land\\neg p)\\rrbracket_v$. But that just means that $\\llbracket \\neg(p\\land\\neg p)\\rrbracket_v=1$, for all valuations $v$.\n\t\t\t\\end{proof}\n\t\t\n\t\t\\end{itemize} \n\t\n\t\t\t\\item We conclude our discussion of validity by stating and proving a theorem about the connection between consequence and the conditional: the so-called \\emph{deduction theorem}. We will first state and prove the theorem and then discuss its content:\n\t\t\t\\begin{theorem}[Deduction Theorem]\n\t\t\tLet $\\phi,\\psi\\in\\mathcal{L}$ be formulas and $\\Gamma\\subseteq\\mathcal{L}$ a set of formulas. Then the following two are equivalent:\n\t\t\t\\begin{enumerate}[1.]\n\t\t\t\n\t\t\t\t\\item $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$\n\t\t\t\t\n\t\t\t\t\\item $\\Gamma\\vDash \\phi\\to\\psi$\n\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\t\\end{theorem}\n\t\t\t\\begin{proof}\n\t\t\tWe need to show that if 1., then 2. and if 2., then 1. We do so in turn:\n\t\t\t\n\t\t\t\\begin{itemize}\n\t\t\t\n\t\t\t\t\\item Suppose that $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$. That means, by definition, that for all valuations $v$ such that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma\\cup\\{\\phi\\},$ we have $\\llbracket\\psi\\rrbracket_v=1$. We need to derive that $\\Gamma\\vDash \\phi\\to\\psi$, i.e. that for all valuations $v$ such that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma$, we have $\\llbracket\\phi\\to\\psi\\rrbracket_v=1$. So let $v$ be an arbitrary valuation such that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma$. We can distinguish two cases: (a) $\\llbracket \\phi\\rrbracket_v=1$ and (b) $\\llbracket \\phi\\rrbracket_v=0$. In each case, consider $\\llbracket\\phi\\to\\psi\\rrbracket_v$. So, remember that $\\llbracket\\phi\\to\\psi\\rrbracket_v=max(1-\\llbracket\\phi\\rrbracket_v,\\llbracket\\psi\\rrbracket_v)$. Let's look at the two cases:\n\t\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\t\n\t\t\t\t\t\\item If $\\llbracket \\phi\\rrbracket_v=1$, we can infer that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma\\cup\\{\\phi\\}$. Why? Well, because we've assumed that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma$ and we're considering the case where $\\llbracket \\phi\\rrbracket_v=1$. If $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma\\cup\\{\\phi\\}$, then since $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$, we get that $\\llbracket\\psi\\rrbracket_v=1$. Ok, now let's calculate $\\llbracket\\phi\\to\\psi\\rrbracket_v$. 
We have $\\llbracket\\phi\\to\\psi\\rrbracket_v=max(1-\\llbracket\\phi\\rrbracket_v,\\llbracket\\psi\\rrbracket_v)=max(1-1,1)=max(0,1)=1$.\n\t\t\t\t\t\n\t\t\t\t\t\\item If $\\llbracket \\phi\\rrbracket_v=0$, the situation is resolved even more quickly. Let's calculate $\\llbracket\\phi\\to\\psi\\rrbracket_v$ on the assumption that $\\llbracket \\phi\\rrbracket_v=0$. We get that $\\llbracket\\phi\\to\\psi\\rrbracket_v=max(1-\\llbracket\\phi\\rrbracket_v,\\llbracket\\psi\\rrbracket_v)=max(1-0, \\llbracket\\psi\\rrbracket_v)=max(1, \\llbracket\\psi\\rrbracket_v)=1$, as desired.\n\t\t\t\t\t\n\t\t\t\t\\end{enumerate}\n\t\t\tSo, we have that if $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$, then $\\Gamma\\vDash\\phi\\to\\psi$.\n\t\t\t\t\n\t\t\t\\item Suppose conversely that $\\Gamma\\vDash\\phi\\to\\psi$. That is, for each valuation $v$, if $\\llbracket\\theta\\rrbracket_v=1$ for all $\\theta\\in\\Gamma$, then $\\llbracket\\phi\\to\\psi\\rrbracket_v=1$. We want to show that $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$. That is, we need to show that for all valuations $v$ such that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma\\cup\\{\\phi\\},$ we have $\\llbracket\\psi\\rrbracket_v=1$. So let $v$ be an arbitrary valuation such that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma\\cup\\{\\phi\\}$. First, note that this means that $\\llbracket\\theta\\rrbracket_v=1$, for all $\\theta\\in \\Gamma$, and so, since $\\Gamma\\vDash\\phi\\to\\psi$, we get that $\\llbracket\\phi\\to\\psi\\rrbracket_v=1$. Second, note that we also get that $\\llbracket\\phi\\rrbracket_v=1$---simply since $\\phi\\in\\Gamma\\cup\\{\\phi\\}$. Now consider $\\llbracket\\phi\\to\\psi\\rrbracket_v=max(1-\\llbracket\\phi\\rrbracket_v,\\llbracket\\psi\\rrbracket_v)$. Since $\\llbracket\\phi\\to\\psi\\rrbracket_v=1$, we get that $max(1-\\llbracket\\phi\\rrbracket_v,\\llbracket\\psi\\rrbracket_v)=1$. And since $\\llbracket\\phi\\rrbracket_v=1$, we get that $max(1-1,\\llbracket\\psi\\rrbracket_v)=max(0,\\llbracket\\psi\\rrbracket_v)=1$. But for $x\\in\\{0,1\\}$, we can only have $max(0,x)=1$, if $x=1$. So $\\llbracket\\psi\\rrbracket_v=1$, as desired. So, we have that if $\\Gamma\\vDash\\phi\\to\\psi$, then $\\Gamma\\cup\\{\\phi\\}\\vDash\\psi$.\n\t\t\t\n\t\t\t\\end{itemize}\n\t\t\tThis completes the proof of the deduction theorem.\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\\item The deduction theorem is of fundamental importance since it connects logical consequence with the (material) conditional. One way of reading the theorem is that you can infer that a conditional is true iff you can infer the consequent (the then-part) from the antecedent (the if-part). This is, in a sense, the essence of conditional proof. Using the deduction theorem, we can make a connection between the validity of inferences and the \\emph{logical truth of formulas}. This will be our last observation. Let's first think about a simple case of an inference with one premise and one conclusion: $\\phi\\therefore\\psi$. We've said that this inference is valid iff $\\phi\\vDash\\psi$. But by the deduction theorem, this is the case iff $\\vDash\\phi\\to\\psi$. Why? Well, just take the statement of the deduction theorem and let $\\Gamma=\\emptyset$. So, $\\phi\\therefore\\psi$ is valid iff $\\vDash\\phi\\to\\psi$. In words, an inference with just one premise is valid iff the statement ``if [premise], then [conclusion]'' is a logical truth. 
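\n\t\t\n\t\tAs a quick aside (our own addition, not part of the original notes): for concrete formulas, instances of the deduction theorem can be checked mechanically by enumerating valuations, since only the finitely many sentence letters occurring in the formulas matter. The following Python sketch does exactly that; the tuple encoding of formulas and names like \\texttt{value} and \\texttt{entails} are our own illustration.\n\t\t\n\\begin{verbatim}\n# Sketch (our own encoding): formulas as nested tuples such as\n# ('->', 'p', ('and', 'p', 'q')); sentence letters are strings.\nfrom itertools import product\n\ndef value(phi, v):\n    if isinstance(phi, str):              # sentence letter\n        return v[phi]\n    op = phi[0]\n    if op == 'not':\n        return 1 - value(phi[1], v)\n    x, y = value(phi[1], v), value(phi[2], v)\n    return {'and': min(x, y), 'or': max(x, y), '->': max(1 - x, y)}[op]\n\ndef letters(phi):\n    if isinstance(phi, str):\n        return {phi}\n    return set().union(*map(letters, phi[1:]))\n\ndef entails(gamma, psi):\n    # Gamma |= psi: no valuation makes all of Gamma true and psi false\n    ps = sorted(letters(psi).union(*map(letters, gamma)))\n    return all(value(psi, dict(zip(ps, bits))) == 1\n               for bits in product([0, 1], repeat=len(ps))\n               if all(value(g, dict(zip(ps, bits))) == 1 for g in gamma))\n\n# One instance of the theorem: Gamma u {phi} |= psi iff Gamma |= phi -> psi\ngamma, phi, psi = [('or', 'p', 'q')], ('not', 'p'), 'q'\nassert entails(gamma + [phi], psi) == entails(gamma, ('->', phi, psi))\n\\end{verbatim}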
We'll now turn this into a more general theorem, which we'll use to prove decidability in the next section.\n\t\t\n\t\t\\item The essence of the theorem we're about to prove is that we can generalize the idea that we've just described. But to see how, we first need to prove a lemma:\n\t\t\\begin{lemma}\n\t\tFor all $\\phi,\\psi,\\theta\\in\\mathcal{L}$, we have $\\phi\\to(\\psi\\to\\theta)\\equi (\\phi\\land\\psi)\\to\\theta$.\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\tWe derive the equivalence using the logical laws. First, note that $\\phi\\to(\\psi\\to\\theta)\\equi \\neg\\phi\\lor \\neg \\psi\\lor \\theta$ using Conditional Definition twice. Now consider $(\\phi\\land\\psi)\\to\\theta$. By Conditional Definition, we get $(\\phi\\land\\psi)\\to\\theta\\equi \\neg(\\phi\\land\\psi)\\lor\\theta$. But by De Morgan's law $\\neg(\\phi\\land\\psi)\\equi \\neg\\phi\\lor\\neg\\psi$. Hence $\\neg(\\phi\\land\\psi)\\lor\\theta\\equi  \\neg\\phi\\lor\\neg\\psi\\lor\\theta$. But now we have that $\\phi\\to(\\psi\\to\\theta)\\equi \\neg\\phi\\lor \\neg \\psi\\lor \\theta$ and $(\\phi\\land\\psi)\\to\\theta\\equi  \\neg\\phi\\lor\\neg\\psi\\lor\\theta$, from which we can infer our claim by Transitivity. \n\t\n%\t\tWe prove this fact using Proposition 5.2.5, which states that $\\phi\\equi \\psi$ iff for all valuations $v$, $\\llbracket\\phi\\rrbracket_v=\\llbracket\\psi\\rrbracket_v$. So, in order to use this Proposition, we want to show that for all valuations $v$, $\\llbracket\\phi\\to(\\psi\\to\\theta)\\rrbracket_v=\\llbracket(\\phi\\land\\psi)\\to\\theta\\rrbracket_v$. So let $v$ be an arbitrary valuation. We compare the values for $\\llbracket\\phi\\to(\\psi\\to\\theta)\\rrbracket_v$ and $\\llbracket(\\phi\\land\\psi)\\to\\theta\\rrbracket_v$. First, consider $\\llbracket\\phi\\to(\\psi\\to\\theta)\\rrbracket_v$. We have:\n%\t\t\\begin{align*}\n%\t\t\\llbracket\\phi\\to(\\psi\\to\\theta)\\rrbracket_v&=max(1-\\llbracket\\phi\\rrbracket_v, \\llbracket (\\psi\\to\\theta)\\rrbracket_v)\\\\\n%\t\t&=max(1-\\llbracket\\phi\\rrbracket_v, max(1-\\llbracket \\psi\\rrbracket_v, \\llbracket\\theta\\rrbracket_v))\n%\t\t\\end{align*}\n%\tFor $\\llbracket(\\phi\\land\\psi)\\to\\theta\\rrbracket_v$, we have:\n%\t\\begin{align*}\n%\t\t\\llbracket(\\phi\\land\\psi)\\to\\theta\\rrbracket_v&=max(1-\\llbracket\\phi\\land\\psi\\rrbracket_v, \\llbracket \\theta\\rrbracket_v)\\\\\n%\t\t&=max(1-min(\\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v),\\llbracket\\theta\\rrbracket_v))\n%\t\t\\end{align*}\n%\t\tTo see that these two expressions are the same, first note that for $x,y\\in\\{0,1\\}$, we have that $max(1-x,1-y)=1-min(x, y)$ (the is essentially de Morgan's law). So, we can see that:\n%\t\t\\[max(1-min(\\llbracket\\phi\\rrbracket_v, \\llbracket\\psi\\rrbracket_v),\\llbracket\\theta\\rrbracket_v))=max(max(1-\\llbracket\\phi\\rrbracket_v, 1-\\llbracket\\psi\\rrbracket_v),\\llbracket\\theta\\rrbracket_v))\\]\n%\t\tTo complete the proof, we note that for all $x,y,z\\in\\{0,1\\}$, we have $max(x,max(y,z))=max(max(x,y),z)$ (it doesn't matter in which order we take the maximum). 
In this way, we get:\n%\t\t\\[max(max(1-\\llbracket\\phi\\rrbracket_v, 1-\\llbracket\\psi\\rrbracket_v),\\llbracket\\theta\\rrbracket_v))=max(1-\\llbracket\\phi\\rrbracket_v, max(1-\\llbracket\\psi\\rrbracket_v,\\llbracket\\theta\\rrbracket_v))\\]\n%\t\tPutting all of this together, we have that $\\llbracket\\phi\\to(\\psi\\to\\theta)\\rrbracket_v=\\llbracket(\\phi\\land\\psi)\\to\\theta\\rrbracket_v$, from which our claim follows by a simple application of \n%\t\tProposition 5.2.5.\n\t\t\\end{proof}\t\t\n\t\t\t\n\tWith this lemma in place, we prove our theorem as a corollary from the deduction theorem:\n\t\\begin{theorem}\n\tLet $\\phi_1,\\mathellipsis, \\phi_n,\\psi\\in\\mathcal{L}$ be formulas. Then:\n\t\\[\\phi_1,\\mathellipsis, \\phi_n\\vDash \\psi\\text{ iff }\\vDash (\\phi_1\\land\\mathellipsis\\land\\phi_n)\\to\\psi\\]\n\t\\end{theorem}\n\t\\begin{proof}\n\tThe proposition follows from the deduction theorem by $n$ applications of the preceding lemma.\n\t\\end{proof}\n\tThis theorem will play an essential role in the following section.\n\t\n\t\\item But first, we note in the following corollary that we can also reduce logical equivalence to the validity of a formula---unsurprisingly, a biconditional:\n\t\\begin{corollary}\n\tFor $\\phi,\\psi\\in\\mathcal{L}$, we have that $\\phi\\equi \\psi$ iff $\\vDash\\phi\\leftrightarrow\\psi$.\n\t\\end{corollary}\n\t\\begin{proof}\n\tWe prove both directions in turn:\n\t\\begin{itemize}\n\t\n\t\t\\item (\\emph{Left-to-right}): Suppose that $\\phi\\equi \\psi$, i.e. both $\\phi\\vDash\\psi$ and $\\psi\\vDash\\phi$. By the deduction theorem, we have $\\vDash \\phi\\to\\psi$ and $\\vDash\\psi\\to\\phi$. By the law of Conjunction Introduction, we have $\\vDash (\\phi\\to\\psi)\\land(\\psi\\to\\phi)$. And by the law of Biconditional Introduction, we have $\\vDash\\phi\\leftrightarrow\\psi$.\n\t\t\n\t\t\\item (\\emph{Right-to-left}). Suppose that $\\vDash\\phi\\leftrightarrow\\psi$. By the law of Biconditional Introduction, we have $\\vDash (\\phi\\to\\psi)\\land(\\psi\\to\\phi)$. From this, it easily follows that $\\vDash (\\phi\\to\\psi)$ and $\\vDash(\\psi\\to\\phi)$ using the law of Conjunction Elimination. But that gives us $\\phi\\vDash\\psi$ and $\\psi\\vDash\\phi$ by the deduction theorem and so $\\phi\\equi\\psi$, as desired.\n\t\n\t\\end{itemize}\n\t\\end{proof}\n\t\n\t\\item So, to sum up, by Proposition 5.2.16, we have reduced the question of whether a (finite) set of formulas entails another formula to the question whether a specific conditional is valid, the conditional formed by taking the conjunction of all the members in the set as the if-part and the potential conclusion as the then-part. In the following section, we shall turn this observation into a decision procedure for propositional logic.\n\t\t\t\n\t\\end{enumerate}\n\n%\\section{Truth-Functional Completeness}\n\\section{Truth Tables and Decidability}\n\n\t\\begin{enumerate}[\\thesection.1]\n\n\t\t\\item Remember our discussion of decidability from the introduction. There we posed the question of whether it is possible to write a computer program, such that if I give it an arbitrary inference, the program will determine \\emph{whether} the argument is valid. In general, we remarked, the answer is: no! We'll discuss that at the end of our treatment of first-order logic. But with respect to propositional logic, the answer is: yes! The purpose of this section is to develop a \\emph{semantic} decision procedure for propositional logic, i.e. one using semantic concepts. 
In the following chapter, we'll cover a \\emph{proof-theoretic} method, i.e. one that makes use of proof-theoretic concepts.\n\t\t\n\t\t\\item The decision method that we'll discuss in this chapter is the \\emph{method of truth-tables}. It was first discovered by the Austrian philosopher Ludwig Wittgenstein in the 1920s. But to this day, it's the most widely used decision procedure for propositional logic. You will soon be able to appreciate its elegance. The idea underlying the method is that in order to determine the truth-value of a formula, we actually only need to know the truth-values of the sentence letters that occur in the formula. But there are only finitely many of those and so, there are only finitely many possible combinations of truth-values for the sentence letters in the formula. Hence, we can write down all the possible truth-values that a formula can take in one, finite table. This table, the so-called \\emph{truth-table} for the formula, is at the heart of the decision procedure we'll discuss in this section.\n\t\n\t\\item First, let's discuss how to construct a truth-table. We'll do this by means of an example. Let's construct the truth-table for $((p\\lor q)\\land \\neg (p\\land q)) \\to r$ step-by-step:\n\n\\begin{enumerate}\n\n\\item The first thing you should do when you're constructing a truth-table for a formula is to find all the propositional letters. In our case, we get $p,q,$ and $r$.\n\n\\item Write down all the propositional letters, followed by some space (we'll need that space). Draw separators as in the following example:\n\n\\begin{center}\n\\begin{tabular}{c | c | c | c}\n$p$ & $q$ & $r$ &\\hspace{40ex} \\\\\\hline\n& & & \\\\\n\n\\end{tabular}\n\\end{center}\n\nThis is the beginning of our truth-table.\n\n\\item Next, count the propositional letters. If there are $n$ propositional letters in the formula, then its truth-table will have $2^n$ rows (we'll see why in a second). In our case, this means that there are $2^3=2\\cdot 2\\cdot 2=8$ rows in our truth-table.\n\n\\item Correspondingly, draw $2^n$ lines into the truth-table (if your paper has lines, you don't need to do this). In our example, we get:\n\n\\begin{center}\n\\begin{tabular}{c | c | c | c}\n$p$ & $q$ & $r$ &\\hspace*{40ex} \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\\hline\n& & & \\\\\n\n\\end{tabular}\n\\end{center}\n\nWe now have the skeleton of our truth-table.\n\n\\item Next, we need to fill in all the possible combinations of truth-values for $p,q,r$. Since there are $n$ letters (in our case $3$) and $2$ truth-values (1 and 0), combinatorics tells us that there are $2^n$ different combinations. Here is a method for determining them all:\n\n\\begin{enumerate}[(i)]\n\n\\item Start by dividing the rows of your truth-table skeleton into two equal parts (this is always possible, since $2^n$ is even for every $n\\geq 1$). Then, in the first column of the table (in our case, the $p$ column), fill in $1$s in the upper half of the table and $0$s in the lower half, like this:\n\n\\begin{center}\n\\begin{tabular}{c | c | c | c}\n$p$ & $q$ & $r$  &\\hspace*{40ex} \\\\\\hline\n1& & & \\\\\\hline\n1& & & \\\\\\hline\n1& & & \\\\\\hline\n1& & & \\\\\\hline\n0& & & \\\\\\hline\n0& & & \\\\\\hline\n0& & & \\\\\\hline\n0& & & \\\\\n\n\\end{tabular}\n\\end{center}\n\n\\item Next, consider the upper half and lower half of the second column, and divide them again into two parts. 
In this column (in our case, the $q$ column), fill in $1$s in the upper half and $0$s in the lower half of each part, like this:\n\n\\begin{center}\n\\begin{tabular}{c | c | c | c}\n$p$ & $q$ & $r$  &\\hspace*{40ex} \\\\\\hline\n1& 1& & \\\\\\hline\n1& 1& & \\\\\\hline\n1& 0& & \\\\\\hline\n1& 0& & \\\\\\hline\n0& 1& & \\\\\\hline\n0& 1& & \\\\\\hline\n0& 0& & \\\\\\hline\n0& 0& & \\\\\n\n\\end{tabular}\n\\end{center}\n\n\\item Proceed to the next column (in our case, the $r$ column) and repeat the procedure:\n\n\\begin{center}\n\\begin{tabular}{c | c | c | c}\n$p$ & $q$ & $r$  &\\hspace*{40ex} \\\\\\hline\n1& 1& 1& \\\\\\hline\n1& 1& 0 & \\\\\\hline\n1& 0& 1& \\\\\\hline\n1& 0& 0& \\\\\\hline\n0& 1& 1& \\\\\\hline\n0& 1& 0& \\\\\\hline\n0& 0& 1& \\\\\\hline\n0& 0& 0& \\\\\n\n\\end{tabular}\n\\end{center}\n\nIf you have more than 3 propositional letters, you can continue dividing the parts in two. This will always work, since each part has a length that is a power of two, so it can be halved again and again until we reach parts of length one.\\footnote{If you know how to count in binary, then you can see that I'm basically counting down from $2^n-1$ in binary.}\n\n\\end{enumerate}\n\n\\item Now that we've filled in all the possible combinations of truth-values for the propositional letters, we recursively calculate the truth-value of the whole formula following the parsing tree. In our case, the parsing tree is this:\n\n\\begin{center}\n\\begin{tikzpicture}\n\\Tree [.$(((p\\lor q)\\land \\neg (p\\land q))\\to r)$ [.$((p\\lor q)\\land \\neg (p\\land q))$ [.$(p\\lor q)$ [.$p$ ] [.$q$ ] ] [.$\\neg(p\\land q)$ [.$(p\\land q)$ [.$p$ ] [.$q$ ] ] ] ] [.$r$ ] ]\n\\end{tikzpicture}\n\\end{center}\n\n\\item We calculate the truth-values of the formulas in the parsing tree one after another from the bottom to the top. We use the truth-function corresponding to the rule that was applied. Once we've calculated the truth-value for a formula in the parsing tree, we write it underneath the formula in the truth-table. 
In our case, this means that we've got to complete 5 steps:\n\n\\begin{enumerate}[(i)]\n\n\\item We start with the first two leaves and calculate the value of $(p\\lor q)$ based on the values of $p$ and $q$ using the truth-function $f_\\lor$:\n\n\n\\begin{center}\n\\begin{tabular}{c c c | c | c  c  c  c  c}\n$p$ & $q$  & $r$ & $(p\\lor q)$ & \\hspace*{20ex}\\\\\\hline\n\n1 & 1 & 1 & 1 & &&&& \\\\\n\n1 & 1 & 0 & 1 & &&&&  \\\\\n\n1 & 0 & 1 & 1 & &&&& \\\\\n\n1 & 0 & 0  & 1 & &&& \\\\\n\n0 & 1 &  1 & 1 & &&& \\\\\n\n0 & 1 & 0 & 1 & & &&& \\\\\n\n0 & 0 & 1 & 0 & & &&&  \\\\\n\n0 & 0 & 0 & 0 & & &&&\n\n\\end{tabular}\n\n\\end{center}\n\n\\item Next, we move to the third and fourth leaves and calculate the truth-value of $(p\\land q)$ based on the truth values of $p$ and $q$ using the truth-function $f_\\land$:\n\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c  c  c  c}\n$p$ & $q$  & $r$ & $(p\\lor q)$ &  $(p\\land q)$ & \\hspace*{15ex} \\\\\\hline\n\n1 & 1 & 1 & 1 & 1&& \\\\\n\n1 & 1 & 0 & 1 & 1&&  \\\\\n\n1 & 0 & 1 & 1 & 0&& \\\\\n\n1 & 0 & 0  & 1 & 0& \\\\\n\n0 & 1 &  1 & 1 & 0& \\\\\n\n0 & 1 & 0 & 1 &0&& \\\\\n\n0 & 0 & 1 & 0  &0&&  \\\\\n\n0 & 0 & 0 & 0  &0&&\n\n\\end{tabular}\n\n\\end{center}\n\n\\item Now we go one step up and calculate the truth-value of $\\neg(p\\land q)$ based on the truth-value of $(p\\land q)$ using the truth-function $f_\\neg$:\n\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c | c  c  c}\n$p$ & $q$  & $r$ & $(p\\lor q)$ & $(p\\land q)$ &  $\\neg (p\\land q)$ & \\hspace*{10ex} \\\\\\hline\n\n1 & 1 & 1 & 1 &1&0& \\\\\n\n1 & 1 & 0 & 1 &1&0&  \\\\\n\n1 & 0 & 1 & 1 &0&1& \\\\\n\n1 & 0 & 0  & 1 &0& 1\\\\\n\n0 & 1 &  1 & 1 &0& 1\\\\\n\n0 & 1 & 0 & 1 &0&1 \\\\\n\n0 & 0 & 1 & 0 &0& 1 \\\\\n\n0 & 0 & 0 & 0 &0&1\n\n\\end{tabular}\n\n\\end{center}\n\n\\item Now we proceed to calculate the truth-value of $((p\\lor q)\\land \\neg(p\\land q))$ based on the truth-values of $(p\\lor q)$ and $\\neg(p\\land q)$ that we've just calculated, using the truth-function $f_\\land$:\n\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c | c | c  c}\n$p$ & $q$  & $r$ & $(p\\lor q)$ & $(p\\land q)$ &  $\\neg (p\\land q)$ & $((p\\lor q)\\land \\neg(p\\land q))$ &\\hspace*{2ex} \\\\\\hline\n\n1 & 1 & 1 & 1 &1&0&  0\\\\\n\n1 & 1 & 0 & 1 &1&0&  0\\\\\n\n1 & 0 & 1 & 1 &0&1& 1\\\\\n\n1 & 0 & 0  & 1 &0& 1&1\\\\\n\n0 & 1 &  1 & 1 &0& 1 &1 \\\\\n\n0 & 1 & 0 & 1 &0&1& 1 \\\\\n\n0 & 0 & 1 & 0 &0& 1 &0\\\\\n\n0 & 0 & 0 & 0 &0&1 &0\n\n\\end{tabular}\n\n\\end{center}\n\n\\item Finally, we calculate the truth-value of the whole formula $((p\\lor q)\\land \\neg(p\\land q))\\to r$ based on the truth-values of $((p\\lor q)\\land \\neg(p\\land q))$ and $r$ using $f_\\to$:\n\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n\t\\end{enumerate}\n\n{\\small\n\\begin{tabular}{c c c | c | c | c | c | c }\n$p$ & $q$  & $r$ & $(p\\lor q)$ & $(p\\land q)$ &  $\\neg (p\\land q)$ & $((p\\lor q)\\land \\neg(p\\land q))$ & $((p\\lor q)\\land \\neg(p\\land q))\\to r$ \\\\\\hline\n\n1 & 1 & 1 & 1 &1&0&  0 & 1\\\\\n\n1 & 1 & 0 & 1 &1&0&  0& 1\\\\\n\n1 & 0 & 1 & 1 &0&1& 1& 1\\\\\n\n1 & 0 & 0  & 1 &0& 1&1& 0\\\\\n\n0 & 1 &  1 & 1 &0& 1 &1 & 1\\\\\n\n0 & 1 & 0 & 1 &0&1& 1 & 0\\\\\n\n0 & 0 & 1 & 0 &0& 1 &0& 1\\\\\n\n0 & 0 & 0 & 0 &0&1 &0& 1\n\n\\end{tabular}}\n\n\t\\begin{enumerate}[\\thesection.1]\n\n\t\t\\setcounter{enumi}{3}\n\t\t\n\t\t\\item So what's the general pattern here? 
In order to generate the truth-table for a formula, we do the following:\n\t\t\\begin{enumerate}[1.]\n\t\t\n\t\t\t\\item Determine all the sentence letters in the formula.\n\t\t\t\n\t\t\t\\item Determine how many different ways there are for distributing the truth-values $0,1$ over these sentence letters and write the different combinations in a table, one row at a time. \n\t\t\t\n\t\t\t\\emph{Fact}. If there are $n$ sentence letters, then there are $2^n$ different combinations. \n\t\t\t\n\t\t\t\\item Calculate the parsing tree of the formula.\n\t\t\t\n\t\t\t\\item Recursively calculate the truth-values of the sub-formulas for each of the different combinations of truth-values, and write the result for a sub-formula under the formula, in the row that corresponds to the combination you used to calculate the result. \n\t\t\\end{enumerate}\n\t\t\n\t\tIn order to have a unique order for step 4., we start with the bottom-left leaf of the tree and then try to move up and calculate what we need to know along the way. This is precisely what we did in 5.3.3. \n\t\t\n\t\tHere is another example (this time, just the result):\n\t\t\n\t\t\n$p \\land (q \\lor r) \\leftrightarrow (p \\land q) \\lor (p \\land r)$\n\n\\begin{center}\n\\emph{Parsing Tree}:\\\\[2ex]\n\\Tree [.{$p \\land (q \\lor r) \\leftrightarrow (p \\land q) \\lor (p \\land r)$} [.${p \\land (q \\lor r)}$ [.$p$ ] [.$q\\lor r$ [.$q$ ] [.$r$ ] ] ] [.${(p \\land q) \\lor (p \\land r)}$ [.$p\\land q$ [.$p$ ] [.$q$ ] ] [.$p\\land r$ [.$p$ ] [.$r$ ] ] ] ]\n\\end{center}\n\n\\end{enumerate}\n\n\\begin{center}\n\\emph{Truth-Table}:\\\\[2ex]\n{\\small\\begin{tabular}{ccc|c | c | c | c | c | c }\n$p$&$q$&$r$& $q\\lor r$ & $p\\land (q\\lor r)$ & $p\\land q$ & $p\\land r$ & $(p\\land q)\\lor (p\\land r)$ & $p \\land (q \\lor r) \\leftrightarrow (p \\land q) \\lor (p \\land r)$\n\\\\\n\\hline\n 1 & 1 & 1 & 1 &1 &1 &1 &1 & 1 \\\\ \n 1 & 1 & 0 &1 &1 &1 &0 & 1 &1\\\\\n 1 & 0 & 1 &1 &1 &0 &1 &1 &1 \\\\\n 1 & 0 & 0 &0 &0 &0 &0 &0 &1 \\\\\n 0 & 1 & 1 &1 & 0 &0 &0 &0 &1\\\\\n 0 & 1 & 0 &1 &0 &0 &0 &0 &1 \\\\\n 0 & 0 & 1 &1 &0 &0 &0 &0 &1 \\\\\n 0 & 0 & 0 &0 &0 &0 &0 &0 &1 \\\\\n\\end{tabular}}\n\\end{center}\n\n\\vspace{1ex}\n\n\\begin{enumerate}[\\thesection.1]\n\n\t\t\\setcounter{enumi}{4}\n\t\t\n\t\t\\item Remember that, mathematically speaking, an algorithm is a set of precise instructions for a specific task. In our case, the task is to determine whether a given formula is valid (since we've reduced the question of the validity of inferences to the question of the validity of formulas). With 5.3.4, we're almost there; we just need to add one more step. So far we've described an algorithm that allows us to calculate all the different possible truth-values a formula can take. But wait! A formula is valid just in case it gets value one under every valuation. So, if our truth-table yields one as the only possible value for our formula, then the formula should be valid! So, the example truth-tables we just did show that $\\nvDash((p\\lor q)\\land \\neg(p\\land q))\\to r$ and $\\vDash p \\land (q \\lor r) \\leftrightarrow (p \\land q) \\lor (p \\land r)$. 
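\n\t\t\n\t\tIn fact, steps 1.--4. are mechanical enough to hand to a computer. The following Python sketch (our own illustration, not part of the original notes) enumerates the $2^n$ combinations for the example formula and computes its column bottom-up using the truth-functions; it reproduces the final column of the table above.\n\t\t\n\\begin{verbatim}\n# Minimal truth-table sketch (our own code) for\n# ((p or q) and not (p and q)) -> r.\nfrom itertools import product\n\ndef f_not(x): return 1 - x\ndef f_and(x, y): return min(x, y)\ndef f_or(x, y): return max(x, y)\ndef f_imp(x, y): return max(1 - x, y)\n\ndef formula(p, q, r):\n    # follows the parsing tree bottom-up, as in steps (i)-(v)\n    return f_imp(f_and(f_or(p, q), f_not(f_and(p, q))), r)\n\nrows = [(p, q, r, formula(p, q, r))\n        for p, q, r in product([1, 0], repeat=3)]\nfor row in rows:                  # 2^3 = 8 rows, 1s on top as in the table\n    print(row)\nprint('valid' if all(val == 1 for *_, val in rows) else 'not valid')\n# prints 'not valid': the rows (1,0,0) and (0,1,0) give value 0\n\\end{verbatim}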
So, after 1.--4., we add the following final step: \n\t\t\\begin{enumerate}[1.]\n\t\t\\setcounter{enumii}{4}\n\t\t\n\t\t\t\\item Check the column under the formula:\n\t\t\t\n\t\t\t\t\\begin{itemize}\n\t\t\t\t\n\t\t\t\t\t\\item If there are only 1's, the formula is valid.\n\t\t\t\t\t\\item If there are one or more 0's, the formula is not valid.\n\t\t\t\t\\end{itemize}\n\t\t\n\t\t\\end{enumerate}\n\tWe've arrived at an algorithm for determining whether a given formula is valid.\n\t\n\t\\item How can we use the algorithm described above to determine whether a given \\emph{inference} is valid? To answer this question, consider an inference $\\phi_1, \\mathellipsis,\\phi_n\\therefore\\psi$ with finitely many premises. By definition, the inference is valid iff $\\phi_1, \\mathellipsis,\\phi_n\\vDash\\psi$. And we know by Proposition 5.2.16 that $\\phi_1, \\mathellipsis,\\phi_n\\vDash\\psi$ is mathematically equivalent to $\\vDash (\\phi_1\\land\\mathellipsis\\land\\phi_n)\\to \\psi$. So, we use our algorithm to determine whether $(\\phi_1\\land\\mathellipsis\\land\\phi_n)\\to \\psi$ is a logical truth. If it is, then the inference is valid; and if it isn't, the inference is invalid.\n\t\n\t\\item We will complete this chapter by \\emph{proving} that the algorithm works, i.e. we will show that if the algorithm tells us that a formula is valid, then it is valid; and we will show that if the algorithm tells us that a formula is \\emph{in}valid, then it is, in fact, invalid. This proof, together with the observation that carrying out the algorithm only takes finitely many steps, establishes that classical propositional logic is decidable. This is the main theorem of this chapter:\n\t\n\t\\begin{theorem}[Decidability of Propositional Logic]\n\tPropositional logic is decidable, i.e. there exists an algorithm which after finitely many steps correctly determines whether a given inference (with finitely many premises) is valid.\n\t\\end{theorem}\n\tSo, to verify that our algorithm is correct, we need to prove precisely this: if the algorithm tells us a formula is valid, then it is valid; and if it says that a formula is invalid, then it's invalid. \t\n\t\n\t\\item But first, we make the following simple observation:\n\t\n\t\t\\begin{lemma}\n\t\tLet $\\phi$ be a formula and let $p_1, \\mathellipsis, p_n$ be the sentence letters in $\\phi$. Further, let $v$ be a valuation. Consider a line in the truth-table for $\\phi$, let $x_1, \\mathellipsis, x_n$ be the values for $p_1, \\mathellipsis, p_n$ in that row, and let $x_\\phi$ be the value for $\\phi$ in that row. If $v(p_i)=x_i$ for $1\\leq i\\leq n$, then $\\llbracket \\phi\\rrbracket_v=x_\\phi$.\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\tBy inspection of the way the values of the truth-table are calculated. You can prove this by an induction on formulas; the details are left as an exercise for interested students.\n\t\t\\end{proof}\n\t\t\n\t\\item Using this lemma it's easy to show that our algorithm is correct:\n\t\n\t\t\\begin{theorem}[Verification of the Method of Truth-Tables] 
Let $\\phi$ be a formula.\n\t\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item  If in the truth-table for $\\phi$ there exists a line with a 0 under $\\phi$, then $\\nvDash\\phi$.\n\t\t\t\n\t\t\t\\item  If in the truth-table for $\\phi$, in all lines under $\\phi$ the value is 1, then $\\vDash\\phi$.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\n\t\t\\end{theorem}\n\t\n\t\n\t\\begin{proof}\n\tWe prove the two in turn:\n\t\n\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\\item Suppose that there is a line with a 0 under $\\phi$. Let $p_1, \\mathellipsis, p_n$ be the sentence letters in $\\phi$ and let $x_1, \\mathellipsis, x_n$ be the values for $p_1, \\mathellipsis, p_n$ (respectively) in that line. Define a valuation $v:\\mathcal{P}\\to\\{0,1\\}$ by setting $v(p_1)=x_1, \\mathellipsis, v(p_n)=x_n$, and $v(p)=0$ if $p\\neq p_1, \\mathellipsis, p_n$. Then by Lemma 5.3.8, we have that $\\llbracket \\phi\\rrbracket_v=0$, which means that $\\nvDash\\phi$.\n\t\t\t\n\t\t\t\\item Suppose that $p_1, \\mathellipsis, p_n$ are the sentence letters in $\\phi$. Let $v$ be an arbitrary valuation. Consider the values $v(p_1), \\mathellipsis, v(p_n)$. Since in our truth-table we have considered \\emph{all} the possible truth-values for $p_1, \\mathellipsis, p_n$, there will be a line in our table that corresponds to $v(p_1), \\mathellipsis, v(p_n)$. The value of $\\phi$ in that line will be 1 since, by assumption, the value of $\\phi$ is 1 in \\emph{every} line. Hence, by Lemma 5.3.8, $\\llbracket \\phi\\rrbracket_v=1$, which is what we needed to show.\n\n\n\t\t\\end{enumerate}\n\t\n\t\\end{proof}\t\n\t\n\tThis completes our investigation into the method of truth-tables: we've established that it is indeed a decision procedure for propositional logic.\n\n\t\\end{enumerate}\n\n%\\section{Consequence and Satisfiability}\n\t\t\t\n\\section{Core Ideas}\n\n\\begin{itemize}\n\n\t\\item Models in propositional logic are \\emph{valuations}: functions from sentence letters to truth-values.\n\t\n\t\\item We can calculate the value of a formula under a valuation recursively using the Boolean \\emph{truth-functions}.\n\t\n\t\\item The validity of inferences over formal languages can be understood in terms of the concept of logical consequence.\n\t\n\t\\item Logical consequence is defined by saying that a set of formulas entails a formula iff in every valuation where all the members of the set have value 1, the formula has value 1.\n\t\n\t\\item Logical truth is a special case of logical consequence: a formula is a logical truth iff it's a consequence of the empty set.\n\t\n\t\\item The question whether a given set of premises entails a conclusion can be reduced to the question whether the conditional with the conjunction of the premises as the if-part and the conclusion as the then-part is logically valid.\n\t\t\n\t\\item The method of truth-tables allows us to decide in finitely many steps whether a given formula is valid. This gives us a decision procedure for propositional logic.\n\t\n\\end{itemize}\n\n\\section{Self-Study Questions}\n\n\t\\begin{enumerate}[\\thesection.1]\n\n\t\t\\item Suppose that a formula contains 3 connectives. 
Which of the following is the best you can say about the formula's truth-table?\n\t\t\n\t\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item It has exactly $3^2=9$ rows.\n\t\t\t\n\t\t\t\\item It has exactly $2^3=8$ rows.\n\t\t\t\n\t\t\t\\item We can't predict the number of rows.\n\t\t\t\n\t\t\t\\item We can't predict the number of rows, but there are \\emph{at least} $2\\cdot 3$ rows.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item Is it possible for a formula of the form $\\phi\\land\\psi$ to be a logical truth?\n\t\t\n\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\\item Yes! For example, if $\\phi=p$ and $\\psi=\\neg p$.\n\t\t\t\t\n\t\t\t\t\\item Yes! For example, if $\\phi$ and $\\psi$ are logical truths themselves.\n\t\t\t\t\n\t\t\t\t\\item Yes! For example, if the formula is also of the form $p\\lor\\neg p$.\n\t\t\t\t\n\t\t\t\t\\item No! That would entail that two sentence letters are logical truths, which is impossible.\n\t\t\t\n\t\t\t\\end{enumerate}\n\t\n\t\t\n\t\t\\item Consider a formula of the form $\\phi\\to\\psi$. Which of the following entails that the formula is a logical truth?\n\t\t\n\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\\item $\\phi\\vDash\\psi$\n\t\t\t\t\n\t\t\t\t\\item $\\vDash \\psi$\n\t\t\t\t\n\t\t\t\t\\item $\\vDash \\neg\\psi\\to\\neg\\phi$\n\t\t\t\t\n\t\t\t\t\\item $\\vDash\\neg\\phi$\n\t\t\t\n\t\t\t\\end{enumerate}\n\t\n\t\\end{enumerate}\n\n\\section{Exercises}\n\n\n\t\\begin{enumerate}[\\thesection.1]\n\t\n\t\t\\item Prove the remaining cases of Proposition 5.1.13.\n\t\t\n\t\t\\item Prove that the two definitions of $\\vDash$ in 5.2.2 are equivalent (using Proposition 5.1.13). (This is a good exercise for proof strategies!)\n\t\t\n\t\t\\item Prove laws (iii), (xiii), and (xv) of Lemma 5.2.6. $[h]$\n\t\t\n\t\t\\item Suppose that $\\phi$ is a formula and $v:\\mathcal{P}\\to\\{0,1\\}$ a valuation such that for all $\\psi\\in sub(\\phi)$, $\\llbracket\\psi\\rrbracket_v=0$. 
Prove that $\\phi$ does not contain any $\\neg$'s.\n\t\t\n\t\t\\item $[h]$ Prove that there is no valuation $v$ such that for all $\\phi\\in\\mathcal{L}$, we have $\\llbracket\\phi\\rrbracket_v=1$.\n\t\t\n\t\t\\item Prove that $\\Gamma\\vDash\\phi$ iff there is no valuation $v$, such that $\\llbracket\\psi\\rrbracket_v=1$, for all $\\psi\\in\\Gamma$, but $\\llbracket\\phi\\rrbracket_v=0$.\n\t\t\n\t\t\n\t\t\\item Do the truth-tables for the following formulas:\n\t\t\n\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\n\t\t\t\t\\item$[h]$ $p \\lor (q \\land r) \\leftrightarrow (p \\lor q) \\land(p \\lor r)$\n\t\t\t\t\n\t\t\t\t\\item $\\neg p \\lor q \\rightarrow q \\land (p \\leftrightarrow q)$\n\n\t\t\t\t\\item $p \\land (q \\rightarrow r) \\leftrightarrow (\\neg p \\lor q \\rightarrow  p \\land r)$\n\t\t\t\t\n\t\t\t\t\\item $\\neg(p\\rightarrow(q\\lor \\neg r)\\land (\\neg q\\rightarrow r))$\n\t\t\t\t\n\t\t\t\t\\item $(p\\leftrightarrow q\\land r)\\lor(q\\leftrightarrow r)$\n\n\n\t\t\t\t\\item $\\neg p\\lor\\neg q\\rightarrow\\neg(p \\land q)$\n\n\t\t\t\t\\item $(\\neg p \\lor q) \\rightarrow (q \\land (p \\leftrightarrow q))$\n\t\t\t\t\n\t\t\t\t\\item $((p \\leftrightarrow q) \\to ((q \\leftrightarrow r) \\to(p \\leftrightarrow r)))$\n\n\t\t\t\t\\item $(p \\rightarrow q) \\lor (\\neg q \\rightarrow p)$\n\t\t\t\t\n\t\t\t\t\\item $(q \\to r) \\to p \\land (q \\lor \\neg r)$\n\n\t\t\t\t\n\t\t\t\t\\item $((p \\lor q) \\lor (\\neg p \\lor r)) \\lor (\\neg q \\lor \\neg r) $ \n\n\t\t\t\t\\item $(p \\to (q \\to r)) \\to ((p \\to q) \\to (p \\to r))$\n\n\t\t\t\t\\item $(p \\land q) \\leftrightarrow (r \\lor (\\neg p \\land q))$ \n\n\t\t\t\t\\item $((p \\to r) \\to ((q \\to r) \\to (p \\lor q \\to r)))$\n\n\t\t\t\t\\item $\\neg q \\leftrightarrow (p \\to (\\neg r \\to q))$\n\n\t\t\t\t\\item $(p \\to q) \\land ((q \\to r) \\land (r \\to \\neg p))$ \n\n\t\t\t\t\\item $p \\to (q \\to (r \\to (\\neg p \\to (\\neg q \\to \\neg r))))$\n\n\t\t\t\t\\item $(p \\rightarrow q \\land r) \\leftrightarrow ((p \\rightarrow q) \\land (p \\rightarrow r))$\n\n\t\t\t\t\\item $p \\land (\\neg p \\lor q) \\to (r \\to \\neg q) \\land (p \\to r)$\n\n\t\t\t\n\n\t\t\t\\end{enumerate}\n\t\t\t\\item Use the method of truth-tables to determine whether the following inferences are valid:\n\t\t\t\n\t\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\\item $[h]$ $p\\therefore p\\lor (p\\land q)$ \n\n\t\t\t\t\t\\item $p\\to \\neg p\\therefore \\neg p$\n\t\t\t\t\t\n\t\t\t\t\t\\item $p\\land \\neg p\\therefore q$\n\t\t\t\t\t\n\t\t\t\t\t\\item $p\\therefore p\\lor \\neg p$\n\t\t\t\t\t\n\t\t\t\t\t\\item $q\\therefore p\\to q$\n\t\t\t\t\t\n\t\t\t\t\t\\item $p\\to q, q\\to r\\therefore p\\to r$\n\t\t\t\t\t\n\t\t\t\t\t\\item $p\\leftrightarrow \\neg p\\therefore p\\leftrightarrow (q\\land \\neg q)$ \n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\\end{enumerate}\n\t\t\t\n\t\\end{enumerate}\n\n\\section{Further Readings}\n\nThe following chapters cover roughly the same material:\n\n\\begin{itemize}\n\t\n\t\t\\item Section 2.2 of Dalen, Dirk van. 2013. \\emph{Logic and Structure}. 5$^\\text{th}$ edition. London, UK: Springer.\n\t\t\n\t\t\\item Section 1.2 of Enderton, Herbert. 2001. \\emph{A Mathematical Introduction to Logic}. 2$^\\text{nd}$ edition. 
San Diego, CA: Harcourt/Academic Press.\n\t\t\t\n\t\\end{itemize}\n\n\\vfill\n\n\\hfill \\rotatebox[origin=c]{180}{\n\\fbox{\n\\begin{minipage}{0.5\\linewidth}\n\n\\subsection*{Self Study Solutions}\n\n\\begin{enumerate}\n\n\t\\item[5.5.1] (c)\n\t\n\t\\item[5.5.2] (b)\n\t\n\t\\item[5.5.3] (a--d)\n\n\t\t\n\\end{enumerate}\n\n\n\\end{minipage}}}\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"../../logic.tex\"\n%%% End:\n", "meta": {"hexsha": "6ed15695e3e132e48e81d90961cdf4eecfc41920", "size": 92300, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lib/notes/tex/mainmatter/prop-semantics.tex", "max_stars_repo_name": "jkorb/logic-introduction", "max_stars_repo_head_hexsha": "316ff2b8c60d98c63df528a75baddda156d8a27b", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-09-12T17:29:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-05T08:03:21.000Z", "max_issues_repo_path": "lib/notes/tex/mainmatter/prop-semantics.tex", "max_issues_repo_name": "jkorb/logic-introduction", "max_issues_repo_head_hexsha": "316ff2b8c60d98c63df528a75baddda156d8a27b", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 69, "max_issues_repo_issues_event_min_datetime": "2020-09-04T16:24:11.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-18T13:54:07.000Z", "max_forks_repo_path": "lib/notes/tex/mainmatter/prop-semantics.tex", "max_forks_repo_name": "jkorb/logic-introduction", "max_forks_repo_head_hexsha": "316ff2b8c60d98c63df528a75baddda156d8a27b", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-09-04T08:49:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-30T11:24:44.000Z", "avg_line_length": 70.89093702, "max_line_length": 1755, "alphanum_fraction": 0.6909859155, "num_tokens": 31157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7606506526772884, "lm_q1q2_score": 0.5953327254415763}}
{"text": "\\chapter{Cluster Analysis}\n\nFinding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or\nunrelated to) the objects in other groups\n\n\\section{Type of Clustering}\n\n\\begin{description}\n\\item[Partitional Clustering] A division data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset\n\\item[Hierarchical clustering] A set of nested clusters organized as a hierarchical tree\n\\end{description}\n\n\\section{Type of Clusters}\n\n\\begin{description}\n\\item[Well-Separated Clusters] any point in a cluster is closer to every points in the cluster than to any \\emph{point} not in the cluster\n\\item[Center-Based Clusters] object in a cluster is closer to the center of a cluster, than to the \\emph{center} of any other cluster\n\\item[Contiguous Clusters] Neighbourhood relationship, \\emph{each point is close to another point} in the cluster, immediate neighbour\n\\item[Density-Based Clusters] A cluster is a \\emph{dense region} of points, which is separated by low-density regions, from other regions of high density\n\\item[Property or Conceptual] Clusters that share some common property or represent a particular concept\n\\item[Described by Objective Function] Find clusters that minimise or maximise an objective function\n\\end{description}\n\\section{K-mean Clustering}\n\n\\begin{table}[h!]\n\\begin{tabular}{r p{12cm}}\n\\hline\n    1: & Select $k$ points as initial centroids\\\\\n    2: & Repeat \\\\\n    3: & \\ \\ \\ \\ Form $k$ clusters by assigning all points to the closest centroid. \\\\\n    4: & \\ \\ \\ \\ Recompute the centroid of each cluster \\\\\n    5: & Until the centroids don't change \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\par \\noindent Evaluating k-means clusters using Sum of Squared Error (SSE):\n$$SSE = \\sum_{i=1}^{K} \\sum_{x \\in C_i} dist^{2}(m_i, x) $$\n\n\\section{Hierarchical Clustering}\n\n\\subsection{Agglomerative Clustering Algorithm}\n\n\\begin{table}[h!]\n\\begin{tabular}{r p{12cm}}\n\\hline\n    1: & Compute the proximity matrix\\\\\n    2: & Let each data point be a cluster \\\\\n    3: & Repeat \\\\\n    4: & \\ \\ \\ \\ Merge the two closest clusters \\\\\n    5: & \\ \\ \\ \\ Update the proximity matrix \\\\\n    5: & Until only a single cluster remains \\\\\n\\hline\n\\end{tabular}\n\\end{table} \\noindent\n\\underline{Cluster Similarity: Group Average} \\\\\n$$proximity(Cluster_i,Cluster_j)=\\frac{\\sum proximity(p_i, p_j)}{\\mid Cluster_i \\mid \\mid Cluster_j \\mid}$$\n\n\\clearpage\n\\subsection{Divisive Clustering Algorithm: \\\\ Minimum Spanning Tree}\n\\begin{table}[h!]\n\\begin{tabular}{r p{12cm}}\n\\hline\n    1: & Compute a minimum spanning tree for the proximity graph \\\\\n    2: & Repeat \\\\\n    3: & \\ \\ \\ \\ Create a new cluster by breaking the link corresponding \\\\\n       & \\ \\ \\ \\ to the largest distance \\\\\n    4: & Until only singleton clusters remain \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\noindent Same as single link agglomerative clustering", "meta": {"hexsha": "c0cb373b03342ead82e2a99f55f95e52b97fbfa8", "size": 2910, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter6.tex", "max_stars_repo_name": "Andyccs/data-mining-summary", "max_stars_repo_head_hexsha": "27ffac528e9e225c8a15ff44fbf2ed3e1c6b9f7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "chapter6.tex", "max_issues_repo_name": "Andyccs/data-mining-summary", "max_issues_repo_head_hexsha": "27ffac528e9e225c8a15ff44fbf2ed3e1c6b9f7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter6.tex", "max_forks_repo_name": "Andyccs/data-mining-summary", "max_forks_repo_head_hexsha": "27ffac528e9e225c8a15ff44fbf2ed3e1c6b9f7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.985915493, "max_line_length": 153, "alphanum_fraction": 0.7274914089, "num_tokens": 802, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624688140726, "lm_q2_score": 0.7606506526772883, "lm_q1q2_score": 0.5953327177294421}}
{"text": "\\subsection*{Q. 4}\n\\begin{longtable}[c]{llllll}\n         &          &          & $A_2$    & $A_1$    & $A_0$    \\\\\n\\endfirsthead\n%\n\\endhead\n%\n$\\times$ &          &          & $B_2$    & $B_1$    & $B_0$    \\\\ \\hline\n         &          &          & $A_2B_0$ & $A_1B_0$ & $A_0B_0$ \\\\\n         &          & $A_2B_1$ & $A_1B_1$ & $A_0B_1$ &          \\\\\n         & $A_2B_2$ & $A_1B_2$ & $A_0B_2$ &          &          \\\\ \\hline\n$C_5$\\quad & $C_4$    & $C_3$    & $C_2$    & $C_1$    & $C_0$   \n\\end{longtable}\n\\par Starting with the traditional vertical multiplication, we found out that: \\ding{172} a 3-bit multiplication is made up of three partial products; \\ding{173} each partial product, is a 1-bit $\\times$ 3-bit, while bits can only be 0 or 1, this multiplication doesn't have carry out, and thus can simply use three; \\emph{and gate} to denote them \\ding{174} the partial products are added up to give the final result. Formally, we say:\n\\begin{align*}\nC_0&=A_0B_0&\\text{(no carry out)}\\\\\n\\{\\text{Carry}_1, C_1\\}&=A_1B_0+A_0B_1&\\text{LHS = 2 $\\times$ Carry$_1$ + $C_1$}\\\\\n\\{\\text{Carry}_2, C_2\\}&=A_2B_0+A_1B_1+A_0B_2+\\text{Carry}_1\\\\\n\\cdots\\\\\n\\{\\text{Carry}_4, C_4\\}&=A_2B_2+\\text{Carry}_3\\\\\nC_5&=\\text{Carry}_4\n\\end{align*}\n\\par A.k.a., since multiply takes \\emph{AND} operation of the all binary combination from the bits of A and B and add the result with shifting \\emph{i+j (i,j denotes the indices of bits in A and B)} operations. To express the final result in binary form, we only take the least bit of output sum and feed the rest bits of output sum together with the carry out bit into the augend input of the next full adder. \\\\\n\\vspace{1em}\n\\centerline{\\includegraphics[width=0.5\\textwidth]{fig/q4}}\n\n\\par The circuit can be represented as the following logic formulas:\n\\begin{align*}\nC_0&=A_0B_0\\\\\nC_1&=(A_1B_0)\\oplus(A_0B_1)\\\\\nC_2&=(A_2B_0)\\oplus(A_1B_1)\\oplus(A_0B_2)\\oplus(A_1B_0A_0B_1)\\\\\nC_3&=(A_2B_1)\\oplus(A_1B_2)\\oplus(A_2B_0A_1B_1A_0B_2B_0)\\\\\nC_4&=(A_2B_2)\\oplus(A_2B_0A_1B_1A_0B_2B_0)\\\\\nC_5&=A_2B_0A_1B_1A_0B_2B_0\n\\end{align*}\n\\vspace{-2em}\n", "meta": {"hexsha": "e706128a5eaa82ee4708e15bf9842ce190f26e7f", "size": 2068, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2021F/CS207/A3/q4.tex", "max_stars_repo_name": "HeZean/SUSTech-Archive", "max_stars_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2021F/CS207/A3/q4.tex", "max_issues_repo_name": "HeZean/SUSTech-Archive", "max_issues_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2021F/CS207/A3/q4.tex", "max_forks_repo_name": "HeZean/SUSTech-Archive", "max_forks_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.8918918919, "max_line_length": 436, "alphanum_fraction": 0.6416827853, "num_tokens": 878, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.8175744828610095, "lm_q1q2_score": 0.5951741511748158}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 3.3 Computing $R_{abcd}$}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   \\partial{#}::PartialDerivative.\n\n   \\Gamma^{a}_{b c}::TableauSymmetry(shape={2}, indices={1,2}).\n   \\Gamma_{a b c}::TableauSymmetry(shape={2}, indices={1,2}).\n\n   dgab := \\partial_{c}{g_{a b}} ->   \\Gamma^{d}_{a c} g_{d b}\n                                    + \\Gamma^{d}_{b c} g_{a d}.      # cdb(dgab.000,dgab)\n\n   RabcdU := R^{a}_{b c d} ->   \\partial_{c}{\\Gamma^{a}_{b d}}\n                              - \\partial_{d}{\\Gamma^{a}_{b c}}\n                              + \\Gamma^{e}_{b d} \\Gamma^{a}_{c e}\n                              - \\Gamma^{e}_{b c} \\Gamma^{a}_{d e}.   # cdb(Rabcd.000,RabcdU)\n\n   GammaD := {g_{a e} \\Gamma^{e}_{b c} -> \\Gamma_{a b c},\n              g_{e a} \\Gamma^{e}_{b c} -> \\Gamma_{a b c}}.           # cdb(Gamma.010,GammaD)\n\n   RabcdD := R_{a b c d} -> g_{a e} R^{e}_{b c d}.                   # cdb(Rabcd.010,RabcdD)\n\n   gabDGamma := g_{a e} \\partial_{c}{\\Gamma^{e}_{b d}} ->\n                     \\partial_{c}{g_{a e} \\Gamma^{e}_{b d}}\n                   - \\Gamma^{e}_{b d} \\partial_{c}{g_{a e}}.         # cdb(gabDGamma.000,gabDGamma)\n\n   # this pair of rules needed to sort \\Gamma_{a b c} to the very left\n   # this helps canonicalise spot the terms that cancel\n   bah := \\Gamma_{a b c} -> A_{a b c}.\n   foo := A_{a b c} -> \\Gamma_{a b c}.\n\n   expr := R_{a b c d}.                                              # cdb(ex-0303.101,expr)\n\n   substitute     (expr, RabcdD)                                     # cdb(ex-0303.102,expr)\n   substitute     (expr, RabcdU)                                     # cdb(ex-0303.103,expr)\n   distribute     (expr)                                             # cdb(ex-0303.104,expr)\n   substitute     (expr, gabDGamma)                                  # cdb(ex-0303.105,expr)\n   substitute     (expr, dgab)                                       # cdb(ex-0303.106,expr)\n   substitute     (expr, GammaD)                                     # cdb(ex-0303.107,expr)\n   distribute     (expr)                                             # cdb(ex-0303.109,expr)\n   substitute     (expr, bah)                                        # cdb(ex-0303.110,expr)\n   sort_product   (expr)                                             # cdb(ex-0303.111,expr)\n   rename_dummies (expr)                                             # cdb(ex-0303.112,expr)\n   substitute     (expr, foo)                                        # cdb(ex-0303.113,expr)\n   canonicalise   (expr)                                             # cdb(ex-0303.114,expr)\n\\end{cadabra}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{\\cdb{ex-0303.101} = \\Cdb*{ex-0303.102}\n                             = \\Cdb*{ex-0303.103}\n                             = \\Cdb*{ex-0303.104}\n                             = \\Cdb*{ex-0303.105}\n                             = \\Cdb*[\\hskip 2cm\\hfill]{ex-0303.106}\n                             = \\Cdb*{ex-0303.107}\n                             = \\Cdb*{ex-0303.109}\n                             = \\Cdb*{ex-0303.110}\n                             = \\Cdb*{ex-0303.111}\n                             = \\Cdb*{ex-0303.112}\n                        
     = \\Cdb*{ex-0303.113}\n                             = \\Cdb*{ex-0303.114}}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "b209b7662103abd4af8eb0e0b896d9abcbd24437", "size": 3535, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0303.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0303.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0303.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 48.4246575342, "max_line_length": 99, "alphanum_fraction": 0.4045261669, "num_tokens": 1046, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5951741463211371}}
{"text": "\\chapter{Arithmetic}\n\\label{numberchapter}\n\\index{number}\n\nThis chapter describes Scheme's libraries for more specialized\nnumerical operations: fixnum and flonum arithmetic, as well as bitwise\noperations on exact integer objects.  \n\n\\section{Bitwise operations}\n\nA number of procedures operate on the binary two's-complement\nrepresentations of exact integer objects: Bit positions within an\nexact integer object are counted from the right, i.e.\\ bit 0 is the\nleast significant bit.  Some procedures allow extracting \\defining{bit\n  fields}, i.e., number objects representing subsequences of the\nbinary representation of an exact integer object.  Bit fields are\nalways positive, and always defined using a finite number of bits.\n\n\\section{Fixnums}\n\\label{fixnumssection}\n\nEvery implementation must define its fixnum range as a closed\ninterval\n%\n\\begin{displaymath}\n[-2^{w-1}, 2^{w-1} - 1]\n\\end{displaymath}\n%\nsuch that $w$ is a (mathematical) integer $w \\geq 24$.  Every\nmathematical integer within an implementation's fixnum range must\ncorrespond to an exact integer object that is representable within the\nimplementation.\nA fixnum is an exact integer object whose value lies within this\nfixnum range.\n\nThis section describes the \\defrsixlibrary{arithmetic fixnums} library,\nwhich defines various operations on fixnums.\nFixnum operations perform integer arithmetic on their fixnum\narguments, but raise an exception with condition type\n{\\cf\\&implementation-restriction} if the result is not a fixnum.\n\nThis section uses \\var{fx}, \\vari{fx}, \\varii{fx}, etc., as parameter\nnames for arguments that must be fixnums.\n\n\\begin{entry}{%\n\\rproto{fixnum?}{ obj}{procedure}}\n\nReturns \\schtrue{} if \\var{obj} is an exact\ninteger object within the fixnum range, \\schfalse{} otherwise.\n\\end{entry}\n\n\\begin{entry}{%\n\\rproto{fixnum-width}{}{procedure}\n\\rproto{least-fixnum}{}{procedure}\n\\rproto{greatest-fixnum}{}{procedure}}\n\nThese procedures return $w$,\n$-2^{w-1}$ and $2^{w-1} - 1$: the\nwidth, minimum and the maximum value of the fixnum range, respectively.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx=?}{ \\vari{fx} \\varii{fx} \\variii{fx} \\dotsfoo}{procedure}\n\\proto{fx>?}{ \\vari{fx} \\varii{fx} \\variii{fx} \\dotsfoo}{procedure}\n\\proto{fx<?}{ \\vari{fx} \\varii{fx} \\variii{fx} \\dotsfoo}{procedure}\n\\proto{fx>=?}{ \\vari{fx} \\varii{fx} \\variii{fx} \\dotsfoo}{procedure}\n\\proto{fx<=?}{ \\vari{fx} \\varii{fx} \\variii{fx} \\dotsfoo}{procedure}}\n\nThese procedures return \\schtrue{} if their arguments are (respectively):\nequal, monotonically increasing, monotonically decreasing,\nmonotonically nondecreasing, or monotonically nonincreasing,\n\\schfalse{} otherwise.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxzero?}{ fx}{procedure}\n\\proto{fxpositive?}{ fx}{procedure}\n\\proto{fxnegative?}{ fx}{procedure}\n\\proto{fxodd?}{ fx}{procedure}\n\\proto{fxeven?}{ fx}{procedure}}\n\nThese numerical predicates test a fixnum for a particular property,\nreturning \\schtrue{} or \\schfalse{}.  
The five properties tested by\nthese procedures are: whether the number object is zero, greater than zero,\nless than zero, odd, or even.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxmax}{ \\vari{fx} \\varii{fx} \\dotsfoo}{procedure}\n\\proto{fxmin}{ \\vari{fx} \\varii{fx} \\dotsfoo}{procedure}}\n\nThese procedures return the maximum or minimum of their arguments.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx+}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fx*}{ \\vari{fx} \\varii{fx}}{procedure}}\n\nThese procedures return the sum or product of their arguments,\nprovided that sum or product is a fixnum.  An exception with condition\ntype {\\cf\\&implementation-restriction} is raised if\nthat sum or product is not a fixnum.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx-}{ \\vari{fx} \\varii{fx}}{procedure}\n\\rproto{fx-}{ fx}{procedure}}\n\nWith two arguments, this procedure returns the difference\n$\\vari{fx}-\\varii{fx}$, provided that difference is a fixnum.\n\nWith one argument, this procedure returns the additive\ninverse of its argument, provided that integer object is a\nfixnum.\n\nAn exception with condition type {\\cf\\&implementation-restriction} is raised if the\nmathematically correct result of this procedure is not a fixnum.\n\n\\begin{scheme}\n(fx- (least-fixnum))  \\lev  \\exception{\\&implementation-restriction}%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxdiv-and-mod}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxdiv}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxmod}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxdiv0-and-mod0}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxdiv0}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxmod0}{ \\vari{fx} \\varii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} must be nonzero.}\nThese procedures implement number-theoretic integer division and\nreturn the results of the corresponding mathematical operations\nspecified in report section~\\extref{report:integerdivision}{Integer division}.\n\n\\begin{scheme}\n(fxdiv \\vari{fx} \\varii{fx})         \\ev \\(\\vari{fx}~\\mathrm{div}~\\varii{fx}\\)\n(fxmod \\vari{fx} \\varii{fx})         \\ev \\(\\vari{fx}~\\mathrm{mod}~\\varii{fx}\\)\n(fxdiv-and-mod \\vari{fx} \\varii{fx})     \\lev \\(\\vari{fx}~\\mathrm{div}~\\varii{fx}, \\vari{fx}~\\mathrm{mod}~\\varii{fx}\\)\\\\\\>\\>; \\textrm{two return values}\n(fxdiv0 \\vari{fx} \\varii{fx})        \\ev \\(\\vari{fx}~\\mathrm{div}\\sb{0}~\\varii{fx}\\)\n(fxmod0 \\vari{fx} \\varii{fx})        \\ev \\(\\vari{fx}~\\mathrm{mod}\\sb{0}~\\varii{fx}\\)\n(fxdiv0-and-mod0 \\vari{fx} \\varii{fx})   \\lev \\(\\vari{fx}~\\mathrm{div}\\sb{0}~\\varii{fx}, \\vari{fx}~\\mathrm{mod}\\sb{0}~\\varii{fx}\\)\\\\\\>\\>; \\textrm{two return values}%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx+/carry}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\nReturns the two fixnum results of the following computation:\n%\n\\begin{scheme}\n(let* ((s (+ \\vari{fx} \\varii{fx} \\variii{fx}))\n       (s0 (mod0 s (expt 2 (fixnum-width))))\n       (s1 (div0 s (expt 2 (fixnum-width)))))\n  (values s0 s1))%\n\\end{scheme}
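\n\nFor example, the computation above implies:\n\\begin{scheme}\n(fx+/carry 1 2 0)   \\lev  3 0\\\\\\>\\>; \\textrm{two return values}\n(fx+/carry (greatest-fixnum) 1 0)   \\lev  (least-fixnum) 1%\n\\end{scheme}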
\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx-/carry}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\nReturns the two fixnum results of the following computation:\n%\n\\begin{scheme}\n(let* ((d (- \\vari{fx} \\varii{fx} \\variii{fx}))\n       (d0 (mod0 d (expt 2 (fixnum-width))))\n       (d1 (div0 d (expt 2 (fixnum-width)))))\n  (values d0 d1))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fx*/carry}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\nReturns the two fixnum results of the following computation:\n\\begin{scheme}\n(let* ((s (+ (* \\vari{fx} \\varii{fx}) \\variii{fx}))\n       (s0 (mod0 s (expt 2 (fixnum-width))))\n       (s1 (div0 s (expt 2 (fixnum-width)))))\n  (values s0 s1))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxnot}{ \\var{fx}}{procedure}}\n\nReturns the unique fixnum that is congruent\nmod $2^w$ to the one's-complement of \\var{fx}.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxand}{ \\vari{fx} \\dotsfoo}{procedure}\n\\proto{fxior}{ \\vari{fx} \\dotsfoo}{procedure}\n\\proto{fxxor}{ \\vari{fx} \\dotsfoo}{procedure}}\n\nThese procedures return the fixnum that is the bit-wise ``and'',\n``inclusive or'', or ``exclusive or'' of the two's complement\nrepresentations of their arguments.  If they are passed only one\nargument, they return that argument.  If they are passed no arguments,\nthey return the fixnum (either $-1$ or $0$) that acts as identity for the\noperation.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxif}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\nReturns the fixnum that is the bit-wise ``if'' of the two's complement\nrepresentations of its arguments, i.e.\\ for each bit, if it is 1 in\n\\vari{fx}, the corresponding bit in \\varii{fx} becomes the value of\nthe corresponding bit in the result, and if it is 0, the corresponding\nbit in \\variii{fx} becomes the corresponding bit in the value of the\nresult.  This is the fixnum result of the following computation:\n\\begin{scheme}\n(fxior (fxand \\vari{fx} \\varii{fx})\n       (fxand (fxnot \\vari{fx}) \\variii{fx}))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxbit-count}{ \\var{fx}}{procedure}}\n\nIf \\var{fx} is non-negative, this procedure returns the\nnumber of 1 bits in the two's complement representation of \\var{fx}.\nOtherwise it returns the result of the following computation:\n%\n\\begin{scheme}\n(fxnot (fxbit-count (fxnot \\var{fx})))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxlength}{ \\var{fx}}{procedure}}\n\nReturns the number of bits needed to represent \\var{fx} if it is\npositive, and the number of bits needed to represent {\\cf (fxnot\n  \\var{fx})} if it is negative, which is the fixnum result of the\nfollowing computation:\n\\begin{scheme}\n(do ((result 0 (+ result 1))\n     (bits (if (fxnegative? \\var{fx})\n               (fxnot \\var{fx})\n               \\var{fx})\n           (fxarithmetic-shift-right bits 1)))\n    ((fxzero? bits)\n     result))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxfirst-bit-set}{ \\var{fx}}{procedure}}\n\nReturns the index of the least significant $1$ bit in\nthe two's complement representation of \\var{fx}.  If \n\\var{fx} is $0$, then $-1$ is returned.\n%\n\\begin{scheme}\n(fxfirst-bit-set 0)        \\ev  -1\n(fxfirst-bit-set 1)        \\ev  0\n(fxfirst-bit-set -4)       \\ev  2%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxbit-set?}{ \\vari{fx} \\varii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} must be non-negative.}  The {\\cf fxbit-set?} procedure returns\n\\schtrue{} if the \\varii{fx}th bit is 1 in the two's complement\nrepresentation of \\vari{fx}, and \\schfalse{} otherwise.  This is the\nresult of the following computation:\n%\n\\begin{scheme}\n(if (fx>=? \\varii{fx} (fx- (fixnum-width) 1))\n    (fxnegative? 
\\vari{fx})\n    (not\n      (fxzero?\n         (fxand \\vari{fx}\n                (fxarithmetic-shift-left 1 \\varii{fx})))))%\n\\end{scheme}\n%\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxcopy-bit}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} must be non-negative and less than {\\cf\n  $w-1$}. \\variii{Fx} must be 0 or\n1.}  The {\\cf fxcopy-bit} procedure returns the result of replacing\nthe \\varii{fx}th bit of \\vari{fx} by \\variii{fx}, which is\nthe result of the following computation:\n\\begin{scheme}\n(let* ((mask (fxarithmetic-shift-left 1 \\varii{fx})))\n  (fxif mask\n        (fxarithmetic-shift-left \\variii{fx} \\varii{fx})\n        \\vari{fx}))%\n\\end{scheme}\n%\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxbit-field}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} and \\variii{fx} must be non-negative and less than\n  $w$.  Moreover, \\varii{fx} must be less than or\n  equal to \\variii{fx}.}  The {\\cf fxbit-field} procedure returns the\nnumber represented by the bits at the positions from \\varii{fx} (inclusive) to\n$\\variii{fx}$ (exclusive), which is\nthe fixnum result of the following computation:\n%\n\\begin{scheme}\n(let* ((mask (fxnot\n              (fxarithmetic-shift-left -1 \\variii{fx}))))\n  (fxarithmetic-shift-right (fxand \\vari{fx} mask)\n                            \\varii{fx}))%\n\\end{scheme}\n%\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxcopy-bit-field}{ \\vari{fx} \\varii{fx} \\variii{fx} \\variv{fx}}{procedure}}\n\n\\domain{\\varii{Fx} and \\variii{fx} must be non-negative and less than\n  $w$.  Moreover, \\varii{fx} must be less than or\n  equal to \\variii{fx}.}  The {\\cf fxcopy-bit-field} procedure returns\nthe result of replacing in \\vari{fx} the bits at positions from\n\\varii{fx} (inclusive) to $\\variii{fx}$ (exclusive) by the bits in\n\\variv{fx} from position 0 (inclusive) to position\n$\\variii{fx}-\\varii{fx}$ (exclusive), which\nis the fixnum result of the following computation:\n\\begin{scheme}\n(let* ((to    \\vari{fx})\n       (start \\varii{fx})\n       (end   \\variii{fx})\n       (from  \\variv{fx})\n       (mask1 (fxarithmetic-shift-left -1 start))\n       (mask2 (fxnot\n               (fxarithmetic-shift-left -1 end)))\n       (mask (fxand mask1 mask2))\n       (mask3 (fxnot (fxarithmetic-shift-left\n                       -1 (- end start)))))\n  (fxif mask\n        (fxarithmetic-shift-left (fxand from mask3)\n                                 start)\n        to))%\n\\end{scheme}\n\n\\begin{scheme}\n(fxcopy-bit-field \\sharpsign{}b0000001 2 5 \\sharpsign{}b1111000) \\lev 1\n(fxcopy-bit-field \\sharpsign{}b0000001 2 5 \\sharpsign{}b0001111) \\lev 29\n(fxcopy-bit-field \\sharpsign{}b0001111 2 5 \\sharpsign{}b0001111) \\lev 31%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxarithmetic-shift}{ \\vari{fx} \\varii{fx}}{procedure}}\n\n\\domain{The absolute value of \\varii{fx} must be less than \n$w$.}  If\n%\n\\begin{scheme}\n(floor (* \\vari{fx} (expt 2 \\varii{fx})))%\n\\end{scheme}\n%\nis a fixnum, then that fixnum is returned.  
Otherwise an exception\nwith condition type {\\cf\\&implementation-\\hp{}restriction} is\nraised.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxarithmetic-shift-left}{ \\vari{fx} \\varii{fx}}{procedure}\n\\proto{fxarithmetic-shift-right}{ \\vari{fx} \\varii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} must be non-negative, and less than $w$.}\n  The {\\cf fxarithmetic-shift-left} procedure behaves the same as {\\cf\n  fxarithmetic-shift}, and {\\cf (fxarithmetic-shift-right \\vari{fx}\n  \\varii{fx})} behaves the same as {\\cf (fxarithmetic-shift \\vari{fx}\n  (fx- \\varii{fx}))}.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxrotate-bit-field}{ \\vari{fx} \\varii{fx} \\variii{fx} \\variv{fx}}{procedure}}\n\n\\domain{\\varii{Fx}, \\variii{fx}, and \\variv{fx} must be non-negative\n  and less than $w$.  \\varii{Fx} must be less than or\n  equal to \\variii{fx}. \\variv{Fx} must be less than or equal to the difference\nbetween \\variii{fx} and \\varii{fx}.}  The {\\cf fxrotate-bit-field}\nprocedure returns the result of cyclically permuting in \\vari{fx} the\nbits at positions from \\varii{fx} (inclusive) to \\variii{fx}\n(exclusive) by \\variv{fx} bits\ntowards the more significant bits, which is the result of the\nfollowing computation:\n\\begin{scheme}\n(let* ((n     \\vari{fx})\n       (start \\varii{fx})\n       (end   \\variii{fx})\n       (count \\variv{fx})\n       (width (fx- end start)))\n  (fxcopy-bit-field n start end\n    (fxior\n      (fxarithmetic-shift-left\n        (fxbit-field n start (fx- end count))\n        count)\n      (fxarithmetic-shift-right\n        (fxbit-field n start end)\n        (fx- width count)))))%\n\\end{scheme}\n\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fxreverse-bit-field}{ \\vari{fx} \\varii{fx} \\variii{fx}}{procedure}}\n\n\\domain{\\varii{Fx} and \\variii{fx} must be non-negative and less than\n  $w$.  
Moreover, \\varii{fx} must be less than or\n  equal to \\variii{fx}.}  The {\\cf fxreverse-bit-field} procedure\nreturns\nthe fixnum obtained from \\vari{fx} by reversing the\norder of the bits at positions from \\varii{fx} (inclusive) to\n\\variii{fx} (exclusive).\n\\begin{scheme}\n(fxreverse-bit-field \\sharpsign{}b1010010 1 4)    \\lev  88 ; \\sharpsign{}b1011000%\n\\end{scheme}\n\n\\end{entry}\n\n\\section{Flonums}\n\\label{flonumssection}\n\nThis section describes the \\defrsixlibrary{arithmetic flonums} library.\n\nThis section uses \\var{fl}, \\vari{fl}, \\varii{fl}, etc., as\nparameter names for arguments that must be flonums, and \\var{ifl}\nas a name for arguments that \nmust be integer-valued flonums, i.e., flonums for which the\n{\\cf integer-valued?} predicate returns true.\n\n\\begin{entry}{%\n\\proto{flonum?}{ obj}{procedure}}\n\nReturns \\schtrue{} if \\var{obj} is a flonum, \\schfalse{} otherwise.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{real->flonum}{ x}{procedure}}\n\nReturns the best flonum representation of\n\\var{x}.\n\nThe value returned is a flonum that is numerically closest to the\nargument.\n\n\\begin{note}\n  If flonums are represented in binary floating point, then\n  implementations should break ties by preferring\n  the floating-point representation whose least significant bit is\n  zero.\n\\end{note}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fl=?}{ \\vari{fl} \\varii{fl} \\variii{fl} \\dotsfoo}{procedure}\n\\proto{fl<?}{ \\vari{fl} \\varii{fl} \\variii{fl} \\dotsfoo}{procedure}\n\\proto{fl<=?}{ \\vari{fl} \\varii{fl} \\variii{fl} \\dotsfoo}{procedure}\n\\proto{fl>?}{ \\vari{fl} \\varii{fl} \\variii{fl} \\dotsfoo}{procedure}\n\\proto{fl>=?}{ \\vari{fl} \\varii{fl} \\variii{fl} \\dotsfoo}{procedure}}\n\nThese procedures return \\schtrue{} if their arguments are (respectively):\nequal, monotonically increasing, monotonically nondecreasing,\nmonotonically decreasing, or monotonically nonincreasing,\n\\schfalse{} otherwise.  These\npredicates must be transitive.\n\n\\begin{scheme}\n(fl=? +inf.0 +inf.0)           \\ev  \\schtrue{}\n(fl=? -inf.0 +inf.0)           \\ev  \\schfalse{}\n(fl=? -inf.0 -inf.0)           \\ev  \\schtrue{}\n(fl=? 0.0 -0.0)                \\ev  \\schtrue{}\n(fl<? 0.0 -0.0)                \\ev  \\schfalse{}\n(fl=? +nan.0 \\var{fl})               \\ev  \\schfalse{}\n(fl<? +nan.0 \\var{fl})               \\ev  \\schfalse{}%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flinteger?}{ fl}{procedure}\n\\proto{flzero?}{ fl}{procedure}\n\\proto{flpositive?}{ fl}{procedure}\n\\proto{flnegative?}{ fl}{procedure}\n\\proto{flodd?}{ ifl}{procedure}\n\\proto{fleven?}{ ifl}{procedure}\n\\proto{flfinite?}{ fl}{procedure}\n\\proto{flinfinite?}{ fl}{procedure}\n\\proto{flnan?}{ fl}{procedure}}\n\nThese numerical predicates test a flonum for a particular property,\nreturning \\schtrue{} or \\schfalse{}.\nThe {\\cf flinteger?} procedure tests whether the number object is an integer,\n{\\cf flzero?} tests whether\nit is {\\cf fl=?} to zero, {\\cf flpositive?} tests whether it is greater\nthan zero, {\\cf flnegative?} tests whether it is less\nthan zero, {\\cf flodd?} tests whether it is odd, \n{\\cf fleven?} tests whether it is even,\n{\\cf flfinite?} tests whether it is not an infinity and not a NaN,\n{\\cf flinfinite?} tests whether it is an infinity, and\n{\\cf flnan?} tests whether it is a NaN.\n\n\\begin{scheme}\n(flnegative? -0.0)   \\ev \\schfalse{}\n(flfinite? +inf.0)   \\ev \\schfalse{}\n(flfinite? 5.0)      \\ev \\schtrue{}\n(flinfinite? 
5.0)    \\ev \\schfalse{}\n(flinfinite? +inf.0) \\ev \\schtrue{}%\n\\end{scheme}\n\n\\begin{note}\n{\\cf (flnegative? -0.0)} must return \\schfalse{},\nelse it would lose the correspondence with\n{\\cf (fl<? -0.0 0.0)}, which is \\schfalse{}\naccording to IEEE 754~\\cite{IEEE}.\n\\end{note}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flmax}{ \\vari{fl} \\varii{fl} \\dotsfoo}{procedure}\n\\proto{flmin}{ \\vari{fl} \\varii{fl} \\dotsfoo}{procedure}}\n\nThese procedures return the maximum or minimum of their arguments.\nThey always return a NaN when one or more of the arguments is a NaN.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fl+}{ \\vari{fl} \\dotsfoo}{procedure}\n\\proto{fl*}{ \\vari{fl} \\dotsfoo}{procedure}}\n\nThese procedures return the flonum sum or product of their flonum\narguments.  In general, they should return the flonum that best\napproximates the mathematical sum or product.  (For implementations\nthat represent flonums using IEEE binary floating point, the\nmeaning of ``best'' is defined by the IEEE standards.)\n\n\\begin{scheme}\n(fl+ +inf.0 -inf.0)      \\ev  +nan.0\n(fl+ +nan.0 \\var{fl})          \\ev  +nan.0\n(fl* +nan.0 \\var{fl})          \\ev  +nan.0%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fl-}{ \\vari{fl} \\varii{fl} \\dotsfoo}{procedure}\n\\rproto{fl-}{ fl}{procedure}\n\\proto{fl/}{ \\vari{fl} \\varii{fl} \\dotsfoo}{procedure}\n\\rproto{fl/}{ fl}{procedure}}\n\nWith two or more arguments, these procedures return the flonum\ndifference or quotient of their flonum arguments, associating to the\nleft.  With one argument, however, they return the additive or\nmultiplicative flonum inverse of their argument.  In general, they\nshould return the flonum that best approximates the mathematical\ndifference or quotient.  
(For implementations that represent flonums\nusing IEEE binary floating point, the meaning of ``best'' is\nreasonably well-defined by the IEEE standards.)\n\n\\begin{scheme}\n(fl- +inf.0 +inf.0)      \\ev  +nan.0%\n\\end{scheme}\n\nFor undefined quotients, {\\cf fl/} behaves as specified by the\nIEEE standards:\n\n\\begin{scheme}\n(fl/ 1.0 0.0)  \\ev +inf.0\n(fl/ -1.0 0.0) \\ev -inf.0\n(fl/ 0.0 0.0)  \\ev +nan.0%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flabs}{ fl}{procedure}}\n\nReturns the absolute value of \\var{fl}.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fldiv-and-mod}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{fldiv}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{flmod}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{fldiv0-and-mod0}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{fldiv0}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{flmod0}{ \\vari{fl} \\varii{fl}}{procedure}}\n\nThese procedures implement number-theoretic integer division and\nreturn the results of the corresponding mathematical operations\nspecified in report section~\\extref{report:integerdivision}{Integer division}.\nIn the cases where the\nmathematical requirements in section~\\extref{report:integerdivision} cannot be\nsatisfied by any number object, either an exception is raised with\ncondition type {\\cf\\&implementation-restriction}, or unspecified\nflonums (one for {\\cf fldiv}, {\\cf flmod}, {\\cf fldiv0}, and {\\cf\n  flmod0}, two for {\\cf fldiv-and-mod} and {\\cf fldiv0-and-mod0}) are\nreturned.\n\n\\begin{scheme}\n(fldiv \\vari{fl} \\varii{fl})         \\ev \\(\\vari{fl}~\\mathrm{div}~\\varii{fl}\\)\n(flmod \\vari{fl} \\varii{fl})         \\ev \\(\\vari{fl}~\\mathrm{mod}~\\varii{fl}\\)\n(fldiv-and-mod \\vari{fl} \\varii{fl})     \\lev \\(\\vari{fl}~\\mathrm{div}~\\varii{fl}, \\vari{fl}~\\mathrm{mod}~\\varii{fl}\\)\\\\\\>\\>; \\textrm{two return values}\n(fldiv0 \\vari{fl} \\varii{fl})        \\ev \\(\\vari{fl}~\\mathrm{div}_0~\\varii{fl}\\)\n(flmod0 \\vari{fl} \\varii{fl})        \\ev \\(\\vari{fl}~\\mathrm{mod}_0~\\varii{fl}\\)\n(fldiv0-and-mod0 \\vari{fl} \\varii{fl})   \\lev \\(\\vari{fl}~\\mathrm{div}_0~\\varii{fl}, \\vari{fl}~\\mathrm{mod}_0~\\varii{fl}\\)\\\\\\>\\>; \\textrm{two return values}%\n\\end{scheme}\n\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flnumerator}{ fl}{procedure}\n\\proto{fldenominator}{ fl}{procedure}}\n\nThese procedures return the numerator or denominator of \\var{fl}\nas a flonum; the result is computed as if \\var{fl} were represented as\na fraction in lowest terms.  The denominator is always positive.  The\ndenominator of 0.0 is defined to be 1.0.\n%\n\\begin{scheme}\n(flnumerator +inf.0)           \\ev  +inf.0\n(flnumerator -inf.0)           \\ev  -inf.0\n(fldenominator +inf.0)         \\ev  1.0\n(fldenominator -inf.0)         \\ev  1.0\n(flnumerator 0.75)             \\ev  3.0 ; \\textrm{probably}\n(fldenominator 0.75)           \\ev  4.0 ; \\textrm{probably}%\n\\end{scheme}\n\nImplementations should implement the following behavior:\n\n\\begin{scheme}\n(flnumerator -0.0)             \\ev -0.0%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flfloor}{ fl}{procedure}\n\\proto{flceiling}{ fl}{procedure}\n\\proto{fltruncate}{ fl}{procedure}\n\\proto{flround}{ fl}{procedure}}\n\nThese procedures return integral flonums for flonum arguments that are\nnot infinities or NaNs.  For such arguments, {\\cf flfloor} returns the\nlargest integral flonum not larger than \\var{fl}.  The {\\cf flceiling}\nprocedure\nreturns the smallest integral flonum not smaller than \\var{fl}.\nThe {\\cf fltruncate} procedure returns the integral flonum closest to \\var{fl} whose\nabsolute value is not larger than the absolute value of \\var{fl}.\nThe {\\cf flround} procedure returns the closest integral flonum to \\var{fl},\nrounding to even when \\var{fl} represents a number halfway between two integers.\n
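\nFor example:\n\\begin{scheme}\n(flfloor -4.3)                         \\ev  -5.0\n(flceiling -4.3)                       \\ev  -4.0\n(fltruncate -4.3)                      \\ev  -4.0\n(flround 2.5)                          \\ev  2.0\n(flround 3.5)                          \\ev  4.0%\n\\end{scheme}\n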
\nAlthough infinities and NaNs are not integer objects, these procedures return\nan infinity when given an infinity as an argument, and a NaN when\ngiven a NaN:\n\n\\begin{scheme}\n(flfloor +inf.0)                       \\ev  +inf.0\n(flceiling -inf.0)                     \\ev  -inf.0\n(fltruncate +nan.0)                    \\ev  +nan.0%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flexp}{ fl}{procedure}\n\\proto{fllog}{ fl}{procedure}\n\\rproto{fllog}{ \\vari{fl} \\varii{fl}}{procedure}\n\\proto{flsin}{ fl}{procedure}\n\\proto{flcos}{ fl}{procedure}\n\\proto{fltan}{ fl}{procedure}\n\\proto{flasin}{ fl}{procedure}\n\\proto{flacos}{ fl}{procedure}\n\\proto{flatan}{ fl}{procedure}\n\\rproto{flatan}{ \\vari{fl} \\varii{fl}}{procedure}}\n\nThese procedures compute the usual transcendental functions.  \nThe {\\cf flexp} procedure computes the base-$e$ exponential of \\var{fl}.\nThe {\\cf fllog} procedure with a single argument computes the natural logarithm of\n\\var{fl} (not the base ten logarithm); {\\cf (fllog \\vari{fl}\n  \\varii{fl})} computes the base-\\varii{fl} logarithm of \\vari{fl}.\nThe {\\cf flasin}, {\\cf flacos}, and {\\cf flatan} procedures compute arcsine,\narccosine, and arctangent, respectively.  {\\cf (flatan \\vari{fl}\n  \\varii{fl})} computes the arc tangent of \\vari{fl}/\\varii{fl}.\n\nSee report\nsection~\\extref{report:transcendentalfunctions}{Transcendental functions} for the underlying\nmathematical operations.  In the event that these operations do not\nyield a real result for the given arguments, the result may be a NaN,\nor may be some unspecified flonum.\n\nImplementations that use IEEE binary floating-point arithmetic \nshould follow the relevant standards for these procedures.\n\n\\begin{scheme}\n(flexp +inf.0)                \\ev +inf.0\n(flexp -inf.0)                \\ev 0.0\n(fllog +inf.0)                \\ev +inf.0\n(fllog 0.0)                   \\ev -inf.0\n(fllog -0.0)                  \\ev \\unspecified\\\\\\>; \\textrm{if -0.0 is distinguished}\n(fllog -inf.0)                \\ev +nan.0\n(flatan -inf.0)               \\lev -1.5707963267948965\\\\\\>; \\textrm{approximately}\n(flatan +inf.0)               \\lev 1.5707963267948965\\\\\\>; \\textrm{approximately}%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flsqrt}{ fl}{procedure}}\n\nReturns the principal square root of \\var{fl}. For $-0.0$,\n{\\cf flsqrt} should return $-0.0$; for other negative arguments,\nthe result may be a NaN or some unspecified flonum.\n\n\\begin{scheme}\n(flsqrt +inf.0)               \\ev  +inf.0\n(flsqrt -0.0)                 \\ev  -0.0%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{flexpt}{ \\vari{fl} \\varii{fl}}{procedure}}\n\n\\domain{Either \\vari{fl} should be non-negative, or, if \\vari{fl} is\n  negative, \\varii{fl} should be an integer object.}\nThe {\\cf flexpt} procedure returns \\vari{fl} raised to the power \\varii{fl}.  If \\vari{fl} is\nnegative and \\varii{fl} is not an integer object, the result may be a\nNaN, or may be some unspecified flonum.\nIf \\vari{fl} and \\varii{fl} are both zero, the result is\n1.0.  
If \\vari{fl} is zero and \\varii{fl} is positive, the result is zero. \nIf \\vari{fl} is zero and \\varii{fl} is negative, the result may be a NaN, or may be\nsome unspecified flonum.\n\\end{entry}\n\n\\begin{entry}{%\n\\ctproto{no-infinities}\n\\proto{make-no-infinities-violation}{}{procedure}\n\\proto{no-infinities-violation?}{ obj}{procedure}\n\\ctproto{no-nans}\n\\proto{make-no-nans-violation}{}{procedure}\n\\proto{no-nans-violation?}{ obj}{procedure}}\n\nThese condition types could be defined by the following code:\n\n\\begin{scheme}\n(define-condition-type \\&no-infinities\n    \\&implementation-restriction\n  make-no-infinities-violation\n  no-infinities-violation?)\n\n(define-condition-type \\&no-nans\n    \\&implementation-restriction\n  make-no-nans-violation no-nans-violation?)%\n\\end{scheme}\n\nThese types describe that a program has executed an arithmetic\noperation that is specified to return an infinity or a NaN,\nrespectively, on a Scheme implementation that is not able to represent\nthe infinity or NaN.  (See report section~\\extref{report:infinitiesnanssection}{Representability of infinities and NaNs}.)\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{fixnum->flonum}{ fx}{procedure}}\n\nReturns a flonum that is numerically closest to \\var{fx}.\n\n\\begin{note}\nThe result of this procedure may not be\nnumerically equal to \\var{fx}, because the fixnum precision\nmay be greater than the flonum precision.\n\\end{note}\n\\end{entry}\n\n\\section{Exact bitwise arithmetic}\n\\label{exactsection}\n\nThis section describes the \\defrsixlibrary{arithmetic bitwise}\nlibrary.  The exact bitwise arithmetic provides generic operations on\nexact integer objects.  This section uses \\var{ei}, \\vari{ei}, \\varii{ei}, etc.,\nas parameter names for arguments that must be exact integer objects.\n\n\n\\begin{entry}{%\n\\proto{bitwise-not}{ ei}{procedure}}\n\nReturns the exact integer object whose two's complement representation is the\none's complement of the two's complement representation of \\var{ei}.\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-and}{ \\vari{ei} \\dotsfoo}{procedure}\n\\proto{bitwise-ior}{ \\vari{ei} \\dotsfoo}{procedure}\n\\proto{bitwise-xor}{ \\vari{ei} \\dotsfoo}{procedure}}\n\nThese procedures return the exact integer object that is the bit-wise\n``and'', ``inclusive or'', or ``exclusive or'' of the two's complement\nrepresentations of their arguments.  If they are passed only one\nargument, they return that argument.  If they are passed no arguments,\nthey return the integer object (either $-1$ or $0$) that acts as identity for\nthe operation.
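\n\nFor example:\n\\begin{scheme}\n(bitwise-and)                 \\ev  -1\n(bitwise-ior)                 \\ev  0\n(bitwise-xor \\sharpsign{}b1100 \\sharpsign{}b1010)   \\lev  6 ; \\sharpsign{}b0110%\n\\end{scheme}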
\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-if}{ \\vari{ei} \\varii{ei} \\variii{ei}}{procedure}}\n\nReturns the exact integer object that is the bit-wise ``if'' of the two's complement\nrepresentations of its arguments, i.e.\\ for each bit, if it is 1 in\n\\vari{ei}, the corresponding bit in \\varii{ei} becomes the value of\nthe corresponding bit in the result, and if it is 0, the corresponding\nbit in \\variii{ei} becomes the corresponding bit in the value of the\nresult.\nThis is the result of the following computation:\n\\begin{scheme}\n(bitwise-ior (bitwise-and \\vari{ei} \\varii{ei})\n             (bitwise-and (bitwise-not \\vari{ei}) \\variii{ei}))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-bit-count}{ ei}{procedure}}\n\nIf \\var{ei} is non-negative, this procedure returns the number of\n1 bits in the two's complement representation of \\var{ei}.\nOtherwise it returns the result of the following computation:\n%\n\\begin{scheme}\n(bitwise-not (bitwise-bit-count (bitwise-not \\var{ei})))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-length}{ ei}{procedure}}\n\nReturns the number of bits needed to represent \\var{ei} if it is\npositive, and the number of bits needed to represent {\\cf (bitwise-not\n  \\var{ei})} if it is negative, which is the exact integer object that\nis the result of the following computation:\n\\begin{scheme}\n(do ((result 0 (+ result 1))\n     (bits (if (negative? \\var{ei})\n               (bitwise-not \\var{ei})\n               \\var{ei})\n           (bitwise-arithmetic-shift bits -1)))\n    ((zero? bits)\n     result))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-first-bit-set}{ ei}{procedure}}\n\nReturns the index of the least significant $1$\nbit in the two's complement representation of \\var{ei}.\nIf \\var{ei} is $0$, then $-1$ is returned.\n\\begin{scheme}\n(bitwise-first-bit-set 0)        \\ev  -1\n(bitwise-first-bit-set 1)        \\ev  0\n(bitwise-first-bit-set -4)       \\ev  2%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-bit-set?}{ \\vari{ei} \\varii{ei}}{procedure}}\n\n\\domain{\\varii{Ei} must be non-negative.}\nThe {\\cf bitwise-bit-set?} procedure returns\n\\schtrue{} if the \\varii{ei}th bit is 1 in the two's complement\nrepresentation of \\vari{ei}, and \\schfalse{}\notherwise.  
This is the result of the following computation:\n\\begin{scheme}\n(not (zero?\n       (bitwise-and\n         (bitwise-arithmetic-shift-left 1 \\varii{ei})\n         \\vari{ei})))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-copy-bit}{ \\vari{ei} \\varii{ei} \\variii{ei}}{procedure}}\n\n\\domain{\\varii{Ei} must be non-negative, and \\variii{ei}\nmust be either $0$ or $1$.}\nThe {\\cf bitwise-copy-bit} procedure returns the result of replacing\nthe \\varii{ei}th bit of \\vari{ei} by \\variii{ei}, which is\nthe result of the following computation:\n\\begin{scheme}\n(let* ((mask (bitwise-arithmetic-shift-left 1 \\varii{ei})))\n  (bitwise-if mask\n            (bitwise-arithmetic-shift-left \\variii{ei} \\varii{ei})\n            \\vari{ei}))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-bit-field}{ \\vari{ei} \\varii{ei} \\variii{ei}}{procedure}}\n\n\\domain{\\varii{Ei} and \\variii{ei} must be non-negative, and\n  \\varii{ei} must be less than or equal to \\variii{ei}.}\nThe {\\cf bitwise-bit-field} procedure returns the\nnumber represented by the bits at the positions from \\varii{ei}\n(inclusive) to $\\variii{ei}$ (exclusive), which is\nthe result of the following computation:\n%\n\\begin{scheme}\n(let ((mask\n       (bitwise-not\n        (bitwise-arithmetic-shift-left -1 \\variii{ei}))))\n  (bitwise-arithmetic-shift-right\n    (bitwise-and \\vari{ei} mask)\n    \\varii{ei}))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-copy-bit-field}{ \\vari{ei} \\varii{ei} \\variii{ei} \\variv{ei}}{procedure}}\n\n\\domain{\\varii{Ei} and \\variii{ei} must be non-negative,\nand \\varii{ei} must be less than or equal to \\variii{ei}.}\nThe {\\cf bitwise-copy-bit-field} procedure returns\nthe result of replacing in \\vari{ei} the bits at positions from\n\\varii{ei} (inclusive) to $\\variii{ei}$ (exclusive) by the bits in\n\\variv{ei} from position 0 (inclusive) to position\n$\\variii{ei}-\\varii{ei}$ (exclusive), which\nis the result of the following computation:\n%\n\\begin{scheme}\n(let* ((to    \\vari{ei})\n       (start \\varii{ei})\n       (end   \\variii{ei})\n       (from  \\variv{ei})\n       (mask1\n         (bitwise-arithmetic-shift-left -1 start))\n       (mask2\n         (bitwise-not\n           (bitwise-arithmetic-shift-left -1 end)))\n       (mask (bitwise-and mask1 mask2)))\n  (bitwise-if mask\n              (bitwise-arithmetic-shift-left from\n                                             start)\n              to))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry} {%\n\\proto{bitwise-arithmetic-shift}{ \\vari{ei} \\varii{ei}}{procedure}}\n\nReturns the result of the following computation:\n%\n\\begin{scheme}\n(floor (* \\vari{ei} (expt 2 \\varii{ei})))%\n\\end{scheme}\n\nExamples:\n%\n\\begin{scheme}\n(bitwise-arithmetic-shift -6 -1) \\lev -3\n(bitwise-arithmetic-shift -5 -1) \\lev -3\n(bitwise-arithmetic-shift -4 -1) \\lev -2\n(bitwise-arithmetic-shift -3 -1) \\lev -2\n(bitwise-arithmetic-shift -2 -1) \\lev -1\n(bitwise-arithmetic-shift -1 -1) \\lev -1%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-arithmetic-shift-left}{ \\vari{ei} \\varii{ei}}{procedure}\n\\proto{bitwise-arithmetic-shift-right}{ \\vari{ei} \\varii{ei}}{procedure}}\n\n\\domain{\\varii{Ei} must be non-negative.}  The {\\cf\n  bitwise-\\hp{}arithmetic-\\hp{}shift-\\hp{}left} procedure returns the same result as {\\cf\n  bitwise-arithmetic-shift}, and\n\\begin{scheme}\n(bitwise-arithmetic-shift-right \\vari{ei} 
\\varii{ei})%\n\\end{scheme}\nreturns the same result as \n\\begin{scheme}\n(bitwise-arithmetic-shift \\vari{ei} (- \\varii{ei}))\\textrm{.}%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-rotate-bit-field}{ \\vari{ei} \\varii{ei} \\variii{ei} \\variv{ei}}{procedure}}\n\n\\domain{\\varii{Ei}, \\variii{ei}, \\variv{ei} must be non-negative, \n\\varii{ei} must be less than or equal to \\variii{ei}.}\nThe {\\cf bitwise-rotate-bit-field} procedure returns the result of cyclically permuting in \\vari{ei} the\nbits at positions from \\varii{ei} (inclusive) to \\variii{ei} (exclusive) by \\variv{ei} bits\ntowards the more significant bits, which is the result of the\nfollowing computation:\n%\n\\begin{scheme}\n(let* ((n     \\vari{ei})\n       (start \\varii{ei})\n       (end   \\variii{ei})\n       (count \\variv{ei})\n       (width (- end start)))\n  (if (positive? width)\n      (let* ((count (mod count width))\n             (field0\n               (bitwise-bit-field n start end))\n             (field1 (bitwise-arithmetic-shift-left\n                       field0 count))\n             (field2 (bitwise-arithmetic-shift-right\n                       field0\n                       (- width count)))\n             (field (bitwise-ior field1 field2)))\n        (bitwise-copy-bit-field n start end field))\n      n))%\n\\end{scheme}\n\\end{entry}\n\n\\begin{entry}{%\n\\proto{bitwise-reverse-bit-field}{ \\vari{ei} \\varii{ei} \\variii{ei}}{procedure}}\n\n\\domain{\\varii{Ei} and \\variii{ei} must be non-negative, and\n  \\varii{ei} must be less than or equal to \\variii{ei}.}  The {\\cf bitwise-reverse-bit-field} procedure returns\nthe result obtained from \\vari{ei} by reversing the\norder of the bits at positions from \\varii{ei} (inclusive) to\n\\variii{ei} (exclusive).\n\\begin{scheme}\n(bitwise-reverse-bit-field \\sharpsign{}b1010010 1 4)   \\lev  88 ; \\sharpsign{}b1011000%\n\\end{scheme}\n\\end{entry}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"r6rs-lib\"\n%%% End: \n", "meta": {"hexsha": "b8d1f58600e93ec0ba99ae190c5ae9c3ff2f2668", "size": 35756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "r6rs/arith.tex", "max_stars_repo_name": "schemedoc/rnrs-metadata", "max_stars_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-04T17:38:19.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-04T17:38:19.000Z", "max_issues_repo_path": "r6rs/arith.tex", "max_issues_repo_name": "schemedoc/scheme-rnrs-metadata", "max_issues_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2019-03-27T22:24:05.000Z", "max_issues_repo_issues_event_max_datetime": "2019-09-26T17:56:02.000Z", "max_forks_repo_path": "r6rs/arith.tex", "max_forks_repo_name": "schemedoc/scheme-rnrs-metadata", "max_forks_repo_head_hexsha": "2f998d354177dc41a8d3147fd15c056a14ffabda", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7821011673, "max_line_length": 175, "alphanum_fraction": 0.6868497595, "num_tokens": 11252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916099737807, "lm_q2_score": 0.6893056295505784, "lm_q1q2_score": 0.5951406972616644}}
{"text": "%\\section{nomenclature}\r\n\r\nE $\\equiv$ electric field in the random sample\r\n\r\n$ {\\cal E}  \\equiv$ energy\r\n\r\n$ \\xi \\equiv$ localization length\r\n\r\nx/L $\\equiv$ normalized system length\r\n\r\n$x_o$/L $\\equiv$ normalized defect position\r\n\r\nL  $\\equiv$ system length\r\n\r\n$\\omega \\equiv$ frequency\r\n\r\n$\\omega_o \\equiv$ resonant frequency\r\n\r\nT  $\\equiv$ transmission\r\n\r\n$J_+ \\equiv$ transmission flux\r\n\r\nR  $\\equiv$ reflection\r\n\r\n$J_- \\equiv$ reflection flux\r\n\r\nA $\\equiv$ forward propogating electric field coefficient\r\n\r\nB $\\equiv$ backward propogating electric field coefficient\r\n\r\nn $\\equiv$ refractive index for random-width layers\r\n\r\n$\\epsilon \\equiv$ dielectric constant for random-width layers\r\n\r\n$x_o \\equiv$ defect position\r\n\r\n$x_1 \\equiv$ turning point position\r\n\r\n$\\lambda \\equiv$ wavelength\r\n\r\n$\\psi _1 = \\psi _{LR} \\equiv$ wave in passive material, left-to-right orientation\r\n\r\n$\\psi _2 = \\psi _{RL} \\equiv$ wave in passive material, right-to-left orientation\r\n\r\n$\\tilde{ \\psi} _1 = \\tilde{\\psi} _{LR} \\equiv$ wave in active material, left-to-right orientation\r\n\r\n$\\tilde{ \\psi} _2 = \\tilde{\\psi} _{RL} \\equiv$ wave in active material, right-to-left orientation\r\n\r\n$\\alpha \\equiv$ coefficent of $\\psi _1 $ with respect to building $\\tilde{\\psi}$\r\n\r\n$\\beta \\equiv$ coefficent of $\\psi _2 $ with respect to building $\\tilde{\\psi}$\r\n\r\n%%%% %%%%%%\r\n\r\nW $\\equiv$ width\r\n\r\nL $\\equiv$ length\r\n\r\nE $\\equiv$ electric field\r\n\r\nc $\\equiv$ speed of light\r\n\r\n%%%% from quasi1d %%%%%%\r\n\r\nN $\\equiv$ number of open channels\r\n\r\n$N_c \\equiv$ number of closed channels\r\n\r\n$\\alpha \\equiv$ scattering strength\r\n\r\n$l_s \\equiv$ scattering length\r\n\r\n$\\rho \\equiv$ density\r\n\r\ng $\\equiv$ conductivity\r\n\r\n$num_{scat} \\equiv$ number of scatterers\r\n\r\nn $\\equiv$ mode number\r\n\r\nc $\\equiv$ speed of light in a vacuum\r\n\r\nW $\\equiv$ system width\r\n\r\nL $\\equiv$ system length\r\n\r\nk $\\equiv$ wave number\r\n\r\nchunk size = self embedding length = number of scatters in a chunk\r\n\r\n$\\lambda \\equiv$ wavelength\r\n\r\n$\\Gamma \\equiv$ scatterer matrix\r\n\r\n$\\tilde{ \\Gamma} _1 \\equiv$ reduced scatterer matrix for the first scatterer, of size NxN\r\n\r\n$\\tilde{ \\Gamma} ^{combined} \\equiv$ reduced scatterer matrix after being combined\r\n\r\n$\\tilde{ \\Gamma} _{NNC} ^{combined} \\equiv$ reduced scatterer matrix with reduced factors being combined\r\n\r\n$\\hat{G} \\equiv$ a matrix of size $2*(N+N_c)$x$2*(N+N_c)$ describing\r\n\r\n$\\hat{H} \\equiv$ a matrix of size $2*(N+N_c)$x$2*(N+N_c)$ describing\r\n\r\n$\\hat{S} \\equiv$ a matrix of size $2*(N+N_c)$x$2*(N+N_c)$ describing", "meta": {"hexsha": "364a1e655d2027ee3880aee8f465afb932c97b12", "size": 2509, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_symbols_used.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_symbols_used.tex", "max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_symbols_used.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.8952380952, "max_line_length": 105, "alphanum_fraction": 0.6775607812, "num_tokens": 690, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916029436189, "lm_q2_score": 0.689305616785446, "lm_q1q2_score": 0.5951406813944261}}
{"text": "\\subsection{Sequence Characterization}\n\n\\frame{\n  \\frametitle{Sequence Characterization}\n\n  \\begin{multline*}\n    \\begin{array}{cccccccccc}\n      \\underbrace{aaa} & b & \\underbrace{aa} & b & \\underbrace{aa} & b & \\underbrace{aaa} & b & \\underbrace{aa} & b \\\\\n      3 && 2 && 2 && 3 && 2 &\n    \\end{array}\\\\\n    \\begin{array}{cccccccc}\n      \\underbrace{aa} & b & \\underbrace{aaa} & b & \\underbrace{aa} & b & \\underbrace{aaa} & \\dots \\\\\n      2 && 3 && 2 && 3 & \\dots\n    \\end{array}\n  \\end{multline*}\n}\n\n\\frame{\n  \\frametitle{Sequence Characterization}\n\n  \\begin{gather*}\n    \\begin{array}{cccccccc}\n      3 & \\underbrace{22} & 3 & \\underbrace{22} & 3 & \\underbrace{2} & 3 & \\dots\\\\ \n      & 2 && 2 && 1 & \\dots\n    \\end{array}\n  \\end{gather*}\n}\n\n\\subsection{Algorithm}\n\n\\frame{\n  \\frametitle{Algorithm}\n\n  \\begin{align*}\n    dx_n& = \\bigcap_{i=0}^n \\left( \\frac{i}{1 + \\sum_{j=0}^i n_j}, \\frac{1}{-1 + \\sum_{j=0}^i n_j} \\right)\\\\\n    \\delta_n& = \\bigcap_{i=0}^n \\left( i-dx_{n, max} (1 + \\sum_{j=0}^i n_j), i-dx_{n, min} (1 + \\sum_{j=0}^i n_j) \\right)\n  \\end{align*}\n}\n", "meta": {"hexsha": "62c8f811a229457b6c05326d9ef3766a7bcbc6d3", "size": 1070, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "presentation/algorithm.tex", "max_stars_repo_name": "spectralflight/billiards", "max_stars_repo_head_hexsha": "429865070b490940fda4d475bb12d1f888bd190b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-05-19T01:48:14.000Z", "max_stars_repo_stars_event_max_datetime": "2015-05-19T01:48:14.000Z", "max_issues_repo_path": "presentation/algorithm.tex", "max_issues_repo_name": "spectralflight/billiards", "max_issues_repo_head_hexsha": "429865070b490940fda4d475bb12d1f888bd190b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentation/algorithm.tex", "max_forks_repo_name": "spectralflight/billiards", "max_forks_repo_head_hexsha": "429865070b490940fda4d475bb12d1f888bd190b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.4358974359, "max_line_length": 121, "alphanum_fraction": 0.5635514019, "num_tokens": 440, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916170039418, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5951406800649773}}
{"text": "\\chapter{Simplification strategies for coupled fundamental commutators}\\label{app:jscheme_commutator_tricks}\n\nFor the coupled fundamental commutators,\nthe key to arriving at the simplest expressions\nis to avoid unnecessary recoupling.\nRecoupling,\nwhere bra or ket couplings have to be broken and recreated\nto relate matrix elements on both sides of the expression,\nis unnecessary when symmetries of the matrix elements\nor of the expression can be exploited\nto bring the uncoupled expression into a form\nwhere its coupled analog no longer requires recoupling.\nRecoupling comes at the cost of an additional sum over an angular momentum\nand the inclusion of a Wigner $6j$ or $9j$ symbol.\nThus avoiding unnecessary recoupling is essential for achieving better performance\nfor implementations of the coupled expressions.\nIn the following,\nwe discuss two strategies for avoiding recoupling\nand apply them each to one example.\n\n\\section{Using antisymmetry of input operator matrix elements}\n\nThe uncoupled matrix elements are antisymmetric under exchange\nof any two bra or ket indices.\nThis symmetry can easily be exploited to permute indices into a form\nthat minimizes recoupling.\n\nConsider the $[\\onebodyop{A}, \\threebodyop{B}] \\rightarrow \\twobodyop{C}$ fundamental commutator.\nThe uncoupled expression for the matrix elements of $C$ is\n\\begin{equation}\n    C_{ijkl} = \\sum_{ab} (n_a - n_b) A_{ab} B_{bijakl}\\,,\n\\end{equation}\nand the coupled expression is\n\\begin{equation}\n    \\begin{split}\n        C_{ijkl}^{J_C} &= (-1)^{j_i + j_j + j_k + j_l}\n        \\sum_{j_a} \\sum_{(J_{B,3}, J_{B,pq}, J_{B,st})} \\hat{J}_{B,pq}\\hat{J}_{B,st}\\hat{J}_{B,3}^2 \\\\\n        &\\quad\\sum_{ab} (n_a \\bar{n}_b - \\bar{n}_a n_b)\n        \\sixj{j_j}{j_i}{J_C}{j_a}{J_{B,3}}{J_{B,pq}}\n        \\sixj{j_l}{j_k}{J_C}{j_a}{J_{B,3}}{J_{B,st}} \\\\\n        &\\quad A_{ab}^{j_a} B_{bijakl}^{(J_{B,3}, J_{B,pq}, J_{B,st})}\\,.\n    \\end{split}\n\\end{equation}\n\nThe appearance of 6$j$ symbols is a result of\nbreaking the $bi$ coupling on the right to form the $ij$ coupling on the left\nand breaking the $ak$ coupling on the right to form the $kl$ coupling on the left.\nBy exploiting antisymmetry in the uncoupled matrix elements of $B$,\nwe can arrive at the uncoupled expression\n\\begin{equation}\n    C_{ijkl} = \\sum_{ab} (n_a \\bar{n}_b - \\bar{n}_a n_b) A_{ab} B_{ijbkla}\\,,\n\\end{equation}\nwhich yields the coupled expression\n\\begin{equation}\n    C_{ijkl}^{J_C} = \\frac{1}{\\hat{J}_{C}^2}\n    \\sum_{j_a}\n    \\sum_{J_{B,3}} \\hat{J}_{B,3}^2\n    \\sum_{ab} (n_a \\bar{n}_b - \\bar{n}_a n_b)\n    A_{ab}^{j_a} B_{ijbkla}^{(J_{B,3}, J_{C}, J_{C})}\\,.\n\\end{equation}\n\n\\section{Using antisymmetry of output operator matrix elements}\n\nAs a reminder, in this work we typically work with\nnon-antisymmetrized expressions for the fundamental commutators,\nso one can not simply permute the indices on the output matrix elements.\nHowever, one can exploit that the output matrix elements\nwill at some point be antisymmetrized\nand choose the term that contributes to the antisymmetrized result\nwith the minimal recoupling.\n\nTo make this clear, consider the following example:\nSuppose the three-body uncoupled matrix elements $O_{pqrstu}$\nare already antisymmetric in $stu$, but not in $pqr$.\nThus to get the final antisymmetrized matrix elements $O^{\\text{as}}_{pqrstu}$,\none needs to evaluate\n\\begin{equation}\\label{eq:antisymmetrize_short}\n    O^{\\text{as}}_{pqrstu} = 
\\mathcal{A}_{pqr} O_{pqrstu}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n    \\mathcal{A}_{pqr} = \\frac{1}{6} (1 + P_{prq} + P_{prq}^2) (1 - P_{pq})\\,,\n\\end{equation}\nwhere $P_{prq}$ cyclically permutes the indices $p$, $q$, and $r$\nand $P_{pq}$ exchanges the indices $p$ and $q$\nas discussed in Section~\\ref{sec:jscheme_many_body_expressions}.\n\nWriting Eq.~\\eqref{eq:antisymmetrize_short} out, one gets\n\\begin{equation}\\label{eq:antisymmetrize_long_normal}\n    O^{\\text{as}}_{pqrstu} = \\frac{1}{6} (O_{pqrstu} + O_{qrpstu} + O_{rpqstu} - O_{qprstu} - O_{rqpstu} - O_{prqstu})\\,.\n\\end{equation}\nNow consider\n\\begin{equation}\n    \\overline{O}_{pqrstu} \\equiv O_{qrpstu}\\,.\n\\end{equation}\nAntisymmetrizing $\\overline{O}_{pqrstu}$ gives\n\\begin{equation}\n    \\overline{O}^{\\text{as}}_{pqrstu} = \\frac{1}{6} (\\overline{O}_{pqrstu} + \\overline{O}_{qrpstu} + \\overline{O}_{rpqstu}\n    - \\overline{O}_{qprstu} - \\overline{O}_{rqpstu} - \\overline{O}_{prqstu})\\,,\n\\end{equation}\nwhich one can rewrite in terms of the original matrix elements $O_{pqrstu}$:\n\\begin{equation}\\label{eq:antisymmetrize_long_alternate}\n    \\overline{O}^{\\text{as}}_{pqrstu} = \\frac{1}{6} (O_{qrpstu} + O_{rpqstu} + O_{pqrstu}\n    - O_{rqpstu} - O_{prqstu} - O_{qprstu})\\,.\n\\end{equation}\nOne can quickly see that Eqs.~\\eqref{eq:antisymmetrize_long_normal} and~\\eqref{eq:antisymmetrize_long_alternate}\ncontain the exact same terms, making them equal.\nThus if $O_{pqrstu}$ is given by some many-body expression,\none can permute the indices $p$, $q$, and $r$ on the right-hand (or left-hand) side of the expression,\nincluding an overall minus sign if the permutation is odd,\nand know that it will give the same result after antisymmetrization.\n\nAgain, a concrete application is useful to illustrate how this strategy can be applied.\nConsider the $[\\twobodyop{A}, \\twobodyop{B}] \\rightarrow \\threebodyop{C}$ fundamental commutator.\nThe uncoupled non-antisymmetrized expression is\n\\begin{equation}\n    C_{ijklmn} = 9 \\sum_{a} (A_{ijla} B_{akmn} - B_{ijla} A_{akmn})\\,,\n\\end{equation}\nwhich gives the coupled expression\n\\begin{equation}\\label{eq:comm_223_bad}\n    \\begin{split}\n        C_{ijklmn}^{(J_{C,3}, J_{C,ij}, J_{C,lm})} & =\n        9 \\hat{J}_{C,ij} \\hat{J}_{C,lm}\n        (-1)^{j_m + j_n}\n        \\sum_{J_{mn}}\n        \\hat{J}_{mn}^2\n        \\sum_{a} (-1)^{j_a + j_k} \\\\\n        & \\quad \\quad \\times\n        \\sixj{j_k}{j_a}{J_{mn}}{j_{l}}{J_{C,3}}{J_{C,ij}}\n        \\sixj{j_l}{j_m}{J_{C,lm}}{j_{n}}{J_{C,3}}{J_{mn}} \\\\\n        & \\quad \\quad \\times \\left(\n        A_{ijla}^{J_{C,ij}} B_{akmn}^{J_{mn}}\n        - B_{ijla}^{J_{C,ij}} A_{akmn}^{J_{mn}}\n        \\right).\n    \\end{split}\n\\end{equation}\nOne can immediately see that\nthe $mn$ coupling on the right in $B$ and $A$ needs to be broken up\nto form the $lm$ coupling on the left,\nleading to some of the recoupling.\n\nHowever, the uncoupled expression\n\\begin{equation}\n    D_{ijklmn} \\equiv 9 \\sum_{a} (A_{ijna} B_{aklm} - B_{ijna} A_{aklm})\n\\end{equation}\nhas the indices $l$, $m$, and $n$ on the right-hand side permuted cyclically,\nand it will no longer have this recoupling problem when the expression is coupled.\nThe antisymmetrized matrix elements of $C$ and $D$ are equal,\nso one might as well simply calculate $D$ instead of $C$ and antisymmetrize that.\nThe coupled expression for $D$ is\n\\begin{equation}\\label{eq:comm_223_good}\n    \\begin{split}\n        
D_{ijklmn}^{(J_{C,3}, J_{C,ij}, J_{C,lm})} & =\n        -9 (-1)^{J_{C,lm}} \\hat{J}_{C,ij} \\hat{J}_{C,lm}\n        \\sum_{a} (-1)^{j_a + j_k}\n        \\sixj{j_k}{j_a}{J_{C,lm}}{j_{n}}{J_{C,3}}{J_{C,ij}} \\\\\n        & \\quad \\quad \\times \\left(\n        A_{ijna}^{J_{C,ij}} B_{aklm}^{J_{C,lm}}\n        - B_{ijna}^{J_{C,ij}} A_{aklm}^{J_{C,lm}}\n        \\right).\n    \\end{split}\n\\end{equation}\nThe final $6j$ symbol cannot be avoided\nas it forms the three-body angular-momentum coupling\nthat is not present on the right-hand side.\nComparing Eq.~\\eqref{eq:comm_223_bad} and Eq.~\\eqref{eq:comm_223_good},\none can see the simplification afforded by carefully considering\nwhich quantity one actually wants to calculate.\n", "meta": {"hexsha": "b8cf2c31d2548f2ca2d7cebd332457a828532f25", "size": 7503, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/doc/a02_jscheme_simplification_strategies.tex", "max_stars_repo_name": "cheshyre/masters-thesis", "max_stars_repo_head_hexsha": "464fb498b0f0225d370358164c8efefe014fa820", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/doc/a02_jscheme_simplification_strategies.tex", "max_issues_repo_name": "cheshyre/masters-thesis", "max_issues_repo_head_hexsha": "464fb498b0f0225d370358164c8efefe014fa820", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/doc/a02_jscheme_simplification_strategies.tex", "max_forks_repo_name": "cheshyre/masters-thesis", "max_forks_repo_head_hexsha": "464fb498b0f0225d370358164c8efefe014fa820", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.3964497041, "max_line_length": 122, "alphanum_fraction": 0.6930561109, "num_tokens": 2571, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085708384735, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5951024189907494}}
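The index-permutation argument behind Eq.~\eqref{eq:comm_223_good} is easy to check numerically. The following Python sketch (an illustration with a random test tensor; the function name and dimensions are assumptions, not part of the thesis code) antisymmetrizes a six-index tensor over its first three indices as in Eq.~\eqref{eq:antisymmetrize_long_normal} and verifies that a cyclic permutation of the bra indices leaves the antisymmetrized result unchanged.

\begin{verbatim}
import numpy as np

def antisymmetrize_bra(O):
    # (1/6) * sum of signed permutations of the first three (bra) indices.
    terms = [((0, 1, 2), +1), ((1, 2, 0), +1), ((2, 0, 1), +1),
             ((1, 0, 2), -1), ((2, 1, 0), -1), ((0, 2, 1), -1)]
    return sum(sign * O.transpose(perm + (3, 4, 5))
               for perm, sign in terms) / 6

rng = np.random.default_rng(0)
O = rng.normal(size=(4,) * 6)
O_bar = O.transpose((1, 2, 0, 3, 4, 5))   # O_bar[p,q,r,...] = O[q,r,p,...]
assert np.allclose(antisymmetrize_bra(O), antisymmetrize_bra(O_bar))
\end{verbatim}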
{"text": "\\documentclass[a4paper]{scrartcl}\n\n% Text\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage[T1]{fontenc}\n\n% Math\n\\usepackage{amsmath, amssymb}\n\\usepackage{mathtools}\n\\usepackage{bm}\n\t\\newcommand{\\vv}[1]{\\ensuremath{\\bm{#1}}}\n\t\\newcommand\\lphi{\\ensuremath{\\phi^{\\text{left}}}}\n\t\\newcommand\\lH{\\ensuremath{H^{\\text{left}}}}\n\t\\newcommand\\lh{\\ensuremath{h^{\\text{left}}}}\n\n\t\\newcommand\\rphi{\\ensuremath{\\phi^{\\text{right}}}}\n\t\\newcommand\\rH{\\ensuremath{H^{\\text{right}}}}\n\t\\newcommand\\rh{\\ensuremath{h^{\\text{right}}}}\n\n\t\\newcommand\\R{\\ensuremath{\\mathbb{R}}}\n\t\\newcommand\\Z{\\ensuremath{\\mathbb{Z}}}\n\t\\DeclareMathOperator\\supp{supp}\n\t\\DeclareMathOperator\\spn{span}\n\n% Set\n% \\given is part of the syntax, make sure it exists\n% \\providecommand\\given{}\n\\newcommand\\given{\\:\\vert\\:}\n\\newcommand\\SetSymbol[1][]{\n\t\\nonscript\\: #1 \\vert \\nonscript\\:\n\t\\mathopen{} % make - etc behave as sign\n\t\\allowbreak % in text, allow line break after the \\given\n}\n\\DeclarePairedDelimiterX\\Set[1]{\\{}{\\}}{\n\t% Change is local to \\Set\n\t\\renewcommand\\given{\\SetSymbol[\\delimsize]}\n\t#1\n}\n\n\\usepackage{graphicx}\n\\usepackage{asymptote}\n\n\\usepackage[colorlinks=true]{hyperref}\n\n\\usepackage{cleveref}\n\n\\usepackage{biblatex}\n\\bibliography{literature.bib}\n\n\\title{Daubechies wavelets}\n\\author{Robert Dahl Jacobsen}\n\n\\begin{document}\n\n\\maketitle\n\nThe purpose of the Julia package \\href{https://github.com/robertdj/IntervalWavelets.jl}{IntervalWavelets.jl} is to compute ordinary Daubechies scaling functions (see e.g.\\ \\cite{Mallat:2009}) and the moment-preserving boundary scaling functions from \\cite{Cohen:Daubechies:Vial:1993}.\nWe rely on the recursive approach of \\cite{Strang:1989}.\nIn this note I summarize these methods with all the details used in the implementation.\n\n\n\\section{Interior scaling functions}\n\\label{sec:internal}\n\nA Daubechies scaling function $\\phi$ and associated wavelet $\\psi$ with $p$ vanishing moments are defined by a filter $\\{h_k\\}_{k\\in\\Z}$.\nThe filter, scaling function and wavelet have supports of the same lengths and we know from \\cite[Theorem 7.5]{Mallat:2009} that if $\\supp \\{h_k\\}_k = [N_1, N_2]$, then \n\\begin{equation*}\n\t\\supp\\phi = [N_1, N_2],\n\t\\quad\n\t\\supp\\psi = \\Bigl[\\frac{N_1 - N_2 + 1}2, \\frac{N_2 - N_1 + 1}2\\Bigr].\n\\end{equation*}\nIt is customary to let $N_1 = 0$ and $N_2 = 2p - 1$.\nHowever, when constructing the boundary scaling functions we have $N_1 = -p+1$ and $N_2 = p$.\n\nThe scaling function satisfies the dilation equation\n\\begin{equation}\n\t\\label{eq:internal_scaling_function_definition}\n\t\\phi(x) \n\t= \\sqrt2 \\sum_{k=0}^{2p-1} h_k \\phi_k(2x)\n\t= \\sqrt2 \\sum_{k=0}^{2p-1} h_k \\phi(2x - k).\n\\end{equation}\nFor $p\\geq2$, $\\phi$ is continuous and hence zero at the endpoints of the support.\nThese properties allow us to compute $\\phi$ at the integer values (in the support).\nAs an example, for $p = 3$:\n\\begin{align*}\n\t\\phi(1) \n    & = \\sqrt2 \\bigl(h_1\\phi(1) + h_0\\phi(2)\\bigr),\n\t\\\\\n\t\\phi(2)\n    & = \\sqrt2 \\bigl(h_4\\phi(1) + h_3\\phi(2) + h_2\\phi(3) + h_1\\phi(4)\\bigr),\n\t\\\\\n\t\\phi(3)\n    & = \\sqrt2 \\bigl(h_5\\phi(1) + h_4\\phi(2) + h_3\\phi(3) + h_2\\phi(4)\\bigr),\n\t\\\\\n\t\\phi(4)\n    & = \\sqrt2 \\bigl(h_5\\phi(3) + h_4\\phi(4)\\bigr).\n\\end{align*}\nIn matrix form, we have an eigenvalue 
problem:\n\\begin{equation*}\n\t\\begin{bmatrix}\n\t\t\\phi(1) \\\\ \\phi(2) \\\\ \\phi(3) \\\\ \\phi(4)\n\t\\end{bmatrix}\n\t=\n\t\\sqrt2\n\t\\begin{bmatrix}\n\t\th_1 & h_0 & 0 & 0\n\t\t\\\\\n\t\th_3 & h_2 & h_1 & h_0\n\t\t\\\\\n\t\th_5 & h_4 & h_3 & h_2\n\t\t\\\\\n\t\t0 & 0 & h_5 & h_4\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\t\t\\phi(1) \\\\ \\phi(2) \\\\ \\phi(3) \\\\ \\phi(4)\n\t\\end{bmatrix}.\n\\end{equation*}\nFor a general support $[N_1, N_2]$ the $(i,j)$'th entry is $\\sqrt2 h_{2i-j+N_1}$ and the vector $\\vv\\phi = [\\phi(n)]_{n=N_1+1}^{N_2-1}$ (here $[\\phi(n)]_{n=1}^4$) is an eigenvector associated with the eigenvalue 1.\nThis eigenspace is one-dimensional, so the only question is how to scale $\\vv\\phi$.\nFrom e.g.\\ \\cite[page 69]{Cohen:Daubechies:Vial:1993} we know that\n\\begin{equation*}\n\t\\sum_{k\\in\\mathbb{Z}} \\phi(k)\n\t= \\sum_{k=0}^{2p-1} \\phi(k)\n\t= 1.\n\\end{equation*}\n\nFrom the function values at the integers we can compute the function values at the half-integers using \\eqref{eq:internal_scaling_function_definition}.\nAs an example,\n\\begin{equation*}\n\t\\phi\\Bigl(\\frac32\\Bigr)\n    = \\sqrt2 \\bigl(h_0 \\phi(3) + h_1 \\phi(2) + h_2 \\phi(1)\\bigr).\n\\end{equation*}\nThis process can be repeated to recursively yield $\\phi(k/2^R)$, for all integers $k$ and positive integers $R$.\n\nNote that the filter $\\{h_k\\}_k$ defining the scaling function is not unique.\n\\cref{fig:Daubechies4} shows the usual minimum-phase Daubechies 4 scaling function along with the Daubechies `symmlet'/linear-phase scaling function used in \\cref{sec:boundary_Daubechies} and \\cite{Cohen:Daubechies:Vial:1993} -- see e.g.\\ \\cite[Section 7.2.3]{Mallat:2009}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{interior}\n\t\\caption{The usual minimum-phase Daubechies 4 scaling function (black) and the symmlet version (red).}\n\t\\label{fig:Daubechies4}\n\\end{figure}\n\n\n\\section{Boundary scaling functions}\n\\label{sec:boundary_Daubechies}\n\nThe moment-preserving Daubechies boundary scaling functions were introduced in \\cite{Cohen:Daubechies:Vial:1993} and are also described in \\cite{Mallat:2009} (albeit with some indexing errors).\n\nAn important difference between the internal and boundary scaling functions is that the left (right) boundary scaling functions are \\emph{not} continuous at the left (right) endpoint of their support.\n\nAs in \\cref{sec:internal}, the dilation equations defining the boundary scaling functions can yield function values at all dyadic rationals once we have the function values at the integers (in the support).\nIn the subsequent sections the focus is therefore on how to compute these functions at the integers.\n\nThe filters used for the boundary scaling functions are available at \\url{https://services.math.duke.edu/~ingrid/publications/54.txt} and \\url{http://numerical.recipes/contrib}.\n\n\n\\subsection{Left boundary scaling functions}\n\nLet $p$ denote the number of vanishing moments and $\\phi$ be the interior symmlet Daubechies scaling function associated with the wavelet with $p$ vanishing moments translated such that $\\supp\\phi = [-p+1, p]$.\n\nWe want a family of functions satisfying a multiresolution analysis on $L^2([0,\\infty))$ or, equivalently, a dilation equation like \\cref{eq:internal_scaling_function_definition}.\nThe starting point is $\\{\\phi_k\\}_{k\\geq 0}$.\nThe functions $\\phi_k$ with $\\supp\\phi_k \\subseteq [0,\\infty)$ do not need any alteration.\nBut the $\\phi_k$ with $\\supp\\phi_k \\cap (-\\infty,0) \\neq \\emptyset$ (i.e., with 
$0\\leq k < p-1$) must be replaced with a corresponding $\\lphi_k$.\nIt turns out that we should also replace $\\phi_{p-1}$ with $\\lphi_{p-1}$ in order to keep the number of vanishing moments even though $\\supp\\phi_{p-1} = [0,2p-1]$.\nThe boundary scaling functions are constructed such that $\\supp\\bigl(\\lphi_k\\bigr) = [0,p+k]$.\n\nThe relevant counterpart to the dilation equation \\cref{eq:internal_scaling_function_definition} for interior scaling functions is\n\\begin{align}\n\t\\lphi_k(x)\n\t& = \\sqrt2 \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(2x) + \\sqrt2 \\sum_{m=p}^{p+2k} \\lh_{k,m} \\phi_m(2x)\n\t\\nonumber\n\t\\\\\n\t& = \\sqrt2 \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(2x) + \\sqrt2 \\sum_{m=p}^{p+2k} \\lh_{k,m} \\phi(2x-m),\n\t\\label{eq:left_scaling_function_definition}\n\\end{align}\nfor $0\\leq k\\leq p-1$, where $(\\lH_{k,l})$ and $(\\lh_{k,m})$ are filter coefficients.\n\nFor $x \\neq 0$ we make use of the compact support.\nConsider e.g.\\ the case $p=2$ (where is $\\phi$ is supported on $[-1,2]$, $\\lphi_0$ is supported on $[0,2]$ and $\\lphi_1$ is supported on $[0,3]$):\n\\begin{align*}\n\t\\lphi_0(1)\n    & = \\sqrt2 \\bigl( \\lH_{0,1} \\lphi_1(2) + \\lh_{0,2} \\phi(0) \\bigr),\n\t\\\\\n\t\\lphi_1(1)\n    & = \\sqrt2 \\bigl( \\lH_{1,1} \\lphi_1(2) + \\lh_{1,2} \\phi(0) \\bigr),\n\t\\\\\n\t\\lphi_1(2)\n    & = \\sqrt2 \\bigl( \\lh_{1,3} \\phi(1) + \\lh_{1,4} \\phi(0) \\bigr).\n\\end{align*}\nFrom Section \\ref{sec:internal} we know how to calculate the internal scaling function and by starting with the largest values we can also compute the boundary scaling function.\n\nThe function value $\\lphi_k(0)$ require special treatment.\nFor $x = 0$ \\eqref{eq:left_scaling_function_definition} becomes\n\\begin{equation*}\n\t\\lphi_k(0)\n    = \\sqrt2 \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(0).\n\\end{equation*}\nThese function values are therefore an eigenvector of the matrix with $(i,j)$'th entry $[\\sqrt2 \\lH_{i,j}]$.\nTo find the proper normalization of this eigenvector we can do as follows:\nLet $\\lphi_\\ell(0) = y_\\ell$ for $0\\leq \\ell < p$.\nComputing an eigenvector as described we have $\\lphi_\\ell(0) = z_\\ell = c y_\\ell$ for some $c \\in \\R$.\nWe know that the left boundary scaling functions are a basis capable of reconstructing polynomials.\nIn particular, there exists $a_0, \\ldots, a_{p-1}$ such that for all $x \\in [0, 1]$ we have\n\\begin{equation*}\n    \\sum_{\\ell=0}^{p-1} a_\\ell \\lphi_\\ell(x) = 1.\n\\end{equation*}\nOn this interval all the interior scaling functions are zero and we only need the left boundary scaling functions.\nThe coefficients can be determined by choosing $p$ dyadic rationals $x_0, \\ldots, x_{p-1}$ with $0 < x_k \\leq 1$ and solving the linear equations\n\\begin{equation*}\n    \\sum_{\\ell=0}^{p-1} a_\\ell \\lphi_\\ell(x_k) = 1, \\quad 0\\leq k< p.\n\\end{equation*}\nIn order to have have the $p$ function values $\\lphi_\\ell(x_k)$ the iterative refinement must proceed until the resolution of the dyadic rationals is at least $R = \\lceil \\log_2 p \\rceil$.\nThis gives us the constant $c$:\n\\begin{equation*}\n    1 = \\sum_{\\ell=0}^{p-1} a_\\ell \\lphi_\\ell(0) \n    \\quad\\Rightarrow\\quad\n    c = \\sum_{\\ell=0}^{p-1} a_\\ell z_\\ell. 
\n\\end{equation*}\n\n\nThe four boundary scaling functions related to four vanishing moments are seen in \\cref{fig:left_Daubechies4}.\nThere is a large resemblance between $\\lphi_3$ and the symmlet scaling function in \\cref{fig:Daubechies4} (here denoted $\\phi_4$).\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{left}\n\t\\caption{The left boundary scaling functions with 4 vanishing moments.}\n\t\\label{fig:left_Daubechies4}\n\\end{figure}\n\n\n\\subsection{Right boundary scaling functions}\n\nLet again $\\phi$ denote the interior symmlet Daubechies scaling function and $p$ denote the number of vanishing moments of the associated wavelet.\nThe idea for the right boundary scaling functions is the same as for the left:\nWe want a multiresolution analysis on $L^2((-\\infty,0])$ by modifying the interior scaling functions.\nThe $\\phi_k$ with $\\supp\\phi_k \\subset (-\\infty,0)$ are unaltered, but those with $\\supp\\phi_k \\cap [0, \\infty) \\neq \\emptyset$ are replaced by a corresponding $\\rphi_k$.\nIn conclusion, for $k=0,\\ldots,p-1$, the right boundary scaling functions satisfy the dilation equations\n\\begin{equation}\n\t\\label{eq:right_scaling_function_definition}\n\t\\rphi_k(x)\n\t= \\sqrt2 \\sum_{l=0}^{p-1} \\rH_{k,l} \\rphi_l(2x) + \\sqrt2 \\sum_{m=p}^{p+2k} \\rh_{k,m} \\phi(2x+m+1),\n\\end{equation}\nwhere $(\\rH_{k,l})$ and $(\\rh_{k,m})$ are filter coefficients.\nThe support of $\\rphi_k$ is $[-p-k,0]$.\n\nThe four right boundary scaling functions related to four vanishing moments are seen in \\cref{fig:right_Daubechies4}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{right}\n\t\\caption{The right boundary scaling functions with 4 vanishing moments.}\n\t\\label{fig:right_Daubechies4}\n\\end{figure}\n\n\n\\section{Scaling functions on an interval}\n\nThe left and right scaling functions generate multiresolution analyses for the positive and negative halflines, respectively.\nWe can combine the three kinds of scaling functions to obtain a multiresolution for an interval.\nLet us first recall a few properties of a multiresolution analysis for $L^2(\\R)$.\nWe use the notation\n\\begin{equation*}\n    \\phi_{J,k}(x) = 2^{J/2} \\phi(2^J x - k)\n\\end{equation*}\nand \n\\begin{equation*}\n    V_J = \\spn\\Set{\\phi_{J,k} \\given k \\in \\Z}\n\\end{equation*}\nThen\n\\begin{equation*}\n    L^2(\\R) = \\overline{\\bigcup_{J\\in\\Z} V_J}.\n\\end{equation*}\n\nInitially we want a similar multiresolution for the interval $[0, 1]$, that is, a relation of the form\n\\begin{equation*}\n    L^2([0, 1]) = \\overline{\\bigcup_{J\\in\\Z} \\widetilde V_J}\n\\end{equation*}\nfor suitably defined $\\widetilde V_J$.\nWe index the basis functions in $\\widetilde V_J$ as $\\widetilde \\phi_1, \\ldots, \\widetilde \\phi_N$.\nThe first thing to notice is that all three kinds of scaling functions must be dilated such that their support is at most of length 1.\nJust as in the classic multiresolution we dilate with numbers of the form $2^J$.\nLet $\\tau_h$ denote a translation operator, meaning that $(\\tau_h f)(x) = f(x - h)$.\nDefine\n\\begin{equation*}\n    \\widetilde \\phi_k(x) = \n    \\begin{dcases}\n        \\lphi_k(x), & 1\\leq k\\leq p, \n        \\\\\n        \\tau_k \\phi(x), & p < k \\leq N - p,\n        \\\\\n        \\tau_1 \\rphi_k(x), & N - p < k \\leq N.\n    \\end{dcases}\n\\end{equation*}\nThen \n\\begin{equation*}\n    \\widetilde V_J = \\spn\\Set[\\big]{\\widetilde \\phi_{J,k} \\given 1\\leq k\\leq N},\n    \\quad\n    \\widetilde \\phi_{J,k}(x) = 2^{J/2} 
\\widetilde \\phi_k(2^J x).\n\\end{equation*}\nFor an example of the supports of the interval basis functions, see \\cref{fig:support}.\n\n\\begin{figure}\n    \\centering\n    \\asyinclude{support.asy}\n    \\caption{The support of the interval scaling functions with $p = 2$ vanishing moments at scale $J = 4$ computed at dyadic rationals of resolution $R = 4$.}\n    \\label{fig:support}\n\\end{figure}\n\nIn \\emph{IntervalWavelets} we want to evaluate the interval scaling functions at the dyadic rationals of a predefined resolution $R$.\nNote that we have an interaction between the scale and the resolution:\n\\begin{equation*}\n    \\widetilde\\phi_{J,k}\\biggl(\\frac n{2^R}\\biggr)\n    = 2^{J/2} \\widetilde \\phi_k\\biggl(2^J \\frac n{2^R}\\biggr)\n    = 2^{J/2} \\widetilde \\phi_k\\biggl(\\frac n{2^{R - J}}\\biggr).\n\\end{equation*}\nThis means that in order to evaluate $\\widetilde\\phi_{J,k}$ at dyadic rationals of resolution $R$ we only need to compute $\\widetilde\\phi_k$ at dyadic rationals of resolution $R - J$.\n\n\\printbibliography\n\n\\end{document}\n\n", "meta": {"hexsha": "f08973ca44c84b973317889b612a2a628e5198ba", "size": 13980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/boundary_wavelets.tex", "max_stars_repo_name": "robertdj/IntervalWavelets.jl", "max_stars_repo_head_hexsha": "93c52c0e0f65d106ade6879355d8d11fd3081bd3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-03-12T11:47:44.000Z", "max_stars_repo_stars_event_max_datetime": "2018-03-12T11:47:44.000Z", "max_issues_repo_path": "doc/boundary_wavelets.tex", "max_issues_repo_name": "robertdj/IntervalWavelets.jl", "max_issues_repo_head_hexsha": "93c52c0e0f65d106ade6879355d8d11fd3081bd3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2019-09-04T16:44:33.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-21T21:22:55.000Z", "max_forks_repo_path": "doc/boundary_wavelets.tex", "max_forks_repo_name": "robertdj/IntervalWavelets.jl", "max_forks_repo_head_hexsha": "93c52c0e0f65d106ade6879355d8d11fd3081bd3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2016-07-12T02:14:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-08T11:03:15.000Z", "avg_line_length": 42.752293578, "max_line_length": 284, "alphanum_fraction": 0.7066523605, "num_tokens": 4856, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5951024185027732}}
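To make the eigenvector-plus-refinement recursion for the interior scaling functions concrete, here is a small Python sketch (an illustration only, not the IntervalWavelets.jl implementation; the function name and the convention that the caller supplies a filter with $\sum_k h_k = \sqrt2$ and support $[0, 2p-1]$ are assumptions).

\begin{verbatim}
import numpy as np

def scaling_function(h, levels=3):
    # Values of phi on the dyadic grid k / 2^levels in [0, N], N = len(h) - 1.
    N = len(h) - 1
    # Eigenproblem at the interior integers:
    # phi(i) = sqrt(2) * sum_j h[2i - j] * phi(j).
    T = np.zeros((N - 1, N - 1))
    for i in range(1, N):
        for j in range(1, N):
            if 0 <= 2 * i - j <= N:
                T[i - 1, j - 1] = np.sqrt(2) * h[2 * i - j]
    w, V = np.linalg.eig(T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])  # eigenvector for eigenvalue 1
    v = v / v.sum()                              # normalize: sum_k phi(k) = 1
    phi = {float(k): val for k, val in zip(range(1, N), v)}
    phi[0.0] = phi[float(N)] = 0.0               # phi vanishes at the endpoints
    # Dyadic refinement via phi(x) = sqrt(2) * sum_k h[k] * phi(2x - k).
    for r in range(1, levels + 1):
        step = 1.0 / 2 ** r
        for n in range(2 ** r * N + 1):
            x = n * step
            if x not in phi:
                phi[x] = np.sqrt(2) * sum(h[k] * phi.get(2 * x - k, 0.0)
                                          for k in range(N + 1))
    return phi

# Example: the standard D4 (p = 2) filter, normalized so sum(h) = sqrt(2).
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
phi = scaling_function(h, levels=4)
\end{verbatim}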
{"text": "\\section{Graph Model of Set Theory}\n\n\\begin{itemize}\n\t\n\t\\item Directed graphs: $ G = \\pair{V}{A} $\n\t\n%\t\\begin{itemize}\n%\t\t\n%\t\t\\item $ V $ is the set of vertices\n%\t\t\n%\t\t\\item $ A \\subseteq V \\times V $ is the set of ordered pairs of vertices representing arrows\n%\t\t\n%\t\\end{itemize}\n\n\t\\item A graph is \\textbf{well-founded} if it has no looping paths and no infinite descending paths\n\t\n\t\\item A graph is \\textbf{extensional} if for any $ v_0, v_1 $ such that $ v_0 $ has the same incoming arrows as $ v_1 $, $ v_0 = v_1 $\n\t\n\t\\item Two graphs are isomorphic if there is a function (\\textit{isomorphism}) $ \\sigma $ between them such that:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item $ \\sigma $ is a bijection (surjection + injection)\n\t\t\n\t\t\\item $ vA_0 u \\leftrightarrow \\sigma(v) A_1 \\sigma(u) $\n\t\t\n\t\\end{itemize}\n\n\t\\item An automorphism is an isomorphism between some graph and itself (the identity is a trivial one)\n\t\n\t\\item $ G $ is a subgraph of $ G' $ if $ V \\subseteq V' $ and $ v_0 A v_1 \\leftrightarrow v_0 A' v_1 $ for all $ v_0, v_1 \\in V $\n\t\n\t\\item $ G $ is \\textit{maximal} in some property $ \\Phi $ if $ G $ possesses $ \\Phi $ and there exists no graph $ G' $ such that:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item $ G' $ possesses $ \\Phi $; and\n\t\t\\item $ G $ is a proper subgraph of $ G' $\n\t\t\n\t\\end{itemize}\n\t\n\t\\item Let $ G $ be a \\textit{maximal} well-founded graph with no non-trivial automorphisms\n\t\n\t\\item Equivalently, $ G $ is a maximal well-founded graph which is extensional\n\t\n\t\\item $ G $ is then an \\textit{intended model} of Set Theory\n\t\n\\end{itemize}\n\n\\section{First Order Logic}\n\n\\begin{itemize}\n\t\n\t\\item Logical symbols: $ \\lnot $, $ \\land $, $ \\lor $, $ \\implies $, $ \\iff $, $ \\exists $, $ \\forall $, $ ( $, $ ) $\n\t\n\t\\item Non-logical symbols: constant symbols ($ a, b, c $), relation symbols ($ P, Q, R $), function symbols ($ f, g, h $)\n\t\n\t\\item Language: $ \\Lang = \\set{a, b, c, \\dots, P, Q, R, \\dots, f, g, h, \\dots}$\n\t\n\t\\item Individual variables will be denoted $ v_1, v_2, \\dots $\n\t\n\t\\item Metavariables / arbitrary variables will be denoted $ x, y, z, \\dots $\n\t\n\t\\item The set of terms, $ Term $ is defined recursively:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item If $ t $ is a constant symbol or individual variable, $ t $ is a term\n\t\t\n\t\t\\item If $ t_1, \\dots, t_n $ are terms and $ f $ is a function symbol with arity $ n $, $ f(t_1, \\dots, t_n) $ is a term\n\t\t\n\t\t\\item Nothing else is a term\n\t\t\n\t\\end{itemize}\n\t\n\t\\item A string $ \\varphi $ is an \\textit{atom} if $ \\varphi = Rt_1, \\dots, t_n $ where $ t_1, \\dots, t_n $ are terms and $ R $ is a relation with arity $ n $\n\t\n\t\\item If $ t $ is a term with no variables occurring in it, $ t $ is a \\textit{closed term}\n\t\n\t\\item The set of Well Formed Formulae $ WFF $ is defined recursively, with atoms as the base case\n\t\n\t\\begin{itemize}\n\t\t\\item If $ \\varphi $ consists of a combination of logical operators over well formed formulae, $ \\varphi \\in WFF $\n\t\\end{itemize}\n\t\n%\t\\item The set of Well Formed Formulae, $ WFF $ is defined recursively:\n%\t\n%\t\\begin{itemize}\n%\t\t\n%\t\t\\item If $ \\varphi $ is an atom, $ \\varphi \\in WFF $\n%\t\t\n%\t\t\\item If $ \\varphi \\in \\set{\\lnot \\psi, \\forall x \\psi, \\exists x \\psi} $ where $ \\psi \\in WFF $ and $ x $ is a variable, $ \\varphi \\in WFF 
$\n%\t\t\n%\t\t\\item If $ \\varphi \\in \\set{(\\psi \\land \\chi), (\\psi \\lor \\chi), (\\psi \\implies \\chi)} $ where $ \\psi, \\chi \\in WFF $, $ \\varphi \\in WFF $\n%\t\t\n%\t\t\\item Nothing else is in $ WFF $\n%\t\t\n%\t\\end{itemize}\n\n\t\\item $ x $ is \\textit{free} in $ \\varphi \\in WFF $ if:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item $ \\varphi = R t_1, \\dots, t_n $ and $ x = t_i $ for some $ i $; or\n\t\t\n\t\t\\item $ \\varphi $ consists of logical operators over well formed formulae, where $ x $ is free in at least one of them; or\n\t\t\n\t\t\\item $ \\varphi $ is a quantification over a formula in which $ x $ is free, and $ x $ is not the bound variable\n\t\t\n\t\\end{itemize}\n\n%\t\\item The variable $ x $ `being free' in $ \\varphi \\in WFF $ is defined recursively:\n%\t\n%\t\\begin{itemize}\n%\t\t\n%\t\t\\item If $ \\varphi = R t_1, \\dots, t_n $ and $ x = t_i $ for some $ i $, then $ x $ is free in $ \\varphi $\n%\t\t\n%\t\t\\item If $ \\varphi \\in \\set{\\forall y \\psi, \\exists y \\psi} $ and $ x $ is free in $ \\psi $ and $ x \\ne y $, $ x $ is free in $ \\varphi $\n%\t\t\n%\t\t\\item If $ \\varphi \\in \\set{\\lnot \\psi, (\\psi \\land \\chi), (\\psi \\lor \\chi), (\\psi \\implies \\chi)} $ and $ x $ is free in $ \\psi $ or $ \\chi $, then $ x $ is free in $ \\varphi $\n%\t\t\n%\t\t\\item Otherwise $ x $ is not free in $ \\varphi $\n%\t\t\n%\t\\end{itemize}\n\n\t\\item If $ \\varphi $ has no free variables, $ \\varphi $ is a sentence, $ \\varphi \\in Sent $\n\t\n\\end{itemize}\n\n\\newpage\n\n\\section{Model Theory}\n\n\\begin{itemize}\n\t\n\t\\item A model $ \\Model $ requires:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item A language $ \\Lang $\n\t\t\n\t\t\\item A domain $ M $ of objects\n\t\t\n\t\t\\item An interpretation:\n\t\t\n\t\t\\begin{itemize}\n\t\t\t\n\t\t\t\\item Constant symbols $ c $ are interpreted by some object from the domain: $ c^\\Model \\in M $\n\t\t\t\n\t\t\t\\item Relation symbols $ R $ (with arity $ n $) are interpreted by a set of tuples of objects within the domain; so $ R^\\Model \\subseteq \\setcomp{\\tuple{m_1, \\dots, m_n}}{m_1, \\dots, m_n \\in M} $\n\t\t\t\n\t\t\t\\item Function symbols $ f $ with arity $ n $ are interpreted by functions taking some $ m_1, \\dots, m_n \\in M$, and returning some $ m \\in M $\n\t\t\t\n\t\t\\end{itemize}\n\t\t\n\t\\end{itemize}\n\n\t\\item A model of $ \\Lang = \\set{a, b, c, \\dots, P, Q, R, \\dots, f, g, h, \\dots} $ would then be:\n\t\\begin{equation*}\n\t\\Model = \\tuple{M, a^\\Model, b^\\Model, c^\\Model, \\dots, P^\\Model, Q^\\Model, R^\\Model, \\dots, f^\\Model, g^\\Model, h^\\Model, \\dots}\n\t\\end{equation*}\n\t\n\t\\item If $ t $ is a closed term of $ \\Lang $ and $ \\Model $ is a model of $ \\Lang $, then the \\textit{denotation} $ t^\\Model $ is:\n\t\n\t\\begin{itemize}\n\t\t\\item If $ t $ is a constant symbol $ c $, $ t^\\Model = c^\\Model $\n\t\t\n\t\t\\item If $ t = f(t_1, \\dots, t_n) $, $ t^\\Model = f^\\Model (t_1^\\Model, \\dots, t_n^\\Model) $\n\t\\end{itemize}\n\n\t\\item A sentence $ \\varphi $ in some language $ \\Lang $ may then be \\textit{true} in $ \\Model $ ($ \\Model \\models \\varphi $)\n\t\n\t\n\t\\item $ \\varphi $ is \\textit{satisfiable} if there exists some model $ \\Model $ in the language $ \\Lang $ of $ \\varphi $ such that $ \\Model \\models \\varphi $\n\t\n\t\\item $ \\varphi $ is \\textit{valid} ($ \\models \\varphi$) if for every model $ \\Model $ in the language $ \\Lang $ of $ \\varphi $, $ \\Model \\models \\varphi $\n\t\n\t\\item Given $ \\Gamma \\subseteq 
Sent_\\Lang $ and $ \\varphi \\in Sent_\\Lang $, we say $ \\varphi $ \\textit{is a consequence of} $ \\Gamma $ ($ \\Gamma \\models \\varphi $) if for every model $ \\Model $ of $ \\Lang $, if $ \\Model \\models \\gamma $ for all $ \\gamma \\in \\Gamma $, then $ \\Model \\models \\varphi $\n\t\n\t\\item $ \\varphi $ is \\textit{derivable} from $ \\Gamma $ ($ \\Gamma \\vdash \\varphi $) if:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item (Ax) $ \\varphi $ is an axiom of first order logic\n\t\t\n\t\t\\item (Ass) $ \\varphi \\in \\Gamma $\n\t\t\n\t\t\\item (MP) $ \\Gamma \\vdash \\psi \\implies \\varphi $ and $ \\Gamma \\vdash \\psi $\n\t\t\n\t\t\\item (UG) If $ \\Gamma \\vdash \\varphi $, then $ \\Gamma \\vdash \\forall y \\varphi(y \\mapsto x) $ when:\n\t\t\n\t\t\\begin{itemize}\n\t\t\t\\item $ x $ is not free in any formula in $ \\Gamma $; and\n\t\t\t\\item either $ y = x $, or $ y $ is not free in $ \\varphi $\n\t\t\\end{itemize}\n\t\t\n\t\\end{itemize}\n\n\t\\item A model can describe a structure \\textit{externally} (as in the graph model), or \\textit{internally} using sentences true in the intended model\n\t\n\t\\item A \\textit{theory} $ \\Gamma $ is a set of sentences closed under consequence (i.e. $ \\Gamma \\vdash \\varphi \\implies \\varphi \\in \\Gamma $)\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item It is \\textit{consistent} if there is no sentence $ \\varphi $ such that $ \\Gamma \\vdash \\varphi \\land \\lnot \\varphi $\n\t\t\n\t\t\\item It is \\textit{complete} if for all $ \\varphi $ we have either $ \\Gamma \\vdash \\varphi $ or $ \\Gamma \\vdash \\lnot \\varphi $\n\t\t\n\t\t\\item It is \\textit{categorical} if there is exactly one model $ \\Model $ (up to isomorphism) such that $ \\Model \\models \\Gamma $\n\t\t\n\t\\end{itemize}\n\t\n\t\\item A theory is \\textbf{algebraic} if it has multiple intended models, and \\textbf{non-algebraic} if it has one unique intended model\n\t\n\\end{itemize}\n\n\\newpage", "meta": {"hexsha": "d52043fdc1e83b6ea1c01407a6321c3d21bfa69d", "size": 8102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MATH3306/set_theory/intro_material.tex", "max_stars_repo_name": "mcoot/CourseNotes", "max_stars_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MATH3306/set_theory/intro_material.tex", "max_issues_repo_name": "mcoot/CourseNotes", "max_issues_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MATH3306/set_theory/intro_material.tex", "max_forks_repo_name": "mcoot/CourseNotes", "max_forks_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.1651376147, "max_line_length": 300, "alphanum_fraction": 0.6219451987, "num_tokens": 2845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5951024180147968}}
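For finite graphs the two key properties from the first section are mechanically checkable: well-foundedness reduces to acyclicity (a finite graph has an infinite descending path iff it has a looping one), and extensionality says that no two distinct vertices have the same incoming arrows. A small Python sketch under those definitions (the edge-list representation and function names are my own assumptions):

\begin{verbatim}
def is_well_founded(V, A):
    # Finite case: no looping/infinite descending paths <=> acyclic.
    # Kahn's algorithm: repeatedly delete vertices with no incoming arrows.
    indeg = {v: 0 for v in V}
    for (_, w) in A:
        indeg[w] += 1
    stack = [v for v in V if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for (u, w) in A:
            if u == v:
                indeg[w] -= 1
                if indeg[w] == 0:
                    stack.append(w)
    return removed == len(V)

def is_extensional(V, A):
    # No two distinct vertices with the same set of incoming arrows.
    preds = [frozenset(u for (u, w) in A if w == v) for v in V]
    return len(set(preds)) == len(V)

# Membership graph of the ordinals 0, 1, 2 (arrow u -> v iff u is in v):
V = {"0", "1", "2"}
A = {("0", "1"), ("0", "2"), ("1", "2")}
assert is_well_founded(V, A) and is_extensional(V, A)
\end{verbatim}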
{"text": "\\section{Unsupervised text classification}\n\\label{uns-class}\n\n\\paragraph{Text classification}\nThe problem of associating texts, so documents, to classes, represented by labels.\nClasses may be a partition of the document space, or they can overlap.\nThe approaches to solve the problem are basically two, pre-trained models, \nso supervised, or based only on documents, so unsupervised.\n\nDepending on the number of classes and how the cover the document space, we define \nfour tasks\n\\begin{center}\n    \\begin{tabular}{c | c |c}\n            & 2 classes & Many classes\\\\\n            \\hline\n            Disjoint classes & Binary classification & Multi-class classification\\\\\n            \\hline\n            Overlapping classes & Soft binary classification & Multi-label classification\\\\\n    \\end{tabular}\n\\end{center}\n\nNote that ranking can be seen as soft binary classification.\n\n\\subsection{Clustering}\n\n\\paragraph{Goal}\nClustering is the problem of grouping objects in clusters, in particular\ngiven a distance metric, \nwe want to minimize the distance within a cluster and maximize the one\nbetween different clusters.\n\n\\paragraph{Parameters}\nChoosing the right number of clusters is pretty difficult, a high number \nleads to consistency inside clusters, but poor aggregation, while a low number \nincreases aggregation but reduce quality of the single clusters.\n\n\\paragraph{Types of clustering}\nThere are many approaches to clustering, one could for instance \nconsider flat or hierarchical clustering, in the first one there is no structure, where \nin the second clusters might contain other clusters. \nThere is also a difference in cluster assignment, one could assign in an hard or soft way.\n\n\\subsubsection{Support to document retrieval}\n\n\\paragraph{Terminological analysis}\nAfter computing a clustering of documents we could analyze terminology \nof documents of the same cluster to extract relevant terms, this \ninformation can then be used for query expansion.\n\n\\paragraph{Cluster pruning}\nBy selecting representative documents for clusters, we can compare a query \nonly to those documents. After finding a match all the documents in the cluster \nare returned.\n\n\\subsubsection{Evaluation}\nGiven $\\Omega = \\{ \\omega_1, \\dots, \\omega_n\\}$ the set of clusters \nand $\\mathbb{C} = \\{ c_1, \\dots, c_n\\}$ the set of classes, we can evaluate \nthe clustering by focusing on pairs of documents.\n\nLet's consider the pair $(x_i, x_j)$, we ask ourselves if \n$$\\exists \\omega_k : (x_i, x_j) \\in \\omega_k\\;\\;\\exists c_z : (x_i, x_j) \\in c_z$$\nThey represent the clustering and the real world scenario.\nIf the two are true, we find a true positive, if they are both false a true negative.\nIn the case the first one is true and the second is not, we have a false positive, \na false negative otherwise.\n\nWe can observe that choosing a small number of clusters leads to a large \nquantity of false positives, while a fine grain partition, i.e. a high number \nof clusters, produces a high number og false negatives.\n\n\\paragraph{Measures}\nWe can use some measures of quality in clustering, \nfor instance \n$$\\mathit{Rand} = \\frac{\\mathit{TP} + \\mathit{TN}}{\\mathit{TP} + \\mathit{TN} + \\mathit{FP} + \\mathit{FN}}$$\nbut also \\emph{Purity} and \\emph{Normalized Mutual Information}.\n\n\\subsection{K-means}\nThis approach starts with $k$ random centroids and assign each data point \nto the closets one. 
We then recompute the centroids with the current clusters \nand repeat the first step until termination.\n\nThe termination criterion could be a fixed number of iterations, no change \nin document assignments, or the \\emph{RSS} measure falling below a threshold:\n$$\\mathit{RSS} = \\sum_{k=1}^K\\sum_{\\vec{x} \\in \\omega_k}|\\vec{x} - \\vec{\\mu}(\\omega_k)|^2$$\nwhere the measure is the sum of the squared distances between each point and its centroid.\nNote that at each iteration the \\emph{RSS} decreases.\n\nTwo issues of this approach are that the initial positions of the centroids\nstrongly affect the results, and that the parameter $k$ is crucial to the \nalgorithm.\nA good value for $k$ is $K^* = \\arg\\min_K(\\min \\mathit{RSS}_K + \\lambda K)$, \nwhere $\\min \\mathit{RSS}_K$ is the best \\emph{RSS} achievable with $K$ clusters.\n\n\\subsection{Model-based clustering}\nA k-means generalization is obtained by interpreting the centroids as a model \nthat generates data. A centroid with some added noise can generate a document.\n\n\\paragraph{Idea}\nInstead of generating classes, we start from the points: \nwe assume that they were generated by a generative model, \nand we estimate the latent model parameters.\n\n\\paragraph{Latent parameters}\n$\\Theta = \\{ \\vec{\\mu}_1, \\dots, \\vec{\\mu}_n\\}$ are the centroids \nto be found by k-means.\n\n$L(D | \\Theta)$ is the log-likelihood that the data $D$ was generated by $\\Theta$; \nthis is the objective function, so the estimate $\\Theta^*$ is:\n$$\\Theta^* = \\arg\\max_{\\Theta} L(D|\\Theta) = \\arg\\max_\\Theta\\sum_{n=1}^N \\log P(d_n | \\Theta)$$\n\nThat means maximizing the sum of the log-probabilities that the documents are generated by \nthe model. We are in a soft clustering scenario, as we are \nbasically dealing with probabilities\nas cluster assignments.\n\n\\subsubsection{Expectation Maximization algorithm}\nThis is an iterative algorithm that maximizes $L(D|\\Theta)$, but can also be \nused to find latent models in a variety of applications.\n\nLet's consider a multivariate Bernoulli distribution for the data, \nso all documents are binary vectors; \nwe want to estimate the probability that a given document is assigned to a\nspecific cluster given the model parameters.\n$$P(d | \\omega_k; \\Theta) = \\prod_{t_m \\in d}q_{mk} \n\\cdot \\prod_{t_m \\notin d}(1-q_{mk})\\;\\;\\;\\Theta_k = (\\alpha_k, q_{1k}, \\dots, q_{Mk})$$\nwhere $q_{mk} = P(U_m =1 | \\omega _k)$ is the probability that a document from the cluster \n$\\omega_k$ contains the term $t_m$.\n\n$\\alpha_k$ is the prior of the cluster $k$, i.e.\\ the probability of a document \nbeing in that cluster in the absence of any other information.\n\n\\paragraph{Process}\nWe generate a document by picking a cluster $\\omega_k$ with probability \n$\\alpha_k$ and then generating its terms with probabilities $q_{mk}$. 
\n\nWe need to estimate the two parameters, and we do that iteratively similarly to k-means.\nThe iteration is composed by two steps, \\emph{Maximization} and \\emph{Expectation}.\n\n\\paragraph{Maximization step}\nThe goal here is to estimate the parameters of the model, so the prior and the \nterms probability given the cluster:\n$$q_{mk} = \\frac{\\sum_{n=1}^N r_{nk}I(t_m \\in d_m)}{\\sum_{n=1}^N r_{nk}}\n\\;\\;\\; \\alpha_k = \\frac{\\sum_{n=1}^N r_{nk}}{N}$$\nwhere $I$ can be one or zero wether the term appears or not in the \ndocument, and $r_{nk}$ is the soft assignment of the document $n$ to the \ncluster $k$.\n\n\n\\paragraph{Expectation step}\nWe now estimate the probability of each term to be assigned to a cluster.\n$$r_{nk} = \\frac{\\alpha_k(\\prod_{t_m \\in d_n}q_{mk})(\\prod_{t_m \\notin d_n}(1-q_{mk}))}\n{\\sum_{k=1}^K\\alpha_k(\\prod_{t_m \\in d_n}q_{mk})(\\prod_{t_m \\notin d_n}(1-q_{mk}))}$$\nthis is basically, for each cluster, the prior, multiplied by the product of $q_{mk}$ or $1- q_{mk}$ for each \nterm in the document\ncomputed before, \nnormalized by the same quantity over all the clusters.\n\n\\paragraph{Why do EM works?}\nThe main idea of the algorithm is to push documents that contains the same \nwords in the same cluster. Basically if two words appears in a document \nthey should have a high probability of being assigned to the same cluster, \nso they have the same $q_{mk}$. This leads to assigning documents with the \nsame words in the same clusters.\n\n\\subsection{Affinity propagation clustering}\n\n\\paragraph{Input}\nThe input is a similarly matrix, for instance $s(i,j) = - ||\\vec{i} - \\vec{j}||^2$.\nThere are also some special values for the diagonal of the matrix, \nwhere larger values are more likely to be chosen as exemplars for \nclusters.\n\n\\paragraph{Idea}\nDocuments exchange messages to decide exemplars and clusters assignments.\nThis approach is similar to \\emph{page rank with authorities} \\footnote{\\url{https://nlp.stanford.edu/IR-book/html/htmledition/hubs-and-authorities-1.html}}. \n\n\\paragraph{Responsibility messages}\nA message $r(i,k)$ that denotes how well $k$ is an exemplar for $i$.\nAn example can be\n$$r(i, k) = s(i, k) - \\max_{k^\\prime s.t.k^\\prime \\neq k} \\{ a(i, k^\\prime) + s(i, k^\\prime)\\}$$\nbasically we take the similarity of a pair of nodes and subtract the maximum \nsimilarity for any other document. This value will be zero for the most similar \ndocument, and negative for all other, as the quantity $a(i, k^\\prime)$ is initially zero.\n\n\\paragraph{Availability}\nThis message $a(i, k)$ represent how appropriate is for $i$ to choose \n$k$ as exemplar.\n$$a(i, k) = \\min \\bigg\\{ 0, r(k, k) + \\sum_{i^\\prime s.t.i^\\prime \\notin \\{i, k\\}} \\max\\{0, r(i^\\prime, k)\\}\\bigg\\}$$\n\n\\paragraph{Self availability}\nThe availability measure given the same document as argument, is \n$$a(k, k) = \\sum_{i^\\prime s.t.i^\\prime \\neq k} \\max\\{0, r(i^\\prime, k)\\}$$\n\n\\paragraph{Process}\nAt first, documents are connected by similarity, then we recompute the availability \nwith respect to the responsibility and vice versa. This leads to making documents \nconnections stronger and stronger and to the election of an exemplar for clusters.\nAfter some iterations convergence is reached.\n\n\\subsection{Hierarchical clustering}\nThis approach produces a hierarchy of clusters. It does not need the number of clusters \nbut a criterion of optimal cluster selection. 
Most of these algorithms are deterministic.\n\n\\paragraph{Process}\nStarting from a similarity matrix for the documents, we take the pair \nof documents with maximum similarity and group them into the same cluster. \nOne idea is to have documents as nodes, and to add a cluster node connected \nto the two most similar documents.\n\nThe next step is to delete the rows and columns of the two selected documents, \nas they are already assigned, and to add a new row and column \nfor the cluster node.\nThe similarity value between a document and a cluster can be the minimum similarity \nbetween the document and the cluster's members, but many strategies can be applied.\n\nAfter rebuilding the table, we repeat the initial step. \nNote that if a cluster and a document are selected as the most similar items, \nwe add a cluster node connected to both, so the cluster \nobtained earlier becomes a sub-cluster of the new one.\n\nThe last step is to select the number of clusters.\nThe previous procedure created a tree, as we grouped documents and clusters by adding \nnodes connected to them.\nWe can choose a threshold for the intra-cluster similarity, or select a certain \nnumber of clusters, or consider the maximum jump in similarity in the obtained cluster\ntree and cut the tree there. By cutting the tree, we keep the lower part, \nthe one with the leaves.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{images/hierarchical-clustering.png}\n    \\caption{Hierarchical clustering example with 4 clusters}\n\\end{figure}\n\n\\paragraph{Top-down clustering}\nThis approach starts from a cluster containing all the documents, and\nrecursively splits it, keeping track of the parent of each document. \nThe process stops when a threshold of intra-cluster similarity is reached \nor when there is a cluster for each document.\nThis approach is the opposite of the bottom-up tree creation.\n", "meta": {"hexsha": "ae6518e9d943191a3802d3dd15e0a7a5162541fc", "size": 11260, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-courses/information-retrieval/chapters/unsupervised_text_classification.tex", "max_stars_repo_name": "marcodb97/unimi-notes", "max_stars_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "old-courses/information-retrieval/chapters/unsupervised_text_classification.tex", "max_issues_repo_name": "marcodb97/unimi-notes", "max_issues_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "old-courses/information-retrieval/chapters/unsupervised_text_classification.tex", "max_forks_repo_name": "marcodb97/unimi-notes", "max_forks_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-09T08:24:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T08:24:02.000Z", "avg_line_length": 46.9166666667, "max_line_length": 158, "alphanum_fraction": 0.7542628774, "num_tokens": 2820, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7853085758631159, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5951024142071512}}
{"text": "\n\\chapter{More Examples}\n\\label{chap:more-examples}\n\nIn addition to the examples already covered in this text, the \\holn{}\ndistribution comes with a variety of instructive examples in the\n\\verb|examples| directory.  There the following examples (among\nothers) are to be found:\n\n\\begin{description}\n\n\\item [\\tt autopilot.sml]\n\n  This example is a \\holn{} rendition (by Mark Staples) of a PVS\n  example due to Ricky Butler of NASA. The example shows the use of\n  the record-definition package, as well as illustrating some aspects\n  of the automation available in \\holn{}.\n\n\\item [\\tt bmark]\n\n  In this directory, there is a standard HOL benchmark: the proof of\n  correctness of a multiplier circuit, due to Mike Gordon.\n\n\\item [\\tt euclid.sml]\n\n  This example is the same as that covered in\n  Chapter~\\ref{chap:euclid}: a proof of Euclid's theorem on the\n  infinitude of the prime numbers, extracted and modified from a much\n  larger development due to John Harrison. It illustrates the\n  automation of \\HOL{} on a classic proof.\n\n\\item[\\tt ind\\_def]\n\nThis directory contains some examples of an inductive definition package\nin action. Featured are an operational semantics for a small imperative\nprogramming language, a small process algebra, and combinatory logic\nwith its type system. The files were originally developed by Tom Melham\nand Juanito Camilleri and are extensively commented.  The last is the\nbasis for Chapter~\\ref{chap:combin}.\n\nMost of the proofs in these theories can now be done much more easily by\nusing some of the recently developed proof tools, namely the simplifier\nand the first order prover.\n\n\\item [\\tt fol.sml]\n\n  This file illustrates John Harrison's implementation of a\n  model-elimination style first order prover.\n\n\\item[\\tt lambda]\n\nThis directory develops theories of a ``de Bruijn'' style lambda calculus,\nand also a name-carrying version. (Both are untyped.) The development\nis a revision of the proofs underlying the paper\n{\\it ``5 Axioms of Alpha Conversion'',\n            Andy Gordon and Tom Melham,\n            Proceedings of TPHOLs'96, Springer LNCS 1125}.\n\n\\item[\\tt parity]\n\n  This sub-directory contains the files used in the parity example of\n  Chapter~\\ref{parity}.\n\n\\item [\\tt MLsyntax]\n\n  This sub-directory contains an extended example of a facility for\n  defining mutually recursive types, due to Elsa Gunter of Bell Labs.\n  In the example, the type of abstract syntax for a small but not\n  totally unrealistic subset of ML is defined, along with a simple\n  mutually recursive function over the syntax.\n\n\\item[\\tt Thery.sml]\n\n  A very short example due to Laurent Th\\'ery, demonstrating a cute\n  inductive proof.\n\n\\item[\\tt RSA]\n\n       This directory develops some of the mathematics underlying\n       the RSA cryptography scheme. 
The theories have been\n       produced by Laurent Th\\'ery of INRIA Sophia-Antipolis.\n\n\\end{description}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"tutorial\"\n%%% End:\n", "meta": {"hexsha": "0b2083cfb14143aa07547bb47e7ef578358bb56e", "size": 2961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Manual/Tutorial/more-examples.tex", "max_stars_repo_name": "dwRchyngqxs/HOL", "max_stars_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 492, "max_stars_repo_stars_event_min_datetime": "2015-01-07T16:36:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T22:18:48.000Z", "max_issues_repo_path": "Manual/Tutorial/more-examples.tex", "max_issues_repo_name": "dwRchyngqxs/HOL", "max_issues_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 759, "max_issues_repo_issues_event_min_datetime": "2015-01-01T00:40:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T17:33:39.000Z", "max_forks_repo_path": "Manual/Tutorial/more-examples.tex", "max_forks_repo_name": "dwRchyngqxs/HOL", "max_forks_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 126, "max_forks_repo_forks_event_min_datetime": "2015-02-17T03:20:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T00:42:55.000Z", "avg_line_length": 32.9, "max_line_length": 74, "alphanum_fraction": 0.756501182, "num_tokens": 701, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8354835432479661, "lm_q1q2_score": 0.5950582689031937}}
{"text": "\\section{Probabilistic Particle Tracing Model}\n\nIn order to analyze the uncertain flow field by particle tracing, in this section we introduce our probability-based particle tracing model. First, we describe the global modeling of streamlines and how to estimate the trace distributions for a given uncertain data. Then local distributions used by the global model are detailed.\n\n\\subsection{Global Modeling}\n\nA single streamline $L$ originating from position ${x_0}$ can be modeled as $L = \\{ {x_0},{x_1},...,{x_n}\\} = {x_{0:n}}$, where $x_t$ refers to a position in $\\mathrm{R}^d$. As mentioned by Otto et al. in~\\cite{Otto10a, Otto11a}, conventional streamline integration methods such as RK4 are not well defined for uncertain vector fields, since there is no unique vector direction at a location ${x_t}$. Therefore, as with most previous methods~\\cite{Otto10a, Otto11a}, we make use of the Euler integration model in this paper:\n\\begin{equation}\n  {x_{t + 1}} = {x_t} + {v_t}\\Delta t\n\\end{equation}\nwhere ${v_t}$ and $\\Delta t$ refer to the vector direction and the step size at step $t$. Since we focus on generating streamlines from steady vector fields in this work, it is safe to ignore the magnitude of the vectors. And if we set the step size $\\Delta t$ as a constant, we can represent the streamline by a sequence of vector directions ${L = v_{0:n}}$, since the streamline trajectory only depends on the propagation directions $v_{0:n}$.\n\nFor the uncertain vector fields, there is no unique streamline $v_{0:n}$ for a given starting point $x_0$. For all possible streamlines which originate from $x_0$ given the uncertain data $\\mathcal{H}$, we can define a probability density function (pdf) over the path space, which is:\n\\begin{equation}\n  p(v_{0:n}|\\mathcal{H})\n\\end{equation}\nwhere $\\mathcal{H}$ is a set of observations in distributions from the uncertain data along the streamline trajectory. Here, we denote the distribution obtained at the starting point $x_t$ of a vector $v_t$ as $\\lambda_t=\\mathcal{H}(v_t)$. By applying the Bayes theorem, the target distribution $p({v_{0:n}}|{\\lambda_{0:n}})$ can be represented by the prior density $p({v_{0:n}})$ and the conditional observation density $p({\\lambda_{0:n}}|{v_{0:n}})$, as:\n\\begin{equation}\n  p({v_{0:n}}|{\\lambda_{0:n}}) = \\frac{{p({v_{0:n}})p({\\lambda_{0:n}}|{v_{0:n}})}}{{p({\\lambda_{0:n}})}}\n\\end{equation}\nwhere ${p({\\lambda_{0:n}})}$ is a normalizing constant for a fixed data realization, which equals to $\\int {p({v_{0:n}},{\\lambda_{0:n}})} d{v_{0:n}}$.\n\nScientific simulations commonly represent physical phenomena as continuous functions. Thus, the sequence of vector directions $v_{0:n}$ along the particle trace should be correlated. This constraint can be modeled as a conditional prior density $p({v_t}|{v_{0:t - 1}})$. In this paper, we assume the sequence $v_{0:n}$ forms a Markov chain, which means the vector direction $v_t$ only depends on the previous direction $v_{t-1}$, but not on $v_{t-2},...,v_0$; so:\n\\begin{equation}\n  p({v_t}|{v_{0:t - 1}}) = p({v_t}|{v_{t - 1}})\n\\end{equation}\nwhere $p({v_t}|{v_{t - 1}})$ denotes the probability density associated with the transition from $v_{t - 1}$ to $v_t$. 
Hence, the probability density for a given streamline can be formulated as:\n\\begin{equation}\n  p({v_{0:n}}) = p({v_0})\\prod\\limits_{t = 1}^n {p({v_t}|{v_{t - 1}})}\n\\end{equation}\nwhere $p(v_0)$ can be defined by a uniform distribution, since no prior knowledge is applied.\n\nBy measuring the observations $\\lambda_{0:n}$ along a given streamline $v_{0:n}$, we can get the conditional observation density $p({\\lambda_{0:n}}|v_{0:n})$, which defines a measure of how the observations match the given path. In other words, the observation density gives how likely the distributions $\\lambda_{0:n}$ are to be observed if the given streamline $v_{0:n}$ actually exists in the flow field. Likewise, we assume that the observation measured at a point does not depend on any previous points in the trace, i.e.:\n\\begin{equation}\n  p(\\lambda_t|v_{0:t}) = p({\\lambda_t}|{v_t})\n\\end{equation}\nwhich defines the likelihood density:\n\\begin{equation}\n  p({\\lambda_{0:n}}|{v_{0:n}}) = \\prod\\limits_{t = 0}^n {p({\\lambda_t}|{v_t})}\n\\end{equation}\n\nBy substituting (5) and (7) into (3), the posterior density $p({v_{0:n}}|{\\lambda_{0:n}})$ can be expanded as:\n\\begin{equation}\n  p({v_{0:n}}|{\\lambda_{0:n}}) = \\frac{{p({v_0})\\prod\\limits_{t = 1}^n {p({v_t}|{v_{t - 1}})} \\prod\\limits_{t = 0}^n {p({\\lambda_t}|{v_t})} }}{{p({\\lambda_{0:n}})}}\n\\end{equation}\n\nSince $p({v_{0:n}}|{\\lambda_{0:n}})$ is high-dimensional, non-standard, and only known up to a proportionality constant, it is infeasible to evaluate in closed form. Therefore, a Monte Carlo based method is needed to approximate the target distribution. In this case, particle filtering~\\cite{doucet2001sequential}, one of the sequential Monte Carlo methods, is suitable for approximating the target distribution iteratively. We first put a number of weighted particles at the seed position and update them iteratively. At each iteration, the vector directions along which the particles will propagate are sampled from an importance function. Then the weight of each particle can be updated based on the importance sampling. Furthermore, the particles can be resampled based on their weights. In the end, a set of weighted particle traces is obtained to represent the target posterior distribution, from which the most likely trace can be chosen by maximum a posteriori (MAP) estimation.\n\n\\subsection{Local Modeling}\n\nBased on the particle filtering algorithm, the prior density $p({v_t}|{v_{t - 1}})$, the observation density $p({\\lambda_t}|{v_t})$, and an importance function need to be modeled and estimated. In this section, we elaborate in detail how to model and estimate these local densities.\n\n\\noindent\\textbf{Prior Density.} As mentioned before, the prior density is used to characterize the correlation between two adjacent vector directions. In this work, we use a prior density which prefers to continue in the previous direction and assigns decreasing probability to sharper turns. As presented by Zhang et al. 
in~\\cite{Zhang20095}, we select the von Mises-Fisher distribution~\\cite{fisher} as the prior density due to its mathematical simplicity and tractability.\n\nFor a random $d$-dimensional unit vector $v$ on the $(d-1)$-dimensional sphere, with respect to the mean direction $\\mu$ and the concentration parameter $\\kappa$, the probability density function of the von Mises-Fisher distribution is given by\n\\begin{equation}\n  f_{d}(v| \\mu, \\kappa)=C_{d}(\\kappa)\\exp \\left( {\\kappa \\mu^T v } \\right)\n\\end{equation}\nwhere the normalization constant $C_{d}(\\kappa)\\,$ is\n\\begin{equation}\n  C_{d}(\\kappa)=\\frac {\\kappa^{d/2-1}} {(2\\pi)^{d/2}I_{d/2-1}(\\kappa)} \\,\n\\end{equation}\nin which $I_{d/2-1}$ denotes the modified Bessel function of the first kind and order $d/2-1$.\n\nThe parameter $\\kappa$ controls the concentration of the distribution around the mean direction $\\mu$. Figure~\\ref{fisher} gives examples of points sampled from 2-dimensional von Mises-Fisher distributions with different $\\kappa$.\n\n\\begin{figure}[htb]\n  \\centering\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.8in]{../figures/vf_100.eps}\n    \\caption{$\\kappa=100$}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.8in]{../figures/vf_10.eps}\n    \\caption{$\\kappa=10$}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.8in]{../figures/vf_1.eps}\n    \\caption{$\\kappa=1$}\n  \\end{subfigure}\n  \\caption{Points sampled from three von Mises-Fisher distributions on $2$-dimensional spheres with different values of $\\kappa$. The mean directions are shown as arrows.}\n  \\label{fisher}\n\\end{figure}\n\nIn this work, the mean direction $\\mu$ of the prior density is given by the previous vector direction ${v_{t - 1}}$. The concentration parameter $\\kappa$ is set manually to a constant value; in this work $\\kappa = 60$ is used. For more turbulent flow fields, a larger $\\kappa$ can be used.\n\n\\noindent\\textbf{Observation Density.} We make use of the observation density to characterize local uncertainties; it defines a likelihood function that measures how well observations match the current prior model. At step $t$, for a given vector direction $v_t$, the observation density $p({\\lambda_t}|{v_t})$ is a conditional density representing how likely the observation $\\lambda_t$ measured at position $x_t$ is, given the vector $v_t$. The observation density $p({\\lambda_t}|{v_t})$ is estimated based on the method presented by Friman et al. in~\\cite{frimanTMI06}. We first model the observation that matches the current state $v_t$ perfectly as a distribution $\\mu_t$ in which the probability of $v_t$ equals $1$. We then treat $\\lambda_t$ as a histogram and let the probability $\\lambda_t(i)$ of bin $i$ be an uncertain observation of $\\mu_t(i)$, i.e. $\\log{\\mu_t(i)}=\\log{\\lambda_t(i)}+\\epsilon$. The noise $\\epsilon$ can be modeled as an additive Gaussian~\\cite{Basser1994,Salvador05}, such as $\\epsilon \\sim N(0,{\\sigma ^2}/{\\mu _t}{(i)^2})$. 
Then the observation density, or the likelihood, can be written as\n\\begin{equation}\n  p({\\lambda_t}|{v_t}) = \\prod\\limits_{i = 0}^N {\\frac{{{\\mu _t}(i)}}{{\\sqrt {2\\pi {\\sigma ^2}} }}} {e^{ - \\frac{{{\\mu _t}{{(i)}^2}}}{{{2\\sigma ^2}}}{{(\\log{{\\lambda _t}(i)} - \\log{{\\mu _t}(i)})}^2}}}\n\\end{equation}\n\n\\noindent\\textbf{Importance Density.} A common approach is to use the prior density itself as the importance function, which is known as the bootstrap filter or condensation algorithm. However, such an importance function is not always effective, since it uses no observation information. As a result, the sampled particles are often outliers of the posterior distribution. Therefore, the observation density is used to determine $v_t$. Since evaluating the observation density function is expensive, and the particle filtering method does not require the actual observation density to be used as the importance density, we choose the observed distribution $\\lambda_t$ as the importance density function.\n", "meta": {"hexsha": "a8b5f5648428425ec317f6e9e8c0e4f2f539c821", "size": 10345, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/pvisnotes2016/draft-method.tex", "max_stars_repo_name": "hewenbin/pspf", "max_stars_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/pvisnotes2016/draft-method.tex", "max_issues_repo_name": "hewenbin/pspf", "max_issues_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/pvisnotes2016/draft-method.tex", "max_forks_repo_name": "hewenbin/pspf", "max_forks_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 107.7604166667, "max_line_length": 1185, "alphanum_fraction": 0.7367810536, "num_tokens": 2855, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835330070838, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.595058266712616}}
{"text": "\\documentclass{article}\n%\\usepackage{mathrsfs}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\n% table padding\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{document}\n\n\\title{Set theory}\n\n\\section{Concepts}\n\n\\begin{itemize}\n  \\item Set\n  \\item Element\n  \\item Subset\n  \\item Superset\n\\end{itemize}\n\n\\section{Special sets}\n\n\\begin{itemize}\n  \\item Universal set: U\n  \\item Empty set: $\\emptyset$\n  \\item Natural numbers: $\\mathbb{N}$\n  \\item Integer: $\\mathbb{Z}$\n  \\item Rational numbers: $\\mathbb{Q}$\n  \\item Real numbers: $\\mathbb{R}$\n  \\item Complex numbers: $\\mathbb{C}$\n\\end{itemize}\n\n\\section{Set operations}\n\n\\subsection{Union and interception}\n\n\\subsubsection{Union}\n$$\nA \\cup B = \\{ x \\mid x \\in A \\lor x \\in B \\}\n$$\n\n\\subsubsection{Interception}\n$$\nA \\cap B = \\{ x \\mid x \\in A \\land x \\in B \\}\n$$\n\nProperties:\n\\begin{itemize}\n  \\item $A \\cap B \\subseteq A, A \\cap B \\subseteq B$\n  \\item $A \\subseteq A \\cup B, B \\subseteq A \\cup B$\n\\end{itemize}\n\nTheorem 1.3: For any sets A and B, we have:\n\\begin{gather*}\nA \\cap B \\subseteq A \\subseteq A \\cup B \\\\\nA \\cap B \\subseteq B \\subseteq A \\cup B\n\\end{gather*}\n\nTheorem 1.4: The followings are equivalent:\n$$\nA \\subseteq B, A \\cap B = A, A \\cup B = B\n$$\n\n\\subsection{Complement}\n\n\\subsubsection{Absolute complement}\n\n$$\nA^c = \\{ x \\mid x \\in \\text{U}, x \\not\\in A \\}\n$$\n\n\\subsubsection{Relative Complement}\n\nThe relative complement of A and B, donoted by $A\\setminus B$, is:\n$$\nA\\setminus B = \\{ x \\mid x \\in A, x \\not\\in B \\}\n$$\n\n\\subsubsection{Symmetric difference}\n\nSymmetric difference of sets A and B, denoted by $A \\oplus B$, is:\n$$\nA \\oplus B = \\{ x \\mid (x \\in A \\land x \\not\\in B) \\lor (x \\in B \\land x \\not\\in A) \\}\n$$\n\nThat is:\n\\begin{gather*}\nA \\oplus B = (A \\cup B)\\setminus(A \\cap B) \\\\\nA \\oplus B = (A \\setminus B) \\cup (B \\setminus A)\n\\end{gather*}\n\n\\subsection{Fundamental products}\n\nConsider $n$ distinct sets $A_1, A_2, \\cdots, A_n$,\na fundanmental product is a set of form\n\n$$\nA_1^* \\cap A_2^* \\cap \\cdots \\cap A_n^* (A_i^* = A_i \\lor A_i^* = A_i^c)\n$$\n\n\\section{Algebra of sets, duality}\n\nTheorem 1.5: Sets satisfy the laws in table:\n\\begin{table}\n\\caption{Laws of the algebra of sets}\n\\label{Table: 1-1}\n\\begin{tabular}{l | l | l}\n  \\hline\n  Idempotent laws  & $A \\cup A = A$ & $A \\cap A = A$ \\\\\n  \\hline\n  Associative laws & $(A \\cup B) \\cup C = A \\cup (B \\cup C)$ & $(A \\cap B) \\cap C = A \\cap (B \\cap C)$ \\\\\n  \\hline\n  Commutative laws & $A \\cup B = B \\cup A$ & $A \\cap B = B \\cap A$ \\\\\n  \\hline\n  Distributive laws\n  & $A \\cup (B \\cap C) = (A \\cup B) \\cap (A \\cup C)$\n  & $A \\cap (B \\cup C) = (A \\cap B) \\cup (A \\cap C)$ \\\\\n  \\hline\n  Identity laws & $A \\cup \\emptyset = A$      & $A \\cap \\text{U} = A$ \\\\\n                & $A \\cup \\text{U} = \\text{U}$ & $A \\cap \\emptyset = \\emptyset$ \\\\\n  \\hline\n  Involution laws & $(A^c)^c = A$ & \\\\\n\n  \\hline\n  Complement laws & $A \\cup A^c = \\text{U}$ & $A \\cap A^c = \\emptyset$ \\\\\n                  & $\\text{U}^c = \\emptyset$ & $\\emptyset^c = \\text{U}$ \\\\\n  \\hline\n  DeMorgan's laws & $(A\\cup B)^c = A^c \\cap B^c$ & $(A\\cap B)^c = A^c \\cap B^c$\\\\\n  \\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Venn diagrams}\n\n\\end{document}\n", "meta": {"hexsha": 
"4225c8c86d8de1b9f07d49567b8b52cebba9cb61", "size": 3133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "set-theory/sheets.tex", "max_stars_repo_name": "fecet/Discrete-Mathematics", "max_stars_repo_head_hexsha": "a29c41030edd37ec9ba67bb96219aca686fcf8b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "set-theory/sheets.tex", "max_issues_repo_name": "fecet/Discrete-Mathematics", "max_issues_repo_head_hexsha": "a29c41030edd37ec9ba67bb96219aca686fcf8b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "set-theory/sheets.tex", "max_forks_repo_name": "fecet/Discrete-Mathematics", "max_forks_repo_head_hexsha": "a29c41030edd37ec9ba67bb96219aca686fcf8b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-11T16:26:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-11T16:26:43.000Z", "avg_line_length": 23.0367647059, "max_line_length": 105, "alphanum_fraction": 0.6064474944, "num_tokens": 1212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8354835289107307, "lm_q1q2_score": 0.5950582586917531}}
{"text": "\n\n    \\filetitle{uniform}{Create function proportional to log of uniform distribution}{logdist/uniform}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nF = logdist.uniform(Lo,Hi)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{Lo} {[} numeric {]} - Lower bound of the uniform distribution.\n\\item\n  \\texttt{Hi} {[} numeric {]} - Upper bound of the uniform distribution.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{F} {[} function\\_handle {]} - Handle to a function returning a\n  value that is proportional to the log of the uniform density.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nSee \\href{logdist/Contents}{help on the logdisk package} for details on\nusing the function handle \\texttt{F}.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "1089b24fc43f33ba4f535d630846009558e1483f", "size": 894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/logdist/uniform.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/logdist/uniform.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/logdist/uniform.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 24.1621621622, "max_line_length": 101, "alphanum_fraction": 0.7483221477, "num_tokens": 249, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835289107309, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5950582535884447}}
{"text": "%In what follows, we use the notation $(x_1,y_1)$ to represent a point in\r\n%the $(x,y)$ coordinate system, also called the $(x,y)$-plane.\r\n%Previously, we used $(a,b)$ to represent\r\n%an open interval. Notation often gets reused and abused in mathematics, \r\n%but thankfully, it is usually clear from the context what we mean.\r\n%\r\n%In the $(x,y)$ coordinate system we normally write the $x$-axis\r\n%horizontally, with positive numbers to the right of the origin, and\r\n%the $y$-axis vertically, with positive numbers above the origin.  That\r\n%is, unless stated otherwise, we take ``rightward'' to be the positive\r\n%$x$-direction and ``upward'' to be the positive $y$-direction.  In a\r\n%purely mathematical situation, we normally choose the same scale for\r\n%the $x$- and $y$-axes.  For example, the line joining the origin to\r\n%the point $(a,a)$ makes an angle of 45${}^\\circ$ with the $x$-axis\r\n%(and also with the $y$-axis).\r\n%\r\n%In applications, often letters other than $x$ and $y$ are used, and\r\n%often different scales are chosen in the horizontal and vertical\r\n%directions.  \\\\\r\n%\r\n%\\begin{example}{Data Plot}{DataPlot}\r\n%Suppose you drop a coin from a window, and you want to study how \r\n%its height above the ground changes from second to second.\r\n%It is natural to let the letter $t$ denote the time\r\n%(the number of seconds since the object was released) and to let the\r\n%letter $h$ denote the height.  For each $t$ (say, at one-second\r\n%intervals) you have a corresponding height $h$.  This information can\r\n%be tabulated, and then plotted on the $(t,h)$ coordinate plane, as\r\n%shown in figure~\\ref{fig:data plot}.\r\n%\\end{example}\r\n%\r\n%\\figure[!ht]\r\n%\\vbox{\r\n%$$\\begin{array}{|r|ccccc|}\r\n%\\hline\r\n%seconds & 0 & 1 & 2 & 3 & 4\\\\\r\n%\\hline\r\n%meters & 80 & 75.1 & 60.4 & 35.9 & 1.6\\\\\r\n%\\hline\r\n%\\end{array}$$ \r\n%\r\n%\\centerline{\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%%\\ninepoint\r\n%\\setcoordinatesystem units <2truecm,0.5truemm>\r\n%\\setplotarea x from 0 to 4.2, y from 0 to 90\r\n%\\axis left shiftedto x=0 ticks numbered from 20 to 80 by 20 /\r\n%\\axis bottom shiftedto y=0 ticks numbered from 0 to 4 by 1 /\r\n%\\put {$t$} [l] <3pt,0pt> at 4.2 0\r\n%\\put {$h$} [l] <3pt,0pt> at 0 90\r\n%\\setquadratic\r\n%\\plot 0 80 1 75.1 2 60.4 3 35.9 4 1.6 /\r\n%\\multiput {$\\bullet$} at 0 80 1 75.1 2 60.4 3 35.9 4 1.6 /\r\n%\\endpicture}}}\r\n%\\caption{A data plot, height versus time. \\label{fig:data plot}}\r\n%\\endfigure\r\n%\r\n%We use the word ``quadrant'' for each of the four regions into which\r\n%the plane is divided by the axes: the first quadrant is where points\r\n%have both coordinates positive, or the ``northeast'' portion of the\r\n%plot, and the second, third, and fourth quadrants are counted off\r\n%counterclockwise, so the second quadrant is the northwest, the third\r\n%is the southwest, and the fourth is the southeast.\r\n%\r\n%Suppose we have two points $A$ and $B$ in the $(x,y)$-plane.\r\n%We often want to know the change in $x$-coordinate (also called the\r\n%``horizontal distance'') in going from $A$ to $B$.  This is often\r\n%written $\\Delta x$, where the meaning of $\\Delta$ (a capital delta in\r\n%the Greek alphabet) is ``change in''. Thus, $\\Delta x$ can be read as\r\n%``change in $x$'' although it usually is read as ``delta $x$''. The\r\n%point is that $\\Delta x$ denotes a single number, and should not be\r\n%interpreted as ``delta times $x$''. 
Similarly, the ``change in $y$'' is written $\\Delta y$\r\n%and represents the difference between the $y$-coordinates of the \r\n%two points. It is the vertical distance you have to move in going from $A$ to $B$. \\\\\r\n%\r\n%\\begin{example}{Change in $x$ and $y$}{ChangeIn}\\label{ChangeIn}\r\n%If $A=(2,1)$ and $B=(3,3)$ the change in $x$ is\r\n%$$\\Delta x=3-2=1$$\r\n%while the change in $y$ is\r\n%$$\\Delta y= 3-1=2.$$\r\n%\\vspace{-0.5cm}\r\n%\\end{example}\r\n%\r\n%The general formulas for the change in $x$ and the change in $y$ \r\n%between a point $(x_1,y_1)$ and a point $(x_2,y_2)$ are:\r\n%$$\r\n%\\Delta x=x_2-x_1,\\qquad\\qquad\\Delta y=y_2-y_1.\r\n%$$\r\n%Note that either or both of these might be negative.\r\n\r\n\r\n\r\n%%%%%%%%%%%%%%%%%%\r\n\\subsection{Lines}\\label{sec:lines}\r\nIf we have two \\ifont{distinct} points $A(x_1,y_1)$ and $B(x_2,y_2)$, then we can draw one\r\nand only one straight line through both points.  By the \\dfont{slope} of this line\r\nwe mean the ratio of $\\Delta y$ to $\\Delta x$.  The slope is often denoted by the letter $m$. \\\\\r\n\r\n\\begin{formulabox}[Slope Formula]\r\nThe slope of the line joining the points $(x_1,y_1)$ and $(x_2,y_2)$ is:\r\n$$m=\\frac{\\Delta y}{\\Delta x}=\\frac{y_2-y_1}{x_2-x_1}=\\frac{\\mbox{rise}}{\\mbox{run}}$$\r\n\\end{formulabox}\r\n\r\n\\bigskip\r\n\r\n\\begin{example}{Slope of a Line Joining Two Points}{SlopeofLine}\r\nThe line joining the two points $(1,-2)$ and $(3,5)$ has slope\r\n$\\ds m=\\frac{5-(-2)}{3-1}=\\frac{7}{2}$\r\n%\\vspace{-0.8cm}\r\n\\end{example}\r\n\r\nThe most familiar form of the equation of a straight line is:\r\n$$y=mx+b.$$\r\nHere $m$ is the slope of the line: if you increase $x$ by\r\n1, the equation tells you that you have to increase $y$ by $m$; and if\r\nyou increase $x$ by $\\Delta x$, then $y$ increases by $\\Delta\r\ny=m\\Delta x$.  The number $b$ is called the \\dfont{y-intercept}, because\r\nit is where the line crosses the $y$-axis (when $x=0$).  If you know two points on\r\na line, the formula $m=(y_2-y_1)/(x_2-x_1)$ gives you the slope.\r\nOnce you know a point and the slope, then the $y$-intercept can be\r\nfound by substituting the coordinates of either point in the equation:\r\n$y_1=mx_1+b$, i.e., $b=y_1-mx_1$.  Alternatively, one can use the\r\n\\dfont{``point-slope'' form} of the equation of a straight line: start with\r\n$(y-y_1)/(x-x_1)=m$ and then multiply to get \r\n$$(y-y_1)=m(x-x_1),$$ the\r\npoint-slope form. Of course, this may be further manipulated to get\r\n$y=mx-mx_1+y_1$, which is essentially the \\dfont{``$y=mx+b$'' form}.\r\n\r\nIt is possible to find the equation of a line between two points directly\r\nfrom the relation $m=(y-y_1)/(x-x_1)=(y_2-y_1)/(x_2-x_1)$, which says ``the\r\nslope measured between the point $(x_1,y_1)$ and the point $(x_2,y_2)$ is\r\nthe same as the slope measured between the point $(x_1,y_1)$ and any other\r\npoint $(x,y)$ on the line.''  For example, if we want to find the equation\r\nof the line joining points $A(2,1)$ and $B(3,3)$, we can use\r\nthis formula:\r\n$$\r\nm=\\frac{y-1}{x-2}=\\frac{3-1}{3-2}=2,\\qquad\\hbox{so~that}\\qquad\r\ny-1=2(x-2),\\qquad\\hbox{i.e.,}\\qquad y=2x-3.\r\n$$\r\nOf course, this is really just the point-slope formula, except that we\r\nare not computing $m$ in a separate step. 
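\r\nAs a quick numerical check of the slope and point-slope formulas above, here is a small Python sketch (illustrative only, not part of the text):\r\n\\begin{verbatim}\r\ndef line_through(p1, p2):\r\n    # Return slope m and intercept b of the line y = m*x + b.\r\n    (x1, y1), (x2, y2) = p1, p2\r\n    m = (y2 - y1) / (x2 - x1)   # rise over run\r\n    b = y1 - m * x1             # from y1 = m*x1 + b\r\n    return m, b\r\n\r\nprint(line_through((2, 1), (3, 3)))  # (2.0, -3.0), i.e. y = 2x - 3\r\n\\end{verbatim}\r\n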
\r\nWe summarize the three common forms of writing a straight line below:\\\\\r\n\r\n\\begin{formulabox}[Slope-Intercept Form of a Straight Line]\r\nAn equation of a line with slope $m$ and $y\\, $-intercept $b$ is:\r\n$$y=mx+b.$$\r\n\\end{formulabox}\r\n\r\n\\bigskip\r\n\r\n\\begin{formulabox}[Point-Slope Form of a Straight Line]\r\nAn equation of a line passing through $(x_1,y_1)$ and having slope $m$ is:\r\n$$y-y_1=m(x-x_1).$$\r\n\\end{formulabox}\r\n\r\n\\bigskip\r\n\r\n\\begin{formulabox}[General Form (Standard Form) of a Straight Line]\r\nAny line can be written in the form\r\n$$Ax+By+C=0,$$\r\nwhere $A,B,C$ are constants and $A,B$ are not both $0$.\r\n\\end{formulabox}\r\n\r\nThe slope $m$ of a line in the form $y=mx+b$ tells us the\r\ndirection in which the line is pointing.  If $m$ is positive, the line goes\r\ninto the 1st quadrant as you go from left to right.   If $m$ is large and\r\npositive, it has a steep incline, while if $m$ is small and positive, \r\nthen the line has a small angle of inclination.  If $m$ is negative, the\r\nline goes into the 4th quadrant as you go from left to right.  If $m$ is\r\na large negative number (large in absolute value), then the line points\r\nsteeply downward. If $m$ is negative but small in absolute value, then it points\r\nonly a little downward.\r\n\r\nIf $m=0$, then the line is horizontal and its equation is simply $y=b$.\r\n\r\nAll of these possibilities are illustrated below.\r\n%figure~\\ref{fig:graphs of lines}.\r\n\r\n$$\\includegraphics[width=6.4in]{images/slopes}$$\r\n\r\n%\\figure[!ht]\r\n%\\vbox{\r\n%\\centerline{\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%%\\sevenpoint\r\n%\\setcoordinatesystem units <3truemm,3truemm>\r\n%\\setplotarea x from -5 to 5, y from -5 to 5\r\n%\\axis left shiftedto x=0 /\r\n%\\axis bottom shiftedto y=0 /\r\n%\\setlinear\r\n%\\plot -1 -5 2.33 5 /\r\n%\\linethickness 0.1pt\r\n%\\axis left ticks in andacross from -5 to 5 by 1 /\r\n%\\axis left ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\axis bottom ticks in andacross from -5 to 5 by 1 /\r\n%\\axis bottom ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\endpicture}\\qquad\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%%\\sevenpoint\r\n%\\setcoordinatesystem units <3truemm,3truemm>\r\n%\\setplotarea x from -5 to 5, y from -5 to 5\r\n%\\axis left shiftedto x=0 /\r\n%\\axis bottom shiftedto y=0 /\r\n%\\setlinear\r\n%\\plot -5 1 5 2 /\r\n%\\linethickness 0.1pt\r\n%\\axis left ticks in andacross from -5 to 5 by 1 /\r\n%\\axis left ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\axis bottom ticks in andacross from -5 to 5 by 1 /\r\n%\\axis bottom ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\endpicture}\\qquad\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%%\\sevenpoint\r\n%\\setcoordinatesystem units <3truemm,3truemm>\r\n%\\setplotarea x from -5 to 5, y from -5 to 5\r\n%\\axis left shiftedto x=0 /\r\n%\\axis bottom shiftedto y=0 /\r\n%\\setlinear\r\n%\\plot -0.5 5 2 -5 /\r\n%\\linethickness 0.1pt\r\n%\\axis left ticks in andacross from -5 to 5 by 1 /\r\n%\\axis left ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\axis bottom ticks in andacross from -5 to 5 by 1 /\r\n%\\axis bottom ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\endpicture}\\qquad\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%%\\sevenpoint\r\n%\\setcoordinatesystem units <3truemm,3truemm>\r\n%\\setplotarea x from -5 to 5, y from -5 to 5\r\n%\\axis left shiftedto x=0 /\r\n%\\axis bottom shiftedto y=0 /\r\n%\\setlinear\r\n%\\plot -5 2 5 1 /\r\n%\\linethickness 
0.1pt\r\n%\\axis left ticks in andacross from -5 to 5 by 1 /\r\n%\\axis left ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\axis bottom ticks in andacross from -5 to 5 by 1 /\r\n%\\axis bottom ticks in andacross numbered from -4 to 4 by 2 /\r\n%\\endpicture}}}\r\n%\\caption{Lines with slopes 3, $0.1$, $-4$, and $-0.1$.\\label{fig:graphs of lines}}\r\n%\\endfigure\r\n\r\nThere is one type of line that cannot be written in the form $y=mx+b$,\r\nnamely, vertical lines.  A vertical line has an equation of the form $x=a$.\r\nSometimes one says that a vertical line has an ``infinite'' slope.\r\n\r\nIt is often useful to find the $x$-intercept of a line $y=mx+b$.  This is\r\nthe $x$-value when $y=0$.  Setting $mx+b$ equal to 0 and solving for\r\n$x$ gives: $x=-b/m$.  \\\\\r\n\r\n\\begin{example}{Finding $x$-intercepts}{xintercepts}\r\nTo find the $x$-intercept of the line $y=2x-3$, we set $y=0$ and solve for $x$:\r\n$$0 = 2x-3\\qquad\\to\\qquad x = \\frac{3}{2}.$$\r\nThus, the line has an $x$-intercept of $3/2$.\r\n\\end{example}\r\n\r\nIt is often necessary to know if two lines are parallel or perpendicular.\r\nLet $m_1$ and $m_2$ be the slopes of the nonvertical lines $L_1$ and $L_2$.\r\nThen:\r\n\\begin{itemize}\r\n\\item $L_1$ and $L_2$ are \\dfont{parallel} if and only if $m_1=m_2$.\r\n\\item $L_1$ and $L_2$ are \\dfont{perpendicular} if and only if $\\ds{m_2=\\frac{-1}{m_1}}$.\r\n\\end{itemize}\r\nIn the case of perpendicular lines, we say their slopes are \\ifont{negative reciprocals}. \r\nBelow is a visual representation of a pair of parallel lines and a pair of perpendicular lines.\r\n$$\\includegraphics[width=3.7in]{images/lines-1}$$\r\n\r\n\\begin{example}{Equation of a Line}{EquationLine}\r\nFor each part below, find an equation of a line satisfying the requirements:\r\n\\begin{enumerate}\r\n\\item[(a)] Through the two points $(0,3)$ and $(-2,4)$.\r\n\\item[(b)] With slope $7$ and through point $(1,-2)$.\r\n\\item[(c)] With slope $2$ and $y$-intercept $4$.\r\n\\item[(d)] With $x$-intercept $8$ and $y$-intercept $-3$.\r\n\\item[(e)] Through point $(5,3)$ and parallel to the line $2x+4y+2=0$.\r\n\\item[(f)] With $y$-intercept $4$ and perpendicular to the line $\\ds{y=-\\frac{2}{3}x+3}$.\r\n\\end{enumerate}\r\n\\end{example}\r\n\r\n\\begin{solution} \r\n\\noindent(a) We use the \\ifont{slope formula} on $(x_1,y_1)=(0,3)$ and $(x_2,y_2)=(-2,4)$ to find $m$:\r\n$$m=\\frac{(4)-(3)}{(-2)-(0)}=\\frac{1}{-2}=-\\frac{1}{2}.$$\r\nNow using the \\ifont{point-slope formula} we get the equation:\r\n$$y-3=-\\frac{1}{2}\\left(x-0\\right)\\quad\\to\\quad y=-\\frac{1}{2}x+3.$$\r\n\r\n\\noindent(b) Using the \\ifont{point-slope formula} with $m=7$ and $(x_1,y_1)=(1,-2)$ gives:\r\n$$y-(-2)=7(x-1)\\quad\\to\\quad y = 7x-9.$$\r\n\r\n\\noindent(c) Using the \\ifont{slope-intercept formula} with $m=2$ and $b=4$ we get $y=2x+4$.\r\n\r\n\\noindent(d) Note that the intercepts give us two points: $(x_1,y_1)=(8,0)$ and $(x_2,y_2)=(0,-3)$.\r\nNow follow the steps in part (a):\r\n$$m=\\frac{-3-0}{0-8}=\\frac{3}{8}$$\r\nUsing the \\ifont{point-slope formula} we get the equation:\r\n$$y-(-3)=\\frac{3}{8}\\left(x-0\\right)\\quad\\to\\quad y=\\frac{3}{8}x-3$$\r\n\r\n\\noindent(e) The line $2x+4y+2=0$ can be written as:\r\n$$4y = -2x - 2 \\quad\\to\\quad y=-\\frac{1}{2}x-\\frac{1}{2}$$\r\nThis line has slope $-1/2$.\r\nSince our line is \\ifont{parallel} to it, we have $m=-1/2$.\r\nNow we have a point $(x_1,y_1)=(5,3)$ and slope $m=-1/2$, thus, the \\ifont{point-slope formula} 
gives:\r\n$$y-3=-\\frac{1}{2}\\left(x-5\\right).$$\r\n\r\n\\noindent(f) The line $\\ds{y=-\\frac{2}{3}x+3}$ has slope $m=-2/3$.\r\nSince our line is perpendicular to it, the slope of our line is the \\ifont{negative reciprocal}, hence, $m=3/2$.\r\nNow we have $b=4$ and $m=3/2$, thus by the \\ifont{slope-intercept formula}, an equation of the line is\r\n$$y=\\frac{3}{2}x+4.$$\r\n\\vspace{-0.5cm}\r\n\\end{solution}\r\n\r\n\r\n\\begin{example}{Parallel and Perpendicular Lines}{ParallelPerpendicularLines}\r\nAre the two lines $7x+2y+3=0$ and $6x-4y+2=0$ perpendicular? Are they parallel? If they are not parallel, what is their point of intersection?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nThe first line $L_{1}$ re-expressed in slope-intercept form is:\r\n$$7x+2y+3=0\\quad\\to\\quad 2y=-7x-3\\quad\\to\\quad y=-\\frac{7}{2}x-\\frac{3}{2}.$$\r\nIt has slope $m_1=-7/2$.\r\nThe second line $L_{2}$ in slope-intercept form is:\r\n$$6x-4y+2=0\\quad\\to\\quad -4y=-6x-2\\quad\\to\\quad y=\\frac{3}{2}x+\\frac{1}{2}.$$\r\nIt has slope $m_2=3/2$. Since $m_1\\neq m_2$ the lines are not parallel. The lines are also not perpendicular since $m_1\\cdot m_2\\neq -1$ (they are not negative reciprocals). Therefore, the lines must intersect.\\\\\r\nWe find the point of intersection by setting the $y$-values of $L_{1}$ and $L_{2}$ equal, and then solving for $x$.\r\n\r\n\\begin{minipage}{0.6\\textwidth}\r\n\tIn particular, we have\r\n\t\r\n\t$$\\begin{array}{rcl}\r\n\tL_{1} & = & L_{2} \\\\\r\n\t\\ds{-\\frac{7}{2}x-\\frac{3}{2}} & = & \\ds{\\frac{3}{2}x+\\frac{1}{2}} \\\\\r\n\t\\end{array} $$\r\n\t\r\nSolving for $x$ gives $x=-2/5$. \\\\\r\n\r\nThen substituting this \r\ninto either $L_{1}$ or $L_{2}$ \\\\\r\ngives $y=-1/10$.\\\\\r\n \r\nTherefore, the lines intersect at the point $(-2/5,-1/10)$.\r\n\r\n\\vspace{2cm}\r\n\\end{minipage}\r\n\\begin{minipage}{0.35\\textwidth}\r\n\t\r\n\\vspace{3mm}\t\r\n\\hspace{3mm}\\includegraphics[width=2.2in]{images/ExampleLines}\r\n\\end{minipage}\r\n\\end{solution}\r\n\r\n\r\n", "meta": {"hexsha": "5448948db6e70a4d68842c24e278cb9e23ab2ae3", "size": 14898, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "1-review/1-2-1-lines.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1-review/1-2-1-lines.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1-review/1-2-1-lines.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4444444444, "max_line_length": 213, "alphanum_fraction": 0.6730433615, "num_tokens": 5151, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8244619306896955, "lm_q1q2_score": 0.5950497973262888}}
{"text": "\n\\subsection{Renewal arrays theory}\n\\label{sec:back:to:the:basics:rogers}\n\n\\citeauthor{rogers:1977}, in \\cite{rogers:1977}, proposes a theory\nas a solution to the question left open by \\citeauthor{shapiro:1976}, \nintroducing the concept of \\emph{renewal arrays}.\n\nIn order to understand his work, we formalize first the concept of $k$-fold\nconvolution of a sequence $\\lbrace s_{i}\\rbrace_{i\\in\\mathbb{N}}$ \nwith itself and, second, the concept of \\emph{renewal arrays}.\n\\\\\\\\\nLet $\\lbrace \\alpha_{i}\\rbrace_{i\\in\\mathbb{N}}$ and \n$\\lbrace \\beta_{i}\\rbrace_{i\\in\\mathbb{N}}$ be two sequences\n\\marginpar{to not clutter notations, we discard subscript in \nnotation $\\lbrace \\alpha_{i} \\rbrace$ for sequences}. Denote\nwith $\\cdot$ the \\emph{convolution} operator which consumes a pair\nof sequences and produces a new sequence, written in \\emph{infix} \nnotation:\n\\begin{displaymath}\n    \\lbrace \\alpha_{i}\\rbrace_{i\\in\\mathbb{N}}\\cdot\n    \\lbrace \\beta_{i}\\rbrace_{i\\in\\mathbb{N}} =\n    \\lbrace \\theta_{i}\\rbrace_{i\\in\\mathbb{N}}\n\\end{displaymath}\nwhere sequence  $\\lbrace \\theta_{i}\\rbrace_{i\\in\\mathbb{N}}$ satisfies:\n\\begin{displaymath}\n    \\theta_{n}=\\sum_{j=0}^{n}{\\alpha_{j}\\beta_{n-j}}\n\\end{displaymath}\n\nThe $k$-fold convolution of a sequence $\\lbrace\n\\theta_{i}\\rbrace_{i\\in\\mathbb{N}}$ \\marginpar{$k$-fold convolution, precisely}\nwith itself, denoted with  $\\left\\lbrace\n\\theta_{i}^{(k)}\\right\\rbrace_{i\\in\\mathbb{N}}$, is defined recursively as:\n\\begin{displaymath}\n    \\left\\lbrace\\theta_{n}^{(k+1)}\\right\\rbrace= \\left\\lbrace\\theta_{n}^{(k)}\\right\\rbrace\\cdot\n        \\lbrace\\theta_{n}\\rbrace\n\\end{displaymath}\nwith boundary condition $\\left\\lbrace\\theta_{n}^{(0)}\\right\\rbrace=(1,\\underbrace{0,0,\\ldots}_{\\text{infinitely many zeros}})$ so as:\n\\begin{displaymath}\n    \\left\\lbrace\\theta_{n}^{(1)}\\right\\rbrace= \\left\\lbrace\\theta_{n}^{(0)}\\right\\rbrace\\cdot\n        \\lbrace\\theta_{n}\\rbrace=\n        \\lbrace\\theta_{n}+0\\theta_{n-1}+\\ldots+0\\theta_{0}\\rbrace=\n        \\lbrace\\theta_{n}\\rbrace\n\\end{displaymath}\n\\\\\\\\\nLet $\\vect{b}=\\lbrace b_{i}\\rbrace_{i\\in\\mathbb{N}}$ be a sequence. 
Let $\\vect{b}=\\lbrace b_{i}\\rbrace_{i\\in\\mathbb{N}}$ be a sequence. \nThe \\emph{renewal array} \\marginpar{renewal arrays, precisely}\n$\\mathcal{B}=\\lbrace b_{nm}\\rbrace_{n,m\\in\\mathbb{N}}$\ngenerated by sequence $\\vect{b}$ is defined as follows:\n\\begin{displaymath}\n    b_{nm}=b_{n-m}^{(m+1)} \\quad\\wedge\\quad n < m\\rightarrow b_{nm}=0\n\\end{displaymath}\nFor the sake of clarity, let $n\\in\\mathbb{N}$, then\n% the following set should be helpful if subscript index change `n-m'\n% hasn't be done in the definition of Renewal array \n%let $S_{m}=\\mathbb{N}\\setminus\\lbrace0,\\ldots,m-1\\rbrace$, then \nthe following is the sequence of the very first column, where $m=0$:\n\\begin{displaymath}\n    \\lbrace b_{n0}\\rbrace\n        =\\lbrace b_{n}^{(1)}\\rbrace\n        =\\lbrace b_{n}\\rbrace\n\\end{displaymath}\nthe following is the sequence of the second column, where $m=1$:\n\\begin{displaymath}\n    \\lbrace b_{n1}\\rbrace\n        =\\lbrace b_{n-1}^{(2)}\\rbrace\n        =\\left\\lbrace \\sum_{j=0}^{n-1}{b_{j}b_{n-1-j}}\\right\\rbrace\n\\end{displaymath}\nthe following is the sequence of the third column, where $m=2$:\n\\begin{displaymath}\n    \\lbrace b_{n2}\\rbrace\n        =\\lbrace b_{n-2}^{(3)}\\rbrace\n        =\\left\\lbrace \\sum_{i+j+l=n-2}{b_{i}b_{j}b_{l}}\\right\\rbrace\n\\end{displaymath}\nand, in general, the following is the sequence of column $m\\in\\mathbb{N}$:\n\\begin{displaymath}\n    \\lbrace b_{nm}\\rbrace\n        =\\lbrace b_{n-m}^{(m+1)}\\rbrace\n        =\\left\\lbrace \\sum_{i_{1}+i_{2}+\\ldots+i_{m+1} =n-m}\n            {b_{i_{1}}b_{i_{2}}\\cdots b_{i_{m+1}}}\\right\\rbrace\n\\end{displaymath}\nChoose a column index $m\\in\\mathbb{N}$ and let $B$ be a \\ac{fps} over sequence $\\vect{b}$.\nDenote with $B^{(m)}$ a \\ac{fps} over the \\emph{renewal array} \n$\\lbrace b_{nm} \\rbrace_{n,m\\in\\mathbb{N}}$, defined by sequence $\\vect{b}$, \nsuch that:\n\\begin{displaymath}\n    B^{(m)}(t) = \\sum_{n\\geq m}{b_{nm}t^{n-m}}\n\\end{displaymath}\nthen the following relation holds between the functions $B$ and $B^{(m)}$:\n\\begin{displaymath}\n    B^{(m)}(t) = B(t)^{m+1}\n\\end{displaymath}\nConsider a sequence\\marginpar{$A$-sequence, informally}\n$\\vect{a}=\\lbrace a_{n} \\rbrace_{n\\in\\mathbb{N}}$ and let $A$ be\na \\ac{fps} over it:\n\\begin{displaymath}\n    A\\left(t\\,B(t)\\right) \n        = \\sum_{r\\geq 0}{a_{r}\\left(t\\,B(t)\\right)^{r}}\n        = \\sum_{r\\geq 0}{a_{r}B(t)^{r}t^{r}}\n        = \\sum_{r\\geq 0}{a_{r}B^{(r-1)}(t)t^{r}}\n\\end{displaymath}\nby the definition of $B^{(r-1)}$ we rewrite:\n\\begin{displaymath}\n    \\begin{split}\n        \\sum_{r\\geq 0}{a_{r}B^{(r-1)}(t)t^{r}}\n            &=\\sum_{r\\geq 0}{a_{r}\\left(\\sum_{n\\geq r-1}{b_{n,r-1}t^{n-r+1}}\\right)t^{r}}\\\\\n            &=\\sum_{r\\geq 0}{\\sum_{n\\geq r-1}{a_{r}b_{n,r-1}t^{n+1}}}\n    \\end{split}\n\\end{displaymath}\nnow we tabulate the sums with respect to the index $r$:\n\\begin{displaymath}\n    \\begin{split}\n        r=0 &\\rightarrow \\sum_{n\\geq -1}{a_{0}b_{n,-1}t^{n+1}}\\\\\n        r=1 &\\rightarrow \\sum_{n\\geq 0}{a_{1}b_{n,0}t^{n+1}}\\\\\n        r=2 &\\rightarrow \\sum_{n\\geq 1}{a_{2}b_{n,1}t^{n+1}}\\\\\n        &\\ldots\\\\\n        r=k+1 &\\rightarrow \\sum_{n\\geq k}{a_{k+1}b_{n,k}t^{n+1}}\\\\\n        &\\ldots\\\\\n    \\end{split}\n\\end{displaymath}\ntherefore extracting the coefficient of $t^{n+1}$ yields:\n\\begin{displaymath}\n    [t^{n+1}]A(t\\,B(t))=\\sum_{k\\geq 0}{a_{k}\\,b_{n,k-1}}\n\\end{displaymath}\nand if \\marginpar{$A$-sequence, precisely} function $A$ satisfies \n$B(t)=A(t\\,B(t))$, which is the same as 
saying:\n\\begin{equation}\n    b_{n+1}=\\sum_{k\\geq 0}{a_{k}\\,b_{n,k-1}}\n    \\label{eq:A:sequence:for:Renewal:arrays}\n\\end{equation}\nthen, sequence $\\vect{a}$ is the $A$-sequence for the \\emph{renewal array}\n$\\lbrace b_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$, generated by sequence $\\vect{b}$.\nA subtle point underlying all this theory is shown in \n\\autoref{eq:A:sequence:for:Renewal:arrays}: it says that the coefficient\n$b_{n+1}$, in sequence $\\vect{b}$, is a combination\nof all coefficients lying on row $n$ of the \\emph{renewal array}\n$\\lbrace b_{nm}\\rbrace_{n,m\\in\\mathbb{N}}$,\\marginpar{a recursion trick}\nwhich is generated by sequence $\\vect{b}$ itself!\n\\\\\\\\\nLet us generalize a little further. Choose \n$s\\in\\mathbb{N}\\setminus\\lbrace0\\rbrace$ and consider the following:\n\\begin{displaymath}\n    (t\\,B(t))^{s-1}\\,A\\left(t\\,B(t)\\right) \n        = \\sum_{r\\geq 0}{a_{r}B(t)^{r+s-1}t^{r+s-1}}\n        = \\sum_{r\\geq 0}{a_{r}B^{(r+s-2)}(t)t^{r+s-1}}\n\\end{displaymath}\nby the definition of $B^{(r+s-2)}$ we rewrite:\n\\begin{displaymath}\n    \\sum_{r\\geq 0}{a_{r}B^{(r+s-2)}(t)t^{r+s-1}}\n        =\\sum_{r\\geq 0}{a_{r}\\sum_{n\\geq r+s-2}{b_{n,r+s-2}t^{n+1}}}\n\\end{displaymath}\nfinally:\n\\begin{displaymath}\n    =\\sum_{r\\geq 0}{\\sum_{n\\geq r+s-2}{a_{r}b_{n,r+s-2}t^{n+1}}}\n\\end{displaymath}\nNow we tabulate the sums with respect to the index $r$, as done in the previous derivation:\n\\begin{displaymath}\n    \\begin{split}\n        r=0 &\\rightarrow \\sum_{n\\geq s-2}{a_{0}b_{n,s-2}t^{n+1}}\\\\\n        r=1 &\\rightarrow \\sum_{n\\geq s-1}{a_{1}b_{n,s-1}t^{n+1}}\\\\\n        r=2 &\\rightarrow \\sum_{n\\geq s}{a_{2}b_{n,s}t^{n+1}}\\\\\n        r=3 &\\rightarrow \\sum_{n\\geq s+1}{a_{3}b_{n,s+1}t^{n+1}}\\\\\n        &\\ldots\\\\\n        r=k+2 &\\rightarrow \\sum_{n\\geq s+k}{a_{k+2}b_{n,s+k}t^{n+1}}\\\\\n        &\\ldots\\\\\n    \\end{split}\n\\end{displaymath}\ntherefore extracting the coefficient of $t^{n+1}$ yields:\n\\begin{displaymath}\n    [t^{n+1}](t\\,B(t))^{s-1}A(t\\,B(t))=\\sum_{k\\geq 0}{a_{k}\\,b_{n,s+k-2}}\n\\end{displaymath}\nand if \\marginpar{$A$-sequence, generally} function $A$ satisfies \n$B(t)^{s}=(t\\,B(t))^{s-1}A(t\\,B(t))$, which is the same as saying:\n\\begin{equation}\n    \\sum_{i_{1}+\\ldots+i_{s}=n+1}{b_{i_{1}}\\cdots b_{i_{s}}}\n        =b_{n+1}^{(s)}\n        =b_{n+s,s-1}\n        =\\sum_{k\\geq 0}{a_{k}\\,b_{n,s+k-2}}\n    \\label{eq:generalized:A:sequence:for:Renewal:arrays}\n\\end{equation}\nthen, sequence $\\vect{a}$ is the (\\emph{generalized}) $A$-sequence for the\n\\emph{renewal array} $\\lbrace b_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$, generated by\nsequence $\\vect{b}$ \\marginpar{a dependency within array $\\lbrace\nb_{nm}\\rbrace_{n,m\\in\\mathbb{N}}$}.\n\nFixing $s=1$, we recover the result shown in\n\\autoref{eq:A:sequence:for:Renewal:arrays}.\n\n", "meta": {"hexsha": "1c3bedc0b83e1362ccbe73574b12db0d4ee41fe5", "size": 8067, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/back-to-the-basics/rogers.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/back-to-the-basics/rogers.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": 
"0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/back-to-the-basics/rogers.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3512195122, "max_line_length": 133, "alphanum_fraction": 0.6384033718, "num_tokens": 3190, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.8244619306896955, "lm_q1q2_score": 0.5950497874571513}}
{"text": "\\chapter{Other}\n\n\\section{Greek Alphabet}\n\n\\begin{tabbing}\n\t\\hspace{2em} \\= \\hspace{3em} \\= $N$ = Yet another statement \\= \\kill\n\t\n\tA           \\>  $\\alpha$                 \\>  alpha    \\\\\n\tB           \\>  $\\beta$                  \\>  beta     \\\\\n\t$\\Gamma$    \\>  $\\gamma$                 \\>  gamma    \\\\\n\t$\\Delta$    \\>  $\\delta$                 \\>  delta    \\\\\n\tE           \\>  $\\epsilon \\ \\varepsilon$ \\>  epsilon  \\\\\n\tZ           \\>  $\\zeta$                  \\>  zeta     \\\\\n\tH           \\>  $\\eta$                   \\>  eta      \\\\\n\t$\\Theta$    \\>  $\\theta \\ \\vartheta $    \\>  theta    \\\\\n\tI           \\>  $\\iota$                  \\>  iota     \\\\\n\tK           \\>  $\\kappa$                 \\>  kappa    \\\\\n\t$\\Lambda$   \\>  $\\lambda$                \\>  lambda   \\\\\n\tM           \\>  $\\mu$                    \\>  mu       \\\\\n\tN           \\>  $\\nu$                    \\>  nu       \\\\\n\t$\\Xi$       \\>  $\\xi$                    \\>  xi       \\\\\n\tO           \\>  o                        \\>  omicron  \\\\\n\t$\\Pi$       \\>  $\\pi \\ \\varpi $          \\>  pi       \\\\\n\tP           \\>  $\\rho$                   \\>  rho      \\\\\n\t$\\Sigma$    \\>  $\\sigma \\ \\varsigma $    \\>  sigma    \\\\\n\tT           \\>  $\\tau$                   \\>  tau      \\\\\n\t$\\Upsilon$  \\>  $\\upsilon$               \\>  upsilon  \\\\\n\t$\\Phi$      \\>  $\\phi \\ \\varphi $        \\>  phi      \\\\\n\tX           \\>  x                        \\>  chi      \\\\\n\t$\\Psi$      \\>  $\\psi$                   \\>  psi      \\\\\n\t$\\Omega$    \\>  $\\omega$                 \\>  omega    \\\\\n\t\n\\end{tabbing}\n\n", "meta": {"hexsha": "e190f37e4702f33c74bd5a61f85b3b98dc448faa", "size": 1540, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mathematics_Formulary/sections/other.tex", "max_stars_repo_name": "ufoscout/Physics_notes", "max_stars_repo_head_hexsha": "68e705f1afc087af3161dd2eb5ff556cf3873533", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics_Formulary/sections/other.tex", "max_issues_repo_name": "ufoscout/Physics_notes", "max_issues_repo_head_hexsha": "68e705f1afc087af3161dd2eb5ff556cf3873533", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics_Formulary/sections/other.tex", "max_forks_repo_name": "ufoscout/Physics_notes", "max_forks_repo_head_hexsha": "68e705f1afc087af3161dd2eb5ff556cf3873533", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-29T08:25:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-29T08:25:09.000Z", "avg_line_length": 44.0, "max_line_length": 69, "alphanum_fraction": 0.2506493506, "num_tokens": 473, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.8244619220634456, "lm_q1q2_score": 0.5950497812312142}}
{"text": "\\documentclass{article}\n    \\usepackage{subcaption}\n    \\usepackage{amsmath, amssymb}\n    \\usepackage{graphicx, float}\n    \\usepackage[hidelinks]{hyperref}\n    \\usepackage[bottom]{footmisc}\n    \\usepackage[margin=.8in, tmargin=.8in]{geometry}\n    \\usepackage{esint} % for double line integrals\n    \\usepackage{xcolor} % for color macros\n    \\usepackage{tikz}\n    \\usepackage{algorithm, algpseudocode}\n    \\newcommand{\\citep}[2][]{(\\cite[#1]{#2})}\n\n    \\usetikzlibrary{arrows,decorations.markings}\n    \\usepackage[\n        backend=biber,\n        % style=alphabetic,\n        citestyle=authoryear,\n        sorting=ynt,\n        bibencoding=utf8\n    ]{biblatex}\n    \n    \\bibliography{bibliography}\n\n    \\renewcommand{\\baselinestretch}{1.2}\n    \\newcommand{\\eps}{\\epsilon}\n    \\newcommand{\\der}{\\partial}\n    \\newcommand{\\del}{\\nabla}\n    \\newcommand{\\bm}[1]{\\mathbf{#1}}\n    % \\newcommand{\\vf}[1]{\\vec{\\mathbf{#1}}}\n    \\newcommand{\\vf}[1]{\\mathbf{#1}}\n    \\newcommand{\\norm}{\\hat{\\bm{n}}}\n    \\newcommand{\\normal}{\\mathcal{N}}\n    \\newcommand{\\var}{\\text{var}}\n    \\newcommand{\\bx}{\\vf{x}}\n    \\newcommand{\\by}{\\vf{y}}\n    \\newcommand{\\bz}{\\vf{z}}\n    \\newcommand{\\bw}{\\vf{w}}\n    \\newcommand{\\bfu}{\\vf{f}}\n    \\newcommand{\\giv}{\\ |\\ }\n    \\newcommand{\\bmu}{\\pmb{\\mu}}\n    \\newcommand{\\gauss}{\\mathcal{N}}\n    \\newcommand{\\data}{\\mathcal{D}}\n    \\newcommand{\\model}{\\mathcal{M}}\n    \\DeclareMathOperator*{\\argmax}{arg\\,max}\n    \\DeclareMathOperator*{\\argmin}{arg\\,min}\n\n    \\newcommand{\\id}{\\mathbb{I}}\n    \\newcommand{\\de}{\\text{d}}\n    \\newcommand{\\tran}{\\text{T}}\n    \\newcommand{\\expect}{\\mathbb{E}}\n    \\newcommand{\\kl}{D_{KL}}\n    \\newcommand{\\post}{P(\\bw \\giv \\data)}\n\n    \\newcommand{\\red}[1]{\\textcolor{red}{#1}}\n    \\newcommand{\\blue}[1]{\\textcolor{blue}{#1}}\n    \n    \\setlength\\parindent{0pt}\n    \\captionsetup{justification=centering}\n    \n    \\title{Machine Learning and Pattern Recognition}\n    \\date{\\today}\n    \\author{Traiko Dinev \\textless traiko.dinev@gmail.com\\textgreater}\n\n\\begin{document}\n\n\\maketitle\n\\textit{NOTE: This partially follows Machine Learning and Pattern Recognition, a masters level course at the University of Edinburgh.}\n\n\\textit{NOTE: Note this \"summary\" is NOT a reproduction of the course materials nor is it copied from the corresponding courses. It was entirely written and typeset from scratch.}\n\n\\textit{License: Creative Commons public license; See README.md of repository}\n\n\\section{Linear Regression}\nConsider the vector of features $\\bm{x} = \\langle x_1, x_2, \\dots, x_D \\rangle$, where $D$ is the number of dimensions. Then we want to learn this transformation (i.e. learn $\\bm{w}$):\n\n\\begin{equation}\n    \\bm{f} = X \\bm{w} + b\n\\end{equation}\n\nwhere X is a $N\\times D$ matrix, containing $N$ examples of $D$ dimensions. We have $N$ targets, our training vector $y$ of size $N\\times1$. This corresponds to computing $\\bm{w}^T \\bm{x}^{(i)}$, i.e. the dot product between the weights and training examples for each $i$. We can use mean squared error:\n\n\\begin{align*}\n    \\min_{\\bm{w}} \\quad& \\sum_i (y^{(i)} - f^{(i)})^2\n        = (\\bm{y} - \\bm{f})^T (\\bm{y} - \\bm{f})\n\\end{align*}\n\nThis has a linear solution of the form $\\bm{w} = \\underbrace{(X^TX)^{-1}X^T}_\\text{\\blue{pseudo-inverse}} \\bm{y}$. 
We can also introduce basis functions $\\phi$:\n\\vskip 0.1in\n\n\\begin{align*}\n    f(\\bm{x}) &= \\sum_k w_k\\ \\phi_k(\\bm{x}) \\\\\n    \\phi(\\bm{x}) &= \\exp(-(\\bm{x} - \\bm{c})^T (\\bm{x} - \\bm{c}) / h^2)\n        && \\text{\\blue{Radial Basis Function}} \\\\\n    \\sigma(\\bm{x}) &= \\frac{1}{1 + \\exp(-\\bm{v}^T \\bm{x} - b)}\n        && \\text{\\blue{Sigmoid}}\n\\end{align*}\n\n\\subsection{Regularization}\nWe can add L2 regularization, which penalizes large weights.\n\n\\begin{equation}\n    \\hat{E}(\\bm{w}) = E(\\bm{w}) + \\lambda \\bm{w}^T\\bm{w}\n\\end{equation}\n\nwhich we can solve analytically to get something similar:\n\n\\begin{equation}\n    \\bm{w}^* = (X^TX + \\lambda \\id)^{-1}X^T \\bm{y}\n\\end{equation}\n\nAlternatively, we can augment the $\\bm{y}$ vector and $X$ matrix to obtain the same results:\n\n\\begin{align*}\n    \\bm{y}' =\n        \\begin{pmatrix}\n            \\bm{y} \\\\\n            \\bm{0}_k\n        \\end{pmatrix} \\\\\n    \\Phi' =\n        \\begin{pmatrix}\n            \\Phi \\\\\n            \\sqrt{\\lambda} \\id_k\n        \\end{pmatrix}\n\\end{align*}\n\n\\section{Error Bars}\nWe normally train on a \\textit{training set}, find the best hyperparameters (like the L2 coefficients) on a \\textit{validation set} and judge only once on a \\textit{test set}.\n\nNow let's say we compute some error measure or loss $L(y, f(x))$, like the squared error above. If we have a test set, we can compute the mean of this error:\n\n\\begin{equation}\n    L_\\text{test} = \\frac{1}{M} \\sum_{m=1}^M L(y_m, f(x_m)) = \\frac{1}{M} \\sum_m L_m\n\\end{equation}\n\nThis is a sum of test errors, which means that we can \\textit{most of the time} apply the \\textbf{Central Limit Theorem} and say that $L_\\text{test}$ is approximately Gaussian. We need to have a finite variance and all the $x_m$'s need to be independent, but, as is often the case in ML, we assume things work until proven otherwise.\n\n\\vskip 0.1in\nThe $x$'s actually come from an \\textbf{input distribution}, which we don't know. That doesn't stop us from computing its mean and variance from independent samples (distinct from the sets above):\n\n\\begin{align*}\n    \\mu &\\approx \\bar{x} = \\frac{1}{N} \\sum_n x_n \\\\\n    \\sigma^2 &\\approx \\bar{\\sigma}^2 = \\frac{1}{N - 1} \\sum_n (x_n - \\bar{x})^2\n\\end{align*}\n\nNow we can also compute the variance of the mean itself, i.e. how much $\\bar{x}$ would vary if we collected many such sets:\n\n\\begin{align}\n    \\var[\\bar{x}] &= \\frac{1}{N^2} \\sum_n \\var[x_n] = \\frac{\\sigma^2}{N}\n        \\approx \\frac{\\bar{\\sigma}^2}{N}\n\\end{align}\n\nusing the rules of variance (google ``variance of a sum''). We can apply the same reasoning to the errors, where we replace $x_n$ above by $L(x_n, f(x_n))$. This, combined with the central limit theorem reasoning, gives us a \"confidence interval\" like so:\n\n\\begin{equation}\n    \\mu \\pm \\hat{\\sigma} / \\sqrt{N}\n\\end{equation}\n\nTL;DR: Report standard error bars.\n\n\\section{Normal Distributions}\nA random variable is Gaussian (normal) if it has the following pdf:\n\n\\begin{equation} \\label{eq:normal1d}\n    p(y) = \\mathcal{N}(y; \\mu, 1) = \\frac{1}{\\sqrt{2 \\pi}} e^{-\\frac{1}{2}(y - \\mu) ^ 2}\n\\end{equation}\n\nStandard normals have a mean of $0$ and a variance of $1$. 
We can scale and shift a standard normal like so:\n\n\\begin{align*}\n    z &= \\sigma x + \\mu \\\\\n    x &= \\frac{z - \\mu}{\\sigma}\n\\end{align*}\n\nWe then substitute the above in \\autoref{eq:normal1d} and scale the PDF to be normalized:\n\n\\begin{equation}\n    p(z) = \\mathcal{N}(z; \\mu, \\sigma) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} e^{-\\frac{1}{2\\sigma^2}(z - \\mu) ^ 2}\n\\end{equation}\n\n\\subsection{Multivariate Gaussians}\nThe joint probability density of $D$ standard univariate Gaussians gives us a multivariate one:\n\n\\begin{equation}\n    p(\\vf{x}) = \\prod_d p(x_d) = \\frac{1}{(2\\pi)^{D/2}} e^{-\\frac{1}{2} \\vf{x}^T \\vf{x}}\n\\end{equation}\n\nThis has an identity covariance matrix (note the $\\vf{x}^T \\vf{x}$ term in the exponent). If we transform such a variable:\n\n\\begin{equation}\n    \\vf{y} = A \\vf{x}\n\\end{equation}\n\nThen we can show that $\\text{cov}[\\vf{y}] = \\Sigma = AA^T$. Also, assuming $A$ is invertible, $\\vf{x} = A^{-1} \\vf{y}$. Substituting and normalizing, we obtain:\n\n\\begin{equation}\n    p(\\vf{y}) = {|2\\pi\\Sigma}|^{-1/2} e^{-\\frac{1}{2} \\vf{y}^T \\Sigma^{-1} \\vf{y}}\n\\end{equation}\n\nShifting the distribution ($\\vf{z} = \\vf{y} + \\vf{\\mu}$), we obtain:\n\n\\begin{equation}\n    p(\\vf{z}) = {|2\\pi\\Sigma}|^{-1/2} e^{-\\frac{1}{2} (\\vf{z} - \\vf{\\mu})^T \\Sigma^{-1} (\\vf{z} - \\vf{\\mu})}\n\\end{equation}\n\nCovariance matrices are always \\textbf{positive semi-definite}; for the density above, $\\Sigma$ must in fact be positive \\textit{definite}, which ensures that it is invertible and hence that the transformation doesn't \"reduce\" dimensionality:\n\n\\begin{equation}\n    \\vf{z}^T \\Sigma \\vf{z} > 0, \\quad \\text{for all real}\\ \\vf{z} \\neq \\vf{0} \n\\end{equation}\n\n\\section{Simple Classifiers}\nIn classification, targets $y^{(n)}$ are either $\\{0, 1\\}$ or $\\{-1, +1\\}$. Always 1-hot encode categorical variables: $\\vf{x} = [0\\ 1\\ 0\\ 0\\ \\dots]^T$. \n\n\\subsection{Generative Normal Models}\nWe can assume that features come from a normal distribution for each class:\n\n\\begin{equation*}\n    p(\\bx | y=k) = \\gauss(\\bx, \\bmu_k, \\Sigma_k)\n\\end{equation*}\n\nWe can fit $\\bmu_k$ and $\\Sigma_k$ to the empirical mean and covariance of the vectors in each class. 
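\n\nA small sketch of this fitting step (illustrative only), assuming \\texttt{X} is an $N \\times D$ array and \\texttt{y} holds integer class labels:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_class_gaussians(X, y):\n    params = {}\n    for k in np.unique(y):\n        Xk = X[y == k]\n        params[k] = (Xk.mean(axis=0),           # mu_k\n                     np.cov(Xk, rowvar=False),  # Sigma_k\n                     len(Xk) / len(X))          # pi_k, the prior by counting\n    return params\n\\end{verbatim}\n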
Using Bayes' rule, we can get a probability for a label belonging to class $k$:\n\n\\begin{align*}\n    P(y=k | \\bx) &\\propto p(\\bx | y = k) P(y = k) \\\\\n                 &\\propto \\gauss(\\bx; \\mu_k, \\Sigma_k) \\pi_k\n\\end{align*}\n\nTo get priors, we can just count:\n\n\\begin{equation*}\n    P(y = k) = \\pi_k \\approx \\frac\n        {\\Sigma_n I(y^{(n)} = k)}\n        {N}\n\\end{equation*}\n\nAt test time we just compute probabilities for each class, normalize and hey-presto, classifier:\n\n\\begin{equation*}\n    P(y = k | \\bx^{(\\text{test})}, \\theta) = \\frac\n        {s_k}\n        {\\sum_{k'} s_{k'}}\n\\end{equation*}\n\nwhere $\\theta = \\{\\bmu_k, \\Sigma_k, \\pi_k\\}$ are the learned parameters and $s_k = P(y = k | \\bx)$ is as above.\n\n\\subsection{Naive Bayes}\nIf the features are discrete, we can assume they are independent (big, very wrong assumption) and fit a model like this:\n\n\\begin{equation*}\n    P(\\bx | y = k, \\theta) = \\prod_d P(x_d | y = k; \\theta)\n        = \\prod_d \\theta_{d, k}^{x_d} (1 - \\theta_{d, k})^{1 - x_d}\n\\end{equation*}\n\nHere $\\theta_{d,k} = P(x_d = 1| y = k)$.\n\n\\vskip 0.1in\nWe can also make the Normal Model above \"naive\" by making the covariance matrices diagonal, ignoring dependence (covariances) between features.\n\n\\begin{figure}\n    \\centering\n    \\begin{subfigure}{0.4\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{figures/normal_class}\n        \\label{fig:normal_class}\n        \\caption{A normal classifier}\n    \\end{subfigure}\n    %\n    \\centering\n    \\begin{subfigure}{0.4\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{figures/normal_class_naive}\n        \\label{fig:normal_class_naive}\n        \\caption{A \"naive\" normal classifier. The multivariate Gaussian has a diagonal covariance, hence it can only stretch, but not rotate.}\n    \\end{subfigure}\n\\end{figure}\n\n\\section{Logistic Regression}\n\\subsection{Gradient Descent}\nFrom the chain rule, we have $\\frac{dw}{dt} = w_x \\frac{dx}{dt} + w_y \\frac{dy}{dt} + w_z \\frac{dz}{dt}$. In vector notation, this becomes $\\frac{dw}{dt} = \\nabla w \\cdot \\frac{d\\vf{r}}{dt}$, where $\\vf{r}$ is a curve in 3 dimensions. Thus:\n\n\\begin{equation*}\n    \\nabla w = \\langle w_x, w_y, w_z \\rangle\n\\end{equation*}\n\nThe gradient vector's values depend on the point where we evaluate it. It is also perpendicular to a level surface $w = c$ by the following proof:\n\\vskip 0.1in\n\nProof: For any curve $\\vec{r} = \\vec{r}(t)$ that is inside the level surface $w = c$, the velocity is in the tangent plane. This follows from the fact that derivatives are tangents to the function curve from 1-D calculus. Then by the chain rule above $\\frac{dw}{dt} = \\nabla w \\cdot \\frac{d\\vec{r}}{dt} = \\frac{dc}{dt} = 0$. Hence the gradient is perpendicular to the surface.\n\\vskip 0.1in\n\n\\textbf{Directional derivatives}. These are derivatives in the direction of some vector $\\hat{\\mathbf{u}}$. Treat $\\frac{\\partial w}{\\partial x}$ as the derivative along the $x$-axis. In other words, this is a slice of the function as given by the $x$-plane. Now let's have a curve $\\vec{r}(t)$, such that it has a \\textit{unit} velocity $\\hat{\\mathbf{u}} = \\langle a, b \\rangle$. We can assume $|\\hat{\\mathbf{u}}| = 1$, since we are only interested in the direction. 
In other words:\n\n\\begin{equation*}\n    \\begin{cases}\n        x(s) &= x_0 + as \\\\\n        y(s) &= y_0 + bs\n    \\end{cases}\n\\end{equation*}\n\nNow $s$ is the change in the direction of the curve. We can then ask what $\\frac{\\partial w}{\\partial s}$ is. Well, by the chain rule:\n\n\\begin{equation*}\n    \\frac{\\partial w}{\\partial s} = \\nabla w \\cdot \\frac{d\\vf{r}}{ds} = \\nabla w \\cdot \\hat{\\mathbf{u}}\n\\end{equation*}\n\nBut from algebra we know:\n\n\\begin{equation*}\n    \\frac{\\partial w}{\\partial s} = \\nabla w \\cdot \\hat{\\mathbf{u}} = |\\nabla w| |\\hat{\\mathbf{u}}| \\cos{\\theta} = |\\nabla w| \\cos{\\theta}\n\\end{equation*}\n\nThis is maximized when $\\cos{\\theta} = 1$, i.e. when $\\hat{\\mathbf{u}}$ points in the direction of $\\nabla w$. Hence the gradient points in the direction of steepest increase.\n\n\\subsection{So what?}\nPatience. This can be applied to machine learning with the following idea. We normally define a loss or cost function $L$ (or $J$ as often seen in statistics). This is a function of our weights/parameters. We usually aim to minimize said cost function. Well, if we know the direction of its steepest increase, $\\nabla_{\\vf{w}} L$, we know the direction of its steepest decrease: $-\\nabla_{\\vf{w}} L$. Hence we can take small steps in that direction:\n\n\\begin{equation*}\n    \\vf{w} \\leftarrow \\vf{w} - \\eta \\nabla_{\\vf{w}} L\n\\end{equation*}\n\n\\subsection{Logistic Regression... finally}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.4\\textwidth]{figures/600px-Logistic-curve.png}\n    \\caption{Logistic Sigmoid (from wikipedia)}\n    \\label{fig:sigmoid}\n\\end{figure}\n\nWe want to classify things, which we can interpret as computing the probability $P(y = 1 | \\bx, \\bw)$. Enter the sigmoid:\n\n\\begin{equation*}\n    f(\\vf{x}; \\vf{w}) = \\sigma(\\vf{w}^T \\vf{x}) = \\frac{1}{1 + e^{-\\vf{w}^T \\vf{x}}}\n\\end{equation*}\n\nSee \\autoref{fig:sigmoid}. We also need to define our loss function. We define another \\textbf{super-duper important} concept in ML: the likelihood of the data given the weights:\n\n\\begin{align*}\n    \\text{Likelihood} &= P(\\mathcal{D} | \\vf{w}) = \n        P(\\vf{x}^{(1)}, y^{(1)}, \\dots, \\vf{x}^{(N)}, y^{(N)}\\ |\\ \\vf{w}) \\\\\n    &= \\underbrace{\\prod_i P(\\vf{x}^{(i)}, y^{(i)} |\\ \\vf{w})}_{\\text{\\blue{independent x's}}} = \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w}) P(\\vf{x}^{(i)}\\ |\\ \\vf{w}) \\\\\n    &= \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w}) \\underbrace{P(\\vf{x}^{(i)})}_{\\text{\\blue{ignore weights}}} = \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w})\n\\end{align*}\n\nWe \\blue{ignore the weights}, since our model doesn't tell us anything about the input distribution. I.e. no matter the weights, the probability of having $\\bx$ will remain $1$. Another way of saying this is that our model is \\textbf{discriminative} as opposed to \\textbf{generative}. We don't \"generate\" anything, everything has a probability $1$ of being generated. \\footnote{\\textit{I know this is sloppy but I've looked and I haven't found a better (mathematically rigorous) explanation for this. Suggestions are welcome.}}\n\\vskip 0.1in\n\nWe often minimize the negative \\textbf{log-likelihood}, in this case primarily because it transforms the product into a sum. A minimal sketch of the resulting update follows. 
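\n\nA hedged sketch of batch gradient descent for logistic regression with labels $y \\in \\{0, 1\\}$ (ours, not the course's code); the gradient of the negative log-likelihood is $X^T(\\sigma(X\\vf{w}) - \\vf{y})$:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(a):\n    return 1.0 / (1.0 + np.exp(-a))\n\ndef fit_logreg(X, y, lr=0.1, steps=1000):\n    # Batch gradient descent on the negative log-likelihood.\n    w = np.zeros(X.shape[1])\n    for _ in range(steps):\n        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)\n    return w\n\\end{verbatim}\n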
With some careful math, the gradient of the negative log-likelihood works out to $\\sum_i \\big(\\sigma(\\vf{w}^T \\vf{x}^{(i)}) - y^{(i)}\\big) \\vf{x}^{(i)}$, which is exactly what the sketch above uses to fit the model.\n\n\\subsection{Softmax Regression}\nIf there are $K$ classes, we can one-hot encode them as:\n\n\\begin{equation*}\n    \\vf{y} = [0\\ 0\\ 0\\ \\dots\\ 0\\ 1\\ 0\\ \\dots\\ 0]\n\\end{equation*}\n\nwhere we have a $1$ for the target class. We fit $K$ weight vectors, $\\vf{w}^{(k)}$, such that our output contains $K$ probabilities, one for each class:\n\n\\begin{equation*}\n    f_k = \\frac{\n        e^{(\\vf{w}^{(k)})^T \\vf{x}}\n    }{\n        \\sum_{k'} e^{(\\vf{w}^{(k')})^T \\vf{x}}\n    }\n\\end{equation*}\n\nwhere we normalize by the sum of all outputs so that we get a proper probability distribution. We can use batch gradient descent to minimize the negative log-likelihood. The gradient of the log-likelihood is then:\n\n\\begin{align*}\n    \\nabla_{\\vf{w}^{(k)}}\\ \\text{LL} &= \\nabla_{\\vf{w}^{(k)}} \\log \\prod_n P(y^{(n)}_c = 1\\ |\\ \\vf{x}^{(n)}, W) \\\\\n            &= \\sum_n \\nabla_{\\vf{w}^{(k)}} \\log P(y^{(n)}_c = 1\\ |\\ \\vf{x}^{(n)}, W) \\\\ \n            &= \\sum_n (\\vf{y}_k^{(n)} - f_k(\\vf{x}^{(n)})) \\vf{x}^{(n)}\n\\end{align*}\n\nwhere $c$ is the index of the correct class for example $n$.\n\n\\subsection{Robust Regression}\nSometimes we get outliers in the data we cannot filter in the pre-processing stage. We assume that for each observation there is a probability $\\epsilon$ that it isn't really part of the data. Or, in math terms:\n\n\\begin{equation*}\n    P(m)\\ =\\ \\text{Bernoulli}(m; 1 - \\epsilon) =\n        \\begin{cases}\n            1 - \\epsilon & m = 1 \\\\\n            \\epsilon     & m = 0\n        \\end{cases}\n\\end{equation*}\n\nthen we pick a random label when $m = 0$:\n\n\\begin{equation}\n    P(y = 1\\ |\\ \\vf{x}, \\vf{w}, m) =\n        \\begin{cases}\n            \\sigma(\\vf{w}^T\\vf{x}) & m = 1 \\\\\n            \\frac{1}{2}              & m = 0\n        \\end{cases}\n\\end{equation}\n\nThe likelihood for a single example becomes:\n\n\\begin{align*}\n    P(y = 1\\ |\\ \\vf{x}, \\vf{w}) &= \\sum_m \n        P(y = 1, m\\ |\\ \\vf{x}, \\vf{w}) \\\\\n    &= \\sum_m P(y = 1\\ |\\ \\vf{x}, \\vf{w}, m) P(m) \\\\\n    &= (1 - \\epsilon) \\sigma(\\vf{w}^T\\vf{x}) + \\frac{\\epsilon}{2}\n\\end{align*}\n\nWe can then derive the gradient of this and maximize the likelihood. We can treat $\\epsilon$ as a hyperparameter and tune it on a validation set, or set $\\epsilon = \\sigma(b)$ and optimize the unconstrained $b = \\text{logit}(\\epsilon) = \\log \\frac{\\epsilon}{1 - \\epsilon}$ directly.\n\n\n\\section{Neural Networks}\n\\label{sec:nn_intro}\n\\textit{This section is taken from my Bachelor's thesis, so it might not read very well in a summary kind of way.}\n\\vskip 0.1in\n\nNeural networks are universal function approximators inspired by biological neurons (e.g. \\cite[Chapter~6]{goodfellow_deep_2016}). The main building blocks of neural networks are called \\textit{neurons} and the networks are often drawn as directed graphs, as in figure \\ref{fig:neural_net}. This graph illustrates the \\textit{flow} of the network, from the \\textit{inputs} $x_1, x_2, x_3$ to the \\textit{output} $y$. Each circle in the figure is called a neuron and each of the three vertical slices is called a layer.
The layers that are not the input or output layers are called \\textit{hidden} layers.\n\n\\begin{figure}\n    \\centering\n    \\begin{tikzpicture}\n        \\tikzset{myptr/.style={decoration={markings,mark=at position 1 with %\n        {\\arrow[scale=2,>=stealth]{>}}},postaction={decorate}}}\n    \n        % left row\n        \\draw (0, 0) circle (1) node {$x_3$};\n        \\draw (0, 2.5) circle (1) node {$x_2$};\n        \\draw (0, 5) circle (1) node {$x_1$};\n        \n        % middle row\n        \\draw (3, 1) circle (1) node {$h_2$};\n        \\draw (3, 4) circle (1) node {$h_1$};\n        \n        % right row\n        \\draw (6, 2.5) circle (1) node {$y$};\n        \n        % lines\n        % to h_2\n        \\draw[myptr] (1, 0) -- (2, 1);\n        \\draw[myptr] (1, 2.5) -- (2, 1);\n        \\draw[myptr] (1, 5) -- (2, 1);\n        % to h_1\n        \\draw[myptr] (1, 0) -- (2, 4);\n        \\draw[myptr] (1, 2.5) -- (2, 4);\n        \\draw[myptr] (1, 5) -- (2, 4);\n        % to y\n        \\draw[myptr] (4, 1) -- (5, 2.5);\n        \\draw[myptr] (4, 4) -- (5, 2.5);\n    \\end{tikzpicture}\n    \n    \\caption[Neural network example]{Neural Network illustration. The inputs are $x_1, x_2, x_3$, the hidden units are $h_1, h_2$ and $y$ is the output.}\n    \\label{fig:neural_net}\n\\end{figure}\n\n\nIn what is called the \\textit{supervised learning} scenario, the neural network is trained to predict the outputs $y$ given the inputs $\\mathbf{x}$ (see next section). If $y$ is real-valued, then the task is called \\textit{regression} and is what we will attempt to do in the context of ABC.\n\\vskip 0.1in\n\nWe can define the transformations from the inputs to the hidden layer and from the hidden units to the output(s) via a system of equations:\n%\n\\begin{align}\n    \\mathbf{h} &= W^{(1)}\\ \\mathbf{x} \\\\\n    y &= W^{(2)}\\ \\mathbf{h}\n\\end{align}\n\nThus we can increase or decrease the size of the input vector in subsequent layers of the network depending on the shapes of the matrices $W^{(1)}$ and $W^{(2)}$.\n\\vskip 0.1in\n\nWe apply a pointwise (element-wise) non-linearity called the \\textit{activation function} after each matrix multiplication, i.e. at each neuron; without it, the model could be collapsed to a single layer:\n\n\\begin{align}\n    \\mathbf{h} &= h^{(1)}(W^{(1)}\\ \\mathbf{x}) \\\\\n    y &= h^{(2)} (W^{(2)}\\ \\mathbf{h})\n\\end{align}\n\nChoices for activation functions $h(\\mathbf{x})$ include hyperbolic tangent ($\\tanh$), sigmoids ($\\sigma(x)$) and rectified linear units (relu):\n\n\\begin{align}\n    \\text{tanh}(x) &= \\frac{e^x - e^{-x}}{e^x + e^{-x}} \\\\\n    \\sigma(x) &= \\frac{1}{1 + e^{-x}} \\\\\n    \\text{relu}(x) &= \\text{max}(0, x)\n\\end{align}\n\nAdditionally, parametric rectified linear units allow the activation to be non-zero for negative inputs (and hence have a non-zero derivative there):\n\n\\begin{equation}\n    \\text{prelu}(x) = \\begin{cases}\n        x       & \\text{if}\\ x > 0 \\\\\n        a\\ x    & \\text{otherwise}\n    \\end{cases}\n\\end{equation}\n\nThe parameter $a$ is learned alongside the rest of the neural network parameters.\n\\vskip 0.1in\n\nRectified (and parametric) linear units generally learn faster than $\\tanh$ and sigmoid units and are preferred in state-of-the-art applications \\citep{glorot_deep_2011}. One improvement we will make is to replace the previously used $\\tanh$ activations with relu units.\n\\vskip 0.1in\n\nWe can think of the hidden layers of the network as \\textit{learned} feature transformations.
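\n\\vskip 0.1in\nTo make the equations above concrete, here is a minimal numpy sketch of the forward pass for the network in figure \\ref{fig:neural_net} (three inputs, two hidden units, one output), with relu hidden activations. The weight shapes and the random initialization are assumptions for illustration only.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef relu(z):\n    return np.maximum(0.0, z)\n\ndef forward(x, W1, W2):\n    h = relu(W1 @ x)  # hidden layer with pointwise non-linearity\n    return W2 @ h     # linear output layer (see below)\n\nrng = np.random.default_rng(0)\nW1 = rng.normal(size=(2, 3))  # 3 inputs -> 2 hidden units\nW2 = rng.normal(size=(1, 2))  # 2 hidden units -> 1 output\ny = forward(rng.normal(size=3), W1, W2)\n\\end{verbatim}\n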
If the outputs $y$ are real-valued and unbounded, i.e.\\ in a regression task, we normally use a linear activation for the final layer, so we can think of the outputs from the previous hidden layer as features or \\textit{basis} functions for the linear output neuron.\n\n\\begin{equation}\n    y = W^{(K - 1)}\\ \\mathbf{h}^{(K - 1)}\n\\end{equation}\n\nwhere we have $K$ layers, including the output.\n\n\\subsection{Training Neural Networks}\n\\label{sec:nn_train}\n\nTo train the neural network, we use gradient descent with gradients computed by \\textit{backpropagation} \\citep{rumelhart_learning_1985}. More specifically, we use improved update rules on top of it, namely \\textit{Adam} \\citep{kingma_adam:_2014} and \\textit{RMSProp} \\citep{tieleman_t._and_hinton_g._e._lecture_2012}.\n\\vskip 0.1in\n\nBefore we train, we specify a \\textit{loss function}. The loss function gives us a measure of \"how bad\" it is for the network output $\\hat{y}$ to differ from the true output $y$. We then train on a set of examples $\\{\\mathbf{x}^{(i)}, \\mathbf{y}^{(i)}\\}_{i = 1}^{N}$ via backpropagation. For a regression task, we use the mean-squared error loss function:\n\n\\begin{equation}\n    \\mathcal{L}(y, \\hat{y}) = \\frac{1}{N} \\sum_{i = 1}^{N} (y^{(i)} - \\hat{y}^{(i)})^2\n\\end{equation}\n\nBackpropagation computes the derivative of the loss function with respect to the weight matrices $W^{(i)}$ for a \\textit{batch} of training examples from the training set. We then move the weights in the direction of the negative gradient, thus minimizing the loss function. The algorithm takes as a parameter a \\textit{learning rate}, which can be tuned. More advanced update rules, which often perform better, include Adam and RMSProp, as mentioned above.\n\n\\begin{algorithm}\n    \\caption{Basic Backpropagation Algorithm}\n    \\begin{algorithmic}[1]\n        \\State initialize learning rate $\\eta$\n\n        \\While{Error is still high (see Early Stopping)}\n            \\State $\\mathbf{w} \\gets \\mathbf{w} - \\eta \\nabla_{\\mathbf{w}} \\mathcal{L}$\n            \\Comment Update weights\n        \\EndWhile\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Modern Deep Neural Networks}\n\\label{subsec:nn_modern}\n\nUsing the simple backpropagation algorithm and neural networks introduced so far does not scale well to larger models and datasets \\citep[Chapter 7]{goodfellow_deep_2016}. We can employ a number of techniques to improve the learning procedure. We describe here Batch Normalization \\citep{ioffe_batch_2015}, Early Stopping \\citep{prechelt_early_2012} and Dropout \\citep{srivastava_dropout:_2014}, as well as \\textit{regularization} in general.\n\\vskip 0.1in\n\nA problem with larger networks is \\textit{overfitting}. Normally, we train the network on a training set and then evaluate the performance on a separate \\textit{validation set}, unseen during training. If the network learned the underlying structure of the data, then we say it \\textit{generalizes} and we expect the loss on the validation set to be comparable to that on the training set. Networks with more free parameters (weights) as compared to the number of training examples can, however, learn to \"remember\" quirks in the training instances instead of learning such an underlying structure.
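\n\\vskip 0.1in\nA simple way to catch this in practice, formalized as Early Stopping in the next paragraphs, is to track the loss on the validation set while training. The following is a minimal sketch; \\texttt{train\\_epoch} and \\texttt{val\\_loss} are hypothetical callables standing in for a real training framework, and the patience value is an arbitrary choice.\n\n\\begin{verbatim}\ndef train_with_early_stopping(train_epoch, val_loss, patience=10):\n    # train_epoch() runs one epoch and returns the current weights;\n    # val_loss() returns the loss on the held-out validation set.\n    best_loss, best_weights, bad_epochs = float('inf'), None, 0\n    while bad_epochs < patience:\n        weights = train_epoch()\n        loss = val_loss()\n        if loss < best_loss:\n            best_loss, best_weights, bad_epochs = loss, weights, 0\n        else:\n            bad_epochs += 1  # validation loss got worse\n    return best_weights      # revert to the best epoch so far\n\\end{verbatim}\n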
In this thesis, in the context of ABC, we have the benefit of procedurally generating the data, so we do not have issues with the number of examples; we can have as many as needed.\n\\vskip 0.1in\n\n\\textbf{Early Stopping} is a relatively easy way to prevent overfitting \\citep{prechelt_early_2012}. Consider figure \\ref{fig:overfitting}. The training error normally approaches a very small number if the model has enough capacity. The validation error, however, usually starts increasing after it reaches a (sometimes local) minimum. If we continue to train, we no longer learn useful information, but rather quirks in the training set. Thus early stopping monitors the validation error at each epoch and stops training if the validation error continues to \\textit{increase} for a number of epochs. We then revert to the best epoch so far.\n\\vskip 0.1in\n\n\\textbf{Batch Normalization} forces the activations of different layers in the network to be normalized \\citep{ioffe_batch_2015}. This has been shown to help with poor initialization strategies and large input values by forcing the outputs of layers to take on the same distribution. Using batch normalization after each (fully-connected) layer helps the network by eliminating the need to adapt to the shifting output distributions of earlier layers. The following transformation is used after applying the activation functions on each layer:\n\n\\begin{align}\n    \\hat{z}_i &= \\frac{h_i - \\mu_i}{\\sqrt{\\sigma_i^2 + \\epsilon}} \\\\\n    \\tilde{h}_i &= \\gamma_i\\ \\hat{z}_i + \\beta_i\n\\end{align}\n\nHere $h_i$ are the outputs of each hidden unit and $\\mu_i$ and $\\sigma_i$ are the minibatch mean and standard deviation, computed over training examples. $\\gamma_i$ and $\\beta_i$ are \\textit{learned} scaling parameters and $\\epsilon$ is a small value to prevent division by 0.\n\\vskip 0.1in\n\n\\textbf{Dropout} is another simple regularization technique \\citep{srivastava_dropout:_2014} that can prevent overfitting. During training we disconnect each neuron with a certain probability. This forces the other neurons to learn when some of the features are not present, thus making the network more flexible. At test time all the neurons are used and the outputs are scaled accordingly.\n\\vskip 0.1in\n\n\\textbf{L2 Regularization} is another way to prevent overfitting. We add a penalty term to the loss function for the entire neural network, consisting of the sum of the squared weights of each layer. This encourages the weights of the network to stay small, which keeps the gradient updates small as well and prevents the activations from \\textit{exploding}.\n\n\\begin{equation}\n    \\mathcal{\\hat{L}}(y, \\hat{y}, \\{W^{(i)}\\}_{i = 1}^{K}) = \\frac{1}{N} \\sum_{n = 1}^{N} (y^{(n)} - \\hat{y}^{(n)})^2 + \\lambda \\sum_{i,j,k} (W_{i,j}^{(k)})^2\n\\end{equation}\n\nwhere $\\lambda$ is the regularization strength.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.7\\textwidth]{figures/overfitting_early_stopping}\n    \\caption[Overfitting illustration]{Neural Network overfitting illustration. The training error almost reaches $0$, but the validation error starts to increase as the network begins to learn structure present \\textit{only} in the training set.}\n    \\label{fig:overfitting}\n\\end{figure}
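\n\\vskip 0.1in\nA tiny sketch of the regularized loss above; the list of weight matrices \\texttt{Ws} and the strength \\texttt{lam} are assumed inputs.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef l2_penalty(Ws, lam):\n    # lam * sum of squared entries over all weight matrices\n    return lam * sum(np.sum(W ** 2) for W in Ws)\n\ndef regularized_mse(y, y_hat, Ws, lam):\n    return np.mean((y - y_hat) ** 2) + l2_penalty(Ws, lam)\n\\end{verbatim}\n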
\\textbf{Note}: for all neural network models used in this thesis, both the input data and the targets were standardized. This helps learning by avoiding large values for any neuron, and therefore large gradients that become unstable:\n\n\\begin{equation}\n    \\hat{x}\\ =\\ \\frac{x - \\langle x \\rangle}{\\sqrt{\\text{Var}[x]}}\n\\end{equation}\n\n\\section{Autoencoders and PCA}\n\\subsection{Autoencoders}\nAutoencoders learn transformations or representations of data. The basic idea is to create a bottleneck somewhere in the middle of the network, so that the network learns to represent the data in a smaller number of dimensions. Thus autoencoders are functions that return the input approximately:\n\n\\begin{equation}\n    f(\\bx) \\approx \\bx\n\\end{equation}\n\nWe can use a neural network with a hidden size less than the input, like so:\n\n\\begin{align*}\n    \\vf{h} &= g^{(1)}(W^{(1)} \\bx + \\vf{b}^{(1)}) \\\\\n    \\vf{f} &= g^{(2)}(W^{(2)} \\vf{h} + \\vf{b}^{(2)})\n\\end{align*}\n\nwhere $W^{(1)}$ is $K \\times D$ and $K \\ll D$. We can additionally impose a regularization penalty to further constrain the network. Sparse autoencoders, for instance, impose an L1 penalty, which drives weights exactly to zero because its gradient has constant magnitude, unlike an L2 penalty, whose pull shrinks as the weight shrinks.\n\\vskip 0.1in\n\nDenoising autoencoders learn to reconstruct the data when some of the inputs are missing, swapped, or corrupted with added noise. Those are useful for removing noise from real-world images. \n\n\\subsection{Principal Component Analysis (PCA)}\nWe compute the covariance matrix of the data $\\Sigma = \\frac{1}{N} X^T X$. We want to reduce dimensionality (from $D$ to $K$) in a way that maximizes the variance along the $K$ new axes. This can be done by computing the eigenvectors of the covariance matrix and projecting the data onto the $K$ eigenvectors with the largest eigenvalues. These eigenvectors are also orthogonal to each other; if we keep all dimensions (i.e. project onto all eigenvectors), no information is lost. This process is also equivalent to a linear autoencoder with a square loss function (TODO: proof).\n\\vskip 0.1in\n\nOften we first standardize the data (i.e. $\\frac{x - \\mu}{\\sigma}$). We can also omit the $\\frac{1}{N}$ in the calculation above: scaling $\\Sigma$ changes its eigenvalues but not its eigenvectors, so the principal directions are unaffected. \n\n\\subsection{Singular Value Decomposition}\nSVD decomposes an $N\\times D$ matrix $X$ into a product of an $N\\times K$ matrix $U$, a diagonal $K\\times K$ matrix $S$ and a $K\\times D$ matrix $V^T$:\n\n\\begin{equation*}\n    X \\approx USV^T\n\\end{equation*}\n\nThe columns of $U$ contain the eigenvectors of $X X^T$, and the rows of $U$ and the columns of $V^T$ give $K$-dimensional embeddings of the rows and columns of $X$, respectively.\n\n\\subsection{Probabilistic PCA}\nWe want to embed the data $X = \\{\\vf{x}^{(n)}\\}$ into a $K$-dimensional space, such that $K < D$. Let's assume that each point is embedded into a latent variable $\\vf{z}^{(n)}$. We assume that the latent variables are normally distributed:\n\n\\begin{equation*}\n    \\vf{z}^{(n)} \\sim \\mathcal{N}(\\vf{0}, \\mathbb{I})\n\\end{equation*}\n\nWe also assume that the original data is generated via a projection:\n\n\\begin{equation}\n    \\vf{x}^{(n)}\\ |\\ \\vf{z}^{(n)} \\sim \\mathcal{N}(W\\vf{z}^{(n)}, \\sigma^2 \\mathbb{I})\n\\end{equation}\n\nwhere the matrix $W$ is $D\\times K$ and $\\sigma$ is known as the noise term. We are interested in estimating $W$ and $\\sigma$. Marginalizing out the $\\vf{z}^{(n)}$, we arrive at:\n\n\\begin{equation}\n    \\vf{x}^{(n)} \\sim \\mathcal{N}(\\vf{0}, WW^T + \\sigma^2 \\mathbb{I})\n\\end{equation}\n\nThis is then fitted via e.g. maximum likelihood.
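\n\\vskip 0.1in\nCircling back to classical PCA, here is a minimal sketch via the eigendecomposition of the covariance matrix. It assumes the rows of $X$ are datapoints that have already been centered/standardized.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef pca(X, K):\n    Sigma = X.T @ X / X.shape[0]        # covariance matrix\n    vals, vecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order\n    top = vecs[:, ::-1][:, :K]          # K leading eigenvectors\n    return X @ top                      # project onto the new axes\n\\end{verbatim}\n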
\\section{Bayesian Modeling}\nWe can adopt a probabilistic model for regression. Our training data consists of pairs $(\\bx, y)$. We can assume a simple model:\n\n\\begin{equation*}\n    P(y\\ |\\ \\bx, \\bw) = \\mathcal{N}(y; f(\\bx, \\bw), \\sigma_y^2)\n\\end{equation*}\n\nwhere $f(\\bx, \\bw)$ is any function. A quadratic regression would have a quadratic expansion, for example. We can then fit using maximum likelihood, or in practice by minimizing the negative log-likelihood:\n\n\\begin{align*}\n    - \\log P(\\by \\giv X, \\bw) &= - \\sum_n \\log P(y^{(n)} \\giv \\bx^{(n)}, \\bw) \\\\\n        &= \\frac{1}{2 \\sigma_y^2} \\sum_n \\big( y^{(n)} - f(\\bx^{(n)}, \\bw) \\big)^2 + \\frac{N}{2} \\log(2 \\pi \\sigma_y^2)\n\\end{align*}\n\nwhich is equivalent to fitting by least squares. Neat.\n\n\\section{Bayesian Regression}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{figures/bayes_regression}\n    \\caption{Example Bayesian Regression with basis functions. The left graph has fits sampled from the prior. The shaded region in the right graph is centered at the posterior mean with one standard deviation plotted. The red line is the mean fit. The bars on each datapoint represent the noise $\\sigma_y$.}\n    \\label{fig:bayes_regression}\n\\end{figure}\n\nWe need to set up two things: a probabilistic model, like the one above, and a prior belief for our model. A simple way to do this would be as follows:\n\n\\begin{align*}\n    P(\\bw) &= \\mathcal{N}(\\bw; \\vf{0}, \\sigma_w^2 \\mathbb{I}) \\\\\n    P(y\\ |\\ \\bx, \\bw) &= \\mathcal{N}(y; \\bw^T \\phi(\\bx), \\sigma_y^2)\n\\end{align*}\n\nwhere this is a linear regression and $\\phi(\\bx)$ represents any transformation we can apply. We can thus compute our posterior beliefs about the weights $\\bw$ given our training examples (using Bayes' theorem, of course):\n\n\\begin{align*}\n    P(\\bw \\giv \\mathcal{D}) &\\propto P(\\mathcal{D} \\giv \\bw) P(\\bw) \\\\\n        &\\propto \\mathcal{N}(\\bw; \\vf{0}, \\sigma_w^2 \\mathbb{I})\n            \\prod_n \\mathcal{N}(y^{(n)}; \\bw^T \\phi(\\bx^{(n)}), \\sigma_y^2) \\\\\n        &\\propto \\mathcal{N}(\\bw; \\vf{0}, \\sigma_w^2 \\mathbb{I})\\ \n            \\mathcal{N}(\\by; \\Phi \\bw, \\sigma_y^2 \\mathbb{I})\n\\end{align*}\n\nThis is also a Gaussian and can be derived in closed form (see Murphy). If $\\bw_0 = \\bf{0}$ for the prior and $V_0 = \\sigma_w^2 \\mathbb{I}$, then:\n\n\\begin{align*}\n    V_N &= \\sigma_y^2(\\sigma_y^2 V_0^{-1} + \\Phi^T \\Phi)^{-1} \\\\\n    \\bw_N &= V_N V_0^{-1} \\bw_0 + \\frac{1}{\\sigma_y^2} V_N \\Phi^T \\by\n\\end{align*}\n\nwhere $\\bw_N$ is the posterior mean and $V_N$ the posterior covariance.\n\n\\subsection{Test Predictions}\nSo we have our posterior over $\\bw$. Now we want, conditioned on the training data $\\mathcal{D} = \\{\\bx^{(n)}, y^{(n)}\\}$, to give predictions for a new test point $\\bx^*$ with output $y^*$. We introduce our weights using the sum rule:\n\n\\begin{equation*}\n    P(y^* \\giv \\bx^*, \\mathcal{D}) = \\int P(y^*, \\bw \\giv \\bx^*, \\mathcal{D}) \\ \\de\\bw\n\\end{equation*}\n\nNow we split the above using the product rule:\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\mathcal{D}) &= \\int P(y^* \\giv \\bw, \\bx^*, \\mathcal{D})\n        \\ P(\\bw \\giv \\bx^*, \\mathcal{D}) \\ \\de\\bw \\\\\n    &= \\int P(y^* \\giv \\bw, \\bx^*)\\ \n        P(\\bw \\giv \\mathcal{D}) \\ \\de\\bw\n\\end{align*}\n\nwhere $\\bw$ does not depend on the test input $\\bx^*$, and the test prediction doesn't depend on the training data if we know the parameters $\\bw$.
Now, we know the posterior for $\\bw$:\n\n\\begin{equation*}\n    P(\\bw \\giv \\mathcal{D}) = \\mathcal{N}(\\bw; \\bw_N, V_N)\n\\end{equation*}\n\nFor linear models the above integral is Gaussian. Anticipating that, we can write:\n\n\\begin{align*}\n    y^* &= f(\\bx^*) + \\nu = \\bw^T \\bx^* + \\nu \\\\\n    \\nu &\\sim \\mathcal{N}(0, \\sigma_y^2)\n\\end{align*}\n\nThis matches what we previously described as:\n\n\\begin{equation*}\n    P(y^* \\giv \\bw) = \\mathcal{N}(y^*; \\bw^T \\bx^*, \\sigma_y^2)\n\\end{equation*}\n\nWe can thus show that:\n\n\\begin{equation*}\n    P(y^* \\giv \\mathcal{D}, \\bx^*) =\n        \\mathcal{N}(y^*; \\bw_N^T \\bx^*, (\\bx^*)^T V_N \\bx^* + \\sigma_y^2)\n\\end{equation*}\n\n\\subsection{Decision Making}\nThe above answer is a distribution over $y^*$. Sometimes we need to output a single prediction, a guess. Thus we need to write a loss function $L(y, y^*)$, which says how bad it is to guess $y^*$ if the actual output is $y$. We can compute the expected loss:\n\n\\begin{equation*}\n    c = \\mathbb{E}_{P(y \\giv \\mathcal{D})}[L(y, y^*)] =\n        \\int L(y, y^*)\\ P(y \\giv \\mathcal{D})\\ \\de y\n\\end{equation*}\n\nFor square loss, $L(y, y^*) = (y - y^*)^2$, we can differentiate the cost $c$ with respect to our guess:\n\n\\begin{equation*}\n    \\frac{\\partial c}{\\partial y^*} =\n        \\mathbb{E}_{P(y \\giv \\mathcal{D})} \\frac{\\partial L(y, y^*)}{\\partial y^*} =\n        \\mathbb{E}_{P(y \\giv \\mathcal{D})} [-2(y - y^*)] =\n        2 (y^* - \\mathbb{E}_{P(y \\giv \\mathcal{D})} [y])\n\\end{equation*}\n\nSetting this to $0$, the optimal guess is the posterior mean, $y^* = \\mathbb{E}_{P(y \\giv \\mathcal{D})}[y]$. Thus if we only need the guess, using Bayesian regression does not change much at all. In fact, the posterior mean can be shown to correspond to an L2-regularized standard linear regression.\n\n\\section{Bayesian Logistic Regression}\nFor logistic regression, the likelihood function (as above) is defined as:\n\n\\begin{align*}\n    P(\\data \\giv \\bw) &=\n        P(\\vf{x}^{(1)}, y^{(1)}, \\dots, \\vf{x}^{(N)}, y^{(N)}\\ |\\ \\vf{w}) \\\\\n    &= \\underbrace{\\prod_i P(\\vf{x}^{(i)}, y^{(i)} |\\ \\vf{w})}_{\\text{\\blue{independent x's}}} = \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w})\\ P(\\vf{x}^{(i)}\\ |\\ \\vf{w}) \\\\\n    &= \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w}) \\underbrace{P(\\vf{x}^{(i)})}_{\\text{\\blue{ignore weights}}} = \\prod_i P(y^{(i)} |\\ \\vf{x}^{(i)}, \\vf{w})\n\\end{align*}\n\nWhen fitting via (penalized) maximum likelihood, we do something like:\n\n\\begin{equation*}\n    \\bw^* = \\argmax_{\\bw} [ \\log\\ P(\\by \\giv X, \\bw) -  \\bw^T \\bw ]\n\\end{equation*}\n\nor, as most optimizers are built for minimizing functions, equivalently minimize the negative log-likelihood. Bayesian logistic regression begins the same way as Bayesian linear regression:\n\n\\begin{equation*}\n    P(\\bw \\giv \\data) = \\frac{P(\\data \\giv \\bw) P(\\bw)}{P(\\data)} \\propto P(\\data \\giv \\bw) P(\\bw)\n\\end{equation*}\n\nTo make predictions (for a datapoint $y^*, \\bx^*$), we can use probability theory:\n\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\data) &= \\int P(y^*, \\bw \\giv \\bx^*, \\data) \\de\\bw \\\\\n        &= \\int P(y^* \\giv \\bx^*, \\bw) P(\\bw \\giv \\data) \\de\\bw\n\\end{align*}\n\nThe problem, of course, is that we can't really compute the posterior $P(\\bw \\giv \\data)$. It is not Gaussian as it was in the linear regression case.
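\n\\vskip 0.1in\nFor contrast, the linear-Gaussian case from the previous section really is available in closed form. A minimal sketch using the $\\bw_N$, $V_N$ formulas above with $\\bw_0 = \\mathbf{0}$ (the basis expansion \\texttt{Phi} and the noise levels are assumed given):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef posterior(Phi, y, sigma_w, sigma_y):\n    # w_N and V_N from the Bayesian Regression section, with w_0 = 0\n    D = Phi.shape[1]\n    V0_inv = np.eye(D) / sigma_w ** 2\n    VN = sigma_y ** 2 * np.linalg.inv(sigma_y ** 2 * V0_inv + Phi.T @ Phi)\n    wN = VN @ Phi.T @ y / sigma_y ** 2\n    return wN, VN\n\ndef predict(phi_star, wN, VN, sigma_y):\n    # predictive mean and variance at a test point\n    return wN @ phi_star, phi_star @ VN @ phi_star + sigma_y ** 2\n\\end{verbatim}\n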
Thus we must be clever. If we assume a Gaussian prior, the posterior is:\n\n\\begin{align*}\n    P(\\bw) &= \\normal(\\bw; \\mathbf{0}, \\sigma_w^2 \\mathbb{I}) \\\\\n    P(\\bw \\giv \\data) &\\propto \\normal(\\bw; \\mathbf{0}, \\sigma_w^2 \\mathbb{I})\n        \\prod_n \\sigma(\\bw^T \\bx^{(n)} z^{(n)})\n\\end{align*}\n\nwhere $z^{(n)} \\in \\{ +1, -1 \\}$ encodes the labels instead of $\\{0, 1\\}$. If we notice that $\\sigma(-a) = 1 - \\sigma(a)$, then the above follows. Either way, this product of sigmoids and a Gaussian is nasty, and the predictive integral is very slow to evaluate numerically. We can use importance sampling to make predictions:\n\n\\subsection{Importance Sampling}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.6\\textwidth]{figures/bayes_logistic}\n    \\caption{Bayesian Logistic Regression via Importance Sampling}\n    \\label{fig:bayes_log_importance}\n\\end{figure}\n\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\data) &= \\mathbb{E}_{P(\\bw \\giv \\data)}[P(y^* \\giv \\bx^*, \\bw)] \\\\\n        &\\approx \\frac{1}{S} \\sum_{s = 1}^{S} P(y^* \\giv \\bx^*, \\bw^{(s)})\n\\end{align*}\n\nwhere $\\bw^{(s)} \\sim P(\\bw \\giv \\data)$. Sampling from the posterior $P(\\bw \\giv \\data)$ is hard. We can do many things, like MCMC, but perhaps the easiest is importance sampling:\n\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\data) &= \\int P(y^* \\giv \\bx^*, \\bw) P(\\bw \\giv \\data)\\ \\de\\bw \\\\\n        &= \\int P(y^* \\giv \\bx^*, \\bw) \\frac{P(\\data \\giv \\bw) P(\\bw)}{P(\\data)}\\  \\de\\bw \\\\\n    &= \\mathbb{E}_{P(\\bw)} [P(y^* \\giv \\bx^*, \\bw) \\frac{P(\\data \\giv \\bw)}{P(\\data)}] \\\\\n    &= \\mathbb{E}_{P(\\bw)} [P(y^* \\giv \\bx^*, \\bw) \\frac{P(\\data \\giv \\bw)}{\\int P(\\data \\giv \\bw) P(\\bw)\\ \\de\\bw}] \\\\\n    &= \\mathbb{E}_{P(\\bw)} [P(y^* \\giv \\bx^*, \\bw) \\frac{P(\\data \\giv \\bw)}{\\mathbb{E}_{P(\\bw)}[P(\\data \\giv \\bw)]}]\n\\end{align*}\n\nNow we can do importance sampling in the denominator and in the entire expression, perhaps even reusing the same samples from $P(\\bw)$, the prior, in both. The end result looks like this:\n\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\data) &\\approx\n        \\frac{1}{S} \\sum_s P(y^* \\giv \\bx^*, \\bw^{(s)}) \\frac{\n            P(\\data \\giv \\bw^{(s)})\n        }{\n            \\frac{1}{S'} \\sum_{s'} P(\\data \\giv \\bw^{(s')})\n        }\n\\end{align*}\n\nwhere $\\bw^{(s)} \\sim P(\\bw)$. The bias of this, of course, is humongous. \\autoref{fig:bayes_log_importance} shows an example result of this procedure.\n\\vskip 0.1in\n\nAnother thing we can do is:\n\n\\subsection{Maximum a Posteriori}\nwhich is just this:\n\n\\begin{equation*}\n    \\bw^* = \\argmax_\\bw[\\log P(\\bw \\giv \\data)] =\n        \\argmax_\\bw[\\log P(\\data \\giv \\bw) P(\\bw)]\n\\end{equation*}\n\ncf. maximum likelihood. Really we only added the prior here. Onwards to:\n\n\\subsection{Laplace Approximation}\nLet's assume that the posterior $p(\\bw \\giv \\data)$ is Gaussian. That's not a very good assumption, putting it mildly. We can try matching the best Gaussian to it. We first find the most probable value of $\\bw$ under the posterior (the posterior mode), which we label $\\bw^*$, as usual:\n\n\\begin{align*}\n    \\bw^* &= \\argmax_\\bw P(\\bw \\giv \\data) \\\\\n        &= \\argmax_\\bw\\ \\log \\frac{P(\\data, \\bw)}{P(\\data)} \\\\\n        &= \\argmax_\\bw\\ \\log P(\\data | \\bw) P(\\bw)\n\\end{align*}\n\nThe last line follows from the fact that $P(\\data)$ is independent of $\\bw$, i.e. a constant in the optimization problem.
We also can't really evaluate $P(\\data)$, since it's a nasty integral. So far, so good. We can define the energy as follows:\n\n\\begin{align*}\n    E(\\bw) &= -\\log P(\\bw, \\data) \\\\\n    \\bw^* &= \\argmin_\\bw E(\\bw)\n\\end{align*}\n\nThen we compute the Hessian, the matrix of second derivatives. This gives us the curvature of the function:\n\n\\begin{equation*}\n    H_{i, j} = \\frac{\n        \\partial^2 E(\\bw^*) \n    }{\n        \\partial w_i\\ \\partial w_j\n    }\n\\end{equation*}\n\nThe energy of a Gaussian distribution would be, up to a constant:\n\n\\begin{equation*}\n    E_\\gauss (w) = \\frac{(w - \\mu)^2}{2\\sigma^2}\n\\end{equation*}\n\nThe minimum is at $w^* = \\mu$ and the second derivative is $H = 1 / \\sigma^2$. Generalizing to higher dimensions, we get:\n\n\\begin{equation*}\n    E_\\normal(\\bw) = \\frac{1}{2} (\\bw - \\mathbf{\\mu})^T \\Sigma^{-1} (\\bw - \\mathbf{\\mu})\n\\end{equation*}\n\nwhere $\\bw^* = \\mathbf{\\mu}$ and $H = \\Sigma^{-1}$, or $\\Sigma = H^{-1}$. Thus, matching energies, we get:\n\n\\begin{equation*}\n    P(\\bw \\giv \\data) \\approx \\normal(\\bw; \\bw^*, H^{-1})\n\\end{equation*}\n\nTo approximate $P(\\data)$, we can do:\n\n\\begin{align*}\n    P(\\bw \\giv \\data) &= \\frac{P(\\bw, \\data)}{P(\\data)}\n        \\approx \\normal(\\bw; \\bw^*, H^{-1}) =\n        \\frac{|H|^{1/2}}{(2\\pi)^{D/2}} \\exp(-\\frac{1}{2}\n            (\\bw - \\bw^*)^{\\text{T}} H (\\bw - \\bw^*)) \n\\end{align*}\n\nThis is true for any $\\bw$, so let's evaluate it at $\\bw^*$, giving us:\n\n\\begin{align*}\n    P(\\bw^* \\giv \\data) &= \\frac{P(\\bw^*, \\data)}{P(\\data)}\n        \\approx \\normal(\\bw^*; \\bw^*, H^{-1}) =\n        \\frac{|H|^{1/2}}{(2\\pi)^{D/2}}\n\\end{align*}\n\nwhich gives us:\n\\begin{equation*}\n    P(\\data) \\approx P(\\bw^*, \\data)\\ |2 \\pi H^{-1}|^{1/2}\n\\end{equation*}\n\n\\subsection{Test Predictions}\nFor a test point $y^*, \\bx^*$, we need to compute:\n\n\\begin{align*}\n    P(y^* \\giv \\bx^*, \\data) &= \\int P(y^*, \\bw \\giv \\bx^*, \\data)\\ \\de\\bw \\\\\n    &= \\int P(y^* \\giv \\bx^*, \\bw)\\ P(\\bw \\giv \\data)\\ \\de\\bw\n\\end{align*}\n\nUsing the Laplace approximation above, $P(\\bw \\giv \\data) \\approx \\normal(\\bw; \\bw^*, H^{-1})$, so we have the following integral:\n\n\\begin{align*}\n    P(y^* = 1 \\giv \\bx, \\data) &\\approx\n        \\int \\sigma(\\bw^\\tran \\bx) \\normal(\\bw; \\bw^*, H^{-1})\\ \\de\\bw \\\\\n    &= \\mathbb{E}_{\\normal(\\bw; \\bw^*, H^{-1})} [\\sigma(\\bw^\\tran \\bx)]\n\\end{align*}\n\nCalling $a = \\bw^\\tran \\bx$ and changing variables:\n\n\\begin{align*}\n    P(a) &= \\normal(a; \\bw^{*\\tran} \\bx, \\bx^\\tran H^{-1} \\bx)\\\\\n    P(y = 1 \\giv \\bx, \\data) &= \n        \\int \\sigma(a)\\ \\normal(a; \\bw^{*\\tran} \\bx, \\bx^\\tran H^{-1} \\bx) \\ \\de a\n\\end{align*}\n\nwhich we can then compute numerically. From Murphy Section 8.4.4.2, we can derive another approximation:\n\n\\begin{align*}\n    P(y = 1 \\giv \\bx, \\data) &\\approx \\sigma(\\kappa\\ \\bw^{*\\tran} \\bx) \\\\\n    \\kappa &= \\frac{\n        1\n    }{\n        \\sqrt{\n            1 + \\frac{\\pi}{8} \\bx^\\tran V \\bx\n        }\n    }\n\\end{align*}\n\nwhere $V = H^{-1}$, the posterior covariance from the Laplace approximation.
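\n\\vskip 0.1in\nA minimal sketch of this last approximation, assuming the mode $\\bw^*$ and the Hessian $H$ have already been found by a MAP optimization:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(a):\n    return 1.0 / (1.0 + np.exp(-a))\n\ndef laplace_predict(x, w_star, H):\n    # P(y = 1 | x, D) via the sigmoid(kappa * w*^T x) approximation\n    V = np.linalg.inv(H)\n    kappa = 1.0 / np.sqrt(1.0 + (np.pi / 8.0) * (x @ V @ x))\n    return sigmoid(kappa * (w_star @ x))\n\\end{verbatim}\n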
\\section{Optimizing Hyperparameters in a Bayesian Setting}\nWe normally prefer simpler models, that is, models that assign a high probability to the training data we actually saw, over models that spread their probability over many possible datasets. To make this precise, we can write:\n\n\\begin{align*}\n    P(\\data \\giv \\model) &= P(X, \\by \\giv \\model) =\n        P(\\by \\giv X, \\model) \\underbrace{P(X \\giv \\model)}_\\text{\\blue{constant}} \\\\\n    &\\propto P(\\by \\giv X, \\model) = \\int P(\\by, \\bw \\giv X, \\model)\\ \\de\\bw =\n        \\int P(\\by \\giv X, \\bw, \\model)\\ P(\\bw \\giv \\model)\\ \\de\\bw\n\\end{align*}\n\nThe probability of the data given the model, i.e. $P(\\data \\giv \\model)$ (up to that constant), is the \\textbf{marginal likelihood} of the model. We can use it to select between different models $\\model_k$.\n\\vskip 0.1in\n\n(\\textit{Yet another example from MLPR}) If we have a linear model with:\n\\begin{align*}\n    P(\\bw \\giv \\sigma_w) &= \\normal(\\bw; \\mathbf{0}, \\sigma_w^2 \\mathbb{I}) \\\\\n    P(y \\giv \\bw, \\sigma_y) &= \\normal(y; \\bw^\\tran \\bx, \\sigma_y^2)\n\\end{align*}\n\nwe then fit $\\sigma_w$ and $\\sigma_y$ by maximizing the marginal likelihood:\n\n\\begin{equation*}\n    P(\\by \\giv X, \\sigma_w, \\sigma_y) =\n        \\int P(\\by \\giv X, \\bw, \\sigma_y)\\ P(\\bw \\giv \\sigma_w)\\ \\de\\bw\n\\end{equation*}\n\n\\section{Gaussian Processes}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.6\\textwidth]{figures/gp}\n    \\caption{Gaussian Process posterior}\n    \\label{fig:gp}\n\\end{figure}\n\nGaussian processes are just big Gaussians. Let's say we have a training set of values $\\by$ with inputs $X$, and a test set of (unknown) values $\\bfu_*$ with test inputs $X_*$. We assume that everything comes from one joint Gaussian:\n\n\\begin{equation*}\n    P\\bigg(\\begin{bmatrix}\n        \\by \\\\\n        \\bfu_*\n    \\end{bmatrix}\\bigg) = \\normal\\bigg(\n        \\begin{bmatrix}\n            \\by \\\\\n            \\bfu_*\n        \\end{bmatrix};\n        \\mathbf{0},\n        \\begin{bmatrix}\n            K(X, X) + \\sigma_y^2 \\mathbb{I} & K(X, X_*) \\\\\n            K(X_*, X) & K(X_*, X_*)\n        \\end{bmatrix}\n    \\bigg)\n\\end{equation*}\n\nThis really is the essence of the thing. The $K$ functions are the kernels that represent the covariance between datapoints. This is enough to describe any dataset; in fact a non-zero mean Gaussian process is always equivalent to a zero mean one (TODO: proof). The kernel matrices are defined entrywise as:\n\n\\begin{equation*}\n    K(X, Z)_{i, j} = k(\\bx^{(i)}, \\bz^{(j)})\n\\end{equation*}\n\nwhere $\\bx^{(i)}$ is the $i$-th row of $X$ and $\\bz^{(j)}$ is the $j$-th row of $Z$.
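\n\\vskip 0.1in\nA minimal sketch of building these kernel matrices, using the squared-exponential kernel defined in the Gaussian Kernel subsection below (with a single shared lengthscale \\texttt{ell} instead of per-dimension $l_d$, for simplicity):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef rbf(x, z, sigma_f=1.0, ell=1.0):\n    # squared-exponential (Gaussian) kernel\n    return sigma_f ** 2 * np.exp(-0.5 * np.sum((x - z) ** 2) / ell ** 2)\n\ndef kernel_matrix(X, Z, k=rbf):\n    # K(X, Z)[i, j] = k(x_i, z_j)\n    return np.array([[k(x, z) for z in Z] for x in X])\n\\end{verbatim}\n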
For the training inputs we also add the noise variance $\\sigma_y^2$. Let's call the noise:\n\n\\begin{equation*}\n    \\nu_i \\sim \\normal(0, \\sigma_y^2)\n\\end{equation*}\n\nThen for the cross-covariance (all means are zero):\n\n\\begin{align*}\n    \\text{cov}(y_i, f_{*, j}) &= \\expect[y_i f_{*, j}] -\n        \\expect[y_i] \\expect[f_{*,j}] \\\\\n        &= \\expect[(f_i + \\nu_i) f_{*, j}] \\\\\n        &= \\expect[f_i f_{*,j}] + \\underbrace{\\expect[\\nu_i]}_0\\expect[f_{*, j}] \\\\\n        &= \\expect[f_i f_{*,j}] = k(\\bx^{(i)}, \\bx_*^{(j)})\n\\end{align*}\n\nOnce we specify the kernel $k$, we can extract the distribution of the test datapoints by simply conditioning within the big Gaussian:\n\n\\begin{align*}\n    P(\\bfu_* \\giv \\data, \\model) =\n        \\normal&(\\bfu_*; \\\\\n        &K(X_*, X)(K(X, X) + \\sigma_y^2 \\mathbb{I})^{-1} \\by, \\\\\n        &K(X_*, X_*) - K(X_*, X)(K(X, X) + \\sigma_y^2 \\mathbb{I})^{-1}K(X, X_*) \\\\\n        &)\n\\end{align*}\n\nwhich is the posterior over test datapoints.\n\n\\subsection{Gaussian Kernel}\nA common choice for a kernel is the Gaussian (squared-exponential) kernel:\n\n\\begin{equation*}\n    k(\\bx^{(i)}, \\bx^{(j)}) = \n        \\sigma_f^2 \\exp\\bigg( \n            -\\frac{1}{2} \\sum_{d=1}^D (x_d^{(i)} - x_d^{(j)})^2 / l_d^2    \n        \\bigg)\n\\end{equation*}\n\nWe can fit $\\sigma_f$ and $l_d$ by maximizing the marginal likelihood.\n\n\\section{Variational Inference}\nNot going into details here. The idea is to minimize the Kullback-Leibler divergence between two distributions. We can use it instead of the Laplace Approximation for Bayesian Logistic Regression above.\n\n\\begin{equation*}\n    D_{KL}(p || q) = \\int p(\\bx) \\log \\frac{p(\\bx)}{q(\\bx)}\\ \\de\\bx\n\\end{equation*}\n\nWe can minimize the KL divergence $D_{KL}(q(\\bw; \\alpha)\\ ||\\ P(\\bw \\giv \\data))$ in the logistic regression case. We are trying to match a parametric distribution $q$, with parameters captured in $\\alpha$, to the posterior distribution of the weights $P(\\bw \\giv \\data)$:\n\n\\begin{align*}\n    \\kl(q(\\bw; \\alpha)\\ ||\\ \\post) &= \\int q(\\bw; \\alpha) \\log \\frac{\n        q(\\bw; \\alpha)\n    }{\n        \\post\n    }\n    \\ \\de\\bw \\\\\n    &= -\\int q(\\bw; \\alpha) \\log \\post\\ \\de\\bw + \n    \\underbrace{\n        \\int q(\\bw; \\alpha) \\log q(\\bw; \\alpha)\\ \\de\\bw\n    }_{\\text{negative entropy,} -H(q)}\n\\end{align*}\n\nThe first term is large if we put probability on regions where the posterior is small. The second term encourages more spread-out distributions with high entropy.\n\\vskip 0.1in\n\nUsing Bayes' theorem in the KL divergence formula:\n\n\\begin{equation*}\n    \\kl(q\\ ||\\ p) = \\underbrace{\n        \\expect_q[\\log q(\\bw)] -\n        \\expect_q[\\log P(\\data \\giv \\bw)] -\n        \\expect_q[\\log P(\\bw)]\n    }_{J(q)} + \\log P(\\data)\n\\end{equation*}\n\nWe thus minimize $J(q)$; its negative, $-J(q)$, is known as the ELBO (Evidence Lower Bound). We can get a bound on $\\log P(\\data)$, the log marginal likelihood, by observing that the KL divergence is non-negative:\n\n\\begin{equation*}\n    \\log P(\\data) \\geq -J(q)\n\\end{equation*}\n\n\\subsection{Minimizing the KL-divergence for a Gaussian function}\nLet $\\alpha = \\{ \\bm{m}, V \\}$, the mean and covariance of a Gaussian distribution. We will be (numerically) minimizing $J(q)$ with respect to $\\alpha$. Thus:\n\n\\begin{equation*}\n    q(\\bw; \\alpha) = \\normal(\\bw; \\bm{m}, V)\n\\end{equation*}\n\nWe first Cholesky decompose $V$ to get $V = LL^\\tran$, $L$ being a lower-triangular matrix with positive diagonal entries.
We also optimize $\\log \\sigma_w$ rather than $\\sigma_w$; otherwise a gradient step might make $\\sigma_w$ negative. For the same reason we optimize the diagonal of $L$ in log space:\n\n\\begin{equation*}\n    \\tilde{L}_{i, j} = \\begin{cases}\n        \\log L_{i, j} & i = j \\\\\n        L_{i, j}      & i \\neq j\n    \\end{cases}\n\\end{equation*}\n\nWe assume a Gaussian prior:\n\n\\begin{equation*}\n    P(\\bw) = \\normal(\\bw; \\bf{0}, \\sigma_w^2 \\mathbb{I})\n\\end{equation*}\n\nWe can write two of the terms of $J(q)$ in closed form:\n\n\\begin{align*}\n    \\expect_{\\normal(\\bw; \\bm{m}, V)}[\\log \\normal(\\bw; \\bmu, \\Sigma)] &=\n        \\expect_{\\normal(\\bw; \\bm{m}, V)}\\bigg[\n            -\\frac{1}{2}(\\bw - \\bmu)^\\tran \\Sigma^{-1} (\\bw - \\bmu)\n        \\bigg] - \\frac{1}{2} \\log |2 \\pi \\Sigma| \\\\\n    % \n    \\expect_q[\\log q(\\bw)] &=\n        \\expect_{\\normal(\\bw; \\bm{m}, V)}[\\log \\normal(\\bw; \\bm{m}, V)] = -\\frac{D}{2} - \\frac{1}{2}\\log|2 \\pi V| \\\\\n    % \n    \\expect_q[\\log P(\\bw)] &=\n        -\\frac{1}{2\\sigma_w^2}\\bigg[ \\text{Tr}(V) + \\bm{m}^\\tran \\bm{m} \\bigg] - \\frac{D}{2} \\log(2\\pi\\sigma_w^2)\n\\end{align*}\n\nUsing this trick for the quadratic part (the $\\log$-determinant terms are added separately):\n\n\\begin{align*}\n    \\expect_{\\normal(\\bz; 0, V)}[-\\frac{1}{2} \\bz^\\tran V^{-1} \\bz] &=\n        \\expect[-\\frac{1}{2} \\text{Tr}(\\bz^\\tran V^{-1} \\bz)]\n    = -\\frac{1}{2} \\text{Tr}(\\expect[\\bz \\bz^\\tran] V^{-1}) \\\\\n    &= -\\frac{1}{2} \\text{Tr}(VV^{-1}) = -\\frac{1}{2} \\text{Tr}(\\mathbb{I}_D) = -\\frac{D}{2}\n\\end{align*}\n\nwhere $\\expect[\\bz \\bz^\\tran] = V$ by the definition of covariance for a $0$-mean random variable. We also used that $\\bz^\\tran V^{-1} \\bz$ is a number and hence equal to its trace, together with the cyclic property of the trace, a.k.a. the \"trace trick\".\n\\vskip 0.1in\n\nFrom the Cholesky decomposition:\n\\begin{align*}\n    \\frac{1}{2} \\log|V| &= \\sum_i \\log L_{i,i} \\\\\n    \\text{Trace}(V) &= \\sum_{i,j} L_{i,j}^2\n\\end{align*}\n\nhence the above reparametrization.\n\n\\subsection{Log-likelihood term}\nThe nastiest of the three terms is the log-likelihood:\n\n\\begin{equation*}\n    -\\expect_{\\normal(\\bw; \\bm{m}, V)}[\\log P(\\data \\giv \\bw)] =\n        -\\expect_{\\normal(\\bw; \\bm{m}, V)}\\bigg[\n            \\sum_{n=1}^N \\log p(y^{(n)} \\giv \\bx^{(n)}, \\bw)\n        \\bigg]\n\\end{equation*}\n\nWe cannot compute this in closed form. Instead we do a Monte Carlo estimate:\n\n\\begin{equation*}\n    -\\expect_{\\normal(\\bw; \\bm{m}, V)}[\\log P(\\data \\giv \\bw)] \\approx\n        - \\sum_{n = 1}^{N} \\log p(y^{(n)} \\giv \\bx^{(n)}, \\bw)\n\\end{equation*}\n\nwhere $\\bw \\sim \\normal(\\bm{m}, V)$ is a single sample (we could also average over several samples).\n\n\\subsection{Gradients of the log-likelihood}\nHere we do a \"reparametrization trick\" (gotta love those). To sample a random $\\bw$, we sample a vector of normal variables $\\pmb{\\nu} \\sim \\normal(0, \\mathbb{I})$ and then transform it (see multivariate Gaussians above): $\\bw = \\bm{m} + L \\pmb{\\nu}$.
We can rewrite the expectation:\n\n\\begin{equation*}\n    \\expect_{\\normal(\\bw; \\bm{m}, V)}[f(\\bw)] = \\expect_{\\normal(\\pmb{\\nu}; \\bm{0}, \\mathbb{I})}[f(\\bm{m} + L \\pmb{\\nu})]\n\\end{equation*}\n\nNow differentiating (and approximating each expectation with a single sample $\\pmb{\\nu}$):\n\\begin{align*}\n    \\nabla_{\\bm{m}} \\expect_{\\normal(\\bw; \\bm{m}, V)}[f(\\bw)] &=\n    \\expect_{\\normal(\\pmb{\\nu}; \\bm{0}, \\mathbb{I})}[\\nabla_{\\bm{m}} f(\\bm{m} + L \\pmb{\\nu})] \\\\\n    &\\approx \\nabla_{\\bm{m}} f(\\bm{m} + L \\pmb{\\nu}) \\\\\n    % \n    \\nabla_L \\expect_{\\normal(\\bw; \\bm{m}, V)}[f(\\bw)] &=\n    \\expect_{\\normal(\\pmb{\\nu}; \\bm{0}, \\mathbb{I})}[\\nabla_L f(\\bm{m} + L \\pmb{\\nu})] \\\\\n    &\\approx [\\nabla_{\\bw} f(\\bw)]\\, \\pmb{\\nu}^\\tran\n\\end{align*}\n\n\\section{Gaussian Mixture Models}\nThis is a clustering algorithm. We have $K$ classes and each datapoint has a class $z^{(n)} \\in \\{1, \\dots, K\\}$. These are unobserved and drawn from a prior: $z^{(n)} \\sim \\pmb{\\pi}$. The points are then drawn from the corresponding Gaussian component:\n\n\\begin{equation*}\n    P(\\bx^{(n)} \\giv z^{(n)} = k, \\theta) = \\normal(\\bx^{(n)}; \\pmb{\\mu}^{(k)}, \\Sigma^{(k)})\n\\end{equation*}\n\nwhere $\\theta = \\{ \\pmb{\\pi}, \\{ \\bmu^{(k)}, \\Sigma^{(k)} \\}\\}$. We can fit it via maximum likelihood by maximizing:\n\n\\begin{equation*}\n    \\log P(\\data \\giv \\theta) = \\sum_n \\log P(\\bx^{(n)} \\giv \\theta)\n\\end{equation*}\n\nwhere:\n\n\\begin{align*}\n    P(\\bx^{(n)} \\giv \\theta) &=\n        \\sum_k P(\\bx^{(n)}, z^{(n)} = k \\giv \\theta) \\\\\n    &= \\sum_k P(\\bx^{(n)} \\giv z^{(n)} = k, \\theta) P(z^{(n)} = k \\giv \\theta) \\\\\n    &= \\sum_k \\pi_k \\normal(\\bx^{(n)}; \\bmu^{(k)}, \\Sigma^{(k)})\n\\end{align*}\n\nWe can't solve this analytically, but we can use gradient-based optimizers. As before, we decompose $\\Sigma = LL^\\tran$ (Cholesky) and set:\n\n\\begin{equation*}\n    \\tilde{L}_{i, j} = \\begin{cases}\n        \\log L_{i, j} & i = j \\\\\n        L_{i, j}      & i \\neq j\n    \\end{cases}\n\\end{equation*}
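\n\\vskip 0.1in\nAs a side note, this log-likelihood is usually evaluated with the log-sum-exp trick for numerical stability. A minimal sketch, with the parameters passed directly rather than through the Cholesky parametrization:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef log_gauss(x, mu, Sigma):\n    # log N(x; mu, Sigma)\n    d = x - mu\n    _, logdet = np.linalg.slogdet(Sigma)\n    quad = d @ np.linalg.solve(Sigma, d)\n    return -0.5 * (quad + logdet + len(x) * np.log(2 * np.pi))\n\ndef gmm_log_likelihood(X, pi, mus, Sigmas):\n    # sum_n log sum_k pi_k N(x_n; mu_k, Sigma_k)\n    total = 0.0\n    for x in X:\n        a = np.array([np.log(p) + log_gauss(x, m, S)\n                      for p, m, S in zip(pi, mus, Sigmas)])\n        total += a.max() + np.log(np.exp(a - a.max()).sum())\n    return total\n\\end{verbatim}\n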
The $\\pmb{\\pi}$ vector must represent probabilities, so we can use a softmax transformation for it as well.\n\n\\subsection{Expectation Maximization (EM)}\nThis consists of two steps, the $\\bm{E}$ and $\\bm{M}$ step:\n\\vskip 0.1in\n\n$\\bm{E}$-step:\n\\begin{equation}\n    r_k^{(n)} = P(z^{(n)} = k \\giv \\bx^{(n)}, \\theta) = \\frac{\n        \\pi_k \\normal(\\bx^{(n)}; \\bmu^{(k)}, \\Sigma^{(k)})\n    }{\n        \\sum_l \\pi_l \\normal(\\bx^{(n)}; \\bmu^{(l)}, \\Sigma^{(l)})\n    }\n\\end{equation}\n\n$\\bm{M}$-step:\n\\begin{align*}\n    r_k &= \\sum_n r_k^{(n)} \\\\\n    \\pi_k &= \\frac{r_k}{N} \\\\\n    \\bmu^{(k)} &= \\frac{1}{r_k} \\sum_n r_k^{(n)} \\bx^{(n)} \\\\\n    \\Sigma^{(k)} &= \\frac{1}{r_k} \\sum_n r_k^{(n)} \\bx^{(n)} \\bx^{(n)\\tran}\n        - \\bmu^{(k)} \\bmu^{(k)\\tran}\n\\end{align*}\n\n% \\section{Optimization}\n% \\subsection{Regularization in Depth}\n% \\subsection{Expectation Maximization}\n% \\subsection{Proximal Methods}\n\n% \\section{Ensembles}\n% \\subsection{Averaging}\n% \\subsection{Bagging}\n% \\subsection{Mixture of Experts}\n\n\\printbibliography\n\n\\end{document}
{"text": "% Question  ##################################################################################################################\n\\section{Question 3}\\label{ssec:pt2q3}\n\\textbf{Hidden Markov Models (HMM) are an important unsupervised learning method used to analyse sequence data and identifying underlying regimes that govern the data.}\n\n\\noindent\n\\textbf{This question relates to detecting regime changes in volatility (risk).}\n% END Question  ##############################################################################################################\n\n% Question (i) ###############################################################################################################\n\\subsection{Q2 (i)}\\label{sssec:pt2q3i}\n\\textbf{Download and calculate the last 10 year daily returns of the S\\&P500 index (till 31/12/2017).}\n\n\\noindent\nCode to download the closing prices for S\\&P500 in this question can be found in \u2018Question 3 (i)\u2019 in the python notebook. The data was downloaded from Yahoo Finance and saved as .csv format in the following path \\textit{\u2018src/data/part\\_2\u2019}. The file was saved as \\textit{\u2018GSPC.csv\u2019}. The data was then reloaded from the file and converted into pandas dataframe and the daily log returns were computed.\n% END Question (i) ###########################################################################################################\n\n% Question (ii) ##############################################################################################################\n\\subsection{Q2 (ii)}\\label{sssec:pt2q3ii}\n\\textbf{Create a new series of volatility based on 10 day rolling time window (for each day apply standard deviation on the previous 10 day returns).}\n\n\\noindent\nTo calculate the 10 day rolling volatility a function called 'rolling\\_vol' was created in the 'fintech' library as shown in Fig.~\\ref{fig:rollvol}, and this was called from the notebook in 'Question 3 (ii)'. \n\n\\begin{figure}[H]\n\\centering\n  \\includegraphics[scale = .58]{imgs/rolling_vol.png}\n  \\caption{Function to calculate rolling volatility.}\n  \\label{fig:rollvol}\n\\end{figure}\n% END Question (ii) ##########################################################################################################\n\n% Question (iii) ############################################################################################################\n\n\\subsection{Q2 (iii)}\\label{sssec:pt2q3iii}\n\\textbf{Utilize this new series to train two Gaussian HMM: one using 2 latent states and the other with 3 latent states (components).  These latent states indicate the different volatility regimes.}\n\n\\noindent\nThe code to train the two models can be found in 'Question 3 (iii)'. Both models were trained using 1000 iteration using a diagonal covariance matrix. Also both models were trained using one feature, which is the series created in (ii). A third party library called 'hmmlearn' \\cite{python:hmmlearn} was utilised for this task. 
\n\n% END Question (iii) #########################################################################################################\n\n% Question (iv) ##############################################################################################################\n\n\\subsection{Q2 (iv)}\\label{sssec:pt2q3iv}\n\\textbf{For each of the two models above, present a visualization (one or more plots) which clearly shows the identified volatility regimes over the 10-year period being investigated.}\n\n\\noindent\nFig.~\\ref{fig:regime1} shows the volatility regime over the 10-period when using 2 latent states while Fig.~\\ref{fig:regime2} shows the volatility regimes when using 3 latent states. The code for these plots can be found in the notebook in 'Question 3 (iv)'.\n\n\\begin{figure}[H]\n\\centering\n  \\includegraphics[scale = .60]{imgs/regime1.png}\n  \\caption{Volatility regime over the 10-period when using 2 latent states.}\n  \\label{fig:regime1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n  \\includegraphics[scale = .65]{imgs/regime2.png}\n  \\caption{Volatility regime over the 10-period when using 3 latent states.}\n  \\label{fig:regime2}\n\\end{figure}\n\n% END Question (iv) ##########################################################################################################\n\n% Question (v) ###############################################################################################################\n\n\\subsection{Q2 (v)}\\label{sssec:pt2q3v}\n\\textbf{Comment on the findings and discuss the difference in outcomes between the two models.}\n\n\\noindent\nUsing the models in the previous question (iii) the Hidden Markov model enables us to investigate underlying latent states (must specify the number of components in this unsupervised learning model) and the probability transitions between them even though they are not directly observable. \n\n\\noindent\nFinding volatility regimes can help an investor to manage risk more effectively by identifying states where there is high volatility. This model has the potential of identifying incorrect identification of a \u201ctrend\u201d. \n\n\\noindent \nAs seen from both the plots the regime detection captures highly volatile and \u201ctrending\u201d periods. In the first model (two latent states) the majority is captured in the 2nd state but most of 2008-2009 and late 2011 is captured in the first state. In the second model the majority was captured in the 2nd state, while most of 2008-2009 was captured in the first state together with the late 2011 period. Utilizing these regimes one can apply different investing/risk management methods according to the state the asset is in, in terms of volatility. The states which were captured show a low volatility state and a high volatility state/s.\n\n\\noindent\nAll the states in the two models have a positive mean. The variance of the 2nd state in both models indicate low variance (low volatility periods), but the first state has a variance which is significantly higher in both models (high volatility periods). This indicates a state with high volatility. The third state in the second model also indicate high volatility but it must be noted that this state captures relatively low information. \n\n\\noindent\nIt is also important to note the states during the 2008 period (market crash), as you can see both models captured high volatility during that period (both models capture this in the first state). 
The measurements discussed above (mean and var) are shown in Fig.~\\ref{fig:model_a_regime} and Fig.~\\ref{fig:model_b_regime}. \n\n\\begin{figure}[H]\n     \\centering\n     \\begin{subfigure}[b]{0.45\\textwidth}\n         \\centering\n         \\includegraphics[width=\\textwidth]{imgs/model_a_regime.JPG}\n         \\caption{HMM measurements for Model A (two latent states).}\n         \\label{fig:model_a_regime}\n     \\end{subfigure}\n     \\hfill\n     \\begin{subfigure}[b]{0.45\\textwidth}\n         \\centering\n                 \\includegraphics[width=\\textwidth]{imgs/model_b_regime.JPG}\n        \\caption{HMM measurements for Model B (three latent states).}\n         \\label{fig:model_b_regime}\n     \\end{subfigure}\n\\end{figure}\n\n\n% END Question (v) ###########################################################################################################\n\n", "meta": {"hexsha": "e73a3c868d75cc1007d0d6d80928bc0773f336aa", "size": 7131, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/LaTeX/sections/2_part/3_question.tex", "max_stars_repo_name": "achmand/ari5122_assignment", "max_stars_repo_head_hexsha": "0322dfc77303bf77ca5acbacee4efc659765ab42", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/LaTeX/sections/2_part/3_question.tex", "max_issues_repo_name": "achmand/ari5122_assignment", "max_issues_repo_head_hexsha": "0322dfc77303bf77ca5acbacee4efc659765ab42", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/LaTeX/sections/2_part/3_question.tex", "max_forks_repo_name": "achmand/ari5122_assignment", "max_forks_repo_head_hexsha": "0322dfc77303bf77ca5acbacee4efc659765ab42", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.2735849057, "max_line_length": 638, "alphanum_fraction": 0.623755434, "num_tokens": 1519, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.8006920020959544, "lm_q1q2_score": 0.595048689818822}}
{"text": "% !TEX root = as_grf_sopt.tex\n\n% \\begin{appendices}\n\\section{Active Search Regret Bound}\n\\label{appendix:proof_as_regret}\nWe start by stating the following result.% by \\cite{srinivas2012information}.\n\\begin{theorem}[Theorem 6, {\\cite{srinivas2012information}}]\n\t\\label{thm:dev}\n\tLet $\\delta \\in (0,1)$. Assume the observation noises are uniformly bounded by $\\sigma_n$ and $f$ has RKHS norm $B$ with kernel $C_0$, which is equivalent to $\\bff^\\top\\widetilde{\\cLap}_0\\bff\\leq B^2$. Define \n\t% \\begin{equation*}\n\t$\n\t\t\\alpha_t = \\sqrt{2 B^2 + 300 \\gamma_t \\log(t/\\delta)^3},\n\t\t$\n\t% \\end{equation*}\n\t% where $\\Vert \\cdot \\Vert_K$ denotes the RKHS norm associated with the kernel $K$. Then\n\tthen\n\t\\begin{equation*}\n\t\t\\mbox{Pr}\\left(\\forall t, \\forall v \\in V, \\;\\; |\\mu_t(v) - f(v)| \\leq \\alpha_{t+1} \\sigma_t(v) \\right) \\geq 1 - \\delta.\n\t\\end{equation*}\n\\end{theorem}\nWe use this result to bound our instantaneous regrets.\n\\begin{lemma}\n\tConditioned on the high-probability event in Theorem \\ref{thm:dev}, the following bound holds:\n\t\\begin{equation*}\n\t\t\\forall t, \\;\\; r_t := f(v^*_t) - f(v_t) \\leq 2 \\alpha_t k \\sigma_{t-1}(v_t), \n\t\\end{equation*}\n\twhere $v^*_t$ is the node with the $t$-th globally largest function value and $v_t$ is node selected at round $t$. \n\\end{lemma}\n%\\begin{proof}\n\\emph{Proof}.\nAt round $t$ there are two possible situations. If \n$v^*_t$ was picked at some earlier round, the definition of $v^*_t$ implies that there exists some $t' < t$ such that $v^*_{t'}$ \nhas not been picked yet. According to our selection rule, the fact that $s_t(v) \\geq \\sigma_t(v)$, \nand Theorem \\ref{thm:dev}, the following holds:\n\\begin{align*}\n\t&\\mu_{t-1}(v_t) + \\alpha_t s_{t-1}(v_t) \\geq \\mu_{t-1}(v^*_{t'}) + \\alpha_t s_{t-1}(v^*_{t'}) \\\\\n\t\t&\\qquad \\geq \\mu_{t-1}(v^*_{t'}) + \\alpha_t \\sigma_{t-1}(v^*_{t'})\n\t \\geq f(v^*_{t'}) \\geq f(v^*_t).\n\\end{align*}\nIf $v^*_t$ has not been picked yet, a similar argument gives \n\\begin{equation*}\n\t\\mu_{t-1}(v_t) + \\alpha_t s_{t-1}(v_t) \\geq \\mu_{t-1}(v^*_{t}) + \\alpha_t s_{t-1}(v^*_{t}) \n\t\t%&\\geq& \\mu_{t-1}(v^*_{t}) + \\alpha_t \\sigma_{t-1}(v^*_{t})\\\\\n\t\\geq f(v^*_t).\n\\end{equation*}\nThus we always have \n\\begin{align*}\nf(v^*_t) &\\leq \\mu_{t-1}(v_t) + \\alpha_t s_{t-1}(v_t) \\\\\n&\\leq f(v_t) + \\alpha_t \\sigma_{t-1}(v-t) +\\alpha_t s_{t-1}(v_t)\\\\\n%&\\leq& f(v_t) + 2 \\alpha_t s_{t-1}(v_t) \\\\\n&\\leq f(v_t) + 2 \\alpha_t k \\sigma_{t-1}(v_t).\t\n\\end{align*}\n%\\end{proof}\n%\\begin{lemma}[Lemma 5.3, {\\cite{srinivas2012information}}]\n%\tThe information gain achieved by the selected nodes can be expressed in terms of the predictive variances.\n%\tLet $\\bv_t = (v_1, v_2, \\ldots, v_t)$ denote the sequence of selected nodes. Then\n%\t\\begin{equation*}\n%\t\t\\mathcal{I}(\\by_{\\bv_t};f_{\\bv_t}) = \\frac{1}{2} \\sum_{i=1}^t \\log(1 + \\sigma^{-2}\\sigma_{i-1}(v_i)^2). \n%\t\\end{equation*}\n%\\end{lemma}\n%Next we bound the sum of squared instantaneous regrets in terms of the maximum information gain. 
\n\\begin{lemma}[Lemma 5.4, {\\cite{srinivas2012information}}]\n\tLet $\\alpha_t$ be defined as in Theorem \\ref{thm:dev} and $c_1$ be defined as in Theorem \\ref{thm:as_regret}.\n\tConditioned on the high probability event of Theorem \\ref{thm:dev}, the following holds:\n\t\\begin{equation*}\n\t\\forall T\\geq 1, \\;\\;\\sum_{t=1}^T r_t^2 \\leq \\alpha_T k^2 c_1 \\mathcal{I}(\\by_{\\bv_T};f_{\\bv_T}) \\leq \\alpha_T k^2 c_1 \\gamma_T.\t\n\t\\end{equation*}\n\\end{lemma}\nFinally, the Cauchy-Schwarz inequality gives  \n$R_T \\leq \\sqrt{T \\sum_{t=1}^T r_t^2} \\leq k \\sqrt{T c_1 \\alpha_T \\gamma_T}.$\n% \\end{appendices}\n", "meta": {"hexsha": "aba4fa46de3785e6c1572b3b9e977179ab47237c", "size": 3504, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "texts/appendix_proof_as_regret.tex", "max_stars_repo_name": "AutonlabCMU/active-search-gp-sopt", "max_stars_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-10-23T03:53:13.000Z", "max_stars_repo_stars_event_max_datetime": "2018-10-23T03:53:13.000Z", "max_issues_repo_path": "texts/appendix_proof_as_regret.tex", "max_issues_repo_name": "AutonlabCMU/active-search-gp-sopt", "max_issues_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "texts/appendix_proof_as_regret.tex", "max_forks_repo_name": "AutonlabCMU/active-search-gp-sopt", "max_forks_repo_head_hexsha": "45d75dc0fe33d3d68784c30ba7f6ecd7b1718c31", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2015-12-22T23:55:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-14T15:33:27.000Z", "avg_line_length": 48.6666666667, "max_line_length": 210, "alphanum_fraction": 0.6618150685, "num_tokens": 1400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.595048680709591}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{Bi-Perplex Numbers}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA split quaternion has the form\n\\begin{equation}\n    a_{0} + a_{1} A + a_{2} S + a_{3} AS\n\\end{equation}\nThese follow from a hyperbolic Cayley-Dickson construct on the binions.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "fafc5c9079032e0bb7d50d98927deb4bbe114393", "size": 2092, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/H.tex", "max_stars_repo_name": "meirizarrygelpi/plexifications", "max_stars_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/H.tex", "max_issues_repo_name": "meirizarrygelpi/plexifications", "max_issues_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/H.tex", "max_forks_repo_name": "meirizarrygelpi/plexifications", "max_forks_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5454545455, "max_line_length": 80, "alphanum_fraction": 0.1959847036, "num_tokens": 222, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802373309979, "lm_q2_score": 0.6477982043529715, "lm_q1q2_score": 0.5949898484767115}}
{"text": "\\section{National Camp 2019 Problem Set}\n\t\n\t\\prob{https://math.stackexchange.com/questions/675646/if-there-are-5-points-on-a-sphere-then-4-of-them-belong-to-a-half-sphere}{\t\t}{E}{Given any five points on a sphere, show that some four of them must lie on a closed hemisphere.}\n\t\n\t\n\t\\prob{}{}{}{Alice and Bob play a game. There is a threshold $ n $. Initially, the game starts with the number $ 1 $. In a move, the player replaces the number $ m $ with either $ m+1 $ or $ 2m $. Assuming optimal play, count the number of thresholds under $ 2019 $ for which Alice wins.}\n\t\n\t\n\t\\prob{}{}{}{For an integer $ n $, let $ a_1, a_2, \\dots a_{\\phi(n)} $ be the integers less than and coprime to $ n $. Determine all possible values of \\[\\prod_{i=1}^{\\phi(n)} a_i\\ (\\text{mod}\\ n)\\] and for which values of $ n $ they appear.}\n\t\n\t\n\t\\prob{}{}{}{Let $ O $ and $ N $ be the circumcenter and nine-point-center of $ \\triangle ABC $ respectively. Let $ I_b $ and $ I_c $ be the $ B $ and $ C $ excenters of $ \\triangle ABC $ respectively. Prove that \\[ \\angle I_bOI_c = 180^\\circ - \\frac 1 2 \\angle I_bNI_c \\]}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q3h448003p2521640}{Putnam 2001 A5}{E}{Prove that there are unique positive integers $ a, n $ such that \\[a^{n+1} - (a+1)^n = 2001\\]}\n\t\n\t\n\t\\prob{}{}{}{Find all polynomials $ P(x) $ with real coefficients such that for all real numbers $ x, y, z $ with $ x + y + z = 0 $, the determinant of the following matrix is $ 0 $ \n\t\\[\n\t\\begin{bmatrix}\n\t\t1 & x & P(x)\\\\\n\t\t1 & y & P(y)\\\\\n\t\t1 & z & P(z)\n\t\\end{bmatrix}\n\t\\]\n\t}\t\n\n\t\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h60830p366681}{IMO 1971 P5}{M}{Prove that for every positive integer $m$ we can find a finite set $S$ of points in the plane, such that given any point $A$ of $S$, there are exactly $m$ points in $S$ at unit distance from $A$.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q2h368177p2026501}{ISL 1971}{M}{Consider a sequence of polynomials $P_0(x), P_1(x), P_2(x), \\ldots, P_n(x), \\ldots$, where $P_0(x) = 2, P_1(x) = x$ and for every $n \\geq 1$ the following equality holds:\n\t\t\\[P_{n+1}(x) + P_{n-1}(x) = xP_n(x).\\]\n\t\tProve that there exist three real numbers $a, b, c$ such that for all $n \\geq 1,$\n\t\t\\[(x^2 - 4)[P_n^2(x) - 4] = [aP_{n+1}(x) + bP_n(x) + cP_{n-1}(x)]^2.\\]}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q2h58616p359102}{IMO 1974 P3}{}{Prove that for any natural number $ n $, the number \\[\\sum_{k=0}^{n} \\binom{2n+1}{2k+1}2^{3k}\\] is not divisible by $ 5 $.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c7h513439p2882561}{Putnam 1999 B6}{M}{Let $S$ be a finite set of integers, each greater than $1$. Suppose that for each integer $n$ there is some $s\\in S$ such that $\\gcd(s,n)=1$ or $\\gcd(s,n)=s$. Show that there exist $s,t\\in S$ such that $\\gcd(s,t)$ is prime.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h279890p1511554}{Sharygin 2009 P4}{E}{Let $ P$ and $ Q$ be the common points of two circles. The ray with origin $ Q$ reflects from the first circle in points $ A_1$, $ A_2$,$ \\ldots$ according to the rule ``the angle of incidence is equal to the angle of reflection''. Another ray with origin $ Q$ reflects from the second circle in the points $ B_1$, $ B_2$,$ \\ldots$ in the same manner. Points $ A_1$, $ B_1$ and $ P$ turned out to be collinear. 
Prove that all lines $ A_iB_i$ pass through $ P $.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q2h127832p725339}{Greece}{M}{Find all surjective functions $ f:\\N \\to \\N $ such that for all $ m, n\\in \\N $\\[m|n \\Longleftrightarrow f(m)|f(n)\\]}\n\t\n\t\n\t\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h358783p1960094}{USA TST 2010 P9}{H}{Determine whether or not there exists a positive integer $k$ such that $p = 6k+1$ is a prime and\n\t\t\\[\\binom{3k}{k} \\equiv 1 \\pmod{p}\\]}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h420428p2374811}{USA TST 2011 P6}{H}{A polynomial $P(x)$ is called nice if $P(0) = 1$ and the nonzero coefficients of $P(x)$ alternate between $1$ and $-1$ when written in order. Suppose that $P(x)$ is nice, and let $m$ and $n$ be two relatively prime positive integers. Show that\n\t\t\\[Q(x) = P(x^n) \\cdot \\frac{(x^{mn} - 1)(x-1)}{(x^m-1)(x^n-1)}\\]\n\t\tis nice as well.}\n\t\n\t\n\t\\prob{}{}{}{Find all pairs of positive integers $ (m,n)$ such that $ mn - 1$ divides $ (n^2 - n + 1)^2$.}\n\t\n\t\n\t\\prob{}{}{}{In isosceles $\\triangle ABC$, $AB=AC$, points $D,E,F$ lie on segments $BC,AC,AB$ such that $DE\\parallel AB$, $DF\\parallel AC$. The circumcircle of $\\triangle ABC$ $\\omega_1$ and the circumcircle of $\\triangle AEF$ $\\omega_2$ intersect at $A,G$. Let $DE$ meet $\\omega_2$ at $K\\neq E$. Points $L,M$ lie on $\\omega_1,\\omega_2$ respectively such that $LG\\perp KG, MG\\perp CG$. Let $P,Q$ be the circumcenters of $\\triangle DGL$ and $\\triangle DGM$ respectively. Prove that $A,G,P,Q$ are concyclic.}\n\t\n\t\t\\figdf{.3}{nat_pset_16}{}\n\t\n\t\n\t\\prob{}{}{}{Given a polynomial $f(x)$ with rational coefficients, of degree $d \\ge 2$, we define the sequence of sets $f^0(\\mathbb{Q}), f^1(\\mathbb{Q}), \\ldots$ as $f^0(\\mathbb{Q})=\\mathbb{Q}$, $f^{n+1}(\\mathbb{Q})=f(f^{n}(\\mathbb{Q}))$ for $n\\ge 0$. (Given a set $S$, we write $f(S)$ for the set $\\{f(x)\\mid x\\in S\\}$.)\n\t\tLet $f^{\\omega}(\\mathbb{Q})=\\bigcap_{n=0}^{\\infty} f^n(\\mathbb{Q})$ be the set of numbers that are in all of the sets $f^n(\\mathbb{Q})$, $n\\geq 0$. Prove that $f^{\\omega}(\\mathbb{Q})$ is a finite set.}\n\t\n\t\n\t\\prob{}{}{}{Define the polynomial sequence $\\left \\{ f_n\\left ( x \\right ) \\right \\}_{n\\ge 1}$ with $f_1\\left ( x \\right )=1$, $$f_{2n}\\left ( x \\right )=xf_n\\left ( x \\right ), \\; f_{2n+1}\\left ( x \\right ) = f_n\\left ( x \\right )+ f_{n+1} \\left ( x \\right ), \\; n\\ge 1.$$Find all rational numbers $a$ that are a root of some $f_n\\left ( x \\right ).$}\n\t\n\t\n\t\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q1h547874p3176078}{Sharygin 2013 Final Round 10.8}{}{Two fixed circles are given on the plane, one of them lies inside the other one. From a point $ C $ moving arbitrarily on the external circle, draw two chords $ CA, CB $ of the larger circle that are tangent to the smaller one. Find the locus of the incenter of triangle $ ABC $.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q1h1279375p6724144}{IMC 2014 P3}{}{Let $n$ be a positive integer. 
Show that there are positive real numbers $a_0, a_1, \\dots, a_n$ such that for each choice of signs the polynomial\n\t\t\\[\\pm a_nx^n\\pm a_{n-1}x^{n-1} \\pm \\dots \\pm a_1x \\pm a_0\\]\n\thas $n$ distinct real roots.}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c260h1442608p8732088}{Dunno}{}{Suppose that the real numbers $a_0,a_1,...,a_n$ and $x$, with $0<x<1$, satisfy\n\t\t\\[\\frac{a_0}{1-x}+\\frac{a_1}{1-x^2}+...+\\frac{a_n}{1-x^{n+1}}=0\\]\t\n\tProve that there exists a real number $y$ with $0<y<1$ such that\n\t\t\\[a_0+a_1y+a_2y^2+...+a_ny^n=0\\]}\n\t\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/q2h523187p2952670}{RMM 2013 P6}{}{A token is placed at each vertex of a regular $2n$-gon. A move consists in choosing an edge of the $2n$-gon and swapping the two tokens placed at the endpoints of that edge. After a finite number of moves have been performed, it turns out that every two tokens have been swapped exactly once. Prove that some edge has never been chosen.}\n\t\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h1683076p10734638}{Sharygin 2018 Final Round 8.4}{}{Find all sets of six points in the plane, no three collinear, such that for every partition of the set into two sets of three points, the two triangles obtained are congruent.}\n\n\n", "meta": {"hexsha": "7df7164ab3fee3da64aa644a6888a0641bcfdd69", "size": 7509, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PSets/nat_2019_probs.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "PSets/nat_2019_probs.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PSets/nat_2019_probs.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 72.2019230769, "max_line_length": 549, "alphanum_fraction": 0.6707950459, "num_tokens": 2652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8104789178257654, "lm_q1q2_score": 0.5949883997168233}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{integ2d}\n\\section*{\\hspace*{-1.6cm} integ2d}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nApproximate 2-D integral.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nsom = integ2d(MAT)\nsom = integ2d(MAT,x)\nsom = integ2d(MAT,x,y)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty integ2d} approximates the 2-D integral of matrix {\\ty MAT}\n        according to abscissa {\\ty x} and ordinate {\\ty y}.\\\\\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty MAT} & {\\ty (M,N)} matrix to be integrated\\\\\n        {\\ty x}   & {\\ty N}-row-vector indicating the abscissa integration path     \n                                & {\\ty (1:N)}\\\\\n        {\\ty y}   & {\\ty M}-column-vector indicating the ordinate integration path \n                                & {\\ty (1:M)}\\\\\n \\hline {\\ty som} & result of integration\\\\\n\n\\hline\n\\end{tabular*}\n\n\\end{minipage}\n\\vspace*{1cm}\n\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nConsider the scalogram of a sinusoidal frequency modulation of 128 points,\nand compute the integral over the time-scale plane of the scalogram :\n\\begin{verbatim}\n         S = fmsin(128,0.2,0.3);\n         [TFR,t,f] = tfrscalo(S,1:128,8,0.1,0.4,128,1);\n         Etfr = integ2d(TFR,t,f)\n         Etfr = \n                128.0000\n\\end{verbatim}\nWe find for {\\ty Etfr} the value of the signal energy, which is the\nexpected value since the scalogram preserves energy.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\ninteg.\n\\end{verbatim}\n\\end{minipage}\n\n\n\n", "meta": {"hexsha": "5eee72ce2b23349333232fb2e65ee2714d78f059", "size": 2051, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/integ2d.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/integ2d.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/integ2d.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 23.5747126437, "max_line_length": 84, "alphanum_fraction": 0.6245733788, 
"num_tokens": 724, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.8104789109591832, "lm_q1q2_score": 0.5949883946759311}}
{"text": "\\newtheorem{ques}{Question}\n\\newtheorem{answ}{Answer}\n\\newtheorem{note}{Note}\n\n\\chapter{ Cross-Entropy Optimization can solve many hard problems in combinatorial optimization.\n}\n\\section{Motivation}\n\n      \\subsection{Rare-event simulation}\n           \n      Suppose that there is a system with \n      \\Terms{state space} $\\D \\subseteq \\R^n$ for some $n\\in \\N_+,$ \n      and \\Terms{performance function} $f:\\D \\mapsto \\R.$ The state\n      $\\X$ is assumed to be a random variable distributed in accordance\n       with a probability density function (pdf)\n      $p(\\x).$ We want to compute the probability of the random event\n      $\\EE=\\{\\x\\in \\D:\\ f(\\x)\\le \\gamma\\}$ for a \\Stress{small} constant\n      $\\gamma\\in \\R.$ Mathematically, we need to compute\n      \\begin{equation}\\label{eq:Objective_RES}\n            \\PP(\\EE)=\\int_{\\x\\in \\D} \\1_{\\{y: f(y) \\le \\gamma\\}}(\\x)p(\\x) d\\x=\\ \\E_{p(\\cdot)} \\Big[ \\1_{\\{y: f(y) \\le \\gamma\\}}(\\X) \\Big].\n      \\end{equation}\n\n\n\\begin{note}\n      The motivation is how to calculate $ \\PP\\left(\\EE\\right) $ efficiently when\n      the event $\\EE$ is rare (here, when $\\gamma$ is very small).\n\\end{note}\n\n      \\subsection{Change of measure}\n\n      For any other pdf $g(\\cdot)$ with $g(\\x)=0\\implies p(\\x)=0\\ \\forall \\x \\in \\D,$ we have\n      \\begin{equation}\\label{eq:CoM_RES}\n            \\PP[\\EE]\\!=\\!\\E_{p(\\cdot)}\\big[\\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\X)\\big] \\!=\\!\\E_{g(\\cdot)}\\Big[\\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\X)\\frac{p(\\X)}{g(\\X)}\\Big].\n      \\end{equation}\n      The CMC estimator of \\eqref{eq:Objective_RES} with CoM $g(\\cdot)$ is\n      \\begin{equation}\\label{eq:CoM_CMC}\n            \\frac{1}{N} \\sum_{k=1}^{N} \\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\X^{(k)})\\frac{p(\\X^{(k)})}{g(\\X^{(k)})},\n      \\end{equation}\n      where $\\X^{(1)}, \\ldots, \\X^{(N)}$ are $i.i.d.$ from $g(\\cdot),$ \n      and $N\\in \\N_+$ is the \\Terms{sample size}.\\\\ \n      The optimal CoM is \n      \\begin{equation}\\label{eq:OptCoM_RES}\n                  g^*(\\x)=\\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\x) \\frac{p(\\x)}{\\PP[\\EE]}, \\quad \\forall \\x \\in \\D\n      \\end{equation}\n      with zero variance, but infeasible!\n\n\\begin{note}\n      If we change the measure to the one described by the density function\n      $$ g^*(x) = \\1_{\\left\\{y:\\ f(y) \\le \\gamma\\right\\}}\\left(x\\right) \\frac{p\\left(x\\right)}{\\PP\\left(\\EE\\right)}, $$\n      the new probability of $\\EE$ satisfies $\\tilde \\PP (\\EE) = 1$.\n      And if we can generate samples from the new probability measure, the sample average equals the expectation exactly\n      for any sample size. (But how can we calculate the original probability of $\\EE$ ?)\n\\end{note}\n\n      \\subsection{Stochastic program}\n\n\t\tA smart idea is to find a pdf from a given family, which well mimics \n\t\t\\eqref{eq:OptCoM_RES}. 
This forms the following optimization\n\t\tproblem\n\t\t\\begin{equation}\\label{eq:SP_RES}\n\t\t\\min_{\\theta\\in \\Theta}\\quad  \\DD\\Big[g^*\\ \\Big|\\Big|\\ g_{\\theta}\\Big],\n\t\t\\end{equation}\n\t\twhere \n\t\t\\begin{itemize}\n\t\t\t\\item $\\DD[h\\ ||\\ r]\\ge 0$ for any two pdfs $h,r,$ and  $\\DD[h \\ || \\ r]=0$ if and only if $h=r$ almost surely,\n\t\t\ttypically $\\DD[\\cdot ||\\cdot]$ is chosen to be the \\Terms{Kullback-Leibler divergence}, i.e.,\n\t\t\t\\begin{itemize}\\label{eq:KL-distance}\n\t\t\t\t\\item $\\DD\\big[g^*\\big|\\big|\\ g_{\\theta}\\big]=\\E_{g^*}\\big[\\ln \\frac{g^*(\\X)}{g_{\\theta}(\\X)}\\big]$\n\t\t\t\\end{itemize}\n\t\t\t\\item $\\F=\\{g_{\\theta}:\\ \\theta\\in \\Theta\\}$ is a family of pdfs \n\t\t\tdefined on $\\D$ and $\\Theta\\subseteq \\R^m$ is a \\Stress{convex} \n\t\t\tparametric set for some $m\\in \\N_+.$ \n\t\t\\end{itemize}\n\t\tWith the Kullback-Leibler divergence, \\eqref{eq:SP_RES} is equivalent to, for \\Stress{any} $\\theta'\\in \\Theta,$\n\t\t\\begin{equation}\\label{eq:SP_RES_KL}\n\t\t\t\\max_{\\theta \\in \\Theta}\\ \\E_{g^*} \\Big[ \\ln g_{\\theta}(\\X)\\Big]\n\t\t\t\\!\\iff\\! \\max_{\\theta \\in \\Theta}\\ \\E_{g_{\\theta'}} \\Big[\\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\X)\\frac{p(\\X)}{g_{\\theta'}(\\X)} \\ln g_{\\theta}(\\X)\\Big]\n\t\t\\end{equation}\n\n\t\t\\subsection{Stochastic iterative search}\n\n\t\tThe difficulty of solving \\eqref{eq:SP_RES_KL} by traditional \n\t\ttechniques is that $\\1_{\\{y:\\ f(y)\\le \\gamma\\}}(\\x)=0$ for most\n\t\t$\\x\\in\\D \\subseteq  \\R^n$.  To overcome this difficulty,\n\t\t\\Terms{Rubinstein (1997)} proposed a stochastic \n\t\titerative search, called the \\Terms{Cross Entropy method}.\n\t\tThe method works like peeling onions. Starting from an arbitrarily\n\t\tfixed $\\theta_0\\in \\Theta,$ the method then iterates over the following\n\t\tsteps, for every $t=0,1, 2, \\ldots, $ until $\\gamma_t \\le \\gamma,$\n\t\t\\begin{itemize}\n\t\t\t\\item generate $\\X_t^{(1)}, \\ldots, \\X_t^{(N)}$ from pdf \n\t\t\t$g_{\\theta_t}(\\cdot),$ and non-decreasingly sort them as $\\X_t^{[1]}, \\ldots, \\X_t^{[N]},$ i.e., \n\t\t\t$f\\big(\\X_t^{[1]}\\big)\\le \\cdots \\le f\\big(\\X_t^{[N]}\\big),$\n\t\t\t\\item put $\\gamma_t=f\\big(\\X_t^{[\\lceil\\alpha \\cdot N\\rceil]}\\big),$ and\n\t\t\t\\begin{equation}\\label{eq:CE_Estimate}\n\t\t\t\t\\theta_{t\\!+\\!1} = \\arg\\max_{\\theta\\in \\Theta}\\frac{1}{N}\\sum_{k=1}^{N} \\1_{\\{y:\\ f(y)\\le \\gamma_t\\}}(\\X_t^{[k]})\\frac{p(\\X_t^{[k]})}{g_{\\theta_t}(\\X_t^{[k]})} \\ln g_{\\theta}(\\X_t^{[k]}),\n\t\t\t\\end{equation} \n\t\t\t%\\item put $\\theta_{t\\!+\\! %1}=\\theta_t+\\rho(\\hat{\\theta}_{t\\!+\\!1}-\\theta_t),$\n\t\t\\end{itemize}\n\t\twhere $\\alpha\\in (0,1), N \\in \\N_+$ are parameters that need to be finely tuned. \n\n                  By the Cross-Entropy method, one can eventually find a $\\theta^*\\in \\Theta$\n                  such that $g_{\\theta^*}$ well approximates the optimal CoM $g^*$ in \\eqref{eq:OptCoM_RES}. 
Then, the probability $\\PP[\\EE]$ can be well estimated by CMC with CoM $g_{\\theta^*}.$\\\\[1ex]\n\t\t\t\\Note{How does it work?}\\\\[1ex]\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item $\\theta_{t\\!+\\!1}$ in \\eqref{eq:CE_Estimate} is a stochastic (optimal) solution to\t\n\t\t\t\t\\begin{equation}\\label{eq:CE_Iter_Obj}\n\t\t\t\t\\max_{\\theta\\in \\Theta}\\ \\int_{\\x\\in \\D} \\1_{\\{y:\\ f(y)\\le \\gamma_t\\}}(\\x)p(\\x) \\ln g_{\\theta}(\\x)d\\x,\n\t\t\t\t\\end{equation} \n\t\t\t\ttherefore, $g_{\\theta_{t\\!+\\!1}}$ may well approximate\n\t\t\t\tthe truncated density \n\t\t\t\t\\begin{equation}\\label{eq:CE_Iter_Obj_Truncate}\n\t\t\t\t\t\\frac{\\1_{\\{y:\\ f(y)\\le \\gamma_t\\}}(\\x)p(\\x)}{\n\t\t\t\t\t\t\\int_{\\mathbf{z}\\in \\D} \\1_{\\{w:\\ f(w)\\le \\gamma_t\\}}(\\mathbf{z})p(\\mathbf{z}) d \\mathbf{z}\n\t\t\t\t\t\t}\\quad \\forall \\x\\in \\D,\n\t\t\t\t\\end{equation}\n\t\t\t\t\\item $\\gamma_t=f\\big( \\X_t^{[\\lceil \\alpha \\cdot N \\rceil]} \\big)$\n\t\t\t\testimates the $\\lceil (1-\\alpha) \\cdot N \\rceil$-th order statistic of\n\t\t\t\t$f(\\X)$ with $\\X \\sim g_{\\theta_t},$ which is expected to decrease\n\t\t\t\twith $t.$ \n\t\t\t\\end{itemize}\n\n\\section{For continuous and discrete optimization}\n\\subsection{General idea}\n\n\t\\Terms{Rubinstein (1999)} extends the idea from rare-event simulation to optimization. Given an abstract optimization problem\n\t\\begin{equation*}\n\t\t\\min_{\\x \\in \\D}\\ F(\\x),\n\t\\end{equation*}\n\tfor a cost function $F(\\x),$ and feasible region $\\D\\subseteq \\R^n,$ we can\n\t\\Stress{solve the program through constructing a probability distribution concentrating on optimal solutions}. Mathematically, we can then solve the following program\n\t\\begin{equation}\\label{eq:Optimization_Obj}\n\t\t\\max_{\\theta \\in \\Theta}\\ \\E_{g_{\\theta}} \\big[\\1_{\\{y:\\ F(y)\\le \\gamma\\}}(\\X)\\big],\n\t\\end{equation}\n\twhere we only need to specify a family $\\F=\\{g_{\\theta}:\\ \\theta\\in \\Theta\\}$ of pdfs.\n\tThe $\\gamma$ can be the minimum cost value. 
In the whole optimization procedure, we do not need knowledge about $\\gamma.$\n\n      \\subsection[Alg]{Algorithm}\n\n\t\t\\Note{Input (Precondition):} a pdf family $\\F=\\{g_{\\theta}:\\ \\theta\\in \\Theta\\},$ an initial pdf index $\\theta_0\\in \\Theta,$ \n\t\ta smoothing parameter $\\rho\\in (0,1),$ an elite rate $\\alpha\\in (0,1),$\n\t\tan initial solution $\\x\\in \\D,$ and a sample size $N\\in \\N_+$\\\\[1ex]\n\t\t\\Note{Output (Postcondition):} $\\x$\\\\[1ex]\n\t\t\\Note{Pseudo code (iterates over the following steps):}\n\t\t\\begin{itemize}\n\t\t\t\\item independently generate $\\X_t^{(1)}, \\ldots, \\X_t^{(N)}$ from pdf \n\t\t\t$g_{\\theta_t}(\\cdot),$ and non-decreasingly sort them as $\\X_t^{[1]}, \\ldots, \\X_t^{[N]},$ i.e., \n\t\t\t$F\\big(\\X_t^{[1]}\\big)\\le \\cdots \\le F\\big(\\X_t^{[N]}\\big),$\n\t\t\t\\item if $F\\big(\\x\\big)> F\\big(\\X_t^{[1]}\\big), \\x=\\X_t^{[1]},$\n\t\t\t\\item $\\gamma_t=F\\big(\\X_t^{[\\lceil\\alpha \\cdot N\\rceil]}\\big),$ and\n\t\t\t\\begin{equation}\\label{eq:CE_Estimate_Optimization}\n\t\t\t\\hat{\\theta}_{t\\!+\\!1} = \\arg\\max_{\\theta\\in \\Theta}\\frac{1}{N}\\sum_{k=1}^{N} \\1_{\\{y:\\ F(y)\\le \\gamma_t\\}}(\\X_t^{[k]}) \\ln g_{\\theta}(\\X_t^{[k]}),\n\t\t\t\\end{equation} \n\t\t\t\\item $\\theta_{t\\!+\\!1}=\\theta_t+\\rho\\cdot (\\hat{\\theta}_{t\\!+\\!1}-\\theta_t).$\n\t\t\\end{itemize}\n\n\\begin{ques}\n      Why do we use a convex combination between two iterative steps?\n\\end{ques}\n\\begin{answ}\n      The objective is to prevent premature convergence.\n\\end{answ}\n\n\\begin{ques}\n      Does this algorithm work for a continuous objective function with millions of arguments?\n\\end{ques}\n\\begin{answ}\n      Yes, but we must choose a more economical parameter space to approximate the final distribution.\n      For instance, if Gaussian-type distributions require so many parameters that they exceed memory, \n      we can try uniform distributions.\n\\end{answ}\n\n    \\subsection[continuous]{Continuous optimization}\n\n    \t\tApplying the Cross-Entropy algorithm to continuous problems, we\n    \t\tneed to select a suitable family of pdfs.\\\\[1ex]\n    \t\t\\Note{Optimization with Gaussian pdfs (Normal distribution)}\n    \t\t\\begin{itemize}\n    \t\t\t\\item $\\Theta=\\{(\\mu, \\Sigma): \\mu\\in \\R^n, \\Sigma\\in \\R^{n\\times n} \\text{ is positive semidefinite}\\}, g_{(\\mu,\\Sigma)}=\\mathcal{N}(\\mu, \\Sigma)$ is the Normal distribution with mean $\\mu$ and co-variance matrix $\\Sigma,$\n    \t\t\t\\item $\\hat{\\theta}_{t\\!+\\!1}=(\\hat{\\mu}_{t\\!+\\!1}, \\hat{\\Sigma}_{t\\!+\\!1}),$ \n    \t\t\tthe mean and co-variance matrix of the \\Terms{elite} samples $\\X_t^{[1]}, \\ldots, \\X_t^{[\\lceil \\alpha \\cdot N \\rceil]}$ in iteration $t,$\n    \t\t\t\\item $\\mu_{t+1}=\\mu_t + \\rho (\\hat{\\mu}_{t\\!+\\!1}-\\mu_t), \\Sigma_{t\\!+\\!1}=\\Sigma_t+\\rho(\\hat{\\Sigma}_{t\\!+\\!1}-\\Sigma_t),$\n    \t\t\t\\item we can employ a small constant $\\delta>0$ to calibrate $\\Sigma$ so as to avoid\n    \t\t\tdegeneracy in practice, i.e., if any element in $\\Sigma_{t\\!+\\!1}$ is smaller than \n    \t\t\t$\\delta,$ then set it to be $\\delta.$\n    \t\t\t\n    \t\t\\end{itemize}\n\n\\begin{ques}\n      How does this algorithm perform in comparison with the simulated annealing method?\n\\end{ques}\n\\begin{answ}\n      This algorithm performs better in general.\n\\end{answ}\n\n\\begin{ques}\n      Can a known local minimum assist this algorithm?\n\\end{ques}\n\\begin{answ}\n      I don't think so.\n\\end{answ}\n\n    
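\\Note{A minimal sketch in code.} The Gaussian recipe above is easy to prototype. The following Python/NumPy sketch is our own illustration (the function and parameter names are ours, not from any particular library); $F$, $N$, $\\alpha$ and $\\rho$ are as defined above, and for simplicity the $\\delta$-calibration of $\\Sigma$ is implemented as a small diagonal jitter.\n\\begin{verbatim}\nimport numpy as np\n\ndef cross_entropy_minimize(F, mu, Sigma, N=100, alpha=0.1,\n                           rho=0.7, delta=1e-6, iters=200):\n    # Minimal Cross-Entropy sketch with a Gaussian family.\n    best_x, best_f = mu.copy(), F(mu)\n    n_elite = int(np.ceil(alpha * N))\n    for _ in range(iters):\n        # sample N candidates from the current Gaussian model\n        X = np.random.multivariate_normal(mu, Sigma, size=N)\n        scores = np.apply_along_axis(F, 1, X)\n        order = np.argsort(scores)\n        if scores[order[0]] < best_f:\n            best_x = X[order[0]].copy()\n            best_f = scores[order[0]]\n        # elite samples: the best ceil(alpha*N) candidates\n        elites = X[order[:n_elite]]\n        mu_hat = elites.mean(axis=0)\n        Sigma_hat = np.cov(elites, rowvar=False)\n        Sigma_hat = Sigma_hat + delta * np.eye(len(mu))  # jitter floor\n        # smoothed (convex-combination) parameter update\n        mu = mu + rho * (mu_hat - mu)\n        Sigma = Sigma + rho * (Sigma_hat - Sigma)\n    return best_x, best_f\n\n# example: minimize a simple quadratic on R^3\n# cross_entropy_minimize(lambda x: np.sum(x**2), np.zeros(3), np.eye(3))\n\\end{verbatim}\nNote the convex combination in the last two updates; as discussed above, it prevents premature convergence.\n\n    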
\\subsection[Discrete]{Discrete optimization}\n\n    \tWe generally consider combinatorial optimization, \n    \t\\[\\D\\subseteq \\underbrace{\\A\\times\\A\\times \\cdots \\times \\A}_{n}\\] for some $\\A$ with $|\\A|<\\infty$ and $n\\in \\N.$ \\\\[1ex]\n    \t\\Note{Optimization with univariate marginal distributions}\n    \t\\begin{itemize}\n    \t\t\\item $\\Theta=\\big\\{\\theta = (p_{i,j})_{n\\times |\\A|}:\\ p_{i,j}\\ge 0, \\sum_{j=1}^{|\\A|} p_{i,j}=1, \\forall i=1,\\ldots,n, j=1,\\ldots, |\\A|\\big\\}, g_{\\theta}$ is now a probability mass function \n    \t\t\\item $\\hat{\\theta}_{t\\!+\\!1}=(\\hat{p}_{i,j}^{t\\!+\\!1})_{n\\times |\\A|},$ \n    \t\t$\\hat{p}_{i,j}^{t\\!+\\!1}$ is the empirical frequency of the $j$-th element of $\\A$ in the $i$-th position of the \\Terms{elite} samples $\\X_t^{[1]}, \\ldots, \\X_t^{[\\lceil \\alpha \\cdot N \\rceil]}$ in iteration $t,$\n    \t\t\\item $p_{i,j}^{t\\!+\\!1}=p_{i,j}^{t} + \\rho (\\hat{p}_{i,j}^{t\\!+\\!1}-p_{i,j}^{t}),$\n    \t\\end{itemize}\n\n    \\section[Theory]{Theory for discrete optimization}\n    \\subsection[Process]{Underlying stochastic process}\n\n    \tWe call $\\big(Y_t\\big)_{t=0,1,2,\\ldots,}$ a \\Note{Markov Chain} (MC)\n    \twith \\Note{state space} $\\mathcal{Y}$ if every random variable \n\t(or vector) $Y_t$ is defined on the probability space \n    \t$\\big(\\Y, \\A, \\mu\\big),$ and \n    \t\\[\n    \t\\PP\\big[Y_{t+1}\\in B_{t+1}\\big|\\ Y_{t}\\in B_{t}, \\ldots, Y_0\\in B_{0}\\big]=\\PP \\big[Y_{t+1}\\in B_{t+1}\\big|\\ Y_{t}\\in B_{t}\\big],\n    \t\\]\n\twhere $B_0,\\ldots, B_{t+1}\\in \\A$ are $\\mu$-measurable sets.\\\\[1ex]\n\t\n\t\\hspace{0.5cm}If \n\t\\[\n    \t\\PP \\big[Y_{t+k}\\in B\\big|\\ Y_{t}\\in B'\\big]=\\PP \\big[Y_{k}\\in B\\big|\\ Y_{0}\\in B'\\big],\n    \t\\] \n\tfor any $t,k\\in \\N,$ any $B,B'\\in \\A,$ we call the Markov chain\n\t\\Note{time-homogeneous}, otherwise \\Note{time-inhomogeneous}. \\\\[1ex]\n\t\n    \tIn the discrete-state case, one can enumerate all the states as\n\t$0,1,\\ldots,K$ ($K\\in \\N\\cup \\{\\infty\\}$). Then, for each step \n\t$t\\in \\N,$ we can use a matrix $Q_t\\in [0,1]^{K\\times K}$ to \n\tdescribe the transition of states. For a time-homogeneous MC, \n\t$Q_t\\equiv Q,$ where $Q=\\big(q_{i,j}\\big)_{K\\times K}$ is a \n\tconstant matrix with $q_{i,j}$ describing the probability of \n\tmoving to state $j$ from state $i.$\\\\[1ex]\n\t\n\t\\hspace{0.5cm}For a discrete-state time-homogeneous MC, one\n\tcan easily show that \n\t\\[\n\tq^{(n)}_{i,j}:=\\PP[Y_{n}=j\\mid Y_{0}=i]=\\sum_{k=1}^{K}\n\tq_{ik}\\PP[Y_{n}=j\\mid Y_{1}=k]=\\sum_{k=1}^{K}\n\tq_{ik}q^{(n-1)}_{k,j},\t\n\t\\]\n\t\\[\n\tq^{(n)}_{i,j}=\\sum_{k=1}^{K}\n\tq^{(m)}_{ik}q^{(n-m)}_{k,j},\t\t\n\t\\]\n\twhere $1\\le m<n.$ Therefore, the $n$-step transition matrix\n\t$Q^{(n)}=(q_{i,j}^{(n)})=Q^n.$\\\\[1ex]\n\t\n\t\\hspace{0.5cm}A probability mass function $\\pi=(\\pi_j)_K$ on $\\Y$ is a \n\t\\Note{limiting distribution (or stationary distribution)} if \n\t$\n\t\\pi_j=\\lim_{n\\to \\infty} q_{i,j}^{(n)}\n\t$\n\tfor any $i,j=1,\\ldots, K.$\n\\hspace{0.5cm}In the case that the state space $\\Y$ is countable, the \nprocess is called a \\Note{discrete-state} Markov chain. \n\n    \tA Markov chain is said to be \\Note{irreducible} if for any \n\ttwo states $i,j\\in \\Y,$ there exist $n_1, n_2\\in \\N$ such that\n\t$q_{i,j}^{(n_1)}>0$ and $q_{j,i}^{(n_2)}>0.$ A Markov chain\n\tis said to be \\Note{aperiodic} if for every $i\\in \\Y,$ there exists an $n\\in \\N$ such that $q_{i,i}^{(n')}>0$\tfor any $n'\\ge n$ (i.e. 
``period\" is 1). \\\\[1ex]\n\t\\hspace{0.5cm}A state $i$ is \\Note{transient} if $\\PP[\\min\\{m\\ge 1:\\  Y_{m}=i\\mid Y_0=i\\}<\\infty]<1,$\n\totherwise it is called \\Note{recurrent}. For a recurrent state\n\t$i,$ if the mean recurrence time $\\Stress{M_i=\\sum_{n=1}^{\\infty} n\\PP[\\min \\{m\\ge 1:\\  Y_{m}=i\\mid Y_0=i\\}=n]}<\\infty,$ $i$ is said to be \\Note{positive recurrent}, otherwise \\Note{null recurrent}.\\\\[1ex]\n\t\\Note{Theorem (Steady analysis for discrete-state time-homogeneous MC)}\\\\\n\tAn irreducible MC has a limiting distribution $\\pi=(\\pi_j)_K$ if and only if\n\tit is aperiodic and all of its states are positive recurrent. Moreover, $\\pi_j=C/M_j,$ for every state $j=1,\\ldots, K,$ and $C$ is\n\ta normalizing constant.\\\\[1ex]\n\t\\hspace{0.5cm}When the limiting distribution $\\pi$ exists, \n\t\\[\n\t\\lim_{n\\to\\infty}Q^{(n)}=\\left(\n\t\\begin{array}{c}\n\t\\pi\\\\\n\t\\ldots\\\\\n\t\\pi\n\t\\end{array}\n\t\\right),\n\t\\quad \\pi=\\pi\\cdot Q.\n\t\\] \n\n\tThe above steady analysis applies to time-homogeneous MCs. Actually, for a time-inhomogeneous MC, a limiting distribution may also exist. However, this requires the so-called \\Note{detailed balance}. \\\\[1ex]\n\t\\Note{Theorem (Steady analysis under detailed balance)}\\\\\n\tAssume that there is a distribution $\\pi=(\\pi_k)_K,$ such that\n\tfor any $n\\in \\N, i,j=1,\\ldots,K,$ \n\t\\[\n\t\\pi_j=\\sum_{i=1}^{K} \\pi_i\\cdot \\PP[Y_{n+1}=j\\mid Y_{n}=i],\n\t\\]\n\tthen $\\pi$ is a limiting distribution of the MC.\\\\[1ex]\n\t\\Note{Remark:} The famous Markov Chain Monte-Carlo algorithm is\n\tdesigned based on this theorem.\n\n\\begin{note}\n      For a time-homogeneous discrete-state Markov chain, we can use the limiting distribution \n      with respect to the transition matrix to determine the recurrent states of the MC.\n\\end{note}\n\n\\begin{ques}\n      For a Markov chain, if we don't know the transition matrix beforehand, can we determine the recurrent states?\n\\end{ques}\n\\begin{answ}\n      Other information about the transitions must be available.\n\\end{answ}\n\n    \\subsection[Results]{Available results for discrete optimization}\n\n    \tSo far, theory is available only for the discrete case. For the continuous case, the\n    \ttheory is still almost blank. The following theory is for discrete optimization.\n    \tThe results do not impose any restrictions on the problems. Therefore, they apply to\n    \tall discrete problems.\\\\[1ex]\n    \t\\Note{Theorem (Finite reachability, [Wu \\& Kolonko, 2014, TEVC])}\\\\[1ex]\n    \t$\\PP [\\exists T:\\ \\X_T \\cap \\mathcal{S}^*\\neq \\emptyset]\\to 1$ as $\\rho \\to 0$ or $N\\to \\infty.$\\\\[2ex]\n    \t\\Note{Theorem (Convergence of solutions, [Wu \\& Kolonko, 2014, TEVC])}\\\\[1ex]\n    \t$\\PP [\\exists \\x \\in \\D\\ \\exists T\\in \\N_+\\ \\forall t\\ge T:\\ \\X_t=(\\x, \\x,\\ldots, \\x)\\in \\D^{N}]=1$ (Genetic drift).\\\\[2ex]\n    \t\\Note{Theorem (Convergence of distributions, [Wu \\& Kolonko, 2014, TEVC])}\\\\[1ex]\n    \t$\\PP [\\exists \\theta\\in \\Theta:\\ \\lim_{t\\to \\infty} \\theta_t =\\theta]=1.$\n\n\\begin{note}\n      This algorithm can be applied to both continuous and discrete optimization problems. 
\n      For discrete problems, some convergence results can be obtained.\n      There are still many open issues for continuous problems.\n\\end{note}\n\n    \t\tThe following lists more theory for particular problems.\\\\[1ex]\n    \t\t\\Note{Theorem (Runtime for simple problems, [Wu, Kolonko \\& M{\\\"ohring}, 2017, TEVC])}\\\\[1ex]\n    \t\t\\begin{itemize}\n    \t\t\t\\item \\Terms{OneMax:} $\\forall \\epsilon>0,\\ \\PP[\\tau \\in O(n^{1.5+\\epsilon})]\\ge 1-e^{-\\Omega(n^\\epsilon)}.$\n    \t\t\t\\item \\Terms{LeadingOnes:} $\\forall \\epsilon>0,\\ \\PP[\\tau \\in O(n^{2+\\epsilon})]\\ge 1-e^{-\\Omega(n^\\epsilon)}.$\n    \t\t\\end{itemize}\n    \t\t\\Note{Theorem (Runtime for TSP, [Wu, M{\\\"ohring} \\& Lai, 2017, ArXiv])}\\\\[1ex]\n    \t\t\\begin{itemize}\n    \t\t\t\\item Simple TSP: $\\forall \\epsilon>0,\\ \\PP[\\tau \\in O(n^{3+\\epsilon}\\ln n)]\\ge 1-e^{-\\Omega(n^{\\epsilon})}$\n    \t\t\t\\item Convex TSP: $\\forall \\epsilon>0,\\ \\PP [\\tau \\in O(n^2m^{5+\\epsilon})] \\ge 1-e^{-\\Omega(m^\\epsilon)}$\n    \t\t\t\\item General TSP: $\\forall \\epsilon>0,\\ \\PP [\\tau \\in O(n^3m^{5+\\epsilon}+n^{3k}m^{\\epsilon})]\\ge 1-e^{-\\Omega(m^\\epsilon)}$\n    \t\t\\end{itemize}\n    \t\twhere the TSP instance is positioned on an $m\\times m$-grid. \n\n    \t\\section[Extension]{Extension}\n    \t\tThe idea can be made more general. One can use more selection\n    \t\tmethods and estimation approaches. This results in the more general\n    \t\tidea of \\Terms{Model-based search}, see, e.g., \\Def{[Wu, 2016, Thesis], [Zloch et al 2006].}\\\\[1ex]\n    \t\t\\Note{Solution vs. Model}\n    \t\t\\begin{itemize}\n    \t\t\t\\item \\Stress{Solution-based:} genetic algorithm, simulated annealing, gradient descent, particle swarm optimization, etc\n    \t\t\t\\item \\Stress{Model-based:} cross-entropy algorithm, ant colony optimization, estimation of distribution algorithms (MIMIC, BOA, compact GA, etc).\n    \t\t\t\\item Solution-based is efficient, easy to implement\n    \t\t\t\\item Model-based is more stable, higher quality\n    \t\t\\end{itemize}\n\\begin{note}\n      This algorithm is one instance of the more general class of model-based algorithms,\n      which are quite different from the classical algorithms based on local information.\n      The solutions are more stable and of higher quality in comparison with the classical ones,\n      but may take longer to compute.\n\\end{note}\n\n\\begin{ques}\n      Are the derivatives of the objective function helpful to this algorithm? 
How can this algorithm \n      combine with the gradient-descent-type algorithm?\n\\end{ques}\n\\begin{answ}\n      We are considering that.\n\\end{answ}\n", "meta": {"hexsha": "6b954abd5dbe5f6aa0f5c80fc303a6c70fe4f064", "size": 19136, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/ZJWu.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/ZJWu.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/ZJWu.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.8590785908, "max_line_length": 226, "alphanum_fraction": 0.6332044314, "num_tokens": 6812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5948210713854244}}
{"text": "%!TEX root = RBM.tex\n\n\n\\subsection{Theoretical Analysis}\n\n% Since TAP's result is relatively irrelevant to the run time and resources. We use the value estimated in the last subsection.\n\n% \\subsubsection{Practice}\n% \\para{Complexity}\n% For horizontal comparison on complexity, we let the correctness \\& stability be approximately the same (i.e. the correctness the AIS could achieve in 2 seconds), and compare the run time for the three algorithms.\n\n% We have conducted 20 experiments for the acquisition of $Z(\\theta)$.\n\n% From our practice, we have found that xxx gives the most agreeable result while TAP completely underestimates the value.\n\n\n% \\para{Correctness \\& Stability}\n% For horizontal comparison on the correctness \\& stability, we allow the same run time for the AIS and RTS algorithms to compute (i.e. 10 seconds per experiment). We have conducted 20 experiments for the acquisition of $Z(\\theta)$.\n\n% From our practice, we have found that xxx gives the most agreeable result while TAP completely underestimates the value.\n\n% We also notice that the variance of ... is the smallest which indicates it has the best stability.\n\n\n\\para{RTS}\nFrom a theoretical perspective, we have proven the bias \\& variance of the RTS method to be:\n\\begin{equation}\nE{[ \\log \\hat{Z}_{k}^{RTS} ]} - \\log Z_{k} \\approx \\frac{1}{2}{[\\frac{\\sigma^{2}_{1}}{\\hat{c}^{2}_{1}}-\\frac{\\sigma^{2}_{k}}{\\hat{c}^{2}_{k}}]} \n\\end{equation}\n\\begin{equation}\nVar{[ \\log \\hat{Z}_{k}^{RTS} ]} \\approx \\frac{\\sigma^{2}_{1}}{\\hat{c}^{2}_{1}} + \\frac{\\sigma^{2}_{k}}{\\hat{c}^{2}_{k}} - \\frac{2\\sigma_{1k}}{\\hat{c}_{k}\\hat{c}_{1}}\n\\end{equation}\nwhere $\\sigma^{2}_{k} = Var{[\\hat{c}_{k}]}$ and $\\sigma_{1k} = Cov{[\\hat{c}_{1},\\hat{c}_{k}]}$.\n\nThis shows that the bias of RTS has no definite sign.\n\n\\para{AIS}\nFor AIS, however, Neal and Jarzynski \\cite{neal2001annealed,nonequilibrium} have shown that for the result to be unbiased, we would have to let $M=1$ in the iteration, which forfeits the advantage of AIS. Conversely, if $M > 1$, the estimate has a negative bias due to Jensen's inequality.\n\nIn our experiments, when AIS and RTS achieve comparably good estimates, RTS's efficiency is slightly lower than AIS's.\n\n\\para{TAP}\nAlthough TAP shows the best efficiency, its results are the most disappointing: TAP clearly underestimates the value of the partition function.\n\nWe did not analyze in depth why it fails to produce a satisfactory result, but our intuition is that the Legendre transform may be responsible. 
In our implementation, we only took the Legendre-transform expansion to second order, which might account for the underestimation.\n", "meta": {"hexsha": "fdcdb648fe35c73d896dbdad96cece35344d0cc5", "size": 2716, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/theoretical.tex", "max_stars_repo_name": "lzhbrian/MCMC", "max_stars_repo_head_hexsha": "0dd3aadd1ed2833aff76bd7af4b014282739984b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-09-10T04:42:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-21T16:07:29.000Z", "max_issues_repo_path": "tex/theoretical.tex", "max_issues_repo_name": "lzhbrian/MCMC", "max_issues_repo_head_hexsha": "0dd3aadd1ed2833aff76bd7af4b014282739984b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/theoretical.tex", "max_forks_repo_name": "lzhbrian/MCMC", "max_forks_repo_head_hexsha": "0dd3aadd1ed2833aff76bd7af4b014282739984b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-03-03T17:34:05.000Z", "max_forks_repo_forks_event_max_datetime": "2019-03-24T10:54:53.000Z", "avg_line_length": 59.0434782609, "max_line_length": 332, "alphanum_fraction": 0.7334315169, "num_tokens": 748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7879311956428947, "lm_q1q2_score": 0.5948210682113505}}
{"text": "% Copyright 2011-2015 David Hadka.  All Rights Reserved.\r\n%\r\n% This file is part of the MOEA Framework User Manual.\r\n%\r\n% Permission is granted to copy, distribute and/or modify this document under\r\n% the terms of the GNU Free Documentation License, Version 1.3 or any later\r\n% version published by the Free Software Foundation; with the Invariant Section\r\n% being the section entitled \"Preface\", no Front-Cover Texts, and no Back-Cover\r\n% Texts.  A copy of the license is included in the section entitled \"GNU Free\r\n% Documentation License\".\r\n\r\n\\chapter{Example: Knapsack Problem}\r\n\r\nIn this chapter, we will walk through a complete example of creating a new optimization problem and solving it using the MOEA Framework.  This example serves as a review of the topics learned thus far.  We will also introduce several new concepts such as constraint handling.\r\n\r\nThe problem we will be solving is the multiobjective version of the knapsack problem.  The knapsack problem (discussed in much detail at \\webpage{http://en.wikipedia.org/wiki/Knapsack_problem}) is a famous combinatorial problem that involves choosing which items to place in a knapsack to maximize the value of the items carried without exceeding the weight capacity of the knapsack.  More formally, we are given $N$ items.  Each item has a profit, $P(i)$, and weight, $W(i)$, for $i = {1, 2, \\ldots, N}$.  Let $d(i)$ represent our decision to place the $i$-th item in the knapsack, where $d(i)=1$ if the item is put into the knapsack and $d(i)=0$ otherwise.  If the knapsack has a weight capacity of $C$, then the knapsack problem is defined as:\r\n\r\n\\begin{equation*}\r\n  \\text{Maximize} \\; \\sum_{i=1}^{N} d(i)*P(i) \\; \\text{such that} \\; \\sum_{i=1}^{N} d(i)*W(i) \\leq C\r\n\\end{equation*}\r\n\r\nThe summation on the left (which we are maximizing) calculates the total profit we gain from the items placed in the knapsack.  The summation on the right side is a constraint that ensures the items placed in the knapsack do not exceed the weight capacity of the knapsack.\r\n\r\nThe multiobjective knapsack problem that we will be solving in this section is very similar, except that we now have $2$ knapsacks to hold the items.  Additionally, the profit and weights vary depending on which knapsack is holding each item.  For example, an item will have a profit of $\\$25$ and a weight of $5$ pounds in the first knapsack, but will have a profit of $\\$15$ and a weight of $8$ pounds in the second knapsack.  (It may seem unusual that the weight changes, but that is how the problem is defined in the literature.)   Thus, profit is now defined by $P(i,j)$ and weight by $W(i,j)$, where the $j = {1, 2}$ term is the knapsack index.  Lastly, each knapsack defines its own capacity, $C_1$ and $C_2$.  Combining all of this, the multiobjective knapsack problem is formally defined as:\r\n\r\n\\begin{equation*}\r\n\\begin{array}{l}\r\n  \\text{Maximize} \\; \\sum_{i=1}^{N} d(i)*P(i,1) \\; \\text{such that} \\; \\sum_{i=1}^{N} d(i)*W(i,1) \\leq C_1 \\; \\text{and}\\\\\r\n  \\text{Maximize} \\; \\sum_{i=1}^{N} d(i)*P(i,2) \\; \\text{such that} \\; \\sum_{i=1}^{N} d(i)*W(i,2) \\leq C_2\r\n\\end{array}\r\n\\end{equation*}\r\n\r\nOnce we have a firm understanding of the optimization problem, we can now work on solving this problem.  
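\r\n\r\nTo make the notation concrete, here is a quick check with hypothetical numbers (they are not taken from any benchmark instance): suppose a single knapsack with $N=2$ items, $P(1)=3$, $P(2)=5$, $W(1)=4$, $W(2)=6$ and $C=7$.  The decision $d=(1,0)$ yields profit $3$ with weight $4 \\leq 7$, so it is feasible; the decision $d=(1,1)$ yields the larger profit $8$, but its weight $10 > 7$ violates the capacity constraint.\r\n\r\n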
You can find all of the code for this example in the \\folder{examples/org/moeaframework/examples/ga/knapsack} folder in the source code distribution.\r\n\r\n\\section{Data Files}\r\nWe begin by developing a way to store all of the information required by the knapsack problem --- profits, weights, capacities --- in a text file.  This will let us quickly generate and run new inputs for this problem.  Fortunately, two researchers, Eckart Zitzler and Marco Laumanns, have already created a file format for multiobjective knapsack problems at \\webpage{http://www.tik.ee.ethz.ch/sop/download/supplementary/testProblemSuite/}.  For example, a simple $5$ item problem instance would appear as follows.  \r\n\r\n\\begin{lstlisting}[language=plaintext]\r\nknapsack problem specification (2 knapsacks, 5 items)\r\n=\r\nknapsack 1:\r\n capacity: +251\r\n item 1:\r\n  weight: +94\r\n  profit: +57\r\n item 2:\r\n  weight: +74\r\n  profit: +94\r\n item 3:\r\n  weight: +77\r\n  profit: +59\r\n item 4:\r\n  weight: +74\r\n  profit: +83\r\n item 5:\r\n  weight: +29\r\n  profit: +82\r\n=\r\nknapsack 2:\r\n capacity: +190\r\n item 1:\r\n  weight: +55\r\n  profit: +20\r\n item 2:\r\n  weight: +10\r\n  profit: +19\r\n item 3:\r\n  weight: +97\r\n  profit: +20\r\n item 4:\r\n  weight: +73\r\n  profit: +66\r\n item 5:\r\n  weight: +69\r\n  profit: +48\r\n\\end{lstlisting}\r\n\r\nWe will re-use this file format in this example.  One advantage is that you can download any of the example knapsack problems from \\webpage{http://www.tik.ee.ethz.ch/sop/download/supplementary/testProblemSuite/} and solve them with the program we are writing.  Go ahead and save this example input file to \\file{knapsack.5.2}.  We will then load and solve this data file later in this chapter.\r\n\r\n\\section{Encoding the Problem}\r\nThe next step is to decide upon the encoding for the decision variables.  Observe that we are deciding which items to place in the knapsacks.  Recalling \\chptref{chpt:representations}, the bit string representation works well for situations where we are making many yes/no decisions.  For example, if $N=5$, we can represent the decision to include each item using a bit string with $5$ bits.  Each bit in the string corresponds to an item, and is set to \\java{1} if the item is included and \\java{0} if the item is excluded.  
For instance, the bit string \\java{00110} would place items 3 and 4 inside the knapsacks, excluding the rest.\r\n\r\n\\section{Implementing the Problem}\r\nHaving decided upon an encoding, we can now implement the knapsack problem as shown below.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nimport java.io.File;\r\nimport java.io.FileReader;\r\nimport java.io.IOException;\r\nimport java.io.InputStream;\r\nimport java.io.InputStreamReader;\r\nimport java.io.Reader;\r\nimport java.util.regex.Matcher;\r\nimport java.util.regex.Pattern;\r\n\r\nimport org.moeaframework.core.Problem;\r\nimport org.moeaframework.core.Solution;\r\nimport org.moeaframework.core.variable.BinaryVariable;\r\nimport org.moeaframework.core.variable.EncodingUtils;\r\nimport org.moeaframework.util.Vector;\r\nimport org.moeaframework.util.io.CommentedLineReader;\r\n\r\n/**\r\n * Multiobjective 0/1 knapsack problem.\r\n */\r\n public class Knapsack implements Problem {\r\n\r\n\t/**\r\n\t * The number of sacks.\r\n\t */\r\n\tprivate int nsacks;\r\n\r\n\t/**\r\n\t * The number of items.\r\n\t */\r\n\tprivate int nitems;\r\n\r\n\t/**\r\n\t * Entry {@code profit[i][j]} is the profit from including\r\n\t * item {@code j} in sack {@code i}.\r\n\t */\r\n\tprivate int[][] profit;\r\n\r\n\t/**\r\n\t * Entry {@code weight[i][j]} is the weight incurred from\r\n\t * including item {@code j} in sack {@code i}.\r\n\t */\r\n\tprivate int[][] weight;\r\n\r\n\t/**\r\n\t * Entry {@code capacity[i]} is the weight capacity of sack\r\n\t * {@code i}.\r\n\t */\r\n\tprivate int[] capacity;\r\n\r\n\t/**\r\n\t * Constructs a multiobjective 0/1 knapsack problem instance\r\n\t * loaded from the specified file.\r\n\t * \r\n\t * @param file the file containing the knapsack problem\r\n\t *        instance\r\n\t * @throws IOException if an I/O error occurred\r\n\t */\r\n\tpublic Knapsack(File file) throws IOException {\r\n\t\tthis(new FileReader(file));\r\n\t}\r\n\t\r\n\t/**\r\n\t * Constructs a multiobjective 0/1 knapsack problem instance\r\n\t * loaded from the specified input stream.\r\n\t * \r\n\t * @param inputStream the input stream containing the knapsack\r\n\t *         problem instance\r\n\t * @throws IOException if an I/O error occurred\r\n\t */\r\n\tpublic Knapsack(InputStream inputStream) throws IOException {\r\n\t\tthis(new InputStreamReader(inputStream));\r\n\t}\r\n\t\r\n\t/**\r\n\t * Constructs a multiobjective 0/1 knapsack problem instance\r\n\t * loaded from the specified reader.\r\n\t * \r\n\t * @param reader the reader containing the knapsack problem\r\n\t *        instance\r\n\t * @throws IOException if an I/O error occurred\r\n\t */\r\n\tpublic Knapsack(Reader reader) throws IOException {\r\n\t\tsuper();\r\n\t\t\r\n\t\tload(reader);\r\n\t}\r\n\r\n\t/**\r\n\t * Loads the knapsack problem instance from the specified\r\n\t * reader.\r\n\t * \r\n\t * @param reader the file containing the knapsack problem\r\n\t *        instance\r\n\t * @throws IOException if an I/O error occurred\r\n\t */\r\n\tprivate void load(Reader reader) throws IOException {\r\n\t\tPattern specificationLine = Pattern.compile(\"knapsack problem specification \\\\((\\\\d+) knapsacks, (\\\\d+) items\\\\)\");\r\n\t\tPattern capacityLine = Pattern.compile(\" capacity: \\\\+(\\\\d+)\");\r\n\t\tPattern weightLine = Pattern.compile(\"  weight: \\\\+(\\\\d+)\");\r\n\t\tPattern profitLine = Pattern.compile(\"  profit: \\\\+(\\\\d+)\");\r\n\r\n\t\tCommentedLineReader lineReader = null;\r\n\t\tString line = null;\r\n\t\tMatcher matcher = null;\r\n\t\t\r\n\t\ttry 
{\r\n\t\t\tlineReader = new CommentedLineReader(reader);\r\n\t\t\tline = lineReader.readLine(); // problem specification\r\n\t\t\tmatcher = specificationLine.matcher(line);\r\n\t\t\t\r\n\t\t\tif (matcher.matches()) {\r\n\t\t\t\tnsacks = Integer.parseInt(matcher.group(1));\r\n\t\t\t\tnitems = Integer.parseInt(matcher.group(2));\r\n\t\t\t} else {\r\n\t\t\t\tthrow new IOException(\"knapsack data file \" +\r\n\t\t\t\t\t\t\"not properly formatted: invalid specification \" +\r\n\t\t\t\t\t\t\"line\");\r\n\t\t\t}\r\n\t\t\t\r\n\t\t\tcapacity = new int[nsacks];\r\n\t\t\tprofit = new int[nsacks][nitems];\r\n\t\t\tweight = new int[nsacks][nitems];\r\n\t\r\n\t\t\tfor (int i = 0; i < nsacks; i++) {\r\n\t\t\t\tline = lineReader.readLine(); // line containing \"=\"\r\n\t\t\t\tline = lineReader.readLine(); // knapsack i\r\n\t\t\t\tline = lineReader.readLine(); // the knapsack capacity\r\n\t\t\t\tmatcher = capacityLine.matcher(line);\r\n\t\t\t\t\r\n\t\t\t\tif (matcher.matches()) {\r\n\t\t\t\t\tcapacity[i] = Integer.parseInt(matcher.group(1));\r\n\t\t\t\t} else {\r\n\t\t\t\t\tthrow new IOException(\"knapsack data file \" +\r\n\t\t\t\t\t\t\t\"not properly formatted: invalid capacity line\");\r\n\t\t\t\t}\r\n\t\r\n\t\t\t\tfor (int j = 0; j < nitems; j++) {\r\n\t\t\t\t\tline = lineReader.readLine(); // item j\r\n\t\t\t\t\tline = lineReader.readLine(); // the item weight\r\n\t\t\t\t\tmatcher = weightLine.matcher(line);\r\n\t\t\t\t\t\r\n\t\t\t\t\tif (matcher.matches()) {\r\n\t\t\t\t\t\tweight[i][j] = Integer.parseInt(matcher.group(1));\r\n\t\t\t\t\t} else {\r\n\t\t\t\t\t\tthrow new IOException(\"knapsack data file \" +\r\n\t\t\t\t\t\t\t\t\"not properly formatted: invalid weight line\");\r\n\t\t\t\t\t}\r\n\t\r\n\t\t\t\t\tline = lineReader.readLine(); // the item profit\r\n\t\t\t\t\tmatcher = profitLine.matcher(line);\r\n\t\t\t\t\t\r\n\t\t\t\t\tif (matcher.matches()) {\r\n\t\t\t\t\t\tprofit[i][j] = Integer.parseInt(matcher.group(1));\r\n\t\t\t\t\t} else {\r\n\t\t\t\t\t\tthrow new IOException(\"knapsack data file \" +\r\n\t\t\t\t\t\t\t\t\"not properly formatted: invalid profit line\");\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t} finally {\r\n\t\t\tif (lineReader != null) {\r\n\t\t\t\tlineReader.close();\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n\r\n\t@Override\r\n\tpublic void evaluate(Solution solution) {\r\n\t\tboolean[] d = EncodingUtils.getBinary(\r\n\t\t\t\tsolution.getVariable(0));\r\n\t\tdouble[] f = new double[nsacks];\r\n\t\tdouble[] g = new double[nsacks];\r\n\r\n\t\t// calculate the profits and weights for the knapsacks\r\n\t\tfor (int i = 0; i < nitems; i++) {\r\n\t\t\tif (d[i]) {\r\n\t\t\t\tfor (int j = 0; j < nsacks; j++) {\r\n\t\t\t\t\tf[j] += profit[j][i];\r\n\t\t\t\t\tg[j] += weight[j][i];\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// check if any weights exceed the capacities\r\n\t\tfor (int j = 0; j < nsacks; j++) {\r\n\t\t\tif (g[j] <= capacity[j]) {\r\n\t\t\t\tg[j] = 0.0;\r\n\t\t\t} else {\r\n\t\t\t\tg[j] = g[j] - capacity[j];\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// negate the objectives since Knapsack is maximization\r\n\t\tsolution.setObjectives(Vector.negate(f));\r\n\t\tsolution.setConstraints(g);\r\n\t}\r\n\r\n\t@Override\r\n\tpublic String getName() {\r\n\t\treturn \"Knapsack\";\r\n\t}\r\n\r\n\t@Override\r\n\tpublic int getNumberOfConstraints() {\r\n\t\treturn nsacks;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic int getNumberOfObjectives() {\r\n\t\treturn nsacks;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic int getNumberOfVariables() {\r\n\t\treturn 1;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic Solution newSolution() 
{\r\n\t\tSolution solution = new Solution(1, nsacks, nsacks);\r\n\t\tsolution.setVariable(0, EncodingUtils.newBinary(nitems));\r\n\t\treturn solution;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic void close() {\r\n\t\t//do nothing\r\n\t}\r\n\r\n}\r\n\\end{lstlisting}\r\n\r\nIt is not vitally important to understand all of the code.  Much of the code is for loading the data file discussed in the previous section.  The key sections of the code you should pay attention to are the \\java{evaluate} method starting on line 168 and the \\java{newSolution} method on line 219.  Starting with the \\java{newSolution} method, notice how line 220 creates a solution using the three-argument constructor, \\java{new Solution(1, nsacks, nsacks)}.  The three argument constructor is used to define constraints.  In this example, we are defining a problem with \\java{1} decision variable, \\java{nsacks} objectives, and \\java{nsacks} constraints --- one objective and one constraint for each knapsack.  Then on line 221 we set the one decision variable to be a bit string (binary encoding) with \\java{nitems} bits.\r\n\r\nThe \\java{evaluate} method on line 168 is where the knapsack equations from the beginning of this chapter are calculated.  We extract the bit string from the solution we are evaluating on line 169.  When the bit is set to \\java{1}, the corresponding item is placed in both knapsacks.  Lines 175-182 sum up the profit and weight in each knapsack.  Lines 185-191 then check if any of the weights exceeds the capacity of any knapsack.  If the weight is less than the capacity, then the constraint is satisfied as we set the constraint value to 0 (line 187).  However, if the capacity is exceeded, then the constraint is violated and we set the constraint to a non-zero value (line 189).  To reiterate, constraints that are satisfied have a value of zero; violated constraints have non-zero values (both positive and negative).\r\n\r\nLastly, we set the objective values on line 194 and the constraint values on line 195.  Note on line 194 how we negate the objective values.  This is because we are trying to maximize the objectives (the profits).  See \\sectref{sect:maximizing} for additional details on maximizing objectives.\r\n\r\n\\section{Solving the Problem}\r\nWith the problem implemented in Java, we can now solve the multiobjective knapsack problem using the optimization algorithms provided by the MOEA Framework.  In this example, we will use the NSGA-II algorithm as shown below.  
\r\n\r\n\\begin{lstlisting}[language=Java]\r\nimport java.io.File;\r\nimport java.io.IOException;\r\nimport java.io.InputStream;\r\nimport org.moeaframework.Executor;\r\nimport org.moeaframework.core.NondominatedPopulation;\r\nimport org.moeaframework.core.Solution;\r\nimport org.moeaframework.util.Vector;\r\n\r\n/**\r\n * Example of binary optimization using the {@link Knapsack}\r\n * problem\r\n */\r\npublic class KnapsackExample {\r\n\r\n\t/**\r\n\t * Starts the example running the knapsack problem.\r\n\t * \r\n\t * @param args the command line arguments\r\n\t * @throws IOException if an I/O error occurred\r\n\t */\r\n\tpublic static void main(String[] args) throws IOException {\t\t\t\r\n\t\t// solve using NSGA-II\r\n\t\tNondominatedPopulation result = new Executor()\r\n\t\t\t\t.withProblemClass(Knapsack.class,\r\n\t\t\t\t\t\tnew File(\"knapsack.5.2\"))\r\n\t\t\t\t.withAlgorithm(\"NSGAII\")\r\n\t\t\t\t.withMaxEvaluations(50000)\r\n\t\t\t\t.distributeOnAllCores()\r\n\t\t\t\t.run();\r\n\r\n\t\t// print the results\r\n\t\tfor (int i = 0; i < result.size(); i++) {\r\n\t\t\tSolution solution = result.get(i);\r\n\t\t\tdouble[] objectives = solution.getObjectives();\r\n\t\t\t\t\t\r\n\t\t\t// negate objectives to return them to their maximized\r\n\t\t\t// form\r\n\t\t\tobjectives = Vector.negate(objectives);\r\n\t\t\t\t\t\r\n\t\t\tSystem.out.println(\"Solution \" + (i+1) + \":\");\r\n\t\t\tSystem.out.println(\"    Sack 1 Profit: \" + objectives[0]);\r\n\t\t\tSystem.out.println(\"    Sack 2 Profit: \" + objectives[1]);\r\n\t\t\tSystem.out.println(\"    Binary String: \" +\r\n\t\t\t\t\tsolution.getVariable(0));\r\n\t\t}\r\n\t}\r\n\r\n}\r\n\\end{lstlisting}\r\n\r\nHere, we are using the \\java{Executor} to configure and solve the Knapsack problem.  Please refer to \\chptref{chpt:executor} for more details.  You can now run this example code.  If all goes well, you will see output similar to:\r\n\r\n\\begin{lstlisting}[language=plaintext]\r\nSolution 1:\r\n    Sack 1 Profit: 259.0\r\n    Sack 2 Profit: 133.0\r\n    Binary String: 01011\r\n\\end{lstlisting}\r\n\r\nIn this case, only one Pareto optimal solution was found.  You can see the profits for each knapsack as well as identify which items were selected in this solution from the binary string being displayed.\r\n\r\n\\section{Conclusion}\r\nThis chapter walked you through a complete example of defining a new problem and solving it using the MOEA Framework.  You should now have a general understanding of using the MOEA Framework.  
We recommend walking through the other examples in the \\folder{examples} folder provided in the source code distribution.", "meta": {"hexsha": "4902dfe5c9cf51867386be6fa2a96efc4b0e8ac3", "size": 16469, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/knapsack.tex", "max_stars_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_stars_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_stars_repo_licenses": ["Intel"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/knapsack.tex", "max_issues_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_issues_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_issues_repo_licenses": ["Intel"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/knapsack.tex", "max_forks_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_forks_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_forks_repo_licenses": ["Intel"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.1202046036, "max_line_length": 826, "alphanum_fraction": 0.6977958589, "num_tokens": 4251, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5948210644519962}}
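For orientation, the parsing logic in the \java{Knapsack} constructor shown earlier implies the overall shape of the data file: a specification line giving the number of knapsacks and items; then, for each knapsack, a separator line of \java{=} characters, a knapsack heading, and a capacity line; and, for each item within a knapsack, an item heading followed by a weight line and a profit line.  The fragment below is a hypothetical sketch of that layout for a small two-sack instance.  The exact wording of each line is dictated by the regular expressions (\java{specificationLine}, \java{capacityLine}, \java{weightLine}, and \java{profitLine}) defined earlier in the class, so treat it as an illustration rather than a verbatim excerpt of \java{knapsack.5.2}.

\begin{lstlisting}[language=plaintext]
knapsack problem specification (2 knapsacks, 2 items)
=====================================================
knapsack 1:
 capacity: +80
 item 1:
  weight: +35
  profit: +55
 item 2:
  weight: +45
  profit: +10
(knapsack 2 and its items follow in the same format)
\end{lstlisting}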
{"text": "\\subsection{Group actions}\\label{subsec:group_actions}\n\n\\begin{definition}\\label{def:endomorphism_monoid}\n  Given a \\hyperref[def:category_size]{locally small} \\hyperref[def:category]{category} \\( \\cat{C} \\), we call \\( \\cat{C}(A) \\) the \\term{endomorphism monoid} over \\( A \\) and denote it by \\( \\End(A) \\). If \\( A \\) is the only object in \\( \\cat{C} \\), we can identify the entire category \\( \\cat{C} \\) with the monoid \\( \\End(A) \\).\n\\end{definition}\n\n\\begin{remark}\\label{rem:monoid_action_endomorphisms}\n  It is tempting to define a monoid action over an object \\( A \\) of an arbitrary locally small category rather than over a set rather than specifying the properties of an action (e.g. \\enquote{linear action}). Unfortunately, even the simplest examples of monoid actions may fail to hold nice properties. See, e.g. \\fullref{thm:monoid_is_action} or \\fullref{thm:cayleys_theorem}.\n\\end{remark}\n\n\\begin{definition}\\label{def:left_monoid_action}\\mcite[159]{Knapp2016BasicAlgebra}\n  Let \\( \\mscrM \\) be a \\hyperref[def:unital_magma/monoid]{monoid} and let \\( A \\) be a set. A \\term{left monoid action} of \\( \\mscrM \\) on \\( A \\) can be defined equivalently as:\n  \\begin{thmenum}\n    \\thmitem{def:left_monoid_action/homomorphism} A homomorphism from \\( \\mscrM \\) to the \\hyperref[def:multi_valued_function/endofunction]{endofunction} monoid \\( \\End(A) \\).\n    \\thmitem{def:left_monoid_action/indirect_homomorphism} An indexed family of endofunctions \\( \\{ \\tau_x \\}_{x \\in M} \\) such that\n    \\begin{equation}\\label{eq:def:left_monoid_action/indirect_homomorphism}\n      \\tau_{xy} = \\tau_x \\circ \\tau_y \\T{for all} x, y \\in M.\n    \\end{equation}\n\n    \\thmitem{def:left_monoid_action/operation} A heterogeneous \\hyperref[def:magma]{algebraic operation} \\( \\circ: \\mscrM \\times A \\to A \\), written using juxtaposition, such that\n    \\begin{thmenum}\n      \\thmitem{def:left_monoid_action/operation/identity} \\( e \\odot a = a \\) holds for any \\( a \\in A \\).\n      \\thmitem{def:left_monoid_action/operation/compatibility} \\( (xy) \\odot a = x \\odot (y \\odot a) \\) holds whenever \\( x, y \\in M \\) and \\( a \\in A \\).\n    \\end{thmenum}\n\n    See \\fullref{rem:theory_of_left_monoid_actions} for the \\hyperref[def:first_order_theory]{first-order logic theory} behind this operation.\n  \\end{thmenum}\n\n  We say that \\( \\mscrM \\) acts on \\( A \\) and optionally add adjectives, e.g. \\enquote{\\( \\mscrM \\) acts linearly/smoothly on \\( A \\)}.\n\\end{definition}\n\\begin{proof}\n  \\ImplicationSubProof{def:left_monoid_action/homomorphism}{def:left_monoid_action/indirect_homomorphism} Let \\( \\tau: \\mscrM \\to \\End(A) \\) be a homomorphism. Thus, we assign a morphism \\( \\tau(x) \\) for each member \\( x \\in M \\). Furthermore, since monoid operation on \\( \\End(A) \\) is function composition and since \\( \\tau \\) is a homomorphism, we have\n  \\begin{equation*}\n    \\tau(xy) = \\tau(x) \\circ \\tau(y).\n  \\end{equation*}\n\n  \\ImplicationSubProof{def:left_monoid_action/indirect_homomorphism}{def:left_monoid_action/operation} Assume that we have a morphism \\( \\tau_x: A \\to A \\) for each \\( x \\in M \\) that satisfies \\eqref{eq:def:left_monoid_action/indirect_homomorphism}. 
Define the operation\n  \\begin{align*}\n    {}&\\odot{}: \\mscrM \\times A \\to A \\\\\n    x &\\odot a \\coloneqq \\tau_x(a).\n  \\end{align*}\n\n  It satisfies the necessary axioms:\n  \\begin{refenum}\n    \\refitem{def:left_monoid_action/operation/identity} If \\( a \\in A \\), we have\n    \\begin{equation*}\n      e \\odot a\n      =\n      \\tau_e(a)\n      =\n      \\id(a)\n      =\n      a.\n    \\end{equation*}\n\n    \\refitem{def:left_monoid_action/operation/compatibility} If \\( x, y \\in M \\) and \\( a \\in A \\), we have\n    \\begin{equation*}\n      (x y) \\odot a\n      =\n      \\tau_{x y}(a)\n      =\n      \\tau_{x}(\\tau_{y}(a))\n      =\n      x \\odot (y \\odot a).\n    \\end{equation*}\n  \\end{refenum}\n\n  \\ImplicationSubProof{def:left_monoid_action/operation}{def:left_monoid_action/homomorphism} Assume that we have an operation \\( \\odot: \\mscrM \\times A \\to A \\) that satisfies the axioms for a left action. Define the function\n  \\begin{align*}\n    &\\tau: \\mscrM \\to \\End(A) \\\\\n    &\\tau(x) \\coloneqq x \\id.\n  \\end{align*}\n\n  Then \\( \\tau \\) is a monoid homomorphism because \\( \\tau(e) = \\id \\) and\n  \\begin{equation}\\label{def:left_monoid_action/homomorphism/proof}\n    \\tau(xy)\n    =\n    xy \\id\n    =\n    x (y \\id)\n    =\n    x \\id (y \\id)\n    =\n    (x \\id) (y \\id)\n    =\n    \\tau(x) \\tau(y).\n  \\end{equation}\n\\end{proof}\n\n\\begin{remark}\\label{rem:theory_of_left_monoid_actions}\n  In order to fit the heterogeneous operation of \\hyperref[def:left_monoid_action]{left monoid actions} into the framework of \\hyperref[def:first_order_semantics/satisfiability]{first-order logic models}, we need the category \\( \\cat{C} \\) to be a \\hyperref[def:category_of_small_first_order_models]{category of \\( \\mscrU \\)-small models}. A monoid action is then obtained by extending the theory of \\( \\cat{C} \\).\n\n  \\begin{thmenum}\n    \\thmitem{rem:theory_of_left_monoid_actions/functions} For each \\( x \\in M \\), add a unary \\hyperref[def:first_order_language/func]{functional symbol} \\( \\tau_x \\).\n\n    \\thmitem{rem:theory_of_left_monoid_actions/axiom} For each \\( x, y \\in M \\), add the axiom\n    \\begin{equation}\\label{eq:rem:theory_of_left_monoid_actions/axiom_schema}\n      \\forall a (\\tau_{xy}(a) = \\tau_x(\\tau_y(a))).\n    \\end{equation}\n  \\end{thmenum}\n\\end{remark}\n\n\\begin{definition}\\label{def:right_monoid_action}\n  We say that \\( \\tau: \\mscrM \\to \\End(A) \\) is a \\term{right monoid action} of \\( \\mscrM \\) on \\( A \\) if the same function is a \\hyperref[def:left_monoid_action]{left monoid action} of the \\hyperref[def:magma/opposite]{opposite monoid} \\( \\mscrM^{-1} \\) on \\( A \\).\n\\end{definition}\n\n\\begin{definition}\\label{def:faithful_left_monoid_action}\n  A left monoid action is said to be \\term{faithful} if the corresponding homomorphism is injective.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:monoid_is_action}\n  Any \\hyperref[def:unital_magma/monoid]{monoid} \\hyperref[def:left_monoid_action]{acts} on itself by \\hyperref[def:multi_valued_function/endofunction]{endofunctions}.\n\n  Compare this result to \\fullref{thm:cayleys_theorem}.\n\\end{proposition}\n\\begin{proof}\n  For completeness, we will verify all three definitions:\n\n  \\SubProofOf{def:left_monoid_action/homomorphism} The identity function \\( \\id: \\mscrM \\to \\mscrM \\) is the identity element of \\( \\fun(\\mscrM) \\). 
Define\n  \\begin{align*}\n    &\\tau: \\mscrM \\to \\fun(\\mscrM) \\\\\n    &\\tau(x) \\coloneqq x \\id\n  \\end{align*}\n\n  It is a monoid homomorphism because both \\eqref{eq:def:pointed_set/homomorphism} and \\eqref{def:left_monoid_action/homomorphism/proof} hold.\n\n  \\SubProofOf{def:left_monoid_action/indirect_homomorphism} The proof is the same as above.\n\n  \\SubProofOf{def:left_monoid_action/operation} Define the operation\n  \\begin{balign*}\n    {}&\\odot{}: \\mscrM \\times \\mscrM \\to \\mscrM \\\\\n    x &\\odot y \\coloneqq xy\n  \\end{balign*}\n\n  It immediately follows that\n  \\begin{refenum}\n    \\refitem{def:left_monoid_action/operation/identity} \\( e \\odot x = ex = x \\) for all \\( x \\in M \\).\n    \\refitem{def:left_monoid_action/operation/compatibility} \\( (x y) \\odot z = xyz = x \\odot (y \\odot z) \\) for all \\( x, y, z \\in M \\).\n  \\end{refenum}\n\\end{proof}\n\n\\begin{proposition}\\label{thm:natural_numbers_monoid_action}\n  The \\hyperref[def:set_of_natural_numbers]{natural numbers} \\( \\BbbN \\) act on any \\hyperref[def:unital_magma/monoid]{monoid} by \\hyperref[def:unital_magma/exponentiation]{exponentiation}.\n\n  Compare this result to \\fullref{thm:integers_group_action}.\n\\end{proposition}\n\\begin{proof}\n  The action itself is given by \\( n \\mapsto (x \\mapsto x^n) \\). We must only verify that it is a homomorphism.\n\n  We verify the explicit axioms from \\fullref{def:left_monoid_action/operation}:\n  \\begin{refenum}\n    \\refitem{def:left_monoid_action/operation/identity} \\( x^1 = x \\) for all \\( x \\in M \\) by definition.\n    \\refitem{def:left_monoid_action/operation/compatibility} \\( (x^n)^m = x^{nm} \\) is the literal statement of \\eqref{eq:thm:magma_exponentiation_properties/repeated}.\n  \\end{refenum}\n\\end{proof}\n\n\\begin{definition}\\label{def:automorphism_group}\n  Given a \\hyperref[def:category_size]{locally small} \\hyperref[def:groupoid]{groupoid} \\( \\cat{G} \\), we call \\( \\cat{G}(A) \\) the \\term{automorphism group} over \\( A \\) and denote it by \\( \\aut(A) \\). 
If \\( A \\) is the only object in \\( \\cat{G} \\), we can identify the entire category \\( \\cat{G} \\) with the group \\( \\aut(A) \\).\n\\end{definition}\n\n\\begin{definition}\\label{def:symmetric_group}\n  We define the \\term{symmetric group} of order \\( n \\) as the group\n  \\begin{equation*}\n    S_n \\coloneqq \\aut(\\{ 1, 2, \\ldots, n \\})\n  \\end{equation*}\n  of all bijections from the set \\( \\{ 1, 2, \\ldots, n \\} \\) to itself.\n\n  \\begin{thmenum}\n    \\thmitem{def:symmetric_group/permutation} Members of \\( S_n \\) are called \\term{permutations}.\n\n    \\thmitem{def:symmetric_group/inversion} We say that the pair \\( (p(k), p(m)) \\) is an \\term{inversion} of the permutation \\( p \\) if \\( k < m \\) and \\( p(k) > p(m) \\).\n\n    \\thmitem{def:symmetric_group/permutation_parity} A permutation is said to be \\term{even} or \\term{odd} depending on whether it has an even or odd number of inversions.\n\n    \\thmitem{def:symmetric_group/permutation_sign} We define the \\term{sign} of a permutation as\n    \\begin{align*}\n       &\\sgn: S_n \\to \\{ -1, 1 \\}, \\\\\n       &\\sgn(p) \\coloneqq \\begin{cases}\n        1,  &p \\text{ is even} \\\\\n        -1, &p \\text{ is odd}\n      \\end{cases}\n    \\end{align*}\n\n    \\thmitem{def:symmetric_group/alternating} The subgroup of all even permutations is denoted by \\( A_n \\) and is called the \\term{alternating group} of order \\( n \\).\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{definition}\\label{def:left_group_action}\n  In analogy to \\hyperref[def:left_monoid_action]{monoid actions}, we can define a \\term{left group action} of the group \\( \\mscrG \\) on a set \\( A \\) as\n  \\begin{thmenum}\n    \\thmitem{def:left_group_action/homomorphism} A homomorphism from \\( \\mscrG \\) to the automorphism group \\( \\aut(A) \\) (the set of all bijections on \\( A \\)).\n    \\thmitem{def:left_group_action/indirect_homomorphism} An indexed family of bijections \\( \\{ \\tau_x: A \\to A \\}_{x \\in G} \\) such that\n    \\begin{equation}\\label{eq:def:left_group_action/indirect_homomorphism}\n      \\tau_{xy} = \\tau_x \\circ \\tau_y \\T{for all} x, y \\in G.\n    \\end{equation}\n\n    \\thmitem{def:left_group_action/operation} The same heterogeneous algebraic operation \\( \\odot: \\mscrG \\times A \\to A \\) as in \\fullref{def:left_monoid_action/operation}.\n  \\end{thmenum}\n\\end{definition}\n\\begin{proof}\n  The proof of equivalence is the same as in \\fullref{def:left_monoid_action}.\n\\end{proof}\n\n\\begin{definition}\\label{def:right_group_action}\n  As for \\hyperref[def:right_monoid_action]{monoid actions}, we define a \\term{right group action} as a \\hyperref[def:left_group_action]{left group action} of the \\hyperref[def:magma/opposite]{opposite group} \\( \\mscrG^{-1} \\) on \\( A \\).\n\\end{definition}\n\n\\begin{lemma}\\label{thm:group_multiplication_is_bijection}\n  For each element \\( x \\) of a group \\( \\mscrG \\), the function \\( \\varphi_x \\coloneqq x\\id \\), i.e.\n  \\begin{align*}\n    &\\varphi_x: \\mscrG \\to \\mscrG \\\\\n    &\\varphi_x(y) \\coloneqq xy,\n  \\end{align*}\n  is a bijection (but not necessarily an isomorphism).\n\\end{lemma}\n\\begin{proof}\n  \\SubProofOf{def:function_invertibility/injective} If \\( y_1, y_2 \\in G \\) and \\( \\varphi_x(y_1) = \\varphi_x(y_2) \\), we have\n  \\begin{equation*}\n    xy_1 = \\varphi_x(y_1) = \\varphi_x(y_2) = xy_2.\n  \\end{equation*}\n\n  By \\fullref{thm:def:group/properties/cancellative}, \\( y_1 = y_2 \\). 
Therefore, \\( \\varphi_x \\) is injective.\n\n  \\SubProofOf{def:function_invertibility/surjective} If \\( z \\in G \\), then \\( z = x(x^{-1} z) \\). Therefore, \\( z = \\varphi_x(x^{-1} z) \\) and \\( \\varphi_x \\) is surjective.\n\\end{proof}\n\n\\begin{theorem}[Cayley's theorem]\\label{thm:cayleys_theorem}\n  Any \\hyperref[def:group]{group} \\hyperref[def:left_group_action]{acts} on itself by \\hyperref[def:multi_valued_function/endofunction]{endofunctions}.\n\n  Compare this result to \\fullref{thm:monoid_is_action}.\n\\end{theorem}\n\\begin{proof}\n  Follows directly from \\fullref{thm:monoid_is_action} and \\fullref{thm:group_multiplication_is_bijection}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:integers_group_action}\n  The \\hyperref[def:set_of_integers]{integers} \\( \\BbbZ \\) act on any \\hyperref[def:group]{group} by \\hyperref[def:group/exponentiation]{exponentiation}.\n\n  Compare this result to \\fullref{thm:natural_numbers_monoid_action}.\n\\end{proposition}\n\\begin{proof}\n  Follows from \\fullref{thm:natural_numbers_monoid_action}.\n\\end{proof}\n", "meta": {"hexsha": "51582a598b762372455b0fd025e29222a99c73e7", "size": 12719, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/group_actions.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/group_actions.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/group_actions.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.1270491803, "max_line_length": 415, "alphanum_fraction": 0.695966664, "num_tokens": 4173, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7879311956428946, "lm_q1q2_score": 0.5948210595220821}}
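The following concrete example, added for illustration, applies the definitions of inversions and signs from the symmetric group above. Consider the permutation \( p \in S_3 \) given by \( p(1) = 2 \), \( p(2) = 3 \) and \( p(3) = 1 \). The pairs \( (p(k), p(m)) \) with \( k < m \) and \( p(k) > p(m) \) are \( (p(1), p(3)) = (2, 1) \) and \( (p(2), p(3)) = (3, 1) \). Hence \( p \) has exactly two inversions, so \( p \) is even, \( \sgn(p) = 1 \), and \( p \) belongs to the alternating group \( A_3 \).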
{"text": "\\section{Arc Length}{}{}\\label{sec:Arc Length}\n\n\nIn previous sections we used integration to answer the following questions:\n\t\\begin{enumerate}\n\t\\item\t\tGiven a region, what is its area?\n\t\\item\t\tGiven a solid, what is its volume?\n\t\\end{enumerate}\n\t\nIn this section, we address a related question: Given a curve, what is its length? This is often referred to as \\textbf{arc length}. \n\nConsider the graph of $y=\\sin x$ on $[0,\\pi]$ given in Figure \\ref{fig:arcintro} (a). How long is this curve? That is, if we were to use a piece of string to exactly match the shape of this curve, how long would the string be?\n\nAs we have done in the past, we start by approximating; later, we will refine our answer using limits to get an exact solution.\n\nThe length of straight--line segments is easy to compute using the Distance Formula. We can approximate the length of the given curve by approximating the curve with straight lines and measuring their lengths. \n\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\\begin{tikzpicture}\n\t\t\\begin{axis}[ width=\\linewidth,%\n\t\ttick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n\t\t\t\t\t%x=.37\\marginparwidth,\n\t\t\t\t\t%y=.37\\marginparwidth,\n\t\t\t\t\txtick=\\empty,% \n\t\t\t\t\textra x ticks={.79,1.57,2.36,3.14},\n\t\t\t\t\textra x tick labels={$\\frac{\\pi}4$,$\\frac{\\pi}2$,$\\frac{3\\pi}{4}$,$\\pi$},\n\t\t%\t\t\tytick={1},\n\t\t%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n\t\t\t\t\t%minor y tick num=1,\n\t\t%\t\t\textra y ticks={1.8},%\n\t\t%\t\t\textra y tick labels={$y$},\n\t\t%\t\t\tminor x tick num=4,\n\t\t\t\t\tymin=-.2,ymax=1.25,%\n\t\t\t\t\txmin=-.1,xmax=3.5,%\n\t\t]\n\t\t\n\t\t\\addplot [{\\colorone},smooth,thick,domain=0:3.14] {sin(deg(x))};\n\t\t\n\t\t\\end{axis}\n\t\t\n\t\t\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n\t\t\\node [above] at (myplot.above origin) {\\scriptsize $y$};\n\t\t\\end{tikzpicture}\n         \\label{fig:arcintroa}\n        \\caption{} \n    \\end{subfigure}% \n    \\begin{subfigure}[t]{0.5\\textwidth}\n    \\begin{tikzpicture}\n    \\begin{axis}[ width=\\linewidth,%\n    tick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n    \t\t\t%x=.37\\marginparwidth,\n    \t\t\t%y=.37\\marginparwidth,\n    \t\t\txtick=\\empty,% \n    \t\t\textra x ticks={.79,1.57,2.36,3.14},\n    \t\t\textra x tick labels={$\\frac{\\pi}4$,$\\frac{\\pi}2$,$\\frac{3\\pi}{4}$,$\\pi$},\n    %\t\t\tytick={1},\n    %\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n    \t\t\t%minor y tick num=1,\n    \t\t\textra y ticks={.71},%\n    \t\t\textra y tick labels={$\\frac{\\sqrt{2}}2$},\n    %\t\t\tminor x tick num=4,\n    \t\t\tymin=-.2,ymax=1.25,%\n    \t\t\txmin=-.1,xmax=3.5,%\n    ]\n    \n    \\addplot [{\\colorone},smooth,thick,domain=0:3.14] {sin(deg(x))};\n    \n    \\draw [{\\colortwo},thick] (axis cs:0,0) -- (axis cs:.79,.71) -- (axis cs: 1.57,1) -- (axis cs:2.36,.71) -- (axis cs: 3.14,0);\n    \n    \\filldraw (axis cs:0,0) circle (1pt) (axis cs:.79,.71) circle (1pt) (axis cs: 1.57,1) circle (1pt) (axis cs:2.36,.71) circle (1pt) (axis cs: 3.14,0)circle (1pt);\n    \n    \\end{axis}\n    \n    \\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n    \\node [above] at (myplot.above origin) {\\scriptsize $y$};\n    \\end{tikzpicture}\n        \\label{fig:arcintrob}\n        \\caption{}    \n    \\end{subfigure} \n    \\caption{Graphing $y=\\sin x$ on 
$[0,\\pi]$ and approximating the curve with line segments. \\label{fig:arcintro}}\n\\end{figure}\n\nIn Figure \\ref{fig:arcintro} (b), the curve $y=\\sin x$ has been approximated with 4 line segments (the interval $[0,\\pi]$ has been divided into four subintervals of equal length). It is clear that these four line segments approximate $y=\\sin x$ very well on the first and last subinterval, though not so well in the middle. Regardless, the sum of the lengths of the line segments is $3.79$, so we approximate the arc length of $y=\\sin x$ on $[0,\\pi]$ to be $3.79$. \n\n%Using 6 evenly spaced subintervals is not too much more work and gives a better approximation. The six lines are shown in Figure \\ref{fig:arcintroc} and provide an arc length approximation of $3.81$. \n\n%\\begin{figure}[H]\n%\\begin{tikzpicture}\n%\\begin{axis}[width=\\marginparwidth+25pt,%\n%tick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n%\t\t\t%x=.37\\marginparwidth,\n%\t\t\t%y=.37\\marginparwidth,\n%\t\t\txtick=\\empty,% \n%\t\t\textra x ticks={.52,1.05,1.6,2.1,2.6,3.14},\n%\t\t\textra x tick labels={$\\frac{\\pi}6$,$\\frac{\\pi}3$,$\\frac{\\pi}{2}$,$\\frac{2\\pi}3$,$\\frac{5\\pi}6$,$\\pi$},\n%%\t\t\tytick={1},\n%%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n%\t\t\t%minor y tick num=1,\n%\t\t\textra y ticks={.866},%\n%\t\t\textra y tick labels={$\\frac{\\sqrt{3}}2$},\n%%\t\t\tminor x tick num=4,\n%\t\t\tymin=-.2,ymax=1.25,%\n%\t\t\txmin=-.1,xmax=3.5,%\n%]\n%\n%\\addplot [{\\colorone},smooth,thick,domain=0:3.14] {sin(deg(x))};\n%\n%\\draw [{\\colortwo},thick] (axis cs:0,0)--(axis cs:0.5236,0.5)--(axis cs:1.047,0.866)--(axis cs:1.571,1.)--(axis cs:2.094,0.866)--(axis cs:2.618,0.5)--(axis cs:3.142,0);\n%\n%\\filldraw (axis cs:0,0) circle (1pt) (axis cs:0.5236,0.5) circle (1pt) (axis cs:1.047,0.866) circle (1pt) (axis cs:1.571,1.) circle (1pt) (axis cs:2.094,0.866) circle (1pt) (axis cs:2.618,0.5) circle (1pt) (axis cs:3.142,0);\n%\n%\\end{axis}\n%\n%\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n%\\node [above] at (myplot.above origin) {\\scriptsize $y$};\n%\\end{tikzpicture}\n%\\label{fig:arcintroc}\n%\\end{figure}\n\n\n\nIn general, we can approximate the arc length of $y=f(x)$ on $[a,b]$ in the following manner. Let $a=x_1 < x_2 < \\ldots < x_n< x_{n+1}=b$ be a partition of $[a,b]$ into $n$ subintervals. 
Let $\\dx_i$ represent the length of the $i\\,^\\text{th}$ subinterval $[x_i,x_{i+1}]$.\n\n\\begin{figure}[H]\n\\begin{tikzpicture}\n\\begin{axis}[width=.5\\linewidth,%\ntick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n\t\t\t%x=.37\\marginparwidth,\n\t\t\t%y=.37\\marginparwidth,\n\t\t\txtick=\\empty,% \n\t\t\textra x ticks={.3,1.37},\n\t\t\textra x tick labels={$x_i$,$x_{i+1}$},\n\t\t\tytick=\\empty,\n%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n\t\t\t%minor y tick num=1,\n\t\t\textra y ticks={.48,1},%\n\t\t\textra y tick labels={$y_i$,$y_{i+1}$},\n%\t\t\tminor x tick num=4,\n\t\t\tymin=-.2,ymax=1.25,%\n\t\t\txmin=-.1,xmax=1.6,%\n]\n%\\addplot [{\\colorone},smooth,thin,dashed,domain=0.1:1.55] {sin(deg(x+.2))};\n\n\\addplot [{\\colorone},smooth,thick,domain=0.3:1.37] {sin(deg(x+.2))};\n\n\\draw (axis cs: .25,1) -- (axis cs:.1,1) \n\t\t\t(axis cs: .25,.48) -- (axis cs:.1,.48)\n\t\t\t(axis cs: .175,.48) -- node [pos=.5,fill=white] {\\scriptsize $\\Delta y_i$} (axis cs:.175,1);\n\n\\draw (axis cs: .3,.25) -- (axis cs: .3,.4)\n\t\t\t(axis cs: 1.37,.25) -- (axis cs: 1.37,.4)\n\t\t\t(axis cs: .3,.325) -- node [pos=.5,fill=white] {\\scriptsize $\\Delta x_i$} (axis cs: 1.37,.325);\n\t\t\t\n\\draw [{\\colortwo},thick] (axis cs: .3,.48) -- (axis cs:1.37,1);\n\n\\draw [dashed,thin] (axis cs: .3,.48) -| (axis cs:1.37,1);\n\t\t\t\n\\filldraw (axis cs: .3,.48) circle (1pt) (axis cs:1.37,1) circle (1pt);\n\n%\\draw [{\\colortwo},thick] (axis cs:0,0)--(axis cs:0.5236,0.5)--(axis cs:1.047,0.866)--(axis cs:1.571,1.)--(axis cs:2.094,0.866)--(axis cs:2.618,0.5)--(axis cs:3.142,0);\n%\n%\\filldraw (axis cs:0,0) circle (1pt) (axis cs:0.5236,0.5) circle (1pt) (axis cs:1.047,0.866) circle (1pt) (axis cs:1.571,1.) circle (1pt) (axis cs:2.094,0.866) circle (1pt) (axis cs:2.618,0.5) circle (1pt) (axis cs:3.142,0);\n\n\\end{axis}\n\n\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n\\node [above] at (myplot.above origin) {\\scriptsize $y$};\n\\end{tikzpicture}\n\\label{fig:arcintro2}\n\\caption{Zooming in on the $i\\,^\\text{th}$ subinterval $[x_i,x_{i+1}]$ of a partition of $[a,b]$.}\n\\end{figure}\n\n\n%\\mfigure{.35}{Zooming in on the $i\\,^\\text{th}$ subinterval $[x_i,x_{i+1}$] of a partition of $[a,b]$.}{fig:arcintro2}{figures/figarcintrod}\n%\nFigure \\ref{fig:arcintro2} zooms in on the $i\\,^\\text{th}$ subinterval where $y=f(x)$ is approximated by a straight line segment. The dashed lines show that we can view this line segment as the hypotenuse of a right triangle whose sides have length $\\dx_i$ and $\\dy_i$. Using the Pythagorean Theorem, the length of this line segment is\n$\\ds \\sqrt{\\dx_i^2 + \\Delta y_i^2}.$ Summing over all subintervals gives an arc length approximation\n$$L \\approx \\sum_{i=1}^n \\sqrt{\\dx_i^2 + \\Delta y_i^2}.$$\n\nAs shown here, this is \\textit{not} a Riemann Sum. While we could conclude that taking a limit as the subinterval length goes to zero gives the exact arc length, we would not be able to compute the answer with a definite integral. We need first to do a little algebra.\n\nIn the above expression, factor out a $\\dx_i^2$ term:\n\\begin{align*}\n\\sum_{i=1}^n \\sqrt{\\dx_i^2 + \\Delta y_i^2} &= \\sum_{i=1}^n \\sqrt{\\dx_i^2\\left(1 + \\frac{\\Delta y_i^2}{\\dx_i^2}\\right)}.\\\\\n\\intertext{Now pull the $\\dx_i^2$ term out of the square root:}%\\\\\n\t\t\t&= \\sum_{i=1}^n\\sqrt{1 + \\frac{\\Delta y_i^2}{\\dx_i^2}}\\ \\dx_i.\\\\\n\\intertext{This is nearly a Riemann Sum. 
Consider the $\\Delta y_i^2/\\dx_i^2$ term. The expression $\\Delta y_i/\\dx_i$ measures the ``change in $y$/change in $x$,'' that is, the ``rise over run'' of $f$ on the $i\\,^\\text{th}$ subinterval. The Mean Value Theorem of Differentiation (Theorem \\ref{thm:mvt}) states that there is a $c_i$ in the $i\\,^\\text{th}$ subinterval where $\\fp(c_i) = \\Delta y_i/\\dx_i$. Thus we can rewrite our above expression as:} \n\t\t\t&= \\sum_{i=1}^n\\sqrt{1+\\fp(c_i)^2}\\ \\dx_i.\\\\\n\\intertext{This \\textit{is} a Riemann Sum. As long as \\fp\\ is continuous, we can invoke Theorem \\ref{thm:riemann_sum} and conclude }%\\\\\n\t\t\t&= \\int_a^b\\sqrt{1+\\fp(x)^2}\\ dx.\n\\end{align*}\n\n\n\\begin{formulabox}[\\label{idea:arclength} Arc Length ]\n{Let $f$ be differentiable on an open interval containing $[a,b]$, where $\\fp$ is also continuous on $[a,b]$. Then the arc length of $f$ from $x=a$ to $x=b$ is\n\\index{integration!arc length}\\index{arc length}\n$$L = \\int_a^b \\sqrt{1+\\fp(x)^2}\\ dx.$$\n}\n\n\\end{formulabox}\n\n\nAs the integrand contains a square root, it is often difficult to use the formula in Key Idea \\ref{idea:arclength} to find the length exactly. When exact answers are difficult to come by, we resort to using numerical methods of approximating definite integrals. The following examples will demonstrate this.\\\\\n\n\n\\begin{example}{Finding arc length}{ex_arc1}\n{\nFind the arc length of $f(x) = x^{3/2}$ from $x=0$ to $x=4$. }\n\\end{example}\n\n\n\\begin{solution}\n{We begin by finding $\\fp(x)= \\frac32x^{1/2}$. Using the formula, we find the arc length $L$ as\n\\begin{align*}\n\tL &=\t\\int_0^4 \\sqrt{1+\\left(\\frac32x^{1/2}\\right)^2}\\ dx \\\\\n\t\t&=\t\\int_0^4 \\sqrt{1+\\frac94x} \\ dx \\\\\n\t\t&= \t\\int_0^4 \\left(1+\\frac94x\\right)^{1/2}\\ dx \\\\\n\t\t&=  \\frac23\\frac49\\left(1+\\frac94x\\right)^{3/2}\\Big|_0^4 \\\\\n\t\t&=\\frac{8}{27}\\left(10^{3/2}-1\\right) \\approx 9.07 \\text{units}.\n\\end{align*}\n\n\\begin{figure}[H]\n\\begin{tikzpicture}\n\\begin{axis}[width=.4\\linewidth,%\ntick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n\t\t\t%x=.37\\marginparwidth,\n\t\t\t%y=.37\\marginparwidth,\n%\t\t\txtick=\\empty,% \n%\t\t\textra x ticks={.79,1.57,2.36,3.14},\n%\t\t\textra x tick labels={$\\frac{\\pi}4$,$\\frac{\\pi}2$,$\\frac{3\\pi}{4}$,$\\pi$},\n%\t\t\tytick={1},\n%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n\t\t\t%minor y tick num=1,\n%\t\t\textra y ticks={1.8},%\n%\t\t\textra y tick labels={$y$},\n%\t\t\tminor x tick num=4,\n\t\t\tymin=-.2,ymax=8.5,%\n\t\t\txmin=-.1,xmax=4.5,%\n]\n\n\\addplot [{\\colorone},smooth,thick,domain=0:4,samples=50] coordinates {(0,0)(0.1,0.03162)(0.2,0.08944)(0.3,0.1643)(0.4,0.253)(0.5,0.3536)(0.6,0.4648)(0.7,0.5857)(0.8,0.7155)(0.9,0.8538)(1.,1.)(1.3,1.482)(1.6,2.024)(1.9,2.619)(2.2,3.263)(2.5,3.953)(2.8,4.685)(3.1,5.458)(3.4,6.269)(3.7,7.117)(4.,8.)};\n\n\\end{axis}\n\n\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n\\node [above] at (myplot.above origin) {\\scriptsize $y$};\n\\end{tikzpicture}\n\\caption{A graph of $f(x) = x^{3/2}$ from Example \\ref{exa:ex_arc1}. \\label{fig:arc1}}\n\\end{figure}\n}\n\\end{solution}\n\n\n\n\\begin{example}{Finding arc length}{ex_arc2}{\nFind the arc length of $\\ds f(x) =\\frac18x^2-\\ln x$ from $x=1$ to $x=2$.}\n\\end{example}\n\n\n\\begin{solution}\n{This function was chosen specifically because the resulting integral can be evaluated exactly. We begin by finding $\\fp(x) = x/4-1/x$. 
The arc length is \n\\begin{align*}\nL\t\t&=  \\int_1^2 \\sqrt{1+ \\left(\\frac x4-\\frac1x\\right)^2}\\ dx \\\\\n\t\t&= \t\\int_1^2 \\sqrt{1 + \\frac{x^2}{16} -\\frac12 + \\frac1{x^2} } \\ dx \\\\\n\t\t&=\t\\int_1^2 \\sqrt{\\frac{x^2}{16} +\\frac12 + \\frac1{x^2} } \\ dx \\\\\n\t\t&=\t\\int_1^2\t\\sqrt{ \\left(\\frac x4 + \\frac1x\\right)^2}\\ dx \n\\end{align*}\n\\begin{align*}\n\\phantom{L}\n\t\t\t\t&= \\int_1^2 \\left(\\frac x4 + \\frac1x\\right) \\ dx \\\\\n\t\t&=  \\left(\\frac{x^2}8 + \\ln x\\right)\\Bigg|_1^2\\\\\n\t\t&=\t\\frac38+\\ln 2 \\approx 1.07 \\ \\text{units}.\n\\end{align*}\n\n\\begin{figure}[H]\n\\begin{tikzpicture}\n\\begin{axis}[width=.4\\linewidth,%\ntick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\n\t\t\t%x=.37\\marginparwidth,\n\t\t\t%y=.37\\marginparwidth,\n%\t\t\txtick=\\empty,% \n%\t\t\textra x ticks={.79,1.57,2.36,3.14},\n%\t\t\textra x tick labels={$\\frac{\\pi}4$,$\\frac{\\pi}2$,$\\frac{3\\pi}{4}$,$\\pi$},\n%\t\t\tytick={1},\n%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\n\t\t\t%minor y tick num=1,\n%\t\t\textra y ticks={1.8},%\n%\t\t\textra y tick labels={$y$},\n%\t\t\tminor x tick num=4,\n\t\t\tymin=-.26,ymax=1.2,%\n\t\t\txmin=-.1,xmax=3.1,%\n]\n\\addplot [{\\colorone},smooth,thin] coordinates{(0.2,1.614)(0.3,1.215)(0.4,0.9363)(0.5,0.7244)(0.6,0.5558)(0.7,0.4179)(0.8,0.3031)(0.9,0.2066)(1.,0.125)(1.1,0.05594)(1.2,-0.002322)(1.3,-0.05111)(1.4,-0.09147)(1.5,-0.1242)(1.6,-0.15)(1.7,-0.1694)(1.8,-0.1828)(1.9,-0.1906)(2.,-0.1931)(2.1,-0.1907)(2.2,-0.1835)(2.3,-0.1717)(2.4,-0.1555)(2.5,-0.135)(2.6,-0.1105)(2.7,-0.082)(2.8,-0.04962)(2.9,-0.01346)(3.,0.02639)};\n\n\\addplot [{\\colorone},smooth,very thick,] coordinates {(1.,0.125)(1.1,0.05594)(1.2,-0.002322)(1.3,-0.05111)(1.4,-0.09147)(1.5,-0.1242)(1.6,-0.15)(1.7,-0.1694)(1.8,-0.1828)(1.85,-0.1874)(1.9,-0.1906)(1.95,-0.1925)(2.,-0.1931)};\n\n\n\\end{axis}\n\n\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\n\\node [above] at (myplot.above origin) {\\scriptsize $y$};\n\\end{tikzpicture}\n\\caption{A graph of $f(x) =\\frac18x^2-\\ln x$ from Example \\ref{ex_arc2}. \\label{fig:arc2}}\n\\end{figure}\n\nA graph of $f$ is given in Figure \\ref{fig:arc2}; the portion of the curve measured in this problem is in bold.\n}\n\\end{solution}\n\n\n\\begin{example}{Circumference of a Circle}{Circumference of a Circle}\\label{Circumference of a Circle} \nLet $\\ds f(x) = \\sqrt{r^2-x^2}$, the upper half circle of radius\n$r$. The length of this curve is half the circumference, namely $\\pi\nr$. Compute this with the arc length formula.\n\\end{example}\n\n\\begin{solution}\nThe derivative $f'$ is $\\ds \\ds -x/\\sqrt{r^2-x^2}$ so the integral is\n$$\n  \\int_{-r}^r \\sqrt{1+{x^2\\over r^2-x^2}}\\,dx\n  =\\int_{-r}^r \\sqrt{r^2\\over r^2-x^2}\\,dx\n  =r\\int_{-r}^r \\sqrt{1\\over r^2-x^2}\\,dx.\n$$\nUsing a trigonometric substitution, we find the antiderivative, namely\n$\\ds \\arcsin(x/r)$. Notice that the integral is improper at both\nendpoints, as the function $\\ds \\sqrt{1/(r^2-x^2)}$ is undefined when\n$x=\\pm r$. So we need to compute\n$$\n  \\lim_{D\\to-r^+}\\int_D^0  \\sqrt{1\\over r^2-x^2}\\,dx +\n  \\lim_{D\\to r^-}\\int_0^D  \\sqrt{1\\over r^2-x^2}\\,dx.\n$$\nThis is not difficult, and has value $\\pi$, so the original integral,\nwith the extra $r$ in front, has value $\\pi r$ as expected.\n\\end{solution}\n\nThe previous examples found the arc length exactly through careful choice of the functions. 
In general, exact answers are much more difficult to come by and numerical approximations are necessary. \\\\\n\n\n\\begin{example}{Approximating arc length numerically}{ex_arc3}\n{\nFind the length of the sine curve from $x=0$ to $x=\\pi$.}\n\\end{example}\n\n\n\\begin{solution}\n{This is somewhat of a mathematical curiosity; in Example \\ref{ex_ftc4} we found the area under one ``hump'' of the sine curve is 2 square units; now we are measuring its arc length.\n\nThe setup is straightforward: $f(x) = \\sin x$ and $\\fp(x) = \\cos x$. Thus \n$$L = \\int_0^\\pi \\sqrt{1+\\cos^2x}\\ dx.$$\nThis integral \\textit{cannot} be evaluated in terms of elementary functions so we will approximate it using Simpson's Rule with $n=4$. \n\\mtable{.5}{A table of values of $y=\\sqrt{1+\\cos^2x}$ to evaluate a definite integral in Example \\ref{ex_arc3}.}{fig:arc3}{%\n$$\\begin{array}{cc}\nx & \\sqrt{1+\\cos^2x} \\\\ \\hline\n 0 & \\sqrt{2} \\rule{0pt}{10pt}\\\\\n \\pi/4 & \\sqrt{3/2} \\\\\n \\pi/2 & 1 \\\\\n 3 \\pi/4 & \\sqrt{3/2} \\\\\n \\pi  & \\sqrt{2} \\\\\n\\end{array}$$\n}\nFigure \\ref{fig:arc3} gives $\\sqrt{1+\\cos^2x}$ evaluated at 5 evenly spaced points in $[0,\\pi]$. Simpson's Rule then states that \n\\begin{align*}\n\\int_0^\\pi \\sqrt{1+\\cos^2x}\\ dx &\\approx\t\\frac{\\pi-0}{4\\cdot 3}\\left(\\sqrt{2}+4\\sqrt{3/2}+2(1)+4\\sqrt{3/2}+\\sqrt{2}\\right) \\\\\n\t\t\t&=3.82918.\n\\end{align*}\nUsing a computer with $n=100$, the approximation is $L\\approx 3.8202$; our approximation with $n=4$ is quite good.\n}\n\\end{solution}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %\n\n\n%Here is another geometric application of the integral: Find the length\n%of a portion of a curve. As usual, we need to think about how we might\n%approximate the length, and turn the approximation into an integral.\n%\n%We already know how to compute one simple arc length, that of a line\n%segment. 
If the endpoints are $\\ds P_0(x_0,y_0)$ and $\\ds P_1(x_1,y_1)$\n%then the length of the segment is the distance between the points,\n%$\\ds \\sqrt{(x_1-x_0)^2+(y_1-y_0)^2}$, from the Pythagorean theorem, as\n%illustrated in Figure~\\ref{fig:length of a line segment}.\n%\n%\\figure[H]\n%%\\texonly\n%\\centerline{\\vbox{\\beginpicture\n%\\normalgraphs\n%%\\sevenpoint\n%\\setcoordinatesystem units <1.5truecm,1.5truecm>\n%\\setplotarea x from 0 to 5, y from 0 to 3\n%\\axis bottom /\n%\\axis left /\n%\\putrule from 2 1 to 4.5 1\n%\\putrule from 4.5 1 to 4.5 2.5\n%\\plot 2 1 4.5 2.5 /\n%\\put {$(x_1,y_1)$} [bl] <3pt,3pt> at 4.5 2.5\n%\\put {$(x_0,y_0)$} [tr] <-3pt,-3pt> at 2 1\n%\\put {$x_1-x_0$} [t] <0pt,-3pt> at 3.25 1\n%\\put {$y_1-y_0$} [l] <3pt,0pt> at 4.5 1.75\n%\\put {$\\sqrt{(x_1-x_0)^2+(y_1-y_0)^2}$} [br] <-3pt,3pt> at 3.25 1.75\n%\\endpicture}}\n%%\\endtexonly\n%%\\figrdef{fig:length of a line segment}\n%%\\htmlfigure{Integration_applications-arc_length.html}\n%\\caption{\\label{fig:length of a line segment}\n%The length of a line segment.}\n%%\\endcaption\n%\\endfigure\n%\n%Now if the graph of $f$ is ``nice'' (say, differentiable) it appears\n%that we can approximate the length of a portion of the curve with line\n%segments, and that as the number of segments increases, and their\n%lengths decrease, the sum of the lengths of the line segments will\n%approach the true arc length; see \n%Figure~\\ref{fig:approximating arc length}.\n%\n%\\figure[H]\n%%\\texonly\n%\\centerline{\\vbox{\\beginpicture\n%\\normalgraphs\n%%\\sevenpoint\n%\\setcoordinatesystem units <1.5truecm,0.8truecm>\n%\\setplotarea x from 0 to 8, y from 0 to 5\n%\\axis bottom /\n%\\axis left /\n%\\setquadratic\\plot \n%1.000 1.000 1.150 1.953 1.300 2.694 1.450 3.246 1.600 3.636 \n%1.750 3.884 1.900 4.012 2.050 4.041 2.200 3.990 2.350 3.874 \n%2.500 3.711 2.650 3.515 2.800 3.299 2.950 3.075 3.100 2.853 \n%3.250 2.644 3.400 2.454 3.550 2.290 3.700 2.157 3.850 2.060 \n%4.000 2.000 4.150 1.979 4.300 1.996 4.450 2.050 4.600 2.138 \n%4.750 2.255 4.900 2.396 5.050 2.554 5.200 2.721 5.350 2.886 \n%5.500 3.039 5.650 3.168 5.800 3.258 5.950 3.294 6.100 3.261 \n%6.250 3.140 6.400 2.912 6.550 2.556 6.700 2.051 6.850 1.374 \n%7.000 0.500 /\n%\\multiput {$\\bullet$} at 1 1 2.050 4.041 3.250 2.644\n%4.300 1.996 5.500 3.039 6.250 3.140 7 0.5 /\n%\\setlinear\\plot\n%1 1 2.050 4.041 3.250 2.644\n%4.300 1.996 5.500 3.039 6.250 3.140 7 0.5 /\n%\\endpicture}}\n%%\\endtexonly\n%%\\figrdef{fig:approximating arc length}\n%%\\htmlfigure{Integration_applications-arc_length_line_segments.html}\n%\\caption{\\label{fig:approximating arc length}\n%Approximating arc length with line segments.}\n%%\\endcaption\n%\\endfigure\n%\n%Now we need to write a formula for the sum of the lengths of the line\n%segments, in a form that we know becomes an integral in the limit.  So\n%we suppose we have divided the interval $[a,b]$ into $n$ subintervals\n%as usual, each with length $\\Delta x =(b-a)/n$, and endpoints $\\ds\n%a=x_0$, $\\ds x_1$, $\\ds x_2$, \\dots, $\\ds x_n=b$.  The length of a\n%typical line segment, joining $\\ds (x_i,f(x_i))$ to $\\ds\n%(x_{i+1},f(x_{i+1}))$, is $\\ds\\sqrt{(\\Delta x )^2\n%  +(f(x_{i+1})-f(x_i))^2}$.  
By the Mean Value Theorem, %(\\xrefn{thm:mvt}), \n%there is a number $\\ds t_i$ in $\\ds (x_i,x_{i+1})$\n%such that $\\ds f'(t_i)\\Delta x=f(x_{i+1})-f(x_i)$, so the length of\n%the line segment can be written as\n%$$\n%  \\sqrt{(\\Delta x)^2 + (f'(t_i))^2\\Delta x^2}=\n%  \\sqrt{1+(f'(t_i))^2}\\,\\Delta x.\n%$$\n%Then arc length is:\n%$$\n%  \\lim_{n\\to\\infty}\\sum_{i=0}^{n-1} \\sqrt{1+(f'(t_i))^2}\\,\\Delta x=\n%  \\int_a^b \\sqrt{1+(f'(x))^2}\\,dx.\n%$$\n%Note that the sum looks a bit different than others we have\n%encountered, because the approximation contains a $\\ds t_i$ instead of an\n%$\\ds x_i$. In the past we have always used left endpoints (namely, $\\ds x_i$)\n%to get a representative value of $f$ on $\\ds [x_i,x_{i+1}]$; now we are\n%using a different point, but the principle is the same.\n%\n%To summarize, to compute the length of a curve on the interval\n%$[a,b]$, we compute the integral\n%$$\\int_a^b \\sqrt{1+(f'(x))^2}\\,dx.$$ \n%Unfortunately, integrals of this form are typically difficult or\n%impossible to compute exactly, because usually none of our methods for\n%finding antiderivatives will work. In practice this means that the\n%integral will usually have to be approximated.\n%\n%\\begin{example}{Circumference of a Circle}{Circumference of a Circle}\\label{Circumference of a Circle} \n%Let $\\ds f(x) = \\sqrt{r^2-x^2}$, the upper half circle of radius\n%$r$. The length of this curve is half the circumference, namely $\\pi\n%r$. Compute this with the arc length formula.\n%\\end{example}\n%\n%\\begin{solution}\n%The derivative $f'$ is $\\ds \\ds -x/\\sqrt{r^2-x^2}$ so the integral is\n%$$\n%  \\int_{-r}^r \\sqrt{1+{x^2\\over r^2-x^2}}\\,dx\n%  =\\int_{-r}^r \\sqrt{r^2\\over r^2-x^2}\\,dx\n%  =r\\int_{-r}^r \\sqrt{1\\over r^2-x^2}\\,dx.\n%$$\n%Using a trigonometric substitution, we find the antiderivative, namely\n%$\\ds \\arcsin(x/r)$. Notice that the integral is improper at both\n%endpoints, as the function $\\ds \\sqrt{1/(r^2-x^2)}$ is undefined when\n%$x=\\pm r$. 
So we need to compute\n%$$\n%  \\lim_{D\\to-r^+}\\int_D^0  \\sqrt{1\\over r^2-x^2}\\,dx +\n%  \\lim_{D\\to r^-}\\int_0^D  \\sqrt{1\\over r^2-x^2}\\,dx.\n%$$\n%This is not difficult, and has value $\\pi$, so the original integral,\n%with the extra $r$ in front, has value $\\pi r$ as expected.\n%\\end{solution}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:Arc Length}}\n\n\\begin{enumialphparenastyle}\n\n% % % % % % % % % % %\n\\begin{ex}\nFind the arc length of the function on the given interval.\n\\begin{enumerate}\n\\item {$\\ds f(x) = x$ on $[0, 1]$.}\n\n\n\\item {$\\ds f(x) = \\sqrt{8}x$ on $[-1, 1]$.}\n\n\n\\item {$\\ds f(x) = \\frac13x^{3/2}-x^{1/2}$ on $[0,1]$.}\n\n\\item {$\\ds f(x) = \\frac1{12}x^{3}+\\frac1x$ on $[1,4]$.}\n\n\n\\item {$\\ds f(x) = 2x^{3/2}-\\frac16\\sqrt{x}$ on $[0,9]$.}\n\n\n\\item {$\\ds f(x) = \\cosh x$ on $[-\\ln 2, \\ln 2]$.}\n\n\n\\item {$\\ds f(x) = \\frac12\\big(e^x+e^{-x}\\big)$ on $[0, \\ln 5]$.}\n\n\n\\item {$\\ds f(x) = \\frac1{12}x^5+\\frac1{5x^3}$ on $[.1, 1]$.}\n\n\n\\item {$\\ds f(x) = \\ln \\big(\\sin x\\big)$ on $[\\pi/6, \\pi/2]$.}\n\n\n\\item {$\\ds f(x) = \\ln \\big(\\cos x\\big)$ on $[0, \\pi/4]$.}\n\n\\end{enumerate}\n\n\\begin{sol}\n\\begin{enumerate}\n\\item {$\\sqrt{2}$}\n\\item {$6$}\n\\item {$4/3$}\n\\item {$6$}\n\\item {$109/2$}\n\\item {$3/2$}\n\\item {$12/5$}\n\\item {$79953333/400000 \\approx 199.883$}\n\\item {$-\\ln (2-\\sqrt{3}) \\approx 1.31696$}\n\\item {$\\sinh^{-1} 1$}\n\\end{enumerate}\n\\end{sol}\n\n\\end{ex}\n% % % % % % % % % % % %\n\n% % % % % % % % % % %\n\\begin{ex}\nSet up the integral to compute the arc length of the function on the given interval. Do \\textbf{not} evaluate the integral. \\label{ex_al1}\n\\begin{enumerate}\n\\item {$\\ds f(x) = x^2$ on $[0, 1]$.\\label{ex_07_04_ex_13}}\n\n\n\\item {$\\ds f(x) = x^{10}$ on $[0, 1]$.}\n\n\n\\item {$\\ds f(x) = \\ln x$ on $[1, e]$.}\n\n\n\\item {$\\ds f(x) = \\sqrt{x}$ on $[0, 1]$.}\n\n\n\\item {$\\ds f(x) = \\sqrt{1-x^2}$ on $[-1, 1]$. (Note: this describes the top half of a circle with radius 1.)}\n\n\n\\item {$\\ds f(x) = \\sqrt{1-x^2/9}$ on $[-3, 3]$. (Note: this describes the top half of an ellipse with a major axis of length $ 6 $ and a minor axis of length $ 2 $.)}\n\\item {$\\ds f(x) = \\frac1x$ on $[1,2]$.}\n\n\\item {$\\ds f(x) = \\sec x$ on $[-\\pi/4,\\pi/4]$.\\label{ex_07_04_ex_20}}\n\n\\end{enumerate}\n\n\\begin{sol}\n\\begin{enumerate}\n\\item {$\\int_0^1 \\sqrt{1+4x^2}\\ dx$}\n\\item {$\\int_0^1 \\sqrt{1+100x^{18}}\\ dx$}\n\\item {$\\int_1^e \\sqrt{1+\\frac1{x^2}}\\ dx$}\n\\item {$\\int_0^1 \\sqrt{1+\\frac{1}{4x}}\\ dx$}\n\\item {$\\int_{-1}^1 \\sqrt{1+\\frac{x^2}{1-x^2}}\\ dx$}\n\\item \n{$\\int_{-3}^3 \\sqrt{1+\\frac{x^2}{81-9x^2}}\\ dx$}\n\n\\item {$\\int_{1}^2 \\sqrt{1+\\frac1{x^4}}\\ dx$}\n\\item {$\\int_{-\\pi/4}^{\\pi/4} \\sqrt{1+\\sec^2x\\tan^2x}\\ dx$}\n\n\\end{enumerate}\n\\end{sol}\n\n\\end{ex}\n% % % % % % % % % % % %\n\n\n\n\n\n% % % % % % % % % % %\n\\begin{ex}\nUse Simpson's Rule, with $n=4$, to approximate the arc length of each of the functions on the given intervals in Exercise \\ref{ex_al1}. 
\n\n\\begin{sol}\n\\begin{enumerate}\n\\item {$1.4790$}\n\\item {$1.8377$}\n\\item {$2.1300$}\n\\item \n\\item \n\\item \n\\item {$1.4058$}\n\\item {$1.7625$}\n\\end{enumerate}\n\\end{sol}\n\n\\end{ex}\n% % % % % % % % % % % %\n\n\n\n\n\n\n\n%%%%%%%%%%\n\\begin{ex}\n Find the arc length of $\\ds f(x)=x^{3/2}$ on $[0,2]$.\n\\begin{sol}\n $\\ds (22\\sqrt{22}-8)/27$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Find the arc length of $\\ds f(x) = x^2/8-\\ln x$\non $[1,2]$.\n\\begin{sol}\n $\\ln(2)+3/8$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\nFind the arc length of $\\ds f(x) = (1/3)(x^2 +2)^{3/2}$\non the interval $[0,a]$.\n\\begin{sol}\n $\\ds a+a^3/3$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Find the arc length of $f(x)=\\ln(\\sin x)$ on the\ninterval $[\\pi/4,\\pi/3]$.\n\\begin{sol}\n $\\ds \\ln((\\sqrt2+1)/\\sqrt3)$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Let $a>0$. Show that the length of $y=\\cosh x$ on\n$[0,a]$ is equal to $\\ds \\int _0 ^a \\cosh x\\,dx$.\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Find the arc length of $f(x)=\\cosh x$ on $[0, \\ln 2]$.\n\\begin{sol}\n $3/4$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Set up the integral to find the arc length of $\\sin x$ \non the interval $[0,\\pi]$; do not evaluate the integral. If you have\naccess to appropriate software, approximate the value of the integral.\n\\begin{sol}\n $\\approx 3.82$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Set up the integral to find the arc length of $\\ds y=xe^{-x}$\non the interval $[2,3]$; do not evaluate the integral. If you have\naccess to appropriate software, approximate the value of the integral.\n\\begin{sol}\n $\\approx 1.01$\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Find the arc length of $\\ds y=e^x$ on the interval $[0,1]$.\n(This can be done exactly; it is a bit tricky and a bit long.)\n\\begin{sol}\n $\\ds \\sqrt{1+e^2}-\\sqrt2+\n{1\\over2}\\ln\\left({\\sqrt{1+e^2}-1\\over\\sqrt{1+e^2}+1}\\right)+\n{1\\over2}\\ln(3+2\\sqrt2)$\n\\end{sol}\n\\end{ex}\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "3744c5b79955601410a94cec73f7a8dc5998f2d8", "size": 26945, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8-applications-of-integration/8-7-arclength.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8-applications-of-integration/8-7-arclength.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8-applications-of-integration/8-7-arclength.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.1677852349, "max_line_length": 464, "alphanum_fraction": 0.6304323622, "num_tokens": 10752, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417487156366, "lm_q2_score": 0.9032942001955143, "lm_q1q2_score": 0.59474469745565}}
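As a quick sanity check of the arc length formula (a worked verification added for illustration, matching the first exercise above): for $f(x) = x$ on $[0, 1]$ we have $\fp(x) = 1$, so
$$L = \int_0^1 \sqrt{1 + 1^2}\ dx = \int_0^1 \sqrt{2}\ dx = \sqrt{2},$$
which agrees with the answer $\sqrt{2}$ listed in the solutions.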
{"text": "\\documentclass[12pt]{scrartcl}\n\n\\input{preamble}\n\n\\makeatletter\n\\title{Hack 11.0}\\let\\Title\\@title\n\\subtitle{Computer Science I\\\\\nEncapsulation\\\\\n{\\small\n\\vskip1cm\nDepartment of Computer Science \\& Engineering \\\\\nUniversity of Nebraska--Lincoln}\n\\vskip-3cm}\n%\\author{Dr.\\ Chris Bourke}\n\\date{~}\n\\makeatother\n\n\\begin{document}\n\n\\maketitle\n\n\\hrule\n\n\\input{instructions.tex}\n\nCorrectness:\n\\begin{itemize}\n  \\item 5 points for student tester (it is OK to test a function transitively; ie if createAirport uses initAirport and they only test createAirport; both are getting tested)\n  \\item 5 points for our ``visual'' inspection test\n  \\item 3 points for the 100 test cases for getAirDistance\n  \\item 3 points for the 100 test cases getEstimatedTravel time\n\\end{itemize}\n\n\\section*{Problem Statement}\n\nThere are thousands of commercial, military, and local airports in the US and\naround the world.  The International Civil Aviation Organization maintains a\ndatabase of current and inactive airports around the world.  The database \nuniquely identifies each airport by an alphanumeric GPS code.  Further, each\nrecord contains the following pieces of data on each airport:\n\\begin{itemize}\n  \\item The name of the airport\n  \\item Its latitude in degrees in the range $[-90, 90]$ with negative values corresponding to the southern hemisphere\n  \\item Its longitude in degrees in the range $[-180, 180]$ with negative values corresponding to the western hemisphere\n  \\item The type of airport \n  \\item Its elevation in (whole) feet above sea level\n  \\item Its municipality and its country\n\\end{itemize}\n\nYou will design a C structure to encapsulate these attributes to model an\nairport record from the ICAO database.  You will also design several functions\nto support your structure including factory functions, functions to \ncreate a string representation, print records, etc. You will also implement\nseveral utility functions that use your structure to compute the air\ndistance(s) between airport locations using their latitude and longitude.\nRecall that the air distance $d$ between two latitude/longitude points can be \nestimated using the Spherical Law of Cosines.\n\n $$d = \\arccos{(\\sin(\\varphi_1) \\cdot \\sin(\\varphi_2) + \\cos(\\varphi_1) \\cos(\\varphi_2) \\cos(\\Delta) )} \\cdot R$$\nwhere\n\\begin{itemize}\n  \\item $\\varphi_1$ is the latitude of location $A$, $\\varphi_2$ is the latitude of location $B$\n  \\item $\\Delta$ is the difference between location $B$'s longitude and location $A$'s longitude\n  \\item $R$ is the (average) radius of the earth, 6,371 kilometers\n\\end{itemize}\nThis formula assumes that latitude and longitude are in radians \n$r$, $-\\pi \\leq r \\leq \\pi$.  To convert from degrees $d$ ($-180 \\leq d \\leq 180$) \nto radians $r$, you can use the simple formula:\n  $$r = \\frac{d}{180} \\pi$$\n\nMore details have been provided in a header file, \\mintinline{text}{airport.h}.\nYou will need to design your structure and implement all of the specified \nfunctions.\n\n\n\\section*{Instructions}\n\n\\begin{itemize}\n\n  \\item Place all of your function definitions in a source file named \n  \\mintinline{text}{airport.c} and hand it in with your header file, \n  \\mintinline{text}{airport.h}.  
You may add any utility functions you\n  wish but you must \\emph{not} change any of the signatures of the required\n  functions.\n  \n  \\item In addition, you must create a main test driver program that \n  demonstrates at least 3 cases per function.  Name this file \n  \\mintinline{text}{airportTester.c} and hand it in.\n\n  \\item You are encouraged to collaborate with any number of students \n  before, during, and after your scheduled hack session.  \n\n  \\item You may (and in fact are encouraged to) define any additional\n  ``helper'' functions that may help you.\n\n  \\item Include the name(s) of everyone who worked together on\n  this activity in your source file's header.\n\n  \\item Turn in all of your files via webhandin, making sure that \n  everything compiles and executes correctly in the webgrader.  Each individual \n  student will need to hand in their own copy and will receive \n  their own individual grade.\n\\end{itemize}  \n\n\n\\end{document}\n", "meta": {"hexsha": "57deb3bd3831351b2ed17663c7954ef3902c6300", "size": 4106, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hacks/hack11.0.tex", "max_stars_repo_name": "hrithik125/ComputerScienceI", "max_stars_repo_head_hexsha": "40be47f15817a50497f6c6f7cdca9ee1db429b00", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-07T15:21:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T15:21:01.000Z", "max_issues_repo_path": "hacks/hack11.0.tex", "max_issues_repo_name": "hrithik125/ComputerScienceI", "max_issues_repo_head_hexsha": "40be47f15817a50497f6c6f7cdca9ee1db429b00", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hacks/hack11.0.tex", "max_forks_repo_name": "hrithik125/ComputerScienceI", "max_forks_repo_head_hexsha": "40be47f15817a50497f6c6f7cdca9ee1db429b00", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.3738317757, "max_line_length": 174, "alphanum_fraction": 0.758402338, "num_tokens": 1048, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081925, "lm_q2_score": 0.8670357477770337, "lm_q1q2_score": 0.594742698660277}}
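To connect the Spherical Law of Cosines above to C, here is a minimal sketch of the distance computation. The \mintinline{c}{Airport} struct layout and the \mintinline{c}{getAirDistance} signature are assumptions made for illustration; the function name appears in the grading criteria, but the authoritative declarations live in the provided \mintinline{text}{airport.h}.

\begin{minted}{c}
#include <math.h>

/* Average radius of the earth in kilometers, per the handout. */
#define EARTH_RADIUS_KM 6371.0
#define PI 3.14159265358979323846

/* Assumed layout; the actual struct is specified in airport.h. */
typedef struct {
  char *name;
  double latitude;   /* degrees in [-90, 90]   */
  double longitude;  /* degrees in [-180, 180] */
  char *type;
  int elevationFeet;
  char *city;
  char *country;
} Airport;

/* Converts degrees to radians using r = d / 180 * pi. */
static double degreesToRadians(double d) {
  return d / 180.0 * PI;
}

/* Air distance in kilometers via the Spherical Law of Cosines. */
double getAirDistance(const Airport *a, const Airport *b) {
  double phi1  = degreesToRadians(a->latitude);
  double phi2  = degreesToRadians(b->latitude);
  double delta = degreesToRadians(b->longitude - a->longitude);
  return acos(sin(phi1) * sin(phi2) +
              cos(phi1) * cos(phi2) * cos(delta)) * EARTH_RADIUS_KM;
}
\end{minted}

Compile with \mintinline{text}{-lm}; as a rough sanity check, two airports separated by about half a degree of latitude and one degree of longitude in the central US should come out on the order of 90 kilometers apart.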
{"text": "The agenda for this section is as follows. First, we exemplify Stage~1 of our\nframework by introducing a number of quantities of interest \\g, which illustrate\nthe broad applicability of the framework and thereby give the reader a better\nsense of its utility. Second, we turn to Stage~2 and highlight a transformation\n$\\transform$ that is the appropriate one to use in the majority of cases. Third,\nwe proceed directly to Stage~4 and illustrate a potential output of the\nframework.\n\n\\subsection{\\problemtitle}\n\\slab{frame-application-problem}\n\nAssume the system, power, and temperature models described in\n\\sref{system-model}, \\sref{power-model}, \\sref{temperature-model}, respectively.\nAssume also the application model utilized in \\sref{dream-optimization-problem}\nexcept for the requirement about being periodic.\n\nLet us first touch upon the timing aspects of the system in question. Each of\nthe \\nt tasks of the application has a start time and a finish time, which are\ndenoted by $b_i$ and $d_i$, respectively. Let also $\\v{b} = (b_i)_{i = 1}^\\nt$\nand $\\v{d} = (d_i)_{i = 1}^\\nt$. Other timing characteristics can then be\nderived from $\\v{b}$ and $\\v{d}$. For example, the end-to-end delay of the\napplication, which is the difference between the finish time of the latest task\nand the start time of the earliest task, is as follows:\n\\begin{equation} \\elab{frame-delay}\n  \\text{End-to-end delay}\n  = \\max_{i = 1}^\\nt d_i - \\min_{i = 1}^\\nt b_i.\n\\end{equation}\n\nSuppose now that the execution times of the tasks depend on the uncertain\nparameters \\vu introduced in \\sref{frame-problem}. Then $\\v{b}$ and $\\v{d}$\ndepend on \\vu. Hence, the end-to-end delay given in \\eref{frame-delay} does too,\nand it constitutes a quantity \\g that the designer might be interested in\nanalyzing. Note that, since the $\\min$ and $\\max$ functions are\nnondifferentiable, the same is true for \\g. Therefore, \\g is nonsmooth, which\nrenders the \\up{PC} decomposition and similar techniques inappropriate in this\ncase in general, which is discussed in \\sref{frame-motivation}.\n\n\\begin{remark} \\rlab{frame-nonsmoothness}\nThe behavior of \\g with respect to continuity, differentiability, and smoothness\ncannot generally be inferred from the behavior of \\vu. Even when the parameters\nbehave perfectly, \\g might still exhibit nondifferentiability or even\ndiscontinuity, which depends on how \\g functions internally. For example, as\nshown in \\cite{tanasa2015}, even if the execution times of tasks are continuous,\nend-to-end delays are very often discontinuous due to the actual scheduling\npolicy.\n\\end{remark}\n\nLet us move on to power. The total energy consumed by the \\np processing\nelements during an application run can be estimated using a power profile \\mp,\nwhich is defined in \\eref{power-profile}, as follows:\n\\begin{equation} \\elab{frame-energy}\n  \\text{Total energy}\n  = \\sum_{i = 1}^\\np \\int \\p_i(t) \\d t\n  \\approx \\dt \\norm[1]{\\mp}\n\\end{equation}\nwhere $\\p_i$ denotes the power consumption of processing element~$i$, \\dt is the\nsampling interval, and $\\norm[1]{\\cdot}$ stands for the Manhattan norm. Since\n$\\v{b}$ and $\\v{d}$ depend on \\vu, the power consumption of the system is also\ndependent on \\vu. Consequently, the total energy given in \\eref{frame-energy}\ndepends on \\vu and is a candidate for \\g. \\rref{frame-nonsmoothness} applies in\nthis context to the full extent.\n\nLet us now turn to temperature. 
The maximum temperature that the platform\nreaches during an application run can be estimated using a temperature profile\n\\mq, which is defined in \\eref{temperature-profile}, as follows:\n\\begin{equation} \\elab{frame-temperature}\n  \\text{Maximum temperature}\n  = \\max_{i = 1}^\\np \\sup_{t} \\q_i(t)\n  \\approx \\norm[\\infty]{\\mq}\n\\end{equation}\nwhere $\\q_i$ denotes the heat dissipation of processing element~$i$, and\n$\\norm[\\infty]{\\cdot}$ stands for the uniform norm. Since the power consumption\nof the platform is affected by \\vu, the corresponding heat dissipation is\ninfluenced by \\vu as well. Therefore, the maximum temperature in\n\\eref{frame-temperature} is also a potential quantity of interest \\g. Note that,\ndue to the maximization involved in the calculations, the quantity is\nnondifferentiable and hence cannot generally be adequately addressed using\npolynomial approximations; recall also the concern in\n\\rref{frame-nonsmoothness}.\n\nTo summarize, we have covered three aspects of electronic systems, namely\ntiming, power, and temperature, and introduced a number of quantities that the\ndesigner is typically interested in analyzing. These quantities will be\ndiscussed further in the section on experimental results, \\sref{frame-results}.\n\n\\inputfigure{frame-application}\nLet us employ one of the introduced quantities in order to have a concrete\nexample to work with in this section. The problem being addressed is depicted on\nthe left-hand side of \\fref{frame-application}. We consider a heterogeneous\nplatform with two processing elements denoted by PE1 and PE2 and an application\nwith four tasks denoted by T1--T4; the setup will be described in detail in\n\\sref{frame-results}. The data dependencies between the tasks and their mapping\nonto the processing elements can be seen in \\fref{frame-application}. The\nquantity of interest \\g is the application's end-to-end delay defined in\n\\eref{frame-delay}. The uncertain parameters \\vu are the execution times of T2\nand T4 denoted by $\\u_1$ and $\\u_2$, respectively.\n\nThe large rectangle on the left-hand side of \\fref{frame-application} is a\n``black box'' that evaluates \\g given \\vu. It takes an assignment of the\nexecution times $\\u_1$ and $\\u_2$ and outputs the calculated end-to-end delay\n\\g. In practice, this evaluation often involves a system simulator, such as\nSniper \\cite{carlson2011}, in which case the modeling capabilities of this\nsimulator are naturally inherited by our technique.\n\nTargeting the practical scenario described in \\sref{chaos-formulation}, the\nmarginal distributions and correlation matrix of \\vu are assumed to be\navailable. Without loss of generality, each marginal distribution is a\nfour-parameter beta distribution shown in \\eref{beta-distribution}. Furthermore,\nthe execution times are assumed to be correlated based on the dependencies\nbetween the corresponding tasks as defined by the structure of the application's\ntask graph. Specifically, the closer task~$i$ and task~$j$ are in the graph as\nmeasured by the number of edges between vertex~$i$ and vertex~$j$, the more\nstrongly $\\u_i$ and $\\u_j$ are correlated.\n\n\\subsection{Probability Transformation}\n\\slab{frame-application-transformation}\n\nAt Stage~2 of our workflow outlined in \\sref{frame-solution}, a suitable\ntransformation $\\transform$, which is required in \\eref{frame-transformation},\nis chosen. 
We utilize the one shown in \\eref{probability-transformation} and\nreduce $\\vu: \\Omega \\to \\real^\\nu$ to $\\vz: \\Omega \\to [0, 1]^\\nz$ so that the\nlatter is uniformly distributed. In this case, the rightmost $\\transform_1$ in\n\\eref{probability-transformation} is simplified, since the marginals of \\vz are\nalready uniform. Note that the model-order-reduction functionality of the chosen\n$\\transform$ is engaged; it eliminates redundant stochastic dimensions and,\ntherefore, assists the subsequent interpolation.\n\nThe result is that the obtained vector \\vz conforms to the requirements listed\nin \\sref{frame-transformation}: the codomain of \\vz is $[0, 1]^\\nz$, and it has\nthe smallest number of dimensions that are necessary in order to preserve a\ncertain level of accuracy.\n\nIn \\fref{frame-application}, $\\transform$ is depicted as a small square. In this\nparticular example, the stochastic dimensionality is found to be the same before\nand after $\\transform$, which is indicated by the two incoming and two outgoing\narrows. The depicted component takes an assignment of the auxiliary variables\n$\\z_1$ and $\\z_2$ and outputs the execution times $\\u_1$ and $\\u_2$ in\naccordance with their joint distribution.\n\nIn the following, we proceed directly to Stage~4, since Stage~3, which is given\nin the middle of \\fref{frame-application}, is standard: using a number of\nstrategic invocations of the simulator of \\g (the ``black box''), it delivers a\nlightweight surrogate $\\interpolant{\\nz}{\\ls}(\\g)$, which is illustrated by a\nrounded rectangle in \\fref{frame-application}.\n\n\\subsection{Post-Processing}\n\nHaving constructed the interpolant $\\interpolant{\\nz}{\\ls}(\\g)$, the designer\nstarts to work solely with this interpolant, which corresponds to Stage~4 of our\nframework.\n\nSuppose the designer is interested in the probability distribution of \\g. In\nthis scenario, $\\interpolant{\\nz}{\\ls}(\\g)$ should be sampled, which is\nrepresented by the rightmost box in \\fref{frame-application}. The operation\ncorresponds roughly to \\aref{frame-evaluation}: the interpolant receives $\\z_1$\nand $\\z_2$ and returns an approximation of the value of \\g at that point. Recall\nthat the computational cost of this sampling is negligible, since \\g is not\ninvolved. The collected samples, which are denoted by $G$ in\n\\fref{frame-application}, are then utilized in order to estimate the\ndistribution of \\g.\n\n\\inputfigure{frame-density}\n\\fref{frame-density} depicts the result. The blue line shows the \\ac{PDF} of \\g\ncomputed by applying kernel density estimation \\cite{hastie2013} to the data set\n$G$. The orange line, which is barely visible behind the blue line, shows the\nactual density of \\g; its calculation is explained in \\sref{frame-results}. It\ncan be seen that our solution closely matches the exact one. In addition, the\ngreen line illustrates the estimate that the designer would obtain if \\g was\nsampled directly using the same number of evaluations as the one consumed by the\nproposed framework. 
It can be seen that, given the same budget, the solution\ndelivered by our technique is substantially closer to the exact one than the one\ndelivered by direct sampling.\n", "meta": {"hexsha": "007af96610506827b4bde795ed9e9a3011e84fe4", "size": 9897, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "include/uncertainty/workload/development/application.tex", "max_stars_repo_name": "IvanUkhov/thesis", "max_stars_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "include/uncertainty/workload/development/application.tex", "max_issues_repo_name": "IvanUkhov/thesis", "max_issues_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "include/uncertainty/workload/development/application.tex", "max_forks_repo_name": "IvanUkhov/thesis", "max_forks_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.2080924855, "max_line_length": 80, "alphanum_fraction": 0.7792260281, "num_tokens": 2456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430562234877, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5946196913940804}}
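The quantities of interest defined in the record above reduce to a few array operations once the timing vectors and the sampled power and temperature profiles are in hand. The following is a minimal Python sketch; all inputs (start/finish times, sampling interval, and profiles) are made-up placeholders, not values produced by the framework itself:

\begin{lstlisting}[language=Python]
import numpy as np

# Made-up task timing: start times b_i and finish times d_i for nt = 4 tasks.
b = np.array([0.0, 1.0, 1.5, 2.0])
d = np.array([1.2, 2.5, 3.0, 4.1])

# Made-up profiles: rows = processing elements, columns = sampled time steps.
dt = 0.01                                         # sampling interval
rng = np.random.default_rng(0)
P = np.abs(rng.normal(1.0, 0.1, size=(2, 400)))   # power profile
Q = 300.0 + 5.0 * rng.random(size=(2, 400))       # temperature profile

delay  = d.max() - b.min()      # end-to-end delay: max_i d_i - min_i b_i
energy = dt * np.abs(P).sum()   # total energy: dt * ||P||_1 (Manhattan norm)
t_max  = np.abs(Q).max()        # maximum temperature: ||Q||_inf (uniform norm)
print(delay, energy, t_max)
\end{lstlisting}

Note that \texttt{delay} inherits exactly the nonsmoothness discussed above: it is built from \texttt{max} and \texttt{min} of quantities that depend on the uncertain parameters.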
{"text": "\\documentclass{tufte-handout}\n\\title{Applied Algorithms\\newline Mandatory Assignment \\#$2$}\n\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{graphicx} % for handling images\n\n\\usepackage{amsmath}\n\\usepackage{booktabs}\n\\usepackage{microtype}\n\n\\usepackage{pgfplots} % layout and margin handling\n\\pgfplotsset{width=10cm,compat=1.13}\n\n\\usepackage{booktabs}\n\\usepackage{tcolorbox}\n\n% Graphics\n\\usepackage{graphicx}\n\\usepackage{subfig}\n\n% Code formatting   \n\\usepackage{listings}\n\\lstset{\n   language=java,\n   extendedchars=true,\n   basicstyle=\\footnotesize\\ttfamily,\n   showstringspaces=false,\n   showspaces=false,\n   numbers=left,\n   numberstyle=\\footnotesize,\n   numbersep=9pt,\n   tabsize=2,\n   breaklines=true,\n   showtabs=false,\n   frame=single,\n   extendedchars=false,\n   inputencoding=utf8,\n   captionpos=b\n}\n\n\\begin{document}\n\n\\thispagestyle{empty}\n\n\\maketitle\n\\includegraphics[width=\\textwidth]{figs/logo_en.png}\n\n\\vspace{10mm}\\noindent %10mm vertical space\n\n\\begin{center}\n{\\LARGE Distinct elements using hashing}\n\\end{center}\n\n\\vspace{5mm}\\noindent %10mm vertical space\n\n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{cc}\n\\multicolumn{2}{c}{Authors}                                   \\\\ \\hline\n\\multicolumn{1}{c|}{\\textbf{Hugo Brito}} & \\textbf{Ren\u00e9 Haas} \\\\\n\\multicolumn{1}{c|}{\\href{mailto:hubr@itu.dk}{hubr@itu.dk}}         & \\href{mailto:renha@itu.dk}{renha@itu.dk}       \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\vspace{5mm} %10mm vertical space\n\n%\\newpage\n\n\\section{Introduction}\n\nIn this report we show our findings with the HyperLogLog algorithm for estimating distinct elements.\nAll our experiments are reproducible and can be rerun by using the {\\tt Makefile}. If the reader wishes to rerun our experiments simply {\\tt cd} to the {\\tt src} folder and type {\\tt make}.\n\n\\section{\\textbf{Exercise 1.}}\nIn this first exercise we are given a matrix $A\\in\\{0,1\\}^{b\\times k}$ where $b=k=32$. We are asked to implement the hash function\n\\begin{align*}\n  h(x)_i = \\sum_{j=1}^b A_{i,j}x_j \\mod 2\n\\end{align*}\nWe implement the function in Java (see {\\tt Ex1.java})\nwith the following code:\n\n\\begin{lstlisting}\npublic static int h(int x) {\n    int res = 0;\n    for (int i = 0; i < A.length; i++) {\n        res += (Integer.bitCount(A[i] & x) % 2) << (31-i);\n    }\n    return res;\n}\n\\end{lstlisting}\n\\noindent We start by calculating the bit-wise {\\tt \\&} (and) operation between $x$ and the rows of $A$. The result of this operation is a binary number which is equivalent to a vector consisting of the element-wise products of the components of $x$ and the $i^{th}$ row of $A$. We then count the sum with {\\tt Integer.bitCount} and then take the modulus $2$ of that number. This gives us an integer which is either $0$ or $1$. We make this the integer the $j$ bit in the result by using the shift operator {\\tt <<}. This implementation passes the tests on CodeJudge.\n\n\\section{\\textbf{Exercise 2.}}\n\nWe are asked to implement the function $\\rho(y) = \\min\\{i\\ |\\ y_{k-i}=1\\}$. We do this the easy way in Java by simply using the {\\tt Integer.numberOfLeadingZeros} function from the Java standard library and adding $1$ to the result. We add $1$ because we are interested in the first position that is not $0$. 
The implementation can be found in the {\\tt Ex2.java} file and looks like the following:\n\n\\begin{lstlisting}\npublic static int rho(int x) {\n    if (x == 0) { throw new InputMismatchException(\"Zero is undefined\"); }\n    // note: the +1 described above is not applied here; see the discussion below\n    return Integer.numberOfLeadingZeros(x);\n}\n\\end{lstlisting}\n\\noindent\nIt is claimed that the distribution of hash values of $\\rho$ satisfies $Pr(\\rho(y)=i) = 2^{-i}$.\nIn Figure \\ref{fig:ex2} we show a histogram over the values of $\\rho(x)$ for $x \\in \\{1,\\cdots,10^6\\}$. We find that there is direct experimental support for this claim, although in reality $Pr(\\rho(y)=i)=2^{-i}/2$, which is consistent with our implementation returning the raw leading-zero count (starting at $0$) rather than the one-indexed position. This is verified experimentally in Fig.~\\ref{fig:ex2}, and the factor of $1/2$ is also theoretically necessary, since $\\sum_{i=0}^{\\infty} 2^{-i} = 2$ and a probability distribution must be normalised.\n\n\\begin{figure}[h!]\n  \\includegraphics[width = \\textwidth]{figs/ex2-hist}\n  \\caption{Histogram of $\\rho(x)$ for $x \\in \\{1,\\cdots,10^6\\}$}\n  \\label{fig:ex2}\n\\end{figure}\n\n\\section{\\textbf{Exercise 3.}}\n% Exercise 3. Implement this algorithm with m = 1024 and k = 32. Test it on the input sequence 10^6, ..., 2*10^6 - 1 with 1 million distinct items: How many distinct elements are reported by your implementation? (Add this number to your report.)\n\nWe have implemented the algorithm, and the implementation for the values $m = 1\\ 024$ and $k = 32$ can be found in the {\\tt Ex3.java} file. This file also passes all instances on CodeJudge.\n\\\\\\noindent\nFor the given $m$, $k$ and the given sequence of distinct integers $x \\in \\{10^6,\\cdots,2\\times 10^6 - 1\\}$ the implementation of this algorithm estimates that there are $973\\ 089,272$ distinct values. The error\\footnote{Taken from \\href{https://www.thoughtco.com/how-to-calculate-percent-error-609584}{thoughtco.com}} is then $|1\\ 000\\ 000 - 973\\ 089,272| = 26\\ 910,728$, which translates to $2,69\\%$.\n\n\\section{\\textbf{Exercise 4.}}\n\n% As alluded to earlier, the space usage of the HyperLogLog counter influences its estimation error. In the last exercise, we want to experimentally find out how strong this influence is.\n\n% Write an input generator that takes as input an integer n and a seed seed on standard input and outputs a list of n random 32-bit integers from a random integer generator using seed seed. Describe and run an experiment that gives a graphical representation of the connection of m and the estimation error. A recommended representation is a histogram plot over the distinct element count reported by the algorithm. Try at least 3 different values of m.\n\n% we have the table and want to correlate the number of times for each m that an estimation falls inside the first sigma, the second sigma, or none. we have the columns that report true or false for that.\n\n% we can make a graph out of that table\n\n\\noindent Lastly, we wanted to find out how the space usage of the HyperLogLog counter influences its estimation error.\\\\\n\\bigskip \\noindent\nThe file {\\tt Ex4.java} implements the input generator to be used by the HyperLogLog algorithm. We wanted to be certain that the algorithm would always estimate the number of distinct integers and compare that estimation to the real number of distinct integers. To achieve this, the above-mentioned generator adds the generated integers to a set. 
Below is the code snippet that implements this description:\n\n\\begin{lstlisting}\ndo {\n\tint i = a + random.nextInt(-a + b + 1);\n\tif (i != 0) { // zero is excluded because rho(0) is undefined, so it cannot be hashed\n\t\tintegers.add(i);\n    // omitted for brevity\n\t}\t\t\t\n} while (integers.size() < n);\n\\end{lstlisting}\n\\noindent\nVariable description:\n\\begin{itemize}\n    \\item {\\tt a}: The lower bound of the random generator\n    \\item {\\tt b}: The upper bound of the random generator\n    \\item {\\tt integers}: The set of distinct integers\n    \\item {\\tt n}: The real number of distinct integers\n\\end{itemize}\n\\noindent\nThe file {\\tt Experiment.java} implements the experiment we designed to assess the relation between $m$ and the estimation. It takes a set of distinct integers and $m$ as constructor parameters and outputs all the information needed to report on this relation, namely:\n\\begin{itemize}\n    \\item The estimation\n    \\item A {\\tt boolean} value that evaluates to {\\tt true} should the estimation fall within the one-$\\sigma$ range  \n    \\item A {\\tt boolean} value that evaluates to {\\tt true} should the estimation fall within the $2\\sigma$ range \n    \\item $\\sigma$\n    \\item The lower and upper bounds of the $\\sigma$ range\n    \\item The lower and upper bounds of the $2\\sigma$ range\n\\end{itemize}\n\\noindent\nFinally, the file {\\tt main.java} combines all the above and outputs the results to a file. It contains a {\\tt long} generator that was initialised with the {\\tt seed} $42$. The output of this {\\tt long} generator is then used as the {\\tt seed} in the {\\tt Ex4.java} file to generate sets of distinct integers. Once a set is created, three {\\tt Experiment} instances are created with different values of $m\\in\\{16, 256, 1024\\}$. In each iteration a new set is generated with an unused {\\tt seed} (we guarantee this by keeping a set {\\tt usedSeeds}), and the set is passed to the three instances. In order to gather significant data, this process is repeated $100\\ 000$ times.\n\n\\begin{figure}[!h]\n  \\includegraphics[width = \\textwidth]{figs/ex416}\n  \\caption{HyperLogLog estimation distribution for $m = 16$ over $100\\ 000$ distinct sets of $1\\ 000\\ 000$ distinct integers}\n  \\label{fig:16}\n\\end{figure}\n\\noindent In figures \\ref{fig:16}, \\ref{fig:256} and \\ref{fig:1024} we see histograms that resulted from these experiments. In the figures, the yellow vertical lines represent the $n(1\\pm\\sigma)$ range and the red lines the $n(1\\pm2\\sigma)$ range. We have also fitted a Gaussian distribution and plotted that on top of the histogram in orange.\n\\begin{figure}[h!]\n  \\includegraphics[width = \\textwidth]{figs/ex4256}\n  \\caption{HyperLogLog estimation distribution for $m = 256$ over $100\\ 000$ distinct sets of $1\\ 000\\ 000$ distinct integers}\n  \\label{fig:256}\n\\end{figure}\n\\begin{figure}[h!]\n  \\includegraphics[width = \\textwidth]{figs/ex41024}\n  \\caption{HyperLogLog estimation distribution for $m = 1\\ 024$ over $100\\ 000$ distinct sets of $1\\ 000\\ 000$ distinct integers}\n  \\label{fig:1024}\n\\end{figure}\n% \\clearpage\n\n\\noindent We can see that the error distribution follows a Gaussian distribution. 
In Table \\ref{fractions} we see the fraction of runs that fall inside $n(1\\pm\\sigma)$ and $n(1\\pm2\\sigma)$ for the three different values of $m$, respectively.\n\n\\begin{table}\n  \\centering\n  \\begin{tabular}{ l | c | c | c}\n                     & $m = 16$ & $m = 256$ & $m = 1024$\\\\ \\hline\n  $n(1\\pm\\sigma)$    & $69,453\\%$ & $68,596\\%$ & $68,521\\%$ \\\\ \\hline\n  $n(1\\pm 2\\sigma)$   & $95,042\\%$ & $95,476\\%$ & $95,523\\%$ \\\\ \\hline\n  \\end{tabular}\n  \\caption{Fractions of runs that fall inside $n(1\\pm\\sigma)$ and $n(1\\pm2\\sigma)$ for the different values of $m$}\n  \\label{fractions}\n\\end{table}\n\n\\section{\\textbf{Conclusion}}\n\nThe HyperLogLog algorithm is an effective strategy to estimate the cardinality of a set of integers, as it does not require keeping track of the integers themselves, saving memory space.\n\\noindent We could also see that allocating more space (buckets) by increasing $m$ does not change the shape of the error distribution, which remains Gaussian. It is important to mention that, since $\\sigma$ is a function of $m$, decreasing $m$ will also increase the error range, resulting in estimations that are further away from the real value in absolute terms. So there is some gain in, for instance, increasing $m$ from $16$ to $256$, but this gain is less pronounced from $256$ to $1\\ 024$.\n\n\\end{document}", "meta": {"hexsha": "d93f231b00a027eea00f1f857ec36404ff5d7c58", "size": 10947, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/main.tex", "max_stars_repo_name": "hugo-brito/Distinct-elements-using-hashing", "max_stars_repo_head_hexsha": "7fde0de0be5faeed5417d65693a21d2eae0a834a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/main.tex", "max_issues_repo_name": "swyoon/LangevinMC", "max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/main.tex", "max_forks_repo_name": "hugo-brito/Distinct-elements-using-hashing", "max_forks_repo_head_hexsha": "7fde0de0be5faeed5417d65693a21d2eae0a834a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.4626865672, "max_line_length": 667, "alphanum_fraction": 0.7261350142, "num_tokens": 3067, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8311430436757312, "lm_q1q2_score": 0.5946196874600991}}
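For readers who prefer a self-contained reference, here is a small Python sketch of the two primitives from Exercises 1 and 2 of the record above: the GF(2) matrix hash and the leading-zero statistic $\rho$. The random matrix $A$ is a placeholder (the assignment uses a fixed given matrix), and the original implementation is in Java:

\begin{lstlisting}[language=Python]
import random

random.seed(42)
A = [random.getrandbits(32) for _ in range(32)]  # placeholder rows of A

def h(x: int) -> int:
    """Matrix hash over GF(2): bit i of h(x) is the parity of A[i] & x."""
    res = 0
    for i in range(32):
        bit = bin(A[i] & x).count("1") % 2  # sum_j A[i][j] * x[j] mod 2
        res |= bit << (31 - i)
    return res

def rho(y: int) -> int:
    """One-indexed position of the first 1-bit from the top of a 32-bit y."""
    if y == 0:
        raise ValueError("rho is undefined for zero")
    return 33 - y.bit_length()  # = number of leading zeros + 1

print(f"{h(123456):032b}")
print(rho(1), rho(2**31))  # 32 and 1
\end{lstlisting}

This version applies the +1 of the one-indexed definition, so $Pr(\rho(y)=i) = 2^{-i}$ for uniformly random $y$, without the factor of $1/2$ discussed in the report.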
{"text": "\\documentclass{article}\n    % General document formatting\n    \\usepackage[margin=0.7in]{geometry}\n    \\usepackage[parfill]{parskip}\n    \\usepackage[utf8]{inputenc}\n    \\usepackage{mathrsfs}\n    \\usepackage{amsmath}\n    \\usepackage{amssymb}\n    \\usepackage{tikz}\n    \\usepackage{fancyhdr}\n    \\usepackage{multicol}\n\n    \\usetikzlibrary{positioning}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Edgar Jacob Rivera Rios - A01184125}\n\n\\renewcommand{\\labelenumi}{\\alph{enumi})}\n\n\\begin{document}\n\\section*{3.1.1}\nA survey question for a sample of 150 individuals yielded 75 \\textbf{Yes} responses, 55 \\textbf{No} responses, and 20 \\textbf{No Opinions}.\n\\begin{enumerate}\n  \\item What is the point estimate of the proportion in the population who respond \\textbf{Yes}?\n  \\begin{equation*}\n    \\bar{p} = \\frac{x}{n} = \\frac{75}{150} = 0.5\n  \\end{equation*}\n  \\item What is the point estimate of the proportion in the population who respond \\textbf{No}?\n  \\begin{equation*}\n    \\bar{p} = \\frac{x}{n} = \\frac{55}{150} = \\frac{11}{30} = 0.36666666\n  \\end{equation*}\n\\end{enumerate}\n\n\\section*{3.1.2}\nMany drugs used to treat cancer are expensive. BusinessWeek reported on the cost per treatment of Herceptin, a drug used to treat breast cancer(BusinessWeek, January 30, 2006).\n\nTypical treatment costs (in dollars) for Herceptin are provided by a simple random sample of 10 patients.\n\\begin{center}\n  \\textbf{\\$4,376 \\$5,578 \\$2,717 \\$4,920 \\$4,495 \\$4,798 \\$6,446 \\$4,119 \\$4,237 \\$3,814}\n\\end{center}\n\n\\begin{enumerate}\n  \\item Develop a point estimate of the mean cost per treatment with Herceptin\n  \\begin{equation*}\n    \\bar{x} = \\frac{1}{n} \\sum_{i=1}^{n} x_{i} = \\frac{45500}{10} = \\$4,550.00\n  \\end{equation*}\n  \\item Develop a point estimate of the standard deviation of the cost per treatment with Herceptin.\n  \\begin{equation*}\n    s = \\sqrt{\\frac{1}{n - 1} \\sum_{i=1}^{n}(x_{i} - \\bar{x})^{2}} = \\sqrt{\\frac{9,068,620.00}{9}}= \\$1003.80498327337\n  \\end{equation*}\n\\end{enumerate}\n\n\\section*{3.1.3}\nThe American Association of Individual Investors (AAII) polls its subscribers on a weekly basis to determine the number who are bullish, bearish, or neutral on the short-term prospects for the stock market. 
Their findings for the week ending March 2, 2006, are consistent with the following sample results (AAII website, March 7, 2006).\n\\begin{table}[h!]\n  \\centering\n  \\begin{tabular}{c c c}\n    Bullish 409 &Neutral 299 &Bearish 291\\\\\n  \\end{tabular}\n\\end{table}\nDevelop a point estimate of the following population parameters.\n\\begin{enumerate}\n  \\item The proportion of all AAII subscribers who are bullish on the stock market.\n  \\begin{equation*}\n    \\bar{p} = \\frac{x}{n} = \\frac{409}{999} = 0.409\n  \\end{equation*}\n  \\item The proportion of all AAII subscribers who are neutral on the stock market.\n  \\begin{equation*}\n    \\bar{p} = \\frac{x}{n} = \\frac{299}{999} = 0.299\n  \\end{equation*}\n  \\item The proportion of all AAII subscribers who are bearish on the stock market.\n  \\begin{equation*}\n    \\bar{p} = \\frac{x}{n} = \\frac{291}{999} = 0.291\n  \\end{equation*}\n\\end{enumerate}\n\n\\section*{3.1.4}\nA simple random sample of 5 months of sales data provided the following information:\n\\begin{table}[h!]\n  \\centering\n  \\begin{tabular}{c c c c c c}\n    Month: &1 &2 &3 &4 &5 \\\\\n    Units Sold: &94 &100 &85 &94 &92\\\\\n  \\end{tabular}\n\\end{table}\n\\begin{enumerate}\n  \\item Develop a point estimate of the population mean number of units sold per month.\n  \\begin{equation*}\n    \\bar{x} = \\frac{1}{n} \\sum_{i=1}^{n} x_{i} = \\frac{465}{5} = 93\n  \\end{equation*}\n  \\item Develop a point estimate of the population standard deviation.\n  \\begin{equation*}\n    s = \\sqrt{\\frac{1}{n - 1} \\sum_{i=1}^{n}(x_{i} - \\bar{x})^{2}} = \\sqrt{\\frac{116}{4}}= 5.3851648071345\n  \\end{equation*}\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "bfeaabebf58e6533c1e76ed8f759ce9e9d4443d4", "size": 3797, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/Homework3_1.tex", "max_stars_repo_name": "edjacob25/Applied-Maths", "max_stars_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/Homework3_1.tex", "max_issues_repo_name": "edjacob25/Applied-Maths", "max_issues_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/Homework3_1.tex", "max_forks_repo_name": "edjacob25/Applied-Maths", "max_forks_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1443298969, "max_line_length": 336, "alphanum_fraction": 0.6873847775, "num_tokens": 1290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8311430415844385, "lm_q1q2_score": 0.5946196758779685}}
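The point estimates in the record above are easy to verify mechanically. A quick Python check, using the sample mean and the $(n-1)$-denominator sample standard deviation on the numbers from exercises 3.1.2 and 3.1.4:

\begin{lstlisting}[language=Python]
from statistics import mean, stdev  # stdev uses the n - 1 denominator

costs = [4376, 5578, 2717, 4920, 4495, 4798, 6446, 4119, 4237, 3814]
print(mean(costs))   # 4550
print(stdev(costs))  # ~1003.80, matching 3.1.2

units = [94, 100, 85, 94, 92]
print(mean(units))   # 93
print(stdev(units))  # ~5.385, matching 3.1.4
\end{lstlisting}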
{"text": "\\section{Indexing}\n\n\\frame{\\tableofcontents[currentsection]}\n\n\\begin{frame}\n    \\frametitle{Problem Statement}\n    \\begin{center}\\ttfamily\n        [1, 2, 3, 4, 5, 6][3] \\\\[4mm]\n        $\\downarrow$ \\\\[4mm]\n        4\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Indexing Array}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\draw[thick] (0,0) grid (6,1);\n            \\foreach[evaluate={int(\\x+1)} as \\i] \\x in {0,...,5} {\n                \\node[font=\\ttfamily] at ($ (\\x,0.5) + (0.5,0) $) {\\i};\n            }\n            \\draw[thick] (0,0) -- ++(0,-0.5);\n            \\node[anchor=north,font=\\ttfamily\\tiny] at (0,-0.5) {start};\n            \\draw[|-latex] (0,-.25) -- ++(3,0) node[midway,below,font=\\tiny\\ttfamily] {index * sizeof(T)};\n        \\end{tikzpicture}\n    \\end{center}\n    \\vskip4mm\n    \\structure{Algorithm}\n    \\begin{itemize}\n        \\item Memory location can be computed in a single step\n        \\item \\texttt{location = start + index * sizeof(T)}\n        \\item Direct CPU support: only 1 instruction required\n        \\item Explains zero-indexing\n        \\item $O(1)$\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Indexing Linked List}\n    \\begin{center}\n        \\begin{tikzpicture}[link/.style={thick,-latex}]\n            \\path[use as bounding box] (0,0) rectangle (10,3);\n            \\coordinate (p1) at (0,0);\n            \\coordinate (p2) at (2,2);\n            \\coordinate (p4) at (4,1);\n            \\coordinate (p3) at (6,1);\n            \\coordinate (p5) at (8,2);\n            \\coordinate (p6) at (7,0);\n\n            \\foreach \\i in {1,...,6} {\n                \\llnode[position={p\\i},size=0.5cm,value=\\i]\n            }\n\n            \\draw[-latex] ($ (p1) + (0.75,0.25) $) to[bend right=30] (p2);\n            \\draw[-latex] ($ (p2) + (0.75,0.25) $) to[bend left=30] ($ (p3) + (0,0.5) $);\n            \\draw[-latex] ($ (p3) + (0.75,0.25) $) to[bend left=45] ($ (p4) + (1,0) $);\n            \\draw[-latex] ($ (p4) + (0.75,0.25) $) to[bend left=45] ($ (p5) + (0,0.25) $);\n            \\draw[-latex] ($ (p5) + (0.75,0.25) $) to[bend left=45] ($ (p6) + (1,0.5) $);\n        \\end{tikzpicture}\n    \\end{center}\n    \\vskip4mm\n    \\structure{Algorithm}\n    \\begin{itemize}\n        \\item Nodes are scattered unpredictably across memory\n        \\item Follow \\texttt{Next} until \\texttt{Next == null}\n        \\item Finding \\texttt{n}th element takes \\texttt{n} jumps\n        \\item $O(n)$\n    \\end{itemize}\n\\end{frame}\n", "meta": {"hexsha": "502f5c00907dc91099523beadddf70fd866e926a", "size": 2456, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/linked-lists/aux-indexing.tex", "max_stars_repo_name": "DennisWinnepenninckx/distributed-applications", "max_stars_repo_head_hexsha": "06743e4e2a09dc52ff52be831e486bb073916173", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-22T09:52:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T09:52:11.000Z", "max_issues_repo_path": "slides/linked-lists/aux-indexing.tex", "max_issues_repo_name": "DennisWinnepenninckx/distributed-applications", "max_issues_repo_head_hexsha": "06743e4e2a09dc52ff52be831e486bb073916173", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2019-06-19T18:58:13.000Z", "max_issues_repo_issues_event_max_datetime": 
"2020-03-16T14:43:06.000Z", "max_forks_repo_path": "slides/linked-lists/aux-indexing.tex", "max_forks_repo_name": "DennisWinnepenninckx/distributed-applications", "max_forks_repo_head_hexsha": "06743e4e2a09dc52ff52be831e486bb073916173", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 32, "max_forks_repo_forks_event_min_datetime": "2019-09-19T03:25:11.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-06T15:01:47.000Z", "avg_line_length": 35.0857142857, "max_line_length": 106, "alphanum_fraction": 0.5114006515, "num_tokens": 879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7905303186696748, "lm_q1q2_score": 0.5944887483651932}}
{"text": "\\subsection{The scale invariant limit in 2 dimensions}\n\nIn the spirit of Nussinov and Nussinov~\\cite{nussinov2004bcs}, Nishida and Son~\\cite{Nishida:2006eu} analyzed the two-dimensional unitary limit (or near it) by investigating the system in $2+\\epsilon$ dimensions, using $\\epsilon$ as a perturbative parameter (not to be confused with the lattice spacing).\nThey found the trivial solution in the scale-invariant limit; namely, the two-body system becomes non-interacting as the scattering length $\\tilde a_{20} \\goesto\\infty$.\nOur results also show this behavior, since, for fixed $L$, the intersection of a flat scattering amplitude given by $\\tilde a_{20}$ goes to the non-interacting finite-volume energies as the scattering length grows large with either sign, indicating a trivial theory, as seen in \\Figref{luescher2d}.\n\nIn exactly two dimensions, however, the logarithmic dependence of the scattering length requires an accompanying scale so as to make the argument of the logarithm dimensionless, a consequence of the fact that in 2 dimensions, the interaction parameters of the Schr\\\"odinger equation are dimensionless.\n\\emph{Dimensionful} observables occur via dimensional transmutation \\cite{} which requires some fiducial external scale, which in any finite-volume calculation is naturally given by the size of the volume.\nThus the scale-invariant limit becomes sensitive to the order in which one takes either $\\tilde a_{20}\\goesto\\infty$ and $L\\goesto\\infty$ limits.\nTaking either limit first, indpendently of the other, will give the trivial scale-invariant limit found in \\cite{Nishida:2006eu}.\nHowever, there is a particular limit in which the scale-invariant system gives non-trivial solutions.\nTo see this, note that when $\\tilde a_{20}=\\frac{L}{2\\pi}$ the finite-volume energies must zero the zeta function in the quanization condition~\\eqref{2d luscher}, as in three dimensions when $a_{30}\\to\\infty$ and one dimension when $a_{10}\\to 0$.\n\nIn those dimensions, such a situation is akin to the unitary limit, once the infinite volume limit has been taken.\nThis limit can be done independently of taking $a_{30}\\to\\infty$ (3-D) or $a_{10}\\to0$ (1-D).\nHowever, in two dimensions, because of the logarithmic dependence, demanding that the zeta function remains zeroed axis requires the infinite volume limit $L\\to\\infty$ taken together with $\\tilde a_{20}$.\nIn particular, for the \\emph{non-trivial} scale-invariant limit, one must take both $\\tilde a_{20}\\goesto \\infty$ and $L\\to\\infty$ limits simultaneously \\emph{such that} $L/\\tilde a_{20}=2\\pi$.\nSuch a procedure keeps the dispersion zeta function $S^{\\dispersion}_2$ zeroed at each step of the extrapolation, resulting in a non-trivial, scale-invariant, two-body system in two dimensions.\n", "meta": {"hexsha": "65386f10a62dbf927a941746180304c9ed220a75", "size": 2767, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/luescher-nd/section/two-dimensions/scale-invariance.tex", "max_stars_repo_name": "ckoerber/luescher-nd", "max_stars_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-12T22:19:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-26T14:06:49.000Z", "max_issues_repo_path": "paper/luescher-nd/section/two-dimensions/scale-invariance.tex", "max_issues_repo_name": "ckoerber/luescher-nd", "max_issues_repo_head_hexsha": 
"d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-12-16T19:49:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-02T00:50:31.000Z", "max_forks_repo_path": "paper/luescher-nd/section/two-dimensions/scale-invariance.tex", "max_forks_repo_name": "ckoerber/luescher-nd", "max_forks_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 145.6315789474, "max_line_length": 304, "alphanum_fraction": 0.7885796892, "num_tokens": 698, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5944887439715639}}
{"text": "We have implemented a suite of tools for analysis of error for all of the aforementioned Langevin Monte Carlo algorithms. This section contains a selection of results, as well as a description of error metrics available and any other design choices made. The \\textsc{Python} code and a how-to\nguide can be found at the following\n\\textsc{url}: \\\\\n\n   \\centerline{ \\url{https://github.com/Tom271/LangevinMC}}\n\n\\subsection{Overview}\\label{overviewprogram}\n\\begin{itemize}\n\\item In the program, we first defined 'Sliced Wasserstein Distance' for dimension 1 and other. \\\\\n\\item We then defined the class of the potential: Gaussian, Gaussian Gradient; Double Wells, Double Wells Gradient; Ginzburg Landau, Ginzburg Landau Gradient;     Rosenbrock etc.\\\\\n\\item We then defined the algorithms we will be using to analyse our samples. The algorithms we will be using are: ULA, MALA, RWM, LM, HOLA, HPH; Taming Algorithms are: tULA, tMALA, tHOLA, tLM etc.\\\\\n\\item To evaluate and compare which method is better, we will compare some properties obtained by each method. For example, we will compare 'first moment', 'second moment', 'trace', 'scatter', 'histogram', 'KL-divergence', 'total variation', 'Sliced Wasserstein' \\\\\n\\item The experiment will then be performed for each method and each property. We could increase the accuracy of our results by increasing the number of steps in each experiment. \\\\\n\\item To know the accuracy of a method, we will calculate the difference between the results obtained from our algorithm and the result obtained from the theory. \\\\ The smaller the difference is, the more accurate the approximated result is, therefore, the better the algorithm.\\\\\n\\end{itemize}\n% \\subsection{Results}\n% \n% omit this figure\n\n% \\begin{figure}[H]\n% \\centering\n%   \\begin{minipage}[b]{0.32\\textwidth}\n%   \\centering\n%     \\includegraphics[width=\\textwidth]{Figures/tula_tv.png}\n%   \\end{minipage} %\n%   \\begin{minipage}[b]{0.32\\textwidth}\n%   \\centering\n%     \\includegraphics[width=\\textwidth]{Figures/tulac_tv.png}\n%   \\end{minipage} %\n%   \\begin{minipage}[b]{0.32\\textwidth}\n%   \\centering\n%     \\includegraphics[width=\\textwidth]{Figures/tmala_tv.png}\n%   \\end{minipage}\n%   \\caption{Comparison of \\texttt{tULA}, \\texttt{tULAc} and \\texttt{tMALA} for the total variation error evolving as a function of step size.}\n% \\end{figure}\n\n\n\n\n\\subsection{Sampling Approaches}\n\nThroughout this project, we have encountered two main approaches for sampling. To obtain a set of samples, one may:\n\n\\begin{itemize}\n    \\item Run a single chain, discard a certain amount of the initial values (the burn-in period) and keep the rest, with possible thinning\\footnote{To thin the chain, one only retains every $k$-th sample for some $k$.} to reduce autocorrelation. This is used for practical sampling.\n    \\item Run multiple chains and retain their final $N$-th value only. 
This is useful for verification/establishment of non-asymptotic bounds.\n\\end{itemize}\n\nTo accommodate both possibilities, given positive integers $b, N_{\\text{chains}}, N$ and a set $I \\subset \\{1, 2, \\dots, N - b\\}$ of indices, our program aggregates samples with indices in $\\{i+b\\ |\\ i \\in I\\}$ from $N_{\\text{chains}}$ independent chains.\n\n\\subsection{Estimating Error}\n\nIn the following, let $\\pi$ be the pdf of the true distribution from which we attempt to sample and $X = \\{x_i\\}_{i=1}^N$ denote the collection of samples collected by a sampling algorithm. The notation $x_i(d)$ will be used to refer to the $d$-th coordinate of $x_i$. We also denote by $\\{m_d\\}_{d=1}^D$ and $\\{M_d\\}_{d=1}^D$ the coordinate-wise minima and maxima among $x_i$, that is:\n\n$$ \n    m_d = \\min \\{x_i(d)\\}_{i=1}^N \\ \\ \\text{ and } \\ \\ M_d = \\max \\{x_i(d)\\}_{i=1}^N\n$$\n\nFor some of the error estimation methods, it will be necessary to construct multi-dimensional histograms. For this purpose, we define the following set of points (this is a minimal mesh that spans all of the samples):\n\n\\begin{align*}\n    \\#_X := \\Big\\{ \\left(c_1^{(k_1)}, c_2^{(k_2)}, \\dots, c_D^{(k_D)}\\right)\\ |\\ &(k_1, k_2, \\dots, k_D) \\in \\lbrace 0, 1, \\dots, \\text{bins - 1}\\rbrace^D, \\ c_d^{(k_d)} \\\\\n    &= m_d + \\left(k_d + \\frac 1 2\\right) \\frac{M_d - m_d}{\\text{bins}} \\Big\\}\n\\end{align*}\n\nwhere $\\text{bins}$ is a parameter describing the fineness of the mesh. The histogram of the collection of samples $X$ will be understood to be the following function $h: \\#_X \\rightarrow \\mathbb R$:\n\n$$\n    h(c) = |\\{x \\in X\\ |\\ \\arg \\min_{c' \\in \\#_X} \\norm{c' - x} = c \\}|\n$$\n\ni.e.\\ the number of points $x$ whose closest mesh point is $c$. This defines the discrete probability measure \n\n$$H_X := \\sum_{c \\in \\#_X} \\frac {h(c)} N   \\delta_c,$$\n\nwhich we will understand to be the sampled approximation of the true distribution from which we are sampling. Finally, denote also by $Z$ the sum of the true distribution pdf values over the points of the mesh.\n\n\\[Z := \\sum_{c \\in \\#_X} \\pi(c)\\]\n\n\n\n\\subsubsection{The Curse of Dimensionality}\n\nIn short, since the size of the mesh $|\\#_X| = \\text{bins}^D$ is exponential in dimension, it is impossible to construct histograms in high dimensions. Even within smaller dimensions, a fundamental trade-off arises: either one constructs a desirably fine mesh with many bins being empty (that is, $h(c) = 0$), or a coarse mesh with sufficiently many samples per bin. The quality of the error estimation suffers in both cases. \\\\\\\\\nFor this reason, the implemented histogram error metrics are only feasible for smaller dimensions\\footnote{Around $1 \\leq D \\leq 4$.}, whereas the other metrics should be considered only indicative with increasing dimension, rather than precise measures of error.\n\n\\subsubsection{Kernel Density Estimation}\nAs an alternative to histograms, and to smooth (and improve) our approximation of the sampled distribution, we considered Kernel Density Estimation. KDE can be thought of as an extension of the idea of a histogram that reduces the effect of bin placement.\n\n\\begin{defn}[KDE]\nA kernel density estimator for an i.i.d. set of samples $X = \\{x_i\\}_{i=1}^N$ is\n\\[\\hat{\\pi}_{X} (x) = \\frac 1 {Ns} \\sum_{i=1}^N K\\left( \\frac{x - x_i}{s} \\right)\\]\nwhere $s > 0$ is a smoothing parameter known as the bandwidth and $K$ is a kernel function. 
Based on this estimation, we define the following discrete probability measure:\n\n\\[ \\text{KDE}_X := \\frac 1 {\\hat Z} \\sum_{c \\in \\#_X} \\hat{\\pi}_X(c) \\delta_c \\qquad \\text{with the normalizing constant } \\hat Z = \\sum_{c \\in \\#_X} \\hat{\\pi}_X(c).\\]\n\\end{defn}\n\nFor our purposes, a Gaussian kernel was assumed.  This choice was fairly arbitrary, as time did not permit an in-depth review of the literature.  The main justification for the choice was that the kernel has unbounded support.  The bandwidth then determines the standard deviation of the Gaussian kernel.  For small bandwidths, the estimator will suffer from overfitting, and will result in a jagged estimate with high variance.  For high bandwidths, the estimator oversmooths, and hence results in a highly biased, underfit estimate \\cite{kroese2013handbook}.  For dimension one, Silverman's rule or the Improved Sheather-Jones algorithm can be used for bandwidth selection.  The former assumes the true distribution is Gaussian, and is best suited for unimodal distributions.  Given large numbers of data points, the latter algorithm can deal with a wider range of multimodal distributions \\cite{botev2010kernel}, \\cite{wand1994kernel}.  These algorithms do not extend to higher dimensions, however, and so, like the choice of kernel, bandwidth also became relatively arbitrary.\n\nWithout theoretical justification for our choices, we did not use KDE for the results in this report.  We also note that, although they solve some of the issues of histograms in dimensions higher than one, kernel density estimates are similarly affected by the curse of dimensionality, and are impractical in dimensions much greater than those recommended for histograms.  \n\n\\noindent \n\n\\subsection{Implemented Metrics}\n\n\\subsubsection{Visualizations}\nIn special cases when the dimension is low, we have implemented an option to visualize the samples either as a histogram plot (in 1D) or as a scatter or trace plot (in 2D or 3D).\n\n\n\\begin{figure}[H]\n\\centering\n  \\begin{minipage}[b]{0.3\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/histo_example.png}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.3\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/scatter_example.png}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.3\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/trace_example.png}\n  \\end{minipage}\n   \\caption{Example of a histogram plot, scatter plot and a trace plot.}\n\\end{figure}\n\n\n\n\\subsubsection{First/Second Moments}\nThe first and second moments are implemented in the standard way. 
By default, the result is computed \\textit{in the first coordinate only\\footnote{With option to use $\\frac 1 N\\sum_{i=1}^N \\norm{x_i}$ and $\\frac 1 N\\sum_{i=1}^N \\norm{x_i}^2$ instead.}}.\n\n$$ \n    \\text{first moment} = \\frac 1 N\\sum_{i=1}^N x_i(1) \\qquad \\text{ and } \\qquad \\text{second moment} = \\frac 1 N\\sum_{i=1}^N x_i(1)^2\n$$\n\n\n\\subsubsection{Total Variation}\nThe total variation, being the first of the implemented histogram measures, is calculated as\n\n\\[\\text{total variation} = \\norm{H_X - \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c}_{\\text{TV}} = \\frac 1 2\\sum_{c \\in \\#_X} \\left|\\frac{h(c)}{N} - \\frac{\\pi(c)}{Z}\\right| .\\]\n\n\\subsubsection{KL Divergence}\nThe next histogram-based measure is KL divergence.\n\n\\[\\text{KL divergence} = D_{\\text{KL}} \\left( \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c \\,\\bigg|\\bigg|\\, H_X \\right) = \\sum_{c \\in \\#_X} \\frac{\\pi(c)}{Z} \\log\\left(\\frac{\\pi(c) N}{h(c) Z} \\right) .\\]\n\n\\subsubsection{Sliced Wasserstein}\n\nThe Sliced Wasserstein distance, being defined via a multi-dimensional integral, cannot be computed exactly. Therefore, we resort to a simple Monte Carlo scheme where $L$ samples $\\{\\theta_i\\}$ are drawn uniformly from the $(d-1)$-dimensional sphere $\\mathbb S^{d-1}$.\n\n$$ \nSW_p(P, Q) \\approx \\left( \\frac 1 L \\sum_{i=1}^L W_p^p\\left(\\mathcal{RI}_P(\\cdot, \\theta_i), \\mathcal{RI}_Q(\\cdot, \\theta_i) \\right) \\right)^{\\frac 1 p}\n$$\n\nThe first implemented metric for the Sliced Wasserstein distance, which is histogram-based, further approximates the above Monte Carlo scheme for $SW_1$ as\n\n\\[\\text{SW}_{histogram} = \\begin{cases}\nW_1 \\left( H_X,\\  \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c \\right) & \\text{if dimension } = 1 \\\\\n\\sum_{i = 1}^L W_1 \\left( \\frac 1 N \\sum_{c \\in \\#_X} h(c) \\delta_{\\left<c, \\theta_i \\right>},\\  \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_{\\left<c, \\theta_i \\right>}  \\right) & \\text{ otherwise, }\n\\end{cases}\\]\n\nwhere $\\left< \\cdot \\right>$ is the dot product, $\\theta_i = \\frac{\\theta'_i}{\\norm{\\theta'_i}}$ is a point on the $(d-1)$-dimensional sphere with $\\theta'_i \\sim \\mathcal N(0, I_D)$, and $W_1$ is computed explicitly. No computationally efficient method was found for $SW_2$\\footnote{It is possible to calculate $SW_2$ using \\cite{flamary2017pot}, which solves the optimal transport problem using linear programming techniques. This is many orders of magnitude slower than $SW_1$.}. \\\\\\\\\n\nThe second, \\textbf{heuristic-based}, approximation of the Sliced Wasserstein distance is calculated as follows:\n\n\\[\\text{SW}_{no\\ histogram} = \\begin{cases}\nW_1 \\left( \\frac 1 N \\sum_{x \\in X} \\delta_x,\\  \\frac 1 {Z'} \\sum_{x \\in X} \\pi(x) \\delta_x \\right) & \\text{if dimension } = 1 \\\\\n\\sum_{i = 1}^L W_1 \\left(  \\frac 1 N \\sum_{x \\in X} \\delta_{\\left<x, \\theta_i \\right>},\\ \\frac 1 {Z'} \\sum_{x \\in X} \\pi(x) \\delta_{\\left<x, \\theta_i \\right>}  \\right) & \\text{ otherwise, }\n\\end{cases}\\]\nwhere $Z' = \\sum_{x \\in X} \\pi(x) $ is a normalizing constant. Intuitively, this measure of error describes whether the spatial distribution of samples is proportional to the true distribution, \\textit{at the same points}.\n\n\n\\subsection{KDE-Based Metrics}\n\nAll three of the above histogram-based metrics are available in their KDE version as well. 
\n\n\\[\\text{total variation}^{\\text{KDE}} = \\norm{\\text{KDE}_X - \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c}_{\\text{TV}} \\]\n\n\\[\\text{KL divergence}^{\\text{KDE}} = D_{\\text{KL}} \\left(\\text{KDE}_X \\ \\bigg|\\bigg|\\  \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c\\right) \\]\n\n\\[\\text{SW}_{histogram}^{\\text{KDE}} = \\begin{cases}\nW_1 \\left( \\text{KDE}_X,\\  \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_c \\right) & \\text{if dimension } = 1 \\\\\n\\sum_{i = 1}^L W_1 \\left( \\frac 1 {\\hat Z} \\sum_{c \\in \\#_X} \\hat{\\pi}_X(c) \\delta_{\\left<c, \\theta_i \\right>},\\  \\frac 1 Z \\sum_{c \\in \\#_X} \\pi(c) \\delta_{\\left<c, \\theta_i \\right>}  \\right) & \\text{ otherwise. }\n\\end{cases}\\]", "meta": {"hexsha": "2fc3a01cf1657d9683a8ee34c70831a38d824b3b", "size": 12743, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "WriteUp/Implementation.tex", "max_stars_repo_name": "Tom271/LangevinMC", "max_stars_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2019-02-07T12:51:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T13:35:13.000Z", "max_issues_repo_path": "WriteUp/Implementation.tex", "max_issues_repo_name": "swyoon/LangevinMC", "max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "WriteUp/Implementation.tex", "max_forks_repo_name": "swyoon/LangevinMC", "max_forks_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-19T17:44:19.000Z", "avg_line_length": 68.8810810811, "max_line_length": 1080, "alphanum_fraction": 0.7166287373, "num_tokens": 3862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7401743677704878, "lm_q1q2_score": 0.5944886639026878}}
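As a concrete companion to the definitions in the record above, here is a minimal Python sketch of the Monte Carlo approximation of the sliced Wasserstein distance between two empirical sample sets: draw $L$ random directions on the unit sphere, project both sets, and combine exact one-dimensional $W_1$ distances. It averages over directions, as in the generic scheme; the histogram-based and $\pi$-weighted variants described above are not reproduced here:

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_w1(X, Y, L=50, seed=0):
    """Monte Carlo estimate of SW_1 between two empirical sample sets."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(L):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)      # uniform direction on S^{d-1}
        # exact 1D Wasserstein-1 distance between the projected samples
        total += wasserstein_distance(X @ theta, Y @ theta)
    return total / L

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(2000, 3))
Y = rng.normal(0.5, 1.0, size=(2000, 3))
print(sliced_w1(X, Y))
\end{lstlisting}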
{"text": "\n    \\documentclass{article}\n    \\usepackage{amsfonts}\n    \\usepackage{amsmath,multicol,eso-pic}\n    \\begin{document}\n    \\title{Example Worksheet 1} \n \\date{\\vspace{-5ex}} \n \\maketitle\n\n        \\section{Linear equations}\n        Solve the following equations for the specified variable.\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item Solve for $E$ : $$- 15 E + 23 = E F + T$$\n\\item Solve for $q$ : $$E - 12 q = T q + 15$$\n\\item Solve for $v$ : $$f v - 15 = 18 v + y$$\n\\item Solve for $x$ : $$H x + 16 = - 5 x - 5$$\n\\item Solve for $n$ : $$c - 11 n = B n + 12$$\n\\item Solve for $b$ : $$11 b + g = A + T b$$\n\\item Solve for $X$ : $$- 18 X - 13 = - 5 X + a$$\n\\item Solve for $b$ : $$T b + h = b z - 2$$\n\\item Solve for $d$ : $$10 d - 3 = K d + j$$\n\\item Solve for $q$ : $$g + 4 q = 17 q + r$$\n\\item Solve for $d$ : $$d r + 21 = A + S d$$\n\\item Solve for $u$ : $$- 4 u - 17 = 24 u - 2$$\n\\item Solve for $Y$ : $$E - Y = - 7 Y - 17$$\n\\item Solve for $j$ : $$j t + 5 = - 13 j - 9$$\n\\item Solve for $W$ : $$- 26 W + c = C + W g$$\n\\item Solve for $r$ : $$- 19 r + t = H r - 2$$\n\\item Solve for $N$ : $$15 N + 4 = - 16 N - 14$$\n\\item Solve for $v$ : $$24 v + 5 = - 13 v - 6$$\n\\item Solve for $d$ : $$15 d - 4 = 23 d + 5$$\n\\item Solve for $x$ : $$N x + S = W x - 25$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n        \\section{Quadratic equations}\n        Solve the following quadratic equations.\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item $$- 8 y^{2} + 3 = y - 21$$\n\\item $$- 20 y^{2} - 20 y = 23$$\n\\item $$- 12 x^{2} = 14 x^{2} + 2 x + 18$$\n\\item $$y^{2} + 28 y + 132 = 0$$\n\\item $$x^{2} - 39 x + 368 = 0$$\n\\item $$x^{2} - 17 x + 42 = 0$$\n\\item $$x^{2} + 6 x - 520 = 0$$\n\\item $$y^{2} + 12 y - 364 = 0$$\n\\item $$y^{2} - 32 y + 231 = 0$$\n\\item $$17 y^{2} - 24 y = 0$$\n\\item $$y^{2} + 8 y + 12 = 0$$\n\\item $$y^{2} - 8 y - 308 = 0$$\n\\item $$15 x^{2} + 8 x - 21 = 9 x^{2} + 14$$\n\\item $$- 25 y^{2} - 8 y - 2 = 18 y^{2} - 12 y - 8$$\n\\item $$x^{2} - 18 x + 77 = 0$$\n\\item $$x^{2} + 6 x - 247 = 0$$\n\\item $$y^{2} - 27 y + 92 = 0$$\n\\item $$- 19 y^{2} = - 5 y^{2} + 2 y + 20$$\n\\item $$12 x^{2} - 4 x = - 19 x$$\n\\item $$x^{2} + 18 x - 208 = 0$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n        \\section{Differentiation}\n        Compute each derivative\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item $$\\frac{d}{d x}\\left(\\frac{e^{x} + \\log{\\left (x \\right )}}{\\tan{\\left (x \\right )}}\\right)$$\n\\item $$\\frac{d}{d z}\\left(\\left(\\sqrt{z} + \\sin{\\left (z \\right )}\\right) e^{- z}\\right)$$\n\\item $$\\frac{d}{d y}\\left(\\frac{- 5 y^{2} + y - 21}{\\tan{\\left (y \\right )}}\\right)$$\n\\item $$\\frac{d}{d y}\\left(\\left(- 16 y^{3} - 19 y + \\sin{\\left (y \\right )} + 23\\right) e^{- y}\\right)$$\n\\item $$\\frac{d}{d z}\\left(\\frac{\\sqrt{z} - 10 z^{3} + 15 z^{2} + z}{- 23 z^{2} + 11 z + 12}\\right)$$\n\\item $$\\frac{d}{d x}\\left(- \\frac{1}{11 x^{2}} \\left(\\log{\\left (x \\right )} + \\cos{\\left (x \\right )}\\right)\\right)$$\n\\item $$\\frac{d}{d y}\\left(\\frac{y + e^{y}}{17 y^{3} - 24 y - 10}\\right)$$\n\\item $$\\frac{d}{d z}\\left(\\left(z + \\tan{\\left (z \\right )}\\right) e^{- z}\\right)$$\n\\item $$\\frac{d}{d y}\\left(\\frac{\\sin{\\left (y \\right )} + \\tan{\\left (y \\right )}}{- 8 y + 5}\\right)$$\n\\item $$\\frac{d}{d y}\\left(\\frac{6 y^{3} + 36 y^{2}}{\\sin{\\left (y 
\\right )}}\\right)$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n        \\section{Compute the integral}\n        Compute the integral of the polynomials.\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item $$\\int \\left(-7\\right)\\, dz$$\n\\item $$\\int 18 y\\, dy$$\n\\item $$\\int \\left(- 13 z - 22\\right)\\, dz$$\n\\item $$\\int 2\\, dz$$\n\\item $$\\int \\left(- 17 y^{3} - 20 y^{2} - 26\\right)\\, dy$$\n\\item $$\\int \\left(25 y - 1\\right)\\, dy$$\n\\item $$\\int \\left(- 23 y\\right)\\, dy$$\n\\item $$\\int \\left(16 z^{3} - 12 z\\right)\\, dz$$\n\\item $$\\int \\left(8 z + 6\\right)\\, dz$$\n\\item $$\\int 5\\, dz$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n        \\section{Compute the integral}\n        Compute the integral of the powers.\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item $$\\int \\left(8 y^{\\frac{5}{3}} - \\frac{23 y^{\\frac{4}{3}}}{15}\\right)\\, dy$$\n\\item $$\\int \\left(- \\frac{7}{9 z^{\\frac{3}{2}}}\\right)\\, dz$$\n\\item $$\\int \\left(\\frac{\\sqrt{z}}{6} + \\frac{7}{\\sqrt[4]{z}} - \\frac{7}{2 z^{\\frac{5}{4}}}\\right)\\, dz$$\n\\item $$\\int \\frac{13}{9 y^{\\frac{3}{2}}}\\, dy$$\n\\item $$\\int \\frac{7}{z^{\\frac{5}{3}}}\\, dz$$\n\\item $$\\int \\frac{51 z^{\\frac{3}{5}}}{20}\\, dz$$\n\\item $$\\int \\frac{5 y^{\\frac{5}{2}}}{4}\\, dy$$\n\\item $$\\int \\left(- \\frac{5 z^{\\frac{5}{4}}}{3}\\right)\\, dz$$\n\\item $$\\int \\left(- 6 z^{\\frac{2}{5}} - \\frac{7}{\\sqrt[5]{z}}\\right)\\, dz$$\n\\item $$\\int \\left(- \\frac{z^{\\frac{5}{2}}}{4} + \\frac{5 z^{\\frac{3}{2}}}{4}\\right)\\, dz$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n        \\section{Compute the integral}\n        Compute the integral of the powers.\n        \\begin{multicols}{1}\n        \\begin{enumerate}\n        \\item $$\\int \\frac{1}{z} \\tan{\\left (\\log{\\left (z \\right )} \\right )}\\, dz$$\n\\item $$\\int \\left(- 24 z - 11\\right) \\sqrt{- 12 z^{2} - 11 z - 10}\\, dz$$\n\\item $$\\int \\left(- \\frac{14}{z} \\log{\\left (z \\right )}\\right)\\, dz$$\n\\item $$\\int \\left(19 e^{3 y} - 2\\right) e^{y}\\, dy$$\n\\item $$\\int \\frac{1}{y} \\tan{\\left (\\log{\\left (y \\right )} \\right )}\\, dy$$\n\\item $$\\int \\left(- 38 z + 22\\right) \\left(- 247 z^{2} + 286 z + 6 \\left(- 19 z^{2} + 22 z - 18\\right)^{3} - 234\\right)\\, dz$$\n\\item $$\\int e^{2 y}\\, dy$$\n\\item $$\\int \\sqrt{e^{y}} e^{y}\\, dy$$\n\\item $$\\int \\frac{\\sin{\\left (\\sqrt{y} \\right )}}{2 \\sqrt{y}}\\, dy$$\n\\item $$\\int 20 \\sin^{2}{\\left (z \\right )} \\cos{\\left (z \\right )}\\, dz$$\n        \\end{enumerate}\n        \\end{multicols}\n        \n\n    \\end{document}\n    ", "meta": {"hexsha": "c9492fc617e3816cb67daa0a4da824a2bd7e79d5", "size": 5789, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "worksheet.tex", "max_stars_repo_name": "luisromero87/mathexamgen", "max_stars_repo_head_hexsha": "9b17d689125851aefd63ded241106c9e77b7d6a5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "worksheet.tex", "max_issues_repo_name": "luisromero87/mathexamgen", "max_issues_repo_head_hexsha": "9b17d689125851aefd63ded241106c9e77b7d6a5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"worksheet.tex", "max_forks_repo_name": "luisromero87/mathexamgen", "max_forks_repo_head_hexsha": "9b17d689125851aefd63ded241106c9e77b7d6a5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6474820144, "max_line_length": 127, "alphanum_fraction": 0.486612541, "num_tokens": 2524, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569016, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5944886569282922}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 2.3 Covariant derivative of $v^{a}{}_{b}$}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices(position=independent).\n\n   \\nabla{#}::Derivative.\n   \\partial{#}::PartialDerivative.\n\n   # template for covariant derivative of a vector\n\n   derivU := \\nabla_{a}{A?^{b}} -> \\partial_{a}{A?^{b}} + \\Gamma^{b}_{c a} A?^{c}.\n   derivD := \\nabla_{a}{A?_{b}} -> \\partial_{a}{A?_{b}} - \\Gamma^{c}_{b a} A?_{c}.\n\n   vab := v^{a}_{b} -> A^{a} B_{b}.\n   iab := A^{a} B_{b} -> v^{a}_{b}.\n\n   pab := \\partial_{a}{A^{b}} B_{c} -> \\partial_{a}{A^{b} B_{c}} - A^{b} \\partial_{a}{B_{c}}.\n\n   # create an object\n\n   Dvab := \\nabla_{a}{v^{b}_{c}}.   # cdb (ex-0203.101,Dvab)\n\n   # apply the rule, then simplify\n\n   substitute     (Dvab,vab)        # cdb (ex-0203.102,Dvab)\n   product_rule   (Dvab)            # cdb (ex-0203.103,Dvab)\n   substitute     (Dvab,derivD)     # cdb (ex-0203.104,Dvab)\n   substitute     (Dvab,derivU)     # cdb (ex-0203.105,Dvab)\n   distribute     (Dvab)            # cdb (ex-0203.106,Dvab)\n   substitute     (Dvab,pab)        # cdb (ex-0203.107,Dvab)\n   canonicalise   (Dvab)            # cdb (ex-0203.108,Dvab)\n   substitute     (Dvab,iab)        # cdb (ex-0203.109,Dvab)\n   sort_product   (Dvab)            # cdb (ex-0203.110,Dvab)\n\\end{cadabra}\n\n\\begin{align}\n   \\cdb{ex-0203.101} &= \\Cdb{ex-0203.102}\\\\\n                     &= \\Cdb{ex-0203.103}\\\\\n                     &= \\Cdb{ex-0203.104}\\\\\n                     &= \\Cdb{ex-0203.105}\\\\\n                     &= \\Cdb{ex-0203.106}\\\\\n                     &= \\Cdb{ex-0203.107}\\\\\n                     &= \\Cdb{ex-0203.108}\\\\\n                     &= \\Cdb{ex-0203.109}\\\\\n                     &= \\Cdb{ex-0203.110}\n\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "9396af059f976c122c957de6582b2a20913f3fde", "size": 1940, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0203.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0203.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0203.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 34.0350877193, "max_line_length": 94, "alphanum_fraction": 0.4850515464, "num_tokens": 729, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7401743505760728, "lm_q1q2_score": 0.5944886500925841}}
{"text": "\\section{Arith}\\label{Arith}\n\nThe {\\tt Arith} library deals with various arithmetical notions and \ntheir properties.\n\n\\subsection*{Standard {\\tt Arith} library}\n\nThe following files are automatically loaded by {\\tt Require Arith}.\n\n\\begin{itemize}\n\n\\item {\\tt Le.v} states and proves properties of the large order {\\tt le}.\n\n\\item {\\tt Lt.v} states and proves properties of the strict order {\\tt \nlt} (especially, the relationship with {\\tt le}).\n\n\\item {\\tt Plus.v} states and proves properties on the addition.\n\n\\item {\\tt Gt.v} states and proves properties on the strict order {\\tt gt}.\n\n\\item {\\tt Minus.v} defines the difference on \n{\\tt nat} and proves properties of it. On {\\tt nat}, {\\tt (minus n p)} is \n{\\tt O} if {\\tt n} $<$ {\\tt p}.\n\n\\item {\\tt Mult.v} states and proves properties on the multiplication.\n\n\\item {\\tt Between.v} defines modalities on {\\tt nat} and proves properties \nof them.\n\n\\end{itemize}\n\n\\subsection*{Additional {\\tt Arith} library}\n\n\\begin{itemize}\n\n\\item {\\tt Compare.v}, {\\tt Compare\\_dec.v} and {\\tt Peano\\_dec.v} state\nand prove various decidability results on {\\tt nat}.\n\n\\item {\\tt Wf\\_nat.v} states and proves various induction and recursion \nprinciples on {\\tt nat}. Especially, recursion for objects measurable by \na natural number and recursion on {\\tt nat * nat} are provided.\n\n\\item {\\tt Min.v} defines the minimum of two natural numbers and proves \nproperties of it. \n\n\\item {\\tt Eqnat.v} defines a specific equality on {\\tt nat} and shows \nthe equivalence with Leibniz' equality.\n\n\\item {\\tt Euclid.v} proves that the euclidean\ndivision specification is realisable. Conversely, {\\tt Div.v} exhibits\ntwo different algorithms and semi-automatically reconstruct the proof of\ntheir correctness. These files emphasize the extraction of program vs\nreconstruction of proofs paradigm.\n\n\\end{itemize}\n", "meta": {"hexsha": "655de34ca84b61f084a6b44acf947193a27c074a", "size": 1839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Resources/coq-8.3pl2/theories/Arith/intro.tex", "max_stars_repo_name": "mzp/coq-for-ipad", "max_stars_repo_head_hexsha": "4fb3711723e2581a170ffd734e936f210086396e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-01-27T00:11:26.000Z", "max_stars_repo_stars_event_max_datetime": "2015-01-27T00:11:26.000Z", "max_issues_repo_path": "Resources/coq-8.3pl2/theories/Arith/intro.tex", "max_issues_repo_name": "mzp/coq-for-ipad", "max_issues_repo_head_hexsha": "4fb3711723e2581a170ffd734e936f210086396e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Resources/coq-8.3pl2/theories/Arith/intro.tex", "max_forks_repo_name": "mzp/coq-for-ipad", "max_forks_repo_head_hexsha": "4fb3711723e2581a170ffd734e936f210086396e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.8392857143, "max_line_length": 76, "alphanum_fraction": 0.7373572594, "num_tokens": 468, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.8031737869342623, "lm_q1q2_score": 0.5944886453505285}}
{"text": "\\chapter{Mutli-armed bandits}", "meta": {"hexsha": "5e8d816a2bdc7ff090dd4dd555e0d444752b1ceb", "size": 29, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RL/notes/TeX_files/chapter02.tex", "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RL/notes/TeX_files/chapter02.tex", "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RL/notes/TeX_files/chapter02.tex", "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.0, "max_line_length": 29, "alphanum_fraction": 0.8275862069, "num_tokens": 10, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737869342624, "lm_q2_score": 0.7401743505760728, "lm_q1q2_score": 0.5944886361437928}}
{"text": "%---------------------------Jacobian-----------------------------\n\\section{Jacobian\\label{s:tet-jacobian}}\n\nThis metric is defined as follows:\n\\begin{displaymath}\nq = \\left( \\vec L_2 \\times \\vec L_0 \\right) \\cdot \\vec L_3  \n\\end{displaymath}\n\n\\tetmetrictable{Jacobian}%\n{$L^3$}%                  Dimension\n{$[0,DBL\\_MAX]$}%         Acceptable range\n{$[0,DBL\\_MAX]$}%         Normal range\n{$[-DBL\\_MAX,DBL\\_MAX]$}% Full range\n{$\\frac{\\sqrt{2}}{2}$}%   Equilateral tet\n{\\cite{knu:03}}%          Citation\n{v\\_tet\\_jacobian}%                            Verdict function name\n\n", "meta": {"hexsha": "1e4716c112c97511babc6191c917d5ca90578555", "size": 572, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TetJacobian.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TetJacobian.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TetJacobian.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 31.7777777778, "max_line_length": 68, "alphanum_fraction": 0.5367132867, "num_tokens": 171, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8499711756575749, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.5943460970239951}}
{"text": "\\documentclass[11pt,letterpaper]{article}\n\n\\usepackage[pdftex]{graphicx}\n\\usepackage{natbib}\n\\usepackage{fullpage}\n\\usepackage{lineno}\n\\usepackage{multirow}\n\\usepackage{wrapfig}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{sidecap}\n\\usepackage{hyperref}\n\n\\begin{document}\n\n\\setlength{\\parindent}{0mm}\n\\setlength{\\parskip}{0.4cm}\n\n\\bibliographystyle{apalike}\n\n%\\modulolinenumbers[5]\n%\\linenumbers\n\n\\title{ARCS scenario 1 data inversions}\n\n\\maketitle\n\n\\tableofcontents\n\n\\pagebreak\n\n\nThis document describes the mathematical formulation of the ARCS scenario 1 data inversion problem.  \n\n\n\\section{Summary}\n\nMeasurements being made for ARCS scenario 1 crossings will result in production of maps of magnetic field fluctuations $\\delta \\mathbf{B}$, and flows $\\mathbf{v}$ as described in the various instrument and data processing section.  These will be converted, as part of standard data processing, into parallel current density and electric field:\n\\begin{equation}\n\\mathbf{J} = \\nabla \\times \\left( \\frac{\\delta \\mathbf{B}}{\\mu_0} \\right); \\qquad \\mathbf{E} = -\\mathbf{v} \\times \\mathbf{B}\n\\end{equation}\nBecause this is a swarm of satellites we will have \\emph{datamaps} (2D images) of these parameters; additionally we note that the measurements may be directly used to also produce datamaps of Poynting flux:\n\\begin{equation}\n\\mathbf{S} = \\mathbf{E} \\times \\frac{\\delta \\mathbf{B}}{\\mu_0}\n\\end{equation}\nIn the case of scenario 1 data,  key unknown physical parameters are datamaps of ionospheric Pedersen and Hall conductances - without these we do not have a full picture of electrodynamics and energy flow in the auroral system and cannot, then, fully unlock the scientific potential of the large amount of scenario 1 data.  \n\nScenario 1 current density and Poynting flux Datamaps are related to each other through electromagnetic conservation laws:  current continuity and the Poynting theorem.  Under assumptions of steady state conditions and equipotential field lines (appropriate at scales relevant to our science questions), these can be reduced to:\n\\begin{eqnarray}\nJ_\\parallel &=& \\Sigma_P \\nabla \\cdot \\mathbf{E}_\\perp + \\nabla \\Sigma_P \\cdot \\mathbf{E}_\\perp - \\nabla \\Sigma_H \\cdot \\left( \\mathbf{E}_\\perp \\times \\hat{\\mathbf{b}} \\right) \\\\\nS_{\\parallel}  &=& - \\Sigma_P E^2\n\\end{eqnarray}\nwith unknown Pedersen and Hall conductances.  These conductances may, in principle, be solved by simply inverting this system of equations.  In practice there are many different methods that could work to perform this inversion.  A first glance the system appears even-determined on account of the fact that we have two unknown fields (conductances) and two input datamaps (conserved variables current density and Poynting flux).  It is worth noting however, that the component of the Hall conductance gradient along the electric field direction is explicitly not part of this equation and so is in the null space of the problem.  Thus in addition to conservation laws additional prior information is needed.  For ARCS data processing this will come in two forms (both of which may not always be necessary depending on noise conditions):  (1) regularization that places constrains on solution norm or smoothness (i.e. 
\n\nScenario 1 current density and Poynting flux datamaps are related to each other through electromagnetic conservation laws:  current continuity and the Poynting theorem.  Under assumptions of steady state conditions and equipotential field lines (appropriate at scales relevant to our science questions), these can be reduced to:\n\\begin{eqnarray}\nJ_\\parallel &=& -\\Sigma_P \\nabla \\cdot \\mathbf{E}_\\perp - \\nabla \\Sigma_P \\cdot \\mathbf{E}_\\perp + \\nabla \\Sigma_H \\cdot \\left( \\mathbf{E}_\\perp \\times \\hat{\\mathbf{b}} \\right) \\\\\nS_{\\parallel}  &=& - \\Sigma_P E^2\n\\end{eqnarray}\nwith unknown Pedersen and Hall conductances.  These conductances may, in principle, be solved for by simply inverting this system of equations.  In practice there are many different methods that could work to perform this inversion.  At first glance the system appears even-determined on account of the fact that we have two unknown fields (conductances) and two input datamaps (conserved variables current density and Poynting flux).  It is worth noting, however, that the component of the Hall conductance gradient along the electric field direction is explicitly not part of this equation and so is in the null space of the problem.  Thus, in addition to conservation laws, additional prior information is needed.  For ARCS data processing this will come in two forms (both of which may not always be necessary depending on noise conditions):  (1) regularization that places constraints on solution norm or smoothness (i.e. Tikhonov regularization), and/or (2) inclusion of model information that further correlates Pedersen conductance (which is well-constrained by the laws above) to Hall conductance (which requires further constraints).   This could come in the form of model-based inversions (e.g. using GEMINI-GLOW) or simply parameterizations based on, e.g., the Robinson formulas or updated versions of these formulas.  \n\n\n\n\\section{Physical constraints:  simplification of general conservation laws}\n\nConservation of charge in an electromagnetic system is described by the current continuity equation:\n\\begin{equation}\n  \\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot \\mathbf{J} = 0\n\\end{equation}\nIn a steady state this reduces to:\n\\begin{equation}\n\\nabla \\cdot \\mathbf{J} = \\frac{\\partial J_\\parallel}{\\partial z} +  \\nabla_\\perp \\cdot \\mathbf{J}_\\perp = 0,\n\\end{equation}\nwhere the $z-$ direction represents altitude in a locally Cartesian coordinate system.  Integrating with respect to altitude:\n\\begin{equation}\n\\int \\frac{\\partial J_\\parallel}{\\partial z} dz +  \\int \\nabla_\\perp \\cdot \\mathbf{J}_\\perp dz = J_\\parallel(\\max(z)) - J_\\parallel(\\min(z)) + \\nabla_\\perp \\cdot \\left(  \\Sigma \\cdot \\mathbf{E}_\\perp \\right) = 0\n\\end{equation}\nThis can be expanded out and solved for the parallel current at the top of the domain, if the bottom current is assumed to be zero:\n\\begin{equation}\nJ_\\parallel = - \\nabla_\\perp \\cdot \\left(  \\Sigma \\cdot \\mathbf{E}_\\perp \\right) =  - \\Sigma_P \\nabla \\cdot \\mathbf{E}_\\perp - \\nabla \\Sigma_P \\cdot \\mathbf{E}_\\perp + \\nabla \\Sigma_H \\cdot \\left( \\mathbf{E}_\\perp \\times \\hat{\\mathbf{b}} \\right)\n\\end{equation}\nThe current continuity equation to be used in the ARCS analysis is then:\n\\begin{equation}\n\\boxed{\nJ_\\parallel = -\\Sigma_P \\nabla \\cdot \\mathbf{E}_\\perp - \\nabla \\Sigma_P \\cdot \\mathbf{E}_\\perp + \\nabla \\Sigma_H \\cdot \\left( \\mathbf{E}_\\perp \\times \\hat{\\mathbf{b}} \\right)\n} \\label{eqn:continuity}\n\\end{equation}\nNote that this equation effectively has two unknown fields $\\Sigma_P,\\Sigma_H$, but represents only one physical constraint; hence additional information is needed.  This is provided by conservation of electromagnetic energy, viz. the Poynting theorem:\n\\begin{equation}\n\\frac{\\partial w}{\\partial t} + \\nabla \\cdot \\mathbf{S} = - \\mathbf{J} \\cdot \\mathbf{E}\n\\end{equation}\nSimilar to the assumptions made to produce Equation \\ref{eqn:continuity} we neglect time-dependent terms and proceed to integrate the equation along a geomagnetic field line:\n\\begin{equation}\nS_{\\parallel,top} - S_{\\parallel,bottom} + \\nabla_\\perp \\cdot \\mathbf{\\mathcal{S}}_\\perp = - \\Sigma_P E^2\n\\end{equation}\nwhere $\\mathbf{\\mathcal{S}}_\\perp$ is the column integrated perpendicular Poynting flux.  If we further assume that there is no Poynting flux through the bottom of the ionosphere or the lateral sides of our volume of interest (i.e. net incoming D.C. Poynting flux is dissipated), we obtain a simple relation between parallel Poynting flux and Pedersen conductance:\n\\begin{equation}\n\\boxed{\nS_{\\parallel}  = - \\Sigma_P E^2\n} \\label{eqn:poynting}\n\\end{equation}
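\nAs an illustrative check of Equation \\ref{eqn:poynting} (again with round numbers for illustration only): a dissipated Poynting flux of magnitude $|S_{\\parallel}| = 10 \\; \\mathrm{mW/m^2}$ coincident with $|\\mathbf{E}_\\perp| = 50 \\; \\mathrm{mV/m}$ implies\n\\begin{equation}\n\\Sigma_P = \\frac{|S_{\\parallel}|}{E^2} = \\frac{10^{-2}}{\\left( 5 \\times 10^{-2} \\right)^2} = 4 \\; \\mathrm{S},\n\\end{equation}\na physically plausible auroral value.  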
\n\n\\section{Estimating conductances}\n\nSeveral different procedures can be developed for converting the maps of electric field and Poynting flux into conductances. Two approaches are discussed here.\n\nEquation \\ref{eqn:poynting} fully specifies the Pedersen conductance given quantities that are measurable by scenario 1 experiments, so the most obvious path would be to then provide the Pedersen conductance to Equation \\ref{eqn:continuity}.  Superficially, the equation allows solution for the gradient of the Hall conductance and in principle one would need to compute a line integral of this quantity to solve for Hall conductance:\n\\begin{equation}\n\\Sigma_H(\\mathbf{r}_2)-\\Sigma_H(\\mathbf{r}_1) = \\int_{\\mathbf{r}_1}^{\\mathbf{r}_2} \\nabla \\Sigma_H \\cdot d \\mathbf{r}\n\\end{equation}\nMoreover, one would also need the value of the Hall conductance at some reference point $\\mathbf{r}_1$ to complete the solution for Hall conductance.  While it may be possible to choose a point with low density and assume zero Hall conductance at that reference point, there is a more serious issue with this approach and with the set of physical constraints being used, more generally.  Equation \\ref{eqn:continuity} only provides constraints on the derivative of the Hall conductance \\emph{in the direction of the $\\mathbf{E} \\times \\mathbf{B}$ drift}.  Thus, there is information about the Hall conductance (namely the variation in the direction of the electric field) that is completely unconstrained by current continuity.  As a result, the Hall conductance lies partly in the null space of the problem defined by Equations \\ref{eqn:continuity} and \\ref{eqn:poynting} and some additional assumptions/information/regularization will be required to solve the inverse problem.  
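\n\nTo make the null-space statement explicit (the unit vectors $\\hat{\\mathbf{e}}$ and $\\hat{\\mathbf{s}}$ are introduced here purely for illustration), the Hall conductance gradient may be decomposed in a basis aligned with the electric field:\n\\begin{equation}\n\\nabla \\Sigma_H = \\left( \\nabla \\Sigma_H \\cdot \\hat{\\mathbf{e}} \\right) \\hat{\\mathbf{e}} + \\left( \\nabla \\Sigma_H \\cdot \\hat{\\mathbf{s}} \\right) \\hat{\\mathbf{s}}; \\qquad \\hat{\\mathbf{e}} \\equiv \\frac{\\mathbf{E}_\\perp}{E}, \\quad \\hat{\\mathbf{s}} \\equiv \\hat{\\mathbf{e}} \\times \\hat{\\mathbf{b}}\n\\end{equation}\nSince $\\mathbf{E}_\\perp \\times \\hat{\\mathbf{b}} = E \\, \\hat{\\mathbf{s}}$, only the $\\hat{\\mathbf{s}}$ component of $\\nabla \\Sigma_H$ enters Equation \\ref{eqn:continuity}; the $\\hat{\\mathbf{e}}$ component never appears in the constraint set and is therefore invisible to the data.  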
\n\nAnother approach to the inverse problem would be to view the conservation laws as constraints to be combined together with other prior information in the form of, e.g., smoothness constraints.  Here we rewrite the physical constraints in a matrix form to facilitate application of results from linear inverse theory.  Field quantities can be ``flattened'' into vectors using column major ordering and then operators can be represented through matrix operations.  The latter step can be understood as a decomposition of the derivative operations into finite difference matrices:\n\\begin{equation}\n\\underline{j} = - \\underline{\\underline{I}} ~ \\underline{p} \\left( \\nabla \\cdot \\mathbf{E}_\\perp \\right) - \\underline{\\underline{L}}_x \\underline{p} E_x - \\underline{\\underline{L}}_y \\underline{p} E_y + \\underline{\\underline{L}}_{E \\times B} \\underline{h} E_\\perp\n\\end{equation}\n\\begin{equation}\n\\underline{s} = - E^2 \\underline{\\underline{I}} ~ \\underline{p}\n\\end{equation}\nConcatenating the unknown conductances into a single vector we get:\n\\begin{equation}\n\\underline{x} \\equiv \\left[ \\begin{array}{c} \\underline{p} \\\\ \\underline{h} \\end{array} \\right]\n\\end{equation}\nThe left-hand sides of each conservation law (i.e. measurements) are similarly stacked:\n\\begin{equation}\n\\underline{b} \\equiv \\left[ \\begin{array}{c} \\underline{j} \\\\ \\underline{s} \\end{array} \\right]\n\\end{equation}\nFinally the right-hand side operations may be expressed in block diagonal form:\n\\begin{equation}\n\\underline{\\underline{A}} \\equiv \\left[ \\begin{array}{cc} -\\underline{\\underline{I}}  \\left( \\nabla \\cdot \\mathbf{E}_\\perp \\right) -  \\underline{\\underline{L}}_x  E_x - \\underline{\\underline{L}}_y E_y  & ~ \\underline{\\underline{L}}_{E \\times B} E_\\perp \\\\ -E^2 \\underline{\\underline{I}} & \\underline{\\underline{0}} \\end{array} \\right]\n\\end{equation}\nYielding our full set of constraints as:\n\\begin{equation}\n\\underline{\\underline{A}} ~ \\underline{x} = \\underline{b}\n\\end{equation}\nAs discussed previously this system will not be full-rank, but serves as a starting point for a suitable generalized inverse for this problem.  As a final note the full system has size $2 \\cdot N \\cdot M \\times 2 \\cdot N \\cdot M$; where $N,M$ are the $x,y$ sizes of the data maps provided by instrument teams.  \n\n\n\\section{Maximum likelihood estimators}\n\nThe maximum likelihood estimator, assuming Gaussian-distributed noise, is (note we drop the underline notation here for brevity):\n\\begin{equation}\n\\hat{x}_{ML} = \\left( A^T A  \\right)^{-1} A^T b\n\\end{equation}\nThe matrix to be inverted here is singular for reasons noted previously; we adopt a Tikhonov regularization scheme to mitigate this:\n\\begin{equation}\n\\hat{x} = \\left( A^T A  + \\lambda I \\right)^{-1} A^T b\n\\end{equation}\nwhere $\\lambda$ is a regularization parameter.  This approach regularizes the norm of the solution and coerces it to favor small norms.  One could also enforce any other conditions that can be expressed as a linear operation, yielding:\n\\begin{equation}\n\\hat{x} = \\left( A^T A  + \\lambda  \\Gamma^T \\Gamma \\right)^{-1} A^T b\n\\end{equation}\nwhere $\\Gamma$ is an operator describing smoothness (e.g. Laplacian) or variation (gradient).  We find that the Laplacian works well to keep the reconstructions as smooth as possible.  \n\nWe can add in an offset term, i.e. solve a problem of the form:\n\\begin{equation}\n\\hat{x} = \\min_x \\left\\{  || Ax -b ||^2 + \\lambda || \\Gamma \\left( x - x_0 \\right) ||^2 \\right\\}\n\\end{equation}\nwhich solves the least squares problem subject to a constraint on the smoothness of deviations from some expected value for the solution.  The solution is then given by:\n\\begin{equation}\n\\hat{x} = \\left( A^T A  + \\lambda  \\Gamma^T \\Gamma \\right)^{-1} \\left( A^T b + \\lambda \\Gamma^T \\Gamma x_0\\right)\n\\end{equation}\nIn the case of the problem of estimating the ionospheric conductances, the Hall conductance could be constrained to not vary too far from the Pedersen conductance.  \n\nLastly it may be advantageous to recast the current continuity equation in terms of the ratio of Hall to Pedersen conductance.  This retains the linearity of the problem only if the Pedersen conductance is known \\emph{a priori}:\n\\begin{equation}\n\\nabla \\left(  \\frac{\\Sigma_H}{\\Sigma_P}  \\right) \\cdot \\mathbf{E} \\times \\hat{\\mathbf{b}} +  \\left(  \\frac{\\Sigma_H}{\\Sigma_P}  \\right) \\frac{\\nabla \\Sigma_P}{\\Sigma_P}  \\cdot \\mathbf{E} \\times \\hat{\\mathbf{b}} = \\frac{J_\\parallel}{\\Sigma_P} + \\nabla_\\perp \\cdot \\mathbf{E}_\\perp + \\frac{\\nabla \\Sigma_P}{\\Sigma_P}  \\cdot  \\mathbf{E}_\\perp\n\\end{equation}\nThis problem can be expressed in matrix form, similar to the approaches described above, and also solved via regularized inverses.  Doing so in a linear fashion does require one to first solve for the Pedersen conductance using the Poynting theorem.  Such a formulation has the benefit that one can regularize deviations from a set conductance ratio.   If a ratio of 1 is used this is equivalent to assuming an average energy of 2.5 keV for the precipitating particles.  
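\n\nAs an illustrative sketch in the flattened notation (the diagonal matrix $\\underline{\\underline{D}}$ and right-hand side $\\underline{g}$ are shorthand introduced here, not quantities defined above), the recast constraint takes the form\n\\begin{equation}\n\\underline{\\underline{L}}_{E \\times B} \\, \\underline{r} \\, E_\\perp + \\underline{\\underline{D}} ~ \\underline{r} = \\underline{g}\n\\end{equation}\nwhere $\\underline{r}$ is the flattened ratio field, $\\underline{\\underline{D}}$ carries the $\\left( \\nabla \\Sigma_P / \\Sigma_P \\right) \\cdot \\mathbf{E} \\times \\hat{\\mathbf{b}}$ factor on its diagonal, and $\\underline{g}$ is the flattened right-hand side of the preceding equation; a Tikhonov-regularized solve of this system can then penalize deviations of $\\underline{r}$ from a reference ratio.  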
\n\n\n\n\\section{Error Covariance}\n\n\n\\section{Connections to precipitating electrons}\n\nConductances are ultimately driven by electron precipitation, here encapsulated in terms of total energy flux $Q$ and average energy $E_{av}$ - these precipitation parameters are what is needed to ultimately drive the GEMINI simulations.  \n\nOne of the simplest parameterizations of conductance is the Robinson formulas:\n\\begin{equation}\n\\Sigma_P = \\frac{40 E_{av}}{16+E_{av}^2} \\sqrt{Q}\n\\end{equation}\n\\begin{equation}\n\\frac{\\Sigma_H}{\\Sigma_P} = 0.45 E_{av}^{0.85}\n\\end{equation}\nUsing these to constrain conductances creates a physical correlation between them that does not exist using just the constraints from conservation laws.  \n\n\n\n\\end{document}", "meta": {"hexsha": "ea81f136ff4457d9ae2ab8503f74f5eb6a49e92e", "size": 13782, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/inverse_formulation.tex", "max_stars_repo_name": "mattzett/arcs_scen1", "max_stars_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/inverse_formulation.tex", "max_issues_repo_name": "mattzett/arcs_scen1", "max_issues_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/inverse_formulation.tex", "max_forks_repo_name": "mattzett/arcs_scen1", "max_forks_repo_head_hexsha": "c9cdb2a5f183551dd8317ae7ba361ffe5fb1d909", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.4972972973, "max_line_length": 1315, "alphanum_fraction": 0.7625888841, "num_tokens": 3825, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84997116805678, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.5943460917091058}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{December 8, 2014}\n\\maketitle\n\\section*{14b}\n$m\\mathbb{Z}\\cdot n\\mathbb{Z}=(mn)\\mathbb{Z}$\n\n$\\subseteq$\n\nlet $x\\in (n\\mathbb{Z})(m\\mathbb{Z})$ so $x=\\sum\\limits_{i=1}^k{a_ib_i}$ and $n|a_i$ and $m|b_i$ and so $nm|a_ib_i$ and so $nm|x$ and so $n\\mathbb{Z}m\\mathbb{Z}\\subseteq nm\\mathbb{Z}$\n\n$\\supseteq$\n\nlet $y\\in mn\\mathbb{Z}$ and so $y=mnz_0=(n\\cdot 1)(m\\cdot  z_0)\\in n\\mathbb{Z}m\\mathbb{Z}$\n\n\\section*{20}\ngaussian integers are ``complex'' integers\n\n$\\mathbb{Z}[i]/\\langle p\\rangle=\\{[a+bi]|a,b\\in\\mathbb{Z}\\}=\\{[a]+[b][i]\\}$\n\n$[a]+[b]i=[a']+[b']i\\to(a+bi)-(a'+bi)\\in\\langle p\\rangle=p(c+di)\\to a-a'=pc\\text{ and }b-b'=pd\\to a-a'\\in \\langle p\\rangle\\to [a]=[a']$ and similarly with b.\n\n\\section*{define}\n$R$ comm ring and $I$ ideal where $I\\ne R$ then $I$ prime ideal means that $ab\\in I\\to a\\in I$or $b\\in I$\n\n\\subsection*{example}\n$R=\\mathbb{Z}$, $p\\in \\mathbb{Z}$ then $p\\mathbb{Z}$ is a prime ideal because if $ab\\in I$ then wlog $p|ab\\to p|b\\to b\\in I$.\n\n\\subsection*{example}\nclaim\n$n\\mathbb{Z}$ prime ideal then $n$ is prime\n\nassume $n$ not prime and $n\\ne 0$. $n=\\alpha\\beta$ where $1<\\alpha<n, 1<\\beta<n, \\alpha,\\beta\\in \\mathbb{Z}$. then $\\alpha\\beta\\in n\\mathbb{Z}$ but $\\alpha\\not\\in n\\mathbb{Z}$ and $\\beta\\not\\in n\\mathbb{Z}$. notice that $n\\ne 0$ is key here.\n\n\\subsection*{example}\nclaim $\\langle0\\rangle$ is a prime ideal. $ab=0\\to a=0$ or $b=0$ because $R$ is an integral domain\n\nobservation: $R$ commutative ring theen $R$is an integral domain iff $\\langle0\\rangle$is a prime ideal.\n\n\\subsection*{$\\mathbb{Z}$}\n$n\\mathbb{Z}\\subseteq m\\mathbb{Z}\\Leftrightarrow m|n$\n\ntake$n=p$ prime nmber. $p\\mathbb{Z}\\subseteq m\\mathbb{Z}\\Leftrightarrow m|p\\Leftrightarrow m\\pm 1$ or $m=\\pm p$ and so $m\\mathbb{Z}=\\mathbb{Z}$ or $m\\mathbb{Z}=p\\mathbb{Z}$\n\n\n\\section*{definition}\nif $J$ is an ideal of $R$ and $J\\ne R$we say that $J$ is a maximal ideal of $R$ if for every ideal $I$ of $R$ $j\\subseteq I\\subseteq R\\to I=J$ or $I=R$.\n\n\\subsection*{$J=p\\mathbb{Z}$}\nnote that $\\langle0\\rangle$ is not a maximal ideal. \n\n\\section*{claim}\nevery maximal ideal is a prime ideal\n\n\\section*{proposition}\nlet $I$ be a proper ideal ofa commutative ring $R$ (proper means different from ring itself). then $I$ is maximum ideal iff $R/I$ is a field. also $I$ is a prime ideal iff $R/I$ is an integral domain. finally $I$ maximal implies $I$ is a prime ideal.\n\n\\subsection*{proof 1}\n$R/I$ field iff $R/I$ has only the two trivial ideals (can you prove this?)\n\nand this is true iff $I$ is maximal.\n\n\\subsection*{proof 2}\nassume $I$ is a prime ideal. then $[x][y]\\in R/I$ then $[xy]=[0]$  in $R/I$ and so $xy\\in I$ means that $x\\in I$ or $y\\in I$ and so $[x]=0$ or $[y]=0$ so $R/I$ is an integral domain.\n\n\\subsection*{proof 3}\n$R/I$ integral domain\n\nthen $[xy]=[0]$ in $R/I$ $[x][y]=[0]$ in $R/I$ so $[x]=0$ or$[y]=[0]$ then $x\\in I$ or $y\\in I$ hence $I$ is a prime ideal.\n\n\\section*{thrm}\nif $R$ is a principle ideal domain and $P$ is a prime ideal different from zero. then $P$ is maximal.\n\n\\subsection*{proof}\n$P\\subseteq I\\subseteq R$ write $P=aR$, $I=bR$. 
\n\\section*{thrm}\nif $R$ is a principal ideal domain and $P$ is a prime ideal different from zero, then $P$ is maximal.\n\n\\subsection*{proof}\nlet $P\\subseteq I\\subseteq R$ and write $P=aR$, $I=bR$. $P$ is non-zero and so $a\\ne 0$, and $P\\subseteq I\\to aR\\subseteq bR$. then $a\\in bR$ and we write $a=br, r\\in R$. then $a=br\\in P$ and so $b\\in P$ or $r\\in P$ because $P$ is a prime ideal. if $b\\in P$ then $a|b$ and so $I\\subseteq P$, but $P\\subseteq I$ and so $I=P$. if $r\\in P$ then $r\\in aR$ and $r=as, s\\in R$, and then $a=br=b(as)$ and so $a(1-bs)=0$, but $a\\ne 0$ and because we are in an integral domain then $1-bs=0$. and so $1=bs\\in I$ and $1\\in I$ and so $I=R$.\n\\end{document}\n\n\n\n", "meta": {"hexsha": "fa185f04f21e56dfd1db16a9359ecca0daad8482", "size": 3754, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-12-08.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-12-08.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-12-08.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7111111111, "max_line_length": 498, "alphanum_fraction": 0.6534363346, "num_tokens": 1514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8198933403143929, "lm_q1q2_score": 0.594316321775432}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[latin1]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\author{Daniel Frederico Lins Leite}\n\\graphicspath{ {./} }\n\\newcommand{\\qed}{\\hfill\\blacksquare}\n\\begin{document}\n\t\nDasgupta \"Algorithms\" book:\nExercises:\n0.a\n\\begin{align*}\n\tf(n) = n - 100\\\\\n\tg(x) = n - 200 \n\\end{align*}\n\\includegraphics[width=10cm]{0_a.png}\n\\paragraph{}\nIs $f(n)=O(g(n))$?\n\\begin{align*}\n\t\\frac{n-100}{n-200} = c\\\\\n\tn-100=c*(n-200)\\\\\n\tn-100=cn-200c\\\\\n\tn=cn-200c+100\\\\\n\tn-cn=-200c+100\\\\\n\tn(1-c)=-200c+100\\\\\n\tn=\\frac{-200c+100}{1-c}\\\\\n\tn=\\frac{100(-2c+1)}{1-c}\\\\\n\tn=100\\frac{-2c+1}{1-c}\\\\\n\\end{align*}\n\\begin{align*}\n\tn*-1=100\\frac{-2c+1}{1-c}*-1\\\\\n\t-n=100\\frac{2c-1}{-1+c}\\\\\n\t-n=100\\frac{2c-1}{c-1}\\\\\n\t-n=100\\frac{2(c-1)}{c-1}\\\\\n\t-n=200\\frac{c-1}{c-1}\\\\\n\t-n=200\\\\\n\tn=-200\n\\end{align*}\n\\paragraph{}\nSo\n\\begin{align*}\n\t\\frac{f(n)}{g(n)}&<=-200*\\frac{g(n)}{g(n)}\\\\\n\tf(n) &= O(g(n))\\\\\n\t\\blacksquare\n\\end{align*}\n\n\\paragraph{}\n0.2\nShow that, if c is a positive real number, then $g(n) = 1 + c + c^2 + ... + c^n$ is:\\\\\n(a) $\\theta(1)$ if $c < 1$.\\\\\n(b) $\\theta(n)$ if $c = 1$.\\\\\n(c) $\\theta(cn)$ if c > 1.\\\\\nThe moral: in big-$\\theta$ terms, the sum of a geometric series is simply the first term if the series is\nstrictly decreasing, the last term if the series is strictly increasing, or the number of terms if the\nseries is unchanging.\n\\paragraph{}\n\\begin{align*}\n\tf(n) &= 1 + c + c^2 + ... + c^n\\\\\n\tf(n) &= \\sum_{i=0}^{n} {c^i} <= \\sum_{i=0}^{\\infty} {c^i}\\\\\n\t&<=\\frac{1}{1-c}\n\\end{align*}\nIn the case of $c<1$ we have\n\\begin{align*}\n\tf(n) <= \\frac{1}{1-c}\\\\\n\tf(n) <= z &&z > 0\\\\\n\t\\frac{f(n)}{g(n)}<=k*\\frac{g(n)}{g(n)}\\\\\n\t\\frac{f(n)}{1}<=k*\\frac{1}{1}&&\\text{choose $k=z$}\\\\\\\\\n\tf(n) = O(1)\\\\\n\\end{align*}\nIn the case of $c=1$ we have\n\\begin{align*}\n\tf(n) &= 1 + 1^2 + ... + 1^n\\\\\n\tf(n) &= \\sum_{i=0}^{n}{1} = n\\\\\n\t\\frac{f(n)}{g(n)}&<=k*\\frac{g(n)}{g(n)}\\\\\n\t\\frac{n}{n}&<=k*\\frac{n}{n}&&\\text{choose $k=1$}\\\\\n\tf(n) &= O(n)\n\\end{align*}\nIn the case of $c>1$ we have\n\\begin{align*}\n\tf(n) &= 1 + c^2 + ... + c^n\\\\\n\tf(n) &= \\sum_{i=0}^{n} {c^i} <= \\sum_{i=0}^{\\infty} {c^i}\\\\\n\t&=\\sum_{i=0}^{\\infty} {c^n}\\\\\n\t&=c^n*\\sum_{i=0}^{\\infty} {1}\\\\\n\t&=n*c^n\\\\\n\tf(n)&<=n*c^n\\\\\n\t\\frac{f(n)}{g(n)}&<=k*\\frac{g(n)}{g(n)}\\\\\n\t\\frac{f(n)}{c^n}&<=k*\\frac{c^n}{c^n} &&\\text{choose $k=n$}\\\\\n\tf(n) &= O(c^n)\n\t\\blacksquare\n\\end{align*}\n\n0.3\nThe Fibonacci numbers F 0 ,F 1 ,F 2 ,..., are defined by the rule\nF 0 = 0, F 1 = 1, F n = F n?1 + F n?2 .\nIn this problem we will confirm that this sequence grows exponentially fast and obtain some\nbounds on its growth.\n(a) Use induction to prove that F n ? 2 0.5n for n ? 6.\n(b) Find a constant c < 1 such that F n ? 2 cn for all n ? 0. 
\n\\paragraph{}\n0.3\nThe Fibonacci numbers $F_0, F_1, F_2, \\ldots$ are defined by the rule\n$F_0 = 0$, $F_1 = 1$, $F_n = F_{n-1} + F_{n-2}$.\nIn this problem we will confirm that this sequence grows exponentially fast and obtain some\nbounds on its growth.\n(a) Use induction to prove that $F_n \\ge 2^{0.5n}$ for $n \\ge 6$.\n(b) Find a constant $c < 1$ such that $F_n \\le 2^{cn}$ for all $n \\ge 0$. Show that your answer is correct.\n(c) What is the largest $c$ you can find for which $F_n = \\Omega(2^{cn})$?\n\n(a)\nbase cases: $F_6=8\\ge 2^{3}$ and $F_7=13\\ge 2^{3.5}\\approx 11.3$. inductive step:\n\\begin{align*}\nF_n &= F_{n-1}+F_{n-2}\\\\\n&\\ge 2^{(n-1)/2} + 2^{(n-2)/2}\\\\\n&\\ge 2\\cdot 2^{(n-2)/2}\\\\\n&= 2^{\\frac{n-2}{2}+1}=2^{\\frac{n}{2}}\\\\\n\\qed\n\\end{align*}\n\n(b)\nFor the inductive step we need\n\\begin{align*}\n\tF_n = F_{n-2} + F_{n-1} &\\le 2^{cn}\\\\\n\t2^{c(n-2)} + 2^{c(n-1)} &\\le 2^{cn}\\\\\n\t1 + 2^{c} &\\le 2^{2c} \\quad\\text{(dividing by } 2^{c(n-2)})\n\\end{align*}\nWith $a=2^{c}>0$ this reads $1+a\\le a^{2}$, which holds iff $a\\ge\\frac{1+\\sqrt{5}}{2}$, i.e. iff $c\\ge\\log_2\\frac{1+\\sqrt{5}}{2}\\approx 0.694$.\nSo $c=0.7<1$ works, with base cases $F_0=0\\le 1$ and $F_1=1\\le 2^{0.7}$.\n\nFor reference, the first Fibonacci numbers are\n$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765$.\n\\end{document}", "meta": {"hexsha": "4fae0f75fe2c16e0102d64b165081159fe800f54", "size": 3907, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "courses/coursera-sandiego-algorithms/algorithmic-toolbox/book/DasguptaSolutions.tex", "max_stars_repo_name": "xunilrj/sandbox", "max_stars_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-04-01T17:18:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-12T05:23:23.000Z", "max_issues_repo_path": "courses/coursera-sandiego-algorithms/algorithmic-toolbox/book/DasguptaSolutions.tex", "max_issues_repo_name": "xunilrj/sandbox", "max_issues_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-05-24T13:36:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T06:44:20.000Z", "max_forks_repo_path": "courses/coursera-sandiego-algorithms/algorithmic-toolbox/book/DasguptaSolutions.tex", "max_forks_repo_name": "xunilrj/sandbox", "max_forks_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-09-20T01:07:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-22T14:55:38.000Z", "avg_line_length": 24.2670807453, "max_line_length": 105, "alphanum_fraction": 0.5195802406, "num_tokens": 1961, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.8198933293122506, "lm_q1q2_score": 0.5943163040541075}}
{"text": "\\appendix\n\n\\section{Butcher Tableau for IMEX Schemes}\n\\label{app:butcherTables}\n\nFor easy reference, we include the Butcher tableau for the IMEX schemes considered in this paper, which can be written in the standard double Butcher tableau\n\\begin{equation}\n  \\begin{array}{c | c}\n    \\tilde{\\vect{c}} & \\tilde{A} \\\\\n    \\hline\n    & \\tilde{\\vect{w}}^{T}\n  \\end{array}\n  \\qquad\n  \\begin{array}{c | c}\n    \\vect{c} & A \\\\\n    \\hline\n    \\alpha & \\vect{w}^{T}\n  \\end{array}.  \n  \\label{eq:butcher}\n\\end{equation}\nThe \\emph{explicit tableau} (left; components adorned with a tilde) represents the explicit part of the IMEX scheme, and the \\emph{implicit tableau} (right; unadorned components) represents the implicit part of the IMEX scheme.  \nFor $s$ stages, $\\tilde{A}=(\\tilde{a}_{ij})$, $\\tilde{a}_{ij}=0$ for $j\\ge i$, and $A=(a_{ij})$, $a_{ij}=0$ for $j>i$, are $s\\times s$ matrices, and $\\tilde{\\vect{w}}=(\\tilde{w}_{1},\\ldots,\\tilde{w}_{s})^{T}$ and $\\vect{w}=(w_{1},\\ldots,w_{s})^{T}$.  \nThe vectors $\\tilde{\\vect{c}}=(\\tilde{c}_{1},\\ldots,\\tilde{c}_{s})^{T}$ and $\\vect{c}=(c_{1},\\ldots,c_{s})^{T}$, used for non autonomous systems, satisfy $\\tilde{c}_{i}=\\sum_{j=1}^{i-1}\\tilde{a}_{ij}$ and $c_{i}=\\sum_{j=1}^{i}a_{ij}$.  \nFor the implicit tableau, we have included the scalar $\\alpha$, used for the correction step in Eq.~\\eqref{eq:imexCorrection}.  \n\nFor the analysis of convex-invariant IMEX schemes, additional coefficients are defined \\cite{hu_etal_2018} (cf. Eq.~\\eqref{eq:imexStagesRewrite}).  \nFirst, let\n\\begin{equation}\n  b_{ii} = \\f{1}{a_{ii}}, \\quad\n  b_{ij} = -\\f{1}{a_{ii}}\\sum_{l=j}^{i-1}a_{il}b_{lj}, \\quad\n  \\tilde{b}_{ij} = -\\f{1}{a_{ii}}\\Big(\\tilde{a}_{ij}+\\sum_{l=j+1}^{i-1}a_{il}\\tilde{b}_{lj}\\Big).  \n\\end{equation}\nThen, for IMEX schemes of Type~A \\cite{dimarcoPareschi2013},\n\\begin{equation}\n  \\begin{aligned}\n    c_{i0} &= 1-\\sum_{j=1}^{i-1}\\sum_{l=j}^{i-1}a_{il}b_{lj}, \\quad &\n    c_{ij} &= \\sum_{l=j}^{i-1}a_{il}b_{lj}, \\\\\n    \\tilde{c}_{i0} &= 0, \\quad &\n    \\tilde{c}_{ij} &= \\tilde{a}_{ij} + \\sum_{l=j+1}^{i-1}a_{il}\\tilde{b}_{lj};\n  \\end{aligned}\n  \\label{eq:positivityCoefficientsA}\n\\end{equation}\nfor IMEX schemes of Type~ARS \\cite{ascher_etal_1997},\n\\begin{equation}\n  \\begin{aligned}\n    c_{i0} &= 1-\\sum_{j=2}^{i-1}\\sum_{l=j}^{i-1}a_{il}b_{lj}, \\quad &\n    c_{ij} &= \\sum_{l=j}^{i-1}a_{il}b_{lj} \\\\\n    \\tilde{c}_{i0} &= \\tilde{a}_{i1}+\\sum_{j=2}^{i-1}a_{ij}\\tilde{b}_{j1}, \\quad &\n    \\tilde{c}_{ij} &= \\tilde{a}_{ij}+\\sum_{l=j+1}^{i-1}a_{il}\\tilde{b}_{lj}.  \n  \\end{aligned}\n  \\label{eq:positivityCoefficientsARS}\n\\end{equation}\nNote that $c_{i1}=\\tilde{c}_{i1}=0$ in Eq.~\\eqref{eq:positivityCoefficientsARS} so that $\\sum_{j=0}^{i-1}c_{ij}=1$.  \nAlso note the difference between the matrix coefficients in Eqs.~\\eqref{eq:positivityCoefficientsA} and \\eqref{eq:positivityCoefficientsARS} and the vector components defined below Eq.~\\eqref{eq:butcher}.  \n\n\\paragraph{IMEX PA2}\n\nA second-order accurate, convex-invariant IMEX scheme of type $A$ (the matrix $A$ is invertible) with four implicit solves was given in \\cite{hu_etal_2018}.  \nWe refer to this scheme as IMEX PA2.  
\nFor this scheme, the non-zero components of $\\tilde{A}$ and $A$ are given by\n\\begin{align*}\n  \\tilde{a}_{21} &= 0.7369502715, \\\\\n  \\tilde{a}_{31} &= 0.3215281691, \\quad \\tilde{a}_{32} = 0.6784718309, \\\\\n  a_{11} &= 0.6286351712, \\\\\n  a_{21} &= 0.2431004655, \\quad a_{22} = 0.1959392570, \\\\\n  a_{31} &= 0.4803651051, \\quad a_{32} = 0.0746432814, \\quad a_{33} = 0.4449916135. \n\\end{align*}\nThe coefficient in the correction step is $\\alpha = 0.2797373792$ and the CFL constant is $c_{\\Sch} = 0.5247457524$.\nThis scheme is globally stiffly accurate (GSA), so that $\\tilde{w}_{i}=\\tilde{a}_{3i}$ and $w_{i}=a_{3i}$ for $i\\le3$.\n\n\\paragraph{IMEX PA2+}\n\nWe have found another second-order accurate, convex-invariant IMEX scheme of type $A$ with four implicit solves, which we refer to as IMEX PA2+.  \nThis scheme allows for a larger value of $c_{\\Sch}$ than IMEX PA2 (i.e., a larger time step while maintaining admissible solutions).  \nThe scheme was found by random sampling of the parameter space spanned by the IMEX coefficients and selecting the scheme with the largest $c_{\\Sch}$.  \nFor IMEX PA2+, $c_{\\Sch} = 0.895041066934$. \nThe non-zero components of $\\tilde{A}$ and $A$ are given by\n\\begin{align*}\n  \\tilde{a}_{21} &= 0.909090909090909, \\\\\n  \\tilde{a}_{31} &= 0.450000000000000, \\quad \\tilde{a}_{32} = 0.550000000000000, \\\\\n  a_{11} &= 0.521932391842510, \\\\\n  a_{21} &= 0.479820781424967, \\quad a_{22} = 0.002234534340252, \\\\\n  a_{31} &= 0.499900000000000, \\quad a_{32} = 0.001100000000000, \\quad a_{33} = 0.499000000000000.\n\\end{align*}\nThe coefficient in the correction step is $\\alpha = 0.260444263529413$.  \nThis scheme is also GSA; $\\tilde{w}_{i}=\\tilde{a}_{3i}$ and $w_{i}=a_{3i}$ for $i\\le3$.  \n\nThe rest of the IMEX schemes we consider here do not include the correction step in Eq.~\\eqref{eq:imexCorrection}; i.e., $\\alpha=0$.  \n\n\\paragraph{IMEX PC2}\n\nAnother IMEX scheme was given in \\cite{mcclarren_etal_2008} (referred to there as a semi-implicit predictor-corrector method).  \nThis scheme has two implicit solves and can be written in the double Butcher tableau form, and we refer to this scheme as IMEX PC2.  \nThe non-zero components of $\\tilde{A}$ and $A$ are given by\n\\begin{align*}\n  \\tilde{a}_{21} &= 0.5, \\quad \\tilde{a}_{32} = 1, \\\\\n  a_{22} &= 0.5, \\quad a_{33} = 1.0,\n\\end{align*}\n$\\alpha=0$, and $\\tilde{w}_{i}=\\tilde{a}_{3i} = w_{i}=a_{3i}$ for $i\\le3$.  \nIMEX PC2 is not convex-invariant, since $c_{\\Sch} = 0$ (cf. discussion in Section~\\ref{sec:imex}).  \n\n\\paragraph{IMEX PD-ARS}\n\nWe have found a family of convex-invariant, diffusion accurate IMEX schemes of type ARS that are second-order accurate in the streaming limit, which we refer to as IMEX PD-ARS; see \\ref{app:PD-ARS}.  \nFor these schemes, $c_{\\Sch}= 1 - 2\\epsilon$ with $\\epsilon \\in [0, 1/2)$.\nHere we give an example by setting $\\epsilon=0.1$:\n\\begin{align*}\n  \\tilde{a}_{21} & = 1.0, \\\\\n  \\tilde{a}_{31} & = 0.5, \\quad \\tilde{a}_{32} = 0.5, \\\\\n  a_{22} & = 1.0, \\nonumber \\\\\n  a_{32} & = 0.4 \\,( = 0.5 - \\epsilon\\,), \\quad a_{33} = 0.6 \\,( = 0.5 + \\epsilon\\,). \n\\end{align*}\nThis scheme is GSA, $\\alpha=0$, and requires two implicit solves per time step (same as IMEX PC2).  \n\n\\paragraph{IMEX RKCB2}\n\nWe compare the performance of the convex-invariant IMEX schemes with two other (not convex-invariant) IMEX schemes.  
\nThe first one is the second-order accurate IMEX scheme given in \\cite{cavaglieriBewley2015} with two implicit solves.  \nWe refer to this scheme as IMEX RKCB2.  \nThe non-zero components of $\\tilde{A}$ and $A$ are given by\n\\begin{align*}\n  \\tilde{a}_{21} &= 2/5, \\quad \\tilde{a}_{32} = 1, \\\\\n  a_{22} &= 2/5, \\nonumber \\\\\n  a_{32} &= 5/6, \\quad a_{33} = 1/6,\n\\end{align*}\n$\\alpha=0$, and $w_{i} = a_{3i} = \\tilde{w}_{i}$ (stiffly accurate \\cite{pareschiRusso_2005}).\n\n\\paragraph{IMEX SSP2332}\n\nAnother scheme that we use for comparison is the second-order accurate IMEX scheme given in \\cite{pareschiRusso_2005} with three implicit solves.  \nWe refer to this scheme as IMEX SSP2332.  \nThe non-zero components of $\\tilde{A}$ and $A$ are given by\n\\begin{align*}\n  \\tilde{a}_{21} &= 1/2, \\\\\n  \\tilde{a}_{31} &= 1/2, \\quad \\tilde{a}_{32} = 1/2, \\\\\n  a_{11} &= 1/4, \\\\\n  a_{22} &= 1/4, \\\\\n  a_{31} &= 1/3, \\quad a_{32} = 1/3, \\quad a_{33} = 1/3, \n\\end{align*}\n$\\alpha=0$, and $w_{i} = a_{3i} = \\tilde{w}_{i}$ (stiffly accurate).\n\n\\paragraph{SSPRK2 and SSPRK3}\n\nTo compare the performance of the IMEX schemes in the streaming limit (no collisions), we also compute results with explicit strong stability-preserving Runge-Kutta methods \\cite{gottlieb_etal_2001}.  \n(All elements of the implicit Butcher tableau are zero.)  \nThe optimal second-order accurate, strong-stability-preserving Runge-Kutta scheme (SSPRK2) has the following non-zero components:\n\\begin{align}\n  \\tilde{a}_{21} &= 1, \\nonumber \\\\ \n  \\tilde{w}_{1}  &= 1/2, \\quad \\tilde{w}_{2} = 1/2. \\nonumber \n\\end{align}\nThe optimal third-order accurate, strong-stability-preserving Runge-Kutta scheme (SSPRK3) has the following non-zero components:\n\\begin{align}\n  \\tilde{a}_{21} &= 1, \\nonumber \\\\\n  \\tilde{a}_{31} &= 1/4, \\quad \\tilde{a}_{32} = 1/4, \\nonumber \\\\\n  \\tilde{w}_{1} &= 1/6, \\quad \\tilde{w}_{2} = 1/6, \\quad \\tilde{w}_{3} =2/3. \\nonumber\n\\end{align}\n\n\\section{Construction of IMEX Scheme PD-ARS}\n\\label{app:PD-ARS}\n\nHere we construct a three-stage PD-IMEX scheme of Type~ARS, conforming to Definition~\\ref{def:PD-IMEX}.  \nWe refer to the resulting IMEX scheme as PD-ARS.  \nFor a 3-stage scheme, the double Butcher tableau is\n\\begin{equation}\n  \\begin{array}{c | c c c}\n  \t         0           & 0                 & 0                   & 0  \\\\\n  \t\\tilde{c}_{2} & \\tilde{a}_{21} & 0                   & 0  \\\\\n  \t\\tilde{c}_{3} & \\tilde{a}_{31} & \\tilde{a}_{32} & 0  \\\\ \\hline\n  \t                   & \\tilde{a}_{31} & \\tilde{a}_{32} & 0\n  \\end{array}\n  \\qquad\n  \\begin{array}{c | c c c}\n  \t     0  & 0  & 0         & 0          \\\\\n  \tc_{2} & 0 & a_{22} & 0          \\\\\n  \tc_{3} & 0 & a_{32} & a_{33}  \\\\ \\hline\n  \t         & 0 & a_{32} & a_{33}\n  \\end{array}\n\\end{equation}\nThe problem is then to find the coefficients $\\{ \\tilde{a}_{21}, \\tilde{a}_{31}, \\tilde{a}_{32}, a_{22}, a_{32}, a_{33} \\}$ satisfying the constraints in Definition~\\ref{def:PD-IMEX} while maximizing\n\\begin{equation}\n  c_{\\Sch} = \\min \\Big\\{\\, \\dfrac{c_{20}}{\\tilde{c}_{20}},\\, \\dfrac{c_{30}}{\\tilde{c}_{30}},\\, \\dfrac{c_{32}}{\\tilde{c}_{32}} \\,\\Big\\}.  
\n\\end{equation}\nBy imposing the equality constraints (i.e., Eqs.~\\eqref{eq:diffusionCondition}, \\eqref{eq:orderConditionsEx}, and \\eqref{eq:implicitConsistency}), the double Butcher tableau can be written in terms of two independent parameters ($x,y\\in\\bbR$) as\n\\begin{equation}\n  \\begin{array}{c | c c c}\n  \t     0       & 0            & 0 & 0 \\\\\n  \t\\frac{1}{2x} & \\frac{1}{2x} & 0 & 0 \\\\\n  \t     1       & 1-x          & x & 0 \\\\ \\hline\n  \t             & 1-x          & x & 0\n  \\end{array}\n  \\qquad\n  \\begin{array}{c | c c c}\n  \t     0       & 0 & 0            & 0 \\\\\n  \t\\frac{1}{2x} & 0 & \\frac{1}{2x} & 0 \\\\\n  \t     1       & 0 & 1-y          & y \\\\ \\hline\n  \t             & 0 & 1-y          & y\n  \\end{array}\n\\end{equation}\nComputing the relevant coefficients in Eq.~\\eqref{eq:positivityCoefficientsARS}, we find $c_{20}=1$, $c_{30}=1-2x(1-y)$, $c_{32}=2x(1-y)$, $\\tilde{c}_{20}=\\f{1}{2x}$, $\\tilde{c}_{30}=(y-x)$, and $\\tilde{c}_{32}=x$, so that\n\\begin{equation}\n  c_{\\Sch} = \\min \\Big\\{\\, 2x,\\, \\f{1-2x(1-y)}{y-x},\\, 2(1-y) \\,\\Big\\}.  \n\\end{equation}\nThe convex-invariant property requires imposing the inequality constraints $a_{22},a_{33}>0$, $c_{20},c_{30},c_{32}\\ge0$, and $\\tilde{c}_{20},\\tilde{c}_{30},\\tilde{c}_{32}\\ge0$, which imply that\n\\begin{equation}\n  0 < x \\le y\n  \\quad\\text{and}\\quad\n  0 < y \\le 1.\n\\end{equation}\nWe chose $x=\\f{1}{2}$, so that the explicit part of the IMEX scheme is equivalent to the optimal second-order SSP-RK scheme in \\cite{gottlieb_etal_2001} (SSPRK2 in \\ref{app:butcherTables}).  \nThen, $y=\\f{1}{2} + \\epsilon$, where $\\epsilon \\in [0, \\frac{1}{2})$, and $c_{\\Sch} = 1 - 2\\epsilon$ results in the PD-ARS IMEX scheme.  \nSetting $\\epsilon = 0$ gives the optimal scheme with $c_{\\Sch} = 1$.  \n\n\\section{Nonexistence of Three-Stage PD-IMEX Scheme of Type~A}\n\\label{app:noTypeA}\n\nHere we prove that a PD-IMEX scheme  of type~A (i.e., conforming to Definition~\\ref{def:PD-IMEX}, but with Eq.~\\eqref{eq:positivityConditionsTypeA} replacing Eq.~\\eqref{eq:positivityConditionsTypeARS} in item 3) does not exist.  \nFirst, for a three-stage, GSA IMEX scheme of type~A the double Butcher tableau is\n\\begin{equation*}\n  \\begin{array}{c | c c c}\n  \t      0       & 0              & 0              & 0 \\\\\n  \t\\tilde{c}_{2} & \\tilde{a}_{21} & 0              & 0 \\\\\n  \t\\tilde{c}_{3} & \\tilde{a}_{31} & \\tilde{a}_{32} & 0 \\\\ \\hline\n  \t              & \\tilde{a}_{31} & \\tilde{a}_{32} & 0\n  \\end{array}\n  \\qquad\n  \\begin{array}{c | c c c}\n    c_{1} & a_{11} & 0      & 0      \\\\\n  \tc_{2} & a_{21} & a_{22} & 0      \\\\\n  \tc_{3} & a_{31} & a_{32} & a_{33} \\\\ \\hline\n  \t      & a_{31} & a_{32} & a_{33}\n  \\end{array}.\n\\end{equation*}\nFirst we consider the equality constraints.  
\nConsistency of the implicit coefficients and second-order accuracy in the streaming limit (Eqs.~\\eqref{eq:implicitConsistency} and \\eqref{eq:orderConditionsEx}, respectively) give\n\\begin{align}\n  a_{31} + a_{32} + a_{33} = 1, \\quad \\tilde{a}_{31} + \\tilde{a}_{32} = 1, \\quad \\text{and}\\quad \\tilde{a}_{32}\\,\\tilde{a}_{21} =  \\f{1}{2}.\n  \\label{eq:1stOrderGeneral}\n\\end{align}\nAccuracy in the diffusion limit (Eq.~\\eqref{eq:diffusionCondition}) requires \n\\begin{align}\n  \\f{\\tilde{a}_{21}}{a_{22}} = 1\n  \\quad\\text{and}\\quad \n  -\\f{a_{32}\\,\\tilde{a}_{21}}{a_{22}\\,a_{33}} + \\f{\\tilde{a}_{31}+\\tilde{a}_{32}}{a_{33}} = 1.\n\\label{eq:diffusionAccurate}\n\\end{align}\nEq.~\\eqref{eq:diffusionAccurate} together with the second constraint in Eq~\\eqref{eq:1stOrderGeneral} gives $a_{32}+a_{33}=1$, which together with the first constraint in Eq.~\\eqref{eq:1stOrderGeneral} gives\n\\begin{align}\n  a_{31} = 0.\n  \\label{eq:a31is0}\n\\end{align}\nNext we consider the inequality constraints.  \nThe convex-invariant property in Eq.~\\eqref{eq:positivityConditionsTypeA} requires $a_{11}, a_{22}, a_{33}>0$, and \n\\begin{equation}\n  c_{21} = \\frac{a_{21}}{a_{11}} \\geq 0, \\quad\n  c_{31} = \\frac{a_{31}}{a_{11}} - \\frac{a_{32}\\,a_{21}}{a_{22}\\,a_{11}} \\geq 0, \\quad\\text{and}\\quad\n  c_{32} = \\frac{a_{32}}{a_{22}} \\geq 0.\n  \\label{eq:constraintsTypeA}\n\\end{equation}\nAs a consequence, $a_{21}, a_{32} \\geq 0$.  \nHowever, since $a_{31} = 0$, \n\\begin{equation*}\n  c_{31} = - \\frac{a_{32}a_{21}}{a_{22}a_{11}} \\le 0.  \n\\end{equation*}\nThus the inequality constraints in Eq.~\\eqref{eq:constraintsTypeA} hold only for $c_{31} = 0$, which gives $c_{\\Sch} = \\min \\Big\\{\\,\\f{c_{21}}{\\tilde{c}_{21}}, \\f{c_{31}}{\\tilde{c}_{31}}, \\f{c_{32}}{\\tilde{c}_{32}}\\Big\\} = 0$.\nTherefore, a three-stage PD-IMEX scheme (Definition~\\ref{def:PD-IMEX}) of type~A does not exist.\n", "meta": {"hexsha": "8f605b63b9f9323893ff04846b01a9c8556dd5b0", "size": 14059, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/M1/realizableFermionicM1/sections/appendix.tex", "max_stars_repo_name": "srichers/thornado", "max_stars_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:16:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:31:21.000Z", "max_issues_repo_path": "Documents/M1/realizableFermionicM1/sections/appendix.tex", "max_issues_repo_name": "srichers/thornado", "max_issues_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-07-10T20:13:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-11T13:21:00.000Z", "max_forks_repo_path": "Documents/M1/realizableFermionicM1/sections/appendix.tex", "max_forks_repo_name": "srichers/thornado", "max_forks_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-11-14T01:13:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T02:08:20.000Z", "avg_line_length": 52.8533834586, "max_line_length": 251, "alphanum_fraction": 0.6341133793, "num_tokens": 5480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8198933359135361, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.5943163039660836}}
{"text": "\\documentclass[12pt, letter]{article}\n\\usepackage{amsmath,amssymb,hyperref,float,graphicx}\n\\graphicspath{{./images/}}\n\\restylefloat{table}\n\\begin{document}\n\\title{Air Taxi Analytic Framework}\nThis document provides an analytic framework for an air taxi service using electric vertical takeoff (eVTOL) aircraft. \n\\section{Passenger Mass}\nThe Survey on Standard Weights of Passengers and Baggage\\footnote{EASA 2008.C.06/30800/R20090095/30800000/FBR/RLO} funded by EASA in 2009 provides passenger weights, with some data broken down by the type of trip (business/leisure), location, gender, and age. From Table 4.7 in the reference, the average weight of male passengers with carry on baggage in the winter has a mean of 93.5 kg with a standard deviation of 15.6 kg. From Table 4.9, the average checked bag weight for males in the winter is 16.5 kg with standard deviation of 5.9 kg. Assuming independence between these two distributions, the passenger mass with baggage distribution has a mean of $\\mu=110 kg$ with a standard deviation of $\\sigma=16.7 kg$. The payload mass estimation is framed as a statistics problem, where the payload mass based on the passenger capacity, $n_{pax}$, is calculated such that a specified fraction of passengers, $p$, can be accommodated when independently sampled from a Gaussian population distribution. The payload mass can be calculated with the following formula,\n\\begin{align}\n\tm_{pax}=\\mu+erf^{-1}\\left(2p-1\\right)\\cdot\\sigma\\sqrt{\\frac{2}{n_{pax}}}\n\\end{align}\n\\section{Vehicle Mass}\nThe vehicle empty mass, $m_{empty}$, is defined as the portion of the vehicle gross takeoff mass, $m_{gross}$, that is not battery mass, $m_{batteries}$, or payload mass, $m_{payload}$. The ratio of the empty and gross mass, $f_{empty}$, is typically between 0.55 and 0.65. From these relationship, the battery mass may be determined with the constraint that the battery mass cannot be negative.\n\\begin{align}\n\tm_{empty}&=m_{gross}f_{empty}\\\\\n\tm_{payload}&=n_{pax}m_{pax}+n_{pilot}m_{pilot}\\\\\n\tm_{batteries}&=\\max\\left(0,m_{gross}-m_{empty}-m_{payload}\\right)\n\\end{align}\n\\section{Hover Performance}\nNoise produced during hover is strongly dependent on the rotor tip Mach number, $M_{tip}$. As such, a constraint may be placed to limit the tip Mach number with $M_{tip}=0.5$ being a reasonable selection. The tip speed is given by,\n\\begin{align}\n\tv_{tip}=M_{tip}a\n\\end{align}\nwhere $a$ is the speed of sound. It is assumed that the vehicle extents are circumscribed by a circle of diameter, $d_{value}$. This value is also known as the ``d-value\" and is a common constraint that is applied to account for helipad size limitations. The ratio of total rotor area to that area encompassed by the $d_{value}$ is given by $f_{rotorArea}$ and is dependent on vehicle configuration. Given the number of rotors, $n_{rotors}$, the individual rotor area may be defined as,\n\\begin{align}\n\tA_{disk}=\\frac{\\pi d_{value}^2}{4} \\frac{f_{rotorArea}}{n_{rotors}}\n\\end{align}\nThe rotor solidity, $\\sigma$ is related to the individual rotor thrust, $T$, and the average blade section lift coefficient, $c_l$. Due to structural and aerodynamic limitations, the rotor solidity is typically constrained to values between $\\sigma_{min}=0.05$ and $\\sigma_{max}=0.25$.\\footnote{Rotors with higher solidity generally have small aspect ratios and are prone to 3-d structural and aerodynamic effects. 
Some of these 3-d effects require special compensations or modeling techniques that may not be captured with reasonable accuracy with general global models.}\n\\begin{align}\n\tT&=\\frac{m_{gross}g}{n_{rotors}}\\\\\n\t\\sigma&=\\text{median}\\left(\\sigma_{min},\\frac{6T}{\\rho v_{tip}^2 A_{disk} c_l},\\sigma_{max}\\right) \\label{solidity}\n\\end{align}\nNote that if the solidity is set to one of the limits, then the tip speed must be recalculated using Equation \\ref{solidity}. If the tip speed limit is exceeded, then the solution is considered invalid. The induced power for an individual rotor is given by,\n\\begin{align}\n\tP_{ind}=\\frac{T^{1.5}}{\\sqrt{2 \\rho A_{disk}}} \\kappa\n\\end{align}\nwhere $\\kappa$ accounts for non-ideal induced power penalties, including tip losses and partial wake contraction. The profile power for an individual rotor is given by,\n\\begin{align}\n\tP_{pro}=\\frac{\\rho A_{disk} v_{tip}^3 \\sigma c_d}{8} \\label{P_pro}\n\\end{align}\nwhere $c_d$ is the average blade section drag coefficient. Note that the profile power estimate produced by Equation \\ref{P_pro} becomes non-conservative for tip speeds approaching Mach $0.7$. The total hover power draw is therefore,\n\\begin{align}\n\tP_{hover}=n_{rotors} \\frac{P_{ind}+P_{pro}}{\\eta_{hover}}\n\\end{align}\nwhere $\\eta_{hover}$ is the powertrain efficiency during hover. This efficiency covers losses in the motor, controller, wires, and any other sources of loss.\n\\section{Cruise Performance}\nBy selecting a reasonable cruise lift coefficient, $C_L$, and prescribing a cruise speed, $V_C$, the required reference wing area, $S_{ref}$, may be determined as follows,\n\\begin{align}\n\tS_{ref}=\\frac{2m_{gross}g}{\\rho C_L V_C^2} \\label{S_ref}\n\\end{align}\nGiven the d-value constraint, the resulting aspect ratio, $A$, may be determined using its definition. In consideration of both structural and aeroelastic constraints, a soft constraint is applied to limit the maximum aspect ratio using the p-norm.\n\\begin{align}\n\tA=\\left\\|\\frac{d_{value}^2}{S_{ref}},A_{max}\\right\\|_p\n\\end{align}\nNote that if the aspect ratio is limited by the maximum constraint, then the reference area must be calculated using the definition of the aspect ratio, and the cruise lift coefficient must be recalculated using Equation \\ref{S_ref}. With the aspect ratio known, the Oswald efficiency factor, $e$, may be estimated as follows\\footnote{\\href{https://bit.ly/2BhwcbZ}{Estimating the Oswald Factor from Basic Aircraft Geometrical Parameters (Eq 4)}}.\n\\begin{align}\n\te &=\\frac{1}{1.05-0.007 \\pi A}\n\\end{align}\nThe drag coefficient during cruise is given by the sum of the parasitic drag from the aerodynamic surfaces ($C_{D_0}$), the fuselage and fairing parasitic drag\\footnote{Bramwell Helicopter Dynamics, Fig. 4.15, average of Clean Fixed Wing Aircraft and Experimental Helicopters} ($C_{d_f}$), and the induced drag.\n\\begin{align}\n\tC_{d_f}&=\\frac{0.2331}{S_{ref}} \\left(\\frac{m_{gross}}{1000}\\right)^{2/3}\\\\\n\tC_D&=C_{D_0}+C_{d_f}+\\frac{C_L^2}{\\pi A e}\n\\end{align}\nFinally, the power required during cruise is given by,\n\\begin{align}\n\tP_{cruise} = \\frac{m_{gross} g V_C C_D}{C_L \\eta_{cruise}}\n\\end{align}\nwhere $\\eta_{cruise}$ is the powertrain efficiency during cruise. This efficiency covers losses in the motor, controller, wires, and any other sources of loss.\n\\section{Mission}\nThe mission performance is considered from a total available energy perspective. 
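\nBefore developing the mission model, it is worth exercising the sizing chain assembled so far. The following Python sketch is an illustrative addition rather than part of the framework: values are taken from the baseline appendix where available, while $m_{gross}$, $m_{pilot}$, and the speed of sound $a$ are assumed inputs. The \\texttt{NormalDist} quantile used below is equivalent to the inverse error function form of the payload mass formula.\n\\begin{verbatim}\n# Sizing sketch: baseline values where available; m_gross, m_pilot, a assumed.\nfrom math import pi, sqrt\nfrom statistics import NormalDist\n\n# Payload mass: accommodate a fraction p of n_pax-passenger groups.\nmu, sigma = 110.0, 16.7            # kg, EASA-derived mean and std dev\np, n_pax = 0.95, 4\nm_pax = NormalDist(mu, sigma / sqrt(n_pax)).inv_cdf(p)\n\n# Mass buildup.\nm_gross, f_empty = 3000.0, 0.58    # kg (m_gross assumed), baseline f_empty\nn_pilot, m_pilot = 1, 90.0         # m_pilot assumed\nm_empty = m_gross * f_empty\nm_payload = n_pax * m_pax + n_pilot * m_pilot\nm_batteries = max(0.0, m_gross - m_empty - m_payload)\n\n# Hover power from the momentum-theory relations above.\ng, rho, a = 9.81, 1.0, 340.0       # a (speed of sound) assumed\nM_tip, d_value, f_rotorArea, n_rotors = 0.5, 14.0, 0.32, 8\nc_l, c_d, kappa, eta_hover = 0.80, 0.02, 1.1, 0.89\nv_tip = M_tip * a\nA_disk = (pi * d_value**2 / 4) * f_rotorArea / n_rotors\nT = m_gross * g / n_rotors\nsigma_r = sorted([0.05, 6*T / (rho * v_tip**2 * A_disk * c_l), 0.25])[1]\nP_ind = T**1.5 / sqrt(2 * rho * A_disk) * kappa\nP_pro = rho * A_disk * v_tip**3 * sigma_r * c_d / 8\nP_hover = n_rotors * (P_ind + P_pro) / eta_hover   # W\nprint(m_pax, m_batteries, P_hover)\n\\end{verbatim}\nNote that the sketch does not recalculate the tip speed when the solidity is clamped to a limit, as the framework requires; it is a sanity check rather than an implementation.\n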
The energy available for a given mission is determined by the available energy in a battery pack and by the energy reserved for non-mission segments, such as alternate legs, reserve energy, and unusable energy. The useful energy available in a battery pack is given by,\n\\begin{align}\n\tE_{total}=\\hat{E}_{cell} f_{int} f_{eol} m_{batteries}\n\\end{align}\nwhere $\\hat{E}_{cell}$ is the cell specific energy defined as the rated energy per mass of a battery cell, $f_{int}$ is the integration factor, which is the ratio of total cell mass to total battery pack mass, and $f_{eol}$ is the battery end of life factor that accounts for pack degradation over the useful life of the pack. An alternate mission is sized to account for instances such as the primary landing site not being available. Under current standards, the alternate is sized to a particular time spent loitering, typically in excess of twenty minutes. However, with typical primary mission lengths on the same order, these alternates will likely be redefined to account for the inherently constrained mission profiles. To incorporate flexibility, the energy used during alternates ($E_{alt}$), based on either loiter time ($t_{alt}$) or distance ($d_{alt}$), is addressed by considering the maximum energy of both cases.\n\\begin{align}\n\tE_{alt}=\\max \\left({P_{cruise} t_{alt}, \\frac{P_{cruise} d_{alt}}{V_C-V_{wind}}}\\right)\n\\end{align}\nwhere $V_{wind}$ is the design headwind. Reserve energy, $E_{res}$, is defined as a fraction, $f_{res}$, of the total energy. If the primary mission length is not defined, then the maximum mission length is determined as follows,\n\\begin{align}\n\tE_{hover}\t &= P_{hover} t_{hover} \\\\\n\tE_{cruise} &= E_{total} - E_{hover} - E_{alt} - E_{res} \\\\ \n\tt_{cruise} &= \\frac{E_{cruise}}{P_{cruise}} \\label{t_cruise} \\\\\n\td_{cruise} &= t_{cruise} \\left( V_C - V_{wind} \\right) \\label{d_cruise}\n\\end{align}\nThe total time spent in flight is given by,\n\\begin{align}\n\tt_{flight}=t_{hover}+t_{cruise}\n\\end{align}\nIf the primary mission length is defined, then Equation \\ref{d_cruise} is used to calculate the time spent in cruise and Equation \\ref{t_cruise} is used to calculate the energy used in cruise. In both cases, if the energy available to complete the mission is less than zero, then the mission is considered infeasible. \n\\section{Operations}\nThe maximum number of trips, $N_{day}$, that may be completed in a given day by a vehicle is simply the ratio of daily operating time, $t_{day}$, to total turn-around time. The total turn-around time is the sum of the trip flight time and the turn-around time at the pad, $t_{turn}$, which is the total time between the aircraft landing and subsequent takeoff. The number of trips per year accounts for both the scheduled availability, $a_{sch}$, and unscheduled availability, $a_{unsch}$, each of which is defined as the ratio of the time that the aircraft is available to the total time it could be available if tasks such as maintenance and weather caused no unavailability. Finally, the flight hours per year is the product of the trips per year and the trip flight time, converted to hours.\n\\begin{align}\n\tN_{day}&=\\frac{t_{day}}{t_{flight}+t_{turn}} \\\\\n\tN_{year}&=365 N_{day} a_{sch} a_{unsch} \\\\\n\tH_{year}&=\\frac{N_{year} t_{flight}}{3600}\n\\end{align}\n\\section{Costs}\nOperating cost estimation contains many parameters that require detailed exploration, but fundamentally it may be broken down into several categories. 
These include energy cost, maintenance, battery accruals, hull accruals, insurance, training, services, and landing fees. In addition to these costs, several others exist, such as management and customer service, that may be lumped into an additional factor, $f_{operating}$. Energy costs are described as follows,\n\\begin{align}\n\tE_{flight}&=E_{hover}+E_{cruise} \\\\\n\tD_{discharge}&=\\frac{E_{flight}}{E_{total}} \\\\\n\tC_{pack}&=E_{total} \\left(c_{pack}+c_{cell}\\right) + c_{base}\\\\\n\tR_{discharge}&=\\frac{3600 E_{flight}}{t_{flight} E_{total}}\n\\end{align}\nwhere $D_{discharge}$ is the depth of discharge, $C_{pack}$ is the cost of the battery pack, $c_{pack}$ is the pack component cost per $kWh$, $c_{cell}$ is the cell cost per $kWh$, $c_{base}$ is the pack component base cost, and $R_{discharge}$ is the average rate of discharge during the mission. One important value in the determination of battery costs is the battery cycle life, $N_{battery}$, which is generally very difficult to estimate since it depends on a wide variety of factors such as pack temperature, cell balancing, charge rates, discharge rates, and many more. Nevertheless, a simplified model is proposed here that captures the trends observed for the cells under consideration.\n\\begin{align}\n\tN_{battery}=f_R R_{discharge}^{-k_R} D_{discharge}^{-k_D}\n\\end{align}\nwhere $f_R$, $k_R$, and $k_D$ are determined based on the specific battery type and usage. The per-flight costs associated with energy, $\\hat{C}_{energy}$, and battery pack accruals, $\\hat{C}_{pack}$, are therefore,\n\\begin{align}\n\t\\hat{C}_{pack}&=\\frac{C_{pack}}{N_{battery}} \\\\\n\t\\hat{C}_{energy}&=c_{elec} E_{flight}\n\\end{align}\nConsidered next are annual fixed operating costs, $C_{fixed}$. These include those items that do not vary with flight hours, such as aircraft depreciation ($C_{depreciation}$), insurance ($C_{insurance}$), pilots ($C_{pilot}$), training ($C_{training}$), and other items or services ($C_{services}$).\n\\begin{align}\n\tC_{aircraft}&=c_{aircraft} m_{empty}\\\\\n\tC_{insurance}&=C_{liability}+c_{hull} C_{aircraft}\\\\\n\tC_{depreciation}&=r_{depreciation} C_{aircraft}\\\\\n\tC_{pilot}&=c_{pilot} n_{pilot}\\\\\n\tC_{training}&=c_{training} n_{pilot}\\\\\n\tC_{fixed}&=C_{insurance}+C_{depreciation}+C_{pilot}+C_{training}+C_{services}\n\\end{align}\nwhere $c_{aircraft}$ is the aircraft acquisition cost per unit mass, $C_{liability}$ is the annual liability insurance cost, $c_{hull}$ is the annual hull insurance rate, $r_{depreciation}$ is the aircraft depreciation rate, $c_{pilot}$ is the annual cost of a pilot, and $c_{training}$ is the annual cost of pilot training. Variable costs include the hourly costs of energy, battery accruals, and maintenance.\n\\begin{align}\n\t\\dot{C}_{variable}&=\\frac{3600\\left(\\hat{C}_{energy}+\\hat{C}_{pack}\\right)}{t_{flight}}+\\dot{C}_{maintenance}\n\\end{align}\nFinally, the total operating cost per flight hour includes the additional costs of landing fees ($\\hat{C}_{landing}$) and other items or services that are accounted for by including a margin factor, $f_{operating}$.\n\\begin{align}\n\t\\dot{C}&=\\left(\\dot{C}_{variable}+\\frac{C_{fixed}+\\hat{C}_{landing}N_{year}}{H_{year}}\\right)f_{operating}\n\\end{align}\n\\section{Passenger Experience}\nOne method to determine the value that a new transportation service adds is to perform a comparative analysis between the new service and existing services. In the case of an air taxi service, both the total trip time and total trip cost as experienced by the passenger are considered. 
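\nBefore turning to the trip models, the battery cost relations above may be made concrete with a short numerical sketch. All inputs below other than $c_{pack}$, $k_R$, and $k_D$ are assumed for illustration; in particular $f_R$, $c_{cell}$, and $c_{base}$ are not specified in the baseline appendix.\n\\begin{verbatim}\n# Battery accrual sketch; c_pack from the baseline, other inputs assumed.\nE_total, E_flight, t_flight = 150.0, 60.0, 1800.0   # kWh, kWh, s (assumed)\nf_R, k_R, k_D = 800.0, 1.0, 2.0                     # f_R assumed\nD_discharge = E_flight / E_total                    # depth of discharge\nR_discharge = 3600.0 * E_flight / (t_flight * E_total)  # 1/h\nN_battery = f_R * R_discharge**(-k_R) * D_discharge**(-k_D)\nc_pack, c_cell, c_base = 250.0, 120.0, 5000.0       # $/kWh, $/kWh, $ (last two assumed)\nC_pack = E_total * (c_pack + c_cell) + c_base\nprint(N_battery, C_pack / N_battery)   # cycle life, accrual per flight ($)\n\\end{verbatim}\nWith these assumed inputs the model gives a cycle life of 6250 cycles and a battery accrual of roughly \\$10 per flight, which illustrates how strongly the $D_{discharge}^{-k_D}$ term rewards shallow discharges.\n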
To perform a comparative analysis, trip time and trip cost models for both air taxi and ground taxi are therefore required. A representative case is considered here in which a passenger lands at an airport and needs to transfer to a hotel. In the case of the air taxi, the trip is broken down into the following segments:\n\\begin{table}[H]\n\t\\begin{tabular}{|p{5in}|}\n\t\\hline\n\t\t\\\\\\includegraphics[width=360px]{airCase.png}\\\\\n\t\t\\begin{enumerate}\n\t\t\t\\item Pre-cruise phase: total time, $t_{transfer}$\n\t\t\t\\begin{enumerate}\n\t\t\t  \\item Transfer from gate to vertipad: 16 minutes\n\t\t\t\t\\item Check-in and passenger briefing: 4 minutes\n\t\t\t\t\\item Passenger and luggage loading: 2 minutes\n\t\t\t\t\\item Takeoff and taxi: 2 minutes\n\t\t\t\\end{enumerate}\n\t\t\t\\item Cruise phase: at speed $V_C$\n\t\t\t\\item Alight phase: total time ($t_{alight}$)\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item Taxi and land: 2 minutes\n\t\t\t\t\\item Passenger and luggage unloading: 2 minutes\n\t\t\t\t\\item Transfer to curb with cab awaiting: 2 minutes\n\t\t\t\\end{enumerate}\n\t\t\t\\item Drive phase: to hotel (distance $d_{last}$) at taxi speed\n\t\t\t\\item Unload phase: 1 minute ($t_{unload}$)\n\t\t\\end{enumerate}\n\t\\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\\noindent In the case of the ground taxi, the trip is broken down into the following segments:\n\\begin{table}[H]\n\t\\begin{tabular}{|p{5in}|}\n\t\\hline\n\t\t\\\\\\includegraphics[width=360px]{groundCase.png}\\\\\n\t\t\\begin{enumerate}\n\t\t\t\\item Pre-drive phase: total time ($t_{curb}$)\n\t\t\t\\begin{enumerate}\n\t\t\t  \\item Transfer from gate to curb: 15 minutes\n\t\t\t\t\\item Passenger and luggage loading: 1 minute\n\t\t\t\\end{enumerate}\n\t\t\t\\item Drive phase: at taxi speed\n\t\t\t\\item Unload phase: 1 minute ($t_{unload}$)\n\t\t\\end{enumerate}\n\t\t\\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\\noindent The total air taxi and ground taxi trip times are given by,\n\\begin{align}\n\tt_{drive}&=\\frac{d_{cruise}}{V_{drive}\\left(d_{cruise}\\right)}\\\\\n\tt_{ground}&=t_{curb}+t_{drive}+t_{unload}\\\\\n\tt_{last}&=\\frac{d_{last}}{V_{drive}\\left(d_{last}\\right)}+t_{unload}\\\\\n\tt_{air}&=t_{transfer}+t_{flight}+t_{alight}+t_{last}\n\\end{align}\nThe proposed ground speed model is detailed in Appendix \\ref{APX:groundSpeed}. During peak traffic, the drive speed model takes the following form.\n\\begin{align}\n\tV_{drive}(d)=\\left(11.3\\frac{m}{s}\\right)\\frac{d}{8530m+d}\\frac{1}{f_{traffic}}\n\\end{align}\nwhere $f_{traffic}$ may be used to effectively increase traffic by reducing the average driving speed by that factor.\nSeveral different ticket pricing schemes may also be considered. For a ground taxi, a standard fare model is used. 
For the air taxi, pricing schemes based on value, distance, and time are considered.\n\\begin{align}\n\tP_{ground}&=c_{ground}+d_{cruise} c'_{ground}+t_{drive} \\dot{c}_{ground}\\\\\n\tP_{last}&=c_{ground}+d_{last} c'_{ground}+t_{last} \\dot{c}_{ground}\\\\\n\tP_{value}&=c_{value}\\left(t_{ground}-t_{air}\\right)+P_{ground}-P_{last}\\\\\n\tP_{distance}&=d_{cruise} c_{distance}\\\\\n\tP_{time}&=t_{flight} c_{time}\n\\end{align}\nThe final price that will be paid by the passenger in the case of the air taxi is the sum of the price for the flying leg ($P_{flight}$, set by one of the pricing schemes above) and the price for the last ground leg ($P_{last}$).\n\\section{Business Case}\nThe operator business case is determined by considering all of the revenue from ticket sales as compared against the costs. The revenue per trip is dependent on the passenger loading rate ($f_{load}$) and deadhead rate ($f_{dead}$). The passenger loading rate is defined as the average fraction of seats filled during non-empty flights. In the case of a single-passenger aircraft, this value should equal unity, and as more seats are added, the fraction filled should decrease. The deadhead rate is defined as the fraction of flights that have no paying passengers. Note that the deadhead trips are accounted for separately from the loading rate. The revenue is determined as follows,\n\\begin{align}\n\t\\hat{R} &= P_{flight} n_{pax} f_{load} \\\\\n\t\\dot{R} &= \\frac{3600\\hat{R}}{t_{flight}} \\left(1-f_{dead}\\right)\n\\end{align}\nThe profit is therefore,\n\\begin{align}\n\t\\dot{P} &= \\dot{R} - \\dot{C}  \\\\\n\tP &= \\dot{P} H_{year}\n\\end{align}\nThis profit value provides the profit for a trip of a fixed length being performed throughout the year. Practically, there will be a variety of trip lengths, each of which may be performed with the same aircraft. To account for a variety of trip lengths, a probability density function (PDF) of those trip lengths may be provided. Several data sources have been used to estimate the PDF, including commuter trip data, a global survey of airport-to-downtown distances, taxi trip data, and trip distributions in cities with existing helipad infrastructure. It was found that in all cases the PDFs were similar. The airport-to-downtown cumulative distribution function (CDF) and PDF are shown below for reference with a gamma distribution fit to the data ($k=3.98$, $\\theta=4.85$).\n\\\\\\includegraphics[width=250px]{airportCDF.png}\\\\\nFor each trip length, the profitability is evaluated. The weighted sum of all profitable trips represents the effective profitability for that vehicle throughout the year.\n\\begin{align}\n\t\\bar{P} &= \\frac{\\sum_i H(P_i) P_i p_i}{\\sum_i H(P_i) p_i}\n\\end{align}\nwhere $H(x)$ is the Heaviside step function, which is equal to zero for negative input values and one for positive input values, and $p_i$ is the probability density for trip length $i$.  \n\n\\section{Baseline}\nA baseline analysis was performed to determine which combinations of range and speed yield profitable operations. 
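\nThe trip-length weighting may also be sketched numerically. The fragment below is illustrative only: it evaluates $\\bar{P}$ against the quoted gamma fit ($k=3.98$, $\\theta=4.85$), with a purely hypothetical profit curve standing in for the output of the cost and revenue models.\n\\begin{verbatim}\n# Trip-length-weighted profitability; the profit curve is hypothetical.\nfrom math import gamma, exp\n\ndef gamma_pdf(x, k=3.98, theta=4.85):\n    return x**(k - 1) * exp(-x / theta) / (gamma(k) * theta**k)\n\ndef weighted_profit(profit_fn, d_max=100.0, n=1000):\n    num = den = 0.0\n    for i in range(1, n):\n        d = i * d_max / n              # trip length, km\n        P = profit_fn(d)\n        if P > 0:                      # Heaviside gate H(P)\n            num += P * gamma_pdf(d)\n            den += gamma_pdf(d)\n    return num / den if den else 0.0\n\n# Hypothetical curve: short trips lose money, long trips flatten out.\nprint(weighted_profit(lambda d: 400.0 * (1 - exp(-(d - 8.0) / 10.0))))\n\\end{verbatim}\n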
The baseline parameters are listed in Appendix \\ref{APX:baseline}.\n\n\n\\appendix\n\\section{Ground Speed Model} \\label{APX:groundSpeed}\n\\includegraphics[width=360px]{groundSpeedModel.png}\n\n\n\\section{Baseline Input Variables} \\label{APX:baseline}\nThe baseline parameters selected were as follows,\\\\\\\\\n\\textbf{Physics Variables}\n\\begin{itemize}\n\t\\item[$g$] Gravitational acceleration, $9.81 \\frac{m}{s^2}$\n\t\\item[$\\rho$] Air density, $1.0 \\frac{kg}{m^3}$\n\t\\item[]\n\\end{itemize}\n\\textbf{Vehicle Variables}\n\\begin{itemize}\n\t\\item[$d_{value}$] The diameter of the circle that circumscribes the top view of the vehicle, $14 m$ \n\t\\item[$f_{empty}$] Empty mass fraction, $0.58$\n\t\\item[$f_{rotorArea}$] Fraction of d-value area covered by rotors in hover, $0.32$\n\t\\item[]\n\\end{itemize}\n\\textbf{Hover Variables}\n\\begin{itemize}\n\t\\item[$n_{rotors}$] Number of rotors, $8$\n\t\\item[$\\eta_{hover}$] Hover electrical efficiency, $0.89$\n\t\\item[$\\kappa$] Induced power penalty, $1.1$\n\t\\item[$c_l$] Blade section average lift coefficient, $0.80$\n\t\\item[$c_d$] Blade section average drag coefficient, $0.02$\n\t\\item[]\n\\end{itemize}\n\\textbf{Cruise Variables}\n\\begin{itemize}\n\t\\item[$\\eta_{cruise}$] Cruise electrical efficiency, $0.69$\n\t\\item[$C_L$] Cruise lift coefficient, $0.55$\n\t\\item[$A_{max}$] Maximum aspect ratio, $12$\n\t\\item[$C_{D_0}$] Parasitic drag from aerodynamic surfaces, $0.025$\n\t\\item[]\n\\end{itemize}\n\\textbf{Battery Variables}\n\\begin{itemize}\n\t\\item[$\\hat{E}_{cell}$] Cell specific energy, $240 \\frac{Wh}{kg}$\n\t\\item[$f_{int}$] Pack integration factor, $0.75$\n\t\\item[$f_{eol}$] Cell end of life factor, $0.90$\n\t\\item[$k_R$] Cell degradation coefficient due to average discharge rate, $1$\n\t\\item[$k_D$] Cell degradation coefficient due to depth of discharge, $2$\n\t\\item[]\n\\end{itemize}\n\\textbf{Mission Variables}\n\\begin{itemize}\n\t\\item[$f_{res}$] Reserve energy factor, $0.20$\n\t\\item[$V_{wind}$] Headwind encountered in flight, $15 \\frac{m}{s}$\n\t\\item[$t_{hover}$] Time spent in hover, $3 min$\n\t\\item[$t_{alt}$] Alternate mission time, $10 min$\n\t\\item[$d_{alt}$] Alternate mission distance, $15 km$\n\t\\item[$t_{day}$] Operating time per day, $8 hr$\n\t\\item[$a_{sch}$] Scheduled availability rate, $0.9$\n\t\\item[$a_{unsch}$] Unscheduled availability rate, $0.9$\n\t\\item[$t_{turn}$] Pad turn-around time, $6 min$\n\t\\item[$f_{dead}$] Deadhead rate, $0.3$\n\t\\item[$f_{operating}$] Operating cost factor, $1.25$\n\t\\item[]\n\\end{itemize}\n\\textbf{Cost Variables}\n\\begin{itemize}\n\t\\item[$c_{pack}$] Specific battery cost, $250 \\frac{\\$}{kWh}$\n\t\\item[$c_{elec}$] Cost of electricity, $0.2 \\frac{\\$}{kWh}$\n\t\\item[$c_{aircraft}$] Specific hull cost, $550 \\frac{\\$}{kg}$\n\t\\item[$r_{depreciation}$] Aircraft depreciation rate, $0.1$\n\t\\item[$C_{liability}$] Annual liability insurance cost, $\\$22000$\n\t\\item[$c_{hull}$] Annual hull insurance rate, $0.045$\n\t\\item[$C_{services}$] Annual services costs, $\\$7700$\n\t\\item[$\\dot{C}_{maintenance}$] Hourly maintenance rate, $\\$100$\n\t\\item[$\\hat{C}_{landing}$] Landing fee, $\\$20$\n\t\\item[$c_{pilot}$] Annual cost per pilot (including benefits), $\\$280500$\n\t\\item[$c_{training}$] Annual training cost per pilot, $\\$9900$\n\t\\item[]\n\\end{itemize}\n\\textbf{Travel Variables}\n\\begin{itemize}\n\t\\item[$c_{ground}$] Taxi base fare, $\\$3.5$\n\t\\item[$c'_{ground}$] Taxi distance rate, $1.7 
\\frac{\\$}{km}$\n\t\\item[$\\dot{c}_{ground}$] Taxi time rate, $0.55 \\frac{\\$}{min}$\n\t\\item[$d_{last}$] Last leg distance, $3 km$\n\t\\item[$t_{curb}$] Time to get from gate to drive away (for taxi trip), $16 min$\n\t\\item[$t_{transfer}$] Time to get from gate to hover (for aircraft trip), $24 min$\n\t\\item[$t_{unload}$] Time to unload luggage from taxi and exit, $1 min$\n\t\\item[$t_{alight}$] Time to unload luggage from aircraft and exit, $6 min$\n\t\\item[$c_{value}$] The passenger willingness to pay for time saved, $3\\frac{\\$}{min}$\n\t\\item[]\n\\end{itemize}\n\\end{document}", "meta": {"hexsha": "402780152c69c929493129ec2be71a899d016f32", "size": 23005, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "AirTaxiAnalyticFramework.tex", "max_stars_repo_name": "zlovering/simpleBusinessCase", "max_stars_repo_head_hexsha": "3013e81acfd9bd44861ce4aae8544b28f0255e73", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-28T08:38:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-28T08:38:59.000Z", "max_issues_repo_path": "AirTaxiAnalyticFramework.tex", "max_issues_repo_name": "zlovering/simpleBusinessCase", "max_issues_repo_head_hexsha": "3013e81acfd9bd44861ce4aae8544b28f0255e73", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AirTaxiAnalyticFramework.tex", "max_forks_repo_name": "zlovering/simpleBusinessCase", "max_forks_repo_head_hexsha": "3013e81acfd9bd44861ce4aae8544b28f0255e73", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-11-27T21:21:01.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-30T20:59:16.000Z", "avg_line_length": 74.4498381877, "max_line_length": 1063, "alphanum_fraction": 0.7426646381, "num_tokens": 6546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424295406088, "lm_q2_score": 0.7025300698514777, "lm_q1q2_score": 0.5942999941154927}}
{"text": "%!TEX root = ../CombinatoricsNotes.tex\n\n\\section{Ramsey Theory}\n\\marginnote{``Every sufficiently large system contains a regular subsystem.''}\n\nConsider a two coloring $C: [n]^{(2)}\\to \\{R,B\\}$ where $R$ stands for red and $B$, blue.\nA \\defn{regular subsystem} is a subset $X\\subset [n]$ such that $c$ restricted to $X^{(2)}$ takes only one value\\marginnote{That is, the coloring restricted to $X$ is monochromatic.}.\nThe \\defn{Ramsey number} $R(k,\\ell)$ is the minimum number $n$ such that for every $c: [n]^{(2)} \\to \\{R,B\\}$, there exists $X\\subset[n]$ such that either\n\\begin{itemize}\n\t\\item $c$ restricted to $X^{(2)}$ is identically $R$ and $|X|=k$, or\n\t\\item \\ditto{$c$ restricted to $X^{(2)}$ is identically} $B$ and $|X| = \\ell$.\n\\end{itemize}\n\\begin{remark}\nThe existence of Ramsey numbers was first shown by Frank Ramsey (\\cite{ramsey1930problem}), as a lemma to proving the decidability of a class of formulas in first order logic. Here, we will present a short proof by \\erdos  and Szekeres.\n\\end{remark}\n\\begin{theorem}[\\cite{erdosszekeres1935combinatorial}]\n~\\begin{itemize}\n\t\\item $R(k,\\ell)$ exists for all $k$ and $\\ell$,\n\t\\item $R(k,\\ell) \\leq R(k-1,\\ell) + R(k,\\ell-1)$ for $k,\\ell\\geq 2$, and\n\t\\item $R(1,\\ell) = R(k,1) = 1$. \n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\t\nLet $n = R(k-1,\\ell) + R(k,\\ell-1)$. Choose $v\\in [n]$; then, either\n\\begin{enumerate}[(1)]\n  \\item  $v$ is joined to $\\geq R(k-1,\\ell)$ verticies by red edges. In this case choosing an appropriate $X$ among the red neighbors of $v$ gives the result, or\n  \\item $v$ is joined to $(n-1) - (R(k-1,\\ell)-1) = R(k,\\ell-1)$ verticies by blue edges. This case is analogous.\n\\end{enumerate}\nThis yields $R(k,\\ell) \\leq R(k-1,\\ell) + R(k,\\ell-1)$. The third point follows from the definition, and the first inductively from this inequality, using the third point as the base case.\n\\end{proof}\nIn fact, we may obtain a bound on the $R(k,\\ell)$ in this way.\n\\begin{corollary}\n\\[\nR(k,\\ell)\\leq {k+\\ell -2 \\choose k-1}\n\\]\nfor all $k,\\ell \\geq 1$.\n\\end{corollary}\n\\begin{proof}  \nIf $k=1$ or $\\ell=1$ then $R(k,\\ell)=1$. Otherwise, by induction\n\\begin{align*}  \nR(k,\\ell) &\\leq R(k-1,\\ell) + R(k,\\ell)-1 \\\\\n&\\leq {k+\\ell -3 \\choose k-2} + {k+\\ell -3 \\choose k-1} = {k+\\ell-2 \\choose k-1}.\\qedhere\n\\end{align*}\n\\end{proof}\n\\begin{remark}\nIn particular, $R(k,k) \\leq {2k-2\\choose k-1} \\sim \\frac{4^k}{\\sqrt{k}}$. We in fact have $R(3,3) = 6$, $R(4,4)=18$, and $43\\leq R(5,5) \\leq 49$. \n\\end{remark}\n\n\\begin{theorem}[\\cite{erdos1947someremarks}]\n$R(k,k)\\geq 2^{k/2}$ for $k\\geq 2$.\n\\end{theorem}\n\\begin{proof}  \nLet $n = \\floor{2^{k/2}}$ and let $c:[n]^{(2)}\\to \\{R,B\\}$ be chosen unifomrly randomly. That is, the color of every edge in $[n]^{(2)}$ is chosen to be $R$ with probability $1/2$ or $B$ with probability $1/2$, independently of the other edges. Let $Z$ be equal to the number of sets $X\\subset[n]$, $|X|=k$, such that $c$ restricted to $X^{(2)}$ is monochromatic. It is enough to show $\\E[Z] < 1$. 
\\marginnote{Since then there must exist some coloring with $Z<1$, i.e., $Z=0$.}\nFor fixed $X$ as above,\n\\[\n\\Pr[c \\text{ restricted to }X^{(2)}\\text{ is monochromatic}] = \\frac{1}{2^{{k\\choose 2}-1}}.\n\\]\nThen\n\\begin{align*}  \n\\E[Z] &= {n\\choose k} 2^{1 - {k\\choose 2}}\\leq \\frac{n^k}{k!}2^{1 - \\frac{k(k-1)}{2}}\\\\\n&\\leq \\frac{2^{k^2/2}\\cdot 2^{1 + (k/2) - k^2/2}}{k!} = \\frac{2^{1+k/2}}{k!}<1\n\\end{align*}\nfor $k\\geq 3$. One may show $R(2,2) = 2$, yielding the case $k=2$.\n\\end{proof}\nThe state-of-the-art bounds are\n\\marginnote{Lower: \\cite{SPENCER1975}, upper: \\cite{conlon2009new}.}\n% \\begin{theorem*}[]\n\\[\n(1 + o(1)) \\frac{\\sqrt{2}k}{e}2^{k/2}\\leq R(k,k) \\leq k^{- \\frac{c\\log k}{\\log \\log k}}4^k.\n\\]\n% \\end{theorem*}\n\n\n\\newthought{Next, if $n$ is sufficiently large} with respect to $k$ and $c:[n]\\to \\{R,B\\}$, then $[n]$ contains a monochromatic \\defn{arithmetic progression} of length $k$, that is, a set\\marginnote{Note that here we are discussing colorings of $[n]$, not $[n]^{(2)}$; that is, vertex colorings, not edge colorings.}\n\\[\n\\{ a, a+d ,\\dotsc, a+(k-1)d\\}\n\\]\nfor some $d>0$. We may write this as $a + [0,k-1]d$, where $[0,k-1] = \\{ j\\in \\Z: 0\\leq j\\leq k-1\\}$ and the addition and multiplication is elementwise. The \\defn{Van der Waerden number} $W(k,r)$ is the minimum $n$ such that if $[n]$ is colored in $r$ colors, one can always find a monochromatic arithmetic progression of length $k$.\n\n% \\missing{Missed class: Monday, March 7}\n\\begin{theorem}[\\cite{VdWaerden1927}]\n$W(k,r)$ exists for all $r,k$. \\label{thm:VdW}\n\\end{theorem}\nThe proof will follow from \\cref{lem:Wkmr}.\n\\begin{example}\n\\begin{itemize}\n\t\\item $W(2,r) = r+1$ \\marginnote{by pigeonhole}\n\t\\item $W(k,1) = k$,\n\t\\item What about $W(3,2)$? Well, \\rdot\\rdot\\bdot\\bdot\\rdot\\rdot\\bdot\\bdot \\,shows $W(3,2) > 8$.  On the other hand, assuming the 5th slot is red, wlog, we reduce to two cases:\n\t\\begin{align*}\t\n\t\\vartextvisiblespace \\rdotm \\vartextvisiblespace \\bdotm \\rdotm \\bdotm \\vartextvisiblespace \\rdotm \\vartextvisiblespace\\\\\n\t\\rdotm\\rdotm\\bdotm\\bdotm\\rdotm\\rdotm\\bdotm\\bdotm\\vartextvisiblespace\n\t\\end{align*}\n\n\tLet's outline the general argument; this will give a bound $W(3,2) \\leq 5 \\times 65 = 325$. 
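\nBefore the outline, both assertions about $W(3,2)$ can be checked mechanically; the following Python sketch (an addition to the notes) verifies that the coloring above avoids monochromatic $3$-term arithmetic progressions and, by exhausting all $2^9$ colorings of $[9]$, that $W(3,2) = 9$.\n\\begin{verbatim}\n# W(3,2): check the witness RRBBRRBB, then exhaust length-9 colorings.\nfrom itertools import product\n\ndef has_mono_3ap(coloring):\n    n = len(coloring)\n    return any(coloring[a] == coloring[a+d] == coloring[a+2*d]\n               for a in range(n) for d in range(1, n) if a + 2*d < n)\n\nprint(has_mono_3ap('RRBBRRBB'))   # False, so W(3,2) > 8\nprint(all(has_mono_3ap(c) for c in product('RB', repeat=9)))  # True\n\\end{verbatim}\nReturning to the upper-bound argument:\n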
Color $[325]$ and divide it into $65$ intervals of length~5.\n\\newcommand{\\vdwdots}{\\rdot\\bdot\\bdot\\rdot\\rdot}\n\\newcommand{\\firstpic}{\\begin{tikzpicture}[node distance = .5cm]\n\t\\node[draw] (n1) at (0,0) {1\\,2\\,3\\,4\\,5};\n\t\\node[left = 1cm of n1]{a)};\n\t\\node[draw,right= of n1] (n2)  {6\\,7\\,8\\,9\\,10};\n\n\t\\node[right= .25cm of n2] (n7)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n7] (n10)  {321\\,322\\,323\\,324\\,325};\n\t\\end{tikzpicture}}\n\t\\newcommand{\\secondpic}{\\begin{tikzpicture}[node distance = .5cm]\n\t\\node[draw] (n1) at (0,0) {\\phantom{\\vdwdots}};\n\t\\node[left = 1cm of n1]{b)};\n\n\t\\node[draw,label=above:$I$,right=of n1] (n2) {\\vdwdots};\n\t\\node[right= .25cm of n2] (n3)  {$\\dotsm$};\n\t\\node[draw,right=.25cm of n3,label=above:$I+5d$] (n4)  {\\vdwdots};\n\t\\node[right= .25cm of n4] (n5)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n5] (n6)  {\\phantom{\\vdwdots}};\n\t\\node[draw,right= of n6] (n6p5)  {\\phantom{\\vdwdots}};\n\t\\node[right= .25cm of n6p5] (n7)  {$\\dotsm$};\n\t% \\node[draw,right=.25cm of n7,label=above:$I+10d$] (n8)  {\\rdot\\bdot\\bdot\\rdot\\rdot};\n\t% \\node[right= .25cm of n8] (n9)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n7] (n10)  {\\phantom{\\vdwdots}};\n\\draw [\n    thick,\n    decoration={\n        brace,\n        mirror,\n        raise=0.5cm\n    },\n    decorate] (n1.west) --  (n6.east) node [pos=0.5,anchor=north,yshift=-0.55cm] {$33$ intervals};\n\t\\end{tikzpicture}}\n\n\\newcommand{\\thirdpic}{\\begin{tikzpicture}[node distance = .5cm]\n\t\\node[draw] (n1) at (0,0) {\\phantom{\\vdwdots}};\n\t\\node[left = 1cm of n1]{c)};\n\t\\node[draw,label=above:$I$,right=of n1] (n2) {\\rdot\\bdot\\bdot\\rdot\\rdot};\n\t\\node[right= .25cm of n2] (n3)  {$\\dotsm$};\n\t\\node[draw,right=.25cm of n3,label=above:$I+5d$] (n4)  {\\rdot\\bdot\\bdot\\rdot\\rdot};\n\t\\node[right= .25cm of n4] (n5)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n5] (n6)  {\\phantom{\\vdwdots}};\n\t\\node[draw,right=  of n6] (n6p5)  {\\phantom{\\vdwdots}};\n\t\\node[right= .25cm of n6p5] (n7)  {$\\dotsm$};\n\t\\node[draw,right=.25cm of n7,label=above:$I+10d$] (n8)  {\\phantom{\\rdot\\bdot} \\textvisiblespace \\phantom{\\rdot} \\textvisiblespace};\n\t\\node[right= .25cm of n8] (n9)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n9] (n10)  {\\phantom{\\vdwdots}};\n\\draw [\n    thick,\n    decoration={\n        brace,\n        mirror,\n        raise=0.5cm\n    },\n    decorate] (n1.west) --  (n6.east) node [pos=0.5,anchor=north,yshift=-0.55cm] {$33$ intervals};\n\t\\end{tikzpicture}}\n\\begin{figure*}[ht]\n\\begin{center}\n\n\n\\begin{tabular}{l}\n\\firstpic\\\\[1.25em]\n \\secondpic \\\\[1.25em]\n\\thirdpic\n\\end{tabular}\n\n\\end{center}\n\n\\caption{a) We subdivide $[325]$ into 65 intervals of length $5$. b) By the pigeonhole principle, two of the first $2^5+1=33$ intervals have the same coloring. If $I$ is the first such interval, then for some $d>0$ the second is $I+5d$. c) To complete the proof, we consider $I+10d$, as described in the text.} \\label{fig:VdW_ex_33int}\n\\end{figure*}\n\tConsider the first $2^5+1=33$ of them. Two of them will be colored exactly the same by pigeonhole; say interval $I$ and interval $I+5d$. If $I = [a,a+4]$, then $\\{a,a+d',a+2d'\\}$ is an arithmetic progression for $d'\\in \\{1,2\\}$. Now, if either of these progressions is monochromatic, then we are done. 
Otherwise, say we have $\\{{\\color{blue} a },{\\color{blue} a+d' },{ \\color{red} a+2d' } \\}$\\sidenote{If $a$ and $a+d'$ are different colors, then choose the other $d'$.}, for $d'\\in \\{1,2\\}$. Now, we consider the interval $I+10d$. If $a+2d' + 10d$ is red, then $\\{a+2d',a+2d'+5d, a+2d' + 10d\\}$ is monochromatic red. Otherwise $\\{a,\\,a+d'+5d,\\,a+2d'+10d\\}$ is a monochromatic blue a.p.\\ (with common difference $d'+5d$). \n\n\\end{itemize}\n\\end{example}\n\nA \\defn{polychromatic $m$-tuple} of arithmetic progressions of length $k$ is a set $A = \\{a + [0,k]d_1\\} \\cup \\{ a + [0,k]d_2\\} \\cup \\dotsm \\cup \\{a + [0,k]d_m\\}$ such that $a+ [1,k]d_i$ is monochromatic for all $i$, and all of these $m$ progressions are of different colors (see \\cref{fig:polymk} for an example).\n\\lect{3}{9}\n\n\\begin{marginfigure}\n\\[\n \\stackrel{\\text{\\textvisiblespace}}{0}\\,\\stackrel{\\rdotm}{1}\\,\\stackrel{\\rdotm}{2}\\,\\stackrel{\\gdotm}{3}\\,\\stackrel{\\bdotm}{4}\\,\\stackrel{\\text{\\textvisiblespace}}{5} \\,\\stackrel{\\gdotm}{6}\\,\\stackrel{\\text{\\textvisiblespace}}{7}\\,\\stackrel{\\bdotm}{8}\n\\]\n\\caption{With the coloring shown, $\\{ 0 + [0,2]\\cdot 1\\} \\cup \\{0 + [0,2]\\cdot 3\\} \\cup \\{0 + [0,2]\\cdot 4\\}$ is a polychromatic $3$-tuple of length $2$.}\\label{fig:polymk}\n\\end{marginfigure}\n\n\n\\begin{lemma} \\label{lem:Wkmr}\nIf $W(k-1,r)$ exists for every $r$ then for every $m$ there exists $W(k,m,r) = n$ such that in any $r$-coloring of $[n]$ there exists a monochromatic a.p. of length $k$ or a polychromatic $m$-tuple of a.p. of length $k-1$.\n\\end{lemma}\n\\begin{remark}\nWith this result,  \\cref{thm:VdW} follows by induction on $k$, using that $W(k,r) \\leq W(k,r+1,r)$, since there cannot exist a polychromatic $(r+1)$-tuple when there are only $r$ colors.\n\\end{remark}\n\\begin{proof}[Proof by induction on $m$.] Base case: $m=1$. Then in a set of size $W(k-1,r)$ we may find a monochrome $(k-1)$-a.p., say $a+d[1,k-1]$; then for a polychromatic $1$-tuple of a.p. of length $k-1$, we simply need to add the point $a$. We may ensure this by taking a second copy of $W(k-1,r)$ to the left, yielding the bound $ W(k,1,r) \\leq 2W(k-1,r)$ (see \\cref{fig:vdwclaim_basecase} for an illustration).\n\\begin{marginfigure}\n\\begin{center}\n\\begin{tikzpicture}\n \\node[draw,label={above:$W(k-1,r)$},minimum width = 2.2cm, minimum height = .7cm] (n1) at (0,0) {$a+d[1,k-1]$};\n\\node[draw,left= of n1,minimum width = 2.2cm,minimum height = .7cm,text opacity = .6, label={above:$W(k-1,r)$}] {\\quad\\qquad$a$};\n \\end{tikzpicture} \n\\end{center}\n\\caption{Depiction showing $ W(k,1,r) \\leq 2W(k-1,r)$ by finding a monochrome $(k-1)$-a.p. $a+d[1,k-1]$ in a set of size $W(k-1,r)$ and adding in the point $a$, which may be to the left of the original set of size $W(k-1,r)$.} \\label{fig:vdwclaim_basecase}\n\\end{marginfigure}\n\nTo show the induction step, we will first prove the following result.\n\\begin{claim}\n If $A+d,A+2d,\\dotsc, A+(k-1)d$ are \\emph{identically colored}\\sidenote{For each $x\\in A+d$, we have that $x,x+d,\\dotsc,x+(k-2)d$ all have the same color.} polychromatic $m$-tuples of a.p. of length $(k-1)$ then $A\\cup (A+d)\\cup\\dotsm \\cup (A+ (k-1)d)$ contains a polychromatic $(m+1)$-tuple of a.p. of length $k-1$, or a monochromatic a.p. 
of length $k$.\n\\end{claim}\n\\begin{subproof}\n~\\medskip\n\n\\begin{figure*}\n\\begin{center}\n\\begin{tikzpicture}[color=black]\n\\def\\xdiff{1}\n\\def\\xlength{2}\n\n\\node (B1) at (0,0){};\n\n\\node (B2) at (\\xdiff+\\xlength,0){};\n\n\\node (B3) at (3*\\xdiff+2*\\xlength,0){};\n\n\n\\foreach \\n in {1,2,3}\n{\n\\node (B\\n s0) at (B\\n){\\textvisiblespace};\n\\node (B\\n s1) at (B\\n s0)[right]{\\rdot};\n\\node (B\\n s2) at (B\\n s1)[right]{\\rdot};\n\\node (B\\n s3) at (B\\n s2)[right]{\\gdot};\n\\node (B\\n s4) at (B\\n s3)[right]{\\ydot};\n\\node (B\\n s5) at (B\\n s4)[right]{\\textvisiblespace};\n\\node (B\\n s6) at (B\\n s5)[right]{\\gdot};\n\\node (B\\n s7) at (B\\n s6)[right]{\\textvisiblespace};\n\\node (B\\n s8) at (B\\n s7)[right]{\\ydot};\n}\n\n\\node[draw,fit = (B1s0) (B1s8),label=below:$A+d$]{};\n\\node[draw,fit = (B2s0) (B2s8),label=below:$A+2d$]{};\n\\node[draw,fit = (B3s0) (B3s8),label=below:$A+(k-1)d$]{};\n\n\n\\node (dots) at ($(B2s0)!0.5!(B3s8)$) {$\\dotsm$};\n\n\\end{tikzpicture}\n\n\\begin{tikzpicture}[color=black]\n\\def\\xdiff{1}\n\\def\\xlength{2}\n\n\\node (B0) at (-\\xdiff-\\xlength,0){};\n\n\\node (B1) at (0,0){};\n\n\\node (B2) at (\\xdiff+\\xlength,0){};\n\n\\node (B3) at (3*\\xdiff+2*\\xlength,0){};\n\n\n\\foreach \\n in {0,1,2}\n{\n\\ifthenelse{\\n=0}\n{\n\\def\\opt{\\phantom}\n\\node[opacity=.4] (B\\n s0) at ($(B\\n) + (0,0)$){\\bdot};\n\\node[opacity=.4] (B\\n s0r) at ($(B\\n) + (0,.12)$){\\rdot};\n\\node[opacity=.4] (B\\n s0g) at ($(B\\n) + (0,-.12)$){\\gdot};\n}\n{\n\\def\\opt{}\n\\node (B\\n s0) at (B\\n){\\bdot};\n}\n\n\n\n\\node (B\\n s1) at (B\\n s0)[right]{\\opt\\rdot};\n\\node (B\\n s2) at (B\\n s1)[right]{\\opt\\rdot};\n\\node (B\\n s3) at (B\\n s2)[right]{\\opt\\gdot};\n\\node (B\\n s4) at (B\\n s3)[right]{\\opt\\ydot};\n\\node (B\\n s5) at (B\\n s4)[right]{\\opt\\textvisiblespace};\n\\node (B\\n s6) at (B\\n s5)[right]{\\opt\\gdot};\n\\node (B\\n s7) at (B\\n s6)[right]{\\opt\\textvisiblespace};\n\\node (B\\n s8) at (B\\n s7)[right]{\\opt\\ydot};\n}\n\n\\node[draw,fit = (B1s0) (B1s8),label=below:$A+d$]{};\n\\node[draw,fit = (B2s0) (B2s8),label=below:$A+2d$]{};\n\\node[draw,fit = (B0s0) (B0s8),label=below:$A$]{};\n% \\tikzset{ar/.style={bend left,->,yshift=-3pt,shorten >=2pt,shorten <=2pt}}\n\\tikzset{ar/.style={bend left,->,yshift=-2pt,opacity=.4}}\n\n\n\\newcommand\\ar[4]{\n\t\\draw[#1]\n\t($(#2.east) + (-.16,.11) $) edge[ar] ($(#3.north)$);\t\n\t\\draw[#1]\n\t($(#3.north)$) edge[ar] ($(#4.north)$);\n}\n\\ar{red}{B0s0r}{B1s1}{B2s2}\n\n\\ar{green}{B0s0g}{B1s3}{B2s6}\n\\ar{blue}{B0s0}{B1s0}{B2s0}\n\n\n% \\draw[blue]\n% \t(B0s0) edge[ar] (B1s0) \n% \tedge [ar] (B2s0);\n% \\draw[red]\n\t% (B0s0) edge[ar] \n\t% (B1s1) edge [ar] ($(B2s2.north)$);\t\n% \\draw[green]\n% \t(B0s0) edge[ar] ($(B1s3.north)$)\n% \tedge [ar] (B2s6);\t\n\n\n% \\node (dots) at ($(B1s8)-(B1s0)$) {$\\dotsm$};\n\n\\end{tikzpicture}\n\\end{center}\n\n\\caption{Upper: an example of identically colored $m$-tuples of arithmetic progressions (with $k=3$). Below: Possible constructions of a monotone $k$-a.p.} \\label{fig:VdWlemmafig}\n\\end{figure*}\n\n% \\missingfigure{Box labelled $A+d$ with blank, red, red, green, yellow, blank, green, blank, yellow. Box $A+2d$ with same dots. Horizontal dots. Box $A+(k-1)d$ with same dots. Now add a blue dot to the beginning of each, and draw a box to the far left labelled $A$. The far left element is $a$. Then $a$ to 0-slot in $A+d$ to 0-slot of $A+2d$ is a.p. Another is $a$ to $1$-slot in $A+d$ to $2$-slot in $A+2d$. 
Another is $a$ to first green of $A+d$ to second green of $A+2d$. And $a$ to first yellow in $A+d$ to second yellow in $A+2d$. (In this example, $k-1=2$)}\n% Let $c$ be denote the coloring. Each $A+\\ell d$ for $\\ell\\in[k-1]$ has the same set of colors:\n% \\[\n%  c(A+d):= \\{c(x): x\\in A+d\\} = c(A+2d) = \\dotsm = c(A+(k-1)d).\n%  \\] \n%  We write $A+d=\\bigcup_{i=1}^m \\{a+d+d_i[0,k-1]\\}$ where each $a+d+d_i[0,k-1]$ is monochromatic and differently colored from $a+d+d_j[0,k-1]$ for $i\\neq j$.  In this language, the identical coloring assumption is that $\\{ a+\\ell d + s d_i: \\ell \\in [k-1] \\}$ is monochromatic for each $s \\in [0,k-1]$.\n\n%  Now, we look at each element of $A$ in turn. Consider the first element, $a$. If $c(a) \\in c(A+d)$, then let $c(a) = c(a+d+sd_i)$ for $s\\in [k-1]$. Then, by our identical coloring assumption,\n%  \\[\n%  \\{ a, a+d+sd_i,a+2d+sd_i, \\dotsc,a+(k-1)d  +sd_i \\}\n%  \\]\n%  is a monochromatic a.p. of length $k$.  Let us consider an arbitrary element of $A$, $a+ xd_j$ for $x \\in [0,k-1]$ and $i\\in[m]$. If $c(a+xd_j) = c(a + d + sd_i)$ for $i\\in[m]$ and $s\\in[0,k-1]$, then\n\n Write $A+d=\\bigcup_{i=1}^m \\{a+d+d_i[0,k-1]\\}$ where each $a+d+d_i[1,k-1]$ is monochromatic and differently colored from $a+d+d_j[1,k-1]$ for $i\\neq j$. In this language, the identical coloring assumption is that $\\{ a+\\ell d + s d_i: \\ell \\in [k-1] \\}$ is monochromatic for each $s \\in [0,k-1]$.\n %Then the a.p. $a+sd+d_i[1,k-1]$ is monochromatic for every $i=1,\\dotsc,m$ and $s=1,\\dotsc,k-1$ (the color does not depend on $s$).\nIn particular, \n\\[\n P':=\\{a+d,a+2d,\\dotsc,a+(k-1)d\\}\n \\] is a monochromatic $(k-1)$-a.p.  If it is the same color as $a+d+d_i[1,k-1]$ for some $i$, then \n \\[\n \\{ a+d, a+d+d_i,a+d+2d_i,\\dotsc,a+d+(k-1)d_i\\}\n \\]\nis a monochromatic $k$-a.p.\nOtherwise,\n%\n% Indeed, if $A\\cup(A+d)\\cup \\dotsm \\cup (A+(k-1)d)$ contains no monochromatic $k$-term a.p. then \n% \\[\n%  P':=\\{a+d,a+2d,\\dotsc,a+(k-1)d\\}\n%  \\] all are of the same color and different from the color of other a.p. in $A+sd$.\n% \nconsider\n\\[\n P_i':=\\{a+d+d_i,a+2d+2d_i,\\dotsc,a+(k-1)d+(k-1)d_i\\}\n \\] which is an a.p. of the same color as \n \\[\n \\{a+d+d_i,a+d+2d_i,\\dotsc,a+d+(k-1)d_i\\},\n \\]\nand thus a different color from $P'$. Then $P'\\cup P_1'\\cup\\dotsm\\cup P_m'\\cup\\{a\\}$ is a polychromatic $(m+1)$-tuple. \\qedhere \\marginnote{This is the meat of the proof.}\n\\end{subproof}\n\n\nNow let $M = W(k,m,r)$ and $N = W(k-1,r^M)$. We will show that $W(k,m+1,r)\\leq 2MN$. \n\n% \\missingfigure{Partition into two intervals of length $MN$, and the second into $N$ intervals of length $M$ (by underbraces and overbraces). I can think of the second half as a coloring of $N$ into $r^M$ colors (we encode the coloring of an interval of length $M$ as a single color of the $r^M$ colors). 
}\n\\begin{figure*}\n\\begin{center}\n\\begin{tikzpicture}[node distance = .5cm, every node/.style={minimum height=.7cm}]\n\t\\node[draw,left= of n1] (nL)  {\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad};\n\n\t\\draw [\n    thick,\n    decoration={\n        brace,\n        mirror,\n        raise=0.5cm\n    },\n    decorate] (nL.west) --  (nL.east) node [pos=0.5,anchor=north,yshift=-0.55cm] {$MN$};\n\t\n\n\t\\node[draw,label=below:] (n1) at (0,0) {\\qquad\\qquad\\quad};\n\t\\node[draw,right= of n1] (n2)  {\\qquad\\qquad\\quad};\n\n\t\\node[right= .25cm of n2] (n3)  {$\\dotsm$};\n\t\\node[draw,right= .25cm of n3] (n4)  {\\qquad\\qquad\\quad};\n\n\n\t\t\\draw [\n    thick,\n    decoration={\n        brace,\n        mirror,\n        raise=0.5cm\n    },\n    decorate] (n1.west) --  (n4.east) node [pos=0.5,anchor=north,yshift=-0.55cm] {$N$ intervals};\n\n\n    \\draw [\n    thick,\n    decoration={\n        brace,\n        raise=0.5cm\n    },\n    decorate] (n1.west) --  (n1.east) node [pos=0.5,anchor=north,yshift=1.2cm] {$M$};\n\n \\draw [\n    thick,\n    decoration={\n        brace,\n        raise=0.5cm\n    },\n    decorate] (n2.west) --  (n2.east) node [pos=0.5,anchor=north,yshift=1.2cm] {$M$};\n     \\draw [\n    thick,\n    decoration={\n        brace,\n        raise=0.5cm\n    },\n    decorate] (n4.west) --  (n4.east) node [pos=0.5,anchor=north,yshift=1.2cm] {$M$};\n\t\\end{tikzpicture}\n\\end{center}\n\\caption{We think of coloring each interval of size $M$ on the right side into $r$ colors as assigning the entire interval one of $r^M$ colors.}\\label{fig:vdw_ind_step}\n\\end{figure*}\nWe divide $[2MN]$ into an interval of size $MN$ followed by $N$ intervals of size $M$, as shown in \\cref{fig:vdw_ind_step}.\nBy the choice of $N$, there are $(k-1)$ intervals on the right side of the form $I,I+d,I+2d,\\dotsc,I+(k-2)d$ which are identically colored.\n\nBy the choice of $M$, we may assume $I$ contains a polychromatic $m$-tuple of a.p. of length $(k-1)$ (otherwise it contains a monochromatic a.p. of length $k$ and we are done), which we will call $A+d$. The first interval of size $MN$ serves to include $A$. The induction step is finished by the claim.\n\\end{proof}\nThe proof yields very poor bounds for $W(k;r)$. The current best bounds are\n\\marginnote{LHS: folklore with a randomized construction. RHS: \\cite{Gowers2001}.}\n\\[\n(1 + o(1)) \\frac{r^k}{erk} < W(k;r) \\leq 2^{2^{r^{2^{2^{k+9}}}}}.\n\\]\nIt's conjectured that the upper bound can be improved substantially:\n\\begin{conjecture*}\n $W(k;2) \\leq 2^{k^2}$.\n\\end{conjecture*}\nLet us proceed to the Hales-Jewett theorem.\n It may be informally stated as the following: in a $\\overbrace{t\\times t\\dotsm \\times t}^{d \\text{ dimensional}}$ game of tic-tac-toe with $r$ players, a draw is impossible as long as $d$ is large enough compared to $r$ and $t$.\nLet $A$ be a finite alphabet of size $t$, typically $[0,t-1]$. \nThen $A^d$ is the set of ordered $d$-tuples of elements of $A$, or words of length $d$ in the alphabet $A$. \n\nIn tic-tac-toe, $A = \\{0,1,2\\}$, and $d=2$ (see \\cref{fig:tic-tac-toe}). We think of each player having a color; a coloring of $\\{0,1,2\\}^2$ by two colors then corresponds to the moves made by both the players. 
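\nRoots and combinatorial lines are defined formally below; as a preview, the following Python sketch (an addition to the notes) enumerates combinatorial lines directly from roots and confirms by brute force that every $2$-coloring of $\\{0,1\\}^2$ already contains a monochromatic line, i.e., $\\HJ(2,2)\\leq 2$.\n\\begin{verbatim}\n# Combinatorial lines from roots: words over A + '*' with at least one '*'.\nfrom itertools import product\n\ndef lines(A, d):\n    for root in product(A + '*', repeat=d):\n        if '*' in root:\n            yield [''.join(a if ch == '*' else ch for ch in root) for a in A]\n\nprint(len(list(lines('012', 2))))   # 4^2 - 3^2 = 7 lines in {0,1,2}^2\n\npoints = [''.join(w) for w in product('01', repeat=2)]\nok = all(\n    any(len({chi[p] for p in L}) == 1 for L in lines('01', 2))\n    for colors in product('RB', repeat=len(points))\n    for chi in [dict(zip(points, colors))]\n)\nprint(ok)   # True: every 2-coloring of {0,1}^2 has a monochromatic line\n\\end{verbatim}\n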
A draw is impossible if  in any coloring of $\\{0,1,2\\}^2$, there is a monochromatic ``3-in-a-row'', i.e.,  row, column, or diagonal.\n\\begin{marginfigure}\n\\begin{center}\n\\begin{tikzpicture}\n\n\\def\\xshift{3.5}\n\n\\tikzset{side/.style={gray, thin}}\n\n\\draw[step=1cm,gray,very thin] (0,0) grid (3,3);\n\n\\foreach \\x in {0,1,2}\n{\n\\node[side] at ($ (-.3,.5) +(0,\\x) $)  {\\x};\n\\node[side] at ($ (.5,-.3) +(\\x,0) $)  {\\x};\n\t\\foreach \\y in {0,1,2}\n\t{\n\t\\node at ($ (\\x,\\y) + (0.5,0.5) $) {\\x\\y};\n\t}\n}\n\\end{tikzpicture}\n\\end{center}\n\\caption{A depiction of  words in tic-tac-toe: elements of $\\{0,1,2\\}^2$.} \\label{fig:tic-tac-toe}\n\\end{marginfigure}\n% \\missingfigure{3x3 grid, labelled 0,1,2 going up and going right from bottom left. Write ordered pairs in the boxes. Possible roots: $\\tau_1=0\\star$, $\\tau_2=\\star\\star$. The line associated with $\\tau_1$ is the first column. The linea ssociated with $\\tau_2$ is the diagonal. The middle row is a combinatorial line associated with $\\star 1$. The diagonal from top left to bottom right is not a combinatorial line.}\n\\begin{figure*}[ht]\n\n\\begin{center}\n\\begin{tikzpicture}\n\\tikzset{side/.style={gray, thin}}\n\\tikzset{combline/.style={draw, DarkBlue, rounded corners=5pt, thick}}\n\n\\def\\xshift{4};\n\n\\begin{scope}\n\\draw[step=1cm,gray,very thin] (0,0) grid (3,3);\n\n\\foreach \\x in {0,1,2}\n{\n\\node[side] at ($ (-.3,.5) +(0,\\x) $)  {\\x};\n\\node[side] at ($ (.5,-.3) +(\\x,0) $)  {\\x};\n\t\\foreach \\y in {0,1,2}\n\t{\n\t\\node at ($ (\\x,\\y) + (0.5,0.5) $) {\\x\\y};\n\t}\n}\n\\def\\off{.15};\n\n\\draw[combline,label=above:$0\\stars$] (\\off,\\off) rectangle (1-\\off,3-\\off);\n\n\\node[above,DarkBlue] at (.5,3-\\off){$0\\star$};\n\\end{scope}\n\n\n\\begin{scope}[shift={(\\xshift,0)}]\n\n\\draw[step=1cm,gray,very thin] (0,0) grid (3,3);\n\n\\foreach \\x in {0,1,2}\n{\n% \\node[side] at ($ (-.3,.5) +(0,\\x) $)  {\\x};\n\\node[side] at ($ (.5,-.3) +(\\x,0) $)  {\\x};\n\t\\foreach \\y in {0,1,2}\n\t{\n\t\\node (\\x\\y) at ($ (\\x,\\y) + (0.5,0.5) $) {\\x\\y};\n\t}\n}\n\\def\\off{.15};\n%rotate around={30:(-1,0.5)}\n\n\\draw[combline,label=above:$0\\stars$,rotate around={-45:(.5,0.5)}] (\\off,\\off) rectangle ($ (1-\\off,1.41*3-4*\\off) $);\n\n\\node[above,DarkBlue] at (3-.5*\\off,3-2*\\off){$\\star\\star$};\n\\end{scope}\n\n\n\\begin{scope}[shift={(2*\\xshift,0)}]\n\\draw[step=1cm,gray,very thin] (0,0) grid (3,3);\n\n\\foreach \\x in {0,1,2}\n{\n% \\node[side] at ($ (-.3,.5) +(0,\\x) $)  {\\x};\n\\node[side] at ($ (.5,-.3) +(\\x,0) $)  {\\x};\n\t\\foreach \\y in {0,1,2}\n\t{\n\t\\node (\\x\\y) at ($ (\\x,\\y) + (0.5,0.5) $) {\\x\\y};\n\t}\n}\n\\def\\off{.15};\n%rotate around={30:(-1,0.5)}\n\n\n\n\\draw[combline,label=above:$0\\stars$] (\\off,1+\\off) rectangle ($ (3-\\off,2-\\off) $);\n\n\\node[right,DarkBlue] at (3-1.5*\\off,1.5){$\\star 1$};\n\\end{scope}\n\n\n\\begin{scope}[shift={(3*\\xshift,0)}]\n\n\\draw[step=1cm,gray,very thin] (0,0) grid (3,3);\n\n\\foreach \\x in {0,1,2}\n{\n% \\node[side] at ($ (-.3,.5) +(0,\\x) $)  {\\x};\n\\node[side] at ($ (.5,-.3) +(\\x,0) $)  {\\x};\n\t\\foreach \\y in {0,1,2}\n\t{\n\t\\node (\\x\\y) at ($ (\\x,\\y) + (0.5,0.5) $) {\\x\\y};\n\t}\n}\n\\def\\off{.15};\n%rotate around={30:(-1,0.5)}\n\n\\begin{scope}[xshift=2cm]\n\\draw[combline,Red,rotate around={45:(.5,0.5)}] (\\off,\\off) rectangle ($ (1-\\off,1.41*3-4*\\off) $);\n\\end{scope}\n\n% \\node[above,DarkBlue] at 
(3-.5*\\off,3-2*\\off){$\\star\\star$};\n\\end{scope}\n\n\n\n\\end{tikzpicture}\n\n\n\\end{center}\n\\caption{Three examples of combinatorial lines in $\\{0,1,2\\}^2$, followed by a nonexample.  From left to right, combinatorial lines corresponding to roots $\\tau_1 = 0\\star$, $\\tau_2 = \\star\\star$, and $\\tau_3 = \\star 1$, respectively, followed by the set $\\{02,11,20\\}$ in red which is not a combinatorial line. So in tic-tac-toe, not every ``3-in-a-row'' is a combinatorial line, but every combinatorial line is a ``3-in-a-row'', which is enough to show that draws are impossible if we can always find a monochromatic combinatorial line.  } \\label{fig:comb_lines_in_tic_tac_toe}\n\\end{figure*}\n\n\n\nMoving back to the general development, a \\defn{root} $\\tau$ is a word of length $d$ in the alphabet $A\\cup \\{\\star\\}$, where $\\star$ is a symbol not in $A$, which contains at least one $\\star$.\nFor a root $\\tau$ and $a\\in A$, the word $\\tau(a)$ is obtained by substituting $a$ instead of $\\star$ everywhere in $\\tau$. A \\defn{combinatorial line} in $A^d$ is a set $L_\\tau:=\\{ \\tau(a): a\\in A\\}$ where $\\tau$ is a root of length $d$. See \\cref{fig:comb_lines_in_tic_tac_toe} for examples and nonexamples of combinatorial lines in tic-tac-toe. With these definitions, we may formulate the Hales-Jewett theorem as follows. \n\\begin{theorem}[\\cite{halesJewett1963regularity}] \\label{thm:HJ}\nFor every $r$ and $t$, there exists $d= \\HJ(t,r)$ such that if $A$ is an alphabet with $|A|=t$ and $A^d$ is colored in $r$ colors, then there exists a monochromatic combinatorial line. \n\\end{theorem}\n\\begin{remark}\nThe same is true for every $d' \\geq d$.\n\\end{remark}\nBefore proving the Hales-Jewett theorem, we'll discuss an application. \nLet $V\\subset \\Z^d$ be a finite collection of vectors. $U$ is called a homothetic copy of $V=\\{v_1,v_2,\\dotsc,v_t\\}$ if $U = u +\\lambda V = \\{ u + \\lambda v_1,u+\\lambda v_2,\\dotsc, u+\\lambda v_t\\}$ for some $u\\in \\Z^d$ and $\\lambda\\in \\Z$. Homothetic copies are a generalization of arithmetic progressions: for $d=1$ and $V = \\{0,1,\\dotsc,k-1\\}$, a homothetic copy of $V$ is exactly an arithmetic progression of length $k$. See  \\cref{fig:homothetic_ex} for a two dimensional example.\n\\begin{marginfigure}[2cm]\n\\begin{center}\n\\begin{tikzpicture}[scale=.5]\n% \\draw[step=1cm,gray,very thin] (0,0) grid (4,5);\n\\def\\rad{1.5pt}\n\n\\foreach \\x in {0,1,...,4}\n{\n\t\\foreach \\y in {0,1,...,5}\n\t{\n\t\t\\filldraw[black] (\\x,\\y) circle (\\rad);\n\t}\n}\n\\filldraw[LightGreen] (0,0) circle (3*\\rad);\n\\filldraw[LightGreen] (0,1) circle (3*\\rad);\n\\filldraw[LightGreen] (1,0) circle (3*\\rad);\n\n\\def\\lam{2};\n\\def\\ux{1};\n\\def\\uy{2};\n\n\n\\filldraw[LightBlue] ($ (\\ux ,\\uy) + \\lam*(0,0)  $)  circle (3*\\rad);\n\\filldraw[LightBlue] ($ (\\ux ,\\uy)  + \\lam*(0,1)  $) circle (3*\\rad);\n\\filldraw[LightBlue] ($ (\\ux ,\\uy)  + \\lam*(1,0)  $) circle (3*\\rad);\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{An example of homothetic copies in $\\Z^2$: the green dots are $V = \\{(0,0),(0,1),(1,0)\\}$, and the blue dots are $U= u +\\lambda V$ for $u = (1,2)$ and $\\lambda=2$.} \\label{fig:homothetic_ex}\n\\end{marginfigure}\n\\begin{theorem}[Gallai (late 1930s), \\cite{Wit_paper}]\\marginnote{According to \\textcite[page 22]{Ramsey_Yesterday_today_tomorrow}, this theorem was communicated from Gallai to Rado in the late 1930s, and was published in 1943 (\\cite{Rado_note}). 
The theorem was independently proven by Witt in 1952.}\nLet $\\Z^d$ be colored in $r$ colors and let $V\\subset \\Z^d$ be finite. Then there exists a monochromatic homothetic copy of $V$.\n\\end{theorem}\n\\begin{remark}\nIn fact, one can replace $\\Z^d$ by $[n]^d$ where $n$ depends on $V$ and $r$, yielding an analog of van der Waerden's theorem.\n\\end{remark}\n\\begin{proof}\t\nThe proof will be based on the HJ theorem.\n\nWe will take an appropriately large hypercube and project it into $\\Z^d$ such that a combinatorial line in the hypercube projects to a homothetic copy of $V$.\nLet $V = \\{v_1,\\dotsc,v_t\\} =: A$. Let $n$ be such that every $r$-coloring of $A^n$ contains a monochromatic combinatorial line.\n\nDefine $f: A^n\\to \\Z^d$ by $f((a_1,a_2,\\dotsc,a_n)) = \\sum_{i=1}^n a_i$ where $a_i \\in V$. If $\\chi$ is the coloring of $\\Z^d$ in $r$ colors, then we can define $\\chi': A^n \\to [r]$ by $\\chi'(a) = \\chi(f(a))$. By the choice of $n$, there exists a root $\\tau$ such that $\\tau(v_1)$, $\\tau(v_2)$, \\ldots, $\\tau(v_t)$ all receive the same color. Then $\\{f(\\tau(v_1)), f(\\tau(v_2)), \\ldots,f(\\tau(v_t))\\}$ is a homothetic copy of $V$, as follows. If $\\tau = a_1a_2\\dotsm a_n$ then set $u := \\sum_{i: a_i\\neq \\star} a_i$, and let $\\lambda$ be the number of $\\star$'s in $\\tau$. Then $f(\\tau(v_i)) = u+ \\lambda v_i$.\n\n\\flavor{This is the end, but let me make you believe that this is the end.}\n\nFor example, consider $V=\\{v_1,v_2,v_3\\}$ and $\\tau= v_1 v_2 \\star v_1 \\star$. Then we're claiming $\\{f(\\tau(v_1)),f(\\tau(v_2)),f(\\tau(v_3))\\}$ is a homothetic copy of $V$. E.g.,\n\\[\nf(\\tau(v_1)) =  v_1 + v_2 + v_1  +v_1 + v_1 =4v_1  + v_2 = \\underbrace{v_1+v_2 +v_1}_{u} + \\underbrace{2}_\\lambda v_1. \\qedhere\n\\]\n\\end{proof}\n\\flavor{Paraphrase: Proofs by logicians are the hardest; I can follow it but I don't understand why things are happening.}\n\n\\lect{3}{14}\n% \\marginnote{Lecture X: Monday, March 14, 2016.}\n\n\\begin{proof}[Proof of \\cref{thm:HJ}]\nWe will prove the existence of $\\HJ(t,r)$ by induction on $t$ for fixed $r$.\nFirst, $\\HJ(1,r)=1$, since over a one-letter alphabet a combinatorial line is a single word, which is trivially monochromatic.\n\nNow, the induction step. Assume $n = \\HJ(t-1,r)$ exists. Let $n\\ll N_1 \\ll N_2 \\ll \\dotsm \\ll N_n$. Specifically, $N_1 = r^{t^{n-1}}$ and $N_i = r^{t^{n-i+\\sum_{j=1}^{i-1} N_j}}$ for $i\\geq 2$, and $N = \\sum_{i=1}^n N_i$. We will show that $\\HJ(t,r)\\leq N$, i.e. 
if $\\chi: A^N \\to [r]$, then there is a $\\chi$-monochromatic combinatorial line.\n\n% \\missingfigure{ long versions of \\textvisiblespace labelled $N_1$ to $N_n$ }\n\\begin{figure}\n\\begin{center}\n\\vspace{\\baselineskip}\n\\begin{tikzpicture}[node distance = 1cm]\n\\def\\ydif{.1};\n\\def \\nmax {3};\n\\coordinate (1A) at (0,0);\n\n\\coordinate (1Au) at ($ (1A) + \\ydif*(0,1) $);\n\\draw (1A) -- (1Au);\n\n\\coordinate[right= of 1A] (1B);\n\\coordinate (1Bu) at ($ (1B) + \\ydif*(0,1) $);\n\\draw (1B) -- (1Bu);\n\n\n\\draw (1A) -- (1B) node[midway, label={below:$N_1$}]{};\n\n\\foreach \\n in {2,3,...,\\nmax}\n{\n\t\\pgfmathparse{int(\\n -1 )};\n\n\t\\coordinate[right= of \\pgfmathresult B] (\\n A);\n\n\t\\coordinate (\\n Au) at ($ (\\n A) + \\ydif*(0,1) $);\n\t\\draw (\\n A) -- (\\n Au);\n\n\t\\coordinate[right= of \\n A] (\\n B);\n\t\\coordinate (\\n Bu) at ($ (\\n B) + \\ydif*(0,1) $);\n\t\\draw (\\n B) -- (\\n Bu);\n\n\n\t\\draw (\\n A) -- (\\n B) node[midway, label={below:$N_\\n$}]{};\n}\n\n% \\pgfmathparse{};\n\\pgfmathsetmacro{\\nplusone}{int(\\nmax +1)}\n\n\\node[right = of \\nmax B] (\\nplusone B) {$\\dotsm$};\n\n\\pgfmathparse{int(\\nmax +2)};\n\n\\pgfmathsetmacro{\\nplustwo}{int(\\nmax +2)}\n\n\\coordinate[right= of \\nplusone B] (\\nplustwo A);\n\n\\coordinate (\\nplustwo Au) at ($ (\\nplustwo A) + \\ydif*(0,1) $);\n\\draw (\\nplustwo A) -- (\\nplustwo Au);\n\n\\coordinate[right= of \\nplustwo A] (\\nplustwo B);\n\\coordinate (\\nplustwo Bu) at ($ (\\nplustwo B) + \\ydif*(0,1) $);\n\\draw (\\nplustwo B) -- (\\nplustwo Bu);\n\n\n\\draw (\\nplustwo A) -- (\\nplustwo B) node[midway, label={below:$N_n$}]{};\n\n\\end{tikzpicture}\n% \\vspace{-.5\\baselineskip}\n\\end{center}\n\\caption{A depiction of $N_1,\\dotsc,N_n$. We aim to show $\\HJ(t,r) \\leq N:= \\sum_{i=1}^n N_i$.} \\label{fig:HJ1}\n\\end{figure}\nWe will say $a,b\\in A^n$ are \\emph{neighbors} if they differ in only one position and one of them has symbol $0$ in this position and the other has the symbol $1$; that is, if $a = a_1a_2\\dotsm a_{i-1} 0 a_{i+1} \\dotsm a_n$, and $b =a_1a_2\\dotsm a_{i-1} 1 a_{i+1} \\dotsm a_n$ for some $1\\leq i \\leq n$.\n\nLet $\\tau$ be a root of length $N$ such that $\\tau = \\tau_1\\dotsm \\tau_n$ and $\\tau_i$ is a root of length $N_i$. For $a \\in A^n$, $a = a_1\\dotsm a_n$, define\n\\[\n\\tau(a) = \\tau_1(a_1)\\tau_2(a_2)\\dotsm \\tau_n(a_n).\n\\]\nFor example, suppose we have a root $\\tau=\\overbrace{\\star}^{\\tau_1} \\overbrace{2\\star}^{\\tau_2} \\overbrace{\\star 3\\star}^{\\tau_3}$. Then $\\tau(012) = 021232$.\nGiven $\\tau = \\tau_1\\tau_2\\dotsm \\tau_n$ as above, define $\\chi_\\tau: A^n \\to [r]$ by $\\chi_\\tau(a) = \\chi(\\tau(a))$.\nNow, we're not guaranteed monochromatic combinatorial lines in $A^n$ itself, but since $n = \\HJ(t-1,r)$ we are guaranteed them over the smaller alphabet: we will compress to $(A\\setminus \\{0\\})^n$.\n\\begin{claim}\nThere exist roots $\\tau_1,\\tau_2,\\dotsc,\\tau_n$ such that $\\tau_i$ has length $N_i$, and, if $\\tau=\\tau_1\\dotsm \\tau_n$, then\n\\[\n\\chi_\\tau(a) = \\chi_\\tau(b)\n\\]\nfor any pair of neighbors $a$ and $b$ in $A^n$.\n\\end{claim}\nWe accept this claim for now to finish the proof. Define $\\chi'_\\tau$ as the restriction $\\chi_\\tau': (A\\setminus \\{0\\})^n \\to [r]$ of $\\chi_\\tau$. Then there is a $\\chi_\\tau'$-monochromatic combinatorial line. That is, there exists $\\nu = \\nu_1\\nu_2\\dotsm \\nu_n$ with $\\nu_i\\in (A\\setminus\\{0\\})\\cup \\{\\star\\}$ (and at least one $\\nu_i = \\star$) such that $\\nu(1),\\nu(2),\\dotsc,\\nu(t-1)$ all have the same color. 
We wish to show that the line corresponding to $\\tau(\\nu)$ is monochromatic in $\\chi: A^N\\to[r]$, where we write $\\tau(\\nu) := \\tau_1(\\nu_1)\\dotsm \\tau_n(\\nu_n)$ with $\\tau_i(\\star)=\\tau_i$.\n\\marginnote{If $\\nu = (12\\star)$, then $\\tau(\\nu) = 122\\star 3\\star$, using our example $\\tau$ from earlier.}\nThat is, we want $\\tau(\\nu(0))$, $\\tau(\\nu(1))$,\\ldots, $\\tau(\\nu(t-1))$ to receive the same color in $\\chi$. So we want $\\chi_\\tau(\\nu(a))$ to be the same for $a=0,1,\\dotsc,t-1$.\n\nBut $\\chi_\\tau(\\nu(1)) = \\dotsm = \\chi_\\tau(\\nu(t-1))$ by the choice of $\\nu$. So it remains to show that $\\chi_\\tau(\\nu(0))=\\chi_\\tau(\\nu(1))$. While $\\nu(0)$ and $\\nu(1)$ may differ in several positions, in each position where they differ, one has $0$ and the other has $1$, so we may change them one at a time using that $\\chi_\\tau$ acts the same on neighbors, until we see they have the same coloring.\n\nThus, it remains to prove the claim. We will construct the roots $\\tau_1,\\dotsc,\\tau_n$ in reverse order.\nSuppose $\\tau_{i+1},\\dotsc,\\tau_n$ are constructed.\\marginnote{The case $\\tau_n$ is similar to the generic step we consider here.}\n% \\missingfigure{Intervals $N_1, N_2$, \\ldots $N_{i-1}$, $N_i$ with $W_k$, $N_{i+1}$ with $\\tau_{i+1}$, \\ldots, $N_n$ with $\\tau_n$. Overbrace from $N_1$ to $N_{i-1}$ by $M$.}\nFor $0\\leq k\\leq N_i$, let\n\\[\nW_k = \\underbrace{0\\dotsm 0}_k \\underbrace{1\\dotsm 1}_{N_i-k}\\in A^{N_i}.\n\\]\nLet $M = \\sum_{j=1}^{i-1}N_j$. Define a coloring $\\chi_k : A^{M+ n-i} \\to [r]$ as\n\\[\n\\chi_k(x_1 \\dotsm x_M y_{i+1}\\dotsm y_{n}) =  \\chi(x_1\\dotsm x_M W_k \\tau_{i+1}(y_{i+1})\\dotsm \\tau_n(y_n)).\n\\]\nSee \\cref{fig:vdwWk} for a depiction of this definition.\n\\begin{figure*}[ht]\n\\begin{center}\n\\begin{tikzpicture}[node distance = 1cm]\n\\def\\ydif{.1};\n\\def\\dotsdist{.5};\n\n\\setcounter{numint}{0};\n\\coordinate (0B) at (0,0);\n\\stepcounter{numint}\n\n\\newcommand{\\vdwinterval}[3][1]{\n\t\\def\\n{\\thenumint};\n\t\\pgfmathparse{int(\\thenumint - 1)};\n\n\t\\coordinate[ node distance = #1 cm, right= of \\pgfmathresult B] (\\n A);\n\n\t\\coordinate (\\n Au) at ($ (\\n A) + \\ydif*(0,1) $);\n\t\\draw (\\n A) -- (\\n Au);\n\n\t\\coordinate[right= of \\n A] (\\n B);\n\t\\coordinate (\\n Bu) at ($ (\\n B) + \\ydif*(0,1) $);\n\t\\draw (\\n B) -- (\\n Bu);\n\n\n\t\\draw (\\n A) -- (\\n B) node[midway, label={below:$N_{#2}$},yshift=.2cm]{#3};\n\n\t\\stepcounter{numint}\n}\n\\vdwinterval{1}{\\phantom{$W_k$}$\\cdot$\\phantom{$W_k$}}\n\\vdwinterval{2}{\\phantom{$W_k$}$\\cdot$\\phantom{$W_k$}}\n\n\\renewcommand{\\vdwdots}{\n\t\\def\\n{\\thenumint};\n\t\\pgfmathparse{int(\\thenumint - 1)};\n\n\t\\node[node distance = \\dotsdist cm, right = of \\pgfmathresult B] (\\n B) {$\\dotsm$};\n\t\\stepcounter{numint}\n}\n\\vdwdots\n\\vdwinterval[\\dotsdist]{i-1}{\\phantom{$W_k$}$\\cdot$\\phantom{$W_k$}}\n\\vdwinterval{i}{$W_k$}\n\\vdwinterval{i+1}{$\\tau_{i+1}(\\cdot)$}\n\\vdwdots\n\\vdwinterval[\\dotsdist]{n}{$\\tau_n(\\cdot)$}\n\n\\draw [\n    thick,\n    decoration={\n        brace,\n        raise=0.65cm\n    },\n    decorate] (1A.west) --  (4B.east) node [pos=0.5,anchor=north,yshift=1.35cm] {$M$};\n\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{An illustration of how $\\chi_k$ acts on words in $A^{M+n-i}$. First, we write such a word as $x_1\\dotsm x_M y_{i+1}\\dotsm y_n$ where $x_j \\in A$ (for $j\\in [M]$) and $y_j \\in A$ (for $j=i+1,\\dotsc,n$). 
Then, $\\chi_k$ acts on such a word by creating a word in $A^N$ as follows, which it plugs into $\\chi$.  First, $x_1,\\dotsc,x_{i-1}$ are not modified (visualized by empty slots of the appropriate size in the figure). Then  the word $W_k \\in A^{N_i}$ is concatenated, followed by $\\tau_{j}(y_j) \\in A^{N_j}$ for $j= i+1,\\dotsc,n$.} \\label{fig:vdwWk}\n\\end{figure*}\n% Suppose $\\tau_{i+1}$\nNow, there are at most $r^{t^{M+n-i}}$ distinct colorings of $A^{M+n-i}$, and $N_i$ was chosen large enough to exceed this number, so among $\\chi_0,\\chi_1,\\dotsc,\\chi_{N_i}$ there exist $k < \\ell$ such that $\\chi_k = \\chi_\\ell$, by the pigeonhole principle. Let $\\tau_i = \\underbrace{0\\dotsm0}_k \\underbrace{\\star\\dotsm\\star}_{\\ell-k} \\underbrace{1\\dotsm 1}_{N_i - \\ell}$. Then $\\tau_i(0) = W_\\ell$ and $\\tau_i(1) = W_k$.\n\nNow, let's show that the resulting $\\tau = \\tau_1\\dotsm \\tau_n$ satisfies the claim. Suppose $a = a_1a_2\\dotsm a_{i-1} 0 a_{i+1} \\dotsm a_n$, and $b =a_1a_2\\dotsm a_{i-1} 1 a_{i+1} \\dotsm a_n$. Then \n\\begin{align*}\t\n\\chi_\\tau(a) &= \\chi(\\tau_1(a_1)\\tau_2(a_2)\\dotsm \\tau_{i-1}(a_{i-1})W_\\ell \\tau_{i+1}(a_{i+1})\\dotsm \\tau_n(a_n)) \\\\\n&= \\chi_\\ell (\\tau_1(a_1)\\tau_2(a_2)\\dotsm \\tau_{i-1}(a_{i-1}) a_{i+1}\\dotsm a_n)\\\\\n&= \\chi_k (\\tau_1(a_1)\\tau_2(a_2)\\dotsm \\tau_{i-1}(a_{i-1}) a_{i+1}\\dotsm a_n)\\\\\n&= \\chi(\\tau_1(a_1)\\tau_2(a_2)\\dotsm \\tau_{i-1}(a_{i-1})W_k \\tau_{i+1}(a_{i+1})\\dotsm \\tau_n(a_n)) \\\\\n&= \\chi_\\tau(b).\\qedhere\n\\end{align*}\n\\end{proof}\n%should 7.8\n\\begin{theorem}[Density Hales-Jewett theorem, \\cite{DensityHJ}] \\label{thm:HJ_density}\nFor every $\\epsilon>0$ and each $t$, there exists $n$ such that if $|A| = t$ and $Z \\subset A^n$ with $|Z| \\geq \\epsilon t^n$, then $Z$ contains a combinatorial line.\n\\end{theorem}\n\\begin{remark}\nThis theorem implies \\cref{thm:HJ}. If we consider an $r$-coloring as a partition $Z_1,\\dotsc,Z_r$ of $A^n$, then we have some $i$ such that $|Z_i| \\geq \\frac{1}{r}t^n$ by the pigeonhole principle. Then \\cref{thm:HJ_density} with $\\epsilon= \\frac{1}{r}$ implies that $Z_i$ contains a combinatorial line, proving \\cref{thm:HJ}.\n\\end{remark}\n\n%thm. 7.9\n\\begin{theorem}[\\cite{szemeredi1975sets}]\n$\\forall \\, \\epsilon>0$ and $\\forall\\, k$, $\\exists N > 0$ such that if $A\\subset [N]$ with $|A| \\geq \\epsilon N$, then $A$ contains an arithmetic progression of length $k$.\n\\end{theorem}\n\n\n%thm. 7.10\n\\begin{theorem}[\\cite{green2008primes}]\nFor every $k$, the set of primes contains arithmetic progressions of length $k$. \\marginnote{Since the density of primes goes to zero, the density theorem above does not help.}\n\\end{theorem}\n\n\nIf we wanted to formulate a density version of Ramsey's theorem, how would it go? For every $t$ and $\\epsilon>0$, there exists $N$ such that if $G$ is a graph on $N$ vertices  and $|G|\\geq \\epsilon {N\\choose 2}$, then $G$ contains $K_t$.\nBut this is equivalent to $\\pi(K_t) = 0$, which is false for $t\\geq 3$.\n\n\\newthought{Let us} move on. Consider a finite sequence $A=(a_1,\\dotsc,a_n)$. Then we say $(a_{i_1},a_{i_2},\\dotsc,a_{i_k})$ is an \\defn{increasing subsequence}[subsequence!increasing] of $A$ if $i_1 < i_2 < \\dotsb< i_k$ and $a_{i_1}\\leq a_{i_2} \\leq \\dotsm \\leq a_{i_k}$. Likewise, $(a_{i_1},a_{i_2},\\dotsc,a_{i_k})$ is a \\defn{decreasing subsequence}[subsequence!decreasing] of $A$ if $i_1 < i_2 < \\dotsb< i_k$ and $a_{i_1}\\geq a_{i_2} \\geq \\dotsm \\geq a_{i_k}$. \n\n\\begin{problem}\nFor all $k$, does there exist a number $f(k)$ such that every sequence of length at least $f(k)$ contains an increasing or decreasing subsequence of length $k$?\n\\end{problem}\n\nIndeed, such an $f(k)$ exists: $f(k) \\leq R(k,k)$. 
Given a sequence $(a_n)_{n=1}^{R(k,k)}$, we can create a $2$-coloring of the edges of the complete graph on $[R(k,k)]$ as follows: given an edge $\\{i,j\\}\\in [R(k,k)]^{(2)}$  with $i<j$, we color $\\{i,j\\}$ red if $a_i\\leq  a_j$, and blue otherwise. Then by the definition of $R(k,k)$, we are guaranteed  a monochromatic complete graph on $k$ vertices. This yields an increasing subsequence of length $k$ if the monochromatic $K_k$ is red, and a decreasing one if the $K_k$ is blue. This yields an exponential bound on $f(k)$. But we can do better.\n\\begin{theorem}[\\cite{erdosszekeres1935combinatorial}]\nFor all $k,\\ell\\geq 2$ any sequence of length at least $(k-1)(\\ell-1)+1$ contains an increasing subsequence of length $k$ or a decreasing subsequence of length $\\ell$.\n\\end{theorem}\n\\begin{remark}\nIn the case $k=\\ell$, this is a quadratic bound on $f(k)$.\n\\end{remark}\n\\begin{proof}[Proof by induction on $k+\\ell$]\t\nIf $k=2$, then any sequence of length $(k-1)(\\ell-1)+1 = \\ell$ either has two entries $a_i \\leq a_j$ with $i<j$, giving an increasing subsequence of length $2$, or is strictly decreasing, giving a decreasing subsequence of length $\\ell$; the case $\\ell=2$ is symmetric.\nFor the induction step, let $n = (k-1)(\\ell-1) + 1$,  $A=(a_1,\\dotsc,a_n)$ be a sequence, and $Z$ be the set of last elements of increasing subsequences of length $k-1$ in $A$. Then, $Z = (a_{i_1},a_{i_2},\\dotsc,a_{i_z})$ with $i_1< \\dotsm < i_z$, where $z = |Z|$. For contradiction, assume $A$ violates the claim.\n\nThen $A\\setminus Z$ contains no increasing subsequence of length $k-1$ (any such subsequence would end in an element of $Z$) and no decreasing subsequence of length $\\ell$. So $|A\\setminus Z| \\leq (k-2)(\\ell-1)$ by the induction hypothesis. Then $|Z| \\geq (k-1)(\\ell-1)+1 - (k-2)(\\ell-1) = \\ell$. So if $Z$ is decreasing, it is a decreasing subsequence of length at least $\\ell$, a contradiction. If not, then there exists $s$ such that $a_{i_s} \\leq a_{i_{s+1}}$. Let $(b_1,b_2,\\dotsc,b_{k-1})$ with $b_{k-1} = a_{i_s}$ be an increasing subsequence of length $k-1$ ending in $a_{i_s}$, which exists since $a_{i_s} \\in Z$. Appending $a_{i_{s+1}}$ extends it to an increasing subsequence of length $k$, again a contradiction.\n\\end{proof}\n\\lect{3}{16}\n% \\marginnote{Lecture X: Wednesday, March 16, 2016.}\nLet us consider a pigeonhole proof of this result.\n\\begin{proof}\tAs before, let $A = (a_1,\\dotsc,a_n)$ be a sequence.\nFor every $s\\in[n]$, we will associate to $a_s$ two numbers:\n\\begin{itemize}\n\t\\item $i_s$, the length of the longest increasing subsequence ending in $a_s$,\n\t\\item $d_s$, the length of the longest decreasing subsequence ending in $a_s$. \n\\end{itemize}\nIf $A$ contains no increasing subsequence of length $k$ and no decreasing subsequence of length $\\ell$, then $1\\leq i_s\\leq k-1$ and $1\\leq d_s\\leq \\ell-1$, so there are only $(k-1)(\\ell-1)$ possible pairs $(i_s,d_s)$.\n\nSo there exist $1\\leq s<t \\leq n$ such that $(i_s,d_s) = (i_t,d_t)$ by the pigeonhole principle. Suppose, by symmetry, that $a_s\\leq a_t$ (the case $a_s > a_t$ is handled the same way using $d_s$ and $d_t$). Then an increasing subsequence of length $i_s$ ending in $a_s$ can be extended to a subsequence of length $i_s + 1$ ending in $a_t$. 
This is a contradiction to $i_t=i_s$.\n\\end{proof}\n\n", "meta": {"hexsha": "ef44a7287ebd664ff92819a26de0764743ddb5d5", "size": 41819, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/ch7_ramsey.tex", "max_stars_repo_name": "ericphanson/CombinatoricsNotes", "max_stars_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-24T06:43:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-20T04:27:41.000Z", "max_issues_repo_path": "chapters/ch7_ramsey.tex", "max_issues_repo_name": "ericphanson/CombinatoricsNotes", "max_issues_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/ch7_ramsey.tex", "max_forks_repo_name": "ericphanson/CombinatoricsNotes", "max_forks_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-04T19:38:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-04T19:38:24.000Z", "avg_line_length": 48.4016203704, "max_line_length": 678, "alphanum_fraction": 0.6380114302, "num_tokens": 16645, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389325, "lm_q2_score": 0.845942439250491, "lm_q1q2_score": 0.5942999798623976}}
{"text": "\\documentclass[letterpaper,11pt]{article}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[htt]{hyphenat}\n\\usepackage{courier}\n\n\\begin{document}\n\\title{\\Huge{Problem Set 1}\\\\\n\\vspace{0.125in}\n\\Large{MIT 6.0002}\\\\\n\\large{Introduction to Computational Thinking and Data Science\\\\\nas Taught in Fall 2016}\n}\n\\author{\nJohn L. Jones IV}\n\\maketitle\n\\pagebreak\n\\section*{Problem A.1}\nWhat were your results from \\texttt{compare\\_cow\\_transport\\_algorithms}? Which algorithm runs faster? Why? \\\\\n\\\\\nUsing \\texttt{ps1\\_cow\\_data.txt} the results were: \\\\\n\\texttt{\n  greedy\\_cow\\_transport:\\\\\n  length =  6 trips \\\\\n  time = .000145 seconds \\\\\n  \\\\\n  brute\\_force\\_cow\\_transport: \\\\\n  length =  5 trips \\\\\n  time = 0.48294 seconds \\\\\n}\n\\\\\nThe algorithm \\texttt{greedy\\_cow\\_transport} does not iterate through every possible combination of trips\nlike \\texttt{brute\\_force\\_cow\\_transport}.\nIn my implementation of \\texttt{greedy\\_cow\\_transport}, first the input dictionary of cows is copied to a \nlist of cow names \\texttt{sorted} from largest to smallest weight. \nThe python \\texttt{sorted} function utilizes a Timsort which is $\\mathcal{O}(n\\log{}n)$.\nThen \\texttt{greedy\\_cow\\_transport} removes any cow which is larger than \\texttt{limit},\nan $\\mathcal{O}(n)$ operation. \nThe sorted list is then utilize to select the cows which can fit on the ship.\nStarting at the big end of the list,\niterate and select cows that can fit onto the ship without exceeding the payload \\texttt{limit}.\nSince the list has been sorted, this is an $\\mathcal{O}(n\\log{}n)$ operation.\nThe python \\texttt{sorted} function's Timsort and \nselecting cows operation dominate the run time,\nmaking \\texttt{greedy\\_cow\\_transport} $\\mathcal{O}(n\\log{}n)$.\nIn comparison, \\texttt{brute\\_force\\_cow\\_transport} must first create all permutations of the possible trips, \n$\\mathcal{O}(n^{2})$.\nThen evaluate each of these trips, $\\mathcal{O}(n^{2})$.\nThis emphasizes Professor John Guttag's quote from Lecture 1, ``many optimization problems are inherently exponential.\nWhat that means is there is no algorithm that provides an exact solution to this problem whose worst case running time\nis not exponential in the number of items.\"\n\n\\section*{Problem A.2}\nDoes the greedy algorithm return the optimal solution? Why/why not? \\\\\n\\\\\nNo, the greedy algorithm \\emph{does not} return the optimal solution.\nThe nature of this ``knapsack'' problem is $\\mathcal{O}(n^{2})$.\nHowever, a reasonable solution can be solve in $\\mathcal{O}(n\\log n)$ with \\texttt{greedy\\_cow\\_transport}.\nWith \\texttt{ps1\\_cow\\_data.txt} \\texttt{greedy\\_cow\\_transport} returns a solution $1000$ times faster than\n\\texttt{brute\\_force\\_cow\\_transport}.\n\n\\section*{Problem A.3}\nDoes the brute force algorithm return the optimal solution? Why/why not? \\\\\n\\\\\nYes, the brute force algorithm does return the optimal solution.\nAll possible solutions are found then evaluated. \nThe optimal solution is guaranteed to be produced and returned.\nHowever, this comes at a great cost to run-time speed.\nWith \\texttt{ps1\\_cow\\_data.txt} \\texttt{brute\\_force\\_cow\\_transport} produces a solution $1000$ times slower\nthan the greedy algorithm. 
\n\n\\section*{Problem B.1}\nExplain why it would be difficult to use a brute force algorithm to solve this problem if there were $30$ different egg weights.\nYou do not need to implement a brute force algorithm in order to answer this. \\\\\n\\\\\nBrute force requires all possible solutions to be solved and then evaluated.\nThe width of the search tree grows linearly with the number of egg weights.\nThe number of nodes grows exponentially with the number of egg weights.\nTherefore, the computational complexity grows exponentially with the number of egg weights.\nObserve:\n\\begin{enumerate}\n\\item This problem can be separated into smaller, similar sub-problems; therefore a recursive solution is possible.\n\\item The same sub-problems can recur.\n\\end{enumerate}\nSince these two conditions are true, a dynamic programming approach, which uses a memo,\nallows the optimal solution to be found in much less run time than a typical brute-force approach.\nThe same recursive solution without a memo is so slow, I did not have the patience to see if it worked on $n = 99$.\n\n\\section*{Problem B.2}\nIf you were to implement a greedy algorithm for finding the minimum number of eggs needed, what would the objective function be?\nWhat would the constraints be? What strategy would your greedy algorithm follow to pick which eggs to take? \nYou do not need to implement a greedy algorithm in order to answer this.\\\\\n\\\\\nThe objective function is the number of eggs taken, which is to be minimized.\nThe constraint is that the total weight of the eggs taken cannot exceed the target weight.\nThe greedy strategy: always take the largest egg available, if and only if the largest egg does not exceed the remaining available weight.\n\n\\section*{Problem B.3}\nWill a greedy algorithm always return the optimal solution to this problem?\nExplain why it is optimal or give an example of when it will not return the optimal solution.\\\\\n\\\\\nNo, the greedy algorithm will not always return the optimal solution.\nFor example, if the egg weights are (1, 5, 7) and the target weight is 10, the optimal solution is 2 eggs (2 * 5 = 10),\nwhile the greedy solution returns 4 eggs (1 * 7 + 3 * 1 = 10).\n\n\\end{document}\n", "meta": {"hexsha": "5fe4da6b8776985fef82f706b44d28538858f4d5", "size": 5082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ps1/ps1_answers.tex", "max_stars_repo_name": "John-L-Jones-IV/6.0002", "max_stars_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ps1/ps1_answers.tex", "max_issues_repo_name": "John-L-Jones-IV/6.0002", "max_issues_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-29T22:36:23.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-01T09:12:11.000Z", "max_forks_repo_path": "ps1/ps1_answers.tex", "max_forks_repo_name": "John-L-Jones-IV/6.0002", "max_forks_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-05-26T11:58:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-26T11:58:04.000Z", "avg_line_length": 49.8235294118, "max_line_length": 128, "alphanum_fraction": 0.7672176309, "num_tokens": 1302, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489892, "lm_q2_score": 0.8128673201042492, "lm_q1q2_score": 0.5942536371761731}}
{"text": "\\section{Econometric Theory}\n\\subsection{Model and Estimation}\nThe econometric model used to forecast and its estimation method are closely related to Hansen \\cite{hansen2009averaging} and Andrews \\cite{andrews93}.\\footnote{Andrews considers GMM as the primary estimation method.} The model we are interested in is a linear time series regression with a possible structural break in the conditional mean. The observations we have are time series $\\{y_t,x_t\\}$ for $t = 1,...,T$, where $y_t$\\footnote{Since we are interested in forecasting, $y_t$ can be thought of as the variable to be predicted for the next period using currently available information $x_t$.} is the scalar dependent variable and $x_t$ is a $k\\times 1$ vector of related predictors and possibly lagged values of $y_t$, $k$ is the total number of regressors or predictors included. Parameters are estimated by ordinary least squares. The forecasting model allowing for structural break is:\n\\begin{equation} \\label{mod:1}\n\ty_t = x_t'\\beta_1 I_{[t<m]} + x_t'\\beta_2 I_{[t \\geq m]} + e_t\n\\end{equation}\nwhere $I_{[\\bullet]}$ is an indicator function, $m$ is the time index of the break and $E(e_t|x_t) = 0$. The break date is restricted to the interval $[m_1,m_2]$ which is bounded away from the ends of the sample on both sides, $1 < m_{1} < m_{2} < T$. In practice, a popular choice is to use the middle $70\\%$ portion of the sample. We assume that all information relevant for forecasting is included in the regressors $x_t$, and the source of model misspecification comes solely from the uncertainty about parameter stability. This is in contrast to many applied econometric models where model misspecification bias comes from the wrong choice of regressors but the parameters are assumed stable.\n\nWe can also use a stable linear model to forecast:\n\\begin{equation} \\label{mod:2}\n\ty_t = x_t'\\beta + e_t\n\\end{equation}\nThe traditional pre-test procedure starts with performing a test for structural breaks\\footnote{This can be done in various ways. One is to treat various possible number of breaks as different models, then select one according to some information criterion, e.g., AIC, SIC or Mallow's. Another way is hypothesis testing, following the relevant testing procedures outlined in Andrews \\cite{andrews93}, Bai and Perron \\cite{bai_perron98} and Elliot and Muller \\cite{elliott_muller_RES2006}.}, either by using Andrews' SupF or SupW test, or Bai and Perron's multiple-break test, and then decide to keep the stable or unstable model.\n\nAs an alternative to model selection, we can combine these two models by assigning weight $w$ to model~\\ref{mod:1} and $1 - w$ to model~\\ref{mod:2}, where $w \\geq 0$. 
So the combined predictive model is\n\\begin{equation} \\label{mod:3}\n    y_{t} = w \\left\\{ x_t'\\beta_1 I_{[t<m]} + x_t'\\beta_2 I_{[t \\geq m]} \\right\\} + (1 - w) \\left\\{ x_t'\\beta \\right\\} + e_t\n\\end{equation}\nWith the forecasting model in place, we next present in detail the cross-validation criterion, which is crucial in determining the optimal weight $w$ in equation~\\ref{mod:3}.", "meta": {"hexsha": "0cfc84d146f63e2578d077bb7e5eff1e31131e5e", "size": 3076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/theory_1_model.tex", "max_stars_repo_name": "anwenyin/ooscombo", "max_stars_repo_head_hexsha": "4f747c7ba0c7bde2a4ae13fdc112a24e01f25bf7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tex/theory_1_model.tex", "max_issues_repo_name": "anwenyin/ooscombo", "max_issues_repo_head_hexsha": "4f747c7ba0c7bde2a4ae13fdc112a24e01f25bf7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/theory_1_model.tex", "max_forks_repo_name": "anwenyin/ooscombo", "max_forks_repo_head_hexsha": "4f747c7ba0c7bde2a4ae13fdc112a24e01f25bf7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 161.8947368421, "max_line_length": 894, "alphanum_fraction": 0.7652795839, "num_tokens": 815, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.594253624127819}}
{"text": "\\section{single scatterer}\r\n\r\nWe derive the matrix for a single scatter in a quasi-1D channel of height w (where y goes from 0 to w) and the defect is at x = 0. The incident amplitude is A, reflection is B, and transmission is C.  These are later normalized to 1, r, and t, respectively.\r\n\r\nFor simplicity we assume there are two open channels (N=2) and two closed channels ($N_c$=2) and that the incident light is only in channel 2. The algorithm can be duplicated for any arbitrary open channel. It doesn't make sense to say the incident light is in the closed channel since those are not propagating modes. The algorithm show works for arbitrary number of open and closed channels. We show that closed channels can be ``folded'' into the open channels, which maintains conservation of information. The rank of the matrix is reduce from $N+N_c$ to N but the terms become messier.\r\n\r\nMethod for matrix setup: match boundary conditions for the amplitude and derivative at the scatterer.\r\n% see Ben's notes, 20080617\r\nThe general equation describing the system is\r\n\\begin{equation}\r\nA_2 \\chi_2(y) \\exp(i k_2 x) + \\sum_{n=1}^4 B_n \\chi_n(y) \\exp(-i k_n x) = \r\n\\sum_{n=1}^4 C_n \\chi_n(y) \\exp(i k_n x)\r\n\\label{singlescattererwave}\r\n\\end{equation}\r\nwhere if $n>N$, then $k_n = i \\kappa_n$.\r\n\r\nBoundary condition: amplitudes match at x = 0. For each channel, multiply both sides by\r\n\\begin{equation}\r\n\\int dy \\chi_m(y)\r\n\\end{equation}\r\nwhich by orthogonality with $\\chi_n(y)$ yields $\\delta_{nm}$, so at x=0,\r\n\\begin{equation}\r\n\\begin{gathered}\r\nB_1 = C_1 \\\\\r\nA_2 + B_2 = C_2 \\\\\r\nB_3 = C_3 \\\\\r\nB_4 = C_4\r\n\\end{gathered}\r\n\\end{equation}\r\n\r\nIn order to have the derivatives match we detour to the Schr\\\"{o}dinger equation:\r\n\r\n\\begin{equation}\r\n(-\\frac{\\hbar^2}{2 m}(\\frac{d^2}{dx^2}+\\frac{d^2}{dy^2})+\\gamma \\delta(x)\\delta(y-y_o))\\Psi = E \\Psi\r\n\\end{equation}\r\n(equation 1 in \\cite{1990_Bagwell}).\r\n\r\nseparation of variables:\r\n\\begin{equation}\r\n\\Psi = \\sum_n D_n(x) \\chi_n(y)\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n-\\frac{\\hbar^2}{2 m} \\frac{d^2}{dy^2} \\chi_n(y) = E_n \\chi_n(y)\r\n\\end{equation}\r\n\r\nboundary condition: $\\chi_n(y)$ = 0 for y = 0 and y = w.\r\n\r\n\\begin{equation}\r\n\\begin{gathered}\r\n\\chi_n(y) = \\sqrt{\\frac{2}{w}sin(\\frac{n \\pi}{w}y} \\\\\r\nE_n = \\frac{\\hbar^2}{2 m}(\\frac{n \\pi^2}{w})^2\r\n\\end{gathered}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n-\\frac{\\hbar^2}{2 m}\\sum_m D_m^{''}(x) \\chi_m(y) + \r\n\\sum_m D_m(x) E_m \\chi_m(y) + \r\n\\gamma \\delta(x) \\sum_m D_m(x) \\chi_m(y) \\delta(y-y_o) = \r\nE_{total} \\sum_m D_m(x) \\chi_m(y)\r\n\\end{equation}\r\nwhere $E_m \\chi_m(y)$ = $\\chi_m^{''}(y)$. Multiply both sides by \r\n$\\int dy \\chi_n(y)$ to get $\\delta_{mn}$ which leads to \r\n\\begin{equation}\r\n-\\frac{\\hbar^2}{2 m} D_n^{''}(x) +D_n(x) E_n +\r\n\\delta(x) \\sum_m D_m(x) \\gamma \\chi_n(y_o) \\chi_m(y_o) = D_n(x) E\r\n\\end{equation}\r\nre-arranging, we get Bagwell's equation 4,\r\n\\begin{equation}\r\n\\frac{d^2}{dx^2}D_n(x) + (\\frac{E-E_n}{\\hbar^2})D_n = \r\n\\delta(x) \\sum_m D_m(x) \\Gamma_{nm}\r\n\\label{Bagwellsequ4}\r\n\\end{equation}\r\nwhere\r\n\\begin{equation}\r\n\\begin{gathered}\r\nk_n^2 = \\frac{E-E_n}{\\hbar^2}2 m \\\\\r\n\\Gamma_{nm} = \\chi_n(y_o) \\chi_m(y_o)\r\n\\end{gathered}\r\n\\end{equation}\r\nNow we can match derivatives at x=0. 
Integrate over a small region, applying\r\n\\begin{equation}\r\n\\int_{0-\\epsilon}^{0+\\epsilon}dx\r\n\\end{equation}\r\nto equation \\ref{Bagwellsequ4}:\r\n\r\n\\begin{equation}\r\n\\frac{dD_n}{dx}|^{+\\epsilon}_{-\\epsilon} + k_n^2 D_n 2 \\epsilon = \r\n\\sum_m D_m(0) \\Gamma_{nm}\r\n\\end{equation}\r\nwhich is Bagwell's equation 18. Now let $\\epsilon \\rightarrow 0$:\r\n\\begin{equation}\r\n\\frac{dD_n(+)}{dx} - \\frac{dD_n(-)}{dx} = \\sum_{m=1}^4 D_m(0) \\Gamma_{nm}\r\n\\end{equation}\r\nwhere $\\frac{dD_n(+)}{dx}$ is the derivative of the $x$ component of the wave function on the right side of the scatterer, and similarly $\\frac{dD_n(-)}{dx}$ is the derivative on the left side of the scatterer. Apply this to equation \\ref{singlescattererwave}:\r\n\\begin{equation}\r\ni k_n C_n - (i k_2 A_2 \\delta_{2 n} - i k_n B_n) = \r\n\\sum_{m=1}^4 C_m \\Gamma_{nm}\r\n\\end{equation}\r\nFrom the first set of boundary conditions, $B_1=C_1$, $A_2+B_2=C_2$, $B_3=C_3$, $B_4=C_4$; thus $B_2=C_2-A_2$.\r\n\r\nFor channels $n = 1, 3, 4$:\r\n\\begin{equation}\r\n2 i k_n C_n = \\sum_{m=1}^4 C_m \\Gamma_{nm}\r\n\\end{equation}\r\nand for channel 2,\r\n\\begin{equation}\r\n\\begin{gathered}\r\ni k_2 (C_2 - A_2 +B_2) = i k_2 (C_2 -A_2 + (C_2 - A_2)) = 2 i k_2 (C_2 -A_2) = \\sum C_m \\Gamma_{2m} \\\\\r\n-2 i k_2 A_2 = (\\sum_m C_m \\Gamma_{2m}) - 2 i k_2 C_2 \\\\\r\n-2 i k_2 = (\\sum_m (\\frac{C_m}{A_2}) \\Gamma_{2m}) - 2 i k_2 \\frac{C_2}{A_2}\r\n\\end{gathered}\r\n\\end{equation}\r\nDefine $C_m/A_2$ as the transmission $t_{2m}$ so that coefficients are normalized to unit incidence. [Note: 1 with respect to each channel, so that if there are 2 input channels the incidence is actually 2 (?)]\r\n\r\nThe four channel equations can be written as a matrix equation (recall $k_3 = i\\kappa_3$ and $k_4 = i\\kappa_4$, so $-2 i k_n = +2\\kappa_n$ for the closed channels):\r\n\\begin{equation}\r\n \\left( \\begin{array}{c}\r\n0 \\\\\r\n-2 i k_2 \\\\\r\n0 \\\\\r\n0  \\end{array} \\right) =\r\n \\left( \\begin{array}{cccc}\r\n\\Gamma_{11}-2 i k_1 & \\Gamma_{12}         & \\Gamma_{13}              & \\Gamma_{14} \\\\\r\n\\Gamma_{21}         & \\Gamma_{22}-2 i k_2 & \\Gamma_{23}              & \\Gamma_{24} \\\\\r\n\\Gamma_{31}         & \\Gamma_{32}         & \\Gamma_{33}+2 \\kappa_3 & \\Gamma_{34} \\\\\r\n\\Gamma_{41}         & \\Gamma_{42}         & \\Gamma_{43}              & \\Gamma_{44}+2 \\kappa_4 \\end{array} \\right)\r\n \\left( \\begin{array}{c}\r\nt_{21} \\\\\r\nt_{22} \\\\\r\nt_{23} \\\\\r\nt_{24} \\end{array} \\right) \r\n\\end{equation}\r\n\r\nHere the vector on the left side is the input, and the vector on the right side contains the transmission coefficients; a small numerical sketch of solving this system is given below. 
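\r\n\r\nAs a concrete illustration (our addition, not part of the original derivation), the system can be assembled and solved numerically in Python; all parameter values below are arbitrary assumptions:\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\n# Assumed parameters: channel width, defect position, scatterer strength\r\n# (with the 2m gamma / hbar^2 factor absorbed into gamma0), open-channel\r\n# wavenumbers k_n, and closed-channel decay constants kappa_n.\r\nw, y0, gamma0 = 1.0, 0.3, 2.0\r\nk = [3.0, 2.0]\r\nkappa = [1.5, 2.5]\r\n\r\ndef chi(n, y):\r\n    # Transverse mode chi_n(y) = sqrt(2/w) sin(n pi y / w)\r\n    return np.sqrt(2 / w) * np.sin(n * np.pi * y / w)\r\n\r\nmodes = np.array([chi(n, y0) for n in (1, 2, 3, 4)])\r\nGamma = gamma0 * np.outer(modes, modes)    # Gamma_nm\r\n\r\nM = Gamma.astype(complex)\r\nM[0, 0] -= 2j * k[0]       # open channels: -2 i k_n on the diagonal\r\nM[1, 1] -= 2j * k[1]\r\nM[2, 2] += 2 * kappa[0]    # closed channels: +2 kappa_n on the diagonal\r\nM[3, 3] += 2 * kappa[1]\r\n\r\nrhs = np.array([0, -2j * k[1], 0, 0])      # incidence in channel 2\r\nt = np.linalg.solve(M, rhs)                # t_21, t_22, t_23, t_24\r\nprint(t)\r\n\\end{verbatim}\r\n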
Note that there were two (open) input channels, which correspond to the upper 2 elements, and the lower elements correspond to the closed channels, which can not be inputs.", "meta": {"hexsha": "c9f9b818f3748fc2fe743a64112c48c58c403493", "size": 5634, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_q1d_single_scatterer_derivation.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_q1d_single_scatterer_derivation.tex", "max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_q1d_single_scatterer_derivation.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7333333333, "max_line_length": 591, "alphanum_fraction": 0.6599219027, "num_tokens": 2044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5942536146018398}}
{"text": "\n\\section{Model Approach }\n\n%%%\n%%% SECTION: GOVERNNG EQNS\n%%%\n\\subsection{Governing Equations}\n\nAssume a volume of an incompressible porous medium with porosity, $\\phi$, that does not vary in time, containing $N_\\mathrm{p}$ immiscible fluid phases.  The volume fraction of each phase is,\n\\begin{equation}\nV_\\alpha = \\phi S_\\alpha,\n\\end{equation}\nwhere the subscript, $\\alpha$, denotes the phase, and $S_\\alpha$ is the saturation of that phase such that,\n\\begin{equation}\\label{e:total}\n\\sum_{\\alpha=1}^{N_\\mathrm{p}} S_\\alpha = 1.\n\\end{equation}\n\n\\subsubsection{Darcy's Law}\n\nDarcy's Law states that for a phase $\\alpha$ in a porous medium \\cite{bear_1972},\n\\begin{equation}\\label{e:darcy}\n\\mathbf{q}_\\alpha = -\\frac{k_{\\mathrm{r}\\alpha}}{\\mu_\\alpha}\\mathbf{K} \\left( \\nabla P_\\alpha - \\rho_\\alpha \\mathbf{g} \\right),\n\\end{equation}\nwhere $\\mathbf{q}_\\alpha$ is a volumetric flux rate, known as the Darcy velocity; $k_{\\mathrm{r}\\alpha}$ is the relative permeability of the phase and is a function of $S_\\alpha$; $\\mu_\\alpha$ is the dynamic viscosity of the fluid phase; $\\mathbf{K}$ is a 2nd rank tensor describing the permeability of the porous medium; $P_\\alpha$ is the fluid pressure of phase $\\alpha$; $\\rho_\\alpha$ is the mass density of phase $\\alpha$; and, $\\mathbf{g}$ is the gravitational acceleration vector.\n\nThe Darcy velocity can be rewritten in terms of the (interstitial or average) velocity of the fluid, $\\mathbf{v}_\\alpha$, as,\n\\begin{equation}\n\\mathbf{q}_\\alpha = V_\\alpha \\mathbf{v}_\\alpha = \\phi S_\\alpha \\mathbf{v}_\\alpha.\n\\end{equation}\nThen, we can rewrite Eqn. \\ref{e:darcy} as,\n\\begin{equation}\\label{e:darcy2}\nS_\\alpha \\mathbf{\\Lambda}_\\alpha \\mathbf{v}_{\\phi\\alpha} = - \\nabla P_\\alpha + \\rho_\\alpha \\mathbf{g},\n\\end{equation}\nwith,\n\\begin{eqnarray}\n\\mathbf{v}_{\\phi\\alpha} & = & \\phi \\mathbf{v}_\\alpha,\\\\\n\\mathrm{and} \\quad \\mathbf{\\Lambda}_\\alpha & = & \\frac{\\mu_\\alpha}{k_{\\mathrm{r}\\alpha}}\\mathbf{K}^{-1}.\\label{e:absorption}\n\\end{eqnarray}\n\n\\subsubsection{Mass conservation}\n\nConservation of mass for each fluid phase implies \\cite{bear_1972},\n\\begin{equation}\\label{e:mass}\n\\phi \\frac{\\partial}{\\partial t}(\\rho_\\alpha S_\\alpha) + \\nabla\\cdot(\\rho_\\alpha S_\\alpha  \\mathbf{v}_{\\phi \\alpha}) - Q_\\alpha = 0,\n\\end{equation}\nwhere $Q_\\alpha$ is a mass flux rate describing sources ($Q_\\alpha >0$) or sinks ($Q_\\alpha <0$) of phase $\\alpha$.\n\n\\subsubsection{Fick's Law}\n\nThe transport of a conservative substance (e.g.\\ salt), with constant mass, in a partially saturated flow can be described by Fick's Law \\cite{helmig_1997},\n\\begin{equation}\\label{e:transport}\n\\phi \\frac{\\partial}{\\partial t}(\\rho_\\alpha S_\\alpha X_\\alpha^\\kappa) + \\nabla \\cdot \\left( \\rho_\\alpha S_\\alpha X_\\alpha^\\kappa \\mathbf{v}_{\\phi\\alpha} - \\rho_\\alpha \\mathbf{D}_\\alpha^\\kappa \\nabla X_\\alpha^\\kappa \\right) - R_\\alpha^\\kappa = 0\n\\end{equation}\nwhere $R_\\alpha^\\kappa$ is a source rate for component $\\kappa$ in phase $\\alpha$; $\\mathbf{D}_\\alpha^\\kappa$ is the 2nd rank hydrodynamic dispersion tensor for component $\\kappa$ in phase $\\alpha$ (see below); and, $X_\\alpha^\\kappa$ is the mass fraction of component $\\kappa$ in phase $\\alpha$ such that,\n\\begin{equation}\n\\sum_{\\kappa = 1}^{N\\mathrm{c\\alpha}} X_\\alpha^\\kappa = 1 \\quad \\forall\\, \\alpha,\n\\end{equation}\nand 
$N_\\mathrm{c\\alpha}$ is the number of components that phase $\\alpha$ can partition into.\n\nIt is worth noting that the dispersion tensor, $\\mathbf{D}_\\alpha^{\\kappa}$, may be rewritten in terms of an effective diffusive part and a purely dispersive part, e.g.\\ \\cite{scheidegger_1961},\n\\begin{equation}\n\\mathbf{D}_\\alpha^\\kappa = (\\phi S_\\alpha D^{\\kappa}_{\\mathrm{e} \\alpha} + \\delta^{\\kappa}_{\\mathrm{T} \\alpha} q_\\alpha ) \\mathbf{I} + (\\delta^{\\kappa}_{\\mathrm{L} \\alpha} - \\delta^{\\kappa}_{\\mathrm{T} \\alpha}) \\frac{\\mathbf{q}_\\alpha \\mathbf{q}_\\alpha}{q_\\alpha},\n\\end{equation}\nwhere $D^{\\kappa}_{\\mathrm{e} \\alpha}$ is the effective molecular diffusion of component $\\kappa$ in phase $\\alpha$ such that $D^{\\kappa}_{\\mathrm{e} \\alpha} = D^{\\kappa}_{\\mathrm{m} \\alpha} / \\tau$, where $D^{\\kappa}_{\\mathrm{m} \\alpha}$ denotes the molecular diffusion coefficient of component $\\kappa$ in phase $\\alpha$, and $\\tau$ is the tortuosity of the porous medium.  The parameters $\\delta^{\\kappa}_{\\mathrm{L} \\alpha}$ and $\\delta^{\\kappa}_{\\mathrm{T} \\alpha}$ represent the longitudinal and transverse dispersivities of component $\\kappa$ in phase $\\alpha$, respectively; $\\mathbf{I}$ is the 2nd rank unit tensor; and, $q_\\alpha = | \\mathbf{q}_\\alpha |$.\n\n\\subsubsection{Fourier's Law}\n\nAssuming local thermal equilibrium, or equivalently rapid heat transfer between the phases, the temperature of all the phases, $T_\\alpha$, and the temperature of the porous medium, $T_s$, will be the same i.e.\\ $T_\\alpha = T_{s} = T$ \\cite{hassanizadeh_1993}.  Then, considering the thermal energy of all the fluid phases as well as the porous medium, the energy conservation equation of the system can be described by Fourier's Law as \\cite{helmig_1997},\n\\begin{eqnarray}\n& {} & N_\\mathrm{p} (1-\\phi) c_s \\rho_s \\frac{\\partial T}{\\partial t} + N_\\mathrm{p} \\lambda_s \\nabla^2 T \\nonumber \\\\\n& {} & {} + \\sum_{\\alpha = 1}^{N_\\mathrm{p}} \\left\\{ \\phi c_\\alpha \\frac{\\partial}{\\partial t}(\\rho_\\alpha S_\\alpha T) +  \\nabla \\cdot \\left[ S_\\alpha \\mathbf{v}_{\\phi \\alpha} \\left(c_\\alpha \\rho_\\alpha T + P_\\alpha \\right) \\right] - U_\\alpha \\right\\} = 0.\\label{e:energy}\n\\end{eqnarray}\nHere, $\\rho_s$, $c_s$ and $\\lambda_s$ are the mass density, the average heat capacity and the heat conductivity, respectively, at constant pressure, of the partially saturated porous medium; $c_\\alpha$ is the average heat capacity of phase $\\alpha$; and, $U_\\alpha$ represents heat flux rate sources or sinks of phase $\\alpha$.\n\n%%%\n%%% SECTION: CONSTITUTIVE EQNS.\n%%%\n\\subsection{Constitutive Equations}\n\n\\subsubsection{Equation of State}\n\nWe can express the equation of state as a linearized function of the state variables i.e.\\ pressure, temperature and mass fraction of component $\\kappa$ in phase $\\alpha$, and density of phase $\\alpha$,\n\\begin{equation}\\label{e:eos}\n\\rho_\\alpha = \\tilde{\\rho}_\\alpha + \\gamma_\\alpha^P (P_\\alpha - \\tilde{P}_\\alpha) - \\gamma_\\alpha^T (T-\\tilde{T}_\\alpha) + \\gamma_\\alpha^\\kappa (X_\\alpha^\\kappa - \\tilde{X}_\\alpha^{\\kappa}),\n\\end{equation}\nwhere parameters with a tilde above them ($\\tilde{\\ \\cdot\\ }$) represent an approximation of that parameter in the state space; and,\n\\begin{eqnarray}\n\\gamma_\\alpha^P & =\\ \\rho_\\alpha \\beta_\\alpha^P\\ = & \\frac{\\partial \\rho_\\alpha}{\\partial P_\\alpha}, \\label{e:gammaP}\\\\\n\\gamma_\\alpha^T & =\\ 
\\rho_\\alpha \\beta_\\alpha^T\\ = & - \\frac{\\partial \\rho_\\alpha}{\\partial T}, \\label{e:gammaT}\\\\\n\\mathrm{and} \\quad \\gamma_\\alpha^\\kappa & =\\ \\rho_\\alpha \\beta_\\alpha^\\kappa\\ = & \\frac{\\partial \\rho_\\alpha}{\\partial {X_\\alpha^\\kappa}}\\label{e:gammaX},\n\\end{eqnarray}\nwith $\\beta_\\alpha^P$, $\\beta_\\alpha^T$ and $\\beta_\\alpha^\\kappa$ representing the compressibility coefficient, thermal expansion coefficient and the coefficient of volume expansion due to component $\\kappa$ in phase $\\alpha$, respectively.\n\n\\subsubsection{Capillary Pressure and Relative Permeability}\n\nThe capillary pressure, $P_{\\mathrm{c} \\alpha \\gamma}$, relates the interfacial tension between two phases, $\\alpha$ and $\\gamma$, in the porous material as,\n\\begin{equation}\\label{e:capillary}\nP_{\\mathrm{c} \\alpha \\gamma} = P_\\alpha - P_\\gamma,\n\\end{equation}\nwhere $P_\\alpha$ is the pressure of the non-wetting phase and $P_\\gamma$ is the pressure of the wetting phase. The capillary pressure is usually expressed as a monotonic function of the saturation of the wetting phase (i.e.\\ $S_\\gamma$), and may be obtained from empirical data, or by using an analytical model such as that of Brooks and Corey \\cite{brooks_1964} or Van Genuchten \\cite{vangenuchten_1980} in the case of two-phase flow.\n\nLikewise, the relative permeability of a phase, $k_{\\mathrm{r}\\alpha}$, is dependent on the saturation of that phase, and possibly the saturation of other phases if $N_\\mathrm{p} > 2$.  $k_{\\mathrm{r}\\alpha}$ indicates the perturbation of that phase due to the presence of the other phases and may also be obtained from empirical data or by using an analytical model.  In any case, for all $\\alpha$,\n\\begin{equation}\n0 \\leq k_{\\mathrm{r}\\alpha} \\leq 1.\n\\end{equation}\n\n%%%\n%%% SECTION: NUMERICAL MODEL FOR COMPRESSIBLE 2-PHASE FLOW\n%%%\n\\subsection{Numerical Model for Compressible Two-Phase Flow}\n\nWe consider a system of just two compressible, immiscible fluids (i.e.\\ $N_\\mathrm{p} = 2$), a wetting phase (hereafter subscripted $w$) and a non-wetting phase (hereafter subscripted $n$).  We assume negligible solute concentration changes and neglible temperature changes, thus reducing  Eqn. \\ref{e:eos} to,\n\\begin{equation}\\label{e:eos2}\n\\rho_\\alpha = \\tilde{\\rho}_\\alpha + \\gamma_\\alpha^P (P_\\alpha - \\tilde{P}_\\alpha).\n\\end{equation}\n\n\\subsubsection{Temporal discretization}\\label{s:t_discretization}\n\nWe use a backward (implicit) Euler method to discretize the time derivatives in Eqn. \\ref{e:mass},\n\\begin{equation}\n\\frac{\\phi}{\\Delta t} S^n_\\alpha \\left( \\rho^{n+1}_\\alpha-\\rho^n_\\alpha \\right) + \\frac{\\phi}{\\Delta t} \\rho^n_\\alpha \\left( S^{n+1}_\\alpha-S^n_\\alpha \\right) + \\nabla \\cdot \\left( \\tilde{\\rho}^{n+1}_\\alpha \\tilde{S}^{n+1}_\\alpha \\mathbf{v}^{n+1}_{\\phi \\alpha} \\right) - Q^{n+1}_\\alpha = 0,\n\\end{equation}\nwhere $\\Delta t$ is a fixed time increment; variables with a tilde above them ($\\tilde{\\ \\cdot\\ }$) are calculated explicitly using a Picard (fixed point) iteration method (e.g.\\ see Chapter 9 of Press \\textit{et al.}, \\cite{press_2008}); and, superscripts denote time level.\n\nDividing this equation by the mass density of the phase at the current time level, $\\rho^n_\\alpha$, summing over the phases, and making use of Eqn. 
\\ref{e:total} we get,\n\\begin{equation}\\label{e:t_discretization}\n\\sum_{\\alpha = w,n} \\frac{1}{\\rho^n_\\alpha} \\left[ \\frac{\\phi}{\\Delta t} \\rho^{n+1}_\\alpha S^n_\\alpha + \\nabla \\cdot \\left( \\tilde{\\rho}^{n+1}_\\alpha \\tilde{S}^{n+1}_\\alpha \\mathbf{v}^{n+1}_{\\phi \\alpha} \\right) - Q^{n+1}_\\alpha \\right] - \\frac{\\phi}{\\Delta t} = 0.\n\\end{equation}\n\nFinally, substituting Eqn. \\ref{e:eos2} into Eqn. \\ref{e:t_discretization} to eliminate $\\rho^{n+1}_\\alpha$ we get,\n\\begin{equation}\\label{e:cty}\n\\sum_{\\alpha = w,n} \\frac{1}{\\rho^n_\\alpha} \\left\\{ \\frac{\\phi}{\\Delta t} S^n_\\alpha \\left[ \\tilde{\\rho}^{n+1}_\\alpha + \\gamma_\\alpha^P \\left( P^{n+1}_\\alpha - \\tilde{P}^{n+1}_\\alpha \\right) \\right] + \\nabla \\cdot \\left( \\tilde{\\rho}^{n+1}_\\alpha \\tilde{S}^{n+1}_\\alpha \\mathbf{v}^{n+1}_{\\phi \\alpha} \\right) - Q^{n+1}_\\alpha \\right\\} - \\frac{\\phi}{\\Delta t} = 0.\n\\end{equation}\n\nThis equation can be rewritten in terms of the explicit variables, $P^{n+1}_\\alpha$ and $\\mathbf{v}^{n+1}_{\\phi \\alpha}$, as,\n\\begin{eqnarray}\n\\frac{\\phi}{\\Delta t} \\sum_{\\alpha = w,n} \\frac{1}{\\rho^n_\\alpha} \\gamma^P_\\alpha S^n_\\alpha P^{n+1}_\\alpha + \\sum_{\\alpha = w,n} \\frac{1}{\\rho^n_\\alpha} \\nabla \\cdot \\left( \\tilde{\\rho}^{n+1}_\\alpha \\tilde{S}^{n+1}_\\alpha \\mathbf{v}_{\\phi \\alpha}^{n+1} \\right) {} & = & \\nonumber \\\\\n\\frac{\\phi}{\\Delta t} - \\sum_{\\alpha = w,n} \\frac{1}{\\rho^n_\\alpha} \\left[ \\frac{\\phi}{\\Delta t} S^n_\\alpha \\left( \\tilde{\\rho}^{n+1}_\\alpha - \\gamma^P_\\alpha \\tilde{P}^{n+1}_\\alpha \\right) - Q^{n+1}_\\alpha \\right] & {} & \\label{e:cty_t}\n\\end{eqnarray}\n\nLikewise, the temporal discretization of Eqn. \\ref{e:darcy2} at time level $(n+1)$ may be written as,\n\\begin{equation}\\label{e:darcy_t}\n\\tilde{S}^{n+1}_\\alpha \\tilde{\\mathbf{\\Lambda}}^{n+1}_\\alpha \\mathbf{v}^{n+1}_{\\phi\\alpha} +\\left( \\nabla  - \\gamma_\\alpha^P \\mathbf{g} \\right) P^{n+1}_\\alpha = (\\tilde{\\rho}^{n+1}_\\alpha - \\gamma_\\alpha^P \\tilde{P}^{n+1}_\\alpha) \\mathbf{g}.\n\\end{equation}\n\n\\subsubsection{Spatial discretization}\\label{s:x_discretization}\n\nTo discretize Eqns. \\ref{e:cty_t} and \\ref{e:darcy_t} we assume a finite-element mesh such that the velocity field is discretized with $\\mathcal{N}$ discontinuous piecewise-linear (P1$_\\mathrm{DG}$) degrees-of-freedom and the pressure field is discretised with $\\mathcal{M}$ continuous quadratic (P2) degrees-of-freedom (see Figure \\ref{f:discretization}a) as introduced by Cotter \\textit{et al.} \\cite{cotter_2007}, i.e.\n\\begin{equation}\n\\mathbf{v}^n_{\\phi \\alpha} = \\sum_{j=1}^\\mathcal{N} \\hat{\\mathbf{v}}^n_{\\phi \\alpha,j} N_j \\quad \\mathrm{and} \\quad P^n_\\alpha = \\sum_{j=1}^\\mathcal{M} \\hat{P}^n_{\\alpha,j} M_j.\n\\end{equation}\n\n\\begin{figure}[h]\n\\vbox{\n\\hbox{\\hspace{1cm}\n  \\includegraphics[width=15.0cm,height=19.cm]{./doc_figures/discretization}}\n\\vspace{-14cm}}\n%    \\epsfig{file=figures/discretization.eps}\n    \\caption[]{(a) Example of discretization scheme used in formulation involving P1$_\\mathrm{DG}$-P2 elements \\cite{cotter_2007}.  Shaded nodes represent degrees-of-freedom for the scalar field (i.e.\\ $P_\\alpha$) whilst white nodes represent degrees-of-freedom for the vector field (i.e.\\ $\\mathbf{v}_{\\phi \\alpha}$).  $\\Omega$ is the volume of the domain and $\\Gamma$ is its bounding surface.  (b) Control volumes of the same elements, outlined by dashed lines.  
$\\Omega_\\mathrm{CV}$ is the volume of the control volume and $\\Gamma_\\mathrm{CV}$ is its bounding surface.\\label{f:discretization}}\n\\end{figure}\n\nThen, multiplying Eqn. \\ref{e:darcy_t} for each phase by the basis function $N_i$ and integrating over the entire volume of the system, $\\Omega$, we get the following matrix equation:\n\\begin{equation}\\label{e:darcy_matrix}\n\\tilde{\\mathbf{A}}^{n+1} \\mathbf{v}^{n+1} + \\mathbf{B} \\mathbf{p}^{n+1} = \\tilde{\\mathbf{h}}^{n+1},\n\\end{equation}\nwhere,\n\\begin{eqnarray}\n\\mathbf{v}^{n+1} & = & \\left[ \\hat{\\mathbf{v}}^{n+1}_{\\phi w} \\quad \\hat{\\mathbf{v}}^{n+1}_{\\phi n} \\right]^T,\\\\\n\\mathbf{p}^{n+1} & = & \\left[ \\hat{P}^{n+1}_w \\quad \\hat{P}^{n+1}_n \\right]^T,\\\\\n\\tilde{\\mathrm{A}}^{n+1}_{i,j} & = & \\int_\\Omega N_i \\left[ \\begin{array}{c c}\n\\tilde{S}^{n+1}_w \\tilde{\\mathbf{\\Lambda}}^{n+1}_w & 0 \\vspace*{1mm}\\\\\n0 & \\tilde{S}^{n+1}_n \\tilde{\\mathbf{\\Lambda}}^{n+1}_n\n\\end{array} \\right] N_j\\: d\\Omega,\\\\\n\\mathrm{B}_{i,j} & = & \\int_\\Omega N_i \\left[ \\begin{array}{c c}\n\\nabla - \\gamma^P_w \\mathbf{g} & 0 \\vspace*{1mm}\\\\\n0 & \\nabla - \\gamma^P_n \\mathbf{g}\n\\end{array} \\right] M_j\\: d\\Omega,\\\\\n\\mathrm{and} \\quad \\tilde{\\mathrm{h}}^{n+1}_{i} & = & \\int_\\Omega N_i \\left[ \\begin{array}{c}\n(\\tilde{\\rho}^{n+1}_w - \\gamma_w^P \\tilde{P}^{n+1}_w) \\mathbf{g} \\vspace*{2mm}\\\\\n(\\tilde{\\rho}^{n+1}_n - \\gamma_n^P \\tilde{P}^{n+1}_n) \\mathbf{g}\n\\end{array} \\right] d\\Omega.\n\\end{eqnarray}\n\nNow we consider control volumes, $\\Omega_\\mathrm{CV}$, with bounding surfaces $\\Gamma_\\mathrm{CV}$, around each node (see Figure \\ref{f:discretization}b), such that,\n\\begin{equation}\n\\Omega = \\sum_i \\Omega_{\\mathrm{CV}_i}\\quad \\mathrm{and} \\quad P^n_\\alpha = \\sum_{j=1}^\\mathcal{X} \\hat{P}^n_{\\alpha,j} \\chi_j\n\\end{equation}\nwhere $\\chi_j$ is a basis function that is unity on the control volume and zero elsewhere.  We also enforce $\\sum_\\alpha S_\\alpha = 1$ in each control volume as per Eqn. \\ref{e:total}.  Then, multiplying Eqn. 
\\ref{e:cty_t} by the basis function $\\chi_i$ and integrating over the control volumes we get the following matrix equation:\n\\begin{equation}\\label{e:cty_matrix}\n(\\mathbf{M}^n)^T \\mathbf{p}^{n+1} + (\\tilde{\\mathbf{C}}^{n+1})^T \\mathbf{v}^{n+1} = \\phi - (\\tilde{\\mathbf{r}}^{n+1})^T \\mathbf{1}\n\\end{equation}\nwhere,\n\\begin{eqnarray}\n\\mathrm{M}^n_{i,j} & = & \\int_{\\Omega_\\mathrm{CV}} \\chi_i \\left[ \\begin{array}{c}\n\\phi \\gamma_w^P S^n_w / \\rho^n_w \\vspace*{2mm}\\\\\n\\phi \\gamma_n^P S^n_n / \\rho^n_n\n\\end{array} \\right] \\chi_j\\ d\\Omega_\\mathrm{CV},\\\\\n\\tilde{\\mathrm{C}}^{n+1}_{i,j} & = & \\int_{\\Gamma_\\mathrm{CV}} \\chi_i \\left[ \\begin{array}{c}\n\\Delta t\\, \\tilde{\\rho}_w^{n+1} \\tilde{S}_w^{n+1} \\mathbf{n} / \\rho^n_w \\vspace*{2mm}\\\\\n\\Delta t\\, \\tilde{\\rho}_n^{n+1} \\tilde{S}_n^{n+1} \\mathbf{n} / \\rho^n_n\n\\end{array} \\right] N_j\\ d\\Gamma_\\mathrm{CV},\\\\\n\\tilde{\\mathrm{r}}^{n+1}_i & = & \\int_{\\Omega_\\mathrm{CV}} \\chi_i \\left[ \\begin{array}{c}\n\\left\\{ \\phi S^n_w \\left( \\tilde{\\rho}^{n+1}_w - \\gamma^P_w \\tilde{P}^{n+1}_w \\right) - \\Delta t\\, Q^{n+1}_w \\right\\} / \\rho^n_w \\vspace*{2mm}\\\\\n\\left\\{ \\phi S^n_n \\left( \\tilde{\\rho}^{n+1}_n - \\gamma^P_n \\tilde{P}^{n+1}_n \\right) - \\Delta t\\, Q^{n+1}_n \\right\\} / \\rho^n_n\n\\end{array} \\right] d\\Omega_\\mathrm{CV},\\\\\n\\mathrm{and} \\quad \\mathbf{1} & = & \\left[ 1\\quad 1 \\right]^T.\n\\end{eqnarray}\n\n\\subsubsection{Numerical model}\n\nCombining Eqns. \\ref{e:darcy_matrix} and \\ref{e:cty_matrix} we get,\n\\begin{equation}\n\\left[ \\begin{array}{c c}\n\\tilde{\\mathbf{A}}^{n+1} & \\mathbf{B}\\\\\n(\\tilde{\\mathbf{C}}^{n+1})^T & (\\mathbf{M}^n)^T\n\\end{array} \\right]\n\\left[ \\begin{array}{c}\n\\mathbf{v}^{n+1}\\\\\n\\mathbf{p}^{n+1}\n\\end{array} \\right] =\n\\left[ \\begin{array}{c}\n\\tilde{\\mathbf{h}}\\\\\n\\phi - (\\tilde{\\mathbf{r}}^{n+1})^T \\mathbf{1}\n\\end{array} \\right].\n\\end{equation}\nThis equation together with Eqns. \\ref{e:total} and \\ref{e:eos2} (for each of the phases), and appropriate means of calculating $k_{\\mathrm{r}w}(S^n_w)$ and $k_{\\mathrm{r}n}(S^n_n)$ in Eqn. 
\\ref{e:absorption}, constitutes our numerical model for compressible two-phase fluid flow in a porous medium.\n\n%%%\n%%% SECTI0ON: TEST CASES\n\\subsection{Test cases}\n\n\\subsubsection{Buckley-Leverett problem}\n\n\\begin{itemize}\n\\item Example of isothermal multiphase flow.\n\\item Displacement of oil by water in 1D / 2D.\n\\item No gravity.\n\\item No capillary pressure.\n\\item No dispersion.\n\\item Comparison with analytical solution.\n\\end{itemize}\n\n\\subsubsection{McWhorter problem}\n\n\\begin{itemize}\n\\item Example of isothermal multiphase flow.\n\\item Displacement of oil by water in 1D / 2D.\n\\item No gravity.\n\\item Inclusion of capillary pressure.\n\\item Comparison with quasi-analytical solution.\n\\end{itemize}\n\n\\subsubsection{Five-spot waterflood problem}\n\n\\begin{itemize}\n\\item Example of isothermal multiphase flow.\n\\item Displacement of oil by water in 2D.\n\\item One injector, one producer; or, Two injectors, two producers.\n\\item No gravity.\n\\item Water front shape and diffusivity allows comparison of different numerical techniques.\n\\end{itemize}\n\n\\subsubsection{Heat pipe simulation}\n\n\\begin{itemize}\n\\item Example of non-isothermal multiphase/multicomponent flow.\n\\item Influence of heat flux on water-steam system.\n\\item Inclusion of advection, conduction diffusion and capillary effects.\n\\item Comparison with semi-analytical solution.\n\\end{itemize}\n", "meta": {"hexsha": "1e47eb3e07a19df3f45ffcbe89a266e6f943e0ef", "size": 18418, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "software/multifluids_icferst/legacy_reservoir_prototype/doc/model_reservoir_equations.tex", "max_stars_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_stars_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-11T02:39:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-11T03:08:38.000Z", "max_issues_repo_path": "software/multifluids_icferst/legacy_reservoir_prototype/doc/model_reservoir_equations.tex", "max_issues_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_issues_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "software/multifluids_icferst/legacy_reservoir_prototype/doc/model_reservoir_equations.tex", "max_forks_repo_name": "msc-acse/acse-9-independent-research-project-Wade003", "max_forks_repo_head_hexsha": "cfcba990d52ccf535171cf54c0a91b184db6f276", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-21T22:50:19.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-28T17:16:31.000Z", "avg_line_length": 67.963099631, "max_line_length": 665, "alphanum_fraction": 0.6991530025, "num_tokens": 6464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972583359805, "lm_q2_score": 0.682573740869499, "lm_q1q2_score": 0.5942468274131197}}
{"text": "\\section{Precise Definition of a Limit}\\label{sec:LimitsFormal}\r\n\r\n\r\nThis section introduces the formal definition of a limit. Many refer to this as ``the epsilon--delta,'' definition, referring to the letters $\\epsilon$ and $\\delta$ of the Greek alphabet.\\\\\r\n\r\nBefore we give the actual definition, let's consider a few informal ways of describing a limit.  Given a function $y=f(x)$ and an $x$-value, $c$, we say that ``the limit of the function $f$, as $x$ approaches $c$, is a value~$L$'': \r\n\r\n\\begin{description}\r\n\\item[1.]if ``$y$ tends to $L$'' as ``$x$ tends to $c$.''\r\n\\item[2.]if ``$y$ approaches $L$'' as ``$x$ approaches $c$.''\r\n\\item[3.]if ``$y$ is near $L$'' whenever ``$x$ is near $c$.''\r\n\\end{description}\r\n\r\nThe problem with these definitions is that the words ``tends,'' ``approach,'' and especially ``near'' are not exact.  In what way does the variable $x$ tend to, or approach, $c$? How near do $x$ and $y$ have to be to $c$ and $L$, respectively?  \\\\\r\n\r\nThe definition we describe in this section comes from formalizing {\\bf 3}.  A quick restatement gets us closer to what we want:\r\n\r\n\\begin{description}\r\n\\item[$\\textbf{3}^\\prime$.]If $x$ is within a certain \\textit{tolerance level} of $c$, then the corresponding value $y=f(x)$ is within a certain \\textit{tolerance level} of $L$.\r\n\\end{description}\r\n\r\nThe traditional notation for the $x$-tolerance is the lowercase Greek letter delta, or $\\delta$, and the $y$-tolerance is denoted by lowercase epsilon, or $\\epsilon$. One more rephrasing of $\\textbf{3}^\\prime$ nearly gets us to the actual definition:\r\n\r\n\\begin{description}\r\n\\item[$\\textbf{3}^{\\prime \\prime}$.]If $x$ is within $\\delta$ units of $c$, then the corresponding value of $y$ is within $\\epsilon$ units of $L$.\r\n\\end{description}\r\n\r\nWe can write ``$x$ is within $\\delta$ units of $c$'' mathematically as\r\n$$|x-c| < \\delta, \\qquad \\text{which is equivalent to }\\qquad c-\\delta < x < c+\\delta.$$\r\nLetting the symbol ``$\\longrightarrow$'' represent the word ``implies,'' we can rewrite $\\textbf{3}''$ as \r\n$$\r\n|x - c| < \\delta \\longrightarrow  |y - L| < \\epsilon \r\n\\qquad \\textrm{or} \\qquad c - \\delta < x < c + \\delta \\longrightarrow L - \\epsilon < y < L + \\epsilon.\r\n$$\r\nThe point is that $\\delta$ and $\\epsilon$, being tolerances, can be any positive (but typically small) values.  Finally, we have the formal definition of the limit with the notation  seen in the previous section.\r\n\r\n\r\n\r\n\\begin{definition}{The Limit of a Function $f$}{def:limit}\r\n{Let $I$ be an open interval containing $c$, and let $f$ be a function defined on $I$, except possibly at $c$. The \\textit{limit of $f(x)$, as $x$ approaches $c$, is $L$}, denoted by  \r\n$$\\displaystyle \\lim_{x\\rightarrow c} f(x) = L,$$\r\nmeans that given any $\\epsilon > 0$, there exists $\\delta > 0$ such that for all $x\\neq c$,  \r\nif  $|x - c| < \\delta$, then $|f(x) - L| < \\epsilon$.\\index{limit!definition}\r\n}\r\n\\end{definition}\r\n\r\n\r\n(Mathematicians often enjoy writing ideas without using any words. Here is the wordless definition of the limit:\\\\\r\n\r\n$\\displaystyle \\lim_{x\\rightarrow c} f(x) = L \\iff$\r\n$\\forall \\, \\epsilon > 0, \\exists \\, \\delta > 0 \\; s.t. \\;\r\n0<|x - c| < \\delta \\longrightarrow |f(x) - L| < \\epsilon$.\\text{)}\\\\\r\n\r\nNote the order in which $\\epsilon$ and $\\delta$ are given.  
In the definition, the $y$-tolerance $\\epsilon$ is given \\textit{first} and then the limit will exist {\\bf if} we can find an $x$-tolerance $\\delta$ that works.  \r\n\r\nAn example will help us understand this definition.  Note that the explanation is long, but it will take one through all steps necessary to understand the ideas.\\\\\r\n\r\n\r\n\\begin{example}{Evaluating a limit using the definition}{ex_compute_lim1}{\r\nShow that $\\displaystyle \\lim_{x\\rightarrow 4} \\sqrt{x} = 2 $.}\r\n\\end{example}\r\n\r\n\r\n\\begin{solution}\r\n{Before we use the formal definition, let's try some numerical tolerances.  What if the $y$ tolerance is 0.5, or $\\epsilon =0.5$?  How close to 4 does $x$ have to be so that $y$ is within 0.5 units of 2, i.e., $1.5 < y < 2.5$?  In this case, we can proceed as follows:\r\n\\begin{align*}\r\n1.5 &< \\parbox{15pt}{\\centering $y$} < 2.5 \\\\\r\n1.5 &< \\parbox{15pt}{\\centering $\\sqrt{x}$} < 2.5\\\\\r\n1.5^2 &< \\parbox{15pt}{\\centering $x$} < 2.5^2\\\\\r\n2.25 &< \\parbox{15pt}{\\centering $x$} < 6.25.\r\n\\end{align*}\r\n\r\nSo, what is the desired $x$ tolerance?  Remember, we want to find a symmetric interval of $x$ values, namely\r\n$4 - \\delta < x < 4 + \\delta$.  The lower bound of $2.25$ is $1.75$ units from 4; the upper bound of 6.25 is 2.25 units from 4. We need the smaller of these two distances; we must have $\\delta \\leq 1.75$. See Figure \\ref{fig:choose_e_d}.\\\\\r\n\r\n\r\n\\begin{figure}\r\n\t\\centering\r\n\t\\begin{subfigure}[t]{0.5\\textwidth}\r\n\t\t\\begin{tikzpicture}\r\n\t\t\\begin{axis}[minor x tick num=1,axis y line=middle,axis x line=middle,ymin=-.1,ymax=3.1,xmin=-.1,xmax=7.1,xtick={2,4,6},ytick={1,2},name=myplot]\r\n\t\t\\addplot [{\\colorone},thick,smooth] coordinates {(0.,0.) (0.05,0.223607) (0.1,0.316228) (0.15,0.387298) (0.2,0.447214) (0.25,0.5) (0.5,0.707107) (0.75,0.866025) (1.,1.) (1.25,1.11803) (1.5,1.22474) (1.75,1.32288) (2.,1.41421) (2.25,1.5)(2.5,1.58114) (2.75,1.65831) (3.,1.73205) (3.25,1.80278)(3.5,1.87083) (3.75,1.93649) (4.,2.) (4.25,2.06155) (4.5,2.12132)(4.75,2.17945) (5.,2.23607) (5.25,2.29129) (5.5,2.34521) (5.75,2.39792) (6.,2.44949) (6.25,2.5)(6.5,2.54951) (6.75,2.59808) (7.,2.64575) \r\n\t\t};\r\n\t\t\\fill[black] (axis cs:4,2) circle (1pt);\r\n\t\t\\draw[thin,dashed,thick,{\\colortwo}] (axis cs:0,1.5) -- (axis cs:2.25,1.5);\r\n\t\t\\draw[thin,dashed,thick,{\\colortwo}] (axis cs:0,2.5) -- (axis cs:6.25,2.5);\r\n\t\t\\draw (axis cs:-.1,1.75) node [right]{ $\\left.\\rule{0pt}{7.5pt}\\right\\}\\epsilon = .5$};\r\n\t\t\\draw (axis cs:-.1,2.25) node [right]{ $\\left.\\rule{0pt}{7.5pt}\\right\\}\\epsilon = .5$};\r\n\t\t\\fill[{\\colortwo}] (axis cs:2.25,1.5) circle (1pt);\r\n\t\t\\fill[{\\colortwo}] (axis cs:6.25,2.5) circle (1pt);\r\n\t\t\\draw (axis cs:4,1) node [text width = 80pt,align=center] {  Choose $\\epsilon>0$. Then ...};\r\n\t\t\\end{axis}\r\n\t\t%\\fill[{\\colortwo}] (1,1) circle (1pt);\r\n\t\t\\node [right] at (myplot.right of origin) {  $x$};\r\n\t\t\\node [above] at (myplot.above origin) {  $y$};\r\n\t\t\\end{tikzpicture}\r\n        \\label{ }\r\n        \\caption{} \r\n    \\end{subfigure}% \r\n    \\begin{subfigure}[t]{0.5\\textwidth}\r\n    \\begin{tikzpicture}[>=stealth]\r\n    \\begin{axis}[minor x tick num=1,axis y line=middle,axis x line=middle,ymin=-.1,ymax=3.1,xmin=-.1,xmax=7.1,xtick={2,4,6},ytick={1,2},name=myplot]\r\n    \\addplot [{\\colorone},smooth,thick] coordinates {(0.,0.) 
(0.05,0.223607) (0.1,0.316228) (0.15,0.387298) (0.2,0.447214) (0.25,0.5) (0.5,0.707107) (0.75,0.866025) (1.,1.) (1.25,1.11803) (1.5,1.22474) (1.75,1.32288) (2.,1.41421) (2.25,1.5)(2.5,1.58114) (2.75,1.65831) (3.,1.73205) (3.25,1.80278)(3.5,1.87083) (3.75,1.93649) (4.,2.) (4.25,2.06155) (4.5,2.12132)(4.75,2.17945) (5.,2.23607) (5.25,2.29129) (5.5,2.34521) (5.75,2.39792) (6.,2.44949) (6.25,2.5)(6.5,2.54951) (6.75,2.59808) (7.,2.64575) \r\n    };\r\n    \\fill[black] (axis cs:4,2) circle (1pt);\r\n    \\draw[thin,dashed,gray,thick] (axis cs:0,1.5) -- (axis cs:2.25,1.5);\r\n    \\draw[thin,dashed,gray,thick] (axis cs:0,2.5) -- (axis cs:6.25,2.5);\r\n    \\draw[thin,dashed,{\\colortwo},thick] (axis cs:2.25,0) -- (axis cs:2.25,1.5);\r\n    \\draw[thin,dashed,{\\colortwo},thick] (axis cs:6.25,0) -- (axis cs:6.25,2.5);\r\n    \\draw (axis cs:-.1,1.75) node [right]{$\\left.\\rule{0pt}{7.5pt}\\right\\}\\epsilon = .5$};\r\n    \\draw (axis cs:-.1,2.25) node [right]{$\\left.\\rule{0pt}{7.5pt}\\right\\}\\epsilon = .5$};\r\n    \\fill[{\\colortwo}] (axis cs:2.25,1.5) circle (1pt);\r\n    \\fill[{\\colortwo}] (axis cs:6.25,2.5) circle (1pt);\r\n    \\draw (axis cs:3.125,.1) node [above,align=center,text width = 45pt] { width\\\\[-2pt] = 1.75\\\\[-3pt] $\\overbrace{\\rule{30pt}{0pt}}$};\r\n    \\draw (axis cs:5.125,.1) node [above,align=center,text width = 45pt] { width\\\\[-2pt] = 2.25\\\\[-3pt] $\\overbrace{\\rule{39pt}{0pt}}$};\r\n    \\draw (axis cs:4.3,1.35)\tnode [align=center,text width = 80pt] { ... choose $\\delta$ smaller than each of these};\r\n    \\draw [->,thin] (axis cs:4,1) -- (axis cs:3,.8);\r\n    \\draw [->,thin] (axis cs:4,1) -- (axis cs:5,.8);\r\n    \\end{axis}\r\n    %\\fill[{\\colortwo}] (1,1) circle (1pt);\r\n    \\node [right] at (myplot.right of origin) { $x$};\r\n    \\node [above] at (myplot.above origin) { $y$};\r\n    \\end{tikzpicture}\r\n        \\label{ }\r\n        \\caption{With $\\epsilon=0.5$, we pick any $\\delta < 1.75$.}    \r\n    \\end{subfigure} \r\n    \\caption{ Illustrating the $\\epsilon-\\delta$ process. \\label{fig:choose_e_d}}\r\n\\end{figure}\r\n\r\nNOTE: The concept of a \\dfont{one-sided limit} can also be made precise.\r\n\r\n\\begin{definition}{One-sided Limit}{onesided}\r\n Suppose that $f(x)$ is a function. We say that\r\n$\\ds \\lim_{x\\to a^-}f(x)=L$ if for every $\\epsilon>0$ there is a $\\delta >\r\n0$ so that whenever $0 < a-x < \\delta$, $|f(x)-L|<\\epsilon$.  We say\r\nthat $\\ds\\lim_{x\\to a^+}f(x)=L$ if for every $\\epsilon>0$ there is a\r\n$\\delta > 0$ so that whenever $0 < x-a < \\delta$, $|f(x)-L|<\\epsilon$.\r\n\\end{definition}\r\n\r\n%\\mtable{.4}{Illustrating the $\\epsilon-\\delta$ process.}{fig:choose_e_d}{%\r\n%%\t\t\\begin{tabular}{c} \r\n%\t\t\\myincludegraphics{figures/figLimitProof1a}\\\\\r\n%\t\t\\myincludegraphics{figures/figLimitProof1b}\\\\\r\n%\t\t\\noindent\\parbox{200pt}{With $\\epsilon=0.5$, we pick any $\\delta < 1.75$.}\r\n%%\t\t\\end{tabular}\r\n%\t}\r\n\r\n\t\t\r\n\r\nGiven the $y$ tolerance $\\epsilon =0.5$, we have found an $x$ tolerance, $\\delta \\leq 1.75$, such that whenever $x$ is within $\\delta$ units of 4, then $y$ is within $\\epsilon$ units of 2.  That's what we were trying to find.\\\\\r\n  \r\nLet's try another value of $\\epsilon$.\\\\\r\n\r\nWhat if the $y$ tolerance is 0.01, i.e.,  $\\epsilon =0.01$?  How close to 4 does $x$ have to be in order for $y$ to be within 0.01 units of 2 (or $1.99 < y < 2.01$)?  
Again, we just square these values to get
$1.99^2 < x < 2.01^2$, or
$$3.9601 < x < 4.0401.$$
What is the desired $x$ tolerance?  In this case we must have $\delta \leq 0.0399$, which is the minimum distance from 4 of the two bounds given above.  %Note that in some sense, it looks like there are two tolerances (below 4 of 0.0399 units and above 4 of 0.0401 units).  However, we couldn't use the larger value of $0.0401$ for $\delta$ since then the interval for $x$ would be  $3.9599 < x < 4.0401$ resulting in $y$ values of $1.98995 < y < 2.01$ (which contains values NOT within 0.01 units of 2).\\

What we have so far: if $\epsilon =0.5$, then $\delta \leq 1.75$ and if $\epsilon = 0.01$, then $\delta \leq 0.0399$. A pattern is not easy to see, so we switch to general $\epsilon$ and try to determine $\delta$ symbolically.  We start by assuming $y=\sqrt{x}$ is within $\epsilon$ units of 2:

\begin{eqnarray*}
|y - 2| < \epsilon &\\
-\epsilon < y - 2 < \epsilon& \qquad \textrm{(Definition of absolute value)}\\
-\epsilon < \sqrt{x} - 2 < \epsilon  &\qquad (y=\sqrt{x})\\
2 - \epsilon < \sqrt{x} < 2+ \epsilon &\qquad \textrm{ (Add 2)}\\
(2 - \epsilon)^2 < x < (2+ \epsilon) ^2 &\qquad \textrm{ (Square all)}\\
4 - 4\epsilon + \epsilon^2 < x < 4 + 4\epsilon + \epsilon^2 &\qquad \textrm{ (Expand)}\\
4 - (4\epsilon - \epsilon^2) < x < 4 + (4\epsilon + \epsilon^2). &\qquad \textrm{ (Rewrite in the desired form)}
\end{eqnarray*}

The ``desired form'' in the last step is ``$4-\textit{something} < x < 4 +\textit{something}$.''
Since we want this last interval to describe an $x$ tolerance around 4, we have that either $\delta \leq 4\epsilon - \epsilon^2$ or $\delta \leq 4\epsilon + \epsilon^2$, whichever is smaller: $$\delta \leq \min\{4\epsilon - \epsilon^2, 4\epsilon + \epsilon^2\}.$$  Since $\epsilon > 0$, the minimum is $\delta \leq 4\epsilon - \epsilon^2$.  That's the formula: given an $\epsilon$, set $\delta \leq 4\epsilon-\epsilon^2$.

We can check this for our previous values.  If $\epsilon=0.5$, the formula gives
$\delta \leq 4(0.5) - (0.5)^2 = 1.75$ and when $\epsilon=0.01$, the formula gives $\delta \leq 4(0.01) - (0.01)^2 = 0.0399$.
%\drawexampleline

So given any $\epsilon >0$, set $\delta \leq 4\epsilon - \epsilon^2$. Then if $|x-4|<\delta$ (and $x\neq 4$), then $|f(x) - 2| < \epsilon$,  satisfying the definition of the limit.  We have shown formally (and finally!) that $\displaystyle \lim_{x\rightarrow 4} \sqrt{x} = 2 $.
}
\end{solution}

%FOOTNOTE $**$: Actually, it is a pain, but this won't work if $\epsilon \ge 4$.  This shouldn't really occur since $\epsilon$ is supposed to be small, but it could happen.  In the cases where $\epsilon \ge 4$, just take $\delta = 1$ and you'll be fine.

NOTE: The concept of a \dfont{one-sided limit} can also be made precise.

\begin{definition}{One-sided Limit}{onesided}
 Suppose that $f(x)$ is a function. We say that
$\ds \lim_{x\to a^-}f(x)=L$ if for every $\epsilon>0$ there is a $\delta >
0$ so that whenever $0 < a-x < \delta$, $|f(x)-L|<\epsilon$.  We say
that $\ds\lim_{x\to a^+}f(x)=L$ if for every $\epsilon>0$ there is a
$\delta > 0$ so that whenever $0 < x-a < \delta$, $|f(x)-L|<\epsilon$.
\end{definition}
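For instance, here is a quick use of the one-sided definition, in the same spirit as the example above: to show $\ds \lim_{x\to 0^+}\sqrt{x} = 0$, let $\epsilon>0$ be given and set $\delta = \epsilon^2$. Then whenever $0 < x - 0 < \delta$, we have
$$|\sqrt{x} - 0| = \sqrt{x} < \sqrt{\delta} = \epsilon,$$
which is exactly what the definition asks for.\\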
The previous example was a little long in that we sampled a few specific cases of $\epsilon$ before handling the general case. Normally this is not done.  The previous example is also a bit unsatisfying in that $\sqrt{4}=2$; why work so hard to prove something so obvious? Many $\epsilon$-$\delta$ proofs are long and difficult to do. In this section, we will focus on examples where the answer is, frankly, obvious, because the non--obvious examples are even harder. In the next section we will learn some theorems that allow us to evaluate limits \textit{analytically}, that is, without using the $\epsilon$-$\delta$ definition.\\
%That is why theorems about limits are so useful! After doing a few more $\epsilon$--$\delta$ proofs, you will really appreciate the analytical ``short cuts'' found in the next section.\\

\begin{example}{Evaluating a limit using the definition}{ex_compute_lim2}{
Show that $\displaystyle \lim_{x\rightarrow 2} x^2 = 4$.}
\end{example}

\begin{solution}
{Let's do this example symbolically from the start.  Let $\epsilon > 0$ be given; we want $|y-4| < \epsilon$, i.e.,  $|x^2-4| < \epsilon$.  How do we find $\delta$ such that when $|x-2| < \delta$, we are guaranteed that $|x^2-4|<\epsilon$?% for some $\delta$ (in terms of $\epsilon$)?

This is a bit trickier than the previous example, but let's start by noticing that
$|x^2-4| = |x-2|\cdot|x+2|$.  Consider:\\
\begin{equation} |x^2-4| < \epsilon \longrightarrow |x-2|\cdot|x+2| < \epsilon \longrightarrow |x-2| < \frac{\epsilon}{|x+2|}.\label{eq:limit1}\end{equation}
Could we not set $\displaystyle \delta = \frac{\epsilon}{|x+2|}$?

We are close to an answer, but the catch is that $\delta$ must be a \textit{constant} value (so it can't contain $x$).  There is a way to work around this, but we do have to make an assumption.  Remember that $\epsilon$ is supposed to be a small number, which implies that $\delta$ will also be a small value.  In particular, we can (probably) assume that $\delta < 1$.  If this is true, then $|x-2| < \delta$ would imply that $|x-2| < 1$, giving $1 < x < 3$.

Now, back to the fraction $\displaystyle \frac{\epsilon}{|x+2|}$.  If $1<x<3$, then $3<x+2<5$ (add 2 to all terms in the inequality).  Taking reciprocals, we have
\begin{align}
\frac15 <& \frac1{|x+2|} < \frac 13 & \text{which implies}\notag\\
\frac15 <& \frac1{|x+2|} & \text{which implies}\notag\\
\frac\epsilon5<&\frac{\epsilon}{|x+2|}.\label{eq:limit2}
\end{align}

%$\displaystyle \frac{1}{5}<\frac{1}{|x+2|}<\frac{1}{3}$ so that, in particular,
%\begin{equation} \frac{\epsilon}{5}<\frac{\epsilon}{|x+2|}.\label{eq:limit2}\end{equation}
This suggests that we set
$\displaystyle \delta \leq \frac{\epsilon}{5}$. To see why, let's consider what follows when we assume $|x-2|<\delta$:

\small
\begin{align*}
|x - 2| &< \delta &\\
|x - 2| &< \frac{\epsilon}{5}&  \text{\small(Our choice of $\delta$)}\\
|x - 2|\cdot|x + 2| &< |x + 2|\cdot\frac{\epsilon}{5}&  \text{\small(Multiply by $|x+2|$)}\\
|x^2 - 4|&< |x + 2|\cdot\frac{\epsilon}{5}&  \text{\small(Combine left side)}\\
|x^2 - 4|&< |x + 2|\cdot\frac{\epsilon}{5}< |x + 2|\cdot\frac{\epsilon}{|x+2|}=\epsilon &
\text{\small(Using (\ref{eq:limit2}) as long as $\delta <1$)}
\end{align*}
\normalsize

We have arrived at $|x^2 - 4|<\epsilon$ as desired.  Note again, in order to make this happen we needed $\delta$ to first be less than 1.  That is a safe assumption; we want $\epsilon$ to be arbitrarily small, forcing $\delta$ to also be small.

We have also picked $\delta$ to be smaller than ``necessary.'' We could get by with a slightly larger $\delta$, as shown in Figure \ref{fig:limit_eover5}. The dashed outer lines show the boundaries defined by our choice of $\epsilon$.
The dotted inner lines show the boundaries defined by setting $\delta = \epsilon/5$. Note how these dotted lines are within the dashed lines. That is perfectly fine; by choosing $x$ within the dotted lines we are guaranteed that $f(x)$ will be within $\epsilon$ of 4.%If the value we eventually used for $\delta$, namely $\epsilon/5$, is not less than 1, this proof won't work.  For the final fix, we instead set $\delta$ to be the minimum of 1 and $\epsilon/5$. This way all calculations above work.

\mfigure{.8}{Choosing $\delta = \epsilon/5$ in Example \ref{exa:ex_compute_lim2}.}{fig:limit_eover5}{ %
\begin{tikzpicture}
\begin{axis}[axis y line=middle,axis x line=middle,ymin=2,ymax=6,xmin=1.3,xmax=3.3,xtick={2},ytick={4},name=myplot]
\addplot [{\colorone},smooth,thick] coordinates {(0.,0.) (0.25,0.0625) (0.5,0.25) (0.75,0.5625) (1.,1.) (1.25,1.5625)
(1.5,2.25) (1.75,3.0625) (2.,4.) (2.25,5.0625) (2.5,6.25)
(2.75,7.5625) (3.,9.)
};
\fill[black] (axis cs:2,4) circle (1pt);
\draw[thin,dashed,thick,{\colortwo}] (axis cs:0,3) -- (axis cs:1.73,3);
\draw[thin,dashed,thick,{\colortwo}] (axis cs:0,5) -- (axis cs:2.24,5);
\draw[thin,dashed,thick,{\colortwo}] (axis cs:2.24,0) -- (axis cs:2.24,5);
\draw[thin,dashed,thick,{\colortwo}] (axis cs:1.73,0) -- (axis cs:1.73,3);
\draw[dotted,ultra thick,gray] (axis cs:2.2,0) -- (axis cs:2.2,4.84);
\draw[dotted,ultra thick,gray] (axis cs:1.8,0) -- (axis cs:1.8,3.24);
\draw (axis cs:-.1,4.5) node [right]{ $\left.\rule{0pt}{12pt}\right\}\epsilon$};
\draw (axis cs:2.1,2.15) node [above] { $\delta$};
\draw (axis cs:2.1,2.05) node [above,scale=.35] {$\overbrace{\phantom{..........}}$};
\draw (axis cs:2.3,4.5)  -- (axis cs: 3.2,4.5);
\draw (axis cs:2.3,4.3)  -- (axis cs: 2.5,4.3);
\draw (axis cs:2.75,4.7) node [align=center]{  length of $\epsilon$};
\draw (axis cs:2.75,3.9) node [align=center,text width=35pt]{ length of\\[-2pt] $\delta=\epsilon/5$};
\fill[{\colortwo}] (axis cs:1.73,3) circle (1pt);
\fill[{\colortwo}] (axis cs:2.24,5) circle (1pt);
\end{axis}
%\fill[{\colortwo}] (1,1) circle (1pt);
\node [right] at (myplot.right of origin) {  $x$};
\node [above] at (myplot.above origin) { $y$};
\end{tikzpicture}}

In summary, given $\epsilon > 0$, set $\delta \leq \epsilon/5$ (taking $\delta = \min\{1,\epsilon/5\}$ also honors the assumption, made above, that $\delta < 1$).  Then $|x - 2| < \delta$ implies
$|x^2 - 4|< \epsilon$ (i.e. $|y - 4|< \epsilon$) as desired.  This shows that $\displaystyle \lim_{x\rightarrow 2} x^2 = 4 $. Figure \ref{fig:limit_eover5} gives a visualization of this; by restricting $x$ to values within $\delta = \epsilon/5$ of 2, we see that $f(x)$ is within $\epsilon$ of $4$.
}
\end{solution}
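As a quick numerical check of this choice, in the spirit of the first example: with $\epsilon = 0.5$ we get $\delta = 0.1$, and $1.9 < x < 2.1$ gives $3.61 < x^2 < 4.41$, safely within $0.5$ units of 4.\\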
It probably seems obvious that $\ds \lim_{x\to2}x^2=4$, and it is worth
examining more closely why it seems obvious. If we write $\ds x^2=x\cdot x$,
and ask what happens when $x$ approaches 2, we might say something
like, ``Well, the first $x$ approaches 2, and the second $x$
approaches 2, so the product must approach $2\cdot2$.'' In fact this is
pretty much right on the money, except for that word ``must.'' Is it
really true that if $x$ approaches $a$ and $y$ approaches $b$ then
$xy$ approaches $ab$? It is, but it is not really obvious, since $x$
and $y$ might be quite complicated. The good news is that we can see
that this is true once and for all, and then we don't have to worry
about it ever again. When we say that $x$ might be ``complicated'' we
really mean that in practice it might be a function. Here, then, is what
we want to know:

\begin{theorem}{Limit Product}{LimitProduct}
Suppose $\ds \lim_{x\to a} f(x)=L$ and $\ds \lim_{x\to a}g(x)=M$. Then
$\ds \lim_{x\to a} f(x)g(x) = LM$.
\end{theorem}

\begin{proof}
We must use the Precise Definition of a Limit to prove the Product Law for Limits.
So given any $\epsilon$ we need to find a $\delta$ so that $0<|x-a|<\delta$ implies
$|f(x)g(x)-LM|<\epsilon$. What do we have to work with? We know that we can make
$f(x)$ close to $L$ and $g(x)$ close to $M$, and we have to somehow connect these
facts to make $f(x)g(x)$ close to $LM$.

We use, as is often the case, a little algebraic trick:
\begin{align*}
|f(x)g(x)-LM|&=|f(x)g(x)-f(x)M+f(x)M-LM|	\\
&=|f(x)(g(x)-M)+(f(x)-L)M|	\\
&\leq |f(x)(g(x)-M)|+|(f(x)-L)M|	\\
&=|f(x)||g(x)-M|+|f(x)-L||M|.
\end{align*}

This is all straightforward except perhaps for the ``$\leq$''. That is an example
of the \emph{triangle inequality}, which says that if $a$ and $b$ are any real numbers
then $|a+b|\leq|a|+|b|$. If you look at a few examples, using positive and negative numbers
in various combinations for $a$ and $b$, you should quickly understand why this is true;
for instance, $|3+(-5)| = 2$, while $|3|+|-5| = 8$.
We will not prove it formally.

Suppose $\epsilon>0$. Since $\ds\lim_{x\to a} f(x)=L$, there is a value $\delta_1$
such that $0<|x-a|<\delta_1$ implies $\ds|f(x)-L|<\frac{\epsilon}{2(1+|M|)}$.
This means that $0<|x-a|<\delta_1$ implies $|f(x)-L||M|<|f(x)-L|(1+|M|)<\epsilon/2$.

Now we focus our attention on the other term in the inequality, $|f(x)||g(x)-M|$.
We can make $|g(x)-M|$ smaller than any fixed number by making $x$
close enough to $a$; what we would like is $|g(x)-M|<\epsilon/(2|f(x)|)$, but
unfortunately $\epsilon/(2|f(x)|)$ is not a fixed
number, since $x$ is a variable. Here we need another little trick,
just like the one we used in analyzing $x^2$. We can find a $\delta_2$
so that $|x-a|<\delta_2$ implies that $|f(x)-L|<1$, meaning that $L-1
< f(x) < L+1$. This means that $|f(x)|<N$, where $N$ is either $|L-1|$
or $|L+1|$, depending on whether $L$ is negative or positive. The
important point is that $N$ doesn't depend on $x$. Finally, we know that
there is a $\delta_3$ so that $0<|x-a|<\delta_3$ implies
$|g(x)-M|<\epsilon/(2N)$. Let $\delta$ be the smallest of $\delta_1$, $\delta_2$, and
$\delta_3$. Then $|x-a|<\delta$ implies that
$|f(x)-L|<\epsilon/(2(1+|M|))$, $|f(x)|<N$, and
$|g(x)-M|<\epsilon/(2N)$. Then
\begin{align*}
|f(x)g(x)-LM|&\leq |f(x)||g(x)-M|+|f(x)-L||M|	\\
&< N\frac{\epsilon}{2N}+\frac{\epsilon}{2(1+|M|)}(1+|M|)	\\
&=\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.
\end{align*}
This is just what we needed, so by the official definition,
$\ds \lim_{x\to a}f(x)g(x)=LM$.
\end{proof}
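As an immediate payoff, the theorem lets us build new limits out of easy ones. For example, the definition shows directly that $\ds \lim_{x\to 2} x = 2$ (take $\delta = \epsilon$), so the theorem with $f(x)=g(x)=x$ gives
$$\lim_{x\to 2} x^2 = 2\cdot 2 = 4,$$
recovering Example \ref{exa:ex_compute_lim2} with no tolerance--hunting at all.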
Examples \ref{exa:ex_compute_lim1} and \ref{exa:ex_compute_lim2} determine $\delta$ using reasoning that can be difficult to reproduce when one is first learning this topic. For instance, Equation \eqref{eq:limit2} is used based on the following facts:
	\begin{enumerate}
	\item		We want $\delta \leq \frac{\epsilon}{|x+2|}$. Since we cannot let $\delta$ vary according to $x$,
	\item		we notice that $|x+2|<5$ for the values we are interested in, so
	\item		$\frac{\epsilon}{5} < \frac{\epsilon}{|x+2|}$ and setting $\delta<\frac{\epsilon}{5}$ ensures that $\delta<\frac{\epsilon}{|x+2|}$.
	\end{enumerate}

The following theorem offers some inequalities that are useful when creating $\epsilon$--$\delta$ proofs; a numerical illustration follows it.

\begin{theorem}{Power Function Inequalities}{power_ineq}
{Let $x>y>0$ and $n>1$. The following inequalities hold:
\begin{itemize}
\item		$\ds x^n+y^n < (x+y)^n$
\item		$\ds (x-y)^n < x^n - y^n$
\item		$\ds\sqrt[n]{x+y} < \sqrt[n]{x}+\sqrt[n]{y}$
\item		$\ds \sqrt[n]{x}-\sqrt[n]{y}<\sqrt[n]{x-y}$
\end{itemize}
}
\end{theorem}

%
%We revisit the limit found in Example \ref{ex_compute_lim2} to demonstrate the use of Theorem \ref{thm:power_ineq}.
%
%\example{ex_compute_lim2b}{Show that $\ds \lim_{x\to 2} x^2=4$ using Theorem \ref{thm:power_ineq}.}
%{We start the same as before; let $\epsilon >0$ be given. We want to find $\delta>0$ such that $|x^2-4|<\epsilon$ whenever $|x-2|<\delta$. Consider the following inequalities:
%\begin{align*}
%|x^2-4|<\epsilon & \\
%-\epsilon < x^2-4 < \epsilon & \qquad \text{\small (Definition of abs. value)}\\
%4-\epsilon < x^2 < 4+\epsilon & \qquad \text{\small (add 4)}\\
%\sqrt{4-\epsilon} < x < \sqrt{4+\epsilon} & \qquad \text{\small (Take square roots)}\\
%\sqrt{4}-\sqrt{\epsilon} < \sqrt{4-\epsilon} < x < \sqrt{4+\epsilon}< \sqrt{4}+\sqrt{\epsilon} & \qquad \text{\small (apply Theorem \ref{thm:power_ineq})}\\
%2-\sqrt{\epsilon} < x < 2 + \sqrt{\epsilon} & \text{\small (Simplify)}
%\end{align*}
%This implies that when $x$ is within $\sqrt{\epsilon}$ of $2$, $x^2$ will be within $\epsilon$ of $4$. Thus set $\delta = \sqrt{\epsilon}$. We can now start with $|x-2|<\delta$ and reverse the above steps:
%\begin{align*}
%|x-2| < \delta & \\
%-\delta < x-2 < \delta & \qquad \text{\small (Definition of abs. value)}\\
%2-\delta < x < 2+ \delta & \\
%2-\sqrt{\epsilon} < x < 2+\sqrt{\epsilon} & \\
%\end{align*}
%}
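To get a feel for these inequalities, take $x=16$, $y=9$ and $n=2$: then
$$16^2 + 9^2 = 337 < 625 = (16+9)^2, \qquad \sqrt{16+9} = 5 < 7 = \sqrt{16}+\sqrt{9},$$
and similarly $(16-9)^2 = 49 < 175 = 16^2 - 9^2$, while $\sqrt{16}-\sqrt{9} = 1 < \sqrt{16-9} = \sqrt{7}$.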
Make note of the general pattern exhibited in these last two examples. In some sense, each starts out ``backwards.'' That is, while we want to
\begin{enumerate}
	\item start with $|x-c|<\delta$ and conclude that
	\item $|f(x)-L|<\epsilon$,
\end{enumerate}
we actually start by assuming
\begin{enumerate}
	\item $|f(x)-L|<\epsilon$, then perform some algebraic manipulations to give an inequality of the form
	\item $|x-c|<$ \textit{something}.
\end{enumerate}
%then perform some algebraic manipulations to transform that inequality into an inequality where the ``less than'' side is $|x-c|$.
When we have properly done this, the \textit{something} on the ``greater than'' side of the inequality becomes our $\delta$. We can refer to this as the ``scratch--work'' phase of our proof. Once we have $\delta$, we can formally start with $|x-c|<\delta$ and use algebraic manipulations to conclude that $|f(x)-L|<\epsilon$, usually by using the same steps of our ``scratch--work'' in reverse order.

We highlight this process in the following example.\\

\begin{example}{Evaluating a limit using the definition}{ex_compute_lim4}
{Prove that $\ds \lim_{x\rightarrow 1}x^3-2x = -1$.}
\end{example}

\begin{solution}
{We start our scratch--work by considering $|f(x) - (-1)| < \epsilon$:
\begin{align}
|f(x)-(-1)| &< \epsilon \notag\\
|x^3-2x + 1|&< \epsilon & \text{(Now factor)}\notag\\
|(x-1)(x^2+x-1)|&< \epsilon \notag\\
|x-1| &<\frac{\epsilon}{|x^2+x-1|}.\label{eq:lim4}
\end{align}
We are at the phase of saying that $|x-1|<$ \textit{something}, where \textit{something}$=\epsilon/|x^2+x-1|$. We want to turn that \textit{something} into $\delta$.

Since $x$ is approaching 1, we are safe to assume that $x$ is between 0 and 2. So
\begin{align*}
0&< x<2  & \\
0&< x^2<4.&\text{(squared each term)}\\
\intertext{Since $0<x<2$, we can add $0$, $x$ and $2$, respectively, to each part of the inequality and maintain the inequality.}
0&< x^2+x<6 &\\
-1&< x^2+x-1<5.&\text{(subtracted 1 from each part)}
\end{align*}

In Equation \eqref{eq:lim4}, we wanted $|x-1|<\epsilon/|x^2+x-1|$. The above shows that given any $x$ in $[0,2]$ we have $|x^2+x-1|<5$, so
\begin{align}
|x^2+x-1| &< 5 &\text{which implies that}\notag\\
\frac15 &< \frac{1}{|x^2+x-1|} &\text{which implies that}\notag\\
\frac{\epsilon}5 &< \frac{\epsilon}{|x^2+x-1|}.\label{eq:lim4b}
\end{align}
 So we set $\delta \leq \epsilon/5$ (more carefully, $\delta = \min\{1,\epsilon/5\}$, so that the assumption $0<x<2$ is also guaranteed). This ends our scratch--work, and we begin the formal proof (which also helps us understand why this was a good choice of $\delta$).

Given $\epsilon$, let $\delta \leq \epsilon/5$. We want to show that when $|x-1|<\delta$, then $|(x^3-2x)-(-1)|<\epsilon$. We start with $|x-1|<\delta$:
\begin{align*}
|x-1| &< \delta \\
|x-1| &< \frac{\epsilon}5\\
|x-1| &< \frac\epsilon5 < \frac{\epsilon}{|x^2+x-1|} & \text{(for $x$ near 1, from Equation \eqref{eq:lim4b})}\\
|x-1|\cdot |x^2+x-1| &< \epsilon\\
|x^3-2x+1| &< \epsilon\\
|(x^3-2x)-(-1)| &<\epsilon,
\end{align*}
which is what we wanted to show. Thus $\ds \lim_{x\to 1}x^3-2x = -1$.
}
\end{solution}
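A quick spot check of this result: with $\epsilon = 1$ we get $\delta = 1/5$, and $x=0.9$ is within $\delta$ of 1; indeed, $f(0.9) = 0.9^3 - 2(0.9) = -1.071$, and $|-1.071 - (-1)| = 0.071 < 1$.\\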
We illustrate evaluating limits once more.\\

\begin{example}{Evaluating a limit using the definition}{ex_compute_lim3}
{Prove that $\displaystyle \lim_{x\rightarrow 0} e^x = 1 $.}
\end{example}

\begin{solution}
{Symbolically, we want to take the equation $|e^x - 1| < \epsilon$ and unravel it to the form $|x-0| < \delta$.  Here is our scratch--work:
\begin{eqnarray*}
|e^x - 1| < \epsilon&\\
-\epsilon < e^x - 1 < \epsilon& \qquad \textrm{(Definition of absolute value)}\\
1-\epsilon < e^x < 1+\epsilon & \qquad \textrm{(Add 1)}\\
\ln(1-\epsilon) < x < \ln(1+\epsilon) & \qquad \textrm{(Take natural logs)}
\end{eqnarray*}
Making the safe assumption that $\epsilon<1$ ensures the last inequality is valid (i.e., so that $\ln (1-\epsilon)$ is defined); if it happens that $\epsilon \ge 1$, we can simply work with a new value $\epsilon_1 = \min\{\epsilon, 1/2\}$, which is guaranteed to be smaller than the original $\epsilon$ \textit{and} less than 1, and the calculations below go through unchanged.  We can then set $\delta$ to be the minimum of $|\ln(1-\epsilon)|$ and $\ln(1+\epsilon)$; i.e.,
$$\delta = \min\{|\ln(1-\epsilon)|, \ln(1+\epsilon)\} = \ln(1+\epsilon).$$
{\textbf{Note:} Recall $\ln 1= 0$ and $\ln x<0$ when $0<x<1$. So $\ln (1-\epsilon) <0$, hence we consider its absolute value.}

Now, we work through the actual proof:

\begin{align*}
|x - 0|&<\delta\\
-\delta &< x < \delta &  \textrm{(Definition of absolute value)}\\
-\ln(1+\epsilon) &< x < \ln(1+\epsilon) &\\
\ln(1-\epsilon) &< x < \ln(1+\epsilon) & \text{(since $\ln(1-\epsilon) < -\ln(1+\epsilon)$)}\\
\intertext{The above line is true by our choice of $\delta$ and by the fact that since $|\ln(1-\epsilon)|>\ln(1+\epsilon)$ and $\ln(1-\epsilon)<0$, we know $\ln(1-\epsilon) < -\ln(1+\epsilon )$.}
1-\epsilon &< e^x < 1+\epsilon &  \textrm{(Exponentiate)}\\
-\epsilon &< e^x - 1 < \epsilon &  \textrm{(Subtract 1)}
\end{align*}

In summary, given $\epsilon > 0$, let $\delta = \ln(1+\epsilon)$. Then $|x - 0| < \delta$ implies $|e^x - 1|< \epsilon$ as desired.  We have shown that $\displaystyle \lim_{x\rightarrow 0} e^x = 1 $.
}
\end{solution}

We note that we could actually show that $\lim_{x\rightarrow c} e^x = e^c $ for any constant $c$.  We do this by factoring out $e^c$ from both sides, leaving us to show $\lim_{x\rightarrow c} e^{x-c} = 1 $ instead.  By using the substitution $u=x-c$, this reduces to showing $\lim_{u\rightarrow 0} e^u = 1 $, which we just did in the last example.  As an added benefit, this shows that in fact the function $f(x)=e^x$ is \textit{continuous} at all values of $x$, an important concept we will define in Section \ref{sec:Continuity}.\\
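In symbols, that reduction looks like this: since $e^c > 0$,
$$|e^x - e^c| = e^c\,|e^{x-c} - 1|,$$
so $|e^x - e^c| < \epsilon$ exactly when $|e^{x-c} - 1| < \epsilon e^{-c}$. Writing $u = x-c$ and applying the last example with the tolerance $\epsilon e^{-c}$ supplies $\delta = \ln\left(1+\epsilon e^{-c}\right)$; then $|x - c| < \delta$ forces $|e^x - e^c| < \epsilon$.\\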
This formal definition of the limit is not an easy concept to grasp. Our examples are actually ``easy'' examples, using ``simple'' functions like polynomials, square--roots and exponentials. It is very difficult to prove, using the techniques given above, that $\ds \lim_{x\to 0}(\sin x)/x = 1$, as we approximated in the previous section.

There is hope. The next section shows how one can evaluate complicated limits using certain basic limits as building blocks. While limits are an incredibly important part of calculus (and hence much of higher mathematics), rarely are limits evaluated using the definition. Rather, the techniques of the following section are employed.

\clearpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Opensolutionfile{solutions}[ex]
\section*{Exercises for Section \ref{sec:LimitsFormal}}

\begin{enumialphparenastyle}

% % % % % % % % % % %
\begin{ex}
\begin{enumerate}
\item {What is wrong with the following ``definition'' of a limit?
	\begin{quote}
``The limit of $f(x)$, as $x$ approaches $a$, is $K$'' means that given any $\delta>0$ there exists $\epsilon>0$ such that whenever $|f(x)-K|< \epsilon$, we have $|x-a|<\delta$.
	\end{quote}
}

\item {Which is given first in establishing a limit, the $x$--tolerance or the $y$--tolerance?}

\item {T/F: $\epsilon$ must always be positive.}

\item {T/F: $\delta$ must always be positive.}
\end{enumerate}

\begin{sol}
\begin{enumerate}
\item {$\epsilon$ should be given first, and the restriction $|x-a|<\delta$ implies $|f(x)-K|< \epsilon$, not the other way around.}
\item {The $y$--tolerance.}
\item {T}
\item {T}
\end{enumerate}
\end{sol}

\end{ex}
% % % % % % % % % % % %
% % % % % % % % % % %
\begin{ex}
Prove the given limit using an $\epsilon - \delta$ proof.
\begin{enumerate}

\item {$\displaystyle \lim_{x\to 2} 5 = 5$}

\item {$\displaystyle \lim_{x\to 5} 3-x = -2$}

\item {$\displaystyle \lim_{x\to 3} x^2-3 = 6$}

\item {$\displaystyle \lim_{x\to 2} x^3-1 = 7$}
\item {$\displaystyle \lim_{x\to 0} e^{2x}-1 = 0$}

\item {$\displaystyle \lim_{x\to 0} \sin x= 0$ (Hint: use the fact that $|\sin x| \leq |x|$, with equality only when $x=0$.)}
\item {$\displaystyle \lim_{x\to 4} x^2+x-5 = 15$}

\end{enumerate}

\begin{sol}
\begin{enumerate}
\item {Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-2|<\delta$, $|f(x)-5|<\epsilon$. However, since $f(x)=5$, a constant function, the latter inequality is simply $|5-5|<\epsilon$, which is always true. Thus we can choose any $\delta$ we like; we arbitrarily choose $\delta =\epsilon$.
}
\item {Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-5|<\delta$, $|f(x)-(-2)|<\epsilon$.

Consider $|f(x)-(-2)|<\epsilon$:
\begin{gather*}
|f(x) + 2 | < \epsilon \\
|(3-x) + 2 |<\epsilon \\
| 5-x | < \epsilon \\
-\epsilon < 5-x < \epsilon \\
-\epsilon < x-5 < \epsilon.
\end{gather*}
This implies we can let $\delta =\epsilon$. Then:
\begin{gather*}
|x-5|<\delta \\
-\delta < x-5 < \delta\\
-\epsilon < x-5 < \epsilon\\
-\epsilon < (x-3)-2 < \epsilon \\
-\epsilon < (-x+3)-(-2) < \epsilon \\
|3-x - (-2)| < \epsilon,
\end{gather*}
which is what we wanted to prove.
}
\item {Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-3|<\delta$, $|f(x)-6|<\epsilon$.

Consider $|f(x)-6|<\epsilon$, keeping in mind we want to make a statement about $|x-3|$:
\begin{gather*}
|f(x) -6 | < \epsilon \\
|x^2-3 -6 |<\epsilon \\
| x^2-9 | < \epsilon \\
| x-3 |\cdot|x+3| < \epsilon \\
| x-3 | < \epsilon/|x+3|
\end{gather*}
Since $x$ is near 3, we can safely assume that, for instance, $2<x<4$. Thus
\begin{gather*}
2+3<x+3<4+3 \\
5 < x+3 < 7 \\
\frac{1}{7} < \frac{1}{x+3} < \frac{1}{5} \\
\frac{\epsilon}{7} < \frac{\epsilon}{x+3} < \frac{\epsilon}{5}
\end{gather*}
Let $\delta =\frac{\epsilon}{7}$. Then:
\begin{gather*}
|x-3|<\delta \\
|x-3| < \frac{\epsilon}7\\
|x-3| < \frac{\epsilon}{x+3}\\
|x-3|\cdot|x+3| < \frac{\epsilon}{x+3}\cdot|x+3|
\end{gather*}
Assuming $x$ is near 3, $x+3$ is positive and we can drop the absolute value signs on the right.
\begin{gather*}
|x-3|\cdot|x+3| < \frac{\epsilon}{x+3}\cdot(x+3)\\
|x^2-9| < \epsilon\\
|(x^2-3) - 6| < \epsilon,
\end{gather*}
which is what we wanted to prove.
}
\item
{Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-2|<\delta$, $|f(x)-7|<\epsilon$.

Consider $|f(x)-7|<\epsilon$, keeping in mind we want to make a statement about $|x-2|$:
\begin{gather*}
|f(x) -7 | < \epsilon \\
|x^3-1 -7 |<\epsilon \\
| x^3-8 | < \epsilon \\
| x-2 |\cdot|x^2+2x+4| < \epsilon \\
| x-2 | < \epsilon/|x^2+2x+4|
\end{gather*}
Since $x$ is near 2, we can safely assume that, for instance, $1<x<3$. Thus
\begin{gather*}
1^2+2\cdot1+4<x^2+2x+4<3^2+2\cdot3+4 \\
7 < x^2+2x+4 < 19 \\
\frac{1}{19} < \frac{1}{x^2+2x+4} < \frac{1}{7} \\
\frac{\epsilon}{19} < \frac{\epsilon}{x^2+2x+4} < \frac{\epsilon}{7}
\end{gather*}
Let $\delta =\frac{\epsilon}{19}$. Then:
\begin{gather*}
|x-2|<\delta \\
|x-2| < \frac{\epsilon}{19}\\
|x-2| < \frac{\epsilon}{x^2+2x+4}\\
|x-2|\cdot|x^2+2x+4| < \frac{\epsilon}{x^2+2x+4}\cdot|x^2+2x+4|
\end{gather*}
Assuming $x$ is near 2, $x^2+2x+4$ is positive and we can drop the absolute value signs on the right.
\begin{gather*}
|x-2|\cdot|x^2+2x+4| < \frac{\epsilon}{x^2+2x+4}\cdot(x^2+2x+4)\\
|x^3-8| < \epsilon\\
|(x^3-1) - 7| < \epsilon,
\end{gather*}
which is what we wanted to prove.
}
\item {Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-0|<\delta$, $|f(x)-0|<\epsilon$.

Consider $|f(x)-0|<\epsilon$, keeping in mind we want to make a statement about $|x-0|$ (i.e., $|x|$):
\begin{gather*}
|f(x) -0 | < \epsilon \\
|e^{2x}-1 |<\epsilon \\
-\epsilon< e^{2x}-1 < \epsilon \\
1-\epsilon< e^{2x} < 1+\epsilon \\
\ln (1-\epsilon) < 2x < \ln (1+\epsilon) \\
\frac{\ln (1-\epsilon)}{2} < x < \frac{\ln (1+\epsilon)}{2}
\end{gather*}
(Here, as in Example \ref{exa:ex_compute_lim3}, we assume $\epsilon<1$ so that $\ln(1-\epsilon)$ is defined.)
Let $\delta = \min\left\{\left|\frac{\ln(1-\epsilon)}{2}\right|,\frac{\ln(1+\epsilon)}{2}\right\}=\frac{\ln(1+\epsilon)}{2}.$

Thus:
\begin{gather*}
|x| < \delta \\
%|x| < \min\left\{\left|\frac{\ln(1-\epsilon)}{2}\right|,\left|\frac{\ln(1+\epsilon)}{2}\right|\right\} \\
|x| <\frac{\ln(1+\epsilon)}{2}<\left|\frac{\ln(1-\epsilon)}{2}\right| \\
\frac{\ln(1-\epsilon)}{2} < x < \frac{\ln(1+\epsilon)}{2}\\
\ln(1-\epsilon)< 2x < \ln(1+\epsilon)\\
1-\epsilon < e^{2x} < 1+\epsilon\\
-\epsilon < e^{2x}-1 < \epsilon\\
|e^{2x}-1-(0)| < \epsilon,
\end{gather*}
which is what we wanted to prove.
}
\item
{Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-0|<\delta$, $|f(x)-0|<\epsilon$. In simpler terms, we want to show that when $|x|<\delta$, $|\sin x| < \epsilon$.

Set $\delta = \epsilon$. We start by assuming that $|x|<\delta$. Using the hint, we have that $|\sin x | \leq |x| < \delta = \epsilon$. Hence if $|x|<\delta$, we know immediately that $|\sin x| < \epsilon$.
}
\item {Let $\epsilon >0$ be given. We wish to find $\delta >0$ such that when $|x-4|<\delta$, $|f(x)-15|<\epsilon$.

Consider $|f(x)-15|<\epsilon$, keeping in mind we want to make a statement about $|x-4|$:
\begin{gather*}
|f(x) -15 | < \epsilon \\
|x^2+x-5 -15 |<\epsilon \\
| x^2+x-20 | < \epsilon \\
| x-4 |\cdot|x+5| < \epsilon \\
| x-4 | < \epsilon/|x+5|
\end{gather*}
Since $x$ is near 4, we can safely assume that, for instance, $3<x<5$. Thus
\begin{gather*}
3+5<x+5<5+5 \\
8 < x+5 < 10 \\
\frac{1}{10} < \frac{1}{x+5} < \frac{1}{8} \\
\frac{\epsilon}{10} < \frac{\epsilon}{x+5} < \frac{\epsilon}{8}
\end{gather*}
Let $\delta =\frac{\epsilon}{10}$. Then:
\begin{gather*}
|x-4|<\delta \\
|x-4| < \frac{\epsilon}{10}\\
|x-4| < \frac{\epsilon}{x+5}\\
|x-4|\cdot|x+5| < \frac{\epsilon}{x+5}\cdot|x+5|
\end{gather*}
Assuming $x$ is near 4, $x+5$ is positive and we can drop the absolute value signs on the right.
\begin{gather*}
|x-4|\cdot|x+5| < \frac{\epsilon}{x+5}\cdot(x+5)\\
|x^2+x-20| < \epsilon\\
|(x^2+x-5) -15| < \epsilon,
\end{gather*}
which is what we wanted to prove.
}

\end{enumerate}
\end{sol}

\end{ex}
% % % % % % % % % % % %

\begin{ex}
Let $\epsilon$ be a small positive real number. How close to 2 must we hold $x$ in order to be sure that $3x+1$ lies within $\epsilon$ units of 7?
\end{ex}

\end{enumialphparenastyle}

\clearpage
{"text": "\n\\subsection{Linear Discriminant Analysis (LDA)}\n\nAssume data is mixed gaussian. Use this to estimate classes.\n\n", "meta": {"hexsha": "9b8302c57e4285d24db0f15def518f6601d28c1c", "size": 112, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-01-LDA.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-01-LDA.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-01-LDA.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.6666666667, "max_line_length": 60, "alphanum_fraction": 0.7857142857, "num_tokens": 25, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8705972616934408, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.5942468128400378}}
{"text": "%! Author = tstreule\n\n\\section{Molecular Adsorption and Electron Transfer}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quantum Mechanics}\n%\n\\formula{Schr\u00f6dinger eq.}{\\hat{H}\\psi(\\vec{r}) = E\\psi(\\vec{r})}\n\\quad where $\\hat{H} = -\\frac{\\hbar}{2m}\\pderiv[2]{~}{x} + V$\n\n\\formula{\\textbf{1D potential well}}{\\psi(x) = \\sqrt{\\frac{2}{L}} \\sin(k_nx)}\n\\quad for $0\\leq x\\leq L$\n\\formtex{~}{where $k_n = \\frac{n\\pi}{L}$, $n\\in\\mathbb{N}$ and $E_n = \\frac{\\hbar^2k_n^2}{2m}$}\n\n\\formula{\\textbf{1D potential barrier} (tunneling)}{\n    \\psi(x) =\n    \\begin{cases}\n        A\\eu^{\\iu k'x} + B\\eu^{-\\iu k'x}\t& \\phantom{for } x\\leq0\\\\\n        C\\eu^{\\iu k''x} + D\\eu^{-\\iu k''x}\t& \\text{for }    0\\leq x\\leq d\\\\\n        F\\eu^{\\iu k'x}\t\t\t\t\t\t& \\phantom{for } \\text{otw.}\n    \\end{cases}\n}\n\\formtex{~}{where $k' =\\sqrt{2mE}/\\hbar$ and $k''\\!\\!=\\sqrt{2m(V_0\\!-\\!E)}/\\hbar$}\n\n\\formbox{transm./tunneling probability}{T = \\frac{F^*\\!F}{A^*\\!A} = B\\eu^{-\\beta d}}\n, $\\beta {=} {-}2\\sqrt{2m(V_0{-}E)}/\\hbar$\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Electronic Transport through Molecules}\n%\n\\formbox{Quantum conductance (1D)}{G = G_0\\cdot T}\n\\enskip $G_0 = \\frac{2e^2}{h}$,\n\\enskip in parallel: $G = N\\cdot G_0$\n\\formbox{Tunneling probability}{T = B\\eu^{-\\beta d}}\n$\\propto\\;\\;\\; \\eu^{-\\beta d} = \\left(\\eu^{-\\beta_{N\\!P}d}\\right)^N$\n\\formbox{1D channel current}{j = {-}(\\mu_1{-}\\mu_2)e\\;v\\;\\rho_E}\n\\enskip $\\rho_E {=} DOS {=} \\frac{1}{\\hbar\\pi}\\sqrt{2m/E}$\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Atomic and Molecular Orbitals}\n%\n%\tThe linear combination of atomic orbitals (\\textbf{LCAO}) approximation supposes that molecular orbitals can be constructed from linear superpositions of atomic orbitals centred on individual atoms.\\\\\n%\tIMPORTANT: linear combinations of n atomic orbitals each with its own energy level (even if of the same E value!!!) give rise to n energy levels.\n%\n\\textbf{Bond order}: defined by difference (\\#electrons) divided by two\\\\\n$\\to$ If the bond order is different from zero, then the bond is \\textit{stable}\n%\n%\t\\textbf{HOMO - LUMO} (highest occupied vs lowest unoccupied MO):\\\\\n%\tIf we add thermal energy, the electrons will begin to jump. 
Since all lower states are filled, the first electron will jump from HOMO to LUMO.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\t\\subsection{Electrical Biosensor}\n%\t%\n%\t\\includegraphics[width=.5\\columnwidth]{Electrical_Biosensor}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Transition State Theory \\textnormal{$A+B \\to [AB] \\to C+D$}}\n%\n%\tCriterion is that colliding molecules must have sufficient energy to overcome a potential energy barrier (the activation energy) to react.\n%\n\\formula{Gibbs free energy}{G = H - TS = U + pV - TS}\n\\formbox{\\textbf{Ideal gas law}}{pV = n\\ped{total}RT}\n\\vspace{-1mm}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Marcus Theory}\n%\n\\begin{minipage}{.5\\columnwidth}\n    \\includegraphics[width=.8\\columnwidth]{Electrochemistry_Marcus_Theory}\n\\end{minipage}%\n\\begin{minipage}{.5\\columnwidth}\n    See \\textbf{section \\ref{sec:butler-volmer}} (Butler-Volmer)\\par\n    \\begin{tabular}{r@{$\\;=\\;$}l}\n        $k\\ped{red}$\t& $v\\ped{red} \\; \\eu^{-\\frac{\\Delta^\\ddagger G\\ped{red}}{RT}}$\\\\\n        $k\\ped{ox}$\t\t& $v\\ped{ox}  \\; \\eu^{-\\frac{\\Delta^\\ddagger G\\ped{ox}}{RT}}$\\\\\n        \\addlinespace\n        $\\Delta^\\ddagger G\\ped{red}$\t& $\\!\\!~^\\ddagger G-G\\ped{ox,min}$\\\\\n        $\\Delta^\\ddagger G\\ped{ox}$\t& $\\!\\!~^\\ddagger G-G\\ped{red,min}$\n    \\end{tabular}\n\\end{minipage}\n%\n%\t\\formula{rate constant}{k_{et} \\propto \\eu^{-\\frac{\\Delta^\\ddagger G}{RT}}}\n%\t\t\\quad with \\quad $R = k_B N_A = \\unit[8.3]{J\\;K^{-1}mol^{-1}}$\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Gerischer's view}\n%\n\\begin{minipage}{.4\\columnwidth}\n    \\includegraphics[width=\\columnwidth]{Redox_Energy_Levels_1}%\n    \\hspace{-.6\\columnwidth}\n    \\parbox[b]{.4\\columnwidth}{\n        $D\\ped{ox}(\\lambda,E)$ \\vspace{20mm}\\\\\\vspace{2mm}\n        $D\\ped{red}(\\lambda,E)$\n    }\n\\end{minipage}%\n\\begin{minipage}{.15\\columnwidth}\n    \\centering\n    \\includegraphics[width=.9\\columnwidth]{Redox_Energy_Levels_2}\n\\end{minipage}%\n\\begin{minipage}{.45\\columnwidth}\n    \\quad\\includegraphics[width=.55\\columnwidth]{Electrochemistry_Gerischers_View_Cathodic}%\n    \\hfill\\hspace{-\\columnwidth}\n    \\parbox[b]{\\columnwidth}{\n        \\hfill \\textbf{cathodic} polariz. 
\\par\n        \\hfill $i^+ \\gg i^-$\\vspace{22mm}\\\\\n    }\n%\t\tGaussian curve: ($+$: red, $-$: ox)\\\\\n%\t\t$W\\ped{red/ox} = \\frac{1}{\\sqrt{4\\pi\\lambda k\\ped{B}T}} \\eu^{-\\frac{(E-E_F\\pm\\lambda)^2}{4\\lambda k\\ped{B}T}}$\n\\end{minipage}\n", "meta": {"hexsha": "de1fe55981fd7ee71fa9207445d00fb4772456de", "size": 4730, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/BE18/sections/08_electron_transfer.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/BE18/sections/08_electron_transfer.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/BE18/sections/08_electron_transfer.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.4807692308, "max_line_length": 202, "alphanum_fraction": 0.5862579281, "num_tokens": 1664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256393148982, "lm_q2_score": 0.7057850340255385, "lm_q1q2_score": 0.5941479374874361}}
{"text": "\n\\subsection{Error Correction Model}\n\n\\subsubsection{Static model}\n\nLike PAM we start with static estimator.\n\n\\subsubsection{The ECM}\n\nThe ECM does a regression with first differences, and includes lagged error terms.\n\nWe start with a basic first-difference model. \n\n\\(\\Delta y_t= \\Delta x_t\\)\n\nWe could also expand this to include laggs for both x and y. Here we don't.\n\nWe know that long term \\(y_t=\\theta x_t\\). We use the error from this in a first difference model. \n\n\\(\\Delta y_t= \\alpha \\Delta x_t + \\beta (y_{t-1}-\\theta x_{t-1})\\)\n\nPage on identifying error terms \n\nAlso, page on Vector Error Correction Model (VECM)\n\n", "meta": {"hexsha": "f9fc96bbcbc7be0189dfb22e646cfda2cc02cc71", "size": 627, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/forecastingMulti/04-02-ECM.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/forecastingMulti/04-02-ECM.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/forecastingMulti/04-02-ECM.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.1153846154, "max_line_length": 99, "alphanum_fraction": 0.7368421053, "num_tokens": 167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8418256313782276, "lm_q2_score": 0.7057850402140659, "lm_q1q2_score": 0.5941479370955137}}
{"text": "\\documentclass[12pt,english]{article}\n% math tool collection\n\\usepackage{mathtools}\n% for math fonts\n\\usepackage{bm}\n\\usepackage{accents}\n% for links\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks=true,\n    linkcolor=blue,\n    filecolor=magenta,\n    urlcolor=cyan,\n}\n\n% Titling and Author\n\\title{Latex Special Symbols}\n\\author{\\href{https://fanwangecon.github.io/}{Fan Wang}\\thanks{https://fanwangecon.github.io, repository: \\href{https://fanwangecon.github.io/Tex4Econ/}{Tex4Econ}}}\n\\date{\\today}\n\\begin{document}\n\n\\maketitle\n\nDependencies:\n\\begin{verbatim}\n  \\usepackage{mathtools}\n  \\usepackage{bm}\n  \\usepackage{accents}\n\\end{verbatim}\n\n\\section{Bold Romans and Greeks}\n\nTesting \\textbf{mathbf}:\n\n\\begin{align}\n  \\begin{gathered}\n    mathbf\\mathbf{this is bold} not bold\\\\\n    a + b \\mathbf{+ a + b} + a + b\\\\\n    A + B \\mathbf{+ A + B} + A + B\\\\\n    \\alpha + \\beta \\mathbf{+ \\alpha + \\beta} + \\alpha + \\beta\n  \\end{gathered}\n\\end{align}\n\nTesting \\textbf{boldsymbol}:\n\n\\begin{align}\n  \\begin{gathered}\n    boldsymbol\\boldsymbol{this is boldsymbol} not boldsymbol\\\\\n    a + b \\boldsymbol{+ a + b} + a + b\\\\\n    A + B \\boldsymbol{+ A + B} + A + B\\\\\n    \\alpha + \\beta \\boldsymbol{+ \\alpha + \\beta} + \\alpha + \\beta\n  \\end{gathered}\n\\end{align}\n\n\\section{Under and Below Hat, Bar and Tilde}\n\nUnder and below bar, allowing for bold notation for vector.\n\n\\subsection{Accents Package}\n\nReference:\n\\begin{itemize}\n  \\item \\href{https://tex.stackexchange.com/questions/125412/bar-below-symbol}{SE: bar-below-symbol}\n  \\item \\href{https://tex.stackexchange.com/questions/66537/making-hats-and-other-accents-bold}{SE: Making hats (and other accents) bold}\n\\end{itemize}\n\n\\subsubsection{Accents Package Regular Font}\n\nDefine:\n\n\\newcommand{\\uhat}[1]{\\underaccent{\\hat}{#1}}\n\\newcommand{\\ubar}[1]{\\underaccent{\\bar}{#1}}\n\\newcommand{\\utilde}[1]{\\underaccent{\\tilde}{#1}}\n% \\newcommand{\\barbm}[1]{\\bm{\\bar{#1}}}\n% \\newcommand{\\ubarbm}[1]{\\underaccent{\\bar}{\\bm{#1}}}\n\\begin{verbatim}\n  \\newcommand{\\uhat}[1]{\\underaccent{\\hat}{#1}}\n  \\newcommand{\\ubar}[1]{\\underaccent{\\bar}{#1}}\n  \\newcommand{\\utilde}[1]{\\underaccent{\\tilde}{#1}}\n\\end{verbatim}\n\nNow write letters with under and over bars etc:\n\n\\begin{verbatim}\n  \\hat{ABC}, \\bar{ABC}, \\tilde{ABC},\n  \\uhat{ABC}, \\ubar{ABC}, \\utilde{ABC},\n\\end{verbatim}\n\\begin{align}\n    \\begin{gathered}\n      \\hat{a}, \\bar{a}, \\tilde{a}, \\uhat{a}, \\ubar{a}, \\utilde{a},\n      \\hat{A}, \\bar{A}, \\tilde{A}, \\uhat{A}, \\ubar{A}, \\utilde{A}\n      \\\\\n      \\hat{abc}, \\bar{abc}, \\tilde{abc}, \\uhat{abc}, \\ubar{abc}, \\utilde{abc},\n      \\hat{ABC}, \\bar{ABC}, \\tilde{ABC}, \\uhat{ABC}, \\ubar{ABC}, \\utilde{ABC}\n      \\\\\n      \\hat{\\alpha}, \\bar{\\alpha}, \\tilde{\\alpha}, \\uhat{\\alpha}, \\ubar{\\alpha}, \\utilde{\\alpha},\n      \\hat{\\alpha}, \\bar{\\alpha}, \\tilde{\\alpha}, \\uhat{\\alpha}, \\ubar{\\alpha}, \\utilde{\\alpha}\n      \\\\\n      \\hat{\\alpha\\Gamma\\beta}, \\bar{\\alpha\\Gamma\\beta}, \\tilde{\\alpha\\Gamma\\beta}, \\uhat{\\alpha\\Gamma\\beta}, \\ubar{\\alpha\\Gamma\\beta}, \\utilde{\\alpha\\Gamma\\beta},\n      \\hat{\\alpha\\Gamma\\beta}, \\bar{\\alpha\\Gamma\\beta}, \\tilde{\\alpha\\Gamma\\beta}, \\uhat{\\alpha\\Gamma\\beta}, \\ubar{\\alpha\\Gamma\\beta}, \\utilde{\\alpha\\Gamma\\beta}\n      \\\\\n    
\\end{gathered}\n\\end{align}\n\n\\subsubsection{Accents Package Bold}\n\nDefine these new commands that combine \\href{https://ctan.org/pkg/accents?lang=en}{accent} and the \\href{https://ctan.org/pkg/bm?lang=en}{bm} packages. Note that the accents are defined outside of the bold letters, so accents are not bold. Can not define bold accents with bm directly using the accents package it seems:\n\n\\newcommand{\\hatbm}[1]{\\hat{\\bm{#1}}}\n\\newcommand{\\barbm}[1]{\\bar{\\bm{#1}}}\n\\newcommand{\\tildebm}[1]{\\tilde{\\bm{#1}}}\n\\newcommand{\\uhatbm}[1]{\\underaccent{\\hat}{\\bm{#1}}}\n\\newcommand{\\ubarbm}[1]{\\underaccent{\\bar}{\\bm{#1}}}\n\\newcommand{\\utildebm}[1]{\\underaccent{\\tilde}{\\bm{#1}}}\n\\begin{verbatim}\n\\newcommand{\\hatbm}[1]{\\hat{\\bm{#1}}}\n\\newcommand{\\barbm}[1]{\\bar{\\bm{#1}}}\n\\newcommand{\\tildebm}[1]{\\tilde{\\bm{#1}}}\n\\newcommand{\\uhatbm}[1]{\\underaccent{\\hat}{\\bm{#1}}}\n\\newcommand{\\ubarbm}[1]{\\underaccent{\\bar}{\\bm{#1}}}\n\\newcommand{\\utildebm}[1]{\\underaccent{\\tilde}{\\bm{#1}}}\n\\end{verbatim}\n\nNow write letters with under and over bars etc:\n\n\\begin{verbatim}\n  \\hatbm{ABC}, \\barbm{ABC}, \\tildebm{ABC},\n  \\uhatbm{ABC}, \\ubarbm{ABC}, \\utildebm{ABC},\n\\end{verbatim}\n\\begin{align}\n    \\begin{gathered}\n      \\hatbm{\\Gamma}, \\uhatbm{\\Gamma\\beta}\\\\\n      \\barbm{a}, \\ubarbm{a}\\\\\n      \\tildebm{a}, \\utildebm{a}\\\\\n      \\hatbm{a}, \\barbm{a}, \\tildebm{a}, \\uhatbm{a}, \\ubarbm{a}, \\utildebm{a},\n      \\hatbm{A}, \\barbm{A}, \\tildebm{A}, \\uhatbm{A}, \\ubarbm{A}, \\utildebm{A},\n      \\\\\n      \\hatbm{abc}, \\barbm{abc}, \\tildebm{abc}, \\uhatbm{abc}, \\ubarbm{abc}, \\utildebm{abc},\n      \\hatbm{ABC}, \\barbm{ABC}, \\tildebm{ABC}, \\uhatbm{ABC}, \\ubarbm{ABC}, \\utildebm{ABC},\n      \\\\\n      \\hatbm{\\alpha}, \\barbm{\\alpha}, \\tildebm{\\alpha}, \\uhatbm{\\alpha}, \\ubarbm{\\alpha}, \\utildebm{\\alpha},\n      \\hatbm{\\alpha}, \\barbm{\\alpha}, \\tildebm{\\alpha}, \\uhatbm{\\alpha}, \\ubarbm{\\alpha}, \\utildebm{\\alpha}\\\\\n      \\hatbm{\\alpha\\Gamma\\beta}, \\barbm{\\alpha\\Gamma\\beta}, \\tildebm{\\alpha\\Gamma\\beta}, \\uhatbm{\\alpha\\Gamma\\beta}, \\ubarbm{\\alpha\\Gamma\\beta}, \\utildebm{\\alpha\\Gamma\\beta},\n      \\hatbm{\\alpha\\Gamma\\beta}, \\barbm{\\alpha\\Gamma\\beta}, \\tildebm{\\alpha\\Gamma\\beta}, \\uhatbm{\\alpha\\Gamma\\beta}, \\ubarbm{\\alpha\\Gamma\\beta}, \\utildebm{\\alpha\\Gamma\\beta}\\\\\n    \\end{gathered}\n\\end{align}\n\n\\subsubsection{Mathcal and Accents}\n\n\\begin{verbatim}\n  \\bar{\\mathcal{Q}^{C}}, \\ubar{\\mathcal{Q}^{C}},\n  \\barbm{\\mathcal{Q}^{C}}, \\ubarbm{\\mathcal{Q}^{C}}\n\\end{verbatim}\n\\begin{align}\n    \\begin{gathered}\n      \\bar{\\mathcal{A}}, \\ubar{\\mathcal{A}},\n      \\barbm{\\mathcal{A}}, \\ubarbm{\\mathcal{A}}\\\\\n      \\bar{\\mathcal{Q}^{C}}, \\ubar{\\mathcal{Q}^{C}},\n      \\barbm{\\mathcal{Q}^{C}}, \\ubarbm{\\mathcal{Q}^{C}}\\\\\n      \\tilde{\\mathcal{A}}, \\utilde{\\mathcal{A}},\n      \\tildebm{\\mathcal{A}}, \\utildebm{\\mathcal{A}}\\\\\n      \\tilde{\\mathcal{Q}^{C}}, \\utilde{\\mathcal{Q}^{C}},\n      \\tildebm{\\mathcal{Q}^{C}}, \\utildebm{\\mathcal{Q}^{C}}\\\\\n    \\end{gathered}\n\\end{align}\n\n\\subsubsection{bm and Underline and Overline}\n\nRather than using bar and under bar that are fixed width, use underline and overline which change width and can change in 
boldness.\n\n\\newcommand{\\overlinebm}[1]{\\bm{\\overline{#1}}}\n\\newcommand{\\underlinebm}[1]{\\bm{\\underline{#1}}}\n\\begin{verbatim}\n\\newcommand{\\overlinebm}[1]{\\bm{\\overline{#1}}}\n\\newcommand{\\underlinebm}[1]{\\bm{\\underline{#1}}}\n\\end{verbatim}\n\n\\begin{verbatim}\n  \\bar{\\mathcal{Q}^{C}}, \\ubar{\\mathcal{Q}^{C}},\n  \\barbm{\\mathcal{Q}^{C}}, \\ubarbm{\\mathcal{Q}^{C}}\n\\end{verbatim}\n\\begin{align}\n    \\begin{gathered}\n      \\overlinebm{\\mathcal{Q}^{C}}, \\underlinebm{\\mathcal{Q}^{C}}, \\overlinebm{\\mathcal{ABC}^{C}}, \\underlinebm{\\mathcal{ABC}^{C}}\\\\\n      \\overlinebm{\\alpha\\Gamma\\beta}, \\underlinebm{\\alpha\\Gamma\\beta}\\\\\n    \\end{gathered}\n\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "daa75a7c2606c1f70dc883da6ee0b3b4596cd1bc", "size": 6933, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "_other/symbols/fs_symbols.tex", "max_stars_repo_name": "guohui-jiang/Tex4Econ", "max_stars_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_other/symbols/fs_symbols.tex", "max_issues_repo_name": "guohui-jiang/Tex4Econ", "max_issues_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_other/symbols/fs_symbols.tex", "max_forks_repo_name": "guohui-jiang/Tex4Econ", "max_forks_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.4894736842, "max_line_length": 320, "alphanum_fraction": 0.6448867734, "num_tokens": 2543, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8418256452674008, "lm_q1q2_score": 0.5941479364789625}}
{"text": "\n\\subsection{Stochastic processes}\n\nIn a stochastic process we have a mapping from a variable (time) to a random variable.\n\n\\subsubsection{Discrete and continuous time}\n\nTime could be discrete, or continuous.\n\nTemperature over time is a stochastic process, as is the number of cars sold each day.\n\n\\subsubsection{Discrete and continous state space}\n\nThe state space for temperature is continous, the number of people on the moon is discrete.\n\n", "meta": {"hexsha": "c71321ed820bc3305c89e3a5129e7572afd42f69", "size": 443, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/probability/stochastic/01-01-process.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/probability/stochastic/01-01-process.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/probability/stochastic/01-01-process.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6875, "max_line_length": 91, "alphanum_fraction": 0.7945823928, "num_tokens": 94, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8418256393148982, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5941479270681141}}
{"text": "\\documentclass[a4paper,11pt]{article}\n\\pdfoutput=1 \n\\usepackage{jcappub}\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{xcolor}\n\n\\newcommand{\\eq}[1]{Eq. (\\ref{#1})}\n\\newcommand{\\ee}[1]{\\begin{equation}#1\\end{equation}}\n\\newcommand{\\ea}[1]{\\begin{align}#1\\end{align}}\n\\newcommand{\\eg}[1]{\\begin{gather}#1\\end{gather}}\n\\providecommand{\\f}[2]{\\frac{{#1}}{{#2}}}\n\\def\\hmax{\\varphi_{\\rm bar}}\n\n\n\\newcommand{\\B}[1]{{\\color{blue} #1}}\n\n\\title{Notes}\n\\begin{document}\n\n\\section{The model}\nThe action\n%\n\\ee{S=\\int d^4x\\,\\sqrt{|g|}\\bigg[\\f{R}{2}\\left({M_{\\rm P}^2}+{\\xi}\\phi^2\\right)-\\f{1}{2}(\\nabla\\phi)^2-\\f{1}{2}m^2\\phi^2-\\f{\\lambda}{4}\\phi^4\\bigg]\\,,\\label{eq:act}}\ngives the equation of motion\n\\ee{\\left(-\\Box+m^2-\\xi R\\right){\\phi}+\\lambda\\phi^3=0\\,.} \nThe scalar curvature at the location $\\mathbf{r}$ given by a neutron star binary (with equal masses) we model as\n\\ee{R=\\f{\\rho_0}{M_{\\rm P}^2}\\bigg(e^{-|\\mathbf{r}-\\mathbf{r}_1|^2/r_{\\rm n}^2}+e^{-|\\mathbf{r}-\\mathbf{r}_2|^2/r_{\\rm n}^2}\\bigg)=\\f{3 M}{4\\pi r_{\\rm n}^3M_{\\rm P}^2}\\bigg(e^{-|\\mathbf{r}-\\mathbf{r}_1|^2/r_{\\rm n}^2}+e^{-|\\mathbf{r}-\\mathbf{r}_2|^2/r_{\\rm n}^2}\\bigg)\\,,}\nwhere $\\mathbf{r}_{1,2}$ are the locations of the neutron stars and $M$ and $r_{\\rm n}$ are the mass and the radius. For the mass and radius we use\n\\ee{M\\approx 1.4M_{\\odot}\\approx 1.6\\cdot 10^{57}{\\rm GeV}\\,,\\qquad r_{\\rm n}\\approx 10^4{\\rm m}\\approx 5\\cdot10^{19}({\\rm GeV})^{-1}\\,.}\n\\section{Units}\nAssuming a space-time lattice, in Minkowski space the discretised d'Alembertian reads\n\\ea{-\\Box\\phi&\\approx\\f{\\phi(t+\\Delta t)-2\\phi(t)+\\phi(t-\\Delta t)}{\\Delta t^2}-\\sum_i\\f{\\phi(x_i+\\Delta x)-2\\phi(x_i)+\\phi(x_i-\\Delta x)}{\\Delta x^2}%\\nonumber\\\\&\\equiv (\\Delta t)^{-2}D_t\\phi-(\\Delta x)^{-2}\\sum_iD_i\\phi\n\\,.}\nA convenient choice of lattice units is\n\\ee{t = n_t\\Delta t = n_t T_o\\tilde{\\Delta t}=T_o\\tilde{t}\\,,\\qquad x_i = n_i\\Delta x = n_i r_{\\rm n}\\tilde{\\Delta x}=r_n\\tilde{x}\\,,}\nwhere $T_o$ is the orbital period ({\\color{red}check!})\n\\ee{T_o=2\\pi\\sqrt{\\f{32\\pi M_{\\rm P}^2r_o^3}{M}}\\,,}\n$r_{\\rm n}$ the radius of the neutron star and $n_t,n_i,\\tilde{\\Delta t},\\tilde{\\Delta x}$ are dimensionless numbers. 
We choose a dimensionless $\\tilde{\\phi}$ as\n\\ee{\\phi=(T_o)^{-1}\\tilde{\\phi}\\,,}\nallowing us to write the equation of motion as\n\\ea{{\\tilde{\\phi}(n_t+1)-2\\tilde{\\phi}(n_t)+\\tilde{\\phi}(n_t-1)}&=\\underbrace{\\left(\\f{\\Delta t}{\\Delta x}\\right)^2}_{\\equiv C}\\sum_i\\left[{\\tilde{\\phi}(n_i+1)-2\\tilde{\\phi}(n_i)+\\tilde{\\phi}(n_i-1)}\\right]\\nonumber \\\\&+\\tilde{\\Delta t}^2\\bigg(-\\tilde{m}^2\\tilde{\\phi}+\\xi\\tilde{R}\\tilde{\\phi}-\\lambda \\tilde{\\phi}^3\\bigg)\\,,\\label{eq:eom}}\nwith the dimensionless scalar curvature\n\\ee{\\tilde{R}\\equiv RT_o^2=\\f{3 MT_o^2}{4\\pi r_{\\rm n}^3M_{\\rm P}^2}\\bigg(e^{-|\\mathbf{r}-\\mathbf{r}_1|^2/r_{\\rm n}^2}+e^{-|\\mathbf{r}-\\mathbf{r}_2|^2/r_{\\rm n}^2}\\bigg)=\\f{96\\pi^2r_o^3}{r_{\\rm n}^3}\\bigg(e^{-|\\mathbf{n}-\\mathbf{n}_1|^2/n_{\\rm n}^2}+e^{-|\\mathbf{n}-\\mathbf{n}_2|^2/n_{\\rm n}^2}\\bigg)\\,,}\nwhere the location of star 1 is given by \n\\ea{\\mathbf{r}_{1}&=\\left({r}_{1,x},{r}_{1,y},{r}_{1,z}\\right)=\\bigg(r_o\\sin\\left(\\f{2\\pi t}{T_o}\\right),r_o\\cos\\left(\\f{2\\pi t}{T_o}\\right),0\\bigg)\\nonumber \\\\\\Leftrightarrow\\quad\n\\mathbf{n}_{1}&=\\left({n}_{1,x},{n}_{1,y},{n}_{1,z}\\right)=\\left(n_o\\sin\\left(2\\pi \\f{n_t}{n_{T_o}}\\right),n_o\\cos\\left(2\\pi \\f{n_t}{n_{T_o}}\\right),0\\right)\\,,}\nand similarly for star 2. The dimensionless mass is\n\\ee{\\tilde{m}^2= (T_om)^2=\\frac{128 \\pi ^3 m^2 {M_{\\rm P}}^2 r_{o}^3}{M}\\,.}\n\\section{Boundary conditions \\& initialization}\nWe choose the boundary conditions for $\\phi$ in the following manner: on the edges, i.e. at $x,y,z = 0$ or $L$ where $L$ is our box size, we set the field to vanish, $\\phi(t,0)=\\phi(t,L)=0$. Since the wave equation is second order in time derivatives we also need to specify the time derivative at some instant. For simplicity we choose the velocity to vanish at the beginning (here $t=0$), $\\dot{\\phi}(t=0,\\mathbf{x})=0$. For the discretized equation of motion (\\ref{eq:eom}), this allows one to solve for the \"phantom point\" $\\tilde{\\phi}(n_t=-1)$ via\n\\ee{\\f{\\tilde{\\phi}(1)-\\tilde{\\phi}(-1)}{2\\tilde{\\Delta t}}=0\\,.}\n\nAt the start of the simulation we need the binary to contain a stationary distribution, i.e. $\\phi$ has to have relaxed into the minimum given by the potential $(1/2)(m^2-\\xi R)\\phi^2 +(\\lambda/4)\\phi^4$, surrounded by empty space. To give the field an initial kick we set $\\phi(0,\\mathbf{x})$ as white noise, for simplicity with positive values. Note that it is perfectly possible for either star to have the field relaxed in either vacuum, or even to contain a domain wall. 
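For concreteness, a minimal one-dimensional sketch of the update rule (\\ref{eq:eom}) in Python follows; the parameter values are toy choices for illustration only (not the physical values quoted above), and the rotating source and the damping discussed next are omitted:\n\\begin{verbatim}\nimport numpy as np\n\n# Toy dimensionless lattice parameters (illustrative assumptions only).\nnx, nt = 128, 400\ndt, dx = 0.02, 0.05           # lattice spacings in tilde units\nC = (dt / dx) ** 2\nm2, xi, lam = 1.0, 1.0, 0.1   # dimensionless m^2, xi, lambda\nR = np.zeros(nx)              # placeholder curvature profile R(x)\n\nrng = np.random.default_rng(1)\nphi = 0.1 * rng.random(nx)    # positive white-noise initial kick\nphi[0] = phi[-1] = 0.0        # field vanishes on the box edges\nphi_prev = phi.copy()         # crude zero-velocity start\n\nfor _ in range(nt):\n    lap = np.zeros(nx)\n    lap[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]\n    phi_next = (2.0 * phi - phi_prev + C * lap\n                + dt**2 * (-m2 * phi + xi * R * phi - lam * phi**3))\n    phi_next[0] = phi_next[-1] = 0.0   # enforce boundary conditions\n    phi, phi_prev = phi_next, phi\n\\end{verbatim}\n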
To force the system to relax we introduce a damping term $\\propto \\dot{\\phi}$, discretely\n\\ee{\\propto\\tilde{\\Delta t}\\left[\\tilde{\\phi}(n_t)-\\tilde{\\phi}(n_t-1)\\right]\\,,}\non the right-hand side of the equation of motion, which we switch off before the start of the rotation.\n\\section{Energy}\nThe energy density, where $R$ is treated as just a potential term, is \n\\ea{T_{00} &=\\f12\\bigg[\\partial_\\rho\\phi\\partial^\\rho\\phi+(m^2-\\xi R)\\phi^2+\\f{\\lambda}{2}\\phi^4\\bigg]+(\\partial_t\\phi)^2\\nonumber\\\\ &=\\f12\\bigg[(\\partial_t \\phi)^2+\\sum_i(\\partial_i \\phi)^2+(m^2-\\xi R)\\phi^2+\\f{\\lambda}{2}\\phi^4\\bigg] \\,,}\n and when discretised is conveniently given in units of $(T_o)^{-4}$ as\n\\ea{T_{00} = (T_o)^{-4}\\f12\\bigg[&\\bigg(\\f{\\tilde{\\phi}(n_t+1)-\\tilde{\\phi}(n_t-1)}{2\\tilde{\\Delta t}}\\bigg)^2+C\\sum_i\\bigg(\\f{\\tilde{\\phi}(n_{i}+1)-\\tilde{\\phi}(n_i-1)}{2\\tilde{\\Delta t}}\\bigg)^2\\nonumber \\\\&+(\\tilde{m}^2-\\xi \\tilde{R})\\tilde{\\phi}^2+\\f{\\lambda}{2}\\tilde{\\phi}^4\\bigg]\\equiv(T_o)^{-4} \\tilde{T}_{00}\\,.}\nThe energy is then\n\\ee{E=\\int dxdydz\\,T_{00}\\approx(\\Delta x)^3\\sum_{i,j,k}T_{00}=\\f{r_{\\rm n}^3}{T_o^4}(\\tilde{\\Delta x})^3\\sum_{i,j,k}\\tilde{T}_{00}\\equiv\\f{r_{\\rm n}^3}{T_o^4}\\tilde{E}\\,.}\n\\section{Flux}\nAssuming a constant increase of particle density inside the box, we can write the flux straightforwardly, given how the energy increases over time, as\n\\ee{f = \\f{\\Delta E}{\\Delta t}  = \\f{r_n^3}{T_o^5}\\f{\\Delta \\tilde{E}}{\\Delta \\tilde{t}}\\,.}\nThe loss of energy after one orbit is then\n\\ee{\\Delta E = \\f{r_n^3}{T_o^4}\\f{\\Delta \\tilde{E}}{\\Delta \\tilde{t}}\\,.}\n\\end{document}", "meta": {"hexsha": "0e4878a13ab78cdc7a13aff40119e3fb4017c4c3", "size": 6214, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "derivation/formulae.tex", "max_stars_repo_name": "ttmarkka/neutronstarwave", "max_stars_repo_head_hexsha": "7643b661a20c50adb6d7684f980e74bd842535b8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "derivation/formulae.tex", "max_issues_repo_name": "ttmarkka/neutronstarwave", "max_issues_repo_head_hexsha": "7643b661a20c50adb6d7684f980e74bd842535b8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "derivation/formulae.tex", "max_forks_repo_name": "ttmarkka/neutronstarwave", "max_forks_repo_head_hexsha": "7643b661a20c50adb6d7684f980e74bd842535b8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.3823529412, "max_line_length": 566, "alphanum_fraction": 0.6569037657, "num_tokens": 2533, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681122619883, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5941262117890683}}
{"text": "\\documentclass{article}\n\n\\usepackage{palatino}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage[utf8]{inputenc}\n\\usepackage[colorlinks=true]{hyperref}\n\n% macros\n\\newcommand{\\LL}{\\mathcal{L}}\n\\newcommand{\\half}{\\frac{1}{2}}\n\\newcommand{\\norm}[1]{\\left|\\left|#1\\right|\\right|}\n\\newcommand{\\dd}{d}\n\\newcommand{\\ddd}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\block}[1]{\\left(#1\\right)}\n\\newcommand{\\mat}[1]{ \\begin{pmatrix} #1 \\end{pmatrix} }\n\\newcommand{\\inv}[1]{#1^{-1}}\n\\newcommand{\\argmin}[1]{\\underset{#1}{\\textrm{argmin}}}\n\n\\begin{document}\n\\title{Compliant Reference}\n\\author{Maxime Tournier}\n\\date{\\today}\n\\maketitle\n%\n\\begin{abstract}\n  Quick notes about the mathematical framework used in Compliant. We\n  review the time-stepping scheme, kinematic constraint handling,\n  potential forces as compliant constraints, and KKT systems.\n\\end{abstract}\n%\n\\section*{TODO}\n\\begin{itemize}\n  \\item Code cheat sheet\n  \\item finish sections :-)\n  \\item references\n  \\item wikipedia links\n  \\item introduction ?\n\\end{itemize}\n\n\\section{Lagrangian Dynamics}\nGiven an $n$-dimensional state space $Q$, we consider a Lagrangian\n$\\LL$ defined as usual:\n%\n\\begin{equation}\n\\LL(q, v) = \\half v^TMv - V(q)\n\\end{equation}\n%\nwhere $V(q)$ a the potential energy of the form:\n%\n\\begin{equation}\nV(q) = \\half \\norm{g(q)}_N^2\n\\end{equation}\n%\nfor a matrix norm $N$ in the suitable deformation space $Im(g)$. To be\nfully general, we include a Rayleigh dissipation function $\\half\nv^TDv$, where $D$ is PSD. The Euler-Lagrange equations of motions are,\nin this case:\n%\n\\begin{equation}\n \\frac{d}{dt}\\ddd{\\LL}{v} = \\ddd{\\LL}{q} - \\ddd{D}{v} \n\\end{equation}\n%\nFor simplicity, we restrict to the case where $M$ is constant, which\nyields:\n%\n\\begin{equation}\n  \\label{eq:dynamics}\n  M \\dot{v} = -\\nabla_q V - D v\n\\end{equation}\n%\n\\section{Time Stepping}\n%\nWe now discretize time using a fixed time step $h$. 
We approximate $\\dot{v}$ by finite\ndifferences as follows:\n%\n\\begin{equation}\n  h\\ \\dot{v}_{k+1} \\approx v_{k+1} - v_k\n\\end{equation}\n%\nEquation \\eqref{eq:dynamics} becomes: \n%\n\\begin{equation}\n  M v_{k+1} = M v_k - h \\block{\\nabla_q V_{k+1} + D v_{k+1}}\n\\end{equation}\n%\nOr, calling $p_k = M v_k$ the momentum of the system and $f_k =\n-\\nabla_q V_k$ the potential forces:\n%\n\\begin{equation}\n  \\block{M + h\\ D} v_{k+1} = p_k + h f_{k+1}\n\\end{equation}\n%\nIn order to control how implicit our integrator is on forces, we\nintroduce a blending parameter $\\alpha \\in [0, 1]$ as follows:\n%\n\\begin{equation}\n  \\block{M + h \\alpha D} v_{k+1} = p_k - h(1 - \\alpha) D v_k + h \\block{(1 - \\alpha) f_k + \\alpha f_{k+1}}\n\\end{equation}\n%\nWe linearize the above using the Hessian of the potential energy:\n%\n\\begin{equation}\n\\nabla_q V_{k+1} \\approx \\nabla_q V_k + \\nabla_q^2 V_k \\block{q_{k+1} - q_k}\n\\end{equation}\n%\nso that:\n%\n\\begin{equation}\n  (1 - \\alpha) f_k + \\alpha f_{k+1} = f_k - \\alpha \\nabla_q^2 V_k \\block{q_{k+1} - q_k}\n\\end{equation}\n%\nFinally, we use the following position time-stepping scheme:\n%\n\\begin{equation}\n  q_{k+1} = q_k + h \\block{ (1-\\beta) v_k + \\beta v_{k+1}}\n\\end{equation}\n%\nwhere $\\beta \\in [0, 1]$ is again a blending parameter.\n%\nLetting the stiffness matrix be $K = \\nabla_q^2 V_k$ %\n\\footnote{\n%\n  This is the opposite of the stiffness matrix as defined in SOFA,\n  \\emph{i.e.} $\\nabla f(q) = -\\nabla_q^2 V$. I find it easier to work with\n  PSD matrices.} and $d_k = -D v_k$ be the damping forces, we put\neverything together as:\n%\n\\begin{equation}\n  \\label{eq:time-stepping}\n  \\block{M + h\\alpha D + h^2 \\alpha \\beta K} v_{k+1} = \n  p_k + h \\block{ f_k + (1 - \\alpha) d_k} - \\alpha ( 1 - \\beta) h^2 K v_k \n\\end{equation}\n%\n(and yes, this is very ugly). Fortunately, everyone would use the much\nfriendlier, fully implicit scheme $\\alpha = \\beta = 1$ as follows:\n%\n\\begin{equation}\n   \\block{M + hD + h^2K} v_{k+1} = p_k + h f_k\n\\end{equation}\n%\n\\section{Response Matrix, Net Momentum}\n%\nFrom now on, we will refer to the \\emph{response matrix} defined as\nfollows:\n%\n\\begin{equation}\n  \\label{eq:response-matrix}\n  W = \\block{M + h\\alpha D + h^2 \\alpha \\beta K}\n\\end{equation}\n%\nWe will also refer to the \\emph{net momentum} of the system at time\nstep $k$:\n%\n\\begin{equation}\n  \\label{eq:net-momentum}\n  c_k = p_k + h \\block{ f_k + (1 - \\alpha) d_k} - \\alpha ( 1 - \\beta) h^2 K v_k\n\\end{equation}\n%\nThe time-stepping scheme \\eqref{eq:time-stepping} thus involves solving\nthe following linear system:\n%\n\\begin{equation}\n  W v_{k+1} = c_k\n\\end{equation}\n%\nThe numerical solvers for time-stepping will be described in section\n\\ref{sec:solvers}.\n%\n\\section{Constraints}\n\\label{sec:constraints}\n%\nWe now introduce holonomic constraints of the form:\n%\n\\begin{equation}\n  g(q) = 0\n\\end{equation}\n%\nwhere $g$ again maps kinematic DOFs to a suitable deformation space\n$Im(g)$. 
Such constraints are satisfied by introducing constraint\nforces $J^T\\lambda$, where $J = \\dd g$ is the constraint Jacobian\nmatrix, and $\\lambda$ are the \\emph{Lagrange multipliers}:\n%\n\\begin{align}\n  \\label{eq:constrained-dynamics}\n  M \\dot{v} &= -\\nabla_q V - D v + J^T \\lambda\\\\\n  g(q) &= 0\n\\end{align}\n%\nAgain, we discretize time and $\\dot{v}$ as follows:\n%\n\\begin{align}\n  M v_{k+1} &= p_k + h \\block{f_{k+1} + d_{k+1} + J_{k+1}^T\\lambda_{k+1}} \\\\\n  g\\block{q_{k+1}} &= 0 \n\\end{align}\n%\nAnd again:\n\\begin{equation}\ng( q_{k+1}) = g_k + h J_k \\block{ (1 - \\beta) v_k + \\beta v_{k+1}}  \n\\end{equation}\n%\nAt this point we become lazy, so we approximate constraints as\n\\emph{affine} functions, meaning that $J_{k+1} \\approx J_k$, otherwise\ncomputing the constraint Hessian $\\dd^2 g$ would be too costly, and\nwould result in a non-linear system to solve anyways (\\emph{i.e.}\nwith terms involving $v_{k+1}^T\\dd^2g_k \\lambda_{k+1}$). As we will\nsee later, such an approximation is consistent with our treatment of\npotential forces as compliant constraints. The time-discrete system is\nthen:\n%\n\\begin{align}\n  M v_{k+1} &= p_k + h \\block{f_{k+1} + d_{k+1} + J_k^T\\lambda_{k+1}} \\\\\n  J_k v_{k+1} &= -\\frac{g_k}{\\beta h} - \\frac{1 - \\beta}{\\beta} J_k v_k \\label{eq:constraintvalue}\n\\end{align}\n% \nAt this point, we \\emph{could} introduce an implicit blending between\n$\\lambda_k$ and $\\lambda_{k+1}$, but this would result in unneeded\ncomplication as $\\lambda_{k+1}$ would need to be computed anyways. The\nblended force would then be:\n\\[ J_k^T\\block{ (1-\\alpha) \\lambda_k + \\alpha \\lambda_{k+1} } \\] which\nsimply offsets $\\lambda_{k+1}$ with respect to $\\lambda_k$. We will\nthus happily ignore such blending. The final linear system to solve\n(for $v_{k+1}$ and $\\lambda_{k+1}$) becomes:\n%\n\\begin{align}\n\\label{eq:constrained-integrator}\n  W v_{k+1} - J_k^T \\lambda_{k+1} &= c_k  \\\\\n  J_k v_{k+1} &= -b_k\n\\end{align}\nwhere $b_k = \\frac{g_k}{\\beta h} + \\frac{1 - \\beta}{\\beta} J_k\nv_k$. Before leaving, note the sneaky accounting of $h$ inside\n$\\lambda_{k+1}$.\n%\n\\section{Compliance}\n\\label{sec:compliance}\n%\nWe will now establish a connection between elastic forces and\nconstraint forces through a compliance matrix. Remember that we\ndefined potential energy as:\n%\n\\begin{equation}\n  V(q) = \\half \\norm{g(q)}_N^2\n\\end{equation}\n%\nwhere $g$ maps kinematic DOFs to an appropriate deformation space\n$Im(g)$. This means the potential forces are:\n%\n\\begin{equation}\n  f = - \\nabla_q V = -J^T N g(q)\n\\end{equation}\n%\nNow, the Hessian or stiffness matrix is:\n%\n\\begin{equation}\n  K = \\nabla^2_q V = \\dd J^T N g(q) + J^T N J\n\\end{equation}\n%\nAs in section \\ref{sec:constraints}, we are too lazy for computing\nsecond derivatives so we approximate deformation mappings as affine\nmaps, meaning that $\\dd J \\approx 0$. 
We are left with the following\nresponse matrix \\eqref{eq:response-matrix}:\n%\n\\begin{equation}\n  W = \\block{M + h\\alpha D + h^2 \\alpha \\beta J_k^T N J_k}\n\\end{equation}\n%\nThe net momentum \\eqref{eq:net-momentum} becomes:\n%\n\\begin{equation}\n  c_k = p_k + h (1 - \\alpha) d_k - h J_k^TN \\block{ g_k + h \\alpha ( 1 - \\beta) J_k v_k}\n\\end{equation}\n%\nIf we write the potential force as $J_k^T \\lambda_{k+1}$, we have: \n\\begin{equation}\n  \\lambda_{k+1} = -N\\block{h^2 \\alpha \\beta J_k v_{k+1} + h g_k + h^2 \\alpha (1 - \\beta) J_k v_k}\n\\end{equation}\nIt turns out that this system can be rewritten as a \\emph{larger} system:\n%\n\\begin{equation}\n  \\mat{M + h \\alpha D & -J_k^T \\\\\n    J_k & \\frac{\\inv{N}}{h^2 \\alpha \\beta} } \\mat{v_{k+1} \\\\ \\lambda_{k+1}} = \\mat{p_k + h (1 - \\alpha) d_k \\\\ -\\frac{g_k}{h \\alpha \\beta} - \\frac{( 1 - \\beta)}{\\beta} J_k v_k }\n\\end{equation}\n%\n(phew!) One can notice that this system is \\emph{almost} exactly the\none we obtained with kinematic constraints in\n\\eqref{eq:constrained-integrator}, with $\\alpha = 1$, with the\nexception of the $\\inv{N}$ term in the $(2, 2)$ block. We call matrix\n$C = \\inv{N}$ the \\emph{compliance} matrix. We see that kinematic\nconstraints simply correspond to a zero compliance matrix, \\emph{i.e.}\ninfinite stiffness, as one would intuitively expect. It can be checked\nthat taking $\\alpha$ into account for constraint forces produces the\nsame system with $C = 0$. (TODO) We now recall the main results for\nthe unified elastic/constraint treatment:\n%\n\\subsection*{Linear System}\n\\begin{equation}\n  \\label{eq:time-stepping-kkt}\n  \\mat{ W & -J^T \\\\\n    -J & -\\frac{C}{h^2 \\alpha \\beta} } \\mat{v \\\\ \\lambda} = \\mat{c_k \\\\ b_k}\n\\end{equation}\n%\n\\subsection*{Response Matrix}\n%\n\\begin{equation}\n  W = M + h\\alpha D\n\\end{equation}\n%\n\\subsection*{Net Momentum}\n%\n\\begin{equation}\nc_k = p_k + h (1 - \\alpha) d_k  \n\\end{equation}\n%\n\\subsection*{Constraint Value}\n%\n\\begin{equation}\n  b_k = \\frac{g_k}{h \\alpha \\beta} + \\frac{( 1 - \\beta)}{\\beta} J_k v_k\n\\end{equation}\n%\nTODO why this is cool: seamless transition between constraints and\nelasticity, a unified treatment of constraints and elasticity, and\nbetter conditioning for very stiff systems. On the other hand, it\nyields larger systems.\n%\n\\section{KKT Systems, Compliance and Relaxation}\n%\nAt this point, it is probably a good idea to introduce a few notions\non saddle-point (or KKT) systems. Such systems are (for now, linear)\nsystems of the form:\n%\n\\begin{equation}\n \\label{eq:kkt-system}\n \\mat{Q & -A^T \\\\-A & 0} \\mat{x\\\\\\lambda} = \\mat{c\\\\b}\n\\end{equation}\n%\nKKT systems typically arise from the Karush-Kuhn-Tucker (hence the\nname) optimality conditions of constrained optimization problems. 
For\ninstance, the KKT system \\eqref{eq:kkt-system} corresponds to the\nfollowing equality-constrained Quadratic Program (QP):\n%\n\\begin{equation}\n  \\label{eq:constrained-optimization}\n  \\underset{Ax + b = 0}{\\textrm{argmin}} \\quad \\half x^T Q x - c^T x\n\\end{equation}\n%\nThe KKT system summarizes the optimality conditions for the following\nunconstrained function (also called the \\emph{Lagrangian}, this time in the\ncontext of optimization):\n%\n\\begin{equation}\n  \\LL(x, \\lambda) \\quad = \\quad \\half x^T Q x - c^T x \\quad - \\quad \\block{A x + b}^T \\lambda\n\\end{equation}\n%\nand whose critical points (in fact, saddle-points) solve the\nconstrained problem \\eqref{eq:constrained-optimization}. As one can\nsee, the constrained integrator \\eqref{eq:constrained-integrator} is\nan example of such KKT systems:\n\\begin{equation}\n  \\mat{W & -J^T\\\\-J & 0} \\mat{v \\\\ \\lambda} = \\mat{c \\\\ b}\n\\end{equation}\n%\nwhere we dropped subscripts for the sake of readability. As symmetric\n\\emph{indefinite} systems, they tend to be more difficult to solve\nthan positive definite ones: CG and Cholesky $LDL^T$ factorizations\nmay break down on such systems. However, when the $(1, 1)$ block (here,\nmatrix $Q$) is invertible, one can use \\emph{pivoting} to obtain the\nfollowing smaller system:\n%\n\\begin{align}\n  x &= \\inv{Q}\\block{c + A^T \\lambda} \\\\\n  -A x &= -A\\inv{Q}A^T \\lambda - A\\inv{Q} c = b \\\\\n  \\label{eq:schur-complement}\n  A\\inv{Q}A^T\\lambda &= -\\block{b + A \\inv{Q} c}\n\\end{align}\n%\nThis smaller system \\eqref{eq:schur-complement} is known as the\n\\emph{Schur} complement of the KKT system. It is always SPD as long as\n$A^T$ is full column-rank, but requires inverting $Q$, which might be\ncostly in practice. The Schur complement system also corresponds to an\n(unconstrained) optimization problem:\n%\n\\begin{equation}\n  \\argmin{\\lambda}\\quad \\half \\lambda^T S \\lambda - s^T \\lambda\n\\end{equation}\n%\nwhere $S = A \\inv {Q} A^T$ is the Schur complement, and $s = -\\block{b +\nA\\inv{Q}c}$. The minimized quantity has a physical interpretation:\ntypically, holonomic constraints minimize the total kinetic energy. If\nwe now consider systems with compliance, such as:\n%\n\\begin{equation}\n  \\label{eq:compliant-kkt-system}\n  \\mat{Q & -A^T \\\\-A & -D} \\mat{x\\\\\\lambda} = \\mat{c\\\\b}\n\\end{equation}\n%\nwe see that they correspond to the following Lagrangian:\n%\n\\begin{equation}\n  \\LL(x, \\lambda) \\quad = \\quad \\half x^T Q x - c^T x \\quad - \n  \\quad \\block{A x + b}^T \\lambda \\quad - \\quad \\half \\lambda^T D \\lambda\n\\end{equation}\n%\nOne can check that the Schur complement becomes $S + D$, and\ncorresponds to:\n%\n\\begin{equation}\n  \\argmin{\\lambda}\\quad \\half \\lambda^T S \\lambda - s^T \\lambda \\quad + \n  \\quad \\half \\lambda^T D \\lambda\n\\end{equation}\n%\nWe see that $D$ acts as a form of numerical damping on the resolution\nof constraints, by biasing the solution of the constrained system\ntowards zero. It is a form of \\emph{constraint\n  relaxation}. Physically, elastic forces minimize a mix between the\nkinetic energy of the system and the $D$-norm of the forces\n$\\lambda$. Under their most general form, KKT systems include\nunilateral and complementarity constraints. We will develop these in\nmore detail in section \\ref{sec:unilateral-constraints}.\n%\n
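To make this concrete, here is a small self-contained numerical sketch in Python/NumPy (toy random matrices, purely illustrative; this is not Compliant's API) that solves the compliant KKT system \\eqref{eq:compliant-kkt-system} both directly and through its Schur complement:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, m = 4, 2\nB = rng.standard_normal((n, n))\nQ = B @ B.T + n * np.eye(n)      # SPD (1,1) block, playing the role of W\nA = rng.standard_normal((m, n))  # constraint Jacobian, role of J\nD = 1e-3 * np.eye(m)             # compliance / relaxation block\nc = rng.standard_normal(n)\nb = rng.standard_normal(m)\n\n# Direct solve of the indefinite block system [[Q, -A^T], [-A, -D]].\nK = np.block([[Q, -A.T], [-A, -D]])\nx_lam = np.linalg.solve(K, np.concatenate([c, b]))\n\n# Schur route: eliminate x = Q^{-1}(c + A^T lam), leaving\n# (A Q^{-1} A^T + D) lam = -(b + A Q^{-1} c).\nS = A @ np.linalg.solve(Q, A.T) + D\nlam = np.linalg.solve(S, -(b + A @ np.linalg.solve(Q, c)))\nx = np.linalg.solve(Q, c + A.T @ lam)\n\nassert np.allclose(x_lam, np.concatenate([x, lam]))\n\\end{verbatim}\nBoth routes agree in exact arithmetic; the Schur route pays off when $Q$ is cheap to invert, as discussed above.\n%\n\\section{Geometric Stiffness}\n%\nFor both constraint and elastic forces, we happily neglected\nsecond-order derivatives of the deformation mapping $g$. 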
In the case\nof elastic forces, this resulted in a nice stiffness matrix of the\nform:\n%\n\\begin{equation}\n  \\nabla_q^2 V(q) \\approx J^T N J = K\n\\end{equation}\n%\nwhich enabled us to treat elastic forces as compliant kinematic\nconstraints. But what about the second order terms $\\tilde{K}=(\\dd\nJ)^T N g(q)$? First of all, when the configuration space is a vector\nspace $Q$ and $g$ is $\\mathcal{C}^2$-continuous, Schwarz's theorem\nensures that $\\nabla_q^2 V$ is symmetric, hence so is $\\tilde{K}$. We\ncall $\\tilde{K}$ the \\emph{geometric stiffness}, as it is induced by\nthe variation of the deformation mapping $g$. We do not know whether,\nin the general case, it is possible to factor \\emph{both} $K$ and\n$\\tilde{K}$ as:\n%\n\\begin{equation}\n  \\nabla_q^2 V(q) = K + \\tilde{K} \\overset{?}{=} J^T\\block{N + \\tilde{N}}J\n\\end{equation}\n%\neven though some specific examples exist (\\emph{e.g.} mass spring\nsystems). Of course, it is always possible to apply a Cholesky\nfactorization directly on $\\nabla_q^2 V(q)$ as:\n%\n\\begin{equation}\n  \\nabla_q^2 V(q) = LDL^T\n\\end{equation}\n%\nand get back on our feet, but this would be highly inefficient in\npractice. Therefore, unless an ad-hoc derivation provides the needed\nfactorization, we are left with only one alternative: treat the\ngeometric stiffness as a regular stiffness instead of compliance.\n%\n\\section{Unilateral Constraints}\n\\label{sec:unilateral-constraints}\n%\nWe now consider \\emph{unilateral constraints} (inequality) instead of\nbilateral ones:\n%\n\\begin{equation}\n  \\label{eq:unilateral-constraint}\n  g(q) \\geq 0\n\\end{equation}\n%\nExamples of such constraints include non-penetration constraints or\nangular limits for an articulated rigid body. Just like in the\nbilateral case, the constraints are enforced by the addition of\nconstraint forces $J^T \\lambda$, satisfying velocity constraints of\nthe form:\n%\n\\begin{equation}\n  J v_{k+1} \\geq -b_k\n\\end{equation}\n%\nobtained by differentiation of \\eqref{eq:unilateral-constraint},\naccording to the time-stepping scheme. Furthermore, reaction forces\nmust satisfy additional requirements known as the Signorini\nconditions:\n%\n\\begin{itemize}\n\\item constraint forces must be repulsive: $\\lambda_{k+1} \\geq 0$\n\\item constraint forces don't act when the constraint is inactive, and\n  conversely:\n  \\[ J_k v_{k+1} > -b_k \\Rightarrow \\lambda_{k+1} = 0, \\quad \\lambda_{k+1} > 0 \\Rightarrow J_k v_{k+1} = -b_k \\]\n\\end{itemize}\n%\nIntuitively, a non-penetration contact force is not allowed to push\nbodies further apart when they are already separating. 
The\nrequirements on constraint forces are summarized by the following\n\\emph{complementarity} constraint:\n%\n\\begin{equation}\n  \\label{eq:complementarity-constraint}\n  0 \\leq J_k v_{k+1} + b_k \\ \\bot \\ \\lambda_{k+1} \\geq 0 \n\\end{equation}\n%\nThe time-stepping scheme with constraint forces is, as before:\n%\n\\begin{equation}\n  \\label{eq:time-stepping-unilateral}\n  W v_{k+1} -J_k^T \\lambda_{k+1} = c_k\n\\end{equation}\n%\nIt turns out that the complementarity constraints\n\\eqref{eq:complementarity-constraint} together with time-stepping\nequation \\eqref{eq:time-stepping-unilateral} form the KKT conditions\nof the following \\emph{inequality-constrained} Quadratic Program:\n%\n\\begin{equation}\n  v_{k+1} = \\argmin{J_kv + b_k\\  \\geq\\  0} \\quad \\half v^TWv - c_k^T v\n\\end{equation}\n%\nTherefore, time-stepping in the presence of unilateral constraints can\nbe readily solved by a general QP solver. As in the bilateral case,\nwhen $W$ is easily invertible, it is possible to compute the Schur\ncomplement in order to obtain an equivalent, but smaller problem:\n%\n\\begin{equation}\n  \\label{eq:lcp}\n  0 \\leq J\\inv{W}J^T \\lambda_{k+1} + b_k + J\\inv{W} c_k\\ \\bot \\ \\lambda_{k+1} \\geq 0 \n\\end{equation}\n%\n(TODO: check formula) Such a problem is known as a \\emph{Linear\n  Complementarity Problem} (LCP) and can be solved by various\nalgorithms, some of which will be presented in section\n\\ref{sec:solvers}.\n%\n\\section{Stabilization}\n%\n\n\\section{Restitution}\n%\nTODO: velocity constraints for contact with restitution (rigid),\nmention Generalized Reflections\n%\n\\section{Numerical Solvers}\n\\label{sec:solvers}\n%\nTODO: CG, Cholesky, Minres, GS, PGS, Sequential Impulses\n\n\n\\section{Code Cheat Sheet}\n\nA quick reminder of who does what where in the code:\n\n\\subsection{CompliantImplicitSolver}\n\n\\begin{itemize}\n\\item Performs KKT system assembly\n\\item Builds the right-hand sides (correction/dynamics)\n\\item Integrates the system solution, updates the graph and stuff\n\\end{itemize}\n\n\\subsection{Right-hand side computation}\n\nLots of cases:\n\\begin{itemize}\n\\item Bilateral (default), unilateral, friction constraint type\n\\item Elastic constraints, kinematic constraints (can be stabilized) (default), restitution constraints, \\ldots ?\n\\end{itemize}\n\nA constraint has a BaseConstraintValue constraint value (if not, the\ndefault is assigned) that produces a correct constraint violation term\n(the right-hand side for constraints in the KKT system) depending on\nthe correction/dynamics phase.\n\nElastic constraints have zero rhs during correction, and -error / dt\nduring dynamics.\n\nStabilized constraints (Stabilization component) correct the error\nduring stabilization, but have zero rhs during dynamics (holonomic\nkinematic constraints)\n\nRestitution constraints: stabilize when velocity is small and when the constraint is not violated,\notherwise flip relative velocity.\n\nDepending on the type (bilateral, unilateral, friction), the\nconstraint also has an associated Projector, that you can find in the\nKKT system.\n\n\\subsection{KKTSolver}\nSolves a KKT system (AssembledSystem), for both correction/dynamics cases.\n\n\n\n\\section{Alternative formulations}\n\nAlternative formulations allow more complex projective constraints (e.g. 
velocity constraints) where the velocity formulation can only handle fixed projective constraints.\nHowever, their corresponding linear systems seem harder to solve for constraints.\nNote that stabilization is always performed in velocity.\n\n\\subsection{Acceleration}\n\nBy considering $v_{k+1} = v_k + h a $, eq. (\\ref{eq:time-stepping}) becomes:\n\n\\begin{equation}\n  \\block{M + h\\alpha D + h^2 \\alpha \\beta K} a = \n  f_k - D v_k - \\alpha h K v_k\n\\end{equation}\n\nand eq. (\\ref{eq:constraintvalue}) becomes:\n\n\\begin{align}\n  J_k a &= \\frac{ -\\frac{g_k}{\\alpha h} - J_k v_k } {\\beta h}\n\\end{align}\n\n\n\\subsection{Velocity variation}\n\nBy considering $v_{k+1} = v_k + dv $, eq. (\\ref{eq:time-stepping}) becomes:\n\n\\begin{equation}\n  \\block{M + h\\alpha D + h^2 \\alpha \\beta K} dv = \n  h f_k - h D v_k - \\alpha h^2 K v_k\n\\end{equation}\n\nand eq. (\\ref{eq:constraintvalue}) becomes:\n\n\\begin{align}\n  J_k dv &= \\frac{ -\\frac{g_k}{\\alpha h} - J_k v_k } {\\beta}\n\\end{align}\n\n\n\\end{document}\n\n", "meta": {"hexsha": "3264f1b71e0dc987042dcbc11058229251a4945d", "size": 20305, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "applications/plugins/Compliant/doc/compliant-reference.tex", "max_stars_repo_name": "sofa-framework/issofa", "max_stars_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_stars_repo_licenses": ["OML"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "applications/plugins/Compliant/doc/compliant-reference.tex", "max_issues_repo_name": "sofa-framework/issofa", "max_issues_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_issues_repo_licenses": ["OML"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "applications/plugins/Compliant/doc/compliant-reference.tex", "max_forks_repo_name": "sofa-framework/issofa", "max_forks_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_forks_repo_licenses": ["OML"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.2301587302, "max_line_length": 177, "alphanum_fraction": 0.7141098252, "num_tokens": 6535, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681122619883, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5941262117890683}}
{"text": "\\documentclass{article}\r\n\\usepackage[utf8]{inputenc}\r\n\r\n\\title{Data Science Mathematics}\r\n\\author{Krystal Maughan }\r\n\\date{December 2017}\r\n\r\n\\begin{document}\r\n\r\n\\maketitle\r\n\r\n\\mathversion{normal}\r\n\r\n\\section{Data Science Notes}\r\nA set is a collection\r\n\\\\\r\n\\\\\r\n2 ${\\in}$ A means 2 is an element of A\r\n\\\\\r\n\\\\\r\nCardinality: size of a set A is the number of elements A\r\n\\\\\r\n\\\\\r\nrepresented as $|$A$|$ = number\r\n\\\\\r\n\\\\\r\nA $\\cap$ B is A intersect B\r\n\\\\\r\n\\\\\r\nA $\\cup$ B is A union B\r\n\\\\\r\n\\\\\r\n$|$ $\\emptyset$ $|$ = 0\r\n\\\\\r\n\\\\\r\nwriting conditions that satisfy a set \r\n\\\\\r\n\\\\\r\nA $\\cap$ B = $\\brace X: X \\in A and X \\in B$ \r\n\\\\\r\n\\\\\r\nInclusion/Exclusion Formula\r\n\\\\\r\n$|$A$\\cup$B$|$ = $|$A$|$ + $|$B$|$ - $|$A $\\cap$ B$|$\r\n\\\\\r\n\\\\\r\n\\\\\r\n\\\\\r\ndisjoint sets\r\n\\\\\r\n\\\\\r\n\\\\\r\n\\\\\r\n\\section{The Infinite World of Real Numbers}\r\nIR - real numbers\r\n\\\\\r\neverything between integers is also real number\r\n\\\\\r\n\\\\\r\nAbsolute value is denoted by $|$num$|$\r\n\\\\\r\n\\\\\r\nIf A B and C are numbers and C is not 0,\r\n\\\\\r\nand A = B,\r\n\\\\\r\nC * A = C * B\r\n\\\\\r\n\\\\\r\n\\section{Sigma}\r\ndummy indices\r\n\\\\\r\nSimplification rules for Sigma Notation\r\n\\\\\r\ndistributive property\r\n\\\\\r\nMean $\\mu$ vs Variance\r\n\\\\\r\nmean centering\r\n\\\\\r\nsubtract the mean from every value \r\n\\\\\r\nVariance is spread $\\sigma$ squared\r\n\\\\\r\nStandard deviation \r\n\r\n\\end{document}\r\n\r\n\r\n\r\n", "meta": {"hexsha": "05a0180383b3f096396a8239b7b1c83163096f56", "size": 1298, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "images/data_science_maths_week1/weekone.tex", "max_stars_repo_name": "kammitama5/kammitama5.github.io", "max_stars_repo_head_hexsha": "b6ef7ae5d551204a2f4920065e710f5e89b54011", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-06-07T07:15:14.000Z", "max_stars_repo_stars_event_max_datetime": "2017-06-07T07:15:14.000Z", "max_issues_repo_path": "images/data_science_maths_week1/weekone.tex", "max_issues_repo_name": "kammitama5/kammitama5.github.io", "max_issues_repo_head_hexsha": "b6ef7ae5d551204a2f4920065e710f5e89b54011", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "images/data_science_maths_week1/weekone.tex", "max_forks_repo_name": "kammitama5/kammitama5.github.io", "max_forks_repo_head_hexsha": "b6ef7ae5d551204a2f4920065e710f5e89b54011", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.2637362637, "max_line_length": 57, "alphanum_fraction": 0.5878274268, "num_tokens": 418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5941055318991876}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\usepackage{cancel}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture XXI Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n% Mathematical Operations:\n\n% Sum: $$\\sum_{n=a}^{b} f(x) $$\n% Integral: $$\\int_{lower}^{upper} f(x) dx$$\n% Limit: $$\\lim_{x\\to\\infty} f(x)$$\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Fundamental Theorem of Line Integrals $-$ 16.3}\n\nThe Fundamental Theorem of Line Integrals states that, if $C$ is a smooth, continuous curve, and $\\overrightarrow{r}'(t)$, $a\\leq t\\leq b$, then:\n\n$$\\int_a^b \\nabla f\\,dr=f(\\overrightarrow{r}(b))-f(\\overrightarrow{r}(a))$$\n\n\\textit{Example:} Given $f(x,y,z)=\\cos(\\pi x)+\\sin(\\pi y)-xyz$, and $C$ is any path from $(1,\\frac{1}{2}, 2)$ to $(2,1,-1)$, Find:\n\n$$\\int_C \\nabla f\\,dr$$\n\n$$\\int_C \\nabla f\\,dr=f(2,1,-1)-f\\left(1,\\frac{1}{2},2\\right)$$\n\n$$(\\cos(2\\pi)+\\sin(\\pi)+2)-(\\cos(\\pi)+\\sin\\left(\\frac{\\pi}{2}\\right)-1)$$\n\n$$\\int_C \\nabla f\\,dr=4$$\n\nRecalling conservative vector fields, where $\\overrightarrow{F}=\\nabla f \\Leftarrow$ potential function  \n\nThe Fundamental Theorem of Line Integrals also states that conservative vector fields are independent of path. That is, as long as any path, say $C_1$ and $C_2$, start and end at the same point, then:\n\n$$\\int_{C_1} \\nabla f\\,dr=\\int_{C_2} \\nabla f\\,dr$$\n\nAny path, $C$, that starts at the same point at which it terminates is called closed.\n\nThere are two types of closed paths:\n\n\\begin{enumerate}\n\n  \\item Simple\n\n  \\item Non-Simple\n\n\\end{enumerate}\n\nA simple path is a shape such as a circle or ellipse. 
A non-simple path crosses over itself, an example of such a shape being the ``$\\infty$'' symbol.\n\nA region, $D$, is called open when it does not contain any of its own boundaries.\n\nAny region, $D$, in which one may connect any two points without exiting the boundaries is known as connected.\n\nFurthermore, a region, $D$, may be called simply connected when there are no holes.\n\n\\begin{enumerate}\n\n  \\item If the line integral is independent of path:\n    $$\\int_{C_1}\\nabla f\\,dr=\\int_{C_2}\\nabla f\\,dr$$\n\n  \\item $\\overrightarrow{F}$ is conservative if $\\overrightarrow{F}=\\nabla f$\n\n  \\item $\\overrightarrow{F}$ is a conservative vector field if $D$ is an open and connected region, where $\\int_C \\overrightarrow{F}\\,dr$ is independent of path (in $D$).\n\n  \\item $\\int_C \\overrightarrow{F}\\,dr$ is independent of path in a region, $D$, if $\\int_C \\overrightarrow{F}\\,dr=0$ on every closed path $C$ in $D$.\n\n\\end{enumerate}  \n\n    Given a function, $\\overrightarrow{F}(x,y)=P(x,y)\\bold{\\hat{i}}+Q(x,y)\\bold{\\hat{j}}$, defined on an open, simply connected region, $\\overrightarrow{F}$ is conservative if:\n    $$\\frac{\\partial P}{\\partial y}=\\frac{\\partial Q}{\\partial x}$$\n\n    \\subsection{Solving for $\\nabla f$}\n\n    Given $\\overrightarrow{F}(x,y)=(2x^3y^4+x)\\bold{\\hat{i}}+(2x^4y^3+y)\\bold{\\hat{j}}$, determine whether $\\overrightarrow{F}$ is a conservative vector field, and, if so, find $\\nabla f$\n\n    $$\\frac{\\partial P}{\\partial y}=8x^3y^3$$\n    $$\\frac{\\partial Q}{\\partial x}=8x^3y^3$$\n    Therefore, it is conservative\n\n    $$\\frac{\\partial f}{\\partial x}=2x^3y^4+x$$\n    $$\\frac{\\partial f}{\\partial y}=2x^4y^3+y$$\n    $$f(x,y)=\\int 2x^3y^4+x\\,dx\\Rightarrow\\frac{1}{2}(x^4y^4+x^2)+h(y)$$\n    $$\\frac{\\partial f}{\\partial y}=2x^4y^3+h'(y)=2x^4y^3+y$$\n    $$\\cancel{2x^4y^3}+h'(y)=\\cancel{2x^4y^3}+y$$\n    $$h'(y)=y\\Rightarrow h(y)=\\frac{1}{2}y^2$$\n    $$f(x,y)=\\frac{1}{2}(x^4y^4+x^2+y^2)$$\n    As a check, $\\nabla f=(2x^3y^4+x)\\bold{\\hat{i}}+(2x^4y^3+y)\\bold{\\hat{j}}=\\overrightarrow{F}$, as required.\n\n\n\\end{document}\n\n", "meta": {"hexsha": "0eb9828b36df8407d117e837ca5b18cbd42f680d", "size": 4525, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture21.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture21.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture21.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.022556391, "max_line_length": 200, "alphanum_fraction": 0.6159116022, "num_tokens": 1502, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177517, "lm_q2_score": 0.7931059462938814, "lm_q1q2_score": 0.5941055318991875}}
{"text": "\\chapter{Bathroom Security}\\label{bathroom-security}\n\n\\href{https://adventofcode.com/2016/day/2}{Link}\n\nYou arrive at \\emph{Easter Bunny Headquarters} under cover of darkness.\nHowever, you left in such a rush that you forgot to use the bathroom!\nFancy office buildings like this one usually have keypad locks on their\nbathrooms, so you search the front desk for the code.\n\n``In order to improve security,'' the document you find says, ``bathroom\ncodes will no longer be written down. Instead, please memorize and\nfollow the procedure below to access the bathrooms.''\n\nThe document goes on to explain that each button to be pressed can be\nfound by starting on the previous button and moving to adjacent buttons\non the keypad: \\mintinline[]{idris}{U} moves up, \\mintinline[]{idris}{D}\nmoves down, \\mintinline[]{idris}{L} moves left, and\n\\mintinline[]{idris}{R} moves right. Each line of instructions\ncorresponds to one button, starting at the previous button (or, for the\nfirst line, \\emph{the ``5'' button}); press whatever button you're on at\nthe end of each line. If a move doesn't lead to a button, ignore it.\n\nYou can't hold it much longer, so you decide to figure out the code as\nyou walk to the bathroom. You picture a keypad like this:\n\n\\begin{minted}[]{text}\n1 2 3\n4 5 6\n7 8 9\n\\end{minted}\n\nSuppose your instructions are:\n\n\\begin{minted}[]{text}\nULL\nRRDDD\nLURDL\nUUUUD\n\\end{minted}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  You start at ``5'' and move up (to ``2''), left (to ``1''), and left\n  (you can't, and stay on ``1''), so the first button is\n  \\mintinline[]{idris}{1}.\n\\item\n  Starting from the previous button (``1''), you move right twice (to\n  ``3'') and then down three times (stopping at ``9'' after two moves\n  and ignoring the third), ending up with \\mintinline[]{idris}{9}.\n\\item\n  Continuing from ``9'', you move left, up, right, down, and left,\n  ending with \\mintinline[]{idris}{8}.\n\\item\n  Finally, you move up four times (stopping at ``2''), then down once,\n  ending with \\mintinline[]{idris}{5}.\n\\end{itemize}\n\nSo, in this example, the bathroom code is \\mintinline[]{idris}{1985}.\n\nYour puzzle input is the instructions from the document you found at the\nfront desk.\n\n\\section{Module Declaration and\nImports}\\label{module-declaration-and-imports}\n\n\\begin{minted}[]{idris}\n||| Day 2: Bathroom Security\nmodule Data.Advent.Day02\n\nimport public Data.Advent.Day\nimport public Data.Ix\n\nimport Data.Vect\n\nimport public Lightyear\nimport public Lightyear.Char\nimport public Lightyear.Strings\n\\end{minted}\n\n\\section{Data Types}\\label{data-types}\n\n\\begin{minted}[]{idris}\n%access public export\n\n||| Up, down, left or right.\ndata Instruction = ||| Up\n                   U\n                 | ||| Down\n                   D\n                 | ||| Left\n                   L\n                 | ||| Right\n                   R\n\n||| A single digit, i.e. a number strictly less than ten.\nDigit : Type\nDigit = Fin 10\n\nimplementation Show Digit where\n  show = show . 
finToInteger\n\nimplementation [showDigits] Show (List Digit) where\n  show = concatMap show\n\n||| A pair of coordinates on the keypad, `(x, y)`.\nCoordinates : Type\nCoordinates = (Fin 3, Fin 3)\n\\end{minted}\n\n\\newpage\n\n\\section{Parsers}\\label{parsers}\n\n\\begin{minted}[]{idris}\n%access export\n\nup : Parser Instruction\nup = char 'U' *> pure U <?> \"up\"\n\ndown : Parser Instruction\ndown = char 'D' *> pure D <?> \"down\"\n\nleft : Parser Instruction\nleft = char 'L' *> pure L <?> \"left\"\n\nright : Parser Instruction\nright = char 'R' *> pure R <?> \"right\"\n\ninstruction : Parser Instruction\ninstruction = up <|> down <|> left <|> right <?> \"up, down, left or right\"\n\npartial instructions : Parser (List Instruction)\ninstructions = some instruction <* (skip endOfLine <|> eof)\n\\end{minted}\n\n\\section{Part One}\\label{part-one}\n\n\\begin{quote}\n  What is the bathroom code?\n\\end{quote}\n\n\\begin{minted}[]{idris}\nnamespace PartOne\n\n  ||| A keypad like this:\n  |||\n  ||| ```\n  ||| 1 2 3\n  ||| 4 5 6\n  ||| 7 8 9\n  ||| ```\n  keypad : Vect 3 (Vect 3 Digit)\n  keypad = [ [1, 2, 3],\n             [4, 5, 6],\n             [7, 8, 9] ]\n\n  move : Coordinates -> Instruction -> Coordinates\n  move (x, y) U = (x, pred y)\n  move (x, y) D = (x, succ y)\n  move (x, y) L = (pred x, y)\n  move (x, y) R = (succ x, y)\n\\end{minted}\n\n\\newpage\n\n\\begin{minted}[]{idris}\n  button : Coordinates -> List Instruction -> (Coordinates, Digit)\n  button loc@(x, y) [] = (loc, index x (index y keypad))\n  button loc (i :: is) = button (move loc i) is\n\npartial partOne : List (List Instruction) -> String\npartOne = show @{showDigits} . go ((1,1), [])\n  where\n    go : (Coordinates, List Digit) -> List (List Instruction) -> List Digit\n    go (_, ds) []            = reverse ds\n    go (loc, ds) (is :: iis) = let (loc', d) = PartOne.button loc is in\n                                   go (loc', d :: ds) iis\n\nnamespace PartOne\n\n  ||| ```idris example\n  ||| example\n  ||| ```\n  partial example : String\n  example = fromEither $ partOne <$>\n            parse (some instructions) \"ULL\\nRRDDD\\nLURDL\\nUUUUD\"\n\\end{minted}\n\n\\section{Part Two}\\label{part-two}\n\nYou finally arrive at the bathroom (it's a several minute walk from the\nlobby so visitors can behold the many fancy conference rooms and water\ncoolers on this floor) and go to punch in the code. 
Much to your\nbladder's dismay, the keypad is not at all like you imagined it.\nInstead, you are confronted with the result of hundreds of man-hours of\nbathroom-keypad-design meetings:\n\n\\begin{minted}[]{text}\n    1\n  2 3 4\n5 6 7 8 9\n  A B C\n    D\n\\end{minted}\n\nYou still start at ``5'' and stop when you're at an edge, but given the\nsame instructions as above, the outcome is very different:\n\n\\begin{itemize}\n\\tightlist\n\\item\n  You start at ``5'' and don't move at all (up and left are both edges),\n  ending at \\mintinline[]{idris}{5}.\n\\item\n  Continuing from ``5'', you move right twice and down three times\n  (through ``6'', ``7'', ``B'', ``D'', ``D''), ending at\n  \\mintinline[]{idris}{D}.\n\\item\n  Then, from ``D'', you move five more times (through ``D'', ``B'',\n  ``C'', ``C'', ``B''), ending at \\mintinline[]{idris}{B}.\n\\item\n  Finally, after five more moves, you end at \\mintinline[]{idris}{3}.\n\\end{itemize}\n\n\\newpage\n\nSo, given the actual keypad layout, the code would be\n\\mintinline[]{idris}{5DB3}.\n\n\\begin{quote}\n  Using the same instructions in your puzzle input, what is the correct\n  \\textit{bathroom code}?\n\\end{quote}\n\n\\begin{minted}[]{idris}\nnamespace PartTwo\n\n  keypad : Vect 5 (n ** Vect n Char)\n  keypad = [ (1 **           ['1'])\n           , (3 **      ['2', '3', '4'])\n           , (5 ** ['5', '6', '7', '8', '9'])\n           , (3 **      ['A', 'B', 'C'])\n           , (1 **           ['D'])\n           ]\n\n  -- NOTE: This will wrap at the bounds, which might be unexpected.\n  partial convert : (n : Nat) -> Fin m -> Fin n\n  convert (S j) fm {m} =\n      let delta = half $ if S j > m\n                            then S j `minus` m\n                            else m `minus` S j in\n          the (Fin (S j)) $ fromNat $ finToNat fm `f` delta\n    where\n      f : Nat -> Nat -> Nat\n      f = if S j > m then plus else minus\n      partial half : Nat -> Nat\n      half = flip div 2\n\n  canMoveVertically : (Fin (S k), Fin 5) -> Instruction -> Bool\n  canMoveVertically (x, y) i with ((finToNat x, finToNat y))\n    canMoveVertically (x, y) U | (col, row) =\n        case row of\n             Z                   => False\n             S Z                 => col == 1\n             S (S Z)             => inRange (1,3) col\n             _                   => True\n    canMoveVertically (x, y) D | (col, row) =\n        case row of\n             S (S Z)             => inRange (1,3) col\n             S (S (S Z))         => col == 1\n             S (S (S (S Z)))     => False\n             _                   => True\n    canMoveVertically _ _ | _ = True\n\\end{minted}\n\n\\newpage\n\n\\begin{minted}[]{idris}\n  partial move : (Fin (S k), Fin 5) -> Instruction ->\n    ((n ** Fin n), Fin 5)\n  move (x, y) U = if canMoveVertically (x, y) U\n                     then let n = fst (index (pred y) keypad) in\n                              ((n ** convert n x), pred y)\n                     else ((_ ** x), y)\n  move (x, y) D = if canMoveVertically (x, y) D\n                     then let n = fst (index (succ y) keypad) in\n                              ((n ** convert n x), succ y)\n                     else ((_ ** x), y)\n  move (x, y) L = let n = fst (index y keypad) in\n                      ((n ** convert n (pred x)), y)\n  move (x, y) R = let n = fst (index y keypad) in\n                      ((n ** convert n (succ x)), y)\n\n  partial button : (Fin (S k), Fin 5) -> List Instruction ->\n    (((n ** Fin n), Fin 5), Char)\n  button loc@(x, y) 
[] =\n      let (n ** row) = index y PartTwo.keypad\n          xx = convert n x in\n          (((n ** xx), y), index xx row)\n  button loc (i :: is) =\n      let ((S _ ** x), y) = move loc i in\n          button (x, y) is\n\npartial partTwo : List (List Instruction) -> String\npartTwo = go (((5 ** 0),2), [])\n  where\n    partial go : (((n ** Fin n), Fin 5), List Char) ->\n      List (List Instruction) -> String\n    go (_, cs) []            = pack $ reverse cs\n    go (loc, cs) (is :: iis) =\n        let ((S k ** xx), y) = loc\n            (loc', c) = PartTwo.button (xx, y) {k=k} is in\n            go (loc', c :: cs) iis\n\nnamespace PartTwo\n\n  ||| ```idris example\n  ||| PartTwo.example\n  ||| ```\n  partial example : String\n  example = fromEither $ partTwo <$>\n            parse (some instructions) \"ULL\\nRRDDD\\nLURDL\\nUUUUD\"\n\\end{minted}\n\n\\section{Main}\\label{main}\n\n\\begin{minted}[]{idris}\nnamespace Main\n\n  partial main : IO ()\n  main = runDay $ MkDay 2 (some instructions)\n         (pure . partOne)\n         (pure . partTwo)\n\\end{minted}\n", "meta": {"hexsha": "7dfb1ce89bda9fd79f439f5f69d967572df8ce7b", "size": 9719, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/Data/Advent/Day02.tex", "max_stars_repo_name": "yurrriq/advent-of-code", "max_stars_repo_head_hexsha": "ee83efa138322b5dbbda9f4aeac75481a9cd49fe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-11-04T10:32:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T07:36:22.000Z", "max_issues_repo_path": "src/Data/Advent/Day02.tex", "max_issues_repo_name": "yurrriq/aoc19", "max_issues_repo_head_hexsha": "ee83efa138322b5dbbda9f4aeac75481a9cd49fe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Data/Advent/Day02.tex", "max_forks_repo_name": "yurrriq/aoc19", "max_forks_repo_head_hexsha": "ee83efa138322b5dbbda9f4aeac75481a9cd49fe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-26T19:27:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-26T19:27:21.000Z", "avg_line_length": 28.9255952381, "max_line_length": 75, "alphanum_fraction": 0.5865829818, "num_tokens": 2897, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7931059560743423, "lm_q1q2_score": 0.5941055303404206}}
{"text": "\\section{A data exploration case study}\n\n% Suppose you want to \n% Different kinds of statistical approach are appropriate to the analysis \n% of different combinations of variables. For example, to test whether a \n% continuous dependent variable (e.g. life-span) varies significantly \n% across a categorical independent variable (e.g. handedness), then a t \n% test might be appropriate. However, a t test would be inappropriate to \n% test whether a continuous dependent variable (e.g. growth rate) varies \n% significantly across a continuous independent variable (e.g. nutrient \n% concentration): here some sort of regression would be required. It is \n% critical that you know what sorts of test are appropriate to different \n% kinds of data.\n % If your dependent variable is count data, random variation in the \n % variable is likely to be distributed according to the Poisson \n % distribution, and particular care must be taken in its analysis.\n% \n% \n\n\nWe will use the famous {\\it Iris} data set which is available in R (one \nof many) as a case study for data exploration and descriptive \nstatistics. This dataset gives the measurements in centimeters of the \nvariables sepal length and width and petal length and width, \nrespectively, for 50 flowers from each of 3 species of iris: {\\it Iris \nsetosa}, {\\it I. versicolor}, and {\\it I. virginica}. All data came \nfrom the same  pasture, and picked on the same day and measured at the \nsame time by the same person with the same apparatus. \n\n\\section{Subsetting Data}\nOften, you will need to get only part of a matrix, dataframe, etc. Here \nare ways to subset your data:\n\\begin{snugshade}\n\\footnotesize\n\\begin{verbatim}\n> M <- matrix(1:9, 3, 3)\n> M\n\t [,1] [,2] [,3]\n[1,]    1    4    7\n[2,]    2    5    8\n[3,]    3    6    9\n> M[1,1]\n[1] 1\n> M[1, ]\n[1] 1 4 7\n> M[1:2, ]\n\t [,1] [,2] [,3]\n[1,]    1    4    7\n[2,]    2    5    8\n> M[c(1, 3), 2]\n[1] 4 6\n> M[-3,]\n\t [,1] [,2] [,3]\n[1,]    1    4    7\n[2,]    2    5    8\n> M[-3,-2]\n\t [,1] [,2]\n[1,]    1    7\n[2,]    2    8\n> attach(iris)\n> iris\n\t Sepal.Length Sepal.Width Petal.Length Petal.Width Species\n1            5.1         3.5          1.4         0.2 setosa\n2            4.9         3.0          1.4         0.2 setosa\n3            4.7         3.2          1.3         0.2 setosa\n4            4.6         3.1          1.5         0.2 setosa\n5            5.0         3.6          1.4         0.2 setosa\n6            5.4         3.9          1.7         0.4 setosa\n7            4.6         3.4          1.4         0.3 setosa\n8            5.0         3.4          1.5         0.2 setosa\n9            4.4         2.9          1.4         0.2 setosa\n10           4.9         3.1          1.5         0.1 setosa\n> iris$Sepal.Width[1:5]\n[1] 3.5 3.0 3.2 3.1 3.6\n> iris[1:3, c(\"Sepal.Width\",\"Petal.Width\")]\n\tSepal.Width Petal.Width\n1         3.5         0.2\n2         3.0         0.2\n3         3.2         0.2\n> iris[(Sepal.Width < 3.5) & (Sepal.Length > 7.2),]\n Sepal.Length Sepal.Width Petal.Length Petal.Width Species\n106      7.6         3.0          6.6         2.1 virginica\n108      7.3         2.9          6.3         1.8 virginica\n119      7.7         2.6          6.9         2.3 virginica\n123      7.7         2.8          6.7         2.0 virginica\n131      7.4         2.8          6.1         1.9 virginica\n136      7.7         3.0          6.1         2.3 virginica\n\\end{verbatim}\n\\end{snugshade}\n\n\\noindent  
\n\\section{Basic Statistics in R}\nR contains several functions to perform basic (and advanced!) \nstatistics. \n\\begin{snugshade} \\footnotesize\n\\begin{verbatim}\n> attach(iris)\n> summary(iris)\n  Sepal.Length    Sepal.Width     Petal.Length  Petal.Width   \n Min.   :4.300   Min.   :2.000   Min.   :1.000 Min.   :0.100  \n 1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600 1st Qu.:0.300  \n Median :5.800   Median :3.000   Median :4.350 Median :1.300  \n Mean   :5.843   Mean   :3.057   Mean   :3.758 Mean   :1.199  \n 3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100 3rd Qu.:1.800  \n Max.   :7.900   Max.   :4.400   Max.   :6.900 Max.   :2.500  \n       Species  \n setosa    :50  \n versicolor:50  \n virginica :50  \n> table(Species)\nSpecies\n    setosa versicolor  virginica \n        50         50         50\n\n> table(Species, Petal.Width)\n            Petal.Width\nSpecies      0.1 0.2 0.3 0.4 0.5 0.6  1 1.1 1.2 1.3 1.4 ...\n  setosa       5  29   7   7   1   1  0   0   0   0   0 ...\n  versicolor   0   0   0   0   0   0  7   3   5  13   7 ...\n  virginica    0   0   0   0   0   0  0   0   0   0   1 ...\n            Petal.Width\nSpecies      2.1 2.2 2.3 2.4 2.5\n  setosa       0   0   0   0   0\n  versicolor   0   0   0   0   0\n  virginica    6   3   8   3   3\n> t.test(Sepal.Width[which(Species == \"setosa\")], \n         Sepal.Width[which(Species == \"versicolor\")])\n\n\tWelch Two Sample t-test\n\ndata:  Sepal.Width[which(Species == \"setosa\")] and \n       Sepal.Width[which(Species == \"versicolor\")] \nt = 9.455, df = 94.698, p-value = 2.484e-15\nalternative hypothesis: true difference in means is\nnot equal to 0 \n95 percent confidence interval:\n 0.5198348 0.7961652 \nsample estimates:\nmean of x mean of y \n    3.428     2.770 \n> summary(lm(Sepal.Width ~ Sepal.Length))\n\nCall:\nlm(formula = Sepal.Width ~ Sepal.Length)\n\nResiduals:\n    Min      1Q  Median      3Q     Max \n-1.1095 -0.2454 -0.0167  0.2763  1.3338 \n\nCoefficients:\n             Estimate Std. Error t value Pr(>|t|)    \n(Intercept)   3.41895    0.25356   13.48   <2e-16 ***\nSepal.Length -0.06188    0.04297   -1.44    0.152    \n---\nSignif. 
codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1 \n\nResidual standard error: 0.4343 on 148 degrees of freedom\nMultiple R-squared: 0.01382,\tAdjusted R-squared: 0.007159 \nF-statistic: 2.074 on 1 and 148 DF,  p-value: 0.1519 \n\\end{verbatim}\n\\end{snugshade}\n", "meta": {"hexsha": "c4a3834830e9ad4b74076f36dbd43f8400fd1a72", "size": 5757, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archived/SilCompBioStat/extra.tex", "max_stars_repo_name": "mathemage/TheMulQuaBio", "max_stars_repo_head_hexsha": "63a0ad6803e2aa1b808bc4517009c18a8c190b4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-12T13:33:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-12T13:33:14.000Z", "max_issues_repo_path": "archived/SilCompBioStat/extra.tex", "max_issues_repo_name": "OScott19/TheMulQuaBio", "max_issues_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archived/SilCompBioStat/extra.tex", "max_forks_repo_name": "OScott19/TheMulQuaBio", "max_forks_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.537037037, "max_line_length": 74, "alphanum_fraction": 0.5659197499, "num_tokens": 2141, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7490872131147276, "lm_q1q2_score": 0.5941055266772115}}
{"text": "\\section{Curve Sketching}\\label{sec:CurveSketching}\r\n\r\nIn this section, we discuss how we can tell what the graph of a function\r\nlooks like by performing simple tests on its derivatives.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n% Subsections to include\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\input{5-applications-of-derivatives/5-6-1-inc-dec-first-derivative-test}\r\n\\input{5-applications-of-derivatives/5-6-2-second-derivative-test}\r\n\\input{5-applications-of-derivatives/5-6-3-concavity-inflection-pts}\r\n\\input{5-applications-of-derivatives/5-6-4-asymptotes-other}\r\n\\input{5-applications-of-derivatives/5-6-5-summary}\r\n\r\n% Exercises are included at the end of each subsection, except the 'asymptotes-other' subsection which does not have its own exercises\r\n% The following are exercises for the entire section on Curve Sketching\r\n\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for \\ref{sec:CurveSketching}}\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\nSketch the curves. Identify clearly any interesting features, including\r\nlocal maximum and minimum points, inflection points, asymptotes, and\r\nintercepts. \r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x^5-5x^4+5x^3$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x^3-3x^2-9x+5$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=(x-1)^2(x+3)^{2/3}$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds x^2+x^2y^2=a^2y^2$, $a>0$.\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=xe^x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=(e^x+e^{-x})/2$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=e^{-x}\\cos x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=e^x-\\sin x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=e^x/x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = 4x+\\sqrt{1-x}$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = (x+1)/\\sqrt{5x^2 + 35}$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y= x^5 - x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = 6x + \\sin 3x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = x+ 1/x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = x^2+ 1/x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = (x+5)^{1/4}$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = \\tan^2 x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y =\\cos^2 x - \\sin^2 x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y = \\sin^3 x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x(x^2+1)$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x^3+6x^2 + 9x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x/(x^2-9)$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=x^2/(x^2+9)$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=2\\sqrt{x} - x$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=3\\sin(x) - \\sin^3(x)$, for $x\\in[0,2\\pi]$\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n\t$\\ds y=(x-1)/(x^2)$\r\n\\end{ex}\r\n\r\n\\end{enumialphparenastyle}", "meta": {"hexsha": "fff58bd2dd1616f8243b131a3d6b49cfa40f9b1c", "size": 2804, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5-applications-of-derivatives/5-6-0-curve-sketching.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": 
"7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5-applications-of-derivatives/5-6-0-curve-sketching.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5-applications-of-derivatives/5-6-0-curve-sketching.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.6352201258, "max_line_length": 135, "alphanum_fraction": 0.4975035663, "num_tokens": 1025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7931059438487663, "lm_q1q2_score": 0.5941055211823979}}
{"text": "\\subsection{Metrics for Entity Typing}\nThe evaluation of an ET approach relies on some variants of the metrics used in classical classification problems. The explanation of each metric will be supported by numerical values obtained from the example in Table~\\ref{tab:metrics_example}. \n\n\\begin{table}[H]\n\\centering\n\\caption{Example of predictions over a type set $T=\\{T1, T2, T3\\}$ and a dataset $X=\\{X1, X2, X3\\}$}\n\\label{tab:metrics_example}\n\\begin{tabular}{c|ccc|ccc|}\n\\cline{2-7}\n                                       & \\multicolumn{3}{c|}{\\textbf{Prediction}}                                          & \\multicolumn{3}{c|}{\\textbf{Target}}                                              \\\\ \\hline\n\\multicolumn{1}{|c|}{\\textbf{Example}} & \\multicolumn{1}{c|}{\\textbf{T1}} & \\multicolumn{1}{c|}{\\textbf{T2}} & \\textbf{T3} & \\multicolumn{1}{c|}{\\textbf{T1}} & \\multicolumn{1}{c|}{\\textbf{T2}} & \\textbf{T3} \\\\ \\hline\n\\multicolumn{1}{|c|}{X$_{1}$}          & x                                &                                  & x           & x                                & x                                &             \\\\ \\hline\n\\multicolumn{1}{|c|}{X$_{2}$}          &                                  & x                                &             & x                                &                                  & x           \\\\ \\hline\n\\multicolumn{1}{|c|}{X$_{3}$}          & x                                &                                  & x           & x                                &                                  & x           \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nIn the definition of the metrics we will use terminology that needs a preliminary explanation:\n\\begin{itemize}\n    \\item True Positive (\\textbf{TP}): number of correct positive predictions. In the example in Table~\\ref{tab:metrics_example} we have TP=3 .\n    \\item False Positive (\\textbf{FP}): number of wrong positive predictions. In the example in Table~\\ref{tab:metrics_example} we have FP=2 .\n    \\item True Negative (\\textbf{TN}): number of correct negative predictions. In the example in Table~\\ref{tab:metrics_example} we have TN=1 .\n    \\item False Negative (\\textbf{FN}): number of wrong negative predictions. 
In the example in Table~\\ref{tab:metrics_example} we have FN=3 .\n\\end{itemize}\n\nWith these preliminary notions fixed, the evaluated metrics are defined as follows:\n\n\\begin{itemize}\n    \\item \\textbf{Accuracy:} percentage of correct predictions; defined as \n    \\begin{gather*}\n        \\frac{count(Prediction == Target)}{|X|} = \\frac{1}{3}\n    \\end{gather*}\n    \n    \\item \\textbf{Micro precision:} percentage of correct positive predictions with respect to all the positive predictions; defined as\n    \\begin{gather*}\n        \\frac{TP}{TP + FP} = \\frac{3}{3 + 2} = 0.6\n    \\end{gather*}\n    \n    \\item \\textbf{Micro recall:} percentage of correct positive predictions with respect to the positive target; defined as\n    \\begin{gather*}\n        \\frac{TP}{TP + FN} = \\frac{3}{3 + 3} = 0.5\n    \\end{gather*}\n    \n    \\item \\textbf{Micro f1:} harmonic mean of micro precision and micro recall; defined as\n    \\begin{gather*}\n        2 \\cdot \\frac{MicroPrecision \\cdot MicroRecall}{MicroPrecision + MicroRecall} =\n        2 \\cdot \\frac{0.6 \\cdot 0.5}{0.6 + 0.5} = 0.54\n    \\end{gather*}\n    \n    \\item \\textbf{Macro precision examples:} mean of the percentages of correct positive predictions with respect to all the positive predictions of each example; defined as\n    \\begin{gather*}\n        \\frac{\\sum_{x \\in X}\\frac{TP(x)}{TP(x) + FP(x)}}{|X|}=\n        \\frac{0.5 + 0 + 1}{3}=0.5\n    \\end{gather*}\n    \n    \\item \\textbf{Macro recall examples:} mean of the percentages of correct positive predictions with respect to the positive target of each example; defined as\n    \\begin{gather*}\n        \\frac{\\sum_{x \\in X}\\frac{TP(x)}{TP(x) + FN(x)}}{|X|}=\n        \\frac{0.5 + 0 + 1}{3}=0.5\n    \\end{gather*}\n    \n    \\item \\textbf{Macro f1 examples:} harmonic mean of macro precision examples and macro recall examples; defined as\n    \\begin{gather*}\n        2 \\cdot \\frac{MacroPrecisionExamples \\cdot MacroRecallExamples}{MacroPrecisionExamples + MacroRecallExamples} =\\\\\n        = 2 \\cdot \\frac{0.5 \\cdot 0.5}{0.5 + 0.5} = 0.5\n    \\end{gather*}\n    \n    \\item \\textbf{Macro precision classes:} mean of the percentages of correct positive predictions with respect to all the positive predictions of each class; defined as\n    \\begin{gather*}\n        \\frac{\\sum_{t \\in T}\\frac{TP(t)}{TP(t) + FP(t)}}{|T|} =\n        \\frac{1 + 0 + 0.5}{3} = 0.5\n    \\end{gather*}\n    \n    \\item \\textbf{Macro recall classes:} mean of the percentages of correct positive predictions with respect to the positive target of each class; defined as\n    \\begin{gather*}\n        \\frac{\\sum_{t \\in T}\\frac{TP(t)}{TP(t) + FN(t)}}{|T|} =\n        \\frac{0.66 + 0 + 0.5}{3} = 0.38\n    \\end{gather*}\n    \n    \\item \\textbf{Macro f1 classes:} harmonic mean of macro precision classes and macro recall classes; defined as\n    \\begin{gather*}\n        2 \\cdot \\frac{MacroPrecisionClasses \\cdot MacroRecallClasses}{MacroPrecisionClasses + MacroRecallClasses} = \\\\\n        = 2 \\cdot \\frac{0.5 \\cdot 0.38}{0.5 + 0.38} = 0.43\n    \\end{gather*}\n\\end{itemize}", "meta": {"hexsha": "4b4d63db4b4e248022a1b794fd5c5d07d047ebb5", "size": 5222, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "project/experiments/metrics.tex", "max_stars_repo_name": "christianbernasconi96/MasterThesis", "max_stars_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null,
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "project/experiments/metrics.tex", "max_issues_repo_name": "christianbernasconi96/MasterThesis", "max_issues_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project/experiments/metrics.tex", "max_forks_repo_name": "christianbernasconi96/MasterThesis", "max_forks_repo_head_hexsha": "6211ff86af247aace530912c4eca9019365d606e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.4352941176, "max_line_length": 246, "alphanum_fraction": 0.5855993872, "num_tokens": 1505, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7931059438487663, "lm_q1q2_score": 0.5941055211823979}}
{"text": "\\documentclass[a4paper,12pt]{report}\n\n\\usepackage{amsmath,amsfonts,mathtools}\n\\usepackage{amssymb}\n\\usepackage{amsbsy}\n\\usepackage{hyperref}\n\n\\begin{document}\n\\title{ECE259 Abridged}\n\\author{Aman Bhargava}\n\\date{Janaury 2020}\n\\maketitle\n\n\\tableofcontents\n\n\n\\chapter{Electrostatics}\n\\section{Coulomb's Law and Electric Field}\n\n$$\\vec{F}_e = k\\frac{Q_1Q_2}{|\\vec{R}^2|}(\\vec{R_2} - \\vec{R_1})$$\n\n\\section{Electric Field Intensity $\\vec{E}$}\n\\begin{itemize}\n\\item \\textbf{Source} charges 'make' the charge.\n\\item \\textbf{Test} charges 'feel' the charge.\n\\item Field lines point in the direction that a \\textbf{positive test charge} would be pushed if it were placed at a given point in space.\n\\end{itemize}\n\n\\paragraph{Electric Field at $R_2$ due to $Q_1$ at $R_1$: }\n$$\\vec{E_1} = \\lim_{Q_2 \\to 0} \\frac{\\vec{F}_{12}}{Q_2} = \\frac{Q_1}{4\\pi \\varepsilon_0 |\\vec{R}_2 - \\vec{R}_1|^3}(\\vec{R}_2 - \\vec{R}_1)$$\n\\textit{Note: $F_{12}$ is the force acting on 1 due to charge 2}.\n\n\\paragraph{Properties of $\\vec{E}$}\n\\begin{itemize}\n\\item Points away from positive charges.\n\\item Points toward negative charges.\n\\item Points along line between source and point of measurement.\n\\item Linear such that $\\vec{E}_{tot} = \\sum \\vec{E}_i$\n\\end{itemize}\n\n\n\\section{$\\vec{E}$ from Continuous Charge Distribution}\n\n$$\\vec{E}_{tot} = \\int d\\vec{E}' = \\int \\frac{dQ' (\\vec{R} - \\vec{R}')}{4\\pi\\varepsilon_0 |\\vec{R} - \\vec{R}'|^3}$$\n\nPossible charge distribution types consist of \\textbf{linear, surface, and volumes}. Arbitrary surfaces will not be tested - they generally fall into \n\\textbf{disk, cone, cylinder, sphere, and cube}.\n\n\\paragraph{Steps for Finding $\\vec{E}_{tot}$:}\n\\begin{enumerate}\n\\item Write down Coulomb's Law:\n$$\\vec{E} = \\int d\\vec{E}' = \\int \\frac{dQ'}{4\\pi\\varepsilon_0 |\\vec{R}-\\vec{R}'|^3} (\\vec{R}-\\vec{R}') $$\n\\item State $dQ'$, $\\vec{R}$, and $\\vec{R}'$.\n$$dQ' = \\rho_s dS$$\n\n\\item Integrate: \n\\begin{itemize}\n\\item Determine $d\\vec{E}' = \\frac{dQ'(\\vec{R}-\\vec{R}')}{4\\pi\\varepsilon_0 |\\vec{R}-\\vec{R}'|^3}$\n\\item Ensure that position vectors do not vary with spatial coordinates (if they do, convert them to cartesian or otherwise).\n\\item Cancel any unit vectors you can using symmetry.\n\\item Perform the integration. \n\\end{itemize}\n\\end{enumerate}\n\n\\section{Gauss's Law}\n\\subsection{Fundamental Postulates of Electrostatics}\n$$\\nabla \\times \\vec{E} = 0 \\,\\, \\to \\,\\, \\int_C \\vec{E} \\cdot d\\vec{l} = 0$$\n$$\\nabla \\cdot \\vec{D} = \\rho_S \\,\\, \\to \\,\\, \\iint_C \\vec{D} \\cdot d\\vec{S} = Q_{enc} = \\iiint \\rho_v dV$$\n\nThe second line is Gauss's law. The first is simply stating that electric fields are conservative/irrotational/etc.\n\n\\paragraph{What is $\\vec{D}$? } $\\vec{D} = \\varepsilon \\vec{E} = \\varepsilon_r \\varepsilon_o \\vec{E}$. $\\vec{D}$ is electric \\textbf{flux density} while \n$\\vec{E}$ is the electric \\textbf{field density}.\n\n\\subsection{Why is Gauss's law true?} \n\nWhy should it be the case that local charge density is given by the divergence of electric flux density? Let's \nthink about the properties of electric fields. We know that electric field lines (and thus electric flux lines) point away from positive charges. \n\\textbf{Divergence is how much the vector field points away} (or to) a given point. 
For that reason, it makes a lot of sense for the divergence to give \nyou the magnitude of local charge density. If you have a positive charge, your field lines will point away from that charge. Therefore, the divergence of \n$\\vec{D}$ will be correlated (in this case, equal) to the charge density!\n\n\\subsection{Problem Solving with Gauss's Law}\nUsing Gauss's law, we can greatly simplify the process of finding $\\vec{E}$ if a few conditions are met. We need to pick a surface that encloses a \ngiven charge mass and that has the following properties:\n\n\\paragraph{Gaussian Surface Properties: }\n\\begin{enumerate}\n\\item Closed.\n\\item $\\vec{E}$ is either perpendicular or parallel to $d\\vec{S}$.\n\\item $|\\vec{E}|$ is constant over the surface.\n\\end{enumerate}\n\n\\paragraph{Steps to getting $\\vec{E}$ with Gauss's Law:}\n\\begin{enumerate}\n\\item Determine the charge distribution from the given source. \n\\item Choose a corresponding Gaussian surface.\n\\item $\\iint \\vec{E} \\cdot d\\vec{S} \\,\\, \\to \\,\\, E_R \\iint d\\vec{S} = E_R\\cdot SA$. From here, we can solve for $E_R$, the \nelectric field at a given point on the surface (which is constant). The direction of that field can be found via common sense...\n\\end{enumerate}\n\nYou can also solve things with the differential form of Gauss's law, $\\nabla\\cdot \\vec{D} = \\rho_v$.\n\n\\section{Electric Scalar Potential}\nWe have the following methods of finding $\\vec{E}$ from a charge distribution:\n\\begin{enumerate}\n\\item Coulomb's law with integral superposition.\n\\item Gauss's Law.\n\\item \\textbf{Potential Theory}\n\\item Getting a computer to do it for us. \n\\end{enumerate}\n\nWe define electric potential $\\Delta V$ as the amount of work needed to move a 1-Coulomb charge from \none place to another in an electric field: \n$$\\Delta V = -\\int_{p_1}^{p_2} \\vec{E} \\cdot dl = \\frac{W_{ext}}{Q}$$\n$$\\nabla V = -\\vec{E}$$\n$$V = \\frac{1}{4\\pi\\varepsilon} \\int_{v, s, l} \\frac{dQ'}{|\\vec{R} - \\vec{R}'|}$$\n\n\nThat second one is Maxwell's 2nd equation. \n\nFortunately, this is conservative and linear. If you know the $\\Delta V$ from point $a$ to point $b$ and the \n$\\Delta V$ from point $b$ to point $c$, you can easily get the $\\Delta V$ from point $a$ to point $c$.\n\n\\paragraph{Absolute $V$: } If we take our reference point as one that is infinite distance away, we define \n\\textbf{absolute $V$} as the energy required to move a 1-Coulomb test charge from infinite distance to a given \npoint. If it's not otherwise stated, this is the voltage value you are being given. \n\n\\paragraph{Equipotential surfaces: } These surfaces have the same $V$ value. Remember how $\\nabla V = -\\vec{E}$? \nWell thanks to our knowledge of gradients, we know that these are always going to be perpendicular to $\\vec{E}$ \nlines. Also, we now know that $\\vec{E}$ points from \\textbf{high to low $V$} thanks to our awesome vector calculus \nprowess. Very cool. 
\n\n\n\\paragraph{Steps for Problem Solving: }\n\\begin{itemize}\n\\item Write $\\Delta V = -\\int_{p_1}^{p_2} \\vec{E}_a \\cdot dl$.\n\\item Evaluate $V_{origin \\to p_1}$ and $V_{origin \\to p_2}$.\n\\item Subtract.\n\\end{itemize}\n\n\\paragraph{Electric Potential of a Point Charge: } Using Coulomb's law, we get: $$V = \\frac{Q}{4\\pi \\varepsilon_0 |\\vec{R} - \\vec{R}'|}$$\n\nFor larger bodies, $V$ is a \\textbf{simple scalar summation} of the voltage due to individual point charges.\n\nNote that the formulae for the \\textbf{gradient operator} $\\nabla$ in various coordinate systems are on the aid sheet for the sheet.\n\n\n\\section{Dielectrics}\n\nIf you put a material that's not conductive in an electric field, the electron clouds will stretch, and you'll get some surface-bound net charges $\\rho_{sb}$. You don't actually have any volume charges, because in the middle of the material, the net charges still cancel out with adjacent atoms since you don't have any net flow of electrons away from their nucleii.\n\nIn the case of a capacitor, the original electric field is due entirely to the surface charge on the plates of the capacitor $\\pm \\rho_s$. This is the sole factor for generating the actual field within the capacitor $E_0 = \\frac{\\rho_{s}}{\\varepsilon_0}$. Similarly, there is an electric field due to polarization within the material due to the \\textbf{bound charge density} $\\rho_{sb}$: \n\n$$\\vec E_p = \\frac{\\rho_{sb}}{\\varepsilon_0};\\,\\,\\, E_{tot} = E_0 + E_p = \\frac{\\vec E_0}{\\varepsilon_r}$$\n\nAfter some fancy algebra, we also get: \n\n$$\\vec P = \\rho_{sb} \\hat n; \\,\\, \\vec D = \\rho_s \\hat n = \\varepsilon_0 \\vec E + \\vec P = \\varepsilon_r \\varepsilon_0 \\vec E$$\n\nThis is super cool, because it shows that $D$ doesn't really care about dielectrics or anything that gets in its way -- it just cares about the original sources of electric charge: \\textbf{free charge} densities. \n\nWe also have these super important relationships for when you don't actually have any field causing the polarization and you want to know stuff about the polarization vector or whatever. \n\n$$\\rho_{p, v} = -\\nabla \\cdot \\vec P; \\,\\, \\rho_{p, s} = -\\hat a_n \\cdot (\\vec P_2 - \\vec P_1)$$\n\nWhere $\\rho_{p, v}$ is the volume charge density caused by the polarization, etc.  \n\n\n\n\n\\subsection{Dielectric Strength}\n\nDielectrics have a maximum $\\vec{E}_b$ at which they turn into conductors because the electrons are ripped into the conduction band.\n\n\\subsection{Dielectric Boundary Conditions}\n\nAt the interface between two materials, the magnitude and direction of the electric field changes. There are \\textbf{two boundary conditions} that are able to effectively describe that change, derived from Maxwell's Equations.\n\n\\begin{itemize}\n\\item $E_{t1} = E_{t2}$. \\textit{Tangential $\\vec{E}$ is continuous across boundary}.\n\\item $D_{n1} - D_{n2} = \\rho_s$. 
\\textit{Normal $\\vec{D}$ is discontinuous by $\\rho_s$ across the boundary}.\n\\end{itemize}\n\nThese both apply in static AND time-varying situations.\n\n\\paragraph{Resultant properties of boundary conditions: } \n\\begin{itemize}\n\\item $\\vec{E} = 0$ inside all (perfect) conductors.\n\\item \\textbf{Current densities} at imperfect conductor boundaries are: $$\\frac{J_{t1}}{\\sigma_1} = \\frac{J_{t2}}{\\sigma_2}; J_{n1} = J_{n2} = J_n$$\n\\item At imperfect boundaries, the surface charge density is: $$\\rho_s = J_n\\{ \\frac{\\varepsilon_{r1}\\varepsilon_0}{\\sigma_1} - \\frac{\\varepsilon_{r2}\\varepsilon_0}{\\sigma_2} \\}$$\n\\end{itemize}\n\nDo note that all these rules only apply at interfaces, not in free space.\n\n\\subsection{Capacitance}\n\n$$Q = C\\Delta V;\\,\\,\\,\\, C = \\frac{Q}{V}$$\n\nCapacitance is the proportionality constant for a given setup between the voltage applied and the charge accumulated. Converting the above statement into integrals, we get:\n\n$$C = \\frac{Q}{V} = \\frac{\\iint_s \\vec{D}\\cdot d\\vec{S}}{|-\\int \\vec{E} \\cdot dl|}$$\n\n\\paragraph{Capacitors: } Pretty wacky, can usually be solved with a combination of Gauss's law and knowing your material properties and how to get voltage from an electric field or electric flux density distribution.\n\n\\begin{itemize}\n\\item \\textbf{Compound capacitors} have multiple dielectrics. BOUNDARY CONDITIONS MUST BE APPLIED when solving these.\n\\item \\textbf{DIELECTRIC RESISTANCE} is given by: $$RC = \\frac{\\varepsilon_r \\varepsilon_0}{\\sigma}$$\n\\item Supercapacitors use nanoengineering to make high capacitance capacitors.\n\\end{itemize}\n\n\\section{Electrostatic Energy}\n\n\\subsection{Parallel Plate Capacitor}\n\n$$Q = CV_0; |\\vec D| = \\rho_s = \\frac{Q}{S}$$\n$$\\vec E = \\frac{\\vec D}{\\varepsilon_r \\varepsilon_0} = \\frac{\\rho_s}{\\varepsilon_r \\varepsilon_0} = \\frac{V_0}{d}$$\n\n\\textbf{Energy} in an electrostatic system is defined as the \\textbf{work} required to go from \\textbf{infinite dispersal of particles} to the given configuration of charges.\n\n$$W_e = \\frac{1}{2} \\sum_{i = 1}^{n} Q_i V_i$$\n\n$$W_e = \\frac{1}{2} \\int_l \\rho_l V dl = \\frac{1}{2} \\iint_s \\rho_s V dS = \\frac{1}{2} \\iiint \\rho_v V dv$$\n\n$$W_e = \\frac{1}{2} \\iiint \\vec{D} \\cdot \\vec{E} dv = \\frac{1}{2} \\iiint \\frac{|\\vec{D}|^2}{\\varepsilon_r \\varepsilon_0} dv$$\n\n\\begin{enumerate}\n\\item $V_i$ is the voltage 'seen' by particle $i$.\n\\item $Q_i$ is the charge of particle $i$.\n\\item $\\frac{1}{2}$ is because we technically count each charge/associated energy twice.\n\\end{enumerate}\n\n\n\\paragraph{Energy density: } $w_e = \\frac{1}{2} \\vec{D}\\cdot\\vec{E}$\n\n\\paragraph{Two Paths to Energy $W_e$}\n\\textbf{Pathway I: } \n\\begin{enumerate}\n\\item $\\vec{E} = -\\nabla V$\n\\item $\\vec{D} = \\varepsilon_0 \\varepsilon_r \\vec{E}$\n\\item $\\rho_s = |\\vec{D}|$\n\\item $Q = \\iiint \\rho_v dv$\n\\item $C = \\frac{Q}{V_0}$ $$W_e = \\frac{1}{2} CV_0^2$$\n\\end{enumerate}\n\n\\textbf{Pathway II: } \n$$W_e = \\frac{1}{2} \\iiint_v \\varepsilon_r \\varepsilon_0 |\\vec{E}|^2 dv = \\frac{1}{2} CV_0^2 $$\n\n\n\\section{Laplace and Poisson's Equations}\n\nThese are techniques for going from \\textbf{charge density} to \\textbf{$\\vec{E}$} and are generally used in most practical situations.\n\nPoisson's Equation: \n\n$$\\nabla\\cdot(\\varepsilon_r\\varepsilon_0\\nabla V) = -\\rho_v$$\n\nLaplace's Equation:\n\n$$\\nabla\\cdot(\\varepsilon_r\\varepsilon_0 \\nabla V) = 0$$\n\n\\paragraph{Some 
vocabulary: } \n\\begin{itemize}\n\\item $\\nabla \\cdot \\nabla V = \\frac{-\\rho_v}{\\varepsilon_r\\varepsilon_0}$, where $\\nabla \\cdot \\nabla = \\nabla^2$ is the \\textbf{Laplacian} \n\\item Therefore $\\nabla^2 V = 0$ is the true version of Laplace's equation.\n\\item Under the same notation, Poisson's equation is $\\nabla^2 V = -\\frac{\\rho_v}{\\varepsilon_r \\varepsilon_0}$.\n\\end{itemize}\n\nThis leaves us with a \\textbf{boundary value problem}. If we know the voltage at the bounds (or we can infer it from something else), we can then use the fact that $\\nabla^2 V = 0$ wherever there is no free charge to solve the rest!\n\n\\paragraph{Notes on solving the BVP: }\n\\begin{itemize}\n\\item Even though the dielectric can be polarized in a question, it doesn't count because it's not a \\textbf{FREE} charge.\n\\item Since you rarely know the actual $\\rho_s$ (you would likely know the $V_0$), this is a far more useful problem-solving strategy.\n\\item A reliable \\textbf{technique to solve the boundary value problem} is to \\textit{integrate the equation twice}.\n\\end{itemize}\n\n\\paragraph{Procedure for BVPs: } \n\\begin{itemize}\n\\item Solve \\textbf{Laplace or Poisson's} equation via \\textit{direct integration} or \\textit{separation of variables}.\n\\item Apply \\textit{boundary conditions} to solve for arbitrary constants to get to a \\textit{particular solution}.\n\\item Use the following formulae to transform to any other variable of interest after solving for the voltage distribution:\n$$\\vec{E} = -\\nabla V; \\,\\,\\,\\, \\vec{D} = \\varepsilon_r\\varepsilon_0 \\vec{E}; \\,\\,\\,\\, \\vec{J} = \\sigma \\vec{E}$$\n$$R = \\frac{V_0}{\\iint \\vec{J} \\cdot dS} = \\frac{\\varepsilon_r \\varepsilon_0}{\\sigma C}$$\n\\end{itemize}\n\n\\paragraph{Electric Shielding}\n\nA conducting material can be used to `shield' the inside from external $\\vec{E}$. This is because you can't have a $\\Delta V$ across a conductor -- it just conducts the \ndifference in charge. Since we know that $V_{in} = V_{out}$, we can solve for $$\\vec{\\nabla}V = 0 \\to V(x, y, z) = V_0$$\n\nBy the uniqueness principle, we know that this HAS to be the only solution. That's how \\textbf{Faraday cages} work!\n\n\n\\section{Resistance and Joule's Law}\n\nRecall that\n\\begin{itemize}\n\\item $\\vec{\\nabla} \\cdot (\\varepsilon_r \\varepsilon_0 \\vec{\\nabla} V) = -\\rho_v$\n\\item $D_{n1} - D_{n2} = \\rho_s$\n\\item $E_{t1} = E_{t2}$\n\\end{itemize}\n\n\\subsection{Volume Current Density}\n\n$\\vec{J}$ is the vector field that represents microscopic current flow. $$\\vec{J} = \\sigma \\vec{E}$$\n$$I = \\iint_S \\vec{J} \\cdot dS$$\n$$\\mu_e = \\frac{e \\tau}{m_e}$$\n$$\\sigma = \\frac{N_e e^2 \\tau}{m_e}$$\n\n\\paragraph{Resistance: } plays into the situation as follows.\n\n$$R = \\frac{V_0}{I} = \\frac{|-\\int \\vec{E}\\cdot dl|}{\\iint_s \\vec{J} \\cdot dS} = \\frac{|-\\int \\vec{E}\\cdot dl|}{|\\iint_s \\sigma \\vec{E} \\cdot d\\vec{S}|}$$\n\nAnd if \\textbf{$S$ is uniform over $L$}: \n\n$$R = \\frac{L}{\\sigma S}$$\n\n% Continue from end of March 5/Beginning of March 6\n\n\\subsection{Power}\n\n$$P = \\iiint_V \\vec{E} \\cdot \\vec{J} dV$$\n\nThis gives the power lost to resistive heating.\n\n\n\\chapter{Magnetostatics}\n\n\\section{Biot-Savart Law} \n\n$\\vec H$ is the magnetic field intensity and $\\vec B$ is the magnetic flux density. $\\vec B$ is kind of like $\\vec E$ and $\\vec H$ is like $\\vec D$. \n\n\\textbf{All magnetic fields} are due to moving charges of one sort or another. 
In general, \n\n$$d\\vec B = \\frac{\\mu_0 I d\\vec{l} \\times (\\vec R - \\vec R ')}{4\\pi |\\vec R - \\vec R'|^3}$$\n\n\\begin{itemize}\n\\item Point charge current: $$\\vec{B} = \\frac{\\mu_0}{4\\pi} \\frac{Q\\vec{u} \\times (\\vec R - \\vec R')}{|\\vec R - \\vec R'|^3}$$\n\\item Line current: $$B = \\frac{\\mu_0 I}{4 \\pi} \\int_C \\frac{d\\vec{l}' \\times (\\vec R - \\vec R')}{|\\vec R - \\vec R'|^3}$$\n\\item Surface current: $$B = \\frac{\\mu_0}{4\\pi} \\iint_S \\frac{\\vec J_s \\times (\\vec R - \\vec R')}{|\\vec R - \\vec R'|^3} dS'$$\n\\item Volume current: $$B = \\frac{\\mu_0}{4\\pi} \\iiint_{vol} \\frac{\\vec{J} \\times (\\vec R - \\vec R')}{|\\vec R - \\vec R'|^3} dV'$$\n\\end{itemize}\n\n$\\mu$ is the magnetic permeability, the analogue of the electric permittivity: $$\\vec B =  \\mu \\vec H = \\mu_0 \\mu_r \\vec H$$\n\n\\section{Boundary Conditions}\n\n$$B_{n1} = B_{n2}; H_{t1} - H_{t2} = J_s$$\n\n\\section{Energy} \n\n$$w_m = \\frac{1}{2} \\vec B \\cdot \\vec H$$\n\n$$W_m = \\frac{1}{2} \\iiint_{vol} \\vec B \\cdot \\vec H dV$$\n\n\\section{Lorentz Force}\n\nThis relates the movement of a charge in a magnetic field to the force it experiences.\n\n$$\\vec F_m = q\\vec u \\times \\vec B = I\\vec l \\times \\vec B$$\n\nYou use the right-hand-rule for this relationship: Your fingers represent the direction of motion $\\vec u$, your palm is the field $\\vec B$, and then your thumb is $\\vec F_m$. \n\n$$\\vec F_{tot} = \\vec F_e + \\vec F_m = q\\vec E + q \\vec u \\times \\vec B$$\n\n\n\\section{Ampere's Law} \n\n$$\\vec \\nabla \\times \\vec H = \\vec J$$\n\n$$\\int_C \\vec H \\cdot d\\vec l = \\iint_S \\vec J \\cdot d\\vec S = I_{enc}$$\n\nYou relate the current with $\\vec H$ via the right hand rule. Your fingers wrap around the wire in the direction of $\\vec{H}$ while your thumb points in the direction of the net current.\n\nThis is basically the magnetic version of Gauss's law. Make sure that your $\\vec H$ is going to be constant and either be perpendicular or parallel to the closed path you draw, and then use the fact that:   $$\\int_C \\vec H \\cdot d\\vec l = |\\vec{H}| \\cdot \\text{circumference}$$ \n\n\n\n\\section{Magnetization}\n\n\\textbf{Magnetic Dipole: } A closed loop of current. $\\vec m = I S \\hat n$, where $I$ is the current, $S$ is the surface area of the loop, and $\\hat n$ is given by the right hand rule. \n\n\\textbf{Torque: } Is given by \n\n$$\\vec T = \\vec m \\times \\vec B$$\n\nIn other words, $\\vec m$ will rotate until it conforms to $\\vec B$ so that torque is minimized. This leads to an internal magnetic field $$\\vec B_{material} > \\vec B_{external}$$\n\n\\textbf{Bound current density: } $\\vec J_{ms} = \\vec M \\times \\hat n$\n\n\\textbf{Magnetic field intensity: } $\\vec H = \\frac{\\vec B}{\\mu_0} - \\vec{M}$\n\n\\textbf{Magnetic susceptibility: } $\\vec{M} = \\chi_m \\vec H$\n\n\\textbf{Relative permeability: } $\\mu_r = \\chi_m + 1$\n\nMuch like with electrostatics, $\\vec H$ only arises from \\textbf{free currents}. $\\vec B$ is the one that deals with both free and bound currents. \n\n\\section{Material Properties: Atomic Magnetic Dipoles}\n\n\\begin{enumerate}\n\\item \\textbf{Non-zero moment} materials will have $\\vec M$ align with $\\vec B$. If strong alignment is achieved, then the field is enhanced (\\textit{ferromagnetism}). If weak alignment, then there will be moderate enhancement (\\textit{paramagnetism}). \n\\item \\textbf{Net-zero moment} materials will have a small \\textit{reduction} in $\\vec B$, leading to \\textit{diamagnetism}. 
\n\\end{enumerate}\n\n\\begin{itemize}\n\\item \\textbf{Diamagnetic: } $\\mu_r \\lesssim 1$ (values like 0.999). $B_{internal} < B_{applied}$. \n\\item \\textbf{Paramagnetic: } $\\mu_r \\gtrsim 1$ (values like 1.01). $B_{internal} > B_{applied}$. \n\\item \\textbf{Ferromagnetic: } $\\mu_r \\gg 1$. $B_{internal} \\gg B_{applied}$. \n\\item \\textbf{Ferrimagnetic: } $\\mu_r > 1$. $B_{internal} > B_{applied}$.\n\\end{itemize}\n\n\\section{Modeling Transformer Circuits}\n\n\\begin{tabular}{|l|l|}\n\\hline\n\\textbf{Electric} & \\textbf{Magnetic} \\\\\n\\hline\nVoltage & mmf $V_m = NI_0$ \\\\ \nCurrent & Flux $\\Phi$ \\\\ \nResistance & Reluctance $ R = \\frac{L}{\\mu S}$ \\\\ \nConductivity & Permeability $\\mu = \\mu_r \\mu_0$ \\\\\n\\hline\n\\end{tabular}\n\n\n\\subsection{Mesh Analysis}\n\n\\section{Inductance}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "8e8710666baf7907c772244eb32a56eb231f7a72", "size": 19593, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/ECE259.tex", "max_stars_repo_name": "AdamCarnaffan/EngSci_Abridged", "max_stars_repo_head_hexsha": "de733823c493d35689cfcd846f87a47e0b05331c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-10-25T06:03:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-15T02:14:13.000Z", "max_issues_repo_path": "tex/ECE259.tex", "max_issues_repo_name": "AdamCarnaffan/EngSci_Abridged", "max_issues_repo_head_hexsha": "de733823c493d35689cfcd846f87a47e0b05331c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/ECE259.tex", "max_forks_repo_name": "AdamCarnaffan/EngSci_Abridged", "max_forks_repo_head_hexsha": "de733823c493d35689cfcd846f87a47e0b05331c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-05T14:21:34.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-06T19:01:31.000Z", "avg_line_length": 43.3473451327, "max_line_length": 388, "alphanum_fraction": 0.6998417802, "num_tokens": 6372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059414036511, "lm_q2_score": 0.7490872131147275, "lm_q1q2_score": 0.5941055193507934}}
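As a worked instance of the Gauss's-law recipe from the electrostatics chapter above (a standard textbook example added for illustration, not taken from the original notes; the radial unit vector is written $\hat{a}_r$ here by assumption): for an infinite line charge $\rho_l$ along the $z$-axis, choose a coaxial cylinder of radius $r$ and length $L$ as the Gaussian surface. $\vec{D}$ is radial, constant in magnitude on the side wall, and parallel to the end caps (zero flux there), so
$$\iint_S \vec{D} \cdot d\vec{S} = D_r \cdot 2\pi r L = Q_{enc} = \rho_l L \quad\Rightarrow\quad \vec{D} = \frac{\rho_l}{2\pi r}\hat{a}_r, \qquad \vec{E} = \frac{\rho_l}{2\pi \varepsilon_r \varepsilon_0 r}\hat{a}_r$$
which satisfies all three Gaussian-surface properties listed in the notes.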
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 3, 2014}\n\\maketitle\n\\section*{5.3 properties of continuous functions}\nlet $f:S\\to \\mathbb{R}^m$ be a function. the following are equivalent\n\\begin{enumerate}\n\\item\n  f is continuous on S\n\\item\n  if $\\{a_k\\}$ is a sequence of elements in S such that $\\lim a_k=v\\in S$ then $\\lim f(a_k)=f(v)$\n\\item\n  if $U\\subseteq \\mathbb{R}^m$ is open them $f^{-1}(U)$ is an open set in $S$\n\\end{enumerate}\n\na set $V$ is open in $S$ if $V=U\\cap S$ where $U$ is open in $\\mathbb{R}^n$ \n\n\\subsection*{examples}\nlet $S=[0,1]$\nthen $[0,\\frac{1}{2})$ is open in $S$ because $[0,\\frac{1}{2})=(-\\frac{1}{2},\\frac{1}{2})\\cap S$.\n\n$(0,\\frac{1}{2})=(0,\\frac{1}{2})\\cap S$.\n\n\\subsubsection*{proof of above thrm}\nby definition of limit, given any small $\\delta>0$ there exists $N$ such that $|a_k-V|<\\delta$ if $k\\ge N$. We want to study $|f(a_k)-f(v)|$. Because $f$ is continuous $|f(a_k-f(v)|<\\varepsilon$ if $|a_k-v|<r$ there exists $r(\\varepsilon,v)>0$. pick $r=\\delta$. then $|a_k-v|<r$ for all $k\\ge N$ implies $|f(a_k)-f(v)|< \\varepsilon$ and so $\\lim f(a_k)=f(v)$\n\nfor $2\\to1$ assume that $f$ is not continuous. we will prove that $2$ fails.\n\nif $f$ is not continuous, then there exists some pint $p\\in S$ and $\\varepsilon>0$ such that $\\forall r>0$ then $|x-p|<r$ and $|f(x)-f(p)|\\ge \\varepsilon$. consider $a_k\\in S$ such that $\\lim a_k=p$. by hypothesis $\\lim f(a_k)=f(p)$. but then we have $|a_k-p|<r$ and $|f(a_k)-f(p)|<\\varepsilon$\n\nfor $1\\to3$. we want to show that given and open set $U\\subseteq \\mathbb{R}^m, f^{-1}(U)$ is open  in $S$. pick $x\\in f^{-1}(U)$. we need to find a ball $B(x,r)\\subseteq f^{-1}(U)$.\n\n$f(x)\\in U$. $U$ is open, so there exists some $\\epsilon>0$ such that $B(f(x),\\varepsilon)\\subseteq U$. hence $f^{-1}(B(f(x),\\varepsilon))\\subseteq f^{-1}(U)$.\n\ngiven $\\varepsilon>0$ there exists $r>0$ such that if $||x-y||<r\\to ||f(x)-f(y)||<\\varepsilon$. continuity gives us a radius $r$ such that if $y\\in B(x,r)$ then $||f(x)-f(y)||<\\varepsilon$ and so $f(y)\\in B(f(x),\\varepsilon)$. So we have found $B(x,r)\\subseteq f^{-1}(B(f(x),\\varepsilon)\\subseteq f^{-1}(U)$\n\n\\subsubsection*{moral of the story}\ncontinuity can be seen in terms of finding balls.\n\n\\subsubsection*{example}\n$f(x)=x^2$\n\n$f^{-1}(c,d)=(\\sqrt{c},\\sqrt{d})\\cup (-\\sqrt{c},-\\sqrt{d})$\n\n$f^{-1}(-1,1)=(-1,1)$\n\n$f^{-1}(-2,-1)=\\emptyset$\n\n$f$ is continuous iff $f^{-1(U)}$ is open if $U$ is open, but if $U$ is open then $f(U)$ is not necessarily open.\n\n$f(-1,1)=[0,1)$ which is not open in $\\mathbb{R}$\n\n\\subsection*{prop5.3.5}\nlet $f:\\mathbb{R}^n\\to\\mathbb{R}^m, g:\\mathbb{R}^m\\to\\mathbb{R}^p$ assume $f,g$ are continuous. then $g\\circ f$ is continouos.\n\nassume that $U\\in \\mathbb{R}^p$ is open. we need to show that $(g\\circ f)^{-1}(U)$ is open. $(g\\circ f)^{-1}(U)=f^{-1}(g^{-1}(U))$. because g is continuous, $g^{-1}(U)$ is open. 
beccause $f$ is continuous, $f^{-1}(g^{-1}(U)$ is open.\n\n\\subsection*{prop 5.3.2}\n\nif in addition, $f$ and $g$ are continous at $a$ then so are $f\\pm g, f\\cdot g, \\alpha f,\\frac{f}{g}$.\n\ncorollary, we know that $f(x)=x$ is continous $f:\\mathbb{R}^n\\t\\mathbb{R}^n$ then $f(x)=x^{m}$ is continuous a $m\\in \\mathbb{N}$. and any  polynomial is continous. and any rational function  is continous a $f(x)/g(x)$ such that $g(x)\\ne 0$.\n\\end{document}\n\n\n\n", "meta": {"hexsha": "b1a45e12502ccd9f78df79c7a2b0536d5028c060", "size": 3519, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-11-03.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-11-03.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-11-03.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1153846154, "max_line_length": 358, "alphanum_fraction": 0.6314293833, "num_tokens": 1391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646140788307, "lm_q2_score": 0.8791467785920306, "lm_q1q2_score": 0.5940962835538908}}
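a concrete instance of the ball-chasing in the $1\to3$ direction above (an illustration added here, not part of the original notes): take $f(x)=x^2$, $U=(0,4)$, and $x=1\in f^{-1}(U)$. since $f(1)=1$ we may take $\varepsilon=1$, so $B(1,1)=(0,2)\subseteq U$. continuity then supplies $r=\sqrt{2}-1$: if $|x-1|<r$ then $x\in(2-\sqrt{2},\sqrt{2})$, so $x^2\in((2-\sqrt{2})^2,2)\subseteq(0,2)$, i.e. $B(1,\sqrt{2}-1)\subseteq f^{-1}(B(1,1))\subseteq f^{-1}(U)$.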
{"text": "\r\n\r\n\\subsection{Tank Models}\r\n\r\nAccurately modelling tank pressure changes is essential for accurate\r\nmaneuver modelling and reconstruction.  The following sections\r\ndiscuss three types of tank models:  pressurant tank, regulated fuel\r\ntank, and blowdown fuel tank.  For each tank there are three models:\r\nisothermal, heat transfer, and adiabatic.\r\n\r\nThe models used in GMAT are based on work by Estey\\cite{Estey:83} ,\r\nHearn\\cite{Hearn:01} and Moran\\cite{Moran}.  For each tank, we\r\nselect a set of state variables that when defined, allow us to\r\ndetermine all remaining properties of the tank.  For the state\r\nvariables, we provide differential equations that describe how the\r\nstate variables change with respect to time.  The number of state\r\nvariables varies between different tanks, and with the model type\r\nsuch as isothermal and heat transfer.\r\n\r\nFor each of the three tanks, we develop a heat transfer model, an\r\nadiabatic model, and an isothermal model.  The heat transfer model\r\nis derived using the laws of conservation of energy and the\r\nconservation of mass. An adiabatic model is provided by setting the\r\nheat transfer rates to zero in the heat transfer model. The\r\nisothermal model for each tank is developed separately. Each ot\r\nthese models is useful for certain classes of maneuvers.  Isothermal\r\nmodels are useful for maneuvers with low mass flow rates, adiabatic\r\nmodels are useful for short maneuvers with high mass flow rates.\r\nHeat transfer models are useful for long maneuvers with large mass\r\nflow rates.\r\n\r\nWhen developing heat transfer models, we'll assume that specific\r\ninternal energy is given by\r\n%\r\n\\begin{equation}\r\n    u = c T\r\n\\end{equation}\r\n%\r\nspecific enthalpy for a liquid is given by\r\n%\r\n\\begin{equation}\r\n    h_\\ell = c_\\ell T_\\ell\r\n\\end{equation}\r\n%\r\nand specific enthalpy  for a gas is given by\r\n%\r\n\\begin{equation}\r\n    h_g = T_g( c_g + R_g)\r\n\\end{equation}\r\n%\r\nThe notation used in tank model development is shown below.  
After\r\npresenting notation, we present the dynamics model for a pressurant\r\ntank.\r\n\r\n \\noindent\\textit{Nomenclature}\\vspace{-.1 in}\r\n\\begin{tabbing}\r\n    -----------------------     \\= Reynolds number based on length $s$ \\kill\r\n    $A_g,A_\\ell,A_w$    \\> = Heat transfer area     \\\\\r\n    $c_v, c_g$        \\>  = Specific heat at constant volume     \\\\\r\n    $D$        \\>  = Tank diameter     \\\\\r\n    $d$        \\>  = Liquid surface diameter    \\\\\r\n    $Gr$        \\>  = Grashof number  \\\\\r\n    $h_\\ell, h_v$        \\>  = Enthalpy     \\\\\r\n    $m_g,m_\\ell,m_w,m_v$    \\> = Mass   \\\\\r\n    $P_g,P_v,P_t$    \\> = Pressure   \\\\\r\n    $R_v,R_g$    \\> = Gas constant   \\\\\r\n    $T_g,T_\\ell,T_w,T_v,T_a$    \\> = Temperature   \\\\\r\n    $u_g,u_\\ell,u_w,u_v$    \\> = Specific internal energy   \\\\\r\n    $V_g,V_\\ell,V_t$    \\> = Volume   \\\\\r\n    $\\dot{W}$    \\> = Work rate   \\\\\r\n    $\\dot{Q}_g,\\dot{Q}_v,\\dot{Q}_l,\\dot{Q}_w$    \\> = Heat transfer rate     \\\\\r\n    $\\nu_l,\\nu_g,\\nu_v$    \\> = Specific volume  \\\\\r\n\\end{tabbing}\r\n%\r\n\\noindent\\textit{Subscripts}\\vspace{-.1 in}\r\n\\begin{tabbing}\r\n    -----------------------     \\= Reynolds number based on length $s$ \\kill\r\n    $a$    \\> = Ambient    \\\\\r\n    $g$    \\> = Pressurant gas    \\\\\r\n    $\\ell$    \\> = Propellant liquid  \\\\\r\n    $t$    \\> = Total    \\\\\r\n    $v$    \\> = Propellant vapor \\\\\r\n    $w$    \\> = Tank wall      \\\\\r\n    $e$    \\> = Exit-flow      \\\\\r\n    $i$    \\> = In-flow\r\n\\end{tabbing}\r\n\r\n\r\n\\subsubsection{Pressurant Tank}\r\n\r\nThe pressurant tank model is the simplest of the tank models\r\nprimarily due to the fact that there is only one substance, the\r\npressurant gas, contained in the tank.  In this section, we develop\r\na state space model for pressurant tank dynamics.  We choose the\r\nstate variables to be pressurant gas mass and temperature, $m_g$ and\r\n$T_g$ respectively, and tank wall temperature $T_w$.\r\n\r\n\r\nIn Fig.\\ref{fig:PressurantTank} we see an illustration of a\r\npressurant tank.  We divide the tank into two control volumes: the\r\ngas region and the tank wall region.  The only mass flow in the\r\nsystem occurs where pressurant gas exits the tank.  
Heat transfer\r\noccurs between the gas and the wall, and the wall and the ambient\r\nsurroundings.\r\n%\r\n\\begin{figure}[ht]\r\n\\centerline{\r\n    \\begin{picture}(110,440)\r\n    \\special{psfile= ./Images/PressurantTank.eps hoffset= -120 voffset= 115\r\n    hscale=55 vscale=55}\r\n    \\makebox(90,525){$\\dot{m}_e$,$h_g$}\r\n    \\makebox(-105,790){1.Gas}\r\n    \\makebox(-105,765){$m_g$, $P_g$, $T_g$}\r\n    \\makebox(-165,680){$\\dot{Q}_g$}\r\n    \\makebox(-295,760){$\\dot{Q}_w$}\r\n    \\makebox(-260,865){2.Tank Wall}\r\n    \\makebox(-270,840){$m_w$,  $T_w$}\r\n    \\end{picture}}\\vskip -3.65 in  \\caption{ Pressurant Tank Diagram} \\label{fig:PressurantTank}\r\n\\end{figure}\r\n%\r\n\r\nKnowing the volume of the tank and the state variables  $m_g$,\r\n$T_g$, and $T_w$, we calculate pressure from one of the following\r\ntwo equations of state:\r\n%\r\n\\begin{equation}\r\n   P_g = \\frac{m_g R_g T_g}{V_g}\r\n\\end{equation}\r\n%\r\nor from the Beattie-Bridgeman Eq.\r\n%\r\n\\begin{equation}\r\n   P_g = \\frac{R_g T_g}{V_g} + \\frac{a_g}{V_g^2} + \\frac{b_g}{V_g^3}\r\n\\end{equation}\r\n%\r\n\r\nThe state variables $m_g$, $T_g$, and $T_w$ are described by\r\nordinary differential equations found by applying the first law of\r\nthermodynamics and the conservation of mass.  The 1st Law applied to\r\nthe gas control volume yields\r\n%\r\n\\begin{equation}\r\n   \\frac{d}{dt}\\left(m_g u_g \\right) =  \\dot{Q}_g - \\dot{m}_e h_g\r\n  \\label{Eq:PressurantGas1stLaw}\r\n\\end{equation}\r\n%\r\nThe 1st Law applied to the wall control volume  yields\r\n%\r\n\\begin{equation}\r\n     \\frac{d}{dt}\\left( m_w u_w \\right) = \\dot{Q}_w -  \\dot{Q}_g \\label{Eq:PressurantWall1stLaw}\r\n\\end{equation}\r\n%\r\nand finally from conservation of mass we obtain\r\n%\r\n\\begin{equation}\r\n    \\dot{m}_g = -\\dot{m}_e \\label{Eq:PressurantMassCon}\r\n\\end{equation}\r\n%\r\nFor these equations to be useful for numerical integration, we need\r\nto expand the derivatives, and if necessary, decouple the equations\r\n(as we'll see, for the pressurant tank, the equations are not\r\ncoupled).\r\n\r\nExpanding the terms in Eq.~(\\ref{Eq:PressurantGas1stLaw}) we have\r\n%\r\n\\begin{equation}\r\n    \\dot{m}_g c_g T_g + m_g c_g \\dot{T}_g = \\dot{Q}_g - \\dot{m}_e T_g \\left( c_g + R_g\\right)\r\n\\end{equation}\r\n%\r\nSimilarly, expanding Eq.~(\\ref{Eq:PressurantWall1stLaw}) we obtain\r\n%\r\n\\begin{equation}\r\n    m_w c_w \\dot{T}_w = \\dot{Q}_w -  \\dot{Q}_g\r\n\\end{equation}\r\n%\r\nSolving the system of equations yields the following differential\r\nequations of state for the pressurant tank heat transfer model.\r\n%\r\n\\begin{eqnarray}\r\n    \\dot{m}_g &=& -\\dot{m}_e\\\\\r\n    %\r\n    \\dot{T}_g &=& \\frac{1}{m_g c_g} \\left( \\dot{Q}_g - T_g R_g  \\dot{m}_e  \\right)\\\\\r\n    %\r\n    \\dot{T}_w &=& \\frac{1}{m_w c_w} \\left( \\dot{Q}_w - \\dot{Q}_g  \\right)\r\n\\end{eqnarray}\r\n%\r\nThe adiabatic model is obtained by setting the terms $\\dot{Q}_g$ and\r\n$\\dot{Q}_w$ to zero in the above equations.  (Note for the adiabatic\r\nmodel there are only two state variables, $m_g$ and $T_g$, as the\r\nwall temperature $T_w$ is removed from the system of equations.)\r\nSimilarly, the isothermal model is obtained by setting $\\dot{T}_g$\r\nand $\\dot{T}_w$ to zero.  
So, for the isothermal model there is only\r\none state variable $m_g$.\r\n\r\nIn summary, for the pressurant tank, all models calculate the tank\r\npressure using\r\n%\r\n\\[\r\n   P_g = \\frac{m_g R_g T_g}{V_g}\r\n\\]\r\n%\r\n then the specific equations for the heat transfer, adiabatic, and\r\nisothermal models, are as follows\r\n\r\n\\noindent\\textit{Pressurant Tank:  Heat Transfer}\r\n\r\n\\noindent State Variables:  $m_g$, $T_g$, $T_w$\r\n%\r\n\\begin{eqnarray}\r\n    \\dot{m}_g &=& -\\dot{m}_e \\nonumber\\\\\r\n    %\r\n    \\dot{T}_g &=& \\frac{1}{m_g c_g} \\left( \\dot{Q}_g - T_g R_g  \\dot{m}_e  \\right)\\nonumber\\\\\r\n    %\r\n    \\dot{T}_w &=& \\frac{1}{m_w c_w} \\left( \\dot{Q}_w - \\dot{Q}_w  \\right)\\nonumber\r\n\\end{eqnarray}\r\n%\r\n\\textit{Pressurant Tank:  Adiabtic}\r\n\r\n\\noindent State Variables:  $m_g$, $T_g$\r\n%\r\n\\begin{eqnarray}\r\n    \\dot{m}_g &=& -\\dot{m}_e \\nonumber\\\\\r\n    %\r\n    \\dot{T}_g &=& \\dot{m}_e\\frac{T_g R_g   }{m_g c_g} \\nonumber\r\n\\end{eqnarray}\r\n%\r\n\\noindent\\textit{Pressurant Tank:  Isothermal}\r\n\r\n\\noindent State Variables:  $m_g$\r\n%\r\n\\begin{equation}\r\n    \\dot{m}_g = -\\dot{m}_e\r\n\\end{equation}\r\n\r\nNow let's look at a model for a fuel tank operating in blow down\r\nmode.\r\n\r\n%\\subsubsection{Blow-Down Tank w/o Vapor Pressure}\r\n%\r\n%\r\n%\\begin{figure}[ht]\r\n%\\centerline{\r\n%    \\begin{picture}(100,510)\r\n%    \\special{psfile= ./Images/BlowDownTank.eps hoffset= -120 voffset= 170\r\n%    hscale=55 vscale=55}\r\n%    \\makebox(80,525){$\\dot{m}_e$,$h_l$}\r\n%    \\makebox(-80,970){1.Gas}\r\n%    \\makebox(-90,940){$m_g, P_g, T_g$}\r\n%    \\makebox(-190,905){$\\dot{Q}_v$}\r\n%    %\\makebox(-140,905){$\\dot{m}_v, h_v$}\r\n%    \\makebox(10,905){$\\dot{Q}_g$}\r\n%    \\makebox(-135,820){2.Liquid}\r\n%    \\makebox(-145,790){$m_\\ell, T_\\ell$}\r\n%    \\makebox(-235,740){$\\dot{Q}_\\ell$}\r\n%    \\makebox(-335,990){3.Tank Wall}\r\n%    \\makebox(-335,970){$m_w, T_w$}\r\n%    \\makebox(-277,1020){$\\dot{Q}_w$}\r\n%    \\end{picture}}\\vskip -3.65 in  \\caption{ Bi-Prop Thruster Diagram} \\label{fig:BlowDownTankWOVap}\r\n%\\end{figure}\r\n%\r\n%\\textit{Assumptions}\r\n%\\begin{itemize}\r\n%    \\item Vapor pressure is zero.\r\n%    \\item Liquid density is constant.\r\n%    \\item Gas mass is constant.\r\n%\\end{itemize}\r\n%%\r\n%Assume we are given $m_g$, the tank diameter $D$, and hence know the\r\n%total tank volume $V_t$, and we know the physical constants\r\n%associated with the liquid and gas ($R_g$,$c_g$,$\\nu_g$,$c_\\ell$,\r\n%$\\nu_\\ell$). We choose the state variables $m_\\ell$, $T_\\ell$,\r\n%$T_g$, and $T_w$, all other tank properties can be calculated from\r\n%these state variables using the following equations:\r\n%%\r\n%\\begin{eqnarray}\r\n%    V_\\ell & = &  \\nu_\\ell m_\\ell\\\\\r\n%   %\r\n%    V_g & = &  V_t - V_\\ell\\\\\r\n%   %\r\n%   P_g & = &  \\frac{m_g R_g T_g}{V_g}\\\\\r\n%\\end{eqnarray}\r\n%%\r\n%\r\n%We require differential equations that describe the time rate of\r\n%change of the state variables $m_\\ell$, $T_\\ell$, $T_g$, and $T_w$.\r\n%The differential equations are found by applying the 1st law of\r\n%thermodynamics and conservation of mass to the three control volumes\r\n%illustrated in Fig. \\ref{fig:BlowDownTankWOVap}. 
The 1st Law applied\r\n%to the gas control volume yields\r\n%%\r\n%\\begin{equation}\r\n%   \\frac{d}{dt}\\left(m_g u_g \\right) = \\dot{Q}_v + \\dot{Q}_g - P_g \\dot{V}_g\r\n%  \\label{Eq:Blowdown1stLawWOVap}\r\n%\\end{equation}\r\n%%\r\n%The 1st Law applied to the liquid control volume yields\r\n%%\r\n%\\begin{equation}\r\n%   \\frac{d}{dt}\\left( m_\\ell u_\\ell \\right) = \\dot{Q}_\\ell - \\dot{Q}_v +\r\n%   P_g \\dot{V}_g   - \\dot{m}_e h_\\ell \\label{Eq:BlowdownLiquid1stLawWOVap}\r\n%\\end{equation}\r\n%%\r\n%The 1st Law applied to the wall control volume  yields\r\n%%\r\n%\\begin{equation}\r\n%     \\frac{d}{dt}\\left( m_w u_w \\right) = \\dot{Q}_w - \\dot{Q}_\\ell - \\dot{Q}_g\r\n%\\end{equation}\r\n%%\r\n%and finally from conservation of mass we obtain\r\n%%\r\n%\\begin{equation}\r\n%    \\dot{m}_\\ell = -\\dot{m}_e \\label{Eq:BlowdownMassConWOVap}\r\n%\\end{equation}\r\n%\r\n%\r\n%The equations above give us four ordinary differential equations\r\n%that allow us to solve for the tank states as a function of time.\r\n%For numerical integration, we need to decouple these equations.\r\n%\r\n%Let's continue with Eq.~(\\ref{Eq:Blowdown1stLawWOVap}).   Taking the\r\n%derivative assuming $\\dot{m}_g = 0$ and noting that $\\dot{V}_g = -\r\n%\\nu_\\ell \\dot{m}_\\ell $ yields\r\n%%\r\n%\\begin{equation}\r\n%     m_g c_g \\dot{T}_g = \\dot{Q}_v + \\dot{Q}_g + P_g\\nu_\\ell \\dot{m}_\\ell\r\n%\\end{equation}\r\n%%\r\n%Gathering all state terms on the left hand side yields\r\n%%\r\n%\\begin{equation}\r\n%       -P_g\\nu_\\ell \\dot{m}_\\ell  + m_g c_g \\dot{T}_g = \\dot{Q}_v + \\dot{Q}_g \\label{Eq:CV1}\r\n%\\end{equation}\r\n%\r\n%\r\n%Continuing with Eq.~(\\ref{Eq:BlowdownLiquid1stLawWOVap}), we take\r\n%the derivative and group terms to obtain\r\n%%\r\n%\\begin{equation}\r\n%   \\left( c_\\ell T_\\ell  + P_g \\nu_\\ell\\right)\\dot{m}_\\ell +\r\n%    m_\\ell c_\\ell \\dot{T}_\\ell  =  \\dot{Q}_\\ell - \\dot{Q}_v -  c_\\ell T_\\ell \\dot{m}_e\\label{Eq:CV2}\r\n%\\end{equation}\r\n%%\r\n%Similarly for the wall region, we arrive at\r\n%%\r\n%\\begin{equation}\r\n%     m_w c_w \\dot{T}_w = \\dot{Q}_w - \\dot{Q}_\\ell - \\dot{Q}_g \\label{Eq:CV3}\r\n%\\end{equation}\r\n%\r\n%Equations \\ref{Eq:CV1} -\\ref{Eq:CV3} and\r\n%Eq.\\ref{Eq:BlowdownMassConWOVap} can be written in matrix form as\r\n%follows.\r\n%%\r\n%\\begin{equation}\r\n%   \\left(\\begin{array}{ccccccc}\r\n%   A_{11} & 0 & A_{13} & 0 \\\\\r\n%    A_{21} & A_{22} & 0  & 0 \\\\\r\n%    0 & 0 & 0  & A_{34} \\\\\r\n%    A_{41} &  & 0  & 0 \\\\\r\n%   \\end{array}\\right)\r\n%   %\r\n%   \\left(\\begin{array}{c}\r\n%    \\dot{m}_\\ell \\\\\r\n%    \\dot{T}_\\ell  \\\\\r\n%    \\dot{T}_g \\\\\r\n%   \\dot{T}_w  \\\\\r\n%   \\end{array}\\right) =\r\n%   %\r\n%   \\left(\\begin{array}{c}\r\n%    b_1\\\\\r\n%    b_2  \\\\\r\n%    b_3 \\\\\r\n%    b_4  \\\\\r\n%   \\end{array}\\right)\r\n%\\end{equation}\r\n%%\r\n%where\r\n%%\r\n%\\begin{eqnarray}\r\n%   A_{11}& = &  - P_g \\nu_\\ell \\\\\r\n%   %\r\n%   A_{13}& = &  m_g c_g\\\\\r\n%   %\r\n%   A_{21} & = & c_\\ell T_\\ell + P_g \\nu_\\ell \\\\\r\n%   %\r\n%   A_{22} & = & m_\\ell c_\\ell\\\\\r\n%   %\r\n%   A_{34} & = & m_w c_w\\\\\r\n%   %\r\n%   A_{41} & = & 1\\\\\r\n%   %\r\n%   b_1 & = & \\dot{Q}_v + \\dot{Q}_g \\\\\r\n%   %\r\n%   b_2 & = & \\dot{Q}_\\ell - \\dot{Q}_v -  c_\\ell T_\\ell\\dot{m}_e\\\\\r\n%   %\r\n%   b_3 & = & \\dot{Q}_w -\\dot{Q}_\\ell - \\dot{Q}_g\\\\\r\n%   %\r\n%   b_4 & = & -\\dot{m}_e\\\\\r\n%\\end{eqnarray}\r\n%%\r\n%\r\n%Solving the system of equations 
yields\r\n%%\r\n%\\begin{eqnarray}\r\n%    \\dot{m}_\\ell &=& -\\dot{m}_e\\\\\r\n%    %\r\n%    \\dot{T}_\\ell &=& \\frac{1}{m_\\ell c_\\ell}\\left( \\dot{Q}_\\ell - \\dot{Q}_v  + P_g \\nu_\\ell \\dot{m}_e\\right)\\\\\r\n%    %\r\n%    \\dot{T}_g &=&  \\frac{1}{m_g c_g}\\left( \\dot{Q}_v + \\dot{Q}_g - P_g \\nu_\\ell \\dot{m}_e\\right)\\\\\r\n%    %\r\n%    \\dot{T}_w &=&  \\frac{1}{m_w c_w}\\left(\\dot{Q}_w - \\dot{Q}_\\ell - \\dot{Q}_g \\right)\\\\\r\n%\\end{eqnarray}\r\n\r\n\\subsubsection{Blowdown Tank}\r\n\r\nThe blowdown tank model is significantly more complex than the\r\npressurant tank model due to the presence of liquid fuel and fuel\r\nvapor contained in the tank ullage.   In this section, we develop a\r\nstate space model for a blow down tank.  We choose the state\r\nvariables to be the liquid mass and temperature, $m_\\ell$ and\r\n$T_\\ell$, the gas temperature $T_g$, and tank wall temperature\r\n$T_w$.\r\n\r\n\r\nIn Fig.\\ref{fig:BlowDownTank} we see an illustration of a blow down\r\ntank. We divide the tank into three control volumes: the gas region,\r\nthe liquid region, and the tank wall region. Mass flow occurs where\r\nliquid propellant exits the tank and at the boundary between the\r\nliquid and gas in the form of evaporation. Heat transfer occurs\r\nbetween all three control volumes as well as with the surroundings.\r\nIn summary, the physical processes modelled for a blow down tank are\r\n%\r\n\\begin{compactenum}\r\n    \\item Vapor pressure is a function of liquid temperature.\r\n    \\item Liquid density is a function of liquid temperature.\r\n    \\item Heat transfer between the liquid and gas.\r\n    \\item Heat transfer between the tank wall and gas.\r\n    \\item Heat transfer between the tank wall and liquid.\r\n    \\item Heat transfer between the surroundings and tank\r\n    wall.\r\n\\end{compactenum}\r\n%\r\nThe assumptions made in the tank model are\r\n%\r\n\\begin{compactenum}\r\n    \\item Pressurant does not dissolve in liquid ($m_g = C$).\r\n    \\item Vapor and gas temperatures are equal.\r\n    \\item Vapor and gas volumes are equal.\r\n\\end{compactenum}\r\n%\r\n\\begin{figure}[ht]\r\n\\centerline{\r\n    \\begin{picture}(100,510)\r\n    \\special{psfile= ./Images/BlowDownTankWVap.eps hoffset= -120 voffset= 170\r\n    hscale=55 vscale=55}\r\n    \\makebox(80,525){$\\dot{m}_e$,$h_l$}\r\n    \\makebox(-80,970){1.Gas/Vapor}\r\n    \\makebox(-90,940){$m_g, m_v, P_g, P_v, T_g$}\r\n    \\makebox(-190,905){$\\dot{Q}_v$}\r\n    \\makebox(-140,905){$\\dot{m}_v, h_{v}$}\r\n    \\makebox(10,905){$\\dot{Q}_g$}\r\n    \\makebox(-135,820){2.Liquid}\r\n    \\makebox(-145,790){$m_\\ell, T_\\ell$}\r\n    \\makebox(-235,740){$\\dot{Q}_\\ell$}\r\n    \\makebox(-335,990){3.Tank Wall}\r\n    \\makebox(-335,970){$m_w, T_w$}\r\n    \\makebox(-265,1030){$\\dot{Q}_w$}\r\n    \\end{picture}}\\vskip -3.65 in  \\caption{ Blow Down Tank Diagram} \\label{fig:BlowDownTank}\r\n\\end{figure}\r\n\r\nAssume we are given $m_g$, the tank diameter $D$, and hence know the\r\ntotal tank volume $V_t$, and we know the physical constants\r\nassociated with the liquid and gas ($R_g$, $c_g$, $\\nu_g$, $c_\\ell$,\r\n$\\nu_\\ell(T_\\ell)$ and $P_v(T_\\ell))$. 
We choose the state variables
$m_\ell$, $T_\ell$, $T_g$, and $T_w$; all other tank properties can
be calculated from these state variables using the following
equations:
%
\begin{eqnarray}
    V_\ell & = &  \nu_\ell(T_\ell)m_\ell \label{Eq:BlowDownVell}\\
   %
    V_g & = &  V_t - V_\ell\\
   %
   P_g & = &  \frac{m_g R_g T_g}{V_g}\\
   %
   P_v & = &  P_v(T_\ell)\\
   %
   m_v & = & \frac{P_v V_g}{R_v T_g}\\
   %
   P_t & = &  P_v + P_g \label{Eq:BlowDownPt}
\end{eqnarray}
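%
The property equations above map directly to code.  The following
Python sketch is illustrative only (the function and variable names
are ours, not GMAT's, and $\nu_\ell$ and $P_v$ are assumed to be
supplied as functions of the liquid temperature):
%
\begin{verbatim}
def tank_properties(m_ell, T_ell, T_g, V_t, m_g, R_g, R_v, nu_ell, P_v):
    """Derived blow down tank properties from the state variables."""
    V_ell = nu_ell(T_ell) * m_ell     # liquid volume
    V_g   = V_t - V_ell               # ullage (gas) volume
    P_g   = m_g * R_g * T_g / V_g     # pressurant partial pressure
    Pv    = P_v(T_ell)                # vapor pressure
    m_v   = Pv * V_g / (R_v * T_g)    # vapor mass from the gas law
    P_t   = Pv + P_g                  # total ullage pressure
    return V_ell, V_g, P_g, Pv, m_v, P_t
\end{verbatim}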
%
To determine the state equations governing $m_\ell$, $T_\ell$,
$T_g$, and $T_w$ we apply the 1st Law of thermodynamics and the law
of conservation of mass. The 1st Law applied to the gas control
volume is
%
\begin{equation}
   \frac{d}{dt}\left( m_v u_v + m_g u_g \right) = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g +
    \dot{m}_v h_{v} \label{Eq:BlowdownGas1stLaw}
\end{equation}
%
The 1st Law applied to the liquid control volume is
%
\begin{equation}
   \frac{d}{dt}\left( m_\ell u_\ell \right) = \dot{Q}_\ell - \dot{Q}_v +
   P_t \dot{V}_g - \dot{m}_v h_{lg} - \dot{m}_e h_\ell \label{Eq:BlowdownLiquid1stLaw}
\end{equation}
%
The 1st Law applied to the wall control volume yields
%
\begin{equation}
     \frac{d}{dt}\left( m_w u_w \right) = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \label{Eq:BlowdownWall1stLaw}
\end{equation}
%
and finally, conservation of mass gives
%
\begin{equation}
    \dot{m}_\ell = -\dot{m}_e - \dot{m}_v \label{Eq:BlowdownMassCon}
\end{equation}
%
We also know that
%
\begin{equation}
    \dot{m}_v = \frac{P_v \dot{V}_g}{R_v T_g} - \frac{P_v V_g \dot{T}_g}{R_v T_g^2} \label{Eq:BlowDownGasLaw}
\end{equation}
%
where we assume that
%
\begin{equation}
   \dot{P}_v \approx 0
\end{equation}

Equations (\ref{Eq:BlowdownGas1stLaw})--(\ref{Eq:BlowDownGasLaw})
are five equations in five unknowns ($m_v$, $m_\ell$, $T_\ell$,
$T_g$, and $T_w$). Our approach is to use
Eq.~(\ref{Eq:BlowdownMassCon}) to eliminate $\dot{m}_v$ terms.  The
result is a system of four equations in four unknowns using
Eqs.~(\ref{Eq:BlowdownGas1stLaw}), (\ref{Eq:BlowdownLiquid1stLaw}),
(\ref{Eq:BlowdownWall1stLaw}), and (\ref{Eq:BlowDownGasLaw}).  The
result we seek is four decoupled ordinary differential equations for
$m_\ell$, $T_\ell$, $T_g$, and $T_w$.

Let's continue with Eq.~(\ref{Eq:BlowdownGas1stLaw}).  We need to
rewrite the equation in terms of $\dot{m}_\ell$ and $\dot{T}_g$
($\dot{T}_w$ and $\dot{T}_\ell$ do not appear explicitly).  Expanding
the derivatives assuming $\dot{m}_g = 0$ yields
%
\begin{equation}
   \dot{m}_v c_v T_g + m_v c_v \dot{T}_g +  m_g c_g \dot{T}_g = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g + \dot{m}_v h_{v}
\end{equation}
%
Now, substituting $\dot{m}_v = -\dot{m}_\ell - \dot{m}_e$ and noting
that $\dot{V}_g = -\nu_\ell \dot{m}_\ell$ if we assume
%
 \[\dot{\nu}_\ell = \frac{d \nu_\ell}{dT_\ell }\dot{T}_\ell
\approx 0
\]
%
we arrive at
%
\begin{equation}
   \begin{split}
      \left( T_g R_v - P_t \nu_\ell  \right) \dot{m}_\ell + \left( m_v c_v + m_g c_g\right)\dot{T}_g = \\
      \dot{Q}_v + \dot{Q}_g - \dot{m}_e T_g R_v   \label{Eq:BlowdownWVaporEq1}
   \end{split}
\end{equation}
%

Now continuing with Eq.~(\ref{Eq:BlowdownLiquid1stLaw}), expanding
the derivatives and making similar substitutions as we made
previously, we obtain
%
\begin{equation}
\begin{split}
    \dot{m}_\ell c_\ell T_\ell + m_\ell c_\ell \dot{T}_\ell = \dot{Q}_\ell - \dot{Q}_v +
    P_t(-\nu_\ell \dot{m}_\ell) - \\ (-\dot{m}_\ell - \dot{m}_e)h_{v} - \dot{m}_e c_\ell T_\ell
\end{split}
\end{equation}
%
Grouping terms we obtain
%
\begin{equation}
\begin{split}
    ( c_\ell T_\ell  + P_t \nu_\ell - h_{v})\dot{m}_\ell + ( m_\ell c_\ell )\dot{T}_\ell = \\
    \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_v - c_\ell T_\ell) \label{Eq:BlowdownWVaporEq2}
\end{split}
\end{equation}

For the wall region, described by Eq.~(\ref{Eq:BlowdownWall1stLaw}),
we arrive at
\begin{equation}
     (m_w c_w) \dot{T}_w = \dot{Q}_w - \dot{Q}_\ell - \dot{Q}_g \label{Eq:BlowdownWVaporEq3}
\end{equation}

Finally, by eliminating $\dot{m}_v$ in the Gas Law shown in
Eq.~(\ref{Eq:BlowDownGasLaw}) we obtain
%
\begin{equation}
    -\dot{m}_\ell - \dot{m}_e =   \frac{P_v (-\nu_\ell \dot{m}_\ell)}{R_v T_g} - \frac{P_v V_g \dot{T}_g}{R_v T_g^2}
\end{equation}
%
Grouping terms yields the result
\begin{equation}
    \left(  1 - \frac{P_v\nu_\ell }{R_v T_g}  \right) \dot{m}_\ell - \frac{P_v V_g }{R_v T_g^2}\dot{T}_g  =  - \dot{m}_e \label{Eq:BlowdownWVaporEq4}
\end{equation}

Equations (\ref{Eq:BlowdownWVaporEq1}),
(\ref{Eq:BlowdownWVaporEq2}), (\ref{Eq:BlowdownWVaporEq3}), and
(\ref{Eq:BlowdownWVaporEq4}) are four coupled ordinary differential
equations that can be decoupled by casting them in matrix form as
follows:
%
\begin{equation}
   \left(\begin{array}{cccc}
   A_{11} & 0 & A_{13}  & 0 \\
    A_{21} & A_{22} & 0  & 0 \\
    0 & 0 & 0  & A_{34} \\
    A_{41} & 0 & A_{43}  & 0 \\
   \end{array}\right)
   %
   \left(\begin{array}{c}
    \dot{m}_\ell \\
    \dot{T}_\ell  \\
    \dot{T}_g \\
   \dot{T}_w  \\
   \end{array}\right) =
   %
   \left(\begin{array}{c}
    b_1\\
    b_2  \\
    b_3 \\
    b_4  \\
   \end{array}\right)
\end{equation}
%
where
%
\begin{eqnarray}
   A_{11}& = & T_g R_v - P_t \nu_\ell \label{Eq:BlowDownA11}\\
   %
   A_{13}& = & m_v c_v + m_g c_g\\
   %
   A_{21} & = & c_\ell T_\ell + P_t \nu_\ell - h_{v}\\
   %
   A_{22} & = & m_\ell c_\ell\\
   %
   A_{34} & = & m_w c_w\\
   %
   A_{41} & = & 1  - \nu_\ell/\nu_v\\
   %
   A_{43} & = & - m_v/T_g\\
   %
   b_1 & = & \dot{Q}_v + \dot{Q}_g - \dot{m}_e T_g R_v \label{Eq:BlowDownb1}\\
   %
   b_2 & = & \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_v - c_\ell T_\ell) \\
   %
   b_3 & = & \dot{Q}_w -\dot{Q}_\ell - \dot{Q}_g\\
   %
   b_4 & = & -\dot{m}_e \label{Eq:BlowDownb4}
\end{eqnarray}
%

The solution to the equations is
%
\begin{eqnarray}
  \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}} \label{Eq:BlowDownmdotDiffEq}\\
  %
  \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 -  A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\right)\\
  %
  \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41} A_{13}}\\
  %
  \dot{T}_w &=& \frac{b_3}{A_{34}} \label{Eq:BlowDownTwdotDiffEq}
\end{eqnarray}
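%
These closed-form rates are simple to implement.  A minimal Python
sketch (the naming is ours; the cross-check against a general linear
solver is included for illustration only):
%
\begin{verbatim}
import numpy as np

def blowdown_rates(A11, A13, A21, A22, A34, A41, A43, b1, b2, b3, b4):
    """State rates for the blow down tank from the closed-form solution."""
    det = A11 * A43 - A41 * A13          # 2x2 determinant, must be nonzero
    m_ell_dot = (A43 * b1 - A13 * b4) / det
    T_g_dot   = (A11 * b4 - A41 * b1) / det
    T_ell_dot = (b2 - A21 * m_ell_dot) / A22
    T_w_dot   = b3 / A34
    # Cross-check against the full 4x4 system.
    A = np.array([[A11, 0.0, A13, 0.0],
                  [A21, A22, 0.0, 0.0],
                  [0.0, 0.0, 0.0, A34],
                  [A41, 0.0, A43, 0.0]])
    b = np.array([b1, b2, b3, b4])
    assert np.allclose(np.linalg.solve(A, b),
                       [m_ell_dot, T_ell_dot, T_g_dot, T_w_dot])
    return m_ell_dot, T_ell_dot, T_g_dot, T_w_dot
\end{verbatim}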
%
For the adiabatic model we set all heat transfer rates, $\dot{Q}$,
to zero in Eqs.~(\ref{Eq:BlowDownb1})--(\ref{Eq:BlowDownb4}), and so
there are only three state variables, as $\dot{T}_w = 0$ and hence
$T_w =$ constant.

Now let's develop equations for an isothermal model of a blow down
tank.  In the isothermal model, we assume $T_\ell = T_g = T_w = T$.
The only state variable that requires a differential equation is
$m_\ell$.  Because $T_g$, $T_\ell$, and hence, $P_v$ are constant,
we know that
%
\begin{equation}
    \dot{m}_v = \frac{P_v \dot{V}_g}{R_v T_g}
\end{equation}
%
Substituting this result into Eq.~(\ref{Eq:BlowdownMassCon}) and
solving for $\dot{m}_\ell$, we obtain
%
\begin{equation}
    \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v T}\right)}
\end{equation}

In summary, for the heat transfer model of a blow down tank, we
choose $m_\ell$, $T_\ell$, $T_g$, and $T_w$ as state variables.
Eqs.~(\ref{Eq:BlowDownVell})--(\ref{Eq:BlowDownPt}) are used to
calculate the remaining tank properties, and
Eqs.~(\ref{Eq:BlowDownmdotDiffEq})--(\ref{Eq:BlowDownTwdotDiffEq})
are used to model the tank states as functions of time.

For all three models, heat transfer, adiabatic, and isothermal,
knowing the state variables $m_\ell$, $T_\ell$, $T_g$, and $T_w$ we
compute the remaining tank properties using
%
\begin{eqnarray}
    V_\ell & = &  \nu_\ell(T_\ell)m_\ell  \nonumber \\
   %
    V_g & = &  V_t - V_\ell\nonumber \\
   %
   P_g & = &  \frac{m_g R_g T_g}{V_g}\nonumber \\
   %
   P_v & = &  P_v(T_\ell)\nonumber \\
   %
   m_v & = & \frac{P_v V_g}{R_v T_g}\nonumber \\
   %
   P_t & = &  P_v + P_g \nonumber
\end{eqnarray}

%
The models differ in the number of state variables and in the state
rate equations.  A summary is presented below.

\noindent\textit{Blow Down Tank: Heat Transfer}

\noindent State Variables:  $m_\ell$, $T_\ell$, $T_g$, $T_w$
%
\begin{eqnarray}
  \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\nonumber \\
  %
  \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 -  A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41}
  A_{13}}\right) \nonumber \\
  %
  \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41}
  A_{13}} \nonumber \\
  %
  \dot{T}_w &=& \frac{b_3}{A_{34}} \nonumber
\end{eqnarray}
%
where $A_{ij}$ and $b_i$ are given by
Eqs.~(\ref{Eq:BlowDownA11})--(\ref{Eq:BlowDownb4}).

\noindent\textit{Blow Down Tank:  Adiabatic}

\noindent State Variables:  $m_\ell$, $T_\ell$, $T_g$

%
\begin{eqnarray}
  \dot{m}_\ell &=& \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41} A_{13}}\nonumber \\
  %
  \dot{T}_\ell &=& \frac{1}{A_{22}}\left( b_2 -  A_{21} \frac{A_{43} b_{1}-A_{13} b_{4}}{A_{11} A_{43}-A_{41}
  A_{13}}\right) \nonumber \\
  %
  \dot{T}_g &=& \frac{A_{11} b_{4} - A_{41} b_{1}}{A_{11} A_{43}-A_{41}
  A_{13}} \nonumber
\end{eqnarray}
%
where $A_{ij}$ and $b_i$ are given by
Eqs.~(\ref{Eq:BlowDownA11})--(\ref{Eq:BlowDownb4}).  Note that all
heat flow rates, $\dot{Q}$, are set to zero.

\noindent\textit{Blow Down Tank:  Isothermal}

\noindent State Variables:  $m_\ell$
%
\begin{equation}
    \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v
    T}\right)} \nonumber
\end{equation}
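Given the rate equations above, propagating the tank states in time
is a standard initial value problem.  A minimal forward-Euler sketch
in Python (step size, state ordering, and function names are our own;
any standard ODE integrator may be substituted):
%
\begin{verbatim}
def propagate(state, t_end, dt, rates):
    """Forward-Euler propagation of the tank state vector.
    `rates(state)` returns the time derivatives of the state."""
    t = 0.0
    while t < t_end:
        state = [x + dt * dx for x, dx in zip(state, rates(state))]
        t += dt
    return state
\end{verbatim}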
\subsubsection{Pressure Regulated Tank}

The pressure regulated fuel tank model is the most complex tank
model supported in GMAT.  The model complexity is due to the
presence of liquid fuel and fuel vapor contained in the tank ullage,
and due to mass and energy transfer from the pressurant tank to the
ullage of the regulated fuel tank. In this section, we develop a
state space model for a pressure regulated tank. Note that to model a
pressure regulated tank using a heat transfer or adiabatic model, we
must simultaneously solve the equations associated with the
pressurant tank.  For the regulated tank model, we choose the state
variables to be the liquid mass and temperature, $m_\ell$ and
$T_\ell$, the gas temperature $T_g$, the tank wall temperature $T_w$,
and the incoming pressurant gas mass $m_i$.

In Fig.~\ref{fig:PressureRegulatedTank} we see an illustration of a
pressure regulated tank. Like the blow down tank model, we divide
the tank into three control volumes: the gas region, the liquid
region, and the tank wall region. Mass flow occurs where the
pressurant gas exits the tank, at the boundary between the liquid
and gas in the form of evaporation, and from the pressurant tank to
the ullage of the regulated tank.  Heat transfer occurs between all
three control volumes as well as with the surroundings. Hence, the
physical processes modelled for a pressure regulated tank are the
same as those listed for the blow down tank, with the added process
of mass flow from the pressurant tank.
%
\begin{figure}[ht]
\centerline{
    \begin{picture}(100,510)
    \special{psfile= ./Images/PressureRegulatedTank.eps hoffset= -120 voffset= 170
    hscale=55 vscale=55}
    \makebox(80,525){$\dot{m}_e$,$h_l$}
    \makebox(-80,970){1.Gas/Vapor}
    \makebox(-90,940){$m_g, m_v, P_g, P_v, T_g$}
    \makebox(-190,905){$\dot{Q}_v$}
    \makebox(-140,905){$\dot{m}_v, h_{lg}$}
    \makebox(10,905){$\dot{Q}_g$}
    \makebox(-135,820){2.Liquid}
    \makebox(-145,790){$m_\ell, T_\ell$}
    \makebox(-235,740){$\dot{Q}_\ell$}
    \makebox(-335,990){3.Tank Wall}
    \makebox(-335,970){$m_w, T_w$}
    \makebox(-265,1030){$\dot{Q}_w$}
    \makebox(-045,1030){$\dot{m}_i$, $h_i$}
    \end{picture}}\vskip -3.65 in  \caption{Pressure Regulated Tank Diagram} \label{fig:PressureRegulatedTank}
\end{figure}

The derivation of the state equations for a pressure regulated tank
follows naturally from the derivation of the blow down tank.  The
only control volume that differs between the two models is the
gas/vapor control volume.  Applying the 1st Law of thermodynamics to
the gas/vapor control volume of the pressure regulated tank gives us
%
\begin{equation}
   \frac{d}{dt}\left( m_v u_v + m_g u_g \right) = \dot{Q}_v + \dot{Q}_g - P_t \dot{V}_g +
    \dot{m}_v h_v + \dot{m}_p h_p\label{Eq:RegulatedGas1stLaw}
\end{equation}
%
Taking the time derivative of the gas law for the gas contained in
the tank ullage yields
%
\begin{equation}
   \dot{m}_g = \frac{P_g \dot{V}_g}{R_g
   T_g} - \frac{P_g V_g \dot{T}_g}{R_g T_g^2} \label{Eq:RegulatedGasLawDeriv}
\end{equation}
%

Equations (\ref{Eq:RegulatedGas1stLaw}) and
(\ref{Eq:RegulatedGasLawDeriv}), together with equations
(\ref{Eq:BlowdownWVaporEq2}), (\ref{Eq:BlowdownWVaporEq3}), and
(\ref{Eq:BlowdownWVaporEq4}), are a system of 5 equations in 5
unknowns which can be decoupled using simple linear algebra.
However, first we must expand Eqs.~(\ref{Eq:RegulatedGas1stLaw}) and
(\ref{Eq:RegulatedGasLawDeriv}) and write them in terms of the state
rate derivatives.  Expanding Eq.~(\ref{Eq:RegulatedGas1stLaw}), and
noting that the incoming pressurant mass flow accumulates in the
ullage gas so that $\dot{m}_p = \dot{m}_g$, we arrive at
\begin{equation}
   \begin{split}
   \left( R_v T_g - P_t \nu_\ell \right)\dot{m}_\ell + \left( m_v c_v + m_g c_g \right)
   \dot{T}_g + \\ \left(c_g T_g - h_p \right)\dot{m}_g = \dot{Q}_v +
   \dot{Q}_g - \dot{m}_e R_v T_g
   \end{split}
\end{equation}
%
Similarly, for Eq.~(\ref{Eq:RegulatedGasLawDeriv}) we obtain
%
\begin{equation}
    \frac{\nu_\ell}{\nu_g}\dot{m}_\ell + \frac{m_g}{T_g}\dot{T}_g + \dot{m}_g = 0
\end{equation}
%

To integrate the state equations we must first decouple them; this
is easily done by casting them in matrix form and solving the
resulting linear system.
We can write the equations in
state space form as follows:
%
\begin{equation}
   \left(\begin{array}{ccccc}
   A_{11} & 0 & A_{13}  & 0 & A_{15}\\
    A_{21} & A_{22} & 0  & 0 & 0 \\
    0 & 0 & 0  & A_{34} & 0\\
    A_{41} & 0 & A_{43}  & 0 & 0\\
    A_{51} & 0 & A_{53}  & 0 &  A_{55}\\
   \end{array}\right)
   %
   \left(\begin{array}{c}
    \dot{m}_\ell \\
    \dot{T}_\ell  \\
    \dot{T}_g \\
   \dot{T}_w  \\
   \dot{m}_g
   \end{array}\right) =
   %
   \left(\begin{array}{c}
    b_1\\
    b_2  \\
    b_3 \\
    b_4  \\
    0
   \end{array}\right)
\end{equation}
%
where the coefficients $A_{ij}$ and $b_i$ are given by
%
\begin{eqnarray}
   A_{11}& = & T_g R_v - P_t \nu_\ell \label{Eq:RegulatedA11}\\
   %
   A_{13}& = & m_v c_v + m_g c_g\\
   %
   A_{15} & = & c_g T_g - h_p\\
   %
   A_{21} & = & c_\ell T_\ell + P_t \nu_\ell - h_{lg}\\
   %
   A_{22} & = & m_\ell c_\ell\\
   %
   A_{34} & = & m_w c_w\\
   %
   A_{41} & = & 1  - \nu_\ell/\nu_v\\
   %
   A_{43} & = & - m_v/T_g\\
   %
   A_{51} & = & \nu_\ell/\nu_g\\
   %
   A_{53} & = & m_g/T_g \\
   %
   A_{55} & = & 1 \\
   %
   b_1 & = & \dot{Q}_v + \dot{Q}_g - \dot{m}_e R_v T_g \label{Eq:Regulatedb1}\\
   %
   b_2 & = & \dot{Q}_\ell - \dot{Q}_v + \dot{m}_e (h_{lg} - c_\ell T_\ell)\\
   %
   b_3 & = & \dot{Q}_w -\dot{Q}_\ell - \dot{Q}_g\\
   %
   b_4 & = & -\dot{m}_e \label{Eq:Regulatedb4}
\end{eqnarray}
%
Solving the system of equations yields
%
\begin{eqnarray}
   \dot{m}_\ell &=& \frac{A_{55} A_{43} b_1 - b_4 A_{13} A_{55} + b_4 A_{15} A_{53}}{D}\\
   \dot{T}_\ell &=& \frac{b_2 - A_{21}\dot{m}_\ell}{A_{22}}\\
   %
   \dot{T}_g  &=& \frac{-A_{41} A_{55} b_1 + b_4 A_{11} A_{55} - b_4 A_{51}
    A_{15}}{D}\\
   %
   \dot{T}_w &=& \frac{b_3}{A_{34}}\\
   %
   \dot{m}_g &=& \frac{-b_1 A_{51} A_{43} + b_1 A_{41} A_{53} - b_4 A_{11} A_{53} + b_4 A_{51} A_{13}}{D}
\end{eqnarray}
%
where
%
\begin{equation}
      D = A_{55} A_{43} A_{11} - A_{43} A_{51} A_{15} + A_{41} A_{15} A_{53} - A_{41} A_{13} A_{55}
\end{equation}
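As a sanity check on the closed-form expressions above, one can
assemble the $5\times 5$ system numerically and compare against a
general linear solver.  A Python sketch with arbitrary (nonsingular)
test coefficients, chosen for illustration only:
%
\begin{verbatim}
import numpy as np

# Arbitrary nonsingular test coefficients (illustration only).
A11, A13, A15, A21, A22 = 1.2, 0.8, -0.5, 2.0, 3.0
A34, A41, A43, A51, A53, A55 = 4.0, 0.9, -0.3, 0.4, 0.7, 1.0
b1, b2, b3, b4 = 1.0, 2.0, 3.0, 4.0

A = np.array([[A11, 0.0, A13, 0.0, A15],
              [A21, A22, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, A34, 0.0],
              [A41, 0.0, A43, 0.0, 0.0],
              [A51, 0.0, A53, 0.0, A55]])
b = np.array([b1, b2, b3, b4, 0.0])

D = A55*A43*A11 - A43*A51*A15 + A41*A15*A53 - A41*A13*A55
m_ell_dot = (A55*A43*b1 - b4*A13*A55 + b4*A15*A53) / D
T_g_dot   = (-A41*A55*b1 + b4*A11*A55 - b4*A51*A15) / D
m_g_dot   = (-b1*A51*A43 + b1*A41*A53 - b4*A11*A53 + b4*A51*A13) / D
T_ell_dot = (b2 - A21 * m_ell_dot) / A22
T_w_dot   = b3 / A34

assert np.allclose(np.linalg.solve(A, b),
                   [m_ell_dot, T_ell_dot, T_g_dot, T_w_dot, m_g_dot])
\end{verbatim}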
For the adiabatic model we set all heat transfer rates, $\dot{Q}$,
to zero in Eqs.~(\ref{Eq:Regulatedb1})--(\ref{Eq:Regulatedb4}). For
the adiabatic model there are only four state variables, as
$\dot{T}_w = 0$ and so $T_w =$ constant.

Now let's develop equations for an isothermal model of a pressure
regulated tank. In the isothermal model, we assume $T_\ell = T_g =
T_w = T$. The only state variables that require differential
equations are $m_\ell$ and $m_g$. Because $T_g =$ constant, and
hence, $P_v =$ constant, we know that
%
\begin{equation}
    \dot{m}_\ell = -\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v
    T}\right)}
\end{equation}
%
Similarly, for $m_g$ we obtain
%
\begin{equation}
   \dot{m}_g = \frac{\nu_\ell}{\nu_g}\frac{\dot{m}_e}{\left( 1 - \displaystyle\frac{P_v \nu_\ell}{R_v
    T}\right)}
\end{equation}

\subsubsection{Heat Transfer}

The heat transfer models are taken from Ring~\cite{Ring:64},
Incropera~\cite{Incropera:06}, and Pitts~\cite{Pitts:98}.  The heat
flow rate across a surface of area $A$ is
%
\begin{equation}
    \dot{Q} = h A \Delta T
\end{equation}
%
where the film coefficient $h$ follows from the free-convection
correlation
%
\begin{equation}
    \frac{h L}{k} = Nu = c(Gr_L Pr)^a
\end{equation}
%
so
%
\begin{equation}
       h = \frac{k c}{L}(Gr_L Pr)^a
\end{equation}
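As an illustration, the film coefficient is evaluated as follows in
Python (the correlation constants $c$ and $a$ depend on geometry and
flow regime and are placeholders here; naming is ours):
%
\begin{verbatim}
def grashof(g, beta, T_s, T_inf, L, nu):
    """Grashof number: ratio of buoyancy to viscous forces."""
    return g * beta * (T_s - T_inf) * L**3 / nu**2

def film_coefficient(k, L, c, a, Gr, Pr):
    """Film coefficient h from the correlation Nu = c (Gr Pr)^a."""
    Nu = c * (Gr * Pr)**a
    return Nu * k / L
\end{verbatim}
%
The dimensionless groups appearing in the correlation are summarized
in the table below.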
\\\\\r\n  \\hline\r\n\\end{tabular}\r\n\\end{table}\r\n\r\n\\subsubsection{ Physiochemical Properties }\r\n\r\nHydrazine Properties \\cite{HydraBook}\r\n\\[\r\nc = 3084 \\mbox{J/kg/K}\r\n\\]\r\n\\[\r\n\\rho \\mbox{ (kg/m}^3) = 1025.6 - 0.87415 \\mbox{ }T \\mbox{ }(^o\\mbox{C})- 0.45283e^{-3}\\mbox{ }T^2 \\mbox{ }(^o\\mbox{C})\r\n\\]\r\n%\r\n\\[\r\n\\rho \\mbox{ (kg/m}^3) = 1230.6 - 0.62677 \\mbox{ }T \\mbox{ }(^o\\mbox{K})- 0.45283e^{-3}\\mbox{ }T^2 \\mbox{ }(^o\\mbox{K})\r\n\\]\r\n\r\n\r\nSome models are from \\cite{Ricciardi:87}\r\n\r\n\\begin{eqnarray}\r\n  \\rho_\\ell(T_\\ell) & = &K_1 + K_2 T_\\ell + K_3 T_\\ell^2\r\n  \\mbox{  (kg/m$^3$)}\\\\\r\n  %\r\n  \\frac{d \\rho_\\ell}{dT_\\ell } &  = & K_2 + 2K_3 T_\\ell\r\n  \\mbox{  (kg/m$^3$)}\r\n\\end{eqnarray}\r\n%\r\n\\begin{eqnarray}\r\n  P_v & =  &\\displaystyle C_1 e^{(C_2 + C_3 T_\\ell + C_4 T_\\ell^2)} \\mbox{\r\n  (N/m$^2$)}\\\\\r\n  %\r\n  \\frac{d P_v}{d T_\\ell}& = &\\displaystyle C_1 ln{(C_2 + C_3 T_\\ell C_4 T_\\ell^2)}\\left(C_3 + 2C_4T_\\ell\\right)\r\n\\end{eqnarray}\r\n%\r\n\\begin{equation}\r\n    m_d = P_g m_\\ell^\\alpha\\left( \\frac{T_\\ell}{294}\\right)^2\r\n\\end{equation}\r\n%\r\n\\begin{equation}\\begin{split}\r\n    \\frac{d m_d}{dt} =  m_\\ell^\\alpha\\left(\r\n    \\frac{T_\\ell}{294}\\right)^2\\dot{P}_g + \\alpha P_g m_\\ell^{(\\alpha - 1)}\\left(\r\n    \\frac{T_\\ell}{294}\\right)^2 \\dot{m}_\\ell \\\\ + 2 P_g m_\\ell^\\alpha\\left(\r\n    \\frac{T_\\ell}{294}\\right)\\dot{T}_\\ell\r\n    \\end{split}\r\n\\end{equation}\r\n%\r\n\r\n\\begin{table} \\centering\r\n\\caption{Constants for Density Equations}\r\n\\begin{tabular}{lcc}\r\n  \\hline\r\n  % after \\\\: \\hline or \\cline{col1-col2} \\cline{col3-col4} ...\r\n  Const. & \\mbox{N}$_2$\\mbox{0}$_4$ & CH$_3$N$_2$H$_3$ \\\\\r\n  \\hline \\hline\r\n  $K_1$ (kg/m$^3$) & 2066 & 1105.3 \\\\\r\n  $K_2$ (kg/m$^3$/K )& -1.979 & -0.9395 \\\\\r\n  $K_3$ (kg/m$^3$/K$^2) $ & -4.826e-4 & 0 \\\\\r\n  \\hline\r\n\\end{tabular}\r\n\\end{table}\r\n\r\n\\begin{table} \\centering\r\n\\caption{Constants for Vapor Pressure Equations}\r\n\\begin{tabular}{lcc}\r\n  \\hline\r\n  % after \\\\: \\hline or \\cline{col1-col2} \\cline{col3-col4} ...\r\n  Const. & \\mbox{N}$_2$\\mbox{0}$_4$ & CH$_3$N$_2$H$_3$ \\\\\r\n  \\hline \\hline\r\n  $C_1$ (kg/m$^3$) & 6895 & 6895\\\\\r\n  $C_2$ (kg/m$^3$/K )& 18.6742 & 12.4293 \\\\\r\n  $C_3$ (kg/m$^3$/K$^2) $ & -5369.57 & 2543.37 \\\\\r\n  $C_4$ (kg/m$^3$/K$^2) $ & 194721 & 0 \\\\\r\n  \\hline\r\n\\end{tabular}\r\n\\end{table}\r\n\r\n\\begin{table} \\centering\r\n\\caption{Constants for Dissolved Pressurant Equations}\r\n\\begin{tabular}{lcc}\r\n  \\hline\r\n  % after \\\\: \\hline or \\cline{col1-col2} \\cline{col3-col4} ...\r\n  Const. 
\begin{table} \centering
\caption{Constants for Dissolved Pressurant Equations}
\begin{tabular}{lcc}
  \hline
  Const. & \mbox{N}$_2$\mbox{O}$_4$ & CH$_3$N$_2$H$_3$ \\
  \hline \hline
  $\alpha$ & 3.335e-11 & 2.059e-11 \\
  \hline
\end{tabular}
\end{table}
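For reference, the density correlation and its temperature
sensitivity are easy to evaluate; a short Python sketch using the
N$_2$O$_4$ column of the tables above (function and variable names
are ours):
%
\begin{verbatim}
def rho_liquid(T, K1, K2, K3):
    """Liquid density rho = K1 + K2*T + K3*T**2 (kg/m^3), T in K."""
    return K1 + K2 * T + K3 * T**2

# Constants from the density table above.
N2O4    = (2066.0, -1.979, -4.826e-4)
CH3N2H3 = (1105.3, -0.9395, 0.0)

rho     = rho_liquid(294.0, *N2O4)           # approx. 1442 kg/m^3
drho_dT = N2O4[1] + 2.0 * N2O4[2] * 294.0    # kg/m^3/K
\end{verbatim}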
\documentclass[12pt]{article}
\usepackage{times}
\usepackage{indentfirst}
\usepackage{graphicx}
\usepackage{multirow}
\usepackage{float}
\usepackage{amsmath}
\graphicspath{ {./img/} }
%Setup the layout of the pages
\topmargin 0.0cm
\oddsidemargin 0.2cm
\textwidth 16cm
\textheight 21cm
\footskip 1.0cm


\title{
    Machine Learning Review\\
}
\author{Molin Liu}

\begin{document}

\maketitle

\section*{Introduction}

This is a review of the Machine Learning course at the
\textit{University of Glasgow}.

\section{Regression}

\subsection{Linear Regression}

For $x = (x_1, x_2, ..., x_m)$,
where $x_i$ is the $i$-th attribute of $x$,
a linear model tries to learn a linear combination
of these attributes, i.e.:
$$f(x) = w_1 x_1 + w_2 x_2 +...+ w_m x_m+b$$
which can be written in vector form as:
$$f(x)=w^Tx+b$$
where $w=(w_1, w_2,...,w_m)$.

The linear regression model is trained by minimizing a \textit{loss function}.
Generally, we define the training objective as:
$$\min\sum_{i=1}^{m}{(y_i-f(x_i))^2}$$

\subsubsection{Loss Function}

There are several kinds of \textit{loss functions} we can choose from.

- \textbf{L1-norm}: the L1-norm loss can be written as follows:
$$S=\sum_{i=1}^{m}\left|y_{i}-f\left(x_{i}\right)\right|$$

- \textbf{L2-norm}: the L2-norm loss is what we used above:
$$S=\sum_{i=1}^{m}\left(y_{i}-f\left(x_{i}\right)\right)^{2}$$

\subsubsection{Least Square Solution}
The \textbf{Least Squares} solution is:
$$
\vec{w}=\left(X^{\top} X\right)^{-1} X^{\top} Y
$$

\textbf{Proof:}

The error vector $(Y-X \vec{w})$ should be orthogonal to every column of $X$:
$$
(Y-X \vec{w}) \cdot X_{j}=0
$$

which can be written as a matrix equation:
$$
(Y-X \vec{w})^{\top} X=\overrightarrow{0}
$$

Then the solution follows directly:
$$
\begin{aligned} X^{\top}(Y-X \vec{w}) &=X^{\top} Y-X^{\top} X \vec{w}=0 \\ \Longrightarrow &\left(X^{\top} X\right) \vec{w}=X^{\top} Y \\ \Longrightarrow \quad \vec{w}=\left(X^{\top} X\right)^{-1} X^{\top} Y \end{aligned}
$$
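A minimal NumPy illustration of the least squares solution on
synthetic data (in practice, prefer \texttt{np.linalg.lstsq} or
\texttt{np.linalg.solve} over explicitly inverting $X^{\top}X$):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # bias + 2 features
w_true = np.array([1.0, 2.0, -3.0])
Y = X @ w_true + 0.1 * rng.normal(size=50)

# Normal equations: (X^T X) w = X^T Y
w = np.linalg.solve(X.T @ X, X.T @ Y)   # close to w_true
\end{verbatim}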
\subsubsection{Conclusion}

\subsection{Polynomial Regression}

\textbf{Polynomial Regression} can be written as:
$$
t=w_{0}+w_{1} x+w_{2} x^{2}+w_{3} x^{3}+\ldots+w_{K} x^{K}=\sum_{k=0}^{K} w_{k} x^{k}
$$

Define the loss function:
$$
\mathcal{L}=\frac{1}{N}(\mathbf{t}-\mathbf{X} \mathbf{w})^{\top}(\mathbf{t}-\mathbf{X} \mathbf{w})
$$
\subsubsection{Generalization \& Overfitting}

Note that the training \textbf{loss} will always decrease
as the model is made more complex.

How do we choose the right model complexity?
\textbf{Cross-validation}.

\subsubsection{Cross-Validation}

\section{Classification}

The \textbf{Classification} task is to classify
a set of \textit{N} objects $x_i$ with attributes.
Each object has an associated label $t_i$.

A \textbf{probabilistic classifier} produces a probability of class membership:

$P(t_{\text {new}}=k | \mathbf{x}_{\text {new }}, \mathbf{X}, \mathbf{t})$

A \textbf{non-probabilistic classifier} produces a hard assignment:

$t_{new}=1$ or $t_{new}=0$

\subsection{KNN}

K-Nearest Neighbours (KNN):

- Non-probabilistic classifier;

- Supervised training;

- Fast;

- We can use CV to find the right $K$.

\subsubsection{Problem}
- As $K$ increases, the small classes will disappear.

\subsection{Logistic Regression}

\subsection{SVM}
\subsubsection{Hard Margin}
If the training data are linearly separable,
we can select two parallel hyperplanes that separate the two classes of data,
so that the distance between them is as large as possible.

These hyperplanes can be described by the equations:
$$
\vec{w} \cdot \vec{x}_{i}-b \geq 1, \text { if } y_{i}=1
$$
or
$$
\vec{w} \cdot \vec{x}_{i}-b \leq-1, \text { if } y_{i}=-1
$$

We can easily infer that
$$
y_{i}\left(\vec{w} \cdot \vec{x}_{i}-b\right) \geq 1, \quad \text { for all } 1 \leq i \leq n
$$

We want to maximise the margin $\gamma=\frac{1}{\|\mathbf{w}\|}$,
which is equivalent to minimising $\|\mathbf{w}\|$.

Note: $y_i$ is the label in the data, which is in $\{-1, 1\}$,
rather than the vertical-axis value of the data.
\begin{figure}[H]
    \center
    \includegraphics{img/600px-SVM_margin.png}
    \caption{Hard margin}
    \label{fig:svm_hard_margin}
\end{figure}

\subsubsection{Soft Margin}
The soft margin is used when the data are not linearly separable.

\subsubsection{Inner Product}

\section{ROC}
Sensitivity/Recall:

$$
S_{e}=\frac{T P}{T P+F N}
$$

Specificity:

$$
S_{p}=\frac{T N}{T N+F P}
$$

\section{Unsupervised Learning}
\subsection{K-Means}

\subsection{Gaussian Mixture Model}
\end{document}
\section{Uniform and Non-Uniform Hierarchical B-Splines}
\label{sec:31standardBSplines}

\minitoc{83mm}{7}

\noindent
In this section, we mainly follow the presentation of
\multicite{Pflueger10Spatially,Valentin14Hierarchische,Valentin16Hierarchical}
to define hierarchical B-splines
starting from the well-known nodal B-spline basis
\multicite{Hoellig03Finite,Hoellig13Approximation,Quak16About}.
Note that thanks to the groundwork laid in \cref{chap:20sparseGrids},
especially \thmref{lemma:tensorProductLinearIndependence} and
\thmref{prop:splittingUVToMV},
it suffices to study the univariate case
of all bases that will be defined in the rest of this thesis.
The multivariate case is treated canonically by tensor products.



\subsection{Uniform Hierarchical B-Splines}
\label{sec:311uniform}

\paragraph{Cardinal B-splines}

The \term{cardinal B-spline}
$\cardbspl{p}\colon \real \to \real$ of \term{degree} $p \in \natz$
is defined by
\begin{equation}
  \label{eq:cardinalBSpline}
  \cardbspl{p}(x)
  \ceq
  \begin{cases}
    \displaystyle\int_0^1 \cardbspl{p-1}(x - y) \diff{}y,&p \ge 1,\\
    \charfun{\hopint{0, 1}}(x),&p = 0,
  \end{cases}
\end{equation}
where $\charfun{\hopint{0, 1}}$ is the characteristic function of
the half-open unit interval $\hopint{0, 1}$
(see \cite{Hoellig13Approximation}).
The cardinal B-spline $\cardbspl{p}$ has the following properties
\cite{Hoellig03Finite},
which are shown in \cref{fig:cardinalBSplineProps}:

\begin{figure}
  \includegraphics{cardinalBSplineProps_1}%
  \caption[%
    Properties of cardinal B-splines%
  ]{%
    Eight properties of cardinal B-splines using the quadratic case
    $p = 2$ as an example.\\
    \lefthphantom{1.}{5.}\,
    $\cardbspl{p}$ is compactly supported on $\clint{0, p+1}$.\\
    \lefthphantom{2.}{5.}\,
    $\cardbspl{p}$ is symmetric and $0 \le \cardbspl{p} \le 1$.\\
    \lefthphantom{3.}{5.}\,
    $\cardbspl{p}$ is a weighted combination of
    $\cardbspl{p-1}$ \emph{\textcolor{C0}{(blue)}} and
    $\cardbspl{p-1}({\cdot} - 1)$ \emph{\textcolor{C1}{(red)}.}\\
    %where the coefficients \emph{(dashed)} are linear in $x$.\\
    \lefthphantom{4.}{5.}\,
    $\cardbspl{p}$ is a piecewise polynomial of degree $p$.\\
    \lefthphantom{5.}{5.}\,
    $\deriv{x}{\cardbspl{p}}$ \emph{(dashed)}
    is the difference of
    $\cardbspl{p-1}$ \emph{\textcolor{C0}{(blue)}} and
    $\cardbspl{p-1}({\cdot} - 1)$ \emph{\textcolor{C1}{(red)}.}\\
    \lefthphantom{6.}{5.}\,
    $\cardbspl{p}$ has unit integral.\\
    \lefthphantom{7.}{5.}\,
    $\cardbspl{p}$ is the convolution of
    $\cardbspl{p-1}$ \emph{\textcolor{C0}{(blue)}} and $\cardbspl{0}$.\\
    \lefthphantom{8.}{5.}\,
    Hat function \emph{\textcolor{C0}{(blue)}} and
    Gaussian function \emph{\textcolor{C1}{(red)}}
    are special cases of $\cardbspl{p}$.%
  }%
  \label{fig:cardinalBSplineProps}%
\end{figure}

\begin{enumerate}
  \item
  \emph{Compact support:}
  The support of $\cardbspl{p}$ is given by $\supp \cardbspl{p} = \clint{0, p + 1}$.

  \item
  \emph{Bounds and symmetry:}
  The cardinal B-spline $\cardbspl{p}$ is non-negative and bounded from above by one.
  It is symmetric with respect to $x = \tfrac{p+1}{2}$, i.e.,
  $\cardbspl{p}(x) = \cardbspl{p}(p + 1 - x)$.

  \item
  \emph{Recursion:}
  The cardinal B-spline $\cardbspl{p}$ ($p 
\ge 1$)
  satisfies the following recurrence relation
  (which can be used as an alternative definition):
  \begin{equation}
    \cardbspl{p}(x)
    = \frac{x}{p} \cardbspl{p-1}(x) + \frac{p+1-x}{p} \cardbspl{p-1}(x-1).
  \end{equation}

  \item
  \emph{Spline:}
  On every \term{knot interval} $\hopint{k, k+1}$ for $k = 0, \dotsc, p$,
  $\cardbspl{p}$ is a polynomial of degree~$p$, i.e.,
  $\cardbspl{p}$ is a spline of degree $p$ (piecewise polynomial).

  \item
  \emph{Derivative:}
  At the \term{knots} $k = 0, \dotsc, p + 1$,
  $\cardbspl{p}$ is $(p - 1)$ times continuously differentiable (if $p \ge 1$).
  The derivative can be computed by differentiating
  \eqref{eq:cardinalBSpline}:
  \begin{equation}
    \label{eq:cardinalBSplineDerivative}
    \deriv{x}{\cardbspl{p}}(x)
    = \cardbspl{p-1}(x) - \cardbspl{p-1}(x-1),\quad
    x \in \real.
  \end{equation}

  \item
  \emph{Integral:}
  The B-spline $\cardbspl{p}$ has unit integral, i.e.,
  $\int_{\real} \cardbspl{p}(x) \dx = 1$.
  %\vspace{-0.5em}
  %\begin{equation}
  %  \int_{\real} \cardbspl{p}(x) \dx = 1.
  %\end{equation}

  \item
  \emph{Convolution:}
  The integral in the definition of $\cardbspl{p}$
  is the convolution $\cardbspl{p-1} \convolution \cardbspl{0}$
  of the B-spline $\cardbspl{p-1}$
  of degree $p - 1$ with the B-spline $\cardbspl{0}$ of degree zero.

  \item
  \emph{Generalization:}
  As a special case, $\cardbspl{1}$ is a hat function,
  interpolating the data
  $\{(k, \kronecker{k}{1}) \mid k \in \integer\}$.
  For $p \to \infty$, the normalized cardinal B-splines converge
  pointwise to the standard Gaussian function
  $\cardbspl{\infty}(x) \ceq (2\pi)^{-1/2} \exp(-x^2/2)$ \cite{Unser92Asymptotic}:%
  \footnote{%
    This can also be seen as a consequence of the central limit theorem
    applied to uniformly distributed random variables.
    The pointwise convergence of the probability density functions
    can be proven from the convergence
    in distribution using a converse to Scheffé's theorem
    \cite{Boos85Converse}.%
  }
  \vspace{-0.5em}
  \begin{equation}
    \lim_{p \to \infty}
    c^p \cardbspl{p}(c^p x + \tfrac{p+1}{2})
    = \cardbspl{\infty}(x),\quad
    c^p \ceq \sqrt{\frac{p+1}{12}},\quad
    x \in \real.
  \end{equation}
\end{enumerate}

The cardinal B-splines of the first degrees are shown in
\cref{fig:cardinalBSpline}.
Due to the convolution property,
cardinal B-splines of degree $p \ge 2$ are ``smoothed versions''
of the hat function.
This is shown in the flip book animation in the bottom left corner
of the even-numbered pages of this thesis.

\begin{figure}
  \includegraphics{cardinalBSpline_1}%
  \caption[%
    Cardinal B-splines%
  ]{%
    Cardinal B-splines $\cardbspl{p}$ up to quintic degree $p = 5$.%
  }%
  \label{fig:cardinalBSpline}%
\end{figure}
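The recursion in property 3 translates directly into code.
A short Python sketch (evaluation only; the naming is ours, and no
attempt is made to precompute the polynomial pieces):
%
\begin{verbatim}
def cardinal_bspline(p, x):
    """Evaluate the cardinal B-spline of degree p at x via the
    recurrence relation of property 3."""
    if p == 0:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x / p) * cardinal_bspline(p - 1, x) \
        + ((p + 1 - x) / p) * cardinal_bspline(p - 1, x - 1)
\end{verbatim}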
\paragraph{Definition of uniform hierarchical B-splines}

\usenotation{Ëp}
As for the hat functions in \cref{chap:20sparseGrids},
we can use the cardinal B-spline $\cardbspl{p}$ as a ``parent function'' to
define the uniform hierarchical B-spline
$\bspl{l,i}{p}\colon \clint{0, 1} \to \real$ of level~$l \in \natz$ and index
$i \in \hiset{l}$ via an affine parameter transformation
\cite{Pflueger10Spatially}:
\begin{equation}
  \label{eq:uniformHierarchicalBSplineUV}
  \bspl{l,i}{p}(x)
  \ceq \cardbspl{p}(\tfrac{x}{\ms{l}} + \tfrac{p+1}{2} - i).
\end{equation}
The support of $\bspl{l,i}{p}$ is given
by $\supp \bspl{l,i}{p} = \clint{0, 1} \cap \clint{\gp{l,i-(p+1)/2}, \gp{l,i+(p+1)/2}}$.
The hat function basis $\bspl{l,i}{1}$ defined in
\eqref{eq:hatFunctionUV} is a special case of
\eqref{eq:uniformHierarchicalBSplineUV} for $p = 1$,
which allows us to use the same notation $\bspl{l,i}{p}$ for both.
Note that due to the \term{translation invariance} of $\bspl{l,i}{p}$
(i.e., the basis functions are the same up to scaling and translation),
it suffices to precompute and implement the polynomial pieces of $\cardbspl{p}$
to enable evaluations of all hierarchical B-splines
$\bspl{l,i}{p}$ ($l \in \natz$, $i \in \hiset{l}$).

\begin{figure}
  \subcaptionbox{%
    Nodal B-splines $\bspl{l,i}{p}$ ($i \in \hiset{l}$) and grid points $\gp{l,i}$
    \emph{(dots).}%
  }[67mm]{%
    \includegraphics{hierarchicalBasis_4}%
  }%
  \hfill%
  \begin{tikzpicture}
    \draw[decorate,decoration={brace,aspect=0.153}] (0,-8.5) -- (0,0);
    \node[anchor=east,inner sep=0mm] at (-0.15,-7.208) {$= \bigoplus$};
  \end{tikzpicture}%
  \hfill%
  \subcaptionbox{%
    Hierarchical B-splines $\bspl{l',i'}{p}$ ($l' \le l$, $i' \in \hiset{l'}$)
    and grid points $\gp{l',i'}$
    \emph{(dots).}%
  }[69mm]{%
    \includegraphics{hierarchicalBasis_5}%
  }%
  \caption[%
    Nodal and hierarchical B-splines%
  ]{%
    Univariate nodal and hierarchical cubic B-splines ($p = 3$)
    \vspace{0.1em}%
    up to level $l = 3$.
    When restricting all functions to $\rspldomain{l}{p}$
    \emph{(thick black line),}
    \vspace{-0.2em}%
    the nodal space $\restrictspace{\nsbspl{l}{p}}{\rspldomain{l}{p}}$
    decomposes into the direct sum of the hierarchical subspaces
    $\restrictspace{\hsbspl{l'}{p}}{\rspldomain{l}{p}}$ ($l' \le l$).%
  }%
  \label{fig:hierarchicalBSpline}%
\end{figure}

\paragraph{Odd and even degrees}

In this thesis, we will only allow odd degrees $p = 1, 3, 5, \dotsc$
for hierarchical B-splines.
Many theoretical considerations fail for even degrees.
The basic reason is that for odd degrees, the knots of
$\bspl{l,i}{p}$ coincide with the grid points \cite{Valentin14Hierarchische}
\begin{equation}
  \gp{l,i-(p+1)/2},\quad
  \dotsc,\quad
  \gp{l,i},\quad
  \dotsc,\quad
  \gp{l,i+(p+1)/2}.
\end{equation}
For even degrees $p$, the knots of $\bspl{l,i}{p}$ lie exactly in
the middle between two subsequent grid points:
\begin{equation}
  \gp{l,i-p/2} - \frac{\ms{l}}{2},\quad
  \dotsc,\quad
  \gp{l,i} - \frac{\ms{l}}{2},\quad
  \gp{l,i} + \frac{\ms{l}}{2},\quad
  \dotsc,\quad
  \gp{l,i+p/2} + \frac{\ms{l}}{2}.
\end{equation}
This fact has many adverse implications for the hierarchical basis.
The most crucial implication is
that for even degrees $p$,
the hierarchical splitting \eqref{eq:hierSplittingUV} does not hold.
Furthermore,
we would not be able to define non-uniform hierarchical B-splines as
simply as for odd degrees, and
fundamental splines would not be defined at all
(as we will see in \cref{sec:443fundamentalSplines}).
Additionally,
odd degrees include the hat function case~($p = 1$) and the
most commonly applied cubic degree~($p = 3$).
Therefore,
it is reasonable to restrict ourselves to odd degrees
for the rest of the thesis.
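In code, the affine transformation in
\eqref{eq:uniformHierarchicalBSplineUV} reduces the evaluation of any
hierarchical B-spline to the cardinal B-spline; a one-line Python
sketch building on \texttt{cardinal\_bspline} from above (with mesh
size $h_l = 2^{-l}$):
%
\begin{verbatim}
def hierarchical_bspline(l, i, p, x):
    """Uniform hierarchical B-spline b_{l,i}^p on [0, 1]."""
    h = 2.0 ** (-l)                   # mesh size of level l
    return cardinal_bspline(p, x / h + (p + 1) / 2 - i)
\end{verbatim}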
\subsection{Non-Uniform B-Splines and Proof of the Hierarchical Splitting}
\label{sec:312proofHierarchicalSplitting}

\paragraph{Non-uniform B-splines and spline space}

With the hierarchical B-splines $\bspl{l,i}{p}$, we can define
the nodal spaces $\nsbspl{l}{p}$ and hierarchical subspaces $\hsbspl{l}{p}$
as in \cref{chap:20sparseGrids}.
However, in order for the hierarchical splitting \eqref{eq:hierSplittingUV}
to be correct, we have to prove that the conditions of
\thmref{lemma:hierSplittingUV} are satisfied.
To investigate what the nodal space $\nsbspl{l}{p}$ looks like,
we introduce the notion of non-uniform B-splines.

\begin{definition}[non-uniform B-splines]
  \label{def:nonUniformBSpline}
  Let $m, p \in \natz$ and $\knotseq = (\knot{0}, \dotsc, \knot{m+p})$ be an
  increasing sequence of real numbers \term{(knot sequence).}
  For $k = 0, \dotsc, m - 1$,
  the \term{(non-uniform) B-spline} $\nonunifbspl{k,\knotseq}{p}$ of degree $p$
  with knots~$\knotseq$ and index $k$ is defined by the
  Cox--de~Boor recurrence
  \multicite{Cox72Numerical,Boor72Calculating,Hoellig13Approximation}
  \begin{equation}
    \nonunifbspl{k,\knotseq}{p}(x)
    \ceq
    \begin{cases}
      \dfrac{x - \knot{k}}{\knot{k+p} - \knot{k}} \nonunifbspl{k,\knotseq}{p-1}(x) +
      \dfrac{\knot{k+p+1} - x}{\knot{k+p+1} - \knot{k+1}}
      \nonunifbspl{k+1,\knotseq}{p-1}(x),&p \ge 1,\\
      \charfun{\hopint{\knot{k}, \knot{k+1}}}(x),&p = 0.
    \end{cases}
    \hspace*{-4mm}
  \end{equation}
\end{definition}
Note that when choosing $\knotseq = (0, 1, \dotsc, p + 1)$ and
$k = 0$, we obtain the cardinal B-spline~$\cardbspl{p}$.
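\Cref{def:nonUniformBSpline} can be implemented verbatim;
a recursive Python sketch of the Cox--de~Boor recurrence
(strictly increasing knots assumed, as in the definition,
so no zero denominators occur):
%
\begin{verbatim}
def bspline(k, p, xi, x):
    """Non-uniform B-spline of degree p and index k with increasing
    knot sequence xi, via the Cox-de Boor recurrence."""
    if p == 0:
        return 1.0 if xi[k] <= x < xi[k + 1] else 0.0
    left = (x - xi[k]) / (xi[k + p] - xi[k])
    right = (xi[k + p + 1] - x) / (xi[k + p + 1] - xi[k + 1])
    return left * bspline(k, p - 1, xi, x) \
        + right * bspline(k + 1, p - 1, xi, x)
\end{verbatim}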
degree $\\le p$\nand therefore, there must be $p + 1$ degrees of freedom.\nOutside $\\spldomain{\\knotseq}{p}$, there are too few relevant B-splines\nto span the spline space.\nThis fact,\nwhich is shown in \\cref{fig:splineSpaceUniform},\nforces us to restrict the nodal space and the hierarchical subspaces\nto~$\\spldomain{\\knotseq}{p}$:\n\n\\begin{figure}\n  \\includegraphics{splineSpace_1}%\n  \\caption[%\n    Non-uniform B-splines with knot sequence and interpolation domain%\n  ]{%\n    Knot sequence $\\knotseq = (\\knot{0}, \\dotsc, \\knot{m+p})$\n    \\vphantom{$\\nonunifbspl{0,\\knotseq}{p}$}%\n    with the corresponding $m = 7$ non-uniform cubic B-splines\n    $\\nonunifbspl{k,\\knotseq}{p}$ ($k = 0, \\dotsc, m - 1$, $p = 3$).\n    On $\\spldomain{\\knotseq}{p}$ \\emph{(thick line, delimited by dashed lines),}\n    which starts with the last knot interval of the first B-spline\n    $\\nonunifbspl{0,\\knotseq}{p}$\n    and ends with the first knot interval of the last B-spline\n    $\\nonunifbspl{m-1,\\knotseq}{p}$,\n    the B-splines span the spline space $\\nonunifsplspace{\\knotseq}{p}$.\n    Elements of this space are splines $\\spl\\colon \\spldomain{\\knotseq}{p} \\to \\real$\n    \\emph{(black line),}\n    which are linear combinations\n    $\\spl = \\sum_{k=0}^{m-1} \\interpcoeff{k} \\nonunifbspl{k,\\knotseq}{p}$\n    of the B-splines.%\n  }%\n  \\label{fig:splineSpaceGeneral}%\n\\end{figure}\n\n\\begin{figure}\n  \\includegraphics{splineSpace_2}%\n  \\caption[%\n    Uniform nodal B-splines and knot sequence%\n  ]{%\n    Uniform knot sequence $\\nodalknotseq{l}{p}$ \\emph{(ticks on horizontal axis)}\n    and corresponding nodal cubic B-splines ($p = 3$) of level $l = 3$.\n    In the domain $\\clint{0, 1}$ \\emph{(delimited by dashed lines),}\n    the grid points $\\fgset{l}$ \\emph{\\textcolor{mittelblau}{(blue dots)}}\n    coincide with the B-spline knots.\n    The spline interpolation domain\n    $\\rspldomain{l}{p} \\ceq \\spldomain{\\nodalknotseq{l}{p}}{p}$\n    \\emph{(thick line)}\n    is only a proper subset of~$\\clint{0, 1}$.%\n  }%\n  \\label{fig:splineSpaceUniform}%\n\\end{figure}\n\n\\begin{corollary}[nodal B-spline space]\n  \\label{cor:nodalBSplineSpace}\n  The restricted nodal B-splines $\\restrictfcn{\\bspl{l,i}{p}}{\\rspldomain{l}{p}}$\n  ($i = 0, \\dotsc, 2^l$)\n  of level $l \\in \\natz$ are\n  a basis of the spline space $\\nonunifsplspace{\\nodalknotseq{l}{p}}{p}$,\n  where\n  \\begin{subequations}\n    \\begin{gather}\n      \\nodalknot{l,k}{p}\n      \\ceq (k - \\tfrac{p+1}{2}) \\ms{l},\\quad\n      k = 0, \\dotsc, m + p,\\quad\n      m \\ceq 2^l + 1,\\\\\n      \\rspldomain{l}{p} \\ceq [\\tfrac{p-1}{2} \\ms{l},\\;\n      1 - \\tfrac{p-1}{2} \\ms{l}],\n    \\end{gather}\n  \\end{subequations}\n  and consequently\n  \\begin{equation}\n    \\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}\n    = \\restrictedsplspace{l}{p}\n    \\ceq \\nonunifsplspace{\\nodalknotseq{l}{p}}{p}.\n  \\end{equation}\n\\end{corollary}\n\n\\begin{proof}\n  We have $\\bspl{l,i}{p} = \\nonunifbspl{i,\\nodalknotseq{l}{p}}{p}$ for\n  $i = 0, \\dotsc, m - 1$,\n  as the B-splines on both sides have the same knots.\n  The assertions now follow from \\thmref{prop:splineSpace}.\n\\end{proof}\n\n\\vspace{1em}\n\nNote that $\\rspldomain{l}{p}$ might contain only a single point or even be empty,\nif $p$ is too large or $l$ is too small.\nHowever, the corresponding B-splines $\\bspl{l,i}{p}$ are still linearly\nindependent on~$\\clint{0, 1}$ (see 
\\cite{Hoellig13Approximation}).\nSimilarly, the corollary also implies that the hierarchical functions\n$\\bspl{l,i}{p}$ of level $l$ ($i \\in \\hiset{l}$)\nare linearly independent on $\\clint{0, 1}$.\n\n\\paragraph{Hierarchical splitting for uniform B-splines}\n\nWe can use \\cref{prop:splineSpace} and \\cref{cor:nodalBSplineSpace}\nto prove the hierarchical splitting for the uniform B-spline basis\n\\cite{Valentin16Hierarchical}.\n\n\\begin{lemma}[hierarchical B-splines in nodal space]\n  \\label{lemma:hierBSplineInNodalSpace}\n  The restricted hierarchical subspaces\n  $\\restrictspace{\\hsbspl{l'}{p}}{\\rspldomain{l}{p}}$ ($l' \\le l$) are\n  subspaces of the restricted nodal space $\\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}$.\n\\end{lemma}\n\n\\begin{proof}\n  Every function $\\bspl{l',i'}{p}$ ($i' \\in \\hiset{l'}$) is continuous on\n  $\\rspldomain{l}{p}$, a polynomial of degree $\\le p$ on every knot interval\n  of $\\nodalknotseq{l}{p}$ (due to $p$ odd),\n  and at the knots themselves at least $(p - 1)$ times continuously\n  differentiable.\n  \\Cref{prop:splineSpace} implies $\\bspl{l',i'}{p} \\in \\restrictedsplspace{l}{p}$\n  and from \\cref{cor:nodalBSplineSpace}, it follows\n  $\\bspl{l',i'}{p} \\in \\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}$.\n  As the functions $\\bspl{l',i'}{p}$ ($i' \\in \\hiset{l'}$) span\n  $\\restrictspace{\\hsbspl{l'}{p}}{\\rspldomain{l}{p}}$, we can conclude\n  $\\restrictspace{\\hsbspl{l'}{p}}{\\rspldomain{l}{p}} \\subset \\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}$.\n\\end{proof}\n\n\\vspace{1em}\n\nIt is crucial to note that this lemma does not hold for even $p$,\nas the knots of the B-splines of level $l - 1$ are not contained in the\nknots of level $l$.\nThis implies that in general,\n$\\restrictspace{\\hsbspl{l-1}{p}}{\\rspldomain{l}{p}}$\nis not contained in $\\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}$\nfor even degrees $p$.\nTherefore, the hierarchical splitting equation \\eqref{eq:hierSplittingUV}\ndoes not hold.\n\n\\begin{restatable}[hierarchical B-splines are linearly independent]{%\n  proposition%\n}{%\n  propHierBSplineLinearlyIndependent%\n}\n  \\label{prop:hierBSplineLinearlyIndependent}\n  The hierarchical B-splines\n  $\\bspl{l',i'}{p}$ ($l' \\le l$, $i' \\in \\hiset{l'}$)\n  are linearly independent.\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a121proofHierBSplineLinearlyIndependent}.\n\\end{proof}\n\n\\vspace{1em}\n\nAlthough we have to restrict all functions and spaces to $\\rspldomain{l}{p}$,\n\\thmref{lemma:hierSplittingUV} is still applicable to prove that\nthe hierarchical splitting equation \\eqref{eq:hierSplittingUV}\nis correct for hierarchical B-splines:\n\n\\begin{corollary}[hierarchical splitting for uniform B-splines]\n  \\label{cor:hierSplittingBSpline}\n  The hierarchical splitting \\eqref{eq:hierSplittingUV}\n  holds for the hierarchical B-spline basis\n  if restricting all functions to $\\rspldomain{l}{p}$:\n  \\begin{equation}\n    \\restrictedsplspace{l}{p}\n    = \\restrictspace{\\nsbspl{l}{p}}{\\rspldomain{l}{p}}\n    = \\bigoplus_{l'=0}^l \\restrictspace{\\hsbspl{l'}{p}}{\\rspldomain{l}{p}}.\n  \\end{equation}\n\\end{corollary}\n\n\\begin{proof}\n  Analogously to the proof of \\thmref{lemma:hierSplittingUV}\n  and apply \\thmref{cor:nodalBSplineSpace}.\n\\end{proof}\n\n\\vspace{1em}\n\nThis corollary is also visualized in \\cref{fig:hierarchicalBSpline}.\nWe can now proceed to define\nmultivariate nodal spaces $\\nsbspl{\\*l}{p}$,\nmultivariate hierarchical 
subspaces $\hsbspl{\*l}{p}$, and
sparse grid spaces $\regsgspace[p]{n}{d}$ as in \cref{chap:20sparseGrids}.
Note that it is possible to choose different degrees $p_t$ for
different dimensions $t = 1, \dotsc, d$,
since the hierarchical splitting \eqref{eq:hierSplittingMV} does not
require the bases in each dimension to be the same.
Consequently, we can define degree-dimension-adaptive
(so-called \term{$hp$-adaptive}) sparse grids
$\regsgspace[\*p]{n}{d}$ for arbitrary odd degree vectors $\*p$.

In the course of this thesis, we will derive multiple variations
of the standard hierarchical B-spline basis.
We will not repeat formal proofs of the hierarchical splitting equation
\eqref{eq:hierSplittingUV}
(i.e., verifying the two conditions of \cref{lemma:hierSplittingUV})
for each of these bases for the sake of brevity.
The idea of the proof of \cref{prop:hierBSplineLinearlyIndependent},
which inductively exploits the smoothness conditions given by
B-splines of previous levels, can be transferred to similar B-spline
bases.



\subsection{Modification}
\label{sec:313modification}

\paragraph{Marsden's identity}

Similar to the piecewise linear case in \cref{sec:242modified},
Pflüger defined modified
hierarchical B-splines to obtain reasonable values on the boundary
without having to place grid points there \cite{Pflueger10Spatially}.
The main motivation is to define basis
functions $\bspl[\modified]{l,i}{p}$ that satisfy natural boundary
conditions, i.e.,
\begin{equation}
  \label{eq:naturalBoundaryConditions}
  \deriv[2]{x}{\bspl[\modified]{l,i}{p}}(x) = 0,\quad
  x \in \bndrydomain{\clint{0, 1}} = \{0, 1\}.
\end{equation}
Originally, this requirement stems from financial problems
\cite{Pflueger10Spatially}.
For the left boundary,
\eqref{eq:naturalBoundaryConditions} can be satisfied by
modifying the left-most function $\bspl{l,1}{p}$ such that
$\bspl[\modified]{l,1}{p}(x) = 2 - \tfrac{x}{\ms{l}}$ is a linear polynomial
when $x$ is ``near'' the boundary.
As in \cite{Pflueger10Spatially},
we augment $\bspl{l,1}{p}$ with
B-splines $\bspl{l,i}{p}$ of index $i \le 0$ and
use \term{Marsden's identity} to compute the corresponding coefficients.
The identity enables us to explicitly compute the B-spline coefficients
of an arbitrary polynomial of degree $\le p$ by giving an explicit formula
for the B-spline coefficients of shifted monomials $({\cdot} - y)^p$.
Interestingly, the coefficients are polynomials themselves (in $y$):

\vspace*{0pt plus 0.5fill}

\begin{lemma}[Marsden's identity]
  \label{lemma:marsden}
  Let $p \in \natz$ and
  $\knotseq = (\knot{0}, \dotsc, \knot{m+p})$ be a knot sequence.
  Then, for all $x \in \spldomain{\knotseq}{p}$ and $y \in \real$,
  \begin{equation}
    \label{eq:marsden}
    (x - y)^p
    = \sum_{k=0}^{m-1}
    \;\bracket*{(\knot{k+1} - y) \dotsm (\knot{k+p} - y)}\;
    \nonunifbspl{k,\knotseq}{p}(x).
  \end{equation}
\end{lemma}

\begin{proof}
  See \cite{Hoellig13Approximation}.
\end{proof}

\vspace*{0pt plus 1fill}

\paragraph{Modified hierarchical B-splines}

By extending the sum in Marsden's identity to $i \in \integer$ and
comparing the coefficients of $y^p$ and $y^{p-1}$ of both sides,
we obtain representations for the first two monomials:
\begin{equation}
  1
  = \sum_{i \in \integer} \bspl{l,i}{p}(x),\quad
  x
  = \sum_{i \in \integer} \gp{l,i} \bspl{l,i}{p}(x),\quad
  x \in \real.
\end{equation}
\\in \\real.\n\\end{equation}\nThis can be used to derive a definition for $\\bspl[\\modified]{l,i}{p}$:\n\\begin{equation}\n  \\label{eq:modifiedBSplineConstruction}\n  2 - \\frac{x}{\\ms{l}}\n  = 2 \\sum_{i \\in \\integer} \\bspl{l,i}{p}(x)\n  - \\frac{1}{\\ms{l}} \\sum_{i \\in \\integer} \\gp{l,i} \\bspl{l,i}{p}(x)\n  = \\sum_{i \\in \\integer} (2 - i) \\bspl{l,i}{p}(x),\\quad\n  x \\in \\real.\n\\end{equation}\nNote that only the B-splines with $i \\ge 1 - \\tfrac{p+1}{2}$\nare relevant for the unit interval\nas all other B-splines vanish in $\\clint{0, 1}$.\nPfl\u00fcger omits summands with $i > 1$ as he only wants to modify\n$\\bspl{l,1}{p}$ left of its grid point $\\gp{l,1}$ \\cite{Pflueger10Spatially}.\nThe right-most function $\\bspl[\\modified]{l,2^l-1}{p}$ can be derived\nanalogously by mirroring $\\bspl[\\modified]{l,1}{p}$ at $x = \\tfrac{1}{2}$.\n\\pagebreak%\nFor~$l = 1$, again the ``constant one'' function is taken for the definition\nof modified hierarchical B-splines (see \\cref{fig:modifiedBSpline}):\n{%\n  \\setlength{\\abovedisplayskip}{9pt}%\n  \\setlength{\\belowdisplayskip}{9pt}%\n  \\begin{equation}\n    \\label{eq:modifiedBSplineDefinition}\n    \\bspl[\\modified]{l,i}{p}(x)\n    \\ceq\n    \\begin{cases}\n      1,&\n      l = 1,\\quad i = 1,\\\\\n      \\displaystyle\\sum_{i'=1-(p+1)/2}^1\\!\\!\\!\\! (2 - i') \\bspl{l,i'}{p}(x),&\n      l \\ge 2,\\quad i = 1,\\\\\n      \\bspl{l,i}{p}(x),&\n      l \\ge 2,\\quad i \\in \\hiset{l} \\setminus \\{1, 2^l - 1\\},\\\\\n      \\bspl[\\modified]{l,1}{p}(1 - x),&\n      l \\ge 2,\\quad i = 2^l - 1.\n    \\end{cases}\n  \\end{equation}%\n}%\nBy \\thmref{prop:splineSpace},\nthis definition implies that\n$\\bspl{l,1}{p}(x) = 2 - \\tfrac{x}{\\ms{l}}$ ($l \\ge 2$)\nis only valid for $x \\in \\clint{0, \\tfrac{5-p}{2} \\ms{l}}$, i.e.,\nthe second derivative at $x = 0$ vanishes only for $p \\le 5$.\nFor higher degrees, it is non-zero, albeit very small\nin its absolute value.\nTo enforce natural boundary conditions\nfor higher than quintic degrees,\nthe upper bound of $i'$ in the sum in \\eqref{eq:modifiedBSplineDefinition}\nmust be extended to $\\tfrac{p+1}{2}$.\nIn addition, more than just the left-most and the right-most inner\nhierarchical B-spline must be modified for $p \\ge 5$,\nas the size of the supports of $\\bspl{l,i}{p}$ increases\nfor growing $p$.\n\n\\begin{figure}\n  \\subcaptionbox{%\n    Construction of the modified\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}\\vspace{-0.2em}%\n    hierarchical cubic B-spline\n    $\\bspl[\\modified]{l',1}{p}$ (\\emph{dashed,} $l' \\ge 2$)\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}\\vspace{-0.2em}%\n    as a linear combination of neighboring B-splines $\\bspl{l',i'}{p}$.%\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}%\n  }[72mm]{%\n    \\includegraphics{modifiedBSpline_1}%\n  }%\n  \\hfill%\n  \\subcaptionbox{%\n    Modified hierarchical cubic B-splines\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}\\vspace{-0.2em}%\n    $\\bspl[\\modified]{l',i'}{p}$\n    ($l' = 1, \\dotsc, l$, $i' \\in \\hiset{l'}$) and\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}\\vspace{-0.2em}%\n    grid points $\\gp{l',i'}$\n    \\emph{(dots).}%\n    \\vphantom{$\\bspl[\\modified]{l',i'}{p}$}%\n  }[72mm]{%\n    \\includegraphics{hierarchicalBasis_6}%\n  }%\n  \\caption[%\n    Modified hierarchical B-splines%\n  ]{%\n    Construction of modified hierarchical cubic B-splines ($p = 3$) and\n    the resulting basis up to level $l = 3$.%\n  }%\n  
\\label{fig:modifiedBSpline}%\n\\end{figure}\n\n\n\n\\subsection{Non-Uniform Hierarchical B-Splines}\n\\label{sec:314nonUniform}\n\nSparse grid spaces and their corresponding grid point sets,\nas we have defined them in \\cref{chap:20sparseGrids},\nare completely independent of the actual location of the grid points\n$\\gp{l,i}$.\nTherefore, it is possible to use different distributions for the grid points\nthan the standard equidistant choice of $\\gp{l,i} = i \\cdot \\ms{l}$,\nif the basis functions are altered accordingly\n\\cite{Valentin14Hierarchische}.\n\\usenotation{\u00cbcc}\nThe so-called Chebyshev points $\\ccgp{l,i}$ and the\nresulting Clenshaw--Curtis grids and B-splines will serve as an example.\n\n\\paragraph{Chebyshev points and Clenshaw--Curtis grids}\n\nThe \\term{Chebyshev points} $\\ccgp{l,i}$ of level $l \\in \\natz$\nare defined as\n\\begin{equation}\n  \\ccgp{l,i}\n  \\ceq \\frac{1 - \\cos(\\pi \\gp{l,i})}{2},\\quad\n  i = 0, \\dotsc, 2^l,\n\\end{equation}\nsee \\cite{Xu16Chebyshev} for example.\nChebyshev points are the locations of the extrema%\n\\footnote{%\n  The literature sometimes uses the name ``Chebyshev points'' for\n  the roots of $T_{2^l}$, which are closely connected with the extrema.\n  One way to distinguish them is to call the extrema\n  ``Chebyshev--Lobatto points'' and the roots\n  ``Chebyshev--Gauss points'' \\cite{Xu16Chebyshev}.%\n}\nof the Chebyshev polynomials $T_{2^l}$, defined as\n$T_{2^l}\\colon \\clint{0, 1} \\to \\real$,\n$T_{2^l}(x) \\ceq \\cos(2^l \\arccos(2x - 1))$.\nThey are geometrically obtained\nby dividing a semicircle into $2^l$ equally-sized\nsegments and subsequently orthogonally projecting the\nsegment endpoints onto the diameter\n(see \\cref{fig:pointSplittingChebyshev}).\nAnalytically, they are determined by minimizing\nthe interpolation error term in polynomial interpolation.\nThe most practical use of Chebyshev points is in\npolynomial interpolation to avoid Runge's phenomenon and in\nnumerical integration (quadrature), resulting in the\nso-called Clenshaw--Curtis quadrature rules.\n\n\\begin{SCfigure}\n  \\includegraphics{pointSplitting_2}%\n  \\caption[%\n    Decomposition of the set of univariate Clenshaw--Curtis grid points%\n  ]{%\n    The set of Chebyshev points $\\fgset[\\cc]{l}$ of level\n    $l = 4$ \\emph{(top)}\n    decomposes into hierarchical grids of level $l' \\le l$\n    (compare with \\cref{fig:pointSplittingUniform}).\n    The Chebyshev points are constructed as\n    the orthogonal projection of the\n    endpoints of $2^l$ equally-sized segments\n    of a semicircle onto its diameter \\emph{\\textcolor{C8}{(gray, top)}.}%\n  }%\n  \\label{fig:pointSplittingChebyshev}%\n\\end{SCfigure}\n\nIn some settings, it may be beneficial to use full or sparse grids consisting\nof Chebyshev points, which are then called \\term{Clenshaw--Curtis grids,}\ninstead of uniform grids.\nBesides the already mentioned advantages for polynomials and\nquadrature, Clenshaw--Curtis grids can help to reduce interpolation\nerrors in a neighborhood of the boundary of the domain due to the increased\ngrid point density near the boundary\n(at the cost of a lower grid point density in the interior).\nIf we want to use Chebyshev points as grid points for sparse grids,\nwe have to employ an appropriate basis to ensure that interpolation\nis still possible.\n\n\\paragraph{Hierarchical Clenshaw--Curtis B-splines}\n\nThe \\term{hierarchical Clenshaw--Curtis B-spline}\n$\\bspl[\\cc]{l,i}{p}$ of level $l \\in \\natz$ and index\n$i \\in 
\\hiset{l}$ is defined as \\cite{Valentin14Hierarchische}\n\\begin{equation}\n  \\bspl[\\cc]{l,i}{p}\n  \\ceq \\nonunifbspl{i,\\nodalknotseq[\\cc]{l}{p}}{p},\n\\end{equation}\nwhere\n\\begin{subequations}\n  \\label{eq:clenshawCurtisBSplineKnots}\n  \\begin{gather}\n    \\nodalknot[\\cc]{l,k}{p}\n    \\ceq\n    \\begin{cases}\n      (k - \\frac{p+1}{2}) \\cdot \\ccgp{l,1},&\n      k = 0,\\, \\dotsc,\\, \\frac{p-1}{2},\\\\\n      \\ccgp{l,k-(p+1)/2},&\n      k = \\frac{p+1}{2},\\, \\dotsc,\\, 2^l + \\frac{p+1}{2},\\\\\n      1 + (k - 2^l - \\frac{p+1}{2}) \\cdot \\ccgp{l,1},&\n      k = 2^l + \\frac{p+3}{2},\\, \\dotsc,\\, 2^l + p + 1,\n    \\end{cases}\\\\\n    k = 0, \\dotsc, m + p,\\quad\n    m \\ceq 2^l + 1.\n  \\end{gather}\n\\end{subequations}\nFor the construction of the knot sequence $\\nodalknotseq[\\cc]{l}{p}$,\nthe Chebyshev points $\\ccgp{l,i}$\nare equidistantly extended onto the real line $\\real$.\nAs for the standard hierarchical B-spline basis,\nit is now straightforward to define nodal spaces\n$\\nsbspl[\\cc]{l}{p}$\nand hierarchical subspaces $\\hsbspl[\\cc]{l}{p}$ as well as\nsparse grid spaces $\\regsgspace[p,\\cc]{n}{d}$ and\ngrid point sets $\\regsgset[\\cc]{n}{d}$\nusing tensor products of Clenshaw--Curtis B-splines\nand Cartesian products of Chebyshev point sets.\nThe one-dimensional cubic Clenshaw--Curtis basis and a\nsparse Clenshaw--Curtis grid are shown in \\cref{fig:clenshawCurtis}.\n\n\\begin{figure}\n  \\subcaptionbox{%\n    Hierarchical cubic Clenshaw--Curtis B-splines\n    $\\bspl[\\cc]{l',i'}{p}$\n    ($l' \\le l$, $i' \\in \\hiset{l'}$, $p = 3$) and\n    \\vspace{-0.2em}%\n    modified Clenshaw--Curtis B-splines\n     $\\bspl[\\cc,\\modified]{l',i'}{p}$\n    \\emph{(dashed).}%\n  }[72mm]{%\n    \\includegraphics{hierarchicalBasis_7}%\n  }%\n  \\hfill%\n  \\subcaptionbox{%\n    Sparse Clenshaw--Curtis grid $\\regsgset[\\cc]{n}{d}$\n    of level $n = 4$ in $d = 2$ dimensions.%\n  }[72mm]{%\n    \\includegraphics{sg_7}%\n  }%\n  \\caption[%\n    Clenshaw--Curtis B-splines and sparse grids%\n  ]{%\n    Clenshaw--Curtis B-splines and sparse grids.%\n  }%\n  \\label{fig:clenshawCurtis}%\n\\end{figure}\n\nNote that Clenshaw--Curtis B-splines\n$\\bspl[\\cc]{l,i}{p}$ are not translation-invariant,\nin contrast to uniform B-splines $\\bspl{l,i}{p}$.\nAs a result, both implementation effort and computation time for evaluation\nare significantly higher for Clenshaw--Curtis B-splines than\nfor uniform B-splines,\nas the Clenshaw--Curtis B-splines cannot be precomputed.\n\n\\paragraph{Modification and generalizations}\n\nWe define \\term{modified hierarchical Clenshaw--Curtis B-splines}\n$\\bspl[\\cc,\\modified]{l,i}{p}$ using the\nsame method as in \\cref{eq:modifiedBSplineDefinition}.\nHere, the second derivative of $\\bspl[\\cc,\\modified]{l,1}{p}$\ndoes not vanish at $x = 0$ even for degrees $p \\le 5$,\nas the formula \\eqref{eq:modifiedBSplineConstruction}\nderived from \\thmref{lemma:marsden} assumes uniform knots.\nHowever, since most of the B-spline knots in the summation formula\nlie outside $\\clint{0, 1}$ and are thus uniform according\nto \\cref{eq:clenshawCurtisBSplineKnots},\nthe absolute deviation of the second derivative from zero is small.\nAgain, to enforce natural boundary conditions,\nwe can recompute the coefficients\nof the components $\\bspl[\\cc]{l,i'}{p}$\ndynamically with Marsden's identity using the correct Chebyshev knots\nin \\cref{eq:marsden}.\n\nNote that our framework permits arbitrary grid point distributions\n$\\gp{l,i}^\\ast$,\nas 
long as two requirements are met:\nFirst, their number should grow exponentially\n(i.e., there are $2^l + 1$ points $\\gp{l,i}^\\ast$ in each level $l \\in \\natz$),\nand second, they should be nested\n(i.e., \\cref{eq:rewriteGridPoint} holds).\nAppropriate non-uniform B-spline bases can be defined analogously\nto Clenshaw--Curtis B-splines.\n", "meta": {"hexsha": "1e62b75ad46398ead370cc7e9b694ca77dbecf2f", "size": 33741, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/document/31standardBSplines.tex", "max_stars_repo_name": "valentjn/thesis", "max_stars_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-01-15T19:50:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-15T20:16:10.000Z", "max_issues_repo_path": "tex/document/31standardBSplines.tex", "max_issues_repo_name": "valentjn/thesis", "max_issues_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/document/31standardBSplines.tex", "max_forks_repo_name": "valentjn/thesis", "max_forks_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6153846154, "max_line_length": 111, "alphanum_fraction": 0.6878575027, "num_tokens": 11642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.826711791935942, "lm_q1q2_score": 0.5940704579573722}}
{"text": "\\subsection{Noncompactness measures}\\label{subsec:noncompactness_measures}\n\n\\begin{definition}\\label{def:noncompactness_measures}\\mcite[def. 7.1]{Deimling1985}\n  Let \\( (X, \\rho) \\) be a \\hyperref[def:metric_space]{metric space} and let \\( \\mscrB \\) be the family of \\hyperref[def:metric_space/bounded_set]{bounded sets} in \\( X \\). We define the following functions\n  \\begin{thmenum}\n    \\thmitem{def:noncompactness_measures/sets} The \\term{Kuratowski measure of noncompactness},\n    \\begin{balign*}\n       & \\alpha: \\mscrB \\to [0, \\infty)                                                                                                            \\\\\n       & \\alpha(A) \\coloneqq \\inf \\{d > 0 \\colon \\exists U_1, \\ldots, U_n \\subseteq X: \\diam {U_k} < d \\T{and} A \\subseteq \\bigcup_{k=1}^n U_k \\}\n    \\end{balign*}\n\n    \\thmitem{def:noncompactness_measures/balls} The \\term{ball measure of noncompactness},\n    \\begin{balign*}\n       & \\beta: \\mscrB \\to [0, \\infty)                                                                                   \\\\\n       & \\beta(A) \\coloneqq \\inf \\{r > 0 \\colon \\exists x_1, \\ldots, x_2 \\in X: A \\subseteq \\cup_{k=1}^n B(x_k, r) \\}\n    \\end{balign*}\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{example}\\label{ex:noncompactness_measures}\\mcite[exer. 7.3]{Deimling1985}\n  Consider the subsets \\( B_2 \\subseteq B_3 \\subseteq B_1 \\subseteq C([0, 1]) \\), defined by\n  \\begin{balign*}\n    B_1 & \\coloneqq \\left\\{\n    x \\in C([0, 1]) \\colon \\begin{aligned}\n      0 \\leq t \\leq 1 \\implies 0 \\leq x(t) \\leq 1 \\\\\n      x(0) = 0, x(1) = 1                          \\\\\n    \\end{aligned}\n    \\right\\}\n    \\\\\n    B_2 & \\coloneqq \\left\\{\n    x \\in B_1 \\colon \\begin{aligned}\n      0 \\leq t \\leq \\frac 1 2 \\implies 0 \\leq x(t) \\leq \\frac 1 2 \\\\\n      \\frac 1 2 \\leq t \\leq 1 \\implies \\frac 1 2 \\leq x(t) \\leq 1\n    \\end{aligned}\n    \\right\\}\n    \\\\\n    B_3 & \\coloneqq \\left\\{\n    x \\in B_1 \\colon \\begin{aligned}\n      0 \\leq t \\leq \\frac 1 2 \\implies 0 \\leq x(t) \\leq \\frac 2 3 \\\\\n      \\frac 1 2 \\leq t \\leq 1 \\implies \\frac 1 3 \\leq x(t) \\leq 1\n    \\end{aligned}\n    \\right\\}\n  \\end{balign*}\n\n  Then \\( \\alpha(B_1) = 1, \\alpha(B_2) = \\frac 1 2, \\alpha(B_3) = \\frac 1 3 \\) and \\( \\beta(B_1) = \\beta(B_2) = \\beta(B_3) = \\frac 1 2 \\).\n\\end{example}\n\\begin{proof}\n  Since the distance between any two functions from \\( B_1 \\) is at most 1, we have that \\( \\diam B_1 = 1 \\) and \\( \\alpha(B_1) \\leq 1 \\).\n\n  Fix \\( \\varepsilon > 0 \\). 
For any function \\( f \\in B_1 \\), continuity of \\( f \\) gives us a radius \\( \\delta_f > 0 \\) such that\n  \\begin{equation*}\n    x < 2 \\delta_f \\implies f(x) < \\varepsilon.\n  \\end{equation*}\n\n  \\begin{figure}\n    \\centering\n    \\text{\\todo{Add diagram}}\\iffalse\\begin{mplibcode}\n      input metapost/plotting;\n      u := 4cm;\n      e := 0.16; % epsilon\n      d := sqrt(e); % delta\n\n      vardef f(expr x) =\n      x ** 2\n      enddef;\n\n      vardef tf(expr x) =\n      if x < d / 2:\n      (2 / d) * x\n      elseif x < d:\n      f(d) + (1 - f(d)) * ((2 / d) * (d - x))\n      else:\n      f(x)\n      fi\n      enddef;\n\n      beginfig(1)\n      drawarrow (0, 0) -- (u,  0);\n      drawarrow (0, 0) -- (0, u);\n\n      drawoptions(dashed withdots scaled 0.3);\n      draw (0, e) scaled u -- (1, e) scaled u;\n      label.lft(\"$\\varepsilon$\", (0, e) scaled u);\n\n      draw (d, 0) scaled u -- (d, e) scaled u;\n      label.bot(\"$\\delta$\", (d, 0) scaled u);\n      drawoptions();\n\n      draw path_of_plot(f, 0, 1, 0.01, u);\n      label.rt(\"$f$\", (0.75, 0.5) scaled u);\n\n      drawoptions(dashed evenly);\n      draw path_of_plot(tf, 0, d, 0.01, u);\n      drawoptions();\n      label.rt(\"$T_\\varepsilon(f)$\", (0.3, 0.7) scaled u);\n      endfig;\n    \\end{mplibcode}\\fi\n    \\caption{The operator \\( T_\\varepsilon \\) adds \\enquote{spikes} to functions.}\\label{ex:noncompactness_measures/spike_plot}\n  \\end{figure}\n\n  Define\n  \\begin{balign*}\n    T_\\varepsilon(f)(x) \\coloneqq \\begin{cases}\n      \\frac x {\\delta_f},                                       & 0 \\leq x < \\delta_f          \\\\\n      f(\\delta_f) + [1 - f(\\delta_f)] (2 - \\frac x {\\delta_f}), & \\delta_f \\leq x < 2 \\delta_f \\\\\n      f(x),                                                     & x \\geq 2 \\delta_f,\n    \\end{cases}\n  \\end{balign*}\n  so that\n  \\begin{balign*}\n    \\norm{T_\\varepsilon(f) - f}\n    \\geq\n    T_\\varepsilon(f) (\\delta_f) - f(\\delta_f)\n    =\n    1 - f(\\delta_f)\n    >\n    1 - \\varepsilon.\n  \\end{balign*}\n\n  Additionally, because \\( \\delta_{T_\\varepsilon(f)} < \\delta_f \\), we have that \\( f(\\delta_{T_\\varepsilon(f)}) < \\varepsilon \\) and\n  \\begin{balign*}\n    \\norm{T_\\varepsilon(T_\\varepsilon(f)) - f}\n    \\geq\n    T_\\varepsilon(T_\\varepsilon(f)) (\\delta_{T_\\varepsilon(f)}) - f(\\delta_{T_\\varepsilon(f)})\n    =\n    1 - f(\\delta_{T_\\varepsilon(f)})\n    >\n    1 - \\varepsilon.\n  \\end{balign*}\n\n  Thus, proceeding by induction, we see that for any \\( m = 1, 2, \\ldots \\)\n  \\begin{equation*}\n    \\norm{T_\\varepsilon^m(f) - f} > 1 - \\varepsilon,\n  \\end{equation*}\n  where \\( T_\\varepsilon^m \\) denotes repeated application of \\( T_\\varepsilon \\).\n\n  Consider the sequence\n  \\begin{equation*}\n    \\{ T_\\varepsilon^k(f) \\}_{k=0}^\\infty = \\{ f, T_\\varepsilon(f), T_\\varepsilon(T_\\varepsilon(f)), \\ldots \\}.\n  \\end{equation*}\n\n  We can easily see that the distance between any two elements of the sequence, say \\( T_\\varepsilon^k(f) \\) and \\( T_\\varepsilon^{k+m}(f) \\), is strictly greater that \\( 1 - \\varepsilon \\), i.e.\n  \\begin{balign*}\n    \\norm{T_\\varepsilon^k(f) - T_\\varepsilon^{k+m}(f)}\n    =\n    \\norm{T_\\varepsilon^k(f) - T_\\varepsilon^m(T_\\varepsilon^k(f))}\n    >\n    1 - \\varepsilon.\n  \\end{balign*}\n\n  Hence, \\( B_1 \\) cannot be covered by a finite \\( (1-\\varepsilon) \\)-net and \\( \\alpha(B_1) \\geq 1 - \\varepsilon \\). 
Since \\( \\varepsilon > 0 \\) can be made arbitrarily small, this implies that \\( \\alpha(B_1) \\geq 1 \\) and, because we already have the reverse inequality, \\( \\alpha(B_1) = 1 \\).\n\n  In the set \\( B_2 \\), the maximum distance between two functions is \\( \\frac 1 2 \\), thus \\( \\diam(B_2) = \\frac 1 2 \\) and \\( \\alpha(B_2) \\leq \\frac 1 2 \\). We can then define an operator similar to \\( T_\\varepsilon \\) that creates \\enquote{spikes} of height \\( \\frac 1 2 \\) to prove the reverse inequality, obtaining\n  \\begin{equation*}\n    \\alpha(B_2) = \\frac 1 2.\n  \\end{equation*}\n\n  Finally, the set \\( B_3 \\) has diameter \\( \\frac 2 3 \\) and hence \\( \\alpha(B_3) = \\frac 2 3 \\).\n\n  The ball measure for \\( B_1 \\) satisfies the inequalities\n  \\begin{equation*}\n    \\frac 1 2 \\leq \\beta(B_1) \\leq 1.\n  \\end{equation*}\n\n  Additionally, \\( B_1 \\) is strictly contained in the ball centered in the constant function \\( \\frac 1 2 \\) with radius \\( \\frac 1 2 \\), which implies that \\( \\beta(B_1) \\leq \\frac 1 2 \\), hence \\( \\beta(B_1) = \\frac 1 2 \\).\n\n  For \\( B_2 \\) we have\n  \\begin{equation*}\n    \\frac 1 4 \\leq \\beta(B_2) \\leq \\frac 1 2.\n  \\end{equation*}\n\n  Assume that for some \\( \\varepsilon > 0 \\) the set \\( B_2 \\) can be covered by a finite set of balls with centers \\( \\{ f_1, \\ldots, f_n \\} \\subsetneq C([0, 1]) \\) and radius \\( \\frac 1 2 - \\varepsilon \\).\n\n  Because of continuity, we can find a radius \\( \\delta > 0 \\) such that for all \\( f_k, k = 1, \\ldots, n \\) we have\n  \\begin{equation*}\n    x \\in \\left[\\tfrac {1 - \\delta} 2, \\tfrac {1 + \\delta} 2 \\right] \\implies \\abs{f_k(x) - f_k(\\tfrac 1 2)} < \\varepsilon.\n  \\end{equation*}\n\n  Consider the function\n  \\begin{balign*}\n    g(x) \\coloneqq \\begin{cases}\n      0,                                & 0 \\leq x < \\frac {1 - \\delta} 2,                       \\\\\n      \\frac{2x + \\delta - 1} {2\\delta}, & \\frac {1 - \\delta} 2 \\leq x \\leq \\frac {1 + \\delta} 2, \\\\\n      1,                                & \\frac {1 + \\delta} 2 < x \\leq 1.\n    \\end{cases}\n  \\end{balign*}\n\n  \\begin{figure}\n    \\centering\n    \\text{\\todo{Add diagram}}\\iffalse\\begin{mplibcode}\n      input metapost/plotting;\n\n      u := 6cm;\n      e := 0.1; % epsilon\n\n      vardef f_k(expr x) =\n      1 - 1 / (1 + exp(2 - 3 * x))\n      enddef;\n\n      vardef g(expr x) =\n      if x < (1 - d) / 2:\n      0\n      elseif x < (1 + d) / 2:\n      (2x + d - 1) / 2d\n      else:\n      1\n      fi\n      enddef;\n\n      f_minus := f_k(0.5) - e;\n      f_plus := f_k(0.5) + e;\n      d := (ln(1 / (1 - f_plus) - 1) - ln(1 / (1 - f_minus) - 1)) / 3; % delta\n      d_minus := (1 - d) / 2;\n      d_plus := (1 + d) / 2;\n\n      beginfig(1)\n      drawarrow (-0.1, 0) -- (u,  0);\n      drawarrow (0, 0) -- (0, u);\n\n      drawoptions(dashed withdots scaled 0.3);\n      draw (0, f_minus) scaled u -- (1, f_minus) scaled u;\n      label.lft(\"$f_k(\\frac 1 2) - \\varepsilon$\", (0, f_minus) scaled u);\n\n      draw (0, f_plus) scaled u -- (1, f_plus) scaled u;\n      label.lft(\"$f_k(\\frac 1 2) + \\varepsilon$\", (0, f_plus) scaled u);\n\n      draw (0.5, 0) scaled u -- (0.5, 1) scaled u;\n      label.bot(\"$\\frac 1 2$\", (0.5, 0) scaled u);\n\n      draw (d_minus, 0) scaled u -- (d_minus, 1) scaled u;\n      label.bot(\"$\\frac {1 - \\delta} 2$\", (d_minus, 0) scaled u);\n\n      draw (d_plus, 0) scaled u -- (d_plus, 1) scaled u;\n      
label.bot(\"$\\frac {1 + \\delta} 2$\", (d_plus, 0) scaled u);\n      drawoptions();\n\n      drawoptions(dashed evenly);\n      draw path_of_plot(f_k, -0.1, 1, 0.01, u);\n      label.rt(\"$f_k$\", (1, f_k(1)) scaled u);\n      drawoptions();\n\n      draw path_of_plot(g, 0, 1, 0.005, u);\n      label.lft(\"$g$\", (d_minus, 0.1) scaled u);\n      endfig;\n    \\end{mplibcode}\\fi\n    \\caption{The function \\( g \\) always has points that are far enough from all \\( f_k, k = 1, \\ldots, n \\).}\\label{ex:noncompactness_measures/sigmoid_plot}\n  \\end{figure}\n\n  If \\( f_k(\\tfrac 1 2) \\geq \\frac 1 2 \\), then \\( f_k(\\tfrac {1 - \\delta} 2) > \\tfrac 1 2 - \\varepsilon \\) and\n  \\begin{equation*}\n    \\norm{f_k - g} \\geq f_k(\\tfrac {1 - \\delta} 2) - g(\\tfrac {1 - \\delta} 2) = f_k(\\tfrac {1 - \\delta} 2) > \\tfrac 1 2 - \\varepsilon.\n  \\end{equation*}\n\n  Analogously, if \\( f_k(\\tfrac 1 2) < \\frac 1 2 \\), then \\( f_k(\\tfrac {1 + \\delta} 2) < \\tfrac 1 2 + \\varepsilon \\) and\n  \\begin{equation*}\n    \\norm{g - f_k} \\geq g(\\tfrac {1 + \\delta} 2) - f_k(\\tfrac {1 + \\delta} 2) = 1 - f_k(\\tfrac {1 + \\delta} 2) > \\tfrac 1 2 - \\varepsilon.\n  \\end{equation*}\n\n  Thus, for every \\( k = 1, \\ldots, n \\) we have\n  \\begin{equation*}\n    \\norm{g - f_k} > \\frac 1 2 - \\varepsilon,\n  \\end{equation*}\n  i.e. \\( g \\) in not contained in a ball of radius \\( \\frac 1 2 - \\varepsilon \\) around any of the centers \\( f_1, \\ldots, f_n \\).\n\n  Hence, \\( \\beta(B_2) \\geq \\frac 1 2 \\), which implies \\( \\beta(B_2) = \\frac 1 2 \\). Because of the inclusion \\( B_2 \\subsetneq B_3 \\subsetneq B_1 \\), we have\n  \\begin{equation*}\n    \\frac 1 2 = \\beta(B_2) \\leq \\beta(B_3) \\leq \\beta(B_1) = \\frac 1 2,\n  \\end{equation*}\n  hence \\( \\beta(B_3) = \\frac 1 2 \\).\n\\end{proof}\n\n\\begin{theorem}[Kuratowski's noncompactness lemma]\\label{thm:noncompact_kuratowskis_lemma}\\mcite[exer. 7.4]{Deimling1985}\n  Let \\( X \\) be a Banach space and \\( \\{ A_n \\}_n \\) be a decreasing sequence of nonempty closed subsets such that \\( \\alpha(A_n) \\to 0 \\). Then \\( A \\coloneqq \\bigcap_n A_n \\) is nonempty and compact.\n\\end{theorem}\n\\begin{proof}\n  The set \\( A \\) is compact because it is closed as the intersection of closed sets and \\( \\alpha(A) \\leq \\alpha(A_n) \\to 0 \\), hence \\( \\alpha(A) = 0 \\).\n\n  It remains to show that \\( A \\) is nonempty.\n  Choose any sequence \\( \\{ x_n \\}_n \\) where \\( x_n \\in A_n \\). Since any finite set is compact, we have that for any \\( k \\geq 1 \\)\n  \\begin{balign*}\n    \\alpha(\\{ x_n \\}_{n \\geq 1})\n    =\n    \\max\\{ \\alpha(\\{ x_n \\}_{n < k}), \\alpha(\\{ x_n \\}_{n \\geq k}) \\}\n    =\n    \\alpha(\\{ x_n \\}_{n \\geq k})\n    \\leq\n    \\alpha(A_k) \\to 0,\n  \\end{balign*}\n  hence the set \\( \\{ x_n \\colon n \\geq 1 \\} \\) is compact and thus sequentially compact. We can choose a convergent subsequence \\( \\{ x_{n_k} \\}_k \\) of \\( \\{ x_n \\}_n \\) whose limit lies in every \\( A_n \\) (since they are closed) and, consequently, in their intersection \\( A \\). 
So \\( A \\) is nonempty.\n\\end{proof}\n", "meta": {"hexsha": "b82435a27b2f6b186a0fae6692a881abd428ad0d", "size": 11968, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/noncompactness_measures.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/noncompactness_measures.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/noncompactness_measures.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.1271477663, "max_line_length": 319, "alphanum_fraction": 0.5560661765, "num_tokens": 4558, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.8267117876664789, "lm_q1q2_score": 0.5940704449249679}}
{"text": "% As a sample LaTeX document, this is an actual assignment \n% written in LaTeX with my template for MATH 417, \n% Honors Real Variables (Measure Theory) at University of Alberta.\n% This source has been released with permission with the instructor,\n% Professor John C. Bowman as the solutions are available at\n% https://www.math.ualberta.ca/~bowman/m417/assign3.pdf\n\n% ------- END DISCLAIMER ------------- \n\n% https://www.math.ualberta.ca/~bowman/m417/assign3.pdf\n\\documentclass{article}\n\\usepackage{import}\n\n% Default settings, which adds XY matrix, 1.5 inch margins,\n% slightly more space between paragraphs and the \\SCSHAPE current template.\n\\newcommand{\\DEFAULTSETTINGS}{}\n\n% Change this directory if you put your template files in a different directory.\n\\newcommand{\\basedir}{../../}\n\n% If you are using Crowdmark or require new page for each problem, uncomment this.\n% \\newcommand{\\CROWDMARK}{}\n\n\\import{\\basedir/base/}{mathPreamble}\n\n\\newcommand{\\assignmentnum}{3}\n\\newcommand{\\coursename}{MATH 417}\n\n% Your name, email, Student ID.\n% Please don't submit assignments under the name\n% \"Jamie van Lindenberg\" :(\n\\newcommand{\\studentname}{Jamie van Lindenberg}\n\\newcommand{\\studentid}{1234567}\n\\newcommand{\\studentemail}{rassamee@ualberta.ca}\n\\newcommand{\\coursesec}{Q1}\n\n\n\\begin{document}\n\n\\integratedtitle\n\n\\problemsec\n\nWhat is the wrong in the following argument that \\textit{The set $(0,1)$ is countable}?\nExpress the open set $(0,1)$ as a union of open intervals $I_x$ centered on each real $x\\in (0,1)$:\n\\[\n    \\bigcup_{x\\in (0,1)} I_x=(0,1).\n\\]\nEach interval $I_x$ contains a rational which we can use as a label for that interval.\nSince there are only countably many rational numbers, there are countably many labels.\nHence the above union actually contains only a countable number of intervals, each centered at a real number.\nThere are therefore only a countable number of real numbers.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\problemsec \n\nLet $S\\subset \\real^d$ such that\n\\[\n    m^*(E\\cap S) + m^*(E\\cap S^c) = m(E).\n\\]\nfor all elementary set $E$.\nShow that $S$ is Lebesgue measurable.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\problemsec\n\nSuppose $S_n\\subset \\real^d$ for $n\\in \\naturals$ are Lebesgue measurable sets that converge pointwise to a set $S$.\n\n\\subsection{Part (a)}\n\nShow that $S$ is Lebesgue measurable.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\subsection{Part (b)}\n\\textit{Dominated Convergence}.\nSuppose that $S_n$ are all contained inside another Lebesgue measurable set $F$ of finite measure.\nShow that $m(S_n)\\to m(S)$ as $n\\to \\infty$.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\subsection{Part (c)}\n\nGive a counterexample to show that the dominated convergence theorem fails if the $S_n$ are not contained in a set of finite measure, even if we assume that $m(S_n)$ are all uniformly bounded.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\problemsec\n\nShow that an unsigned function $f:\\real^d\\to[0,\\infty]$ is a simple function if and only if it is measurable and takes on finitely many values.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\problemsec\n\nLet $d,d'\\in \\naturals$.\n\n\\subsection{Part (a)}\n\nIf $A\\subset \\real^d$ and $B\\subset \\real^d$, show that\n\\[\n    m^{d+d'*}(A\\times B)\\le m^{d*}(A)m^{d'*}(B).\n\\]\nwhere $m^{d*}$ 
denotes the $d$-dimensional Outer Lebesgue Measure.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\subsection{Part (b)}\nLet $A\\subset \\real^d$ and $B\\subset \\real^{d'}$ Lebesgue measurable sets (not necessarily of finite measure).\nShow that $A\\times B$ is Lebesgue Measurable with\n\\[\n    m(A\\times B)=m(A)m(B).\n\\]\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\n\\problemsec\n\nIf $f:\\real^d\\to [0,\\infty]$ is Lebesgue measurable, show that the Lebesgue measure of\n\\[\n    A_f \\defeq \\set{(x,y)\\in \\real^{d+1} : 0\\le y \\le f(x)}\n\\]\nexists and equals $\\int_{\\real^d} f$.\n\n\\begin{solution}\n    Solution removed.\n\\end{solution}\n\\end{document}", "meta": {"hexsha": "de96321b76d09e842e3534646afd8805eb6b345c", "size": 4000, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sample/math417hw3/m417hw3.tex", "max_stars_repo_name": "supakorn-ras/latex-templates", "max_stars_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sample/math417hw3/m417hw3.tex", "max_issues_repo_name": "supakorn-ras/latex-templates", "max_issues_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sample/math417hw3/m417hw3.tex", "max_forks_repo_name": "supakorn-ras/latex-templates", "max_forks_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.397260274, "max_line_length": 192, "alphanum_fraction": 0.7195, "num_tokens": 1157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.837619961306541, "lm_q1q2_score": 0.5938885897069583}}
{"text": "\\chapter{Motivation for Numerical Methods}\nMany partial differential equations do not have exact closed-form solutions for all choices of initial conditions\\footnote{An example is the Navier-Stokes equation which is thought to describe the motion of an incompressible viscous fluid.}. Irregular boundary conditions can also make finding an analytic solution difficult for many partial differentail equation. In these cases, finding an approximate solution with a numerical method can be helpful either for physical purposes, engineering purposes or for mathematical investigations of the behavior of solutions to these partial differential equations.\nThere are also cases where the partial differential equations have explicitly known exact solutions, but the formulae used to express the exact solutions require a large number of computations to evaluate them\\footnote{An example is the sine-Gordon equation.}. In this case we are interested in making numerical approximations that result in accurate and cost-efficient solutions.\n\nNumerical methods allows us to use a computer to calculate approximate solutions to partial differential equations. The accuracy of the solution will depend on which numerical method is used and usually more accurate numerical methods tend to be more complicated than less accurate methods. We will therefore start with some simple numerical methods to familiarize ourselves with how numerical methods work.\nWe encourage the reader to take a full course on the numerical solution of partial differential equations as well as reading the references to find out about numerical techniques not discussed here.\n\n", "meta": {"hexsha": "ed92efe105ae42d295b0561cd7132e73c614714d", "size": 1641, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MotivationForNumericalMethods/MotivationForNumericalMethods.tex", "max_stars_repo_name": "bcloutier/PSNM", "max_stars_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_stars_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_stars_count": 40, "max_stars_repo_stars_event_min_datetime": "2015-01-05T14:22:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T23:51:25.000Z", "max_issues_repo_path": "MotivationForNumericalMethods/MotivationForNumericalMethods.tex", "max_issues_repo_name": "bcloutier/PSNM", "max_issues_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_issues_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-12-29T12:35:42.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-01T07:31:32.000Z", "max_forks_repo_path": "MotivationForNumericalMethods/MotivationForNumericalMethods.tex", "max_forks_repo_name": "bcloutier/PSNM", "max_forks_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_forks_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2015-01-05T14:23:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-09T06:55:01.000Z", "avg_line_length": 205.125, "max_line_length": 607, "alphanum_fraction": 0.8397318708, "num_tokens": 280, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8376199653600371, "lm_q1q2_score": 0.5938885822806734}}
{"text": "\\section{Introduction}\n\\label{sec:intro}\n\nWe consider a dataset that reports the new daily COVID-19 cases for each country, worldwide.\nOur goal is to compute:\n\\begin{enumerate}\n    \\item The seven-days moving average of new cases, for each day and for each country\n    \\item The percentage increase of the seven-days average w.r.t.\\ the day before, for each day and for each country\n    \\item The top-10 countries with the highest percentage increase, for each day\n\\end{enumerate}\n\n\\noindent\nIf data is provided with a coarser grain (e.g.\\ weekly), the increment is spread evenly between the week when computing the average.\nWe implemented the solution with the \\emph{Apache Spark SQL API}, since it provides:\n\\begin{itemize}\n    \\item an easy way to load CSV datasets as tables\n    \\item built-in windowing functions to aggregate data based on time\n    \\item efficient and transparent partitioning of the dataset between the worker nodes\n    \\item a concise and easy-to-read API, that reduces the amount of code to write and makes it way clearer that what we would have come up with using MPI\n\\end{itemize}\n\n\\noindent\nWe also considered using the Structured Streaming API, but ultimately we discarded it. This is our rationale:\n\\begin{itemize}\n    \\item The \\emph{Structured Streaming API} is suitable for scenarios where the data has to be analyzed online, for example if we were to produce our reports during the pandemic with data coming in each day. In this case we could also use watermarking to wait for late data. However, this approach makes sense only if the data is actually generated periodically, otherwise the effort spent to simulate a periodic data source from file is totally unjustified.\n    \\item The \\emph{SQL API} is more indicated for offline analysis, where the whole dataset is already available before the computation. This is exactly our case, so we chose this approach.\n\\end{itemize}", "meta": {"hexsha": "25efd1e04a1e37c9f8d35683b104707f378d823d", "size": 1910, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections_report/introduction.tex", "max_stars_repo_name": "fuljo/mask-the-week", "max_stars_repo_head_hexsha": "8e677529caffd4678c434e55c805c6ddb5df4c51", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/sections_report/introduction.tex", "max_issues_repo_name": "fuljo/mask-the-week", "max_issues_repo_head_hexsha": "8e677529caffd4678c434e55c805c6ddb5df4c51", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections_report/introduction.tex", "max_forks_repo_name": "fuljo/mask-the-week", "max_forks_repo_head_hexsha": "8e677529caffd4678c434e55c805c6ddb5df4c51", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.7407407407, "max_line_length": 460, "alphanum_fraction": 0.7811518325, "num_tokens": 423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5938783466077833}}
{"text": "\\chapter{Quantum Computing and Type Theory }\nProgramming languages are often recognized as Procedural and Descriptive corresponding to performing a given n-step logic in a program. Quantum Programs are verified throughly using Formal methods. A qubits working semantics are completely writtten as mathematical abstractions and then verified with some given initial conditions. Doing QC\\footnote{Quantum computing} things this way is helpful as we are still not capable enough to have a practical quantum computer to try out things on. As in case of \\textbf{Stack Verification} of classical computing systems, complete verification for quantum computers is also done. Some of the most recent works include \\textit{Steve Zdancewic's} paper\\cite{qwire} that discusses the \\textbf{QWire} as a core language for Quantum circuits, an analogue to VHDL and HDL in classical computing. He has verified its correctness too in CoQ\\cite{COQ}. He describes Phantom Types as the type system he used for Qwire.\n\n\\textbf{EpiQC}\\cite{epiqc} is a multi-institutional paradigm foucsed on reducing the current gap between existing theoretical algorithms and practical quantum computing architectures. They have a very nice video series\\cite{epiqc_video} that nicely explains the application of program verification in Quantum computing. They refer to Technology-Aware Programming Environment as an upgrade to traditional optimization and abstraction techniques to one that is more practical to use. Major focus is upon developing better tools for robustness and scalability.\n\n\n\\begin{itemize}\n\\item{\n\t\\textbf{Procedural Implementations} include verifying step-by-step correctness of program semantics. \\textbf{QCL} as described earlier is a programming language that has Procedural behaviour inherited from classical languages. It is now being verifed .   \n}\n\\item{\n\t\\textbf{Descriptive Implementations} involves verifying the program logic only. For example, a sum function should return summation of input parameters correctly. The procedural verification aspect of same would mean to check the register allocation and machine cycle logic verification. \n}\n\\end{itemize}\n\n\\section{Phantom Types}\nPhantom Types \\cite{phantom_types} is the  lightweight dependent type system used by Qwire. Most of the mathematical work involves matrices and using phantom types have increased the functional expressivity of the same. 
The complete implementation can be found here.\n", "meta": {"hexsha": "0bb748739dc9104f92764cf6e3f20d2b4d7d197f", "size": 2444, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "files/interplay-qc-tt.tex", "max_stars_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_stars_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/interplay-qc-tt.tex", "max_issues_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_issues_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/interplay-qc-tt.tex", "max_forks_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_forks_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 135.7777777778, "max_line_length": 950, "alphanum_fraction": 0.8240589198, "num_tokens": 501, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321936479701, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5938783429720277}}
{"text": "\\chapter{Analytical Forces Induced by Dislocations on Linear Rectangular Surface Elements}\n\\label{c:lin_rect}\n%\n\\section{Forces Exerted by a Dislocation Line Segment on Linear Rectangular Surface Elements}\n\\label{s:f_lin_rect}\n%\nCoupling discrete dislocation dynamics to finite element method requires the traction field $ \\tns{\\sigma^{\\infty}}(\\vec{x}) \\cdot \\vec{n}(\\vec{x}) $ to be distributed among the set of relevant discrete nodes of a finite element or boundary element model. This is usually achieved as follows,\n\\begin{align}\\label{eq:ddd_fem_force}\n  \\vec{F}^{(m)} = \\int_{S^{e}} \\left[\\tns{\\sigma^{\\infty}}(\\vec{x}) \\cdot \\vec{n}(\\vec{x})\\right] N_{m}(\\vec{x})\\; \\rvar{d}S_{e}\\,,\n\\end{align}\nwhere $ \\rvar{d}S_{e} $ is the infinitesimal surface element with surface area $ S_{e} $. $ N_{m}(\\vec{x}) $ are so-called shape functions (interpolation functions) that distribute  the traction field among the surface element's nodes.\n\nThe problematic singularity associated with the classical Volterra dislocation is avoided by using the non-singular formulation of \\citet{a_non-singular_continuum_theory_of_dislocations}. The stress field of a dislocation in a homogenous infinite linear elastic domain can be calculated as a contour integral along the loop \\cite{eigenstrain},\n\\begin{align}\\label{eq:field_stress}\n  \\sigma_{ij}^{\\infty}(\\vec{x}) = C_{ijkl} \\oint \\epsilon_{lnh} C_{pqmn} \\dfrac{\\partial G_{kp}(\\vec{x} - \\vec{x'})}{\\partial x_{q}} b_{m} \\rvar{d}x'_{h}\\,,\n\\end{align}\nwhere $ C_{ijkl} $ is the elastic stiffness matrix, $ \\epsilon_{lnh} $ the permutation operator, $\\vec{b}$ the Burgers vector, $ \\vec{x'} $ the coordinate that spans the dislocation, and $ G_{kp}(\\vec{x} - \\vec{x'}) $ is Green's function of elasticity \\cite{eigenstrain}. $ G_{kp}(\\vec{x} - \\vec{x'}) $ is defined as the displacement component in the $ x_{k} $ direction at point $ \\vec{x} $ due to a force applied in the $ x_{p} $ direction at point $ \\vec{x'} $. The traditional singularity comes from taking the Burgers vector distribution as a delta function. \\citet{a_non-singular_continuum_theory_of_dislocations} proposed an alternative definition of $ G_{kp}(\\vec{x} - \\vec{x'}) $ which has a wider isotropic spread mainly localised in a radius $ a $ around the dislocation core,\n\\begin{align}\\label{eq:elastic_green_func}\n  G_{ij}(\\vec{x} - \\vec{x'}) = \\dfrac{1}{8\\pi \\mu}\\left[ \\delta_{ij} \\partial_{pp} - \\dfrac{1}{2(1-\\nu)} \\partial_{ij} \\right] R_{a}\\,,\n\\end{align}\nwhere $ \\mu $, $ \\nu $ are the isotropic shear modulus and Poisson's ratio respectively, $ \\delta_{ij} $ is the Kronecker Delta, $ \\partial_{x_{1} \\ldots\\, x_{n}} \\equiv \\dfrac{\\partial^{n}}{\\partial x_{1} \\ldots\\, \\partial x_{n}}$. 
$ R_{a} $ is the defined as,\n\\begin{subequations}\n  \\begin{align}\\label{eq:ra}\n    R_{a}(\\vec{x}) & = R(\\vec{x}) * w(\\vec{x}) = \\int R(\\vec{x} - \\vec{x'}) w(\\vec{x'}) \\rvar{d}^{3}\\vec{x'}\\nn\n                   & = \\sqrt{R^{2} + a^{2}}                                                                     \\\\\n    w(\\vec{x})     & = \\dfrac{15 a^{4}}{8\\pi (R^{2} + a^{2})^{7/2}}\\,,\n  \\end{align}\n\\end{subequations}\nwhere $ w(\\vec{x}) $ is the isotropic Burgers vector distribution derived in the appendix of \\cite{a_non-singular_continuum_theory_of_dislocations}, $ \\vec{x} = (x, y, z) $ and $ R(\\vec{x}) = \\sqrt{x^{2} + y^{2} + z^{2}} $.\n\nFor two dislocation nodes (1, 2) connected by straight line segments \\cref{eq:field_stress} becomes,\n\\begin{align}\\label{eq:two_node_stress_field}\n  \\tns{\\sigma}^{(12)}(\\vec{x}) =\n  \\begin{split}\n    &-\\dfrac{\\mu}{8\\pi} \\int\\limits_{\\vec{x_{1}}}^{\\vec{x_{2}}} \\left( \\dfrac{2}{R_{a}^{3}} + \\dfrac{3 a^{2}}{R_{a}^{5}} \\right) \\left[ \\left( \\vec{R} \\times \\vec{b} \\right) \\otimes \\rvar{d}\\vec{x'} + \\rvar{d}\\vec{x'} \\otimes \\left( \\vec{R} \\times \\vec{b} \\right) \\right]\\\\\n    %\n    &+ \\dfrac{\\mu}{4\\pi (1 - \\nu)} \\int\\limits_{\\vec{x_{1}}}^{\\vec{x_{2}}} \\left( \\dfrac{1}{R_{a}^{3}} + \\dfrac{3 a^{2}}{R_{a}^{5}} \\right) \\left[ \\left(\\vec{R} \\times \\vec{b}\\right)\\cdot \\rvar{d}\\vec{x'} \\right] \\mtx{I}_{2}\\\\\n    %\n    &- \\dfrac{\\mu}{4\\pi (1 - \\nu)} \\int\\limits_{\\vec{x_{1}}}^{\\vec{x_{2}}} \\dfrac{1}{R_{a}^{3}} \\left[ \\left(\\vec{b} \\times \\rvar{d}\\vec{x'}\\right) \\otimes \\vec{R} + \\vec{R} \\otimes \\left(\\vec{b} \\times \\rvar{d}\\vec{x'}\\right) \\right]\\\\\n    &+ \\dfrac{\\mu}{4\\pi (1 - \\nu)} \\int\\limits_{\\vec{x_{1}}}^{\\vec{x_{2}}} \\dfrac{3}{R_{a}^{5}} \\left[ \\left( \\vec{R} \\times \\vec{b} \\right) \\cdot \\rvar{d}\\vec{x'} \\right]\\vec{R} \\otimes \\vec{R}\n  \\end{split}\\,.\n\\end{align}\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{force_calc_linear_rectangle.pdf}\n  \\caption[Diagram of the analytical force calculation on linear rectangular surface elements.]{Diagram of the line integral method used to find analytical expressions for the forces exerted by dislocation lines on linear rectangular surface elements \\cite{analytic_tractions}.\n    \\textit{1}) For any given point $ \\vec{x} $ on the surface element and any given point $ \\vec{x'}$ on the dislocation line segment, define distance $ R_{a} $.\n    \\textit{2}) Integrate from $ x_{1} \\to x_{2} $ along line direction $ \\vec{t} $.\n    \\textit{3}) Integrate from $ r_{1} \\to r_{2} $ along vector $ \\vec{p} $.\n    \\textit{4}) Integrate from $ s_{1} \\to s_{2} $ along vector $ \\vec{q} $.}\n  \\label{f:flrs}\n\\end{figure}\n\\subsection{Resolving Singularities when Dislocation Line Segments are Parallel to Surface Elements}\n\\label{ss:par_dln_se}\n%\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{ftot_rotation_lin_rect.pdf}\n  \\caption[Avoiding singularities by rotating dislocation line segments.]{Effects of rotating a single dislocation line segment on the forces exerted by it on a linear rectangular surface element. The specific values of this function are not known \\emph{a priori}, all that is known is that it must be periodic ($ T = 2\\pi$) and have finite maximum and minimum values. 
The singularity is avoided by perturbing the angle $ \\theta = 0 \\to \\theta = \\pm \\epsilon\\,, \\epsilon \\gtrsim 0 $.}\n  \\label{f:rflrs}\n\\end{figure}\n%\n\\savearabiccounter", "meta": {"hexsha": "dacee3538e57a15f63137c78c2e0aee5fa96f1a1", "size": 6071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/shadowrealm/force_lin_rect_se.tex", "max_stars_repo_name": "dcelisgarza/DPhil_Thesis", "max_stars_repo_head_hexsha": "bef9f1b3155cff1ba52bc038b5493c2db0089b7e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/shadowrealm/force_lin_rect_se.tex", "max_issues_repo_name": "dcelisgarza/DPhil_Thesis", "max_issues_repo_head_hexsha": "bef9f1b3155cff1ba52bc038b5493c2db0089b7e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-08-30T16:29:24.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-30T16:36:15.000Z", "max_forks_repo_path": "src/shadowrealm/force_lin_rect_se.tex", "max_forks_repo_name": "dcelisgarza/DPhil_Thesis", "max_forks_repo_head_hexsha": "bef9f1b3155cff1ba52bc038b5493c2db0089b7e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.3650793651, "max_line_length": 787, "alphanum_fraction": 0.6684236534, "num_tokens": 2015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8615382094310355, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5938631378337379}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\n\\title{A dynamical Reed-Frost model for Covid-19 household contamination}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Model description}\n\nWe consider a population divided into \\emph{households}, with individuals\ninteracting both within and outside of their respective households. We will \nfocus on modeling the spread within the household, while taking the possibility\nof community-acquired infection into account. \n\nFor a given household with \\(m\\) individuals, consider at each time \\(t\\) the\nset \\(S_t\\) of susceptible individuals and the set \\(I_t\\) of infectious\nindividuals. The number of such individuals will be noted \\(s_t\\) and \\(i_t\\)\nrespectively. Given the long incubation time of Covid-19 and the possibility of\npresymptomatic transmission, we will also consider the set \\(E_t\\) of \n\\emph{exposed} individuals, who have yet to develop symptoms. They are not\ninfectious in their current state (pre-infectious incubation). In accordance \nwith recent evidence about viral shedding in Covid-19, we will divide the set of\ninfectious individuals into pre-symptomatic and post-symptomatic individuals: \n\\[\n\tI_t=I_t^p\\cup I_t^s.\n\\]\n\nThe level of infectiousness of an infectious individual \\(j\\) will be noted \n\\(h_j(t)\\). We will assume that infectiousness is constant during the\npresymptomatic phase \\([t_p,t_s)\\), then decreases with time in a geometric\nfashion during the symptomatic phase \\([t_s,\\infty)\\), such\nthat, if \\(j\\) is infectious, \\begin{equation}\n\th_j(t)  = \n\t\t\\begin{cases}\n\t\th_0 & \\text{if } t_p \\le t < t_s,\\\\ \n\t\th_0\\gamma^{t-t_s}& \\text{if } t\\ge t_s.\n\t\t\\end{cases} \n\\end{equation}\nWithin the household, individuals are mixing completely. Hence, at time \\(t\\), a\nsusceptible individual gets exposed to all infectious individuals. The \ncollective force of infection is then the sum of the contributions of both\npresymptomatic and post-symptomatic infectious individuals:\n\\begin{equation}\n\tH_t = H_t^p + H_t^s = h_0 |I^p_t| + \\sum_{j\\in I_t^s} h_j(t),\n\\end{equation}\nand a susceptible individual becomes exposed (is infected with the virus) with\nprobability\n\\begin{equation}\n\t1-Q_h(t) = 1- \\exp(-\\beta H_t).\n\\end{equation}\nThe term \\(Q_h(t)\\) is sometimes dubbed \\emph{household escape probability}. \n\nIn addition to infections within the household, each susceptible individual can\nalso be infected in the community. If \\(1-A\\) is the probability of that event,\nwhich is assumed to be independent from within-household infection, then total\ninfection probability at time \\(t\\) is \n\n\\[\n\tP(t)=1-Q(t) = 1-A\\exp(-\\beta H_t).\n\\]\n\nThis process of infections happens at each timestep, independently for each\nsusceptible individual, so that at time \\(t+1\\), the number of new infections is\nbinomially distributed with parameters \\(s_t\\) and \\(P(t)\\). 
We will study the\nMarkovian process\n\\begin{equation}\n\tX_t=(s_t,e_t,i^p_t,i_t^s,H_t^s),\n\\end{equation}\nwhose transitions can be summed up as follows:\n\\begin{align*}\n\te_{t+1} & = e_t + \\Delta E_t - \\Delta I^p_t, \\\\\n\ti_{t+1}^p & = i_{t}^p +\\Delta I^p_t - \\Delta I_t^s, \\\\\n\ti_{t+1}^s & = i_t^s +\\Delta I_t^s, \\\\\n\tH_{t+1}^s & = \\gamma H_t^s + h_0 \\Delta I_t^s, \\\\\n\ts_{t+1} & = s_t - \\Delta E_t,\n\\end{align*}\nwhere \n\\begin{align*}\n\t\\Delta E_t & \\sim \\mathsf{Bin}(s_t,P(t)),\\\\\n\t\\Delta I^p_t & \\sim \\mathsf{Bin}(e_t,p_I),\\\\\n\t\\Delta I^s_t & \\sim \\mathsf{Bin}(i^p_t,p_S)\n\\end{align*}\nare the numbers of newly exposed, newly presymptomatic and newly symptomatic\nindividuals, respectively.\n\n\\section*{Observation model}\n\nWe assume that the data contain the number \\(s_0\\) of individuals in each\nhousehold, along with symptom onset times of symptomatic individuals. In our\ncontext, we will assume that symptomatic infectious individuals are detected\nwith probability \\(F\\) (thus aggregating subclinical or entirely asymptomatic\ndisease and biases due to self-reporting) and we will write \n\\[\n\tY_t \\sim \\mathsf{Bin}(\\Delta I_t^s,F)\n\\]\nfor the observed number of newly-infected, symptomatic individuals. \n\nThis, together with the Markov property of the hidden process \\(X\\), makes inference \nthrough classical HMM algorithms theoretically possible, such as particle MCMC \n\\cite{Endo2019}, or iterated filtering \\cite{Ionides2015}. As a sanity check, \nit should also be possible to fit the classical Reed-Frost model to the final \nsize data \\((s_0,i_\\infty)\\) as in \\cite{Cauchemez2014} to estimate some of the \nsame parameters as in this model.\n\n\\section*{Parameters}\n\n\\begin{itemize}\n\\item Rate of decrease of infectiousness \\(\\gamma >0\\).\n\\item Initial infectiousness \\(h_0 >0\\).\n\\item Transmission parameter \\(\\beta >0\\).\n\\item Community escape probability \\(A\\).\n\\item Detection probability \\(F\\) of symptomatic infections.\n\\item Probability of an exposed individual becoming infectious per time unit \n\\(p_I\\).\n\\item Probability of a presymptomatic individual developing symptoms per time unit \\(p_S\\).\n\n\\end{itemize}\n\n\\section*{Possible extensions}\n\nThere are a number of important aspects of Covid-19 infection that we would like\nto address with these data. This requires refining the model to take individual\ncovariates into account, such as age (which should be modelled as a categorical\nvariable), severity of disease or the frequency of outside contacts. \n\nFor now, we assume that infectious individuals have the same transmission\ncharacteristics, but this could very well not be the case. The initial\ninfectiousness \\(h_0\\) could be drawn from a common distribution,\nsuch as a Gamma distribution, which allows for varying levels of heterogeneity \n(see \\cite{Fraser2011}). Also, we did not include the age structure of\nhouseholds, which will have to be done at some point since transmissibility\nbetween age groups is a crucial knowledge gap at this time.\n\nThe impact of asymptomatic infections should also be modelled carefully, since\nit is not known at this time whether they can be infectious or not. Following \n\\cite{Fraser2011}, we could distinguish between uninfectious and infectious\nasymptomatic disease.\n\nA very important factor is the presence of non-Covid-19 infections in the\ncommunity, which can lead to symptoms and outbreaks similar to Covid-19. This\nrepresents a major bias in the data. 
This can be resolved in several ways: the\nsimplest one is to independently estimate the proportion of influenza-like\nillnesses who are in fact due to Covid-19 disease (for instance, by using the\nrate of positive PCR tests as a proxy), and adjust the parameters accordingly. A\nmore sophisticated approach would be to finely distinguish between symptoms,\naccording to their predictive value for Covid-19.\n\n\\bibliographystyle{plain}\n\\bibliography{library.bib}\n\n\n\\end{document}", "meta": {"hexsha": "d08b4265a2a4144e414e23564b777f84d1d7253d", "size": 6590, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/model.tex", "max_stars_repo_name": "phoscheit/reed_frost", "max_stars_repo_head_hexsha": "17555bb1808c5068047f28f431882a389215517a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/model.tex", "max_issues_repo_name": "phoscheit/reed_frost", "max_issues_repo_head_hexsha": "17555bb1808c5068047f28f431882a389215517a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/model.tex", "max_forks_repo_name": "phoscheit/reed_frost", "max_forks_repo_head_hexsha": "17555bb1808c5068047f28f431882a389215517a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5161290323, "max_line_length": 80, "alphanum_fraction": 0.7611532625, "num_tokens": 1770, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540518, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5936715453001487}}
{"text": "\\chapter{Wave and optics}\n\nDifferential equation? Diffusion? Oscillation? Wave behavior?\n\nThis chapter has two purposes:\nintroduce differential equations to the reader,\nand prepare the reader for quantum mechanics.\n\n\\section{Wave}\n\n% https://en.wikipedia.org/wiki/Wave\n(\"Wave\", Wikipedia):\nA \\emph{wave} is an oscillation accompanied by a transfer of energy?\nA wave is a disturbance that transfers energy through matter or space?\n\n\\section{Detecting waves}\n\nWe can detect a wave by a diffraction slit.\nIf it's a wave, it diffracts.\n\nWe assume the converse: if it diffracts, it's very likely a wave.\n\n\\emph{Undulation} is an old term for \\emph{wave}.\n\n\\section{Oscillation of a loaded spring}\n\nIf a spring is loaded, pulled, and released, then it will oscillate.\n\nThe equation of motion can be derived from Hooke's law of spring restoring force (\\S\\ref{sec:hooke-s-law}).\n\nWave equation\n\nSecond-order differential equation\n\nWater wave\n\nD'Alembert's waves?\n\nHow do we describe oscillation?\nPeriodic motion?\nHarmonic motion?\n\nHow do we describe a diffusion?\nHow do we derive the wave equation from the diffusion equation?\n\n% https://en.wikipedia.org/wiki/Fick%27s_laws_of_diffusion\n% https://en.wikipedia.org/wiki/Continuity_equation\n% https://en.wikipedia.org/wiki/Diffusion_equation#Derivation\nDiffusion equation is derived from continuity equation and Fick's laws of diffusion.\nAssume homogeneous (made of the same thing everywhere)\nisotropic (behaving the same everywhere) medium?\nDiffusion:\nThe rate of diffusion is proportional to the gradient?\n\\[\n    f(x,t+h) - f(x,t) = c \\cdot \\frac{[f(x - h, t) - f(x,t)] + [f(x + h, t) - f(x,t)]}{2}\n\\]\nDivide both sides by \\(h\\)\n\\[\n    D_t f(x,t) = c \\cdot D_x f(x,t)\n\\]\n\\[\n    \\frac{\\partial f}{\\partial t} = c \\cdot \\frac{\\partial f}{\\partial x}\n\\]\n\\[\n    \\frac{\\partial f}{\\partial t} = \\vec{c} \\cdot \\nabla f\n\\]\n???\n\nHow do we describe waves?\n\nHow do we describe waves on a string?\nPulse on a string?\nPulse on a chain of springs?\nReplace the springs with more smaller springs?\n\n% https://en.wikipedia.org/wiki/D%27Alembert%27s_formula\n\nA function \\(f\\) has \\emph{period} \\(p\\) iff \\(f(x+p) = f(x)\\) for all \\(x\\).\n\nLet \\(f(x,t)\\) be the \\emph{amplitude} of the wave at position \\(x\\) and time \\(t\\).\n\nLet the oscillator be at position \\(0\\).\n\nLet \\(g\\) be an unknown function.\n\nFlow:\n\\(f(x,t + dt) - f(x,t) = g(c,f(x,t),f(x-dx,t),f(x+dx,t))\\)\n\n\\(f(x,t + dt) - f(x,t) = [f(x-dx,t)-f(x,t)] + [f(x+dx,t)-f(x,t)]\\)\n\n\\section{Velocities}\n\nPropagation velocity\n\nPhase velocity\n\nGroup velocity\n\n\\section{Light wave}\n\n\\section{Fermat's principle of least time}\n\nLight takes the path that takes the least time.\n\n\\section{Snell\\textendash{}Descartes law of refraction}\n\n% https://en.wikipedia.org/wiki/Snell%27s_law\n% https://en.wikipedia.org/wiki/Snell%27s_law#History\n\\begin{equation}\n    \\frac{\\sin \\theta_1}{\\sin \\theta_2} = \\frac{v_1}{v_2} = \\frac{\\lambda_1}{\\lambda_2} = \\frac{n_2}{n_1}\n\\end{equation}\n\nDescartes 1637 \\emph{Dioptrics},\n\nHuygens 1678: Huygens\\textendash{}Fresnel principle.\n\nSnell's law can be derived from Fermat's principle?\n\nSnell's law can be derived from Huygens\\textendash{}Fresnel principle?\n\n% https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle\n\n\\section{Optics}\n\n\\emph{Wavenumber} is?\n\\emph{Wavelength} is?\n\\emph{Frequency} is?\n\\emph{Phase speed} 
is?\n\\emph{Group velocity} is?\n\n\\emph{Dispersion relation} is?\n\n\\emph{Doppler effect} is\n\nExpanding universe?\n\n\\section{Reflection}\n\\section{Diffraction}\n\\section{Diffusion}\n\\section{Dispersion}\n\\section{Interference}\n\\section{Superposition}\n\\section{Fresnel spot}\n\\section{Transversal wave}\n\\section{Longitudinal wave}\n\\section{Young's double-slit experiment}\n\\section{Isochronic oscillation of a pendulum}\n\n\\section{Camera obscura}\n\n\\section{Newton's 1672 prism splits white light into colors?}\n\n\\section{Young's 1803 double-slit experiment}\n", "meta": {"hexsha": "24a7308d54802fe59eefc7150aab267ed9d7d2c3", "size": 3870, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research/physics/wave.tex", "max_stars_repo_name": "edom/work", "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "research/physics/wave.tex", "max_issues_repo_name": "edom/work", "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_forks_repo_path": "research/physics/wave.tex", "max_forks_repo_name": "edom/work", "max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "avg_line_length": 25.1298701299, "max_line_length": 107, "alphanum_fraction": 0.7266149871, "num_tokens": 1159, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.5936715425609015}}
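The discrete diffusion rule derived in the wave chapter above can be simulated directly. Below is a minimal Python sketch; the grid size, coefficient, and initial spike are illustrative choices, not values from the text.
\begin{verbatim}
import numpy as np

# f(x,t+h) - f(x,t) = c * ([f(x-h,t)-f(x,t)] + [f(x+h,t)-f(x,t)]) / 2,
# simulated on a 1-D grid with fixed zero boundaries (assumed setup).
n_steps, n_cells, c = 200, 51, 0.4   # a small c keeps the update stable

f = np.zeros(n_cells)
f[n_cells // 2] = 1.0                # initial concentration spike

for _ in range(n_steps):
    second_diff = np.zeros_like(f)
    second_diff[1:-1] = f[:-2] - 2.0 * f[1:-1] + f[2:]
    f += c * second_diff / 2.0

print(f.round(3))                    # the spike has spread out
\end{verbatim}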
{"text": "\\documentclass[11pt,onecolumn]{article}\n\\usepackage{amsmath,amssymb,graphicx,epsfig,cite,algorithm,algorithmic,epstopdf}\n\\usepackage[utf8]{inputenc}\n\\usepackage{xfrac, nicefrac}\n\\usepackage{multirow}\n\\usepackage{appendix}\n\n\\begin{document}\n\\title{Monte Carlo Methods for Mortals}\n\\author{Santhosh Nadig}\n\\maketitle\n\\pagebreak\n\\section{Introduction}\nHere's a scenario: assume that you are playing solitaire (the card game) and wondering what's the probability of a successful outcome? Well, this was a question that occurred to Stan Ulam (the Polish mathematician) who was recovering from illness and playing solitaire to while away his time. He tried to solve it using combinatorics and failed. Finally, he thought of simulating a very large number of games (with cards shuffled at random, of course) and just counting the number of successes. He, and his friend, Von Neumann, who knew a thing or two about computers and programming, set out do this calculation and eventually used this method for other scientific purposes.\n\nThis is the central idea of MC. Use randomness to \"simulate\" a large number of experiments. Infer / calculate a quantity of interest based on the outcome of the experiments. More the number of experiments, the better!\n\nWhat are Monte Carlo (MC) methods? An (over)simplified view is that they are tools for:\n\\begin{enumerate}\n \\item Estimating (multi-dimensional) integrals\n \\item Estimating probabilities\n \\item ... etc.\n\\end{enumerate}\n\nFirst, a note about estimating integrals. The usual deterministic methods of numerical integration (such as trapizoidal method) suffer from what's known as the \"curse of dimensionality\". That is, for a fixed error the number of function evaluations (of the integrand) grows exponentially with the \"dimension of integration\". In general, calculations that grow exponentially are frowned upon by engineers and scientists. At the risk of giving away the story-line, it sufficies to say that MC methods do not suffer from the aforementioned \"curse.\" I will (probably) give examples of this later on.\n\n\\section{Estimating Probabilities} \n\\subsection{Random Variables}\nWhat is a random variable? As someone said, it is neither random nor variable. For example, let $X$ denote the resistance of a resistor (assume fixed temperature and other conditions, just for argument's sake). $X$ is an {\\em unknown constant} (or, only known to the supreme fascist, SF). We do have, however, a series of measurements $x_1, x_2, \\dots $ from an ohm-meter. These measurements are called realizations/observations/samples of the random variable. So, now we have a random variable $X$ and its realizations (or samples) $x_1, x_2, \\dots $ and so on.\n\nThe expected value (mean, for ordinary mortals) of a random variable, $X$, is defined as\n\\[\nE[X] = \\int x~ p(x)~ dx,\n\\]\nwhere $p(x)$ is the probability density function (pdf). Assume that $p(x)$ represents a Normal distribution (or Gaussian distribution) with mean $m$ and standard deviation $\\sigma$. Thus, the probability density function $p(x)$ is given by\n\\[\n p(x) = \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\biggl({-(x-m)^2/(2\\sigma^2)}\\biggr).\n\\]\nWhat it means is that if you plot a histogram of  $x_1, x_2, \\dots $, it resembles the bell shape centered around  $m$ and its width $\\propto \\sigma$.\n\nAssume that the unknown expected value $m$ is to be estimated given $N$ realizations $x_1,\\dots,x_N$ of $X$. 
We could estimate $m$ (and hence, the integral) simply as a weighted sum of the realizations [it can be proven that this is indeed the maximum likelihood estimate of $m$, but I will skip the derivation].\n\\[\nE[X] = m =  \\int x~ p(x)~ dx \\approx \\frac{1}{N} \\sum_{i=1}^N x_i.\n\\] \nThe estimate gets better (closer to $m$) as the number of samples, $N$, increases.\n\nThe above idea might not be a revelation. But, let us assume that we have an integral of the kind:\n\\begin{align}\n\\int  g(x) \\cdot f(x)~ dx ,\n\\label{eq:gxfx}\n\\end{align}\n\n\nwhere $f(x)$ is a pdf (i.e., $f(x) \\ge 0$ for all $x$ and $\\int f(x)~dx = 1$).\nFor instance, if $g(x) = x$, we obtain the expected value; and if $g(x) = x^2$, we obtain the \nsecond moment (the \"variance\", when the mean is zero), etc. These quantities are very useful: for example, the expected \n value represents the best estimate of the random variable $X$ and the variance \n gives how confident we are of the estimate.\n\n{\\bf I1: MC method says:}\n\\[\n\\int g(x) \\cdot f(x) ~ dx \\approx \\frac{1}{N} \\sum_i g(x'_i)\n\\]\nwhere $x'_1, x'_2,\\dots$ are \"sampled\" from the pdf $f(x)$.\n\n\\subsection{Sampling}\nWhat do we mean by ``sampling'' from a distribution? It means drawing realizations from a given pdf. Let's consider an example:\nSuppose we have a \"bent coin\" which has a probability of coming up heads = 0.7 and coming up tails = 0.3. A sample is simply the result of a toss. Now, if we sampled long enough; that is, if we tossed the bent coin many times and observed the results (realizations) and plotted a histogram (normalized), then, we'd see that almost 70\\% of the coin tosses came up heads and the remaining 30\\% or so came up tails. A Python script to simulate such a coin is shown below:\n\n\\begin{verbatim}\n\"\"\"\nSimulate a biased coin\n\"\"\"\n\nimport numpy as np\n\nclass biased_coin:\n    \"\"\" biased_coin is a class that implements a biased coin with\n        probability of heads = p, probability of tails = 1-p \"\"\"\n    def __init__(self, p):\n        self.p = p\n\n    def toss(self):\n        \"\"\" generate a single toss of the biased coin \"\"\"\n        if np.random.rand() > self.p:\n            return 'T'\n        else:\n            return 'H'\n\n    def get_sequence(self, num_tosses):\n        \"\"\" generate a sequence of num_tosses independent tosses \"\"\"\n        seq = [self.toss() for i in range(num_tosses)]\n        return seq\n\n\ndef main():\n    p = 0.7 # probability of heads\n    c = biased_coin(p)\n    tosses = c.get_sequence(100)\n    num_heads = tosses.count('H')\n    num_tails = tosses.count('T')\n    print(\"simulated 100 tosses of a biased coin with p = \" + str(p))\n    print(\"obtained \" + str(num_heads) + \" heads and \" + str(num_tails) + \" tails\")\n\n\nif __name__ == '__main__':\n    main()\n\n\n----- Sample Output -----\nsimulated 100 tosses of a biased coin with p = 0.7\nobtained 71 heads and 29 tails\n\n\\end{verbatim}\n\nA key observation in the above program is the following: we first sample from a uniform distribution (a random number that has the same probability of assuming any value between 0 and 1.0). This is provided by the function {\\texttt{np.random.rand()}}. This random variable is then used to determine whether the appropriate result of the coin toss shall be heads or tails. If the random number is $> p (= 0.7)$, then the outcome is tails; otherwise heads. So, about 70\\% of the coin tosses result in heads as expected.\n\n{\\bf I2: Random variables from a uniform distribution are ``translated'' to another desired distribution}. 
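The coin translates uniform samples into a two-valued target distribution; the same idea works for continuous targets. Here is a minimal sketch using inverse transform sampling, where the exponential target is my own choice, purely for illustration.
\begin{verbatim}
import numpy as np

# If U ~ Uniform(0,1) then X = -log(1-U)/lam ~ Exponential(lam),
# because inverting the CDF F(x) = 1 - exp(-lam*x) gives this map.
lam = 2.0
u = np.random.rand(100000)    # samples from the uniform distribution
x = -np.log(1.0 - u) / lam    # "translated" samples

print(x.mean())               # should be close to 1/lam = 0.5
\end{verbatim}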
\n\nThis translation is a key feature of MC methods. There are a plethora of methods that achieve this (e.g., inverse transform sampling, rejection sampling, etc.). But the basic idea is that random variables from one distribution are used to generate random variables of another (desired) distribution.\n\nThe above example had a discrete pdf (a probability mass function (pmf)). If we assume a continuous pdf (such as a Normal distribution), then, the histogram of the samples would resemble a ``bell curve''.\n\n\\subsection{Recap}\nJust a quick recap; the first idea ({\\bf I1}) says that integrals of the kind (\\ref{eq:gxfx}) can be approximated by averaging the values $g(x_i)$ at samples $x_i$ drawn from the pdf $f(x)$. The second idea ({\\bf I2}) says that samples from one distribution (such as the uniform distribution, which is provided by the standard library on most platforms) can be used to generate samples from another distribution.\n\n\\subsection{Conditional Probability}\nIn this section, we explore how to generate samples from a pdf as a function of other pdfs. I will use the term probability density function (which, if you remember, is applicable to a continuous random variable) instead of probability mass function (of a discrete random variable) just for convenience \\footnote{see appendix \\ref{app:a}}. I know that this violates mathematical rigour, but it is worth the sacrifice (in my opinion). \n\\subsubsection{Task 1}\nAssume that we are given samples from a pdf $p(x)$ and we know the conditional pdf $p(y|x)$. Our task is to generate samples from the pdf $p(y)$. So, say that we have samples $x_1, x_2, \\dots, x_N$ from the pdf:\n\\begin{align}\n p(x) = \\begin{cases}\n         0.6, & x = 0. \\\\\n         0.4, & x = 1.\n        \\end{cases}\n\\end{align}\nWe are also given the conditional probabilities:\n\\begin{align}\n p(y|x = 0) &= \\begin{cases}\n         0.1, & y = 0. \\\\\n         0.9, & y = 1.\n        \\end{cases} \\\\\n p(y|x = 1) &= \\begin{cases}\n         0.7, & y = 0. \\\\\n         0.3, & y = 1.\n        \\end{cases}\n\\end{align}\n\\footnote{Digression: Although this is an artificial toy example, it bears resemblance to a binary asymmetric channel where $y$ can be thought of as the received signal.}\nNow, the first task is to draw samples from the distribution $p(y)$. One way to approach this task is to estimate $p(x)$ (call it $\\hat p(x)$) using the samples $x_1,\\dots,x_N$ and then apply a simple formula to calculate $p(y)$ as:\n\\begin{align}\n p(y = 0) &= p(y = 0|x = 0) \\cdot \\hat p(x = 0) + p(y = 0|x = 1)\\cdot \\hat p(x = 1) \\\\\n p(y = 1) &= p(y = 1|x = 0) \\cdot \\hat p(x = 0) + p(y = 1|x = 1)\\cdot \\hat p(x = 1)\n\\end{align}\nand then, sample from $p(y)$ as before. Another approach is to do the following:\nFor each $x_i$, simulate $M$ samples of $y$s (i.e., $y_1, y_2, \\dots, y_M$) using the conditional probability $p(y|x_i)$. 
Thus, we end up with $M\\cdot N$ samples of $y$ which are essentially distributed as $p(y)$.\nThe following code sample illustrates the procedure (with $N = 10000$ and $M = 10$).\n\\begin{verbatim}\n  # sample(labels, probs, n) draws n values from 'labels' with\n  # probabilities 'probs' (a reconstruction is sketched at the end).\n  x = sample(x_labels, p_x, 10000)\n  y = np.array([])\n  for i in range(len(x)):\n      q = p_y_x[x[i],]\n      y = np.append(y, sample(y_labels, q, 10))\n\n  cnt_y = np.array([(y == 0).sum(), (y == 1).sum()])\n  p_y = cnt_y/len(y)\n  print('p(y) is: [%1.2f, %1.2f]'%(p_y[0], p_y[1]))\n  \n----- Sample Output -----\np(y) is: [0.34, 0.66]\n\n\\end{verbatim}\n\n\\subsubsection{Task 2}\nFor the next task, let us suppose the following:\n\\begin{enumerate}\n  \\item The pdf $p(x)$ (a.k.a prior) is known\n \\item The conditional pdfs $p(y|x)$ (a.k.a likelihood) are known\n \\item We have a $y_i$, which is either a 0 or 1 in our case (a.k.a measurement); and\n \\item \\textbf{Our job is to find the best estimate of $x$ that might have generated the given $y_i$. That is, we need to estimate the posterior probability $p(x|y)$.}\n\\end{enumerate}\n\nIn theory, this can be achieved by applying Bayes' theorem\n\\begin{align}\n \\textbf{posterior} &= \\frac{\\textbf{prior} \\cdot \\textbf{likelihood}}{\\textbf{evidence}} \\\\\n p(x = x_j| y = y_i) &= \\frac{p(y = y_i | x = x_j) p(x = x_j)}{\\sum_{j} p(y = y_i | x = x_j) p(x = x_j) }\n\\end{align}\nSay, $y_i = 0$ and we would like to determine $p(x|y=0)$. Using the above theorem, it turns out to be:\n\\begin{align}\n p(x = 0|y = 0) &= \\frac{p(y = 0 | x = 0) p(x = 0)}{p(y = 0 | x = 0) p(x = 0) + p(y = 0 | x = 1) p(x = 1) } = 0.176  \\nonumber \\\\\n p(x = 1|y = 0) &= (1 - 0.176) = 0.824.\n\\end{align}\n\n\nOne approach to solving this problem the ``Monte Carlo'' way is to do the following:\n\\begin{enumerate}\n \\item Generate ``candidate'' samples of $x$ (say, $x_1,\\dots,x_N$) from the known pdf $p(x)$.\n \\item For each of the $x_i$s generated, generate a ``measurement'' (call it $\\hat y$) according to the pdf (likelihood) $p(y|x)$.\n \\item If $\\hat y$ is the same as $y$, save that $x_i$, else discard the $x_i$.\n\\end{enumerate}\nOnce the above steps are complete, we would have a new (but reduced) set of $x_i$s (say, $x_1, \\dots, x_M$) which would be distributed according to the pdf $p(x|y)$.\nThe following code sample illustrates the procedure.\n\n\\begin{verbatim}\nx = sample(x_labels, p_x, 10000)\nx_new = np.array([])\nfor i in range(len(x)):\n    q = p_y_x[x[i],]\n    y = sample(y_labels, q, 1)\n    if y == 0:\n        x_new = np.append(x_new, x[i])\n\n# done! now, check the distribution\ncnt_x_new = np.array([(x_new == 0).sum(), (x_new == 1).sum()])\np_x_y0 = cnt_x_new / len(x_new)\nprint('p(x|y=0) is: [%1.2f, %1.2f]'%(p_x_y0[0], p_x_y0[1]))\nprint('INFO: len(x) = %d, len(x_new) = %d'%(len(x), len(x_new)))\n\n----- Sample Output -----\np(x|y=0) is: [0.18, 0.82]\nINFO: len(x) = 10000, len(x_new) = 3375\n\n\\end{verbatim}\n\n\\subsection{Recap}\nWe have seen one way of generating samples from conditional probabilities (or sampling from the posterior probability). The method in the example we used discarded samples from a set of candidates (generated using the prior pdf) to generate another set of samples which were distributed according to the posterior pdf. If this method is applied recursively, one may end up with too few (or even no) samples. 
However, there are ways to overcome this problem.\n\n\n\\begin{appendices}\n\\section{Probability Density Function and Probability Mass Function}\n\\label{app:a}\nIt is easy to be misled\\footnote{I, for one, was...} to interpret the probability density function (pdf) of a continuous random variable as an analog of the probability mass function (pmf) of a discrete random variable. That is to say, to think that the $y$ axis of the pdf represents probability. This is absolutely wrong! This confusion may arise, for example, when one compares the expression for the expected value of a discrete random variable $y$ and that of a continuous random variable $x$\n\\begin{align}\nE[x] = \\int x p(x) dx \\\\\nE[y] = \\sum_i y_i p(y_i)\n\\end{align}\nIn the case of a discrete random variable, $p(y_i)$ represents an actual probability. That is, $p(y_i) = Pr(Y = y_i)$; whereas for a continuous random variable, $p(x)$ is the probability density (and not an actual probability). That is:\n\\begin{align}\nPr(a \\le X \\le b) = \\int_{a}^b p(x) dx.\n\\end{align}\nFrom the above expression it also follows that the probability of a continuous random variable taking on exactly one particular value is  0. \n\\end{appendices}\n\\end{document}\n", "meta": {"hexsha": "420cedd4f639e7782d0f7803d1e50e09081b2984", "size": 14097, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mcmm.tex", "max_stars_repo_name": "nad1g/mcmm", "max_stars_repo_head_hexsha": "650eb2d5b283532841e696d2c14ae4dde4978c14", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mcmm.tex", "max_issues_repo_name": "nad1g/mcmm", "max_issues_repo_head_hexsha": "650eb2d5b283532841e696d2c14ae4dde4978c14", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mcmm.tex", "max_forks_repo_name": "nad1g/mcmm", "max_forks_repo_head_hexsha": "650eb2d5b283532841e696d2c14ae4dde4978c14", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.7745901639, "max_line_length": 675, "alphanum_fraction": 0.6982336667, "num_tokens": 4004, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5936715425609015}}
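The code fragments in the conditional-probability tasks above rely on a helper sample(labels, probs, n) and on the arrays x_labels, y_labels, p_x, and p_y_x, none of which are shown. The following is a minimal reconstruction (my own sketch, consistent with how the fragments use these names) that makes those fragments self-contained.
\begin{verbatim}
import numpy as np

def sample(labels, probs, n):
    """Draw n values from 'labels' with the given probabilities."""
    return np.random.choice(labels, size=n, p=probs)

# Setup matching the toy example of Task 1 and Task 2.
x_labels = np.array([0, 1])
y_labels = np.array([0, 1])
p_x = np.array([0.6, 0.4])        # prior p(x)
p_y_x = np.array([[0.1, 0.9],     # p(y | x = 0)
                  [0.7, 0.3]])    # p(y | x = 1)
\end{verbatim}
With these definitions in place, the two fragments above run as given.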
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\\renewcommand{\\vec}[1]{\\boldsymbol{#1}}\n\n\\title{Variational optimization for the minimization of functions on the binary domain}\n\\date{\\today}\n\\author{Antti Ajanki}\n\n\\begin{document}\n\n\\maketitle\n\nWe consider the task of minimizing a non-linear scalar function\n$f(\\vec{x})$, where the input is an $N$ dimensional binary vector,\n$\\vec{x} \\in \\{0, 1\\}^N$.\n\n\\section{Variational optimization}\n\nThe minimum of a function is always less than or equal to its\nexpected value over any distribution $p(x | \\theta)$:\n\\[\n\\min f(x) \\leq E\\left[f(x)\\right]_{p(x | \\theta)}\n\\]\n\nThe bound can be made tight if $p(x | \\theta)$ is so flexible that all\nthe probability mass can be concentrated on the true minimum\n$\\text{argmin} f(x)$.\n\nThe idea of variational optimization~\\cite{staines2012} is to consider\nthe upper bound $U(\\theta) = E\\left[f(x)\\right]_{p(x | \\theta)}$ as a\nfunction of $\\theta$ and minimize that. This converts the task of\nminimizing a function $f(x)$ of binary variables to minimization of a\nfunction $U(\\theta)$ of a continuous variable. Any method for\ncontinuous optimization can be applied for find a local minimum of\n$U(\\theta)$.\n\n\\section{Stochastic gradient descent}\n\nBecause $\\vec{x}$ is a binary vector, it is natural to choose to take\nthe expectation over a separable Bernoulli distribution:\n\\[\np(\\vec{x} | \\vec{\\theta}) = \\prod_i p_i(x_i | \\theta_i) = \\prod_i\n\\theta_i^{x_i} (1 - \\theta_i)^{1-x_i}\n\\]\n\nLet's use the stochastic gradient descent to find the local minimum of\nthe upper bound $U(\\vec{\\theta})$. First we need the partial\nderivates:\n\\begin{align*}\n\\frac{\\partial U(\\vec{\\theta})}{\\partial \\theta_j}  &=\n\\frac{\\partial}{\\partial \\theta_j} E\n\\left[f(\\vec{x})\\right]_{p(\\vec{x} | \\vec{\\theta})}\\\\\n&= \\int f(\\vec{x}) \\frac{\\partial}{\\partial \\theta_j} p(\\vec{x} |\n\\vec{\\theta})  d\\vec{x}\\\\\n&= \\int f(\\vec{x}) \n\\frac{\\partial}{\\partial \\theta_j} \\theta_j^{x_j} (1-\\theta_j)^{1 - x_j}\n\\prod_{i \\neq j} p_i(x_i | \\theta_i) d\\vec{x}\\\\\n&= \\int f(\\vec{x}) \n\\left( \\theta_j^{x_j} (x_j - 1) (1-\\theta_j)^{1 - x_j - 1} + x_j\n\\theta_j^{x_j - 1} (1-\\theta_j)^{1 - x_j} \\right)\n\\prod_{i \\neq j} p_i(x_i | \\theta_i) d\\vec{x}\\\\\n&= \\int f(\\vec{x}) \\theta_j^{x_j}\n(1-\\theta_j)^{1-x_j} \\left( \\frac{x_j - 1}{1 - \\theta_j} +\n\\frac{x_j}{\\theta_j} \\right) \\prod_{i \\neq j} p_i(x_i | \\theta_i) d\\vec{x}\\\\\n&= \\int f(\\vec{x}) \\left( \\frac{x_j - 1}{1 - \\theta_j} +\n\\frac{x_j}{\\theta_j} \\right) \\prod_i p(\\vec{x} | \\vec{\\theta})\nd\\vec{x}\\\\\n&= E\\left[ f(\\vec{x}) \\left( \\frac{x_j - 1}{1 - \\theta_j} +\n\\frac{x_j}{\\theta_j} \\right) \\right]_{p(\\vec{x} | \\vec{\\theta})}\n\\end{align*}\n\nDavid Barber proposed approximating the last expectation by sampling~\\cite{barber2017}:\n\\[\n\\frac{\\partial U(\\vec{\\theta})}{\\partial \\theta_j} \\approx \\frac{1}{K}\n\\sum_{k=1}^K f\\left( \\vec{x}^{(k)} \\right) \\left( \\frac{x_j^{(k)} -\n  1}{1 - \\theta_j} + \\frac{x_j^{(k)}}{\\theta_j} \\right),\n\\]\nwhere $\\vec{x}^{(1)}$ through $\\vec{x}^{(K)}$ are samples from\n$p(\\vec{x}| \\vec{\\theta})$.\n\nNow that we have a way to approximate the gradient $\\nabla\nU(\\vec{\\theta})$, we can apply the stochastic gradient descent to\niteratively search for the minimum. 
Each component $\\theta_j$ is updated by\ntaking small steps in the direction of the negative gradient:\n\\[\n\\theta_j^{\\text{new}} = \\theta_j - \\frac{\\eta}{K} \\sum_{k=1}^K\nf\\left( \\vec{x}^{(k)} \\right) \\left( \\frac{x_j^{(k)} - 1}{1 -\n  \\theta_j} + \\frac{x_j^{(k)}}{\\theta_j} \\right),\n\\]\nwhere $\\eta$ is the learning rate. Next, new $\\vec{x}$ samples are drawn\nfrom $p(\\vec{x} | \\vec{\\theta}^{\\text{new}})$ and the iteration is\nrepeated until convergence.\n\n\\begin{thebibliography}{9}\n\\bibitem{staines2012}\n  Joe Staines, David Barber:\n  \\textit{Variational Optimization},\n  \\url{https://arxiv.org/abs/1212.4507v2}, 2012.\n\\bibitem{barber2017}\n  David Barber:\n  \\textit{Evolutionary Optimization as a Variational Method},\n  \\url{https://davidbarber.github.io/blog/2017/04/03/variational-optimisation/},\n  Apr 3, 2017.\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "b49c20cd0b7f41b54b0436e065cdc6cb28931298", "size": 4063, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/binary_variational_optimization.tex", "max_stars_repo_name": "NFM-8/variational-optimization", "max_stars_repo_head_hexsha": "52a32d576813faa8e280c5f810e4a761c12983da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/binary_variational_optimization.tex", "max_issues_repo_name": "NFM-8/variational-optimization", "max_issues_repo_head_hexsha": "52a32d576813faa8e280c5f810e4a761c12983da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/binary_variational_optimization.tex", "max_forks_repo_name": "NFM-8/variational-optimization", "max_forks_repo_head_hexsha": "52a32d576813faa8e280c5f810e4a761c12983da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2752293578, "max_line_length": 87, "alphanum_fraction": 0.6704405612, "num_tokens": 1423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7690802423634961, "lm_q1q2_score": 0.5936095843847642}}
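To make the whole loop concrete, here is a minimal Python sketch of the procedure above. The toy objective (counting mismatches against a hidden target string), the clipping of theta away from 0 and 1, and all constants are my own choices for illustration; only the gradient estimator comes from the derivation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, eta = 12, 50, 0.01
target = rng.integers(0, 2, size=N)
f = lambda x: np.sum(x != target)   # toy objective: mismatch count

theta = np.full(N, 0.5)             # Bernoulli parameters
for step in range(2000):
    # K samples from p(x | theta)
    x = (rng.random((K, N)) < theta).astype(int)
    fx = np.array([f(xk) for xk in x])
    # gradient estimate: mean of f(x) * ((x-1)/(1-theta) + x/theta)
    grad = (fx[:, None] * ((x - 1) / (1 - theta) + x / theta)).mean(axis=0)
    # clipping keeps theta a valid probability (not part of the derivation)
    theta = np.clip(theta - eta * grad, 0.01, 0.99)

print(f((theta > 0.5).astype(int))) # typically 0: the target is recovered
\end{verbatim}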
{"text": "\\lesson{2}{Nov 15 2021 Mon (12:15:15)}{Polynomial Transformations}{Unit 3}\n\n\\begin{definition}[Theorems]\n    A theorem is a proven statement based on experimentation. I will go over these three \\bf{Theorems}:\n    \n    \\begin{itemize}\n        \\item Fundamental Theorem of Algebra\n        \\item Factor Theorem\n        \\item Remainder Theorem\n    \\end{itemize}\n\\end{definition}\n\n\\begin{theorem}[Fundamental Theorem of Algebra]\n    The Fundamental Theorem of Algebra generally states that the degree of a polynomial is equivalent to the number of zeros (both real and complex) of a function.\n    \n    By the \\bf{Fundamental Theorem of Algebra}, the polynomial function $f(x) = x^2 - 3x - 28$ has two zeros since the degree of the function is two. To determine these zeros, replace the function notation of $f(x)$ with $0$ and solve by factoring.\n    \n    To factor, let's use the \\bf{Quadratic Equation}:\n    \n    \\begin{align}\n        x    &= \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} \\\\\n        f(x) &= x^2 - 3x - 28 \\\\\n        a    &= 1, b = -3, c = -28 \\\\\n        x    &= \\frac{-(-3) \\pm \\sqrt{(-3)^2 - 4 \\times 1(-28)}}{2 \\times 1} \\\\\n        x    &= \\frac{3 + 11}{2} \\rm{ AND } x = \\frac{3 - 11}{2} \\\\\n        x    &= 7 \\rm{ AND } -4 \\rightarrow (x + 4)(x - 7)\n    \\end{align}\n    \n    The zero product property tells us that for these factors to result in a product of $0$, one or both of them must equal $0$.\n    \n    \\begin{minipage}[c]{0.5\\textwidth}\n        \\centering\n        \\begin{tabular}{cccccc}\n            & 0 & = & x & + & 4 \\\\\n            - & 4 & = &   & - & 4 \\\\\n            \\hline\n            & x & = & & - & 4 \\\\\n        \\end{tabular}\n    \\end{minipage}\n    \n    \\hfill\n    \\begin{minipage}[c]{0.5\\textwidth}\n        \\centering\n        \\begin{tabular}{cccccc}\n            & 0 & = & x & - & 7 \\\\\n            + & 7 & = &   & + & 7 \\\\\n            \\hline\n            & x & = & & + & 7 \\\\\n        \\end{tabular}\n    \\end{minipage}\n    \n    The \\bf{Zeros} of $f(x) = x^2 - 3x - 28$ are: $x_1 = -4, x_2 = 7$.\n\\end{theorem}\n\n\\begin{marginfigure}\n    \\centering\n    \\incfig{fundamental-theorem-of-algebra}\n    \\sidecaption{$f(x) = x^2 - 3x - 28$ Graphed.}\n    \\label{fig:fundamental-theorem-of-algebra}\n\\end{marginfigure}\n\n\\begin{theorem}[Factor Theorem]\n    The \\bf{Factor Theorem} states that a first degree binomial is a factor of a polynomial function if the remainder, when the polynomial is divided by the binomial, is $0$.\n\n    To determine whether $x - 5$ is a factor of the function $f(x) = -4x^3 + 21x^2 - 25$, set up a division problem whereby $-4x^3 + 21x^2 - 25$ is divided by $x - 5$.\n    \n    \\[ \\polylongdiv{-4x^3 + 21x^2 - 25}{x - 5} \\]\n\n    When the function $f(x) = -4x^3 + 21x^2 - 25$ is divided by the binomial $x - 5$, the remainder is $0$. 
So, $x - 5$ is a factor of the function $f(x) = -4x^3 + 21x^2 - 25$.\n\\end{theorem}\n\n\\begin{theorem}[Remainder Theorem]\n    The \\textbf{Remainder Theorem} states that when the opposite of the constant from the binomial divisor is substituted into a function for $x$, the result is the remainder.\n\n    When the polynomial function $f(x) = x^4 + 11x^3 + 26x^2 + 15x - 17$ is divided by $x + 8$ using long division, the remainder is the last integer on the bottom row.\n\n    \\[ \\polylongdiv{x^4 + 11x^3 + 26x^2 + 15x - 17}{x + 8} \\]\n    \n    When the opposite of the constant in the divisor is substituted into the function, the result will be the same as the remainder in the division process.\n    \n    \\begin{align}\n        f(x)  &= x^4 + 11x^3 + 26x^2 + 15x - 17 \\\\\n        f(-8) &= (-8)^4 + 11(-8)^3 + 26(-8)^2 + 15(-8) - 17 \\\\\n              &= 4096 + 11(-512) + 26(64) + 15(-8) - 17 \\\\\n              &= 4096 - 5632 + 1664 - 120 - 17 \\\\\n              &= -9 \\leftarrow \\text{Remainder}\n    \\end{align}\n\\end{theorem}\n\n\\subsubsection*{Using these Theorems}\n\nGiven a polynomial function $f(x)$ and a number $a$, if $(x - a)$ is a factor of $f(x)$, then $a$ is a zero of the polynomial.\n\nThe binomial $(x - a)$ can be proved to be a factor of $f(x)$ by:\n\n\\begin{itemize}\n    \\item Using \\textbf{Long Division} with $(x - a)$ as the divisor.\n    \\item Using factoring methods when appropriate (grouping, completing the square, \\ldots).\n\\end{itemize}\n\nIf $a$ is a zero of the polynomial function $f(x)$, then:\n\n\\begin{itemize}\n    \\item The graph of $f(x)$ crosses the $x$ axis at $(a,0)$.\n    \\item Substituting $a$ into $(x - a)$ will equal $0$.\n    \\item Substituting $a$ into $f(x)$ will result in $f(a) = 0$.\n\\end{itemize}\n\n\\newpage\n", "meta": {"hexsha": "60196add1630d19716481de74bdb185e7be64dfc", "size": 4465, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex", "max_stars_repo_name": "SingularisArt/notes", "max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_stars_repo_licenses": ["Info-ZIP"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z", "max_issues_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex", "max_issues_repo_name": "SingularisArt/notes", "max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_issues_repo_licenses": ["Info-ZIP"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex", "max_forks_repo_name": "SingularisArt/notes", "max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_forks_repo_licenses": ["Info-ZIP"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5909090909, "max_line_length": 248, "alphanum_fraction": 0.5836506159, "num_tokens": 1523, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7718434978390746, "lm_q1q2_score": 0.5936095802995756}}
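Both worked examples in this lesson can be checked numerically. Here is a minimal Python sketch using numpy, which is my choice of tool; coefficients are listed from the highest power down.
\begin{verbatim}
import numpy as np

f1 = [-4, 21, 0, -25]             # f(x) = -4x^3 + 21x^2 - 25
q1, r1 = np.polydiv(f1, [1, -5])  # divide by (x - 5)
print(r1)                         # remainder 0: (x - 5) is a factor

f2 = [1, 11, 26, 15, -17]         # f(x) = x^4 + 11x^3 + 26x^2 + 15x - 17
q2, r2 = np.polydiv(f2, [1, 8])   # divide by (x + 8)
print(r2)                         # remainder -9
print(np.polyval(f2, -8))         # Remainder Theorem: f(-8) = -9 as well
\end{verbatim}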
{"text": "%!TEX root = da2020-10.tex\n\n\\Chapter{10}{Sinkless Orientation}\n\n\\noindent\nIn this chapter we will study the complexity of \\emph{sinkless orientation}, a problem that was introduced in the previous chapter. This is a problem that is understood well: we will design algorithms and show that these are asymptotically optimal.\n\nRecall that sinkless orientation is the problem of orienting the edges of the tree so that each internal node has got at least one outgoing edge. We begin by studying sinkless orientation on paths (or $(2,2)$-biregular trees), and show that we can easily argue about local neighborhoods to prove a tight lower bound result. However, when we switch to $(3,3)$-biregular trees, we will need the round elimination technique to do the same.\n\n\\section{Sinkless Orientation on Paths} \\label{sec:so-paths}\n\nWe define \\emph{sinkless orientation on paths} to be the following bipartite locally verifiable problem $\\Pi = (\\Sigma, \\collA, \\collP)$. The alphabet is $\\Sigma = \\{ \\mI, \\mO \\}$, with the interpretation that $\\mI$ indicates that the edge is oriented towards the active node (``incoming'') and $\\mO$ indicates that the edge is oriented away from the active node (``outgoing''). Each active node must label at least one incident edge with $\\mO$, and thus the active configurations are\n\\[\n\t\\collA = \\bigl\\{ \\, [\\mO, \\mI],\\, [\\mO, \\mO] \\, \\bigr\\}. \n\\]\nEach passive node must have at least one incident edge labeled with $\\mI$, and thus the passive configurations are\n\\[\n\t\\collP = \\bigl\\{ \\, [\\mI, \\mO],\\,[\\mI, \\mI] \\, \\bigr\\}. \n\\]\nAs previously, nodes of degree 1 are unconstrained; the edges incident to them can be labeled arbitrarily.\n\n\\subsection{Hardness of Sinkless Orientation}\\label{ssec:so-hard-paths}\n\nWe begin by showing that solving sinkless orientation requires $\\Omega(n)$ rounds on $(2,2)$-biregular trees.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[page=\\PSOPathIntuition,scale=0.4]{figs.pdf}\n\t\\caption{Propagation of a sinkless orientation on paths. Orienting a single edge (orange) forces the orientation of the path all the way to the other endpoint.} \\label{fig:so-intuition}\n\\end{figure}\n\n\\begin{lemma} \\label{lem:so-hard-paths}\n\tSolving sinkless orientation on $(2,2)$-biregular trees in the bipartite $\\PN$-model requires at least $n/4-1$ rounds, even if the nodes know the value $n$.  \n\\end{lemma}\n\nLet us first see why the lemma is intuitively true. Consider a path, as illustrated in Figure~\\ref{fig:so-intuition}. Each active node $u$ must choose some label for its incident edges, and at least one of these labels must be $\\mO$. Then its passive neighbor $v$ over the edge labeled with $\\mO$ must have its other incident edge labeled $\\mI$. This further implies that the other active neighbor $w$ of $v$ must label its other edge with $\\mO$. The original output of $u$ \\emph{propagates} through the path and the outputs of other nodes far away from $u$ depend on the output of $u$.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[page=\\PSOPathLBConstruction,scale=0.4]{figs.pdf}\n\t\\caption{Sinkless orientation lower bound. Assume $T = 4$. Top: any algorithm must fix some output labeling with an outgoing edge for a fixed neighborhood $\\ball_N(v,4)$. Bottom: Copying the same 4-neighborhood twice, and arranging the copies towards the same node creates an input $N'$ where the two nodes orient towards the middle. 
There is no legal way to label the rest of the path.} \\label{fig:so-path-lb}\n\\end{figure}\n\nLet us now formalize this intuition.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:so-hard-paths}]\n\tConsider any algorithm $A$ running in $T(n) = o(n)$ rounds. Then there exists $n_0$ such that for all $n \\geq n_0$, we have that $T(n) \\leq (n-5)/4$. Now fix such an integer $n$ and let $T = T(n)$ denote the running time of the algorithm.\n\t\n\tConsider an active node $v$ in the middle of a path $N$ on $n$ nodes. Let $\\ball_N(v,T)$ denote the $T$-neighborhood of $v$. Assume that $\\ball_N(v,T)$ is consistently port-numbered from \\emph{left} to \\emph{right}, as illustrated in Figure~\\ref{fig:so-path-lb}. Node $v$ must use the output label $\\mO$ on one of its incident edges; without loss of generality, assume that this is port~$1$. We can now construct a counterexample $N'$ as follows. Take two copies of $\\ball_N(v,T)$, denoted by $\\ball_{N'}(v_1,T)$ and $\\ball_{N'}(v_2,T)$. In particular, this includes the port-numbering in $\\ball_N(v,T)$.\n\tAdd one new node that is connected to the \\emph{right} endpoint of both $\\ball_{N'}(v_1,T)$ and $\\ball_{N'}(v_2,T)$. Finally, add leaves at the other endpoints of $\\ball_{N'}(v_1,T)$ and $\\ball_{N'}(v_2,T)$; see Figure~\\ref{fig:so-path-lb} for an illustration. We claim that the algorithm $A$ fails on $N'$.\n\t\n\tBy definition, edges must be labeled alternately $\\mO$ and $\\mI$ starting from both $v_1$ and $v_2$. Therefore we must eventually find an active node labeled $[\\mI, \\mI]$ or a passive node labeled $[\\mO, \\mO]$, an incorrect solution.\n\t\n\tThe total number of nodes in $N'$ is $n = 2(2T+1)+3 = 4T + 5$, giving $T = (n-5)/4$. Thus solving sinkless orientation requires at least $T+1 = (n-5)/4 + 1 \\geq n/4 - 1$ rounds, as required. \n\\end{proof}\n\n\\subsection{Solving Sinkless Orientation on Paths} \\label{ssec:so-alg-path}\n\nThe proof of Lemma~\\ref{lem:so-hard-paths} shows that it is impossible to solve sinkless orientation on paths in a sublinear number of rounds. Now we will show a linear upper bound: it is possible to solve sinkless orientation once all nodes see an endpoint of the path.\n\n\\begin{lemma}\\label{lem:so-path-ub}\n\tSinkless orientation can be solved in $\\lfloor n/2 \\rfloor$ rounds in the bipartite $\\PN$-model on $(2,2)$-biregular trees. \n\\end{lemma}\n\n\\begin{proof}\n\tThe idea of the algorithm is the following: initially send messages from the leaves towards the middle of the path. Orient edges against the incoming messages, i.e., toward the closest leaf. Once the messages reach the midpoint of the path, all edges have been correctly oriented away from the midpoint.\n\n\tThe algorithm works as follows. Initially all non-leaf nodes wait. The leaf nodes send a message to their neighbor and stop. If they are active, they output $\\mI$ on their incident edge. \n\tWhenever a node receives a message for the first time, in the next round it sends a message to the other port and stops. If it is an active node, it outputs $\\mO$ in the port from which it received the message, and $\\mI$ in the other port. That is, it orients its incident edges towards the closer leaf. If a node receives two messages in the same round, it is the midpoint of the path; it does not send any further messages. 
If it is an active node, it outputs $\\mO$ on both of its incident edges.\n\n\tThe algorithm runs in $\\lfloor n/2 \\rfloor$ rounds: on paths with an even number of nodes, all nodes have received a message in round $n/2 - 1$, and thus stop in the next round. On paths with an odd number of nodes, the middle node receives two messages in round $\\lfloor n/2 \\rfloor$ and stops.\n\n\tIt remains to show that our algorithm is correct. All leaf nodes are trivially labeled correctly. Any active non-leaf node always has an incident label $\\mO$. Now consider a passive node $u$: there is an active $v$ that sends $u$ a message before stopping. This node will output $\\mI$ on $\\{u,v\\}$, and thus $u$ is also correctly labeled.\n\\end{proof}\n\n\\begin{theorem}\n\tThe complexity of sinkless orientation on paths is $\\Theta(n)$.\n\\end{theorem}\n\n\\begin{proof}\n\tFollows from Lemmas \\ref{lem:so-hard-paths} and \\ref{lem:so-path-ub}.\n\\end{proof}\n\n\\section{Sinkless Orientation on Trees}\n\nIn Section~\\ref{sec:so-paths} we saw that if we require that degree-$2$ nodes have at least one outgoing edge, we arrive at a problem that is hard already in the case of paths. The proof of hardness was a straightforward argument that used local neighborhoods.\n\nHowever, what happens if we relax the problem slightly and allow any orientation around degree-$2$ nodes? The proof of hardness from Section~\\ref{ssec:so-hard-paths} no longer works, but does the problem get easier to solve?\n\nFor concreteness, let us consider \\emph{trees of maximum degree $3$}, that is, both active and passive nodes have degree at most $3$; the case of higher degrees is very similar. We define the problem so that nodes of degree $1$ and $2$ are unconstrained, but nodes of degree $3$ must have at least one outgoing edge. We can encode it as follows as a bipartite locally verifiable problem $\\Pi_0 = (\\Sigma_0, \\collA_0, \\collP_0)$:\n\\begin{align*}\n\t\\Sigma_0 &= \\{ \\mO, \\mI \\}, \\\\\n\t\\collA_0 &= \\bigl\\{ \\, [\\mO],\\,[\\mI],\\,[\\mO, \\mO],\\,[\\mO,\\mI],\\,[\\mI,\\mI],\\,[\\mO,\\mI,\\mI],\\,[\\mO,\\mO,\\mI],\\,[\\mO,\\mO,\\mO] \\, \\bigr\\}, \\\\\n\t\\collP_0 &= \\bigl\\{ \\, [\\mO],\\,[\\mI],\\,[\\mO, \\mO],\\,[\\mO,\\mI],\\,[\\mI,\\mI],\\,[\\mI,\\mO,\\mO],\\,[\\mI,\\mI,\\mO],\\,[\\mI,\\mI,\\mI] \\, \\bigr\\}.\n\\end{align*}\nHere we have listed all possible configurations for nodes of degrees $1$, $2$, and $3$.\n\n\\subsection{Solving Sinkless Orientation on Trees} \\label{ssec:so-trees-alg}\n\nThe algorithm for solving sinkless orientation on trees uses ideas similar to the algorithm on paths: each node $u$ must determine the closest unconstrained node $v$, i.e., a node of degree $1$ or $2$, and the path from $u$ to $v$ is oriented away from $u$. This will make all nodes happy: each active node of degree $3$ has an outgoing edge, and all other nodes are unconstrained.\n\nLet us call nodes of degree 1 and 2 \\emph{special nodes}. We must be careful in how the nodes choose the special node: the algorithm would fail if two nodes want to orient the same edge in different directions.\n\nThe algorithm functions as follows. In the first round, only special nodes are sending messages, broadcasting to each port. Then the special nodes stop and, if they are active nodes, they output $\\mI$ on each edge. Nodes of degree 3 \\emph{wake up} in the first round in which they receive at least one message. In the next round they broadcast to each port from which they did not receive a message in the previous round. 
After sending this message, the nodes stop. If they are active nodes, they orient their incident edges towards the smallest port from which they received a message: output $\\mO$ on that edge, and $\\mI$ on the other edges. \n\n\\paragraph{Correctness.} Assume that the closest special nodes are at distance $t$ from some node $u$. Assume that $v$ is one of those nodes, and let $(v_1, v_2, \\dotsc, v_{t+1})$ denote the unique path from $v = v_1$ to $u = v_{t+1}$. We claim that in each round $i$, node $v_{i}$ broadcasts to $v_{i+1}$. By assumption, $v$ is also one of the closest special nodes to all $v_i$; otherwise there would be a closer special node to $u$ as well. \nIn particular, there will never be a broadcast from $v_{i+1}$ to $v_i$, as then $v_{i+1}$ would have a different closer special node. Therefore each $v_i$ will broadcast to $v_{i+1}$ in round $i$. This implies that in round $t$, node $u$ will receive a broadcast from $v_t$.\n\nAll nodes that receive a broadcast become happy: Active nodes output $\\mO$ on one of the edges from which they received a broadcast, making them happy. They output $\\mI$ on the other edges, so each passive node is guaranteed that every edge from which it receives a broadcast has the label $\\mI$.\n\n\\paragraph{Time Complexity.} It remains to bound the round by which all nodes have received a broadcast. To do this, we observe that each node is at distance $O(\\log n)$ from a special node.\n\nConsider a fragment of a $3$-regular tree centered around some node~$v$, and assume that there are no special nodes near~$v$. Then at distance $1$ from $v$ there are $3$ nodes, at distance $2$ there are $6$ nodes, at distance $3$ there are $12$ nodes, and so on. In general, if we do not have any special nodes within distance $i$, then at distance $i$ there are \n$\n\t3\\cdot2^{i-1} > 2^i\n$\nnodes in the tree. At distance $i = \\log_2 n$, we would have more than $n$ nodes. Thus, within distance $\\log_2 n$, there has to be a special node. Since each node can stop once it has received a broadcast, the running time of the algorithm is $O(\\log n)$.\n\n\\subsection{Roadmap: Next Steps}\n\nWe have seen that sinkless orientation in trees can be solved in $O(\\log n)$ rounds. We would like to now prove a matching lower bound and show that sinkless orientation cannot be solved in $o(\\log n)$ rounds. We will apply the round elimination technique from Chapter~9 to do this. However, we will need one further refinement to the round elimination technique that will make our life a lot easier: we can \\emph{ignore all non-maximal configurations}. We will explain this idea in detail in Section~\\ref{sec:re-maximal}, and then we are ready to prove the hardness result in Section~\\ref{sec:so-hard-trees}.\n\n\n\\section{Maximal Output Problems}\\label{sec:re-maximal}\n\nIn Chapter~\\chapterref{9} we saw how to use the round elimination technique to construct the \\emph{output problem} $\\Pi' = \\re(\\Pi)$ for any given bipartite locally verifiable problem $\\Pi$. We will now make an observation that allows us to \\emph{simplify} the description of output problems. We will change the definition of output problems to include this simplification.\n\nConsider an output problem $\\Pi' = (\\Sigma, \\collA, \\collP)$. Recall that $\\Sigma$ now consists of \\emph{sets} of labels. Assume that there are two configurations\n\\begin{align*}\nX &= [X_1, X_2, \\dotsc, X_d], \\\\\nY &= [Y_1, Y_2, \\dotsc, Y_d],\n\\end{align*}\nin $\\collA$. 
We say that $Y$ \\emph{contains} $X$ if we have $X_i \\subseteq Y_i$ for all $i$.\n\nIf $Y$ contains $X$, then configuration $X$ is redundant; whenever an algorithm solving $\\Pi'$ would like to use the configuration $X$, it can equally well use $Y$ instead:\n\\begin{itemize}\n\t\\item Active nodes are still happy if active nodes switch from $X$ to $Y$: By assumption, $Y$ is also a configuration in $\\collA$.\n\t\\item Passive nodes are still happy if active nodes switch from $X$ to $Y$: Assume that $Z = [Z_1, Z_2, \\dotsc, Z_\\delta]$ is a passive configuration in $\\collP$. As this is a passive configuration of $\\re(\\Pi)$, it means that there is a choice $z_i \\in Z_i$ such that $[z_1, z_2, \\dotsc, z_\\delta]$ is an active configuration in the original problem $\\Pi$. But now if we replace each $Z_i$ with a superset $Z'_i \\supseteq Z_i$, then we can still make the same choice $z_i \\in Z'_i$, and hence $Z' = [Z'_1, Z'_2, \\dotsc, Z'_\\delta]$ also has to be a passive configuration in $\\collP$. Therefore replacing a label with its superset is always fine from the perspective of passive nodes, and in particular switching from $X$ to $Y$ is fine.\n\\end{itemize}\nTherefore we can \\emph{omit} all active configurations that are contained in another active configuration and only include the \\emph{maximal} configurations, i.e., configurations that are not contained in any other configuration.\n\nWe get the following mechanical process for constructing the output problem $\\re(\\Pi) = (\\Sigma, \\collA, \\collP)$.\n\\begin{enumerate}[noitemsep]\n\t\\item Construct the output problem $\\re(\\Pi) = (\\Sigma, \\collA, \\collP)$ as described in Section~\\longref{9.2.2}{ssec:output-problems}.\n\t\\item Remove all non-maximal active configurations from $\\collA$.\n\t\\item Remove all unused elements from $\\Sigma$.\n\t\\item Remove all passive configurations containing labels not in $\\Sigma$.\n\\end{enumerate}\n\nThe resulting problem is exactly as hard to solve as the original problem:\n\\begin{itemize}\n\t\\item Since the simplified sets of configurations are subsets of the original sets of configurations, any solution to the simplified problem is a solution to the original problem, and thus the original problem is  \\emph{at most as hard as} the simplified problem.\n\t\\item By construction, any algorithm solving the original output problem can solve the simplified problem equally fast, by replacing some labels by their supersets as appropriate. Therefore the original problem is \\emph{at least as hard as} the simplified problem. 
\n\\end{itemize}\nWe will apply this simplification when we use the round elimination technique to analyze the sinkless orientation problem.\n\n\\section{Hardness of Sinkless Orientation on Trees}\\label{sec:so-hard-trees}\n\nWe will now show that sinkless orientation requires $\\Omega(\\log n)$ rounds on $(3,3)$-biregular trees\\mydash and therefore also in trees of maximum degree $3$, as $(3,3)$-biregular trees are a special case of such trees.\n\nLet us first write down the sinkless orientation problem as a bipartite locally verifiable problem $\\Pi_0 = (\\Sigma_0, \\collA_0, \\collP_0)$ in $(3,3)$-biregular trees; as before, we will only keep track of the configurations for nodes of degree~$3$, as leaf nodes are unconstrained:\n\\begin{align*}\n\\Sigma_0 &= \\{ \\mO, \\mI \\}, \\\\\n\\collA_0 &= \\bigl\\{ \\, [\\mO, x, y] \\bigm| x, y \\in \\Sigma \\, \\bigr\\}, \\\\\n\\collP_0 &= \\bigl\\{ \\, [\\mI, x, y] \\bigm| x, y \\in \\Sigma \\, \\bigr\\}.\n\\end{align*}\n\n\\subsection{First Step}\n\n\\begin{lemma}\n\tLet $\\Pi_0$ be the sinkless orientation problem. Then the output problem is $\\Pi_1 = \\re(\\Pi_0) = (\\Sigma_1, \\collA_1, \\collP_1)$, where\n\t\\begin{align*}\n\t\t\\Sigma_1 &= \\bigl\\{ \\, \\{ \\mI \\},\\, \\{ \\mO, \\mI \\} \\, \\bigr\\}, \\\\\n\t\t\\collA_1 &= \\Bigl\\{ \\, \\bigl[\\{ \\mI \\},\\, \\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\} \\bigr] \\, \\Bigr\\}, \\\\\n\t\t\\collP_1 &= \\Bigl\\{ \\, \\bigl[\\{ \\mO, \\mI \\}, x, y\\bigr] \\Bigm| x,y \\in \\Sigma_1 \\, \\Bigr\\}.\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\n\tLet us follow the procedure from Section~\\ref{sec:re-maximal}. First, we arrive at alphabet $\\Sigma_1$ that contains all non-empty subsets of $\\Sigma_0$: \n\t\\[\n\t\t\\Sigma_1 = \\bigl\\{\\, \\{\\mO\\},\\,\\{\\mI\\},\\,\\{\\mO,\\mI\\} \\,\\bigr\\}.\n\t\\]\n\tThe active configurations $\\collA_1$ consist of all multisets $[X, Y, Z]$ such that no matter how we choose $x \\in X$, $y \\in Y$, and $z \\in Z$, at least one element of the multiset $[x,y,z]$ is $\\mI$. This happens exactly when at least one of the labels $X$, $Y$, and $Z$ is the singleton set $\\{ \\mI \\}$. We get that \n\t\\begin{align*}\n\t\t\\collA_1 = \\Bigl\\{\\, & [ \\{ \\mI \\}, X, Y ] \\bigm| X,Y \\subseteq \\{ \\mO, \\mI \\} \\, \\Bigr\\} \\\\\n\t\t= \\Bigl\\{\\,\n\t\t&\\bigl[\\{ \\mI \\},\\, \\{\\mI \\},\\, \\{ \\mI \\}\\bigr],\\\\\n\t\t&\\bigl[\\{\\mI\\},\\, \\{\\mI\\},\\, \\{ \\mO \\}\\bigr],\\\\\n\t\t&\\bigl[\\{\\mI\\},\\,\\{\\mI\\},\\,\\{\\mO,\\mI\\} \\bigr],\\\\\n\t\t&\\bigl[\\{ \\mI \\},\\, \\{\\mO \\},\\,\\{ \\mO \\}\\bigr],\\\\\n\t\t&\\bigl[\\{ \\mI \\},\\, \\{\\mO \\},\\,\\{ \\mO, \\mI \\}\\bigr], \\\\\n\t\t&\\bigl[\\{\\mI\\},\\, \\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\}\\bigr] \\, \\Bigr\\}.\n\t\\end{align*}\n\t\n\tNext we remove all non-maximal configurations. We note that all other active configurations are contained in the configuration \n\t\\[\n\t\\bigl[\\{ \\mI \\},\\, \\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\} \\bigr].\n\t\\]\n\tThis becomes the only active configuration: \n\t\\[\n\t\t\\collA_1 = \\Bigl\\{ \\, \\bigl[\\{ \\mI \\},\\, \\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\} \\bigr] \\, \\Bigr\\}.\n\t\\]\n\tSince the label $\\{ \\mO \\}$ is never used, we may remove it from the alphabet, too: we get that\n\t\\[\n\t\t\\Sigma_1 = \\bigl\\{\\, \\{\\mI\\}, \\{\\mO, \\mI \\} \\,\\bigr\\}.\n\t\\]\n\t\n\tThe passive configurations are all multisets such that at least one label contains $\\mO$. 
Thus the simplified passive configurations are\n\t\\begin{align*}\n\t\t\\collP_1 = \\Bigl\\{\\,\n\t\t&\\bigl[\\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\}\\bigr],\\\\\n\t\t&\\bigl[\\{ \\mO, \\mI \\},\\, \\{ \\mO, \\mI \\},\\,\\{ \\mI \\}\\bigr],\\\\\n\t\t&\\bigl[\\{ \\mO, \\mI \\},\\,\\{\\mI\\},\\,\\{\\mI\\}\\bigr] \\, \\Bigr\\}. \\qedhere\n\t\\end{align*}\n\\end{proof}\n\n\\subsection{Equivalent Formulation}\\label{ssec:so-output-equiv}\n\nNow let us simplify the notation slightly. We say that a problem $\\Pi'$ is \\emph{equivalent} to another problem $\\Pi$ if a solution of $\\Pi'$ can be converted in zero rounds to a solution of $\\Pi$ and vice versa. In particular, equivalent problems are exactly as hard to solve.\n\n\\begin{lemma}\n\tLet $\\Pi_0$ be the sinkless orientation problem. Then the output problem $\\re(\\Pi_0)$ is equivalent to $\\Pi_1 = (\\Sigma_1, \\collA_1, \\collP_1)$, where\n\t\\begin{align*}\n\t\t\\Sigma_1 &= \\{ \\mA, \\mB \\}, \\\\\n\t\t\\collA_1 &= \\bigl\\{ \\, [\\mA, \\mB, \\mB ] \\, \\bigr\\}, \\\\\n\t\t\\collP_1 &= \\bigl\\{ \\, [\\mB, x, y] \\bigm| x,y \\in \\Sigma_1 \\, \\bigr\\}.\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}\n\tRename the labels of $\\re(\\Pi_0)$ as follows to arrive at $\\Pi_1$:\n\t\\begin{align*}\n\t\t\\mA &= \\{ \\mI \\},\\\\\n\t\t\\mB &= \\{ \\mO, \\mI \\}. \\qedhere\n\t\\end{align*}\n\\end{proof}\n\n\\noindent In what follows, we will use $\\Pi_1$ and $\\re(\\Pi_0)$ interchangeably, as they are equivalent.\n\n\\subsection{Fixed Points in Round Elimination} \\label{ssec:re-failure}\n\nAs we will see soon, problem $\\Pi_1$ is a \\emph{fixed point} in round elimination: when round elimination is applied to $\\Pi_1$, the output problem is again~$\\Pi_1$ (or, more precisely, a problem equivalent to $\\Pi_1$).\n\nThis means that if we assume that $\\Pi_1$ can be solved in $T$ rounds, then by applying round elimination $T$ times we get a $0$-round algorithm for $\\Pi_1$. It can be shown that $\\Pi_1$ is \\emph{not} $0$-round solvable. This would seem to imply that $\\Pi_1$, and thus sinkless orientation, are not solvable at all, which would contradict the existence of the $O(\\log n)$-time algorithm presented in Section~\\ref{ssec:so-trees-alg}!\n\nTo resolve this apparent contradiction, we must take a closer look at the assumptions that we have made. The key step in round elimination happens when a node $u$ simulates the \\emph{possible} outputs of its neighbors. The correctness of this step assumes that the possible $T$-neighborhoods of the neighbors are \\emph{independent} given the $(T-1)$-neighborhood of $u$. When $T$ is so large in comparison with $n$ that the $T$-neighborhoods of the neighbors put together might already imply the existence of more than $n$ nodes, this assumption no longer holds\\mydash see Figure~\\ref{fig:full-tree} for an example.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[page=\\PSOFullTree,scale=0.3]{figs.pdf}\n\t\\caption{If we know that, e.g., $n = 35$, then orange, green, and blue extensions are no longer independent of each other: inputs (a) and (b) are possible but we cannot combine them arbitrarily to form e.g.\\ input~(c).} \\label{fig:full-tree}\n\\end{figure}\n\nFor the remainder of this chapter we consider algorithms that know the value $n$, the number of nodes in the graph. 
This allows us to define a \\emph{standard form} for algorithms that run in $T = T(n)$ rounds, where $T(n)$ is some function of $n$: since $n$ is known, each node can calculate $T(n)$, gather everything up to distance $T(n)$, and simulate any $T(n)$-time algorithm.\n\nIn $(d,\\delta)$-biregular trees, where $d > 2$, it can be shown that round elimination can be applied if the initial algorithm is assumed to have running time $T(n) = o(\\log n)$: this guarantees the independence property in the simulation step. However, round elimination fails for some function $T(n) = \\Theta(\\log n)$; calculating this threshold is left as Exercise~\\ref{ex:re-boundary}. \n\nAny problem that can be solved in time $T(n)$ can be solved in time $T(n)$ with an algorithm in the standard form. If the problem is a fixed point and $T(n) = o(\\log n)$, we can apply round elimination. We get the following lemma.\n\n\\begin{lemma}\n\tAssume that bipartite locally verifiable problem $\\Pi$ on $(d,d)$-biregular trees, for $d>2$, is a fixed point. Then the deterministic complexity of $\\Pi$ in the bipartite $\\PN$-model is either $0$ rounds or $\\Omega(\\log n)$ rounds, even if the number of nodes $n$ is known.\n\\end{lemma}\n\n\\subsection{Sinkless Orientation Gives a Fixed Point}\n\nWe will now show that the output problem $\\Pi_1 = \\re(\\Pi_0)$ of the sinkless orientation problem $\\Pi_0$ is a fixed point, that is, $\\re(\\Pi_1)$ is a problem equivalent to $\\Pi_1$ itself. Since this problem cannot be solved in 0 rounds, it requires $\\Omega(\\log n)$ rounds. As sinkless orientation requires, by definition, one more round than its output problem, sinkless orientation also requires $\\Omega(\\log n)$ rounds.\n\n\\begin{lemma}\n\tThe output problem $\\Pi_1 = (\\Sigma_1, \\collA_1, \\collP_1)$ of sinkless orientation, given by\n\t\\begin{align*}\n\t\t\\Sigma_1 &= \\{ \\mA, \\mB \\}, \\\\\n\t\t\\collA_1 &= \\bigl\\{ \\, [\\mA, \\mB, \\mB ] \\, \\bigr\\}, \\\\\n\t\t\\collP_1 &= \\bigl\\{ \\, [\\mB, x, y] \\bigm| x,y \\in \\Sigma_1 \\, \\bigr\\},\n\t\\end{align*}\n\tis a fixed point.\n\\end{lemma}\n\n\\begin{proof}\n\tLet $\\Pi_2 = \\re(\\Pi_1) = (\\Sigma_2, \\collA_2, \\collP_2)$ denote the output problem of $\\Pi_1$. Again, we have that \n\t\\[\n\t\t\\Sigma_2 = \\bigl\\{ \\, \\{ \\mA \\}, \\{ \\mB \\}, \\{ \\mA, \\mB \\} \\, \\bigl\\}.\n\t\\]\n\tThe active configurations $\\collA_2$ are\n\t\\[\n\t\t\\collA_2 = \\Bigl\\{\\, \\bigl[ \\{ \\mB \\}, x, y \\bigr] \\Bigm| x,y \\subseteq \\{ \\mA, \\mB \\}\\,  \\Bigl\\}. \n\t\\]\n\tThat is, one set must be the singleton $\\{ \\mB \\}$ to satisfy $\\collP_1$ for \\emph{all choices}, and the remaining sets are arbitrary.\n\t\n\tNext we determine the maximal configurations. Again, there is a single active configuration that covers the other configurations:\n\t\\[\n\t\t\\collA_2 = \\Bigl\\{\\,  \\bigl[\\{\\mB \\}, \\{ \\mA,\\mB\\}, \\{\\mA, \\mB \\}\\bigr]\\,  \\Bigr\\}. \n\t\\]\n\tThe alphabet is immediately simplified to\n\t\\[\n\t\t\\Sigma_2 = \\bigl\\{\\, \\{\\mB \\}, \\{\\mA, \\mB \\} \\,\\bigr\\},\n\t\\]\n\tas the label $\\{\\mA \\}$ is never used by any active configuration.\n\t\n\tThe passive configurations $\\collP_2$ are all multisets that contain the active configuration $[\\mA, \\mB, \\mB]$. 
Since $\\mA$ is now only contained in $\\{ \\mA, \\mB \\}$, we get that\n\t\\begin{align*}\n\t\t\\collP_2 = \\Bigl\\{\\,\n\t\t&\\bigl[\\{ \\mA, \\mB \\},\\, \\{ \\mA, \\mB \\},\\, \\{ \\mA, \\mB \\}\\bigr], \\\\\n\t\t&\\bigl[\\{ \\mA, \\mB \\},\\, \\{ \\mA, \\mB \\},\\, \\{ \\mB \\}\\bigr], \\\\\n\t\t&\\bigl[\\{ \\mA, \\mB \\},\\, \\{ \\mB \\},\\, \\{ \\mB \\}\\bigr] \\, \\Bigr\\}.\n\t\\end{align*}\n\tNow we may do a simple renaming trick to see that $\\Pi_2$ is equivalent to $\\Pi_1$: rename $\\{ \\mB \\} \\to \\mA$ and $\\{\\mA,\\mB\\} \\to \\mB$. Written this way, we have that $\\Pi_2$ is equivalent to the following problem:\n\t\\begin{align*}\n\t\t\\Sigma_2 &= \\{ \\mA, \\mB \\}, \\\\\n\t\t\\collA_2 &= \\bigl\\{\\,  [\\mA, \\mB, \\mB ] \\,\\bigr\\}, \\\\\n\t\t\\collP_2 &= \\bigl\\{\\,  [\\mB, x, y] \\bigm| x,y \\in \\Sigma_2 \\,\\bigr\\},\n\t\\end{align*}\n\twhich is exactly the same problem as $\\Pi_1$.\n\\end{proof}\n\n\\section{Quiz}\n\nCalculate the \\emph{number of different $2$-round algorithms} in the $\\PN$ model on $(3,3)$-biregular trees for bipartite locally verifiable problems with the binary alphabet $\\Sigma = \\{0,1\\}$.\n\nHere two algorithms $A_1$ and $A_2$ are considered to be \\emph{different} if there is some port-numbered network $N$ and some edge $e$ such that the label of edge $e$ in the output of $A_1$ is different from the label of the same edge $e$ in the output of $A_2$. Note that $A_1$ and $A_2$ might solve the same problem; they just solve it differently. Please remember to take into account that in a $(3,3)$-biregular tree each node has degree $1$ or $3$.\n\nPlease give your answer in scientific notation with two significant digits: your answer should be a number of the form  $a \\cdot 10^b$, where $a$ is rounded to two decimal digits, we have $1 \\le a < 10$, and $b$ is a natural number.\n\n\\section{Exercises}\n\n\\begin{ex}[0-round solvability]\n\tProve that the following problems are not 0-round solvable. \\begin{subex}\n\t\t\\item $(d+1)$-edge coloring in $(d,d)$-biregular trees (see Section~\\longref{9.1.2}{ssec:bipartite-examples}).\n\t\t\\item Maximal matching in $(d,d)$-biregular trees (see Section~\\longref{9.1.2}{ssec:bipartite-examples}).\n\t\t\\item $\\Pi_1$, the output problem of sinkless orientation, in $(3,3)$-biregular trees (see Section~\\ref{ssec:so-output-equiv}).\n\t\\end{subex} \n\t\\hint{Show that a 0-round algorithm consists of choosing one active configuration and assigning its labels to the ports. Show that for any way of assigning the outputs to the ports, there exists a port numbering such that the incident edges of a passive node are not labeled according to any passive configuration.}\n\\end{ex}\n\n\\begin{ex}[higher degrees]\n\tGeneralize the sinkless orientation problem from $(3,3)$-biregular trees to $(10,10)$-biregular trees. 
Apply round elimination and show that you again obtain a fixed point.\n\\end{ex}\n\n\\begin{ex}[non-bipartite sinkless orientation]\n\tDefine non-bipartite sinkless orientation as the following problem $\\Pi = (\\Sigma, \\collA, \\collP)$ on $(3,2)$-biregular trees:\n\t\\begin{align*}\n\t\t\\Sigma &= \\{ \\mO, \\mI \\}, \\\\\n\t\t\\collA &= \\bigl\\{\\,  [\\mO, x, y ] \\bigm| x,y \\in \\Sigma \\,\\bigr\\}, \\\\\n\t\t\\collP &= \\bigl\\{\\,  [\\mI, \\mO] \\, \\bigr\\}.\n\t\\end{align*}\n\tProve that applying round elimination to $\\Pi$ leads to a period-$2$ point, that is, to a problem $\\Pi'$ such that $\\Pi' = \\re(\\re(\\Pi'))$.\n\\end{ex}\n\n\\begin{ex}[matching lower bound]\\label{ex:pm-lb}\n\tLet us define \\emph{sloppy perfect matching} in trees as a matching such that all non-leaf nodes are matched. Encode this problem as a bipartite locally verifiable problem on $(3,3)$-biregular trees. Show that solving it requires $\\Omega(\\log n)$ rounds in the $\\PN$-model with deterministic algorithms.\n\\end{ex}\n\n\\begin{ex}[matching upper bound]\\label{ex:pm-ub}\n\tConsider the sloppy perfect matching problem from Exercise~\\ref{ex:pm-lb}.\n\tDesign an algorithm for solving it with a deterministic $\\PN$-algorithm on $(3,3)$-biregular trees in $O(\\log n)$ rounds.\n\t\\hint{Decompose the graph into \\emph{layers} $(V_0, V_1, \\dots, V_L)$, where nodes in layer $i$ have distance $i$ to the closest leaf. Then iteratively solve the problem, starting from layer $V_L$: match all nodes in layer $V_L$, $V_{L-1}$, and so on.}\n\\end{ex}\n\n\\begin{ex}[sinkless and sourceless]\\label{ex:sinkless-sourceless}\n\tSinkless and sourceless orientation is the problem of orienting the edges of the graph so that each non-leaf node has at least one outgoing edge \\emph{and} at least one incoming edge.\n\tEncode the sinkless and sourceless orientation problem as a bipartite locally verifiable problem on $(5,5)$-biregular trees.\n\tDesign an algorithm for solving sinkless and sourceless orientation on $(5,5)$-biregular trees.\n\\end{ex}\n\n\\begin{ex}[failure of round elimination] \\label{ex:re-boundary}\n\tIn Section~\\ref{ssec:re-failure} we discussed that round elimination fails if, in the simulation step, the $T(n)$-neighborhoods of the neighbors are dependent on each other. This happens when there exist $T(n)$-neighborhoods of the neighbors such that the resulting tree would have more than $n$ nodes.\n\tConsider $(d,d)$-biregular trees. Calculate the bound for $F(n)$ such that round elimination fails for algorithms with running time $T(n) \\geq F(n)$.\n\\end{ex}\n\n\\begin{exs}\\label{ex:sinkless-sourceless-3}\n\tDesign an algorithm for solving sinkless and sourceless orientation on $(3,3)$-biregular trees.\n\\end{exs}\n\n\\section{Bibliographic Notes}\n\nBrandt et al.\\ \\cite{brandt16lll} introduced the sinkless orientation problem and proved that it cannot be solved in $o(\\log\\log n)$ rounds with randomized algorithms, while Chang et al.\\ \\cite{chang16exponential} showed that it cannot be solved in $o(\\log n)$ rounds with deterministic algorithms. 
Ghaffari and Su \\cite{ghaffari17distributed} gave matching upper bounds.\n\nExercises \\ref{ex:pm-lb} and \\ref{ex:pm-ub} are inspired by \\cite{balliu20binary-labeling}, and Exercises \\ref{ex:sinkless-sourceless} and \\ref{ex:sinkless-sourceless-3} are inspired by \\cite{ghaffari19degree-splitting}.\n", "meta": {"hexsha": "7810c8ee5734510209fa1d391b76b549ee978824", "size": 31023, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/ch10.tex", "max_stars_repo_name": "suomela/da2020", "max_stars_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2020-12-11T00:47:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T15:46:43.000Z", "max_issues_repo_path": "book/ch10.tex", "max_issues_repo_name": "suomela/da2020", "max_issues_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-17T18:31:27.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-17T18:42:16.000Z", "max_forks_repo_path": "book/ch10.tex", "max_forks_repo_name": "suomela/da2020", "max_forks_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-06-22T03:53:31.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-11T12:33:40.000Z", "avg_line_length": 82.0714285714, "max_line_length": 738, "alphanum_fraction": 0.7055410502, "num_tokens": 9584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.593558335581337}}
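The fixed-point computation in the proof above is mechanical, and it is easy to check by machine. The following Python sketch (our own illustration; it is not part of the book, and all names are ours) performs one round-elimination step on $\Pi_1$: it builds the active configurations universally from $\collP_1$, keeps only the maximal ones, simplifies the alphabet, and then builds the passive configurations existentially from $\collA_1$.
\begin{verbatim}
# One round-elimination step for Pi_1, following the proof step by step.
# A configuration is a sorted tuple (multiset); a label of re(Pi) is a
# frozenset of original labels.
from itertools import combinations_with_replacement, permutations, product

A, B = "A", "B"
A1 = {tuple(sorted((A, B, B)))}                        # active: [A,B,B]
P1 = {tuple(sorted((B, x, y))) for x in (A, B) for y in (A, B)}

subsets = [frozenset(s) for s in ({A}, {B}, {A, B})]   # nonempty subsets

def choices(cfg):
    # every way of picking one label from each set of the configuration
    return (tuple(sorted(c)) for c in product(*cfg))

# Active configurations of Pi_2: every choice must be passive in Pi_1.
A2 = [cfg for cfg in combinations_with_replacement(subsets, 3)
      if all(c in P1 for c in choices(cfg))]

def covered(c1, c2):
    # c1 is covered by c2 if the sets can be matched pairwise by inclusion
    return c1 != c2 and any(all(x <= y for x, y in zip(p, c2))
                            for p in permutations(c1))

A2 = [c for c in A2 if not any(covered(c, d) for d in A2)]   # maximal only
Sigma2 = sorted({x for cfg in A2 for x in cfg}, key=len)     # used labels

# Passive configurations of Pi_2: some choice must be active in Pi_1.
P2 = [cfg for cfg in combinations_with_replacement(Sigma2, 3)
      if any(c in A1 for c in choices(cfg))]

print(A2)   # one configuration: ({B}, {A,B}, {A,B})
print(P2)   # the three configurations listed in the proof
\end{verbatim}
Renaming $\{\mB\} \to \mA$ and $\{\mA,\mB\} \to \mB$ in the output reproduces $\Pi_1$ exactly, which is the fixed-point property.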
{"text": "% file: problems/sorting-lower-bound.tex\n\n\\section{Lower Bound for Comparison-based Sorting (UD Problem 6.13)}\n\nProve a lower bound of $O(n \\lg n)$ on the time complexity of any comparison-based sorting algorithm.\n\n\\subsection{Understanding the Problem}\n\nFirst of all, this problem is \\emph{flawed}, if not wrong.\n\\marginnote{I am not surprised at all if you are good at proving something wrong to be a theorem.}\nIt should be $\\Omega(n \\lg n)$, instead of $O(n \\lg n)$.\n\n{\\bf Comparison-based Sorting.}\n\n\\begin{itemize}\n  \\item We only consider the sorting algorithms in which\n    \\marginnote{Counting sort, radix sort, or bucket sort is not a comparison-based sorting algorithm.}\n    comparison is the only way to obtain order information between elements; and\n  \\item Comparisons of elements are the critical operations we care about.\n    \\marginnote{In \\href{https://en.wikipedia.org/wiki/Pancake\\_sorting}{pancake sorting}, \n    we are interested in the minimum number of flips (or technically, prefix reversals).\\cite{Gates:DM1979}}\n\\end{itemize}\n\n{\\bf Time complexity $O(n \\lg n)$.} Algorithms on the inputs of size $n$.\n\n\\subsection{Solution}\n\n\\subsection{Comments}\n", "meta": {"hexsha": "1ffbd830d633feeb5ce5713d549074f0105fdfb4", "size": 1173, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2017/2017-2nd/2-2-efficiency/problems/sorting-lower-bound.tex", "max_stars_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_stars_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2018-03-16T04:33:03.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-11T14:50:38.000Z", "max_issues_repo_path": "2017/2017-2nd/2-2-efficiency/problems/sorting-lower-bound.tex", "max_issues_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_issues_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2018-03-19T10:36:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-03T04:58:39.000Z", "max_forks_repo_path": "2017/2017-2nd/2-2-efficiency/problems/sorting-lower-bound.tex", "max_forks_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_forks_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2018-03-16T04:26:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-11T11:42:48.000Z", "avg_line_length": 40.4482758621, "max_line_length": 108, "alphanum_fraction": 0.7510656436, "num_tokens": 294, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5935583315938486}}
{"text": "\\chapter{Non-Relativistic Quantum Mechanics}\nHere an overview of some basic results from Schr{\\\"o}dinger and Heisenberg's quantum mechanics are given. The quantum harmonic oscillator (QHO) is solved exactly, and extended to the anharmonic case with first-order perturbation theory. Lagrangian mechanics is briefly covered, before some consideration of the important Dirac $\\delta$ and Heaviside $\\theta$ step functions.\n\nA time-varying quantum state my be expressed by a vector $\\ket{\\psi(t)}$. In the position representation, this yields the wavefunction, $\\psi(x, t) \\equiv \\braket{x}{\\psi(t)}$.\n\n\\section{Schr{\\\"o}dinger's equation and probability current density}\nWe start from the classical energy-momentum relation for a non-relativistic particle with mass $m$,\n\\begin{equation}\nE = \\frac{p^2}{2m}. \\label{eq:energyMomentum}\n\\end{equation}\nReplace the energy with the operator $E\\rightarrow\\hat{E} = i \\frac{\\partial}{\\partial t}$ and momentum with $p\\rightarrow \\hat{p} = -i \\vec{\\nabla}$. Then \\eqref{eq:energyMomentum} becomes the time-dependent Schr dinger equation (TDSE),\n\\begin{equation}\ni \\frac{\\partial \\psi}{\\partial t} = -\\frac{1}{2m} \\nabla^2 \\psi \\label{eq:schrodinger}.\n\\end{equation}\nMultiplying this by the complex conjugate of the wavefunction, $\\psi^*$,\n\\begin{equation}\ni \\psi^* \\frac{\\partial \\psi}{\\partial t} = -\\frac{1}{2m} \\psi^* \\nabla^2 \\psi \\label{eq:part1}.\n\\end{equation}\nTake the complex conjugate of \\eqref{eq:schrodinger} and multiply by $\\psi$,\n\\begin{equation}\n-i \\frac{\\partial \\psi^*}{\\partial t} \\psi = -\\frac{1}{2m} (\\nabla^2 \\psi^*) \\psi \\label{eq:part2}.\n\\end{equation}\nSubtracting \\eqref{eq:part2} from \\eqref{eq:part1},\n\\begin{equation}\ni\\left( \\psi^*\\frac{\\partial \\psi}{\\partial t} + \\frac{\\partial \\psi^*}{\\partial t} \\psi \\right) = -\\frac{1}{2m} \\left[ \\psi^* (\\nabla^2 \\psi) - \\psi (\\nabla^2 \\psi^*) \\right]\n\\end{equation}\nand rearranging using the product rule,\n\\begin{equation}\n\\frac{\\partial}{\\partial t}(\\psi^*\\psi) + \\frac{i}{2m} \\vec{\\nabla}\\cdot \\left[ \\psi(\\vec{\\nabla}\\psi^*) - (\\vec{\\nabla}\\psi)\\psi^* \\right] = 0.\n\\end{equation}\nThis is nothing but a continuity equation for the probability density $\\rho = \\psi^*\\psi = \\abs{\\psi}^2$ and the probability current density $\\vec{j} = \\frac{i}{2m}\\left[ \\psi(\\vec{\\nabla}\\psi^*) - (\\vec{\\nabla}\\psi)\\psi^* \\right]$.\n\n\\subsubsection{Application to a plane wave}\nConsider a free plane wave whose wavefunction is given by $\\psi(x, t) = \\mathcal{N} e^{i(\\vec{p}\\cdot\\vec{x} - Et)}$, where $\\mathcal{N}$ is a normalisation constant. Then, applying the definitions for $\\rho$ and $\\vec{j}$ above,\n\\begin{align}\n\\rho &= \\abs{\\psi}^2 = \\abs{N}^2 \\\\\n\\vec{j} &= \\frac{i}{2m}\\left[ \\psi(\\vec{\\nabla}\\psi^*) - (\\vec{\\nabla}\\psi)\\psi^* \\right] = \\frac{\\abs{N}^2}{m} \\vec{p}\n\\end{align}\n\n\\section{Heisenberg and interaction pictures}\\\n\\subsection{Heisenberg picture}\nIn the above, we have used the Schr{\\\"o}dinger picture where operators remain constant in time, and the states can vary. In contrast, the Heisenberg picture treats the states as constants, and the operators evolve in time. 
To show that these are equivalent, solve the time-dependent Schr{\\\"o}dinger equation for a state $\\ket{\\psi(t)}$.\n\\begin{align}\ni \\frac{\\dd \\ket{\\psi(t)}}{\\dd t} &= \\hat{H} \\ket{\\psi(t)} \\\\\n\\Rightarrow\\quad \\int\\limits_0^t \\frac{\\dd \\ket{\\psi(t^\\prime)}}{\\ket{\\psi(t^\\prime)}} &= -i \\int\\limits_0^t \\hat{H} \\, \\dd t^\\prime \\\\\n\\Rightarrow\\quad \\ket{\\psi(t)} &= e^{-i\\hat{H}t} \\ket{\\psi(0)} \\quad \\text{for time-independent $\\hat{H}$.} \\label{eq:evolution}\n\\end{align}\nIn quantum mechanics, observables connect the theory with measurements. Observable quantities are expectation values of operators. Consider some general operator, $\\hat{A}$, whose expectation value is given by\n\\begin{align}\n\\expval{A} &= \\expval{\\hat{A}}{\\psi(t)} \\\\\n&= \\expval{e^{i\\hat{H}t}\\hat{A}e^{-i\\hat{H}t}}{\\psi(0)}.\n\\end{align}\nTherefore, it is equivalent to define time-independent states and time-dependent operators in the Heisenberg picture:\n\\begin{align}\n\\ket{\\psi_H} &\\equiv \\ket{\\psi(0)} \\\\\n\\hat{A}_H &\\equiv e^{i\\hat{H}t}\\hat{A}e^{-i\\hat{H}t}.\n\\end{align}\nThen the expectation value is simply\n\\begin{equation}\n\\expval{A} = \\expval{\\hat{A}_H}{\\psi_H}\n\\end{equation}\nand\n\\begin{align}\n\\dv{\\hat{A}_H}{t} &= i\\hat{H}e^{i\\hat{H}t}\\hat{A}e^{-i\\hat{H}t} - ie^{i\\hat{H}t}\\hat{A}\\hat{H}e^{-i\\hat{H}t} \\nonumber \\\\\n&= i(\\hat{H}\\hat{A} - \\hat{A}\\hat{H}) \\nonumber \\\\\n&= i [ \\hat{H}, \\hat{A} ]\n\\end{align}\n\n\\subsection{The interaction picture}\nIn the interaction picture, both states and operators carry some time dependence. We split the Hamiltonian into a free part $\\hat{H}_0$ and an interaction part $\\hat{H}^\\prime$,\n\\begin{equation}\n\\hat{H} = \\hat{H}_0 + \\hat{H}^\\prime.\n\\end{equation}\nThe free part is generally time-independent and easily solved. We then say that operators $\\hat{A}_I$ evolve according to the free part of the Hamiltonian,\n\\begin{equation}\n\\hat{A}_I(t) = e^{i\\hat{H}_0t}\\hat{A}e^{-i\\hat{H}_0t}.\n\\end{equation}\nThen the operator in the interaction picture evolves according to\n\\begin{equation}\n\\dv{\\hat{A}_I}{t} = i [ \\hat{H}_0, \\hat{A}_I(t) ].\n\\end{equation}\nNow consider the expectation value of the operator, $\\expval{A} = \\expval{e^{i\\hat{H}_0t}\\hat{A}e^{-i\\hat{H}_0t}}{\\psi_I(t)}$. We see that for this to be equivalent to the Schr{\\\"o}dinger picture expectation value, the states must be defined as\n\\begin{equation}\n\\ket{\\psi_I(t)} = e^{i\\hat{H}_0 t}\\ket{\\psi(t)},\n\\end{equation}\nwhich evolve according to the interaction part of the Hamiltonian,\n\\begin{equation}\ni \\dv{\\ket{\\psi_I(t)}}{t} = \\hat{H}^\\prime_I(t) \\ket{\\psi_I(t)}, \\qquad \\hat{H}^\\prime_I(t) = e^{i\\hat{H}_0 t}\\hat{H}^\\prime e^{-i\\hat{H}_0 t}.\n\\end{equation}\nThe interaction picture is used in the perturbative expansion of the $S$-matrix in quantum field theory (QFT). 
This leads to the concept of using Feynman diagrams to calculate amplitudes.\n\nNote that the three different pictures of quantum mechanics coincide at $t=0$.\n\n\\section{Quantum harmonic oscillator}\nThe quantum harmonic oscillator is an exactly solvable problem that neatly introduces ladder operators.\n\nConsider a Hamiltonian of the form\n\\begin{equation}\nH = \\frac{p^2}{2m} + \\frac{1}{2} m \\omega^2 x^2\n\\end{equation}\nwhere it should be clear from the context which quantities are operators, so we drop the hat notation, $\\hat{O} \\rightarrow O$.\n\nIntroduce the ladder operator\n\\begin{equation}\na = \\frac{m \\omega x + ip}{\\sqrt{2m\\omega}}\n\\end{equation}\nand bearing in mind that $x$ and $p$ are Hermitian,\n\\begin{equation}\na^\\dagger = \\frac{m \\omega x - ip}{\\sqrt{2m\\omega}}.\n\\end{equation}\nThe product $a^\\dagger a$ is\n\\begin{align}\na^\\dagger a &= \\frac{(m \\omega x - ip)(m \\omega x + ip)}{2 m \\omega} \\nonumber \\\\\n&= \\frac{1}{2m\\omega} \\left\\{ (m \\omega x)^2 + im\\omega [x, p] + p^2 \\right\\} \\nonumber \\\\\n&= \\frac{H}{\\omega} - \\frac{1}{2}.\n\\end{align}\nRearranging, the Hamiltonian can be written\n\\begin{equation}\nH = \\left( a^\\dagger a + \\frac{1}{2} \\right) \\omega.\n\\end{equation}\nSimilarly,\n\\begin{equation}\na a^\\dagger = \\frac{H}{\\omega} + \\frac{1}{2}.\n\\end{equation}\nWe also have that\n\\begin{equation}\n[a, a^\\dagger] = 1\n\\end{equation}\nand\n\\begin{align}\n[a, H] &= \\omega a\\\\\n[a^\\dagger, H] &= -\\omega a^\\dagger.\n\\end{align}\n\n\\subsection{Energy spectrum}\n\nNow consider the action of the operator $a^\\dagger$ on a stationary state $\\ket{n}$ with some energy $E_n$. What is the energy of the state $a^\\dagger \\ket{n}$?\n\\begin{align}\nH a^\\dagger \\ket{n} &= \\left( a^\\dagger H - [a^\\dagger, H] \\right)\\ket{n} \\nonumber \\\\\n&= \\left( a^\\dagger H + \\omega a^\\dagger \\right)\\ket{n} \\nonumber \\\\\n&= \\left( E_n + \\omega \\right) a^\\dagger \\ket{n}.\n\\end{align}\nSo $a^\\dagger\\ket{n}$ is also an energy eigenstate with energy $\\left( E_n + \\omega \\right)$. Since $a^\\dagger$ has raised the energy of the state $\\ket{n}$ by one quantum of energy, we call it the raising operator.\n\nSimilarly, the application of $a$ to $\\ket{n}$ lowers the energy by $\\omega$,\n\\begin{equation}\nH a \\ket{n} = (E_n - \\omega) a \\ket{n}.\n\\end{equation}\n$a$ is called the lowering operator and together $a$ and $a^\\dagger$ are ladder operators that span the energy spectrum of the QHO. In QFT, these operators correspond to the creation and annihilation of particles since particles are wave excitations of the vacuum.\n\n\\subsection{Zero-point energy}\nThere must be some state $\\ket{0}$ for which $a\\ket{0}$ vanishes; otherwise repeated application of the lowering operator would produce states of arbitrarily negative energy. 
Equating to zero the square length of this state vector,\n\\begin{align}\n0 = \\abs{a\\ket{0}}^2 &= \\expval{a^\\dagger a}{0} \\\\\n&= \\expval{\\frac{H}{\\omega} - \\frac{1}{2}}{0} \\nonumber \\\\\n&= \\frac{E_0}{\\omega} - \\frac{1}{2}.\n\\end{align}\nRearranging gives the energy of the vacuum state, $\\ket{0}$,\n\\begin{equation}\nE_0 = \\frac{\\omega}{2}.\n\\end{equation}\n\n\\subsection{Action of the ladder operators}\n\nNow we can build up the ladder of energy states by repeated action of the raising operator,\n\\begin{align*}\na^\\dagger\\ket{0} &= \\alpha_0 \\ket{1} \\\\\na^\\dagger\\ket{1} &= \\alpha_1 \\ket{2} \\\\\n\\ldots \\\\\na^\\dagger\\ket{n} &= \\alpha_n \\ket{n+1}\n\\end{align*}\nwhere the normalisation constant can be found via\n\\begin{align}\n\\alpha_n^2 = \\abs{a^\\dagger\\ket{n}}^2 &= \\expval{a a^\\dagger}{n} \\nonumber \\\\\n&= \\expval{\\frac{H}{\\omega} + \\frac{1}{2}}{n} \\nonumber \\\\\n&= n+1\n\\end{align}\nso $\\alpha_n = \\sqrt{n+1}$. Therefore the general action of the raising operator is\n\\begin{equation}\n\\boxed{\na^\\dagger \\ket{n} = \\sqrt{n+1}\\ket{n+1}\n}.\n\\end{equation}\nSimilarly,\n\\begin{equation}\n\\boxed{a \\ket{n} = \\sqrt{n} \\ket{n-1}}.\n\\end{equation}\n\nA general state $\\ket{n}$ can be made from repeated application of $a^\\dagger$ from the ground state:\n\\begin{equation}\n\\ket{n} = \\frac{1}{\\sqrt{n!}} (a^\\dagger)^n \\ket{0}.\n\\end{equation}\n\n\nNotice that $a^\\dagger a \\ket{n} = n \\ket{n}$, so $a^\\dagger a$ is called the number operator.\n\n\\section{The anharmonic oscillator (perturbation theory)}\nWe now generalise the above QHO to include an anharmonic term,\n\\begin{align}\nH &= \\frac{p^2}{2m} + \\frac{1}{2}m \\omega^2 x^2 + \\lambda x^3 \\\\\n&= H_0 + \\lambda H^\\prime\n\\end{align}\nwhere the second line expresses the Hamiltonian as in the interaction picture. In this case, the interaction Hamiltonian is $H^\\prime = x^3$. In what follows, we assume that $\\lambda$ is small such that $H^\\prime$ is a perturbation to the QHO solution and we stop expansions after first order in $\\lambda$.\n\nThe eigenvectors of the full Hamiltonian may be expanded as\n\\begin{equation}\n\\ket{n} = \\ket{n}^0 + \\lambda\\ket{n}^1 + \\cdots\n\\end{equation}\nwhere all the $\\ket{n}^i$ are assumed to be orthogonal. The state has corresponding energy\n\\begin{equation}\nE_n = E_n^0 + \\lambda E_n^1 + \\ldots\n\\end{equation}\nsuch that it is the solution of the TISE,\n\\begin{align}\nH\\ket{n} &= E_n\\ket{n} \\\\\n\\left( H_0 + \\lambda H^\\prime + \\ldots \\right)\\left( \\ket{n}^0 + \\lambda\\ket{n}^1 + \\ldots \\right) &= \\left( E_n^0 + \\lambda E_n^1 + \\ldots \\right)\\left( \\ket{n}^0 + \\lambda\\ket{n}^1 + \\ldots \\right)\n\\end{align}\n\nEquating the coefficients of $\\lambda^0$ gives the trivial harmonic part of the solution,\n\\begin{equation}\nH_0 \\ket{n}^0 = E_n^0 \\ket{n}^0.\n\\end{equation}\nEquating the coefficients of $\\lambda^1$ gives\n\\begin{equation}\nH_0\\ket{n}^1 + H^\\prime\\ket{n}^0 = E_n^0\\ket{n}^1 + E_n^1\\ket{n}^0. \\label{eq:lambda1}\n\\end{equation}\nRearranging and multiplying by $^0\\!\\bra{n}$,\n\\begin{equation}\n^0\\!\\bra{n}\\left(H_0 - E_n^0\\right)\\ket{n}^1 + \\,^0\\!\\bra{n}\\left(H^\\prime - E_n^1\\right)\\ket{n}^0 = 0.\n\\end{equation}\nNow use that $^0\\!\\bra{n}H_0 = \\,^0\\!\\bra{n} E_n^0$ to see that the first term is equal to zero. 
This gives the solution for the change in energy (up to first order in $\\lambda$) of the $n$th excited state,\n\\begin{equation}\\boxed{\nE_n^1 = \\,^0\\!\\expval{H^\\prime}{n}^0\n}.\n\\end{equation}\n\nNow we wish to find the first-order correction to the state, $\\ket{n}^1$. First, use the representation of $\\ket{n}^1$ in the basis of harmonic solutions, $\\ket{m}^0$,\n\\begin{equation}\n\\ket{n}^1 = \\sum_{m \\neq n} \\ket{m}^0 \\, ^0\\!\\braket{m}{n}^1 \\label{eq:spectrum}.\n\\end{equation}\nThe eigenstates of the harmonic problem $\\ket{m}^0$ may serve as a basis since they are all orthonormal, i.e.~$^0\\!\\braket{m}{n}^0 = \\delta_{nm}$; the $m = n$ term is absent because $\\ket{n}^1$ is assumed orthogonal to $\\ket{n}^0$.\n\nNow to get the coefficients $^0\\!\\braket{m}{n}^1$, multiply \\eqref{eq:lambda1} by $^0\\!\\bra{m}$ with $m \\neq n$,\n\\begin{equation}\n\\underbrace{^0\\!\\bra{m}H_0\\ket{n}^1}_{E_m^0 \\,^0\\!\\braket{m}{n}^1} + ^0\\!\\bra{m}H^\\prime\\ket{n}^0 = ^0\\!\\bra{m}E_n^0\\ket{n}^1 + \\underbrace{^0\\!\\bra{m}E_n^1\\ket{n}^0}_0.\n\\end{equation}\nRearranging,\n\\begin{equation}\n^0\\!\\braket{m}{n}^1 = \\frac{^0\\!\\bra{m}H^\\prime\\ket{n}^0}{E_n^0 - E_m^0} \\label{eq:coefficients}.\n\\end{equation}\nNotice that this approach only works if the energy levels are non-degenerate, i.e.~$E_n^0 \\neq E_m^0$ for all $n \\neq m$. For the QHO this is the case, but in more general cases one must first diagonalise the perturbation within each degenerate subspace.\n\nNow, substituting \\eqref{eq:coefficients} into \\eqref{eq:spectrum} gives the value of the first-order correction to the energy eigenstates,\n\\begin{equation}\n\\boxed{\n\\ket{n}^1 = \\sum_{m \\neq n} \\frac{^0\\!\\bra{m}H^\\prime\\ket{n}^0}{E_n^0 - E_m^0} \\ket{m}^0\n}.\n\\end{equation}\n\nIn the particular case where $H^\\prime = x^3$ it is helpful to express the perturbation in terms of the ladder operators,\n\\begin{equation}\nx^3 = \\frac{(a + a^\\dagger)^3}{(2m\\omega)^{\\frac{3}{2}}}.\n\\end{equation}\nThen the determination of $E_n^1$ and $\\ket{n}^1$ is achievable via the application of the ladder operators to the QHO eigenstates.\n\n\\section{Lagrangian mechanics}\nIn general, a Lagrangian is a function of coordinates $q$ and their derivatives, $L = L(q, \\dot{q})$. The action is the integral of $L$,\n\\begin{equation}\nS = \\int L(q, \\dot{q}) \\, \\dd t.\n\\end{equation}\nClassical paths are those paths with stationary action, where $\\delta S = 0$. This requires,\n\\begin{align}\n\\delta S &= \\int \\dd t \\left\\{ \\pdv{L}{q} \\delta q + \\pdv{L}{\\dot{q}} \\delta\\dot{q} \\right\\} \\\\\n&= \\int \\dd t \\left\\{ \\left[ \\pdv{L}{q} - \\dv{t}\\left( \\pdv{L}{\\dot{q}} \\right) \\right] \\delta q \\right\\} + \\left[ \\pdv{L}{\\dot{q}} \\delta q \\right]_{t_1}^{t_2} = 0.\n\\end{align}\nThe last term is a boundary term and vanishes for any $\\delta q$ that obeys $\\delta q(t_1) = \\delta q(t_2) = 0$. For all such variations, we obtain the Euler-Lagrange equations of motion,\n\\begin{equation}\n\\boxed{\n\\pdv{L}{q} - \\dv{t} \\left( \\pdv{L}{\\dot{q}} \\right) = 0\n}.\n\\end{equation}\n\n\\subsection{Relation to the Hamiltonian}\n\nEach coordinate $q$ has a conjugate momentum,\n\\begin{equation}\np = \\pdv{L}{\\dot{q}}.\n\\end{equation}\nThen the Hamiltonian is related to the Lagrangian by\n\\begin{equation}\n\\boxed{\nH = p\\dot{q} - L\n}\n\\end{equation}\n\nFor a quantum treatment, we replace the coordinates with operators. 
The operators and their conjugate momenta have the commutation relations\n\\begin{equation}\n[\\hat{q}, \\hat{p}] = i\n\\end{equation}\nin natural units.\n\n\\subsection{Application to the QHO}\nThe QHO Lagrangian is\n\\begin{equation}\nL = \\frac{1}{2}m\\dot{x}^2 - \\frac{1}{2}m\\omega^2 x^2.\n\\end{equation}\nTherefore the momentum, $p = \\pdv{L}{\\dot{x}} = m\\dot{x}$. So the Hamiltonian is\n\\begin{align}\nH &= p\\dot{x} - L \\\\\n&= \\frac{p^2}{m} - \\left(\\frac{1}{2}m\\dot{x}^2 - \\frac{1}{2}m\\omega^2 x^2\\right)\\\\\n&= \\frac{1}{2}m\\dot{x}^2 + \\frac{1}{2}m\\omega^2 x^2\n\\end{align}\nwhich is the same as we used for the QHO above.\n\nThe Lagrangian and Hamiltonian treatments are equivalent, but often a problem is more tractable in one than in the other. QFT makes use of the Lagrangian density, and this is used to express the particles and interactions of the Standard Model.\n\n\\section{Dirac $\\delta$ function}\nThe $\\delta$ function may be thought of as a peaked function of width $\\Delta x$ and height $1/\\Delta x$ around a value $x=x_0$ in the limit $\\Delta x \\rightarrow 0$. It has the properties\n\\begin{equation}\n\\int\\limits_{-\\infty}^{+\\infty} \\delta(x - x_0) \\, \\dd{x} = 1\n\\end{equation}\nand\n\\begin{equation}\n\\int\\limits_{-\\infty}^{+\\infty} f(x) \\, \\delta(x - x_0) \\, \\dd{x} = f(x_0).\n\\end{equation}\nAs a result, the $\\delta$ function is even, $\\delta(x) = \\delta(-x)$, and $\\delta(ax) = \\delta(x)/\\abs{a}$. The proof is as follows:\n\\begin{align}\n\\int\\limits_{-\\infty}^{+\\infty} \\delta(ax) \\, \\dd{x} &= \\int\\limits_{-\\infty}^{+\\infty} \\delta(y) \\, \\frac{\\dd{y}}{a} \\quad \\text{for $y=ax$,} \\\\\n&= \\frac{1}{\\abs{a}}.\n\\end{align}\nFor $a<0$ the integration limits flip, which supplies the absolute value. Now consider a function $f(x)$ with roots $a_i$ such that $f(a_i)=0$. Then $\\delta(f(x))$ is non-zero only in the vicinity of the roots, $x \\sim a_i$. Expanding $f(x)$ about a particular root,\n\\begin{equation}\nf(x) = \\underbrace{f(a_i)}_0 + (x-a_i)\\left[ \\pdv{f}{x} \\right]_{x=a_i} + \\ldots\n\\end{equation}\nso, using the preceding result, we have\n\\begin{align}\n\\delta(f(x)) &= \\sum_i \\delta\\left((x-a_i)\\left[\\pdv{f}{x}\\right]_{x=a_i}\\right)\\\\\n&= \\sum_i \\frac{\\delta(x-a_i)}{\\left|\\pdv{f}{x}\\right|_{x=a_i}}.\n\\end{align}\n\n\nThe $\\delta$ function is normally constructed by taking the limit of some test function. Here, various test functions are discussed.\n\n\\subsubsection{Discrete $\\delta$ function}\nConsider some function $f(x)$ as a set of values $x_i$ in bins of width $\\Delta x$. Then the integral becomes a discrete sum,\n\\begin{equation}\n\\int\\limits_{-\\infty}^{+\\infty} f(x) \\, \\delta_t(x - x_0) \\, \\dd{x} \\rightarrow \\sum_{i={-\\infty}}^{+\\infty} f(x_i) \\, \\delta(x_i-x_0) \\Delta x\n\\end{equation}\nwhere $\\delta_t(x_i - x_0) = 0$ for $x_i \\neq x_0$ and $\\delta_t(0) = 1/\\Delta{x}$. In the limit $\\Delta x \\rightarrow 0$, $\\delta_t$ becomes the Dirac $\\delta$ function. 
However, it is not particularly useful to have a discrete, non-differentiable definition.\n\n\\subsubsection{Breit-Wigner lineshape}\nConsider a test function of a Breit-Wigner resonance peak,\n\\begin{equation}\n\\delta_t(x) = \\frac{1}{\\pi} \\frac{\\epsilon}{x^2 + \\epsilon^2}.\n\\end{equation}\nIts integral is\n\\begin{equation}\n\\int\\limits_{-\\infty}^{+\\infty} \\frac{1}{\\pi} \\frac{\\epsilon}{x^2 + \\epsilon^2} \\, \\dd{x} = \\int\\limits_{-{\\pi}/{2}}^{+{\\pi}/{2}} \\frac{1}{\\pi} \\frac{\\sec^2{\\theta}}{1+\\tan^2{\\theta}} \\, \\dd{\\theta} = 1\n\\end{equation}\nwhere the substitution $x = \\epsilon \\tan \\theta$ has been used. At $x=0$, $\\delta_t(0) = 1/(\\pi\\epsilon)$ and the full width at half-maximum is $2\\epsilon$. The Breit-Wigner lineshape approaches the Dirac $\\delta$ function in the limit $\\epsilon\\rightarrow 0$.\n\n\\subsubsection{Fourier analysis}\nThe Fourier integral theorem states that\n\\begin{equation}\nf(x) = \\frac{1}{2\\pi} \\int\\limits_{-\\infty}^{+\\infty} e^{ipx} \\left( \\int\\limits_{-\\infty}^{+\\infty} e^{-ip\\alpha} f(\\alpha) \\, \\dd{\\alpha} \\right) \\, \\dd{p}.\n\\end{equation}\nThat is, the reverse transform of the Fourier transform of a function returns the original function. Rearranging\\footnote{Cauchy showed that the order of integration matters, but this rearrangement is justified under the \\emph{theory of distributions}.},\n\\begin{equation}\nf(x) = \\frac{1}{2\\pi} \\int\\limits_{-\\infty}^{+\\infty} \\left( \\int\\limits_{-\\infty}^{+\\infty} e^{ipx} e^{-ip\\alpha} \\, \\dd{p} \\right) f(\\alpha) \\, \\dd{\\alpha} = \\int\\limits_{-\\infty}^{+\\infty} \\delta(x-\\alpha) f(\\alpha) \\, \\dd{\\alpha}\n\\end{equation}\nwhere we see the $\\delta$ function can now be defined\n\\begin{equation}\n\\delta(x) = \\frac{1}{2\\pi} \\int\\limits_{-\\infty}^{+\\infty} e^{ipx} \\, \\dd{p}.\n\\end{equation}\n\n\\section{Heaviside step function}\nThe Heaviside step function is defined piecewise,\n\\begin{equation}\n\\theta(x) =\n\\begin{cases}\n1 & x > 0 \\\\\n0 & x < 0\n\\end{cases}\n\\end{equation}\n\nAn integral representation for $\\theta$ can be reached through recognising that its derivative is the Dirac $\\delta$ function,\n\\begin{equation}\n\\dv{\\theta}{x} = \\delta(x) = \\frac{1}{2\\pi} \\int\\limits_{-\\infty}^{+\\infty} e^{ipx} \\, \\dd{p}.\n\\end{equation}\nIntegrating with respect to $x$,\n\\begin{align}\n\\theta(x) &= \\frac{1}{2\\pi} \\int\\limits_{-\\infty}^{+\\infty} \\int e^{ipx} \\, \\dd{p} \\, \\dd{x} \\\\\n&= \\frac{1}{2\\pi i} \\int\\limits_{-\\infty}^{+\\infty} \\frac{e^{ipx}}{p} \\dd{p}\n\\end{align}\nwhere the constant of integration is zero. Adding a small imaginary offset to the denominator, which places the pole just above the real axis, we get the final integral form of the $\\theta$ step function\n\\begin{align}\n\\theta(x) = \\lim_{\\epsilon\\rightarrow 0^+} \\frac{1}{2\\pi i} \\oint_C \\frac{e^{ipx}}{p-i\\epsilon} \\, \\dd{p}\n\\end{align}\nwhere $C$ is a contour along the real axis closed in the upper-half plane.\n\nThe residue theorem states that\n\\begin{equation}\n\\oint_C \\frac{f(z)}{z-a} \\dd{z} = 2\\pi i f(a)\n\\end{equation}\nfor a contour $C$ containing $a$. Applying this to the integral representation for $\\theta$ above, with the pole at $p = i\\epsilon$,\n\\begin{equation}\n\\theta(x) = \\lim_{\\epsilon\\rightarrow 0^+} \\frac{2\\pi i}{2\\pi i} e^{i(i\\epsilon)x} = \\lim_{\\epsilon\\rightarrow 0^+} e^{-\\epsilon x} = 1\n\\end{equation}\nfor $x > 0$. 
For $x < 0$, the contour must be closed in the lower-half plane and the integral evaluates to $0$.\n", "meta": {"hexsha": "b8e58d7546066f35d5afbf370fdeef6f8de10ad2", "size": 20605, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/4_Non-Relativistic_Quantum_Mechanics.tex", "max_stars_repo_name": "adambozson/Standard-Model-I", "max_stars_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/4_Non-Relativistic_Quantum_Mechanics.tex", "max_issues_repo_name": "adambozson/Standard-Model-I", "max_issues_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/4_Non-Relativistic_Quantum_Mechanics.tex", "max_forks_repo_name": "adambozson/Standard-Model-I", "max_forks_repo_head_hexsha": "9ea0d388c93f21d1c636ee18c9210a1b91169308", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.3789731051, "max_line_length": 374, "alphanum_fraction": 0.6826983742, "num_tokens": 7265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5935583230323088}}
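As a quick numerical cross-check of the ladder-operator algebra in the QHO section (our own sketch, not part of the notes), one can truncate the Fock space to the lowest $N$ states, build the lowering operator from $a\ket{n} = \sqrt{n}\ket{n-1}$, and confirm that the spectrum of $H = (a^\dagger a + 1/2)\,\omega$ is $E_n = (n + 1/2)\,\omega$; here $\omega = 1$ in natural units.
\begin{verbatim}
# Truncated-matrix check of the QHO ladder algebra (omega = 1).
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # <n-1|a|n> = sqrt(n)
adag = a.conj().T                            # raising operator

H = adag @ a + 0.5 * np.eye(N)               # H = (a^dag a + 1/2) omega

# Energy spectrum: E_n = n + 1/2
print(np.round(np.linalg.eigvalsh(H), 10))   # [0.5 1.5 2.5 ...]

# Commutator [a, a^dag] = 1; truncation spoils only the last state
comm = a @ adag - adag @ a
print(np.round(np.diag(comm), 10))           # [1 1 ... 1 1-N]
\end{verbatim}
The last diagonal entry of the commutator is an artifact of cutting off the Fock space; in the full infinite-dimensional space the relation $[a, a^\dagger] = 1$ is exact.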
{"text": "\\documentclass{memoir}\n\\usepackage{linalg}\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n\\chapter{Inner Products and Norms}\n\\label{cha:inner_products_and_norms}\n\\section{Inner Products}\n\\label{sec:inner_products}\n\n\\begin{defn}[Dot Product]\n\tFor $x,y \\in \\R^{n}$, the \\textbf{dot product} of $x$ and $y$, denoted $x\\cdot y$, is defined by\n\t\\begin{align*}\n\t\tx\\cdot y = x_1y_1+\\ldots+x_ny_n\n\t\\end{align*}\n\twhere $x = (x_1,\\ldots,x_n)$ and $y = (y_1,\\ldots,y_n)$.\n\\end{defn}\nThe dot product takes in two vectors and returns a scalar. Moreover, $x\\cdot x = \\|x\\|^2$.\n\\begin{cor}[Properties of dot products]\n\t\\begin{itemize}\n\t\t\\item $x\\cdot x \\geq 0$ for all $x\\in \\R^{n}$ \n\t\t\\item $x\\cdot x = 0$ if and only if $x = 0$ \n\t\t\\item for $y \\in \\R^{n}$ fixed, the map from $\\R^{n}$ to $\\R$ that sends $x \\in \\R^{n}$ to $x\\cdot y$ is linear\n\t\t\\item $x\\cdot y = y\\cdot x$ for all $x,y \\in \\R^{n}$\n\t\\end{itemize}\n\\end{cor}\nThis is almost quite right for complex numbers, but for that we must generalize a little further.\n\\begin{defn}[Inner Product]\n\tAn \\textbf{inner product} on $V$ is a function that takes each ordered pair $(u,v)$ of elements in $V$ to a number $\\langle u, v \\rangle \\in F$ and has the following properties:\n\t\\begin{itemize}\n\t\t\\item $ \\langle v, v \\rangle \\geq 0 $ for all $v \\in V$ \\textbf{(positivity)}\n\t\t\\item $ \\langle v, v \\rangle = 0$ if and only if $v = 0$ \\textbf{(definiteness)}\n\t\t\\item  $ \\langle u+v, w \\rangle = \\langle u, w \\rangle + \\langle v, w \\rangle $ for all $u,v,w \\in V$ \\textbf{(additivity in first slot)}\n\t\t\\item $ \\langle \\lambda u, v \\rangle = \\lambda \\langle u, v \\rangle $ for all $\\lambda \\in F$ and all $u,v \\in V$ \\textbf{(homogeneity in first slot)}\n\t\t\\item $ \\langle u, v \\rangle = \\overline{ \\langle v, u \\rangle }$ for all $u,v \\in V$ \\textbf{(conjugate symmetry)}.\n\t\\end{itemize}\n\\end{defn}\n\\begin{defn}[Inner Product Space]\n\tAn \\textbf{inner product space} is a vector space $V$ along with an inner product on $V$.\n\\end{defn}\nNotation: For the rest of the chapter, $V$ denotes a inner product space over $F$.\n\n\\begin{cor}[Basic properties of an inner product]\n\t\\begin{itemize}\n\t\t\\item For each fixed $u \\in V$, the function that takes $v$ to $ \\langle v, u \\rangle $ is a linear map from $V$ to $F$ \n\t\t\\item $ \\langle 0, u \\rangle = 0$ for every $u \\in V$ \n\t\t\\item $ \\langle u, 0 \\rangle = 0$ for every $u \\in V$ \n\t\t\\item $ \\langle u, v+w \\rangle = \\langle u, v \\rangle + \\langle u, w \\rangle $ for all $u,v,w \\in V$\n\t\t\\item $ \\langle u, \\lambda v \\rangle  = \\overline{\\lambda} \\langle u, v \\rangle $ for all $\\lambda \\in F$ and $u,v \\in V$\n\t\\end{itemize}\n\\end{cor}\n\\section{Norms}\n\\label{sec:norms}\n\\begin{defn}[Norm]\n\tFor $v \\in V$, the \\textbf{norm} of $v$, denoted $\\|v\\|$, is defined by\n\t\\begin{align*}\n\t\t \\|v\\| = \\sqrt{ \\langle v, v \\rangle } .\n\t\\end{align*}\n\\end{defn}\n\\begin{cor}[Basic properties of the norm]\n\tSuppose $v \\in V$.\n\t\\begin{itemize}\n\t\t\\item $\\|v\\|=0$ if and only if $v = 0$.\n\t\t\\item $\\|\\lambda v\\|= \\left| \\lambda \\right| \\|v\\|$ for all $\\lambda \\in F$.\n\t\\end{itemize}\n\\end{cor}\n\\begin{defn}[Orthogonal]\n\tTwo vectors $u,v \\in V$ are called 
\\textbf{orthogonal} if $ \\langle u, v \\rangle = 0$.\n\\end{defn}\n\\begin{cor}\n\t\\begin{itemize}\n\t\t\\item 0 is orthogonal to every vector in $V$ \n\t\t\\item 0 is the only vector in $V$ that is orthogonal to itself\n\t\\end{itemize}\n\\end{cor}\n\\begin{thm}[Pythagorean Theorem]\n\tSuppose $u,v$ are orthogonal vectors in $V$. Then\n\t\\begin{align*}\n\t\t\\|u+v\\|^2 = \\|u\\|^2+\\|v\\|^2\n\t\\end{align*}\n\\end{thm}\n\\begin{cor}[Orthogonal Decomposition]\n\tSuppose $u,v \\in V$, with $v\\neq 0$. Set $c = \\frac{ \\langle u, v \\rangle }{\\|v\\|^2}$ and $w = u - \\frac{ \\langle u, v \\rangle }{\\|v\\|^2}v$. Then\n\t\\begin{align*}\n \\langle w, v \\rangle =0 \\text{ and } u = cv + w.\n\t\\end{align*}\n\\end{cor}\n\\begin{thm}[Cauchy-Schwarz Inequality]\n\tSuppose $u,v \\in V$. Then\n\t\\begin{align*}\n\t\t \\left| \\langle u, v \\rangle  \\right| \\leq \\|u\\|\\|v\\|.\n\t\\end{align*}\n\tThis inequality is an equality if and only if one of $u,v$ is a scalar multiple of the other.\n\\end{thm}\n\\begin{thm}[Triangle Inequality]\n\tSuppose $u,v \\in V$. Then\n\t\\begin{align*}\n \\|u+v\\| \\leq \\|u\\|+ \\|v\\|.\n\t\\end{align*}\n\tThis inequality is an equality if and only if one of $u,v$ is a nonnegative multiple of the other.\n\\end{thm}\n\\begin{thm}[Parallelogram Equality]\n\tSuppose $u,v \\in V$. Then\n\t\\begin{align*}\n \\|u+v\\|^2 + \\|u-v\\|^2 = 2\\left( \\|u\\|^2+ \\|v\\|^2 \\right) .\n\t\\end{align*}\n\\end{thm}\n\\section{Orthonormal Bases}\n\\label{sec:orthonormal_bases}\n\\begin{defn}[Orthonormal]\n\tA list of vectors is called \\textbf{orthonormal} if each vector in the list has norm 1 and is orthogonal to all other vectors in the list.\n\\end{defn}\n\\begin{cor}\n\tEvery orthonormal list of vectors in $V$ with length $\\dim V$ is an orthonormal basis of $V$.\n\\end{cor}\n\\begin{lemma}\n\tSuppose $e_1,\\ldots,e_n$ is an orthonormal basis of $V$ and $v \\in V$. Then\n\t\\begin{align*}\n\t\tv = \\langle v, e_1 \\rangle e_1 + \\ldots + \\langle v, e_n \\rangle e_n\n\t\\end{align*}\n\tand\n\t\\begin{align*}\n\t\t\\|v\\|^2 = \\left| \\langle v, e_1 \\rangle  \\right|^2 + \\ldots + \\left| \\langle v, e_n \\rangle  \\right|^2.\n\t\\end{align*}\n\\end{lemma}\n\\begin{thm}[Gram-Schmidt Procedure]\n\tSuppose $v_1,\\ldots,v_m$ is a linearly independent list of vectors in $V$. Let $e_1 = \\frac{v_1}{\\|v_1\\|}$. For $j=2,\\ldots,m$, define $e_j$ inductively by\n\t\\begin{align*}\n\t\te_j = \\frac{v_j - \\langle v_j, e_1 \\rangle e_1 - \\ldots - \\langle v_j, e_{j-1} \\rangle e_{j-1}}{\\|v_j - \\langle v_j, e_1 \\rangle e_1 - \\ldots - \\langle v_j, e_{j-1} \\rangle e_{j-1}\\|}.\n\t\\end{align*}\nThen $e_1,\\ldots,e_m$ is an orthonormal list of vectors in $V$ such that\n\\begin{align*}\n\tspan(v_1,\\ldots,v_j) = span(e_1,\\ldots,e_j)\n\\end{align*}\nfor each $j = 1,\\ldots,m$.\n\\end{thm}\n\\begin{cor}\n\tEvery finite-dimensional inner product space has an orthonormal basis.\n\\end{cor}\n\\begin{cor}\n\tSuppose $V$ is finite-dimensional. Then every orthonormal list of vectors in $V$ can be extended to an orthonormal basis of $V$.\n\\end{cor}\n\\begin{lemma}\n\tSuppose $T \\in \\mathcal{L}(V)$. If $T$ has an upper-triangular matrix with respect to some basis of $V$, then $T$ has an upper-triangular matrix with respect to some orthonormal basis of $V$.\n\\end{lemma}\n\\begin{thm}[Schur's Theorem]\n\tSuppose $V$ is a finite-dimensional complex vector space and $T \\in \\mathcal{L}(V)$. 
Then $T$ has an upper-triangular matrix with respect to some orthonormal basis of $V$.\n\\end{thm}\n\\begin{thm}[Riesz Representation Theorem]\n\tSuppose $V$ is finite-dimensional and $\\varphi $ is a linear functional on $V$. Then there is a unique vector $u\\in V$ such that\n\t \\begin{align*}\n\t\t \\varphi(v) = \\langle v, u \\rangle \n\t\\end{align*}\n\tfor every $v \\in V$.\n\\end{thm}\nTo construct this vector, pick an orthonormal basis $e_1,\\ldots,e_n$ of $V$ and write\n\\begin{align*}\n\t\\varphi(v) = \\varphi\\left( \\langle v, e_1 \\rangle e_1 + \\ldots + \\langle v, e_n \\rangle e_n \\right) \\\\\n\t= \\langle v, e_1 \\rangle \\varphi(e_1) + \\ldots + \\langle v, e_n \\rangle \\varphi(e_n) \\\\\n\t= \\langle v, \\overline{\\varphi(e_1)}e_1 + \\ldots + \\overline{\\varphi(e_n)}e_n \\rangle \n\\end{align*}\nfor every \\(v \\in V\\). Thus, we can simply set\n\\begin{align*}\n\tu = \\overline{\\varphi(e_1)}e_1 + \\ldots + \\overline{\\varphi(e_n)}e_n.\n\\end{align*}\n\\begin{lemma}\n\tSuppose there exist $ u_1,u_2 \\in V$ such that\n\t\\begin{align*}\n\t\t \\langle v, u_1 \\rangle = \\langle v, u_2 \\rangle \\text{ for all } v \\in V.\n\t\\end{align*}\n\tThen $u_1 = u_2$.\n\\end{lemma}\n\\end{document}\n", "meta": {"hexsha": "55275e299d547583ac84ea209adf9b13ad64f8ce", "size": 7467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Algebra/Notes/source/11-11-19-InnerProd-Norms.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Linear Algebra/Notes/source/11-11-19-InnerProd-Norms.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Linear Algebra/Notes/source/11-11-19-InnerProd-Norms.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7150837989, "max_line_length": 192, "alphanum_fraction": 0.6546136333, "num_tokens": 2847, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.8152324915965392, "lm_q1q2_score": 0.5934692486551957}}
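As a small computational companion to the Gram-Schmidt Procedure above (our own sketch, not part of the notes; shown for the real case, where conjugation is trivial):
\begin{verbatim}
# Gram-Schmidt: orthonormalise a linearly independent list of vectors.
import numpy as np

def gram_schmidt(vs):
    es = []
    for v in vs:
        w = v - sum(np.dot(v, e) * e for e in es)  # subtract projections
        es.append(w / np.linalg.norm(w))           # normalise
    return es

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
es = gram_schmidt(vs)
# Gram matrix of the output is the identity: the list is orthonormal
print(np.round([[float(np.dot(e1, e2)) for e2 in es] for e1 in es], 10))
\end{verbatim}
Each iteration computes $e_j$ by subtracting the projections $\langle v_j, e_i \rangle e_i$ for $i < j$ and normalising, exactly as in the theorem.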
{"text": "\\chapter{Aerodynamics}\n\nAerodynamic forces are calculated in Aerodynamic Axis System, while moments are calculated in Stability Axis System. Rotation matrix from Stability Axis System to Body Axis System can be calculated using formula (\\ref{eq-aero-matrix-alpha}). Rotation matrix from Aerodynamic Axis System to Body Axis System can be calculated using following formulas:\n\\begin{equation}\n  \\label{eq-aero-matrix-alpha}\n  \\boldsymbol T \\left( \\alpha \\right)\n  =\n  \\left[\n    \\begin{matrix}\n      -1 & 0 &  0 \\\\\n       0 & 1 &  0 \\\\\n       0 & 0 & -1 \\\\\n    \\end{matrix}\n  \\right]\n  \\left[\n    \\begin{matrix}\n      \\cos \\alpha & 0 & -\\sin \\alpha \\\\\n                0 & 1 &            0 \\\\\n      \\sin \\alpha & 0 &  \\cos \\alpha \\\\\n    \\end{matrix}\n  \\right]\n  =\n  \\left[\n    \\begin{matrix}\n      -\\cos \\alpha & 0 &  \\sin \\alpha \\\\\n                 0 & 1 &            0 \\\\\n      -\\sin \\alpha & 0 & -\\cos \\alpha \\\\\n    \\end{matrix}\n  \\right]\n\\end{equation}\n\n\\begin{equation}\n  \\label{eq-aero-matrix-beta}\n  \\boldsymbol T \\left( \\beta \\right)\n  =\n  \\left[\n    \\begin{matrix}\n       \\cos\\psi & \\sin\\psi & 0 \\\\\n      -\\sin\\psi & \\cos\\psi & 0 \\\\\n              0 &        0 & 1 \\\\\n    \\end{matrix}\n  \\right]\n\\end{equation}\n\n\\begin{equation}\n  \\boldsymbol T \\left( \\alpha, \\beta \\right)\n  =\n  \\boldsymbol T \\left( \\alpha \\right) \\boldsymbol T \\left( \\beta \\right)\n  =\n  \\left[\n    \\begin{matrix}\n      -\\cos \\alpha \\cos \\beta & -\\cos \\alpha \\sin \\beta &  \\sin \\alpha \\\\\n                  -\\sin \\beta &              \\cos \\beta &            0 \\\\\n      -\\sin \\alpha \\cos \\beta & -\\sin \\alpha \\sin \\beta & -\\cos \\alpha \\\\\n    \\end{matrix}\n  \\right]\n\\end{equation}\n\nConsidering a no-wind conditions angle of attack and angle of sideslip (positive when the aircraft velocity component along the transverse axis is positive \\cite{ISO-1151-1-1988}) are given as follows:\n\\begin{align}\n  \\alpha &= \\arctan \\left( \\frac{w}{ \\sqrt{ u^2 + v^2 } } \\right) \\\\\n  \\beta  &= \\arcsin \\left( \\frac{v}{V} \\right)\n\\end{align}\n\n\\section{Tail-off Aircraft}\n\nTail-off aircraft aerodynamics model is intended to be used in application, e.g. fixed-wing aircrafts, where asymmetric aerodynamic effects, such as autorotation spin or roll damping, are significant.\n\nForces and moments are calculated for each half-wing to consider asymmetric effects. 
The half-wing aerodynamic center velocity vector, used to calculate the angle of attack and angle of sideslip as well as forces and moments, is given as follows:\n\\begin{equation}\n  \\label{eq-aero-v-ac}\n  {\\vec V}_{AC} = {\\vec V}_O + {\\vec \\Omega} \\times {\\vec r}_{AC}\n\\end{equation}\n\nForces and moments generated by the half-wing are given as follows: \\cite{StevensLewis1992}\n\\begin{align}\n  {\\vec F}_a &= \\left[ F_{X,a}, F_{Y,a}, F_{Z,a} \\right]^T \\\\\n  {\\vec M}_s &= \\left[ M_{X,s}, M_{Y,s}, M_{Z,s} \\right]^T\n\\end{align}\n\nwhere:\n\\begin{align}\n  \\label{eq-aero-fxa}\n  F_{X,a} &= \\frac{1}{2} \\rho V^2 S C_D \\\\\n  \\label{eq-aero-fya}\n  F_{Y,a} &= \\frac{1}{2} \\rho V^2 S C_Y \\\\\n  \\label{eq-aero-fza}\n  F_{Z,a} &= \\frac{1}{2} \\rho V^2 S C_L \\\\\n  M_{X,s} &= \\frac{1}{2} \\rho V^2 S \\hat c C_l \\\\\n  M_{Y,s} &= \\frac{1}{2} \\rho V^2 S \\hat c C_m \\\\\n  M_{Z,s} &= \\frac{1}{2} \\rho V^2 S \\hat c C_n\n\\end{align}\n\nForces and moments generated by the half-wing expressed in the Body Axis System are given by the following formulas:\n\\begin{align}\n  \\label{eq-aero-fb}\n  {\\vec F}_b &= {\\boldsymbol T} \\left( \\alpha, \\beta \\right) {\\vec F}_a \\\\\n  {\\vec M}_b &=\n  {\\boldsymbol T} \\left( \\alpha \\right) {\\vec M}_s\n  +\n  {\\vec r}_{AC,b} \\times {\\vec F}_b\n\\end{align}\n\n\\section{Stabilizers}\n\nThe velocity vector used to calculate the stabilizer angle of attack and angle of sideslip as well as forces and moments is calculated using expression (\\ref{eq-aero-v-ac}).\n\nThe horizontal stabilizer angle of attack is modified due to the incidence angle and the downwash angle, which can be expressed as follows. \\cite{Etkin1972}\n\\begin{equation}\n  \\Delta \\alpha_h\n  =\n  i_h + \\frac{ \\partial \\epsilon }{ \\partial \\alpha } \\alpha\n\\end{equation}\n\nForces generated by the stabilizers are calculated using formulas (\\ref{eq-aero-fxa}), (\\ref{eq-aero-fya}) and (\\ref{eq-aero-fza}).\n\nFormula (\\ref{eq-aero-fb}) can be used to calculate the stabilizer-generated forces expressed in the Body Axis System.\n\nIt is assumed that the horizontal stabilizer generates only drag and lift, while the vertical stabilizer generates only drag and side force. 
Moments generated by the stabilizers come only from the forces acting on their arms; other moments are neglected.\n\\begin{equation}\n  {\\vec M}_b = {\\vec r}_{AC,b} \\times {\\vec F}_b\n\\end{equation}\n", "meta": {"hexsha": "e9290ec43c6c5f76e0b6e32c52320c5056c1857b", "size": 4545, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/fdm_4.tex", "max_stars_repo_name": "marek-cel/mscsim-docs", "max_stars_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-12-01T02:27:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-09T07:02:20.000Z", "max_issues_repo_path": "tex/fdm_4.tex", "max_issues_repo_name": "marek-cel/mscsim-docs", "max_issues_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/fdm_4.tex", "max_forks_repo_name": "marek-cel/mscsim-docs", "max_forks_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-01T10:56:23.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-01T19:41:05.000Z", "avg_line_length": 36.9512195122, "max_line_length": 350, "alphanum_fraction": 0.6413641364, "num_tokens": 1439, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.593469242209993}}
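The rotation matrices above are easy to validate numerically. Below is a minimal sketch (our own, using numpy; not part of the original document) that checks the closed form of $\boldsymbol T(\alpha, \beta)$ against the product $\boldsymbol T(\alpha) \boldsymbol T(\beta)$.
\begin{verbatim}
# Check T(alpha, beta) = T(alpha) @ T(beta) for sample angles.
import numpy as np

def T_alpha(a):
    return np.array([[-np.cos(a), 0.0,  np.sin(a)],
                     [       0.0, 1.0,        0.0],
                     [-np.sin(a), 0.0, -np.cos(a)]])

def T_beta(b):
    return np.array([[ np.cos(b), np.sin(b), 0.0],
                     [-np.sin(b), np.cos(b), 0.0],
                     [       0.0,       0.0, 1.0]])

def T_alpha_beta(a, b):
    return np.array(
        [[-np.cos(a)*np.cos(b), -np.cos(a)*np.sin(b),  np.sin(a)],
         [          -np.sin(b),            np.cos(b),        0.0],
         [-np.sin(a)*np.cos(b), -np.sin(a)*np.sin(b), -np.cos(a)]])

a, b = np.deg2rad(5.0), np.deg2rad(2.0)
assert np.allclose(T_alpha(a) @ T_beta(b), T_alpha_beta(a, b))
\end{verbatim}
The same harness can be extended to transform the force vector of formula (\ref{eq-aero-fb}) from the Aerodynamic Axis System to the Body Axis System.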
{"text": "% -*- LaTeX -*-\n% -*- coding: utf-8 -*-\n%\n% michael a.g. a\u00efv\u00e1zis\n% orthologue\n% (c) 1998-2022 all rights reserved\n%\n\n\\section{Two simple implementations}\n\\label{sec:simple}\n\nTurning the math of the previous section into actual code is not very hard, especially if\nyou don't try improve the effective convergence rate by sampling the integration region in\ntricky ways. To simplify things even further, we will forgo integration with non-trivial\nintegrands for now. Instead, we will compute areas and volumes by letting $f$ be the unit\nfunction over our chosen region.\n%\n\\begin{figure}\n\\centering\n\\includegraphics[scale=.75]{figures/quadrant.pdf}\n\\caption{Estimating $\\pi$ by computing the area of a quadrant of the unit circle\n  \\label{fig:pie}}\n\\end{figure}\n%\n\nAs a warm up exercise, let's use Monte Carlo integration to compute $\\pi$. We can get an\nestimate for $\\pi/4$ by computing the area of the upper right quadrant of the unit circle,\nas shown in \\figref{pie}. This will require generating points in the unit square and\ncounting the fraction that fall in the shaded region.\n\nThe implementation strategy is very simple. Let $N$ be the total number of points we would like\nto generate. We will set up two counters, $t$ for the total number of points and $i$ for the\npoints that fall inside the unit circle, and iterate $N$ times building random points in the\nunit square, checking whether each point falls inside the unit circle and updating our counters\nappropriately, as shown in \\algref{pi}.  Integration has been reduced to placing points in two\nbins. Note that $\\Theta(\\Omega)$ in \\eqref{mc} corresponds to the conditional in\n\\alglineref{pi:check}, and the dimensionality of the integrand is reflected in the number of\ncalls to the random number generator it takes to build a point. Further, our sampling box $B$\nhas unit area, so there is no need for any extra normalization of the result.\n%\n\\begin{algorithm}\n\\caption{$\\pi_{N}$: the Monte Carlo estimate of $\\pi$ \\label{alg:pi}}\n%\n\\DontPrintSemicolon\n\\SetAlgoNoEnd\n%\n$i \\leftarrow 0$ \\;\n$t \\leftarrow 0$ \\;\n\\While{$t < N$}{\n   $x \\leftarrow \\mbox{random()}$ \\;\n   $y \\leftarrow \\mbox{random()}$ \\;\n   \\If{$\\sqrt{x^{2}+y^{2}} \\leq 1$}{ \\nllabel{line:pi:check}\n      $i \\leftarrow i + 1$\n   }\n   $t \\leftarrow t + 1$\n}\n$\\pi_{N} = 4 i/N \\;$ %\\frac{i}{t}$ \\;\n\\end{algorithm}\n%\n\nIt is straightforward to generalize this procedure to compute integrals of non-trivial\nintegrands over higher dimensional regions. Perhaps you can already see that computing the\nintegral of a non-trivial $f$ should only affect our interpretation of the counter $i$. We will\naddress this and other ways to generalize this algorithm in \\secref{classes}.\n\n\\subsection{The two-minute python script}\n\\label{sec:simple:python}\n\nThe python implementation is a straightforward translation of the pseudocode of\n\\algref{pi}. The python standard library\\supercite{python-doc} includes the package {\\tt random}\nwith an implementation of the Mersenne Twister\\supercite{mersenne-twister} random number\ngenerator. Two minutes of typing produce:\n%\n\\python{\n  % firstnumber=9,\n  linerange={9-25},\n  label={lst:simple:python},\n  caption={\\srcfile{simple/gauss.py}: Estimating $\\pi$ in python},\n}{listings/simple/gauss.py}\n%\nNote that by deciding whether the point is inside the circle using the square of the distance\nsaves us a large number of calls to \\function{sqrt}. 
This type of consideration is ubiquitous\nwhen trying to get good performance out of Monte Carlo methods.  Also, note that we did not\nhave to convert our integer counters to floats before the computation of the area fraction,\nsince division of two integers results in a floating point number, a behavior\nthat is standard since version 3.0 of the python interpreter.\n\nThere are two obvious questions to answer: how accurate and how fast is this calculation?  To\nanswer these questions, we can perform some experiments where we vary the total number of\npoints $N$, and record the result and the time it took, as shown in \\tabref{simple-python}.\n%\n\\begin{table}\n\\centering\n\\[\n\\begin{array}{rd{8}cd{4}}\n% headersep\n  \\multicolumn{1}{c}{N} &\n  \\multicolumn{1}{c}{\\pi_{N}} &\n  \\multicolumn{1}{c}{\\Delta} &\n  \\multicolumn{1}{c}{t \\mbox{(sec)}} \\\\\n  \\hline \\\\ [-1.5ex]\n% body\n  10^{0} &  0          & 1                   &    .014  \\\\\n  10^{1} &  3.6        & 1.46 \\times 10^{-1} &    .014  \\\\\n  10^{2} &  3.36       & 6.95 \\times 10^{-2} &    .014  \\\\\n  10^{3} &  3.076      & 2.09 \\times 10^{-2} &    .015  \\\\\n  10^{4} &  3.156      & 4.59 \\times 10^{-3} &    .027  \\\\\n  10^{5} &  3.14496    & 1.07 \\times 10^{-3} &    .144  \\\\\n  10^{6} &  3.144028   & 7.75 \\times 10^{-4} &   1.265  \\\\\n  10^{7} &  3.142112   & 1.65 \\times 10^{-4} &  12.624  \\\\\n  10^{8} &  3.14170136 & 3.46 \\times 10^{-5} & 130.430\n\\end{array}\n\\]\n\\caption{Estimating $\\pi$: accuracy and cost of the python implementation\n  \\label{tab:simple-python}}\n\\end{table}\n%\nVarying the sample size involves editing the script, making the corresponding change to the\nvalue of $N$ and making another run. We could have exposed this variable to the end user by\nattaching it to a command line argument, but there is no reason to go through all this at this\npoint. We will see a better way to get this kind of flexibility in \\secref{pyreapp}.\n\nAs estimators of $\\pi$ go, this is not a good one: it converges rather slowly and it is rather\nexpensive. Sample sets larger than $10^{8}$ are not practical in pure python. How much better\ncan we do by switching over to a lower level language, such as \\cpp?\n\n\\subsection{The ten-minute \\cpp\\ program}\n\\label{sec:simple:c}\n\nThe implementation in \\cpp\\ is also fairly straightforward. One complication is that the random\nnumber generator in the standard \\cc\\ library has rather poor properties and is not well\nsuited for stochastic calculations; fortunately, \\package{GSL}, the \\package{GNU} Scientific\nLibrary\\supercite{gsl}, has a large number of good generators whose properties and cost are\nrather well documented. Binary distributions of \\package{GSL} are available for a variety of\nplatforms, and you can always download and compile it from its freely available source code.\n\nNote that we are only using \\cpp\\ as a better \\cc, taking advantage of the strict type\nchecking, the ability to declare variables where we need them, and the prettier comment\nsyntax. Later we will see a couple of other reasons to compile \\cc\\ programs with \\cpp\\\ncompilers. For now, here is what a simple translation of the pseudocode of \\algref{pi} looks\nlike:\n%\n\\Cpp{\n  % firstnumber=9,\n  linerange={9-35},\n  label={lst:simple:cpp},\n  caption={\\srcfile{simple/gauss.cc}: Estimating $\\pi$ in \\cpp},\n}{listings/simple/gauss.cc}\n%\nwhose accuracy and cost are shown in \\tabref{simple-c}. 
It is clear that the\n\\cpp\\ implementation is almost an order of magnitude faster. However, the cost of making the\nnecessary changes to the value of $N$ is a little higher for the \\cpp\\ program, since the cycle\ninvolves the additional steps of compiling and linking. Again, a flexible program would expose\nthis variable to the end user; we will address this issue in \\secref{pyreapp}.\n%\n\\begin{table}\n\\centering\n\\[\n\\begin{array}{rd{8}cd{4}}\n% headers\n  \\multicolumn{1}{c}{N} &\n  \\multicolumn{1}{c}{\\pi_{N}} &\n  \\multicolumn{1}{c}{\\Delta} &\n  \\multicolumn{1}{c}{t \\mbox{(sec)}} \\\\\n  \\hline \\\\ [-1.5ex]\n% body\n  10^{0} &  0          & 1                   &    .002  \\\\\n  10^{1} &  3.2        & 1.86 \\times 10^{-2} &    .002  \\\\\n  10^{2} &  3.16       & 5.86 \\times 10^{-3} &    .002  \\\\\n  10^{3} &  3.2        & 1.86 \\times 10^{-2} &    .002  \\\\\n  10^{4} &  3.1456     & 1.28 \\times 10^{-3} &    .004  \\\\\n  10^{5} &  3.13528    & 2.01 \\times 10^{-3} &    .026  \\\\\n  10^{6} &  3.140756   & 2.66 \\times 10^{-4} &    .230  \\\\\n  10^{7} &  3.141948   & 1.13 \\times 10^{-4} &   2.277  \\\\\n  10^{8} &  3.1417769  & 5.86 \\times 10^{-5} &  22.749  \\\\\n  10^{9} &  3.1415631  & 9.41 \\times 10^{-6} & 227.735\n\\end{array}\n\\]\n\\caption{Estimating $\\pi$: accuracy and cost of the \\cpp\\ implementation\n  \\label{tab:simple-c}}\n\\end{table}\n%\n\nWe have chosen the \\package{RANLUX} random number generator, one of the most costly but best behaved\ngenerators in the library. Even so, the \\cpp\\ implementation is five to six times\nfaster. Since the python generator \\function{random} is implemented in \\cc, the difference in\nrunning times is mostly due to the cost of setting up the function call in the two\nlanguages. For many computations, the extra flexibility that python has to offer is worth the\nperformance hit; for others it is unacceptable. However, as we will see later, a little care,\nsome common sense, and some careful profiling will make it possible to have the best of both\nworlds.\n\n% end of file\n", "meta": {"hexsha": "8ac719f33ca06f0ff42e9526d62c94d0b200e2e4", "size": 8743, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/gauss/sections/simple.tex", "max_stars_repo_name": "PyreFramework/pyre", "max_stars_repo_head_hexsha": "345c7449a3416eea1c1affa74fb32faff30a6aaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/gauss/sections/simple.tex", "max_issues_repo_name": "PyreFramework/pyre", "max_issues_repo_head_hexsha": "345c7449a3416eea1c1affa74fb32faff30a6aaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/gauss/sections/simple.tex", "max_forks_repo_name": "PyreFramework/pyre", "max_forks_repo_head_hexsha": "345c7449a3416eea1c1affa74fb32faff30a6aaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0670103093, "max_line_length": 97, "alphanum_fraction": 0.7034198788, "num_tokens": 2598, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.815232489352, "lm_q1q2_score": 0.5934692373987598}}
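The listing files referenced above live outside this excerpt. For convenience, here is a minimal self-contained sketch that matches the pseudocode of the algorithm (our reconstruction, not the actual \srcfile{simple/gauss.py}):
\begin{verbatim}
# Monte Carlo estimate of pi, following the algorithm step by step.
import random

def pi_estimate(N):
    inside = 0
    for _ in range(N):
        x = random.random()
        y = random.random()
        # compare the squared distance: avoids N calls to sqrt
        if x*x + y*y <= 1.0:
            inside += 1
    return 4 * inside / N

print(pi_estimate(10**6))   # approximately 3.14
\end{verbatim}
As in the text, the only normalization needed is the factor of $4$, because the sampling box has unit area.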
{"text": "%s How the representative set is computed; pruning strategies and verification\n%step goes here.\n\n\\section{Computing Representative Sets} \n\\label{sec:representative}\nRepresentative vertex $v$ of a pattern\nvertex $u$ implies that there exists an isomorphism $\\phi$ for which $\\phi(u) =\nv$.  One way to interpret it is that the neighborhood of $u$ matches with that\nof $v$.  By comparing the neighborhoods we can find vertices that are not valid\nrepresentatives of $u$ without trying to find an isomorphism exhaustively.\nTherefore, to compute the representative sets we will start with a candidate\nrepresentative set denoted by \\CR  and iteratively prune some of the vertices if\nthe neighborhoods cannot be matched.  The candidate set is a super set of the\nrepresentative set, $\\CR \\supseteq \\RS$.  An example of candidate set, $ \\CR =\n\\{v| v \\in \\vg, \\matij{C}{L(u)}{L(v)} \\leq \\alpha \\}$ i.e., the isomorphisms of\nthe single vertex pattern with label $L(u)$.  In this section, we will describe\ndifferent notions of neighborhood and show how they help us in computing the\nrepresentative sets of vertices in a pattern.\n\nThe problem of checking whether a vertex $v \\in R(u)$ involves solving\nisomorphism which is a NP complete problem. The pruning methods typically do not\nprune all the invalid vertices.  So, we use an exhaustive enumeration method to\nprune these invalid vertices and reduce \\CR to \\RS.\n\n \\subsection{\\khop Label}\n %Neighbors of a vertex in a graph denotes the set of vertices that are\n %reachable via a single edge. \n \\khop label is defined as the set of vertices that are reachable via a simple\n path of length $k$.  In other words, k-hop label contains all vertices that are\n reachable in k-hops starting from $u$ and by visiting each vertex at most once.\n Note that, we use the word label even though we refer to a set of vertices.\n Formally, the \\khop label of a vertex $u$ in graph $G$, $\\khopl{k}{u,G} = \\{v |\n v \\in G, \\kpath{u}{v}{k}\\}$.  We simply write it as $\\khopl{k}{u}$ when the\n graph is evident from the context.  For example, for pattern $P$ in\n Fig.~\\ref{subfig:pattern}, the $0$-hop label of vertex $5$ is $h_0(5) = \\{5\\}$,\n its $1$-hop label is the multiset $h_1(5) = 2, 4, 6$ (we omit the set notation\n for convenience) and its $2$-hop label $h_2(5) = 1, 3$. The minimum cost of\n matching \\khop labels $\\khopl{k}{u}$ and $\\khopl{k}{v}$ is \n\n \\begin{equation} \\label{eq:khop} \\khopcost{k}{u}{v} =\n     \\text{min}\\displaystyle\\sum_{u' \\in \\khopl{k}{u}}\n     \\matij{C}{L(u')}{L(f(u'))} \\end{equation}\n\n where the minimization is over all injective functions $f\\!\\!:\\khopl{k}{u}\n \\rightarrow \\khopl{k}{v}$ and $\\matij{C}{L(u')}{L(f(u'))}$ is the cost of\n matching the vertex labels.  In other words, it is the minimum total cost of\n matching the vertices present in the k-hop labels.  The following theorem\n places a upper bound on the minimum cost of matching the k-hop labels of\n pattern vertex and any of its representative vertices.\n\n\n\\begin{thm} Given any pattern vertex $u$, a representative vertex $v \\in R(u)$\n    and cost threshold $\\alpha$, the minimum cost of matching the \\khop labels,\n    $\\khopcost{k}{u}{v} \\leq \\alpha$ for all $k \\geq 0$.\n\n\\begin{myproof} Consider any isomorphism $\\phi$ such that $\\phi(u) = v$. 
It is\n    enough to exhibit an injective function $f\\!\\!:\\khopl{k}{u} \\rightarrow\n    \\khopl{k}{v}$ with a cost (as defined in Equation~\\ref{eq:khop}) $\\leq\n    \\alpha$. We will argue that the function $\\phi$ restricted to the domain\n    $\\khopl{k}{u}$ is one such function $f$.  First, we know that\n    $\\sum_{u \\in \\vp}\\matij{C}{L(u)}{L(\\phi(u))} \\leq \\alpha$, since $\\phi$ is\n    an isomorphism. Second, if $\\kpath{u}{u'}{k}$ then $\\phi(u') \\in\n    \\khopl{k}{v}$, because for every edge $(u_1, u_2)$ on a path between $u$ and\n    $u'$ in $\\pat$, $(\\phi(u_1),\\phi(u_2)) \\in \\eg$.  Therefore the minimum cost\n    of matching the \\khop labels is upper bounded by $\\alpha$.  \\end{myproof}\n    \\label{thm:khop} \\end{thm}\n\nBased on the above theorem, a vertex $v$ is not a representative vertex of $u$\nif $\\khopcost{k}{u}{v} > \\alpha$ for any $k \\geq 0$. However, in practice, it is\nenough to check the condition only for $k \\leq |V_P|-1$ because $\\khopl{k}{u}$\nis the empty set $\\forall k \\geq |V_P|$ and the condition is trivially satisfied.\n\nFigure~\\ref{fig:ncexample} shows an example of \\khop label based pruning of\nthe candidate representative set where the threshold $\\alpha = 0.5$. Consider\nvertex $2 \\in \\vp$ and vertex $20 \\in \\vg$. We have $\\khopcost{0}{2}{20} = 0$,\nsince the cost of matching the vertex labels is $\\matij{C}{L(2)}{L(20)} = 0$, as per\nthe label matching matrix $C$ in Fig.~\\ref{subfig:match}. The \\khop labels for\n$k=1,2,3$ and the minimum cost of matching them are shown in Table~\\ref{tab:khop220}, and it can be verified that in each case the minimum cost is within the\nthreshold $\\alpha$.\n\nThus far, we cannot prune node $20$ from $R'(2)$.  However, $\\khopl{4}{2} = 4, 6\n$ and $\\khopl{4}{20} = 30, 60$ and the minimum cost of matching them is $0.6 >\n\\alpha$ (the best assignment matches label $C$ to $A$ at cost $0.6$ and label $D$\nto $D$ at cost $0$).  Thus, we conclude that $20 \\notin R(2)$ and it can be\npruned from $R'(2)$. 
This example illustrates\nthat \\khop labels can help prune the candidate representative sets.\n\n\n% Example for showing the incremental updates of the labels\n\\begin{figure}[!ht]\n\\captionsetup[subfloat]{captionskip=15pt}\n  \\centering\n  \\subfloat[Pattern $P$]{\n    \\label{subfig:pattern}\n\t\\scalebox{0.9}{\n    % # 3 vertices and 2 edge pattern\n    \\begin{pspicture}(0,0)(4,3)\n    \\cnodeput[linecolor=black](0,2) {n1} {A}\n    \\cnodeput[linecolor=black](0,1) {n2} {C}\n    \\cnodeput[linecolor=black](1,0) {n3} {B}\n    \\cnodeput[linecolor=black](2,2) {n4} {C}\n    \\cnodeput[linecolor=black](2,1) {n5} {A}\n    \\cnodeput[linecolor=black](2,0) {n6} {D}\n    %% Draw the edges of the pattern\n  \\ncline{-}{n1}{n2}\n  \\ncline{-}{n2}{n3}\n  \\ncline{-}{n3}{n4}\n  \\ncline{-}{n2}{n5}\n  \\ncline{-}{n4}{n5}\n  \\ncline{-}{n5}{n6}\n  \\ncline{-}{n3}{n6}\n    \\uput{.3cm}[90](n1){ {1} }\n    \\uput{.3cm}[180](n2){ {2} }\n    \\uput{.3cm}[270](n3){ {3} }\n    \\uput{.3cm}[90](n4){ {4} }\n    \\uput{.3cm}[0](n5){ {5} }\n    \\uput{.3cm}[270](n6){ {6} }\n    \\end{pspicture}\n\t} }\n  \\subfloat[Database Graph $G$]{\n    \\label{subfig:database}\n\t\\scalebox{0.9}{\n    \\begin{pspicture}(0,0)(2.5,2.5)\n    \\cnodeput[linecolor=black](0,2) {N1} {A}\n    \\cnodeput[linecolor=black](0,1) {N2} {C}\n    \\cnodeput[linecolor=black](0,0) {N3} {D}\n    \\cnodeput[linecolor=black](1,2) {N4} {B}\n    \\cnodeput[linecolor=black](2.25,1) {N5} {B}\n    \\cnodeput[linecolor=black](2,0) {N6} {A}\n    % vertex ids\n    \\uput{.3cm}[90](N1){ {10} }\n    \\uput{.3cm}[180](N2){ {20} }\n    \\uput{.3cm}[270](N3){ {30} }\n    \\uput{.3cm}[90](N4){ {40} }\n    \\uput{.3cm}[0](N5){ {50} }\n    \\uput{.3cm}[270](N6){ {60} }\n    % edges in the database\n  \\ncline{-}{N1}{N2}\n  \\ncline{-}{N2}{N3}\n  \\ncline{-}{N4}{N5}\n  \\ncline{-}{N5}{N6}\n  \\ncline{-}{N3}{N4}\n  \\ncline{-}{N2}{N5}\n  \\ncline{-}{N2}{N6}\n    \\end{pspicture}\n\t} }\n  \\newline\n\\captionsetup[subfloat]{captionskip=5pt}\n\\subfloat[Cost Matrix]{\n  \\label{subfig:match}\n  % Table for the search space pruning\n  \\begin{tabular}{|c|c|c|c|c|}\n    \\hline\n    $C$ & A &  B & C & D \\\\\n    \\hline\n    A & 0 & 0.7 & 0.6 & 0.1\\\\\n    \\hline\n    B & 0.7 & 0 & 0.3 & 1\\\\\n    \\hline\n    C & 0.6 & 0.3 & 0 & 0.8\\\\\n    \\hline\n    D & 0.1 & 1 & 0.8 & 0\\\\\n    \\hline\n  \\end{tabular}\n  } \n  \\caption{Pattern \\protect\\subref{subfig:pattern}, \n  database graph \\protect\\subref{subfig:database}, and cost\n  matrix \\protect\\subref{subfig:match}.\n  } \n  \\label{fig:ncexample}\n\\end{figure}\n\n\\begin{table}[h]\n    \\centering\n    \\begin{tabular}{|c|c|c|c|}\n        \\hline\n        k & $\\khopl{k}{2}$ & $\\khopl{k}{20}$ & $\\khopcost{k}{2}{20}$\\\\\n        \\hline\n        1 & 1, 3, 5 & 10, 30, 50, 60 & 0 \\\\\n        2 & 4, 6 & 40, 50, 60 & 0.4 \\\\\n        3 & 3, 5 & 40, 30, 50 & 0.1\\\\\n        \\hline\n    \\end{tabular}\n    \\caption{\\khop label of vertices $2$ and $20$}\n    \\label{tab:khop220}\n\\end{table}\n\n\\begin{table}[h]\n    \\centering\n    \\begin{tabular}{|c|c|c|c|}\n        \\hline\n        k & $h_k(3)$ & $h_k(50)$ & $\\khopcost{k}{3}{50}$ \\\\\n        \\hline\n        0 & 3 & 50 & $0$\\\\\n        1 & 2, 4, 6 & 20, 40, 60 & $0.4$ \\\\\n        2 & 1, 5 & 10, 20 , 30, 60 & 0\\\\\n        3 & 2, 4, 5 & 10, 20, 30, 40 & $0.3$ \\\\\n        4 & $1$ & $10, 40, 60$ & $0$ \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{\\khop labels of vertices $3$ and $50$.}\n    
\\label{tab:khop350}\n\\end{table}\n\n\n\n\\subsection{Neighbor Concatenated Label} In the neighbor concatenated label (\\ncl),\nthe information regarding the candidates of a neighbor that were pruned in the\nprevious iteration is used along with the current \\khop label to prune\ncandidates in the current iteration. In contrast, the \\khop label pruning\nstrategy for a vertex $u$ works independently of the result of \\khop label\npruning of other vertices in the pattern. This leads us to the following\nrecursive formulation of the \\ncl.\n\nThe \\ncl of a vertex in the ${k+1}^{th}$ iteration, $\\nclab{k+1}{u}$, is defined\nas the tuple $(\\{\\nclab{k}{u'} | u' \\in N(u)\\},\\xspace \\khopl{k+1}{u})$.  The\nfirst element (A) of the tuple is the set of \\ncl labels of the neighbors of the vertex $u$ in\nthe previous iteration and the second element (B) is exactly the same as the \\khop\nlabel defined in the previous section. We say that $\\nclab{k+1}{v}$ dominates\n$\\nclab{k+1}{u}$, denoted by $\\nclab{k+1}{u} = (A, B) \\preceq \\nclab{k+1}{v} =\n(A', B') $, iff i) $\\khopcost{k+1}{B}{B'} \\leq \\alpha$, i.e., the minimum cost of\nmatching the \\khop labels is within $\\alpha$, and ii) there exists an injective\nfunction $g\\!\\!:A\\rightarrow A'$ such that $a \\preceq g(a)$ for all $a \\in A$,\ni.e., there is a one-to-one mapping between the \\ncl labels (in the previous\niteration, $k$) of the neighbors of $u$ and $v$.  In the base case, $\\nclab{0}{u}\n\\preceq \\nclab{0}{v}$ iff $\\matij{C}{L(u)}{L(v)} \\leq \\alpha$. For example, in\nFig.~\\ref{fig:ncexample} $\\nclab{1}{2} \\preceq \\nclab{1}{20}$ because\n$\\khopcost{1}{2}{20} \\leq \\alpha$ and the \\ncl labels of vertices $1, 3, 5$ are\ndominated by the \\ncl labels of vertices $10, 50, 30$ respectively.  The\nfollowing theorem states that the \\ncl of a pattern vertex $u$ is dominated by\nthe \\ncl of any of its representative vertices $v \\in  R(u)$.\n\n\\begin{thm} Given any pattern vertex $u$, a representative vertex $v \\in R(u)$\n    and cost threshold $\\alpha$, $\\nclab{k}{u} \\preceq \\nclab{k}{v}$ for all $k\n    \\geq 0$.  \\begin{myproof} Let $\\phi$ be any isomorphism such that $\\phi(u) =\n        v$.  We prove the theorem by using induction on $k$.\\\\ \\textbf{Base\n        case:} $\\nclab{0}{u} \\preceq \\nclab{0}{v} \\iff \\matij{C}{L(u)}{L(v)}\n        \\leq \\alpha$ is true because $v \\in R(u)$. \\\\ \\textbf{Inductive\n        Hypothesis:} Assume that $\\nclab{k}{u} \\preceq \\nclab{k}{v}$ holds true\n        for all $u \\in \\pat$ and $v \\in R(u)$. \\\\ Now consider $\\nclab{k+1}{u} =\n        (A, B)$  and $ \\nclab{k+1}{v} = (A', B') $; from Theorem \\ref{thm:khop}\n        we know that $\\khopcost{k+1}{B}{B'} \\leq \\alpha$, for all $k \\geq 0$.\n        Let $u'$ be a neighbor of $u$ and let $v' = \\phi(u')$; then from the\n        inductive hypothesis $\\nclab{k}{u'} \\preceq \\nclab{k}{v'}$.  Therefore\n        $\\phi$, restricted to the neighbors of $u$, induces the required\n        injective mapping of the elements $a \\in A$ to elements of $A'$.\n        The theorem follows from the definition of the \\ncl label.\n    \\end{myproof} \\label{thm:ncl} \\end{thm}\n\nBased on the above theorem, a vertex $v$ can be pruned from \\CR if $\\nclab{k}{u}\n\\not\\preceq \\nclab{k}{v}$ for some $k \\geq 0$. In Fig.~\\ref{fig:ncexample},\nconsider the vertices $3 \\in \\pat$, $50 \\in \\db$ and let $\\alpha = 0.5$. For the\n\\ncl labels, $\\nclab{0}{3} \\preceq \\nclab{0}{50}$ as $\\matij{C}{B}{B} = 0 \\leq\n\\alpha$.  Similarly, this also holds for the pairs $(2, 20)$, $(4, 40)$, etc. 
It\nfollows that $\\nclab{1}{3} \\preceq \\nclab{1}{50}$ as the neighbors $2, 4, 6$ can\nbe mapped to $20, 40, 60$ respectively and the minimum cost of matching the\n$1$-hop labels is $0.4$, which is less than the $\\alpha$ threshold. But\n$\\nclab{2}{3} \\not\\preceq \\nclab{2}{50}$ because the \\ncl label $\\nclab{1}{6}$\nis not dominated by the \\ncl label of $20, 40$ or  $60$ in the previous\niteration. So, there is no valid mapping between the neighbors of vertices $3$ and\n$50$ in the current iteration. Hence, the vertex $50$ can be pruned from the\ncandidate representative set of vertex $3$.  Note that using the \\khop label in\nthe same example will not prune the vertex $50$, because the minimum cost of\nmatching the \\khop labels is within $\\alpha$, as shown in Table~\\ref{tab:khop350}. Therefore, the \\ncl label is more powerful than the \\khop\nlabel, as it subsumes the latter.\n\n\\subsection{Candidate set verification} \\label{sec:verification} The pruning\nmethods based on the \\khop and the \\ncl labels start with a \\CR and prune some\nof the candidate vertices based on the conditions described in Theorems\n\\ref{thm:khop} and \\ref{thm:ncl}.  The verification step reduces \\CR to \\RS by\nretaining only those vertices for which there exists an isomorphism $\\phi$ in\nwhich $\\phi(u) = v$.  Informally, it does this by checking if the pattern $P$ can be\nembedded at $v$ such that the total cost of label mismatches is at most $\\alpha$.\n\n\nLet $w_p = u_0,\\ldots, u_m$ be a walk in the pattern that covers each edge of\nthe pattern at least once, starting from vertex $u$, i.e., $u_0 = u$. Finding a\npath that covers each edge at least once is a special case of the Chinese\npostman problem \\cite{chinesepostman} in which every edge has unit cost. The following three conditions are satisfied iff the vertex $v$\nrepresents the vertex $u$: i) there exists a walk $w_d = v_0,\\ldots, v_m$ with\n$v_0 = v$ such that for all $(v_i, v_{i+1}) \\in w_d$, $v_i \\in R(u_i)$ and $v_{i+1} \\in R(u_{i+1})$;\nii) $v_i = v_j$ implies that $u_i = u_j$; iii) $\\sum\\matij{C}{L(u_i)}{L(v_i)}\n\\leq \\alpha$, where the sum ranges over the distinct vertices $u_i \\in \\vp$. Unlike the \\ncl label, \nthese conditions are necessary and sufficient and can be verified by following\nthe definition of isomorphism. Using an example we will\nshow how these conditions can be used to check definitively whether $v \\in R(u)$.\n\nConsider checking whether the vertex $30$ is a valid representative of the vertex\n$1$ in the pattern in Figure~\\ref{subfig:ex_sub}, and let $\\alpha = 0.5$. The\nwalk $ w_p = 1, 2, 4, 3, 1$ covers each edge of the pattern at least once. A\nwalk $w_d$ in the database of Figure~\\ref{subfig:ex_db} \nthat satisfies the above three conditions must start by mapping the\nvertex $1$ to $30$. The total cost of matching the labels so far is $0.2$, and\nthe budget available for matching the other vertices in $P$ is $0.5 - 0.2 = 0.3$.\nWe will cover the walk $w_p$ one edge at a time. For any $(u_i, u_{i+1}) \\in\nw_p$, if $u_i$ and $u_{i+1}$ are already mapped, then we verify that there is an edge\nbetween the database vertices $v_i$ and $v_{i+1}$ to which the pattern vertices\nare mapped.  If, however, $u_{i+1}$ is not mapped, then we map it to some vertex\n$v_{i+1} \\in R'(u_{i+1})$ and subtract the cost of this mapping from the\nremaining budget.  For example, in the first step $(1,2)$ we can map $2$ to $20$.\nThe remaining budget is then $0.3 - 0.2 = 0.1$. 
In the next step $(2,3)$, the\nvertex $3$ cannot be mapped to any vertex in the database without violating the\nthird condition.  In such a case, we backtrack one step and choose a different\nmapping, say $10$, for the vertex $2$, which is the last vertex that was mapped.\nProceeding this way, we can arrive at the mapping corresponding to $\\phi_{1}$ as\nin Table~\\ref{subfig:ex_occur}.  This isomorphism not only guarantees that $30\n\\in R(1)$, it also implies that the verification check between the pairs $(2,\n10), (3, 60)$ and $(4, 40)$ can be avoided because of the approximate isomorphism\n$\\phi_1$ that was found.  The above procedure can be extended to enumerate the\ncomplete set of isomorphisms.\n\n\n\n\\subsection{Label costs and dominance checking} \\label{sec:labelcheck} \nCandidate representative vertices are pruned by checking for a dominance \nrelation between the \\ncl label of a pattern vertex and that of a candidate\nvertex in the database. Comparing the \\ncl labels requires i) computing the\ncost of matching the \\khop labels and ii) matching the neighbors of the pattern vertex \nwith the neighbors of the candidate vertex. The first problem can be formulated\nas a minimum cost maximum flow problem in a network and the second as a maximum matching\nin a bipartite graph.\n\n\\medskip{\\textit{Computing \\khop label cost}:} To compute the minimum matching\ncost between the \\khop labels $\\khopl{k}{u}$ and $\\khopl{k}{v}$, we compute the\nmaximum flow with minimum cost in a flow network $F$ defined as follows.  Each\nedge in $F$ is associated with a maximum capacity and the cost for sending a\nunit flow across it.  The network contains a vertex for each label $l_u =\nL(u')$ where $u' \\in h_k(u)$ and a vertex for each label $l_v = L(v')$ where $v'\n\\in h_k(v)$. There is a directed edge between the source vertex ($s$) and each\n$l_u$ with a zero cost and a capacity equal to the multiplicity of $l_u$,\ni.e., the number of vertices in $h_k(u)$ that have the label $l_u$. Similarly,\nthere is a directed edge between each $l_v$ and the sink node ($t$). In addition,\nthere is a directed edge from $l_u$ to $l_v$ with a cost\nequal to $\\matij{C}{l_u}{l_v}$ and a capacity equal to the\nmultiplicity of $l_u$. The cost of matching the \\khop labels is equal to\nthe minimum cost of the maximum flow if the maximum flow is equal to\n$|\\khopl{k}{u}|$ and $\\infty$ otherwise.  \\\\ Figure~\\ref{fig:Hflow} shows the\nflow network required to compute the minimum cost of matching the \\khop labels\n$\\khopl{2}{2} = 4,6 $ and $\\khopl{2}{20} = 40, 50, 60$ as shown in Table\n\\ref{tab:khop220}. The labels of the vertices in the \\khop labels are $C,D$ and $B,\nB, A$ respectively. The capacity of the edge between $B$ and $t$ is two because \nboth the vertices $40$ and $50$ have the same label $B$.\nThere is an edge from $s$ to each of $C, D$ with zero cost\nand maximum capacity of one.  Similarly, there is an edge from each of $A, B$ to\nthe sink vertex $t$ with zero cost and maximum capacity of one and two\nrespectively. There is an edge from each of $C, D$ to each of $A, B$ with cost equal to\nthe corresponding entry in the cost matrix $C$. The maximum flow in the network\nis two and the minimum cost of sending two units of flow, $0.4$, is achieved by\npushing a unit flow along the paths $s, C, B, t$ and $s, D, A, t$.  Therefore,\nthe cost of matching the labels $\\khopl{2}{2}$ and $\\khopl{2}{20}$ is $0.4$. 
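To make the accounting explicit, the two unit flows pick up exactly the costs\n\\[\n\\matij{C}{C}{B} + \\matij{C}{D}{A} = 0.3 + 0.1 = 0.4,\n\\]\nreading the entries off the cost matrix in Fig.~\\ref{subfig:match}. 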
This\nimplies that the vertex $4$ with label $C$ can be matched to either $40$ or $50$,\nand the vertex $6$ to $60$.\n\n\\medskip{\\textit{Dominance check}: } Consider the \n\\ncl labels $\\nclab{k+1}{u} = (A, B)$ and  $\\nclab{k+1}{v} = (A', B')$; the cost of matching\n$B$ and $B'$ can be computed using the above network formulation.\nFinding an injective function $f\\!\\!:A \\rightarrow A'$ such that $a \\preceq\nf(a)$ is equivalent to finding a matching of size $|N(u)|$ in the bipartite graph\nwith an edge $(a, a')$ for every $a \\in A$ and $a' \\in A'$ with $a \\preceq a'$.\nThe \\ncl label $\\nclab{k}{v}$ therefore dominates $\\nclab{k}{u}$ if the\ncost of matching the \\khop labels is within $\\alpha$ and the size of the maximum\nbipartite matching is $|N(u)|$.\n\n\\medskip{\\textit{Optimization}: } The candidate pattern may contain groups of\nsymmetric vertices that are indistinguishable with respect to the \\khop label.\nIn such a scenario, the candidate representative sets of all these vertices are\nexactly the same. Utilizing the symmetry, we can apply the label pruning strategy to only\none vertex per symmetry group and replicate the results for all other vertices\nin the group. For example, the vertices $10$ and $40$ in\nFigure~\\ref{subfig:ex_sub} are symmetric and the representative sets $R(10)$\nand $R(40)$ are exactly the same.  In abstract algebra terms, such  groups are\ncalled orbits of the graph and can be computed using the nauty algorithm\n\\cite{nauty}. \nEven\nthough computing the orbits is expensive, we can avoid $(|g|-1) \\times |\\CR|$\n\\ncl label cost computations, where $|g|$ is the size of an orbit $g$. Note that\nwe find the orbits only for the pattern, which is usually very small compared\nto the database graph.\n%Note that the payoff is zero if all the vertex orbits are of size $1$.\n\n\\subsection{Precomputing database \\khop labels} The \\khop label of the database\nvertices is independent of the candidate pattern.\nAlso, the flow network to compute the cost of matching the \\khop labels requires\nonly the aggregate information about the number of vertices with a given label.\nHence, we can precompute the \\khop labels of the database vertices and store them\nin memory. The following theorem proves that computing the \\khop label is\nexpensive.\n\n\\begin{thm} k-reachable (KR): Given a graph $G$, an integer $k$ and $u \\in \\vg$, compute\n    $\\khopl{k}{u}$.  KR cannot be solved in polynomial time unless $P =\n    NP$.\n\n\\begin{myproof} We prove this by reducing Hamiltonian path (HP) to KR.\n    Hamiltonian Path: Given a graph $G$, is there a simple path of length\n    $|\\vg|-1$, i.e., a path that visits each and every vertex exactly\n    once? The problem of finding a Hamiltonian path is\n    NP-complete \\cite{npcomplete}.\\\\ Assume that algorithm $X(k)$ can\n    compute KR in polynomial time. Let $|\\vg| = n$ and let $u$ be the starting\n    vertex in HP if it exists.  Given an instance of HP, we first get a vertex\n    $v$ with $\\kpath{u}{v}{n-1}$ using $X(n-1)$. The vertex $v$ is removed from the\n    graph and we find a vertex $v'$ such that $\\kpath{u}{v'}{n-2}$ and $(v', v)\n    \\in \\eg$. We repeat this process $n-1$ times. If at any stage $X(j) = \\{\\}$\n    then we restart from a different starting vertex. The vertices selected in\n    each iteration lie on a path of length $n-1$ if one exists. If there were a\n    polynomial time algorithm for KR, then HP could be solved in polynomial time\n    by reducing it to KR. 
Therefore, KR is at least as hard as HP.\n    So, KR cannot be solved in polynomial time unless\n    P = NP.\n\\end{myproof}\n\\end{thm}\n\nTo compute the \\khop label of a vertex $u$, we check for each vertex $v$ whether\n$\\kpath{u}{v}{k}$ holds by enumerating all possible $k$-length paths until a path\nis found.  This procedure is exponential; we therefore fix a maximum value\n$k_{max}$ and use the \\khop label based pruning only for values of $k \\leq\nk_{max}$.  It only takes a couple of minutes to compute the \\khop label for $k\n\\leq 6$ for all the vertices in the database graph. This is significantly less\nthan the overall run time of the algorithm. Once $\\khopl{k}{u}$ is computed we\nstore in memory only the tuples $(m, l)$ where $m$ is the number of vertices $u'\n\\in \\khopl{k}{u}$ with $L(u') = l$. The total amount of main memory required to store\nthe precomputed \\khop labels is $O(|\\vg| \\times |\\Sigma| \\times k_{max})$.\n\n\n\n\n\\begin{figure}[!h]\n    \\centering\n\\scalebox{0.6}[0.6]{\n  \\psset{unit=0.85in}\n  \\newcommand\\arc[4]{\\ncline{#1}{#2}{#3}\\ncput{\\colorbox{gray!40}{#4}}}\n      \\begin{pspicture}(0,1)(5,3)\n        \\cnodeput[doubleline=true](1,2){src}{s}\n        \\cnodeput(2,1){n1}{C}\n        \\cnodeput(2,3){n2}{D}\n        \\cnodeput[doubleline=true](5,2){sink}{t}\n        \\cnodeput(4,1){n4}{B}\n        \\cnodeput(4,3){n5}{A}\n        \\arc{->}{src}{n1}{$1,0$}\n        \\arc{->}{src}{n2}{$1,0$}\n        %\\arc{->}{n1}{n4}{$1$}\n        \\ncline{->}{n1}{n4}\\ncput[npos=0.5]{\\colorbox{gray!40}{$1,0.3$}}\n        \\ncline{->}{n1}{n5}\\ncput[npos=0.3]{\\colorbox{gray!40}{$1,0.6$}}\n        \\ncline{->}{n2}{n4}\\ncput[npos=0.3]{\\colorbox{gray!40}{$1,1$}}\n        \\ncline{->}{n2}{n5}\\ncput[npos=0.5]{\\colorbox{gray!40}{$1,0.1$}}\n        \\arc{->}{n4}{sink}{$2,0$}\n        \\arc{->}{n5}{sink}{$1,0$}\n        %\\arc{->}{n6}{sink}{$1$}\n      \\end{pspicture}\n    }\n    \\caption{Flow network for \\khopl{2}{2} and \\khopl{2}{20}}\n\t\\label{fig:Hflow}\n\\end{figure}\n", "meta": {"hexsha": "7cfb9759f847dc319649046c450fdc9a658e2026", "size": 23742, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/representative.tex", "max_stars_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_stars_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/representative.tex", "max_issues_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_issues_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/representative.tex", "max_forks_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_forks_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-08T11:17:33.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-08T11:17:33.000Z", "avg_line_length": 52.4105960265, "max_line_length": 89, "alphanum_fraction": 0.6718894786, "num_tokens": 7958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5934692325875263}}
{"text": "%------------------------------------------------\n% FILENAME: assignment_0_revised.tex\n%  PROJECT: mathbootcamp\n%   AUTHOR:Joe Patten\n%    EMAIL: joe.patten@wsu.edu\n%  WEBSITE: http://joepatten.github.io\n%------------------------------------------------\n\\documentclass[a4paper, 11pt]{article}\n\\usepackage{assignment_0_style}\n\n\\title{ Mathematics Bootcamp Assignment 0 }\n\n\\begin{document}\n\\maketitle\n\nThere are 43 problems in this assignment.\nTake a deep breath.\nThat is a lot, but many are quite quick if you have taken courses in calculus. The questions are not meant to be exhaustive of all important topics in univariate calculus.\nInstead, they are intended to \\emph{nudge} you to remember things you likely once learned, but forgot.\nFor example, the rules of limits, l'Hopital's rule, continuity, the product rule, the chain rule, the fundamental theorem of calculus, taylor series, integration, the intermediate value theorem, the mean value theorem, inverse functions, implicit differentiation, etc.\n\n\\section{Calculus}\n\n\\paragraph{Problem 1} Compute\n\\begin{align}\n    \\lim_{x\\rightarrow 3} \\frac{5x^2 - 8x -13}{x^2-5}  \\nonumber\n\\end{align}\n\n\\paragraph{Problem 2} Compute\n\\begin{align}\n    \\lim_{x\\rightarrow 3} \\frac{x^4 - 81}{2x^2-5x-3}  \\nonumber\n\\end{align}\n\n\\paragraph{Problem 3}\nCompute\n\\begin{align}\n    \\lim_{x\\rightarrow 4} \\frac{3-\\sqrt{x+5}}{x-4}  \\nonumber\n\\end{align}\n\n\\paragraph{Problem 4}\nConsider the values of constants $a$ and $b$ so that $\\lim_{x \\rightarrow 2} f(x)$ exists and is equal to $f(2)$ where $f(x)$ is defined as below.\n\\begin{align}\n    f(x) = \\begin{cases}\n    \t\t\ta + bx & \\text{ if } x > 2 \\\\\n    \t\t\t3 & \\text{ if } x=2 \\\\\n    \t\t\tb-ax^2 & \\text{ if } x < 2\n    \t   \\end{cases}  \\nonumber\n\\end{align}\n\n\\paragraph{Problem 5}\nCompute the following limit.\n\\begin{align}\n    \\lim_{x\\rightarrow \\infty} \\frac{100}{x^2 + 5}  \\nonumber\n\\end{align}\n\n\n\\paragraph{Problem 6}\nCompute\n\\begin{align}\n    \\lim_{x\\rightarrow \\infty} (3x^3 - 1000x^2)  \\nonumber\n\\end{align}\n\n\n\\paragraph{Problem 7}\nCompute\n\\begin{align}\n    \\lim_{x\\rightarrow \\infty} \\frac{7x^2 + x - 100}{2x^2 - 5x}  \\nonumber\n\\end{align}\n\n\n\\paragraph{Problem 8}\nCompute\n\\begin{align}\n    \\lim_{x\\rightarrow \\infty} \\left( 3^x + 3^{2x} \\right)^{\\frac{1}{x}}  \\nonumber\n\\end{align}\n\n\n\\paragraph{Problem 9}\nCompute $\\lim_{x\\rightarrow 0^+} \\; x \\cdot \\ln x$.\n\n\n\\paragraph{Problem 10}\nCompute $\\lim_{x \\rightarrow 0^+} x \\cdot \\left( \\ln x \\right)^2$.\n\n\n\\paragraph{Problem 11}\nCompute $\\lim_{x \\rightarrow 0 } (1-x)^{1/x}$.\n\n\n\\paragraph{Problem 12}\nConsider the following functions for $w \\geq 0$ and $0 < \\sigma < 1$.\n\\begin{align}\n    u(w) &= \\frac{w^{1-\\sigma}-1}{1-\\sigma}  \\nonumber \\\\\n    u(w) &= \\ln(w) \\nonumber \\nonumber \\\\\n    u(w) &= \\sqrt{w} \\nonumber \n\\end{align}\nFor each function above, find the following limits (if they exist) where $u'(\\cdot)$ and $u''(\\cdot)$ represent the first and second derivatives respectively.\n\\begin{enumerate}[(i)]\n\t\\item $\\lim_{w\\rightarrow 0} \\; -u''(w)/u'(w)$.\n\t\\item $\\lim_{w\\rightarrow \\infty} \\; -u''(w)/u'(w)$.\n\t\\item $\\lim_{w\\rightarrow 0} \\; (-u''(w)\\cdot w)/u'(w)$.\n\t\\item $\\lim_{w\\rightarrow \\infty} \\; (-u''(w)\\cdot w)/u'(w)$.\n\\end{enumerate}\n\n\\paragraph{Problem 13} Differentiate $y = x^x$ (with respect to $x$).\n\n\n\\paragraph{Problem 14} 
Differentiate $y = x^{e^x}$ (with respect to $x$).\n\n\n\\paragraph{Problem 15} Compute the following limits associated with the function $f(x) = |x|$.\n\\begin{align}\n    &\\lim_{h\\rightarrow 0} \\frac{f(-2 + h) - f(-2)}{h}  \\nonumber \\\\\n    &\\lim_{h\\rightarrow 0} \\frac{f(0 + h) - f(0)}{h} \\nonumber \\\\\n    &\\lim_{h\\rightarrow 0} \\frac{f(3 + h) - f(3)}{h} \\nonumber\n\\end{align}\n\n\\paragraph{Problem 16}\nDetermine the intervals (if any) over which the following function is continuous.\n\\[\n\tf(x) = \\frac{7x^5 + x - 2}{x^2-4}\n\\]\n\n\n\\paragraph{Problem 17}\nShow that there is a root of the equation $3x^7 - 2x^5 + x -1 = 0$ between $0$ and $1$.\n\n\n\\paragraph{Problem 18}\nCompute the following limit associated with $f(x) = x^2 - 8x + 9$.\n\\[\n\t\\lim_{x\\rightarrow a} \\; \\frac{(x^2-8x+9)-(a^2-8a+9)}{x-a}\n\\]\n\n\n\\paragraph{Problem 19}\nIf the function $f(x)$ is differentiable in the interval $(a,b)$ and $|f'(x)| \\leq B < \\infty$ for all $x$ in the interval $(a,b)$, then is the maximum change in the function over any subinterval $(c,d) \\subseteq (a,b)$ finite or infinite? Prove it.\n\n\\paragraph{Problem 20}\nIf the function $f(x)$ is differentiable on $(a,b)$, but not continuously differentiable, then is $f$ continuous everywhere on $(a,b)$? Prove it. (Hint: use a theorem)\n\n\\paragraph{Problem 21}\nCan you differentiate the expression $2x = 1$? If so, what is the derivative?  If not, why not?\n\n\\paragraph{Problem 22} Find the derivative of the function\n\\[\n\tf(x) = \\frac{\\sqrt[4]{x}}{x^{-1} \\sqrt{x^5}}\n\\]\n\n\n\\paragraph{Problem 23} Differentiate the function\n\\[\n\tf(x) = \\frac{3x^2 - 5\\sqrt{x} }{ 6x^4 }\n\\]\n\n\n\\paragraph{Problem 24}\nFind the linearization of the function $f(x) = \\sqrt[3]{1+x}$ at $a=0$ and use it to approximate the numbers $f(-0.05)$ and $f(0.1)$.\nAre these approximations overestimates or underestimates?\n\n\\paragraph{Problem 25}\nLet $f(x) = (3x-5)/(4-2x)$ and find $f^{-1}(x)$.  Then compare $f'(x)$ and $\\frac{d}{dx} f^{-1}(x)$ and describe the relationship (if any).\n\n\n\\paragraph{Problem 26}\nCompute the derivative of the following function, $H(p)$, with respect to $p$ and use it to answer the following questions.\n\\begin{align}\n    H(p) &= p \\log_2 \\left(\\frac{1}{p} \\right) + (1-p)\\log_2 \\left(\\frac{1}{1-p} \\right)  \\nonumber\n\\end{align}\n\\begin{enumerate}[(i)]\n\t\\item Is there a global maximum and minimum over the interval $[0,1]$? If so, what are they?\n\t\\item Over what subset of $[0,1]$, if any, is the function increasing?\n\t\\item Over what subset of $[0,1]$, if any, is the function decreasing?\n\\end{enumerate}\n\n\\paragraph{Problem 27}\nConsider the function $f(x) = x^4 e^x$ with domain all real numbers.\n\\begin{enumerate}[(i)]\n\t\\item Find the $x$-value(s) of all roots ($x$-intercepts) of $f$.\n\t\\item Find the $x$- and $y$-value(s) of all critical points and identify each as a local max, local min, or neither.\n\t\\item Find the $x$- and $y$-value(s) of all global extrema and identify each as a global max or global min.\n\t\\item Find the $x$-value(s) of all inflection points.\n\\end{enumerate}\n\n\\paragraph{Problem 28}\nUse the Intermediate Value Theorem to show that $f(x) = x^3 - 2x - 1$ has a root on $[1,2]$.\n\n\n\\paragraph{Problem 29}\nDoes the Extreme Value Theorem say anything about the function $f(x) = x^2$ on each of the following intervals? If so, what does it say?  
In either case, explain why.\n\\begin{enumerate}[(i)]\n\t\\item $[1,4]$\n\t\\item $(1,4)$\n\\end{enumerate}\n\n\\paragraph{Problem 30}\nFind the value of the constant $c$ that the Mean Value Theorem specifies for $f(x)=x^3 + x$ on the interval $[0,3]$.\n\n\n\\paragraph{Problem 31}\nFor the equation $x^3 + y^3 = \\ln(xy) - 1$, use implicit differentiation to find $\\dd y/ \\dd x$.\n\n\n\\paragraph{Problem 32}\nConsider the function\n\\[\n\tf(x) = \\begin{cases}\n\t\t\t\tb-x^2 & \\text{ if } x < 3 \\\\\n\t\t\t\tax & \\text{ if } x \\geq 3\n\t \t   \\end{cases}\n\\]\n\\begin{enumerate}[(i)]\n\t\\item What condition(s) must be placed on the constants $a$ and $b$ in order for $f$ to be continuous on $(-\\infty, \\infty)$?\n\t\\item For what values of the constants $a$ and $b$ will $f$ be differentiable on $(-\\infty, \\infty)$?\n\\end{enumerate}\n\n\n\\paragraph{Problem 33}\nConsider the function $f(x) = \\ln(x^2)$.\nFind the fourth-order Taylor polynomial for $f(x)$ centered at $x_0=1$.\n\n\n\\paragraph{Problem 34}\nLet $X$ be a random variable with a probability density function (PDF) $f(x)$\n\\[\n\tf(x) = \\begin{cases}\n \t\t\t\tce^{-x/3} & \\text{ for } x > 0 \\\\\n \t\t\t\t0 & \\text{ otherwise }\n \t\t   \\end{cases}\n\\]\nRemember that a PDF has the property that $\\int_{-\\infty}^{\\infty} f(x)\\dd x = 1$.\n\\begin{enumerate}[(i)]\n\t\\item Find the value of the constant $c$ that makes $f(x)$ a valid PDF.\n\t\\item Find the probability that $X \\leq 1/4$.\n\\end{enumerate}\n\n\n\\paragraph{Problem 35}\nFind the derivative of the following function\n\\[\n\tG(x) = \\int_{1}^{\\sin{x}} t \\dd t\n\\]\n\n\n\\paragraph{Problem 36}\nFind the derivative of the following function\n\\[\n\tG(x) = \\int_{72}^{\\ln(x)} t \\dd t\n\\]\n\n\n\\paragraph{Problem 37}\nSuppose that $\\sum_{n=1}^{\\infty} a_n$ represents a convergent series and that no term of the series equals zero, i.e., $a_n \\neq 0$ for all $n=1,2, \\dots$.\nProve that $\\sum_{n=1}^{\\infty} 1/a_n$ is a divergent series.\n\n\n\\paragraph{Problem 38}\nDetermine whether the given sequence is increasing, decreasing, or not monotonic.\nIs the sequence bounded?  
On the basis of what you find, does the series converge, diverge, or is it impossible to determine?\n\\[\n\ta_n = \\frac{1}{5^n}\n\\]\n\n\n\\paragraph{Problem 39}\nDifferentiate the function $y = 3(x^2-1)^3 (x^2 + 1)^5$.\n\n\n\\paragraph{Problem 40}\nConsider a continuously differentiable utility function $u(\\cdot)$ such that $u'>0$ and $u'' < 0$.\nUtility comes from income $I$, which takes on two different values $I_H(x)$ and $I_L(x)$, and the probability that $I=I_H$ is given by $p(x)$, where $x\\geq 0$ and $p'(x)>0$.\nDifferentiate the expected utility with respect to $x$.\n\\[\n\tE[u] = p(x)u[I_H(x)] + (1-p(x))u[I_L(x)]\n\\]\n\n\n\\paragraph{Problem 41}\nConsider a twice continuously differentiable utility function $U(\\cdot)$ such that $U'>0$ and $U''< 0$.\nUtility comes from wealth $W$, and a person takes a gamble $h$ that represents a gain or loss to the person's wealth, with $h=0$ on average ($E(h)=0$).\nLet $p$ represent the size of an insurance premium paid to avoid taking the gamble, one that makes the person exactly indifferent between taking the gamble $h$ and paying $p$ with certainty.\nLet $E$ represent the expectation operator.\nUse the identity below to answer the questions.\n\\[\n\tE[U(W + h)] \\equiv U(W-p)\n\\]\n\\begin{enumerate}[(i)]\n\t\\item Derive a linear (first order) Taylor approximation of the right-hand side of the identity (with respect to $W$).\n\t\\item Derive a quadratic (second order) Taylor approximation of the left-hand side of the identity (with respect to $W$).\n\t\\item Let $r(W) = -U''(W)/U'(W)$ (this is the Arrow-Pratt measure of risk aversion); use the previous approximations and the identity relationship to sign the derivative $\\dd p / \\dd r(W)$.  What is the sign and what relationship does this imply about the level of risk aversion and the amount of insurance premium an individual is willing to pay?\n\\end{enumerate}\n\n\n\\paragraph{Problem 42}\nA new machine has a rental rate $v(s)$ at any time $s$ and depreciates at rate $d$.\nWith interest rate $r$, the present discounted value of this machine is \n\\[\n\tPDV(t) = \\int_{t}^{\\infty} e^{(r+d)t} v(s) e^{-(r+d)s} \\dd s\n\\]\nLet $p(t)$ be the purchase price of the machine at time $t$.\nIn equilibrium, the purchase price at time $t$, $p(t)$, will equal the present discounted value, and hence we have\n\\[\n\tp(t) = \\int_{t}^{\\infty} e^{(r+d)t} v(s)e^{-(r+d)s} \\dd s\n\\]\n\\begin{enumerate}[(i)]\n\t\\item Using the last expression for $p(t)$, find $\\frac{dp(t)}{dt}$.\n\t\\item Using your expression for $\\frac{dp(t)}{dt}$, solve for $v(t)$ in terms of $p(t)$ to show the relationship between a new machine's rental rate at time $t$ and the machine's purchase price at time $t$.\n\\end{enumerate}\n\n\n\\paragraph{Problem 43}\nA forester must decide when to cut down a growing tree.\nThe value at any time $t$ is given by $f(t)$, where $f' > 0$ and $f'' < 0$, and there was an initial investment of $L$.\nThe continuous interest rate is $r$.\nThe forester must choose $t$ (the time of harvest) to maximize the present discounted value of her profits.\n\\[\n\tPDV(t) = e^{-rt}f(t) - L\n\\]\n\\begin{enumerate}[(i)]\n\t\\item Differentiate $PDV(t)$ with respect to $t$.\n\t\\item Use the derivative as a first order condition to characterize the relationship between the $t$ that maximizes the forester's present discounted value of profit and the interest rate $r$.\n\\end{enumerate}\n\n\n\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "dcc9f06dd9642a52c3b9f6de270f8134e32e4c88", "size": 11868, "ext": "tex", "lang": "TeX", 
"max_stars_repo_path": "assets/pdfs/math_bootcamp/bootcamp_repo/assignment_0/assignment_0_revised.tex", "max_stars_repo_name": "joepatten/joepatten.github.io", "max_stars_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/pdfs/math_bootcamp/bootcamp_repo/assignment_0/assignment_0_revised.tex", "max_issues_repo_name": "joepatten/joepatten.github.io", "max_issues_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-08-09T16:28:31.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-10T14:48:57.000Z", "max_forks_repo_path": "assets/pdfs/math_bootcamp/bootcamp_repo/assignment_0/assignment_0_revised.tex", "max_forks_repo_name": "joepatten/joepatten.github.io", "max_forks_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.8549848943, "max_line_length": 347, "alphanum_fraction": 0.6716380182, "num_tokens": 3869, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5934558396636159}}
{"text": "\\section{Security Analysis}\nWe look into attacks against the fundamental property:\n$READ(Q_1,x) = READ(Q_2,x)$ for $\\forall Q_1, Q_2 \\in QS$, which is\nknown as equivocation.  The best that attackers can do is divide a\nclique into two sets ($\\mathcal{H}_1$ and $\\mathcal{H}_2$) and ask\neach set to sign $\\langle x,t,v \\rangle$ and $\\langle x,t,v' \\rangle$\nseparately. Then do the {\\sf write} protocol for the target nodes with\ncollected signature sets $S$ and $S'$. Honest servers will refuse the\nrequest because it does not satisfy the basic $b$-masking quorum\ncondition: $|S| \\geq b+1$. But with $b+1$ colluding nodes\n($\\mathcal{F}$), the attack will succeed.\n\n\\ifdefined\\ABSTRACT\n\\else\n\\newcommand{\\slice}[4]{\n  \\pgfmathparse{0.5*#1+0.5*#2}\n  \\let\\midangle\\pgfmathresult\n\n  % slice\n  \\draw[thick,fill=black!10] (0,0) -- (#1:1) arc (#1:#2:1) -- cycle;\n\n  % outer label\n  \\node[label=\\midangle:#4] at (\\midangle:1) {};\n\n  % inner label\n  \\pgfmathparse{min((#2-#1-10)/110*(-0.3),0)}\n  \\let\\temp\\pgfmathresult\n  \\pgfmathparse{max(\\temp,-0.5) + 0.8}\n  \\let\\innerpos\\pgfmathresult\n  \\node at (\\midangle:\\innerpos) {#3};\n}\n\n\\begin{tikzpicture}[scale=1.5]\n\n\\newcounter{a}\n\\newcounter{b}\n\\foreach \\p/\\t/\\l in {25//,40/$\\mathcal{H}_1$/Honest nodes,\n20/$\\mathcal{F}$/Faulty nodes, 40/$\\mathcal{H}_2$/Honest nodes}\n  {\n    \\setcounter{a}{\\value{b}}\n    \\addtocounter{b}{\\p}\n    \\slice{\\thea/100*360}\n          {\\theb/100*360}\n          {\\t}{\\l}\n  }\n\n\\end{tikzpicture}\n\\fi\n\nThe maximum number of signatures dishonest clients can get is\n$b+(n-b)/2$. Therefore, to overcome the attack we need:\n\\[ n-b > b+(n-b)/2 \\Rightarrow n > 3b. \\]\n\n\\subsubsection*{Detecting equivocation on {\\sf read}}\nEven if the number of faulty nodes exceeds the above threshold, the\nsystem can detect malicious actions with the following probability:\n\\begin{align*}\n  D_p &= Pr[Q \\cap \\mathcal{H}_1 \\neq \\emptyset \\wedge Q \\cap\n        \\mathcal{H}_2 \\neq \\emptyset] \\\\\n      & = 1 - Pr[Q \\subseteq \\mathcal{F} \\cup \\mathcal{H}_1] \\\\\n      & = 1 - ((n + f) / 2n)^{|Q|}\n\\end{align*}\nwhen $f > b$ and $|Q| < (f + n)/2$, where $f = |\\mathcal{F}|$ is the\nnumber of the faulty nodes, assuming the sizes of $\\mathcal{H}_1$ and\n$\\mathcal{H}_2$ are the same.\n\nIn the case of $f \\le b$ malicious messages will never go through\ntherefore it does not make sense to talk about the detection rate on\n{\\sf read}, however, it is possible to detect malicious messages on\n{\\sf write} if such actions are taken by clients.\nWhen the number of faulty nodes exceeds the threshold, i.e., $f > b$,\nit will be possible that the (honest) client cannot detect all\nmalicious actions. Since the minimum quorum size is $(n-1)/3$ it does\nnot need to consider the case where $|Q| < n/3$. 
Also, if the size\nexceeds $(f+n)/2$, any quorum always includes at least one node from\neach $\\mathcal{H}_i$, which makes the detection rate 100\\%.\n\n\\ifdefined\\ABSTRACT\n\\else\n\\begin{tikzpicture}[xscale=6,yscale=6]\n  \\draw [<->] (0,0.9) -- (0,0) -- (1.0,0);\n  \\node [below right] at (1,0) {$|Q|$};\n  \\node [left] at (0,{1-pow(17/300,0.7)}) {$1$};\n  \\node [left] at (0,0) {$0$};\n  \\node [below] at (0.3,0) {$n/3$};\n  \\node [below] at (0.7,0) {$(f+n)/2$};\n  \\draw[dashed, domain=0:0.3] plot (\\x, {1-pow(17/300,\\x)});\n  \\draw[thick, domain=0.3:0.7] plot (\\x, {1-pow(17/300,\\x)});\n  \\draw[dashed, domain=0.7:1.0] plot (\\x, {1-pow(17/300,0.7)});\n\\end{tikzpicture}\n\\fi\n\nFor example, assuming we choose a quorum $|Q| = 3b+1$ out of $n = 4b+1$,\nwhich is the default setup of the kv quorum system, the detection rate\nis 100\\% up to $f = 2b$ faulty nodes.\n\n\\ifdefined\\ABSTRACT\n\\else\n\\begin{tikzpicture}[xscale=6,yscale=6]\n  \\draw [<->] (0,1.05) -- (0,0) -- (1.05,0);\n  \\node [left] at (0,1) {$1$};\n  \\node [left] at (0,0) {$0$};\n  \\node [below] at (0.3,0) {$n/3$};\n  \\node [below] at (0.7,0) {$2n/3$};\n  \\node [below] at (1,0) {$n$};\n  \\node [right] at (1.05,0) {$f$};\n  \\draw[thick, smooth] (0,1) to (0.6,1) to [out=0,in=90] (1,0);\n\\end{tikzpicture}\n\\fi\n", "meta": {"hexsha": "f31602ae3a281f6adf16c11ef57447b5d083463e", "size": 3961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/analysis.tex", "max_stars_repo_name": "dmitris/bftkv", "max_stars_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2017-09-29T22:46:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-07T22:27:28.000Z", "max_issues_repo_path": "docs/tex/analysis.tex", "max_issues_repo_name": "dmitris/bftkv", "max_issues_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tex/analysis.tex", "max_forks_repo_name": "dmitris/bftkv", "max_forks_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-12-27T17:17:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-08T23:56:28.000Z", "avg_line_length": 36.0090909091, "max_line_length": 72, "alphanum_fraction": 0.640747286, "num_tokens": 1512, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7634837635542924, "lm_q1q2_score": 0.5934558234198822}}
{"text": "\n\n\n\\documentclass[10pt,handout]{beamer}\n\\setbeamertemplate{navigation symbols}{}\n\\usefonttheme{serif} \n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{cite}\n\\usepackage{color} \n\\usepackage{setspace}\n\\usepackage{hyperref}\n\n\\newcommand{\\xx}{{\\bf{x}}}\n\\begin{document}\n\\title{Machine Learning I Lecture IV:\\\\ Bayesics of Inference}   \n\\author{Jakob H Macke\\\\ Max Planck Institute for Biological Cybernetics\\\\ Bernstein Center for Computational Neuroscience} \n\\date{XY.XY.2012} \n\n\\frame{\\titlepage} \n\n\\frame{\\frametitle{Plan for today}\\tableofcontents} \n\n\n\\section{Bayesian Inference} \n\\frame{\\frametitle{'Bayesian Inference' refers to computation of the posterior distribution over parameters given the data.} \n\\begin{itemize}\n\\item Data $\\mathcal{D}=\\{t_1, t_2, \\ldots, t_N\\}$ \\pause\n\\item Supervised learning: $\\mathcal{D}=\\{(x_1,t_1), (x_2,t_2), \\ldots, (x_N,t_N)\\}$ \\pause\n\\item Likelihood function $P(t|w)$ \\hspace{1cm}(parameterized by $w$)\\pause\n\\item $P(D|w)= \\prod_{n=1}^N P(t_n|w)$\\pause\n\\item Prior distribution $\\pi(w)$\\pause\n\\item Bayes rule: $\\mbox{Posterior} \\propto \\mbox{Likelihood}  \\times \\mbox{Prior}$\n\\begin{align}\nP(w | D) =\\frac{1}{Z} P(D|w) P(w)\n\\end{align}\\pause\n\\item Probabilities must normalize to $1$: \n\\begin{align}\nZ= P(D)= \\int P(D|w) P(w) dw\n\\end{align}\n\\end{itemize}\n\n}\n\n%\\subsection{Subsection no.1.1  }\n%\\frame{ \n%Without title somethink is missing. \n%}\n\\frame[shrink=5]{\\frametitle{We can use the posterior distribution to make predictions, decisions or scientific statements.}\n\\begin{itemize}\n\\item Predictions: \\alert{Predictive distribution}\n\\begin{align}\nP(t^*|x^*, D)= \\int_w P(t^*| x^*, w) P(w| D)   dw\n\\end{align}\n\\pause\n\\item Making decisions: Suppose we have calculated $P(t^*|x^*, D)$, and someone asks us to give a guess $\\hat t$ of $t^*$. While we could just take the \\emph{most likely value} of $t^*$, it really depends on the cost function. Are mistakes in one direction as costly as mistakes in the other direction? For example, what is the cost of a false positive or false negative? Given a \\alert{cost function} $C(\\hat t, t)$, one can calculate the 'Bayes-optimal' decision from the posterior distribution. \n\\item \\pause Scientific statements: e.g. 'After observing 100 data points, we were 90\\% sure that the parameter $\\theta$ is between -.1 and .3. Now that we have observed another 200, we are 97\\% sure.'\n\n\\end{itemize}\n\n\n}\n\\frame{\\frametitle{In most cases, the posterior distribution can not be calculated exactly, and approximations have to be used.} \n\\begin{itemize}\n\\item Ignore prior, maximize likelihood $P(D|w)$: \\alert{Maximum likelihood learning}  \\pause\n\\item Only search for mode of posterior (\\alert{Maximum a posteriori, MAP}), i.e. $\\mbox{argmax}_w P(w |D) $. In practice, maximize log-posterior.\\\\\n\\pause\n\\textcolor{gray}{Q: Why is finding the mode of the posterior so much easier than finding the full posterior?}\\\\\n\\pause\n\\textcolor{gray}{Q: When is MAP a really bad idea?}\n\\item \\pause Use simplified model to approximate posterior: Find parameters of model $q(w,\\Phi)$ such that\n$q(w) \\approx P(w|D)$. 
\\\\\n\\pause\n\\begin{itemize}\n\\item Examples: Varational Inference, Expectation Propagation, Laplace Approximation\n\\item Very often, a Normal approximation is used: $q(w) \\approx \\mathcal{N}(w|\\mu,\\Sigma)$.\n\\end{itemize}\n\\pause\n\\item Use MCMC sampling to generate samples from posterior distribution\n\\end{itemize}\n\n}\n\n\n\\section{Example: Bayesian coin toss}\n\\frame{\\frametitle{Example: Bayesian coin toss}\n\\begin{itemize}\n\\item Suppose we have $N$ throws of a coin, $D=\\{t_1, t_2, \\ldots, t_N\\}$\n\\item We write $T_n= 1$ if the n-th throw was head, and $t_n=0$ if it was tail.\n\\item \\pause One parameter:  $q\\in [0, 1]$, the probability of obtaining heads\n\\item \\pause Likelihood of one throw:\n\\begin{align}\n P(T_n=1|q)&= q\\\\\n P(T_n=0|q)&= (1-q)\n\\end{align}\n\\item Likelihood of data $D$: [on board]\n\\end{itemize}\n\n}\n\n\\frame{\\frametitle{We will use a beta distribution as a prior for $q$.}\nThe shape of the distribution is determined by two parameters.\n\\centering\n\\includegraphics[width=10cm]{BetaZoo}\n}\n\n\\frame{\\frametitle{We will use a beta distribution as a prior for $q$.}\n\\begin{itemize}\n\\item \nBeta distribution: \n\\begin{align}\n\\pi(q| \\alpha, \\alpha_{2})= \\frac{1}{Z}q^{\\alpha_1-1} (1-q)^{\\alpha_{2}-1}\n\\end{align}\n\\item \\pause Normalizing constant: the 'beta function'\n\\begin{align}\nZ= \\int_0^1 q^{\\alpha_1-1} (1-q)^{\\alpha_{2}-1} dq=: B(\\alpha_1, \\alpha_{2})\n\\end{align}\n\\item \\pause Mean and Variance:\n\\begin{align}\n\\mbox{E}(q|\\alpha_1,\\alpha_{2})&= \\frac{\\alpha_1}{\\alpha_1+\\alpha_{2}}\\\\\n\\mbox{Var}(q|\\alpha_1,\\alpha_{2})&= \\frac{\\alpha_1\\alpha_{2}}{(\\alpha_1+\\alpha_{2})^2(\\alpha_1+\\alpha_{2}+1)}\n\\end{align}\n\\item \\pause Symmetric case and heuristics:  [on board]\n\\end{itemize}\n\n}\n\\frame{\\frametitle{Illustration: The posterior gets more peaked as more data is coming in. }\n\n\\centering\nData: $D=\\{1     1     0     1     1     1     1     1     0\\}$\n\n\\includegraphics[width=10cm]{BetaPosterior}\n}\n\n\\frame{\\frametitle{The posterior distribution can be calculated in closed form.}\n\nWe use $S_n$ to denote the number of heads on the first $n$ trials.\n\\vspace{.5cm}\n\n\n \\pause Maximum likelihood estimation:\n\n\\vspace{.5cm}\n\n[on board]\n\n\\vspace{.5cm}\n\n \\pause Posterior distribution:\n\n\\vspace{.5cm}\n\n[on board]\n\n\\vspace{.5cm}\n\n\\pause We can either take all the data and calculate the posterior at once, or do it sequentially as new data comes in:\n\n\\vspace{.5cm}\n\n[on board]\n}\n\n\n\\frame[shrink=10]{\\frametitle{We can use the posterior distribution for predictions ... }\n\\vspace{.5cm}\nAfter observing $N$ coin-flips, what is our prediction for the next coin flip?\n\\begin{align}\nP(T^*=1| D)&= \\int_0^1 P(T^*=1|q) P(q|D) dq\\\\\n& = \\int_0^1 q P(q|D) dq\\\\\n&= \\mbox{E}(q|D) \\\\\n&=\\frac{\\alpha_1+S_N}{(\\alpha_1+ S_N)+(\\alpha_{2}+N-S_N)}\\\\\n&=\\frac{\\alpha_1+S_N}{(\\alpha_1+\\alpha_2+N)}\n\\end{align}\n\n\\pause\n\\vspace{.5cm}\nIn our example, $\\mbox{E}(q|D)=0.69$, and $\\mbox{MLE}=0.78$. \n\n\\pause\n\\vspace{.5cm}\nQ: What happens if $N$ gets very large?\n\n\\pause\n\\vspace{.5cm}\nNote: This is a bit of a special case-- in general, the predictive distribution is not simply the likelihood evaluated at the mean!!!\n}\n\n\\frame{\\frametitle{... for statistical reasoning .... }\n\\includegraphics[width=10cm]{BetaPosteriorConfidence}\n}\n\n\n\\frame{\\frametitle{... and for making decisions. 
}\nSomeone offers you a bet: you get 1 euro if you predict the next coin toss correctly, but you have to pay 2 euros if you are wrong. \n\nShould you take the bet? What should you predict?\n\n\\begin{align}\nC(t^*,\\hat t)=\n\\begin{cases}\n-1 &\\mbox{if~} t^*= \\hat t\\\\\n2 &\\mbox{if~} t^*\\neq \\hat t\\\\\n\\end{cases}\n\\end{align}\n\nExpected cost: \n\\begin{align}\nE(C)=\\sum_{t^*=0}^1 C(t^*,\\hat t) P(t^*| D)\n\\end{align}\n\n\\pause\nIf we predict tails ($\\hat t=0$): \n\\begin{align}\nE(C)&= P(t^*=1|D) C(1,0)+P(t^*=0|D) C(0,0)  \\\\\n&=0.69 (2)  + 0.31 (-1)= 1.07\n\\end{align}\n\n\\pause\nIf we predict heads ($\\hat t=1$): \n\\begin{align}\nE(C)=0.69 (-1)  + 0.31 (2)= -0.07\n\\end{align}\n\n\n\n\n\n}\n\n\\section{Conjugate priors and the exponential family}\n\n\\frame{\\frametitle{Why was inference so easy here?}\n\\begin{itemize}\n\\item The posterior distribution had a closed form solution. \\pause\n\\item In fact, the posterior had the same functional form as the prior, just different parameters. \\pause\n\\item The parameters of the posterior could be calculated by simply adding the observations to the prior parameters. \\pause\n\\item We used a likelihood from the \\alert{exponential family} and its \\alert{conjugate prior}. In this case, Bayesian inference is always easy. \\pause\n\\item For this reason, exponential families and conjugate priors are used extensively in Bayesian modelling, often as 'building blocks' of more complicated models.\n\\end{itemize}\n\n}\n\n\\frame{\\frametitle{Inference is easy whenever the likelihood is in the exponential family and the prior is its conjugate.}\n\\begin{itemize}\n\\item Exponential family distributions have the form \n\\begin{align}\nP(\\xx|\\theta)&= g(\\theta) f(\\xx) \\exp\\left(\\phi(\\theta)^\\top S(\\xx)\\right)\n\\end{align}\n\n\\item \\pause The conjugate prior is\n\\begin{align}\n\\pi(\\theta)&= F(\\tau,\\nu) g(\\theta)^\\nu \\exp(\\phi(\\theta)^\\top \\tau)\n\\end{align}\n\\item \\pause  Calculating the posterior:\n\\begin{align}\n [\\mbox{on board}]\n\\end{align}\n\\end{itemize}\n\n}\n \n\\frame{\\frametitle{Inference is easy whenever the likelihood is in the exponential family and the prior is its conjugate.}\nThe posterior given an exponential family likelihood and conjugate prior is \n\\begin{align}\nP(\\theta|D)= F(\\tau+ \\sum_i S(x_i), \\nu+N) g(\\theta)^{\\nu+N} \\exp\\left(\\phi(\\theta)^\\top(\\tau +\\sum_i S(x_i)    )\\right)\n\\end{align}\n\\pause\n\\begin{itemize}\n\\item $\\phi(\\theta)$ is the vector of \\alert{natural parameters}\n\\item $\\sum_i S(x_i)$ is the vector of \\alert{sufficient statistics}\n\\item $\\tau$ is a vector of \\alert{pseudo-observations}\n\\item $\\nu$ is the scale of the prior\\\\\n\\end{itemize}\n\n\\pause\nThe exponential family includes most common distributions, including the Normal, Exponential, Gamma, Chi-square, Beta, Dirichlet, Bernoulli, Poisson, Wishart and the Inverse Wishart.\n}\n  \n  \\frame{\\frametitle{How can we put our coin example into this framework?}\n  [on board]\n  }\n  \n  \n%\\subsection{Lists I}\n%\\begin{itemize}\n%\\item Introduction to  \\LaTeX  \n%\\item Course 2 \n%\\item Termpapers and presentations with \\LaTeX \n%\\item Beamer class\n%\\end{itemize} \n\n%\\frame{\\frametitle{lists with pause}\n%\\begin{itemize}\n%\\item Introduction to  \\LaTeX \\pause \n%\\item Course 2 \\pause \n%\\item Termpapers and presentations with \\LaTeX \\pause \n%\\item Beamer class\n%\\end{itemize} \n%}\n\n%\\subsection{Lists II}\n%\\frame{\\frametitle{numbered 
lists}\n%\\begin{enumerate}\n%\\item Introduction to  \\LaTeX  \n%\\item Course 2 \n%\\item Termpapers and presentations with \\LaTeX \n%\\item Beamer class\n%\\end{enumerate}\n%}\n%\\frame{\\frametitle{numbered lists with pause}\n%\\begin{enumerate}\n%\\item Introduction to  \\LaTeX \\pause \n%\\item Course 2 \\pause \n%\\item Termpapers and presentations with \\LaTeX \\pause \n%\\item Beamer class\n%\\end{enumerate}\n%}\n\n%\\section{The Normal distribution} \n%\\subsection{Tables}\n%\\frame{\\frametitle{Next week}\n\n%Gaussians\n\n%\\vspace{1cm}\n%Gaussians\n\n\n%\\vspace{1cm}\n%Linear Regression\n\n%}\n\n\n%\\frame{\\frametitle{The multivariate Gaussian}\n%}\n%\\begin{tabular}{|c|c|c|}\n%\\hline\n%\\textbf{Date} & \\textbf{Instructor} & \\textbf{Title} \\\\\n%\\hline\n%WS 04/05 & Sascha Frank & First steps with  \\LaTeX  \\\\\n%\\hline\n%SS 05 & Sascha Frank & \\LaTeX \\ Course serial \\\\\n%\\hline\n%\\end{tabular}}\n%}\n\n%\\frame{\\frametitle{Tables}\n%}\n%\\frame{\\frametitle{Tables with pause}\n%\\begin{tabular}{c c c}\n%A & B & C \\\\ \n%\\pause \n%1 & 2 & 3 \\\\  \n%\\pause \n%A & B & C \\\\ \n%\\end{tabular} }\n\n\n%\\section{Section no. 4}\n%\\subsection{blocs}\n%\\frame{\\frametitle{blocs}\n\n%\\begin{block}{title of the bloc}\n%bloc text\n%\\end{block}\n\n%\\begin{exampleblock}{title of the bloc}\n%bloc text\n%\\end{exampleblock}\n\n\n%\\begin{alertblock}{title of the bloc}\n%bloc text\n%\\end{alertblock}\n%}\n\\end{document}\n\n", "meta": {"hexsha": "99a2dcf4dd4f451c1f4aad055c4e36fe6b432e34", "size": 10945, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/lecture4_bayesianinference/lecture4.tex", "max_stars_repo_name": "mackelab/machine-learning-I", "max_stars_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2015-07-31T15:08:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T17:07:23.000Z", "max_issues_repo_path": "slides/lecture4_bayesianinference/lecture4.tex", "max_issues_repo_name": "cne-tum/msne_statsandprob_ss2018", "max_issues_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/lecture4_bayesianinference/lecture4.tex", "max_forks_repo_name": "cne-tum/msne_statsandprob_ss2018", "max_forks_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2018-03-16T07:42:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T14:02:27.000Z", "avg_line_length": 28.7270341207, "max_line_length": 498, "alphanum_fraction": 0.7044312471, "num_tokens": 3542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5934558229311688}}
{"text": "\\section*{Problem 1: Logarithmic Regret of UCB}\n\nWe will consider the multi-armed bandit setting discussed in class, where the\nactions $a \\in \\{1,\\cdots,K\\}$, $\\mu_a$ is the mean reward provided by arm $a$,\nand $X_t$ is reward observed at time $t$ if we pull arm $a$. As in class, we\nassume that the observed rewards are bounded a $0 \\leq X_t \\leq 1$ almost\nsurely.\n\nRecall $\\mu_* = \\max_a\\mu_a$, and let $a_*$ be the index of an optimal\narm. Define $\\Delta_a$ as:\n\\begin{equation}\n  \\Delta_a = \\mu_* - \\mu_a\n\\end{equation}\nand define:\n\\begin{equation}\n  \\Delta_{\\min} = \\min_{a \\neq a_*} \\Delta_a.\n  \\label{eqn:1_delta_min}\n\\end{equation}\n\nIn this problem, we seek to prove the following theorem:\n\n\\begin{theorem}\n  The UCB algorithm (with an appropriate setting of the parameters) has a regret\n  bound that is:\n  \\begin{equation}\n    T\\mu_* - \\mathbb{E}\\left[\n      \\sum_{t \\leq T} X_t\n    \\right] \\leq c \\frac{K\\log T}{\\Delta_{\\min}},\n    \\label{eqn:1_result}\n  \\end{equation}\n  where $c$ is a universal constant.\n\\end{theorem}\n\n\n\\subsection*{Let's prove this!}\n\nLet $N_{a,t}$ be the number of times we pulled arm $a$ up to time $t$. Recall\nfrom class that by Hoeffding's bound (and the union bound), we can provide a\nconfidence bound for an arbitrary algogrithm as follows: with probability\ngreater than $1 - \\delta$, we have that fo all arms and for all time steps $K\n\\leq t \\leq T$:\n\\begin{equation}\n  \\mathbb{P}\\left(\\forall t, a, \\left\\lvert\n      \\hat{\\mu}_{a,t} - \\mu_a\n    \\right\\rvert\n    \\leq  c_2 \\sqrt{\\frac{\\log\\left(T/\\delta\\right)}{N_{a,t}}}\n  \\right) \\geq 1 - \\delta,\n  \\label{eqn:1_confidence_interval}\n\\end{equation}\nwhere $c_2$ is some universal constant. Note that the algorithm starts the first\n$K$ steps by sampling each arm once, so we can assume $t \\geq K$.\n\n\\begin{enumerate}\n\\item Now consider the UCB algorithm using this confidence interval. Argue that\n  with probability greater thatn $1 - \\delta$, the total number of times that an\n  sub-optimal arm $a$ will be pulled up to time $T$ will be bounded as follows:\n  \\begin{equation}\n    N_{a,T} \\leq c_3 \\frac{\\log\\left(T/\\delta\\right)}{\\Delta_a^2}\n    \\label{eqn:1_N_at_bound}\n  \\end{equation}\n  for some constant $c_3$.\n\n  \\subsubsection*{Solution}\n\n  \\begin{proof}\n    This bound follows from Equation \\ref{eqn:1_confidence_interval}. 
Then, for\n    any $a$ and $t \\in [K, T]$ with probability greater than $1 - \\delta$, we\n    have that\n    \\begin{align*}\n      \\left\\lvert \\hat{\\mu}_{a,t} - \\mu_a \\right\\rvert\n      &\\leq  c_2 \\sqrt{\\frac{\\log\\left(T/\\delta\\right)}{N_{a,t}}} \\\\\n      \\sqrt{N_{a,t}}\n      &\\leq c_2\\frac{\n        \\sqrt{\\log\\left(T/\\delta\\right)}\n        }{\\left\\lvert \\hat{\\mu}_{a,t} - \\mu_a \\right\\rvert} \\\\\n      N_{a,t}\n      &\\leq c_2^2 \\frac{\\log\\left(T/\\delta\\right)}{\n        \\left(\\hat{\\mu}_{a,t} - \\mu_a \\right)^2}.\n    \\end{align*}\n    If we let $c_3 = c_2^2$, substitute\n    $\\Delta_a^2 = \\left(\\hat{\\mu}_{a,t} - \\mu_a \\right)^2$, and fix $t = T$, we\n    have Equation \\ref{eqn:1_N_at_bound} with probability greater than\n    $1 - \\delta$ as desired.\n  \\end{proof}\n\\item Argue that the observed regret of UCB is bounded as follows: with\n  probability greater than $1 - \\delta$, we have that:\n  \\begin{equation}\n    T\\mu_* - \\sum_{t \\leq T} \\mu_{a_t} \\leq c_3 \\sum_{a \\neq a_*}\\frac{\\log\\left(T/\\delta\\right)}{\\Delta_a},\n    \\label{eqn:1_2_result}\n  \\end{equation}\n  where $a_t$ is the arm chosen by the algorithm at time $t$.\n\n  \\subsubsection*{Solution}\n\n  \\begin{proof}\n    Equation \\ref{eqn:1_2_result} follows from Equation \\ref{eqn:1_N_at_bound},\n    noting that $\\sum_{a = 1}^K N_{a,T} = T$, and seeing that\n    $\\Delta_{a_*} = \\mu_* - \\mu_{a_*} = 0$.\n\n    We have that\n    \\begin{align*}\n      T\\mu_* - \\sum_{t \\leq T} \\mu_{a_t}\n      &= \\sum_{a = 1}^K N_{a,T}\\left(\\mu_* - \\mu_a\\right) \\\\\n      &= \\sum_{a = 1}^K \\Delta_{a}N_{a,T} \\\\\n      &= \\sum_{a \\neq a_*} \\Delta_{a}N_{a,T} \\\\\n      &\\leq \\sum_{a \\neq a_*} \\Delta_{a}\\left(c_3\\frac{\\log\\left(T/\\delta\\right)}{\\Delta_{a}^2}\\right) \\\\\n      &= \\sum_{a \\neq a_*} c_3\\frac{\\log\\left(T/\\delta\\right)}{\\Delta_{a}},\n    \\end{align*}\n    which gives Equation \\ref{eqn:1_2_result} with probability $1 - \\delta$ as\n    desired.\n  \\end{proof}\n\\item Now show that the expected regret of UCB is bounded as:\n  \\begin{equation}\n    T\\mu_* - \\mathbb{E}\\left[\\sum_{t \\leq T} X_t\\right] \\leq\n    c_4 \\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_a}.\n    \\label{eqn:1_3_result}\n  \\end{equation}\n  \\subsubsection*{Solution}\n  \\begin{proof}\n    Fix $\\delta = 1/T^2$ as in the proof of Lemma 3.1 from the lecture notes.\n    \n    We have that\n    \\begin{align*}\n      \\mathbb{E}\\left[\\sum_{t \\leq T} \\left(\\mu_* - X_t\\right)\\right]\n      &=\n        T\\mu_* - \\mathbb{E}\\left[\\sum_{t \\leq T} X_t\\right]\n        = T\\mu_* - \\mathbb{E}\\left[\\sum_{t \\leq T} \\mu_{a_t}\\right] \\\\\n      &\\leq \\left(1 - \\delta\\right)c_3\\sum_{a \\neq a_*}\\frac{\\log\\left(T/\\delta\\right)}{\\Delta_{a}}\n        + \\delta T \\\\\n      &= \\left(1 - \\frac{1}{T^2}\\right)c_3\\sum_{a \\neq a_*}\\frac{\\log\\left(T\\right) + 2\\log\\left(T\\right)}{\\Delta_{a}}\n        + \\frac{1}{T} \\\\\n      &= 3c_3 \\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_{a}} - \\frac{3c_3}{T^2} \\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_{a}} + \\frac{1}{T} \\\\\n      &= O\\left(\\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_{a}}\\right)\n    \\end{align*}\n    asymptotically since the other terms decay with $T$. Thus, it follows that\n    there exists some $c_4$ such that\n    \\begin{equation*}\n      T\\mu_* - \\mathbb{E}\\left[\\sum_{t \\leq T} X_t\\right]\n      \\leq c_4\\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_{a}}.\n    \\end{equation*}\n  \\end{proof}\n\\item Now argue that the theorem follows and specify what the UCB algorithm is\n  (with parameters set appropriately).\n  \\subsubsection*{Solution}\n  \\begin{proof}\n    Applying Equation \\ref{eqn:1_delta_min} to Equation \\ref{eqn:1_3_result}, we have that\n    \\begin{align*}\n      T\\mu_* - \\mathbb{E}\\left[\\sum_{t \\leq T} X_t\\right]\n      &\\leq\n        c_4 \\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_a} \\\\\n      &\\leq\n        c_4 \\sum_{a \\neq a_*} \\frac{\\log\\left(T\\right)}{\\Delta_{\\min}} \\\\\n      &\\leq c_4 K\\frac{\\log\\left(T\\right)}{\\Delta_{\\min}},\n    \\end{align*}\n    which gives us Equation \\ref{eqn:1_result} if we define $c = c_4$.\n  \\end{proof}\n\n  Thus, we have the following UCB algorithm.\n  \\begin{enumerate}[label=(\\arabic*)]\n  \\item Try each of the $K$ arms once.\n  \\item Fix $t$. Calculate \\begin{equation} U_{a,t} = \\hat{\\mu}_{a,t} +\n      c_2\\sqrt{3\\frac{\\log{T}}{N_{a,t}}}\n      \\label{eqn:1_4_upper_bound}\n    \\end{equation}\n    for all $a = 1,2,\\ldots,K$. Pull arm $a_{*}^{(t)} = \\argmax_a U_{a,t}$.\n    \\label{alg:1_4_explore}\n  \\item Repeat Step \\ref{alg:1_4_explore} for $t = K+1, \\ldots, T$.\n  \\end{enumerate}\n
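\nAs an empirical sanity check (not part of the assignment), a minimal Python simulation of this algorithm on Bernoulli arms; the arm means and the constant $c_2 = 1$ are illustrative assumptions:\n\\begin{verbatim}\nimport numpy as np\n\n# UCB with the index U_{a,t} defined above; mu and c2 are assumed.\nrng = np.random.default_rng(0)\nmu = np.array([0.9, 0.8, 0.6])        # arm 0 is optimal\nK, T, c2 = len(mu), 20000, 1.0\ncounts = np.zeros(K)                  # N_{a,t}\nsums = np.zeros(K)                    # running reward totals\nfor a in range(K):                    # step (1): pull each arm once\n    counts[a] += 1\n    sums[a] += rng.random() < mu[a]\nfor t in range(K, T):                 # steps (2)-(3)\n    ucb = sums / counts + c2 * np.sqrt(3 * np.log(T) / counts)\n    a = int(np.argmax(ucb))\n    counts[a] += 1\n    sums[a] += rng.random() < mu[a]\npseudo_regret = T * mu.max() - counts @ mu\nprint(pseudo_regret / np.log(T))      # stays bounded as T grows\n\\end{verbatim}\n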
", "meta": {"hexsha": "5c9fdea899d3bb9fea6e5d450c8d07f36c9e981e", "size": 6780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw4/problem1/problem1.tex", "max_stars_repo_name": "kspathak/cse547", "max_stars_repo_head_hexsha": "2379c6435c871720aa7da53d3c8066a628e81830", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw4/problem1/problem1.tex", "max_issues_repo_name": "kspathak/cse547", "max_issues_repo_head_hexsha": "2379c6435c871720aa7da53d3c8066a628e81830", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw4/problem1/problem1.tex", "max_forks_repo_name": "kspathak/cse547", "max_forks_repo_head_hexsha": "2379c6435c871720aa7da53d3c8066a628e81830", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-18T01:39:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-18T01:39:20.000Z", "avg_line_length": 39.649122807, "max_line_length": 161, "alphanum_fraction": 0.6259587021, "num_tokens": 2513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542925, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.593455819481127}}
{"text": "\\subsubsection{One Dimensional Permeable Porous Medium Radionuclide Mass Balance\nModel}\\label{sec:one_dim_ppm}\n\nThe advection dispersion equation is at the core of contaminant transport. A \ndescription of advection and dispersion appears in the following section on \nmass transfer. For now, it is sufficient to note that various solutions to the advection dispersion equation\n\\eqref{unidirflow} have been published for both the first and third types of\nboundary conditions. The third, Cauchy type, is more mass conservative, and is\nthe primary kind of boundary condition used at the source for the model\nimplementation in \\Cyder.\nAbstraction results informed modifications to the implementation of an\nanalytic solution to the one dimensional advection-dispersion equation with\na finite domain and Cauchy and Neumann boundary conditions at the inner and outer\nboundaries, respectively.\n\nThe conceptual model in Figure \\ref{fig:1dinf} represents solute transport in\none dimension with unidirectional flow upward and a\nfinite boundary condition in the positive flow direction.\nNotably, unidirectional vertical flow upward in the far field simplifies a \n3-dimensional problem into one dimenstion. The vertical direction was chosen to \nbe conservative, since the shortest path to the biosphere is the vertical, $z$, direction.\nIn \\Cyclus, radioactive decay is handled external to the components, so there is\nno need to include production or decay.  An approximate solution for these conditions\nmade by Brenner \\cite{brenner_diffusion_1962} is described below as\nit is given in van Genuchten et. al, \\cite{van_genuchten_analytical_1982},\n\n\\FloatBarrier\n\\vspace{4mm}\n\n\\begin{figure}[h!]\n  \\begin{center}\n    \\def\\svgwidth{0.7\\columnwidth}\n    \\input{./nuclide_models/mass_balance/one_dim_ppm/1dfin.eps_tex}\n  \\end{center}\n  \\caption[1D finite advection dispersion solution.]{A one-dimensional,\n        finite, unidirectional flow solution with Cauchy $(z=0)$ and Neumann \n        $(z=L)$ boundary\n        conditions.}\n  \\label{fig:1dinf}\n\\end{figure}\n\nFor the Cauchy boundary condition,\n\\begin{align}\n  -D \\frac{\\partial C}{\\partial z}\\big|_{z=0} + v_zc &= \\begin{cases}\n    v_zC_0  &  \\left( 0<t<t_0 \\right)\\\\\n    0  &  \\left( t>t_0 \\right)\\\\\n  \\end{cases}\n\\intertext{where}\n      D &= \\mbox{ Effective Dispersion Coefficient }[m^2/s]\\\\\n      v &= \\mbox{ Fluid Velocity in the medium }[m/s]\n\\intertext{the Neumann boundary condition,}\n  \\frac{\\partial C}{\\partial z}\\big|_{z=L} &= 0\n  \\intertext{and the initial condition,}\n  C(z,0) &= C_i,\n  \\label{1dinfBC}\n\\end{align}\n\\begin{align}\n  \\intertext{the solution is given as }\n  C(z,t) &= \\begin{cases}\n  C_i + \\left(C_0 - C_i\\right)A\\left(z,t\\right) & 0<t\\le t_0\\\\\n  C_i + \\left(C_0 - C_i\\right)A\\left(z,t\\right) - C_0A(z,t-t_0) & t\\ge t_0.\n  \\end{cases}\n\\end{align}\n\nFor the vertical flow coordinate system, $A$ is defined as\n\\begin{align}\nA(z,t) =& \\left(\\frac{1}{2}\\right)\\erfc{\\left[\\frac{Rz-vt}{2\\sqrt{DRt}}\\right]} \\nonumber\\\\\n&+ \\left(\\frac{v^2t}{\\pi RD}\\right)^{1/2}\\exp{\\left[-\\frac{(Rz-vt)^2}{4DRt}\\right]}\\nonumber\\\\\n&- \\frac{1}{2} \\left(1+\\frac{vz}{D} + \\frac{v^2t}{DR}\\right) \\exp{\\left[\\frac{vz}{D}\\right]}\\erfc{\\left[\\frac{Rz+vt}{2\\sqrt{DRt}}\\right]}\\nonumber\\\\\n&+ \\left(\\frac{4v^2t}{\\pi RD}\\right)^{1/2}\\left[1+\\frac{v}{4D}\\left(2L-z+\\frac{vt}{R}\\right)\\right]\\exp{\\left[\\frac{vL}{D} - 
", "meta": {"hexsha": "4fefb74959e3fa4d939f2dd15b312d09f141fc4e", "size": 3754, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "nuclide_models/mass_balance/one_dim_ppm/one_dim_ppm.tex", "max_stars_repo_name": "katyhuff/2017-huff-rapid", "max_stars_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nuclide_models/mass_balance/one_dim_ppm/one_dim_ppm.tex", "max_issues_repo_name": "katyhuff/2017-huff-rapid", "max_issues_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nuclide_models/mass_balance/one_dim_ppm/one_dim_ppm.tex", "max_forks_repo_name": "katyhuff/2017-huff-rapid", "max_forks_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5189873418, "max_line_length": 189, "alphanum_fraction": 0.7168353756, "num_tokens": 1239, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8872046026642944, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.593433651488535}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\\usepackage[margin=0.75in]{geometry}\n\\usepackage[numbers]{natbib}\n\\usepackage{physics}\n\\usepackage{xcolor}\n\\usepackage{cancel}\n\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Creative Destruction Lab}\n\\lhead{Week 2: Optimization problems \\& Rydberg atom arrays}\n\\rfoot{Page \\thepage}\n\n\\usepackage{graphicx}\n\n\\usepackage[hidelinks]{hyperref}\n\n\n\\renewcommand{\\phi}{\\varphi}\n\n\\newenvironment{changemargin}[2]{%\n\\begin{list}{}{%\n\\setlength{\\topsep}{0pt}%\n\\setlength{\\leftmargin}{#1}%\n\\setlength{\\rightmargin}{#2}%\n\\setlength{\\listparindent}{\\parindent}%\n\\setlength{\\itemindent}{\\parindent}%\n\\setlength{\\parsep}{\\parskip}%\n}%\n\\item[]}{\\end{list}}\n\n\\title{Week 2: Optimization problems \\& Rydberg atom arrays}\n\\author{}\n\\date{}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\thispagestyle{empty}\n\n\\section*{Introduction}\n\nLast week, you were able to simulate elements of the seminal work produced from Google's Sycamore quantum computer on your own (classical) laptops.\nHowever, given that you are in the CDL stream, you are probably wondering how one might be able to demonstrate a quantum advantage in a ``real-world'' setting, such as optimization problems, instead of the simulatability of an ``unusual'' probability distribution (the Porter-Thomas distribution).\n\nSuperconducting-qubit (Sycamore) and trapped-ion devices are considered top contenders for scalable quantum computers, due to their long coherence times and\nhigh gate fidelities.  However, recently neutral-atom quantum computers are also proving to be competitive due to the ease of control of a large number of programmable\nqubits~\\cite{serret_solving_2020,bernien_probing_2017,ebadi_quantum_2020,henriet_quantum_2020,Browaeys2020}.\n%Logically, if we have access to larger quantum computers, we can foreseeably demonstrate a quantum advantage for this very reason.\nGiven the immense scaling solutions required for today's largest data-driven problems (e.g. supply chain management), it seems \nlikely that neutral-atom quantum computers could be close close to demonstrating a quantum advantage for today's real problems.\n\nThe foundation of today's neutral-atom quantum computers is comprised of Rydberg atoms~\\cite{Browaeys2020}.\nBriefly, Rydberg atoms are highly-excited atoms (e.g. rubidium) that interact with each other on the scale of a few micrometres.\nA controlled laser pulse can excite an atom into a quantum state with a large principal quantum number (i.e. a Rydberg state) that is quasi-stable.\nThe binary nature of a Rydberg atom's ground and excited states is analogous to the state of a qubit.\nBesides ease of control and scalability, the value of Rydberg atoms in the context of solving real-world problems is in the way that Rydberg atoms interact with each other. \nTheir interactions map trivially to a well-known mathematical problem called the Unit-Disk Maximum Independent Set (UD-MIS) problem, which is NP-hard\\footnote{Problems that are NP-hard are not solvable in polynomial time, but trial solutions can be verified to be correct or not in polynomial time.}.\n\nA host of today's real-world problems are classified as NP-hard, and it turns out that NP-hard problems can be translated into other NP-hard problems with some negligible overhead.\nSo, we can map a lot of real-world problems to the building blocks of quantum computers! 
\nThis is precisely what you will be investigating this week: exploring the UD-MIS problem as it pertains to Rydberg atoms, and mapping it to a real-life NP-hard problem.\nFirst, let's look at some of the relevant math so that you can do your tasks!\n\n\\subsection*{Modelling Rydberg atoms}\n\nRydberg atoms need to be placed at a physical location.\nWe will strictly look at Rydberg atoms on a {\\it graph} $G$ with vertices (physical Rydberg atom locations) and edges $V$ and $E$, respectively.\nWith this, we will look at a Rydberg Hamiltonian of the form\n\\begin{equation} \\label{eq:rydberg_hamiltonian_Omega=0}\n\t\\hat{H} = -\\sum_{i \\in V} \\hat{n}_i + \\sum_{ i < j } \\left( \\frac{R_b}{r_{ij}} \\right)^6 \\hat{n}_i \\hat{n}_j,\n\\end{equation}\nwhere $\\hat{n}_i = 1/2 \\left( I - \\hat{\\sigma}_i^z \\right) = \\ketbra{1}{1}_i$ is called an {\\it occupation operator}, $R_b$ is a parameter called the {\\it blockade radius}, and $r_{ij}$ is the distance between Rydberg atoms located at vertices $i$ and $j$.\nThe computational basis we will be working in is the occupation basis: the eigenstates of $\\hat{n}_i$,\n\\begin{subequations} \\label{eq:occupation_basis}\n    \\begin{align} \n        \\hat{n}_i \\ket{0}_j =& 0 \\qquad (\\text{$\\forall$ $i,j$}), \\\\\n        \\intertext{and}\n        \\hat{n}_i \\ket{1}_j =& \\delta_{i,j}\\ket{1}_j,\n    \\end{align}\n\\end{subequations}\nwhere the state $\\ket{0}$ ($\\ket{1}$) represents the ground (excited) state of a Rydberg atom.\n\nOn observing the form of the Hamiltonian in Eq.~\\ref{eq:rydberg_hamiltonian_Omega=0}, we can see that the first term (sum over $V$) favours all sites being occupied, while the interaction term penalizes occupied pairs.\nIt is precisely this dichotomy that leads us to our next section.\n\n\\subsection*{Rydberg atoms and the UD-MIS problem}\n\nThe MIS problem\\footnote{Not UD-MIS!} is defined as follows \\cite{serret_solving_2020}: \\\\\n\n\\begin{changemargin}{1.5cm}{1.5cm}\n\t\\noindent Let $G = (V,E)$ be a graph with a set of vertices $V$ and edges $E$, $N = |V|$, and $S = (n_1, \\dotsb, n_N)$ be an $N$-bit string (i.e. $n_i \\in \\{0,1\\}$) with Hamming weight $|S| = \\sum_{i = 1}^N n_i$. \n\tThe MIS problem is defined as finding\n\t\\begin{align*}\n\t\t\\max_{S \\in \\mathcal{B}} & |S|,\n\t\\end{align*}\n\tsubject to the constraint that $S$ is an independent set.\n\tHere, $\\mathcal{B}$ is the set of all possible $N$-bit strings, and an independent set is defined as a set with mutually non-connected vertices: $n_i = n_j = 1 \\Rightarrow (i,j) \\cancel{\\in} E$. \\\\\n\\end{changemargin}\n\nSolving this problem consists of finding the largest possible independent set and returning an instance of it.\nWith the additional unit-disk constraint, all vertices of the given graph share an edge except those that are separated by a distance greater than 1.\nAn example of an independent set on a unit-disk graph is shown in Fig.~\\ref{fig:udmis_example}.\n\n\\begin{figure}\n    \\begin{center}\n        \\includegraphics[width=0.4\\linewidth]{images/udmis_example.png}\n    \\end{center}\n    \\caption{An example of a unit-disk graph $G$ with an instance of the corresponding solution to its UD-MIS problem. Here, blue lines indicate graph edges $E$, and dots represent vertices $V$ wherein the shaded regions represent unit-radii disks. The red dots correspond to a maximum independent set.} \\label{fig:udmis_example}\n\\end{figure}\n\nThe ground state of the Hamiltonian\n\\begin{equation} \\label{eq:UDMIS_hamiltonian}\n\t\\hat{H} = -\\sum_{i \\in V} \\hat{n}_i + u \\sum_{i,j \\in E} \\hat{n}_i \\hat{n}_j\n\\end{equation}\nis precisely the solution to the UD-MIS problem on a graph $G$\\footnote{So long as $u > 1$ \\cite{serret_solving_2020}. You will use $u = 1.35$ for all of your tasks.}.\nThis is eerily similar to Eq.~\\ref{eq:rydberg_hamiltonian_Omega=0}, with the difference being in the coefficients in front of the interaction terms.\nHowever, to a very good approximation, it's ``similar enough''.\nSo, essentially, finding the ground state of a bunch of Rydberg atoms on the same graph $G$ will (approximately) solve the UD-MIS problem.\n\nIn all of your tasks, we will be modelling Eq.~\\ref{eq:UDMIS_hamiltonian} for simplicity.\nHowever, let's just take a moment to appreciate how having a physical neutral-atom quantum computer makes solving the UD-MIS problem so simple.\nWe could just place the Rydberg atoms at the desired vertex locations, and measure their Rydberg occupation! Pretty easy\\footnote{Obviously, a lot more goes into it. These experiments are highly non-trivial affairs... No free lunch!}!\n\n\\subsection*{Task 1: Simulated Classical Annealing}\n\nLet's first try and solve the UD-MIS problem classically.\nYou'll notice that the Hamiltonian in Eq.~\\ref{eq:UDMIS_hamiltonian} is diagonal in the Rydberg occupation basis (Eq.~\\ref{eq:occupation_basis}).\nSo, we can use classical Monte Carlo methods (namely the Metropolis-Hastings algorithm) to simulate this Hamiltonian at finite temperatures.\nBut, we're interested in the {\\it ground} state, not some high-temperature state.\nThat being said, so long as we can simulate the Hamiltonian at a low enough temperature, it should be fine.\n\nIn this directory, you will find a Jupyter notebook titled \\texttt{Task1.ipynb}.\nHere, there's all the code you will need to solve the UD-MIS problem via classical {\\it simulated annealing}: performing Monte Carlo simulations at lower and lower temperatures to hopefully simulate the ground (zero-temperature) state.\nThe {\\it annealing schedule} is the manner in which the temperature is decreased.\nWe've provided an annealing schedule for you, but {\\bf try other annealing schedules to see if you can get a solution to the UD-MIS problem faster (fewer steps)}.\n
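\nA minimal Python sketch of the idea (an illustration only, not the course notebook; the four-vertex graph and the linear schedule below are assumed):\n\\begin{verbatim}\nimport numpy as np\n\nu = 1.35\nxy = np.array([[0.0, 0.0], [0.9, 0.0], [0.45, 0.8], [1.8, 0.0]])\nn = len(xy)\nedges = [(i, j) for i in range(n) for j in range(i + 1, n)\n         if np.linalg.norm(xy[i] - xy[j]) <= 1.0]   # unit-disk edges\n\ndef energy(occ):\n    # E = -sum_i n_i + u * sum_{(i,j) in E} n_i n_j\n    return -occ.sum() + u * sum(occ[i] * occ[j] for i, j in edges)\n\nrng = np.random.default_rng(1)\nocc = rng.integers(0, 2, n)\nfor T in np.linspace(2.0, 0.05, 2000):   # assumed annealing schedule\n    i = rng.integers(n)\n    trial = occ.copy()\n    trial[i] ^= 1                        # flip one occupation\n    dE = energy(trial) - energy(occ)\n    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis rule\n        occ = trial\nprint(occ, energy(occ))   # an (approximate) maximum independent set\n\\end{verbatim}\n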
\n\\subsection*{Task 2: Quantum Annealing}\n\nLet's now try solving the UD-MIS problem quantum-ly.\nWe will employ a quantum annealing (QA)-based approach.\nBriefly, the goal of QA is to start in an easily-preparable ground state of a Hamiltonian and time-evolve this state with a time-dependent Hamiltonian.\nAt the end of the time evolution process, the Hamiltonian should be such that its ground state is what we are after.\nConsider the time-dependent Hamiltonian\n\\begin{equation} \\label{eq:time_dep_hamiltonian}\n\t\\hat{H}(t) = \\Omega(t) \\sum_{i \\in V} \\hat{\\sigma}_i^x - \\delta(t) \\sum_{i \\in V} \\hat{n}_i + u \\sum_{i,j \\in E} \\hat{n}_i \\hat{n}_j.\n\\end{equation}\nNotice that if $\\Omega(t = 0) = \\delta(t = 0) = 0$, the ground state of this Hamiltonian is every Rydberg atom in its ground state: $\\ket{0}^{\\bigotimes N}$.\nIf $\\Omega(t = t_{\\text{final}}) = 0$ and $\\delta(t = t_{\\text{final}}) = 1$, we have Eq.~\\ref{eq:UDMIS_hamiltonian}.\nTime-evolving the initial state $\\ket{0}^{\\bigotimes N}$ with the Hamiltonian in 
Eq.~\\ref{eq:time_dep_hamiltonian} by manually choosing how to vary $\\Omega(t)$ and $\\delta(t)$ will hopefully give us the ground state of the UD-MIS Hamiltonian (Eq.~\\ref{eq:UDMIS_hamiltonian}).\nIn a nutshell, this is the QA process as it pertains to our problem.\n\nMathematically and algorithmically, QA looks like the following\n\\begin{align*}\n\t\\ket{\\psi(t)} =& U(t) U(t - \\delta t) \\cdots U(t_0 + \\delta t) U(t_0) \\ket{\\psi (t = t_0)},\n\\end{align*}\nwhere $U(t)$ is the time-evolution operator\n\\begin{align*}\n\tU(t) =& \\exp(-\\frac{i}{\\hbar} \\delta t \\hat{H}(t)),\n\\end{align*}\n$\\hat{H}(t)$ is given in Eq.~\\ref{eq:time_dep_hamiltonian}, and $\\delta t$ is a user-defined increment of time that defines the number of subdivisions in the time interval $t \\in [t_0, t_f]$.\nWe will implement this on a quantum circuit simulator package in the \\texttt{Julia} programming language called \\texttt{Yao.jl} \\cite{luo_yaojl_2020}\\footnote{Created by our very own Roger Luo!}.\nEverything you need is in \\texttt{run\\_quantum\\_annealing.jl}.\nWe have taken inspiration from Ref.~\\cite{serret_solving_2020} for the annealing schedule (see Fig. 2 in this work).\nThe very last line in \\texttt{run\\_quantum\\_annealing.jl} draws samples from the resulting wavefunction $\\ket{\\psi(t)}$ that has been time-evolved.\n{\\bf Plot the samples on the graph coordinates given in the code and verify that indeed the UD-MIS problem has been solved.}\n\nThe resulting time-evolved wavefunction may not \\textit{perfectly} be the ground state (i.e. when you sample it, you might get configurations that aren't the solution we desire).\nSample the resulting wavefunction several thousand times, and look for the most common bit string.\nUse this as your solution\\footnote{You might need to run the code a few different times!}.\n\n\\subsection*{Task 3: A Real Problem}\n\nThe City of Gotham is looking at putting in new cell phone towers.\nThe possible locations of the cell phone towers are given in Fig.~\\ref{fig:gotham}.\nThe billionaire Bruce Wayne is funding the project and he loves his money. \nTherefore, Gotham should only purchase the required number of cell phone towers such that 1) the cell phone tower signal ranges do not overlap\\footnote{Yes, this means some areas of Gotham City won't have cell phone service... ``Too bad,'' said Bruce.}, and 2) as much of Gotham City as possible is within cell signal range. 
\n\nThe possible Gotham City cell phone tower locations are\\footnote{All coordinates are normalized to the cell phone tower signal range.}:\n\\begin{enumerate}\n\t\\item (1.19, 4.25)\n\t\\item (2.71, 3.48)\n\t\\item (1.19, 3.51)\n\t\\item (2, 3.38)\n\t\\item (1.12, 2.86)\n\t\\item (1.70, 2.42)\n\t\\item (2.36, 2.54)\n\t\\item (1.52, 1.48)\n\t\\item (2.15, 1.54)\n\t\\item (2.14, 1.87)\n\t\\item (1.72, 0.86)\n\t\\item (2.29, 0.87)\n\\end{enumerate}\n\n\\begin{enumerate}\n\t\\item {\\bf Explain why this is a problem that can be easily mapped to the UD-MIS problem.} A few sentences is all that is needed.\n\t\\item {\\bf Solve Gotham City's problem.} Using the methods provided in Tasks 1 and 2 \\textbf{Are there multiple solutions}?\n\t\\item {\\bf Should Bruce pay for a few more cell phone towers to make sure that more of Gotham City has cell phone service?}\n\\end{enumerate}\n\n\\begin{figure}\n    \\begin{center}\n        \\includegraphics[width=0.4\\linewidth]{images/gothamcity.png}\n    \\end{center}\n    \\caption{Possible cell phone tower locations in Gotham City.} \\label{fig:gotham}\n\\end{figure}\n\n\\section*{Additional Challenges}\n\n\\begin{itemize}\n\t\\item Compare / benchmark the methods outlined in the Task 1 and 2 codes.You will probably want to make larger graphs to push both of these algorithms to the test. Which method is better (i.e. faster, more efficient)?\n\t\\item Benchmark other classical and quantum optimization methods to solve your UD-MIS problems.  \n\t\\item Solve another real-world problem that can be mapped to the UD-MIS problem.\n\t\\item Perform any of the tasks with real quantum hardware.\n\n\\end{itemize}\n\n\\bibliography{refs}\n\\bibliographystyle{unsrturl}\n\n\\end{document}\n", "meta": {"hexsha": "54574e2db29424d409550aa92df243af108192c3", "size": 14025, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Week2_Rydberg_Atoms/instructions/instructions.tex", "max_stars_repo_name": "ecavan/CDL_Quantum_Computing_Projects", "max_stars_repo_head_hexsha": "4ae2d0496aa9e059b50ae36848e46e2cddd6f302", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Week2_Rydberg_Atoms/instructions/instructions.tex", "max_issues_repo_name": "ecavan/CDL_Quantum_Computing_Projects", "max_issues_repo_head_hexsha": "4ae2d0496aa9e059b50ae36848e46e2cddd6f302", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week2_Rydberg_Atoms/instructions/instructions.tex", "max_forks_repo_name": "ecavan/CDL_Quantum_Computing_Projects", "max_forks_repo_head_hexsha": "4ae2d0496aa9e059b50ae36848e46e2cddd6f302", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.7841409692, "max_line_length": 329, "alphanum_fraction": 0.7534402852, "num_tokens": 3929, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891392358014, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.593409430456075}}
{"text": "\\chapter{Reliability Analysis}\n\\chaptermark{Reliability}\n\nWe will define reliability analysis to be the analysis of the \\emph{time} to \\emph{failure}.\nWe will also assume that ``time'' and ``failure'' are well defined and agreed upon.\n\nWe intuitively understand ``more reliable'' to mean ``lasts longer''. \nWe should also consider, however, the case of a product that is designed to fail after some time, thus forcing the consumer to buy a new one. \nSome may say that a major hi-tech company named after a fruit employs this practice, which is known as \\emph{planned obsolescence}\\footnote{\\url{https://en.wikipedia.org/wiki/Planned_obsolescence}.}\nBe it true or not, I hope we can agree that good knowledge of your product's life expectancy is a desirable. \n\nBy ``product'' we clearly imply a production problem, but our methods equally apply to other problems where we analyse the distribution of times-to-events. \n\\begin{description}\n\t\\item[Services] The analysis of time-to-service in service queues. \n\t\\item[Marketing] The analysis of time-to-adoption/abandon in marketing. \n\t\\item[Sotware reliability] The analysis of time-to-bug in software reliability.\n\t\\item[Actuary] The analysis of time-to-death in actuary sciences (insurance).\n\t\\item[Pharmaceutics] The analysis of time-to-effect in pharmaceutics. \n\t\\item[Epidemiology] The analysis of time-to-infection/recovery in epidemiology and biostatistics. \n\\end{description}\n\nReliability analysis involves the study of a probabilistic property of our product- its \\emph{survival}.\nAny probabilistic model will require calibration to reality via data. \nThis chapter thus introduces both the probability calculus typically used for reliability analysis, and then some statistical considerations involved when calibrating these models.\n\n\n\n\\section{Probabilistic Analysis}\n\n\n\n\n\\subsection{A Static View}\n\nLet $\\x_j \\in \\set{0,1}, j=1,\\dots,p$ denote the state of the $j$'th component of a system, and $x=(x_1,\\dots,x_p)$.\n\n\\begin{definition}[Structure Function]\nThe \\emph{structure function}, $\\struct=\\struct(x):x \\mapsto \\set{0,1}$, is an indicator function of the state of the system. A failure indicated by $0$. 
\n\\end{definition}\n\n\\begin{remark}[$\\Phi$]\nWe apologize to the reader for using $\\Phi$ to denote both the $\\gauss{0,1}$ CDF, and the structure function.\nWe do so to stay in accordance with reliability literature, and since no collisions are created in this chapter by doing so.\n\\end{remark}\n\n\\begin{definition}[Series System]\nA \\emph{series system}, or \\emph{serial system}, is one where all components need to function for the system to function: $$\\struct(x)=\\prod_{j=1}^{p}x_j.$$\n\\end{definition}\nA reliability diagram of a series system is given in Figure~\\ref{fig:series_system}.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{art/series_system}\n\\caption{Series system.}\n\\label{fig:series_system}\n\\end{figure}\n\n\n\\begin{definition}[Parallel System]\nA \\emph{parallel system} is one where all components need to fail for the system to fail:\n$$\\struct(x)=1-\\prod_{j=1}^{p} (1-x_j)= \\coprod_{j=1}^p x_j.$$\n\\end{definition}\nA reliability diagram of a parallel system is given in Figure~\\ref{fig:parallel_system}.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{art/parallel_system}\n\\caption{Parallel system.}\n\\label{fig:parallel_system}\n\\end{figure}\n\n\n\n\n\n\\begin{definition}[k-out-of-p System]\nA \\emph{k-out-of-p} system is one where at least $k$ components need to function for the system to function:\n$$\\struct(x)=\\indicator{\\sum_{j=1}^{p} x_j \\geq k}.$$\n\\end{definition}\n\n\n\n\\begin{think}\n\tA reliability diagram of a k-out-of-p system is not provided, since it is not very friendly. Try thinking of a 2-out-of-3 system to see why.\n\\end{think}\n\n\n\n\\begin{definition}[Monotone System]\nA system is said to be \\emph{monotone} if $\\struct(x_1,\\dots,x_p)$ is non decreasing in all components.\n\\end{definition}\nThe definition of monotonicity captures the idea that you cannot improve a system's state by breaking components.\nThis seems rather natural (I am still looking for a counter example).\n\n\n\\begin{think}\n\tThink how the structure function of a non monotone system would look.\n\\end{think}\n\n\n\n\n\\begin{definition}[Reliability]\nWe define the \\emph{reliability of component $j$} to be $$p_j:= P(\\x_j=1),$$ \nand  the \\emph{reliability of the system} \n$$ S_\\struct = S_{\\struct(x)}:=P(\\struct(x)=1).$$\n\\end{definition}\n\n\n\\begin{example}[Reliability of a series system]\nFor $\\Phi(x)$ a series system, assuming independent components, we have\n$$ S_\\struct= \\prod_{j=1}^{p} p_j.$$\n\\end{example}\n\n\n\\begin{example}[Reliability of a parallel system]\nFor $\\Phi(x)$ a parallel system, assuming independent components, we have\n$$ S_\\struct= 1-\\prod_{j=1}^{p} (1-p_j)= \\coprod_{j=1}^p p_j. $$\n\\end{example}\n\n\n\\begin{example}[Reliability of a k-out-of-p system]\nFor $\\Phi(x)$ a k-out-of-p system, assuming independent components with equal reliability ($p_i=p_0$), we have\n$$ S_\\struct= \\sum_{i=k}^{p} \\binom{p}{i} p_0^i (1-p_0)^{p-i} .$$\n\\end{example}\n
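\nA minimal numerical sketch of these three formulas (the component reliabilities are assumed for illustration):\n\\begin{verbatim}\nimport math\n\np = [0.9, 0.8, 0.95]            # assumed component reliabilities\nseries = math.prod(p)           # all components must function\nparallel = 1 - math.prod(1 - pi for pi in p)   # all must fail\np0, k, n = 0.9, 2, 3            # 2-out-of-3, equal reliability p0\nk_of_p = sum(math.comb(n, i) * p0**i * (1 - p0)**(n - i)\n             for i in range(k, n + 1))\nprint(series, parallel, k_of_p)   # 0.684, 0.999, 0.972\n\\end{verbatim}\n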
\n\n\\subsubsection{State enumeration method}\nTo compute the reliability of more complex structures, the brute-force approach is the \\emph{state enumeration method}. \nThis method simply relies on summation of the probabilities of the states for which the system functions:\n$$ S_\\struct= \\sum_{x} \\Phi(x) P(\\x=x).$$\n\n\n\\subsubsection{Factoring method}\nThe \\emph{factoring method}, \\aka the \\emph{pivot-decomposition method}, relies on two ingredients: \n(a) conditioning on the state of some components greatly simplifies the structure, and\n(b) the total probability argument.\nCombining the two we have:\n$$ S_\\struct= p_j  S_{\\struct|x_j=1} + (1-p_j) S_{\\struct|x_j=0}   ,$$\nwhere $S_{\\struct|x_j=1}$ denotes the reliability of the structure $\\Phi$ conditional on $x_j=1$.\nThe following example demonstrates the power of the factoring method.\n\n\\begin{example}[Bridge Structure]\n\\label{eg:bridge}\nConsider the structure in Figure~\\ref{fig:bridge}.\nTo compute the reliability, we will call upon the factoring method while conditioning on the state of component $3$:\n$$ S_\\struct= p_3  S_{\\struct|x_3=1} + (1-p_3) S_{\\struct|x_3=0}   .$$\nNow note that when $x_3=1$ we have a series structure of parallel structures, while when $x_3=0$ we have a parallel structure of series structures:\n\\begin{align*}\n\tS_{\\struct|x_3=1} &= (p_1 \\coprod p_2) (p_4 \\coprod p_5),\\\\\n\tS_{\\struct|x_3=0} &=  p_1 p_4 \\coprod p_2 p_5,\n\\end{align*}\nso that \n$$ \t\n\tS_\\struct = p_3  (p_1 \\coprod p_2) (p_4 \\coprod p_5) + (1-p_3) (p_1 p_4 \\coprod p_2 p_5).\n$$\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{art/bridge}\n\\caption[Bridge Structure]{Structure of a bridge system. Source: \\cite[Fig.2.5]{aven_stochastic_1999}}\n\\label{fig:bridge}\n\\end{figure}\n\\end{example}\nExample~\\ref{eg:bridge} demonstrates a single application of the factoring method. Clearly, it can be applied recursively for more complicated systems.\n\nThe example also demonstrates a more general principle. Namely, that redundancy is preferable at the component level, and not at the system level.\nPut differently: when designing a backup, if the resources allow a full copy of the original system, we are better off designing a component-wise backup than a single backup system.\nPut formally:\n\\begin{theorem}[Component-wise redundancy]\nFor a monotone structure $\\Phi$, \n\\begin{align}\n\tS_{\\struct(x \\coprod y)} \\geq S_{\\struct(x)} \\coprod S_{\\struct(y)}\n\\end{align}\nwhere $x \\coprod y$ denotes a component-wise backup: $(x_1 \\coprod y_1,\\dots,x_p \\coprod y_p)$.\n\\end{theorem}\n\n\n\n\\begin{extra}[Reliability analysis of complex systems]\nExcept for simple systems, of the type we presented, the computation of the reliability of a complex system may be a formidable task. \nFor complicated real-life systems, \\emph{min-cut--max-flow} algorithms, or \\emph{inclusion-exclusion} type algorithms are employed. 
\nFor more details, see \\cite{aven_stochastic_1999}.\n\\end{extra}\n\n\n\n\n\n\n\n\n\\subsubsection{Reliability Importance Measures}\n\\begin{quotation}\nThe Strength Of The Chain Is In The Weakest Link.\n\\end{quotation}\nThis is obviously a profound observation in reliability analysis.\nIn order to identify the weakest link we require some measure of reliability importance.\n\n\n\\begin{definition}[Improvement potential]\nThe \\emph{improvement potential} is defined as the change in a system's reliability, if we could force a component to function indefinitely.\nFormally, we denote $\\Phi^{(j)}$ to be a system where component $j$ cannot fail: $\\Phi^{(j)}:=\\Phi_{(p_j=1)}$.\nWe then define the improvement potential with respect to component $j$ to be \n\\begin{align}\n\tI_j :=S_{\\Phi^{(j)}}-S_{\\Phi}.\n\\end{align}\n\\end{definition}\n\n\n\n\\begin{definition}[Birnbaum's measure]\n\\emph{Birnbaum's measure} is defined as the change in a system's reliability, if we infinitesimally improve the reliability of component $j$.\nFormally \n\\begin{align}\n\tI_j: =\\frac{\\partial}{\\partial p_j} S_{\\Phi}.\n\\end{align}\n\\end{definition}\n\n\nClearly any such importance measure, once computed, may serve to decide which component should be treated to improve reliability.\n\n\n\n\n\n\n\n\\subsection{A Time Dynamic View}\nThe reliability of each component, $p_j$, typically changes in time, and so does the reliability of the whole system.\nIn the following, $\\T$ will typically stand for the time to malfunction. It is thus assumed to be \\textbf{continuous} and \\textbf{non-negative}.\n\n\n\\begin{definition}[CDF]\nThe cumulative distribution function (CDF) of a random variable $\\T$ at a point $t$  is given by\n\\begin{align}\n\t\\cdf{\\T}{t}:= P(\\T<t).\n\\end{align}\n\\end{definition}\n\n\\begin{definition}[PDF]\nThe probability density function (PDF) of a continuous random variable $\\T$ at a point $t$ is given by \n\\begin{align}\n\t\\pdf{\\T}{t}:= \\frac{\\partial}{\\partial t}\\cdf{\\T}{t}.\n\\end{align}\n\\end{definition}\n\n\n\\begin{definition}[Survival Function]\nThe survival function of a random variable $\\T$ at a point $t$ is given by \n\\begin{align}\n\t\\survive{\\T}{t}:= P(\\T>t)=1-\\cdf{\\T}{t}.\n\\end{align}\n\\end{definition}\nBy definition, it follows that if $\\T_j$ is the time to failure of component $j$, then $$p_j(t)=\\survive{\\T_j}{t}.$$\nIf $\\T_\\struct$ is the time to failure of a system $\\Phi$, then we may write $S_\\struct(t)=\\survive{\\T_\\struct}{t}$.\n\n\n\\begin{example}[Survival of a series system]\nFor a series system $\\struct$, the reliability of the system at time $t$ is given by $$\\survive{\\Phi}{t}=\\prod_{j=1}^{p} p_j(t).$$\n\\end{example}\n\n\n\\begin{example}[Survival of a parallel system]\nFor a parallel system $\\struct$, the reliability of the system at time $t$ is given by $$\\survive{\\Phi}{t}=1- \\prod_{j=1}^{p} (1-p_j(t))= \\coprod_{j=1}^p p_j(t).$$\n\\end{example}\n\n\n\n\n\nAnother way to present a distribution, no less informative than the previous ones, is by the \\emph{hazard function}, which is the ``probability of failing in the next instant, given survival so far''.\n\\begin{definition}[Hazard Function]\nThe \\emph{hazard function}, or \\emph{failure rate}, of a random variable $\\T$ at a point $t$ is given by \\marginnote{Hazard Function}\n\\begin{align}\n\t\\hazard{\\T}{t} &:= \\lim_{dt\\to 0}\\frac{P( \\T \\in [t,t+dt)|\\T \\geq t )}{dt} \\label{eq:hazard}\\\\\n\t&= \\frac{\\pdf{\\T}{t}}{\\survive{\\T}{t}} \\\\\n\t&= - 
\\frac{\\partial}{\\partial t}\\log \\survive{\\T}{t} . \\label{eq:survival_to_hazard}\n\\end{align}\n\\end{definition}\nThe hazard function can be thought of as the expected number of events per unit of time, thus the name failure-rate. \nIf the unit of time is days, and $\\hazard{\\T}{t}=2$, then we can expect two failure events in day $t$. \nThis also means that the total number of events expected since $t=0$ is the integral of the hazard, known as the \\emph{cumulative risk}.\n\\begin{definition}[Cumulative Risk]\nThe \\emph{cumulative hazard}, \\aka the \\emph{cumulative risk}, of a random variable $\\T$ at a point $t$ is given by \\marginnote{Cumulative Hazard}\n\\begin{align}\n\t\\cuhazard{\\T}{t} &:= \\int_{0}^{t}\\hazard{\\T}{s} \\dif s \\\\\n\t\\Rightarrow \\survive{\\T}{t} &= \\exp(-\\cuhazard{\\T}{t}). \\label{eq:cumhazrd}\n\\end{align}\n\\end{definition}\nEq.(\\ref{eq:cumhazrd}) readily shows that a distribution is well defined by its hazards.\n\n\n\n\\begin{theorem}[Failure rate of a series system]\n\\label{thm:ifr_closure}\nThe failure rate of a series system of independent components $\\Phi$ is given by the sum of the failure rates of its components:\n\\begin{align}\n\t\\hazard{\\Phi}{t}= \\sum_{j=1}^{p} \\hazard{\\T_j}{t}\n\\end{align}\n\\end{theorem}\nThe proof is immediate using the cumulative risk.\nThe failure rate of a parallel system does not admit such a nice closed form, as we will soon see in Example~\\ref{eg:failure_parallel}.\n\n\n\n\n\\begin{example}[Exponential Hazard]\nThe simplest distribution when discussing hazards is the exponential.\nRecalling that for non-negative $t$:\n\\begin{align}\n\t\\pdf{\\T}{t}= \\lambda e^{-\\lambda t}, \\\\\n\t\\cdf{\\T}{t}= 1-e^{-\\lambda t},\n\\end{align}\nso that \n\\begin{align}\n\t\\survive{\\T}{t} &= e^{-\\lambda t}, \\\\\n\t\\hazard{\\T}{t} &= \\lambda.\n\\end{align}\n\\end{example}\nThe exponential is the only distribution with constant hazard, which makes it very easy to analyze.\nThe constant hazard is due to the \\emph{memoryless} property. Look at Eq.(\\ref{eq:hazard}) and think why.\n
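\nA quick numerical check of Eq.(\\ref{eq:cumhazrd}), as a minimal sketch with an assumed linear hazard (not a textbook example):\n\\begin{verbatim}\nimport numpy as np\n\n# h(t) = a + b*t, so Lambda(t) = a*t + b*t^2/2 in closed form.\na, b, t = 0.5, 0.2, 3.0\ngrid = np.linspace(0.0, t, 100001)\nhazard = a + b * grid\n# trapezoidal rule for the cumulative hazard Lambda(t)\nLambda = np.sum(0.5 * (hazard[1:] + hazard[:-1]) * np.diff(grid))\nprint(np.exp(-Lambda), np.exp(-(a * t + b * t**2 / 2)))  # both ~0.0907\n\\end{verbatim}\n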
\n% [HW: prove that memory less is sufficient]\n\n\n\\begin{example}[Failure rate of a series of exponential components]\nThe failure rate of a series system $\\Phi$, of $p$ independent components each with exponentially distributed failure times, is simply \n\\begin{align}\n\t\\hazard{\\Phi}{t}= \\sum_{j=1}^{p} \\lambda_j, \\forall t \\geq 0\n\\end{align}\nwhere $\\lambda_j$ is the (constant) rate of component $j$.\n\\end{example}\nThis is obviously the simplest system possible for reliability analysis, which stems from the fact that a minimum of exponentials is exponential with the sum of rates.\n\n\n\nThe following example, seemingly very simple, provides tremendous insight into the complexities of reliability analysis.\n\\begin{example}[Failure rate of a two exponential-component parallel-system]\n\\label{eg:failure_parallel}\nConsider a system of two independent, parallel, exponential components, with failure times $\\T_j\\sim \\exp(\\lambda_j); j=1,2$.\nThe failure rate is given by\n\\begin{align}\n\t\\hazard{\\Phi}{t}=\n\t\\frac\n\t{\\exppdf{\\lambda_1}{t} + \\exppdf{\\lambda_2}{t}  - \\exppdf{(\\lambda_1+ \\lambda_2)}{t}}\n\t{\\expcdf{\\lambda_1}{t} + \\expcdf{\\lambda_2}{t} - \\expcdf{(\\lambda_1+ \\lambda_2)}{t}}\n\\end{align}\n\\end{example}\nWhy is Example~\\ref{eg:failure_parallel} so important?\nBecause it demonstrates that even in a simple system, with the simplest components, the reliability is not so simple to compute (as a function of the components' reliability). \nIndeed, even though the component-wise hazards are fixed in time, the system's hazard is not fixed, and not even monotone in time (Figure~\\ref{fig:hazard_non_monotone}). \n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{art/hazard}\n\\caption{Failure rate of the parallel exponential component system.}\n\\label{fig:hazard_non_monotone}\n\\end{figure}\n\n\n\n\n\n\\begin{example}[Weibull Hazard]\nThe Weibull distribution is very common in reliability analysis since it is a tractable generalization of the exponential distribution with many nice properties. \nIt can be constructed by \n$\\T := \\lambda \\U^{1/k}$, where $\\U \\sim \\exp(1)$. \nThis implies that for non-negative $t$:\n\\begin{align}\n\t\\pdf{\\T}{t} &= \\frac{k}{\\lambda}\\left(\\frac{t}{\\lambda} \\right)^{k-1} e^{-(t/\\lambda)^k}, \\\\\n\t\\cdf{\\T}{t} &= 1 - e^{-(t/\\lambda)^k},\n\\end{align}\nso that \n\\begin{align}\n\t\\survive{\\T}{t} &= e^ {-(t/\\lambda )^k}, \\\\\n\t\\hazard{\\T}{t} &= \\frac{k}{\\lambda} \\left(\\frac{t}{\\lambda} \\right)^{k-1} .\n\\end{align}\n\\end{example}\nElementary analysis shows that the hazard function of the Weibull may be increasing or decreasing in time $t$, depending on $k$, but it is always monotone.\n\n\n\n\n\\begin{example}[Gompertz Hazard]\n\tThe Gompertz distribution\\footnote{Not to be confused with its generalization, the \\emph{Gompertz-Makeham} distribution.} is another common distribution in reliability analysis. 
\n\t\\begin{align}\n%\t\\pdf{\\T}{t} &= \\\\\n\t\\cdf{\\T}{t} &= 1- e^\\frac{-\\lambda (\\varphi^t-1)}{\\log \\varphi},\n\t\\end{align}\n\tso that \n\t\\begin{align}\n\t\\survive{\\T}{t} &= e^\\frac{-\\lambda (\\varphi^t-1)}{\\log \\varphi}, \\\\\n\t\\hazard{\\T}{t} &= \\lambda \\varphi^t .\n\t\\end{align}\n\\end{example}\nElementary analysis shows that the hazard function of the Gompertz may be increasing or decreasing in time $t$, depending on $\\varphi$, but it is always monotone.\n\n\n\n\n\n\\begin{example}[Empirical risk rates]\n\t\\label{ex:bathtub}\nWhen examining empirical risk rates of real devices, we almost always notice a \\emph{bathtub} structure, such as in Figure~\\ref{fig:bathtub}.\\marginnote{Bathtub}\nThis shape captures the idea that products tend to fail more when they are brand new, or as they are very old, while their failure rates are fairly stable in the ``mid-life''.\nIn this text, we will not be providing a particular distribution which has this property. \nWe refer the reader to \\cite{nadarajah_bathtub-shaped_2008} for examples of distributions which have the bathtub property.\n\\end{example}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{art/bathtub_curve}\n\\caption[Bathtub empirical hazard curve]{Bathtub curve of empirical failure rates. \\newline\n\\url{http://support.minitab.com/en-us/minitab/17/topic-library/modeling-statistics/reliability/distributions-in-reliability-analysis/hazard-functions/}}\n\\label{fig:bathtub}\n\\end{figure}\n\n\n\n\n\\subsubsection{Ageing}\nThe idea of \\emph{ageing} is that the failure rate may vary over time. It is an important concept in reliability, as demonstrated by the empirical bathtub failure rate (Figure~\\ref{fig:bathtub}).\nInstead of checking if a particular textbook distribution has some ageing property, we instead analyze classes of distributions with the desired notion of ageing.\nOur goal will ultimately be to understand the ageing of a whole system, as a function of the ageing of its components.\n\n\\begin{definition}[IFR]\nWe say that a failure time distribution is in the \\emph{increasing failure rate} (IFR) ageing class if it has a non decreasing failure rate.\n\\end{definition}\n\n\n\\begin{definition}[IFRA]\nWe say that a failure time distribution is in the \\emph{increasing failure rate average} (IFRA) ageing class if the average risk is non decreasing in $t$. \nFormally: $\\cuhazard{\\T}{t}/t=\\int_0^t \\hazard{\\T}{s}\\dif s/t$ is non decreasing in $t$.\n\\end{definition}\nThe intuition underlying IFRA relies on understanding $\\cuhazard{\\T}{t}/t$, which can be seen as the average risk from ignition to time $t$. \nIFRA thus means that even if the risk decreases at some point in time, the average risk still increases. \n\n\n\\begin{definition}[NBU]\nWe say that a failure time distribution is in the \\emph{new better than used} (NBU) ageing class if \n$\\survive{\\T}{t_1+t_2} \\leq \\survive{\\T}{t_2} \\survive{\\T}{t_1}$.\n\\end{definition}\n\n\n\\begin{definition}[NBUE]\nDefine the \\emph{expected residual life}, $\\mu(t)$, to be \n$$\\mu(t):= \\expect{\\T-t|\\T>t}.$$\n\\marginnote{Expected Residual Life}\nWe say that a failure time distribution is in the \\emph{new better than used in expectation} (NBUE) ageing class if \n$\\mu(t) \\leq \\mu(0)$.\n\\end{definition}\nNBUE can be easily understood, since it means that a component's expected life is maximal when it is brand new. \n\n\n\\begin{theorem}\n$IFR \\Rightarrow IFRA \\Rightarrow NBU \\Rightarrow NBUE. 
$\n\\end{theorem}\n\n\nThe following theorem states a relation between the ageing properties of particular components, and that of the whole system. In particular, it states that for the (very wide) class of monotone systems, the IFRA property is conserved. \nThis should be contrasted with the IFR property, which is not conserved, as demonstrated by the parallel system in Example~\\ref{eg:failure_parallel}.\n\\begin{theorem}[IFRA closure theorem]\n\\label{thm:ifra_closure}\nIf the independent components of a monotone system are IFRA, then so is the whole system.\n\\end{theorem}\n\n\nSeries systems are an extremely small and particular subset of monotone systems.\nThey do provide, however, an example of systems where not only IFRA is preserved, but also the stronger IFR.\nThe following corollary follows immediately from Theorem~\\ref{thm:ifr_closure} and the fact that a sum of monotone functions is monotone.\n\\begin{cor}[IFR closure for series systems]\n\\label{cor:ifr_series}\nA series system of independent IFR components is IFR.\n\\end{cor}\n\n\n\n\nNow consider a two-component system, where one component kicks in when the first fails.\nWe will call this an \\emph{offline backup}.\\marginnote{Offline Backup}\nThe survival times of the components in an offline backup system are clearly dependent. \nIt turns out that such a system of IFR components does conserve the IFR property, as seen in the following theorem.\n\\begin{theorem}[Convolution of IFR]\n\\label{thm:ifr_convolution}\nIf two independent random variables, $\\x$ and $\\y$, are both in the IFR ageing class, then so is $\\x+\\y$.\n\\end{theorem}\nThe theorem is called the convolution theorem, because the distribution of a sum of independent random variables is the convolution of their distributions.\n\n\n\\begin{extra}[IFR and log-concave]\nThe IFR requirement is essentially the same as log-concavity of the density function.\nThis immediately implies many properties of the class, including the convolution theorem above.\nSee \\cite{bagnoli_log-concave_2005}.\n\\end{extra}\n\n\\begin{example}[IFR of Gamma]\nThe Gamma distribution with integer shape parameter (i.e., the Erlang distribution) is in the IFR ageing class, since it is a sum of independent exponentials, each IFR.\n\\end{example}\n\n\\begin{example}[Series system of offline backups]\nWhat can we say about the ageing class of a series system of offline backup systems? \nIt turns out that if the components are IFR, then so is the whole system.\nThis is immediate from Corollary~\\ref{cor:ifr_series} and Theorem~\\ref{thm:ifr_convolution}.\n\\end{example}\n\n\n\n\n\n\n\\section{Statistical Analysis}\n%TODO: update using Venables and Ripley (2002)\n\nThe probabilistic analysis of the previous section is great fun and all, but like any probabilistic problem, it has to be calibrated to real life. \nThis is where data and statistics come in.\n\nThe statistical analysis of time-to-event data is very important to many disciplines. \nIt possibly started in the $17$'th century with the \\emph{Life Tables} of John Graunt and\nWilliam Petty, used by actuaries to compute life expectancies. \nThese were later reinvented and augmented into the full statistical framework known as \\emph{failure-time} analysis in engineering, \\emph{survival} analysis in epidemiology and biostatistics, and \\emph{event-history} analysis in sociology. 
\\marginnote{Failure-time analysis}\n\n\\begin{remark}[Reliability analysis versus failure-time analysis]\n\tIn this text we use the term \\emph{reliability analysis} for the probabilistic analysis above, and \\emph{failure-time analysis} for the statistical problem. \n\tWe do so to respect the original literature where these originate. \n\\end{remark}\n\n\nThe statistical analysis of failure data is typically concerned with the estimation of the \\textbf{hazard} function, or the \\textbf{survival} function. \nTo do so, we face the following statistical challenges:\n\\begin{description}\n\\item [Failure Distribution] Like any statistical model, we will need to assume something about the distributions of failure times.\n\n\\item [Identifiability] It is typically hard, if not impossible, to estimate the reliability of particular components from the reliability of the whole system. \n\n\\item [Censoring] A major concern with reliability data is that in any finite-length experiment, some events will just not have happened yet; their failure time will thus be \\emph{censored}. Ironically, the more reliable a component, the less data we will have to estimate its reliability. \n\n\\item [Lab versus real-life conditions] Reliable components take a very long time to fail. We will thus extrapolate from harsh lab conditions to real-life operating conditions. This requires the introduction of covariates. \n\n\\end{description}\n\n\n\n\\subsection{Failure distribution}\nAssuming the distribution of the failure time is no different from the problem of assuming the sampling distribution in elementary statistical problems (e.g., in a t-test).\nLuckily, there is one very popular estimator of the survival function which is \\emph{non-parametric}, meaning that it is free of any distribution assumption.\n\\begin{definition}[Kaplan-Meier Estimator]\n\tThe Kaplan-Meier estimator of the survival function $\\survive{\\T}{t}$ is defined as \n\t\\begin{align}\n\t\t\\surviven{\\T}{t}:=\\prod_{t_i \\leq t} \\frac{n_i-d_i}{n_i},\n\t\\end{align}\n\twhere $t_i$ is the period between the $i-1$'th event and the $i$'th event, \n\t$n_i$ is the number of units that did not fail until the end of $t_i$,\n\t$d_i$ is the number of units that fail in period $t_i$ (typically $1$, except for ties).\n\\end{definition}\nThe rationale of the estimator is obvious once we realize that $\\frac{n_i-d_i}{n_i}$ estimates the probability of \\textbf{not} failing during period $t_i$, and that surviving all the way to time $t$ means not failing for all previous periods. 
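\nA minimal sketch of this estimator on a tiny assumed data set (not from any real study):\n\\begin{verbatim}\nimport numpy as np\n\n# (time, c) pairs; c = 1 means a failure was observed, 0 = censored\ntimes = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])\nc = np.array([1, 1, 0, 1, 0, 1])\n\nsurv = 1.0\nfor t in np.unique(times[c == 1]):      # distinct failure times\n    n_i = np.sum(times >= t)            # units still at risk\n    d_i = np.sum((times == t) & (c == 1))\n    surv *= (n_i - d_i) / n_i\n    print(t, surv)   # S(2)=5/6, S(3)=2/3, S(5)=4/9, S(9)=0\n\\end{verbatim}\n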
\n\nGiven a covariate, $x$, that splits into sub populations, one may estimate and compare the survival separately for each value of the covariate: $\\surviven{\\T|x}{t}$.\n\n\n\n\n\n\n\\subsection{Identifiability}\n\n\\begin{example}[Likelihood estimation of a series system]\n\\label{eg:likelihood_of_failures}\nAssume a series system $\\Phi$ with $p$ independent, exponential components with rates $(\\lambda_1,\\dots,\\lambda_p)$.\nWe have $n$ observations on the failure times of the system $t_1,\\dots,t_n$.\nHow can we estimate the failure rates?\nTo use a likelihood approach, we need the data's sampling distribution.\nDenoting the failure time of the $j$'th component of the $i$th device with $\\T_{i,j}$, we have that $\\T_{i,j}\\sim \\exp(\\lambda_j)$ by assumption.\nSince the system is serial, then $\\T_i=\\min_j(\\T_{i,1},\\dots,\\T_{i,p})$.\nBy the properties of the exponential distribution $\\T_i \\sim \\exp(\\lambda)$, where $\\lambda:=\\sum_{j=1}^{p} \\lambda_j$, as we have already seen with the failure rate. It follows that\n$\\pdf{\\T_i}{t}=\\lambda \\exp(-\\lambda t)$.\nWe may then write the likelihood function, maximize it with respect to $\\lambda$ and discover, as we already know, that $$\\hat{\\lambda}=\\frac{n}{\\sum_{i=1}^{n} t_i}.$$\nWe are now left with the problem of recovering $(\\lambda_1,\\dots,\\lambda_p)$ from $\\lambda$. \nCan we do it? On the face of it- no. Which should not surprise us, since the mere knowledge of a device failure, is not very informative on the particular component that failed, which we would need to estimate $(\\lambda_1,\\dots,\\lambda_p)$.\n\\end{example}\n\nExample~\\ref{eg:likelihood_of_failures} teaches us that unless further assumptions are introduced, the estimation of the component-wise failure rates requires information on the component-wise failure times. \n\n\n\n\n\n\\subsection{Censored Events}\nConsider several components being analyze for their reliability. \nIronically, we actually want them to fail. If they do not, they do not convey information on their reliability.\nIn the event that a component has not failed, we clearly cannot register its failure time. \nOmitting this component from the sample will upward bias the estimated reliability.\nThese events are called \\emph{censored} observations. \n\nDecomposing the likelihood function, $\\lik$, to the contributions of each independent event, $\\lik=\\prod_{i=1}^n \\lik_i$, then the likelihood of non-censored events is given by\n\\begin{align}\n\t\\lik_i=\\density_\\T(t_i)= \\survive{\\T}{t_i} \\hazard{\\T}{t_i},\n\\end{align}\nand the likelihood of a censored event, under the \\emph{non-informative} assumption, is given by \n\\begin{align}\n\t\\lik_i=\\survive{\\T}{t_i} .\n\\end{align}\nUnifying the two cases assuming independent observations, using an indicator for non-censoring, $c_i$, and taking logs we have\n\\begin{align}\n\t\\loglik &:=\\log \\lik\\\\ \n\t&= \\log \\left[\\prod_{i=1}^{n} \\survive{\\T}{t_i} \\hazard{\\T}{t_i}^{c_i}\\right] \\\\\n\t&= \\sum_{i=1}^{n} [c_i \\log \\hazard{\\T}{t_i} - \\cuhazard{\\T}{t_i}]. \\label{eq:censored_likelihood}\n\\end{align}\n\n\n\\begin{example}[Censored exponential lifetimes]\n\\label{ex:censoring}\nRecalling that the failure rates of exponential lifetimes are fixed, we have that the likelihood of censored exponential lifetimes is given by \n$$\n\t\\sum_{i=1}^{n} [c_i \\log \\lambda - \\lambda t_i].\n$$\nThe maximum likelihood estimator of $\\lambda$ is thus\n\\begin{align}\n\t\\estim{\\lambda}= \\frac{\\sum c_i}{\\sum t_i}. 
\\label{eq:censored_ml}\n\\end{align}\nEq.(\\ref{eq:censored_ml}) lends itself to a nice interpretation.\nThe numerator is the total number of failures.\nThe denominator is the total \\emph{exposure time}. \nThe estimated failure rate is thus the number of failures per unit of exposure time. \n\\end{example}\n\nThe example also demonstrates that overly reliable components are a problem. \nIf no component fails, i.e., all observations are censored, then $\\sum c_i=0$, so that $\\estim{\\lambda}=0$. \nThis is mathematically true, yet absolutely not interesting; we need events to occur for us to learn from them!\nThis motivates the use of covariates such as temperature, pressure, humidity, etc. to force failure events. \nThe problem then becomes how to convert failure rates under extreme conditions to failure rates under regular conditions.\nSometimes we have a physical theory that tells us what the effects of the various factors are.\nFor instance, the \\emph{Arrhenius model} and the \\emph{Eyring model} are physically motivated models that capture the effect of temperature.\nFor models for stress, voltage, humidity, and other life-accelerating covariates, see for instance \\cite[Sec.8.1.5]{natrella_nist/sematech_2010}.\nIn the next section we introduce two very simple models to demonstrate how we deal with covariates. \n\n\n\n\\begin{extra}\nThe result in Example~\\ref{ex:censoring} is trivial if you consider failures as events which come from a Poisson process, which is implied by the exponential-times assumption. \nThe process is run for $\\sum t_i$ time, and the total event count is $\\sum c_i$. \nThe trivial estimator for the rate of the process is $\\sum c_i/\\sum t_i$.\n\\end{extra}\n\n\n\n\n\n\n\\subsection{From lab to reality: ``Accelerating Life''.}\n\n\n\\subsubsection{Accelerated Life model}\nAn \\emph{accelerated life} model assumes that covariates rescale \\emph{time}. \nFor instance, the lab may produce conditions where time advances ten times faster than in real-life operating conditions.\n\n\nThe model is best understood as a \\emph{log-linear} model, i.e., the effects of the various factors, $x$, are linear in log-of-time scale:\n\\begin{align}\n\\label{eq:estimating_accelerated_life}\n\t\\log \\T_i = x_i'\\beta + \\varepsilon_i,\n\\end{align}\nfor some error term $\\varepsilon_i$. \n\nThe definition of accelerated life models via Eq.(\\ref{eq:estimating_accelerated_life}) is very convenient for practical purposes. \nIt means that after taking the log of failure times, effects ($\\beta$'s) can be estimated using any software that can solve a linear model. \n\nThe accelerated life model, like any other log-linear model, implies that our factors have multiplicative effects.\nTo see this, compare the distribution of failure times at some condition $x_0$ versus some other condition $x_1$.\nTaking the exponent of both sides of Eq.(\\ref{eq:estimating_accelerated_life}) we have\n\\begin{align}\n\t\\T_{x_1} = \\T_{x_0} e^{\\beta'(x_1-x_0)},\n\\end{align}\nso we see that a change in the conditions from $x_0$ to $x_1$ multiplies failure times by $e^{\\beta'(x_1-x_0)}$. \nThis also explains why the models are called \\emph{accelerated time}: we can make time ``move'' multiplicatively faster or slower by changing the operating conditions. \n\n\n\n\\begin{think}\n\tThink of a single two-state factor, $x_i \\in\\set{0,1}$, to get some intuition for the meaning of the parameter $\\beta$.
\n\\end{think}\n\n\n\n\\begin{example}[Accelerated life with Gaussian noise]\n\\label{eg:accelerated_gaussian}\nThe maximum likelihood estimation of $\\beta$, when assuming that $\\varepsilon_i \\sim \\gauss{0,\\sigma^2}$, collapses to a simple linear regression with $\\log t_i$ as the dependent variable.\n\\end{example}\n\n\n\\begin{extra}[Tobit regression]\nAssuming $\\varepsilon_i \\sim \\gauss{0,\\sigma^2}$, as in the previous example, implies that failure times are \\emph{log-normally} distributed. \nThis approach is known as \\emph{Tobit} regression.\n\\end{extra}\n\n\n\n\n\n\n\n\\subsubsection{Proportional Hazard Models}\nThe \\emph{proportional hazard}, or \\emph{proportional risk}, class of models assumes that covariates multiply not time, but rather failure rates. \nPut differently, the accelerated-life model rescales time linearly, so hazards are rescaled non-linearly; proportional hazards rescale hazards linearly, so time is rescaled non-linearly. \nQualitatively, they are the same. \nQuantitatively, the exact amount of acceleration (deceleration) may differ. \nThe choice between these models typically depends on the underlying physical theory, and on the ease of computation and interpretation.\n\n\nA proportional hazard model assumes that the risk function when operating under some conditions $x_0$ is related to the risk under conditions $x_1$ via\n\\begin{align}\n\\label{eq:proportional_hazard}\n\t\\hazard{\\T|x_1}{t}:=\\hazard{\\T|x_0}{t} e^{\\beta'(x_1-x_0)}  .\n\\end{align}\n\n\n\\begin{think}\n\tThink of a single two-state factor, $x_i \\in\\set{0,1}$, to get some intuition for the meaning of the parameter $\\beta$. \n\\end{think}\n\n\n\n\nThe linear rescaling of the risk in the proportional hazard model implies the following relation between survival functions:\n\\begin{align}\n\\label{eq:survival_proportional_hazard}\n\t\\survive{\\T|x_1}{t}=\\survive{\\T|x_0}{t}^{\\exp((x_1-x_0)'\\beta)}.\n\\end{align}\nTo see this, recall Eq.(\\ref{eq:survival_to_hazard}):\n\\begin{align*}\n\\hazard{\\T|x_1}{t} \n&=  - \\frac{\\partial}{\\partial t} \\log \\survive{\\T|x_1}{t}  \\\\\n&= - \\frac{\\partial}{\\partial t} \\log \\survive{\\T|x_0}{t}^{\\exp((x_1-x_0)'\\beta)} \\\\\n&= e^{(x_1-x_0)'\\beta} \\left( - \\frac{\\partial}{\\partial t}  \\log \\survive{\\T|x_0}{t} \\right) \\\\\n&=  \\hazard{\\T|x_0}{t} e^{(x_1-x_0)'\\beta} \n\\end{align*}\nas required.\n\n\n\n\n\n\n\\begin{example}[Closure of the exponential failure times]\nThe exponential failure times have a nice property:\nif the failure times are exponential under \\textbf{some} condition, $x_0$, then they are exponential for \\textbf{all} conditions. \nThis holds if we assume that factors have an accelerated-time effect or a proportional-hazard effect. \nPut differently: for the exponential distribution (only), a linear rescaling of time is equivalent to a linear rescaling of hazards. \nPut more differently: if we assume exponential failure times, the proportional-hazard model and the accelerated-life model are the same. \\\\\n\\textbf{Proof:}\nThe proportional-hazard case:\n$\\T_{x_0}\\sim \\exp(\\lambda)$ by the exponential assumption, so that \n$\\hazard{\\T_{x_0}}{t}=\\lambda$.\nNow $\\hazard{\\T_{x_1}}{t}=\\hazard{\\T_{x_0}}{t} e^{\\beta'(x_1-x_0)}$ by the proportional hazard assumption, so that $\\forall x_1: \\hazard{\\T_{x_1}}{t}= const$, which completes the proof that failure times are exponential under \\textbf{all} conditions.
\\\\\nThe accelerated-life case:\n$\\T_{x_0}\\sim \\exp(\\lambda)$ by the exponential assumption, so that \n$e^{\\varepsilon_i}$ is exponential.\nDenoting $\\gamma:=e^{\\beta'(x_1-x_0)}$, we have $\\T_{x_1} = \\gamma \\T_{x_0} $ by the accelerated-time assumption, so that $\\forall x_1: \\T_{x_1} \\sim \\exp(\\lambda/\\gamma)$, which completes the proof that failure times are exponential under \\textbf{all} conditions. \n\\end{example}\n\n\n\n\n\n\n\n\n\\subsection{Choosing the Base Failure Rate}\n\\label{sec:based-failure-rate}\nIn all the above models, we are free to choose the base failure rate: \n$\\survive{\\T_0}{t}$ in Eq.(\\ref{eq:accelerated_life}), or\n$\\varepsilon$ in Eq.(\\ref{eq:estimating_accelerated_life}), or \n$\\hazard{\\T|x_0}{t}$ in Eq.(\\ref{eq:proportional_hazard}).\nThree possible approaches include:\n\\begin{enumerate}\n\\item Assume a \\textbf{parametric} model, such as exponential times, Weibull times, etc.\n\\item Assume a \\textbf{semi-parametric} model, which can simply be seen as a flexible class of distributions that has no particular parametric representation. In failure-time analysis, the \\emph{piece-wise constant} hazard is a popular choice.\n\\item Do not assume anything about the distribution; this is known as a \\textbf{non-parametric} approach. \n\\end{enumerate}\nIf we assume a particular parametric model, then we may gather failure time data, write the likelihood function, and return failure-rate estimates and covariate effects.\nWe now focus on the more flexible framework of semi-parametric modelling.\n\n\n\\subsection{The Parametric Case}\nFitting a parametric model to failure data is simply a maximum likelihood problem.\nExamples \\ref{eg:likelihood_of_failures}, \\ref{eg:accelerated_gaussian}, and \\ref{eg:accelerated_exponential} demonstrate this. \n\n\n\n\\subsection{The Semi Parametric Case}\nWe now relax the explicit failure time distribution assumption, and adopt a more flexible semi-parametric distribution class, known as the \\emph{piecewise exponential class}.\\marginnote{Piecewise Exponential}\nConsider the proportional hazard model:\n\\begin{align}\n\t\\hazard{\\T|x_1}{t}:=\\hazard{\\T|x_0}{t} \\times \\exp((x_1-x_0)'\\beta).\n\\end{align}\nThe model clearly requires some baseline failure rate $\\hazard{\\T|x_0}{t}$.\nA flexible, yet not too flexible, assumption is that the failure rate is constant in some time intervals:\n\\begin{align}\n\t\\hazard{0}{t}=h_j \\quad \\text{if} \\quad t\\in [\\tau_{j-1},\\tau_j).\n\\end{align}\nThis class of distributions has $2J-1$ parameters: $(\\tau_1,\\dots,\\tau_{J-1},h_1,\\dots,h_J)$.\nWe are free to choose $J$. \nA large $J$ gives a very flexible class, but requires a lot of failure data to estimate.\nA small $J$ is less flexible, but requires less data to estimate. \nAt one limit, when $J=1$, we are back to exponential failure times. \nAt the other limit, where $J \\to \\infty$, we have an absurdly flexible distribution class, which requires impossibly large amounts of data to estimate.\n\nSince the failure rate is piece-wise constant, the distribution class is known as \\emph{piece-wise exponential}.\\marginnote{Piecewise Exponential}\nIt is a rather flexible class of distributions.
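To see how such a baseline behaves, here is a small worked computation; the cut-point and rates are hypothetical, chosen only for illustration.\n\\begin{example}[Survival under a piece-wise constant hazard]\nTake $J=2$, with $\\tau_1=10$, $h_1=0.1$ and $h_2=0.3$. The cumulative hazard at $t=15$ is\n\\begin{align*}\n\t\\cuhazard{\\T}{15} = h_1 \\tau_1 + h_2 (15-\\tau_1) = 0.1 \\cdot 10 + 0.3 \\cdot 5 = 2.5,\n\\end{align*}\nso that $\\survive{\\T}{15} = e^{-2.5} \\approx 0.082$.\nWithin each interval the lifetime behaves like an exponential, and the intervals contribute additively to the cumulative hazard.\n\\end{example}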
Figure~\\ref{fig:piecewise_exponential} depicts the approximation of the Weibull survival function, using a piece-wise constant hazard function, with $J=3$ and appropriately selected parameters.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[height=0.2\\textheight]{art/piecewise_exponential}\n\\caption{Piecewise exponential approximation of the Weibull distribution.}\n\\label{fig:piecewise_exponential}\n\\end{figure}\n\n\nThe piecewise-constant hazard model is very convenient to analyze under the proportional hazard assumption:\n\\begin{align}\n\t\\hazard{x}{t} = \\hazard{x_0}{t} e^{(x-x_0)'\\beta} = h_j e^{(x-x_0)'\\beta}.\n\\end{align}\nThere are $p+2J-1$ parameters to estimate: $p$ covariate effects, $J$ rates, and $J-1$ cut-points.\nThis can be done directly using maximum likelihood, or by casting the problem as several separate Poisson regression problems. \nThe latter has the benefit that the problem may be immediately solved with any statistical software suite, using existing numerical solvers.\nWe will not pursue this avenue here, and refer the reader to the bibliographic notes.\n\n\n\n\\section{Collecting the pieces}\nIn this chapter we have seen the probabilistic reliability analysis, where we assumed components' reliabilities are known. \nWe then proceeded to the statistical problem of estimating reliabilities from failure data.\nWe now collect these pieces to sketch a realistic analysis workflow, which would look roughly as follows:\n\\begin{enumerate}\n\\item Estimate reliability parameters by collecting component-wise failure data. Data is collected in a lab so that you may accelerate time and rescale to realistic operating conditions.\n\\item Perform the probabilistic analysis of the whole system using the estimated parameters.\n\\end{enumerate}\n\n\n\\begin{remark}[Interplay between probability and statistics]\nThere is obviously an interplay between the above stages:\nin the statistical analysis, we want the smallest possible set of assumptions;\nfor the probabilistic analysis, the more we can assume, the more we can say about the system as a whole.\n\\end{remark}\n\n\n\n\\begin{remark}[What is a component?]\nThe notion of a ``system'' and a ``component'' is not well defined.\nIndeed, for some purposes you may consider a system as a component in a larger system.\nFor the statistical problem, a component is probably the smallest unit you can collect data on.
At times, the smallest unit may be the system as a whole.\n\\end{remark}\n\n\n\n\n\\section{Repairable systems}\nIn this section we return to the probabilistic analysis.\nUnlike the previous sections, we now study systems which may be repaired after they fail.\nWe will thus want to study the state of the system at time $t$.\nThis state may depend on the number of failures and repairs performed on the system up to $t$.\n\nWe start with the introduction of some quantities of interest.\n\\begin{definition}[Point availability]\nThe point availability at time $t$, denoted $A(t)$, is defined as \n\\begin{align}\n\tA(t):= \\expect{\\Phi_t}=P(\\Phi_t=1) .\n\\end{align}\n\\end{definition}\n\n\\begin{definition}[Interval reliability]\nWhen considering the availability in some time interval $J$, and denoting by $N_J$ the number of system failures in the interval, we may study\n\\begin{align}\n\tP(N_J\\leq k) &\\\\\n\tM(J) &:= \\expect{N_J} \\\\\n\tA(J) &:= P(\\Phi_t=1, \\; \\forall t \\in J).\n\\end{align}\n\\end{definition}\n\n\n\\begin{definition}[Interval downtime]\nDenoting by $Y_J=\\int_J (1-\\Phi_t) dt$ the downtime during interval $J$, we may study\n\\begin{align}\n\tP(Y_J \\leq y) &\t\\\\\n\tA^D(J) &:= \\frac{\\expect{Y_J}}{|J|}\n\\end{align}\n\\end{definition}\n\n\\paragraph{Limiting measures} The particular time $t$, or interval $J$, is usually not of real importance, in the sense that all times and intervals are equally important.\nWe will thus typically be interested in the above performance measures in some \\emph{steady state} of the system, so that the measure is representative of all $t$ (or $J$), and thus no longer depends on $t$ (or $J$).\nThe typical approach for this is to study the limit of the performance measure, which implies the system has reached its steady state. Formally, this means studying the limits $\\lim_{t \\to \\infty}$ of the above measures. \n\n\nWe now start with the analysis of a \\emph{single component} system, which we later complicate into \\emph{multiple component systems}. \nThe required theory is that of stochastic processes, in particular \\emph{counting processes}. \nThe reader is referred to the bibliographic notes for rigorous proofs and details. \n\n\\subsection{Single component systems}\nFor a single component, $\\Phi_t=\\x(t)$. \nIf the component fails, it is replaced or repaired.
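Before the formal treatment, a quick numerical illustration of the kind of statement we are after; the numbers are hypothetical, chosen only for illustration.\n\\begin{example}[A toy repairable component]\nSuppose a component runs for $90$ hours on average before failing, and takes $10$ hours on average to repair. Intuitively, in the long run it is up for $90$ out of every $100$ hours, so its long-run availability should be $90/(90+10)=0.9$. The definitions and theorems below make this intuition precise.\n\\end{example}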
\nWe denote by $T_k$ and $R_k$ the (random) durations of the $k$'th run and the $k$'th repair, respectively.\nWe assume $T_k \\sim F$ and $R_k \\sim G$, all independent.\n\n\\begin{definition}[MTTF]\nWe denote by $\\mu_F=\\expect{T_k}$ the \\emph{mean time to failure} (MTTF).\n\\end{definition}\n\n\n\\begin{definition}[MTTR]\nWe denote by $\\mu_G=\\expect{R_k}$ the \\emph{mean time to repair} (MTTR).\n\\end{definition}\n\nObviously, MTTR and MTTF are important characteristics of the single-component system.\n\n\n\\begin{theorem}[Stable point availability]\nAs $t \\to \\infty$\n\\begin{align}\n\tA(t) \\to \\frac{\\mu_F}{\\mu_F+\\mu_G}.\n\\end{align}\n\\end{theorem}\n\n\\begin{theorem}[Stable failures per unit of time]\n\nAs $t \\to \\infty$, with probability one\n\\begin{align}\n\t\\frac{N_t}{t} \\to \\frac{1}{\\mu_F+\\mu_G}.\n\\end{align}\n\\end{theorem}\n\n\\begin{theorem}[Stable unavailability]\nAs $t \\to \\infty$, with probability one\n\\begin{align}\n\tA^D([0,t])=\\frac{Y_t}{t} \\to \\frac{\\mu_G}{\\mu_F+\\mu_G} .\n\\end{align}\n\t\n\\end{theorem}\n\n\n\n\n\n\n\n\n\\subsection{Multiple component systems}\nWe will now want to study the availability of a system of multiple repairable components.\nThe performance measures of the single-component system still apply, but the analysis now has to account for the fact that the state of the system depends on the state of $n$ repairable components, assumed independent.\nIndexing the components with $i$, we denote by $T_{i,k}, R_{i,k}$ the uptime and repair time of the $k$'th failure of the $i$'th component. Their distributions are $F_i$ and $G_i$ respectively.\nThe number of system failures up to time $t$ is still $N(t)$, but now we also allow for component-wise processes $N_i(t)$, with expectations $M(t)$ and $M_i(t)$.\n\nWe denote by $A_i(t)$ the availability of component $i$ at time $t$, by $A(t)$ the $n$-vector of these availabilities, and by $A_\\Phi(t)$ the whole system's availability. \n\n[TODO:Complete from  \\cite[Sec.4.3]{aven_stochastic_1999}]\n\n\n\n\\section{Bibliographic Notes}\nA light introductory discussion may be found in \\cite{nahmias_production_2015}. \nThe probabilistic analysis in this text is adapted from \\cite{aven_stochastic_1999}.\nThe seminal reference is probably \\cite{barlow_mathematical_1965}.\nThe statistical analysis is adapted from German Rodriguez's Generalized-Linear-Models class notes\\footnote{\\url{http://data.princeton.edu/wws509/notes/c7.pdf}.} and \\cite[Ch.8]{natrella_nist/sematech_2010}.\nFor more on the statistical analysis, see Chapter 13 of the excellent \\citet{venables_modern_2002} and references therein.
\n\n\n% ", "meta": {"hexsha": "95866df25306ec7ced85ccae082e01cbf5e95692", "size": 45234, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Class_notes/reliability.tex", "max_stars_repo_name": "johnros/qualityEngineering", "max_stars_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Class_notes/reliability.tex", "max_issues_repo_name": "johnros/qualityEngineering", "max_issues_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Class_notes/reliability.tex", "max_forks_repo_name": "johnros/qualityEngineering", "max_forks_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.8260869565, "max_line_length": 291, "alphanum_fraction": 0.7533934651, "num_tokens": 12297, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929053683038, "lm_q2_score": 0.7606506635289836, "lm_q1q2_score": 0.5933021210163}}
{"text": "\\documentclass[]{article}\n\n\\usepackage{amsthm} %qed\n\\usepackage[cmex10]{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\theoremstyle{definition}\n\\newtheorem{definition}[theorem]{Definition}\n\n\\newcommand{\\BIN}{\\begin{bmatrix}}\n\\newcommand{\\BOUT}{\\end{bmatrix}}\n\n%\\newcommand{\\mdiff1}[1]{\\frac{\\partial}{\\partial #1}}\n\\newcommand{\\diff}[2]{\\dfrac{\\partial #1}{\\partial #2}}\n\\newcommand{\\norm}[1]{\\left\\| #1 \\right\\|}\n\n\n\\begin{document}\n\n\\title{\\Large Substitutions}\n\\author{Adrien Escande}\n\n\\maketitle\n\n\\section{Introduction}\nIn this document, we are considering a system of linear equations that we want to pre-solve for some variables. We write this system\n\\begin{align}\nA_{1,1} x_1 + A_{1,2} x_2 + \\ldots + A_{1,k} x_k + B_1 y &= c_1 \\\\\nA_{2,1} x_1 + A_{2,2} x_2 + \\ldots + A_{2,k} x_k + B_2 y &= c_2 \\\\\n\\vdots& \\nonumber\\\\\nA_{k,1} x_1 + A_{k,2} x_2 + \\ldots + A_{k,k} x_k + B_k y &= c_k \\\\\nD_1 x_1 + D_2 x_2 + \\ldots + D_k x_k + E y &= f \\label{eq:other}\n\\end{align}\nwhere the $x_j \\in \\mathbb{R}^{n_j}$, $j=1..k$ are the variables for which we want to pre-solve, $y \\in \\mathbb{R}^p$ are all the other possible variables, the $A_{i,j}$ are $m_i \\times n_j$ matrices, the $B_i$ are $m_i \\times p$, the $D_i$ are $q \\times n_j$, $E$ is $q \\times p$, the $c_i$ are $m_i$-vectors, and $f$ is a $q$-vector.\nWe suppose the system is feasible.\n\nWithout loss of generality, we can consider the $k$ first lines to form a block triangular system:\n\\begin{align}\nA_{1,1} x_1 + A_{1,2} x_2 + \\ldots + A_{1,k} x_k + B_1 y &= c_1 \\label{eq:triang1}\\\\\nA_{2,2} x_2 + \\ldots + A_{2,k} x_k + B_2 y &= c_2 \\label{eq:triang2}\\\\\n\\vdots& \\nonumber\\\\\nA_{k,k} x_k + B_k y &= c_k \\label{eq:triangk}\n\\end{align}\nIndeed, if $l\\leq k$ variables, let's note them $x_1, \\ldots x_l$, appears in exactly $l$ equations, that we can take to be the $l$ first ones, then these $l$ equations can be rewritten as one single equation $\\tilde{A} \\tilde{x} + \\tilde{A}_{l+1} x_{l+1} + \\ldots + \\tilde{A}_k x_k + \\tilde{B} y = \\tilde{c}$, with \n\\begin{equation*}\n\t\\tilde{A} = \\BIN A_{1,1} & \\hdots & A_{1,k} \\\\ \\vdots & \\ddots & \\vdots \\\\ A_{k,1} & \\hdots & A_{k,k} \\BOUT, \\\n\t\\tilde{A_j} = \\BIN A_{1,j} \\\\ \\vdots \\\\ A_{l,j}\\BOUT, \\\n\t\\tilde{B} = \\BIN B_{1} \\\\ \\vdots \\\\ B_{l}\\BOUT,\\\n\t\\tilde{c} = \\BIN c_{1} \\\\ \\vdots \\\\ c_{l} \\BOUT \\mbox{and}\\\n\t\\tilde{x} = \\BIN x_{1} \\\\ \\vdots \\\\ x_{l} \\BOUT\n\\end{equation*}\n\nLet's note $r_j$ the rank of the matrix $A_{j,j}$. We require that $r_j \\leq n_j$.\nOur goal is to transform the system (\\ref{eq:triang1}) - (\\ref{eq:triangk}), (\\ref{eq:other}) into an equivalent system\n\\begin{align}\n  x_1 &= \\varphi_1(y,z_1, \\ldots, z_k) \\\\\n\tx_2 &= \\varphi_2(y,z_2, \\ldots, z_k) \\\\\n  \\vdots & \\nonumber \\\\\n\tx_k &= \\varphi_{k-1}(y,z_k) \\\\\n\tx_k &= \\varphi_k(y) \\\\\n\t\\psi(y,z_1, \\ldots z_k) &= 0\n\\end{align}\nwhere $z_j \\in \\mathbb{R}^{n_j-r_j}$, and $\\psi$ and the $\\varphi_i$ are linear functions.\n\n\n\\section{One variable substitution}\n\\subsection{Principle}\nConsider the system\n\\begin{align}\n\tA x + B y &= c \\label{eq:simple1}\\\\\n\tD x + E y &= f \\label{eq:simple2}\n\\end{align}\nwhere we want to pre-solve in $x$, using the first equation. 
We denote $r$ the rank of $A$.\n\nLet $A^\\sharp$ be a generalized inverse of $A$, $N$ a basis of $\\mathrm{null}(A)$, and $S$ a basis of $\\mathrm{null}(A^T)$, and define $P = (I-A^T {A^\\sharp}^T) N^{+T}$. Using the tautology $P^T x = P^T x$, we have\n\\begin{align*}\n  \\mathrm{(\\ref{eq:simple1})} &\\Longleftrightarrow \n\t\\left\\{\\begin{array}{rcl}\n\t  \\BIN A \\\\ P^T\\BOUT x \\hspace{-5pt}&=&\\hspace{-5pt} \\BIN -By + c \\\\ P^T x\\BOUT\n\t\\end{array}\\right.\\\\\n\t&\\Longleftrightarrow \n\t\\left\\{\\begin{array}{rcl}\n\t  \\BIN A^{\\sharp} & N \\\\ S^T & 0 \\BOUT \\BIN A \\\\ P^T\\BOUT x \\hspace{-5pt}&=&\\hspace{-5pt} \\BIN A^{\\sharp} & N \\\\ S^T & 0 \\BOUT \\BIN -By + c \\\\ P^T x\\BOUT\n\t\\end{array}\\right.\\\\\n\t&\\Longleftrightarrow \n\t\\left\\{\\begin{array}{rcl}\n\t  x \\hspace{-5pt}&=&\\hspace{-5pt}  -A^{\\sharp}By + N z + A^{\\sharp}c \\\\ \n\t\tz \\hspace{-5pt}&=&\\hspace{-5pt} P^T x\\\\\n\t\tS^T By \\hspace{-5pt}&=&\\hspace{-5pt} S^T c\n\t\\end{array}\\right.\\\\\n\t&\\Longleftrightarrow \n\t\\left\\{\\begin{array}{rcl}\n\t  x \\hspace{-5pt}&=&\\hspace{-5pt}  -A^{\\sharp}By + N z + A^{\\sharp}c, \\quad z \\in \\mathbb{R}^{n-r} \\\\ \n\t\tS^T By \\hspace{-5pt}&=&\\hspace{-5pt} S^T c\n\t\\end{array}\\right.\n\\end{align*}\nThe second equivalence uses the fact that the introduced matrix is invertible (see theorem~\\ref{th:inverse}), the third that $A^{\\sharp}A + NP^T = I$ (see theorem~\\ref{th:NP}), and the last that $P$ is full rank, so that the range of $P^T$ is $\\mathbb{R}^{n-r}$. \n\nDenoting $M = -A^\\sharp B$ and $u = A^\\sharp c$, we thus have\n\\begin{equation*}\n  \\left. \\begin{array}{c} (\\ref{eq:simple1}) \\\\ (\\ref{eq:simple2}) \\end{array} \\right\\}\n\t\\Longleftrightarrow\n\t\\left\\{ \\begin{array}{rcl}\n\tx \\hspace{-5pt}&=&\\hspace{-5pt} M y + N z + u \\\\\n\t(E + D M) y + D N z \\hspace{-5pt}&=&\\hspace{-5pt} f - D u \\\\\n\tS^T B y \\hspace{-5pt}&=&\\hspace{-5pt} S^T c\n\t\\end{array} \\right.\n\\end{equation*}\n\n\\subsection{Generic $A$}\nFor a generic $A$, we can compute a rank-revealing QR\n\\begin{equation}\n\tA = \\BIN Q^r & Q^c \\BOUT \\BIN R^r & R^c \\\\ 0 & 0 \\BOUT \\BIN {\\Pi^r}^T \\\\ {\\Pi^c}^T \\BOUT\n\\end{equation}\nThen we can take $A^{\\sharp} = {\\Pi^r} {R^r}^{-1} {Q^r}^T$, $N = - \\Pi^r {R^r}^{-1} R^c + \\Pi^c$ and $S = Q^c$. This yields $P = \\Pi^c$.
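As a quick sanity check of these formulas, take again the hypothetical $A = \\BIN 1 & 1 \\BOUT$ used above. No pivoting is needed ($\\Pi = I$), and $Q^r = 1$, $R^r = 1$, $R^c = 1$, with $Q^c$ empty since $A$ has full row rank. The formulas give\n\\begin{equation*}\n\tA^{\\sharp} = \\BIN 1 \\\\ 0 \\BOUT, \\quad N = \\BIN -1 \\\\ 1 \\BOUT, \\quad P = \\Pi^c = \\BIN 0 \\\\ 1 \\BOUT,\n\\end{equation*}\nand one verifies $A A^{\\sharp} A = A$, $AN = 0$ and $A^{\\sharp}A + NP^T = \\BIN 1 & 1 \\\\ 0 & 0 \\BOUT + \\BIN 0 & -1 \\\\ 0 & 1 \\BOUT = I$, as required. Note that this QR-based $A^{\\sharp}$ differs from the Moore-Penrose pseudo-inverse used in the earlier illustration; both are valid generalized inverses, and the resulting parametrizations of the solution set are equivalent up to a change of $z$.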
\n\n\n\n\\section{Multiple variables}\nConsider the system\n\\begin{align}\nA_{1,1} x_1 + A_{1,2} x_2 + \\ldots + A_{1,l} x_l + B_1^{(l)} y + Z_1^{(l)} z^{(l)}&= c_1^{(l)} \\label{eq:triang_1}\\\\\nA_{2,2} x_2 + \\ldots + A_{2,l} x_l + B_2^{(l)} y + Z_2^{(l)} z^{(l)}&= c_2^{(l)} \\label{eq:triang_2}\\\\\n\\vdots& \\nonumber\\\\\nA_{l,l} x_l + B_l^{(l)} y + Z_l^{(l)} z^{(l)}&= c_l^{(l)} \\label{eq:triang_l}\\\\\nD_1 x_1 + D_2 x_2 + \\ldots + D_l x_l + E^{(l)} y + H^{(l)} z^{(l)}&= f^{(l)} \\label{eq:other_}\n\\end{align}\nfor $l \\leq k$, with $z^{(l)} \\in \\mathbb{R}^{n_s}$, $n_s = \\sum_{i=l+1}^{k}{(n_i - r_i)}$ and $Z_i^{(l)} \\in \\mathbb{R}^{m_i \\times n_s}$.\n\nWe use the above section for eq.~(\\ref{eq:triang_l}): let $A_l^\\sharp$, $N_l$ and $S_l$ be, respectively, a generalized inverse of $A_{l,l}$, a basis of $\\mathrm{null}(A_{l,l})$ and a basis of $\\mathrm{null}(A_{l,l}^T)$.\nWe define\n\\begin{align*}\n\tM_l &= -A_l^\\sharp B_l^{(l)}\\\\\n\tu_l &= A_l^\\sharp c_l^{(l)}\n\\end{align*}\nEq.~(\\ref{eq:triang_l}) becomes\n\\begin{equation}\n\t\\left\\{ \\begin{array}{rcl}\n\tx_l \\hspace{-5pt}&=&\\hspace{-5pt} M_l y -A_l^\\sharp Z_l^{(l)} z^{(l)} + N_l z_l + u_l \\\\\n\tS_l^T B_l^{(l)} y + S_l^T Z_l^{(l)} z^{(l)} \\hspace{-5pt}&=&\\hspace{-5pt} S_l^T c_l^{(l)}\n\t\\end{array}\\right.\n\\end{equation}\nSubstituting $x_l$ in the $i^{\\mathrm{th}}$ equation ($i<l$) yields\n\\begin{equation*}\n\\footnotesize\n  A_{i,i} x_i + \\ldots + A_{i,l-1} x_{l-1} + (B_i^{(l)} + A_{i,l} M_l)y \n\t+ \\BIN\\hspace{-1pt} Z_i^{(l)} \\hspace{-7pt}-\\hspace{-3pt} A_{i,l} A_l^\\sharp Z_l^{(l)} &\\hspace{-5pt} A_{i,l} N_l\\hspace{-1pt}\\BOUT\\hspace{-5pt} \\BIN \\hspace{-1pt}z^{(l)}\\hspace{-1pt} \\\\ z_l \\BOUT \\hspace{-4pt}\n\t= \\hspace{-3pt} c_i^{(l)} \\hspace{-3pt}- A_{i,l} u_l\n\\end{equation*}\nLikewise eq.~(\\ref{eq:other_}) becomes\n\\begin{equation*}\n\\small\n  D_1 x_1 + \\ldots + D_{l-1} x_{l-1} + (E^{(l)} + D_l M_l)y \n\t+ \\BIN\\hspace{-1pt} H^{(l)} \\hspace{-5pt}-\\hspace{-3pt} D_l A_l^\\sharp Z_l^{(l)} &\\hspace{-5pt} D_l N_l\\hspace{-1pt}\\BOUT\\hspace{-5pt} \\BIN \\hspace{-1pt}z^{(l)}\\hspace{-1pt} \\\\ z_l \\BOUT \\hspace{-4pt}\n\t= \\hspace{-3pt} f^{(l)} \\hspace{-3pt}- D_l u_l\n\\end{equation*}\nWe define\n\\footnote{\nNote that we could also define $z^{(l-1)} = \\BIN z_l \\\\ z^{(l)} \\BOUT$ and change $Z_i^{(l-1)}$ and $H^{(l-1)}$ accordingly.
We make the current choice with implementation efficiency in mind: it is easier to grow a matrix at the bottom or on the right, while ensuring the memory alignment.\n}\n\\begin{align*}\nB_i^{(l-1)} &= (B_i^{(l)} + A_{i,l} M_l) = (B_i^{(l)} - A_{i,l} A_l^\\sharp B_l^{(l)})\\\\\nz^{(l-1)} &= \\BIN z^{(l)}\\\\ z_l \\BOUT\\\\\nZ_i^{(l-1)} &= \\BIN Z_i^{(l)} - A_{i,l} A_l^\\sharp Z_l^{(l)} & A_{i,l}  N_l\\BOUT\\\\\nc_i^{(l-1)} &= c_i^{(l)} - A_{i,l} u_l = c_i^{(l)} - A_{i,l} A_l^\\sharp c_l^{(l)}\\\\\nE^{(l-1)} &= E^{(l)} + D_l M_l = E^{(l)} - D_l A_l^\\sharp B_l^{(l)}\\\\\nH^{(l-1)} &= \\BIN H^{(l)} - D_l A_l^\\sharp Z_l^{(l)} & D_l N_l\\BOUT\\\\\nf^{(l-1)} &= f^{(l)} - D_l u_l = f^{(l)} - D_l A_l^\\sharp c_l^{(l)}\n\\end{align*}\nand the system (\\ref{eq:triang_1}) - (\\ref{eq:other_}) is equivalent to\n\\begin{align}\nA_{1,1} x_1 + A_{1,2} x_2 + \\ldots + A_{1,l-1} x_{l-1} + B_1^{(l-1)} y + Z_1^{(l-1)} z^{(l-1)}&= c_1^{(l-1)}  \\label{eq:triang_1s}\\\\\nA_{2,2} x_2 + \\ldots + A_{2,l-1} x_{l-1} + B_2^{(l-1)} y + Z_2^{(l-1)} z^{(l-1)}&= c_2^{(l-1)}\\\\\n\\vdots& \\nonumber\\\\\nA_{l-1,l-1} x_{l-1} + B_{l-1}^{(l-1)} y + Z_{l-1}^{(l-1)} z^{(l-1)}&= c_{l-1}^{(l-1)}\\\\\nD_1 x_1 + D_2 x_2 + \\ldots + D_{l-1} x_{l-1} + E^{(l-1)} y + H^{(l-1)} z^{(l-1)}&= f^{(l-1)} \\label{eq:other_s}\\\\\nM_l y -A_l^\\sharp Z_l^{(l)} z^{(l)} + N_l z_l + u_l &= x_l\\\\\n\tS_l^T B_l^{(l)} y + S_l^T Z_l^{(l)} z^{(l)} &= S_l^T c_l^{(l)}\n\\end{align}\nEquations~(\\ref{eq:triang_1s})-(\\ref{eq:other_s}) have the same form as equations (\\ref{eq:triang_1}) - (\\ref{eq:other_}), so that we can apply the above recursively, starting with\n\\begin{align*}\nB_i^{(k)} &= B_i\\\\\nz^{(k)} &\\mathrm{empty}\\\\\nZ_i^{(k)} & \\mathrm{empty}\\\\\nc_i^{(k)} &= c_i\\\\\nE^{(k)} &= E\\\\\nH^{(k)} & \\mathrm{empty} \\\\\nf^{(k)} &= f\n\\end{align*}\n\nWe get\n\\begin{align}\n  x_1 &= M_1 y - A_1^\\sharp Z_1^{(1)} z^{(1)} + N_1 z_1 + u_1\\\\\n  x_2 &= M_2 y - A_2^\\sharp Z_2^{(2)} z^{(2)} + N_2 z_2 + u_2\\\\\n  &\\vdots\\nonumber \\\\\n  x_k &= M_k y - \\underbrace{A_k^\\sharp Z_k^{(k)} z^{(k)}}_0 + N_k z_k + u_k\\\\\n  E^{(0)} y + H^{(0)} z^{(0)}&= f^{(0)} \\\\\n\tS_1^T B_1^{(1)} y + S_1^T Z_1^{(1)} z^{(1)} &= S_1^T c_1^{(1)}\\\\\n\tS_2^T B_2^{(2)} y + S_2^T Z_2^{(2)} z^{(2)} &= S_2^T c_2^{(2)}\\\\\n\t&\\vdots\\nonumber \\\\\n\tS_k^T B_k^{(k)} y + S_k^T Z_k^{(k)} z^{(k)} &= S_k^T c_k^{(k)}\n\\end{align}\n\n\\appendix\n\\section{Some properties around the generalized inverse}\nIn this section, $A$ is a general $m \\times n$ matrix with rank $r$.\n\n\\begin{definition}\n  $G \\in \\mathbb{R}^{n \\times m}$ is a generalized inverse of $A$ if and only if $A G A = A$.\n\\end{definition}\nThe following theorem is due to [Rao 1972]:\n\\begin{theorem}\n  Given a particular generalized inverse $G$, and an arbitrary matrix $M\\in \\mathbb{R}^{n \\times m}$, the matrix\n\t\\begin{equation}\n\t\tG + M - GAMAG\n\t\\end{equation}\n\tis a generalized inverse of $A$, and all of the generalized inverses of $A$ are obtained this way.\n\\end{theorem}\n\nLet's consider the SVD decomposition of $A$:\n\\begin{equation*}\n  A = \\BIN U_1 & U_2 \\BOUT \\BIN \\Sigma & 0 \\\\ 0 & 0 \\BOUT \\BIN V_1^T \\\\ V_2^T \\BOUT = U_1 \\Sigma V_1^T\n\\end{equation*}\nwhere $\\BIN U_1 & U_2 \\BOUT$ and $\\BIN V_1 & V_2 \\BOUT$ are orthogonal matrices, $U_1 \\in \\mathbb{R}^{m \\times r}$, $U_2 \\in \\mathbb{R}^{m \\times m-r}$, $V_1 \\in \\mathbb{R}^{n \\times r}$, $V_2 \\in \\mathbb{R}^{n \\times n-r}$ and $\\Sigma \\in \\mathbb{R}^{r \\times r}$.
As a particular generalized inverse, we have the Moore-Penrose pseudo-inverse $A^+ = V_1 \\Sigma^{-1} U_1^T$, and we can write any generalized inverse $G$ as\n\\begin{equation}\n  G = V_1 \\Sigma^{-1} U_1^T + M - V_1 V_1^T M U_1 U_1^T \\label{eq:generalFormulation}\n\\end{equation}\n\n\\begin{lemma}\n\\label{lemma:projection}\nFor any generalized inverse $G$\n\t\\begin{align*}\n\tV_2 V_2^T (I-GA) &= I-GA\\\\\n\t(I-AG)U_2U_2^T &= I-AG\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}\nLet $M$ be such that $G$ writes as in eq.~(\\ref{eq:generalFormulation}). Then, using that $U_1^T U_1 = I$, $V_1^T V_1 = I$ and $V_1 V_1^T + V_2 V_2^T = I$, we get\n\\begin{equation*}\n  I-GA = V_2 V_2^T (I-MA)\n\\end{equation*}\nand thus\n\\begin{equation*}\n  V_2 V_2^T (I-GA) = V_2 V_2^T V_2 V_2^T (I-MA) = V_2 V_2^T (I-MA) = I - GA\n\\end{equation*}\nSimilarly\n\\begin{equation*}\n  I-AG = (I-AM)U_2 U_2^T\n\\end{equation*}\nand\n\\begin{equation*}\n  (I-AG)U_2 U_2^T = (I-AM)U_2 U_2^T U_2 U_2^T = (I-AM)U_2 U_2^T  = I-AG\n\\end{equation*}\n\\end{proof}\n\n\\begin{corollary}\n\\label{co:projection}\n$(I-GA)G U_2 U_2^T = (I-GA)G$ and $V_2 V_2^T G(I-AG) = G(I-AG)$\n\\end{corollary}\n\\begin{proof}\nNote that $(I-GA)G = (G - GAG) = G(I-AG)$. Then\n\\begin{align*}\n  (I-GA)G U_2 U_2^T = G(I-AG) U_2 U_2^T = G(I-AG) = (I-GA)G \\\\\n\tV_2 V_2^T G(I-AG) = V_2 V_2^T (I-GA)G = (I-GA)G = G(I-AG)\n\\end{align*}\nwhere for each line the second equality is a consequence of Lemma~\\ref{lemma:projection}.\n\\end{proof}\n\nWe have that $\\mathrm{rank}(GA) = \\mathrm{rank}(AG) =  \\mathrm{rank}(A) =  r$, so that the projectors $(I-GA)$ and $(I-AG)$ have rank $n-r$ and $m-r$ respectively.\nTherefore, we have the rank factorizations\n\\begin{align*}\n \\exists N,P\\in \\mathbb{R}^{n \\times n-r}, \\quad \\mathrm{rank}(N) = \\mathrm{rank}(P) = n-r &&\\ I-GA = N P^T\\\\\n  \\exists T,S\\in \\mathbb{R}^{m \\times m-r}, \\quad \\mathrm{rank}(T) = \\mathrm{rank}(S) = m-r && I-AG = T S^T \n\\end{align*}\n\n\\begin{theorem}\n\\label{th:NP}\n  $I-GA = N P^T$ with $N$ and $P$ full rank iff $N$ is a basis of $\\mathrm{null}(A)$ and $P = (I-A^TG^T)N^{+T}$.\n\\end{theorem}\n\\begin{proof}\n  Let's first prove $\\Longrightarrow$.\n\t\\newline\n\t$I-GA = N P^T \\Longrightarrow A(I-GA) = A N P^T \\Longrightarrow ANP^T=0$. Since $P^T$ is full rank, its range space is $\\mathbb{R}^{n-r}$, so that $ANP^T=0 \\Longrightarrow AN = 0$. $N$ is full rank so it is a basis of $\\mathrm{null}(A)$.\n\\newline\t\n\tSince $N$ is full rank, $N^+ N = I$. Thus $I-GA = N P^T \\Longrightarrow N^+(I-GA) = P^T \\Longrightarrow P = (I-A^TG^T)N^{+T}$.\n\t\n\tConversely, let's prove $\\Longleftarrow$.\n  \\newline\n\t$V_2$ is also a basis of $\\mathrm{null}(A)$, so that there is an $n-r \\times n-r$ invertible matrix $X$ such that $N = V_2 X$. 
Therefore $N^+ = X^{-1} V_2^T$ and $N P^T = V_2 V_2^T (I-GA) = I-GA$ where the last equality comes from Lemma~\\ref{lemma:projection}.\n\\end{proof}\nA direct consequence of this theorem is that for any choice of a generalized inverse $G$ of $A$ and of a basis $N$ of $\\mathrm{null}(A)$, we can find $P$ to get a rank factorization of $I-GA$.\n\nAn equivalent theorem can be made for $I-AG$:\n\\begin{theorem}\n\\label{th:TS}\n  $I-AG = T S^T$ with $T$ and $S$ full rank iff $S$ is a basis of $\\mathrm{null}(A^T)$ and $T = (I-AG)S^{+T}$.\n\\end{theorem}\n\\begin{proof}\n\tThe proof follows the same lines as for the previous theorem.\n\\end{proof}\n\n\\begin{theorem}\n\\label{th:inverse}\n\tLet $G$ be a generalized inverse of $A$, $N$ a basis of $\\mathrm{null}(A)$ and $S$ a basis of $\\mathrm{null}(A^T)$. Then the matrix\n\t\\begin{equation}\n\t\tH = \\BIN G & N \\\\ S^T & 0 \\BOUT\n\t\\end{equation}\n\tis invertible and its inverse is\n\t\\begin{equation}\n\t  \\BIN A & T \\\\ P^T  & -P^TGS^{+T} \\BOUT\n\t\\end{equation}\n\twhere $P$ and $T$ are as defined in theorems \\ref{th:NP} and \\ref{th:TS}.\n\\end{theorem}\n\\begin{proof}\n\tLet's show that\n\t\\begin{equation}\n\t\t\\BIN A & T \\\\ P^T  & -P^TGS^{+T} \\BOUT \\BIN G & N \\\\ S^T & 0 \\BOUT = \n\t\t\\BIN AG + TS^T & AN \\\\ P^T G - P^TG S^{+T} S^T & P^T N \\BOUT \\label{eq:inverse}\n\t\\end{equation}\n\tis the identity.\n\t\\newline\n\t$AG + TS^T = I$ because by definition of $T$, $I-AG = T S^T$ (theorem \\ref{th:TS}).\n\t\\newline\n\t$AN = 0$ by definition of $N$.\n\t\\newline\n\tSince $S$ can be written as $U_2 Y$ with $Y$ invertible, we have that $S^{+T} S^T = U_2 U_2^T$, so using the expression of $P$, $P^T G S^{+T} S^T = N^+(I-GA) G U_2 U_2^T = N^+(I-GA) G = P^T G$, with the second equality coming from Corollary \\ref{co:projection}. Therefore $P^T G - P^TG S^{+T} S^T = P^T G - P^TG = 0$.\n\t\\newline\n\t$P^T N = N^+(I-GA) N = N^+ N = I$ where the second equality is a consequence of $AN = 0$ and the third of $N$ being full rank.\n\t\\newline\n\tThe left factor in eq.~(\\ref{eq:inverse}) is thus a left inverse of $H$; therefore $H$ is invertible and this factor is its inverse.\n\\end{proof}\n\\emph{Remark}: the left factor in~(\\ref{eq:inverse}) is therefore also the right inverse of $H$.
This can be verified in the same way as above, using the fact that $P^T G S^{+T} = N^+ G T$.\n\\end{document}", "meta": {"hexsha": "abfa12fe58726dad7acf45110f8cbd4cc0ccc9f8", "size": 15835, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/Substitution.tex", "max_stars_repo_name": "mmurooka/tvm", "max_stars_repo_head_hexsha": "9098ad443d059ed4d216afb5bcc41655afdc34e0", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2018-12-20T06:44:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T20:16:45.000Z", "max_issues_repo_path": "doc/Substitution.tex", "max_issues_repo_name": "mmurooka/tvm", "max_issues_repo_head_hexsha": "9098ad443d059ed4d216afb5bcc41655afdc34e0", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 20, "max_issues_repo_issues_event_min_datetime": "2019-12-05T13:41:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-14T16:44:54.000Z", "max_forks_repo_path": "doc/Substitution.tex", "max_forks_repo_name": "mmurooka/tvm", "max_forks_repo_head_hexsha": "9098ad443d059ed4d216afb5bcc41655afdc34e0", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2019-07-22T09:54:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-05T02:27:05.000Z", "avg_line_length": 46.849112426, "max_line_length": 422, "alphanum_fraction": 0.605620461, "num_tokens": 6999, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.7606506418255928, "lm_q1q2_score": 0.5933021079779263}}
{"text": "\\subsection{Tuning Rules According to Chien, Hrones and Reswick (Step Response Method)}\n\nThis  method is an improvement of the  Ziegler-Nichols  method  and  uses  the\nsystem's step response characteristics $T_u$, $T_g$ and  $K_s$  (discussed  in\nsection \\ref{sec:theory:characterisation}).\n\nThe procedure of the Chien-Hrones-Reswick method is:\n\n\\begin{enumerate}\n    \\item Record the step response of the plant and measure the gain $K_{S}$, the delay time $T_{u}$ and the compensation time $T_{g}$.\n    \\item Determine the controller parameters according to the tuning rules (see tables \\ref{tab:CHR_rejection} and \\ref{tab:CHR_tracking}).\n\\end{enumerate}\n\n\\begin{center}\n    \\begin{threeparttable}\n        \\begin{tabular}{cccc}\n            \\toprule\n            Type & $K_p$                                    &  $T_{i}$            &  $T_{d}$ \\\\\n            \\midrule\n            P    &  $0.3   \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  -                  &  -                  \\\\\n            PI   &  $0.6   \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  $4 \\cdot T_u$      &  -                  \\\\\n            PID  &  $0.95  \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  $2.4 \\cdot T_u$    &  $0.42 \\cdot T_u$   \\\\\n            \\bottomrule\n        \\end{tabular}\n        \\caption{Table with controller parameters according to the Chien-Hrones-Reswick method (good disturbance rejection).}\n        \\label{tab:CHR_rejection}\n    \\end{threeparttable}\n\\end{center}\n\n\\begin{center}\n    \\begin{threeparttable}\n        \\begin{tabular}{cccc}\n            \\toprule\n            Type & $K_p$                                    &  $T_{i}$            &  $T_{d}$ \\\\\n            \\midrule\n            P    &  $0.3   \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  -                  &  -                  \\\\\n            PI   &  $0.35  \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  $1.2 \\cdot T_g$    &  -                  \\\\\n            PID  &  $0.6   \\cdot\\frac{T_g}{T_u \\cdot K_s}$  &  $T_g$              &  $0.5 \\cdot T_u$    \\\\\n            \\bottomrule\n        \\end{tabular}\n        \\caption{Table with controller parameters according to the Chien-Hrones-Reswick method (aperiodic behaviour, good tracking).}\n        \\label{tab:CHR_tracking}\n    \\end{threeparttable}\n\\end{center}\n\nBased  on  the  ratio   of   $\\lambda=\\frac{T_{u}}{T_{g}}$,  we  can  say  how\ncontrollability the control plant is.\n\n\\begin{center}\n    \\begin{threeparttable}\n        \\begin{tabular}{ccc}\n            \\toprule\n            Ratio                     &  Controllability  & Difficulty \\\\\n            \\midrule\n            $\\lambda < 0.1$           &  very good        & small  \\\\ \n            $0.1 \\leq \\lambda < 0.2$  &  good             & medium  \\\\ \n            $0.2 \\leq \\lambda < 0.4$  &  fair             & large  \\\\ \n            $0.4 \\leq \\lambda < 0.8$  &  bad              & very large  \\\\ \n            $ \\lambda \\geq 0.2$       &  very bad         & \\makecell{(special \\\\ measurements \\\\ required)}  \\\\ \n            \\bottomrule\n        \\end{tabular}\n        \\caption{Table showing approximate ratios and how ``good'' they are}\n        \\label{tab:controllability}\n    \\end{threeparttable}\n\\end{center}\n\n", "meta": {"hexsha": "329eccf81b8614b6735b9e916b645222d2044db8", "size": 3124, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "versuche/rtGL/labor2/sections/theory/chien_hrones_reswick.tex", "max_stars_repo_name": "TheComet93/laborjournal", 
"max_stars_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "versuche/rtGL/labor2/sections/theory/chien_hrones_reswick.tex", "max_issues_repo_name": "TheComet93/laborjournal", "max_issues_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "versuche/rtGL/labor2/sections/theory/chien_hrones_reswick.tex", "max_forks_repo_name": "TheComet93/laborjournal", "max_forks_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.6268656716, "max_line_length": 140, "alphanum_fraction": 0.5150448143, "num_tokens": 937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257127, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5933021051138253}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\n\n\\begin{document}\n\\begin{center}\n\\textbf{\\huge{Week 2}}\n\\end{center}\n\n\\section{Independent random variables}\nDefinition: Two random variables $X_1, X_2 $ are said to be independent if,\n$$ P(X_1=x_1,X_2=x_2)= P(X_1=x_1)P(X_2=x_2) \\quad \\forall \\; x_1 \\in \\mathcal{X}_1\\, \\&\\; x_2 \\in \\mathcal{X}_2$$\n\n\\fbox{$ P(X_1=x_1,X_2=x_2)$ means that $X_1=x_1$ \\textbf{AND} $X_2=x_2$ occur.}\n\n\\subsection{Claim}\n$$ \\sum_{x_2 \\in \\mathcal{X}_2} P(X_1=x_1,X_2=x_2)=P(X_1=x_1)$$\n\n\\textbf{Proof}: Suppose $A $ \\& $B$ are disjoint/mutually exclusive. Then,\n$$ P(A \\cap B)=0 \\quad \\& \\quad P(A \\cup B)=P(A)+P(B)$$\n\nNow the events $(X_1=x_1)\\cap (X_2=x_2)$ are disjoint for different values of $x_2\\in \\mathcal{X}_2$.(if $x_2 \\neq x_2' \\;\\forall\\; x_2\\in \\mathcal{X}_2$)\n\nThus,\n\n    \\begin{align*}\n        \\sum_{x_2 \\in \\mathcal{X}_2} P(X_1=x_1,X_2=x_2)& = \\sum_{x_2 \\in \\mathcal{X}_2} P\\bigl( (X_1=x_1)\\cap (X_2=x_2) \\bigr) \\\\\n        &=P\\left( \\bigcup_{x_2 \\in \\mathcal{X}_2} (X_1=x_1)\\cap (X_2=x_2) \\right)\n    \\end{align*}\nNow,\n    \\begin{align*}\n        \\bigcup_{x_2 \\in \\mathcal{X}_2} \\left[(X_1=x_1)\\cap (X_2=x_2) \\right]\n        &= (X_1=x_1)\\bigcap \\left[ \\bigcup_{x_2 \\in \\mathcal{X}_2} (X_2=x_2) \\right] \\\\\n        &= (X_1=x_1)\\bigcap \\left[ x_2 \\in \\mathcal{X}_2 \\right]\n    \\end{align*}\nAs $[ x_2 \\in \\mathcal{X}_2] $ forms the entire sample space,\n\\begin{align*}\n    P\\left( \\bigcup_{x_2 \\in \\mathcal{X}_2} (X_1=x_1)\\cap (X_2=x_2) \\right) &= P\\left( (X_1=x_1)\\bigcap \\left[ x_2 \\in \\mathcal{X}_2 \\right] \\right) \\\\\n    &= P\\left( (X_1=x_1) \\bigcap \\Omega \\right) \\\\\n    &= P(X_1=x_1)\n\\end{align*}\n\n\n\\section{Entropy}\n\nSuppose $X_1 \\in \\mathcal{X}_1\\, \\&\\; X_2 \\in \\mathcal{X}_2$ are \\textbf{independant} random variables. Then,\n$$ H(X_1,\\;X_2)=H(X_1)+H(X_2)$$\n\nProof:\n\\begin{gather*}\n    \\sum_{ \\substack{x_1 \\in \\mathcal{X}_1 \\\\ x_2 \\in \\mathcal{X}_2}} P(X_1=x_1,X_2=x_2) \\log \\left( \\frac{1}{P(X_1=x_1,X_2=x_2)} \\right) \\\\\n    = \\sum_{x_1 \\in \\mathcal{X}_1 }\\sum_{x_2 \\in \\mathcal{X}_2} P(X_1=x_1,X_2=x_2) \\left[  \\log \\frac{1}{P(X_1=x_1)} + \\log \\frac{1}{P(X_2=x_2)}\\right] \\\\\n    = \\sum_{x_1 \\in \\mathcal{X}_1 } \\log \\frac{1}{P(X_1=x_1)} \\left( \\sum_{x_2 \\in \\mathcal{X}_2} P(X_1=x_1,X_2=x_2)\\right) \\\\\n    + \\sum_{x_2 \\in \\mathcal{X}_2 } \\log \\frac{1}{P(X_2=x_2)} \\left( \\sum_{x_1 \\in \\mathcal{X}_1} P(X_1=x_1,X_2=x_2)\\right)\n\\end{gather*}\n\nFrom the previous claim,\n\\begin{gather*}\n    H(X_1,X_2)= \\sum_{x_1 \\in \\mathcal{X}_1}P(X_1=x_1) \\log \\frac{1}{P(X_1=x_1)} \\\\\n    + \\sum_{x_2 \\in \\mathcal{X}_2}P(X_2=x_2) \\log \\frac{1}{P(X_2=x_2)} \\\\\n    =H(X_1)+H(X_2)\n\\end{gather*}\n\n\\subsection{Conditional probability distribution}\n\nWhat if $X_1$ \\& $X_2$ are not independent? Then we would use conditional probability distribution. 
i.e.,\n$$ P(X_2=x_2/X_1=x_1):= \\frac{P(X_2=x_2, X_1=x_1)}{P(X_1=x_1)}, \\quad P(X_1=x_1) \\neq 0 $$\n\nThis definition for conditional probability satisfies the probability axioms and hence it is a valid probability measure.\n\n\\subsection{Conditional entropy}\nDefinition:\n$$ H(X_2/X_1):= \\sum_{x_1 \\in \\mathcal{X}_1} P(X_1=x_1) H(X_2/X_1=x_1)$$\nwhere\n$$ H(X_2/X_1=x_1):= \\sum_{x_2 \\in \\mathcal{X}_2}P(X_2=x_2/X_1=x_1) \\log \\frac{1}{P(X_2=x_2/X_1=x_1)}$$\n\n\\subsubsection{Support of a function}\n\nWhen $P(X=x)=0$, the term is not considered in calculating the entropy.\n$$ H(X)=-\\sum_{ \\{ x \\in \\mathcal{X} : P(X=x) \\neq 0\\} } P(X=x)\\log(P(X=x))$$\n\nSuppose that $f : X \\to \\mathbb{R}$ is a real-valued function whose domain is an arbitrary set $X$. The set-theoretic support of $f$, written $supp(f)$, is the set of points in $X$ where $f$ is non-zero:\n$$ supp(f)=\\{ x \\in X : f(x) \\neq 0 \\}$$\n\nNote: $P(X=x)$ can be denoted as $P_X(x)$ or $P(x)$.\n\\subsection{Important Results}\n     $$ H(X)= -\\sum_{x \\in supp(P)} P(x)\\log(P(x))$$\n     $$ H(X/Y)= \\sum_{y \\in supp(P_Y)} P_Y(y)H(X/Y=y)$$\n     $$ H(X/Y=y)= -\\sum_{x \\in supp(P_{X/Y})} P_{X/Y}(x/y)\\log(P_{X/Y}(x/y))$$\n\n\\subsection{Chain Rule}\n$$ H(X,Y)=H(X)+H(Y/X)=H(Y)+H(X/Y)$$\n\nProof:\n\\begin{multline*}\n     H(X)+H(Y/X)= -\\sum_{x \\in supp(P)} P(x)\\log(P(x)) \\\\\n      + \\sum_{x \\in supp(P_X)} P(x)\\left(\\sum_{y \\in supp(P_{Y})} P(y/x)\\log\\left(\\frac{1}{P(y/x)}\\right)\\right)\n\\end{multline*}\n\n\\begin{multline*}\n    \\qquad \\qquad \\qquad \\quad= -\\sum_{x \\in supp(P_X)}\\left(\\sum_{y \\in supp(P_Y)} P(x,y)\\right)\\log(P(x)) \\\\\n    + \\sum_{x \\in supp(P_X)} \\left(\\sum_{y \\in supp(P_{Y})} P(x,y) \\right) \\log\\left(\\frac{1}{P(y/x)}\\right)\n\\end{multline*}\n\n\n$$ \\qquad \\qquad =\\sum_{x \\in supp(P_X)} \\sum_{y \\in supp(P_{Y})} P(x,y)\\left(\\log \\frac{1}{P(x)P(y/x)} \\right)  $$\n\n$$ = - \\sum_x \\sum_y P(x,y)\\log P(x,y)=H(X,Y)$$\n\n\\subsection{Upper and lower bound of entropy}\n\n\n$$ 0 \\leq H(X) \\leq \\log |\\mathcal X|$$\n\nProof:\n\n$ H(X)= \\sum_x P(x)\\log \\frac{1}{P(x)} \\geq 0$, since every term is non-negative: $0 < P(x) \\leq 1$ for $x \\in supp(P_X)$, so $\\log \\frac{1}{P(x)} \\geq 0$.\n\n$H(X)= 0$ when $|supp(P_X)|=1$.\n\n\\subsubsection{Jensen's inequality (Concave/convex functions)}\n\n\\includegraphics[width=\\textwidth]{graph.png}\n$x_1 < x_3 <x_2 $ such that $x_3= tx_1+(1-t)x_2$\n\nIf $f(x_3) \\geq tf(x_1)+(1-t)f(x_2) \\rightarrow $ concave function.\n\nIf $f(tx_1+(1-t)x_2) \\leq tf(x_1)+(1-t)f(x_2) \\rightarrow $ convex function.\n\nHence, the graph above shows a convex function.\n\nWe shall use the concavity of the log function to obtain the proof of $ H(X) \\leq \\log |\\mathcal X| $.\n\nProof:\n$$ H(X)= \\sum_{x \\in supp(P)} P(x)\\log\\frac{1}{P(x)}$$\n\nLet $\\lambda_x$ be another notation for $P(x)$.\n\n$$H(X)= \\sum_{x \\in supp(P)} \\lambda_x \\log\\frac{1}{P(x)}$$\n\nThis is a convex combination of the values $\\log\\frac{1}{P(x)}, x \\in supp(P_X)$, as $\\sum_{x}P(x)=1$. By Jensen's inequality,\n\n$$ H(X) \\leq \\log \\left(\\sum_{x \\in supp(P_X)}\\lambda_x \\frac{1}{P(x)} \\right) \\leq \\log |supp(P_X)|$$\n$$ \\Rightarrow H(X) \\leq \\log |\\mathcal X |$$\n\n\\subsubsection{When is $H(X)$ equal to $\\log|\\mathcal X|$?}\n\nWhen Jensen's inequality is satisfied with equality, i.e.
if $f(x)$ is a straight line. The $\\log$ curve, however, is strictly concave, so equality can only come from all the points coinciding:\n\nSuppose $f(x)$ is strictly concave (or convex), $\\lambda_1, \\lambda_2 \\neq 0$ and $\\lambda_1+\\lambda_2=1$.\n\nIf $f(\\lambda_1 x_1 + \\lambda_2 x_2)=\\lambda_1 f(x_1) + \\lambda_2 f(x_2)$, then $x_1=x_2$.\n\nMore generally, if $\\lambda_i \\neq 0 $, $\\sum_{i=1}^{n} \\lambda_i=1$ and Jensen's inequality holds with equality, then $x_1=x_2=\\cdots =x_n$.\n\nApplying this to $H(X) \\leq \\log|\\mathcal X|$, from the previous proof,\n\n$$ H(X)= \\sum_{x \\in supp(P)} \\lambda_x \\log\\frac{1}{P(x)} \\leq \\log |supp(P_X)| $$\nSuppose equality holds; then by the above claim we must have\n\n$$ \\frac{1}{P(x)}= \\mathrm{const} \\quad \\forall x \\in supp(P_X)$$\n$$ \\Rightarrow \\mathrm{const.}\\; c= P(x)= \\frac{1}{|supp(P_X)|}$$\n\nIf moreover $|supp(P_X)|=|\\mathcal X|$, so that $H(X)= \\log |\\mathcal X|$, then $P(x)= \\frac{1}{|\\mathcal X|}\\; \\forall x \\in \\mathcal X$.\n\nWe have just proved that $H(X)= \\log_2 |\\mathcal X|$ can be true only when $P_X$ is uniform. Thus:\n\nLemma:\n$ H(X)= \\log_2 |\\mathcal X| $ iff $P_X$ is uniform.\n\n\\section{Relative Entropy/ Information Divergence/ Kullback-Leibler Divergence}\n\nSuppose there is a random variable $X$ for which we have two probability distributions $p_X$ \\& $q_X$.\n\nThen the RE or ID or KL is defined as:\n$$ D(p_X || q_X):=\\sum_{x \\in supp(p_X)} p(x)\\log \\frac{p(x)}{q(x)}$$\n\n($D(p_X || q_X)$ is a `kind of' a distance measure between distributions $p$ \\& $q$.)\n\n\\subsection{Lower limit of relative entropy}\n$$ D(p||q)\\geq 0$$\nProof:\n$$D(p || q)=-\\sum_{x \\in supp(p_X)} p(x)\\log \\frac{q(x)}{p(x)}$$\n$$ \\geq - \\log \\left(\\sum_{x \\in supp(p_X)} p(x) \\frac{q(x)}{p(x)} \\right)$$\n$$ = - \\log\\left(\\sum_{x \\in supp(p_X)} q(x)\\right)$$\n\n$$ \\geq 0 \\quad \\text{as} \\quad \\sum q(x) \\leq 1$$\n\n\\subsection{When is RE equal to zero?}\n\nBy applying the equality condition of Jensen's inequality:\n\n$$ \\frac{p(x)}{q(x)}= \\mathrm{const.} \\; c \\qquad \\forall x \\in supp(p_X)$$\n$$ \\Rightarrow p(x)=cq(x) \\quad \\&$$\n$$ \\sum_{x \\in supp(p_X)} p(x)= \\sum_{x \\in supp(p_X)} cq(x)$$\nThese together mean that $c=1$.\n$$\\Rightarrow p(x)=q(x) \\quad \\forall x \\in supp(p_X)$$\n$$ \\Rightarrow D(p||q)=0 \\quad \\text{iff} \\quad p_X=q_X$$\n\n\\subsection{Upper and lower limits of conditional entropy}\n\n$$ H(X/Y)= \\sum_{y \\in supp(P_Y)} P_Y(y) \\sum_{x \\in supp(P_{X/Y})}P_{X/Y}(x/y)\\log \\frac{1}{P_{X/Y}(x/y)}$$\n\n$$ 0 \\leq H(X/Y) \\leq H(X)$$\n\nProof:\n\n$H(X/Y) \\geq 0$ is true because $H(X/Y=y)\\geq 0$ \\& $P(y) \\geq 0$.\n\n$$ H(X)-H(X/Y)= \\sum_{x \\in supp(P_X)}P(x)\\log\\frac{1}{P(x)}-\\sum_{x,y}P(x,y)\\log\\frac{1}{P(x/y)}$$\n$$ =\\sum_{x,y}P(x,y)\\log\\frac{1}{P(x)}-\\sum_{x,y}P(x,y)\\log\\frac{1}{P(x/y)}$$\n$$ = \\sum_{x,y}P(x,y)\\log \\left(\\frac{P(x/y)}{P(x)}\\right)= \\sum_{x,y}P(x,y)\\log \\left(\\frac{P(x,y)}{P(x)P(y)}\\right)$$\n\nBoth $P(x,y)$ \\& $P(x)P(y)$ are valid joint distributions of $x,y$.
Hence,\n\n$$ H(X)-H(X/Y)= D(P(x,y)||P(x)P(y))$$\n$$ \\geq 0 \\qquad (as\\; D(p||q)\\geq 0 )$$\n$$ \\Rightarrow H(X/Y) \\leq H(X)$$\n\n\n\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "dd8d16932d6543f353428e7b61bbaf7a5def3538", "size": 8787, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Source/Notes_week2.tex", "max_stars_repo_name": "thundermage117/Information-Comm.-Notes", "max_stars_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Source/Notes_week2.tex", "max_issues_repo_name": "thundermage117/Information-Comm.-Notes", "max_issues_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Source/Notes_week2.tex", "max_forks_repo_name": "thundermage117/Information-Comm.-Notes", "max_forks_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2043478261, "max_line_length": 180, "alphanum_fraction": 0.6134061682, "num_tokens": 3738, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7799928951399099, "lm_q1q2_score": 0.5933021047718202}}
{"text": "\\renewcommand*\\chappic{img/satisfiability.pdf}\n\\renewcommand*\\chapquote{What idiot called them logic errors rather than bool shit?}\n\\renewcommand*\\chapquotesrc{Unknown}\n\\chapter{Satisfiability}\n\\label{ch:sat}\n%\nBoolean algebra allows us to describe functions over two-valued variables.\nSatisfiability is the question for an assignment such that a function\nevaluates to true. Satisfiability problems are solved by SAT~solvers.\nWe discuss the basic theory behind satisfiability. Because any computation\ncan be represented as satisfiability problem, we are able to verify\nwhether an algorithm can reach a certain state. In Chapter~\\ref{ch:enc},\nwe will represent a differential cryptanalysis problem such that it is\nsolvable iff the corresponding SAT~problem is satisfiable.\n\n\\section{Basic notation and definitions}\n\\label{sec:sat-intro}\n%\n\\index{Boolean function}\n\\begin{defi}[Boolean function]\n  A \\emph{Boolean function} is a mapping $h: X \\to Y$ with $X = \\left\\{0,1\\right\\}^n$\n  for $n \\in \\mathbb N_{\\geq 1}$ and $Y = \\left\\{0,1\\right\\}$.\n\\end{defi}\n\n\\index{Assignment}\n\\begin{defi}[Assignment]\n  A \\emph{$k$-assignment} is an element of $\\left\\{0,1\\right\\}^k$.\n\n  \\noindent\n  Let $f$ be some $k$-ary Boolean function.\n  An \\emph{assignment for function $f$} is any $k$-assignment.\n\\end{defi}\n\n\\index{Truth table}\n\\begin{defi}[Truth table]\n  Let $f$ be some $k$-ary Boolean function.\n  The \\emph{truth table of Boolean function $f$} assigns\n  truth value $0$ or $1$ to any assignment of $f$.\n\\end{defi}\n\nBoolean functions are characterized by their corresponding truth table.\n\n\\begin{table}[pt]\n  \\centering\n  \\subfloat[\\boolf{AND}]{%\n    \\begin{tabular}{cc|c}\n      $x_1$ & $x_2$ & $f(x_1, x_2)$ \\\\\n     \\hline\n      $1$ & $1$ & $1$ \\\\\n      $1$ & $0$ & $0$ \\\\\n      $0$ & $1$ & $0$ \\\\\n      $0$ & $0$ & $0$\n    \\end{tabular}\n  }\n  ~\n  \\subfloat[\\boolf{OR}]{%\n    \\begin{tabular}{cc|c}\n      $x_1$ & $x_2$ & $f(x_1, x_2)$ \\\\\n     \\hline\n      $1$ & $1$ & $1$ \\\\\n      $1$ & $0$ & $1$ \\\\\n      $0$ & $1$ & $1$ \\\\\n      $0$ & $0$ & $0$\n    \\end{tabular}\n  }\n  ~\n  \\raisebox{13.6pt}{%\n    \\subfloat[\\boolf{NOT}]{%\n      \\begin{tabular}{c|c}\n        $x$ & $f(x)$ \\\\\n       \\hline\n        $1$ & $0$ \\\\\n        $0$ & $1$\n      \\end{tabular}\n    }%\n  }%\n  \\caption{Truth tables for \\boolf{AND}, \\boolf{OR} and \\boolf{NOT}}\n  \\label{tab:andornot-truthtables}\n\\end{table}\n\nTable~\\ref{tab:andornot-truthtables} shows example truth tables for\nthe Boolean \\boolf{AND}, \\boolf{OR} and \\boolf{NOT} functions.\nA different definition of the three functions is given the following way:\n\n\\index{AND (Boolean function)}\n\\index{OR (Boolean function)}\n\\index{NOT (Boolean function)}\n\\begin{defi}\n  Let \\boolf{AND}, \\boolf{OR} and \\boolf{NOT} be three Boolean functions.\n  \\begin{itemize}[noitemsep,topsep=0pt]\n    \\item\n      \\boolf{AND} maps $X = \\left\\{0,1\\right\\}^2$\n      to $1$ if all values of $X$ are $1$.\n    \\item\n      \\boolf{OR} maps $X = \\left\\{0,1\\right\\}^2$\n      to $1$ if any value of $X$ is $1$.\n    \\item\n      \\boolf{NOT} maps $X = \\left\\{0,1\\right\\}^1$\n      to $1$ if the single value of $X$ is $0$.\n  \\end{itemize}\n  All functions return $0$ in the other case.\n  Those functions are denoted $a_0 \\land a_1$, $a_0 \\lor a_1$\n  and $\\neg a_0$ respectively, for input parameters $a_0$ and $a_1$.\n\\end{defi}\n\nIt is 
interesting to observe that any Boolean function can be represented\nusing only these three operators. This can be proven by complete induction\nover the number of arguments $k$ of the function.\n\nLet $k = 1$. Then we consider the two possible $1$-assignments of the single input\nvariable $x_1$ and the corresponding value of $f(x_1)$. Four truth tables are possible,\nlisted in Table~\\ref{tab:unary-f}. The description shows the corresponding\ndefinition of $f$ using \\boolf{AND}, \\boolf{OR} and \\boolf{NOT} only.\n\nNow let $g$ be some $k$-ary function. Let $(a_1, \\ldots, a_k)$ be the\n$k$ input arguments to $g$ and $x_1 \\coloneqq g(a_1, \\ldots, a_k)$.\nThen we can again look at Table~\\ref{tab:unary-f} to discover that 4 cases\nare possible: 2 cases where the return value of our new $(k+1)$-ary function\ndepends on value $x_1$ and 2 cases where the return value is constant.\nIn every case the new function can again be written using only the three operators.\nThis completes our proof.\n\n\\begin{table}[ht]\n  \\centering\n  \\subfloat[$f: x \\mapsto 1$]{%\n    \\begin{tabular}{cc}\n      $x_1$ & $f(x_1)$ \\\\\n     \\hline\n      $1$ & $1$ \\\\\n      $0$ & $1$\n    \\end{tabular}\n  }\n  ~\n  \\subfloat[$f: x \\mapsto x$]{%\n    \\begin{tabular}{cc}\n      $x_1$ & $f(x_1)$ \\\\\n     \\hline\n      $1$ & $1$ \\\\\n      $0$ & $0$\n    \\end{tabular}\n  }\n  ~\n  \\subfloat[$f: x \\mapsto \\neg x$]{%\n    \\begin{tabular}{cc}\n      $x_1$ & $f(x_1)$ \\\\\n     \\hline\n      $1$ & $0$ \\\\\n      $0$ & $1$\n    \\end{tabular}\n  }\n  ~\n  \\subfloat[$f: x \\mapsto 0$]{%\n    \\begin{tabular}{cc}\n      $x_1$ & $f(x_1)$ \\\\\n     \\hline\n      $1$ & $0$ \\\\\n      $0$ & $0$\n    \\end{tabular}\n  }%\n  \\caption{Unary $f$ and its four possible cases}\n  \\label{tab:unary-f}\n\\end{table}\n\nBoolean functions have an important property which is described\nin the following definition:\n\n\\index{Satisfiability}\n\\index{Assignment}\n\\index{Model}\n\\begin{defi}\n  A Boolean function $f$ is \\emph{satisfiable} iff there exists at least one\n  input $x \\in X$ such that $f(x) = 1$.\n  Every input $x \\in X$ satisfying this property is called a \\emph{model}.\n\\end{defi}\n\nThe corresponding tool to determine satisfiability is defined as follows:\n\n\\index{SAT solver}\n\\begin{defi}\n  A \\emph{SAT solver} is a tool to determine satisfiability (SAT or UNSAT)\n  of a Boolean function. If the function is satisfiable, it returns some model.\n\\end{defi}\n\n\\subsection{Computational considerations}\n\\label{sec:sat-complexity}\n%\nGenerically, determining satisfiability requires checking up to $2^n$ assignments of the $n$ Boolean variables.\n\nLet $n$ be the number of variables of a Boolean function.\nNo known algorithm exists to determine satisfiability in polynomial runtime.\nThis means no algorithm solves the SAT problem with runtime behavior\nwhich depends polynomially on the growth of $n$.\n%\n\\index{Unit propagation}\nHowever, SAT solvers can take advantage of the problem's description.\nFor example, consider function $f$:\n\\begin{align} f(x_0, x_1, x_2) &= x_0 \\land (\\neg x_1 \\lor x_2) \\label{eq:3f} \\end{align}\nInstead of trying all possible 8~cases for 3~Boolean variables,\nwe can immediately see that $x_0$ is required to be $1$.\nSo we don't need to test $x_0 = 0$ and can skip 4~cases.\nThis particular strategy is called \\emph{unit propagation}.\n\n\\subsection{SAT competitions}\n\\label{sec:sat-competitions}\n%\nSAT research is heavily concerned with finding fast heuristics\ndetermining (un)satisfiability.
\\subsection{SAT competitions}\n\\label{sec:sat-competitions}\n%\nSAT research is heavily concerned with finding fast heuristics for\ndetermining (un)satisfiability. At regular intervals,\nSAT competitions~\\cite{satcomp} take place to challenge\nSAT solvers with a set of benchmarks. The committee awards the most successful\nSAT solvers, i.e. those solving the most problems within a given time frame.\n\nIn 2014, lingeling by Armin Biere won first prize in\nthe Application benchmarks track and second prize in the Hard Combinatorial benchmarks\ntrack for SAT and UNSAT instances, respectively. Its parallelized sibling plingeling\nand Cube \\& Conquer sibling treengeling won prizes in parallel settings.\nIn the most recent 2016 competition, lingeling won bronze in the Main track\nfor SAT+UNSAT instances.\n\nIn Chapter~\\ref{ch:results}, we will discuss runtime results achieved by\n(among others) those SAT solvers.\n\n\\section{The DIMACS de-facto standard}\n\\label{sec:sat-dimacs}\n%\n\\index{Conjunction}\n\\index{Disjunction}\n\\index{Literal}\n\\index{Positive literal}\n\\index{Negative literal}\n\\index{Conjunctive Normal Form}\n\\begin{defi}\n  A SAT problem is given in \\emph{Conjunctive Normal Form} (CNF) if\n  the problem is defined as a conjunction of disjunctions of literals.\n\n  A \\emph{conjunction} is a sequence of Boolean functions combined using\n  a logical \\boolf{AND}. A \\emph{disjunction} is a sequence of Boolean functions\n  combined using a logical \\boolf{OR}. A \\emph{literal} is a Boolean variable\n  (\\emph{positive}) or its negation (\\emph{negative}).\n\\end{defi}\n\nA simple example for a SAT problem in CNF is the exclusive \\boolf{OR} (\\boolf{XOR}).\nIt takes two Boolean values $a$ and $b$ as arguments and returns true\nif and only if the two arguments differ:\n{\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n\\setlength{\\abovedisplayshortskip}{0pt}\n\\setlength{\\belowdisplayshortskip}{0pt}\n\\begin{align} (a \\lor b) \\land (\\neg a \\lor \\neg b) \\label{eq:xor}\\end{align}\n}\nIt consists of one conjunction (denoted $\\land$) of two disjunctions\n(denoted $\\lor$) of literals (denoted $a$ and $b$ where prefix $\\neg$ represents\nnegation). This structure constitutes a CNF.\n\n\\index{Disjunctive Normal Form}\nAnalogously, we define a \\emph{Disjunctive Normal Form} (DNF) as a disjunction\nof conjunctions of literals. By De Morgan's laws, the negation of a CNF is in DNF,\nbecause literals are negated and conjunctions become disjunctions and vice versa.\n
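\nFor instance (our own illustration), negating the \\boolf{XOR} formula from\nDisplay~\\eqref{eq:xor} yields a DNF:\n\\begin{align*}\n\\neg\\bigl((a \\lor b) \\land (\\neg a \\lor \\neg b)\\bigr)\n= \\neg(a \\lor b) \\lor \\neg(\\neg a \\lor \\neg b)\n= (\\neg a \\land \\neg b) \\lor (a \\land b)\n\\end{align*}\nwhich is exactly the equivalence of $a$ and $b$ written in DNF.\n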
\\begin{theorem}\n  \\label{thm:all-cnf}\n  Every Boolean function can be represented as CNF.\n\\end{theorem}\n\nTheorem~\\ref{thm:all-cnf} is easy to prove.\nConsider the truth table of an arbitrary Boolean function $f$ with $k$ input arguments\nand $j$ rows with output value false. We represent $f$ as CNF.\n\nConsider Boolean variables $b_{1}, \\ldots, b_{k}$, one per input argument.\nFor every row $i$ with output value false and assignment $r_i = (r_{i,1}, \\ldots, r_{i,k})$, add one disjunction to the CNF.\nThis disjunction contains the literal $b_{l}$ if $r_{i,l}$ is false\nand the literal $\\neg b_{l}$ if $r_{i,l}$ is true.\nHence every disjunction is false exactly for its row's assignment, and the\nconjunction of all $j$ disjunctions is false exactly for the false rows of the truth table.\n\nSince $f$ is an arbitrary $k$-ary Boolean function, we have proven that\nany Boolean function can be represented as CNF.\n\nSAT problems are usually represented in the DIMACS de-facto standard.\nConsider a SAT problem in CNF with \\emph{nbclauses} clauses and\nenumerate all variables from 1 to \\emph{nbvars}. A DIMACS file is an ASCII text\nfile. Lines starting with \\enquote{\\texttt{c}} are skipped (comment lines).\nThe first remaining line has to begin with \\enquote{\\texttt{p cnf}} followed by\n\\emph{nbvars} and \\emph{nbclauses} separated by spaces (header line).\nAll following non-comment lines are space-separated indices of Boolean variables,\noptionally prefixed by a hyphen to denote negation. One line represents one clause and\nmust be terminated with a zero after a space. All clauses are conjoined\nto form a CNF.\n\nVariations of the DIMACS de-facto standard also allow multiline clauses (the\nzero character constitutes the end of a clause) or arbitrary whitespace instead of\nspaces. Another variant terminates DIMACS files once it encounters a single\npercent sign on a line. The syntactical details are individually published\non a per-competition basis.\n\n\\renewcommand{\\lstlistingname}{Listing}  % REMARK otherwise the listing name is rendered in Japanese\n\\begin{lstlisting}[caption={CNF of the \\boolf{XOR} in Display~\\eqref{eq:xor}, encoding $a$ as variable 1 and $b$ as variable 2}]\np cnf 2 2\n1 2 0\n-1 -2 0\n\\end{lstlisting}\n
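\nA small Python sketch (again ours, purely illustrative) that serializes a\nclause list into this format:\n\n\\begin{lstlisting}[language=Python, caption={Illustrative DIMACS serialization}]\n# Write a CNF, given as a list of integer-literal clauses,\n# in the DIMACS format described above.\ndef to_dimacs(clauses, nbvars, comment=None):\n    lines = []\n    if comment:\n        lines.append('c ' + comment)\n    lines.append('p cnf %d %d' % (nbvars, len(clauses)))\n    for clause in clauses:\n        lines.append(' '.join(str(lit) for lit in clause) + ' 0')\n    return '\\n'.join(lines)\n\n# The XOR of Display (eq:xor), with a -> 1 and b -> 2:\nprint(to_dimacs([[1, 2], [-1, -2]], nbvars=2, comment='XOR of a and b'))\n\\end{lstlisting}\n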
A \\emph{used variable} is a variable which\n  occurs at least once in the CNF.\n\n  The \\emph{literal frequency} is the number of occurences of a literal in the CNF divided by the number of clauses declared.\n  Equivalently \\emph{variable frequency} defines the number of variable occurences divided by the number of clauses declared.\n\\end{defi}\n\n\\index{Clause length}\n\\index{Tautological clause}\n\\begin{defi}\n  The \\emph{clause length of a clause} is the number of literals contained.\n  A clause is called \\emph{tautological} if a literal and its negated literal occurs in it.\n\\end{defi}\n\nA few basic properties hold in terms of satisfiability. For example, existential literals\nare interesting, because they can be set to true and make one clause immediately satisfied\nwithout influencing other clauses.\n\n\\section{Basic SAT solving techniques}\n\\label{sec:sat-solving}\n%\n\\index{Equisatisfiability}\n\\begin{defi}\n  Given two CNFs $A$ and $B$, they are called \\emph{equisatisfiable}\n  iff $A$ is satisfiable iff $B$.\n\\end{defi}\n\n\\subsection{Boolean constraint propagation (BCP)}\n\\label{sec:sat-bcp}\n%\n\\index{Boolean constraint propagation}\nOne of the most basic techniques to SAT solving is \\emph{Boolean Constraint Propagation},\nalso called \\emph{unit propagation}.\nIt is so fundamental that SATzilla, introduced in Section~\\ref{sec:features-related},\napplies it immediately before looking at SAT features.\n\nLet $l$ be the literal of a unit clause in a CNF. Remove any clause containing\n$l$ and replace any occurences of $-l$ from the CNF. It is easy to see, that\nthe resulting CNF is equisatisfiable, because due to the unit clause $l$ must\nbe true. So any clause containing $l$ is satisfied and $-l$ yields false,\nwhere ($A \\lor $ false) is equivalent to $(A)$ for any Boolean function $A$.\n\n\\subsection{Watched Literals}\n\\label{sec:sat-wl}\n%\n\\index{Watched Literals}\nWatched Literals are another fundamental concept in SAT solving. It is very\nexpensive to check satisfiability of all clauses for every assigned value\nof a literal. Watched Literals is a neat technique to reduce the number of checks.\n\nIn each clause two unassigned literals are declared to be \\enquote{watched}.\nStructurally it is implemented the other way around:\nA clauses watch list is maintained per literal.\nNow as long as at least two literals are unassigned, the clause cannot become\nfalse (recognize that a clause is false iff all literals are false).\nTherefore the clause does not need to be visited as long as at least one\nunassigned literal exist. This implies the following decision procedure:\n\\begin{itemize}\n  \\item If all but one literal is false, propagate the remaining literal to be true.\n  \\item If all literals are false, report UNSAT.\n  \\item If any literal becomes true, watched literals do not change.\n  \\item Else replace the literal on the watch list with a remaining unassigned literal.\n\\end{itemize}\n\n%Consider the solver assigns a value for literal $l$. 
This empirical approach was established with the Chaff and zChaff SAT\nsolvers~\\cite{moskewicz2001chaff} and has proven useful in various variants.\n\n\\subsection{Remark}\n\\label{sec:sat-remark}\n%\nThe previous two techniques illustrate basic approaches, but actual SAT\nsolving research builds upon decades of development tuning individual SAT solvers.\nMemory models and concurrency strategies lead to fundamentally different runtime\nbehaviors of SAT solvers.\n\nAs such, the initial idea to develop a dedicated SAT solver specifically designed for\nsolving problems in differential cryptanalysis was dropped, because the development time\nwas expected to be too long for a master's thesis to be fruitful. Instead, we focused on popular\nand established SAT solvers of the SAT community.\n\n\\section{SAT solvers in use}\n\\label{sec:sat-solvers}\n%\n\\index{lingeling}\nIn this thesis, we considered several SAT solvers.\nThey have been selected either by their popularity\nor their good results at previous SAT competitions:\n\\begin{itemize}\n  \\item MiniSat 2.2.0\n  \\item CryptoMiniSat versions 4.5.3 and~5\n  \\item treengeling, lingeling and plingeling, in versions:\n    \\begin{itemize}\n      \\item lingeling ats1\n      \\item lingeling ats1o1\n      \\item lingeling ats1o2\n      \\item lingeling ats1o4\n      \\item lingeling baz\n    \\end{itemize}\n  \\item glucose version 4.0 and glucose syrup version 4.0\n\\end{itemize}\n\nThis means the hash collision attacks we implemented have been run with\nthese SAT solvers. The results are discussed in Chapter~\\ref{ch:results}\nand a more comprehensive list is provided in Appendix~\\ref{app:runtimes}.\n\nMiniSat is known as the \\enquote{Swiss army knife of SAT solving}, meaning that\nit includes many well-established techniques that can be built upon.\nSAT competitions 2009, 2011, 2013 and 2014 included a special MiniSat\n\\enquote{hack track} where participants are asked to modify MiniSat to prove the\nbest performance with as little change to the MiniSat codebase as possible.\nEven though it is not one of the fastest SAT solvers today, it provides\na nice codebase to experiment with.\n\nCryptoMiniSat is a derivative of MiniSat, which was originally modified\nfor cryptographic problems. It features XOR clauses, meaning that\nbinary clauses of structure $a \\oplus b$ can be added and will be resolved\nusing Gaussian elimination.\nRecognize that our encoding\nintroduced in Section~\\ref{sec:enc-algotocnf} uses equivalence to model\nassignment, and as such only clauses of structure $r = a \\oplus b$ emerge,\nrendering this feature impractical to use.\n
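\nFor illustration (our own example, using the standard CNF encoding of XOR):\nwithout native XOR clauses, a single equivalence $r = a \\oplus b$ has to be\nencoded as four ordinary clauses,\n\\begin{align*}\n(\\neg r \\lor a \\lor b) \\land (\\neg r \\lor \\neg a \\lor \\neg b)\n\\land (r \\lor \\neg a \\lor b) \\land (r \\lor a \\lor \\neg b),\n\\end{align*}\neach forbidding one assignment violating the equivalence.\n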
Glucose was the gold winner of the 2011 SAT+UNSAT application track.\nModifications of glucose have also ranked high throughout the years of SAT competitions.\nGlucose is a sequential SAT solver, whereas glucose syrup is its parallelized\nversion.\n\nLingeling is a SAT solver developed by Armin Biere. Lingeling has been the\nwinner of several tracks in the SAT competitions 2011 to 2016. For example,\nit won gold in the SAT+UNSAT application track in 2014.\nLingeling has two siblings: plingeling and treengeling. plingeling is a\nparallelized version of lingeling. As such it executes in multiple threads\nand shares units and equivalences between those instances. treengeling\nis a Cube \\& Conquer solver, meaning it partitions the problem into many\nsubproblems and solves them individually.\n\nLingeling releases ats1o1, ats1o2 and ats1o4 are non-public, experimental\nreleases of lingeling.\nThey have been developed in private communication with Armin Biere.\nOur main goal was to achieve a separation between two sets of variables:\nfirst, all variables of the first set need to be assigned in the best possible\nway; afterwards, the second set of variables is considered.\nSpecifically, the variables modelling the differences between the two hash\nalgorithm instances should constitute the first set, as discussed in\nChapter~\\ref{ch:enc}.\n\nLingeling~ats1o1 implements the strategy to guess difference variables first (with Boolean value false), while\nthe usual heuristics apply to all other variables.\nOur intermediate results with incomplete CNF files showed a high number of restarts.\nTherefore ats1o2 additionally disables backjumping, which skips decisions for important variables.\nFinally, ats1o4 is not expected to behave differently from ats1o2; it only provides further debugging information.\n\nThe SAT solvers have generally been run several times and without any special options,\nwith the following exceptions:\n\\begin{itemize}\n  \\item MiniSat was run with \\texttt{\\textendash{}\\textendash{}pre=once}, as it is generally recommended to run it with the builtin preprocessor.\n  \\item Lingeling has generally been run with\n    \\texttt{\\textendash{}\\textendash{}phase=0} (the default) and with\n    \\texttt{\\textendash{}\\textendash{}phase=-1} to prefer false as the initial assignment of literals.\n    Lingeling ats1o1, however, implements this preference with a more forceful strategy.\n\\end{itemize}\n\nPreprocessing is a difficult topic on its own. Sometimes preprocessing can provide a speedup\nbefore actually solving the problem, but most SAT solvers implement preprocessing strategies\nthemselves and run them repeatedly when solving the problem. 
Chapter~\\ref{ch:results}\npresents runtime results for that issue.\n", "meta": {"hexsha": "87d5d536f6dc32f495d9ab991115266afd1b02b2", "size": 21189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sat.tex", "max_stars_repo_name": "prokls/master_iaik", "max_stars_repo_head_hexsha": "30703774eaac6a514add2424f5c408a549cd4db1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-05-13T21:45:11.000Z", "max_stars_repo_stars_event_max_datetime": "2016-05-13T21:45:11.000Z", "max_issues_repo_path": "sat.tex", "max_issues_repo_name": "prokls/master_iaik", "max_issues_repo_head_hexsha": "30703774eaac6a514add2424f5c408a549cd4db1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sat.tex", "max_forks_repo_name": "prokls/master_iaik", "max_forks_repo_head_hexsha": "30703774eaac6a514add2424f5c408a549cd4db1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.1436893204, "max_line_length": 141, "alphanum_fraction": 0.7456227288, "num_tokens": 5923, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7799928951399098, "lm_q1q2_score": 0.5933021047718201}}
{"text": "\\documentclass{homework}\n\\course{Math 5522H}\n\\author{Alex Li}\n\\input{preamble}\n\n\\begin{document}\n\\maketitle\n\n\\begin{inspiration}\nI don't think \\byline{Warren Buffett}\n\\end{inspiration}\n\n\n\\section{Terminology}\n\n\\begin{problem}\n  What is a half-space?  Define convex hull.\n  \\end{problem}\n  \\begin{solution}\n  A (open) half-space is a set of points in $\\C$ is the set of points on one side of a line, i.e. $H=\\{z: z\\cdot u > b\\}$ for a unit vector $u\\in\\C$, $b\\in\\Z$\n   \n   The convex hull of a set of points is the intersection of all half-spaces containing every point of the set.\n\n   \\end{solution}\n   \\begin{problem}\n   Define the Wirtinger partial derivatives $\\displaystyle\\frac{\\partial}{\\partial z}$ and $\\displaystyle\\frac{\\partial}{\\partial \\conj{z}}$.\n   \\end{problem}\n   \\begin{solution}\n   For a function $f$ with domain $\\C = \\{x + iy|x, y\\in\\R\\}$, we define the Wirtinger partial derivatives as follows:\n   \\[\\pfrac{}{z} = \\frac{1}{2}(\\pfrac{}{x} - i\\pfrac{}{y})\\]\n   \\[\\pfrac{}{\\conj{z}} = \\frac{1}{2}(\\pfrac{}{x} + i\\pfrac{}{y})\\]\n   \\end{solution}\n\n   \\begin{problem}\n     What are the Cauchy-Riemann equations?\n     \\end{problem}\n     We can uniquely represent a complex function of $z = x + iy$ as a sum of two real functons of two variables, $u$ and $v$, to get the equation  $f(z) = u(x, y) + iv(x, y).$\n     The Cauchy-Riemann equations are the following\n     \\begin{solution}\n     \\[\\pfrac{u}{x} = \\pfrac{v}{y}\\label{cauchy-riemann-x-real}\\]\n     \\[\\pfrac{u}{y} = -\\pfrac{v}{x}\\label{cauchy-riemann-y-real}\\]\n     \\end{solution}\n     These two equations are neccessary for differentiability of f and, together with continuity of the partial derivatives, sufficient. \n\n     \\section{Numericals}\n\n     \\begin{problem}\\label{cauchy-riemann-polar}\n     What are the Cauchy-Riemann equations\\ldots in polar coordinates?\n     \\end{problem}\n     \\begin{solution}\n     Suppose that $f(re^{i\\theta}) = u(r, \\theta) + iv(r, \\theta)$ is a complex differentiable  function. Let's ignore the origin since $\\theta$ isn't even defined, this formula can apply everywhere else. Then differenting along the first coordinate,\n     \\begin{align*}\n     f'(r_0e^{i\\theta_0}) &=  \\lim_{r\\to r_0} \\frac{u(r, \\theta_0) + iv(r, \\theta_0) - u(r_0, \\theta_0) - iv(r_0, \\theta_0)}{re^{i\\theta_0} - r_0e^{i\\theta_0}}\\\\\n     &= \\frac{1}{e^{i\\theta_0}}\\lim_{r\\to r_0} \\frac{u(r, \\theta_0) - u(r_0, \\theta_0)}{r - r_0} + \\frac{i}{e^{i\\theta_0}} \\lim_{r\\to r_0} \\frac{v(r, \\theta_0) - v(r_0, \\theta_0)}{r - r_0}\n     \\end{align*}\n     \\begin{align}\\label{first_rc_polar}\n     f'(r_0e^{i\\theta_0}) &= \\frac{u_r + iv_r}{e^{i\\theta}}\n     \\end{align}\n     Next, we differentiate along the second coordinate, $\\theta$. 
\n\\begin{align*}\nf'(r_0e^{i\\theta_0}) &=  \\lim_{\\theta\\to \\theta_0} \\frac{u(r_0, \\theta) + iv(r_0, \\theta) - u(r_0, \\theta_0) - iv(r_0, \\theta_0)}{r_0(e^{i\\theta} - e^{i\\theta_0})}\\\\\n&= \\lim_{\\theta\\to \\theta_0} \\frac{u(r_0, \\theta) - u(r_0, \\theta_0)}{r_0(e^{i\\theta} - e^{i\\theta_0})} + i \\lim_{\\theta\\to \\theta_0} \\frac{v(r_0, \\theta) - v(r_0, \\theta_0)}{r_0(e^{i\\theta} - e^{i\\theta_0})}\n\\end{align*}\nWe want to compare this to the real partial derivatives in the direction $\\theta$:\n\\begin{gather*}\n\\pfrac{u(r_0, \\theta_0)}{\\theta} = \\lim_{\\theta\\to \\theta_0} \\frac{u(r_0, \\theta) - u(r_0, \\theta_0)}{\\theta - \\theta_0}\\\\\n\\pfrac{v(r_0, \\theta_0)}{\\theta} = \\lim_{\\theta\\to \\theta_0} \\frac{v(r_0, \\theta) - v(r_0, \\theta_0)}{\\theta - \\theta_0}\n\\end{gather*}\nIn the limit, the ratio between the two denominators can be computed by noticing that $\\lim_{\\theta\\to\\theta_0}\\frac{e^{i\\theta} - e^{i\\theta_0}}{i\\theta - i\\theta_0}$ is the derivative of $e^z$ at $i\\theta_0$, namely $e^{i\\theta_0}$:\n\\[\\lim_{\\theta \\to \\theta_0} \\frac{\\theta - \\theta_0}{r_0(e^{i\\theta} - e^{i\\theta_0})} = \\frac{1}{r_0}\\lim_{\\theta \\to \\theta_0} \\frac{i\\theta - i\\theta_0}{e^{i\\theta} - e^{i\\theta_0}}\\cdot\\frac{1}{i} = \\frac{1}{r_0}\\cdot\\frac{1}{e^{i\\theta_0}}\\cdot\\frac{1}{i}\n\\]\nThus we see that\n\\begin{align}\\label{second_rc_polar}\nf'(r_0e^{i\\theta_0}) = \\frac{u_\\theta + iv_\\theta}{ir_0e^{i\\theta_0}} = \\frac{v_\\theta - iu_\\theta}{r_0e^{i\\theta_0}}\n\\end{align}\nSetting real and imaginary parts equal in \\ref{first_rc_polar} and \\ref{second_rc_polar},\n\\begin{align*}\nr_0u_r = v_\\theta \\\\\nr_0v_r = - u_\\theta\n\\end{align*}\n\\end{solution}\n\\begin{problem}\nFor a natural number $n$ define $f : \\mathbb{C} \\to \\mathbb{C}$ by\n\\[\nf(r \\cos \\theta + i \\, r\\sin \\theta) = r^n \\cos \\left( n\\theta \\right) + i \\, r^n \\sin \\left( n\\theta \\right).\n\\]\nUse \\ref{cauchy-riemann-polar} to verify that $f$ satisfies the\nCauchy-Riemann equations.\n\\end{problem}\n\\begin{solution}\nLet $u=r^n\\cos(n\\theta)$, $v = r^n\\sin(n\\theta)$, and we compute the four partials.\n\\begin{align*}\nu_r &= nr^{n-1}\\cos(n\\theta)\\\\\nv_r &= nr^{n-1}\\sin(n\\theta)\\\\\nu_\\theta &= -nr^{n}\\sin(n\\theta) = -rv_r\\\\\nv_\\theta &= nr^{n}\\cos(n\\theta) = ru_r\\\\\n\\end{align*}\n\\end{solution}\n
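\nAs a quick consistency check (our own addition), \\ref{first_rc_polar} also recovers the familiar derivative of $z^n$:\n\\[\nf'(re^{i\\theta}) = \\frac{u_r + iv_r}{e^{i\\theta}} = \\frac{nr^{n-1}\\left(\\cos(n\\theta) + i\\sin(n\\theta)\\right)}{e^{i\\theta}} = nr^{n-1}e^{i(n-1)\\theta} = nz^{n-1}.\n\\]\n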
\\begin{problem}\nOn a certain domain, define $f(z) = \\log \\abs{z} + i \\cdot \\Arg z$\nand verify that $f$ satisfies the Cauchy-Riemann equations.\n\\end{problem}\n\\begin{solution}\nLet $z=r\\cos{\\theta} + ir\\sin{\\theta}$ with $\\theta \\in (-\\pi, \\pi)$, so $f(z) = \\Log r + i \\theta$. We validate the Cauchy-Riemann equations on the domain $\\C\\setminus(-\\infty, 0]$ by first noting that $u$ and $v$ are differentiable on this domain, then checking the equations in polar coordinates:\n\\begin{align*}\nu_r &= \\frac{1}{r} &  u_\\theta &= 0\\\\\nv_r &= 0 & v_\\theta &= 1\n\\end{align*}\nThus $rv_r = -u_\\theta$ and $ru_r = v_\\theta$.\n\\end{solution}\n\n\\begin{problem}\nCompute $\\displaystyle\\frac{\\partial}{\\partial z} \\left( z \\conj{z} \\right)$.\n\\end{problem}\n\\begin{solution}\n\\begin{align*}\n\\pfrac{}{z} \\left( z \\conj{z} \\right) &= \\frac{1}{2}\\left(\\pfrac{}{x}(x^2+y^2) - i\\pfrac{}{y}(x^2+y^2)\\right)\\\\\n&= x - iy\n\\end{align*}\nIf we pretend $z$ and $\\conj z$ are independent and differentiate, we get the same answer:\n\\begin{align*}\n\\pfrac{}{z} \\left( z \\conj{z} \\right) &= \\conj{z} = x - iy\n\\end{align*}\n\\end{solution}\n\n\\begin{problem}\\label{harmonic-conjugate}\nLet $u(x,y) = e^y \\sin x$.  Find $v(x,y)$ so that $u$ and $v$ satisfy the Cauchy-Riemann equations.  Such a function $v$ is a\n\\textbf{harmonic conjugate} of $u$.\n\\end{problem}\n\\begin{solution}\nFirst, let's compute the partial derivatives of $v$ with the Cauchy-Riemann equations.\n\\[u_x = e^y\\cos{x} = v_y\\]\n\\[-u_y = -e^y\\sin{x} = v_x\\]\nIntegrating,\n\\[v = \\int v_y \\, dy = e^y\\cos{x} + C(x)  = e^y\\cos{x} + C(y) = \\int v_x \\, dx\\]\nSo $v = e^y\\cos{x} + C$ for any real number $C$.\n\\end{solution}\n
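\nAs a sanity check (our own remark): with this $v$ and $C = 0$, the resulting function is indeed holomorphic, since\n\\[\nu + iv = e^y\\sin x + ie^y\\cos x = ie^y(\\cos x - i\\sin x) = ie^{y-ix} = ie^{-iz}.\n\\]\n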
\\section{Exploration}\n\n\\begin{problem}\\label{cayley-transform}Relate the upper half-plane\n$H = \\{ z \\in \\mathbb{C} : \\Imag z > 0 \\}$ and the open disk\n$D = \\{ z \\in \\mathbb{C} : \\abs{z} < 1 \\}$ by finding $a,b,c,d \\in \\C$ so that the M\\\"obius transformation\n\\[f(z) = \\frac{az + b}{cz + d}\\] yields a bijection $f : H \\to D$.\n\\end{problem}\n\\begin{solution}\nMaybe it will work if the transformation sends the real line to the unit circle, with say $f(0) = -i$, $f(1) = 1$, $f(\\infty) = i$. Solving for $a,b,c,d$, we get the following M\\\"obius transformation:\n$f(z) = \\frac{iz+1}{z + i}.$\n\n% https://en.wikipedia.org/wiki/M%C3%B6bius_transformation#Composition_of_simple_transformations\nWe can check that this M\\\"obius transformation can be written as $f = f_4\\circ f_3 \\circ f_2 \\circ f_1$, where $f_1(z) = z + i$ (mapping $H$ to the half-plane above the line $\\Imag z = 1$), $f_2(z) = 1/z$ (mapping the image of $f_1$ to the open disk of radius $\\frac{1}{2}$ between $0$ and $-i$), $f_3(z) = 2z$ (mapping the image of $f_2$ to the disk between $0$ and $-2i$) and $f_4(z) = z + i$ (mapping the image of $f_3$ to $D$).\n\nIt's easy to see that each of the $f_i$ is a bijection onto its image, so $f$ must be bijective as well.\n\\end{solution}\n\n\\begin{problem}\\label{harmonic-necessary}Given $u : \\mathbb{R}^2 \\to \\mathbb{R}$, is it always possible to\nfind a function $v : \\mathbb{R}^2 \\to \\mathbb{R}$ so that $u$ and\n$v$ satisfy the Cauchy-Riemann equations?\n\\end{problem}\n\\begin{solution}\nObviously not if $u$ isn't differentiable. Let's assume that $u$ is twice differentiable with equal mixed partials. Then we can solve for $v$ like in \\ref{harmonic-conjugate}:\n\\[v = \\int u_x \\, dy = -\\int u_y \\, dx.\\]\nThus $v_{xy} = -u_{yy}$ and $v_{yx} = u_{xx}$, and we need $u$ to satisfy $u_{xx} = -u_{yy}$. If that's true, then integrating $u_x\\,dy$ and $-u_y\\,dx$ gives the same answer for $v$, so $u$ has a harmonic conjugate.\n\\end{solution}\n
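\nA concrete non-example (our own illustration): $u(x,y) = x^2 + y^2$ is smooth, but\n\\[\nu_{xx} + u_{yy} = 2 + 2 = 4 \\neq 0,\n\\]\nso no harmonic conjugate $v$ exists; harmonicity of $u$ is genuinely necessary.\n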
\\begin{problem}\nFor $p \\in \\C[x]$, show that the convex hull of the zeroes of $p$\ncontains the zeroes of $p'$.  This is the \\textbf{Gauss-Lucas theorem}.\n\\end{problem}\n\\begin{solution}\nBy the fundamental theorem of algebra, we can split the polynomial $p$ into linear factors $p = \\prod_i(z - r_i)$ (up to a constant factor, which changes neither the zeroes of $p$ nor $p'/p$).\nNow notice that for functions $f_1(z), f_2(z), \\ldots, f_n(z)$, the product rule gives\n\\begin{align}\\label{log-derivative-product-to-sum}\n\\frac{\\frac{d}{dz}\\prod_i f_i}{\\prod_i f_i} = \\frac{\\sum_i f_i'\\prod_{j\\neq i}f_j}{\\prod_i f_i} = \\sum_i\\frac{ f_i'}{f_i}\n\\end{align}\nUsing this fact, we can compute\n\\[\n\\frac{\\frac{dp}{dz}}{p} = \\frac{\\frac{d}{dz}\\prod_i (z - r_i)}{\\prod_i (z - r_i)} = \\sum_i \\frac{\\frac{d}{dz}(z-r_i)}{z - r_i} = \\sum_i \\frac{1}{z - r_i}\n\\]\n\nLet's consider an arbitrary half-space $H=\\{z: z\\cdot u > b\\}$, for a unit vector $u\\in\\C$ and $b\\in\\R$, which contains none of the zeros of $p$. Then for points $z$ in this half-space, $(z-r_i)\\cdot u = z\\cdot u - r_i \\cdot u$.\nHere $z\\cdot u > b$ since $z\\in H$, and $r_i \\cdot u \\leq b$ since $r_i\\not\\in H$, so $(z-r_i)\\cdot u > 0$ and thus $\\frac{1}{\\conj{z-r_i}}\\cdot u = \\frac{(z-r_i)\\cdot u}{|z-r_i|^2} > 0$ (note the conjugate, since $1/\\conj{w} = w/|w|^2$).\n\nBy linearity, \\[0 < \\sum_i \\frac{1}{\\conj{z - r_i}} \\cdot u  = \\conj{\\left(\\frac{\\frac{dp}{dz}}{p}\\right)}\\cdot u,\\] so $\\frac{dp}{dz} \\neq 0$ on this half-space (as $p$ is nonzero there). Since the half-space was chosen arbitrarily among the half-spaces containing none of the zeros of $p$, there are no zeros of $p'$ outside of the convex hull of the zeros of $p$.\n\\end{solution}\n\\begin{problem}\\label{cauchy-riemann-again}\nSuppose the smooth function $f : \\C \\to \\C$ satisfies the\nCauchy-Riemann equations.  Does $f'$ also satisfy the Cauchy-Riemann\nequations?\n\\end{problem}\n\\begin{solution}\nLet $f = u(x,y) + iv(x,y).$\nTaking the derivative of $f$ in the $x$ direction, $f' = u_x + iv_x,$ so to show that $f'$ satisfies the Cauchy-Riemann equations we want to show that $u_{xx}=v_{xy}$ and $u_{xy} = -v_{xx}.$ To do this, notice that since $f$ satisfies the equations, $u_{x} = v_y$ and $u_y=-v_x$. Taking the $x$ derivative of these two equations and using the equality of mixed partials shows exactly that $u_{xx}=v_{xy}$ and $u_{xy} = -v_{xx}.$ So $f'$ satisfies the Cauchy-Riemann equations.\n\\end{solution}\n
                                                     \\partial_1\\gamma_1\\\\\n                                                             \\partial_2\\gamma_1\\\\\n                                                             \\end{pmatrix}\n                                                             = \n                                                             \\begin{pmatrix}\n                                                             u_x & u_y\\\\\\\n                                                             -u_y & u_x\\\\\n                                                             \\end{pmatrix}\n                                                             \\begin{pmatrix}\n                                                             \\partial_1\\gamma_1\\\\\n                                                             \\partial_2\\gamma_1\\\\\n                                                             \\end{pmatrix}\\]\n                                                             We can choose some choice of $R, \\theta$ based on the values of $u_x, u_y$ so that we get\n                                                             \\[\\frac{d}{dt} f(\\gamma_1(t)) = \n                                                             R\n                                                             \\begin{pmatrix}\n                                                             \\cos{\\theta} & \\sin{\\theta}\\\\\n                                                             -\\sin{\\theta} & \\cos{\\theta}\\\\\n                                                             \\end{pmatrix}\n                                                             \\begin{pmatrix}\n                                                             \\partial_1\\gamma_1\\\\\n                                                             \\partial_2\\gamma_1\\\\\n                                                             \\end{pmatrix}\n                                                             \\]\n                                                             Thus, applying $f$ to $\\gamma_1$ will give us a derivative vector pointing in the direction $\\arg(\\gamma_1) + \\theta,$ and similarly applying $f$ to $\\gamma_2$ will result in a derivative pointing in the direction $\\arg(\\gamma_2)+\\theta$. Thus the argle between $\\gamma_1$ and $\\gamma_2$ at the point $(0,0)$ is equal to the angle after applying $f$.\n                                                             \\end{solution}\n                                                             \\begin{problem}\n                                                               Suppose $f : \\C \\to \\C$ is smooth and holomorphic.  Show that the\n                                                                 real part of $f$ is harmonic (cf.~\\ref{harmonic-function}).\n                                                                 \\end{problem}\n                                                                 \\begin{solution}\n                                                                 We can solve similarly to \\ref{cauchy-riemann-again}. 
\\begin{problem}\nSuppose $f : \\C \\to \\C$ is smooth and holomorphic.  Show that the\nreal part of $f$ is harmonic (cf.~\\ref{harmonic-function}).\n\\end{problem}\n\\begin{solution}\nWe can solve this similarly to \\ref{cauchy-riemann-again}. Let $f=u+iv$; since $f$ is holomorphic it satisfies the Cauchy-Riemann equations and thus\n\\begin{align*}\nu_x=\\phantom{-}v_y &\\implies u_{xx} = \\phantom{-}v_{yx}\\\\\nu_y=-v_x &\\implies u_{yy} = -v_{xy}\n\\end{align*}\nSince $f$ is smooth, the mixed partials of $v$ are equal and thus $u_{xx} = v_{yx} = v_{xy} = -u_{yy}$, i.e. $u_{xx} + u_{yy} = 0$.\n\\end{solution}\n
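\nFor instance (our own check), $f(z) = z^2$ has real part $u = x^2 - y^2$, and indeed\n\\[\nu_{xx} + u_{yy} = 2 + (-2) = 0.\n\\]\n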
\\begin{problem}\nLet's redo \\ref{abels-theorem} in the context of complex analysis.\nConsider a sequence $(a_i)$ of complex numbers so that\n$\\sum_{i=0}^\\infty a_i$ converges to $L$.  Then\n$\\lim_{z \\to 1^{-}} \\sum_{i=0}^\\infty a_i z^i = L$ provided\n$z \\to 1$ in a \\textbf{Stolz sector}, i.e., suppose there is some\n$K$ so that $|1-z| \\leq K(1-|z|)$.\n\\end{problem}\n\\begin{solution}\n\\begin{lemma}\\label{conditional-row-sum-absolute-column-sum}\nSuppose $\\sum_{i=0}^\\infty a_i,$ $\\sum_{i=0}^\\infty b_i=B,$ and $\\sum_{i=0}^\\infty\\sum_{j=0}^\\infty f(i,j) = L$ are convergent, with $f(i, j) = \\begin{cases}a_ib_j & i \\leq j\\\\0 & \\text{otherwise.}\\end{cases}$\nThen $\\sum_{j=0}^\\infty\\sum_{i=0}^\\infty f(i, j)$ converges to $L$.\n\\end{lemma}\n\\begin{proof}\nFor any $N$, we have the following:\n\\begin{align*}\n\\left|\\sum_{j=0}^\\infty\\sum_{i=0}^\\infty f(i,j) - \\sum_{i=0}^N\\sum_{j=0}^\\infty f(i,j)\\right| &=\n\\left|\\left(\\sum_{j=0}^\\infty\\sum_{i=0}^\\infty a_ib_j - \\sum_{i=0}^\\infty\\sum_{j=0}^{i-1} a_ib_j\\right) - \\left(\\sum_{i=0}^N a_iB - \\sum_{i=0}^N\\sum_{j=0}^{i-1}a_ib_j\\right)\\right|\\\\\n&= \\left|\\sum_{j=0}^\\infty\\sum_{i=0}^\\infty a_ib_j - B\\sum_{i=0}^N a_i - \\sum_{i=N+1}^\\infty \\sum_{j=0}^{i-1} a_ib_j\\right|\\\\\n&\\leq \\left|\\sum_{j=0}^\\infty b_j\\sum_{i=0}^\\infty a_i - \\sum_{j=0}^\\infty b_j\\sum_{i=0}^N a_i\\right| +  \\left|\\sum_{i=N+1}^\\infty a_i\\sum_{j=0}^{i-1} b_j\\right|\\\\\n&\\leq \\left|\\sum_{j=0}^\\infty \\left(b_j\\sum_{i=N+1}^\\infty a_i\\right)\\right| +  \\left|\\sum_{i=N+1}^\\infty a_i\\sum_{j=0}^{i-1} b_j\\right|\n\\end{align*}\nLet $B'$ be the supremum of the absolute values of the partial sums of $(b_j)$. Then choose $N$ so that $\\left|\\sum_{i=N+1}^\\infty a_i\\right| < \\frac{\\epsilon}{2B'}$. Plugging this into the last expression, we can conclude\n\\begin{align*}\n\\left|\\sum_{j=0}^\\infty\\sum_{i=0}^\\infty f(i,j) - \\sum_{i=0}^N\\sum_{j=0}^\\infty f(i,j)\\right| \\leq\n\\left|B'\\frac{\\epsilon}{2B'}\\right| +\\left|\\frac{\\epsilon}{2B'}B'\\right|   \\leq \\epsilon\n\\end{align*}\nSo the two sums are equal.\n\\end{proof}\nWith this we move on to the proof of the theorem. First, notice that the condition $|1-z|\\leq K(1-|z|)$ implies $|z| <  1$: if $|z| > 1$ then the RHS is negative, which is impossible since the LHS is an absolute value. If $|z|=1$ then the RHS is $0$, and the only way the inequality is satisfied is if $z=1$, and that's the limit point.\n\nSince $\\sum a_i$ converges, the $a_i$ are bounded; thus $\\sum a_iz^i$ converges absolutely by comparison to the geometric series $\\sup_i|a_i|\\cdot|z|^i$.\n\nWe would like to show that the two values have difference zero. Subtract them, expand, and sum `along the columns instead of along the rows'. 
Lemma \\ref{conditional-row-sum-absolute-column-sum} shows that this operation will not mess anything up.\n                                                                             \\begin{align*}\n                                                                             \\sum a_i - \\sum a_iz^i &= \\sum a_i(1-z^i)\\\\\n                                                                             &= \\sum_{i=0}^\\infty a_i(1-z)(1 + z + \\dots + z^{i-1})\\\\\n                                                                             &= (1 - z)\\sum_{j=0}^\\infty z^j\\sum_{i=j}^\\infty a_i\n                                                                             \\end{align*}\n                                                                             It suffices to prove that the absolute value of the difference is 0. Note that since $\\sum a_i$ converges, $\\forall \\epsilon > 0$, we can find a number $N$ so that for all $i\\geq N$, $L-\\sum_{i=1}^{N-1} a_i = \\sum_{i=N}^\\infty a_i < \\epsilon.$\n                                                                             \\begin{align*}\n                                                                             |\\sum a_i - \\sum a_iz^i| &=  \\left|(1 - z)\\sum_{j=0}^\\infty z^j\\sum_{i=j}^\\infty a_i\\right|\\\\\n                                                                             &\\leq \\left|K(1 - |z|)\\right|\\sum_{j=0}^{N-1} |z|^j\\left|\\sum_{i=j}^\\infty a_n\\right| + \\left|K(1 - |z|)\\right|\\sum_{j=N}^\\infty |z|^j|\\epsilon|\\\\\n                                                                             &\\leq \\left|K(1 - |z|^{N})(L-\\sum_{i=0}^N\\sum_{j=0}^N a_i)\\right| +\n                                                                             |K\\epsilon|\\\\\n                                                                             &\\leq C_1(N)(1 - |z|^{N}) + C_2\\epsilon\n                                                                             \\end{align*}\n                                                                             For some positive constant $C_2$ and constant function of $N$ $C_1(N)$. We can make the second term go to 0 by choosing $N$ to be huge so that $\\epsilon$ vanishes, and we can then independently make the first term go to zero by choosing $z$ close enough to 1 that $1-|z|^N$ is close to 0.\n                                                                             \\end{solution}\n                                                                             \\section{Prove or Disprove and Salvage if Possible}\n                                                                             As always with PODASIPs, many of statements below are incorrect.  
For\n                                                                             full credit, you must not only salvage these false statements, but\n                                                                             also explain why the statement is false (e.g., perhaps by exhibiting a\n                                                                             counterexample).\n\n                                                                             \\begin{problem}\n                                                                               Suppose $f : \\R^2 \\to \\R^2$ and $g : \\R^2 \\to \\R^2$ are smooth (but\n                                                                                 not necessarily holomorphic when regarded as functions from $\\C$ to\n                                                                                   $\\C$).  Then by the chain rule,\n                                                                                     \\[\n                                                                                         \\frac{\\partial}{\\partial z} \\left( f \\circ g \\right) =\n                                                                                             \\frac{\\partial f}{\\partial z} \\frac{\\partial g}{\\partial z} + \\frac{\\partial f}{\\partial \\conj{z}} \\frac{\\partial g}{\\partial \\conj{z}}.\n                                                                                               \\]\n                                                                                                 and similarly\n                                                                                                   \\[\n                                                                                                       \\frac{\\partial}{\\partial \\conj{z}} \\left( f \\circ g \\right) =\n                                                                                                           \\frac{\\partial f}{\\partial \\conj{z}} \\frac{\\partial g}{\\partial \\conj{z}} + \\frac{\\partial f}{\\partial z} \\frac{\\partial g}{\\partial z}.\n                                                                                                             \\]\n                                                                                                             \\end{problem}\n                                                                                                             \\begin{solution}\n                                                                                                             Surely not, as this would imply that $\\pfrac{f\\circ g}{z} = \\pfrac{f\\circ g}{\\conj z}$. Plus, $\\pfrac{f}{z}$ doesn't make that much sense since $f$ isn't really a function of $z$ but rather $g(z)$.\n\n                                                                                                             \\begin{align*}\n                                                                                                             \\pfrac{f\\circ g}{z} &= \\frac{1}{2}(\\pfrac{}{x} - i\\pfrac{}{y})(f\\circ g)\n                                                                                                             \\end{align*}\n                                                                                                             Define real valued 2 variable functions $u_1, v_1, u_2, v_2$ so that $g(x,y) = u_2(x,y) + iv_2(x,y)$, $f(x, y) = u_1(x, y) + iv_1(x, y)$. 
Then\n\\begin{align*}\n(f\\circ g)(x+iy) = u_1(u_2(x, y), v_2(x, y)) + iv_1(u_2(x, y), v_2(x, y))\n\\end{align*}\nApplying the chain rule (treating $f$ as a function of the two real variables $u_2$ and $v_2$), we can get the $x$ and $y$ derivatives:\n\\begin{align*}\n\\pfrac{f\\circ g}{x} &= \\pfrac{f}{u_2}\\pfrac{u_2}{x} + \\pfrac{f}{v_2}\\pfrac{v_2}{x}\\\\\n\\pfrac{f\\circ g}{y} &= \\pfrac{f}{u_2}\\pfrac{u_2}{y} + \\pfrac{f}{v_2}\\pfrac{v_2}{y}\n\\end{align*}\nThen we can compute the Wirtinger partial derivative $\\pfrac{f\\circ g}{z}$:\n\\begin{align*}\n\\pfrac{f\\circ g}{z} &= \\frac{1}{2}\\left(\\pfrac{f\\circ g}{x} - i \\pfrac{f\\circ g}{y}\\right)\\\\\n&= \\pfrac{f}{u_2}\\cdot\\frac{1}{2}\\left(\\pfrac{u_2}{x} - i\\pfrac{u_2}{y}\\right) + \\pfrac{f}{v_2}\\cdot\\frac{1}{2}\\left(\\pfrac{v_2}{x} - i\\pfrac{v_2}{y}\\right)\\\\\n&= \\pfrac{f}{u_2}\\pfrac{u_2}{z} + \\pfrac{f}{v_2}\\pfrac{v_2}{z}\n\\end{align*}\nLet $w(x, y) = u_2(x, y) + iv_2(x, y).$ Then $u_2 = \\frac{1}{2}(w + \\conj{w})$ and $v_2 = \\frac{1}{2i}(w - \\conj{w}),$ so $\\pfrac{f}{u_2} = \\pfrac{f}{w} + \\pfrac{f}{\\conj{w}}$ and $\\pfrac{f}{v_2} = i\\left(\\pfrac{f}{w} - \\pfrac{f}{\\conj{w}}\\right).$\n\n\\begin{align*}\n\\pfrac{f\\circ g}{z} &= \\left(\\pfrac{f}{w} +\\pfrac{f}{\\conj{w}}\\right)\\pfrac{u_2}{z} + i\\left(\\pfrac{f}{w} - \\pfrac{f}{\\conj{w}}\\right)\\pfrac{v_2}{z}\\\\\n
  &= \\frac{1}{2}(\\pfrac{f}{w}(\\pfrac{u_2}{z} - v\\pfrac{v_2}{z}) + \\pfrac{f}{\\conj{w}}(\\pfrac{u_2}{z} + \\pfrac{v_2}{z}))\\\\\n                                                                                                             &= \\pfrac{f}{w}\\pfrac{g}{z} + \\pfrac{f}{\\conj{w}}\\pfrac{\\bar{g}}{z}\n                                                                                                             \\end{align*}\n                                                                                                             And by symmetry we can argue that \n                                                                                                             \\[\n                                                                                                             \\pfrac{f\\circ g}{\\conj{z}} =\\pfrac{f}{\\conj{w}}\\pfrac{\\conj{g}}{\\conj{z}} + \\pfrac{f}{w}\\pfrac{g}{\\conj{z}}\n                                                                                                             \\]\n                                                                                                             \\end{solution}\n\n\n                                                                                                             \\begin{problem}\\label{schwarz-reflection-principle}\n                                                                                                             If $f : \\C \\to \\C$ is holomorphic, then $z \\mapsto \\overline{f(\\conj z)}$ is holomorphic.\n                                                                                                             \\end{problem}\n                                                                                                             \\begin{solution}\n                                                                                                             If $f(x+iy) := u_1(x, y) + iv_1(x, y)$ then $ u_2(x, y) + v_2(x, y) := \\conj{f}(x - iy) = u_1(x, -y) - iv_1(x, -y).$ Let's check the equations:\n                                                                                                             \\begin{align*}\n                                                                                                             \\pfrac{u_2}{x} &= \\pfrac{u}{x} &\n                                                                                                             \\pfrac{v_2}{x} &= -\\pfrac{v}{x}\\\\\n                                                                                                             \\pfrac{u_2}{y} &= -\\pfrac{v}{y} & \n                                                                                                             \\pfrac{v_2}{y} &= \\pfrac{v}{y}\\\\\n                                                                                                             \\end{align*}\n                                                                                                             Using the fact that Cauchy-Riemann equations are satisfied for $f$,\n                                                                                                             \\begin{align*}\n                                                                                                             \\pfrac{u_2}{x} &= \\pfrac{v_2}{y} & \n                                                                                                             \\pfrac{u_2}{y} &= -\\pfrac{v_2}{y}\n                                                                     
\\end{align*}\nUsing the fact that the Cauchy-Riemann equations are satisfied for $f$,\n\\begin{align*}\n\\pfrac{u_2}{x} &= \\pfrac{v_2}{y} &\n\\pfrac{u_2}{y} &= -\\pfrac{v_2}{x}\n\\end{align*}\nSo the claim is true.\n\\end{solution}\n\n\\begin{problem}\\label{cauchy-riemann-alone-not-sufficient}Suppose $u, v : \\mathbb{R}^2 \\to \\mathbb{R}$ satisfy\n\\[\n\\frac{\\partial u}{\\partial x}(0,0)=\\frac{\\partial v}{\\partial y}(0,0)\n\\mbox{ and }\n\\frac{\\partial u}{\\partial y}(0,0)=-\\frac{\\partial v}{\\partial x}(0,0).\n\\]\nThen $f : \\C \\to \\C$ given by $f(x+iy) = u(x,y) + i \\, v(x,y)$ is holomorphic at $0$.\n\\end{problem}\n\\begin{solution}\nFalse: define $f(z) = \\begin{cases}1&\\Re(z) = 0 \\text{ or } \\Im(z) = 0\\\\ 0 & \\text{otherwise.}\\end{cases}$ It's not holomorphic at $0$ since it's not even continuous (in particular not real differentiable) there, although the hypotheses hold at the origin, as verified below.\n
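\nTo spell out the counterexample (our own verification): along both axes $f \\equiv 1$, so with $u = \\Re f$ and $v = \\Im f$ we get\n\\[\nu_x(0,0) = \\lim_{h\\to 0}\\frac{u(h,0)-u(0,0)}{h} = \\lim_{h\\to 0}\\frac{1-1}{h} = 0,\n\\]\nand likewise $u_y(0,0) = 0$, while $v \\equiv 0$; hence the Cauchy-Riemann equations hold at the origin even though $f$ is not continuous there.\n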
Then \\[f(z) = (f(x + iy) - f(x)) +  (f(x) - f(0)).\\] By the MVT we can find points $p_i$ on the segment from $x$ to $x+iy$ and on the segment from $0$ to $x$ with the correct derivatives for the functions $u$ and $v$:\n\\begin{align*}\nu(z) &= xu_x(p_1) + u_y(p_2)y\\\\\nv(z) &= xv_x(p_3) + v_y(p_4)y\n\\end{align*}\nSince the derivatives are all continuous at 0, and the $p_i$ are close to 0, making $z$ closer to zero allows us to find arbitrarily small $\\epsilon_i$ so that\n\\begin{align*}\nu(z) &= (u_x(0)+\\epsilon_1)x + (u_y(0)+\\epsilon_2)y \\\\\nv(z) &= (v_x(0)+\\epsilon_3)x + (v_y(0)+\\epsilon_4)y = (-u_y(0)+\\epsilon_5)x + (u_x(0)+\\epsilon_6)y\n\\end{align*}\nwhere the last equality uses the Cauchy-Riemann equations at $0$. Then we show that the function is complex differentiable by showing the following limit exists:\n\\begin{align*}\n\\lim_{z\\to 0} \\frac{f(z)}{z} &= \\lim \\frac{(u_x(0)+\\epsilon_1)x + (u_y(0)+\\epsilon_2)y + i((-u_y(0)+\\epsilon_5)x + (u_x(0)+\\epsilon_6)y)}{x+ iy}\\\\\n&= \\lim \\frac{u_x(0)(x+iy) + u_y(0)(y-ix) + (\\epsilon_1 + i\\epsilon_5)x + (\\epsilon_2 + i\\epsilon_6)y}{x+ iy}\\\\\n&= \\lim \\frac{u_x(0)(x+iy) - iu_y(0)(x+iy)}{x+iy} + \\lim_{z\\to 0} \\frac{(\\epsilon_1+i\\epsilon_5)x}{x+iy} + \\lim_{z\\to 0}\\frac{(\\epsilon_2 + i\\epsilon_6)y}{x+ iy}\\\\\n&= u_x(0) - iu_y(0)\n\\end{align*}\nwhere we used $y - ix = -i(x+iy)$, and the last two limits vanish because $|x|, |y| \\le |x+iy|$ while the $\\epsilon_i$ go to zero along with $z$.\n
\\end{solution}\n\n\\begin{problem}\\label{open-mapping-theorem-preview}\nThere is a nonconstant holomorphic function with constant absolute value.\n\\end{problem}\n\\begin{solution}\nFalse: suppose that $f$ is holomorphic with $|f(z)| = c$. Then $f(x+iy) = u(x, y) + iv(x, y)$ and $u^2 + v^2 = c^2.$ This implies that\n\\begin{align}\n\\label{first_abs_omtp} 2uu_x + 2vv_x &= \\frac{\\partial}{\\partial x}c^2 = 0\\\\\n\\label{mid_abs_omtp}2uu_y + 2vv_y &= \\frac{\\partial}{\\partial y}c^2 = 0\\\\\n\\label{second_abs_omtp} 0 &= -2uv_x + 2vu_x \\color{purple} \\quad\\text{ apply CR equations to \\ref{mid_abs_omtp}}\n\\end{align}\nMultiplying \\ref{first_abs_omtp} by $v$ and \\ref{second_abs_omtp} by $u$,\n\\begin{align*}\n2uvu_x + 2v^2v_x &= -2u^2v_x + 2vuu_x\\\\\n2v^2v_x &= -2u^2v_x\\\\\n2(v^2 +u^2) v_x &= 0\\\\\nc^2v_x &= 0\n\\end{align*}\nIf $c^2=0$, then $f$ is constant (as $|f(z)|=c=0$ everywhere); otherwise $v_x = 0$ (and $u_x = 0$ as well, by instead multiplying \\ref{first_abs_omtp} by $u$ and \\ref{second_abs_omtp} by $-v$). 
Using the Cauchy-Riemann equations, this implies that all four partial derivatives are 0, so $f$ is constant.\n\\end{solution}\n\\begin{problem}\\label{schwarzian-derivative}For $f : \\mathbb{C} \\to \\mathbb{C}$, the \\textbf{Schwarzian derivative} of $f$ is\n  \\[ (Sf)(z)=\\left({\\frac{f''(z)}{f'(z)}}\\right)'-{\\frac12}\\left(\\frac{f''(z)}{f'(z)}\\right)^2={\\frac {f'''(z)}{f'(z)}}-{\\frac32}\\left(\\frac{f''(z)}{f'(z)}\\right)^2. \\]\nIf $f$ is a M\\\"obius transformation, then $(Sf)(z) = 0$.\n\\end{problem}\n\\begin{solution}\nIn the definition, $f$ should be holomorphic as well, so that the notation $f'(z)$ makes sense.\n\nLet $f=\\frac{az + b}{cz + d},$ and we can try to compute $Sf$. First let's find $f'$ using the quotient rule:\n\\begin{align*}\nf' &= \\frac{(cz + d)a - (az + b)c}{(cz + d)^2}\\\\\n&= \\frac{da - bc}{(cz + d)^2}\n\\end{align*}\nWhen we take the logarithmic derivative, the constant $(da - bc)$ will disappear, making the next computation a bit cleaner:\n\\begin{align*}\n\\frac{f''}{f'} &= \\frac{-2c(da - bc)}{(cz+d)^3} \\Big/ \\frac{da - bc}{(cz+d)^2}\\\\\n&= \\frac{-2c}{cz + d}\n\\end{align*}\nFor the final term:\n
\\begin{align*}\n\\left(\\frac{f''}{f'}\\right)' = \\frac{2c^2}{(cz + d)^2}\n\\end{align*}\nNow we can plug into the equation and quickly see that the value is in fact 0.\n\\begin{align*}\nSf(z) &= \\left({\\frac{f''(z)}{f'(z)}}\\right)'-{\\frac12}\\left(\\frac{f''(z)}{f'(z)}\\right)^2\\\\\n&= \\frac{2c^2}{(cz + d)^2} - \\frac{1}{2}\\left(\\frac{-2c}{cz + d}\\right)^2\\\\\n&= 0\n\\end{align*}\n\\end{solution}\n\\end{document}\n", "meta": {"hexsha": "2778e3e31032215621f58c935648a534b9345f64", "size": 51299, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem-solutions/sol2.tex", "max_stars_repo_name": "Alex7Li/math5522h", "max_stars_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem-solutions/sol2.tex", "max_issues_repo_name": "Alex7Li/math5522h", "max_issues_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-solutions/sol2.tex", "max_forks_repo_name": "Alex7Li/math5522h", "max_forks_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.0065502183, "max_line_length": 547, "alphanum_fraction": 0.3061268251, "num_tokens": 10230, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.8080672135527632, "lm_q1q2_score": 0.5932179299180499}}
{"text": "%!TEX root = ../main.tex\n\n\t\\begin{thm}[Lemma 3.1 in {[Stovall], originally due to [Dendrinos\\&Wright]}, complexified]\n\t\\label{lem:geometric_lemma}\n\t\tLet $\\gamma:\\mathbb C \\to \\mathbb C^d$ be a polynomial curve of degree $N$, and assume $\\Lambda^{(d)_\\gamma} \\not \\equiv 0$, then we can split $\\mathbb C \\cup \\{\\infty\\}$ into $M = O_{N}(1)$ non-overlapping triangles $\\{T_j\\}_{j=1}^M$ so that on each triangle $T_j$:\n\t\t\\begin{equation}\n\t\t\\label{eq:power_geometric_estimates}\n\t\t\t|\\Lambda_{\\gamma'}^{(d)}(z)| \\sim A_j |z-b_j|^{k_j}, \\,\\, and \\,\\, |\\gamma'_1(t)|\\sim B_j|z-b_j|^{l_j}\n\t\t\\end{equation}\n\t\tand, for $z := (z_1, \\dots z_d) \\in T_j^d$:\n\t\t\\begin{equation}\n\t\t\t\\tag{DW}\n\t\t\t\\label{eq:hard_jacobian_estimate}\n\t\t\t\\left|\\frac{J_{\\gamma'}(z)}{v(z)}\\right|\\gtrsim_N \\prod_{i=1}^d \\Lambda^{(d)}_{\\gamma'}(z_i)^{1/d}\n\t\t\\end{equation}\n\t\tMoreover, for each triangle $T_j$ there is a closed, zero-measure set $R_j \\subseteq T_j^d$ so that the sum map $\\Sigma(z):=\\sum_{i=1}^d \\gamma(z_i)$ is $O_N(1)$-to-one in $T_j^d\\setminus R_j$.\n\t\\end{thm}\n\\todo[inline]{Schur polynomials are strictly column increasing weak row increasing.}\n\n\n\tTo simplify notation, and since the inequality we want to prove above concerns $\\gamma'$ only and not $\\gamma$, we will prove inequality \\eqref{eq:hard_jacobian_estimate} for a generic curve $\\gamma$ that will end up being the $\\gamma'$ above.\\\\\n\n\tThe strategy to prove the theorem will be the following: First inequality \\eqref{eq:hard_jacobian_estimate} will be shown for the moment curve, or for generalized moment curves (that is, curves that are affine equivalent to a curve of the form $(z_1^{\\delta_1}, \\dots z_1{^{\\delta_d}}x)$). Then, we will show  that the result is in fact stable to suitable small perturbations of the polynomial. This, together with a compactness argument on $\\mathbb C \\cup \\{\\infty\\}$, will give the non-uniform estimate (where the constant could depend on the polynomial). The only potential source of non-uniformity at this point will be the number of open sets, and finally we will show a stability on the number of open sets to use, and a compactness argument on the set of polynomials with coefficients $\\lesssim 1$ will finish the proof.\n\n\t\\subsection{Preliminaries} % (fold)\n\t\\label{sub:prelim}\n\n\tIn this section we will define a systematic way of changing co-ordinates to polynomial curves to understand the behavior near a point, which we refer to as \\textit{the zoom-in method} from now on. \n\n\t\\begin{defi}\n\t\t[Canonical form for a curve]\n\t\tLet $\\gamma = (\\gamma_1, \\dots \\gamma_d)$ be a polynomial curve. Let $\\delta_i$ be the lowest degree of a non-zero monomial in $\\gamma_i$. Then $\\gamma$ is in canonical form at zero if:\n\n\t\t\\begin{itemize}\n\t\t\t\\item $\\delta_1< \\dots < \\delta_d$\n\t\t\t\\item The coefficient of degree $\\delta_i$ in $\\gamma_i$ is $1$.\n\t\t\\end{itemize}\n\t\tsimilarly, that $\\gamma$ is in canonical form at $c\\in\\mathbb C$ if $\\gamma(z-c)$ is in canonical form at zero. The curve $\\gamma$ is in canonical form at infinity if $z^D \\gamma(z^{-1})$ is in canonical form at zero, where $D$ is the maximum of the degrees of $\\gamma_i$.\n\t\\end{defi}\n\n\tA polynomial admits a canonical form for every point if and only if the Jacobian for the curve is not the zero polynomial (that is, as long as the curves are linearly independent as polynomials). 
Given a polynomial $\\gamma$, and a linear transformation $L\\in GL(d;\\mathbb C)$ so that $L \\gamma$ is in canonical form, we define a \\textbf{zoom-in at zero at scale $\\lambda$} as the (normalized) zoom-in:\n\n\t\\begin{defi}\n\t The zoom-in of $\\gamma$ at scale $\\lambda$ is the polynomial curve $\\mathcal B_\\lambda [\\gamma](z) := \\text{diag}(\\lambda^{-\\delta_1}, \\dots, \\lambda^{-\\delta_d})L \\gamma(\\lambda z)$. Note that the coefficients of $\\mathcal B_\\lambda [\\gamma](z)$ converge to those of the moment curve $(z^{\\delta_1}, \\dots, z^{\\delta_d})$ as $\\lambda$ goes to zero.\n\t\\end{defi}\n\t% subsection a_systematic_blow_up (end)\n\t\n\t\\subsection{Model Case: Generalized moment curve} % (fold)\n\t\\label{sub:model_case_generealized_moment_curve}\n\t\n\n\tFor a generalized moment curve $\\gamma$ with exponents $\\mbf n:= (n_1, \\dots n_d)$ (that is, $\\gamma(z):= (z^{n_1}, \\dots z^{n_d})$), and for ${\\bf z} := (z_1, \\dots z_d)$, the following holds:\n\n\t\\begin{equation}\n\t\t\\frac{J(\\mbf z)}{v(\\mbf z)} = S_{\\bf n} (\\bf z)\n\t\\end{equation}\n\twhere $S_{\\bf n}$ is the Schur polynomial of degree $\\bf n$, defined as\n\t\\begin{equation}\n\tS_{\\bf n}(z_1, \\dots z_d) = \\sum_{(t_i) \\in T_{\\bf n}} z_1^{t_1}\\dots z_d^{t_d}\t\n\t\\end{equation}\t\n\twhere $T_{\\bf n}$ is the set of semistandard Young tableaux of shape $\\bf n$. Now, to compare $J(z_i)$ with $\\Lambda(z_1, \\dots, z_d)$, the following fact is useful:\n\n\t\\begin{lemma}\n\t\\label{lem:det_is_limit}\n\t\tLet ${\\bf z}\\in \\mathbb C^d$, with $z_i\\neq z_j$ for $i\\neq j$, let $s\\in \\mathbb C$, and let $\\gamma:\\mathbb C\\to \\mathbb C^d$ be a polynomial curve. Then:\n\t\t\\begin{equation}\n\t\tJ_\\gamma(s) = \\lim_{\\lambda\\to 0} \\frac{\\Lambda_\\gamma(\\lambda {\\bf z}+ s)}{v(\\lambda {\\bf z})} \n\t\t\\end{equation}  \n\t\tand, in particular, in the case when $\\gamma$ is a moment curve of exponent $\\bf n$,\n\t\t\\begin{equation}\n\t\t\\label{eq:det_is_schur}\n\t\t\t\\Lambda^{(d)}_\\gamma(s) = S_{\\bf n}(s, \\dots s)\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\begin{proof}\n\t\tBy Taylor expansion we have:\n\n\t\t\\begin{equation}\n\t\t\t\\gamma_i(s+ \\lambda \\mbf z_j) =  \\sum_{k=0}^{d-1} \\frac 1 {k!}\\gamma_i^{(k)}(s) \\lambda^k \\mbf z_j ^k + O(\\lambda^d)\n\t\t\\end{equation}\n\t\tNow, defining the matrices $\\Gamma_{ij} = \\gamma_i(s+ \\lambda \\mbf z_j)$, $ { Z}_{kj} =  (\\lambda \\mbf z_j) ^k$, and $(T_\\gamma)_{ik} = \\frac 1 {k!}\\gamma_i^{(k)}(s) $, the equation above can be rewritten as:\n\n\t\t\\begin{equation}\n\t\t\t\\Gamma = T_\\gamma Z + O(\\lambda^d)\n\t\t\\end{equation}\n\t\tSince the determinant of $Z$ is $v(\\lambda{\\bf z})$, the lemma follows from the multiplicative property of the determinant:\n\n\t\t\\begin{equation}\n\t\t\t\\frac{\\Lambda_\\gamma(\\lambda {\\bf z}+ s)}{v(\\lambda {\\bf z})} = \n\t\t\t\\frac{\\det \\Gamma}{\\det Z} =  \\det [T_\\gamma +  Z^{-1}O(\\lambda^d)] \\to_{\\lambda \\to 0} \\det T_\\gamma =  \\Lambda^{(d)}_\\gamma(s)\n\t\t\\end{equation}\n\t\tThe fact that $Z^{-1}= o(\\lambda^{-d})$ (and thus we can eliminate the error term as $\\lambda\\to 0$) is a quick computation from the adjugate formula for the inverse.\n\t\\end{proof}\n\n\t\\begin{remark}\n\t\tThe same argument works as well for $\\Lambda^{(k)}_\\gamma$, $1\\le k<d$, since each component of $\\Lambda^{(k)}_\\gamma$ is a determinant of components of the polynomial.\n\t\\end{remark}\n
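\n\tFor instance, for $d=2$ and exponents $\\mbf n = (1,3)$, one can check directly that\n\t\\begin{equation}\n\t\t\\frac{J(\\mbf z)}{v(\\mbf z)} = \\frac{1}{z_2 - z_1}\\det\\begin{pmatrix} z_1 & z_2\\\\\\\\ z_1^3 & z_2^3\\end{pmatrix} = z_1 z_2 (z_1 + z_2) = z_1^2 z_2 + z_1 z_2^2 = S_{\\mbf n}(z_1, z_2)\n\t\\end{equation}\n\tand, on the diagonal, $S_{\\mbf n}(s,s) = 2s^3 = \\det\\begin{pmatrix} s & 1\\\\\\\\ s^3 & 3s^2\\end{pmatrix} = \\Lambda^{(2)}_\\gamma(s)$, in agreement with \\eqref{eq:det_is_schur}.\n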
\tLemma \\ref{lem:det_is_limit} is (together with Schur positivity) all we need to show the theorem for the generalized moment curve:\n\n\t\\begin{proof}\n\t\t[Proof (of \\eqref{eq:hard_jacobian_estimate}, moment curve case).]\n\n\t\tLet $\\mu$ be a generalized moment curve of exponents $\\mbf n$. Decompose $\\mathbb C = \\bigcup W_i$ into finitely many sectors $W_i = \\{z: |\\arg z - \\theta_i|<\\epsilon \\}$ of angle $\\epsilon$ small enough (depending on the exponents). Now, for $\\mbf z = (\\mbf z_1, \\dots \\mbf z_d) \\in W_i^d$\n\n\t\t\\begin{equation}\n\t\t\t C_{\\mbf n} |S_{\\bf n}(\\mbf z)| \n\t\t\t \\ge\n\t\t\t |\\mbf z_1\\cdot \\mbf z_2  \\dots \\mbf z_d|^{\\frac{\\deg S_{\\bf n}}{d}}\n\t\t\t = C'_{\\mbf n} \\left|\\prod_{i=1}^d S_{\\bf n}(\\mbf z_i, \\dots , \\mbf z_i) \\right|^{1/d}\n\t\t\\end{equation}\n\t\tThe first inequality is the AM-GM inequality applied to the monomials of $S_{\\bf n}(\\mbf z)$ (the small-angle sector ensures the monomials add without substantial cancellation). The second equality follows from the fact that $ S_{\\mbf n}(\\mbf z_i, \\dots , \\mbf z_i) = C_{\\mbf n} \\mbf z_i^{\\deg S_{\\mbf n}}$. Now the result follows from equation \\eqref{eq:det_is_schur} in Lemma \\ref{lem:det_is_limit}.\n\t\\end{proof}\n\n\tLemma \\ref{lem:det_is_limit} leads to the definition of a new differential form that corrects for the Vandermonde factor:\n\n\t\\begin{defi}\n\t\t[Corrected multilinear form]\n\t\tFor $\\gamma:\\mathbb C \\to \\mathbb C^n$ and $\\mbf z \\in \\mathbb C^n$ we define:\n\t\t\\begin{equation}\n\t\t\t\\tilde \\Lambda_\\gamma(\\mbf z) = \n\t\t\t\\frac\n\t\t\t{\\Lambda_\\gamma(\\mbf z) }\n\t\t\t{v(\\mbf z) }\n\t\t\\end{equation}\n\t\tMoreover, (as we shall see in the following section) the map ${\\tilde\\Lambda}_{\\cdot}(\\cdot)$ is continuous in its domain $ \\mathbb C^d \\times P_N(\\mathbb C)^d$.\n\t\\end{defi}\n\n\tThere is an extra property of the generalized moment curve that will be used later in this section, which we prove now: \\\\\\\\\n\n\t\\todo[inline]{Is this lemma really clearer in sequence form? }\n\n\t\\begin{lemma}\n\t\t[Transversality of the corrected multilinear form for moment curves]\n\t\t\\label{lem:transversality_corrected} Let $\\mu$ be a moment curve, and $W$ a wedge of $\\mathbb C$ of angle $\\epsilon$ (depending on $\\mu$) small enough. Let $\\{\\bf w^{(k)}\\}_{k=1}^{\\infty} $ be a sequence of elements in $ W^s$ and $ \\{\\mbf z^{(k)} \\}_{k=1}^{\\infty}$ a sequence in $ W^t$, with $k:=t+s\\le d$; assume $|\\mbf z^{(k)}_i| = O(1)$ and $\\mbf w^{(k)} \\to 0$. Then:\n\n\t\t\\begin{equation}\n\t\t\\label{eq:transversality}\n\t\t\t\\|\\tilde \\Lambda_\\mu (\\mbf z^{(k)}_1 \\dots \\mbf z^{(k)}_t, \\mbf w^{(k)}_1 \\dots \\mbf w^{(k)}_s) \\| \\approx_\\mu \n\t\t\t\\|\\tilde \\Lambda_\\mu (\\mbf z^{(k)}_1 \\dots \\mbf z^{(k)}_t)\\|\\|\\tilde \\Lambda_\\mu( \\mbf w^{(k)}_1 \\dots \\mbf w^{(k)}_s) \\|\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\begin{proof}\n\t\tThe $\\lesssim$ direction is the fact that, for forms, $\\|a\\wedge b\\| \\le \\|a\\|\\|b\\|$. \n\n\t\tFor the converse, there is one co-ordinate $e_{n_1} \\wedge e_{n_2} \\dots \\wedge e_{n_k}$ on the LHS that dominates the norm of the form. By restricting to that co-ordinate, it can be assumed that $k=d$. Note also that the term $\\|\\tilde \\Lambda_\\mu (\\mbf z^{(k)}_1 \\dots \\mbf z^{(k)}_t)\\|$ is the absolute value of a Schur polynomial (and therefore is $O(1)$), so we can omit it in the estimates. \n\n\t\tBy using the Young tableau decomposition of the Schur polynomials again, it suffices to show that each monomial in $\\| \\tilde \\Lambda_\\mu( \\mbf w^{(k)}_1 \\dots \\mbf w^{(k)}_s) \\|$ is dominated by a monomial in $\\|\\tilde \\Lambda_\\mu (\\mbf z^{(k)}_1 \\dots \\mbf z^{(k)}_t, \\mbf w^{(k)}_1 \\dots \\mbf w^{(k)}_s) \\|$. 
This can be done as follows:\n\n\t\t[DRAW PICTURE OF YOUNG TABLEAUX]\n\t\\end{proof}\n\n\t[DO WE NEED WHAT WAS LEMMA 2.5 ANYWHERE]\n\n\t% subsection model_case_generealized_moment_curve (end)\n\n\t\\subsection{Fixed polynomial case} % (fold)\n\t\\label{sub:fixed_polynomial_case}\n\n\tThis section shows that locally any polynomial can be approximated by a moment curve in such a way that the estimates can be transferred from the moment curve to the polynomial. By compactness, this will allow us to conclude inequality \\eqref{eq:hard_jacobian_estimate} for single polynomials, but with a number of open sets that might depend on the polynomial.\n\n\t\\begin{lemma}\n\t\t[Convergence to the model in the non-degenerate set-up] \n\t\t\\label{lem:nondegenerate_continuity}The function $\\tilde\\Lambda_{\\mu}({\\bf z})$ is continuous in $({\\bf z},\\mu)\\in \\mathbb C^k \\times P_N(\\mathbb C)^d$, where $P_N(\\mathbb C)$ denotes the set of polynomials of degree at most $N$.\n\t\\end{lemma}\n\n\t\\begin{proof}\n\t\tConsider both the numerator and denominator of $\\tilde \\Lambda_\\mu({\\bf z})$ as polynomials in the components of $\\mu$ and ${\\bf z}$. The polynomial $\\Lambda_\\mu({\\bf z})$ in the numerator vanishes on the zero set $\\mathcal Z(v(z_1, \\dots z_k))$, and since $v(z_1, \\dots z_k)$ does not have repeated factors, it divides the numerator by the Nullstellensatz, and the result follows. \n\t\\end{proof}\n
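\n\tFor instance, for $d = k = 2$, $\\Lambda_\\mu(z_1,z_2) = \\mu_1(z_1)\\mu_2(z_2) - \\mu_1(z_2)\\mu_2(z_1)$ vanishes identically on the diagonal $\\{z_1 = z_2\\}$, so $(z_2 - z_1)$ divides it, and $\\tilde\\Lambda_\\mu(z_1,z_2)$ is itself a polynomial in $z_1$, $z_2$ and the coefficients of $\\mu$, from which the continuity is immediate.\n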
\tThis lemma implies the local version of the theorem around points where the Jacobian does not degenerate:\n\n\t\\begin{prop}\n\t\\label{prop:local_nondegenerate_jacobian}\n\t\tLet $\\gamma$ be a polynomial curve in $\\mathbb C^d$ such that $\\Lambda^{(d)}_\\gamma(0) \\neq 0$. Then there is a neighborhood $B_\\epsilon(0)$, with $\\epsilon=\\epsilon(\\gamma)$, where \\eqref{eq:hard_jacobian_estimate} holds with a constant depending only on the dimension.\n\t\\end{prop}\n\n\n\n\t\\begin{proof}\n\t\tBy the affine invariance of \\eqref{eq:hard_jacobian_estimate}, we can consider a sequence of zoom-ins in the canonical form parametrized by $\\lambda$ that converge to the moment curve (the sequence cannot converge to any other generalized moment curve since the determinant does not vanish at zero). Therefore, we have to show that, for $\\lambda$ small enough, the lemma is true for $\\mathcal B_\\lambda[\\gamma]$, that is:\n\n\t\t\\begin{equation}\n\t\t\t\\left|\\frac{J_{\\mathcal B_\\lambda[\\gamma]}(z)}{v(z)}\\right|\\gtrsim_N \\prod_{i=1}^d \\Lambda^{(d)}_{\\mathcal B_\\lambda[\\gamma]}(z_i)^{1/d}\n\t\t\\end{equation}\n\n\t\tFor the moment curve (the case $\\lambda=0$) inequality \\eqref{eq:hard_jacobian_estimate} is true, and reads:\n\n\t\t\\begin{equation}\n\t\t\t\\tilde \\Lambda_{\\mu}(z_1, \\dots z_d) \\gtrsim 1\n\t\t\\end{equation}\n\t\tand since both sides of the inequality converge locally uniformly as $\\lambda \\to 0$ (the LHS by Lemma \\ref{lem:nondegenerate_continuity} and the RHS because it is the $d$-th root of a sequence of converging polynomials), the inequality is true for $\\lambda$ small enough in the zoom-in.\n\t\\end{proof}\n\n\tFor the degenerate points where the Jacobian vanishes a similar, but slightly more technical, approach gives the same result.\n\n\t\\begin{lemma}\n\t\t[Convergence to the model case in the degenerate set-up with zoom-in] \n\t\t\\label{lem:degenerate_convergence}\n\t\tLet $W \\subseteq \\mathbb C$ be a sector of small amplitude $\\epsilon(N,d)$ to be determined. Let ${\\bf z}_j$ be a sequence of points in $W^k$ that have norm $\\lesssim 1$. Let $\\gamma_j:=\\mathcal B_{\\lambda_j}[\\gamma]$, $\\lambda_j \\to 0$, be a sequence of polynomial curves of degree $N$ that converges to $\\mu$, a generalized moment curve of exponents ${\\bf n}=(n_1\\dots n_d)$. Then:\n\n\t\t\\begin{equation}\n\t\t\t\\lim_{j\\to \\infty} \\frac{\\Lambda_{\\gamma_j}({\\bf z}_j)}\n\t\t\t{|\\Lambda_{\\gamma_j}({\\bf z}_j)|}-\\frac{\\Lambda_{\\mu}({\\bf z}_j)}\n\t\t\t{|\\Lambda_{\\mu}({\\bf z}_j)|} = 0\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\begin{proof}\n\t\tFirst note that it suffices to prove that the lemma is true for a subsequence of the $(\\lambda_j,\\mbf z_j)$. The lemma will follow if we can prove that, for any fixed coordinate $e = e_{l_1} \\wedge \\dots \\wedge e_ {l_k}$, we have:\n\n\t\t\\begin{equation}\n\t\t\t\\lim_{j\\to \\infty} \\frac{\\Lambda_{\\gamma_j}({\\bf z}_j)|_e}\n\t\t\t{\\Lambda_{\\mu}({\\bf z}_j)|_e} = 1\n\t\t\\end{equation}\n\tusing the notation $w|_e$ to denote the $e$-th co-ordinate of the form $w$. By restricting the problem to the co-ordinates $(e_{l_1}, \\dots  e_ {l_k})$ we may assume $k=d$, and then it suffices to show, in the same set-up as the lemma, that:\n\t\\begin{equation}\n\t\t\t\\lim_{j\\to \\infty} \\frac{\\tilde \\Lambda_{\\gamma_j}({\\bf z}_j)}\n\t\t\t{\\tilde \\Lambda_{\\mu}({\\bf z}_j)} = 1\n\t\\end{equation}\n\n\tWe will prove this by induction. By passing to a subsequence if necessary, assume WLOG that $\\mbf z_j$ has a limit. In the base case none of the components of ${\\bf z}_j$ has limit zero. In that case, the denominator converges to a non-zero number (since the denominator is a Schur polynomial in the components of ${\\bf z}_j$) and the result follows. \n\n\tThe first induction case is when all of the components go to zero. In this case, by doing a further zoom-in and passing to a further subsequence if necessary, one can reduce to the case where not all the components of ${\\bf z}_j$ go to zero. Thus, assume some, but not all, of the components of ${\\bf z}_j$ go to zero.\n\n\tWithout loss of generality assume it is the first $0<k'<k$ components that go to zero. Let ${\\mbf z'}_j:= ((\\mbf z_j)_1, \\dots (\\mbf z_j)_{k'})$ be the sequence made by the first $k'$ components of each ${\\bf z}_j$, and ${\\mbf z''}_j$ the sequence made by the remaining components. Then,\n\n\t\\begin{equation}\n\t\t \\frac{\\tilde \\Lambda_{\\gamma_j}({\\bf z}_j)}\n\t\t\t{\\tilde \\Lambda_{\\mu}({\\bf z}_j)}\n\t\t=  \\frac\n\t\t{ \\sum_{e'\\wedge e'' = e}\\tilde \\Lambda_{\\gamma_j}({\\bf z}'_j)|_{e'}\n\t\t\t\\cdot\n\t\t\\tilde \\Lambda_{\\gamma_j}({\\bf z}''_j)|_{e''}}\n\t\t{ \\sum_{e'\\wedge e'' = e}\\tilde \\Lambda_{\\mu}({\\bf z}'_j)|_{e'}\n\t\t\t\\cdot\n\t\t\\tilde \\Lambda_{\\mu}({\\bf z}''_j)|_{e''}}\n\t\\end{equation}\n\tWe know by the induction hypothesis that each of the terms in the sum in the numerator converges to the corresponding term in the denominator (in the sense that their quotient goes to $1$). 
So the result will follow if we can prove there is not much cancellation going on in the denominator, that is:\n\n\t\\begin{equation}\n\t\t\\limsup_{j \\to \\infty }\n\t\t\\frac \n\t\t{ \\sum_{e'\\wedge e'' = e} |\\tilde \\Lambda_{\\mu}({\\bf z'}_j)|_{e'}\n\t\t\\cdot\n\t\t\\tilde \\Lambda_{\\mu}({\\bf z''}_j)|_{e''}|} {|\\tilde \\Lambda_{\\mu}({\\bf z}_j)|_e|}<\\infty\n\t\\end{equation}\n\tBut this is a consequence of Lemma \\ref{lem:transversality_corrected}, because we can bound each of the terms in the sum by \n\t$|\\tilde \\Lambda_{\\mu}({\\bf z'}_j)|\n\t\t\\cdot\n\t\t|\\tilde \\Lambda_{\\mu}({\\bf z''}_j)| \\lesssim {|\\tilde \\Lambda_{\\mu}({\\bf z}_j)|} $, by equation \\eqref{eq:transversality}.\n\n\n\t\\end{proof}\n\n\n\tWe can also see this lemma in the continuity set-up. Following the same proof as in Proposition \\ref{prop:local_nondegenerate_jacobian}, we can prove\n\n\n\t\\begin{prop}\n\t\\label{prop:local_degenerate_jacobian}\n\t\tLet $\\gamma$ be a polynomial curve in $\\mathbb C^d$ such that $\\Lambda^{(d)}_\\gamma(0) = 0$. Then there is a neighborhood $B_\\epsilon(0)$, with $\\epsilon=\\epsilon(\\gamma)$, where \\eqref{eq:hard_jacobian_estimate} holds with a constant depending only on the dimension.\n\t\\end{prop}\n\n\t\\begin{proof}\n\t\tWe have to show that there exists a zoom-in $\\mathcal B_{\\lambda}[\\gamma]$ for $\\lambda$ small enough so that the inequality \n\t\t$$\n\t\t\\left|\\frac{J_{\\mathcal B_{\\lambda}[\\gamma]}(z)}{v(z)}\\right| \\prod_{i=1}^d \\Lambda^{(d)}_{\\mathcal B_{\\lambda}[\\gamma]}(z_i)^{- 1/d} \\gtrsim_N 1\n\t\t$$ holds in the unit ball, but Lemma \\ref{lem:degenerate_convergence} implies that\n\n\t\t$$\n\t\t\\lim_{\\lambda\\to 0}\n\t\t\\left|\\frac{J_{\\mathcal B_{\\lambda}[\\gamma]}(z)}{v(z)}\\right| \\prod_{i=1}^d \\Lambda^{(d)}_{\\mathcal B_{\\lambda}[\\gamma]}(z_i)^{- 1/d} = \n\t\t\\left|\\frac{J_{\\mu}(z)}{v(z)}\\right| \\prod_{i=1}^d \\Lambda^{(d)}_{\\mu}(z_i)^{- 1/d} \\gtrsim 1\n\t\t$$\n\t\twhere the convergence is locally uniform, and the second inequality is inequality \\eqref{eq:hard_jacobian_estimate} for the moment curve.\n\n\t\t\n\t\\end{proof}\n\n\tThis finishes the proof of \\eqref{eq:hard_jacobian_estimate} if we allow compact sets to be split into a finite number of sets that may depend on the polynomial. A small variation of Proposition \\ref{prop:local_degenerate_jacobian} can be used on a neighborhood of infinity. In this exposition infinity will be considered simultaneously with the uniform case instead.\n\n\t% subsection fixed_polynomial_case (end)\n\n\t\\subsection{Uniformity for polynomials} % (fold)\n\t\\label{sub:uniformity_for_polynomials}\n\n\tThe aim of this section is to show that the number of open sets in the geometric Lemma \\ref{lem:geometric_lemma} does not depend on the polynomial. In order to do so, we will show that any given sequence of polynomial curves has a subsequence for which \\eqref{eq:hard_jacobian_estimate} holds with a uniformly bounded number of subsets; applied to a would-be counterexample sequence, this shows that there must be a uniform bound for all polynomial curves.\n\n\tThe main challenge in the proof of the uniformity of the number of open sets in Lemma \\ref{lem:geometric_lemma} is the case in which the zeros of the Jacobian \\textit{merge}, that is, the curves $\\gamma_n$ converge to a curve $\\gamma$ such that $J_\\gamma$ has fewer zeros than the $J_{\\gamma_n}$ (not counting multiplicity). We will use zoom-ins near the zeros of $\\gamma$ to keep track of these cases. 
The following lemma is a key tool to do the zoom-in:\n\n\t\\begin{lemma}\\label{lem:annuli_continuity}\n\t\tLet $\\gamma$ be a non-degenerate polynomial curve in $\\mathbb C^d$ of degree $N$ such that $$\\gamma_i = \\prod_{k=1}^{n_i} (z - w_{i,k}) \\prod_{l=1}^{m_i} \\left(1 - \\frac{z}{v_{i,l}}\\right),$$ with $v_{i,l}, w_{i,k} \\in B_{R }\\setminus B_{r}$ and $n_1<n_2< \\dots < n_d$. Then there is a constant $C := C(N, d)$ such that \\eqref{eq:hard_jacobian_estimate} holds on $W \\cap (B_{C^{-1}R }\\setminus B_{C r})$ for any wedge $W$ of angle $\\le \\epsilon(N,d)$.\n\t\\end{lemma}\n\n\tThe lemma (and its proof) can be informally stated as: ``If all the zeros of the components of $\\gamma$ are far from an annulus, then $\\gamma$ behaves like the corresponding moment curve on the annulus\". To prove the lemma we will re-write it into an equivalent form, more suitable for compactness arguments:\n\n\t\\begin{lemma} [Lemma \\ref{lem:degenerate_convergence}, annuli version]\\label{lem:annuli_convergence}\n\t\tLet $\\gamma_n$ be a sequence of polynomial curves for which the roots satisfy $w_{(i,k),n} \\to_{n\\to \\infty} 0$ and $v_{(i,l),n} \\to_{n\\to \\infty} \\infty$ ($v,w$ defined using the notation of the previous lemma), and $\\gamma_n \\to \\mu$, a non-degenerate moment curve. Let $r_n$ define a sequence of annuli $A_n = B_0(1)\\setminus B_0(r_n)$, so that $\\max_{i,k} w_{(i,k),n} = o(r_n)$. Let $\\mbf z_n \\in (A_n \\cap W)^k$, where $W$ is a sector of small enough angle depending on $N,d$ only. Then:\n\n\t\t\\begin{equation}\n\t\t\t\\lim_{j\\to \\infty} \\frac{\\Lambda_{\\gamma_j}({\\bf z}_j)}\n\t\t\t{|\\Lambda_{\\gamma_j}({\\bf z}_j)|}-\\frac{\\Lambda_{\\mu}({\\bf z}_j)}\n\t\t\t{|\\Lambda_{\\mu}({\\bf z}_j)|} = 0\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\begin{proof}\n\t\tThe proof is the same as the proof of Lemma \\ref{lem:degenerate_convergence}. The key difference is the reason why we can zoom in again. In Lemma \\ref{lem:degenerate_convergence} the $\\gamma_n$ were themselves blow-ups, so blowing up did not change the hypotheses of the lemma. Here, the control of $r_n$ ensures that the zeros of $\\gamma_n$ always live at a strictly smaller scale than the blow-up.\n\t\\end{proof}\n\n\t\\begin{remark}\n\t\tNote that in the particular case in which there are no $v_{(i,l),n}$ (that is, all the zeros are going to zero) the annuli can be taken to have exterior radius equal to infinity (that is, the annuli can degenerate to the complement of a disk) or, in the case where all the $w_{(i,k),n}$ are exactly equal to zero, they can be taken to have interior radius equal to 0.\n\t\\end{remark}\n\n\tLemma \\ref{lem:annuli_convergence} that we just proved (the convergence form of Lemma \\ref{lem:annuli_continuity}) lets us control $J_\\gamma$ far from the zeros of the components of $\\gamma$. The zeros of the components, however, depend on the co-ordinates we take. In order to solve this, we will show that there is one \\textit{honest} co-ordinate system in which, if we have a zero of a co-ordinate of $\\gamma$ that has size $O(1)$, then there is also a zero of $J_{\\gamma}$ that has size $O(1)$.\n\n\n\t\\begin{lemma}[Honest zeros lemma]\n\t\\label{lem:honest_zeros}\n\t\tFor a non-degenerate polynomial curve $\\gamma$, let $R(\\gamma)$ be the supremum of the absolute values of the zeros of $J_\\gamma$ that have absolute value smaller than 1. 
Let $r(\\gamma)$ be the supremum of the absolute values of the zeros of the co-ordinates of $\\gamma$ (again counting only the zeros that have absolute value less than 1).\n\n\t\tThen, for any sequence of polynomial curves $\\gamma_n \\to \\mu$, a non-degenerate generalized moment curve, there is a constant $k := k(\\gamma_n)$, a sequence of linear operators $L_n \\in GL(d;\\mathbb C)$ converging to the identity and a sequence of constants $c_n\\to 0$ so that:\n\n\t\t\\begin{equation}\n\t\t\tR(L_n \\gamma_n(z-c_n)) \\ge k \\, r(L_n \\gamma_n(z-c_n))\n\t\t\\end{equation}\n\n\t\tIn other words, after a suitable change of co-ordinates, controlling the zeros of a sequence of polynomial curves allows us to control the zeros of its Jacobian without significant losses.\n \t\\end{lemma}\n\n \t\\begin{proof}\n \t\tWe will assume that $\\mu$ is not the standard moment curve, since otherwise the result is trivial because $J_\\mu = 1$. We choose the $c_n \\to 0$ to re-center the $\\gamma_n$ so that $J_{\\gamma_n}$ always has a zero at zero. Let $n_i$ be the degree of the $i$-th co-ordinate of $\\mu$. By composing with suitable $L_n\\to Id$ we can assume that the coefficient of degree $n_i$ in the $i$-th co-ordinate of $\\gamma_n$ is always $1$, and that the coefficient of degree $n_j$ in the $i$-th co-ordinate (for $j\\neq i$) is $0$ for all $\\gamma_n$. Let $\\tilde \\gamma_n = L_n \\gamma_n(z-c_n)$; then the following holds:\n\n \t\t\\begin{lemma}\n \t\t\\label{lem:honest_helper}\n \t\t\tLet $\\hat\\gamma_n$ be a sequence of zoom-ins to $L_n \\gamma_n(z-c_n)$ at the scale where the zeros of the co-ordinates appear, and assume $\\hat\\gamma_n \\to \\gamma$. Then the multiplicity of the zero of $J_\\gamma$ at zero is strictly smaller than the multiplicity of $J_\\mu$ at zero.\n \t\t\\end{lemma}\n\n \t\tUsing the lemma above, we can finish the proof by contradiction. Assume $\\gamma_n$ is such that $\\tilde \\gamma_n = L_n \\gamma_n(z-c_n)$ contradicts the lemma. Pick a subsequence for which $R(\\tilde\\gamma_n)/r(\\tilde\\gamma_n)$ goes to zero. Let $\\hat \\gamma_n$ be a zoom-in at the scale at which the first zeros of the components of $\\tilde\\gamma_n$ appear. Assume, by passing to a subsequence if necessary, that $\\hat \\gamma_n$ converges. By the hypothesis on $R(\\tilde\\gamma_n)/r(\\tilde\\gamma_n)$, it must be that all the zeros of $J_{\\hat\\gamma_n}$ concentrate back at zero, but this contradicts Lemma \\ref{lem:honest_helper}.\n \t\\end{proof}\n\t\n\t% subsection uniformity_for_polynomials (end)\n\n\n\t\\begin{proof}\n\t\t[Proof (of Lemma \\ref{lem:honest_helper}).]\n\t\tDefine a matrix $M \\in \\mathcal M_{d\\times N}(\\mathbb C)$ so that $M_{i,k}$ is the coefficient of degree $k$ of the $i$-th component. By the transformation we have done, we know the matrix $M$ has rank $d$, that the element $M_{i,n_i}$ is equal to $1$, and that $M_{i,n_j}$ is equal to $0$ for $j \\neq i$. The multiplicity of $J_\\mu$ at zero is $\\sum n_i - \\frac{d^2+d}{2}$. To compute the multiplicity of $J_\\gamma$ at zero, we do the following procedure:\n\n\t\tPick the first column (smallest $k$ index) that is non-zero. Pick the first element of this column (smallest $i$ index) that is non-zero. Define $\\tilde n_i:=k$, where $i$ is the index that is not zero. Row-reduce $M$ so that the $k$-th column is $e_i$. Set all the elements to the right of $(i,\\tilde n_i)$ to zero. Repeat this process $d$ times. 
\n\n\t\tThis procedure is a row-reduction and blow-up procedure at the origin, which shows that, at the origin, the polynomial curve looks like a generalized moment curve of degrees $\\tilde n_i$, where $\\tilde n_i\\le n_i$ with at least one $\\tilde n_i< n_i$. Therefore, the multiplicity at the origin is strictly smaller.\n\t\\end{proof}\n\n\tNow we have all the necessary tools to prove \\eqref{eq:hard_jacobian_estimate} in Lemma \\ref{lem:geometric_lemma} in full generality. The proof is as follows:\n\n\t\\begin{itemize}\n\t\t\\item The proof is a proof by contradiction. Assume there is a sequence of $\\gamma_n$ for which the minimum number of sets needed for the geometric Lemma \\ref{lem:geometric_lemma} to hold grows to infinity. The contradiction will come from showing that a certain subsequence of the $\\gamma_n$ can be covered by a bounded number of subsets.\n\t\t\\item By passing to a subsequence if necessary, and re-parametrizing, we will assume that the $\\gamma_n$ converge to a non-degenerate generalized moment curve $\\hat \\gamma$, and that all the zeros of $J_{\\gamma_n}$ converge to the origin, with one zero of $J_{\\gamma_n}$ being exactly at the origin.\n\t\t\\item By Lemma \\ref{lem:annuli_continuity} [convergence of the Jacobian on annuli with possibly infinite radius], after a suitable reparametrization if necessary, we can uniformly cover $\\mathbb C\\setminus B_{r_n}$ with wedges so that property \\eqref{eq:hard_jacobian_estimate} holds, where $r_n$ is proportional (with a constant depending on $d,N$) to the size of the biggest zero of $J_{\\gamma_n}$. After a suitable change of coordinates by Lemma \\ref{lem:honest_zeros}, this is equivalent to taking $r_n$ to be of the size of the biggest zero of a component of $\\gamma_n$.\n\t\t\\item Zoom in to the polynomials $\\gamma_n$ at scale $r_n$ to obtain the polynomials $\\gamma'_n$. We now have to show that the theorem holds for $\\gamma'_n$ on the unit ball. By passing to a subsequence, assume WLOG that the polynomials converge to a non-degenerate polynomial curve $\\gamma'$. Note that the zeros of $J_{\\gamma'_n}$ cannot all converge to the origin (because for each $\\gamma'_n$ there is a zero of size $O(1)$). \n\t\t\\item By Lemma \\ref{lem:annuli_continuity} again, we can find a sequence of annuli of outer radius $O(1)$, centered at the zeros of the components of $\\gamma'_n$, so that condition \\eqref{eq:hard_jacobian_estimate} holds after splitting the annuli into wedges.\n\t\t\\item On the intersection of the exteriors of all the annuli (by the exterior we mean the connected component of the complement containing infinity), property \\eqref{eq:hard_jacobian_estimate} holds for $n$ big enough after splitting into $O(1)$ sets, by compactness and Proposition \\ref{prop:local_nondegenerate_jacobian} [the local version of \\eqref{eq:hard_jacobian_estimate} in the non-degenerate case].\n\t\t\\item Therefore, it suffices to prove that property \\eqref{eq:hard_jacobian_estimate} holds in the interior component of the complement of the annuli. 
But this can be done by induction: zoom in on each of those components (which, by hypothesis, have lower degree than the original one), and repeat the argument.\n\t\\end{itemize}\n\n\t\\subsection{Injectivity of the $\\Sigma$ map} % (fold)\n\t\\label{sub:injectivity_of_the_}\n\n\tThe goal of this section is to prove the last part of Lemma \\ref{lem:geometric_lemma}, which we re-state:\n\n\t\\begin{lemma}\n\t\tFor each triangle $T_j$ described in the proof of Lemma \\ref{lem:geometric_lemma} there is a closed, zero-measure set $R_j \\subseteq T_j^d$ so that the sum map $\\Sigma(z):=\\sum_{i=1}^d \\gamma(z_i)$ is $O_N(1)$-to-one in $T_j^d\\setminus R_j$.\n\t\\end{lemma}\n\t\n\n\t\\begin{proof}\n\t\tOur set $R_j$ is the set of tuples for which $z_i=z_{i'}$ for some $i\\neq i'$. The fact that $\\Lambda_{\\gamma'}(z_1, \\dots z_d)$, the Jacobian of $\\Sigma$, does not vanish in $T_j^d\\setminus R_j$ (a consequence of \\eqref{eq:hard_jacobian_estimate}) tells us that $(z_1, \\dots z_d)$ does not belong to a positive-dimensional irreducible component of the variety $\\{(x_1, \\dots x_d) \\in \\mathbb C^d \\mid \\Sigma(x_1, \\dots x_d) = \\Sigma(z_1, \\dots z_d)\\}$. Therefore, the result follows from Bezout's theorem.\n\t\\end{proof}\n\t% subsection injectivity_of_the_ (end)", "meta": {"hexsha": "cf9ce86f2209b157c330e6c6e38fc851a65f26e6", "size": 29857, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Complex Curves Paper/Parts/geometric_lemma.tex", "max_stars_repo_name": "jaumededios/Restriction", "max_stars_repo_head_hexsha": "3e6600c352a4b2b89ce572be320ee2c312a0b0c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Complex Curves Paper/Parts/geometric_lemma.tex", "max_issues_repo_name": "jaumededios/Restriction", "max_issues_repo_head_hexsha": "3e6600c352a4b2b89ce572be320ee2c312a0b0c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Complex Curves Paper/Parts/geometric_lemma.tex", "max_forks_repo_name": "jaumededios/Restriction", "max_forks_repo_head_hexsha": "3e6600c352a4b2b89ce572be320ee2c312a0b0c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.1596858639, "max_line_length": 828, "alphanum_fraction": 0.7056636635, "num_tokens": 9570, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672181749421, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5932179286107087}}
{"text": "\\section{Parallel VGLCS Algorithm} \\label{sec:parallelVGLCS}\n\n\\subsection{Basic Dynamic Programming}\n\nWe first describe a basic dynamic programming for\nVGLCS~\\cite{Peng2011TheLC}. Let $A$ and $B$ denote two input strings\nof length $n$ and $m$ respectively, and $G_A$ and $G_B$ be the arrays\nof the variable gap constraints.  We define $V[i][j]$ to be the {\\em\n  maximum} length of the variable gapped longest common subsequence\nbetween substring $A[1, i]$ and $B[1, j]$.  It is easy to see that\n$V[i][j]$ is the {\\em maximum} among $V[k][l]$, where $k$ is between\n$i-1$ and $i-G_A(i)-1$, and $l$ is between $j-1$ and $j-G_B(j)-1$,\ni.e., a rectangle within $V$ on the left and upper of $V[i][j]$.\nPlease refer to Figure~\\ref{fig:fig-VGLCS-dp-naive} for an\nillustration.\n\n\\begin{figure}[!thb]\n  \\centering \\subfigure[How to compute $V$.] {\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp-naive.pdf}\n    % \\caption{How to compute $V$.}\n    \\label{fig:fig-VGLCS-dp-naive}\n  } \\subfigure[Compute $V$ with incremental suffix maximum queries.] {\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp.pdf}\n    \\label{fig:fig-VGLCS-dp}\n  }\n  \\caption{The basic dynamic programming for VGLCS}\n  \\label{fig:basic-dp-VGLCS}\n\\end{figure}\n\nThe computation of $V$ can be optimized as follows.  Note that the\ncomputation of all $V[i][j]$'s with the same $i$ has the {\\em same} gap\nconstraint $G_A(i)$, so the maximum within the rectangle can be computed\nin two steps.  First, we compute the maximum of {\\em every column} of\nthis rectangle, and place them into another array $R$. Then we compute\nthe maximum of the {\\em suffix} of length $G_B$ on $R$, which is exactly\n$V[i][j]$.  Please refer to Figure~\\ref{fig:fig-VGLCS-dp} for an\nillustration.\n\nWe note that this optimization requires maximum queries on the suffix\n{\\em incrementally} in the following sense.  Recall that in the first\nstep of the optimization, we need to compute the maximum of every column\nwithin the rectangle.  This is just like finding the maximum of the {\\em\nsuffix} of every column in that rectangle.  After we compute the $i$-th\nrow of $V$ and go to the next row to compute $V[i+1][*]$, we will then\nneed the maximum of the suffix of every column of length $G_A(i+1)$. It\nwill be beneficial if we put the $V$'s in each column into a data\nstructure that supports suffix maximum query.  Similarly, the\ncomputation within the same row, that is, from $V[i][j]$ to $V[i][j+1]$,\nalso requires the maximum of the suffix on $R$.  We will refer to this\ntype of queries as {\\em incremental suffix maximum query}, i.e., we\nwould like to maintain a data structure form which we can find the\nmaximum of its suffix efficiently while we add data at its end.\n\n\\subsection{Peng's Algorithm}\n\nThe sequential VGLCS algorithm~\\ref{alg:serial-VGLCS} by\nPeng~\\cite{Peng2011TheLC} applies the optimization and is shown as\nAlgorithm~\\ref{alg:serial-VGLCS}.  The outer loop goes through every\nrow, and the inner loop goes through every element of a row from left to\nright.  We use an array of $C$ to answer incremental maximum queries on\nall columns.  
\n\\subsection{Peng's Algorithm}\n\nThe sequential VGLCS algorithm (Algorithm~\\ref{alg:serial-VGLCS}) by\nPeng~\\cite{Peng2011TheLC} applies this optimization.  The outer loop goes\nthrough every row, and the inner loop goes through every element of a row\nfrom left to right.  We use an array $C$ of data structures to answer\nincremental suffix maximum queries on all columns.  That is, we can think\nof $C[j]$ as a data structure that supports incremental suffix maximum\nqueries on the $j$-th column of $V$.\nFrom the previous observation that the computation of all elements in the\n$i$-th row of $V$ shares the same gap $G_A(i)$, we will query each $C$\nfor the maximum in the suffix that ends at row $i-1$ with length\n$G_A(i)+1$, and place these maxima in another data structure $R$ that\nalso supports incremental suffix maximum queries.  It is easy to see\nthat the value of $V[i][j]$ can be obtained by querying $R$ with a\nsuffix maximum query of length $G_B(j)+1$ as shown in\nFigure~\\ref{fig:fig-VGLCS-dp}.\n\nWe update $V$, $C$, and $R$ as follows.  If the $i$-th character of $A$\nmatches the $j$-th character of $B$, then $V[i][j]$ is the maximum among\nthe rectangle plus 1, as shown in Figure~\\ref{fig:fig-VGLCS-dp}. This\nmaximum can be found by querying $R$ for the maximum among the last\n$G_B(j)+1$ elements in it.  Note that $R$ contains the information of the\nprevious row, up to the $(j-1)$-th element.  After that, we\nadd the maximum of the last $G_A(i)+1$ entries of the $j$-th column into $R$, and\nthe newly computed $V[i][j]$ into $C[j]$, which supports ISMQ on the\n$j$-th column, before going to column $j+1$.  If the $i$-th character\nof $A$ does {\\em not} match the $j$-th character of $B$, we simply set\n$V[i][j]$ to 0 since it does not affect the answer, then again update\n$R$ accordingly.\n \n \n\\input{\\AlgoPath/alg-serial-VGLCS-2e}\n\n\\subsection{Incremental Suffix Maximum Query}\n\nFrom the previous discussion of Peng's algorithm, we note that in order\nto find the VGLCS efficiently, we need to address the {\\em incremental\nsuffix maximum query} (ISMQ) problem.  A data structure that supports\nincremental suffix maximum queries should support three operations.\nFirst, a {\\sc Make} operation creates an empty array $A$. Second, an\n{\\sc Append}$(V)$ operation appends a value $V$ to array $A$. Finally, an\nISMQ {\\sc Query}$(x)$ finds the {\\em maximum} value among those from\nposition $x$ to the end of the array $A$.\n\nPeng uses a {\\em disjoint-set} data structure to answer incremental\nsuffix maximum queries in his VGLCS algorithm.  The disjoint-set data\nstructure was proposed by Gabow~\\cite{Gabow1983ALA} and\nTarjan~\\cite{Tarjan1975EfficiencyOA} to solve the {\\em union-and-find\nproblem}.  The data are stored in a sequence of disjoint sets, the\n{\\em maximum} of each disjoint set is at the root of its tree, and\nthese maxima are in {\\em decreasing} order.  When we add a value $x$,\nwe make it a disjoint set with a single element by itself, placed as the\n{\\em last} disjoint set in the sequence.  Then we start joining (with\nthe union operation) the last set with its previous set until the maximum\nof the previous set is {\\em larger} than $x$. It is easy to see that the\n{\\sc Query}$(x)$ operation is simply a {\\em find} operation that finds the\nroot, which holds the {\\em maximum}, of the tree that $x$ belongs to.  The\namortized time per union/find operation is $O(\\alpha(n))$.\n
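\nA minimal sequential sketch of this structure (our own naming and\n0-based indexing, shown only to make the invariants concrete):\n\n\\begin{verbatim}\n#include <vector>\n\n// ISMQ via a disjoint-set forest.  Invariants: each set is an interval\n// of indices, its root is its rightmost element and holds the set\n// maximum, and the root maxima strictly decrease from left to right.\nstruct DisjointSetISMQ {\n    std::vector<int> parent, value;\n    std::vector<int> roots;             // current roots, left to right\n\n    void append(int v) {                // Append(v)\n        int id = (int)parent.size();\n        parent.push_back(id);\n        value.push_back(v);\n        // Union: absorb previous sets whose maximum is at most v.\n        while (!roots.empty() && value[roots.back()] <= v) {\n            parent[roots.back()] = id;\n            roots.pop_back();\n        }\n        roots.push_back(id);\n    }\n    int find(int x) {                   // find with path halving\n        while (parent[x] != x) {\n            parent[x] = parent[parent[x]];\n            x = parent[x];\n        }\n        return x;\n    }\n    int query(int x) {                  // Query(x): max of A[x .. end]\n        return value[find(x)];\n    }\n};\n\\end{verbatim}\n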
\n\\subsection{A Parallel VGLCS Algorithm with Sparse Table}\n\nThe sequential VGLCS algorithm (Algorithm~\\ref{alg:serial-VGLCS}) by\nPeng~\\cite{Peng2011TheLC} and other variants of LCS are difficult to\nparallelize in a row-by-row manner.  These algorithms use several\nstates to determine a new state with a dynamic programming recurrence.  This\nconstruction induces {\\em heavy data dependencies}, and is difficult to\nparallelize in a naive row-by-row manner because an element of the\ndynamic programming table needs the values of elements in the {\\em same}\nrow to compute its value.\n\nIt is also difficult to parallelize Peng's algorithm with the wavefront\nmethod because it requires {\\em extra space} to keep track of row\nstatus, which is not required in a sequential algorithm. Recall that\nVGLCS requires a rectangle of data in $V$ to compute a new element; if\nthose $V$'s are computed along a diagonal wavefront, those rectangles of\ndata will require extra bookkeeping since the gap constraints of those\nelements on the wavefront could be very different.  Please refer to\nFigure~\\ref{fig:fig-VGLCS-dp-wavefront} for an illustration of the\ndata that must be present (in solid lines).  Also, the new elements\ncomputed must be appended to the data structures according to the\nwavefront, which incurs more bookkeeping and overhead.\n\n\\begin{figure}[!thb]\n  \\centering \\subfigure[The first stage]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp-wavefront-second.pdf}\n  } \\subfigure[The second stage]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp-wavefront-first.pdf}\n  }\n  \\caption{The bookkeeping data of the wavefront method}\n  \\label{fig:fig-VGLCS-dp-wavefront}\n\\end{figure}\n\nWe propose a parallel VGLCS algorithm with an optimized row-by-row\napproach, which maintains only {\\em one} row data structure that\ncollects suffix maximum values from all columns.  That is, our\noptimized row-by-row approach removes the data dependency among\nelements {\\em within the same row}.\n\nOur optimized row-by-row approach uses less space, has a more balanced\nworkload, and incurs a smaller thread-synchronization overhead than the\nwavefront method.  We observe that the length of the critical path of\na wavefront method is greater than that of a row-by-row method.  In\naddition, if we can remove the data dependency among elements within\nthe same row, then the computation on the elements of the same row can\nbe {\\em fully} and {\\em evenly} parallelized.  The result is a much\nmore balanced workload distribution and much easier synchronization\namong threads.\n\n\n% On the other hand, if we use the\n% Maleki's~\\cite{Maleki2016EfficientPU} technique, % need to explain\n% this technique it also uses extra space to maintain state\n% translation, and spends more time to merge data.  Therefore, it is\n% crucial that our parallelization conserves {\\em both} memory and\n% time.\n\n% It seems that you should use the new figures to explain these???\n% Morris: how to make it to row-by-row manner\n\n\nA sketch of our algorithm is as follows.  Our algorithm computes $V$ one\nrow at a time.  The computation of each row has two stages.  In the\nfirst stage, the algorithm queries each data structure $C$ for every\ncolumn within the rectangle {\\em in parallel}, so as to obtain the\nmaxima of the suffixes of length $G_A(i) + 1$ of every column, and places\nthem into an array $R$. Recall from Algorithm~\\ref{alg:serial-VGLCS}\nthat every column of $V$ has a data structure $C$ that supports\nincremental suffix maximum queries on that column.  Please refer to\nFigure~\\ref{fig:fig-VGLCS-dp-rmq} for an illustration.\n\nIn the second stage, our algorithm issues {\\em range maximum queries},\none for each column, on $R$ to compute all $m$ elements of the $i$-th\nrow of $V$ {\\em in parallel}.  
Note that unlike the sequential\nalgorithm, we compute all elements in the $i$-th row of $V$ in parallel,\nso we cannot query the {\\em suffix} of $R$.  Instead, we need to query a\n{\\em range} of $R$ for the maximum, where the range is given by the gap\nconstraint on that column.  Please refer to\nFigure~\\ref{fig:fig-VGLCS-dp-rmq} for an illustration.  Note that we\nneed to add the newly computed $V[i][j]$ into the $C$ of the $j$-th\ncolumn {\\em incrementally}, so that the $C$'s will contain the correct\ninformation for the computation of the $(i+1)$-th row.  Also, since the\nalgorithm iterates over rows, these $C$'s only need to support suffix\nmaximum queries; no range query on them is required.  In contrast, we do\nneed to support range maximum queries on $R$, and these queries will be\nissued in parallel.\n\n\\begin{figure}[!thb]\n  \\centering \\subfigure[The first stage]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp-rmq-first.pdf}\n  } \\subfigure[The second stage]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-VGLCS-dp-rmq-second.pdf}\n  }\n  \\caption{Two stages of the computation of one row of $V$.}\n  \\label{fig:fig-VGLCS-dp-rmq}\n\\end{figure}\n\nTo resolve the data dependency, we need a data\nstructure that can handle incremental suffix/range maximum queries {\\em in\nparallel}.  We note that it is {\\em not} feasible to parallelize the\ndisjoint-set implementation, for three reasons.  First, a query on a\ndisjoint set changes the data structure, because a lookup will {\\em\ncompress} the path to the root.  It is difficult to maintain a\nconsistent view of the data structure when multiple threads are\ncompressing paths {\\em simultaneously}.  Second, when multiple\nthreads are compressing different paths, the load among them could be\nvery different, which will incur load imbalance.  Third, there will\nbe a large number of threads working on different parts of the disjoint\nsets, so it will be difficult to synchronize them efficiently.\n\n\\subsubsection{Sparse Table} \\label{sec:sparse-table}\n\nSince the disjoint set cannot be implemented efficiently in parallel,\nwe use a {\\em sparse table}~\\cite{Berkman1993RecursiveSP} to support\nincremental suffix/range maximum queries in our VGLCS algorithm.\nA sparse table~\\cite{Berkman1993RecursiveSP} requires $O(n \\log n)$\npreprocessing, and can support range maximum queries in $O(1)$ time on\none-dimensional data.  A sparse table is a two-dimensional array.  The\nelement of a sparse table in the $j$-th row and $i$-th column is the\nmaximum among the $i$-th element and its $2^j - 1$ predecessors in\nthe input array.\n\nWe give an example of the sparse table\n(Figure~\\ref{fig:interval-decomposition}).  The input is in array $A$.\nThen we build a sparse table $T$ on $A$ as described earlier.  Now a\nrange maximum query on $A$ can be answered by at most {\\em two} queries\ninto the sparse table.  For example, if the query is from 2 to 13, then\nthe answer is the maximum of the range from 2 to 9 ($T[3][9]$) and the\nrange from 6 to 13 ($T[3][13]$).  Both are from the third level of the\ntable since each holds the maximum of $2^3 = 8$ elements of the input.
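\n\nThe following Python sketch shows this construction and the two-lookup\nquery; the function names are ours, and the convention follows the\ndescription above (row $j$ holds maxima of windows ending at position $i$).\n\n\\begin{verbatim}\ndef build_sparse_table(a):\n    # T[j][i] = max of a[i - 2**j + 1 .. i]; row j is only\n    # meaningful for i >= 2**j - 1 (we copy row j-1 otherwise).\n    n = len(a)\n    T = [list(a)]\n    j = 1\n    while (1 << j) <= n:\n        half = 1 << (j - 1)\n        prev = T[-1]\n        T.append([max(prev[i], prev[i - half])\n                  if i >= 2 * half - 1 else prev[i]\n                  for i in range(n)])\n        j += 1\n    return T\n\ndef range_max(T, lo, hi):\n    # Maximum of a[lo..hi] via at most two overlapping lookups.\n    k = (hi - lo + 1).bit_length() - 1   # largest k: 2**k <= length\n    return max(T[k][lo + (1 << k) - 1], T[k][hi])\n\na = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7]\nT = build_sparse_table(a)\nassert range_max(T, 2, 13) == max(a[2:14])   # the example above\n\\end{verbatim}\n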
\n\n\\begin{figure}[!thb]\n  \\centering \\subfigure[Array]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-interval-decomposition-origin.pdf}\n    \\label{fig:fig-interval-decomposition}\n  } \\subfigure[Sparse table]{\n    \\includegraphics[width=0.45\\linewidth]{\\GraphicPath/fig-sparse-table-origin.pdf}\n    \\label{fig:fig-sparse-table}\n  }\n  \\caption{A sparse table example}\n  \\label{fig:interval-decomposition}\n\\end{figure}\n\nIt is easy to see that one can build a sparse table in parallel\nefficiently.  Please refer to\nAlgorithm~\\ref{alg:parallel-sparse-table} for details.\nAlgorithm~\\ref{alg:parallel-sparse-table} builds the sparse table in\nparallel in $O(n \\log n / p + \\log n)$ time, where $n$ is the\nnumber of elements and $p$ is the number of processors.  This\nalgorithm is very easy to parallelize and implement.\n\n\\input{\\AlgoPath/alg-parallel-sparse-table-2e}\n\n\\subsubsection{A Parallel VGLCS with Sparse Table}\n\nThe operations on a sparse table are much easier to parallelize than\nthose on a disjoint set, which is used within the inner loop of Peng's\nsequential VGLCS algorithm.  The inner loop of Peng's algorithm\nalternates between append and query operations on $R$.  Please refer\nto Algorithm~\\ref{alg:serial-VGLCS} for details.  This alternation\nbetween appending and querying incurs heavy data dependency.  In\naddition, the parallelism of operations on a disjoint-set tree is limited\nby the length of the path under compression.  This length is usually very\nshort and provides very limited parallelism.\n\nThe pseudo code of our parallel VGLCS algorithm with sparse table is\ngiven in Algorithm~\\ref{alg:parallel-VGLCS}.  The algorithm computes\n$V$ one row at a time.  The computation of each row has two stages.\nIn the first stage, the algorithm queries the $C$'s {\\em in parallel}\nto obtain the suffix maxima of length $G_A+1$ and places them\ninto an array $R$.  We then build a sparse table $T$ from the data of\n$R$.  In the second stage, the algorithm queries $T$ to find the\nrange maxima in $R$ and computes all elements in the $i$-th row of $V$\n{\\em in parallel}.\n\n\\input{\\AlgoPath/alg-parallel-VGLCS-2e}\n\nThe implementations of the two stages pose different challenges.  The\nfirst stage is easier to parallelize because the operations on\nindividual columns are {\\em independent}.  However, it inserts new\ndata into the $C$'s, and still needs to answer suffix queries efficiently in\norder to build the $R$ array.  The second stage does {\\em not}\nrequire insertion, so it is more static.  However, since we compute\nall $V$'s in the same row in parallel, it requires {\\em range\n  queries}, instead of suffix queries, on the sparse table $T$.  The\nnext two sections describe our approaches to the\nchallenges of the two stages.  For ease of presentation we will describe\nour approach for the second stage in Section~\\ref{sec:parallelRMQ}\nfirst.  
Then we address the first stage in Section~\\ref{sec:QIUD}.\n", "meta": {"hexsha": "94c0f13f58688636b925f7e1dc437023737778e8", "size": 15920, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/IEEE/partial/parallel-VGLCS-en.tex", "max_stars_repo_name": "morris821028/parallel-VGLCS", "max_stars_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-02-11T08:45:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T07:30:24.000Z", "max_issues_repo_path": "doc/IEEE/partial/parallel-VGLCS-en.tex", "max_issues_repo_name": "morris821028/parallel-VGLCS", "max_issues_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2017-02-21T02:01:16.000Z", "max_issues_repo_issues_event_max_datetime": "2017-02-24T00:13:34.000Z", "max_forks_repo_path": "doc/IEEE/partial/parallel-VGLCS-en.tex", "max_forks_repo_name": "morris821028/parallel-VGLCS", "max_forks_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.5412541254, "max_line_length": 94, "alphanum_fraction": 0.7542085427, "num_tokens": 4338, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971211, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5932179273033674}}
{"text": "\n \\documentclass[pre,twocolumn,superscriptaddress]{revtex4} \n\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{booktabs}\n\\usepackage{caption}\n\\usepackage{color}\n\\usepackage{comment}\n\\usepackage{float}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage[utf8]{inputenc} % allows using accents directly in text, like \u00d4\u00f8\u03a9\n\\usepackage{subfig}\n\\usepackage{xspace}\n%\\usepackage{cuted}\n\n\\captionsetup{justification=raggedright,\nsinglelinecheck=false\n}\n\n\\newcommand{\\pygbe}{\\texttt{PyGBe}\\xspace}\n\\newcommand{\\gb}{{\\small G\\,B1\\,D4$^\\prime$}\\xspace}\n\\newcommand{\\gmres}{\\textsc{gmres}\\xspace}\n\\newcommand{\\bem}{\\textsc{bem}\\xspace}\n\\newcommand{\\ses}{\\textsc{ses}\\xspace}\n\\newcommand{\\sam}{\\textsc{sam}}\n\\newcommand{\\gpu}{\\textsc{gpu}}\n\\newcommand{\\cpu}{\\textsc{cpu}}\n\\newcommand{\\apbs}{\\textsc{apbs}\\xspace}\n\\newcommand{\\nvidia}{\\textsc{nvidia}\\xspace}\n\\newcommand{\\msms}{\\texttt{\\textsc{msms}}\\xspace}\n\\newcommand{\\amber}{\\texttt{\\textsc{amber}}\\xspace}\n\\newcommand{\\ccby}{\\textsc{cc-by}\\xspace}\n\\newcommand{\\bigO}{\\mathcal{O}}\n\\renewcommand{\\O}[1]{\\mathcal{O}(#1)}\n\n\\graphicspath{{figs/}} %  PATH to figure files-- change to ./ for submission\n\n\n\n\\begin{document}\n\n\n\\title{Computational nanoplasmonics in the quasistatic limit for biosensing applications}\n\n\\author{Natalia C. Clementi}\n\\email{ncclementi@gwu.edu}\n\\affiliation{Department of Mechanical \\& Aerospace Engineering, The George Washington University, Washington, D.C.}\n\n\\author{Christopher D. Cooper}\n\\email{christopher.cooper@usm.cl}\n\\affiliation{Department of Mechanical Engineering and Centro Cient\\'ifico Tecnol\\'ogico de Valpara\\'iso, Universidad T\\'ecnica Federico Santa Mar\\'ia, Valpara\\'iso, Chile.}\n\n\\author{Lorena A.~Barba}\n\\email{labarba@gwu.edu}\n\\affiliation{Department of Mechanical \\& Aerospace Engineering, The George Washington University, Washington, D.C.}\n%\\date{\\today}\n\n\n\\begin{abstract} % in revtex4, the abstract must come before the \\maketitle command\n\nThe phenomenon of localized surface plasmon resonance provides high sensitivity in detecting biomolecules through shifts in resonance frequency when a target is present. \nComputational studies in this field have used the full Maxwell equations with simplified models of a sensor-analyte system, or neglected the analyte altogether. \nIn the long-wavelength limit, one can simplify the theory via an electrostatics approximation, while adding geometrical detail in the sensor and analytes (at moderate computational cost).\nThis work uses the latter approach, expanding the open-source \\pygbe code to compute the extinction cross-section of metallic nanoparticles in the presence of any target for sensing.\nThe target molecule is represented by a surface mesh, based on its crystal structure. \n\\pygbe is research software for continuum electrostatics, written in Python with computationally expensive parts accelerated on GPU hardware, via PyCUDA.\nIt is also accelerated algorithmically via a treecode that offers $\\mathcal{O}(N \\log N)$ computational complexity. \nThese features allow \\pygbe to handle problems with half a million boundary elements or more.\nIn this work, we demonstrate the suitability of \\pygbe, extended to compute LSPR response in the electrostatic limit, for biosensing applications. 
\nUsing a model problem consisting of an isolated silver nanosphere in an electric field, our results show grid convergence as $1/N$, and accurate computation of the extinction cross-section as a function of wavelength (compared with an analytical solution).\nFor a model of a sensor-analyte system, consisting of a spherical silver nanoparticle and a set of bovine serum albumin (BSA) proteins, our results again obtain grid convergence as $1/N$ (with respect to the Richardson extrapolated value).\nComputing the LSPR response as a function of wavelength in the presence of BSA proteins captures a red-shift of 0.5 nm in the resonance frequency due to the presence of the analytes at 1-nm distance.\nThe final result is a sensitivity study of the biosensor model, obtaining the shift in resonance frequency for various distances between the proteins and the nanoparticle.\nAll results in this paper are fully reproducible, and we have deposited in archival data repositories all the materials needed to run the computations again and re-create the figures. \\pygbe is open source under a permissive license and openly developed. Documentation is available at \\url{http://barbagroup.github.io/pygbe/docs/}. \n\\end{abstract}\n\n\\maketitle\n\n% Body of paper.\n\n\\section{Introduction} \\label{sec:intro}\n\\input{introduction}\n\n%=============\n\\section{Methods}\\label{sec:methods}\n\\input{methods} \n\n%=============\n\\section{Results} \\label{sec:results}\n\\input{results}\n\n%=============\n\\section{Discussion} \\label{sec:discussion}\n\\input{discussion}\n\n\\section{Conclusion}\n%\\input{conclusion}\n\nIn this work, we combined the implicit-solvent model of electrostatic interactions in \\pygbe \nwith a long-wavelength representation of LSPR response in nanoparticles. \nWe extended \\pygbe to work with complex-valued quantities, and added functionality to \ninclude an imposed electric field and compute relevant quantities \n(dipole moment, extinction cross-section). \nPrevious work with \\pygbe showed its suitability for computing \nbiomolecular electrostatics considering solvent-filled cavities and Stern layers \\cite{CooperBardhanBarba2013}, \nand for protein-surface electrostatic interactions \\cite{CooperBarba2016}.\nThis latest extension can offer a valuable computational approach to study nanoplasmonics and aid in the design of LSPR biosensors. 
\nThanks to algorithmic acceleration with a treecode, and hardware acceleration with GPUs, \\pygbe is able to compute problems with half a million elements, or more, which is required to represent the molecular surface accurately.\n\n\n\n\\begin{acknowledgments}\n\nCDC acknowledges the financial support from CONICYT through projects FONDECYT Iniciaci\\'on 11160768 and Basal FB0821.\n\\end{acknowledgments}\n\n% Create the reference section using BibTeX:\n%\\bibliographystyle{} % revtex journal sub-style automatically sets this\n\\bibliography{compbio,bem,scicomp,fastmethods,biosensors} %don't leave spaces between elements, it throws error\n\n\\end{document}\n", "meta": {"hexsha": "a2b3d692e8133fa01c27cd38208480938d5715e3", "size": 6230, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/ClementiCooperBarba2018.tex", "max_stars_repo_name": "barbagroup/pygbe_lspr_paper", "max_stars_repo_head_hexsha": "4517bb350646fdbab483b68fd0685600149faa12", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-12-01T03:19:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T21:00:00.000Z", "max_issues_repo_path": "tex/ClementiCooperBarba2018.tex", "max_issues_repo_name": "barbagroup/pygbe_lspr_paper", "max_issues_repo_head_hexsha": "4517bb350646fdbab483b68fd0685600149faa12", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-08-03T15:45:54.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-03T22:28:51.000Z", "max_forks_repo_path": "tex/ClementiCooperBarba2018.tex", "max_forks_repo_name": "barbagroup/pygbe_lspr_paper", "max_forks_repo_head_hexsha": "4517bb350646fdbab483b68fd0685600149faa12", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-01T03:19:36.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-01T03:19:36.000Z", "avg_line_length": 49.84, "max_line_length": 332, "alphanum_fraction": 0.793258427, "num_tokens": 1555, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672089305841, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5932179218242448}}
{"text": "\\title{\\bf Structure Formation}\n\n\\section{Basics \\& Nomenclature}\n\nWithin the overall cosmological growth, fluctuations grow through\ngravitation. Inflationary theory predicts that these fluctuations\noriginate from quantum fluctuations frozen in as progressively larger\nscales become causally disconnected in inflation. Cosmic microwave\nbackground observations of temperature fluctuations find that at\nrecombination ($z\\sim 1100$) the density fluctuations were\nfractionally of order $10^{-5}$. These fluctuations grow first\nlinearly and then nonlinearly to form bound structures known as {\\it\n  dark matter haloes}. It is within these bound structures that\ngalaxies form.\n\nAt lower redshifts, we can define the matter fluctuations around the\nhomogenous density $\\rho_0$:\n\\begin{equation}\n\\frac{\\rho}{\\rho_0} = 1+ \\delta\n\\end{equation}\nWhen it is considered in configuration space, $\\delta$ is often\nfiltered on some scale $\\gg 1$ kpc.  However, we often quantify the\ntwo point statistics of this field using the power spectrum $P(k)$ of\n$\\delta$ and a corresponding correlation function $\\xi(r)$.\n\nThe power spectrum is defined as:\n\\begin{equation}\n\\left\\langle \\tilde\\delta(\\vec{k}) \\tilde\\delta(\\vec{k}') \\right\\rangle\n= (2\\pi)^3 \\delta_D\\left(\\vec{k} - \\vec{k}'\\right) P(k).\n\\end{equation}\nIn plain language, the power spectrum is the variance in the\namplitudes of the Fourier mode amplitudes as a function of wavenumber\n$k$. The Fourier transform of the power spectrum is the correlation\nfunction:\n\\begin{equation}\n\\left\\langle\\delta\\left(\\vec{x}\\right) \\delta\\left(\\vec{x}\n+ \\vec{r} \\right) \\right\\rangle = \\xi(r).\n\\end{equation}\nIn plain language, the correlation function is the excess probability\nof finding a pair of galaxies with separation $r$, above the\nprobability for a spatially uniform Poisson distribution with the same\nnumber density of galaxies.  Typically, $P(k)$ and $\\xi(r)$ are\nexpressed in comoving coordinates, because as discussed below under\nlinear theory their overall shape in comoving coordinates is retained,\nand only their amplitude changes.\n\nThe inflationary $\\Lambda$CDM prediction for $P(k)$ is that during the\nera of linear gravitational growth, on large scales (low $k$) its\npower law slope is $n\\sim 1$ and on small scales (high $k$) its power\nlaw slope is $n\\sim -3$ (e.g. \\citealt{bardeen86a}; Appendix G). The\nturnover scale is associated with the Hubble scale at matter-radiation\nequality, for reasons explored in the exercises. We can characterize\nthe overall second-order amplitude fluctuations on any scale as:\n\\begin{equation}\n\\Delta(k) \\sim k^3 P(k)\n\\end{equation}\nwhich makes it clear that the strongest fluctuations are on the\nsmallest scales, a characteristic known as {\\it hierarchical\nclustering}. As shown below, $\\Delta(k)$ will undergo a linear growth\nphase at early times.  When $\\Delta(k) \\sim 1$, fluctuations on that\nscale go nonlinear, and in general the growth rate of fluctuations\naccelerates. Because smaller scales clearly go nonlinear first, this\nprocess leads to a nonlinear power spectrum flatter than the linear\nspectrum.\n\nThe overall amplitude is often quantified by $\\sigma_8$, which is the\nstandard deviation of fluctuations in 8 $h^{-1}$ Mpc radius spheres,\nwhich can also be expressed as an integral of $P(k)$. 
When the\nequivalent quantity $\\sigma_{8,g}$ for galaxies is measured in the\ngalaxy distribution, this quantity is expressed as the observed level\nof fluctuations, and consequently includes the nonlinear effects\npresent in the real universe. When $\\sigma_8$ of the matter is\ninferred from cosmological observations (the cosmic microwave\nbackground, or gravitational lensing, or redshift space distortions)\nit is usually defined as the primordial $\\sigma_8$ linearly evolved to\n$z=0$ or the redshift in question.\n\nIn galaxy surveys, $\\delta$ is not directly observable, but the\noverdensity $\\delta_g$ of some particular class of galaxies can be. On\nlarge scales, where $\\delta\\ll 1$, often it is sufficient to\napproximate the relationship between the two with a {\\it linear, local\ngalaxy bias}:\n\\begin{equation}\n\\delta_g(\\vec{x}) \\approx b \\delta(\\vec{x})\n\\end{equation}\nOn small scales this relationship cannot remain linear and in general\ncannot be local either. Bias can alternatively be defined as\n$\\sigma_{8,g} / \\sigma_8$ (or equivalent statistical quantities on\nlarger scales). In general the halo occupation distribution model is a\nmore accurate description of the relationship between galaxies and\nmatter, but the concept of galaxy bias as defined here is still\nuseful, especially on linear scales.\n\nTo understand the linear growth, we start with the equations of motion\nfor a pressureless, gravitating fluid:\n\\begin{eqnarray}\n\\label{eq:homogeneous}\n\\frac{\\mathrm{D}\\vec{v}}{\\mathrm{D}t} &=& - \\vec\\nabla\\phi\n\\mathrm{\\quad (Euler's~equation)}\\cr\n\\frac{\\mathrm{D}\\rho}{\\mathrm{D}t} &=& - \\rho \\vec\\nabla\\cdot \\vec{v}\n\\mathrm{\\quad (Continuity~equation)}\\cr\n\\nabla^2\\phi &=& 4\\pi G \\rho \n\\mathrm{\\quad (Poisson's~equation)}\n\\end{eqnarray}\nwhere the convective derivative is:\n\\begin{equation}\n\\frac{\\mathrm{D}}{\\mathrm{D}t} = \\frac{\\partial}{\\partial t} +\n\\vec{v}\\cdot\\vec{\\nabla}\n\\end{equation}\nHere and below $\\nabla$ refers to a spatial derivative in physical\nunits (not comoving units).\n\nFor the $\\Omega_m = 1$ case, we can show that:\n\\begin{equation}\na(t) = \\left(\\frac{t}{t_0}\\right)^{2/3}\n\\end{equation} \nUsing this to construct a homogeneous solution to the above\nequations, we can perturb the density around the homogeneous density\n$\\rho_0(a) \\propto a^{-3}$:\n\\begin{equation}\n\\frac{\\rho}{\\rho_0} = 1+ \\delta\n\\end{equation}\nWe will also make use of the time derivative with respect to a\ncomoving observer, which is related to the convective derivative by:\n\\begin{equation}\n\\frac{\\mathrm{D}}{\\mathrm{D}t} = \\frac{\\mathrm{d}}{\\mathrm{d} t} +\n\\vec{v}_p\\cdot\\vec{\\nabla}\n\\end{equation}\nHere we choose to keep the peculiar velocity in physical units, and\nthe derivative $\\nabla$ in physical units (not comoving units).  
We can\nshow the continuity equation holds for peculiar velocities:\n\\begin{equation}\n\\label{eq:continuity}\n\\frac{\\mathrm{d}\\delta}{\\mathrm{d}t} =  \n- \\vec\\nabla\\cdot\\vec{v}_p\n\\end{equation}\nIn the perturbed quantities we find to linear order:\n\\begin{eqnarray}\n\\frac{\\mathrm{d}\\vec{v}_p}{\\mathrm{d}t} &=&\n- \\vec\\nabla(\\delta\\phi)\n- H(t) \\vec{v}_p \\cr\n\\nabla^2\\left(\\delta\\phi\\right)&=& 4\\pi G \\rho_0 \\delta\n\\end{eqnarray}\nRemembering that the spatial derivatives are in physical, not comoving\nunits, we can write:\n\\begin{equation}\n\\frac{\\mathrm{d}}{\\mathrm{d}t} \\left[ \\vec\\nabla\\cdot\\vec{v}_p \\right]\n= \n\\vec\\nabla\\cdot\n\\left[\\frac{\\mathrm{d}\\vec{v}_p}{\\mathrm{d}t} \\right]\n+ H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d}t} \n\\end{equation}\nThen one can take the time derivative with respect to a comoving\nobserver of Equation \\ref{eq:continuity}, and with substitutions this\nleads to a second-order equation for the density:\n\\begin{equation}\n\\label{eq:secondorder}\n\\frac{\\mathrm{d}^2\\delta}{\\mathrm{d}t^2} + 2 \\frac{\\dot a}{a} \n\\frac{\\mathrm{d}\\delta}{\\mathrm{d}t} - 4 \\pi G \\rho_0 \\delta = 0 \n\\end{equation}\nThis linear set of equations is separable, so that whatever spatial\npattern exists simply changes in amplitude over time:\n\\begin{equation}\n\\delta(x, t) = \\delta(x, t_0) \\frac{D(t)}{D(t_0)},\n\\end{equation}\nwhere often the convention is $D(t_0) = 1$.\n  The general solution is:\n\\begin{equation}\n\\label{eq:lineargrowth}\nD(t) = A t^{-1} + B t^{2/3}\n\\end{equation}\nThe first mode is decaying, and thus not important to the growth\nof structure.  The second mode is the one that contributes to the\ngrowth of structure.\n\nThis set of solutions is appropriate for the zero-energy, or ``flat''\nUniverse, without a cosmological constant, when matter density (rather\nthan radiation) dominates. At early times (but after matter-radiation\nequality), while deceleration dominates the dynamics, it is a good\ndescription of the Universe.  However, at later times it becomes less\naccurate. In particular, in our Universe, which appears to be\naccelerating, the growth is slowed down considerably by the\nacceleration.\n\nThe continuity equation (\\ref{eq:continuity}) and linear growth imply\na relationship between the peculiar velocity field and the growth\nrate. If we convert the spatial derivative to comoving units we find:\n\\begin{equation}\n\\frac{1}{a} \\vec\\nabla_c\\cdot\\vec{v}_p = - \\delta(\\vec{x}, t_0) \\dot\nD(t),\n\\end{equation}\n(taking $D(t_0) = 1$), which we can rewrite as:\n\\begin{equation}\n\\vec\\nabla_c\\cdot\\vec{v}_p = - a \\delta(\\vec{x}, t_0) \\dot  \nD(t) = - a \\delta(\\vec{x}, t_0) H f\n\\end{equation}\nwhere the {\\it growth rate} is:\n\\begin{equation}\nf = \\frac{\\dd \\ln D}{\\dd \\ln a}\n\\end{equation}\nThe choice to express this result in terms of the comoving spatial\nderivative of the physical peculiar velocity is strange but\nconventional.\n\nThis peculiar velocity field distorts redshift-based maps of the\nuniverse in a specific way on large scales, which can be measured to\nconstrain $f$. Since $\\delta$ is not directly observable, the directly\nobservable quantity on linear scales is $\\beta = f/b$. Since the\nfluctuations in the galaxy sample can be observed, we can recast\n$\\beta = f \\sigma_8 / \\sigma_{8,g}$ and the observable is\n$\\beta \\sigma_{8,g}$, from which we infer $f\\sigma_8$.
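\n\nAs a rough numerical sketch (ours, not from the text), one can\nintegrate the growth equation in $\\ln a$ for a flat $\\Lambda$CDM\nuniverse, using $D'' + (2 + \\dd\\ln H/\\dd\\ln a)D' -\n\\frac{3}{2}\\Omega_m(a) D = 0$ with primes denoting $\\dd/\\dd\\ln a$, and\nread off $f = \\dd\\ln D/\\dd\\ln a$ at $z=0$:\n\n\\begin{verbatim}\nimport numpy as np\n\nOm, OL = 0.3, 0.7                  # flat LambdaCDM\n\ndef E2(a):                         # (H/H0)^2\n    return Om / a**3 + OL\n\nlna = np.linspace(np.log(1e-3), 0.0, 20000)\nh = lna[1] - lna[0]\nD, Dp = 1e-3, 1e-3                 # deep in matter domination: D ~ a\nfor x in lna[:-1]:\n    a = np.exp(x)\n    Om_a = Om / a**3 / E2(a)       # Omega_m(a)\n    dlnH = -1.5 * Om_a             # dln(H)/dln(a) for flat LambdaCDM\n    Dpp = -(2.0 + dlnH) * Dp + 1.5 * Om_a * D\n    Dp += h * Dpp                  # semi-implicit Euler; fine for a sketch\n    D += h * Dp\n\nprint('f(z=0) =', Dp / D)          # close to Om**0.55 ~ 0.52\n\\end{verbatim}\n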
As small scales go nonlinear, gravitationally bound objects will\nform. This process can be approximated in the spherical case. If we\nsituate our coordinate system on the center of a spherical system with\na constant overdensity $\\bar\\delta > 0$ and size $R$, the system can\nbe considered completely analogous to a universe with matter density\nof $\\Omega_m(1+\\bar\\delta)$. Therefore, if this quantity is greater\nthan unity, then the sphere will expand for some time, then turn\naround at $t=t_{\\rm TA}$, and then collapse on itself; this process\ncan be followed exactly. It can be shown that the mean density of the\nsphere at turn-around is about 5.5 times the mean density of the\nuniverse, and collapse occurs in twice the turn-around time. The\nvirial theorem and energy conservation lead to a typical overdensity\nof the collapsed object within its virial radius of $\\delta_{\\rm vir}\n= 18\\pi^2 \\approx 178$. Meanwhile, the linearly extrapolated\noverdensity at that time is only about $\\delta_{\\rm linear}\\approx\n1.7$.\n\nThe mass spectrum of collapsed halos can be predicted approximately\nusing {\\it excursion set theory}, or the {\\it Press-Schechter}\napproach (\\citealt{press74a, bond91a, lacey93a}). Imagine a patch of\nmass $M$ at early times; it will have some specific radius $R$\ndepending on the mean density. We can predict that it will collapse to\na virialized object when $\\delta_{\\rm linear} \\approx 1.686$ within\nradius $R$. At any given time, we can ask what fraction of the\nuniverse's volume, when smoothed on radius $R$, has $\\delta_{\\rm\nlinear} > 1.686$. For simplicity, we will smooth by a top-hat in\n$k$-space (in configuration space this is smoothing by the first order\nspherical Bessel function $j_1$). Calculating this fraction tells us\nfor any mass (that is, smoothing scale), what fraction of the volume\nends up in dark matter halos greater than that mass. This function can\nbe differentiated to yield the halo mass function:\n\\begin{equation}\n\\Phi(M) \\dd M\n= \\frac{1}{\\sqrt{2\\pi}} \\frac{\\bar \\rho}{M} \\frac{\\delta_c}{\\sigma^3(M)} \\left[\n- \\frac{\\dd \\sigma^2}{\\dd\nM}\\right] \\exp\\left[-\\frac{\\delta_c^2}{2\\sigma^2}\\right] \\dd M\n\\end{equation}\nFor $P(k)\\propto k^n$, one can show:\n\\begin{equation}\n\\Phi(M) \\dd{M} = \\frac{\\bar\\rho}{\\sqrt{2\\pi}\nM} \\left(\\frac{M}{M_\\ast}\\right)^{(n+3)/6}\n\\left(\\frac{n+3}{3}\\right)\n\\exp\\left[-\\frac{1}{2}\\left(\\frac{M}{M_\\ast}\\right)^{(n+3)/3}\\right] \\frac{\\dd\nM}{M}\n\\end{equation}\nwhere the nonlinear mass $M_\\ast$ is defined by the relation:\n\\begin{equation}\n\\sigma^2 = \\left(\\frac{M}{M_\\ast}\\right)^{-(n+3)/3} \\delta_c^2.\n\\end{equation}\nBecause $n>-3$ always, as $\\sigma^2$ grows with time, the nonlinear\nmass scale grows.  In the standard cosmology, at small scales (and\nthus low masses) $n$ slowly approaches $-3$ from above and\n$\\Phi(M) \\propto M^{-2 + \\epsilon}$, where $\\epsilon = (n+3) /3$, and\nthus is almost divergent.
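\n\nA quick numerical check (ours) of the spherical-collapse numbers\nquoted above:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Turn-around: mean density of the sphere over the background, 9 pi^2/16\nprint(9 * np.pi**2 / 16)              # ~5.55\n\n# Virialization: x8 because R_vir = R_max/2, and x4 because the EdS\n# background dilutes as rho ~ t^-2 between t_max and 2 t_max\nprint((9 * np.pi**2 / 16) * 8 * 4)    # = 18 pi^2 ~ 177.7\n\n# Linearly extrapolated overdensity at collapse: (3/20)(12 pi)^(2/3)\nprint(0.15 * (12 * np.pi)**(2 / 3))   # ~1.686\n\\end{verbatim}\n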
The detailed prediction of nonlinear growth and collapse to dark\nmatter halos requires the use of simulations. Because the dark matter\nis collisionless, fluid simulations are not sufficient. The usual\napproach is to model the dark matter statistically using a large\nnumber of collisionless particles; the $N$-body approximation. An {\\it\nN-body simulation} is understood to model only the dark matter. These\nsimulations use some variant of particle-mesh techniques on large\nscales, often with an adaptive component on small scales that may use\ndirect calculations of mutual forces. They invariably employ some\nsoftening length that is reported as the resolution.  {\\it\nHydrodynamic} simulations include baryonic fluids in the modeling, and\noften their cooling and collapse to stellar systems. They may also\ninclude feedback of supernovae, winds, and active galactic nuclei on\nthe fluid; this {\\it subgrid physics} is typically parameterized in a\nsimple way.\n\nOne important insight from N-body simulations is how halos grow through\naccretion of smaller companion halos. These accreted halos often\nsurvive for long periods of time, and are therefore distinct clumps\nknown as {\\it subhalos} within each halo. The centers of halos and\nsubhalos are the locations where galaxies form (\\citealt{wechsler18a}).\n\nThe simulations also clarify the internal structure of\nhalos. Generally their radial profiles can be modeled as:\n\\begin{equation}\n\\rho(r) = \\frac{\\rho_s}{\\frac{r}{r_s}\\left(1\n+ \\frac{r}{r_s}\\right)^2},\n\\end{equation}\nthe ``NFW'' profile of \\citet{navarro97a}. In the context of a\nspecific profile, we define $R_{\\rm vir}$ and $M_{\\rm vir}$ based on\nthe radius and enclosed mass within which the mean overdensity is\n$\\delta_{\\rm vir}$ as predicted by the spherical collapse model. Thus:\n\\begin{equation}\nM_{\\rm vir} = \\frac{4\\pi}{3} R_{\\rm vir}^3 \\bar\\rho \\left(1+ \\delta_{\\rm\nvir}\\right)\n\\end{equation}\nThe halo concentration can be characterized by:\n\\begin{equation}\nc_{\\rm vir} = \\frac{R_{\\rm vir}}{r_s}\n\\end{equation}\nThe structure of a halo is fully defined by $M_{\\rm vir}$ and $c_{\\rm\nvir}$. In simulations, typically $c_{\\rm vir} \\sim 5$--$30$\n(\\citealt{bullock01b, wang20a}), declining with increasing halo mass\nand with increasing redshift. Any given choice of halo structure\nyields a particular maximum circular velocity $v_{\\rm max}$; the\ndependence on mass is such that:\n\\begin{equation}\nM_{\\rm vir} \\propto v_{\\rm max}^{3.4}.\n\\end{equation}
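\n\nAs a small sketch (ours, not from the text), the enclosed mass implied\nby the NFW profile is $M(<r) = 4\\pi\\rho_s r_s^3\\left[\\ln(1+r/r_s) -\n\\frac{r/r_s}{1+r/r_s}\\right]$, and the circular velocity $v^2 =\nGM(<r)/r$ peaks near $r \\approx 2.16\\, r_s$, which is what ties\n$v_{\\rm max}$ to $M_{\\rm vir}$ and $c_{\\rm vir}$:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef nfw_enclosed_mass(r, rho_s, r_s):\n    x = r / r_s\n    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))\n\ndef v_circ(r, rho_s, r_s, G=1.0):\n    # circular velocity: v^2 = G M(<r) / r\n    return np.sqrt(G * nfw_enclosed_mass(r, rho_s, r_s) / r)\n\nr = np.linspace(0.05, 10.0, 2000)     # radius in units of r_s\nv = v_circ(r, rho_s=1.0, r_s=1.0)\nprint(r[np.argmax(v)])                # ~2.16 r_s\n\\end{verbatim}\n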
\n\\section{Commentary}\n\n\n\\section{Key References}\n\n\\begin{itemize}\n  \\item\n    {\\it Physics Foundations of Cosmology},\n    \\citet{mukhanov05a}\n  \\item\n    {\\it The large-scale structure of the\n    universe}, \\citet{peebles80a}\n  \\item\n    {\\it Formation and Evolution of Galaxies: Les Houches\n    Lectures}, \\citet{white94a} \n\\end{itemize}\n\n\\section{Order-of-magnitude Exercises}\n\n\\begin{enumerate} \n\\item At approximately what redshift does structure growth start to\n    slow down for a Universe with $\\Omega_m = 0.3$, $\\Omega_\\Lambda = 0.7$?\n\\end{enumerate} \n\n\\section{Analytic Exercises}\n\n\\begin{enumerate}\n\\item We can understand the shape of the linear power spectrum $P(k)$\nonce we understand how growth proceeds for $k$ modes inside and\noutside the Hubble radius $r_{\\rm Hubble} = c/H(t)$. Here we give the\npicture in the ``synchronous gauge''; the heuristic narrative outside\nthe horizon depends on gauge, but the observables do not\n(\\citealt{ma95a}). The inflationary $P_{\\rm inflation}(k)\\propto k$ at\nall scales (roughly). However, during the radiation-dominated era,\n$P(k)$ is altered so that it turns over and becomes\n$\\propto k^{-3}$ at small scales. All of this happens when all scales\nare still very much in the linear growth regime, and so the resulting\nfunction is usually referred to as $P_{\\rm linear}(k)$, the ``linear''\npower spectrum.  The transfer function is defined as $T(k) = P_{\\rm\nlinear}(k) / P_{\\rm inflation}(k)$, where we remind ourselves that $k$\nis expressed in comoving coordinates. During radiation domination,\ninside the horizon the density only grows logarithmically (because the\nJeans scale is nearly the horizon size for a relativistic fluid) and\noutside the Hubble radius the density grows as $\\delta\\propto\na^2$. During matter domination, at all scales $\\delta\\propto a$.\n\\begin{enumerate}\n\\item For a mode $k$ (expressed in comoving coordinates) that\nenters the Hubble radius before matter-radiation equality, how does\nthe scale factor $a_{\\rm in}$ at which it enters the Hubble radius\ndepend on $k$?\n\n\\begin{answer}[Author: Nicholas Faucher]\nFor any power law dependence of $a(t)= t^n$ on time, the Hubble radius\n$r_{\\rm Hubble} = c a/{\\dot a} \\propto a^{1/n}$ in physical\ncoordinates. In comoving coordinates, $r_{\\rm Hubble, c} \\propto\na^{1/n-1}.$ In the radiation dominated era where $a\\propto t^{1/2}$,\ntherefore, $r_{\\rm Hubble,c} \\propto a$. Since $k \\propto r^{-1}$,\nthen the scale factor at which mode $k$ enters the horizon scales with\n$k$ as $a_{\\rm in} \\propto k^{-1}$.\n\\end{answer}\n\n\\item How does the growth factor experienced outside the horizon scale\nwith $k$, for scales that enter the horizon during the radiation\ndominated era?\n\n\\begin{answer}[Author: Nicholas Faucher]\nAs noted above, the growth factor outside the horizon in the radiation\ndominated era scales with $a^2$, so by the time they enter the\nhorizon the densities $\\delta$ have grown by an amount $\\propto a_{\\rm\nin}^2 \\propto k^{-2}$.\n\\end{answer}\n\\item Therefore, determine how the primordial $P(k) \\propto k$ will \nhave been modified by the time the universe has reached\nmatter-radiation equality.\n\n\\begin{answer}\nThe power spectrum $P(k)\\propto \\delta^2$, so at scales below the\nHubble radius, the amount that the power spectrum on comoving\nwavenumbers $k > 2\\pi/r_{\\rm Hubble, c}$ at matter-radiation equality\nwill have grown is $\\propto k^{-4}$. At $k<2\\pi/r_{\\rm Hubble, c}$,\nthe primordial $P(k)$ will have grown in amplitude but not changed its\ndependence on scale.\n\nTherefore if the initial, primordial $P(k) \\propto k$ at all scales,\nthen it will retain that shape at small $k$, but be altered to\n$P(k)\\propto k^{-3}$ at large $k$, with a broad turn-over region\naround $k \\sim 2\\pi/r_{\\rm Hubble, c}$, with $r_{\\rm Hubble, c}$ being\nthe comoving Hubble radius at matter-radiation equality. 
\n\nAfter matter-radiation equality, both large and small scales grow\n$\\propto a$, so the shape of $P(k)$ is ``frozen in'' and only the\namplitude grows, until nonlinear growth ensuse on small scales.\n\\end{answer}\n\\end{enumerate}\n\\item Starting from the continuity equation in Equation\n   \\ref{eq:homogeneous}, assuming a flat matter dominated universe\n    ($\\Omega_m = 1$), and keeping only first-order terms, derive\n    Equation \\ref{eq:continuity}.\n\n%\\begin{answer}\n%\\begin{eqnarray}\n%\\frac{{\\rm D}\\vec{v}}{{\\rm D} t} =\n%\\left[ \\frac{\\dd{}}{\\dd{t}} + \\vec{v}_p \\cdot \\vec{\\nabla} \\right]\n%\\left[\\rho_0 \\left(1+\\delta\\right)\\right] &=&\n%- \\rho_0 (1+\\delta) \\vec{\\nabla}\\cdot\\vec{v}_0\n%- \\rho_0 (1+\\delta) \\vec{\\nabla}\\cdot\\vec{v}_p \\cr\n%\n%\\frac{\\dd{\\rho_0}}{\\dd{t}}\n%\\frac{\\dd{}}{\\dd{t}}\\left(\\rho_0 \\delta\\right)\n%+ \\vec{v}_p \\cdot \\vec{\\nabla} \\right]\n%\\left[\\rho_0 \\left(1+\\delta\\right)\\right] &=&\n%- \\rho_0 (1+\\delta) \\vec{\\nabla}\\cdot\\vec{v}_0\n%- \\rho_0 (1+\\delta) \\vec{\\nabla}\\cdot\\vec{v}_p\n%\\end{eqnarray}\n%\\end{answer}\n\n\n\\item Starting from Equation \\ref{eq:homogeneous}, and assuming a flat\n    matter dominated universe ($\\Omega_m = 1$), derive\n    Equation \\ref{eq:secondorder}.\n\n\\begin{answer}[Author: Kate Storey-Fisher]\nEquations \\ref{eq:homogeneous} are the equations of motion for a\npressureless, gravitating fluid (Euler's equation, the continuity\nequation, and Poisson's equation).\n\nWe can perturb the density around the homogeneous density $\\rho_0$.\nIf $\\Omega_m = 1$, conservation of energy gives us $\\rho_0\n\\propto a^{-3}$. We also perturb the other quantities, giving:\n\\begin{eqnarray}\n\\rho  &\\rightarrow& (1+\\delta)\\rho_0 \\\\\n\\vec{v} &\\rightarrow& \\vec{v}_0 + \\vec{v}_p \\\\\n\\phi  &\\rightarrow& \\phi_0 + \\delta\\phi\n\\end{eqnarray}\n\nFor Poisson's equation we find:\n\\begin{eqnarray}\n\\nabla^2\\phi &=& 4\\pi G \\rho  \\cr\n\\nabla^2\\left(\\phi_0 + \\delta\\phi\\right) &=& 4\\pi\nG \\rho_0\\left(1+\\delta\\right)  \\cr\n\\nabla^2\\left(\\delta\\phi\\right) &=& 4\\pi G \\rho_0 \\delta\n\\end{eqnarray}\n\nFor Euler's equation we find:\n\\begin{eqnarray}\n\\frac{{\\rm D}\\vec{v}}{{\\rm D} t} =\n\\left[ \\frac{\\dd{}}{\\dd{t}} + \\vec{v}_p \\cdot \\vec{\\nabla} \\right]\n\\left[\\vec{v}_0 + \\vec{v}_p\\right] &=& - \\vec{\\nabla}\\left(\\phi_0\n+ \\delta\\phi\\right) \\cr\n\\frac{\\dd{\\vec{v}_0}}{\\dd{t}} + \\vec{v}_p \\cdot \\vec{\\nabla}\\vec{v}_0\n+ \\frac{\\dd{\\vec{v}_p}}{\\dd{t}} + \\vec{v}_p \\cdot \\vec{\\nabla}\\vec{v}_p\n&=& - \\vec{\\nabla}\\phi_0 - \\vec{\\nabla}\\left(\\delta\\phi_0\\right)\n\\end{eqnarray}\nThe first terms on both sides cancel because they solve the\nhomogeneous equations (for which ${\\rm D}/{\\rm D}t\n= \\dd{}/\\dd{t}$). The last term on the left hand side is\nsecond-order. 
Rearranging, we are left with\n\\begin{eqnarray}\n\\frac{\\dd{\\vec{v}_p}}{\\dd{t}}\n&=& - \\vec{v}_p \\cdot \\vec{\\nabla}\\vec{v}_0\n- \\vec{\\nabla}\\left(\\delta\\phi\\right)\n\\end{eqnarray}\nRemembering that $\\vec{v}_0 = ({\\dot a}/a)\\vec{r} = H\\vec{r}$, where\n$\\vec{r}$ is in physical units, we have\n\\begin{eqnarray}\n\\label{eq:dvdt}\n\\frac{\\dd{\\vec{v}_p}}{\\dd{t}}\n&=&\n- H \\vec{v}_p\n- \\vec{\\nabla}\\left(\\delta\\phi\\right)\n\\end{eqnarray}\nwhere the first term on the right-hand side is the Hubble drag term.\n\nWe now take the divergence of Equation \\ref{eq:dvdt}, and get\n\\begin{eqnarray}\n\\nabla \\cdot \\frac{\\mathrm{d}\\vec{v}_p}{\\mathrm{d} t} &=& -\\vec{\\nabla}^2(\\delta\\phi) - H(t)\\nabla \\cdot \\vec{v}_p \\\\  \n &=& -4\\pi G \\rho_0 \\delta + H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d} t}  \\label{eq:4piG}\n\\end{eqnarray}\nwhere in the second line we have substituted the continuity equation\nand the Poisson equation.\n\nTaking the time derivative of the continuity equation, we find:\n\\begin{equation}\n\\frac{\\mathrm{d}^2\\delta}{\\mathrm{d} t^2}\n= \\frac{\\mathrm{d}}{\\mathrm{d}\nt}\\left(-\\vec{\\nabla} \\cdot \\vec{v}_p\\right) \\label{eq:d2delta}\n\\end{equation}\n\nBecause the spatial derivatives are in physical units, the time\nderivative and the divergence do not commute; using the relation\nbetween them derived in the text:\n\\begin{equation}\n\\frac{\\mathrm{d}^2\\delta}{\\mathrm{d} t^2} =\n- \\frac{\\mathrm{d}}{\\mathrm{d}t} \\left(\\vec{\\nabla} \\cdot \\vec{v}_p\\right)\n= - \\vec{\\nabla} \\cdot \\frac{\\mathrm{d}\\vec{v}_p}{\\mathrm{d} t} - H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d} t} \n\\end{equation}\n\nAnd now substitute Equation \\ref{eq:4piG} into the RHS of the above equation:\n\\begin{equation}\n\\frac{\\mathrm{d}^2\\delta}{\\mathrm{d} t^2} = -\\Bigg(-4\\pi G \\rho_0 \\delta + H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d} t}\\Bigg) - H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d} t} \n\\end{equation}\nRearranging,\n\\begin{equation}\n\\frac{\\mathrm{d}^2\\delta}{\\mathrm{d} t^2} +\n2H(t) \\frac{\\mathrm{d}\\delta}{\\mathrm{d} t} - 4\\pi G \\rho_0 \\delta =\n0,\n\\end{equation}\nwhich is the equation for linear growth of the overdensity field.\n\\end{answer}\n\n\\item Show that Equation \\ref{eq:lineargrowth} solves\nEquation \\ref{eq:secondorder}.\n\\item Consider a spherical region with mean overdensity $\\bar\\delta\n>0$, within an expanding universe with no cosmological constant. As\n long as there is no {\\it shell crossing} --- that is, material at one\n radius does not catch up to material at another radius --- the\n equations governing the radius of this sphere over time are\n \\begin{equation}\n\\frac{\\dd^2 R}{\\dd t^2} = - \\frac{GM(<r)}{R^2} = - \\frac{4\\pi\n G}{3} \\bar\\rho(1+\\bar\\delta) R\n \\end{equation}\n\\begin{enumerate}\n\\item In terms of $\\Omega_m$ at the present time, what is the\ncondition that the spherical region will collapse on itself?\n\n\\begin{answer}\nThis region will evolve the same way as a universe with\nmatter density $\\Omega_m(1+\\bar\\delta)$. 
The condition for a closed\nuniverse then implies $\\Omega_m(1+\\bar\\delta)>1$ or\n\\begin{equation}\n\\bar\\delta > \\frac{1}{\\Omega_m} -1.\n\\end{equation}\n\\end{answer}\n\n\\item Demonstrate that the solutions to the above equation can be\nexpressed as:\n\\begin{eqnarray}\n\\label{eq:sphericalcollapse}\n\\frac{R}{R_{\\rm max}} &=& \\frac{1}{2}\\left(1 - \\cos\\eta\\right), \\cr\n\\frac{t}{t_{\\rm max}} &=& \\frac{1}{\\pi}\\left(\\eta - \\sin\\eta\\right)\n\\end{eqnarray}\nwhere at time $t_{\\rm max}$ the sphere reaches its maximum radius of\nexpansion $R_{\\rm max}$, before collapsing. \n\n\\begin{answer}[Author: Gregory Beauregard]\nWe seek the solution to:\n\\begin{equation}\\label{eq:orig}\n  \\frac{\\mathrm d^2 R}{\\mathrm dt^2}=-\\frac{GM(<r)}{R^2}=-\\frac{4\\pi G}{3}\\bar\\rho(1+\\bar\\delta)R\n\\end{equation}\nWe wish to demonstrate that solutions to the above equation can be expressed as\n\\begin{align}\n  \\frac{R}{R_\\text{max}}&=\\frac{1}{2}(1-\\cos\\eta)\\label{eq:r} \\\\\n  \\frac{t}{t_\\text{max}}&=\\frac{1}{\\pi}(\\eta-\\sin\\eta)\\label{eq:t}.\n\\end{align}\nWe do some implicit differentiation with respect to $t$ with\nEquations \\ref{eq:r} and \\ref{eq:t}.\n\\begin{align}\n  \\frac{\\dot R}{R_\\text{max}}&=\\frac{1}{2}\\dot\\eta\\sin\\eta\\label{eq:rd} \\\\\n  \\frac{\\ddot R}{R_\\text{max}}&=\\frac{1}{2}(\\dot\\eta^2\\cos\\eta+\\ddot\\eta\\sin\\eta)\\label{eq:rdd} \\\\\n  \\frac{1}{t_\\text{max}}&=\\frac{\\dot\\eta}{\\pi}(1-\\cos\\eta)\\label{eq:td} \\\\\n  0&=\\frac{1}{\\pi}(\\ddot\\eta(1-\\cos\\eta)+\\dot\\eta^2\\sin\\eta)\\label{eq:tdd}\n\\end{align}\nSolving Equation \\ref{eq:tdd} for $\\ddot\\eta$ and plugging into Equation \\ref{eq:rdd} we get a nice simplification:\n\\begin{equation}\n  \\frac{\\ddot R}{R_\\text{max}}=-\\frac{\\dot\\eta^2}{2}\\label{eq:rdd-simp}\n\\end{equation}\nFrom Equation \\ref{eq:td} we get $\\dot\\eta$ as\n\\begin{equation}\n  \\dot\\eta=\\frac{\\pi}{t_\\text{max}}\\frac{1}{1-\\cos\\eta}\\label{eq:etad}.\n\\end{equation}\nWe can combine Equations \\ref{eq:rdd-simp} and \\ref{eq:etad} to get $\\ddot R$, the left-hand side of Equation \\ref{eq:orig}.\n\\begin{equation}\n  \\ddot R=-\\frac{\\pi^2 R_\\text{max}}{2t_\\text{max}^2}\\frac{1}{(1-\\cos\\eta)^2}\\label{eq:left}\n\\end{equation}\nComputing the right-hand side of Equation \\ref{eq:orig}, $-GM/R^2$, with Equation \\ref{eq:r} we have that\n\\begin{equation}\n  -\\frac{GM}{R^2}=-\\frac{4GM}{R_\\text{max}^2}\\frac{1}{(1-\\cos\\eta)^2}\\label{eq:right}.\n\\end{equation}\nBy comparing Equation \\ref{eq:left} to Equation \\ref{eq:right} we see\nthat Equations \\ref{eq:r} and \\ref{eq:t} give a solution to\nEquation \\ref{eq:orig} with\n\\begin{equation}\n  GM=\\frac{\\pi^2}{8}\\frac{R_\\text{max}^3}{t_\\text{max}^2}.\n\\end{equation}\n\\end{answer}\n\n\\item Show that at time $t_{\\rm max}$, the density of the sphere\nrelative to the mean density of the universe will be $\\rho_{\\rm max}\n/ \\bar\\rho(t_{\\rm max}) = 9 \\pi^2 / 16 \\approx 5.5$.\n\n\\item The collapse of the sphere will proceed in reverse, and will\ntherefore take $t_{\\rm max}$ to do so. However, upon full collapse\nshell-crossing will occur, because the collisionless dark matter will\npass through the origin and oscillate around it. This process can be\nmodeled (\\citealt{bertschinger85a, lithwick11a}) to derive the detailed\nstructure of the resulting halo mass profile, but the virial theorem\n($U=-2K$) can tell us about its overall size. 
Show that the final\ncharacteristic radius of the resulting {\\it virialized} halo is\n$R_{\\rm vir} = R_{\\rm max} / 2$.\n\n\\begin{answer}[Author: Nanoom Lee]\nUsing the virial theorem, energy conservation\ncan be written as\n\\begin{equation}\nE(t_{\\rm max}) = -\\frac{GM^2}{R_\\text{max}} = E(t_{\\rm vir}) =\n-\\frac{GM^2}{R_\\text{vir}} + K = -\\frac{GM^2}{R_\\text{vir}} +\n\\frac{GM^2}{2R_\\text{vir}} = -\\frac{GM^2}{2R_\\text{vir}},\n\\end{equation}\nwhich gives\n\\begin{equation}\nR_\\text{vir}=R_\\text{max}/2.\n\\end{equation}\n\\end{answer}\n\n\\item Show that the mean overdensity within the resulting halo  is\n$\\delta_{\\rm vir} = 18\\pi^2 \\approx 178$. \n\\item By linearizing the Equations \\ref{eq:sphericalcollapse}, show\nthat the linearly extrapolated overdensity at the time of collapse is\n$\\delta_{\\rm lin}(2 t_{\\rm max}) \\approx 1.686$.\n\\end{enumerate}\n\\item The Press-Schechter or excursion set estimate of the halo mass\nfunction can be calculated from the statistics of Gaussian random\nfields. We can ask what fraction of the volume in the nearly-uniform\nearly universe ends up in halos of a given mass.  Consider the density\nfield linearly-evolved to some redshift $z$.\n\\begin{enumerate}\n\\item If we smooth the density field on some characteristic scale $R$,\nthe smoothed density field will relate to the statistics of halos\nof what mass $M$?\n\\begin{answer}[Author: Trey Jensen]\n    Given that we have a nearly uniform universe with density $\\bar{\\rho}$, we know that the scale will be directly related to the mass via \n    \\begin{equation}\n        M=\\bar{\\rho} \\frac{4}{3}\\pi R^3.\n    \\end{equation}\n\\end{answer}\n\\item If the smoothing is performed as a top-hat function in\n$k$-space, what does that smoothing correspond to in real space?\n\\begin{answer}[Author: Trey Jensen]\n    A top-hat of radius $k_c$ in $k$-space corresponds in real space to\n    convolution with a kernel proportional to the first order spherical\n    Bessel function $j_1$:\n\\begin{equation}\nW(r) \\propto \\frac{j_1(k_c r)}{k_c r} = \\frac{\\sin k_c r - k_c r \\cos k_c r}{(k_c r)^3}\n\\end{equation}\n\\end{answer}\n\\item In terms of the power spectrum, how does the variance\n$\\sigma^2(M)$ scale with $M$?\n\\begin{answer}[Author: Trey Jensen]\nThe variance as a function of the wavenumber $k$ is approximately\n$\\sigma^2(k) \\propto k^3 P(k)$. Using $k=2\\pi/R$ to associate $k$ and\n$R$, and from the previous part knowing the relation of mass to\nradius, then, we find:\n\\begin{equation}\n    \\sigma^2(k) \\sim P(k)\n    \\left(\\frac{32\\pi^4\\bar{\\rho}}{3M}\\right)\n\\end{equation}\n\\end{answer}\n\\item Assume that locations above some linearly-evolved\noverdensity $\\delta_c \\sim 1.686$ on scale $R$ or larger have in fact\ncollapsed into halos of the corresponding mass $M$ or larger. What\nfraction $F(>M)$ of the volume has done so (express in terms of $\\delta_c$ and\n$\\sigma(M)$)?\n\\begin{answer}[Author: Trey Jensen]\n    Because we have a Gaussian random field, and we know the standard\n    deviation of density from above, the fraction of volume above\n    this critical density on scale $M$ is the probability of this\n    Gaussian random field being above this value; that is, we simply\n    integrate the PDF, giving the complementary error function. However, we also need\n    to account for the regions which are below the critical overdensity on\n    scale $M$, but above it on some larger scale. 
Since the $\\delta$\n    as a function of increasing smoothing wavenumber $k$ (decreasing\n    mass) is a random walk, for all points that cross the critical\n    overdensity at some wavenumber smaller than $k$ but are still above\n    the critical overdensity at $k$, there is another point that takes the\n    equal and opposite path after crossing the critical overdensity, and\n    is below the critical overdensity at scale $k$. This leads to an\n    extra factor of two, so:\n    \\begin{equation}\n        F(>M)\n    = \\frac{2}{\\sqrt{2\\pi}\\sigma}\\int^\\infty_{\\delta_c}\\dd{\\delta} \\exp\\left(-\\frac{\\delta^2}{2\\sigma^2}\\right)\n    \\end{equation}\n    We leave the expression in this form rather than in terms of the\n    error function, because it makes the calculation below easier.\n\\end{answer}\n\n\\item Derive from $F(>M)$ the mass function of halos $\\Phi(M)$. \n\\begin{answer}\n    The mass function is simply the (negative) derivative of $F(>M)$,\n    converted to a number density with the factor $\\bar{\\rho}/M$:\n    \\begin{equation}\n    \\Phi(M) =\n        \\frac{\\bar{\\rho}}{M} \\left[-\\frac{\\dd{}}{\\dd{M}} F(>M)\\right]\n    \\end{equation}\n    If we rewrite $F(>M)$ with a change of variables $x=\\delta/\\sigma$:\n    \\begin{equation}\n    F(>M) = \\frac{2}{\\sqrt{2\\pi}} \\int_{\\delta_c/\\sigma}^\\infty \\dd{x}\n                          \\exp\\left(-\\frac{x^2}{2}\\right)\n    \\end{equation}\n    then it is apparent that:\n    \\begin{equation}\n    \\frac{\\partial F}{\\partial M}\n    = -\\frac{2}{\\sqrt{2\\pi}} \\exp\\left(-\\frac{\\delta_c^2}{2\\sigma^2}\\right)\n    \\frac{\\partial}{\\partial M}\\left(\\frac{\\delta_c}{\\sigma}\\right).\n    \\end{equation}\n    We can then convert this to a slightly different form:\n    \\begin{equation}\n    \\Phi(M) = \\frac{\\bar{\\rho}}{M}\n    \\frac{1}{\\sqrt{2\\pi}} \\frac{\\delta_c}{\\sigma^3}\n    \\exp\\left(-\\frac{\\delta_c^2}{2\\sigma^2}\\right)\n    \\left[ - \\frac{\\partial \\sigma^2}{\\partial M}\\right]\n    \\end{equation}\n    to match the text.\n\\end{answer}\n\n\\item Assume $P(k)\\propto k^n$. Define the nonlinear mass $M_\\ast$:\n\\begin{equation}\n\\sigma^2 = \\delta_c^2 \\left(\\frac{M}{M_\\ast}\\right)^{-(n+3)/3}.\n\\end{equation}\nWrite $\\Phi(M)$ in terms of $M_\\ast$, $\\bar\\rho$, and $n$. 
What\nhappens as $n\\rightarrow -3$, as it does at small scales?\n\\begin{answer}\nFirst let us evaluate:\n\\begin{equation}\n\\frac{\\dd \\sigma^2}{\\dd M}\n= - \\delta_c^2 \\frac{1}{M_\\ast} \\frac{n+3}{3}\n\\left(\\frac{M}{M_\\ast}\\right)^{-(n+6)/3}\n\\end{equation}\nThen we can plug in\n\\begin{eqnarray}\n\\Phi(M) &=& \\frac{\\bar{\\rho}}{M}\n\\frac{1}{\\sqrt{2\\pi}}\n\\frac{1}{M_\\ast} \\frac{n+3}{3}\n\\left(\\frac{M}{M_\\ast}\\right)^{-(n+6)/3}\n\\frac{\\delta_c^3}{\\sigma^3}\n\\exp\\left(-\\frac{\\delta_c^2}{2\\sigma^2}\\right) \\cr\n&=& \\frac{\\bar{\\rho}}{M}\n\\frac{1}{\\sqrt{2\\pi}}\n\\frac{1}{M_\\ast} \\frac{n+3}{3}\n\\left(\\frac{M}{M_\\ast}\\right)^{-(n+6)/3}\n\\left(\\frac{M}{M_\\ast}\\right)^{(n+3)/2}\n\\exp\\left(-\\frac{\\delta_c^2}{2\\sigma^2}\\right) \\cr\n&=& \\frac{\\bar{\\rho}}{M}\n\\frac{1}{\\sqrt{2\\pi}}\n\\frac{1}{M_\\ast} \\frac{n+3}{3}\n\\left(\\frac{M}{M_\\ast}\\right)^{(n-3)/6}\n\\exp\\left(-\\frac{\\delta_c^2}{2\\sigma^2}\\right) \\cr\n&=& \\frac{\\bar{\\rho}}{M}\n\\frac{1}{\\sqrt{2\\pi}}\n\\frac{1}{M}\n\\frac{n+3}{3}\n\\left(\\frac{M}{M_\\ast}\\right)^{(n+3)/6}\n\\exp\\left[-\\frac{1}{2}\\left(\\frac{M}{M_\\ast}\\right)^{(n+3)/3}\\right]\n\\end{eqnarray}\nwhere the last line matches the corresponding equation in the text.\nWhen $n\\rightarrow -3$ from above, and we consider $M\\ll M_\\ast$, this\nmass function becomes close to but slightly shallower than $M^{-2}$,\nwhich would be the divergent function.\n\nIt is worth pausing here and asking what it would mean if $n<-3$ ---\nwould that lead to a divergent mass function? Obviously that cannot\nreflect reality, but what happens to the excursion set argument?  What\nhappens is that $n<-3$ implies that the smallest scales do not\ncollapse first, because $k^3P(k)$ is not monotonically\nincreasing. This means that the hierarchical collapse ansatz that\nunderlies the excursion set picture fails and it is not applicable in\nthis case. \n\\end{answer}\n\n\n\\end{enumerate}\n\n\\item Argue {\\it qualitatively} why the excursion set approach should\nlead to the prediction that dense regions on large scales should have\nmore high mass dark matter halos; this argument was formalized\noriginally by \\citet{mo96a}.\n\n\\begin{answer}\nThe Press-Schechter ansatz can be expressed in terms of a random\nwalk. For a random location, consider the smoothed linear density\nfield around it, starting with very large smoothing and stepping to\nsmaller and smaller scales. We quantify the smoothing scale by $k$,\nwhere the radius of the smoothing scale is $R=2\\pi/k$. For the largest\nscales ($k=0$), the overdensity is 0 (the mean). As $k$ increases, the\noverdensity at that particular location goes up and down in a random\nwalk. The first smoothing scale $R=2\\pi/k$ at which the linear\ndensity around that point exceeds $\\sim$ 1.7---the first\n``upcrossing''---corresponds to the mass of the halo that that\nlocation is in. Imagine performing this random walk for all locations;\nthe result is an ensemble of different walks, and for those walks the\ndistribution of the masses associated with the ``first upcrossings''\nis the prediction for the distribution of halo masses.\n\nNow consider, instead of a {\\it random} location, a location already\nknown to be an overdense region on some scale $R$. That fixes a\nwavenumber $k= 2\\pi/R$ and an overdensity $\\delta$ at which the random\nwalk ``starts.'' The random walk proceeds to higher $k$.  But since it\nstarted at some $\\delta>0$, its first upcrossing will happen sooner on\naverage, i.e. 
at smaller $k$. That means that on average these points\nwill be in larger mass halos. Thus, we see that overdense regions\nshould have halo mass functions that are shifted to higher masses.\n\nConsidering a location known to be an underdense region on some scale,\nit should have its first upcrossing happen at a larger $k$, or smaller\nmass. So the mass function will be shifted to lower masses in regions\nthat are underdense on large scales.\n\n\\citet{mo96a} take this argument farther (and more rigorously\nmathematically) and calculate the relationship between $\\delta_m$ and\nthe halo overdensity $\\delta_h$ as a function of halo mass $M$---the\n``bias'' of the halos.\n\\end{answer}\n\n\\end{enumerate}\n\n\\section{Numerics and Data Exercises}\n\n\\begin{enumerate}\n\\item Download and install CAMB, the standard code to calculate the\npower spectrum. Plot the linear $P(k)$, for $\\Omega_m= 0.1$, $0.3$ and\n1 (assume $h=0.7$). Examine the dependence on baryon density by\ndoubling it from the standard value; the wiggles you see getting\nstronger are due to the baryon acoustic oscillation.\n\\end{enumerate}\n\n\\bibliographystyle{apj}\n\\bibliography{exex}  \n", "meta": {"hexsha": "366301e352e85166339b186872843835c743b234", "size": 37004, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/structure-text.tex", "max_stars_repo_name": "blanton144/exex", "max_stars_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/structure-text.tex", "max_issues_repo_name": "blanton144/exex", "max_issues_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/structure-text.tex", "max_forks_repo_name": "blanton144/exex", "max_forks_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9477434679, "max_line_length": 169, "alphanum_fraction": 0.7235434007, "num_tokens": 11476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5932179090298667}}
{"text": "\\documentclass{article}\n\n\\usepackage[left=1.25in,top=1.25in,right=1.25in,bottom=1.25in,head=1.25in]{geometry}\n\\usepackage{amsfonts,amsmath,amssymb,amsthm}\n\\usepackage{verbatim,float,url,enumerate}\n\\usepackage{graphicx,subfigure,psfrag}\n\\usepackage{natbib}\n\\usepackage{environ}\n\\usepackage{hyperref}\n\\usepackage{pifont}\n\\usepackage{xcolor}\n\n\\newtheorem{algorithm}{Algorithm}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{corollary}{Corollary}\n\n\\theoremstyle{remark}\n\\newtheorem{remark}{Remark}\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}\n\n\\newcommand{\\argmin}{\\mathop{\\mathrm{argmin}}}\n\\newcommand{\\argmax}{\\mathop{\\mathrm{argmax}}}\n\\newcommand{\\minimize}{\\mathop{\\mathrm{minimize}}}\n\\newcommand{\\maximize}{\\mathop{\\mathrm{maximize}}}\n\\newcommand{\\st}{\\mathop{\\mathrm{subject\\,\\,to}}}\n\\newcommand{\\dist}{\\mathop{\\mathrm{dist}}}\n\n\\newcommand{\\reals}{\\mathbb R}\n\\newcommand{\\prox}{\\operatorname{prox}}\n\\newcommand{\\dom}{\\operatorname{dom}}\n\\def\\R{\\mathbb{R}}\n\\def\\E{\\mathbb{E}}\n\\def\\P{\\mathbb{P}}\n\\def\\Cov{\\mathrm{Cov}}\n\\def\\Var{\\mathrm{Var}}\n\\def\\half{\\frac{1}{2}}\n\\def\\sign{\\mathrm{sign}}\n\\def\\supp{\\mathrm{supp}}\n\\def\\th{\\mathrm{th}}\n\\def\\tr{\\mathrm{tr}}\n\\def\\dim{\\mathrm{dim}}\n\\def\\hbeta{\\hat{\\beta}}\n\n\\begin{document}\n\n\\title{Bicycle Model}\n\\author{\\Large Josh Bennett, jjbennet@andrew.cmu.edu}\n\\maketitle\n\n\\section{Derivation}\n\n\\begin{figure}[H]\n    \\label{fig:bicycle_model}\n    \\centering\n        \\includegraphics[width=0.95\\textwidth]{bicycle_model_diagram.png}\n    \\caption{Bicycle model state variable diagram.}\n\\end{figure}\n\nSome preliminary relationships:\n\nState variables:\n\n$$ z = \\left \\lbrack \\begin{array}{c}  x \\\\ y \\\\ \\psi \\\\ \\delta_f \\\\ v_f \\end{array} \\right \\rbrack $$\n\nWe will use $v_f$ as the state variable for velocity to avoid discontinuities.\n\n$$ v_r = v_f \\cos {( \\delta_f) } $$\n$$ \\dot{\\psi} = v_f \\sin {( \\delta_f) } $$\n\nWe will assume a front wheel drive vehicle with wheel torque $\\tau$ and wheel radius $r$, generating a force of $\\frac{\\tau} r$ as an input. 
Assuming a mass $m$ and an inertia $I$ for the vehicle, defined as the inertia at the wheel base rather than at the center of mass, we can derive the following equation for the acceleration of the front wheel:\n\n$$ a_f = \\frac{\\tau}{r} \\left ( \\frac {1} {m \\cos {( \\delta_f) }} + \\frac {\\left( (l_r+l_f) \\sin {( \\delta_f) } \\right)^2} {I} \\right ) $$\n\nWe will also assume that the steering angle is controlled by some steering rate input $w$.\n\n\n$$ \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    \\dot{x} \\\\\\\\\n    \\dot{y} \\\\\\\\\n    \\dot{\\psi} \\\\\\\\\n    \\dot{\\delta_f} \\\\\\\\\n    \\dot{v_f}\n\\end{array} \\right \\rbrack  = \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    v_f \\cos {( \\delta_f) } \\cos{( \\psi )} \\\\\\\\\n    v_f \\cos {( \\delta_f) } \\sin{( \\psi )} \\\\\\\\\n    \\frac {v_f \\sin {( \\delta_f) }} {l_r + l_f} \\\\\\\\\n    0 \\\\\\\\\n    0\n\\end{array} \\right \\rbrack + \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    0 \\\\\\\\\n    0 \\\\\\\\\n    0 \\\\\\\\\n    w \\\\\\\\\n    a_f\n\\end{array} \\right \\rbrack $$\n\nThe linearized dynamics for this system are defined as:\n\n$$ \\nabla f(X)  = \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{ccccc}\n    0 & 0 & - v_f \\cos {( \\delta_f) } \\sin{( \\psi )} & -v_f \\sin {( \\delta_f) } \\cos{( \\psi )} & \\cos {( \\delta_f) } \\cos{( \\psi )}  \\\\\\\\\n    0 & 0 & v_f \\cos {( \\delta_f) } \\cos{( \\psi )} & -v_f \\sin {( \\delta_f) } \\sin{( \\psi )} & \\cos {( \\delta_f) } \\sin{( \\psi )} \\\\\\\\\n    0 & 0 & 0 & \\frac {v_f \\cos {( \\delta_f) }} {l_r + l_f} & \\frac {\\sin {( \\delta_f) }} {l_r + l_f} \\\\\\\\\n    0 & 0 & 0 & 0 & 0 \\\\\\\\\n    0 & 0 & 0 & 0 & 0 \\\\\\\\\n\\end{array} \\right \\rbrack + \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    0 \\\\\\\\\n    0 \\\\\\\\\n    0 \\\\\\\\\n    w \\\\\\\\\n    a_f\n\\end{array} \\right \\rbrack $$\n\n\nAlternatively, we can clamp the vehicle velocity and steering angle with a sigmoid function.\n\nThe sigmoid is defined as\n\n$$ \\sigma(z) = \\frac 1 {1+e^{-z}} $$\n\nThe derivative of the sigmoid is\n\n$$ \\frac d {dz} \\sigma(z) = \\sigma(z)(1 - \\sigma(z))$$\n\nWe will use the following maps to clamp velocity and steering angle:\n\n$$ \\delta_{fapplied} = 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right )  $$\n\n$$ v_{fapplied} = 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right )  $$\n\nNotice that the asymmetric velocity map also clamps the vehicle's reverse velocity more tightly, to half of $v_{fmax}$; a short sketch of these clamps follows. 
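A minimal Python sketch of the two clamping maps above (the function and variable names are illustrative, not part of the model):\n\n\\begin{verbatim}\nimport math\n\ndef sigmoid(z):\n    # logistic function: sigma(z) = 1 / (1 + exp(-z))\n    return 1.0 / (1.0 + math.exp(-z))\n\ndef clamp_steering(delta_f, delta_fmax):\n    # maps any real delta_f into (-delta_fmax, +delta_fmax)\n    return 2.0 * delta_fmax * (sigmoid(delta_f) - 0.5)\n\ndef clamp_velocity(v_f, v_fmax):\n    # maps any real v_f into (-0.5 * v_fmax, +v_fmax),\n    # limiting how fast the vehicle may reverse\n    return 1.5 * v_fmax * (sigmoid(v_f) - 1.0 / 3.0)\n\\end{verbatim}\n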
This results in the following dynamics:\n\n$$ \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    \\dot{x} \\\\\n    \\dot{y} \\\\\n    \\dot{\\psi} \\\\\n    \\dot{\\delta_f} \\\\\n    \\dot{v_f}\n\\end{array} \\right \\rbrack  = \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) \\cos {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\cos{( \\psi )} \\\\\n    1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) \\cos {( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) ) } \\sin{( \\psi )} \\\\\n    \\frac { 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) \\sin {( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) ) }} {l_r + l_f} \\\\\n    0 \\\\\n    0\n\\end{array} \\right \\rbrack + \\left \\lbrack \\arraycolsep=1.4pt\\def\\arraystretch{1.5} \\begin{array}{c}\n    0 \\\\\n    0 \\\\\n    0 \\\\\n    w \\\\\n    a_f\n\\end{array} \\right \\rbrack $$\n\nThe linearized dynamics for this system are defined as:\n\n$$ \\nabla f_{02}(X) = - 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) \\cos {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\sin{( \\psi )} $$\n$$ \\nabla f_{03}(X) = - 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) 2\\delta_{fmax} \\sigma(\\delta_f) (1 - \\sigma(\\delta_f) ) \\sin {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\cos{( \\psi )} $$\n$$ \\nabla f_{04}(X) = 1.5v_{fmax}  \\sigma(v_f) ( 1 - \\sigma(v_f) ) \\cos {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\cos{( \\psi )} $$\n\n$$ \\nabla f_{12}(X) = 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) \\cos {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\cos{( \\psi )} $$\n$$ \\nabla f_{13}(X) = - 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) 2\\delta_{fmax} \\sigma(\\delta_f) (1 - \\sigma(\\delta_f) ) \\sin {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\sin{( \\psi )} $$\n$$ \\nabla f_{14}(X) = 1.5v_{fmax}  \\sigma(v_f) ( 1 - \\sigma(v_f) ) \\cos {\\left ( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) \\right) } \\sin{( \\psi )} $$\n\n$$ \\nabla f_{23}(X) = \\frac { 1.5v_{fmax}  \\left ( \\sigma(v_f) - \\frac 1 3 \\right ) 2\\delta_{fmax} \\sigma(\\delta_f) (1 - \\sigma(\\delta_f) ) \\cos {( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) ) }} {l_r + l_f}  $$\n$$ \\nabla f_{24}(X) =  \\frac { 1.5v_{fmax} \\sigma(v_f) ( 1 - \\sigma(v_f) ) \\sin {( 2\\delta_{fmax}  \\left ( \\sigma(\\delta_f) - \\frac 1 2 \\right ) ) }} {l_r + l_f} $$\n\n\\end{document}\n", "meta": {"hexsha": "7561f4dff158ffc2ae89a7b705eea4b525582c19", "size": 6971, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bicycle_model/bicycle_model.tex", "max_stars_repo_name": "joshuajbennett/av-trajectory-planner", "max_stars_repo_head_hexsha": "522e5193c6aeeeba1ebaca3d43e26bdd21c56340", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-13T23:32:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T17:36:42.000Z", "max_issues_repo_path": "bicycle_model/bicycle_model.tex", "max_issues_repo_name": "joshuajbennett/av-trajectory-planner", "max_issues_repo_head_hexsha": 
"522e5193c6aeeeba1ebaca3d43e26bdd21c56340", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bicycle_model/bicycle_model.tex", "max_forks_repo_name": "joshuajbennett/av-trajectory-planner", "max_forks_repo_head_hexsha": "522e5193c6aeeeba1ebaca3d43e26bdd21c56340", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-04-07T02:29:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T02:14:28.000Z", "avg_line_length": 41.494047619, "max_line_length": 339, "alphanum_fraction": 0.6119638502, "num_tokens": 2756, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.798186768138228, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5931868709781191}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture X Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n% Mathematical Operations:\n\n% Sum: $$\\sum_{n=a}^{b} f(x) $$\n% Integral: $$\\int_{lower}^{upper} f(x) dx$$\n% Limit: $$\\lim_{x\\to\\infty} f(x)$$\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Limits of Multivariable Functions $-$ 14.2}\n\nJust like in Calculus I, there are three conditions for a function to be continuous:\n\n\\begin{enumerate}\n\n  \\item $f(a)$ exists\n\n  \\item $\\lim_{x\\to a} f(x) = f(a)$\n\n  \\item $\\lim_{x\\to a^-} f(x) =\\lim_{x\\to a^+} f(x)$\n\n\\end{enumerate}\n\nIn three dimensions, the same rules apply, in addition to a new one:\n\n$$\\lim_{(x,y)\\to(a,b)}f(x,y)=L$$ if, for every number $\\epsilon > 0$ there is a corresponding number $\\partial > 0$ such that,\\\\ if $(x,y)\\in D$ and $0 < \\sqrt{(x-a)^2+(y-b)^2} < \\partial$ then $|f(x,y) - L| < \\epsilon$\n\n\\section{Partial Derivatives $-$ 14.3}\n\nFor the function, $z=f(x,y)$, the partial derivatives may be expressed through the use of the limit definition of a derivative, where:\n\n$$f_x(x,y)=\\lim_{\\Delta x\\to0} \\frac{f(x+\\Delta x,y)-f(x,y)}{\\Delta x}$$\n$$f_y(x,y)=\\lim_{\\Delta y\\to0} \\frac{f(x,y + \\Delta y)-f(x,y)}{\\Delta y}$$\n\nSome common notations are:\n\n$$f_x(x,y)=f_x=\\frac{\\partial f}{\\partial x}=\\frac{\\partial}{\\partial x}f(x,y)=\\frac{\\partial z}{\\partial x} = f_1=D_1f=D_xf$$\n$$f_y(x,y)=f_y=\\frac{\\partial f}{\\partial y}=\\frac{\\partial}{\\partial y}f(x,y)=\\frac{\\partial z}{\\partial y} = f_2=D_2f=D_yf$$\n\n\\subsection{Finding the Partial Derivative}\n\nTo find the partial derivatives of the function $f(x,y)$, one would hold $y$ constant, while differentiating with respect to $x$ to find $\\frac{\\partial f}{\\partial x}$, and hold $x$ constant, while differentiating with respect to $y$ to find $\\frac{\\partial f}{\\partial y}$\n\n\n\\end{document}\n", "meta": {"hexsha": "ed70a285182f1bab8348b09795b9cad1ab006c88", "size": 2948, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture10.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": 
"2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture10.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture10.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.3956043956, "max_line_length": 274, "alphanum_fraction": 0.5966757123, "num_tokens": 907, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5930972288415673}}
{"text": "%!TEX root = ../thesis.tex\n\\chapter{Preliminaries}\n\\label{preliminaries}\n\n\\section{Overview}\n\\label{preliminaries:overview}\n\nCryptography is the art and science of securing digital information, transactions, and distributed computations in the presence of third parties~\\cite{Katz:2014:IMC:2700550}. Cryptography is directly related to data privacy and anonymity requirements. The blockchain is a good example of cryptography embracement to enable the sought trust needed for the exchange of digital assets. In this section we review the basic building blocks that cryptography provides to a data sharing system when the blockchain is used. In this Section, cryptographic schemes are introduced in order to lay the ground for the following research. As a result, the exposition style will not be formal in well known cases.\n\n\\section{Cryptographic Hash Functions}\n\\label{preliminaries:hash}\n\nA \\textit{hash function} is a function which takes an input of arbitrary length and returns a fixed-length value~\\cite{crypto_101,boneh_crypto,kiagias:crypto,Katz:2014:IMC:2700550}. The hash output value is called \\textit{digest}.\n\nHash functions have many applications. They are used in data structures -- such as hash tables -- in authentication schemes, password verification and data identifiers to name just a few.\n\nA hash function guarantees at a minimum that for the same input yields the same output. The size of the output is fixed, thus the output range is finite. For this reason it is possible two different inputs produce the same output. This phenomenon is called \\textit{collision}.\n\n\\textit{Cryptographic hash functions} have much stronger properties than regular hash functions. Ideal cryptographic hash functions should be easily computable, noninvertible and collision-resistant~\\cite{Katz:2014:IMC:2700550, kiagias:crypto, crypto_101}. A cryptographic hash function $H$ is a deterministic polynomial algorithm that takes as input any given string $x \\in \\{0, 1\\}^{*}$ and outputs a string $H(x) \\in \\{0, 1\\}^{k}$ where $k$ is of fixed size. In the case of a hash function $H$, a collision is a pair of distinct messages $m_0, m_1$ where $m_0 \\neq m_1$ and $H(m_0) = H(m_1)$. A hash function $H$ is collision-resistant if it is infeasible for any polynomial-time algorithm to find collisions.\n\nThree levels of security~\\cite{Katz:2014:IMC:2700550} can be identified:\n\n\\begin{itemize}\n  \\item \\textbf{Preimage resistance}: Given a digest $h$ it is hard to find any message $m$ with $H(m) = h$\n  \\item \\textbf{Second preimage resistance}: For any given message $m_0$ it is hard to find a second message $m_1 \\neq m_0$ such as $H(m_0) = H(m_1)$\n  \\item \\textbf{Collision resistance}: It is hard to find a pair of messages $m_0, m_1$ where $m_0 \\neq m_1$ and $H(m_0) = H(m_1)$\n\\end{itemize}\n\nCollision resistance is the strongest security property and a requirement for cryptographic hash functions.\n\nIn a data sharing system, cryptographic hash functions can be used to provide file integrity verification and provenance tracking~\\cite{10.1109/SPW.2015.27, Azaria2016}. File modifications in transit can be easily detected, as all changes output a different digest. A digest of a dataset serves as a means of unique file identification; unique persistent identifiers (PID) can be used as pointers to a data location. Any change to the PID is visible and trackable~\\cite{dist_pid}.\n\nCryptographic hash functions are heavily used in the blockchain. 
In the blockchain, hash functions are used to create unique transaction and block IDs, to provide proofs of inclusion -- that is, to prove that a transaction is contained in a block -- and, most importantly, to achieve decentralized consensus among the participants. In particular, Bitcoin makes use of the SHA256 and RIPEMD160 hash functions. In short, cryptographic hash functions play a central role in the blockchain ecosystem.\n\n\\section{Symmetric-key cryptography}\n\\label{preliminaries:sym}\n\nIn a symmetric cryptosystem two parties share a common \\textit{secret key} agreed prior to communication. The key is used for both encryption and decryption. When a party wants to securely send a message she has to use the secret key to encrypt it and then send it. The receiver uses the same secret key to decrypt and recover the message. More formally, a symmetric cryptosystem is composed of the following algorithms~\\cite{Katz:2014:IMC:2700550, kiagias:crypto}:\n\n\\begin{itemize}\n  \\item A \\textbf{key generation algorithm} $\\calg$ that takes as input a security parameter $1^{n}$ and outputs a key $k$.\n  \\item An \\textbf{encryption algorithm} $\\cale$ that takes as input a key $k$ and a plaintext $m$ and outputs a ciphertext $c$.\n  \\item A \\textbf{decryption algorithm} $\\cald$ that takes as input a key $k$ and a ciphertext $c$ and outputs a plaintext $m$.\n\\end{itemize}\n\nThe set of all possible keys that can be derived from the key generation algorithm $\\calg$ is called the \\textit{key space} $\\calk$. Respectively, the set of all possible plaintexts is called the \\textit{plaintext message space}, denoted $\\calm$, and the set of all possible ciphertexts is called the \\textit{ciphertext message space}, denoted $\\calc$. In practice, keys are usually of some fixed length; 256-bit keys are very common. On the other hand, messages and ciphertexts can be of arbitrary length. For example, a message may be a video or a music file or even a single bit.\n\nA symmetric cryptosystem must satisfy the \\textit{correctness} property: for all $m \\in \\calm$ and $k \\in \\calk$, it holds that:\n\n\\begin{equation*}\n  \\cald_{k}(\\cale_{k}(m)) = m\n\\end{equation*}\n\nNo deterministic cryptosystem can be secure~\\cite{Katz:2014:IMC:2700550, kiagias:crypto}. For this reason, randomness is essential to any encryption scheme.\n\nIn the following chapters the terms cipher, symmetric encryption scheme and symmetric scheme are used interchangeably.\n\n\\subsection{Block Ciphers}\n\\label{preliminaries:sym:block}\n\nA block cipher is a deterministic algorithm that encrypts blocks -- sequences of bits -- of fixed length. The plaintext and the ciphertext are always of the same size. The size is fixed by the block cipher and is called the \\textit{block size}. The message space $\\calm$ and ciphertext space $\\calc$ are the same finite set $\\{ 0, 1\\}^{n}$ and the key space is  $\\calk = \\{ 0, 1\\}^{l}$.\n\nFormally, a block cipher is a keyed permutation function $F: \\{ 0, 1\\}^{n} \\times \\{ 0, 1\\}^{l} \\rightarrow \\{ 0, 1\\}^{n}$ where $n$ is the block size and $l$ the key size. Usually, the block size and the key size are the same; common key and block sizes range from $128$ to $512$ bits. The function $F$ must be \\textit{one-to-one} for every key $k \\in \\calk$ and both $F$ and its inverse $F^{-1}$ should be easily computed; there is a polynomial-time algorithm that, given $k$, computes $F$ and $F^{-1}$. 
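A toy sketch of a keyed permutation (purely illustrative -- a seeded PRNG stands in for the internal structure of a real cipher, and a 4-bit block is far too small to be secure):\n\n\\begin{verbatim}\nimport random\n\nBLOCK_BITS = 4   # toy block size; real ciphers use n >= 128\n\ndef keyed_permutation(key):\n    # derive a permutation of {0,1}^n and its inverse from the key\n    rng = random.Random(key)\n    table = list(range(2 ** BLOCK_BITS))\n    rng.shuffle(table)\n    inverse = [0] * len(table)\n    for x, y in enumerate(table):\n        inverse[y] = x\n    return (lambda x: table[x]), (lambda y: inverse[y])\n\nF, F_inv = keyed_permutation(key=42)\nfor m in range(2 ** BLOCK_BITS):\n    assert F_inv(F(m)) == m   # one-to-one and easily invertible\n\\end{verbatim}\n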
The total number of permutations of $\\{ 0, 1\\}^{n}$ is $(2^{n})!$; a block cipher with an $l$-bit key can realize at most $2^{l}$ of them.\n\nThe security notion for a block cipher is much stronger than semantic security~\\cite{boneh_crypto}: a permutation produced by $F$ should be indistinguishable from a random permutation. Cryptographic schemes based on block ciphers with rigorous proofs of security can, therefore, be constructed.\n\n\\begin{figure}[!ht]\n    \\centering\n    \\begin{tikzpicture}[node distance=2.5cm,auto,>=latex']\n      \\node [cipher, minimum size=7em] (a) {AES};\n      \\draw[<-] (a.west)  -- node[above]{$128$ bits} node[left, xshift=-2em]{$m$} ++(-4em,0em);\n      \\draw[->] (a.east)  -- node[above]{$128$ bits} node[right, xshift=2em]{$c$} ++(4em,0em);\n      \\draw[<-] (a.north)  -- node[right]{$128$ bits} node[above, yshift=2em]{$k$} ++(0em,4em);\n    \\end{tikzpicture}\n  \\caption{Block cipher AES}\n  \\label{fig:sym:block:aes}\n\\end{figure}\n\n\\subsubsection[Modes of Operation]{Modes of Operation~\\cite{Katz:2014:IMC:2700550}}\n\\label{preliminaries:sym:modes}\n\nA mode of operation is a way to encrypt arbitrary-length messages using block ciphers of fixed block length. A message is broken into $l$ blocks of size $n$, the block cipher operating on each block. If the size of the last block is less than the block size, the block is padded to a full-size block. In most modes of operation, a random initial vector (IV) of length $n$ is required. The IV ensures distinct ciphertexts for each encryption, even for the same plaintext and key. The IV is not secret, as it is needed for decryption. For a mode to be secure, the IV must be distinct and random in each encryption~\\cite{Katz:2014:IMC:2700550}.\n\nSeveral modes of operation are presented below. These modes are vulnerable to padding oracle attacks~\\cite{padding_attacks} -- attacks that exploit the padding validation to decrypt the ciphertext -- or are malleable~\\cite{malleability} -- meaning one ciphertext can be transformed into another that decrypts to a different plaintext. To counter padding oracle attacks, or any tampering with the ciphertext, message authentication~\\cite{Katz:2014:IMC:2700550} should be used along with encryption to detect such changes. Notable modes that combine encryption with authentication are Counter with CBC-MAC (CCM), Offset Codebook Mode (OCB) and Galois/Counter Mode (GCM).\n\n\\epar{Electronic Codebook (ECB)}\n\\label{preliminaries:sym:modes:ecb}\n\nThe Electronic Codebook (ECB) is the simplest mode of operation. Given a plaintext $m = m_1, m_2, \\dots, m_l$, each block is encrypted separately. The ciphertext is $c = F_k(m_1), F_k(m_2), \\dots, F_k(m_l)$. ECB mode is not secure and should never be used.\n\n\\epar{Cipher Block Chaining (CBC)}\n\\label{preliminaries:sym:modes:cbc}\n\nThe Cipher Block Chaining (CBC) mode outputs a sequence of cipher blocks where each plaintext block is XORed with the previous ciphertext block, so each ciphertext block depends on all preceding blocks. First, a random IV of length $n$ is selected. The first plaintext block is XORed with the IV, the second block with the first cipher block $c_1$, and so on: letting $c_0 = IV$, we have $c_i = F_k(c_{i-1} \\xor m_i)$ for all $i \\in [1, l]$. 
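A minimal sketch of CBC over a generic block permutation (the block function is passed in as a parameter, for example the toy permutation above; a real implementation would use AES):\n\n\\begin{verbatim}\nimport os\n\nBLOCK = 16   # block size in bytes, e.g. the AES block size\n\ndef xor_bytes(a, b):\n    return bytes(x ^ y for x, y in zip(a, b))\n\ndef cbc_encrypt(F_k, blocks):\n    # blocks: plaintext blocks m_1..m_l, each BLOCK bytes long\n    iv = os.urandom(BLOCK)    # fresh random IV per message\n    c = [iv]                  # c_0 = IV, sent in the clear\n    for m in blocks:\n        c.append(F_k(xor_bytes(c[-1], m)))  # c_i = F_k(c_{i-1} xor m_i)\n    return c\n\ndef cbc_decrypt(F_k_inv, c):\n    # m_i = F_k^{-1}(c_i) xor c_{i-1}\n    return [xor_bytes(F_k_inv(c[i]), c[i - 1]) for i in range(1, len(c))]\n\\end{verbatim}\n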
The main drawback of CBC mode is that encryption is inherently sequential -- computing $c_i$ requires $c_{i-1}$ -- and therefore cannot be parallelized; decryption, by contrast, can be parallelized, since all ciphertext blocks are available at once.\n\n\\epar{Output Feedback (OFB)}\n\\label{preliminaries:sym:modes:ofb}\n\nThe Output Feedback (OFB) mode uses the block cipher to generate a pseudorandom stream which is then XORed with the plaintext. The stream is generated by repeatedly encrypting the IV, independently of the plaintext: the $i$-th block of the stream is $r_i = F_k(r_{i-1})$ where $r_0 = IV \\rselect{\\{ 0, 1 \\}^{n}}$. The ciphertext is produced by XORing the plaintext with the appropriate stream block; that is, $c_i = m_i \\xor r_i$.\n\nSince the stream blocks are chained, neither encryption nor decryption can be parallelized. On the other hand, the pseudorandom stream can be computed independently of the actual message encryption and can be prepared ahead of time.\n\n\\epar{Counter (CTR)}\n\\label{preliminaries:sym:modes:ctr}\n\nThe Counter (CTR) mode, like OFB, also generates a pseudorandom stream. First, a random $CTR \\rselect{\\{ 0, 1 \\}^{n}}$ is chosen -- much like the IV in previous modes. The stream is generated as $r_i = F_k(CTR + i)$ where $r_0 = CTR$. As before, the ciphertext is computed as $c_i = m_i \\xor r_i$.\n\nCTR mode has many advantages. One of the main advantages of CTR mode is that both encryption and decryption can be fully parallelized. Secondly, as with OFB, the stream can be computed in advance. Finally, it is possible to decrypt the $i$-th block of the ciphertext without having to decrypt any other cipher block. This property is called random access.\n\n\\begin{figure}[!ht]\n    \\centering\n    \\begin{tikzpicture}[node distance=2.5cm,auto,>=latex']\n      \\node [cipher] (a) {$F_k$};\n      \\node [cipher, right of=a] (b) {$F_k$};\n      \\node [cipher, right of=b] (c) {$F_k$};\n      \\draw[<-] (a.north)  -- node[above, yshift=2em]{$m_1$} ++(0em,4em);\n      \\draw[<-] (b.north)  -- node[above, yshift=2em]{$m_2$} ++(0em,4em);\n      \\draw[<-] (c.north)  -- node[above, yshift=2em]{$m_3$} ++(0em,4em);\n      \\draw[->] (a.south)  -- node[below, yshift=-2em]{$c_1$} ++(0em,-4em);\n      \\draw[->] (b.south)  -- node[below, yshift=-2em]{$c_2$} ++(0em,-4em);\n      \\draw[->] (c.south)  -- node[below, yshift=-2em]{$c_3$} ++(0em,-4em);\n    \\end{tikzpicture}\n  \\caption{ECB mode}\n  \\label{fig:sym:block:ecb}\n\\end{figure}\n\n\\begin{figure}[!ht]\n    \\centering\n    \\begin{tikzpicture}[node distance=3cm,auto,>=latex']\n      \\node [cipher] (a) {$F_k$};\n      \\node [cipher, right of=a] (b) {$F_k$};\n      \\node [cipher, right of=b] (c) {$F_k$};\n      \\node [above of=a, node distance=2cm] (xor1) {$\\xor$};\n      \\node [above of=b, node distance=2cm] (xor2) {$\\xor$};\n      \\node [above of=c, node distance=2cm] (xor3) {$\\xor$};\n\n      \\node[left of=a] (iv) {$IV$};\n\n      \\draw[<-] (xor1.north)  -- node[above, yshift=1em]{$m_1$} ++(0em,2em);\n      \\draw[<-] (a.north)  -- (xor1.south) node[above, yshift=1em]{};\n\n      \\draw[<-] (xor2.north)  -- node[above, yshift=1em]{$m_2$} ++(0em,2em);\n      \\draw[<-] (b.north)  -- (xor2.south) node[above, yshift=1em]{};\n\n      \\draw[<-] (xor3.north)  -- node[above, yshift=1em]{$m_3$} ++(0em,2em);\n      \\draw[<-] (c.north)  -- (xor3.south) node[above, yshift=1em]{};\n\n      \\draw[->] (a.south)  -- node[below, yshift=-2em]{$c_1$} ++(0em,-4em);\n      \\draw[->] (b.south)  -- node[below, 
yshift=-2em]{$c_2$} ++(0em,-4em);\n      \\draw[->] (c.south)  -- node[below, yshift=-2em]{$c_3$} ++(0em,-4em);\n\n      \\draw[->] (iv.south)  -- node[below, yshift=-2.5em]{$IV$} ++(0em,-5em);\n\n      \\draw[->] ([yshift=-4em]a.center) -- ++(3.5em,0em) -- ++(0em,8.7em) -- (xor2.west);\n      \\draw[->] ([yshift=-4em]b.center) -- ++(3.5em,0em) -- ++(0em,8.7em) -- (xor3.west);\n\n      \\draw[->] (iv.north) -- ++(0em, 3.95em) -- (xor1.west);\n\n    \\end{tikzpicture}\n  \\caption{CBC mode}\n  \\label{fig:sym:block:cbc}\n\\end{figure}\n\n\\begin{figure}[!ht]\n    \\centering\n    \\begin{tikzpicture}[node distance=3cm,auto,>=latex']\n      \\node [cipher] (a) {$F_k$};\n      \\node [cipher, right of=a] (b) {$F_k$};\n      \\node [cipher, right of=b] (c) {$F_k$};\n      \\node [below of=a, node distance=2cm] (xor1) {$\\xor$};\n      \\node [below of=b, node distance=2cm] (xor2) {$\\xor$};\n      \\node [below of=c, node distance=2cm] (xor3) {$\\xor$};\n\n      \\node[left of=a] (iv) {$IV$};\n\n      \\draw[->] (a.south)  -- (xor1.north);\n      \\draw[->] (xor1.south)  -- node[below, yshift=-1em]{$c_1$} ++(0em,-2em);\n      \\draw[<-] (xor1.west)  -- node[left, xshift=-1em]{$m_1$} ++(-2em,0em);\n\n      \\draw[->] (b.south)  -- (xor2.north);\n      \\draw[->] (xor2.south)  -- node[below, yshift=-1em]{$c_2$} ++(0em,-2em);\n      \\draw[<-] (xor2.west)  -- node[left, xshift=-1em]{$m_2$} ++(-2em,0em);\n      \\draw[->] ([yshift=-3em]a.center) -- ++(3.5em,0em) -- ++(0em,7.7em) --  ++(3.6em,0em) -- (b.north);\n\n      \\draw[->] (c.south)  -- (xor3.north);\n      \\draw[->] (xor3.south)  -- node[below, yshift=-1em]{$c_3$} ++(0em,-2em);\n      \\draw[<-] (xor3.west)  -- node[left, xshift=-1em]{$m_3$} ++(-2em,0em);\n      \\draw[->] ([yshift=-3em]b.center) -- ++(3.5em,0em) -- ++(0em,7.7em) --  ++(3.6em,0em) -- (c.north);\n\n      \\draw[->] (iv.north) -- ++(0em, 4em) -- ++(7.1em, 0em) -- (a.north);\n      \\draw[->] (iv.south)  -- node[below, yshift=-3.5em]{$IV$} ++(0em,-6em);\n\n    \\end{tikzpicture}\n  \\caption{OFB mode}\n  \\label{fig:sym:block:ofb}\n\\end{figure}\n\n\\begin{figure}[!ht]\n    \\centering\n    \\begin{tikzpicture}[node distance=3cm,auto,>=latex']\n      \\node [cipher] (a) {$F_k$};\n      \\node [cipher, right of=a] (b) {$F_k$};\n      \\node [cipher, right of=b] (c) {$F_k$};\n      \\node [below of=a, node distance=2cm] (xor1) {$\\xor$};\n      \\node [below of=b, node distance=2cm] (xor2) {$\\xor$};\n      \\node [below of=c, node distance=2cm] (xor3) {$\\xor$};\n\n      \\draw[<-] (a.north)  -- node[above, yshift=1em]{$CTR + 1$} ++(0em,2em);\n      \\draw[<-] (b.north)  -- node[above, yshift=1em]{$CTR + 2$} ++(0em,2em);\n      \\draw[<-] (c.north)  -- node[above, yshift=1em]{$CTR + 3$} ++(0em,2em);\n\n      \\node[left of=a] (iv) {$CTR$};\n\n      \\draw[->] (a.south)  -- (xor1.north);\n      \\draw[->] (xor1.south)  -- node[below, yshift=-1em]{$c_1$} ++(0em,-2em);\n      \\draw[<-] (xor1.west)  -- node[left, xshift=-1em]{$m_1$} ++(-2em,0em);\n\n      \\draw[->] (b.south)  -- (xor2.north);\n      \\draw[->] (xor2.south)  -- node[below, yshift=-1em]{$c_2$} ++(0em,-2em);\n      \\draw[<-] (xor2.west)  -- node[left, xshift=-1em]{$m_2$} ++(-2em,0em);\n\n      \\draw[->] (c.south)  -- (xor3.north);\n      \\draw[->] (xor3.south)  -- node[below, yshift=-1em]{$c_3$} ++(0em,-2em);\n      \\draw[<-] (xor3.west)  -- node[left, xshift=-1em]{$m_3$} ++(-2em,0em);\n\n      \\draw[->] (iv.south)  -- node[below, yshift=-3.5em]{$CTR$} ++(0em,-6em);\n\n    \\end{tikzpicture}\n  \\caption{CTR 
mode}\n  \\label{fig:sym:block:ctr}\n\\end{figure}\n\n\\section{Public Key Cryptography}\n\\label{preliminaries:pub}\n\nAs we saw in~\\ref{preliminaries:sym}, a secret key needs to be agreed upon prior to communication. In 1976, Whitfield Diffie and Martin Hellman published a paper called New Directions in Cryptography~\\cite{Diffie:2006:NDC:2263321.2269104} which completely changed the way we communicate. The two cryptographers proposed a protocol that enables two parties, having had no prior communication, to establish a secret key over an insecure channel in the presence of eavesdropping adversaries. The resulting paradigm uses two keys, one for encryption and one for decryption. The encryption key is called the \\textit{public key} and the decryption key is called the \\textit{secret key} (or private key). Every party has a key pair, consisting of a public and a secret key. The public key is available to anyone who wants to send an encrypted message to the key holder -- the receiver may post the public key online beforehand. The receiver of the message decrypts the ciphertext using her private key. Only the owner of the private key can decrypt a message that was encrypted using the corresponding public key. A key pair is essentially an identity, and the blockchain smartly utilizes this to provide pseudonymous identities to the users of the system.\n\nA public-key encryption scheme is composed of the following probabilistic, polynomial-time algorithms~\\cite{Katz:2014:IMC:2700550, kiagias:crypto}:\n\n\\begin{itemize}\n  \\item The \\textbf{key generation algorithm} $\\calg$: Takes as input a security parameter $1^{n}$ and outputs a key pair ($p_k$, $s_k$).\n  \\item The \\textbf{encryption algorithm} $\\cale$: Takes as input a public key $p_k$ and a plaintext $m$ and outputs a ciphertext $c$.\n  \\item The \\textbf{decryption algorithm} $\\cald$: Takes as input a private key $s_k$ and a ciphertext $c$ and outputs a plaintext $m$.\n\\end{itemize}\n\nLikewise, a public-key cryptosystem must satisfy the correctness property: for all $m \\in \\calm$ and $(p_k, s_k) \\in \\calk$, it holds that:\n\n\\begin{equation*}\n  \\cald_{s_k}(\\cale_{p_k}(m)) = m\n\\end{equation*}\n\nMany important and widely used public-key schemes base their security on mathematical problems that, under certain conditions, are assumed to be hard, i.e., not solvable in polynomial time. Such problems are the discrete logarithm and the factoring problem.\n\n\\subsection[Discrete Logarithm and Diffie-Hellman Assumptions]{Discrete Logarithm and Diffie-Hellman Assumptions~\\cite{Katz:2014:IMC:2700550, kiagias:crypto}}\n\\label{preliminaries:pub:dlog}\n\nIn cryptography a system is considered secure if it is provably secure. There are cryptosystems which are perfectly secure, like the \\textit{one-time pad}~\\cite{shannon_otp}, and others for which perfect security cannot be achieved. In the latter case security is proven under computational hardness assumptions, in other words by a reduction to a particular problem which, under certain conditions, is assumed to be hard to solve in polynomial time. Significant cryptographic schemes like the Diffie-Hellman key exchange and the El Gamal encryption and signature schemes are built on the discrete log hardness assumption.\n\n\\begin{dfn}\n  Let $\\G$ be a cyclic group of order $q$ and let $g$ be a generator of $\\G$. 
The \\textit{discrete log problem} (DLOG) is as follows: given a random element $h \\in \\G$, find an integer $x \\in \\Z_q$ such that $g^{x} = h$.\n\\end{dfn}\n\n\\begin{dfn}\n The \\textit{computational Diffie-Hellman problem} (CDH) is as follows: given a cyclic group $\\G$ of order $q$, a generator $g \\in \\G$, $g^{a}$ and $g^{b}$ where $a, b \\rselect{\\Z_q}$, compute $g^{ab}$.\n\\end{dfn}\n\n\\begin{dfn}\n The \\textit{decisional Diffie-Hellman problem} (DDH) is as follows: given a cyclic group $\\G$ of order $q$, a generator $g \\in \\G$, and $g^{a}, g^{b}, g^{c}$ where $a, b, c \\rselect{\\Z_q}$, decide if $c = ab$ or $c \\rselect{\\Z_q}$.\n\\end{dfn}\n\nThe DLOG problem is believed to be hard for specific group families of $\\G$. For example, solving the DLOG for the subgroup of order $q$ of $\\Z^{*}_{p}$, where $p$ and $q$ are primes of size at least $2048$ bits and $256$ bits respectively, is considered infeasible. This assumption is called the \\textit{discrete log assumption}.\n\nThe DDH problem is no harder than the CDH problem; it holds that $DDH \\leq CDH$: if an adversary can solve CDH she can compute $g^{ab}$, compare it to $g^{c}$, and thus easily solve DDH.\n\n\\section{Digital signatures}\n\\label{preliminaries:sign}\n\nA \\textit{digital signature} is a fundamental cryptographic primitive. It is the digital equivalent of a handwritten signature. It is a scheme that demonstrates the authenticity of digital messages or documents.\n\nIn a digital signature scheme, each party holds a unique key pair $(p_k, s_k)$. A message $m$ is signed with the \\textit{signing key} $s_k$ and the \\textit{verification key} $p_k$ is used to verify the signature. Only someone who knows $s_k$ can sign a message, but all parties that have access to $p_k$ can verify a signature.\n\nDigital signatures have the following important properties:\n\n\\begin{itemize}\n  \\item \\textbf{Authentication}: The message is signed by a known sender\n  \\item \\textbf{Non-repudiation}: The sender cannot deny sending the message\n  \\item \\textbf{Integrity}: The message is not altered in transit\n\\end{itemize}\n\nDigital signatures are commonly used for software distribution and financial transactions and in cases where forgery detection is important. The blockchain uses digital signatures to establish asset ownership; the rightful owner signs the transaction to prove her possession of the asset.\n\nA digital signature scheme is composed of the following probabilistic polynomial-time algorithms~\\cite{Katz:2014:IMC:2700550,kiagias:crypto}:\n\n\\begin{itemize}\n  \\item The \\textbf{key generation algorithm} $Gen$: Takes as input a security parameter $1^{n}$ and outputs a key pair ($p_k$, $s_k$).\n  \\item A \\textbf{signing algorithm} $Sign$: Takes a signing key $s_k$ and a message $m$ and produces a digital signature $\\sigma$ of $m$\n  \\item A \\textbf{deterministic verification algorithm} $Verify$: Takes a verification key $p_k$, a message $m$ and a signature $\\sigma$. 
It outputs $b=1$ or $b=0$ ($true$ or $false$) to indicate if the signature is valid.\n\\end{itemize}\n\nThe primary goal of digital signatures is \\textit{unforgeability}; an adversary cannot create a new valid message-signature pair without the corresponding signing key.\n\n\\subsection{El Gamal Signature Scheme}\n\\label{preliminaries:sign:el_gamal}\n\nThe \\textit{El Gamal Signature Scheme}~\\cite{el_gamal} is a digital signature scheme based on the difficulty of the discrete logarithm problem (\u00a7~\\ref{preliminaries:pub:dlog}).\n\nThe scheme works as follows:\n\n\\begin{itemize}\n  \\item \\textbf{Key Generation}:\n    \\begin{enumerate}\n      \\item Choose a cryptographic hash function $H$ where $H: \\{0, 1\\}^{*} \\rightarrow \\{0, 1\\}^{n}$\n      \\item Choose a prime $p$ such that the DLOG problem is difficult\n      \\item Find a generator $g$ of the group $\\Z^{*}_{p}$\n      \\item Choose randomly $x \\rselect{\\Z_{p-1}}$\n      \\item Compute $h = g^{x} \\bmod p$\n      \\item Return $(h, x)$ where $h$ is the public key and $x$ the private key\n    \\end{enumerate}\n  \\item \\textbf{Sign}: Sign a message $m \\in \\{0, 1\\}^{*}$ with a private key $x$\n    \\begin{enumerate}\n      \\item Choose randomly $k \\rselect{\\Z_{p-1}}$ such that $gcd(k, p - 1) = 1$\n      \\item Compute $r = g^{k} \\bmod p$\n      \\item Compute $s = k^{-1}(H(m) - xr) \\bmod (p - 1)$\n      \\item Return the signature $(r, s)$\n    \\end{enumerate}\n  \\item \\textbf{Verify}: Verify a signature $(r, s)$ of the message $m$ with a public key $h$\n    \\begin{enumerate}\n      \\item Verify that $0 < r < p$ and $0 < s < p - 1$. Else output $0$.\n      \\item Compute $v = H(m)$\n      \\item If $g^{v} \\stackrel{?}{=} h^{r}r^{s} \\bmod p$ then output $1$ else $0$.\n    \\end{enumerate}\n\\end{itemize}\n
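A runnable toy sketch of the scheme (the parameters $p = 467$, $g = 2$ are arbitrary, insecure toy choices; an illustration, not a reference implementation):\n\n\\begin{verbatim}\nimport hashlib, math, random\n\np, g = 467, 2    # toy prime and generator of Z*_p (insecure sizes)\n\ndef H(m):\n    # hash a message to an integer\n    return int.from_bytes(hashlib.sha256(m).digest(), 'big')\n\ndef keygen():\n    x = random.randrange(1, p - 1)   # private key\n    return pow(g, x, p), x           # public key h = g^x mod p\n\ndef sign(x, m):\n    while True:\n        k = random.randrange(2, p - 1)\n        if math.gcd(k, p - 1) != 1:\n            continue\n        r = pow(g, k, p)\n        s = pow(k, -1, p - 1) * (H(m) - x * r) % (p - 1)\n        if s != 0:\n            return r, s\n\ndef verify(h, m, sig):\n    r, s = sig\n    if not (0 < r < p and 0 < s < p - 1):\n        return False\n    # check g^H(m) == h^r * r^s (mod p)\n    return pow(g, H(m), p) == pow(h, r, p) * pow(r, s, p) % p\n\nh, x = keygen()\nassert verify(h, b'hello', sign(x, b'hello'))\n\\end{verbatim}\n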
\\subsection{Digital Signature Algorithm (DSA)}\n\\label{preliminaries:sign:dsa}\n\nThe \\textit{Digital Signature Algorithm} (DSA)~\\cite{dsa_nist} is a variant of the El Gamal signature scheme, standardized by the National Institute of Standards and Technology (NIST). The main advantage of DSA over El Gamal is the size of the signature: DSA produces $320$-bit signatures, whereas El Gamal requires signatures of at least $2048$ bits to be secure according to contemporary security standards~\\cite{dsa}.\n\nThe algorithm works as follows~\\cite{dsa}:\n\n\\begin{itemize}\n  \\item \\textbf{Key Generation}:\n    \\begin{enumerate}\n      \\item Choose a cryptographic hash function $H$ where $H: \\{0, 1\\}^{*} \\rightarrow \\{0, 1\\}^{n}$\n      \\item Choose a prime $q$ of size $n$\n      \\item Choose a prime $p$ of size $l$ such that $p - 1$ is a multiple of $q$. The size $l$ must be a multiple of $64$ between $512$ and $1024$\n      \\item Choose randomly $a \\rselect{\\Z_p}$\n      \\item Compute $g = a^{(p - 1)/q} \\bmod p$, repeating the previous step if $g = 1$\n      \\item Choose randomly $x \\rselect{\\Z_q}$\n      \\item Compute $h = g^{x} \\bmod p$\n      \\item Return $((p, q, g), h, x)$ where $(p, q, g)$ are the public parameters of the algorithm, $h$ is the public key and $x$ the private key.\n    \\end{enumerate}\n  \\item \\textbf{Sign}: Sign a message $m \\in \\{0, 1\\}^{*}$ with a private key $x$\n    \\begin{enumerate}\n      \\item Choose randomly $k \\rselect{\\Z_q}$\n      \\item Compute $r = (g^{k} \\bmod p) \\bmod q$\n      \\item Compute $s = k^{-1}(H(m) + xr) \\bmod q$\n      \\item Return the signature $(r, s)$\n    \\end{enumerate}\n  \\item \\textbf{Verify}: Verify a signature $(r, s)$ of the message $m$ with a public key $h$\n    \\begin{enumerate}\n      \\item Verify that $0 < r < q$ and $0 < s < q$. Else output $0$.\n      \\item Calculate $w = s^{-1} \\bmod q$\n      \\item Calculate $u_1 = (H(m)w) \\bmod q$\n      \\item Calculate $u_2 = (rw) \\bmod q$\n      \\item Calculate $v = ((g^{u_1}h^{u_2}) \\bmod p) \\bmod q$\n      \\item If $v \\stackrel{?}{=} r$ then output $1$ else $0$.\n    \\end{enumerate}\n\\end{itemize}\n\n\\section{Elliptic-curves}\n\\label{preliminaries:el_curves}\n\n\\textit{Elliptic curves} (ECC) play an important role in cryptography. All cryptosystems presented in previous sections are built over multiplicative groups of integers modulo a sufficiently large prime $p$~\\cite{kiagias:crypto, boneh_crypto}. Elliptic curves form additive \\textit{abelian groups}~\\cite{kiagias:crypto}: groups satisfying associativity, commutativity, and the existence of an identity element and of inverse elements under the addition operation~\\cite{elliptic_curves_2}.\n\nThere are two main benefits to using curves over modular groups~\\cite{kiagias:crypto}. The first is cost efficiency. In practice, for cryptosystems whose security depends on the discrete log or factorisation problem in a modular group, such as El Gamal or RSA, the primes have to be at least 2048 bits for the system to be secure. Elliptic curves, on the other hand, offer similar security using much smaller key sizes: a 233-bit elliptic curve key gives the same level of security as a 2,240-bit RSA key~\\cite{ecc_rsa_bits, ecc_rsa_bits_1} and a 333-bit elliptic curve key as a 4096-bit RSA key~\\cite{blake1999elliptic}. The second is that no way has yet been found to generalize the attacks against the discrete logarithm problem on modular groups to elliptic curves~\\cite{kiagias:crypto}.\n\nCryptography is primarily interested in elliptic curves over $\\F_p$: the field of integers modulo $p$, where $p$ is a prime number. An elliptic curve $E$ over $\\F_p$ is defined by an equation of the form~\\cite{elliptic_curves, elliptic_curves_2}:\n\n\\begin{equation*}\n  y^2 = x^3 + ax + b\n\\end{equation*}\n\nwhere $a, b \\in \\F_p$. A pair $(x, y)$ with $x, y \\in \\F_p$ is a point on the curve if it satisfies the equation defining $E$. 
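A small sketch of this membership test (the curve $y^2 = x^3 - x + 1$ over $\\F_{97}$ is an arbitrary toy choice, echoing the curve plotted below):\n\n\\begin{verbatim}\np, a, b = 97, -1, 1    # toy curve y^2 = x^3 - x + 1 over F_97\n\ndef on_curve(x, y):\n    # does (x, y) satisfy the curve equation modulo p?\n    return (y * y - (x * x * x + a * x + b)) % p == 0\n\nassert on_curve(0, 1)        # 1^2 = 0^3 - 0 + 1  (mod 97)\nassert not on_curve(1, 2)    # 2^2 = 4 but 1^3 - 1 + 1 = 1\n\\end{verbatim}\n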
There is a special point, called the \\textit{point at infinity} and denoted by $\\infty$, which is also on the curve and serves as the \\textit{identity element}~\\cite{elliptic_curves_2}.\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}\n    \\begin{axis}[\n            xmin=-4,\n            xmax=5,\n            ymin=-5,\n            ymax=5,\n            xlabel={$x$},\n            ylabel={$y$},\n            scale only axis,\n            axis lines=middle,\n            domain=-1.3247:2,\n            y domain=-3:3,\n            samples=200,\n            smooth,\n            clip=false,\n            axis equal image=false,\n            ticks=none\n        ]\n\n    \\addplot[red] {sqrt(x^3-x+1)} node[right, black] {$y^2 = x^3 - x + 1$};\n    \\addplot[red] {-sqrt(x^3-x+1)};\n\n    \\end{axis}\n    \\end{tikzpicture}\n  \\caption{Elliptic curve}\n\\end{figure}\n\n\\subsection[Points addition]{Points addition~\\cite{elliptic_curves}}\n\\label{preliminaries:el_curves:addition}\n\nLet $P = (x_1, y_1)$ and $Q = (x_2, y_2)$ be two points on the elliptic curve $E$ where $P \\neq Q$ and $P, Q \\neq \\infty$. Any two points on a curve add to produce a third point on the curve. The third point $R = (x_3, y_3)$ on the curve $E$ is defined as:\n\n\\begin{equation*}\n  R = P + Q\n\\end{equation*}\n\nWe can find the third point $R$ as follows:\n\n\\begin{enumerate}\n  \\item Draw the line $L$ through $P$ and $Q$.\n  \\item Find the third point $-R$ at which $L$ intersects the curve.\n  \\item Reflect $-R$ across the $x$-axis to obtain $R$.\n\\end{enumerate}\n\nThe slope of the line $L$ can be found as\n\n\\begin{equation*}\n  m = \\dfrac{y_2 - y_1}{x_2 - x_1}\n\\end{equation*}\n\nand assuming $x_1 \\neq x_2$ the equation of $L$ is\n\n\\begin{equation*}\n  y = m(x - x_1) + y_1\n\\end{equation*}\n\nTo find the intersection with $E$, substitute $y$ to get\n\n\\begin{equation*}\n  (m(x - x_1) + y_1)^{2} = x^3 + ax + b\n\\end{equation*}\n\nRearranged, this is a cubic in $x$, and its three roots correspond to the three points of intersection of $L$ with $E$. As we already know two of the three roots, since $P$ and $Q$ are points on both $L$ and $E$, finding the third root is easy. For a cubic polynomial $x^{3} + ax^{2} + bx + c$ with roots $r, s, t$ it holds that~\\cite{elliptic_curves}\n\n\\begin{equation*}\n  x^{3} + ax^{2} + bx + c = (x - r)(x - s)(x - t) = x^{3} - (r + s + t)x^{2} + \\dots\n\\end{equation*}\n\nTherefore,\n\n\\begin{equation*}\n  r + s + t = -a\n\\end{equation*}\n\nKnowing the two roots $r, s$ we can recover the third as $t = -a - r - s$.\n\nSo, for the third point where $L$ intersects $E$ we obtain\n\n\\begin{equation*}\n  x = m^{2} - x_1 - x_2\n\\end{equation*}\n\nand\n\n\\begin{equation*}\n  y = m(x - x_1) + y_1\n\\end{equation*}\n\nFinally, to reflect $-R$ across the $x$-axis and obtain $R$, we change the sign of the $y$-coordinate. 
The final equations that give the third point are\n\n\\begin{align*}\n  x_3 = m^{2} - x_1 - x_2, && y_3 = m(x_1 - x_3) - y_1\n\\end{align*}\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}\n    \\begin{axis}[\n            xmin=-4,\n            xmax=5,\n            ymin=-5,\n            ymax=5,\n            xlabel={$x$},\n            ylabel={$y$},\n            scale only axis,\n            axis lines=middle,\n            domain=-1.3247:2,\n            y domain=-3:3,\n            samples=200,\n            smooth,\n            clip=false,\n            axis equal image=false,\n            ticks=none,\n        ]\n\n    \\addplot[red] {sqrt(x^3-x+1)} node[right, black] {};\n    \\addplot[red] {-sqrt(x^3-x+1)};\n    \\addplot[blue, domain=-3:3,] {0.27*x + 0.87};\n\n    \\draw [fill=black] (axis cs:-1.25, 0.535) circle (2pt);\n    \\draw [fill=black] (axis cs:0.165, 0.9) circle (2pt);\n    \\draw [fill=black] (axis cs:1.165, 1.17) circle (2pt);\n    \\draw [fill=black] (axis cs:1.165, -1.17) circle (2pt);\n\n    \\draw [dashed] (axis cs:1.165, 1.17) -- (axis cs:1.165,-1.17);\n\n    \\draw[color=black] (axis cs:-1.4, 0.635) node [left]{$P$};\n    \\draw[color=black] (axis cs:0.0, 1.5) node [left]{$Q$};\n    \\draw[color=black] (axis cs: 1.4, 1.6) node [left]{$-R$};\n    \\draw[color=black] (axis cs: 3.7, -1) node [left]{$R = P + Q$};\n\n    \\end{axis}\n    \\end{tikzpicture}\n  \\caption{Addition of points on elliptic curves}\n\\end{figure}\n\n\\subsection[Points doubling]{Points doubling~\\cite{elliptic_curves}}\n\\label{preliminaries:el_curves:doubling}\n\nLet $P = (x_1, y_1)$ be a point on the elliptic curve $E$ where $P \\neq -P$ and $P \\neq \\infty$. Then $2P = (x_3, y_3)$. Unlike in~\\ref{preliminaries:el_curves:addition}, where there are two points, in point doubling we have only one point and thus cannot draw a line through two points. In that case, we use the tangent line $L$ to the curve $E$ at the point $P$.\n\nThe slope of $L$ can be found as:\n\n\\begin{equation*}\n  m = \\dfrac{dy}{dx} = \\dfrac{3x_1^{2} + a}{2y_1}\n\\end{equation*}\n\nAssuming $y_1 \\neq 0$ the equation of $L$ is\n\n\\begin{equation*}\n  y = m(x - x_1) + y_1\n\\end{equation*}\n\nTherefore, proceeding as in~\\ref{preliminaries:el_curves:addition}, we obtain\n\n\\begin{align*}\n  x_3 = m^{2} - 2x_1, && y_3 = m(x_1 - x_3) - y_1\n\\end{align*}\n\n\\subsection{Elliptic curve discrete logarithm problem (ECDLP)}\n\nCyclic subgroups of some elliptic curve groups can be used to implement secure systems based on the discrete logarithm problem. As in multiplicative groups, the DLOG is assumed to be hard.\n\n\\begin{dfn}\n  The elliptic curve discrete logarithm problem (ECDLP) is as follows: given an elliptic curve $E$ over a finite field $\\F_p$ and two points $P, Q \\in E$, find an integer $d$ such that $Q = dP$.\n\\end{dfn}\n\nThe parameters that describe an elliptic curve $E$ defined over a finite field $\\F_p$, a base (generator) point $P \\in E$, and its order $q$ should be chosen carefully so that the ECDLP is resistant to all known attacks.\n
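A sketch of the group law and of double-and-add scalar multiplication on the same toy curve as before (computing $Q = dP$ takes $O(\\log d)$ group operations, while recovering $d$ from $Q$ is exactly the ECDLP):\n\n\\begin{verbatim}\np, a, b = 97, -1, 1    # toy curve y^2 = x^3 - x + 1 over F_97\nINF = None             # the point at infinity\n\ndef add(P, Q):\n    # group law, using the addition/doubling slopes derived above\n    if P is INF: return Q\n    if Q is INF: return P\n    (x1, y1), (x2, y2) = P, Q\n    if x1 == x2 and (y1 + y2) % p == 0:\n        return INF                  # P + (-P) = infinity\n    if P == Q:\n        m = (3 * x1 * x1 + a) * pow((2 * y1) % p, -1, p)\n    else:\n        m = (y2 - y1) * pow((x2 - x1) % p, -1, p)\n    x3 = (m * m - x1 - x2) % p\n    return (x3, (m * (x1 - x3) - y1) % p)\n\ndef mul(d, P):\n    # double-and-add scalar multiplication\n    R = INF\n    while d:\n        if d & 1:\n            R = add(R, P)\n        P = add(P, P)\n        d >>= 1\n    return R\n\nQ = mul(19, (0, 1))    # easy; recovering 19 from Q is the ECDLP\n\\end{verbatim}\n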
\\subsection[Key generation]{Key generation~\\cite{elliptic_curves_2}}\n\\label{preliminaries:el_curves:key_gen}\n\nLet $E$ be an elliptic curve over $\\F_p$ and $P$ a generator point in $E$ of order $q$. The key pair generation algorithm is defined as follows:\n\n\\begin{enumerate}\n  \\item Select a random integer $d \\rselect{\\Z_q}$\n  \\item Compute $Q = dP$\n  \\item Return the tuple $(Q, d)$ where $Q$ is the public key and $d$ the secret key\n\\end{enumerate}\n\nThe prime $p$, the curve $E$, the generator point $P$, its order $q$ and the public key $Q$ are public. Finding the secret key $d$ is equivalent to solving the ECDLP.\n\n\\subsection{Elliptic Curve Digital Signature Algorithm (ECDSA)}\n\\label{preliminaries:sign:el_curves:ecdsa}\n\nThe \\textit{Elliptic Curve Digital Signature Algorithm} (ECDSA)~\\cite{ecdsa} is a variant of the Digital Signature Algorithm (DSA) that uses elliptic curves. It is the most widely adopted elliptic curve-based signature scheme~\\cite{elliptic_curves_2}.\n\nLet $E$ be an elliptic curve over $\\F_p$, $P$ a generator point of order $q$ and $H$ a cryptographic hash function where $H: \\{0, 1\\}^{*} \\rightarrow  \\{0, 1\\}^{n}$ with $n$ the bit length of $q$. The ECDSA is defined as follows~\\cite{elliptic_curves_2}:\n\n\\begin{itemize}\n  \\item \\textbf{Key generation}: Run the elliptic curve key generation algorithm defined in~\\ref{preliminaries:el_curves:key_gen} to get a key pair $(Q, d)$.\n  \\item \\textbf{Sign}: Sign a message $m$ with a private key $d$\n    \\begin{enumerate}\n      \\item Choose randomly $k \\rselect{\\Z_q}$\n      \\item Compute the point $kP = (x_1, y_1)$\n      \\item Compute $r = x_1 \\bmod q$. If $r = 0$ then go to step 1\n      \\item Compute $e = H(m)$\n      \\item Compute $s = k^{-1} (e + dr) \\bmod q$. If $s = 0$ then go to step 1\n      \\item Return the signature $(r, s)$\n    \\end{enumerate}\n  \\item \\textbf{Verify}: Verify a signature $(r, s)$ of the message $m$ with a public key $Q$\n    \\begin{enumerate}\n      \\item Verify that $0 < r < q$ and $0 < s < q$. Else output $0$.\n      \\item Compute $e = H(m)$\n      \\item Compute $w = s^{-1} \\bmod q$\n      \\item Compute $u_1 = (ew) \\bmod q$ and $u_2 = (rw) \\bmod q$\n      \\item Compute $X = u_1P + u_2Q$\n      \\item If $X \\stackrel{?}{=} \\infty$ output $0$\n      \\item Take the $x_1$ coordinate of $X$ and compute $v = x_1 \\bmod q$\n      \\item If $v \\stackrel{?}{=} r$ then output $1$ else $0$.\n    \\end{enumerate}\n\\end{itemize}\n\nBlockchains use elliptic curves for key pair generation and transaction signing (ECDSA). Bitcoin and Ethereum use a specific elliptic curve for ECDSA~\\cite{ecdsa} whose parameters are defined in the secp256k1 standard~\\cite{secp}.\n\n\\section{Homomorphic Encryption}\n\\label{preliminaries:homo}\n\n\\textit{Homomorphic encryption} allows one to perform specific types of computation on encrypted data. Computations performed over a ciphertext return encrypted results; when the results are decrypted, they match the results of the same operations performed on the plaintext. An encryption scheme is called homomorphic when it has this property. In particular, if for all $m_1, m_2 \\in \\calm$ and $k \\in \\calk$ (secret key or public key):\n\n\\begin{equation*}\n  Enc_k(m_1) \\otimes Enc_k(m_2) = Enc_k(m_1 \\oplus m_2)\n\\end{equation*}\n\nthen the encryption scheme is homomorphic.\n\nHomomorphic encryption schemes are by nature \\textit{malleable} and have weaker security properties than non-homomorphic schemes. Various known encryption schemes are homomorphic, such as unpadded RSA and El Gamal. 
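Unpadded (textbook) RSA, for instance, is multiplicatively homomorphic. A toy demonstration with the classic textbook parameters $n = 3233 = 61 \\cdot 53$, $e = 17$, $d = 2753$ (insecure sizes, for illustration only):\n\n\\begin{verbatim}\nn, e, d = 3233, 17, 2753   # textbook RSA toy key\n\ndef enc(m):\n    return pow(m, e, n)\n\ndef dec(c):\n    return pow(c, d, n)\n\nm1, m2 = 12, 7\n# multiplying ciphertexts multiplies the plaintexts:\n# Enc(m1) * Enc(m2) = (m1 * m2)^e = Enc(m1 * m2)  (mod n)\nassert dec(enc(m1) * enc(m2) % n) == (m1 * m2) % n\n\\end{verbatim}\n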
\\section[Zero Knowledge Proofs]{Zero Knowledge Proofs~\\cite{kiagias:crypto}}\n\\label{zkp}\n\nA \\textit{proof of knowledge} is a protocol that enables one party to convince another of the validity of a \\textit{statement}.\nIn a \\textit{zero-knowledge proof}~\\cite{zkp} this is accomplished without revealing any information beyond the legitimacy of the proof~\\cite{kiagias:crypto}.\nIn other words, one party can prove to another party that a given statement is true, without conveying any information apart from the fact that the statement is indeed true.\n\nLet $\\calp$ be a prover and $\\calv$ the verifier. $\\calp$ must convince $\\calv$ that she has some\nknowledge of a statement $x$ without explicitly stating what she knows. We call this knowledge a \\textit{witness} $w$.\nBoth parties are aware of a predicate $R$ that will attest to $w$ being a valid witness to $x$~\\cite{kiagias:crypto}. In general,\n\n\\begin{itemize}\n  \\item The predicate $R$ is assumed to be polynomial-time computable.\n  \\item The prover $\\calp$ has $R,x$, and $w$ such that $R(x,w) = 1$. She wishes to prove possession of $w$ by producing a proof of knowledge $\\pi$.\n  \\item The verifier $\\calv$ has $R,x$, and $\\pi$.\n  \\item Given $R$ and $x$ it is hard to find a corresponding $w$ such that $R(x,w) = 1$.\n  \\item The prover $\\calp$ is unwilling to reveal $w$; otherwise the solution is trivial.\n  \\item The verifier $\\calv$ can efficiently check the validity of $\\pi$.\n\\end{itemize}\n\n\\subsection{Examples}\n\n\\subsubsection{The strange cave of Ali Baba}\n\\label{zkp:examples}\n\nLet's bring in all cryptographers' old friends, Alice and Bob, and let them play the following game: at the bottom of the cave~\\cite{Quisquater:1989:EZP:118209.118269} of figure~\\ref{fig:zkp:alibaba} there is a magic door that\ncan only be opened with a secret password. The cave has a single entrance and forks into two passages which meet at the magic door on the opposite side. Bob knows the secret password and he wants to prove that to Alice without revealing the password.\nBob proposes the following game to prove that he can open the magic door: Alice has to wait outside of the cave, while Bob enters the cave and\ntakes either the left or the right path. Alice cannot see which path Bob is taking. Then, Alice enters; standing at the entrance of the cave, she calls\nBob to come out from either the left or the right passage. Bob does so, using the secret password if necessary.\n\nIf Bob does not know the secret password, he has a $1/2$ chance of emerging from the passage Alice names, since she chooses it at random. If this process is repeated $k$ times, the probability of Bob convincing Alice that he knows the secret password without actually knowing it drops exponentially -- to at most $1/2^{k}$.\n\n\\subsubsection{The colour-blind friend}\n\nNow, suppose that Bob is colour-blind~\\cite{zkp:colour_blind} and Alice keeps two balls of the same size in her hands, one red and one green.\nTo Bob they seem identical and he is not sure which one is which. Alice wants to prove to him that they are indeed of different colours,\nwithout revealing which one is red and which is green. To prove her knowledge, she tells Bob to hold the two balls, one in each hand. Alice can see\nat this point which ball is in which hand. Next, he can either switch hands behind his back or leave them be. Finally, he reveals them. Alice can say with certainty whether he switched hands or not. Had the balls been of the same colour, they would have been indistinguishable and there would have been no way Alice could guess the right answer with probability higher than $1/2$. If they repeat this $k$ times, the probability that Alice always succeeds when the balls are identical is at most $1/2^{k}$. 
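The same commit--challenge--respond structure, with repetition driving down the cheating probability, underlies real protocols. As a minimal illustration (a standard textbook protocol, not specific to this thesis), one round of Schnorr's interactive proof of knowledge of a discrete logarithm, with arbitrary toy parameters:\n\n\\begin{verbatim}\nimport random\n\np, g = 467, 2      # toy group Z*_p with generator g (insecure sizes)\nq = p - 1          # order of g\nw = 153            # prover's secret witness\nh = pow(g, w, p)   # public statement: h = g^w mod p\n\ndef schnorr_round():\n    r = random.randrange(q)    # prover commits\n    t = pow(g, r, p)\n    c = random.randrange(q)    # verifier sends a random challenge\n    z = (r + c * w) % q        # prover responds\n    # verifier accepts iff g^z == t * h^c (mod p)\n    return pow(g, z, p) == t * pow(h, c, p) % p\n\nassert all(schnorr_round() for _ in range(20))\n\\end{verbatim}\n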
\\begin{figure}[t!]\n  \\centering\n  \\begin{subfigure}[t]{0.30\\textwidth}\n    \\centering\n    \\resizebox{\\linewidth}{!}{\n      \\begin{tikzpicture}[scale=1]\n        % Cave %\n        \\draw(0,5) -- (2,5) -- (2,8) -- (10,8) -- (10,4);\n        \\draw(0,4) --(2,4) -- (2,1) -- (10,1) -- (10, 4);\n        \\draw(9,4.5) -- (10,4.5);\n        \\draw(3,5) -- (3,7) -- (9,7) -- (9,5);\n        \\draw(3,5) -- (3,2) -- (9,2) -- (9,5);\n        % Actors %\n        \\draw[fill] (1,5.6) circle [radius=0.1];\n        \\node [below] (alice) at (1,5.5) {Alice};\n        \\draw[fill] (2.5,4.6) circle [radius=0.1];\n        \\node [below] (bob) at (2.5,4.5) {Bob};\n        % Arrows %\n        \\draw [dashed, ->] (2.5,4.6) -- (2.5,6);\n        \\draw [dashed, ->] (2.5,3.9) -- (2.5,2);\n\n      \\end{tikzpicture}\n    }\n    \\caption{Alice stands outside while Bob randomly chooses a path, left or right}\n    \\label{fig:zkp:alibaba:a}\n  \\end{subfigure}\n  \\begin{subfigure}[t]{0.30\\textwidth}\n    \\centering\n    \\resizebox{\\linewidth}{!}{\n      \\begin{tikzpicture}[scale=1]\n        % Cave %\n        \\draw(0,5) -- (2,5) -- (2,8) -- (10,8) -- (10,4);\n        \\draw(0,4) --(2,4) -- (2,1) -- (10,1) -- (10, 4);\n        \\draw(9,4.5) -- (10,4.5);\n        \\draw(3,5) -- (3,7) -- (9,7) -- (9,5);\n        \\draw(3,5) -- (3,2) -- (9,2) -- (9,5);\n        % Actors %\n        \\draw[fill] (2.5,4.6) circle [radius=0.1];\n        \\node [below] (alice) at (2.5,4.5) {Alice};\n        \\draw[fill] (9.5,4.3) circle [radius=0.1];\n        \\node [below] (bob) at (9.5,4.2) {Bob};\n        % Speech Bubble %\n        \\node[overlay,draw,cloud callout,callout relative pointer={(0.2cm,-0.7cm)},%\n        aspect=2.5,fill=white!90] at ($(alice)+(-0.5cm,2.3cm)$) {Left};\n\n      \\end{tikzpicture}\n    }\n    \\caption{Alice calls to Bob, asking him to come out of either the left or the right passage}\n    \\label{fig:zkp:alibaba:b}\n  \\end{subfigure}\n  \\begin{subfigure}[t]{0.30\\textwidth}\n    \\centering\n    \\resizebox{\\linewidth}{!}{\n      \\begin{tikzpicture}[scale=1]\n        % Cave %\n        \\draw(0,5) -- (2,5) -- (2,8) -- (10,8) -- (10,4);\n        \\draw(0,4) --(2,4) -- (2,1) -- (10,1) -- (10, 4);\n        \\draw(9,4.5) -- (9.5,4.5);\n        \\draw(3,5) -- (3,7) -- (9,7) -- (9,5);\n        \\draw(3,5) -- (3,2) -- (9,2) -- (9,5);\n        % Actors %\n        \\draw[fill] (2.5,4.6) circle [radius=0.1];\n        \\node [below] (alice) at (2.5,4.5) {Alice};\n        \\draw[fill] (2.5,7.8) circle [radius=0.1];\n        \\node [below] (bob) at (2.5,7.7) {Bob};\n        % Speech Bubble %\n        \\node[overlay,draw,cloud callout,callout relative pointer={(0.2cm,-0.7cm)},%\n        aspect=2.5,fill=white!90] at ($(alice)+(-0.5cm,2.3cm)$) {OK};\n\n      \\end{tikzpicture}\n    }\n    \\caption{Bob complies, appearing at the exit Alice names}\n    \\label{fig:zkp:alibaba:c}\n  \\end{subfigure}\n  \\caption{Ali Baba Cave}\n  \\label{fig:zkp:alibaba}\n\\end{figure}\n\n\\subsection[Formal Definition]{Formal Definition~\\cite{kiagias:crypto}}\n\\label{zkp:definition}\n\nLet $<\\calp, \\calv>$ be a pair of interactive programs. 
Define $\\text{out}^{\\calp}_{\\calp, \\calv}(x,w,z)$\nto be the output of $\\calp$ when both $\\calp$ and $\\calv$ are executed with the public input $x$ and private\ninputs $w$ and $z$ ($\\calp$ determines $w$ and $\\calv$ choose $z$); $\\text{out}^{\\calv}_{\\calp, \\calv}(x,w,z)$\nis similar defined for $\\calv$. The PPT interactive protocol $<\\calp, \\calv>$ is a \\textit{zero-knowledge proof}\nfor a language $L \\in NP$ with knowledge error $k$ and zero-knowledge distance $\\e$ if the following\nproperties hold~\\cite{kiagias:crypto}.\n\n\\begin{enumerate}\n  \\item \\textbf{Completeness:} If $x \\in L$ and $R(x,w) = 1$ for some witness $w$, then $\\text{out}^{\\calv}_{\\calp, \\calv}(x,w,z)$ = 1\n    for all string $z$ with overwhelming probability in $v$.\n  \\item \\textbf{Soundness:} For any polynomial-time program $\\calp^{*}$ define for arbitrary $x,w,z,$\n    \\begin{equation*}\n      \\pi_{x,w,z} = \\Prob[\\text{out}^{\\calv}_{\\calp^{*}, \\calv}(x,w,z) = 1].\n    \\end{equation*}\n    A protocol $<\\calp, \\calv>$ satisfies soundness if there are non-negligible functions $s(v), q(v)$ such for all $\\calp^{*}$ here exists a probabilistic\n    Turing machine (PTM) program $K$, called a knowledge extractor with the following property. Suppose that\n\n    \\begin{equation*}\n      \\widetilde{\\pi} = \\Prob[K(x,w,z) = w^{'} : R(x, x^{'}) = 1].\n    \\end{equation*}\n    Then it holds that $\\pi_{x,w,z} \\geq s(|x|)$ implies that $\\widetilde{\\pi}_{x,w,z} \\geq q(\\abs{x})$.\n\n  \\item \\textbf{(Statistical) Zero-knowledge:} For each polynomial-time program $\\calv^{*}$, there is a PTM program $S$, called the \\textit{simulator}, such that for all\n    $x,w$ with $R(x,w) = 1$, the random variables $S(x,z)$ and $\\text{out}^{\\calv^{*}}_{\\calp, \\calv^{*}}(x,w,z)$ are statistically indistinguishable for all strings $z$:\n    \\begin{equation*}\n      \\forall \\cala \\abs[\\Big]{\\Prob[\\cala(S(x,z) = 1)] - \\Prob[\\cala(\\text{out}^{\\calv^{*}}_{\\calp, \\calv^{*}}(x,w,z)) = 1]} < \\e.\n    \\end{equation*}\n\\end{enumerate}\n\nCompleteness is very similar to correctness. Assuming both the prover and verifier follow the protocol faithfully, completeness guarantees that the\nprotocol will succeed with a sufficiently high probability.\nSoundness ensures that if the statement is false, no cheating prover can convince the honest verifier that it is true, except with some small probability.\nIntuitively, statistical zero-knowledge is a property that prohibits a verifier from extracting information from an honest prover. A weaker version of zero-knowledge is \\textit{honest-verifier zero-knowledge} (HVZK) where it is assumed that the verifier executes the protocol faithfully, but makes additional computations.\n\n\\subsection{Verifiable Computation (VC)}\n\\label{zkp:vc}\n\nIn devices such as mobiles or IoT devices where the computational power is often limited, the need to outsource computation to one or more powerful workers on the cloud emerges. Yet, confidence is required for the working entity to assure computation is done properly. The client should be able to verify the correctness of the output. This way, the client is protected from malicious or malfunctioning workers, and the legitimate worker is no longer accountable for the computation~\\cite{pinocchio-nearly-practical-verifiable-computation}. 
This scheme is called \textit{public verifiable computation} (VC)~\cite{pinocchio-nearly-practical-verifiable-computation}.

In a public verifiable computation scheme the client outsources the evaluation of a function $F$ on input $u$ -- for example, a query. The client can then verify the correctness of the computed $F(u)$ with less work than the computation of $F(u)$ itself would require. The outsourced function $F$ can take two inputs $u$ and $w$, where $w$ is the worker's private input -- for example, a data set. In this case, the scheme is a \textit{Zero-Knowledge Verifiable Computation} (or non-interactive zero-knowledge (NIZK) proof~\cite{Blum:1991:NZ:123137.123145}) if the client learns nothing about the worker's private input except the computation's output~\cite{pinocchio-nearly-practical-verifiable-computation}.

Such schemes are not interactive, which means that no interaction is necessary between prover and verifier (worker and client); a \textit{common reference string} (CRS) shared between them is enough to achieve computational zero-knowledge~\cite{Blum:1991:NZ:123137.123145}.

Formally, as defined by Parno et al.~\cite{pinocchio-nearly-practical-verifiable-computation}, a public Zero-Knowledge verifiable computation scheme $\calv \calc$ consists of three polynomial-time algorithms:

\begin{itemize}
  \item $(EK_F, VK_F) \leftarrow \text{KeyGen}(F, 1^{\lambda})$: The \textbf{randomized key generation algorithm}. It takes as input the outsourced function $F$ and a security parameter ${\lambda}$; it outputs a public evaluation key $EK_F$ and a public verification key $VK_F$.
  \item $(y, \pi_y) \leftarrow \text{Compute}(EK_F, u)$: The \textbf{deterministic worker algorithm}. It takes as input the public evaluation key $EK_F$ and a public input $u$; it outputs $y \leftarrow F(u, w)$, where $w$ is an auxiliary private input, and a proof $\pi_y$ of $y$'s correctness.
  \item $\{0, 1\} \leftarrow \text{Verify}(VK_F, u, y, \pi_y)$: The \textbf{deterministic verification algorithm}. It takes as input the public verification key $VK_F$, the public input $u$, the output $y$ and the proof $\pi_y$; it outputs $1$ if $F(u) = y$ and $0$ otherwise.
\end{itemize}

\subsection{zk-SNARKs}
\label{zkp:snarks}

A \textit{Non-Interactive Zero-Knowledge proof} (NIZK)~\cite{Blum:1991:NZ:123137.123145} is a zero-knowledge proof where sharing a common reference string (CRS) between the prover and the verifier ahead of time is enough to implement zero-knowledge protocols without the need for interaction between the participants of the protocol. The \textit{zkSNARK} protocols are non-interactive zero-knowledge proof protocols which have some extra properties regarding proof size and verification time.

The acronym zkSNARK stands for \textit{`Zero-Knowledge Succinct Non-Interactive Argument of Knowledge'}, in which each individual part -- informally defined -- has the following meaning~\cite{zksnarks_nutshell, zcash, 184425, Bitansky:2012:ECR:2090236.2090263, zksnark_basics}:

\begin{itemize}
  \item \textbf{Zero-Knowledge}: The verifier learns nothing but the validity of the computation.
  \item \textbf{Succinct}: The proofs are short and easy to verify in comparison to the actual computation (the proof has constant size, and verification takes time polynomial in the size of the statement).
  \item \textbf{Non-Interactive}: There is little or no interaction; the proof is a single message.
Anyone can verify the proof, with no need for further interaction (publicly verifiable).
  \item \textbf{ARgument}: Soundness (\S~\ref{zkp:definition}) holds only against computationally bounded provers.
  \item \textbf{of Knowledge}: If the verifier accepts a proof output by a computationally bounded prover, then the prover has a witness for the given instance.
\end{itemize}

The first zkSNARK constructions were inspired by the \textit{PCP} theorem, allowing faster and shorter proofs~\cite{zksnark_basics}. In this section an analysis of the zk-SNARK model proposed by Gennaro et al.~\cite{ggpr} and the Pinocchio protocol proposed by Parno et al.~\cite{pinocchio-nearly-practical-verifiable-computation} is attempted, without getting into rigorous mathematical proofs, security assumptions or implementations. The purpose of this chapter is to provide a high-level understanding of zkSNARKs and their core mechanisms.

\subsection{Main idea}
\label{zkp:snarks:main_idea}

In order to construct a zkSNARK for a computational problem, the problem has to be encoded in a specific form, equivalent to the original problem, from which the zkSNARK can be derived. This form is called a \textit{Quadratic Arithmetic Program} (QAP).

At a high level, the zkSNARK construction consists of four basic steps (Figure~\ref{fig:zkp:zksnark_flow}):

\begin{enumerate}
  \item Computation to Arithmetic circuit~\cite{pinocchio-nearly-practical-verifiable-computation}
  \item Arithmetic circuit to Rank 1 Constraint System (R1CS)~\cite{ggpr}
  \item R1CS to Quadratic Arithmetic Program (QAP)~\cite{ggpr}
  \item QAP to zkSNARK~\cite{pinocchio-nearly-practical-verifiable-computation}
\end{enumerate}

\begin{figure}[ht!]
  \centering
  \begin{tikzpicture}[
    flow/.style={rectangle, minimum width=2cm, minimum height=1cm, text centered, draw=black,font=\footnotesize},
    scale=0.8
  ]
    \node[flow] (comp) at (0, 0) {Computation};
    \node[flow] (alg) at (4, 0) {Arithmetic Circuit};
    \node[flow] (r1cs) at (8, 0) {R1CS};
    \node[flow] (qap) at (12, 0) {QAP};
    \node[flow] (zk) at (16, 0) {zkSNARK};

    \draw[->] (comp) -- (alg);
    \draw[->] (alg) -- (r1cs);
    \draw[->] (r1cs) -- (qap);
    \draw[->] (qap) -- (zk);

  \end{tikzpicture}
  \caption{Steps of zk-SNARK}
  \label{fig:zkp:zksnark_flow}
\end{figure}

\subsubsection{Arithmetic circuits}
\label{zkp:snarks:circuits}

An \textit{arithmetic circuit} $C$ over a finite field $\F_p$ is a circuit that contains
only addition and multiplication gates. It takes as inputs elements in $\F_p$ and its gates output elements in $\F_p$~\cite{184425, zcash}. The outputs are determined by the inputs, which pass through the gates where their values are transformed accordingly. Any program can be reduced to an arithmetic circuit~\cite{pankova_succinct_2013, 10.1007/978-3-642-40084-1_6}, and normally the circuit is associated with the function it computes -- the outsourced function upon which the prover works.

A legal or \textit{valid assignment} for an arithmetic circuit $C$ is a tuple $(c_1, c_2, \dots, c_n) \in \F_p^n$ such that $C(c_1, c_2, \dots, c_k) = (c_{k+1}, c_{k+2}, \dots, c_n)$, where $k$ is the number of inputs and $n - k$ the number of outputs of the circuit. Given a circuit evaluation, the task of the prover is to convince the verifier that evaluations of intermediate gates exist, so that the circuit indeed produces such an output on such an input~\cite{pankova_succinct_2013}.
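To make these definitions concrete, the following fragment (a minimal illustrative sketch in Python; the prime $p$, the function names and the chosen assignments are ours, not part of the cited constructions) evaluates the small circuit used in the example that follows and checks whether a candidate tuple is a valid assignment:

\begin{verbatim}
# Sketch: the circuit C(c1, c2, c3) = (c1*c2, (c1*c2)*(c1 + c3)) over F_p.
p = 13  # an arbitrary small prime, for illustration only

def evaluate(c1, c2, c3):
    c4 = (c1 * c2) % p         # first multiplication gate
    c5 = (c4 * (c1 + c3)) % p  # second multiplication gate; its right
    return (c4, c5)            # input arrives through the addition gate

def is_valid_assignment(c1, c2, c3, c4, c5):
    # (c1, ..., c5) is valid iff re-evaluating the circuit on the
    # inputs reproduces the claimed intermediate and final values.
    return evaluate(c1, c2, c3) == (c4 % p, c5 % p)

assert is_valid_assignment(1, 7, 0, 7, 7)      # (1*7)*(1+0) = 7
assert not is_valid_assignment(1, 7, 0, 7, 8)  # wrong output value
\end{verbatim}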
For example, if Alice wants to prove to Bob that she knows $c_1, c_2, c_3 \in \F_p$ such that $(c_{1}c_{2})(c_1 + c_3) = 7$, she first has to encode the computation as an arithmetic circuit $C$ (shown in Figure~\ref{fig:zkp:circuit}). Then she wants to prove that she knows a valid assignment $(c_1, c_2, c_3, c_4, c_5)$, where $c_4 = c_{1}c_{2}$ and $c_5 = c_{4}(c_1 + c_3)$, such that $c_5 = 7$.

\begin{figure}[ht!]
  \centering
  \begin{tikzpicture}[
        gate/.style={
        circle,
        minimum size=1cm,
        draw
      },
      node distance=2cm
    ]

    \node[gate] (c1) at (0, 0) {$c_1$};
    \node[gate] (c2) at (2, 0) {$c_2$};
    \node[gate] (c_3) at (4, 0) {$c_3$};

    \node[gate] (g1) at (1, 2) {$*$};
    \node[gate] (_g) at (3, 2) {$+$};

    \node[gate] (g2) at (2, 4) {$*$};

    \draw[->] (c1) -- (g1);
    \draw[->] (c1) -- (_g);
    \draw[->] (c2) -- (g1);

    \draw[->] (c_3) -- (_g);

    \draw[->] (g1) -- (g2);
    \draw[->] (_g) -- (g2);

    \draw[->] (g2) -- ++(0, 2);

  \end{tikzpicture}
  \caption{A simple arithmetic circuit}
  \label{fig:zkp:circuit}
\end{figure}

\subsubsection[Quadratic Arithmetic Program (QAP)]{Quadratic Arithmetic Program (QAP)~\cite{zksnark_basics, zcash_snarks}}
\label{zkp:snarks:qap}

Gennaro et al.~\cite{ggpr} showed how to efficiently encode computations as quadratic programs, called Quadratic Arithmetic Programs (QAPs), in order to obtain zk-SNARKs. QAPs play an important role as they enable the prover to construct the proof $\pi$ with which she claims that she knows a valid assignment of a circuit $C$.

A QAP $Q(C) = (L, R, O, T)$ for a given circuit $C$ is a set of three polynomials

\begin{align*}
  L = \sum_{i=1}^{m}c_{i}L_{i} && R = \sum_{i=1}^{m}c_{i}R_{i} && O = \sum_{i=1}^{m}c_{i}O_{i} && (m \geq n)
\end{align*}

and a target polynomial $T$ such that $T$ divides the polynomial

\begin{equation*}
  P = LR - O
\end{equation*}

if and only if $(c_1, c_2, \dots, c_n)$ is a valid assignment for $C$. The prover constructs the polynomial $P$ for her proof $\pi$ and the verifier checks the divisibility of $P$ by $T$.
Equivalently, $P = TH$ for some polynomial $H$.

To translate $C$ into a QAP, the wires and gates must be labeled in a specific way:

\begin{itemize}
  \item Each multiplication gate has exactly two input wires: a left and a right wire.
  \item Each multiplication gate has a unique label.
  \item Addition gates are not labeled.
  \item Outgoing wires to more than one gate are labeled as one wire.
  \item Outgoing wires from an addition gate to a multiplication gate are not labeled; the inputs of an addition gate go directly to the multiplication gate.
\end{itemize}

A label assignment of the circuit of Figure~\ref{fig:zkp:circuit} is shown in Figure~\ref{fig:zkp:circuit_label}.

\begin{figure}[ht!]
  \centering
  \begin{tikzpicture}[
        gate/.style={
        circle,
        minimum size=1cm,
        draw
      },
      node distance=2cm
    ]

    \node[gate] (c1) at (0, 0) {$c_1$};
    \node[gate] (c2) at (2, 0) {$c_2$};
    \node[gate] (c_3) at (4, 0) {$c_3$};

    \node[gate, label={left:$g_1$}] (g1) at (1, 2) {$*$};
    \node[gate] (_g) at (3, 2) {$+$};

    \node[gate, label={right:$g_2$}] (g2) at (2, 4) {$*$};

    \draw[->] (c1) -- (g1) node[midway,left] {$w_1$};
    \draw[->] (c1) -- (_g);
    \draw[->] (c2) -- (g1) node[midway,right] {$w_2$};

    \draw[->] (c_3) -- (_g) node[midway,right] {$w_3$};

    \draw[->] (g1) -- (g2) node[midway,left] {$w_4$};
    \draw[->] (_g) -- (g2);

    \draw[->] (g2) -- ++(0, 2) node[midway,left] {$w_5$};

  \end{tikzpicture}
  \caption{Circuit label assignment}
  \label{fig:zkp:circuit_label}
\end{figure}

The circuit is now ready to be transformed into a QAP. Let $M$ be the set that contains the indices of the multiplication gates and $W$ the set that contains the input and output wires. The points in $M$ are called target points. For each gate $g \in M$ we construct a set of left, right, and output polynomials as follows: for each wire, a polynomial is constructed in such a way that it evaluates to one on the target points of the multiplication gates in which the wire participates (in the corresponding role), and to zero on every other target point.

\textit{QAP construction of the circuit of Figure~\ref{fig:zkp:circuit_label}}: Gate $g_1$, with target point $1$, has $w_1$ as left wire, $w_2$ as right wire and $w_4$ as output label. As the polynomial that corresponds to $g_1$ must evaluate to $1$ on point $1$ and to $0$ on point $2$, which corresponds to $g_2$, we construct $L_1 = R_2 = O_4 = 2 - x$. Similarly, for gate $g_2$ we get $L_4 = R_1 = R_3 = O_5 = x - 1$.
Note that, through the addition gate, the inputs $c_1$ and $c_3$ are both right inputs of $g_2$.

The wire polynomials are:

\begin{align*}
  L_1 = (2 - x) && L_2 = 0 && L_3 = 0 && L_4 = (x - 1) && L_5 = 0
\end{align*}
\begin{align*}
  R_1 = (x - 1) && R_2 = (2 - x) && R_3 = (x - 1) && R_4 = 0 && R_5 = 0
\end{align*}
\begin{align*}
  O_1 = 0 && O_2 = 0 && O_3 = 0 && O_4 = (2 - x) && O_5 = (x - 1)
\end{align*}

The total left, right and output polynomials:

\begin{align*}
  L &= \sum_{i=1}^{5}c_{i}L_{i} & R &= \sum_{i=1}^{5}c_{i}R_{i} & O &= \sum_{i=1}^{5}c_{i}O_{i} \\
    &= c_1L_1 + c_4L_4 & &= c_1R_1 + c_2R_2 + c_3R_3 & &= c_4O_4 + c_5O_5 \\
    &= c_1(2 - x) + c_4(x - 1) & &= c_1(x - 1) + c_2(2 - x) + c_3(x - 1) & &= c_4(2 - x) + c_5(x - 1)
\end{align*}

If we evaluate these polynomials at the target points $1$ and $2$ we get:

\begin{align*}
  P(1) &= L(1)R(1) - O(1) & P(2) &= L(2)R(2) - O(2) \\
       &= c_1c_2 - c_4 & &= c_4(c_1 + c_3) - c_5
\end{align*}

Hence, $T(x) = (x - 1)(x - 2)$ divides $P(x)$ if and only if $1$ and $2$ are roots of $P(x)$; equivalently, if and only if the tuple $(c_1, c_2, \dots, c_5)$ is a valid assignment for $C$.
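To see the divisibility check in action, take the assignment $(c_1, c_2, c_3, c_4, c_5) = (1, 7, 0, 7, 7)$ (chosen here purely for illustration), which satisfies Alice's relation $(c_1 c_2)(c_1 + c_3) = 7$. Then

\begin{align*}
  L &= (2 - x) + 7(x - 1) = 6x - 5 \\
  R &= (x - 1) + 7(2 - x) = 13 - 6x \\
  O &= 7(2 - x) + 7(x - 1) = 7
\end{align*}

so that

\begin{equation*}
  P = LR - O = (6x - 5)(13 - 6x) - 7 = -36x^{2} + 108x - 72 = -36(x - 1)(x - 2),
\end{equation*}

which is indeed divisible by $T(x) = (x - 1)(x - 2)$, with quotient $H(x) = -36$.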
{"text": "\n\\subsection{Summary of the important contributions}\n\\label{sec:summ-import-results}\n\n\n%% Theoretical part: generic formulation\nIn this paper, the characteristic structure of the solution of hyperbolic problems in elastic-plastic solids in two space dimensions has been highlighted.\nFirst, a thermodynamically-consistent formulation leading to the writing of a hyperbolic system involving the fourth-order elastic-plastic stiffness tensor as been proposed.\nThe aforementioned tensor can be easily specialized to plane stress and plane strain problems in such a way that the quasi-linear form derived provides a generic framework for the study of all mechanical problems in two space dimensions.\n\n%% Characteristic analysis\nSecond, the characteristic analysis of the hyperbolic plane strain and plane stress problems has been carried out.\nAs already emphasized for simpler two-dimensional problems in prior works \\cite{Rakhmatulin,CRISTESCU19591605} the solutions involve slow and fast simple waves. \nThe characteristic equations governing the evolution of the system inside the simple waves have then been derived as a set of ODEs by applying the method of characteristics.\n\n%% Mathematical properties of the loading paths\nThird, some mathematical properties of these characteristic equations have been highlighted for plane strain and plane stress, despite the complexity of the equations.\nAs an interesting result of this work, it has been shown that the loading paths followed inside slow and fast waves are perpendicular in the stress space.\nAlthough this feature has been already emphasized in \\cite{Clifton} for a combined longitudinal and torsional loading, the property is in fact due to the symmetry of the acoustic tensor and is therefore valid for all two-dimensional problems.\n\n%% Numerical results\nAt last, to overcome the mathematical complexity of the characteristic equations, numerical investigations have been proposed.\nThe loading paths depicted in the stress space or in the deviatoric plane then enable the identification of symmetry properties that are not proofed mathematically.\nMoreover, the integral curves holding inside fast waves are restricted to the initial yield surface for both plane stress and plane strain situations.  \nIn the former case, the paths end as soon as a direction of pure shear is reached in the deviatoric plane, whereas in the latter one, the paths appears to be radial once  a direction of pure tension/compression is achieved.\nOn the other hand, the loading through the simple waves exhibit rough changes regardless of the kinematic considered for low hardening moduli (\\textit{i.e. $C=\\mathcal{O}(10^8)$}). 
\nIt has moreover been shown numerically that this inflection corresponds, for plane stress, to the reaching of the maximum shear stress on the current yield surface for a given longitudinal stress.\nFor plane strain, a similar response is also seen but before the \\textit{maximum-shear-stress} condition is achieved.\n% Similar conclusions can be drawn for plane strain.\nNevertheless, increasing the hardening modulus leads to loading paths whose direction changes before the \\textit{maximum-shear-stress} condition is achieved for plane stress, and which reach a direction of pure shear at the same time as the maximum shear stress.\n\n\n\\subsection{Concluding remarks}\n\\label{sec:concludingRemarks}\n\n%% Perspectives\nThe results of the present paper allow a better understanding of the physical response of linearly hardening elastic-plastic solids to dynamic loadings.\nHowever, the singularities that have been highlighted numerically still need to be identified mathematically.\n%It would also be interesting to consider different hardenings (\\textit{i.e. kinematic, non-linear etc.}).  \nNotice that kinematic hardening should yield identical results for the monotonic loadings considered here, but would greatly influence the response for unloading or reverse plastic loading.\n  These waves must also be the object of an analysis for two-dimensional problems in order to construct the solution of the Riemann problem. \nAs a more long-term perspective, the elementary loading paths studied here could be used in order to enrich numerical methods based on the use of Riemann solvers.\nIn fact, following the idea of \\textsc{Lin} and \\textsc{Ballman} \\cite{Lin_et_Ballman}, a numerical procedure that accounts for both elastic and plastic characteristics can be developed in order to improve the tracking of waves in elastic-plastic solids.\n% Nonetheless, the hyperbolic problems in elastoplastic media may involve not only simple waves but also shocks so that the effects of the latter must be investigated as well. \n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"manuscript\"\n%%% End:\n", "meta": {"hexsha": "9e0bb83cd31f971da6ae8855f30182fbab58c83f", "size": 4750, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papJmPs/conclusion.tex", "max_stars_repo_name": "adRenaud/research", "max_stars_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-18T14:52:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T14:52:03.000Z", "max_issues_repo_path": "papJmPs/conclusion.tex", "max_issues_repo_name": "adRenaud/research", "max_issues_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-07T13:11:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-07T13:11:11.000Z", "max_forks_repo_path": "papJmPs/conclusion.tex", "max_forks_repo_name": "adRenaud/research", "max_forks_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.137254902, "max_line_length": 262, "alphanum_fraction": 0.8170526316, "num_tokens": 935, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5930972249999802}}
{"text": "\\section{Introduction}\n\n\\subsection{Longest Common Subsequence}\n\\begin{frame}\n    \\frametitle{Longest Common Subsequence}\n    \\begin{itemize}\n        \\setlength\\itemsep{1em}\n        \\item \n            The {\\em longest common subsequence} (LCS) is a famous\n            problem in string processing.\n        \\item \n            For example, \n            \\begin{itemize}\n                \\setlength\\itemsep{1em}\n                \\item \n                    Revision control systems like SVN and Git.\n                \\item \n                    The sequence alignment in bioinformatics.\n            \\end{itemize}\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Definition of Longest Common Subsequence}\n    %\\setlength\\itemsep{1em}\n    Given two strings $A = a_1 \\; a_2 \\cdots a_n$ and $B = b_1 \\; b_2\n    \\cdots b_m$.\n    \\begin{definition}\n        Subsequence $S = s_1 \\; s_2 \\cdots s_r$ of $A$, we define a\n        \\emph{correspondence sequence} of $A$ and $S$, $C(A, S) = c_1 \\;\n        c_2 \\cdots c_r$ to be a strictly increasing sequence of integers\n        such that $s_i = a_{c_i}$ $1 \\le i \\le r$\n\t\\end{definition}\n\t\\begin{definition}\n        A common subsequence $S$ of $A$ and $B$, if there exists\n        correspondence sequence $C(A, S)$ and $C(B, S)$.\n\t\\end{definition}\n\\end{frame}\n\n\\subsection{Variable Gapped LCS}\n\\begin{frame}\n    \\frametitle{Definition of Variable Gapped LCS}\n    Given two strings $A$, $B$, and two gap values $G_{A}$, $G_{B}$\n    \\begin{definition}\n        It is a LCS and satisfy the constraints as follows:\n        \\begin{align*}\n            C(A, S)[i] - C(A, S)[i-1] \\le G_{A}(c_i) \\\\\n            C(B, S)[i] - C(B, S)[i-1] \\le G_{B}(c_i)\n        \\end{align*}\n    \\end{definition}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{An Example for Variable Gapped LCS}\n    \\begin{figure}[!thb]\n      \\centering\n      \\includegraphics[width=0.5\\linewidth]{\\GraphicPath/fig-VGLCSex.pdf}\n      \\includegraphics[width=0.5\\linewidth]{\\GraphicPath/fig-VGLCSex2.pdf}\n      \\caption{A VGLCS example} \\label{fig:VGLCSex}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Efficient Serial Algorithm}\n    \\begin{itemize}\n        \\setlength\\itemsep{1em}\n        \\item \n            Peng gives a $O(n m \\alpha(n))$ algorithm \\footnote{$\\alpha$\n            is the inverse of Ackermann's function.}, and\n        \\item\n            An asymptotically better $O(n m)$ algorithm.\n        \\item \n            Both of algorithms uses disjoint-set data structure.\n    \\end{itemize}\n\\end{frame}\n\n\\subsection{Contribution}\n\\begin{frame}\n    \\frametitle{Contribution}\n    \\begin{itemize}\n        \\setlength\\itemsep{1em}\n        \\item\n            In this paper, we propose our $O(nm)$ algorithm which is {\\em\n            easy} to implement and runs {\\em efficiently} in a {\\em\n            parallel} environment.\n        \\item\n            Our parallel algorithm uses a more powerful {\\em sparse table}\n            instead of the disjoint set.\n        \\item\n            For sparse table algorithm, we propose the {\\em parallel}\n            building LCA table algorithm and its {\\em dynamic} Catalan\n            index computing algorithm.\n    \\end{itemize}\n\\end{frame}\n", "meta": {"hexsha": "0d3944d344a1ecbfbdbc0e3337ef7ba0224cf805", "size": 3181, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"doc/IEEE/slides/partial/introduction.tex", "max_stars_repo_name": "morris821028/parallel-VGLCS", "max_stars_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-02-11T08:45:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T07:30:24.000Z", "max_issues_repo_path": "doc/IEEE/slides/partial/introduction.tex", "max_issues_repo_name": "morris821028/parallel-VGLCS", "max_issues_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2017-02-21T02:01:16.000Z", "max_issues_repo_issues_event_max_datetime": "2017-02-24T00:13:34.000Z", "max_forks_repo_path": "doc/IEEE/slides/partial/introduction.tex", "max_forks_repo_name": "morris821028/parallel-VGLCS", "max_forks_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4842105263, "max_line_length": 74, "alphanum_fraction": 0.5979251808, "num_tokens": 917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.593097216877245}}
{"text": "\\chapter{Classical Mechanics}\n\nClassical Mechanics describes everything in the world as if they have an exact position $\\textbf{x}$ and velocity $\\dot{\\textbf{x}}$, which in principle could be known simultaneously. There happens to be real physical limit as to how well we can know both of these quantities at the same time (from the Heisenberg Uncertainty principle), but that limit is so small that for everyday objects, Classical Mechanics works just fine.\n\n\n\\section{Lagrangian Formalism}\nWhen dealing with conservative forces, there turns out to be a nice way to figure out all of Newton's laws without having to draw out all of the free body diagrams. In fact, when this method was first proposed, Lagrange bragged that he had \\emph{no pictures or diagrams} in his book\\cite{lagrange}. The Lagrangian is defined as \n\\begin{align}\nL\\equiv T-V\n\\end{align}\nWhere $T$ is the kinetic energy and $U$ is the potential energy. For each coordinate $q_i$ (usually things like $x,y,z$), we get an equation \n\\begin{align}\n\\boxed{\\frac{d}{dt}\\Big(\\frac{\\partial L}{\\partial \\dot{q}_i}\\Big) = \\frac{\\partial L}{\\partial q_i} }\n\\end{align}\nThese are called the \\textbf{Euler-Lagrange equations}. Basically all Classical Mechanics problems boil down to finding a useful set of coordinates to describe your system, then writing out $T$ and $U$ in terms of them, then solving a system of equations given by these.\n\n\n     An easy way to find the Lagrangian for any system is to identify the coordinate of each of its masses in cartesian coordinates, then rewriting it in terms of coordinates that best suit the problem. \n     \\begin{align}\n        x_1(q_1,q_2,...)\\\\\n        y_1(q_1,q_2,...)\n     \\end{align}\n     \n     We can then just take the time derivative of each of them to solve for the $\\dot{q}$ term.\n\n\n\n\n\\subsection{Action}\n\\cite{dhoker_cm}The action is a scalar quantity defined by\n\\begin{align}\nS[q] = \\int_{t_1}^{t_2} dt L (q_1,...q_N, \\dot{q}_1, ..., \\dot{q}_N, t)\n\\end{align}\nThe $[q]$ means that it takes as an argument a \\emph{function} not just a variable. These types of mathematical objects are called functionals, and take entire trajectories as their argument instead of just one point. In order to find the action, we would have to know exactly the path that $q_1, q_2, q_3, ...$ had all taken, then we integrate the Lagrangian, which is a function of those variables from $t_1$ to $t_2$. \n\n\n\nThe importance of the action is that for \\emph{any} real physical system, the \"path\" which all of the coordinates $\\textbf{q} = (q_1,q_2,q_3,...)$ follow will be one for which the action will always be at an extremum or saddle point.\n\\begin{align}\n    \\frac{\\partial S}{\\partial \\textbf{q}(t)} = 0\n\\end{align}\nThis weird fact is called Hamilton's Principle. Apparently the extremization of the action is a direct consequence of the second law of thermodynamics, that entropy is always increasing.\\cite{stackexchange_action} Knowing that the integral of the Lagrangian extremizes the Action, we can actually derive Lagrange's equations of motion using variational calculus.\n\nTo do so, we consider two possible Lagrangians, one is a function of all of the coordinates that truly minimize the action, $q_1, q_2, q_3, ... 
$ etc, and one is a set that is infinitesimally close to them $q_1', q_2', q_3', ...$ related by\n\\begin{align}\n    q_i'(t) = q_i(t) + \\epsilon s(t)\n\\end{align}\nThe differences in the Lagrangians of each case is\n\\begin{align}\n    \\delta L(q,\\dot{q},t) &= L(q',\\dot{q}',t) - L(q,\\dot{q},t)\\\\\n\\end{align}\nSince the difference in the Lagrangian will be small (from the small change in $q$) we can look at the first order in the Taylor expansion of the difference, which is\n\\begin{align}\n    \\delta L(q,\\dot{q},t) &= \\frac{\\partial L}{\\partial q} \\delta q + \\frac{\\partial L}{\\partial \\dot{q}}\\delta \\dot{q}\\\\\n    &\\approx \\epsilon\\Big( \\frac{\\partial L}{\\partial q} s + \\frac{\\partial L}{\\partial \\dot{q}}\\dot{s}\\Big)\n\\end{align}\nNow looking at the change in the action, we have\n\\begin{align}\n    \\delta S[q] = \\epsilon\\int_{t_1}^{t_2} dt \\Big( \\frac{\\partial L}{\\partial q} s + \\frac{\\partial L}{\\partial \\dot{q}}\\dot{s}\\Big)\n\\end{align}\nWe can integrate the $\\dot{s}$ term by parts and find\n\\begin{align}\n    \\delta S[q] = \\epsilon\\int_{t_1}^{t_2} dt \\Big( \\frac{\\partial L}{\\partial q}  - \\frac{d}{dt}\\frac{\\partial L}{\\partial \\dot{q}}\\Big)s(t)\n\\end{align}\nIf we want the difference in the action between the two paths to be zero (giving us our original path back) and know that $s(t)$ is an entirely arbitrary function, we have to have the integrand be zero\n\\begin{align}\n    \\frac{\\partial L}{\\partial q}  - \\frac{d}{dt}\\frac{\\partial L}{\\partial \\dot{q}} = 0\n\\end{align}\nThese are of course the Euler-Lagrange equations of motion for a system.\n\n%TODO- Also do a simple example where you get back Newton's laws in an easy case\n\n\n%Equivalent Lagrangians\n\n%$$L(q') = L(q) + \\frac{d}{dt}\\Lambda(q,t)$$\n\n%For finding equivalent Lagrangians, you will write the new $q'$ as\n\n%$$q'_i = q_i + \\epsilon \\delta q_i$$\n\n%Using the above equation, you can find that \n\n%$$\\frac{\\partial L(q')}{\\partial \\epsilon}|_{\\epsilon = 0} = \\sum \\frac{\\partial L}{\\partial q_i}\\delta q_i + \\frac{\\partial L}{\\partial \\dot{q}_i}\\delta \\dot{q}_i = \\frac{d\\Lambda}{dt}$$\n\n\n\n\\subsection{Holonomic Constraints}\nThis is essentially when you want to say that a particle must travel along some path, or that the length of a rope is only so long, etc. Basically you can write out the constraint as a formula, lets do the length of a string hanging off a ledge.\n\n\\begin{align}\n    l = x + z\n\\end{align}\nWhich would be if we have the horizontal component of the string given by $x$ and vertical by $z$. Our constraint equation is thus\n\\begin{align}\n    \\phi(x,z,t) = 0 = l - x - z\n\\end{align}\nFrom here, we use Lagrange multipliers (Section \\ref{lagrange-mult}) within Lagrange's equations over all our constraints...\n\n$$\\frac{d}{dt}\\frac{\\partial L}{\\partial\\dot{q}_i} - \\frac{\\partial L}{\\partial q_i} - \\sum_\\alpha \\lambda_\\alpha \\frac{\\partial \\phi_\\alpha}{\\partial q_i} = 0$$\n\nSo for each constraint, you have another Lagrange multiplier. To solve for $\\lambda_\\alpha$ and, and therefore the equations of motion, first look for equations that don't contain the term (i.e. $\\partial\\phi_\\alpha/\\partial q_i = 0$), solve those equations, then plug their solution into ones that do, eventually finding them.\n\n\nYou can also solve for one of the coordinates in terms of the other ones in the constraint, and plug it in explicitly to the Lagrangian, then find the E.L. 
Equations from there.\n\n\\subsection{Normal Modes}\nThe normal modes of a system are when all parts of the system are oscillating with the same frequency. So to solve these, we take the ansatz usually that\n\\begin{align}\n    q_1(t) = Ae^{i\\omega t} && q_2(t) = Be^{i\\omega t}\n\\end{align}\nThen we can use these in the equations of motion, and solve for the frequencies, usually in a quadratic equation.\n\n\\subsection{Equilibrium}\n\nWe know that a system is in equilibrium if the forces acting on it are equal to zero. This means that\n\n$$\\frac{\\partial V}{\\partial q_i}|_{q_i=q_i^0} = 0$$\n\nThis means that the variable $q_i$ will not be accelerated and is in equilibrium. The equilibrium point can be solved for by calculating this quantity, then solving for what $q_i^0$ must be. We can then find what small oscillations around the equilibrium would be by guessing a solution of the form \n\n\\begin{align}\nq_i(t) = q_i^0 + \\eta_i(t)  \n\\end{align}\n\nFrom here, we just plug this into the Euler-Lagrange equations for $q_i$, and solve for the equations of motion for $\\eta(t)$, which tells us how the system moves for small oscillations around an equilibrium.\n\n\\subsection{Noether's Theorem}\nThe essence of what this theorem says is that for every symmetry we can find in our system, there will be some kind of \"conserved quantity\" associated with it that does not change with time.\nPlug in conserved charges in equations of motion, \\emph{not} Lagrangian.\n$$Q = \\sum\\frac{\\partial L}{\\partial\\dot{q}_i}\\delta q_i - \\Lambda$$\n\n$Q$ is a conserved quantity, i.e. $dQ/dt = 0$\nSome nice examples of Noether charges are things like the energy of the system, or the angular momentum. \n\n%TODO Add more here\n\n\n\\subsection{Solving the Equations of Motion}\n\\begin{enumerate}\n    \\item Write out the kinetic and potential energies in a nice choice of coordinates.\n    \\item Find all the Holonomic constraints and plug them in (or do it the other way)\n    \\item Find EL equations\n    \\item Find Noether Charges\n    \\item Plug in Noether charges into EL equations.\n\\end{enumerate}\n\n\n\\section{Hamiltonian Formalism}\nIn a way identical to what we do for thermodynamics, we can make a \\emph{Legendre transform} of our variables into a new set. We start by defining the canonical momentum as\n\\begin{align}\np_i \\equiv \\frac{\\partial L}{\\partial \\dot{q}_i}\n\\end{align}\n\n$$H = \\sum\\frac{dL}{d\\dot{q_i}}d\\dot{q_i} - L$$\n\n\\section{Virial Theorem}\nImagine we have some value called $G$ that is defined as\n\\begin{align}\nG = \\sum_{k=1}^N \\textbf{p}_k\\cdot\\textbf{r}_k\n\\end{align}\nWhere $\\textbf{p}_k$ is the momentum of the $k$th particle of $N$, and $\\textbf{x}_k$ it's coordinate. 
It can be shown\\cite{goldstein} that this quantity is identical to half the time derivative of the moment of inertia with\n\\begin{align}\nG = \\frac{1}{2}\\frac{dI}{dt}\n\\end{align}\nThe trick to derive the Virial theorem is to take the time derivative of $G$, giving us\n\\begin{align}\n\\frac{dG}{dt} &= \\sum_{k=1}^N \\Big[\\textbf{p}_k\\cdot\\frac{d\\textbf{r}_k}{dt} + \\frac{d\\textbf{p}_k}{dt}\\cdot\\textbf{r}_k\\Big]\\\\\n&= \\sum_{k=1}^N \\Big[m\\frac{d\\textbf{r}_k}{dt}\\cdot\\frac{d\\textbf{r}_k}{dt} + \\textbf{F}_k\\cdot\\textbf{r}_k\\Big]\\\\\n&= 2T - \\sum_{k=1}^N \\textbf{F}_k\\cdot\\textbf{r}_k\n\\end{align}\nThe total force on any one particle $\\textbf{F}_k$ is given by the sum of all of the individual forces from each of the other particles, or\n\\begin{align}\n\\textbf{F}_k = \\sum_{j\\neq k}^N \\textbf{F}_{jk}\n\\end{align}\nThis lets us rewrite the second term as\n\\begin{align}\n\\sum_{k=1}^N \\textbf{F}_k = \\sum_{k=1}^N \\sum_{j\\neq k}^N \\textbf{F}_{jk}\\cdot\\textbf{r}_k = \\sum_{k=1}^N \\sum_{j\\neq k}^N \\textbf{F}_{jk}\\cdot(\\textbf{r}_k - \\textbf{r}_j)\n\\end{align}\nWhere the last bit is kosher because no particle has a self force or $\\textbf{F}_{jj} = 0$. The usefulness of the theorem come when we have a potential between each particle is a power series of the form $V(\\textbf{r}_k - \\textbf{r}_j)  = \\alpha |\\textbf{r}_j-\\textbf{r}_k|^n$, since\n\\begin{align}\n\\textbf{F}_{jk}\\cdot(\\textbf{r}_k - \\textbf{r}_j) = \\Big[-\\nabla_k V(\\textbf{r}_k - \\textbf{r}_j) \\Big]\\cdot(\\textbf{r}_k - \\textbf{r}_j) = n V(\\textbf{r}_k - \\textbf{r}_j) \n\\end{align}\nTherefore\n\\begin{align}\nn \\sum_{k=1}^N \\sum_{j\\neq k}^N V(\\textbf{r}_k - \\textbf{r}_j) = nV\n\\end{align}\nWhere $V$ is the total potential energy of the system. Therefore if we look at the average, and have a \\emph{stably bound system} we have that\n\\begin{align}\n\\Big\\langle\\frac{dG}{dt}\\Big\\rangle = 0 = 2\\langle T \\rangle - n\\langle V \\rangle \n\\end{align}\nWhere $n$ of course is the power of the potential.\n\n\n%TODO \\section{Hamilton-Jacobi Equations}\n%Someday...\n\n\\section{Special Relativity}\nIt turns out that both space and time get jumbled together when you move fast enough. Special relativity is able to talk about two frames (i.e. two people) which move at a constant velocity with respect to one another. The theory was developed as a consequence of the strange fact that the speed of light, found in Maxwell's equations, was found to be \\emph{completely independent} of whatever frame you are in. \n\n\\centerline{\\includegraphics[width=0.5\\textwidth]{physics/images/relativity}}\n\nWith relativity, we always talk about \"frames\" which are just places that you, as an observer would see things from. Galilean relativity (or \"common sense\" relativity) says that if you have light of speed $c$ in one frame $F$, which is moving at speed $v$ towards an observer in frame $F'$, the speed of light should be \n\\begin{align}\n    c' = v+ c\n\\end{align}\nIf for instance the like was going towards the origin $O$ in frame $F$. This would be \\emph{faster} than what Maxwell's equation's say, so something has to give. Einstein fixed this by positing that both \\emph{space and time themselves} are changed between frames with different velocities. When you look at something that is moving, it is shrunken in the direction of travel compared to how it sees itself. 
Additionally the amount of time it takes for an event to take place (ball thrown, clock tick, etc) takes \\emph{longer} in the frame that sees the event as moving, than it would in a frame at which the the event takes place at rest. \n\n\nTo illuminate, lets pretend we have a person on earth (frame $F$) and a person on a spaceship (frame $F'$) moving away from the earth in the positive $x$ direction with speed $v$. The earth measures everything with coordinates $(t,x,y,z)$ and the person on the ship measures everything with coordinates $(t',x',y',z')$. Both of these coordinate systems are \\emph{completely normal} in their own frames. If they had a ball on earth and measured it's radius as $r$, the ruler with which they measure it on the ship would say it is the same $r$ when the entire ship, ball, ruler system is moving together. \n\nLets pretend that right when the spaceship passes the earth, they are able to align their coordinate system somehow so everything is zero. The usefulness of special relativity is know how everyone on the ship (in $F'$) sees things after doing some mathematics with all of the things you are capable of measuring in $F$. It turns out that the transformation is given by %\\footnote{do someday, classical HW 3}\n\\begin{align}\\label{lorentzcontract}\n    t' &= \\gamma\\Big(t -\\frac{vx}{c^2} \\Big)\\\\\n    x' &= \\gamma(x - vt)\\\\\n    y' &= y\\\\\n    z' &= z\n\\end{align}\nWhere \n\\begin{align}\n\\gamma =\\frac{1}{ \\sqrt{1-\\frac{v^2}{c^2}}  }\n\\end{align}\n% TODO We can remember these equations as follows. In Galilean relativity, after drawing out the frames, we would have\n%\\begin{align}\n%   x' = x - vt\n%\\end{align}\n%Both are modulated by $\\gamma$, to keep the expression finite when $v$ gets larger and larger. The sign on the time matches $x$, and must match units and $t$. ...  TODO\n\nWe can do something interesting from here, lets look at the following quantity\n\\begin{align}\n    s^2 &= -c^2t'^2 +x'^2 + y'^2 + z'^2\\\\\n    &= \\frac{c^2(t^2 - \\frac{vx}{c^2})^2}{1-\\frac{v^2}{c^2}}  + \\frac{(x - vt)^2}{1-\\frac{v^2}{c^2}} + y^2 + z^2\\\\\n    &= -c^2t^2 + x^2 + y^2 + z^2\n\\end{align}\nIt turns out this quantity is \\emph{invariant} when we change between frames. These type of things are incredibly important when dealing with topics in special relativity\n\n\\subsection{Four Vectors}\nA nice way to write things that give us invariants easily is in four vector notation. We can define the position \\emph{contravariant} vector with\n$$ x^\\mu \\equiv (x^0, x^1,x^2,x^3) = (ct,x,y,z) $$\nRemember the signs by knowing all the vectors with \\textit{upper} indices have all \\textit{positive} quantities. We can then define a \\emph{covariant} with\n\\begin{align}\n     x_\\mu \\equiv (-x^0, x^1,x^2,x^3) = (-ct,x,y,z) \n\\end{align}\nWe see that if we dot these two vectors we get \n\\begin{align}\n    s^2 = x_\\mu x^\\mu = -c^2t^2 + x^2 + y^2 + z^2\n    \\end{align}\nThis one tells us things about how events are causally connected to one another, since\n\n$$s^2 = - c^2(t_2-t_1)^2 + (\\textbf{x}_2-\\textbf{x}_1)^2$$\n\n\\begin{itemize}\n\\item $s^2 = 0$ Lightlike, only massless particles moving at $c$ will have this 0 \n\\item $s^2 > 0$ Spacelike, in every frame, there will always be some separation of space between the events. 
Causally unrelated\n\\item $s^2 < 0$ Timelike, time separates these events in all frames.\n\\end{itemize}\n\nAnother useful thing that comes up often is the \\textbf{Minkowski Metric} defined as \n\n\\begin{align}\n\\eta_{\\mu\\nu} = \\left(\n{\\begin{array}{cccc}\n-1&0&0&0\\\\\n0&1&0&0\\\\\n0&0&1&0\\\\\n0&0&0&1\n\\end{array}}\n\\right)\n\\end{align}\nThis transforms a contravariant into a covariant with\n\\begin{align}\n    x_\\mu = \\eta_{\\mu\\nu}x^\\nu\n\\end{align}\nAlso important is the 4-gradient\n\n\\begin{align}\n\\partial_\\mu &\\equiv \\frac{\\partial}{\\partial x^\\mu} = (\\frac{\\partial t}{\\partial x^0}\\frac{\\partial}{\\partial t}, \\frac{\\partial x}{\\partial x^1}\\frac{\\partial}{\\partial x}, \\frac{\\partial y}{\\partial x^2} \\frac{\\partial}{\\partial y},\\frac{\\partial z}{\\partial x^3}\\frac{\\partial}{\\partial z} ) = (\\frac{1}{c} \\partial_t, \\partial_x,\\partial_y,\\partial_z)\n\\end{align}\n\nBe aware that the lower index partial is with respect to the upper index $x$. Similarly this gives\n$$\\partial^\\mu = (-\\frac{1}{c}\\partial_t,\\partial_x,\\partial_y, \\partial_z)$$\n\n\\subsection{Relativistic Kinematics}\nThe key equation to remember is \n\\begin{align}\\label{relenergy}\n    E^2 = p^2c^2 +m^2c^4\n\\end{align}\nWe can define the \\emph{four momentum} as\n\\begin{align}\n    p^\\mu = (E/c,p_x,p_y,p_z)\n\\end{align}\nNotations can sometimes swap the minus signs, but it seems easier to remember that all raised index vectors are entirely positive. The covariant is the same with the front sign swapped. We see that if we rearrange equation \\ref{relenergy} we can find\n\\begin{align}\n- m^2c^2 = -\\frac{E^2}{c^2} + p^2 = p_\\mu p^\\mu\n\\end{align}\nUsually in the problems here you are given two objects with momentum $p_1^\\mu$ and $p_2^\\mu$ respectively, that turn into a new particle with four-vector $p_3^\\mu$. We know that in general\n\\begin{align}\n    p_3^\\mu = p_1^\\mu + p_2^\\mu\n\\end{align}\nWhich means we just add each of the components of the vector like normal, which gives us the new 4 momentum. From here, we can find the invariant mass of the system just doing\n\\begin{align}\n    -m_3^2c^2 = (p_{1\\mu} + p_{2\\mu})(p_1^\\mu + p_2^\\mu)\n\\end{align}\nProper time is Lorentz invariant and defined as\n\n\\begin{equation}\\label{propertime}\nc^2d\\tau^2 = -\\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu}\n\\end{equation}\nThe 4-velocity is then defined as\n\n$$u^\\mu \\equiv \\frac{d x^\\mu}{d\\tau}$$\nPlugging in our definition back into equation \\ref{propertime} we get that \n\\begin{align}\n-c^2 &= u_\\mu u^\\nu\\\\\n &= -c^2\\frac{d t}{d \\tau}^2 + \\frac{d x}{d \\tau}^2 + \\frac{d y}{d \\tau}^2 + \\frac{d z}{d \\tau}^2\\\\\n &= -c^2\\frac{d t}{d \\tau}^2(1- \\frac{v^2}{c^2}) \n\\end{align}\nThis gives us time dilation with $\\gamma d\\tau = dt$. 
Replugging this into the definition of the 4-velocity, we get\n\n$$u^\\mu = (\\gamma c, \\gamma v_x, \\gamma v_y,\\gamma v_z)$$\nThis allows us to then define the 4-momentum of a massive particle as \n$$p^\\mu \\equiv mu^\\mu = (\\gamma mc, \\gamma mv_x, \\gamma mv_y,\\gamma mv_z)$$\nThe Lorentz force equation is given by\n\n$$\\frac{dp^\\mu}{d\\tau} = eF^{\\mu\\nu}\\frac{dx^\\nu}{d\\tau}$$\n\n\nThere is a similar way to write the boost in momentum and energy as we did for space and time\n\\begin{align}\n    E' &= \\gamma(E - vp_x)\\\\\n    p_x' &= \\gamma(p_x - vE/c^2)\\\\\n    p_y' &= p_y\\\\\n    p_z' &= p_z\n\\end{align}\nIf the other frame $F'$ is moving with velocity $v$ in the $x$ direction relative to frame $F$.\n\n\n\\section{Rotation}\\label{classicalrot}\n\n\n\n\\begin{figure}\\label{rotation}\n\\centerline{\\includegraphics[width=0.7\\textwidth]{physics/images/rotation}}\n\\caption{The left is an \\emph{active} rotation, which involves physical rotation of an object. The right is a \\emph{passive} rotation which involves instead the rotation of the coordinate system.}\n\\end{figure}\n\n$$R(\\hat{n},\\theta)$$\n\n$R$ is a rotation matrix, $\\hat{n}$ is what axis you are choosing to rotate around, points along that axis. $\\theta$ determines how far around that axis you are going to rotate. Directions are chosen from the cross product, for instance if it is $\\hat{z}$, we know $x\\times y=z$, so that requires use to have it such that it looks like $x$ chases $y$, or $x$ will move towards $y$ in the direction it is closest to it from. This turns out to happen for all of the pairs, when said in order, i.e.\n\n\\begin{align}\nx\\times y &= z\\\\\ny\\times z &= x\\\\\nz\\times x &= y\\\\\n\\end{align}\n\nIf you ever find them in the opposite order, it is equivalent to going the opposite direction. There are two typical conventions for rotations outline in Figure \\ref{rotation}. In most cases, we only care about the active rotation, given around the $\\hat{z}$ axis below with\n\n\\begin{align}\nR(\\hat{z},\\theta) =  \\left(\n{\\begin{array}{ccc}\n\\cos\\theta&-\\sin\\theta&0\\\\\n\\sin\\theta&\\cos\\theta&0\\\\\n0&0&1\\\\\n\\end{array}}\n\\right)\n\\end{align}\n\nSimilar rotations around $\\hat{x},\\hat{y}$ can be found easily by cyclically permutating the vectors and realigning, with\n\n\\begin{align}\\left(\n{\\begin{array}{c}\nx_1\\\\\ny_1\\\\\nz_1\\\\\n\\end{array}}\\right) &\\rightarrow\n\\left(\n{\\begin{array}{c}\nz_1\\\\\nx_1\\\\\ny_1\\\\\n\\end{array}}\\right) \\\\\n\\left(\n{\\begin{array}{c}\nz_1'\\\\\nx_1'\\\\\ny_1'\\\\\n\\end{array}}\\right) &= \n\\left(\n{\\begin{array}{ccc}\n\\cos\\theta&-\\sin\\theta&0\\\\\n\\sin\\theta&\\cos\\theta&0\\\\\n0&0&1\\\\\n\\end{array}}\n\\right)\\left(\n{\\begin{array}{c}\nz_1\\\\\nx_1\\\\\ny_1\\\\\n\\end{array}}\\right)\\\\\n\\rm{Reorder}\\implies  \n\\left(\n{\\begin{array}{c}\nx_1'\\\\\ny_1'\\\\\nz_1'\\\\\n\\end{array}}\\right)\n&= \\left(\n{\\begin{array}{ccc}\n\\cos\\theta&0 &\\sin\\theta\\\\\n0&1&0\\\\\n-\\sin\\theta&0&\\cos\\theta\\\\\n\\end{array}}\n\\right)\\left(\n{\\begin{array}{c}\nx_1\\\\\ny_1\\\\\nz_1\\\\\n\\end{array}}\\right)\n\\end{align}\nThis is a rotation around the $\\hat{y}$ axis, as that direction is unaffected. 
\n\n\n\n\n\\subsection{Rotating Frames}\nGiven a rotating frame $r$ which is rotating with constant angular velocity $\\omega$ and we want to find a non-rotating frame $s$ where Newton's laws apply, we have that\n\\begin{align}\n\\textbf{v}_s = \\textbf{v}_r + \\boldsymbol{\\omega}\\times\\textbf{r}\n\\end{align}\nWhere $r$ is the distance from the center of rotation. To find the acceleration, we must also consider the unit vectors as a function of time, with %\\todo{Fix this, do in index notation}\n\\begin{align}\n\\textbf{a}_s = \\frac{d\\textbf{v}_s}{dt} &= \\frac{d}{dt}\\Big[|v_x|\\hat{x} + |v_y|\\hat{y} + |v_z|\\hat{z} + \\boldsymbol{\\omega}\\times(|r_x|\\hat{x} + |r_y|\\hat{y} + |r_z|\\hat{z})\\Big]\\\\\n&= \\Big[|\\dot{v}_x|\\hat{x} + |\\dot{v}_y|\\hat{y} + |\\dot{v}_z|\\hat{z} + \\boldsymbol{\\omega}\\times(|\\dot{r}_x|\\hat{x} + |\\dot{r}_y|\\hat{y} + |\\dot{r}_z|\\hat{z})\\Big] \\\\\n&+ \\Big[|v_x|\\dot{\\hat{x}} + |v_y|\\dot{\\hat{y}} + |v_z|\\dot{\\hat{z}}  +\\boldsymbol{\\omega}\\times(|r_x|\\dot{\\hat{x}} + |r_y|\\dot{\\hat{y}} + |r_z|\\dot{\\hat{z}})\\Big]\\\\\n\\end{align}\nIn general the time derivative of any unit vector in a rotating frame is given by\n\\begin{align}\n\\dot{\\hat{u}} = \\boldsymbol{\\omega}\\times\\hat{u}\n\\end{align}\nWe also know that $\\dot{r}_i = v_i$ and $\\dot{v}_i = a_i$ since they are simply the rate of change of the objects position and velocity. Thus we have\n\\begin{align}\n\\textbf{a}_s &= \\textbf{a}_r + \\boldsymbol{\\omega}\\times\\textbf{v}_r + \\boldsymbol{\\omega}\\times\\textbf{v}_r + \\boldsymbol{\\omega}\\times(\\boldsymbol{\\omega}\\times\\textbf{r})\\\\\n&= \\textbf{a}_r + 2\\boldsymbol{\\omega}\\times\\textbf{v}_r + \\boldsymbol{\\omega}\\times(\\boldsymbol{\\omega}\\times\\textbf{r})\n\\end{align}\nNow we care about how Newton's laws look in the rotating frame, and rearranging tells us that\n\\begin{align}\n\\textbf{a}_r = \\textbf{a}_s - 2\\boldsymbol{\\omega}\\times\\textbf{v}_r - \\boldsymbol{\\omega}\\times(\\boldsymbol{\\omega}\\times\\textbf{r})\n\\end{align}\nSo we have the normal acceleration that would be created from Newton's second law in a non-rotating frame, then two extra terms which are caused by the rotation of the frame itself. The first term is the Coriolis acceleration, and the second is centrifugal acceleration.\n\n\n\\subsection{Moment of Inertia}\nTo find the moment of inertia of a simple object positioned in a difficult way, first find the moment of inertia of the body in it's center of mass frame such that the tensor is diagonal such that \n\n$$ I_{CM} = \n\\left(\n{\\begin{array}{ccc}\nI_{xx}&0&0\\\\\n0&I_{yy}&0\\\\\n0&0&I_{zz}\\\\\n\\end{array}}\n\\right)$$\n\nWith this you can then rotate the tensor to find what it would be if the object is spun on a more difficult axis. with \n\n$$I_{Rot} = R(\\theta_x)R(\\theta_y)R(\\theta_z)~I_{CM}~R(\\theta_z)^{-1}R(\\theta_y)^{-1}R(\\theta_x)^{-1}$$\n\nIf the axis is still not where you want it to be, just apply the parallel axis theorem to again shift it with\n\n$$I_{F} = I_{Rot} + Md^2$$\n\n\n\n\n\n\\section{Fluid Mechanics}\nFluid Mechanics is right on the border of being classical mechanics and statistical mechanics. It has to do with how a large number of particles move together. 
Looking at the mass density $\\rho$, because we cannot create or destroy mass classically, we have\n\\begin{equation}\\label{masscon}\n\\frac{\\partial\\rho}{\\partial t} = -\\nabla\\cdot (\\rho\\textbf{v})\n\\end{equation}\nWhich is just the continuity equation, and says that the rate at which the mass density is shrinking is equal to how much mass is leaving each surface at a given time. We also define the momentum of a system by adding up each infinitesimal mass and multiplying it by its respective velocity. This is the same thing as \n\\begin{align}\n    \\textbf{p}(V,t) = \\int_V dx^3 \\rho(\\textbf{x},t)\\textbf{v}(\\textbf{x},t)\n\\end{align}\nAn important thing to note is that the velocity $\\textbf{v}$ in the integral is not specifically the velocity of any one particle, but called a \\emph{velocity field} since it is defined over the entire volume, and the particles are too small for us to be concerned about any of them individually. Newton's third law tells us that $\\textbf{F} = \\dot{\\textbf{p}}$, so we first define the \\emph{force field}\n\\begin{align}\n    \\textbf{F}(V,t) = \\int_V dx^3 \\textbf{f}(\\textbf{x},t)\n\\end{align}\nLet's match Newton's third law and find\n\\begin{align}\n    \\int_V dx^3 \\textbf{f}(\\textbf{x},t) &= \\frac{d}{dt}\\int_V dx^3 \\rho(\\textbf{x},t)\\textbf{v}(\\textbf{x},t)\n\\end{align}\nConservation of momentum (also called Euler's equations) gives us\n\\begin{equation}\\label{momcon}\n\\rho\\left({\\frac{\\partial\\textbf{v}}{\\partial t} + (\\textbf{v}\\cdot\\nabla)\\textbf{v}}\\right) = \\textbf{f}\n\\end{equation}\n\n%TODO Some more trivia\n%\n%\\begin{itemize}\n%\\item Ideal Fluids\n%\\begin{itemize}\n%\\item Force field given by $\\textbf{f} = -\\nabla p$\n%\\item Isentropic Flow\n%\\begin{itemize}\n%\\item Kelvin's Theorem $\\frac{d}{dt}\\oint \\delta\\textbf{x}\\cdot\\textbf{v} = 0$\n%\\end{itemize}\n%\\end{itemize}\n%\\item Incompressible Fluids -  ($\\rho=$ const, $\\nabla\\cdot \\textbf{v} = 0$)\n%\\item Steady Flow -  $d\\textbf{v}/dt = 0$\n%\\item Potential Flow $\\textbf{v} = \\nabla\\phi$\n%\\end{itemize}\n%If any coordinate goes to infinity ($\\textbf{v}\\cdot\\nabla)\\textbf{v} = 0$\n%\n\n%\\section{TODO}\n\n%\\begin{itemize}\n\n%\\item Classical HW6 Problem 1, coupled oscillators\n%\\end{itemize}\n\n", "meta": {"hexsha": "ec6add71f8b04ae596b59eac68f10d594e9d35ec", "size": 26525, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "physics/classicalMechanics.tex", "max_stars_repo_name": "williamnash/notes", "max_stars_repo_head_hexsha": "6f89e27c51a1c0e14b3a24eab825299fb406fc2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "physics/classicalMechanics.tex", "max_issues_repo_name": "williamnash/notes", "max_issues_repo_head_hexsha": "6f89e27c51a1c0e14b3a24eab825299fb406fc2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-08-23T23:01:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-16T23:17:43.000Z", "max_forks_repo_path": "physics/classicalMechanics.tex", "max_forks_repo_name": "williamnash/notes", "max_forks_repo_head_hexsha": "6f89e27c51a1c0e14b3a24eab825299fb406fc2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.6202290076, "max_line_length": 640, "alphanum_fraction": 
0.7103110273, "num_tokens": 8363, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387998695208, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.5930427034329988}}
{"text": "\\subsection{ABS} --- r1 \\leftarrow $ |r2| $\n\\subsection{ADD} --- r1 \\leftarrow $ r2 + r3 $\n\\subsection{AND} --- r1 \\leftarrow $ r2 and r3 $ ; input and output are truncated and handled as if they were 32-bit integer.\n\\subsection{CALL} \\emph{peripheral}\\: Byte, \\emph{arg}\\: Int --- call command number \\emph{arg} from \\emph{peripheral}. BIOS is always mapped to \\emph{peripheral} 255.\n\\subsection{CBRT} --- r1 \\leftarrow $ r2 ^ 3 $\n\n", "meta": {"hexsha": "dbebd6466bfb7b851054c5ff131fd5553466c8e9", "size": 433, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/quickref.tex", "max_stars_repo_name": "minjaesong/terran-basic-java-vm", "max_stars_repo_head_hexsha": "06e0a8aa2514098bf78bbeb3afaecc3498aedb74", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/quickref.tex", "max_issues_repo_name": "minjaesong/terran-basic-java-vm", "max_issues_repo_head_hexsha": "06e0a8aa2514098bf78bbeb3afaecc3498aedb74", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/quickref.tex", "max_forks_repo_name": "minjaesong/terran-basic-java-vm", "max_forks_repo_head_hexsha": "06e0a8aa2514098bf78bbeb3afaecc3498aedb74", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.8571428571, "max_line_length": 167, "alphanum_fraction": 0.6789838337, "num_tokens": 145, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8479677737461007, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.5929452200218325}}
{"text": "\\documentclass{article}\n\\usepackage{latexsym}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{amsthm}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem*{problem}{Problem}\n\n\\begin{document}\n\\title{Problems with primes}\n\\author{Dave Neary}\n\n\\section{When is $1+p+p^2+p^3+p^4$ prime for prime $p$?}\n\n\\begin{problem}\n\tGiven a prime number $p$, when is the sum of all of the positive factors of $p^4$\n\ta perfect square?\n\\end{problem}\n\n\\begin{proof}\n\tSince $p$ is a prime number, the only factors of $p^4$ are of the form\n\t$p^i, i \\in \\{0,1,2,3,4\\}$. The sum of the factors is thus:\n\t\\[ 1+p+p^2+p^3+p^4 = \\frac{p^5-1}{p-1} \\]\n\t\n\tLet's call this sum of powers of $p$ $f(p)$ for convenience. We begin our search\n\twith some exploration. When $n=2, 3, 5, 7, 11$ we obtain the following values:\n\t\\begin{center}\n\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\\hline\n\t\t\t$p$ & $f(p)$ & round($\\sqrt{f(p)}$)$^2$ \\\\\n\t\t\t\\hline\n\t\t2 & 31 & $6^2=36$ \\\\\n\t\t3 & 121 & $11^2=121$ \\\\\n\t\t5 & 781 & $28^2=784$ \\\\ \n\t\t7 & 2801 & $53^2=2809$ \\\\ \n\t\t11 & 16105 & $127^2=16129$ \\\\ \n\t\t\\hline \n\t\t\\end{tabular}\n\t\\end{center}\n\n\t3 is the only prime that produces a perfect square so far, and it seems like there\n\tis always a square close above the other odd primes. It's also interesting to note\n\tthat the square close to $f(p)$ is a little more than the square of the square of our\n\tprime (not surprising given the $p^4$ term which will dominate as $p$ grows larger):\n\t$28 = 5^2 + 3, 53 = 7^2 + 4, 127=11^2 + 6$.\n\n\tLet's see if we can get close to $f(p)$ with the square of a quadratic expression.\n\tClearly such a quadratic will be of the form:\n\t\\[ (p^2 + ap + b)^2 = p^4 + 2ap^3 + (a^2+2b)p^2 + 2abp + b^2\\]\n\n\tIt makes sense to let $a=\\frac{1}{2}$, then:\n\t\\[(p^2 + \\frac{p}{2} + b)^2 = p^4 + p^3 + (\\frac{1}{4}+2b)p^2 + bp + b^2 \\]\n\n\tGiven that $p$ is an odd prime, we can turn this into an integer expression by\n\tsetting $b=\\frac{1}{2}$: \n\t\\[ (p^2 + \\frac{p+1}{2})^2 = p^4 + p^3 + \\frac{5p^2}{4}p^2 + \\frac{p}{2} + \\frac{1}{4} \\]\n\t\\[ = \\sum_{i=0}^4 p^i + \\frac{1}{4}p^2 - \\frac{p}{2} - \\frac{3}{4} \\]\n\n\tSo we can write:\n\t\\[ (p^2 + \\frac{p+1}{2})^2 = \\sum_{i=0}^4 p^i + \\frac{1}{4}(p+1)(p-3) \\]\n\tor\n\t\\[ (p^2 + \\frac{p+1}{2})^2 = \\sum_{i=0}^4 p^i + \\frac{1}{4}((p-1)^2-4) \\]\n\n\tSo this is bigger than $\\sum_{i=0}^4 p^i$ for all $p>3$ (and, as we saw earlier, is\n\tequal for $p=3$). 
If we reduce the root of the square by 1, can we prove that this is always\n\tsmaller than $\\sum_{i=0}^4 p^i$?\n\n\t\\begin{align*}\n\t\t(p^2 + \\frac{p-1}{2})^2 &= p^4 + p^3 - \\frac{3p^2}{4} - \\frac{p}{2} + \\frac{1}{4}\\\\\n\t\t&= \\sum_{i=0}^4 p^i -\\frac{7}{4}p^2 - \\frac{3}{2}p - \\frac{3}{4} \\\\\n\t\t&< \\sum_{i=0}^4 p^i\n\t\\end{align*}\n\tsince $p>0$.\n\n\tSo we have shown that:\n\t\\[(p^2 + \\frac{p-1}{2})^2 < \\sum_{i=0}^4 p^i \\leq (p^2 + \\frac{p+1}{2})^2 \\] \n\twith equality if and only if $\\frac{1}{4}(p+1)(p-3) = 0$, i.e., when $p=3$.\n\n\\end{proof}\n\n\\section{When is $p^2 + q^2 + 2017$ a perfect square?}\n\n\\begin{problem}\n        How many prime numbers $p, q$ make $p^2 + q^2 + 2017$ a perfect square?\n\\end{problem}\n\n\\begin{proof}\n\tConsider the numbers $p^2,q^2,2017 \\pmod{4}$.\n\t\\[2017 \\equiv 1 \\pmod{4} \\]\n\tIf $p$ is odd, $p^2 \\equiv 1 \\pmod{4}$, and if $p$ is even, $p^2 \\equiv 0 \\pmod{4}$.\n\n\tThen: \n\t\\[ 2017 + p^2 + q^2 \\pmod{4} \\in \\{1,2,3\\} \\]\n\twith the value 1 when both $p,q$ are even, 2 if one of them is odd, and 3 if both\n\tare odd.\n\n\tFor any integer $a$, $a^2 \\equiv 0 \\text{ or } 1 \\pmod{4}$, so both $p,q$ must be\n\teven. The only even prime number is 2, so the only number we need to check is\n\t\\[ 2017 + 2^2 + 2^2 = 2025 = 45^2 \\]\n\n\tSo the only answer is $(p,q) = (2,2)$.\n\\end{proof}\n\n\n\\end{document}\n\n", "meta": {"hexsha": "f4aa25a40e1f8fc3c08094bdfe3d39ec5f995311", "size": 3634, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "prime_powers_problem.tex", "max_stars_repo_name": "dneary/math", "max_stars_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "prime_powers_problem.tex", "max_issues_repo_name": "dneary/math", "max_issues_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prime_powers_problem.tex", "max_forks_repo_name": "dneary/math", "max_forks_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3394495413, "max_line_length": 90, "alphanum_fraction": 0.5897083104, "num_tokens": 1592, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8479677564567912, "lm_q1q2_score": 0.5929451919886508}}
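The mod-4 argument above is easy to confirm by brute force. A small Python sketch (the search bound of 1000 is arbitrary, and the \\texttt{sympy} dependency is a convenience):\n\\begin{verbatim}\nfrom math import isqrt\nfrom sympy import primerange\n\n# Scan prime pairs and keep those making p^2 + q^2 + 2017 a perfect square.\nhits = [(p, q) for p in primerange(2, 1000) for q in primerange(2, 1000)\n        if isqrt(p*p + q*q + 2017) ** 2 == p*p + q*q + 2017]\nprint(hits)  # the mod-4 argument predicts exactly [(2, 2)]\n\\end{verbatim}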
{"text": "\\documentclass[../../main.tex]{subfiles}\n\n\\begin{document}\n\n\\subsection{Motivation}\n\nIt is considered a time series classification problem. This task is solved using ODE-RNN model. The ODE-RNN architecture was decribed in \\cite{Samokhina}.\n\n\\subsection{Problem statement}\n\nGiven a multivariate time series:\n\n$$\\mathfrak{D} = \\{(x_i, y_i)\\}, \\quad i = \\overline{1, n}, \\quad x_i \\in \\mathbb{R}^m.$$\n\n\\noindent\nThe problem of time series intervals classification is solved. We have to define if there is a P300 potential on the EEG interval.\n\nOptimization task:\n\n$$\\hat{\\theta} = \\arg\\max\\limits_{\\theta}L(\\theta, \\mathbf{X}).$$\n\n\\subsection{Problem solution}\n\nODE-LSTM is based on standard LSTM architecture but the continuity of hidden state is added. Hidden state of LSTM is a pair $(\\mathbf{c}_t, \\mathbf{h}_t)$, where $\\mathbf{c}_t$ is a long term memory state, $\\mathbf{h}_t$ is a hiden state. The function $f_\\theta(\\mathbf{x}_{t+1}, (\\mathbf{c}_t, \\mathbf{h}_t), \\mathbf{1}) \\rightarrow (\\mathbf{c}_{t+1}, \\mathbf{h}_{t+1})$ that updates these states can be described using the following equations:\n\n$$\\mathbf{z}_{t+1} = \\tanh(\\mathbf{W}_z\\mathbf{x}_{t+1} + \\mathbf{R}_z\\mathbf{h}_t + \\mathbf{b}_z)$$\n$$\\mathbf{i}_{t+1} = \\sigma(\\mathbf{W}_i\\mathbf{x}_{t+1} + \\mathbf{R}_i\\mathbf{h}_t + \\mathbf{b}_i)$$\n$$\\mathbf{f}_{t+1} = \\sigma(\\mathbf{W}_f\\mathbf{x}_{t+1} + \\mathbf{R}_f\\mathbf{h}_t + \\mathbf{b}_f + \\mathbf{1})$$\n$$\\mathbf{o}_{t+1} = \\sigma(\\mathbf{W}_o\\mathbf{x}_{t+1} + \\mathbf{R}_o\\mathbf{h}_t + \\mathbf{b}_o)$$\n$$\\mathbf{c}_{t+1} = \\mathbf{z}_{t+1} \\odot \\mathbf{i}_{t+1} + \\mathbf{c}_{t} \\odot \\mathbf{f}_{t+1}$$\n$$\\mathbf{h}_{t+1} = \\tanh(\\mathbf{c}_{t+1}) \\odot \\mathbf{o}_{t+1}$$\n\nWhen shifting from LSTM to ODE-LSTM the above equations are changed as follows:\n\n$$(\\mathbf{c}_{i}, \\mathbf{h}_{i}^\\prime) = \\text{LSTM}(\\theta_l, (\\mathbf{c}_{i}, \\mathbf{h}_{i}), x_i)$$\n$$\\mathbf{h}_i = \\text{ODESolve}(f_\\theta, \\mathbf{h}_{i-1}, \\mathbf{h}_i^\\prime, t_i - t_{i-1})$$\n$$\\mathbf{o}_i = \\mathbf{h}_i\\mathbf{W}_{\\text{output}} + b_{\\text{output}}$$\n\nThe differential equation is solved numerically using fourth order Runge-Kutta methods.\n\n\\subsection{Data}\n\nThe data was collected on 60 healthy people.  They were participating in a virtual reality game with a neural interface that is based on the classification of P300 potentials. It is considered a binary problem statement. In this dataset both the train and test samples are unbalanced. The class ratio is 1 to $(s-1)$, where $s$ is a number of stimuli. The number of timestamps in the dataset is 56540.\n\n\\subsection{Code analysis}\n\nCode for ODE-LSTM was taken from GitHub repository\\footnote{https://github.com/Alina-Samokhina/MasterThesis}.\n\n\\subsection{Experiment}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{sections/gorpinich/data.png}\n\\caption{Time series visualization}\n\\label{fig:data}\n\\end{figure}\n\nFig. \\ref{fig:data} plots P300 time series.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{sections/gorpinich/res.png}\n\\caption{Result after applying ODE-RNN}\n\\label{fig:res}\n\\end{figure}\n\nFig. 
\\ref{fig:res} plots the result of applying ODE-RNN.\n\n\\end{document}", "meta": {"hexsha": "19c9db3a2f0812f754b3980b281f8222def47275", "size": 3188, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/Gorpinich2021Lab8/main.tex", "max_stars_repo_name": "Intelligent-Systems-Phystech/mmp2021", "max_stars_repo_head_hexsha": "213f5d81e2ae0c4e77b197b63e6980523f65d9bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-09-15T18:31:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-20T03:58:47.000Z", "max_issues_repo_path": "sections/Gorpinich2021Lab8/main.tex", "max_issues_repo_name": "Intelligent-Systems-Phystech/mmp2021", "max_issues_repo_head_hexsha": "213f5d81e2ae0c4e77b197b63e6980523f65d9bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/Gorpinich2021Lab8/main.tex", "max_forks_repo_name": "Intelligent-Systems-Phystech/mmp2021", "max_forks_repo_head_hexsha": "213f5d81e2ae0c4e77b197b63e6980523f65d9bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-11-19T21:55:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-20T13:56:02.000Z", "avg_line_length": 46.2028985507, "max_line_length": 445, "alphanum_fraction": 0.7073400251, "num_tokens": 1117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677468516187, "lm_q2_score": 0.6992544085240402, "lm_q1q2_score": 0.5929451852721916}}
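Since the hidden state is evolved with a fourth-order Runge-Kutta solver, the ODESolve step can be sketched in a few lines (plain NumPy; the dynamics \\texttt{f} below is a stand-in for the learned $f_\\theta$, not the repository's code):\n\\begin{verbatim}\nimport numpy as np\n\ndef rk4_step(f, h, dt):\n    # One fourth-order Runge-Kutta step for dh/dt = f(h).\n    k1 = f(h)\n    k2 = f(h + 0.5 * dt * k1)\n    k3 = f(h + 0.5 * dt * k2)\n    k4 = f(h + dt * k3)\n    return h + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)\n\nf = lambda h: np.tanh(h)               # stand-in for the learned dynamics\nh_prime = np.zeros(8)                  # hidden state after the LSTM cell update\nh_next = rk4_step(f, h_prime, dt=0.1)  # evolve over the gap t_i - t_{i-1}\n\\end{verbatim}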
{"text": "\\section{Conclusion}\\label{sec:stringmatcher:conclusion}\nWe made the first non-trivial use of (Liquid) Haskell as a proof\nassistant. \nWe proved the parallelization of chunkable monoid\nmorphisms to be correct\nand applied our parallelization technique to string matching,\nresulting in a formally verified parallel string matcher.\n%\nOur proof uses refinement types to specify\nequivalence theorems,\nHaskell terms to express proofs,\nand Liquid Haskell to check that the terms prove the theorems.\n%\nBased on our 1839LoC sophisticated proof we conclude that\nHaskell can be successfully used as a theorem prover\nto prove arbitrary theorems about real Haskell code\nusing SMT solvers to automate proofs\nover key theories like linear arithmetic and equality.\n%\nHowever, Coq-like tactics or Dafny-like heurestics are required\nto ease the user from manual proof term generation.\n\n\n\\begin{comment}\n  - lines of code\n  - interaction of proofs with code\n        no interaction: the main proof\n        invariant good indices are requied to prove that\n           if target it bigger than input then indices is empty\n           proof oof good indexing requires casting\n   - proof reuse\n   - trust library factions with assume annotations\n\\end{comment}\n", "meta": {"hexsha": "abe335e287fadeda70fcb9406e8baf48864dfb9e", "size": 1233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/stringmatcher/conclusion.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/stringmatcher/conclusion.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/stringmatcher/conclusion.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 36.2647058824, "max_line_length": 64, "alphanum_fraction": 0.7858880779, "num_tokens": 269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950947024555, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5928616588050608}}
{"text": "\\section{Returns time series simulations}\n\\label{sec:simulations}\n\nIn Sect. \\ref{subsec:gauss_alg_sim} we simulate returns time series following\na multivariate Gaussian and algebraic distribution. We test the influence of\nthe normalization within the epochs in the ergodicity defect in Sect.\n\\ref{subsec:norm_epochs_sim} and propose a solution for this defect in Sect.\n\\ref{subsec:norm_full_sim}. Finally we use empirical data to support our\nsimulation findings in Sect. \\ref{subsec:emp_results}.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Simulation of multivariate Gaussian and algebraic distributions}\n\\label{subsec:gauss_alg_sim}\n\nIn Sect. \\ref{subsubsec:gauss_sim} we describe the methodology to simulate\nmultivariate Gaussian distributed returns time series and in Sect.\n\\ref{subsubsec:alg_sim} we describe the methodology to simulate multivariate\nalgebraic distributed returns time series.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Multivariate Gaussian distributions}\\label{subsubsec:gauss_sim}\n\nTo simulate the returns time series, we use a method \\cite{drawing_dist} for\ndrawing a random vector $x$ from the $N$-dimensional multivariate Gaussian\ndistribution with mean vector $\\mu$ and covariance matrix $\\Sigma$. First, we\ncreate a correlation matrix $C$ with $c = 1$ on its diagonal and $c = 0.3$ on\nits non-diagonal entries. Then, we compute the eigenvalues and eigenvectors of\nthe correlation matrix, such that $C = U \\Lambda U^{-1}$. We get a $z$ vector\nwhose components are drawn from an independent standard Gaussian distribution.\nFinally we obtain the returns with the desired distribution as\n\\begin{equation}\n    r = \\mu + U \\Lambda^{1/2} z\n\\end{equation}\n\nIn our case, the $r$ vector components are drawn from a normal distribution\nwith $\\mu$ vector zero. With this method we want to obtain time series\nsimulating the data matrix G with dimensions $2 \\times T$, where $T$ is the\nwindow length of the epochs. These returns can be later normalized, rotated and\naggregated to compare with the behavior of the results in Sect.\n\\ref{subsec:epochs}.  The goal of this approach is that all simulations should\nshow standard normal distributions.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_gauss_agg_ret_pairs_no_norm.png}\n    \\caption{Simulated aggregated rotated and scaled Gaussian distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ without\n             normalization, neither within the epochs nor for all the time\n             series. $\\Delta t = 1$ unit and epochs window  lengths\n             $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:gauss_epochs_agg_ret_pairs_no_norm}\n\\end{figure}\n\nIn Fig. \\ref{fig:gauss_epochs_agg_ret_pairs_no_norm} we simulate time series\nfor $K = 200$. Each time series is made of $200$ epochs to make them comparable\nto the empirical data. We use epochs window lengths $T = 10, 25, 40, 55$ units\nto rotate, scale and aggregate without normalizing neither within the epochs\nnor the full time series. As expected, as we draw the returns from a\nmultivariate Gaussian distribution, all the simulations show standard Gaussian\ndistribution behavior. 
Thus, this is our reference to check what is introducing\nthe ergodicity defect in the original method for the multivariate Gaussian\ncase.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Multivariate algebraic distributions}\\label{subsubsec:alg_sim}\n\nTo simulate returns time series drawn from multivariate algebraic\ndistributions, we use a similar approach as in Sect. \\ref{subsubsec:gauss_sim}.\nFirst, we create a correlation matrix $C$ with $c = 1$ on its diagonal and\n$c = 0.3$ on its non-diagonal entries. From \\cite{t_student_dist} we know that\n\\begin{equation}\n    T = \\left( S^{-1/2} \\right)^{\\dagger} X + M,\n\\end{equation}\nwhere $T$ is a vector of length $K$. Then it is needed to repeat the following\nsteps to generate a data matrix where the columns are the $T$ vectors. $X$ is\ndrawn from a matrix variate normal distribution. Matrix $S$ is a Wishart\ndistributed covariance matrix without normalization and M is a parameter that\nfor this case is zero. With these in mind, we first generate $S$ as a\npositive semi-definite matrix. To do this, we first create time series of\nsimulated data matrix $G$ with dimension $K \\times \\left(n + K - 1 \\right)$\nwhere $n$ is the a parameter of the degree of freedom and is connected to the\nshape parameter $l$ as\n\\begin{equation}\n    l = \\frac{n + K}{2},\n\\end{equation}\nwhere $K$ is the number of companies. These time series are generated by\ncalculating\n\\begin{equation}\n    y = U \\Lambda^{1/2} z\n\\end{equation}\nwith a fixed covariance matrix $\\Sigma$. Vector $y$ is a column vector of $G$,\n$z$ is a univariate standard normal distribution vector, $U$ has the\neigenvectors of $\\Sigma$ as columns and the diagonal matrix $\\Lambda$ contains\nthe eigenvalues of $\\Sigma$. Thus, we compute\n\\begin{equation}\n    S = G G^{\\dagger}\n\\end{equation}\nThen, we obtain $S^{1/2}$ as\n\\begin{equation}\n    S^{1/2} = U_{S} \\Lambda_{S}^{1/2} U_{S}^{\\dagger},\n\\end{equation}\nsince\n\\begin{align}\n    S &= S^{1/2} S^{1/2} \\\\\n    &= U_{S} \\Lambda_{S}^{1/2} U_{S}^{\\dagger}\n    U_{S} \\Lambda_{S}^{1/2} U_{S}^{\\dagger}\\\\\n    &= U_{S} \\Lambda_{S} U_{S}^{\\dagger}\n\\end{align}\nWe generate $X$ as\n\\begin{equation}\n    X = \\sqrt{m} z\n\\end{equation}\nwhere $z$ is a univariate standard normal distribution vector of length $K$ and\n$m$ is the variance.\n\nThese returns can be later normalized, rotated and aggregated to compare with\nthe behavior of the results in Sect. \\ref{subsec:epochs}.  The goal of this\napproach is that all simulations should show standard algebraic distributions.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_alg_agg_ret_pairs_no_norm.png}\n    \\caption{Simulated aggregated rotated and scaled algebraic distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ without\n             normalization, neither within the epochs nor for all the time\n             series. $\\Delta t = 1$ unit and epochs window  lengths\n             $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:alg_epochs_agg_ret_pairs_no_norm}\n\\end{figure}\n\nIn Fig. \\ref{fig:alg_epochs_agg_ret_pairs_no_norm} we simulate time series for\n$K = 200$. Each time series is made of $200$ epochs to make them comparable\nto the empirical data. 
We use epoch window lengths $T = 10, 25, 40, 55$ units\nto rotate, scale and aggregate, without normalizing either within the epochs\nor over the full time series. We can see an interesting behavior in the algebraic\ncase, where for small epoch window lengths, the tails are similar to the\nGaussian distribution, and as the epoch window lengths grow, the simulations\nreveal good agreement with the algebraic distribution. Thus, this is our\nreference to check what is introducing the ergodicity defect in the original\nmethod for the multivariate algebraic case.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Normalization within the epochs}\n\\label{subsec:norm_epochs_sim}\n\nNow, to check the normalization within the epochs, we first simulate the\nreturns and then normalize each epoch to mean $\\mu = 0$ and variance\n$\\sigma^{2} = 1$. Finally we repeat the rotate, scale and\naggregate procedure.\n\nFor the Gaussian case with the simulated pair returns time series, we proceed\nto normalize the epoch, compute the $2 \\times 2$ sample covariance matrix and\ndiagonalize it. We rotate the two-component returns vectors into the eigenbasis\nof the covariance matrix and normalize the axes with the eigenvalues. Finally\nwe aggregate all the components into a single univariate distribution.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_gauss_agg_ret_pairs_norm.png}\n    \\caption{Simulated aggregated rotated and scaled Gaussian distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ with\n             normalization within the epochs. $\\Delta t = 1$ unit and epoch\n             window lengths $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:epochs_gauss_agg_ret_pairs_norm}\n\\end{figure}\n\nAs we can see in Fig. \\ref{fig:epochs_gauss_agg_ret_pairs_norm}, the ergodicity\ndefect clearly appears for an epoch window length $T = 10$. As the epoch window\nlength grows, the ergodicity defect starts to disappear. We could even argue\nthat with an epoch window length greater than or equal to $T = 25$, we are already\nclose enough to the Gaussian distribution. Thus, this ergodicity defect has a\nlarge impact for small epoch window lengths, and its effects tend to disappear\nas the epoch window lengths grow. This can be seen with $T = 100$.\n\nSomething similar happens in the algebraic case. We again simulate the\nreturns, normalize each epoch to mean $\\mu = 0$ and variance $\\sigma^{2} = 1$\nand finally repeat the rotate, scale and aggregate procedure. Fig.\n\\ref{fig:epochs_alg_agg_ret_pairs_norm} shows how the ergodicity defect\nappears again. Its effect seems largest for small epoch window lengths.\nThus, we need to find an alternative to solve this problem.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_alg_agg_ret_pairs_norm.png}\n    \\caption{Simulated aggregated rotated and scaled algebraic distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ with\n             normalization within the epochs. 
$\\Delta t = 1$ unit and epochs\n             window lengths $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:epochs_alg_agg_ret_pairs_norm}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Normalization complete return time series}\n\\label{subsec:norm_full_sim}\n\nWe already showed that the ergodicity defect is directly related with the\nnormalization of the time series. To try to solve this issue, instead of\nnormalizing the time series within each epoch, we normalize the complete time\nseries and then proceed to rotate, scale and aggregate.\n\nIn Fig. \\ref{fig:epochs_gauss_agg_ret_pairs_norm_full_ts} and Fig.\n\\ref{fig:epochs_alg_agg_ret_pairs_norm_full_ts} are shown the Gaussian and\nalgebraic cases with the corresponding simulations. We can notice how in both\ncases the ergodicity defect disappears. In each epochs window length, the\nprobability density function perfectly fits the Gaussian distribution and the\nalgebraic distribution. It can be noted that the larger the epochs window\nlength, the better the simulated data fit the distributions. Furthermore, to\naccomplish our assumption of stationarity on short time scales, the results\nfor short epochs window lengths have a good agreement with the theoretical\ndistribution.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_gauss_ts_norm.png}\n    \\caption{Simulated aggregated rotated and scaled Gaussian distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ with\n             normalization for the complete time series. $\\Delta t = 1$ unit\n             and epochs window lengths $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:epochs_gauss_agg_ret_pairs_norm_full_ts}\n\\end{figure}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]\n    {figures/06_epochs_sim_alg_ts_norm.png}\n    \\caption{Simulated aggregated rotated and scaled algebraic distributed\n             returns ($\\tilde{r}$) for fixed covariance and $K=200$ with\n             normalization for the complete time series. $\\Delta t = 1$ unit\n             and epochs window lengths $T=10, 25, 40, 55, 100$ units.}\n    \\label{fig:epochs_alg_agg_ret_pairs_norm_full_ts}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Empirical results}\n\\label{subsec:emp_results}\n\nAfter we found the problem in the method and solved for the simulated time\nseries, it is time to check the solution in the empirical data. We first\nnormalize the complete time series. Then we rotate and scale the returns and\nfinally we aggregate them. With this method, we can see how the ergodicity\ndefect disappears, as we expected from the simulations. Furthermore, we can now\nsee that with an epoch window length of $T = 25$ the returns show small fat\ntails. Then, we confirm that the aggregated returns have internal structures\nthat we will use to define the exact multivariate amplitude distributions in\nSect. \\ref{sec:exact_distributions}.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.8\\columnwidth]\n    {figures/06_window_comparison.png}\n    \\caption{Aggregated distribution of returns ($\\tilde{r}$) for fixed\n             covariance of different number of companies selected from the S\\&P\n             500 dataset. 
$\\Delta t = 1d$ and different epochs window lengths\n             $T=10d$ (top left), $T=25d$ (top right), $T=40d$ (bottom left) and\n             $T=55d$ (bottom right).}\n    \\label{fig:window_comparison_long_norm}\n\\end{figure}", "meta": {"hexsha": "b3beaa263f99cb01a0d767979b817daca3bd1341", "size": 13266, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/exact_distributions_financial_paper/sections/06_simulations.tex", "max_stars_repo_name": "juanhenao21/exact_distributions_financial", "max_stars_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-20T18:24:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-15T07:25:50.000Z", "max_issues_repo_path": "paper/exact_distributions_financial_paper/sections/06_simulations.tex", "max_issues_repo_name": "juanhenao21/exact_distributions_financial", "max_issues_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/exact_distributions_financial_paper/sections/06_simulations.tex", "max_forks_repo_name": "juanhenao21/exact_distributions_financial", "max_forks_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.6335877863, "max_line_length": 79, "alphanum_fraction": 0.7262927785, "num_tokens": 3330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914997895581, "lm_q2_score": 0.7853085708384736, "lm_q1q2_score": 0.592841218101908}}
{"text": "\\input{../../style/preamble}\n\\input{../../latex-math/basic-math}\n\\input{../../latex-math/basic-ml}\n\n\\newcommand{\\titlefigure}{figure_man/hypercube.png}\n\\newcommand{\\learninggoals}{\n  \\item Understand that our intuition about geometry fails in high-dimensional spaces\n  \\item Understand the effects of the curse of dimensionality\n}\n\n\\title{Introduction to Machine Learning}\n\\date{}\n\n\\begin{document}\n\n\\lecturechapter{Curse of Dimensionality}\n\\lecture{Introduction to Machine Learning}\n\n\n\n\\begin{vbframe}{Curse of dimensionality}\n\n\n\\begin{itemize}\n\\item The phenomenon of data becoming sparse in high-dimensional spaces is one effect of the \\textbf{curse of dimensionality}.\n\\item The \\textbf{curse of dimensionality} refers to various phenomena that arise when analyzing data in high-dimensional spaces that do not occur in low-dimensional spaces.\n\\item Our intuition about the geometry of a space is formed in two and three dimensions. \n\\item We will see: This intuition is often misleading in high-dimensional spaces.\n\\end{itemize}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Curse of Dimensionality: Example}\nTo illustrate one of the problematic phenomena of data in high dimensional data, we look at an introductory example: \\\\ \\lz\nWe are given $20$ emails, $10$ of them are spam and $10$ are not. \\\\\nOur goal is to predict if a new incoming mail is spam or not. \n\n\\medskip\n\nFor each email, we extract the following features:\n\n\\begin{itemize}\n\\item frequency of exclamation marks (in \\%)\n\\item the length of the longest sequence of capital letters\n\\item the frequency of certain words, e.g., \\enquote{free} (in \\%)\n\\item ... \n\\end{itemize}\n\n... and we could extract many more features!\n\n\\framebreak\n\n\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 11cm ]{figure_man/exclamation_marks.png}\n\\end{center}\n\nBased on the frequency of exclamation marks, we train a very simple classifier (a decision stump with split point $\\xv = 0.25$):\n\n\\begin{itemize}\n\\item We divide the input space into $2$ equally sized regions.\n\\item In the second region $[0.25, 0.5]$, $7$ out of $10$ are spam.\n\\item Given that at least $0.25\\%$ of all letters are exclamation marks, an email is spam with a probability of $\\frac{7}{10} = 0.7$.\n\\end{itemize}\n\\framebreak\n\n\nLet us feed more information into our classifier. 
We include a feature that contains the length of the longest sequence of capital letters.\n\\medskip\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 10cm ]{figure_man/capital_letters.png}\n\\end{center}\n\n\\begin{itemize}\n\\item In the 1D case we had $20$ observations across $2$ regions.\n\\item The same number is now spread across $4$ regions.\n\\end{itemize}\n\\framebreak\n\n\nLet us further increase the dimensionality to 3 by using the frequency of the word \\enquote{your} in an email.\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 10cm]{figure_man/capital_letters2.png}\n\\end{center}\n\n\\vspace*{-.3cm}\n\n\\framebreak\n\n\\begin{itemize}\n\\item When adding a third dimension, the same number of observations is spread across $8$ regions.\n\\item In $4$ dimensions the data points are spread across $16$ cells, in $5$ dimensions across $32$ cells and so on ...\n\\item As dimensionality increases, the data become \\textbf{sparse}; some of the cells become empty.\n\\item There might be too few data in each of the blocks to understand the distribution of the data and to model it.\n\\end{itemize}\n\n\n\\vspace*{-.2cm}\n\n\\begin{center}\n\\includegraphics[width = 0.5\\textwidth]{figure_man/exponentialcubes.png}\\\\\n\\scriptsize{Bishop, Pattern Recognition and Machine Learning, 2006}\n\\end{center}\n\n\\end{vbframe}\n\n\n\n\n\\section{Geometry of High-Dimensional Spaces}\n\n\\begin{vbframe}{The high-dimensional cube}\n\n\\begin{itemize}\n  \\item We embed a small cube with edge length $a$ inside a unit cube.\n  \\item How long does the edge length $a$ of this small hypercube have to be so that the hypercube covers $10\\%, 20\\%, ...$ of the volume of the unit cube (volume 1)?\n\n  \\medskip\n  \\begin{center}\n    \\includegraphics[height = 4cm, width = 4cm]{figure_man/hypercube.png}\n  \\end{center}\n\n\\framebreak\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 10cm ]{figure_man/high-dim-cube.png}\n\\end{center}\n\n\\medskip \n\n  \\begin{footnotesize}\n  \\begin{eqnarray*}\n    a^p &=& \\frac{1}{10} \\Leftrightarrow a = \\frac{1}{\\sqrt[p]{10}}\n  \\end{eqnarray*}\n  \\end{footnotesize}\n  \\vspace*{-0.5cm}\n  \\item  So: covering $10\\%$ of total volume in a cell requires cells with almost $50\\%$ of the entire range in $3$ dimensions, $80\\%$ in $10$ dimensions. \n\\end{itemize}\n\n\\end{vbframe}\n\n\n\\begin{vbframe}{The high-dimensional sphere}\n\n\nAnother manifestation of the \\textbf{curse of dimensionality} is that the majority of data points are close to the outer edges of the sample.\n \n\nConsider a hypersphere of radius $1$. The fraction of volume that lies in the $\\epsilon$-\\enquote{edge}, $\\epsilon := R - r$, of this hypersphere can be calculated by the formula\n\n\\vspace*{-0.7cm}\n\n$$\n1-\\left(1-\\frac{\\epsilon}{R}\\right)^p.\n$$\n\n\\vspace*{-0.5cm}\n\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{figure_man/orange.png}\n\\end{center}\n\n\\vspace*{-0.5cm}\n\nIf we peel a high-dimensional orange, there is almost nothing left. \n\n\\flushleft\n\n\n\\framebreak\n\nConsider a $20$-dimensional sphere. 
Nearly all of the volume lies in its outer shell of thickness $0.2$:\n\\medskip\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 10cm ]{figure_man/cursedim-fractionedge-plot-1.pdf}\n\\end{center}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Hypersphere within hypercube}\nConsider a $p$-dimensional hypersphere of radius $r$ and volume $S_p(r)$ inscribed in a $p$-dimensional hypercube with sides of length $2r$ and volume $C_p(r)$. Then it holds that \n\\begin{footnotesize}\n$$\\lim_{p\\rightarrow \\infty} \\frac{S_p(r)}{C_p(r)} = \\lim_{p\\rightarrow \\infty}\n\\frac{\\left( \\frac{\\pi^{\\frac{p}{2}}}{\\Gamma(\\frac{p}{2}+1)} \\right)r^p}{(2r)^p} =\n \\lim_{p\\rightarrow \\infty} \\frac{\\pi^{\\frac{p}{2}}}{2^p\\Gamma(\\frac{p}{2}+1)} = 0,$$\n\\end{footnotesize}\ni.e., as the dimensionality increases, most of the volume of the hypercube can be found in its corners.\n\n\\begin{center}\n\\includegraphics[height = 3cm, keepaspectratio]{figure_man/sphere_in_cube.png}\\\\\n\\scriptsize{Mohammed J. Zaki, Wagner Meira, Jr., Data Mining and Analysis: Fundamental Concepts and Algorithms, 2014}\n\\end{center}\n\n\\framebreak\n\nConsider a $10$-dimensional sphere inscribed in a $10$-dimensional cube. Nearly all of the volume lies in the corners of the cube:\n\\medskip\n\n\\vspace*{0.1cm}\n\\begin{center}\n\\includegraphics[width = 10cm ]{figure_man/vol_dim.png}\n\\end{center}\n\n\\begin{footnotesize}\nNote: For $r > 0$, the volume fraction $\\frac{S_p(r)}{C_p(r)}$ is independent of \n$r$.\n\\end{footnotesize}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Uniformly distributed data}\nThe consequences of the previous results for uniformly distributed data in the high-dimensional hypercube are:\n\n\\medskip\n\n\\begin{itemize}\n\\item Most of the data points will lie on the boundary of the space.\n\\item The points will be mainly scattered on the large number of corners of the hypercube, which themselves will become very long spikes.\n\\item Hence the higher the dimensionality, the more similar the minimum and maximum distances between points will become.\n\\item This degrades the effectiveness of most distance functions.\n\\item Neighborhoods of points will not be local anymore.\n\\end{itemize}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Gaussians in high dimensions}\n\nA further manifestation of the \\textbf{curse of dimensionality} appears if we consider a standard Gaussian $N_p(\\bm{0}, \\id_p)$ in $p$ dimensions.\n\n\\begin{itemize}\n    \\item After transforming from Cartesian to polar coordinates and integrating out the directional variables, we obtain an expression for the density $p(r)$ as a function of the radius $r$ (i.e., the point's distance from the origin), s.t.\n    $$ p(r) = \\frac{S_p r^{p-1}}{(2 \\pi \\sigma^2)^{p/2}} \\exp \\left( -\\frac{r^2}{2\\sigma^2}\\right),$$\n    where $S_p$ is the surface area of the $p$-dimensional unit hypersphere.\n    \\item Thus $p(r) \\delta r$ is the approximate probability mass inside a thin shell of thickness $\\delta r$ located at radius $r$. \n    \n  \n\\framebreak \n\\item To verify this functional relationship empirically, we draw $10^4$ points from the $p$-dimensional standard normal distribution and plot $p(r)$ over the histogram of the points' distances to the origin:\n\n\\begin{center}\n\\includegraphics[width = 11cm ]{figure_man/histograms.png}\n\\end{center}\n\n\\item We can see that for large $p$ the probability mass of the Gaussian is concentrated in a fairly thin \\enquote{shell} rather far away from the origin. 
\\\\\nThis may seem counterintuitive, but:\n\n\\framebreak\n\n  \\item For the probability mass of a hyperspherical shell it follows that\n    $$\\int^{r + \\frac{\\delta r}{2}}_{r - \\frac{\\delta r}{2}}p(\\tilde{r})d\\tilde{r} = \\int_{r - \\frac{\\delta r}{2} \\; \\leq \\; ||\\xv||_2 \\; \\leq \\; r + \\frac{\\delta r}{2}} f_p(\\tilde{\\xv}) d\\tilde{\\xv},$$\n    where $f_p(\\xv)$ is the density of the $p$-dimensional standard normal distribution and $p(r)$ the associated radial density.\n\n\\begin{center}\n\\includegraphics[width = 11cm ]{figure_man/2d_normal.png}\n\\end{center}\n\n\\item While $f_p$ becomes smaller with increasing $r$, the region of the integral -the hyperspherical shell- becomes bigger.\n\\end{itemize}\n\n\\normalsize\n\n\\end{vbframe}\n\n\\begin{vbframe}{Intermediate Remarks}\nHowever, we can find effective techniques applicable to high-dimensional spaces if we exploit these properties of real data: \n\\begin{itemize}\n\\item Often the data is restricted to a manifold of a lower dimension. \\\\ (Or at least the directions in the feature space over which significant changes in the target variables occur may be confined.)\n\\item At least locally small changes in the input variables usually will result in small changes in the target variables. \n\\end{itemize}\n\n\\begin{center}\n\\includegraphics[width = 11cm ]{figure_man/manifold.png}\n\\end{center}\n\n\\end{vbframe}\n\n\\endlecture\n\\end{document}", "meta": {"hexsha": "22161ee804f5dafcf584d453168a6e0b6bcb6ff8", "size": 9976, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/cod/slides-cod.tex", "max_stars_repo_name": "jukaje/lecture_i2ml", "max_stars_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_issues_repo_path": "slides/cod/slides-cod.tex", "max_issues_repo_name": "jukaje/lecture_i2ml", "max_issues_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 323, "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_forks_repo_path": "slides/cod/slides-cod.tex", "max_forks_repo_name": "jukaje/lecture_i2ml", "max_forks_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "avg_line_length": 34.7595818815, "max_line_length": 241, "alphanum_fraction": 0.7400761828, "num_tokens": 2816, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149923816048, "lm_q2_score": 0.7853085758631158, "lm_q1q2_score": 0.5928412175649129}}
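The $10^4$-point experiment described in the Gaussian slides is easy to reproduce; a minimal NumPy sketch (the seed and the choice of dimensions are arbitrary):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nfor p in (1, 2, 20):\n    x = rng.standard_normal((10_000, p))  # 10^4 draws from N_p(0, I_p)\n    radii = np.linalg.norm(x, axis=1)\n    # For large p the radii concentrate near sqrt(p), away from the origin.\n    print(p, radii.mean().round(2), radii.std().round(2))\n\\end{verbatim}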
{"text": "% DOCUMENT HEADER\n\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\\usepackage{pgfplots}\n\\usepackage{amsfonts}\n\\usepackage{fancyhdr}\n\\usepackage{tabularx}\n\\usepackage{gensymb}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\n% Set up hyperlinks\n\\hypersetup{colorlinks=true,\n            linkcolor=blue,\n            urlcolor=blue}\n\n\n% TITLE\n\\title{\\textbf{Understanding the Tangent Function}}\n\\author{\\textit{Michal \u0160pano}}\n\\date{}\n\n% Set up default path for pictures \n\\graphicspath{{assets/}}\n\n% Page header\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{MAT, @michalspano}\n\\lhead{Tangent function}\n\\rfoot{Page \\thepage}\n\n\\begin{document}\n\n\\maketitle\n\n% SECTION 1\n\\section{Definition}\nTo define the $tangent$ function, one must understand the basic principles of deriving $sine$ and $cosine$ using a so-called \\textbf{unit circle}.\n\nA \\textbf{unit circle} is the circle of radius 1 centred at the origin $O(0, 0)$ in the \\textbf{Cartesian coordinate system}.\n\n% Include a picture of a unit circle\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=9cm]{unit_circle.png}\n    \\caption{Unit circle}\n\\end{figure}\n\nThen for a point $A = [x, y]$ moved along the circumference $k$ by an angle $\\theta$ is valid:\\\\ \n\\[  \nA = \\left[ cos(\\theta), sin(\\theta) \\right]\n\\]\n\n\\textit{Note: an applet is available (\\href{https://www.geogebra.org/m/gvtvjtpf}{link}) or scan its \\textbf{QR code} under \\textbf{Resources}}.\n\nA right-angled triangle consists of the \\textbf{hypotenuse} (the side opposing the right angle) and 2 other adjacent sides called \\textbf{legs}. The definition of tangent states that it is the quotient of the opposite side and the adjacent side to a certain angle $\\theta$, symbolically: \n\\[\ntan(\\theta) = \\frac{opposite}{adjacent}\n\\]\n\nWhich can be further transformed into: \n\\[\ntan(\\theta) = \\frac{y}{x} ... tan(\\theta) = \\frac{sin(\\theta)}{cos(\\theta)}\n\\]\n\n% SECTION 2\n\\section{Properties}\n\nNow, let's address the properties of $f:y=tan(x)$\n\n\\textbf{Graph: }\n\n% GENERAL GRAPH\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=8cm]{tan.png}\n    \\caption{$y=tan(x)$}\n\\end{figure}\n\n% ENUMERATED PROPERTIES\n% http://www.nabla.hr/TF-TrigFunctionsD3.htm\n\\textbf{Properties: }\n\\begin{enumerate}\n\n    % DOMAIN\n    \\item \\textbf{Domain}\n    \\[\n    f:y=tan(x) ... y=\\frac{sin(x)}{cos(x)} \\Leftrightarrow cos(x) \\neq 0\n    \\]\n    \n    \\begin{figure}[htp]\n        \\centering\n        \\includegraphics[width=10cm]{cosine.png}\n        \\caption{$y=cos(x)$}\n    \\end{figure}    \n    \n    \\textit{Let $f:y=cos(x)$}, solve for $x\\in\\mathbb{R}$.\n    \\[\n    x \\in \\Big\\{ \\frac{\\pi}{2}; \\frac{3\\pi}{2} + 2k\\pi, k \\in\\mathbb{Z} \\Big\\} \\Leftrightarrow\n    x \\in \\Big\\{ \\frac{\\pi}{2} + k\\pi, k \\in\\mathbb{Z} \\Big\\}\n    \\]\n    Through additional steps, we obtain the following result:\n    \\[\n    x\\in\\{(2k+1)\\frac{\\pi}{2}, k\\in\\mathbb{Z} \\}\n    \\]\n    \n    In other words, the \\textbf{Domain} of $y=tan(x)$ is all real numbers except every $x$ for which is $cos(x) = 0$ defined. 
Symbolically:\n    \\[\n    D_f= \\mathbb{R}-\\{ (2k+1)\\frac{\\pi}{2}, k\\in\\mathbb{Z} \\}\n    \\]\n    \n    % Range of values\n    \\item \\textbf{Range of values}\n    \n    As shown in the figure above, the \\textbf{Range} of $f: y=tan(x)$:\n    \\[\n    H_f=\\mathbb{R}\n    \\]\n    \n    % Values in quadrants\n    \\item \\textbf{Values in quadrants}\n    \n    \\begin{center}\n    \\begin{tabular}{||c c c c||} \n    \\hline\n    \\textit{I.} & \\textit{II.} & \\textit{III} & \\textit{IV} \\\\ [0.5ex] \n    \\hline\\hline\n    $\\Big(0;\\frac{\\pi}{2}\\Big)$ & $\\Big(\\frac{\\pi}{2};\\pi\\Big)$ & $\\Big(\\pi;\\frac{3\\pi}{2}\\Big)$ & \n    $\\Big(\\frac{3\\pi}{2};2\\pi\\Big)$ \\\\\n    \\hline\n    + & - & + & - \\\\ [1ex] \n    \\hline\n    \\end{tabular}\n    \\end{center}\n    \n    % Periodicity of tangent\n    \\item \\textbf{Periodicity}\n    \n    Suppose that the $period$ of $y=tan(x)$ is $2\\pi$: \n    \\[\n    tan(x + 2k\\pi) = \\frac{sin(x + 2k\\pi)}{cos(x + 2k\\pi)} = \\frac{sin(x)}{cos(x)} = tan(x)\n    \\]\n    However, $2\\pi$ is not the smallest possible period. Let us consider a case where $period$ is $\\pi$.\n    \\[\n    tan(x + \\pi) = \\frac{sin(x +\\pi)}{cos(x + \\pi)} = \\frac{-sin(x)}{-cos(x)} = tan(x)\n    \\]\n    \n    % Include graphical depiction\n    \\begin{figure}[ht!]\n        \\centering\n        \\vspace{-10pt}\n        \\centering\\includegraphics[scale=2.5]{unit_circle_tangent_proof.png}\n        \\vspace{-10pt}\n        \\caption{Periodicity of tangent}\n        \\vspace{-10pt}\n    \\end{figure}\n    \n    \\textbf{Explanation}: The $period$ of $f:y=sin(x)$ and $f:y=cos(x)$ is $2\\pi$, any $x \\in D_f$ shifted by the period of $\\pi$ will produce a \\textbf{function value} for which is valid: $\\forall x \\in D_f \\subset \\mathbb{R}: f(x + \\pi)=-f(x)$.\n    \n    Let $A'$ be a point shifted along the circumference $k$ by an angle $\\theta$, $A'_1$ be a point shifted along the same circumference by an angle $\\theta + \\pi$. Their coordinates are depicted using a \\textbf{unit circle} and are symmetrical through $O[0,0]$ (point symmetry). 
Notice: \n    \\[\n    A'_x = -A'_{x_1} \\land A'_y = -A'_{y_1} \n    \\]\n    \n    \\textit{Note: an applet is available (\\href{https://www.geogebra.org/m/qr7hjqs6}{link}) or scan its \\textbf{QR code} under \\textbf{Resources}}.\n    \n    \\textbf{Conclusion}\n    Function \\textbf{tangent} is \\textbf{periodic} with the smallest possible\n    period $\\pi$.\n    \n    % Parity of tangent\n    \\item \\textbf{Parity}\n    \\[\n    f^+(x):y=tan(x)\n    \\]\n    \\[\n    f^-(-x):y=tan(-x) = \\frac{sin(-x)}{cos(-x)} = \\frac{-sin(x)}{cos(x)} = -\\frac{sin(x)}{cos(x)} = -tan(x),\n    \\]\n    Based on the parity of \\textbf{sine} (odd), \\textbf{cosine} (even) $\\Rightarrow sin(-x)=-sin(x) \\land cos(-x) = cos(x)$.\n    \n    $f(-x) = -f(x) \\Rightarrow$ Tangent is an \\textbf{odd function}.\n    \n    % Monotonicity of tangent\n    \\item \\textbf{Monotonicity}\n    \n    The tangent function is \\textbf{increasing} on $\\mathbb{R}$ between 2 consecutive vertical \\textbf{asymptotes}, where every asymptote is defined as: $x=(2k + 1)\\frac{\\pi}{2}, k \\in\\mathbb{Z}$,\\\\that is:\n    \\[\n    \\forall x_{1,2}\\in D_f:tan(x_1)<tan(x_2) \\land x_1<x_2 \n    \\]\n    \n    \\begin{figure}[ht!]\n        \\centering\n        \\vspace{-10pt}\n        \\centering\\includegraphics[scale=0.92]{monotony.png}\n        \\vspace{-10pt}\n        \\caption{Monotonicity of tangent}\n        \\vspace{-10pt}\n    \\end{figure}\n    \n    \\item \\textbf{Graph}\n    \n    \\textbf{Tangentoid} is the plot of the tangent function.\n    \n    \\item \\textbf{Important function values of tangent}\n    \n    \\begin{center}\n    \\begin{tabular}{||c c c c c c||} \n    \\hline\n     $x: $ & $0\\degree$ & $30\\degree$ & $45\\degree$ & $60\\degree$ & $90\\degree$ \\\\ [0.5ex] \n     \\hline\n     $tan(x): $ & 0 & $\\frac{\\sqrt{3}}{3}$ & 1 & $\\sqrt{3}$ & $\\text{\\O}$ \\\\ [1ex] \n     \\hline\n    \\end{tabular}\n    \n    \\end{center}\n    \n\\end{enumerate}\n\n% Include scanable resources\n\\section{\\textbf{Resources}}\n\n\\centering\\includegraphics[scale=0.05]{applet1.png}\n\\includegraphics[scale=0.05]{applet2.png}\n\n\\centering\\textit{Applet 1, Applet 2}\n\n% Plot list of figures\n\\listoffigures\n\n\\end{document}\n\n", "meta": {"hexsha": "cdf6f905ee229986f76f9655ad5c2a0dbe472fe8", "size": 7053, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "study-materials/Mathematics/understanding-tangent/tangent.tex", "max_stars_repo_name": "michalspano/study-materials", "max_stars_repo_head_hexsha": "a1d69bcf84ae654ba247587f717168225aefd588", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-02-10T07:33:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T15:29:13.000Z", "max_issues_repo_path": "study-materials/Mathematics/understanding-tangent/tangent.tex", "max_issues_repo_name": "michalspano/study-materials", "max_issues_repo_head_hexsha": "a1d69bcf84ae654ba247587f717168225aefd588", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "study-materials/Mathematics/understanding-tangent/tangent.tex", "max_forks_repo_name": "michalspano/study-materials", "max_forks_repo_head_hexsha": "a1d69bcf84ae654ba247587f717168225aefd588", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
30.141025641, "max_line_length": 288, "alphanum_fraction": 0.6150574224, "num_tokens": 2454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5928412132347399}}
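As a worked instance of the defining quotient, consistent with the table of important function values above:\n\\[\ntan(30\\degree) = \\frac{sin(30\\degree)}{cos(30\\degree)} = \\frac{1/2}{\\sqrt{3}/2} = \\frac{1}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}\n\\]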
{"text": "\\chapter{Appendix}\\label{section-appendix} \n\\section{MATLAB Thrust Stabilization-Single tether implementation}\\label{appendix-thruststab}\n\\begin{lstlisting}\n% This code is an implementation of Thrust Stabilization\n% for single tether\n% Hovell, K, and Ulrich, S. (2017)\n% \"Experimental Validation for Tethered Capture of the Spinning Space Debris\"\n% AIAA SciTech Forum, AIAA Guidance, Navigation, and Control Conference, Grapevine, Texas. 9 ?13 January\n% Paper no AIAA 2017-1049.\n\n% ======\n% Nomenclature:\n% Jzzt = Moment of inertia, target about z-axis \n% Jzzc = Moment of inertia, chaser about z-axis\n\n% mt = Mass of the target\n% mc = Mass of the chaser\n% mj = Mass of the junction\n\n% thetac0 = Initial angle of the chaser \n% thetat0 = Initial angle of the target\n\n% TS = Time step\n\n% Lm0= unstretched length of the main tether\n% Ls10 = unstretched length of the sub tether 1\n% Ls20 = unstretched length of the sub tether 2\n% Ls0 = unstretched length of the single tether\n\n% ac = tether attachment point to body chaser with respect to chaser's\n% center of mass\n% as1 = tether attachment point to body sub tether 1 with respect to\n% sub-tether1's center of mass\n% as2 = tether attachment point to body sub tether 2 with respect to\n% sub-tether2's center of mass\n% at = tether attachment point to body target with respect to\n% target's center of mass\n\n% csingle = damping coefficient for single tether\n% csub = damping coefficient for sub tether\n\n% KPtheta = proportional gain matrix for the theta \n% KDtheta = derivative gain matrix for the theta\n% KP = proportional gain matrix for the position\n% KD = derivative gain matrix for the postion\n\n% ======\n% Define initial conditions and TSS parameters for all simulations and\n% experiment\uff1a\n\n%in Kg\u00b7m^2\nJzzt=0.22; \nJzzc=0.30; \n\n%in Kilogram\nmt=12.19; \nmc=17.24; \nmj=0.01;\n\n%in Degree\nthetac0=180; \nthetat0=0;\n\n% in Second\nTS=0.004; \n\n%in Meter\nLm0=0.28; \nLs10=0.28; \nLs20=0.28;  \nLs0=0.54; \nac=[0.135;-0.009];\nas1=[0.110;0.127];\nas2=[0.113;-0.094];\nat=[0.110;0.016];\n\n% in Newton\u00b7second/meter\ncsingle=3.0; \ncsub=0.9;\n\nKPtheta=0.33;\nKDtheta=0.56;\nKP=[19.1 0;0 19.1];\nKD=[32.7 0;0 32.7];\n\n% ======\n% Define initial conditions for Thurst Stabilization simulations and experiment\nrt0=[0.35;1.19];\nrc0=[1.10;1.20];\nvt0=[0;0];\nvc0=[0;0];\nwt0=10; %degree/second\nrf=[3.10;1.20];\n\n% ======\n% Initialization\n% Simulation at time t=2s\n% Nothing change in the first two seconds\nrt=rt0;\nrc=rc0;\nvt=vt0;\nvc=vc0;\nthetat=thetat0;\nthetac=thetac0;\nLs=rt+A(thetat)*at-rc-A(thetac)*ac;\n\n% fai is the target tether angle\nfai=norm(atand(Ls(2)/Ls(1))-thetat);\n\n\n\nwt=wt0;\nwc=0;\nt=0; % It started to simulate at T=2, which seem it as t=0;\nFs=[0;0];\n\n% AngularMomentum \n% In radius\n\nAngularMomentum = Jzzt * wt/57.3 + Jzzc * wc/57.3 ; \n\naccelerationc=[0;0];\naccelerationt=[0;0];\nangularaccelerationt=0;\ntaot=0;\nFcontrol=[0;0];\n\n\n\nfor i=1:25000 % i* TS = the time , i is the number of the current time step\n%record the 
data\nrecordt(i)=t;\nrecordrc(i,1)=rc(1);\nrecordrc(i,2)=rc(2);\nrecordrt(i,1)=rt(1);\nrecordrt(i,2)=rt(2);\nrecordthetat(i)=thetat;\n\nrecordwt(i)=wt;\nrecordFs(i,1)=Fs(1);\nrecordFs(i,2)=Fs(2);\n\nrecordLs(i,1)=Ls(1);\nrecordLs(i,2)=Ls(2);\n\n\nrecordvc(i,1)=vc(1);\nrecordvc(i,2)=vc(2);\n\nrecordvt(i,1)=vt(1);\nrecordvt(i,2)=vt(2);\n\nrecordaccelerationc(i,1)=accelerationc(1);\nrecordaccelerationc(i,2)=accelerationc(2);\nrecordaccelerationt(i,1)=accelerationt(1);\nrecordaccelerationt(i,2)=accelerationt(2);\n\nrecordangularaccelerationt(i)=angularaccelerationt;\n\nrecordtaot(i)=taot;\n\nrecordFcontrol(i,1)=Fcontrol(1);\nrecordFcontrol(i,2)=Fcontrol(2);\nrecordfai(i)=fai;\n\nrecordAngularMomentum(i) = AngularMomentum ; \n\n%recordstiffness(i) = stiffness(norm(Ls)-Ls0);\n\n%Ls is the vector which is from chaser to target, pointing at the target\nLs=rt+A(thetat)*at-rc-A(thetac)*ac;\n\n% fai is target tether angle\nfai=norm(-atand(Ls(2)/Ls(1))+thetat);\n\n\n% Calculate the state of the tether \n% Hovell and Ulrich (2017) eqn 19\nif (norm(Ls)-Ls0)>0 % tether is tensioned\n    Fs=(stiffness(norm(Ls)-Ls0)*(norm(Ls)-Ls0)+...\n        dot(csingle*(vt+A(thetat)*[-wt/57.3*at(2);wt/57.3*at(1)]-vc-A(thetac)*[-wc/57.3*ac(2);wc/57.3*ac(1)]),...\n        Ls/norm(Ls))).*Ls/norm(Ls);\nelse % tether is slackness\n    Fs=[0;0];\nend\n\n% Force on target due to tether tension:\n% through center of mass\n% referenced to the target's body frame\nFst=(A(thetat))'*Fs;\nFsc=(A(thetac))'*Fs;\n\n\n% Torque on target due to tether tension:\n% about the z-axis of body frame (which is parallel to the z-axis of inertial frame)\n% through center of mass target\n% referenced to target body frame (the torque referenced to the body is the same as to the inertial frame)\ntaot=at(2)*Fst(1)-at(1)*Fst(2); % Nm\ntaoc=ac(1)*Fsc(2)-ac(2)*Fsc(1); % Nm\n\n% The control thrust of chaser vehicle\nFcontrol=KP*(rdes(t)-rc)+KD*(vdes(t)-vc);\n\n% The control torque of chaser vehicle\ntaocontrol=KPtheta* (180 - thetac) + KDtheta*(0-wc)  ;\naccelerationt=(-Fs/mt);\naccelerationc=(Fs+Fcontrol)/mc;\nangularaccelerationt=taot/Jzzt;\nangularaccelerationc=(taoc+taocontrol)/Jzzc;\n\n% position and angle update\nrt=rt+vt*TS;\nrc=rc+vc*TS;\nthetat=thetat+wt*TS;\nthetac=thetac+wc*TS;\n\nvt=vt+accelerationt*TS;\nwt=wt+angularaccelerationt*TS*(180/pi);%/degree t-1\nvc=vc+accelerationc*TS;\nwc=wc+angularaccelerationc*TS*(180/pi); %/degree t-1\nAngularMomentum = Jzzt * wt/57.3 + Jzzc * wc/57.3;\nt=t+TS;\nend\nrecordt=recordt';\nrecordthetat=recordthetat';\nrecordwt=recordwt';\n\n\nrecordrc=recordrc';\nrecordrt=recordrt';\nrecordvc=recordvc';\nrecordvt=recordvt';\nrecordaccelerationc=recordaccelerationc';\nrecordaccelerationt=recordaccelerationt';\nrecordangularaccelerationt=recordangularaccelerationt';\nrecordangularaccelerationt=recordangularaccelerationt';\nrecordaccelerationc=recordaccelerationc';\nrecordaccelerationt=recordaccelerationt';\nrecordtaot=recordtaot';\nrecordFcontrol=recordFcontrol';\n\nn=1;\nfigure(n)\nplot(recordt,recordwt);\ntitle('Target angular rate')\nxlabel('Time,s')\nylabel('Angular rate, deg/s')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordthetat)\ntitle('angle of target')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordrc(1,:))\ntitle(' x position of the chaser')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordrc(2,:))\ntitle('y position of the chaser')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordrt(1,:))\ntitle('x position of the target')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordrt(2,:))\ntitle('y position of the 
target')\nn=n+1;\n\nfigure(n)\nplot(recordt,mod(recordfai,360))\ntitle('Target tether angle')\nxlabel('Time,s')\nylabel('Tether Angle,deg')\nn=n+1;\n\nfigure(n)\nplot(recordt,recordAngularMomentum)\ntitle('Total Angular Momentum')\nxlabel('Time,s')\nylabel('Angular Momentum')\n\nn = n + 1;\nfigure(n)\nplot(recordrc(1,:),recordrc(2,:))\ntitle('Track of Chaser')\n\nn = n+1;\nfigure(n)\nplot(recordrt(1,:),recordrt(2,:))\ntitle('Track of Target')\n\n% The function gives the attitude matrix in terms of theta:\n% [cos -sin; sin cos]\n\nfunction attitude=A(sita)\nattitude=[cosd(sita) -sind(sita);sind(sita) cosd(sita)];\nend\n\n% calculate the desired position vector at time t\n% the goal is to reach the destination in 20 s after t=2 s\nfunction [rdes]=rdes(t)\nrc0=[1.10;1.20];\nrf=[3.10;1.20];\nd=2.*(rf-rc0)./(20*20); % constant acceleration that covers rf-rc0 in 20 s\n[rdes]=(rc0+d.*(t*t/2));\nend\n\n% calculate the single tether stiffness (N/m) in terms of the\n% stretch (m), the independent variable\n% x = norm(Ls)-Ls0\nfunction stiff=stiffness(x)\noriginalstiffness=[163.500000000000,114.450000000000,80.2636363636363,\n51.3857142857143,29.9096000000000,18.8842500000000,17.0724030612245,\n10.1577253521127,8.06715957446808,7.09840970654628,6.99056826923077,\n7.22393661971831,8.06497910662824,8.78250400534045,9.58060266159696];\noriginalstretch=[0.00300000000000000,0.00600000000000001,0.0110000000000000,\n0.0210000000000000,0.0450000000000000,0.0860000000000001,\n0.0980000000000001,0.213000000000000,0.329000000000000,0.443000000000000,\n0.520000000000000,0.639000000000000,0.694000000000000,0.749000000000000,\n0.789000000000000];\nstiff=spline(originalstretch,originalstiffness,x);\nend\n\n% calculate the desired velocity at time t\n% the goal is to reach the destination in 20 s after t=2 s\nfunction vdes=vdes(t)\nrc0=[1.10;1.20];\nrf=[3.10;1.20];\nd=2.*(rf-rc0)./(20*20); % constant acceleration that covers rf-rc0 in 20 s\nvdes=d*t;\nend\n\\end{lstlisting}\n\n\\section{Arduino CSV Streaming Example}\\label{appendix-CSV}\n\\begin{lstlisting}\n#include <ArdusatSDK.h>\n#include <Wire.h>\n#include <Arduino.h>\n\n\n/*--------------------------------------------------------------\n *  Setup Software Serial to allow for both RF communication and USB communication\n *    RX is digital pin 8 (connect to TX/DOUT of RF Device)\n *    TX is digital pin 9 (connect to RX/DIN of RF Device)\n *-------------------------------------------------------------*/\nArdusatSerial serialConnection(SERIAL_MODE_HARDWARE_AND_SOFTWARE, 8, 9); \n\n/*--------------------------------------------------------------\n *  Constant Definitions\n *------------------------------------------------------------*/\n \nAcceleration accel;\nGyro gyro;\nMagnetic mag;\nOrientation orient(accel, mag);\n\nvoid setup(void) {\n  serialConnection.begin(57600);\n  \n  accel.begin();\n  gyro.begin();\n  mag.begin();\n  orient.begin();\n  serialConnection.println(\"Start of the loop\");\n  serialConnection.println(\"The acceleration x y z unit in m/s^2\");\n  serialConnection.println(\"The angular rate x y z unit in radian per second\");\n  serialConnection.println(\"The orientation roll pitch heading unit in degree\");\n}\n\nvoid loop(void) {\n  serialConnection.println(accel.readToCSV(\"Acceleration\"));\n  serialConnection.println(gyro.readToCSV(\"Gyro\"));\n  serialConnection.println(orient.readToCSV(\"Orientation\"));\n}\n\n\\end{lstlisting}\n\n\n\\section{MATLAB Serial Real-Time Plotting}\\label{appendix-realtime} \n\\begin{lstlisting} \n%% Pre-process\nclose all;  % close all figures\nclear all;  % clear all workspace variables\nclc;        % clear the 
command line\nfclose('all'); % close all open files\ndelete(instrfindall); % reset the COM port \n\n%% Constants\nBAUDRATE = 57600; % must match the XBee BD parameter, i.e. the Interface Data Rate\nINPUTBUFFER = 51200; % Unit in bytes. Definition: a location that holds all\n% incoming information before it continues to the CPU for processing;\n% a buffer stores information before it is processed\n\n\n%% Initialize the stripchart\nfigure_acceleration = figure('Name','Acceleration');\naxes_accelerationx = subplot(3,1,1); % Axes Object\nxlabel(axes_accelerationx,'Time/ second')\nylabel(axes_accelerationx,'Accelerationx/ m/s^2')\n\naxes_accelerationy = subplot(3,1,2);\nxlabel(axes_accelerationy,'Time/ second')\nylabel(axes_accelerationy,'Accelerationy/ m/s^2')\n\naxes_accelerationz = subplot(3,1,3);\nxlabel(axes_accelerationz,'Time/ second')\nylabel(axes_accelerationz,'Accelerationz/ m/s^2')\n\nh_ax = animatedline(axes_accelerationx,'MaximumNumPoints',Inf,'Marker','o','Color','red'); % animated line object\nh_ay = animatedline(axes_accelerationy,'MaximumNumPoints',Inf,'Marker','+','Color','blue');\nh_az = animatedline(axes_accelerationz,'MaximumNumPoints',Inf,'Marker','*','Color','green');\n\nfigure_gyro = figure('Name','Gyro');\naxes_gyrox = subplot(3,1,1);\nxlabel(axes_gyrox,'Time/ second')\nylabel(axes_gyrox,'Gyrox/ rad/s')\n\naxes_gyroy = subplot(3,1,2);\nxlabel(axes_gyroy,'Time/ second')\nylabel(axes_gyroy,'Gyroy/ rad/s')\n\naxes_gyroz = subplot(3,1,3);\nxlabel(axes_gyroz,'Time/ second')\nylabel(axes_gyroz,'Gyroz/ rad/s')\n\nh_gx = animatedline(axes_gyrox,'MaximumNumPoints',Inf,'Marker','o','Color','red');\nh_gy = animatedline(axes_gyroy,'MaximumNumPoints',Inf,'Marker','+','Color','blue');\nh_gz = animatedline(axes_gyroz,'MaximumNumPoints',Inf,'Marker','*','Color','green');\n\nfigure_orientation = figure('Name','Orientation');\naxes_roll = subplot(3,1,1);\nxlabel(axes_roll,'Time/ second')\nylabel(axes_roll,'Roll/ deg')\n\naxes_pitch = subplot(3,1,2);\nxlabel(axes_pitch,'Time/ second')\nylabel(axes_pitch,'Pitch/ deg')\n\naxes_heading = subplot(3,1,3);\nxlabel(axes_heading,'Time/ second')\nylabel(axes_heading,'Heading/ deg')\n\nh_roll = animatedline(axes_roll,'MaximumNumPoints',Inf,'Marker','o','Color','red');\nh_pitch = animatedline(axes_pitch,'MaximumNumPoints',Inf,'Marker','+','Color','blue');\nh_heading = animatedline(axes_heading,'MaximumNumPoints',Inf,'Marker','*','Color','green');\n\n% Indices \ni = 1;\nj = 1;\nk = 1;\n\n% Record the data\n% 10000 rows of data\nrecord_acceleration = zeros(10000,4);\nrecord_gyro = zeros(10000,4);\nrecord_orientation = zeros(10000,4);\n\n%% Create a serial object\nboard = serial('COM4','BaudRate',BAUDRATE,'DataBits',8);\n% The COM port in use may differ:\n% COM4 for wireless, COM3 for wired\n% BAUDRATE unit: bits per second, e.g. 9600 means 9600 bits per second,\n% i.e. 1200 bytes per second, i.e. 1.2 kB/s (decimal) or 1.17 KiB/s (binary)\n\n\n% Set the serial port buffer\nset(board,'InputBufferSize',INPUTBUFFER);\n% InputBufferSize: total number of bytes that can be stored in the input\n% buffer during a read operation. 
The read operation is terminated if the\n% amount of data stored in the input buffer equals the InputBufferSize\n\nfopen(board);\nwhile(1)\n    a = fgetl(board);% a:character vector\n    b = textscan(a,'%u32 %s %f %f %f %u32','Delimiter',',');\n    \n    if strcmp(char(b{2}),'Acceleration')\n    \n        time = double(b{1})/1000; % in second\n        ax = b{3}; \n            addpoints(h_ax,time,ax);\n            drawnow limitrate\n             \n        ay = b{4};\n            addpoints(h_ay,time,ay);\n            drawnow limitrate\n            \n        az = b{5};\n            addpoints(h_az,time,az);\n            drawnow limitrate\n            \n        record_acceleration(i,:)=[time ax ay az];\n        i=i+1;\n        \n    elseif strcmp(char(b{2}),'Gyro')\n        time = double(b{1})/1000; % in second\n        gx = b{3}; \n            addpoints(h_gx,time,gx);\n            drawnow limitrate\n            \n        gy = b{4};\n            addpoints(h_gy,time,gy);\n            drawnow limitrate\n        gz = b{5};\n            addpoints(h_gz,time,gz);\n            drawnow limitrate\n            \n        record_gyro(j,:)=[time gx gy gz];\n        j=j+1;\n        \n    elseif strcmp(char(b{2}),'Orientation')\n        time = double(b{1})/1000; % in second\n        roll = b{3};\n            addpoints(h_roll,time,roll);\n            drawnow limitrate\n            \n        pitch = b{4};\n            addpoints(h_pitch,time,pitch);\n            drawnow limitrate\n            \n        heading = b{5};\n            addpoints(h_heading,time,heading);\n            drawnow limitrate\n            \n        record_orientation(k,:) = [time roll pitch heading];\n        k = k+1;\n    end \n\nend \n\\end{lstlisting} \n\\newpage    \n", "meta": {"hexsha": "f317f51bec8e2f6785962be0c030b0b706fa6468", "size": 14416, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/section-appendix.tex", "max_stars_repo_name": "KEVINHKUST/IndependentProject_Thesis", "max_stars_repo_head_hexsha": "a1fdd6e4c371ca0fe4b9ce67c76241d7482acc1d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/section-appendix.tex", "max_issues_repo_name": "KEVINHKUST/IndependentProject_Thesis", "max_issues_repo_head_hexsha": "a1fdd6e4c371ca0fe4b9ce67c76241d7482acc1d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/section-appendix.tex", "max_forks_repo_name": "KEVINHKUST/IndependentProject_Thesis", "max_forks_repo_head_hexsha": "a1fdd6e4c371ca0fe4b9ce67c76241d7482acc1d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.7955390335, "max_line_length": 113, "alphanum_fraction": 0.6897891232, "num_tokens": 4514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511616741042, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5927984404600957}}
{"text": "\\chapter{First Chapter}\n\\label{Chapter:1}\n\n\\section{A Section}\n\\label{Section:1-1}\n\nExample of some math:\n\\begin{equation}\n\t\\begin{alignedat}{2}\n\t\\mathcal{H} = &-\\mu_\\mathrm{S}\\, H \\sum\\limits_{i=1}^{N_\\mathrm{S}} \\hat{H}\\cdot\\vec{n}_i &&- \\sum\\limits_{\\braket{ij}}\\, J_{ij} \\vec{n}_i\\cdot\\vec{n}_j\\\\\n\t&-K \\sum\\limits_{i=1}^{N_\\mathrm{S}} (\\hat{K}\\cdot\\vec{n}_i)^2 &&- \\sum\\limits_{\\braket{ij}} \\vec{D}_{ij} \\cdot (\\vec{n}_i\\times\\vec{n}_j),\n\t\\label{Eq:Hamiltonian_Model}\n\t\\end{alignedat}\n\\end{equation}\nThis is an atomistic spin model which can be used in \\acf{ASD}", "meta": {"hexsha": "f1cc8212dc52ca440f8cb8bbabe0a96c4ea08e74", "size": 566, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapter1/Chapter1.tex", "max_stars_repo_name": "GPMueller/mwe-tex", "max_stars_repo_head_hexsha": "b6a734d44a7966613e2ce74e3e63656a6c393de0", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2016-11-15T16:48:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T12:06:39.000Z", "max_issues_repo_path": "thesis/Chapter1/Chapter1.tex", "max_issues_repo_name": "GPMueller/mwe-tex", "max_issues_repo_head_hexsha": "b6a734d44a7966613e2ce74e3e63656a6c393de0", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapter1/Chapter1.tex", "max_forks_repo_name": "GPMueller/mwe-tex", "max_forks_repo_head_hexsha": "b6a734d44a7966613e2ce74e3e63656a6c393de0", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7333333333, "max_line_length": 155, "alphanum_fraction": 0.6643109541, "num_tokens": 235, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299570920386, "lm_q2_score": 0.640635854839898, "lm_q1q2_score": 0.5927354844851404}}
{"text": "%classic\n\\section{Picard Vessiot Theory}\n\n\n\\begin{frame}\n\\frametitle{}\n\\bi\n\\item $(k, \\partial)$ differential field over $\\qzcl$,\n\\item $L  \\in k[\\partial] \\subset \\mathrm{End}_{\\qzcl}(k)$ our differential operator\n$$L = \\partial^2 - a,\\ a \\in k^\\times = k\\backslash \\{0\\}$$\n\\item differential module\n$$\\left<\\partial^i(x) = e_{1-i} : i = 0, 1\\right>_{\\qzcl} \\subset k \\oplus k,$$\nthe solution space(!)\n\\item representation matrix\n$$A_\\partial = \\left(\\bao{cc}0&1\\\\a&0\\\\\\ea\\right),$$\n\\ei\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{}\n\\bi\n\\item $A_\\partial$ has eigenvalues in $\\qzcl$ if $a \\in \\qzcl^\\times$,\n\\item eigenspaces of $A_\\partial$ are non-trivial diff submodules of solution space,\n\\ei\n\\end{frame}\n\n", "meta": {"hexsha": "15a9c14fed9be5bf0d679cc051cda6731f2e3898", "size": 706, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "def_talk/classic.tex", "max_stars_repo_name": "gmuel/texlib", "max_stars_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "def_talk/classic.tex", "max_issues_repo_name": "gmuel/texlib", "max_issues_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "def_talk/classic.tex", "max_forks_repo_name": "gmuel/texlib", "max_forks_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1481481481, "max_line_length": 84, "alphanum_fraction": 0.6685552408, "num_tokens": 263, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070158103778, "lm_q2_score": 0.6513548646660543, "lm_q1q2_score": 0.592672361141862}}
{"text": "\\documentclass{article}\n    % General document formatting\n    \\usepackage[margin=0.7in]{geometry}\n    \\usepackage[parfill]{parskip}\n    \\usepackage[utf8]{inputenc}\n    \\usepackage[mathscr]{euscript}\n    \\usepackage{amsmath}\n    \\usepackage{amssymb}\n    \\usepackage{tikz}\n    \\usepackage{fancyhdr}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Edgar Jacob Rivera Rios - A01184125}\n\n\\renewcommand{\\labelenumi}{\\alph{enumi})}\n\\renewcommand{\\labelenumii}{\\roman{enumii})}\n\n\\begin{document}\n\\section*{1.3.1 Functions: Image, Closure}\n\\begin{enumerate}\n    \\item The floor function from $\\mathbb{R}_{+}$ into $\\mathbb{N}$ is defined by putting $\\lfloor{x}\\rfloor$ to be the largest integer less than or equal to $x$. What are the images under the floor function of the sets\n    \\begin{enumerate}\n        \\item $[0, 1] = \\{x \\in R: 0 \\leq x \\leq 1 \\}$\n        \\begin{equation*}\n            Image(Floor([0, 1])) = \\{0, 1\\}\n        \\end{equation*}\n        \\item $[0, 1) = \\{x \\in R: 0 \\leq x < 1 \\}$\n        \\begin{equation*}\n            Image(Floor([0, 1))) = \\{0\\}\n        \\end{equation*}\n        \\item $(0, 1] = \\{x \\in R: 0 < x \\leq 1 \\}$\n        \\begin{equation*}\n            Image(Floor((0, 1])) = \\{0, 1\\}\n        \\end{equation*}\n        \\item $(0, 1) = \\{x \\in R: 0 < x < 1 \\}$\n        \\begin{equation*}\n            Image(Floor((0, 1))) = \\{0\\}\n        \\end{equation*}\n    \\end{enumerate}\n    \\item Let $f : A\\rightarrow A$ be a function from set $A$ into itself. Show that for all $X \\subseteq A$, $f(X) \\subseteq f[X]$, and give a simple example of the failure of the converse inclusion.\n    \\begin{align*}\n        f[X] &= \\{a \\in A: f(x) = a, x \\in X\\}\\\\\n        \\forall x\\in X, x \\in A &\\implies f[X] = \\{a \\in A: f(x) = a, x \\in A\\}\\\\\n        &\\implies f(x) \\rightarrow A\n    \\end{align*}\n    \\item Show that when $f(A) \\subseteq A$ then $f[A] = A$\n    \\begin{align*}\n        f(A) \\subseteq A &\\implies  f[A] = \\{a \\in A: f(x) = a\\}\\\\\n        &\\implies \\forall a \\in f[A], a \\in A \n    \\end{align*}\n    \\item Show that for any partition of $A$, the function $f$ taking each element $a \\in A$ to its cell is a function on $A$ into the power set $\\mathscr{P}(A)$ of $A$ with the partition as its range.\n    \\begin{align*}\n        f: A &\\rightarrow \\mathscr{P}(A)\\\\\n        range(f) &= \\mathcal{P}(A)\\\\\n        \\forall X \\in \\mathcal{P}(A)\\\\\n    \\end{align*}\n    \\item Let $f: A \\rightarrow B$ be a function from set $A$ into set $B$. 
Recall the \u2018abstract inverse\u2019 function $f^{-1}: B \\rightarrow \\mathcal{P}(A)$ defined at the end of Slide 52 by putting $f^{-1}(b) = \\{a \\in A: f(a) = b\\}$ for each $b \\in B$.\n    \\begin{enumerate}\n        \\item Show that the collection of all sets for $b \\in f(A) \\subseteq B$ is a partition of $A$ in the sense defined in Chapter 2 of David Makinson\u2019s book.\n        \\begin{align*}\n            \\mathcal{P}(A) &= \\{ A_{i}: i \\in I, \\forall a_{i} \\in A_{i}, A_{i} \\neq \\emptyset, A_{i} \\cap A_{i'} = \\emptyset \\}\\\\\n            \\{a \\in A: f(a) = b, b \\in B\\} &= \\mathcal{P}(A)\\\\\n            \\forall a \\in A, \\exists b \\in B : f(a) = b &\\implies f^{-1}(b_{i}) \\cap f^{-1}(b_{i'}) = \\emptyset\\\\\n            &\\implies f^{-1}(B) = \\mathcal{P}(A)\n        \\end{align*}\n        \\item Is this still the case if we include in the collection the sets $f^{-1}(b)$ for $b \\in B \\setminus f(A)$?\\\\\n        It depends on whether $f$ is surjective: if it is, we would just add an empty set, which would not affect the partition; but if $f$ is not surjective, then $f^{-1}$ would be greater than $\\mathcal{P}(A)$.\n    \\end{enumerate}\n\\end{enumerate}\n\\section*{1.3.2 Injections, surjections, bijections}\n\\begin{enumerate}\n    \\item Is the floor function from $\\mathbb{R}_{+}$ into $\\mathbb{N}$ injective?\\\\\n    No, because several values from $\\mathbb{R}_{+}$ map to the same value in $\\mathbb{N}$; an example is that 1.01, 1.1, and 1.9 all map to 1.\n    \\item Show that the composition of two bijections is a bijection. You may make use of results of exercises in the previous slides on injectivity and surjectivity.\n    \\begin{align*}\n        f: A &\\rightarrow B & g: B &\\rightarrow C\\\\\n        \\forall a \\in A,&\\ \\exists b \\in B: f(a) = b & \\forall b \\in B,&\\ \\exists c \\in C: g(b) = c\\\\\n        f(a) = f(a') &\\iff a = a' & g(b) = g(b') &\\iff b = b'\n    \\end{align*}\n    \\begin{align*}\n        g \\circ f &\\implies \\forall a \\in A,\\ \\exists c \\in C: g(f(a)) = c\\\\\n        &\\implies g(f(a)) = g(f(a')) \\iff a = a'\\\\\n        &\\implies g \\circ f\\ is\\ bijective\n    \\end{align*}\n    \\item Use the equinumerosity principle to show that there is never any bijection between a finite set and any of its proper subsets.\n    \\begin{align*}\n        \\#(A) = \\#(B) &\\iff f: A \\rightarrow B\\ is\\ bijective\\\\\n        B \\subset A &\\implies B = \\{\\forall b \\in B, b \\in A \\},\\ A \\setminus B \\neq \\emptyset\\\\\n        B \\subset A &\\implies \\exists a \\in A: f(a) \\notin B\\\\\n        &\\implies f\\ is\\ not\\ bijective\\\\\n        &\\implies \\#(A) \\neq \\#(B)\n    \\end{align*}\n    \\item Give an example to show that there can be a bijection between an infinite set and certain of its proper subsets.\n    \\begin{align*}\n        A = \\mathbb{N}, &B = \\mathbb{N}_{+}\\\\\n        f: A \\rightarrow B &= f(a) = a + 1\\\\\n        f(0) = 1, f(1) = 2 &... f(n) = n + 1\n    \\end{align*}\n    \\item Use the principle of comparison to show that for finite sets $A$, $B$, if there are injective functions $f : A \\rightarrow B$ and $g: B \\rightarrow A$, then there is a bijection from $A$ to $B$. 
(Hint: Consider the superposition $g \\circ f$ and establish that it provides for a desired bijection).\n    \\begin{align*}\n        f: A &\\rightarrow B & g: B &\\rightarrow A\\\\\n        \\forall a \\in A,&\\ \\exists b \\in B: f(a) = b & \\forall b \\in B,&\\ \\exists a \\in A: g(b) = a\\\\\n        f(a) = f(a') &\\iff a = a' & g(b) = g(b') &\\iff b = b'\n    \\end{align*}\n    \\begin{align*}\n        g \\circ f &: A \\rightarrow A\\\\\n        f \\circ g &: B \\rightarrow B\\\\\n        &\\implies \\forall a \\in A,\\ \\exists b \\in B\\\\\n        &\\implies A\\ and\\ B\\ are\\ bijective\n    \\end{align*}\n\\end{enumerate}\n\\newpage\n\n\\section*{1.3.3 Pigeonhole principle}\n\\begin{enumerate}\n    \\item If a set $A$ is partitioned into n cells, how many distinct elements of $A$ need to be selected to guarantee that at least two of them are in the same cell?\\\\\n    $n + 1$\n    \\item Let $K =\\{1,2,3,4,5,6,7,8\\}$. How many distinct numbers must be selected from $K$ to guarantee that there are two of them that sum to 9? (Hint: Let $A$ be the set of all unordered pairs $(x,y)$ with $x,y \\in K$ and $x+y = 9$. Check that this set forms a partition of K and apply the preceding part of the exercise).\n    \\begin{align*}\n        A &= \\{{x, y}: x, y \\in K, x + y = 9 \\}\\\\\n        A &= \\{\\{1, 8\\}, \\{2, 7\\}, \\{3, 6\\}, \\{4, 5\\}\\}\\\\\n        A &= \\mathcal{P}(K)\\\\\n        \\#(A) = 4 &\\implies You\\ need\\ 5\\ numbers\n    \\end{align*}\n\\end{enumerate}\n\n\\section*{1.3.4 Handy functions}\n\\begin{enumerate}\n    \\item Let $f: A \\rightarrow B$ and $g: B \\rightarrow C$.\n    \\begin{enumerate}\n        \\item Show that if at least one of $f$, $g$ is a constant function, then $g \\circ f : A \\rightarrow C$ is a constant function.\n        \\begin{align*}\n            f\\ is\\ constant &\\implies \\exists b \\in B \\forall a \\in A: f(a) = b\\\\\n            &\\implies f(A) = b\\\\\n            g \\circ f &\\implies \\exists c \\in C \\forall a \\in A: g(f(a)) = c\n        \\end{align*}\n        \\item If $g \\circ f : A \\rightarrow C$ is a constant function, does it follow that at least one of $f$,$g$ is a constant function (give a verification or a counterexample).\n        \\begin{align*}\n            g \\circ f &\\implies \\exists c \\in C \\forall a \\in A: g(f(a)) = c\\\\\n            &\\implies f(A) = b \\vee g(B) = c\\\\\n            &\\implies\\exists b \\in B \\forall a \\in A: f(a) = b \\vee \\exists c \\in C \\forall b \\in B: g(b) = c\\\\\n            &\\implies f\\ is\\ constant\\ or\\ g\\ is\\ constant\n        \\end{align*}\n    \\end{enumerate}\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "4dd60cbc44fd15c61a4583aa117a14184729695d", "size": 7923, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/Homework1_3.tex", "max_stars_repo_name": "edjacob25/Applied-Maths", "max_stars_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/Homework1_3.tex", "max_issues_repo_name": "edjacob25/Applied-Maths", "max_issues_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/Homework1_3.tex", "max_forks_repo_name": 
"edjacob25/Applied-Maths", "max_forks_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.8979591837, "max_line_length": 325, "alphanum_fraction": 0.5698599016, "num_tokens": 2799, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599562, "lm_q2_score": 0.8397339736884712, "lm_q1q2_score": 0.5926716556019681}}
{"text": "\\subsection{States, hashes, messages, representatives}\n\nLight node protocol states are also sets of messages,\neach message being a triple $(c, v, j)$, where:\n\\begin{itemize}\n    \\item $c$ is a (proposed) consensus value;\n    \\item $v$ identifies the message sender;\n    \\item $j$, the justification, is the {\\em set of hashes} of all messages\n        of the protocol state seen by the sender at the time of message\n        sending.\n\\end{itemize}\n\nThe fact that justifications are sets of hashes allows for a simpler,\nnon-recursive definition of states. To begin, the type of hashes can be any\ntotally ordered set. In turn, a justification is a list of hashes, and\na message is a triple:\n\n\\begin{coq}\nVariable hash : Type.\nVariable (about_H : `{StrictlyComparable hash}).\nDefinition justification_type : Type := list hash.\nDefinition message : Type := C * V * justification_type.\n\\end{coq}\n\nThe total order on hashes induces a total lexicographic order on\njustifications, which can be extended to messages.\n\nWe can therefore work with sorted lists of hashes as representatives\nfor sets of hashes, reducing equality between justifications to\nsyntactic equality.\n\nWe define state equality set equality, that is,\ndouble inclusion between the sets of messages representing states.\n\nWe assume an injective function from messages to hashes:\n\\begin{coq}\nParameters (hash_message : message -> hash)\n           (hash_message_injective : Injective hash_message).\n\\end{coq}\nThis allows us to recursively define a function \\verb\"hash_state\" taking\nstates and returning sorted lists of hashes, i.e., justifications, with the\nproperty that two justifications are equal iff they belong to\nstates which are equal as sets:\n\n\\begin{coq}\nLemma hash_state_injective : forall sigma1 sigma2,\n  hash_state sigma1 = hash_state sigma2\n  <->\n  set_eq sigma1 sigma2.\n\\end{coq}\n\nThis allows for the following inductive definition of protocol states:\n\n\\begin{coq}\nInductive protocol_state : state -> Prop :=\n| protocol_state_nil : protocol_state state0\n| protocol_state_cons : forall (j : state),\n    protocol_state j ->\n    forall (c : C),\n      valid_estimate c j ->\n      forall (v : V) (s : state),\n        In (c, v, hash_state j) s ->\n        protocol_state (set_remove compare_eq_dec\n        \t\t\t\t\t\t\t\t\t\t\t\t\t(c, v, hash_state j) s) ->\n        NoDup s ->\n        not_heavy s ->\n        protocol_state s.\n\\end{coq}\n\nThe above definition reads as:\n\\begin{itemize}\n    \\item a protocol state is either empty; or\n    \\item it is a duplicate-free state \\verb|s| that does not exceed the fault tolerance threshold for which\n        there exists a consensus value \\verb|c|, a sender \\verb|v|, and a state \\verb|j| such that:\n        \\begin{itemize}\n            \\item \\verb|j| is a protocol state;\n            \\item \\verb|c| is a consensus value which the estimator agrees on for \\verb|j|;\n            \\item \\verb|(c,v, hash_state j)| belongs to \\verb|s|;\n            \\item The state obtained from \\verb|s| by removing\n                \\verb|(c,v, hash_state j)| is a protocol state.\n        \\end{itemize}\n\\end{itemize}", "meta": {"hexsha": "bc1be4a3d6182977192dc50c9ddbb953d7c71c30", "size": 3082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/light.tex", "max_stars_repo_name": "runtimeverification/casper-cbc-proofs", "max_stars_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_stars_repo_licenses": ["NCSA"], 
"max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-06-16T15:57:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-23T11:21:07.000Z", "max_issues_repo_path": "report/light.tex", "max_issues_repo_name": "runtimeverification/casper-cbc-proofs", "max_issues_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_issues_repo_licenses": ["NCSA"], "max_issues_count": 105, "max_issues_repo_issues_event_min_datetime": "2019-11-26T09:22:58.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-02T10:00:11.000Z", "max_forks_repo_path": "report/light.tex", "max_forks_repo_name": "runtimeverification/casper-cbc-proofs", "max_forks_repo_head_hexsha": "8c4985f0921fea0a38c05e72a47364471164ab72", "max_forks_repo_licenses": ["NCSA"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-12-17T07:48:58.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-22T08:51:55.000Z", "avg_line_length": 37.5853658537, "max_line_length": 108, "alphanum_fraction": 0.7070084361, "num_tokens": 758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.839733955639775, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5926716480601856}}
{"text": "\\section{Examples of Linearly Separable Sets}\nIn this section, we shall give several examples of linearly separable sets.  We will give a detailed description of these sets and list the web pages that contain these sets. \n\n\\subsection{L-MNIST sets}\nThe original MNIST data set contains the pictures of single digits. The size of each digit picture is 28*28 and the label is the digit from 0 to 9. There are 60000 training data and 10000 test data in all. The accuracy of the state-of-art CNN model applied on the MNIST is higher than 99\\%. If we apply standard least square to the original MNIST data, we only get 86\\% accuracy. If we apply logistic regression to the original MNIST data, we get 92\\% accuracy. This means that the original MNIST is linearly separable. \n\nThis collection of linearly separable L-MNIST data set are derived from the MNIST data basis after applying some appropriate DNN model. The fully connected neural network contains an input layer, a hidden layer and an output layer. The corresponding widths are 784, 500 and 10. We use this DNN to train all the data (70000) in MNIST for 50 epochs with cross entropy as the loss function and Adam as the optimization method. Then the training accuracy reaches 100\\%. We save this model and define the output of the hidden layer as the linearly separable L-MNIST data set. \n\nTo verify this data set is indeed linearly separable, we can use a fully connected neural network with input layer (500) and output layer (10) only without activation. So this is a straightforward linear combination instead of  logistic regression. Due to the weight initialization, the immediate classification accuracy may be low. But after training with this DNN for only 3 epochs, the classification accuracy for all the data in MNIST is already 100\\%.  As a comparison, if we apply standard least square to the linearly separable MNIST data, we get 98.38\\% accuracy.\n\nWe have also uploaded the linearly separable MNIST data set to \\\\\nhttp://multigrid.org/wiki/index.php/deep-neural-network/\n\\subsection{CIFAR sets}\n", "meta": {"hexsha": "ebe486166c31ac961c5b441eacfb2592dc3e7d29", "size": 2064, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/LinearSets.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/LinearSets.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/LinearSets.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 147.4285714286, "max_line_length": 571, "alphanum_fraction": 0.7950581395, "num_tokens": 478, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5926516992592741}}
{"text": "% When using TeXShop on the Mac, let it know the root document. The following must be one of the first 20 lines.\n% !TEX root = ../design.tex\n\n\\chapter[Cox Proportional-Hazards]{Cox Proportional-Hazards}\n\n\\section{Introduction}\n% Abstract. What is the problem we want to solve?\nProportional-Hazard models enable comparison of \\textit{survival models}.\nSurvival models are functions describing the probability of an one-item event\n(prototypically, this event is death) with respect to time.  The interval of\ntime before death occurs is the \\textit{survival time}.   Let $T$ be a random\nvariable representing the survival time, with a cumulative probability function\n$P(t)$.  Informally, $P(t)$ represents the probability that death has happened\nbefore time $t$.\n\nAn equivalent formation is the \\textit{survival function} $S(t)$, defined as\n$S(t) \\equiv 1 - P(t)$.  Informally, this is the probability that death hasn't\nhappened by time $t$.  The \\textit{hazard function} $h(t)$ which assesses the\ninstantaneous risk of demise at time $t$, conditional on survival upto time $t$.\n\\begin{align}\n    h(t) &= \\lim_{\\Delta t \\rightarrow 0} \\frac{p\\left(t \\le T < t + \\Delta t | T \\ge t \\right)}\n                                              {\\Delta t} \\\\\n         &= \\lim_{\\Delta t \\rightarrow 0}  \\frac{1}{\\Delta t}\\frac{p(T < t + \\Delta t) - p(T < t)}{P(T \\ge t)}\n\\end{align}\n\nThe relationship between $h(t)$ and $S(t)$, using $S(t) = 1 - p(T < t)$ is\n\\begin{align}\nh(t) & = \\lim_{\\Delta t \\rightarrow 0}  \\frac{1}{\\Delta t} \\frac{\\left(S(t + \\Delta_t) - S(t)\\right)}{-S(t)}\\\\\n     & = \\frac{-S'(t)}{S(t)}\n\\end{align}\nwhere\n$$S'(t) = \\lim_{\\Delta t \\rightarrow 0}  \\frac{S(t + \\Delta t) - S(t)}{\\Delta t}$$\ndenotes the derivative of $S(t)$.\n\nIn the simplest case, the Cox model assumes that $h(t)$ is\n\\begin{equation}\nh(t) = e^{\\alpha(t)}\n\\end{equation}\n\nwhere exp$(\\alpha(t))$ is the \\textit{baseline function}, which depends only on\n$t$.  However, in many applications, the probability of death may depend on more\nthan just $t$.  Other covariates, such as age or weight, may be important.  Let\n$x_i$ denote the observed value of the  $i$th covariate, then the Cox model is\nwritten as\n\n\\begin{equation}\nh(t) = e^{\\alpha(t)} e^{\\beta_1 x_1 + \\beta_2 x_2 + \\dots + \\beta_m x_m} = e^{\\alpha(t)} e^{\\bold{\\beta^T x}}\n\\end{equation}\nwhere $\\beta_i$ is the coefficient associated with the $i$th covariate.\n\nMany applications take values from multiple observations, measuring the values\nof $x_i$ for each observation.  %The $j$th observation has the hazard function\n\n%\\begin{equation}\n%h_j(t) = e^{\\alpha(t)} e^{\\beta_1 x_{j1} + \\beta_2 x_{j2}+ \\dots + \\beta_k x_{jk}}= e^{\\alpha(t)} e^{\\bold{\\beta^T x_j}}.\n%\\end{equation}\n\nIn the \\textit{proportional-hazard model}, the hazard functions of two\nobservations $j$ and $k$ are compared. The ratio of the two is\n\\begin{equation}\n\\frac{h_j(t)}{h_k(t)} = \\frac{e^{\\alpha(t)} e^{\\bold{\\beta^T x_j}} }{e^{\\alpha(t)} e^{\\bold{\\beta^T x_k}} } = \\frac{e^{\\bold{\\beta^T x_j} }}{ e^{\\bold{\\beta^T x_k}}}\n\\end{equation}\n\nThe critical idea here is that the ratio of the two hazard functions is\ncompletely independent of the baseline function.  
This allows meaningful\ncomparisons between samples without knowing the baseline function, which may be\ndifficult to measure or specify.\n\n% SECTION:  Applications\n\\section{Applications}\\label{cox:Application}\nGenerally, applications start with a list of $n$ observations, each with $m$\ncovariates and a time of death.  From this $n \\times (m+1)$ matrix, we would\nlike to derive the correlation between the covariates and the hazard function.\nThis amounts to finding the values of  $\\beta$.\n\nThe values of $\\beta$ can be estimated with the method of \\textit{partial\nlikelihood}.  This method begins by sorting the observations by time of death\ninto a list $[t_1, t_2, \\dots, t_n]$ such that $t_i \\le t_j : i < j\\ \\forall\ni,j$.  For any time $t_i$, let $R(t_i)$ be the set of observations still alive\nat time $t_i$.\n\nGiven that there was a death at time $t_i$ and observation $k$ was alive prior\nto $t_i$, the probability that the death happened to observation $k$  is\n\\begin{equation}\\label{cox:equ:death-prob}\n\\Pr(T_k = t_i | R(t_i)) =  \\frac{e^{\\bold{\\beta^T x_k} }}{ \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j}}}.\n\\end{equation}\n\nThe \\textit{partial likelihood function} can now be generated as the product of conditional probabilities.\nMore formally,\n\\begin{equation}\\label{cox:equ:likelihood}\n\\mathcal{L} = \\prod_{i = 1}^n \\left(  \\frac{e^{\\bold{\\beta^T x_i} }}{ \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j}}} \\right).\n\\end{equation}\n\n The log-likelihood form of this equation is\n\\begin{equation}\\label{cox:equ:LLH}\nL = \\sum_{i = 1}^n \\left[  \\bold{\\beta^T x_i} - \\log\\left(\\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } \\right) \\right].\n\\end{equation}\n\nAn estimation of $\\beta$ can be found by simply maximizing this log-likelihood.\nTo maximize the likelihood, it helps to have the derivative of equation\n\\ref{cox:equ:LLH}, which is\n\\begin{equation}\\label{cox:equ:LLH derivative}\n\\frac{\\partial L}{\\partial \\beta_k} = \\sum_{i = 1}^n \\left( x_{ik} - \\frac{\\sum_{j \\in R(t_i)} x_{jk} e^{\\bold{\\beta^T x_j} } }{\\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } } \\right).\n\\end{equation}\nIt follows that the second derivative is\n\\begin{equation}\\label{cox:equ:LLH second derivative}\n\\frac{\\partial^2 L}{\\partial \\beta_k \\beta_l} = \\sum_{i = 1}^n\n            \\left(\n                \\frac{\\left(  \\sum_{j \\in R(t_i)} x_{jk} e^{\\bold{\\beta^T x_j} } \\right)\n                            \\left(  \\sum_{j \\in R(t_i)} x_{jl} e^{\\bold{\\beta^T x_j} } \\right)}\n                     {\\left( \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } \\right)^2 } -\n                \\frac{  \\sum_{j \\in R(t_i)} x_{jk}x_{jl} e^{\\bold{\\beta^T x_j} } }\n                     {\\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } }\n            \\right).\n\\end{equation}\n\n% Incomplete Data\n\\subsection{Incomplete Data}\nFrequently, not every observation will have an associated time of death.\nTypically, this arises when the period of observation terminates before the\nentire population being studied has died.  This is known as \\textit{censoring}\nthe data.  
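\nBefore accounting for censoring, Equations \\ref{cox:equ:LLH} and \\ref{cox:equ:LLH derivative} can be prototyped directly (a minimal numpy sketch assuming fully observed death times sorted in ascending order; the helper name is hypothetical, and this is illustrative rather than MADlib's implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef cox_loglik_and_grad(x, beta):\n    # x: (n, m) covariate rows sorted by ascending time of death.\n    # Risk set R(t_i) = rows i..n-1, so suffix sums give the denominators.\n    w = np.exp(x @ beta)                   # e^{beta^T x_j} for each row\n    S = np.cumsum(w[::-1])[::-1]           # S_i = sum of w over R(t_i)\n    H = np.cumsum((w[:, None] * x)[::-1], axis=0)[::-1]  # sum of w_j x_j\n    loglik = np.sum(x @ beta - np.log(S))\n    grad = np.sum(x - H / S[:, None], axis=0)\n    return loglik, grad\n\\end{verbatim}\n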
To account for this, an additional indicator variable is introduced\n$\\delta_i$, which is set to 1 if the $i$th observation has an associated time of\ndeath, and 0 otherwise.\n\nIncorporating this indicator variable into equation \\ref{cox:equ:likelihood} gives\n\\begin{equation}\\label{cox:equ:likelihood-censoring}\n\\mathcal{L} = \\prod_{i = 1}^n \\left(  \\frac{e^{\\bold{\\beta^T x_i} }}{ \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j}}} \\right)^{\\delta_i}.\n\\end{equation}\nThe appropriate changes to the LLH function and its derivatives are trivial.\n\n\n% SECTION: Implementation\n\\subsection{Partition and aggregation of the data to speed up computation}\n\nIn order to speed up the computation, we first partition the data and\naggregate each piece of the data into one big row. During the\ncomputation, the whole big row is loaded into memory for\nprocessing, which speeds up the computation.\n\nWhen we partition the data, we want to (1) keep the sorted descending\norder of the time column, (2) make sure that each piece has\napproximately the same amount of data so that the work load is even,\nand (3) do this as fast as possible.\n\nOur solution is to first sample a certain amount of the data, and then\ncompute the approximate break points using the sampled data. The\nsampled data should be small enough to load into memory, and also\nlarge enough so that the break points can be computed relatively\naccurately.\n\nAfter we have partitioned the data, we aggregate each partition into\none big row. The order of the data should be kept during the\naggregation.\n\nThen we use the sequential algorithm described next to process\nthe new data row by row in the order of the time column. Since each\nbig row contains many original rows, and we deal with them all in\nmemory, this can speed up the computation.\n
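A minimal sketch of the break-point computation (illustrative Python with a hypothetical helper name; the actual implementation samples rows of the table):\n\\begin{verbatim}\nimport numpy as np\n\ndef break_points(sample_times, num_pieces):\n    # Quantiles of a small random sample of the time column give\n    # approximately even-sized, time-ordered partitions; reverse the\n    # cut points if the table is kept in descending time order.\n    qs = np.linspace(0, 1, num_pieces + 1)[1:-1]\n    return np.quantile(sample_times, qs)\n\\end{verbatim}\n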
\\subsection{Implementation of Newton's Method}\nNewton's method is the most common choice for estimating $\\beta$ by maximizing\nthe log-likelihood \\ref{cox:equ:LLH} using the following update rule:\n\\begin{equation}\n\\beta_{k} = \\beta_{k} - \\alpha_k \\left( {\\nabla^2 L}^{-1} \\nabla L \\right)\n\\end{equation}\nwhere $\\alpha_k$ is a positive scalar denoting the step length in the Newton\ndirection ${\\nabla^2 L}^{-1} \\nabla L$ determined using the first and second\nderivative information. We would like to emphasize that the problems we are\ndesigning this system for are those with many records and few features, i.e. $n\n\\gg m$, thereby keeping the inverse operation on the Hessian matrix relatively\ncheap.\n\nThe gradient and Hessian matrix may be hard to parallelize, which reduces the\nadvantage of having a large number of observations. To elaborate, consider equations\n\\ref{cox:equ:LLH derivative} and \\ref{cox:equ:LLH second derivative}, which are\nsums with independent terms. One might think it is natural to reduce the\ncomputational cost by parallelization. Efficient parallelization may be achieved if\neach term could be computed independently in parallel by a set of worker tasks\nand a master task could collect the output from each worker node and sum them\ntogether. However, this might not work well in this setting. To see why,\nconsider parallelizing equation \\ref{cox:equ:LLH second derivative}. Each\nworker task is given one term in the sum, which looks like\n\\begin{equation}\n \\frac{\\left(  \\sum_{j \\in R(t_i)} x_{jk} e^{\\bold{\\beta^T x_j} } \\right) \\left(  \\sum_{j \\in R(t_i)} x_{jl} e^{\\bold{\\beta^T x_j} } \\right)}{\\left( \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } \\right)^2 }   -  \\frac{  \\sum_{j \\in R(t_i)} x_{jk}x_{jl} e^{\\bold{\\beta^T x_j} } }{\\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j} } }.\n\\end{equation}\n\nNote that the sums in the numerator run over all the data points in the\ndata matrix. A similar issue is encountered while computing the first\nderivative terms as defined in \\ref{cox:equ:LLH derivative}. However, we note\nthat this sum has a structure that allows it to be computed in linear time (with\nrespect to the number of data points) using the following quantities.\n\\begin{align}\nH_{i} &=   \\sum_{j \\in R(t_i)} x_{j} e^{\\bold{\\beta^T x_j}}\\\\\nS_{i}&=   \\sum_{j \\in R(t_i)} e^{\\bold{\\beta^T x_j}} \\\\\nV_{i}&=   \\sum_{j \\in R(t_i)} x_{j}x_{j}^T e^{\\bold{\\beta^T x_j} }\n\\end{align}\n\nNote that $H_{i}$ is a column vector with $m$ elements ($H_{i}\\in\n\\mathbb{R}^m$), $S_{i}$ is a scalar, and $V_{i}$ is an $m \\times m$ matrix.\nWe can now write the first derivative of the maximum likelihood estimator,\ndefined in Equation \\ref{cox:equ:LLH derivative}, as\n\\begin{align}\n\\frac{\\partial L}{\\partial \\beta_k} = \\sum_{i = 1}^n \\left( x_{i} - \\frac{H_{i} }{ S_{i}}  \\right)\n\\end{align}\nwhile the second derivative, defined in Equation \\ref{cox:equ:LLH second\nderivative}, can be reduced to\n\n\\begin{align}\n\\frac{\\partial^2 L}{\\partial \\beta_k \\beta_l} = \\sum_{i = 1}^n \\left( \\frac{H_{i}H_{i}^T }{ S_{i}^2 } -  \\frac{V_{i}}{ S_{i} } \\right)\n\\end{align}\nSince we assume that the data points are sorted in increasing order, i.e.\n$R(t_i) = \\{i, i+1, \\ldots, n \\}$, we can calculate the above summations as\n\\begin{align}\nH_{i}& =   H_{i+1} +  x_{i} e^{\\bold{\\beta^T x_i}} \\label{eq:numerator_avg}\\\\\nS_i & = S_{i+1} + e^{\\bold{\\beta^T x_i}} \\label{eq:denominator_avg}\\\\\nV_i & = V_{i+1} +  x_{i}x_{i}^T e^{\\bold{\\beta^T x_i}} \\label{eq:hessian}.\n\\end{align}\nWith this recurrence relationship, the Hessian matrix and the gradient direction\ncan be computed in linear time.\n\n%In addition to the computational expense of computing the Hessian, the inverse\n%Hessian must be computed as well.  Unfortunately, matrix inversion is an\n%expensive operation, and  cannot be parallelized.\n\n\n% SECTION: Stratification Support\n\\section{Stratification Support}\\label{cox:stratified}\nA crucial component of the Cox Proportional Hazards model is the proportional hazards assumption:\nthe hazard for a given individual is a fixed proportion of the hazard for any\nother individual in the same stratum, and the ratio of the hazards is constant\nacross time.\n\nIn actual use cases, the proportional hazard assumption may not be satisfied if\nwe use all independent variables as covariates. A stratified Cox regression\nmodel may then be useful. It offers a way to choose a subset of\nindependent variables as covariates while still taking the remaining\nindependent variables into account. The stratified Cox regression is available\nin both R~\\cite{r-cox} and Stata~\\cite{stata-cox}.\n\nStratification is used as shorthand for building a Cox model that allows for\nmore than one stratum, and hence, allows for more than one baseline hazard\nfunction.\n
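In terms of the earlier sketch, the stratified model keeps a single coefficient vector but restricts each risk set to its own stratum, so the objective is just a sum of per-stratum partial log-likelihoods (again illustrative only, reusing the hypothetical cox_loglik_and_grad helper):\n\\begin{verbatim}\n# strata: dict mapping stratum id -> time-sorted covariate matrix.\n# One baseline per stratum, one shared beta across all strata.\ndef stratified_loglik(strata, beta):\n    return sum(cox_loglik_and_grad(x_k, beta)[0]\n               for x_k in strata.values())\n\\end{verbatim}\n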
Stratification provides two pieces of key, flexible functionality for\nthe end user of Cox models:\n\\begin{enumerate}\n    \\item It allows a categorical variable Z to be appropriately accounted for in\n    the model without estimating its predictive/associated impact on the\n    response variable (i.e. without estimating Z's ``coefficient'').\n    \\item It accommodates a categorical variable Z that is predictive of/associated with the response\n    variable but may not satisfy the proportional hazards assumption.\n\\end{enumerate}\n\nTo explicitly clarify how stratification differs from grouping support:\n\\begin{itemize}\n    \\item Grouping by a categorical column would build a completely separate\n    Cox model for each value of the categorical column, where the baseline\n    hazards for each value of the categorical column would be different and\n    the estimated coefficient values for each explanatory variable would be\n    \\textbf{different} for each value of the categorical column.\n    \\item Stratifying by a categorical column would build a single\n    common Cox model, where the baseline hazards for each value of the\n    categorical column would be different, but the estimated coefficient values\n    for each explanatory variable would be the \\textbf{same} for each value of the stratum.\n\\end{itemize}\n\nIt is valuable to emphasize that all strata share all coefficients, and that the\nonly difference between strata is the baseline hazard.  In other words,\ncoefficients for all non-strata explanatory variables are identical across the\ndifferent strata.\n\n\\subsection{Estimating A Stratified Cox Model}\\label{cox:estimate-stratified}\nThe parameter estimation is done by maximizing the product of likelihoods, each\nfrom a stratum~\\cite{stratified-ethz-slides}.\n\nGiven $n$ observations, each with $m$ covariates and each in one of $K$\nstrata~\\footnote{Note that this does not mean that we have $K$ variables other\nthan the $m$ covariates, but $K$ groups classified by strata ID variables.}, let\n$\\mathit{ST}_{k}$ denote the set of observations in the $k$-th stratum.\n\nBecause, as an objective function, the sum of log-likelihoods is equivalent to\nthe product of likelihoods, according to Equation \\ref{cox:equ:LLH}, we have the\nlog-likelihood associated with the $k$-th stratum,\n\\begin{equation}\\label{cox:equ:obj-each-stratum}L_{k} = \\sum_{i \\in \\mathit{ST}_{k}} \\left[  \\bold{\\beta^T x_i} - \\log\\left(\\sum_{j \\in R_k(t_i)} e^{\\bold{\\beta^T x_j} } \\right) \\right],\n\\end{equation}\nwhere $R_k(t_i)$ is the set of observations in $\\mathit{ST}_{k}$ that are still alive at time $t_i$.\n\nTherefore, the objective function of stratified Cox regression can be expressed as\n\\begin{equation}\\label{cox:equ:stratified-obj}\nL^{\\mathit{stratified}} = \\sum_{k=1}^{K} L_{k} = \\sum_{k=1}^{K} \\sum_{i \\in \\mathit{ST}_{k}} \\left[  \\bold{\\beta^T x_i} - \\log\\left(\\sum_{j \\in R_k(t_i)} e^{\\bold{\\beta^T x_j} } \\right) \\right].\n\\end{equation}\nThe appropriate changes to the gradient, Hessian, and censoring are trivial.\n\n\\subsection{When Do We Need A Stratified Cox Model?}\\label{cox:diagnostic}\nGeneral practice in standard statistical packages (e.g. 
R, SAS) is to\nmake use of the Schoenfeld residuals to gauge whether or not the\nproportional hazards assumption has been violated.\n\n% To this end, we need to investigate the implementation of R's\n% \\texttt{cox.zph()} function for MADlib.\n% There are several parametric options available in the\n% \\texttt{cox.zph()} function, but we prioritize implementation of\n% \\texttt{cox.zph()} with its default parameter settings.\n% The output of \\texttt{cox.zph()} contains 3 numbers for each\n% covariate, scaled Schoenfeld residual, chi-square, and p-value.\n% The actual formulas of computing these are still to be investigated.\n\nThe Schoenfeld residuals are centered on zero and should be\nindependent of time if the PHA (proportional hazards assumption) is\ntrue. Deviations from this, i.e. residuals that exhibit some trend in\ntime, indicate that the PHA is violated.\n\nThe Schoenfeld residuals, at the time when a failure or death occurs, are\ndefined by the difference between the observed and expected covariate values at\nthat time. Also note that the Schoenfeld residuals are zero for censored\nobservations.\n\\begin{equation}\n\\hat{\\vec{r}}_i = \\vec{x}_i - E[\\vec{x}_i]\\ ,\\mbox{ only for\n}\\delta_i=1\\ ,\n\\end{equation}\n\nTo compute the expected values at time $t_{i}$, we use the probability distribution given by Equation \\ref{cox:equ:death-prob}.\n\\begin{eqnarray}\nE[\\vec{x}_i] &=& \\sum_{k\\in \\mathcal{R}(t_i)}\\vec{x}_k p(k\\mbox{ dies at\n}t_i) \\nonumber\\\\\n&=& \\frac{\\sum_{k\\in \\mathcal{R}(t_i)}\\vec{x}_k\n  e^{\\vec{\\beta}\\vec{x}_k}}{\\sum_{j\\in\n    \\mathcal{R}(t_i)}e^{\\vec{\\beta}\\vec{x}_j} },\n\\end{eqnarray}\nwhere the values of $\\vec{\\beta}$ are the fitted coefficients, and $\\mathcal{R}(t)$ is the set of individuals that are still alive at time $t$.\n\n\\subsubsection{Scaled Schoenfeld Residuals}\\label{cox:scaled-residual}\nSuggested by Grambsch and Therneau \\cite{grambsch1994proportional} and also\nfollowed by statistical software packages \\cite{testph}, scaling the Schoenfeld\nresiduals by an estimator of their variance yields greater diagnostic power. The\nscaled Schoenfeld residual is\n\\begin{eqnarray}\n\\hat{\\vec{r}}^{*}_i &=& [\\mathit{Var}(\\hat{\\vec{r}}_i)]^{-1} \\hat{\\vec{r}}_i\\\\\n&\\approx& m \\mathit{Var}(\\hat{\\vec{\\beta}}) \\hat{\\vec{r}}_i  ,\n\\end{eqnarray}\nwhere $\\mathit{Var}$ denotes a covariance matrix, $m$ is the\nnumber of uncensored survival times, $\\hat{\\vec{r}}_i$ is a length-$n$\nvector, and $\\mathit{Var}(\\hat{\\vec{\\beta}})$ is an $n\\times n$\nmatrix, where $n$ is\nthe number of features.\n\n\\subsubsection{Transformation of Time}\\label{cox:transform}\nTransformation of the time values often helps the analysis of the correlation\nbetween the scaled Schoenfeld residuals and time. Common transformation methods\ninclude ranking, computing log, and Kaplan-Meier's method. 
(We don't fully\nunderstand the last one yet.)\\footnote{The \\texttt{cox.zph()} function in R allows different options for\ntransformation of times: \\texttt{``km''}, \\texttt{``rank''},\n\\texttt{``log''} and \\texttt{``identity''}.\n\\texttt{``km''} stands for Kaplan-Meier's method.\n\\texttt{``rank''} takes the ranks of the times instead of the times.\n\\texttt{``log''} uses the logarithm of the times.\nAnd \\texttt{``identity''} does nothing and directly uses the times.\\cite{cox-zph}}\n\n%To quantify whether there is a trend in the Schoenfeld residuals over\n%time, we can compute the correlation between the scaled residuals and the\n%ranks of the times, where the scaled residuals are given by\n%\\begin{equation}\n%\\hat{\\vec{r}}_i = m \\widehat{\\mbox{Var}}(\\vec{\\beta}) \\vec{r}_i\\ ,\n%\\end{equation}\n%where $m$ is the number of uncensored survival times, and\n%$\\widehat{\\mbox{Var}}(\\vec{\\beta})$ is the estimated variance matrix for\n%the fitted coefficients.\n\n\\subsubsection{$p$-values}\nThe process for computing the $p$-values of the correlation in R's\n\\texttt{survival} package is given\nbelow.\n\nLet $m$ be the number of uncensored data points, $n$ be the number of\nfeatures, $\\hat{\\vec{t}}$ be the transformed survival time, which is a\nlength-$m$ vector,\n$\\mathit{Var}(\\hat{\\vec{\\beta}})$ be the variance matrix for the\nfitted coefficients, which is an $n\\times n$ matrix, and $\\hat{\\vec{R}}$\nbe the matrix of unscaled Schoenfeld residuals, which is an $m\\times n$ matrix.\n\\begin{eqnarray}\n\\vec{w} &=& \\hat{\\vec{t}} - \\bar{\\hat{t}}\\ , \\mbox{ a length-}m \\mbox{\n  vector}\\ ,\\\\\n\\vec{v} &=& m\\,\\vec{w}^T\\hat{\\vec{R}}\\mathit{Var}(\\hat{\\vec{\\beta}})\\ , \\mbox{\na length-}n\\mbox{ vector}\\\\\nz_i &=& \\frac{1}{m\\,\\vec{w}^T\\vec{w}}\n\\cdot\\frac{v^2_i}{\\left[\\mathit{Var}(\\hat{\\vec{\\beta}})\\right]_{ii}}\\\n, \\mbox{ for }i=1,\\dots,n\\ ,\\\\\np_i &=& 1 - \\chi_1^2(z_i), \\mbox{ for }i=1,\\dots,n\\ .\n\\end{eqnarray}\nHere $\\bar{\\hat{t}}$ is the average of the transformed survival\ntimes, $z_i$ is the z-statistic for the $i$-th coefficient, and $p_i$ is the\np-value for the $i$-th coefficient. We need a separate function to compute\n$\\vec{w}$, but both $\\vec{v}$ and $\\vec{w}^T\\vec{w}$ can be computed\nin an aggregate function. $z_i$ and $p_i$ can be computed in the final\nfunction of that aggregate function. $\\chi^2_1(\\cdot)$ is the\nchi-square distribution function with one degree of freedom.\n\n\\subsection{Implementation}\nWe can use the iteration equations Eqs. (\\ref{eq:numerator_avg},\n \\ref{eq:denominator_avg}, \\ref{eq:hessian}) to compute the residuals and the\nHessian, which is needed for the variance matrix.\n\nThe current implementation uses an ordered aggregate on the data to\ncompute these quantities, which is not parallel. To enable a distributed\nsolution we can use the ``GROUP BY'' functionality provided by SQL, allowing the\nindependent computation of the log-likelihood function in each distributed segment\ncorresponding to each stratum (`transition' function). 
These values can then be added\nacross the segments (`merge' function), with the gradient for the parameter computed\non the final sum (`final' function).\n\n\\subsection{Resolving Ties}\nIn ``coxph\\_train'', the Breslow method is used to resolve ties, which uses\nthe following partial likelihood~\\cite{hosmer2011applied}:\n\\begin{eqnarray}\n  L &=& \\sum_{i=1}^{m}\\left[ \\vec{\\beta}^{T} \\vec{x}_{i+} -\n    d_i \\log\\left( \\sum_{j \\in R(t_i)} e^{\\vec{\\beta}^T\n        \\vec{x}_j}\\right) \\right]\\\\\n  &=& \\sum_{i=1}^{m}\\left[ \\vec{\\beta}^{T} \\vec{x}_i - \\log\\left( \\sum_{j \\in R(t_i)} e^{\\vec{\\beta}^T\n        \\vec{x}_j}\\right)\\right]_{+}\n\\end{eqnarray}\nwhere $d_i$ is the number of observations that have the same $t_i$,\n$x_{i+}$ is the sum of the covariates over all observations that\nhave the same $t_i$, and $m$ is the number of distinct survival\ntimes. Here $[\\cdot]_{+}$ means a sum over all observations with the\nsame survival time.\n\n\\subsection{Robust Variance Estimators}\nIn MADlib, we implement the robust variance estimator devised by Lin and Wei~\\cite{lin1989robust}.\nWith the notation above, let\n\\begin{eqnarray*}\n    \\vec{W}_{i}\n    &=& \\delta_i \\cdot \\left[ \\vec{x_i} - \\frac{\\left( \\sum_{l: t_l \\ge t_i} e^{\\vec{\\beta}^T \\vec{x_l}} \\vec{x_l} \\right)}{\\sum_{l: t_l \\ge t_i} e^{\\vec{\\beta}^T \\vec{x_l}}} \\right]\n    - \\sum_{j: t_j \\le t_i} \\left\\{ \\delta_j \\cdot \\frac{e^{\\vec{\\beta}^T \\vec{x_i}}}{\\sum_{k: t_k \\ge t_j} e^{\\vec{\\beta}^T \\vec{x_k}}}\n    \\cdot \\left[ \\vec{x_i} - \\frac{\\left( \\sum_{k: t_k \\ge t_j} e^{\\vec{\\beta}^T \\vec{x_k}} \\vec{x_k} \\right)}{\\sum_{k: t_k \\ge t_j} e^{\\vec{\\beta}^T \\vec{x_k}}} \\right] \\right\\}\\\\\n    &=& \\delta_i \\cdot \\left[ \\vec{x_i} - \\frac{\\vec{H}_i}{S_i} \\right]\n    - \\sum_{j: t_j \\le t_i} \\left\\{ \\delta_j \\cdot \\frac{e^{\\vec{\\beta}^T \\vec{x_i}}}{S_j}\n    \\cdot \\left[ \\vec{x_i} - \\frac{\\vec{H}_j}{S_j} \\right] \\right\\}\\\\\n    &=& \\delta_i \\cdot \\left[ \\vec{x_i} - \\frac{\\vec{H}_i}{S_i} \\right]\n    - \\sum_{j: t_j \\le t_i} \\delta_j \\cdot \\frac{e^{\\vec{\\beta}^T \\vec{x_i}}}{S_j} \\vec{x_i}\n    + \\sum_{j: t_j \\le t_i} \\delta_j \\cdot \\frac{e^{\\vec{\\beta}^T \\vec{x_i}}}{S_j} \\frac{\\vec{H}_j}{S_j},\\\\\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\nH_i &=&  \\sum_{l: t_l \\ge t_i} e^{\\vec{\\beta}^T \\vec{x_l}} \\vec{x_l}\\\\\nS_i &=& \\sum_{l: t_l \\ge t_i} e^{\\vec{\\beta}^T \\vec{x_l}}\n\\end{eqnarray*}\n\nLetting\n\\begin{eqnarray*}\n    A_i &=& \\sum_{j: t_j \\le t_i} \\frac{\\delta_j}{S_j},\\\\\n    \\vec{B}_i &=& \\sum_{j: t_j \\le t_i} \\frac{\\delta_j \\vec{H}_j}{S_j^2},\\\\\n\\end{eqnarray*}\nwe have\n\\[\n\\vec{W}_i = \\delta_i \\cdot \\left[ \\vec{x_i} - \\frac{\\vec{H}_i}{S_i} \\right] - e^{\\vec{\\beta}^T \\vec{x_i}} A_i \\vec{x_i} + e^{\\vec{\\beta}^T \\vec{x_i}} \\vec{B}_i.\n\\]\nThe recursions for $A_i$ and $\\vec{B}_i$, assuming the times $t_i$ are\nsorted in ascending order, are\n\\begin{eqnarray*}\n    A_i &=& A_{i-1} + \\frac{\\delta_i}{S_i}\\\\\n    \\vec{B}_i &=& \\vec{B}_{i-1} + \\frac{\\delta_i \\vec{H}_i}{S_i^2}\n\\end{eqnarray*}\n\nThe meat part of the sandwich estimator is\n\\begin{equation}\n  \\mathbb{M} = \\sum_{i=1}^n\\vec{W}_i\\vec{W}_i^T\\ .\n\\end{equation}\n
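Under the same assumptions as the earlier sketches (times sorted ascending, no ties), the $A_i$ and $\\vec{B}_i$ recursions and the meat matrix translate directly into prefix and suffix sums (an illustrative Python sketch, not the MADlib code; \\texttt{delta} is the 0/1 censoring indicator):\n\\begin{verbatim}\ndef sandwich_meat(x, delta, beta):\n    w = np.exp(x @ beta)\n    S = np.cumsum(w[::-1])[::-1]                          # S_i\n    H = np.cumsum((w[:, None] * x)[::-1], axis=0)[::-1]   # H_i\n    A = np.cumsum(delta / S)                              # A_i\n    B = np.cumsum(delta[:, None] * H / S[:, None]**2, axis=0)  # B_i\n    W = (delta[:, None] * (x - H / S[:, None])\n         - w[:, None] * (A[:, None] * x - B))\n    return W.T @ W    # M = sum_i W_i W_i^T\n\\end{verbatim}\n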
\subsection{Clustered Variance Estimation}

The data has multiple clusters $m=1,\dots,K$, and the meat part is
\begin{equation}
  \mathbb{M} = \mathbb{A}^T\mathbb{A}\ ,
\end{equation}
where the $m$-th row of the matrix $\mathbb{A}$ is given by
\begin{equation}
  \mathbb{A}_m = \sum_{i\in G_m} \vec{W}_i\ .
\end{equation}
Here $G_m$ is the set of rows that belong to the $m$-th cluster.

With stratification, we take the strata into account only when we
compute $\vec{H}_i$, $S_i$, $A_i$, $\vec{B}_i$ and $\vec{W}_i$. The
calculations of $\mathbb{A}_m$ and $\mathbb{M}$ only need to group by
the clustering variables; the strata variables are irrelevant there.

\section{How to prevent underflow/overflow errors?}

A problem that is not mentioned above but appears in real
applications of CoxPH training is underflow or
overflow errors. We have exponential functions in the computation,
and it is very easy to get underflow or overflow errors if the
coefficients become too small or too large at certain steps.

We use the same method as R's ``survival'' package to deal with
possible underflow/overflow errors. This method contains four
parts, which make the algorithm described above even more
complicated:

(1) Center and scale the independent variables:
\begin{equation}
    x_i \rightarrow \frac{x_i - E[x]}{E[\vert x_i - E[x] \vert]}
\end{equation}

(2) Estimate the maximum possible value of each coefficient using
the coefficients computed in the first iteration:
\begin{equation}
    \beta_k^{(max)} = 20 \, \sqrt{\frac{h_{kk}}{\sum_i \delta_i}}\ ,
\end{equation}
where $\beta_k^{(max)}$ is the estimate of the largest plausible value
of the coefficient $\beta_k$, $h_{kk} = \partial^2 L / \partial
\beta_k^2$ is the $k$-th diagonal element of the Hessian matrix, and
$\delta_i=0,1$ is the censoring status of the records.

During the computation, whenever $\vert \beta_k \vert
> \beta_k^{(max)}$, we set $\beta_k \rightarrow \mbox{sign}(\beta_k)
\beta_k^{(max)}$.

The authors of the ``survival'' package explain in
\texttt{http://stat.ethz.ch/R-manual/R-devel/RHOME/library/survival/doc/sourcecode.pdf}
why they use such an estimate for the coefficients:

``We use a cutpoint of $\beta * \mbox{std}(x) < 23$
where the standard deviation is the average standard deviation of $x$ within a risk
set. The rationale is that $e^{23}$ is greater than the current world
population, so such a coefficient corresponds to a between-subject
relative risk that is larger than any imaginable.''

In their implementation, they also use $1/h_{kk}$ as an approximation
to $\mbox{std}(x)$. And besides $23$, they also use $20$ in the
estimate.

(3) Although (1) and (2) stabilize the computation, they are still not
enough, so the step-halving method is used. Whenever the current iteration's
log-likelihood is smaller than that of the previous iteration, we accept
the coefficients as
\begin{equation}
    \beta_k = \frac{1}{2}(\beta_k^{new} + \beta_k^{old})\ .
\end{equation}

(4) The stopping threshold is
\begin{equation}
    1 - \frac{L_{new}}{L_{old}} < \mbox{threshold}\ .
\end{equation}
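A compact sketch of safeguards (2)--(4) is given below; it illustrates the clipping and step-halving logic only, with a hypothetical helper \texttt{loglik} standing in for the Cox log-likelihood, and is not the MADlib source:
\begin{verbatim}
import numpy as np

def guarded_update(beta_old, beta_new, beta_max, loglik):
    """One safeguarded accept step: clip to the per-coefficient
    bound, then step-halve while the log-likelihood decreases."""
    beta = np.clip(beta_new, -beta_max, beta_max)      # part (2)
    L_old = loglik(beta_old)
    for _ in range(32):                                # part (3), bounded
        if loglik(beta) >= L_old:
            break
        beta = 0.5 * (beta + beta_old)
    return beta

def converged(L_new, L_old, threshold=1e-8):
    return 1.0 - L_new / L_old < threshold             # part (4)
\end{verbatim}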
\section{Marginal Effects}

See \ref{sub:marginal_effects} for an introduction to marginal effects
(all notations below are the same as those defined in \ref{sub:marginal_effects}).
We implement the default algorithm used by Stata 13.1 for computing the
marginal effects. Note that older versions of Stata may use a
different default algorithm and hence compute values different from MADlib's.

\subsection{Basic Formulation}

The relative hazard ratio for the $i$-th record is given by
\begin{equation}
    h_i = h_0 \exp \left( \vec{\beta}\cdot\vec{f} \right)\ ,
\end{equation}
where $h_0$ is the baseline hazard and both $\vec{\beta}$ and
$\vec{f}$ are vectors. Here we use the indices
$i$ or $j$ for the data records, and $a$ or $b$ for the indices of
covariate terms in $\vec{\beta}\cdot \vec{f}(\vec{x}_i)$. And we will use
$u$ or $v$ to denote the indices of $\vec{x}$.

The value of the baseline $h_0$ is arbitrary and difficult to compute. Stata
ignores the baseline hazard value for the computation of the marginal effect. For
MADlib, we follow the same principle and ignore the baseline (i.e.\ set the baseline
to 1) to compute the hazard value.

Thus the marginal effect corresponding to variable $x_k$ is computed as
\begin{align*}
    \mathit{ME}_{k} & = \pder[h_i]{x_k} \\
           & =  \pder[e^{\vec{\beta} \vec{f}}]{x_k} \\
           & = e^{\vec{\beta} \vec{f}} \vec{\beta} \pder[\vec{f}]{x_k}.
\end{align*}
Vectorizing the above equation (similar to \ref{ssub:logistic_regression}) gives
\begin{equation*}
  \mathit{ME} = e^{\vec{\beta} \vec{f}} J^T \vec{\beta},
 \end{equation*}
where $J$ is defined in \ref{sub:marginal_effects_for_regression_methods}.

The censoring status and stratification are irrelevant to the marginal effect
calculation and can be ignored.

\subsection{Categorical variables}

For categorical variables, we compute the discrete difference as described in
\ref{sub:categorical_variables}.
The discrete difference with respect to $x_k$ is given as
\begin{align*}
    \mathit{ME_k} &= h^{set} - h^{unset} \\
                  &= e^{\vec{\beta}\vec{f^{set}}} - e^{\vec{\beta}\vec{f^{unset}}}
\end{align*}

\subsection{Standard Error}

As has already been described in \ref{sub:standard_errors}, the method to compute
the standard errors is
\begin{equation}
    \mbox{Var}(ME) = \mathbb{S}\, \mbox{Var}(\vec{\beta})\,
    \mathbb{S}^T,
\end{equation}
where the matrix $\mathbb{S}$, computed as the partial derivative of
the marginal effect with respect to the coefficients, is an $M\times N$ matrix
with entries $S_{mn} = \pder[\mathit{ME}_m]{\beta_n}$:
\begin{align*}
    S_{mn} & = \pder[\mathit{ME}_m]{\beta_n} \\
           & = \pder{\beta_n} \left( e^{\vec{\beta} \vec{f}} \vec{\beta} \pder[\vec{f}]{x_m}  \right)\\
           & = e^{\vec{\beta} \vec{f}} \pder{\beta_n}\left(\vec{\beta} \pder[\vec{f}]{x_m} \right) +
                \pder[e^{\vec{\beta} \vec{f}}]{\beta_n} \vec{\beta} \pder[\vec{f}]{x_m} \\
           & = e^{\vec{\beta} \vec{f}} \pder[f_n]{x_m} +
                e^{\vec{\beta} \vec{f}} \cdot f_n \cdot \vec{\beta} \pder[\vec{f}]{x_m} \\
           & = e^{\vec{\beta} \vec{f}} \left( \pder[f_n]{x_m} +
                  f_n \cdot \vec{\beta} \pder[\vec{f}]{x_m} \right).
\end{align*}

Vectorizing this equation to express the complete matrix,
\begin{equation*}
  \vec{S} = e^{\vec{\beta} \vec{f}} \left(J^T +
                (J^T \vec{\beta}) \vec{f}^T \right).
\end{equation*}
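To make the vectorized formulas concrete, here is a small NumPy sketch computing $\mathit{ME}$ and $\mathbb{S}$ from a coefficient vector, a basis evaluation $\vec{f}$, and its Jacobian $J$; all names are local placeholders, and the Jacobian convention ($J_{au} = \partial f_a / \partial x_u$) is assumed to match \ref{sub:marginal_effects_for_regression_methods}:
\begin{verbatim}
import numpy as np

def cox_marginal_effects(beta, f, J):
    """beta: (N,), f: (N,) basis terms f(x),
    J: (N, M) with J[a, u] = d f_a / d x_u."""
    hazard = np.exp(beta @ f)        # baseline h_0 taken as 1
    ME = hazard * (J.T @ beta)       # (M,) marginal effects
    S = hazard * (J.T + np.outer(J.T @ beta, f))   # (M, N)
    return ME, S

def me_variance(S, var_beta):
    # Var(ME) = S Var(beta) S^T (delta method)
    return S @ var_beta @ S.T
\end{verbatim}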
"max_stars_repo_name": "madlib/archived_madlib", "max_stars_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-09-18T07:44:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T19:45:18.000Z", "max_issues_repo_path": "doc/design/modules/cox-proportional-hazards.tex", "max_issues_repo_name": "madlib/archived_madlib", "max_issues_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/design/modules/cox-proportional-hazards.tex", "max_forks_repo_name": "madlib/archived_madlib", "max_forks_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.5295055821, "max_line_length": 325, "alphanum_fraction": 0.7058122686, "num_tokens": 9698, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5926516964302593}}
{"text": "\\section{Deep Water Wave Propagation}\nThis simulates the free propagation of a sinusoidal wave in deep water. The initial condition is still water in a ``large'' box with uniform depth. The wave is generated from the left boundary and propagates to the right. The depth on the right boundary is set as time dependent function\n\\begin{equation}\nh(t) = A\\sin{\\frac{2\\pi t}{\\lambda}}\n\\end{equation}\nwith $u=v=0$. Here $A$ is the amplitude of the generating wave, $t$ is time variable, and $\\lambda$ is the wave length as well as the period of the generating wave.\n\nAnalytically, the wave should travel through the domain without deformation. Lower-order-accuracy algorithms may result in undue wave dampening with the mesh size used in the current problem. This can have practical implications e.g. for tsunami propagation problems, and is usually dealt with by using second-order accurate methods. Alternatively you can refine the mesh until the dampening becomes insignificant, but this may be computationally expensive in realistic problems. \n\nThis example can also illustrate difficulties with radiation-type boundary conditions where the wave exits the domain (of course, for this problem, we could do that by exploiting the analytical solution - but this is not possible for general wave propagation problems). This will most obviously affect the right edge of the domain, but its effects will ultimately be felt throughout. \n\n\n\\subsection{Results}\nIn this test, we consider $A=1$ and $\\lambda=300$.\nFigure~\\ref{fig:stagewave} shows the time-evolution of the water elevation at three points in the domain. Ideally these time series should show the wave propagating without deformation or attenuation (i.e. the wave has the same shape, amplitude, period, mean water level etc. at each point).  
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{wave_atten.png}
\caption{Stage over time at 3 points in space}
\label{fig:stagewave}
\end{center}
\end{figure}


The corresponding momenta for Figure~\ref{fig:stagewave} are shown in Figures~\ref{fig:xmom} and~\ref{fig:ymom}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{xmom.png}
\caption{$x$-momentum over time at 3 points in space}
\label{fig:xmom}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{ymom.png}
\caption{$y$-momentum over time at 3 points in space}
\label{fig:ymom}
\end{center}
\end{figure}



\endinput
{"text": "% $Id: datarep-sec.tex,v 1.5 2002/04/17 20:05:58 ellard Exp $\n\n\\section{Introduction}\n\nIn order to understand how a computer is able\nto manipulate data and perform computations,\nyou must first understand how data is represented by a computer.\n\nAt the lowest level, the indivisible unit of data in a computer is a\n{\\em bit}\\index{bit}.\nA bit represents a single binary value, which may be either 1 or 0.\nIn different contexts, a bit value of 1 and 0 may also be referred to as\n``true'' and ``false'',\n``yes'' and ``no'',\n``high'' and ``low'',\n``set'' and ``not set'', or\n``on'' and ``off''.\n\nThe decision to use binary values, rather than something larger (such\nas decimal values) was not purely arbitrary-- it is due\nin a large part to the relative simplicity of building electronic\ndevices that can manipulate binary values.\n\n\n\\section{Representing Integers}\n\n\\subsection{Unsigned Binary Numbers}\n\\index{unsigned binary numbers}\n\n%%% This really needs some explanation here.\n\nWhile the idea of a number system with only two values may\nseem odd, it is actually very similar to the decimal system\nwe are all familiar\nwith, except that each digit is a bit containing a 0 or 1\nrather than a number from 0 to 9.\n(The word ``bit'' itself is a contraction of the words ``binary digit'')\nFor example, figure~\\ref{datarep-binary-decimal-table} shows several\nbinary numbers, and the equivalent decimal numbers.\n\n\\begin{figure}[hbtp]\n\\caption{Binary and Decimal Numbers}\n\\label{datarep-binary-decimal-table}\n\\begin{center}\n\\begin{tabular}{|rcr|}\n\\hline\n{\\bf Binary}    &       & {\\bf Decimal} \\\\\n\\hline\n       0        & =             & 0     \\\\\n       1        & =             & 1     \\\\\n      10        & =             & 2     \\\\\n      11        & =             & 3     \\\\\n     100        & =             & 4     \\\\\n     101        & =             & 5     \\\\\n     110        & =             & 6     \\\\\n$\\vdots$        & $\\vdots$      & $\\vdots$      \\\\\n11111111        & =             & 255   \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{figure}\n\nIn general, the binary representation of $2^{k}$ has a $1$ in\nbinary digit $k$ (counting from the {\\em right}, starting at 0)\nand a $0$ in every other digit.\n(For notational convenience,\nthe $i$th bit of a binary number $A$ will be\ndenoted as $A_{i}$.)\n\nThe binary representation of a number that is not a power\nof 2 has the bits set corresponding to the powers of two\nthat sum to the number:  for example, the decimal number\n$6$ can be expressed in terms of powers of 2 as\n$1 \\times 2^{2} ~+~ 1 \\times 2^{1} ~+~ 0 \\times 2^{0}$,\nso it is written in binary as $110$.\n\nAn eight-digit binary number is commonly called a {\\em byte}.\nIn this text, binary numbers will usually be written as bytes\n(i.e. as strings of eight binary digits).  For example, the binary number\n$101$ would usually be written as $00000101$-- a $101$\npadded on the left with five zeros, for a total of eight digits.\n\nWhenever there is any possibility of ambiguity between\ndecimal and binary notation, the {\\em base} of the number\nsystem (which is 2 for binary, and 10 for decimal) is\nappended to the number as a subscript.  
Therefore, $101_{2}$\nwill always be interpreted as the binary representation for five,\nand never the decimal representation of one hundred and one\n(which would be written as $101_{10}$).\n\n\\subsubsection{Conversion of Binary to Decimal}\n\\label{datarep-binary-to-decimal-sec}\n\nTo convert an unsigned binary number to a decimal number, add\nup the decimal values of the powers of 2 corresponding to bits\nwhich are set to 1 in the binary number.\nAlgorithm~\\ref{datarep-binary-to-decimal-alg} shows a method to do this.\nSome examples of conversions from binary to decimal\nare given in figure~\\ref{datarep-bin-to-dec-figure}.\n\n\\begin{algorithm}{Binary to Decimal}{datarep-binary-to-decimal-alg}{\n        To convert a binary number to decimal.\n}\n\n\\begin{itemize}\n\\item   Let $X$ be a binary number, $n$ digits in length, composed of bits\n                $X_{n-1} \\cdots X_{0}$.\n\\item   Let $D$ be a decimal number.\n\\item   Let $i$ be a counter.\n\\end{itemize}\n\n\\begin{enumerate}\n\\item   Let $D \\leftarrow 0$.\n\\item   Let $i \\leftarrow 0$.\n\\item   While $i < n$ do:\n        \\begin{itemize}\n        \\item   If $X_{i}$ is $1$\n                (i.e. if bit $i$ in $X$ is $1$),\n                then set $D \\leftarrow (D + 2^{i})$.\n        \\item   Set $i \\leftarrow (i + 1)$.\n        \\end{itemize}\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{figure}[hbtp]\n\\caption{Examples of Conversion from Binary to Decimal}\n\\label{datarep-bin-to-dec-figure}\n\\begin{center}\n\\begin{tabular}{|lclclcr|}\n\\hline\n{\\bf Binary}    & & & & & & {\\bf Decimal} \\\\\n\\hline\n\n00000000        & = &   $0$                     & = &\n                        $0$                     & = & 0 \\\\\n& & & & & & \\\\\n00000101        & = & $2^2 + 2^0$               & = &\n                        $4 + 1$                 & = & 5 \\\\\n& & & & & & \\\\\n00000110        & = & $2^2 + 2^1$               & = &\n                        $4 + 2$                 & = & 6 \\\\\n& & & & & & \\\\\n00101101        & = & $2^5 + 2^3 + 2^2 + 2^0$   & = &\n                        $32 + 8 + 4 + 1$        & = & 45 \\\\\n& & & & & & \\\\\n10110000        & = & $2^7 + 2^5 + 2^4$         & = &\n                        $128 + 32 + 16$         & = & 176 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{figure}\n\n\nSince there are $2^{n}$ unique sequences of $n$ bits,\nif all the possible bit sequences of length $n$ are used,\nstarting from zero, the largest number will be $2^{n} - 1$.\n\n\\subsubsection{Conversion of Decimal to Binary}\n\\label{datarep-decimal-to-binary-algorithm}\n\nAn algorithm for converting a decimal number to binary notation is\ngiven in algorithm~\\ref{datarep-decimal-to-binary-alg}.\n\n\\begin{algorithm}{Decimal to Binary}{datarep-decimal-to-binary-alg}{\n        To convert a positive decimal number to binary.\n}\n\n\\begin{itemize}\n\\item   Let $X$ be an unsigned binary number, $n$ digits in length.\n\\item   Let $D$ be a positive decimal number, no larger than $2^{n} - 1$.\n\\item   Let $i$ be a counter.\n\\end{itemize}\n\\begin{enumerate}\n\\item   Let $X \\leftarrow 0$ (set all bits in $X$ to $0$).\n\\item   Let $i \\leftarrow (n - 1)$.\n\\item   While $i \\geq 0$ do:\n        \\begin{enumerate}\n        \\item   If $D \\geq 2^{i}$, then\n                \\begin{itemize}\n                \\item   Set $X_{i} \\leftarrow 1$\n                        (i.e. 
set bit $i$ of $X$ to $1$).\n                \\item   Set $D \\leftarrow (D - 2^{i})$.\n                \\end{itemize}\n        \\item   Set $i \\leftarrow (i - 1)$.\n        \\end{enumerate}\n\\end{enumerate}\n\\end{algorithm}\n\n\\subsubsection{Addition of Unsigned Binary Numbers}\n\\label{datarep-unsigned-addition-sec}\n\nAddition of binary numbers can be done in exactly the same\nway as addition of decimal numbers, except that all of the\noperations are done in binary (base 2) rather than\ndecimal (base 10).\nAlgorithm~\\ref{datarep-unsigned-addition-alg} gives a method\nwhich can be used to perform binary addition.\n\n\\begin{algorithm}{Unsigned Binary Addition}{datarep-unsigned-addition-alg}{\n        Addition of unsigned binary numbers.\n}\n\n\\begin{itemize}\n\\item   Let $A$ and $B$ be a pair of $n$-bit binary numbers.\n\\item   Let $X$ be a binary number which will hold the sum of $A$ and $B$.\n\\item   Let $c$ and $\\hat{c}$ be carry bits.\n\\item   Let $i$ be a counter.\n\\item   Let $s$ be an integer.\n\\end{itemize}\n\\begin{enumerate}\n\\item   Let $c \\leftarrow 0$.\n\\item   Let $i \\leftarrow 0$.\n\\item   While $i < n$ do:\n        \\begin{enumerate}\n\t\\item\tSet $s \\leftarrow A_{i} + B_{i} + c$.\n        \\item   Set $X_{i}$ and $\\hat{c}$ according to the \n\t\tfollowing rules:\n\t\t\\begin{itemize}\n\t\t\\item\tIf $s$ is $0$, then\n\t\t\t$X_{i} \\leftarrow 0$ and $\\hat{c} \\leftarrow 0$.\n\t\t\\item\tIf $s$ is $1$, then\n\t\t\t$X_{i} \\leftarrow 1$ and $\\hat{c} \\leftarrow 0$.\n\t\t\\item\tIf $s$ is $2$, then\n\t\t\t$X_{i} \\leftarrow 0$ and $\\hat{c} \\leftarrow 1$.\n\t\t\\item\tIf $s$ is $3$, then\n\t\t\t$X_{i} \\leftarrow 1$ and $\\hat{c} \\leftarrow 1$.\n\t\t\\end{itemize}\n        \\item   Set $c \\leftarrow \\hat{c}$.\n        \\item   Set $i \\leftarrow (i + 1)$.\n        \\end{enumerate}\n\\end{enumerate}\n\n\\end{algorithm}\n\nWhen algorithm~\\ref{datarep-unsigned-addition-alg} terminates,\nif $c$ is not 0, then an {\\em overflow}\\index{overflow}\nhas occurred-- the resulting number is simply too large to\nbe represented by\nan $n$-bit unsigned binary number.\n\n\\subsection{Signed Binary Numbers}\n\\index{signed binary numbers}\n\nThe major flaw with the representation that we've used for\nunsigned binary numbers is that it doesn't include a way to\nrepresent negative numbers.\n\nThere are a number of ways to extend the unsigned representation\nto include negative numbers.\nOne of the easiest is to add an additional bit\nto each number that is used to represent the {\\em sign} of the\nnumber-- if this bit is $1$, then the number is negative; otherwise\nthe number is positive (or vice versa).\nThis is analogous to the way that we write negative numbers\nin decimal-- if the first symbol of the number is a negative sign,\nthen the number is negative, otherwise the number is positive.\n\nUnfortunately, when we try to adapt the algorithm for addition\nto work properly with this representation, this apparently simple\nmethod turns out to cause some trouble.\nInstead of simply adding the numbers together\nas we do with unsigned numbers, we now need to consider\nwhether the numbers being added are positive or negative.\nIf one number is positive and the other negative, then we\nactually need to do subtraction instead of addition, so\nwe'll need to find an algorithm for subtraction.  
Furthermore,
once we've done the subtraction, we need to compare the
unsigned magnitudes of the numbers to determine whether the
result is positive or negative!

Luckily, there is a representation that allows us to represent
negative numbers in such a way that addition (or subtraction)
can be done easily, using algorithms very similar to the ones
that we already have.
The representation that we will use
is called {\em two's complement} notation.
\index{two's complement}
\index{2's complement}

To introduce two's complement, we'll start by defining, in
algorithm~\ref{datarep-twos-complement-negate-alg}, the
algorithm that is used to compute the negation of a two's complement number.

\begin{algorithm}{Two's Complement Negation}
	{datarep-twos-complement-negate-alg}{
        Negation of a two's complement number.
}

\begin{enumerate}
\item   Let $\bar{x}$ be the
        {\em logical complement}\index{logical complement} of $x$.

        The logical complement (also called the {\em one's complement})
        \index{one's complement} \index{1's complement}
        is formed by flipping all the bits in the number, changing
        all of the $1$ bits to $0$, and vice versa.

\item   Let $X \leftarrow \bar{x} + 1$.

        If this addition {\em overflows}, then the overflow bit
        is discarded.
\end{enumerate}

By the definition of two's complement, the resulting $X$ is the
negation of the original $x$.

\end{algorithm}

Figure~\ref{datarep-twos-complement-neg-table}
shows the process of negating several
numbers.  Note that the negation of zero is zero.

\begin{figure}[hbtp]
\caption{Examples of Negation Using Two's Complement}
\begin{center}
\label{datarep-twos-complement-neg-table}

\begin{tabular}{llcr}
                & 00000110      & = & 6 \\
1's complement  & 11111001      &   &   \\
Add 1           & 11111010      & = & -6 \\
\\
\end{tabular}

\begin{tabular}{llcr}
                & 11111010      & = & -6 \\
1's complement  & 00000101      &   &   \\
Add 1           & 00000110      & = & 6 \\
\\
\end{tabular}

\begin{tabular}{llcr}
                & 00000000      & = & 0 \\
1's complement  & 11111111      &   &   \\
Add 1           & 00000000      & = & 0 \\
\\
\end{tabular}
\end{center}
\end{figure}

This representation has several important properties:
\begin{itemize}
\item   The leftmost (most significant)
        bit also serves as a sign bit; if 1, then the number is negative,
        if 0, then the number is positive or zero.

\item   The rightmost (least significant)
        bit of a number always determines
        whether the number is odd or even--
        if bit 0 is 0, then the number is even, otherwise the number is odd.

\item   The largest positive number that can be represented in
        two's complement notation in an $n$-bit binary number is $2^{n-1} - 1$.
        For example, if $n$ is $8$, then the largest positive number is
        $01111111 ~~=~~ 2^{7} - 1 ~~=~~ 127$.

\item   Similarly, the ``most negative'' number is $-2^{n-1}$, so
        if $n = 8$, then it is $10000000$, which is $-2^{7} ~~=~~ -128$.
        Note that the negative of the most negative number (in this
        case, 128) cannot be represented in this notation.
\end{itemize}
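These rules are easy to exercise in a few lines of code. The sketch below (the helper names are ours) implements the complement-and-add-one definition and reproduces the examples in figure~\ref{datarep-twos-complement-neg-table}:
\begin{verbatim}
def twos_negate(x, n=8):
    """Negate an n-bit value: flip all bits, add 1, discard overflow."""
    return ((x ^ ((1 << n) - 1)) + 1) & ((1 << n) - 1)

def to_signed(x, n=8):
    """Interpret an n-bit pattern as a two's complement integer."""
    return x - (1 << n) if x & (1 << (n - 1)) else x

assert twos_negate(0b00000110) == 0b11111010   # 6  -> -6
assert twos_negate(0b11111010) == 0b00000110   # -6 ->  6
assert twos_negate(0b00000000) == 0b00000000   # 0  ->  0
assert to_signed(0b10000000) == -128           # most negative
\end{verbatim}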
\subsubsection{Addition and Subtraction of Signed Binary Numbers}

The same addition algorithm that was used for unsigned binary numbers
also works properly for two's complement numbers.

\begin{tabular}{llcr}
        \\
        &       00000101        & = & 5         \\
  $+$   &       11110101        & = & -11       \\
\hline
        &       11111010        & = & -6        \\
        \\
\end{tabular}

Subtraction is also done in a similar way: to subtract A from B, take
the two's complement of A and then add this number to B.

The conditions for detecting overflow are different for signed
and unsigned numbers, however.  If we use
algorithm~\ref{datarep-unsigned-addition-alg} to add two unsigned numbers,
then if $c$ is $1$ when the addition terminates,
this indicates that the result is too large
to fit in the number of bits allowed.
With signed numbers, however, $c$ is not relevant, and an overflow
occurs when the signs of both numbers being added are the same
but the sign of the result is opposite.  If the two numbers being added have
opposite signs, however, then an overflow {\em cannot} occur.

For example, consider the sum of $1$ and $-1$:

\begin{tabular}{llcrl}
        \\
        &       00000001        & = & 1         & \\
  $+$   &       11111111        & = & -1        & \\
\hline
        &       00000000        & = & 0         & {\bf Correct!}        \\
        \\
\end{tabular}

In this case, the addition produces a carry out of the leftmost bit, but it is not
an error, since the result that we get (ignoring
the carry) is exactly correct.

On the other hand, if we compute the sum of 127 and 1, then
a serious error occurs:

\begin{tabular}{llcrl}
        \\
        &       01111111        & = & 127       &       \\
  $+$   &       00000001        & = & 1         &       \\
\hline
        &       10000000        & = & -128      & {\bf Uh-oh!}  \\
        \\
\end{tabular}

Therefore, we must be very careful when doing signed binary arithmetic
that we take steps to detect bogus results.
In general:
\begin{itemize}
\item   If $A$ and $B$ are of the same sign, but $A + B$ is of
        the opposite sign, then an overflow or wraparound error
        has occurred.
\item   If $A$ and $B$ are of different signs, then $A + B$ will
        never overflow or wraparound.
\end{itemize}
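The sign-based overflow rule translates directly into a small checker (an illustration with hypothetical names, not part of the original text); it flags the $127 + 1$ case above while accepting $1 + (-1)$:
\begin{verbatim}
def signed_add(a, b, n=8):
    """Add two n-bit two's complement values; report overflow when
    the operands share a sign but the result's sign differs."""
    mask, sign = (1 << n) - 1, 1 << (n - 1)
    s = (a + b) & mask                 # discard the carry out
    overflow = bool(~(a ^ b) & (a ^ s) & sign)
    return s, overflow

assert signed_add(0b00000001, 0b11111111) == (0, False)          # 1 + (-1)
assert signed_add(0b01111111, 0b00000001) == (0b10000000, True)  # 127 + 1
\end{verbatim}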
\subsubsection{Shifting Signed Binary Numbers}

Another useful property of the two's complement notation
is the ease with which numbers can be multiplied or divided by
two.  To multiply a number by two, simply shift the number ``up''
(to the left) by one bit, placing a 0 in the least significant bit.
To divide a number in half, simply shift the number ``down''
(to the right) by one bit (but do not change the sign bit).

Note that in the case of odd numbers, the effect of shifting
to the right one bit is like dividing in half, rounded towards
negative infinity, so that 51 shifted to the right one bit becomes 25,
while -51 shifted to the right one bit becomes -26.

\begin{figure}
\caption{ Doubling and Halving Two's Complement Numbers }
\begin{center}
\begin{tabular}{llcr}
\\
                & 00000001      & = & 1 \\
Double          & 00000010      & = & 2 \\
Halve		& 00000000      & = & 0 \\
\\
                & 00110011      & = & 51 \\
Double          & 01100110      & = & 102 \\
Halve           & 00011001      & = & 25 \\
\\
                & 11001101      & = & -51 \\
Double          & 10011010      & = & -102 \\
Halve           & 11100110      & = & -26 \\
\\
\end{tabular}
\end{center}
\end{figure}

\subsubsection{Hexadecimal Notation}
\index{hexadecimal}
\index{octal}

Writing numbers in binary notation can soon get tedious,
since even relatively small numbers require many
binary digits to express.  A more compact notation, called {\em hexadecimal}
(base 16), is usually used to express large binary numbers.
In hexadecimal, each digit represents four unsigned binary digits.

Another notation, which is not as common currently, is called {\em octal}
and uses base eight to represent groups of three bits.
Figure~\ref{datarep-hex-oct-table} shows examples of binary, decimal, octal, and
hexadecimal numbers.

\begin{figure}
\caption{Hexadecimal and Octal}
\label{datarep-hex-oct-table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
Binary  & 0000  & 0001  & 0010  & 0011  & 0100  & 0101  & 0110  & 0111  \\
\hline
Decimal & 0     & 1     & 2     & 3     & 4     & 5     & 6     & 7     \\
\hline
Hex     & 0     & 1     & 2     & 3     & 4     & 5     & 6     & 7     \\
\hline
Octal   & 0     & 1     & 2     & 3     & 4     & 5     & 6     & 7     \\
\hline
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
Binary  & 1000  & 1001  & 1010  & 1011  & 1100  & 1101  & 1110  & 1111  \\
\hline
Decimal & 8     & 9     & 10    & 11    & 12    & 13    & 14    & 15    \\
\hline
Hex     & 8     & 9     & A     & B     & C     & D     & E     & F     \\
\hline
Octal   & 10    & 11    & 12    & 13    & 14    & 15    & 16    & 17    \\
\hline
\end{tabular}
\end{center}
\end{figure}

For example, the number ${\tt 200}_{10}$ can be written as
${\tt 11001000}_{2}$,
${\tt C8}_{16}$, or
${\tt 310}_{8}$.

\section{Representing Characters}

Just as sequences of bits can be used to represent numbers,
they can also be used to represent the letters of the alphabet,
as well as other characters.

Since all sequences of bits represent numbers, one way to
think about representing characters by sequences of bits
is to choose a number that corresponds to each character.
The most popular correspondence currently is the ASCII character set.
\index{ASCII}
ASCII, which stands for the American Standard Code for Information
Interchange, uses 7-bit integers to represent characters, using the
correspondence shown in table~\ref{ascii}.

%%% Jam the ASCII table in here.
\begin{figure}[hbt]
\caption{The ASCII Character Set}
\label{ascii}
\index{ASCII}
\begin{center}
\input{ascii}
\end{center}
\end{figure}

When the ASCII character set was chosen, some care was taken to
organize the way that characters are represented in order to make
them easy for a computer to manipulate.  For example, all of the
letters of the alphabet are arranged in order, so that sorting
characters into alphabetical order is the same as sorting in
numerical order.
In addition, different classes of characters
are arranged to have useful relations.  For example, to convert
the code for a lowercase letter to the code for the same letter in
uppercase, simply set the 6th bit of the code to 0 (or subtract 32).
ASCII is by no means the only character set to have
similar useful properties, but it has emerged as the standard.

The ASCII character set does have some important limitations, however.
One problem is that the character set only defines the representations
of the characters used in written English.  This causes problems with
using ASCII to represent other written languages.  In particular,
there simply aren't enough bits to represent all the written characters
of languages with a larger number of characters (such as Chinese or
Japanese).  Already new character sets which address these problems
(and can be used to represent characters of many languages side
by side) are being proposed, and eventually there will unquestionably
be a shift away from ASCII to a new multilanguage standard\footnote{
        This shift will break many, many existing programs.  Converting
        all of these programs will keep many, many programmers busy
        for some time.}.

%% This section isn't really very good, and
%% is unnecessary for CS50 anyway, so out it goes...
%% \input{datarep/datarep-representing-rational-numbers-de1}

\section{Representing Programs}

Just as sequences of bits can be used to represent numbers, they can
also be used to represent instructions for a computer to perform.
Unlike the two's complement notation for integers, which is a standard
representation used by nearly all computers, the representation of
instructions, and even the set of instructions, varies widely from one
type of computer to another.

The {\sc Ant-8} architecture, which is the focus of the rest of this
document, uses a relatively simple and straightforward representation.
Each instruction is exactly 16 bits in length, and consists of several
bit fields, as depicted in
figure~\ref{datarep-ant-instruction-figure}.

\begin{figure}[hbtp]
\caption{{\sc Ant-8} Instruction Formats}
\label{datarep-ant-instruction-figure}
\begin{center}
\begin{tabular}{|p{1.0in}|p{1.0in}|p{1.0in}|p{1.0in}|}
\hline
4 bits & 4 bits & 4 bits & 4 bits \\
\hline
\hline
op & des & reg1 & reg2 \\
\hline
op & des & reg1 & 4-bit constant \\
\hline
op & reg & \multicolumn{2}{|l|}{8-bit constant} \\
\hline
\end{tabular}
\end{center}
\end{figure}

The first four bits (reading from the left, or high-order bits) of
each instruction are called the {\tt op} field.  The op field
determines what operation the instruction represents.  Depending on
what the op is, the rest of the instruction may represent the names of
registers or constants used by the op.

For example, the instruction $1234_{16}$ has an op of 1, which
corresponds to the operation of loading a constant ({\tt
lc}).\footnote{The fact that most of the instructions consist of four
4-bit fields makes hexadecimal notation particularly appropriate for
expressing {\sc Ant-8} instructions.} With this operation, the
next 4-bit field is interpreted as the name of the register to
load the constant into, and the last 8 bits are the constant to load.
Therefore, instruction $1234_{16}$ places the value {\tt 0x34} into
register 2.  (The {\tt lc} instruction and the rest of the {\sc Ant-8}
instructions are described more fully in the {\sc Ant-8} tutorial.)
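Decoding such an instruction is a matter of shifts and masks. The sketch below (our illustration; the helper name is hypothetical) extracts the three fields of the {\tt lc} format shown in figure~\ref{datarep-ant-instruction-figure}:
\begin{verbatim}
def decode_lc(instr):
    """Split a 16-bit ANT-8 'lc' instruction into op, reg, constant."""
    op    = (instr >> 12) & 0xF   # high-order 4 bits
    reg   = (instr >> 8)  & 0xF   # destination register
    const =  instr        & 0xFF  # 8-bit constant
    return op, reg, const

assert decode_lc(0x1234) == (0x1, 0x2, 0x34)   # lc r2, 0x34
\end{verbatim}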
\section{Memory Organization}

We've seen how sequences of binary digits can be used
to represent numbers, characters, and instructions.
In a computer, these binary digits
are organized and manipulated in discrete groups,
and these groups are said to be the {\em memory} of the computer.

\subsection{Units of Memory}

The smallest of these groups, on most computers,
is called a {\em byte}\index{byte}.
On nearly all currently popular computers a byte is composed of 8 bits.

The next largest unit of memory is usually composed of
16 bits.  What this unit is called varies from computer
to computer-- on smaller machines, this is often called
a {\em word}\index{word}, while on newer architectures that
can handle larger chunks of data,
this is called a {\em halfword}\index{halfword}.

The next largest unit of memory is usually composed of 32 bits.
Once again, the name of this unit varies-- on smaller machines,
it is referred to as a {\em long}\index{long}, while
on newer and larger machines it is called a {\em word}\index{word}.

Finally, on the newest machines, the computer can also handle
data in groups of 64 bits.  On a smaller machine, this is known
as a {\em quadword}\index{quadword}, while on a larger machine
this is known as a {\em long}\index{long}.

\subsubsection{Historical Perspective}

There have been architectures that have used nearly
every imaginable word size-- from 6-bit bytes to 9-bit bytes,
and word sizes ranging from 12 bits to 48 bits.
There are even a few architectures that have no fixed
word size at all (such as the CM-2) or word sizes that
can be specified by the operating system at runtime.

Over the years, however, most architectures have converged
on 8-bit bytes and 32-bit longwords.
An 8-bit byte is a good match for the ASCII character set
(which has some popular extensions that require 8 bits),
and a 32-bit word has been, at least until recently,
large enough for most practical purposes.

\subsection{Addresses and Pointers}

Each unique byte\footnote{
	In some computers, the smallest distinct unit of memory
	is not a byte.  For the sake of simplicity, however,
	this section assumes that the smallest distinct unit
	of memory on the computer in question is a byte.}
of the computer's memory is given a unique identifier, known
as its {\em address}.  The {\em address of} a piece of memory is
often referred to as a {\em pointer to} that piece of memory--
the two terms are synonymous, although there are many contexts
where one is commonly used and the other is not.

The memory of the computer itself is often organized as a large array
(or group of arrays) of bytes of memory.  In this organization, the
address of each byte of memory is simply the index of the memory array
location where that byte is stored.
\subsection{Summary}

In this chapter, we've seen how computers represent integers
using groups of bits, and how basic arithmetic and other
operations can be performed using this representation.

We've also seen how integers or groups of bits can be used
to represent several different kinds of data,
including written characters (using the ASCII character codes),
instructions for the computer to execute,
and addresses or pointers, which can be used to reference other data.

There are also many other ways that information can be represented
using groups of bits, including representations for rational
numbers (usually by a representation called {\em floating point}),
irrational numbers, graphics, arbitrary character sets, and
so on.  These topics, unfortunately, are beyond the scope of this
chapter.
{"text": "\\section{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n\\frame{\\tableofcontents[currentsection, hideothersubsections]}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\nLikely, tridiagonal approx $\\hat{F}^{-1}$ is better than (one)diagonal $\\breve{F}^{-1}$,\nBUT harder to obtain because\\\\\napproximating $\\tilde{F}^{-1}$ as block-tridiagonal is NOT equivalent to approximating $\\tilde{F}$ as block-tridiagonal.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.175]{kfac_12}\n\\end{figure}\n\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n\\begin{itemize}\n    % \\item Define $\\hat{F}$ that agrees with $\\tilde{F}$ on the tridiagonal blocks,\n    %     \\begin{itemize}\n    %         \\item and that satisfies the property that $\\hat{F}^{-1}$ is block-tridiagonal\n    %     \\end{itemize}\n    \\item Assume that $\\hat{F}^{-1}$ is block-tridiagonal\n        \\begin{itemize}\n            \\item is equivalent to assuming that:\n            $\\hat{F}^{-1}$ is the precision matrix of an undirected Gaussian graphical model (UGGM) over $\\mathcal{D}\\theta$\n        \\end{itemize}\n    \\item As this graphical model has a tree structure, there is an equivalent\n        directed graphical model with the same distribution\n        \\begin{itemize}\n            \\item this equivalent directed model will also be linear/Gaussian, and\n            hence, a directed Gaussian Graphical model (DGGM).\n        \\end{itemize}\n\\end{itemize}\n\n\\vspace{10mm}\n{\\footnotesize\nRecall: The precision matrix of a random vector is the inverse of its covariance matrix.\n% * https://www.statlect.com/glossary/precision-matrix\n}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.25]{kfac_fig_04_arxiv}\n\\end{figure}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.25]{kfac_16}\n\\end{figure}\n\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.325]{kfac_17}\n\\end{figure}\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.325]{kfac_18}\n\\end{figure}\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.3]{kfac_19}\n\\end{figure}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\nApplying formula for inverse covariance of directed model used in FActorized Natural Gradient (FANG)(Grosse and Salakhutdinov, 2015)\ngives:\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.25]{kfac_14}\n\\end{figure}\n\n% \\begin{figure}\n%     \\centering\n%     \\includegraphics[scale=0.15]{kfac_15}\n% \\end{figure}\n\\end{frame}\n\n% \\begin{frame}\n% \\frametitle{KFAC: Block-tridiagonal inverse approx, $\\hat{F}^{-1} \\approx \\tilde{F}^{-1}$}\n% \\begin{figure}\n%     \\centering\n%     \\includegraphics[scale=0.225]{kfac_11}\n% \\end{figure}\n% \\end{frame}\n\n", "meta": {"hexsha": "4e2685198fa5d60fd3632fb361f9f82f2fb3c369", "size": 3209, "ext": "tex", "lang": "TeX", 
"max_stars_repo_path": "talk/tor/kfac-20180824/kfac3.tex", "max_stars_repo_name": "tttor/robot-foundation", "max_stars_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "talk/tor/kfac-20180824/kfac3.tex", "max_issues_repo_name": "tttor/robot-foundation", "max_issues_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "talk/tor/kfac-20180824/kfac3.tex", "max_forks_repo_name": "tttor/robot-foundation", "max_forks_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4607843137, "max_line_length": 132, "alphanum_fraction": 0.6821439701, "num_tokens": 1039, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920068519376, "lm_q2_score": 0.7401743505760728, "lm_q1q2_score": 0.5926516861830854}}
{"text": "\\chapter{Involutions and special $p$-groups}\r\n\\section{Dihedral Groups and Involutions}\r\n{\\bf Theorem 1:}\r\nIf $x, y \\in Inv(G)$ then $ \\langle x, y \\rangle $ is a dihedral group of order $2|xy|$.\r\n\\begin{quote}\r\n\\emph{Proof:}\r\nLet $u=xy$, $U= \\langle u \\rangle $ and $|xy|=n$. $U^x=U^y=U$ so $U \\lhd D$.  $D= U \\cup Ux$.\r\n\\end{quote}\r\n{\\bf Theorem 2:}\r\nIf $x, y \\in Inv(G)$, $n=|xy|$ and $D= \\langle x, y \\rangle $ then \r\n(1) $z \\in D \\setminus \\langle xy \\rangle $ is an\r\ninvolution; (2) if $n$ is odd, $D$ is transitive on involutions;  \r\n(3) if $n$ is even,\r\nexactly one of $x, y$ is conjugate to the unique involution in $ \\langle xy \\rangle $;\r\n(4) if $n$ is even\r\nand $z$ is the unique involution in $ \\langle xy \\rangle $ then $xz$ is conjugate to \r\n$x$ in $D$ iff\r\n$n= 0 \\jmod{4}$.\r\n\\begin{quote}\r\n\\emph{Proof:}\r\nLet $u=xy$, $U= \\langle u \\rangle $, and $|xy|=n$.  $u^x=u^{-1}$ and, \r\nin fact, $v \\in U \\rightarrow v^x= v^{-1}$.\r\n\\\\\r\n(1) If $z \\in D \\setminus U$, $z= vx$ and $(vx)^2=vv^x=1$.\r\n\\\\\r\n(2) For $w \\in U$, $(vx)^w = v x^w = v w^{-1} w^x x =v v w^{-2} x$ so\r\n$x^D= \\{ v^2x: v \\in U \\}$.\r\nIf $n$ is odd, $U$ has no involutions, so $x^D = Inv(D)= D \\setminus U$.\\\\\r\n(3) If $ux=y$, $D \\setminus U = \\{v^2x: v \\in U\\} \\cup \\{v^2ux: v \\in U \\}= x^D \\cup y^D$\\\\\r\n(4) $zx \\in x^D$ when $z$ is a square in $U$ that is when $n=0 \\jmod{4}$.\r\n\\end{quote}\r\n{\\bf Theorem 3:}\r\nLet $G$ have even order with ${\\mathbb Z}(G)=1$ and suppose\r\nthat $G$ has $m$ involutions with $n=|G|/m$ then\r\n$G$ has a proper subgroup of index at most $2n^2$.\r\n\\begin{quote}\r\n\\emph{Proof:}\r\nLet $I= Inv(G)$, $R= \\{ g \\in G: g^x=g^{-1}, x \\in I \\}$ and $\\{x_i \\}$ representatives of\r\nthe conjugacy classes of $G$ in $R$ for $0 \\le i \\le k$ and pick $x_0=1$.  Set $m_i= |x_i^G|$\r\nand $B_i= \\{ (u, v), u,v \\in I: uv= x_i \\}$, put $b_i= |B_i|$.  If $u,v \\in I$ then either\r\n$u=v, uv=1$ or $ \\langle u,v \\rangle $ is dihedral.  \r\nIn either case, $(uv)^u= u^{-1}$ so $uv \\in R$.\r\n$m^2 = |I \\times I| = \\sum_{i=0}^k m_i b_i$.\r\n\\\\\r\n\\\\\r\nNow, $\\exists t_i \\in Inv(G): (x_i)^{t_i}=x_i^{-1}$.  If $u,v \\in B_i, (x_i)^u=x_i^{-1}$\r\nand $(u,v) \\mapsto u$ is an injection from $B_i$ into $t_iC_G(t_i)$; thus \r\n$b_i \\le |C_G(t_i)$ and $m_ib_i \\le |G|$, in fact, $m_0=1$ and $b_0=m$ so\r\n$m^2 \\le m + k|G|$.  Let $H<G$ be a subgroup of minimal index, $s=|G/H|$.\r\nIf $i > 0$ then $x_i \\notin {\\mathbb Z}(G)$ and $m_i =|G:C_G(x_i)| \\ge s$.\r\n$|G| \\ge \\sum_{i=0}^k m_i \\ge 1+ks$ and $k \\le {\\frac {(|G|-1)} {s}}$.  This gives\r\n$m^2 \\le (|G|{\\frac {(|G|-1)} {s}})+m$.  But $n={\\frac {|G|} {m}}$ and $m \\ge 2$ so\r\n$s \\le {\\frac {n(n-m^{-1})} {(1-m^{-1})}}$ and $s \\le (2 n^2)!$.\r\n\\end{quote}\r\n{\\bf Theorem 4:}\r\nThe $G$ be a finite simple group of even order and $t \\in Inv(G)$ with $n=|C_G(t)|$ then\r\n$|G| \\le (2 n^2 )!$.\r\n\\begin{quote}\r\n\\emph{Proof:}\r\nBy the previous result, $\\exists H <G$ such that $|G:H| \\le 2 n_0^2, n_0=|G|/m$ where\r\n$m=|Inv(G)|$ and $m \\ge |t^G|=|G:C_G(t)|$ and so $n_0 \\le {\\frac {|G|} {|G:C_G(t)|}}=n$.\r\nRepresenting $G$ as a permutation group on the cosets $\\{ Hx \\}$, $k=|G/H| \\le 2n^2$.  Since\r\n$G$ is simple, this representation is faithful and $G$ is isomorphic to a subgroup of\r\n$S_k$ and so $|G| \\le k! \\le (2n^2)!$.\r\n\\end{quote}\r\n{\\bf Brauer-Fowler Theorem:}  Let $H$ be a finite group.   
There are at most finitely many
finite simple groups with an involution $t$ such that $H \cong C_G(t)$.
\begin{quote}
\emph{Proof:}
Follows from the previous result.
\end{quote}
{\bf Thompson Order Formula:}  Let $G$ be a finite group with $k \ge 2$ conjugacy classes of
involutions $\{x_i^G\}$, $i= 1, 2, \ldots , k$.  Let $n_i$ be the number of ordered pairs
$(u, v)$ with $u \in x_1^G$, $v \in x_2^G$ and $x_i \in \langle uv \rangle $. Then
$|G|= |C_G(x_1)|\, |C_G(x_2)| \sum_{i=1}^k {\frac {n_i} {|C_G(x_i)|}}$.
\begin{quote}
\emph{Proof:}
Again let $I= Inv(G)$ and $\Omega= x_1^G \times x_2^G$, so that
$|\Omega|= |x_1^G|\, |x_2^G|= |G:C_G(x_1)|\,|G:C_G(x_2)|$.
For $(u,v) \in \Omega$ we have $u \notin v^G$, so $|uv|$ is even and there is a unique involution $z(u,v) \in \langle uv \rangle $.
Let $\Omega_z= \{ (u,v) \in \Omega : z=z(u,v) \}$.  Then $\Omega= \bigcup_{z \in I} \Omega_z$ and
$|\Omega_z|= |\Omega_{x_i}| = n_i$ for $z \in x_i^G$.  So $\sum_{z \in x_i^G} |\Omega_z| = |G:C_G(x_i)|\, n_i$
and $|\Omega|= \sum_{i=1}^k |G:C_G(x_i)|\,n_i$. Comparing the two expressions for $|\Omega|$ and dividing by $|G|$ gives the formula.
\end{quote}
{\bf Remark:} $n_i$ can be calculated if $x_i^G \cap C_G(x_i)$ is known, so $|G|$ can be
computed from the fusion of involutions in local subgroups.
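As a quick sanity check (our addition, not part of the notes), the formula can be verified by brute force on $S_4$, which has exactly two conjugacy classes of involutions (transpositions and $(2,2)$-elements):
\begin{verbatim}
from itertools import permutations

G = list(permutations(range(4)))          # S_4 as permutation tuples
e = tuple(range(4))
mul = lambda p, q: tuple(p[q[i]] for i in range(4))

def inv(p):
    r = [0] * 4
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

conj = lambda p, g: mul(inv(g), mul(p, g))   # g^{-1} p g

invols = [p for p in G if p != e and mul(p, p) == e]
classes, seen = [], set()                    # involution classes
for t in invols:
    if t not in seen:
        cl = {conj(t, g) for g in G}
        seen |= cl
        classes.append(cl)

def gen(c):                                  # the cyclic group <c>
    out, q = {e}, c
    while q != e:
        out.add(q)
        q = mul(q, c)
    return out

x = [next(iter(cl)) for cl in classes]       # class representatives
cent = [sum(1 for g in G if mul(g, xi) == mul(xi, g)) for xi in x]
n = [sum(1 for u in classes[0] for v in classes[1]
         if xi in gen(mul(u, v))) for xi in x]
rhs = cent[0] * cent[1] * sum(n[i] / cent[i] for i in range(len(x)))
print(len(G), rhs)                           # prints: 24 24.0
\end{verbatim}
Both printed values equal $24$, as the formula asserts.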
If $\epsilon=0$,
$G \cong Q(2^n)$. If $\epsilon=1$ then, since $z \in {\mathbb Z}(G)$,
$(yx)^2= y^2 x^y x = z\, x^{-1} z\, x = z^2 = 1$, so
the extension does split after all.
\end{quote}
{\bf Theorem 5:} Let $G = \langle x \rangle $ with $|x|=q=p^n$, and $A=Aut(G)$. Then:
(1) the map $a \mapsto m(a)$, where $x^a = x^{m(a)}$, is an isomorphism of $A$ onto the group $U(q)$ of units of ${\mathbb Z}/q{\mathbb Z}$;
(2) the cyclic subgroup of $A$ of order $p-1$ is faithful on $\Omega_1(G)$; and,
(3) $P \in S_p(A)$ is cyclic and faithful.
\begin{quote}
\emph{Proof:}  Elementary.
\end{quote}
{\bf Theorem 6:}
Let $G$ be a non-abelian group with a cyclic normal subgroup, $U$,
of order $p^{n}$ and $C_G(U)=U$.  Then either
(1)
$G \cong D(2^{n+1})$, $G \cong SD(2^{n+1})$, or $G \cong Q(2^{n+1})$; or,
(2) $M=C_G(\mho^1(U)) \cong Mod(p^{n+1})$ and $\Omega_1(M) \cong E_{p^2}$ is characteristic in $G$.
\begin{quote}
\emph{Proof:}
Let ${\overline G}= G/U$, which embeds in $Aut(U)$. ${\overline G} \ne 1$ and $n \ge 2$. If
$|{\overline G}| = p$, the conclusion holds by the previous results.
Otherwise there exists ${\overline y} \in {\overline G}$ with $|{\overline y}|=p$ and $u^y= u^{p^{n-1}+1}$, where
$U= \langle u \rangle $.
Set $M= \langle y,u \rangle $; then
$M \cong Mod(p^{n+1})$, and $E= \Omega_1(M) \cong E_{p^2}$ with $E \; char \; G$.  Since
${\overline G}$ is abelian, ${\overline G}$ is cyclic, or $p=2$ and $u^g= u^{-1}$ for some $g$.   In the first case,
$\Omega_1({\overline G})= {\overline M}$ and $E= \Omega_1(M)= \Omega_1(G) \; char \; G$.
In the second case, $\mho^1(U)= \langle u^2 \rangle = \langle [u,g] \rangle $
and $G' \subseteq U$, so
$G' = \mho^1(U)$, or $U \; char \; G' \; char \; G$.
Thus $E= \Omega_1(C_G(\mho^1(U))) \; char \; G$.
\end{quote}
Then $[x_i , \alpha ]= z^{m_i}$ with
$0 \leq m_i <p$; since $E= \langle x_i, 1 \leq i \leq n \rangle$, $\alpha$ is determined by the $m_i$, and thus $|C| \leq p^n =|E/Z|$.
\end{quote}
{\bf Definition:}
A $p$-group is of symplectic type if it has no non-cyclic characteristic abelian subgroups.
\\
\\
{\bf Theorem 10:}
If $G$ is of symplectic type, then $G= E*R$ where (1) $E=1$ or $E$ is extra-special;
(2) either $R$ is cyclic, or $R$ is dihedral, semi-dihedral or quaternion of order
$\ge 16$.
\begin{quote}
\emph{Proof:}
$G$ has a critical subgroup $H$; let $U= {\mathbb Z}(H)$ and let $Z$ be the cyclic subgroup of $U$ of order $p$.
Set $G^*= G/Z$, put $K^* = \Omega_1(H^*)$, and let $E^*$ be a complement to ${\mathbb Z}(K)^*$ in $K^*$.
One can show $K$ is extra-special; see Aschbacher, p.~109, for the rest of the argument.
\end{quote}
{\bf Theorem 11:}
Let $E$ be an extra-special $p$-group with $Z={\mathbb Z}(E)$ and ${\overline E}= E/Z$.  Identify
$Z$ with ${\mathbb Z}_p$ and regard ${\overline E}$ as a vector space over $Z$. Then:
(1) $f: {\overline E} \times {\overline E} \rightarrow Z$
given by $f({\overline x}, {\overline y}) = [x, y]$ is a symplectic form;
(2) $m({\overline E})= 2n$ for some integer $n$;
(3) if $p=2$, then $Q({\overline x})= x^2$ is the associated quadratic form;
(4) if $Z \le U \le E$, then $U$ is extra-special or abelian iff ${\overline U}$ is
non-degenerate or totally isotropic, respectively; if $p=2$, then $U$ is elementary abelian
iff ${\overline U}$ is totally singular.
\begin{quote}
\emph{Proof:}
Aschbacher, p.~111.
\end{quote}
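{\bf Example} (an added illustration, not from the original notes):
The extra-special groups of order $2^3$ are $D(8)$ and $Q(8)$, and in both cases ${\overline E}$ is a $2$-dimensional symplectic space over ${\mathbb F}_2$. The quadratic form $Q$ of (3) distinguishes them: $D(8)$ has non-central involutions, so $Q$ has nonzero singular vectors, while every element of $Q(8) \setminus Z$ has order $4$, so $Q$ is nonzero on all of ${\overline E} \setminus \{0\}$.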
Hence $z=[x,y]$ has order at most $p$; since\r\n$X^{(1)} \\le A$, $z \\in \\Omega_1(A)= Z$.  As $X \\le C(Z)$, $exp(U)=p$, contrary to the choice of $U$.\r\n\\end{quote}\r\n{\\bf Theorem 15:}\r\nLet $G$ be an abelian $p$-group and $A$ a $p'$-group of automorphisms of $G$.\r\nThen $A$ is faithful on $\\Omega_1(G)$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSuppose $A$ centralizes $\\Omega_1(G)$; we show $A=1$.  Let $|X|= p$ in $G$ and ${\\overline G}= G/X$.\r\n$A$ is faithful on $\\Omega_1({\\overline G})$ and so WLOG, ${\\overline G}= \\Omega_1({\\overline G})$.\r\nWe may take $C_{\\overline G}(A)=1$, so $X= \\Omega_1(G)$ and $G$ is cyclic.  But this contradicts\r\nthe known structure of the automorphism group.\r\n\\end{quote}\r\n{\\bf Theorem 16:}\r\nLet $A$ be a $p'$-group of automorphisms of the $p$-group $G$ and let $H$ be a critical subgroup of $G$; then (1) $A$ is faithful on $H$;\r\n(2) if $p \\ne 2$ then $A$ is faithful on $\\Omega_1(H)$ and there is a critical subgroup\r\nof $G$ such that $\\Omega_1(H)$ contains each element of order $p$ in\r\n$C_G(\\Omega_1(H))$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$C_G(H) \\leq H$ so by the $p \\times q$ lemma (with $P=H, Q = C_A(H)$), $C_A(H)=1$,\r\nproving (1).  For (2), see Aschbacher p 114.\r\n\\end{quote}\r\n\\section{More on special $p$-groups}\r\n{\\bf Lemma:}\r\nIf $M$ is a normal subgroup of a $p$-group, $P$, maximal subject to being\r\nabelian then $M= C_P(M)$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$M \\subseteq H = C_P(M)$ since $M$ is abelian.  Suppose $H \\supset M$ and set\r\n${\\overline P}= P/M$.  ${\\overline H} \\subset {\\overline P}$, $H \\lhd P$ and\r\n${\\overline H} \\ne 1$ so ${\\overline H} \\cap {\\mathbb Z}({\\overline P}) \\ne 1$.\r\nIf ${\\overline X}$ is a subgroup of order $p$ of ${\\overline H} \\cap {\\mathbb Z}({\\overline P})$,\r\nthen $X \\lhd P$ and $X \\subseteq H$.  Since $H$ centralizes $M$, $M \\subseteq {\\mathbb Z}(X)$.\r\n$X/M$ is cyclic of order $p$ and so $X$ is abelian; this is a contradiction.\r\n\\end{quote}\r\n{\\bf Theorem 17:}\r\nA $p$-group, $P$, possesses a characteristic subgroup, $C$, with the following properties:\r\n(1) $cl(C) \\le 2$;\r\n(2) $[P, C] \\subseteq {\\mathbb Z}(C)$;\r\n(3) $C_P(C)= {\\mathbb Z}(C)$;\r\n(4) Every nontrivial $p'$ automorphism of $P$ induces a non-trivial\r\nautomorphism of $C$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSuppose that such a characteristic subgroup $C$ exists.  Let $\\phi$ be a $p'$ automorphism\r\nof $P$ which acts trivially on  $C$ and put $A= \\langle \\phi \\rangle$.\r\n$[C, A] = 1$ so $[C, A, P] =1$, and $[P, C, A]= 1$ since $[P,C] \\subseteq C$; so\r\n$[A, P, C] = 1$ by the three subgroups lemma.  Then $[A,P] \\subseteq C_P(C)= {\\mathbb Z}(C) \\subseteq C$, so $A$ stabilizes the chain\r\n$P \\supseteq [A, P] \\supseteq 1$; hence $A = 1$, so $\\phi= 1$ and 3 implies 4.\r\nLet $M$ \r\nbe a normal subgroup of $P$, maximal subject to being abelian, so $C_P(M)=M$ by the Lemma.\r\nIf $M \\; char \\; P$, $C= M$ satisfies the theorem because $C_P(C) \\subseteq C$ and\r\n$C$ is of class $1$; further, $C/{\\mathbb Z}(C)$ is trivial and $[P,C] \\subseteq C = {\\mathbb Z}(C)$,\r\nand so 1 and 2 hold.\r\n\\\\\r\n\\\\\r\nOtherwise, let $D$ be a maximal characteristic abelian subgroup of $P$; choosing $M$ to contain $D$, we have $D \\subset M$ since $M$ is not characteristic.\r\n$M \\subseteq H=C_P(D)$ and so $D \\subset H$ and $H \\; char \\; P$.  Set ${\\overline P}= P/D$;\r\nthen ${\\overline H} \\ne 1$, so \r\n${\\overline C} = {\\overline H} \\cap \\Omega_1({\\mathbb Z}({\\overline P})) \\ne 1$.\r\nWe claim the preimage, $C$, has the required properties.  First, the inverse image,\r\n$K$, of $\\Omega_1({\\mathbb Z}({\\overline P}))$ is characteristic in $P$ and so\r\n$C = H \\cap K \\; char \\; P$.  
Since $C \\subseteq H= C_P(D)$, $D \\subseteq {\\mathbb Z}(C)$; but ${\\mathbb Z}(C)$\r\nis characteristic in $P$ and abelian, so ${\\mathbb Z}(C)=D$ by the maximality of $D$.\r\nBut then $C/{\\mathbb Z}(C)$ is elementary abelian and $cl(C) \\le 2$.  Further,\r\nsince ${\\overline C} \\subseteq {\\mathbb Z}({\\overline P})$, $[{\\overline C}, {\\overline P}]= 1$,\r\nwhence $[P, C] \\subseteq D$ and $C$ satisfies 1 and 2.\r\nNow set $Q= C_P(C)$ and suppose $Q \\nsubseteq C$.  $Q \\cap C = D$ and $Q \\subseteq H$ since\r\n$Q$ centralizes $D$.  Thus\r\n${\\overline Q} \\cap {\\overline C} = 1$, ${\\overline Q} \\lhd {\\overline P}$ and\r\n${\\overline Q} \\ne 1$.  \r\nBut $1 \\ne {\\overline Q} \\cap \\Omega_1({\\mathbb Z}({\\overline P})) \\subseteq {\\overline C}$ (as $Q \\subseteq H$), and thus\r\n${\\overline Q} \\cap {\\overline C} \\ne 1$, a contradiction.  So $Q \\subseteq C$ and \r\n$Q= {\\mathbb Z}(C)$ and 3 holds.\r\n\\end{quote}\r\n{\\bf Theorem 18:} If $P$ is a $p$-group then $\\Phi(P)$ is the smallest subgroup, $H$, such that\r\n$P/H$ is elementary abelian.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nIf $M$ is a maximal subgroup then $M \\lhd P$ and $|P:M|=p$, so $P' \\leq M$; hence $P' \\leq \\Phi(P)$ and\r\n$P/ \\Phi(P)$ is abelian.  Also $g^p \\in M$ for every $g$ and every maximal $M$, so $g^p \\in \\Phi(P)$; hence $P/ \\Phi(P)$ is elementary abelian.\r\nConversely, if $P/H$ is elementary abelian then $H$ is an intersection of maximal subgroups of $P$, so $\\Phi(P) \\leq H$.\r\n\\end{quote}\r\n{\\bf Theorem 19:}\r\nLet $A$ be a $p'$-group of automorphisms of a $p$-group $P$ and let $\\phi \\in A^{\\#}$;\r\nthen $P$ possesses an $A$-invariant special subgroup $Q$ such that $A$ acts irreducibly\r\non $Q/\\Phi(Q)$, $\\phi$ acts non-trivially on  $Q/\\Phi(Q)$ and $\\phi$ acts trivially on\r\n$\\Phi(Q)$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSuppose $b \\in A$, $[b,P/\\Phi(P)]=1$.  WTS $b=1$.  If not, some power of $b$ is a non-trivial\r\n$q$-element for a prime $q \\ne p$; replace $b$ by it.  Put $B= \\langle b \\rangle$ and let $g \\in P$.  $B$ acts on the coset\r\n$X= g \\Phi(P)$; the number of fixed points $m \\equiv |X| \\pmod q$, and $|X|= |\\Phi(P)|$ is a power of $p$.\r\n$|X| \\not\\equiv 0 \\pmod q$, so $m \\ne 0$ and $[B,x]=1$ for some $x \\in X$; hence $B$ centralizes a set, $Y$, of\r\ncoset representatives for $\\Phi(P)$ in $P$, so $P= \\langle Y \\rangle \\leq C_P(B)$ (as $\\Phi(P)$ consists of non-generators)\r\nand $B=1$.\r\n\\end{quote}\r\n\r\n", "meta": {"hexsha": "df2b54c14f49194f34c8cde80cf07a9a21d6c14f", "size": 18129, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "groups/gtInvolutions.tex", "max_stars_repo_name": "jlmucb/class_notes", "max_stars_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "groups/gtInvolutions.tex", "max_issues_repo_name": "jlmucb/class_notes", "max_issues_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "groups/gtInvolutions.tex", "max_forks_repo_name": "jlmucb/class_notes", "max_forks_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.3569405099, "max_line_length": 112, "alphanum_fraction": 0.5767003144, "num_tokens": 7582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006919925839874, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5926516848006016}}
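{\\bf Example (added, not in the source notes):}  As a concrete check of Theorem 11, the two extra-special groups of order $8$ illustrate the two types of quadratic form.
\\begin{quote}
For $E= D(8)= \\langle r, s \\mid r^4= s^2= 1, \\; s r s= r^{-1} \\rangle$ we have $Z= {\\mathbb Z}(E)= \\langle r^2 \\rangle$
and ${\\overline E}$ is $2$-dimensional over ${\\mathbb Z}_2$.  Here $Q({\\overline x})= x^2$ vanishes at
${\\overline s}$ and ${\\overline {rs}}$ (both $s$ and $rs$ are involutions), so $Q$ has non-zero singular vectors and is of
$+$ type.  For $E= Q(8)$ every element outside $Z$ has order $4$, so $Q$ is non-zero on every non-zero vector and is of
$-$ type.  In both cases $f({\\overline x}, {\\overline y})= [x, y]$ is the unique non-degenerate symplectic form on a
$2$-dimensional space.
\\end{quote}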
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{xcolor}\n\\usepackage{amsthm}\n\\usepackage[mathcal]{euscript}\n\n\\usepackage{url}\n\n\\newcommand{\\Hcal}{\\mathcal{H}}\n\\newcommand{\\real}{\\mathbb{R}}\n\\newcommand{\\interior}[1]{%\n  {\\kern0pt#1}^{\\mathrm{o}}%\n}\n\\newcommand{\\Tcal}{\\mathcal{T}}\n\\newcommand{\\Lcal}{\\mathcal{L}}\n\n\\title{Notes on Minkowski Functional}\n\\author{Nazarov Ivan}\n\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\nConsider a Banach space $(\\Hcal, \\|\\cdot\\|)$ with the norm $\\Tcal^{\\|\\cdot\\|}_\\Hcal$\ntopology on it $\\Hcal$. In the following $[C]$ is the closure of a set $C$ in the\ntopology of $\\Hcal$: $[C]$ the smallest closed set covering $C$, $[C] = \\bigl[[C]\\bigr]$,\nand monotone. In contrast, $\\interior{C}$ --- the topological interior of $C$ in\n$\\Hcal$, --- is the largest open set contained within $C$.\n\nFor a nonempty $C \\subseteq \\Hcal$ the {\\it Minkowski Functional} $p\\colon \\Hcal\n\\to [0, +\\infty]$ is\n\\begin{equation*}\n  p(x)\n    = \\inf \\bigl\\{ \\lambda > 0\\,:\\, x \\in \\lambda C\\bigr\\}\n    \\,,\n\\end{equation*}\nwhere $\\lambda C = \\{\\lambda z \\,:\\, z\\in C\\}$ is the $\\lambda$-inflation of $C$,\n$\\lambda \\in\\real$. This functional has many useful properties when $C$ is convex\nand the interior of $C$ contains $0$.\n\n\\paragraph{Properties of $\\lambda$-inflation} % (fold)\n\\label{par:properties_of_lambda_inflation}\n\nConsider any $C\\subseteq \\Hcal$, any $t \\neq 0$ and any $x\\in \\Hcal$. Then $x\\in t C$\nmeans that there exists $z \\in C$ such that $x = t z$. Therefore $\\tfrac1t x = z\n\\in C$, since $\\Hcal$ is a vector space. For such $(C, x, t)$ we have the implication\n$x \\in t C \\Rightarrow \\tfrac1t x \\in C$.\n\nWe get the reverse implication by applying the forward one above to the tuple $(C, x, t)\n= (\\alpha K, \\tfrac1\\alpha z, \\tfrac1\\alpha)$, noting that $z = \\alpha x$. Therefore\n$x\\in t C$ iff $\\tfrac1t x \\in C$ for any $C\\subseteq \\Hcal$, $x\\in \\Hcal$ and $t > 0$.\nFurthermore, for any $\\alpha, \\beta \\neq 0$ we have\n\\begin{equation*}\n  \\alpha x \\in \\beta C \\Leftrightarrow\n  \\beta^{-1} \\alpha x \\in C \\Leftrightarrow\n  \\beta^{-1} x \\in \\alpha^{-1} C \\Leftrightarrow\n  x \\in \\beta \\alpha^{-1} C\n  \\,.\n\\end{equation*}\n\nSuppose $C$ is convex, then $\\beta C$ is convex for any $\\beta$. Indeed, for $x_1,\nx_0 \\in \\beta C$ and $\\theta\\in [0, 1]$ there are $z_1, z_0 \\in C$ such that $x_i\n= \\beta z_i$. But then $x_\\theta \\in \\beta C$, since we have $z_\\theta = \\theta z_1\n+ (1-\\theta) z_0 \\in C$ and\n\\begin{equation*}\n  x_\\theta\n  = \\theta x_1 + (1-\\theta) x_0\n  = \\theta \\beta z_1 + (1-\\theta) \\beta z_0\n  = \\beta z_\\theta\n  \\,.\n\\end{equation*}\n\nSuppose $C$ is convex and $0\\in C$. Then for any $\\theta \\in (0, 1)$ and $x\\in C$\nwe have $\\theta x = (1 - \\theta)\\, 0 + \\theta x \\in C$, hence $x \\in \\theta^{-1} C$.\nTherefore $C \\subseteq \\lambda C$ for all $\\lambda \\geq 1$. Next, if $\\alpha > \\beta$,\nthen the $\\beta$-inflation of $C$ is convex, and $0 \\in \\beta C$. Therefore $\\beta C\n\\subseteq \\tfrac\\alpha\\beta (\\beta C) = \\alpha C$.\n\n% paragraph properties_of_lambda_inflation (end)\n\n\\paragraph{Zero} % (fold)\n\\label{par:zero}\n\nThe Minkowski functional is zero on $0$ if $0\\in C$. 
Indeed, for any $\\lambda > 0$\nwe have $0 \\in \\lambda C$, since $0 = \\lambda 0$. Therefore, $p(0) \\leq \\lambda$\nfor all $\\lambda > 0$, whence $p(0) = 0$.\n\nConversely, if $p(0) = 0$, then $0\\in \\lambda C$ for some $\\lambda > 0$, which\nmeans that there is $z\\in C$ such that $0 = \\lambda z$, i.e. $z = 0$. Hence $0\\in C$.\n\n% paragraph zero (end)\n\n\\paragraph{Positive homogeneity} % (fold)\n\\label{par:pos_homogeneity}\n\nLet $t > 0$. If $x\\in\\Hcal$ is such that $p(x) = +\\infty$, then $x\\notin \\lambda C$\nfor any $\\lambda>0$. In particular $t x\\notin \\lambda C$ for any $\\lambda > 0$, since $t x \\in \\lambda C$\nwould mean $x \\in \\tfrac\\lambda{t} C$; whence $p(tx)\n= +\\infty$.\n\nIf $x \\in \\Hcal$ is such that $p(x) < +\\infty$, then for any $U > p(x)$ there exists\n$\\lambda < U$ with $x\\in \\lambda C$. Hence $t x \\in t \\lambda C$ and $p(t x) \\leq\nt \\lambda < t U$. Therefore $p(t x) < +\\infty$ and $p(t x) \\leq t p(x)$.\n\nThe opposite inequality follows from applying the direct one to $(x, t) = (\\alpha\nz, \\alpha^{-1})$ for any $\\alpha > 0$ and $z \\in \\Hcal$. Indeed, since $\\alpha^{-1}\nx = z$, $p(\\alpha^{-1} x) \\leq \\alpha^{-1} p(x)$ implies $\\alpha p(z) \\leq p(\\alpha z)$.\nTherefore $p(t x) = t p(x)$ for any $x \\in \\Hcal$ and every $t > 0$.\n\nBy positive homogeneity we have $p(0) = p(t \\cdot 0) = t p(0)$ for any $t > 0$, which\ncan be satisfied only if $p(0) \\in \\{0, \\infty\\}$. Hence, $p(0) = 0$ iff $0\\in C$, \nand $p(0)\\neq 0$ iff $0\\notin C$, in which case $p(0) = +\\infty$.\n\n% paragraph pos_homogeneity (end)\n\n\\paragraph{Homogeneity via balancedness} % (fold)\n\\label{par:homogeneity_via_balancedness}\n\nIf $C$ is balanced then $p(x)$ is homogeneous.\n\nA set $C$ is balanced if $\\beta C \\subset C$ for all $\\lvert \\beta \\rvert \\leq 1$.\nFor any balanced $C$ we have $\\lambda C \\subset C$ in particular for $\\lambda = 0$,\nwhence $\\{0\\} = 0 C \\subset C$. Note that for any $\\alpha \\neq 0$ the set $\\alpha C$\nis balanced as well. Indeed, we have $\\beta (\\alpha C) = \\alpha (\\beta C) \\subset\n\\alpha C$.\n\nSuppose $\\alpha x \\in \\lambda C$ for $\\alpha \\neq 0$ and $\\lambda > 0$. Then we have\n\\begin{equation*}\n  \\lvert \\alpha \\rvert x\n    = \\tfrac{\\lvert \\alpha \\rvert}\\alpha (\\alpha x)\n    \\in \\tfrac{\\lvert \\alpha \\rvert}\\alpha (\\lambda C)\n    = \\lambda \\Bigl( \\tfrac{\\lvert \\alpha \\rvert}\\alpha C \\Bigr)\n    \\subset \\lambda C\n    \\,,\n\\end{equation*}\nby the definition of set inflation, the vector space structure of $\\Hcal$, and the balancedness of\n$C$. Conversely $\\lvert \\alpha \\rvert x \\in \\lambda C$ implies $\\alpha x \\in \\lambda\nC$:\n\\begin{equation*}\n  \\alpha x\n    = \\tfrac\\alpha{\\lvert \\alpha \\rvert} (\\lvert \\alpha \\rvert x)\n    \\in \\tfrac\\alpha{\\lvert \\alpha \\rvert} (\\lambda C)\n    = \\lambda \\Bigl( \\tfrac\\alpha{\\lvert \\alpha \\rvert} C \\Bigr)\n    \\subset \\lambda C\n    \\,.\n\\end{equation*}\nFor $t = 0$ we have $p(0 x) = p(0) = 0$, since $0\\in C$ for a balanced $C$; for $t\\neq 0$ we consider the\nfollowing:\n\\begin{equation*}\n  p(t x)\n    = \\inf\\{\\lambda > 0\\colon t x \\in \\lambda C\\}\n    = \\inf\\{\\lambda > 0\\colon \\lvert t \\rvert x \\in \\lambda C\\}\n    = p\\bigl( \\lvert t \\rvert x \\bigr) = \\lvert t \\rvert p(x)\n    \\,.\n\\end{equation*}\n\n% paragraph homogeneity_via_balancedness (end)\n\n\\paragraph{Subadditivity} % (fold)\n\\label{par:subadditivity}\n\nWe claim that $p$ is subadditive if $C$ is convex.\n\nConsider $x_1, x_2 \\in \\Hcal$. 
If $p(x_i) = +\\infty$ for some $i$, then trivially $p\\bigl( x_1\n+ x_2 \\bigr) \\leq +\\infty =  p(x_1) + p(x_2)$. Suppose $p(x_i) < + \\infty$ for both $i$. Then\nfor any $\\varepsilon > 0$ there are $\\lambda_i > 0$ with $x_i \\in \\lambda_i C$ such\nthat $\\lambda_i < p(x_i) + \\tfrac12\\varepsilon$ for any $i$. We argue that $x_1 + x_2\n\\in (\\lambda_1 + \\lambda_2) C$, since $C$ is convex, $\\tfrac{x_i}{\\lambda_i} \\in C$,\nand\n\\begin{equation*}\n  \\tfrac1{\\lambda_1 + \\lambda_2} (x_1 + x_2)\n    = \\tfrac{\\lambda_1}{\\lambda_1 + \\lambda_2} \\frac{x_1}{\\lambda_1}\n      + \\tfrac{\\lambda_2}{\\lambda_1 + \\lambda_2} \\frac{x_2}{\\lambda_2}\n    \\,.\n\\end{equation*}\nTherefore $p\\bigl( x_1 + x_2 \\bigr) \\leq p(x_1) + p(x_2)$, since for all $\\varepsilon\n> 0$\n\\begin{equation*}\n  p\\bigl( x_1 + x_2 \\bigr)\n    \\leq \\lambda_1 + \\lambda_2\n    < p(x_1) + p(x_2) + \\varepsilon\n    \\,.\n\\end{equation*}\n\n% paragraph subadditivity (end)\n\n\\paragraph{Sublinearity} % (fold)\n\\label{par:sublinearity}\n\nIf $C$ is convex and balanced (or just $0 \\in C$), then we have $p(t x) = t p(x)$\nfor any $t\\geq 0$ and $x\\in \\Hcal$ such that $p(x) < +\\infty$. This implies that\n$p(-t x)$ is finite when $p(-x)$ is finite. Hence if $p(x), p(-x) < +\\infty$, then\nfor $t \\leq 0$\n\\begin{equation*}\n  0 = p(0)\n    = p\\bigl( t x + (- t) x \\bigr)\n    \\leq p\\bigl( t x \\bigr) + p\\bigl( (- t) x \\bigr)\n    = p\\bigl( t x \\bigr) + (- t) p( x )\n    \\,,\n\\end{equation*}\nby subadditivity. Therefore $t p(x) \\leq p(t x)$ for any $t\\in \\real$ (for $t \\geq 0$ this is just positive homogeneity).\n\n% paragraph sublinearity (end)\n\n\\paragraph{The sufficient condition for finite values} % (fold)\n\\label{par:the_sufficient_condition_for_finite_values}\n\nFor $p(\\Hcal) \\subseteq \\real$ it suffices that $0$ lie in the topological\ninterior of $C$, i.e. in the largest open set that fits inside $C$. Indeed, in this\ncase there would be an $\\varepsilon > 0$, such that $B(0, \\varepsilon) = \\{z\\colon\n\\|z\\| < \\varepsilon\\} \\subseteq C$, implying that for a given $\\eta \\in (0, 1)$ we\nhave\n\\begin{equation*}\n  \\tfrac{\\eta}{\\|x\\|} x \\in B(0, 1) = \\varepsilon^{-1} B(0, \\varepsilon)\n  \\Leftrightarrow\n  \\tfrac{\\varepsilon \\eta}{\\|x\\|} x \\in B(0, \\varepsilon)\n  \\subseteq C\n  \\Leftrightarrow\n  x \\in \\tfrac{\\|x\\|}{\\eta \\varepsilon} C\n  \\,,\n\\end{equation*}\nfor any $x\\neq 0$. Therefore $p(x) < +\\infty$ for any $x\\in \\Hcal$ for such $C$.\n\n% paragraph the_sufficient_condition_for_finite_values (end)\n\n\\paragraph{Upper bound for continuity} % (fold)\n\\label{par:upper_bound_for_continuity}\n\nSuppose $C$ is convex and such that $p(\\Hcal)\\subseteq \\real$. Then for any $x, h\n\\in \\Hcal$ we have $p(x) \\leq p(x+h) + p(-h)$ and $p(x + h) \\leq p(x) + p(h)$ by\nsubadditivity. Together with positive homogeneity, this implies that for any $x \\in\n\\Hcal$ and $h\\neq 0$\n\\begin{equation*}\n  % the rhs is of the form $\\|h\\| \\ldots$, implying that the bound holds for $h=0$\n  \\bigl\\lvert p(x + h) - p(x) \\bigr\\rvert\n    % \\leq \\max\\{p(h), p(- h)\\}\n    \\leq \\| h \\| \\max\\Bigl\\{\n        p\\Bigl(\\tfrac{h}{\\|h\\|}\\Bigr),\n        p\\Bigl(- \\tfrac{h}{\\|h\\|}\\Bigr)\n      \\Bigr\\}\n    \\leq \\| h \\| \\sup \\{p(u)\\colon \\|u\\|\\leq 1\\}\n    % \\Big\\vert_{u = \\|h\\|^{-1} h}\n    \\,.\n\\end{equation*}\nSince the right-hand side does not depend on $x$, we note that the bound holds\nuniformly over $x \\in \\Hcal$. 
Therefore, in order to show (uniform) continuity it\nis enough to prove that $p$ is bounded within $B[0, 1] = \\{u\\colon \\|u\\| \\leq 1\\}$\n--- the closed unit ball in $\\Hcal$.\n\n% paragraph upper_bound_for_continuity (end)\n\n\\paragraph{Preimage inclusion} % (fold)\n\\label{par:preimage_inclusion}\n\nConsider a convex $C$ with $0\\in C$. Then for any $\\alpha > 0$ we have the following.\nIf $x \\in \\alpha C$ for some $\\alpha > 0$, then by definition $p(x) \\leq \\alpha$ and\nthus $\\alpha C \\subseteq \\{p \\leq \\alpha\\}$. Conversely, if $x\\in \\{p < \\alpha\\}$,\nthen $x \\in \\lambda_x C$ for some $p(x) \\leq \\lambda_x < \\alpha$. Since $C$ is convex\nand $0\\in C$, it must hold that $\\lambda_x C \\subseteq \\alpha C$. Therefore\n\\begin{equation*}\n  \\{p < \\alpha \\}\n    \\subseteq \\alpha C\n    \\subseteq \\{p \\leq \\alpha \\}\n    \\,,\n\\end{equation*}\nfor any $\\alpha > 0$. In particular we have $C \\subseteq \\{p \\leq 1\\}$ and by positive\nhomogeneity\n\\begin{equation*}\n  \\alpha \\{p\\leq 1\\}\n    = \\{\\alpha x \\in \\Hcal \\colon p(x) \\leq 1\\}\n    = \\bigl\\{z \\in \\Hcal \\colon \\tfrac1{\\alpha} p(z) \\leq 1\\bigr\\}\n    = \\{p \\leq \\alpha\\}\n    \\,.\n\\end{equation*}\nConsider the scaling map $x\\mapsto \\sigma_\\alpha(x) = \\alpha x$ on a vector space.\nIts inverse is $\\sigma_\\alpha^{-1} = \\sigma_{\\alpha^{-1}}$, and it is easy to see\nthat $\\sigma_\\alpha$ is a bounded linear map. By positive homogeneity of $p$ the maps\n$\\sigma_\\alpha$ and $p$ commute: $\\sigma_\\alpha \\circ p = p \\circ \\sigma_\\alpha$\nfor all $\\alpha > 0$. Therefore, for any $U \\subseteq [0, +\\infty)$ and $\\alpha > 0$\nwe get $\\{p\\in \\alpha U\\} = \\alpha \\{ p\\in U \\}$:\n\\begin{align*}\n    \\sigma_\\alpha \\bigl( \\{p \\in U\\} \\bigr)\n    &= \\bigl\\{\\sigma_\\alpha^{-1} \\circ p \\in U\\bigr\\}\n    = \\bigl\\{\\sigma_{\\alpha^{-1}} \\circ p \\in U\\bigr\\}\n    \\\\\n    &= \\bigl\\{p \\circ \\sigma_{\\alpha^{-1}} \\in U\\bigr\\}\n    = \\bigl\\{p \\in \\sigma_{\\alpha^{-1}}^{-1}(U)\\bigr\\}\n    = \\bigl\\{p \\in \\sigma_\\alpha(U)\\bigr\\}\n    \\,,\n\\end{align*}\nbecause for any maps $f$ and $h$ the preimage under their composition is\n\\begin{equation*}\n  \\{f\\circ h \\in U\\}\n    = \\bigl\\{ h \\in f^{-1}(U) \\bigr\\}\n    = h^{-1}\\bigl(\\{f \\in U \\}\\bigr)\n    \\,.\n\\end{equation*}\n\n% paragraph preimage_inclusion (end)\n\n\\paragraph{Boundedness} % (fold)\n\\label{par:boundedness}\n\nConsider $C$ that has $0$ in its topological interior. This means that there is $\\delta\n> 0$, such that $B(0, \\delta) \\subseteq C$. Thus for any $\\eta \\in (0, 1)$\n\\begin{equation*}\n  B[0, 1]\n    = \\{u\\colon \\|u\\| \\leq 1\\}\n    % since [0, ]\n    \\subseteq B\\bigl( 0, \\tfrac1\\eta \\bigr)\n    % scaling and translation of balls (continuous)\n    = \\tfrac1{\\delta \\eta} B(0, \\delta)\n    % inflation is a monotone set-automorphism\n    \\subseteq \\tfrac1{\\delta \\eta} C\n    % proven fact that $\\alpha C \\subseteq \\{p \\leq \\alpha\\}$\n    \\subseteq \\bigl\\{p \\leq \\tfrac1{\\delta\\eta} \\bigr\\}\n    \\,,\n\\end{equation*}\nsince $[0, 1] \\subset [0, \\eta^{-1})$. Hence there exists $M \\geq 0$ that depends only\non $C$, such that $p(u) \\leq M$ for any $u \\in B[0, 1]$. It is worth recalling that\nthe rightmost set inclusion follows from the definition of $p$, and does not require\nconvexity of $C$.\n\nIf $M = 0$, then $p(x) = 0$ for any $x\\in \\Hcal$, which means that $x\\in \\lambda C$\nfor every $\\lambda > 0$. 
Hence $\\{t x\\colon t > 0\\} \\subseteq C$ for any $x\\in \\Hcal$,\nand $\\Hcal \\subseteq \\bigcup_{x\\in \\Hcal} \\{t x\\colon t > 0\\} \\subseteq C$. Therefore\n$C\\subset \\Hcal$ guarantees $\\sup\\{p(u)\\colon \\|u\\|\\leq 1\\} > 0$.\n\nIf a positively homogeneous $p$ is bounded on $\\{u\\colon \\|u\\|\\leq 1\\}$, then it\nis finite:\n\\begin{equation*}\n  p(x)\n    \\leq \\|x\\| p\\Bigl(\\tfrac{x}{\\|x\\|}\\Bigr)\n    \\leq \\|x\\| \\sup\\{p(u)\\colon \\|u\\|\\leq 1\\}\n    < +\\infty\n    \\,.\n\\end{equation*}\n\n% paragraph boundedness (end)\n\n\\paragraph{Continuity} % (fold)\n\\label{par:continuity}\n\nWe claim that for $p\\colon \\Hcal \\to [0, +\\infty]$ to be continuous, it suffices\nfor $C$ to have $0\\in \\interior{C}$ and be convex. Indeed, since $0\\in C$ for such\n$C$ and $p$ is real-valued, the upper bounds derived earlier hold true. Hence for\nany $h \\in \\Hcal$ with $\\|h\\| < \\tfrac\\varepsilon{M}$ and uniformly over $x$ we have\n\\begin{equation*}\n  \\bigl\\lvert p(x + h) - p(x) \\bigr\\rvert\n    \\leq\n      \\sup_{x\\in \\Hcal}\n        \\bigl\\lvert p(x + h) - p(x) \\bigr\\rvert\n    \\leq M \\| h \\| < \\varepsilon\n      \\,.\n\\end{equation*}\nTherefore $B(x, \\delta) \\subseteq \\bigl\\{ p\\in B(p(x), \\varepsilon)\\bigr\\}$ over\nall $x\\in \\Hcal$ for a certain $\\delta_\\varepsilon > 0$, that depends on $C$ and\n$\\varepsilon$, but {\\bf not on} $x$. Hence $p$ is uniformly continuous on $\\Hcal$.\n\nAs a reminder, let $U$ be open in $\\real_+ = [0, +\\infty)$. For any $x\\in \\{p\\in U\\}$\nthere is $\\varepsilon_x > 0$, such that $B(p(x), \\varepsilon_x) \\subseteq U$. In\nturn there exists $\\delta_x = \\delta_{\\varepsilon_x} > 0$ with\n\\begin{equation*}\n  x \\in B\\bigl(x, \\delta_x\\bigr)\n    % continuity at $x$\n    \\subseteq \\bigl\\{ p\\in B(p(x), \\varepsilon_x)\\bigr\\}\n    % preimage is monotone\n    \\subseteq \\{ p \\in U \\}\n    \\,.\n\\end{equation*}\n% Assume the norm topology $\\Tcal = \\Tcal^{\\|\\cdot\\|}_\\Hcal$ on $\\Hcal$.\nTherefore $\\{ p \\in U \\} = \\bigcup_{x\\in \\{ p \\in U \\}} B(x, \\delta_x)$, which\nestablishes the continuity of $p$.\n\n% paragraph continuity (end)\n\n\\paragraph{Various preimages} % (fold)\n\\label{par:various_preimages}\n\nLet's study how values of $p$ are related to the underlying set $C$, when the set\nhas {\\bf nice} properties: $C$ is convex and has $0\\in \\interior{C}$, i.e. when\n$p(\\Hcal) \\subseteq \\real$, $p$ becomes continuous, and $\\alpha \\leq \\beta$ implies\n$\\alpha C \\subseteq \\beta C$. % See \\ref{pr01}, \\ref{pr03}, and \\ref{pr05} in summary.\n\n% paragraph various_preimages (end)\n\n\\paragraph{Preimage of $[0, 1]$} % (fold)\n\\label{par:preimage_of_0_1_closed}\n\nSuppose $C$ is such, that $p$ is continuous. Thus the set $\\{p\\leq 1\\}$ is closed in\n$\\Hcal$, whence we must have $[C] \\subseteq \\{p\\leq 1\\}$, since $C \\subseteq \\{p\\leq 1\\}$.\n\nSuppose $x\\in \\{p\\leq 1\\}$. By the assumptions on $C$ we have $x = 0 \\in C \\subseteq [C]$,\nso suppose $x\\neq 0$. For any $\\lambda > 1$ we have $x\\in \\lambda C$, implying that\n$t x \\in C$ for all $t \\in (0, 1)$. If for an arbitrary open ball $B(x, \\delta)$ we\ncan find $t < 1$, such that $t x \\in B(x, \\delta)$, then $B(x, \\delta) \\cap C \\neq\n\\emptyset$, and hence $x \\in [C]$. 
Indeed, for any $\\delta > 0$\n\\begin{equation*}\n  t \\in\\Bigl(\n    \\max\\bigl\\{1 - \\tfrac\\delta{\\|x\\|}, 0\\bigr\\}, 1\n  \\Bigr)\n    \\Leftrightarrow\n      \\lvert t - 1\\rvert < \\tfrac\\delta{\\|x\\|}\n      \\, \\& \\, t \\in (0, 1)\n    \\Rightarrow\n      t x \\in B(x, \\delta) \\cap C\n      \\,.\n\\end{equation*}\nTherefore, $[C] = \\{p \\leq 1\\}$.\n% Positive homogeneity implies that $\\alpha [C] $\n\n% paragraph preimage_of_0_1_closed (end)\n\n\\paragraph{Preimage of $[0, 1)$} % (fold)\n\\label{par:preimage_of_0_1_open}\n\nSuppose $C$ is such that $p$ is continuous. Since $\\{p < 1\\}$ is open by continuity\nof $p$ and $\\{p < 1\\} \\subseteq C$, we must have $\\{p < 1\\} \\subseteq \\interior{C}$.\n\nConversely, suppose $x\\in \\{p \\geq 1\\}$. Then for any $\\lambda < 1$ we have\n$x \\notin \\lambda C$, or, equivalently, $t x \\in \\Hcal \\setminus C$ for all $t > 1$.\nIn particular $x\\neq 0$, since $0\\in C$. For any $\\delta > 0$ we make the following argument:\n\\begin{equation*}\n  t \\in\\Bigl(1, 1 + \\tfrac\\delta{\\|x\\|} \\Bigr)\n    \\Leftrightarrow\n      \\lvert t - 1\\rvert < \\tfrac\\delta{\\|x\\|}\n      \\, \\& \\, t \\in (1, +\\infty)\n    \\Rightarrow\n      t x \\in B(x, \\delta) \\cap \\Hcal \\setminus C\n      \\,.\n\\end{equation*}\nThus $x\\in \\bigl[\\Hcal \\setminus C \\bigr]$. Note that $\\Hcal\\setminus \\interior{C}$\nis a closed set covering $\\Hcal \\setminus C$, which implies that it also covers the\nclosure. Therefore $\\{p \\geq 1\\} \\subseteq \\Hcal\\setminus \\interior{C}$, and $\\interior{C}\n\\subseteq \\{p < 1\\}$.\n\n% paragraph preimage_of_0_1_open (end)\n\n\\paragraph{Preimage of $\\{1\\}$} % (fold)\n\\label{par:preimage_of_1}\n\nWe claim that for a {\\bf nice} $C$, $\\partial C = \\{p = 1\\}$. Indeed, the result\nfollows immediately from\n\\begin{equation*}\n  \\{p = 1\\}\n    = \\{p \\leq 1\\} \\setminus \\{p < 1\\}\n    = [C] \\setminus \\interior{C}\n    = \\partial C\n    \\,.\n\\end{equation*}\nPositive homogeneity implies that the preimages of $[0, \\alpha)$ and $[0, \\alpha]$\nfor any $\\alpha > 0$ are given by $\\alpha$-inflations of the interior and the closure\nof $C$, respectively:\n\\begin{equation*}\n  \\{p \\leq \\alpha \\}\n    = \\alpha \\{p \\leq 1\\}\n    = \\alpha [C]\n    \\,, \\text{ and }\n  \\{p < \\alpha \\}\n    = \\alpha \\{p < 1\\}\n    = \\alpha \\interior{C}\n    \\,.\n\\end{equation*}\n\n% paragraph preimage_of_1 (end)\n\n\\paragraph{Bounding linear functionals} % (fold)\n\\label{par:bounding_linear_functionals}\n\nLet $l$ be a linear functional and $p$ be a positively homogeneous function taking\nvalues in $[0, \\infty]$. Suppose $l \\leq p$ on $\\Hcal$. Then for any $x\\in \\Hcal$ we\nhave $l(x) \\leq p(x)$, whence $- p(x) \\leq - l(x) = l(- x)$ by linearity of $l$. At\nthe same time $- x \\in \\Hcal$ implies $l(- x) \\leq p(- x)$. If $x=0$, then trivially\n$\\lvert l(0) \\rvert = 0$. 
However, for $x\\neq 0$ we get the following bound\n\\begin{multline*}\n  - p(- x) \\leq l(x) \\leq p(x)\n  \\Rightarrow\n    \\\\\n    \\lvert l(x) \\rvert\n    % \\leq (p(x) \\vee p(- x))\n      \\leq \\| x \\| \\max\\Bigl\\{\n          p\\Bigl(\\tfrac{x}{\\|x\\|}\\Bigr),\n          p\\Bigl(- \\tfrac{x}{\\|x\\|}\\Bigr)\n        \\Bigr\\}\n      \\leq \\|x\\| \\, \\sup\\{p(u)\\colon \\|u\\|\\leq 1\\}\n    \\,.\n\\end{multline*}\n% Note that there has been no need for $p(\\Hcal)\\subseteq \\real$ in deriving this bound.\nIf $p$ is the Minkowski functional of a set $C$ with $0\\in \\interior{C}$, then $p$\nis bounded on $B[0, 1]$ and positively homogeneous. Therefore this upper bound implies\nthat $l$ is a bounded linear functional. Note that convexity of $C$ is not necessary.\n\n% paragraph bounding_linear_functionals (end)\n\n\\paragraph{Discussion} % (fold)\n\\label{par:discussion}\n\nObviously, if the set $\\{ \\lambda > 0 \\colon x \\in \\lambda C \\}$ is non-empty for\nany $x \\in \\Hcal$, then $p(x)$ is finite-valued, because every $x$ is covered by\nat least one $\\lambda$-inflation.\n\nThis condition implies that $0 \\in C$. Indeed, under it there must be $\\lambda > 0$\nsuch that $0 \\in \\lambda C$, which in turn means that $\\exists z\\in C$ such that\n$0 = \\lambda z$. Since $\\lambda > 0$, we get $0 = z\\in C$.\n\nSo $0 \\in C$ seems necessary for finiteness. However, it is not sufficient. Consider\na cone $C = \\{t z\\colon z\\in A\\} = \\bigcup_{t\\geq 0} t A$ for $A = \\{z\\colon \\|z - a\\|\n\\leq \\|a\\|\\}$ for a given $a \\neq 0$. Since $t A = \\{z\\colon \\|z - t a\\| \\leq t \\|a\\| \\}$\nfor any $t\\geq 0$, we still have $-a \\notin C$: if it were otherwise, then $\\tfrac{- a}t\n\\in A$ for some $t > 0$ with $1 + \\tfrac1t \\leq 1$ --- a contradiction.\n\nNote that balancedness alone does not imply that $0 \\in \\interior{C}$, as exemplified\nby taking a union of a closed convex cone and its mirrored counterpart.\n\n% paragraph discussion (end)\n\n\\paragraph{Summary} % (fold)\n\\label{par:summary}\n\nList of properties\n\\begin{enumerate}\n  \\item \\label{pr01}\n    $C$ is nonempty\n  \\item \\label{pr02}\n    $C$ is closed\n  \\item \\label{pr03}\n    $C$ is convex\n  \\item \\label{pr04}\n    $0 \\in C$\n  \\item \\label{pr05}\n    $0$ is in the topological interior of $C$\n  \\item \\label{pr06}\n    $C$ is balanced: $\\lambda C \\subset C$ for all $\\lvert \\lambda \\rvert \\leq 1$\n  \\item \\label{pr07}\n    $C$ is absorbing at $0$: $\\{\\lambda>0\\colon x \\in \\lambda C \\} \\neq \\emptyset$\n    for all $x\\in \\Hcal$\n  \\item \\label{pr08}\n    $\\alpha C \\subseteq \\beta C$ whenever $\\alpha \\leq \\beta$\n  \\item \\label{pr09}\n    $p(x) < +\\infty$\n  \\item \\label{pr10}\n    $p(tx) = t p(x)$ for all $t > 0$\n  \\item \\label{pr11}\n    $p(tx) = \\lvert t \\rvert p(x)$ for all $t \\in \\real$\n  \\item \\label{pr12}\n    $t p(x) \\leq p(t x)$ for all $t \\in \\real$\n  \\item \\label{pr13}\n    $p(x_1 + x_2) \\leq p(x_1) + p(x_2)$ for any $x_1, x_2 \\in \\Hcal$\n  \\item \\label{pr14}\n    $p(0) = 0$\n  \\item \\label{pr15}\n    $A \\subseteq B$ implies $\\alpha A \\subseteq \\alpha B$\n  \\item \\label{pr16}\n    $\\alpha x \\in \\lambda C$ iff $\\lvert \\alpha \\rvert x \\in \\lambda C$ for any\n    $\\alpha \\neq 0$ and $\\lambda > 0$\n  \\item \\label{pr17}\n    $h \\in C$ if and only if $-h \\in C$\n  \\item \\label{pr18}\n    $p\\colon \\Hcal \\to [0, +\\infty)$ is a continuous map\n  \\item \\label{pr19}\n    $\\sup\\{p(u)\\colon 
\\|u\\|\\leq 1\\} < +\\infty$\n\\end{enumerate}\nAll implications tacitly assume $\\{\\ref{pr01}\\}$:\n$\\{\\ref{pr01}\\} \\Rightarrow \\ref{pr10}$,\n$\\{\\ref{pr03}, \\ref{pr04}\\} \\Rightarrow \\{\\ref{pr08}, \\ref{pr12}\\}$,\n$\\{\\ref{pr04}\\} \\Rightarrow \\ref{pr14}$,\n$\\{\\ref{pr14}\\} \\Rightarrow \\ref{pr04}$,\n$\\{\\ref{pr06}\\} \\Rightarrow \\{\\ref{pr11}, \\ref{pr16}\\}$,\n$\\{\\ref{pr03}, \\ref{pr17}\\} \\Rightarrow \\{\\ref{pr04}, \\ref{pr06}\\}$,\n% Shown directly: if $x\\in \\lambda C$, then $x = \\lambda z$ for some $z\\in C$.\n% if $\\lambda > 0$, then $h = z\\in C$, if $\\lambda = 0$, then $\\{\\ref{pr17}\\}$\n% implies $0\\in C$. Otherwise, $x = -\\lvert \\lambda\\rvert z$ and $h = -z\\in C$\n% since $z\\in C$. Therefore $x = (1 - \\lvert \\lambda\\rvert) 0 + \\lvert \\lambda\\rvert h$\n% for $0, h\\in C$, whence $x\\in C$.\n% for any $h\\in C$ we have $0 = \\tfrac12 (-h) + \\tfrac12 h$.\n$\\{\\ref{pr05}\\} \\Rightarrow \\{\\ref{pr04}, \\ref{pr07}, \\ref{pr09}\\}$,\n$\\{\\ref{pr07}\\} \\Rightarrow \\{\\ref{pr04}, \\ref{pr09}\\}$,\n$\\emptyset \\Rightarrow \\ref{pr15}$,\n$\\{\\ref{pr11}\\} \\Rightarrow \\{\\ref{pr10}, \\ref{pr12}\\}$,\n$\\{\\ref{pr03}\\} \\Rightarrow \\ref{pr13}$,\n$\\{\\ref{pr05}\\} \\Rightarrow \\ref{pr19}$,\n$\\{\\ref{pr03}, \\ref{pr19}\\} \\Rightarrow \\ref{pr18}$.\n\n% paragraph summary (end)\n\n\n\\end{document}\n", "meta": {"hexsha": "bb78a81d9392e647eb4ae5f4f442ddb0177b3941", "size": 23122, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scribbles/minkowski-functional.tex", "max_stars_repo_name": "ivannz/general-scribbles", "max_stars_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-07T20:41:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-28T12:47:40.000Z", "max_issues_repo_path": "scribbles/minkowski-functional.tex", "max_issues_repo_name": "ivannz/general-scribbles", "max_issues_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scribbles/minkowski-functional.tex", "max_forks_repo_name": "ivannz/general-scribbles", "max_forks_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8605042017, "max_line_length": 90, "alphanum_fraction": 0.6178963757, "num_tokens": 8959, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.817574471748733, "lm_q1q2_score": 0.5926354331316258}}
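\\paragraph{Worked example (added)} % not part of the original note
The identities above can be checked on $\\Hcal = \\real$ with $C = [-1, 2]$, a convex set with $0 \\in \\interior{C}$:
\\begin{equation*}
  p(x)
    = \\inf\\bigl\\{\\lambda > 0 \\,:\\, -\\lambda \\leq x \\leq 2\\lambda\\bigr\\}
    = \\begin{cases} \\tfrac{x}{2}\\,, & x \\geq 0\\,,\\\\ -x\\,, & x < 0\\,,\\end{cases}
\\end{equation*}
which is finite, positively homogeneous, subadditive (it is the maximum of two linear maps) and continuous,
but not homogeneous, since $C$ is not balanced: $p(-1) = 1 \\neq \\tfrac12 = p(1)$.
The preimages are as derived above: $\\{p \\leq 1\\} = [-1, 2] = [C]$, $\\{p < 1\\} = (-1, 2) = \\interior{C}$,
and $\\{p = 1\\} = \\{-1, 2\\} = \\partial C$.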
{"text": "\\subsection{Unitary operators}\nThis section decribes all the functions in the file ``ptwXY\\_unitaryOperators.c''.\n\n\\subsubsection{ptwXY\\_abs}\nThis function applies the math absolute operation to every y-value in \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_abs(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n\n\\subsubsection{ptwXY\\_neg}\nThis function applies the math negate operation to every y-value in \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_neg(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n", "meta": {"hexsha": "e50d0de4dddaa073066958997e2cd1714b42afa6", "size": 882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "numericalFunctions/Doc/ptwXY_unitaryOperators.tex", "max_stars_repo_name": "Mathnerd314/gidiplus", "max_stars_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_stars_repo_licenses": ["MIT-0", "MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-08-29T23:46:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T10:16:25.000Z", "max_issues_repo_path": "numericalFunctions/Doc/ptwXY_unitaryOperators.tex", "max_issues_repo_name": "Mathnerd314/gidiplus", "max_issues_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_issues_repo_licenses": ["MIT-0", "MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-04T16:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-01T01:54:34.000Z", "max_forks_repo_path": "numericalFunctions/Doc/ptwXY_unitaryOperators.tex", "max_forks_repo_name": "Mathnerd314/gidiplus", "max_forks_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_forks_repo_licenses": ["MIT-0", "MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-03T22:41:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T22:54:43.000Z", "avg_line_length": 51.8823529412, "max_line_length": 88, "alphanum_fraction": 0.7777777778, "num_tokens": 252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8175744673038222, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.5926354250503251}}
{"text": "\\input{/Users/alessandromanzotti/Work/preamble.tex}\n\n\n%%% BEGIN DOCUMENT\n\\begin{document}\n\\title{CMB DC Mode}\n\\author{}\n\\affiliation{Department of Astronomy \\& Astrophysics, University of Chicago, Chicago IL 60637}\n\\date{\\today}\n\n\\section{Fisher\\_estimate}\n\nIn this code I load the CAMB $C_{\\ell}$s and I compute the Fisher estimates of the effect we are looking for, given a precise detector precision. The idea is expressed in Wayne's notes.\n\\begin{equation}\nC_l = C_l^{\\rm fid} \\left[ 1+ \\frac{\\partial \\ln (l^2 C_l^{\\rm fid} )}{\\partial \\ln l} s \\right],\n\\end{equation}\ndo a Fisher estimate of $\\sigma_s^2$ (no other parameters marignalized) and compare it to $\\sigma_\\kappa^2$.  \nIt means it the code to compute\n\\begin{equation}\nC_l =  C^{\\rm fid}\\frac{\\partial \\ln (l^2 C_l^{\\rm fid}) }{\\partial \\ln l},\n\\end{equation}\nusing  a spline function.\nafter that in the fisher formalism\n\\ben\nF_{ss}=\\sum_{\\ell}\\frac{1}{(\\delta C_{\\ell})^{2}}\\left( C^{\\rm fid}\\frac{\\partial \\ln (l^2 C_l^{\\rm fid} )}{\\partial \\ln l}\\right)^{2}\n\\een\n\\ben\nF_{ss}=\\sum_{\\ell}\\frac{(2\\ell+1)f_{\\rm sky}}{2[C_{\\ell}+N_{\\ell}]^{2}}\\left( C^{\\rm fid}\\frac{\\partial \\ln (l^2 C_l^{\\rm fid} )}{\\partial \\ln l}\\right)^{2}\n\\een\n\nassuming SPT-pol 4$\\mu K$-arcminute noise and a beam of 1-arcmin\n\\be\nN_{\\ell}=\\exp\\{-\\ell(\\ell+1)\\theta_{FMWH}/(8\\log 2)\\}\n\\ee\n\n\n\\end{document}", "meta": {"hexsha": "f2661fc733cbfa12ba7785c199a291036978bb08", "size": 1342, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/code_notes.tex", "max_stars_repo_name": "amanzotti/super_sample_cov", "max_stars_repo_head_hexsha": "aedf4a08cdcab187c401a731fd8f8bbed593bcd7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/code_notes.tex", "max_issues_repo_name": "amanzotti/super_sample_cov", "max_issues_repo_head_hexsha": "aedf4a08cdcab187c401a731fd8f8bbed593bcd7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/code_notes.tex", "max_forks_repo_name": "amanzotti/super_sample_cov", "max_forks_repo_head_hexsha": "aedf4a08cdcab187c401a731fd8f8bbed593bcd7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.2702702703, "max_line_length": 185, "alphanum_fraction": 0.6795827124, "num_tokens": 487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637648915617, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5925710726603982}}
{"text": "\\chapter{Chapter 4. Syntax of Propositional Logic}\n\n\\section*{4.7 Self-Study Questions}\n\n\t\\begin{enumerate}\n\t\n\t\t\\item[4.7.1]  \n\t\t\n\t\t\\begin{enumerate}\n\t\t\n\t\t\t\\item Well, the formula could be sentence letter, which is a formula but contains no parentheses.\n\t\t\t\n\t\t\t\\item This is not true: also a formula that is formed from a single sentence letter by means of some negations, such as $\\neg p$ or $\\neg\\neg p$ etc., would not contain any parentheses.\n\t\t\t\n\t\t\t\\item This is the only option which is guaranteed to hold: if a formula contains $\\land,\\lor,\\to,\\leftrightarrow$, then it needs to contain parentheses.\n\t\t\t\n\t\t\t\\item This is simply not true: $\\neg p$ contains an odd number of negations, but no parentheses.\n\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item[4.7.2] Strategy (a) is the hardest to apply, which we've seen by means of our example. Strategy (c) will always give the right result but may take a long time. Strategy (d) is actually included in strategy (c)---just look at the second step of the algorithm. So, before you employ (c) fully, you just do (d). Having tried strategy (b) is a prerequisite for applying the algorithm (as we say before we describe the algorithm). But note that (b) can fail to tell you that something's not a formula even if it isn't: $(p\\land\\neg())$ is not a formula, but you actually need to apply the algorithm to see this, not even parentheses checking will help immediately.\n\t\t\n\t\t\\item[4.7.3] The only correct answer is (d): conventional notation is just that, conventional, and so we need to make clear that we're using it.\n\t\t\n\t\t\\item[4.7.4] To see why (b) is not correct, consider the formula $\\phi=(\\neg\\neg\\neg p\\lor \\neg p)$. It's easily checked that $c(\\phi)=4$, but there are 5 connectives in $\\phi$. To see that (f) is correct, we can use the Proposition 4.4.5: the complexity of a formula corresponds to the longest path from the root in its parsing tree. Since each step in path goes from one node to another, we have the starting node, the root, plus at least four other nodes, meaning five nodes. 
Note that there can be more than five nodes in the tree, as you can check by doing the parsing tree for $\\phi$, which we used to explain why (b) is incorrect.\n\t\n\t\\end{enumerate}\n\n\\section*{4.8 Exercises}\n\n\\begin{enumerate}\n\t\n\t\t\\item[4.8.1] Translation key:\n\t\t\n\t\t\t\t\\begin{longtable}{c c c}\n\t\t\t\t$p$ & : & Alan Turing built the first computer\\\\\n\t\t\t\t$q$ & : & Ada Lovelace invented the first computer algorithm\\\\\n\t\t\t\t$r$ & : & Today is Monday\\\\\n\t\t\t\t$s$ & : & Alan Turing is your favorite computer scientist\\\\\n\t\t\t\t$t$ & : & Ada Lovelace is your favorite computer scientist\\\\\n\t\t\t\t$u$ & : & Yesterday was Tuesday\\\\\n\t\t\t\t$v$ & : & Tomorrow is Saturday\\\\\t\t\t\t\n\t\t\t\\end{longtable}\n\t\t\n\t\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $(p\\land q)$\n\t\t\t\n\t\t\t\\item $(r\\to p)$\n\t\t\t\n\t\t\t\\item Inclusive reading: $(s\\lor t)$; Exclusive reading: $((s\\lor t)\\land\\neg(s\\land t))$\n\t\t\t\n\t\t\t\\item $(r\\leftrightarrow (u\\land v))$.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\n\t\t\\item (For now, English only)\n\t\t\n\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\n\t\t\t\t\\item It's not the case that I'm not both happy and clapping my hands.\n\t\t\t\t\n\t\t\t\t\\item If I'm not happy, then I don't clap my hands.\n\t\t\t\t\n\t\t\t\t\\item I'm happy if and only if you're not happy and I clap my hands.\n\t\t\t\t\n\t\t\t\t\\item If I clap my hands and you clap your hands, then we both clap our hands.\n\t\t\t\t\n\t\t\t\t\\item If I clap my hands and you clap your hands, then either I'm happy or you're happy.\n\t\t\t\t\n\t\t\t\t\\item \\emph{Either} I'm happy and clap my hands \\emph{or} you're happy and clap your hands.\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\n\t\t\\item[4.8.3] The set of all formulas of $\\mathcal{L}$ that only contain (symbols from) $p,q,\\neg,\\land,(,$ and $)$ is the smallest set $X$ such that:\n\t\t\n\t\t\\begin{itemize}\n\t\t\n\t\t\t\\item $p,q\\in X$,\n\t\t\t\n\t\t\t\\item if $\\phi\\in X,$ then $\\neg \\phi\\in X$,\n\t\t\t\n\t\t\t\\item if $\\phi,\\psi\\in X$, then $(\\phi\\land\\psi)\\in X$.\n\t\t\n\t\t\\end{itemize}\n\t\t\n\t\\item[4.8.3] \n\t\n\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $(q\\leftrightarrow (p\\land (q\\lor (r\\land \\neg s)))$ \n\t\t\t\n\t\t\t\\begin{center}\n\n\\Tree [.{$(q\\leftrightarrow (p\\land (q\\lor (r\\land \\neg s)))$} [.{$q$\\checkmark} ]  [.$(p\\land (q\\lor (r\\land \\neg s))$ [.{$p$\\checkmark} ] [.{$(q\\lor (r\\land \\neg s)$} [.{$q$\\checkmark} ] [.{$(r\\land\\neg s$\\frownie} ]  ] ] ]\\\\[2ex]\n\n\\emph{Answer}: Not a formula!\n\n\\end{center}\n\n\t\t\t\n\t\t\t\\item $((p\\land q)\\lor (p\\land (q\\to\\neg q)))$\n\t\t\t\n\t\t\t\\begin{center}\n\n\\Tree[.{$((p\\land q)\\lor (p\\land (q\\to\\neg q)))$} [.{$(p\\land q)$} [.{$p$\\checkmark} ] [.{$q$\\checkmark} ] ] [.{$(p\\land (q\\to\\neg q))$}  [.{$p$\\checkmark} ] [.$(q\\to\\neg q)$ [.{$q$\\checkmark} ] [.{$\\neg q$} [.{$q$\\checkmark} ] ] ] ] ]\\\\[2ex]\n\n\\emph{Answer}: Formula!\n\n\n\\end{center}\n\t\t\t\n\t\t\t\\item $(p\\to (p\\to ((p\\land p)\\leftrightarrow p\\lor p)))$\n\t\t\t\n\t\t\t\\begin{center}\n\n\n\t\t\\Tree[.$(p\\to (p\\to ((p\\land p)\\leftrightarrow p\\lor p)))$ \n\t\t[.{$p$\\checkmark} ] \n\t\t[.$(p\\to ((p\\land p)\\leftrightarrow p\\lor p))$ \n\t\t\t[.{$p$\\checkmark} ]\n\t\t\t[.$((p\\land p)\\leftrightarrow p\\lor p)$\n\t\t\t\t[.$(p\\land p)$ \n\t\t\t\t\t[.{$p$\\checkmark} ] \n\t\t\t\t\t[.{$p$\\checkmark} ] \n\t\t\t\t\t]\n\t\t\t\t[.{$p\\lor p$\\frownie} ]]]] \\\\[2ex]\n\t\t\t\t\\emph{Answer}: Not a 
formula!\n\n\t\n\\end{center}\n\t\t\t\n\t\t\t\\item $\\neg\\neg (\\neg\\neg p\\land (q\\lor q) )$\n\t\t\t\n\t\t\t\\begin{center}\n\t\t\t\\Tree[.$\\neg\\neg (\\neg\\neg p\\land (q\\lor q))$\n\t\t[.$\\neg (\\neg\\neg p\\land (q\\lor q))$ \n\t\t\t[.$(\\neg\\neg p\\land (q\\lor q))$ \n\t\t\t\t[.$\\neg\\neg p$ \n\t\t\t\t\t[.$\\neg p$ \n\t\t\t\t\t\t[.{$p$\\checkmark} ]\n\t\t\t\t\t\t]\n\t\t\t\t\t]\n\t\t\t\t[.$(q\\lor q)$ \n\t\t\t\t\t[.{$q$\\checkmark} ]\n\t\t\t\t\t[.{$q$\\checkmark} ]\n\t\t\t\t\t]\n\t\t\t\t]\n\t\t\t] \n\t] \\\\[2ex]\n\t\\emph{Answer}: Formula!\n\n\t\t\t\\end{center}\n\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\\item[4.8.5] \\\n\t\n\t\\begin{enumerate}[(a)]\n\t\n\t\t\\item $\\#_{conn}:\\mathcal{L}\\to\\mathbb{N}$ can be defined by:\n\t\t\n\t\t\t\\begin{enumerate}[(i)]\n\t\n\t\t\t\t\\item $\\#_{conn}(p)=0$ for $p\\in\\mathcal{P}$\n\t\t\n\t\t\t\t\\item \\begin{enumerate}[(a)]\n\t\t\n\t\t\t\t\t\\item $\\#_{conn}(\\neg\\phi)=\\#_{conn}(\\phi)+1$\n\t\t\t\n\t\t\t\t\t\\item $\\#_{conn}((\\phi\\circ\\psi))=\\#_{conn}(\\phi)+\\#_{conn}(\\psi)+1$, for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$.\n\t\t\n\t\t\t\t\t\\end{enumerate}\n\t\t\n\t\t\t\\end{enumerate}\n\t\t\n\t\t\\item $\\#_{(}:\\mathcal{L}\\to\\mathbb{N}$ can be defined by:\n\t\n\t\t\t\\begin{enumerate}[(i)]\n\t\t\n\t\t\t\t\\item $\\#_{(}(p)=0$ for $p\\in\\mathcal{P}$\n\t\t\n\t\t\t\t\\item \\begin{enumerate}[(a)]\n\t\t\n\t\t\t\t\t\t\\item $\\#_{(}(\\neg\\phi)=\\#_{(}(\\phi)$\n\t\t\t\n\t\t\t\t\t\t\\item $\\#_{(}((\\phi\\circ\\psi))=\\#_{(}(\\phi)+\\#_{(}(\\psi)+1$, for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$.\n\t\t\n\t\t\t\t\t\\end{enumerate}\n\t\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\n\t\\item  $\\#_{\\mathcal{P}}:\\mathcal{L}\\to\\mathbb{N}$ can be defined by:\n\t\t\n\t\t\\begin{enumerate}[(i)]\n\t\n\t\t\\item $\\#_{\\mathcal{P}}(p)=1$ for $p\\in\\mathcal{P}$\n\t\t\n\t\t\\item \\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $\\#_{\\mathcal{P}}(\\neg\\phi)=\\#_{\\mathcal{P}}(\\phi)$\n\t\t\t\n\t\t\t\\item $\\#_{\\mathcal{P}}((\\phi\\circ\\psi))=\\#_{\\mathcal{P}}(\\phi)+\\#_{\\mathcal{P}}(\\psi)$, for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item $\\mathbf{1}_p:\\mathcal{L}\\to\\{0,1\\}$ can be defined by \n\t\n\t\\begin{enumerate}[(i)]\n\t\n\t\t\\item $\\mathbf{1}_{p}(p)=1$ and $\\mathbf{1}_{p}(q)=0$ for all $q\\neq p\\in\\mathcal{P}$\n\t\t\n\t\t\\item \\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item $\\mathbf{1}_{p}(\\neg\\phi)=\\mathbf{1}_{p}(\\phi)$\n\t\t\t\n\t\t\t\\item $\\mathbf{1}_{p}((\\phi\\circ\\psi))=\\max(\\mathbf{1}_{p}(\\phi),\\mathbf{1}_{p}(\\psi))$, for $\\circ=\\land,\\lor,\\to,\\leftrightarrow$.\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\end{enumerate}\n\t\n\t\\end{enumerate}\n\t\n\t\\item[4.8.6]\n\t\n\t\\begin{enumerate}[(a)]\n\t\n\t\t\\item This function counts the number of symbols in a formula.\n\t\t\n\t\t\\item This function counts the number of negations in a formula.\n\t\t\n\t\t\\item This function assigns one as a value iff the number of negations in the formula is even.\n\t\n\t\\end{enumerate}\n\t\n  \\item[4.8.7] We want to show that for all formulas $\\phi\\in\\mathcal{L}$,\n\tthe number of subformulas is at most twice the number of connectives plus 1,\n\ti.e. 
$|sub(\\phi)| \\leq 2 \\cdot \\#_{conn}(\\phi) + 1$.\n\n\t  We prove this by induction on formulas.\n\n\t  \\begin{itemize}\n\t\t\\item[(i)] The base case $p$ has one subformula, $p$, and contains no connectives.\n\t\t  So the number of subformulas is no greater than $2 \\cdot 0 + 1 = 1$.\n\t\t\\item[(ii)]\n\t\t  \\begin{itemize}\n\t\t\t\\item [a.] Take a formula $\\phi\\in\\mathcal{L}$.\n\t\t\t  By the induction hypothesis we assume that the number of subformulas of $\\phi$ is no greater than $2\\cdot \\#_{conn}(\\phi) + 1$.\n\t\t\t  We consider the case $\\neg\\phi$.\n\t\t\t  By the definition of subformulas (see 4.4.2) we know that\n\t\t\t  $sub(\\neg\\phi) = sub(\\phi) \\cup \\{\\neg\\phi\\}$, so $|sub(\\neg\\phi)| = |sub(\\phi)| + 1$.\n\t\t\t  Since we add one negation,\n\t\t\t  $\\#_{conn}(\\neg\\phi) = \\#_{conn}(\\phi) + 1$.\n\t\t\t  Now take the induction hypothesis:\n\t\t\t  \\begin{align*}\n\t\t\t\t|sub(\\phi)| &\\leq 2\\cdot \\#_{conn}(\\phi) + 1\n\t\t\t  \\end{align*}\n\t\t\t  Substitution gives:\n\t\t\t  \\begin{align*}\n\t\t\t\t|sub(\\neg\\phi)| - 1 &\\leq 2\\cdot \\#_{conn}(\\neg\\phi) - 1 \\\\\n\t\t\t\t|sub(\\neg\\phi)| &\\leq 2\\cdot \\#_{conn}(\\neg\\phi) + 1\n\t\t\t  \\end{align*}\n\t\t\t  With this, the claim holds for $\\neg\\phi$.\n\n\t\t\t\\item [b.] Consider\n\t\t\t  $\\phi, \\psi \\in \\mathcal{L}$.\n\t\t\t  By the induction hypothesis we assume that\n\t\t\t  $|sub(\\phi)|\\leq 2\\cdot \\#_{conn}(\\phi)+1$\n\t\t\t  and\n\t\t\t  $|sub(\\psi)|\\leq 2\\cdot \\#_{conn}(\\psi)+1$.\n\t\t\t  We consider the case $(\\phi\\circ\\psi)$ with\n\t\t\t  $\\circ\\in\\{\\vee,\\wedge,\\to,\\leftrightarrow\\}$.\n\t\t\t  By the definition of subformulas (see 4.4.2) we know that\n\t\t\t  $sub((\\phi \\circ \\psi))=sub(\\phi) \\, \\cup \\, sub(\\psi) \\, \\cup \\, \\{ (\\phi\\circ\\psi)\\}$.\n\t\t\t  Taking cardinalities (the subformulas of $\\phi$ and $\\psi$ may overlap), it must hold that\n\t\t\t  $|sub((\\phi \\circ \\psi))|\\leq |sub(\\phi)| + |sub(\\psi)| + 1$.\n\t\t\t  Further,\n\t\t\t  $\\#_{conn}((\\phi \\circ \\psi)) = \\#_{conn}(\\phi) + \\#_{conn}(\\psi) + 1$.\n\t\t\t  Start as follows:\n\t\t\t  \\begin{align*}\n\t\t\t\t|sub((\\phi \\circ \\psi))| &\\leq |sub(\\phi)| + |sub(\\psi)| + 1\n\t\t\t  \\end{align*}\n\t\t\t  Substitution gives:\n\t\t\t  \\begin{align*}\n\t\t\t\t|sub((\\phi \\circ \\psi))| &\\leq 2\\cdot \\#_{conn}(\\phi) + 2\\cdot \\#_{conn}(\\psi) + 3 \\\\\n\t\t\t\t\t\t\t\t\t\t &= 2 \\cdot (\\#_{conn}(\\phi) + \\#_{conn}(\\psi) + 1) + 1\\\\\n\t\t\t\t\t\t\t\t\t\t &= 2 \\cdot \\#_{conn}((\\phi \\circ \\psi)) + 1\n\t\t\t  \\end{align*}\n\t\t\t  With this, the property also holds for $(\\phi \\circ \\psi)$.\n\t\t\t  We have now proven by induction that the property holds for all formulas.\n\t\t  \\end{itemize}\n\t  \\end{itemize}\n\n\t\\item[4.8.8] We give an informal outline of the argument. This can be made precise using the function $\\#_{(}$ from 4.8.5 and an analogously defined function $\\#_{)}$.\n\t\n\t\\emph{Claim}. The numbers of $($ and $)$ in a formula $\\phi$ are always the same. \n\t\n\t\\begin{proof}\n\tWe prove this by induction.\n\t\n\t\\begin{enumerate}[(i)]\n\t\n\t\t\\item For the base case, note that the numbers of $($'s and $)$'s in any sentence letter $p$ are both zero.\n\t\t\n\t\t\\item \\begin{enumerate}[(a)]\n\t\t\n\t\t\t\\item Assume the induction hypothesis, that the numbers of $($ and $)$ in $\\phi$ are the same. Consider $\\neg\\phi$. 
Note that the number of $($'s in $\\neg\\phi$ is the same as in $\\phi$ and the number of $)$'s in $\\neg\\phi$ is the same as in $\\phi$ (no new parentheses have been added). Hence, the numbers of $($ and $)$ in $\\neg\\phi$ are also the same.\n\t\t\t\n\t\t\t\\item Assume the induction hypotheses, that the numbers of $($ and $)$ in $\\phi$ are the same and the numbers of $($ and $)$ in $\\psi$ are the same. Denote the number of $($'s in $\\phi$ by $n$, the number of $)$'s in $\\phi$ by $m$, the number of $($'s in $\\psi$ by $k$, and the number of $)$'s in $\\psi$ by $l$. We have $n=m$ and $k=l$. Consider $(\\phi\\circ\\psi)$. The number of $($'s in $(\\phi\\circ\\psi)$ is $n+k+1$. The number of $)$'s in $(\\phi\\circ\\psi)$ is $m+l+1$. Since $n=m$ and $k=l$, $n+k+1=m+l+1$, as desired.\n\t\n\t\t\\end{enumerate}\n\t\tWe conclude our claim by induction on formulas.\n\t\n\t\\end{enumerate}\n\t\n\t\\end{proof}\n\t\t\n\t\\item[4.8.9]\n\t\n\t\t\\begin{enumerate}[(a)]\n\t\t\n\t\t\n\n\\item $(\\neg p\\land q)$\n\n\\item $\\neg((p\\land q)\\to (\\neg p\\lor\\neg q))$\n\n\\item $((p\\lor p)\\leftrightarrow \\neg p)$\n\n\\item $((p\\lor q)\\land r)$\n\n\\item $((p\\to p)\\leftrightarrow (p\\to p))$\n\n\\item $(\\neg p \\land (((q\\lor r)\\to p)\\leftrightarrow q))$\n\n\n\n\\item $(p\\land (p\\lor q))$\n\n\n\n\\item $((p\\to (q\\lor q))\\leftrightarrow r)$\n\n\n\n\\item $((p\\to q)\\leftrightarrow (\\neg q\\to \\neg p))$\n\n\n\n\\item $\\neg\\neg\\neg p$\n\n\n\n\\item $(({p} \\to {p}) \\leftrightarrow ({p}\\lor \\neg {p}))$\n\n\n\\item $((p\\lor q)\\to (\\neg r\\land (s\\leftrightarrow {p})))$\n\n\t\t\n\t\t\\end{enumerate}\n\t\t\n\t\t\\item[4.8.10] \\\n\t\t\n\t\t\\begin{enumerate}\n\t\t\n\t\t\t\\item $p\\land q$\n\t\t\n\t\t\t\\item $\\neg\\neg q$\n\t\t\t\n\t\t\t\\item $p\\land (r\\lor q)$\n\t\t\t\n\t\t\t\\item $p\\to (r\\lor (p\\land (q\\leftrightarrow r)))$\n\t\t\t\n\t\t\t\\item $p\\lor \\neg (p\\lor q)$\n\t\t\t\n\t\t\t\\item $p\\land q\\to r$\n\t\t\t\n\t\t\t\\item $p\\lor q\\to \\neg q\\leftrightarrow r$\n\t\t\t\n\t\t\t\\item $p\\land q\\land r$\n\t\t\t\n\t\t\t\\item $p\\land q\\land r$\n\t\t\t\n\t\t\t\\item $p\\lor q\\lor r$\n\t\t\t\n\t\t\t\\item $p\\land (q\\lor r)$\n\t\t\t\n\t\t\t\\item $p\\land (q\\to r)$\n\t\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\\end{enumerate}\n\t\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"../../logic.tex\"\n%%% End:\n", "meta": {"hexsha": "a60320f3c423dc6f822032674d57a846ef46c485", "size": 12858, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lib/notes/tex/appendix/ans-prop-syntax.tex", "max_stars_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_stars_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lib/notes/tex/appendix/ans-prop-syntax.tex", "max_issues_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_issues_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lib/notes/tex/appendix/ans-prop-syntax.tex", "max_forks_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_forks_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": 
null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.9850746269, "max_line_length": 667, "alphanum_fraction": 0.5892829367, "num_tokens": 4761, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8596637577007394, "lm_q1q2_score": 0.5925710567300024}}
{"text": "\\section{Scene Surveying With Heterogeneous Battery Constraints and Sampling Times}\\label{sec:SceneSurveyingBatteryConstraints}\n%\\note{Keep this very brief - there is not a great research output here. Simply mention that we began looking into using solvers to calculate solutions as is line with a lot of the literature.}\n\nOnce we had prototyped the NN algorithm in order to find a suitable solution to the simplified problem, we then considered the more general problem, where RAVs have heterogeneous battery constraints, sampling times and operational speeds, which is mentioned in Section \\ref{sec:SceneSurveying}. This can be described by the more general Vehicle Routing Problem (VRP). We decided to formulate the problem as a \\textit{linear program} and planned to find solutions using a linear program solver, following the approach outlined in \\cite{Toth2002TheProblem}. We subsequently use terminology commonly used in convex optimisation and linear programming. We refer the reader to the text \"Convex Optimization\" \\cite{Boyd2004ConvexOptimization} for information on linear programming and convex optimisation in general. This means that the problem's objective function and constraints must be written as linear expressions.\n%A number of code repositories with permissive licences that provide solvers for linear programs exist .S\nSpecifying the VRP explicitly as a linear program and then passing it to a solver is a time-consuming process, which has led to the development of a number of tools which act as wrappers for software developers to solve well-known problems without having to write excessive amounts of boiler-plate code. We chose to use Google's Apache-licensed \\href{https://developers.google.com/optimization/}{\\textit{Operations Research}}\\footnote{\\href {https://developers.google.com/optimization/}{https://developers.google.com/optimization/}} (OR) repository, which contains a routing library with high-level interfaces specifically designed to allow the user to define and solve VRPs. This meant we could focus on defining the salient aspects of the problem rather than the details of how to convert the VRP into a linear program.\n\nWe began by following the example outlined in the \\href{https://developers.google.com/optimization/routing/vrp}{OR tools documentation}\\footnote{\\href {https://developers.google.com/optimization/routing/vrp}{https://developers.google.com/optimization/routing/vrp}}. The documentation outlines the steps to solve a linear program using the repo follow the same pattern: \n\\begin{itemize}\n\\item Create the variables.\n\\item Define the constraints.\n\\item Define the objective function.\n\\item Declare the solver \u2014 the method that implements an algorithm for finding the optimal solution.\n\\item Invoke the solver and display the results.\n\\end{itemize}\n\n\n\\subsection{Specifying Variables and Constraints}\nWe first implemented a vehicle class which records the variables related to routing for each vehicle. 
\n\n\\subsection{Specifying Variables and Constraints}\nWe first implemented a vehicle class which records the variables related to routing for each vehicle. Member variables included:\n\\begin{itemize}\n    \\item The time taken to record data at each node\n    \\item The location of the depot, where the charging point is located\n    \\item The operational speed of the vehicle\n    \\item The estimated remaining battery life in seconds\n\\end{itemize}\nIn order to facilitate recharging, we followed the approach outlined in the documented examples and created \"virtual\" depot nodes, which are duplicate nodes of the recharge location and have an associated recharge time for each RAV. We force each RAV to visit the depot to recharge once its battery level drops below 5\\% by creating a cumulative variable with a range equal to the predicted time taken for the RAV battery to degrade to this level. In effect, this indicates to the solver that it must include a virtual depot node in the solution at this time.\n\nFor each RAV, we then created a lookup table of the times taken to travel from one node to the next and service the second node, based on the distances between the nodes, the operational speed of the RAV and the service time. In order to communicate to the solver that it should refer to each of these lookup tables to determine the cost of adding a node to a vehicle's route, we created a \\textit{dimension} for each vehicle which refers to the lookup table. \\textit{Dimensions} are used in the OR Tools interface to represent quantities accumulated at nodes along the routes. We then specified each time dimension to contribute to the objective function by using the \\texttt{SetGlobalSpanCostCoefficient} method, which notifies the solver to minimise the largest cost among all the time dimensions; this equates to minimising the longest time taken for any vehicle to complete its route. Since the solver is designed to find solutions to mixed-integer linear programs, we took the recommended approach of scaling up the distances between nodes and then rounding, in order to minimise rounding error. Once the solution had been computed, we normalised the results to their original scale. We used the default \\href{http://google.github.io/or-tools/python/ortools/sat/python/cp\\_model.html}{CP-SAT}\\footnote{\\href {http://google.github.io/or-tools/python/ortools/sat/python/cp\\_model.html}{http://google.github.io/or-tools/python/ortools/sat/python/cp\\_model.html}} solver to compute a solution, with the cheapest arc heuristic. The cheapest arc heuristic builds a solution by beginning with the start node, connecting it to the node which produces the cheapest route segment, and then extending the route by iterating on the last node added to the route; a sketch of this construction is given below. %We did not get the chance to evaluate other heuristics due to time constraints.\n
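\nThe following is a minimal sketch of this greedy construction over a generic cost matrix (our own illustration; the OR-Tools internals differ in detail):\n\\begin{verbatim}\ndef cheapest_arc_route(cost, start=0):\n    # Repeatedly append the cheapest arc out of the last node added.\n    route = [start]\n    todo = set(range(len(cost))) - {start}\n    while todo:\n        nxt = min(todo, key=lambda j: cost[route[-1]][j])\n        route.append(nxt)\n        todo.remove(nxt)\n    return route\n\\end{verbatim}\n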
\n\n\\subsection{Results of the Solver-Based Method}\nWe prototyped the OR-Tools solver-based method, which means that the results we generated apply to carefully chosen regions and offer a proof of concept that the OR-Tools based method can provide viable solutions. We outline in Section \\ref{sec:SurveyingConclusionFutureWork} how we envision the prototype solution could be extended and sufficiently validated. Sample solutions computed for various configurations of the vehicles are shown in Table \\ref{table:ORToolsResults}. For each of the solutions shown, we enforced that RAV 1 (red) and RAV 2 (green) have 1200 seconds of flying time and that RAV 3 (yellow) has 2400 seconds of flying time. We assumed arbitrarily that the RAVs take 600 seconds to recharge, and that they always recharge to full capacity if they return to the depot node. RAV 1, RAV 2 and RAV 3 begin with 17.5\\%, 33.3\\% and 100\\% of their full battery capacities respectively. RAVs start and finish at the same location.\n\nThe configurations and solutions for the individual vehicles used are given in Table \\ref{table:ORToolsResults}.\n\\note{Ran with 3 RAVs also, might be worth showing}\n\\begin{table}[H]\n  \\centering\n  \\begin{tabular}{ | c | m{5.2cm} | }\n    \\hline\n    Planned RAV Routes & Solution details \\\\\n    \\hline\n    \n    %single RAV\n    \\begin{minipage}[c][53mm][c]{.6\\textwidth}\n      \\includegraphics[width=\\linewidth, height=51mm]{Chapters/MultiAgentCoverage/MultipleTravellingSalesman/Figs/ORToolsSolns/Example1.PNG}\n\n    \\end{minipage}\n    &\n    \\small\n    \\begin{tabular}{m{10mm}|m{11mm} m{11mm}}\n        & RAV 1 & RAV 2\\\\\n        \\hline\n        Speed& 4 & 2 \\\\\n        T.T.S & 0.5 & 0.5 \\\\\n        Color & Red & Green \\\\\n        \\hline\n        Dist.& 1813 & 1813 \\\\\n        Time& 1410 & 1600 \\\\\n    \\end{tabular}\n    \\normalsize\n    \\\\\n    \\hline\n    %single RAV\n    \\begin{minipage}[c][53mm][c]{.6\\textwidth}\n      \\includegraphics[width=\\linewidth, height=51mm]{Chapters/MultiAgentCoverage/MultipleTravellingSalesman/Figs/ORToolsSolns/Example2.PNG}\n\n    \\end{minipage}\n    &\n    \\small\n    \\begin{tabular}{m{10mm}|m{11mm} m{11mm} }\n        & RAV 1 & RAV 2\\\\\n        \\hline\n        Speed& 1 & 1 \\\\\n        T.T.S & 0.5 & 0.5 \\\\\n        Color & Red & Green \\\\\n        \\hline\n        Dist.& 1961 & 2141 \\\\\n        Time& 3423 & 3423 \\\\\n    \\end{tabular}\n    \\normalsize\n    \\\\\n    \\hline\n    \n    %single RAV\n    \\begin{minipage}[c][53mm][c]{.6\\textwidth}\n      \\includegraphics[width=\\linewidth, height=51mm]{Chapters/MultiAgentCoverage/MultipleTravellingSalesman/Figs/ORToolsSolns/Example3.PNG}\n\n    \\end{minipage}\n    &\n    \\small\n    \\begin{tabular}{m{10mm}|m{11mm} m{11mm}}\n        & RAV 1 & RAV 2\\\\\n        \\hline\n        Speed& 1 & 1 \\\\\n        T.T.S & 2 & 2 \\\\\n        Color & Red & Green \\\\\n        \\hline\n        Dist.& 2142 & 2150 \\\\\n        Time& 3462 & 3462 \\\\\n    \\end{tabular}\n    \\normalsize\n    \\\\\n    \\hline\n    \n    %single RAV\n    \\begin{minipage}[c][53mm][c]{.6\\textwidth}\n      \\includegraphics[width=\\linewidth, height=51mm]{Chapters/MultiAgentCoverage/MultipleTravellingSalesman/Figs/ORToolsSolns/Example4.PNG}\n\n    \\end{minipage}\n    &\n    \\small\n    \\begin{tabular}{m{10mm}|m{11mm} m{11mm}}\n        & RAV 1 & RAV 2\\\\\n        \\hline\n        Speed& 8 & 1 \\\\\n        T.T.S & 1 & 1 \\\\\n        Color & Red & Green \\\\\n        \\hline\n        Dist.& 2638 & 870 \\\\\n        Time& 1410 & 1600 \\\\\n    \\end{tabular}\n    \\normalsize\n    \\\\\n    \\hline\n\n  \\end{tabular}\n  \\caption{Results of the solutions found with the OR-Tools solver. T.T.S = Time to sample, Dist. 
= Distance}\\label{table:ORToolsResults}\n\\end{table}\n", "meta": {"hexsha": "77215ebe051e43910b0cde7a0c9b86519fa2031b", "size": 9263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/MultiAgentCoverage/MultipleTravellingSalesman/ORToolsSolution.tex", "max_stars_repo_name": "DavidLSmyth/ResearchMScThesis", "max_stars_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/MultiAgentCoverage/MultipleTravellingSalesman/ORToolsSolution.tex", "max_issues_repo_name": "DavidLSmyth/ResearchMScThesis", "max_issues_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-18T11:59:42.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-18T11:59:42.000Z", "max_forks_repo_path": "Chapters/MultiAgentCoverage/MultipleTravellingSalesman/ORToolsSolution.tex", "max_forks_repo_name": "DavidLSmyth/ResearchMScThesis", "max_forks_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.8062015504, "max_line_length": 1807, "alphanum_fraction": 0.7419842384, "num_tokens": 2320, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8596637433190939, "lm_q2_score": 0.6893056231680121, "lm_q1q2_score": 0.592571052303514}}
{"text": "\n\\documentclass[12pt]{article}\\usepackage[]{graphicx}\\usepackage[]{color}\n% maxwidth is the original width if it is less than linewidth\n% otherwise use linewidth (to make sure the graphics do not exceed the margin)\n\\makeatletter\n\\def\\maxwidth{ %\n  \\ifdim\\Gin@nat@width>\\linewidth\n    \\linewidth\n  \\else\n    \\Gin@nat@width\n  \\fi\n}\n\\makeatother\n\n\\usepackage{Sweavel}\n\n\n\n\\usepackage[utf8]{inputenc}\n%\\usepackage{beamerthemebars}\n\n%\\usepackage[spanish]{babel}\n\\usepackage{fontenc}\n\\usepackage{graphics}\n\\usepackage{hyperref}\n\\usepackage{multimedia}\n\\usepackage{hyperref}\n\\usepackage{color}\n\\usepackage{pgfmath}\n\\usepackage{tikz}\n\n\\usepackage{latexsym} % S\u00edmbolos                        \u0131\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\newtheorem{axiom}{Axiom}\n\\newtheorem{case}{Case}\n\\newtheorem{claim}{Claim}\n\\newtheorem{conclusion}{Conclusion}\n\\newtheorem{condition}{Condition}\n\\newtheorem{conjecture}{Conjecture}\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{propiete}{Propi\\'et\\'e}\n\\newtheorem{proposition}{Proposition}\n\\newtheorem{question}{Question}\n\\newtheorem{remark}{Remark}\n\n\n\\title{SEIR Epidemiological Model: A Review} \n\\author{}\n\\date{\n}\n\n\\begin{document}\n\\maketitle\n\n\\section{Basic SIR Model with demography}\n\nModel Assumptions:\n\n\\begin{enumerate}\n\\item The infection circulates in a population of size $N,$ with a per capita background death rate, $\\mu$ which is balanced by a birth rate $\\mu N$ From the sum of Eqs. (2.1)\u2013 (2.3), $\\frac{dN}{dt} = 0$ and $N = S + I + R$ is thus constant.\n\\item The infection causes acute morbidity (not mortality); That is, in this version of the SIR model we assume we can ignore disease-induced mortality. 
This is reasonable for certain infections like chickenpox, but certainly not for others like rabies, SARS, or Ebola.\n\\item Individuals are recruited directly into the susceptible class at birth (so we ignore perinatal maternal immunity).\n\\item Transmission of infection from infectious to susceptible individuals is controlled by a bilinear contact term $\\beta I S$. This stems from the assumption that the $I$ infectious individuals are independently and randomly mixing with all other individuals, so the fraction $S/N$ of the encounters is with susceptible individuals; $\\beta$ is the contact rate times the probability of transmission given a contact between a susceptible and an infectious individual.\n\\item The chances of recovery or death are assumed not to change during the course of infection.\n\\item Infectiousness is assumed not to change during the course of infection.\n\\item Infected individuals move directly into the infectious class (as opposed to the SEIR model) and remain there for an average infectious period of $1/\\gamma$ (assuming $\\mu \\ll \\gamma$).\n\\item The model assumes that recovered individuals are immune from reinfection for\nlife.\n\\end{enumerate}\n\nThe model is characterised by the manifold of Susceptible, Infectious and Recovered individuals, namely\n$$\n\\mathcal{M}_{SIR}(N) = \\{(S,I,R) \\in \\mathbb{R}_+^3 : S+I+R = N\\}\n$$\ntogether with the parameter manifold\n$$\n\\mathcal{P}_{SIR} = \\{(\\mu,\\beta,\\gamma) \\in \\mathbb{R}_+^3: \\mu \\ll \\gamma\\}.\n$$\nThen, given an observation $(S_0,I_0,R_0) \\in \\mathcal{M}_{SIR}(N)$ and $(\\mu,\\beta,\\gamma) \\in \\mathcal{P}_{SIR}$ at time $t=0$, the\nevolution of the system is characterised by the ODE:\n\\begin{align}\n\\dot{S}(t) & = \\mu(N-S)-\\beta S \\frac{I}{N}, \\label{SIR1}\\\\\n\\dot{I}(t) & = \\beta S \\frac{I}{N} - (\\mu + \\gamma)I, \\label{SIR2}\\\\\n\\dot{R}(t) & = \\gamma I - \\mu R \\label{SIR3}\n\\end{align}\n\nFor directly transmitted pathogens, $R_0$ is, by definition, the expected number of secondary cases that arise from a typical infectious index-case in a completely susceptible host population. $R_0$ plays a critical role in a number of aspects of disease dynamics and is therefore the focus of much study in historical and contemporary infectious disease dynamics. 
For this simple SIR model $$R_0 = \\frac{\\beta}{\\gamma + \\mu}$$\n\nWe load the library needed to solve an ODE numerically:\n\n\\begin{Schunk}\n\\begin{Sinput}\nlibrary(deSolve)\n\\end{Sinput}\n\\end{Schunk}\n\nWe construct the SIR model function:\n\n\\begin{Schunk}\n\\begin{Sinput}\nsirmod = function(t, y, parms) {\n  # Pull state variables from y vector\n  S = y[1]\n  I = y[2]\n  R = y[3]\n  # Pull parameter values from parms vector\n  mu = parms[\"mu\"]\n  gamma = parms[\"gamma\"]\n  N = parms[\"N\"]\n  beta = parms[\"beta\"]\n  # Define equations\n  dS = mu * (N - S) - beta * S * I/N\n  dI = beta * S * I/N - (mu + gamma) * I \n  dR = gamma * I - mu * R\n  res = c(dS, dI, dR)\n  # Return list of gradients\n  list(res)\n}\n\\end{Sinput}\n\\end{Schunk}\n\nWe introduce the parameter values:\n\n\\begin{Schunk}\n\\begin{Sinput}\ntimes = seq(0, 26, by = 1/10)\nparms = c(mu = 0.2, N = 1, beta = 2, gamma = 1/2)\nstart = c(S = 0.999, I = 0.001, R = 0.1)\n\\end{Sinput}\n\\end{Schunk}\n\n\nWe integrate \\eqref{SIR1}--\\eqref{SIR3} numerically:\n\n\\begin{Schunk}\n\\begin{Sinput}\nout = ode(y=start, times=times, func=sirmod, parms= parms)\nout = as.data.frame(out) \nhead(round(out, 3))\n\\end{Sinput}\n\\begin{Soutput}\n  time     S     I     R\n1  0.0 0.999 0.001 0.100\n2  0.1 0.999 0.001 0.098\n3  0.2 0.999 0.001 0.096\n4  0.3 0.998 0.001 0.094\n5  0.4 0.998 0.002 0.093\n6  0.5 0.998 0.002 0.091\n\\end{Soutput}\n\\end{Schunk}\n\nFinally, we plot the numerical solution:\n\n\\begin{Schunk}\n\\begin{Sinput}\n#Calculate R0\nR0=parms[\"beta\"]/(parms[\"gamma\"]+parms[\"mu\"])\n#Adjust margins to accommodate a second right axis\npar(mar = c(5,5,2,5))\n#Plot state variables\nplot(x=out$time, y=out$S, ylab=\"Fraction\", xlab=\"Time\",type=\"l\")\nlines(x=out$time, y=out$I, col=\"red\") \nlines(x=out$time, y=out$R, col=\"green\") \n#Add vertical line at turnover point \nxx = out$time[which.max(out$I)] \nlines(c(xx,xx), c(1/R0,max(out$I)), lty=3)\n#prepare to superimpose 2nd plot\npar(new=TRUE)\n#plot effective reproductive ratio (w/o axes) \nplot(x=out$time, y=R0*out$S, type=\"l\", lty=2, lwd=2,\n     col=\"black\", axes=FALSE, xlab=NA, ylab=NA,\nylim=c(-.5, 4.5))\nlines(c(xx, 26), c(1,1), lty=3)\n#Add right-hand axis for RE\naxis(side = 4)\nmtext(side = 4, line = 4, expression(R[E])) #Add legend\nlegend(\"right\", legend=c(\"S\", \"I\", \"R\",expression(R[E])), \n\tlty=c(1,1,1, 2), col=c(\"black\", \"red\", \"green\", \"black\"))\n\\end{Sinput}\n\n\\includegraphics[width=\\maxwidth]{figure/unnamed-chunk-5-1} \\end{Schunk}\n\n\n\\section{The SEIR Model}\n\nWe briefly introduce a refinement of the SIR model to take into account the latent period. The process of transmission often occurs due to an initial inoculation with a very small number of pathogen units (e.g., a few bacterial cells or virions). A period of time then ensues during which the pathogen reproduces rapidly within the host, relatively unchallenged by the immune system. During this stage, pathogen abundance is too low for active transmission to other susceptible hosts, and yet the pathogen is present. Hence, the host cannot be categorized as susceptible, infectious, or recovered; we need to introduce a new category for these individuals who are infected but not yet infectious. 
These individuals are referred to as Exposed and are represented by the variable $E$ in SEIR models.\n\nWe assume that:\n\\begin{itemize}\n\\item An average infective individual produces $\\beta$ new infections per unit of time when all contacts are with susceptibles; otherwise, this rate is reduced by the ratio $S/N.$\n\\item Individuals in the exposed class $E$ progress to the infective class at the per capita rate $\\sigma.$\n\\item There is no disease-induced mortality; the mean latent period is $1/\\sigma$ and the mean infectious period is $1/\\gamma.$\n\\end{itemize}\nThen the SEIR equations are:\n\\begin{align}\n\\dot{S}(t) & = \\mu(N-S)-\\beta S \\frac{I}{N}, \\label{SEIR1}\\\\\n\\dot{E}(t) & = \\beta S \\frac{I}{N} - (\\mu+\\sigma)E, \\label{SEIR2} \\\\\n\\dot{I}(t) & = \\sigma E - (\\mu + \\gamma)I, \\label{SEIR3}\\\\\n\\dot{R}(t) & = \\gamma I - \\mu R \\label{SEIR4}\n\\end{align}\nThe model is characterised by the manifold of Susceptible, Exposed, Infectious and Recovered individuals, namely\n$$\n\\mathcal{M}_{SEIR}(N) = \\{(S,E,I,R) \\in \\mathbb{R}_+^4 : S+E+I+R = N\\}\n$$\ntogether with the parameter manifold\n$$\n\\mathcal{P}_{SEIR} = \\{(\\mu,\\beta,\\gamma,\\sigma) \\in \\mathbb{R}_+^4: \\mu \\ll \\gamma\\}.\n$$\n\nWe construct the SEIR model function:\n\n\\begin{Schunk}\n\\begin{Sinput}\nseirmod = function(t, y, parms) {\n  # Pull state variables from y vector\n  S = y[1]\n  E = y[2]\n  I = y[3]\n  R = y[4]\n  # Pull parameter values from parms vector\n  mu = parms[\"mu\"]\n  gamma = parms[\"gamma\"]\n  N = parms[\"N\"]\n  beta = parms[\"beta\"]\n  sigma = parms[\"sigma\"]\n  # Define equations\n  dS = mu * (N - S) - beta * S * I/N\n  dE = beta * S * I/N - (mu + sigma) * E\n  dI = sigma * E  - (mu + gamma) * I \n  dR = gamma * I - mu * R\n  res = c(dS, dE, dI, dR)\n  # Return list of gradients\n  list(res)\n}\n\\end{Sinput}\n\\end{Schunk}\n\nWe introduce the parameter values:\n\n\\begin{Schunk}\n\\begin{Sinput}\ntimes = seq(0, 30, by = 1/10)\nparms = c(mu = 0.01, N = 1, beta = 2, gamma = 1/2, sigma = 0.1)\nstart = c(S = 0.999, E=0.05, I = 0.2, R = 0.1)\n\\end{Sinput}\n\\end{Schunk}\n\n\nWe integrate \\eqref{SEIR1}--\\eqref{SEIR4} numerically:\n\n\\begin{Schunk}\n\\begin{Sinput}\nout = ode(y=start, times=times, func=seirmod, parms= parms)\nout = as.data.frame(out) \nhead(round(out, 3))\n\\end{Sinput}\n\\begin{Soutput}\n  time     S     E     I     R\n1  0.0 0.999 0.050 0.200 0.100\n2  0.1 0.961 0.088 0.191 0.110\n3  0.2 0.926 0.122 0.182 0.119\n4  0.3 0.893 0.152 0.175 0.128\n5  0.4 0.863 0.181 0.167 0.136\n6  0.5 0.836 0.206 0.161 0.144\n\\end{Soutput}\n\\end{Schunk}\n\nFinally, we plot the numerical solution:\n\n\\begin{Schunk}\n\\begin{Sinput}\n#Calculate R0\nR0=(parms[\"beta\"]*parms[\"sigma\"])/\n((parms[\"gamma\"]+parms[\"mu\"])*(parms[\"sigma\"]+parms[\"mu\"]))\n#Adjust margins to accommodate a second right axis\npar(mar = c(5,5,2,5))\n#Plot state variables\nplot(x=out$time, y=out$S, ylab=\"Fraction\", xlab=\"Time\",type=\"l\")\nlines(x=out$time, y=out$E, col=\"purple\") \nlines(x=out$time, y=out$I, col=\"red\") \nlines(x=out$time, y=out$R, col=\"green\") \n#Add vertical line at turnover point \nxx = out$time[which.max(out$I)] \nlines(c(xx,xx), c(1/R0,max(out$I)), lty=3)\n#prepare to superimpose 2nd plot\npar(new=TRUE)\n#plot effective reproductive ratio (w/o axes) \nplot(x=out$time, y=R0*out$S, type=\"l\", lty=2, lwd=2,\n     col=\"black\", axes=FALSE, xlab=NA, ylab=NA,\nylim=c(-.5, 4.5))\nlines(c(xx, 26), c(1,1), lty=3)\n#Add right-hand axis 
for RE\naxis(side = 4)\nmtext(side = 4, line = 4, expression(R[E])) #Add legend\nlegend(\"topright\", legend=c(\"S\",\"E\" ,\"I\", \"R\",expression(R[E])), \n  lty=c(1,1,1,1,2), col=c(\"black\",\"purple\" , \"red\", \"green\", \"black\"))\n\\end{Sinput}\n\n\\includegraphics[width=\\maxwidth]{figure/unnamed-chunk-9-1} \\end{Schunk}\n\n\\end{document}\n", "meta": {"hexsha": "fb488ec81237916d5558d07881f24ed261d0efd9", "size": 10725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "R/covid19-falco/SEIR_Review.tex", "max_stars_repo_name": "giant-uji/COVID19", "max_stars_repo_head_hexsha": "17c0d1c6b7beb23cd46c2f59aaedc50ad8070343", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "R/covid19-falco/SEIR_Review.tex", "max_issues_repo_name": "giant-uji/COVID19", "max_issues_repo_head_hexsha": "17c0d1c6b7beb23cd46c2f59aaedc50ad8070343", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "R/covid19-falco/SEIR_Review.tex", "max_forks_repo_name": "giant-uji/COVID19", "max_forks_repo_head_hexsha": "17c0d1c6b7beb23cd46c2f59aaedc50ad8070343", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.396039604, "max_line_length": 797, "alphanum_fraction": 0.6947319347, "num_tokens": 3587, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.8104789155369048, "lm_q1q2_score": 0.5925075734999871}}
{"text": "\\section{Grammar Transformations}\n\n\\subsection{Expansion of a nonterminal}\nEliminate a nonterminal: $A \\rarr \\alpha B \\gamma$, $B \\rarr \\beta_1 | \\ldots | \\beta_n$ becomes $A \\rarr \\alpha \\beta_1 \\gamma | \\ldots | \\alpha \\beta_n \\gamma$.\n\nIt's always possible to remove the axiom from the right part by adding a new axiom $S_0 \\rarr S$.\n\n\\subsection{Elimination of empty rules}\nA nonterminal is \\textbf{nullable} iff it exists a derivation $A \\Rightarrow^+ \\epsilon$.\n\\begin{align*}\n    A \\in \\text{Null} &\\text{ if }A \\rarr \\epsilon \\in P \\\\\n    A \\in \\text{Null} &\\text{ if } A \\rarr A_1\\ldots A_n \\in P, A_i \\in V \\setminus \\{A\\} \\land \\forall A_i (A_i \\in \\text{Null})\n\\end{align*}\n\\textbf{Note}: Recursive rules cannot be used.\n\n\\paragraph{Construction of the non-nullable normal form}\n\\begin{itemize}\n    \\item Compute the Null set.\n    \\item For each rule add as alternative all the combinations of removing the nullable nonterminals.\n    \\item Remove all empty rules ($A \\rarr \\epsilon$) excpet for $A = S$.\n    \\item Clean the grammar and remove circularity.\n\\end{itemize}\n\n\\textbf{Example} $S \\rarr SAB | AC$, $A \\rarr aA | \\epsilon$, $B \\rarr bB | \\epsilon$, $C \\rarr cC | c$. Null is $\\{A, B\\}$, remove $A$ and $B$ from the rules in every combination: $S \\rarr SAB | SA | SB | S | AC | C$, $A \\rarr aA | a$, $B \\rarr bB | b$, $C \\rarr cC | c$. Then remove circularity ($S \\rarr S$).\n\n\\subsection{Elimination of copy rules}\n$\\text{Copy}(A) = \\{ B \\in V | A \\Rightarrow^* B \\}$.\n\nAssuming \\textbf{grammar with no empty rules}, apply until a fixed point:\n\\begin{align*}\n    A &\\in \\text{Copy}(A) \\\\\n    C &\\in \\text{Copy}(A) \\text{ if } B \\in \\text{Copy}(A) \\land B \\rarr C \\in P\n\\end{align*}\n\n\\paragraph{Construction of the grammar without copy rules}\n\n\\begin{itemize}\n    \\item Remove copy rules: $P' := P \\setminus \\{A\\rarr B | A, B \\in P\\}$\n    \\item Add compensating rules: $P' := P' \\cup \\{ A \\rarr \\alpha | \\exists B (B \\in \\text{Copy}(A) \\land (B \\rarr \\alpha \\in P)) \\}$\n\\end{itemize}\n\n\\textbf{Example} $E \\rarr E + T | T$, $T \\rarr T \\times C | C$, $C \\rarr 0|\\ldots|9$.\n$\\text{Copy}(E)=\\{E,T,C\\}$, $\\text{Copy}(T) = \\{T, C\\}$, $\\text{Copy}(C) = \\{C\\}$.\nThe grammar becomes $E \\rarr E+T|T\\times C|0|\\ldots|9$, $T \\rarr T\\times C|0|\\ldots|9$, $C \\rarr 0|\\ldots|9$.\n\n\\subsection{Conversion from left to right recursion}\n\n\\paragraph{Immediate L-recursions} $A \\rarr A\\beta_1 | \\ldots | A\\beta_h$ where no $\\beta_i$ is empty, $A \\rarr \\gamma_1 | \\ldots | \\gamma_k$. Introduce new nonterminal $A'$:\n\\begin{align*}\n    A &\\rarr \\gamma_1A' | \\ldots | \\gamma_kA' | \\gamma_1 | \\ldots | \\gamma_k \\\\\n    A' &\\rarr \\beta_1A' | \\ldots | \\beta_hA' | \\beta_1 | \\ldots | \\beta_h\n\\end{align*}\n\n\\textbf{Example} $E \\rarr E+T | T$, $T \\rarr T\\times F | F$, $F \\rarr (E) | i$. 
\n\\subsection{Elimination of copy rules}\n$\\text{Copy}(A) = \\{ B \\in V | A \\Rightarrow^* B \\}$.\n\nAssuming a \\textbf{grammar with no empty rules}, apply until a fixed point:\n\\begin{align*}\n    A &\\in \\text{Copy}(A) \\\\\n    C &\\in \\text{Copy}(A) \\text{ if } B \\in \\text{Copy}(A) \\land B \\rarr C \\in P\n\\end{align*}\n\n\\paragraph{Construction of the grammar without copy rules}\n\n\\begin{itemize}\n    \\item Remove copy rules: $P' := P \\setminus \\{A\\rarr B | A, B \\in V\\}$\n    \\item Add compensating rules: $P' := P' \\cup \\{ A \\rarr \\alpha | \\exists B (B \\in \\text{Copy}(A) \\land (B \\rarr \\alpha \\in P)) \\}$\n\\end{itemize}\n\n\\textbf{Example} $E \\rarr E + T | T$, $T \\rarr T \\times C | C$, $C \\rarr 0|\\ldots|9$.\n$\\text{Copy}(E)=\\{E,T,C\\}$, $\\text{Copy}(T) = \\{T, C\\}$, $\\text{Copy}(C) = \\{C\\}$.\nThe grammar becomes $E \\rarr E+T|T\\times C|0|\\ldots|9$, $T \\rarr T\\times C|0|\\ldots|9$, $C \\rarr 0|\\ldots|9$.\n\n\\subsection{Conversion from left to right recursion}\n\n\\paragraph{Immediate L-recursions} $A \\rarr A\\beta_1 | \\ldots | A\\beta_h$ where no $\\beta_i$ is empty, $A \\rarr \\gamma_1 | \\ldots | \\gamma_k$. Introduce a new nonterminal $A'$:\n\\begin{align*}\n    A &\\rarr \\gamma_1A' | \\ldots | \\gamma_kA' | \\gamma_1 | \\ldots | \\gamma_k \\\\\n    A' &\\rarr \\beta_1A' | \\ldots | \\beta_hA' | \\beta_1 | \\ldots | \\beta_h\n\\end{align*}\n\n\\textbf{Example} $E \\rarr E+T | T$, $T \\rarr T\\times F | F$, $F \\rarr (E) | i$. It becomes $E \\rarr TE' | T$, $E' \\rarr +TE'|+T$, $T \\rarr FT'|F$, $T' \\rarr \\times FT'|\\times F$, $F \\rarr (E) | i$.\n\n\\textbf{Non-immediate L-recursions} are much harder and not covered by the slides.\n", "meta": {"hexsha": "2c7057affc4db04f1faab404b4426e0f714cbd58", "size": 2973, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "grammars/transformations.tex", "max_stars_repo_name": "TiberioG/FLC-cheatsheet", "max_stars_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-01-13T14:36:20.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-18T16:22:18.000Z", "max_issues_repo_path": "grammars/transformations.tex", "max_issues_repo_name": "TiberioG/FLC-cheatsheet", "max_issues_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "grammars/transformations.tex", "max_forks_repo_name": "TiberioG/FLC-cheatsheet", "max_forks_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-21T11:05:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-17T14:59:50.000Z", "avg_line_length": 52.1578947368, "max_line_length": 311, "alphanum_fraction": 0.6374032963, "num_tokens": 1113, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8104789155369047, "lm_q1q2_score": 0.5925075640019973}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section*{Fri Nov 22 2019}\n\nWe found the equation \n%\n\\begin{align}\n  \\frac{1}{l^2} \\qty(\\dv{r}{\\lambda })^2 + W _{\\text{eff}}(r) = \\frac{1}{b^2}\n\\,,\n\\end{align}\n%\nwhere \n%\n\\begin{align}\n  W _{\\text{eff}} = \\frac{1}{r^2} \\qty(1 - \\frac{2GM}{r})\n\\,\n\\end{align}\n%\nand \n%\n\\begin{align}\n  b^2 = \\frac{l^2}{e^2}\n\\,,\n\\end{align}\n%\n\\(l\\) and \\(e\\) being the integrals corresponding to the Killing vectors of time translations and azimuthal angle rotations. \n\nNow, we want to give the interpretation of \\(b\\) as the impact parameter. \nWe consider a BH at the origin of the \\(x, y\\) axes, and a photon approaching parallel to the \\(x\\) axis with impact parameter \\(d\\). The impact parameter is the distance between the two lines: the trajectory of the photon far away from the BH and the line parallel to the trajectory and passing through the BH. \n\nWe can calculate \n%\n\\begin{align}\n  \\dv{\\varphi }{t} = \\underbrace{\\dv{r}{t}}_{-1} \\dv{\\varphi }{r} = - \\qty(-\\frac{d}{r^2})\n\\,,\n\\end{align}\n%\nwhere we used a small angle approximation: \\(\\varphi \\approx d/r\\), which can be differentiated with respect to \\(r\\). \nSo \n%\n\\begin{align}\n  b = \\frac{l}{e} = \\frac{r^2 \\dv*{\\varphi }{\\lambda }}{\\dv*{t}{\\lambda }} = r^2 \\dv{\\varphi }{t} = d \n\\,,\n\\end{align}\n%\nwhich means that \\(b=d\\): the ratio of \\(l\\) to \\(e\\) is the impact parameter. \n\nSo, the photon interacts with the BH and the angle of deflection of the path compared to a straight path is denoted as \\(\\delta \\varphi \\). \nThe total deflection angle is \\(\\Delta \\varphi  = \\pi + \\delta \\varphi \\). \n\nThe parameter \\(l\\) is \n%\n\\begin{align}\n  l = r^2 \\dv{\\varphi }{\\lambda } \n\\,,\n\\end{align}\n%\nso \n%\n\\begin{align}\n  \\dv{}{\\lambda } = \\frac{l}{r^2} \\dv{}{\\varphi }\n\\,.\n\\end{align}\n\nThis allows us to change variables in our 1D equation, and we get \n%\n\\begin{align}\n\\frac{1}{l^2} \\frac{l^2}{r^2} \\qty(\\dv{r}{\\varphi })^2 + \\frac{1}{r^2}\\qty(1 - \\frac{2GM}{r}) = \\frac{1}{b^2}\n\\,,\n\\end{align}\n%\nso if we change variables to \\(u = r^{-1}\\), with \n%\n\\begin{align}\n  \\dv{r}{\\varphi } = -\\frac{1}{u^2} \\dv{u }{\\varphi }\n\\,,\n\\end{align}\n%\nwe  get \n%\n\\begin{align}\n  u^{4} u^{-4} \\qty(\\dv{u}{\\varphi } )^2+ u^2 \\qty(1-2GMu) = \\frac{1}{b^2}\n\\,,\n\\end{align}\n%\nand then, differentiating, we find \n%\n\\begin{align}\n  2 \\dv{u}{\\varphi } \\dv[2]{u}{\\varphi } + 2 u \\dv{u}{\\varphi } - 6GM u^2 \\dv{u}{\\varphi }= 0\n\\,,\n\\end{align}\n%\nand we can simplify as long as \\(\\dv*{u}{\\varphi } \\neq 0\\), which only fails at one point. So in the end our equation is \n%\n\\begin{align}\n  \\dv[2]{u}{\\varphi } + u = 3GMu^2\n\\,.\n\\end{align}\n%\n\nWe solve it perturbatively. \nIf \\(GM=0\\), there is no BH and we have a straight line: the impact parameter is constant and equal to  \\(b = r \\sin(\\varphi )\\), therefore \n%\n\\begin{align}\n  u = \\frac{1}{b} \\sin(\\varphi )\n\\,,\n\\end{align}\n%\nis the most general solution to the harmonic oscillator which satisfies the boundary conditions. \nSo, we hypothesize that our solution satisfies \n%\n\\begin{align}\n  u (\\varphi ) = \\frac{1}{b} \\qty(\\sin(\\varphi ) + W(\\varphi ))\n\\,,\n\\end{align}\n%\nwhere \\(W\\) is small. 
\n\nThen, we insert this: \n%\n\\begin{align}\n  -\\frac{1}{b} \\sin(\\varphi ) + \\frac{1}{b} \\dv[2]{W}{\\varphi } +\n  \\frac{1}{b} \\sin(\\varphi )\n  + \\frac{1}{b} W \\approx 3GM \\frac{\\sin^2(\\varphi )}{b^2}\n\\,,\n\\end{align}\n%\nbut since the zeroth order equation is satisfied we find: \n%\n\\begin{align}\n  \\dv[2]{W}{\\varphi } + W \\approx \\frac{3GM}{b} \\sin^2(\\varphi )\n\\,. \n\\end{align}\n\nOur ansatz is then: \n%\n\\begin{align}\n  W = A + B \\sin^2 \\varphi \n\\,,\n\\end{align}\n%\nwhich looks like it might solve the equation. Its second derivative is \n%\n\\begin{align}\n  \\dv[2]{W}{\\varphi } = 2B \\qty(\\cos^2 \\varphi  - \\sin^2 \\varphi ) = 2B - 4B \\sin^2\\varphi \n\\,.\n\\end{align}\n\nInserting this we get:\n%\n\\begin{align}\n  2B - 4B \\sin^2\\varphi + A + B \\sin^2\\varphi \n  = \\frac{3GM}{b} \\sin^2\\varphi \n\\,,\n\\end{align}\n%\nwhich implies \\(2B+A = 0\\) and \\(-3B = 3GM/b\\). Then, \n%\n\\begin{align}\n  W = \\frac{2GM}{b} - \\frac{GM}{b} \\sin^2\\varphi \n  = \\frac{2GM}{b} \\qty(1 - \\frac{\\sin^2\\varphi}{2} )\n\\,\n\\end{align}\n%\nsolves the perturbed equation. \nWe can see that our condition of \\(W \\ll 1\\) is actually the physically meaningful \\(GM \\ll b\\), or the impact parameter being much larger than the Schwarzschild radius. \n\nOur complete solution is \n%\n\\begin{align}\n  u (\\varphi ) = \\frac{1}{b} \\qty(\\sin \\varphi + \\frac{2GM}{b} \\qty(1 - \\frac{\\sin^2\\varphi}{2}))\n\\,,\n\\end{align}\n%\nand we are interested in the asymptotic past and future, which correspond to \\(u =0\\). \nNow, \\(\\varphi _{\\text{in}}= 0\\) and \\(\\varphi _{\\text{out}} = \\pi\\) will no longer be solutions. However, the deflection is small so we write the solution as \\(\\varphi _{\\text{in}} = \\epsilon _{\\text{in}}\\) and \\(\\varphi _{\\text{out}} = \\pi + \\epsilon _{\\text{out}}\\).\nWe substitute these into the equation \\(u = 0\\): \n%\n\\begin{align}\n  0 = \\sin(\\epsilon _{\\text{in}}) + \\frac{2GM}{b}\n\\,,\n\\end{align}\n%\nor \\(\\epsilon _{\\text{in}} \\approx - 2GM/b\\), since the deflection is small. \n\nFor \\(\\epsilon _{\\text{out}}\\) we will have \n%\n\\begin{align}\n  0= \\sin(\\pi + \\epsilon _{\\text{out}}) + \\frac{2GM}{b} \\approx - \\epsilon _{\\text{out}} + \\frac{2GM}{b}\n\\,,\n\\end{align}\n%\nso \\(\\delta \\varphi = \\epsilon _{\\text{out}} - \\epsilon _{\\text{in}} = 4GM/b\\). \n\nThis was one of the first tests of GR, performed by Arthur Eddington in 1919: during an eclipse he saw a shift in the apparent position of the stars. Reinserting \\(c\\), we find that we must divide \\(4GM/b\\) by \\(c^2\\). \nOur \\(b\\) is approximately the radius of the Sun: the calculation is\n\\begin{lstlisting}[language=Python]\nfrom scipy.constants import c, G, pi\nsun_mass = 2e30\nsun_radius = 696e6\nrad2arcsec = 3600 * 180 / pi\n# deflection angle at the solar limb, in arcseconds (about 1.75)\n4*G*sun_mass/c**2 /sun_radius * rad2arcsec\n\\end{lstlisting}\n\n\\section{Horizons and coordinate systems}\n\nFor the rest of today, we will talk about the Schwarzschild horizon: recall the line element \n%\n\\begin{align}\n  \\dd{s^2} = - \\qty(1- \\frac{2GM}{r}) \\dd{t^2}\n  + \\qty(1 - \\frac{2GM}{r})^{-1} \\dd{r^2} \n  + r^2 \\dd{\\Omega^2}\n\\,.\n\\end{align}\n%\n\nIt is useful to plot light cones in order to understand the structure of the spacetime. \nWe restrict ourselves to radial motion of light. 
So, we have \n%\n\\begin{align}\n  0 = - \\qty(1- \\frac{2GM}{r}) \\dd{t^2}\n  + \\qty(1 - \\frac{2GM}{r})^{-1} \\dd{r^2} \n\\,,\n\\end{align}\n%\nor \n%\n\\begin{align}\n  \\dv{t}{r} = \\pm \\qty(1 - \\frac{2GM}{r})^{-1}\n\\,.\n\\end{align}\n%\n\nThe light cones become slimmer and slimmer as we approach the horizon, degenerating into straight lines at \\(r = 2GM\\): the photon appears to cover less and less \\(\\dd{r}\\) for a fixed \\(\\dd{t}\\) as it approaches the horizon. \nMassive particles are even slower. \nLet us integrate this relation: \n%\n\\begin{align}\n  \\int_0^t \\dd{t} = - \\int_{r_0}^{r(t)} \\frac{\\dd{r}}{1 - \\frac{2GM}{r}} \n\\,,\n\\end{align}\n%\nwhich comes out, by a separation of fractions, to be \n%\n\\begin{align}\n  t = r_0 - r(t) + 2GM \\log \\qty(r_0 - 2GM) - 2GM \\log \\qty(r(t) - 2GM)\n\\,,\n\\end{align}\n%\nwhich diverges as \\(r(t)\\) approaches \\(2GM\\). \n\nWhat do we really mean by \\(t\\)? \nAn observer which is far away and at rest has \\(g_{\\mu \\nu } = \\eta_{\\mu \\nu }\\), and \\(t = \\tau \\): \\(t\\) is the proper time, as measured on the clock of an observer who is far away. \nThey will measure the photon as going slower and slower, and becoming redder and redder. \n\nLet us neglect Doppler redshift, which is due to motion. \nThe formula for gravitational redshift is \n%\n\\begin{align}\n  f _{\\text{obs}} = f _{\\text{emit}} \\sqrt{\\frac{- g_{00} (\\text{emit})}{-g_{00} (\\text{obs})}}\n\\,.\n\\end{align}\n%\n\nIn our case we have \n%\n\\begin{align}\n  f _{\\text{obs}} = f _{\\text{emit}} \\sqrt{1 - \\frac{2GM}{r}}\n\\,,\n\\end{align}\n%\nwhich approaches zero as \\(r \\rightarrow 2GM\\). \nWe see the infalling observer becoming redder and ultimately freezing. \n\nWe should use different coordinates to describe the infalling observer who passes through the event horizon. \n\nFirst of all, we discuss Minkowski spacetime as seen by an accelerating observer: Rindler spacetime and the Rindler horizon. \n\nRecall the exercise in sheet 2, about an accelerating observer: we now consider an observer moving with the position law \n%\n\\begin{align}\n  x(t) = \\frac{1}{\\kappa } \\sqrt{1 + \\kappa^2t^2}\n\\,,\n\\end{align}\n%\n(as opposed to the homework, we have removed the constant added to this position).\nAs in the homework, we compute the proper time for the observer: \n%\n\\begin{align}\n  \\dd{s^2} = - \\dd{t^2} \\qty(1 - \\qty(\\dv{x}{t})^2)\n\\,,\n\\end{align}\n%\nso \n%\n\\begin{align}\n  \\dd{\\tau} = \\frac{ \\dd{t}}{\\sqrt{1 + \\kappa^2t^2}}\n\\,,\n\\end{align}\n%\nwhich means \n%\n\\begin{align}\nt = \\frac{1}{\\kappa } \\operatorname{sinh} (\\kappa \\tau )\n    \\qquad \n\\tau  = \\frac{1}{\\kappa } \\operatorname{arcsinh} (\\kappa t )\n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\n  x = \\frac{1}{\\kappa } \\cosh(\\kappa \\tau )\n\\,.\n\\end{align}\n%\n\nWe want a coordinate system in which \n\\begin{enumerate}\n    \\item the observer is at constant spatial position;\n    \\item where, up to a constant, the time is equal to the proper time.\n\\end{enumerate}\n\nOur change of variable is \n%\n\\begin{subequations}\n\\begin{align}\n  \\begin{cases}\n      t = \\rho \\sinh \\eta  \\\\\n      x = \\rho \\cosh \\eta \n  \\end{cases}\n\\,.\n\\end{align}\n\\end{subequations}\n%\n\nThe observer is at the fixed spatial coordinate \\(\\rho_{*} = 1 / \\kappa \\), and the proper time measured is \\(\\tau = \\eta / \\kappa = \\eta \\rho_{*}\\). 
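\n\nAs a quick sanity check, the following minimal \\texttt{sympy} sketch (our own, not from the lecture) verifies that the worldline above is the hyperbola \\(x^2 - t^2 = 1/\\kappa^2\\), i.e. constant \\(\\rho = 1/\\kappa \\), and that \\(\\tau \\) is indeed proper time:\n\\begin{lstlisting}[language=Python]\nimport sympy as sp\n\nkappa, tau = sp.symbols('kappa tau', positive=True)\nt = sp.sinh(kappa * tau) / kappa\nx = sp.cosh(kappa * tau) / kappa\n\n# The worldline is the hyperbola x**2 - t**2 = 1/kappa**2:\nprint(sp.simplify(x**2 - t**2))  # kappa**(-2)\n\n# tau is proper time: (dx/dtau)**2 - (dt/dtau)**2 = -1,\n# so ds**2 = -dtau**2 along the worldline.\nprint(sp.simplify(sp.diff(x, tau)**2 - sp.diff(t, tau)**2))  # -1\n\\end{lstlisting}\n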
\n\nLet us consider a family of observers at different spatial locations in the new frame: each has a constant acceleration; this means varying \\(\\kappa \\) or, equivalently, \\(\\rho \\).\n\nIf instead we vary \\(\\eta \\) we have: \n%\n\\begin{align}\n  \\frac{t}{x} = \\tanh \\eta \\implies \n  \\eta = \\tanh^{-1} \\qty(\\frac{t}{x})\n\\,,\n\\end{align}\n%\nand we can see that since \\(\\tanh 0 = 0 \\) we have that the \\(t=0\\) axis has \\(\\eta = 0\\), while the light rays \\(t = \\pm x\\) are at \\(\\eta = \\pm \\infty\\). \n\nThese coordinates cover one quadrant of Minkowski spacetime. \nThe line element in these new coordinates is \n%\n\\begin{subequations}\n\\begin{align}\n  \\dd{s^2} &= - \\dd{t^2} + \\dd{x^2}   \\\\\n  &= - \\qty(\\dd{\\rho} \\sinh \\eta  + \\rho \\cosh \\eta \\dd{\\eta })^2\n  + \\qty( \\dd{\\rho} \\cosh \\eta + \\rho \\sinh \\eta \\dd{\\eta })^2  \\\\\n  &= - \\rho^2 \\dd{\\eta^2} + \\dd{\\rho^2}\n\\,.\n\\end{align}\n\\end{subequations}\n%\n\nThis is \\emph{Rindler geometry}. \n\nIf we have an observer staying still at \\(x_0 >0\\), and there is a Rindler (accelerating) observer, then after a time \\(x_0\\) the stationary observer exits the quarter of the plane covered by the Rindler coordinates: if event \\(A\\) is at \\((t, x) = (0, x_0 )\\), and \\(B\\) is at \\((x_0 , x_0 )\\), then the \\(\\eta \\) of event \\(A\\) is zero, while the \\(\\eta \\) of event \\(B\\) is infinite. \n\nThis means that after a time \\(x_0 \\) the stationary observer becomes causally disconnected from the Rindler observer.\n\n\\end{document}", "meta": {"hexsha": "eb76a9196bf56ac84daa884c6e253c908c295889", "size": 10762, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_first_semester/general_relativity/22nov.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_first_semester/general_relativity/22nov.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_first_semester/general_relativity/22nov.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 28.9301075269, "max_line_length": 366, "alphanum_fraction": 0.6357554358, "num_tokens": 3884, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8104789086703225, "lm_q1q2_score": 0.5925075589821235}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{physics}\n\\usepackage{amsmath}\n\\usepackage{showlabels}\n\\usepackage{amssymb}\n\\renewcommand{\\vec}[1]{\\ensuremath{\\mathbf{#1}}}\n\\newcommand{\\uvec}[1]{\\ensuremath{\\hat{\\mathbf{#1}}}}\n\n\n\\title{Statistical Computing for Scientists and Engineers\\\\[1em] Final Homework}\n\\author{Jiale Shi}\n\\date{Dec/10/2018}\n\n\\usepackage{natbib}\n\\usepackage{graphicx}\n\\usepackage{array}\n\\begin{document}\n\\maketitle\n\n\\newpage\n\\section{EM algorithm}\nImplement the EM algorithm for estimating the parameters of a mixture of Gaussians with isotropic covariances using the data provided on data resources. There are two data sets each of which is two-dimensional. You can write your own or use any available code for mixture of Gaussians (e.g. you ca use the code in the code director with some changes to account for the isotropic covariances. Also see the accompanying paper Unsupervised Learning of Finite Mixture Models, M.Figueiredo and A.K. Jain.)\n\nExperiment with the number of mixtures and comment on the trade-off between the number of mixture and goodness of fit (i.e. log-likelihood) of the data. Plot the log-likelihood as a function of the number of components of a mixture of Gaussians to support your argument.\n\nFind a fixed number of Gaussians that works well for each data set.\n\nPlot the estimated Gaussians as one-sigma contours of each mixing component on top of the training data.\n\nList the mean, covariance and mixing weights of each mixture component.\n\n\\textbf{Solution}:\nImplement the EM algorithm for estimating the parameters of a mixture of Gaussians with isotropic covariances. The isotropic covariances maxtrix means the covariance matrix is diagonal and all elements on the diagonal is equal.\n\n\\begin{equation}\n    \\textbf{C}_{\\mbox{isotropic}} = \\lambda \\textbf{I} = \\left( \\begin{array}{cc}  \\lambda & 0 \\\\ 0 & \\lambda\n    \\end{array} \\right)\n\\end{equation}\n\nThe matlab code (covoption=1) can deal with diag covariances matrix. 
\nFor diag cov:\n\\begin{equation}\n    \\textbf{C}_{\\mbox{diag}} =  \\left( \\begin{array}{cc}  \\lambda_1 & 0 \\\\ 0 & \\lambda_2\n    \\end{array} \\right)\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n& \\mbox{For diag cov}\\\\\n& \\sigma_{ik}^{2} = \\frac{\\sum_n \\frac{1}{N}(x_{in}-\\mu_{ik})^2}{\\sum_{n=1}^{N} \\frac{1}{N}} & i = 1,2 \\\\\n& \\mbox{For isotropic cov} \\\\\n& \\sigma_{k}^{2} = \\frac{\\sum_{n=1}^{N} \\frac{1}{N}||\\vec{x}_{n}-\\vec{\\mu}_{k}||^2}{2 \\sum_n \\frac{1}{N}} =  \\frac{\\sigma_{1k}^{2}+\\sigma_{2k}^{2}}{2}\n\\end{aligned}\n\\end{equation}\n\nTherefore, once I get the diag cov $\\left( \\begin{array}{cc}  \\sigma_{1k}^{2} & 0 \\\\ 0 & \\sigma_{2k}^{2}\n    \\end{array} \\right)$, I average its diagonal entries to get the isotropic cov:\n\\begin{equation}\n    \\textbf{C}_{\\mbox{isotropic},k}  = \\left( \\begin{array}{cc}  \\sigma_{k}^{2} & 0 \\\\ 0 & \\sigma_{k}^{2} \n    \\end{array} \\right)  = \\left( \\begin{array}{cc}  \\frac{\\sigma_{1k}^{2}+\\sigma_{2k}^{2}}{2} & 0 \\\\ 0 & \\frac{\\sigma_{1k}^{2}+\\sigma_{2k}^{2}}{2} \n    \\end{array} \\right)\n\\end{equation}\nOther steps are the same as for the diag cov.\n
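\nA minimal NumPy sketch of this isotropic update (our own illustration; \\texttt{resp} denotes the E-step responsibilities, which play the role of the $\\frac{1}{N}$ weights written above):\n\\begin{verbatim}\nimport numpy as np\n\ndef isotropic_update(X, resp, mu):\n    # X: (N, 2) data, resp: (N, K) responsibilities, mu: (K, 2) means.\n    Nk = resp.sum(axis=0)            # effective count of each component\n    # Per-dimension (diag) variances for each component k.\n    var_diag = np.stack(\n        [(resp * (X[:, d:d+1] - mu[:, d]) ** 2).sum(axis=0) / Nk\n         for d in range(X.shape[1])], axis=1)\n    # Isotropic variance: average the two diagonal entries.\n    return var_diag.mean(axis=1)     # sigma_k^2 for k = 1..K\n\\end{verbatim}\n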
\\newpage\nFor \\textbf{data1train.dat}:\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.6]{fig/HW6P1_11.png}\n\\caption{log-likelihood as a function of the number of components of a mixture of Gaussians for \\textbf{data1train.dat}}\n%\\label{fig:HW6P1_1}\n\\end{figure}\n\n\nThe fixed number of Gaussians that works well for \\textbf{data1train.dat} is \\textbf{14}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.6]{fig/HW6P1_12.png}\n\\caption{Optimal Gaussian mixtures that work for \\textbf{data1train.dat}}\n%\\label{fig:HW6P1_1}\n\\end{figure}\n\n\n\\newpage\nList the mean, covariance and mixing weights for each mixture component.\n\n\\begin{math}\n\\begin{aligned}\n\\mbox{best mean} & \\mbox{best covariance} & \\mbox{best mixing weight} \\\\\n\\left( \\begin{array}{c}  1.3716  \\\\   -9.6936\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.8763 & 0 \\\\   0 & 0.8763\n\\end{array} \\right) \n&   0.0646  \\\\ %1\n\\left( \\begin{array}{c}  6.1686 \\\\  8.0496    \n\\end{array} \\right)\n& \\left( \\begin{array}{cc}   1.2756 & 0 \\\\   0 & 1.2756 \n\\end{array} \\right) \n&   0.0667\\\\ %2\n\\left( \\begin{array}{c}   -4.3445 \\\\  -8.8821 \n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.0669  & 0 \\\\   0 & 1.0669 \n\\end{array} \\right) \n&   0.0471 \\\\ %3\n\\left( \\begin{array}{c}  -9.7203 \\\\  -1.4463\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.4420 & 0 \\\\   0 & 1.4420\n\\end{array} \\right) \n&   0.0651\\\\ %4\n\\left( \\begin{array}{c}  4.5894 \\\\  -8.5408\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}    1.3080 &  0 \\\\   0 &   1.3080\n\\end{array} \\right) \n&   0.0724\\\\ %5\n\\left( \\begin{array}{c}   9.3126 \\\\  4.0447 \n\\end{array} \\right) \n& \\left( \\begin{array}{cc} 1.8687  & 0 \\\\   0 & 1.8687 \n\\end{array} \\right) \n&   0.0747\\\\ %6\n\\left( \\begin{array}{c}   -10.0447 \\\\  2.2659\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.6937  & 0 \\\\  0   & 1.6937 \n\\end{array} \\right) \n&   0.0809\\\\ %7\n\\left( \\begin{array}{c}  -1.4953 \\\\  -10.0566\n\\end{array} \\right)\n& \\left( \\begin{array}{cc}   0.8459   & 0 \\\\   0 & 0.8459   \n\\end{array} \\right) \n&   0.0510 \\\\ %8\n\\left( \\begin{array}{c}   9.8751 \\\\  -0.6106\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.9610 & 0 \\\\   0 & 1.9610 \n\\end{array} \\right) \n&   0.1189 \\\\ %9\n\\left( \\begin{array}{c} -7.6513 \\\\  -6.2702  \n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.3372 & 0 \\\\   0 & 1.3372\n\\end{array} \\right) \n&   0.0582\\\\ %10\n\\left( \\begin{array}{c}  1.8042 \\\\  9.7706\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.7976 &  0 \\\\   0 &  1.7976\n\\end{array} \\right) \n&   0.0787\\\\ %11\n\\left( \\begin{array}{c}   8.0447 \\\\  -5.6998 \n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.1455  & 0 \\\\   0 &  1.1455 \n\\end{array} \\right) \n&   0.0753\\\\ %12\n\\left( \\begin{array}{c}   -3.6905 \\\\  9.1483\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}    1.9368 & 0 \\\\  0   &  1.9368\n\\end{array} \\right) \n&   0.0844\\\\ %13\n\\left( \\begin{array}{c}  -7.8340 \\\\  6.0516\n\\end{array} \\right)\n& \\left( \\begin{array}{cc}    1.1180 & 0 \\\\   0 &  1.1180\n\\end{array} \\right) \n&   0.0621 %14\n\\end{aligned}\n\\end{math}\n\n\n\\newpage\nFor \\textbf{data2train.dat}:\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.6]{fig/HW6P1_21.png}\n\\caption{log-likelihood as a function of the number of components of a mixture of Gaussians for \\textbf{data2train.dat}}\n%\\label{fig:HW6P1_1}\n\\end{figure}\n\n\nThe fixed number of Gaussians that works well for \\textbf{data2train.dat} is \\textbf{10}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.6]{fig/HW6P1_22.png}\n\\caption{Optimal Gaussian mixtures that work for \\textbf{data2train.dat}}\n%\\label{fig:HW6P1_1}\n\\end{figure}\n\n\\newpage\nList the mean, covariance and mixing weights for each mixture component.\n\\begin{math}\n\\begin{aligned}\n\\mbox{best mean} & \\mbox{best covariance} & \\mbox{best mixing weight} \\\\\n\\left( \\begin{array}{c}  -4.5860 \\\\ -1.6871\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}  0.5758  &  0 \\\\ 0    & 0.5758\n\\end{array} \\right) \n&  0.0793   \\\\ %1\n\\left( \\begin{array}{c} 2.1473  \\\\  -14.3782\n\\end{array} \\right)\n& \\left( \\begin{array}{cc}  1.3352  & 0 \\\\ 0   & 1.3352\n\\end{array} \\right) \n&  0.1471  \\\\ %2\n\\left( \\begin{array}{c}   -4.8326  \\\\  0.7035\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.9502 & 0  \\\\ 0   &  0.9502\n\\end{array} \\right) \n&  0.1073 \\\\ %3\n\\left( \\begin{array}{c}  -3.9271  \\\\  3.2646\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.4316 & 0 \\\\  0  & 0.4316\n\\end{array} \\right) \n&  0.0671   \\\\ %4\n\\left( \\begin{array}{c} -1.6167   \\\\  4.6312\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.7350   & 0  \\\\  0  &  0.7350 \n\\end{array} \\right) \n&  0.1017  \\\\ %5\n\\left( \\begin{array}{c}  -3.3163 \\\\  -3.6515\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}  0.6169    & 0 \\\\ 0   &  0.6169  \n\\end{array} \\right) \n&   0.0778 \\\\ %6 \n\\left( \\begin{array}{c} 2.1058  \\\\  -5.6436 \n\\end{array} \\right)\n& \\left( \\begin{array}{cc}  1.0695   & 0 \\\\  0  &  1.0695 \n\\end{array} \\right) \n&  0.0964 \\\\ %7\n\\left( \\begin{array}{c}   4.6136 \\\\  -11.3956\n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   1.4347 & 0 \\\\  0  & 1.4347\n\\end{array} \\right) \n& 0.1386 \\\\ %8\n\\left( \\begin{array}{c} 4.5414 \\\\ -7.9118 \n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.9857   & 0 \\\\  0  & 0.9857  \n\\end{array} \\right) \n&  0.0924 \\\\ %9\n\\left( \\begin{array}{c}  -0.7092 \\\\  -4.9507 \n\\end{array} \\right) \n& \\left( \\begin{array}{cc}   0.9208   & 0  \\\\  0  & 0.9208  \n\\end{array} \\right) \n&  0.0924  
%10\n\\end{aligned}\n\\end{math}\n\n\n\n\\newpage\n\\section{Resampling}\nRandomly generate 100 particles $x^{i}$ form some distribution $\\pi$ of your choice, and 100 (positive) weights $w^{i}$. Normalize the weights such that $\\sum_{i} w^{i} =1$, and the weighted samples ${x^{i},w^{i}}$ to estimate the mean $m$ of $\\pi$, and denote this estimate by $\\hat{m}$.\n\nResample the particles $x^i$ (from the weights $w^i$) using multinominal resampling, and estimate the mean from the resampled (now equally weighted) samples. Denote this estimate $\\hat{m}_m$.\n\nRepeat this for systematic resampling, and denote this estimate $\\hat{m}_s$.\n\nRepeat this for stratified resampling, and denote this estimate $\\hat{m}_t$.\n\nRepeat the items above multiple times, and report an estimate of the variance for $\\hat{m}-\\hat{m}_m$, $\\hat{m}-\\hat{m}_s$, and $\\hat{m}-\\hat{m}_t$ respectively, conditionally on $\\hat{m}$ (that is, do not sample new particles from $\\pi$, but only repeat the resampling step). Which resampling scheme appears to be the preferred one, in terms of variance?\n\n\\textbf{Solution}:\n\nI randomly generate 100 particles $x^{i}$ using Gaussian distribution $G(0,1)$ and calculate weights $w^{i}$, normalize the weights by \n\\begin{equation}\n    w^{i} = \\frac{w^{i}}{sum(w^{i})}\n\\end{equation}\n\nThen I use the same samples and normalized weights to resample multiple times. \nThe algorithms of multinomial resampling, systematic resampling and stratified resampling are in the code (see the attached code)\n\nI repeat resample 1000 times to calculate the variance of $\\hat{m}-\\hat{m}_m$, $\\hat{m}-\\hat{m}_s$, and $\\hat{m}-\\hat{m}_t$.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.4]{fig/P5_1.png}\n\\caption{$\\hat{m}-\\hat{m}_m$, $\\hat{m}-\\hat{m}_s$, and $\\hat{m}-\\hat{m}_t$ in 1000 iterations}\n%\\label{fig:HW6P1_1}\n\\end{figure}\nIn this Figure 5, we find that the variance of $\\hat{m}-\\hat{m}_m$ is the largest. The variance of $\\hat{m}-\\hat{m}_s$ is smallest. \n\nAnalytically, in our simulation, 1000 iterations.\n\nVariance of $\\hat{m}-\\hat{m}_m$ is $0.005132756202771032$\n\nVariance of $\\hat{m}-\\hat{m}_s$ is $0.0014684267477009338$\n\nVariance of $\\hat{m}-\\hat{m}_t$ is $0.002763182508388841$\n\nIn terms of variance, systematic resampling is the preferred one.\n\n\\newpage\n\\section{EM algorithm}\nConsider data,\n\\begin{equation}\n    D = \\left\\{\\left( \\begin{array}{c}  1 \\\\  3\n    \\end{array} \\right)\n    \\left( \\begin{array}{c}  4 \\\\  5\n    \\end{array} \\right)\n    \\left( \\begin{array}{c}  2 \\\\  \\star\n    \\end{array} \\right)\\right\\}\n\\end{equation}\nsampled from a two-dimensional (separable) distribution $p(x_1,x_2) = p_{x_{1}}(x_{1})p_{x_{2}}(x_{2})$ where:\n\n\\begin{equation}\np_{x_1}(x_1) = \\left\\{\\begin{array}{rcl}\n\\frac{1}{\\theta_1} \\exp(-\\frac{x_1}{\\theta_1}) & \\mbox{if} & x_1 \\geq 0,\\\\ \n0  & \\mbox{otherwise} ,\n\\end{array}\\right. \n\\end{equation}\n\nand\n\n\\begin{equation}\np_{x_2}(x_2) = \\left\\{\\begin{array}{rcl}\n\\frac{1}{\\theta_2}  & \\mbox{if} & 0 \\leq x_2 \\leq \\theta_2,\\\\ \n0  & \\mbox{otherwise} ,\n\\end{array}\\right. 
\n\\newpage\n\\section{EM algorithm}\nConsider data,\n\\begin{equation}\n    D = \\left\\{\\left( \\begin{array}{c}  1 \\\\  3\n    \\end{array} \\right)\n    \\left( \\begin{array}{c}  4 \\\\  5\n    \\end{array} \\right)\n    \\left( \\begin{array}{c}  2 \\\\  \\star\n    \\end{array} \\right)\\right\\}\n\\end{equation}\nsampled from a two-dimensional (separable) distribution $p(x_1,x_2) = p_{x_{1}}(x_{1})p_{x_{2}}(x_{2})$ where:\n\n\\begin{equation}\np_{x_1}(x_1) = \\left\\{\\begin{array}{rcl}\n\\frac{1}{\\theta_1} \\exp(-\\frac{x_1}{\\theta_1}) & \\mbox{if} & x_1 \\geq 0,\\\\ \n0  & \\mbox{otherwise} ,\n\\end{array}\\right. \n\\end{equation}\n\nand\n\n\\begin{equation}\np_{x_2}(x_2) = \\left\\{\\begin{array}{rcl}\n\\frac{1}{\\theta_2}  & \\mbox{if} & 0 \\leq x_2 \\leq \\theta_2,\\\\ \n0  & \\mbox{otherwise} ,\n\\end{array}\\right. \n\\end{equation}\n\n\nand $\\star$ in the dataset indicates a missing value.\n\\newline\n(a) What can you infer about $\\theta_2$ by looking at $D$?\n\\newline\n\\textbf{Solution}:\n\nThe two existing pairs ($x_1$,$x_2$), $\\left( \\begin{array}{c}  1 \\\\  3\n    \\end{array} \\right)$ and\n    $\\left( \\begin{array}{c}  4 \\\\  5\n    \\end{array} \\right)$, mean that\n    \n    for $p_{x_2}(x_2)$, $p_{x_2}(3) \\neq 0$ and $p_{x_2}(5) \\neq 0$, so\n    \n    $0 \\leq 3 \\leq 5 \\leq \\theta_2$.\n    \nTherefore, $\\theta_2$ should be greater than or equal to 5.\n\\newline\n(b) Start with an initial estimate $\\theta^{0}= \\left( \\begin{array}{c}  3 \\\\  6\n    \\end{array} \\right)$ and analytically calculate $Q(\\theta | \\theta^{0})$. \nThis is the expected joint data log-likelihood considered in class. For this problem, to compute it you effectively have to marginalize out the missing values. This is the expectation step in the EM algorithm.\n\\newline\n\\textbf{Solution}:\n\n\\begin{math}\n    D = \\left\\{\\left( \\begin{array}{c}  x_{11} = 1 \\\\  x_{12} = 3\n    \\end{array} \\right)\n    \\left( \\begin{array}{c}  x_{21} =  4 \\\\ x_{22} = 5\n    \\end{array} \\right)\n    \\left( \\begin{array}{c} x_{31} = 2 \\\\  x_{32} = \\star\n    \\end{array} \\right)\\right\\}\n\\end{math}\n\n$\\theta^{0}= \\left( \\begin{array}{c}  \\theta_1^{0} = 3 \\\\ \\theta_2^{0} =  6\n    \\end{array} \\right) = (\\theta_1^{0},\\theta_2^{0})$ \n    \n$\\theta= \\left( \\begin{array}{c}  \\theta_1  \\\\ \\theta_2\n    \\end{array} \\right) = (\\theta_1,\\theta_2)$\n\n\\begin{equation}\n\\begin{aligned}\n    p(x_1,x_2,\\theta) & = p_{x_1}(x_1 | \\theta) p_{x_2}(x_2 | \\theta) \\\\\n                      & = \\left\\{\\begin{array}{rcl}\n\\frac{1}{\\theta_1} \\exp(-\\frac{x_1}{\\theta_1}) \\frac{1}{\\theta_2}  & \\mbox{if} & x_1 \\geq 0  \\&  0 \\leq x_2 \\leq \\theta_2,\\\\ \n0  & \\mbox{otherwise} ,\n\\end{array}\\right. 
\n\\end{aligned}\n\\end{equation}\n\nThen we need to use the expected joint data log-likelihood $Q(\\theta | \\theta^{0})$, where $x_v$ and $x_h$ below denote the visible and hidden components of the data.\n\n\\begin{equation}\n\\begin{aligned}\n& Q(\\theta | \\theta^{0})  = E_{x_{32}}[\\ln{p(x_v,x_h,\\theta)} | \\mathcal{D},\\theta^{0}] \\\\\n& = \\int_{-\\infty}^{\\infty} [\\ln{p(x_{11},x_{12} | \\theta)} + \\ln{p(x_{21},x_{22} | \\theta)}+ \\ln{p(x_{31},x_{32} | \\theta)}] p(x_{32} |x_{31},\\theta^{0}) dx_{32} \\\\\n& = \\ln{p(x_{11},x_{12} | \\theta)} + \\ln{p(x_{21},x_{22} | \\theta)} + \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] \\frac{p(x_{31},x_{32}|\\theta^{0})}{\\int_{-\\infty}^{\\infty} p(x_{31}, x_{32}^{'} |\\theta^{0}) dx_{32}^{'} } dx_{32}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\ln{p(x_{11},x_{12} | \\theta)} + \\ln{p(x_{21},x_{22} | \\theta)} & = \\ln(\\frac{1}{\\theta_1 \\theta_2}e^{-\\frac{x_{11}}{\\theta_1}}) +\\ln(\\frac{1}{\\theta_1 \\theta_2}e^{-\\frac{x_{21}}{\\theta_1}})  \\\\\n& =\\ln(\\frac{1}{\\theta_1 \\theta_2}e^{-\\frac{1}{\\theta_1}}) +\\ln(\\frac{1}{\\theta_1 \\theta_2}e^{-\\frac{4}{\\theta_1}}) \\\\\n& = -2 \\ln{\\theta_1 \\theta_2} -\\frac{5}{\\theta_1}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\np(x_{31}=2,x_{32} | \\theta^{0}) = p_{x_{31}}(x_{31}=2|\\theta^{0}) p_{x_{32}}(x_{32}|\\theta^{0})    \n\\end{equation}\n\n\\begin{equation}\n\\int_{-\\infty}^{\\infty} p(x_{31}=2, x_{32}^{'} |\\theta^{0}) dx_{32}^{'} = p_{x_{31}}(x_{31}=2|\\theta^{0})  \n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] \\frac{p(x_{31},x_{32}|\\theta^{0})}{\\int_{-\\infty}^{\\infty} p(x_{31}, x_{32}^{'} |\\theta^{0}) dx_{32}^{'} } dx_{32} = \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] p_{x_{32}}(x_{32}|\\theta^{0}) dx_{32}\n\\end{aligned}\n\\end{equation}\n\n$[ \\ln{p(x_{31},x_{32} | \\theta)}] p_{x_{32}}(x_{32}|\\theta^{0}) \\neq 0$ if $ 0 \\leq x_{32} \\leq \\min(\\theta_2,  \\theta_{2}^{0}=6) = \\min(\\theta_2,  6) $\n\n\\begin{equation}\n\\begin{aligned}\n \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] p_{x_{32}}(x_{32}|\\theta^{0}) dx_{32} = \\int_{0}^{\\min(\\theta_2,  6)} ( -\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1}) (\\frac{1}{6}) dx_{32}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n \\mbox{for }   & 5 \\leq \\theta_2 <6, \\theta_2=\\min(\\theta_2,  6) \\\\\n& \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] p_{x_{32}}(x_{32}|\\theta^{0}) dx_{32} \\\\\n& = \\int_{0}^{\\theta_2} ( -\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1}) (\\frac{1}{6}) dx_{32} \\\\\n& = \\frac{\\theta_2}{6}(-\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1})\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n \\mbox{for }   & 6 \\leq \\theta_2 , 6=\\min(\\theta_2,  6) \\\\\n& \\int_{-\\infty}^{\\infty} [ \\ln{p(x_{31},x_{32} | \\theta)}] p_{x_{32}}(x_{32}|\\theta^{0}) dx_{32} \\\\\n& = \\int_{0}^{6} ( -\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1}) (\\frac{1}{6}) dx_{32} \\\\\n& = (-\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1})\n\\end{aligned}\n\\end{equation}\n\n\nTherefore, combining (11), (16) and (17):\n\\begin{equation}\n\\begin{aligned}\n\\mbox{for }   & 5 \\leq \\theta_2 <6 \\\\\n& Q(\\theta | \\theta^{0})  = -2 \\ln{\\theta_1 \\theta_2} -\\frac{5}{\\theta_1} + \\frac{\\theta_2}{6}(-\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1})    = -(2+\\frac{\\theta_2}{6})\\ln{\\theta_1 \\theta_2} -(5+\\frac{\\theta_2}{3}) \\frac{1}{\\theta_1} \\\\\n
 \\mbox{for }   & 6 \\leq \\theta_2  \\\\\n& Q(\\theta | \\theta^{0})  = -2 \\ln{\\theta_1 \\theta_2} -\\frac{5}{\\theta_1} +  (-\\ln{\\theta_1 \\theta_2} -\\frac{2}{\\theta_1}) \n= -3\\ln{\\theta_1 \\theta_2} -\\frac{7}{\\theta_1}\n\\end{aligned}\n\\end{equation}\n\n(c) Find the $\\theta$ that maximizes your $Q(\\theta | \\theta^{0})$. This is the maximization step of the EM algorithm.\n\\newline\n\\textbf{Solution}:\n\n\\textbf{M} step: solve $ \\bigtriangledown Q(\\theta | \\theta^{0})= 0 $ or otherwise find the maximizer of \n$Q(\\theta | \\theta^{0})$.\n\nWe have boundary conditions for $\\theta_2$, so we start from $\\theta_2$ and consider\n\n\\begin{equation}\n    \\frac{\\partial Q(\\theta | \\theta^{0})}{ \\partial \\theta_2 }\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\mbox{for }   & 5 \\leq \\theta_2 <6 \\\\\n& \\frac{\\partial Q(\\theta | \\theta^{0})}{ \\partial \\theta_2 } = -\\frac{\\ln \\theta_1}{6}-\\frac{1}{3\\theta_1} -\\frac{2}{\\theta_2}- \\frac{\\ln \\theta_2}{6}  -\\frac{1}{6} \\\\ \n& = F(\\theta_1)+G(\\theta_2)-\\frac{1}{6} \n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nF(\\theta_1) = -\\frac{\\ln \\theta_1}{6}-\\frac{1}{3\\theta_1}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\frac{\\partial F(\\theta_1)}{ \\partial \\theta_1} = \\frac{1}{6} (\\frac{2}{(\\theta_{1})^{2}}-\\frac{1}{\\theta_1}) = 0  \\rightarrow \\theta_1 = 2 \n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nF(\\theta_1) \\leq F(2) = -\\frac{1}{6}(\\ln2 +1) <0\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nG(\\theta_2) = -\\frac{2}{\\theta_2}- \\frac{\\ln \\theta_2}{6} < 0 & \\mbox{for } & 5 \\leq \\theta_2 <6\n\\end{aligned}\n\\end{equation}\n\nTherefore,\n\n\\begin{equation}\n\\begin{aligned}\n\\mbox{for }   & 5 \\leq \\theta_2 <6 \\\\\n& \\frac{\\partial Q(\\theta | \\theta^{0})}{ \\partial \\theta_2 } < 0\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\mbox{for }   & 6 \\leq \\theta_2  \\\\\n& \\frac{\\partial Q(\\theta | \\theta^{0})}{ \\partial \\theta_2 } = -\\frac{3}{\\theta_2} < 0\n\\end{aligned}\n\\end{equation}\n\nAnd $Q(\\theta | \\theta^{0})$ is continuous at $\\theta_2=6$. Therefore, for $\\theta_2 \\geq 5$, $Q(\\theta | \\theta^{0})$ is continuous and monotonically decreasing in $\\theta_2$. \n\nWhen $\\theta_2 = 5$, $Q(\\theta | \\theta^{0})$ attains its largest value $Q(\\theta_1,\\theta_2 = 5 | \\theta^{0})$. Then we fix $\\theta_2 = 5$ and solve for $\\theta_1$. 
\n\n\\begin{equation}\n\\begin{aligned}\nQ(\\theta_1,\\theta_2 = 5 | \\theta^{0}) = -(\\frac{17}{6})\\ln{5\\theta_1} -(\\frac{20}{3}) \\frac{1}{\\theta_1}\n\\end{aligned}\n\\end{equation}\n\n$\\theta_1 > 0$\n\n\\begin{equation}\n\\begin{aligned}\n& \\frac{\\partial Q(\\theta_1,\\theta_2 = 5 | \\theta^{0})}{ \\partial \\theta_1 } = -(\\frac{17}{6})\\frac{1}{\\theta_1} + (\\frac{20}{3}) \\frac{1}{(\\theta_1)^2} = 0 \\rightarrow \\theta_1 = \\frac{40}{17}\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n& \\frac{\\partial Q(\\theta_1,\\theta_2 = 5 | \\theta^{0})}{ \\partial \\theta_1 } >0 & \\mbox{if } & 0<\\theta_1 < \\frac{40}{17} \\\\\n& \\frac{\\partial Q(\\theta_1,\\theta_2 = 5 | \\theta^{0})}{ \\partial \\theta_1 } <0 & \\mbox{if } & \\frac{40}{17} < \\theta_1 \n\\end{aligned}\n\\end{equation}\n\n$Q(\\theta_1,\\theta_2 = 5 | \\theta^{0})$ gets the largest value when $\\theta_1 = \\frac{40}{17}$.\n\nIn conclusion, the $\\theta= \\left( \\begin{array}{c}  \\theta_1 \\\\  \\theta_2\n    \\end{array} \\right) =\\left( \\begin{array}{c}  \\frac{40}{17} \\\\  5\n    \\end{array} \\right) $ maximizes $Q(\\theta | \\theta^{0})$. \n\n%\\bibliographystyle{plain}\n%\\bibliography{references}\n\\end{document}\n", "meta": {"hexsha": "ce0bbea8d3c2f5b13a63f5003fd816f2233ec206", "size": 19883, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW6/HW6_SHI_JIALE/HW6_SHI_JIALE_latexsource/main.tex", "max_stars_repo_name": "shijiale0609/Statistical-Computing-Methods", "max_stars_repo_head_hexsha": "e780746d5f1e4b475bf38eb15d9d825daf45ffa6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/HW6/HW6_SHI_JIALE/HW6_SHI_JIALE_latexsource/main.tex", "max_issues_repo_name": "shijiale0609/Statistical-Computing-Methods", "max_issues_repo_head_hexsha": "e780746d5f1e4b475bf38eb15d9d825daf45ffa6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW6/HW6_SHI_JIALE/HW6_SHI_JIALE_latexsource/main.tex", "max_forks_repo_name": "shijiale0609/Statistical-Computing-Methods", "max_forks_repo_head_hexsha": "e780746d5f1e4b475bf38eb15d9d825daf45ffa6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4444444444, "max_line_length": 500, "alphanum_fraction": 0.6252074637, "num_tokens": 8260, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110203, "lm_q2_score": 0.81047890180374, "lm_q1q2_score": 0.5925075444642598}}
{"text": "\\subsection{Dimensionality Reduction}\n\\label{sec:dimreduce}\n\nUsing high dimensional vectors is problematic with many learning\nalgorithms because of the computational cost and the curse of\ndimensionality.  In this section we investigate if there is a low\ndimensional representation of the substitute vectors which still\npreserve the neighborhood information necessary to learn syntactic\ncategories.  We first briefly describe then report experimental\nresults on principal components analysis (PCA), Isomap\n\\cite{tenenbaum2000global}, locally linear embedding (LLE)\n\\cite{roweis2000nonlinear}, and Laplacian eigenmaps\n\\cite{belkin2003laplacian}.\n\nEach dimensionality reduction algorithm tries to preserve certain\naspects of the original vectors.  PCA is a linear method that\nminimizes reconstruction error.  Isomap tries to preserve distances as\nmeasured along a low dimensional submanifold assuming the input\nvectors were sampled from the neighborhood of such a manifold.  LLE\nmost faithfully preserves the local linear structure of nearby input\nvectors.  Laplacian eigenmaps most faithfully preserve proximity\nrelations, mapping nearby inputs to nearby outputs.\n\n% we don't use KL2 anymore, we will probably use Jensen\n% we may use fewer than 100 nearest neighbors, keeping it 100 for now\n% we want to split 1m data into 10 pieces, but that may prove\n%difficult to do so \nWe wanted to see how accuracy (based on the\nk-nearest-neighbor supervised baseline as in the previous section)\nchanges based on the number of dimensions for each dimensionality\nreduction algorithm.\n%\n%% For computational efficiency, we extracted the first 24K tokens of\n%% Wall Street Journal Section of the Penn Treebak \\cite{treebank3}.  We\n%% applied each algorithm to each chunk and obtained average accuracy and\n%% standard deviation for a given number of dimensions.\nFor algorithms that require a distance matrix rather than raw input\nvectors we used the Jensen-Shannon divergence judged best by the\nexperiments of the previous section.  For graph based methods we built\nneighborhood graphs using 100 nearest neighbors.  The low dimensional\noutput vectors were compared using the cosine distance metric for the\nsupervised k-nearest-neighbor algorithm.  Figure~\\ref{fig:dimreduce}\nplots supervised baseline accuracy vs. number of dimensions for each\nalgorithm.\n\n% this graph needs to updated\n% graph of dims vs accuracy\n\\begin{figure}[h] \\centering\n\\includegraphics[width=0.6\\textwidth]{baseline_graph_mono.png}\n\\caption{Supervised knn baselines for the four dimensionality\n  reduction algorithms.}\n\\label{fig:dimreduce}\n\\end{figure}\n\n% /scratch/esert/pos_ind/work/BASELINE_GRAPH/plot_data\n% dimension PCA    LLE     ISO     LEM(Spectral)\n% 2              0.2831 0.2272 0.3433 0.3480\n% 4              0.4300 0.4547 0.5596 0.5019\n% 8              0.4920 0.5968 0.6480 0.6234\n% 16            0.5303 0.6500 0.6678 0.6572\n% 32            0.5676 0.6617 0.6790 0.6708\n% 64            0.6162 0.6680 0.6818 0.6774\n% 128          0.6390 0.6735 0.6844 0.6838\n% 256          0.6527 0.6747 0.6860 0.6914\n% 512          0.6658 0.6774 0.6876 0.6948\n% 1024        0.6720 0.6798 0.6891 0.7022\n% 2048        0.6764 0.6811 0.6878 0.7070\n% \n\n% this should be about same\nThe graph based algorithms (Isomap, LLE, and Laplacian eigenmaps) all\noutperform PCA.  They stay within 5\\% of their peak accuracy with as\nfew as 16 dimensions.  
In fact Laplacian eigenmaps outperform the\nbaseline with the original 12,672 dimensional vectors (68.95\\%) when\nallowed to retain more than about 250 dimensions.  Spectral clustering\nuses the same transformation as the Laplacian eigenmaps algorithm and\nwe compare its performance to other clustering algorithms in the next\nsection.\n\n%%\n", "meta": {"hexsha": "404d27657f86bb879a85edfa9962e3d637caade5", "size": 3726, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/cl2012/cl/dimension.tex", "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_issues_repo_path": "papers/coling2014/dimension.tex", "max_issues_repo_name": "ai-ku/upos_2014", "max_issues_repo_head_hexsha": "f4723cac53b4d550d2b0c613c9577eb247c7ff4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/coling2014/dimension.tex", "max_forks_repo_name": "ai-ku/upos_2014", "max_forks_repo_head_hexsha": "f4723cac53b4d550d2b0c613c9577eb247c7ff4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "avg_line_length": 46.0, "max_line_length": 73, "alphanum_fraction": 0.7734836286, "num_tokens": 1015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.8244619285331332, "lm_q1q2_score": 0.5924537187320634}}
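To make the comparison in the preceding section concrete, here is a minimal Python sketch of the same experimental pattern using scikit-learn (the toy digits dataset, the 16-dimensional target, and the cross-validated kNN accuracy are placeholders of our own; the paper's substitute vectors, Jensen-Shannon distance matrix, and treebank data are not reproduced here):
\begin{verbatim}
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)   # stand-in for the substitute vectors
for name, reducer in [
    ("PCA", PCA(n_components=16)),
    ("Isomap", Isomap(n_neighbors=100, n_components=16)),
    ("LLE", LocallyLinearEmbedding(n_neighbors=100, n_components=16)),
    ("Laplacian", SpectralEmbedding(n_components=16, n_neighbors=100)),
]:
    Z = reducer.fit_transform(X)
    # Supervised kNN baseline on the low dimensional vectors (cosine metric).
    acc = cross_val_score(KNeighborsClassifier(metric="cosine"), Z, y).mean()
    print(f"{name}: {acc:.3f}")
\end{verbatim}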
{"text": "\\documentclass{article}\n\n\\usepackage{../preamble}\n\\usepackage{../macros}\n\n\\title{Notes for High-Dimensional Probability:\\\\ Random Vectors in High Dimensions}\n\n\\author{Isak Falk}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Preliminaries}\n\n\\begin{proposition}\n  \\label{prop:sub-gauss-sub-exp-norm-relation}\n  Let \\(X\\) be a real-valued random variable and \\(Y = \\sqrt{\\abs{X}}\\). Then\n  \\(X\\) is sub-exponential if and only if \\(Y\\) is sub-Gaussian, and in such\n  case \\(\\norm{X}_{\\psi_{1}} = \\norm{Y}_{\\psi_{2}}^{2}\\).\n\\end{proposition}\n\n\\begin{proposition}\n  \\label{prop:centering-of-sub-exp-rvs}\n  Let \\(X\\) be a real-valued sub-exponential variable, then the centered random\n  variable \\(X - \\E X\\) is sub-exponential and\n  \\begin{equation}\n    \\norm{X - \\E X}_{\\psi_{1}} \\leq (1 + \\frac{2}{\\ln 2}) \\norm{X}_{\\psi_{1}}.\n  \\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n  The proof is analogous to the sub-Gaussian case but with a different constant\n  since the definition of the norm is different. By the triangle inequality,\n  \\begin{equation}\n    \\norm{X - \\E X}_{\\psi_{1}} \\leq \\norm{X}_{\\psi_{1}} + \\norm{\\E X}_{\\psi_{1}},\n  \\end{equation}\n  and\n  \\begin{equation}\n    \\norm{\\E X}_{\\psi_{1}} = \\frac{\\abs{\\E X}}{\\ln 2} \\leq \\frac{\\E \\abs{X}}{\\ln 2} = \\frac{\\norm{X}_{1}}{\\ln 2} \\leq \\frac{2}{\\ln 2}\\norm{X}_{\\psi_{1}},\n  \\end{equation}\n  where we have used the definition of the sub-exponential norm for constant\n  functions, Jensen's inequality and the bound on the \\(L^{1}\\) norm of \\(X\\) by the sub-exponential\n  norm. We thus have that\n  \\(\\norm{X - \\E X}_{\\psi_{1}} \\leq (1 + \\frac{2}{\\ln 2})\\norm{X}_{\\psi_{1}}\\),\n  which is what we wanted to show.\n\n\\end{proof}\n\n\\begin{theorem}[Bernstein's theorem]\n  \\label{thm:bernsteins-ineq}\n  Let \\((X_{i})_{i=1}^{n}\\) be a sequence of independent real-valued zero-mean\n  random variables such that \\(\\norm{X_{i}}_{\\psi_{1}} < \\infty\\). Then, for\n  every \\(t > 0\\)\n  \\begin{equation}\n    \\Pr(\\abs*{\\frac{1}{n}\\sum_{i=1}^{n}X_{i}} > t) \\leq 2\\exp(-cn\\min(\\frac{t^{2}}{K^{2}}, \\frac{t}{K})),\n  \\end{equation}\n  where \\(K = \\max_{i}\\norm{X_{i}}_{\\psi_{1}}\\) and \\(c > 0\\) is some absolute constant.\n\\end{theorem}\n\n\\section{Concentration of the Norm}\n\\begin{theorem}[Concentration of the \\(L_{2}\\) norm]\nLet \\(X = (X_{1}, \\dots, X_{n}) \\in \\R^{n}\\) be a random vector with independent\nsub-Gaussian coordinates \\(X_{i}\\) that satisfy \\(\\E X_{i}^{2} = 1\\). Then\n\\begin{equation}\n  \\norm{\\norm{X}_{2} - \\sqrt{n}}_{\\psi_{2}} \\leq CK^{2},\n\\end{equation}\nwhere \\(K = \\max_{i} \\norm{X_{i}}_{\\psi_{2}}\\) and \\(C\\) is an absolute constant.\n\\end{theorem}\n\n\\begin{proof}\n  We first note that \\(K \\geq 1\\). By Jensen's Inequality we have that\n  \\(\\E \\exp(\\frac{X_{i}^{2}}{t^{2}}) \\geq \\exp(\\frac{\\E X_{i}^{2}}{t^{2}}) = \\exp(t^{-2})\\)\n  and using \\(t = 1\\) we see that \\(\\E \\exp(X_{i}^{2}) \\geq e > 2\\) so\n  \\(\\norm{X_{i}}_{\\psi_{2}} \\geq 1\\) for all \\(i \\in \\{1, \\dots, n\\}\\), and hence\n  \\(K = \\max_{i}\\norm{X_{i}}_{\\psi_{2}} \\geq 1\\), as claimed.\n\n  Now consider the quantity \\(\\frac{1}{n}\\norm{X}_{2}^{2} - 1\\) which we can write as\n  \\begin{equation}\n    \\frac{1}{n}\\norm{X}_{2}^{2} - 1 = \\frac{1}{n}\\sum_{i=1}^{n}(X_{i}^{2} - 1) = \\frac{1}{n}\\sum_{i=1}^{n}Y_{i},\n  \\end{equation}\n  where \\(Y_{i} = X_{i}^{2} - 1\\). 
Since \\(\\E X_{i}^{2} = 1\\) for any \\(i\\),\n  \\((Y_{i})_{i=1}^{n}\\) is a vector of zero-centred random variables. Since\n  \\(\\norm{X_{i}}_{\\psi_{2}} < \\infty\\) we can show that\n  \\(\\norm{Y_{i}}_{\\psi_{1}} < \\infty\\) since\n  \\begin{align}\n    \\label{al:1}\n    \\norm{X_i^2 - 1}_{\\psi_1} & \\leq C\\norm{X_i^2}_{\\psi_1} \\\\\n                              & = C\\norm{X_i}_{\\psi_2}^2 \\\\\n                              & \\leq C K^2,\n  \\end{align}\n  using the centring property of sub-exponentials\n  \\cref{prop:centering-of-sub-exp-rvs} (noting that \\(C > 1\\)),\n  \\cref{prop:sub-gauss-sub-exp-norm-relation} and the definition of \\(K\\). Since\n  \\((Y_{i})_{i=1}^{n}\\) are independent, zero-mean and by the above calculation sub-exponential, we can apply Bernstein's Inequality\n  \\cref{thm:bernsteins-ineq} which for any \\(u \\geq 0\\) means that\n  \\begin{equation}\n    \\Pr(\\abs*{\\frac{1}{n}\\sum_{i=1}^{n}Y_{i}} > u) \\leq 2 \\exp(-cn\\min(\\frac{u^{2}}{C^{2}K^{4}}, \\frac{u}{CK^{2}}))\n  \\end{equation}\n  for some \\(c > 0\\). Since \\(C, K > 1\\) we have that \\(C^{2} > C\\) and\n  \\(K^{4} > K^{2}\\), so we can write\n  \\begin{equation}\n    \\label{eq:conc-norm-bernstein}\n    \\exp(-cn\\min(\\frac{u^{2}}{C^{2}K^{4}}, \\frac{u}{CK^{2}})) \\leq \\exp(-\\frac{cn}{C^{4}K^{4}}\\min(u^{2}, u)).\n  \\end{equation}\n\n  So far we have proved a concentration bound on \\(\\norm{X}_{2}^{2}\\). We will\n  finish the proof by relating \\(\\norm{X}_{2}\\) to \\(\\norm{X}_{2}^{2}\\). Note\n  the following: for any \\(z, \\delta \\geq 0\\)\n  \\begin{equation}\n    \\label{eq:zp1-delta-observation}\n    \\abs{z - 1} \\geq \\delta \\Rightarrow \\abs{z^{2} - 1} \\geq \\max(\\delta, \\delta^{2}).\n  \\end{equation}\n  We show it by first noting that\n  \\(\\abs{z^{2} - 1} = \\abs{z - 1}\\abs{z + 1} \\geq \\abs{z + 1}\\delta\\). If \\(\\delta \\in [0, 1)\\) then,\n  since \\(z \\geq 0\\) gives \\(\\abs{z + 1} \\geq 1\\), we get \\(\\abs{z^{2} - 1} \\geq \\delta\\); and if\n  \\(\\delta \\geq 1\\) then \\(z \\geq 1 + \\delta\\), so \\(\\abs{z + 1} \\geq \\delta\\) and\n  \\(\\abs{z^{2} - 1} \\geq \\delta^{2}\\). We can write this compactly as\n  \\cref{eq:zp1-delta-observation}.\n\n  Now, consider any \\(\\delta \\geq 0\\) and the expression\n  \\(\\abs{\\frac{1}{\\sqrt{n}}\\norm{X}_{2} - 1} \\geq \\delta\\). Using\n  \\cref{eq:zp1-delta-observation} with \\(z = \\frac{1}{\\sqrt{n}}\\norm{X}_{2}\\) we\n  see that\n  \\begin{equation}\n    \\label{eq:4}\n    \\abs{\\frac{1}{\\sqrt{n}}\\norm{X}_{2} - 1} \\geq \\delta \\Rightarrow \\abs{\\frac{1}{n}\\norm{X}^{2}_{2} - 1} \\geq \\max(\\delta, \\delta^{2}).\n  \\end{equation}\n  In terms of events, this means that\n  \\begin{equation}\n    \\Pr(\\abs*{\\frac{1}{\\sqrt{n}}\\norm{X}_{2} - 1} \\geq \\delta) \\leq \\Pr(\\abs*{\\frac{1}{n}\\norm{X}^{2}_{2} - 1} \\geq \\max(\\delta, \\delta^{2})) \\leq 2 \\exp(-\\frac{cn}{C^{4}K^{4}}\\delta^{2}),\n  \\end{equation}\n  where in the final inequality we have used \\cref{eq:conc-norm-bernstein}\n  together with\n  \\begin{equation}\n    \\delta^{2} = \\min(\\max(\\delta, \\delta^{2}), \\max(\\delta, \\delta^{2})^{2}).\n  \\end{equation}\n  Letting \\(t = \\delta \\sqrt{n}\\) we obtain the bound\n  \\begin{equation}\n    \\Pr(\\abs{\\norm{X}_{2} - \\sqrt{n}} \\geq t) \\leq 2\\exp(-\\frac{ct^{2}}{C^{4}K^{4}}),\n  \\end{equation}\n  for any \\(t \\geq 0\\), which is equivalent to the conclusion of the theorem.\n\\end{proof}\n\n\\begin{remark}\n  The above bound tells us that with high probability, \\(X\\) takes values very\n  close to the sphere of radius \\(\\sqrt{n}\\). 
In particular, for a fixed probability \\(X\\) stays\n  within a constant distance from that sphere independently of the dimension\n  \\(n\\). This is due to the fact that \\(\\norm{X}_{2}^{2}\\) has mean \\(n\\) and\n  standard deviation \\(O(\\sqrt{n})\\) since\n  \\begin{equation}\n    \\V(\\norm{X}_{2}^{2}) = \\sum_{i=1}^{n}\\V(X_{i}^{2}) = n \\V(X_{1}^{2}),\n  \\end{equation}\n  due to independence of the coordinates of \\(X\\), and so\n  \\(\\sqrt{\\V(\\norm{X}_{2}^{2})} = \\sqrt{n} \\cdot \\mathrm{std}(X_{1}^{2})\\).\n  \\(\\sqrt{n \\pm O(\\sqrt{n})} = \\sqrt{n} \\pm O(1)\\) since by Taylor expansion\n  around \\(\\sqrt{n}\\) on the interval \\([n - c \\sqrt{n}, n + c \\sqrt{n}]\\), where\n  \\(c\\) needs to be chosen so that \\(n - c \\sqrt{n} \\geq 0\\), we see that\n  \\begin{align}\n    \\sqrt{n \\pm c\\sqrt{n}} = \\sqrt{n} + R_{1}(c\\sqrt{n})\n  \\end{align}\n  where \\(R_{1}(x) = \\) [ Need to fill in ].\n\\end{remark}\n\n\\section{Covariance, second moments, whitening and isotropy}\n\\begin{definition}\n  The covariance matrix of a random vector \\(X \\in \\R^{n}\\) is\n  \\begin{equation}\n    \\Cov(X) = \\E(X - \\E X)(X - \\E X)\\tran = \\E X X\\tran - \\E X (\\E X)\\tran.\n  \\end{equation}\n\\end{definition}\n\n\\begin{definition}\n  The second moment matrix of a random vector \\(X \\in \\R^{n}\\) is\n  \\begin{equation}\n  \\Sigma = \\Sigma(X) = \\E X X\\tran.\n  \\end{equation}\n\\end{definition}\n\nNote that if \\(\\E X = 0\\), then \\(\\Cov(X) = \\Sigma(X)\\). So we can mostly\nconsider the second moment matrix without loss of generality in place of the\ncovariance as long as we center our variable \\(X\\). Both of the matrices are\nsymmetric and positive semi-definite.\n\n\\begin{definition}\n  A random vector \\(X \\in \\R^{n}\\) is called \\emph{isotropic} if\n  \\begin{equation}\n    \\Sigma(X) = \\E X X\\tran = I.\n  \\end{equation}\n\\end{definition}\n\nFor a random variable \\(X \\in \\R^{n}\\) with covariance matrix \\(\\Sigma\\) and\nmean \\(\\mu\\) we can always put it into an isotropic form by whitening,\n\\(Z = \\Sigma^{-1/2}(X - \\mu)\\). Similarly, the transformation using a psd matrix\n\\(\\Sigma\\),\n\\(X = \\mu + \\Sigma^{1/2}Z\\), of a mean-zero isotropic random vector\n\\(Z \\in \\R^{n}\\) leads to a random vector with mean \\(\\mu\\) and covariance\n\\(\\Sigma\\). This observation means that in many cases we may focus on\nisotropic mean-zero random vectors without loss of generality.\n\n\\begin{lemma}\n  \\label{lem:isotropy-marginal-characterisation}\n  A random vector \\(X \\in \\R^{n}\\) is isotropic if and only if\n  \\begin{equation}\n    \\E \\scal{X}{u}^{2} = \\norm{u}_{2}^{2}\n  \\end{equation}\n  for all \\(u \\in \\R^{n}\\). Equivalently we could have used\n  \\begin{equation}\n    \\E \\scal{X}{u}^{2} = 1\n  \\end{equation}\n  for all \\(u \\in S^{n-1}\\), the unit sphere in \\(\\R^{n}\\).\n\\end{lemma}\n\n\\begin{proof}\n  Two symmetric matrices \\(A, B\\) are equal if and only if\n  \\(x\\tran A x = x\\tran B x\\) for any \\(x \\in \\R^{n}\\). Thus \\(X\\) is isotropic if\n  and only if\n  \\begin{equation}\n    u\\tran \\E X X\\tran u = \\E \\scal{X}{u}^{2} = u\\tran I u = \\norm{u}_{2}^{2}\n  \\end{equation}\n  for all \\(u \\in \\R^{n}\\), which is what we wanted to show.\n\\end{proof}\n\n\\begin{lemma}\n  Let \\(X\\) be an isotropic random vector in \\(\\R^{n}\\). 
Then\n  \\begin{equation}\n    \\E \\norm{X}^{2}_{2} = n.\n  \\end{equation}\n  Moreover, if \\(X, Y\\) are two independent isotropic random vectors in\n  \\(\\R^{n}\\), then\n  \\begin{equation}\n    \\E\\scal{X}{Y}^{2} = n.\n  \\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n  First we have\n  \\begin{align}\n    \\E\\norm{X}_{2}^{2} & = \\E X\\tran X \\\\\n                       & = \\E \\Tr(X X\\tran) \\\\\n                       & = \\Tr(\\E X X\\tran) \\\\\n                       & = \\Tr(I) \\\\\n                       & = n.\n  \\end{align}\n  For the second part we use the so called \\emph{law of total expectation} to\n  write\n  \\begin{equation}\n    \\E \\scal{X}{Y}^{2} = \\E_{X} \\E_{Y}(\\scal{x}{Y}^{2} | X = x) = \\E_{X} \\norm{X}_{2}^{2} = n\n  \\end{equation}\n  by \\cref{lem:isotropy-marginal-characterisation} applied to \\(Y\\) and reusing the first\n  part of the proof.\n\\end{proof}\n\n\\begin{corollary}\n  Let \\(X, Y\\) be independent mean-zero isotropic random vectors in \\(\\R^{n}\\),\n  then\n  \\begin{equation}\n    \\E \\norm{X - Y}_{2}^{2} = 2n.\n  \\end{equation}\n\\end{corollary}\n\n\\begin{proof}\n  \\begin{align}\n    \\E \\norm{X - Y}_{2}^{2} & = \\E \\norm{X}_{2}^{2} + \\E \\norm{Y}_{2}^{2} - 2 \\E \\scal{X}{Y} \\\\\n                            & = 2n - 0 \\\\\n                            & = 2n,\n  \\end{align}\n  where \\(\\E \\scal{X}{Y} = \\scal{\\E X}{\\E Y} = 0\\) by independence and the mean-zero assumption.\n\\end{proof}\n\n\\section{Examples of high-dimensional distributions}\nLet \\(X \\sim \\Unif(\\sqrt{n} S^{n-1})\\) mean that the law of \\(X\\) is the uniform\nmeasure on the zero-centered sphere of radius \\(\\sqrt{n}\\) in \\(n\\) dimensions.\n\n\\begin{theorem}[Isotropy of uniform random variable on \\(\\sqrt{n} S^{n-1}\\)]\n\n\\end{theorem}\n\n\\end{document}\n", "meta": {"hexsha": "ef9f3fb596340b8344940c8bad067bffbf966fc0", "size": 11095, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture_notes_Isak/notes.tex", "max_stars_repo_name": "IsakFalk/vershynin-reading-group-presentation", "max_stars_repo_head_hexsha": "ba9361e3c5cfde459f12a0ac83f0b9dee3c25f6f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-11T15:32:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-11T15:32:50.000Z", "max_issues_repo_path": "lecture_notes_Isak/notes.tex", "max_issues_repo_name": "IsakFalk/vershynin-reading-group-presentation", "max_issues_repo_head_hexsha": "ba9361e3c5cfde459f12a0ac83f0b9dee3c25f6f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture_notes_Isak/notes.tex", "max_forks_repo_name": "IsakFalk/vershynin-reading-group-presentation", "max_forks_repo_head_hexsha": "ba9361e3c5cfde459f12a0ac83f0b9dee3c25f6f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.7670250896, "max_line_length": 188, "alphanum_fraction": 0.5957638576, "num_tokens": 4258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.8244619285331332, "lm_q1q2_score": 0.5924537187320634}}
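As an empirical illustration of the norm-concentration theorem in the notes above (a minimal Python sketch; the standard normal coordinates and the particular dimensions are arbitrary choices of our own), the deviation $\|X\|_2 - \sqrt{n}$ stays of constant order as $n$ grows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    # 500 draws of X in R^n with i.i.d. N(0, 1) coordinates, so E X_i^2 = 1.
    X = rng.standard_normal((500, n))
    dev = np.abs(np.linalg.norm(X, axis=1) - np.sqrt(n))
    print(f"n={n:6d}  mean|dev|={dev.mean():.3f}  max|dev|={dev.max():.3f}")
\end{verbatim}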
{"text": "Before we can address the identification problem in the context of\n  the identification workflow, we must define how images are grouped\n  into occurrences.\nIn this section we propose a clustering algorithm to accomplish\n  this task.\n\n\\paragraph{Occurrence definition}\nThe Darwin Core defines an \\occurrence{} as a collection of evidence\n  that shows an organism exists within a specific location and span of\n  time~\\cite{wieczorek_darwin_2012}.\nFor our purposes this amounts to a cluster of images localized in space\n  and time.\nWe propose that the \\occurrence{} grouping algorithm should perform\n  agglomerative clustering on the GPS coordinates and time specified in\n  the image metadata.\n\n\\paragraph{Space-time image distance}\nTowards this goal we define a space-time feature $\\g_i$ for each image\n  $i$, and a pairwise distance, $\\Delta(\\g_i, \\g_j)$, between these\n  features.\nThis feature will be a two-dimensional feature tuple, %\n$\\g_i = \\paren{\\time_i, \\gps_i}$, where the first component is the\n  POSIX timestamp $\\time_i$, and the second component is a GPS coordinate %\n$\\gps_i = \\brak{\\lat_i, \\lon_i}^{T}$, where the angles of latitude and\n  longitude are measured in radians.\nTo compute the distance between two images $\\g_i$ and $\\g_j$ we first\n  compute the distance in each component of the feature tuple.\nThe difference in time is the absolute value of the timedelta,  %\n$\\Delta_t(\\g_i, \\g_j) = \\abs{\\time_i - \\time_j}$, which is in seconds.\n\n% DISTANCE BETWEEN TWO IMAGES (space and final)\nNext, the distance in space is computed by approximating the Earth as a\n  sphere.\nIn general, the distance between two points on a sphere with radius $r$\n  is a function of inverse haversines, and is expressed as:\n\\begin{equation}\\label{eqn:geodistance}\n    d(\\gps_i, \\gps_j, r) =\n        2 r \\asin{\\sqrt{\n            \\haversine{\\lat_i - \\lat_j} +\n            \\cos\\paren{\\lat_i} \\cos\\paren{\\lat_j}\n            \\haversine{\\lon_i - \\lon_j}}}\n\\end{equation}\nIn the previous equation, $\\haversine{\\theta} = \\haversineFULL{\\theta}$\n  is the haversine (half versed sine) function.\nThus, we arrive at the spatial distance between two images by\n  estimating the radius of the earth to be $r=6367$ kilometers.\n\\begin{equation}\n    \\Delta_s(\\g_i, \\g_j) = d(\\gps_i, \\gps_j, 6367).\n\\end{equation}\nThis results in a distance in seconds and a distance in kilometers, which\n  are in incompatible units.\nTo combine these distances we convert kilometers to seconds by\n  heuristically estimating the walking speed, $S$, of an animal (for\n  zebras we use $S=2\\sciE{-3}$ kilometers per second).\nThis allows us to cancel kilometers from the expression and express GPS\n  distance as a unit of time:\n$\\frac{\\Delta_s(\\g_i, \\g_j)}{S}$.\nThis distance can be interpreted as the total amount of time it would\n  take an animal to move between two points.\nThe total distance between two images is the sum of these components.\n\\begin{equation}\\label{eqn:imgdist}\n    \\Delta(\\g_i, \\g_j) =\n        \\Delta_t(\\g_i, \\g_j) + \\frac{\\Delta_s(\\g_i, \\g_j)}{S}\n\\end{equation}\nNotice that if there is no difference in GPS location, then this\n  measure reduces to a distance in time.\n\n\\paragraph{Clustering procedure}\nHaving defined a pairwise distance between two images, we proceed to\n  describe the agglomerative clustering algorithm.\nThere are two inputs to the agglomerative clustering algorithm:\n(1) the matrix of pairwise distances between 
images, and\n(2) the minimum distance threshold between two images.\nThe matrix of distances is computed using~\\cref{eqn:imgdist}, and we\n  set the distance threshold to $600$ seconds.\nAny pair of images within this threshold is connected via a\n  linkage matrix.\nConnected components in this matrix form the final clusters that we use\n  as \\occurrences{}.\n\n\\paragraph{Discussion of occurrences}\nThese computed \\occurrences{} are valuable measurements for multiple\n  components of the IBEIS software.\nAt its core an \\occurrence{} describes \\wquest{when} a group of animals\n  was seen and \\wquest{where} that group was seen.\nHowever, to answer questions like \\wquest{how many} animals there\n  were, \\wquest{who} an animal is, \\wquest{who else} an animal is with,\n  and \\wquest{where else} these animals have been seen, the annotations\n  in the \\occurrence{} must be grouped into individual \\encounters{} and\n  then matched against the \\masterdatabase{}.\nThe next section describes the first of these procedures:\nthe \\intraoccurrence{} identification algorithm that produces\n  \\encounters{}.\n", "meta": {"hexsha": "57e3dfb4016d42abf0b90b3bc729d26e657a4ebe", "size": 4495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "guts/sec-4-3-occur.tex", "max_stars_repo_name": "Erotemic/crall-thesis-2017", "max_stars_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-02-01T19:41:38.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-01T19:41:38.000Z", "max_issues_repo_path": "guts/sec-4-3-occur.tex", "max_issues_repo_name": "Erotemic/crall-thesis-2017", "max_issues_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "guts/sec-4-3-occur.tex", "max_forks_repo_name": "Erotemic/crall-thesis-2017", "max_forks_repo_head_hexsha": "0f340b55dffb8545312abf0e43813f8b5c128888", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.8191489362, "max_line_length": 75, "alphanum_fraction": 0.7532814238, "num_tokens": 1139, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8902942144788076, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.5924111766126523}}
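A minimal Python sketch of the \occurrence{} clustering described in the section above (function names and the toy features are our own; the haversine distance, the walking-speed conversion $S = 2\times10^{-3}$ km/s, the earth radius $r = 6367$ km, and the $600$-second threshold come from the text):
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

S = 2e-3        # walking speed, km per second (zebras)
R = 6367.0      # earth radius, km
THRESH = 600.0  # distance threshold, seconds

def spacetime_dist(g1, g2):
    # g = (posix_time, (lat, lon)) with angles in radians.
    (t1, (lat1, lon1)), (t2, (lat2, lon2)) = g1, g2
    hav = lambda x: np.sin(x / 2.0) ** 2
    a = hav(lat1 - lat2) + np.cos(lat1) * np.cos(lat2) * hav(lon1 - lon2)
    d_km = 2.0 * R * np.arcsin(np.sqrt(a))
    return abs(t1 - t2) + d_km / S     # total distance, all in seconds

def occurrences(features):
    n = len(features)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = spacetime_dist(features[i], features[j]) <= THRESH
    # Connected components of the thresholded graph are the occurrences.
    return connected_components(csr_matrix(adj), directed=False)[1]

# Two images minutes apart at the same spot, plus one far away in time.
feats = [(0.0, (0.01, 0.63)), (120.0, (0.01, 0.63)), (90000.0, (0.30, 0.70))]
print(occurrences(feats))  # -> [0 0 1]
\end{verbatim}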
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n\n\\subsection*{gTrig.m} \n\n\\begin{par}\n\\textbf{Summary:} Compute moments of the saturating function $e*sin(x(i))$ and \\$ e*cos(x(i))\\$, where $x \\sim\\mathcal N(m,v)$ and $i$ is a (possibly empty) set of $I$ indices. The optional  scaling factor $e$ is a vector of length $I$. Optionally, compute derivatives of the moments.\n\\end{par} \\vspace{1em}\n\n\\begin{verbatim}  [M, V, C, dMdm, dVdm, dCdm, dMdv, dVdv, dCdv] = gTrig(m, v, i, e)\\end{verbatim}\n    \\begin{par}\n\\textbf{Input arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}m     mean vector of Gaussian                                    [ d       ]\nv     covariance matrix                                          [ d  x  d ]\ni     vector of indices of elements to augment                   [ I  x  1 ]\ne     (optional) scale vector; default: 1                        [ I  x  1 ]\\end{verbatim}\n\\begin{par}\n\\textbf{Output arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}M     output means                                              [ 2I       ]\nV     output covariance matrix                                  [ 2I x  2I ]\nC     inv(v) times input-output covariance                      [ d  x  2I ]\ndMdm  derivatives of M w.r.t m                                  [ 2I x   d ]\ndVdm  derivatives of V w.r.t m                                  [4II x   d ]\ndCdm  derivatives of C w.r.t m                                  [2dI x   d ]\ndMdv  derivatives of M w.r.t v                                  [ 2I x d^2 ]\ndVdv  derivatives of V w.r.t v                                  [4II x d^2 ]\ndCdv  derivatives of C w.r.t v                                  [2dI x d^2 ]\\end{verbatim}\n\\begin{par}\nCopyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.\n\\end{par} \\vspace{1em}\n\\begin{par}\nLast modified: 2013-03-25\n\\end{par} \\vspace{1em}\n\n\\begin{lstlisting}\nfunction [M, V, C, dMdm, dVdm, dCdm, dMdv, dVdv, dCdv] = gTrig(m, v, i, e)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\nd = length(m); I = length(i); Ic = 2*(1:I); Is = Ic-1;\nif nargin == 3, e = ones(I,1); else e = e(:); end; ee = reshape([e e]',2*I,1);\nmi(1:I,1) = m(i); vi = v(i,i); vii(1:I,1) = diag(vi);     % short-hand notation\n\nM(Is,1) = e.*exp(-vii/2).*sin(mi); M(Ic,1) = e.*exp(-vii/2).*cos(mi);    % mean\n\nlq = -bsxfun(@plus,vii,vii')/2; q = exp(lq);\nU1 = (exp(lq+vi)-q).*sin(bsxfun(@minus,mi,mi'));\nU2 = (exp(lq-vi)-q).*sin(bsxfun(@plus,mi,mi'));\nU3 = (exp(lq+vi)-q).*cos(bsxfun(@minus,mi,mi'));\nU4 = (exp(lq-vi)-q).*cos(bsxfun(@plus,mi,mi'));\nV(Is,Is) = U3 - U4; V(Ic,Ic) = U3 + U4; V(Is,Ic) = U1 + U2;\nV(Ic,Is) = V(Is,Ic)'; V = ee*ee'.*V/2;                               % variance\n\nC = zeros(d,2*I); C(i,Is) = diag(M(Ic)); C(i,Ic) = diag(-M(Is)); % inv(v) * cov\n\nif nargout > 3                                           % compute derivatives?\n  dVdm = zeros(2*I,2*I,d); dCdm = zeros(d,2*I,d); dVdv = zeros(2*I,2*I,d,d);\n  dCdv = zeros(d,2*I,d,d); dMdm = C';\n  for j = 1:I\n    u = zeros(I,1); u(j) = 1/2;\n    dVdm(Is,Is,i(j)) = e*e'.*(-U1.*bsxfun(@minus,u,u')+U2.*bsxfun(@plus,u,u'));\n    dVdm(Ic,Ic,i(j)) = e*e'.*(-U1.*bsxfun(@minus,u,u')-U2.*bsxfun(@plus,u,u'));\n    dVdm(Is,Ic,i(j)) = e*e'.*(U3.*bsxfun(@minus,u,u') +U4.*bsxfun(@plus,u,u'));\n    dVdm(Ic,Is,i(j)) = dVdm(Is,Ic,i(j))';\n    dVdv(Is(j),Is(j),i(j),i(j)) = 
exp(-vii(j)) * ...\n                               (1+(2*exp(-vii(j))-1)*cos(2*mi(j)))*e(j)*e(j)/2;\n    dVdv(Ic(j),Ic(j),i(j),i(j)) = exp(-vii(j)) * ...\n                               (1-(2*exp(-vii(j))-1)*cos(2*mi(j)))*e(j)*e(j)/2;\n    dVdv(Is(j),Ic(j),i(j),i(j)) = exp(-vii(j)) * ...\n                                   (1-2*exp(-vii(j)))*sin(2*mi(j))*e(j)*e(j)/2;\n    dVdv(Ic(j),Is(j),i(j),i(j)) = dVdv(Is(j),Ic(j),i(j),i(j));\n    for k = [1:j-1 j+1:I]\n      dVdv(Is(j),Is(k),i(j),i(k)) = (exp(lq(j,k)+vi(j,k)).*cos(mi(j)-mi(k)) ...\n                         + exp(lq(j,k)-vi(j,k)).*cos(mi(j)+mi(k)))*e(j)*e(k)/2;\n      dVdv(Is(j),Is(k),i(j),i(j)) = -V(Is(j),Is(k))/2;\n      dVdv(Is(j),Is(k),i(k),i(k)) = -V(Is(j),Is(k))/2;\n      dVdv(Ic(j),Ic(k),i(j),i(k)) = (exp(lq(j,k)+vi(j,k)).*cos(mi(j)-mi(k)) ...\n                         - exp(lq(j,k)-vi(j,k)).*cos(mi(j)+mi(k)))*e(j)*e(k)/2;\n      dVdv(Ic(j),Ic(k),i(j),i(j)) = -V(Ic(j),Ic(k))/2;\n      dVdv(Ic(j),Ic(k),i(k),i(k)) = -V(Ic(j),Ic(k))/2;\n      dVdv(Ic(j),Is(k),i(j),i(k)) = -(exp(lq(j,k)+vi(j,k)).*sin(mi(j)-mi(k)) ...\n                         + exp(lq(j,k)-vi(j,k)).*sin(mi(j)+mi(k)))*e(j)*e(k)/2;\n      dVdv(Ic(j),Is(k),i(j),i(j)) = -V(Ic(j),Is(k))/2;\n      dVdv(Ic(j),Is(k),i(k),i(k)) = -V(Ic(j),Is(k))/2;\n      dVdv(Is(j),Ic(k),i(j),i(k)) = (exp(lq(j,k)+vi(j,k)).*sin(mi(j)-mi(k)) ...\n                         - exp(lq(j,k)-vi(j,k)).*sin(mi(j)+mi(k)))*e(j)*e(k)/2;\n      dVdv(Is(j),Ic(k),i(j),i(j)) = -V(Is(j),Ic(k))/2;\n      dVdv(Is(j),Ic(k),i(k),i(k)) = -V(Is(j),Ic(k))/2;\n    end\n    dCdm(i(j),Is(j),i(j)) = -M(Is(j)); dCdm(i(j),Ic(j),i(j)) = -M(Ic(j));\n    dCdv(i(j),Is(j),i(j),i(j)) = -C(i(j),Is(j))/2;\n    dCdv(i(j),Ic(j),i(j),i(j)) = -C(i(j),Ic(j))/2;\n  end\n  dMdv = permute(dCdm,[2 1 3])/2;\n\n  dMdv = reshape(dMdv,[2*I d*d]);\n  dVdv = reshape(dVdv,[4*I*I d*d]); dVdm = reshape(dVdm,[4*I*I d]);\n  dCdv = reshape(dCdv,[d*2*I d*d]); dCdm = reshape(dCdm,[d*2*I d]);\nend\n\\end{lstlisting}\n", "meta": {"hexsha": "3e731a1e8f1a8ea0a0e43b6700498bee6bd0b566", "size": 5385, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/gTrig.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/gTrig.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/gTrig.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 47.6548672566, "max_line_length": 284, "alphanum_fraction": 0.469823584, "num_tokens": 2143, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.885631476836816, "lm_q2_score": 0.6688802669716106, "lm_q1q2_score": 0.5923814186650712}}
{"text": "\\subsection{The Statistical Definition of Entropy}\nEntropy has a more fundamental definition (though that doesn't make the classical definition any less important) rooted in statistics. To understand it, we're going to start off by talking about macrostates and microstates.\n\\input{Entropy/microstates}\n\\input{Entropy/statdefinition}\n\\input{Entropy/arrowoftime}", "meta": {"hexsha": "9e422ed00663ccf0644086687a236c3b424ea052", "size": 360, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Entropy/statisticaldefinition.tex", "max_stars_repo_name": "RioWeil/SCIE001-thermo-notes", "max_stars_repo_head_hexsha": "8578248f8f79f5704319dc6cd4ec679ce12b949c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Entropy/statisticaldefinition.tex", "max_issues_repo_name": "RioWeil/SCIE001-thermo-notes", "max_issues_repo_head_hexsha": "8578248f8f79f5704319dc6cd4ec679ce12b949c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Entropy/statisticaldefinition.tex", "max_forks_repo_name": "RioWeil/SCIE001-thermo-notes", "max_forks_repo_head_hexsha": "8578248f8f79f5704319dc6cd4ec679ce12b949c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-30T05:36:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-30T05:36:50.000Z", "avg_line_length": 72.0, "max_line_length": 222, "alphanum_fraction": 0.8277777778, "num_tokens": 76, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8354835452961425, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5923738247145796}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% fphw Assignment\n% LaTeX Template\n% Version 1.0 (27/04/2019)\n%\n% This template originates from:\n% https://www.LaTeXTemplates.com\n%\n% Authors:\n% Class by Felipe Portales-Oliva (f.portales.oliva@gmail.com) with template \n% content and modifications by Vel (vel@LaTeXTemplates.com)\n%\n% Template (this file) License:\n% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%----------------------------------------------------------------------------------------\n%    PACKAGES AND OTHER DOCUMENT CONFIGURATIONS\n%----------------------------------------------------------------------------------------\n\n\\documentclass[\n    12pt, % Default font size, values between 10pt-12pt are allowed\n    %letterpaper, % Uncomment for US letter paper size\n    %spanish, % Uncomment for Spanish\n]{fphw}\n\n% Template-specific packages\n\\usepackage[utf8]{inputenc} % Required for inputting international characters\n\\usepackage[T1]{fontenc} % Output font encoding for international characters\n\\usepackage{fontspec,unicode-math} % Required for using utf8 characters in math mode\n\\usepackage{parskip}  % To add extra space between paragraphs\n% \\usepackage{mathpazo} % Use the Palatino font\n\\usepackage{graphicx} % Required for including images\n\\usepackage{booktabs} % Better horizontal rules in tables\n\\usepackage{hyperref} % For links (both internal and external)\n% \\usepackage{listings} % Required for insertion of code\n\\usepackage{enumerate}% To modify the enumerate environment\n\\usepackage{cleveref} % Better \\ref command -> \\cref\n\\usepackage{import}   % This 4 packages and the command allow importing pdf\n\\usepackage{xifthen}  % figures generated with inkscape\n\\usepackage{pdfpages} % Source: https://castel.dev/post/lecture-notes-2/\n\\usepackage{mathtools}\n\\usepackage{wrapfig}\n\\usepackage{cancel}\n\\usepackage{transparent}\n\\newcommand{\\incfig}[1]{%\n    \\def\\svgwidth{0.95\\columnwidth}\n    \\small\n        \\import{./images/}{#1.pdf_tex}\n}\n\n\\setlength{\\parindent}{15pt}\n\\setlength{\\headheight}{22.66pt}\n\n%----------------------------------------------------------------------------------------\n%    ASSIGNMENT INFORMATION\n%----------------------------------------------------------------------------------------\n\n\\title{Task 8 \\\\ Helix Surface} % Assignment title\n\n\\author{Emilio Dom\u00ednguez S\u00e1nchez} % Student name\n\n\\date{December 26th, 2020} % Due date\n\n\\institute{University of Murcia \\\\ Faculty of Mathematics} % Institute or school name\n\n\\class{Geometr\u00eda de Superficies} % Course or class name\n\n\\professor{Dr. 
Pascual Lucas Saorin} % Professor or teacher in charge of the assignment\n\n%----------------------------------------------------------------------------------------\n%    Definitions\n%----------------------------------------------------------------------------------------\n\n\\usepackage{physics}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\basis}[2]{\\qty{#1,\\ #2}}\n\\DeclareMathOperator{\\sgn}{sgn}\n\\DeclareMathOperator{\\Id}{Id}\n\\newcommand{\\inner}[2]{\\left\\langle #1, \\; #2 \\right\\rangle}\n\\newcommand{\\n}{\\vectorbold{n}}\n\n\\begin{document}\n\n\\maketitle % Output the assignment title, created automatically using the information in the custom commands above\n\n%----------------------------------------------------------------------------------------\n%    ASSIGNMENT CONTENT\n%----------------------------------------------------------------------------------------\n\n\\section*{Problem}\n\n\\begin{problem}\n    Let $\u03b2 : I \\subset \\R \\to \\R^2 \\subset \\R^3$ be a planar, parametrized by arc, curve\n    with curvature $\u03ba_\u03b2$ and tangent and normal vectors $T_\u03b2$ and $N_\u03b2$ respectively.\n    Let $\\n = (T_\u03b2 \\cross N_\u03b2)(s)$ be the unitary vector\n    orthogonal to the plane that contains $\u03b2$.\n    Then, given $\u03c6 \\in [0, \\frac{\u03c0}{2}]$, we define\n    the \\textit{helix surface $S_\u03c6$ built over $\u03b2$}\n    as the surface parametrized by\n    %\n    \\begin{equation*}\n        X(s, t) = \u03b2(s) + t(\\cos{\u03c6}N_\u03b2(s) + \\sin{\u03c6}\\n).\n    \\end{equation*}\n\n    \\begin{enumerate}\n        \\item \\label{stm:i} Determine the unitary field $N(s, t)$ normal to $S_\u03c6$.\n\n        \\item \\label{stm:ii} Compute the Gaussian curvature and the mean curvature\n        and classify its points.\n\n        \\item \\label{stm:iii} Answer whether $\u03b2$, seen as belonging to $S_\u03c6$,\n        is a line of curvature or an asymptotic curve.\n\n        \\item \\label{stm:iv} What type of surface do we obtain when $\u03c6 = \\frac{\u03c0}{2}$?\n        And when $\u03c6 = 0$?\n    \\end{enumerate}\n\\end{problem}\n\n%----------------------------------------------------------------------------------------\n\n\\subsection*{Answer}\n\n    It is unclear by the statement over which domain is $X$ defined.\nOne option is to assume that $t$ moves over a range which makes $S_\u03c6$ not self intersect.\nAnother option is to think of $S_\u03c6$ as a surface in a manifold of higher dimension,\nwhere it does not intersect.\nThat is, $X : \\R^2 \\to \\R^5 = [\\R^2 | \\R^3]$,\n%\n\\begin{equation*}\n    X(s, t) = [(s, t) | \u03b2(s) + t(\\cos{\u03c6}N_\u03b2(s) + \\sin{\u03c6}\\n)].\n\\end{equation*}\n%\nWe can proceed without fixing a domain,\nmerely requiring that it is an open set in $\\R^2$.\n\n\\subsubsection*{Item \\ref{stm:i}}\n    To compute a normal unitary field to $S_\u03c6$,\nwe will normalize the cross product of $X_s$ and $X_t$.\n%\n\\begin{align*}\n    X_s(s, t) &= T_\u03b2(s) - \\cos{\u03c6}t\u03ba_\u03b2(s)T_\u03b2(s) = (1 - \\cos{\u03c6}t\u03ba_\u03b2(s))T_\u03b2(s) \\qq{and} \\\\\n    X_t(s, t) &= \\cos{\u03c6}N_\u03b2(s) + \\sin{\u03c6}\\n,\n\\end{align*}\n%\nwhere we have used that $N_\u03b2'(s) = -\u03ba_\u03b2(s)T_\u03b2(s)$.\nIn addition, we can also see that $\\inner{X_s}{X_t} = 0$.\nTo perform the cross product mentally fast,\nnote that $T_\u03b2$, $N_\u03b2$ and $\\n$ are a normal basis in $\\R^3$ and hence\n$T_\u03b2 \\cross N_\u03b2 = \\n$, $N_\u03b2 \\cross \\n = T_\u03b2$ and $\\n \\cross 
T_\u03b2 = N_\u03b2$\\footnote{\n    Actually, it is a positive oriented basis because $\\n \\coloneqq T_\u03b2 \\cross N_\u03b2$.\n    Otherwise we would have the same equalitites but with a minus sign.\n}.\nTherefore\n%\n\\begin{equation*}\n    X_s(s, t) \\cross X_t(s, t) =\n    (1 - \\cos{\u03c6}t\u03ba_\u03b2(s))(-\\sin{\u03c6}N_\u03b2(s) + \\cos{\u03c6}\\n).\n\\end{equation*}\n%\nWe can see that the right part, $-\\sin{\u03c6}N_\u03b2(s) + \\cos{\u03c6}\\n$, is unitary,\nbut in order to preserve the sign of $X_s \\cross X_t$,\nwe need to know the sign of $1 - \\cos{\u03c6}t\u03ba_\u03b2(s)$.\n\n    Now it is clear that we need to study the domain of $X$,\nbecause depending on the sign of $1 - \\cos{\u03c6}t\u03ba_\u03b2(s)$,\nthe normal is $-\\sin{\u03c6}N_\u03b2(s) + \\cos{\u03c6}\\n$ or it is $\\sin{\u03c6}N_\u03b2(s) - \\cos{\u03c6}\\n$.\nWhat is more, when $\u03c6 \\ne \\frac{\u03c0}{2}$ and $\u03ba_\u03b2(s) \\neq 0$,\n$X_s\\qty(s, \\frac{1}{\\cos{\u03c6}\u03ba_\u03b2(s)}) = 0$.\nThis means that the value $t = \\frac{1}{\\cos{\u03c6}\u03ba_\u03b2(s)}$ is problematic because\n$S_\u03c6$ stops being a surface.\nBelow and above that value, we have two connected components with opposite normal vectors.\nIn general,\n%\n\\begin{equation*}\n    N_{S_\u03c6} = \\mp \\sin{\u03c6}N_\u03b2 \\pm \\cos{\u03c6}\\n,\n\\end{equation*}\n%\nwhere the sign is given by the sign of $1 - \\cos{\u03c6}t\u03ba_\u03b2(s)$.\n\n\\subsubsection*{Item \\ref{stm:ii}}\n\nThe Gaussian curvature and the mean curvature are defined in terms of the\nprincipal curvatures of the surface,\nwhich are given by the eigen vectors of $-\\dd{N_{S_\u03c6}}$.\nIn this case, it will be easy to compute both from\nthe expression $N_{S_\u03c6}(s, t)$ we derived.\n%\n\\begin{align*}\n        N \\circ X\\;(s, t) = {}&\n            \\mp \\sin{\u03c6}N_\u03b2(s) \\pm \\cos{\u03c6}\\n. \\\\\n    \\dd{N}(X(s, t))\\qty(X_t(s, t)) = \\qquad\n        \\pdv{N \\circ X}{t}\\,(s, t) = {}&\n            0. \\\\\n    \\dd{N}(X(s, t))\\qty(X_s(s, t)) = \\qquad\n        \\pdv{N \\circ X}{s}\\,(s, t) = {}&\n            \\pm \\sin{\u03c6}\u03ba_\u03b2(s)T_\u03b2(s). \\\\\n\\end{align*}\n%\nFrom the first equality we obtain that one eigen value is $0$,\nand therefore the Gaussian curvature is also $0$,\nwhile the second can be obtained from the last identity, because\n$\\dd{N}(X(s, t))\\qty(X_s)$ is proportional to $X_s$\n(both are an elongation of $T_\u03b2(s)$).\nDivide the second result by the coefficient which acompanies $X_s$,\n$1 - t\\cos{\u03c6}\u03ba_\u03b2$,\nto get the second eigen value.\n%\n\\begin{gather*}\n    \\qty{\u03ba_1(s, t), \u03ba_2(s, t)} =\n    \\qty{0, \\mp \\frac{\\sin{\u03c6}\u03ba_\u03b2(s)}{1 - t\\cos{\u03c6}\u03ba_\u03b2(s)} }. 
\\\\\n    K(s, t) = 0 \\qquad H(s, t) = \\mp \\frac{\\sin{\u03c6}\u03ba_\u03b2(s)}{2(1 - t\\cos{\u03c6}\u03ba_\u03b2(s))}.\n\\end{gather*}\n\n\\subsubsection*{Item \\ref{stm:iii}}\n\nFrom the previous item we can also deduce that $X_t$ and $X_s$ are\neigenvectors associated with the principal curvatures.\nThis means that $\u03b2(s) = X(s, 0)$ is a line of curvature,\nbecause $\u03b2'(s) = X_s(s, 0)$,\nbut it is not an asymptotic curve unless\n%\n\\begin{equation*}\n    -\\frac{\\sin{\u03c6}\u03ba_\u03b2(s)}{2(1 - t\\cos{\u03c6}\u03ba_\u03b2(s))} = 0.\n\\end{equation*}\n%\nThat is, when $\u03c6 = 0$ or $\u03ba_\u03b2(s) = 0$.\n    \n\\subsubsection*{Item \\ref{stm:iv}}\n\nWhen $\u03c6 = \\frac{\u03c0}{2}$,\n%\n\\begin{equation*}\n    X(s, t) = \u03b2(s) + t\\n,\n\\end{equation*}\n%\nan extrusion of $\u03b2$ perpendicular to the plane it is contained in.\nFor example, if $\u03b2$ is a fraction of a circle,\n$S_{\\frac{\u03c0}{2}}$ would be a fraction of a cylinder.\nWe call the surface obtained from an arbitrary curve this way a generalized cylinder.\n\nWhen $\u03c6 = 0$,\n%\n\\begin{equation*}\n    X(s, t) = \u03b2(s) + tN_\u03b2(s),\n\\end{equation*}\n%\nan extension of $\u03b2$ over the plane it is contained in.\n$S_0$ would be contained in a plane.\nThat would make $S_0$ a fraction of a plane.\n\n%----------------------------------------------------------------------------------------\n\n\\end{document}\n", "meta": {"hexsha": "c0a373a5bc8989d3aa7186c191bca99800a75248", "size": 9215, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "helix_surface.tex", "max_stars_repo_name": "useredsa/exercises-surfaces-geometry", "max_stars_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T03:04:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T03:04:15.000Z", "max_issues_repo_path": "helix_surface.tex", "max_issues_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_issues_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "helix_surface.tex", "max_forks_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_forks_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.99609375, "max_line_length": 114, "alphanum_fraction": 0.5952251763, "num_tokens": 2873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.709019146082187, "lm_q2_score": 0.8354835350552604, "lm_q1q2_score": 0.5923738225906077}}
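As a concrete check of the curvature computations in the exercise above (a minimal SymPy sketch; specializing $\beta$ to the unit circle, so $\kappa_\beta = 1$, is our own choice), the Gaussian curvature of $S_\varphi$ indeed vanishes, and the mean curvature agrees with the derived formula up to the sign convention:
\begin{verbatim}
import sympy as sp

s, t, phi = sp.symbols('s t phi', real=True)
A = 1 - t * sp.cos(phi)   # 1 - t*cos(phi)*kappa, with kappa = 1
# beta(s) = (cos s, sin s, 0), N = (-cos s, -sin s, 0), n = (0, 0, 1)
X = sp.Matrix([A * sp.cos(s), A * sp.sin(s), t * sp.sin(phi)])

Xs, Xt = X.diff(s), X.diff(t)
nvec = Xs.cross(Xt)
nhat = nvec / sp.sqrt(nvec.dot(nvec))

# First and second fundamental forms.
E, F, G = Xs.dot(Xs), Xs.dot(Xt), Xt.dot(Xt)
e = X.diff(s, 2).dot(nhat)
f = Xs.diff(t).dot(nhat)
g = X.diff(t, 2).dot(nhat)

K = sp.simplify((e * g - f**2) / (E * G - F**2))
H = sp.simplify((e * G - 2 * f * F + g * E) / (2 * (E * G - F**2)))
print(K)  # 0
print(H)  # -sin(phi)/(2*Abs(1 - t*cos(phi))), i.e. -+ sin(phi)/(2(1 - t cos(phi)))
\end{verbatim}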
{"text": "\\section{Overview}\n\n\n\n%\\subsection{Pipeline Overview}\n\n%\\subsection{The Joint Loss Function}\n\n\n% Our pipeline is as follows (Fig.~\\ref{fig:arch_content}):\nOur pipeline consists of the following steps (illustrated in Fig.~\\ref{fig:arch_content}):\n\n\\begin{enumerate}\n\\item Fit a 3D model to extract static albedo textures from each frame in the source video sequence and the single RGB target image (Section 4).\n\\item Infer dynamic textures and retarget the per-frame texture expressions from the source video frames onto the target image texture using a generative adversarial framework (Section 5).\n\\item Composite the target mesh with the generated dynamic textures into each frame in the source video (Section 6).\n\\end{enumerate}\n\n% \\vfill\\eject\n\n\\section{Fitting the Face Model}\n\n\nWe model the face shape $S$ and albedo as a multilinear PCA model with $n = 53k$ vertices and $106k$ faces:\n\\begin{equation}\nS(\\beta_{id}, \\beta_{exp}) = \\hat{S} + B_{id}\\beta_{id} + B_{exp} \\beta_{exp}\n\\end{equation}\n\\begin{equation}\nI(\\alpha_{alb}) = \\hat{I} + A_{alb} \\cdot \\alpha_{alb}\n\\end{equation}\nThe identity and expression are represented as a multivariate normal distribution with $B_{id} \\in \\mathbf{R}^{3n \\times 80} $, $B_{exp} \\in \\mathbf{R}^{3n \\times 29}$ and $A_{alb} \\in \\mathbf{R}^{3n \\times 80}$.  The dimensions of the mean shape are $\\hat{S} = \\hat{S}_{id} + \\hat{S}_{exp} \\in \\mathbf{R}^{3n}$, and the mean albedo is given by $\\hat{I} \\in \\mathbf{R}^{3n}$.  The standard deviations are given by: $\\sigma_{id} \\in \\mathbf{R}^{80}$, $\\sigma_{exp} \\in \\mathbf{R}^{29}$ and $\\sigma_{alb} \\in \\mathbf{R}^{80}$.\n\nWe use the Basel Face Model ~\\cite{blanz1999} for $B_{id}$, $A_{alb}$, $\\hat{S}$ and $\\hat{I}$ as well as FaceWarehouse ~\\cite{cao2014facewarehouse} for $B_{exp}$. We model the illumination using second order Spherical Harmonics and assume Lambertian surface reflectance.  We denote the illumination as $L \\in \\mathbf{R}^{27}$.  \n\nFollowing the optimization scheme of ~\\cite{f2f}, we jointly solve for all the unknowns $\\textbf{Y} = \\{ S, I, R, t, P, L \\}$ leveraging the Gauss-Newton method applied to iteratively re-weighted least squares with three levels of image pyramid, where P are the camera parameters. One can refer to ~\\cite{f2f} for details of this optimization.  In short, our objective function is:\n\\begin{equation}\nE(\\textbf{Y}) = w_{col} E_{col}(\\textbf{Y}) + w_{lan} E_{lan}(\\textbf{Y}) + w_{reg}E_{reg}(\\textbf{Y})\n\\end{equation}\nWe use energy weights $w_{col} = 1$, $w_{lan} = 10$ and $w_{reg} = 2.5 \\times 10^{-5}$.  \nThe photo-consistency term is given by \n\\begin{equation}\nE_{col}(\\textbf{Y}) = \\frac{1}{| M | } \\sum_{p \\in M}  ||C_{gt}(p) - C_{render}(p)||_2\n\\end{equation}\n where $C_{gt}$ is the input image and $C_{render}$ is the synthesized image.  $p \\in M$ denotes pixel visibility in the source image.  \nThe landmark term is given by: \n\\begin{equation}\nE_{lan}(\\textbf{Y}) = \\frac{1}{|F|} \\sum_{f_i \\in F}  || f_i - \\Pi_P(RS_i + t) ||_2^2\n\\end{equation}\n where $f_i \\in F$ is a 2D facial feature following the method presented in ~\\cite{kazemi2014one}.  \nThe regularization $E_{reg}$ term ensures that faces stay close to the normal distribution.  
This term prevents degenerative faces when performing the fitting:\n\\begin{equation}\nE_{reg}(\\textbf{Y}) = \\sum_{i = 1}^{80} [ (\\frac{\\beta_{id, i}}{\\sigma_{id,i}})^2 + (\\frac{\\alpha_{alb, i}}{\\sigma_{alb, i}})^2] + \\sum_{i =1}^{29} ( \\frac{\\beta_{exp,i}}{\\sigma_{exp,i}})^2\n\\end{equation}\n\n\\section{ Dynamic Texture Synthesis}\n\n\\subsection{Deep Learning Framework}\n\n\\begin{figure*}[th]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{figures/network/network5.pdf}\n\t\\caption{The network architecture. The encoder takes as input the identity texture and the expression texture, and the decoder outputs the dynamic texture. The adversarial discriminator is used during training to judge whether the output texture looks real or not.}\\label{fig:network}\n\t\\vspace{-0.15in}\n\\end{figure*}\n\nThe core of our dynamic texture synthesis pipeline for inferring fine details is a Conditional Generative Adversarial Network used to infer deformations from a source texture onto a target texture.\nBroadly speaking, a Generative Adversarial Network (GAN) $G$ is a function which produces ``realistic'' output from a noise vector.  \nThat is, given a distribution $M$, a variable $z \\sim M$, and a set of ground truth ``real'' examples $X$,  we would like $G(z)$ to be indistinguishable between \nany $x \\sim X$.  Formally, by indistinguishable we mean to say that the generator function fools as best as possible a discriminator function, $D$, \ntrained precisely to separate the generated output $G(z)$ from the real output drawn from $X$. A conditional GAN, on the other hand, takes as input both the noise vector $z$, along with additional input $y$ in order to produce the output $x$.    \nNotice that $y$ and $x$ are not necessarily from the same set.  \n\nIn our setting, we attempt to learn a target texture deformation given a source texture deformation.  \nThat is, given a source texture $U_{source}$ that encodes a given expression, we would like to transfer this expression to a neutral-pose target texture $N_{target}$.  \nFor example, the source texture expression might contain a wrinkle on the left cheek that we would like to synthesize onto the target neutral texture. \nIn this case, the output is $U_{target}$, the target texture with the wrinkle, and we are conditioning on $U_{source}$ and $N_{target}$.  \n\n\nNote that neural networks are inclined to be heavily affected by noise and variation within the training corpus.  For this reason, we work in the UV texture space of the captured facial albedos.  This provides many benefits during training.  First of all, it minimizes variations in the input due to factors such as lighting, image background and head pose. In UV space, the mouth, eyes, and nose are in the exact same location in each image - the only thing that changes is the content of those locations. Working in this space makes it possible to retarget mouth motion, blinking, scrunching, and various other skin deformations that would be much harder to learn otherwise.  \n\n\n\\subsection{Loss Function}\n\nThe energy function we minimize is given by \n\\begin{equation} \\label{eqn:1}\n\\begin{split}\nL_{cGAN}(G,D)& = E_{{x,y}\\sim p_{data}({x,y}),z\\sim p_z{z}}[\\log D({x,y})]\\\\\n& +E_{{x,y}\\sim p_{data}(x),z\\sim p_z(z)}[\\log(1-D(G(x,z)))] \n\\end{split}\n\\end{equation}\n\nIn our formulation, $x$ is the pair $(U_{source}, N_{target})$ and $y$ is given by $U_{target}$.  
\nIn addition to this energy, the generator $G$ also attempts to minimize the reconstruction loss to the target $y$ in the $\\ell_1$ sense.  \nThat is, $L_{\\ell_1}(G)= E[\\parallel y-G(x,z)\\parallel_1]$ ~\\cite{pix2pix}.\n\n$G$ attempts to minimize this objective while $D$ attempts to maximize it.  In other words:\n$ G_{opt}=\\text{arg}\\min_{G}\\max_{D} L_{cGAN}(G,D)+\\lambda L_{\\ell_1}(G)$, where $\\lambda$ \nencodes the relative weighting of the different errors ~\\cite{pix2pix}.  In our implementation of the conditional GAN, \nwe do not input any noise during generation, but otherwise the optimization program is accurate.  We set $\\lambda = 0.001$ in all experiments. \nThis parameter can be viewed as a balancing between the need to reconstruct accurate wrinkles with the need to have the final texture\nremain \"realistic\".  \n\n\\subsection{Network Architecture}\n\n\nWe use a similar architecture as \\cite{pix2pix}. \nFirst, we concatenate the driver expression and the neutral target identity, $(U_{source}, N_{target})$, along their color-channel dimension as input.\nThe generator output is the target expression $y$, which is the expression transferred from the source to target texture.\n\nSecond, we use a masked prior to define the $\\ell_1$ and adversarial loss. \nThe mask is applied on the $\\ell_1$ loss such that the the loss around the mouth and eye regions is ten times higher than in other areas - we\ndo this because wrinkles, blinking, and most texture deformations are a lot more prevalent in these areas.  \nFor the discriminator, we adopt a Markovian Discriminator \n(PatchGAN) with patch size set to $70 \\times 70$. We found that adding skip connections between\nencoder layers and decoder layers, skipping around the code layer, greatly reduces noise in the output image. \n%Finally, we do not input any \nThe parameters are\noptimized using ADAM with back-propagation. Our input and output resolution are $256\\times 256$.\n\n\n%%\\begin{eqnarray*}\n%%G^*&=&arg\\min\\max L_{cGAN}(G,D)+\\lambda L_{L1}(G) \\\\\n%%L_{L1}(G)&=& E[M\\cdot\\parallel y-G({x,y},z)\\parallel_1] \\\\\n%%L_{GAN}(G,D)&=& E_{{x,y}\\sim p_{data}({x,y}),z\\sim p_z{z}}[\\log D({t})]+E_{{x,y}\\sim p_{data}(x),z\\sim p_z(z)}[\\log(1-D(G(x,z)))]\n%%\\label{eqn:1}\n%%\\end{eqnarray*}\n\n\\subsection{Mouth Synthesis}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{figures/flow/opticalflow.pdf}\n\t\\caption{Visualization of flow-upsampling from source mouth to low-resolution inferred mouth.  Left is source mouth, top is target mouth, bottom is flow field, right is final result.}\\label{fig:flow}\n\t\\vspace{-0.05in}\n\\end{figure}\nWe model the mouth interior region as a part of the UV texture.  When the mouth is closed, the area between the lips get projected onto this area to\nmake a pink color.  When the mouth is open, the mouth interior, including the teeth and tongue, are projected to this region (Fig.~\\ref{fig:flow}).  \n\nUsing our deep-learning framework, we are able to transfer open-mouth expressions onto the closed-mouth neutral target textures and infer the\ninner mouth region.  However, due to lack of training data for the mouth interior, the inferred texture here tends to be of rather low resolution.  In order\nto improve this area, we use SIFT-Flow ~\\cite{siftflow} to redraw the target mouth using the source mouth for each frame.  
In particular, after\ncomputing SIFT features at the pixel level, we perform matching based on the following energy term:\n\\begin{equation}\n\\begin{split}\nE(w)& = \\sum_p \\min (||s_1(p) - s_2(p + w(p))||_1, t)  \\\\\n&\\hspace{-6mm} +\\sum_p \\eta(|u(p)| + |v(p)|)  \\\\\n&\\hspace{-6mm} +\\sum_{(p,q) \\in \\epsilon} \\min(\\alpha |u(p) - u(q)|, d) + \\min(\\alpha|v(p) - v(q)|, d)\n\\end{split}\n\\end{equation}\nThe terms in the equation are, in order, the data term, the small displacement term, and the spatial regularization term~\\cite{sift},\nwhere $w(p) = (u(p), v(p))$ is the flow vector at a point $p = (x,y)$, and $s_1$ and $s_2$ are the SIFT features\ncomputed for image 1 and image 2, respectively, at the point $p$.  The second and third terms regularize the matching by penalizing large\ndisplacements and by encouraging neighboring points to have similar flow vectors.\n\nInferring the inner mouth also has the added benefit of improving the lip texture around the mouth during tracking failure of the original video: tracking of tight-lipped\nexpressions such as kissing often fails and the lip texture gets projected to the interior region in UV space.  During rendering, this causes\nthe lips to be thinner than they should be, which gives an unnatural appearance.  By inferring this inner-mouth region, we are able to synthesize realistic\nkissing faces on the target even when lip tracking fails on the source.\n\n\\section{Video Face Replacement via Blending}\n\nOnce we have the per-frame textures and retargeted mesh, we are also able to transfer the target appearance back onto the source video\nsequence for a detailed and realistic animation (Fig.~\\ref{fig:blend}). We use a graph-cut approach similar to~\\cite{replace} in order\nto achieve this.  In particular, for blending the retargeted faces into the source video in a photorealistic manner,\nwe first find a graph-cut to partition each frame into two regions, which determines whether\na pixel comes from the frame in the source video or from the image of the rendered retargeted face model (Fig.~\\ref{fig:blend}a-c).\nWe then linearly blend the target and source regions of the images along the seam to achieve a smooth and realistic result (Fig.~\\ref{fig:blend}d).\n \n\\begin{figure}[t]\n  \\centering\n  \\includegraphics[width=0.6\\linewidth]{figures/blending/blending.pdf}\n  \\caption{a) The input consists of the frames of the source video (left) and the rendered retargeted mesh (right). b) Naively projecting all of the target mesh's vertices onto the source frame results in an unrealistic and incoherent result (left), while a more optimal partition is outlined in red on the image on the right.\n\tc) The composition of the two images using the optimal seam.\n\td) Linearly blending the pixels along the seam gives a smooth and realistic result.}\n\t\\label{fig:blend}\n  \\vspace{-0.15in}\n\\end{figure}\n\n\n\n\\subsection{Graph-Cut}\nSimilar to~\\cite{replace},\nwe optimize the partition so as to ensure spatial coherence between neighboring pixel values\nand temporal coherence between frames.
\n\n\nIf we naively project the mesh back onto the frame of the source video, as seen in the left-hand image of Fig.~\\ref{fig:blend}b, the result is not smooth or realistic.\nIn order to maintain spatial coherence across large variations in pose and expression,\n we construct a graph-cut problem on each of the frames in the source video and their corresponding retargeted mesh as in~\\cite{graphcut}. For each frame, the graph cut algorithm labels each vertex on the mesh as either a source vertex or a target vertex in a manner that minimizes the difference in pixel values across neighboring vertices, as shown in the right-hand image of Fig.~\\ref{fig:blend}b. We then project the seam of the labeled mesh onto the image (Fig.~\\ref{fig:blend}c).\n\nThe nodes of the graph represent the vertices on the mesh for each frame in the source video.\nLet each vertex be denoted as $V_{t,i}$, where $t$ refers to the $t$th frame, and $i$ refers to the $i$th vertex on the mesh.\n\nIn order to minimize the difference between source and target pixels along the seam, for each frame, we set weights on the graph for each edge between each pair of vertices $V_{t,u}$ and $V_{t,v}$ that share an edge on the mesh as:\n\\begin{equation}\n\\begin{split}\nW_m(V_{t,u}, V_{t,v}) = & \\|I_{s}(V_{t,u})-I_{s}(V_{t,v})\\|\\\\\n& + \\|I_{t}(V_{t,u})-I_{t}(V_{t, v})\\|\n\\end{split}\n\\end{equation}\n\n$I_{s}$ is the source image, $I_{t}$ is the rendered target image, $I_{s}(V)$ is the color at the pixel where $V$ is projected onto the source frame, and $I_{t}(V)$ is the color at the pixel where $V$ is projected onto the rendered target image.\n For temporal coherence, we set edges between vertices $V_{t,i}$ and $V_{t+1,i}$ for all $i$ and $t$, with weight:\n\\begin{equation}\n\\begin{split}\nW_f(V_{t,i}, V_{t+1,i})& = \\lambda (\\|I_{s}(V_{t,i})-I_{s}(V_{t+1,i})\\| + 1)^{-1}\\\\\n              & + (\\|I_{t}(V_{t,i})-I_{t}(V_{t+1,i})\\| + 1)^{-1}\n\\end{split}\n\\end{equation}\n\nAfter computing the seam, we can composite the target appearance onto the source video and linearly blend the pixels across the seam for a realistic novel reenactment.
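\n\nAs an illustration of how this spatio-temporal graph could be assembled, here is a hypothetical sketch using \\texttt{networkx}; the helpers \\texttt{proj} (vertex-to-pixel projections) and \\texttt{mesh\\_edges} (mesh connectivity) are assumptions, the frames are float images, and the source/sink terminal connections needed to actually run the min-cut are omitted.\n\n\\begin{lstlisting}[frame=single]\nimport networkx as nx\nimport numpy as np\n\ndef seam_graph(frames_src, frames_tgt, proj, mesh_edges, lam=1.0):\n    # frames_src/tgt: lists of HxWx3 float images; proj[t][i] -> (row, col)\n    G = nx.Graph()\n    T = len(frames_src)\n    verts = set(i for e in mesh_edges for i in e)\n    for t in range(T):\n        Is, It = frames_src[t], frames_tgt[t]\n        # Spatial edges: cutting is cheap where neighboring pixel values\n        # agree in both images, so the seam lands where it is least visible.\n        for (u, v) in mesh_edges:\n            w = (np.linalg.norm(Is[proj[t][u]] - Is[proj[t][v]])\n                 + np.linalg.norm(It[proj[t][u]] - It[proj[t][v]]))\n            G.add_edge((t, u), (t, v), capacity=w)\n        # Temporal edges: expensive to cut when a vertex looks similar in\n        # consecutive frames, discouraging the seam from jumping around.\n        if t + 1 < T:\n            Is2, It2 = frames_src[t + 1], frames_tgt[t + 1]\n            for i in verts:\n                w = (lam / (np.linalg.norm(Is[proj[t][i]] - Is2[proj[t + 1][i]]) + 1)\n                     + 1 / (np.linalg.norm(It[proj[t][i]] - It2[proj[t + 1][i]]) + 1))\n                G.add_edge((t, i), (t + 1, i), capacity=w)\n    return G\n\\end{lstlisting}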
{"text": "\\chapter{Schnorr's applications}\n\\label{chpr:application}\nIn this chapter we will present some of the applications that Schnorr would make deployable in Bitcoin: we will study improved multi and threshold signature schemes and we will see the construction called adaptor signature, that allows the development of the so called \\textit{scriptless scripts}, affecting considerably privacy and efficiency in cross-chain atomic swaps and in the Lightning Network.\n\\\\\nIn case the reader is not familiar with Bitcoin's inner working, we recommend the reading of Appendix \\ref{app:A}.\n\n\\bigskip\n\n\\section{Multi-signature: MuSig ($\\mu \\Sigma$)}\n\\label{musig}\nUsually Schnorr is presented with an implicit multi-signature scheme: given $n$ users that want to sign a single message $m$, they can sign it on their own, the final signature being the sum of the so called partial signatures. This signature can then be verified against the sum of the public keys.\n\\\\\nLet's study this scheme through an example: Alice and Bob have key pairs $\\{q_A, Q_A\\}$ and $\\{q_B, Q_B\\}$, respectively. If both participants are honest, they will proceed as follows: they exchange their public keys, computing the aggregated one $Q = Q_A + Q_B$. Then each one of them calculate as usual the public nonces $K_A$ and $K_B$, defining the joint nonce as $K = K_A + K_B$. The signature would then be $(x_K, s)$, with $s = s_A + s_B = k_A + \\text{hash}(x_K \\ || \\ Q \\ || \\ m)q_A + k_B + \\text{hash}(x_K \\ || \\ Q \\ || \\ m)q_B \\ (\\text{mod} \\ n)= (k_A + k_B) + \\text{hash}(x_K \\ || \\ Q \\ || \\ m)(q_A + q_B) \\ (\\text{mod} \\ n)$. Looking exactly as a single user signature, the verification procedure would follow Algorithm \\ref{alg:schnorr_ver}.\n\\\\\nThis sounds great, except for the fact that it is a completely insecure scheme: we assumed that both the participants were honest, a deadly hypothesis for every cryptosystem. Imagine that it is Bob that wants to cheat. He could simply says that his public key is $Q_B' = Q_B - Q_A$. Then, if someone sends money to the address associated to $Q = Q_A + Q_B' = Q_A + Q_B - Q_A = Q_B$, clearly Bob can control the funds by himself, being in possess of the associated private key.\n\\\\\nThis kind of attack is called rogue key attack and is a serious concern for multi-signature schemes: given $n$ participants, a subset of $1 \\leq t < n$ dishonest signers use public keys that are functions of the public keys of honest signers, allowing them to forge a signature without the aid of the honest signers for the whole set of public keys. There are certain ways to prevent such an attack: for example by ensuring that the participants own the private keys associated with the alleged public keys (now it is not possible for Bob to cheat, since it would imply breaking the ECDLP), a setting that takes the name of KOSK (knowledge of secret key). \n\n\\bigskip\n\\noindent\nIn this section we will present a provably secure multi-signature scheme of the type $n$-of-$n$. But before delving into its technicalities, it could be better to stop and talk a little about how changes are introduced in Bitcoin: deploying innovations in Bitcoin is a long procedure, due to its decentralized consensus protocol. Since it could require years to take a new feature to it, we should think about properties that would enhance Bitcoin in the long term. Today Bitcoin is missing some important properties in order to be a good method of payment: it is missing both fungibility and privacy. 
It is missing fungibility because it is missing privacy: Bitcoin is pseudonymous, not anonymous, in the sense that an address is not directly linked to a physical person, but every single transaction is on the public ledger, open to (possibly) every node in the network. Low privacy means that bitcoins, not being interchangeable, could be treated differently: think about the bitcoins possessed by the creator of Bitcoin, Satoshi Nakamoto, and not moved since the creation of Bitcoin. Obviously they do not have the same appeal as newly minted coins.\n\\\\\nFortunately enough, the lack of some properties need not be everlasting: for this reason, when introducing a new feature in Bitcoin we should try to fix these problems. So, we will now give a look at some properties that, in a long term view, a new multi-signature scheme should possess:\n\\begin{enumerate}\n\t\\item Accountability: this property refers to $m$-of-$n$ multi-signature schemes (also referred to as threshold schemes) and deals with the fact that it should be possible for the participants of the scheme to know who signed and to show to others that they have not;\n\t\\item Usability: the ease of use is important. If an interactive scheme requires a huge number of rounds, it won't be used by anyone;\n\t\\item Privacy: third parties should learn as little about the policy of the scheme as possible (particular kinds of policies could identify your transactions, leading to various problems, like censorship by miners).\n\\end{enumerate}\n\n\\bigskip\n\\noindent\nAfter this brief digression, we are ready to present the MuSig scheme \\cite{RefWork:11}. MuSig is an interactive (meaning that the scheme comprises several rounds of communication between the participants) multi-signature scheme, based on the Schnorr signature. MuSig has some very attractive properties, namely:\n\\begin{itemize}\n\t\\item The size of the signature is equal to the single user case;\n\t\\item It is provably secure in the plain public key model\\footnote{The signers are only required to have a public key: they do not have to prove ownership of it, i.e. knowledge of the associated private key.}.\n\\end{itemize}\nThese properties, although appealing, are not original: MuSig shares them with other schemes, in particular with the Bellare-Neven (BN) scheme \\cite{RefWork:10}. The novelty introduced by the authors is that they recovered key aggregation, meaning that a unique joint public key can be associated with the scheme, leading to a verification algorithm that is equal to the single user case: the multi-signature can be verified with respect to a single aggregated public key, leading to greater privacy since the multi-signature policy is completely hidden from third parties.\n\\\\\nThe Bellare-Neven scheme prevents rogue key attacks relying on a particular algorithm to compute the partial signatures, avoiding a trusted setup: each participant $i \\in \\{1, ..., n\\}$ with key pair $\\{q_i, Q_i\\}$ computes $s_i = k_i + c_iq_i, \\ c_i = \\text{hash}(\\langle L \\rangle\\ || \\ K \\ || \\ Q_i \\ || \\ m)$, where $K = \\sum_{i = 1}^{n}K_i$ is the aggregated public nonce, $m$ is the message to be signed and $\\langle L \\rangle$ is a unique encoding of the multiset of public keys $L = \\{Q_1, Q_2, ..., Q_n\\}$, e.g. ordered lexicographically. The signature is $(K, s)$, where $s = \\sum_{i = 1}^{n}s_i$, and the verification equation is: $sG = K + \\sum_{i = 1}^{n}c_iQ_i$.
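\n\nAs an illustration, BN-style verification could be sketched as follows; the elliptic curve point arithmetic and the helpers \\texttt{hash\\_int} and \\texttt{encode\\_keys} are hypothetical stand-ins, not part of the original scheme's description.\n\n\\begin{lstlisting}[frame=single]\n# Sketch of Bellare-Neven verification: s*G == K + sum_i c_i*Q_i\ndef bn_verify(K, s, pubkeys, m, G):\n    L = encode_keys(pubkeys)        # unique encoding of the multiset\n    rhs = K\n    for Q in pubkeys:\n        c = hash_int(L + K.bytes() + Q.bytes() + m)\n        rhs = rhs + c * Q           # EC scalar mult and point addition\n    return s * G == rhs\n\\end{lstlisting}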
\\noindent\nWe can notice that a validator needs the whole set of public keys in order to check the signature.\n\\\\\nMuSig can be thought of as a variant of BN that recovers key aggregation. The setting in which the two schemes are defined is the same: both can be proven secure in the plain public key model under the discrete logarithm assumption\\footnote{The discrete logarithm assumption requires the DL to be hard on the selected group: if the DL is hard, then the scheme is secure.}, modelling the hash functions involved as a public random oracle.\n\\\\\nSecurity is to be understood in the sense that it is infeasible for an adversary to forge multi-signatures involving at least one honest participant, that is: the adversary is not able to produce on its own a signature valid for a set of public keys containing the one of the honest signer.\n\\\\\nWe stress the fact that, from the applicative point of view, key aggregation is a fundamental property: if the scheme is usable (few interaction rounds), then we get privacy and accountability for free (MuSig is an $n$-of-$n$ scheme, so that it is possible to generate a valid signature only if all the participants agree). Indeed, thanks to key aggregation, verifiers will only see an aggregated public key: they would not even know that it is indeed aggregated, since it is indistinguishable from a normal public key. This is also important from the point of view of efficiency: in Bitcoin every single node has the possibility to validate each transaction, meaning that verification efficiency and signature size are very important, more so than the timing of the signing algorithm. This is why, although there are multi-signature schemes with fewer interaction rounds, we present MuSig here: the benefits of key aggregation are improved bandwidth (no need for communication of multiple public keys), privacy (the aggregated public key is indistinguishable from a normal one) and validation efficiency (as efficient as a normal Schnorr verification).\n\\\\\nMoreover, if the aggregated public key is not given to the verifier, it is still possible to recover it just from the set of public keys of the participants, without interaction with the signers.\n\n\\bigskip\n\\noindent\nThe plain public key setting plays a crucial role when trying to enable multi-signatures across multiple inputs of a Bitcoin transaction, resulting in a single signature per transaction.  In case the transaction spends inputs from different owners, they will obviously need to collaborate to produce the multi-signature. Such a construction would further reduce on-chain traffic, resulting in a benefit for all the network participants. Such a change would require the introduction of new opcodes in the Bitcoin scripting language, but this can be done via a soft fork (i.e. in a backward compatible way). To see why security in the plain public key model is fundamental to enable cross-input multi-signatures, think about an attacker that identifies some outputs he wants to steal, corresponding to a set $\\{Q_1, Q_2, ..., Q_{n - t}\\}$ of public keys. He could try to identify another set of keys $\\{Q_{n - t + 1}, ..., Q_n\\}$ such that he can sign for the aggregated public key.
He would be able to steal the coins just by sending a small amount of his own money to outputs corresponding to the keys he found and finally creating a transaction referencing the outputs he wants to steal and the newly created outputs in his possession: by construction he is able to forge a signature on his own for this transaction. But the plain public key model defends exactly against such a situation, since the game is won by the adversary if he is able to forge a signature over a set of keys that includes at least one key not in possession of the attacker.\n\\\\\nIn the setting of signature aggregation on a transaction basis, it is possible to resort to signature schemes different from MuSig (in particular the older and more reviewed BN scheme), since all the public keys involved are public.\n\\\\\nUp to now we have talked about cross-input aggregation, which would reduce the number of signatures to one per transaction. Is it possible to go further? For example, is it possible to obtain a single signature on a block basis? Unfortunately it is not possible through MuSig, since the scheme is interactive: this obviously prevents aggregation on a block level. However, the discussion would not be fair if we did not point out that changing signature scheme would allow aggregation on a block basis: for example the BLS signature scheme (\\cite{RefWork:15}, \\cite{RefWork:16}) is non-interactive when it comes to aggregation. This would enable a single signature per block. However, although there is interest around BLS signatures, they are underpinned by different cryptographic assumptions than ECDSA and Schnorr. For this reason the Bitcoin community is working towards the implementation of Schnorr.\n\n\\bigskip\n\\noindent\nNow we can finally look at the inner workings of the scheme: it is parameterized by the cyclic group $\\mathbb{G}$ (a subgroup of $E(\\mathbb{F}_p)$), its order $n$, a generator of the group $G$ and three hash functions $\\text{hash}_{com}, \\ \\text{hash}_{agg}$ and $\\text{hash}_{sig}$: $\\{0, 1\\}^* \\to \\{0, 1\\}^{L_n}$. The bit length of the order $n$ is denoted by $L_n$ and assumed to be a security parameter. The key generation algorithm is not presented: we assume each participant $i \\in \\{1, ..., m\\}$ is in possession of a proper key pair $\\{q_i, Q_i\\}$. The signing algorithm is presented as Algorithm \\ref{alg:musig_sig}. In the following, $\\langle L \\rangle$ denotes a unique encoding of the multiset of public keys and the indices used in the signing algorithm are local references to the other cosigners. The algorithm is split in interactive rounds according to the \\textbf{send} and \\textbf{upon reception of} commands.
Verification occurs following the single signature Schnorr verification Algorithm \\ref{alg:schnorr_ver}.\n\n\\bigskip\n\n\\begin{algorithm}\n\t\\caption{MuSig: signing algorithm}\n\t\\label{alg:musig_sig}\n\t\\begin{algorithmic}[1]\n\t\t\\Procedure{MuSig\\_sig}{$M, q_1, \\{Q_2, ..., Q_m\\}$}\n\t\t\\State $m \\gets \\text{hash}(M)$\n\t\t\\State $\\langle L \\rangle \\gets \\{q_1G, Q_2, ..., Q_m\\}$\n\t\t\\For {$i \\gets 1, m$}\n\t\t\\State $a_i \\gets \\text{int}(\\text{hash}_{agg}(\\langle L \\rangle \\ || \\ \\text{bytes}(Q_i)))$\n\t\t\\EndFor\n\t\t\\State $Q \\gets \\sum_{i = 1}^{m}a_iQ_i$\n\t\t\\State $k_1 \\xleftarrow{\\text{\\$}} \\{1, ..., n - 1\\}$\n\t\t\\State $K_1 \\gets k_1G, \\ t_1 \\gets \\text{hash}_{com}(\\text{bytes}(K_1))$\n\t\t\\State \\textbf{send} $t_1$\n\t\t\\State \\textbf{upon reception of} $t_2, ..., t_m$ \\textbf{send} $K_1$\n\t\t\\State \\textbf{upon reception of} $K_2, ..., K_m$ \\textbf{do}\n\t\t\\State $\\ \\ \\ \\ \\ $ \\textbf{for} $i \\gets 2, m$ \\textbf{do}\n\t\t\\State $\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $ \\textbf{if} $t_i \\neq \\text{hash}_{com}(\\text{bytes}(K_i))$ \\textbf{do}\n\t\t\\State $\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $ \\textbf{abort}\n\t\t\\State $\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $ \\textbf{end if}\n\t\t\\State $\\ \\ \\ \\ \\ $ \\textbf{end for}\n\t\t\\State $K \\gets \\sum_{i = 1}^{m}K_i$\n\t\t\\If {$\\text{jacobi}(y_K) \\neq 1$}\n\t\t\\State $k_1 \\gets n - k_1$\n\t\t\\EndIf\n\t\t\\State $c \\gets \\text{int}(\\text{hash}_{sig}(\\text{bytes}(x_K) \\ || \\ \\text{bytes}(Q) \\ || \\ m))$\n\t\t\\State $s_1 \\gets k_1\\ + ca_1q_1 \\ (\\text{mod} \\ n)$\n\t\t\\State \\textbf{send} $s_1$\n\t\t\\State \\textbf{upon reception of} $s_2, ..., s_m$ \\textbf{do}\n\t\t\\State $s \\gets \\sum_{i = 1}^{m}s_i \\ (\\text{mod} \\ n)$\n\t\t\\State \\textbf{return} $(x_K, s)$\n\t\t\\EndProcedure\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\bigskip\n\\noindent\nAs usual, here follows the proof of correctness.\n\\\\\n{\\bf Proof}: Loosely speaking, we have to prove that $sG = K + cQ$.\n\\\\\n$$sG = \\left(\\sum_{i = 1}^{m} s_i\\right)G = \\left(\\sum_{i = 1}^{m}(k_i + ca_iq_i)\\right)G = \\sum_{i = 1}^{m}(k_iG + ca_iq_iG) =$$\n$$= \\sum_{i = 1}^{m}k_iG + \\sum_{i = 1}^{m} ca_iq_iG = \\sum_{i = 1}^{m}K_i + c\\sum_{i  = 1}^{m}a_iQ_i = K + cQ.$$\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\bigskip\n\\noindent\nFor the security proof of the scheme we refer to the original article \\cite{RefWork:11}.\n\\\\\nDealing with ECDSA and Schnorr in Chapter \\ref{chpr:dss}, we saw that in the single user setting it is possible to derandomize the signature algorithm without loss of security, by generating the random nonce $k$ through a deterministic function. This is done since pseudo-random generation is one of the major sources of problems in cryptography. This is no longer advisable in the multi-user setting: it is necessary to use a different, unpredictable value whenever the other signers change their $K$ values in repeated signing attempts, otherwise secret key recovery would be possible.
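\n\nThe algebra of this key recovery can be made concrete in a few lines (a sketch with plain modular arithmetic, $n$ being the group order; the symbols match the worked example below):\n\n\\begin{lstlisting}[frame=single]\n# If Alice reuses k_A across two attempts with challenges c and c2,\n# her partial signatures s_A and s_A2 leak her private key q_A:\n# s_A - s_A2 = (c - c2) * a_A * q_A  (mod n)\ndef recover_private_key(s_A, s_A2, c, c2, a_A, n):\n    diff = (s_A - s_A2) % n\n    inv = pow((c - c2) * a_A, -1, n)   # modular inverse (Python >= 3.8)\n    return (diff * inv) % n            # Alice's secret key q_A\n\\end{lstlisting}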
\n\\noindent\nHere follows an example taken directly from \\cite{RefWork:11}: assume Alice and Bob have key pairs $\\{q_A, Q_A\\}$ and $\\{q_B, Q_B\\}$, respectively. They want to jointly produce a signature. Alice generates $k_A$ and sends $K_A = k_AG$ to Bob. In a first attempt, Bob responds with $K_B$. Alice then computes:\n$$K = K_A + K_B,$$\n$$c = \\text{hash}_{sig}(x_K \\ || \\ Q \\ || \\ m),$$\n$$s_A = k_A + ca_Aq_A \\ (\\text{mod} \\ n),$$\nand sends $s_A$ to Bob. Bob is trying to cheat on Alice, and decides not to produce a valid $s_B$, and thus the protocol fails. A new signing attempt takes place, and Alice again sends the same $K_A$. Bob responds with $K_B' \\neq K_B$. Alice then computes $c' = \\text{hash}_{sig}(x_{K_A + K_B'} \\ || \\ Q \\ || \\ m)$ and $s_A' = k_A + c'a_Aq_A \\ (\\text{mod} \\ n)$ and sends $s_A'$ to Bob. Now Bob is able to derive Alice's private key:\n$$s_A - s_A' = (c - c')a_Aq_A \\ (\\text{mod} \\ n) \\ \\Longrightarrow \\ q_A = (c - c')^{-1}a_A^{-1}(s_A - s_A') \\ (\\text{mod} \\ n).$$\nTo avoid this problem, each signer must ensure that whenever any $K$ value sent by other cosigners or the message $m$ changes, his $k_i$ changes as well.\n\n\\bigskip\n\n\\bigskip\n\n\\section{Threshold signature}\n\\label{threshold}\nWith the name threshold signatures we refer to policies of the kind $m$-of-$n$, where it is necessary that at least $m$ participants of the scheme decide to collaborate to produce a valid signature. This kind of policy is very popular in Bitcoin, since it is flexible and has many applications: for example, a single user could use such a policy to improve security, storing the keys on different machines, at the same time mitigating the risk of losing some keys.\n\\\\\nIn this section we will study a provably secure Schnorr-based threshold scheme. For the security proof we refer to \\cite{RefWork:14}. The scheme is constructed on top of the Pedersen Verifiable Secret Sharing scheme (VSS scheme) and Pedersen's multi-party protocol to generate a random shared secret, hence we start by presenting these two schemes.\n\n\\bigskip\n\n\\subsection{Verifiable secret sharing scheme}\n\\label{subsec:1}\nHereinafter, we will refer to threshold signature schemes as $t$-of-$m$ schemes in order to avoid confusion in the notation.\n\\\\\nTypically, in a $t$-of-$m$ secret sharing scheme, a trusted dealer distributes a secret $s$ to $m$ players $P_1, ..., P_m$ in such a way that any subgroup of at least $t$ members can recover the secret, while any subgroup of cardinality strictly less than $t$ learns nothing about it. A verifiable secret sharing scheme moreover prevents the dealer from cheating, since each participant can verify that his share of the secret is consistent with the others. The novelty introduced by Pedersen is that his scheme is non-interactive in the verification and does not require trust between the parties involved. Still there is a central dealer, a figure we would like to get rid of: this will be done through the second protocol, presented in Section \\ref{subsec:2}. Here follows the VSS scheme proposed by Pedersen.\n\n\\bigskip\n\\noindent\nFix an elliptic curve over a prime finite field $E(\\mathbb{F}_p)$, characterized by the EC domain parameters $T = (a, b, p, G, n, h)$, and fix another generator $H$ of the same cyclic group generated by $G$.
We require that these two generators are \\textit{nothing up my sleeve} (NUMS), meaning that we do not know the discrete logarithm of one with respect to the other, and vice versa.\n\\\\\nAssume that the dealer has a secret value $s \\in \\mathbb{Z}_n$ and a number $s' \\in \\mathbb{Z}_n$ generated at random. He commits\\footnote{A commitment scheme is a cryptographic primitive used to commit to some secret data without revealing anything about it. Commitment schemes are designed so that it is infeasible to change the secret: when it is revealed later, we have probabilistic assurance that it is the real committed value. A typical example of commitment scheme is a collision resistant hash function, e.g. SHA-256.} to the couple $(s, s')$ through the so called Pedersen commitment $C_0 = sG + s'H$. The NUMS property is needed to prevent the person who commits from lying about the values he committed to. Assume indeed that the dealer knows the discrete logarithm of $H$ with respect to $G$: $H = r_HG$. In this case he could write: $C_0 = sG + s'H = (s + s'r_H)G = (s - ar_H + ar_H + s'r_H)G = (s - ar_H)G + (s' + a)r_HG = (s - ar_H)G + (s' + a)H, \\ \\forall a \\in \\mathbb{Z}_n$.  As the calculations clearly show, knowing the DL the dealer could commit to the couple $(s, s')$ but later reveal $(s - ar_H, s' + a)$.\n\\\\\nAfter having broadcast the commitment, the secret $s$ can be shared among $P_1, ..., P_m$ through the following protocol:\n\n\\bigskip\n\\noindent\n{\\bf The dealer}:\n\\begin{enumerate}\n\t\\item Chooses a couple of random polynomials of degree $t - 1$:\n\t$$f(u) = s + f_1u + ... + f_{t - 1}u^{t - 1} \\ (\\text{mod} \\ n),$$\n\t$$f'(u) = s' + f'_1u + ... + f'_{t - 1}u^{t - 1} \\ (\\text{mod} \\ n),$$\n\twhere $s \\ \\text{and} \\ s'$ are the committed values, while $f_i, f'_i \\in \\mathbb{Z}_n$ are randomly chosen for every $i \\in \\{1, ..., t - 1\\}$;\n\t\\item Computes $(s_i, s'_i) = (f(i), f'(i))$ for $i \\in \\{1, ..., m\\}$;\n\t\\item Sends secretly $(s_i, s'_i)$ to $P_i, \\forall i \\in \\{1, ..., m\\}$;\n\t\\item Broadcasts the values $C_j = f_jG + f'_jH, \\ \\forall j \\in \\{1, ..., t - 1\\}$.\n\\end{enumerate}\n\n\\bigskip\n\n\\noindent\n{\\bf Each participant $P_i$}:\n\\begin{enumerate}\n\t\\item Verifies the consistency of his share of the secret as:\n\t$$s_iG + s'_iH = \\sum_{j = 0}^{t - 1}i^jC_j.$$\n\tIf this check fails, he broadcasts a complaint against the dealer;\n\t\\item For each complaint from a player $i$, the dealer defends himself by broadcasting the values $(s_i, s'_i) = (f(i), f'(i))$ that satisfy the checking equation at point 1;\n\t\\item Aborts the protocol if:\n\t\\begin{itemize}\n\t\t\\item The dealer received more than $t$ complaints;\n\t\t\\item He answered a complaint with values that again violate the checking equation.\n\t\\end{itemize}\n\\end{enumerate}\nPedersen proved that any coalition of less than $t$ players cannot get any information about the shared secret, provided that the discrete logarithm in $E(\\mathbb{F}_p)$ is hard\\footnote{This is important since we are not adding cryptographic assumptions.}. For the proof we refer to the original paper \\cite{RefWork:13}. Although we do not look at the proof, it may still be of interest to check why the verification procedure at step 1 should succeed:\n$$s_iG+ s'_iH = f(i)G + f'(i)H = \\sum_{j = 0}^{t- 1}f_ji^jG + \\sum_{j = 0}^{t - 1}f'_ji^jH =$$\n$$= \\sum_{j = 0}^{t - 1}i^j(f_jG + f'_jH)= \\sum_{j = 0}^{t - 1}i^jC_j.$$\nWe used the convention that $f_0 = s$ and $f'_0 = s'$.
Remembering that $C_0$ commits to the secret and that the other $C_j$ commit to the polynomial coefficients, we have the assurance that the dealer is not cheating. Indeed there is one and only one polynomial over $\\mathbb{Z}_n$ of degree at most $t - 1$ satisfying $f(i) = s_i$, respectively $f'(i) = s'_i$, for $t$ values of $i$.\n\\\\\nThis is also the key property that allows the reconstruction of the secret value from any group $\\mathcal{P}$ of $t$ participants. Indeed the members in $\\mathcal{P}$ can recover the polynomial $f$ through Lagrange's interpolation formula, which given a set of $t$ points $(i, s_i = f(i))$ returns the lowest degree polynomial (in this case a degree-$(t - 1)$ polynomial) interpolating the given points:\n$$f(u) = \\sum_{i \\in \\mathcal{P}}f(i)\\omega_i(u) \\ (\\text{mod} \\ n), \\ \\text{where} \\ \\omega_i(u) = \\prod_{j \\in \\mathcal{P}, \\ j \\neq i}\\frac{u - j}{i - j} \\ (\\text{mod} \\ n).$$\nSince it holds that $s = f(0)$ by definition, the group $\\mathcal{P}$ can directly reconstruct the secret as:\n$$s = f(0) = \\sum_{i \\in \\mathcal{P}}f(i)\\omega_i \\ (\\text{mod} \\ n),  \\ \\text{where} \\ \\omega_i = \\omega_i(0) = \\prod_{j \\in \\mathcal{P}, \\ j \\neq i}\\frac{j}{j - i} \\ (\\text{mod} \\ n).$$\n\n\\bigskip\n\n\\subsection{Protocol for the generation of a random shared secret}\n\\label{subsec:2}\nAs we pointed out before, we would like to get rid of the figure of the dealer, so that it is possible to generate a key pair between distrustful parties without his intervention. The key generation phase of the signature scheme generates a random shared secret key in a distributed way according to the following protocol:\n\\\\\n\\\\\n{\\bf Each participant $P_i$}:\n\\begin{enumerate}\n\t\\item Chooses $r_i, r'_i \\in \\mathbb{Z}_n$ at random and verifiably shares $(r_i, r'_i)$, acting as the dealer according to Pedersen's VSS scheme described above. Let the sharing polynomials of participant $i$ be $f_i(u) = \\sum_{j = 0}^{t - 1}a_{ij}u^j \\ (\\text{mod} \\ n)$, $f'_i(u) = \\sum_{j= 0}^{t - 1}a'_{ij}u^j \\ (\\text{mod} \\ n)$, where $a_{i0} = r_i$ and $a'_{i0} = r'_i$. The public commitments are $C_{ik} = a_{ik}G + a'_{ik}H$ for $k \\in \\{0, ..., t - 1\\}$;\n\t\\item Sets $H_0 = \\{P_j \\ | \\ P_j \\ \\text{is not detected to be cheating at step 1}\\}$. The distributed secret value $r$ is equal to $\\sum_{i \\in H_0}r_i \\ (\\text{mod} \\ n)$ (but nobody can compute it on his own). Each participant $P_i$ sets his share of the secret to $s_i= \\sum_{j \\in H_0}f_j(i) \\ (\\text{mod} \\ n)$ and sets the value $s'_i = \\sum_{j \\in H_0}f'_j(i) \\ (\\text{mod} \\ n)$ (the share of the secret is simply equal to the sum of the partial shares received from honest participants);\n\t\\item Each player in $H_0$ broadcasts $R_i = r_iG$ via Feldman's VSS scheme:\n\t\\begin{enumerate}\n\t\t\\item Each player $P_i$ in $H_0$ broadcasts $A_{ik} = a_{ik}G$ for $k \\in \\{0, ..., t - 1\\}$;\n\t\t\\item Each player $P_j$ verifies the values broadcast by the other players in $H_0$, i.e.
for each $P_i \\in H_0$, $P_j$ checks that:\n\t\t$$f_i(j)G = \\sum_{k = 0}^{t - 1} j^kA_{ik}.$$\n\t\tIf the check fails for an index $i$, $P_j$ complains against $P_i$, broadcasting the values $(f_i(j), f'_i(j))$ that satisfy the checking equation of Pedersen's VSS scheme but not the one at point (b);\n\t\t\\item For players $P_i$ who received at least one valid complaint, the other players run the reconstruction phase of Pedersen's VSS scheme to compute $r_i$, $f_i(u)$ and $A_{ik}$ for $k \\in \\{0, ..., t - 1\\}$. All participants in $H_0$ set $R_i = r_iG$.\n\t\\end{enumerate}\n\\end{enumerate}\nAfter the execution of the protocol the following equations hold:\n$$R = \\sum_{i \\in H_0} R_i = \\sum_{i \\in H_0}r_iG = \\sum_{i \\in H_0}A_{i0} = rG,$$\n$$f(u) = \\sum_{i \\in H_0} f_i(u) = r + a_1u + ... + a_{t - 1}u^{t - 1} \\ (\\text{mod} \\ n), \\ \\text{where} \\ a_i = \\sum_{j \\in H_0}a_{ji} \\ (\\text{mod} \\ n),$$\n$$f(i) = s_i.$$\nWe introduce the following notation:\n$$(s_1, ..., s_m) \\xleftrightarrow{\\text{(t, m)}} (r|R, a_iG, H_0), \\ i \\in \\{0, ..., t - 1\\}.$$\nIt means that $s_j$ is the share of the secret key $r$ belonging to $P_j$ for $j \\in H_0$. The values $a_iG$\nare the public commitments of the sharing polynomial and $\\{r, R\\}$ is the key pair that can be reconstructed by any subgroup of $H_0$ composed of at least $t$ participants: this can be done via Lagrange's interpolation formula, as described in Section \\ref{subsec:1}.\n\\\\\nBefore we pass to analyse the actual signature scheme, let's give a look at the checking equation at point (b) in the protocol and verify why it should work:\n$$f_i(j)G = \\sum_{k = 0}^{t - 1}a_{ik}j^kG = \\sum_{k = 0}^{t - 1}j^kA_{ik}.$$\n\n\\bigskip\n\n\\subsection{Threshold signature scheme}\nNow that we have defined the primitives on which it is built, we can finally discuss the protocol that implements the $t$-of-$m$ threshold signature scheme. It is important to understand that the participants do not simply recover the private key: this would allow anyone possessing it to sign on behalf of every other participant in the scheme, even those that did not take part in the reconstruction procedure. Instead, as we will see, the approach consists in each user generating a partial signature through his share of the secret key.\n\n\\bigskip\n\\noindent\n{\\bf Key generation}: All $m$ participants have to cooperate to generate a public key $Q$ and a share of the secret key for each participant $P_j$. This can be done relying on the protocol presented in Section \\ref{subsec:2}. The output of the protocol is:\n$$(\\alpha_1, ..., \\alpha_m) \\xleftrightarrow{\\text{(t, m)}} (q|Q, b_iG, H_0), \\ i \\in \\{0, ..., t - 1\\}.$$\nThe $\\alpha_j$ values denote the secret key share belonging to $P_j$. They will be used to generate a partial signature for the key pair $\\{q, Q\\}$.\n\\\\\n\\\\\n{\\bf Signing algorithm}: Let $msg$ denote the message to be signed. Suppose that a subset $H_1 \\subseteq H_0$ wants to issue a signature. The members of $H_1$ proceed as follows:\n\\begin{enumerate}\n\t\\item If $|H_1| < t$, abort. Otherwise, the subset $H_1$ generates a random shared secret following again the protocol presented in Section \\ref{subsec:2}. We denote the output as:\n\t$$(\\beta_1, ..., \\beta_m) \\xleftrightarrow{\\text{(t, m)}} (k|K, c_iG, H_2), \\ i \\in \\{0, ..., t - 1\\};$$\n\t\\item If $\\text{jacobi}(y_K) \\neq 1$, then each player $i$ sets $\\beta_i = n - \\beta_i$;\n\t\\item If $|H_2| < t$, abort.
Otherwise, each $P_i \\in H_2$ reveals\n\t$$\\gamma_i = \\beta_i + e\\alpha_i \\ (\\text{mod} \\ n),$$\n\twhere $e = \\text{int}(\\text{hash}(\\text{bytes}(x_K) \\ || \\ \\text{bytes}(Q) \\ || \\ msg))$;\n\t\\item Each $P_i \\in H_2$ verifies $\\forall P_l \\in H_2$:\n\t$$\\gamma_lG = \\begin{cases} K + \\sum_{j = 1}^{t - 1} c_jl^jG + e\\left(Q + \\sum_{j = 1}^{t - 1}b_jl^jG\\right), & \\mbox{if } \\text{jacobi}(y_K) = 1 \\\\ - K -\\sum_{j = 1}^{t - 1} c_jl^jG + e\\left(Q + \\sum_{j = 1}^{t - 1}b_jl^jG\\right), & \\mbox{if } \\text{jacobi}(y_K) \\neq 1 \\end{cases}$$\n\tLet $H_3 = \\{P_j \\ | \\ P_j \\ \\text{not detected to be cheating at step 4}\\}$.\n\t\\item If $|H_3| < t$, abort. Otherwise each $P_i \\in H_3$ selects an arbitrary subset $H_4 \\subseteq H_3$ with $|H_4| = t$ and computes $\\sigma$ satisfying $\\sigma = k + eq \\ (\\text{mod} \\ n)$, where:\n\t$$\\sigma = \\sum_{j \\in H_4}\\gamma_j\\omega_j \\ (\\text{mod} \\ n), \\ \\text{where} \\ \\omega_j = \\prod_{h \\in H_4, \\ h \\neq j}\\frac{h}{h - j} \\ (\\text{mod} \\ n).$$\n\tThe signature is $(x_K, \\sigma)$; it is verified as a simple Schnorr signature.\n\\end{enumerate}\nThis concludes the presentation of the protocol. Nonetheless there are some formulas that deserve greater attention.\n\\begin{itemize}\n\t\\item Checking formula at point 4\\footnote{The formula is checked in the case $\\text{jacobi}(y_K) = 1$; the other case is proved in a similar way.}:\n\t$$\\gamma_lG = (\\beta_l + e\\alpha_l)G = \\beta_lG + e\\alpha_lG = $$\n\t$$= \\left(k + \\sum_{j = 1}^{t - 1}c_jl^j\\right)G + e\\left(q + \\sum_{j = 1}^{t - 1}b_jl^j\\right)G =$$\n\t$$= K + \\sum_{j = 1}^{t - 1}c_jl^jG + e\\left(Q + \\sum_{j = 1}^{t - 1}b_jl^jG\\right).$$\n\t\\item Formula used to compute $\\sigma$: we defined $\\gamma_i = \\beta_i + e\\alpha_i \\ (\\text{mod} \\ n), \\ \\forall i \\in H_2$. In particular the equation holds for every $i \\in H_4$, with $|H_4| = t$. The $\\alpha$ and $\\beta$ values are defined to be the pointwise evaluations of the sharing polynomials created during the two iterations of the Pedersen protocol for the generation of a random shared secret, which we denote by $F_1(u)$ and $F_2(u)$. These polynomials have degree $t - 1$, so that we can define another polynomial of degree $t - 1$ as:\n\t$$F_3(u) = F_2(u) + eF_1(u) \\ \\Longrightarrow $$\n\t$$\\Longrightarrow \\ F_3(0) = F_2(0) + eF_1(0) \\Longrightarrow $$\n\t$$\\Longrightarrow \\sigma := F_3(0) = k + eq \\ (\\text{mod} \\ n).$$\n\tAt this point we can apply Lagrange's interpolation formula, since we know that $F_3(u)$ satisfies by construction $F_3(i) = \\gamma_i$:\n\t$$F_3(u) = \\sum_{j \\in H_4}\\gamma_j\\omega_j(u) \\ (\\text{mod} \\ n), \\ \\text{where} \\ \\omega_j(u) = \\prod_{h \\in H_4, \\ h \\neq j}\\frac{u - h}{j -h} \\ (\\text{mod} \\ n).$$\n\tThus, $\\sigma$ can be directly computed as:\n\t$$\\sigma = F_3(0) = \\sum_{j \\in H_4}\\gamma_j\\omega_j \\ (\\text{mod} \\ n), \\ \\text{where} \\ \\omega_j = \\prod_{h \\in H_4, \\ h \\neq j}\\frac{h}{h - j} \\ (\\text{mod} \\ n).$$\n\\end{itemize}\nNotice that the scheme is robust, meaning that a corrupt signer who does not follow the protocol will be detected. The validity of the partial signatures computed at step 3 is immediately tested at step 4.\n\n\\bigskip\n\n\\bigskip\n\\noindent\nThe scheme presented is complex and cumbersome, but all the computations and interactions burden the participants who freely decided to rely on it.
The Bitcoin blockchain would see a standard transaction with a single signature, indistinguishable from others. Nobody could tell that it required a threshold signature. This means that the protocol is highly private, but it comes with the downside of not being accountable, since the participants that did not sign cannot learn who did. As usual, it is a matter of trade-offs: if accountability is desired, the naive ECDSA approach should be preferred (and will remain available), otherwise users aiming at better privacy can rely on the presented Schnorr threshold scheme.\n\n\\bigskip\n\n\\bigskip\n\n\\section{Adaptor signature}\n\\label{adaptor}\nAdaptor signatures are the building block for the so called \\textit{scriptless scripts}, an innovative idea by Andrew Poelstra: the major aim is to introduce much more flexibility in systems lacking a scripting language (e.g. the Mimblewimble protocol discussed in \\cite{MW1} and \\cite{MW2}). However, it turns out that this kind of signature has a couple of interesting applications also in Bitcoin, which we are going to explore.\n\\\\\nAdaptor signatures are based on Schnorr and leverage its linearity property. Therefore, introducing Schnorr in Bitcoin would allow the deployment of adaptor signatures for free. As we will see in the next two sections, this tool could significantly affect cross-chain atomic swaps and payment channels, where transactions can be made atomic through signatures rather than through Bitcoin scripts. This translates again into transactions that on-chain look the same as ordinary single signer transactions, greatly improving privacy and efficiency: smaller transactions result in lower fees, lower blockchain size and lower CPU requirements (since the UTXO set, i.e. the set of unspent transaction outputs kept in RAM, shrinks), while a unique aggregate signature completely hides the participants involved.\n\n\\bigskip\n\\noindent\nAdaptor signatures come from adding to the public nonce $K$ generated during Algorithm \\ref{alg:schnorr_sig} a random EC point $T = tG$: however, the secret nonce is not updated, in the sense that $k$ is still used as the secret nonce instead of $k + t$. This obviously results in an invalid signature. But thanks to this construction, learning the secret integer $t$ is equivalent to learning a valid signature for the same message and public key used for the adaptor signature.\n\\\\\nTo clarify how adaptor signatures work, let's look at an example. Consider two participants, Alice and Bob.
To produce an adaptor signature Bob proceeds as shown in Algorithm \\ref{alg:adaptor} on input a message $M$ and his private key $q_B$.\n\n\\begin{algorithm}\n\t\\caption{Adaptor signature}\n\t\\label{alg:adaptor}\n\t\\begin{algorithmic}[1]\n\t\t\\Procedure{adaptor\\_sig}{$M, q$}\n\t\t\\State $m \\gets \\text{hash}(M)$\n\t\t\\State $k \\gets \\text{int}(\\text{hash}(\\text{bytes}(q) \\ || \\ m)) \\ (\\text{mod} \\ n)$\n\t\t\\State $t \\xleftarrow{\\text{\\$}} \\{1, ..., n - 1\\}$\n\t\t\\State $K \\gets kG, \\ T \\gets tG$\n\t\t\\If {$\\text{jacobi}(y_{K + T}) \\neq 1$}\n\t\t\\State $k \\gets n - k, \\ t \\gets n - t$\n\t\t\\EndIf\n\t\t\\State $e \\gets \\text{int}(\\text{hash}(\\text{bytes}(x_{K + T}) \\ || \\ \\text{bytes}(qG) \\ || \\ m)) \\ (\\text{mod} \\ n)$\n\t\t\\State $s' \\gets k + eq \\ (\\text{mod} \\ n)$\n\t\t\\State \\textbf{return} $(\\text{bytes}(x_T), \\text{bytes}(x_{K + T}), \\text{bytes}(s'))$\n\t\t\\EndProcedure\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\bigskip\n\\noindent\nAs we have already pointed out, the signature $(r', s') = (x_{K + T}, s')$ is not valid. Indeed, following the verification Algorithm \\ref{alg:schnorr_ver}, Alice would compute:\n$$e = \\text{int}(\\text{hash}(\\text{bytes}(r') \\ || \\ \\text{bytes}(Q_B) \\ || \\ m)) \\ (\\text{mod} \\ n) \\ \\Longrightarrow$$\n$$\\Longrightarrow \\ s'G - eQ_B = (k + eq_B)G - eQ_B = kG = K.$$\nThen the algorithm would check whether or not $x_K$ is equal to $r' = x_{K + T}$, failing. This is because the secret nonce $k$ has not been offset by $t$.\n\\\\\nAlthough the adaptor signature is invalid, Alice can check that it is a valid adaptor signature, meaning that it is consistent. Consistency is to be understood in the sense that learning a valid signature is equivalent to learning $t$ and vice versa. This is done by checking that:\n$$s'G = K + eQ_B.$$\nShe can do this verification because she has $s'$ and $Q_B$, while from the given $x$ coordinates she is able to reconstruct both $T$ and $K + T$, from which she gets $K$. However, thanks to the difficulty of the ECDLP, she cannot find $t$.\n\\\\\nAssume now that Bob gives Alice a valid signature $(r', s' + t)$. It is valid by construction; indeed, after the usual checks, the verification algorithm would compute:\n$$e = \\text{int}(\\text{hash}(\\text{bytes}(r') \\ || \\ \\text{bytes}(Q_B) \\ || \\ m)) \\ (\\text{mod} \\ n) \\ \\Longrightarrow$$\n$$\\Longrightarrow \\ (s' + t)G - eQ_B = (k + eq_B + t)G - eQ_B = (k + t)G = K + T.$$\nThis time, the check $x_{K + T} = r'$ would succeed.\n\\\\\nFrom this valid signature and the previously received adaptor signature, Alice can immediately recover $t$ simply by taking the difference: $s' + t - s' = t$.\n\\\\\nObviously the converse also holds: given the adaptor signature $(r, r', s') = (x_T, x_{K + T}, s')$ and $t$, Alice can immediately compute a valid signature as $(r', s' + t)$.\n\n\\bigskip\n\\noindent\nAdaptor signatures can be used jointly with the multi-signature scheme described in Section \\ref{musig}. Assuming again that it is Bob who wants to generate an adaptor signature, he would modify slightly Algorithm \\ref{alg:musig_sig}, while Alice would follow it closely: when computing $K_B$ at step 9, he generates $t$ at random and computes $T = tG$. Then, he commits to $K_B + T$ and, at step 11, sends it to Alice. The final modification occurs at step 24: with $s_B$ he also sends Alice $T$, so that she can verify the adaptor signature. Nothing else changes.
\n\\\\\nThe final joint signature is:\n$$(x_{K_A + K_B + T}, s),$$\nwhere\n$$s = s_A + s_B = $$\n$$= k_A + \\text{hash}(\\text{bytes}(x_{K_A + K_B + T}) \\ || \\ \\text{bytes}(Q) \\ || \\ m)\\text{hash}(\\langle L \\rangle \\ || \\ \\text{bytes}(Q_A))q_A + $$\n$$+ k_B + \\text{hash}(\\text{bytes}(x_{K_A + K_B + T}) \\ || \\ \\text{bytes}(Q) \\ || \\ m)\\text{hash}(\\langle L \\rangle \\ || \\ \\text{bytes}(Q_B))q_B$$\nand\n$$Q =  \\text{hash}(\\langle L \\rangle \\ || \\ \\text{bytes}(Q_A))Q_A + \\text{hash}(\\langle L \\rangle \\ || \\ \\text{bytes}(Q_B))Q_B.$$\nAs before, this signature is not valid for $Q$, since Bob did not offset $s_B$ by $t$. Again, learning $t$ is equivalent to learning a valid signature.\n\\\\\nThis is the key idea behind the applications we will study in the following sections.\n\n\\bigskip\n\\noindent\nAdaptor signatures may not seem a big deal, but it is worth noticing that if $t$ is some necessary data for a separate protocol, arbitrary steps of arbitrary protocols can be made equivalent to signature production. In such a case it could be useful to attach auxiliary data to the signature to ensure that the role of $t$ is the one claimed. In particular, by using the same $T$ in multiple subsequent adaptor signatures, it is possible to make arbitrary sets of signatures atomic: once a valid signature is produced, $t$ is revealed, making all the signatures valid. A last thing worth pointing out is that adaptor signatures are deniable: for every signature on the blockchain one can come up with some $t$ and construct a linked adaptor signature.\n\n\\bigskip\n\n\\subsection{Atomic swap}\n\\label{atomic}\nIn this section we look at one application of adaptor signatures, starting from the easier concept of cross-chain atomic swap. Notice that the Schnorr applications discussed in Sections \\ref{musig} and \\ref{threshold} can be extended outside the Bitcoin ecosystem. Atomic swaps instead are a concept strictly related to the crypto-currency world; for this reason we suggest stopping for a moment and having a look at Appendix \\ref{app:A}: there we give an overview of the role that signatures play in Bitcoin and a brief introduction to the Bitcoin scripting language.\n\n\\bigskip\n\\noindent\nA cross-chain atomic swap is the exchange of different crypto-currencies between two users in an atomic way, meaning that either the swap is successful or the balances of the participants remain unchanged, in such a way that it is not possible to cheat, stealing coins from the other party. Following Bitcoin's spirit, this is done in a decentralized way, without the need of resorting to a trusted third party. Atomic swaps' functionalities are important, since usually to convert a crypto-currency into another it is necessary to resort to exchanges, which charge high fees and constitute a single point of failure.\n\\\\\nLet's start by discussing how atomic swaps are implemented nowadays: to avoid recourse to exchanges and fraud by other users, the atomicity of the swap is enforced through the so called HTLC (Hashed TimeLock Contract). It is a special kind of locking script used to lock the funds on both blockchains. It is constructed in such a way that when one party claims the funds on one chain, the other party can retrieve the coins locked in the other transaction.
If something goes wrong or if too much time elapses, it is possible for both parties to get back their coins.\n\\\\\nConsidering as usual the two parties to be Alice and Bob, we imagine that they want to trade A-coins for B-coins. An HTLC constructed for this situation could have a structure similar to the following one:\n\n\\bigskip\n\n\\begin{lstlisting}[frame=single]\nOP_IF \n\tOP_HASH256 <digest> OP_EQUALVERIFY OP_DUP OP_HASH160 <Bob address>\nOP_ELSE\n\t<num> OP_CHECKSEQUENCEVERIFY OP_DROP OP_DUP OP_HASH160 <Alice address>\nOP_ENDIF\nOP_EQUALVERIFY OP_CHECKSIG\n\\end{lstlisting}\n\n\\bigskip\n\\noindent\nThis HTLC would lock the funds on Alice's blockchain. The two branches of the if operator enforce two possibilities: the first one is linked to Bob, who, by providing the preimage of the digest\\footnote{Typically one party generates a secret and gives its hashed value (digest) to the counterparty: it is the secret value that enforces the atomicity, while the hash value acts as a commitment.} and a signature valid for his public key, can claim the funds. This can be done through an unlocking script of the following form, where the 1 at the end is needed to force the execution of the correct if branch:\n\n\\bigskip\n\n\\begin{lstlisting}[frame=single]\n<Bob sig> <Bob pubkey> <preimage> 1\n\\end{lstlisting}\n\n\\bigskip\n\\noindent\nThe second path corresponds to Alice's lock time refund possibility: after enough time has elapsed\\footnote{The exact amount depends on $<$num$>$ OP\\_CHECKSEQUENCEVERIFY. This is not the unique possibility to fix a time lock: the OP\\_CHECKSEQUENCEVERIFY operator can be substituted by OP\\_CHECKLOCKTIMEVERIFY.} she can claim the funds through an unlocking script that would look something like this, where again the 0 at the end enforces the execution of the else path:\n\n\\bigskip\n\n\\begin{lstlisting}[frame=single]\n<Alice sig> <Alice pubkey> 0\n\\end{lstlisting}\n\n\\bigskip\n\\noindent\nA similar contract, where the roles of Alice and Bob are exchanged, would lock Bob's funds on the B-coin chain. Assuming it is Bob who generates the secret preimage, he would need to send its hash (the digest) to Alice, in order for her to construct the transaction. The atomicity of the swap is ensured by the fact that when Bob spends the output locked by the HTLC he publishes the preimage on-chain: Alice, who needs to monitor the blockchain, can now use the same preimage to take the B-coins.\n\\\\\nCaution has to be used when setting the time locks, in order to avoid fund loss: if Bob chooses the preimage, the time locked refund of the B-coins should be greater than the one locking the A-coins. Otherwise Bob would be able to wait until his lock time expires and get his B-coins back. However Alice's lock time, being greater, would still prevent the creation of a refund transaction: Bob could publish the preimage and take all the money.\n\n\\bigskip\n\\noindent\nAs we have just seen, Alice and Bob have to agree on some aspects of the protocol before they can proceed:\n\\begin{itemize}\n\t\\item The number of A-coins and B-coins to be exchanged (i.e. the exchange rate between the two currencies);\n\t\\item The addresses where they want to receive funds: each party needs an address for A-coins and one for B-coins (in case of success and failure of the atomic swap)\\footnote{Addresses here have to be intended in a general sense: there are blockchain based protocols that do not have addresses at all, e.g.
Mimblewimble.};\n\t\\item The hash of the locking secret: Bob generates the secret and sends its hash to Alice;\n\t\\item The expiry time of the exchange. As we have said, Alice needs enough time to redeem her funds, otherwise Bob would be able to grab them all.\n\\end{itemize}\n\n\\bigskip\n\\noindent\nNow we analyse the possibilities offered by the introduction of adaptor signatures: first of all, they would allow atomic swaps also on chains which do not support a scripting language capable of enforcing an HTLC.\n\\\\\nFor the sake of simplicity, but without loss of generality, we assume a unitary exchange rate between A-coins and B-coins. Here follows the detailed protocol\\footnote{We present the protocol in the case where both chains support Schnorr signatures; actually this is not strictly required.}:\n\\begin{enumerate}\n\t\\item Alice and Bob agree on a pair of locking scripts which are secured by an aggregated public key. Let's name these outputs $O_1$ and $O_2$: they are locked respectively by the pairs of keys $(Q_1^A, Q_1^B)$ and $(Q_2^A, Q_2^B)$, aggregated through the MuSig protocol described in Section \\ref{musig}. In particular:\n\t\\begin{enumerate}\n\t\t\\item Alice prepares a transaction TX$_1$ paying one A-coin into $O_1$, but does not sign it. Then she gives it to Bob and asks for a refund transaction that pays the funds from TX$_1$ back to Alice, but has lock time L$_1$ (the two cooperate in the signing procedure). Upon reception of the refund transaction Alice can sign TX$_1$. If Bob broadcasts it, he can do nothing on his own, since he does not have Alice's partial signature; Alice can just wait until the lock time expires and broadcast the refund transaction;\n\t\t\\item Bob mirrors Alice's behaviour: he prepares TX$_2$ paying one B-coin to $O_2$ but does not sign it; he shares TX$_2$ and requires a signed refund transaction from Alice: it pays from TX$_2$ to Bob with lock time L$_2 > L_1$. Upon reception of the refund transaction (the two must interact to sign it), Bob can sign TX$_2$.\n\t\\end{enumerate}\n\t\\item At this point Alice and Bob can broadcast TX$_1$ and TX$_2$ and both wait for confirmation on the two blockchains. If there are problems at this step, refund transactions can be used to regain control of the money after having waited for the lock times to expire;\n\t\\item Alice and Bob engage in the MuSig protocol to spend from TX$_1$ and TX$_2$. Bob generates in parallel adaptor signatures for both $O_1$ and $O_2$ with the same $t$ value and sends them to Alice. She verifies them\\footnote{In particular she verifies that both are valid adaptor signatures and that the offset value used is the same.}: if they are valid, she can provide her partial signature to spend TX$_1$, while if there are problems she aborts the protocol, broadcasting the refund transaction;\n\t\\item When Bob spends the output of TX$_1$ thanks to Alice's signature, she learns the secret offset $t$ (monitoring the blockchain) and becomes capable of spending from TX$_2$.
At this point Bob cannot cheat: since L$_2$ is greater than L$_1$, Alice has time to take the B-coins.\n\\end{enumerate}\nIn the cooperative case (no refund transaction broadcast) the unlocking scripts of TX$_1$ and of TX$_2$ look exactly the same as an ordinary single user payment: moreover, since only Alice had the adaptor signatures allowing her to extract $t$, nobody but Alice and Bob can link the transactions between the two chains.\n\\\\\nNotice that the efficiency achievement is impressive: we were able to condense the verbose script semantics required by the HTLC into a fixed size signature: this results, once again, in greater privacy and efficiency, leading to an improvement also in fungibility.\n\n\\bigskip\n\n\\subsection{Lightning Network}\n\\label{ln}\nThe Lightning Network is an example of the broader concept of payment channel, an idea originally proposed by Joseph Poon and Thaddeus Dryja in \\cite{RefWork:18}: the aim of this construction is to address the well known scalability problem of Bitcoin through the so called layer 2 solutions. The name comes from the fact that they do not scale on chain\\footnote{It is nearly impossible for Bitcoin to reach the transaction volume of centralized circuits like Visa: Bitcoin processes about ten transactions per second (tps), while Visa has been able to handle a peak of 47000 tps. It means that Visa is around 5000 times better than Bitcoin in this regard: scaling on-chain would result in a bloat of the blockchain size.}, but rather suggest the implementation of other protocols to handle off-chain transactions. The blockchain is seen as a court to which it is necessary to resort only in controversial situations or to close the channel. Although other implementations have been suggested (e.g. the eltoo protocol\\footnote{The Lightning Network enforces honest behaviour under a threat; eltoo avoids this punitive logic by making it possible to override the broadcast of an old transaction, but needs the deployment of the SIGHASH\\_NOINPUT soft fork proposed in \\cite{BIP5}. Moreover, three transactions have to be committed to the blockchain to open and close any channel.} described in \\cite{RefWork:19}), nowadays layer 2 solutions leverage the HTLC presented in Section \\ref{atomic}. The idea is to create bidirectional channels between participants that transact frequently: the Lightning Network can be seen as a routed path composed of these channels, in such a way that it is not necessary to open a new channel any time two parties want to transact.\nWe start by presenting the basic workings of the Lightning Network, which could be of interest on their own. Since its inner working is essentially an extension of what was presented for atomic swaps (atomicity being the core property to be preserved), we will give only a high-level overview.\n\\\\\nFirst we consider the example of a single channel between Alice and Bob, then we will look at how different channels can be connected to create a multi-hop payment channel.\n\\\\\nAlice and Bob open a channel by committing some funds to a 2-of-2 multi-signature address through the so called funding transaction. To avoid fund loss, each requires the other party to sign an asymmetric commitment transaction before the funding transaction is broadcast to the Bitcoin network.
These transactions spend the output of the funding transaction and lock the funds in two outputs: the first immediately paying what is owed to the other party, the second paying the party holding the transaction after a time lock. For example, the outputs of Bob's commitment transaction would look something like this:\n\\begin{itemize}\n\\item Output 0:\n\t\\begin{lstlisting}[frame=single]\n\tOP_HASH160 <Alice's address> OP_EQUALVERIFY OP_CHECKSIG\\end{lstlisting}\n\t\n\\item Output 1:\n\t\\begin{lstlisting}[frame=single]\n\tOP_IF\n\t\tOP_HASH160 <revocation address>\n\tOP_ELSE\n\t\t<num> OP_CHECKSEQUENCEVERIFY OP_DROP OP_HASH160 <Bob's address> \n\tOP_ENDIF\n\tOP_EQUALVERIFY OP_CHECKSIG\\end{lstlisting}\n\\end{itemize}\nIn Alice's transaction the roles of the two parties are exchanged. These transactions are signed by the other party, so that at any time the holder can add their own signature and broadcast: the revocation address is required to update the state of the channel, since its presence prevents the broadcast of an old state that is economically more appealing. This is a key problem for payment channels: any signed commitment transaction could be broadcast and the blockchain is not amendable, thus it is fundamental to prevent old commitment transactions from being broadcast (there is an economic incentive for the party that spent some money in the payment channel to broadcast an older version where he has more money). The revocation address is linked to a combination of Alice's public key and a revocation public key (revocation address = RIPEMD160(SHA-256($Q_A + Q_{rev}$))): through this mechanism, when Bob gives Alice the secret revocation key $q_{rev}$, she alone, by adding it to her private key, is able to retrieve the funds locked in Output 1 before the lock time has elapsed. In this way, if Bob broadcasts an old state Alice has some time to take action and punish Bob by retrieving all the funds: this is the threat enforcing honest behaviour we were mentioning before. Old transactions are toxic, since an erroneous broadcast would result in the funds being lost: it is better to delete intermediate states. However, revocation is not automatic: Alice has to monitor the blockchain, and the same holds true for Bob. A sketch of the revocation arithmetic follows below.\n\\\\\nAnytime they want to update the balance, they exchange the revocation keys and signatures for the new commitment transactions, which always spend from the original funding transaction.\n\\\\\nThe closure of a channel can happen cooperatively, with the two parties creating a transaction with the correct balance but without any time lock, or unilaterally, meaning that one of the parties broadcasts a commitment transaction for which the other party does not have the revocation key: the drawback in this case is that the broadcaster needs to wait for the lock time to expire.\n
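\\bigskip\n\\noindent\nThe following is a minimal sketch of the revocation-address mechanism just described, again over a toy additive group with illustrative names (a real channel combines secp256k1 points; the ripemd160 digest requires OpenSSL support):\n\\begin{lstlisting}[frame=single]\nimport hashlib\n\nq = 101  # toy group order, as in the previous sketch\n\ndef pub(x):       # toy public key: the 'point' is just x mod q\n    return x % q\n\ndef addr(point):  # revocation address = RIPEMD160(SHA-256(point))\n    h = hashlib.sha256(str(point).encode()).digest()\n    return hashlib.new('ripemd160', h).hexdigest()\n\nq_A, q_rev = 13, 42                # Alice's key, Bob's revocation secret\nrev_addr = addr((pub(q_A) + pub(q_rev)) % q)  # committed in Output 1\n\n# Once Bob hands q_rev to Alice, she alone controls the combined key\n# q_A + q_rev, and can spend Output 1 before the lock time elapses:\nassert addr(pub(q_A + q_rev)) == rev_addr\n\\end{lstlisting}\n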
\\bigskip\n\\noindent\nNow that we have seen the basic idea behind bidirectional channels, we can study their routing. Such a construction creates multi-hop payment channels, for which HTLCs are needed: these contracts ensure the atomicity of all the payments along the path.\n\\\\\nConsider the situation in which Alice wants to pay Carol 1 BTC. There is no open channel between them: they could open one, committing to the blockchain and consequently paying the associated fees, but a better idea is to rely on the Lightning Network, which automatically finds a path from Alice to Carol taking advantage of already open channels. In our example we consider the channels Alice-Bob and Bob-Carol.\n\\\\\nAlice's Lightning Network node (LN node, not to be confused with the Bitcoin node) needs to find a proper path connecting her and Carol. Notice that she is the only one knowing this path: every other participant knows only the previous and the following nodes\\footnote{Such a situation is achieved through the so called onion routing.}. Obviously Carol knows that Alice has to pay her: for this reason she generates a secret $r$ and sends its hash $H_r = \\text{hash}(r)$ to Alice. The atomicity of all the transactions is ensured by this hash value. At this point Alice constructs an HTLC paying to the hash $H_r$: the output of the transaction has to be slightly greater than 1 BTC (e.g. 1.001 BTC), so that the intermediate participants are rewarded with a small fee for their participation. Alice gives the HTLC to Bob, who updates the balance in the channel: Alice's balance decreases but Bob's does not increase; this is due to the fact that Alice's funds are committed in the HTLC and Bob can redeem them only by knowing $r$. The HTLC provides a lock time refund logic, so that Bob has a limited amount of time to learn the secret. For this reason he constructs another HTLC updating the state of his channel with Carol, the logic being the same as above: he locks 1 BTC (he takes 0.001 BTC for himself as a fee) contingent on the revelation of $H_r$'s preimage, with a smaller time lock than the previous contract. The smaller time lock gives Bob enough time to take the coins from Alice's HTLC in case $r$ is revealed by Carol. At this point Carol has the secret $r$ and can redeem the funds: she can either settle the HTLC on-chain, revealing $r$ and closing the channel with Bob, or simply reveal the secret to Bob and update the balance of the channel. The same holds true for Bob, who has an incentive to reveal $r$ to Alice (remember that he earned a small fee). The secret acts as a sort of receipt for Alice, telling her that the payment has been successful.\n
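\\bigskip\n\\noindent\nThe bookkeeping of such a route can be sketched in a few lines; the amounts, fee and timelock deltas below are illustrative choices of ours, not the actual protocol parameters:\n\\begin{lstlisting}[frame=single]\nimport hashlib\n\ndef h(x):\n    return hashlib.sha256(x).hexdigest()\n\nsecret = b'carol-secret'\nH_r = h(secret)                  # sent by Carol to Alice before routing\n\nroute, fee, amount, base = ['Alice', 'Bob', 'Carol'], 0.001, 1.0, 100\nhtlcs = []\nfor n, (src, dst) in enumerate(zip(route, route[1:])):\n    hops_left = len(route) - 1 - n\n    htlcs.append({'from': src, 'to': dst,\n                  'amount': amount + fee * (hops_left - 1),\n                  'hash': H_r,\n                  'timelock': base + 10 * hops_left})  # shrinks per hop\n\n# Settlement propagates backwards once Carol reveals the preimage:\nfor htlc in reversed(htlcs):\n    assert h(secret) == htlc['hash']  # every hop checks the same hash\nprint(htlcs)  # Alice->Bob: 1.001 BTC @ 120, Bob->Carol: 1.0 BTC @ 110\n\\end{lstlisting}\n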
\\bigskip\n\\noindent\nIn the non-cooperative case, these transactions go on-chain and are easily identifiable by blockchain observers, leading to a non-negligible loss of privacy. This issue can be avoided through adaptor signatures\\footnote{A much more thorough discussion can be found in \\cite{RefWork:17}.}. Instead of using $H_r$, Alice would require public keys $T_B$ and $T_C$ from Bob and Carol (a single offset could be used, but relying on one per participant allows reblinding of the offset). The funds of her transaction would be locked requiring a 2-of-2 Schnorr multi-signature: during the signature procedure she requires from Bob an adaptor signature with public offset $T_B + T_C$, so that Bob cannot provide a valid signature without knowing $t_C$. Similarly, Bob sends coins to Carol demanding an adaptor signature with $T_C$ as offset. At this point Carol can produce a valid signature knowing $t_C$: she would grab the funds, revealing $t_C$ (either by publishing on-chain or by telling it directly to Bob) and thus allowing Bob to do the same with Alice's transaction. Notice that for this construction to work it is necessary to include a lock time refund logic.\n\\\\\nBeyond the usual efficiency and privacy benefits, we have seen that the secret values used as offsets can be reblinded between hops, allowing long chains of transactions to be made atomic while even the participants cannot identify which transactions are part of the chain. This is in sharp contrast with HTLC-based channels: although Alice does not reveal the path, all the members see the same hash and can collude to identify the nodes participating in the route.\n\n", "meta": {"hexsha": "6c88864eff3b638880f49527f29888e7347d9292", "size": 57947, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/Schnorr_applications.tex", "max_stars_repo_name": "gionasoldati/thesis", "max_stars_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/Schnorr_applications.tex", "max_issues_repo_name": "gionasoldati/thesis", "max_issues_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/Schnorr_applications.tex", "max_forks_repo_name": "gionasoldati/thesis", "max_forks_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-11-06T23:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-06T23:47:52.000Z", "avg_line_length": 123.5543710021, "max_line_length": 1988, "alphanum_fraction": 0.7397276822, "num_tokens": 15564, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.835483553488848, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5923738202493446}}
{"text": "\\chapter{Approximate string matching}\n\n\\section{Edit (Levensthein) distance}\n\n\\begin{equation*}\n  \\qquad\\operatorname{d}_{t,p}(i,j) =\n  \\begin{cases}\n    \\max(i,j) & \\text{ if } \\min(i,j)=0, \\\\\n    \\min\n      \\begin{cases}\n        \\operatorname{d}_{t,p}(i-1,j) + 1 \\\\\n        \\operatorname{d}_{t,p}(i,j-1) + 1 \\\\\n        \\operatorname{d}_{t,p}(i-1,j-1) + 1_{(t_i \\neq p_j)}\n      \\end{cases} & \\text{ otherwise.}\n  \\end{cases}\n\\end{equation*}\n\n\\begin{table}\n  \\begin{center}\n    \\begin{tabular}{c|ccccccc}\n                  &\\texttt{-}\t\t&\\texttt{a}\t\t&\\texttt{g}\t\t&\\texttt{t}\t\t&\\texttt{c}\t\t&\\texttt{a}\t\t&\\texttt{g}\\\\\n      \\hline\n      \\texttt{-}\t&\\tm{l10}0\\tm{r10}  &\\tm{l11}1\\tm{r11}\t&\\tm{l12}2\\tm{r12}  &\\tm{l13}3\\tm{r13}  &\\tm{l14}4\\tm{r14}\t&\\tm{l15}5\\tm{r15}\t&\\tm{l16}6\\tm{r16}\\\\[2px]\n      \\texttt{g}\t&\\tm{l20}1\\tm{r20}\t&\\tm{l21}1\\tm{r21}\t&\\tm{l22}1\\tm{r22}\t&\\tm{l23}2\\tm{r23}\t&\\tm{l24}3\\tm{r24}\t&\\tm{l25}4\\tm{r25}\t&\\tm{l26}5\\tm{r26}\\\\[2px]\n      \\texttt{a}\t&\\tm{l30}2\\tm{r30}\t&\\tm{l31}1\\tm{r31}\t&\\tm{l32}2\\tm{r32}\t&\\tm{l33}2\\tm{r33}\t&\\tm{l34}3\\tm{r34}\t&\\tm{l35}3\\tm{r35}\t&\\tm{l36}4\\tm{r36}\\\\[2px]\n      \\texttt{t}\t&\\tm{l40}3\\tm{r40}\t&\\tm{l41}2\\tm{r41}\t&\\tm{l42}2\\tm{r42}\t&\\tm{l43}2\\tm{r43}\t&\\tm{l44}3\\tm{r44}\t&\\tm{l45}4\\tm{r45}\t&\\tm{l46}4\\tm{r46}\\\\[2px]\n      \\texttt{c}\t&\\tm{l50}4\\tm{r50}\t&\\tm{l51}3\\tm{r51}\t&\\tm{l52}3\\tm{r52}\t&\\tm{l53}3\\tm{r53}\t&\\tm{l54}2\\tm{r54}\t&\\tm{l55}3\\tm{r55}\t&\\tm{l56}4\\tm{r56}\\\\[2px]\n      \\texttt{a}\t&\\tm{l60}5\\tm{r60}  &\\tm{l61}4\\tm{r61}\t&\\tm{l62}4\\tm{r62}\t&\\tm{l63}4\\tm{r63}\t&\\tm{l64}3\\tm{r64}\t&\\tm{l65}2\\tm{r65}\t&\\tm{l66}3\\tm{r66}\\\\[2px]\n    \\end{tabular}\n%    \\begin{tikzpicture}\n%      [\n%        remember picture, \n%        overlay, \n%        -latex,\n%        shorten >=2pt,\n%        shorten <=2pt\n%      ]\n%      \\draw[green] ([yshift=0.5ex]{pic cs:l11}) -- ([yshift=0.5ex]{pic cs:r10});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l12}) -- ([yshift=0.5ex]{pic cs:r11});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l13}) -- ([yshift=0.5ex]{pic cs:r12});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l14}) -- ([yshift=0.5ex]{pic cs:r13});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l15}) -- ([yshift=0.5ex]{pic cs:r14});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l16}) -- ([yshift=0.5ex]{pic cs:r15});\n%      \n%      \\draw[green] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r20}) -- ([xshift=-0.5ex]{pic cs:r10});\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r30}) -- ([xshift=-0.5ex]{pic cs:r20});\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r40}) -- ([xshift=-0.5ex]{pic cs:r30});\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r50}) -- ([xshift=-0.5ex]{pic cs:r40});\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r60}) -- ([xshift=-0.5ex]{pic cs:r50});\n%\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l21}) -- ([yshift=0.0ex]{pic cs:r10});\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l22}) -- ([yshift=0.0ex]{pic cs:r11});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l23}) -- ([yshift=0.5ex]{pic cs:r22});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l24}) -- ([yshift=0.5ex]{pic cs:r23});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l25}) -- ([yshift=0.5ex]{pic cs:r24});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l26}) -- ([yshift=0.0ex]{pic cs:r15});\n%\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l31}) -- 
([yshift=0.0ex]{pic cs:r20});\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l32}) -- ([yshift=0.0ex]{pic cs:r21});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l33}) -- ([yshift=0.0ex]{pic cs:r22});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l34}) -- ([yshift=0.0ex]{pic cs:r23});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l35}) -- ([yshift=0.0ex]{pic cs:r24});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l36}) -- ([yshift=0.5ex]{pic cs:r35});\n%\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r41}) -- ([xshift=-0.5ex]{pic cs:r31});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l42}) -- ([yshift=0.0ex]{pic cs:r31});\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l43}) -- ([yshift=0.0ex]{pic cs:r32});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l44}) -- ([yshift=0.0ex]{pic cs:r33});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l45}) -- ([yshift=0.0ex]{pic cs:r34});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l46}) -- ([yshift=0.0ex]{pic cs:r35});\n%\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r51}) -- ([xshift=-0.5ex]{pic cs:r41});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l52}) -- ([yshift=0.0ex]{pic cs:r41});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l53}) -- ([yshift=0.0ex]{pic cs:r42});\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l54}) -- ([yshift=0.0ex]{pic cs:r43});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l55}) -- ([yshift=0.5ex]{pic cs:r54});\n%      \\draw[blue] ([yshift=0.5ex]{pic cs:l56}) -- ([yshift=0.5ex]{pic cs:r55});\n%      \n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l61}) -- ([yshift=0.0ex]{pic cs:r50});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l62}) -- ([yshift=0.0ex]{pic cs:r51});\n%      \\draw[blue] ([yshift=1.0ex]{pic cs:l63}) -- ([yshift=0.0ex]{pic cs:r52});\n%      \\draw[blue] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r64}) -- ([xshift=-0.5ex]{pic cs:r54});\n%      \\draw[green] ([yshift=1.0ex]{pic cs:l65}) -- ([yshift=0.0ex]{pic cs:r54});\n%      \\draw[green] ([yshift=0.5ex]{pic cs:l66}) -- ([yshift=0.5ex]{pic cs:r65});\n%      \n%      \\draw[green] ([yshift=1.0ex,xshift=-0.5ex]{pic cs:r32}) -- ([xshift=-0.5ex]{pic cs:r22});\n%      \\draw[green] ([yshift=0.5ex]{pic cs:l32}) -- ([yshift=0.5ex]{pic cs:r31});\n%    \\end{tikzpicture}\n  \\end{center}\n  \\caption{Edit distance between $t=\\texttt{agtcag}$ a $p=\\texttt{gatca}$.}\n\\end{table}\n\n\\clearpage\n\\section{Weighted edit distance}\n\n\\begin{equation*}\n  \\qquad\\operatorname{d}_{t,p}(i,j) =\n  \\begin{cases}\n    0 & \\text{ if } i=0 \\land j = 0,\\\\\n    {d}_{t,p}(i-1,0) + D(t_i) & \\text{ if } j=0 \\land i > 0, \\\\\n    {d}_{t,p}(0,j-1) + I(p_j) & \\text{ if } i=0 \\land j > 0, \\\\\n    \\min\n      \\begin{cases}\n        \\operatorname{d}_{t,p}(i-1,j) + D(t_i) \\\\\n        \\operatorname{d}_{t,p}(i,j-1) + I(p_j) \\\\\n        \\operatorname{d}_{t,p}(i-1,j-1) + R(p_j,t_i)_{(t_i \\neq p_j)}\n      \\end{cases} & \\text{ otherwise.}\n  \\end{cases}\n\\end{equation*}\n\n\\begin{table}\n  \\begin{center}\n    \\begin{tabular}{cc}\n      \\begin{tabular}[t]{c|cccc}\n        R   &\\texttt{a} &\\texttt{g} &\\texttt{c} &\\texttt{t}\\\\\\hline\n        \\texttt{a}  &-   &2   &3   &4\\\\\n        \\texttt{g}  &2   &-   &6   &2\\\\\n        \\texttt{c}  &2   &5   &-   &1\\\\\n        \\texttt{t}  &5   &2   &1   &-\\\\\\hline\n        I   &5 &2 &4 &3\\\\\n        D   &2 &5 &3 &4\\\\\n      \\end{tabular}\n      \\quad\n      \\begin{tabular}[t]{c|ccccccc}\n        DP          
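\nThe recurrence above translates directly into code; the following Python sketch (ours, complementing the C/C++ listings referenced at the end of the chapter) fills the same table row by row:\n\\begin{lstlisting}[frame=single]\ndef edit_distance(t, p):\n    # d[i][j] = distance between t[:i] and p[:j], as in the recurrence\n    d = [[max(i, j) if min(i, j) == 0 else 0\n          for j in range(len(p) + 1)] for i in range(len(t) + 1)]\n    for i in range(1, len(t) + 1):\n        for j in range(1, len(p) + 1):\n            d[i][j] = min(d[i-1][j] + 1,             # deletion\n                          d[i][j-1] + 1,             # insertion\n                          d[i-1][j-1] + (t[i-1] != p[j-1]))  # substitution\n    return d[len(t)][len(p)]\n\nassert edit_distance('agtcag', 'gatca') == 3  # bottom-right cell above\n\\end{lstlisting}\n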
&\\texttt{-}\t\t&\\texttt{a}\t\t&\\texttt{g}\t\t&\\texttt{t}\t\t&\\texttt{c}\t\t&\\texttt{a}\t\t&\\texttt{g}\\\\\n        \\hline\n        \\texttt{-}\t&\\tm{l10} 0\\tm{r10}  &\\tm{l11} 2\\tm{r11}\t&\\tm{l12} 7\\tm{r12} &\\tm{l13}11\\tm{r13} &\\tm{l14}14\\tm{r14}\t&\\tm{l15}16\\tm{r15}\t&\\tm{l16}21\\tm{r16}\\\\[2px]\n        \\texttt{g}\t&\\tm{l20} 2\\tm{r20}\t &\\tm{l21} 2\\tm{r21}\t&\\tm{l22} 2\\tm{r22}\t&\\tm{l23} 6\\tm{r23}\t&\\tm{l24} 9\\tm{r24}\t&\\tm{l25}11\\tm{r25}\t&\\tm{l26}16\\tm{r26}\\\\[2px]\n        \\texttt{a}\t&\\tm{l30} 7\\tm{r30}\t &\\tm{l31} 2\\tm{r31}\t&\\tm{l32} 4\\tm{r32}\t&\\tm{l33} 6\\tm{r33}\t&\\tm{l34} 9\\tm{r34}\t&\\tm{l35} 9\\tm{r35}\t&\\tm{l36}13\\tm{r36}\\\\[2px]\n        \\texttt{t}\t&\\tm{l40}10\\tm{r40}\t &\\tm{l41} 5\\tm{r41}\t&\\tm{l42} 4\\tm{r42}\t&\\tm{l43} 4\\tm{r43}\t&\\tm{l44} 7\\tm{r44}\t&\\tm{l45} 9\\tm{r45}\t&\\tm{l46}11\\tm{r46}\\\\[2px]\n        \\texttt{c}\t&\\tm{l50}14\\tm{r50}\t &\\tm{l51}9\\tm{r51}\t&\\tm{l52} 8\\tm{r52}\t&\\tm{l53} 5\\tm{r53}\t&\\tm{l54} 4\\tm{r54}\t&\\tm{l55} 6\\tm{r55}\t&\\tm{l56}11\\tm{r56}\\\\[2px]\n        \\texttt{a}\t&\\tm{l60}19\\tm{r60}  &\\tm{l61}14\\tm{r61}\t&\\tm{l62}11\\tm{r62}\t&\\tm{l63} 10\\tm{r63}\t&\\tm{l64} 8\\tm{r64}\t&\\tm{l65} 4\\tm{r65}\t&\\tm{l66}8\\tm{r66}\\\\[2px]\n      \\end{tabular}\n    \\end{tabular}\n  \\end{center}\n  \\caption{Weighted edit distance between $t=\\texttt{agtcag}$ a $p=\\texttt{gatca}$.}\n\\end{table}\n\n\\clearpage\n\\section{Needleman--Wunsch}\n\n\\begin{equation*}\n  \\qquad\\operatorname{d}_{t,p}(i,j) =\n  \\begin{cases}\n    0 & \\text{ if } i=0 \\land j = 0,\\\\\n    i \\cdot c & \\text{ if } j=0 \\land i > 0, \\\\\n    j \\cdot c & \\text{ if } i=0 \\land j > 0, \\\\\n    \\max\n      \\begin{cases}\n        \\operatorname{d}_{t,p}(i-1,j) + c \\\\\n        \\operatorname{d}_{t,p}(i,j-1) + c \\\\\n        \\operatorname{d}_{t,p}(i-1,j-1) + S(p_j,t_i)\n      \\end{cases} & \\text{ otherwise.}\n  \\end{cases}\n\\end{equation*}\n\n\n\\begin{table}\n  \\begin{center}\n    \\begin{tabular}{cc}\n      \\begin{tabular}[t]{c|cccc}\n        S   &\\texttt{a} &\\texttt{g} &\\texttt{c} &\\texttt{t}\\\\\\hline\n        \\texttt{a}  &10   &-1   &-3   &-4\\\\\n        \\texttt{g}  &-1   &7    &-5   &-3\\\\\n        \\texttt{c}  &-3   &-5   &9    &0\\\\\n        \\texttt{t}  &-4   &-3   &0    &8\\\\\\hline\n        c  & -5 \\\\\n      \\end{tabular}\n      \\quad\n      \\begin{tabular}[t]{c|ccccccc}\n        DP          &\\texttt{-}\t\t&\\texttt{a}\t\t&\\texttt{g}\t\t&\\texttt{t}\t\t&\\texttt{c}\t\t&\\texttt{a}\t\t&\\texttt{g}\\\\\n        \\hline\n        \\texttt{-}\t&\\tm{l10}  0\\tm{r10}  &\\tm{l11} -5\\tm{r11}\t&\\tm{l12}-10\\tm{r12}    &\\tm{l13}-15\\tm{r13}    &\\tm{l14}-20\\tm{r14}\t&\\tm{l15}-25\\tm{r15}\t&\\tm{l16}-30\\tm{r16}\\\\[2px]\n        \\texttt{g}\t&\\tm{l20} -5\\tm{r20}  &\\tm{l21} -1\\tm{r21}\t&\\tm{l22}  2\\tm{r22}\t&\\tm{l23} -3\\tm{r23}\t&\\tm{l24} -8\\tm{r24}\t&\\tm{l25}-13\\tm{r25}\t&\\tm{l26}-18\\tm{r26}\\\\[2px]\n        \\texttt{a}\t&\\tm{l30}-10\\tm{r30}  &\\tm{l31}  5\\tm{r31}\t&\\tm{l32}  0\\tm{r32}\t&\\tm{l33} -2\\tm{r33}\t&\\tm{l34} -6\\tm{r34}\t&\\tm{l35}  2\\tm{r35}\t&\\tm{l36} -3\\tm{r36}\\\\[2px]\n        \\texttt{t}\t&\\tm{l40}-15\\tm{r40}  &\\tm{l41}  0\\tm{r41}\t&\\tm{l42}  2\\tm{r42}\t&\\tm{l43}  8\\tm{r43}\t&\\tm{l44}  3\\tm{r44}\t&\\tm{l45} -2\\tm{r45}\t&\\tm{l46} -1\\tm{r46}\\\\[2px]\n        \\texttt{c}\t&\\tm{l50}-20\\tm{r50}  &\\tm{l51} -5\\tm{r51}\t&\\tm{l52} -3\\tm{r52}\t&\\tm{l53}  3\\tm{r53}\t&\\tm{l54}  17\\tm{r54}\t&\\tm{l55} 
\n\\clearpage\n\\section{Algorithm implementations}\n\\cpplistings{tut6/levensthein}{Implementation of the Levenshtein distance algorithm.}{{17-44}}\n  \\clistings{tut6/distance}{Implementation of the Levenshtein distance algorithm with backtracking.}{{23-61}}\n\\cpplistings{tut6/wed}{Implementation of the weighted edit distance algorithm.}{{89-134}}\n\\cpplistings{tut6/nw}{Implementation of the Needleman--Wunsch algorithm.}{{73-106}}\n", "meta": {"hexsha": "245a2c47d51f7df26fd7ec2aef2adf430b6e04b0", "size": 10476, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "evy/ch6.tex", "max_stars_repo_name": "exander77/handouts", "max_stars_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "evy/ch6.tex", "max_issues_repo_name": "exander77/handouts", "max_issues_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "evy/ch6.tex", "max_forks_repo_name": "exander77/handouts", "max_forks_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8481675393, "max_line_length": 179, "alphanum_fraction": 0.5361779305, "num_tokens": 5243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916240341031, "lm_q2_score": 0.6859494485880928, "lm_q1q2_score": 0.5922430084217709}}
{"text": "%---------------------------------------------------------------------\n\\section{Probability Calculation}\n\\label{full}\n\n\\subsection{Differential Cross Section at the Parton Level}\n\nThe matrix element analysis technique reconstructs each event to the\nfinal state four-vectors to evaluate the signal and background leading\norder matrix element. The following sections derive the signal and\nbackground probabilities starting from the final state at the parton\nlevel and then relating these objects to the physical quantities\nmeasured in the detector. The following also assumes a lepton,\nneutrino, and two quarks in the final state.\n\nThe probability density for a process to occur at a hadron-hadron\ncollider is given as an integral of the hard-scatter differential\ncross section over all possible ways of producing the process from the\nquarks and gluons inside the hadron. This probability density, shown\nbelow, is a convolution of the hard scatter differential cross section\nwith a parton distribution function for each of the two partons from\nthe hadrons with an integral over all possible momentum fractions\n$x_{i}, x_{j}$ from each initial parton.\n\\begin{equation}\n{\\cal P}(\\vec{y}) = \\frac{1}{\\sigma} \\sum_{i,j}\n\\int f_{i}(q_{1}, Q^{2})dq_{1}\n\\times f_{j}(q_{2}, Q^{2})dq_{2}\n\\times d\\sigma_{hs,ij}(\\vec{y})\n\\end{equation}\n\\noindent where the normalization constant $\\sigma$ is defined as\nintegral of the differential cross section over the initial- and\nfinal-state phase spac:\n\\begin{equation}\n\\label{ds}\n\\sigma = \\int  \\sum_{i,j}\n\\int f_{i}(q_{1}, Q^{2})dq_{1}\n\\times f_{j}(q_{2}, Q^{2})dq_{2}\n\\times \\pderiv{\\sigma_{hs,ij}(\\vec{y})}{\\vec{y}} d\\vec{y}\n\\end{equation}\n\\noindent and finally, the hard-scatter differential cross section is\ndefined as the product of the final state phase space factor, the\nsquare of the matrix element amplitude and an overall flux factor:\n\\begin{equation}\nd\\sigma_{hs} = \\frac{(2\\pi)^4}{4}\n\\frac{{|\\cal M|}^{2}}\n{\\sqrt{(q_{1}q_{2})^2 - m_{1}^2 m_{2}^2}}\n\\frac{d^{3}p_{1}}{(2\\pi)^3 2E_{1}}\n\\frac{d^{3}p_{2}}{(2\\pi)^3 2E_{2}}\n\\frac{d^{3}p_{\\ell}}{(2\\pi)^3 2E_{\\ell}}\n\\frac{d^{3}p_{\\nu}}{(2\\pi)^3 2E_{\\nu}}\n\\delta^{4}(q_{1}q_{2};p_{1},p_{2},p_{\\ell},p_{\\nu})\n\\end{equation}\n\n\n\\subsection{Evaluating the Hard Scatter Differential Cross Section}\n\nThe following section evaluates the differential cross section shown\nin Eq.~\\ref{ds} given a set of inital and final state four-vectors.\n\nThe first assumption made is that all collisions occur along the beam\naxis with no net transverse momentum. This means the initial state\nfour vectors can be written as\n\\begin{equation}\nq_{1} = ( E_{beam} x_{1}, 0, 0, E_{beam} x_{1} )\n\\end{equation}\n\\begin{equation}\nq_{2} = ( E_{beam} x_{2}, 0, 0, -E_{beam} x_{2} )\n\\end{equation}\n\\noindent The next assumption is that all particle masses are known\nand are negligible compared to their energies and thus can be ignored\nfor this calculation. 
The flux factor (shown below) in the hard\nscatter cross section can now be written in terms of the two momentum\nfractions of the incoming partons:\n\\begin{equation}\n\\frac{1}{\\sqrt{(q_{1}q_{2})^2 - m_{1}^2m_{2}^2}} \\rightarrow\n\\frac{1}{\\sqrt{(q_{1}q_{2})^2}} \\rightarrow\n\\frac{1}{2E_{beam}x_{1}x_{2}}\n\\end{equation}\n\\noindent For the remainder of the note, the following notation will\nbe used to distinguish quarks, leptons, and neutrinos: $p_{\\ell}$ is\nthe momentum of the lepton, $p_{1,2}$ is the momentum of the first and\nsecond final state partons, and $p_{\\nu}$ is the neutrino\nmomentum. Because the phase space is written in terms of rectangular\ncoordinates, the next step towards the final differential cross\nsection equation is to redefine the phase space factors in terms of\nspherical coordinates. This is done for all final state particles\nexcept the neutrino, for reasons that will become clear later in the\ndocument.\n\\begin{eqnarray}\nd\\Phi_{4} = \n\\frac{d^{3}p_{1}}{(2\\pi)^3 2E_{1}}\n\\frac{d^{3}p_{2}}{(2\\pi)^3 2E_{2}}\n\\frac{d^{3}p_{\\ell}}{(2\\pi)^3 2E_{\\ell}}\n\\frac{d^{3}p_{\\nu}}{(2\\pi)^3 2E_{\\nu}} \\delta^{4} \n\\rightarrow \\\\\n\\frac{1}{16(2\\pi)^{12}}\n\\frac{|p_{1}|^{2}d|p_{1}|d\\Omega_{1}}{E_{1}}\n\\frac{|p_{2}|^{2}d|p_{2}|d\\Omega_{2}}{E_{2}}\n\\frac{|p_{\\ell}|^{2}d|p_{\\ell}|d\\Omega_{\\ell}}{E_{\\ell}}\n\\frac{dp^{x}_{\\nu}\\, dp^{y}_{\\nu}\\, dp^{z}_{\\nu}}{E_{\\nu}}\\delta^{4} \n\\end{eqnarray}\n\\noindent To summarize, the full hard scatter differential cross\nsection is now\n\\begin{equation}\n\\label{dhs}\nd\\sigma_{hs} = \\frac{1}{128(2\\pi)^{8}E_{beam}}\n\\frac{{|\\cal M|}^{2}}{2x_{1}x_{2}}\n\\frac{|p_{1}|^{2}d|p_{1}|d\\Omega_{1}}{E_{1}}\n\\frac{|p_{2}|^{2}d|p_{2}|d\\Omega_{2}}{E_{2}}\n\\frac{|p_{\\ell}|^{2}d|p_{\\ell}|d\\Omega_{\\ell}}{E_{\\ell}}\n\\frac{dp^{x}_{\\nu}\\, dp^{y}_{\\nu}\\, dp^{z}_{\\nu}}{E_{\\nu}} \\delta^{4} \n\\end{equation} \n\n\n\\subsection{Evaluating the Hadron-Hadron Differential Cross Section}\n\nThe next step to writing the full hadron-hadron differential cross\nsection is to rewrite Eq.~\\ref{dhs} such that any integration over the\nphase space will remove the four-dimensional delta function required\nfor energy and momentum conservation. The delta function is currently\nwritten such that it will vanish only over integrations of total\n$p_{x}, p_{y}, p_{z}$, and $E$. 
Because there was an original\nassumption of no net transverse momentum in the collision, the total\n$p_{x}$ and $p_{y}$ constraints can be solved for the neutrino transverse\nmomentum.\n\\begin{eqnarray}\n\\sum_{i}P^{x}_{i} =\np^{x}_{1} + p^{x}_{2} + p^{x}_{\\ell} + p^{x}_{\\nu} = 0\n\\rightarrow p^{x}_{\\nu} =\n- p^{x}_{1} - p^{x}_{2} - p^{x}_{\\ell} \\\\\n\\sum_{i}P^{y}_{i} =\np^{y}_{1} + p^{y}_{2} + p^{y}_{\\ell} + p^{y}_{\\nu} = 0\n\\rightarrow p^{y}_{\\nu} =\n- p^{y}_{1} - p^{y}_{2} - p^{y}_{\\ell}\n\\end{eqnarray}\n\\noindent The total $p_{z}$ requirement can be rewritten in terms\nof the initial partons' momentum fractions and the other final state\nparticles' $z$ momenta, thus solving for the neutrino $p_{z}$:\n\\begin{eqnarray}\n\\nonumber\n\\sum_{i}P^{z}_{i} =\np^{z}_{1} + p^{z}_{2} + p^{z}_{\\ell} + p^{z}_{\\nu} -\nE_{beam}x_{1} + E_{beam}x_{2} = 0 \\rightarrow \\\\\np^{z}_{\\nu} =\nE_{beam}(x_{1} - x_{2}) - p^{z}_{1} - p^{z}_{2} - p^{z}_{\\ell}\n\\end{eqnarray}\n\\noindent Finally, the total energy delta function implies the\nfollowing:\n\\begin{equation}\nE_{beam}x_{1} + E_{beam}x_{2} = E_{1} + E_{2} + E_{\\ell} + E_{\\nu}\n\\end{equation}\n\nAt this point, it is useful to rewrite the full differential cross\nsection at the parton level:\n\\begin{eqnarray}\n\\nonumber\nd\\sigma(\\vec{y}) = \\sum_{i,j} \\int f_{i}(x_{1}, Q^{2})dx_{1}\n\\times f_{j}(x_{2}, Q^{2})dx_{2}\n\\times \\frac{1}{128(2\\pi)^{8}E_{beam}}\n\\frac{{|\\cal M|}^{2}}{2x_{1}x_{2}} \\times \\\\\n\\nonumber\n\\frac{|p_{1}|^{2}d|p_{1}|d\\Omega_{1}}{E_{1}}\n\\frac{|p_{2}|^{2}d|p_{2}|d\\Omega_{2}}{E_{2}}\n\\frac{|p_{\\ell}|^{2}d|p_{\\ell}|d\\Omega_{\\ell}}{E_{\\ell}}\n\\frac{dp^{x}_{\\nu}\\, dp^{y}_{\\nu}\\, dp^{z}_{\\nu}}{E_{\\nu}} \\times \\\\\n\\nonumber\n\\delta(p^{x}_{\\nu} + p^{x}_{1} + p^{x}_{2} + p^{x}_{\\ell}) \\times \\\\\n\\nonumber\n\\delta(p^{y}_{\\nu} + p^{y}_{1} + p^{y}_{2} + p^{y}_{\\ell}) \\times \\\\\n\\nonumber\n\\delta(p^{z}_{\\nu} - E_{beam}(x_{1} - x_{2}) + p^{z}_{1}\n+ p^{z}_{2} + p^{z}_{\\ell}) \\times \\\\\n\\delta(E_{beam}x_{1} + E_{beam}x_{2} - E_{1} - E_{2}\n- E_{\\ell} - E_{\\nu})\n\\end{eqnarray}\n\nThe next step is to rewrite the integration variables, $x_{1}$ and\n$x_{2}$, in terms of the total energy and total $p_{z}$:\n\\begin{eqnarray}\n\\label{x1}\nx_{1} = \\frac{E_{tot} + p^{z}_{tot}}{2E_{beam}} \\\\\n\\label{x2}\nx_{2} = \\frac{E_{tot} - p^{z}_{tot}}{2E_{beam}}\n\\end{eqnarray}\n\\noindent Now, the integration over $x_{1}$ and $x_{2}$ can be\nrewritten in terms of $E_{tot}$ and $p^{z}_{tot}$:\n\\begin{eqnarray}\ndx_{1}dx_{2} =\n\\frac{1}{J(x_{1},x_{2};E_{tot},p^{z}_{tot})}dE_{tot}dp^{z}_{tot} \\\\\nJ(x_{1},x_{2};E_{tot},p^{z}_{tot}) = 2E^{2}_{beam}\n\\end{eqnarray}\n\\noindent At this point the integration over the total energy and\n$p_{z}$ will constrain the two incoming partons' momentum fractions\nthrough Eq.~\\ref{x1} and \\ref{x2}.\n\nThe full differential cross section at the parton level can now be\nwritten as\n\\begin{eqnarray}\n\\nonumber\nd\\sigma(\\vec{y}) = \\sum_{i,j} \\int f_{i}(x_{1}, Q^{2})\n\\times \nf_{j}(x_{2}, Q^{2}) \\times \\frac{1}{128(2\\pi)^{8}E_{beam}}\n\\frac{{|\\cal M|}^{2}}{2x_{1}x_{2}} \\times \\\\\n\\nonumber\n\\frac{|p_{1}|^{2}d|p_{1}|d\\Omega_{1}}{E_{1}}\n\\frac{|p_{2}|^{2}d|p_{2}|d\\Omega_{2}}{E_{2}}\n\\frac{|p_{\\ell}|^{2}d|p_{\\ell}|d\\Omega_{\\ell}}{E_{\\ell}}\n\\frac{dp^{x}_{\\nu}\\, dp^{y}_{\\nu}\\, dp^{z}_{\\nu}}{E_{\\nu}} \\times \\\\\n\\int \\frac{1}{2E^{2}_{beam}} dp^{z}_{tot}\n\\end{eqnarray}\n\\noindent where the implicit integration over the four-dimensional\ndelta function yields the following formulas for the neutrino four-vector\nand the incoming partons' momentum fractions in terms of the\nremaining differential variables.\n\\begin{eqnarray}\np^{x}_{\\nu} = - p^{x}_{1} - p^{x}_{2} - p^{x}_{\\ell} \\\\\np^{y}_{\\nu} = - p^{y}_{1} - p^{y}_{2} - p^{y}_{\\ell} \\\\\np^{z}_{\\nu} = p^{z}_{tot} - p^{z}_{1} - p^{z}_{2} - p^{z}_{\\ell} \\\\\nx_{1} = \\frac{E_{1} + E_{2} + E_{\\ell} + E_{\\nu} + p^{z}_{tot}}\n{2E_{beam}} \\\\\nx_{2} = \\frac{E_{1} + E_{2} + E_{\\ell} + E_{\\nu} - p^{z}_{tot}}\n{2E_{beam}}\n\\end{eqnarray}\n
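\nAs an aside, a short numerical sketch (ours, with made-up four-vectors\nand the Tevatron beam energy as an example value) of how these formulas\nfix the neutrino momentum and the momentum fractions inside the integrand:\n\\begin{verbatim}\nimport math\n\nE_beam = 980.0  # GeV per beam; illustrative choice, not from the note\n\n# Visible final-state four-vectors (E, px, py, pz), made-up numbers:\nvis = [(60.0, 30.0, 20.0, 40.0),   # parton 1\n       (55.0, -25.0, 10.0, 35.0),  # parton 2\n       (45.0, -10.0, -20.0, 30.0)] # lepton\npz_tot = 10.0                      # integration variable\n\n# Transverse balance fixes the neutrino px, py; pz follows from pz_tot:\npx_nu = -sum(v[1] for v in vis)\npy_nu = -sum(v[2] for v in vis)\npz_nu = pz_tot - sum(v[3] for v in vis)\nE_nu = math.sqrt(px_nu**2 + py_nu**2 + pz_nu**2)  # massless neutrino\n\nE_tot = sum(v[0] for v in vis) + E_nu\nx1 = (E_tot + pz_tot) / (2 * E_beam)\nx2 = (E_tot - pz_tot) / (2 * E_beam)\nprint(x1, x2)  # momentum fractions handed to the PDFs\n\\end{verbatim}\n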
\n\\subsection{Relating Reconstructed Objects to Partons}\n\nThe previous sections have calculated the differential cross section\nfor a hadron-hadron collision producing a lepton, neutrino, and two\npartons in the final state. These particles are not exactly what is\nmeasured in the detector and thus it is necessary to relate the two\nsets of quantities. To do this, the differential cross section is\nconvolved with a function, $W(\\vec{x}, \\vec{y})$, which is the\nprobability of the final state, $\\vec{y}$, producing the observed\nstate, $\\vec{x}$, in the detector. The resulting differential cross\nsection is then integrated over the final state phase space,\n$d\\vec{y}$:\n\\begin{equation}\n\\pderiv{\\sigma^{'}(\\vec{x})}{\\vec{x}} =\n\\int \\pderiv{\\sigma(\\vec{y})}{\\vec{y}} W(\\vec{x}, \\vec{y}) d\\vec{y}\n\\end{equation}\n\\noindent where the function $W(\\vec{x}, \\vec{y})$ is assumed to be\nfactorizable for each measured object:\n\\begin{equation}\nW(\\vec{x}, \\vec{y}) = \\prod_{i} W_{i}(\\vec{x_{i}}, \\vec{y_{i}})\n\\end{equation}\n\n\n\\subsubsection{Jets}\n\nThe transfer function for jets measured in the calorimeter is assumed\nto be a function only of the energy difference between the\ntwo objects, and all angles are assumed to be well measured:\n\\begin{equation}\nW_{jet}(\\vec{x}_{jet}, \\vec{y}_{parton}) = W(E_{jet} -\nE_{parton}) \\times \\delta(\\Omega_{jet} - \\Omega_{parton})\n\\end{equation}\n\\noindent where $W(E_{jet} - E_{parton})$ is parametrized using the\nfollowing functional form:\n\\begin{equation}\nW(E_{jet} - E_{parton}) = \\frac{e^{-\\frac{(E_{jet}\n- E_{parton} - p_{1})^{2}}{2p^{2}_{2}}} + p_{3}e^{-\\frac{(E_{jet}\n- E_{parton} - p_{4})^{2}}{2p^{2}_{5}}}}{2\\pi(p_{2} + p_{3}p_{5})}\n\\end{equation}\n\\noindent where $p_{i} = \\alpha_{i} + \\beta_{i} \\times E_{parton}$.\nThe five $\\alpha$ and five $\\beta$ parameters are determined by\nminimizing a likelihood formed from the parton energies and the\nmatched jet energies in Monte Carlo. The parameters\nused for this analysis were determined in several regions of the\ncalorimeter to account for the resolution differences in the detector.\n
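\nA numerical sketch of this double-Gaussian parametrization follows; the\n$\\alpha$ and $\\beta$ values below are placeholders of ours, not the fitted\nparameters:\n\\begin{verbatim}\nimport math\n\n# Placeholder alpha/beta pairs for p1..p5; the real values come from\n# the Monte Carlo likelihood fit described above.\nalpha = [0.5, 4.0, 0.05, 10.0, 12.0]\nbeta = [0.01, 0.05, 0.0005, 0.02, 0.04]\n\ndef W_jet(E_jet, E_parton):\n    p = [a + b * E_parton for a, b in zip(alpha, beta)]\n    d = E_jet - E_parton\n    g1 = math.exp(-(d - p[0])**2 / (2 * p[1]**2))\n    g2 = math.exp(-(d - p[3])**2 / (2 * p[4]**2))\n    return (g1 + p[2] * g2) / (2 * math.pi * (p[1] + p[2] * p[4]))\n\nprint(W_jet(95.0, 100.0))  # density for a 100 GeV parton seen at 95 GeV\n\\end{verbatim}\n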
\n\\subsubsection{Electrons}\n\nThe transfer function for electrons~\\cite{ElectronTF} is assumed to be\nsolely a function of the reconstructed energy of the electron,\n$E_{reco}$, the parton energy of the electron, $E_{parton}$, and\n$\\theta$, the production angle with respect to the beam axis:\n\\begin{equation}\nW_{electron}(\\vec{x}_{reco}, \\vec{y}_{parton}) = W(E_{reco},\nE_{parton}, \\theta) \\times \\delta(\\Omega_{reco} - \\Omega_{parton})\n\\end{equation}\n\\noindent where $W(E_{reco}, E_{parton}, \\theta)$ is parametrized\nusing the following functional form:\n\\begin{eqnarray}\nW(E_{reco}, E_{parton}, \\theta) & = &\n\\frac{1}{2\\pi\\sigma}\\mathrm{exp}\n[-\\frac{(E_{reco} - E_{center})^{2}}{2\\sigma^{2}}]\\\\\nE_{center} & = &\n1.0002 E_{parton} + 0.324\\,\\mathrm{GeV} \\\\\n\\sigma & = &\n0.028 E_{center} \\oplus \\textrm{Sampling}(E_{center},\n\\eta) E_{center} \\oplus 0.4 \\\\\n\\textrm{Sampling}(E, \\theta) & = &\n\\left[\\frac{0.164}{\\sqrt{E}} + \\frac{0.122}{E}\\right]\n\\textrm{exp}\\left[\\frac{\\textrm{p1}(E)}\n{\\textrm{sin}\\theta}-\\textrm{p1}(E)\\right] \\\\\n\\textrm{p1}(E)& = & 1.35193 - \\frac{2.09564}{E} - \\frac{6.98578}{E^2}.\n\\end{eqnarray}\n\n\n\\subsubsection{Muons}\n\nThe transfer function for muons~\\cite{MuonTF} is assumed to be a\nfunction of\n\\begin{equation}\n\\Delta\\left( \\frac{q}{p_t}\\right) =\n\\left( \\frac{q}{p_t}\\right)_{reco} -\n\\left( \\frac{q}{p_t}\\right)_{parton}\n\\end{equation}\n\\noindent and of $\\eta_\\mathrm{CFT}$,\n\\begin{equation}\nW_{muon}(\\vec{x}_{reco}, \\vec{y}_{parton}) =\nW\\left(\\Delta\\left( \\frac{q}{p_t}\\right),\n\\eta_\\mathrm{CFT}\\right) \\times \n\\delta(\\Omega_{reco} - \\Omega_{parton})\n\\end{equation}\n\\noindent where $W\\left(\\Delta\\left( \\frac{q}{p_t}\\right),\n\\eta_\\mathrm{CFT}\\right)$ is parametrized using a single Gaussian:\n\\begin{equation}\nW\\left(\\Delta\\left( \\frac{q}{p_t}\\right), \\eta_\\mathrm{CFT}\\right) = \n\\frac{1}{2\\pi\\sigma}\\mathrm{exp}\n\\left\\{-\\frac{\\left[\\Delta\n\\left( \\frac{q}{p_t} \\right)\\right]^2}\n{2\\sigma^2}\\right\\}\n\\end{equation}\n\\begin{equation}\n\\sigma  =  \\left\\{ \n\\begin{array} {c@{\\quad:\\quad}l} \\sigma_o &\n|\\eta_\\mathrm{CFT}| \\le \\eta_o \\\\\n\\sqrt{\\sigma^2_o + [c(|\\eta_\\mathrm{CFT}| - \\eta_o)]^2} &\n|\\eta_\\mathrm{CFT}| > \\eta_o \n\\end{array} \\right.\n\\end{equation}\n\n\\noindent There are three fitted parameters in the above equations:\n$\\sigma_o$, $c$, and $\\eta_o$, each of which is actually fitted by two\nsub-parameters:\n\\begin{equation}\npar = par(0) + par(1) \\cdot 1/p_t.\n\\end{equation}\n\\noindent Furthermore, these parameters are derived for four classes\nof events: those from before or after the 2004 shutdown, when the\nmagnetic field strength changed, and in each run range, those\nthat have an SMT hit and those that do not.\n\nAs a simplification, we assume $q_{reco} = q_{parton}$; that is, we\ndo not consider charge misidentification.\n\n\n\\subsection{Full Differential Cross Section and Normalization}\n\nThe full differential cross section at the detector object level can\nnow be written as\n
\\begin{eqnarray}\n\\label{fullds}\n\\nonumber\n\\pderiv{\\sigma^{'}(\\vec{x})}{\\vec{x}} =\n\\int dp^{z}_{tot}dq_{1}dq_{2}dp_{\\ell} \\sum_{i,j}\nf_{i}(q_{1}, Q^{2}) \\times f_{j}(q_{2}, Q^{2}) \\\\\n\\times \\frac{1}{256(2\\pi)^{8}E^{3}_{beam}}\n\\frac{{|\\cal M|}^{2}}{2x_{1}x_{2}} \\times\n\\frac{p_{1}^{2}}{E_{1}} \\frac{p_{2}^{2}}{E_{2}} \n\\frac{p_{\\ell}^{2}}{E_{\\ell}} \\frac{1}{E_{\\nu}}\n\\times W_{Lepton}W_{Jet1}W_{Jet2}\n\\end{eqnarray}\n\\noindent The final step to evaluating the probability density is to\nproperly normalize the differential cross section in\nEq.~\\ref{fullds}. This is done by integration of the differential\ncross section over all possible states in the detector. Since the\nevent selection cuts will change the number of events due to acceptance\nlosses, this must be accounted for in the overall normalization (cross\nsection) calculation. The total cross section is then written as\n\\begin{eqnarray}\n\\label{norm}\n\\nonumber\n\\sigma = \\int \\pderiv{\\sigma^{'}(\\vec{x})}{\\vec{x}} d\\vec{x} =\n\\int d\\vec{x}dp^{z}_{tot}dq_{1}dq_{2}dp_{\\ell} \\sum_{i,j}\nf_{i}(q_{1}, Q^{2}) \\times f_{j}(q_{2}, Q^{2}) \\\\\n\\times \\frac{1}{256(2\\pi)^{8}E^{3}_{beam}}\n\\frac{{|\\cal M|}^{2}}{2x_{1}x_{2}} \\times\n\\frac{p_{1}^{2}}{E_{1}} \\frac{p_{2}^{2}}{E_{2}} \n\\frac{p_{\\ell}^{2}}{E_{\\ell}} \\frac{1}{E_{\\nu}}\n\\times W_{Lepton}W_{Jet1}W_{Jet2} \\times \\Theta_{\\rm{Cuts}}(\\vec{x})\n\\end{eqnarray}\n\n", "meta": {"hexsha": "af215049c33a73b80e85b4cb1def7e6a59703902", "size": 15533, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MEnote/FullDerivation.tex", "max_stars_repo_name": "tgadf/thesis", "max_stars_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MEnote/FullDerivation.tex", "max_issues_repo_name": "tgadf/thesis", "max_issues_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MEnote/FullDerivation.tex", "max_forks_repo_name": "tgadf/thesis", "max_forks_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.0335051546, "max_line_length": 110, "alphanum_fraction": 0.6695422649, "num_tokens": 5737, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8807970779778824, "lm_q2_score": 0.6723316860482763, "lm_q1q2_score": 0.5921877845032648}}
{"text": "\\chapter{Polyhedral Sets and Polyhedrally Constrained Languages}\\label{chp:polyhedral}\n\nIn this chapter we introduce the family of \\emph{polyhedrally constrained languages} which we use in the proof of \\cref{thm:geodesic-growth}.\nThis family of languages is a generalisation of linearly constrained languages, as in \\cref{defn:linearly-constrained-language}.\nIt is a result of \\textcite{massazza1993} that the generating function of linearly constrained languages is holonomic (see~\\cref{prop:lcl-is-holonomic}).\nIn \\cref{prop:polyhedrally-constrained-is-holonomic}, we show that the family of polyhedrally constrained languages also have holonomic (multivariate) generating functions.\n\n\\Textcite{benson1983} introduced the concept of \\emph{polyhedral sets} to compute the volume growth of virtually abelian groups, in particular, for each virtually abelian group \\citeauthor{benson1983} constructed a polyhedral set whose (volume) generating function is the volume growth series of the group.\nIn \\cref{sec:virtually-abelian-groups} we apply a similar argument to construct a \\emph{polyhedrally constrained language} whose generating function is the geodesic growth series of a virtually abelian group.\nWe define and study the class of polyhedral sets in \\cref{sec:polyhedral-sets}, and the family of polyhedrally constrained languages in \\cref{sec:polyhedrally-constrained-languages}.\n\n\\section{Polyhedral Sets}\\label{sec:polyhedral-sets}\n\nA \\emph{polyhedral set}, as we see in \\cref{defn:polyhedral-set}, is a subset of $\\mathbb{Z}^n$ that encodes the integer solutions to finitely many systems of linear equations, inequalities and congruences.\nThe class of polyhedral sets is closed under Boolean expressions, Cartesian products, and (inverse) mapping by integer affine transformation (see \\cref{prop:affine-transforms-of-polyhedral-sets,prop:closure-properties-of-polyhedral-sets}).\nThe class of polyhedral sets and their closure properties are essential to our study of the language of geodesics for virtually abelian groups in \\cref{sec:virtually-abelian-groups}.\n\n\\begin{definition}\\label{defn:polyhedral-set}\nA subset $\\mathcal{E} \\subseteq \\mathbb{Z}^m$ is called an \\emph{elementary region} if it can be expressed as\n\\[\n\t\\left\\{\n\t\tz \\in \\mathbb{Z}^m\n\t\\, \\middle\\vert \\,\n\t\ta\\cdot z = b\n\t\\right\\},\n\t%\n\t\\ \n\t%\n\t\\left\\{\n\t\tz \\in \\mathbb{Z}^m\n\t\\, \\middle\\vert \\,\n\t\ta\\cdot z > b\n\t\\right\\}\n\t%\n\t\\text{ or } \n\t%\n\t\\left\\{\n\t\tz \\in \\mathbb{Z}^m\n\t\\, \\middle\\vert \\,\n\t\ta\\cdot z \\equiv b\\ (\\bmod\\ c)\n\t\\right\\}\n\\]\nfor some $a \\in \\mathbb{Z}^m$ and $b,c\\in \\mathbb{Z}$ with $c > 0$.\nA \\emph{basic polyhedral set} is a finite intersection of elementary regions;\nand a \\emph{polyhedral set} is a finite disjoint union of basic polyhedral sets.\n\\end{definition}\n\nFrom this definition we see that $\\emptyset$ and $\\mathbb{Z}^m$ are elementary regions, and that $\\mathbb{N}^m$ is a basic polyhedral set.\nIn \\cref{prop:closure-properties-of-polyhedral-sets} we see that the class of polyhedral sets is closed under Boolean expressions and Cartesian products.\n\n\\begin{proposition}[Proposition~13.1~and~Remark~13.2~in~\\cite{benson1983}]\\label{prop:closure-properties-of-polyhedral-sets}\n\tThe class of polyhedral subsets of $\\mathbb{Z}^m$ is{\\tiny } closed under finite union, finite intersection and set difference.\n\tMoreover, the class of polyhedral sets is closed under Cartesian 
From this definition we see that $\\emptyset$ and $\\mathbb{Z}^m$ are elementary regions, and that $\\mathbb{N}^m$ is a basic polyhedral set.\nIn \\cref{prop:closure-properties-of-polyhedral-sets} we see that the class of polyhedral sets is closed under Boolean expressions and Cartesian products.\n\n\\begin{proposition}[Proposition~13.1~and~Remark~13.2~in~\\cite{benson1983}]\\label{prop:closure-properties-of-polyhedral-sets}\n\tThe class of polyhedral subsets of $\\mathbb{Z}^m$ is closed under finite union, finite intersection and set difference.\n\tMoreover, the class of polyhedral sets is closed under Cartesian product.\n\\end{proposition}\n\nA map $E \\colon \\mathbb{Z}^m \\to \\mathbb{Z}^n$ is an \\emph{integer affine transform} if it can be written as $E(v) = vA + b$ where $A \\in \\mathbb{Z}^{m \\times n}$ is a matrix and $b \\in \\mathbb{Z}^n$ is a vector.\nIn \\cref{prop:affine-transforms-of-polyhedral-sets} we see that the class of polyhedral sets is closed under (inverse) mapping by integer affine transformations.\n\n\\begin{proposition}[Propositions~13.7~and~13.8~in~\\cite{benson1983}]\\label{prop:affine-transforms-of-polyhedral-sets}\n\tSuppose that $\\mathcal{P} \\subseteq \\mathbb{Z}^m$ and $\\mathcal{Q} \\subseteq \\mathbb{Z}^n$ are polyhedral sets, and $E\\colon\\mathbb{Z}^m\\to\\mathbb{Z}^n$ is an integer affine transform.\n\tThen, $E(\\mathcal{P})$ and $E^{-1}(\\mathcal{Q})$ are both polyhedral sets.\n\\end{proposition}\n\nNotice that our definition of $n$-atoms and $n$-constraints in \\cref{defn:n-constraints} is similar to that of elementary regions and polyhedral sets, respectively, without modular arithmetic.\nFrom the closure properties in \\cref{prop:closure-properties-of-polyhedral-sets} we see that $n$-constraints form a subclass of the polyhedral subsets of $\\mathbb{Z}^n$.\nIt can be verified by the reader that, for each $n \\geqslant 1$,\n\\[\n\t\\{\n\t\t(x,0,0,\\ldots,0) \\in \\mathbb{Z}^n\n\t\\mid\n\t\tx \\equiv 0\\ (\\bmod\\ 2)\n\t\\}\n\\]\nis a polyhedral set but not an $n$-constraint, and thus the class of $n$-constraints forms a proper subclass of the polyhedral subsets of $\\mathbb{Z}^n$.\nIn the following section, we generalise the family of linearly constrained languages, defined in \\cref{defn:linearly-constrained-language}, to the family of \\emph{polyhedrally constrained languages} which we use in the proof of \\cref{thm:geodesic-growth}.\n\n\\section{Polyhedrally Constrained Languages}\\label{sec:polyhedrally-constrained-languages}\n\nIn \\cref{sec:constrained-languages}, we saw that a constrained language, $L(U,\\mathcal{C})$, is the intersection of an unambiguous context-free language $U \\subseteq \\Sigma^*$ and the set of words whose Parikh images belong to a set $\\mathcal{C} \\subseteq \\mathbb{Z}^{|\\Sigma|}$.\nMoreover, we defined the family of linearly constrained languages in \\cref{defn:linearly-constrained-language} as the constrained languages where $\\mathcal{C}$ is a $|\\Sigma|$-constraint.\nIn this section we generalise this definition to the family of \\emph{polyhedrally constrained languages} as follows.\n\n\\begin{definition}\\label{defn:polyhedrally-constrained-language}\n\tA language is \\emph{polyhedrally constrained} if it can be written as\n\t\\[\n\t\tL(U,\\mathcal{P})\n\t\t=\n\t\t\\left\\{\n\t\t\tw \\in U \\subseteq \\Sigma^*\n\t\t\\mid\n\t\t\t\\Parikh_\\Sigma(w) \\in \\mathcal{P}\n\t\t\\right\\}\n\t\\]\n\twhere $U$ is an unambiguous context-free language, and $\\mathcal{P}$ is a polyhedral set.\n\\end{definition}\n\n
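Concretely, deciding whether a word of $U$ belongs to $L(U,\\mathcal{P})$ only requires its Parikh image; a small Python sketch with an illustrative constraint of our own choosing:\n\\begin{verbatim}\nfrom collections import Counter\n\ndef parikh(w, alphabet):  # Parikh image: occurrence count per letter\n    c = Counter(w)\n    return tuple(c[a] for a in alphabet)\n\n# Illustrative polyhedral constraint on Z^2: #a = #b and #a + #b > 2\n# (a congruence condition would simply be one more conjunct):\nP = lambda v: v[0] == v[1] and v[0] + v[1] > 2\n\nU = ['ab', 'aabb', 'aaabbb', 'aab']  # stand-in for an unambiguous CFL\nL = [w for w in U if P(parikh(w, 'ab'))]\nprint(L)  # ['aabb', 'aaabbb']\n\\end{verbatim}\n\n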
In \\cref{sec:virtually-abelian-groups} we will not require the full power of polyhedrally constrained languages; in particular, we only require polyhedrally constrained languages $L(U,\\mathcal{P})$ where $U$ is a regular language.\nIt can then be shown that such languages form a subfamily of the \\emph{RCM languages} introduced by \\textcite{castiglione2017}; moreover, the authors showed that the single-variable generating functions of these languages are holonomic.\nIn \\cref{prop:polyhedrally-constrained-is-holonomic}, we show that the multivariate generating function of each polyhedrally constrained language is holonomic.\nWe make use of this characterisation in the proof of \\cref{thm:geodesic-growth}.\n\n\\begin{proposition}\\label{prop:polyhedrally-constrained-is-holonomic}\n\tThe multivariate generating function of a polyhedrally constrained language is holonomic.\n\\end{proposition}\n\n\\begin{proof}\t\n\tLet $L(U,\\mathcal{P}) \\subseteq \\Sigma^*$ be a polyhedrally constrained language.\n\tFrom the definition of polyhedral sets, we may decompose $\\mathcal{P} \\subseteq \\mathbb{Z}^{|\\Sigma|}$ into a union of finitely many disjoint basic polyhedral sets\n\t$\\mathcal{P} = \\bigcup_{i=1}^L \\mathcal{B}_i$.\n\tMoreover, each such basic polyhedral set $\\mathcal{B}_i \\subseteq \\mathbb{Z}^{|\\Sigma|}$ can be written as a finite intersection of elementary regions\n\t\\begin{multline*}\n\t\t\\mathcal{B}_i\n\t\t=\n\t\t\\bigcap_{j = 1}^{K_{i,1}}\n\t\t\\{\n\t\t\tv \\in \\mathbb{Z}^{|\\Sigma|}\n\t\t\\mid\n\t\t\t\\alpha_{i,j} \\cdot v = \\beta_{i,j}\n\t\t\\}\n\t\t\\cap\n\t\t\\bigcap_{j = 1}^{K_{i,2}}\n\t\t\\{\n\t\t\tv \\in \\mathbb{Z}^{|\\Sigma|}\n\t\t\\mid\n\t\t\t\\xi_{i,j} \\cdot v > \\lambda_{i,j}\n\t\t\\}\n\t\t\\\\\\cap\n\t\t\\bigcap_{j = 1}^{K_{i,3}}\n\t\t\\{\n\t\t\tv \\in \\mathbb{Z}^{|\\Sigma|}\n\t\t\\mid\n\t\t\t\\zeta_{i,j} \\cdot v \\equiv \\eta_{i,j}\\ (\\bmod\\ \\theta_{i,j})\n\t\t\\}\n\t\\end{multline*}\n\twhere each $\\alpha_{i,j},\\xi_{i,j},\\zeta_{i,j} \\in \\mathbb{Z}^{|\\Sigma|}$, each $\\beta_{i,j},\\lambda_{i,j}, \\eta_{i,j} \\in \\mathbb{Z}$, and $\\theta_{i,j} \\in \\mathbb{N}_+$.\n\t\n\tFrom the definition of constrained language we see that $L(U,\\mathcal{P})$ is the union of disjoint polyhedrally constrained languages $L(U,\\mathcal{B}_i)$.\n\tWe see that if each $L(U,\\mathcal{B}_i)$ has multivariate generating function $f_i(x_1,x_2,\\ldots,x_{|\\Sigma|})$, then the multivariate generating function for $L(U,\\mathcal{P})$ is given by\n\t\\[\n\t\tf(x_1,x_2,\\ldots,x_{|\\Sigma|})\n\t\t=\n\t\t\\sum_{i=1}^L f_i(x_1,x_2,\\ldots,x_{|\\Sigma|}).\n\t\\]\n\t\n\tFor each basic polyhedral set $\\mathcal{B}_i$, we introduce a $|\\Sigma|$-constraint\n\t\\[\n\t\t\\mathcal{C}_i\n\t\t=\n\t\t\\bigcap_{j = 1}^{K_{i,1}}\n\t\t\\{\n\t\t\tv \\in \\mathbb{Z}^{|\\Sigma|}\n\t\t\\mid\n\t\t\t\\alpha_{i,j} \\cdot v = \\beta_{i,j}\n\t\t\\}\n\t\t\\cap\n\t\t\\bigcap_{j = 1}^{K_{i,2}}\n\t\t\\{\n\t\t\tv \\in \\mathbb{Z}^{|\\Sigma|}\n\t\t\\mid\n\t\t\t\\xi_{i,j} \\cdot v > \\lambda_{i,j}\n\t\t\\},\n\t\\]\n\tand a monoid homomorphism $\\varphi_i \\colon \\Sigma^* \\to \\prod_{j=1}^{K_{i,3}} (\\mathbb{Z}/\\theta_{i,j}\\mathbb{Z})$ such that\n\t\\[\n\t\t\\varphi_i(w)\n\t\t=\n\t\t(\n\t\t\t\\zeta_{i,1} \\cdot \\Parikh_\\Sigma(w),\\,\n\t\t\t\\zeta_{i,2} \\cdot \\Parikh_\\Sigma(w),\\,\n\t\t\t\\ldots,\\,\n\t\t\t\\zeta_{i,K_{i,3}} \\cdot \\Parikh_\\Sigma(w)\n\t\t);\n\t\\]\n\tmoreover, we write $R_i \\subseteq \\Sigma^*$ for the inverse image\n\t\\[\n\t\tR_i =\n\t\t\\varphi_i^{-1}(\\{ (\\eta_{i,1},\\eta_{i,2},\\ldots,\\eta_{i,K_{i,3}}) \\}).\n\t\\]\n\t\n\tEach language $R_i \\subseteq \\Sigma^*$ is expressed as the inverse image of a subset of a finite monoid.\n\tFrom \\cite[Theorem~1]{rabin1959} we see that each $R_i$ is a regular language; in particular, for each $R_i$ we may construct a finite-state automaton with states given by the set $\\prod_{j=1}^{K_{i,3}} (\\mathbb{Z}/\\theta_{i,j}\\mathbb{Z})$, initial state given by $(0,\\ldots,0)$, an accepting state of $(\\eta_{i,1},\\eta_{i,2},\\ldots,\\eta_{i,K_{i,3}})$, and a transition $v \\to^\\sigma v'$ for each state $v$ and letter $\\sigma \\in \\Sigma$ where $v' = v + \\varphi_i(\\sigma)$.
\n\tMoreover, since the class of unambiguous context-free languages is closed under intersection with regular languages (see Theorem~6.4.1 on p.~197 of \\cite{harrison1978}), we see that each $L(U \\cap R_i,\\mathcal{C}_i) = L(U,\\mathcal{B}_i)$ is linearly constrained as in \\cref{defn:linearly-constrained-language}.\n\tFrom \\cref{prop:lcl-is-holonomic}, we see that each $f_i(x_1,x_2,\\ldots,x_{|\\Sigma|})$ is holonomic.\n\t\n\tFrom \\cref{lemma:holonomic-closure-properties}, holonomic functions are closed under addition, and thus the multivariate generating function of $L(U, \\mathcal{P})$ is holonomic.\n\\end{proof}\n", "meta": {"hexsha": "c7493bd28e15f0210a3b3262c881eabbf297a419", "size": 10522, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/04_Polyhedral.tex", "max_stars_repo_name": "alexbishop/phd-thesis", "max_stars_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/04_Polyhedral.tex", "max_issues_repo_name": "alexbishop/phd-thesis", "max_issues_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/04_Polyhedral.tex", "max_forks_repo_name": "alexbishop/phd-thesis", "max_forks_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.2673796791, "max_line_length": 474, "alphanum_fraction": 0.7222011025, "num_tokens": 3499, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7905303186696747, "lm_q1q2_score": 0.5921761621512944}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\marginpar{Wednesday\\\\ 2020-10-14, \\\\ compiled \\\\ \\today}\n\nWe were talking about Spontaneous Symmetry Breaking: how does it work in a cosmological context? \nSpecifically, we ask about its effect on cosmological phase transitions at high temperatures in the early universe. \n\nWe start out with a scalar field \\(\\varphi \\) with Lagrangian\\footnote{We can write it with partial derivatives instead of covariant ones since for a scalar \\(\\varphi \\) they are equal: \\(\\nabla_{\\mu } \\varphi = \\partial_{\\mu} \\varphi \\).}\n%\n\\begin{align}\n\\mathscr{L}_\\varphi = \n- \\frac{1}{2} g^{\\mu \\nu } \\partial_{\\mu } \\varphi \\partial_{\\nu } \\varphi \n- V(\\varphi )\n\\,,\n\\end{align}\n%\nwhere we choose a potential \n%\n\\begin{align}\nV(\\varphi ) = \\frac{\\lambda }{4} \\qty(\\varphi^2 - \\sigma^2)^2\n\\,.\n\\end{align}\n\nThis is the typical example of a potential which exhibits SSB. Its vacuum (minimum) is a pair of points at \\(\\abs{\\varphi} = \\sigma \\).\n\nThe Lagrangian is invariant under \\(\\varphi \\to - \\varphi \\); either of the vacuum states is not. \n\nWe need to consider finite-temperature effects on the propagator of the scalar field. \nThe temperature corrections to this potentials yields a temperature-dependent mass term, which looks like \n%\n\\begin{align}\nm^2_T = \\alpha \\lambda T^2\n\\,,\n\\end{align}\n%\nwhere \\(\\lambda \\) is the coupling of the field, while \\(\\alpha \\) is a dimensionless order-1 number. \n\nThen, the potential reads \n%\n\\begin{align}\nV_T (\\varphi ) = V_{T=0}(\\varphi ) + \\frac{1}{2} \\alpha \\lambda \\varphi^2 T^2\n\\,.\n\\end{align}\n\nIf the temperature is sufficiently high, the potential will have only one vacuum again. \nThis means that if we go far enough back in time the symmetry is restored. \n\nThe moment at which the potential goes from one minimum to two is the one at which we have a \\textbf{phase transition}. At which temperature does it happen? We can find out by considering the sign of the second derivative of the potential at \\(\\varphi = 0\\): \n%\n\\begin{align}\n\\eval{\\dv[2]{V}{\\varphi }}_{\\varphi =0}  = - \\lambda \\sigma^2 + \\lambda \\alpha T^2\n\\,,\n\\end{align}\n%\nso we have a critical temperature \\(T \\approx \\sigma / \\sqrt{\\alpha }\\) at which the symmetry is broken.\n\n\\subsection{Topological defects}\n\nThe defects are quite similar to the defects we find in regular phase transitions we know at our scales, like water to ice: the crystal which forms is not perfect. \n\nThe minima \\(\\varphi = \\pm \\sigma \\) are the \\emph{true vacuum} of the system, while \\(\\varphi = 0\\) is the \\emph{false vacuum}. \n\nThere will be regions in the universe in which the scalar field goes to \\(+ \\sigma \\), and other regions in which it goes into \\(- \\sigma \\). \nThis is because the two minima are equivalent: there are even odds for the field at any point to fall into either. In causally connected regions it will go into the same minimum. 
\nThere will then be boundaries between the regions in which the field goes into \\(+ \\sigma \\) and \\(- \\sigma \\).\n\nIn a mostly-plus metric signature, the equation of motion reads \n%\n\\begin{align}\n\\square \\varphi = \\pdv{V}{\\varphi }\n\\,,\n\\end{align}\n%\nso if we neglect the curvature and consider static solutions we will have \n%\n\\begin{align}\n\\nabla^2 \\varphi = \\pdv{V}{\\varphi }\n\\,.\n\\end{align}\n\nFurther, we consider an infinite \\textbf{domain wall} in the \\(xy\\) plane, assuming that for \\(z \\to \\pm \\infty \\) we have \\(\\varphi = \\pm \\sigma \\). Also, we assume that the field has no \\(x\\) or \\(y\\) dependence.  \nLet us then substitute into the equation: \n%\n\\begin{align}\n- \\pdv[2]{\\varphi }{z} = - \\pdv{V}{\\varphi } = - \\lambda \\varphi (\\varphi^2 - \\sigma^2)\n\\,.\n\\end{align}\n\nSolving this yields \n%\n\\begin{align}\n\\varphi (z) = \\sigma \\tanh \\qty( \\frac{z}{\\Delta })\n\\,,\n\\end{align}\n%\nwhere \\(\\Delta \\) is the \\emph{thickness} of the wall, which we can estimate through energetic considerations: the surface energy has a contribution from the gradient of the field, \\( \\Delta (\\partial_{z} \\varphi)^2 \\sim \\Delta \\sigma^2 / \\Delta^2 = \\sigma^2 / \\Delta  \\), and a potential term \\(V(\\varphi ) \\sim \\Delta V(\\varphi =0) \\sim \\Delta \\lambda \\sigma^{4} / 4\\). \n\nThis is because the total Hamiltonian for a unit-area region of this stationary configuration will also look like \n%\n\\begin{align}\nH &= \\int \\qty[\\frac{1}{2} g^{\\mu \\nu } \\partial_{\\mu } \\varphi \\partial_{\\nu } \\varphi + V(\\varphi ) ] \\dd{z}  \\\\\n&= \\int \\frac{1}{2} \\qty(\\partial_{z} \\varphi )^2 + V(\\varphi ) \\dd{z}  \\\\\n&\\approx \\frac{\\Delta}{2} \\eval{\\qty(\\partial_{z} \\varphi)^2}_{z=0} + \\frac{\\lambda}{4} \\sigma^4 \\Delta  \\underbrace{\\int \\qty(\\tanh^2 z - 1)^2 \\dd{z}}_{\\order{1}}   \\\\\n&\\approx \\Delta \\frac{\\sigma^2}{\\Delta^2} + \\frac{\\lambda}{4} \\sigma^4 \\Delta \n\\,.\n\\end{align}\n\nThe two contributions scale oppositely with \\(\\Delta \\): the kinetic energy decreases for a wide wall, the potential energy decreases for a narrow wall. \nThen, we can find an optimum at \\(\\Delta \\sim 1 / (\\sigma \\sqrt{\\lambda })\\). \n\nThis domain wall is not removable: the configuration is topologically stable. \n
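\nThe tanh profile can be checked numerically; the width used in this sketch of ours, \\(\\Delta = \\sqrt{2}/(\\sigma \\sqrt{\\lambda })\\), is the exact coefficient behind the \\(\\Delta \\sim 1/(\\sigma \\sqrt{\\lambda })\\) estimate:\n%\n\\begin{verbatim}\nimport math\n\nlam, sigma = 0.5, 2.0                    # illustrative couplings\ndelta = math.sqrt(2.0) / (sigma * math.sqrt(lam))  # exact kink width\n\nphi = lambda z: sigma * math.tanh(z / delta)\n\n# Check phi'' = lam * phi * (phi^2 - sigma^2) by finite differences:\nh = 1e-4\nfor z in (-1.0, 0.3, 2.0):\n    lhs = (phi(z + h) - 2 * phi(z) + phi(z - h)) / h**2\n    rhs = lam * phi(z) * (phi(z)**2 - sigma**2)\n    assert abs(lhs - rhs) < 1e-4\nprint('tanh kink solves the static field equation')\n\\end{verbatim}\n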
\n\nRecall from regular HBB cosmology that \n%\n\\begin{align}\nH^2(t) = \\frac{8 \\pi G}{3} \\rho _r = \\frac{8 \\pi G}{3} \\frac{\\pi^2}{30} g_* T^{4}\n\\,,\n\\end{align}\n%\nso \n%\n\\begin{align}\nt = \\frac{1}{2 H} \\approx \\num{.3} \\frac{1}{\\sqrt{g_*}} \\frac{M _{\\text{Pl}}}{T^2}\n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\nn_X \\gtrsim \\qty(\\frac{\\sqrt{g_*}}{\\num{.6}} \\frac{T}{M _{\\text{Pl}}})^3 T^3 \\sim \\qty(\\frac{\\sqrt{g_*}}{\\num{.6}} \\frac{T}{ M _{\\text{Pl}}})^3 n_\\gamma (T)\n\\,,\n\\end{align}\n%\nsince the number density of photons scales like \\(n_\\gamma (T) \\sim T^3\\).\n\nLet us evaluate this number density, for \\(T \\sim T _{\\text{GUT}} \\sim \\SI{e15}{GeV}\\).\nHere, \\(g_* \\sim 100\\), meaning that we get \n%\n\\begin{align}\nn_X (T _{\\text{GUT}}) > \\num{e-9} \\divisionsymbol \\num{e-10} n_\\gamma (T _{\\text{GUT}})\n\\,.\n\\end{align}\n\nTherefore, we have the ratio \n%\n\\begin{align}\n\\frac{n_X (T _{\\text{GUT}})}{n_\\gamma (T _{\\text{GUT}})} > \\num{e-9} \\divisionsymbol \\num{e-10}\n\\,.\n\\end{align}\n\nIf we assume that after production these objects are stable, there are no processes which can modify their number. Then, for lower temperatures we keep the same ratio, since both number densities scale like \\(n_\\gamma \\sim T^3 \\sim a^{-3}\\). \n\nThis is a very similar number to the ratio of baryons to photons, \\(\\eta = n_b / n_\\gamma \\)! \nThis means that \n%\n\\begin{align}\n\\Omega_{0x} = \\frac{m_x n_x(t_0 )}{\\rho _{\\text{crit}}}\n\\gtrsim \\frac{m_x n_{0b}}{\\rho _{\\text{crit}}} = \\frac{m_x}{m_p}  \\Omega_{0b}\n\\,,\n\\end{align}\n%\nwhich means that, since \\(m_x \\sim T _{\\text{GUT}} \\sim \\SI{e15}{GeV}\\), we must have \\(\\Omega_{0x} \\gtrsim \\num{e14}\\)! This definitely \\textbf{over-closes} the universe.\n\nHow does inflation solve this problem? These objects are produced in these early stages, but their number density is very \\textbf{diluted}. \nEach \\(\\pm \\sigma \\) region is inflated to the size of the observable universe. \n\nNow we will give some arguments as to why a scalar field makes sense, a characterization of different inflationary models, and discuss the generation of the first primordial density perturbations. \n\nA de Sitter phase is one with a cosmological constant \\(\\Lambda \\): \n%\n\\begin{align}\nH^2= \\frac{8 \\pi G}{3} \\rho _\\Lambda - \\frac{k}{a^2}\n\\,.\n\\end{align}\n \nHere \\(P_\\Lambda = - \\rho _\\Lambda \\). \nThen, \\(a(t) \\propto \\exp(Ht)\\), with \\(H = \\const\\). This \\(\\rho _\\Lambda\\) is constant as the universe expands. \n\nA cosmological constant term can be written in terms of a vacuum energy density of the quantum system. It appears in the EFE as \n%\n\\begin{align}\nR_{\\mu \\nu } - \\frac{1}{2} g_{\\mu \\nu } R = 8 \\pi G T_{\\mu \\nu } - \\Lambda g_{\\mu \\nu }\n\\,.\n\\end{align}\n\nThis can be calculated as \\(\\bra{0} T_{\\mu \\nu } \\ket{0} \\propto\n - \\bra{0} V(\\varphi ) \\ket{0} g_{\\mu \\nu }= - \\expval{\\rho } g_{\\mu \\nu }\\). \n \nPlugging this into the EFE, we find that the vacuum energy acts as an effective cosmological constant \n%\n\\begin{align}\n\\Lambda = 8 \\pi G \\expval{\\rho }\n\\,.\n\\end{align}\n\nWe cannot get rid of the vacuum energy density of the system, since energy gravitates. 
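\n\nAs a quick numeric sanity check of the over-closure bound above, before we return to the scalar field (round values assumed here, not from the lecture: \\(m_p \\approx \\SI{1}{GeV}\\), \\(\\Omega_{0b} \\approx 0.05\\)):\n%\n\\begin{verbatim}\n# Back-of-the-envelope check of Omega_0x >~ (m_x/m_p) * Omega_0b.\nm_x = 1e15        # defect mass scale in GeV, of order T_GUT (assumed)\nm_p = 1.0         # proton mass in GeV, rounded\nOmega_0b = 0.05   # baryon density parameter today (assumed)\nprint(f\"Omega_0x >~ {(m_x / m_p) * Omega_0b:.0e}\")  # ~5e13, of order 1e14\n\\end{verbatim}\n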
\n\nLet us come back to the Lagrangian \n%\n\\begin{align}\n\\mathscr{L}_{\\varphi } = - \\frac{1}{2} g^{\\mu \\nu } \\partial_{\\mu } \\varphi \\partial_{\\nu } \\varphi - V(\\varphi )\n\\,,\n\\end{align}\n%\nwhose energy momentum tensor is \n%\n\\begin{align}\nT^{\\varphi }_{\\mu \\nu } = \\partial_{\\mu } \\varphi \\partial_{\\nu } \\varphi + \\mathscr{L}_{\\varphi } g_{\\mu \\nu }\n\\,.\n\\end{align}\n\nLet us look at the Vacuum Expectation Value of the field: \\(\\expval{\\varphi } = \\bra{0} \\varphi \\ket{0}\\). If this is a constant, it should correspond to the minimum of the classical potential: the ground state. \nThis behaves like a cosmological constant. \nSince \\(\\varphi \\) is a constant, we have \n%\n\\begin{align}\n\\expval{T_{\\mu \\nu }} = - g_{\\mu \\nu } V(\\expval{\\varphi })\n\\,,\n\\end{align}\n%\nsince the derivatives of a constant vanish and there \\(\\mathscr{L}_{\\varphi } = - V\\). \nThis is an effective \\(\\Lambda \\). \n\nA phase transition can move us away from this Vacuum Expectation Value (VEV).\n\nThe VEV of \\(\\varphi \\) can be a function of time. \n\nAre there other options beyond a scalar field? Say, a vector field, or a spinor?\nThe first reason we do not choose these is that a nonzero vector or spinor VEV would break isotropy.\n\n\n\n\\end{document}\n", "meta": {"hexsha": "32208544d08387ae60f33bf5b53eb75c296d6c67", "size": 9882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_third_semester/early_universe/oct14.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_third_semester/early_universe/oct14.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_third_semester/early_universe/oct14.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 40.1707317073, "max_line_length": 380, "alphanum_fraction": 0.6818457802, "num_tokens": 3149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5921761577231294}}
{"text": "\\chapter{Stopped martingales (TO DO)}\n\\section{How to make money almost surely}\nWe now take our newfound knowledge of measure theory to a casino.\n\nHere's the most classical example that shows up:\na casino lets us play a game where we can bet any amount of\non a fair coin flip, but with bad odds:\nwe win $\\$n$ if the coin is heads,\nbut lose $\\$2n$ if the coin is tails,\nfor a value of $n$ of our choice.\nThis seems like a game that no one in their right mind would want to play.\n\nWell, if we have unbounded time and money,\nwe actually can almost surely make a profit.\n\\begin{example}\n\t[Being even greedier than 18th century France]\n\tIn the game above, we start by betting $\\$1$.\n\t\\begin{itemize}\n\t\t\\ii If we win, we leave having made $\\$1$.\n\t\t\\ii If we lose, we then bet $\\$10$ instead, and\n\t\t\\begin{itemize}\n\t\t\t\\ii If we win, then we leave having made $\\$10-\\$2=\\$8$, and\n\t\t\t\\ii If we lose then we bet $\\$100$ instead, and\n\t\t\t\\begin{itemize}\n\t\t\t\t\\ii If we win, we leave having made $\\$1000-\\$20-\\$2=\\$978$, and\n\t\t\t\t\\ii If we lose then we bet $\\$1000$ instead, and so on\\dots\n\t\t\t\\end{itemize}\n\t\t\\end{itemize}\n\t\\end{itemize}\n\tSince the coin will almost surely show heads eventually,\n\twe make money whenever that happens.\n\tIn fact, the expected amount of time until a coin shows heads\n\tis only $2$ flips! What could go wrong?\n\\end{example}\nThis chapter will show that under sane conditions\nsuch as ``finite time'' or ``finite money'',\none cannot actually make money in this way --- the \\emph{optional stopping theorem}.\nThis will give us an excuse to define conditional probabilities,\nand then talk about martingales (which generalize the fair casino).\n\nOnce we realize that trying to extract money from Las Vegas is a lost cause,\nwe will stop gambling and then return to solving math problems,\nby showing some tricky surprises,\nwhere problems that look like they have nothing to do with gambling\ncan be solved by considering a suitable martingale.\n\nIn everything that follows, $\\Omega = (\\Omega, \\SA, \\mu)$ is a probability space.\n\n\\section{Sub-$\\sigma$-algebras and filtrations}\n\\prototype{$\\sigma$-algebra generated by a random variable, and coin flip filtration.}\nWe considered our $\\Omega$ as a space of worlds,\nequipped with a $\\sigma$-algebra $\\SA$ that lets us integrate over $\\Omega$.\nHowever, it is a sad fact of life that at any given time,\nyou only know partial information about the world.\nFor example, at the time of writing,\nwe know that the world did not end in 2012\n(see \\url{https://en.wikipedia.org/wiki/2012_phenomenon}),\nbut the fate of humanity in future years remains at slightly uncertain.\n\nLet's write this measure-theoretically: we could consider\n\\begin{align*}\n\t\\Omega &= A \\sqcup B \\\\\n\tA &= \\left\\{ \\omega \\text{ for which world ends in $2012$} \\right\\} \\\\\n\tB &= \\left\\{ \\omega \\text{ for which world does not end in $2012$} \\right\\}.\n\\end{align*}\nWe will assume that $A$ and $B$ are measurable sets,\nthat is, $A,B \\in \\SA$.\nThat means we could have good fun arguing about what the values\nof $\\mu(A)$ and $\\mu(B)$ should be\n(``a priori probability that the world ends in $2012$''),\nbut let's move on to a different silly example.\n\n\nWe will now introduce a new notion that\nwe will need when we define conditional probabilities later.\n\\begin{definition}\n\tLet $\\Omega = (\\Omega, \\SA, \\mu)$ be a probability space.\n\tA 
\\vocab{sub-$\\sigma$-algebra} $\\SF$\n\ton $\\Omega$ is exactly what it sounds like:\n\ta $\\sigma$-algebra $\\SF$ on the set $\\Omega$\n\tsuch that each $A \\in \\SF$ is measurable\n\t(i.e.,\\ $\\SF \\subseteq \\SA$).\n\\end{definition}\n\nThe motivation is that $\\SF$ is the $\\sigma$-algebra of sets\nwhich let us ask questions about some piece of information.\nFor example, in the 2012 example we gave above,\nwe might take $\\SF = \\{\\varnothing, A, B, \\Omega\\}$,\nwhich are the sets we care about if we are thinking only about 2012.\n\nHere are some more serious examples.\n\\begin{example}\n\t[Examples of sub-$\\sigma$-algebras]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Let $X \\colon \\Omega \\to \\{1,2,3\\}$ be a random\n\t\tvariable taking on one of three values.\n\t\tIf we're interested in $X$ then we could define\n\t\t\\begin{align*}\n\t\t\tA &= \\{\\omega \\colon X(\\omega) = 1\\} \\\\\n\t\t\tB &= \\{\\omega \\colon X(\\omega) = 2\\} \\\\\n\t\t\tC &= \\{\\omega \\colon X(\\omega) = 3\\}\n\t\t\\end{align*}\n\t\tthen we could write\n\t\t\\[ \\SF = \\left\\{ \\varnothing, \\; A, \\; B, \\; C, \\;\n\t\t\t\tA \\cup B, \\; B \\cup C, \\; C \\cup A, \\; \\Omega \\right\\}. \\]\n\t\tThis is a sub-$\\sigma$-algebra on $\\Omega$\n\t\tthat lets us ask questions about $X$\n\t\tlike ``what is the probability $X \\ne 3$'', say.\n\n\t\t\\ii Now suppose $Y \\colon \\Omega \\to [0,1]$ is another random variable.\n\t\tIf we are interested in $Y$,\n\t\tthe $\\SF$ that captures our curiosity is\n\t\t\\[ \\SF = \\left\\{ Y\\pre(B) \\mid B \\subseteq [0,1]\n\t\t\t\\text{ is measurable } \\right\\}. \\]\n\t\\end{enumerate}\n\\end{example}\n\nYou might notice a trend here which we formalize now:\n\\begin{definition}\n\tLet $X \\colon \\Omega \\to \\RR$ be a random variable.\n\tThe \\vocab{sub-$\\sigma$-algebra generated by $X$} is defined by\n\t\\[ \\sigma(X) \\defeq \\left\\{ X\\pre(B) \\mid B \\subseteq \\RR\n\t\t\\text{ is measurable } \\right\\}. 
\\]\n\tIf $X_1$, \\dots is a sequence (finite or infinite) of random variables,\n\tthe sub-$\\sigma$-algebra generated by them\n\tis the smallest $\\sigma$-algebra which contains $\\sigma(X_i)$ for each $i$.\n\\end{definition}\n\nFinally, we can put a lot of these together ---\nsince we're talking about time, we learn more as we grow older,\nand this can be formalized.\n\\begin{definition}\n\tA \\vocab{filtration} on $\\Omega = (\\Omega, \\SA, \\mu)$\n\tis a nested sequence\\footnote{For convenience,\n\t\twe will restrict ourselves to $\\ZZ_{\\ge0}$-indexed\n\t\tfiltrations, though really any index set is okay.}\n\t\\[ \\SF_0 \\subseteq \\SF_1 \\subseteq \\SF_2 \\subseteq \\dots \\]\n\tof sub-$\\sigma$-algebras on $\\Omega$.\n\\end{definition}\n\n\\begin{example}\n\t[Filtration]\n\tSuppose you're bored in an infinitely long class\n\tand start flipping a fair coin to pass the time.\n\t(Accordingly, we could let $\\Omega = \\{H,T\\}^\\infty$\n\tconsist of infinite sequences of heads $H$ and tails $T$.)\n\tWe could let $\\SF_n$ denote the sub-$\\sigma$-algebra\n\tgenerated by the values of the first $n$ coin flips.\n\tSo:\n\t\\begin{itemize}\n\t\t\\ii $\\SF_0 = \\{\\varnothing, \\Omega\\}$,\n\t\t\\ii $\\SF_1 = \\{\\varnothing, \\text{first flip $H$}, \\text{first flip $T$}, \\Omega\\}$,\n\t\t\\ii $\\SF_2 = \\{\\varnothing, \\text{first flips $HH$}, \\text{second flip $T$}, \\Omega, \\text{first flip and second flip differ}, \\dots\\}$.\n\t\t\\ii and so on, with $\\SF_n$ being the measurable sets\n\t\t``determined'' only by the first $n$ coin flips.\n\t\\end{itemize}\n\\end{example}\n\n\\begin{exercise}\n\tIn the previous example, compute the cardinality $|\\SF_n|$ for each integer $n$.\n\\end{exercise}\n\n\\section{Conditional expectation}\n\\prototype{$\\EE(X \\mid X+Y)$ for $X$ and $Y$ distributed over $[0,1]$.}\n\nWe'll need the definition of conditional probability to define a martingale,\nbut this turns out to be surprisingly tricky.\nLet's consider the following simple example to see why.\n\\begin{example}\n\t[Why high-school methods aren't enough here]\n\tSuppose we have two independent random variables $X$, $Y$ distributed\n\tuniformly over $[0,1]$ (so we may as well take $\\Omega = [0,1]^2$).\n\tWe might try to ask the question:\n\t\\begin{quote}\n\t\t\\itshape\n\t\t``what is the expected value of $X$\n\t\tgiven that $X+Y = 0.6$''?\n\t\\end{quote}\n\tIntuitively, we know the answer has to be $0.3$.\n\tHowever, if we try to write down a definition, we quickly run into trouble.\n\tIdeally we want to say something like\n\t\\[ \\EE[X \\text{ given } X+Y=0.6]\n\t\t= \\frac{\\int_{S} X}{\\int_{S} 1}\n\t\t\\text{ where }\n\t\tS = \\left\\{ \\omega \\in \\Omega \\mid X(\\omega)+Y(\\omega)=0.6 \\right\\}.  
\\]\n\tThe problem is that $S$ is a set of measure zero,\n\tso we quickly run into $\\frac 00$, meaning a definition\n\tof this shape will not work out.\n\\end{example}\n\nThe way that this is typically handled in measure theory\nis to use the notion of sub-$\\sigma$-algebra that we defined.\nLet $\\SF$ be a sub-$\\sigma$-algebra which captures the information we have access to.\nThis means we create a function assigning the\n``conditional expectation'' to \\emph{every} point $\\omega \\in \\Omega$,\nwhich is measurable with respect to $\\SF$.\n\\todo{give the example}\n\n\\begin{proposition}\n\t[Conditional expectation definition]\n\t\\label{prop:conditional_exp}\n\tLet $X \\colon \\Omega \\to \\RR$ be an \\emph{absolutely integrable}\n\trandom variable (meaning $\\EE[|X|] < \\infty$)\n\tover a probability space $\\Omega$,\n\tand let $\\SF$ be a sub-$\\sigma$-algebra on it.\n\n\tThen there exists a function $\\eta \\colon \\Omega \\to \\RR$\n\tsatisfying the following two properties:\n\t\\begin{itemize}\n\t\t\\ii $\\eta$ is $\\SF$-measurable (that is,\n\t\t\tmeasurable as a function $(\\Omega, \\SF, \\mu) \\to \\RR$); and\n\t\t\\ii for any set $A \\in \\SF$ we have\n\t\t$\\EE[\\eta \\cdot \\mathbf{1}_A] = \\EE[X \\cdot \\mathbf{1}_A]$.\n\t\\end{itemize}\n\tMoreover, this random variable is unique up to almost-sure equality.\n\\end{proposition}\n\\begin{proof}\n\tOmitted, but relevant buzzword used is ``Radon-Nikodym derivative''.\n\\end{proof}\n\n\\begin{definition}\n\tLet $\\eta$ be as in the previous proposition.\n\t\\begin{itemize}\n\t\t\\ii We denote $\\eta$ by $\\EE(X \\mid \\SF)$\n\t\tand call it the \\vocab{conditional expectation} of $X$ with respect to $\\SF$.\n\t\t\\ii If $Y$ is a random variable then\n\t\t$\\EE(X \\mid Y)$ denotes $\\EE(X \\mid \\sigma(Y))$,\n\t\ti.e.\\ the conditional expectation of $X$\n\t\twith respect to the $\\sigma$-algebra generated by $Y$.\n\t\\end{itemize}\n\\end{definition}\n\nMore fine print:\n\\begin{remark}\n\t[This notation is terrible]\n\tThe notation $\\EE(X \\mid \\SF)$ is admittedly confusing,\n\tsince it is actually an entire function $\\Omega \\to \\RR$,\n\trather than just a real number like $\\EE[X]$.\n\tFor this reason I try to be careful to remember\n\tto use parentheses rather than square brackets\n\tfor conditional expectations; not everyone does this.\n\\end{remark}\n\n\\begin{abuse}\n\tIn addition, when we write $Y = \\EE(X \\mid \\SF)$,\n\tthere is some abuse of notation happening here\n\tsince $\\EE(X \\mid \\SF)$ is defined only up to some reasonable uniqueness\n\t(i.e.\\ up to measure zero changes).\n\tSo this really means that\n\t``$Y$ satisfies the hypothesis of \\Cref{prop:conditional_exp}'',\n\tbut this is so pedantic that no one bothers.\n\\end{abuse}\n\n\\todo{properties}\n\n\\section{Supermartingales}\n\\prototype{Visiting a casino is a supermartingale, assuming house odds.}\n\n\\begin{definition}\n\tLet $X_0$, $X_1$, \\dots be a sequence of random variables\n\ton a probability space $\\Omega$,\n\tand let $\\SF_0 \\subseteq \\SF_1 \\subseteq \\cdots$ be a filtration.\n\n\tThen $(X_n)_{n \\ge 0}$ is a \\vocab{supermartingale}\n\twith respect to $(\\SF_n)_{n \\ge 0}$ if the following conditions hold:\n\t\\begin{itemize}\n\t\t\\ii $X_n$ is absolutely integrable for every $n$;\n\t\t\\ii $X_n$ is measurable with respect to $\\SF_n$; and\n\t\t\\ii for each $n = 1, 2, \\dots$ the inequality\n\t\t\\[ \\EE(X_n \\mid \\SF_{n-1}) \\le X_{n-1} \\]\n\t\tholds for almost all $\\omega \\in \\Omega$.\n\t\\end{itemize}\n\n\tIn a \\vocab{submartingale} the inequality 
$\\le$ is replaced with $\\ge$,\n\tand in a \\vocab{martingale} it is replaced by $=$.\n\\end{definition}\n\n\\begin{abuse}\n\t[No one uses that filtration thing anyways]\n\tWe will always take $\\SF_n$ to be the $\\sigma$-algebra\n\tgenerated by the variables $X_0$, $X_1$, \\dots, $X_n$,\n\tand do so without further comment.\n\tNonetheless, all the results that follow hold in the more general setting\n\tof a supermartingale with respect to some filtration.\n\\end{abuse}\n\nWe will prove all our theorems for supermartingales;\nthe analogous versions for submartingales can be obtained\nby replacing $\\le$ with $\\ge$ everywhere\n(since $X_n$ is a submartingale iff $-X_n$ is a supermartingale)\nand for martingales by replacing $\\le$ with $=$ everywhere\n(since $X_n$ is a martingale iff it is both a supermartingale\nand a submartingale).\n\nLet's give examples.\n\\begin{example}\n\t[Supermartingales]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii \\textbf{Random walks}:\n\t\tan ant starts at the position $0$ on the number line.\n\t\tEvery minute, it flips a fair coin and either\n\t\twalks one step left or one step right.\n\t\tIf $X_t$ is the position at time $t$,\n\t\tthen $X_t$ is a martingale, because\n\t\t\\[ \\EE(X_t \\mid X_0, \\dots, X_{t-1})\n\t\t\t= \\frac{(X_{t-1}+1) + (X_{t-1}-1)}{2} = X_{t-1}. \\]\n\n\t\t\\ii \\textbf{Casino game}:\n\t\tConsider a gambler using the strategy described\n\t\tat the beginning of the chapter.\n\t\tThis is a supermartingale, since every bet the gambler makes\n\t\thas negative expected value (a bet of $\\$n$ wins $\\$n$ or loses $\\$2n$ with even odds).\n\n\t\t\\ii \\textbf{Multiplying independent variables}:\n\t\tLet $X_1$, $X_2$, \\dots, be independent (not necessarily\n\t\tidentically distributed) integrable random variables with mean $1$.\n\t\tThen the sequence $Y_1$, $Y_2$ \\dots defined by\n\t\t\\[ Y_n \\defeq X_1 X_2 \\dots X_n \\]\n\t\tis a martingale; as\n\t\t$\\EE(Y_n \\mid Y_1, \\dots, Y_{n-1}) = \\EE[X_n] \\cdot Y_{n-1} = Y_{n-1}$.\n\n\t\t\\ii \\textbf{Iterated blackjack}:\n\t\tSuppose one shows up to a casino and plays\n\t\tinfinitely many games of blackjack.\n\t\tIf $X_t$ is their wealth at time $t$, then $X_t$ is a supermartingale.\n\t\tThis is because each game has negative expected value (house edge).\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{example}\n\t[Frivolous/inflammatory example --- real life is a supermartingale]\n\n\tLet $X_t$ be your happiness on day $t$ of your life.\n\tLife has its ups and downs,\n\tso it is not the case that $X_t \\le X_{t-1}$ for every $t$.\n\tFor example, you might win the lottery one day.\n\n\tHowever, on any given day, many things can go wrong (e.g.\\ zombie apocalypse),\n\tand by Murphy's Law this is more likely than things going well.\n\tAlso, as you get older, you have an increasing number of responsibilities\n\tand your health gradually begins to deteriorate.\n\n\tThus it seems that\n\t\\[ \\EE(X_t \\mid X_0, \\dots, X_{t-1}) \\le X_{t-1} \\]\n\tis a reasonable description of the future ---\n\t\\emph{in expectation}, each successive day is\n\tslightly worse than the previous one.\n\t(In particular, if we set $X_t = -\\infty$ on death,\n\tthen as long as you have a positive probability of dying,\n\tthe displayed inequality is obviously true.)\n\\end{example}\n\nBefore going on, we will state without proof one useful result:\na supermartingale which is bounded in expectation will almost surely converge.\n\n\\begin{theorem}\n\t[Doob's martingale convergence theorem]\n\t\\label{thm:doob_martingale_converge}\n\tLet $X_0$, \\dots be a supermartingale on 
a\n\tprobability space $\\Omega$ such that\n\t\\[ \\sup_{n \\in \\ZZ_{\\ge 0}}\n\t\t\\EE \\left[ \\left\\lvert X_n \\right\\rvert \\right] < \\infty. \\]\n\tThen, there exists a random variable $X_\\infty \\colon \\Omega \\to \\RR$\n\tsuch that\n\t\\[ X_n \\asto X_\\infty. \\]\n\\end{theorem}\n\n\\section{Optional stopping}\n\\prototype{Las Vegas.}\n\nIn the first section we described how to make money almost surely.\nThe key advantage the gambler had was the ability to quit whenever he wanted\n(equivalently, an ability to control the size of the bets;\nbetting \\$0 forever is the same as quitting.)\nLet's formalize a notion of stopping time.\n\nThe idea is we want to define a function\n$\\tau \\colon \\Omega \\to \\{0, 1, \\dots, \\infty\\}$ such that\n\\begin{itemize}\n\t\\ii $\\tau(\\omega)$ specifies the index\n\tafter which we \\emph{stop} the martingale.\n\tNote that the decision to stop at time $n$\n\tmust be made with only the information available at that time ---\n\ti.e., with respect to $\\SF_n$ of the filtration.\n\n\t\\ii $X_{\\tau \\wedge n}$ is the random variable representing the\n\tvalue at time $n$ of the stopped martingale,\n\twhere if $n$ is \\emph{after} the stopping time,\n\twe just take it to be our current value from when we left.\n\n\tSo for example in a world $\\omega$ where we stopped at time $3$, then\n\t$X_{\\tau \\wedge 0}(\\omega) = X_0(\\omega)$,\n\t$X_{\\tau \\wedge 1}(\\omega) = X_1(\\omega)$,\n\t$X_{\\tau \\wedge 2}(\\omega) = X_2(\\omega)$,\n\t$X_{\\tau \\wedge 3}(\\omega) = X_3(\\omega)$, but then\n\t\\[ X_3(\\omega)\n\t\t= X_{\\tau \\wedge 4}(\\omega)\n\t\t= X_{\\tau \\wedge 5}(\\omega)\n\t\t= X_{\\tau \\wedge 6}(\\omega)\n\t\t= \\dots\n\t\\]\n\tsince we have stopped --- the value stops changing.\n\n\t\\ii $X_{\\tau}$ denotes the eventual value after we stop\n\t(or the limit $X_\\infty$ if we never stop).\n\\end{itemize}\n\nHere's the compiled machine code.\n\\begin{definition}\n\tLet $\\SF_0 \\subseteq \\SF_1 \\subseteq \\cdots$ be a filtration\n\ton a probability space $\\Omega$.\n\t\\begin{itemize}\n\t\t\\ii A \\vocab{stopping time} is a function\n\t\t\\[ \\tau \\colon \\Omega \\to \\{0, 1, 2, \\dots\\} \\cup \\{\\infty\\} \\]\n\t\twith the property that for each integer $n$, the set\n\t\t\\[ \\left\\{ \\omega \\in \\Omega \\mid \\tau(\\omega) = n \\right\\} \\]\n\t\tis $\\SF_n$-measurable (i.e., is in $\\SF_n$).\n\n\t\t\\ii For each $n \\ge 0$ we define\n\t\t$X_{\\tau \\wedge n} \\colon \\Omega \\to \\RR$ by\n\t\t\\[ X_{\\tau \\wedge n}(\\omega)\n\t\t= X_{\\min \\left\\{ \\tau(\\omega), n \\right\\}}(\\omega) \\]\n\n\t\t\\ii Finally, we let the eventual outcome be denoted by\n\t\t\\[ X_\\tau(\\omega)\n\t\t\t= \\begin{cases}\n\t\t\t\tX_{\\tau(\\omega)}(\\omega) & \\tau(\\omega) \\ne \\infty \\\\\n\t\t\t\t\\lim_{n \\to \\infty} X_n(\\omega) & \\tau(\\omega) = \\infty\n\t\t\t\t\\text{ and } \\lim_{n \\to \\infty} X_n(\\omega) \\text{ exists } \\\\\n\t\t\t\t\\text{undefined} & \\text{otherwise}.\n\t\t\t\\end{cases}\n\t\t\\]\n\t\tWe require that the ``undefined'' case occurs\n\t\tonly for a set of measure zero\n\t\t(for example, if \\Cref{thm:doob_martingale_converge} applies).\n\t\tOtherwise we don't allow $X_\\tau$ to be defined.\n\t\\end{itemize}\n\\end{definition}\n\n\n\\begin{proposition}\n\t[Stopped supermartingales are still supermartingales]\n\tLet $X_0$, $X_1$, \\dots be a supermartingale.\n\tThen the sequence\n\t\\[ X_{\\tau \\wedge 0}, \\; X_{\\tau \\wedge 1}, \\; \\dots \\]\n\tis itself a 
supermartingale.\n\\end{proposition}\n\\begin{proof}\n\tWe have almost everywhere the relations\n\t\\begin{align*}\n\t\t\\EE\\left( X_{\\tau \\wedge n} \\mid \\SF_{n-1} \\right)\n\t\t&= \\EE \\left( X_{\\tau \\wedge (n-1)} + \\mathbf 1_{\\tau(\\omega) \\ge n} (X_n - X_{n-1}) \\mid \\SF_{n-1} \\right) \\\\\n\t\t&= \\EE \\left( X_{\\tau \\wedge (n-1)} \\mid \\SF_{n-1} \\right)\n\t\t+ \\EE \\left( \\mathbf 1_{\\tau(\\omega) \\ge n} \\cdot (X_n - X_{n-1}) \\mid \\SF_{n-1} \\right) \\\\\n\t\t&= X_{\\tau \\wedge (n-1)} + \\mathbf 1_{\\tau(\\omega) \\ge n}\n\t\t\t\\cdot\\EE \\left(  X_n - X_{n-1} \\mid \\SF_{n-1} \\right)\n\t\t\\le X_{\\tau \\wedge (n-1)}\n\t\\end{align*}\n\tas functions from $\\Omega \\to \\RR$;\n\there we used that the event $\\tau(\\omega) \\ge n$,\n\tbeing the complement of $\\tau(\\omega) \\le n-1$,\n\tis $\\SF_{n-1}$-measurable, as is $X_{\\tau \\wedge (n-1)}$.\n\\end{proof}\n\n\\begin{theorem}\n\t[Doob's optional stopping theorem]\n\tLet $X_0$, $X_1$, \\dots be a supermartingale on a probability space $\\Omega$,\n\twith respect to a filtration $\\SF_0 \\subseteq \\SF_1 \\subseteq \\cdots$.\n\tLet $\\tau$ be a stopping time with respect to this filtration.\n\tSuppose that \\emph{any} of the following hypotheses are true,\n\tfor some constant $C$:\n\t\\begin{enumerate}[(a)]\n\t\t\\ii \\textbf{Finite time}: $\\tau(\\omega) \\le C$ for almost all $\\omega$.\n\t\t\\ii \\textbf{Finite money}: for each $n \\ge 1$,\n\t\t$\\left\\lvert X_{\\tau \\wedge n}(\\omega) \\right\\rvert \\le C$\n\t\tfor almost all $\\omega$.\n\t\t\\ii \\textbf{Finite bets}: we have $\\mathbb E[\\tau] < \\infty$,\n\t\tand for each $n \\ge 1$, the conditional expectation\n\t\t\\[ \\EE\\left( \\left\\lvert X_n-X_{n-1} \\right\\rvert\n\t\t\t\\mid \\SF_{n-1} \\right) \\]\n\t\ttakes on values at most $C$ for almost all $\\omega \\in \\Omega$\n\t\tsatisfying $\\tau(\\omega) \\ge n$.\n\t\\end{enumerate}\n\tThen $X_\\tau$ is well-defined almost everywhere,\n\tand more importantly, \\[ \\EE[X_\\tau] \\le \\EE[X_0]. 
\\]\n\\end{theorem}\nThe last inequality can be cheekily expressed as\n``the only winning move is not to play''.\n\n\\begin{proof}\n\t\\todo{do later tonight}\n\\end{proof}\n\n\\begin{exercise}\n\tConclude that going to Las Vegas with the strategy\n\tdescribed in the first section is a really bad idea.\n\tWhat goes wrong?\n\\end{exercise}\n\n\\section{Fun applications of optional stopping (TO DO)}\n%% for 18.A34 we can do the random walk example\nWe now give three problems which showcase some of the power of\nthe results we have developed so far.\n\n\\subsection{The ballot problem}\nSuppose Alice and Bob are racing in an election;\nAlice received $a$ votes total while Bob received $b$ votes total, and $a > b$.\nIf the votes are counted in random order,\none could ask: what is the probability that Alice stays strictly ahead of\nBob throughout the count?\n\n\\missingfigure{path}\n\n\\begin{proposition}\n\t[Ballot problem]\n\tThis occurs with probability $\\frac{a-b}{a+b}$.\n\\end{proposition}\n\n\n\\subsection{ABRACADABRA}\n\n\n\\subsection{USA TST 2018}\n\n\\section{\\problemhead}\n\n\\begin{problem}\n\t[Examples of martingales]\n\tWe give some more examples of martingales.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii \\textbf{(Simple random walk)}\n\t\tLet $X_1$, $X_2$, \\dots be i.i.d.\\ random variables\n\t\twhich equal $+1$ with probability $1/2$,\n\t\tand $-1$ with probability $1/2$.\n\t\tProve that\n\t\t\\[ Y_n = \\left( X_1 + \\dots + X_n \\right)^2 - n \\]\n\t\tis a martingale.\n\n\t\t\\ii \\textbf{(de Moivre's martingale)}\n\t\tFix real numbers $p$ and $q$ such that $p,q > 0$ and $p+q=1$.\n\t\tLet $X_1$, $X_2$, \\dots be i.i.d.\\ random variables\n\t\twhich equal $+1$ with probability $p$,\n\t\tand $-1$ with probability $q$.\n\t\tShow that\n\t\t\\[ Y_n = \\left(qp^{-1}\\right)^{X_1 + X_2 + \\dots + X_n} \\]\n\t\tis a martingale.\n\n\t\t\\ii \\textbf{(P\\'{o}lya's urn)}\n\t\tAn urn contains one red and one blue marble initially.\n\t\tEvery minute, a marble is randomly removed from the urn,\n\t\tand two more marbles of the same color are added to the urn.\n\t\tThus after $n$ minutes, the urn will have $n+2$ marbles.\n\n\t\tLet $r_n$ denote the fraction of marbles which are red.\n\t\tShow that $r_n$ is a martingale.\n\t\\end{enumerate}\n\\end{problem}\n\n\\begin{problem}\n\tA deck has $52$ cards; of them $26$ are red and $26$ are black.\n\tThe cards are drawn and revealed one at a time.\n\tAt any point, if there is at least one card remaining in the deck,\n\tyou may stop the dealer;\n\tyou win if (and only if) the next card in the deck is red.\n\tIf all cards are dealt, then you lose.\n\tAcross all possible strategies,\n\tdetermine the maximal probability of winning.\n\t\\begin{hint}\n\t\tThere is a cute elementary solution.\n\t\tFor the martingale-based solution,\n\t\tshow that the fraction of red cards remaining in the deck\n\t\tat time $n$ is a martingale.\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}\n\t[Wald's identity]\n\tLet $\\mu$ be a real number.\n\tLet $X_1$, $X_2$, \\dots be independent random variables\n\ton a probability space $\\Omega$ with mean $\\mu$.\n\tFinally let $\\tau \\colon \\Omega \\to \\{1, 2, \\dots\\}$\n\tbe a stopping time such that $\\mathbb E[\\tau] < \\infty$,\n\tsuch that the event $\\tau = n$ depends only on $X_1$, \\dots, $X_n$.\n\t\nProve that\n\t\\[ \\EE[X_1 + X_2 + \\dots + X_\\tau] = \\mu \\EE[\\tau]. 
\\]\n\\end{problem}\n\n\\begin{problem}\n\t[Unbiased drunkard's walk]\n\tAn ant starts at $0$ on a number line,\n\tand walks left or right one unit, each with probability $1/2$.\n\tIt stops once it reaches either $-17$ or $+8$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Find the probability it reaches $+8$ before $-17$.\n\n\t\t\\ii Find the expected value of the amount of time\n\t\tit takes to reach either endpoint.\n\t\\end{enumerate}\n\\end{problem}\n\n\\begin{problem}\n\t[Biased drunkard's walk]\n\tLet $0 < p < 1$ be a real number.\n\tAn ant starts at $0$ on a number line,\n\tand walks right one unit with probability $p$\n\tand left one unit with probability $1-p$.\n\tIt stops once it reaches either $-17$ or $+8$.\n\tFind the probability it reaches $+8$ first.\n\\end{problem}\n\n\\begin{problem}\n\tThe number $1$ is written on a blackboard.\n\tEvery minute, if the number $a$ is written on the board,\n\tit's erased and replaced by a real number\n\tin the interval $[0, 2.01a]$ selected uniformly at random.\n\tWhat is the probability that the resulting sequence of numbers approaches $0$?\n\t\\begin{hint}\n\t\tIt occurs with probability $1$.\n\t\tIf $X_n$ is the number on the board at step $n$,\n\t\tand $\\mu = \\frac{1}{2.01} \\int_0^{2.01} \\log t \\; dt$,\n\t\tshow that $\\log(X_n) - n \\mu$ is a martingale.\n\t\t(Incidentally, using the law of large numbers could work too.)\n\t\\end{hint}\n\\end{problem}\n\n", "meta": {"hexsha": "803f3d276557ab1dd5f988befd9fd5a74e4ddf53", "size": 23393, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/measure/martingale.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/measure/martingale.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/measure/martingale.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4288, "max_line_length": 138, "alphanum_fraction": 0.6917026461, "num_tokens": 7484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7905303236047049, "lm_q1q2_score": 0.5921761569917321}}
{"text": "\\subsection{Free groups}\\label{subsec:free_groups}\n\n\\begin{definition}\\label{def:free_monoid}\n  Let \\( S \\) be an arbitrary set. We associate with \\( S \\) its \\term{free monoid} \\( F(S) \\coloneqq (S^{\\ast}, \\cdot) \\), where \\( S^{\\ast} \\) is the \\hyperref[def:formal_language/kleene_star]{Kleene star} and \\( \\cdot \\) is \\hyperref[def:formal_language/concatenation]{concatenation}.\n\\end{definition}\n\\begin{proof}\n  It is a monoid due to \\fullref{thm:kleene_star_is_monoid}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:free_monoid_universal_property}\n  For every set \\( A \\), denote by \\( \\iota_A: A \\to F(A) \\) the canonical inclusion function, which sends every member of \\( A \\) into the corresponding single-symbol word in the \\hyperref[def:free_monoid]{free monoid} \\( F(A) \\).\n\n  As a consequence of \\fullref{thm:free_monoid_universal_property}, \\( F(A) \\) is the unique up to an isomorphism monoid that satisfies the following \\hyperref[rem:universal_mapping_property]{universal mapping property}:\n  \\begin{displayquote}\n    For every monoid \\( \\mscrM \\) and every function \\( f: A \\to \\mscrM \\), there exists a unique monoid homomorphism \\( \\widetilde{f}: F(A) \\to \\mscrM \\) such that the following diagram commutes:\n    \\begin{equation}\\label{eq:def:free_monoid/diagram}\n      \\begin{aligned}\n        \\includegraphics[page=1]{output/def__free_monoid.pdf}\n      \\end{aligned}\n    \\end{equation}\n  \\end{displayquote}\n\\end{proposition}\n\\begin{proof}\n  For every function \\( f: A \\to \\mscrM \\), we have the monoid homomorphism\n  \\begin{equation*}\n    \\begin{aligned}\n      &\\widetilde{f}: F(A) \\to \\mscrM, \\\\\n      &\\widetilde{f}(x_1 x_2 \\ldots x_n) \\coloneqq f(x_1) \\cdot f(x_2) \\cdot \\ldots \\cdot f(x_n)\n    \\end{aligned}\n  \\end{equation*}\n  obtained by applying the monoid operation \\( \\cdot \\) recursively to the pointwise image\n  \\begin{equation*}\n    f(x_1) f(x_2) \\ldots f(x_n)\n  \\end{equation*}\n  of the word\n  \\begin{equation*}\n    x_1 x_2 \\ldots x_n.\n  \\end{equation*}\n\n  The homomorphism \\( \\widetilde{f} \\) is uniquely determined by the action of \\( f \\) on single-symbol words.\n\\end{proof}\n\n\\begin{corollary}\\label{thm:free_monoid_functor}\n  Consider the functor \\( F: \\cat{Set} \\to \\cat{Mon} \\) defined for objects pointwise in \\fullref{def:free_monoid}. For every function \\( f: A \\to B \\), define the \\hyperref[def:unital_magma/homomorphism]{monoid homomorphism}\n  \\begin{equation*}\n    \\begin{aligned}\n      &F(f): F(A) \\to F(B) \\\\\n      &F(f)(x_1 \\cdots x_n) \\coloneqq f(x_1) \\cdots f(x_n).\n    \\end{aligned}\n  \\end{equation*}\n\n  This functor is \\hyperref[def:category_adjunction]{left adjoint} to the \\hyperref[def:concrete_category]{forgetful functor} \\( U: \\cat{Mon} \\to \\cat{Set} \\).\n\\end{corollary}\n\\begin{proof}\n  Follows from \\fullref{thm:free_monoid_universal_property} via \\fullref{rem:universal_mapping_property}.\n\\end{proof}\n\n\\begin{definition}\\label{def:free_group}\n  Let \\( S \\) be an arbitrary set. We will now construct the \\term{free group} \\( F(S) \\) of \\( S \\). The construction is similar to that of \\hyperref[def:free_monoid]{free monoids}, but it is much more complicated because of special reduction rules for \\hyperref[def:unital_magma_inverse_element]{inverse elements}. 
Refer to \\cite{code:free_group_grammar_verification} for a software implementation of the construction.\n\n  Let \\( \\star \\) be a \\hyperref[def:formal_language/symbol]{symbol} not in \\( S \\). Our goal is, for each \\( a \\in S \\), to make the word \\( a{\\star} \\) behave like the inverse of \\( a \\) in a group. Rather than considering the \\hyperref[def:formal_language/kleene_star]{Kleene star} \\( (S \\cup \\{ \\star \\})^* \\) and removing elements via \\enquote{reductions} as in \\cite{code:free_group_reduction_verification} and \\cite[306]{Knapp2016BasicAlgebra}, we directly build a language of \\term{reduced words} using the mutually recursive \\hyperref[def:formal_grammar]{grammar}\n  \\begin{alignedeq}\\label{eq:def:free_group/grammar}\n    &I \\to \\varepsilon,           &&                        && \\text{\\( I \\) is the initial state} \\\\\n    &I \\to S_a \\mid D_a,             && a \\in S              && \\\\\n    &S_a \\to a \\mid a S_a,           && a \\in S              && S_a \\text{ does not produce words beginning with } a\\star \\\\\n    &S_a \\to a D_b,               && a, b \\in S, a \\neq b && \\\\\n    &D_a \\to a\\star S_b,          && a, b \\in S, a \\neq b && D_a \\text{ does not produce words beginning with } a \\\\\n    &D_a \\to a\\star \\mid a\\star D_a, && a \\in S              && \\\\\n  \\end{alignedeq}\n\n  The \\term{free group} \\( F(S) \\) is defined to be the language of \\eqref{eq:def:free_group/grammar} equipped with the inductively defined operation\n  \\begin{equation}\\label{eq:def:free_group/operation}\n    w_1 \\odot w_2 \\coloneqq \\begin{cases}\n     p \\odot s, &w_1 = p a \\T{and} w_2 = a\\star s \\text{ for some } a \\in S, \\\\\n     p \\odot s, &w_1 = p a\\star \\T{and} w_2 = as \\T{and} s \\neq \\star t \\text{ for some } a \\in S, \\\\\n     ps,        &\\text{otherwise}.\n   \\end{cases}\n  \\end{equation}\n\n  The inverse of the word \\( w = a_1 \\ldots a_n \\) is \\( w^{-1} \\coloneqq b_1 \\ldots b_n \\), where\n   \\begin{equation}\\label{eq:def:free_group/inverse}\n     b_{n-k+1} \\coloneqq \\begin{cases}\n       \\varnothing, &a_k = {\\star} \\\\\n       a_k{\\star},  &a_k \\neq {\\star} \\T{and} k = n \\\\\n       a_k{\\star},  &a_k \\neq {\\star} \\T{and} a_{k+1} \\neq {\\star} \\\\\n       a_k,         &a_k \\neq {\\star} \\T{and} k \\neq n \\T{and} a_{k+1} = {\\star} \\\\\n     \\end{cases}\n   \\end{equation}\n   for \\( k = 1, \\ldots, n \\).\n\n  The group \\( (F(S), \\odot) \\) is called the \\term{free group} generated by \\( S \\).\n\\end{definition}\n\\begin{proof}\n  The proof of the well-definedness of the group structure of \\( F(S) \\) is a straightforward (but tedious) application of induction.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:free_group_is_free_functor}\n  The functor \\( F: \\cat{Set} \\to \\cat{Grp} \\), defined pointwise in \\fullref{def:free_group}, is \\hyperref[def:category_adjunction]{free}.\n\\end{proposition}\n\\begin{proof}\n  The outline of the proof is similar to the proof of \\fullref{thm:free_monoid_universal_property}.\n\\end{proof}\n\n\\begin{definition}\\label{def:group_presentation}\\mcite[314]{Knapp2016BasicAlgebra}\n  Let \\( S \\) be a set, \\( F(S) \\) be the \\hyperref[def:free_group]{free group} and \\( \\mscrR \\subseteq F(S) \\) be a subset. 
Denote by \\( \\mscrN(\\mscrR) \\) the smallest normal subgroup of \\( F(S) \\) that includes \\( \\mscrR \\) as a subset.\n\n  We define the group\n  \\begin{equation}\\label{eq:def:group_presentation/presentation}\n    \\mscrG = \\braket{ S \\mid \\mscrR} \\coloneqq F(S) / \\mscrN(\\mscrR)\n  \\end{equation}\n  called the group with \\term{generators} \\( S \\) and \\term{relators} \\( \\mscrR \\). The expression \\eqref{eq:def:group_presentation/presentation} is called a \\term{presentation} of \\( \\mscrG \\).\n\n  If there exists a presentation for \\( \\mscrG \\) such that \\( S \\) is finite, it is called a \\term{finitely generated} group. If there exists a presentation such that both \\( S \\) and \\( \\mscrR \\) are finite, it is called \\term{finitely presented}.\n\n  If \\( \\mscrR = \\varnothing \\), there are no restrictions and we use the notation\n  \\begin{equation}\\label{eq:def:group_presentation/free}\n    \\mscrG = \\braket{ S } \\coloneqq F(S)\n  \\end{equation}\n  for the free group.\n\\end{definition}\n\n\\begin{theorem}\\label{thm:every_group_is_representable}\\mcite[prop. 7.7]{Knapp2016BasicAlgebra}\n  Every group \\( \\mscrG \\) has at least one \\hyperref[def:group_presentation]{presentation}.\n\\end{theorem}\n\\begin{proof}\n  Let \\( \\mscrG \\) be an arbitrary group and let \\( S \\coloneqq U(\\mscrG) \\) be the underlying set. Let \\( F(S) \\) be the corresponding free group with \\( \\iota: S \\to F(S) \\) sending elements of \\( S \\) to singleton words in \\( F(S) \\). By \\fullref{thm:free_group_is_free_functor}, there exists a unique homomorphism \\( \\varphi: F(S) \\to \\mscrG \\) such that\n  \\begin{equation*}\n    \\text{\\todo{Add diagram}}\\iffalse\\begin{mplibcode}\n      beginfig(1);\n      input metapost/graphs;\n\n      v1 := thelabel(\"$S$\", origin);\n      v2 := thelabel(\"$U(F(S))$\", (-1, -1) scaled u);\n      v3 := thelabel(\"$U(G)$\", (1, -1) scaled u);\n\n      a1 := straight_arc(v1, v2);\n      a2 := straight_arc(v1, v3);\n\n      d1 := straight_arc(v2, v3);\n\n      draw_vertices(v);\n      draw_arcs(a);\n\n      drawarrow d1 dotted;\n\n      label.ulft(\"$\\iota$\", straight_arc_midpoint of a1);\n      label.urt(\"$\\id$\", straight_arc_midpoint of a2);\n      label.top(\"$U(\\varphi)$\", straight_arc_midpoint of d1);\n      endfig;\n    \\end{mplibcode}\\fi\n  \\end{equation*}\n  that is, \\( U(\\varphi) \\circ \\iota = \\id \\). In particular, \\( \\varphi \\) is surjective. Define \\( \\mscrR \\coloneqq \\ker \\varphi \\). 
By \\fullref{def:normal_subgroup}, \\( \\mscrR \\) is a normal subgroup of \\( F(S) \\), thus\n  \\begin{equation*}\n    \\mscrG = \\varphi(F(S)) \\cong F(S) / \\ker \\varphi = \\braket{ S \\mid \\mscrR }.\n  \\end{equation*}\n\\end{proof}\n\n\\begin{definition}\\label{def:cyclic_group}\n  For a singleton alphabet \\( \\set{ a } \\), we define the \\term{infinite cyclic group}\n  \\begin{equation*}\n    C \\coloneqq \\braket{a}\n  \\end{equation*}\n  and, for positive integers \\( n \\), the \\term{finite cyclic group} of \\term{order} \\( n \\) as\n  \\begin{equation*}\n    C_n \\coloneqq \\braket{a \\given a^n}.\n  \\end{equation*}\n\n  We use the same notation independent of \\( a \\) because all cyclic groups of the same order are obviously \\hyperref[def:group/homomorphism]{isomorphic}.\n\n  See \\fullref{thm:cyclic_group_isomorphic_to_integers_modulo_n}.\n\\end{definition}\n\n\\begin{definition}\\label{def:group_free_product}\\mcite[323]{Knapp2016BasicAlgebra}\n  The \\term{free product} of a nonempty family of groups \\( \\seq{ \\mscrX_k }_{k \\in \\mscrK} \\) with presentations \\( \\braket{S_k \\mid \\mscrR_k}, k \\in \\mscrK \\) is the group\n  \\begin{equation*}\n    \\Ast_{k \\in \\mscrK} \\mscrX_k \\coloneqq \\braket*{ \\coprod_{k \\in \\mscrK} S_k \\given* \\coprod_{k \\in \\mscrK} \\mscrR_k },\n  \\end{equation*}\n  where \\( \\coprod \\) is the \\hyperref[def:disjoint_union]{disjoint union}.\n\\end{definition}\n\n\\begin{definition}\\label{def:free_abelian_group}\n  A \\term{free abelian group} is a \\hyperref[def:free_left_module]{free} \\hyperref[thm:abelian_group_iff_z_module]{\\( \\BbbZ \\)-module}. This definition of a free abelian group is different from the definition of a \\hyperref[def:free_group]{free group}.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:product_of_cyclic_groups}\n  The \\hyperref[def:group_direct_product]{direct product} \\( C_n \\times C_m \\) of two \\hyperref[def:cyclic_group]{cyclic groups} is cyclic if and only if \\( n \\) and \\( m \\) are \\hyperref[def:coprime_numbers]{coprime}.\n\\end{proposition}\n\\begin{proof}\n  Take \\( (a^i, a^j) \\in C_n \\times C_m \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:group_direct_product}\n  The \\term{direct product} of a nonempty family of groups \\( \\{ \\mscrX_k \\}_{k \\in \\mscrK} \\) is their \\hyperref[def:cartesian_product]{Cartesian product} \\( \\prod_{k \\in \\mscrK} \\mscrX_k \\) with the componentwise group operation\n  \\begin{equation*}\n    \\{ x_k \\}_{k \\in \\mscrK} \\cdot \\{ y_k \\}_{k \\in \\mscrK}\n    \\coloneqq\n    \\{ x_k \\cdot y_k \\}_{k \\in \\mscrK}.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{definition}\\label{def:group_direct_sum}\n  The \\term{direct sum} \\( \\bigoplus_{k \\in \\mscrK} \\mscrX_k \\) of a nonempty family of groups \\( \\{ \\mscrX_k \\}_{k \\in \\mscrK} \\) is a subgroup of their \\hyperref[def:group_direct_product]{direct product} where, for any group element \\( \\{ x_k \\}_{k \\in \\mscrK} \\), only finitely many components are different from the identity.\n\n  \\begin{thmenum}\n    \\thmitem{def:group_direct_sum/internal}\\mcite[126]{Knapp2016BasicAlgebra}If all \\( \\{ \\mscrX_k \\}_{k \\in \\mscrK} \\) are subgroups of a group \\( \\mscrX \\), we say that \\( \\mscrX \\) is their \\term{internal direct sum} if the homomorphism\n    \\begin{align*}\n       &\\varphi: \\bigoplus_{k \\in \\mscrK} \\mscrX_k \\to \\mscrX \\\\\n       &\\varphi(\\{ x_k \\}_{k \\in \\mscrK}) \\coloneqq \\prod_{k \\in \\mscrK} x_k\n    \\end{align*}\n    is an 
isomorphism.\n\n    The product is well-defined since, by definition, there are only finitely many non-identity factors.\n\n    \\thmitem{def:group_direct_sum/external} To distinguish \\( \\bigoplus_{k \\in \\mscrK} \\mscrX_k \\) from \\( \\mscrX \\), we sometimes call it the \\term{external direct sum}.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:group_categorical_limits}\n  We are interested in \\hyperref[def:category_of_cones/limit]{categorical limits} and \\hyperref[def:category_of_cones/colimit]{colimits} in \\( \\cat{Grp} \\). Fix an indexed family  \\( \\{ \\mscrX_k \\}_{k \\in \\mscrK} \\) of groups.\n\n  \\begin{thmenum}\n    \\thmitem{thm:group_categorical_limits/product} Their \\hyperref[def:discrete_category_limits]{categorical product} is their \\hyperref[def:group_direct_product]{direct product} \\( \\prod_{k \\in \\mscrK} \\mscrX_k \\), the projection morphisms being inherited from \\fullref{thm:discrete_category_limits_in_set}.\n\n    \\thmitem{thm:group_categorical_limits/coproduct} Their \\hyperref[def:discrete_category_limits]{categorical coproduct} is their \\hyperref[def:group_free_product]{free product} \\( \\Ast_{k \\in \\mscrK} \\mscrX_k \\), the embedding morphisms being\n    \\begin{balign*}\n       &\\iota_m: \\mscrX_m \\to \\Ast_{k \\in \\mscrK} \\mscrX_k \\\\\n       &\\iota_m(x_m) \\coloneqq x_m.\n    \\end{balign*}\n  \\end{thmenum}\n\\end{proposition}\n", "meta": {"hexsha": "1abc463cbd4deeade7da3cd54bbdd7a65d1feda3", "size": 13308, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/free_groups.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/free_groups.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/free_groups.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.6103896104, "max_line_length": 572, "alphanum_fraction": 0.6760595131, "num_tokens": 4461, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303236047049, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.5921761525635668}}
{"text": "\\chapter{Classification of Spectra}\n\\label{ch:spectra_classification}\n\nLet\u2019s open the collagen data set again and see how well can logistic regression predict its four classes. Straightforward, right?\n\n\\begin{figure}[h]\n    \\includegraphics[width=0.95\\textwidth]{sp_classification-fig1.png}\n    \\label{fig:spectra_classification-fig1}\n    \\caption{In \\widget{Preprocess Spectra} we also did some spectral processing: we decided to only keep columns for wavenumbers between 1500\\wn and 1800\\wn.}\n\\end{figure}\n\n\\noindent Connect \\widget{Datasets}, \\widget{Logistic Regression}, \\widget{Predictions}, \\widget{Confusion Matrix} and that's it.\n\n\\begin{figure*}\n  \\newcommand{\\spectra}{\\includegraphics[scale=0.35]{sp_classification-fig2b.png}}\n  \\newcommand{\\confusion}{\\includegraphics[scale=0.45]{sp_classification-fig2a_.png}}\n  \\hspace{3.5cm} \\stackinset{l}{-4cm}{t}{-1cm}\n  {\\confusion}\n  {\\spectra}\n  \\caption{In the \\widget{Confusion Matrix} we selected wrong predictions for the actual class DNA. The connected \\widget{Spectra} widget displays them.}\n  \\vspace{-0.5cm}\n  \\label{fig:spectra_classification-fig2}\n\\end{figure*}\n\n\\noindent Let\u2019s not forget that it is pointless to predict for the same data as we used for learning. We could either  use a \\widget{Data Sampler} and connect its Sample output to \\widget{Preprocess Spectra} and Remaining output to \\widget{Predictions}, or obtain predictions from the \\widget{Test and Score} widget.\n\\widget{Confusion Matrix} now shows the mistakes of the model (scored with cross-validation). We can select them and inspect them further in a \\widget{Spectra} widget. Here we colored them by the predicted class (see the Menu). \f\n\n\\begin{wrapfigure}{o}{1.1\\textwidth}\n  \\centering\n  \\includegraphics[width=1.1\\textwidth]{sp_classification-fig3.png}%\n  \\label{fig:spectra_classification-fig3}\n\\end{wrapfigure}\n\nBut how does the model make its decisions? We already inspected a different model, classification tree, where each node represents a decision on a value of a column.  \\widget{Logistic regression} works differently. On the training data it computes weights for all columns (wavelengths), which are then used for prediction, where values are multiplied with weights. To see the weights, connect \\widget{Logistic Regression} to a \\widget{Data Table}. \n\n\\begin{wrapfigure}{o}{\\textwidth}\n%  \\centering\n  \\vspace{-0.7cm}\n  \\includegraphics[width=0.9\\textwidth]{sp_classification-fig4.png}\n  \\label{fig:spectra_classification-fig4}\n\\end{wrapfigure}\nWe get a table that is hard to understand. What if we visualize it? First, \\widget{Transpose} the data. Then, use \\widget{Select Columns} to make the visualization prettier: in the widget remove the intercept.\n\nNow, open \\widget{Logistic Regression} and try changing its parameters. 
Observe the effect on the weights.\n\n\\begin{figure}[h]\n\\hspace{-1cm}\\stackinset{r}{0\\linewidth}{t}{-0.12\\linewidth}\n  {\\includegraphics[scale=0.35]{sp_classification-fig5a.png}}\n  {\\includegraphics[scale=0.35]{sp_classification-fig5b.png}}\n  \\label{fig:spectra_classification-fig5}\n\\end{figure}\n", "meta": {"hexsha": "e0bdab2fdbf02ed61fddbeacae9c741462e21211", "size": 3076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/spec-030-classification/spectra-classification.tex", "max_stars_repo_name": "PrimozGodec/orange-lecture-notes", "max_stars_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-10-13T14:31:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:47:06.000Z", "max_issues_repo_path": "chapters/spec-030-classification/spectra-classification.tex", "max_issues_repo_name": "PrimozGodec/orange-lecture-notes", "max_issues_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-02-26T13:33:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-25T19:15:34.000Z", "max_forks_repo_path": "chapters/spec-030-classification/spectra-classification.tex", "max_forks_repo_name": "PrimozGodec/orange-lecture-notes", "max_forks_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-01-19T16:55:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-21T20:35:41.000Z", "avg_line_length": 59.1538461538, "max_line_length": 448, "alphanum_fraction": 0.7747074122, "num_tokens": 843, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.5921761488667989}}
{"text": "\n\\newpage\n\\subsection{Observations of the size of hidden layers}\n\nAn interesting phenomenon is that if we increase the size of the hidden layer, the two-layer linear NN descend faster from the beginning than single-layer linear NN. We use 0 and 1 in MNIST dataset to observe if this phenomenon also occurs in MNIST dataset.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=2.5in]{MNIST_hidden1000_step50.png}\n\t\\caption{MNIST: $d_1 = 1000, lr = 0.1$}\n\\end{figure}\n\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=2in]{MNIST_hidden8000_step1000.png}\n\t\\includegraphics[width=2in]{MNIST_hidden8000_step30.png}\n\t\\caption{MNIST: WideNet($d_1 = 8000$) vs NarrowNet($d_1 = 100$)}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=2in]{MNIST_hidden64000_1.png}\n\t\\includegraphics[width=2in]{MNIST_hidden64000_2.png}\n\t\\caption{MNIST: $d_1 = 2000,4000,8000,16000$}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=2in]{MNIST_hidden64000_3.png}\n\t\\includegraphics[width=2in]{MNIST_hidden64000_4.png}\n\t\\caption{MNIST: $d_1 = 16000, 32000, 64000$}\n\\end{figure}\n\n\\begin{itemize}\n\t\\item When $d_1 = 1000$, Net1 descend slower than single-layer linear NN from the beginning, but catch up and surpass Net0\n\t\\item When $d_1 > 2000$, Net1 descend faster than Net0 from the beginning. And the speed goes faster as we increase the size of hidden-layer $d_1$.\n\t\\item If we put Net0(Single-layer), NarrowNet1(2-layer with $d_1 = 100$), WideNet1(2-layer with $d_1 = 8000$) together, we can observe that Net0 descend faster at the beginning than NarrowNet1, but NarrowNet1 catches up and surpass Net0 soon. WideNet1 is much more faster than the other two at the beginning and remain the fastest all the time.\n\\end{itemize}\n\n\\subsection{Observation of initializations}\n\nBut if we look at how Pytorch do initialization, we can see that their initialization method is related to the size of hidden layer. For example, for some layer in linear NN, if input size is $d_{in}$, output size is $d_{out}$, then Pytorch use the following initialization:\n\\begin{equation}\n\tW,b \\sim U(-\\frac{1}{\\sqrt{d_{in}}},\\frac{1}{\\sqrt{d_{in}}}),\n\\end{equation}\n which means each single parameter in $W$ and $b$ is uniformly sampled from interval $(-\\frac{1}{\\sqrt{d_{in}}},\\frac{1}{\\sqrt{d_{in}}})$.\\\\\n \n We know that in our experiments, there are mainly three things that determine the training process: (1) the size of hidden layer, (2) Initialization, (3) learning rate(step size). But if we use a default initialization, we are hard to draw a conclusion because we don't know if this kind of initializaiton is the best in different hidden-size settings. So we do a simple experiment to test this. \\\\\n \\indent We only consider a 2-layer linear NN where the fisrt layer is unbiased. So we need to initialize the parameters $W_1,W_2,b$. 
We set\n \\begin{equation}\n \tW_1,W_2,b \\sim U(-\\beta,\\beta),\n \\end{equation}\n and change the value of $\\beta$, to see if it will influence the descent of loss.\n \n  \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[width=1.5in]{MNIST_hidden10_uniform1.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden10_uniform2.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden10_uniform3.png}\n \t\\caption{MNIST: $d_1 = 10$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n \\end{figure}\n\n \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[width=1.5in]{MNIST_hidden100_uniform1.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden100_uniform2.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden100_uniform3.png}\n \t\\caption{MNIST: $d_1 = 100$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n \\end{figure}\n\n \\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.5in]{MNIST_hidden500_uniform1.png}\n\t\\includegraphics[width=1.5in]{MNIST_hidden500_uniform2.png}\n\t\\includegraphics[width=1.5in]{MNIST_hidden500_uniform3.png}\n\t\\caption{MNIST: $d_1 = 500$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n\\end{figure}\n\n \\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.5in]{MNIST_hidden1000_uniform1.png}\n\t\\includegraphics[width=1.5in]{MNIST_hidden1000_uniform2.png}\n\t\\includegraphics[width=1.5in]{MNIST_hidden1000_uniform3.png}\n\t\\caption{MNIST: $d_1 = 1000$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n\\end{figure}\n \n  \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[width=1.5in]{MNIST_hidden5000_uniform1.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden5000_uniform2.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden5000_uniform3.png}\n \t\\caption{MNIST: $d_1 = 5000$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n \\end{figure}\n \n   \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[width=1.5in]{MNIST_hidden10000_uniform1.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden10000_uniform2.png}\n \t\\includegraphics[width=1.5in]{MNIST_hidden10000_uniform3.png}\n \t\\caption{MNIST: $d_1 = 10000$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n \\end{figure}\n\n   \\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.5in]{MNIST_1hidden_step50_uniform1.png}\n\t\\includegraphics[width=1.5in]{MNIST_1hidden_step50_uniform2.png}\n\t\\includegraphics[width=1.5in]{MNIST_1hidden_step50_uniform3.png}\n\t\\caption{MNIST: $d_1 = 10,100,500,1000,5000,10000$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n\\end{figure}\n \n    \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[width=1.5in]{MNIST_1hidden_step10_uniform1.png}\n \t\\includegraphics[width=1.5in]{MNIST_1hidden_step10_uniform2.png}\n \t\\includegraphics[width=1.5in]{MNIST_1hidden_step10_uniform3.png}\n \t\\caption{MNIST: $d_1 = 10,100,500,1000,5000,10000$, $W_1,W_2,b \\sim U(-\\beta,\\beta)$}\n \\end{figure}\n ", "meta": {"hexsha": "68e57b3158dd6efc9043ba9430a05f334a6031f6", "size": 5464, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Wide_LinearNN.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Wide_LinearNN.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Wide_LinearNN.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.9159663866, "max_line_length": 399, "alphanum_fraction": 0.7494509517, "num_tokens": 1879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5921761407418655}}
{"text": "\\section{closed channel transfer matrix folding}\r\n\r\nNow that we have a matrix of size $(N+N_c)x(N+N_c)$ it would be convenient to be able to represent it as a matrix of size $NxN$.  We will call this reduction \"folding\" as we are folding the closed channels into a smaller matrix. There is conservation of information in this matrix of reduced rank as the elements are messier.  The folding operation occurs one closed channel at a time, but it is shown by a recursion relation that the operation can be done by induction for an infinite number of closed channels.\r\n\r\nBagwell does the operation but gives an incorrect recursion relation. See~\\cite{1990_Bagwell} page 358, equation 25.\r\n\r\ndealing with a 4x4 matrix (2 open and 2 closed channels), solve for $t_{24}$ in the fourth equation:\r\n\\begin{equation}\r\nt_{24} = -\\frac{\\Gamma_{41}}{\\Gamma_{44}+2\\kappa_4} t_{21} + \r\n         -\\frac{\\Gamma_{42}}{\\Gamma_{44}+2\\kappa_4} t_{22} +\r\n         -\\frac{\\Gamma_{43}}{\\Gamma_{44}+2\\kappa_4} t_{23}\r\n\\end{equation}\r\nthen plug that into the three remaining equations. Group like terms\r\n\\begin{equation}\r\n0 = ((\\Gamma_{11}-\\frac{\\Gamma_{14}\\Gamma_{41}}{\\Gamma_{44}+2 \\kappa_4}) - 2 i k_1)t_{21} + \r\n     (\\Gamma_{12}-\\frac{\\Gamma_{14}\\Gamma_{42}}{\\Gamma_{44}+2 \\kappa_4})t_{22} + \r\n     (\\Gamma_{13}-\\frac{\\Gamma_{14}\\Gamma_{43}}{\\Gamma_{44}+2 \\kappa_4})t_{23}  \r\n\\end{equation}\r\nNow we can write a 3x3 matrix\r\n\\begin{equation}\r\n \\left( \\begin{array}{c}\r\n0 \\\\\r\n-2 i k_2 \\\\\r\n0 \\end{array} \\right) =\r\n \\left( \\begin{array}{cccc}\r\n(\\Gamma_{11}-\\frac{\\Gamma_{14}\\Gamma_{41}}{\\Gamma_{44}+2 \\kappa_4})-2 i k_1 &\r\n(\\Gamma_{12}-\\frac{\\Gamma_{14}\\Gamma_{42}}{\\Gamma_{44}+2 \\kappa_4})           & \r\n(\\Gamma_{13}-\\frac{\\Gamma_{14}\\Gamma_{43}}{\\Gamma_{44}+2 \\kappa_4})            \\\\\r\n(\\Gamma_{21}-\\frac{\\Gamma_{24}\\Gamma_{41}}{\\Gamma_{44}+2 \\kappa_4})         &\r\n(\\Gamma_{22}-\\frac{\\Gamma_{24}\\Gamma_{42}}{\\Gamma_{44}+2 \\kappa_4})-2 i k_2 &\r\n(\\Gamma_{23}-\\frac{\\Gamma_{24}\\Gamma_{43}}{\\Gamma_{44}+2 \\kappa_4})              \\\\\r\n(\\Gamma_{31}-\\frac{\\Gamma_{34}\\Gamma_{41}}{\\Gamma_{44}+2 \\kappa_4})         &\r\n(\\Gamma_{32}-\\frac{\\Gamma_{34}\\Gamma_{42}}{\\Gamma_{44}+2 \\kappa_4})         &\r\n(\\Gamma_{33}-\\frac{\\Gamma_{34}\\Gamma_{43}}{\\Gamma_{44}+2 \\kappa_4})+2 i \\kappa_3 \\end{array} \\right)\r\n \\left( \\begin{array}{c}\r\nt_21 \\\\\r\nt_22 \\\\\r\nt_23 \\end{array} \\right) \r\n\\label{singlescattererfirstfold}\r\n\\end{equation}\r\nobserve the recursion relation\r\n\\begin{equation}\r\n\\Gamma_{ij,4} = \\Gamma_{ij} - \\frac{\\Gamma_{i4}\\Gamma_{4j}}{\\Gamma_{44}+2 \\kappa_4}\r\n\\end{equation}\r\nwhich generalizes to a recursion relation\r\n\\begin{equation}\r\n\\Gamma_{ij}^{(n)} = \\Gamma_{ij}^{(n+1)} - \\frac{\\Gamma_{i(n+1)}^{(n+1)}\r\n\\Gamma_{(n+1)j}^{(n+1)}}{\\Gamma_{(n+1)(n+1)}^{(n+1)}+2 \\kappa_{(n+1)}}\r\n\\end{equation}\r\nThings to keep in mind: multiplying folded matrices is not equivalent to multiplying large matrices and then folding.  
This recursion relation demonstrates that an infinite number of closed channels can be accounted for (with the proper normalization).\r\n% see Ben's notes, 20080618\r\nNow we'll repeat the process of folding for the general one scatterer matrix for N open channels and $N_c$ closed channels.\r\n\\begin{equation}\r\n\\left(\r\n\\left( \\begin{array}{ccc}\r\n\\hat{\\Gamma}_{pp} & | & \\hat{\\Gamma}_{pq} \\\\\r\n--- & + & --- \\\\\r\n\\hat{\\Gamma}_{qp} & | &  \\hat{\\Gamma}_{qq} \\end{array}\r\n\\right) - 2 i \r\n\\left( \\begin{array}{cccc}\r\nk_1 &     &        & 0         \\\\\r\n    & k_2 &        &           \\\\\r\n    &     & \\ddots &           \\\\\r\n  0 &     &        & k_{N+N_c} \\end{array} \r\n\\right)\r\n\\right)\r\n\\left( \\begin{array}{c}\r\n\\vec{t}_p \\\\\r\n\\vec{t}_q\\end{array} \r\n\\right) = free terms from input\r\n\\end{equation}\r\nwhere if $n>N$ then $k=i\\kappa$. \r\n\r\nDo the bottom half (closed channels only) of the matrix multiplication,\r\n\\begin{equation}\r\n\\hat{\\Gamma}_{qp} \\vec{t}_p + (\\hat{\\Gamma}_{qq} + 2 \\hat{\\kappa}_q)\\vec{t}_q =\r\n\\left( \\begin{array}{c}\r\n\t0 \\\\\r\n\t\\vdots \\\\\r\n\t0 \\end{array}\r\n\\right)_q\r\n\\end{equation}\r\nZeros on the left since the evanescent modes can not have inputs. No $k$ dependence since there are no diagonal elements.\r\n\r\nSolve for $\\vec{t}_q$,\r\n\\begin{equation}\r\n(\\Gamma_{qq}+2 \\vec{\\kappa}_q) \\vec{t}_q = - \\hat{\\Gamma}_qp \\vec{t}_p\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\vec{t}_q = -(\\hat{\\Gamma}_{qq} + 2 \\hat{\\kappa}_q)^{-1} (\\hat{\\Gamma}_{qp} \\vec{t}_p)\r\n\\end{equation}\r\n\r\nNow it's time for the upper set (open channels)\r\n\\begin{equation}\r\nfree terms = (\\hat{\\Gamma}_{pp}-2 i \\hat{k}_p)\\vec{t}_p + \\hat{\\Gamma}_{pq} \\vec{t}_q\r\n\\end{equation}\r\nplug into $\\vec{t}_q$\r\n\\begin{equation}\r\nfree terms = (\\hat{\\Gamma}_{pp}-2 i \\hat{k}_p)\\vec{t}_p - \\hat{\\Gamma}_{pq} \r\n((\\hat{\\Gamma}_{qq}+2 \\vec{\\kappa}_q)^{-1})(\\hat{\\Gamma}_{qp} \\vec{t}_p)\r\n\\end{equation}\r\nfactor out $\\vec{t}_p$\r\n\\begin{equation}\r\nfree terms = ((\\hat{\\Gamma}_{pp}-2 i \\hat{k}_p)\\vec{t}_p - \\hat{\\Gamma}_{pq} \r\n((\\hat{\\Gamma}_{qq}+2 \\vec{\\kappa}_q)^{-1})\\hat{\\Gamma}_{qp}) \\vec{t}_p\r\n\\end{equation}\r\nwhich compared to equation B8 in \\cite{2007_Froufe-Perez_PRE}\r\n\r\n\\begin{equation}\r\n\\hat{\\tilde{U}}_{pp} = \\hat{U}_{pp} - \\hat{U}_{pq} \r\n\\frac{1}{\\sqrt{2 \\vec{\\kappa}_Q}}\\frac{1}{I+\r\n\\frac{1}{\\sqrt{2 \\kappa_Q}}\\hat{U}_{QQ}\\frac{1}{\\sqrt{2 \\kappa_Q}} }\r\n\\frac{1}{\\sqrt{2 \\kappa_Q}}\\hat{U}_{QP}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\hat{\\tilde{U}}_{pp} = \\hat{U}_{pp} - \\hat{U}_{pq}\r\n\\frac{1}{I 2 \\vec{\\kappa}_Q + 2 \\vec{\\kappa}_Q \\hat{U}_{QQ}}\\hat{U}_{QP}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\hat{\\tilde{U}}_{pp} = \\hat{U}_{pp} - \\hat{U}_{pq} (2 \\kappa_Q + U_{QQ})^{-1} U_{QP}\r\n\\end{equation}\r\n\r\nThey match!", "meta": {"hexsha": "c03fd6c203fe002c5bb78fe5ba1448be2f6e988b", "size": 5536, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_open_closed_channel_folding.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_open_closed_channel_folding.tex", 
"max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_open_closed_channel_folding.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9365079365, "max_line_length": 513, "alphanum_fraction": 0.6304190751, "num_tokens": 2040, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5920661315777628}}
{"text": "%outlines the background behind the sequential probability ratio test.\n\n\\note{This section assumes that the reader has a basic familiarity with the terminology and techniques of hypothesis testing in statistics.}\n\nA statistical test is a mechanism for making quantitative decisions about a process, by determining how well the data stand in agreement with given predicted probabilities \\cite{cowan1998statistical}. Statistical tests are usually used to draw conclusions about a \\textit{statistical hypothesis}, which is a testable hypothesis based on observations of a process that is modelled using random variables. There are many texts that outline the statistical tests that can be used to make a decision between accepting and rejecting the null hypothesis, $H_0$, for example \\cite{IntroductionToMathematicalStatistics}. The usual context in which this occurs is one in which the data has been collected in advance and the sample size is fixed and known. The standard procedure for testing a simple hypothesis involves using a Uniformly Most Powerful (UMP) Test for a fixed value of the probability of making a Type \\Romannum{1} error, $\\alpha$ \\cite[p.~253]{IntroductionToMathematicalStatistics}. The subsequent sections discuss how this can be extended to a sequential paradigm, where the sample size is not fixed.\\note{An example might be of value here} \\par\n\nThere exists a branch of statistical hypothesis testing called \\textit{sequential hypothesis testing}, which is used when the sample size is not fixed in advance \\cite[p.~375]{IntroductionToMathematicalStatistics}. Given a hypothesis to test, $H_0$, this means the decision process goes beyond deciding whether to accept or reject the null hypothesis, but to either\n\\begin{enumerate}\n    \\item Accept the hypothesis being tested, $H_0$.\n    \\item Reject the hypothesis being tested, $H_0$\n    \\item Continue the experiment by making a further observation.\n\\end{enumerate}\n\nIt is clear that samples are gathered as long as 1). or 2). above are not chosen, which intuitively corresponds to the notion of making an informed decision, where it is desirable to ensure that enough data has been gathered to draw a meaningful conclusion. In the sequential testing paradigm, two kinds of error may be committed, as with the non-sequential case: we may reject the null hypothesis when it is true (commit a Type \\Romannum{1} error) or we may accept the null hypothesis when some alternative hypothesis is true (commit a Type \\Romannum{2} error). \n\nThe Sequential Probability Ratio Test (SPRT), which was devised by \\citeauthor{Wald1945SequentialHypotheses}, proposes a statistical test for simple hypotheses with specified fixed values, which ensures that the probability of Type \\Romannum{1} and Type \\Romannum{2} errors do not exceed $\\alpha$ and $\\beta$ respectively \\cite{Wald1945SequentialHypotheses}. \\citeauthor{Wald1948OptimumTest} proved that the SPRT is optimal, in the sense that of all tests with the same power, the SPRT requires on average the fewest number of observations to reach a decision \\cite{Wald1948OptimumTest}. A strong advantage of the test is that it can be carried out without determining any probability distributions whatsoever \\cite{Wald1945SequentialHypotheses}. The SPRT can be thought of as a stopping rule for sampling in a stochastic process. 
For the sake of brevity, we omit the full derivation of the rule and the proof of optimality, but instead refer the reader to \\cite{Wald1945SequentialHypotheses}, \\cite{Wald1950BayesProblems} and \\cite{Wald1948OptimumTest}. The SPRT can be carried out following the procedure shown in Algorithm \\ref{alg:SPRT}.\n\n\n%\\begin{enumerate}\n%    \\item The null and alternative hypotheses are stated, $H_0$ and $H_1$.\n%    \\item The desired type \\Romannum{1} and type \\Romannum{2} error rates are specified as $\\alpha$ and $\\beta$.\n%    \\item Calculate the values $A = \\frac{1-\\beta}{\\alpha}$ and $B = \\frac{\\beta}{1-\\alpha}$\n%    \\item \n%\\end{enumerate}\n\n\n\\begin{algorithm}{}\n\\caption{The Sequential Probability Ratio Test Algorithm}\n\\label{alg:SPRT}\n\n\\begin{algorithmic}[1]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n%Input\n\\REQUIRE $\\newline \\alpha \\text{: \\quad The upper limit in probability for making a Type \\Romannum{1} error.}$\n\\newline $\\beta \\text{: \\quad The upper limit in probability for making a Type \\Romannum{2} error.}$\n\\newline $H_0 \\text{: \\quad The null hypothesis.}$\n\\newline $H_1 \\text{: \\quad The alternative hypothesis.}$\n\\newline $(x_1, ..., x_m) \\text{: \\quad The m observations made so far.}$\n%Output\n\\ENSURE  $\\newline$ A decision to accept $H_0$, accept $H_1$ or make another observation.\\\\\n\\hfill\\pagebreak\n\n\\STATE Calculate $p_{0m}=p_{0m}(x_1, ..., x_m)$, the probability of observing the data under the assumption $H_0$ is true.\n\n\\STATE Calculate $p_{1m}=p_{1m}(x_1, ..., x_m)$, the probability of observing the data under the assumption $H_1$ is true.\n\n\\STATE Calculate the values $A = \\frac{1-\\beta}{\\alpha}$ and $B = \\frac{\\beta}{1-\\alpha}$\n\n\\STATE Accept $H_1$ if $\\frac{p_{1m}}{p_0m} \\geq A$\n\n\\STATE Accept $H_0$ if $\\frac{p_{1m}}{p_0m} \\leq B$\n\n\\STATE Make an additional observation if $B < \\frac{p_{1m}}{p_0m} < A$\n\n\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\nThe main idea, as is the case with simple non-sequential hypothesis tests, is based on the idea that if the likelihood ratio ($\\frac{p_{1m}}{p_{0m}}$) lies in some \\textit{critical region}, $C$, then we may reject the null hypothesis \\cite{IntroductionToMathematicalStatistics}. The SPRT extends this by partitioning the m-dimensional sample space $M_m$ into three mutually exclusive regions, $R_m^0$, $R_m^1$ and $R_m$. After the $i_{th}$ observation $(x_i)$ has been drawn, $H_0$ is accepted if $(x_1, ..., x_i)$ lies in $R_m^0$, $H_1$ will be accepted if $(x_1, ..., x_i)$ lies in $R_m^1$ or a $i+1_{th}$ observation will be drawn if $(x_1, ..., x_i)$ lies in $R_m$ \\cite{Wald1945SequentialHypotheses}. The process of how to choose $R_m^0$, $R_m^1$ and $R_m$ is outlined in Section 3 of \\cite{Wald1945SequentialHypotheses}, which goes beyond the scope of this thesis. The derivation essentially shows that $(x_1, ..., x_i) \\in R_m^0$ is equivalent to $\\frac{p_{1m}}{p_0m} \\leq B$, $(x_1, ..., x_i) \\in R_m^1$ is equivalent to $\\frac{p_{1m}}{p_0m} \\geq A$, and $(x_1, ..., x_i) \\in R_m$ is equivalent to $B < \\frac{p_{1m}}{p_0m} < A$.\n\n\\subsubsection{Summary}\nThe SPRT is a sequential hypothesis test that formulates a stopping rule in order to draw conclusions based on statistical data. 
It has frequently been in quality control studies, where samples can be expensive to gather, but it can be applied in other scenarios where sampling can be expensive \\cite{Ou2010AnMean}. We use it in Section \\ref{subsubsec:SeachTerminationMethodology}", "meta": {"hexsha": "d9be1cbe1e35fca843b00be207794fd678e86fb7", "size": 6825, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/SPRT.tex", "max_stars_repo_name": "DavidLSmyth/ResearchMScThesis", "max_stars_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/SPRT.tex", "max_issues_repo_name": "DavidLSmyth/ResearchMScThesis", "max_issues_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-18T11:59:42.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-18T11:59:42.000Z", "max_forks_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/SPRT.tex", "max_forks_repo_name": "DavidLSmyth/ResearchMScThesis", "max_forks_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.0, "max_line_length": 1153, "alphanum_fraction": 0.7624908425, "num_tokens": 1826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173801068221, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5920661281047753}}
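To illustrate Algorithm \ref{alg:SPRT}, here is a small self-contained sketch for two simple Bernoulli hypotheses; the function name and the choice of Bernoulli likelihoods are our own, made only to keep the example concrete.

\begin{verbatim}
import math

# SPRT for H0: X ~ Bernoulli(p0) vs. H1: X ~ Bernoulli(p1).
def sprt(observations, p0, p1, alpha, beta):
    log_A = math.log((1 - beta) / alpha)   # accept-H1 threshold
    log_B = math.log(beta / (1 - alpha))   # accept-H0 threshold
    log_lr = 0.0                           # log of p_1m / p_0m
    for m, x in enumerate(observations, start=1):
        log_lr += math.log((p1 if x else 1 - p1) /
                           (p0 if x else 1 - p0))
        if log_lr >= log_A:
            return ("accept H1", m)
        if log_lr <= log_B:
            return ("accept H0", m)
    return ("continue sampling", len(observations))

# A run of ten successes decides for H1 after only 6 observations:
print(sprt([1] * 10, p0=0.5, p1=0.9, alpha=0.05, beta=0.05))
\end{verbatim}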
{"text": "\\documentclass[notitlepage]{problem-solving}\n\n\\title{The Mystery of $1^\\pi$\\\\ Wrestling with Complex Exponentiation}\n\\author{Matt McCarthy}\n\\date{June 2016}\n\n\\addbibresource{complex-logs.bib}\n\n\\nocite{*}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{problem*}\n\tFind what step is wrong in the following statements.\n\t\\begin{align*}\n\t\t1 &= 1^\\pi\\\\\n\t\t&= \\paren{e^{2i\\pi}}^\\pi\\\\\n\t\t&=e^{2i\\pi^2}\\\\\n\t\t&=\\cos 2\\pi^2 + i\\sin 2\\pi^2\\\\\n\t\t&\\approx 0.6296 +0.7768 i\n\t\\end{align*}\n\\end{problem*}\n\n\\section{Background}\n\n\\subsection{Complex Logarithms}\n\nIn $\\RR$ we can define logarithms in various ways such as the inverse of the exponential or as\n\\[\n\t\\ln x = \\int \\frac{1}{x} dx.\n\\]\nHowever, in $\\CC$ the original definition (inverse of the exponential) fails since\n\\[\n\te^{i\\theta} = e^{i\\theta + 2ki\\pi}\n\\]\nwhere $k\\in\\ZZ$.\nEssentially, the exponential is a bijection between $\\RR$ and $(0,\\infty)$ and is thus invertible.\nWhen we move to $\\CC$, the exponential function loses its injectivity since multiple domain elements get mapped to a single element in the range.\nHowever, defining logarithms as the antiderivative of $1/x$ still works.\n\\begin{definition}\n\tLet $z\\in\\CC$.\n\tThen $\\log z$ is defined as\n\t\\[\n\t\t\\log z := \\ln |z| +i\\arg z.\n\t\\]\n\tIf $\\arg z\\in(-\\pi,\\pi]$ we say it is the \\textit{principal logarithm} of $z$.\n\\end{definition}\n\\begin{corollary}\n\t\\[\n\t\t\\frac{d}{dz} \\log z = \\frac{1}{z}\n\t\\]\n\\end{corollary}\nAs we can see by the definition, the complex logarithm is \\textit{multivalued}, thus if we want it to be well-defined we need to restrict the argument of the input to an open interval of length $2\\pi$.\n\n\\subsection{Exponentiation with a Complex Base}\n\nBecause of these problems with the logarithm, exponentiation becomes less straightforward.\nIn the complex world, exponentiation is defined as follows.\n\\begin{definition}\n\tLet $z,w\\in\\CC$ with $w\\neq 0$.\n\tThen\n\t\\[\n\t\tw^z = e^{z\\log w}.\n\t\\]\n\\end{definition}\nHowever, this definition is equivalent to\n\\[\n\tw^z = e^{z(\\ln|w| + i\\arg w)}.\n\\]\nThus, due to the properties of the complex logarithm, exponentiation in the complex plane is also multivalued.\n\nWhen exponentiating, the following are true.\n\\begin{enumerate}\n\t\\item $z^w$ is single-valued iff $w\\in\\ZZ$.\n\t\\item $z^{m/n}$ has $n$ distinct values for $m,n\\in\\ZZ$ with $n>0$ and $\\gcd(m,n)=1$.\n\t\\item $z^x$ has infinitely many values if $x$ is irrational.\n\\end{enumerate}\n\n\\section{Solution}\n\n\\begin{problem*}\n\tFind what step is wrong in the following statements.\n\t\\begin{align*}\n\t\t1 &= 1^\\pi\\\\\n\t\t&= \\paren{e^{2i\\pi}}^\\pi\\\\\n\t\t&=e^{2i\\pi^2}\\\\\n\t\t&=\\cos 2\\pi^2 + i\\sin 2\\pi^2\\\\\n\t\t&\\approx 0.6296 +0.7768 i\n\t\\end{align*}\n\\end{problem*}\n\nThe problem is\n\\[\n\t1^\\pi = 1.\n\\]\nBy definition of complex exponentiation we have\n\\[\n\t1^\\pi = e^{\\pi\\log 1}.\n\\]\nHowever\n\\[\n\t\\log 1 = \\ln |1| + i\\arg 1 = 0 + 2ki\\pi\n\\]\nfor $k\\in\\ZZ$.\nThus,\n\\[\n\t1^{\\pi} = e^{2ki\\pi^2}.\n\\]\nSince $\\pi$ is irrational, there are infinitely many values that $1^\\pi$ can take.\nThe only value of $k$ that yields an answer of 1 is $k=0$.\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "811ab0e99f939c5f5249ea3f813f838b15f32361", "size": 3027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016-summer/complex-logs/complex-logs.tex", "max_stars_repo_name": 
"matt-mccarthy/problem-solving", "max_stars_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2016-summer/complex-logs/complex-logs.tex", "max_issues_repo_name": "matt-mccarthy/problem-solving", "max_issues_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016-summer/complex-logs/complex-logs.tex", "max_forks_repo_name": "matt-mccarthy/problem-solving", "max_forks_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.8717948718, "max_line_length": 201, "alphanum_fraction": 0.6769078295, "num_tokens": 1066, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.7371581510799252, "lm_q1q2_score": 0.5920661176642729}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{xcolor}\n\\usepackage[T1]{fontenc}\n\\usepackage{pagecolor}\n\\usepackage{amssymb}\n\\usepackage{lmodern}\n\\usepackage{mathtools, nccmath}\n\\usepackage{courier}\n\\usepackage[overload]{empheq}\n\\usepackage[inline, shortlabels]{enumitem}\n\\usepackage{amsmath}\n\\usepackage{mathtools} \n\\definecolor{myyellow}{RGB}{225,225,100}\n\\definecolor{myred}{RGB}{220,100,100}\n\\definecolor{mygreen}{RGB}{120,225,120}\n\\definecolor{myblue}{RGB}{100,200,255}\n\\definecolor{mypurple}{RGB}{200,140,255}\n\\definecolor{myorange}{RGB}{255,150,50}\n\\color{white}\n\\pagecolor{black}\n\\title{Solution to Linear algebra \\#1}\n\\author{@all.about.mathematics}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\\large\n\\section{Problem}\nLet $A$ be a square matrix. Then reflect its entries along the diagonal other than the diagonal and let the new matrix be $B$. Prove that $\\det(A)=\\det(B)$. \n\n\\medskip\n\\noindent\nFor example, if\n$$\nA=\n\\begin{bmatrix} \n1 & 4 & 7 \\\\\n2 & 5 & 8 \\\\\n3 & 6 & 9 \\\\\n\\end{bmatrix}\n\\implies\nB=\n\\begin{bmatrix} \n9 & 8 & 7 \\\\\n6 & 5 & 4 \\\\\n3 & 2 & 1 \\\\\n\\end{bmatrix}\n$$\n\\newpage\n\\section{Solution}\nLet's use the example provided in the last page. First, let's take the transpose of A, which does not change its determinant.\n$$A=\n\\begin{bmatrix} \n1 & 4 & 7 \\\\\n2 & 5 & 8 \\\\\n3 & 6 & 9 \\\\\n\\end{bmatrix}\n\\implies\nA^T=\n\\begin{bmatrix} \n1 & 2 & 3 \\\\\n4 & 5 & 6 \\\\\n7 & 8 & 9 \\\\\n\\end{bmatrix}$$\nWe can see that $\\mathbf{r_1}\\longleftrightarrow \\mathbf{r_3}$ and $\\mathbf{c_1} \\longleftrightarrow \\mathbf{c_3} $ turns $A^T$ into $B$.\n\\medskip\n\n\\noindent{Similarly, we can deduce that for a $n\\times n$ matrix $A$, the following EROs and ECOs can turn $A^T$ into it's corresponding $B$:}\n$$\\{\\mathbf{r_1}\\longleftrightarrow\\mathbf{r_n}\\:,\\: \\mathbf{r_2}\\longleftrightarrow\\mathbf{r_{n-1}}\\cdots \\mathbf{r_{\\left \\lfloor{n/2}\\right \\rfloor}}\\longleftrightarrow \\mathbf{r_{\\left \\lceil{n/2}\\right \\rceil}}\\} $$\n$$\\{\\mathbf{c_1}\\longleftrightarrow\\mathbf{c_n}\\:,\\: \\mathbf{c_2}\\longleftrightarrow\\mathbf{c_{n-1}}\\cdots \\mathbf{c_{\\left \\lfloor{n/2}\\right \\rfloor}}\\longleftrightarrow \\mathbf{c_{\\left \\lceil{n/2}\\right \\rceil}}\\}$$\nIt's trivial that both sets have the same number of operations. Let that number be $m$. 
Since these Type I operations multiply the determinant by $-1$, $\\det(B)$ can be calculated like this.\n$$\\det(B)=\\det(A^T)(-1)^m(-1)^m= \\det(A)(-1)^{2m}= \\det(A)$$\nTherefore our claim is proved.\n\n\\end{document}\n\n", "meta": {"hexsha": "003b4b56df3978e86d6e20b410c321c3127fb3d7", "size": 2445, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algebra/all.about.mathematics' questions/Linear algebra question 1.tex", "max_stars_repo_name": "Nanu00/LaTeX", "max_stars_repo_head_hexsha": "0f08a90c4e9ef78af42797670903636059ca0df2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2020-05-29T17:22:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T18:47:05.000Z", "max_issues_repo_path": "Algebra/all.about.mathematics' questions/Linear algebra question 1.tex", "max_issues_repo_name": "Nanu00/LaTeX", "max_issues_repo_head_hexsha": "0f08a90c4e9ef78af42797670903636059ca0df2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-06-26T07:33:59.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-11T12:14:49.000Z", "max_forks_repo_path": "Algebra/all.about.mathematics' questions/Linear algebra question 1.tex", "max_forks_repo_name": "Shreenabh664/LaTeX", "max_forks_repo_head_hexsha": "675e03f3ec555456b9a2cc714825ec75317848c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-22T07:50:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-08T05:11:14.000Z", "avg_line_length": 31.3461538462, "max_line_length": 220, "alphanum_fraction": 0.6965235174, "num_tokens": 892, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.8031737892899222, "lm_q1q2_score": 0.5920661055088166}}
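As a numerical spot-check of the claim (our addition), note that in numpy the reflection across the anti-diagonal can be written as A[::-1, ::-1].T, and its determinant indeed matches det(A).

\begin{verbatim}
import numpy as np

# Reflect across the anti-diagonal: reverse both axes, then transpose.
# For the 3x3 example above this reproduces the matrix B.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = A[::-1, ::-1].T
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
\end{verbatim}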
{"text": "%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"program-analysis\"\n%%% End:\n\n\\chapter{Semantics}\n\n\\section{Simple Imperative Language}\n\\label{sec:simple-language}\n\n\n\\begin{figure}[h]\n  \\begin{math}\n    \\begin{array}{lcll}\n      n & \\in & \\mathbb{V} & \\text{Scalar values}\\\\\n      x & \\in & \\mathbb{X} & \\text{Program variables}\\\\\n      \\odot & = & + | - | * | \\dots & \\text{Binary operators}\\\\\n      \\oslash & = & < | \\leq | == | \\dots & \\text{Comparison operators}\\\\\n      E & = & & \\text{Scalar expressions}\\\\\n        & | & n & \\text{Scalar constant}\\\\\n        & | & x & \\text{Variable}\\\\\n        & | & E \\odot E & \\text{Binary operation}\\\\\n      B & = & & \\text{Boolean expressions}\\\\\n        & | & x \\oslash n & \\text{Comparison of a variable with a constant}\\\\\n      C & = & & \\text{Commands (or Statements)}\\\\\n        & | &\\mathbf{skip} & \\text{No-Op}\\\\\n        & | & C; C & \\text{Sequence}\\\\\n        & | & x := E & \\text{Assignment}\\\\\n        & | & \\mathbf{input}(x) & \\text{Reading a value from input}\\\\\n        & | & \\mathbf{if}(B)\\{C\\} \\mathbf{else} \\{C\\} & \\text{Conditional statement}\\\\\n        & | & \\mathbf{while}(B)\\{C\\} & \\text{Loop statement}\\\\\n      P & = & C & \\text{Program}\\\\\n    \\end{array}\n  \\end{math}\n\n  \\caption{Syntax of a simple imperative language}\n  \\label{fig:syntax}\n\\end{figure}", "meta": {"hexsha": "af91e35ed71a613f54ef9ac00ac0bcfd05b3dd73", "size": 1316, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/sem.tex", "max_stars_repo_name": "skicombinator/nirvana", "max_stars_repo_head_hexsha": "d120744c0179b4c69c0c7ddc8b461e62486510f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-01-21T06:14:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T06:14:51.000Z", "max_issues_repo_path": "theory/sem.tex", "max_issues_repo_name": "sangwoo-joh/bible-raw-data", "max_issues_repo_head_hexsha": "d120744c0179b4c69c0c7ddc8b461e62486510f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/sem.tex", "max_forks_repo_name": "sangwoo-joh/bible-raw-data", "max_forks_repo_head_hexsha": "d120744c0179b4c69c0c7ddc8b461e62486510f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6315789474, "max_line_length": 86, "alphanum_fraction": 0.5303951368, "num_tokens": 438, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213691605412, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5919972506107851}}
{"text": "\\chapter{PRINCIPLES OF DIFFERENTIAL PRIVACY}\n\nDuring this chapter we are going to introduce the term of D.P., and its definition, alongside with the principles that need to be followed while applying it.\n\n\\section{Promise of Differential Privacy}\n\n\\par Differential Privacy is actually a promise made by the data handlers, to the participants of a study: \"You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies/ datasets/ info resources are available\". \n\\par The goal is to make the data widely available for analysis, while protecting the users. However, is it possible to learn nothing about an individual, while gathering useful information about a population? This is actually what D.P. is trying to achieve.\n\n\n\\section{Definition of Differential Privacy}\nBefore defining D.P., we must analyze some of the basic components of its definition.\n\n\\subsection{Randomized Response}\nRandomized response is one of the earliest privacy mechanisms, that is used to conduct surveys where taboo behaviour is studied. The participants in those surveys are asked to answer truthfully, while they do not want to be stigmatized. There is a micro-world of what we are trying to achieve, thus we are going to give the algorithm of the randomized response in order to answer a binary (yes/no) question.\n\n\\begin{itemize}\n    \\item Flip a coin.\n    \\item If it lands on heads, answer truthfully\n    \\item If it lands on tails, flip another one\n    \\item If it lands on heads, answer no, else, answer yes\n\\end{itemize}\n\nWe are going to analyze this algorithm and its success in later chapters, but for now, it is enough to know that there exists a simple mechanism that adds noise, and is rather accurate for large samples.\n\nBefore giving the definition of D.P., we must define its components. \n\n\\begin{itemize}\n    \\item \\textbf{Probability Simplex}, given a discrete set $B$, is denoted as $\\Delta(B)$ and is defined to be:\n    \\begin{align*}\n        \\Delta(B) = \\{ x\\in R^{|B|}: x_i \\geq 0 \\text { } \\forall i \\text{ and } \\sum_{i=1}^{|B|} x_i = 1\\}\n    \\end{align*}\n    \\item A \\textbf {Randomized algorithm} $M$ with domain $A$ and discrete range of results $B$, is associated with the mapping $M: A\\rightarrow\\Delta(B)$. \n    \\item \\textbf{Distance between Databases:} The $l_1$ norm of a database x is denoted $||x||_1$ and it is defined to be: $||x||_1 = \\sum_{i = 1}^{|x|} |x_i|$. Thus, the $l_1$ distance between 2 databases $x$ and $y$, is $||x-y||_1$, and the size of a database $x$ os $||x||_1$.\n    \n\\end{itemize}\n\n\\subsection{Definition}\nDifferential Privacy is defined as following:\n\\\\\n\\\\\nA randomized algorithm $M$ with domain $N^{|x|}$ is (\u03b5,\u03b4)-differentially private, if for all $S \\in Range(n)$ and for all $x,y \\in N^{|x|}$ s.t. $||x - y||_1 \\leq 1$\n$$ Pr[M(x) \\in S] \\leq e^\\epsilon Pr[M(y) \\in S] + \\delta$$\n\nwhere the probability space is over the coin flips of the mechanisms $M$. If $\\delta = 0$, we say that $M$ is \u03b5-differentially private.\n\n\\\\\nIt should be noted that D.P. is rather a definition than a strict algorithm. While relying on the definition of D.P., we can create different algorithms, which will all ensure that the result will be deferentially private. This allows us to create different forms of D.P., that will be analyzed later on this thesis.\n\nThe whole point of Differential Privacy, is that the output of a D.P. 
mechanism, should by \\emph{independent} of whether or not an individual is present in the domain $N$. The \"ability\" of the adversary to recognise the existence of a column in the dataset, is regularized by epsilon.\n\n\\section{The meaning of epsilon}\nIt is made clear from the above definition, that if we have a computational task, we might find different algorithms for applying D.P., but the result will always be of the same form: each user of the dataset, will get \u03b5-D.P.. But what does the epsilon parameter actually mean?\n\nBy reading the mathematical equation, we observe that the higher the value of epsilon, the bigger the difference between the two probabilities (minimum and maximum). Thus, we extract the following statement about the value of epsilon during the application of Differential Privacy:\n\n\\begin{itemize}\n    \\item The \\emph{lower the epsilon} value, the \\emph{higher the privacy} guarantees for the users of the dataset.\n    \\item The \\emph{higher the epsilon} value, the \\emph{more accurate the results} produced.\n\\end{itemize}\n\nIn practice, epsilon values vary in the range $(0,5]$, as lower values are prohibited, and higher values are considered extreme cases. However, as mentioned in [1],  when epsilon is small, failing to be (\u03b5,0)-differentially private is not necessarily alarming, if our algorithm is linearly increasing with \u03b5 (ex (2\u03b5,0)-D.P). This happens because of the nature of the epsilon parameter, which guarantees very strict boundaries between databases. However, when \u03b5 increases by a lot, users' privacy suffers. \n\nIn \\textbf{Figure 2.1}, we can see in general terms, the function between the epsilon and the accuracy error, as well as the protection guaranteed. We will discuss in later sections the details on how these graphs are created, but now is a good time to get an overall picture of the accuracy error produced when applying D.P.\n\\bigskip\n\\bigskip\\bigskip\n\n\\begin{figure}[!htb]\\centering\n      \\includegraphics[width=0.6\\textwidth]{images/epsilon_intro_graph.png}\n  \\caption{Accuracy Error as a function of epsilon}\n\\end{figure}\n\n\\section{Different forms of Differential Privacy}\n\nAs mentioned during the definition, due to the room that is left for its interpretation, there can be many forms of Differential Privacy. There are two major fields recognized, the \\emph{Global D.P.} and the \\emph{Local D.P.}.\n\nTheir major difference is the curator of the data. In the Global model, the curator must be trusted, as he collects the non-private data and has to pass them through a D.P. algorithm.\n\nOn the other hand, in the Local model, the curator may as well be untrusted, since the users perturb their data on their own, using a specific protocol. The key differences of the two forms are shown in the \\emph{Figure 2.2} below.\n\nAn other difference between the two models, is the amount of noise added. With the absence of a trusted curator, the users themselves must add a significant amount of noise into their data, in order to preserve their privacy. This of course results into a need of many users (several thousands), in order for the L.D.P. 
protocols to function correctly and accurately.\n\n\n\\begin{figure}[!htb]\\centering\n      \\includegraphics[width=0.3\\textwidth]{images/local_vs_global.png}\n  \\caption{Differences between LDP and GDP}\n\\end{figure}\n\nIn this thesis, we are going to examine both models, by quoting their definitions, observing already-existing algorithms, and creating our own L.D.P. protocol.\n\n\\section{Existing Problems of D.P.}\n\nAs every new step in Computer Science, Differential Privacy has some issues that are yet to be solved, and some others not covered by its definition. \n\nOne major problem is the behaviour of the protocols \\emph{when the number of users is limited}. The definition of D.P. is based on the alteration of the data in order not to reveal sensitive information. Thus, if a small amount of users are involved in those protocols, the accuracy of the results might be way off the standards that we set, in order to satisfy the epsilon requirements of the user.\n\nAnother (unsolvable) issue, that mainly lies on the basis of surveys, is  \\emph{the possibility that conclusions drawn from a survey may reflect statistical information about an individual}.\n\nFor example, if a survey about the correlation of smoking and dental problems is conducted, someone that has specific dental problems might be deemed as a smoker, despite keeping his privacy about the fact that he is smoking, during the survey. That is something that D.P. does not promise: unconditional freedom from distinguishing. This is not however a violation of the definition of D.P., as the survey teaches us that specific private attributes correlate with public observable attributes, since this correlation would be observed independent of the presence or absence of the individual in the survey.\n\nThere are several more issues as the ones covered above, however we are not going to focus on those, rather on the advantages of D.P.", "meta": {"hexsha": "f72cba508e0051fc2ac947671d5c0532ff4b8817", "size": 8425, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis_paper/GDP/DP_definition.tex", "max_stars_repo_name": "nikosgalanis/bsc-thesis", "max_stars_repo_head_hexsha": "b5521e995f266ff1aeb9fecc220650483630dc04", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-07-29T15:24:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T13:57:07.000Z", "max_issues_repo_path": "thesis_paper/GDP/DP_definition.tex", "max_issues_repo_name": "nikosgalanis/bsc-thesis", "max_issues_repo_head_hexsha": "b5521e995f266ff1aeb9fecc220650483630dc04", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis_paper/GDP/DP_definition.tex", "max_forks_repo_name": "nikosgalanis/bsc-thesis", "max_forks_repo_head_hexsha": "b5521e995f266ff1aeb9fecc220650483630dc04", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.4158415842, "max_line_length": 608, "alphanum_fraction": 0.7648664688, "num_tokens": 1994, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.5919668182375527}}
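The randomized-response mechanism from the definition section is simple to simulate. The sketch below is our own (names hypothetical) and also applies the standard unbiasing step: since $P(\text{yes}) = p/2 + 1/4$ for a true proportion $p$ of ``yes'' answers, the estimate $2(\hat{f}-1/4)$ recovers $p$ for large samples.

\begin{verbatim}
import random

# Coin-flip randomized response for a binary question:
# heads on coin 1 -> answer truthfully; otherwise flip coin 2,
# heads -> "no", tails -> "yes".
def randomized_response(truth: bool) -> bool:
    if random.random() < 0.5:       # coin 1: heads
        return truth
    return random.random() >= 0.5   # coin 2: heads -> False ("no")

population = [random.random() < 0.3 for _ in range(100_000)]  # true p = 0.3
reports = [randomized_response(t) for t in population]
frac_yes = sum(reports) / len(reports)
print(2 * (frac_yes - 0.25))        # approximately 0.3
\end{verbatim}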
{"text": "\\section{Recurrent Neural Networks}\n\\label{sec:recurrent_neural_networks}\n\nThe two major types of neural networks are distinguished by the structure of\ntheir internal weights.  Feedforward networks simply pipe their input through\nall the layers towards the output. They have proven very useful for tasks that\nrequire static, non-linear input-output mappings such as pattern recognition or\nclassification.  Processing time series is a different task because, to model a\nsequence, the network needs information about the past. In a FNN this is not\nthe case.  An input vector $\\vt{u}$ that is fed into the network $F$ does not\ncontain information about previous or future inputs and the prediction $\\vt{y}$\nhas to be made solely based on the current input:\n\\begin{equation}\n  \\vt{y} = F(\\vt{u})\n\\end{equation}\n\n\\begin{figure}\n  \\begin{minipage}[b]{.4\\textwidth}\n    \\centering\n    \\FeedForwardNet{1}\n    Feedforward Network\n  \\end{minipage}\n  \\hspace{.05\\textwidth}\n  \\begin{minipage}[b]{.5\\textwidth}\n    \\centering\n    \\RecurrentNet\n    Recurrent Network\n  \\end{minipage}\n  \\caption{The traditional feedforward network on the left only exhibits\n  forward connections while the recurrent network on the right has cyclic\n  connections within or possibly even between layers if there is more than one\n  hidden weight matrix.}\n  \\label{fig:fnn_rnn}\n\\end{figure}\n\nIn contrast, recurrent neural networks (RNN) possess cyclic weight connections.\nThe outputs of a layer have feedback weights that are connected to the same or\na previous layer.  This cyclic nature of RNNs mathematically makes them\ndynamical systems [\\cite{FUNAHASHI}] and it can be shown that they are\n\\emph{Turing complete} [\\cite{siegelmann1991}]. A brief description on the analogies\nbetween RNNs and dynamical systems is given in\nSec.~\\ref{ssub:state_space_model}. Roughly, RNNs maintain an internal state\n$\\vt{x}$ at all times which acts as a memory of previous inputs. At every time\nstep the network receives the previous internal state along with an input\nvector and returns a prediction $\\vt{y}$ as well as an updated internal state\n$\\vt{x}$:\n\\begin{equation}\n  \\label{eq:recurrent_network_F}\n  \\vt{y}, \\text{ } \\vt{x} = F(\\vt{u}, \\vec{x}_{t-1}).\n\\end{equation}\n\nThis enables RNNs to process time series data and they are thus qualified for\ntasks such as filtering, dynamic pattern recognition, and prediction.  RNNs are\nmost widely used in speech recognition or other language processing tasks, but\nthey are also highly interesting from a neurological point of view, as all\nbiological neural networks are recurrent.  Generally, they are highly promising\ntools for non-linear time series modelling.  They can be run in a\nself-monitoring fashion, which makes them very interesting for an automated\noutlier detection in large scale time series data.  Especially in the field of\nNatural Language Processing (NLP), RNNs have achieved impressive results while\nrequiring little preprocessing of datasets [\\cite{sutskever2011generating}].\n\n\n\\subsection{State Space Model}\n\\label{ssub:state_space_model}\n\nThe simplest dynamical system in discrete time is defined by a state space $M$,\na set of times $T$, and an evolution function $\\Phi$.  
The function $\\Phi$ maps\na given state $\\vec{x}_{t-1} \\in M$ at time $t \\in T$ to a new state $\\vt{x}$:\n\\begin{equation}\n  \\vt{x} = \\Phi(\\vec{x}_{t-1}).\n\\end{equation}\n\nA dynamical system with inputs $\\vt{u}$ and outputs $\\vt{y}$ is defined by\nthe state space representation:\n\\begin{align}\n  \\vt{x} &= \\Phi(\\vt{u}, \\vec{x}_{t-1}), \\\\\n  \\vt{y} &= \\Psi(\\vt{u}, \\vec{x}_{t-1}).\n\\end{align}\n\nIn a basic RNN the two functions $\\Phi$ and $\\Psi$ are defined with an input\nmatrix $\\wmatr{in}$, a recurrent weight matrix $\\wmatr{}$, and an output matrix\n$\\wmatr{out}$:\n\\begin{align}\n  \\label{eq:state_space}\n  \\vt{x} &= \\varphi(\\wmatr{in} \\vt{u} + \\wmatr{}\\vec{x}_{t-1}), \\\\\n  \\label{eq:readout}\n  \\vt{y} &= \\psi(\\wmatr{out} \\vt{x}),\n\\end{align}\n\nwhere $\\varphi$ denotes the component-wise applied, non-linear, state\nactivation function. In RNNs a typical choice is the hyperbolic tangent.  The\noutput activation function $\\psi$ is commonly chosen to be the identity\nfunction, resulting in a linear output layer.  A flow chart of the basic RNN is\nshown in Fig.~\\ref{fig:rnn_flow_chart}.  The input weights $\\wmatr{in}$ have\ndimensions $n \\times m$, hidden weights $\\wmatr{}$: $n \\times n$ and\noutput weights $\\wmatr{out}$: $n \\times k$. In the simple RNN the input is\nnot utilized by the output layer, but would be entirely possible to introduce\nanother matrix to do this.\n\nAn input series $\\mathbf{u}$ of length $N$ that is fed to the network one by\none produces $N$ internal states.\n\n\\begin{equation}\n  \\mathbf{x} = (\\vt{x}, \\vec{x}_{t+1}, ..., \\vec{x}_{t+N}),\n\\end{equation}\n\\begin{figure}\n  \\centering\n  \\RNNFlowChart\n  \\caption{Basic RNN cell flow chart. The recurrent weights are enclosed by the\n  non-linearity $\\varphi$, the output weights by the function $\\psi$.}\n  \\label{fig:rnn_flow_chart}\n\\end{figure}\n\n\n\nFrom Eq.~\\ref{eq:state_space}, it becomes evident that the internal state acts\nas a kind of memory of the network.  This memory is dynamic as opposed to the\nstatic memory brought about by weight adjustments of GD.  The latter is called\nlong-term memory. The dynamic memory of the internal RNN state is termed\n\\emph{short-term memory} (STM). STM will be further discussed in\nSec.~\\ref{sub:short_term_memory}  and brief computational analysis of the STM\ncapacity is given in Sec.~\\ref{sec:short_term_memory}. \n\nEvery new input overwrites a part of the previous internal state, gradually\nencoding the input sequence into $\\vt{x}$.  The length of an input sequence\nthat can be encoded into $\\vt{x}$ depends on the size of the internal state and\non the two matrices $\\wmatr{in}$ and $\\wmatr{}$.  Generally, the state size $n$\nmust be much larger than the input size $m$, in order to create an effective\nRNN. How an RNN is trained in the light of a time dependency of the weights is\ndescribed in Sec.~\\ref{sub:training_recurrent_neural_networks}.\n\nAs the internal states now contain information both about current and past\ninputs, it is possible for the output matrix $\\wmatr{out}$ to create an educated\nprediction of the next frame.  An RNN that receives its output as the next\ninput is called a freely running RNN and enables predictions further into the\nfuture as depicted in Fig.~\\ref{fig:annotated_rnn}.  
Of course, in this case the\nnumber of input and output units must be the same: $k = m$.\n\n\\begin{figure}\n  \\centering\n  \\RecurrentNetAnnotated\n  \\caption{RNN setup that is able to predict $n$ steps into the future by\n    feeding the output back into the input. The input and output layers are\n    fully connected. The internal weight connections can be sparse to speed\n    up the network for large reservoir sizes.\n  }\n  \\label{fig:annotated_rnn}\n\\end{figure}\n\n\n\n\\subsection{Training Recurrent Neural Networks}\n\\label{sub:training_recurrent_neural_networks}\n\nThere exist various different methods of training recurrent networks, which all\nhave their own benefits and drawbacks or just perform better or worse at\ndifferent tasks. No clear favourable approach, like the mini-batch GD algorithm\nfor feedfoward networks, was found yet.  This is due to several obstacles that\narise specifically during RNN training, which will be discussed below.  The\nthree most common methods are \\emph{Backpropagation Through Time} (BPTT),\n\\emph{Real-time Recurrent Learning} (RTRL, [\\cite{williams1989}]), and\n\\emph{Extended Kalman Filtering} (EKF, [\\cite{williams1992}]), the first of which\nwill be treated below.\n\n\\subsubsection{Backpropagation Through Time}\n\\label{ssub:backpropagation_through_time}\n\nBPTT is an adapted form the of classic backpropagation algorithm of feedforward\nnetworks as described in Sec.~\\ref{sec:feedforward_neural_networks} and was\ndeveloped for the first time by~[\\cite{mozerBPTT}].  The\ncyclic connections of RNNs prevent the direct application of the\nbackpropagation algorithm.  One solution is to \\emph{unroll} the network in\ntime by stacking the network on top of itself for a certain number of time\nsteps.  The depth of unrolling is determined by the length $N$ of the sequence\n$\\mathbf{u}$ that is fed into the network.\n\nBy unrolling the network in time, one practically ends up with a very deep\nfeedforward network with shared weights between the stacked layers of clones of\nthe network.  In the forward pass each clone, which now corresponds to a time\nstep in the sequence, receives the corresponding input $\\vt{u}$ and updates its\nown internal state $\\vt{x}$.  The internal state of each clone depends on its\ninput and on the internal state of the previous layer (at time $t-1$).  Finally\nthe output $\\vt{y}$ is computed by each clone.  The loss function that is\nminimized is defined by:\n\\begin{equation}\n  \\mathcal{L} = \\sum_{t=0}^{N} \\mathcal{L}_t\n              = \\sum_{t=0}^{N} || \\vt{d} - \\vt{y} ||_2.\n\\end{equation}\n\nThe weight adjustment is now done by a typical gradient descent algorithm.  
By\ncollecting all the weights and biases of the state space model in the variable\n$\\Theta$ we can write the weight adjustment as:\n\\begin{equation}\n  \\label{eq:batch_update}\n  \\Theta' = \\Theta + \\eta \\sum_t \\frac{\\partial \\mathcal{L}_t}{\\partial \\Theta}\n\\end{equation}\n\nThe expression for the gradient of the cost can by derived by applying the\nchain rule:\n\\begin{equation}\n  \\newcommand{\\loss}{\\mathcal{L}_t}\n  \\dd{\\loss}{\\Theta} = \\dd{\\loss}{\\vt{x}} \\dd{\\vt{x}}{\\Theta}\n  = \\dd{\\loss}{\\vt{x}} \\dd{\\vt{x}}{\\vec{x}_{t-1}} \\dd{\\vec{x}_{t-1}}{\\Theta},\n\\end{equation}\n\nwhich results in a product of derivatives, as every state depends on the\nprevious one.\n\\begin{align}\n  \\newcommand{\\loss}{\\mathcal{L}_t}\n  \\dd{\\loss}{\\Theta} = \\dd{\\loss}{\\vt{x}} \\dd{\\vt{x}}{\\vec{x}_{0}} \\dd{\\vec{x}_{0}}{\\Theta} \\\\\n  \\dd{\\vt{x}}{\\vec{x}_{0}} = \\prod_{i=1}^t \\dd{\\vec{x}_i}{\\vec{x}_{i-1}} \\label{eq:dxtdxN}\n\\end{align}\n\nThe last derivative of the state $\\vec{x}_0$ denotes the derivative of the first\nstate (starting to count from the perspective of the forward pass) of the\nunrolled network, which is a constant with respect to $\\Theta$.  From\nEq.~\\ref{eq:dxtdxN} one can see the origin of the vanishing and exploding\ngradient problems.  If the individual derivatives $\\dd{\\vt{x}}{\\vec{x}_{0}}$ are\nsmall, the gradient, being product of these derivatives, vanishes very quickly\nand explodes if they are large.  It can be shown that it `[...] is\n\\textit{sufficient} for the largest eigenvalue $\\lambda_1$ of the recurrent\nweight matrix to be smaller than one for long term components to vanish (as $t\n\\rightarrow \\infty$) and \\textit{necessary} for  it  to  be  larger  than  one\nfor  gradients  to explode.' -- [\\cite{razvan2012}].\nBy bounding the spectral radius\nof the recurrent weights to be smaller than one, it is thus possible to avoid\nthe exploding gradient problem. A solution to the vanishing gradient problem is\nmore complicated and involves advanced network architectures such as the long\nshort-term memory (LSTM) unit [\\cite{lstm}].  It introduces additional input,\nforget, and output layers, but the essential part is that one of the recurrent maps of\nthe unit is the identity function.  The derivatives in Eq.~\\ref{eq:dxtdxN}\nbecome one and the gradient can flow through many layers.  A completely\ndifferent approach is to avoid training the recurrent weights altogether, which\nwill be described in Sec.~\\ref{sec:reservoir_computing}.\n\n\n\\subsection{The Bifurcating State Space}%\n\\label{sub:the_bifurcating_state_space}\n\nAnother problem that arises with the optimization of recurrent weights is that\nthe state space is not necessarily continuous, which was shown by\n[\\cite{doya1992}].  The points at which the state space has discontinuities are\ncalled \\emph{bifurcations} and are extensively studied in non-linear dynamics.\nThey can cause discontinuities in the state space and thus impair the learning\nor prevent convergence to a local minimum completely.  To understand what\nbifurcations are and how they affect RNN training, we will consider the\nrecurrent part of a single unit RNN with the hyperbolic tangent as the\nactivation function.  
If the RNN has only one unit, the state $\\vt{x}$, as well\nas the weights and biases, becomes a scalar:\n\\begin{equation}\n  \\label{eq:single_unit_rnn}\n  x_{t+1} = \\tanh(w x_t + b).\n\\end{equation}\nThe parameter $w$ denotes the scalar weight of the single unit and $b = w_{in}\nu_t$ will serve as the bias of a constant input of $u_t = 1$.  In\nFig.~\\ref{fig:bif_evolution} we can see the evolution of $x_t$ (the map is also\neasy to iterate numerically; see the sketch at the end of this section).  Depending on\nthe initial value $x_0$ and the network parameters, the state converges to\ndifferent values as $t$ tends towards infinity. These values are called \\emph{fixed\npoints} $x^*$ and for them $x_t = x_{t+1}$ holds.  In particular, fixed points\nthat the state converges to are called \\emph{stable} fixed points (or\n\\emph{attractors}).  The second kind of fixed point is \\emph{unstable}.  The\nslightest deviation from an unstable fixed point will result in a flow away\nfrom the point, which is why they are also called \\emph{repellers}.  In the\nfirst three cases of Fig.~\\ref{fig:bif_evolution} a fixed point is always\nreached. The fourth example in the lower right shows representatives of\nthe oscillating kind, more specifically \\emph{period-2 cycles}, which\nrepeat every second iteration.\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{bif_evolution.pdf}\n  \\caption{Evolution of $x_t$ over time for different parameters $w$ and $b$.\n    Dashed lines show unstable fixed points. Apart from the expected fixed points\n    that $x_t$ converges to over time, there are also oscillations visible in\n    the last plot. Such oscillations that repeat every 2 iterations are called\n    \\emph{period-2 cycles} and they appear when $w<-1$.}\n  \\label{fig:bif_evolution}\n\\end{figure}\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{cobweb.pdf}\n  \\caption{Cobwebs for the same parameters as in Fig.~\\ref{fig:bif_evolution}.\n    The black dot is the initial value $x_0$. Drawing a vertical line to the\n    intersection with the activation function gives the new input $x_1$. Drawing\n    a horizontal line to the intersection with $y = x$ projects the point back to\n    the $x$-axis.  The projection is the next input to the activation function.\n    This process is repeated until a stable orbit or a fixed point is reached.}\n  \\label{fig:cobweb}\n\\end{figure}\n\n\n\nBy varying the parameters $w$ and $b$, the location and the nature of fixed\npoints can be changed. The blue line in the right plot of\nFig.~\\ref{fig:fixed_points} splits in two as $w$ is increased. The point at\n$w=1$ is called a \\emph{bifurcation} point.  Two things are happening here:\nthe stable fixed point at $x=0$ becomes unstable (indicated by\nthe dashed line) and two new stable fixed points above and below zero are\ncreated.\n\nFixed points can be found analytically by rewriting\nEq.~\\ref{eq:single_unit_rnn} with the assumption that $x^*$ is a fixed point\n(for which $x_t = x_{t+1}$):\n\\begin{equation}\n  \\label{eq:fp}\n  x^* = \\tanh(wx^* +b).\n\\end{equation}\nSolving once for $w$ and once for $b$ results in two equations for fixed\npoints:\n\\begin{align}\n  b &= \\tanh^{-1}(x^*) - wx^*\\\\\n  w &= \\frac{\\tanh^{-1}(x^*) - b}{x^*},\n\\end{align}\nwhich can be plotted for different values of $w$ and $b$\n(Fig.~\\ref{fig:fixed_points}).  The period-2 cycles cannot be found by\nanalysing Eq.~\\ref{eq:fp}.  Instead they can be found analytically by solving\nthe fixed-point condition of the twice-iterated map,\n\\begin{equation}\n  x^* = \\tanh(w\\tanh(wx^* + b) + b),\n\\end{equation}\nbut also by an intuitive, graphical approach called \\emph{cobwebbing}\n(Fig.~\\ref{fig:cobweb}).\nStarting from an initial point $x_0$, a vertical line is drawn to the value of\nthe activation function. Drawing a horizontal line until we intersect with\nthe graph of $y = x$ then gives the new input $x_1$, and so forth.\n\n\\subsubsection{Effect on RNN Training}%\n\\label{ssub:effect_on_rnn_training}\n\nNow that we have an understanding of what fixed points and bifurcations are, we\ncan examine their effect on RNN learning. Suppose we initialize the network\nwith a constant $b=0.1$ and $w=3$. If $x_0$ is negative, the nearest fixed\npoint is on the lower branch of the yellow line in the right plot of\nFig.~\\ref{fig:fixed_points}.  Further assume we train the network to output\n$x_\\infty = - 0.25$.  In this case, $w$ will be lowered to approach $x^*= -\n0.25$ until the bifurcation point is reached and the stable fixed point\nvanishes (the yellow line becomes a dashed line). The fixed point becomes unstable\nand the network output will change discontinuously as it jumps to the attractor\non the upper branch.  This will result in a discontinuity in the loss function\nand an infinite gradient.  After jumping to the upper branch, $w$ will grow\ntowards infinite values as the GD algorithm tries to approach the target value\nof $x = - 0.25$.  Similar examples can be constructed in which parameters\noscillate between two bifurcation points.\n\nThe weights of RNNs that are used in practice are normally initialized to very\nsmall values, which results in few fixed points. As the network learns, some of the weights increase, which\ndrives the RNN through bifurcations.  The resulting discontinuities produce very\nlarge gradients, causing large jumps of the GD algorithm which can nullify the\nlearning of hundreds of steps in a single iteration. Aside from the vanishing\nand exploding gradient problems, bifurcations are another major reason for the\nintricacy of RNN training.\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{bif_fixpoints.pdf}\n  \\caption{Fixed points for different parameter values of $b$ and $w$.  The\n    values of the fixed points $x^*$ are affected by varying the weights of the\n    RNN.  If there is more than one stable fixed point $x_t$ converges to the\n    attractor that is closest to the initial value $x_0$.  Dashed lines denote\n    unstable fixed points, which can only be reached if $x_0 = x^*$.}\n  \\label{fig:fixed_points}\n\\end{figure}\n\nOne solution to all the headaches that are caused by the bifurcations in the\nstate space, and RNN training in general, is surprisingly simple.  The cause of\nbifurcations is the adaptation of the recurrent weights of the RNN, so keeping them\nconstant would eliminate all the complications at once and even come with the\nadditional advantage of not having to change those weights in the first place.\nThis might seem like a rather drastic method, as the whole ML approach is based on\ngradual learning of weights. However, restricting the weight optimization to the\nnon-recurrent weights frees us from the notorious problems of RNNs and can\nperform just as well.  
This approach of keeping the recurrent weights fixed and training only the\nreadout is called \\emph{reservoir computing} and will be discussed in the next\nsection.\n", "meta": {"hexsha": "ebb2ef2ca6a8978620f0c1b40ffc1ba333155674", "size": 18775, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mainmatter/recurrent_nets.tex", "max_stars_repo_name": "nmheim/thesis", "max_stars_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-22T12:17:23.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-22T12:17:23.000Z", "max_issues_repo_path": "mainmatter/recurrent_nets.tex", "max_issues_repo_name": "nmheim/thesis", "max_issues_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mainmatter/recurrent_nets.tex", "max_forks_repo_name": "nmheim/thesis", "max_forks_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.3351206434, "max_line_length": 94, "alphanum_fraction": 0.7564314248, "num_tokens": 5079, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.875787001374006, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5918258592799674}}
{"text": "\\documentclass[a4paper,12pt]{scrartcl}\n\n\\usepackage{bm,amsmath,url,graphicx}\n\\usepackage{palatino}\n\\usepackage{color, xcolor}\n\\usepackage{listings}\n\n\n\\newcommand{\\n}{{\\bf n}}\n\\newcommand{\\h}{{\\bf h}}\n\\newcommand{\\x}{{\\bf x}}\n\\newcommand{\\HH}{{\\bf H}}\n\\newcommand{\\thb}{{\\boldsymbol{\\theta}}}\n\\newcommand{\\python}{{\\fbox{\\texttt{\\bfseries python}}\\quad}}\n\\newcommand{\\pen}{{\\fbox{\\texttt{\\bfseries pen\\&paper}}\\quad}}\n\n\\renewcommand{\\familydefault}{\\rmdefault}\n\n\n\\begin{document}\n\\section*{\\bf SGN-41007 Pattern Recognition and Machine Learning}\n\\emph{Exercise Set 2: January 18 - 20, 2017}\n\\bigskip\n\\sloppy\n\n\\lstdefinestyle{mystyle}{\n  belowcaptionskip=1\\baselineskip,\n  breaklines=true,\n  frame=single,\n  xleftmargin=\\parindent,\n  language=Python,\n  showstringspaces=false,\n  basicstyle=\\ttfamily,\n  keywordstyle=\\bfseries\\color{green!40!black},\n  commentstyle=\\itshape\\color{purple!40!black},\n  identifierstyle=\\color{blue},\n  stringstyle=\\color{orange},\n  moredelim=**[is][\\color{red}]{@}{@},\n}\n\n\\lstset{language=Python,style=mystyle} \n\n\n\\noindent\nExercises consist of both pen\\&paper and computer assignments.\nPen\\&paper questions are solved at home before exercises, while\ncomputer assignments are solved during exercise hours. The\ncomputer assignments are marked by text \\python and \nPen\\&paper questions by text \\pen\n\n\\begin{enumerate}\n\n\\item \\pen \\emph{Find a maximum likelihood estimator for one sample.}\n\nSuppose we measure one sample $x$ for which we assume the model\n\\[\np(x, \\lambda) = \\begin{cases}\\lambda\\exp(-\\lambda x) & x \\ge 0\\\\\n0, & x < 0\\end{cases}\n\\]\n\n\\begin{enumerate}\n\t\\item Write and simplify the expression for $\\ln p(x; \\lambda)$. For simplicity, you can\n\tomit the case $x<0$.\n\t\\item Compute the derivative $\\frac{\\partial \\ln p(x; \\lambda)}{\\partial \\lambda}$.\n\t\\item Solve $\\frac{\\partial \\ln p(x; \\lambda)}{\\partial \\lambda} = 0$ with respect to $\\lambda$.\n\tThis is the formula for maximum likelihood estimate of $\\lambda$.\n\\end{enumerate}\n\n\\item \\pen \\emph{Find a maximum likelihood estimator for many samples.}\n\nSuppose we measure samples $\\x = (x[0], x[1],\\ldots,x[N-1])$ for which we assume the model\n\\[\np(x[n], \\lambda) = \\begin{cases}\\lambda\\exp(-\\lambda x[n]) & x[n] \\ge 0\\\\\n0, & x[n] < 0\\end{cases}\n\\]\n\n\\begin{enumerate}\n\t\\item Write and simplify the expression for $\\ln p(\\x; \\lambda)$. You can assume the\n\tsamples are independent and $p(\\x; \\lambda)$ is simply the product of individual\n\tprobabilities: $p(\\x; \\lambda) = \\prod p(x[n], \\lambda)$.\n\t\\item Compute the derivative $\\frac{\\partial \\ln p(\\x; \\lambda)}{\\partial \\lambda}$.\n\t\\item Solve $\\frac{\\partial \\ln p(\\x; \\lambda)}{\\partial \\lambda} = 0$ with respect to $\\lambda$.\n\tThis is the formula for maximum likelihood estimate of $\\lambda$.\n\\end{enumerate}\n\n\\item \\python \\emph{Same as last week Question 1, but without numpy.}\n\nDownload the following file and extract the contents:\n{\\small\n\\begin{verbatim}\nhttp://www.cs.tut.fi/courses/SGN-41006/exercises/locationData.zip\n\\end{verbatim}\n}\n\\label{ex1}\n\n\\begin{enumerate}\n\\item Read the file into memory one line at a time (in a for loop). See similar example at the end of lecture slide set 1.\n\\item Load the same data into another variable using \\verb+numpy.loadtxt+. 
Check that the contents of the\ntwo arrays are equal using \\verb+numpy.all+ or \\verb+numpy.any+.\n\\end{enumerate}\n\n\\item \\python \\emph{Implement two utility functions we need later.}\n\n\\begin{enumerate}\n\\item Implement the following function:\n\\begin{verbatim}\np = gaussian(x, mu, sigma)\n\\end{verbatim}\nwhich returns the Gaussian density with parameters \\verb+mu+ and \\verb+sigma+ at point \\verb+x+.\nIn mathematical notation, the function is:\n\\[\np(x; \\mu,\\sigma) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{1}{2\\sigma^2} (x-\\mu)^2\\right]\n\\]\n\\item Implement the function:\n\\begin{verbatim}\np = log_gaussian(x, mu, sigma)\n\\end{verbatim}\nthat returns $\\ln p(x; \\mu,\\sigma)$. Do not use the previous function, because \nthe straightforward solution \\verb+p = numpy.log(gaussian(x, mu, sigma))+ would be numerically inaccurate.\nInstead, first manually simplify the expression \n\\[\n\\ln p(x; \\mu,\\sigma) = \\ln\\left(\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{1}{2\\sigma^2} (x-\\mu)^2\\right]\\right)\n\\]\nand write the corresponding code.\n\\item Plot both functions for \\verb+mu = 0+ and \\verb+sigma = 1+ in the interval\n$x\\in [-5,5]$ to check that the code is correct. \\emph{Hint:} create a vector\nof x-coordinates using \\verb+numpy.linspace+ and pass that through your functions\nand plot.\n\\end{enumerate}\n\n\\item \\python \\emph{Estimate sinusoidal parameters.}\n\n\\begin{enumerate}\n\\item Generate a 100-sample long synthetic test signal from the model:\n\\[\nx[n] = \\sin\\left( 2\\pi f_0 n \\right) + w[n], \\qquad n = 0,1,\\ldots, 99\n\\]\nwith $f_0 = 0.017$ and $w[n]\\sim {\\cal N}(0,0.25)$. Note that $w[n]$ is generated\nby \\verb+w = numpy.sqrt(0.25) * numpy.random.randn(100)+. Plot the result.\n\\item Implement code for estimating the frequency of \\verb+x+ using the maximum likelihood \nestimator:\n\\[\n\\hat{f}_0 = \\text{value of } f \\text{ that maximizes } \\left|\\sum_{n=0}^{N-1} x[n]e^{-2\\pi i f n}\\right|.\n\\]\nImplementation is straightforward by noting that the sum expression is in fact a dot product:\n\\[\n\\hat{f}_0 = \\text{value of } f \\text{ that maximizes } \\left|\\x \\cdot {\\bf e}\\right|,\n\\]\nwith $\\x=(x_0,x_1,\\ldots, x_{N-1})$ and ${\\bf e}=(e^{-2\\pi i f \\cdot 0}, e^{-2\\pi i f \\cdot 1},\\ldots , e^{-2\\pi i f \\cdot (N-1)})$.\n\nUse the following template and fill in the blanks.\n\\begin{lstlisting}\nscores = []\nfrequencies = []\n\nfor f in numpy.linspace(0, 0.5, 1000):\n    \n    # Create vector e. Assume data is in x.\n    \n    n = numpy.arange(100)\n    z = # <compute -2*pi*i*f*n. Imaginary unit is 1j>\n    e = numpy.exp(z)\n\n    score = # <compute abs of dot product of x and e>\n    scores.append(score)\n    frequencies.append(f)\n\nfHat = frequencies[numpy.argmax(scores)]\n\\end{lstlisting}\n\\item Run parts (a) and (b) a few times. 
Are the results close to the true $f_0 = 0.017$?\n\\end{enumerate}\n\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "8f13d51f3f858a3381894a75d5a596ac62a13f66", "size": 6031, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/Ex2/Exercise2.tex", "max_stars_repo_name": "mahehu/SGN-41007", "max_stars_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2017-01-09T07:48:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-15T15:13:49.000Z", "max_issues_repo_path": "exercises/Ex2/Exercise2.tex", "max_issues_repo_name": "mahehu/SGN-41007", "max_issues_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Ex2/Exercise2.tex", "max_forks_repo_name": "mahehu/SGN-41007", "max_forks_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 46, "max_forks_repo_forks_event_min_datetime": "2017-01-10T19:32:04.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-20T08:29:20.000Z", "avg_line_length": 34.2670454545, "max_line_length": 132, "alphanum_fraction": 0.6990548831, "num_tokens": 1895, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6825737473266734, "lm_q2_score": 0.8670357615200474, "lm_q1q2_score": 0.5918158488069747}}
{"text": "\\problemname{Forest for the Trees}\n\nYou are playing hide-and-go-seek in a forest with Belle.  The forest\nhas one tree at each of the positive integer lattice points.  That is,\nthere is a tree at every point $(x,y)$ where $x$ and $y$ are both\npositive integers.  You may consider each tree as a point.  A logging\ncompany has cut down all of the trees in some axis-aligned rectangle,\nincluding those on the boundary of the rectangle.\n\nYou are standing at $(0,0)$ and Belle is standing at $(x_b,y_b)$.\nYou can see Belle if and only if there is no tree blocking your line of\nsight to Belle. If there is a tree at $(x_b,y_b)$, Belle will make\nit easier for you to find her by standing on the side of the tree\nfacing your location.\n\nFor example, suppose that Belle is standing at $(2,6)$.  If the\ntrees in the rectangle with corners at $(1,1)$ and $(5,4)$ are cut\ndown (blue rectangle in figure), then you can see Belle.  However,\nif the rectangle was at $(3,5)$ and $(5,7)$ (red rectangle in figure),\nthen the tree at $(1,3)$ would be in the way.\n\n\\begin{center}\n\t\\includegraphics[width=0.45\\textwidth]{trees.pdf}\n\\end{center}\n\nGiven the rectangle and Belle's location, can you see her?\n\n\\section*{Input}\n\nThe first line of input contains two integer $x_b$ and $y_b$~($1 \\leq x_b,y_b \\leq 10^{12}$), which are the coordinates that Belle is \nstanding on.\n\nThe second line of input contains four integers $x_1$, $y_1$, $x_2$ and $y_2$~($1 \\leq x_1 \\leq x_2 \\leq 10^{12}$ and $1 \\leq y_1 \\leq y_2 \n\\leq 10^{12}$), which specify two opposite corners of the rectangle at $(x_1, y_1)$ and $(x_2, y_2)$.\n\n\\section*{Output}\n\nIf you can see Belle, display \\texttt{Yes}.\n\nOtherwise, display \\texttt{No} and the coordinates of the closest tree\nthat is blocking your view.\n", "meta": {"hexsha": "ce5ff5569bf3efe22d9320c3d0c8eb87980a6404", "size": 1756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/forestforthetrees/problem_statement/problem.tex", "max_stars_repo_name": "icpc/na-rocky-mountain-2018-public", "max_stars_repo_head_hexsha": "416a94258f99ab68ff7d9777faca55c94cdaf5f5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-22T16:34:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T16:34:26.000Z", "max_issues_repo_path": "problems/forestforthetrees/problem_statement/problem.tex", "max_issues_repo_name": "icpc/na-rocky-mountain-2018-public", "max_issues_repo_head_hexsha": "416a94258f99ab68ff7d9777faca55c94cdaf5f5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/forestforthetrees/problem_statement/problem.tex", "max_forks_repo_name": "icpc/na-rocky-mountain-2018-public", "max_forks_repo_head_hexsha": "416a94258f99ab68ff7d9777faca55c94cdaf5f5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.8095238095, "max_line_length": 139, "alphanum_fraction": 0.7226651481, "num_tokens": 521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5917672883862061}}
{"text": "% !TEX root = main.tex\n\n%-------------------------------------------------\n\\section{Conditional expectation}\\label{sec:cond_expe}\n\nConditional expectations are expectations computed with respect to conditional distributions.\n%PMFs (discrete case) and conditional PDFs (continuous case).\n\n%%-----------------------------\n%\\subsection{Conditioning on events}\n%\\begin{definition}\\label{defn:cond_expe_A}\n%let $A$ be an event with $\\prob(A)>0$, and let $X$ be a random variable. Then\n%\\[\n%\\begin{array}{lll}\n%\\expe(X|A) & = \\displaystyle\\sum_{i=1}^{\\infty} x_i\\,f_{X|A}(x_i)\t\t& \\quad\\text{if $X$ is discrete, or} \\\\[4ex]\n%\\expe(X|A) & = \\displaystyle\\int_{-\\infty}^{\\infty} x\\,f_{X|A}(x|y)\\,dy\t& \\quad\\text{if $X$ is continuous.} \n%\\end{array}\n%\\]\n%\\end{definition}\n%\n%% example: cond_expe for cts. uniform\n%\\begin{example}\n%Let $X\\sim\\text{Uniform}[0,1]$ and let $A=\\{1/2\\leq X\\leq 3/4\\}$. The CDF of $X$ is $F_X(x) = x$ for $0\\leq x\\leq 1$ (with $F_X(x)=0$ for $x<0$ and $F_X(x)=1$ for $x>1$) so\n%\\[\n%\\prob(A) = \\prob(1/2\\leq x\\leq 3/4) = \\prob(X\\leq 3/4)-\\prob(X\\leq 1/2) = F_X(3/4) - F_X(1/2) = 3/4 - 1/2 = 1/4.\n%\\]\n%The PDF of $X$ is $f_X(x)=1$ for $0\\leq x\\leq 1$ (and zero otherwise), so the conditional PDF of $X|A$ is\n%\\[\n%f_{X|A}(x) = \\frac{f_X(x)}{\\prob(A)} = 4 \\text{for $1/2\\leq x\\leq 3/4$ (and zero otherwise),}\n%\\]\n%which shows that $X|A\\sim\\text{Uniform}[1/2,3/4]$. The conditional expectation of $X|A$ is\n%\\[\n%\\expe(X|A) \n%\t= \\int_{-\\infty}{\\infty} x f_{X|A}(x)\\,dx\n%\t= \\int_{1/2}{3/4} 4x \\,dx\n%\t= 5/8.\n%\\]\n%This is the mid-point of the interval $[1/2,3/4]$ as expected.\n%\\end{example}\n\n%\\begin{example}\n%Recall example~\\label{example:cond_pmf}: a fair coin is tossed repeatedly until a head occurs; $X$ is the number of times the coin is tossed; $A$ is the event that the coin is tossed an odd number of times. If $A$ occurs, the expcted value of $X$ is\n%\\[\n%\\expe(X|A) = \\sum_{k\\text{odd}} k(3/2^{k+1}) = \\sum_{k\\text{odd}}^{\\infty} 3/2^k  = 3(1/2 + 1/8 + 1/32 + \\ldots) = 2.\n%\\]\n%The unconditional expectation is \n%\\[\n%\\expe(X) = \\sum_{k=1}^{\\infty} k(1/2^k) = \\sum_{k=1}^{\\infty} 1/2^{k-1} = 1 + 1/2 + 1/4 + 1/8 + \\ldots = 2.\n%\\]\n%*** so it doesn't change!? ***\n%\\end{example}\n\n\\begin{definition}\\label{def:cond_expe_x}\nLet $x$ be a fixed value. The \\emph{conditional expectation of $Y$ given $X=x$} is \n\\[\n\\begin{array}{lll}\n\\expe(Y|X=x) & = \\displaystyle\\sum_{j=1}^{\\infty} y_j\\,f_{Y|X}(y_j|x)\t\t& \\quad\\text{if $Y$ is discrete, or} \\\\[4ex]\n\\expe(Y|X=x) & = \\displaystyle\\int_{-\\infty}^{\\infty} y\\,f_{Y|X}(y|x)\\,dy\t& \\quad\\text{if $Y$ is continuous.} \n\\end{array}\n\\]\n\\end{definition}\n\n%-----------------------------\n%\\subsection{Conditioning on events}\n%\\begin{definition}\\label{defn:cond_expe_A}\n%let $A$ be an event with $\\prob(A)>0$, and let $X$ be a random variable. Then\n%\\[\n%\\begin{array}{lll}\n%\\expe(X|A) & = \\displaystyle\\sum_{i=1}^{\\infty} x_i\\,f_{X|A}(x_i)\t\t& \\quad\\text{if $X$ is discrete, or} \\\\[4ex]\n%\\expe(X|A) & = \\displaystyle\\int_{-\\infty}^{\\infty} x\\,f_{X|A}(x|y)\\,dy\t& \\quad\\text{if $X$ is continuous.} \n%\\end{array}\n%\\]\n%\\end{definition}\n\n\\begin{definition}\nLet $A$ be an event with $\\prob(A)>0$. 
The \\emph{conditional expectation of $Y|A$} is the conditional expectation of $Y$ given $I_A=1$, where $I_A$ is the indicator variable of $A$.\n\\end{definition}\n\n\n% example: cond_expe for cts. uniform\n\\begin{example}\nLet $Y\\sim\\text{Uniform}[0,1]$. Find the conditional expectation of $Y$ given that $1/2\\leq Y\\leq 3/4$.\n\n\\begin{solution}\nLet $A=\\{1/2\\leq Y\\leq 3/4\\}$. \n\nThe CDF of $Y$ is $F_Y(y) = y$ for $0\\leq y\\leq 1$ (with $F_Y(y)=0$ for $y<0$ and $F_Y(y)=1$ for $y>1$) so\n\\[\n\\prob(A) = \\prob(1/2\\leq Y\\leq 3/4) = \\prob(Y\\leq 3/4)-\\prob(Y\\leq 1/2) = F_Y(3/4) - F_Y(1/2) = 3/4 - 1/2 = 1/4.\n\\]\n\nThe PDF of $Y$ is $f_Y(y)=1$ for $0\\leq y\\leq 1$ (and zero otherwise), so the conditional PDF of $Y|A$ is\n\\[\nf_{Y|A}(y) = \\frac{f_Y(y)}{\\prob(A)} = 4 \\quad\\text{for $1/2\\leq y\\leq 3/4$ (and zero otherwise),}\n\\]\nwhich shows that $Y|A\\sim\\text{Uniform}[1/2,3/4]$. The conditional expectation of $Y|A$ is\n\\[\n\\expe(Y|A) \n\t= \\int_{-\\infty}^{\\infty} y f_{Y|A}(y)\\,dy\n\t= \\int_{1/2}^{3/4} 4y \\,dy\n\t= 5/8.\n\\]\nThis is the mid-point of the interval $[1/2,3/4]$, as expected.\n\\end{solution}\n\\end{example}\n\n%-----------------------------\n\\subsection{Conditioning on random variables}\n\nFor any fixed value of $x$, the conditional expectation $\\expe(Y|X=x)$ is just a number. Let us now think of $x$ as a variable quantity, and consider the transformation\n\\[\n\\begin{array}{rccl}\n\tg:\t& \\R\t& \\to\t\t& \\R \\\\\n\t\t& x\t\t& \\mapsto\t& \\expe(Y|X=x)\n\\end{array}\n\\]\nThis transformation of $X$ yields a new random variable called the \\emph{conditional expectation of $Y$ given $X$}.\n\\begin{definition}\\label{def:cond_expe}\nThe \\emph{conditional expectation of $Y|X$} is the random variable\n\\[\\begin{array}{llll}\n\\expe(Y|X):\t& \\Omega \t& \\to \t\t& \\R \\\\\n\t\t\t& \\omega\t& \\mapsto \t& \\expe\\big[Y|X=X(\\omega)\\big].\n\\end{array}\\]\n\\end{definition}\n\nBeing a transformation of $X$, the random variable $\\expe(Y|X)$ has an expectation that can be computed from the distribution of $X$:\n\n\\[\n\\begin{array}{lll}\n\\expe\\big[\\expe(Y|X)\\big] & = \\displaystyle\\sum_{i=1}^{\\infty}\\expe(Y|X=x_i)f_X(x_i)\t\t& \\quad\\text{if $X$ is discrete, or} \\\\[3ex]\n\\expe\\big[\\expe(Y|X)\\big] & = \\displaystyle\\int_{-\\infty}^{\\infty} \\expe(Y|X=x)f_X(x)\\,dx\t& \\quad\\text{if $X$ is continuous.} \n\\end{array}\n\\]\n\n\\begin{theorem}[Law of total expectation]\nLet $X$ and $Y$ be random variables defined on the same probability space. 
Then\n\\[\n\\expe(Y) = \\expe\\big[\\expe(Y|X)\\big].\n\\]\n\\end{theorem}\n\\begin{proof}\nFor discrete random variables (the continuous case is similar):\n\\begin{align*}\n\\expe\\big[\\expe(Y|X)\\big] \n\t= \\sum_x \\expe(Y|X=x) f_X(x) \n\t& = \\sum_x\\left(\\sum_y y\\,f_{Y|X}(y|x)\\right) f_X(x) \\\\\n\t& = \\sum_x\\left(\\sum_y y\\,\\frac{f_{X,Y}(x,y)}{f_X(x)}\\right) f_X(x) \\\\\n\t& = \\sum_x\\left(\\sum_y y f_{X,Y}(x,y)\\right) \\\\\n\t& = \\sum_y y \\left(\\sum_x f_{X,Y}(x,y)\\right) \\\\\n\t& = \\sum_y y f_Y(y)  \n\t= \\expe(Y).\n\\end{align*}\n%For continuous random variables (the discrete case is similar):\n%\\begin{align*}\n%\\expe\\big[\\expe(Y|X)\\big] \n%\t& = \\int_{-\\infty}^{\\infty} \\expe(Y|X=x) f_X(x)\\,dx \\\\\n%\t& = \\int_{-\\infty}^{\\infty}\\left(\\int_{-\\infty}^{\\infty} y\\,f_{Y|X}(y|x)\\,dy\\right) f_X(x)\\,dx \\\\\n%\t& = \\int_{-\\infty}^{\\infty}\\left(\\int_{-\\infty}^{\\infty} y\\,\\frac{f_{X,Y}(x,y)}{f_X(x)}\\,dy\\right) f_X(x)\\,dx \\\\\n%\t& = \\int_{-\\infty}^{\\infty}\\left(\\int_{-\\infty}^{\\infty} y f_{X,Y}(x,y)\\,dy\\right)\\,dx \\\\\n%\t& = \\int_{-\\infty}^{\\infty} y \\left(\\int_{-\\infty}^{\\infty} f_{X,Y}(x,y)\\,dx\\right)\\,dy \\\\\n%\t& = \\int_{-\\infty}^{\\infty} y f_Y(y) \\,dy \\\\\n%\t& = \\expe(Y).\n%\\end{align*}\n\\end{proof}\n\n\\begin{theorem}[Law of total variance]\nIf $X$ and $Y$ are random variables defined on the same probability space,\n\\[\n\\var(Y) = \\expe\\big[\\var(Y|X)\\big] + \\var\\big[\\expe(Y|X)\\big] \n\\]\nwhere $\\var(Y|X)=\\expe(Y^2|X) - \\expe(Y|X)^2$.\n\\end{theorem}\n\\begin{proof}\nBy the law of total expectation,\n\\begin{align*}\n\\var(Y)\n\t& = \\expe(Y^2) - \\expe(Y)^2 \\\\\n\t& = \\expe\\big[\\expe(Y^2|X)\\big] - \\expe\\big[\\expe(Y|X)\\big]^2\n\\end{align*}\nBecause $\\var(Y|X)=\\expe(Y^2|X) - \\expe(Y|X)^2$,\n\\[\n\\var(Y) = \\expe\\big[\\var(Y|X) + \\expe(Y|X)^2\\big] - \\expe\\big[\\expe(Y|X)\\big]^2\n\\]\nHence, by the linearity of expectation,\n\\begin{align*}\n\\var(Y)\n\t& = \\expe\\big[\\var(Y|X)\\big] + \\Big(\\expe\\big[\\expe(Y|X)^2\\big] - \\expe\\big[\\expe(Y|X)\\big]^2\\Big) \\\\\n\t& = \\expe\\big[\\var(Y|X)\\big] + \\var\\big[\\expe(Y|X)\\big].\n\\end{align*}\n\\end{proof}\n\n% example\n\\begin{example}\nLet the joint PDF of the continuous random variables $X$ and $Y$ be \n\\[\nf_{X,Y}(x,y) = \\begin{cases}\n\tcxy & \\quad\\text{for $x,y\\geq 0$ with $x+y\\leq 1$}, \\\\\n\t0\t& \\quad\\text{otherwise.}\n\\end{cases}\n\\]\n\\ben\n\\it Sketch the support of $f_{X,Y}$.\n\\it Show that $c=24$.\n\\it Compute the conditional expectation $\\expe(Y|X)$.\n\\it Verify the identity $\\expe\\big[\\expe(Y|X)\\big]=\\expe(Y)$.\n\\een \n\n\\begin{solution}\n\\ben\n\n\\it % << (a)\n$\\supp(f_{X,Y})$ is the lower-left half of the unit square.\n\n\\it % << (b)\nThe marginal PDF of $X$ is\n\\[\nf_X(x) = c\\int_{0}^{1-x} xy\\,dy = \\frac{cx(1-x)^2}{2}\n\\]\nTo find $c$,\n\\[\n\\int_{0}^{1} f_X(x)\\,dx = 1, \\qquad\\text{so}\\quad c=24.\n\\]\nThus\n\\begin{align*}\nf_X(x)\t& = 12x(1-x)^2 \\qquad 0\\leq x\\leq 1 \\\\\nf_Y(y)\t& = 12y(1-y)^2 \\qquad 0\\leq y\\leq 1\n\\end{align*}\nand\n\\begin{align*}\n\\expe(X) & = 12\\int_0^1 x^2(1-x)^2\\,dx = 2/5 \\\\\n\\expe(Y) & = 12\\int_0^1 y^2(1-y)^2\\,dy = 2/5\n\\end{align*}\n\n\\it % << (c)\nTo compute $\\expe(Y|X)$,\n\\begin{align*}\n\\expe(Y|X=x)\n\t& = \\int_0^1 y\\left(\\frac{f_{X,Y}(x,y)}{f_X(x)}\\right)\\,dy \\\\\n\t& = \\int_0^{1-x} y\\left(\\frac{24xy}{12x(1-x)^2}\\right)\\,dy \\\\\n\t& = \\frac{24x}{12x(1-x)^2}\\int_0^{1-x} y^2\\,dy \\\\\n\t& =  
\\frac{24x}{12x(1-x)^2}\\left[\\frac{(1-x)^3}{3}\\right] \\\\\n\t& = \\frac{2}{3}(1-x)\t\n\\end{align*}\nso $\\expe(Y|X) = 2(1-X)/3$.\n\n\\it % << (d)\n\\begin{align*}\n\\expe\\big[\\expe(Y|X)\\big]\n\t& = \\int_0^1 \\expe(Y|X=x) f_X(x)\\,dx \\\\\n\t& = \\frac{2}{3}\\int_0^1 (1-x) f_X(x)\\,dx \\\\\n\t& = \\frac{2}{3}\\big(1-\\expe(X)\\big) = \\frac{2}{3}\\left(1-\\frac{2}{5}\\right) = \\frac{2}{5} \\\\\n\t& = \\expe(Y)\n\\end{align*}\n\\een\n\\mbox{}\n\\end{solution}\n\\end{example}\n\n% exercises\n\\begin{exercise}\n\\begin{questions}\n\\question\nLet $X$ and $Y$ be jointly continuous random variables having the following joint PDF,\n\\[\nf_{X,Y}(x,y) = \n\\begin{cases}\n\t\\frac{21}{4}x^2y\t\t& \\quad x^2<y<1, \\\\\n\t0\t\t\t\t\t\t& \\quad\\text{otherwise.}\n\\end{cases}\n\\]\n\\ben\n\\it Sketch the support of $f_{X,Y}$.\n\\it Find the marginal PDFs of $X$ and $Y$.\n\\it Find the mean and variance of $Y$.\n\\it Find the conditional PDF of $Y$ given $X=x$. \n\\it Are $X$ and $Y$ independent? \n\\it Find the conditional expectation of $Y$ given $X=x$. \n\\it Find the conditional expectation of $Y$ given $X$. \n\\it Verify that $\\expe(Y)=\\expe\\big[\\expe(Y|X)\\big]$.\n\\een\n\n\\begin{answer}\n\\ben\n\\it % << (a)\nThe support of the joint PDF $f(x,y)$ is the set $\\{(x,y): x^2 < y < 1\\}$.  This is the region of the plane between the vertical lines $x=-1$ and $x=+1$, bounded above by the horizontal line $y=1$ and below by the parabola $y=x^2$. In particular,\n\\bit\n\\it For fixed $x\\in[-1,1]$, $f_{X,Y}(x,y)\\neq 0$ only for $y\\in[x^2,1]$.\n\\it For fixed $y\\in[0,1]$, $f_{X,Y}(x,y)\\neq 0$ only for $x\\in[-\\sqrt{y},+\\sqrt{y}]$.\n\\eit\n\n\\it % << (b)\nThe marginal distributions are computed as follows:\n\\begin{align*}\nf_X(x) \t\n\t& = \\int_{-\\infty}^{\\infty} f(x,y)\\,dy \n\t= \\int_{x^2}^1 \\frac{21}{4}x^2y\\,dy \n\t= \\frac{21}{4}x^2\\left[\\frac{y^2}{2}\\right]_{x^2}^1 \n\t= \\begin{cases} \\frac{21}{8}x^2(1-x^4)\t& -1<x<1, \\\\ 0 & \\text{ otherwise.}\\end{cases} \\\\ [2ex]\nf_Y(y) \t\n\t& = \\int_{-\\infty}^{\\infty} f(x,y)\\,dx \n\t= \\int_{-\\sqrt{y}}^{\\sqrt{y}} \\frac{21}{4}x^2y\\,dx \n\t= \\frac{21}{4}y\\left[\\frac{x^3}{3}\\right]_{-\\sqrt{y}}^{\\sqrt{y}}\n\t= \\begin{cases} \\frac{7}{2}y^{5/2} & 0< y< 1, \\\\ 0 & \\text{ otherwise.}\\end{cases} \\\\\n\\end{align*}\n\n\\it % << (c)\nThe expected value and variance of $Y$ are computed as follows:\n\\begin{align*}\n\\expe(Y)\n\t& = \\int_{-\\infty}^{\\infty} y\\,f_Y(y)\\,dy \t= \\int_0^1 y\\left(\\frac{7y^{5/2}}{2}\\right)\\,dy = \\frac{7}{9}, \\\\\n\\expe(Y^2)\n\t& = \\int_{-\\infty}^{\\infty} y^2\\,f_Y(y)\\,dy = \\int_0^1 y^2\\left(\\frac{7y^{5/2}}{2}\\right)\\,dy = \\frac{7}{11}, \\\\\n\\var(Y)\n\t& = \\expe(Y^2) - \\expe(Y)^2 = \\frac{7}{11} - \\frac{49}{81} = \\frac{28}{891}.\n\\end{align*}\n\n\\it % << (d)\nThe conditional PDF of $Y$ given $X=x$ is\n\\[\nf_{Y|X=x}(y) \t\n\t= \\frac{f_{X,Y}(x,y)}{f_X(x)} \n\t= \\frac{(21/4)x^2y}{(21/8)x^2(1-x^4)} \t\n\t= \\begin{cases} \n\t\t\\displaystyle\\frac{2y}{1-x^4} \t& x^2\\leq y\\leq 1, \\\\\n\t \t0 \t\t\t\t\t\t\t\t& \\text{ otherwise}.\n\t \\end{cases}\n\\]\t \n\n\\it % << (e)\n$X$ and $Y$ are clearly not independent, because the support of $f_{X,Y}$ is not a rectangular region, and moreover, the conditional PDF of $Y$ given $X=x$ depends on $x$.\n\n\\it % << (f)\nThe conditional expected value of $Y$ given that $X=x$ is\n\\begin{align*}\n\\expe(Y|X=x) \n\t& = \\int_{-\\infty}^{\\infty} y f_{Y|X=x}(y\\,|\\,x)\\,dy \\\\\n\t& = \\int_{x^2}^1 y 
\\left(\\frac{2y}{1-x^4}\\right)\\,dy \n\t= \\frac{2}{1-x^4}\\left[\\frac{y^3}{3}\\right]_{x^2}^1\n\t= \\frac{2(1-x^6)}{3(1-x^4)} \n\\end{align*}\t\n\n\\it % << (g)\nThe conditional expectation of $Y$ given $X$ is the random variable \n\\[\n\\expe(Y|X)=\\displaystyle\\frac{2(1-X^6)}{3(1-X^4)}\n\\]\n\n\\it % << (h)\nThe expected value of $\\expe(Y|X)$ is\n\\begin{align*}\n\\expe\\big[\\expe(Y|X)\\big] \n\t& = \\int_{-\\infty}^{\\infty} \\expe(Y|X=x)f_X(x)\\,dx \\\\\n\t& = \\frac{2}{3}\\int_{-1}^{1} \\left(\\frac{1-x^6}{1-x^4}\\right)\\left(\\frac{21}{8}x^2(1-x^4)\\right)\\,dx \\\\\n\t& = \\frac{7}{4}\\int_{-1}^{1} x^2(1-x^6)\\,dx \n\t= \\frac{7}{9}.\n\\end{align*}\t\nThus $\\expe\\big[\\expe(Y|X)\\big]=\\expe(Y)$ as required.\n\\een\n\\end{answer}\n\n% house for sale\n\\question\nA man puts his house up for sale, and decides to accept the first offer that exceeds the reserve price of $\\pounds r$. Let $X_1,X_2,\\ldots$ represent the sequence of offers received, and suppose that the $X_i$ are independent and identically distributed random variables, each having an exponential distribution with rate parameter $\\lambda$. \n\\begin{parts}\n\\part\nWhat is the expected number of offers received until the house is sold?\n\\begin{answer}\nLet $N$ be the number of offers received until the house is sold. Then $\\{N=k\\}$ is the event that the first $k-1$ offers are at most $\\pounds r$, each occurring independently with probability $F(r)$, and the $k$th offer exceeds $\\pounds r$, which occurs with probability $1-F(r)$. Thus $N$ has a \\emph{geometric} distribution, with `probability of success' equal to $1-F(r)$ (where `success' corresponds to the sale of the house). Hence the expected number of offers received until the house is sold is\n\\[\n\\expe(N) = \\frac{1}{1-F(r)} = e^{\\lambda r}.\n\\]\n\\end{answer}\n\\part \nWhat is the expected selling price of the house?\n\\begin{answer}\nLet $F_S$ be the conditional CDF of $X_i$ given that $X_i>r$:\n\\[\nF_S(x)  \n\t= \\frac{\\prob(r < X_i \\leq x)}{\\prob(X_i > r)}\n\t= \\frac{F(x)-F(r)}{1-F(r)}\n\t= \\begin{cases}\n\t\t1 - e^{-\\lambda(x-r)} \t& x > r, \\\\\n\t\t0\t\t\t\t\t\t& \\text{otherwise.}\n\t\\end{cases}\n\\]\nA straightforward calculation yields $\\expe(X_i | X_i>r) = r + \\displaystyle\\frac{1}{\\lambda}$.\n\\end{answer}\n\\end{parts}\n\n\\end{questions}\n\\end{exercise}\n", "meta": {"hexsha": "adc580420e808172f4aed7086542d6a311009e75", "size": 14371, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2500/05D_conditional_expectation.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2500/05D_conditional_expectation.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2500/05D_conditional_expectation.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", 
"max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 35.4839506173, "max_line_length": 503, "alphanum_fraction": 0.5970356969, "num_tokens": 6154, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5917672812563964}}
{"text": "\\documentclass[letterpaper]{article}\n\\title{\\textbf{Tokio university's entrance exam 2017}}\n\\author{Acosta Dom\u00ednguez Jorge}\n\\date{}\n\n\\usepackage{amsmath}\n\\usepackage{tabto}\n\\usepackage{tasks}\n\\usepackage{geometry} %this reduces the amount of margin\n\\newcommand{\\itab}[0]{\\hspace{2em}}\n\n\\begin{document}\n\\pagenumbering{gobble}\n\\maketitle\n\\newpage\n\\pagenumbering{arabic}\n\n\\section{Mathmatics exam}\n\\subsection{Lineal algebra}\n\nSuppose that three-dimensional vectors $\\left(\n\\begin{matrix}\nx_n\\\\y_n\\\\z_n\\\\\n\\end{matrix}\\\\\n\\right)$\nsatisfy the equation\n\\begin{equation*}\n\\left(\n\\begin{matrix}\nx_{n+1}\\\\y_{n+1}\\\\z_{n+1}\n\\end{matrix}\n\\right)\n=A\\left(\n\\begin{matrix}\nx_n\\\\y_n\\\\z_n\\\\\n\\end{matrix}\\\\\n\\right),\\itab n = 0, 1, 2, ...\n\\end{equation*}\nwhere $x_0$, $y_0$, $z_0$ and $\\alpha$ are real numbers, and\n\\begin{equation*}\nA = \\left(\\begin{matrix}\n1-2\\alpha & \\alpha & \\alpha\\\\\n\\alpha & 1-\\alpha & 0\\\\\n\\alpha & 0 & 1-\\alpha\n\\end{matrix}\\right),\\itab0<\\alpha<\\frac{1}{3}.\n\\end{equation*}\nAnswer the following questions.\n\\settasks{\n\tcounter-format=(tsk),\n\tlabel-width=3ex\n}\n\\begin{tasks}\n\\task{Express $x_n + y_n + z_n$ using $x_0, y_0$ and $z_0$}\n\\task{Obtain the eigenvalues $\\lambda_1, \\lambda_2$ and $\\lambda_3$, and their corresponding eigenvectors $v_1, v_2, v_3$ of the matrix $A$}\n\\task{Express the matrix $A$ using $\\lambda_1, \\lambda_2, \\lambda_3, v_1, v_2, v_3$.}\n\\task{Express $\\left(\\begin{matrix}\nx_n\\\\y_n\\\\z_n\n\\end{matrix}\\right)$}\nusing $x_0, y_0, z_0$ and $\\alpha$.\n\\task{Obtain $\\lim_{n\\to\\infty}\n\\left(\\begin{matrix}\nx_n\\\\y_n\\\\z_n\n\\end{matrix}\\right)$}.\n\\task{Regard\n\\begin{equation*}\nf(x_0, y_0, z_0) = \\frac{(x_n, y_n, z_n)\\left(\\begin{matrix}x_{n+1}\\\\y_{n+1}\\\\z_{n+1}\\end{matrix}\\right)}{(x_n, y_n, z_n)\\left(\\begin{matrix}x_n\\\\y_n\\\\z_n\\end{matrix}\\right)}\n\\end{equation*}\nas a function of $x_0, y_0$ and $z_0$. Obtain the maximum and the minimum values of $f(x_0, y_0, z_0)$, where we assume that $x_0^2 + y_0^2 + z_0^2 \\neq 0$.}\n\\end{tasks}\\newpage\n\n\\subsubsection{Resolution}\n\\begin{tasks}\n\\task{\nUsing $n=0$\n\\begin{align*}\nx_1 &= (1-2\\alpha)x_0+\\alpha y_0 + \\alpha z_0\\\\\ny_1 &= \\alpha x_0 + (1-\\alpha)y_0\\\\\nz_1 &= \\alpha x_0 + (1-\\alpha)z_0\n\\end{align*}\nadding both sides of the equations:\n\\begin{align*}\nx_1 + y_1 + z_1 &= (1-2\\alpha)x_0 + \\alpha x_0 + \\alpha x_0 + \\alpha y_0 + (1-\\alpha) y_0 + \\alpha z_0 + (1-\\alpha)z_0\\\\\nx_1 + y_1 + z_1 &= x_0(\\alpha + \\alpha -2\\alpha + 1) + y_0(\\alpha + 1 -\\alpha) + y_0(\\alpha + 1 -\\alpha)\\\\\nx_1 + y_1 + z_1 &=x_0 + y_0 + z_0\n\\end{align*}\ndue that with $n=1$ we get $x_1 + y_1 + z_1 =x_0 + y_0 + z_0$ we can say that:\n$$x_n + y_n + z_n=x_0 + y_0 + z_0$$\n}\n\\task{\nFor this part we have to recall that $|A-I\\lambda|=0$. 
So, we subtract $I\\lambda$ from $A$ to get:\n\\begin{equation*}\n\\left|\\begin{matrix}\n1-2\\alpha-\\lambda & \\alpha & \\alpha\\\\\n\\alpha & 1-\\alpha-\\lambda & 0\\\\\n\\alpha & 0 & 1-\\alpha-\\lambda\n\\end{matrix}\\right| = 0\n\\end{equation*}\nBy computing the determinant of $|A-I\\lambda|$ we get $(\\lambda-1)(\\lambda-1+3\\alpha)(\\lambda-1+\\alpha)=0$.\nSo the values of $\\lambda$ are:\n\\begin{itemize}\n\\item{$\\lambda_1 = 1-\\alpha$}\n\\item{$\\lambda_2 = 1-3\\alpha$}\n\\item{$\\lambda_3 = 1$}\n\\end{itemize}\nNow, substituting each of the $\\lambda$ values in $A-I\\lambda$ and solving the resulting system of equations, we get the following eigenvectors:\\\\\nFor $\\lambda = 1-\\alpha$\n\\begin{equation*}\n\\left(\\begin{matrix}\n1-2\\alpha-(1-\\alpha) & \\alpha & \\alpha\\\\\n\\alpha & 1-\\alpha-(1-\\alpha) & 0\\\\\n\\alpha & 0 & 1-\\alpha-(1-\\alpha)\n\\end{matrix}\\right)\n,\\itab v_1 = \\left(\\begin{matrix}\n0\\\\-1\\\\1\n\\end{matrix}\\right)\n\\end{equation*}\nFor $\\lambda = 1-3\\alpha$\n\\begin{equation*}\n\\left(\\begin{matrix}\n1-2\\alpha-(1-3\\alpha) & \\alpha & \\alpha\\\\\n\\alpha & 1-\\alpha-(1-3\\alpha) & 0\\\\\n\\alpha & 0 & 1-\\alpha-(1-3\\alpha)\n\\end{matrix}\\right)\n,\\itab v_2 = \\left(\\begin{matrix}\n-2\\\\1\\\\1\n\\end{matrix}\\right)\n\\end{equation*}\nFor $\\lambda = 1$\n\\begin{equation*}\n\\left(\\begin{matrix}\n1-2\\alpha-1 & \\alpha & \\alpha\\\\\n\\alpha & 1-\\alpha-1 & 0\\\\\n\\alpha & 0 & 1-\\alpha-1\n\\end{matrix}\\right)\n,\\itab \\itab \\itab \\itab \\itab v_3 = \\left(\\begin{matrix}\n1\\\\1\\\\1\n\\end{matrix}\\right)\n\\end{equation*}\n}\n\\task{\nSince we can decompose any diagonalizable matrix $A$ as $A=Q\\Lambda Q^{-1}$, where $\\Lambda$ is the diagonal matrix made of the eigenvalues that we found and $Q$ is the matrix made of the eigenvectors that we already found, we simply have to build the $Q$ matrix and get its inverse.\\\\\nWe have that $Q=\\left(\\begin{matrix}\n0 & -2 & 1\\\\\n-1 & 1 & 1\\\\\n1 & 1 & 1\n\\end{matrix}\\right)$. With this we only have to find $Q^{-1}$.\\\\\nAlthough there are different ways to get the inverse of a matrix, the one used for this exercise is Gauss-Jordan elimination. Solving, we have:\\\\\n$$\\frac{1}{6}\\left(\\begin{matrix}\n0 & -3 & 3\\\\\n-2 & 1 & 1\\\\\n2 & 2 & 2\n\\end{matrix}\\right)$$\nIn this way we can represent $A$ using its eigenvalues and eigenvectors:\n\\begin{equation*}\nA = \n\\frac{1}{6}\n\\left(\\begin{matrix}\n0 & -2 & 1\\\\\n-1 & 1 & 1\\\\\n1 & 1 & 1\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n1-\\alpha & 0 & 0\\\\\n0 & 1-3\\alpha & 0\\\\\n0 & 0 & 1\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n0 & -3 & 3\\\\\n-2 & 1 & 1\\\\\n2 & 2 & 2\n\\end{matrix}\\right)\n\\end{equation*}\n}\n\\task{\nFor a faster analysis, let us rename $\\left(\\begin{matrix}\nx_n\\\\y_n\\\\z_n\n\\end{matrix}\\right)$ as $V_n$.\\\\\nThis way we have that \\begin{equation}\nV_{n+1} = AV_n\n\\end{equation}\nIf we set $n=0$ we have $V_1=AV_0$. With this we can deduce that \\begin{equation}\nV_n = A^nV_0\n\\end{equation}\nIf we apply it to $V_{n+1}$ we have that $V_{n+1} = AV_n=A(A^nV_0)=A^{n+1}V_0$. 
This confirms the validity of our deduction by induction.\nSo, to express $V_n$ using $x_0, y_0$ and $z_0$ we first need to find the value of $A^n$.\\\\\nFortunately we know that \\begin{equation}\nA^n = Q\\Lambda^nQ^{-1}\n\\end{equation}\n$$\nA^n = \n\\frac{1}{6}\n\\left(\\begin{matrix}\n0 & -2 & 1\\\\\n-1 & 1 & 1\\\\\n1 & 1 & 1\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n1-\\alpha & 0 & 0\\\\\n0 & 1-3\\alpha & 0\\\\\n0 & 0 & 1\n\\end{matrix}\\right)^n\n\\left(\\begin{matrix}\n0 & -3 & 3\\\\\n-2 & 1 & 1\\\\\n2 & 2 & 2\n\\end{matrix}\\right)$$\n$$\nA^n = \n\\frac{1}{6}\n\\left(\\begin{matrix}\n0 & -2 & 1\\\\\n-1 & 1 & 1\\\\\n1 & 1 & 1\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n(1-\\alpha)^n & 0 & 0\\\\\n0 & (1-3\\alpha)^n & 0\\\\\n0 & 0 & 1\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n0 & -3 & 3\\\\\n-2 & 1 & 1\\\\\n2 & 2 & 2\n\\end{matrix}\\right)\n$$\n$$\nA^n = \\frac{1}{6}\n\\left(\\begin{matrix}\n4(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2\n\\end{matrix}\\right)\n$$\nSo, if we substitute this into (2) we have:\n$$\nV_n = \\frac{1}{6}\n\\left(\\begin{matrix}\n4(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2\n\\end{matrix}\\right)V_0\n$$\nSubstituting $V_n$ back to its original form and multiplying, we get the following:\n\\begin{displaymath}\n\\left(\\begin{matrix}x_n\\\\y_n\\\\z_n\\end{matrix}\\right)=\n\\frac{1}{6} \\left(\\begin{matrix}\n4(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2 & -2(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2\\\\\n-2(1-3\\alpha)^n+2 & -3(1-\\alpha)^n+(1-3\\alpha)^n+2 & 3(1-\\alpha)^n+(1-3\\alpha)^n+2\n\\end{matrix}\\right)\\left(\\begin{matrix}x_0\\\\y_0\\\\z_0\\end{matrix}\\right)\n\\end{displaymath}\n\\begin{align*}\nx_n &= \\frac{1}{3}((x_0+y_0+z_0)+(1-3\\alpha)^n(2x_0-y_0-z_0))\\\\\ny_n &= \\frac{1}{3}(x_0+y_0+z_0)+\\frac{1}{6}(1-3\\alpha)^n(-2x_0+y_0+z_0)+\\frac{1}{2}(1-\\alpha)^n(y_0-z_0)\\\\\nz_n &= \\frac{1}{3}(x_0+y_0+z_0)+\\frac{1}{6}(1-3\\alpha)^n(-2x_0+y_0+z_0)+\\frac{1}{2}(1-\\alpha)^n(-y_0+z_0)\n\\end{align*}\n}\n\\task{\nConsidering the equations above and the condition $0<\\alpha<\\frac{1}{3}$, both $(1-3\\alpha)^n$ and $(1-\\alpha)^n$ tend to $0$ as $n\\to\\infty$, so\n\\begin{align*}\n\\lim_{n\\to\\infty}\\left(\\begin{matrix}\nx_n\\\\y_n\\\\z_n\n\\end{matrix}\\right) = \\frac{x_0+y_0+z_0}{3}\\left(\\begin{matrix}\n1\\\\1\\\\1\n\\end{matrix}\\right)\n\\end{align*}\n}\n\\task{\nMental note: \"Too much work to do for only 2 and a half hours!\"\n}\n\\end{tasks}\n\n\\newpage\n\\subsection{Differential equations}\nA real-valued function $u(x, t)$ is defined in $0 \\leq x \\leq 1$ and $t \\geq 0$. 
Here $x$ and $t$ are independent variables.\\\\\nSuppose we solve the following partial differential equation:\n\\begin{equation}\n\\frac{\\partial u}{\\partial t} = \\frac{\\partial^2 u}{\\partial x^2}\n\\end{equation}\nunder the following conditions:\n\\begin{center}\nBoundary conditions:\\itab $u(0, t) = u(1, t) = 0$\\\\\nInitial condition:\\itab $u(x, 0) = x-x^2$\n\\end{center}\nSince the constant function $u(x, t) = 0$ is obviously a solution of the partial differential equation, consider the other solutions. Answer the following questions.\n\\settasks{\n\tcounter-format=(tsk),\n\tlabel-width=3ex\n}\n\\begin{tasks}\n\\task{\nCalculate the following expression, where $n$ and $m$ are positive integers.\n$$\n\\int_0^1{\\sin(n\\pi x) \\sin(m\\pi x)\\, dx}\n$$\n}\n\\task{\nSuppose $u(x, t) = \\epsilon(x) \\rho(t)$ where $\\epsilon(x)$ is a function only of $x$ and $\\rho(t)$ is a function only of $t$. Express the ordinary differential equations for $\\epsilon$ and $\\rho$ using an arbitrary constant $C$. You may use the fact that if $f(x) = g(t)$ holds for arbitrary $x$ and $t$, then $f$ and $g$ are constant functions.\n}\n\\end{tasks}\n\\newpage\n\\subsubsection{Resolution}\n\\begin{tasks}\n\\task{\nFirst of all, we know that $\\sin(x)\\sin(y) = \\frac{1}{2}[\\cos(x-y)-\\cos(x+y)]$. So we can rewrite the expression as follows:\n\\begin{equation*}\n=\\frac{1}{2}\\left[\\int_0^1\\cos(\\pi(n-m)x)\\,dx - \\int_0^1\\cos(\\pi(n+m)x)\\,dx\\right]\n\\end{equation*}\nNow each term is easily integrated. For $n \\neq m$ we get\n\\begin{equation*}\n=\\frac{1}{2}\\left[\\frac{\\sin(\\pi(n-m))}{\\pi(n-m)} - \\frac{\\sin(\\pi(n+m))}{\\pi(n+m)}\\right] = 0,\n\\end{equation*}\nsince the sine of an integer multiple of $\\pi$ vanishes. For $n = m$ the first integrand equals $1$, so the integral evaluates to $\\frac{1}{2}$.\n}\n\\end{tasks}\n\\end{document}\n", "meta": {"hexsha": "dadce07d88cbade495c4f9dd8d93f446403c3200", "size": 9845, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Math/nyuugakushiken2017.tex", "max_stars_repo_name": "syaoraang/shiken_junbi", "max_stars_repo_head_hexsha": "ee99cee25f0263f3eb91a661241413cc582ec34d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Math/nyuugakushiken2017.tex", "max_issues_repo_name": "syaoraang/shiken_junbi", "max_issues_repo_head_hexsha": "ee99cee25f0263f3eb91a661241413cc582ec34d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Math/nyuugakushiken2017.tex", "max_forks_repo_name": "syaoraang/shiken_junbi", "max_forks_repo_head_hexsha": "ee99cee25f0263f3eb91a661241413cc582ec34d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.1550632911, "max_line_length": 360, "alphanum_fraction": 0.6415439309, "num_tokens": 4178, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5917672757832093}}
{"text": "\\documentclass{article}\n\n\\title{Regularization techniques for Deep Learning}\n\\author{Vladimir Feinberg}\n\n\\input{../old/defs}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Regularization}\n\nRegularization shrinks the capacity of an ANN. In other words, it restricts the richness of the family of functions which the ANN can take the value of. As Vincent Vanhouke puts it in his \\nurl{https://www.udacity.com/course/deep-learning--ud730}{Udacity course}, ANN regularization can be likened to fitting into tight jeans, where extremely expressive model classes are tightened by generic regularization techniques, which seem to find a ``right fit'' in many problems, empirically speaking.\n\nAt a high level, regularization changes the objective from usual loss $J(\\vtheta)$ to loss plus weight penalty $J(\\vtheta)+\\Omega(\\vtheta)$ or loss plus constraints. In both supervised and unsupervised models this has justification in moving overfit models of high capacity to those of appropriate capacity for the problem: lower variance for a bit higher bias, where the bias and variance are with respect to the loss $J$ on the training set as an estimator for the loss on the entire data distribution.\n\nIn the sections that follow, we recount general regularization techniques for ANNs, most of which were collected from Dr. Goodfellow's \\nurl{http://www.deeplearningbook.org/contents/mlp.html}{Deep Learning Book}.\n\n\\section{Weight Decay}\n\nFor all affine transforms let the weight matrix terms (not bias), vectorized, be $\\vw$. Add $\\Omega(\\vtheta)=\\frac{\\alpha}{2}\\norm{\\vw}_2^2$. Intuitively, bias is unrestricted because it only interacts with one variable directly (the output feature map), so it isn't prone to overfitting. For a quadratic approximation of the loss at $J(\\vtheta)$, with hessian $H=Q\\Lambda Q^\\top$, $L^2$ regularization would result in a new optimum $Q(\\Lambda+\\alpha I)^{-1}\\Lambda Q^\\top \\vw_*$ where $\\vw_*$ is the unconstrained optimum. If $\\vv_i$ is the $i$-th eigenvector of $H$ and $\\lambda_i$ its eigenvalue, then the regularization adjusts the solution by rescaling the $\\vv_i$-th component of $\\vw_*$ by a factor of $\\frac{\\lambda_i}{\\lambda_i+\\alpha}$. Regularization can also be viewed as constraints by identifying that $\\alpha$ is a KKT multiplier: a fixed $\\alpha$ corresponds to optimizing $\\min J(\\vtheta)$ such that $\\norm{\\vw}\\le k$ for some $k$, which is in practice difficult to find given $\\alpha$ (Fig.~\\ref{fig:reg}). $L^2$ regularization is equivalent to Bayesian MAP if weights have $N(0,\\alpha^{-1})$ prior.\n\\begin{figure}[!h]\n\\centering\n{\\includegraphics[width=0.75\\textwidth]{reg.pdf}}\n  \\caption{Regularization as constrained optimization: the above shows the contour plots of a quadratic objective, and $L^2,L^1$ regularizations, respectively, as constraints constructed by weighing the objective against the shape of the regularizer's ball. Image from Bishop's Pattern Recognition and Machine Learning, Chapter 3, page 146.}\n\\label{fig:reg}\n\\end{figure}\n\n\\textbf{Proximal Regularization}. Similar to weight decay, but $\\Omega(\\vtheta)=\\alpha\\norm{\\vw}_1$. 
Assuming a (convex) diagonal Hessian approximation to $J(\\vtheta)$ (more strict than above), the $i$-th component solution is pulled down to\n$$w_i=\\begin{cases}\n\\pa{\\abs{w_i}-\\frac{\\alpha}{H_{i,i}}}\\sgn w_i  & H_{i,i}>0, \\abs{w_i}>\\frac{\\alpha}{H_{i,i}}\\\\\n  0 & \\text{otherwise}\n\\end{cases}$$\n$L^1$ regularization is equivalent to Bayesian MAP if the weights have a $\\Laplace(0, \\alpha^{-1})$ prior.\n\n\\textbf{Early Stopping}. Stop training if the validation error increases, even if training has not converged. If the validation set is large, you miss out on a lot of training data. More complicated algorithms are necessary to find an applicable early stopping time when re-training to include the validation set (see \\nurl{http://www.deeplearningbook.org/contents/regularization.html}{Dr. Goodfellow's DL book}, Algorithm 7.3). Early stopping is similar to $L^2$ regularization, and for quadratic loss it is just that (Fig.~\\ref{fig:earlystop}).\n\\begin{figure}[!h]\n\\centering\n{\\includegraphics[width=0.75\\textwidth]{early-stop.pdf}}\n  \\caption{From Dr. Goodfellow's DL book lecture, Chapter 7.}\n\\label{fig:earlystop}\n\\end{figure}\n\nEarly stopping can thus be viewed as an efficient way of regularizing weights just the right amount for the problem at hand, whereas we might have to cross-validate multiple settings of the $\\alpha$ hyperparameter to find an appropriate value otherwise. Nonetheless, early stopping still suffers from generalization issues of overfitting to the validation set; see discussions in \\nurl{http://www.tandfonline.com/doi/abs/10.1080/00207179508921605}{Sj\\\"{o}berg and Ljung 1995} and \\nurl{http://www.mitpressjournals.org/doi/abs/10.1162/089976699300016557}{Cataltepe et al 1999}.\n\n\\section{Dataset Augmentation and Manifold Learning}\n\nModify supervised pairs $(\\vx, y)$ to include $(T(\\vx), y)$ where $T$ is a transformation we expect the classifier to be invariant to. Noise injection into inputs, too, can be a form of dataset augmentation. An automatic technique for doing this is tangent propagation (\\nurl{https://papers.nips.cc/paper/4409-the-manifold-tangent-classifier}{Rifai et al 2011}), which adds a penalty if the output is sensitive to changes along learned manifold vectors $\\vv_i$: $\\Omega(f)=\\sum_i\\pa{\\vv_i^\\top\\nabla f}^2$.\n\n\\section{Noise Robustness}\n\n(\\nurl{https://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks}{Graves 2011}). Inject noise into weights during training, forcing learned weights to be robust to small variations.\n\n\\section{Parameter Sharing}\n\nReuse part of a network for another task, encoding prior knowledge that the feature maps must be the same between two tasks. CNNs, discussed in a separate document (\\nurl{https://github.com/vlad17/ml-notes/tree/master/deep-learning}{root}), are a form of parameter sharing, since the linear matrix $W$ is restricted to a convolution transform.\n\n\\section{Ensemble Methods}\n\nBagging uses the votes of several models: $k$ models with uncorrelated errors will tend to have $\\nicefrac{1}{k}$-th the generalization error when bagged. Training data can be resampled to create less correlated models.\n\n\\textbf{Dropout} (\\nurl{https://arxiv.org/abs/1207.0580}{Hinton et al 2012}, \\nurl{http://proceedings.mlr.press/v28/wang13a.html}{Wang et al 2013}, \\nurl{http://jmlr.org/papers/v15/srivastava14a.html}{Srivastava 2014}, \\nurl{https://arxiv.org/abs/1506.02142}{Gal et al 2015}). 
Dropout is a cheap, practical approximation to averaging exponentially many models (exponential in the number of dropout layers). Dropout at a certain hidden or input layer keeps an activation with probability $p$ (setting it to zero otherwise) during training, and multiplies all activations (or the weights) by $p$ with no drops during testing (the weight scaling inference rule); theoretically this does not guarantee proper averaging of the output activations, as only each individual hidden unit's output expectation is preserved, but it works well in practice (Fig.~\\ref{fig:dropout}). Weight scaling actually corresponds to taking a geometric mean of the votes, and has been shown to work better than Monte Carlo sub-network averaging (which is intractable). Dropout also prevents feature co-adaptation: since a given activation might vanish at any time during training, the net is forced to learn those activations at each layer which independently contribute to the loss minimization task. Note that this conflicts with locality-focused tasks which rely on specific co-adaptations, like convolutional layers, but might be used at higher levels in CNNs to force the networks to identify more than one underlying feature for their classification (though lower levels can be locally co-adapted). Naive Bayes is an extreme form of dropout which demonstrates the co-adaptation avoidance.\n\\begin{figure}[!h]\n\\centering\n{\\includegraphics[width=0.75\\textwidth]{dropout.pdf}}\n  \\caption{From Dr. Goodfellow's DL book lecture, Chapter 7.}\n\\label{fig:dropout}\n\\end{figure}\n\n\\textbf{Dropconnect} (\\nurl{http://cs.nyu.edu/~wanli/dropc/}{Wan et al 2013}). Generalizes dropout by applying it to edges rather than hidden units. It's not as commonly used as dropout, but it has a more mathematically sound explanation of how its inference procedure mimics ensembles.\n\n\\section{Adversarial Training}\n\n(\\nurl{https://arxiv.org/abs/1412.6572}{Goodfellow 2014}). Tiny changes in the input can result in large differences in output, showing that the learned manifold for a classification category is too sensitive to perturbations in directions that it shouldn't be. 
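A concrete construction from the cited paper is the \\emph{fast gradient sign method} which, stated here for illustration, perturbs an input $\\vx$ to $\\vx + \\epsilon\\,\\sgn\\pa{\\nabla_{\\vx} J}$ for a small $\\epsilon > 0$; such perturbations reliably fool undefended nets. 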
Adversarial training forces nets to overcome this.\n\n\\end{document}\n", "meta": {"hexsha": "c4efe1275f7414cfe625973b15b3d631057d8fb6", "size": 8658, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "deep-learning/regularization.tex", "max_stars_repo_name": "vlad17/shallow-ml-notes", "max_stars_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2017-06-27T18:39:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T18:48:49.000Z", "max_issues_repo_path": "deep-learning/regularization.tex", "max_issues_repo_name": "vlad17/shallow-ml-notes", "max_issues_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deep-learning/regularization.tex", "max_forks_repo_name": "vlad17/shallow-ml-notes", "max_forks_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-03-23T10:45:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-07T06:21:43.000Z", "avg_line_length": 108.225, "max_line_length": 1632, "alphanum_fraction": 0.7775467775, "num_tokens": 2190, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7931059511841119, "lm_q1q2_score": 0.5917672757832093}}
{"text": "\\section*{Exercise 20.3-1}\n\\subsection*{Modify vEB trees to support duplicate keys}\n\nInstead of the tree holding bits, it could instead hold integers. These integers would then act as counters for the number of instances of each key.\n\\\\\nFurthermore, to support this change, the element stored in $min$ should appear in the clusters (like $max$), and hold an integer like the rest of the tree.\n\\\\\nWhen inserting and deleting in the tree, the counter should then be updated, by adding to or subtracting from the counter.", "meta": {"hexsha": "18524830f603f1f35988506beb83333dbcd1fd07", "size": 518, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Uge3/Ex.20.3-1.tex", "max_stars_repo_name": "pdebesc/AADS", "max_stars_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Uge3/Ex.20.3-1.tex", "max_issues_repo_name": "pdebesc/AADS", "max_issues_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Uge3/Ex.20.3-1.tex", "max_forks_repo_name": "pdebesc/AADS", "max_forks_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.75, "max_line_length": 155, "alphanum_fraction": 0.777992278, "num_tokens": 119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7461389817407017, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.5917672650046082}}
{"text": "%\n% This section describes load balancing and the methods offered by Zoltan.\n%\n\\chapter{Data partitioning and load balancing}\n\\label{cha:lb}\nAs problem sizes grow, parallel computing has become an important tool\nin computational science. Many large-scale computing tasks are\ntoday run on parallel computers, with multiple processors or cores.\nAn important issue is then how to best divide the data and work\namong processes (processors). This problem is known as \\emph{data partitioning}\nor \\emph{load balancing}. We will use these phrases interchangably.\nPartitioning or load balancing can either be performed once\n(static partitioning) or multiple times (dynamic load balancing).\n\nWe wish to divide work evenly but at the same time minimize\ncommunication among processes. Communication could either be\nmessage passing (on distributed memory systems) or memory access\n(on shared-memory systems). \n\nWe give a brief overview of the different categories of partitioning methods,\nand later explain how to use them in Zoltan.\n\n\n\\section{Geometric methods}\nGeometric methods rely on each object having a set of coordinates.\nData objects are partitioned based on geometric locality.\nWe assume that it is beneficial to keep objects that are close together on \nthe same processor, but there is no explicit model of communication.\nExamples of geometric methods include recursive coordinate bisection (RCB)\nand space-filling curves. An advantage of geometric methods is that\nthey are very fast to compute, but communication volume in\nthe application may be high.\n\n\\section{Graph partitioning}\nGraph partitioning is perhaps the most popular method. This approach is\nbased on representing the application \n(computation) as a graph, where data objects are vertices and \ndata dependencies are edges. The graph partitioning problem is then\nto partition the vertices into equal-sized sets, while minimizing the\nnumber of edges with endpoints in different sets (parts).\nThis is an NP-hard optimization problem, but fast multilevel \nalgorithms and software produce good solutions in practice.\nIn general, graph partitioning produces better quality partitions\nthan geometric methods but also take longer to compute.\n\n\n\\section{Hypergraph partitioning}\nThe graph model has several deficiencies. First, only symmetric\nrelations between pairs of objects can be represented. Second, \nthe communication model is inaccurate and does not model \ncommunication volume correctly.\nHypergraphs generalize graphs, but can represent relationships\namong arbitrary sets of objects (not just pairs). Also,\ncommunication volume is exact, so hypergraph methods \nproduce very high quality partitions. The main drawback of hypergraph\nmethods is that they take longer to run than graph algorithms. \n\n\\section{Methods in Zoltan}\nZoltan is a toolkit containing many load balancing methods. We explain how \nto use Zoltan in the next chapter. The methods currently available\nin Zoltan (version 3.1) are:\n\\begin{description}\n\\item[Simple:] BLOCK, RANDOM. 
These are intended for testing, not real use.\n\\item[Geometric:] RCB, RIB, HSFC.\n\\item[Graph:] Zoltan has a native graph partitioner, and optionally supports ParMetis and PT-Scotch.\n\\item[Hypergraph:] Zoltan has a native parallel hypergraph partitioner.\n\\end{description}\n\n", "meta": {"hexsha": "a0b9d06f7180533d1c89b279d2a0ea707e0c96cd", "size": 3279, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/zoltan/doc/Zoltan_html/tu_html/methods.tex", "max_stars_repo_name": "jschueller/seacas", "max_stars_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_stars_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2016-02-04T18:38:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T03:01:49.000Z", "max_issues_repo_path": "packages/zoltan/doc/Zoltan_html/tu_html/methods.tex", "max_issues_repo_name": "jschueller/seacas", "max_issues_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_issues_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_issues_count": 206, "max_issues_repo_issues_event_min_datetime": "2015-11-20T01:57:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:12:04.000Z", "max_forks_repo_path": "packages/zoltan/doc/Zoltan_html/tu_html/methods.tex", "max_forks_repo_name": "jschueller/seacas", "max_forks_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_forks_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_forks_count": 68, "max_forks_repo_forks_event_min_datetime": "2016-01-13T22:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T06:25:05.000Z", "avg_line_length": 47.5217391304, "max_line_length": 99, "alphanum_fraction": 0.8124428179, "num_tokens": 683, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.8128673246376009, "lm_q1q2_score": 0.5917474651486082}}
{"text": "\\setlength\\abovedisplayskip{2.5pt}\n\nFor relevant nuclear forensics predictions, both classification and regression\nalgorithms must be used.  For example, one may want to predict the reactor type\nlabel given some measurement-based features of \\gls{SNF} of an unknown source.\nThis would require a classification algorithm. Or perhaps the input fuel\ncomposition is relevant to an investigation on weapons intent, so a regression\nalgorithm would be used to train a model based on some different set of\nmeasured features. Since this work trains models to predict burnup of\n\\gls{SNF}, the algorithms are presented in a regression context.\n\n\\subsubsection{Linear Models}\n\\label{sec:linear}\n\nOne of the simplest and most utilized methods of prediction is a linear model\nfrom a least-squares fit. Thus, it is the most natural place to begin a\ndemonstration of \\gls{ML} algorithms. Since linear models must have all\nlinearly related parameters, there are many restrictions regarding the shape of\nthe model. However, this makes the resulting models stable to peturbations.\n\\cite{changingml}\n\n\\begin{figure}[!htb]\n  \\makebox[\\textwidth][c]{\\includegraphics[width=1.1\\textwidth]{./chapters/litrev/regularization.png}}\n  \\caption{Effect of Regularization on Prediction}\n  \\label{fig:reg}\n\\end{figure}\n\nAn example of a linear model is in Equation \\ref{eq:linreg}. The vector of\ninput features of size $p$, $\\boldsymbol{X}$, provides a model,\n$F(\\boldsymbol{X})$ by determining the unknown coefficients, or weights,\n$\\beta_{j}$'s. A loss function can be calculated by the difference in the\nmodel-predicted value and $\\boldsymbol{Y}$, the vector of actual labels.  The\nalgorithm calculates $\\beta_{j}$ by minimizing the value of this loss function\nover all the training data.  This is usually the least squares error from\nminimizing the sum of squared errors, $\\sum_{i=1}^{n} (y_i - f(x_i))^2$.  But\nit could instead be the least absolute deviations from minimizing the sum of\nabsolute error differences, $\\sum_{i=1}^{n} |y_i - f(x_i)|$. These are referred\nto as the $L_2$ and $L_1$ norms, respectively.  \n\\begin{equation}\n  F(\\boldsymbol{X}) = \\beta_{0} +  \\sum_{j=1}^{p} x_{j} \\beta_{j}\n  \\label{eq:linreg}\n\\end{equation}\n\nThe form of linear regression used here is called ridge regression. This\nalgorithm performs optimization using the $L_2$ norm, and also uses the form of\nthe $L_2$ norm for \\textit{regularization}. Regularization, sometimes called\n\\textit{shrinkage}, is a term introduced into the \\gls{ML} model to prevent\noverfitting; it is used in many \\gls{ML} algorithms. For ridge\nregression, shrinkage further reduces the weights, $\\beta_j$, on the input\nfeatures, $x_j$. The shrinkage term also includes a complexity parameter,\n$\\lambda$, which governs the strength to which regularization is performed\n\\cite{elements_stats}. Thus, the linear model given from ridge regression\nis updated to Equation \\ref{eq:ridgereg}.  Figure \\ref{fig:reg} is a\nvisualization of how applying regularization smooths out a model.\n\\begin{equation}\n  F(\\boldsymbol{X}) = \\beta_{0} +  \\sum_{j=1}^{p} x_{j} \\beta_{j} + \\lambda \\sum_{j=1}^{p} \\beta_{j}^2\n  \\label{eq:ridgereg}\n\\end{equation}\n\n\\subsubsection{Nearest Neighbor Methods}\n\\label{sec:neighbor}\n\nNearest neighbor regression is a unique algorithm in that it is instance-based;\nit does not actually generalize, but tracks the observations in the training\nset.  
\\subsubsection{Nearest Neighbor Methods}\n\\label{sec:neighbor}\n\nNearest neighbor regression is a unique algorithm in that it is instance-based;\nit does not actually generalize, but tracks the observations in the training\nset.  The main metric for this algorithm is the distance (or dissimilarity) between\nthe test sample and the closest training sample(s) in the vicinity.  During\nprediction, the algorithm will calculate a value based on the instance that is\nclosest to the current test sample. Thus, there is no learning, but\ninstead a direct comparison between an unknown sample and the space that the\ntraining set populates. The predictions from nearest neighbors can be quite\naccurate, but are highly unstable to perturbations \\cite{elements_stats}.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.8\\linewidth]{./chapters/litrev/nn-fig.png}\n  \\caption{Schematic of \\textit{k}-Nearest Neighbors Regression}\n  \\label{fig:nn}\n\\end{figure}\n\nAn extension of nearest neighbor is \\textit{k}-nearest neighbor regression.\nThe closest \\textit{k} neighbors are averaged for an estimate of the unknown\nsample, as shown in Equation \\ref{eq:knn}.  Figure \\ref{fig:nn} provides a\npictorial explanation of how this is done for a prediction of a single feature.\nFor \\textit{k}-neighbors, this algorithm predicts a value, $Y$, from the input\nfeatures, $\\boldsymbol{X}$, in the neighborhood, $N_k (\\boldsymbol{X})$\n\\cite{elements_stats}.\n\\begin{equation}\n  Y(\\boldsymbol{X}) = \\frac{1}{k} \\sum_{x_i \\in N_k(\\boldsymbol{X})} y_i\n  \\label{eq:knn}\n\\end{equation}\n\nThere are two tunable parameters in this algorithm: the distance metric and\nthe value of \\textit{k}.  The population of the neighborhood, \\textit{k},\naffects the number of points being averaged together for a prediction.  The\nmetrics for distance differ, but in this study, the Euclidean distance was\nused. In this initial work, $k = 1$ is used. This can perform very well, but\ncan also easily overfit the data and thus not generalize well.\n\n
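As a minimal illustration (a sketch under assumed synthetic data, using scikit-learn rather than the study's actual pipeline), \\textit{k}-nearest-neighbor regression with $k = 1$ can be run as:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsRegressor\n\nrng = np.random.default_rng(0)           # synthetic data (assumption)\nX = rng.uniform(0, 10, size=(100, 1))\ny = np.sin(X).ravel()\n\n# k = 1 memorizes the training set; larger k averages k neighbors\nmodel = KNeighborsRegressor(n_neighbors=1, metric=\"euclidean\")\nmodel.fit(X, y)\nprint(model.predict([[3.0]]))   # value of the single nearest instance\n\\end{verbatim}\n\n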
\\subsubsection{Support Vector Machines}\n\\label{sec:svm}\n\n\\Gls{SVR} is an extension of the popular classification algorithm, the\n\\glsreset{SVM}\\gls{SVM}.  This algorithm was chosen because of its ability to handle\nhighly dimensional data well, which in this study is a maximum of approximately\n300 features.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.8\\linewidth]{./chapters/litrev/svm.png}\n  \\caption{Schematic of \\acrshort{SVM} Classification}\n  \\label{fig:svm}\n\\end{figure}\n\nAs seen in Figure \\ref{fig:svm}\\footnote{This schematic is based on the\ntutorial on SVMs by Dr.\\@ Saed Sayad at\nhttp://www.saedsayad.com/support\\_vector\\_machine.htm}, the \\gls{SVM} algorithm\nseparates two classes by determining an optimal hyperplane between them.  The\nalgorithm evaluates the quality of the line that separates the two classes by\nmaximizing the margin width, $m_w = \\frac{2}{\\lVert w \\rVert}$.  The hyperplane\nis defined in Equation \\ref{eq:plane}, where $\\boldsymbol{w}$ is the vector\nthat is normal to the hyperplane.\n\\begin{equation}\n  \\boldsymbol{w \\cdot x} + b = 0\n  \\label{eq:plane}\n\\end{equation}\n\nFigure \\ref{fig:svm} also shows a case of soft margins.  Some problems are not\nlinearly separable, and thus a penalty term, $\\xi_{i}$, is introduced to allow\nfor some misclassifications.  The algorithm then simultaneously minimizes the\nmisclassifications while maximizing the margin. This is reflected in the\nobjective function of the algorithm (solved using quadratic programming) in Equation\n\\ref{eq:svm}. Here, $C$ is responsible for the margin width/misclassification\ntradeoff, and the penalty term is included in the constraint \\cite{scikit,\nelements_stats}.\n\\begin{equation}\n\\begin{split}\n  min\\ & \\frac{1}{2} \\lVert w \\rVert ^{2} + C \\sum_{i} \\xi_i \\\\\n  subject\\ to:\\ \\ & y_i (w x_i + b) \\geq 1 - \\xi_i\n  \\label{eq:svm}\n\\end{split}\n\\end{equation}\n\nFigure \\ref{fig:svr}\\footnote{These schematics are based on a tutorial on SVR\nby Dr.\\@ Saed Sayad at\nhttp://www.saedsayad.com/support\\_vector\\_machine\\_reg.htm} demonstrates how\n\\gls{SVM} can be altered slightly from classification to nonlinear regression\nwith \\gls{SVR}.  \\Gls{SVR} has a similar objective function but instead\n\\textit{minimizes} the margin, as shown in Figure \\ref{fig:svr-a}.  While the\nobjective function is the same as in Equation \\ref{eq:svm}, the constraint is\nnow set to the opposite inequality sign, shown in Equation \\ref{eq:svr}.\n\\begin{equation}\n\\begin{split}\n  min\\ & \\frac{1}{2} \\lVert w \\rVert ^{2} + C \\sum_{i} \\xi_{i} \\\\\n  subject\\ to:\\ \\ & \\lvert y_i - (w x_i + b) \\rvert \\leq \\varepsilon + \\xi_i\n  \\label{eq:svr}\n\\end{split}\n\\end{equation}\n\nFurther, this can be extended to nonlinear analysis via what is called the\n\\textit{kernel trick}.  A nonlinear kernel function maps the data\ninto higher order feature space. The algorithm can then find a linear\nseparation in this space, as shown in Figure \\ref{fig:svr-b}.\n\n\\begin{figure}[!hp]\n  \\centering\n  \\begin{subfigure}[h]{0.8\\linewidth}\n    \\includegraphics[width=\\linewidth]{./chapters/litrev/svr-a.png}\n    \\caption{Demonstration of regression with \\acrshort{SVR}}\n    \\label{fig:svr-a}\n  \\end{subfigure}\n  \\begin{subfigure}[h]{0.8\\linewidth}\n    \\includegraphics[width=\\linewidth]{./chapters/litrev/svr-b.png}\n    \\caption{The kernel trick with \\acrshort{SVR}}\n    \\label{fig:svr-b}\n  \\end{subfigure}\n  \\caption{Illustrations of \\acrshort{SVR} and Nonlinear Analysis}\n  \\label{fig:svr}\n\\end{figure}\n\nChosen for its flexibility, the kernel in this study is the Gaussian radial\nbasis function, shown in Equation \\ref{eq:svr-rbf}. This has two tunable\nparameters, $\\gamma$ and $C$. The $\\gamma$ controls the width of influence of\nindividual training instances, which strongly affects the fitting of the model.\nLow values give each instance a large radius of influence, which can underfit\nthe data; high values give each instance a small radius of influence, which can\noverfit the data.\n\\begin{equation}\n\\begin{split}\n  min\\ & \\frac{1}{2} \\lVert w \\rVert ^{2} + C \\sum_{i} \\xi_{i} \\\\\n  subject\\ to:\\ \\ & \\lvert y_i - (w \\phi(x_i) + b) \\rvert \\leq \\varepsilon + \\xi_i \\\\\n  where:\\ & w = \\sum_{i} \\alpha_i y_i \\phi(x_i) \\\\\n  and:\\ & K(x_i, x_j) = \\phi(x_i) \\phi(x_j) = e^{-\\gamma \\lVert x_i - x_j \\rVert ^{2}}\n  \\label{eq:svr-rbf}\n\\end{split}\n\\end{equation}\n\nThe $C$ parameter also affects the fitting of the model by allowing more or\nfewer support vectors, corresponding to more or less misclassification,\nrespectively. A lower $C$ smooths the surface of the model by allowing more\nmisclassifications, whereas a higher $C$ classifies more training examples by\nallowing fewer misclassifications. Thus, too low or too high of a $C$ can cause\nunder- or overfitting, respectively.\n\nSince there is a tradeoff of fitting strength provided by two parameters, it\nis common to run the algorithm on a logarithmic grid from $10^{-3}$ to $10^3$\nfor each parameter. If plotted on a heatmap of accuracies given $\\gamma$ and\n$C$, a diagonal band of well-performing combinations emerges. The combination\nwith the lowest value of each parameter is usually chosen.\n\n
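As an illustration of such a grid search (a sketch only, assuming scikit-learn and synthetic data, not the study's actual pipeline):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.svm import SVR\nfrom sklearn.model_selection import GridSearchCV\n\nrng = np.random.default_rng(0)            # synthetic data (assumption)\nX = rng.uniform(-3, 3, size=(200, 1))\ny = np.sin(X).ravel() + 0.1 * rng.normal(size=200)\n\n# Logarithmic grid over C and gamma, 10^-3 .. 10^3\ngrid = {\"C\": np.logspace(-3, 3, 7), \"gamma\": np.logspace(-3, 3, 7)}\nsearch = GridSearchCV(SVR(kernel=\"rbf\"), grid, cv=5)\nsearch.fit(X, y)\nprint(search.best_params_)   # one point on the diagonal band\n\\end{verbatim}\n\n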
\\subsubsection{Dimensionality Reduction Techniques}\n\\label{sec:dimreduc}\n\nIn addition to utilizing various algorithm parameters for regularization as\ndiscussed above, dimensionality reduction can improve generalizability by\nremoving the noise of features that do not affect the regression task. This can\nbe thought of in the following way: shrinkage techniques reduce the weights of\nnoisy features, whereas dimensionality reduction removes them completely\n\\cite{elements_stats}.  Although one could use domain knowledge to manually\nreduce the number of features in a data set (e.g., only including certain\nnuclide subsets such as actinides), statistical feature reduction may also\nprove helpful in this work.  The mathematical treatment of the methods\ndescribed below is in Ref. \\cite{elements_stats}.\n\n%Another use for the following techniques is the visualization of one's data\n%set, perhaps for data exploration or to show why certain predictions are\n%difficult. \n\n\\vspace{5mm} \\noindent \\textbf{Principal Components Analysis} \\vspace{5mm}\n\n\\Gls{PCA} is considered the most common dimensionality reduction technique.\nThe \\gls{PCA} algorithm learns a linear transformation of a data set\n$\\boldsymbol{X}$, constructing a transformation matrix with a user-chosen\nnumber of variables, i.e., components.  This matrix is part of the\nsingular value decomposition of the data matrix $\\boldsymbol{X}$.  The\ndecomposition step is what provides the principal components, which are the\nresult of maximizing the variance in the original data while minimizing the\nsquared reconstruction error between the original data and the transformed\ndata. The mathematical assumptions are that the variables are all Gaussian and\nuncorrelated.\n\nBecause the principal components are obtained purely statistically with no\nmodel assumptions, they are usually uninterpretable. However, they can still\nprovide clues with which to obtain new information. In the case of this work,\nthis could provide insight into new forensics signatures.\n\n
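A minimal sketch of \\gls{PCA}-based reduction (illustrative only; it assumes scikit-learn and a synthetic stand-in for the training data):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\nrng = np.random.default_rng(0)         # synthetic stand-in (assumption)\nX = rng.normal(size=(500, 30))         # e.g., 30 measured features\n\npca = PCA(n_components=5)              # user-chosen number of components\nZ = pca.fit_transform(X)               # transformed (reduced) data\nprint(pca.explained_variance_ratio_)   # variance captured per component\n\\end{verbatim}\n\n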
\\vspace{5mm} \\noindent \\textbf{Factor Analysis} \\vspace{5mm}\n\nFactor analysis is similar to \\gls{PCA} in that it also calculates linear\ncombinations of the data set features using the above-mentioned decomposition,\nand the mathematical assumptions are the same.  It is different in that the\ndecomposition is rearranged so that it represents \\textit{latent variables},\nincluding random error disturbances, rather than principal components.  Latent\nvariables are constructed by maximizing the correlation/shared variance among\nthe variables rather than the total variance.  However, different optimizations\ncan be chosen, so the solutions are parameter-dependent.\n\nFurthermore, the initial model assumptions, while enabling interpretable\nresults, also increase the dependence of the solutions on the algorithm inputs.\nIf \\gls{PCA} does not perform well with the type of training data in this work,\nit is possible that factor analysis will, since there are many nuclides in the\ntraining data set that are in a decay chain together.\n\n\\vspace{5mm} \\noindent \\textbf{Independent Components Analysis} \\vspace{5mm}\n\n\\Gls{ICA} has characteristics of both factor analysis and \\gls{PCA}, but with\nthe goal of finding independent measurements from multiple `sources'.  It uses\nthe same form of decomposition as factor analysis but without the inclusion of\nrandom error. The mathematical assumptions are a bit different: the variables\nare statistically independent and non-Gaussian.  This allows the algorithm to\nminimize higher-order statistics of the data set (\\gls{PCA} and factor analysis\nonly minimize the first two orders).\n\nSince the independent components are useful in signal processing of multiple\nsignals, this technique could prove useful for reprocessed nuclear materials\nwhere multiple source streams converge. \n\n", "meta": {"hexsha": "f7ec49a8575d962724658e250da08cd5d3e8fb5e", "size": 14133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/chapters/litrev/algs.tex", "max_stars_repo_name": "opotowsky/prelim", "max_stars_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "document/chapters/litrev/algs.tex", "max_issues_repo_name": "opotowsky/prelim", "max_issues_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document/chapters/litrev/algs.tex", "max_forks_repo_name": "opotowsky/prelim", "max_forks_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0216606498, "max_line_length": 102, "alphanum_fraction": 0.7727304889, "num_tokens": 3693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673269042767, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5917474572041427}}
{"text": "\\appendix\n\\section{Proof of equation \\ref{eq:energy}}\\label{sec:proofenergy}\nThe third equality \\cref{eq:energy-compact} is valid since \n\\begin{equation*}\n    trace(\\bar{\\mathbf{W}}) = \\mathbf{S}^T diag^{-1}(\\bar{\\mathbf{W}})\\mathbf{S}\n\\end{equation*}\nThe last equality, \\cref{eq:energy}, comes from the fact that the trace of $\\bar{\\mathbf{W}}$ (\\cref{eq:weights}) is the sum of the diagonal elements, and the diagonal elements of $\\bar{\\mathbf{W}}$ are given by\n\\begin{equation}\n    \\bar{w}_{ii} = \\frac{1}{M} \\sum_{m=1}^M (v_i^{(m)})^2 \\equiv 1\n\\end{equation}\nThe trace of $\\bar{\\mathbf{W}}$ is therefore given by \n\\begin{equation}\n    trace(\\bar{\\mathbf{W}}) = \\sum_{n=1}^N 1 = N\n\\end{equation}\n\n\\section{Matlab Code} \\label{sec:matlab_code}\n\\lstinputlisting[breaklines=true,language=MATLAB]{../../project2.m}\n\\lstinputlisting[breaklines=true,language=MATLAB]{../../runSim.m}", "meta": {"hexsha": "bc21973333904986a15240a2914dfa49c2189ca4", "size": 879, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/project2/appendix.tex", "max_stars_repo_name": "HaavardM/nevr3004-neural-networks", "max_stars_repo_head_hexsha": "7acfe8f6a4fedabd1d2dbfebf2f21e045010f90e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/project2/appendix.tex", "max_issues_repo_name": "HaavardM/nevr3004-neural-networks", "max_issues_repo_head_hexsha": "7acfe8f6a4fedabd1d2dbfebf2f21e045010f90e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/project2/appendix.tex", "max_forks_repo_name": "HaavardM/nevr3004-neural-networks", "max_forks_repo_head_hexsha": "7acfe8f6a4fedabd1d2dbfebf2f21e045010f90e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.8333333333, "max_line_length": 211, "alphanum_fraction": 0.6916951081, "num_tokens": 317, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5917474554010802}}
{"text": "\\section{Computation}\n\\label{sec:computation}\n\n\\subsection{Stationary equilibrium}\n\nThe solution algorithm used to solve for stationary equilibria consists of the following steps:\n\\begin{enumerate}\n    \\item Guess initial levels of aggregate capital and aggregate labor\n    \\item derive factor prices from firm FOCs\n    \\item calculate pension benefits from aggregate labor supply and the government budget constraint\n    \\item Solve for household value functions and policy functions by backward induction\n        \\begin{enumerate}\n            \\item Start with maximum age; value function equals flow utility from consuming all available resources\n            \\item Iterate backwards over ages and solve using continuation values from the next age\n                \\begin{enumerate}\n                    \\item Define meshgrids for current assets, next period assets, current human capital and next period human capital\n                    \\item Calculate consumption, human capital effort and flow utility on meshgrid\n                    \\item Extract continuation values by next period assets and next period human capital from value functions of time period $t+1$\n                    \\item Calculate sum of flow utility and expected discounted continuation value on meshgrids\n                    \\item Find maximal value over assets and human capital next period\n                \\end{enumerate}\n            \\item Store value function and policy for all ages\n        \\end{enumerate}\n    \\item Simulate cross-sectional distribution of households by asset holdings, human capital and age\n        \\begin{enumerate}\n            \\item Initiate with initial mass of household at initial levels of assets and human capital\n            \\item Iterate forward over policy functions to obtain cross-sectional distribution\n        \\end{enumerate}\n    \\item Aggregate over households to obtain aggregate variables\n    \\item Verify initial guess; If tolerance for deviation is exceeded, update guess and repeat.\n\\end{enumerate}\n\n\\subsection{Transitional dynamics}\n\nThe solution algorithm used to solve for transitional dynamics consists of the following steps:\n\\begin{enumerate}\n    \\item Guess initial path for aggregate capital and aggregate labor\n    \\item Derive factor prices from firm FOCs\n    \\item Calculate pension benefits from aggregate labor supply and the government budget constraint\n    \\item Solve for household value functions and policy functions by backward induction for all age and all time periods\n        \\begin{enumerate}\n            \\item Start with the final period $T$ and use as continuation values for all ages the value functions of the final stationary equilibrium\n            \\item Iterate backwards through time periods and derive value functions and optimal policies\n                \\begin{enumerate}\n                    \\item Define meshgrids for current assets, next period assets, current human capital and next period human capital\n                    \\item Calculate consumption, human capital effort and flow utility on meshgrid\n                    \\item Extract continuation values by next period assets and next period human capital from value functions of time period $t+1$\n                    \\item Calculate sum of flow utility and expected discounted continuation value on meshgrids\n                    \\item Find maximal value over assets and human capital next period\n                \\end{enumerate}\n            
\\item Store policies and value functions for all ages and all time periods\n        \\end{enumerate}\n    \\item Simulate cross-sectional distribution of households by asset holdings, human capital and age over time\n        \\begin{enumerate}\n            \\item Start with the cross-sectional distribution from the initial stationary equilibrium\n            \\item Iterate forward over policy functions of the respective time period to obtain the cross-sectional distribution of the following time period\n        \\end{enumerate}\n    \\item Aggregate over households to obtain paths of aggregate variables\n    \\item Verify initial guess; if tolerance for deviation is exceeded, update guess and repeat.\n\\end{enumerate}\n\n\\subsection{Parametrization}\n\nThe following parameters are used for computation:\n\\begin{itemize}\n    \\item Asset holdings are discretized on a linear grid with bounds $[0.001, 30.0]$ and 60 grid points\n    \\item Human capital is discretized on a logarithmic grid with bounds $[1.0, 5.0]$ and 20 grid points\n    \\item The tolerance levels for deviations in aggregate capital and aggregate labor are set to $10^{-5}$ for the computation of stationary equilibria and to $10^{-3}$ for the computation of transitional dynamics.\n\\end{itemize}", "meta": {"hexsha": "ad85eb118d0f3a329c85bf6bf204ea35ffd8aff4", "size": 4671, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/paper/A1_computation.tex", "max_stars_repo_name": "simonjheiler/demographic_change_olg", "max_stars_repo_head_hexsha": "cd920989bdc7461efab533ea993ba3990bd46b8d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/paper/A1_computation.tex", "max_issues_repo_name": "simonjheiler/demographic_change_olg", "max_issues_repo_head_hexsha": "cd920989bdc7461efab533ea993ba3990bd46b8d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/paper/A1_computation.tex", "max_forks_repo_name": "simonjheiler/demographic_change_olg", "max_forks_repo_head_hexsha": "cd920989bdc7461efab533ea993ba3990bd46b8d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-16T01:37:35.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-16T01:37:35.000Z", "avg_line_length": 67.6956521739, "max_line_length": 206, "alphanum_fraction": 0.7313209163, "num_tokens": 880, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711794579723, "lm_q2_score": 0.6959583376458152, "lm_q1q2_score": 0.5915445291024233}}
{"text": "\\documentclass[12pt]{article}%\n\\usepackage{hyperref}\n\\usepackage{listings}\n\\usepackage{color}\n\\usepackage{multicol}\n\\usepackage{amsfonts}\n\\usepackage{fancyhdr}\n\\usepackage{comment}\n\\usepackage[a4paper, top=2.2cm, bottom=2.2cm, left=2.2cm, right=2.2cm]%\n{geometry}\n\\usepackage{times}\n\\usepackage{changepage}\n\\usepackage{amssymb}\n\\usepackage{graphicx}%\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{acknowledgement}[theorem]{Acknowledgement}\n\\newtheorem{algorithm}[theorem]{Algorithm}\n\\newtheorem{axiom}{Axiom}\n\\newtheorem{case}[theorem]{Case}\n\\newtheorem{claim}[theorem]{Claim}\n\\newtheorem{conclusion}[theorem]{Conclusion}\n\\newtheorem{condition}[theorem]{Condition}\n\\newtheorem{conjecture}[theorem]{Conjecture}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{criterion}[theorem]{Criterion}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{example}[theorem]{Example}\n\\newtheorem{exercise}[theorem]{Exercise}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{notation}[theorem]{Notation}\n\\newtheorem{problem}[theorem]{Problem}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{remark}[theorem]{Remark}\n\\newtheorem{solution}[theorem]{Solution}\n\\newtheorem{summary}[theorem]{Summary}\n\\usepackage{commath}\n\\usepackage{url} \n\\usepackage{hyperref}\n\\usepackage[style=numeric]{biblatex}\n\\usepackage{subfig}\n\\usepackage{minted}\n% \\usepackage[utf8]{inputenc}\n% \\usepackage[english]{babel}\n\\usepackage{esvect}\n\\addbibresource{reference.bib}\n\n\n\\usepackage{commath}\n\n\\newenvironment{proof}[1][Proof]{\\textbf{#1.} }{\\ \\rule{0.5em}{0.5em}}\n\\usepackage[utf8]{inputenc}\n\n% \\usepackage{algorithm}\n% \\usepackage{algorithmic} %format of the algorithm\n\n\n% Default fixed font does not support bold face\n\\DeclareFixedFont{\\ttb}{T1}{txtt}{bx}{n}{12} % for bold\n\\DeclareFixedFont{\\ttm}{T1}{txtt}{m}{n}{12}  % for normal\n\n\\usepackage{graphics}\n\n% Custom colors\n\\usepackage{color}\n\\definecolor{deepblue}{rgb}{0,0,0.5}\n\\definecolor{deepred}{rgb}{0.6,0,0}\n\\definecolor{deepgreen}{rgb}{0,0.5,0}\n\n\\usepackage{listings}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n% Python style for highlighting\n\\newcommand\\pythonstyle{\\lstset{\nlanguage=Python,\nbasicstyle=\\ttm,\notherkeywords={self},             % Add keywords here\nkeywordstyle=\\ttb\\color{deepblue},\nemph={MyClass,__init__},          % Custom highlighting\nemphstyle=\\ttb\\color{deepred},    % Custom highlighting style\nstringstyle=\\color{deepgreen},\nframe=tb,                         % Any extra options here\nshowstringspaces=false            % \n}}\n\n% Python environment\n\\lstnewenvironment{python}[1][]\n{\n\\pythonstyle\n\\lstset{#1}\n}\n{}\n\n% Python for external files\n\\newcommand\\pythonexternal[2][]{{\n\\pythonstyle\n\\lstinputlisting[#1]{#2}}}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath, nccmath}\n\\usepackage{geometry}\n\n\\usepackage{algorithm}\n\\usepackage{arevmath}     % For math symbols\n\\usepackage[noend]{algpseudocode}\n\n\\begin{document}\n\n\\title{Institute of Robotics,  University of Innopolis}\n\\author{Computational Intelligence \\\\ Least Squares Estimation, Convex Operations , and Convex Programming}\n\\date{\\today}\n\\maketitle\n\n\\subsection{Task 01}\nIn least-squares, given the measurements $A \\in \\mathbb{R}^{m\\times n}$ and $b \\in \\mathbb{R}^{n}$, seek a vector $x \\in \\mathbb{R}^m$ that project $Ax$ on $b$ (or 
equivalently, such that $Ax$ is close to $b$). Such closeness is defined as:\n\\begin{equation} \\label{eq:least_squares}\n    \\min_x \\sum_{i=1}^m (a_i^Tx-b_i)^2 = \\min_x \\left \\| Ax -b \\right \\|_2^2\n\\end{equation}\n\n\\begin{enumerate}\n    \\item Using CVXPY, formulate (\\ref{eq:least_squares}); you may generate some random matrix and vector for $A$ and $b$, respectively\n\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\n m = 20\n n = 15\n np.random.seed(1)\n A = np.random.randn(m, n)\n b = np.random.randn(m)\n\\end{minted}\n% # Define and solve the CVXPY problem.\n% x = cp.Variable(n)\n% cost = cp.sum_squares(A @ x - b)\n% prob = cp.Problem(cp.Minimize(cost))\n% prob.solve()\n\n% # Print result.\n% print(\"\\nThe optimal value is\", prob.value)\n% print(\"The optimal x is\")\n% print(x.value)\n% print(\"The norm of the residual is \", cp.norm(A @ x - b, p=2).value)\n    \\item If $x^*$ is the optimal solution you obtained, comment on $Ax^*-b$\n    % zero mean perfect fitting \n\\end{enumerate}\n\n\\subsection{Task 02}\nLet's try adding a regularization term to the objective (\\ref{eq:least_squares}) as follows:\n\n\\begin{equation}\\label{eq:lasso}\n    \\min_{x} \\sum_{i=1}^m (a_i^Tx-b_i)^2+\\lambda\\|x\\|_1\n\\end{equation}\n\n\\begin{enumerate}\n    \\item Using CVXPY, formulate (\\ref{eq:lasso}); use the same matrix and vector you used in Task 01 for $A$ and $b$, respectively. $\\lambda$ is a regularization parameter, e.g., $1/\\sqrt{n}$ \n\n\n    \\item Compare the $x^*$ in both cases\n    \\item Repeat Task 01 and Task 02 using the provided dataset \n    % zero mean perfect fitting \n\\end{enumerate}\n\n\\subsection{Task 03}\nConsider the following minimization problem\n\\begin{equation}\n    \\min_x (Ax-b)^{\\top} (Ax-b)\n\\end{equation}\n\n\\begin{equation}\\label{eq:01}\n\\begin{aligned}\n \\min_{x} \\quad & (Ax-b)^{\\top} (Ax-b) \\\\\n\\textrm{s.t.} \\quad  & Gx \\leq h,\n\\end{aligned}\n\\end{equation}\n\\begin{enumerate}\n    \\item First consider the problem with no constraints; solve it analytically ($x^* = (A^TA)^{-1}A^Tb$) as well as a QP problem and compare your answers. Take $A=\\begin{bmatrix}\n1 &1 \\\\ \n2 & 1\\\\ \n3 & 2\n\\end{bmatrix} $ and $b = \\begin{bmatrix}\n2\\\\ \n3\\\\ \n4\n\\end{bmatrix}$\n    \\item Constrain $x$ within $-0.9 \\leq x \\leq 0.9$ and solve it again.\n\\end{enumerate}\n\n
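A possible CVXPY sketch for Task 03 (one way to do it, assuming \\texttt{numpy} and \\texttt{cvxpy}; your own formulation may differ):\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\nimport numpy as np\nimport cvxpy as cp\n\nA = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 2.0]])\nb = np.array([2.0, 3.0, 4.0])\n\n# Unconstrained: analytic solution x* = (A^T A)^{-1} A^T b\nx_analytic = np.linalg.solve(A.T @ A, A.T @ b)\n\n# The same problem as a QP, then with the box constraint\nx = cp.Variable(2)\ncp.Problem(cp.Minimize(cp.sum_squares(A @ x - b))).solve()\nprint(x_analytic, x.value)      # should agree (approximately)\n\ncp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),\n           [x >= -0.9, x <= 0.9]).solve()\nprint(x.value)                  # constrained solution\n\\end{minted}\n\n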
\\subsection{Task 04}\nA sphere is described by $\\{ x \\in \\mathbb{R}^n \\mid \\left \\| x - x_c \\right \\|_2 = r\\}$. Let's try to fit a sphere in $\\mathbb{R}^n$ to a given number of points $m$ ($u_1, u_2,...,u_m \\in \\mathbb{R}^n$) by minimizing the following error function:\n\\begin{equation}\n    \\sum_{i=1}^m (\\left \\| u_i -x_c \\right \\|_2^2-r^2)^2\n\\end{equation} over the variables $x_c \\in \\mathbb{R}^n, \\; r \\in \\mathbb{R}$\n\n\\begin{enumerate}\n    \\item Formulate the problem as a least squares problem of the form: $\\min_x \\left \\| Ax -b \\right \\|_2^2$\n    \\item If $x = (x_c, t)$, define the $t$ in terms of $r$ and $x_c$\n    \\item Define the $A$ and $b$\n    \\item Use the optimality condition $A^T(Ax-b) = 0$ and show that \n    \\begin{equation}\\label{eq:shpere}\n        r^2  = \\frac{1}{m} \\sum_{i=1}^m  \\left \\| u_i -x_c \\right \\|_2^2\n    \\end{equation}\n    \\item  Using CVXPY, formulate (\\ref{eq:shpere}) for $\\mathbb{R}^2$, where the $m$ points can be generated as follows:\n    \\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\nm = 50\nr = 1 \nxc = (3,4)\n\nt = np.linspace(0, 2*np.pi, m, endpoint=False) \nx = xc[0] + r * np.cos(t) + np.random.uniform(-0.2,0.2,t.shape[0])\ny = xc[1] + r * np.sin(t) + np.random.uniform(-0.2,0.2,t.shape[0])\nU = np.vstack((x,y))\n\n\\end{minted}\n\\end{enumerate}\n\n\\begin{figure}[H]%\n    \\centering\n    \\subfloat[\\centering \\label{f:init_space}  ]{{\\includegraphics[width=5cm]{circle.png} }}%\n    \\qquad\n    \\subfloat[\\centering\\label{f:end_pose} ]{{\\includegraphics[width=7cm]{shpere.png} }}%\n    \\label{fig:example}\\caption{Expected output for $\\mathbb{R}^2$ and $\\mathbb{R}^3$} %\n\\end{figure}\n\n\\subsection{Task 05}\nLet's try to find the Chebyshev center of a polyhedron. Consider the following polyhedron:\n\\begin{equation}\n    P = \\{ x\\mid a_i^{\\top}x \\leq b_i, \\; i=1,...,m\\}\n\\end{equation} The Chebyshev center is the center of the largest ball that can fit within $P$:\n\\begin{equation}\n    Cb = \\{ x_c+u\\mid \\left \\| u \\right \\|_2 \\leq r\\},\n\\end{equation} where $x_c$ is the center and $u = x-x_c$. Hint: Cauchy-Schwarz Inequality: for all vectors $\\mathbf{a}$ and $\\mathbf{u}$ of an inner product space it is true that\n\\begin{equation}\n    \\mathbf{a}^T\\mathbf{u} \\leq \\left \\| \\mathbf{a} \\right \\|_2 \\left \\| \\mathbf{u} \\right \\|_2\n\\end{equation}\n\n\\begin{enumerate}\n    \\item For $Cb$ to be inside $P$, we need $a_i^{\\top}x\\leq b_i$ for all $x \\in Cb$; how can you define this condition?\n    \\item Solve this optimization problem considering these constraints: $A = \\begin{bmatrix}\n-1 & -1\\\\ \n-0.5 & 1\\\\ \n 2& -1\n\\end{bmatrix}, \\; b = \\begin{bmatrix}\n1\\\\ \n2\\\\ \n4\n\\end{bmatrix}$, and compare your result with the MPT toolbox as follows:\n    \\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{matlab}\n\nP = Polyhedron('A', A, 'b', b);\nhold on\nplot(P);\nx = sdpvar(2,1);\ndata = P.chebyCenter(); \nS = YSet(x, norm(x - data.x) <= data.r);\nS.plot('color', 'lightgreen');\nplot(data.x(1), data.x(2), 'ko', 'MarkerSize'\n                    , 10, 'MarkerFaceColor', 'k');\n\\end{minted} \n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=12cm]{ball.png}\n    \\caption{Chebyshev center}\\label{f:farthest-dis}\n    \\end{center}\n\\end{figure}\n    \n\\end{enumerate}\n\n
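A possible CVXPY sketch for Task 05 (one formulation, assuming \\texttt{cvxpy}; compare it against the MPT result above):\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\nimport numpy as np\nimport cvxpy as cp\n\nA = np.array([[-1.0, -1.0], [-0.5, 1.0], [2.0, -1.0]])\nb = np.array([1.0, 2.0, 4.0])\n\nr = cp.Variable()\nxc = cp.Variable(2)\n# a_i^T xc + r*||a_i||_2 <= b_i keeps the ball inside each halfspace\ncons = [A[i] @ xc + r * np.linalg.norm(A[i]) <= b[i] for i in range(3)]\ncp.Problem(cp.Maximize(r), cons).solve()\nprint(xc.value, r.value)   # Chebyshev center and radius\n\\end{minted}\n\n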
\\subsection{Task 06}\nLet $S=\\{y_1\\mathbf{a}_1+y_2\\mathbf{a}_2 \\mid -1<y_1<1, -1<y_2<1\\}$, where $\\mathbf{a}_1, \\mathbf{a}_2 \\in \\mathbb{R}^n$, be a polyhedron. Your task is to formulate S in the standard form, namely $S=\\{ x \\mid Ax \\leq b, Fx=g\\}$. For simplicity assume that $\\mathbf{a}_1$ and $\\mathbf{a}_2$ are independent; S can be seen as an intersection of the following three sets:\n\\begin{enumerate}\n    \\item $S_1$, the plane defined by $\\mathbf{a}_1$ and $\\mathbf{a}_2$\n    \\item $S_2 = \\{ z+y_1\\mathbf{a}_1+y_2\\mathbf{a}_2 \\mid \\mathbf{a}_1^Tz=\\mathbf{a}_2^Tz=0, \\; -1 \\leq y_1 \\leq 1\\}$ parallel to $\\mathbf{a}_2$ and orthogonal to $S_1$ \n    \\item $S_3 = \\{ z+y_1\\mathbf{a}_1+y_2\\mathbf{a}_2 \\mid \\mathbf{a}_1^Tz=\\mathbf{a}_2^Tz=0, \\; -1 \\leq y_2 \\leq 1\\}$ parallel to $\\mathbf{a}_1$ and orthogonal to $S_1$ \n\\end{enumerate} \\textbf{Hint}: a vector $c_1$ that lies in the plane defined by $\\mathbf{a}_1$ and $\\mathbf{a}_2$ and is orthogonal to $\\mathbf{a}_2$ can be described by $ c_1 = \\mathbf{a}_1 - \\frac{\\mathbf{a}_1^T\\mathbf{a}_2}{\\left \\| \\mathbf{a}_2 \\right \\|_2^2}\\mathbf{a}_2$ \n\nFor visualization you may use the following script:\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{matlab}\nS = Polyhedron('A', A, 'b', b);\nplot(S)\n\\end{minted} \n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=6cm]{polycons.png}\n    \\caption{S}\\label{f:farthest-dis}\n    \\end{center}\n\\end{figure}\n\n\\section{Appendix A}\n\n\\begin{figure}[H]%\n    \\centering\n    \\subfloat[\\centering \\label{f:init_space}  ]{{\\includegraphics[width=7.5cm]{init_pose.png} }}%\n    \\qquad\n    \\subfloat[\\centering\\label{f:end_pose} ]{{\\includegraphics[width=8cm]{end_pose.png} }}%\n    \\subfloat[\\centering\\label{f:convex_opt} ]{{\\includegraphics[width=8cm]{convex_opt.png} }}%\n    \\label{fig:example}\\caption{How can we formulate this trajectory planning task as an optimization problem?} %\n\\end{figure}\n\n\\begin{equation}\n     J(\\Gamma) =  \\xi_{obs}J_{obs}(\\Gamma) + \\xi_{smooth}J_{smooth}(\\Gamma) + \\xi_{soft}J_{soft}(\\Gamma), \\quad J_{soft}(\\Gamma) = J_{v}(\\Gamma) + J_{a}(\\Gamma),\n\\end{equation} where $J_{soft}(\\Gamma)$ is determined by soft limits on acceleration and velocity. $J_{smooth}(\\Gamma)$ is defined by considering geometric information and/or minimizing snap and/or jerk. $\\xi_{obs}J_{obs}(\\Gamma)$ helps to avoid collisions. So how can we define these sub-objective functions? What should be considered? There is a lot to think about, isn't there? Let's get to the problem formulation.\n\n\\subsection{Convex Set and Convex Functions}\nA set $\\Omega \\subseteq \\mathbb{R}^n$ is convex if and only if the line segment between any two points in $\\Omega$ lies in $\\Omega$, i.e., $\\forall x_1,x_2 \\in \\Omega$ and $0 \\leq \\lambda \\leq 1$\n\\begin{equation}\n    \\lambda x_1 + (1-\\lambda)x_2 \\in \\Omega\n\\end{equation}$ \\lambda x_1 + (1-\\lambda)x_2, \\; \\lambda \\in [0,1]$ is called a convex combination of $x_1$ and $x_2$. This can be generalized up to n points $\\lambda_1 x_1 + ... 
+ \\lambda_n x_n, \\; \\lambda_1 + ...+ \\lambda_n = 1, \\; \\lambda_i \\geq 0$\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=12cm]{convex_set_non.png}\n    \\caption{Some convex and nonconvex sets~\\cite{boyd2004convex}}\n    \\end{center}\n\\end{figure}\n\nA function $f:\\mathbb{R}^n \\rightarrow \\mathbb{R}$ is a convex function if its domain $dom(f)$ is a convex set and $\\forall x_1,x_2 \\in dom(f)$ and $0 \\leq \\lambda \\leq 1$\n\\begin{equation}\n    f(\\lambda x_1 + (1-\\lambda)x_2) \\leq \\lambda f(x_1) + (1-\\lambda)f(x_2)\n\\end{equation} Geometrically, the line segment connecting $(x_1, f(x_1))$ to $(x_2, f(x_2))$ sits above the graph of the function f (whose region above the graph is the epigraph, $epi f(x)$); refer to Fig.~\\ref{f:convex_fun}. Now we are ready to define a given optimization problem as a convex optimization problem as follows:\n\\begin{equation}\n\\begin{aligned}\n\\min_{} \\quad & f(x)\\\\\n\\textrm{s.t.} \\quad & x \\in \\Omega,\\\\\n\\end{aligned}\n\\end{equation} where f is a convex function and $\\Omega$ is a convex set. For such a problem, any local minimum is guaranteed to be a global one, due to convexity. \n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=12cm]{convex-fun.png}\n    \\caption{Definition of a convex function}\\label{f:convex_fun}\n    \\end{center}\n\\end{figure}\n\n\n\n\\subsection{Some important examples of convex sets}\n\\subsubsection{Hyperplanes and halfspaces}\nHyperplanes and halfspaces are extremely important when defining the constraint set for optimization problems. A hyperplane is a set of the form:\n\\begin{equation}\n    Hyperplanes: \\{ x|\\; a^Tx =b\\} \\; (a\\in \\mathbb{R}^n, b \\in \\mathbb{R}, a\\neq 0)\n\\end{equation} Geometrically, a hyperplane can be interpreted as the set of points with a constant inner product with a given (normal) vector $a$, whereas $b$ determines the offset of the hyperplane from the origin. For any point $x_0$ in the hyperplane, this geometric interpretation can be written as \\begin{equation}\n     \\{ x|\\; a^T(x-x_0) = 0\\} = x_0 + a^{\\perp}\\; (a\\in \\mathbb{R}^n, a\\neq 0), \\quad a^Tx_0 = b, \n\\end{equation} where $a^{\\perp}$ denotes the orthogonal complement of a, i.e., $a^{\\perp}=\\{ v | a^Tv = 0\\}$. This interpretation is illustrated in Fig.~\\ref{f:hyperplane_1}.\n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=6cm]{hyperplane_1.png}\n    \\caption{Hyperplane in $\\mathbb{R}^2$, with normal vector a and a point $x_0$. The darker arrow depicts $x-x_0$ for an arbitrary point $x$ in the hyperplane ~\\cite{boyd2004convex}}\\label{f:hyperplane_1}\n    \\end{center}\n\\end{figure}\n\n\nA hyperplane divides $\\mathbb{R}^n$ into two halfspaces. A closed halfspace is a set of the form\n\\begin{equation}\n    \\begin{aligned}\n     \\{x | a^Tx \\leq b\\}, \\quad a \\neq 0, or\\\\\n      \\{x | a^T(x-x_0) \\leq 0\\}, \\quad a \\neq 0, a^Tx_0 = b\\\\\n     \\end{aligned}\n\\end{equation}\n
Halfspaces are convex but not affine (Fig.~\\ref{f:hyperplane_2}).\n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=6cm]{hyperplane_2.png}\n    \\caption{A halfspace determined by $a^Tx \\leq b$ ~\\cite{boyd2004convex}}\\label{f:hyperplane_2}\n    \\end{center}\n\\end{figure} The halfspace consists of $x_0$ plus any vector that makes an obtuse (or right) angle with the vector a, as shown in Fig.~\\ref{f:hyperplane_3}.\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=6cm]{hyperplane_3.png}\n    \\caption{The vector $x_2-x_0$ makes an obtuse angle with a, whereas the vector $x_1-x_0$ makes an acute angle with a. Hence, $x_2$ lies in the halfspace while $x_1$ does not ~\\cite{boyd2004convex}}\\label{f:hyperplane_3}\n    \\end{center}\n\\end{figure}\nWhat is the difference between a plane and a hyperplane? What is the difference between a hyperplane and a halfspace? \n\n\n\n\\subsection{Euclidean balls and ellipsoids}\nA ball in $\\mathbb{R}^n$ has the form\n\\begin{equation}\n    Ball = B(x_c, r) = \\{x| \\left \\| x-x_c \\right \\|_2 \\leq r\\} = \\{x|(x-x_c)^T(x-x_c) \\leq r^2\\} = \\{x_c+ru| \\left \\| u \\right \\|_2 \\leq 1\\}, \\; r>0\n\\end{equation} Can you show that a ball is a convex set?\nAn ellipsoid in $\\mathbb{R}^n$ has the form\n\\begin{equation}\\label{ellipsoid}\n    Ellipsoid = \\{x | (x-x_c)^TP^{-1}(x-x_c) \\leq 1\\} = \\{x_c+Au|\\left \\| u \\right \\|_2 \\leq 1 \\}\\; P=P^T \\succ 0\n\\end{equation} where A is square and nonsingular ($A=P^{1/2}$). The matrix P determines how far the ellipsoid extends in every direction from $x_c$; the semi-axis lengths are given by $\\sqrt{\\lambda_i}$, where $\\lambda_i$ is the $i^{th}$ eigenvalue of $P$. An ellipsoid becomes a ball when $P=r^2I$.\n\n\\subsection{Polyhedra}\nGiven the halfspace representation (H-rep), i.e., $Ax \\leq b$, the corresponding set is defined in three different ways: Polyhedron ($P = \\{x|Ax\\leq b\\}$), Polyhedral cone ($P=\\{x|Ax\\leq 0\\}$), and Polytope ($P=\\{x|Ax\\leq \\mathbf{1}\\}$). A polyhedron has the form\n\\begin{equation}\n    Polyhedron = \\{x|a_j^Tx \\leq b_j, j=1,...,m, \\; c_j^Tx =d_j, j=1,...,p\\} = \\{x|Ax \\preceq b, Cx=d\\}.\n\\end{equation} Hence, a polyhedron is the intersection of a set of halfspaces and hyperplanes. In general, subspaces, hyperplanes, line segments, and halfspaces are all polyhedra. 
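As a quick numerical aid (a NumPy sketch, not part of the original assignment), H-rep membership can be tested directly:\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\nimport numpy as np\n\ndef in_polyhedron(x, A, b, tol=1e-9):\n    # membership test for the H-rep P = {x | Ax <= b}\n    return bool(np.all(A @ x <= b + tol))\n\n# Example: the unit box |x1| <= 1, |x2| <= 1\nA = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])\nb = np.ones(4)\nprint(in_polyhedron(np.array([0.5, -0.2]), A, b))  # True\nprint(in_polyhedron(np.array([2.0, 0.0]), A, b))   # False\n\\end{minted}\n\n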
Let's try to visualize a polyhedron, considering the following constraints:\n\\begin{equation}\\label{polyconst}\n    A = \\begin{bmatrix}\n -0.2936  & -1.3260 \\\\ \n    0.8245  & -1.4999 \\\\ \n    0.1941  &  1.0160 \\\\ \n    0.2977  & -0.0275 \\\\ \n   -0.7101  & -0.1604 \\\\ \n   -0.6877  &  0.3788 \\\\ \n    0.5728  & -0.1072 \\\\ \n    0.4452  &  0.2128 \n\\end{bmatrix}, b = \\begin{bmatrix}\n2.0970 \\\\ \n    0.2372 \\\\ \n    1.5282 \\\\ \n    1.4607 \\\\ \n    2.5676 \\\\ \n    1.8432 \\\\ \n    0.3522 \\\\ \n    1.8206 \n\\end{bmatrix}\n\\end{equation}\n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=10cm]{polyhedron.png}\n    \\caption{The polyhedron with respect to (\\ref{polyconst})}\\label{f:polyhedron}\n    \\end{center}\n\\end{figure}\n\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{matlab}\nA = randn(8,2);\nb = 3*rand(8,1);\nP = polytope(A,b);\nplot(P);   \n\\end{minted}    \n\nLet's try to define a polyhedron for the following constraint set:\n\\begin{equation}\nP = \\left\\{\\begin{matrix}\n6x+y \\leq 11\\\\ \n8x+2y \\leq 29 \\\\ \nx-y \\geq 11\\\\ \n2x+y \\leq -4\\\\ \ny \\leq 2\\\\ \ny < 21\n\\end{matrix}\\right.\n\\end{equation} you may use the $P = Polyhedron( 'A', [*], 'b', [*], 'Ae', [*], 'be', *)\n$~\\cite{poly_ref} notation to define the polyhedron\n\\subsection{Convex hull of polyhedra}\nThe convex hull of a given set of points is defined as follows:\n\\begin{equation}\n    conv\\{v_1,...,v_k\\} = \\{\\theta_1 v_1+...+\\theta_k v_k | \\theta \\succeq 0, 1^T\\theta = 1\\}\n\\end{equation}\n\nLet's say you are given a set of points (or vertices), and the task is to construct the convex hull from those vertices V, obtaining the corresponding faces F\n\\begin{equation}\n    V = \\begin{bmatrix}\n14.2347  & 12.5802 &  13.1171 \\\\ \n   12.7639 &  13.2543 &  15.3665 \\\\ \n   16.4311  & 15.7984  & 16.0939 \\\\ \n   12.1569 &  16.4915 &  16.7446\n\\end{bmatrix}, \\quad F = \\begin{bmatrix}\n1   &  2  &   4 \\\\ \n     1   &  3  &   2 \\\\ \n     1  &   4  &   3 \\\\ \n     2  &   3  &   4 \n\\end{bmatrix}\n\\end{equation}\n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=10cm]{convex_hull.png}\n    \\caption{Convex hull of the given vertices}\\label{f:convex_hull}\n    \\end{center}\n\\end{figure}\n\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{matlab}\nV = (5).*rand(4,3) + 12;\nF = convhull(V);\nS.Vertices = V;\nS.Faces = F;\nS.FaceVertexCData = jet(size(V,1));\nS.FaceColor = 'interp';\npatch(S); \n\\end{minted}  \n\n\\subsection{Let's try some operations that preserve convexity}\nThe Minkowski sum (or difference) is an interesting operation on convex sets, with various applications: it is used when defining terminal constraint sets in Robust Model Predictive Control (RMPC), for separating free space from obstacle space in motion planning, and in collision detection. A small Python sketch follows; the corresponding MPT/MATLAB version is shown after it. 
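Here is a brute-force Python sketch (an illustration under the assumption that both operands are convex polytopes given by their vertices; it relies on the fact that the Minkowski sum of convex polytopes is the convex hull of the pairwise vertex sums):\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{python}\nimport numpy as np\nfrom scipy.spatial import ConvexHull\n\nP = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # triangle\nS = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])    # small triangle\n\n# All pairwise sums p + s, then take their convex hull\nsums = (P[:, None, :] + S[None, :, :]).reshape(-1, 2)\nhull = ConvexHull(sums)\nprint(sums[hull.vertices])   # vertices of the Minkowski sum P + S\n\\end{minted}\n\n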
In MATLAB, the MPT toolbox provides the same operations directly:\n\n\\begin{minted}\n[frame=lines, framesep=2mm, baselinestretch=1.2,]\n{matlab}\nA = randn(10,2);\nb = 3*rand(10,1);\nP = polytope(A,b);\nhold on\nplot(P, 'r');\n\nE = randn(10,2);\nf = 0.1*rand(10,1);\nS = polytope(E,f);\nplot(P-S,'g');\n\\end{minted} \n\n\\begin{figure}[H]\n    \\begin{center}\n    \\includegraphics[width=10cm]{minkowski_diff.png}\n    \\caption{Minkowski difference between two polytopes, namely P and S}\\label{f:minkowski_diff}\n    \\end{center}\n\\end{figure}\n\n\n\nLet's try to calculate the center corresponding to the largest ball inscribed in the polyhedron. The ball is described as $\\{x | \\left \\| x-x_c \\right \\|_2 \\leq r\\}$. \n\n\\printbibliography\n\\end{document}", "meta": {"hexsha": "6f39a3849ac238a606827b95142fcc82b1f1d2be", "size": 20226, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Practice 2021/Least Squares Estimation, Convex Operations , and Convex Programming/main.tex", "max_stars_repo_name": "kahlflekzy/Computational-Intelligence-Slides-Spring-2022", "max_stars_repo_head_hexsha": "9401fe1258efa91a6c9886501d02909420a94add", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2022-01-19T15:40:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T22:27:44.000Z", "max_issues_repo_path": "Practice 2021/Least Squares Estimation, Convex Operations , and Convex Programming/main.tex", "max_issues_repo_name": "kahlflekzy/Computational-Intelligence-Slides-Spring-2022", "max_issues_repo_head_hexsha": "9401fe1258efa91a6c9886501d02909420a94add", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-05-27T09:02:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-13T09:36:55.000Z", "max_forks_repo_path": "Practice 2021/Least Squares Estimation, Convex Operations , and Convex Programming/main.tex", "max_forks_repo_name": "kahlflekzy/Computational-Intelligence-Slides-Spring-2022", "max_forks_repo_head_hexsha": "9401fe1258efa91a6c9886501d02909420a94add", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2021-01-20T07:58:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-12T08:28:08.000Z", "avg_line_length": 38.9710982659, "max_line_length": 431, "alphanum_fraction": 0.6820429151, "num_tokens": 7118, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458153, "lm_q2_score": 0.849971175657575, "lm_q1q2_score": 0.5915445264575051}}
{"text": "\\documentclass[24pt, a4]{article}\n% To compile latex inside of vim \n% ! clear; pdlatex name_of_file.tex or \"%\"\n\n\n% \\usepackage[utf8]{inputenc}\n\\usepackage[parfill]{parskip}\n\\usepackage{listings}\n\\usepackage{geometry}\n\n\n\\title{Binary Search}\n\\author{Mustafa Muhammad}\n\\date{10th October 2021}\n\n\\begin{document}\n\n\\maketitle\n\n\\newpage\n\nList of Questions:\n\\begin{enumerate}\n    \\item Binary search\n    \\item Binary search on reverse sorted array\n    \\item Order not known search\n    \\item First and last occurent of an element\n    \\item Count of an element in a sorted array\n    \\item Find floor of an element in a sorted array\n    \\item Ceil of an element in a sorted array\n    \\item Next alphabetical element\n    \\item Find position of an element in an infinite sorted array\n    \\item Index of first in sorted infite array (binary)\n    \\item Minimum difference element in a sorted array\n    \\item Peak element\n    \\item Find maximum element in a bitonic array\n    \\item Search an element in a bitonic array\n    \\item Search in row and column wise sorted array\n    \\item Allocate minimum number of pages\n    \\end{enumerate}\n\n\\newpage\n\\section{Identification}\nIf you see that the given array is sorted, think binary search.\n\\begin{lstlisting}\ndef searchInSorted(self,arr, N, K):\n        left = 0\n        right = len(arr)-1\n\n        while(left <= right):\n            mid = int(left + (right-left)/2)\n\n            if arr[mid] == K:\n                return 1\n\n            elif arr[mid] < K:\n                left = mid+1\n\n            else:\n                right = mid -1\n\n        return -1\n\\end{lstlisting}\n\n\\section{Binary search - Reverse Sorted Array}\n\n\\begin{lstlisting}\ndef binary_reverse_search(arr, target):\n    left = 0\n    right = len(arr)-1\n\n    while(left <= right):\n        mid = int(left+(right-left)/2)\n\n        if arr[mid] < target:\n            right = mid - 1\n\n        elif arr[mid] > target:\n            left = mid + 1\n\n        else:\n            return mid\n\n    return -1\n\\end{lstlisting}\n\\newpage\n\n\\section{Order Not Known}\n\nWe can check the difference between the first and last element. 
\\section{First and Last Occurrence of an Element}\n\nWe check the first occurrence of an element by finding an occurrence and then\ncontinuing to search left.\nSimilarly, we find the last occurrence by continuing to search right once found.\n\n\\begin{lstlisting}\ndef find(arr,n,x):\n    # code here\n    def first_occurrence(arr, x):\n        left = 0\n        right = len(arr)-1\n        res = -1\n        while(left <= right):\n            mid = int(left+ (right-left)/2)\n            if arr[mid] == x:\n                res = mid\n                right = mid - 1\n            elif arr[mid] < x:\n                left = mid+1\n            elif arr[mid] > x:\n                right = mid - 1\n        return res\n    def last_occurrence(arr, x):\n        left = 0\n        right = len(arr)-1\n        res = -1\n        while(left <= right):\n            mid = int(left+ (right-left)/2)\n            if arr[mid] == x:\n                res = mid\n                left = mid + 1\n            elif arr[mid] < x:\n                left = mid+1\n            elif arr[mid] > x:\n                right = mid - 1\n        return res\n    results = []\n    results.append(first_occurrence(arr, x))\n    results.append(last_occurrence(arr, x))\n    return results\n\\end{lstlisting}\n\\newpage\n\\section{Count of an element in a sorted array}\nWe find the first occurrence of an element, then we find the last occurrence.\nThe count is lastOccurrence - firstOccurrence + 1 if neither index\nis -1; otherwise we return 0.\n\n\\section{Number of times a sorted array is rotated}\nInput: nums = [3,4,5,1,2]\n\nOutput: 1 (the minimum element)\n\nExplanation: the original array [1,2,3,4,5] was rotated 3 times. We know this\nbecause the minimum element's index is 3. We perform binary search to find the\nsmallest element; its index is the number of rotations.\n\n\\begin{lstlisting}\nclass Solution:\n    def findMin(self, nums: List[int]) -> int:\n        # minimum in rotated sorted array\n        left = 0\n        right = len(nums)-1\n        n = len(nums)\n\n        while left <= right:\n            mid = int(left + (right-left)/2)\n\n            # To avoid index out of bounds\n            next_ = (mid+1)%n\n            prev = (mid-1+n)%n\n            \n            # Minimum element will be smaller than both its neighbors.\n            if nums[mid]<= nums[next_] and nums[mid] <= nums[prev]:\n                return nums[mid]\n            \n            # We go towards the unsorted half (location of min element)\n\n            elif nums[0] <= nums[mid]:\n                left = mid+1\n        \n            elif nums[mid] <= nums[-1]:\n                right = mid -1\n        \n        return nums[0]\n\\end{lstlisting}\n\n\\newpage\n\\section{Find an element in a rotated sorted array}\n\nThis builds upon the previous question on rotation. 
We find the minimum index\nand divide the array in half and apply binary search on both sides.\n\n\\begin{lstlisting}\nclass Solution:\n    def search(self, nums: List[int], target: int) -> int:\n        # find minimum index , then binary search on both sides\n        def find_index(nums):\n            left = 0\n            right = len(nums)-1\n            n = len(nums)\n            while left <= right:\n                mid = int(left + (right-left)/2)\n                next_ = (mid+1)%n\n                prev = (mid-1+n)%n\n\n                if nums[mid]<=nums[next_] and nums[mid]<=nums[prev]:\n                    return mid\n                \n                elif nums[0] <= nums[mid]:\n                    left = mid + 1\n                \n                elif nums[mid] <= nums[-1]:\n                    right = mid -1\n            \n            return -1\n        \n        def binary_search(nums, target):\n            left = 0\n            right = len(nums)-1\n            while(left <= right):\n                mid = int(left + (right-left)/2)\n                if nums[mid] == target:\n                    return mid\n                elif nums[mid] < target:\n                    left = mid + 1\n                elif nums[mid] > target:\n                    right = mid -1\n            return -1\n        min_index = find_index(nums)\n        left_arr = binary_search(nums[0:min_index], target)\n        right_arr = binary_search(nums[min_index:], target)\n        \n        return right_arr+len(nums[0:min_index]) if right_arr != -1 else left_arr\n\\end{lstlisting}\n\\newpage\n\\section{Searching in a nearly sorted array}\n\\begin{lstlisting}\nclass Solution:\n    def binary_search_on_nearly_sorted_array(arr, target):\n        left = 0\n        right = len(arr)-1\n\n        while left <= right:\n            mid = int(left + (right-left)/2)\n            if arr[mid] == target:\n                return mid\n            elif mid-1 >= left and arr[mid-1] == target:\n                return mid-1\n            elif mid+1 <= right and arr[mid+1] == target:\n                return mid+1\n            elif arr[mid] < target:\n                left = mid+2\n            elif arr[mid] > target:\n                right = mid -2 \n        return -1 \n\\end{lstlisting}\n\\newpage\n\\section{Find floor of an element in a sorted array}\nInput:\n\nN = 7, x = 5\n\narr[] = {1,2,8,10,11,12,19}\n\nOutput: 1\n\nExplanation: Largest Number less than 5 is\n2 (i.e K = 2), whose index is 1(0-based\nindexing).\n\\begin{lstlisting}\nclass Solution:\n    def findFloor(self,arr,N,X):\n            #Your code here\n            left = 0\n            right = len(arr)-1\n            res = -1\n            while left <= right:\n                mid = int(left + (right-left)/2)\n                if arr[mid] == X:\n                    return mid\n                if arr[mid] < X:\n                    res = mid\n                    left = mid + 1\n                \n                elif arr[mid] > X:\n                    right = mid - 1\n            return res\n\\end{lstlisting}\n\n\\newpage\n\\section{Find ceil of an element in a sorted array}\nFor example, let the input array be {1, 2, 8, 10, 10, 12, 19}\n\nFor x = 0:    floor doesn't exist in array,  ceil  = 1\n\nFor x = 1:    floor  = 1,  ceil  = 1\n\nFor x = 5:    floor  = 2,  ceil  = 8\n\nFor x = 20:   floor  = 19,  ceil doesn't exist in array\n\\begin{lstlisting}\nclass Solution:\n    def findFloor(self,arr,N,X):\n            #Your code here\n            left = 0\n            right = 
\n\n\\newpage\n\\section{Find ceil of an element in a sorted array}\nFor example, let the input array be {1, 2, 8, 10, 10, 12, 19}\n\nFor x = 0:    floor doesn't exist in array,  ceil  = 1\n\nFor x = 1:    floor  = 1,  ceil  = 1\n\nFor x = 5:    floor  = 2,  ceil  = 8\n\nFor x = 20:   floor  = 19,  ceil doesn't exist in array\n\nThe code mirrors the floor search: we now record a candidate whenever\narr[mid] > X and keep searching to the left.\n\\begin{lstlisting}\nclass Solution:\n    def findCeil(self, arr, N, X):\n        left = 0\n        right = len(arr) - 1\n        res = -1\n        while left <= right:\n            mid = left + (right - left) // 2\n            if arr[mid] == X:\n                return mid\n            if arr[mid] < X:\n                left = mid + 1\n            else:\n                # Candidate ceil; a closer one may still exist to the left.\n                res = mid\n                right = mid - 1\n        return res\n\\end{lstlisting}\n\n\\section{Next alphabetical element}\nGiven an array of letters sorted in ascending order, find the smallest letter\nin the array which is greater than a given key letter.\nThe problem is very similar to ceil, except that in this particular problem we\nwant the next possible letter.\n\nChanges:\n\n1. Letters instead of numbers.\n\n2. Even if we find the key letter, we keep looking for the next one:\n\\begin{lstlisting}\nif arr[mid] == key:\n    left = mid + 1\n\\end{lstlisting}\n\n\\newpage\n\\section{Find position of an element in an infinite sorted array}\nSince we cannot take the length of an ``infinite'' array, we first grow a\nsearch window exponentially until it must contain the target, then run a\nnormal binary search inside that window.\n\\begin{lstlisting}\nclass Solution:\n    def binary_search_on_inf(arr, target):\n        start = 0\n        end = 1\n\n        # Double the end index, making start the previous end,\n        # until the target can no longer lie beyond the window.\n        while arr[end] < target:\n            start = end\n            end = end * 2\n\n        while start <= end:\n            mid = start + (end - start) // 2\n            if arr[mid] == target:\n                return mid\n            elif arr[mid] < target:\n                start = mid + 1\n            else:\n                end = mid - 1\n        return -1\n\\end{lstlisting}\n\n\\newpage\n\\section{Index of First \"1\" in an Infinite 0/1 Sorted Arr}\n\nCombination of binary search on an infinite array and first occurrence of an\nelement: grow the window until it contains a 1, then find the first 1 inside\nit.\n\\begin{lstlisting}\nclass Solution:\n    def first_one_on_inf(arr):\n        start = 0\n        end = 1\n\n        # Double the end index, making start the previous end,\n        # until the window contains a 1.\n        while arr[end] == 0:\n            start = end\n            end = end * 2\n\n        # First occurrence of 1 inside [start, end].\n        res = -1\n        while start <= end:\n            mid = start + (end - start) // 2\n            if arr[mid] == 1:\n                res = mid\n                end = mid - 1\n            else:\n                start = mid + 1\n        return res\n\\end{lstlisting}\n\n\\newpage\n\\section{Minimum Difference Element in a Sorted Array}\n\nWe run binary search as usual; if the key is absent, the loop ends with\nright and left straddling the key, so arr[right] is its floor and arr[left]\nits ceil. All we have to do is return whichever of the two has the minimum\ndifference to the key.\n\nAnother way to think about this is to find the ceil and floor of the key\nusing binary search.\n\n\\begin{lstlisting}\nclass Solution:\n    def min_diff_element(arr, target):\n        left = 0\n        right = len(arr) - 1\n        while left <= right:\n            mid = left + (right - left) // 2\n            if arr[mid] == target:\n                return arr[mid]\n            elif arr[mid] < target:\n                left = mid + 1\n            else:\n                right = mid - 1\n\n        # The loop ends with right < left: arr[right] is the floor and\n        # arr[left] is the ceil of the key (when they exist).\n        if right < 0:\n            return arr[0]\n        if left >= len(arr):\n            return arr[-1]\n        if abs(target - arr[left]) < abs(target - arr[right]):\n            return arr[left]\n        return arr[right]\n\\end{lstlisting}
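\n\nA quick check of the minimum-difference search (the array and keys are\nillustrative, not from the original text):\n\n\\begin{lstlisting}\n# min_diff_element([2, 5, 10, 12, 15], 6)  -> 5  (|6-5| < |6-10|)\n# min_diff_element([2, 5, 10, 12, 15], 11) -> 10 (tie with 12; this\n#                                                 version returns the floor)\n# min_diff_element([2, 5, 10, 12, 15], 1)  -> 2  (key below the range)\n\\end{lstlisting}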
\n\n\\section{Binary search on answer}\n\nThere are some cases where the array is not sorted, but we can still apply\nbinary search.\n\nWe need to find a criterion by which to split the search space and decide\nwhich half can contain the answer.\n\nWe also need to decide how to move between the two halves.\n\nThe best example of this concept is \"Peak Element\".\n\n\\newpage\n\\section{Peak Element}\nInput: nums = [1,2,3,1]\n\nOutput: 2\n\nExplanation: 3 is a peak element and your function should return its\nindex, 2.\n\nWe use binary search. We identify a peak with the condition\nnums[mid] > nums[mid+1] and nums[mid] > nums[mid-1].\n\nWe move in the direction of the larger neighbor, since a peak is guaranteed\nto exist on that side.\n\n\\begin{lstlisting}\nclass Solution:\n    def findPeakElement(self, nums: List[int]) -> int:\n        left = 0\n        right = len(nums) - 1\n        if len(nums) == 1: return 0\n        while left <= right:\n            mid = left + (right - left) // 2\n            if mid != 0 and mid != len(nums) - 1:\n                if nums[mid] > nums[mid - 1] and nums[mid] > nums[mid + 1]:\n                    return mid\n                elif nums[mid] < nums[mid + 1]:\n                    left = mid + 1\n                elif nums[mid] < nums[mid - 1]:\n                    right = mid - 1\n            # Handle the two boundary positions separately.\n            if mid == 0:\n                return mid if nums[mid] > nums[1] else 1\n            if mid == len(nums) - 1:\n                return mid if nums[mid] > nums[mid - 1] else mid - 1\n        return -1\n\\end{lstlisting}\n\\section{Find maximum in a bitonic array}\nGiven a bitonic array, find the maximum value of the array. An array is said\nto be bitonic if it has an increasing sequence of integers followed\nimmediately by a decreasing sequence of integers.\n\nHence this problem is the same as finding the peak element, as the sketch\nbelow shows.\n\nInput:\n1 4 8 3 2\n\nOutput:\n8
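\n\nA minimal sketch, reusing the findPeakElement solution above (the wrapper\nname is ours):\n\n\\begin{lstlisting}\ndef bitonic_max(arr):\n    # The peak of a bitonic array is its maximum.\n    return arr[Solution().findPeakElement(arr)]\n\n# bitonic_max([1, 4, 8, 3, 2]) -> 8\n\\end{lstlisting}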
\n\\newpage\n\\section{Search an element in a bitonic array}\nThis problem is a mix of peak element and binary search. All the elements\nbefore the peak are sorted in ascending order, while the ones after the peak\nare in descending order. Hence we find the peak index and, if the key is not\nthe peak itself, run an ascending binary search on the part before the peak\nand a descending binary search on the part after it; the searchBitonic\ndriver below combines the three helpers as just described.\n\n\\begin{lstlisting}\nclass Solution:\n    def findPeakElement(self, nums: List[int]) -> int:\n        left = 0\n        right = len(nums) - 1\n        if len(nums) == 1: return 0\n        while left <= right:\n            mid = left + (right - left) // 2\n            if mid != 0 and mid != len(nums) - 1:\n                if nums[mid] > nums[mid - 1] and nums[mid] > nums[mid + 1]:\n                    return mid\n                elif nums[mid] < nums[mid + 1]:\n                    left = mid + 1\n                elif nums[mid] < nums[mid - 1]:\n                    right = mid - 1\n            if mid == 0:\n                return mid if nums[mid] > nums[1] else 1\n            if mid == len(nums) - 1:\n                return mid if nums[mid] > nums[mid - 1] else mid - 1\n        return -1\n\n    def binary_search_asc(self, arr, target):\n        left = 0\n        right = len(arr) - 1\n        while left <= right:\n            mid = left + (right - left) // 2\n            if arr[mid] == target:\n                return mid\n            elif arr[mid] < target:\n                left = mid + 1\n            else:\n                right = mid - 1\n        return -1\n\n    def binary_search_desc(self, arr, target):\n        left = 0\n        right = len(arr) - 1\n        while left <= right:\n            mid = left + (right - left) // 2\n            if arr[mid] == target:\n                return mid\n            elif arr[mid] < target:\n                # Descending order: larger values lie to the left.\n                right = mid - 1\n            else:\n                left = mid + 1\n        return -1\n\n    def searchBitonic(self, nums, target):\n        # Driver completing the approach described above.\n        peak = self.findPeakElement(nums)\n        if nums[peak] == target:\n            return peak\n        left_res = self.binary_search_asc(nums[0:peak], target)\n        if left_res != -1:\n            return left_res\n        right_res = self.binary_search_desc(nums[peak + 1:], target)\n        return right_res + peak + 1 if right_res != -1 else -1\n\\end{lstlisting}\n\\section{Search in row wise and column wise sorted array}\nWrite an efficient algorithm that searches for a target value in an m x n\ninteger matrix. The matrix has the following properties:\n\nIntegers in each row are sorted in ascending order from left to right.\nIntegers in each column are sorted in ascending order from top to bottom.\n\nInput: matrix = [[1,4,7,11,15],[2,5,8,12,19],[3,6,9,16,22],[10,13,14,17,24]\n,[18,21,23,26,30]], target = 5\n\nOutput: true\n\\begin{lstlisting}\nclass Solution:\n    def searchMatrix(self, matrix: List[List[int]], target: int) -> bool:\n        # Start at the top-right corner: everything below is larger,\n        # everything to the left is smaller.\n        i = 0\n        j = len(matrix[0]) - 1\n        while 0 <= i <= len(matrix) - 1 and 0 <= j <= len(matrix[0]) - 1:\n            if matrix[i][j] == target:\n                return True\n            elif matrix[i][j] > target:\n                j -= 1\n            else:\n                i += 1\n        return False\n\\end{lstlisting}
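\n\nThe walk starts at the top-right corner and discards one row or one column\nper step, so it runs in O(m + n). A quick check on the sample matrix:\n\n\\begin{lstlisting}\nmatrix = [[1, 4, 7, 11, 15],\n          [2, 5, 8, 12, 19],\n          [3, 6, 9, 16, 22],\n          [10, 13, 14, 17, 24],\n          [18, 21, 23, 26, 30]]\n\nassert Solution().searchMatrix(matrix, 5)\nassert not Solution().searchMatrix(matrix, 20)\n\\end{lstlisting}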
\n\\newpage\n\\section{Allocate minimum number of pages}\n\nYou are given N books, and every ith book has A[i] pages.\nYou have to allocate contiguous books to M students. There can be\nmany ways or permutations to do so. In each permutation, one of the M\nstudents will be allocated the maximum number of pages. Out of all these\npermutations, the task is to find the particular permutation in which the\nmaximum number of pages allocated to a student is the minimum over all\npermutations, and print this minimum value.\n\nEach book will be allocated to exactly one student. Each student has to be\nallocated at least one book.\n\nNote: Return -1 if a valid assignment is not possible, and the allotment must\nbe in contiguous order (see the explanation for better understanding).\n\nInput:\n\nN = 4\n\nA[] = {12,34,67,90}\n\nM = 2\n\nOutput:\n\n113\n\nExplanation:\n\nAllocation can be done in the following ways:\n\n{12} and {34, 67, 90} Maximum Pages = 191\n\n{12, 34} and {67, 90} Maximum Pages = 157\n\n{12, 34, 67} and {90}  Maximum Pages = 113\n\nTherefore, the minimum of these cases is\n\n113, which is selected as the output.\n\nThe main challenge is to write the is\\_valid function; the rest is a standard\nbinary search on the answer, between max(A) (the thickest book dominates some\nstudent) and sum(A) (one student takes every book).\n\n\\newpage\n\\begin{lstlisting}\nclass Solution:\n    def findPages(self, A, N, M):\n        # More students than books: no valid assignment exists.\n        if M > N:\n            return -1\n\n        start = max(A)\n        end = sum(A)\n        res = -1\n\n        def is_valid(arr, k, mid):\n            # Greedily fill students; a new student starts whenever\n            # the running sum would exceed the page limit mid.\n            student = 1\n            curr_sum = 0\n            for i in range(0, len(arr)):\n                curr_sum += arr[i]\n                if curr_sum > mid:\n                    student += 1\n                    curr_sum = arr[i]\n            return student <= k\n\n        while start <= end:\n            mid = start + (end - start) // 2\n            if is_valid(A, M, mid):\n                res = mid\n                end = mid - 1\n            else:\n                start = mid + 1\n        return res\n\\end{lstlisting}\n\\end{document}\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "8d2329991a9dbaa44a0c27fc56ca041939952eb6", "size": 18261, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "binary_search.tex", "max_stars_repo_name": "MoMus2000/Notes", "max_stars_repo_head_hexsha": "54957ebb92436521e375919bef6fa88a81192a9d", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "binary_search.tex", "max_issues_repo_name": "MoMus2000/Notes", "max_issues_repo_head_hexsha": "54957ebb92436521e375919bef6fa88a81192a9d", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "binary_search.tex", "max_forks_repo_name": "MoMus2000/Notes", "max_forks_repo_head_hexsha": "54957ebb92436521e375919bef6fa88a81192a9d", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4439252336, "max_line_length": 80, "alphanum_fraction": 0.559553146, "num_tokens": 4546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458153, "lm_q2_score": 0.8499711699569787, "lm_q1q2_score": 0.5915445224901277}}
{"text": "\\section{Sensitivity Analysis for the Structural Behavioral Model} \\label{sec:3}\n\\thispagestyle{plain} % surpress header on first page\n\n\nIn this chapter, we introduce the basic concepts, terminologies, methods of GSA in Section \\ref{sec:3.1}, including the general model structure and the input factors. Then we demonstrate the mathematical descriptions of the variance-based and quantile-based sensitivity measures in Section \\ref{sec:3.2} and Section \\ref{sec:3.3}.\n\n\\subsection{Global sensitivity analysis: the framework} \\label{sec:3.1}\n\nThe sensitivity analysis is \u201cThe study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input\u201d\\citep{saltelli2004SensitivityAnalysisPractice}. Thus, the definition of sensitivity analysis includes models, model inputs and model outputs. Throughout this work, a structural economic model will be regarded as a general function that defines a relationship between inputs and output(s):\n\n\\begin{equation} \\label{eq:6}\n\\mathcal{M}: \\theta \\mapsto y=\\mathcal{M}(\\theta)\n\\end{equation}\n\n\n\\noindent\nwhere $\\boldsymbol{\\theta} \\in \\Theta \\subset \\mathbb{R}^{d}$ is a vector of model parameters. The output of interest is a vector $y$. To produce a counterfactual prediction, a policy $g \\in \\mathcal{G}$ changes the mapping to $\\mathcal{M}_g(\\theta)$\\citep{eisenhauer2021StructuralModelsPolicymaking}. Consequently, the differences between the prediction before and after the policy intervention yield a structural estimate of policy effects. \\\\\n\n\\noindent\nUnlike LSA which analyzes how a tiny change near an input space value affects the scalar output, GSA identifies such effects in the whole input space. To be more precise, each input parameter is treated as a random variable assigned with a distribution of all possible values. Consequently, the uncertainty coming from model parameters is transmitted through the model to generate an empirical distribution of the output of interest $Y = M(\\Theta)$ with a joint probability density function $f_Y(y)$. So far, this is how uncertainty propagation occurs. \\\\\n\n\\noindent\nOnce uncertainty propagation characterizes the uncertainty of the model output, a sensitivity analysis then can be applied to identify which parameters are primarily responsible for the variability of output. \\\\\n\n\\noindent\nTo date, a wide range of GSA methods has been developed. In this paper, we focus on variance-based sensitivity measures\\citep{sobol1993SensitivityEstimatesNonlinear} and quantile-based sensitivity measures\\citep{kucherenko2019QuantileBasedGlobal}. A comprehensive discussion and comparison of the GSA methods can be found in \\cite{razavi2021FutureSensitivityAnalysis}.\n\n\\subsection{Variance-based sensitivity measures}  \\label{sec:3.2}\n\nThe model function $Y=f\\left(\\theta_{1}, \\ldots, \\theta_{d}\\right)$ is defined in $d$-dimensional real coordinate space $R^d$ with an input vector $\\boldsymbol{\\theta}=(\\theta_1, \\dots, \\theta_{d})$.Note that $\\mathbf{\\theta}$ is a random variable with a continuous probability distribution function(PDF). To quantify the effect of variations of $\\theta$ on the variation of $Y$, let us consider the conditional expectation  $\\mathrm{E}\\left[Y \\mid \\Theta_{i}=\\theta_{i}\\right]$. 
\n\n\\subsection{Variance-based sensitivity measures}  \\label{sec:3.2}\n\nThe model function $Y=f\\left(\\theta_{1}, \\ldots, \\theta_{d}\\right)$ is defined on the $d$-dimensional real coordinate space $\\mathbb{R}^d$ with an input vector $\\boldsymbol{\\theta}=(\\theta_1, \\dots, \\theta_{d})$. Note that $\\boldsymbol{\\theta}$ is a random variable with a continuous probability distribution function (PDF). To quantify the effect of variations of $\\theta$ on the variation of $Y$, let us consider the conditional expectation $\\mathrm{E}\\left[Y \\mid \\Theta_{i}=\\theta_{i}\\right]$. It is the mean value of the output $Y$ over the probability distribution of $\\Theta_k$ $(k \\neq i)$ under the condition that $\\Theta_i$ is fixed to $\\theta_i$. If we let $\\Theta_i$ vary, the associated random variable is $\\mathrm{E}\\left[Y \\mid \\Theta_{i}\\right]$, whose variance quantifies the effect of $\\Theta_i$ on the variation of $Y$. \\\\\n\n\\noindent\nAccording to the result by \\cite{sobol1993SensitivityEstimatesNonlinear}, given $k$ mutually independent input variables, the variance of the output can be decomposed into a sum of variances of increasing order:\n\n\\begin{equation} \\label{eq:7}\n\\operatorname{Var}(Y)=\\sum_{i} V_{i}+\\sum_{i<j} V_{i j}+\\sum_{i<j<h} V_{i j h}+\\cdots+V_{1,2,\\dots,k}\n\\end{equation}\n\n\\noindent\nwhere $V_i = \\operatorname{Var}(E(Y \\mid \\Theta_i))$ is a first-order variance and $V_{ij} = \\operatorname{Var}(E(Y \\mid \\Theta_i,\\Theta_j))$ is a second-order variance, etc. Note that in an additive model, this decomposition contains only first-order variances. \\\\\n\n\\noindent\nThus, the \\textit{first-order index} of input parameter $\\Theta_i$ is defined as:\n\n\\begin{equation} \\label{eq:8}\nS_i = \\frac{V_i}{\\operatorname{Var}(Y)}=\\frac{\\operatorname{Var}(E(Y \\mid \\Theta_i))}{\\operatorname{Var}(Y)}\n\\end{equation}\n\n\\noindent\nAccordingly, the \\textit{second-order index} of input parameters $\\Theta_i$ and $\\Theta_j$ is defined as:\n\n\n\\begin{equation} \\label{eq:9}\nS_{ij} = \\frac{V_{ij}}{\\operatorname{Var}(Y)}=\\frac{\\operatorname{Var}(E(Y \\mid \\Theta_i,\\Theta_j))}{\\operatorname{Var}(Y)}\n\\end{equation}\n\n\\noindent\nThe first-order index measures the proportion of the total variance which is due to the main effect of $\\Theta_i$ on $Y$, whereas the second-order index measures the proportion of the total variance which is explained by the interaction between the two inputs. \\\\\n\n\\noindent\nIn 1996, \\cite{homma1996ImportanceMeasuresGlobal} introduced the \\textit{total variance index}, which measures the proportion of the total variance due to the\nmain effect of $\\Theta_i$ and all its interactions with the other inputs:\n\n\\begin{equation} \\label{eq:10}\nS_{Ti}=\\frac{V_{i}+\\sum_{j \\neq i} V_{i j}+\\cdots}{\\operatorname{Var}(Y)} = \\frac{E(\\operatorname{Var}(Y \\mid \\Theta_{\\sim i}))}{\\operatorname{Var}(Y)}=1-\\frac{\\operatorname{Var}(E(Y \\mid \\Theta_{\\sim i}))}{\\operatorname{Var}(Y)}\n\\end{equation}\n\n\\noindent\nwhere $\\Theta_{\\sim i}=(\\Theta_1, \\Theta_2, \\dots, \\Theta_{i-1}, \\Theta_{i+1}, \\dots ,  \\Theta_k)$. \\\\\n\n\\noindent\nThere are three features of the total variance index to be aware of. Firstly, the condition $S_{Ti}=0$ is necessary and sufficient for $\\Theta_i$ to be a non-influential input (it can then be treated as a fixed input). Secondly, if $S_{Ti} \\approx S_i$, the interaction between $\\Theta_i$ and the other inputs does not affect the variability of the output. Lastly, the sum of the total indices is in general greater than one \\citep{homma1996ImportanceMeasuresGlobal}.
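\n\n\\noindent\nAs a quick worked illustration (again our own toy model, not from the references): for the additive model $Y=\\Theta_1+2\\Theta_2$ with independent $\\Theta_1,\\Theta_2 \\sim \\mathcal{N}(0,1)$ we have $\\operatorname{Var}(Y)=1+4=5$ and $E(Y \\mid \\Theta_1)=\\Theta_1$, so that\n\n\\begin{equation*}\nS_1=\\frac{\\operatorname{Var}(E(Y \\mid \\Theta_1))}{\\operatorname{Var}(Y)}=\\frac{1}{5}, \\qquad S_2=\\frac{4}{5}, \\qquad S_{T1}=S_1, \\quad S_{T2}=S_2,\n\\end{equation*}\n\n\\noindent\nsince an additive model has no interaction terms and its first-order indices sum to one.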
\n\n\\subsection{Quantile-based sensitivity measures}  \\label{sec:3.3}\n\nNow we consider scenarios where only a specific range of the output is important to the analyst, for instance $Y = \\mathcal{M}(\\boldsymbol{\\Theta}) \\leq a$ or $Y = \\mathcal{M}(\\boldsymbol{\\Theta}) \\geq b$. We reformulate such problems in terms of the $\\alpha$-th quantile of the output CDF, $q_Y(\\alpha)$:\n\n\n\\begin{equation} \\label{eq:11}\n\\alpha=\\int_{-\\infty}^{q_{Y}(\\alpha)} \\rho_{Y}(y) d y=P\\left\\{Y \\leq q_{Y}(\\alpha)\\right\\}\n\\end{equation}\n\n\\noindent\nor, more formally:\n\n\\begin{equation} \\label{eq:12}\nq_{Y}(\\alpha)=F_{Y}^{-1}(\\alpha)=\\inf \\{y \\mid F_Y(y) \\geq \\alpha\\}\n\\end{equation}\n\n\\noindent\nwhere $\\rho_Y(y)$ denotes the PDF of the output $Y$ and $F_Y(y)$ denotes the CDF of the output $Y$. To solve such problems, \\cite{kucherenko2019QuantileBasedGlobal} introduced the QBSM $\\bar{q}_{i}^{(1)}$ and $\\bar{q}_{i}^{(2)}$:\n\n\\begin{equation} \\label{eq:13}\n\\bar{q}_{i}^{(1)}(\\alpha)=E_{\\theta_{i}}\\left(\\left|q_{Y}(\\alpha)-q_{Y \\mid \\Theta_{i}}(\\alpha)\\right|\\right)=\\int\\left|q_{Y}(\\alpha)-q_{Y \\mid \\Theta_{i}}(\\alpha)\\right| d F_{\\theta_{i}}\n\\end{equation}\n\n\\begin{equation} \\label{eq:14}\n\\bar{q}_{i}^{(2)}(\\alpha)=E_{\\theta_{i}}\\left[\\left(q_{Y}(\\alpha)-q_{Y \\mid \\Theta_{i}}(\\alpha)\\right)^{2}\\right]=\\int\\left(q_{Y}(\\alpha)-q_{Y \\mid \\Theta_{i}}(\\alpha)\\right)^{2} d F_{\\theta_{i}}\n\\end{equation}\n\n\\noindent\nHere, $F_{\\Theta_i}$ denotes the CDF of the input variable and $q_{Y \\mid \\Theta_{i}}({\\alpha})$ denotes the conditional quantile with $\\Theta_{i}$ being fixed at $\\Theta_{i}=\\theta_{i}^{*}$ \\citep{song2021QuantileSensitivityMeasures}:\n\n\\begin{equation}\\label{eq:15}\nq_{Y \\mid \\Theta_{i}}(\\alpha)=F_{Y \\mid \\Theta_{i}}^{-1}(\\alpha)=\\inf \\left\\{y \\mid P\\left(Y \\leq y \\mid \\Theta_{i}=\\theta_{i}^{*}\\right) \\geq \\alpha\\right\\}\n\\end{equation}\n\n\n\\noindent\nAdditionally, a normalized version of the QBSM, $Q_{i}^{(1)}(\\alpha)$ and $Q_{i}^{(2)}(\\alpha)$, was also presented in \\cite{kucherenko2019QuantileBasedGlobal}:\n\n\\begin{equation}\\label{eq:16}\nQ_{i}^{(1)}(\\alpha)=\\frac{\\bar{q}_{i}^{(1)}(\\alpha)}{\\sum_{j=1}^{d} \\bar{q}_{j}^{(1)}(\\alpha)}\n\\end{equation}\n\n\\begin{equation}\\label{eq:17}\nQ_{i}^{(2)}(\\alpha)=\\frac{\\bar{q}_{i}^{(2)}(\\alpha)}{\\sum_{j=1}^{d} \\bar{q}_{j}^{(2)}(\\alpha)}\n\\end{equation}\n\n\\noindent\nwith $\\left\\{Q_{i}^{(1)}(\\alpha), Q_{i}^{(2)}(\\alpha)\\right\\} \\in[0,1]$.\n\n\n\n\n", "meta": {"hexsha": "01fe51e4a4209ab7b00677f7351e5cdc3f0c76b2", "size": 8472, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/05_sa.tex", "max_stars_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_stars_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/05_sa.tex", "max_issues_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_issues_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/05_sa.tex", "max_forks_repo_name": "Yuleii/yulei-thesis-QBSM-kw94", "max_forks_repo_head_hexsha": "bb882bc6c809331c370a4d6442c36ad67ccad498", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.7086614173, "max_line_length": 834, "alphanum_fraction": 0.7339471199, "num_tokens": 2561, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711604559846, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5915445105177594}}
{"text": "\\section{Logical Paradoxes}\n\\newcommand{\\starttoken}{\\textsc{s}}\n\\newcommand{\\finishtoken}{\\textsc{f}}\n\nIn this section we provide proofs of some logical inconsistencies that arise when slight changes are made to the Iris logic.\n\n\\subsection{Saved Propositions without a Later}\n\\label{sec:saved-prop-no-later}\n\nAs a preparation for the proof about invariants in \\Sref{app:section:invariants-without-a-later}, we show that omitting the later modality from a variant of \\emph{saved propositions} leads to a contradiction.\nSaved propositions have been introduced in prior work~\\cite{dodds:higher-order-sync,iris2} to prove correctness of synchronization primitives; we will explain all that is necessary here.\nThe counterexample assumes a higher-order logic with separating conjunction, magic wand and the modalities $\\always$ and $\\upd$ satisfying the rules in \\Sref{sec:base-logic}.\n\n\\begin{thm}\n\\label{thm:counterexample-1}\nIf there exists a type $\\GName$ and a proposition $\\_ \\Mapsto \\_ : \\GName \\to \\Prop \\to \\Prop$ associating names $\\gamma : \\GName$ to propositions and satisfying:\n\\begin{align}\n    \\proves{}& \\upd \\Exists \\gname : \\GName. \\gname \\Mapsto P(\\gname)\n               \\tagH{sprop-alloc} \\\\\n    \\gname \\Mapsto P \\proves{}& \\always (\\gname \\Mapsto P)\n               \\tagH{sprop-persist} \\\\\n    \\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB \\proves{}\n             &\n               \\prop \\Lra \\propB\n               \\tagH{sprop-agree}\n\\end{align}\nthen $\\proves\\upd \\FALSE$.\n\\end{thm}\n\nThe type $\\GName$ should be thought of as the type of ``locations'' and $\\gname \\Mapsto P$ should be read as stating that location $\\gname$ ``stores'' proposition $P$.\nNotice that these are immutable locations, so the maps-to proposition is persistent.\nThe rule \\ruleref{sprop-alloc} is then thought of as allocation, and the rule \\ruleref{sprop-agree} states that a given location $\\gname$ can only store \\emph{one} proposition, so multiple witnesses covering the same location must agree.\n\n%Compared to saved propositions in prior work, \\ruleref{sprop-alloc} is stronger since the stored proposition can depend on the name being allocated.\n%\\derek{Can't we cut the above sentence?  This makes it sound like we are doing something weird that we ought not to be since prior work didn't do it.  But in fact, I thought that in our construction we do not really need to rely on this feature at all!  So I'm confused.}\nThe conclusion of \\ruleref{sprop-agree} usually is guarded by a $\\later$.\nThe point of this theorem is to show that said later is \\emph{essential}, as removing it introduces inconsistency.\n%\nThe key to proving \\thmref{thm:counterexample-1} is the following proposition:\n\\begin{defn}\n$A(\\gname) \\eqdef \\Exists \\prop : \\Prop. 
\\always\\lnot \\prop \\land \\gname \\Mapsto \\prop$.\n\\end{defn}\nIntuitively, $A(\\gname)$ says that the saved proposition named $\\gname$ does \\emph{not} hold, \\ie we can disprove it.\nUsing \\ruleref{sprop-persist}, it is immediate that $A(\\gname)$ is persistent.\n\nNow, by applying \\ruleref{sprop-alloc} with $A$, we obtain a proof of $\\prop \\eqdef \\gname \\Mapsto A(\\gname)$: this says that the proposition named $\\gname$ is the proposition saying that it, itself, does not hold.\nIn other words, $\\prop$ says that the proposition named $\\gname$ expresses its own negation.\nUnsurprisingly, that leads to a contradiction, as is shown in the following lemma:\n\\begin{lem}   \\label{lem:saved-prop-counterexample-not-agname}   We have $\\gname \\Mapsto A(\\gname) \\proves \\always\\lnot A(\\gname)$ and $\\gname \\Mapsto A(\\gname) \\proves A(\\gname)$. \\end{lem}\n\\begin{proof}%[\\lemref{lem:saved-prop-counterexample-not-agname}]\n\\leavevmode\n  \\begin{itemize}\n  \\item First we show $\\gname \\Mapsto A(\\gname) \\proves \\always\\lnot A(\\gname)$.\n    Since $\\gname \\Mapsto A(\\gname)$ is persistent it suffices to show $\\gname \\Mapsto A(\\gname) \\proves \\lnot A(\\gname)$.\n    Suppose $\\gname \\Mapsto A(\\gname)$ and $A(\\gname)$.\n    Then by definition of \\(A\\) there is a $\\prop$ such that $\\always \\lnot \\prop$ and $\\gname \\Mapsto \\prop$.\n    By \\ruleref{sprop-agree} we have $\\prop \\Lra A(\\gname)$ and so from $\\lnot \\prop$ we get $\\lnot A(\\gname)$, which leads to a contradiction with $A(\\gname)$.\n    \n  \\item Using the first item we can now prove $\\gname \\Mapsto A(\\gname) \\proves A(\\gname)$.\n    We need to prove\n    \\begin{align*}\n      \\Exists \\prop : \\Prop. \\always \\lnot \\prop \\land \\gname \\Mapsto \\prop.\n    \\end{align*}\n    We do so by picking $\\prop$ to be $A(\\gname)$, which leaves us to prove \\(\\always \\lnot A(\\gname) \\land \\gname \\Mapsto A(\\gname)\\).\n    The last conjunct holds by assumption, and the first conjunct follows from the previous item of this lemma.\n  \\end{itemize}\n\\end{proof}\n\nWith this lemma in hand, the proof of \\thmref{thm:counterexample-1} is simple.\n\\begin{proof}[\\thmref{thm:counterexample-1}]\n  Using the previous lemmas we have\n  \\begin{align*}\n    \\proves \\All \\gname. \\lnot (\\gname \\Mapsto A(\\gname)).\n  \\end{align*}\n  Together with the rule \\ruleref{sprop-alloc} we thus derive $\\upd \\FALSE$.\n\\end{proof}\n\n\\subsection{Invariants without a Later}\n\\label{app:section:invariants-without-a-later}\n\nNow we come to the main paradox: if we remove the $\\later$ from \\ruleref{inv-open}, the logic becomes inconsistent.\nThe theorem is stated as general as possible so that it also applies to previous, less powerful versions of Iris.\n\n\\begin{thm}\n  \\label{thm:counterexample-2}\n  Assume a higher-order separation logic with $\\always$ and an update modality with a binary mask ${\\upd}_{\\set{0,1}}$ (think: empty mask and full mask) satisfying strong monad rules with respect to separating conjunction and such that:\n  \\begin{mathpar}\n    \\inferhref{weaken-mask}{eq:update-weaken-mask}\n    {}{{\\upd}_0 \\prop \\proves {\\upd}_1 \\prop}\n  \\end{mathpar}\n\n\\noindent\n  Assume a type $\\InvName$ and a proposition $\\knowInv{\\cdot}{\\cdot} : \\InvName \\to \\Prop \\to \\Prop$ satisfying:\n%\n  \\begin{mathpar}\n    \\inferhref{inv-alloc}{eq:inv-alloc}\n    {}\n    {\\prop \\proves {\\upd}_1 \\Exists \\iname. 
\\knowInv \\iname \\prop}\n    \\and\n    \\inferhref{inv-persist}{eq:inv-persistent}\n    {}\n    {\\knowInv \\iname \\prop \\proves \\always \\knowInv \\iname \\prop}\n    \\and\n    \\inferhref{inv-open-nolater}{eq:inv-open}\n    {\\prop * \\propB \\proves {\\upd}_0 (\\prop * \\propC) }\n    {\\knowInv \\iname \\prop * \\propB \\proves {\\upd}_1 \\propC}\n  \\end{mathpar}\n\n\\noindent\n  Finally, assume the existence of a type $\\GName$ and two tokens $\\ownGhost{\\cdot}{\\starttoken} : \\GName \\to \\Prop$ and $\\ownGhost{\\cdot}{\\finishtoken}: \\GName \\to \\Prop$ parameterized by $\\GName$ and satisfying the following properties:\n  \\begin{mathpar}\n    \\inferhref{start-alloc}{eq:start-alloc}\n    {}{\\proves {\\upd}_0 \\Exists \\gname. \\ownGhost \\gname \\starttoken}\n    \\and\n    \\inferhref{start-finish}{eq:start-finish}\n    {}{\\ownGhost \\gname \\starttoken \\proves {\\upd}_0 \\ownGhost \\gname \\finishtoken}\n    \\and\n    \\inferhref{start-not-finished}{eq:start-not-finished}\n    {}{\\ownGhost \\gname \\starttoken * \\ownGhost \\gname \\finishtoken \\proves \\FALSE}\n    \\and\n    \\inferhref{finished-dup}{eq:finished-dup}\n    {}{\\ownGhost \\gname \\finishtoken \\proves \\ownGhost \\gname \\finishtoken * \\ownGhost \\gname \\finishtoken}\n  \\end{mathpar}\n\n\\noindent\n  Then $\\TRUE \\proves{\\upd}_1 \\FALSE$.\n\\end{thm}\n\n\nThe core of the proof is defining the $\\Mapsto$ from the previous counterexample using invariants.\nThen, using the standard proof rules for invariants, we show that it satisfies \\ruleref{sprop-alloc} and \\ruleref{sprop-persist}.\nFurthermore, assuming the rule for opening invariants without a $\\later$, we can prove a slightly weaker version of \\ruleref{sprop-agree}, which is sufficient for deriving a contradiction.\n\n\n% Taking ${\\upd}_0$ and ${\\upd}_1$ to be the fancy update modalities $\\pvs[\\emptyset]$\n% and $\\pvs[\\nat]$, respectively, we can see that Iris\n% \\emph{almost} satisfies these axioms.  First, to implement the tokens,\n% we can use the RA with the carrier\n% $\\{\\mundef,\\epsilon,\\starttoken,\\finishtoken\\}$ and operation\n% $\\epsilon \\mtimes x = x \\mtimes \\epsilon = x$,\n% $\\finishtoken \\mtimes \\finishtoken = \\finishtoken$ and otherwise\n% $x \\mtimes y = \\mundef$.  Then, observe that the rules for\n% $\\knowInv{\\cdot}{\\cdot}$ are special cases of (derivable) invariant\n% rules in Iris.  The fly in the ointment is the \\ruleref{eq:inv-open}\n% rule: in Iris, this rule would protect each occurrence of $\\prop$\n% in the premise of the rule with a $\\later$, whereas here they are\n% unprotected.\n\nWe start by defining $\\Mapsto$ satisfying (almost) the assumptions of \\lemref{lem:counterexample-invariants-saved-prop-agree}.\n%\n\\begin{defn}\nWe define $\\_ \\Mapsto \\_ : \\GName \\to \\Prop \\to \\Prop$ as:\n%\n\\begin{align*}\n  \\gname \\Mapsto \\prop \\eqdef \\Exists \\iname. 
\\knowInv \\iname {\\ownGhost \\gname \\starttoken \\lor \\ownGhost \\gname \\finishtoken * \\always \\prop}.\n\\end{align*}\n\\end{defn}\nNote that using \\ruleref{eq:inv-persistent}, it is immediate that $\\gname \\Mapsto \\prop$ is persistent.\n\nWe use the tokens $\\ownGhost \\gname \\starttoken$ and $\\ownGhost \\gname \\finishtoken$ to model invariants that can be initialized ``lazily'': $\\ownGhost \\gname \\starttoken$ indicates that the invariant is still not initialized, whereas the duplicable $\\ownGhost \\gname \\finishtoken$ indicates it has been initialized with a resource satisfying $\\prop$.%\n%\\footnote{We would usually require the token to be persistent, but it turns out the proof also works with the weaker assumption of duplicability.}\n% RK: cut the footnote, it takes space. Maybe restore later\n\n% TODO, explain this ...\n\nWe can show variants of \\ruleref{sprop-agree} and \\ruleref{sprop-alloc} for the defined $\\Mapsto$.\n\\begin{lem}\n  \\label{lem:counterexample-invariants-saved-prop-alloc}\nWe have\n  \\(\\proves {\\upd}_1 \\Exists \\gname. \\gname \\Mapsto \\prop(\\gname)\\).\n\\end{lem}\n\\begin{proof}\n  We have to show the allocation rule \\[\\proves {\\upd}_1 \\Exists \\gname. \\gname \\Mapsto \\prop.\\]\n    From \\ruleref{eq:start-alloc} we have a $\\gname$ such that ${\\upd}_0 \\ownGhost \\gname \\starttoken$ holds and hence from \\ruleref{eq:update-weaken-mask} we have ${\\upd}_1\\ownGhost\\gname \\starttoken$.\n    Since we are proving a goal of the form ${\\upd}_1 R$ we may assume $\\ownGhost \\gname \\starttoken$.\n    Thus for any $\\prop$ we have ${\\upd}_1\\left(\\ownGhost{\\gname}{\\starttoken} \\lor \\ownGhost \\gname \\finishtoken * \\prop\\right)$.\n    Again since our goal is still of the form ${\\upd}_1$ we may assume $\\ownGhost{\\gname}{\\starttoken} \\lor \\ownGhost \\gname \\finishtoken * \\always \\prop$.\n    The rule \\ruleref{eq:inv-alloc} then gives us precisely what we need.\n \\qed \\end{proof}\n\n%\n\\begin{lem}\n\\label{lem:counterexample-invariants-saved-prop-agree}\nWe have\n  \\(\n  \\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB * \\always \\prop \\proves {\\upd}_1 \\always \\propB\n  \\)\nand thus\n  \\(\n  \\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB \\proves ({\\upd}_1 \\always \\prop) \\Lra ({\\upd}_1 \\always \\propB).\n  \\)\n\\end{lem}\n\n\\begin{proof}[\\lemref{lem:counterexample-invariants-saved-prop-agree}]\n\\begin{itemize}\n  \\item We first show\n    \\[\\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB * \\always \\prop \\proves {\\upd}_1 \\always \\propB.\\]\n    We use \\ruleref{eq:inv-open} to open the invariant in $\\gname \\Mapsto \\prop$ and consider two cases:\n    % \n    \\begin{enumerate}\n    \\item $\\ownGhost \\gname \\starttoken$ (the invariant is ``uninitialized''): In this case, we use \\ruleref{eq:start-finish} to ``initialize'' the invariant and obtain $\\ownGhost{\\gname}{\\finishtoken}$.\n      Then we duplicate $\\ownGhost \\gname \\finishtoken$, and use it together with $\\always \\prop$ to close the invariant.\n    \\item $\\ownGhost \\gname \\finishtoken * \\always \\prop$ (the invariant is ``initialized''): In this case we duplicate $\\ownGhost \\gname \\finishtoken$, and use a copy to close the invariant.\n    \\end{enumerate}\n    % \n    After closing the invariant, we have obtained $\\ownGhost \\gname \\finishtoken$.\n    Hence, it is sufficient to prove\n    \\[\n      \\ownGhost{\\gname}{\\finishtoken} * \\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB * \\always \\prop 
\\proves {\\upd}_1 \\always \\propB.\\]\n    We proceed by using \\ruleref{eq:inv-open} to open the other invariant in $\\gname \\Mapsto \\propB$, and we again consider two cases:\n    \\begin{enumerate}\n    \\item $\\ownGhost{\\gname}{\\starttoken}$ (the invariant is ``uninitialized''): As witnessed by \\ruleref{eq:start-not-finished}, this cannot happen, so we derive a contradiction.\n      Notice that this is a key point of the proof: because the two invariants ($\\gname \\Mapsto \\prop$ and $\\gname \\Mapsto \\propB$) \\emph{share} the ghost name $\\gname$, initializing one of them is enough to show that the other one has been initialized.\n      Essentially, this is an indirect way of saying that really, we have been opening the same invariant two times.\n    \\item $\\ownGhost{\\gname}{\\finishtoken} * \\always \\propB$ (the invariant is ``initialized''):\n      Since $\\always \\propB$ is duplicable we use one copy to close the invariant, and retain another to prove ${\\upd}_1 \\always \\propB$.\n    \\end{enumerate}\n\\item By applying the above twice, we easily obtain\n\\[ \\gname \\Mapsto \\prop * \\gname \\Mapsto \\propB \\proves ({\\upd}_1 \\always \\prop) \\Lra ({\\upd}_1 \\always \\propB) \\]\n\\end{itemize}\n\\qed \\end{proof}\n% When allocating $\\gname \\Mapsto \\prop(\\gname)$ in \\lemref{lem:counterexample-invariants-saved-prop-alloc}, we will start off in ``state'' $\\ownGhost \\gname \\starttoken$, and once we have $P$ in \\lemref{lem:counterexample-invariants-saved-prop-agree} we use \\ruleref{eq:start-finish} to transition to $\\ownGhost\\gname \\finishtoken$, obtaining ourselves a copy of said token.\n% Finally, we use this token with $\\gname \\Mapsto \\propB$ to obtain a proof of $\\propB$.\nIntuitively, \\lemref{lem:counterexample-invariants-saved-prop-agree} shows that we can ``convert'' a proof from $\\prop$ to $\\propB$.\n\nWe are now in a position to replay the counterexample from \\Sref{sec:saved-prop-no-later}.\nThe only difference is that because \\lemref{lem:counterexample-invariants-saved-prop-agree} is slightly weaker than the rule \\ruleref{sprop-agree} of \\thmref{thm:counterexample-1}, we need to use ${\\upd}_1 \\FALSE$ in place of $\\FALSE$ in the definition of the predicate $A$:\nwe let \\(\n  A(\\gname) \\eqdef \\Exists \\prop : \\Prop. 
\\always (\\prop \\Ra {\\upd}_1 \\FALSE) \\land \\gname \\Mapsto \\prop\\)\nand replay the proof that we have presented above.\n\n%TODO: What about executing a view shift under a later?\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"iris\"\n%%% End:\n", "meta": {"hexsha": "4a77596fa9c27ec91e6eba259d2b57538a2c7227", "size": 14506, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/paradoxes.tex", "max_stars_repo_name": "aa755/iris-coq", "max_stars_repo_head_hexsha": "b958d569a1613c673ff52be14661b298a56b71fc", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/paradoxes.tex", "max_issues_repo_name": "aa755/iris-coq", "max_issues_repo_head_hexsha": "b958d569a1613c673ff52be14661b298a56b71fc", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/paradoxes.tex", "max_forks_repo_name": "aa755/iris-coq", "max_forks_repo_head_hexsha": "b958d569a1613c673ff52be14661b298a56b71fc", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.9495798319, "max_line_length": 375, "alphanum_fraction": 0.7135667999, "num_tokens": 4354, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84594244507642, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5915289999827678}}
{"text": "\\chapter{Results}\n\\label{cha:results}\n\\epigraph{\n  This chapter will analyze the practical capabilities of the echo state\n  network for anomaly detection in time series.  First we analyze the\n  \\emph{memory capacity} (MC) of the ESN and show how well it can predict\n  a scalar chaotic time series. A tradeoff between networks with large MC and\n  good capability to predict non-linearities is discussed before moving on to\n  high-dimensional input images and sea surface height prediction.\n}\n\n\n\n\\section{Short-term Memory}\n\\label{sec:short_term_memory}\n\nThe short-term memory capacity (\\emph{MC}) of the internal state can be\nestimated through the so-called coefficient of determination $R^2$  and a\nsimple experiment.  $R^2$ is also called the squared correlation coefficient.\nFor two variables $X$ and $Y$ it is defined by:\n\\begin{equation}\n  \\label{eq:detcoeff}\n  R^2 = \\text{detCoeff}(X, Y)\n      = \\frac{\\text{cov}^2(X, Y)}{\\sigma^2(X) \\sigma^2(Y)}\n\\end{equation}\n\nThe network receives a random sequence $\\mathbf{u}$ created from a uniform\ndistribution and is trained to extract the last $n$ inputs from the internal\nstate $\\vt{x}$ with 20 units.  After training, the network extracts the last 40\ntime steps only from the last internal state\n(Fig.~\\ref{fig:random_timeseries_recovery}).\nBy creating $m$ sequences and collecting them in a matrix $U$\n\\begin{equation}\n  U = \\begin{bmatrix}\n    u^{t=-n}_{0}  & \\dots & u^{t=0}_{m} \\\\\n    \\vdots & \\ddots & \\vdots \\\\\n    u^{t=-n}_{0}  & \\dots & u^{t=0}_{m} \n  \\end{bmatrix}\n\\end{equation}\nand the reconstructions of the network in a corresponding matrix $Y$, the\ncoefficient of determination can be calculated for each time step by treating\nthe columns of $U$ and $Y$ as the variables $X$ and $Y$.\nThe memory capacity can then be estimated by:\n\\begin{equation}\n  MC = \\sum_{-n < i < 0} \\text{detCoeff}(U_i, Y_i),\n\\end{equation}\n\nThe determination coefficient between extracted and targeted values is one if\nthey are equal and zero if there is no correlation between them at all.\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{mc_seq.pdf}\n  \\caption{Determination coefficient over time for various spectral radii of\n  the reservoir and two different learning algorithms. On the left the network\n  was trained with SGD and on the right with pseudo-inverse regression. The SGD\n  trained network does not show a clear depedence on spectral radius, while in\n  the LMS network it is clearly visible.}\n  \\label{fig:mc_seq}\n\\end{figure}\n\n\\begin{listing}\n  \\inputminted{json}{pseudocode/model_setups/memorize_setup.json}\n  \\label{lst:memorize_setup}\n  \\caption{ESN setup parameters for the memorization task. The \\ttt{\"False\"}\n    value of the Tikohnov regularization parameter $\\beta$ means that the\n    pseudo-inverse method was used. The spectral radius is varied from experiment\n    to experiment.\n  }\n\\end{listing}\n\nTo find the MC of the ESN it was trained with batches of $m=2000$ random\nsequences.  Once with the Adam algorithm and once with the pseudo-inverse\nmethod (which of course only need a single batch of random sequences to be\ntrained). After the training phase, another batch is fed to the network for\nevaluation. An exemplary reconstruction is shown in\nFig.~\\ref{fig:random_timeseries_recovery}.  It becomes evident that the most\nrecent time steps can be reconstructed almost perfectly and the memory of the\nnetwork becomes weaker further back in time.  
\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{mc_seq.pdf}\n  \\caption{Determination coefficient over time for various spectral radii of\n  the reservoir and two different learning algorithms. On the left the network\n  was trained with SGD and on the right with pseudo-inverse regression. The SGD\n  trained network does not show a clear dependence on the spectral radius, while in\n  the LMS network it is clearly visible.}\n  \\label{fig:mc_seq}\n\\end{figure}\n\n\\begin{listing}\n  \\inputminted{json}{pseudocode/model_setups/memorize_setup.json}\n  \\label{lst:memorize_setup}\n  \\caption{ESN setup parameters for the memorization task. The \\ttt{\"False\"}\n    value of the Tikhonov regularization parameter $\\beta$ means that the\n    pseudo-inverse method was used. The spectral radius is varied from experiment\n    to experiment.\n  }\n\\end{listing}\n\nTo find the MC of the ESN, it was trained with batches of $m=2000$ random\nsequences, once with the Adam algorithm and once with the pseudo-inverse\nmethod (which of course only needs a single batch of random sequences to be\ntrained). After the training phase, another batch is fed to the network for\nevaluation. An exemplary reconstruction is shown in\nFig.~\\ref{fig:random_timeseries_recovery}.  It becomes evident that the most\nrecent time steps can be reconstructed almost perfectly and that the memory of\nthe network becomes weaker further back in time.  With this setup one can try to\nfind the optimal spectral radius, which maximizes the memory capacity.  Both\nplots in Fig.~\\ref{fig:mc_seq} show the coefficient of determination over the\nlast 25 time steps at varying spectral radius $\\rho$. Training\nthe ESN via a least mean squares approach (LMS, pseudo-inverse in this case)\nseems to be much more efficient than training with Adam. This is\nalso visible in the effect of scaling $\\rho$, which is very small in the\nAdam-optimized network, while it is clearly visible in the LMS ESN.\nFigure~\\ref{fig:mc_rho} suggests that MC increases as $\\rho$ approaches one and\nthen quickly degenerates. This is thoroughly investigated computationally in a\npaper by [\\cite{farkavs2016}], which in essence confirms these conjectures.\n\n\\begin{figure}\n  \\begin{minipage}[t]{.48\\textwidth}\n    \\includegraphics[width=\\linewidth]{random_timeseries_recovery.pdf}\n    \\caption{\n      True vs. reconstructed random time series. Most recent time step at $t=0$.\n    }\n    \\label{fig:random_timeseries_recovery}\n  \\end{minipage}\n  \\hspace{.02\\textwidth}\n  \\begin{minipage}[t]{.48\\textwidth}\n    \\includegraphics[width=\\linewidth]{mc_rho.pdf}\n    \\caption{MC over spectral radius $\\rho$ for the LMS and GD trained networks.}\n    \\label{fig:mc_rho}\n  \\end{minipage}\n\\end{figure}\n\n\n\n\\newpage\n\\section{Mackey-Glass System}%\n\\label{sec:res_mackey_glass_system}\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{mackey_esn_ba_var_loss.pdf}\n  \\caption{Hyper-parameters over MSE. The very large\n    values are caused by diverging predictions that were clipped to a maximal\n    $\\mathrm{MSE}=10^{20}$. The weight initialization and the sparsity\n    parameters yield good performance in a certain range; the spectral radius\n  only allows good performance for values larger than one.}\n  \\label{fig:mackey_esn_ba_var_loss}\n\\end{figure}\n\n\nThe first actual prediction task that will be solved by the ESN is the one of\nthe Mackey-Glass system that was described in\nsection~\\ref{sec:mackey_glass_system}.  The reservoir is initialized with 500\nunits, which should be more than enough given that the period of the\nconsidered time series is below 100 time steps and that the ESN should have a\nmemory capacity slightly smaller than 500 steps.  The ESN is fed with a\nsequence of length 2200, which creates the same number of internal states. To\navoid transients in the internal state, the first 200 steps are discarded. The\nremaining 2000 internal states can be used to train the readout layer\n$\\wmatr{out}$ via the pseudo-inverse method. Three hyper-parameters (HP) remain\nto be set: spectral radius, reservoir density, and input weight initialization\nparameter, which are found with the help of Bayesian Optimization (BO).  For\nthis purpose, the package \\ttt{scikit-optimize} was used to minimize the\nRMSE\n\\begin{equation}\n  \\text{RMSE} = \\frac{1}{N} \\sum_{i=0}^N || d_i - y_i ||_2\n\\end{equation}\nover several predictions.  The scikit algorithm needs an interval for every\nparameter to form an HP space that it can sample from; the intervals were\nchosen as specified in Table~\\ref{tab:mackey_bo}. The resulting optimal HPs were\nobtained over five BO runs.  Each BO run evaluated 100 trained ESNs at different\npoints in the HP space (Fig.~\\ref{fig:mackey_esn_ba_var_loss}).
A surprising result\nis that the optimal spectral radius is significantly larger than one.\nIn fact, the predicted signal just converges towards a value close to the mean\nof the series if the spectral radius is close to one.\n\\begin{table}[h]\n  \\centering\n  \\rowcolors{2}{gray!25}{white}\n  \\begin{tabular}{|l c c c c|}\n    \\hline \\rowcolor{gray!50}\n    Parameter        & Best              & Min  & Max & Prior \\\\ \\hline\n    Spectral radius  & $1.40$   & 0.5  & 2.0 & uniform \\\\\n    Weight init.     & $0.48$   & 0.1  & 1.0 & uniform \\\\\n    Density          & $0.03$   & 0.01 & 1.0 & logarithmic \\\\\n    \\hline\n  \\end{tabular}\n  \\caption{Hyper-parameter results obtained via Bayesian Optimization}\n  \\label{tab:mackey_bo}\n\\end{table}\n\nThis is peculiar, as section~\\ref{sec:short_term_memory} showed a spectral\nradius close to one to be optimal for the memorization of sequences.  It\nsuggests that the crucial ingredient for predicting chaotic time series might\nnot only be a very good knowledge of the past, but something else. In fact, this\nis where the memory non-linearity tradeoff that was mentioned in\nsection~\\ref{sub:short_term_memory} comes into play.  The more non-linear a\nprediction task becomes, the higher the spectral radius of the reservoir must\nbe to perform well. Unfortunately, a large spectral radius degrades the MC of\nthe ESN. A mathematically sound framework for reasoning about the memory\nnon-linearity tradeoff is yet to be developed, but a rigorous computational\nanalysis of the problem is carried out in an article by [\\cite{verstraeten2010}].\nFor the purpose of anomaly detection it is enough that we found good\nparameters for the prediction task. The quality of the predictions is depicted\nin Fig.~\\ref{fig:mackey1d_predictions}.\nThe predictive power of the ESN varies slightly over different parts of the\nMackey-Glass system. By randomly choosing different starting points in the\nsequence and creating predictions for the next 500 frames, we can evaluate the\naverage error that the ESN makes at each frame.\nFig.~\\ref{fig:mackey1d_predictions} shows the best and worst predictions in the\ntwo upper plots and the average deviation of prediction and target in the\nbottom.  An anomaly detection might be feasible within the first 100 frames; an\nanomaly that is caught beyond that point might as well be caused by a degraded\nprediction.\\\\\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{mackey1d_predictions.pdf}\n  \\caption{The best (top) and worst (middle) predictions from 20 randomly\n    chosen starting points of the Mackey-Glass series. All predictions were\n    made by an ESN with optimal hyper-parameters.  The target values $d$ are\n    shown in blue, the prediction $y$ in yellow, and the error $e = |d - y|$ in\n    green. The plot in the bottom shows the variability of $e$ over different\n    predictions.\n  }\n  \\label{fig:mackey1d_predictions}\n\\end{figure}\n\n\nAt this point, \\emph{finally}, all necessary parts to try and find artificially\nintroduced anomalies in the MG system are in place. The anomalies are created\nby varying the parameter $\\gamma$ of the MG Eq.~\\ref{eq:mackey_glass}.  By\ndefault it has a value of $\\gamma = 0.1$.  Every 400 steps it is decreased by\nan amount $\\delta$ and set back to its initial value after 50 steps. Depending\non the magnitude of $\\delta$ this creates rather smooth looking anomalies.\nClearly visible anomalies are created with a value of $\\delta = 0.05$.
To search for the\noutliers, the trained network receives a sliding window of 300 steps of the\nanomalous sequence.  The predictions for the next $N_p = 100$ steps into the\nfuture are collected and the \\emph{mean prediction error} (MPE) is calculated\nfor every sequence that was fed to the ESN:\n\\begin{equation}\n  \\text{MPE} = \\frac{1}{N_p} \\sum_{t=1}^{N_p} | d_t - y_t |.\n\\end{equation}\nThe MPE is recorded for the whole dataset and the normality score $\\Sigma$\n(Eq.~\\ref{eq:normality_score}) is computed with a large window size of 100\nsteps and a small window size of 5 steps.  The result is shown in\nFig.~\\ref{fig:mackey_visible_anomalies}. With delight we can find that all\nanomalies are easily caught.  To demonstrate the capabilities of the ESN to\nefficiently find outliers in chaotic systems like the MG system,\nFig.~\\ref{fig:mackey_anomalies} depicts how the algorithm performs on\nanomalies that would, most probably, not be spotted by a human.
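\nA compact sketch of this detection loop (a sketch only, not the thesis code:\nthe moving-average ratio below merely stands in for the normality score\n$\\Sigma$ of Eq.~\\ref{eq:normality_score}, whose exact form is defined earlier\nin the text):\n\\begin{minted}{python}\nimport numpy as np\n\ndef mean_prediction_error(d, y):\n    # d, y: (num windows) x (N_p steps); targets and predictions.\n    return np.abs(d - y).mean(axis=1)\n\ndef detect(mpe, long_w=100, short_w=5, thresh=0.01):\n    # Hypothetical stand-in for the score: compare a long moving\n    # average of the MPE against a short one and flag windows where\n    # the ratio collapses, i.e. where the recent error spikes.\n    long_ma = np.convolve(mpe, np.ones(long_w) / long_w, mode='same')\n    short_ma = np.convolve(mpe, np.ones(short_w) / short_w, mode='same')\n    sigma = np.minimum(long_ma / (short_ma + 1e-12), 1.0)\n    return sigma < thresh\n\\end{minted}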
\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{mackey_visible_anomalies.pdf}\n  \\caption{Mackey-Glass system with artificially introduced anomalies at every\n    400th step with $\\delta=0.05$. The detected anomalies are indicated by the\n    yellow bars in the plots of the score $\\Sigma$. The anomaly score catches\n    the irregularly high error regions slightly too early, because the\n    prediction error goes up as soon as the prediction window reaches into the\n  anomalous regions.}\n  \\label{fig:mackey_visible_anomalies}\n\\end{figure}\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{mackey_anomalies.pdf}\n  \\caption{Outliers that would probably not have been caught by humans. Created\n  with $\\delta = 0.03$. One false positive at $t=1500$.}\n  \\label{fig:mackey_anomalies}\n\\end{figure}\n\n\\begin{listing}\n  \\inputminted{json}{pseudocode/model_setups/mackey_setup.json}\n  \\label{lst:mackey_setup}\n  \\caption{ESN setup parameters for MG prediction. The \\texttt{DenseESNCell}\n    implies that the reservoir matrix was created with a dense matrix\n    representation.\n  }\n\\end{listing}\n\n\n\n\n\n\n\\clearpage\n\\section{Dansgaard-Oeschger Events}%\n\\label{sec:res_dansgaard_oeschger_events}\n\nThe second dataset, consisting of ice-core records that reach back 100k years,\nhas two components: the $\\delta^{18}$O sequence, which is directly connected to\nsurface temperature, and the Ca$^{2+}$ series, which measures dust content and\nshows similar patterns.  Both sequences lack a clear reappearing pattern of the\nkind that was visible in the Mackey-Glass series despite the chaotic nature of\nthe MG system. This makes a one-time training over a part of the dataset that\ncontains most of the variability impossible.  It should however still be\npossible to project several dozen years ahead by moving to an online learning\nmode. In the online mode, the output weights are recomputed with every new time\nslice that is fed to the network.  The resulting short-term predictions $y$ are\nshown in Fig.~\\ref{fig:d18O_anomalies}.\n\\begin{listing}\n  \\inputminted{json}{pseudocode/model_setups/DO_setup.json}\n  \\label{lst:DO_setup}\n  \\caption{ESN setup parameters for DO event detection. The hyper-parameters were\n  found via Bayesian Optimization.}\n\\end{listing}\nThey were achieved by training the ESN for 1500 input steps and predicting the\nnext 200 steps (the time step of the series is one year).  Because the dataset\nis very irregular, the predictions are not very good, but at points where the\nsequences change abruptly they deviate much more from the target values than\nthey do otherwise. This can be exploited to detect the DO events, which are\nlocated exactly at these abruptly changing points. Apart from the online weight\nadaptation, the procedure is the same as with the MG system. The score is\ncalculated from the MPE and thresholded, resulting in the detection of\nanomalies at the yellow colored regions of Fig.~\\ref{fig:d18O_anomalies}.\nAbrupt changes after steady periods are easily detected by the algorithm.  DO\nevents that lie close to each other tend to slip through: the ESN learns from\nthe previous DO events, and as it is trained on these anomalies, they are\nconsidered normal.\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{d18O_Ca2_anomalies.pdf}\n  \\caption{Results of the DO dataset. The DO events are marked with\n  the grey background. Detected anomalies are indicated by the yellow bars in\n  the plots of the normality score $\\Sigma$.}\n  \\label{fig:d18O_anomalies}\n\\end{figure}\n\n\n\n\\newpage\n\\section{Kuroshio}%\n\\label{sec:res_kuroshio}\n\\begin{figure}\n  \\begin{minipage}[b]{0.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{kuro_example_pred_stepidx200.pdf}\n  \\end{minipage}%%\n  \\begin{minipage}[b]{0.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{kuro_example_pred_stepidx470.pdf}\n  \\end{minipage}\n  \\begin{minipage}[b]{0.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{kuro_example_pred_stepidx0.pdf}\n  \\end{minipage}%%\n  \\begin{minipage}[b]{0.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{kuro_example_pred_stepidx100.pdf}\n  \\end{minipage}\n  \\caption{Four exemplary predictions and corresponding desired outputs in the\n    Kuroshio region. Shown is the 150th prediction frame.}\n  \\label{fig:kuro_pred_examples}\n\\end{figure}\n\n\nThe prediction of the Kuroshio region involves significantly more data that\nneeds to be processed by the ESN. This means that the reservoir size has to\ngrow appropriately.  As a rule of thumb, the size of the internal state\n$\\vt{x}$ should be at least as large as the dominant period of the dataset is\nlong. The most obvious period in climate simulations is typically the annual\nsignal which, in our case of 3-day means, is 122 steps long. To be able to\nremember whole video sequences, this number should be multiplied by the number\nof pixels in each frame to obtain the state size. Even if we consider raw\ninputs of 100 x 100 pixels and resample them to a size of 30 x 30, this would\nstill result in a state size greater than 100 000, and in a memory use of more\nthan 40 GB for the internal square reservoir matrix alone. This is clearly\nunacceptable. Luckily, the pixels are highly correlated and we can hope that\nthe ESN is able to compress some of the information of the input images it\nreceives.
Sizing the reservoir down to 10000 units and keeping it very sparse\nwill decrease the memory usage of the network to an acceptable amount of a few\nhundred MB.\n\n\\begin{figure}[p]\n  \\centering\n  \\includegraphics[width=1.1\\linewidth]{kuro_pred_stepidx290}\n  \\caption{Exemplary prediction 200 steps into the future.}\n  \\label{fig:kuro_pred_stepidx290}\n\\end{figure}\n\n\\begin{figure}[p]\n  \\centering\n  \\includegraphics[width=1.1\\linewidth]{kuro_pred_stepidx20}\n  \\caption{Another prediction example. Here it looks like the internal state of\n  the ESN has reached a fixed point, so the prediction remains constant after\n  day 10100.  This fixed point produces predictions that are very similar to\n  the mean contracted state of the Kuroshio, if viewed in 2D.}\n  \\label{fig:kuro_pred_stepidx20}\n\\end{figure}\n\n\n\nApart from using a sparsely represented ESN reservoir, the approach is again\nthe same as before.  The ESN receives 1300 input frames and predicts the next\n200, which amounts to a prediction of roughly two years. Exemplary predictions\nand the corresponding desired target outputs are shown in\nFig.~\\ref{fig:kuro_pred_examples}. Figures~\\ref{fig:kuro_pred_stepidx290}\nand~\\ref{fig:kuro_pred_stepidx20} show the 25th row of two more exemplary\ntarget and prediction evolutions over time. The prediction error is quite low\nover the predicted two years, but to be safe only the first year, meaning the\nfirst $N_p = 100$ frames, will be used for the subsequent anomaly detection.\n\nFig.~\\ref{fig:kuro_mean_pred} compares the target and prediction values for the\nlast prediction step ($N_p=100$) that is considered for the anomaly detection.\nThe error plot shows the MPE for each prediction. A large deviation of\nprediction and target is visible around day 15500. The normality score (plotted\non a logarithmic color scale) that was calculated from the MPE sequence is very\nlow in this region as well; it is a promising candidate for the Kuroshio\nanomaly. By summing up all instances where the anomaly score $\\Sigma <\n0.01$ we can create a map of outliers. The result is shown in\nFig.~\\ref{fig:kuro_anomaly_count}.  It shows a region with a lot of outliers\njust where we expect them to be.  The similarity between the anomaly count and\nthe difference of elongated and contracted states is obvious, and the result\nwill be regarded as a successful detection of the Kuroshio anomaly.\n\nThe second region of high anomaly counts is where the Kuroshio turns towards\nthe North Pacific basin. This anomaly is caused by a large unpredicted eddy\nthat enters the analyzed area from the East.
The second region of high anomaly counts is where the Kuroshio turns towards
the North Pacific basin. This anomaly is caused by a large unpredicted eddy
that enters the analyzed area from the East. This anomalous region would
probably vanish if the analysis area were shifted eastward, away from Japan.
This would be the obvious next step of the outlier search, but a thorough
analysis of the whole globe is beyond the scope of this work and remains a
path that further studies of this problem could take.
\begin{figure}
  \centering
  \includegraphics[width=\linewidth]{kuro_anomaly_count.pdf}
  \caption{Count of all pixels with a normality score lower than 0.01,
  compared to the assumed true form of the Kuroshio anomaly (obtained from
  the difference of its two states).}
  \label{fig:kuro_anomaly_count}
\end{figure}

\begin{figure}
  \centering
  \includegraphics[width=1.1\linewidth]{kuro_mean_pred.pdf}
  \caption{The target plot shows the true values of one row of the SSH frames
    that were fed to the ESN. The prediction plot shows the prediction that
    was made based on the internal state 100 steps before. The error plot
    depicts the mean error of one prediction sequence, and at the bottom we
    see the resulting normality score on a logarithmic color scale.}
  \label{fig:kuro_mean_pred}
\end{figure}

\begin{listing}
  \inputminted{json}{pseudocode/model_setups/kuro_setup.json}
  \caption{ESN setup parameters for Kuroshio anomaly detection. The Bayesian
  optimization resulted in a large range of parameters with similar
  performance as long as the spectral radius was larger than 1.3 and the
  weight initialization parameter larger than 0.8. The chosen
  hyper-parameters reflect the need for a sufficiently non-linear reservoir.}
  \label{lst:kuro_setup}
\end{listing}
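Listing~\ref{lst:kuro_setup} pulls in the actual configuration file, which is
not reproduced here. As a purely hypothetical illustration of what such a
setup can look like, the snippet below combines the quantities mentioned in
this section with the two constraints from the caption; all remaining keys
and values are invented.
\begin{minted}{python}
# Hypothetical ESN setup in the spirit of kuro_setup.json. Only
# reservoir_size, n_train, n_pred and the two lower bounds are taken
# from the text; everything else is illustrative.
kuro_setup = {
    "reservoir_size": 10_000,  # reduced, sparse reservoir
    "spectral_radius": 1.5,    # optimization favored values above 1.3
    "weight_init": 0.9,        # optimization favored values above 0.8
    "sparsity": 0.99,          # fraction of zero reservoir weights (assumed)
    "n_train": 1300,           # input frames fed to the ESN
    "n_pred": 200,             # frames predicted ahead
}
\end{minted}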
{"text": "\\hypertarget{group__numpp__structures__matrices__sparse}{}\\section{Sparse}\n\\label{group__numpp__structures__matrices__sparse}\\index{Sparse@{Sparse}}\n\n\nModule containg sparse matrix implementations.  \n\n\n\\subsection*{Classes}\n\\begin{DoxyCompactItemize}\n\\item \nclass \\hyperlink{classnumpp_1_1matrix_1_1sparse_1_1block}{numpp\\+::matrix\\+::sparse\\+::block$<$ T, Rows, Columns $>$}\n\\begin{DoxyCompactList}\\small\\item\\em Block version of sparse matrix. \\end{DoxyCompactList}\\item \nclass \\hyperlink{classnumpp_1_1matrix_1_1sparse_1_1nested}{numpp\\+::matrix\\+::sparse\\+::nested$<$ T, Rows, Columns $>$}\n\\begin{DoxyCompactList}\\small\\item\\em Block nested version of sparse matrix. \\end{DoxyCompactList}\\end{DoxyCompactItemize}\n\n\n\\subsection{Detailed Description}\nModule containg sparse matrix implementations. \n\n\\begin{DoxyWarning}{Warning}\nWarnings about compilation times do not apply to blocked and nested\\+\\_\\+block classes as those are runtime only \n\\end{DoxyWarning}\n", "meta": {"hexsha": "abcbbd1205973af9acbc1cf5cf82eb2b8c4b54cf", "size": 963, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/group__numpp__structures__matrices__sparse.tex", "max_stars_repo_name": "szymonmaszke/numpp", "max_stars_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-06-06T01:51:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-02T15:17:00.000Z", "max_issues_repo_path": "docs/group__numpp__structures__matrices__sparse.tex", "max_issues_repo_name": "vyzyv/numpp", "max_issues_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-11-28T12:15:46.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T00:03:38.000Z", "max_forks_repo_path": "docs/group__numpp__structures__matrices__sparse.tex", "max_forks_repo_name": "szymonmaszke/numpp", "max_forks_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-08-06T13:58:27.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-06T06:45:22.000Z", "avg_line_length": 41.8695652174, "max_line_length": 122, "alphanum_fraction": 0.7995846314, "num_tokens": 285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8459424431344437, "lm_q1q2_score": 0.5915289827193568}}
{"text": "% !TEX root = ../main.tex\n\n\\section{Network models}\n\n\\subsection{Deterministic networks}\n\n\\begin{itemize}\n  \\item Complete graph $K_N$ on $N$\n  \\item Lattices (in $D$-dimensions): each node has 2D neighbours\n\\end{itemize}\n\n\\subsection{Measurements of networks}\n\n\\begin{itemize}\n  \\item Choose 8 persons\n  \\item Link = friendship\n  \\item Properties of friendship graphs?\n\\end{itemize}\n\nWhy do we need this? Reality is just a measurement. You have a lot of friendship networks,\nit depends on the realization. \n\nMotivation to study \\emph{random graphs}\ne.g. realizations of $G_{0.4}(8)$\n\n\\subsection{Complex network models}\n\n\\begin{itemize}\n  \\item Random Graph - Erd\u00f6s-R\u00e9nyi (1959-1960)\n  \\item Small-World Graph - Watts-Strogatz (1998). Introduces the idea of rewiring\n  \\item Scale-Free Graph - Barab\u00e1si-Albert (1999)\n\\end{itemize}\n\n\\subsection{Erd\u00f6s-R\u00e9nyi random graph}\nThe ER random graph $G_p(N)$ is a graph with $N$ nodes and each node pair is connected \nindependently with probability $p$\n\nAny so generated graph belongs to the class ER random graph $G_p(N)$ with same $N$ and $p$.\n\nThere are two variants of ER random graph:\n\\begin{itemize}\n  \\item $G_p(N)$ link existence probability $p$\n  \\item $G(N,L)$ precisely $L$ links?\n\\end{itemize}\n\nHow would you describe or deduce the adjacency matrix $A$ for $G_p(N)$?\n$a_{ij}$ (with $j \\ne i$) is a Bernoulli random variable with mean $p$\n\n\\begin{align*}\n  Pr[a_{ij} = 1] &= p \\\\\n  Pr[a_{ij} = 0] &= 1 - p \\\\\n  E[a_{ij}] &= p\n\\end{align*}\nA is a $N \\times N$ matrix\n\nThe complement graph of $G_p(N)$ is $G_{1-p}(N)$\n\n\\begin{itemize}\n  \\item If $p=1$, then $G_1(N)$ is the complete graph $K_N$\n  \\item If $p=0$, then $G_0(N)$ is the empty or null graph\n  \\item The average number of links is\n  \\begin{align*}\n    E[L] = \\frac{N(N-1)}{2}p \\implies p = \\frac{L}{L_{\\max}} = \\text{link density}\n  \\end{align*}\n  \\item The average clustering coefficient is:\n  \\begin{align*}\n    E[c_{G_p(N)}] = p\n  \\end{align*}\n  \\item Degree distribution: Binomial distribution\n  \\begin{align*}\n    Pr[D_{rg} = k] &= \n    \\begin{pmatrix}\n      N - 1 \\\\ k\n    \\end{pmatrix}\n    p^k ( 1- p)^{N-1-k} \\simeq \n    \\frac{1}{\\sigma \\sqrt{2\\pi}}e^{-\\left( \\frac{(k-\\mu)^2}{2\\sigma^2} \\right)} \\\\\n    \\mu &= E[D] = (N-1)p \\\\\n    \\sigma^2 &= Var[D] = (N-1)p(1 - p)\n  \\end{align*}\n\\end{itemize}\n\n\\subsubsection{Distribution of eigenvalues of adjacency matrix of $G_p(N)$}\n\nConsider set of eigenvalues as realizations of eigenvalue random variable $\\lambda$\n\nHistogram approximates the probability density function $f_{\\lambda}(x)$ of eigenvalues\nof the adjacency matrix $A$. 
The largest eigenvalue of a ER graph is really large compared to \nthe other ones, the distance is called spectral value.\n\n\\subsubsection{Observations from the spectrum}\n\n\\begin{itemize}\n  \\item The probability density function $f_{\\lambda}(x)$ of the eigenvalues of the adjacency\n  matrix\n  \\begin{itemize}\n    \\item Peaks refer to a specific structure or pattern in the graph\n    \\item A broader, bell-shape form of $f_{\\lambda}(x)$ around the origin ($x=0$) is\n    a fingerprint of randomness\n    \\begin{itemize}\n      \\item Broadening of peaks in spectra to bell-shape forms i mostly due to randomness\n    \\end{itemize}\n  \\end{itemize}\n  \\item If $f_{\\lambda}(-x) = f_{\\lambda}(x)$ an even function of $x$, then the graph is a tree\n  (no triangles, thus skewness $s_{\\lambda} = 0$)\n  \\begin{itemize}\n    \\item Any tree can be represented by a bipartite graph\n    \\item A bipartite graph has a symmetric spectrum (for each eigenvalue $\\lambda$, there\n    exists an eigenvalue $-\\lambda$)\n  \\end{itemize}\n\\end{itemize}\n\n\\subsection{Power law graphs}\n\nUsually, we prefer normalized random variables with mean 0 and variance 1.\n\nA power law degree distribution is also called scale-free:\n\nAny number $a$ just multiplies the probability density; there is no characteristic length. \nThe mean is not representative, because the variance is large.\n\nThe simplest family of ``power law'' graphs have been proposed by Barbasi-Albert:\n\n\\begin{itemize}\n  \\item Start with $n << N$ nodes\n  \\item Attach a new node with $m$ links; each link to an already existing node \n  randomly and proportionally to its degree\n  \\item Repeat 2) until size $N$ is reached\n\\end{itemize}\n\n\\subsubsection{Power law exponent $\\tau$ in $Pr[D=k] \\simeq ck^{-\\tau}$}\n\n\\begin{itemize}\n  \\item Critical point 1\n  \\begin{itemize}\n    \\item $E[D]$ diverges\n    \\item $E[D^2]$ diverges\n    \\item $D_{max}$ grows faster than $N$\n    \\item $E[H] \\sim $ const\n  \\end{itemize}\n  \\item Critical point 2 (needs to be fixed)\n  \\begin{itemize}\n    \\item $E[D]$ finite\n    \\item $E[D^2]$ diverges\n    \\item $D_{max}$ grows faster than $N$\n    \\item $E[H] \\sim log(log N)$\n  \\end{itemize}\n  \\item Critical point 3 (needs to be fixed)\n  \\begin{itemize}\n    \\item $E[D]$ diverges\n    \\item $E[D^2]$ diverges\n    \\item $D_{max}$ grows faster than $N$\n    \\item $E[H] \\sim $ const\n  \\end{itemize}\n\\end{itemize}\n\n\\subsection{Observed common properties of real-world complex networks}\n\n\\begin{itemize}\n  \\item small-world property\n  \\begin{itemize}\n    \\item Average length/hopcount of a path is short compared to the size $N$ of the network\n    ($E[H] = O(log N))$\n  \\end{itemize}\n  \\item scale-free degree distribution\n  \\begin{itemize}\n    \\item heavy tails (non-Gaussian behavior)\n  \\end{itemize}\n  \\item clustering and community structure\n  \\begin{itemize}\n    \\item network of networks\n  \\end{itemize}\n  \\item robustness to random node failure\n  \\item vulnerability to targeted hub attacks and cascading failures\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "e6d0fdb7251eb645890092b4e62b0728150d45ec", "size": 5584, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/Networking/lectures/lecture_04.tex", "max_stars_repo_name": "jmigual/APATeoria", "max_stars_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/Networking/lectures/lecture_04.tex", "max_issues_repo_name": "jmigual/APATeoria", "max_issues_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-08-05T10:35:07.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-05T10:35:08.000Z", "max_forks_repo_path": "Notes/Networking/lectures/lecture_04.tex", "max_forks_repo_name": "jmigual/APATeoria", "max_forks_repo_head_hexsha": "acea91e3d339165855742dd5c5d6961158d5c391", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-10-10T08:40:56.000Z", "max_forks_repo_forks_event_max_datetime": "2016-10-14T12:10:40.000Z", "avg_line_length": 30.347826087, "max_line_length": 95, "alphanum_fraction": 0.6883954155, "num_tokens": 1722, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.76908023177796, "lm_q1q2_score": 0.5914844069822042}}
{"text": "\\section{Critical points of Energy in the sphere}\n", "meta": {"hexsha": "5c809ea3d6a43188711ad19885ea6b6a0efa49ce", "size": 50, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/CPoE_Sphere.tex", "max_stars_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_stars_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-12-28T05:53:38.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T05:56:59.000Z", "max_issues_repo_path": "src/CPoE_Sphere.tex", "max_issues_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_issues_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/CPoE_Sphere.tex", "max_forks_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_forks_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.0, "max_line_length": 49, "alphanum_fraction": 0.8, "num_tokens": 11, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7718435083355187, "lm_q1q2_score": 0.5914587738946729}}
{"text": "% ===> this file was generated automatically by noweave --- better not edit it\n\\section{Introduction}\n\nThe {\\Tt{}matrix-lua\\nwendquote} module adds simple matrix support to Lua.\nSupported functions are\n\\begin{itemize}\n  \\item {\\Tt{}matrix(m,n)\\nwendquote}: create a new zero matrix of size $m$ by $n$\n  \\item {\\Tt{}A(i,j)\\nwendquote}:      get $A_{ij}$\n  \\item {\\Tt{}A[k]\\nwendquote}:        get or set the $k$th element (column-major 1-based)\n  \\item {\\Tt{}C\\ =\\ A+B\\nwendquote}:     add two matrices\n  \\item {\\Tt{}C\\ =\\ A-B\\nwendquote}:     subtract two matrices\n  \\item {\\Tt{}C\\ =\\ -A\\nwendquote}:      take the unary negation of a matrix\n  \\item {\\Tt{}C\\ =\\ A*B\\nwendquote}:     multiply two matrices\n  \\item {\\Tt{}A.print()\\nwendquote}:   print matrix\n  \\item {\\Tt{}A.factor()\\nwendquote}:  factor matrix\n  \\item {\\Tt{}A.solve(x)\\nwendquote}:  compute $x := A^{-1}x$\n  \\item {\\Tt{}A.clone()\\nwendquote}:   create a copy of the matrix\n  \\item {\\Tt{}A.slice(i1,i2,j1,j2)\\nwendquote}:   create a slice of the matrix\n  \\item {\\Tt{}A.free()\\nwendquote}:    free matrix\n\\end{itemize}\nSupported variables are\n\\begin{itemize}\n  \\item {\\Tt{}A.m\\nwendquote}:  return number of rows in matrix\n  \\item {\\Tt{}A.n\\nwendquote}:  return number of columns in matrix\n\\end{itemize}\n\n\n\\section{Interface}\n\n\\nwfilename{matrix-lua.nw}\\nwbegincode{1}\\sublabel{NW49APAR-2XOU18-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-2XOU18-1}}}\\moddef{matrix-lua.h~{\\nwtagstyle{}\\subpageref{NW49APAR-2XOU18-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwenddeflinemarkup\n#ifndef MATRIX_LUA_H\n#define MATRIX_LUA_H \n\n#include <lua.h>\n\nstruct lua_matrix_struct \\{\n    int owns_data;  /* True if this matrix owns storage  */\n    int m, n, ld;   /* Array size and leading dimension  */\n    double* data;   /* Array data                        */\n    int* ipiv;      /* Pivot array for factored matrices */\n\\};\n\ntypedef struct lua_matrix_struct* lua_matrix_t;\n\nvoid         lua_matrix_register(lua_State* L);\nvoid         lua_matrix_push(lua_State* L, lua_matrix_t A);\nlua_matrix_t lua_matrix_get(lua_State* L, int index);\n\nlua_matrix_t lua_matrix_create(int m, int n);\nlua_matrix_t lua_matrix_slice(lua_matrix_t A, int i1, int i2, int j1, int j2);\nvoid         lua_matrix_destroy(lua_matrix_t self);\n\n#endif /* MATRIX_LUA_H */\n\\nwnotused{matrix-lua.h}\\nwendcode{}\\nwbegindocs{2}\\nwdocspar\n\nThe {\\Tt{}lua{\\_}matrix{\\_}register\\nwendquote} function registers the functions of\nthe module with the Lua interpreter.  The {\\Tt{}lua{\\_}matrix{\\_}push\\nwendquote}\nand {\\Tt{}lua{\\_}matrix{\\_}get\\nwendquote} functions add and get matrices\non the Lua stack.\n\nThe {\\Tt{}lua{\\_}matrix{\\_}create\\nwendquote} and {\\Tt{}lua{\\_}matrix{\\_}destroy\\nwendquote} functions create\nnew matrices.  The {\\Tt{}lua{\\_}matrix{\\_}slice\\nwendquote} function creates a ``parasite''\nmatrix that uses the same storage as the original matrix but which\nhas different indexing.  
In Matlab parlance, the slice function returns\n{\\Tt{}A(i1:i2,\\ j1:j2)\\nwendquote}.\n\n\n\\section{Implementation}\n\n\\nwenddocs{}\\nwbegincode{3}\\sublabel{NW49APAR-VVNpW-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-VVNpW-1}}}\\moddef{matrix-lua.c~{\\nwtagstyle{}\\subpageref{NW49APAR-VVNpW-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwenddeflinemarkup\n#include <sugar.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n#include <matrix_lua.h>\n\n\\LA{}macros~{\\nwtagstyle{}\\subpageref{NW49APAR-1VvxMr-1}}\\RA{}\n\\LA{}static prototypes~{\\nwtagstyle{}\\subpageref{NW49APAR-4QyxLE-1}}\\RA{}\n\\LA{}static data~{\\nwtagstyle{}\\subpageref{NW49APAR-IAwJm-1}}\\RA{}\n\\LA{}static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}\\RA{}\n\\LA{}functions~{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}\\RA{}\n\\nwnotused{matrix-lua.c}\\nwendcode{}\\nwbegindocs{4}\\nwdocspar\n\n\n\\subsection{Matrix tag}\n\nLua provides \\emph{tags} for user data, with semantics known only\nto the host program.  The {\\Tt{}lua{\\_}matrix{\\_}tag\\nwendquote} is the tag for\nthe Lua matrix type.  Note that you \\emph{could} get into trouble\nwith this if multiple interpreters are simultaneously active,\nsince the different interpreters may not end up allocating the same\ntag value.\n\n\\nwenddocs{}\\nwbegincode{5}\\sublabel{NW49APAR-IAwJm-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-IAwJm-1}}}\\moddef{static data~{\\nwtagstyle{}\\subpageref{NW49APAR-IAwJm-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwenddeflinemarkup\nstatic int lua_matrix_tag;\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{6}\\nwdocspar\n\n\n\\subsection{Accessor macros}\n\nWe use a couple convenience macros for getting the matrix entries.\nThe macros use one-based indexing for consistence with the Lua\nindex conventions.\n\n\\nwenddocs{}\\nwbegincode{7}\\sublabel{NW49APAR-1VvxMr-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1VvxMr-1}}}\\moddef{macros~{\\nwtagstyle{}\\subpageref{NW49APAR-1VvxMr-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwenddeflinemarkup\n#define Mij(M,i,j) (M->data[((j)-1)*M->ld + ((i)-1)])\n#define Mrow0(M,k) ( ((k)-1) % (M->m) )\n#define Mcol0(M,k) ( ((k)-1) / (M->m) )\n#define Mk(M,k)    ( M->data[Mcol0(M,k) * M->ld + Mrow0(M,k)] )\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{8}\\nwdocspar\n\n\\subsection{Matrix call getter}\n\nWe use Lua tag methods to handle the case when the ``call'' syntax\nis applied to a matrix object.  
If there is one numeric argument $i$,\nwe get the $(i,1)$ matrix entry; if there are two numeric arguments,\nwe get the $(i,j)$ entry.\n\n\\nwenddocs{}\\nwbegincode{9}\\sublabel{NW49APAR-BzF6M-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{\\relax}{NW49APAR-BzF6M-2}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_call, 0);\nlua_settagmethod(L, lua_matrix_tag, \"function\");\n\\nwalsodefined{\\\\{NW49APAR-BzF6M-2}\\\\{NW49APAR-BzF6M-3}\\\\{NW49APAR-BzF6M-4}\\\\{NW49APAR-BzF6M-5}\\\\{NW49APAR-BzF6M-6}\\\\{NW49APAR-BzF6M-7}\\\\{NW49APAR-BzF6M-8}}\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{10}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{11}\\sublabel{NW49APAR-1duChy-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{\\relax}{NW49APAR-1duChy-2}\\nwenddeflinemarkup\nstatic int lua_matrix_call(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int n = lua_gettop(L);\n    int i, j;\n\n    if (n < 2 || n > 3)\n        lua_error(L, \"Wrong number of arguments\");\n\n    if (!lua_isnumber(L,2) || (n == 3 && !lua_isnumber(L,3)))\n        lua_error(L, \"Index must be a number\");\n\n    i = (int) lua_tonumber(L,2);\n    j = (n == 3) ? (int) lua_tonumber(L,3) : 1;\n\n    if (i <= 0 || i > A->m || j <= 0 || j > A->n)\n        lua_error(L, \"Index out of range\");\n\n    lua_settop(L,0);\n    lua_pushnumber(L, Mij(A,i,j));\n    return 1;\n\\}\n\n\\nwalsodefined{\\\\{NW49APAR-1duChy-2}\\\\{NW49APAR-1duChy-3}\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}\\\\{NW49APAR-1duChy-6}\\\\{NW49APAR-1duChy-7}\\\\{NW49APAR-1duChy-8}\\\\{NW49APAR-1duChy-9}\\\\{NW49APAR-1duChy-A}\\\\{NW49APAR-1duChy-B}\\\\{NW49APAR-1duChy-C}\\\\{NW49APAR-1duChy-D}\\\\{NW49APAR-1duChy-E}\\\\{NW49APAR-1duChy-F}\\\\{NW49APAR-1duChy-G}\\\\{NW49APAR-1duChy-H}}\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{12}\\nwdocspar\n\n\n\\subsection{Matrix table getter}\n\nThe indexed read operation does two different things.  If the index\nis a number $k$, we try to return the $k$th element of the matrix\n(as arranged in column-major order with one-based indexing).\nIf the index is a string, then we probably have a method call,\nwhich is handled by the {\\Tt{}lua{\\_}matrix{\\_}getmethod\\nwendquote} function described\nlater.  
(Recall that the dot syntax in Lua is a type of indexing,\nso that {\\Tt{}A.print()\\nwendquote} is equivalent to {\\Tt{}A[\"print\"]()\\nwendquote}.)\n\nThe {\\Tt{}gettable\\nwendquote} tag method receives the tagged object and the index.\n\n\\nwenddocs{}\\nwbegincode{13}\\sublabel{NW49APAR-BzF6M-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-2}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-1}{NW49APAR-BzF6M-3}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_gettable, 0);\nlua_settagmethod(L, lua_matrix_tag, \"gettable\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{14}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{15}\\sublabel{NW49APAR-1duChy-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-2}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-1}{NW49APAR-1duChy-3}\\nwenddeflinemarkup\nstatic int lua_matrix_gettable(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int k;\n\n    if (!lua_isnumber(L,2)) \\{\n        if (lua_isstring(L,2))\n            return lua_matrix_getmethod(L);\n        else\n            lua_error(L, \"Index must be a number\");\n    \\}\n    k = (int) lua_tonumber(L,2);\n\n    if (k <= 0 || k > A->m * A->n)\n        lua_error(L, \"Index out of range\");\n\n    lua_settop(L,0);\n    lua_pushnumber(L, Mk(A,k));\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{16}\\nwdocspar\n\n\n\\subsection{Matrix setter}\n\nThe indexed write operation can only do one thing: set a matrix\nentry.  The {\\Tt{}settable\\nwendquote} tag method gets the tagged object,\nthe index, and the value to write on the stack.\n\n\\nwenddocs{}\\nwbegincode{17}\\sublabel{NW49APAR-BzF6M-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-3}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-2}{NW49APAR-BzF6M-4}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_settable, 0);\nlua_settagmethod(L, lua_matrix_tag, \"settable\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{18}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{19}\\sublabel{NW49APAR-1duChy-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-3}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-2}{NW49APAR-1duChy-4}\\nwenddeflinemarkup\nstatic int lua_matrix_settable(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int k;\n\n    if (!lua_isnumber(L,2))\n        lua_error(L, \"Index must be a number\");\n\n    if (!lua_isnumber(L,3))\n        lua_error(L, \"Value must be a number\");\n\n    k = (int) lua_tonumber(L,2);\n    if (k <= 0 || k > A->m*A->n)\n        lua_error(L, \"Index out of range\");\n\n    Mk(A,k) = lua_tonumber(L,3);\n\n    lua_settop(L,0);\n    return 0;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{20}\\nwdocspar\n\n\n\\subsection{Matrix arithmetic operations}\n\nThe {\\Tt{}lua{\\_}matrix{\\_}add\\nwendquote} and {\\Tt{}lua{\\_}matrix{\\_}sub\\nwendquote} methods are attached\nto the add and subtract events.  
The {\\Tt{}lua{\\_}matrix{\\_}unm\\nwendquote} method is\nattached to the unary negation event.\n\n\\nwenddocs{}\\nwbegincode{21}\\sublabel{NW49APAR-BzF6M-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-4}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-3}{NW49APAR-BzF6M-5}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_add, 0);\nlua_settagmethod(L, lua_matrix_tag, \"add\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{22}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{23}\\sublabel{NW49APAR-1duChy-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-4}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-3}{NW49APAR-1duChy-5}\\nwenddeflinemarkup\nstatic int lua_matrix_add(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NW49APAR-1rgxbt-1}}\\RA{}\n\n    \\LA{}check summand conformality~{\\nwtagstyle{}\\subpageref{NW49APAR-2op4lt-1}}\\RA{}\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = Mij(A,i,j) + Mij(B,i,j);\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}\\RA{}\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{24}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{25}\\sublabel{NW49APAR-BzF6M-5}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-5}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-4}{NW49APAR-BzF6M-6}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_sub, 0);\nlua_settagmethod(L, lua_matrix_tag, \"sub\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{26}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{27}\\sublabel{NW49APAR-1duChy-5}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-5}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-4}{NW49APAR-1duChy-6}\\nwenddeflinemarkup\nstatic int lua_matrix_sub(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NW49APAR-1rgxbt-1}}\\RA{}\n\n    \\LA{}check summand conformality~{\\nwtagstyle{}\\subpageref{NW49APAR-2op4lt-1}}\\RA{}\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = Mij(A,i,j) - Mij(B,i,j);\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}\\RA{}\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{28}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{29}\\sublabel{NW49APAR-BzF6M-6}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-6}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-5}{NW49APAR-BzF6M-7}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_unm, 0);\nlua_settagmethod(L, lua_matrix_tag, 
\"unm\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{30}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{31}\\sublabel{NW49APAR-1duChy-6}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-6}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-5}{NW49APAR-1duChy-7}\\nwenddeflinemarkup\nstatic int lua_matrix_unm(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get unary operand~{\\nwtagstyle{}\\subpageref{NW49APAR-2Wr2g8-1}}\\RA{}\n\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = -Mij(A,i,j);\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}\\RA{}\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{32}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{33}\\sublabel{NW49APAR-BzF6M-7}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-7}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-6}{NW49APAR-BzF6M-8}\\nwenddeflinemarkup\nlua_pushcclosure(L, lua_matrix_mul, 0);\nlua_settagmethod(L, lua_matrix_tag, \"mul\");\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{34}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{35}\\sublabel{NW49APAR-1duChy-7}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-7}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-6}{NW49APAR-1duChy-8}\\nwenddeflinemarkup\nstatic int lua_matrix_mul(lua_State* L)\n\\{\n    extern int dgemm_(char* transA, char* transB, int* m, int* n, int* k,\n                      double* alpha, double* A, int* ldA,\n                      double* B, int* ldB,\n                      double* beta, double* C, int* ldC);\n#ifdef HAVE_LAPACK\n    double one  = 1;\n    double zero = 0;\n#endif\n\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NW49APAR-1rgxbt-1}}\\RA{}\n\n    if (A->n != B->m)\n        lua_error(L, \"Nonconformal matrices\");\n\n    C = lua_matrix_create(A->m, B->n);\n\n#ifdef HAVE_LAPACK\n\n    dgemm_(\"N\", \"N\", &(C->m), &(C->n), &(A->m),\n           &one, A->data, &(A->ld),\n           B->data, &(B->ld),\n           &zero, C->data, &(C->ld));\n\n#else\n    lua_error(L, \"Dense linear algebra routines not linked\\\\n\");\n#endif /* HAVE_LAPACK */\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}\\RA{}\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{36}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{37}\\sublabel{NW49APAR-1rgxbt-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1rgxbt-1}}}\\moddef{get binary operands~{\\nwtagstyle{}\\subpageref{NW49APAR-1rgxbt-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}\\\\{NW49APAR-1duChy-7}}\\nwenddeflinemarkup\nlua_matrix_t A, B, C;\n\nif (lua_tag(L,1) != lua_matrix_tag || lua_tag(L,2) != lua_matrix_tag)\n    lua_error(L, \"Invalid operands\");\n\nA = lua_touserdata(L,1);\nB = 
lua_touserdata(L,2);\n\\nwused{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}\\\\{NW49APAR-1duChy-7}}\\nwendcode{}\\nwbegindocs{38}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{39}\\sublabel{NW49APAR-2Wr2g8-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-2Wr2g8-1}}}\\moddef{get unary operand~{\\nwtagstyle{}\\subpageref{NW49APAR-2Wr2g8-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-6}}\\nwenddeflinemarkup\nlua_matrix_t A = lua_touserdata(L,1);\nlua_matrix_t C;\n\\nwused{\\\\{NW49APAR-1duChy-6}}\\nwendcode{}\\nwbegindocs{40}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{41}\\sublabel{NW49APAR-2op4lt-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-2op4lt-1}}}\\moddef{check summand conformality~{\\nwtagstyle{}\\subpageref{NW49APAR-2op4lt-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}}\\nwenddeflinemarkup\nif (A->m != B->m || A->n != B->n)\n    lua_error(L, \"Noncomformal matrices\");\n\n\\nwused{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}}\\nwendcode{}\\nwbegindocs{42}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{43}\\sublabel{NW49APAR-EyHPB-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}}\\moddef{return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NW49APAR-EyHPB-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}\\\\{NW49APAR-1duChy-6}\\\\{NW49APAR-1duChy-7}}\\nwenddeflinemarkup\nlua_settop(L,0);\nlua_pushusertag(L, C, lua_matrix_tag);\nreturn 1;\n\\nwused{\\\\{NW49APAR-1duChy-4}\\\\{NW49APAR-1duChy-5}\\\\{NW49APAR-1duChy-6}\\\\{NW49APAR-1duChy-7}}\\nwendcode{}\\nwbegindocs{44}\\nwdocspar\n\n\\subsection{{\\Tt{}matrix\\nwendquote} command}\n\nThe {\\Tt{}matrix\\nwendquote} command creates a new matrix of a specified size.\nThe user is allowed to leave off the second argument $n$;\nif it is not explicitly specified, $n$ is assumed to be $1$.\nThe sizes are only somewhat sanity checked -- there is no\ncheck to make sure that $mn$ is not humongous, and we fail\ngracelessly if the memory allocations fail.\n\nAfter the numeric arguments, the user may also specify a table\nof array contents in \\emph{row-major} order.  The idea is that\n\\begin{verbatim}\n  A = matrix(2,2, {1, 2,\n                   3, 4})\n\\end{verbatim}\nis probably the most natural way to initialize a little two-by-two\nexample matrix.  
It should also be possible to write\n\\begin{verbatim}\n  A = matrix{1, 2, 3}\n\\end{verbatim}\nto get a three-element vector.\n\n\\nwenddocs{}\\nwbegincode{45}\\sublabel{NW49APAR-BzF6M-8}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-8}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-nRuDO-4}}\\nwprevnextdefs{NW49APAR-BzF6M-7}{\\relax}\\nwenddeflinemarkup\nlua_register(L, \"matrix\", lua_matrix);\n\\nwused{\\\\{NW49APAR-nRuDO-4}}\\nwendcode{}\\nwbegindocs{46}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{47}\\sublabel{NW49APAR-1duChy-8}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-8}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-7}{NW49APAR-1duChy-9}\\nwenddeflinemarkup\nstatic int lua_matrix(lua_State* L)\n\\{\n    int nargs = lua_gettop(L);\n    lua_matrix_t A;\n    int i,j;\n    int m = 0, n = 0;\n    int init_table = 0;\n\n    \\LA{}get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NW49APAR-JzBPi-1}}\\RA{}\n\n    A = lua_matrix_create(m,n);\n\n    \\LA{}initialize matrix from table~{\\nwtagstyle{}\\subpageref{NW49APAR-3n1EGq-1}}\\RA{}\n\n    lua_settop(L,0);\n    lua_pushusertag(L, A, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{48}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{49}\\sublabel{NW49APAR-JzBPi-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-JzBPi-1}}}\\moddef{get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NW49APAR-JzBPi-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-8}}\\nwprevnextdefs{\\relax}{NW49APAR-JzBPi-2}\\nwenddeflinemarkup\nfor (i = 1; i <= nargs; ++i) \\{\n    if (lua_isnumber(L,i)) \\{\n        int val = (int) lua_tonumber(L,i);\n        if (val <= 0)\n            lua_error(L, \"Size out of bounds\");\n\n        if      (m == 0)  m = val;\n        else if (n == 0)  n = val;\n        else              lua_error(L, \"Too many size parameters\");\n\n    \\} else if (lua_istable(L,i)) \\{\n        if (init_table == 0)\n            init_table = i;\n        else\n            lua_error(L, \"Too many initializers\");\n    \\}\n\\}\n\n\\nwalsodefined{\\\\{NW49APAR-JzBPi-2}}\\nwused{\\\\{NW49APAR-1duChy-8}}\\nwendcode{}\\nwbegindocs{50}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{51}\\sublabel{NW49APAR-JzBPi-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-JzBPi-2}}}\\moddef{get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NW49APAR-JzBPi-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-8}}\\nwprevnextdefs{NW49APAR-JzBPi-1}{\\relax}\\nwenddeflinemarkup\nif (m == 0) \\{\n    if (init_table) \\{\n        m = lua_getn(L,init_table);\n        if (m == 0)\n            lua_error(L, \"Insufficient initializer entries\");\n    \\}\n\\}\n\nif (n == 0)\n    n = 1;\n\nif (init_table && lua_getn(L,init_table) > m*n)\n    lua_error(L, \"Initializer too large\");\n\n\\nwused{\\\\{NW49APAR-1duChy-8}}\\nwendcode{}\\nwbegindocs{52}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{53}\\sublabel{NW49APAR-3n1EGq-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-3n1EGq-1}}}\\moddef{initialize matrix from table~{\\nwtagstyle{}\\subpageref{NW49APAR-3n1EGq-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-8}}\\nwenddeflinemarkup\nif (init_table) \\{\n    int k = 
1;\n    for (i = 1; i <= m; ++i) \\{\n        for (j = 1; j <= n; ++j) \\{\n            lua_rawgeti(L,init_table, k++);\n            if (lua_isnumber(L,-1))\n                Mij(A,i,j) = lua_tonumber(L,-1);\n            lua_pop(L,1);\n        \\}\n    \\}\n\\}\n\\nwused{\\\\{NW49APAR-1duChy-8}}\\nwendcode{}\\nwbegindocs{54}\\nwdocspar\n\n\\subsection{Matrix {\\Tt{}print\\nwendquote} method}\n\nMatlab's wrapped matrix output format is \\emph{really} nice when you\nhave to inspect matrices of moderate size.  The {\\Tt{}print{\\_}matrix\\nwendquote} routine\nemulates Matlab's behavior for {\\Tt{}format\\ short\\ e\\nwendquote}.\n\n\\nwenddocs{}\\nwbegincode{55}\\sublabel{NW49APAR-2e3CeC-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-2e3CeC-1}}}\\moddef{call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NW49APAR-2e3CeC-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-9}}\\nwenddeflinemarkup\nlua_pushvalue(L, -1);\nlua_pushstring(L, buf);\nlua_rawcall(L, 1, 0);\n\\nwused{\\\\{NW49APAR-1duChy-9}}\\nwendcode{}\\nwbegindocs{56}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{57}\\sublabel{NW49APAR-1duChy-9}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-9}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-8}{NW49APAR-1duChy-A}\\nwenddeflinemarkup\nstatic void print_matrix(lua_State* L, lua_matrix_t A)\n\\{\n    int i, j, c;\n    int m = A->m;\n    int n = A->n;\n    char buf[256];\n\n    lua_getglobal(L, \"print\");\n    for (c = 1; c <= n; c += 6) \\{\n\n        if (n > 6) \\{\n            sprintf(buf, \"  Columns %d through %d\\\\n\", \n                    c, (c+5 > n) ? n : c+5);\n            \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NW49APAR-2e3CeC-1}}\\RA{}\n        \\}\n\n        for (i = 1; i <= m; ++i) \\{\n            char num[64];\n\n            *buf = '\\\\0';\n            for (j = c; j <= n && j < c+6; ++j) \\{\n                if (Mij(A,i,j))\n                    sprintf(num, \"  % .4e\", Mij(A,i,j));\n                else\n                    sprintf(num, \"            0\");\n                strcat(buf, num);\n            \\}\n            \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NW49APAR-2e3CeC-1}}\\RA{}\n\n        \\}\n        buf[0] = ' ';\n        buf[1] = '\\\\0';\n        \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NW49APAR-2e3CeC-1}}\\RA{}\n    \\}\n    lua_pop(L,1);\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{58}\\nwdocspar\n\nThe {\\Tt{}matrix{\\_}print\\nwendquote} routine (also known as the {\\Tt{}print\\nwendquote} method\nfor a matrix object) uses {\\Tt{}print{\\_}matrix\\nwendquote} to output a reasonably\npretty representation of the matrix.\n\n\\nwenddocs{}\\nwbegincode{59}\\sublabel{NW49APAR-1duChy-A}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-A}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-9}{NW49APAR-1duChy-B}\\nwenddeflinemarkup\nstatic int lua_matrix_print(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Wrong number of arguments\");\n\n    print_matrix(L, A);\n    lua_settop(L, 0);\n    return 
0;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{60}\\nwdocspar\n\n\n\\subsection{Matrix {\\Tt{}factor\\nwendquote} method}\n\nThe {\\Tt{}matrix{\\_}factor\\nwendquote} function computes $A = PLU$ using LAPACK's %'\n{\\Tt{}DGETRF\\nwendquote} routine.\n\n\\nwenddocs{}\\nwbegincode{61}\\sublabel{NW49APAR-1duChy-B}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-B}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-A}{NW49APAR-1duChy-C}\\nwenddeflinemarkup\nstatic int lua_matrix_factor(lua_State* L)\n\\{\n    extern int dgetrf_(int* m, int* n, double* A, int* ldA,\n                       int* ipiv, int* info);\n\n    lua_matrix_t A = lua_touserdata(L,-1);\n\n    if (A->m != A->n)\n        lua_error(L, \"Matrix must be square\");\n\n#ifdef HAVE_LAPACK\n    if (A->ipiv == NULL) \\{\n        int info;\n        A->ipiv = calloc(A->m, sizeof(int));\n        dgetrf_(&(A->m), &(A->n), A->data, &(A->ld), A->ipiv, &info);    \n        if (info != 0) \\{\n            printf(\"dgetrf failed with error code %d\\\\n\", info);\n            lua_error(L, \"Error during factorization\");\n        \\}\n    \\}\n#else\n    lua_error(L, \"Dense linear algebra not linked\\\\n\");\n#endif\n\n    lua_settop(L,0);\n    lua_pushusertag(L, A, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{62}\\nwdocspar\n\n\n\\subsection{Matrix {\\Tt{}solve\\nwendquote} method}\n\nThe {\\Tt{}matrix{\\_}solve\\nwendquote} function computes $x := A^{-1} x$ given a factored\nmatrix $A$.  If the matrix has not already been factored, we will factor\nit.\n\n\\nwenddocs{}\\nwbegincode{63}\\sublabel{NW49APAR-1duChy-C}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-C}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-B}{NW49APAR-1duChy-D}\\nwenddeflinemarkup\nstatic int lua_matrix_solve(lua_State* L)\n\\{\n    extern int dgetrs_(char* trans, int* n, int* nrhs, double* A, int* ldA,\n                       int* ipiv, double* B, int* ldB, int* info);\n\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n\n#ifdef HAVE_LAPACK\n    int info;\n#endif\n\n    if (lua_gettop(L) != 2)\n        lua_error(L, \"Wrong number of arguments\");\n\n    if (lua_tag(L,1) != lua_matrix_tag)\n        lua_error(L, \"Argument must be a matrix\");\n    B = lua_touserdata(L,1);\n\n    if (A->ipiv == NULL) \\{\n        lua_matrix_factor(L);\n    \\}\n\n    if (A->n != B->m)\n        lua_error(L, \"Dimension mismatch\");\n\n#ifdef HAVE_LAPACK\n    dgetrs_(\"N\", &(A->n), &(B->n), A->data, &(A->ld), A->ipiv, \n            B->data, &(B->ld), &info);    \n    if (info != 0)\n        printf(\"Solve failed with error code %d\\\\n\", info);\n\n#else\n    lua_error(L, \"Dense linear algebra libraries not linked\\\\n\");\n#endif\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{64}\\nwdocspar\n\n\n\\subsection{Matrix {\\Tt{}clone\\nwendquote} method}\n\nThe {\\Tt{}matrix{\\_}clone\\nwendquote} function ({\\Tt{}clone\\nwendquote} method) makes a full\ncopy of a matrix 
object.\n\n\\nwenddocs{}\\nwbegincode{65}\\sublabel{NW49APAR-1duChy-D}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-D}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-C}{NW49APAR-1duChy-E}\\nwenddeflinemarkup\nstatic int lua_matrix_clone(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n    int j;\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Too many arguments\");\n\n    B = lua_matrix_create(A->m, A->n);\n    for (j = 1; j <= A->n; ++j)\n        memcpy(&Mij(B,1,j), &Mij(A,1,j), A->m * sizeof(double));\n\n    if (A->ipiv) \\{\n        B->ipiv = calloc(A->m, sizeof(int));\n        memcpy(B->ipiv, A->ipiv, A->m * sizeof(int));\n    \\}\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{66}\\nwdocspar\n\n\\subsection{Matrix {\\Tt{}slice\\nwendquote} method}\n\nThe matrix {\\Tt{}slice\\nwendquote} command returns a copy of a subscripted slice\nof the matrix $A$.\n\n\\nwenddocs{}\\nwbegincode{67}\\sublabel{NW49APAR-1duChy-E}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-E}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-D}{NW49APAR-1duChy-F}\\nwenddeflinemarkup\nstatic int lua_matrix_slice_method(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n    int n = lua_gettop(L);\n    int i1, i2, j1 = 1, j2 = 1;\n\n    \\LA{}get slice subscript arguments~{\\nwtagstyle{}\\subpageref{NW49APAR-1xpjei-1}}\\RA{}\n\n    B = lua_matrix_slice(A,i1,i2,j1,j2);\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{68}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{69}\\sublabel{NW49APAR-1xpjei-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1xpjei-1}}}\\moddef{get slice subscript arguments~{\\nwtagstyle{}\\subpageref{NW49APAR-1xpjei-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-1duChy-E}}\\nwenddeflinemarkup\nif (n != 3 && n != 5)\n    lua_error(L, \"Wrong number of arguments\");\n\nif (!lua_isnumber(L,1) || !lua_isnumber(L,2) ||\n    (n == 5 && (!lua_isnumber(L,3) || !lua_isnumber(L,4))))\n    lua_error(L, \"Subscripts must be numeric\");\n\ni1 = (int) lua_tonumber(L,1);\ni2 = (int) lua_tonumber(L,2);\nif (n == 5) \\{\n    j1 = (int) lua_tonumber(L,3);\n    j2 = (int) lua_tonumber(L,4);\n\\}\n\nif (i1 <= 0 || i2 < i1 || i2 > A->m ||\n    j1 <= 0 || j2 < j1 || j2 > A->n)\n    lua_error(L, \"Bad subscripts\");\n\n\\nwused{\\\\{NW49APAR-1duChy-E}}\\nwendcode{}\\nwbegindocs{70}\\nwdocspar\n\n\\subsection{Matrix {\\Tt{}free\\nwendquote} method}\n\nThe {\\Tt{}matrix{\\_}free\\nwendquote} function (also known as the {\\Tt{}free\\nwendquote} method)\ndeallocates the object memory.  
It should probably be called on\nLua garbage collection, but it isn't yet.\n\n\\nwenddocs{}\\nwbegincode{71}\\sublabel{NW49APAR-1duChy-F}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-F}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-E}{NW49APAR-1duChy-G}\\nwenddeflinemarkup\nstatic int lua_matrix_free(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Too many arguments\");\n\n    lua_matrix_destroy(A);\n\n    lua_settop(L,0);\n    return 0;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{72}\\nwdocspar\n\n\n\\subsection{{\\Tt{}m\\nwendquote} and {\\Tt{}n\\nwendquote} fields}\n\n\\nwenddocs{}\\nwbegincode{73}\\sublabel{NW49APAR-1duChy-G}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-G}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-F}{NW49APAR-1duChy-H}\\nwenddeflinemarkup\nstatic int lua_matrix_m(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_settop(L,0);\n    lua_pushnumber(L, A->m);\n    return 1;\n\\}\n\nstatic int lua_matrix_n(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_settop(L,0);\n    lua_pushnumber(L, A->n);\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{74}\\nwdocspar\n\n\n\\subsection{Method recall}\n\nWhen a matrix object is indexed by a method name string,\nwe return a Lua closure that implements the method.\nSo when the user requests {\\Tt{}A.print\\nwendquote}, for instance,\nthe returned closure object will have {\\Tt{}A\\nwendquote} as its final\nargument when it is called.\n\nOn entry, the Lua stack contains the matrix object\nand the name string.\n\n\\nwenddocs{}\\nwbegincode{75}\\sublabel{NW49APAR-4QyxLE-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-4QyxLE-1}}}\\moddef{static prototypes~{\\nwtagstyle{}\\subpageref{NW49APAR-4QyxLE-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwenddeflinemarkup\nstatic int lua_matrix_getmethod(lua_State* L);\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{76}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{77}\\sublabel{NW49APAR-1duChy-H}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-H}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NW49APAR-1duChy-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-1duChy-G}{\\relax}\\nwenddeflinemarkup\nstatic int lua_matrix_getmethod(lua_State* L)\n\\{\n    const char* name = lua_tostring(L,2);\n\n    lua_pop(L,1);\n    if (strcmp(name, \"print\") == 0)\n        lua_pushcclosure(L, lua_matrix_print, 1);\n    else if (strcmp(name, \"free\") == 0)\n        lua_pushcclosure(L, lua_matrix_free, 1);\n    else if (strcmp(name, \"factor\") == 0)\n        lua_pushcclosure(L, lua_matrix_factor, 1);\n    else if (strcmp(name, \"solve\") == 0)\n        lua_pushcclosure(L, lua_matrix_solve, 1);\n    else if (strcmp(name, \"clone\") == 0)\n        lua_pushcclosure(L, lua_matrix_clone, 1);\n    else if (strcmp(name, \"slice\") == 0)\n        lua_pushcclosure(L, lua_matrix_slice_method, 1);\n    else if (strcmp(name, \"m\") == 0)\n        lua_matrix_m(L);\n    else if (strcmp(name, \"n\") == 0)\n        lua_matrix_n(L);\n    
else\n        lua_error(L, \"Invalid method name\");\n\n    return 1;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{78}\\nwdocspar\n\n\n\\subsection{Public matrix manipulation}\n\n\\nwenddocs{}\\nwbegincode{79}\\sublabel{NW49APAR-nRuDO-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}}\\endmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{\\relax}{NW49APAR-nRuDO-2}\\nwenddeflinemarkup\nlua_matrix_t lua_matrix_create(int m, int n)\n\\{\n    lua_matrix_t C;\n\n    C = calloc(1, sizeof(*C));\n    C->owns_data = 1;\n    C->ld = m;\n    C->m  = m;\n    C->n  = n;\n    C->data = calloc(m*n, sizeof(double));\n    return C;\n\\}\n\nvoid lua_matrix_destroy(lua_matrix_t self)\n\\{\n    if (self->ipiv)\n        free(self->ipiv);\n\n    if (self->owns_data)\n        free(self->data);\n\n    free(self);\n\\}\n\n\\nwalsodefined{\\\\{NW49APAR-nRuDO-2}\\\\{NW49APAR-nRuDO-3}\\\\{NW49APAR-nRuDO-4}}\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{80}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{81}\\sublabel{NW49APAR-nRuDO-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-2}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-nRuDO-1}{NW49APAR-nRuDO-3}\\nwenddeflinemarkup\nlua_matrix_t lua_matrix_slice(lua_matrix_t A, int i1, int i2, int j1, int j2)\n\\{\n    lua_matrix_t C;\n\n    C = calloc(1, sizeof(*C));\n    C->owns_data = 0;\n    C->ld   = A->ld;\n    C->m    = (i2-i1)+1;\n    C->n    = (j2-j1)+1;\n    C->data = &Mij(A,i1,j1);\n    return C;\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{82}\\nwdocspar\n\n\\subsection{Setting and removing vectors}\n\nThe {\\Tt{}lua{\\_}matrix{\\_}push\\nwendquote} function is a thin wrapper around\n{\\Tt{}lua{\\_}pushusertag\\nwendquote}.  Similarly, {\\Tt{}lua{\\_}matrix{\\_}get\\nwendquote} is a\nthin wrapper around {\\Tt{}lua{\\_}touserdata\\nwendquote}.  The only reason we\ndon't want the user to directly use the {\\Tt{}lua{\\_}pushusertag\\nwendquote} and \n{\\Tt{}lua{\\_}touserdata\\nwendquote} functions is that then we would have\nto expose the matrix tag value to the world.  
That wouldn't\nbe a tragedy, but it would be nice to keep it private.\n\n\\nwenddocs{}\\nwbegincode{83}\\sublabel{NW49APAR-nRuDO-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-3}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-nRuDO-2}{NW49APAR-nRuDO-4}\\nwenddeflinemarkup\nvoid lua_matrix_push(lua_State* L, lua_matrix_t matrix)\n\\{\n    lua_pushusertag(L, matrix, lua_matrix_tag);\n\\}\n\nlua_matrix_t lua_matrix_get(lua_State* L, int index)\n\\{\n    if (index > lua_gettop(L))\n        lua_error(L, \"Index out of range\");\n\n    if (lua_tag(L,index) != lua_matrix_tag)\n        lua_error(L, \"Variable is not a matrix\");\n\n    return lua_touserdata(L,index);\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\\nwbegindocs{84}\\nwdocspar\n\n\n\\subsection{Registration functions}\n\n\\nwenddocs{}\\nwbegincode{85}\\sublabel{NW49APAR-nRuDO-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-4}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NW49APAR-nRuDO-1}}}\\plusendmoddef\\nwstartdeflinemarkup\\nwusesondefline{\\\\{NW49APAR-VVNpW-1}}\\nwprevnextdefs{NW49APAR-nRuDO-3}{\\relax}\\nwenddeflinemarkup\nvoid lua_matrix_register(lua_State* L)\n\\{\n    lua_matrix_tag = lua_newtag(L);\n    \\LA{}register functions~{\\nwtagstyle{}\\subpageref{NW49APAR-BzF6M-1}}\\RA{}\n\\}\n\n\\nwused{\\\\{NW49APAR-VVNpW-1}}\\nwendcode{}\n\n\\nwixlogsorted{c}{{call Lua print for buffer}{NW49APAR-2e3CeC-1}{\\nwixd{NW49APAR-2e3CeC-1}\\nwixu{NW49APAR-1duChy-9}}}%\n\\nwixlogsorted{c}{{check summand conformality}{NW49APAR-2op4lt-1}{\\nwixu{NW49APAR-1duChy-4}\\nwixu{NW49APAR-1duChy-5}\\nwixd{NW49APAR-2op4lt-1}}}%\n\\nwixlogsorted{c}{{functions}{NW49APAR-nRuDO-1}{\\nwixu{NW49APAR-VVNpW-1}\\nwixd{NW49APAR-nRuDO-1}\\nwixd{NW49APAR-nRuDO-2}\\nwixd{NW49APAR-nRuDO-3}\\nwixd{NW49APAR-nRuDO-4}}}%\n\\nwixlogsorted{c}{{get \\code{}matrix\\edoc{} parameters}{NW49APAR-JzBPi-1}{\\nwixu{NW49APAR-1duChy-8}\\nwixd{NW49APAR-JzBPi-1}\\nwixd{NW49APAR-JzBPi-2}}}%\n\\nwixlogsorted{c}{{get binary operands}{NW49APAR-1rgxbt-1}{\\nwixu{NW49APAR-1duChy-4}\\nwixu{NW49APAR-1duChy-5}\\nwixu{NW49APAR-1duChy-7}\\nwixd{NW49APAR-1rgxbt-1}}}%\n\\nwixlogsorted{c}{{get slice subscript arguments}{NW49APAR-1xpjei-1}{\\nwixu{NW49APAR-1duChy-E}\\nwixd{NW49APAR-1xpjei-1}}}%\n\\nwixlogsorted{c}{{get unary operand}{NW49APAR-2Wr2g8-1}{\\nwixu{NW49APAR-1duChy-6}\\nwixd{NW49APAR-2Wr2g8-1}}}%\n\\nwixlogsorted{c}{{initialize matrix from table}{NW49APAR-3n1EGq-1}{\\nwixu{NW49APAR-1duChy-8}\\nwixd{NW49APAR-3n1EGq-1}}}%\n\\nwixlogsorted{c}{{macros}{NW49APAR-1VvxMr-1}{\\nwixu{NW49APAR-VVNpW-1}\\nwixd{NW49APAR-1VvxMr-1}}}%\n\\nwixlogsorted{c}{{matrix-lua.c}{NW49APAR-VVNpW-1}{\\nwixd{NW49APAR-VVNpW-1}}}%\n\\nwixlogsorted{c}{{matrix-lua.h}{NW49APAR-2XOU18-1}{\\nwixd{NW49APAR-2XOU18-1}}}%\n\\nwixlogsorted{c}{{register functions}{NW49APAR-BzF6M-1}{\\nwixd{NW49APAR-BzF6M-1}\\nwixd{NW49APAR-BzF6M-2}\\nwixd{NW49APAR-BzF6M-3}\\nwixd{NW49APAR-BzF6M-4}\\nwixd{NW49APAR-BzF6M-5}\\nwixd{NW49APAR-BzF6M-6}\\nwixd{NW49APAR-BzF6M-7}\\nwixd{NW49APAR-BzF6M-8}\\nwixu{NW49APAR-nRuDO-4}}}%\n\\nwixlogsorted{c}{{return \\code{}C\\edoc{}}{NW49APAR-EyHPB-1}{\\nwixu{NW49APAR-1duChy-4}\\nwixu{NW49APAR-1duChy-5}\\nwixu{NW49APAR-1duChy-6}\\nwixu{NW49APAR-1duChy-7}\\nwixd{NW49APAR-EyHPB-1}}}%\n\\nwixlogsorted{c}{{static data}{NW49APAR-IAwJm-1}{\\nwixu{NW49APAR-VVNpW-1}\\nwixd{NW49APAR-IAwJm-1}}}%\n\\nwixlogsorted{c}{{static 
functions}{NW49APAR-1duChy-1}{\\nwixu{NW49APAR-VVNpW-1}\\nwixd{NW49APAR-1duChy-1}\\nwixd{NW49APAR-1duChy-2}\\nwixd{NW49APAR-1duChy-3}\\nwixd{NW49APAR-1duChy-4}\\nwixd{NW49APAR-1duChy-5}\\nwixd{NW49APAR-1duChy-6}\\nwixd{NW49APAR-1duChy-7}\\nwixd{NW49APAR-1duChy-8}\\nwixd{NW49APAR-1duChy-9}\\nwixd{NW49APAR-1duChy-A}\\nwixd{NW49APAR-1duChy-B}\\nwixd{NW49APAR-1duChy-C}\\nwixd{NW49APAR-1duChy-D}\\nwixd{NW49APAR-1duChy-E}\\nwixd{NW49APAR-1duChy-F}\\nwixd{NW49APAR-1duChy-G}\\nwixd{NW49APAR-1duChy-H}}}%\n\\nwixlogsorted{c}{{static prototypes}{NW49APAR-4QyxLE-1}{\\nwixu{NW49APAR-VVNpW-1}\\nwixd{NW49APAR-4QyxLE-1}}}%\n\\nwbegindocs{86}\\nwdocspar\n\\nwenddocs{}\n", "meta": {"hexsha": "364dde803af09a4b4721c021e907f761d88a2f46", "size": 41899, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sugar31/src/tex/matrix-lua.tex", "max_stars_repo_name": "davidgarmire/sugar", "max_stars_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sugar31/src/tex/matrix-lua.tex", "max_issues_repo_name": "davidgarmire/sugar", "max_issues_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sugar31/src/tex/matrix-lua.tex", "max_forks_repo_name": "davidgarmire/sugar", "max_forks_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.1951488423, "max_line_length": 508, "alphanum_fraction": 0.7038831476, "num_tokens": 16398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7662936377487305, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5914587577101297}}
{"text": "\\chapter*{Calculation of the ChiSquare ($\\chi^2$)}\n\\setcounter{chapter}{1}\n\\emph{Get the ChiSquare}\n\n\\section{Introduction}\nThe purpose of the \\emph{ChiSquare} plugin is to calculate the ChiSquare and the reduced ChiSquare for two sets of data.\n \n\\section{Plugin Properties}\nTable \\ref{table:PluginProperties} lists available plugin property names, along with their data type and purpose.\n\n\n\\begin{table}[ht]\n\\centering % used for centering table\n\\begin{tabular}{l l p{7.5cm}} % centered columns (4 columns)\n\nParameter Name & Data Type & Purpose \\\\ [0.5ex] % inserts table \n%heading\n\\hline % inserts single horizontal line\nExperimentalData\t\t& \tTelluriumData   \t& Data representing Experimental data. \\\\\nModelData \t\t\t\t& \tTelluriumData   \t& Data representing Model data. \\\\\nNrOfModelParameters\t\t& \tint   \t\t\t\t& Number of model parameters used to create the model data. \\\\\nChiSquare     \t\t\t& \tdouble\t\t   \t\t& The calculated ChiSquare. \\\\\nReducedChiSquare \t\t& \tdouble\t\t   \t\t& The calculated reduced ChiSquare. \\\\\n\n\\hline %inserts single line\n\\end{tabular}\n\\caption{Plugin Properties} \n\\label{table:PluginProperties} \n\\end{table}\n\n\\section{Plugin Events}\nThis plugin does not use any plugin events.\n\n\\section{The \\texttt{execute()} function}\nThe \\verb|execute()| function will attempt to calculate the ChiSquare, and the reduced ChiSquare, using data supplied by the user. \n\n\\section{Python examples}\n\n\\subsection{Usage of the ChiSquare plugin}\nThe python script below shows how to use the plugin. \n\n\\begin{singlespace}\n\\lstinputlisting[label=chisquare_plugin_header,caption={ChiSquare plugin example.},language=Python]{Examples/telChiSquare.py}\n\\end{singlespace}\n\n", "meta": {"hexsha": "84246dd1406fbae919468f3576ef3bb80b23bfe9", "size": 1658, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "plugins/released/chisquare/docs/chisquare.tex", "max_stars_repo_name": "sys-bio/rrplugins", "max_stars_repo_head_hexsha": "03af6ea70d73462ad88103f1e446dc0c5f3f971c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "plugins/released/chisquare/docs/chisquare.tex", "max_issues_repo_name": "sys-bio/rrplugins", "max_issues_repo_head_hexsha": "03af6ea70d73462ad88103f1e446dc0c5f3f971c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2015-12-02T18:20:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-20T17:13:34.000Z", "max_forks_repo_path": "plugins/released/chisquare/docs/chisquare.tex", "max_forks_repo_name": "sys-bio/telPlugins", "max_forks_repo_head_hexsha": "03af6ea70d73462ad88103f1e446dc0c5f3f971c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2015-01-27T18:53:45.000Z", "max_forks_repo_forks_event_max_datetime": "2015-07-13T17:07:50.000Z", "avg_line_length": 36.0434782609, "max_line_length": 131, "alphanum_fraction": 0.7551266586, "num_tokens": 435, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5914587576123032}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx, amssymb, amsmath}\n\n\\begin{document}\n\n\\title{Extended Gravitational Lens: Thin Approximation Inadequacy}\n\n\\maketitle\n\n\\section{-}\n\nLet $O$, $S$ and $L$ to denote the Observer, Source and Lens respectively. $P$ denotes some arbitrary point on the ray of light. The lens is assumed to be spherically symmetric, neglecting pressure and energy density, the Schwarzschild orbital equation for massless particles (the lensing equation) is then:\n\\begin{equation}\n    \\label{eq1}\n    \\frac{d^2u}{d\\theta^2}+u=3r_gu^2\n\\end{equation}\nwhere $u\\equiv1/r$, the radius $r\\equiv \\overline{LP}$, and $\\theta\\equiv \\angle SLP$.\nThe gravitational radius  \n\\begin{equation}\n    \\label{eq2}\n    r_g(r) \\equiv \\frac{M(r)G}{c^2}\n\\end{equation}\nis a function of $r$, here $c$ is the speed of light and $M(r)$ the lens mass enclosed by a sphere of radius $r$. The angle between $\\vec{SL}$ and the ray tangent is called the \"ray angle\":\n\\begin{equation}\n    \\label{eq3}\n    \\alpha \\equiv \\arctan \\left( \\frac{-u^{-1}u'\\sin \\theta + \\cos \\theta}{-u^{-1}u'\\cos \\theta - \\sin \\theta} \\right) \\;.\n\\end{equation}\n$\\alpha(0)$ and $\\alpha(\\theta_O)$ are the incoming and outgoing angles respectively, $\\theta_O\\equiv \\angle SLO$. We define the impact point $B$ where the straight line from the source tangent to the actual light ray hits the lens plane. The impact parameter is then $b \\equiv \\overline{LB}$. \n\nFor thin lens approximation light is assumed to travel in a straight line from $S$ to $B$, where it is deflected by an angle $\\gamma=4r_g/b$, the mass used to calculate $r_g$ is the lens projected mass in a circle of radius $b$.\n\nCalculating the time of travel is done as follows:\n\\begin{equation}\n    \\label{eq4}\n    \\Delta t_{ray} = \\int_{0}^{\\theta_O} \\left( \\frac{u^{-4}u'^2+u^{-2}}{1-2ur_g} \\right)^{\\frac{1}{2}} {\\rm d}  \\theta \\;,\n\\end{equation}\nwhere the denominator accounts for gravitational time dilation.\n\n\\section{The harmonic solution to the lensing equation}\n\nWe are interested in solving the lensing equation (eq. \\ref{eq1}) inside an isothermal sphere sharply truncated at $R$. The total mass is $M_0$ and the enclosed mass $M(r)=M_0r/R$. Plugging this into the lens equation (eq. \\ref{eq1}) gives:\n\\begin{equation}\n    \\label{eq5}\n    \\frac{d^2u}{d\\theta^2}=\\left( \\frac{3M_0G}{Rc^2} -1 \\right) u \\;.\n\\end{equation}\nThis is the equation of a harmonic oscillator with $m=1$ and\n\\begin{equation}\n    \\label{eq6}\n    k=\\left( 1-\\frac{3M_0G}{Rc^2} \\right) u \\;.\n\\end{equation}\nThe general solution is $u=A\\cos \\left( \\omega \\theta+\\phi \\right)$ with $\\omega=\\sqrt{k/m}$ i.e.\n\\begin{equation}\n    \\label{eq7}\n    u= A \\cos \\left( \\theta\\sqrt{1-\\frac{3M_0G}{Rc^2}} +\\phi \\right)\n\\end{equation}\nwhere $\\phi$ and $A$ are decided by the boundary conditions. We are interested in a symmetric configuration around the lens, in this case $rmin=u(\\pi/2)^{-1}$ is the minimal distance between the ray and the lens, hence the $\\cos$ term has to be maximal at $\\theta=\\pi/2$ \n\\begin{equation}\n    \\label{eq8}\n    \\phi = -\\frac{\\pi}{2}\\sqrt{1-\\frac{3M_0G}{Rc^2}}\n\\end{equation}\nAnd $A=1/rmin$. 
The symmetric solution is then \n\\begin{equation}\n    \\label{eq9}\n    u= \\frac{1}{r_{min}} \\cos \\left( \\sqrt{1-\\frac{3M_0G}{Rc^2}} \\left( \\theta-\\frac{\\pi}{2}\\right) \\right)\\;.\n\\end{equation}\nThe solution is uniquely defined by the total halo mass $M_0$, its radius $R$ and the minimum approach radius $r_{min}$.\n\n\\section{One specific example}\n\nWe compare three solutions to a lensing configuration: analytical (harmonic), numerical, and thin approximation. The configuration is symmetric and cosmologically driven, with lens mass $M_0=2\\times 10^{12} M_s/h$ in a radius $R=200 Kpc/h$. The ray's minimal approach radius is $r_{min}=4 Kpc/h$. We solve analytically inside the halo, and fit it with a numerical solution to the general lensing equation. The numerical solution fits the analytical one very well; it is also defined outside the halo and it preserves the symmetry. We put a source at $r_s = u_{num}(0)^{-1}=3.5477249 Gpc/h$ and run a thin lens ray with the same incoming angle as the numerical one. The impact parameter of this ray is $b=4.00000287 Kpc/h$. To calculate the projected mass inside a cylinder of radius $b$ we first calculate the projected density of the lens:\n\\begin{align}\n    \\rho &= \\frac{   {\\rm d}M(r)   }{   {\\rm d}r   } \\frac{1}{4\\pi r^2}\\\\\n    &= \\frac{M_0}{4\\pi Rr^2}\\\\\n    \\Psi&=2\\int_0^{\\sqrt{R^2-r_p^2}} \\rho(\\sqrt{r_p^2+z^2}) {\\rm d}z \\\\\n    &=\\frac{M_0}{2\\pi R} \\int_0^{\\sqrt{R^2-r_p^2}} \\frac{{\\rm d} z}{r_p^2+z^2} \\\\\n    &=\\left[ \\frac{M_0}{2\\pi R r_p} \\arctan \\frac{z}{r_p} \\right] _0^{\\sqrt{R^2-r_p^2}}\\\\\n    &=\\frac{M_0}{2\\pi R r_p}\\arctan \\sqrt{\\frac{R^2}{r_p^2}-1}\n\\end{align}\nwhere $r_p\\equiv \\sqrt{x^2+y^2}$ is the projected radius. The mass inside a cylinder of radius $b$ is then a simple 1D integral over cylindrical shells:\n\\begin{align}\n    M_{projected}(b) &= \\int_0^b 2\\pi r_p \\Psi(r_p) {\\rm d} r_p \\\\\n    &= \\frac{M_0}{R} \\int_0^b \\arctan \\left( \\sqrt{\\frac{R^2}{r_p^2}-1} \\right) {\\rm d} r_p\\;.\n\\end{align}\nThis has an analytical solution, but we simply evaluate it with a 1D quadrature. For our configuration, $M_{projected}(4 Kpc/h) = 6.24318\\times 10^{10} M_s/h$ (this is 56\\% higher than the enclosed mass in a sphere of the same radius). We run the thin ray with the above configuration and we find that it diverges significantly from the numerical and analytical solutions; see the figures below. To verify that our thin-ray solution itself is good, we compare it to the numerical solution in a different scenario in which the rays miss the halo, see the last figure in this document (the source is on the left). 
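As a sanity check on the projected-mass value quoted above, the quadrature is short enough to sketch in Python (a SciPy sketch of ours, not part of the original computation; it uses the identity $\\arctan\\sqrt{R^2/r_p^2-1}=\\arccos(r_p/R)$, valid for $0 \\leq r_p \\leq R$):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nM0 = 2.0e12   # halo mass [M_s/h]\nR  = 200.0    # truncation radius [Kpc/h]\nb  = 4.0      # impact parameter [Kpc/h]\n\n# integrand of M_projected(b); arccos(r/R) == arctan(sqrt(R^2/r^2 - 1))\nintegral, err = quad(lambda rp: np.arccos(rp / R), 0.0, b)\nM_proj = M0 / R * integral\nprint(M_proj)  # ~6.243e10 M_s/h, matching the value quoted above\n\\end{verbatim}\nReturning to the missing-the-lens test mentioned above: 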
This configuration consists of a black hole lens of mass $2\\times 10^{12}M_s/h$, a source at a distance of $3.3 Gpc/h$ and an impact parameter $b=20 Kpc/h$ (admittedly this is less cosmologically driven, but it demonstrates that the thin lens approximation matches our numerics for non-extended configurations).\n\n\n\\includegraphics[width=300pt]{zoom_1.png}\n\n\\includegraphics[width=300pt]{zoom_2.png}\n\n\\includegraphics[width=300pt]{zoom_3.png}\n\n\\includegraphics[width=300pt]{u1.png}\n\n\\includegraphics[width=300pt]{u2.png} \n\n\\includegraphics[width=300pt]{missing_the_lens.png} \n\n\\includegraphics[width=300pt]{missing_the_lens_zoom.png} \n\n\\end{document}\n", "meta": {"hexsha": "d88df0f7e14297e65614350ea6f6ffeabd35d8cc", "size": 6303, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/lensing.tex", "max_stars_repo_name": "skariel/Lensing", "max_stars_repo_head_hexsha": "4c90e61b1694c393f665a04dd0659999a1c8e001", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/lensing.tex", "max_issues_repo_name": "skariel/Lensing", "max_issues_repo_head_hexsha": "4c90e61b1694c393f665a04dd0659999a1c8e001", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/lensing.tex", "max_forks_repo_name": "skariel/Lensing", "max_forks_repo_head_hexsha": "4c90e61b1694c393f665a04dd0659999a1c8e001", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.7941176471, "max_line_length": 908, "alphanum_fraction": 0.7071235919, "num_tokens": 2042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5914587535906238}}
{"text": "% Created 2015-11-13 Fri 09:36\n\\documentclass{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{soul}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{latexsym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage[margin=18mm]{geometry}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{subfigure}\n\\usepackage{parskip}\n\\usepackage{standalone}\n\\usepackage{tikz,pgf,pgfplots}\n\\usetikzlibrary{decorations.pathmorphing,patterns}\n\\usetikzlibrary{arrows,snakes,backgrounds,patterns,matrix,shapes,fit,calc,shadows,plotmarks,decorations.markings,datavisualization,datavisualization.formats.functions,intersections,external}\n\\usetikzlibrary{decorations.pathmorphing,patterns}\n\\pgfplotsset{compat=1.9}\n\\newcommand*{\\mexp}[1]{\\ensuremath{\\mathrm{e}^{#1}}}\n\\newcommand*{\\laplace}[1]{\\ensuremath{\\mathcal{L} \\{#1\\}}}\n\\newcommand*{\\laplaceinv}[1]{\\ensuremath{\\mathcal{L}^{-1} \\{#1\\}}}\n\\newcommand*{\\realpart}[1]{\\ensuremath{\\operatorname{Re}(#1)}}\n\\newcommand*{\\impart}[1]{\\ensuremath{\\operatorname{Im}(#1)}}\n\\newcommand*{\\vsp}[1]{\\rule{0pt}{#1}}\n\\newcommand*{\\tderiv}[1]{\\ensuremath{\\frac{d^{#1}}{dt^{n}}}}\n\\newcommand*{\\bbm}{\\begin{bmatrix}}\n\\newcommand*{\\ebm}{\\end{bmatrix}}\n\\newcommand*{\\obsmatrix}{\\mathcal{O}}\n\\newcommand*{\\contrmatrix}{\\mathcal{C}}\n\\newcommand*{\\cwh}{\\ensuremath{\\cos \\omega h}}\n\\newcommand*{\\swh}{\\ensuremath{\\sin \\omega h}}\n\\providecommand{\\alert}[1]{\\textbf{#1}}\n\n\\title{Computerized control - homework 2}\n\\author{Kjartan Halvorsen}\n\\date{Due 2015-09-03}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs Org-mode version 7.9.3f}}\n\n\\begin{document}\n\n\\maketitle\n\n\n\\section{Exercises}\n\\label{sec-1}\n\\subsection{Sample the continuous-time transfer function}\n\\label{sec-1-1}\n\n   The harmonic oscillator from Homework 1\n\\begin{align*}\n\\dot{x} &= \\begin{bmatrix} 0 & \\omega\\\\-\\omega & 0 \\end{bmatrix} x + \\begin{bmatrix}1\\\\0\\end{bmatrix} u\\\\\ny &= \\begin{bmatrix} 1 & 0 \\end{bmatrix} x.\n\\end{align*} \nhas the transfer function \n\\[ G(s) = C(sI-A)^{-1}B + D = \\frac{s}{s^2 + \\omega^2}. \\]\nSampling the state space system with zero-order-hold gives the discrete-time state space system ($x(kh) = x(k)$)\n\\begin{align*}\nx(k+1) &= \\bbm \\cos \\omega h & \\sin \\omega h\\\\ -\\sin \\omega h & \\cos \\omega h \\ebm x(k) + \n          \\frac{1}{\\omega} \\bbm \\sin \\omega h \\\\ \\cos \\omega h - 1 \\ebm u(k), \\\\\ny(k) &= \\bbm 1 & 0 \\ebm x(k).\n\\end{align*}\n\n\\begin{enumerate}\n\\item \\textbf{Compute the pulse-transfer function} for the discrete-time system from the state-space representation using the expression \\[ H(z) = C(zI-\\Phi)^{-1}\\Gamma. \\]\n\\item \\textbf{Compute the pulse-transfer function} by sampling the transfer function $G(s)$.\n\\end{enumerate}\n\\subsection{Simulation of the continuous- and discrete-time harmonic oscillator}\n\\label{sec-1-2}\n\\subsubsection{Simulate step responses}\n\\label{sec-1-2-1}\n\nUse matlab's control toolbox or the \\href{http://python-control.sourceforge.net/}{python control module}  to simulate the systems. Use $\\omega=1$. \n\nFirst, define the continuous-time system \\texttt{sys\\_c} and the sampled system \\texttt{sys\\_d} using the \\texttt{ss} function. 
The example below uses the Python control toolbox. Using the matlab control toolbox is very similar.\n\n\\begin{verbatim}\nimport numpy as np\nimport control.matlab as cm\nimport matplotlib.pyplot as plt  # note: matplotlib.pyplot, not matplotlib.plot\n\nomega = 1.0\nh = omega / 10  # sampling period (h = 0.1 for omega = 1)\n\n# continuous-time harmonic oscillator\nA = np.array([[0, omega], [-omega, 0]])\nB = np.array([[1],[0]])\nC = np.array([[1, 0]])\nD = np.array([[0]])\nsys_c = cm.ss(A,B,C,D)\n\n# zero-order-hold sampled system\nwh = omega*h\nF = np.array([[np.cos(wh), np.sin(wh)], [-np.sin(wh), np.cos(wh)]])\nG = 1.0/omega* np.array([[np.sin(wh)], [np.cos(wh)-1]])\nsys_d = cm.ss(F,G,C,D, h)\n\n# step responses of both systems\nTc = np.linspace(0,4/omega,200)\n(yc,tc) = cm.step(sys_c, Tc)\nTd = h*np.arange(40)\n(yd,td) = cm.step(sys_d, Td)\n\nplt.plot(tc,yc)\nplt.plot(td,yd, '*')  # discrete samples marked with asterisks\nplt.show()\n\\end{verbatim}\n\n\\textbf{Verify that the step response of the discrete-time system is equal to that of the continuous-time system at the sampling instants. Explain why this is so!}\n\\subsubsection{Sampling the system with help of the computer}\n\\label{sec-1-2-2}\n\nUse the function \\texttt{c2d} to sample your continuous-time system \\texttt{sys\\_c}. \\textbf{Verify that you get the same discrete-time system as your} \\texttt{sys\\_d} \\textbf{above}. \\emph{Hint}: Look at the system matrices returned by \\texttt{ssdata}. \n\\subsubsection{Compute the discrete step response yourself}\n\\label{sec-1-2-3}\n\n    Write some lines of code that solve the difference equation\n    \\begin{align*}\n    x(k+1) &= \\Phi x(k) + \\Gamma u(k)\\\\\n    y &= Cx(k)\n    \\end{align*}\n\ngiven an initial state $x(0)=x_0$ and an input sequence $\\{u(k)\\}$. Use a step signal ($u(k)=1$) and verify that your solution is the same as when using the \\texttt{step} function.\n \n\n\\end{document}\n", "meta": {"hexsha": "893a4e7519ce61b64ff3d732fed5efc4d43da5db", "size": 4894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "homework/historical/hw2-fall15.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "homework/historical/hw2-fall15.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "homework/historical/hw2-fall15.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 36.2518518519, "max_line_length": 254, "alphanum_fraction": 0.7078054761, "num_tokens": 1647, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5914587535906238}}
{"text": "\\section{Refinement Type Driven Synthesis}\n\\label{sec:rtypes}\n\nIn order to find an initial value for gradient descent, we could use refinement types~\\cite{freeman1991refinement}.\nIn this section we explore a possible optimization for selecting an initial DSP program for gradient descent.\nThis has not yet been implemented, but we present the theory behind the approach.\n\n\\subsection{Refinement Types for DSP}\n\\label{sec:rtypeSearch}\nRefinement types are a way of giving an abstract description of the behavior of a function. \nFor example, using a similar syntax to the refinement type system for Haskell, LiquidHaskell~\\cite{vazou2014refinement}, given the function \n%\n\\texttt{map :: [a]} $\\to$ \\texttt{[b]}\n%\nwe can further provide a refinement types that captures some properties of the behavior of this function over values:\n%\n\\begin{align*}\n\\texttt{f :: xs:[a]} \\to\\ \\texttt{ys:[b]}\\ \\mid \\texttt{ length xs == length ys}\n\\end{align*}\n\n\\noindent In this case, the refinement type describes that the length of the lists are still equal after applying the \\texttt{map} function.\n\nIn a similar style for DSP, we can write predicates about the filters available to us during synthesis. \nFor example, a low-pass filter could be described as the refinement type that says the amplitude of the frequencies greater than the threshold frequency have decreased in the output Audio.\nFor brevity in notation, we will only treat a single time slice from the waterfall plot here, but the concept generalizes when quantified over all time slices as well.\n%\n\\begin{align*}\n  &\\texttt{lpf :: t:Float} \\to  \\texttt{xs:Audio} \\to\\ \\texttt{ys:Audio}\\ \\mid \\\\\n  &\\forall f_1 \\in  \\texttt{spectrogram(xs)}.\\ \\forall f_2 \\in \\texttt{spectrogram(ys)}. \\\\\n  &(f_1 > \\texttt{t}  \\land  f_2 > \\texttt{t}  \\land f_1 == f_2) \\implies \\texttt{amp}(f_1) > \\texttt{amp}(f_2)\n\\end{align*}\n\nWhere \\texttt{t} represents the level at which the lowpass filter is applied, \\texttt{spectrogram} represents the spectrogram of the sound sample, $f_i$ represents a frequency, and \\texttt{amp()} represents the amplitude of the frequency. \n\nAdditionally, a high-pass filter could be described as the refinement type that says the amplitude of the frequencies less than the threshold frequency have decreased in the output Audio.\n%\n\\begin{align*}\n  &\\texttt{hpf :: t:Float} \\to\\ \\texttt{xs:Audio} \\to\\ \\texttt{ys:Audio}\\ \\mid \\\\\n  &(\\forall f_1 \\in \\texttt{spectrogram(xs)}. \\forall f_2 \\in \\texttt{spectrogram(ys)}). 
\\\\\n  &(f_1 < \\texttt{t} \\land f_2 < \\texttt{t} \\land f_1 == f_2) \\implies \\texttt{amp(}f_1\\texttt{)} > \\texttt{amp(} f_2 \\texttt{)} \n\\end{align*}\n\nNotice that in these refinement types, we only need to calculate the spectrogram for the input and output statically.\nAs opposed to the current technique of generating filters, applying them, and the calculating the aural distance, this approach is relatively static.\nWe could quickly check many threshold values over the input and output examples.\nThis will only yield a rough boolean estimation of whether this threshold should even be considered, but this is enough information for us to select an initial program to pass to our gradient descent algorithm.\nAs the search for an initial filter takes roughly 40 seconds out of our current benchmarks, this could dramatical increase the speed of synthesis.\n\n\\subsection{Combination of Search Algorithms}\n\nBeyond just using the refinement types to select an initial program for gradient descent, we can use refinement types in as part of the main search strategy as well.\nWe briefly describe here a way to use refinement types in combination with gradient descent to handle more complex combinations of DSP filters.\nSo far in our work (\\textit{c.f.} Sec.~\\ref{sec:eval}) we have synthesized filters with a fixed form - all our solutions use a single low-pass filter, and a single high-pass filter.\nIdeally, we would be able to synthesize solutions that use any arbitrary combination of filters.\nIn order to do this, we would need an iterative solution that can find one filter at a time.\n\nIn this approach, given input example \\texttt{x:Audio} and output example \\texttt{y:Audio}, we would first find a filter \\synthFilter' using the approach described in Sec.~\\ref{sec:search} and Sec.~\\ref{sec:rtypeSearch}.\nWe will say that this \\synthFilter' has the refinement type $r_1$.\nHowever, this filter might not return a satisfactory result.\nWe could then continue the search using the output of \\synthFilter'\\texttt{(x)} as the new input example, \\texttt{z:Audio}.\nNow the synthesis task is to find a filter \\synthFilter'' (with refinement type $r_2$) using input example \\texttt{z:Audio} and output example \\texttt{y:Audio}.\nEssentially, \\synthFilter' has gotten us the first half of the way, and \\synthFilter'' will get us the second half of the way.\nWith this, we can start to use more information rich refinement types, such as below:\n%\n\\begin{align*}\n\\synthFilter \\texttt{:: } & \\texttt{x:Audio} \\to\\ \\texttt{y:Audio}\\ \\mid \\\\\n   & \\exists \\ \\texttt{z:Audio}.\\ r_1\\texttt{(x,z)} \\land r_2\\texttt{(z,y)}\n\\end{align*}\n\n", "meta": {"hexsha": "7f9a2a31e17526766a2047e0d15329a281d773ac", "size": 5094, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/FARM-18/secs/rtypes.tex", "max_stars_repo_name": "Yale-OMI/DSP-PBE", "max_stars_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-03T02:36:39.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-03T02:36:39.000Z", "max_issues_repo_path": "papers/FARM-18/secs/rtypes.tex", "max_issues_repo_name": "Yale-OMI/DSP-PBE", "max_issues_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2018-11-16T21:50:44.000Z", "max_issues_repo_issues_event_max_datetime": 
"2018-12-16T18:57:19.000Z", "max_forks_repo_path": "papers/FARM-18/secs/rtypes.tex", "max_forks_repo_name": "Yale-OMI/DSP-PBE", "max_forks_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.7714285714, "max_line_length": 239, "alphanum_fraction": 0.7614840989, "num_tokens": 1340, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127492339909, "lm_q2_score": 0.6926419894793248, "lm_q1q2_score": 0.5914558254711911}}
{"text": "\\chapter{Neural Networks: Representation}\n\\label{chap:neural_net_repr}\n\nLet's start by discussing the motivation for neural networks. We already have seen and coded two powerful machine learning algorithms, so we do we need another? \n\n\\section{Non-linear Hypotheses}\nIf we have a fairly messy dataset with three terms, $x_1$, $x_2$, and $x_3$, we can classify them using logistic regression, but we'll probably need to introduce polynomial terms to get an accurate classifier. This would give us a hypothesis in the following form:\n$$\nh_\\theta\\left(x\\right) = g\\left(\\theta_0, \\theta_1 x_1^2 + \\theta_2 x_1 x_2 + \\theta_3 x_1 x_3 + \\theta_4 x_2^2 + \\theta_5 x_2 x_3 + \\theta_6 x_3^2\\right)\n$$\n\nSimply by including quadratic terms, we created six features. We can determine this number of features mathematically from combinatorics, and we can model it after sampling with replacement:\n\\begin{equation}\n\\text{num. quadratic features } = \\binom{n+k-1}{k} = \\frac{\\left( n + k - 1\\right) !}{k! \\left(n-1\\right)!} = \\frac{\\left(3 + 2 - 1\\right)!}{2! \\cdot \\left(3-1\\right)!} = \\frac{4!}{4} = 6\n\\end{equation}\n\nIf we think back to our housing example, and want to perform classification instead of regression using 100 features, that would give us 5050 polynomial terms to include, in addition to the 100 linear terms. We can approximate the growth of the number of new features we get with all quadratic terms with $\\mathcal{O}\\left(n^2 / 2\\right)$. If we wanted to include cubic terms in our hypothesis too, the features would grow asymptotically as $\\mathcal{O}\\left(n^3\\right)$. Since the number of features grows so rapidly, the number of quadratic and cubic features very quickly becomes impractical. \n\nConsider a collection of $50\\times 50$ pixel black-and-white photograph, where we want to determine which photographs are of cars. Then the length of our feature vector is $2500$\\footnote{If we were using RGB values, this would be $7500$.}, since we have $2500$ individual pixels. Each features here represents the brightness of the pixel. Now if we want to include quadratic features, we have approximately $2500^2 / 2 = 3,125,000$ features. \n\n\\section{Neurons and the Brain}\nNeural networks originated when people thought to build algorithms to mimic how the human brain learns. They were popular in the 1980s, but somewhat fell out of use in the 90s; however, there has been a pretty big surge in neural network use lately due to the massive advances in computer hardware and processing speed. \n\nWhile it might seem like the human brain learns different things in different brain regions, there is a hypothesis that the brain only uses a single learning algorithm for all its different functions. This was motivated by an experiment where scientists rewired the optical nerve to the auditory cortex in animals, and the auditory cortex actually learned to see. This was repeated with other areas of the brain as well. The principle behind this is called neuroplasticity.\n\n\\section{Model Representation I}\nNeural networks were developed to simulate neurons and networks of neurons in the brain. Very simplistically, the neuron takes inputs via the dendrites as electrical inputs (called spikes) and then channels the output via the axon. \n\nFor an artificial neural network, we'll use a very simple model of what a neuron does, we'll model a neuron as a logistic unit. In our model, our inputs are the input features $x_1$, $x_2$, etc. 
and the output is the result of our hypothesis function. Just like with logistic regression, we have an input vector $\\vec{x}$ and a parameter vector $\\vec{\\theta}$\n$$\n\\vec{x} = \\left[\\begin{array}{c}x_0 \\\\ x_1 \\\\ x_2 \\\\ x_3 \\end{array}\\right] ~~\\mbox{ \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; }~~ \\vec{\\theta} = \\left[\\begin{array}{c}\\theta_0 \\\\ \\theta_1 \\\\ \\theta_2 \\\\ \\theta_3 \\end{array}\\right]\n$$\nwhere $x_0 = 1$ is the bias term. When representing neural networks, we always have the $\\theta_0$ bias term, but we sometimes omit it for notational convenience. Additionally, when representing neural networks, we'll typically use $3$ features, though in reality the number of features is a parameter of the problem. \n\n$$\n\\left[\\begin{array}{c} x_0 \\\\ x_1 \\\\ x_2 \\\\ x_3 \\end{array}\\right] \\to \\left[ {\\;\\;} \\right] \\to h_\\theta\\left(x\\right)\n$$\n\nWe use the same logistic function in the hypothesis as logistic regression. However, in neural networks, it is often called the sigmoid function, or the sigmoid/logistic activation function. \n\\begin{equation}\n\\frac{1}{1 + e^{-\\theta^{{}^\\intercal} x}}\n\\end{equation}\nWe sometimes call the $\\theta$ parameters \\textbf{weights} for neural networks, as is traditional in the neural network literature, so now we might refer to $\\vec{\\theta}$ as either parameters or weights. Now, let's look at a very simple model of a neural network. The first layer, $\\vec{x}$, is called the \\textbf{input layer}. The output of the hypothesis function is called the \\textbf{output layer}, which gives us the final value for the hypothesis. In between the input layer and the final layer, there are one or more hidden layers.\n\\begin{figure}[h] %  figure placement: here, top, bottom, or page\n\t\\centering\n\t\\graphicspath{{./Figures/}} %Use this to import an image from a subfolder.\n\t\\includegraphics[scale=0.8]{nn_repr_3_layer_neural_netw.pdf} \n\t\\caption[]{A sample artificial neural network, with three inputs and one hidden layer.}\n\t\\label{nn_repr_3_layer_neural_netw.pdf}\n\\end{figure}\nThe hidden layer nodes are labeled $a_1^{(2)}$, $a_2^{(2)}$, etc. and called activation units, where $a_i^{(j)}$ is the activation of unit $i$ in layer $j$. The matrix $\\Theta^{(j)}$ is the matrix of weights controlling the function mapping from layer $j$ to layer $j + 1$. Mathematically, we might represent this as\n$$\n\\left[\\begin{array}{c} x_0 \\\\ x_1 \\\\ x_2 \\\\ x_3 \\end{array}\\right] \\to \\left[\\begin{array}{c} a_1^{(2)} \\\\ a_2^{(2)} \\\\ a_3^{(2)} \\end{array}\\right] \\to h_\\Theta\\left(x\\right)\n$$\n\nNow, let's break out the computations that are represented by this diagram\n\\begin{subequations}\n\\begin{align}\na_1^{(2)} &= g\\left(\t\\Theta_{10}^{(1)} x_0 + \\Theta_{11}^{(1)} x_1 + \\Theta_{12}^{(1)} x_2 + \\Theta_{13}^{(1)} x_3   \t\\right) \\\\\na_2^{(2)} &= g\\left(\t\\Theta_{20}^{(1)} x_0 + \\Theta_{21}^{(1)} x_1 + \\Theta_{22}^{(1)} x_2 + \\Theta_{23}^{(1)} x_3   \t\\right) \\\\\na_3^{(2)} &= g\\left(\t\\Theta_{30}^{(1)} x_0 + \\Theta_{31}^{(1)} x_1 + \\Theta_{32}^{(1)} x_2 + \\Theta_{33}^{(1)} x_3   \t\\right) \\\\\nh_\\Theta\\left(x\\right) = a_1^{(3)} &= g\\left(\t\\Theta_{10}^{(2)} a_0^{(2)} + \\Theta_{11}^{(2)} a_1^{(2)} + \\Theta_{12}^{(2)} a_2^{(2)} + \\Theta_{13}^{(2)} a_3^{(2)}   \t\\right) \n\\end{align}\n\\label{chapnnrepr-sectmodelrepr1-definehiddenlayeractivations}\n\\end{subequations}\nThis is saying that we compute our activation nodes using a $3\\times 4$ matrix of parameters. 
We apply each row of parameters to our inputs to obtain the value for one activation node. Our hypothesis output is the sigmoid function applied to the sum of the values from the activation nodes, which have been multiplied by yet another parameter matrix, $\\Theta^{(2)}$, containing the weights for our second layer of nodes. \n\nMore generally, the dimension of the matrix of weights $\\Theta^{(j)}$ is given by the following: if a network has $s_j$ units in layer $j$, and $s_{j+1}$ units in layer $j+1$, then $\\Theta^{(j)}$ will have dimensions \n\\begin{equation}\n\\dim \\Theta^{(j)} = \\left(s_{j+1}\\right)\\times\\left(s_j + 1\\right)\n\\end{equation}\nThe $+1$ for layer $j$ comes from the bias nodes, $x_0$ and $\\Theta_0^{(j)}$, and it's only applied to the input nodes since the output of a layer doesn't include a bias node. \n\nFor example, if layer one has $2$ input nodes and layer two has $4$ activation nodes, then $\\Theta^{(1)}$ will be a $4\\times 3$ matrix, since $s_j = s_1 = 2$ and $s_{j+1} = s_2 = 4$.\n\n\\section{Model Representation II}\nNow, we're going to go through the neural network model again, but this time with a vectorized implementation. We begin by defining a new variable $z_k^{(j)}$ that encompasses the parameters inside of our sigmoid function $g$. As such, we can now rewrite equations \\ref{chapnnrepr-sectmodelrepr1-definehiddenlayeractivations} as\n\\begin{align*}\na_1^{(2)} &= g\\left(\t\\Theta_{10}^{(1)} x_0 + \\Theta_{11}^{(1)} x_1 + \\Theta_{12}^{(1)} x_2 + \\Theta_{13}^{(1)} x_3   \t\\right) & &\\implies &  a_1^{(2)} &= g\\left(z_1^{(2)}\\right) \\\\\na_2^{(2)} &= g\\left(\t\\Theta_{20}^{(1)} x_0 + \\Theta_{21}^{(1)} x_1 + \\Theta_{22}^{(1)} x_2 + \\Theta_{23}^{(1)} x_3   \t\\right) & &\\implies &  a_2^{(2)} &= g\\left(z_2^{(2)}\\right) \\\\\na_3^{(2)} &= g\\left(\t\\Theta_{30}^{(1)} x_0 + \\Theta_{31}^{(1)} x_1 + \\Theta_{32}^{(1)} x_2 + \\Theta_{33}^{(1)} x_3   \t\\right) & &\\implies &  a_3^{(2)} &= g\\left(z_3^{(2)}\\right)\n\\end{align*}\nSo the $z$ values are just a weighted linear combination of the input values $x_0$, $x_1$, etc. going to a particular neuron. In other words, for layer $j=2$ and node $k$, \n\\begin{equation}\nz_k^{(2)} = \\Theta_{k, 0}^{(1)} x_0 + \\Theta_{k, 1}^{(1)} x_1 + \\cdots + \\Theta_{k, n}^{(1)} x_n\n\\end{equation}\nThe vector representations of $x$ and $z^{(j)}$ are\n$$\nx = \\left[\\begin{array}{c} x_0 \\\\ x_1 \\\\ x_2 \\\\ x_3 \\end{array}\\right] ~~\\mbox{ \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; }~~ z^{(j)} = \\left[\\begin{array}{c} z_1^{(j)} \\\\ z_2^{(j)} \\\\ z_3^{(j)} \\end{array}\\right]\n$$\nFrom these vectors, we have $z^{(2)} = \\Theta^{(1)} x$ and $a^{(2)} = g\\left(z^{(2)}\\right)$, where $z^{(2)} \\in \\mathbb{R}^3$ and $a^{(2)} \\in \\mathbb{R}^3$. We define $x$ to be $a^{(1)}$, which makes sense because $x$ is our input vector and $a^{(1)}$ implies that we're looking at our first layer, which is the input layer. Then, we can write the general definition for $z$ as\n\\begin{equation}\nz^{(j)} = \\Theta^{(j-1)} a^{(j-1)}\n\\end{equation}\nHere, we are multiplying our matrix $\\Theta^{(j-1)}$, with dimensions $s_j \\times (n+1)$, by our vector $a^{(j-1)}$, with length $(n+1)$. This yields our vector $z^{(j)}$ with length $s_j$.\\footnote{Recall that $s_j$ is the number of activation nodes.} \n\nFrom this, we can create a vector of our activation nodes for layer $j$ as \n\\begin{equation}\na^{(j)} = g\\left(z^{(j)}\\right)\n\\end{equation}\nwhere the sigmoid function $g$ is applied element-wise to $z^{(j)}$. 
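To make the vectorized computation concrete, here is a minimal NumPy sketch (ours, not part of the original notes; the helper name \\texttt{forward\\_layer} and the random weights are illustrative assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef g(z):\n    # sigmoid activation, applied element-wise\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef forward_layer(Theta, a_prev):\n    # One propagation step: a^(j) = g(Theta^(j-1) a^(j-1)).\n    # Theta has shape (s_{j+1}, s_j + 1); a_prev excludes the bias unit.\n    a_prev = np.concatenate(([1.0], a_prev))  # prepend bias a_0 = 1\n    return g(Theta @ a_prev)\n\n# hypothetical 3-input, 3-hidden-unit, 1-output network\nrng = np.random.default_rng(0)\nTheta1 = rng.normal(size=(3, 4))  # maps layer 1 (3 inputs + bias) to layer 2\nTheta2 = rng.normal(size=(1, 4))  # maps layer 2 (3 units + bias) to the output\nx = np.array([0.5, -1.2, 2.0])\nh = forward_layer(Theta2, forward_layer(Theta1, x))  # h_Theta(x)\n\\end{verbatim}\n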
Next, to get our hypothesis, we need to add a bias term to layer $j=2$: $a_0^{(2)} = 1$. In fact, we can generalize going forward that we will need to add bias terms, and they'll all equal one: $a_0^{(j)} = 1$. Now, we have $a^{(2)} \\in \\mathbb{R}^4$, since we just added the bias term to the previously length-three vector. Now, we can compute \n$$\nz^{(3)} = \\Theta^{(2)} a^{(2)}\n$$\nand \n$$\nh_\\Theta\\left(x\\right) = a^{(3)} = g\\left(z^{(3)}\\right)\n$$\n\nThis process for computing $h_\\Theta\\left(x\\right)$ is called \\textbf{forward propagation}, because we start off with the activations of the input units, then forward propagate that to compute the activations of the hidden layer, then again forward propagate that to compute the activations of the output layer. \n\nLet's step back for a minute. What we're doing here is very similar to logistic regression, though it might not seem like it. Previously, we had the input features feeding directly into the logistic regression; now instead, we have the nodes from layer $j=2$ (the hidden layer) feeding into the logistic regression. However, those nodes $a_k^{(2)}$ are themselves learned from the input data.\n\nWe've been specifically talking about the neural network architecture described in Figure \\ref{nn_repr_3_layer_neural_netw.pdf}, but there can be other neural network architectures too. Consider the example shown in Figure \\ref{nn_repr_4_layer_neural_netw.pdf}.\n\\begin{figure}[h] %  figure placement: here, top, bottom, or page\n\t\\centering\n\t\\graphicspath{{./Figures/}} %Use this to import an image from a subfolder.\n\t\\includegraphics[scale=1]{nn_repr_4_layer_neural_netw.pdf} \n\t\\caption[]{A sample artificial neural network, with three inputs and two hidden layers.}\n\t\\label{nn_repr_4_layer_neural_netw.pdf}\n\\end{figure}\nHere, we have the same input layer, but there are two hidden layers. The first hidden layer has three hidden units, which are computed as some complex function of the input layer. The second hidden layer can take the first hidden layer's features and compute even more complex features, so the output layer can work with very complex features. \n\n\n\\section{Examples and Intuitions}\nLet's say we have inputs $x_1, x_2 \\in \\{0, 1\\}$. In this case, our target label is $y = x_1 \\text{ AND } x_2$. This is a logical \\textit{and}. Can we make a neural network that can recreate this \\textit{and} operator? The graph of our function will look something like this\n$$\n\\left[\\begin{array}{c} x_0 \\\\ x_1 \\\\ x_2 \\end{array}\\right] \\to \\left[g\\left(z^{(2)}\\right)\\right] \\to h_\\Theta\\left(z\\right)\n$$\nwhere $x_0 = 1$ is our bias variable. 
For this example, let's define our first $\\Theta^{(1)}$ matrix as\n$$\n\\Theta^{(1)} = \\left[\\begin{array}{ccc}\\Theta_{1,0}^{(1)} & \\Theta_{1,1}^{(1)} & \\Theta_{1,2}^{(1)} \\end{array}\\right] = \\left[\\begin{array}{ccc}-30 & 20 & 20 \\end{array}\\right]\n$$\nThis means our hypothesis is given by \n$$\nh_\\Theta\\left(x\\right) = g\\left(\t-30 + 20x_1 + 20x_2\t\\right)\n$$\nLet's figure out what our hypothesis evaluates to for different combinations of $x_1$ and $x_2$\\footnote{Keep in mind that the sigmoid function evaluates to about $0.99$ for an input of $4.6$, and about $0.01$ for an input value of $-4.6$.}\n\n\\begin{center}\n\\begin{tabular}{c c | r}\n$x_1$ & $x_2$ & $h_\\Theta\\left(x\\right)$ \\\\\n\\hline\n$0$ & $0$ & $g\\left(-30\\right) \\approx 0$ \\\\\n$0$ & $1$ & $g\\left(-10\\right) \\approx 0$ \\\\\n$1$ & $0$ & $g\\left(-10\\right) \\approx 0$ \\\\\n$1$ & $1$ & $g\\left(10\\right) \\approx 1$\n\\end{tabular}\n\\end{center}\n\nThis is exactly the truth table for the logical \\textit{and}, so $h_\\Theta\\left(x\\right) \\approx x_1 \\text{ AND } x_2$. Using a small neural network, we have just constructed one of the most fundamental operations in computing: the \\textit{and} gate. \n\n\\subsection{Building Logical Gates Using Neural Networks}\nWe are also able to build neural networks to simulate all other logical gates. Let's start with a super-simple example. If we have a single input variable $x_1$, let's use a neural network to build the logical \\textit{not} gate. We could do this with \n$$\n\\Theta^{(1)} = \\left[\\begin{array}{cc}\\Theta_{1,0}^{(1)} & \\Theta_{1,1}^{(1)} \\end{array}\\right] = \\left[\\begin{array}{cc}10 & -20 \\end{array}\\right]\n$$\ngiving us a hypothesis of\n$$\nh_\\Theta\\left(x\\right) = g\\left(10 - 20x\\right)\n$$\nIf we fill out the table of values for this, we get\n\\begin{center}\n\\begin{tabular}{c | r}\n$x_1$ & $h_\\Theta\\left(x\\right)$ \\\\\n\\hline\n$0$ & $g\\left(10\\right) \\approx 1$ \\\\\n$1$ & $g\\left(-10\\right) \\approx 0$\n\\end{tabular}\n\\end{center}\nAs a reminder, here is a truth table for some additional logic gates.\n\\begin{center}\n\\begin{tabular}{| c | c | c | c | c | c | c | c |} \\hline\n\\multicolumn{2}{| c |}{\\textbf{Input}} & \\multicolumn{6}{ c |}{\\textbf{Output}} \\\\ \\hline\nA & B & A and B & A or B & A nand B & A nor B & A xor B & A xnor B \\\\ \\hline \\hline\n0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\\\ \\hline\n0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\\\ \\hline\n1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\\\ \\hline\n1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\nWe can similarly construct $\\Theta$ for the \\textit{or} gate as\n$$\n\\Theta^{(1)} = \\left[\\begin{array}{ccc}\\Theta_{1,0}^{(1)} & \\Theta_{1,1}^{(1)} & \\Theta_{1,2}^{(1)} \\end{array}\\right] = \\left[\\begin{array}{ccc}-10 & 20 & 20 \\end{array}\\right]\n$$\nand $\\Theta$ for the \\textit{nor} gate as\n$$\n\\Theta^{(1)} = \\left[\\begin{array}{ccc}\\Theta_{1,0}^{(1)} & \\Theta_{1,1}^{(1)} & \\Theta_{1,2}^{(1)} \\end{array}\\right] = \\left[\\begin{array}{ccc}10 & -20 & -20 \\end{array}\\right]\n$$\n\n\n\\subsection{Logical XNOR Gate}\nHaving defined the \\textit{not}, \\textit{and}, \\textit{or}, and \\textit{nor} gates, let's try and build a logical \\textit{xnor} gate. We'll start by building a hidden layer with two nodes, one built with the \\textit{and} gate and the other with the \\textit{nor} gate. 
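(As an aside, these single-gate constructions are easy to check numerically; the sketch below is ours, not part of the original notes, and reuses the \\texttt{g} and \\texttt{forward\\_layer} helpers from the earlier listing.)\n\\begin{verbatim}\n# continues the NumPy sketch above (g, forward_layer already defined)\n# weights of the single-gate networks defined in the text\nAND = np.array([[-30.0,  20.0,  20.0]])\nOR  = np.array([[-10.0,  20.0,  20.0]])\nNOR = np.array([[ 10.0, -20.0, -20.0]])\n\nfor x1 in (0, 1):\n    for x2 in (0, 1):\n        x = np.array([x1, x2])\n        row = [round(float(forward_layer(Th, x)[0])) for Th in (AND, OR, NOR)]\n        print(x1, x2, row)  # reproduces the AND / OR / NOR truth tables\n\\end{verbatim}\nReturning to the \\textit{xnor} construction: 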
Using \n$$\n\\Theta^{(1)} = \\left[\\begin{array}{ccc} -30 & 20 & 20 \\\\ 10 & -20 & -20 \\end{array}\\right]\n$$\nwe can build $a_1^{(2)}$ from the \\textit{and} gate and build $a_2^{(2)}$ from the \\textit{nor} gate. This gives us the following \n\\begin{center}\n\\begin{tabular}{c c | c c | c}\n$x_1$ & $x_2$ & $ a_1^{(2)}$ & $a_2^{(2)}$ & $h_\\Theta\\left(x\\right)$ \\\\ \\hline\n0 & 0 & 0 & 1 & {} \\\\\n0 & 1 & 0 & 0 & {} \\\\\n1 & 0 & 0 & 0 & {} \\\\\n1 & 1 & 1 & 0 & {}\n\\end{tabular}\n\\end{center}\nNow, to finish our \\textit{xnor} gate, we can use the \\textit{or} gate between our two existing nodes on the second layer. \n$$\n\\Theta^{(2)} = \\left[\\begin{array}{ccc} -10 & 20 & 20 \\end{array}\\right]\n$$\nWriting this out formally, we find\n\\begin{align*}\na^{(2)} &= g\\left(\\Theta^{(1)} x\\right) \\\\\nh_\\Theta\\left(x\\right) = a^{(3)} &= g\\left(\\Theta^{(2)} a^{(2)}\\right)\n\\end{align*}\nFilling in the rest of our table, we find we've built the \\textit{xnor} gate!\n\\begin{center}\n\\begin{tabular}{c c | c c | c}\n$x_1$ & $x_2$ & $ a_1^{(2)}$ & $a_2^{(2)}$ & $h_\\Theta\\left(x\\right)$ \\\\ \\hline\n0 & 0 & 0 & 1 & 1 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n1 & 1 & 1 & 0 & 1\n\\end{tabular}\n\\end{center}\n\n\n\\section{Multiclass Classification}\nSimilar to logistic regression, we can do multiclass classification with neural networks, and the way we do it is essentially an extension of the one-vs-all method. Let's say we have a computer vision example, where we're trying to classify an image into a pedestrian, a car, a motorcycle, or a truck. We would do this by building a neural network with an output of four numbers, meaning the output $h_\\Theta$ will actually be a $4$-vector. In our example, when we have a pedestrian or car, we'd want our output to be\n$$\n\\text{(pedestrian) } h_\\Theta\\left(x\\right) \\approx \\left[\\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{array}\\right] ~~\\mbox{ \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; }~~ \\text{(car) } h_\\Theta\\left(x\\right) \\approx \\left[\\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{array}\\right]\n$$\nOur training set will look similar\n$$\n\\left(x^{(1)}, y^{(1)}\\right), \\left(x^{(2)}, y^{(2)}\\right), \\left(x^{(3)}, y^{(3)}\\right), \\cdots, \\left(x^{(m)}, y^{(m)}\\right)\n$$\nbut instead of representing $y \\in \\{1, 2, 3, 4\\}$, we'll represent $y^{(i)}$ as one of the following:\n$$\ny^{(i)} \\in \\left\\{\t \\left[\\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{array}\\right],  \t\\left[\\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{array}\\right],\t \n\t\t\t\\left[\\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{array}\\right], \t \\left[\\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\\\ 1 \\end{array}\\right]\t\t\\right\\}\n$$\nwhere both $h_\\Theta\\left(x\\right)$ and $\\vec{y}$ will be in $\\mathbb{R}^4$. \n\nLet's write this out a bit. 
Sticking with the image classification problem with four output classes, our artificial neural network can be represented by\n$$\n\\left[\\begin{array}{c} x_0 \\\\ x_1 \\\\  x_2 \\\\ x_3 \\\\ \\vdots \\\\ x_n\\end{array}\\right] \\to\n\\left[\\begin{array}{c} a_0^{(2)} \\\\ a_1^{(2)} \\\\ a_2^{(2)} \\\\ a_3^{(2)} \\\\ \\vdots \\end{array}\\right] \\to \n\\left[\\begin{array}{c} a_0^{(3)} \\\\ a_1^{(3)} \\\\ a_2^{(3)} \\\\ a_3^{(3)} \\\\ \\vdots \\end{array}\\right] \\to \n\\left[\\begin{array}{c} h_\\Theta\\left(x\\right)_1 \\\\ h_\\Theta\\left(x\\right)_2 \\\\ h_\\Theta\\left(x\\right)_3 \\\\ h_\\Theta\\left(x\\right)_4 \\end{array}\\right]\n$$\nThe final hidden layer of nodes, when multiplied by its $\\Theta$ matrix, will result in another vector, on which we can apply the sigmoid function $g$ to get a vector of hypothesis values, which will be (approximately) equal to one of the four $y^{(i)}$ vectors.  \n", "meta": {"hexsha": "84b18a7707d92aa2cad4850b737118840e6df516", "size": 19500, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX Notes/Chapters/4-Neural_Networks_Representation.tex", "max_stars_repo_name": "Sz593/coursera_ml_notes", "max_stars_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeX Notes/Chapters/4-Neural_Networks_Representation.tex", "max_issues_repo_name": "Sz593/coursera_ml_notes", "max_issues_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeX Notes/Chapters/4-Neural_Networks_Representation.tex", "max_forks_repo_name": "Sz593/coursera_ml_notes", "max_forks_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.1438848921, "max_line_length": 596, "alphanum_fraction": 0.678, "num_tokens": 6534, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.7154240079185319, "lm_q1q2_score": 0.5914494651075611}}
{"text": "\\chapter{Unsupervised Learning}~\\label{clustering}\nUnsupervised learning is a paradigm of the machine learning field which is based on the training of knowledge without using a teacher.\nIt includes a large set of techniques and algorithms used for learning from data without knowing the true classes. The main application of unsupervised learning consists on estimating how data are organized in the space, such that they can reconstruct the prior probability distribution of data. For doing that clustering algorithms are used with the goal of grouping a set of objects in such a way that objects in the same cluster are strongly similar (\\textit{internal criterion}) and objects from distinct clusters are strongly dissimilar (\\textit{external criterion}).\n\nThe classical clustering problem starts with a set of $n$ objects and an $n \\times n$ affinity matrix $A$ of pairwise similarities that gives us an edge-weighted graph $G$. The goal of the clustering problem is to partition vertices of $G$ into maximally homogeneous groups (clusters). Usually the graph $G$ is an undirected graph, meaning that the affinity matrix $A$ is symmetric.\n\\image{img/ulearning/clustering}{\"Classical\" clustering problem.}{0.9}\nIn literature we can find different clustering algorithms that are strongly used, and each of them manages data in different ways. Some of the clustering algorithm, that we are going to test against adversarial noise in chapter \\ref{results}, are: K-Means , Spectral and Dominant Sets clustering.\n\n\\section{Images as Graphs}\nIn some applications we can have that images correspond to our data for which we want to obtain groups partition. In this case the clustering algorithms could be used in order to reconstruct a simplified version of the input image, removing noisy information. 
For doing that, the image is represented as an edge-weighted undirected graph, where vertices correspond to individual pixels and edge-weights reflect the similarity between pairs of vertices.\nGiven an input image with $H \\times W$ pixels we construct a similarity matrix $A$ such that the similarity between the pixels $i$ and $j$ is measured by:\n$$A(i,j) = \\exp\\Big(\\frac{-||F(i)- F(j)||^2_2}{\\sigma^2}\\Big)$$\n\n\\begin{itemize}\n\t\\item $F(i)$ is the normalized intensity of pixel $i$ (\\textit{intensity segmentation}).\n\t\\item $F(i) = [v, vs\\sin(h), vs\\cos(h)](i)$ where $h,s,v$ are the HSV values of pixel $i$ (\\textit{color segmentation}).\n\t\\item $F(i) = [|I*f_1|, \\dots, |I*f_k|](i)$ is a vector based on texture information at pixel $i$ (\\textit{texture segmentation}).\n\\end{itemize}\nThe constant $\\sigma$ is introduced to obtain a scaling effect on the affinity:\n\\begin{itemize}\n\t\\item Small $\\sigma$: only nearby points are similar.%  group only nearby points.\n\t\\item Large $\\sigma$: distant points tend to be similar.%group far-away points.\n\\end{itemize}\n\n\nAn example of the application of clustering algorithms to image segmentation is provided below:\n\\image{img/ulearning/fruits.png}{Image of vegetables.}{0.3}\n\\begin{figure}[H]\n\t\\begin{minipage}[t]{0.5\\linewidth} \n\t\t\\centering\n\t\t\\includegraphics[width=0.48\\textwidth]{img/ulearning/fruits_intensity}\n\t\t\\caption{Clustering on pixel intensity.}\n\t\\end{minipage}        \n\t\\hspace{1.cm}\n\t\\begin{minipage}[t]{0.5\\linewidth} \n\t\t\\centering\n\t\t\\includegraphics[width=0.48\\textwidth]{img/ulearning/fruits_color}\n\t\t\\caption{Clustering on pixel color.}\n\t\\end{minipage}\n\\end{figure}\n\\FloatBarrier\n\n\n\\newpage\n\\section{K-Means}\nK-Means is one of the simplest and most widely used iterative clustering algorithms. It aims to partition $n$ objects into $K$ maximally cohesive groups. The goal of the K-Means algorithm is to reach the following state: each observation belongs to the cluster with the nearest center. 
Its implementation can be briefly described in a few lines:\n\n\\begin{itemize}\n\t\\item \\textbf{Initialization:} Pick $K$ random points as cluster centers (centroids).\n\t\\image{img/ulearning/kmeans1}{Initialization with $K=2$.}{0.3}\n\t\\item \\textbf{Alternate:}\n\t\\begin{enumerate}\n\t\t\\item Assign data points to the closest cluster centroid.\n\t\t\\item For each cluster C update the corresponding centroid to the average of the points in C.\n\t\t\\begin{figure}[H]\n\t\t\t\\begin{minipage}[t]{0.42\\linewidth} \n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=0.68\\textwidth]{img/ulearning/kmeans2}\n\t\t\t\t\\caption{Iterative step 1.}\n\t\t\t\\end{minipage}        \n\t\t\t\\hspace{2.5cm}\n\t\t\t\\begin{minipage}[t]{0.42\\linewidth} \n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=0.68\\textwidth]{img/ulearning/kmeans3}\n\t\t\t\t\\caption{Iterative step 2.}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\t\t\\FloatBarrier\n\t\\end{enumerate}\n\t\\item \\textbf{Stop:} When no point's assignment changes.\n\t\\begin{figure}[H]\n\t\t\\begin{minipage}[t]{0.42\\linewidth} \n\t\t\t\\centering\n\t\t\t\\includegraphics[width=0.68\\textwidth]{img/ulearning/kmeans4}\n\t\t\t\\caption{Repeat until convergence.}\n\t\t\\end{minipage}        \n\t\t\\hspace{2.5cm}\n\t\t\\begin{minipage}[t]{0.42\\linewidth} \n\t\t\t\\centering\n\t\t\t\\includegraphics[width=0.68\\textwidth]{img/ulearning/kmeans5}\n\t\t\t\\caption{Final output.}\n\t\t\\end{minipage}\n\t\\end{figure}\n\\end{itemize}\n\\newpage\nFormally speaking we can define the K-Means algorithm over the set of points $X$ in the following way:\n\\begin{enumerate}\n\t\\begin{footnotesize}\n\t\\item Initialize cluster centroids $\\mu_1, \\dots, \\mu_k$.\n\t\\item Repeat until the assignments no longer change (convergence):\n\t\\end{footnotesize}\n\t\t\\begin{enumerate}[label*=\\arabic*.]\n\t\t\t\\item  $\\forall i\\in X \\quad c^{(i)} = \\arg\\min_j \\vert\\vert x^{(i)}-\\mu_j\\vert\\vert^2$\n\t\t\t\\item $\\forall j\\in C \\quad \\mu_j = \\frac{\\sum_{i=1}^m 1\\{c^{(i)} = j\\}x^{(i)}}{\\sum_{i=1}^m 1\\{c^{(i)} = j\\}}$\n\t\t\\end{enumerate}\n\\end{enumerate}\n\n\\paragraph*{Properties of K-Means.} K-Means has the following properties:\n\\begin{itemize}\n\t\\item It is guaranteed to converge in a finite number of steps.\n\t\\item It minimizes an objective function, which represents the compactness of the retrieved $K$ clusters:\n\t$$\\arg \\min_C\\sum_{i=1}^K \\Biggl\\{ \\sum_{x_j \\in C_i} ||x_j - \\mu_i||^2 \\Biggr\\}$$\n\twhere $\\mu_i$ is the centroid of cluster $i$.\n\t\\item It is a polynomial algorithm: $O(Kn)$ for assigning each sample to the closest cluster and $O(n)$ for the update of the cluster centers.\n\\end{itemize}\nK-Means is a very simple and efficient method but, on the other hand, it is strongly sensitive to the initialization phase. If the initial centroids are not chosen well, the algorithm may converge to a poor local minimum of the error function. Another disadvantage of K-Means is that it does not work well in the presence of non-convex cluster shapes. \n\n\n\\section{Eigenvector-based Clustering}~\\label{eingenvector-based-clustering}\nEigenvector-based clustering collects different techniques that use properties of eigenvalues and eigenvectors to solve the clustering problem.\nLet us represent a cluster using a vector $x$ whose $k$-th entry captures the participation of node $k$ in that cluster. If a node does not participate in cluster $x$, the corresponding entry is zero. 
We also impose the restriction that $x^Tx = 1$. The goal of the clustering algorithm is to maximize:\n\\begin{equation}~\\label{cohesiveness}\n\\arg\\max_x \\sum_{i=1}^n \\sum_{j=1}^n w_{ij} x_i x_j = x^TAx\n\\end{equation}\nwhich measures the cluster's cohesiveness; the entries $x_i$, $x_j$ measure the centrality of the corresponding nodes to the cluster and satisfy:\n$$x_i \\begin{cases}\n\\neq 0 \\text{  if } i \\in C\\\\\n= 0 \\text{  if } i \\notin C\n\\end{cases}$$\nComing back to the notion of eigenvalues of a matrix, we say that $\\lambda$ is an eigenvalue of $A$ and $x_\\lambda$ is the corresponding eigenvector if:\n$$Ax_\\lambda = \\lambda x_\\lambda$$\nfrom which we can derive, using the normalization $x_\\lambda^Tx_\\lambda = 1$:\n$$x_\\lambda^TAx_\\lambda = \\lambda x_\\lambda^Tx_\\lambda = \\lambda$$\nThere are two important theorems that define the nature of the eigenvalues of an $n\\times n$ matrix $A$:\n\\begin{enumerate}\n\t\\item If $A = A^T$ then $A$ is symmetric and has only real eigenvalues. This means that we can sort them, from the smallest one to the largest one.\n\t\\item If $A$ is symmetric then $\\max_{x : x^Tx=1} x^TAx$ corresponds to the largest eigenvalue $\\lambda$. Moreover, the corresponding eigenvector $x_\\lambda$ is the argument which maximizes the cohesiveness.\n\\end{enumerate}\nTaking advantage of the two theorems, and taking $A$ to be the affinity matrix, the clustering problem \\ref{cohesiveness} corresponds to an \\textbf{eigenvalue problem}: it is maximized by the eigenvector of $A$ with the largest eigenvalue.\\\\\n\n\\paragraph*{Clustering by Eigenvectors Algorithm\\\\\\\\} \nWe can define the algorithm for extracting clusters from data points using the eigenvector strategy with the following steps:\n\\begin{enumerate}\n\t\\begin{footnotesize}\n\t\\item Construct the affinity matrix $A$ from the input $G$.\n\t\\item Compute the eigenvalues and eigenvectors of $A$.\n\t\\item Repeat\n\t\\item $\\quad$ Take the largest unprocessed eigenvalue and the corresponding eigenvector.\n\t\\item $\\quad$ Zero all the components corresponding to samples that have already been clustered.\n\t\\item $\\quad$ Threshold the remaining components to detect which elements belong to this cluster.\n\t\\item $\\quad$ If all elements have been accounted for, there are sufficient clusters.\n\t\\item Until there are sufficient clusters.\n\t\\end{footnotesize}\n\\end{enumerate}\n\n\\subsection{Clustering as Graph Partitioning}\nLet $G=(V,E,w)$ be an undirected weighted graph with $\\vert V\\vert$ nodes (samples) and $\\vert E\\vert$ edges. Note that the graph is undirected precisely when the affinity matrix is symmetric. Given a partition of the vertices into $A$ and $B=V\\setminus A$, we define cut$(A,B)$ in the following way:\\\\\n$$cut(A,B) = \\sum_{i \\in A} \\sum_{j \\in B} w(i,j)$$\n\\image{img/ulearning/minCut}{Minimum cut problem.}{0.8}\nIn the MinCut problem, we look for the partitioning that minimizes the cost of crossing from $A$ to $B$, which is the sum of the weights of the edges which cross the cut. The fundamental idea is to treat the clustering problem as a graph partitioning problem. Indeed, the MinCut problem can be considered a good way of solving the clustering problem in graph data. 
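For concreteness, here is a small NumPy sketch (ours, not part of the original text) showing how cut$(A,B)$ is evaluated from an affinity matrix $W$:\n\\begin{verbatim}\nimport numpy as np\n\ndef cut_value(W, in_A):\n    # cut(A, B): sum of the weights w(i, j) with i in A and j in B,\n    # where B is the complement of A.\n    # W    : (n, n) symmetric affinity matrix\n    # in_A : boolean mask of length n selecting the vertices of A\n    return W[np.ix_(in_A, ~in_A)].sum()\n\n# toy example: a triangle {1, 2, 3} weakly connected to vertex 4\nW = np.array([[0, 1, 1, 0],\n              [1, 0, 1, 0],\n              [1, 1, 0, 1],\n              [0, 0, 1, 0]], dtype=float)\nin_A = np.array([True, True, True, False])\nprint(cut_value(W, in_A))  # 1.0: only one edge crosses the cut\n\\end{verbatim}\n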
\\subsection{Clustering as Graph Partitioning}\nLet $G=(V,E,w)$ be an undirected weighted graph with $\\vert V\\vert$ nodes (samples) and $\\vert E\\vert$ edges; note that the graph is undirected when the affinity matrix is symmetric. Given a bipartition of the vertices into $A$ and $B=V\\setminus A$, we define cut$(A,B)$ in the following way:\\\\\n$$cut(A,B) = \\sum_{i \\in A} \\sum_{j \\in B} w(i,j)$$\n\\image{img/ulearning/minCut}{Minimum cut problem.}{0.8}\nIn the MinCut problem we look for the partition that minimizes the cost of crossing from $A$ to $B$, i.e. the sum of the weights of the edges which cross the cut. The fundamental idea is to consider the clustering problem as graph partitioning; indeed, the MinCut problem can be considered a good way of solving the clustering problem on graph data. MinCut clustering is advantageous because it is solvable in polynomial time but, on the other hand, it favors highly unbalanced clusters (often with isolated vertices): it only measures what happens between the clusters and not what happens within the clusters.\n\\image{img/ulearning/minCut2}{Minimum cut: unbalanced clusters.\\cite{normalized_cut}}{0.45}\n\n\\subsection{Normalized Cut}\nIn order to overcome the problem of unbalanced clusters, a normalized version of the MinCut problem, called \\textbf{Normalized Cut}, is used; it is defined by:\\\\\n\n$$Ncut(A,B) = \\underbrace{cut(A,B)}_{\\text{Between A and B}}\\left( \\underbrace{\\frac{1}{vol(A)} + \\frac{1}{vol(B)}}_{\\text{Within A and B}}\\right)$$\\\\\n\nwhere $vol(A) = \\sum_{i \\in A}d_i,~A \\subseteq V$ is the volume of the set $A$ and $d_i = \\sum_j w_{i,j}$ is the degree of node $i$ (the sum of the weights of its edges).\\\\\nThe Normalized Cut has the advantage of taking into consideration what happens within the clusters: through $vol(A)$ and $vol(B)$ it accounts for what is going on within $A$ and $B$.\n\\newpage\n\n
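For concreteness, both quantities can be evaluated directly from the affinity matrix; in the short sketch below (ours, not from the source) \\texttt{W} is the weighted affinity matrix and \\texttt{mask} a boolean vector marking the vertices of $A$:\n\\begin{verbatim}\nimport numpy as np\n\ndef cut_value(W, mask):\n    # Sum of the edge weights crossing from A (mask) to B (~mask).\n    return W[mask][:, ~mask].sum()\n\ndef ncut_value(W, mask):\n    d = W.sum(axis=1)                  # node degrees\n    volA, volB = d[mask].sum(), d[~mask].sum()\n    return cut_value(W, mask) * (1.0 / volA + 1.0 / volB)\n\\end{verbatim}\n\n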
\\subsection{Graph Laplacians}\nAs \\citeauthor{spectral_tutorial} points out, the main tools for spectral clustering are the graph Laplacian matrices defined in spectral graph theory. In this section we define different graph Laplacians and point out their most important properties, since they will later be used for solving the MinCut and Ncut problems.\n\n\\paragraph{The Unnormalized Graph Laplacian.} The \\textbf{unnormalized graph Laplacian} matrix is defined as:\n$$L = D - W$$\n\nwhere:\n\\begin{itemize}\n\t\\item $D$ is a diagonal matrix containing the degree of each node of $G$.\n\t\\item $W$ is the affinity matrix of $G$, containing $1$ if two nodes are adjacent and $0$ otherwise; the diagonal elements are all set to $0$.\n\\end{itemize}\nIn the following we provide an example of the matrices $D$ and $W$ obtained from the graph shown in Fig. \\ref{fig:graphG}.\n\\begin{figure}[H]\n\t\\begin{minipage}[t]{0.49\\linewidth} \n\t\t\\centering\n\t\t$$ D = \\begin{bmatrix}\n\t\t2 & 0 & 0 & 0 & 0 & 0 \\\\\n\t\t0 & 4 & 0 & 0 & 0 & 0 \\\\\n\t\t0 & 0 & 4 & 0 & 0 & 0 \\\\\n\t\t0 & 0 & 0 & 1 & 0 & 0 \\\\\n\t\t0 & 0 & 0 & 0 & 3 & 0 \\\\\n\t\t0 & 0 & 0 & 0 & 0 & 2 \\\\\n\t\t\\end{bmatrix}$$\n\t\t\\caption{Degree matrix $D$.}\n\t\\end{minipage}        \n\t\\hspace{1cm}\n\t\\begin{minipage}[t]{0.49\\linewidth} \n\t\t\\centering\n\t\t$$ W = \\begin{bmatrix}\n\t\t0 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t1 & 0 & 1 & 1 & 1 & 0 \\\\\n\t\t1 & 1 & 0 & 0 & 1 & 1 \\\\\n\t\t0 & 1 & 0 & 0 & 0 & 0 \\\\\n\t\t0 & 1 & 1 & 0 & 0 & 1 \\\\\n\t\t0 & 0 & 1 & 0 & 1 & 0 \\\\\n\t\t\\end{bmatrix}$$\n\t\t\\caption{Affinity matrix $W$.}\n\t\\end{minipage}\n\\end{figure}\n\nThe elements of $L$ are given by:\n$$\nL_{i,j} = \\left\\{ \\begin{array}{ll} \\operatorname{d}\\left(v_{i}\\right) & \\text{ if } i = j \\\\ \n-1 & \\text{ if } i \\neq j \\text{ and } v_{i} \\text{ is adjacent to } v_{j} \\\\ \n0 & \\text{ otherwise } \\end{array} \\right.\n$$\nwhere $d(v_i)$ is the degree of the vertex $i$.\n\\imageLabel{img/ulearning/graphLaplacian}{Laplacian matrix $L$ associated to the graph on the left.}{0.85}{graphG}\n\nThe properties satisfied by the matrix $L$, as reported in \\cite{spectral_tutorial}, are:\n\\begin{enumerate}\n\t\\item For all vectors $f \\in \\mathbb{R}^n$ we have:\n\t$$f^{\\top} L f = \\frac{1}{2} \\sum_{i,j=1}^{n} w_{ij}\\left(f_{i} - f_{j}\\right)^{2}$$\n\tThis is proved using the definition of $d_i$:\n\t$$\\begin{aligned} \n\tf^{\\top} L f & = f^{\\top} D f - f^{\\top} W f = \\sum_{i=1}^n d_{i} f_{i}^{2} - \\sum_{i,j=1}^n f_{i} f_{j} w_{ij} \\\\ \n\t& = \\frac{1}{2} \\left( \\sum_{i=1}^n \\left( \\sum_{j=1}^n w_{ij} \\right) f_{i}^{2} - 2 \\sum_{i,j=1}^n f_{i} f_{j} w_{ij} + \\sum_{j=1}^n \\left( \\sum_{i=1}^n w_{ij} \\right) f_{j}^{2} \\right) \\\\ \n\t& = \\frac{1}{2} \\sum_{i,j=1}^n w_{ij} \\left( f_{i} - f_{j} \\right)^{2} \n\t\\end{aligned}$$\n\t\n\t\\item $L$ is symmetric and positive semi-definite: the symmetry of $L$ follows directly from the symmetry of $W$ and $D$, while positive semi-definiteness is a direct consequence of the first property, which shows that $f^{T} L f \\geq 0$.\n\t\n\t\\item The smallest eigenvalue of $L$ is $0$, and the corresponding eigenvector is the constant one vector.\n\t\\item $L$ has $n$ non-negative, real-valued eigenvalues $0 = \\lambda_{1} \\leq \\lambda_{2} \\leq \\ldots \\leq \\lambda_{n}$.\n\\end{enumerate}\n\nA first relation between spectrum and clusters:\n\\begin{itemize}\n\t\\item The multiplicity of the eigenvalue $\\lambda_1=0$ equals the number of connected components of the graph.\n\t\\item The corresponding eigenspace is spanned by the characteristic functions of these components (so all the eigenvectors in it are piecewise constant).\n\\end{itemize}\n\n
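The following toy sketch (ours, relying only on the properties just listed) builds $L = D - W$ and verifies the relation between the multiplicity of the eigenvalue $0$ and the number of connected components:\n\\begin{verbatim}\nimport numpy as np\n\ndef unnormalized_laplacian(W):\n    # D is diagonal with the node degrees on the diagonal; L = D - W.\n    return np.diag(W.sum(axis=1)) - W\n\n# Made-up example: two disconnected edges -> two connected components.\nW = np.array([[0., 1., 0., 0.],\n              [1., 0., 0., 0.],\n              [0., 0., 0., 1.],\n              [0., 0., 1., 0.]])\nvals = np.linalg.eigvalsh(unnormalized_laplacian(W))\n# Multiplicity of the eigenvalue 0 = number of components (here 2).\nn_components = int(np.sum(np.isclose(vals, 0.0)))\n\\end{verbatim}\n\n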
\\paragraph{Normalized Graph Laplacians.} The literature also defines normalized versions of the graph Laplacian. In particular, there exist two closely related definitions:\n$$\\begin{array}{l} L_{\\mathrm{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} W D^{-1/2} \\\\ L_{\\mathrm{rw}} = D^{-1} L = I - D^{-1} W \\end{array}$$\nThe first matrix, $L_{sym}$, is symmetric, while the second one, $L_{rw}$, is closely connected to a random walk \\cite{spectral_tutorial}.\n\n\\begin{defn}[Properties of Laplacian matrices and normalized ones]{} Let $L$ be the Laplacian of a graph $G=(V,E)$. Then $L \\geq 0$; indeed, writing $L$ as a sum of edge Laplacians $L_e$, for every $x = (x_1, \\dots, x_n)$:\\\\\n\t$$\\begin{aligned} x^{T}Lx &= x^{T} \\sum_{e \\in E} L_{e} x \\\\ &=\\sum_{e \\in E} x^T L_{e} x \\\\ &=\\sum_{(i, j) \\in E}\\left(x_{i}-x_{j}\\right)^{2} \\geq 0 \\end{aligned}$$\n\tFor the normalized Laplacian matrix we have instead:\n\t$$\\forall x \\in \\mathbb{R}^n \\quad x^TL_{sym}x = \\sum_{(i,j) \\in E} \\left(\\frac{x(i)}{\\sqrt{d(i)}} - \\frac{x(j)}{\\sqrt{d(j)}} \\right)^2 \\geq 0$$\n\\end{defn}\n\\par \\bigskip \\bigskip \\noindent\n\n\n\\subsection{Solving Ncut}~\\label{solveNcutTheory}\nAny cut $(A,B)$ can be represented by a binary indicator vector $x$:\n$$x_i = \\begin{cases}\n+1 \\text{  if } i \\in A\\\\\n-1 \\text{  if } i \\in B\n\\end{cases}$$\nIt can be shown that:\n\\begin{equation}~\\label{solvencut}\n\\min_x ~ Ncut(x) = \\min_y \\underbrace{\\frac{y'(D-W)y}{y'Dy}}_{\\text{Rayleigh quotient}}\n\\end{equation}\n\nsubject to the constraint $y^{\\prime} D \\mathbb{1} = \\sum_{i} y_{i} d_{i} = 0$, where $y$ is an indicator vector with $1$ in the $i$-th position if the $i$-th feature point belongs to $A$ and a negative constant $-b$ otherwise, i.e. $y_{i} \\in \\{1, -b\\}$ (the relaxation below will also introduce real values).\n\n\\begin{thm}[Solving Ncut proof]{}\n\t$$\\lambda_2 = \\min_x \\frac{x^TL_{sym}x}{x^Tx} = \\min_x \\frac{x^TD^{-1/2}LD^{-1/2}x}{x^Tx} \\qquad \\text{Remember } L_{sym} = D^{-1/2}LD^{-1/2}$$\n\tConsidering the change of variables obtained by setting $y=D^{-1/2}x$, i.e. $x = D^{1/2}y$:\n\t$$\\lambda_2 = \\min_y \\frac{y^TLy}{(D^{1/2}y)^T(D^{1/2}y)} = \\min_y \\frac{y^TLy}{y^TDy}$$\n\\end{thm}\n\nSolving Problem \\ref{solvencut} exactly is not computationally feasible, since it is an \\textbf{NP-Hard} problem; this huge time complexity brings us to consider an approximation of it. If we relax the constraint that $y$ must be a discrete-valued vector and allow it to take on real values, then the original problem\n$$\\min_{y} \\frac{y^{\\prime}(D-W)y}{y^{\\prime}Dy}$$\nis equivalent to:\n$$\\min_{y} y^{\\prime}(D-W)y \\quad \\text{ subject to } y^{\\prime}Dy = 1$$\nThis amounts to solving a \\textit{generalized} eigenvalue problem, where the optimal solution is provided by the eigenvector of the second smallest eigenvalue, since we want to minimize the cut. Note that we pick the second smallest eigenvalue because, as seen above, the smallest one is always zero and corresponds to the trivial partition $A=V$ and $B=\\emptyset$.\n$$\\underbrace{(D-W)}_{Laplacian}y=\\lambda D y$$\nWe started from an NP-Hard problem and through relaxation we reached a feasible solution; however, there is no guarantee that the relaxed solution is in one-to-one correspondence with the discrete one.\n\\paragraph*{The effect of relaxation.} Through the relaxation we lose some precision in the final solution. 
\n\\imageb{img/ulearning/relaxation1}{0.8}\nNote that the original problem returns binary values $(-1,1)$, indicating the clustering membership. The relaxed version, on the right, returns continuous values of it can be the case that some points are not so clear to assign (close to the margin between the two). For that reason relaxed solution not always is in one-to-one correspondence with the original problem.\n\n\\subsection{Random Walk Interpretation} \nThe Ncut problem can be formalized also in terms of random walk, as highlighted in \\cite{spectral_tutorial}, since we want to find a cut that reduces the probability of jumping between nodes of different clusters. It can be defined by a  Markov chain where each data point is a state, connected to all other states with some probability. With our affinity $W$ and degree $D$, the stochastic matrix is:\n$$P=D^{-1}W$$\nwhich is the row-normalized version of $W$, so each entry $P(i,j)$ is a probability of \"\\textit{walking}\" to state $j$ from state  $i$\\cite{randomwalk_spectral}.\n\\imageb{img/ulearning/randomWalk}{0.20}\nProbability of a walk through states $( s _ { 1 } , \\dots , s _ { m })$ is given by:\n$$P \\left( s _ { 1 } , \\ldots , s _ { 2 } \\right) = P \\left( s _ { 1 } \\right) \\prod _ { i = 2 } ^ { m } P \\left( s _ { i } , s _ { i - 1 } \\right)$$\nSuppose we divide the states into two groups, and we want to minimize the probability of jumping between the two groups. We can formulate this as an eigenvector problem:\n$$P y = \\lambda y$$\nwhere the component of vector $y$ will give the segmentation.\\\\\nWe can precise also that:\n\\begin{itemize}\n\t\\item $P$ is a stochastic matrix.\n\t\\item The largest eigenvalue is 1, and its eigenvector is the all-one vector 1. Not very informative about segmentation.\n\t\\item The second largest eigenvector is orthogonal to the first, and its components indicate the strongly connected sets of states.\n\t\\item Meila and Shi (2001) showed that minimizing the probability of jumping between two groups in the Markov chain is equivalent to minimizing Ncut.\n\\end{itemize}\n\\begin{thm}[Random Walk Proposition] $(\\lambda, y)$ is a solution to $Py = \\lambda y$ if and only if \\footnote{Adapted from Y. 
\\begin{thm}[Random Walk Proposition] $(\\lambda, y)$ is a solution to $Py = \\lambda y$ if and only if\\footnote{Adapted from Y. Weiss}:\n\t\\begin{itemize}\n\t\t\\item $1-\\lambda$ is an eigenvalue of the generalized eigenvalue problem $(D-W)y = (1-\\lambda) D y$, and\n\t\t\\item $y$ is a corresponding eigenvector.\n\t\\end{itemize} \n\t\\textbf{Proof:}\n\t$$\\begin{array}{ll} { P y = \\lambda y } & { \\Leftrightarrow \\quad - P y = - \\lambda y } \\\\\n\t{ } & { \\Leftrightarrow \\quad y - P y = y - \\lambda y } \\\\\n\t{ } & { \\Leftrightarrow \\quad ( I - P ) y = ( 1 - \\lambda ) I y } \\\\\n\t{ } & { \\Leftrightarrow \\quad \\left( D^{-1} D - D^{-1} W \\right) y = ( 1 - \\lambda ) D^{-1} D y } \\\\\n\t{ } & { \\Leftrightarrow \\quad D^{-1} ( D - W ) y = D^{-1} ( 1 - \\lambda ) D y } \\\\\n\t{ } & { \\Leftrightarrow \\quad ( D - W ) y = ( 1 - \\lambda ) D y } \\end{array}$$\n\\end{thm}\nThe problem is to find a cut $(A,B)$ in the graph $G$ such that a random walk does not have many opportunities to jump between the two clusters.\\\\\nThis is equivalent to the Ncut problem due to the following relation:\n$$Ncut(A,B) = P(A|B) + P(B|A)$$\n\\subsection{2-way Ncut clustering algorithm}\nIn Section \\ref{solveNcutTheory} we have seen how to solve the Normalized Cut clustering problem; here we discuss its implementation for extracting exactly two clusters:\n\\begin{enumerate}\n\t\\begin{footnotesize}\n\t\\item Compute the affinity matrix $W$ and the degree matrix $D$; $D$ is diagonal with \\\\$D_{i,i} = \\sum_{j \\in V} W_{i,j}$.\n\t\\item Solve the generalized eigenvalue problem $(D-W)y = \\lambda Dy$.\n\t\\item Use the eigenvector associated with the second smallest eigenvalue to bipartition the graph into two parts.\n\t\\end{footnotesize}\n\\end{enumerate}\nSometimes there is no clear threshold to split on, since the second eigenvector takes continuous values. How can the splitting point be chosen?\n\\begin{itemize}\n\n\t\\item Pick a constant value ($0$ or $0.5$).\n\t\\item Pick the median value as the splitting point.\n\t\\item Look for the splitting point that has the minimum Ncut value:\n\t\\begin{enumerate}\n\t\t\\item Choose $n$ possible splitting points.\n\t\t\\item Compute the Ncut value for each.\n\t\t\\item Pick the minimum.\n\t\\end{enumerate}\n\n\\end{itemize}\n\n
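Putting the pieces together, here is a minimal sketch of the 2-way procedure (ours, not from the source; it relies on \\texttt{scipy} for the generalized eigenproblem and uses the median as splitting point):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef two_way_ncut(W):\n    d = W.sum(axis=1)\n    D = np.diag(d)        # assumes all degrees are positive\n    L = D - W\n    # Generalized eigenproblem (D - W) y = lambda D y.\n    vals, vecs = eigh(L, D)\n    fiedler = vecs[:, 1]  # eigenvector of the 2nd smallest eigenvalue\n    # Bipartition by thresholding at the median value.\n    return fiedler > np.median(fiedler)\n\\end{verbatim}\n\n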
eigenproblem $L u = \\lambda D u$.\n\t\\item Let $U = \\left[ u_1, u_{ 2 }, \\dots, u_{ k } \\right] \\in \\mathbb { R } ^ { n \\times k }$.\n\t\\item Let $y_i \\in \\mathbb{ R }^k$ be the vector corresponding to the $i$th row of $U$.\n\t$$U = \\left[ \\begin{array} { c c c c } { u _ { 11 } } & { u _ { 12 } } & { \\cdots } & { u _ { 1 k } } \\\\ { u _ { 21 } } & { u _ { 22 } } & { \\cdots } & { u _ { 2 k } } \\\\ { \\vdots } & { \\vdots } & { \\ddots } & { \\vdots } \\\\ { u _ { n 1 } } & { u _ { n 2 } } & { \\cdots } & { u _ { n k } } \\end{array} \\right] = \\left[ \\begin{array} { c } { y _ { 1 } ^ { T } } \\\\ { y _ { 2 } ^ { T } } \\\\ { \\vdots } \\\\ { y _ { n } ^ { T } } \\end{array} \\right]$$\n\t\\item Thinking of $y_i$'s as points in $\\mathbb{ R }^k$, cluster them with $k$-means algorithms.\n\t\\end{footnotesize}\n\\end{enumerate}\n\n\\subsection{Spectral Clustering vs $K$-Means}\nFirst of all, let us define the spectral clustering algorithm \\cite{spectral_algorithm}, its goal is to cluster objects that are connected but not necessarily compact or clustered within convex boundaries. The algorithm has in input the similarity matrix $S \\in \\mathbb{ R }^{n \\times n}$ and the $k$ number of clusters to construct. It returns in output the $k$ clusters. It follows these steps:\n\\begin{enumerate}\n\t\\begin{footnotesize}\n\t\\item Construct a similarity graph and compute the normalized graph Laplacian $L_{sym}$.\n\t\\item Embed data points in a low-dimensional space (spectral embedding), in which the clusters are more obvious, computing the $k$ smallest eigenvectors $v_1, \\dots, v_k$ of $L_{sym}$. \n\t\\item Let $V=\\left[v_1,\\dots, v_k \\right] \\in \\mathbb{ R }^{n \\times k}$.\n\t\\item Form the matrix $U \\in \\mathbb{ R }^{n \\times k}$ from $V$ by normalizing the row sums to have norm 1, that is: \n\t$$u_ { i j } = \\frac { v _ { i j } } { \\left( \\sum _ { k } v _ { i k } ^ { 2 } \\right) ^ { 1 / 2 } }$$\n\t\\item For $i=1,\\dots, n$, let $y _ { i } \\in \\mathbb { R } ^ { k }$ be the vector corresponding to the $i$th row of $U$.\n\t\\item Cluster the points $y_i$ with $i=1,\\dots,n$ with the $k$-means algorithm into clusters $C_1, \\dots, C_k$.\n\t\\end{footnotesize}\n\\end{enumerate}\nApplying $k$-means to Laplacian eigenvectors allows us to find cluster with non-convex boundaries.\n\\imageb{img/ulearning/spectral1}{0.95}\n\\imageb{img/ulearning/spectral2}{0.95}\n\\imageb{img/ulearning/spectral3}{0.95}\nOne of the possible problems that could appear on the usage of the Spectral Clustering algorithm consists on choosing the best $k$. We want to find a $k$ such that all eigenvalues $\\lambda_1, \\dots, \\lambda_k$ are very small, but $\\lambda_{k+1}$ is relatively large. In this way, the choosing of $k$ maximizes the eigengap (difference between consecutive eigenvalues) $\\delta_k = |\\lambda_k - \\lambda_{k-1}|$.\n\\imageb{img/ulearning/eigengap}{0.95}\n\\newpage\n\n\\section{Dominant Sets}\nIn the previous sections we have seen that data can be represented using weighted graphs, also called similarity graphs, in which data are represented by nodes in the graph and the edges represent the similarity relation between nodes. 
\\section{Dominant Sets}\nIn the previous sections we have seen that data can be represented using weighted graphs, also called similarity graphs, in which data points are represented by nodes and the edges represent the similarity relation between nodes. This representation also allows us to encode very complex structured entities.\nIn the literature, some authors argue that a cluster can be seen as a \\textbf{maximal clique}\\footnote{A \\textbf{clique} is a subset of mutually adjacent vertices.\\\\ A \\textbf{maximal clique} is a clique that is not contained in a larger one.} of a graph: the concept of clique relates to the internal cluster criterion, while maximality responds to the external one. But the standard definition of clique does not consider weighted graphs. For this reason, the notion of dominant set was introduced by \\citeauthor{dominantset} as an extension of the maximal clique problem to weighted graphs. We are going to see that the notion of dominant set provides measures of the cohesiveness of a cluster and of the participation of a vertex in the different clusters.\n\n\\subsection{Cluster in Graph Theory}\nThe data to be clustered can be coded as an undirected weighted graph with no self-loops, $G=(V, E, \\omega)$, where $V=\\{1,\\dots,n\\}$ is the vertex set, $E\\subseteq V\\times V$ is the edge set and $\\omega: E\\rightarrow \\mathbb{R}^*_+$ is the positive weight function. Vertices represent data points, edges neighborhood relationships and edge weights similarity relations. $G$ is then represented with an adjacency matrix $A$ such that $a_{ij} = \\omega(i,j)$. Since there are no self-loops, $\\omega(i,i) = 0$ (main diagonal equal to $0$).\\\\\nOne of the key problems of clustering is that there is no unique and well-defined definition of cluster, but researchers agree that a cluster should satisfy two conditions:\n\\begin{itemize}\n\t\\item \\textbf{High internal homogeneity}, also named the \\textit{internal criterion}: all the objects inside a cluster should be highly similar (or at low distance) to each other.\n\t\\item \\textbf{High external in-homogeneity}, also named the \\textit{external criterion}: objects coming from different clusters should have low similarity (or high distance).\n\\end{itemize}\nInformally speaking, a cluster is a set of entities which are alike, while entities from different clusters are not alike.\n\nLet $S\\subseteq V$ be a nonempty subset of vertices and $i \\in S$. The average weighted degree of $i$ with regard to $S$ is defined as:\\\\\n\\begin{equation}\n\\text{awdeg}_S(i)=\\frac{1}{|S|}\\sum_{j\\in S}a_{ij}\n\\end{equation}\nThis quantity represents the average similarity between entity $i$ and the rest of the entities in $S$; in other words, how similar $i$ is, on average, to all the objects in $S$.\nIt can be observed that $\\text{awdeg}_{\\{i\\}}(i) = 0$  $\\forall i \\in V$, since we have no self-loops.\\\\\nWe now introduce a new quantity $\\phi$, defined for $j \\notin S$:\n\\begin{equation}\n\\phi_S(i, j)=a_{ij}-\\text{awdeg}_S(i)\n\\end{equation}\nNote that $\\phi_{\\{i\\}}(i, j)=a_{ij}$ $\\forall i, j\\in V$ with $i \\neq j$. $\\phi_S(i, j)$ measures the relative similarity between $i$ and $j$ with respect to the average similarity between $i$ and its neighbors in $S$; this measure can be either positive or negative.\n\n
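Both quantities translate directly into code; a toy sketch (ours, with a made-up affinity matrix, zero on the diagonal):\n\\begin{verbatim}\nimport numpy as np\n\ndef awdeg(A, S, i):\n    # Average similarity between i and the elements of S.\n    return A[i, list(S)].sum() / len(S)\n\ndef phi(A, S, i, j):\n    # Relative similarity of j (not in S) w.r.t. i and its average in S.\n    return A[i, j] - awdeg(A, S, i)\n\nA = np.array([[0., 5., 4.],\n              [5., 0., 6.],\n              [4., 6., 0.]])\nprint(awdeg(A, {0, 1, 2}, 0))   # average similarity of node 0 in {0,1,2}\nprint(phi(A, {0, 1}, 0, 2))     # relative similarity of 2 w.r.t. 0 and {0,1}\n\\end{verbatim}\n\n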
\\begin{defn}[\\citeauthor{ds1}, Node's weight]{}\nLet $S\\subseteq V$ be a nonempty subset of vertices and $i \\in S$. The weight of $i$ with regard to $S$ is:\n\\begin{equation}\nw_S(i)= \\begin{cases}\n1 & \\text{if } |S| = 1 \\\\\n\\sum\\limits_{j\\in S\\setminus \\{i\\}}\\phi_{S\\setminus \\{i\\}}(j, i)w_{S\\setminus \\{i\\}}(j) & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\\end{defn}\n\nFurther, the total weight of $S$ is defined to be $W(S)=\\sum_{i\\in S}w_S(i)$.\n\\image{img/ulearning/ws.png}{Weight of $i$ with respect to the elements of $S$.}{0.18}\nNote that $w_{\\{i, j\\}}(i)=w_{\\{i, j\\}}(j)=a_{ij}$ $\\forall i, j \\in V \\land i\\neq j$; $w_S(i)$ is thus calculated simply as a function of the weights on the edges of the sub-graph induced by $S$.\\\\\n\nIntuitively, $w_S(i)$ gives a measure of the similarity between $i$ and $S\\setminus \\{i\\}$ with respect to the overall similarity among the vertices of $S\\setminus \\{i\\}$; in other words, how similar (important) $i$ is with respect to the entities in $S$.\nAn important property of this definition is that it induces a sort of natural ranking among the vertices of the graph. \n\n\n\\imageLabel{img/ulearning/graph.png}{Similarity graph example.}{0.15}{gex}\n\nConsidering the graph proposed in Figure \\ref{fig:gex}, we can derive a ranking between nodes: $$w_{\\{1,2,3\\}}(1) < w_{\\{1,2,3\\}}(2) < w_{\\{1,2,3\\}}(3)$$\n\\begin{defn}[\\citeauthor{ds1}, Dominant Set]{} A nonempty subset of vertices $S\\subset V$ such that $W(T)>0$ for any nonempty $T \\subseteq S$ is said to be a \\textbf{dominant set} if:\n\\begin{itemize}\n\t\\item $w_S(i) > 0$ $\\forall i \\in S \\qquad$ (\\textit{internal homogeneity})\n\t\\item $w_{S \\cup \\{i\\}}(i) < 0$ $\\forall i \\notin S \\qquad$ (\\textit{external homogeneity})\n\\end{itemize}\n\\end{defn}\nThese conditions correspond to the two cluster properties (\\textbf{internal homogeneity} and \\textbf{external in-homogeneity}). Informally, the first condition requires that all the nodes in the cluster $S$ are important (high weight, i.e. similar to the rest of $S$); the second one says that adding any new point to $S$ would lower the cluster cohesiveness, meaning that the current cluster is already maximal.\\\\\nBy definition, dominant sets are expected to capture compact structures. Moreover, this definition is equivalent to that of a maximal clique when applied to unweighted graphs.\n\\image{img/ulearning/dominant_def}{The set \\{1,2,3\\} is dominant.}{0.2}\n\n
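The recursive definition of $w_S(i)$ also translates directly into code; the sketch below (ours, reusing \\texttt{awdeg} and \\texttt{phi} from the previous sketch) checks the two dominance conditions for a candidate set:\n\\begin{verbatim}\ndef w(A, S, i):\n    # Recursive node weight w_S(i); S is a frozenset with i in S.\n    if len(S) == 1:\n        return 1.0\n    R = S - {i}\n    # w_S(i) = sum over j in R of phi_R(j, i) * w_R(j), with R = S minus {i}.\n    return sum(phi(A, R, j, i) * w(A, R, j) for j in R)\n\ndef is_dominant(A, S, V):\n    S = frozenset(S)\n    internal = all(w(A, S, i) > 0 for i in S)                  # internal\n    external = all(w(A, S | {i}, i) < 0 for i in set(V) - S)   # external\n    return internal and external\n\\end{verbatim}\nNote that this brute-force recursion is exponential in $|S|$ and is meant only to make the definition concrete on toy examples.\n\n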
\\subsection{Link to Standard Quadratic Optimization}\nClusters are commonly represented as an $n$-dimensional vector expressing the participation of each node in a cluster: large values denote strong participation, while zero values denote no participation. In Section \\ref{eingenvector-based-clustering} we have seen that the goal of a clustering algorithm is to maximize the cohesiveness of the retrieved clusters. Formally speaking, the goal can be expressed using the following optimization problem:\n\\begin{equation}\\label{SPQ}\n\\begin{array}{lcl}\n\\text{maximize} & f(x) = x^\\top A x \\\\\n\\text{subject to} & x \\in \\Delta\n\\end{array}\n\\end{equation}\nwhere $A$ is a symmetric real-valued matrix with null diagonal and \\begin{equation}\n\\Delta=\\{x\\in\\mathbb{R}^n:x\\geq 0 \\land e^\\top x = 1\\}\n\\end{equation} is the standard simplex of $\\mathbb{R}^n$. This is a standard quadratic problem (StQP), whose strict local solutions correspond to maximally cohesive clusters.\n\nA point $x$ is a strict local solution of problem \\ref{SPQ} if there exists a neighborhood $U \\subset \\Delta$ of $x$ such that $f(x) > f(z)$ $\\forall z \\in U\\setminus\\{x\\}$. We then define the support $\\sigma(x)$ of $x\\in\\Delta$ as the index set of the positive components of $x$:\n$$\\sigma(x) = \\{i\\in V: x_i > 0\\}$$\nIn other words, $\\sigma(x)$ is the set of vertices of $V$ that belong to the extracted cluster.\n\n\\begin{defn}[\\citeauthor{dominantset}, Characteristic vector]{}~\\label{ds_simplex} A non-empty subset $C \\subseteq V$ with positive total weight $W(C)$ admits a weighted \\textbf{characteristic vector} $x^C\\in \\Delta$, defined as:\n\n$$\nx_i^C= \\begin{cases}\n\\frac{w_C(i)}{W(C)} & \\text{if } i\\in C\\\\\n0 & \\text{otherwise}\n\\end{cases}\n$$\n\\end{defn}\nThe important notion provided by Definition \\ref{ds_simplex} is that dominant sets, through their characteristic vectors, also belong to the standard simplex imposed in problem \\ref{SPQ}. The advantage is that, empirically, the strict local maximizers found by the dominant sets procedure work well in extracting clusters.\n\n\n\\subsection{Link to Game Theory}\nGame theory is a theoretical framework used for examining and analyzing models of strategic interaction between competing rational actors. The clustering problem, as suggested by \\citeauthor{dominantset}, can be formulated in terms of a game, also called the \\textit{clustering game}, with the following properties:\n\\begin{itemize}\n\t\\item \\textbf{Symmetric game}: the payoff of playing any strategy does not depend on the player but only on the strategies played.\n\t\n\t\\item \\textbf{Complete knowledge}: players have complete knowledge about the game; they know which strategies can be played and the corresponding payoffs.\n\t\n\t\\item \\textbf{Non-cooperative game}: players take independent decisions about the strategy to play; they do not form a priori alliances.\n\t\n\t\\item Players play only \\textbf{pure strategies}, meaning that they do not behave \u201crationally\u201d but take decisions following a pre-programmed pattern.\n\\end{itemize}\nIn the clustering game we have two players that want to extract the best cluster structure from the data samples. The pure strategies available to the players are the data points themselves in $V$, and the similarity matrix $A$ is used as the \\textit{payoff matrix} for the clustering game: the values $A_{ij}$ and $A_{ji}$ are the revenues obtained by players 1 and 2 when they have played the strategies $(i,j) \\in V\\times V$. Remember that the main diagonal of the similarity matrix is zero, meaning that $A_{ii}=0$.\nA \\textit{mixed strategy} $x=(x_1, \\dots, x_n)^T \\in \\Delta$ is a probability distribution over the set of pure strategies, which models a stochastic playing strategy of a player. If players 1 and 2 play the mixed strategies $(x_1, x_2) \\in \\Delta \\times \\Delta$, then the expected payoffs for the players are $\\mathbf{x_1^TAx_2}$ and $\\mathbf{x_2^TAx_1}$ respectively.\nThe goal of the two players is of course to maximize as much as possible their resulting revenue: during the game each player extracts an object and the resulting revenue is assigned according to the payoff matrix $A$. 
Since we are considering $A$ equal to the similarity matrix, in order to maximize their revenue the two players should coordinate their strategies so that the extracted samples belong to the same cluster: only by selecting objects belonging to the same cluster can each player maximize his expected payoff. The desired condition is that the two players reach a \\textbf{symmetric Nash equilibrium}, that is, a state in which the two players agree about the cluster membership. A \\textbf{Nash Equilibrium} is a mixed-strategy profile $(x_1,x_2)\\in \\Delta\\times \\Delta$ such that no player can improve his expected payoff by changing his playing strategy while the opponent's strategy is kept fixed. This concept can be expressed with the following conditions:\\\\\n$$y_1^TAx_2 \\leq x_1^TAx_2 \\qquad y_2^TAx_1 \\leq x_2^TAx_1 \\qquad \\forall (y_1,y_2) \\in \\Delta\\times \\Delta.$$ \nA Nash equilibrium is \\textbf{symmetric} if $x_1 = x_2$; for a symmetric Nash equilibrium $x \\in \\Delta$ the two conditions collapse into a single one:\\\\\n$$y^TAx \\leq x^TAx \\qquad \\forall y \\in \\Delta$$\n\nThe symmetric Nash equilibrium condition satisfies the internal homogeneity criterion required by the dominant set definition. However, it does not include any kind of constraint that guarantees the maximality condition. In order to satisfy this condition it is necessary to look for a refinement of the Nash equilibrium, known as \\textbf{Evolutionary Stable Strategy (ESS)}. \n\n\\paragraph{Definition.} A symmetric Nash equilibrium $x\\in \\Delta$ is an ESS if it also satisfies:\n$$y^TAx = x^TAx \\implies x^TAy > y^TAy \\qquad \\forall y \\in \\Delta\\setminus\\{x\\}$$\nEven if a strategy $y$ provides the same payoff against $x$ as $x$ itself, it is still better to play $x$, since the payoff of $x$ against $y$ is greater than the payoff $y$ obtains against itself. In such a case both $x$ and $y$ are Nash equilibria, but only $x$ is an ESS.\\\\\nIn conclusion, we can say that the ESSs of the clustering game with affinity matrix $A$ are in \\textbf{correspondence} with the dominant sets of the same clustering problem instance; moreover, ESS\u2019s are in one-to-one correspondence with the (strict) local solutions of the StQP.\\\\\nIt is possible to say that ESS's satisfy the main characteristics of a cluster:\n\\begin{itemize}\n\t\\item \\textbf{Internal coherency:} high support for all samples within the group.\n\t\\item \\textbf{External incoherency:} low support for external samples.\n\\end{itemize}\n\n\n\\subsection{Extracting Dominant Sets}\nOne of the major advantages of dominant sets is that the extraction can be written in a few lines of code; moreover, we can define different clustering approaches:\n\\begin{itemize}\n\t\\item Extracting a single dominant set, done using the replicator dynamics procedure.\n\t\\item Partitioning the data points, obtained using the \\textit{peel-off} strategy: at each iteration we extract a dominant set and remove the corresponding vertices from the graph, until all vertices have been clustered (partitioning-based clustering).\n\t\\item Extracting overlapping clusters, obtained by enumerating dominant sets. \n\\end{itemize}\nIn our applications we are going to deal with the second one, assuming that each entity belongs to a cluster. 
This assumption is required since the subject of this thesis is based on comparing three algorithms, and the first two (K-Means and Spectral Clustering) are essentially partitioning-based algorithms.\\\\\n\nThe \\textbf{Replicator Dynamics} are deterministic game dynamics that have been developed in evolutionary game theory. They consider an idealized scenario whereby individuals are repeatedly drawn at random from a large, ideally infinite, population to play a two-player game. Players are not supposed to behave rationally: they act according to an inherited behavioral pattern (pure strategy), and an evolutionary selection process operates over time on the distribution of behaviors \\cite{replicator}.\\\\\n\nLet $x_i(t)$ be the population share playing pure strategy $i$ at time $t$. The state of the population at time $t$ is $x(t) = (x_1(t),\\dots,x_n(t))\\in\\Delta$.\\\\\nWe define an evolution equation, derived from Darwin's principle of natural selection:\n$$\\dot{x_i} = x_i~g_i(x)$$\n\nwhere $g_i$ specifies the rate at which pure strategy $i$ replicates and $\\dot{x_i}$ is the growth rate of strategy $i$:\n$$\\frac{\\dot{x_i}}{x_i} \\propto \\text{payoff of pure strategy } i - \\text{average population payoff}$$\nThe most general continuous form is given by the following equation:\n$$\\dot{x_i} = x_i[(Ax)_i - x^TAx]$$\nwhere $(Ax)_i$ is the $i$-th component of the vector $Ax$ and $x^TAx$ is the average payoff of the population: strategies that perform better than the population average increase their share.\n\n\\begin{thm}[Nachbar, 1990; Taylor and Jonker, 1978]{}\nA point $x\\in\\Delta$ is a Nash equilibrium if and only if $x$ is the limit point of a replicator dynamics trajectory starting from the interior of $\\Delta$. Furthermore, if $x\\in\\Delta$ is an ESS, then it is an asymptotically stable equilibrium point for the replicator dynamics.\\\\\n\\end{thm}\n\nAssuming that the payoff matrix $A$ is symmetric $(A = A^T)$, the game is said to be doubly symmetric. 
Thanks to this assumption we can derive some conclusions:\n\\begin{itemize}\n\t\\item \\textit{Fundamental Theorem of Natural Selection (Losert and Akin, 1983).} \\\\\n\tFor any doubly symmetric game, the average population payoff $f(x) = x^TAx$ is strictly increasing along any non-constant trajectory of the replicator dynamics, meaning that $\\frac{df(x(t))}{dt} \\geq 0$ $\\forall t \\geq0$, with equality if and only if $x(t)$ is a stationary point.\n\t\n\t\\item \\textit{Characterization of ESS's (Hofbauer and Sigmund, 1988).}\\\\\n\tFor any doubly symmetric game with payoff matrix $A$, the following statements are equivalent:\n\t\\begin{itemize}\n\t\t\\item $x\\in \\Delta^{ESS}$\n\t\t\\item $x \\in \\Delta$ is a strict local maximizer of $f(x) = x^TAx$ over the standard simplex $\\Delta$.\n\t\t\\item $x\\in\\Delta$ is asymptotically stable in the replicator dynamics.\n\t\\end{itemize}\n\\end{itemize}\n\n\nA well-known discretization of the replicator dynamics, which assumes non-overlapping generations, is the following (assuming a non-negative $A$): \n$$\nx_i(t+1) = x_i(t)\\frac{(Ax(t))_i}{x(t)^TAx(t)}\n$$\nwhich inherits most of the dynamical properties of its continuous-time counterpart.\n\\image{img/ulearning/rep_dynamics}{MATLAB implementation of discrete-time replicator dynamics}{0.5}\nThe components of the converged vector give us a measure of the participation of the corresponding vertices in the cluster, while the value of the objective function provides a measure of the cohesiveness of the cluster.\n
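The figure above shows a MATLAB implementation; an equivalent NumPy sketch of the discrete-time dynamics (ours; it assumes a non-negative symmetric \\texttt{A} with zero diagonal, and the tolerance is a made-up parameter) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef replicator_dynamics(A, x0=None, tol=1e-8, max_iter=10000):\n    n = len(A)\n    # Start from the barycenter of the simplex unless a start is given.\n    x = np.ones(n) / n if x0 is None else np.asarray(x0, float).copy()\n    for _ in range(max_iter):\n        Ax = A @ x\n        x_new = x * Ax / (x @ Ax)   # x_i <- x_i (Ax)_i / (x' A x)\n        if np.abs(x_new - x).sum() < tol:\n            break\n        x = x_new\n    # Support of x = extracted cluster; x' A x = its cohesiveness.\n    return x\n\\end{verbatim}\n\n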
\\subsection{Dominant Sets Hierarchy}\nA useful extension of the dominant sets formulation introduces a regularization parameter $\\alpha$ in the optimization problem. The new formulation is:\n\\begin{equation}\\label{alphaSPQ}\n\\begin{array}{lcl}\n\\text{maximize} & f_\\alpha(x) = x^\\prime(A - \\alpha I)x\\\\\n\\text{subject to} & x \\in \\Delta\n\\end{array}\n\\end{equation}\nwhere $\\alpha \\geq 0$ is a parameter and $I$ is the identity matrix.\\\\\n\nThe parameter $\\alpha$ affects the number of clusters found by the algorithm: with a sufficiently large value of $\\alpha$ (e.g. $\\alpha > |V|-1$) even the whole vertex set becomes a single cluster, while decreasing $\\alpha$ the clusters are split and their number increases.\\\\\nThe objective function $f_{\\alpha}$ now has two kinds of solutions:\n\\begin{itemize}\n\t\\item solutions which correspond to dominant sets for the original matrix $A$ $(\\alpha=0)$;\n\t\n\t\\item solutions which do not correspond to any dominant set for the original matrix $A$, although they are dominant for the scaled matrix $A+\\alpha(ee' - I)$. In other words, $\\alpha$ allows us to find subsets of points that are not sufficiently coherent to be dominant with respect to $A$, and hence should be split.\t\n\\end{itemize}\nThe resulting algorithm starts with a sufficiently large $\\alpha$ and adaptively decreases it during the clustering process, following these steps:\n\\begin{enumerate}\n\t\\item Let $\\alpha$ be a large positive value (e.g. $\\alpha > |V|-1$).\n\t\\item Find a partition of the data into $\\alpha$-clusters.\n\t\\item For all the $\\alpha$-clusters that are not $0$-clusters, recursively repeat step 2 with a decreased $\\alpha$.\n\\end{enumerate}\n\\newpage\n\\subsection{Properties}\n\\begin{itemize}\n\t\\item \\textbf{Clean separation between structure and noise.} It is often more important to cluster a small subset of the data very well than to optimize a clustering criterion over all the data points, particularly in application scenarios where a large amount of noisy data is encountered. \n\t\n\t\\item \\textbf{Overlapping clustering}. In some cases two distinct clusters may share some points, whereas partitional approaches impose that each element cannot belong to more than one cluster.\n\t\n\t\\item Dominant sets can be found by extracting \\textbf{local solutions}, so it is not necessary to look for global solutions.\n\t\n\t\\item They deal very well with the presence of noise.\n\t\n\t\\item Strong connection with theoretical results.\n\t\n\t\\item No assumptions are made on the structure of the affinity matrix: the framework is able to work with asymmetric and even negative similarity functions.\n\t\n\t\\item No a priori knowledge of the number of clusters is required (since they are extracted sequentially).\n\t\n\t\\item Clutter elements are left unassigned (useful, e.g., in figure/ground separation or one-class clustering problems).\n\t\n\t\\item By limiting the number of iterations of the dynamics it is possible to detect quasi-clique structures.\n\\end{itemize}\n", "meta": {"hexsha": "fa3ef2f83f2638fc5dc54645b75e561c89fba935", "size": 46479, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis_template/UnsupervisedLearning.tex", "max_stars_repo_name": "Cinofix/templates_latex", "max_stars_repo_head_hexsha": "d3c46697b59e9a0b53541589cbd9c4033babdcef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-04-20T09:05:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T15:05:41.000Z", "max_issues_repo_path": "thesis_template/UnsupervisedLearning.tex", "max_issues_repo_name": "Cinofix/templates_latex", "max_issues_repo_head_hexsha": "d3c46697b59e9a0b53541589cbd9c4033babdcef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis_template/UnsupervisedLearning.tex", "max_forks_repo_name": "Cinofix/templates_latex", "max_forks_repo_head_hexsha": "d3c46697b59e9a0b53541589cbd9c4033babdcef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-04-20T09:05:12.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-20T09:05:12.000Z", "avg_line_length": 78.5118243243, "max_line_length": 1021, "alphanum_fraction": 0.7261343833, "num_tokens": 13407, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118026095992, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5914494611843175}}
{"text": "\\documentclass[a4paper, 10pt, twocolumn, landscape]{article}\n\n%packages\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{mathpazo}\n\\usepackage{bookmark}\n\\usepackage{pdflscape}\n\\usepackage{array}\n\\usepackage[left=1cm, right=1cm, bottom=1cm, top=1cm, headsep=0.2cm]{geometry}\n\\usepackage{fancyhdr}\n\\usepackage{minted}\n\n%header\n\\lhead{INSTITUTO MILITAR DE ENGENHARIA - \\the\\year}\n\\chead{LOREM IPSUM}\n\\rhead{\\thepage}\n\\title{ICPC Notebook}\n\\renewcommand{\\headrulewidth}{2pt}\n\n%PDF properties\n\\hypersetup{\npdftitle={ICPC Notebook},\npdfauthor={Lorem Ipsum},\ncolorlinks=true,\nallcolors=blue,\n}\n\n%Minted options\n\\setminted{frame=none,breaklines,fontsize=\\small}\n\n\\begin{document}\n%set header\n\\pagestyle{fancy}\n\n%vertical line between columns\n\\setlength{\\columnseprule}{1pt}\n\n%set table of contents depth to section\n\\setcounter{tocdepth}{1}\n%create table of contents\n\\tableofcontents\n\n\\section{Template \\& vimrc}\n\\subsection{Template}\n\\inputminted{cpp}{template.cpp}\n\\subsection{vimrc}\n\\inputminted{vim}{vimrc}\n\n\\section{Graphs}\n\\subsection{DFS}\n\\inputminted{cpp}{graphs/dfs.cpp}\n\\subsection{BFS}\n\\inputminted{cpp}{graphs/bfs.cpp}\n\\subsection{Topological Sort}\n\\inputminted{cpp}{graphs/kahn.cpp}\n\\subsection{Articulation points and bridges}\n\\inputminted{cpp}{graphs/articulation.cpp}\n\\subsection{Strongly Connected Components}\n\\inputminted{cpp}{graphs/kosaraju.cpp}\n\\subsection{Minimum Spanning Tree}\n\\inputminted{cpp}{graphs/kruskal.cpp}\n\\inputminted{cpp}{graphs/prim.cpp}\n\\subsection{Shortest path}\n\\inputminted{cpp}{graphs/dijkstra.cpp}\n\\inputminted{cpp}{graphs/bellman-ford.cpp}\n\\inputminted{cpp}{graphs/spfa.cpp}\n\\inputminted{cpp}{graphs/floyd-warshall.cpp}\n\\subsection{Maximum Flow}\n\\inputminted{cpp}{graphs/dinic.cpp} % Flow\n\\subsection{Minimum Cost Maximum Flow}\n\\inputminted{cpp}{graphs/min-cost-max-flow.cpp} % Flow\n\\subsection{Maximum Bipartite Cardinality Matching}\n\\inputminted{cpp}{graphs/kuhn.cpp}\n\\subsection{Lowest Common Ancestor}\n\\inputminted{cpp}{graphs/lca.cpp}\n\\subsection{2-SAT}\n\\inputminted{cpp}{graphs/2-sat.cpp}\n\\subsection{Erdos-Gallai}\n\\inputminted{cpp}{graphs/erdos-gallai.cpp}\n\n\\section{Mathematics}\n\\subsection{Number Theory}\n\\inputminted{cpp}{math/basics.cpp}\n\\subsection{Primes}\n\\inputminted{cpp}{math/sieve.cpp}\n\\subsection{Euler phi}\n\\inputminted{cpp}{math/euler-phi.cpp}\n\\subsection{Extended Euclidean}\n\\inputminted{cpp}{math/extended-euclid.cpp}\n\\subsection{Primality test}\n\\inputminted{cpp}{math/miller-rabin.cpp}\n\\subsection{Prime factors}\n\\inputminted{cpp}{math/prime-factors.cpp}\n\\inputminted{cpp}{math/pollard-rho.cpp}\n\\subsection{Fast Fourier Transform}\n\\inputminted{cpp}{math/fft.cpp}\n\\subsection{Number Theoretic Transform}\n\\inputminted{cpp}{math/ntt.cpp}\n\\subsection{Chinese Remainder}\n\\inputminted{cpp}{math/chinese.cpp}\n\\subsection{Primitive Root}\n\\inputminted{cpp}{math/primitive-root.cpp}\n\\subsection{Linear Systems}\n\\subsubsection{Gaussian Elimination (double)}\n\\inputminted{cpp}{math/gauss-elim.cpp}\n\\subsubsection{Gaussian Elimination Modulo Prime}\n\\inputminted{cpp}{math/gauss-elim-prime.cpp}\n\\subsubsection{Gaussian Elimination Extended Inverse}\n\\inputminted{cpp}{math/gauss-elim-ext.cpp}\n\\subsection{Golden Section Search (Ternary 
Search)}\n\\inputminted{cpp}{math/gss.cpp}\n\\subsection{Josephus}\n\\inputminted{cpp}{math/josephus.cpp}\n\\subsection{Simpson Rule}\n\\inputminted{cpp}{math/simpson-rule.cpp}\n\n\\section{Strings}\n\\subsection{Rabin Karp}\n\\inputminted{cpp}{strings/rabin-karp.cpp}\n\\subsection{Knuth-Morris-Pratt}\n\\inputminted{cpp}{strings/kmp.cpp}\n\\subsection{Suffix Array}\n\\inputminted{cpp}{strings/suffix-array.cpp}\n\\subsection{Z Function}\n\\inputminted{cpp}{strings/z.cpp}\n\\subsection{Prefix Function}\n\\inputminted{cpp}{strings/prefix.cpp}\n\\subsection{Recursive-String Matching}\n\\inputminted{cpp}{strings/recursive-string-matching.cpp}\n\\subsection{Aho-Corasick}\n\\inputminted{cpp}{strings/aho.cpp}\n\\subsection{Palindromes}\n\\inputminted{cpp}{strings/manacher.cpp}\n\\subsection{Suffix Automata}\n\\inputminted{cpp}{strings/suffix-automaton.cpp}\n\n\\section{Data Structures}\n\\subsection{Disjoint Set Union}\n\\inputminted{cpp}{data-structures/dsu.cpp}\n\\subsection{Sparse Table}\n\\inputminted{cpp}{data-structures/sparse.cpp}\n\\subsection{Sparse Table 2D}\n\\inputminted{cpp}{data-structures/sparse2d.cpp}\n\\subsection{Fenwick Tree}\n\\inputminted{cpp}{data-structures/bit.cpp}\n\\subsection{Fenwick Tree 2D}\n\\inputminted{cpp}{data-structures/bit2d.cpp}\n\\subsection{Range Update Point Query Fenwick Tree}\n\\inputminted{cpp}{data-structures/bit-range.cpp}\n\\subsection{Segment Tree}\n\\inputminted{cpp}{data-structures/segtree.cpp}\n\\subsection{Segment Tree 2D}\n\\inputminted{cpp}{data-structures/segtree2d.cpp}\n\\subsection{Persistent Segment Tree}\n\\inputminted{cpp}{data-structures/persistent-segtree.cpp}\n\\inputminted{cpp}{data-structures/persistent-segtree-naum.cpp}\n\\subsection{Heavy-Light Decomposition}\n\\inputminted{cpp}{data-structures/hld.cpp}\n\\subsection{Centroid Decomposition}\n\\inputminted{cpp}{data-structures/centroid-decomposition.cpp}\n\\subsection{Trie}\n\\inputminted{cpp}{data-structures/trie.cpp}\n\\subsection{Mergesort Tree}\n\\inputminted{cpp}{data-structures/mergesort-tree.cpp}\n\\subsection{Treap}\n\\inputminted{cpp}{data-structures/treap.cpp}\n\n\\section{Dynamic Programming}\n\\subsection{Longest Increasing Subsequence}\n\\inputminted{cpp}{dynamic-programming/lis.cpp}\n\\subsection{Convex Hull Trick}\n\\inputminted{cpp}{dynamic-programming/convex-hull-trick.cpp}\n\\subsection{Divide and Conquer Optimization}\n\\inputminted{cpp}{dynamic-programming/divide-and-conquer-optimization.cpp}\n\\subsection{Knuth Optimization}\n\\inputminted{cpp}{dynamic-programming/knuth-optimization.cpp}\n\n\\section{Geometry}\n\\subsection{Basic}\n\\inputminted{cpp}{geometry/basics.cpp}\n\\subsection{Closest Pair of Points}\n\\inputminted{cpp}{geometry/closest-pair.cpp}\n\\subsection{Nearest Neighbours}\n\\inputminted{cpp}{geometry/neighbour.cpp}\n\n\\section{Miscellaneous}\n\\subsection{builtin}\n\\inputminted{cpp}{misc/builtin.cpp}\n\\subsection{prime numbers}\n\\inputminted{text}{misc/prime-numbers.txt}\n\\subsection{Week day}\n\\inputminted{cpp}{misc/week-day.cpp}\n\\subsection{Date}\n\\inputminted{cpp}{misc/date.cpp}\n\\subsection{Python}\n\\inputminted{python}{misc/python.py}\n\\subsection{Sqrt Decomposition}\n\\inputminted{cpp}{misc/sqrt-decomposition.cpp}\n\n\\newpage\n\\onecolumn\n\\begin{table}[ht]\n\\centering\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{|c|c|c|c|m{80mm}|m{120mm}|c|}\n\\hline\n& A & C & N & Subject & Description & Diff\\\\\n\\hline\n& & & & & &\\\\\nA & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nB & & & & & &\\\\\n& & & & & 
&\\\\\n\\hline\n& & & & & &\\\\\nC & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nD & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nE & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nF & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nG & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nH & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nI & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nJ & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nK & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n& & & & & &\\\\\nL & & & & & &\\\\\n& & & & & &\\\\\n\\hline\n\\end{tabular}\n}\n\\end{table}\n\n\\end{document}\n", "meta": {"hexsha": "e9cecf246becba745d9c3052f7f3fd7562351f60", "size": 7117, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "biblio.tex", "max_stars_repo_name": "Davi-Holanda/icpc-notebook", "max_stars_repo_head_hexsha": "096441ba9e53c847ce8dd6a65340e163d274a946", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 46, "max_stars_repo_stars_event_min_datetime": "2018-01-23T01:43:23.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-03T15:16:25.000Z", "max_issues_repo_path": "biblio.tex", "max_issues_repo_name": "Davi-Holanda/icpc-notebook", "max_issues_repo_head_hexsha": "096441ba9e53c847ce8dd6a65340e163d274a946", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-08-02T16:29:03.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-16T22:36:11.000Z", "max_forks_repo_path": "biblio.tex", "max_forks_repo_name": "Davi-Holanda/icpc-notebook", "max_forks_repo_head_hexsha": "096441ba9e53c847ce8dd6a65340e163d274a946", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2018-07-12T05:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-19T01:40:49.000Z", "avg_line_length": 26.9583333333, "max_line_length": 78, "alphanum_fraction": 0.7358437544, "num_tokens": 2324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8267117876664789, "lm_q1q2_score": 0.5914494504936506}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\n\\begin{document}\n\n\\title{PoGo Kit}\n\\author{@possatti}\n\n\\maketitle\n\n\\begin{abstract}\nThis is a draft of some Pok\u00e9mon GO formulas.\n\\end{abstract}\n\n\\section{Calculating TDO}\n\nReal damage formula:\n\\begin{equation}\n  \\label{real_damage_formula}\n  RealDamage = Floor(\\frac{1}{2} * Power * \\frac{Atk}{Def} * Multipliers) + 1\n\\end{equation}\n\nSimplified damage formula:\n\\begin{equation}\n  \\label{simpified_damage_formula}\n  Damage = \\frac{1}{2} * Power * \\frac{Atk}{Def} * Multipliers\n\\end{equation}\n\nAuxiliary:\n\\begin{equation}\n  \\label{PPT}\n  FastPPT = FastPower \\div Turns\n\\end{equation}\n\\begin{equation}\n  \\label{EPT}\n  FastEPT = EnergyDelta \\div Turns\n\\end{equation}\n\\begin{equation}\n  \\label{PPE}\n  ChargePPE = ChargePower \\div EnergyDelta\n\\end{equation}\n\\begin{equation}\n    HP = Sta * CPM_{lvl}\n\\end{equation}\n\nDamage per Turn (DPT):\n\\begin{equation}\n  FastDPT = \\frac{1}{2} * FastPPT * \\frac{Atk}{Def} * Multipliers\n\\end{equation}\n\\begin{equation}\n  ChargeDPT = \\frac{1}{2} * ChargePPE * FastEPT * \\frac{Atk}{Def} * Multipliers\n\\end{equation}\n\\begin{equation}\n  \\begin{aligned}\n    DPT = FastDPT + ChargeDPT \\\\\n    DPT = \\frac{FastPPT * Atk}{2Def} + \\frac{ChargePPE * FastEPT * Atk}{2Def} \\\\\n    DPT = (FastPPT + ChargePPE * FastEPT) * \\frac{Atk}{2Def} \\\\\n  \\end{aligned}\n\\end{equation}\n\nThe Pok\u00e9mon stays alive for some turns (TotalTurns):\n\\begin{equation}\n  \\begin{aligned}\n    TotalTurns_A = HP_A \\div DPT_B \\\\\n    TotalTurns_A = HP_A \\div \\left[ (FastPPT_B + FastEPT_B * ChargePPE_B) * \\frac{Atk_B}{2Def_A} \\right] \\\\\n    TotalTurns_A = \\frac{HP_A}{FastPPT_B + FastEPT_B * ChargePPE_B} * \\frac{2Def_A}{Atk_B} \\\\\n  \\end{aligned}\n\\end{equation}\n\n% Why we measure moveset pairs, instead of moveset triples? Because the player will always prefer the one with highest ChargePPE, unless type advantage changes things. 
Knowing what types will come into play is not possible, then let's just forget about the triples.\n\nHow to calculate TDO:\n\\begin{equation}\n  \\begin{aligned}\n    TDO = DPT_A * TotalTurns \\\\ \\\\\n    TDO = (FastPPT_A + FastEPT_A * ChargePPE_A) * \\frac{Atk_A}{2Def_B} \\\\\n      * \\frac{HP_A}{FastPPT_B + FastEPT_B * ChargePPE_B} * \\frac{2Def_A}{Atk_B} \\\\ \\\\\n    TDO = (FastPPT_A + FastEPT_A * ChargePPE_A) * Atk_A * HP_A * Def_A \\\\\n      * \\frac{1}{(FastPPT_B + FastEPT_B * ChargePPE_B) * Atk_B * Def_B} \\\\ \\\\\n    TDO \\propto (FastPPT_A + FastEPT_A * ChargePPE_A) * Atk_A * HP_A * Def_A\n  \\end{aligned}\n\\end{equation}\n\n% \\subsection{Subsection Heading Here}\n\n% \\begin{figure}\n%     \\centering\n%     \\includegraphics[width=3.0in]{myfigure}\n%     \\caption{Simulation Results}\n%     \\label{simulationfigure}\n% \\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "c93104d40713b7bafa3817dbc7b2452bf782c121", "size": 2697, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notebook/notebook.tex", "max_stars_repo_name": "possatti/pogokit", "max_stars_repo_head_hexsha": "5b1ddcf04fa7a39ecfc3ad30fc63c1d8a129356a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebook/notebook.tex", "max_issues_repo_name": "possatti/pogokit", "max_issues_repo_head_hexsha": "5b1ddcf04fa7a39ecfc3ad30fc63c1d8a129356a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook/notebook.tex", "max_forks_repo_name": "possatti/pogokit", "max_forks_repo_head_hexsha": "5b1ddcf04fa7a39ecfc3ad30fc63c1d8a129356a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.3894736842, "max_line_length": 265, "alphanum_fraction": 0.6915090842, "num_tokens": 945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117812622843, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5914494459119362}}
{"text": "%!TEX root =  ../main.tex\n\n\\subsection{Deriving}\n\n\\objective{Prove the Power Rule, Product Rule, and Quotient Rule, and apply them to arbitrary derivatives}\n\n\nComing back to Earth for a section from all this contemplation of infinities, we see the\nextreme usefulness of it all.  The number $e$ is not the answer to any algebraic \nexpression.  That is, one cannot build a polynomial --- even an irregular one with\nrational coefficients and rational exponents --- that has $e$ as a solution.  $e$ is a\ntranscendental number, the result of an infinite process, though it itself is a finite number.\nThere is a very important sense in which $e$ contains an infinity within itself.\n\nIt takes a system like calculus to create $e$, and it returns the favor by unleashing a\nvast reservoir of new waters for calculus to navigate.  The derivative of $e^x$ is $e^x$,\nand its inverse --- $\\ln{x}$ --- is just as amazing.  The inverse of $y=e^x$ can be written\nas $x=e^y$.  By implicit differentiation, $dx = e^ydy$, and therefore $\\frac{dy}{dx} =\n\\frac{1}{e^y}$.  Well, we began by saying $e^y=x$, so the derivative of $\\ln{x}$ must\nbe $\\frac{1}{x}$.  Hopefully, you have been curious for several chapters what\ncould ever have the derivative of $\\frac{1}{x}$, since nothing could ever make that\nvia the Power Rule.\n\n\\personfeature[-2in]{\\chapdir/pics/Leibniz_Hannover}{Gottfried Wilhelm Leibniz}{1646 -\n1716}{was a German polymath and philosopher who occupies a prominent place in the history of mathematics and the history of philosophy, having developed differential and integral calculus independently of Isaac Newton. Leibniz's notation has been widely used ever since it was published. It was only in the 20th century that his Law of Continuity and Transcendental Law of Homogeneity found mathematical implementation (by means of non-standard analysis).\n\\href{https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz}{Wikipedia}}\n\n\\subsection{Proofs}\nSpeaking of the Power Rule, we have been using it as we made it via Induction for \nsome time now.  Not that Induction is a bad thing, but $\\ln$ affords us a more elegant \nproof.  Suppose $y$ is defined as some function to a power:\n$$\ny = \\left(f(x)\\right)^n\n$$\nThere is no immediately obvious way to differentiate the left side of this identity, but\nwhat if we take the log of both sides first?  This is called the \\textbf{logarithmic derivative}.\n$$\n\\ln{y} = \\ln{\\left(f(x)\\right)^n}\n$$\nA happy consequence comes from the fact that a log of a power is the same as the\nlog times the power.\n$$\n\\ln{y} = n\\cdot\\ln{f(x)}\n$$\nNow we can take the derivative of both sides implicitly, remembering the Chain Rule on the right.\n$$\n\\frac{1}{y} \\cdot y^\\prime = n \\cdot \\frac{f^\\prime(x)}{f(x)}\n$$\nSolving for $y^\\prime$, we get\n$$\ny^\\prime = n \\frac{y}{f(x)} f^\\prime(x)\n$$\nIf we substitute back in the original definition of $y$ (i.e. $\\left(f(x)\\right)^n$) and simplify,\nwe see\n\\begin{equation}\ny^\\prime = n\\cdot{}\\left(f(x)\\right)^{n-1}\\cdot{}f^\\prime(x)\n\\end{equation}\nthe generalized Power Rule.\n\n
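As a quick worked example (ours, not in the original text), take $y=(x^2+1)^3$.  Then\n$\\ln{y} = 3\\ln{(x^2+1)}$, so differentiating implicitly gives $\\frac{y^\\prime}{y} = 3\\cdot\\frac{2x}{x^2+1}$, and therefore\n$$\ny^\\prime = 3(x^2+1)^3\\cdot\\frac{2x}{x^2+1} = 6x(x^2+1)^2\n$$\nexactly what the generalized Power Rule predicts.\n\n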
$\\left(f(x)\\right)^n$) and simplify,\nwe see\n\\begin{equation}\ny^\\prime = n\\cdot{}\\left(f(x)\\right)^{n-1}\n\\end{equation}\nthe generalized Power Rule.\n\n\n\\subsection{Product and Quotient Rule}\nThe Product Rule can also be proven by Logarithmic Derivative, without limits, for any \n$y = f\\cdot{}g$.\n\\begin{align*}\n\\ln{y}  = & \\ln{f \\cdot{} g}  \\\\\n\\frac{1}{y}y^\\prime =& \\left(\\ln(f)\\right)^\\prime + \\left(\\ln(g)\\right)^\\prime \\\\\ny^\\prime =& y(\\frac{1}{f}f^\\prime + \\frac{1}{g}g^\\prime)  \\\\\n =& f\\cdot{}g(\\frac{1}{f}f^\\prime + \\frac{1}{g}g^\\prime)  \\\\\n\\end{align*}\n\\begin{equation}\n  =  g\\cdot{}f^\\prime + f\\cdot{}g^\\prime\n\\end{equation}\n\nThe same goes for the Quotient Rule, for any $y=\\frac{f}{g}$.\n\\begin{align*}\n\\ln{y} &= \\ln{\\frac{f}{g}} \\\\\n\\frac{1}{y}y^\\prime &= \\left(\\ln{f} - \\ln{g}\\right)^\\prime \\\\\ny^\\prime &= y\\left(\\frac{1}{f}f^\\prime - \\frac{1}{g}g^\\prime\\right) \\\\\n  &= \\frac{f}{g}\\left(\\frac{f^\\prime}{f} - \\frac{g^\\prime}{g}\\right) \\\\\n  &= \\frac{f^\\prime}{g} - \\frac{g\\cdot{}f^\\prime}{g^2}\\\\\n\\end{align*}\n\\begin{equation}\n  = \\frac{g\\cdot{}f^\\prime - g\\cdot{}f^\\prime}{g^2}\n\\end{equation}\n\n\\subsection{Derivative Review}\nLet us summarize all of the derivative shortcuts we have discerned.  You are \nresponsible to prove without assistance all of these\n\nWe will use $u$ as a variable of differentiation because there might be a\n(nested?) set of Chain Rules to apply before we get down to $x$.  Assuming\n$u$ is a function of $x$:\n\\begin{equation}\n\\frac{d}{dx}u = \\frac{du}{dy}\\cdot{}\\frac{dy}{dx}\n\\end{equation}\n\nWe have been given a looping cycle of trigonometric derivatives without proof:\n\\begin{equation}\n\\left(\\sin{u}\\right)^\\prime = \\cos{u}\\frac{du}{dx}\n\\end{equation}\n\\begin{equation}\n\\left(\\cos{u}\\right)^\\prime = -\\sin{u}\\frac{du}{dx}\n\\end{equation}\n\\begin{equation}\n\\left(-\\sin{u}\\right)^\\prime = -\\cos{u}\\frac{du}{dx}\n\\end{equation}\n\\begin{equation}\n\\left(-\\cos{u}\\right)^\\prime = \\sin{u}\\frac{du}{dx}\n\\end{equation}\nAny polynomial or power function can be differentiated with the Power Rule:\n\\begin{equation}\n\\left(u^n\\right)^\\prime = n\\cdot{}u^{n-1}\\frac{du}{dx}\n\\end{equation}\nAny exponential function can be differentiated as follows:\n\\begin{equation}\n(b^u)^\\prime = b^u\\cdot{}\\ln{u}\\frac{du}{dx}\n\\end{equation}\nAny logarithmic function can be differentiated as follows:\n\\begin{equation}\n\\left(\\log_a{u}\\right)^\\prime = \\frac{1}{\\ln{a}}\\cdot{}\\frac{1}{u}\\cdot{}\\frac{du}{dx}\n\\end{equation}\n\nTogether with the Product and Quotient Rule above, almost any major\nfunction should be differentiable for you now.\n\n\n\n", "meta": {"hexsha": "deb357b346674ea25343568af430462c79d945bc", "size": 5334, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch08/0803.tex", "max_stars_repo_name": "aquatiki/AnalysisTextbook", "max_stars_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-10-08T15:05:17.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-07T12:32:53.000Z", "max_issues_repo_path": "ch08/0803.tex", "max_issues_repo_name": "aquatiki/AnalysisTextbook", "max_issues_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch08/0803.tex", "max_forks_repo_name": "aquatiki/AnalysisTextbook", "max_forks_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.671875, "max_line_length": 455, "alphanum_fraction": 0.704911886, "num_tokens": 1676, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484144, "lm_q2_score": 0.8267117876664789, "lm_q1q2_score": 0.5914494404614544}}
{"text": "% Created 2020-06-01 Mon 18:31\n% Intended LaTeX compiler: pdflatex\n\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\author{Eric Nguyen}\n\\date{December 12, 2018}\n\\title{A Look at Atmospheric CO\\(_{\\text{2}}\\)}\n\\hypersetup{\n pdfauthor={Eric Nguyen},\n pdftitle={A Look at Atmospheric CO\\(_{\\text{2}}\\)},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.1.9)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\n\\section{Description}\n\\label{sec:org91827ea}\n\nThe original document can be found at\n\\url{http://sustainabilitymath.org/word/Mauna-Loa-CO2.docx}. \\\\\n\n\\noindent A Look at Atmospheric CO\\(_{\\text{2}}\\) \\\\\nProduced by Thomas J. Pfaff \\\\\nIthaca College \\\\\nUpdated June 2018 \\\\\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.9\\linewidth]{./figures/figure-01.jpg}\n\\caption{\\label{fig:org6bd9d10}\nFigure 1 Atmospheric CO2 data, 1950-2017, from the Mauna Loa site, \\url{ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2\\_annmean\\_mlo.txt}, with a fitted curve.}\n\\end{figure}\n\n\\subsection{Note}\n\\label{sec:org12b3cff}\n\nAccording to Warren\\footnote{According to IPCC Fifth Assessment Report (AR5) page 22: \\url{https://www.ipcc.ch/pdf/assessment-report/ar5/syr/AR5\\_SYR\\_FINAL\\_SPM.pdf}}, at \\(1^{\\circ}\\) Celsius, in addition to the trends\nwe are already observing, oceans will further acidify, natural\necosystems will start to collapse, and as many as 18-60 million people\nin the developing world will go hungry.\nAt \\(1.5^{\\circ}\\) Celsius the Greenland ice sheet will melt, eventually causing\na 7m rise in sea level, inundating coastal areas.\nAt \\(2^{\\circ}\\) Celsius agricultural yields in the rich nations will start to\nfall and 1-3 billion people will experience water scarcity.\nAt \\(3^{\\circ}\\) Celsius the Amazon rainforest is expected to collapse and at\n\\(4^{\\circ}\\) Celsius most of Africa and Australia will lose all agricultural\nproduction.\n\n\\section{Questions}\n\\label{sec:orgc778e3d}\n\nAnswer the following questions using the fitted curve,\n\\(\\hat{y} = 310.42336317 + 0.52063260x + 0.01345947x^2\\),\nthat is represented in \\hyperref[fig:org6bd9d10]{Figure 1}.\n\n\\subsection{Question 1}\n\\label{sec:org98ee15e}\n\nFind a model with output Average MH4 in PPB and input years (or years after 1950).\n[Either delete this question or the figure, in which case provide the data.]\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nFirst, we need to extract the data from the 'data.txt' file.\n\n\\begin{verbatim}\n# This code snippet extracts the data from 'data.txt'\n%config InlineBackend.figure_format='retina'\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndata = pd.read_csv(\"data.txt\")\n\n# Years after 1950, instead of actual years\ndata[\"year\"] = data[\"year\"] - 1950\n\n# Print head\nhead = data.head()\n[list(head)] + [None] + head.values.tolist()\n\\end{verbatim}\n\n\\begin{center}\n\\begin{tabular}{rrr}\nyear & mean & unc\\\\\n\\hline\n9.0 & 315.97 & 0.12\\\\\n10.0 & 316.91 & 0.12\\\\\n11.0 & 317.64 & 0.12\\\\\n12.0 & 318.45 & 0.12\\\\\n13.0 & 318.99 & 0.12\\\\\n\\end{tabular}\n\\end{center}\n\nOnce we've extracted 
the data, we can make a visualization.\n\n\\begin{verbatim}\n# This code snippet plots the data from 'data.txt'\ndef plot_config(size=(10, 10),\n                xlab = \"Years after 1950\",\n                ylab = \"ppm\"):\n    plt.figure(figsize=size)\n    plt.xlabel(xlab)\n    plt.ylabel(ylab)\n    plt.grid()\n\ndef scatterplot():\n    plot_config()\n    plt.scatter(data[\"year\"],\n                data[\"mean\"],\n                marker=\"s\",\n                facecolors=\"none\",\n                edgecolors=\"black\")\n\nscatterplot()\nplt.savefig(\"figures/data.png\")\n\\end{verbatim}\n\n\\begin{center}\n\\includegraphics[width=.9\\linewidth]{./figures/data.png}\n\\end{center}\n\nUsing the data, we can create a model using NumPy's \\texttt{polyfit} function.\n\n\\begin{verbatim}\n# This code snippet creates the provided model\n# as well as generating a new model based on np.polyfit\n\n# Model given in the question for comparison\nold_model = np.poly1d([0.01345947, 0.52063260, 310.42336317])\nprint(\"Old model:\")\nprint(old_model, \"\\n\")\n\n# Model using newer data\nfit = np.polyfit(data[\"year\"], data[\"mean\"], 2)\nmodel = np.poly1d(fit)\nprint(\"New model:\")\nprint(model)\n\\end{verbatim}\n\n\\begin{verbatim}\nOld model:\n         2\n0.01346 x + 0.5206 x + 310.4 \n\nNew model:\n         2\n0.01249 x + 0.6027 x + 308.9\n\\end{verbatim}\n\nWith our models ready, we can plot them to compare them visually.\n\n\\begin{verbatim}\n# This code snippet plots the models\nx = np.linspace(min(data[\"year\"]), max(data[\"year\"]), 1000)\nscatterplot()\nplt.plot(x, old_model(x), color=\"blue\", lw=2)\nplt.plot(x, model(x), color=\"red\", lw=2)\nplt.savefig(\"figures/models.png\")\n\\end{verbatim}\n\n\\begin{center}\n\\includegraphics[width=.9\\linewidth]{./figures/models.png}\n\\end{center}\n\nHere, NumPy provides us with the following model:\n\n\\[\\hat{y} = 0.01249x^2 + 0.6027x + 308.9.\\]\n\nIndeed, this closely matches the model provided\nto us:\n\n\\[\\hat{y} = 310.42336317 + 0.52063260x + 0.01345947x^2\\]\n\nAs an additional check, we can compare the models\nvisually and see that their plots nearly coincide.\n\n\\subsection{Question 2}\n\\label{sec:orgf700d27}\n\nAccording to the model, what will CO\\(_{\\text{2}}\\) levels be in 2050?\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\n\\begin{verbatim}\nmodel(2050 - 1950)\n\\end{verbatim}\n\n\\begin{verbatim}\n494.0709782199019\n\\end{verbatim}\n\nAccording to the model, there will be approximately\n494.071 ppm of CO\\(_{\\text{2}}\\) in the atmosphere by 2050.\n\n\\subsection{Question 3}\n\\label{sec:org0182eb6}\n\nWhat is the rate of change of CO\\(_{\\text{2}}\\) in 2017\n(the last year of the data set) and what is\nthe percentage rate of change?\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nTaking the derivative of the model provided by \\hyperref[sec:org98ee15e]{Question 1},\nwe find the rate of change of CO\\(_{\\text{2}}\\) to be modeled by\n\n\\[\\hat{y}' = 0.02498x + 0.6027.\\]\n\nWe can verify this in code:\n\n\\begin{verbatim}\nrate_of_change = np.poly1d([model.c[0] * 2, model.c[1]])\nrate_of_change\n\\end{verbatim}\n\n\\begin{verbatim}\n \n0.02497 x + 0.6027\n\\end{verbatim}\n\nNow all we need to do is use that model to find the\nrate of change in 2017:\n\n\\begin{verbatim}\nq3a = rate_of_change(2017 - 1950)\nq3a\n\\end{verbatim}\n\n\\begin{verbatim}\n2.276033781313511\n\\end{verbatim}\n\nSo, the rate of change of CO\\(_{\\text{2}}\\) in 2017 is approximately\n2.276 ppm/year.  The percentage rate of change is this rate divided by\nthe 2017 level of about 406 ppm, which comes out to roughly 0.56\\% per year.
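\n\nWe can double-check that percentage directly (a quick sketch reusing the\n\\texttt{model} and \\texttt{rate\\_of\\_change} objects defined above):\n\n\\begin{verbatim}\n# Percentage rate of change in 2017: (dy/dx) / y * 100,\n# which evaluates to roughly 0.56 (percent per year)\n100 * rate_of_change(2017 - 1950) / model(2017 - 1950)\n\\end{verbatim}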
\n\n\\subsection{Question 4}\n\\label{sec:orge7010f3}\n\nAssuming that CO\\(_{\\text{2}}\\) levels continue to grow constantly at the\n2017 rates, what will the CO\\(_{\\text{2}}\\) levels reach in 2050?\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nIf the CO\\(_{\\text{2}}\\) levels were to continue to grow constantly\nat the 2017 rates, then we can represent this model\nby taking the rate and the CO\\(_{\\text{2}}\\) level at 2017.\n\n\\begin{verbatim}\nq4_model = np.poly1d([q3a, list(data[\"mean\"])[-1]])\nq4_model\n\\end{verbatim}\n\n\\begin{verbatim}\n \n2.276 x + 406.6\n\\end{verbatim}\n\nPlotted, the assumed model would look like this:\n\n\\begin{verbatim}\n# This code snippet plots the assumed linear model\nx = np.linspace(0, 2050 - 2017 + 5, 100)\nplot_config(xlab = \"Years after 2017\")\nplt.plot(x, q4_model(x))\nplt.savefig(\"figures/model-q4.png\")\n\\end{verbatim}\n\nThen the predicted CO\\(_{\\text{2}}\\) level by 2050 would be calculated like so:\n\n\\begin{verbatim}\nq4_model(2050 - 2017)\n\\end{verbatim}\n\n\\begin{verbatim}\n481.6591147833459\n\\end{verbatim}\n\nSo according to this model, the CO\\(_{\\text{2}}\\) level by 2050 will\nreach approximately 481.66 ppm.\n\n\\subsection{Question 5}\n\\label{sec:org95871d8}\n\nAtmospheric CO\\(_{\\text{2}}\\) levels of 450 ppm yield a likely chance that\nglobal average temperature increases will be at least \\(2^{\\circ}\\)\nCelsius. \\footnote{Warren, R. 2006. Impacts of global climate change at different annual mean global temperature increases, in H.J. Schellnhuber et al. (eds.) Avoiding Dangerous Climate Change. Cambridge University Press, Cambridge}\nAccording to the model, in what year do we reach a CO\\(_{\\text{2}}\\) level of 450 ppm?\nIf we assume CO\\(_{\\text{2}}\\) levels continue to grow constantly at the 2017 rates,\nin what year do we reach a CO\\(_{\\text{2}}\\) level of 450 ppm?\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nTo find the year we reach a CO\\(_{\\text{2}}\\) level of 450 ppm according to the model,\nwe can translate the model down vertically by 450 and then take the\nlargest positive root.\n\n\\begin{verbatim}\nq5a_m1 = np.poly1d([model.c[0], model.c[1], model.c[2] - 450])\nq5a_p1 = max(q5a_m1.roots)\nq5a_p1\n\\end{verbatim}\n\n\\begin{verbatim}\n84.86137790253937\n\\end{verbatim}\n\nTo make sure this value indeed corresponds to a CO\\(_{\\text{2}}\\) level of 450 ppm,\nwe can plug it back into the model.\n\n\\begin{verbatim}\nmodel(q5a_p1)\n\\end{verbatim}\n\n\\begin{verbatim}\n450.0\n\\end{verbatim}\n\nWe can then repeat the same steps for the 2017 model.\n\n\\begin{verbatim}\nq5a_m2 = np.poly1d([q4_model.c[0], q4_model.c[1] - 450])\nq5a_p2 = max(q5a_m2.roots)\nq5a_p2\n\\end{verbatim}\n\n\\begin{verbatim}\n19.09022632121249\n\\end{verbatim}\n\nThis produces a very different value.\nThat is because the 2017 model predicts the CO\\(_{\\text{2}}\\) levels\nstarting from 2017, but we want a prediction in\nyears since 1950, not 2017.\nThe following calculation will provide us with this.\n\n\\begin{verbatim}\n2017 - 1950 + q5a_p2\n\\end{verbatim}\n\n\\begin{verbatim}\n86.0902263212125\n\\end{verbatim}\n\nThen we also verify this prediction.\n\n\\begin{verbatim}\nq4_model(q5a_p2)\n\\end{verbatim}\n\n\\begin{verbatim}\n450.0\n\\end{verbatim}\n\nAccording to the model, we reach a CO\\(_{\\text{2}}\\) level of 450 ppm\napproximately 84.86 years after 1950, so in the year 2034.\n\nIf we assume CO\\(_{\\text{2}}\\) levels continue to grow constantly at the\n2017 rates, we would reach a 
CO\\(_{\\text{2}}\\) level of 450 ppm in\napproximately 86.09 years after 1950, so in the year 2036.\n\n\\subsection{Question 6}\n\\label{sec:org2e4fb62}\n\nFill in the blank:\nIn order to avoid reaching 450 ppm of atmospheric CO\\(_{\\text{2}}\\) the\ntrend in the data would have to become (???Calculus Term???).\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nAsymptotic: the trend would need a horizontal asymptote, with 450 ppm acting\nas an upper limit that is never reached as \\(x\\) (years after 1950) approaches infinity.\n\n\\subsection{Question 7}\n\\label{sec:orgb99afc8}\n\nProvide a (general or real world related) question that you would\nlike answered based on your work here.\nThis should not be something that you could answer yourself with\na little work.\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nHow could we leverage artificial intelligence to optimize CO\\(_{\\text{2}}\\) emissions?\n\n\\subsection{Question 8}\n\\label{sec:org7100531}\n\nSummarize your work on questions 1-5 in a short paragraph\nas if it were a news article.\n\n\\noindent\\rule{\\textwidth}{0.5pt}\n\nCarbon dioxide in the Earth's atmosphere is growing at an accelerating rate:\nthe fitted model is quadratic, so the yearly increase itself keeps climbing.\nThis is a concern as atmospheric carbon dioxide has been known to be a\nsignificant factor in climate change on Earth.\nClimate change can be devastating, as shown in the description.\nMy work predicts the amount of atmospheric carbon dioxide for each\nyear and the rate at which it increases each year.\nAdditionally, it visualizes those predictions on graphs generated\nby matplotlib.\n\\end{document}", "meta": {"hexsha": "a85f70cd8863162eb8ffce974d2a0e7145edf7ac", "size": 11345, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "A Look at Atmospheric CO2.tex", "max_stars_repo_name": "airicbear/A_Look_at_Atmospheric_CO2", "max_stars_repo_head_hexsha": "86fb7c896b19450f8b3a74aa847e9d0ff897df34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "A Look at Atmospheric CO2.tex", "max_issues_repo_name": "airicbear/A_Look_at_Atmospheric_CO2", "max_issues_repo_head_hexsha": "86fb7c896b19450f8b3a74aa847e9d0ff897df34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "A Look at Atmospheric CO2.tex", "max_forks_repo_name": "airicbear/A_Look_at_Atmospheric_CO2", "max_forks_repo_head_hexsha": "86fb7c896b19450f8b3a74aa847e9d0ff897df34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.9477434679, "max_line_length": 232, "alphanum_fraction": 0.7256059938, "num_tokens": 3494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5914303248564046}}
{"text": "%! Author = tstreule\n\n\\section{PET \\textnormal{-- Positron Emission Tomography}}\n\n\\begin{minipage}{.2\\linewidth}\n    \\includegraphics[width=.9\\linewidth]{PET_PSF}\n\\end{minipage}\n\\begin{minipage}{.8\\linewidth}\n    \\textbf{Positron range} $\\propto$ E. \\quad $\\textrm{FWHM}\\approx 0.1 - 0.5 \\unit{mm}$.\n\n    \\textbf{Radionuclide}: $\\ce{^{18}F}$ better than $\\ce{^{11}C}$ (110 vs 20 min)\n\n    \\textbf{Image production}: Scintillator (fine grid) $\\to$ PMT / avalance Diode $\\to$ Electronics. Event finding in scintillator is linear.\n    \\fbox{$y = \\frac{ S\\ped{A}+S\\ped{B}-S\\ped{C}-S\\ped{D} }{ S\\ped{A}+S\\ped{B}+S\\ped{C}+S\\ped{D} }$}\n\n    \\textbf{PSF}: Trapezoid/Triangle. \\highlight{$\\displaystyle \\textrm{FWHM} = w\\frac{r+x}{2r}$}\\\\\n    {\\scriptsize r: Ring diameter, x: distance from center, w: grid spacing of sc.}\n\\end{minipage}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Efficiency (to actually measure concentrations)}\nDetection eff. $\\epsilon = (1-e^{-\\mu d}) \\cdot \\Phi$ \\hfill ($\\Phi$: frac. of events in E window)\n\nGeometric eff.: $\\Omega = 4\\pi \\sin(\\arctan(z/D))$ \\hfill $D=2r$, $z$: length\n\nRadial geometric coverage $\\phi$: Fraction not gap between crystals\n\n\\textbf{Total sensitivity}: \\highlight{$\\displaystyle \\eta = \\epsilon^2 \\phi \\frac{\\Omega}{4\\pi}$}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Problems, solutions and additions}\n\\textbf{Correction table} for which scintillator under PMT\n\n\\textbf{Correction matrix}: against non-uniform detector eff.\n\n\\textbf{Attenuation correction}: Measure Image in HU via X-Ray CT, $\\to$ attenuation coeff. $\\to$ multiply PET lines by $\\eu^{\\mu D_{ij}}$\n\n\\textbf{Random coincidence}: Measure Background noise, then -\n\n\\textbf{Detector dead time} $\\delta$: $N\\ped{measured} = N\\ped{true} \\eu^{-N\\ped{true} \\delta}$\n\n\\textbf{TOF} (Time Of Flight of photon): $\\Delta x = \\frac{c\\Delta t}{2} = \\unit[75]{mm} \\;\\widehat{=}\\; \\unit[500]{ps}$\n\n\\textbf{3D image reconstr.}: Use coinc. between different rings.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quantitative PET - various effects and definitions}\n\\textbf{Partial voluming}: When measuring the \\underline{intensity} of an object, it appears smaller at the edge of the object than it is. $\\to$ Only measure in the center\n\n\\textbf{Injected dose per gram of tissue} \\fbox{$\\%ID/g = \\frac{c_t v_t}{D_{inj}} \\cdot \\frac{1}{m_t} \\cdot 100\\%$}, \\quad\n$c_t$: tissue conc., $v_t$: vol. of tissue ROI, $m_t$: mass of tissue ROI, $D\\ped{inj}$: injected dose\n\n\\textbf{Standardized uptake value} ($M$: Body mass, $S$: surface)\\\\\n\\highlight{$\\displaystyle SUV = [\\%ID/g] \\cdot M/100$} \\highlight{$\\displaystyle SUV' = [\\%ID/g] \\cdot S / 100$}\n\n\\textbf{Distribution volume}: Volume of blood needed for $M$ \\#tracer.\\\\\n\\fbox{$V_d = M/c_b = V_t c_t/c_b + V_b = V_t \\lambda + V_b$} \\quad $\\lambda =  c_t/c_b$\\\\\n$V_t \\lambda = V_1$ = equiv blood vol. to tissue vol. where tracer is\\\\\n$c_b$: typ. conc. of that tracer in blood. $V_d > 7 \\ell \\implies$ body.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{The scatchard equation}\n\\textbf{\\underline{L}igand} (tracer/drug), \\textbf{\\underline{R}eceptor}. \\quad Reaction: L+R = RL\n\n$[L], [R], [RL]$: conc. 
---\n$[R_T]$: total \\#receptors ---\n$k\\ped{off}, k\\ped{on}$: reaction rates ---\n$k\\ped{off} \\cdot [RL]$: \\#bound pairs that separate.\n\nEquilibrium eq.: $k\\ped{off} \\cdot [RL] = k\\ped{on} \\cdot [R] \\cdot [L]$ \\\\\nEq. const.: $K_d = \\frac{k\\ped{off}}{k\\ped{on}} = \\frac{[R] \\cdot [L]}{[RL]}$ \\hfill\n\\highlight{$\\displaystyle \\frac{[RL]}{[L]} = -\\frac{1}{K_d} \\cdot [RL] + \\frac{R_T}{K_d}$}\\\\\nScatchard plot: $\\textrm{slope} = -1/K_d$, $R_T$: intersection with $[RL]$-axis\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Kinetic models}\n\\textbf{Renkin-Crone eq.}: (amount of substance that diffuses out of a blood capillary)\\\\\n\\highlight{$\\displaystyle k_1 = F \\cdot E$} \\highlight{$\\displaystyle E = 1 - \\eu^{-P \\cdot S / F}$}\n$E$: Efficiency, $P$: vascular permeability, $S$: capillary surface, $F$: blood flow\n\n\\textbf{Kinetic model}: ($c_p$: Plasma $\\leftarrow$ particular part)\\\\\n\\includegraphics[width = 0.45\\linewidth]{PET_Kinetic_Model}\n$c_f$: free ligands, $c_b$: bound ligands\n\n\\textbf{Kin. eq.}:\n$\\deriv{c_f}{t} = k_1 c_p - (k_2 + k_3) c_f + k_4 c_b$, \\;---\\;\n$\\deriv{c_b}{t} = k_3c_f - k_4c_b$, \\;---\\;\n$k_1 = FE$, \\;---\\;\n$k_2 = k_1 c_p/c_f$, \\;---\\;\n$k_3 = k_{on} [R]$, \\;---\\;\n$k_4 = k_{off}$\n\nWhat is measured: $c_t(t) = c_f(t) + c_b(t)$. $* c_p(t)$ should be in the equation. $\\to$ At the end determine the rate constants.\n\n\\textbf{Improvement}: Look at reference section in brain without receptors. Only 2 compartment model, measure $k_1$ and $k_2$.\n\nTo measure behaviour of a drug (cold): Tracer (hot) $\\to$ same receptors. Experiment with and without drug. Assumption: $c_b \\ll c_{b,d}$. The drug changes the \\# total receptors from $B_{max}$ to $B'_{max}$. \\textbf{Receptor occupancy for drug}:\\\\\n\\fbox{$\\textrm{RO}[\\%] = \\frac{c_{b,d}}{B\\ped{max}} \\cdot 100\\% = (1 - B'\\ped{max}/B\\ped{max})\\cdot 100\\%$}\n", "meta": {"hexsha": "ecbea78ed838d8586da7115badaf6af3337b3d07", "size": 5040, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/BMI18/sections/06_pet.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/BMI18/sections/06_pet.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/BMI18/sections/06_pet.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.6170212766, "max_line_length": 247, "alphanum_fraction": 0.6255952381, "num_tokens": 1736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.591430321430478}}
{"text": "% This is part of the TFTB Tutorial.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file tutorial.tex for copying conditions.\n\nIn contrast with the linear time-frequency representations which\ndecompose the signal on elementary components (the atoms), the purpose of\nthe energy distributions is to distribute the {\\it energy} of the signal\nover the two description variables\\,: time and frequency.\n\n  The starting point is that since the energy of a signal $x$ can be deduced\nfrom the squared modulus of either the signal or its Fourier transform,\n\\begin{eqnarray}\n\\label{Ex1}\nE_x = \\int_{-\\infty}^{+\\infty} |x(t)|^2\\ dt\\ =\\ \\int_{-\\infty}^{+\\infty}\n|X(\\nu)|^2\\ d\\nu,  \n\\end{eqnarray}\nwe can interpret $|x(t)|^2$ and $|X(\\nu)|^2$ as energy densities, respectively\nin time and in frequency. It is then natural to look for a j{\\it oint time and\nfrequency} energy density $\\rho_x(t,\\nu)$, such that\n\\begin{eqnarray}\n\\label{Ex2}\nE_x = \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty} \\rho_x(t,\\nu)\\ dt\\\nd\\nu,               \n\\end{eqnarray}\nwhich is an intermediary situation between those described by\n(\\ref{Ex1}). As the energy is a quadratic function of the signal, the\ntime-frequency energy distributions will be in general quadratic\nrepresentations.\n\nTwo other properties that an energy density should satisfy are the\nfollowing {\\it marginal properties}\\,:\\index{marginal properties}\n\\begin{eqnarray}\n\\label{fmarg}\n\\int_{-\\infty}^{+\\infty} \\rho_x(t,\\nu)\\ dt   &=& |X(\\nu)|^2\\\\\n\\label{tmarg}\n\\int_{-\\infty}^{+\\infty} \\rho_x(t,\\nu)\\ d\\nu &=& |x(t) |^2,\n\\end{eqnarray}\nwhich mean that if we integrate the time-frequency energy density along one\nvariable, we obtain the energy density corresponding to the other variable.\n\nThe main references for this chapter are \\cite{FLA93}, \\cite{COH89},\n\\cite{AUG91}, \\cite{HLA91} and \\cite{HLA92}.\n\n\n\\section{The Cohen's class}\n%~~~~~~~~~~~~~~~~~~~~~~~~~~\n\\label{cohenclass}\n\\index{Cohen's class} \nSince there is much more than one distribution satisfying properties\n(\\ref{Ex2}), (\\ref{fmarg}) and (\\ref{tmarg}), we can impose additional\nconstraints on $\\rho_x$ so that this distribution satisfies other desirable\nproperties. Among these, the covariance principles are of fundamental\nimportance. The {\\it Cohen's class}, to which is dedicated this section,\nand whose definition can be found in subsection \\ref{cohendef}, is the\nclass of time-frequency energy distributions {\\it covariant by translations\nin time and in frequency} \\cite{COH89}.\n\n  The spectrogram, that we considered in the previous part, is an element\nof the Cohen's class since it is quadratic, time- and frequency- covariant,\nand preserves energy (property (\\ref{Ex2})).  
However, taking the squared\nmodulus of an atomic decomposition is only one restrictive way to\ndefine a quadratic representation, and this definition presents the\ndrawback that the marginal properties (\\ref{fmarg}) and (\\ref{tmarg}) are\nnot satisfied.\n\n\n\\subsection{The Wigner-Ville distribution}\n%'''''''''''''''''''''''''''''''''''''''''\n\\label{WVD}\n\\subsubsection{Definition}\n\\index{Wigner-Ville distribution}A time-frequency energy distribution which\nis particularly interesting is the {\\it Wigner-Ville distribution} (WVD)\ndefined as\\,:\n\\begin{eqnarray}\n\\label{wvd}\nW_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} x(t+\\tau/2)\\ x^*(t-\\tau/2)\\ e^{-j2\\pi\n\\nu \\tau}\\ d\\tau,   \n\\end{eqnarray}\nor equivalently as\n\\[W_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} X(\\nu+\\xi/2)\\ X^*(\\nu-\\xi/2)\\\ne^{j2\\pi \\xi t}\\ d\\xi.\\] This distribution satisfies a large number of\ndesirable mathematical properties, as summarized in the next\nsub-section. In particular, the WVD is always real-valued, it preserves\ntime and frequency shifts and satisfies the marginal properties.\n\n  An interpretation of this expression can be found in terms of probability\ndensity\\,: expression (\\ref{wvd}) is the Fourier transform of an acceptable\nform of characteristic function for the distribution of the energy.\n\n  Before looking at the theoretical properties of the WVD, let us see what\nwe obtain on two particular synthetic signals.\n\\begin{itemize}\n\\item {\\it Example 1}\\,: The first signal is the academic linear chirp\nsignal that we already considered. The WVD is available thanks to the\nM-file \\index{\\ttfamily tfrwv}{\\ttfamily tfrwv.m} of the Time-Frequency\nToolbox (see fig. \\ref{En1fig1}).\n\\begin{verbatim}\n     >> sig=fmlin(256);\n     >> tfrwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig1.eps}}\n\\caption{\\label{En1fig1}Wigner-Ville distribution of a linear chirp signal\n: almost perfect localization in the time-frequency plane}\n\\end{figure}\nIf we choose a 3-dimensional plot to represent it, we can see that the WVD\ncan take negative values, and that the localization obtained in the\ntime-frequency plane for this signal is almost perfect.\n\n\\item {\\it Example 2}\\,: When a car passes in front of an observer at a\nconstant speed, the signal heard by this person from the engine changes\nwith time\\,: the main frequency decreases (at a first level of\napproximation) from one value to another. This phenomenon, known as the\n{\\it doppler effect}\\index{Doppler effect}, expresses the dependence of the\nfrequency received by an observer from a transmitter on the relative speed\nbetween the observer and the transmitter. The corresponding signal can be\ngenerated thanks to the M-file \\index{\\ttfamily doppler}{\\ttfamily\ndoppler.m} of the Time-Frequency Toolbox. Here is an example of such a\nsignal (see fig. \\ref{En1fig2})\\,:\n\\begin{verbatim}\n     >> [fm,am,iflaw]=doppler(256,50,13,10,200);\n     >> sig=am.*fm;\n     >> tfrwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig2.eps}}\n\\caption{\\label{En1fig2}WVD of a doppler signal : many interference terms\nare present, due to the bilinearity of the distribution}\n\\end{figure}\nLooking at this time-frequency distribution, we notice that the energy is\nnot distributed as we could expect for this signal. 
Although the signal\nterm is well localized in the time-frequency plane, numerous other terms\n(the interference terms, due to the bilinearity of the WVD) are present at\npositions in time and frequency where the energy should be null. We will\nsee later how to get rid of these terms.\n\\end{itemize}\n\n\\subsubsection{Properties}\n\\label{propertieswvd}\n  Here is a list of the main properties of the WVD \\cite{FLA93}.\n\\begin{enumerate}\n\\item {\\it Energy conservation}\\index{energy conservation}\\,: by\nintegrating the WVD of $x$ over the whole time-frequency plane, we obtain the\nenergy of $x$\\,:\n\\[E_x = \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty} W_x(t,\\nu)\\ dt\\\nd\\nu\\] \n\n\\item {\\it Marginal properties}\\index{marginal properties}\\,: the energy\nspectral density and the instantaneous power can be obtained as marginal\ndistributions of $W_x$\\,:\n\\begin{eqnarray*}\n\\int_{-\\infty}^{+\\infty} W_x(t,\\nu)\\ dt &=& |X(\\nu)|^2\\\\ \n\\int_{-\\infty}^{+\\infty} W_x(t,\\nu)\\ d\\nu &=& |x(t)|^2 \n\\end{eqnarray*}\n\n\\item {\\it Real-valued}\\,: \\[W_x(t,\\nu)\\ \\in \\Rset,\\ \\forall\\ t, \\nu\\]\n\n\\item {\\it Translation covariance}\\index{translation covariance}\\,: the WVD\nis time and frequency covariant\\,:\n\\begin{eqnarray*}\ny(t)=x(t-t_0) &\\Rightarrow & W_y(t,\\nu)=W_x(t-t_0,\\nu)\\\\\ny(t)=x(t) e^{j2\\pi \\nu_0 t} &\\Rightarrow & W_y(t,\\nu)=W_x(t,\\nu-\\nu_0)\n\\end{eqnarray*}\n\n\\item {\\it Dilation covariance}\\index{dilation covariance}\\,: the WVD also\npreserves dilations\\,:\n\\begin{eqnarray*}\ny(t)=\\sqrt{k}\\ x(kt)\\ ;\\ k>0\\ \\Rightarrow\\  W_y(t,\\nu)=W_x(kt,\\frac{\\nu}{k})\n\\end{eqnarray*}\n\n\\item {\\it Compatibility with filterings}\\index{compatibility with\nfilterings}\\,: it expresses the fact that if a signal $y$ is the\nconvolution of $x$ and $h$ (i.e. 
the output of filter $h$ whose input is\n$x$), the WVD of $y$ is the time-convolution between the WVD of $h$ and the\nWVD of $x$\\,:\n\\[y(t)=\\int_{-\\infty}^{+\\infty} h(t-s)\\ x(s)\\ ds\\  \\Rightarrow\\\nW_y(t,\\nu)=\\int_{-\\infty}^{+\\infty} W_h(t-s,\\nu)\\ W_x(s,\\nu)\\ ds\\] \n\n\\item {\\it Compatibility with modulations}\\index{compatibility with\nmodulations}\\,: this is the dual property of the previous one\\,: if $y$ is\nthe modulation of $x$ by a function $m$, the WVD of $y$ is the\nfrequency-convolution between the WVD of $x$ and the WVD of $m$\\,:\n\\[y(t)=m(t)\\ x(t)\\ \\Rightarrow\\ W_y(t,\\nu)=\\int_{-\\infty}^{+\\infty}\nW_m(t,\\nu-\\xi)\\ W_x(t,\\xi)\\ d\\xi\\] \n\n\\item {\\it Wide-sense support conservation}\\index{support conservation}\\,:\nif a signal has a compact support in time (respectively in frequency), then\nits WVD also has the same compact support in time (respectively in\nfrequency)\\,:\n\\begin{eqnarray*}\n\tx(t)=0,\\ |t|>T  &\\Rightarrow &  W_x(t,\\nu)=0,\\ |t|>T\\\\\n\tX(\\nu)=0,\\ |\\nu|>B  &\\Rightarrow &  W_x(t,\\nu)=0,\\ |\\nu|>B\n\\end{eqnarray*}\n\n\\item {\\it Unitarity}\\label{unitarity}\\index{unitarity}\\,: the unitarity property\nexpresses the conservation of the scalar product from the time-domain to\nthe time-frequency domain (apart from the squared modulus)\\,:\n\\[\\left|\\int_{-\\infty}^{+\\infty} x(t)\\ y^*(t)\\ dt\\right|^2 = \\int_{-\\infty}^{+\\infty}\n\\int_{-\\infty}^{+\\infty} W_x(t,\\nu)\\ W_y^*(t,\\nu)\\ dt\\ d\\nu.\\] \nThis formula is also known as Moyal's formula.\n\n\\item {\\it Instantaneous frequency}\\index{instantaneous frequency}\\,: the\ninstantaneous frequency of a signal $x$ can be recovered from the WVD as\nits first order moment (or center of gravity) in frequency\\,:\n\\[f_x(t)=\n{\\int_{-\\infty}^{+\\infty} \\nu W_{x_a}(t,\\nu)\\ d\\nu\n\\over \n\\int_{-\\infty}^{+\\infty} W_{x_a}(t,\\nu)\\ d\\nu}\\] \nwhere $x_a$ is the analytic signal associated to $x$.\n\n\\item {\\it Group delay}\\index{group delay}\\,: in a dual way, the group\ndelay of $x$ can be obtained as the first order moment in time of its WVD\\,:\n\\[t_x(\\nu)={\\int_{-\\infty}^{+\\infty} t\\ W_{x_a}(t,\\nu)\\\ndt\\over \\int_{-\\infty}^{+\\infty} W_{x_a}(t,\\nu)\\ dt}\\] \n\n\\item {\\it Perfect localization on linear chirp signals}\\index{perfect\nlocalization}\\,:\n\\[x(t)=e^{j2\\pi (\\nu_0 t + \\beta t^2)}  \\mbox{ with instantaneous frequency }  \\nu_x(t)=\\nu_0+2\\beta t\\\n\t\t  \\Rightarrow\\ W_x(t,\\nu)=\\delta(\\nu-\\nu_x(t)).\\]\n\\end{enumerate}\n\n\\subsubsection{Interferences}\n\\index{interferences} As the WVD is a bilinear function of the signal $x$,\nthe {\\it quadratic superposition principle}\\index{quadratic superposition\nprinciple} applies\\,:\n\\[W_{x+y}(t,\\nu)\\ =\\ W_x(t,\\nu)\\ +\\ W_y(t,\\nu)\\ +\\\n2\\Re{\\{W_{x,y}(t,\\nu)\\}}\\] \nwhere \n\\[W_{x,y}(t,\\nu)\\ =\\ \\int_{-\\infty}^{+\\infty} x(t+\\tau/2)\\ y^*(t-\\tau/2)\\\ne^{-j2\\pi \\nu \\tau}\\ d\\tau\\] \nis the cross-WVD of $x$ and $y$. This can be easily generalized to $N$\ncomponents, but for the sake of clarity, we will only consider the\ntwo-component case.\n\n  Unlike the spectrogram interference terms, the WVD interference terms\nwill be non-zero regardless of the time-frequency distance between the two\nsignal terms. These interference terms are troublesome since they may\noverlap with auto-terms (signal terms) and thus make it difficult to\nvisually interpret the WVD image. 
However, it appears that these terms must\nbe present or the good properties of the WVD (marginal properties,\ninstantaneous frequency and group delay, localization, unitarity \\ldots)\ncannot be satisfied. Actually, there is a trade-off between the quantity of\ninterferences and the number of good properties.\\\\\n\n  o {\\it Interference geometry}\\\\\n  The rule of interference construction of the WVD can be summarized as\nfollows\\,: two points of the time-frequency plane interfere to create a\ncontribution on a third point which is located at their geometrical\nmidpoint. Besides, these interference terms oscillate perpendicularly to\nthe line joining the two points interfering, with a frequency proportional\nto the distance between these two points.\n\n  This can be seen on the following example\\,: we consider two atoms in the\ntime-frequency plane, analyzed by the WVD, whose relative distance is\nincreasing from one realization to the other, and then decreasing. The WVDs\nwere calculated and saved on the file {\\ttfamily movwv2at.mat}. We load\nthem and run the sequence using the function {\\ttfamily movie} (see\nfig. \\ref{En1fig3})\\,:\n\\begin{verbatim}\n     >> load movwv2at\n     >> clf; movie(M,10);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=10cm\n\\centerline{\\epsfbox{figure/en1fig3.eps}}\n\\caption{\\label{En1fig3}Structure of the interferences between\n2 components with different locations in time and frequency : we can notice\nthe change in the direction of the oscillations, as well as the change in\nthe period of these oscillations}\n\\end{figure}\n\nWe can notice, from this movie, the evolution of the interferences\nwhen the distance between the two interfering terms changes, and in\nparticular the change in the direction of the oscillations.\n\n\\subsubsection{Pseudo-WVD}\n\\label{PWVD}\\index{pseudo Wigner-Ville distribution}\n  The definition (\\ref{wvd}) requires the knowledge of the quantity\n\\[q_x(t,\\tau)=x(t+\\tau/2)\\ x^*(t-\\tau/2)\\] from $\\tau=-\\infty$ to\n$\\tau=+\\infty$, which can be a problem in practice. That is why we often\nreplace $q_x(t,\\tau)$ in (\\ref{wvd}) by a windowed version of it, leading\nto the new distribution\\,:\n\\[PW_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} h(\\tau)\\ x(t+\\tau/2)\\ x^*(t-\\tau/2)\\\ne^{-j2\\pi \\nu \\tau}\\ d\\tau\\] where $h(t)$ is a regular window. This\ndistribution is called the {\\it pseudo Wigner-Ville distribution} (noted\npseudo-WVD or PWVD in the following). This windowing operation is\nequivalent to a frequency smoothing of the WVD since\n\\[PW_x(t,\\nu) = \\int_{-\\infty}^{+\\infty} H(\\nu-\\xi)\\ W_x(t,\\xi)\\ d\\xi\\]\nwhere $H(\\nu)$ is the Fourier transform of $h(t)$. Thus, because of their\noscillating nature, the interferences will be attenuated in the pseudo-WVD\ncompared to the WVD. However, the consequence of this improved readability\nis that many properties of the WVD are lost\\,: the marginal properties, the\nunitarity, and also the frequency-support conservation\\,; the\nfrequency-widths of the auto-terms are increased by this operation.\\\\\n\n  * {\\it Example}\\,: The M-file \\index{\\ttfamily tfrpwv}{\\ttfamily\ntfrpwv.m} calculates the pseudo-WVD of a signal, with the possibility to\nchange the length and shape of the smoothing window. 
If we consider a\nsignal composed of four gaussian atoms (obtained thanks to \\index{\\ttfamily\natoms}{\\ttfamily atoms.m}), each localized at a corner of a rectangle,\n\\begin{verbatim}\n     >> sig=atoms(128,[32,.15,20,1;96,.15,20,1;...\n                       32,.35,20,1;96,.35,20,1]);\n\\end{verbatim}\nand compute its WVD (see fig. \\ref{En1fig4})\n\\begin{verbatim}\n     >> tfrwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig4.eps}}\n\\caption{\\label{En1fig4}WVD of 4 gaussian atoms : many interferences are\npresent} \n\\end{figure}\nwe can see the four signal terms, along with six interference terms (two of\nthem are superimposed). If we now compute the pseudo-WVD (see\nfig. \\ref{En1fig5}),\n\\begin{verbatim}\n     >> tfrpwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig5.eps}}\n\\caption{\\label{En1fig5}The frequency-smoothing operated by the pseudo-WVD\nattenuates the interferences oscillating perpendicularly to the frequency\naxis}\n\\end{figure}\nwe can note the important attenuation of the interferences oscillating\nperpendicularly to the frequency axis, and in return the spreading in\nfrequency of the signal terms.\n\n\\subsubsection{Sampling the WVD\\,; the analytic signal} \n\\index{analytic signal}\n  Because of the quadratic nature of the WVD, its sampling has to be done\nwith care. Let us write it as follows\\,:\n\\[W_x(t,\\nu)=2\\int_{-\\infty}^{+\\infty} x(t+\\tau)\\ x^*(t-\\tau)\\ e^{-j4\\pi\n\\nu \\tau}\\ d\\tau\\] \nIf we sample $x$ with a period $T_e$, write $x[n]=x(nT_e)$, and evaluate\nthe WVD at the sampling points $nT_e$ in time, we obtain a discrete-time\ncontinuous-frequency expression of it\\,:\n\\[W_x[n,\\nu)=2\\ T_e \\sum_k x[n+k]\\ x^*[n-k]\\ e^{-j4\\pi \\nu k}.\\]\nAs this expression is periodic in frequency with period $\\frac{1}{2\\ T_e}$\n(contrary to period $\\frac{1}{T_e}$ obtained for the Fourier transform of a\nsignal sampled at the Nyquist rate), the discrete version of the WVD may be\naffected by spectral aliasing, in particular if the signal $x$ is\nreal-valued and sampled at the Nyquist rate.  Two alternatives to this\nproblem can be found. The first one consists in oversampling the signal by\na factor of at least 2, and the second one in using the analytic\nsignal. Indeed, as its bandwidth is half that of the real signal, the\naliasing will not take place in the useful spectral domain $[0,1/2]$ of\nthis signal. This second solution presents another advantage\\,: since the\nspectral domain is divided by two, the number of components in the\ntime-frequency plane is also divided by two. Consequently, the number of\ninterference terms decreases significantly. To illustrate this phenomenon,\nwe consider the WVD of the real part of a signal composed of two atoms (see\nfig. \\ref{En1fig6})\\,:\n\\begin{verbatim}\n     >> sig=atoms(128,[32,0.15,20,1;96,0.32,20,1]);\n     >> tfrwv(real(sig));\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig6.eps}}\n\\caption{\\label{En1fig6}WVD of a real signal composed of 2 gaussian atoms :\nwhen the analytic signal is not considered, spectral aliasing and additional\ninterferences appear in the time-frequency plane}\n\\end{figure}\nWe can see that four signal terms are present instead of two, due to the\nspectral aliasing. 
Besides, because of the components located at negative\nfrequencies (between -1/2 and 0), additional interference terms are\npresent. If we now consider the WVD of the same signal, but in its complex\nanalytic form (see fig. \\ref{En1fig7}),\n\\begin{verbatim}\n     >> tfrwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig7.eps}}\n\\caption{\\label{En1fig7}WVD of the previous signal, but in its analytic\nform}\n\\end{figure}\nthe aliasing effect has disappeared, as well as the terms corresponding to\ninterferences between negative- and positive- frequency components.\n\n\n\\subsection{The Cohen's class}\n%'''''''''''''''''''''''''''''\n\\label{cohendef}\\index{Cohen's class}\n\\subsubsection{Presentation}\n\n  Among the desirable properties of an energy time-frequency distribution,\ntwo of them are of particular importance\\,: {\\it time and frequency\ncovariance}. Indeed, these properties guarantee that, if the signal is\ndelayed in time and modulated, its time-frequency distribution is\ntranslated by the same quantities in the time-frequency plane. It has been\nshown that the class of energy time-frequency distributions verifying these\ncovariance properties possesses the following general expression\\,:\n\\[C_x(t,\\nu;f)=\\int\\int\\int_{-\\infty}^{+\\infty}\ne^{j2\\pi \\xi(s-t)}\\ f(\\xi,\\tau)\\ x(s+\\tau/2)\\ x^*(s-\\tau/2)\\ e^{-j2\\pi \\nu\n\\tau}\\ d\\xi\\ ds\\ d\\tau,\\] where $f(\\xi,\\tau)$ is a two-dimensional function\ncalled the {\\it parameterization function}\\index{parameterization\nfunction}. This class of distributions is known as the {\\it Cohen's class},\nwhich can also be written\\,:\n\\begin{eqnarray}\n\\label{defcohen1}\nC_x(t,\\nu;\\Pi)=\\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty}\n\\Pi(s-t,\\xi-\\nu)\\ W_x(s,\\xi)\\ ds\\ d\\xi, \n\\end{eqnarray}\nwhere \n\\[\\Pi(t,\\nu)=\\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty} f(\\xi,\\tau)\\\ne^{-j2\\pi(\\nu \\tau+\\xi t)}\\ d\\xi\\ d\\tau\\]  \nis the two-dimensional Fourier transform of the parameterization function\n$f$. This class is of significant importance since it includes a large number\nof the existing time-frequency energy distributions.  Of course, the WVD is\nthe element of the Cohen's class for which the function $\\Pi$ is a double\nDirac\\,: $\\Pi(t,\\nu)=\\delta(t)\\ \\delta(\\nu)$, i.e. $f(\\xi,\\tau)=1$.\n\n  In the case where $\\Pi$ is a smoothing function, expression\n(\\ref{defcohen1}) allows one to interpret $C_x$ as a smoothed version of\nthe WVD\\,; consequently, such a distribution will attenuate in a particular\nway the interferences of the WVD.\n\n  Before considering different kinds of smoothing functions $\\Pi$, let us\npoint out the different advantages of such a unified formulation\\,:\n\\begin{enumerate}\n\\item by specifying the parameterization function $f$ arbitrarily, it is\npossible to obtain most of the known energy distributions\\,;\n\\item it is easy to convert a constraint that we wish for the distribution\ninto an admissibility condition for the parameterization function\\,;\n\\item it is possible, by using such admissibility arguments, to\ncheck {\\it a priori} the properties of a particular definition, or to construct a\nclass of solutions according to a specified schedule of conditions.\n\\end{enumerate}\n\n\\subsubsection{Coupled smoothing}\n\n  If we look at Moyal's formula (property 9, 
see page\n\\pageref{unitarity}), it is easy to express the spectrogram as a smoothing\nof the WVD\\,:\n\\begin{eqnarray}\n\\label{spectro}\nS_x(t,\\nu)=\\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty}\nW_h(s-t,\\xi-\\nu)\\ W_x(s,\\xi)\\ ds\\ d\\xi.\t \n\\end{eqnarray}\nThus, the spectrogram is the element of the Cohen's class for which\n$\\Pi(s,\\xi)$ is the WVD of the window $h$. This new formulation provides us\nwith another interpretation of the embarrassing trade-off between the time\nand frequency- resolutions of the spectrogram\\,: if we choose a short\nwindow $h$, the smoothing function will be narrow in time and wide in\nfrequency, leading to a good time resolution but bad\nfrequency resolution\\,; and vice-versa.\n\n\\subsubsection{Separable smoothing}\n\\label{SPWVD}\\index{smoothed-pseudo Wigner-Ville distribution}\n  The problem with the previous smoothing function $\\Pi(s,\\xi)=W_h(s,\\xi)$\nis that it is controlled only by the short-time window $h(t)$. If we add a\ndegree of freedom by considering a separable smoothing function\n\\[\\Pi(t,\\nu)=g(t)\\ H(-\\nu)\\] (where $H(\\nu)$ is the Fourier transform of a\nsmoothing window $h(t)$), we allow a progressive and independent control,\nin both time and frequency, of the smoothing applied to the WVD. The\nobtained distribution\n\\[SPW_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} h(\\tau)\\ \\int_{-\\infty}^{+\\infty}\ng(s-t)\\ x(s+\\tau/2)\\ x^*(s-\\tau/2)\\ ds\\ e^{-j2\\pi \\nu \\tau}\\ d\\tau\\] is\nknown as the {\\it smoothed-pseudo Wigner-Ville distribution} (noted\nsmoothed-pseudo-WVD or SPWVD). The previous compromise of the spectrogram\nbetween time and frequency- resolutions is now replaced by a compromise\nbetween the joint time-frequency resolution and the level of the\ninterference terms\\,: the more you smooth in time and/or frequency, the\npoorer the resolution in time and/or frequency.\n\nNote that if we only consider a smoothing in frequency  i.e. if\n$g(t)=\\delta(t)$, we obtain the pseudo-WVD.\\\\\n\n  * {\\it Example}\\,: The signal that we consider here is composed of two\ncomponents\\,: the first one is a complex sinusoid (normalized frequency\n0.15) and the second one is a Gaussian signal shifted in time and\nfrequency\\,:  \n\\begin{verbatim}\n     >> sig=fmconst(128,.15) + amgauss(128).*fmconst(128,0.4);\n\\end{verbatim}\nIf we display the WVD, the pseudo-WV and the smoothed-pseudo-WVD of this signal (see\nfig. \\ref{En1fig8}, fig. \\ref{En1fig9} and fig. \\ref{En1fig10}),\n\\begin{verbatim}\n     >> tfrwv(sig);  \n     >> tfrpwv(sig); \n     >> tfrspwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig8.eps}}\n\\caption{\\label{En1fig8}WVD of a signal composed of a gaussian atom and a\ncomplex sinusoid. 
Interferences are present between the two components}\n\\end{figure}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig9.eps}}\n\\caption{\\label{En1fig9}Pseudo-WVD of the same signal : the frequency\nsmoothing done by the pseudo-WVD degrades the frequency resolution without\nreally attenuating the interferences}\n\\end{figure}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig10.eps}}\n\\caption{\\label{En1fig10}Smoothed-pseudo-WVD of the same signal : the\ntime-smoothing carried out by the smoothed-pseudo-WVD considerably reduces\nthese interferences}\n\\end{figure}\nwe can make the following remarks\\,: from the WVD, we can see the two signal\nterms located at the right positions in the time-frequency plane, as well\nas the interference terms between them. As these interference terms\noscillate globally perpendicularly to the time-axis, the frequency\nsmoothing done by the pseudo-WVD degrades the frequency resolution without\nreally attenuating the interferences. On the other hand, the time-smoothing\ncarried out by the smoothed-pseudo-WVD considerably reduces these\ninterferences\\,; and as the time resolution is not of fundamental importance\nhere, this representation is suitable for this signal.\n\n  An interesting property of the smoothed-pseudo WVD is that it allows a\ncontinuous passage from the spectrogram to the WVD, under the condition\nthat the smoothing functions $g$ and $h$ are gaussian. The time-bandwidth\nproduct then goes from 1 (spectrogram) to 0 (WVD), with an independent\ncontrol of the time and frequency resolutions. This is clearly illustrated\nby the function \\index{\\ttfamily movsp2wv}{\\ttfamily movsp2wv.m}, which\nconsiders different transitions, on a signal composed of four atoms. To\nvisualize these snapshots, load the mat-file {\\ttfamily movsp2wv} (obtained\nby running {\\ttfamily movsp2wv.m}\\,; but as it takes a long time to run, we\nsaved the result in a mat file) and run {\\ttfamily movie} (see\nfig. \\ref{En1fig11})\\,:\n\\begin{verbatim}\n     >> load movsp2wv\n     >> clf; movie(M,10);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=10cm\n\\centerline{\\epsfbox{figure/en1fig11.eps}}\n\\caption{\\label{En1fig11}Different transitions from the spectrogram to the\nWVD, using the smoothed-pseudo-WVD. The signal is composed of 4 gaussian\natoms}\n\\end{figure}\nThis movie shows the effect of a (time/frequency) smoothing on the\ninterferences and on the resolutions\\,: the WVD gives the best resolutions\n(in time and in frequency), but presents the most important interferences,\nwhereas the spectrogram gives the worst resolutions, but with nearly no\ninterferences\\,; and the smoothed-pseudo WVD allows one to choose the best\ncompromise between these two extremes.\n\n\n\\subsection{Link with the narrow-band ambiguity function}\n%''''''''''''''''''''''''''''''''''''''''''''''''''''''''\n\\subsubsection{Definition and properties}\n\\label{NBAF}\\index{narrow-band ambiguity function}\n  A function of particular interest, especially in the field of radar\nsignal processing, is the {\\it narrow-band ambiguity function} (noted AF),\ndefined as\n\\[A_x(\\xi,\\tau)=\\int_{-\\infty}^{+\\infty} x(s+\\tau/2)\\ x^*(s-\\tau/2)\\\ne^{-j2\\pi \\xi s}\\ ds.\\] \n\nThis function, also known as the (symmetric) {\\it Sussman ambiguity\nfunction}, is a measure of the time-frequency correlation of a signal $x$,\ni.e. 
the degree of similarity between $x$ and its translated versions in\nthe time-frequency plane. Unlike the variables '$t$' and '$\\nu$' which are\n\"absolute\" time and frequency coordinates, the variables '$\\tau$' and\n'$\\xi$' are \"relative\" coordinates (respectively called {\\it delay} and\n{\\it doppler}).\\index{delay}\\index{doppler} \n \n  The AF is generally complex-valued, and satisfies the Hermitian even\nsymmetry\\,:\n\\[A_x(\\xi,\\tau) = A_x^*(-\\xi,-\\tau).\\]\n\n  An important relation exists between the narrow-band ambiguity function\nand the WVD, which says that the ambiguity function is the two-dimensional\nFourier transform of the WVD\\,:\n\\[A_x(\\xi,\\tau)=\\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty}\nW_x(t,\\nu)\\ e^{j2\\pi(\\nu \\tau-\\xi t)}\\ dt\\ d\\nu.\\] \nThus, the AF is the dual of the WVD in the sense of the Fourier\ntransform. Consequently, for the AF, a dual property corresponds to nearly\nall the properties of the WVD. Among these properties, we will\nrestrict ourselves to only three of them, which are important for the\nfollowing\\,:\n\\begin{itemize} \n\\item Marginal properties\n\n  The temporal and spectral auto-correlations are the cuts of the AF along\nthe $\\tau$-axis and $\\xi$-axis respectively\\,:\n\\[ r_x(\\tau)=A_x(0,\\tau) \\mbox{ and } R_x(\\xi)=A_x(\\xi,0).\\] \nThe energy of $x$ is the value of the AF at the origin of the\n$(\\xi,\\tau)$-plane, which corresponds to its maximum value\\,:\n\\[|A_x(\\xi,\\tau)|\\ \\leq\\ A_x(0,0)\\ =\\ E_x,\\ \\forall \\xi, \\tau.\\]\n\n\\item TF-shift invariance\n\n  Shifting a signal in the time-frequency plane leaves its AF invariant\napart from a phase factor (modulation)\\,:\n\\[y(t) = x(t-t_0)\\ e^{j2\\pi \\nu_0 t}\\ \n \\Rightarrow A_y(\\xi,\\tau) = A_x(\\xi,\\tau)\\ e^{j2\\pi(\\nu_0 \\tau-t_0 \\xi)}\\]  \n\n\\item Interference geometry\n\n  In the case of a multi-component signal, the elements of the AF\ncorresponding to the signal components (denoted as the AF-signal terms) are\nmainly located around the origin, whereas the elements corresponding to\ninterferences between the signal components (AF-interference terms) appear\nat a distance from the origin which is proportional to the time-frequency\ndistance between the involved components. This can be noticed on a simple\nexample\\,:\\\\\n \n  * {\\it Example}\\,: The M-file \\index{\\ttfamily ambifunb}{\\ttfamily\nambifunb.m} of the TF Toolbox implements the narrow-band ambiguity\nfunction. We apply it on a signal composed of two linear FM signals with\ngaussian amplitudes\\,:\n\\begin{verbatim}\n     >> N=64; sig1=fmlin(N,0.2,0.5).*amgauss(N);\n     >> sig2=fmlin(N,0.3,0).*amgauss(N);\n     >> sig=[sig1;sig2]; \n\\end{verbatim}\nLet us first have a look at the WVD (see fig. \\ref{En1fig12})\\,:\n\\begin{verbatim}\n     >> tfrwv(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig12.eps}}\n\\caption{\\label{En1fig12}WVD of  2 chirps with gaussian amplitudes and\ndifferent slopes}\n\\end{figure}\nWe have two distinct signal terms, and some interferences oscillating in\nthe middle. If we look at the ambiguity function of this signal (see\nfig. 
\\ref{En1fig13}),\n\\begin{verbatim}\n     >> ambifunb(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig13.eps}}\n\\caption{\\label{En1fig13}Narrow-band ambiguity function of the previous\nsignal : the AF-signal terms are located around the origin, whereas the\nAF-interference terms are located away from the origin}\n\\end{figure}\nwe have around the origin (in the middle of the image) the AF-signal terms,\nwhereas the AF-interference terms are located away from the origin. Thus,\napplying a 2-D low-pass filter around the origin of the ambiguity\nfunction, and returning to the WVD by 2-D Fourier transform, will attenuate\nthe interference terms. Actually, this 2-D filtering is operated, in the\ngeneral expression of the Cohen's class, by the parameterization function $f$,\nas we discuss now.\n\\end{itemize}\n\n\\subsubsection{New interpretation of the Cohen's class}\n\n  The dual expression of the Cohen's class formulation (expression\n(\\ref{defcohen1})) in terms of the AF reads\n\\begin{eqnarray}\n\\label{defcohen2}\nC_x(t,\\nu;f) = \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty}\nf(\\xi,\\tau)\\ A_x(\\xi,\\tau)\\ e^{-j2\\pi(\\nu \\tau+\\xi t)}\\ d\\xi\\ d\\tau  \n\\end{eqnarray}\n(recall that $f$ is the two-dimensional Fourier transform of $\\Pi$).  This\nexpression is very instructive about the role played by the parameterization\nfunction $f(\\xi,\\tau)$. Indeed, $f$ acts as a weighting function that tries to\nleave the signal terms unchanged, and to reject the interference\nterms. Actually, the change from the time-frequency plane to the ambiguity\nplane allows a precise characterization of the weighting function $f$, and\nthus of the smoothing function $\\Pi(t,\\nu)$.\n  \n  For example, the WVD corresponds to a constant parameterization\nfunction\\,: $f(\\xi,\\tau)=1,\\ \\forall\\ \\xi,\\ \\tau$\\,: no difference is made\nbetween the different regions of the ambiguity plane. For the spectrogram,\n$f(\\xi,\\tau)=A_h^*(\\xi,\\tau)$\\,: the ambiguity function of the window $h$\ndetermines the shape of the weighting function. And for the\nsmoothed-pseudo-WVD, we have $f(\\xi,\\tau)=G(\\xi)\\ h(\\tau)$\\,: the weighting\nfunction is separable in time and frequency, which is very useful to adapt\nit to the shape of the AF-signal terms.\n\n  We will end this section by presenting other energy distributions that\nare members of the Cohen's class.\n\n\n\\subsection{Other important energy distributions}\n%''''''''''''''''''''''''''''''''''''''''''''''''\n\\subsubsection{The Rihaczek and Margenau-Hill distributions}\\label{MHD}\n\\index{Rihaczek distribution}\\index{Margenau-Hill distribution} Another\n  possible definition of a time-frequency energy density is given by the\n  Rihaczek distribution. If we consider the interaction energy between a\n  signal $x$ restricted to an infinitesimal interval $\\delta_T$ centered on\n  $t$, and $x$ passed through an infinitesimal bandpass filter $\\delta_B$\n  centered on $\\nu$, it can be approximated by the following expression\\,:\n\\[\\delta_T\\ \\delta_B\\ [x(t)\\ X^*(\\nu)\\ e^{-j2\\pi \\nu t}].\\]\nThis leads us to interpret the quantity\n\\[R_x(t,\\nu)=x(t)\\ X^*(\\nu)\\ e^{-j2\\pi \\nu t},\\]\ncalled the {\\it Rihaczek distribution}, as a complex energy density at\npoint $(t,\\nu)$.  
This distribution, which corresponds to the element of\nthe Cohen's class for which $f(\\xi,\\tau)=e^{j\\pi \\xi \\tau}$, verifies many\ngood properties (1-2, 4-11, see section \\ref{propertieswvd}). However, it\nis complex valued, which can be awkward in practice. It is implemented\nunder the name \\index{\\ttfamily tfrri}{\\ttfamily tfrri.m}. The real part of\nthe Rihaczek distribution is also a time-frequency distribution of the\nCohen's class ($f(\\xi,\\tau)=\\cos{(\\pi \\xi \\tau)}$), known as the {\\it\nMargenau-Hill distribution} (see the M-file \\index{\\ttfamily\ntfrmh}{\\ttfamily tfrmh.m}). It also has numerous interesting properties\\,:\n1-5, 8, 10-11. As for the WVD, we can define smoothed versions of the\nRihaczek and Margenau-Hill distributions. The file \\index{\\ttfamily\ntfrpmh}{\\ttfamily tfrpmh.m} computes the pseudo Margenau-Hill distribution.\n\n  The interference structure of the Rihaczek and Margenau-Hill\ndistributions is different from the Wigner-Ville one\\,: the interference\nterms corresponding to two points located on $(t_1,\\nu_1)$ and\n$(t_2,\\nu_2)$ are positioned at the coordinates $(t_1,\\nu_2)$ and\n$(t_2,\\nu_1)$. This can be seen in the following example (see\nfig. \\ref{En1fig14})\\,:\n\\begin{verbatim}\n     >> sig=atoms(128,[32,0.15,20,1;96,0.32,20,1]);\n     >> tfrmh(sig);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig14.eps}}\n\\caption{\\label{En1fig14}Margenau-Hill distribution of 2 atoms : the\nposition of the interferences is quite different from the one obtained with\nthe WVD}\n\\end{figure}\nThus, the use of the Rihaczek (or Margenau-Hill) distribution for\nmulticomponent signals whose components are located at the same position in time or in\nfrequency is not advised, since the interference terms will then be\nsuperimposed on the signal terms.\n\n\\subsubsection{The Page distribution}\n\\index{Page distribution}\n  Motivated by the construction of a causal energy density, Page proposed\nthe following distribution (the {\\it Page distribution})\\,:\n\\begin{eqnarray*}\nP_x(t,\\nu) &=& {d\\over dt}\\,\n\\left\\{\\left|\\int_{-\\infty}^t x(u)\\ e^{-j2\\pi \\nu u}\\\ndu\\right|^2\\right\\}\\\\ \n &=& 2\\ \\Re{\\left\\{x(t)\\ \\left(\\int_{-\\infty}^t\\ x(u)\\ e^{-j2\\pi \\nu\nu}du\\right)^* \\ e^{-j2\\pi \\nu t}\\right\\}}\n\\end{eqnarray*}\nIt is the derivative of the energy spectral density of the signal\nconsidered before time $t$. It corresponds to the element of the Cohen's\nclass with parameterization function $f(\\xi,\\tau)=e^{-j\\pi \\xi |\\tau|}$, and\nverifies the properties 1-5, 7-10 (see section \\ref{propertieswvd}).\nActually, it is the only distribution of the Cohen's class which is\nsimultaneously causal, unitary, compatible with modulations, and preserves\ntime-support.  \n\n\\index{pseudo-Page distribution} The function \\index{\\ttfamily\ntfrpage}{\\ttfamily tfrpage.m} computes this distribution. 
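For instance (a minimal sketch, reusing the two-atom test signal from the\nMargenau-Hill example above\\,; the default display options are assumed)\\,:\n\\begin{verbatim}\n     >> sig=atoms(128,[32,0.15,20,1;96,0.32,20,1]);\n     >> tfrpage(sig);\n\\end{verbatim}\n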
A\nfrequency-smoothed version of the Page distribution, called the {\\it\npseudo-Page distribution}, is also available (see the file \\index{\\ttfamily\ntfrppage}{\\ttfamily tfrppage.m}).\n\n\\subsubsection{Joint-smoothings of the WVD}\n\n  The following distributions correspond to particular cases of the Cohen's\nclass for which the parameterization function depends only on the product of\nthe variables $\\tau$ and $\\xi$\\,:\n\\begin{eqnarray}\n\\label{paramfun}\nf(\\xi,\\tau)=\\Phi(\\tau\\xi)\t       \n\\end{eqnarray}\nwhere $\\Phi$ is a decreasing function such that $\\Phi(0)=1$ (the Rihaczek\nand Margenau-Hill distributions are particular elements of this class). A\ndirect consequence of this definition is that the marginal properties will\nbe respected. Besides, since $\\Phi$ is a decreasing function, $f$ is a\nlow-pass function, and according to (\\ref{defcohen2}), this parameterization\nfunction will reduce the interferences. That is why these distributions are\nalso known as the {\\it Reduced Interference Distributions}.\\index{Reduced\nInterference Distributions}\n\\label{RID}\n\\begin{itemize}\n\\item The {\\it Choi-Williams distribution\t}\n\n\\index{Choi-Williams distribution}\n  One natural choice for $\\Phi$ is a Gaussian function\\,:\n\\[f(\\xi,\\tau)=\\exp{\\left[-\\frac{(\\pi \\xi \\tau)^2}{2\\sigma^2}\\right]}.\\]\nThe corresponding distribution,\n\\[CW_x(t,\\nu)=\\sqrt{\\frac{2}{\\pi}}\n\\int\\int_{-\\infty}^{+\\infty} \n{\\sigma\\over |\\tau|}\\\ne^{-2\\sigma^2(s-t)^2/\\tau^2}\\ x(s+\\frac{\\tau}{2})\\ x^*(s-\\frac{\\tau}{2})\\\ne^{-j2\\pi \\nu \\tau}\\ ds\\ d\\tau\\] is the Choi-Williams distribution. Note\nthat when $\\sigma\\ \\longrightarrow\\ +\\infty$, we obtain the WVD. Conversely,\nthe smaller $\\sigma$, the better the reduction of the interferences. This\ndistribution verifies properties 1-5, 10-11, and can be computed with the\nM-file \\index{\\ttfamily tfrcw}{\\ttfamily tfrcw.m}.  The \"cross\"-shape of\nthe parameterization function of the Choi-Williams distribution implies that\nthe efficiency of this distribution strongly depends on the nature of the\nanalyzed signal. For instance, if the signal is composed of synchronized\ncomponents in time or in frequency, the Choi-Williams distribution will\npresent strong interferences. This can be observed in the following\nexample\\,: we analyze four Gaussian atoms positioned at the corners of a\nrectangle rotating around the center of the time-frequency plane (see\nfig. \\ref{En1fig15})\\,:\n\\begin{verbatim}\n     >> load movcw4at\n     >> clf; movie(M,5);\n\\end{verbatim}\n\\begin{figure}[htb]\n\\epsfxsize=10cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig15.eps}}\n\\caption{\\label{En1fig15}Choi-Williams distribution of 4 atoms rotating\naround the middle of the time-frequency plane : when the time/frequency\nsupports of the atoms overlap, strong interferences appear on the overlap\nsupport}\n\\end{figure}\nWhen the time/frequency supports of the atoms overlap, some AF-interference\nterms are not completely attenuated (those present around the axes of\nthe ambiguity plane), and the efficiency of the distribution is quite poor. 
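To see concretely how $\\sigma$ shapes this weighting function, here is a small numpy sketch (hypothetical grid and values, not Toolbox code) that evaluates $f(\\xi,\\tau)=\\exp{[-(\\pi \\xi \\tau)^2/2\\sigma^2]}$ on an ambiguity-plane grid. Note that $f$ is identically 1 on the axes $\\xi=0$ and $\\tau=0$ whatever $\\sigma$ is, which is exactly the \"cross\"-shape limitation just discussed\\,:\n\\begin{verbatim}\n     import numpy as np\n\n     # hypothetical grid: 129 points so the axes fall exactly on index 64\n     xi  = np.linspace(-0.5, 0.5, 129).reshape(-1, 1)    # Doppler axis\n     tau = np.linspace(-64.0, 64.0, 129).reshape(1, -1)  # delay axis\n\n     for sigma in (0.5, 5.0):\n         f = np.exp(-(np.pi * xi * tau) ** 2 / (2.0 * sigma ** 2))\n         # f stays equal to 1 all along the axes: AF terms sitting there\n         # are never attenuated, however small sigma is chosen\n         print(sigma, f[64, :].min(), f[:, 64].min(), f[0, 0])\n\\end{verbatim}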
\n\n\\item The {\\it Born-Jordan} and {\\it Zhao-Atlas-Marks distributions}\n\\index{Born-Jordan distribution}\\index{Zhao-Atlas-Marks distribution}\n\n  If we impose on the distributions defined by (\\ref{paramfun}) the further\ncondition of preserving time- and frequency-supports, the simplest choice\nfor $f$ is then\\,:\n\\[f(\\xi,\\tau)={\\sin{(\\pi \\xi \\tau)}\\over\\pi \\xi \\tau}\\]    \t\nwhich defines the {\\it Born-Jordan distribution}\\,:\n\\[BJ_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} \\frac{1}{|\\tau|}\\\n\\int_{t-|\\tau|/2}^{t+|\\tau|/2} x(s+\\tau/2)\\ x^*(s-\\tau/2)\\ ds\\ e^{-j2\\pi\n\\nu \\tau} d\\tau.\\]  \n\nProperties 1-5, 8, 10-11 are verified by this distribution, and the\ncorresponding M-file of the Time-Frequency Toolbox is \\index{\\ttfamily\ntfrbj}{\\ttfamily tfrbj.m}.\n\n  If we smooth the Born-Jordan distribution along the frequency axis, we\nobtain the {\\it Zhao-Atlas-Marks distribution}, defined as\n\\[ZAM_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} \\left[\\ h(\\tau)\\\n\\int_{t-|\\tau|/2}^{t+|\\tau|/2} x(s+\\tau/2)\\ x^*(s-\\tau/2)\\ ds\\right]\\\ne^{-j2\\pi \\nu \\tau}\\ d\\tau. \\] \n  \nThis distribution, also known as the {\\it Cone-Shaped Kernel distribution},\nsatisfies properties 3-4 and 8 (for time only) (see the M-file\n\\index{\\ttfamily tfrzam}{\\ttfamily tfrzam.m} for its computation).\n\\end{itemize}\n\n\\subsubsection{Comparison of the parameterization functions}\n\n  To illustrate the differences between some of the presented\ndistributions, we represent their weighting (parameterization) function in\nthe ambiguity plane, along with the result obtained by applying them to a\ntwo-component signal embedded in white Gaussian noise\\,: the signal is the\nsum of two linear FM signals, the first one with a frequency going from\n0.05 to 0.15, and the second one from 0.2 to 0.5. The signal-to-noise ratio\nis 10\\,dB.\n\n  On the left-hand side of figures \\ref{En1fig16} and \\ref{En1fig17},\nthe parameterization functions are represented in a schematic way by the\nbold contour lines (the weighting functions are mainly non-zero inside\nthese lines), superimposed on the ambiguity function of the signal. The\nAF-signal terms are in the middle of the ambiguity plane, whereas the\nAF-interference terms are distant from the center. On the right-hand side,\nthe corresponding time-frequency distributions are represented.\n\\begin{figure}[htb]\n\\epsfxsize=12cm\n\\epsfysize=10cm\n\\centerline{\\epsfbox{figure/en1fig16.eps}}\n\\caption{\\label{En1fig16}Two chirps embedded in a 10 dB white Gaussian\nnoise analyzed by different quadratic distributions. On the left-hand side,\nthe parameterization function is represented by a bold contour line,\nsuperimposed on the ambiguity function of the signal. The AF-signal terms\nare in the middle of the ambiguity plane, whereas the AF-interference terms\nare distant from the center. On the right-hand side, the corresponding\ntime-frequency distribution is represented}\n\\end{figure}\n\\begin{figure}[htb]\n\\epsfxsize=12cm\n\\epsfysize=8cm\n\\centerline{\\epsfbox{figure/en1fig17.eps}}\n\\caption{\\label{En1fig17}Two chirps embedded in a 10 dB white Gaussian\nnoise analyzed by different quadratic distributions (concluded)}\n\\end{figure}\n\nFrom these plots, we can conclude that the ambiguity plane is very\nenlightening with regard to interference reduction in the case of\nmulticomponent signals. In this example, we notice that the\nsmoothed-pseudo-WVD is a particularly convenient and versatile\ncandidate. 
This is due to the fact that we can independently adapt the\ntime-width and frequency-width of its weighting function. But in the\ngeneral case, it is interesting to have several distributions at our\ndisposal since each one is well adapted to a certain type of\nsignal. Besides, for a given signal, as a result of the different\ninterference geometries, these distributions offer complementary\ndescriptions of this signal.\n\n\n\\subsection{Conclusion}\n%''''''''''''''''''''''\n  The Cohen's class, which gathers all the quadratic time-frequency\ndistributions covariant under shifts in time and in frequency, offers a wide\nset of powerful tools to analyze non-stationary signals. The basic idea is\nto devise a joint function of time and frequency that describes the energy\ndensity or intensity of a signal simultaneously in time and in\nfrequency. The most important element of this class is probably the\nWigner-Ville distribution, which satisfies many desirable properties. Since\nthese distributions are quadratic, they introduce cross-terms in the\ntime-frequency plane which can disturb the readability of the\nrepresentation. One way to attenuate these interferences is to smooth the\ndistribution in time and in frequency, according to their structure. But\nthe consequence of this is a decrease in time and frequency\nresolution, and more generally a loss of theoretical properties. The\ngeneral formulation proposed by Cohen is very useful for gaining a better\nunderstanding of the existing solutions, as well as of the connection with the\nambiguity function.\n\nBut there exist other time-frequency energy distributions which are not\nelements of the Cohen's class, i.e. which are not covariant under shifts in\ntime or in frequency. This is the case, for example, for the affine\ndistributions, which are presented in the next chapter.\n", "meta": {"hexsha": "f8ab205243174ebd2106db4792192f9ba3264354", "size": 43984, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/tutorial/enedist1.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/tutorial/enedist1.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/tutorial/enedist1.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 47.3965517241, "max_line_length": 85, "alphanum_fraction": 0.7423153874, "num_tokens": 12615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5914303145786249}}
{"text": "\\subsection{Eigenvalue Method}\r\n\\noindent\r\nThis method allows us to find fundamental solutions using the eigenvalues of the matrix $A$.\r\nOf course, just like with roots of the auxiliary equation, we need to consider the cases of real distinct eigenvalues and eigenvectors, repeated eigenvalues, complex eigenvalues, and defective matrices. \r\n\r\n% n distinct eigenvalues\r\n\\input{./linearSystems/homogeneousSystems/realDistinctEigenvalues.tex}\r\n% Repeated eigenvales with enough eigenvectors\r\n\\input{./linearSystems/homogeneousSystems/repeatedEigenvalues.tex}\r\n% Defective matrices\r\n\\input{./linearSystems/homogeneousSystems/complexEigenvalues.tex}\r\n% Complex roots\r\n\\input{./linearSystems/homogeneousSystems/defectiveMatrices.tex} \\\\\r\n\r\n\r\n\\noindent\r\nThis final approach for defective matrices actually gives a general way to solve $\\vec{x}' = A\\vec{x}$ for any real $n \\times n$ matrix $A$.\r\n\\begin{enumerate}[label = \\arabic*)]\r\n\t\\item\r\n\t\tCompute the characteristic polynomial $p(\\lambda) = \\det{(A - \\lambda I)}$.\r\n\t\tFind the zeroes of the characteristic to find the distinct eigenvalues $\\lambda_1, \\ldots, \\lambda_k$ with corresponding multiplicities $m_1, \\ldots, m_k$.\r\n\t\\item\r\n\t\tFor each eigenvalue $\\lambda_i$ find the $m_i$ corresponding linearly independent eigenvectors of rank $m_i$ by solving the system $(A - \\lambda_i)^{m_i}\\vec{v_i} = \\vec{0}$.\r\n\t\\item\r\n\t\tFor each generalized generalized eigenvector $\\vec{v_i}$, compute the corresponding fundamental solution\r\n\t\t\\begin{equation*}\r\n\t\t\t\\vec{x_i} = e^{At}\\vec{v_i} = e^{\\lambda t}\\left(\\vec{v_i} + t(A - \\lambda I)\\vec{v_i} + \\frac{t^2}{2!}(A - \\lambda I)^2\\vec{v_i} + \\ldots\\right).\r\n\t\t\\end{equation*}\r\n\t\tNote that this summation terminates is at most $m_i$ terms.\r\n\t\\item\r\n\t\tCombine these fundamental solutions as columns of a a square matrix to obtain the fundamental matrix of the system $X(t)$.\r\n\t\tThe general solution to the system is\r\n\t\t\\begin{equation*}\r\n\t\t\t\\vec{x} = X(t)\\begin{bmatrix}\r\n\t\t\t\tC_1 \\\\\r\n\t\t\t\t\\vdots \\\\\r\n\t\t\t\tC_n\r\n\t\t\t\\end{bmatrix}.\r\n\t\t\\end{equation*}\r\n\\end{enumerate}", "meta": {"hexsha": "2ca44cd3c7447e1cf74bdbe928f0ad38d8c8babe", "size": 2049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/linearSystems/homogeneousSystems/eigenvalueMethod.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "diffEq/linearSystems/homogeneousSystems/eigenvalueMethod.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "diffEq/linearSystems/homogeneousSystems/eigenvalueMethod.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", 
"max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 51.225, "max_line_length": 204, "alphanum_fraction": 0.7296242069, "num_tokens": 589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321703143954, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5914303124131799}}
{"text": "\\section{Trust Flow}\n  Everything is in place to define the indirect trust from one player to another.\n  \\subimport{common/definitions/}{indirecttrust.tex}\n  \\noindent Note that $Tr_{A \\rightarrow B} \\geq DTr_{A \\rightarrow B}$. The next theorem shows that $Tr_{A \\rightarrow B}$ is\n  finite.\n  \\subimport{thesis/theorems/}{convergencetheorem.tex}\n  \\subimport{common/proofsketches/}{convergenceproofsketch.tex}\n  Full proofs of all theorems and lemmas can be found in the Appendix.\n\n  In the setting of \\texttt{TransitiveGame(}$\\mathcal{G}$\\texttt{,}$A$\\texttt{,}$B$\\texttt{)}, we make use of the notation\n  $Loss_A = Loss_{A, j}$, where $j$ is a turn in which the game has converged. It is important to note that $Loss_A$ is not\n  the same for repeated executions of this kind of game, since the order in which players are chosen may differ between\n  executions and the conservative players are free to choose which incoming direct trusts they will steal and how much from\n  each.\n\n  Let $G$ be a weighted directed graph. We will investigate the maximum flow on this graph. For an introduction to the\n  maximum flow problem see \\cite{clrs} p. 708. Considering each edge's capacity as its weight, a flow assignment\n  $X = [x_{vw}]_{\\mathcal{V} \\times \\mathcal{V}}$ with a source $A$ and a sink $B$ is valid when:\n  \\begin{equation}\n  \\label{flow1}\n    \\forall (v, w) \\in \\mathcal{E}, x_{vw} \\leq c_{vw} \\mbox{ and}\n  \\end{equation}\n  \\begin{equation}\n  \\label{flow2}\n    \\forall v \\in \\mathcal{V} \\setminus \\{A,B\\}, \\sum\\limits_{w \\in N^{+}(v)}x_{wv} = \\sum\\limits_{w \\in N^{-}(v)}x_{vw}\n    \\enspace.\n  \\end{equation}\n  We do not suppose any skew symmetry in $X$. The flow value is $\\sum\\limits_{v \\in N^{+}\\left(A\\right)}x_{Av}$, which is\n  proven to be equal to $\\sum\\limits_{v \\in N^{-}\\left(B\\right)}x_{vB}$. There exists an algorithm that returns the maximum\n  possible flow from $A$ to $B$, namely $MaxFlow\\left(A, B\\right)$. This algorithm evidently needs full knowledge of the\n  graph. The fastest version of this algorithm runs in $O\\left(|\\mathcal{V}||\\mathcal{E}|\\right)$ time \\cite{maxflownm}. 
We\n  refer to the flow value of $MaxFlow\\left(A, B\\right)$ as $maxFlow\\left(A, B\\right)$.\n\n  We will now introduce two lemmas that will be used to prove one of the central results of this work, the Trust Flow\n  theorem.\n  \\subimport{thesis/lemmas/}{flowgamelemma.tex}\n  \\subimport{common/proofsketches/}{flowgameproofsketch.tex}\n  \\subimport{thesis/lemmas/}{gameflowlemma.tex}\n  \\subimport{common/proofsketches/}{gameflowproofsketch.tex}\n  \\subimport{common/theorems/}{trustflowtheorem.tex}\n  \\subimport{thesis/proofs/}{trustflowproof.tex}\n\n  \\noindent We note that the maxFlow is the same in the following two cases: when a player chooses the evil strategy and when\n  the same player chooses a variation of the evil strategy where she does not nullify her outgoing direct trust.\n\n  Further justification of trust transitivity through the use of $MaxFlow$ can be found in the sociological experiment\n  conducted in \\cite{kmrs}.\n\n  Here we see another important theorem that gives the basis for risk-invariant transactions between different, possibly\n  unknown, parties.\n  \\subimport{common/theorems/}{riskinvtheorem.tex}\n  \\subimport{common/proofs/}{riskinvproof.tex}\n  \\noindent It is intuitively obvious that it is possible for $A$ to reduce her outgoing direct trust in a manner that\n  achieves (\\ref{primetrust}), since $maxFlow\\left(A, B\\right)$ is continuous with respect to $A$'s outgoing direct trusts. We\n  leave this calculation as part of further research.\n", "meta": {"hexsha": "86a11a2d38020ad73e6e4a46e8ae2432eb07d0fb", "size": 3580, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/flow.tex", "max_stars_repo_name": "dionyziz/DecentralizedTrust", "max_stars_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2017-03-15T14:33:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T14:07:45.000Z", "max_issues_repo_path": "thesis/flow.tex", "max_issues_repo_name": "dionyziz/DecentralizedTrust", "max_issues_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2017-03-07T12:25:26.000Z", "max_issues_repo_issues_event_max_datetime": "2017-07-31T14:42:20.000Z", "max_forks_repo_path": "thesis/flow.tex", "max_forks_repo_name": "dionyziz/DecentralizedTrust", "max_forks_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-03-07T10:49:58.000Z", "max_forks_repo_forks_event_max_datetime": "2017-08-28T06:32:33.000Z", "avg_line_length": 63.9285714286, "max_line_length": 126, "alphanum_fraction": 0.7388268156, "num_tokens": 1045, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478255, "lm_q2_score": 0.7341195210831258, "lm_q1q2_score": 0.5914303098922165}}
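As a companion to the $MaxFlow$ discussion above, here is a minimal Edmonds-Karp sketch in Python (the toy trust graph, node names and capacities are hypothetical illustrations, not taken from this thesis). It augments along shortest paths found by breadth-first search, which is one standard way to realize $MaxFlow\\left(A, B\\right)$, although it is not the $O\\left(|\\mathcal{V}||\\mathcal{E}|\\right)$ algorithm cited above\\,:\n\\begin{verbatim}\n  from collections import deque\n\n  def max_flow(cap, s, t):\n      # cap: nested dict of residual capacities, mutated in place\n      flow = 0\n      while True:\n          parent = {s: None}\n          q = deque([s])\n          while q and t not in parent:      # BFS for an augmenting path\n              u = q.popleft()\n              for v, c in cap.get(u, {}).items():\n                  if c > 0 and v not in parent:\n                      parent[v] = u\n                      q.append(v)\n          if t not in parent:\n              return flow                   # no augmenting path left\n          path, v = [], t                   # walk back from t to s\n          while parent[v] is not None:\n              path.append((parent[v], v))\n              v = parent[v]\n          b = min(cap[u][v] for u, v in path)   # bottleneck capacity\n          for u, v in path:\n              cap[u][v] -= b\n              cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b\n          flow += b\n\n  trust = {'A': {'C': 5, 'D': 3}, 'C': {'B': 4}, 'D': {'B': 2}}\n  print(max_flow(trust, 'A', 'B'))   # 6 = min cut over (C,B) and (D,B)\n\\end{verbatim}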
{"text": "\\section{Bellman-Ford and Floyd-Warshall Algorithms}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Bellman-Ford algorithm}\n  \\begin{exampleblock}{Bellman-Ford algorithm (Problem 6.30)}\n\t\\begin{itemize}\n\t  \\item digraph $G = (V, E), \\exists e: w(e) < 0$\n\t  \\item $\\forall s, t: |s \\leadsto^{\\SP} t| \\le k$\n\t  \\item Given $s,t$, find $s \\leadsto^{\\SP} t$.\n\t\\end{itemize}\n  \\end{exampleblock}\n\n  $\\text{d}(v,k)$: shortest path distance from $s$ to $v$ using $\\le k$ edges\n\n  \\[\n\ts \\leadsto^{\\SP; \\le k-1} u \\to^{=1} v\n  \\]\n\n  \\[\n\t\\text{d}(v, k) = \\min_{u \\in N(v)} \\text{d}(u, k-1) + w(u,v)\n  \\]\n\n  \\begin{center}\n\t\\begin{minipage}{0.50\\linewidth}\n\t  \\begin{algorithmic}[c]\n\t\t\\ForAll{$i = 1 \\to n - 1$}\n\t\t  \\ForAll{$e \\in E$}\n\t\t\t\\State update $e$\n\t\t  \\EndFor\n\t\t\\EndFor\n\t  \\end{algorithmic}\n\t\\end{minipage}\n  \\end{center}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Floyd-Warshall algorithm}\n  \\[\n\t\\dist(i,j,k) = \\min(\\dist(i,j,k-1), \\dist(i,k,k-1) + \\dist(k,j,k-1))\n  \\]\n\n  \\[\n\t\\# k's = 1 \\implies \\dist(i,k,k-1)\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Routing table}\n  \\begin{exampleblock}{Routing table (Problem 6.25)}\n\tContruct routing table and extract shortest paths from it.\n  \\end{exampleblock}\n\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\begin{algorithmic}\n\t\t\\State Init: $\\text{Go}(i,j) \\gets \\text{Null}$\n\t\t\\Statex\n\t\t\\State $\\forall (i,j) \\in E: \\text{Go}(i,j) \\gets j$\n\t\t\\Statex\n\t\t\\If{$\\dots$}\n\t\t  \\State $\\text{Go}(i,j) \\gets \\text{Go}(i,k)$\n\t\t\\EndIf\n\t  \\end{algorithmic}\n\t\\column{0.50\\textwidth}\n\t  \\begin{algorithmic}\n\t\t\\If{$Go(i,j) = \\text{Null}$}\n\t\t  \\State $\\dots$\n\t\t\\EndIf\n\t\t\\Statex\n\t\t\\While{$i \\neq j$}\n\t\t  \\State $i \\gets \\text{Go}(i,j)$\n\t\t\\EndWhile\n\t  \\end{algorithmic}\n  \\end{columns}\n\n  \\vspace{0.50cm}\n  \\[\n\t\\text{Prev}(i,j) \\gets \\text{Prev}(k,j)\n  \\]\n  \\[\n\t\\text{Intermediate}(i,j) \\gets k\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Shortest cycle in digraph}\n  \\begin{exampleblock}{Shortest cycle in digraph}\n\tFind shortest cycle in digraph $G = (V, E), w(e) > 0$.\n  \\end{exampleblock}\n\n  \\vspace{0.50cm}\n  \\centerline{Initialize $\\dist[v][v] \\gets \\infty$ in Floyd-Warshall algorithm} \n\n  \\[\n\t\\exists v: \\dist[v][v] < 0 \\text{\\emph { vs. 
}} \\forall v: \\dist[v][v] = \\infty\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Max-min path problem}\n  \\begin{exampleblock}{Max-min path problem (Problem 6.26)}\n\t\\begin{itemize}\n\t  \\item $G = (V, E)$: network of oil pipelines\n\t  \\item $c(u,v)$: capacity of $(u,v)$\n\t  \\item (2) Compute all-pair $\\text{cap}(u,v)$.\n\t\\end{itemize}\n  \\end{exampleblock}\n\n  \\[\n\t\\text{cap}(u,v,k) = \\max(\\text{cap}(u,v,k-1), \\min(\\text{cap}(u,k,k-1), \\text{cap}(k,v,k-1)))\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "01345c9d99ddf7c8ee7fc32e9d292d8a20f44603", "size": 2678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-paths-20170605/sections/apsp.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-paths-20170605/sections/apsp.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-paths-20170605/sections/apsp.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 24.3454545455, "max_line_length": 94, "alphanum_fraction": 0.569081404, "num_tokens": 1067, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256631249077, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5914078366749032}}
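To tie the Floyd-Warshall recurrence and the routing-table slide together, here is a minimal Python sketch (the 4-node weight matrix is a hypothetical example). $\\text{Go}(i,j)$ stores the first hop of a shortest $i \\to j$ path and is updated with $\\text{Go}(i,j) \\gets \\text{Go}(i,k)$ exactly as in the pseudocode above\\,:\n\\begin{verbatim}\n  import math\n\n  INF = math.inf\n  n = 4\n  w = [[0, 3, INF, 7], [8, 0, 2, INF], [5, INF, 0, 1], [2, INF, INF, 0]]\n\n  dist = [row[:] for row in w]\n  go = [[j if w[i][j] < INF else None for j in range(n)] for i in range(n)]\n\n  for k in range(n):\n      for i in range(n):\n          for j in range(n):\n              if dist[i][k] + dist[k][j] < dist[i][j]:\n                  dist[i][j] = dist[i][k] + dist[k][j]\n                  go[i][j] = go[i][k]   # first hop now routes via k\n\n  def path(i, j):\n      if go[i][j] is None:\n          return None                   # j unreachable from i\n      p = [i]\n      while i != j:\n          i = go[i][j]\n          p.append(i)\n      return p\n\n  print(dist[1][3], path(1, 3))         # 3 [1, 2, 3]\n\\end{verbatim}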
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{textcomp}\n\\usepackage[english]{babel}\n\\usepackage{amsmath, amssymb}\n\\usepackage[]{amsthm} %lets us use \\begin{proof}\n\\usepackage[]{amssymb} %gives us the character \\varnothing\n\n\n% figure support\n\\usepackage{import}\n\\usepackage{xifthen}\n\\pdfminorversion=7\n\\usepackage{pdfpages}\n\\usepackage{transparent}\n\\newcommand{\\incfig}[1]{%\n\t\\def\\svgwidth{\\columnwidth}\n\t\\import{./figures/}{#1.pdf_tex}\n}\n\n\\pdfsuppresswarningpagegroup=1\n\n\\begin{document}\n\n\\section*{Section 2.3}\n\n\\subsection*{Exercise 2.3.1}\n\nProve Lemma 2.3.2. (Hint: modify the proofs of Lemmas 2.2.2, 2.2.3 and Proposition 2.2.4.)\n\n\\begin{proof}\n\n$ $\\newline\n\nWe need to show $m \\times 0 = m$ and $n \\times \\left(  m\\text{++} \\right) = \\left( n\\times m \\right) + n$ before proving the multiplication is commutative.\n\nTo show $m\\times 0 = m$ we use induction. For the base case $0\\times 0 = 0$ follows since we know that $0\\times m = m$ by definition. Now suppose inductively that $m\\times 0 = m$, we need to show $\\left( m\\text{++} \\right) \\times 0 = 0$. By definition $\\left( m\\text{++} \\right) \\times 0 = \\left( m\\times 0 \\right) + 0 = 0$, hence the induction is closed.\n\nTo show $n \\times \\left(  m\\text{++} \\right) = \\left( n\\times m \\right) + n$, we induct on  $n$. For the base case $0 \\times \\left( m\\text{++} \\right) = \\left( 0 \\times m \\right) + 0$ follows by the definitions of addition and multiplication. Now suppose inductively $n \\times \\left(  m\\text{++} \\right) = \\left( n\\times m \\right) + n$, we need to show $n\\text{++} \\times \\left(  m\\text{++} \\right) = \\left( n\\text{++}\\times m \\right) + n\\text{++}$. The left-hand side is $n\\times \\left( m\\text{++} \\right) + \\left( m\\text{++} \\right) = n\\times m + n + m + 1$ by definition of multiplication and the hypothesis. The right-hand side is $\\left( n\\times m + m \\right) + n\\text{++} = n\\times m + n + m + 1$ by definition of multiplication. Thus both sides are equal to each other, and we have closed the induction.\n\nTo show multiplication is commutative, we induct on $n$. For the base case $0\\times m = m\\times 0$ follows since both sides euqal to zero by definition of multiplication and $m\\times 0 = 0$. Now suppose inductively that $n \\times m = m\\times n$, we need to show $n\\text{++}\\times m = m\\times n\\text{++}$. The left-hand side is $n\\times m + m$ by definition. The right-hand side is $m\\times n + m$ by the lemma we've just proved. And by the hypothesis the right-hand then equals to $n\\times m + m$. Thus both sides are equal to each other, and we have closed the induction.\n\n\\end{proof}\n\n\\subsection*{Exercise 2.3.2}\n\nProve Lemma 2.3.3. (Hint: prove the second statement first.)\n\n\\begin{proof}\n\n$ $\\newline\n\nThe statement is equivalent to \"$n\\times m$ is positive iff both $n$ and  $m$ are positive\".\n\nFirst we need to show $n\\times m$ is positive implies both $n$ and  $m$ are positive. For the sake of contradiction that $n$ equals to zero, by definition of multiplication  $0\\times m = 0$, a contradiction. Similar contradiction holds for $m$ with Lemma 2.3.2. Thus we have proved the statement.\n\nThen we need to show both $n$ and  $m$ are positive implies $n\\times m$ is positive. 
Suppose for the sake of contradiction that $n\\times m = 0$. By Lemma 2.2.10 there exists exactly one natural number $a$ such that  $a\\text{++} = n$; thus by definition of multiplication we have $n\\times m = \\left( a\\text{++} \\right) \\times m = \\left( a\\times m \\right) \\text{++}$. Since $a\\times m$ is a natural number, by Axiom 2.3 $\\left( a\\times m \\right) \\text{++} \\neq 0$, a contradiction. Thus $n\\times m$ is positive.\n\nThus we have proved the original statement.\n\n\\end{proof}\n\n\\subsection*{Exercise 2.3.3}\n\nProve Proposition 2.3.5. (Hint: modify the proof of Proposition\n2.2.5 and use the distributive law.)\n\n\\begin{proof}\n\n\tWe use induction on $a$. For the base case $\\left( 0\\times b \\right) \\times c = 0 \\times \\left( b\\times c \\right) $ follows, since the left-hand side equals $0\\times c = 0$ by definition of multiplication, and the right-hand side equals $0$ since $b\\times c$ is a natural number and $0$ times a natural number equals $0$. Now suppose inductively $\\left( a\\times b \\right) \\times c = a\\times \\left( b\\times c \\right) $; we need to show $\\left( a\\text{++}\\times b \\right) \\times c = a\\text{++}\\times \\left( b\\times c \\right) $. The left-hand side equals $\\left( a\\times b + b \\right) \\times c$ by definition of multiplication, which then equals $\\left( a\\times b \\right) \\times c + b\\times c$ by the distributive law. The right-hand side equals $a\\times \\left( b\\times c \\right) + b\\times c$ by definition of multiplication. By the hypothesis both sides are equal to each other, thus we have closed the induction.\n\n\\end{proof}\n\n\\subsection*{Exercise 2.3.4}\n\nProve the identity $\\left( a + b \\right) ^2 = a^2 + 2ab + b^2$ for all natural\nnumbers $a$, $b$.\n\n\\begin{proof}\n\n\t$\\left( a+b \\right) ^2 = \\left( a+b \\right) ^1 \\left( a + b \\right) = \\left( a + b \\right) \\left( a + b \\right) $ by definition of exponentiation. This then equals  $a\\times \\left( a + b \\right) + b\\times \\left( a+b \\right) = a\\times a + a\\times b + b\\times a + b\\times b$ by the distributive law. $a\\times a = a^2$ and $b\\times b = b^2$ by definition of exponentiation. $a\\times b + b\\times a = a\\times b + a\\times b = 2ab$ by commutativity of multiplication and the definition of multiplication. Thus we have proved the statement.\n\n\\end{proof}\n\n\\subsection*{Exercise 2.3.5}\n\nProve Proposition 2.3.9. (Hint: fix $q$ and induct on $n$.)\n\n\\begin{proof}\n\n\tWe use induction on $n$ (keep $q$ fixed). For the base case there exist natural numbers $m = 0$, $r = 0$ such that $0 \\le r < q$ and $0 = mq + r$. Now suppose inductively there exist $m$, $r$ such that $0 \\le r < q$ and $n = mq + r$; we need to show there exist $m'$,  $r'$ such that  $0 \\le r' < q$ and $n\\text{++} = m'q + r'$. Since $r < q$, we have $r\\text{++} \\le q$ by Proposition 2.2.12 (e). If $r\\text{++} < q$, we can simply set $m' = m$ and  $r' = r + 1$. Otherwise if  $r\\text{++} = q$, then $n\\text{++} = mq + q = \\left( m\\text{++} \\right) \\times q + 0$, thus we can set $m' = m\\text{++}$ and $r' = 0$. 
Thus we have closed the induction.\n\n\\end{proof}\n\n\\end{document}\n", "meta": {"hexsha": "e378f4bb6add621700deb32f1cc5c7bc39f70d35", "size": 6241, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-2.3.tex", "max_stars_repo_name": "huntzhan/solution-analysis-i-terence-tao", "max_stars_repo_head_hexsha": "ddddabd9082d996c2d441721304160004b32836c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chap-2.3.tex", "max_issues_repo_name": "huntzhan/solution-analysis-i-terence-tao", "max_issues_repo_head_hexsha": "ddddabd9082d996c2d441721304160004b32836c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chap-2.3.tex", "max_forks_repo_name": "huntzhan/solution-analysis-i-terence-tao", "max_forks_repo_head_hexsha": "ddddabd9082d996c2d441721304160004b32836c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.6836734694, "max_line_length": 924, "alphanum_fraction": 0.6747316135, "num_tokens": 2092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.8418256412990657, "lm_q1q2_score": 0.5914078056125783}}
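As a concrete illustration of the two branches of the induction step in Exercise 2.3.5 above (this numerical instance is ours, not Tao's), fix $q = 3$. If $n = 7 = 2\\cdot 3 + 1$, then $r\\text{++} = 2 < q$, so we keep $m' = 2$ and take $r' = 2$, giving $n\\text{++} = 8 = 2\\cdot 3 + 2$. If instead $n = 8 = 2\\cdot 3 + 2$, then $r\\text{++} = 3 = q$, so we take $m' = 3$ and $r' = 0$, giving $n\\text{++} = 9 = 3\\cdot 3 + 0$.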
{"text": "\\chapter{How to program a proof tool}\\label{tool}\n\nUsers of \\HOL{} can create their own theorem proving tools by combining\npredefined rules and tactics. The \\ML{} type-discipline\nensures that only logically sound methods can be used to create values\nof type \\ml{thm}.\nIn this chapter, a simple but real\\footnote{The example\nis `real' in that the need for it came up last week.} example is described.\n\nSeveral implementations of the tool are given to illustrate various styles\nof proof programming. The first implementation is the obvious one, but\nis very slow because of the `brute force' method used. The second\nimplementation produces a much more streamlined proof, but still has a\nbrute force component, namely the use of a tautology checker from the\nlibrary \\ml{taut}. The third implementation replaces the general\ntautology checker with a special purpose derived inference rule. The\nfourth and final implementation uses an optimised implementation of\nthe special purpose rule; understanding it is left as an exercise in\nusing \\DESCRIPTION.\n\nThe timings in this chapter are based on Version 1.12. Later versions\nof \\HOL{} have an optimised tautology checker library due to Richard\nBoulton. This new tautology checker is actually faster than the\nspecial purpose derived rule described in\nSection~\\ref{bogus-optimization}.  Thus with the new tautology checker\nthe so called ``even more efficient implementation'' is actually\nslower than the program it replaces! This was only discovered (by\nJuanito Camilleri) during the preparation of Version 2 of the\ntutorial. Rather than completely rewriting the chapter, it was decided\nto leave it essentially as it was (except for the addition of this\n paragraph). The methods\nthat are described are still useful, and there is an important lesson\nhere: optimizations can become obsolete.  The really dedicated reader\ncould learn a lot by studying the old and new tautology checker\n({\\small\\verb%contrib/icl-taut%} and\n{\\small\\verb%Library/taut%}, respectively) to find out how they work.\nBesides improving the tautology library, Richard Boulton also\nreimplemented rewriting using ideas from Tom Melham and Roger Fleming.\nAs a result, in versions later than 1.12 the various rewriting tools\nare quite a bit faster and generate fewer intermediate theorems.\n\n\nIt is sometimes claimed that `\\LCF-style' systems can never be\npractical, because the efficiency needed to handle real examples can\nonly be obtained with decision procedures coded as primitive rules. It\nis hoped that this chapter, as well as the \\ml{taut} library, shows\nthat the truth of such claims is not obvious. Research is currently in\nprogress to see if a variety of practical decision algorithms can be\nimplemented as efficient derived rules.\n\nThe tool described here is a tactic that puts conjunctions into the\nnormal form obtained by right associating, sorting the conjuncts into\na canonical order and then removing repetitions. 
This canonical order\nuses the built-in polymorphic infix \\ml{<<}, which orders any pair of\n\\ML{} values with the same type.\n\n\\section{A simple implementation}\n\nA first implementation uses `brute-force' rewriting with\nthe equations:\n\n\\begin{hol}\\begin{verbatim}\n   |- (t1 /\\ t2) /\\ t3 = t1 /\\ (t2 /\\ t3)     % Associativity          %\n\n   |- t1 /\\ t2 = t2 /\\ t1                     % Symmetry (if t2 << t1) %\n   |- t1 /\\ (t2 /\\ t3) = t2 /\\ (t1 /\\ t3)     % Symmetry (if t2 << t1) %\n\n   |- t /\\ t = t                              % Cancel repeated terms  %\n   |- t1 /\\ (t1 /\\ t2) = t1 /\\ t2             % Cancel repeated terms  %\n\\end{verbatim}\\end{hol}\n\n\\noindent These equations are easily proved using the\nlibrary \\ml{taut}. Note that \\HOL{} Version 1.12 is used in\nthis chapter. Versions of \\HOL{} later than 1.12 contain improved\nrewriting tools and a new version of the library \\ml{taut} (the old version\nof the library is preserved in the directory\n{\\small\\verb%contrib/icl-taut%}).\n\n\n\\setcounter{sessioncount}{0}\n\\begin{session}\\begin{verbatim}\nscaup% hol\n          _  _    __    _      __    __\n   |___   |__|   |  |   |     |__|  |__|\n   |      |  |   |__|   |__   |__|  |__|\n\n          Version 1.12 (Sun3/Franz), built on Feb 23 1991\n\n#load_library `taut`;;\nLoading library `taut` ...\n........................\nLibrary `taut` loaded.\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent The library \\ml{taut} defines \\ml{TAUT\\_RULE}\\footnote{The function \\ml{TAUT\\_RULE} has been replaced by a function called \\ml{TAUT\\_PROVE}\nin the new version of the \\ml{taut} library available in versions of\n\\HOL{} later than 1.12}\nwhich converts a term to the corresponding theorem, if the term is a tautology.\n\\vfill\n\\newpage\n\\begin{session}\\begin{verbatim}\n#let ASSOC = TAUT_RULE \"(t1 /\\ t2) /\\ t3 = t1 /\\ t2 /\\ t3\";;\nASSOC = |- (t1 /\\ t2) /\\ t3 = t1 /\\ t2 /\\ t3\n\n#let SYM1 = TAUT_RULE \"t1 /\\ t2 = t2 /\\ t1\";;\nSYM1 = |- t1 /\\ t2 = t2 /\\ t1\n\n#let SYM2 = TAUT_RULE \"t1 /\\ t2 /\\ t3 = t2 /\\ t1 /\\ t3\";;\nSYM2 = |- t1 /\\ t2 /\\ t3 = t2 /\\ t1 /\\ t3\n\n#let CANCEL1 = TAUT_RULE \"t /\\ t = t\";;\nCANCEL1 = |- t /\\ t = t\n\n#let CANCEL2 = TAUT_RULE \"t1 /\\ t1 /\\ t2 = t1 /\\ t2\";;\nCANCEL2 = |- t1 /\\ t1 /\\ t2 = t1 /\\ t2\n\\end{verbatim}\\end{session}\n\n\\noindent One cannot just use \\ml{REWRITE\\_TAC} with \\ml{SYM1} and\n\\ml{SYM2}, because it would loop.  What is needed is a special\nrewriting tool that will only apply symmetry when terms are out of\norder. Such a tool can be implemented as a {\\it conversion\\/}.\n\nConversions are described in detail in \\DESCRIPTION. The idea, which\nis due to Larry Paulson \\cite{lcp_rewrite}, is that a conversion is an\n\\ML{} function that maps a term $t_1$ to an equation:\n\n\\medskip\n{\\small\\verb%|- %}$t_1${\\small\\verb% = %}$t_2$.\n\\medskip\n\n\\noindent The intention is that a conversion will only apply to a\nsubset of terms: on members of this subset it will generate an\nequation, on all other terms it will fail. 
Because conversions are so\ncentral to theorem-proving in \\HOL, the \\ML{} type\n{\\small\\verb%term->thm%} is abbreviated to {\\small\\verb%conv%}.\nConversions are applied using the function:\n\n\\begin{hol}\\begin{verbatim}\n   REWR_CONV : thm -> conv\n\\end{verbatim}\\end{hol}\n\n\\noindent This takes an equation {\\small\\verb%|- %}$t_1${\\small\\verb% = %}$t_2$\nand generates a conversion (\\ie\\ \\ML{} function of type {\\small\\verb%term->thm%})\nthat maps any term $u$ that matches $t_1$ to the theorem\n{\\small\\verb%|- %}$u${\\small\\verb% = %}$v$, where $v$ is\nobtained by applying to $t_2$ the substitution found by matching $u$ with $t_1$.\nIf $u$ doesn't match $t_1$ then the application of \\ml{REWR\\_CONV} fails.\n\n\n\\begin{session}\\begin{verbatim}\n#REWR_CONV ASSOC \"(A /\\ B) /\\ C\";;\n|- (A /\\ B) /\\ C = A /\\ B /\\ C\n\n#REWR_CONV ASSOC \"A /\\ (B /\\ C)\";;\nevaluation failed     REWR_CONV: lhs of theorem doesn't match term\n\n#REWR_CONV SYM1 \"B /\\ A\";;\n|- B /\\ A = A /\\ B\n\n#REWR_CONV SYM1 \"A \\/ B\";;\nevaluation failed     REWR_CONV: lhs of theorem doesn't match term\n\\end{verbatim}\\end{session}\n\n\\noindent For our application, the required conversion should map\na conjunction\n\n\\medskip\n$t_1${\\small\\verb% /\\ (%}$t_{2_1}${\\small\\verb% /\\ %}$t_{2_2}${\\small\\verb%)%}\n\\medskip\n\n\\noindent in which\n$t_{2_1}${\\small\\verb% << %}$t_1$ to the equational theorem:\n\n\\medskip\n\n{\\small\\verb%|- %}$t_1${\\small\\verb% /\\ (%}$t_{2_1}${\\small\\verb% /\\ %}$t_{2_2}${\\small\\verb%)  =  %} $t_{2_1}${\\small\\verb% /\\ (%}$t_1${\\small\\verb% /\\ %}$t_{2_2}${\\small\\verb%)%}\n\n\\medskip\n\n\\noindent If $t_1${\\small\\verb% << %}$t_{2_1}$ then the conversion fails\n(in the \\ML{} sense) when applied to\n$t_1${\\small\\verb% /\\ (%}$t_{2_1}${\\small\\verb% /\\ %}$t_{2_2}${\\small\\verb%)%}.\nIn addition, if the right conjunct is not itself a conjunction, then\nthe conversion should reorder if necessary. More precisely, if the conversion\nis applied to $t_1${\\small\\verb% /\\ %}$t_2$ where $t_2$ is not a conjunction and\n$t_2${\\small\\verb% << %}$t_1$, then it should generate the equation:\n\n\\medskip\n{\\small\\verb%|- %}$t_1${\\small\\verb% /\\ %}$t_2${\\small\\verb%  =  %}$t_2${\\small\\verb% /\\ %}$t_1$\n\\medskip\n\n\\noindent Such a conversion is easily implemented in \\ML{} using\n\\ml{SYM1} and \\ml{SYM2} proved above, together with the \\ML{} syntax\nprocessing functions \\ml{is\\_conj} and \\ml{dest\\_conj}, where:\n\n\\begin{hol}\\begin{verbatim}\n   is_conj   : term -> bool\n   dest_conj : term -> (term # term)\n\\end{verbatim}\\end{hol}\n\n\\noindent These functions respectively test whether a term is a conjunction and\nsplit a term into its two conjuncts. 
For example:\n\n\\begin{session}\\begin{verbatim}\n#is_conj \"A /\\ B\";;\ntrue : bool\n\n#is_conj \"A \\/ B\";;\nfalse : bool\n\n#dest_conj \"A /\\ B\";;\n(\"A\", \"B\") : (term # term)\n\n#dest_conj \"A \\/ B\";;\nevaluation failed     dest_conj\n\\end{verbatim}\\end{session}\n\nThe implementation of the special purpose conversion,\n\\ml{CONJ\\_ORD\\_CONV}, is now straightforward.\n\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_ORD_CONV t =\n# let t1,t2 = dest_conj t\n# in\n# if is_conj t2\n#   then (let t21,t22 = dest_conj t2\n#         in\n#         if t21 << t1 then REWR_CONV SYM2 t else fail)\n#   else (if t2  << t1 then REWR_CONV SYM1 t else fail);;\nCONJ_ORD_CONV = - : conv\n\\end{verbatim}\\end{session}\n\n\\noindent This is illustrated by:\n\n\\begin{session}\\begin{verbatim}\n#\"A:bool\" << \"B:bool\";;\ntrue : bool\n\n#\"B:bool\" << \"C:bool\";;\ntrue : bool\n\n#CONJ_ORD_CONV \"B /\\ A\";;\n|- B /\\ A = A /\\ B\n\n#CONJ_ORD_CONV \"A /\\ B\";;\nevaluation failed     fail\n\\end{verbatim}\\end{session}\n\nThe process of normalizing a conjunction can be split into four phases:\n\n\\begin{enumerate}\n\\item Right associate the conjunction by repeatedly applying:\n\\begin{quote}\n\\ml{REWR\\_CONV\\ ASSOC}\n\\end{quote}\n\\item Put the conjuncts in canonical order by repeatedly applying:\n\\begin{quote}\n\\ml{CONJ\\_ORD\\_CONV}\n\\end{quote}\n\\item Remove repetitions of $t$ of the form $t${\\small\\verb% /\\ %}$t$\nby repeatedly applying:\n\\begin{quote}\n\\ml{REWR\\_CONV\\ CANCEL1}\n\\end{quote}\n\\item Remove repetitions of $t_1$ in\n$t_1${\\small\\verb% /\\ (%}$t_1${\\small\\verb% /\\ %}$t_2${\\small\\verb%)%}\nby repeatedly applying:\n\\begin{quote}\n\\ml{REWR\\_CONV\\ CANCEL2}\n\\end{quote}\n\\end{enumerate}\n\n\nTo implement this, a method of repeatedly applying a conversion to\nsubterms of a term is needed. This is provided by the operator\n\n\\begin{hol}\\begin{verbatim}\n   TOP_DEPTH_CONV : conv -> conv\n\\end{verbatim}\\end{hol}\n\n\\noindent If $c$ is a conversion then \\ml{TOP\\_DEPTH\\_CONV}~$c$ is a\nconversion that repeatedly applies $c$ to all subterms until $c$ is no\nlonger applicable to any subterms. The function \\ml{TOP\\_DEPTH\\_CONV}\nis one of a family of operators that apply conversions throughout\nterms. Members of this family differ in the order in which subterms\nare visited and the amount of repetition that is done. For more\ndetails, see the chapter on conversions in \\DESCRIPTION.\n\n\\begin{session}\\begin{verbatim}\n#let ex1 = \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\";;\nex1 = \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" : term\n\n#REWR_CONV ASSOC ex1;;\nevaluation failed     REWR_CONV: lhs of theorem doesn't match term\n\n#TOP_DEPTH_CONV (REWR_CONV ASSOC) ex1;;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D =\n   A /\\ B /\\ C /\\ A /\\ C /\\ A /\\ D /\\ D\n\\end{verbatim}\\end{session}\n\n\\noindent The right hand side of this theorem is \\ml{ex1} in right-associated\nform. The conclusion of a theorem can be extracted with the \\ML{} function\n\\ml{concl} and the right hand side of an equation can be extracted with\n\\ml{rhs}. 
Thus, continuing the session:\n\n\\begin{session}\\begin{verbatim}\n#let ex2 = rhs(concl it);;\nex2 = \"A /\\ B /\\ C /\\ A /\\ C /\\ A /\\ D /\\ D\" : term\n\n#TOP_DEPTH_CONV CONJ_ORD_CONV ex2;;\n|- A /\\ B /\\ C /\\ A /\\ C /\\ A /\\ D /\\ D =\n   A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D /\\ D\n\\end{verbatim}\\end{session}\n\n\\noindent The right hand side of this is the result of canonicalizing\nthe order of the conjuncts in the left hand side. Next, the repetitions\ncan be eliminated using \\ml{CANCEL1} and \\ml{CANCEL2}.\n\n\\begin{session}\\begin{verbatim}\n#let ex3 = rhs(concl it);;\nex3 = \"A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D /\\ D\" : term\n\n#TOP_DEPTH_CONV (REWR_CONV CANCEL1) ex3;;\n|- A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D /\\ D =\n   A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D\n\\end{verbatim}\\end{session}\n\n\\begin{session}\\begin{verbatim}\n#let ex4 = rhs(concl it);;\nex4 = \"A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D\" : term\n\n#TOP_DEPTH_CONV (REWR_CONV CANCEL2) ex4;;\n|- A /\\ A /\\ A /\\ B /\\ C /\\ C /\\ D = A /\\ B /\\ C /\\ D\n\\end{verbatim}\\end{session}\n\n\nTo make the conjunction normalizer, the four stages just described\nmust be performed in sequence. Conversions can be applied in sequence using\nthe infixed function:\n\n\\begin{hol}\\begin{verbatim}\n   THENC : conv -> conv -> conv\n\\end{verbatim}\\end{hol}\n\n\n\\noindent If $c_1\\ t_1$ evaluates to $\\ml{ |- }t_1\\ml{=}t_2$ and\n$c_2\\ t_2$ evaluates to $\\ml{ |- }t_2\\ml{=}t_3$, then\n$\\ml{(}c_1\\ \\ml{THENC}\\ c_2\\ml{)}\\ t_1$ evaluates to\n$\\ml{\\ |-\\ }t_1\\ml{=}t_3$. If the\nevaluation of $c_1\\ t_1$ or the evaluation of $c_2\\ t_2$ fails,\nthen so does the evaluation of $c_1\\ \\ml{THENC}\\ c_2$. \\ml{THENC} is\njustified by the transitivity of equality.\n\nUsing \\ml{THENC}, the normalizer is defined by\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_NORM_CONV =\n# TOP_DEPTH_CONV(REWR_CONV ASSOC)   THENC\n# TOP_DEPTH_CONV CONJ_ORD_CONV         THENC\n# TOP_DEPTH_CONV(REWR_CONV CANCEL1) THENC\n# TOP_DEPTH_CONV(REWR_CONV CANCEL2);;\n\nCONJ_NORM_CONV = - : conv\n\n#CONJ_NORM_CONV ex1;;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\n\\end{verbatim}\\end{session}\n\nThis conversion can now be converted to a rule or tactic using the functions\n\\ml{CONV\\_RULE} or \\ml{CONV\\_TAC}, respectively.\n\n\n\\begin{hol}\n\\begin{verbatim}\n   CONV_RULE : conv -> thm -> thm\n   CONV_TAC  : conv -> tactic\n\\end{verbatim}\n\\end{hol}\n\n\\noindent $\\ml{CONV\\_RULE}\\ c\\ \\ml{(|- }t\\ml{)}$ returns $\\ml{|- }t'$, where\n$c\\ t$ evaluates to the equation\n$\\ml{|-}\\ t\\ml{=}t'$.\n$\\ml{CONV\\_TAC}\\ c$ is a tactic that\nconverts the conclusion of a goal using $c$. For more details see \\DESCRIPTION.\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_NORM_TAC = CONV_TAC CONJ_NORM_CONV;;\nCONJ_NORM_TAC = - : tactic\n\\end{verbatim}\\end{session}\n\nHere is an example. 
It uses {\\it antiquotation\\/}: if $x$ is an \\ML\\\nidentifier bound to a term, then occurrences of\n{\\small\\verb%^%}$x$ inside a quotation\ndenote the term bound to $x$.\n\n\\begin{session}\\begin{verbatim}\n#g \"^ex1 ==> B\";;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==> B\"\n\n() : void\n\n#e CONJ_NORM_TAC;;\nOK..\n\"A /\\ B /\\ C /\\ D ==> B\"\n\\end{verbatim}\\end{session}\n\nTo summarize, here is the \\ML{} code implementing the normalizer:\n\n\\begin{hol}\\begin{verbatim}\n   load_library `taut`;;\n\n   let ASSOC   = TAUT_RULE \"(t1 /\\ t2) /\\ t3 = t1 /\\ t2 /\\ t3\"\n   and SYM1    = TAUT_RULE \"t1 /\\ t2 = t2 /\\ t1\"\n   and SYM2    = TAUT_RULE \"t1 /\\ t2 /\\ t3 = t2 /\\ t1 /\\ t3\"\n   and CANCEL1 = TAUT_RULE \"t /\\ t = t\"\n   and CANCEL2 = TAUT_RULE \"t1 /\\ t1 /\\ t2 = t1 /\\ t2\";;\n\n   let CONJ_ORD_CONV t =\n    let t1,t2 = dest_conj t\n    in\n    if is_conj t2\n      then (let t21,t22 = dest_conj t2\n            in\n            if t21 << t1 then REWR_CONV SYM2 t else fail)\n      else (if t2  << t1 then REWR_CONV SYM1 t else fail);;\n\n   let CONJ_NORM_CONV =\n    TOP_DEPTH_CONV(REWR_CONV ASSOC)   THENC\n    TOP_DEPTH_CONV CONJ_ORD_CONV         THENC\n    TOP_DEPTH_CONV(REWR_CONV CANCEL1) THENC\n    TOP_DEPTH_CONV(REWR_CONV CANCEL2);;\n\n   let CONJ_NORM_TAC = CONV_TAC CONJ_NORM_CONV;;\n\\end{verbatim}\\end{hol}\n\n\\section{A more efficient implementation}\n\nThe normalizer just given is rather slow. This can be shown by switching on the\nsystem timer using the function:\n\n\\begin{hol}\\begin{verbatim}\n   timer : bool -> bool\n\\end{verbatim}\\end{hol}\n\n\\noindent Evaluating \\ml{timer~true} switches on timing; evaluating\n\\ml{timer~false} switches it off (the previous value of the timing flag\nis returned). Garbage collection times are also shown, together with a\ncount of the number of intermediate theorems that are generated (which\ngives an estimate of the number of primitive inferences done).\n\n\\begin{session}\\begin{verbatim}\n#timer true;;\nfalse : bool\nRun time: 0.0s\n\n#CONJ_NORM_CONV ex1;;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\nRun time: 1.1s\nGarbage collection time: 0.5s\nIntermediate theorems generated: 73\n\\end{verbatim}\\end{session}\n\n\\noindent Here is a bigger example:\n\n\\begin{session}\\begin{verbatim}\n#CONJ_NORM_CONV \"^ex1 /\\ (^ex1 /\\ (^ex1 /\\ ^ex1 /\\ ^ex1) /\\ ^ex1)\";;\n|- (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   ((A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    A /\\\n    (B /\\ C /\\ A) /\\\n    (C /\\ A /\\ D) /\\\n    D) /\\\n   A /\\\n   (B /\\ C /\\ A) /\\\n   (C /\\ A /\\ D) /\\\n   D =\n   A /\\ B /\\ C /\\ D\nRun time: 38.3s\nGarbage collection time: 11.5s\nIntermediate theorems generated: 16761\n\\end{verbatim}\\end{session}\n\nThe reason that \\ml{CONJ\\_NORM\\_CONV} is slow is the\nrepeated pattern matching done during rewriting. A much more efficient\napproach is to normalize the conjunction by \\ML{} programming outside\nthe logic, and then to prove that the normalized term is equal to the\noriginal one. An even more efficient approach, which is not explored\nhere, would be to avoid having to do this proof by verifying the\nnormalization code by some sort of meta-theoretic\nreasoning about \\ML. 
How to do this in\n\\HOL{} is not clear, but work on this approach has been done in the\ncontext of {\\small FOL} \\cite{FOL}, the Boyer-Moore prover \\cite{BoyerMoore}\nand Nuprl \\cite{Nuprl}. These approaches all use logically\nsophisticated extra axioms, called reflection principles, that enable\nmetatheorems to be `reflected' into the logic as object level\ntheorems.\n\nTo normalize the term by \\ML{} programming, the conjuncts are extracted,\nrepeated elements are deleted and the resulting list is sorted.\n\n\\HOL{} already has a predefined function:\n\n\\begin{hol}\\begin{verbatim}\n   conjuncts : term -> term list\n\\end{verbatim}\\end{hol}\n\n\\noindent for extracting conjuncts.\n\\HOL{} also has a predefined \\ML{} function for removing repeated elements of\na list:\n\n\n\\begin{hol}\\begin{verbatim}\n   setify : * list -> * list\n\\end{verbatim}\\end{hol}\n\n\\noindent Both \\ml{conjuncts} and \\ml{setify} are illustrated below:\n\n\\begin{session}\\begin{verbatim}\n#timer false;;\ntrue : bool\n\n#conjuncts ex1;;\n[\"A\"; \"B\"; \"C\"; \"A\"; \"C\"; \"A\"; \"D\"; \"D\"] : term list\n\n#setify it;;\n[\"B\"; \"C\"; \"A\"; \"D\"] : term list\n\n#let ex1_list = it;;\nex1_list = [\"B\"; \"C\"; \"A\"; \"D\"] : term list\n\\end{verbatim}\\end{session}\n\nThere is a predefined sorting function in \\ML:\n\n\\begin{session}\\begin{verbatim}\n#sort;;\nsort = - : (((* # *) -> bool) -> * list -> * list)\n\n#sort $< [3;2;5;6;1;1;7;9;3];;\n[1; 1; 2; 3; 3; 5; 6; 7; 9] : int list\n\n#sort $<< ex1_list;;\n[\"A\"; \"B\"; \"C\"; \"D\"] : term list\n\\end{verbatim}\\end{session}\n\nUsing this function, the list of conjuncts of the normalization of a\nterm is easily computed.  The predefined \\ML{} function:\n\n\\begin{hol}\\begin{verbatim}\n   list_mk_conj : term list -> term\n\\end{verbatim}\\end{hol}\n\n\\noindent can then be used to build the normalized conjunction.\n\n\n\\begin{session}\\begin{verbatim}\n#ex1;;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" : term\n\n#let ex1_norm = list_mk_conj(sort $<< (setify(conjuncts ex1)));;\nex1_norm = \"A /\\ B /\\ C /\\ D\" : term\n\\end{verbatim}\\end{session}\n\n\\noindent The calculation of \\ml{ex1\\_norm} from \\ml{ex1} has been\ndone by (unverified) \\ML{} code. What is required is the theorem\nasserting that they are equal. 
This can be proved using the tautology\nchecker.\n\n\\begin{session}\\begin{verbatim}\n#TAUT_RULE \"^ex1 = ^ex1_norm\";;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\n\\end{verbatim}\\end{session}\n\n\\noindent A conversion that normalizes conjunctions is thus:\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_NORM_CONV2 t =\n# if is_conj t\n#  then TAUT_RULE \"^t = ^(list_mk_conj(sort $<< (setify(conjuncts t))))\"\n#  else fail;;\nCONJ_NORM_CONV2 = - : conv\n\n#CONJ_NORM_CONV2 ex1;;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\n\\end{verbatim}\\end{session}\n\n\\noindent \\ml{CONJ\\_CANON\\_CONV2} is more than an order of magnitude faster\nthan \\ml{CONJ\\_CANON\\_CONV}:\n\n\n\\begin{session}\\begin{verbatim}\n#timer true;;\nfalse : bool\nRun time: 0.0s\n\n#CONJ_NORM_CONV2 \"^ex1 /\\ (^ex1 /\\ (^ex1 /\\ ^ex1 /\\ ^ex1) /\\ ^ex1)\";;\n|- (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   ((A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    A /\\\n    (B /\\ C /\\ A) /\\\n    (C /\\ A /\\ D) /\\\n    D) /\\\n   A /\\\n   (B /\\ C /\\ A) /\\\n   (C /\\ A /\\ D) /\\\n   D =\n   A /\\ B /\\ C /\\ D\nRun time: 1.9s\nGarbage collection time: 0.5s\nIntermediate theorems generated: 1273\n\\end{verbatim}\\end{session}\n\n\\section{An even more efficient implementation}\\label{bogus-optimization}\n\nAlthough the implementation just given is much faster than the first\nnaive one, it can be improved further by replacing the call to the\ngeneral tautology checker with a special purpose conjunction-equivalence\nprover.\n\nTo see how this works, the equivalence of \\ml{ex1} and \\ml{ex1\\_norm} will first\nbe proved manually. 
The general form of this proof will then be abstracted into\na derived rule.\n\nThe goal is to prove that \\ml{ex1} and \\ml{ex1\\_norm} are equal.\n\n\\begin{session}\\begin{verbatim}\n#timer false;;\ntrue : bool\n\n#g \"^ex1 = ^ex1_norm\";;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\"\n\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent The predefined tactic \\ml{EQ\\_TAC} splits an equation into two\nimplications (see Section~\\ref{EQTAC}).\n\n\\begin{session}\\begin{verbatim}\n#e EQ_TAC;;\nOK..\n2 subgoals\n\"A /\\ B /\\ C /\\ D ==> A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\"\n\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==> A /\\ B /\\ C /\\ D\"\n\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent Each of these can be solved by:\n\\begin{enumerate}\n\\item moving the antecedent of the\nimplication to the assumption list (using \\ml{DISCH\\_TAC}, see\nSection~\\ref{DISCHTAC});\n\\item breaking up the remaining\ngoal (the consequent of the implication) into one subgoal per conjunct (using\n\\ml{CONJ\\_TAC}, see Section~\\ref{CONJTAC});\n\\item  solving each of these\nconjuncts using the antecedent (which is now an assumption)\n\\end{enumerate}\n\n\\noindent Steps 1--3 are now done interactively.\n\n\\begin{session}\\begin{verbatim}\n#e DISCH_TAC;;\nOK..\n\"A /\\ B /\\ C /\\ D\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent \\ml{CONJ\\_TAC} is repeated using the tactical \\ml{REPEAT}\ndescribed in Section~\\ref{THEN}.\n\n\\begin{session}\\begin{verbatim}\n#e (REPEAT CONJ_TAC);;\nOK..\n4 subgoals\n\"D\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n\"C\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n\"B\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n\"A\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent The final step is to use the assumption\n{\\small\\verb%\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\"%} to solve each goal.\nTo do this, the assumption is grabbed using the tactical:\n\n\\begin{hol}\\begin{verbatim}\n   POP_ASSUM : (thm -> tactic) -> tactic\n\\end{verbatim}\\end{hol}\n\n\\noindent Given a function \\ml{$f$ : thm -> tactic}, the tactic\n\\ml{POP\\_ASSUM}\\ $f$ applies $f$ to the (assumed) first\nassumption of a goal\nand then applies the tactic created thereby to the original goal\nminus its top assumption:\n\n\\begin{hol}\\begin{alltt}\n   POP_ASSUM \\(f\\) ([\\(t\\sb{1}\\);\\(\\ldots\\);\\(t\\sb{n}\\)],\\(t\\)) = \\(f\\) (ASSUME \\(t\\sb{1}\\)) ([\\(t\\sb{2}\\);\\(\\ldots\\);\\(t\\sb{n}\\)],\\(t\\))\n\\end{alltt}\\end{hol}\n\n\\noindent \\ML{} functions such as $f$,\nwith type \\ml{thm -> tactic} are abbreviated to \\ml{thm\\_tactic} (see\n\\DESCRIPTION\\ for further details).\n\nAfter grabbing the assumption, it is split into its individual conjunctions\nusing the predefined derived rule:\n\n\\begin{hol}\\begin{verbatim}\n   CONJUNCTS : thm -> thm list\n\\end{verbatim}\\end{hol}\n\n\\noindent For example:\n\n\n\\begin{session}\\begin{verbatim}\n#CONJUNCTS(ASSUME \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\");;\n[. |- A; . |- B; . |- C; . |- A; . |- C; . |- A; . |- D; . 
|- D]\n: thm list\n\end{verbatim}\end{session}\n\n\noindent Among the individual conjuncts is the goal, which can thus be\nsolved immediately using \ml{ACCEPT\_TAC} (see Section~\ref{ACCEPTTAC}).\nThe appropriate assumption can be chosen with the predefined tactical\n\ml{MAP\_FIRST}, which\nis characterized by:\n\n\begin{hol}\begin{alltt}\n   MAP_FIRST \(f\) [\(x\sb{1}\); \(\ldots\) ;\(x\sb{n}\)]  =  \(f\)(\(x\sb{1}\)) ORELSE \(\ldots\) ORELSE \(f\)(\(x\sb{n}\))\n\end{alltt}\end{hol}\n\n\noindent Returning to the proof: the final step is now performed by\npopping the assumption and applying to it the function obtained by\ncomposing \ml{CONJUNCTS} and \ml{MAP\_FIRST} using the \ML{} infixed\nfunction composition operator \ml{o}\n(where \ml{(}$f$~\ml{o}~$g$\ml{)}$x$~\ml{=}~$f$\ml{(}$g$\ml{(}$x$\ml{))}).\n\n\begin{session}\begin{verbatim}\n#e(POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS));;\nOK..\ngoal proved\n. |- A\n\nPrevious subproof:\n3 subgoals\n\"D\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n\"C\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n\"B\"\n    [ \"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D\" ]\n\n() : void\n\end{verbatim}\end{session}\n\n\n\noindent The remaining subgoals are solved identically. Stitching together\nthe tactics just used results in:\n\n\begin{hol}\begin{verbatim}\n   EQ_TAC          THEN\n   DISCH_TAC       THEN\n   REPEAT CONJ_TAC THEN\n   POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS)\n\end{verbatim}\end{hol}\n\n\noindent With this, the entire proof can be done in one step.\n\n\begin{session}\begin{verbatim}\n#g \"^ex1 = ^ex1_norm\";;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\"\n\n() : void\n\n#e(EQ_TAC          THEN\n#  DISCH_TAC       THEN\n#  REPEAT CONJ_TAC THEN\n#  POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS));;\nOK..\ngoal proved\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D = A /\\ B /\\ C /\\ D\n\nPrevious subproof:\ngoal proved\n() : void\n\end{verbatim}\end{session}\n\nUsing this tactic, a derived rule \ml{CONJ\_EQ} can be defined that proves\ntwo conjunctions equal. This is what is needed to replace the call to\n\ml{TAUT\_RULE}.\n\ml{CONJ\_EQ} is defined with the predefined function:\n\n\begin{hol}\begin{verbatim}\n   PROVE : term # tactic -> thm\n\end{verbatim}\end{hol}\n\n\noindent \ml{PROVE}\ml{(}$t$\ml{,}$T$\ml{)} applies the tactic $T$ to\nthe goal \ml{([],}$t$\ml{)}; if this goal is proved by $T$ then the\nresulting justification is applied to \ml{[]} to obtain the theorem\n\ml{|-}~$t$, which is returned. If $T$ does not solve the goal, then\nthe application of \ml{PROVE} fails. 
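For example (an illustrative application, not one of the manual's sessions), the complete interactive proof above can be replayed in a single call of \ml{PROVE}, using exactly the tactic assembled earlier:\n\n\begin{hol}\begin{verbatim}\n   PROVE(\"^ex1 = ^ex1_norm\",\n         EQ_TAC          THEN\n         DISCH_TAC       THEN\n         REPEAT CONJ_TAC THEN\n         POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS))\n\end{verbatim}\end{hol}\n\n\noindent This evaluates directly to the equational theorem proved interactively above.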
Using \\ml{PROVE}, the definition\nof \\ml{CONJ\\_EQ} is:\n\n\\begin{hol}\\begin{verbatim}\n   let CONJ_EQ t1 t2 =\n    PROVE (\"^t1 = ^t2\",\n           EQ_TAC          THEN\n           DISCH_TAC       THEN\n           REPEAT CONJ_TAC THEN\n           POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS))\n\\end{verbatim}\\end{hol}\n\n\n\\noindent Replacing the call to \\ml{TAUT\\_RULE} in the definition\nof \\ml{CONJ\\_NORM\\_CONV2} results in:\n\n\\begin{hol}\\begin{verbatim}\n   let CONJ_NORM_CONV3 t =\n    if is_conj t\n     then CONJ_EQ t (list_mk_conj(sort $<< (setify(conjuncts t))))\n     else fail\n\\end{verbatim}\\end{hol}\n\n\\noindent Continuing the session:\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_EQ t1 t2 =\n# PROVE (\"^t1 = ^t2\",\n#        EQ_TAC          THEN\n#        DISCH_TAC       THEN\n#        REPEAT CONJ_TAC THEN\n#        POP_ASSUM(MAP_FIRST ACCEPT_TAC o CONJUNCTS));;\nCONJ_EQ = - : (term -> conv)\n\n#let CONJ_NORM_CONV3 t =\n# if is_conj t\n#  then CONJ_EQ t (list_mk_conj(sort $<< (setify(conjuncts t))))\n#  else fail;;\nCONJ_NORM_CONV3 = - : conv\n\\end{verbatim}\\end{session}\n\n\\noindent \\ml{CONJ\\_NORM\\_CONV3} is almost twice\nas efficient as \\ml{CONJ\\_NORM\\_CONV2}. To show this, the timer is switched\nback on.\n\n\\begin{session}\\begin{verbatim}\n#timer true;;\nfalse : bool\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent Here is the big example with \\ml{CONJ\\_NORM\\_CONV3}:\n\n\\begin{session}\\begin{verbatim}\n#CONJ_NORM_CONV3 \"^ex1 /\\ (^ex1 /\\ (^ex1 /\\ ^ex1 /\\ ^ex1) /\\ ^ex1)\";;\n|- (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   ((A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    A /\\\n    (B /\\ C /\\ A) /\\\n    (C /\\ A /\\ D) /\\\n    D) /\\\n   A /\\\n   (B /\\ C /\\ A) /\\\n   (C /\\ A /\\ D) /\\\n   D =\n   A /\\ B /\\ C /\\ D\nRun time: 1.0s\nGarbage collection time: 0.5s\nIntermediate theorems generated: 775\n\\end{verbatim}\\end{session}\n\n\\section{Further optimizations}\n\nFurther improvements are still possible. As an exercise the reader\nmight want to decipher the following highly optimized definition of\n\\ml{CONJ\\_EQ}.\n\nThe function \\ml{PROVE\\_CONJ}, defined below,\nconverts a term $t$ to the theorem \\ml{|-}~$t$\nif that theorem occurs in a supplied list of theorems (\\ml{ths} in the\ncode below), or $t$ is a conjunction each of whose conjuncts occurs in\nthe list. The definition of \\ml{PROVE\\_CONJ} uses the following\npredefined \\ML{} functions:\n\n\\begin{itemize}\n\\item \\ml{uncurry}~$f$~\\ml{(}$x$\\ml{,}$y$\\ml{)}~~\\ml{=}~~$f$~$x$~$y$\n\n\\item \\ml{(}$f${\\small\\verb% # %}$g$\\ml{)}\\ml{(}$x$\\ml{,}$y$\\ml{)}~~\\ml{=}~~\\ml{(}$f\\ x$~\\ml{,}~$g\\ y$\\ml{)}\n\n\\item \\ml{find}~$p$~\\ml{[}$x_1\\ml{;}\\ldots\\ml{;}x_n$\\ml{]}~~=~~{\\it the first $x_i$ for which $p\\ x_i$ is true\\/}\n\n\\end{itemize}\n\n\\noindent and the inference rule \\ml{CONJ}:\n\n\n\\[ \\Gamma_1\\turn\nt_1\\qquad\\qquad\\qquad\\Gamma_2\\turn t_2\\over \\Gamma_1\\cup\\Gamma_2 \\turn t_1\\conj\nt_2 \\]\n\n\n\\noindent Here is the definition of \\ml{PROVE\\_CONJ}:\n\n\\begin{hol}\\begin{verbatim}\n   letrec PROVE_CONJ ths tm =\n    (uncurry CONJ ((PROVE_CONJ ths # PROVE_CONJ ths) (dest_conj tm))) ?\n    find (\\th. 
concl th = tm) ths\n\\end{verbatim}\\end{hol}\n\n\\noindent Using this, the optimized \\ml{CONJ\\_EQ}, called\n\\ml{CONJ\\_EQ2}, is defined using \\ml{IMP\\_ANTISYM\\_RULE} (a predefined rule):\n\n\n\\[ \\Gamma_1 \\turn t_1 \\imp t_2 \\qquad\\qquad \\Gamma_2\\turn t_2 \\imp t_1\\over\n\\Gamma_1 \\cup \\Gamma_2 \\turn t_1 = t_2\\]\n\n\\noindent The definition is:\n\n\\begin{hol}\\begin{verbatim}\n   let CONJ_EQ2 t1 t2 =\n    let imp1 = DISCH t1 (PROVE_CONJ (CONJUNCTS(ASSUME t1)) t2)\n    and imp2 = DISCH t2 (PROVE_CONJ (CONJUNCTS(ASSUME t2)) t1)\n    in IMP_ANTISYM_RULE imp1 imp2\n\\end{verbatim}\\end{hol}\n\n\\noindent Loading these \\ML{} function definitions into \\HOL:\n\n\\begin{session}\\begin{verbatim}\n#letrec PROVE_CONJ ths tm =\n# (uncurry CONJ ((PROVE_CONJ ths # PROVE_CONJ ths) (dest_conj tm))) ?\n# find (\\th. concl th = tm) ths;;\nPROVE_CONJ = - : (thm list -> conv)\nRun time: 0.0s\n\n#let CONJ_EQ2 t1 t2 =\n# let imp1 = DISCH t1 (PROVE_CONJ (CONJUNCTS(ASSUME t1)) t2)\n# and imp2 = DISCH t2 (PROVE_CONJ (CONJUNCTS(ASSUME t2)) t1)\n# in IMP_ANTISYM_RULE imp1 imp2;;\nCONJ_EQ2 = - : (term -> conv)\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\n\\noindent A version of \\ml{CONJ\\_NORM\\_CONV} that\nuses \\ml{CONJ\\_EQ2} is defined by:\n\n\\begin{hol}\\begin{verbatim}\n   let CONJ_NORM_CONV4 t =\n    if is_conj t\n     then CONJ_EQ2 t (list_mk_conj(sort $<< (setify(conjuncts t))))\n     else fail\n\\end{verbatim}\\end{hol}\n\n\n\\noindent Loading this into \\ML:\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_NORM_CONV4 t =\n# if is_conj t\n#  then CONJ_EQ2 t (list_mk_conj(sort $<< (setify(conjuncts t))))\n#  else fail;;\nCONJ_NORM_CONV4 = - : conv\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent This is even faster than \\ml{CONJ\\_NORM\\_CONV3}:\n\n\\begin{session}\\begin{verbatim}\n#CONJ_NORM_CONV4 \"^ex1 /\\ (^ex1 /\\ (^ex1 /\\ ^ex1 /\\ ^ex1) /\\ ^ex1)\";;\n|- (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n   ((A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\\n    A /\\\n    (B /\\ C /\\ A) /\\\n    (C /\\ A /\\ D) /\\\n    D) /\\\n   A /\\\n   (B /\\ C /\\ A) /\\\n   (C /\\ A /\\ D) /\\\n   D =\n   A /\\ B /\\ C /\\ D\nRun time: 0.4s\nIntermediate theorems generated: 155\n\\end{verbatim}\\end{session}\n\n\\section{Normalizing all subterms}\n\nThere is an important difference in the functionality of\n\\ml{CONJ\\_NORM\\_CONV} and the various optimised versions of it. The\ndifference is that \\ml{CONJ\\_NORM\\_CONV} applies to any term,\nnormalizing all subterms that are conjunctions. However the functions\n\\ml{CONJ\\_NORM\\_CONV}{\\small $n$} (where {\\small $n = 2,3,4$}) all\nfail on non-conjunctions.\n\n\\begin{session}\\begin{verbatim}\n#CONJ_NORM_CONV \"^ex1 ==> ^ex1\";;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==>\n   A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D =\n   A /\\ B /\\ C /\\ D ==> A /\\ B /\\ C /\\ D\nRun time: 2.0s\nGarbage collection time: 0.6s\nIntermediate theorems generated: 1307\n\n#CONJ_NORM_CONV4 \"^ex1 => ^ex1\";;\nneed 2 nd branch to conditional\nskipping: ex1 \" ;; parse failed\n\n#CONJ_NORM_CONV4 \"^ex1 ==> ^ex1\";;\nevaluation failed     fail\n\\end{verbatim}\\end{session}\n\nWhat is needed is a function that will apply a conversion to all\nconjunctive subterms of a term. 
Such a function is \\ml{TOP\\_DEPTH\\_CONV}, however\n\n\\begin{hol}\\begin{verbatim}\n   TOP_DEPTH_CONV CONJ_NORM_CONV4\n\\end{verbatim}\\end{hol}\n\n\\noindent would loop, because \\ml{CONJ\\_NORM\\_CONV4} never fails on\nconjunctions, so \\ml{TOP\\_DEPTH\\_CONV} would keep applying it forever!\nThis is easily got around using:\n\n\\begin{hol}\\begin{verbatim}\n   CHANGED_CONV : conv -> conv\n\\end{verbatim}\\end{hol}\n\n\\noindent \\ml{CHANGED\\_CONV}~$c$ behaves like $c$, except that if $c$\nhas no effect, then \\ml{CHANGED\\_CONV}~$c$ fails.\n\n\\begin{session}\\begin{verbatim}\n#TOP_DEPTH_CONV(CHANGED_CONV CONJ_NORM_CONV4) \"^ex1 ==> ^ex1\";;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==>\n   A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D =\n   A /\\ B /\\ C /\\ D ==> A /\\ B /\\ C /\\ D\nRun time: 0.5s\nIntermediate theorems generated: 292\n\\end{verbatim}\\end{session}\n\nAlthough this works, the scanning through subterms done by\n\\ml{TOP\\_DEPTH\\_CONV} is a bit `brute force'. Further efficiency can\nbe obtained by writing a special scanning function that\njust applies a conversion to maximal conjunctive subterms.\nThis is provided by the code below.\nThe understanding of this is a fairly hard exercise for the reader. The\nsection on conversions in \\DESCRIPTION\\ should be helpful.\n\nFirst, an auxiliary derived rule for combining two equations into\na single equation by conjoining  the left hand sides and  the\nright hand sides.\n\n\\begin{session}\\begin{verbatim}\n#let MK_CONJ th1 th2 = MK_COMB(AP_TERM \"$/\\\" th1, th2);;\nMK_CONJ = - : (thm -> thm -> thm)\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent Next a function that conjoins the left hand sides and right\nhand sides of lists of equations.\n\n\\begin{session}\\begin{verbatim}\n#letrec MK_CONJL l =\n# if null l     then fail\n# if null(tl l) then hd l\n#               else MK_CONJ (hd l) (MK_CONJL(tl l));;\nMK_CONJL = - : proof\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent Next, a function that applies a conversion $c$ to all\nconjunctive subterms of a term. 
This uses the \\ML{} function:\n\n\\bigskip\n\\ml{map}~$f$~\\ml{[}$x_1\\ml{;}\\ldots\\ml{;}x_n$\\ml{]}~~=~~\\ml{[}$f\\ x_1\\ml{;}\\ldots\\ml{;}f\\ x_n$\\ml{]}\n\n\\bigskip\n\n\\noindent and the rules \\ml{MK\\_COMB}:\n\n\n\\[ \\Gamma_1 \\turn f = g \\qquad\\qquad \\Gamma_2\\turn x = y \\over\n\\Gamma_1 \\cup \\Gamma_2 \\turn f\\ x = g\\ y\\]\n\n\\noindent and \\ml{MK\\_ABS}:\n\n\n\\[ \\Gamma \\turn \\forall x.\\ t_1 = t_2 \\over\n\\Gamma \\turn (\\lambda x.\\ t_1) = (\\lambda x.\\ t_2)\\]\n\n\\noindent and \\ml{GEN}:\n\n$$\\Gamma\\turn t\\over\\Gamma\\turn\\uquant{x} t$$\n\\begin{itemize}\n\\item Where $x$ is not free in $\\Gamma$.\n\\end{itemize}\n\n\n\\noindent and \\ml{REFL}:\n\n$$ \\over\\turn t = t$$\n\n\\begin{itemize}\n\\item\\ml{REFL}~$t$~~\\ml{=}~~ $\\turn t = t$.\n\\end{itemize}\n\n\\noindent The definition of \\ml{CONJ\\_DEPTH\\_CONV} also uses:\n\n\\begin{hol}\\begin{verbatim}\n   is_comb   : term -> bool\n   dest_comb : term -> (term # term)\n\n   is_abs    : term -> bool\n   dest_abs  : term -> (term # term)\n\\end{verbatim}\\end{hol}\n\n\\noindent which are the tests and destructors for combinations and\nabstractions, respectively.\n\n\\begin{session}\\begin{verbatim}\n#letrec CONJ_DEPTH_CONV c tm =\n# if is_conj tm\n#  then (c THENC (MK_CONJL o map (CONJ_DEPTH_CONV c) o conjuncts)) tm\n# if is_comb tm\n#  then (let rator,rand = dest_comb tm in\n#         MK_COMB (CONJ_DEPTH_CONV c rator, CONJ_DEPTH_CONV c rand))\n# if is_abs tm\n#  then (let bv,body = dest_abs tm in\n#         let bodyth = CONJ_DEPTH_CONV c body in\n#         MK_ABS (GEN bv bodyth))\n#   else (REFL tm);;\nCONJ_DEPTH_CONV = - : (conv -> conv)\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent The next session shows that \\ml{CONJ\\_DEPTH\\_CONV} is an\nimprovement over \\ml{TOP\\_DEPTH\\_CONV}.\n\n\\begin{session}\\begin{verbatim}\n#CONJ_DEPTH_CONV CONJ_NORM_CONV4 \"^ex1 ==> ^ex1\";;\n|- A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==>\n   A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D =\n   A /\\ B /\\ C /\\ D ==> A /\\ B /\\ C /\\ D\nRun time: 0.4s\nIntermediate theorems generated: 95\n\\end{verbatim}\\end{session}\n\n\\noindent However, the figures show that we are getting to a point of\ndiminishing returns.\n\nFinally, the tactic for normalizing all conjunctions in a goal is:\n\n\\begin{session}\\begin{verbatim}\n#let CONJ_NORM_TAC = CONV_TAC (CONJ_DEPTH_CONV CONJ_NORM_CONV4);;\nCONJ_NORM_TAC = - : tactic\nRun time: 0.0s\n\\end{verbatim}\\end{session}\n\n\\noindent This is illustrated by:\n\n\\begin{session}\\begin{verbatim}\n#g \"^ex1 ==> ^ex1 /\\ ^ex1_norm\";;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==>\n (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\ A /\\ B /\\ C /\\ D\"\n\n() : void\nRun time: 0.1s\n\n#e CONJ_NORM_TAC;;\nOK..\n\"A /\\ B /\\ C /\\ D ==> A /\\ B /\\ C /\\ D\"\n\n() : void\nRun time: 0.3s\nIntermediate theorems generated: 110\n\\end{verbatim}\\end{session}\n\n\\noindent To show how much faster the optimized version is, here is\nthe last step repeated with the first version of the tool. 
The \ML\\nfunction \ml{b} backs up to the last goal.\n\n\begin{session}\begin{verbatim}\n#b();;\n\"A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D ==>\n (A /\\ (B /\\ C /\\ A) /\\ (C /\\ A /\\ D) /\\ D) /\\ A /\\ B /\\ C /\\ D\"\n\n() : void\nRun time: 0.1s\n\end{verbatim}\end{session}\n\n\noindent Expanding with the slow tactic:\n\n\begin{session}\begin{verbatim}\n#e(CONV_TAC CONJ_NORM_CONV);;\nOK..\n\"A /\\ B /\\ C /\\ D ==> A /\\ B /\\ C /\\ D\"\n\n() : void\nRun time: 3.5s\nGarbage collection time: 1.0s\nIntermediate theorems generated: 1932\n\end{verbatim}\end{session}\n\n\noindent it is 10 times slower and generates almost 20 times as many\nprimitive inference steps!\n\n\nHere is the complete \ML{} program for the optimized normalizer.\n\n\begin{hol}\begin{verbatim}\n   letrec insert ord x l =\n    if null l\n     then [x]\n    if ord(x,hd l)\n     then x.l\n     else hd l.(insert ord x (tl l));;\n\n   letrec sort ord l =\n    if null l\n     then []\n     else insert ord (hd l) (sort ord (tl l));;\n\end{verbatim}\end{hol}\n\n\begin{hol}\begin{verbatim}\n   letrec PROVE_CONJ ths tm =\n    (uncurry CONJ ((PROVE_CONJ ths # PROVE_CONJ ths) (dest_conj tm))) ?\n    find (\th. concl th = tm) ths;;\n\n   let CONJ_EQ t1 t2 =\n    let imp1 = DISCH t1 (PROVE_CONJ (CONJUNCTS(ASSUME t1)) t2)\n    and imp2 = DISCH t2 (PROVE_CONJ (CONJUNCTS(ASSUME t2)) t1)\n    in IMP_ANTISYM_RULE imp1 imp2;;\n\end{verbatim}\end{hol}\n\n\begin{hol}\begin{verbatim}\n   let CONJ_NORM_CONV t =\n    if is_conj t\n     then CONJ_EQ t (list_mk_conj(sort $<< (setify(conjuncts t))))\n     else fail;;\n\end{verbatim}\end{hol}\n\n\begin{hol}\begin{verbatim}\n   let MK_CONJ th1 th2 = MK_COMB(AP_TERM \"$/\\" th1, th2);;\n\n   letrec MK_CONJL l =\n    if null l     then fail\n    if null(tl l) then hd l\n                  else MK_CONJ (hd l) (MK_CONJL(tl l));;\n\end{verbatim}\end{hol}\n\n\begin{hol}\begin{verbatim}\n   letrec CONJ_DEPTH_CONV c tm =\n    if is_conj tm\n     then (c THENC (MK_CONJL o map (CONJ_DEPTH_CONV c) o conjuncts)) tm\n    if is_comb tm\n     then (let rator,rand = dest_comb tm in\n            MK_COMB (CONJ_DEPTH_CONV c rator, CONJ_DEPTH_CONV c rand))\n    if is_abs tm\n     then (let bv,body = dest_abs tm in\n            let bodyth = CONJ_DEPTH_CONV c body in\n            MK_ABS (GEN bv bodyth))\n     else (REFL tm);;\n\end{verbatim}\end{hol}\n\n\begin{hol}\begin{verbatim}\n   let CONJ_NORM_TAC = CONV_TAC (CONJ_DEPTH_CONV CONJ_NORM_CONV);;\n\end{verbatim}\end{hol}\n\n\nAlthough the optimized implementation is much more efficient, it uses\nless general methods. An advantage of the simple implementation based\non rewriting is that essentially the same algorithm can be used to\nnormalize terms built out of any associative, commutative and\nidempotent operation. The two exercises that follow (whose solutions are not\nsupplied) suggest that the reader try to extract general principles from the\nconjunction normalizer and use these to implement generic tools.\n\n\subsection{Exercise 1}\n\nImplement a normalizer for any associative and commutative operator.\n\n\begin{hol}\begin{verbatim}\n   AC_CANON_CONV : thm # thm -> conv\n\end{verbatim}\end{hol}\n\n\noindent The two theorem arguments should be the\nassociative and commutative laws for the operator. 
For example:\n\n\\begin{hol}\\begin{verbatim}\n   AC_CANON_CONV(ASSOC,SYM1)\n\\end{verbatim}\\end{hol}\n\n\\noindent would be a canonicalizer for conjunctions.\nUse the `brute force' rewriting method described at the beginning of this chapter.\n\n\\subsection{Exercise 2}\n\n\nImplement an optimized canonicalizer:\n\n\\begin{hol}\\begin{verbatim}\n   FAST_AC_CANON_CONV : thm # thm -> conv \\end{verbatim}\\end{hol}\n\\noindent This should use tuned rewriting (\\eg\\ a generalization of\n\\ml{CONJ\\_DEPTH\\_CONV}) and be as fast as possible. Try to think up\ntricks to minimise the amount of general matching and to make every\ninference count.\n\n\n", "meta": {"hexsha": "1a965ea011d3ac1b02b2169f8a30d5dad3a576b9", "size": 41325, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Manual/Tutorial/tool.tex", "max_stars_repo_name": "LiLiming/HOL", "max_stars_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-27T07:51:47.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-27T07:51:47.000Z", "max_issues_repo_path": "Manual/Tutorial/tool.tex", "max_issues_repo_name": "LiLiming/HOL", "max_issues_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Manual/Tutorial/tool.tex", "max_forks_repo_name": "LiLiming/HOL", "max_forks_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.1182228916, "max_line_length": 180, "alphanum_fraction": 0.6538415003, "num_tokens": 13730, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5913818413017228}}
{"text": "\\section{Introduction}\n\n\\subsection{Exercise 1}\nIt seems intuitive that the described self-play would converge to a maximin (or minimax, depending on order of play) policy, since the agent is \ncompeting with itself (alternating between minimizing and maximizing its value function). \n\nWe can make this notion more precise by introducing some notation. Suppose that we are playing Tic-Tac-Toe\nwith ``X'' going first. Let $s$ be an arbitrary game state, $V(s)$ be the probability that ``O'' wins from $s$, $N(s)$ be the neighboring states of $s$,\nand $A_X(s)$ and $A_O(s)$ be the next states chosen by the two respective player policies.\nThen we have\n\\begin{align*}\n        A_X(s) &= \\argmax_{s' \\in N(s)} 1 - V(s') = \\argmin_{s' \\in N(s)} V(s') \\\\\n        A_O(s) &= \\argmax_{s' \\in N(s)} V(s') \\\\\n        V(s) &\\gets V(s) + \\alpha [V(A_O(A_X(s))) - V(s)]\n\\end{align*}\nfrom which we can see that if $V(s)$ converges, it converges to a maximin policy. This policy is not\nnecessarily the same as the one learned by playing against a random opponent for the reasons stated in the\ntext (a random opponent could make suboptimal moves).\n\n\\subsection{Exercise 2}\nBy mapping rotations and reflections of a given game state to a single state, we could significantly shrink\nthe size of our value function table along with the search space of moves. However, we can only do this if\nthe opponent also regards symmetric game positions as identical. If the opponent takes different actions\nin symmetric positions, then treating symmetric positions as the same would lead to sub-optimal play on our\npart. \n\n\\subsection{Exercise 3}\nA greedy player could converge to a local optima, which may be worse than a nongreedy player who explores more\nof the search space. To illustrate the kind of problems that could occur, consider the following simplified\nscenario: the greedy player has just made its best move from position $s$, and the opponent must choose\nbetween two moves leading to states $u$ and $v$ respectively. Suppose we only have one move from either of\nthe positions $u$ and $v$, and that we always lose from $u$ and always win from $v$. Furthermore, suppose\nthe opponent chooses the move that leads to $v$ with probability $p \\gg V(s)$. Then, letting $s'$ be the final\nstate we reach, we can compute the change to $V(s)$ as\n\\begin{align*}\n        \\mathrm{E} [\\alpha (V(s') - V(s))] = \\alpha (\\mathrm{E} [V(s')] - V(s)) = \\alpha (p - V(s)) > 0 \n\\end{align*}\nso we will most likely keep making the same move from $s$. Thus, if there is another move from $s$ that \nwould actually lead to a certain win against the opponent's policy, we will not discover it.\n\n\\subsection{Exercise 4}\nWhen we do not learn from exploratory moves, our expected updates are as described in the text:\n\\begin{align*}\n        V(s) &\\gets V(s) + \\alpha [V(s') - V(s)]\n\\end{align*}\nwith $V(s)$ indicating the probability that we win from state $s$ assuming we make our best moves. 
\nIf we instead explore with likelihood $\epsilon$, then our expected updates look more like:\n\begin{align*}\n        V(s) &\gets V(s) + \alpha [(1 - \epsilon) V(s') + \epsilon \sum_{s'' \neq s'} P(s'') V(s'') - V(s)]\n\end{align*}\nwhere $P(s'')$ indicates the probability with which we end up in the non-greedy state $s''$ during exploration.\nIn this case, we can interpret the probability $V(s)$ to be the expected probability of winning assuming\nwe make our best move with probability $1 - \epsilon$ and an exploratory move with probability $\epsilon$. \n\nIf we assume that we want to keep making exploratory moves, then it makes more sense to learn the latter\nset of probabilities, as they incorporate information from all possible moves from a given state. In terms of\nwinning, however, it seems like the former set of probabilities would be better, as they hone in on the\nprobabilities corresponding to the best possible outcomes. \n\n\subsection{Exercise 5}\nSome potential improvements that come to mind:\n\begin{itemize}\n        \item Track whether opponent plays symmetrically or not during learning, and then condense value \n                function table afterwards if possible.\n        \item If at any point during learning we play a sequence of moves that leads to a win, try to play\n                the exact same sequence of moves again. Keep doing so for as long as it is possible to play\n                the same sequence. This would help against a deterministic opponent - if we can win once, we\n                can always win.\n        \item We can prune the search space. We can disregard (set winning probability to 0) \n                all states from which the opponent only has a single move, with that move being a winning move.\n\end{itemize}\n\nPerhaps a better approach to this problem would be to combine reinforcement learning with minimax optimal\nplay. In other words, if we reach a state where we can win no matter what the opponent does, then there is\nno need to learn anything; we just use the minimax strategy from that state. \n", "meta": {"hexsha": "0dd95c6d7c41100c4ee1a61c1d7ab92f0c2da840", "size": 4995, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Reinforcement_Learning_Sutton_Barto/chapter_1.tex", "max_stars_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_stars_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-19T07:33:25.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-19T07:33:25.000Z", "max_issues_repo_path": "Reinforcement_Learning_Sutton_Barto/chapter_1.tex", "max_issues_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_issues_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Reinforcement_Learning_Sutton_Barto/chapter_1.tex", "max_forks_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_forks_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.7236842105, "max_line_length": 152, "alphanum_fraction": 0.7271271271, "num_tokens": 1231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7745833737577158, "lm_q1q2_score": 0.5913818377201072}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\usepackage{hyperref}\n\\title{Exercise in aggregate signatures}\n\\author{Todor Milev}\n\n\\begin{document}\n\\maketitle \nAggregate signatures are one of the core technologies used by Fabcoin's Kanban system. Aggregate signatures have been proposed as early as 1995 in \\cite{Horster1995} - an overview of the subject can be found, for example, in \\cite{cryptoeprint:SimpleSchnorrMultisignatures}. Recently, aggregate signatures have been proposed as an optimization for Bitcoin's network. Besides Fabcoin, a number of crypto-currencies have put aggregate signatures in the core of their technology. In the present text we discuss one such scheme recently implemented by the Zilliqa crypto-currency and then propose an exercise on the topic. \n\nThe purpose of an aggregate signature is to have a message $m$ be signed by multiple entities. The signatories combine their public keys to a single aggregate one with the same size as a regular public key. Likewise, the signatories aggregate their signature to one of regular size. The entire process happens under the assumption that the entities do not trust one another and do not share any secrets. The resulting aggregate signature achieves the same goal as would be achieved by having each of the individual signatures for $m$ of every signatory, but is far shorter. To see why this optimization is significant consider the task of having $1000$ nodes sign a small message $m$ - say $90$ bytes. A standard ECDSA signature is $65$ bytes. This means that all $1000$ signatures would take $1000\\cdot 65$ bytes - much more than a single aggregate signature. \n\n\nOur exercise is expected to be completed ``by hand'' with the help of online cryptography tools or tools provided by FAB coin. Here, ``by hand'' means that individual cryptographic operations are expected to be carried via online tools/ready software with intermediate results recorded (copy+paste) in a resulting homework paper. The resulting homework paper is expected to have explanations and notation written by the student using proper academic language and references. The exercise can be expanded to a larger project by requesting an exposition of the theory behind aggregate signatures. The exercise can be further (significantly) expanded by requesting that the students carry out part (or all) of the computations using their own code (which will be evaluated), rather than using online tools.\n\n\\section{The exercise}\nThe Zilliqa aggregate signature scheme can be roughly described (omitting some technicalities) as follows. After each step, we give an exercise to be carried out by the student. The exercises roughly requests that the student simulate (on paper) the steps of the aggregate signature for $1$ aggregator and $3$ signer nodes. \n\nThe student is expected to properly cite all tools (online or otherwise) used to carry out the intermediate computations in the exercise.\n\n\n\\subsection{Preparation step} \nIn this step, the signers prove their identities to a special node called an aggregator.\n\\begin{itemize}\n\\item Select a special node called an aggregator.\n\\item Each signer: one-time send public key to aggregator.\n\\item Aggregator: send back challenge message to be signed.\n\\item Each signer: send signed challenge back.\n\\item Aggregator: verify the signers' signatures. 
\n\end{itemize}\n\n\subsubsection{Proposed exercise}\n\begin{itemize}\n\item The student prepares a challenge message and presents its encoding in a standard format (hex, base64, $\dots$).\n\item The student generates $3$ public-secret key pairs corresponding to three signer nodes. Encodings of the three pairs are presented.\n\item The student generates $3$ signatures for the challenge message. Encoded results are presented.\n\item[Optional] The student presents intermediate computations in the verification of the $3$ signatures.\n\end{itemize}\n\n\subsection{Signature aggregation}\nIn this step, the aggregator node composes an aggregate signature. The student should skip all steps related to networking and assume that all messages sent between the nodes arrive without error. \n\n\begin{itemize}\n\item Each signer: choose random $\mathrm{nonce}_i$, compute $q_i = g^{\mathrm{nonce}_i}$. Here, we assume $g$ is the generator of the elliptic curve $y^2=x^3+7$ over $\mathbb Z/p\mathbb Z$ with $p = 2^{256}- 2^{32}-977$ - i.e., curve secp256k1, the standard curve used in Fabcoin and Bitcoin. \n\item Each signer: send $q_i$ to aggregator.\n\item Aggregator: compute $ \mathrm{Pub} =\prod_{i} \mathrm{pub}_i =\mathrm{pub}_1\cdot \dots \cdot \mathrm{pub}_n $. \n\item Aggregator: compute $ Q = \prod_i q_i$.\n\item Aggregator: compute $\mathrm{challenge} = H(Q, \mathrm{Pub}, \mathrm{digest})$.\n\item Aggregator: send $\mathrm{challenge},\mathrm{Pub}, \mathrm{digest}$ to signers.\n\item Each signer: verify $\mathrm{challenge} = H(Q, \mathrm{Pub}, \mathrm{digest}) $.\n\item Each signer: compute $\mathrm{solution}_i = {\mathrm{nonce}_i - \mathrm{challenge} \cdot \mathrm{secret}_i} $.\n\item Each signer: send $\mathrm{solution}_i$ to aggregator.\n\item Aggregator: compute $\mathrm{solution} = \sum_i \mathrm{solution}_i $.\n\item Aggregator: final signature: $(\mathrm{challenge}, \mathrm{solution})$.\n\n\end{itemize}\n\n\subsubsection{Proposed exercise}\n\begin{itemize}\n\item The student presents hex encodings of the generator $g$ of secp256k1, of $p = 2^{256}- 2^{32}-977$ and of the order (number of elements) of secp256k1.\n\item The student selects $\mathrm{nonce}_1, \mathrm{nonce}_2,\mathrm{nonce}_3$. Encodings are presented.\n\n\item The student computes the $q_i$'s. Encodings are presented.\n\item The student computes $Q$. 
Encoding is presented.\n\item The student computes $\mathrm{challenge}$ and presents its encoding.\n\item The student computes each $\mathrm{solution}_i$ and presents encodings.\n\item The student computes the final signature and presents its encoding.\n\end{itemize}\n\n\n\subsection{Aggregate signature verification}\n[To be written].\n\n\bibliographystyle{plain}\n\bibliography{../bibliography}\n\n\end{document}", "meta": {"hexsha": "84890fab5b60ef5471822b1ab915f79e227ebdcf", "size": 6084, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/presentations/aggregate_signature_exercise.tex", "max_stars_repo_name": "blockchaingate/Kanban", "max_stars_repo_head_hexsha": "b48c5db37107a09749ef3c4014fca939ac98f073", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-06-27T01:06:04.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-29T14:21:28.000Z", "max_issues_repo_path": "doc/presentations/aggregate_signature_exercise.tex", "max_issues_repo_name": "blockchaingate/Kanban", "max_issues_repo_head_hexsha": "b48c5db37107a09749ef3c4014fca939ac98f073", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2018-12-03T16:18:40.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-07T11:49:18.000Z", "max_forks_repo_path": "doc/presentations/aggregate_signature_exercise.tex", "max_forks_repo_name": "FAB-Coin/Kanban-js", "max_forks_repo_head_hexsha": "75b0bd96b98d13b2ee7b7467dcf4b79f2e21c29c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-01-22T18:09:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-10T06:33:39.000Z", "avg_line_length": 77.0126582278, "max_line_length": 861, "alphanum_fraction": 0.7809007232, "num_tokens": 1444, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.591381837328856}}
{"text": "\\documentclass[11pt]{article}\n\n\\setlength{\\oddsidemargin}{-0.25 in}\n\\setlength{\\evensidemargin}{-0.25 in}\n\\setlength{\\topmargin}{-0.9 in}\n\\setlength{\\textwidth}{7.0 in}\n\\setlength{\\textheight}{9.0 in}\n\\setlength{\\headsep}{0.75 in}\n\\setlength{\\parindent}{0.3 in}\n\\setlength{\\parskip}{0.1 in}\n\\usepackage{epsf}\n\\usepackage{pseudocode}\n\\usepackage[shortlabels]{enumitem}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{color}\n\\usepackage[normalem]{ulem}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\\usepackage{wrapfig}\n\\pagenumbering{arabic}\n\\def\\O{\\mathop{\\smash{O}}\\nolimits}\n\\def\\o{\\mathop{\\smash{o}}\\nolimits}\n\\newcommand{\\e}{{\\rm e}}\n\\newcommand{\\R}{{\\bf R}}\n\\newcommand{\\Z}{{\\bf Z}}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{booktabs}\n\n\\usepackage{graphicx}\n\\usepackage{tikz}\n\\usetikzlibrary{arrows.meta}\n\n%% display solutions or not\n\\newif\\ifsol\n\\soltrue % comment out to hide solutions\n\n\\title{Section 11: Classification and Clustering}\n\\date{}\n\\author{CS 182 - Artificial Intelligence}\n\\begin{document}\n\\maketitle\n\n\\noindent Algorithms in machine learning fall under {\\bf{supervised}} and {\\bf{unsupervised}} approaches. In the former, we are given a set of inputs and labels/outputs, and would like to learn the relationship between them to guess label/output values for new data. In the latter, we are only given input values, and aim to find structure within them.\n\n\\section*{Supervised Learning}\n\nSupervised learning can also be broken down into {\\bf{parametric}} and {\\bf{non-parametric}} approaches. Parametric algorithms, such as Naive Bayes or regression, rely on learning a set of parameters that define the classifier. Non-parametric algorithms, such as k-NN, rely directly on data rather than on a parameterized model. \n\n\\subsection*{Naive Bayes}\n\nNaive Bayes can be thought of as a {\\bf{probabilistic inference algorithm}} over a special case of a Bayes Net that is applied to classification. The model is made of an unknown variable $Y$ representing the possible classes, which causes a collection of emission variables $E_1$, $E_2$, $\\dots$, $E_n$. We make a (``naive'') assumption that each of the emissions is conditionally independent of one another given $Y$. While this may not be true for most applications, it still results in a simple and useful algorithm. Finally, given the emissions, we can calculate classification probabilities by inference over the Bayes Net:\n\\[\nP(Y | x_1, \\ldots, x_n) \\propto P(Y, x_1, \\ldots, x_n) = P(Y) \\prod_{i=1}^n P(x_i|Y)\n\\]\nTo build up this network from labeled data, we can count the frequencies of occurrence of observing $(x_i, Y)$ for each $x_i$ and normalize to construct conditional probabilities. Improvements on this approach involve {\\bf{smoothing}}, which is discussed further down.\n\n\n\\subsection*{Regression}\n\nWhen the output variable $y$ is continuous rather than discrete, we can perform regression. First, a {\\bf{model}} $h_{\\theta}$ is chosen to describe the input-output relationship, where the {\\bf{model parameters}} $\\theta$ are calculated by optimizing over a {\\bf{loss function}} $L$ on the training data. For instance, if we assume that the underlying model is {\\bf{linear}}, we choose $h_{\\theta}(f^{(i)}) = \\theta^T f^{(i)}$ for every $i$th data vector. Often, a quadratic loss function is chosen. 
This choice is not only intuitive but also theoretically supported: minimizing the quadratic loss yields the {\bf{maximum likelihood estimate}} of the model parameters, under the assumption of i.i.d. Gaussian noise in the data. Overall, the optimization problem to solve {\bf{linear regression}} would be:\n\[\n\min_{\theta} \; \; \sum_{i = 1}^m (\theta^T f^{(i)} - y^{(i)})^2\n\]\nThere are other possible loss functions, such as:\n\begin{itemize}\n    \item \bf{absolute ($L_1$) loss}: $L = |h_{\theta}(f) - y|$\n    \item \bf{deadband loss}: $L = \max(0, |h_{\theta}(f) - y| - \epsilon)$\n\end{itemize}\n\n\noindent {\bf{Logistic regression}} is a variant of regression designed for binary classification (where the output variable does assume discrete values $0$ and $1$). Instead of using the linear function $h_{\theta}(f) = \theta^T f$, we apply a sigmoid:\n\[\nh_{\theta} = \frac{1}{1 + e^{-\theta^T f}}\n\]\nSimilarly to linear regression, we choose a loss function, pose an optimization problem, and solve for the optimal parameters $\theta$. \n\n\noindent While linear regression with a quadratic loss is {\bf{convex}} and has a closed-form solution, most of these optimization problems are non-convex and are solved with {\bf{stochastic gradient descent}}. Finally, after an optimization approach is applied and the model parameters $\theta$ are determined, we can guess the output of new data $f$ by calculating $h_{\theta}(f)$ (for the binary logistic case, rounding it to 0 or 1).\n\n\n\n\n\subsection*{k-Nearest Neighbors}\nThis is a simple {\bf{non-parametric}} classification approach which assigns a label to a new point by choosing the most common class of the $k$ closest neighboring points. Variations on this algorithm might give a weighting scheme to the $k$ labels rather than choosing the most common value. \n\n\n\noindent When designing an ML algorithm, the available data is usually broken up into the {\bf{training data}}, which is used to train the classifier, and the {\bf{test data}}, which is used to evaluate the classifier and can in no way be viewed or used beforehand. This division of datasets allows us to deal with two issues:\n\begin{enumerate}\n    \item {\bf{Overfitting}} occurs when a classifier is trained for very high (near 100\%) training accuracy, but is not able to {\bf{generalize}} to the testing data, and has low test accuracy.\n    \begin{enumerate}\n        \item {\bf{Smoothing}} is used in discrete problems to cause more gradual updates to probabilities. For instance, one example of overfitting would be giving zero probabilities to all words not in the training set, which can be mitigated with {\bf{Laplace smoothing}}:\n        \begin{align*}\n        &\text{standard probability:} \; \; P_{ML}(x) = \frac{count(x)}{N} \\\n        &\text{with smoothing:} \; \; P_{LAP, k}(x) = \frac{count(x) + k}{N + k|X|}\n        \end{align*}\n        \item {\bf{Regularization}} is used in continuous problems such as regression. For instance, an example of overfitting would be using a high-order polynomial that goes through all training data points. 
The addition of a regularization term to the loss function places an analytic penalty on problem parameters to encourage smaller values or sparsity:\n        \begin{align*}\n        &\text{standard loss function:} \; \; loss(f^{(i)}, y^{(i)}, \theta) = (\theta^T f^{(i)} - y^{(i)})^2 \\\n        &\text{with $L_2$-regularization:} \; \; loss(f^{(i)}, y^{(i)}, \theta) = (\theta^T f^{(i)} - y^{(i)})^2 + \lambda ||\theta||_2^2 \\\n        &\text{with $L_1$-regularization:} \; \; loss(f^{(i)}, y^{(i)}, \theta) = (\theta^T f^{(i)} - y^{(i)})^2 + \lambda ||\theta||_1\n        \end{align*}\n    \end{enumerate}\n    \item Sometimes algorithms depend on a choice of {\bf{hyperparameters}}, which must be decided outside of the training process. There are a couple of common approaches to tuning:\n    \begin{enumerate}\n        \item Part of the training data is taken to be the {\bf{held-out dataset}}. For each hyperparameter value, the classifier is trained with the rest of the training data and tested on the held-out data, and the one with the highest accuracy is chosen.\n        \item {\bf{Cross-validation}} divides the training data into $m$ equal sections (folds). For each candidate hyperparameter value, the classifier is trained on $m - 1$ of the sections and tested on the remaining one, rotating through all $m$ choices of test section, and the resulting accuracies are averaged. After evaluating all hyperparameter values this way, the one with the highest average tuning test accuracy is chosen.\n    \end{enumerate}\n\end{enumerate}\n\n\n\section*{Unsupervised Learning}\n\nUnsupervised learning algorithms focus primarily on clustering, dimensionality reduction, anomaly detection, or other ways of inferring data structure without labels. The algorithms we have discussed all fall under clustering.\n\n\noindent {\bf{K-means}} is a {\bf{centroid-based}} clustering algorithm. First, initialize $K$ cluster centroids. Then, iteratively:\n\n\noindent Assign each point to nearest cluster centroid:\n\[\na_i = \text{argmin}_k \; \text{dist}(x_i, c_k), \;\; i = 1, \dots, N\n\]\n\n\noindent Recompute the $k$th cluster centroid by taking the mean of the points assigned to cluster $k$:\n\[\nc_k = \frac{1}{|a_i \;:\; a_i = k|} \sum_{(i \;:\; a_i = k)} x_i, \;\; k = 1, \dots, K\n\]\n\n\n\noindent {\bf{Agglomerative clustering}} and {\bf{divisive clustering}} are both {\bf{connectivity-based}} clustering algorithms. The former iteratively constructs larger clusters from smaller clusters. The latter iteratively breaks down large clusters into smaller clusters.\n\n\noindent {\bf{Choosing K:}} The outcomes of all of these algorithms vary with the chosen number of clusters. Some common approaches for choosing $k$ are:\n\begin{itemize}\n    \item We can plot the average of the distances from each point to its cluster's centroid. 
Then, it is reasonable to choose the smallest value of $k$ for which this ``error'' flattens out.\n    \item {\bf{Silhouette diagrams}} are plots of the measure\n    \[\n    m_i = \frac{(b_i - a_i)}{\max(a_i, b_i)}\n    \]\n    where $a_i$ is the mean distance from the point $x_i$ to all points in its cluster, and $b_i$ is the mean distance from $x_i$ to all points in the neighboring (``next best'') cluster.\n\end{itemize}\n\n\n\n%\begin{figure}[ht]\n%\renewcommand{\labelenumii}{\arabic{enumii}.}\n%\setlength{\parindent}{0pt}\n%    \includegraphics[scale=0.4]{figs/carrepair}\n%    \caption{Conditional implications example (left), example conditional probability tables (center), and example bayes net for car repair (right)}\n%    \label{fig:bayes}\n%\end{figure}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\newpage\n\section*{Exercises:}\n\begin{enumerate}\n\n\n\item{{\bf{Silhouette Diagrams.}}}\n\nSuppose we are trying to cluster an unlabeled dataset. We apply k-means with $k = 3$, and are evaluating the clusters with a silhouette diagram.\n\begin{enumerate}\n    \item Suppose that for a certain point $x_i$, $a_i << b_i$. What will be the approximate value of $m_i$, and what does this imply about the clustering of $x_i$? \\\n    \ifsol\n        {\color{blue} If $a_i$ is small relative to $b_i$, this is a good clustering of $x_i$, since it is (on average) much closer to points in its own cluster than to points in the next best cluster. The measure $m_i$ will be close to 1.}\n    \else\n        \vspace{1cm}\n    \fi\n    \item Suppose that for a certain point $x_i$, $a_i \approx b_i$. What will be the approximate value of $m_i$, and what does this imply about the clustering of $x_i$? \\\n    \ifsol\n        {\color{blue} If $a_i$ is similar to $b_i$, this is not a very good clustering of $x_i$, since it is about the same distance away from points in its own cluster as from those in the next best cluster. The measure $m_i$ will be close to 0.}\n    \else\n        \vspace{1cm}\n    \fi\n    \item Suppose that for a certain point $x_i$, $a_i >> b_i$. What will be the approximate value of $m_i$, and what does this imply about the clustering of $x_i$? \\\n    \ifsol\n        {\color{blue} If $a_i$ is much greater than $b_i$, this is an incorrect clustering of $x_i$, since it is much farther from points in its own cluster than from those in the next best cluster (this situation should be impossible with k-means). The measure $m_i$ will be close to -1.}\n    \else\n        \vspace{1cm}\n    \fi\n\end{enumerate}\n\n\n\item{{\bf{Regression.}}}\n\nYou are working for a hospital, and are given a dataset of patient information with hundreds of continuous features (height, weight, blood pressure, etc) along with whether the patient has diabetes. Your goal is to predict whether incoming patients will have diabetes.\n\n\begin{enumerate}\n\item Formalize the problem as an optimization problem using logistic regression. \\\n\ifsol\n    {\color{blue} Let $f^{(i)}$ be the vector of features for the $i$th patient, $y^{(i)}$ be the true outcome (diabetes or no diabetes) for each patient, and $\theta$ be the parameters we are learning. 
Then, if we use a sum-of-squared-error loss function, we are trying to solve the problem:\n    \[\n    \hat{\theta} = \text{argmin}_{\theta} \; \sum_{i} (\frac{1}{1 + e^{-\theta^T f^{(i)}}} - y^{(i)})^2\n    \]\n    \n    }\n\else\n    \vspace{1cm}\n\fi\n\n\item Write down the formula your model would use to compute the probability that a patient has diabetes. \\\n\ifsol\n    {\color{blue} Once the parameters $\theta$ are learned by solving the problem above, a new patient's probability of having diabetes given a feature vector $f$ will be:\n    \[\n    P(y | f) = \frac{1}{1 + e^{-\theta^T f}}\n    \]\n    }\n\else\n    \vspace{1cm}\n\fi\n\n\item Suppose you are now trying to estimate blood pressure using linear regression on the 20 features you consider most important. If you calculate a regression model on a training set of 12 patients, what (approximately) will be your training set error? \\\n\ifsol\n    {\color{blue} In this problem, we have 20 ``unknowns'' (the parameters $\theta$) and only 12 ``equations'' (the training data). Therefore, unless the training data contains equations which exactly contradict each other, this is an under-determined system, and we can choose $\theta$ such that the total error is 0. This is an example of overfitting.}\n\else\n    \vspace{1cm}\n\fi\n\end{enumerate}\n\n\newpage\n\n\item{\bf{Naive Bayes}}\n\nIn this problem, we will be using Naive Bayes to predict the probability that a given email is spam or not spam (ham). We assume a model in which an underlying unobserved state $y$ (spam/ham) creates emissions (words) $w_1, \ldots, w_n$. We use a bag-of-words approach to model text -- the ordering of words within a sentence is ignored. Then, $P(Y)$ is the prior probability across states $Y$, and $p(w_i|Y)$ are the conditional probabilities of seeing a word $w_i$ given $Y$. For the problems, start by assuming a uniform distribution over $Y$, and use the following conditional probability table over words:\n\n% Please add the following required packages to your document preamble:\n% \usepackage{booktabs}\n\begin{table}[h!]\n\centering\n\begin{tabular}{@{}lll@{}}\n\toprule\n        & $p(w|ham)$ & $p(w|spam)$ \\ \midrule\nCS182   & 0.05     & 0.01      \\\nfriend  & 0.02     & 0.04      \\\nneed    & 0.01     & 0.01      \\\non      & 0.02     & 0.02      \\\nour     & 0.02     & 0.02      \\\nproject & 0.007    & 0.003     \\\nstart   & 0.01     & 0.03       \\\nto      & 0.01     & 0.01      \\\nyou     & 0.01     & 0.015     \\\nwe      & 0.01     & 0.015     \\\nworking & 0.01     & 0.03      \\\n...     & ...      & ...      
\\ \bottomrule\n\end{tabular}\n\end{table}\n\n\begin{enumerate}\n\item\nWhat is the probability of seeing the following spam email: ``we need you on our project friend''?\\\n\ifsol\n\textcolor{blue}{\[\np(email|spam) = 0.015*0.01*0.015*0.02*0.02*0.003*0.04 = 1.08 *10^{-13}\n\]}\n\else\n    \vspace{1cm}\n\fi\n\n\item\nWhat is the probability that the following text is spam: ``start CS182 project''?\\\n\ifsol\n\textcolor{blue}{\begin{align*}\np(email|ham) &= 0.01 * 0.05 * 0.007 = 0.0000035 \\\np(email|spam) &= 0.03 * 0.01 * 0.003 = 0.0000009\\\np(spam|email) &\propto 0.5 * 0.0000009\\\np(ham|email) &\propto 0.5 * 0.0000035%\frac{0.5 * 0.0000009}{0.0000009+0.0000035} = 0.102\n\end{align*}\nWe can normalize this and get a probability for $p(spam|email) = 0.205$.\n}\n\else\n    \vspace{1cm}\n\fi\n\n\n\item We now consider a fresh inbox for which we have no previous data. We want to build a new classifier from scratch. The inbox right now contains three emails:\n\begin{enumerate}\n\item [spam:] ``I am only prince''\n\item [spam:] ``I will send money''\n\item [ham:] ``I will send you the review for it''\n\end{enumerate}\nDetermine the prior over Y (spam/ham), fill in the probability table below, and calculate the probabilities that the email ``I will send'' is spam/ham.\\\n\ifsol\n\begin{table}[h!]\n\centering\n\begin{tabular}{@{}lllll@{}}\n\toprule\n       & freq ham & $p(w|ham)$ & freq spam & $p(w|spam)$ \\ \midrule\nam     & 0        & 0          & 1         & 1/8         \\\nfor    & 1        & 1/8        & 0         & 0           \\\nI      & 1        & 1/8        & 2         & 2/8         \\\nit     & 1        & 1/8        & 0         & 0           \\\nmoney  & 0        & 0          & 1         & 1/8         \\\nonly   & 0        & 0          & 1         & 1/8         \\\nprince & 0        & 0          & 1         & 1/8         \\\nreview & 1        & 1/8        & 0         & 0           \\\nsend   & 1        & 1/8        & 1         & 1/8         \\\nthe    & 1        & 1/8        & 0         & 0           \\\nyou    & 1        & 1/8        & 0         & 0           \\\nwill   & 1        & 1/8        & 1         & 1/8         \\ \bottomrule\n\end{tabular}\n\end{table}\n\else\n\begin{table}[h!]\n\centering\n\begin{tabular}{@{}lllll@{}}\n\toprule\n       & freq ham & $p(w|ham)$ & freq spam & $p(w|spam)$ \\ \midrule\nam     &         &           &          &          \\\nfor    &         &         &          &            \\\nI      &         &         &          &          \\\nit     &         &         &          &            \\\nmoney  &         &           &          &          \\\nonly   &         &           &          &          \\\nprince &         &           &          &          \\\nreview &         &         &          &            \\\nsend   &         &         &          &          \\\nthe    &         &         &          &            \\\nyou    &         &         &          &            \\\nwill   &         &         &          &          \\\n\bottomrule\n\end{tabular}\n\end{table}\n\fi\n\n\ifsol\n\textcolor{blue}{We can determine the prior by counting the number of emails from each category:  $p(spam) = 2/3$ and $p(ham) = 1/3$. To construct the table, we count the word frequencies and normalize by the number of words. 
Using this table, we can take the same approach as before:}\n\begin{align*}\np(email|ham) &= \left(1/8\right)^3 = 0.00195\\\np(email|spam) &= 2/8 * 1/8 * 1/8 = 0.0039\\\np(ham|email) &\propto \frac{1}{3} * 0.00195 = 0.00065\\%\frac{0.00195}{0.00195+0.0039} = 0.111\\\np(spam|email) &\propto \frac{2}{3} * 0.0039 = 0.0026\\%\frac{0.0039}{0.00195+0.0039} = 0.444\n\end{align*}\n\n\textcolor{blue}{Normalizing this yields 0.2 and 0.8 probabilities for the two classes respectively.}\n\else\n    \vspace{1cm}\n    \newpage\n\fi\n\n\item For long texts, you will need to multiply many numbers smaller than 1. This can cause implementation problems, where all of the predicted probabilities are so small that they are numerically rounded to zero. What can you do to address this issue? Why does it work?\\\n\ifsol\n\textcolor{blue}{In order to avoid those issues, you can add up the log-probabilities. This works because $$P(Y) \prod_{i=1}^n P(w_i|Y) = e^{\log (P(Y) \prod_{i=1}^n P(w_i|Y))} = e^{\log P(Y) + \sum_{i=1}^n \log P(w_i|Y)}$$}\n\else\n    \vspace{1cm}\n\fi\n\n\item Use your classifier to predict the emails ``Will you send the review'' and ``Please send the review''.\\\n\ifsol\n\textcolor{blue}{Those two are special cases. Since $p(review|spam)=0$, we have a 0 probability for the whole term. The word ``please'' does not occur in the table and with the current approach, we can't compute the probabilities.} \newpage\n\else\n    \vspace{1cm}\n\fi\n\n\n\item Repeat task (c) with Laplace smoothing and $k=1$.\\\n\n\ifsol\n\textcolor{blue}{\n\begin{table}[h!]\n\centering\n\begin{tabular}{@{}lllll@{}}\n\toprule\n       & freq ham & $p(w|ham)$ & freq spam & $p(w|spam)$ \\ \midrule\nam     & 1        & 1/20       & 2         & 2/20         \\\nfor    & 2        & 2/20       & 1         & 1/20           \\\nI      & 2        & 2/20       & 3         & 3/20         \\\nit     & 2        & 2/20       & 1         & 1/20           \\\nmoney  & 1        & 1/20       & 2         & 2/20         \\\nonly   & 1        & 1/20       & 2         & 2/20         \\\nprince & 1        & 1/20       & 2         & 2/20         \\\nreview & 2        & 2/20       & 1         & 1/20           \\\nsend   & 2        & 2/20       & 2         & 2/20         \\\nthe    & 2        & 2/20       & 1         & 1/20           \\\nyou    & 2        & 2/20       & 1         & 1/20           \\\nwill   & 2        & 2/20       & 2         & 2/20         \\\n\bottomrule\n\end{tabular}\n\end{table}\n\begin{align*}\np(email|ham) &= \left(1/10\right)^3 = 0.001\\\np(email|spam) &= 3/20 * 2/20 * 2/20 = 0.0015\\\np(ham|email) &\propto \frac{1}{3} * 0.001 = 0.000333...\\%\frac{0.00195}{0.00195+0.0039} = 0.111\\\np(spam|email) &\propto \frac{2}{3} * 0.0015 = 0.001\\%\frac{0.0039}{0.00195+0.0039} = 0.444\n\end{align*}\nNormalizing this yields 0.25 and 0.75 probabilities for the two classes respectively.\n}\n\else\n\begin{table}[h!]\n\centering\n\begin{tabular}{@{}lllll@{}}\n\toprule\n       & freq ham & $p(w|ham)$ & freq spam & $p(w|spam)$ \\ \midrule\nam     &         &           &          &          \\\nfor    &         &         &          &            \\\nI      &         &         &          &          \\\nit     &         &         &          &            \\\nmoney  &         &           &          &          \\\nonly   &         &           &          &          \\\nprince &         &           &          &          
\\\\\nreview &         &         &          &            \\\\\nsend   &         &         &          &          \\\\\nthe    &         &         &          &            \\\\\nyou    &         &         &          &            \\\\\nwill   &         &         &          &          \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\\fi\n\n\\item What are other possible features you can consider?\\\\\n\\ifsol\n\\textcolor{blue}{We can incorporate any features that are representative of spam emails in addition to word frequencies. Examples are\n\\begin{itemize}\n\\item ``Online Pharmacy''\n\\item Mentions large quantities of money\n\\item Sent by an unknown address with many numbers\n\\item Subject is all capitals\n\\item Email body has low ratio of text to images\t\n\\item ``One hundred percent guaranteed''\t\n\\item ``Prestigious Non-Accredited Universities''\n\\end{itemize}\nAnother approach is to also count occurrences of longer phrases (called n-grams).}\n\\else\n\\fi\n\n\\end{enumerate}\n\n\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "3f63416c3c4f3a7cd1543422be8ce7da86148f65", "size": 22422, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Section_11/tex/main.tex", "max_stars_repo_name": "Harvard-CS182-F18/courseware", "max_stars_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Section_11/tex/main.tex", "max_issues_repo_name": "Harvard-CS182-F18/courseware", "max_issues_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Section_11/tex/main.tex", "max_forks_repo_name": "Harvard-CS182-F18/courseware", "max_forks_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.7829099307, "max_line_length": 791, "alphanum_fraction": 0.6266167157, "num_tokens": 6584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721305, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.591381836937604}}
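As a companion to the log-probability discussion in item (d), the following minimal sketch (in Python) scores the email ``I will send'' in log space, using the priors and the Laplace-smoothed word probabilities ($k=1$) from item (f); the smoothing is what guarantees that every probability is nonzero, so all logarithms are well defined.\n\n\\begin{lstlisting}[language=Python]\nimport math\n\n# Priors and Laplace-smoothed likelihoods (k = 1) from item (f).\npriors = {'ham': 1/3, 'spam': 2/3}\np_word = {\n    'ham':  {'I': 2/20, 'will': 2/20, 'send': 2/20},\n    'spam': {'I': 3/20, 'will': 2/20, 'send': 2/20},\n}\n\ndef log_score(label, words):\n    # Sum of log-probabilities instead of a product: no underflow.\n    return math.log(priors[label]) + sum(\n        math.log(p_word[label][w]) for w in words)\n\nwords = ['I', 'will', 'send']\nscores = {y: log_score(y, words) for y in priors}\n\n# Exponentiate and normalize to recover the posteriors (0.25 / 0.75).\nz = {y: math.exp(s) for y, s in scores.items()}\nposteriors = {y: v / sum(z.values()) for y, v in z.items()}\n\\end{lstlisting}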
{"text": "\\section{Aim and scope of the thesis}\n\nThe aim of this thesis is to develop a library for fast,\napproximate \\textit{maximum inner product search} (MIPS).\n\nTo this end we provide a set of tools that would help machine\nlearning developers and practitioners create faster, more scalable machine learning models.\nProblems requiring MIPS naturally arise in many machine learning models, for example, in\nclassification problems with large output spaces, when linear models or deep neural networks are used.\n\nGenerally speaking, MIPS is the task involving searching through a massive collection of items for\nan item most similar to a particular one, called query. The items may have many properties (features)\naccording to which items can be more related or differ from each other.\nEvery item is mathematically represented by a vector, with all vector's components being item's features.\n\nUnfortunately, when one has to compare the query with hundreds of thousands of vectors,\na linear search time can become prohibitively expensive.\nIn other words comparing query and every database vector may not be feasible in practical applications.\nTo counter that, proposals were made in several articles to build an \\textit{index} for speeding\nup the searching time.\nThis index is essentially a specific data structure built by preprocessing database vectors (before\nany of the queries are answered --- this is a one-time cost).\nThe most important feature of the index is short prediction (searching) time; training (index preparation)\ncan last much longer if needed.\nIt is not always necessary for the exact result to be returned --- it can often be approximate.\nIn this thesis we focus on such algorithms --- they trade some prediction precision for speed.\n\nThe library implemented in this thesis delivers three different MIPS algorithms,\nrecently introduced in the literature.\nTwo of these use $K$-means clusterization algorithm as part of their implementation: hierarchical $K$-means algorithm\nselects candidate set by querying tree of $K$-means clusters, and quantization-based approach uses $K$-means as well, \nbut only after subdividing data vectors into smaller-dimensionality subvectors. The third algorithm we consider\nis ALSH, or Asymmetric Locality Sensitive Hashing, which relies on specific hash functions preserving\nvector locality.\n\nOur library not only offers high performance, but is also easy-to-use.\nIt provides a set of wrappers around a range of machine learning libraries, that allow users to\nspeed up their models without ever touching any of the implemented MIPS algorithms. Consequently,\nthese wrappers can be used as drop-in replacements for a number of models in Python machine learning ecosystem.\nWe show some examples of how this library can be used in practice,\nas well as detailed experiments measuring the speed and accuracy of our algorithms.\n\nIn particular we provide plugins for PyTorch (a deep learning library for Python), and scikit-learn\n(a general-purpose machine learning library, also for Python), as well as a modified version of fastText\n(a C++ library for efficient text classification).\n\nAll experimental results presented in this thesis were obtained using Amazon Elastic Compute Cloud (EC2).\nEC2 is a part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS). 
\nTo counter this, several articles have proposed building an \\textit{index} to speed\nup the search.\nThis index is essentially a specific data structure built by preprocessing the database vectors (before\nany of the queries are answered --- a one-time cost).\nThe most important feature of the index is a short prediction (search) time; training (index preparation)\ncan last much longer if needed.\nIt is not always necessary for the exact result to be returned --- it can often be approximate.\nIn this thesis we focus on such algorithms --- they trade some prediction precision for speed.\n\nThe library implemented in this thesis delivers three different MIPS algorithms,\nrecently introduced in the literature.\nTwo of these use the $K$-means clustering algorithm as part of their implementation: the hierarchical $K$-means algorithm\nselects a candidate set by querying a tree of $K$-means clusters, and the quantization-based approach also uses $K$-means, \nbut only after subdividing the data vectors into lower-dimensional subvectors. The third algorithm we consider\nis ALSH, or Asymmetric Locality Sensitive Hashing, which relies on specific hash functions preserving\nvector locality.\n\nOur library not only offers high performance, but is also easy to use.\nIt provides a set of wrappers around a range of machine learning libraries that allow users to\nspeed up their models without ever touching any of the implemented MIPS algorithms. Consequently,\nthese wrappers can be used as drop-in replacements for a number of models in the Python machine learning ecosystem.\nWe show some examples of how this library can be used in practice,\nas well as detailed experiments measuring the speed and accuracy of our algorithms.\n\nIn particular, we provide plugins for PyTorch (a deep learning library for Python) and scikit-learn\n(a general-purpose machine learning library, also for Python), as well as a modified version of fastText\n(a C++ library for efficient text classification).\n\nAll experimental results presented in this thesis were obtained using Amazon Elastic Compute Cloud (EC2).\nEC2 is a part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS). It allows users to rent virtual\ncomputers on which they can run their own applications.\n\nWe have compared the implemented algorithms against Faiss \\cite{faiss}, a library for efficient\nsimilarity search and clustering of dense vectors developed by Facebook AI Research. We refer the reader to\n\\cite{JDH17} for further details regarding this software.\n\nThe rest of this thesis is structured as follows.\nThe next chapter is dedicated to the theoretical description of the MIPS algorithms.\nThe third chapter describes implementation details of the library.\nChapter 4 presents experimental results.\nThe last chapter concludes the thesis.\nIn the appendix we describe how to run the library.\n\nThe list below shows in detail the contribution of each author of this thesis:\n\\begin{itemize}\n    \\item \\textbf{Marcin Elantkowski} wrote wrappers exposing C++ indexes to Python, prepared datasets used in the\n        experiments, adapted the code so it could be used from PyTorch and fastText,\n            and ran final tests on Amazon Web Services.\n    \\item \\textbf{Adam Krasuski} refactored and optimized the C++ code of the core algorithms,\n        implemented an interface to the Faiss library, and analyzed and described the empirical results.\n    \\item \\textbf{Agnieszka Lipska} implemented the first version of the quantization algorithm,\n        implemented the Faiss interface, and adapted the code to use it from scikit-learn.\n    \\item \\textbf{Franciszek Walkowiak} implemented the first versions of ALSH and\n        hierarchical \\mbox{$K$-means} and conducted preliminary tests on a personal computer.\n\\end{itemize}\n\n", "meta": {"hexsha": "41aa67336576feea9208e426343b1ccd39c92edd", "size": 4952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/tex/aim.tex", "max_stars_repo_name": "akrasuski1/mips", "max_stars_repo_head_hexsha": "073eff28065fd8c214ac5feb4e530d4482ad3d79", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/tex/aim.tex", "max_issues_repo_name": "akrasuski1/mips", "max_issues_repo_head_hexsha": "073eff28065fd8c214ac5feb4e530d4482ad3d79", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/tex/aim.tex", "max_forks_repo_name": "akrasuski1/mips", "max_forks_repo_head_hexsha": "073eff28065fd8c214ac5feb4e530d4482ad3d79", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.0266666667, "max_line_length": 118, "alphanum_fraction": 0.8018982229, "num_tokens": 1001, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5913818335516147}}
{"text": "\\section{Full R code}\n\n\tThis code will run all the indicated analysis and produce all plots.\n\n\t\\lstinputlisting[style=Rsty]{../../code/hmc/d_sir_stan.r}\n\n\\section{Full Stan code}\n\n\tStan model code to be used with the preceding R code.\n\n\t\\lstinputlisting[style=Stansty]{../../code/hmc/d_sirode_euler.stan}", "meta": {"hexsha": "866bbfaa70067b48088ff5ab2f918ccb0f64a586", "size": 304, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writing/MCMC-HMCMC/mcmc-appendix.tex", "max_stars_repo_name": "dbarrows/epidemic-forecasting", "max_stars_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writing/MCMC-HMCMC/mcmc-appendix.tex", "max_issues_repo_name": "dbarrows/epidemic-forecasting", "max_issues_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writing/MCMC-HMCMC/mcmc-appendix.tex", "max_forks_repo_name": "dbarrows/epidemic-forecasting", "max_forks_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6363636364, "max_line_length": 69, "alphanum_fraction": 0.7467105263, "num_tokens": 85, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8840392725805822, "lm_q2_score": 0.6688802537704064, "lm_q1q2_score": 0.5913164129867053}}
{"text": "\\documentclass{article}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{fancyhdr, hyperref, amsmath, amsfonts, parskip, multicol, listings, xcolor}\n\\fancyhead[L]{\\textbf{CMPUT 275 -- Tangible Computing} \\hfill \\textbf{Winter 2020}}\n\n\\hypersetup{\n\tcolorlinks,\n\tcitecolor=black,\n\tfilecolor=black,\n\tlinkcolor=blue,\n\turlcolor=blue\n}\n\n% settings up listings environment\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.5,0.5,0.5}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.95,0.95,0.92}\n\n\\lstdefinestyle{mystyle}{\n    backgroundcolor=\\color{backcolour},\n    commentstyle=\\color{codegreen},\n    keywordstyle=\\color{magenta},\n    numberstyle=\\tiny\\color{codegray},\n    stringstyle=\\color{codepurple},\n    basicstyle=\\ttfamily\\footnotesize,\n    breakatwhitespace=false,\n    breaklines=true,\n    captionpos=b,\n    keepspaces=true,\n    numbers=left,\n    numbersep=5pt,\n    showspaces=false,\n    showstringspaces=false,\n    showtabs=false, \n    tabsize=2\n}\n\n\\lstset{style=mystyle}\n\n\\title{\\Large \\textbf{Final Project: EEG Visualizer -- FFT Algorithms}}\n\\author{Eddie Guo}\n\\date{\\today}\n\n\n\\begin{document}\n\n\\maketitle\n\\thispagestyle{fancy}\n\n\\section*{Discrete Fourier Transform}\nThe Fourier transform can be written in a few forms with forward and inverse transforms:\n\n\\begin{multicols}{2}\n    \\textbf{Hertz Frequency}\n    \\begin{align*}\n        X(f) &= \\int\\limits_{-\\infty}^{\\infty} x(t) e^{-i 2\\pi ft} dt\\\\\n        x(t) &= \\int\\limits_{-\\infty}^{\\infty} X(f) e^{i 2\\pi ft} df\n    \\end{align*}\n\n    \\textbf{Radian Frequency}\n    \\begin{align*}\n        X(\\omega) &= \\frac{1}{\\sqrt{2\\pi}} \\int\\limits_{-\\infty}^{\\infty} x(t) e^{-i \\omega t} dt\\\\\n        x(t) &= \\frac{1}{\\sqrt{2\\pi}} \\int\\limits_{-\\infty}^{\\infty} X(\\omega) e^{i \\omega t} d\\omega\n    \\end{align*}\n\\end{multicols}\n\nThe discrete Fourier transform (DFT) is more useful to us as programmers. The na\\\"{i}ve method is an $O(n^2)$ computation which also has a forward and inverse form. 
A few notes on notation:\n\n\\begin{multicols}{2}\n    \\begin{itemize}\n        \\item $N$ -- the number of time samples.\n        \\item $n$ -- the current sample considered (0, 1, ..., $N$-1).\n        \\item $x_n$ -- the value of the signal at time $n$.\n        \\item $k$ -- the current frequency bin (0, 1, ..., $N$-1).\n        \\item $X_k$ -- the complex number representing amplitude and phase.\n    \\end{itemize}\n\\end{multicols}\n\n\\textbf{Inverse DFT} $$ x_n = \\frac{1}{N} \\sum_{k=0}^{N-1} X_k \\cdot e^{i 2 \\pi k n / N} $$\n\\textbf{Forward DFT} \\vspace{-1em}\n\n\\begin{align*}\n    X_k &= \\sum_{n=0}^{N-1} x_n \\cdot e^{-i 2 \\pi k n / N}\\\\\n    \\textbf{X} &= \\textbf{x} \\cdot M\\\\\n    \\intertext{Where the matrix $M$ is given by}\n    M_{kn} &= \\left[ e^{-i 2 \\pi k n / N} \\right]\n\\end{align*}\n\n\\newpage\n\\thispagestyle{fancy}\nWe can compute the DFT using matrix multiplication as shown below.\n\\begin{lstlisting}[language=Python, caption=Na\\\"{i}ve implementation of the DFT]\nimport numpy as np\n\ndef dft(x):\n    \"\"\"Computes the discrete Fourier transform of the 1-D array x\"\"\"\n    x = np.asarray(x, dtype=float)\n    N = x.shape[0]  # N must be set before n and k are built from it\n    n = np.arange(N)\n    k = n.reshape((N, 1))\n    M = np.exp(-2j * np.pi * k * n / N)\n\n    return np.dot(x, M)\n\\end{lstlisting}\n\n\n\\section*{Cooley-Tukey FFT Algorithm}\nThis algorithm exploits the symmetry in the DFT. Let us start by re-expressing the DFT in terms of $X_{N+k}$.\\vspace{-1em}\n\n\\begin{align*}\n    X_{N+k} &= \\sum_{n=0}^{N-1} x_n \\cdot e^{-i 2 \\pi (N+k) n / N}\\\\\n    &= \\sum_{n=0}^{N-1} x_n \\cdot e^{-i 2\\pi n} \\cdot e^{-i 2\\pi k n / N}\\\\\n    \\intertext{We can now use the identity $e^{ 2\\pi i} = 1 \\Rightarrow e^{2\\pi i n} = (e^{2\\pi i})^n = 1^n = 1$.}\n    &= \\sum_{n=0}^{N-1} x_n \\cdot e^{-i 2\\pi k n / N}\\\\\n    \\intertext{However, this is just the original DFT. Thus, symmetry exists such that $X_{N+k} = X_k$. We can take this one step further: $X_{k + iN} = X_k$ for any integer $i \\in \\mathbb{Z}$.}\n    X_k &= \\sum_{n=0}^{N-1} x_n \\cdot e^{-i 2\\pi k n/N}\\\\\n    &= \\sum_{m=0}^{N/2 - 1} x_{2m} \\cdot e^{-i2\\pi k(2m)/N} + \\sum_{m=0}^{N/2 - 1} x_{2m + 1} \\cdot e^{-i2\\pi k(2m + 1)/N}\\\\\n    &= \\sum_{m=0}^{N/2 - 1} x_{2m} \\cdot e^{-i2\\pi k m/(N/2)} + e^{-i 2\\pi k/N} \\sum_{m=0}^{N/2 - 1} x_{2m + 1} \\cdot e^{-i2\\pi k  m/ (N/2)}\\\\\n\\end{align*}\n\nHere, the DFT is split into two terms: one for even-numbered values and one for odd-numbered values. This still gives $(N/2) * N$ computations per term, giving the same time complexity of $O(N^2)$. However, we notice that $0 \\leq k < N$ and $0 \\leq m < M \\equiv N/2$. From the symmetric properties shown, we only need to perform half the computations for each partition of the original $N$-dimensional vector. Thus, each half-size subproblem costs $O(M^2)$ where $M = N/2$.\n\nIt is clear here that we are cooking up a divide-and-conquer approach: partition the vector until the partitioning no longer provides any reasonable time benefits, say N $\\leq 32$. We can recursively apply all computations such that $O(N^2)$ becomes $O(\\frac{N}{2} \\log_2 N) = O(N \\log N)$.\\footnote{Note: this radix-2 algorithm requires the input vector's size to be a power of 2.}
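\n\nThe recursion can be written down directly. The following is a minimal sketch that mirrors the derivation above: it splits the input into even- and odd-indexed samples, assumes the input length is a power of 2, and falls back to the na\\\"{i}ve DFT once $N \\leq 32$.\n\n\\begin{lstlisting}[language=Python, caption=Sketch of a recursive Cooley-Tukey FFT]\nimport numpy as np\n\ndef fft(x):\n    # Recursive Cooley-Tukey FFT; len(x) must be a power of 2.\n    x = np.asarray(x, dtype=complex)\n    N = x.shape[0]\n    if N <= 32:\n        # Base case: the O(N^2) DFT, as in the previous listing.\n        n = np.arange(N)\n        k = n.reshape((N, 1))\n        M = np.exp(-2j * np.pi * k * n / N)\n        return np.dot(M, x)\n    even = fft(x[0::2])  # DFT of even-indexed samples\n    odd = fft(x[1::2])   # DFT of odd-indexed samples\n    twiddle = np.exp(-2j * np.pi * np.arange(N) / N)\n    # X_k and X_{k+N/2} reuse the same two half-size transforms.\n    return np.concatenate([even + twiddle[:N // 2] * odd,\n                           even + twiddle[N // 2:] * odd])\n\\end{lstlisting}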
\n\n\n\\end{document}\n", "meta": {"hexsha": "6b405033959eae23955a2c8e01372f54c7e56c26", "size": 5195, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Final Project/documentation/fft_math/cooley_tukey.tex", "max_stars_repo_name": "tig3r66/CMPUT275", "max_stars_repo_head_hexsha": "dd5b94dcf0436e281f4696959db07b56f5c0b9d8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-25T05:19:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T05:19:15.000Z", "max_issues_repo_path": "Final Project/documentation/fft_math/cooley_tukey.tex", "max_issues_repo_name": "tig3r66/CMPUT275", "max_issues_repo_head_hexsha": "dd5b94dcf0436e281f4696959db07b56f5c0b9d8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Final Project/documentation/fft_math/cooley_tukey.tex", "max_forks_repo_name": "tig3r66/CMPUT275", "max_forks_repo_head_hexsha": "dd5b94dcf0436e281f4696959db07b56f5c0b9d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.2713178295, "max_line_length": 481, "alphanum_fraction": 0.6411934552, "num_tokens": 1870, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.591307642159342}}
{"text": "\\section{The \\popheap algorithm}\n\\Label{sec:popheap}\n\nThe algorithm \\popheap moves the first element of the heap, which holds\nthe heap's largest value, and places it at the the end of the underlying sequence.\nWhereas in the \\cxx Standard Library \\cite[\\S 28.7.7.2]{cxx-17-draft}\n\\popheap works on a range of random access iterators,\nour version operates on an array of \\valuetype.\nWe therefore use the following signature for \\popheap\n\n\\begin{lstlisting}[style = acsl-block]\n\n    void pop_heap(value_type* a, size_type n);\n\\end{lstlisting}\n\nThe \\popheap algorithm expects that \\inl{n} is greater or equal than~1\nand that the array \\inl{a[0..n-1]} forms a heap.\nThe algorithms then \\emph{rearranges} the array \\inl{a[0..n-1]} such that the\nresulting array satisfies the following properties.\n\n\\begin{itemize}\n\\item \\inl{a[n-1] = \\\\old(a[0])}, that is, the largest element\nof the original heap is transferred to the end of the array.\n\n\\item the subarray \\inl{a[0..n-2]} is a heap\n\\end{itemize}\n\nIn this sense the algorithm \\emph{pops} the largest element from a heap.\n\n\\subsection{Formal specification of \\popheap}\n\nBased on the above semi-formal description we propose the\nfollowing function contract for \\specref{popheap}.\n\n\\input{Listings/pop_heap.h.tex}\n\n\\subsection{Implementation of \\popheap}\n\\Label{sec:pop-heap:impl}\n\nIn an abstract sense \\popheap is quite similar to \\pushheap.\nIn  \\pushheap we started at the last array element and \nclimbed from there up the tree until we would find a node where to\ninsert the new value into the heap.\nEvery time we had reached the next parent node we\nmoved its value down to where we had just come from.\n\nWith \\popheap its the other way round.\nWe start at the root of the tree and descend from there\nby selecting an appropriate child.\nEvery time we lift the value of the selected child to the node where\njust are.\nWe repeat this process until we find a node where we can insert\nthe last array element into the heap.\nOnce this is done, we can safely place the maximum element (that is the\nthe original root node) at the last element of the array.\n\n\\clearpage\n\nThe following two figures illustrate how \\popheap affects an array,\nwhich is shown again as a tree with blue and grey nodes, representing\nheap and non-heap nodes, respectively.\nFigure~\\ref{fig:popheap-pre} is in fact the same figure as\nFigure~\\ref{fig:heap-tree}.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.70\\linewidth]{Figures/pop_heap_pre.pdf}\n\\caption{\\Label{fig:popheap-pre}Heap before the call of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\nFigure~\\ref{fig:popheap-post}, on the other hand, shows the heap after the call of \\popheap\ntogether with arrows that indicate how our implementation moves around elements\nin the underlying array.\nWe can see that the first element of the original array,\nwhere the maximum of the heap resides, is now the last element of the array.\nFurthermore, the last array element is not part of the heap anymore.\nThe dashed nodes highlight which heap nodes have changed during the call to \\popheap.\nThe arrows indicate the \\emph{cyclic reordering} of array elements to achieve the\ndesired result.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.70\\linewidth]{Figures/pop_heap_post.pdf}\n\\caption{\\Label{fig:popheap-post}Heap after the call of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\n\nAs in the case of \\implref{pushheap} we will 
subdivide the discussion of the \nimplementation of \\implref{popheap} into a prologue, main act, and epilogue.\n\n\\input{Listings/pop_heap.c.tex}\n\n\n\\subsection{Prologue}\n\nIn the prologue we check whether the initial heap contains at least two elements,\ninitialize some variables, and also check whether\nthe last array element is by chance equal to the maximum element of the heap,\nwhich resides at the index \\inl{p == 0} of the array.\nIf this is not the case, then we set aside for future\nreference the last array element in the variable \\inl{v}.\nFinally, we copy the value \\inl{a[p]} to its final destination at the end\nof the array.\nNote that this assignment only occurs if the respective values differ.\nThis allows us, as in the case of \\implref{pushheap}, to formally describe the\neffect of the assignment using the predicate \\logicref{ArrayUpdate}.\n\nFigure~\\ref{fig:popheap-prologue} highlights the main effects of the prologue\nfor our exemplary heap.\nNote that we have highlighted the root of the heap as the currently active node.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{Figures/pop_heap_prologue.pdf}\n\\caption{\\Label{fig:popheap-prologue}Heap after the prologue of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\n\n\\subsection{Main act}\n\nIn the main act, we start at a child node \\inl{c} of the prologue's index \\inl{p}.\nThis means that compared to the pre-state of \\popheap,\nat the beginning of the main act the array \\inl{a[0..n-1]}\n\\begin{itemize}\n\\item contains the value \\inl{v} one time less,\n\\item contains the value \\inl{a[p]} one time more,\n\\item whereas all other values have not changed their number of occurrences.\n\\end{itemize}\n\nMoreover, the maximum element of the original heap is now at the end of the array\nand we can only guarantee that the first $n-1$ array elements form a heap.\nThese observations are the basis for our loop invariants.\n\nTo be more precise, when we talk in the context of \\popheap\nof a \\emph{child node} we usually mean the one of the (up to two) children\nthat holds the larger value.\nWe do this because copying that larger value to its parent node guarantees\nthat the resulting tree is still a heap.\nWe compute the maximum child of a node using the function \\specref{heapchild}.\n\nNow, as long as the index~\\inl{c} is not yet the index of the last array element\nof the heap and the value \\inl{v} is less than \\inl{a[c]},\nwe have not yet found an index where we could insert \\inl{v} without\nviolating the heap property.\n\n\\clearpage\n\nIn the loop body we proceed as follows.\n\n\\begin{itemize}\n\\item\nIf \\inl{a[c]} is less than \\inl{a[p]} we copy the former value onto \nthe latter.\nAs mentioned above, using the index~\\inl{c} of the maximum child\nmaintains the heap property of the array.\nWe use here the predicate \\logicref{HeapCompatible} to express\nthat the insertion of the new value \\inl{a[p]} maintains the heap\nproperty of the array.\n\nThe value \\inl{a[c]} now occurs one time more than in the pre-state whereas\nthe now overwritten value \\inl{a[p]} occurs as often as in the pre-state of \\popheap.\nThe value \\inl{v} continues to occur one time less than in the pre-state.\n%\nWe then proceed to the next iteration by setting \\inl{p} to \\inl{c}\nand computing the next maximum child node.\n\nAs in the case of \\implref{pushheap} the verification of \nthe correct number of occurrences of the involved values\nrelies on the predicates 
\\logicref{ArrayUpdate} and \\logicref{MultisetUpdate}\nand on the lemma \\logicref{MultisetParityCombined}.\n\n\\item\nOtherwise, the array being a heap, we can conclude that\n\\inl{a[c]} equals \\inl{a[p]} and we continue with the next iteration\nafter setting \\inl{p} to \\inl{c} and computing the corresponding\nnew maximum child node.\n\\end{itemize}\n\nThe following three figures depict how the main act of \\popheap \nmodifies our example heap step by step.\nIn each step we highlight the currently active node \\inl{c}.\n\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{Figures/pop_heap_main_act1.pdf}\n\\caption{\\Label{fig:popheap-main_act1}Heap after the first iteration of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{Figures/pop_heap_main_act2.pdf}\n\\caption{\\Label{fig:popheap-main_act2}Heap after the second iteration of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\\clearpage\n\nNote that in the final step no value is actually copied as the involved nodes\nhold the same value.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{Figures/pop_heap_main_act3.pdf}\n\\caption{\\Label{fig:popheap-main_act3}Heap after the third iteration of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\nWe finally remark that in the main act the last array element is never modified.\nThus, the maximum of the original heap is still safely stored there.\n\n\\subsection{Epilogue}\n\nAfter leaving the loop, we know that the value \\inl{v} can be inserted\ninto the array at the index \\inl{p} without violating the heap property of\nthe first $n-1$ elements.\n%\nMoreover, compared to the pre-state of \\popheap the array \\inl{a[0..n-1]} still\n\\begin{itemize}\n\\item contains the value \\inl{v} one time less,\n\\item contains the value \\inl{a[p]} one time more,\n\\item whereas all other values have not changed their number of occurrences.\n\\end{itemize}\n\nIn other words, assigning the value \\inl{v} to \\inl{a[p]}\ncancels this imbalance and establishes that \\popheap\nonly reorders the array elements.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.65\\linewidth]{Figures/pop_heap_epilogue.pdf}\n\\caption{\\Label{fig:popheap-epilogue}Heap after the epilogue of \\popheap}\n\\end{figure}\n\n\\FloatBarrier\n\nIn Figure~\\ref{fig:popheap-epilogue} we have marked the value \\inl{v}\nas the currently active node despite it not being an array element.\n\n
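To summarize the interplay of prologue, main act, and epilogue, the following behavioural sketch (written in Python for brevity; it mirrors the cyclic reordering described above, but carries none of the ACSL annotations or the exact control flow of the verified C code) may be helpful.\n\n\\begin{lstlisting}[language=Python]\ndef max_child(a, n, p):\n    # Index of the larger child of node p within a[0..n-1], or None.\n    l, r = 2 * p + 1, 2 * p + 2\n    if l >= n:\n        return None\n    return r if r < n and a[r] > a[l] else l\n\ndef pop_heap(a, n):\n    # Move the maximum a[0] to a[n-1]; re-establish the heap a[0..n-2].\n    if n < 2 or a[n - 1] == a[0]:\n        return\n    v = a[n - 1]                   # prologue: set the last element aside ...\n    a[n - 1] = a[0]                # ... and store the maximum at the end\n    p = 0\n    c = max_child(a, n - 1, p)     # the heap now has n-1 elements\n    while c is not None and v < a[c]:\n        a[p] = a[c]                # main act: lift the larger child up\n        p = c\n        c = max_child(a, n - 1, p)\n    a[p] = v                       # epilogue: insert v, closing the cycle\n\\end{lstlisting}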
"max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 36.577689243, "max_line_length": 91, "alphanum_fraction": 0.7754057292, "num_tokens": 2454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7956580927949807, "lm_q1q2_score": 0.5913076403588505}}
{"text": "\\chapter{Statistical method}\n\\label{ch:methods}\n%%\nIn this chapter we will discuss the statistical method that we developed forbinary Population synthesis. Focusing on the difference between our method and traditional sampling methods in BS. \n\n\n\\section{Simulating populations of binaries}\nIn binary population synthesis, one can simulate the evolution of a synthetic population of binary systems. \nFor such simulations, bianries are randomly drawn from the distributions of initial binary parameters and evaluating it through binary evolution prescriptions. By doing so, they present a rapid\ncode that can compute the evolution of many stars within\na simulation.\nSince it is now known a priori which initial conditions will produce an event of interest, the full initial parameter space needs to be explored. Traditional methods in BPS tackle this problem by randomly drawing binaries $\\mathbf{x_i} \\sim P(\\mathbf{x_i})$ from their prior distributions which are based on observations, and evaluating them with the BPS model, $u$,  into their final state $\\mathbf{x_f}$, \n\n\\begin{equation}\n\t\\mathbf{x_f} = u(\\mathbf{x_i}). \n\\end{equation}  \n\nMany parameters are used in BPS but most binaries can be described uniquely by a few important initial vari-\nables: the initial mass of the primary star (i.e., the most massive star) $M_{1,i}$, the mass ratio, $q_i = M_{1,i} / M_{2,i}$, between\nthe two stars and the initial separation, $a_i$, eccentricity $e_i$. Depending on the binary evolution we can also add the kick velocity received when the primary or secondary collapses to a compact object $\\mathbf{v}_{\\text{kick, SN}}$. Each initial binary sample can thus be represented by \n\n\\begin{equation}\n\t\\mathbf{x_i} = (M_{1,i}, a_i, q_i, e_i, \\mathbf{v}_{\\text{kick, SN1}}, \\mathbf{v}_{\\text{kick, SN2}} )\n\\end{equation}\n\nOften, BPS is used to study binaries that evolve to a certain subtype $\\mathbf{X_t}$,  \n\nSo in such a study the goal is to model the distribution:\n\n\\begin{align}\n\t\\psi(\\mathbf{x_f}  ) = \\begin{cases} 1 \\hspace{0.1cm} &\\text{if } \\mathbf{x_f} \\in \\mathbf{X_t} \\\\\n\t0  & \\text{else}\n\t\\end{cases}\n\\end{align}\nthat equals unity if $\\mathbf{x_i}$ evaluated to the target binary system\n$\\mathbf{X_t}$ and zero if not. We will use this function throughout the\npaper as it is the main objective to perform the inference\non.\nSay something about this function.\n\nHowever, in cases when the target population is a rare event in the simulation, e.g. when simulating Compact object binaries since most systems will disrupt during the supernova kick, and thus  $\\psi(\\mathbf{x_f}  )= 0 $ for most systems. Therefore simulating populations of such rare events becomes extremely computational expensive. \n\n\n\nTherefore, instead, present the variance reduction method , adaptive importance sampling [1], that generates samples\nfrom a distribution function which is automatically adapted to the scientific target by focusing on areas\nof the parameter space found to produce events of interest.\n Instead of drawing random binaries from the prior distribution $P$ we draw them from a distribution that focuses around the target distribution of interest, thereby minimizing the computational costs spend on areas that don't produce events of interest, whilst maximizing the computational costs spend on binaries that become the rare event. 
\n\nThe method consists of three main steps that are also shown in Figure \n\n\\begin{enumerate}\n\t\\item  The parameter space is explored to find a small population of events of interest.\n\t\\item  The set of known events is used to build an instrumental distribution, to adaptively\nguide future sampling of the parameter space.\n\t\\item  Later simulations of population members are then drawn from this instrumental\ndistribution. The information from these simulations can in turn be used to\nfurther improve the instrumental distribution.\n\n\\end{enumerate}\nBy doing so, the method minimizes the computational time spent on binary systems that do not evolve to the event of interest, whilst maximizing the computational time spent on binaries of the target distribution. \n\n\n\n\n\\section{Initial binary parameters}\n\nFor the exploratory phase and comparison runs we choose the distributions of $M_1, q, a, e, v_k$ similarly to those used in common binary population synthesis models (e.g. Belczynski, Pols).\n\n\n\n\\subsection{Initial mass}\nThe distribution of the initial primary mass $M_1$ follows a power-law distribution, also known as the initial mass function (IMF) (Kroupa 2001):\n\n\n%\n\\begin{equation}\n\tp(M_{1,i}) = C_M M_{1,i}^{-\\alpha} \\qquad \\text{for } M_{1,i} \\in [M_{1,i,\\text{min}}, M_{1,i,\\text{max}}], \n\\label{eq:priorIMF}\n\\end{equation}\n%\nwhere $C_M$ is the normalization constant given by\n%\n\\begin{equation}\n\tC_M = \\frac{1 - \\alpha}{M_{1,i,\\text{max}}^{1-\\alpha} - M_{1,i,\\text{min}}^{1-\\alpha}}.\n\\end{equation}\nWe choose $\\alpha = 2.35$ in agreement with Salpeter (1955) and Kroupa (2001), and $M_1 \\in [5, 100] M_{\\odot}$ to align with Vigna-Gomez (2017); a sampling sketch for this prior is shown below.
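\n\nSince the IMF prior must be sampled many times, it is convenient that this truncated power law can be sampled exactly by inverting its cumulative distribution function. The following is a minimal sketch, using the values $\\alpha = 2.35$ and $[5, 100]\\,M_{\\odot}$ from above:\n\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sample_imf(n, alpha=2.35, m_min=5.0, m_max=100.0, rng=None):\n    # Inverse-CDF sampling of p(M) proportional to M^(-alpha)\n    # on [m_min, m_max], valid for alpha != 1.\n    rng = rng or np.random.default_rng()\n    u = rng.uniform(size=n)\n    lo = m_min ** (1.0 - alpha)\n    hi = m_max ** (1.0 - alpha)\n    return (lo + u * (hi - lo)) ** (1.0 / (1.0 - alpha))\n\nprimary_masses = sample_imf(10_000)  # in solar masses\n\\end{lstlisting}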
\n\n\n\\subsection{Initial mass ratio}\nThe mass ratio $q_i$ is suggested from observations to have\na flat distribution (Mazeh et al., 1992; Goldberg $\\&$ Mazeh,\n1994; Tout, 1991), given by\n%\n\\begin{equation}\n\tp(q) = \\frac{1}{q_{\\text{max}} - q_{\\text{min}}} \\qquad \\text{for } q \\in [q_{\\text{min}}, q_{\\text{max}}],  \n\\end{equation}\nwhere $[q_{\\text{min}}, q_{\\text{max}}] = (0, 1]$ by definition of $q$. Nevertheless, it\nis also suggested that there is some dependency of the mass\nratio on the period of the system (e.g. Moe $\\&$ Di Stefano 2016), but this is beyond the scope of this thesis. \n\n\\subsection{Initial separation}\nThe separation $a_i$ is found to be uniform in the logarithm, also known\nas \\\"{O}pik's law (\\\"{O}pik 1924) and consistent with findings by\n(Kobulnicky et al. 2014) and (Moe $\\&$ Di Stefano 2015). The\ndistribution function is given by\n%\n\\begin{equation}\n\tp(a_i) = C_a \\frac{1}{a} \\qquad \\text{for } a \\in [a_{\\text{min}}, a_{\\text{max}}], \n\\end{equation}\n%\nwhere $C_a$ is the normalization constant\n%\n\\begin{equation}\nC_a = \\frac{1}{\\log a_{\\text{max}} - \\log a_{\\text{min}}}.\t\n\\end{equation}\n% \nWe choose $[a_{\\text{min}}, a_{\\text{max}}]  = [0.1, 1000]$ AU, consistent with (Vigna-Gomez 2018).\n\n\n\\subsection{Eccentricity}\nWe assume that all of our binaries are initially on circular orbits, $e = 0$, consistent with (Alejandro et al). \n\n\\subsection{Supernova}\nWe differentiate between three supernova scenarios: core collapse supernovae (CCSN), ultra-stripped supernovae (USSN) and electron-capture supernovae (ECSN). For the CCSN treatment, we use the rapid explosion\nscenario, as presented in Fryer et al. (2012), to determine the compact object remnant mass according to the total and carbon-oxygen (CO) core mass of the progenitor, with\na maximum allowed NS mass of $m_{\\text{NS,max}} = 2.0\\,M_{\\odot}$. In this scenario, the collapse does not allow for accretion onto the proto-NS, and is able to reproduce the proposed mass gap between neutron stars and black holes (Ozel et al. 2010;\nFarr et al. 2011). There is no consensus yet whether the mass gap is due to observational selection effects or if it is intrinsic to the explosion mechanism (Kreidberg et al. 2012; Wyrzykowski et al. 2016). Another explosion scenario comes from USSN (Tauris\net al. 2013, 2015). A star becomes stripped when it loses its hydrogen envelope during its evolution; if, during later stages, it manages to lose its helium envelope, it becomes ultra-stripped. In COMPAS, any star which engages in a stable case BB mass transfer episode with a NS as an accretor is considered to be ultra-stripped. We define case BB as a mass transfer episode which involves a helium donor star which has stopped burning helium in the core (naked helium star Hertzsprung Gap, HeHG). Ultra-stripped stars are left with an ONeMg core with a thin carbon and helium layer (Tauris et al. 2013). The compact object remnant mass of an USSN is determined in the same way as for CCSN. A single star with $8 \\lesssim m_{\\text{ZAMS}} / M_{\\odot} \\lesssim 10$ (binary stars spread the initial mass range) may collapse in an ECSN (Nomoto 1984). We assume the baryonic mass of the degenerate ONeMg core leading to an ECSN is $1.3\\,M_{\\odot}$.\n\n\n\\section{Adaptive Importance Sampling algorithm}\n\n\n\n\n\\begin{table}[]\n\\centering\n\\caption{Table of variables used.}\n\\label{my-label}\n\\begin{tabular}{l|l}\n\\hline\n\\textbf{variable}               & \\textbf{description}           \\\\ \\hline\n\\multicolumn{1}{|l|}{$u$} & \\multicolumn{1}{l|}{BPS model} \\\\ \\hline\n\\multicolumn{1}{|l|}{$\\mathbf{x_i}$} & \\multicolumn{1}{l|}{initial state of a binary system} \\\\ \\hline\n\\multicolumn{1}{|l|}{$\\mathbf{x_f}$} & \\multicolumn{1}{l|}{final state of a binary system} \\\\ \\hline\n\\multicolumn{1}{|l|}{$\\mathbf{X_t}$} & \\multicolumn{1}{l|}{target subpopulation of binaries of interest} \\\\ \\hline\n\\multicolumn{1}{|l|}{$M_1$}  & \\multicolumn{1}{l|}{mass of the primary star} \\\\ \\hline\n\\multicolumn{1}{|l|}{$a$} & \\multicolumn{1}{l|}{initial separation of the binary} \\\\ \\hline\n\\multicolumn{1}{|l|}{$q$} & \\multicolumn{1}{l|}{initial mass ratio $q = M_2 / M_1 $ of the binary} \\\\ \\hline\n\\multicolumn{1}{|l|}{$e$} & \\multicolumn{1}{l|}{initial eccentricity} \\\\ \\hline\n\\multicolumn{1}{|l|}{$v$} & \\multicolumn{1}{l|}{} \\\\ \\hline\n\\multicolumn{1}{|l|}{$v_k$} & \\multicolumn{1}{l|}{kick velocity amplitude} \\\\ \\hline\n\\multicolumn{1}{|l|}{$\\theta_k$} & \\multicolumn{1}{l|}{polar angle of the kick direction} \\\\ \\hline\n\\multicolumn{1}{|l|}{$\\phi_k$} & \\multicolumn{1}{l|}{azimuthal angle of the kick direction} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\\section{Adaptive Importance Sampling}\n\\section{Fiducial method}\n\\subsection{Exploratory phase}\n\\subsection{Improve distribution}\n\\subsection{Run simulation (step 3)}\n\n\\section{Advanced Adaptive Importance Sampling}\n\n\\section{Other implementations}\n\n\n\\section{Population Synthesis}\nFor the population synthesis model we use COMPAS (Alejandro et al in prep and Stevenson et al. 2017) \n\nbased on (...) 
\n\n\\section{Fiducial Model}\n\nOur fiducial model \n\n\\subsection{SN}\n\\subsection{Mass transfer and stability}\n\\subsection{CE treatment}\n\\subsection{Fallback}\n\\subsection{Kicks}\n\n\n\n\n%\\section{\\NoCaseChange{\\acl{RXTE}}}\n%While a number of X-ray missions have been conducted over the years, \\ac{RXTE} remains a unique mission due to its extraordinary timing resolution \\citep{bradt1993x}. Operating for more than 16 years from 1995 to 2012, \\ac{RXTE} built up a significant archive of transient X-ray sources, providing a large repository of \\ac{LMXB} observations \\citep{heasarc}. Using the \\ac{HEASARC} online services, provided by the NASA/Goddard Space Flight Center, data can freely be downloaded for the full length of the mission. With a wide range of data types available, the variety in data products originates in the different instruments carried on board \\ac{RXTE}.\\\\\n%\n%\\ac{RXTE} carried out observations with three observational instruments -- the \\ac{ASM}, the \\ac{HEXTE} and the \\ac{PCA} \\citep{bradt1993x}. The \\ac{ASM} was designed to survey a large fraction of the sky every 1.5 hours, allowing the intensity and spectrum of more than 75 objects to be monitored. Upon rapid changes in either property, the \\ac{HEXTE} and the \\ac{PCA} could be pointed towards the target typically within a few hours. The energy ranges of the \\ac{HEXTE} and the \\ac{PCA} were designed to be complimentary, with the \\ac{HEXTE} observing from 15-200 keV, and the \\ac{PCA} from 2-60 keV \\citep{jahoda1996orbit}. Both instruments had a $1^\\circ$ field of view, and a minimum timing resolution of $8\\mu$s for the \\ac{HEXTE} and $1\\mu$s for the \\ac{PCA} \\citep{rothschild1998flight, zhang1993laboratory}. \\\\\n%\n%Only observations conducted with the \\ac{PCA} are used throughout this project, providing an extensive source of data. Comprised of five \\acp{PCU}, the \\ac{PCA} allowed energies to be determined to an energy resolution of less than 18\\% at 6~keV \\citep{pcainfo}. Each \\ac{PCU} was filled with a mixture of Xenon and Methane gas, allowing charged particles to ionise the gas, resulting in an electrical pulse proportional to the energy carried by the incident particle \\citep{zhang1993laboratory}. An additional layer on top of each \\ac{PCU} contained propane gas in order to reduce the effect of background events. A gradual loss of propane occurred at the start of the mission, leaking through to the Xenon layers \\citep{jahoda1996orbit}. Combined with other effects, this resulted a gradual change in gain for which gain epochs have to be defined to correct for these changes \\citep{rxteenergychannel}. The loss of the PCU$0$ propane layer in 2000 required the adoption of an additional calibration epoch. An internal radioactive source allowed for continuous energy calibration \\citep{zhang1993laboratory}.\\\\\n%\n%With each active \\ac{PCU} providing data, information was sent to six \\acp{EA} incorporated in the \\ac{EDS} \\citep{jahoda1996orbit}. This phase allows for data processing and compression before telemetry. Two of the \\acp{EA} were programmed to run in standard configurations, with the others able to run in one of the seven different modes \\citep{rxtepcaissues}. The sheer number of options, and parameters, needed to extract this data from the different modes requires a large set of extraction tools. 
With the previous paragraphs giving a brief overview of the hardware behind the observations, additional technical details on the \\ac{PCA} can be found in \\citet{zhang1993laboratory}. Shifting to the data extraction and analysis side of data reduction requires specialised software, and is explained in the following section.\\\\\n%\n%\\section{Data Reduction}\n%\\label{sec:data_reduction}\n%In order to conduct a systematic population study, a robust pipeline is needed to run through many scores of observations. While a number of tools are available to help in extracting data, these tools do not lend well to upscaling, often requiring a large number of input parameters for every data mode. To this end, the \\chromos software pipeline was developed. The technical side of \\chromos is described in appendix~\\ref{ch:chromos}, in the form of a manual, with information on the underlying methods presented in this section. \\ \\\\\n%\n%The \\ac{HEASARC} online services provides several interfaces for searching and retrieving \\ac{RXTE} data \\citep{rxtearchive}. Using the web-based graphical user interface, a list of ObsIDs can be obtained for each target. ObsIds are used to classify observations, and are an identification code which changes when a new target is acquired, or when the \\ac{EA}-modes change. ObsIDs follow the format of 'NNNNN-TT-VV-SSX', with NNNNN a five-digit proposal number, TT a two-digit target number, VV a two-digit viewing number, SS a two-digit number to identify different pointings and X a character to denote slewing, configuration changes etc \\citep{rxtearchive}. An \\sw{ftp}-server allows for downloads to be conducted via the command line. Extracting this data requires some knowledge of the available data modes per observation. Two \\acp{EA} modes should be permanently available, \\sw{standard1} with no energy information but a 0.125s timing resolution and \\sw{standard2} data with 129 spectral channels and a 16s time resolution \\citep{rxtestdproducts}. A whole range of other data modes can also be present depending on the observation. These modes fall into two categories - science array (also known as binned-mode data) and science event format. Both formats require different tools and input to extract the data. Science array format includes data modes such as \\sw{binned} and \\sw{standard2}, and science event format, data modes such as \\sw{good\\-xenon} and \\sw{event} mode. Each data mode can provide diverging timing resolutions depending on the binning method, but files must have a timing resolution higher than $\\sfrac{1}{128}\\ $s for our subsequent analysis, with the exception of files for spectral analysis and background creation. \\\\\n%\n%To ensure data remains a reliable reflection of the actual target emission, it must be filtered using \\acp{GTI} \\citep{rxtecookbookevent}. Following the \\ac{RXTE} cookbooks for science array and science event mode data, the ftool \\sw{maketime} is used to set filter criteria \\citep{maketime,rxtecookbookbinned,rxtecookbookevent}. To ensure the earth brightness does not contaminate observations, the pointing elevation is set to be above $10^\\circ$. Additionally, the pointing offset is set to be less than $0.02^\\circ$ and the number of active \\acp{PCU} set to be greater than one. 
For all objects save for black holes and Sco~X-1, up to 10min since the \\ac{SAA} is removed \\marginpar{The \\ac{SAA} refers to an area in which the Earth's magnetic field is reduced in strength, causing an increase in high energy particle count rates \\ \\ \\ \\ \\citep{saa}.} and an electron count larger than 0.1 is removed to prevent electron contamination. The former sources show sufficiently high count rates to neglect this last criterion, and with current backgrounds able to account for the \\ac{SAA} passage, the filtering on time since \\ac{SAA} is not required. Using information from standard filter files, the times at which a change in number of \\acp{PCU} occurs are noted, allowing for 32s around these transitions to be filtered during extraction. This prevents any surge, or change in electrical current, from contaminating the count rate. \\\\\n%\n%Background files are created from \\sw{standard2} files together with standard filter files using the ftool \\sw{pcabackest} \\citep{pcabackest}. This estimated background spectrum also requires a provided model file. With sources showing a net count rate larger than 40~ct/s/PCU, the 'bright' background model can be used for all sources \\citep{pcadigest}. Having created background files, the sole step left before extracting the main data files, is determining the correct energy channel ranges. This final part is potentially the most complicated part of \\chromos, requiring a number of steps. On the basis of the observation date, an initial channel range can be selected using the energy-channel conversion table \\citep{rxteenergychannel}. If files are in \\sw{event} or \\sw{binned} mode, then the header of these files will show the channel binning, which can vary from observation to observation. The final channels can be selected using this information, in which channel bins closest to the energy range 2-13~keV are selected while ensuring the lowest energy channels are omitted \\citep[see][]{gleissner2004long}.\\\\\n%\n%Extracting data requires two ftools: \\sw{saextrct} for \\sw{event} and \\sw{goodxenon} files and \\sw{seextrct} for \\sw{standard2} and \\sw{binned} files \\citep{saextrct,seextrct}. The input parameters can be determined using the files and information generated in the previous steps, with exception of the timing resolution. This is set to $\\sfrac{1}{128}\\ $s for all files save for \\sw{standard2} files, which are extracted with their intrinsic resolution of 16s. A background file is extracted for each subsequent data file to ensure the same filter criteria are applied. \\sw{standard2} files are extracted for PCU2 only, the sole PCU with an intact propane layer of the course of the mission, and the most stable one. While the low timing resolution of \\sw{standard2} files prevents any high precision timing analysis from taking place, this data is well-suited to spectral analysis. \\sw{standard2} files are therefore extracted as both spectra and light curves, rather than just as light curves as all other data modes are.\\\\\n%\n%\\section{Timing Analysis}\n%\\label{sec:timing_analysis}\n%Light curves must be background corrected, which necessitates the rebinning of background light curves to match the resolution of the original light curve files. This is done on basis of interpolation between each consecutive background data point. Subtracting these values from the light curve count rates produces a light curve suitable for further analysis. 
To prevent the flares from affecting the general variability trend throughout a single observation, X-ray bursts are identified and removed for all neutron star systems. Tests showed that two consecutive count rates above a \\acf{RMS} per observation of 7$\\sigma$ provided a strong indication for an X-ray burst. Numerous methods for automatically identifying the start and end of the bursts were tested, but in the end a choice was made to cut fixed time bins on other side. Approximately 3s was cut before each detection, and 625s afterwards. Further discussion on this method can be found in section~\\ref{sec:dis_bursts}.\\\\\n%\n%In order to conduct timing analysis, a switch must be made to the frequency domain. Unless specified elsewhere, all following information is based on work presented in \\citet{uttley2014x}. While a variety of techniques are available to conduct time-series analysis \\citep[e.g.][]{maccarone2000time,legg2012direct}, discrete Fourier transforms are the most prevalent. These transforms can be calculated with\n%\\begin{align} \n%X_n = \\sum^{N-1}_{k=0}x_k\\ e^{\\sfrac{2\\pi ink}{N}}\n%\\end{align}\n%with $X_n$ the discrete Fourier transform, $N$ the number of time bins and $x$ the $k^\\textrm{th}$ value of the light curve. $X_n$ is calculated in steps of $f_n=\\sfrac{n}{N\\Delta t}$ with $\\Delta t$ the width of a time bin and with $n=0,1,2\\ldots \\sfrac{N}{2}$. The maximum frequency is thus 64~Hz due to the $\\sfrac{1}{128}\\ $s time resolution of the extracted light curves. In order to obtain a power spectrum, the discrete Fourier transform is multiplied with its complex conjugate\n%\\begin{align}\n%|X_n|^2 = X_n X_n^*\n%\\end{align}\n%The resulting power spectrum is subsequently normalised with\n%\\begin{align}\n%P_n = \\frac{2\\Delta t}{\\langle x \\rangle ^2 N}|X_n|^2\n%\\end{align}\n%in which $\\langle x \\rangle$ is the mean flux of the light curve. This leads to units in terms of fractional variance per Hz \\citep{belloni1990variability}. With the exception of perhaps an \\ac{AMSP}, few sources show fully periodic signals over all observations, but will usually show shifting power spectral components. A decision must therefore be made on the total time over which to Fourier transform. With power spectra errors dependent on the number of power spectra over which is averaged, a trade-off is established. Reducing the noise on the final power spectrum requires averaging over many segments, but in the process reduces the sensitivity to varying power spectral components. Following the procedure given in \\citet{heil2015power}, a choice is made to take discrete Fourier transforms of each $m^{th}$ segment of 512s in an observation, and average these power spectra:\n%\\begin{align}\n%\\overline{P}_n = \\frac{1}{M} \\sum_{m=1}^M P_{n,m}\n%\\end{align}\n%in which $M$ is the total number of segments in an observation. This number is limited not just by the total length of the light curve, but also by gaps due to PCU changes, X-ray bursts etc. The decision to bin across each observation is expanded on in section~\\ref{sec:binning}, where alternative binning methods are also discussed. 
\\\\\n%\\clearpage\n%With power spectra flattening towards higher frequencies, a correction can be applied by calculating the white noise present in the power spectrum with\n%\\begin{align} \n%P_\\textrm{noise} = 2\\frac{\\langle x \\rangle + \\langle b \\rangle}{\\langle x \\rangle ^2} \\label{eq:noise}\n%\\end{align}\n%in which $\\langle x \\rangle$ is the mean count rate and $\\langle b \\rangle$ is the mean background count rate in $\\overline{P}_n$. The resulting noise value can be subtracted from the power spectrum. The associated errors on the noise corrected power spectrum are then calculated by dividing each power by $\\sqrt{M}\\hspace{2pt}$. As the powers are $\\chi^2_2$-distributed, errors on the power spectrum can be approximated as Gaussian provided a large number of samples are binned.\\\\\n%\n%Power spectral evolution can be parametrised using power colours, as described in section~\\ref{sec:commontechniques}. These can be calculated using the power spectra obtained for each observation. First, the integrated power, or variance $V$, can be calculated using\n%\\begin{align} \n%V = \\Delta \\nu \\sum_{i=\\nu_\\textrm{low}}^{\\nu_\\textrm{high}} \\overline{P}_i\n%\\end{align}\n%with $\\Delta \\nu$ the Nyquist frequency and $\\overline{P}_i$ the power in frequency $i$, running between the four equally spaced frequency bands of 0.0039-0.031~Hz, 0.031-0.25~Hz, 0.25-2.0~Hz and 2.0-16.0~Hz \\citep{heil2015power}. This results in a variance value $V$ for each consecutive frequency band, respectively referred to as $V_A$, $V_B$, $V_C$ and $V_D$. Defining two power colours as\n%\\begin{align}\n%\tPC1 = \\frac{V_C}{V_A} \\hspace{20pt} \\textrm{and} \\hspace{20pt} PC2 = \\frac{V_B}{V_D} \n%\\end{align}\n%allows an observation to be placed in a \\ac{PCC}~diagram, with $PC1$ on the horizontal axis and $PC2$ on the vertical axis. The associated error values can be calculated from the errors on the variance, determined with\n%\\begin{align}\n%Err^2(V) = \\frac{(\\Delta \\nu)^2}{M} \\sum_{i=\\nu_\\textrm{low}}^{\\nu_\\textrm{high}} \\overline{ P^2 }_i\n%\\end{align}\n%as given by \\citet{heil2012ubiquity}. Note that the summation is over the average squared power, rather than over the squared average power. Power colour errors can subsequently be calculated with simple error propagation\n%\\begin{align}\n%Err(PC1) = PC1 \\sqrt{\\left(\\frac{Err(V_C)}{V_C}\\right) ^2 + \\left(\\frac{Err(V_A)}{V_A}\\right) ^2}\n%\\end{align}\n%and $Err(PC2)$ in analogous fashion.\n%\n%\\section{Spectral Analysis}\n%Spectral analysis is conducted with \\sw{standard2} files, due to both their energy resolution, and their relative ease of use. The ftool \\sw{pcarsp} is run for all files prior to any analysis, allowing response files to be created from energy spectra and filter files \\citep{pcarsp}. These response files can be used by the X-ray spectral-fitting program \\sw{xspec} \\citep{arnaud1996astronomical} to ensure an accurate representation of count rates in each energy bin after subtracting background values. Energies outside the range 2-13~keV are ignored, before unfolding the energy spectrum around a flat powerlaw. In doing so, the energy spectrum is effectively being divided by the effective area of the instrument response, providing energy spectra independent of long term changes in instrument response \\citep{heil2015inclination}. 
The energy spectral hardness is calculated for a variety of energy bands, by integrating the count rate over energy and dividing the resulting flux of the higher energy band by that of the lower energy band. These fluxes are additionally summed, to obtain the relative total intensity.\\\\\n%\n%%TODO Error calculation for integrating over energies? Does this paragraph need any formula's?\n%\n%\\section{Targets}\n%In order to compare the variability properties of both black hole and neutron star \\acp{LMXB}, a selection of objects is required. An overview of the chosen objects can be found in Tab.~\\ref{tab:objects} and includes information on various system parameters. The choice of objects was based primarily on the number of \\ac{RXTE} observations available, but also on the system type. These were chosen to ensure a good spread across both atoll and Z sources, as well as various inclinations, accretion states etc. Sources have been divided by compact object, with neutron stars above the centre line, and black holes below. Most columns should be self-explanatory, however \\spacedlowsmallcaps{\\#Good} may require additional explanation. This refers to the number of ObsIDs which have power colour ratios with a fractional variance constrained at a 3$\\sigma$-level in each frequency band. Further discussion on these values can be found in section~\\ref{sec:selection_criteria}, in which various selection effects are examined. The object names used in this table and throughout this thesis were selected to reflect the most common name in use. Alternate source names can however be found in Tab.~\\ref{tab:aka}, presenting source designations from the \\sw{4U}, \\sw{GX}, \\sw{IGR}, \\sw{INTEGRAL1}, \\sw{SWIFT}, \\sw{X} and \\sw{XTE} catalogues \\citep{SIMBAD}. The choice of black holes systems was made on basis on of \\ac{PCC}~diagram coverage as presented in \\citet{heil2015power}, providing a representative sample of possible black hole power colour tracks. \\\\\n%\n%\n%\\section{Robustness of \\chromos}\n%Initial tests of the pipeline were conducted by comparing power colours of Aql~X-1 with prior results presented in \\citet{heil2015power}. Both the results can be seen in Fig.~\\ref{fig:aqlx1}, with left showing power colours from \\citet{heil2015power} and right showing results obtained with \\chromos. Similar tracks are found for Aql~X-1 with both methods, save for the top left corner of the \\ac{PCC}~diagram where observations are primarily in the soft state. A close inspection revealed differences in noise correction, where \\citep{heil2015power} used spectral fitting to obtain the white noise, in contrast to the approach taken in this work. Additional binning effects also played a role, and are discussed further in section~\\ref{sec:binning}. The power colours track of black holes obtained with \\chromos showed only minor differences in comparison to those obtained in \\citet{heil2015power}. Further tests were also conducted on parameters related to energy spectra, revealing similar hardness and intensity values in past spectral studies. 
Suggestions on scientific improvements to \\chromos are given in the discussion, section~\\ref{ch:discussion}, with coding recommendations in appendix~\\ref{ch:chromos}.\n%\n%\\begin{figure}%\n%\\myfloatalign%\n%\\makebox[\\textwidth][r]{%\n%\\subfloat{\\includegraphics[width=.42\\largefigure,valign=t]{pc/aquila_lucy.png}}%\n%\\quad%\n%\\subfloat{\\includegraphics[width=.45\\largefigure,valign=t]{pc/individual/aquila_X1}}%\n%}%\n%\\caption[Comparison of Aql~X-1 \\acs{PCC}~diagrams]{\\spacedlowsmallcaps{left} Power colour ratios obtained for Aql~X-1 by \\citet{heil2015power}. To ensure clarity, the original figure has been adapted by removing power colours from black hole systems. \\spacedlowsmallcaps{right} Power colour ratios obtained with \\chromos for Aql~X-1. Only power colours with a fractional variance constrained at a $3\\sigma$ level in each frequency band have been included.}\\label{fig:aqlx1}\n%\\end{figure}\n%\n%\\newcolumntype{L}[1]{>{\\raggedright\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n%\\newcolumntype{C}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n%\\newcolumntype{R}[1]{>{\\raggedleft\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n%\n%%\\setlength{LTcapwidth}{10.in}\n%\\begin{landscape}\n%\\begin{longtable}{@{\\extracolsep{\\fill}}>{\\centering\\arraybackslash}p{4cm}ccccccc@{}}\n%\\caption[Object properties]{Overview of \\acp{LMXB} showing system properties and observation details, sorted by compact object type, and then by name. Alternative source names can be found in appendix~\\ref{ch:sources}, in table~\\ref{tab:aka}. Systems are divided into atolls (A), Z sources (Z), accreting pulsars (AP), accreting millisecond pulsars (AMP), and objects showing characteristics of both atoll and Z sources (AZ). A further division is made between persistent accretion-powered pulsars with burst oscillations (P), intermittent ones (I) and burst oscillation sources without detectable accretion-powered pulsations (B) \\citep[see][for a review]{watts2012thermonuclear}. Intermittent sources have been assigned to the pulsar or burster group on the basis of their timing properties \\citep{marieke}. Spin frequencies have been given where possible, as have inclination angles. \\spacedlowsmallcaps{\\#Good} gives the total number of observations with a significant detected variance at a 3$\\sigma$-level in all four power colour frequency bands. The total number of available ObsIDs in the \\ac{RXTE} archive is also given per source. References are denoted with numbers and are given at the bottom of this table.}\\label{tab:objects}\\\\\n%\\multicolumn{8}{l}{} \\\\[-7pt]\n%\\toprule\n%\\tableheadline{Source}&\\tableheadline{Type}&\\tableheadline{Burster/Pulsar}&\\tableheadline{Spin Freq. (Hz)}&\\tableheadline{Inclination ($^\\circ$)}&\\tableheadline{\\#Good}&\\tableheadline{\\#ObsID}&\\tableheadline{References}\\\\\n%\\midrule\n%\\endfirsthead\n%\\toprule\n%\\tableheadline{Source}&\\tableheadline{Type}&\\tableheadline{Burster/Pulsar}&\\tableheadline{Spin Freq. 
(Hz)}&\\tableheadline{Inclination ($^\\circ$)}&\\tableheadline{\\#Good}&\\tableheadline{\\#ObsID}&\\tableheadline{References}\\\\\n%\\midrule\n%\\endhead\n%4U 0614+09&A&B&414.7&86&85&502&1,2,2,3\\\\\n%4U 1636-53&A&B&581.9&64&45&1556&4,5,2,6\\\\\n%4U 1702-43&A&B&329&&28&210&7,5,5\\\\\n%4U 1705-44&A&&&51&55&516&8,9\\\\\n%4U 1728-34&A&B&363&50&55&405&7,5,5,10\\\\\n%Aql X-1&A&I/B&550.3&72\u201379&145&596&4,5,2,11\\\\\n%Cir X-1&AZ&&&90&52&811&12,13\\\\\n%Cyg X-2&Z&&&62.5&62&567&8,14\\\\\n%EXO 0748-676&A&B&552.5&75&93&746&15,5,16,17\\\\\n%GX 17+2&Z&&&&4&206&8\\\\\n%GX 340+0&Z&&&&5&97&8\\\\\n%GX 349+2&Z&&&&3&142&8\\\\\n%GX 5-1&Z&&&&5&167&8\\\\\n%HETE J1900.1-2455&AMP&I/P&337.3&30&129&361&18,5,19,20\\\\\n%IGR J00291+5934&AMP&&599&&45&479&21,22\\\\\n%IGR J17480-2446&AP&P&11&&42&159&23,5,23\\\\\n%IGR J17498-2921&AMP&P&401&&3&129&24,5,24\\\\\n%KS 1731-260&A&B&524&&21&82&7,5,5\\\\\n%SAX J1808.4-3658&AMP&P&401&55&25&1337&25,5,25,26\\\\\n%SWIFT J1756.9-2508&AMP&B&182&&19&50&27\\\\\n%Sco X-1&Z&&&30&48&598&8,28\\\\\n%Sgr X-1&A&&&&12&109&8\\\\\n%Sgr X-2&Z&&&&35&88&12\\\\\n%V4634 Sgr&A&&&&119&1008&29\\\\\n%XB 1254-690&A&&&&10&94&30\\\\\n%XTE J0929-314&AMP&B&185&&7&46&31,31,22\\\\\n%XTE J1701-462&AZ&&&60&94&872&12,28\\\\\n%XTE J1751-305&AMP&B&435&&21&274&32\\\\\n%XTE J1807-294&AMP&P&190&&4&112&33,2,22\\\\\n%XTE J1814-338&AMP&P&314&&17&93&34,5,22\\\\\n%\\midrule\n%\\\\[-20pt]\n%GX 339-4&BH&&&&391&1401&35\\\\\n%H1743-322&BH&&&&123&558&36\\\\\n%XTE J1550-564&BH&&&&141&423&37\\\\\n%\\bottomrule\n%\\\\[-7pt]\n%\\multicolumn{8}{L{23cm}}{\n%\\spacedlowsmallcaps{1}~\\citet{mendez1997kilohertz} \\spacedlowsmallcaps{2}~\\citet{marieke} \\spacedlowsmallcaps{3}~\\citet{schulz2010dynamical} \\spacedlowsmallcaps{4}~\\citet{liu2001catalogue} \\spacedlowsmallcaps{5}~\\citet{watts2012thermonuclear} \\spacedlowsmallcaps{6}~\\citet{pandel2008relativistic} \\spacedlowsmallcaps{7}~\\citet{galloway2008thermonuclear} \\spacedlowsmallcaps{8}~\\citet{hasinger1989two} \\spacedlowsmallcaps{9}~\\citet{di2009relativistically} \\spacedlowsmallcaps{10}~\\citet{shaposhnikov2002bursting} \\spacedlowsmallcaps{11}~\\citet{galloway2016intermittent} \\spacedlowsmallcaps{12}~\\citet{fridriksson2015common} \\spacedlowsmallcaps{13}~\\citet{iaria2008chandra} \\spacedlowsmallcaps{14}~\\citet{orosz1999optical} \\spacedlowsmallcaps{15}~\\citet{homan2015geometric} \\spacedlowsmallcaps{16}~\\citet{galloway2010discovery} \\spacedlowsmallcaps{17}~\\citet{parmar1986discovery} \\spacedlowsmallcaps{18}~\\citet{watts2009discovery} \\spacedlowsmallcaps{19}~\\citet{morgan2005hete} \\spacedlowsmallcaps{20}~\\citet{papitto2013accretion} \\spacedlowsmallcaps{21}~\\citet{galloway2005discovery} \\spacedlowsmallcaps{22}~\\citet{gladstone2007analysing} \\spacedlowsmallcaps{23}~\\citet{papitto2011spin} \\spacedlowsmallcaps{24}~\\citet{papitto2011discovery} \\spacedlowsmallcaps{25}~\\citet{wijnands1998millisecond} \\spacedlowsmallcaps{26}~\\citet{cackett2009broad} \\spacedlowsmallcaps{27}~\\citet{krimm2007discovery} \\spacedlowsmallcaps{28}~\\citet{crampton1976spectroscopic} \\spacedlowsmallcaps{29}~\\citet{van2005relations} \\spacedlowsmallcaps{30}~\\citet{bhattacharyya2007timing} \\spacedlowsmallcaps{31}~\\citet{galloway2002discovery} \\spacedlowsmallcaps{32}~\\citet{markwardt2002discovery} \\spacedlowsmallcaps{33}~\\citet{markwardt2003discovery} \\spacedlowsmallcaps{34}~\\citet{markwardt2003xte} \\spacedlowsmallcaps{35}~\\citet{wijnands1999broadband} \\spacedlowsmallcaps{36}~\\citet{homan2005high} \\spacedlowsmallcaps{37}~\\citet{homan2001correlated}\n%} 
\\\\\n%\\end{longtable}\n%\\end{landscape}\n%% \n%% \\begin{table}\n%%     \\myfloatalign\n%%   \\begin{tabularx}{\\textwidth}{Xccc} \\toprule\n%%     \\tableheadline{Source}\t& \\tableheadline{Inclination}\t& \\tableheadline{Type (A/Z[Sco/Cyg])} & \\tableheadline{Dipper} \\\\\n%%     \\midrule\n%%     \\TODO & \\TODO & \\TODO & \\TODO\\\\\n%%     \\midrule\n%%     \\TODO & \\TODO & \\TODO & \\TODO\\\\%\\citeauthor{knuth:1976} \\\\\n%%     \\bottomrule\n%%   \\end{tabularx}\n%%   \\caption[\\TODO]{\\TODO}\\label{tab:objects}\n%% \\end{table}", "meta": {"hexsha": "b1895219ad0b06c51f4a936408a5eed24d2abaa4", "size": 36038, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/methods.tex", "max_stars_repo_name": "FloorBroekgaarden/MasterThesis", "max_stars_repo_head_hexsha": "c533e3c6671bf703609fb071653e77d847cfbae0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/methods.tex", "max_issues_repo_name": "FloorBroekgaarden/MasterThesis", "max_issues_repo_head_hexsha": "c533e3c6671bf703609fb071653e77d847cfbae0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/methods.tex", "max_forks_repo_name": "FloorBroekgaarden/MasterThesis", "max_forks_repo_head_hexsha": "c533e3c6671bf703609fb071653e77d847cfbae0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 101.802259887, "max_line_length": 1923, "alphanum_fraction": 0.7668294578, "num_tokens": 10231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.870597271765821, "lm_q2_score": 0.6791787121629466, "lm_q1q2_score": 0.5912911338504852}}
{"text": "\\subsubsection{Primary model}\n\\label{sec:methods_pipeline_primary_model}\n\nThe primary model consists of computing two moving average signals with\ndifferent time windows. The longer the time window, the more samples are\naveraged and consequently the slower it varies with price variations. Having two\nsignals, one \\emph{fast} and one \\emph{slow} provides information of how short\nand long term price movement vary one with respect to other. In particular, the\nstrategy takes advantage of the crossing points of both signals. When the fast\nsignal crosses above the slow signal, a buy event is generated. When the slow\nsignal crosses above the fast signal, a sell event is generated.\n\nNote that from a daily sampled, positive and real signal as the bitcoin close\nprice in USD is, we obtain another two daily sampled, positive and real signals\n(fast and slow). The latter two signals generate when they cross the events of\nthe primary model. These events are not equally spaced anymore, and the series\nis categorical, its values are $\\{1, -1\\}$ which represent $\\{buy, sell\\}$\nrespectively. See figure \\ref{fig:buy_and_sell_labels} to show these events over\nthe bitcoin price series.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{methods/images/buy_and_sell_labels.png}\n    \\caption{Buy and sell signals over the price series.}\n    \\label{fig:buy_and_sell_labels}\n\\end{figure}\n\nThe process of obtaining the signal with the bets is called \\emph{labeling}\nbecause it generates a series of \\emph{labels} that determine a concrete action:\nbuy or sell the position. Next, \\emph{metalabeling} process comes. It consists\nin providing a probability to each \\emph{label} which will be used to size the\nbet. To assess whether the \\emph{label} is correct or not, the triple barrier\nmethod is used:\n\n\\begin{enumerate}\n  \\item Define a profit taking and stop loss rate for which a buy and sell\n        signals will be considered valid. For a given price and time, a new\n        greater value and lesser value are defined based on both rates before.\n  \\item Define a time constant $h$ (expressed as a positive and integer multiple\n        of the sampling rate, i.e. number of days) that determines for each\n        label, a new time stamp ahead.\n  \\item Determine whether the price signal hits the greater price or the lesser\n        price before reaching the time stamp ahead of $h$ periods from the\n        label's event. When:\n  \\begin{itemize}\n    \\item A $1$-valued label gets a cross with the greater price barrier, the\n          metalabel is $1$ to indicate a positive label.\n    \\item A $-1$-valued label gets a cross with the lesser price barrier, the\n          metalabel is $1$ to indicate a positive label.\n    \\item A $1$-valued label gets a cross with the lesser price barrier, the\n          metalabel is $0$ to indicate a positive label.\n    \\item A $-1$-valued label gets a cross with the greater price barrier, the\n          metalabel is $0$ to indicate a positive label.\n    \\item When both $1$ and $-1$ valued labels do not get a corresponding cross \n          with any of the price barriers, the return sign between the price at\n          $h$ sample periods ahead of label's time stamp and the label's price\n          is used. 
If the sign of the return and the label match, the metalabel\n          is $1$; otherwise it is $0$.\n  \\end{itemize}\n\\end{enumerate}\n\nLabels and metalabels are the fundamental series from which the secondary\nmodel is built. The secondary model is a classifier that is trained with\nmetalabels. The predicted probability will help to size the bet on each label.\nIn mathematical terms: $Metalabels = f(Labels, \\ldots)$, where $f$ represents\nthe secondary model.", "meta": {"hexsha": "c14c4a01c512eb39ff39621b751b26695e020c34", "size": 3696, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/methods/pipeline/primary_model.tex", "max_stars_repo_name": "agalbachicar/swing_for_the_fences", "max_stars_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/methods/pipeline/primary_model.tex", "max_issues_repo_name": "agalbachicar/swing_for_the_fences", "max_issues_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/methods/pipeline/primary_model.tex", "max_forks_repo_name": "agalbachicar/swing_for_the_fences", "max_forks_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.8615384615, "max_line_length": 80, "alphanum_fraction": 0.746482684, "num_tokens": 881, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681049901036, "lm_q2_score": 0.689305616785446, "lm_q1q2_score": 0.5912643726690866}}
{"text": "%Chapter \"Computational aspects\"\r\n%\r\n\\graphicspath{ {./img/Computational/} }\r\n\\chapter{FEM formulation of the elasticity BVP}\r\n\\label{chap: Computational Aspects}\r\n\r\n\\section*{Preliminary}\r\nThe previous chapter covered the boundary value problem for the model of theory of elasticity. It was shown that the model could equivalently be written in strong form (i.e., governing equations and boundary conditions) or in weak form. It was also shown that a particular interpretation of the weak form was that of the principle of virtual displacements. In this chapter we start from this principle and use interpolation theory to approximate both physical and virtual functions. As a result, this weak form of the BVP becomes a system of linear equations in the interpolation parameters that artificially enforce the principle of virtual displacements. The resulting finite element equations are shown to be equivalent to those governing the mechanical equilibrium of a discrete system of particles. To facilitate conceptual understanding we find first the equations for a single element and then proceed to consider the full mesh after invoking equilibrium and displacement compatibility conditions along element interfaces.\r\n\r\nAt the end of this chapter\\footnote{{\\bf This chapter, together with theoretical and computational learning activities is complemented by Jupyter Notebooks 8 through 12 available at the course REPO.}} the student should be able to:\r\n\r\n\r\n\\begin{itemize}\r\n\\item[\u2022] Use interpolation theory to write the finite element based discrete version of the principle of virtual work.\r\n\r\n\\item[\u2022] Recognize the finite element equilibrium equations of the elasticity BVP as physically analogous to those of a discrete system of particles.\r\n\r\n\\item[\u2022] Understand the computational steps required for the implementation of the finite element. equilibrium equations in a computer language like Python.\r\n\r\n\\item[\u2022] Understand the approximate nature of finite element solutions.\r\n\r\n\\end{itemize}\r\n\r\n\\section{FE algorithm starting from the weak formulation of the BVP}\r\nIn the previous chapter it was shown that the elasticity BVP could equivalently be formulated in differential and integral form in a so-called weak formulation. In this section we show that a simple finite element algorithm can be obtained if one introduces discretization, through the use of interpolation theory, into this weak form. Discretization appears in two forms: first, when one assumes that a primary variable is known at discrete (nodal) points and second when the interpolation functions exist over finite sub-domains or elements. For completeness we start by recalling the weak formulation introduced previously and then use our combined index notation for scalar components of tensorial functions and for componentes of nodal variables. The weak form is then given as follows:\r\n\r\nGiven the body forces $f_i$, the surface tractions $t_i^{\\hat n}$ prescribed over the part of the boundary $S_t$ and the surface displacements ${\\bar u_i}$ prescribed over the part of the boundary $S_u$ find the displacement field ${u_i}:V \\to \\mathbb{R}$ and such $\\forall {w_i} \\in V$ it satisfies:\r\n\\[\\intL_V \\sigma_{ij} w_{i,j}\\, \\dd{V} - \\intL_V f_i w_i\\, dV  - \\intL_{S_t} t_i^{\\hat n} w_i\\, \\dd{S} = 0.\\]\r\n\r\n\r\nIn the expression above $V$ represents the complete problem domain while $S_t$ is that part of the boundary over which tractions are prescribed. 
Assume now that the full domain $V$ is divided into $NUMEL$ non-overlapping sub-domains (or finite elements) as shown schematically in \r\n\\cref{fig:blow2}\r\n\r\n\r\n\\begin{figure}[h]\r\n\\centering\r\n\\includegraphics[width=8cm]{meshblow.pdf}\r\n\\caption{Meshed blow}\r\n\\label{fig:blow2}\r\n\\end{figure}\r\n\r\n\r\nand in such a way that mathematically this can be expressed as:\r\n\r\n\\[V=\\bigcup \\Omega _e\\]\r\n\r\nwhere $\\Omega _e$ represents the sub-domain occupied by one such element.\r\n\r\n\\begin{tcolorbox}\r\nIn the original strong form of the BVP the partial differential equations result from equilibrium arguments valid over infinitesimal material points occupying the domain $V$. In the FEM the operation of dividing the full domain $V$ into {\\bf NUMEL} subdomains is equivalent to rendering the continuous problem, with infinite material points, into a discrete problem with a finite number of elements. In conclusion, a finite element is the discrete approximation of an infinitesimal material point.\r\n\r\n\\end{tcolorbox}\r\n\r\n\r\nNote that each element is defined by an ordered set of nodal points, establishing a prescribed interpolation space used to conduct function approximations. In this element partition adjacent elements are connected through the nodal points in such a way that interpolation functions exist only within their element but are continuous across element interfaces. Using this partition in the weak form of equilibrium and using a summation symbol yields:\r\n\r\n\\[\\sum_e\\int_{\\Omega_e}\\sigma_{ij}w_{i,j}\\operatorname d\\Omega_e-\\sum_e\\int_{\\Omega_e}f_iw_i\\operatorname d\\Omega_e-\\sum_e\\int_{S_e}t_iw_i\\operatorname dS_e = 0\\]\r\n\r\n\r\nwhich is still the same form of the weak equilibrium expression.\r\n\r\nIn a second discretization operation we use interpolation methods to approximate both the displacement field $u_i$ and the weighting function $w_i$ over each element, as discussed previously. 
In particular, assume that inside every element the fields can be approximated as:\r\n\r\n\r\n\\begin{align*} \r\nu_i(\\overrightarrow x)& =N_i^Q(\\overrightarrow x)\\widehat u^Q \\\\ \r\nw_i(\\overrightarrow x)& =N_i^Q(\\overrightarrow x)\\widehat w^Q\r\n\\end{align*}\r\n\r\nwhere the $\\widehat u^Q$ represent the scalar components of the displacement vector at the nodal point $Q$, $N_i^Q(\\overrightarrow x)$ is the shape function for the nodal point $Q$ evaluated at the field point $\\overrightarrow x$ and $u_i(\\overrightarrow x)$ is the interpolated approximation to the displacement vector at the field point $\\overrightarrow x$.\r\n\r\n\r\n\\begin{tcolorbox}\r\nRecall that in expressions like\r\n\r\n\\[u_i(\\overrightarrow x) =N_i^Q(\\overrightarrow x)\\widehat u^Q\\]\r\n\r\nthe repeated super-script $Q$ implies summation over $Q= 1,..,Nnodes$ where $Nnodes$ is the number of nodes of the element.\r\n\r\n\r\n\\end{tcolorbox}\r\n\r\nConsidering the displacement field as the primary variable we can write in terms of the elastic constitutive tensor:\r\n\r\n\\[\\sigma_{ij}=C_{ijkl}\\varepsilon_{kl}\\equiv C_{ijkl}B_{kl}^Q(\\overrightarrow x)\\widehat u^Q\\]\r\n\r\nafter using\r\n\r\n\\begin{align*} \r\n\\varepsilon_{ij}(\\overrightarrow x)=B_{ij}^Q(\\overrightarrow x)\\widehat u^Q \\\\ \r\nw_{(i,j)}(\\overrightarrow x)=B_{ij}^Q(\\overrightarrow x)\\widehat w^Q\r\n\\end{align*}\r\n\r\n\r\nand where\r\n\r\n\\[ B_{ij}^Q(\\overrightarrow x)=\\frac12\\left[\\frac{\\partial N_i^Q}{\\partial x_j}+\\frac{\\partial N_j^Q}{\\partial x_i}\\right].\\]\r\n\r\nSubstitution of the above in the discretized weak form gives us for an arbitrary $e$-element of domain $\\Omega_e$\r\n\r\n\\[\\widehat w^P\\int_{\\Omega_e}B_{ij}^PC_{ijkl}B_{kl}^Q\\operatorname d\\Omega_e\\widehat u^Q-\\widehat w^P\\int_{\\Omega_e}N_i^Pf_i\\operatorname d\\Omega_e-\\widehat w^P\\int_{S_e}N_i^Pt_i^{(n)}\\operatorname dS_e=0.\\]\r\n\r\n\r\nNote that $w_i$ in the above expression is an arbitrary function and it is convenient to assign values like\r\n\r\n\\[\\widehat w^T=\\begin{bmatrix}0\\cdots&1\\cdots&0\\end{bmatrix}\\]\r\n\r\nin turn for every node in order to cancel the common term $\\widehat w^P$, so that for an arbitrary degree of freedom $P$ we can write:\r\n\r\n\\[\\int_{\\Omega_e}B_{ij}^PC_{ijkl}B_{kl}^Qd\\Omega_e\\widehat u^Q=\\int_{\\Omega_e}N_i^Pf_id\\Omega_e+\\int_{S_e}N_i^Pt_idS_e.\\]\r\n\r\nIn this expression the dummy superscripts imply a summation over the number of nodes of the element in such a way that the only free index is $P$. In this compact notation this also implies that we have a system of equations corresponding to $P= 1,..,Nnodes$, where $Nnodes$ is the number of nodes of the element. Note also that the term $K_e^{PQ}$ defined by\r\n\r\n\\[K_e^{PQ}=\\int_{\\Omega_e}B_{ij}^PC_{ijkl}B_{kl}^Q\\operatorname d\\Omega_e\\]\r\n\r\n\r\nrepresents the force along degree of freedom $P$ due to a unit displacement along degree of freedom $Q$. 
In this sense we can define the following set of nodal forces along the $P$-th degree of freedom:\r\n\r\n\\begin{align*} \r\nf_\\sigma^P & =K_e^{PQ}\\widehat u^Q \\\\\r\nf_B^P & =\\int_{\\Omega_e}N_i^Pf_i\\operatorname d\\Omega_e \\\\\r\nf_C^P & =\\int_{S_e}N_i^Pt_i^{(n)}\\operatorname dS_e\r\n\\end{align*}\r\n\r\nsatisfying\r\n\r\n\r\n\\begin{equation}\r\nf_\\sigma^P=f_B^P+f_C^P\r\n\\label{nodal_forces}\r\n\\end{equation}\r\n\r\n\r\nand where $f_\\sigma^P$, $f_B^P$ and $f_C^P$ are forces due to the internal element stresses, body forces and surface tractions, respectively.\r\n\r\n\\Cref{nodal_forces}, which can also be written as:\r\n\r\n\\[K_e^{PQ}\\widehat u^Q =f_B^P+f_C^P\\]\r\n\r\nis an equation in the nodal displacements $\\widehat u^Q$. However, it must be observed that this equation has been derived for a single element $\\Omega_e$ and the possible contributions of forces and stiffness terms from additional elements are yet to be considered. This final step, leading to the assembly of the global system of equilibrium equations (to be discussed later), gives:\r\n\r\n\r\n\\begin{equation}\r\n\\left[K^G\\right]\\left\\{U^G\\right\\}=\\left\\{RHS^G\\right\\}\r\n\\label{globalsys}\r\n\\end{equation}\r\n\r\nand where the term $RHS^G$ is a vector of assembled global forces of the type $f_B^P+f_C^P$ storing the contribution from body forces and external surface tractions.\r\n\r\n\\begin{tcolorbox}\r\nThe details of the Python implementation of the stiffness matrix:\r\n\r\n\\[K_e^{PQ}=\\int_{\\Omega_e}B_{ij}^PC_{ijkl}B_{kl}^Q\\operatorname d\\Omega_e\\]\r\n\r\nare presented as an in-class activity in Notebook 6. This activity requires prior knowledge of interpolation (see Notebook 3) and numerical integration.\r\n\r\n\\end{tcolorbox}\r\n\r\n\\section{FE algorithm starting from the principle of virtual work}\r\n\r\nAs shown in the previous chapter a valid solution $(u_i , \\sigma_{ij} , \\epsilon_{ij})$ to the well-posed boundary value problem in the theory of elasticity, where $u_i={\\overline u}_i \\quad \\text{in} \\quad  S_u$,\r\n\r\nsatisfies \r\n\r\n\\begin{equation}\r\n\\int\\limits_V \\sigma _{ij}\\var{\\varepsilon_{ij}}\\dd{V}  - \\int\\limits_V f_i\\delta u_i\\dd{V}  - \\int\\limits_{S_t} t_i^{(n)}\\var{u_i}\\dd{S}  = 0\r\n\\label{PVDs}\r\n\\end{equation}\r\n\r\nfor arbitrary functions $\\delta u_i$ such that  $\\delta u_i =0 \\quad \\text{in} \\quad  S_u$ and where:\r\n\r\n\\[\\delta\\varepsilon_{ij}=\\frac12\\left(\\frac{\\partial\\delta u_i}{\\partial x_j}+\\frac{\\partial\\delta u_j}{\\partial x_i}\\right).\\]\r\n\r\nAfter using the same arguments and interpolation scheme as in the weak form discretization, \\cref{PVDs} leads to:\r\n\r\n\r\n\\[\\int_{\\Omega_e}B_{ij}^PC_{ijkl}B_{kl}^Qd\\Omega_e\\widehat u^Q=\\int_{\\Omega_e}N_i^Pf_id\\Omega_e+\\int_{S_e}N_i^Pt_idS_e\\]\r\n\r\nwhich is the same equation resulting from the discretization of the weak form.\r\n\r\nNote that the PVW statement is a scalar (energy) equation relating the strain energy,\r\n\r\n\\[\\delta E_s=\\int_{\\Omega_e}\\sigma_{ij}\\delta\\varepsilon_{ij}\\operatorname d\\Omega_e\\]\r\n\r\naccumulated in the elastic solid as a result of the imposed virtual strain field $\\delta\\varepsilon_{ij}$, to the work of the external loads\r\n\r\n\\[\\delta W=\\int_{\\Omega_e}f_i\\delta u_i\\operatorname d\\Omega_e+\\int_{S_e}t_i^{(n)}\\delta u_i\\operatorname dS_e\\]\r\n\r\ndue to the imposition of the virtual displacement $\\delta u_i$.\r\n\r\nUsing the same notation as in the previous algorithm the discrete version can also be 
written as:\r\n\r\n\\[\\delta\\widehat u^Pf_\\sigma^P-\\delta\\widehat u^Pf_B^P-\\delta\\widehat u^Pf_C^P=0\\]\r\n\r\nand, after using the arbitrariness of $\\delta u_i$,\r\n\r\n\\begin{equation}\r\nf_\\sigma^P-f_B^P-f_C^P=0.\r\n\\label{PVW_force}\r\n\\end{equation}\r\n\r\n\r\nIn a sense the FEM scheme makes use of interpolation theory to convert the continuous BVP governed by a set of partial differential equations into a discrete problem governed by a set of algebraic equations. Within this context \\cref{PVW_force} can be interpreted as the force equilibrium equation governing the response of a particle $P$ subject to the action of:\r\n\r\n\\begin{itemize}\r\n\\item[\u2022] External nodal forces $f_B^P$ consistent with body forces $f_i$.\r\n\\item[\u2022] External nodal forces $f_C^P$ consistent with surface tractions $t_i^{(n)}$.\r\n\\item[\u2022] Internal forces $f_\\sigma^P$ consistent with element stresses and equilibrating the external forces.\r\n\\end{itemize}\r\n\r\nOn the other hand, recall that the PVW statement only holds if the set $(u_i , \\sigma_{ij} , \\epsilon_{ij})$ is the actual solution to the BVP. This fact implies that in the FE formulation, where we only have an approximate solution $u_i(\\overrightarrow x) =N_i^Q(\\overrightarrow x)\\widehat u^Q $, the principle does not hold as this is not the actual solution. However, the nodal displacements found after solving \\cref{PVW_force} enforce this condition.\r\n\r\n\\begin{tcolorbox}\r\n\r\nSince in the finite element equations the term\r\n\r\n\\[f_\\sigma^P  =K_e^{PQ}\\widehat u^Q\\]\r\n\r\nrepresents nodal forces equivalent to the element stresses, the coefficients $K_e^{PQ}$ represent the nodal force along degree of freedom $P$ due to a unit displacement along degree of freedom $Q.$\r\n\r\n\\end{tcolorbox}\r\n\r\n\r\n\\section{Global assembly}\r\n\r\nThe fundamental equation\r\n\r\n\\begin{equation}\r\nK_e^{PQ}\\widehat u^Q =f_B^P+f_C^P\r\n\\label{oneelement}\r\n\\end{equation}\r\n\r\nresulting from the discretization of the PVW through the introduction of interpolation functions was derived over a single element of domain $\\Omega_e$. In order to consider the contribution from all the elements in a finite element discretized version of a solid we use Newton's third law of motion (action and reaction principle). For that purpose recall that the term $f_C^P$ in \\cref{oneelement} corresponds to the contact forces resulting from surface tractions and it is computed according to:\r\n\r\n\\begin{equation}\r\nf_C^P=\\int_{S_t}N_i^Pt_i^{(n)}\\operatorname dS\r\n\\label{traction}\r\n\\end{equation}\r\n\r\nwhere it is evident that these forces depend on the external outward normal vector $\\widehat n$ over $S_t$. Clearly, the mechanical interaction between two elements in contact takes place through these contact forces. To explain this interaction consider the schematic mesh shown in \\cref{fig:4assembled1} below:\r\n\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[width=8cm]{assembled}\r\n\\caption{4-element mesh}\r\n\\label{fig:4assembled1}\r\n\\end{figure}\r\n\r\n\r\nFocusing attention on elements $1$ and $2$, let the interacting surface be labelled $S_b$ and extend this notation to the degrees of freedom and forces in such a way that the degrees of freedom of nodal points along $S_b$ are named $U_b$, while those at the remaining parts of elements $1$ and $2$ are named $U_a$ and $U_c$ respectively. 
Accordingly the finite element equilibrium equations for each element can be written as:\r\n\r\n\\begin{equation}\r\n\\begin{Bmatrix}F_a\\\\F_b(\\widehat n)\\end{Bmatrix} = \\begin{bmatrix}K_{aa}^1&K_{ab}^1\\\\K_{ba}^1&K_{bb}^1\\end{bmatrix}\\begin{Bmatrix}U_a\\\\U_b\\end{Bmatrix}\r\n\\end{equation}\r\n\r\nand\r\n\r\n\\begin{equation}\r\n\\begin{Bmatrix}-F_b({\\widehat n}^*)\\\\F_c\\end{Bmatrix}=\\begin{bmatrix}K_{bb}^2&K_{bc}^2\\\\K_{cb}^2&K_{cc}^2\\end{bmatrix}\\begin{Bmatrix}U_b\\\\U_c\\end{Bmatrix}\r\n\\end{equation}\r\n\r\nThe dependency of the contact forces along the common interface $S_b$ upon the normal outward vector $\\widehat{n}$ has been made explicit in the equations. In particular, this vector reads $\\widehat n$ for element $1$ and ${\\widehat n}^*$ for element $2$ and they satisfy:\r\n\r\n\\[\\widehat n = - {\\widehat n}^* . \\]\r\n\r\nUsing this relation between the normal vectors at the contact interface we can write for the nodal forces:\r\n\r\n\\[ \r\nF_b(\\widehat{n})+F_b({\\widehat{n} }^*)=0\r\n\\]\r\n\r\nor equivalently, since $F_b({\\widehat n}^*)=-F_b(\\widehat n)$,\r\n\r\n\\begin{equation}\r\nF_b(\\widehat n)-F_b(\\widehat n)=0.\r\n\\label{coupling}\r\n\\end{equation}\r\n\r\n\r\nThis condition is shown for completeness in \\cref{fig:coupled1}\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[width=12cm]{coupled1}\r\n\\caption{Equilibrated nodal forces along the contact interface $S_b$ of elements 1 and 2.}\r\n\\label{fig:coupled1}\r\n\\end{figure}\r\n\r\nwhere it is evident that the contact nodal forces along element interfaces are equal and opposite in accordance with Newton's third law. On the other hand, from displacement continuity we have the condition:\r\n\r\n\\begin{equation}\r\nU_b = U_b^1 = U_b^2.\r\n\\label{continuity}\r\n\\end{equation}\r\n\r\nUsing \\cref{coupling} and \\cref{continuity} in the equilibrium relationships yields the partial assemblage of the global stiffness matrix resulting after considering the interaction between elements $1$ and $2$:\r\n\r\n\\begin{equation}\r\n\\begin{bmatrix}K_{aa}^1&K_{ab}^1&0\\\\K_{ba}^1&K_{bb}^1+K_{bb}^2&K_{bc}^2\\\\0&K_{cb}^2&K_{cc}^2\\end{bmatrix}\\begin{Bmatrix}U_a\\\\U_b\\\\U_c\\end{Bmatrix}=\\begin{Bmatrix}F_a\\\\0\\\\F_c\\end{Bmatrix}.\r\n\\label{finaleq}\r\n\\end{equation}\r\n\r\nNote that in the right-hand side vector of the above assembled global equilibrium equations only contact forces have been included, and the contributions along the shared interface cancel. This process of adding elemental matrices through contact nodal forces is termed {\\bf element assembly} and it is symbolically written as:\r\n\r\n\\begin{equation}\\label{eq:assem}\r\n{K^G}=\\assem_{i=1}^{Numel} k^i\r\n\\end{equation}\r\n\r\nand where $K^G$ is the global coefficient matrix; $k^i$ is the local elemental matrix for the $i$-th element; $\\assem_{i=1}^{Numel}$ is the assembly operator and $i$ is an element index ranging between $1$ and the total number of elements $Numel$ that form the finite element model. Notice that the assembly operator is analogous to a summation operator commonly used in the representation of series with a finite or infinite number of terms; however, in the finite element algorithm the assembly operator contains information indicating the position of each single term from the element coefficient matrix within the global system.\r\n\r\n\\begin{tcolorbox}\r\n\r\nIn most finite element codes the assembly operator is an array of integer values indicating the position (row and column) of each coefficient of the stiffness matrix of a given element. 
In {\\bf SolidsPy} this operator is termed the {\\bf DME()} operator and it is computed in the assembly module {\\bf ASSEMUTIL}.\r\n\r\n\\end{tcolorbox}\r\n\r\n\r\n\\section{The assembly algorithm}\r\n\r\nIn this section we will discuss the fundamental algorithmic steps required for building the system of algebraic equations through the addition or assembly of elemental coefficient matrices as described by \\cref{eq:assem}. This process of assembly of the elemental coefficient matrices into the global system involves:\r\n\r\n\\begin{itemize}\r\n\\item (i) The identification of the active degrees of freedom (or equation numbers) assigned to each node in the mesh.\r\n\\item (ii) The identification of the relationship between the elemental degrees of freedom and the global degrees of freedom.\r\n\\item (iii) The computation of the coefficient matrix for each element in the model.\r\n\\end{itemize}\r\n\r\n\\begin{tcolorbox}\r\n\r\nIn the finite element jargon the word {\\bf elemental} commonly refers to variables or computations performed at the element level.\r\n\r\n\\end{tcolorbox}\r\n\r\n\r\nStep (i) is easily accomplished by assigning a boundary condition flag to the degrees of freedom existing at each node, indicating whether the degree of freedom is active, in which case it contributes an equation, or prescribed, in which case it contributes a specified displacement boundary condition. In our notation we use a $0$ value to indicate an active degree of freedom and a $-1$ value to indicate a prescribed degree of freedom. In the Python-based code {\\bf SolidsPy} this information is stored in a boundary conditions array termed $IBC$, with dimensions $nn \\times MDIM$, where $nn$ corresponds to the total number of nodal points and $MDIM$ represents the problem dimensionality. The $IBC$ array is first given to the program through an input data file and later translated from $0$s and $-1$s to equation numbers. Thus, during this process each boundary condition flag is read, equations are counted, and numbers are assigned accordingly. If the boundary condition flag is equal to $0$ the program assigns an equation number, while it retains a $-1$ value for a $-1$ flag. In summary, the $IBC$ array exists in two instances. In its first instance it just contains integer flags indicating active or restrained degrees of freedom at each node, while in a second instance it indicates the actual active equations existing at each node.\r\n\r\nIn parallel and also with data given through an input file, the nodes forming each element are stored in a {\\bf connectivity} array termed $IELCON$, of dimension $Numel \\times MxNNel$, where $Numel$ is the number of elements in the model and $MxNNel$ is the maximum number of nodes in a given element. Thus each row in the $IELCON$ array stores the nodal point data for one element. Each entry in the $IELCON$ array can now be directly translated into equation numbers using the processed version of the $IBC$ array. This process results in the discrete version of the assembly operator $\\assem_{i=1}^{Numel}$, also called the assembly list or the $DME$ operator in {\\bf SolidsPy}.\r\n\r\nIn the final step the code loops over the mesh one element at a time and assembles each elemental coefficient matrix into the global matrix using the information (equation numbers) stored in the $DME$ operator. The actual computation of the elemental matrix is conducted by an element-based subroutine, called $UEL$ in {\\bf SolidsPy}. 
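\r\nFor concreteness, the scatter-add operation performed for every element during assembly can be sketched in a few lines of Python. This is a minimal sketch following the conventions of this chapter (equation numbers stored in a {\\bf DME()} row, $-1$ flags for restrained degrees of freedom), and not the actual {\\bf SolidsPy} implementation; the helper {\\tt uel()} in the usage comment is assumed, not given:\r\n\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef assemble(KG, ke, dme_row):\r\n    # Scatter-add one elemental matrix k^i into the global\r\n    # matrix K^G, skipping restrained DOFs (flagged -1).\r\n    for i, gi in enumerate(dme_row):\r\n        if gi == -1:\r\n            continue\r\n        for j, gj in enumerate(dme_row):\r\n            if gj != -1:\r\n                KG[gi, gj] += ke[i, j]\r\n    return KG\r\n\r\n# Usage, assuming uel(e) returns the elemental matrix of\r\n# element e and neq is the number of active equations:\r\n# KG = np.zeros((neq, neq))\r\n# for e in range(numel):\r\n#     KG = assemble(KG, uel(e), DME[e])\r\n\\end{verbatim}\r\n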
These subroutines may be different for each element in the mesh according to different kinematic or material assumptions. The assembly process is further illustrated by the following sample problem.\r\n\r\n\\paragraph*{Example}\r\nFor the mesh shown in \\cref{fig:quad}, where the elements are numbered from left to right and bottom to top, write:\r\n\r\n\\begin{itemize}\r\n\\item (i) The boundary conditions array {\\bf IBC()} in its two instances.\r\n\\item (ii) The connectivities array for the mesh.\r\n\\item (iii) The assembly {\\bf DME()} operator.\r\n\\end{itemize}\r\n\r\n\\begin{figure}[h]\r\n\\centering\r\n\\includegraphics[width=8cm]{mesh2.pdf}\r\n\\caption{Mesh of 4 bi-linear finite elements.}\r\n\\label{fig:quad}\r\n\\end{figure}\r\n\r\n(i) In a first instance the boundary conditions array {\\bf IBC()} is filled with data assigned by the user in the input file. Since all the nodes in the bottom and top faces of the mesh are restrained in the horizontal and vertical directions, they are assigned the flag $-1$ in each column. Similarly, node 4 is restrained only in the horizontal direction, so it is assigned a $-1$ in the first position and a $0$ in the second position, while nodes 5 and 6 are completely free along both directions and are assigned values of $0$. The resulting array is given by:\r\n\r\n\\[IBC_1 = \\begin{bmatrix}\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & 0\\\\\r\n0 & 0\\\\\r\n0 & 0\\\\\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & -1\r\n\\end{bmatrix}\\]\r\n\r\nIn a second instance the boundary conditions array {\\bf IBC()} is modified by the program, which reads every value, retaining those corresponding to $-1$ and assigning sequential equation numbers to those corresponding to $0$. Accordingly, the first equation, identified as $U_0$, corresponds to the vertical displacement of nodal point 4, while the remaining 4 equations, associated with degrees of freedom $U_1 , U_2 , U_3 , U_4$, are the horizontal and vertical displacements of nodes 5 and 6. The final form of the array is given as: \r\n\r\n\\[IBC_2 = \\begin{bmatrix}\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & 0\\\\\r\n1 & 2\\\\\r\n3 & 4\\\\\r\n-1 & -1\\\\\r\n-1 & -1\\\\\r\n-1 & -1\r\n\\end{bmatrix}\\]\r\n\r\n\r\n(ii) The next step corresponds to the creation of the connectivities array {\\bf IELCON()} storing the nodal points defining each element. In this case each element is defined by a list of 4 nodal point numbers. Recall that for the computation of the elemental stiffness matrix each element in the physical mesh is mapped to the canonical element in the natural space. In the natural space the shape functions are associated with a fixed node numbering and element orientation. If one follows a counter-clockwise orientation the node numbering must follow the same counter-clockwise orientation in the physical space. However, the selection of the first node in the sequence is arbitrary and the mapping into the natural space will take care of the corresponding rotation. In this case a valid connectivities array is that given by:\r\n\r\n\r\n\\[IELCON = \\begin{bmatrix}\r\n0 &1 &4 &3\\\\\r\n1 &2 &5 &4\\\\\r\n3 &4 &7 &6\\\\\r\n4 &5 &8 &7\r\n\\end{bmatrix}\\]\r\n\r\n(iii) The final ingredient required for the assembly of the global system of equations is the so-called {\\bf DME()} operator. This is actually the same {\\bf IELCON()} array but translated into equation numbers assigned at each element, as sketched below. 
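\r\nAs an illustration (again a sketch, not the {\\bf SolidsPy} source), the translation can be coded in Python using the $IBC_2$ and {\\bf IELCON()} arrays of this example; running it reproduces the {\\bf DME()} operator given next:\r\n\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\n# IBC in its second instance and IELCON, as in this example.\r\nIBC = np.array([[-1, -1], [-1, -1], [-1, -1],\r\n                [-1,  0], [ 1,  2], [ 3,  4],\r\n                [-1, -1], [-1, -1], [-1, -1]])\r\nIELCON = np.array([[0, 1, 4, 3], [1, 2, 5, 4],\r\n                   [3, 4, 7, 6], [4, 5, 8, 7]])\r\n\r\ndef dme(IBC, IELCON, ndof=2):\r\n    # Replace each nodal identifier by the equation numbers\r\n    # (or -1 flags) assigned to its degrees of freedom.\r\n    numel, nnel = IELCON.shape\r\n    out = np.zeros((numel, ndof*nnel), dtype=int)\r\n    for e in range(numel):\r\n        for j, node in enumerate(IELCON[e]):\r\n            out[e, ndof*j:ndof*j + ndof] = IBC[node]\r\n    return out\r\n\r\nprint(dme(IBC, IELCON))\r\n\\end{verbatim}\r\n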
In {\\bf SolidsPy} this translation is performed by a subroutine that runs over the {\\bf IELCON()} array, reads the value of the nodal identifier and then extracts the assigned equation numbers from the boundary conditions array {\\bf IBC()}. In this example the assembly operator consistent with the definition of the connectivities array corresponds to:\r\n\r\n\\[DME = \\begin{bmatrix}\r\n-1 &-1 &-1 &-1 &1 &2 &-1 &0\\\\\r\n-1 &-1 &-1&-1 &{\\bf 3} &4 &1 &2\\\\\r\n-1 &0 &1 &2 &-1 &-1 &-1 &-1\\\\\r\n1 &2 &3 &4 &-1 &-1 &-1 &-1\r\n\\end{bmatrix}\\]\r\n\r\n\\paragraph*{A note on the assembly operator:}\r\nThe definition of the element connectivities and its subsequent translation into degrees of freedom is carried out according to the local or canonical element definition in the natural space (\\cref{fig:locdof}).\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[width=6cm]{localdof.pdf}\r\n\\caption{Local definition for a 4-noded element.}\r\n\\label{fig:locdof}\r\n\\end{figure}\r\n\r\nAs a result, the $(i,j)$ entry in the $DME$ array corresponds to the global equation number associated with the local degree of freedom $j$ of the $i$-th element, with both indices counted from zero as in the example above. For instance, the value of $3$ stored at position $(1,4)$ in the current {\\bf DME()} operator indicates that the global equation $3$ corresponds to the local equation $4$ (column index) in element $1$ (row index).\r\n\r\nThe assembly process is then conducted by identifying the relation between the entries in each row of the {\\bf DME()} operator and the list of local degrees of freedom for the reference element shown in \\cref{fig:locdof}. Accordingly, for element 1 it follows that the assembly of row $4$ of the local stiffness matrix proceeds as follows:\r\n\\begin{align*}\r\nK_{3,3}^G & \\leftarrow  K_{3,3}^G + k_{4,4}^1 \\\\\r\nK_{3,4}^G & \\leftarrow  K_{3,4}^G + k_{4,5}^1 \\\\\r\nK_{3,1}^G & \\leftarrow  K_{3,1}^G + k_{4,6}^1\r\n\\end{align*}\r\n\r\n\r\n\\section{Summary: The finite element algorithm}\r\nAt this point it becomes evident that the algorithm for the displacement-based finite element method reduces to the solution of a linear system of algebraic equations in the unknown nodal displacements $U^G$ resulting after assembling the contribution to the stiffness matrix $K^G$ and loads vector $RHS^G$ from all the elements in the mesh. This process can be summarized in the 4-step pseudo-code described in \\cref{algo:overall1}:\r\n\r\n\r\n\\begin{algorithm}[H]\r\n\\SetAlgoLined\r\n\\KwData{Finite element model}\r\n\\KwResult{Field function}\r\n\\BlankLine\r\nPREPROCESSING (Reads the model and computes {\\bf IBC()}, {\\bf IELCON()} and {\\bf DME()} arrays);\\\\\r\nASSEMBLY (Forms the system of equations $\\left[K^G\\right]\\left\\{U^G\\right\\}=\\left\\{RHS^G\\right\\}$);\\\\\r\nSOLUTION (Finds $U^G$);\\\\\r\nPOSTPROCESSING (Scatters the global solution to the elements and computes element forces, strains, and stresses);\\\\\r\n\\caption{Global steps involved in the finite element algorithm}\r\n\\label{algo:overall1}\r\n\\end{algorithm}\r\n\r\nIn the pre-processing stage the code reads the input file and forms the arrays {\\bf IBC()}, {\\bf IELCON()} and {\\bf DME()}, which are required for the assembly process. Once the global system of equations is assembled and solved, the values of the nodal displacements are distributed to the corresponding elements in a process commonly referred to as scattering. This process is, in a sense, the inverse of the assembly operation. 
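\r\nUnder the same conventions as before, a minimal sketch of this scattering step (again, not the actual {\\bf SolidsPy} code) reads:\r\n\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef scatter(UG, dme_row):\r\n    # Gather one element's nodal displacements from the global\r\n    # solution vector; restrained DOFs (-1) take the value 0.0.\r\n    u_e = np.zeros(len(dme_row))\r\n    for i, g in enumerate(dme_row):\r\n        if g != -1:\r\n            u_e[i] = UG[g]\r\n    return u_e\r\n\\end{verbatim}\r\n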
With the nodal displacements for each element at hand it is now possible to compute nodal forces together with strain and stress distributions inside the element. The assembly and solution process is given in \\cref{algo:overall2}. In this algorithm it must be noticed that the local stiffness matrix for each element $i$ is obtained by a call to the local element subroutine {\\bf UEL()}.\r\n\r\n\\begin{algorithm}[H]\r\n\\SetAlgoLined\r\n\\KwData{Finite element model}\r\n\\KwResult{Field function}\r\n\\BlankLine\r\nREAD {\\bf IBC()} and {\\bf IELCON()} arrays from input files ;\\\\\r\nCOMPUTE modified {\\bf IBC()} array ;\\\\\r\nCOMPUTE {\\bf DME()} operator (using {\\bf IBC()} and {\\bf IELCON()} arrays);\\\\\r\nINITIALIZE Global stiffness matrix $K^G \\leftarrow 0.0$;\\\\\r\n(Start assembly)\r\n\\BlankLine\r\n\\For{$i \\leftarrow 1$ to $Numel$}{\r\n    Call UEL(element parameters;$k^i$) ;\\\\\r\n    $K^G \\leftarrow K^G+k^i$ (Assemble each $k^i$ into $K^G$ according to the {\\bf DME()} operator);\\\\\r\n    $RHS^G \\leftarrow RHS^G+rhs^i$ (Assemble each $rhs^i$ into $RHS^G$);\\\\\t\r\n\t\\BlankLine\r\n\t}\r\n\\BlankLine\r\nSOLVE ${K^G}{U^G} = RH{S^G}$;\\\\\r\nScatter the nodal displacement solution to the elements\r\n\\caption{Details of the assembly and system solution process}\r\n\\label{algo:overall2}\r\n\\end{algorithm}\r\n\r\nThe details of the elemental subroutine {\\bf UEL()} are given in \\cref{algo:uel}. This process basically involves looping through the Gauss points and computing the required terms. The computation of the strain-displacement matrix $B(r_j , s_j )$ and Jacobian determinant $\\left\\|J_j\\right\\|$ at each Gauss point is conducted by additional subroutines.\r\n\r\n\\newpage\r\n\\begin{algorithm}[H]\r\n\\SetAlgoLined\r\n\\KwData{Material parameters, nodal coordinates, applied loads}\r\n\\KwResult{$k^l$ and $rhs^l$}\r\n\\BlankLine\r\nINITIALIZE $k^l \\leftarrow 0.0$ ;\\\\\r\nCOMPUTE constitutive tensor $C_{ijkl}$ ;\\\\\r\nFORM Gauss quadrature arrays $w()$ and $r() , s()$;\\\\\r\n(Start loop through the Gauss points)\r\n\\BlankLine\r\n\\For{$j \\leftarrow 1$ to $Ngpts$}{\r\n    RETRIEVE $r_j$ , $s_j$ , $w_j$   ;\\\\ \r\n    COMPUTE $\\left\\|J_j\\right\\|$, $B(r_j , s_j )$ ;\\\\\r\n    $k^l \\leftarrow k^l+B^T C B \\left\\|J_j\\right\\| w_j$ ;\\\\\r\n\t\\BlankLine\r\n\t}\r\n\\BlankLine\r\n\\caption{Details of the elemental {\\bf UEL} subroutine}\r\n\\label{algo:uel}\r\n\\end{algorithm}\r\n\r\n\r\n%\\section{Sparse assembly}\r\n%In finite elements it is common to have stiffness and mass matrices that are sparse, i.e., matrices in which most of the elements are zero.\r\n%\r\n%\r\n%For instance, in a regular mesh formed with bilinear quadrilaterals the number of nonzero entries is given by the expression\r\n%\\[\\text{storage} = 9 n_x n_y - 9 n_x - 3 n_y + 4\\, ,\\]\r\n%where $n_x$ is the number of nodes in the $x$ coordinate and $n_y$ is the number of nodes in the $y$ coordinate. 
Figure \\ref{fig:sparse_storage} presents the needed storage for a mesh with $n_x=n_y$ for different sizes.\r\n%\\begin{figure}[H]\r\n%\\centering\r\n%\\includegraphics[width=6 in]{sparse_storage.pdf}\r\n%\\caption{Nonzero entries in a sparse matrix for a structured mesh of bilinear elements.}\r\n%\\label{fig:sparse_storage}\r\n%\\end{figure}\r\n\r\n\r\n\r\n\\paragraph*{Proposed problems}\r\n\\begin{enumerate}\r\n\r\n\\item \\label{punto01} In the bi-linear element shown in \\cref{fig:onemesh}, the nodal displacements resulting from a finite element solution are indicated by the blue arrows. For the material parameters given in the figure, find the nodal forces consistent with the element stresses. Assume plane strain behaviour.\r\n\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[height=6cm]{post.png} \r\n\\caption{One-element mesh for problem 1.}\r\n\\label{fig:onemesh}\r\n\\end{figure}\r\n\r\n\r\n\\item \\label{punto02} For the mesh shown in the figure, with internal surfaces between elements 1-3 and 3-2 labelled $S_b$ and $S_c$ respectively, write the form of the global stiffness matrix resulting from the physical assembly. Explicitly formulate the force and displacement compatibility equations along both boundaries.\r\n\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[height=6cm]{long.png} \r\n\\caption{Long mesh for problem 2.}\r\n\\label{fig:longmesh}\r\n\\end{figure}\r\n\r\n\\item \\label{punto03} For the mesh shown in the figure propose different node numbering schemes and identify the resulting changes in the size of the half-band in the stiffness matrix. Assume that each element stiffness matrix is full of $1$s.\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[height=6cm]{halfband.png} \r\n\\caption{Mesh for problem 3.}\r\n\\label{fig:halfmesh}\r\n\\end{figure}\r\n\r\n\r\n\r\n\\item \\label{punto04} A finite element model is formed by an assemblage of three elements of the type shown in \\cref{fig:cercha}\r\n\r\n\r\n\\begin{figure}[H]\r\n\\centering\r\n\\includegraphics[height=3cm]{truss.png} \r\n\\caption{Fundamental element}\r\n\\label{fig:cercha}\r\n\\end{figure}\r\n\r\nand with a stiffness matrix (in the global coordinate system) given by:\r\n\r\n\\[\r\nk = \\begin{bmatrix}1&1&1&1\\\\1&1&1&1\\\\1&1&1&1\\\\1&1&1&1\r\n\\end{bmatrix}.\r\n\\]\r\n\r\nOn the other hand, it is known that the global stiffness matrix has been assembled using the following operator\r\n\r\n\\[\r\nDME = \\begin{bmatrix}-1&-1&2&3\\\\-1&-1&1&-1\\\\1&-1&2&3\\end{bmatrix}\r\n\\]\r\n\r\nin which a value of $-1$ refers to a restrained degree of freedom, whose displacement is thus equal to zero. 
It is required to:\r\n\r\n\\begin{itemize}\r\n\\item Draw the complete model including its displacement boundary conditions.\r\n\\item Indicate the order of the global stiffness matrix for the complete structure.\r\n\\item Find the global stiffness matrix for the structure.\r\n\\end{itemize}\r\n\r\n\r\n\r\n\r\n\r\n%%%%%\r\n\\item \\label{punto05} Using SolidsPy compute the stress concentration factor defined according to:\r\n\r\n\\[SCF = \\frac{\\max\\, \\sigma_{yy}}{\\text{mean}\\, \\sigma_{yy}}\\, \\]\r\n\r\nwhich is introduced after drilling a hole in the following plates:\r\n\r\n\\begin{figure}[H]\r\n\t\\centering\r\n\t\\includegraphics[height=5 cm]{tema1.pdf}\r\n\t\\includegraphics[height=5 cm]{tema2.pdf}\r\n\t\\caption{Plates with holes.}\r\n\t\\label{fig:tema1}\r\n\\end{figure}\r\n\r\nIn both cases the material parameters are defined in \\cref{tab:comun},\r\n\r\n\\begin{table}[H]\r\n    \\centering\r\n    \\begin{tabular}{cc}\r\n        \\hline\r\n        Parameter & Value \\\\\r\n        \\hline\r\n        Young's modulus (Pa)    & $E =200 \\times 10^9$ \\\\\r\n        Poisson's ratio    & $\\nu = 0.285$   \\\\\r\n        Load (N/m)   & $S = 10^8$  \\\\\r\n        \\hline\r\n    \\end{tabular}\r\n    \\caption{Material parameters for the analysis defined in problem 5.}\r\n    \\label{tab:comun}\r\n\\end{table}\r\n\r\nwhile the geometric parameters are those of \\cref{tab:tema1}.\r\n\r\n\\begin{table}[H]\r\n\t\\centering\r\n\t\\begin{tabular}{cccc}\r\n\t\t\\hline\r\n\t\t$a$ & $b$ & $h$ & $w$ \\\\\r\n\t\t\\hline\r\n\t\t20      & 40      & 50      & 100  \\\\\r\n\t\t\\hline\r\n\t\\end{tabular}\r\n\t\\caption{Geometric parameters for the analysis defined in problem 5.}\r\n\t\\label{tab:tema1}\r\n\\end{table}\r\n\r\n\r\n\r\n\r\n%\\inputminted[]{python}{src/engine.py}\r\n\r\n\r\n\r\n\r\n%En este caso el factor de concentraci\u00f3n de esfuerzos ($SCF$) se calcula como:\r\n%\r\n%\r\n%\\subsection*{Tema 2}\r\n%\\begin{figure}[H]\r\n%\t\\centering\r\n%\t\\includegraphics[height=5 cm]{tema2.pdf}\r\n%\t\\caption{Geometr\u00edas y condiciones del tema 2.}\r\n%\t\\label{fig:tema2}\r\n%\\end{figure}\r\n%\\begin{table}[H]\r\n%\t\\centering\r\n%\t\\begin{tabular}{ccccc}\r\n%\t\t\\hline\r\n%\t\tEquipo & $a$ & $b$ & $h$ & $w$ \\\\\r\n%\t\t\\hline\r\n%        3    & 40      & 40      & 50      & 100  \\\\\r\n%        10   & 20      & 40      & 50      & 200  \\\\\r\n%        14   & 10      & 20      & 50      & 100  \\\\\r\n%        16   & 20      & 40      & 50      & 100  \\\\\r\n%        21   & 10      & 40      & 100     & 200  \\\\\r\n%        24   & 10      & 40      & 100     & 300  \\\\\r\n%        28   & 10      & 30      & 100     & 200  \\\\\r\n%\t\t\\hline\r\n%\t\\end{tabular}\r\n%\t\\caption{Equipos y par\u00e1metros para el tema 2. 
Considere las simetr\u00edas del problema para que est\u00e9 cinem\u00e1ticamente determinado.}\r\n%\t\\label{tab:tema2}\r\n%\\end{table}\r\n%\r\n%En este caso el factor de concentraci\u00f3n de esfuerzos ($SCF$) se calcula como:\r\n%\\[SCF = \\frac{\\max\\, \\sigma_{yy}}{\\text{promedio}\\, \\sigma_{yy}}\\, .\\]\r\n%\r\n%\\subsection*{Tema 3}\r\n%\\begin{figure}[H]\r\n%\t\\centering\r\n%\t\\includegraphics[height=4 cm]{tema3.pdf}\r\n%\t\\caption{Geometr\u00edas y condiciones del tema 3.}\r\n%\t\\label{fig:tema3}\r\n%\\end{figure}\r\n%\\begin{table}[H]\r\n%\t\\centering\r\n%\t\\begin{tabular}{ccccc}\r\n%\t\t\\hline\r\n%\t\tEquipo & $a$ & $b$ & $h$ & $w$ \\\\\r\n%\t\t\\hline\r\n%\t\t2   & 20      & 10      & 50      & 200  \\\\\r\n%\t\t6   & 10      & 10      & 50      & 200  \\\\\r\n%\t\t8   & 20      & 20      & 50      & 200  \\\\\r\n%\t\t13  & 10      & 20      & 50      & 200  \\\\\r\n%\t\t19  & 20      & 30      & 60      & 240  \\\\\r\n%\t\t25  & 20      & 40      & 60      & 240  \\\\\r\n%\t\t27  & 40      & 20      & 60      & 240  \\\\\r\n%\t\t\\hline\r\n%\t\\end{tabular}\r\n%\t\\caption{Equipos y par\u00e1metros para el tema 3. Longitudes en mil\u00edmetros.}\r\n%\t\\label{tab:tema3}\r\n%\\end{table}\r\n%\r\n%En este caso el factor de concentraci\u00f3n de esfuerzos ($SCF$) se calcula como:\r\n%\\[SCF = \\frac{\\max|\\sigma_{xx}|}{\\text{promedio}\\, |\\sigma_{xx}|}\\, .\\]\r\n%\r\n%\\subsection*{Tema 4}\r\n%\\begin{figure}[H]\r\n%\t\\centering\r\n%\t\\includegraphics[height=4 cm]{tema4.pdf}\r\n%\t\\caption{Geometr\u00edas y condiciones del tema 4.}\r\n%\t\\label{fig:tema4}\r\n%\\end{figure}\r\n%\\begin{table}[H]\r\n%\t\\centering\r\n%\t\\begin{tabular}{ccccc}\r\n%\t\t\\hline\r\n%\t\tEquipo & $a$ & $b$ & $h$ & $w$ \\\\\r\n%        \\hline\r\n%        1   & 20      & 10      & 50      & 100  \\\\\r\n%\t\t7   & 20      & 10      & 50      & 200  \\\\\r\n%\t\t9   & 20      & 10      & 50      & 300  \\\\\r\n%\t\t15  & 10      & 10      & 50      & 100  \\\\\r\n%\t\t18  & 10      & 10      & 50      & 200  \\\\\r\n%\t\t20  & 10      & 10      & 50      & 300  \\\\\r\n%\t\t26  & 20      & 20      & 60      & 200  \\\\\r\n%\t\t\\hline\r\n%\t\\end{tabular}\r\n%\t\\caption{Equipos y par\u00e1metros para el tema 4. 
Longitudes en mil\u00edmetros.}\r\n%\t\\label{tab:tema4}\r\n%\\end{table}\r\n%\r\n%En este caso el factor de concentraci\u00f3n de esfuerzos ($SCF$) se calcula como:\r\n%\\[SCF = \\frac{\\max|\\sigma_{xx}|}{\\text{promedio}\\, |\\sigma_{xx}|}\\, .\\]\r\n\r\n\r\n\r\n%%%%%\r\n\r\n\r\n\r\n\\end{enumerate}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "meta": {"hexsha": "cb8ad65da1095763248d279b28fb79bff0414edd", "size": 37784, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "course_notes/src/computational.tex", "max_stars_repo_name": "AppliedMechanics-EAFIT/Introductory-Finite-Elements", "max_stars_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2019-11-26T13:28:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T17:57:11.000Z", "max_issues_repo_path": "course_notes/src/computational.tex", "max_issues_repo_name": "jgomezc1/Introductory-Finite-Elements", "max_issues_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "course_notes/src/computational.tex", "max_forks_repo_name": "jgomezc1/Introductory-Finite-Elements", "max_forks_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2020-02-17T07:24:59.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T07:54:28.000Z", "avg_line_length": 50.6487935657, "max_line_length": 1373, "alphanum_fraction": 0.7230309126, "num_tokens": 10559, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.591253642804355}}
{"text": "\\subsection{Final Score}\n\nAfter retrieving the sets of \\emph{PageRank} and \\emph{TF-IDF} scores \nfor all users that discuss any query term, they are normalized to zero-average \nand unitary variance so that they can be mixed together by means of a parameter, $\\alpha$. \nFinally both terms are combined, and the resulting score can be seen in Equation \\ref{eq:finalscore}, where $s_t$ is the TF-IDF score for user $u$ and query $q$ and $s_p$ the PageRank score of $u$.\n\n\\begin{equation}\ns(u,q) = \\alpha * s_t(u,q) + (1 - \\alpha) * s_p(u)\n\\label{eq:finalscore}\n\\end{equation}", "meta": {"hexsha": "ebce51068412945b058062bdd208a56a2089a45f", "size": 572, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/method_finalscore.tex", "max_stars_repo_name": "helderm/stalkr", "max_stars_repo_head_hexsha": "4d98ef673f0c34992a3b05065f9c6f9dd8fad461", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/method_finalscore.tex", "max_issues_repo_name": "helderm/stalkr", "max_issues_repo_head_hexsha": "4d98ef673f0c34992a3b05065f9c6f9dd8fad461", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/method_finalscore.tex", "max_forks_repo_name": "helderm/stalkr", "max_forks_repo_head_hexsha": "4d98ef673f0c34992a3b05065f9c6f9dd8fad461", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.0, "max_line_length": 197, "alphanum_fraction": 0.7307692308, "num_tokens": 169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467675095294, "lm_q2_score": 0.6723317057447908, "lm_q1q2_score": 0.5910782457997009}}
{"text": "\\section{Constants}\n\nWe can prove that all the above constants belong to the \ninterpretations of their types.\n%\n\\begin{theorem}{[Constants]}\\label{thm:constant}\n$c \\in \\interp{\\constty{c}}$.\n\\end{theorem} \n%\nThe Theorem trivially holds for more of the constants.\nFor example, \n\\newcommand\\eqtype{\\ensuremath{\n\t\\tfun{x}{b^\\lfinite}{\n\t\\tfun{y}{b^\\lfinite}{\n\t\\tlref{v}{\\tbool}{\\lfinite}{v \\Leftrightarrow x = y}\n\t}}\n}}\n$$= \\in \\interp{\\eqtype}$$\nas $\\forall e_1, e_2, \\evals{e_1}{d_1}, \\evals{e_2}{d_2}\\Rightarrow \n\t\t\t\\evals{(e_1=e_2 \\Leftrightarrow e_1= e_2)}{\\etrue}\n\t\t\t\\land \\exists d. \\evals{(e_1 = e_2)}{d}$\n\nHere we prove that for any type $\\tau$, \n\\efix{\\tau} and \\etfix{\\tau} satisfy Theorem~\\ref{thm:constant}.\n\nGiven the families of constants:\n\\begin{align*}\n\\ceval{\\etfix{\\tau}}{f} & \\doteq \\efun{n}{}{\\efun{f}{}{\\etfixn{\\tau}{}{n}}}\\\\ \n%\\ceval{\\etfixf{\\tau}{f}{??}}{n} &\\doteq\n%f\\ n\\ (\\etfixn{\\tau}{f}{n}\\ f) \\\\\n\\ceval{\\etfixn{\\tau}{}{n}}{m} &\\doteq\n\\efun{f}{}{\nf\\ m\\ (\\etfixn{\\tau}{f}{m}\\ f)} \\\\\n\\end{align*}\n%\nand their types\n%\n\\begin{align*}\n%\\decr{\\tau}{n} & \\doteq \\decrtypefull{\\tau}{n}\\\\\n\\constty{\\etfix{\\tau}} &\\doteq \n\t(\\decrty{\\tau})\n\t \\rightarrow\n\t\\tfun{m}{\\tnat^\\lfinite}{\\tau\\sub{x}{m}}\\\\ \n%\\constty{\\etfixf{\\tau}{f}{n}} &\\doteq \n%\t \\etfixfty{\\tau}{f}{n}\\\\ \n\\constty{\\etfixn{\\tau}{f}{n}} &\\doteq  \n\t(\\decrty{\\tau})\n\t\\rightarrow\\adecrty{\\tau}{n}\\\\ \n%\\constty{\\etfixfn{\\tau}{f}{n}} &\\doteq \n%\t \\etfixfty{\\tau}{f}{n}\\\\ \n\\end{align*}\nwe prove that the constants belong to the \nmeanings of their types:\n%\n\\begin{theorem}{[Terminating Fixpoint]}\\label{thm:fixpoint}\n\\begin{enumerate}\n\\item\\label{nfix}$\\forall n. \\etfixn{\\tau}{f}{n} \\in \\constty{\\etfixn{\\tau}{f}{n}}$\n\\item\\label{tfix}$\\etfix{\\tau} \\in \\constty{\\etfix{\\tau}}$\n\\item\\label{fix}$\\efix{\\tau} \\in \\constty{\\efix{\\tau}}$, if the result of $\\tau$ is a \\Div type.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\\begin{itemize}\n\\item \\ref{nfix}.\nWe prove that for all\n%$f \\in \\interp{\\decrty{\\tau}}$\nappropriate $f$\nand $ m \\in \\interp{\\tref{v}{\\tnat}{\\lfinite}{v < n}}$,\n$e \\equiv \\etfixn{\\tau}{f}{n}\\ f \\ m \\in \\interp{\\tau\\sub{x}{m}}$\n% \nby induction on $n$.\n\nFor $n=0$, \nit is trivial, as \nthere is no $m$ such that\n$m \\in \\interp{\\tref{v}{\\tnat}{\\lfinite}{v < 0}}$.\n\nFor the inductive step, $e$ reduces to \n$$\n\\etfixn{\\tau}{f}{n}\\ f\\ m \n\\hookrightarrow\n\\etfixfn{\\tau}{f}{n}\\ m \n\\hookrightarrow\nf\\ m\\ (\\etfixn{\\tau}{f}{m}\\ f)\\\\\n$$\nBy IH, since $m < n$,\n$\\etfixn{\\tau}{f}{m} \\in \\constty{\\etfixn{\\tau}{f}{m}}$, \nso $f$ receives the appropriate arguments, \nand returns the appropriate result that proves the theorem.\n%\n\\item \\ref{tfix}.\nWe prove that \nfor all appropriate $f$\n% \\\\$f \\in \\interp{\\decrty{\\tau}}$\nand     $ m \\in \\interp{\\tnat^\\lfinite}$,\n$\\etfix{\\tau}\\ f \\ m \\in \\interp{\\tau\\sub{x}{m}}$.\n%\nSince $m \\in \\interp{\\tref{v}{\\tnat}{\\lfinite}{v < m+1}}$\n$$\\etfixn{\\tau}{f}{m+1}\\ f \\ m \\in \\interp{\\tau\\sub{x}{m}}$$\n%\nBut operationally, \n$\\etfixn{\\tau}{f}{m+1}\\ f \\ m$\nand\n$\\etfix{\\tau}\\ f \\ m$\nbehave equivalently, which proves the theorem.\n\\item \\ref{fix}. 
The prove for \n$\\efix{\\tau} \\in \\constty{\\efix{\\tau}}$.\nis similar.\n%\nThe only difference is that for the base case\n\\efixn{\\tau}{0} should be defined to belong \ninto the interpretation of any type.\n%\nThus, it is defined as a diverging expression\nand the type of \\efix{\\tau} is constrainted\nto $\\tau$'s with potentially diverging result. \n%\nWith refinement types we prove that the basic\n\\etfixn{\\tau}{f}{0} operator\ncannot be called, so we omit \nthe definition of this basic case.\n\\end{itemize}\n\\end{proof}\n%\n", "meta": {"hexsha": "5235facfe8faa034b1e08b018e415a94dcd71622", "size": 3550, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/refinedhaskell/proofs/constants.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/refinedhaskell/proofs/constants.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/refinedhaskell/proofs/constants.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 28.4, "max_line_length": 96, "alphanum_fraction": 0.6371830986, "num_tokens": 1402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7799929104825006, "lm_q1q2_score": 0.5910742371977635}}
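The indexed fixpoint also has a direct executable reading: the index acts as fuel, and recursive calls are only ever made at strictly smaller indices, so every call chain terminates. The following sketch is a plain Python rendering of this idea; the names are illustrative, and the $k < m$ side condition, which the refinement type enforces statically, appears here only as a comment.

\begin{lstlisting}[language=Python]
# Fuel-indexed fixpoint: an executable reading of tfix (names are illustrative).
def tfix(f):
    """Return a function of m that unrolls f along a decreasing index."""
    def go(m):
        # 'go' plays the role of tfix_n: the recursive handle passed to f
        # may only be applied to indices k < m (the v < n refinement).
        return f(m, go)
    return go

# Example: factorial, where f receives the index m and a handle for k < m.
fac = tfix(lambda m, recur: 1 if m == 0 else m * recur(m - 1))
print(fac(5))  # 120
\end{lstlisting}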
{"text": "\n\n\\section{Group operations, revisited}\n\nIn this section we revise \\emph{Riordan group} concepts described in\n\\autoref{subsection:back:to:the:basics:riordan:group}, looking at them under\nthe light of $h$-characterization. In particular, we show how a Riordan arrays\nwritten using the proposed characterization, allows us to build new arrays\nrespect to group operation $\\cdot$ and to find their inverses.\n\n\\subsection{Inverting an array}\n\nLet $\\mathcal{R}\\left(d(t),h(t)\\right)$ be a Riordan array and let \n$\\mathcal{R}_{h(t)}\\left(g(h(t)),h(t)\\right)$ its $h$-characterization, for some\nfunction $g$. %in the variable $h(t)$. \n \nUsing rules for inversion in the Riordan group, \n\\marginpar{using $\\mathcal{R}_{h(t)}$ \npolymorphically as an array, remind \\autoref{par:h:characterization:is:an:array:polymorphism}}\nproceed as follow:\n\\begin{displaymath}\n    \\begin{split}\n        \\mathcal{R}_{h(t)}^{-1}\\left(\\frac{1}{g(h(\\hat{h}(t)))},\\hat{h}(t)\\right)&=\n        \\left[\\mathcal{R}^{-1}\\left(\\left.\\frac{1}{g(h(y))},y\\right) \\right| y = \\hat{h}(t) \\right]\\\\\n        &= \\mathcal{R}_{\\hat{h}(t)}^{-1}\\left(k(\\hat{h}(t)),\\hat{h}(t)\\right)\n    \\end{split}\n\\end{displaymath}\nwhere function $k$ is defined as $k(y)=\\frac{1}{g(h(y))}$, \ntherefore we got a new array $\\mathcal{T}_{\\hat{h}(t)}$ which is the $\\hat{h}$-characterization\nof $\\mathcal{R}_{h(t)}^{-1}$ and, as the same time, of $\\mathcal{R}^{-1}$.\n\\marginpar{again, $\\hat{h}(t)$ plays the role of a \\emph{variable},\nso don't be tempted to say $h(\\hat{h}(t))=t$ as in the normal course of things \\ldots}\n\\\\\\\\\nFor the sake of clarity we apply the previous derivation to build the inverse of $\\mathcal{F}$,\nthe Fibonacci array.  To set the stage we need function $h$, take it from the definition:\n\\begin{displaymath}\n    \\begin{split}\n        h(t)&=\\frac{1-\\sqrt{1-4t}}{2}\\\\\n    \\end{split}\n\\end{displaymath}\nwe need also function $g$, take it from the $h$-characterization $\\mathcal{F}_{h(t)}$:\n\\begin{displaymath}\n    \\left[g(y)=\\left.\\frac{1}{1-y+2y^3-y^4} \\right| y=h(t)\\right]\n\\end{displaymath}\nwe're ready to apply:\n\\begin{displaymath}\n    \\left[\\mathcal{F}^{-1}\\left.\\left(1-y-y^2,y\\right) \\right| y = \\hat{h}(t) \\right]=\n    \\mathcal{F}_{\\hat{h}(t)}^{-1}\\left(1-\\hat{h}(t)-\\hat{h}(t)^2,\\hat{h}(t)\\right)\n\\end{displaymath}\nwhere function $\\hat{h}$ is the compositional inverse of function $h$. Observe\nthat $1-y-y^2$ in the left array under variable constraint $y=\\hat{h}(t)$, is\nobtained by evaluating $\\frac{1}{g(h(y))}$, considering $y$ as a variable:\nusing this abstraction, function $\\hat{h}(t)$ cannot be used\n\\emph{functionally}, otherwise $g(h(\\hat{h}(k)))=g(k)$, where $k=h(t)$\naccording to constraint under $g$ is defined. If that would be the case, it\nyields a factorization in $h(t)$, not in $\\hat{h}(t)$, as we would like to\nhave.\n\nIf the explicit definition for $\\mathcal{F}^{-1}$ is desired, just plug $\\hat{h}(t)=t-t^2$:\n\\begin{displaymath}\n    \\left(\\mathcal{F}_{\\hat{h}(t)}^{-1}\\right)^{\\stackrel{\\hat{h}(t)}{\\rightarrow}} =\n        \\mathcal{F}^{-1}\\left(1-t+2t^3-t^4,t-t^2\\right)\n\\end{displaymath}\n\n\\subsection{Multiplying two arrays}\n\nFinally, let us tackle the product of two Riordan arrays. 
\nMultiplication, the action of the Riordan group, between two elements \n$\\mathcal{A}(d_{\\mathcal{A}}(t), h_{\\mathcal{A}}(t))$ and \n$\\mathcal{B}(d_{\\mathcal{B}}(t), h_{\\mathcal{B}}(t))$ belonging to the group, is defined as:\n\\begin{displaymath}\n    \\mathcal{A}\\mathcal{B} = \\left(d_{\\mathcal{A}}(t)d_{\\mathcal{B}}(h_{\\mathcal{A}}(t)),\n        h_{\\mathcal{B}}(h_{\\mathcal{A}}(t))\\right)\n\\end{displaymath}\nConsider the $h_{\\mathcal{A}}$-characterization of $\\mathcal{A}$:\n\\begin{displaymath}\n    \\mathcal{A}_{h_\\mathcal{A}(t)} \\left(\\gamma(h_{\\mathcal{A}}(t)), h_{\\mathcal{A}}(t)  \\right)\n\\end{displaymath}\nfor some function $\\gamma$, and the $h_{\\mathcal{B}}$-characterization of $\\mathcal{B}$:\n\\begin{displaymath}\n    \\mathcal{B}_{h_\\mathcal{B}(t)} \\left(\\eta(h_{\\mathcal{B}}(t)), h_{\\mathcal{B}}(t)  \\right)\n\\end{displaymath}\nfor some function $\\eta$, respectively. Apply now the multiplication rule:\n\\begin{displaymath}\n    \\begin{split}\n        & \\left(\\gamma(h_{\\mathcal{A}}(t)), h_{\\mathcal{A}}(t)  \\right)\n            \\left(\\eta(h_{\\mathcal{B}}(t)), h_{\\mathcal{B}}(t)  \\right) \\\\\n        &=\\left[\\left.\\left(\\gamma(y), y  \\right)\n            \\left(\\eta(h_{\\mathcal{B}}(t)), h_{\\mathcal{B}}(t)  \\right) \\right| y=h_{\\mathcal{A}}(t) \\right]\\\\\n        &=\\left[\\left.\\left(\\gamma(y)\\eta(h_{\\mathcal{B}}(y)), h_{\\mathcal{B}}(y)  \\right) \\right|\n             y=h_{\\mathcal{A}}(t) \\right]\\\\\n    \\end{split}\n\\end{displaymath}\nStop here, it's enough to build the $h_{\\mathcal{A}}$-characterization \nof the product $\\mathcal{A}\\mathcal{B}$:\n\\begin{displaymath}\n        \\left[\\left.\\left(\\Omega(y), h_{\\mathcal{B}}(y)  \\right) \\right| y=h_{\\mathcal{A}}(t) \\right] \n        =\\big(\\mathcal{A}\\mathcal{B}\\big)_{h_{\\mathcal{A}}(t)}\\left(\n            \\Omega(h_{\\mathcal{A}}(t)), h_{\\mathcal{B}}(h_{\\mathcal{A}}(t))  \\right)\n\\end{displaymath}\nwhere $\\left[\\left.\\Omega(y)= \\gamma(y)\\eta(h_{\\mathcal{B}}(y))\\right| y=h_{\\mathcal{A}}(t) \\right]$. 
\n\\\\\\\\\nNonetheless, \n\\marginpar{The following can be skipped on a first reading}\nfrom were we stopped, it's possible to factor $\\mathcal{A}\\mathcal{B}$ respect to $h_{\\mathcal{B}}(t)$ in\na similar way, this time abstracting over $h_{\\mathcal{B}}(y)$, remembering to track $y=h_{\\mathcal{A}}(t)$:\n\\begin{displaymath}\n    \\begin{split}\n        &\\left[\\left.\\left(\\gamma(y)\\eta(h_{\\mathcal{B}}(y)), h_{\\mathcal{B}}(y)  \\right) \\right|\n             y=h_{\\mathcal{A}}(t) \\right]\\\\\n        &=\\left.\\left[\\left.\\left(\\gamma(\\hat{h}_{\\mathcal{B}}(k))\\eta(k), k  \\right) \\right|\n             k=h_{\\mathcal{B}}(y) \\right]\\right|_{y=h_{\\mathcal{A}}(t)}\\\\\n        &=\\left.\\left[\\left.\\left(\\Theta(k), k  \\right) \\right| k=h_{\\mathcal{B}}(y) \\right]\\right|_{y=h_{\\mathcal{A}}(t)}\\\\\n        &=\\left.\\big(\\mathcal{A}\\mathcal{B}\\big)_{h_{\\mathcal{B}}(y)}\\left(\n            \\Theta(h_{\\mathcal{B}}(y)), h_{\\mathcal{B}}(y)  \\right)\\right|_{y=h_{\\mathcal{A}}(t)}\n    \\end{split}\n\\end{displaymath}\n\\marginpar{in order to stop tracking $y=h_{\\mathcal{A}}(t)$, we should write $(\\Theta(k), k)$,\n    where $k$ solves\n    $h_{\\mathcal{B}}(\\hat{h}_{\\mathcal{A}}(\\hat{h}_{\\mathcal{B}}(k)))=h_{\\mathcal{B}}(t)$,\n    but it doesn't supply an explicit variable substitution}\nwhere $\\left.\\left[\\left.\\Theta(k)=\\gamma(\\hat{h}_{\\mathcal{B}}(k))\\eta(k) \\right| \n    k=h_{\\mathcal{B}}(y) \\right]\\right|_{y=h_{\\mathcal{A}}(t)}$.\n\nObserve that for the latter factorization, it is necessary to compute function $\\hat{h}_{\\mathcal{B}}$, \nthe compositional inverse of function $h_{\\mathcal{B}}$.\n\\\\\\\\\nFor the sake of clarity we apply the previous derivation to the product $\\mathcal{P}\\mathcal{F}$, namely\nwe'll multiply Pascal and Fibonacci arrays, \\emph{in the given order}, providing \nboth $\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{P}}(t)}$ \nand $\\left.\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{F}}(y)}\\right|_{y=h_{\\mathcal{P}}(t)}$.\n\nRecall $\\mathcal{P}$ and $\\mathcal{F}$ $h$-characterizations, they are respectively:\n\\begin{displaymath}\n    \\begin{split}\n        &\\mathcal{P}_{h_{\\mathcal{P}}(t)}\\left( 1+h_{\\mathcal{P}}(t), h_{\\mathcal{P}}(t) \\right)\n            \\text{ where } h_{\\mathcal{P}}(t) = \\frac{t}{1-t}\\\\\n        &\\mathcal{F}_{h_{\\mathcal{F}}(t)}\\left( \\frac{1}{1-h_{\\mathcal{F}}(t)+\n            2h_{\\mathcal{F}}(t)^3-h_{\\mathcal{F}}(t)^4}, h_{\\mathcal{F}}(t) \\right)\n                \\text{ where } h_{\\mathcal{F}}(t) = \\frac{1-\\sqrt{1-4t}}{2}\\\\\n    \\end{split}\n\\end{displaymath}\nand recognize the following functions under variable constraint (in the following, $y$\nis a ``local'' variable, \\emph{it's not shared} among the two definitions):\n\\begin{displaymath}\n    \\begin{split}\n        &\\left.\\left[\\gamma(y) = 1+y \\right| y=h_{\\mathcal{P}}(t) \\right]\\\\\n        &\\left.\\left[\\eta(y) = \\frac{1}{1-y+2y^3-y^4} \\right| y=h_{\\mathcal{F}}(t) \\right]\\\\\n    \\end{split}\n\\end{displaymath}\n\nToward $\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{P}}(t)}$, compute $\\Omega$ function:\n\\begin{displaymath}\n    \\left.\\left[\\Omega(y)= \\gamma(y)\\eta(h_{\\mathcal{F}}(y))\\right| y=h_{\\mathcal{P}}(t) \\right]\n        = \\left.\\left[\\Omega(y)= \\frac{1+y}{1-y-y^2}\\right| y=h_{\\mathcal{P}}(t) \\right]\n\\end{displaymath}\n\\marginpar{$\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{P}}(t)}$}\ntherefore answer the first 
question:\n\\begin{displaymath}\n    \\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{P}}(t)} \\left(\\frac{1+\n        h_{\\mathcal{P}}(t)}{1-h_{\\mathcal{P}}(t)-h_{\\mathcal{P}}(t)^2}, \\frac{1-\\sqrt{1-4h_{\\mathcal{P}}(t)}}{2} \\right)\n\\end{displaymath}\n\n\\marginpar{$\\left.\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{F}}(y)}\\right|_{y=h_{\\mathcal{P}}(t)}$}\nToward $\\left.\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{F}}(y)}\\right|_{y=h_{\\mathcal{P}}(t)}$, compute $\\Theta$ function:\n\n\\begin{displaymath}\n    \\begin{split}\n        &\\left.\\left[\\left.\\Theta(k)=\\gamma(\\hat{h}_{\\mathcal{F}}(k))\\eta(k) \\right| k=h_{\\mathcal{F}}(y) \\right]\\right|_{y=h_{\\mathcal{P}}(t)}\\\\\n        &= \\left.\\left[\\left.\\Theta(k)=\\frac{1+k-k^2}{1-k+ 2k^3 - k^4} \\right| k=h_{\\mathcal{F}}(y) \\right]\\right|_{y=h_{\\mathcal{P}}(t)}\\\\\n    \\end{split}\n\\end{displaymath}\nwhere $\\hat{h}_{\\mathcal{F}}(y)=y-y^2$, therefore answer the second question:\n\\begin{displaymath}\n    \\left.\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{F}}(y)} \\left(\n        \\frac{1+h_{\\mathcal{F}}(y)-h_{\\mathcal{F}}(y)^2}{1-h_{\\mathcal{F}}(y)+ 2h_{\\mathcal{F}}(y)^3 - h_{\\mathcal{F}}(y)^4} ,\n        h_{\\mathcal{F}}(y) \\right)\\right|_{y=h_{\\mathcal{P}}(t)}\n\\end{displaymath}\n\n\\marginpar{check substituting functions $h_{\\mathcal{P}}$ and $h_{\\mathcal{F}}$}\nLittle check plugging in function $h_{\\mathcal{P}}(t)$:\n\\begin{displaymath}\n    \\left(\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{P}}(t)}\\right)^{\\stackrel{h_{\\mathcal{P}}(t)}{\\rightarrow}}\n        = \\big(\\mathcal{P}\\mathcal{F}\\big)\\left(\\frac{1-t}{1-3t+t^2}, \\frac{1}{2}-\\frac{1}{2}\\sqrt{\\frac{1-5t}{1-t}} \\right)\n\\end{displaymath}\non the other hand, plugging in function $h_{\\mathcal{F}}(y)$, where $y=h_{\\mathcal{P}}(t)$:\n\\begin{displaymath}\n    \\left.\\left(\\big(\\mathcal{P}\\mathcal{F}\\big)_{h_{\\mathcal{F}}(y)}\\right)^{\\stackrel{h_{\\mathcal{F}}(y)}{\\rightarrow}}\\right|_{y=h_{\\mathcal{P}}(t)}\n        = \\left.\\big(\\mathcal{P}\\mathcal{F}\\big)\\left(\\frac{1+y}{1-y-y^2}, \\frac{1-\\sqrt{1-4y}}{2} \\right)\\right|_{y=h_{\\mathcal{P}}(t)}\n\\end{displaymath}\n", "meta": {"hexsha": "db57b61e0d7f861a2206faccbdaa01493d857929", "size": 10431, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/h-characterization/group-operations.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.612565445, "max_line_length": 151, 
"alphanum_fraction": 0.6270731473, "num_tokens": 4031, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541067, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5910742337132896}}
{"text": "\\section{From regular expressions to \\eNFA{s}}\n\nWe left behind the regular expressions when we informally introduced\nthe transition diagrams for token recognition. Now let us show that\nregular expressions when used in lexers to specify tokens can be\nconverted to \\eNFA{s}, and therefore to DFAs. This proves that regular\nlanguages are recognisable languages. Actually, it is possible to\nprove that any \\eNFA can be converted to a regular expression denoting\nthe same language, but we will not do so. Therefore, keep in mind that\nthe regular languages are the same as the recognisable languages. In\nother words, the choice of using a regular expression or a finite\nautomaton is only a matter of convenience.\n\nThe construction we present here to build an \\eNFA from a regular\nexpression is called \\emph{Thompson's construction}. Let us first\nassociate an \\eNFA to the basic regular expressions.\n\\begin{itemize}\n\n  \\item For the expression \\(\\epsilon\\), construct the following NFA,\n    where~\\(i\\) and~\\(f\\) are new states:\n  \\begin{center}\n    \\includegraphics[bb=48 710 135 730]{thompson_epsilon}\n  \\end{center}\n\n  \\item For \\(a \\in \\Sigma\\), construct the following NFA, where~\\(i\\)\n    and~\\(f\\) are new states:\n  \\begin{center}\n    \\includegraphics[bb=48 710 135 730]{thompson_symbol}\n  \\end{center}\n\n\\end{itemize}\nNow let us associate NFAs to complex regular expressions. In the\nfollowing, let us assume that~\\(N(s)\\) and~\\(N(t)\\) are the NFAs for\nregular expressions \\(s\\)~and~\\(t\\).\n\\begin{itemize}\n\n  \\item For the regular expression \\(st\\), construct the following NFA\n    \\(N(st)\\), where no new state is created:\n\\begin{center}\n\\includegraphics[bb=65 660 295 714]{thompson_conc}\n\\end{center}\n  The final state of \\(N(s)\\) becomes a normal state, as well as the\n  initial state of \\(N(t)\\). This way only remains a unique initial\n  state~\\(i\\) and a unique final state~\\(f\\).\n\n  \\item For the regular expression \\(s\\) \\disj \\(t\\), construct the\n  following NFA \\(N(s \\, \\text{\\disj} \\, t)\\)\n\\begin{center} \n\\includegraphics[bb=65 590 272 715,scale=0.9]{thompson_disj}\n\\end{center}\nwhere \\(i\\) and \\(f\\) are new states. Initial and final\nstates of \\(N(s)\\) and \\(N(t)\\) become normal.\n\n  \\item For the regular expression \\(s\\)\\kleene, construct the following\n    NFA \\(N(s\\text{\\kleene})\\), where~\\(i\\) and~\\(f\\) are new\n    states:\n\\begin{center}\n\\includegraphics[bb=50 620 255 718]{thompson_kleene}\n\\end{center}\nNote that we added two \\(\\epsilon\\) transitions and that the initial\nand final states of \\(N(s)\\) become normal states.\n\n\\end{itemize}\nHow do we apply these simple rules when we have a complex regular\nexpression, having many level of nested parentheses and other\nconstructs? Actually, the abstract syntax tree of the regular\nexpression directs the application of the rules. If the syntax tree\nhas the shape shown in \\fig~\\vref{fig:re_ast_conc},\n\\begin{figure}[b]\n\\centering\n\\subfloat[\\(s \\cdot t\\)\\label{fig:re_ast_conc}]{\n  \\includegraphics{re_ast_conc}\n}\n\\qquad\n\\subfloat[\\(s \\,\\text{\\disj}\\, t\\)\\label{fig:re_ast_disj}]{\n\\includegraphics{re_ast_disj}\n}\n\\qquad\n\\subfloat[\\(s\\text{\\kleene}\\)\\label{fig:re_ast_kleene}]{\n\\includegraphics{re_ast_kleene}\n}\n\\caption{Three tree patterns for three regular expressions}\n\\end{figure}\nthen we construct first \\(N(s)\\), \\(N(t)\\) and finally \\(N(s \\cdot\nt)\\). 
If the syntax tree has the shape found in\n\\fig~\\vref{fig:re_ast_disj}, then we construct first \\(N(s)\\),\n\\(N(t)\\) and finally \\(N (s \\, \\text{\\disj} \\, t)\\). If the syntax\ntree has the shape shown in \\fig~\\vref{fig:re_ast_kleene}, then we\nconstruct first \\(N(s)\\) and finally \\(N(s\\text{\\kleene})\\). These\npattern matchings are applied first at the root of the abstract syntax\ntree of the regular expression.\n", "meta": {"hexsha": "a29cd864d4ef7d16ff139b205fbf5ed4077f2f8a", "size": 3731, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "re_to_enfa.tex", "max_stars_repo_name": "rinderknecht/Book", "max_stars_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "re_to_enfa.tex", "max_issues_repo_name": "rinderknecht/Book", "max_issues_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "re_to_enfa.tex", "max_forks_repo_name": "rinderknecht/Book", "max_forks_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5543478261, "max_line_length": 72, "alphanum_fraction": 0.7274189225, "num_tokens": 1089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.591074233322254}}
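To make the construction concrete, here is a compact Python sketch over a small regex AST. The class names and the transition-table layout are illustrative choices, and the concatenation case glues the two machines with an extra \(\epsilon\)-edge rather than merging states, a standard equivalent variant of the construction described above.

\begin{lstlisting}[language=Python]
# Thompson's construction over a small regex AST (a sketch; names are illustrative).
from dataclasses import dataclass
from itertools import count

EPS = None      # label for epsilon transitions
_ids = count()  # supply of fresh state numbers

@dataclass
class Sym:
    a: str      # a single symbol of the alphabet

@dataclass
class Concat:
    s: object
    t: object   # st

@dataclass
class Union:
    s: object
    t: object   # s | t

@dataclass
class Star:
    s: object   # s*

def thompson(re, trans):
    """Return (initial, final) states, appending (src, label, dst) to trans."""
    if isinstance(re, Sym):
        i, f = next(_ids), next(_ids)
        trans.append((i, re.a, f))
        return i, f
    if isinstance(re, Concat):
        i1, f1 = thompson(re.s, trans)
        i2, f2 = thompson(re.t, trans)
        trans.append((f1, EPS, i2))          # glue with an epsilon edge
        return i1, f2
    if isinstance(re, Union):
        i, f = next(_ids), next(_ids)
        i1, f1 = thompson(re.s, trans)
        i2, f2 = thompson(re.t, trans)
        trans += [(i, EPS, i1), (i, EPS, i2), (f1, EPS, f), (f2, EPS, f)]
        return i, f
    if isinstance(re, Star):
        i, f = next(_ids), next(_ids)
        i1, f1 = thompson(re.s, trans)
        trans += [(i, EPS, i1), (f1, EPS, f), (i, EPS, f), (f1, EPS, i1)]
        return i, f
    raise ValueError("unknown regular expression node")

# Example: the expression (a|b)*c
trans = []
start, accept = thompson(Concat(Star(Union(Sym('a'), Sym('b'))), Sym('c')), trans)
print(start, accept, len(trans))
\end{lstlisting}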
{"text": "% proposal\n%\n% PROPOSAL: A brief plan of intention, strategy and accomplishments (about \u00bd-1 page).\n%\n%----------------------------------------------------------------------------------------\n%\tPACKAGES AND OTHER DOCUMENT CONFIGURATIONS\n%----------------------------------------------------------------------------------------\n% none\n\n\\section{Proposal}\nIn a research study, a university medical center urology group was interested in the association between prostate-specific antigen (PSA) and a number of prognostic clinical measurements in men with advanced prostate cancer. Data were collected on 97 men who were about to undergo radical prostectomies. The data given has identifications numbers, and provides information on 8 other variables on each person. The 8 variables being: PSA Level, Cancer Volume, Weight, Age, Benign Prostatic Hyperplasia, Seminal Vesicle Invasion, Capsular Penetration, and Gleason Score. \\par\nWith this available data set, I will carry out a complete logistic regression analysis by first creating a binary response variable Y, called high-grade-cancer, by letting Y=1 if Gleason Score equals 8, and Y=0 otherwise (i.e., if Gleason Score equals 6 or 7). Thus, the response of interest is high-grade-cancer (Y), and the pool of predictors include those previously mentioned. \\par\nMy analysis will consider transformations of predictors, the inclusion of second-order predictors, analysis of residuals and influential observations, model selection, goodness of fit evaluation, and the development of an ROC curve. Additionally, I will discuss the determination of a prediction rule for determining whether the grade of disease is predicted to be high grade or not, model validation, and finally assess the strengths and weaknesses of my final model. \\\\\n", "meta": {"hexsha": "cb92544b5b4919a012d29263db81bf67b94b0e3d", "size": 1784, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/sections/proposal.tex", "max_stars_repo_name": "josiwala/prostate-cancer", "max_stars_repo_head_hexsha": "4920f3f3066bac5ceab241f724ff1cda8eda559b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/sections/proposal.tex", "max_issues_repo_name": "josiwala/prostate-cancer", "max_issues_repo_head_hexsha": "4920f3f3066bac5ceab241f724ff1cda8eda559b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/sections/proposal.tex", "max_forks_repo_name": "josiwala/prostate-cancer", "max_forks_repo_head_hexsha": "4920f3f3066bac5ceab241f724ff1cda8eda559b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 127.4285714286, "max_line_length": 572, "alphanum_fraction": 0.725896861, "num_tokens": 346, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7577943712746406, "lm_q1q2_score": 0.5910742294467443}}
{"text": "\n\\section{The \\findii algorithm---reuse of specification elements}\n\\Label{sec:findii}\n\nIn this section we specify \\find in a slightly different way.\nOur approach is motivated by a considerable number of closely related \\acsl formulas\nin the contract \\specref{find} and the implementation \\implref{find}.\n\n\\begin{lstlisting}[style=acsl-block]\n\n    \\exists integer i; 0 <= i < n        &&   a[i] == v;\n\n    \\forall integer i; 0 <= i < \\result  ==>  a[i] != v;\n\n    \\forall integer i; 0 <= i < n        ==>  a[i] != v;\n\n    \\forall integer k; 0 <= k < i        ==>  a[k] != v;\n\\end{lstlisting}\n\nNote that the first formula is the negation of the third one.\n\n\\subsection{The predicates \\SomeEqual and \\NoneEqual}\n\nIn order to be more explicit about the commonalities of these formulas\nwe define a predicate, called \n\\logicref{SomeEqual},\nwhich describes the situation that there is a valid index \\inl{i} \nwhere~\\inl{a[i]} equals~\\inl{v}.\n\n\\input{Listings/SomeNone.acsl.tex}\n\nWe first remark that the \\SomeEqual, its negation \\NoneEqual\nand the lemmas \\NotSomeEqualNoneEqual and \\NoneEqualNotSomeEqual are encapsulated\nin the \\emph{axiomatic block} \n\\logicref{SomeNone}.\nThis is a \\emph{feeble} attempt to establish some modularization for the various predicates,\nlogic functions and lemmas.\nWe say \\emph{feeble} because axiomatic blocks are, in contrast to \\acsl \\inl{module}s,\n\\emph{not} name spaces.\n\\acsl modules, however, are not yet implemented by \\framac.\n\nWe also remark that both predicates come in overloaded versions.\nThe first of theses versions is a definition for array sections while the\nsecond definition is for the case of complete arrays.\n\nNote that we have provided a label, viz.\\ \\inl{A}, to the\npredicate \\SomeEqual.\nIts purposes to express that the evaluation of the predicate depends on a memory state,\nviz.\\ the contents of \\inl{a[0..n-1]}.\nIn general, we have to write\n\n\\begin{lstlisting}[style=acsl-block]\n\n    \\exists integer i; 0 <= i < n && \\at(a[i],A) == v;\n\\end{lstlisting}\n\nin order to express that we refer to the value \\inl{a[i]} in\nthe program state~\\inl{A}.\nHowever, \\acsl allows to abbreviate \\inl{\\\\at(a[i],A)} by \\inl{a[i]} if, as in\n\\SomeEqual or \\NoneEqual, the label~\\inl{A} is the only available label.\nIn particular, we have omitted the label in the overloaded versions for complete arrays.\n\n%\\clearpage\n\n\\subsection{Formal specification of \\findii}\n\nWith the predicates \\logicref{SomeEqual}\nand \\logicref{NoneEqual}\nwe are able to encapsulate all uses of the universal and existential \nquantifiers in both the specification and implementation of \\findii.\n\nAs a result, the revised contract \\specref{findii} is more concise\nthan that of \\specref{find}.\n%\nIn particular, it can be seen immediately that the conditions in the\nassumes clauses\nof the two behaviors \\inl{some} and \\inl{none} are mutually\nexclusive since\none is the literal negation of the other.\nMoreover, the requirement that \\find returns the smallest index can\nalso be expressed\nusing the \\logicref{NoneEqual} predicate, as depicted with the last postcondition of\nbehavior \\inl{some}.\n\n\\input{Listings/find2.h.tex}\n\nWe also enriched the specification of \\find by user-defined names\n(sometimes called \\emph{labels}, too, the distinction to program state identifiers\nbeing obvious)  to refer to\nthe \\inl{requires} and \\inl{ensures} clauses.\nWe highly recommend this practice in 
particular for more complex annotations.\nFor example, \\framac can be instructed to verify only clauses with a\ngiven name.\n\n\\clearpage\n\n\\subsection{Implementation of \\findii}\n\nThe predicate \\NoneEqual is also used in the loop annotation inside\nthe implementation of \\implref{findii}.\nNote that, as in the case of the specification, we use labels to name individual annotations.\n\n\\input{Listings/find2.c.tex}\n\n%\\clearpage\n\n", "meta": {"hexsha": "7a824ab98bae4de9f3f3165c30a5edc8bc444f59", "size": 3801, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/nonmutating/find2.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/nonmutating/find2.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/nonmutating/find2.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 35.523364486, "max_line_length": 93, "alphanum_fraction": 0.7524335701, "num_tokens": 1011, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929053683038, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5910742290557085}}
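Since the predicate definitions are included from external listing files, the following sketch shows how such an axiomatic block is typically written in \acsl; the exact formulation shipped in \logicref{SomeNone} may differ.

\begin{lstlisting}[style=acsl-block]
/* a sketch of the axiomatic block; the shipped listing may differ */
axiomatic SomeNone
{
  predicate SomeEqual{A}(value_type* a, integer n, value_type v) =
    \exists integer i; 0 <= i < n && a[i] == v;

  predicate NoneEqual{A}(value_type* a, integer n, value_type v) =
    \forall integer i; 0 <= i < n ==> a[i] != v;

  lemma NotSomeEqualNoneEqual{A}:
    \forall value_type *a, value_type v, integer n;
      !SomeEqual(a, n, v) ==> NoneEqual(a, n, v);
}
\end{lstlisting}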
{"text": "\\chapter{Pairwise Graph Convolutional Networks for Interface Prediction}\n\\label{chap:methods}\n\nThe methods presented in this thesis were motivated by a desire to exploit the local structure around a residue when performing interface prediction.\nThe biological reasoning for this is that a residue's neighborhood influences its propensity to participate in an interface.\nIt was noted in Chapter \\ref{chap:neuralnetworks} that convolutional neural networks are one way of detecting features in a local neighborhood, but they are limited to regular grids. \nUnfortunately, proteins cannot naturally be represented as a regular grid, so convolution must be developed for a more natural representation: graphs.\n\n\n\\section{Proteins As Graphs}\n\nAn undirected, unweighted graph $G$ consists of a set of vertices, $V=\\{v_1, v_2, ..., v_n\\}$, and a set of eges, $E=\\{e_1, e_2, ..., e_m\\}$ where each edge is incident to two vertices and there is at most one edge between two vertices.\nOne way of representing proteins as graphs is to let each vertex represent a residue in the protein, and each edge represent the relationship between two residues.\nThus any information pertaining to a particular residue can be associated with the relevant vertex in the form of a feature vector.\nThe features used in this work are drawn from features used in prior interface prediction work \\cite{minhas2014}.\nLikewise, any information about the relationship between two residues can be associated with the relevant edge.\nThe edge features used here describe the distance between and relative orientation of two residues.\nThese edge features are defined between any two residues in the protein, so the graph is complete. \nA detailed explanation of each feature is contained in Appendix \\ref{appendix:features}.\n\nThis representation is an abstraction of the original protein to a well studied mathematical object, with two notable facts.\nFirst, the graph itself does not rely on a coordinate system, as is the case when working with raw 3D positions.\nSecond, the features contained on the graph are also not tied to a coordinate system.\nThis is because they either describe coordinate free attributes of individual residues, or they describe relative spatial relationships between two residues.\nThese facts make a protein graph invariant to rotations or translations in space.\nHowever, since the graph was constructed from points in 3D space, local neighborhoods of vertices can be defined using spatial proximity.\nThis is useful when designing convolutions that use a local neighborhood as a receptive field.\nWe now turn our attention to graph convolution\n\n\\section{Graph Convolution}\nRecent years have seen increased attention to problems involving graph structured data, prompting developments in graph convolution to perform various tasks on those data~\\cite{bronstein2016}.\nThese developments allow leveraging of deep learning techniques, which have shown great success for problems on regular grids.\nThough each variant of graph convolution is tailored to suit the data and problem being addressed, there are some common approaches which merit describing generally.\nThese approaches generally fall into two categories: \\emph{spectral} and \\emph{spatial}.\n\n\n\\subsection{Spectral and Spatial Graph Convolution}\nSpectral methods of graph convolution are based on linear functions in the frequency domain of a graph, defined using the laplacian operator $\\mathcal{L}=I-D^{-1/2}WD^{-1/2}$, where $I$ is the identity 
matrix, $W$ is a matrix of edge weights, and $D$ is a diagonal matrix containing the degree of each vertex~\\cite{bruna2013, henaff2015, kipf2016}.\nEach filter in a spectral convolution implies a weighting of each frequency in the spectral decomposition of the graph~\\cite{mallat2009}.\nIn physics, the Laplacian operator is used to model the process of diffusion.\nIn a similar way, the graph Laplacian can be thought of as modeling diffusion along the edges of the graph.\n\nSpatial approaches instead directly model the diffusion process through a transition matrix, or by defining operations in a localized neighborhood of a central vertex~\\cite{henaff2015, atwood2016}.\nIn the latter, each neighborhood constitutes a receptive field where a convolution operation is performed (see Figure \\ref{fig:spatial_graph_conv}).\nSpatial convolution commonly involves a vector of weights and takes a weighted sum of neighbors, much like a discrete convolution on a grid can be viewed as taking a weighted sum of pixels within the receptive field.\nSpatial convolution is more directly analogous to grid based convolution as described in Chapter \\ref{chap:neuralnetworks}.\nThere is a critical difference, however, in that receptive fields from different parts of a graph have no natural correspondence with one another.\n\n\\begin{figure}\n\t\\centering\n\t%\\begin{center}\n\t\\includegraphics[width=\\textwidth]{conv_graph.pdf}\n\t%\\end{center}\n\t\\caption{Receptive fields in a graph context, where each receptive field is defined around a central vertex. The result of convolution is applied to the central vertex in the receptive field.}\n\t\\label{fig:spatial_graph_conv}\n\\end{figure}\n\n\\subsection{Receptive Field Correspondence in Spatial Convolution}\n\n\nGrid receptive fields have defined positions, such as \"upper left most\" or \"bottom middle\".\nTherefore weights can be applied consistently across receptive fields such that each weight is always applied to the same position.\nWith graphs, there is often no such correspondence between receptive fields, since vertices are inherently unordered and lack a specific position (recall their coordinate free nature).\nThe only well defined position is the center, but the problem persists in the neighbors.\nSee Figure \\ref{fig:grid_vs_graph_rf}\nIn fact, even the number of neighbors may vary from one receptive field to another, depending on the definition of a local neighborhood.\nHence the consistent application of weights across receptive fields is only possible after these issues have been addressed.\nThis is typically done in one of two ways:\n\n\\begin{figure}\n\\includegraphics[width=0.9\\textwidth]{grid_vs_graph_rf.png}\n\\caption{The difference between grid receptive fields and graph receptive fields with respect to correspondence. Grid receptive fields give rise to positions (denoted by color) which are consistent from one receptive field to another. Graphs have no positions other than the central vertex (in blue).}\n\\label{fig:grid_vs_graph_rf}\n\\end{figure}\n\n\\begin{enumerate}\n\t\\item \\emph{Imposed ordering of neighbors}. This approach establishes a correspondence between two receptive fields by ordering the neighbors in each and associating neighbors that share a common position. 
\n\tOrdering methods are either based on vertex characteristics, like degree and betweenness centrality, or domain specific knowledge ~\\cite{niepert2016, duvenaud2015}.\n\tThey typically require the number of neighbors in a receptive field to remain the same.\n\tThis approach allows filter weights which are applied to a particular position in the ordering, which, it is assumed, has some significance across all receptive fields.\n\tIf the imposed ordering is arbitrary, this approach has limited utility.\n\tMethods following this approach can be called \\emph{ordered}.\n\t\n\t\\item \\emph{Identical treatment of neighbors}. This approach ignores the need to establish a correspondence between receptive fields and instead treats all neighbors identically.\n\tRather than apply different weights to neighbors depending on their position in an ordering, the same weights are applied to each neighbor.\n\tThis allows for different sizes of receptive fields and avoids choosing an ordering method, but lacks the ability to treat each neighbor uniquely.\n\tMethods following this approach can be called \\emph{order-free}.\n\\end{enumerate}\n\nFigure \\ref{fig:correspondence_approaches} depicts both approaches.\nExamples of each were evaluated for this thesis.\nIn addition, proposed convolutions were examined which attempt to incorporate the advantages of both ordered and order-free methods.\nBelow is a description of each existing method followed by the proposed methods presented in this thesis. \n\n\\begin{figure}\n\t\\centering\n\t%\\begin{center}\n\t\\includegraphics[width=0.8\\textwidth]{correspondence_approaches.pdf}\n\t%\\end{center}\n\t\\caption{Two approaches of establishing correspondence between the neighbors of receptive fields A and B. Central vertices are shown in blue and neighbors in green. The central vertices always correspond with one another. Left: neighbors are ordered and placed into correspondence based on position. Unique weights (\\emph{$w_2$--$w_4$}) can then be applied to each position in the order. Right: neighbors are left unordered and treated identically. This requires that the same weights (\\emph{$w_2$}) be used for all neighbors.}\n\t\\label{fig:correspondence_approaches}\n\\end{figure}\n\n\n\\subsection{Diffusion Based Method}\nAs mentioned, spectral methods utilize the Laplacian operator, which is used to model diffusion processes. 
\nAlthough no spectral methods were examined in this thesis, a spatial diffusion method was.\nAtwood \\& Towsley~\\cite{atwood2016} proposed a Diffusion Convolutional Neural Network (DCNN) which converts a graph's weight matrix to a transition matrix by normalizing its rows.\nThe transition matrix $P$ is raised to successive exponents, generating a power series, with each power corresponding to all walks of length equivalent to that power.\nFor a maximum power of $L$, the activation of a vertex is:\n\n\\begin{equation}\nh(x_i | W) = \\sigma \\bigg( \\sum_{l=0}^L (W_{l\\cdot} \\odot P^l X ) \\bigg),\n\\label{eq:diffusion}\n\\end{equation}\n\n\\noindent\nwhere $W$ is a matrix of weights, $W_{l\\cdot}$ is its $l^{th}$ row, $\\odot$ denotes broadcasted elementwise multiplication, and $X$ is a matrix of all vertices, where each row is a vertex and each column a different feature.\nFor each power $l$, all vertices that have a random walk of length $l$ that end at the vertex of interest are summed together, weighted by the walk probability.\nThis sum is then multiplied elementwise with a weight vector and summed with the result of all other powers to produce the overall signal.\nUnlike other spatial convolutions presented in this thesis, this method does not use a receptive field, instead relying on the similarity matrix to indicate proximity.\n\n\n\\subsection{Ordered Method}\n\nNiepert, Ahmed, \\& Kutzkov~\\cite{niepert2016} described an ordered graph convolution, \\emph{PATCHY-SAN}, which performs classification at the graph level.\nThis method constructs local neighborhoods and orders neighboring vertices according to a \\emph{normalization} procedure.\nThe ordered vertices serve as the receptive field for convolution at the central vertex, which is also in the ordering. \nIf $(x_1, x_2, ... , x_k)$ is the ordered list of vertices, then convolution takes the form:\n\n\\begin{equation}\nh(x_i | \\{ W_{j} \\}, b)= \\sigma \\bigg( \\frac{1}{k} \\sum_{j=1}^{k}(W_{j} x_j) + b \\bigg),\n\\label{eq:patchysan}\n\\end{equation}\n\n\\noindent\nwhere $W_j$ is the weight matrix for position $j$ in the ordering.\nA natural ordering technique is one which places the central vertex first and orders its neighbors from least to greatest distance from the center. \nIn this way, one weight matrix is used for all central vertices, another for all nearest neighbors, etc.\nThe vertex ordering also imposes a lexicographic ordering on the neighborhood edges as well, so unique weights can be used to generate a signal from each edge. \nAdding this term to the convolution generates:\n\n\\begin{equation}\nh(x_i | \\{ W_{j} \\}, \\{ W_{jk} \\}, b)= \\sigma \\bigg( \\frac{1}{k} \\sum_{j=1}^{k}(W_{j} x_j) + \\frac{1}{k^2} \\sum_{j = 1}^{k-1} \\sum_{l=j+1}^{k}(W_{jk} A_{jk})  + b \\bigg),\n\\label{eq:patchysan_2e}\n\\end{equation}\n\n\\noindent\nwhere $W_{jl}$ is the weight matrix associated with edge $(j, l)$ in the lexicographic ordering, and $A_{jk}$ is the feature vector associated with edge $(j, k)$.\nOriginally, Niepert et al. 
convolved vertices and edges separately and combine them in a subsequent layer.\nClassification is then performed on the whole graph level.\nFor interface prediction however, the edges and vertex signals are averaged together and applied to the central vertex so that repeated graph convolutions may be performed without losing the graph structure.\n\n\n\\subsection{Order-Free Methods}\nOne of the simplest forms of order-free graph convolution was proposed by Duvenaud \\& Maclaurin, et al.~\\cite{duvenaud2015}, which was used for generating molecular \\emph{fingerprints}.\nThis method uses a single set of weights for all vertices in the receptive field, including the central vertex:\n\n\\begin{equation}\nh(x_i | W, b)= \\sigma \\bigg( W x_i +  \\frac{1}{|\\mathcal{N}_i|} \\sum_{j \\in \\mathcal{N}_i} (W x_j) + b\\bigg),\n\\label{eq:fingerprint}\n\\end{equation}\n\n\\noindent\nwhere $\\mathcal{N}_i$ is the index set of all neighbors of $x_i$.\nIn the original formulation, no bias term was included, and the sum was not divided by the neighborhood size.\nHowever, when used for interface prediction, including the bias and normalization provided better overall results.\n\nAs mentioned, center vertices may be treated separately from the neighbors, even if all neighbors are treated identically. \nSchlichtkrull \\& Kipf~\\cite{schlichtkrull2017} proposed such an approach called \\emph{Relational Graph Convolutional Networks}, or R-GCN.\nTheir methods were developed for use in knowledge bases, graph structures where the vertices are named entities and the edges capture the many binary relationships between the entities. \nTo convolve a vertex, they take the neighborhood defined by each relation type and sum the signal from all the neighbors in that neighborhood.\nThe resultant signal is the sum of signals from each relation type.\nFor protein graphs, spatial proximity is the only method for determining neighborhoods that makes sense biologically, so there is only one neighborhood.\nThe adapted version of this convolution for use in interface prediction is:\n\n\\begin{equation}\nh(x_i | W^\\textsc{c}, W^\\textsc{n}, b)= \\sigma \\bigg( W^\\textsc{c} x_i + \\frac{1}{|\\mathcal{N}_i|} \\sum_{j \\in \\mathcal{N}_i}(W^\\textsc{n} x_j)  + b\\bigg),\n\\label{eq:rgcn}\n\\end{equation}\n\n\\noindent\nwhere separate weight matrices, $W^\\textsc{c}$ and $W^\\textsc{n}$, are used for the center and neighbors, respectively\nIn the original version, a different weight matrix was used for each of the many relation types.  
\nTo tie learning across all relation types, all weight matrices were simply linear combinations of a set of learnable basis weight matrices:\n\n\\begin{equation}\n\\begin{split}\nW^\\textsc{n} = \\sum_{b} a^\\textsc{n}_b V_b \\\\\nW^\\textsc{c} = \\sum_{b} a^\\textsc{c}_b V_b \n\\end{split}\n\\label{eq:rgcn_basis}\n\\end{equation}\n\n\\noindent\nwhere $\\{V_b\\}$ is a set of basis matrices, and $a^\\textsc{c}$ and $a^\\textsc{n}$ are scalar weights that combine the basis matrices to create $W^\\textsc{c}$ and $W^\\textsc{n}$, respectively.\nFor interface prediction, R-GCNs were examined both with and without basis matrices.\n\nThe above order-free methods do not incorporate edge information.\nSch{\\\"u}tt, Arbabzadah, Chmiela, M{\\\"u}ller, \\& Tkatchenko~\\cite{schutt2017} proposed a version called \\emph{Deep Tensor Neural Networks} (DTNN) which creates a signal for the neighbor vertices as well as the edges which connect them to the central vertex:\n\n\\begin{equation}\nh(x_i | W, W^\\textsc{n}, W^\\textsc{e}, b^\\textsc{n}, b^\\textsc{e})= x_i + \\frac{1}{|\\mathcal{N}_i|} \\sum_{j \\in \\mathcal{N}_i} \\sigma \\bigg[ W \\bigg( (W^\\textsc{n} x_j + b^\\textsc{n}) \\odot (W^\\textsc{e} A_{ij} + b^\\textsc{e}) \\bigg) \\bigg],\n\\label{eq:deep_tensor}\n\\end{equation}\n\n\\noindent\nwhere $\\odot$ denotes the elementwise product.\nIn this formulation, $W^\\textsc{n}$ and $W^\\textsc{e}$ transform vertices and edges respectively into a common space, and $W$ transforms their combination to have the same dimensionality as the input. \nThis convolution does not transform the input by any weight matrix, rather it can be viewed as updating the representation at $x_i$ using information from its neighbors.\nThis restricts representations to always have the same dimensionality, which is not generally required in convolutions. \nAgain, the normalization is omitted from the original formulation, but consistently improves performance when performing interface prediction.\nNevertheless, this method uniquely incorporates edge information, so that even while all neighbors use the same weight matrix, the information on their edges is used to differentiate them.\nThis concept is carried forward into the convolution methods proposed below.\n\n\n\\subsection{Proposed Order-Free Methods}\nLike DTNN, these methods incorporate edge information to help differentiate neighbors, but also avoid imposing an arbitrary ordering on the neighbors in a receptive field.\nThis is accomplished by incorporating information from the edges between each neighbor and the central vertex.\nHere are two variants of graph convolution which differ only in how the edge information is incorporated, denoted \\emph{Sum Coupling} and \\emph{Product Coupling}.\nFor a central vertex $i$ on the graph and a local neighborhood of vertices $\\mathcal{N}_i$, the output of Sum Coupling graph convolution is:\n\\begin{equation}\nh(x_i | W^\\textsc{c}, W^\\textsc{n}, W^\\textsc{e}, b) = \\sigma \\bigg( W^{\\textsc{c}} x_i + \\frac{1}{|\\mathcal{N}_i|}\\sum_{j \\in \\mathcal{N}_i} (W^{\\textsc{n}} x_j + W^{\\textsc{e}} A_{ij}) + b \\bigg),\n\\label{eq:sum_coupling}\n\\end{equation}\nwhere $x_i$ is the feature vector associated with vertex $i$, $W^\\textsc{c}$, $W^\\textsc{n}$ and $W^\\textsc{e}$ are weight matrices, and $b$ is a vector of biases. 
In relation to prior methods, Sum Coupling is most similar to Fingerprint and R-GCN convolutions.
Unlike Fingerprint convolutions, it uses separate weights for central and neighbor vertices.
Unlike R-GCN convolutions, it does not use basis functions which tie together all weight matrices.
Also, Fingerprint and R-GCN convolutions do not generate signals from the edges.
Product Coupling is similar to DTNN convolutions in the way vertex and edge information is coupled together, but unlike DTNN it calculates a signal for the central vertex and allows the number of channels/features to change layer by layer.

The receptive fields are always defined around a central vertex, so the results of convolution can be applied to that vertex.
This retains the graph structure after each convolution, so convolutional layers are stackable, just like convolutions on grids.

A note on receptive fields: protein graphs are complete and embedded in a metric space, so we can define a receptive field using a fixed number of closest neighbors to the central vertex.
A receptive field can also be defined using a threshold $\delta>0$ such that all vertices closer to the central vertex than the threshold are included in the receptive field.
All neighbors in a receptive field are guaranteed to share an edge with the central vertex, allowing the application of Equations (\ref{eq:sum_coupling}) and (\ref{eq:prod_coupling}).
For incomplete graphs, a receptive field can be defined as all vertices within $k$ hops of the central vertex.
If $k=1$, both Sum and Product Coupling can be applied directly.
If $k>1$, then Product Coupling cannot be applied directly to neighbors more than 1 hop away from the central vertex, since they share no edge with the center.
Though there are ways to deal with this issue, they are not the focus of this thesis.

Lastly, to assess the benefit that incorporating information from neighboring residues has on classification performance, the \emph{No Convolution} variant is defined as:

\begin{equation}
h(x_i | W^\textsc{c}, b)= \sigma \bigg( W^{\textsc{c}} x_i + b \bigg),
\label{eq:no_conv}
\end{equation}

\noindent
which excludes all neighbors.
Note the similarity to a standard fully connected layer, except that here the output is for a single vertex rather than for an entire layer.


\section{Pairwise Neural Network Architecture}
These graph convolution operations allow the detection of local patterns on a single graph, and produce a new representation at each vertex.
Partner-specific protein interaction, however, requires classifying pairs of residues in different proteins (vertices in different graphs), which is equivalent to making predictions on vertices in the product graph.
Such predictions are made using a pairwise neural network architecture.

A pairwise architecture consists first of two identical convolutional modules, each responsible for generating the representation for one of the proteins in the pair.
A key requirement for the pairwise architecture is symmetry, since the prediction for a pair of residues should be the same irrespective of which leg is used for which protein.
To ensure symmetry in the convolution layers, weights are shared between layers in the different modules.
The merge layer then combines the vertex representations from one graph with the vertex representations from the other into pairs.
To maintain symmetry, this merge process should also be symmetric.
For example, the elementwise sum, elementwise product, and outer product are all commutative and therefore produce symmetric output.
Another option is to combine pairs asymmetrically (e.g.\ concatenate the two representations together) and then average the predictions from each ordering of the pair.
Finally, the combined representation for each pair of residues is passed through a number of fully connected layers.
The data are represented as pairs of residues at this point.
Theoretically, graph convolution could be performed at this stage as well, this time in the product graph.
However, the computational and memory requirements of doing so prove prohibitive, since the number of convolutions and the number of neighbors in each convolution increase quadratically in the graph size.
Hence the work in this thesis performs no convolution after merging.
The final layer has a single output for each pair, indicating the prediction for that pair.
See Figure \ref{fig:pairwise_arch1} for a graphical depiction.

This output is converted to a binary class prediction vector using a softmax function,

\begin{equation}
\text{softmax}(x) = \bigg[ \frac{e^{-x}}{e^{x} + e^{-x}} , \frac{e^{x}}{e^{x} + e^{-x}} \bigg],
\label{eq:softmax}
\end{equation}

\noindent
the elements of which can be interpreted as class probabilities for the negative (non-interfacial) and positive (interfacial) classes, respectively.
This output is compared to a one-hot label vector indicating whether \big($[0, 1]$\big) or not \big($[1, 0]$\big) the pair constitutes part of the true interface.
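A compact numpy sketch of the symmetric scoring just described, using the elementwise-sum merge and the two-class softmax of Equation \ref{eq:softmax}; the shapes and names are illustrative assumptions.

\begin{lstlisting}[language=Python]
# Symmetric pairwise scoring (a sketch; shapes and names are assumptions).
import numpy as np

def score_pairs(H1, H2, w, b=0.0):
    """H1: (n1, f) and H2: (n2, f) residue representations from the two legs.
    The elementwise-sum merge keeps predictions symmetric in the two proteins."""
    merged = H1[:, None, :] + H2[None, :, :]        # (n1, n2, f) pair features
    s = merged @ w + b                              # (n1, n2) raw scores
    return np.exp(s) / (np.exp(s) + np.exp(-s))     # P(interface), per eq:softmax

rng = np.random.default_rng(1)
H1, H2, w = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=4)
P = score_pairs(H1, H2, w)
assert np.allclose(P, score_pairs(H2, H1, w).T)     # symmetric by construction
print(P.shape)                                      # (3, 5)
\end{lstlisting}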
The resultant graphs are then merged to create representations of residue pairs. After more fully connected layers, a final classification is performed for each pair.}\n\t\\label{fig:pairwise_arch1}\n\\end{figure}\n\n\nThis chapter has presented protein graphs, graph convolution operations, and pairwise neural network architectures, all of which are components in this thesis' approach to partner specific protein interface prediction.\nChapter \\ref{chap:experiments} describes the experiments that were performed.\n", "meta": {"hexsha": "d56d1863a0d0533a646b69f6123eb73249301225", "size": 24727, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "methods.tex", "max_stars_repo_name": "fouticus/msthesis", "max_stars_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "methods.tex", "max_issues_repo_name": "fouticus/msthesis", "max_issues_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "methods.tex", "max_forks_repo_name": "fouticus/msthesis", "max_forks_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.0226537217, "max_line_length": 528, "alphanum_fraction": 0.7905123954, "num_tokens": 5662, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952893703477, "lm_q2_score": 0.6584174938590246, "lm_q1q2_score": 0.5910582826762762}}
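To make the symmetry requirement concrete, the following sketch (illustrative only, not the thesis' implementation; all names are invented) shows two commutative merge operations on per-vertex features, together with the prediction-averaging alternative for an asymmetric merge such as concatenation.

\begin{verbatim}
# Illustrative sketch (not the thesis' code): symmetric merging of
# per-vertex features from two protein graphs into pair features.
import numpy as np

def merge_pairs(H1, H2, mode="sum"):
    # H1: (n1, d), H2: (n2, d) vertex representations.
    # Returns (n1, n2, d) pair features; swapping the two inputs
    # transposes, but does not change, the pair representations.
    if mode == "sum":        # commutative: h_i + h_j
        return H1[:, None, :] + H2[None, :, :]
    if mode == "product":    # commutative: h_i * h_j (elementwise)
        return H1[:, None, :] * H2[None, :, :]
    raise ValueError(mode)

def symmetric_predict(f, hi, hj):
    # Asymmetric merge (concatenation) made symmetric by averaging
    # the predictions over both orderings of the pair.
    return 0.5 * (f(np.concatenate([hi, hj]))
                  + f(np.concatenate([hj, hi])))
\end{verbatim}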
{"text": "\\section{Solution}\n\\label{sec:solution}\n\nOur goal is now to find the policy $\\pi$, which is an action to execute for each state, that will minimize the total cost on the time horizon $\\mathcal{T}$.\nWe now define the Q-values of the MDP for each action $a$, state $k$, and policy $\\pi$, which is defined as the expected cost of applying action $a$ at stage $k$ and then relying on policy $\\pi$.\n\nThe basics of Reinforcement Learning is to apply stochastic approximation on the expectation of the cost to obtain algorithm can thus be written as\n\\[\n  Q(k,a) \\leftarrow Q(k,a) + \\alpha(t) \\left( c(a|k) + \\underset{j\\in \\mathcal{U}(k')}{\\text{min}} \\{Q(k',j)\\} + Q(k,a) \\right)\n\\]\nwhere $t$ represents an iteration variable and where alpha $t$ is a coefficient which decreases with $t$.\n\nAs we can see, this algorithm is model free in the sense that it is not required to know the probability distribution in advance.\n", "meta": {"hexsha": "4a7fdd7011ee20eb8424c8500e77c4151ddf3d62", "size": 914, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/solution.tex", "max_stars_repo_name": "qlete/ANManagement", "max_stars_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/solution.tex", "max_issues_repo_name": "qlete/ANManagement", "max_issues_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-05-16T10:53:59.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-21T12:13:01.000Z", "max_forks_repo_path": "report/solution.tex", "max_forks_repo_name": "qlete/ANManagement", "max_forks_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.2857142857, "max_line_length": 195, "alphanum_fraction": 0.7264770241, "num_tokens": 243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9073122238669026, "lm_q2_score": 0.6513548714339144, "lm_q1q2_score": 0.5909822369272453}}
{"text": "\n\n    \\filetitle{gamma}{Create function proportional to log of gamma distribution}{logdist/gamma}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nF = logdist.gamma(Mean,Std)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{Mean} {[} numeric {]} - Mean of the gamma distribution.\n\\item\n  \\texttt{Std} {[} numeric {]} - Std dev of the gamma distribution.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{F} {[} function\\_handle {]} - Function handle returning a\n  value proportional to the log of the gamma density.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nSee \\href{logdist/Contents}{help on the logdisk package} for details on\nusing the function handle \\texttt{F}.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "edb82e3662e634b344c2672b3b728865f935dd83", "size": 862, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/logdist/gamma.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/logdist/gamma.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/logdist/gamma.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 23.2972972973, "max_line_length": 95, "alphanum_fraction": 0.7447795824, "num_tokens": 244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8615382058759129, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5909716627893185}}
{"text": "\\subsection{Convolutional Layer}\n\nThe convolutional layer - referred to as conv2d - receives an input data tensor $X_{ci} $ of size $[H, W, C_{in}]$ and outputs a data tensor of size $[H,W,C_{out}]$. Here $H$ and $W$ denote the height and width of the convolutional kernel. So every input tensor is transferred to each single convolutional computation channel, which is responsible for computing a \\emph{single} output channel. Each convolutional channel is parameterized with its own set of values. Those values are implemented as constants in VHDL, which has the advantages of greater optimization options by the compiler as well as reduced data movement. Therefore template entities are created for the convolution channel- and layer entities where the weights are then populated via a Python script.\n\n\\begin{figure}[hb]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{img/convolution.pdf}\n\t\\caption[Convolution Operation]{Convolution Operation. The convolutional kernel (red-dashed-square) is shifted over the input data. The currently processed data is shown in blue and the only three pixels (violett) need to be loaded for the next convolution because of data reuse. Note that for the convolution operation the image must be padded (in our case by zeros) to sustain image dimensions. Depth channels omitted for illustration purposes.}\n\t\\label{fig:hw-conv-operation}\n\\end{figure}\n\nThe principal operation can be shown in Figure~\\ref{fig:hw-conv-operation}. Here the convolutional kernel slides over the input data from left to right, top to bottom. It can be seen that most of the $6$ of $9$ input pixels (of each input channel) can be reused. This is done via shift registers and because all weights of the kernel remain unchanged for the operation this results in minimum data movement. Note that for the convolutional operations the input data must be padded in order to sustain the image dimensions. If a new line occurs, two cycles are required to load a new 3x3 kernel instead of one as an additional 3x1 vector needs to be loaded from memory. \n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\\centering\n\t\t% Constrain the height instead of the widht so the images are equally aligned at the top\n\t\t\\includesvg[height=2.5in]{img/inkscape/conv2d.svg}\n\t\t\\caption[Conv2d block diagram.]{Conv2d block diagram. For each output channel a conv\\_channel module is used. $k$ indicates the number of output channels.}\n\t\t\\label{fig:conv2d}\n\t\\end{subfigure}%\n\t~\n\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includesvg[height=2.5in]{img/inkscape/conv-channel.svg}\n\t\t\\caption[conv\\_channel block diagram.]{conv\\_channel block diagram. For each input channel a kernel\\_3x3 module is used. $n$ indicates the number of input channels.}\n\t\t\\label{fig:conv-channel}\t\t\n\t\\end{subfigure}\n\t\\caption{Block diagram of the Convolutional Layer}\n\t\\label{fig:hw-layer-conv}\n\\end{figure}\n\n\nIn Figure~\\ref{fig:conv2d} the block diagram of the top-level conv2d module is shown. It consists of $k$ conv\\_channel modules to realise $k$ output channels. All conv\\_channel modules recieve the same input vector $X_{c_i}$. \nThe internal structure of a conv\\_channel module is shown in Figure \\ref{fig:conv-channel}. It uses $n$ kernel\\_3x3 modules to realise $n$ input channels. All kernel\\_3x3 modules get a different input vector $X_{c_{i1}}$ to $X_{c_{in}}$ which are $3 \\times 3$ input matrices. 
All kernel outputs are summed up to one final value of length BIT\\_WIDTH\\_OUT.\n\n\\subsubsection{Interface}\n\n\\begin{itemize}\n\t\\item Input interface connected to the shift register, which consists of an $n \\cdot 3 \\times 3$ vector of values of length BIT\\_WIDTH\\_IN, in which $n$ is the number of input channels.\n\t\\item Output interface connected to the pooling layer, which is a vector of $m$ values of length BIT\\_WIDTH\\_OUT, in which $m$ is the number of output channels.\n\\end{itemize}\nBoth input and output interfaces have ready, last and valid signals to control the flow of data.\n\n\\subsubsection{Parameter}\n\n\\begin{table}[hb]\n\t\\centering\n\t\\begin{tabular}{lcc}\n\t\t\\toprule\n\t\tParameter & VHDL Datatype & Type \\\\\n\t\t\\midrule\n\t\t BIT\\_WIDTH\\_IN & integer & Generic\\\\\n\t\t BIT\\_WIDTH\\_OUT & integer & Generic \\\\\n\t\t INPUT\\_CHANNELS & integer & Generic\\\\\n \t \t OUTPUT\\_CHANNELS & integer & Generic \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\subsubsection*{Convolution Channel}\n\n\n\\textbf{Interface}\n\\begin{itemize}\n\t\\item Input interface, same as conv2d.\n\t\\item Output interface connected to the pooling layer, which is a value of length BIT\\_WIDTH\\_OUT.\n\\end{itemize}\n\n\\textbf{Parameter}\n\\begin{itemize}\n \t\\item BIT\\_WIDTH\\_IN : integer\n \t\\item KERNEL\\_WIDTH\\_OUT : integer, output bit width of the kernel\\_3x3 module\n \t\\item BIT\\_WIDTH\\_OUT: integer\n \t\\item N: integer, number of kernels\n \t\\item OUTPUT\\_MSB: integer, defines which of the $n$=BIT\\_WIDTH\\_OUT bits is the most significant bit\n \t\\item BIAS: integer, currently unused, as the bias appears to be of little importance in the convolutional layers\n\\end{itemize}\n\n\\subsubsection*{Kernel-3x3}\n\nThis module performs the convolution of a single kernel with a single input image patch. This is a multiplication of - in our case - 9 values of length BIT\\_WIDTH\\_IN with their respective weights, which are defined in an array that can be set via a generic. The multiplication results are then added up in an adder tree. 
The weights are specified in a single generic constant array in row-major notation.\n\n\\subsubsection{Interface}\n\\begin{itemize}\n\t\\item Input interface, a vector of 9 values of length BIT\\_WIDTH\\_IN.\n\t\\item Output interface, same as conv\\_channel.\n\\end{itemize}\n\n\\subsubsection{Parameter}\n\\begin{itemize}\n\t\\item BIT\\_WIDTH\\_IN: integer\n\t\\item BIT\\_WIDTH\\_OUT: integer\n\t\\item WEIGHT: array of 9 integers\n\t\\item WEIGHT\\_WIDTH: integer\n\\end{itemize}", "meta": {"hexsha": "e10977f9fed943f92718dbb462c856a0c89822a3", "size": 5805, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/documentation/02_conv2d-hw.tex", "max_stars_repo_name": "marbleton/FPGA_MNIST", "max_stars_repo_head_hexsha": "4b4a30e0adca35de9adcad7b3fec08c516260790", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-11-13T12:24:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-31T02:39:35.000Z", "max_issues_repo_path": "tex/documentation/02_conv2d-hw.tex", "max_issues_repo_name": "marbleton/FPGA_MNIST", "max_issues_repo_head_hexsha": "4b4a30e0adca35de9adcad7b3fec08c516260790", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 29, "max_issues_repo_issues_event_min_datetime": "2019-12-17T22:06:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:20:45.000Z", "max_forks_repo_path": "tex/documentation/02_conv2d-hw.tex", "max_forks_repo_name": "marbleton/FPGA_MNIST", "max_forks_repo_head_hexsha": "4b4a30e0adca35de9adcad7b3fec08c516260790", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-10-20T15:12:52.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-13T13:36:37.000Z", "avg_line_length": 59.2346938776, "max_line_length": 769, "alphanum_fraction": 0.7695090439, "num_tokens": 1537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8615382200964034, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5909716614815334}}
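As a behavioral reference for the kernel\_3x3 module described above, the following Python sketch (illustrative only, not the generated VHDL) reproduces the multiply-then-adder-tree structure with integer weights in row-major order.

\begin{verbatim}
# Behavioral sketch of kernel_3x3: nine multiplications followed by
# a pairwise (adder-tree) reduction; not the VHDL implementation.
def adder_tree(vals):
    while len(vals) > 1:
        nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element passes through one level
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

def kernel_3x3(pixels, weights):
    # pixels, weights: length-9 integer sequences in row-major order.
    return adder_tree([p * w for p, w in zip(pixels, weights)])
\end{verbatim}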
{"text": "\\section{Ring theory}\\label{sec:ring_theory}\n\nAs discussed in \\fullref{rem:additive_magma}, commutative and non-commutative groups are quite different despite having similar definitions. Rings are extensions of \\hyperref[def:abelian_group]{Abelian groups}, which allow multiplication with more than members of \\( \\BbbZ \\).\n\nFor commutative rings, this second operation is often truly an extension of \\fullref{def:magma/exponentiation} to arbitrary ring elements. For noncommutative ring, this second operation is usually given by function composition.\n", "meta": {"hexsha": "79ea721470181f091ec55bb93e39d6ef85b6f9c9", "size": 552, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/ring_theory.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/ring_theory.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ring_theory.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.0, "max_line_length": 276, "alphanum_fraction": 0.8170289855, "num_tokens": 125, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152325073083132, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.5909378175662996}}
{"text": "% !TEX root = hott_intro.tex\n\n\\section{The hierarchy of homotopical complexity}\n\\sectionmark{Homotopical complexity}\\label{chap:hierarchy}\n%Not all types have interesting higher groupoid structure. For example, we will see below that two natural numbers can only be equal in at most one way. Voevodsky articulated a useful notion to detect the homotopical complexity of types, which allows us to distinguish between contractible types (also called \\emph{$(-2)$-types}), \\emph{propositions} (also called \\emph{$(-1)$-types}), \\emph{sets} (\\emph{$0$-types}), and \\emph{$k$-types} for higher $k$.\n\n%We will see [later] that there are types that are not $k$-types for any $k$.\n\n\\subsection{Propositions and subtypes}\n\n\\begin{defn}\nA type $A$ is said to be a \\define{proposition} if there is a term of type\n\\begin{equation*}\n\\isprop(A)\\defeq\\prd{x,y:A}\\iscontr(x=y).\n\\end{equation*}\nWe define $\\prop$ to be the type of all small propositions, i.e.,\n\\begin{equation*}\n  \\prop\\defeq\\sm{X:\\UU}\\isprop(A).\n\\end{equation*}\n\\end{defn}\n\n\\begin{eg}\\label{eg:prop_contr}\nAny contractible type is a proposition by \\cref{ex:prop_contr}. However, propositions do not need to be inhabited: the empty type is also a proposition, since\n\\begin{equation*}\n\\prd{x,y:\\emptyt}\\iscontr(x=y)\n\\end{equation*}\nfollows from the induction principle of the empty type.\n\\end{eg}\n\nIn the following lemma we prove that in order to show that a type $A$ is a proposition, it suffices to show that any two terms of $A$ are equal. In other words, propositions are types with \\define{proof irrelevance}.\n\n\\begin{thm}\\label{lem:isprop_eq}\n  Let $A$ be a type. Then the following are equivalent:\n  \\begin{enumerate}\n  \\item The type $A$ is a proposition.\n  \\item Any two terms of type $A$ can be identified, i.e., there is a dependent function\n    \\begin{equation*}\n      \\prd{x,y:A}\\id{x}{y}.\n    \\end{equation*}\n  \\item The type $A$ is contractible as soon as it is inhabited, i.e., there is a function\n    \\begin{equation*}\n      A \\to \\iscontr(A).\n    \\end{equation*}\n  \\item The map $\\mathsf{const}_\\ttt : A\\to\\unit$ is an embedding. \n  \\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n  To show that (i) implies (ii), let $A$ be a proposition. Then its identity types are contractible, so the center of contraction of $\\id{x}{y}$ is identification $\\id{x}{y}$, for each $x,y:A$.\n\n  To show that (ii) implies (iii), suppose that $A$ comes equipped with $p:\\prd{x,y:A}\\id{x}{y}$. Then for any $x:A$ the dependent function $p(x):\\prd{y:A}\\id{x}{y}$ is a contraction of $A$. Thus we obtain the function\n  \\begin{equation*}\n    \\lam{x}(x,p(x)):A\\to\\iscontr(A).\n  \\end{equation*}\n\n  To show that (iii) implies (iv), suppose that $A\\to\\iscontr(A)$ and let $x,y:A$. We have to show that\n  \\begin{equation*}\n    \\apfunc{\\mathsf{const}_\\ttt}:(x=y)\\to (\\ttt=\\ttt)\n  \\end{equation*}\n  is an equivalence. Since we have $x:A$ it follows that $A$ is contractible. Since the unit type is contractible it follows that $\\mathsf{const}_\\ttt$ is an equivalence. 
Therefore we conclude by \\cref{cor:emb_equiv} that it is an embedding.\n\n  To show that (iv) implies (i), note that if $A\\to\\unit$ is an embedding, then the identity types of $A$ are equivalent to contractible types and therefore they must be contractible.\n\\end{proof}\n\nIn the following lemma we show that propositions are closed under equivalences.\n\n\\begin{lem}\\label{lem:prop_equiv}\nLet $A$ and $B$ be types, and let $e:\\eqv{A}{B}$. Then we have\n\\begin{equation*}\n\\isprop(A)\\leftrightarrow\\isprop(B).\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nWe will show that $\\isprop(B)$ implies $\\isprop(A)$. This suffices, because the converse follows from the fact that $e^{-1}:B\\to A$ is also an equivalence. \n\nSince $e$ is assumed to be an equivalence, it follows by \\cref{cor:emb_equiv} that\n\\begin{equation*}\n\\apfunc{e} : (x=y)\\to (e(x)=e(y))\n\\end{equation*}\nis an equivalence for any $x,y:A$. If $B$ is a proposition, then in particular the type $e(x)=e(y)$ is contractible for any $x,y:A$, so the claim follows from \\cref{thm:contr_equiv}.\n\\end{proof}\n\n  In set theory, a set $y$ is said to be a subset of a set $x$, if any element of $y$ is an element of $x$, i.e., if the condition\n  \\begin{equation*}\n    \\forall_z (z\\in y)\\to (z\\in x)\n  \\end{equation*}\n  holds. We have already noted that type theory is different from set theory in that terms in type theory come equipped with a \\emph{unique} type. Moreover, in set theory the proposition $x\\in y$ is well-formed for any two sets $x$ and $y$, whereas in type theory the judgment $a:A$ is only well-formed if it is derived using the postulated inference rules. Because of these differences we must find a different way to talk about subtypes.\n\n  Note that in set theory there is a correspondence between the subsets of a set $x$, and the \\emph{predicates} on $x$. A predicate on $x$ is just a proposition $P(z)$ that varies over the elements $z\\in x$. Indeed, if $y$ is a subset of $x$, then the corresponding predicate is the proposition $z\\in y$. Conversely, if $P$ is a predicate on $x$, then we obtain the subset\n  \\begin{equation*}\n    \\{z\\in x\\mid P(z)\\}\n  \\end{equation*}\n  of $x$. Now we have the right idea of subtypes in type theory: they are families of propositions.\n\n\\begin{defn}\nA type family $B$ over $A$ is said to be a \\define{subtype} of $A$ if for each $x:A$ the type $B(x)$ is a proposition.\n\\end{defn}\n\nWe will show in \\cref{thm:subtype} that a type family $B$ over $A$ is a subtype of $A$ if and only if the projection map $\\proj 1:\\big(\\sm{x:A}B(x)\\big)\\to A$ is an embedding.\n\n\\begin{comment}\n\\begin{samepage}\n\\begin{thm}\\label{thm:subtype}\nLet $B$ be a type family over $A$. The following are equivalent:\n\\begin{enumerate}\n\\item The family $B$ over $A$ is a \\define{subtype} of $A$, in the sense that for each $x:A$ the type $B(x)$ is a proposition.\n\\item The projection map\n\\begin{equation*}\n\\proj 1 : \\Big(\\sm{x:A}B(x)\\Big)\\to A\n\\end{equation*}\nis an embedding. \n\\end{enumerate}\n\\end{thm}\n\\end{samepage}\n\n\\begin{proof}\nFirst assume that $B(x)$ is a proposition for each $x:A$. Our goal is to show that\n\\begin{equation*}\n\\apfunc{\\proj 1} : (\\id{s}{t})\\to (\\id{\\proj 1(s)}{\\proj 1(t)})\n\\end{equation*}\nis an equivalence for every $s,t:\\sm{x:A}B(x)$. 
By $\\Sigma$-induction on $s$ and \\cref{thm:id_fundamental} it suffices to show that the type\n\\begin{equation*}\n\\sm{t:\\sm{x:A}B(x)} \\id{a}{\\proj 1(t)}\n\\end{equation*}\nis contractible, for any $a:A$ and $b:B(a)$. \nFor the center of contraction we take $\\pairr{\\pairr{a,b},\\refl{a}}$. \nThe contraction is constructed by applying $\\Sigma$-induction twice, by which it suffices to construct a term of type\n\\begin{equation*}\n\\prd{x:A}{y:B(x)}{p:\\id{a}{x}} \\pairr{\\pairr{a,b},\\refl{a}}=\\pairr{\\pairr{x,y},p}.\n\\end{equation*}\nThis term is constructed by path induction on $p$, so it suffices to construct a term of type\n\\begin{equation*}\n\\prd{y:B(a)} \\pairr{\\pairr{a,b},\\refl{a}}=\\pairr{\\pairr{a,y},\\refl{a}}\n\\end{equation*}\nHowever, the proposition $B(a)$ is contractible by \\cref{cor:contr_prop}, since we have $b:B(a)$. Therefore we may proceed by singleton induction, so it suffices to construct an identification of type\n\\begin{equation*}\n\\pairr{\\pairr{a,b},\\refl{a}}=\\pairr{\\pairr{a,b},\\refl{a}},\n\\end{equation*}\nwhich we have by reflexivity. This completes the proof that if each $B(x)$ is a proposition, then the projection map $\\proj 1 : \\big(\\sm{x:A}B(x)\\big)\\to A$ is an embedding.\n\nFor the converse, assume that the projection map is an embedding, and let $x:A$. Our goal is to show that $B(x)$ is a proposition. By \\cref{lem:isprop_eq} it suffices to show that\n\\begin{equation*}\n\\prd{x:A}{y,z:B(x)} \\id{y}{z}\n\\end{equation*}\nLet $y,z:B(x)$. By our assumption that the projection map is an embedding we have an equivalence\n\\begin{equation*}\n\\eqv{(\\id{\\pairr{x,y}}{\\pairr{x,z}})}{(\\id{x}{x})}\n\\end{equation*}\nIn particular, we obtain an identification $p:\\id{\\pairr{x,y}}{\\pairr{x,z}}$ which comes equipped with an identification $q:\\ap{\\proj 1}{p}=\\refl{x}$. Now it follows that\n\\begin{equation*}\n\\begin{tikzcd}[column sep=huge]\ny \\arrow[r,equals,\"\\apfunc{\\mathsf{tr}_B(\\blank,y)}(q)\"] & \\mathsf{tr}_B(p,y) \\arrow[r,equals,\"\\apd{\\proj 2}{p}\"] & z,\n\\end{tikzcd}\n\\end{equation*}\nwhere $\\apdfunc{\\proj 2}$ is the \\emph{dependent} action on paths of the dependent function $\\proj 2:\\prd{t:\\sm{x:A}B(x)} B(\\proj 1(t))$, constructed in \\cref{defn:apd}.\n\\end{proof}\n\n\\begin{cor}\nLet $f:A\\to B$ be a map. The following are equivalent:\n\\begin{enumerate}\n\\item For each $y:B$, the fiber $\\fib{f}{y}$ is a proposition. \n\\item $f$ is an embedding.\n\\end{enumerate}\n\\end{cor}\n\n\\begin{proof}\nBy \\cref{ex:fib_replacement} there is a commuting triangle\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\nA \\arrow[rr,\"\\lam{a}\\pairr{f(a),\\pairr{a,\\refl{f(a)}}}\"] \\arrow[dr,swap,\"f\"] & & \\sm{y:B}\\fib{f}{y} \\arrow[dl,\"\\proj 1\"] \\\\\n& B\n\\end{tikzcd}\n\\end{equation*}\nin which the top map is an equivalence. Thus it follows from \\cref{ex:emb_triangle} that $f$ is an embedding if and only if $\\proj 1:\\big(\\sm{y:B}\\fib{f}{y}\\big)\\to B$ is an embedding. 
Now the claim follows from \\cref{thm:subtype}.\n\\end{proof}\n\\end{comment}\n\n\\subsection{Sets}\n\n\\begin{defn}\nA type $A$ is said to be a \\define{set} if there is a term of type\n\\begin{equation*}\n\\isset(A)\\defeq \\prd{x,y:A}\\isprop(\\id{x}{y}).\n\\end{equation*}\n\\end{defn}\n\n\\begin{lem}\nA type $A$ is a set if and only if it satisfies \\define{axiom K}, which asserts that\n\\begin{equation*}\n\\prd{x:A}{p:\\id{x}{x}}\\id{\\refl{x}}{p}.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nIf $A$ is a set, then $\\id{x}{x}$ is a proposition, so any two of its elements are equal. \nThis implies axiom $K$. \n\nFor the converse, if $A$ satisfies axiom $K$, then for any $p,q:\\id{x}{y}$ we have $\\id{\\ct{p}{q^{-1}}}{\\refl{x}}$, and hence $\\id{p}{q}$. This shows that $\\id{x}{y}$ is a proposition, and hence that $A$ is a set.\n\\end{proof}\n\n\\begin{thm}\\label{lem:prop_to_id}\nLet $A$ be a type, and let $R:A\\to A\\to\\UU$ be a binary relation on $A$ satisfying\n\\begin{enumerate}\n\\item Each $R(x,y)$ is a proposition,\n\\item $R$ is reflexive, as witnessed by $\\rho:\\prd{x:A}R(x,x)$,\n\\item There is a map\n  \\begin{equation*}\n    R(x,y)\\to (x=y)\n  \\end{equation*}\n  for each $x,y:A$.\n\\end{enumerate}\nThen any family of maps\n\\begin{equation*}\n\\prd{x,y:A}(\\id{x}{y})\\to R(x,y)\n\\end{equation*}\nis a family of equivalences. Consequently, the type $A$ is a set.\n\\end{thm}\n\n\\begin{proof}\nLet $f:\\prd{x,y:A}R(x,y)\\to(\\id{x}{y})$. \nSince $R$ is assumed to be reflexive, we also have a family of maps\n\\begin{equation*}\n\\mathsf{path\\usc{}ind}_x(\\rho(x)):\\prd{y:A}(\\id{x}{y})\\to R(x,y).\n\\end{equation*}\nSince each $R(x,y)$ is assumed to be a proposition, it therefore follows that each $R(x,y)$ is a retract of $\\id{x}{y}$. Therefore it follows that $\\sm{y:A}R(x,y)$ is a retract of $\\sm{y:A}x=y$, which is contractible. We conclude that $\\sm{y:A}R(x,y)$ is contractible, and therefore that any family of maps\n\\begin{equation*}\n  \\prd{y:A}(x=y)\\to R(x,y)\n\\end{equation*}\nis a family of equivalences.\n\nNow it also follows that $A$ is a set, since its identity types are equivalent to propositions, and therefore they are propositions by \\cref{lem:prop_equiv}. \n\\end{proof}\n\n\\begin{defn}\n  A map $f:A\\to B$ is said to be \\define{injective} if for any $x,y:A$ there is a map\n  \\begin{equation*}\n    (f(x)=f(y))\\to (x=y).\n  \\end{equation*}\n\\end{defn}\n\n\\begin{cor}\n  Any injective map into a set is an embedding.\n\\end{cor}\n\n\\begin{proof}\n  Let $f:A\\to B$ be an injective map between sets. Now consider the relation\n  \\begin{equation*}\n    R(x,y)\\defeq (f(x)=f(y)).\n  \\end{equation*}\n  Note that $R$ is reflexive, and that $R(x,y)$ is a proposition for each $x,y:A$. Moreover, by the assumption that $f$ is injective, we have\n  \\begin{equation*}\n    R(x,y)\\to (x=y)\n  \\end{equation*}\n  for any $x,y:A$. Therefore we are in the situation of \\cref{lem:prop_to_id}, so it follows that the map $\\apfunc{f} : (x=y)\\to (f(x)=f(y))$ is an equivalence.\n\\end{proof}\n\n\\begin{thm}\\label{thm:eq_nat}\nThe type of natural numbers is a set.\n\\end{thm}\n\n\\begin{proof}\nWe will apply \\cref{lem:prop_to_id}. 
Note that the observational equality $\\mathsf{Eq}_\\N:\\N\\to(\\N\\to\\UU)$ on $\\N$ (\\cref{defn:obs_nat}) is a reflexive relation by \\cref{ex:obs_nat_eqrel}, and moreover that $\\mathsf{Eq}_\\N(n,m)$ is a proposition for every $n,m:\\N$ (proof by double induction).\nTherefore it suffices to show that\n\\begin{equation*}\n\\prd{m,n:\\nat}\\mathsf{Eq}_\\N(m,n)\\to (\\id{m}{n}).\n\\end{equation*}\nThis follows from the fact that observational equality is the \\emph{least} reflexive relation, which was shown in \\cref{ex:obs_nat_least}.\n\\end{proof}\n\n\\begin{comment}\n\\begin{thm}[Hedberg]\\label{thm:dec_eq}\nAny type with decidable equality is a set.\n\\end{thm}\n\n\\begin{proof}\nLet $A$ be a type, and let $d:\\prd{x,y:A}(\\id{x}{y})+\\neg(\\id{x}{y})$ be the witness that $A$ has decidable equality.\nWe first construct a reflexive binary relation $E:A\\to A\\to\\type$ such that each $E(x,y)$ is a proposition.\nFor every $x,y:A$, we first define a type family $E'(x,y):((\\id{x}{y})+\\neg(\\id{x}{y}))\\to\\type$ by\n\\begin{align*}\nE'(x,y,\\inl(p)) & \\defeq \\unit \\\\\nE'(x,y,\\inr(p)) & \\defeq \\emptyt.\n\\end{align*}\nNote that $E'(x,y,q)$ is a proposition for each $x,y:A$ and $q:(\\id{x}{y})+\\neg(\\id{x}{y})$. \nNow we set $E(x,y)\\defeq E'(x,y,d(x,y))$. Then $E$ is clearly reflexive, and a family of propositions.\nTherefore it remains to show that $E$ implies identity. \n\nSince $E$ is defined as an instance of $E'$, it suffices to construct a term of type\n\\begin{equation*}\n\\prd{x,y:A}{q:(\\id{x}{y})+\\neg(\\id{x}{y})} E'(q)\\to (\\id{x}{y}). \n\\end{equation*}\nBy induction of disjoint sums, it suffices to construct terms of types\n\\begin{align*}\n& \\prd{x,y:A}{p:\\id{x}{y}} \\unit\\to (\\id{x}{y}) \\\\\n& \\prd{x,y:A}{p:\\neg(\\id{x}{y})} \\emptyt\\to (\\id{x}{y}).\n\\end{align*}\nIn the first case, we take $\\lam{x}{y}{p}{t}p$, and the second case is by induction on the empty type.\n\\end{proof}\n\\end{comment}\n\n\\subsection{General truncation levels}\n\\begin{defn}\nWe define $\\istrunc{} : \\Z_{\\geq-2}\\to\\UU\\to\\UU$ by induction on $k:\\Z_{\\geq -2}$, taking\n\\begin{align*}\n\\istrunc{-2}(A) & \\defeq \\iscontr(A) \\\\\n\\istrunc{k+1}(A) & \\defeq \\prd{x,y:A}\\istrunc{k}(\\id{x}{y}).\\qedhere\n\\end{align*}\nFor any type $A$, we say that $A$ is \\define{$k$-truncated}, or a \\define{$k$-type}, if there is a term of type $\\istrunc{k}(A)$. We say that a map $f:A\\to B$ is $k$-truncated if its fibers are $k$-truncated.\n\\end{defn}\n\n%For the rest of this section, let $k:\\Z_{\\geq-2}$.\n\n\\begin{thm}\\label{thm:istrunc_next}\nIf $A$ is a $k$-type, then $A$ is also a $(k+1)$-type.\n\\end{thm}\n\n\\begin{proof}\nWe have seen in \\cref{eg:prop_contr} that contractible types are propositions. This proves the base case.\nFor the inductive step, note that if any $k$-type is also a $(k+1)$-type, then any $(k+1)$-type is a $(k+2)$-type, since its identity types are $k$-types and therefore $(k+1)$-types.\n\\end{proof}\n\n\\begin{thm}\\label{thm:ktype_eqv}\nIf $e:\\eqv{A}{B}$ is an equivalence, and $B$ is a $k$-type, then so is $A$.\n\\end{thm}\n\n\\begin{proof}\nWe have seen in \\cref{ex:contr_equiv} that if $B$ is contractible and $e:\\eqv{A}{B}$ is an equivalence, then $A$ is also contractible. This proves the base case.\n\nFor the inductive step, assume that the $k$-types are stable under equivalences, and consider $e:\\eqv{A}{B}$ where $B$ is a $(k+1)$-type. 
In \\cref{cor:emb_equiv} we have seen that\n\\begin{equation*}\n\\apfunc{e}:(\\id{x}{y})\\to(\\id{e(x)}{e(y)})\n\\end{equation*}\nis an equivalence for any $x,y$. Note that $\\id{e(x)}{e(y)}$ is a $k$-type, so by the induction hypothesis it follows that $\\id{x}{y}$ is a $k$-type. This proves that $A$ is a $(k+1)$-type.\n\\end{proof}\n\n\\begin{cor}\\label{cor:emb_into_ktype}\nIf $f:A\\to B$ is an embedding, and $B$ is a $(k+1)$-type, then so is $A$.\n\\end{cor}\n\n\\begin{proof}\nBy the assumption that $f$ is an embedding, the action on paths\n\\begin{equation*}\n\\apfunc{f}:(\\id{x}{y})\\to (\\id{f(x)}{f(y)})\n\\end{equation*}\nis an equivalence for every $x,y:A$. Since $B$ is assumed to be a $(k+1)$-type, it follows that $f(x)=f(y)$ is a $k$-type for every $x,y:A$. Therefore we conclude by \\cref{thm:ktype_eqv} that $\\id{x}{y}$ is a $k$-type for every $x,y:A$. In other words, $A$ is a $(k+1)$-type.\n\\end{proof}\n\nIn the following definition we generalize the notion of contractible map.\n\n\\begin{defn}\nWe say that a map $f:A\\to B$ is \\define{$k$-truncated} if for each $y:B$ the fiber $\\fib{f}{y}$ is $k$-truncated.\n\\end{defn}\n\n\\begin{thm}\nLet $B$ be a type family over $A$. Then the following are equivalent:\n\\begin{enumerate}\n\\item For each $x:A$ the type $B(x)$ is $k$-truncated.\n\\item The projection map\n\\begin{equation*}\n\\proj 1 : \\Big(\\sm{x:A}B(x)\\Big)\\to A\n\\end{equation*}\nis $k$-truncated.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nBy \\cref{ex:proj_fiber} we obtain equivalences\n\\begin{equation*}\n\\eqv{\\fib{\\proj 1}{x}}{B(x)}\n\\end{equation*}\nfor every $x:A$. Therefore the claim follows from \\cref{thm:ktype_eqv}.\n\\end{proof}\n\n\\begin{thm}\\label{thm:trunc_ap}\nLet $f:A\\to B$ be a map. The following are equivalent:\n\\begin{enumerate}\n\\item The map $f$ is $(k+1)$-truncated.\n\\item For each $x,y:A$, the map\n\\begin{equation*}\n\\apfunc{f} : (x=y)\\to (f(x)=f(y))\n\\end{equation*}\nis $k$-truncated. \n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nFirst we show that for any $s,t:\\fib{f}{b}$ there is an equivalence\n\\begin{equation*}\n\\eqv{(s=t)}{\\fib{\\apfunc{f}}{\\ct{\\proj 2(s)}{\\proj 2(t)^{-1}}}}\n\\end{equation*}\nWe do this by $\\Sigma$-induction on $s$ and $t$, and then we calculate using \\cref{ex:trans_ap} and basic manipulations of identifications that\n\\begin{align*}\n(\\pairr{x,p}=\\pairr{y,q}) & \\eqvsym \\sm{r:x=y} \\mathsf{tr}_{f(\\blank)=b}(r,p)=q \\\\\n& \\eqvsym \\sm{r:x=y} \\ct{\\ap{f}{r}^{-1}}{p}=q \\\\\n& \\eqvsym \\sm{r:x=y} \\ap{f}{r}=\\ct{p}{q^{-1}} \\\\\n& \\jdeq \\fib{\\apfunc{f}}{\\ct{p}{q^{-1}}}.\n\\end{align*}\nBy these equivalences, it follows that if $\\apfunc{f}$ is $k$-truncated, then for each $s,t:\\fib{f}{b}$ the identity type $s=t$ is equivalent to a $k$-truncated type, and therefore we obtain by \\cref{thm:ktype_eqv} that $f$ is $(k+1)$-truncated.\n\nFor the converse, note that we have equivalences\n\\begin{align*}\n\\fib{\\apfunc{f}}{p} & \\eqvsym ((x,p)=(y,\\refl{f(y)})).\n\\end{align*}\nIt follows that if $f$ is $(k+1)$-truncated, then the identity type $(x,p)=(y,\\refl{f(y)})$ in $\\fib{f}{f(y)}$ is $k$-truncated for any $p:f(x)=f(y)$. We conclude by \\cref{thm:ktype_eqv} that the fiber $\\fib{\\apfunc{f}}{p}$ is $k$-truncated. 
\n\\end{proof}\n\n\\begin{cor}\\label{cor:prop_emb}\nA map is an embedding if and only if its fibers are propositions.\n\\end{cor}\n\n\\begin{cor}\\label{thm:subtype}\nA type family $B$ over $A$ is a subtype if and only if the projection map\n\\begin{equation*}\n\\proj 1 : \\Big(\\sm{x:A}B(x)\\Big)\\to A\n\\end{equation*}\nis an embedding.\n\\end{cor}\n\n\\begin{thm}\nLet $f:\\prd{x:A}B(x)\\to C(x)$ be a family of maps. Then the following are equivalent:\n\\begin{enumerate}\n\\item For each $x:A$ the map $f(x)$ is $k$-truncated.\n\\item The induced map \n\\begin{equation*}\n\\total{f}:\\Big(\\sm{x:A}B(x)\\Big)\\to\\Big(\\sm{x:A}C(x)\\Big)\n\\end{equation*}\nis $k$-truncated.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nThis follows directly from \\cref{lem:fib_total,thm:ktype_eqv}.\n\\end{proof}\n\n\\begin{comment}\n\\begin{proof}\nBy \\cref{ex:contr_retr} it follows that if $A$ is a retract of a contractible type, then $A$ is contractible.\nFor the inductive step, suppose that the $k$-types are closed under retracts, and consider a section-retraction pair\n\\begin{equation*}\n\\begin{tikzcd}\nA \\arrow[r,\"i\"] & B \\arrow[r,\"r\"] & A,\n\\end{tikzcd}\n\\end{equation*}\nwith $H:r\\circ i\\htpy \\idfunc$, where $B$ is a $(k+1)$-type.\nBy the induction hypothesis it suffices to show that for any $x,y:A$, the function $\\apfunc{i}:(\\id{x}{y})\\to (\\id{i(x)}{i(y)})$ has a retraction.\nThe retraction $\\varphi:(\\id{i(x)}{i(y)})\\to(\\id{x}{y})$ is defined as\n\\begin{equation*}\n\\varphi \\defeq \\lam{q} \\ct{H(x)^{-1}}{\\ap{r}{q}}{H(y)}\n\\end{equation*}\nTo see that $\\varphi(\\ap{i}{p})=p$, we have to show that the square\n\\begin{equation*}\n\\begin{tikzcd}\nr(i(x)) \\arrow[d,equals,swap,\"\\ap{r}{q}\"] \\arrow[r,equals,\"H(x)\"] & x \\arrow[d,equals,\"p\"] \\\\\nr(i(y)) \\arrow[r,equals,swap,\"H(y)\"] & y\n\\end{tikzcd}\n\\end{equation*}\ncommutes. This square commutes by the naturality of homotopies, proven in \\cref{ex:htpy_nat}.\n\\end{proof}\n\\end{comment}\n\n\\begin{exercises}\n\\item\n  \\begin{subexenum}\n  \\item Show that $\\succN:\\N\\to\\N$ is an embedding.\n  \\item Show that $n\\mapsto m+n$ is an embedding, for each $m:\\N$. Moreover, conclude that there is an equivalence\n    \\begin{equation*}\n      \\fib{\\mathsf{add}_\\N(m)}{n}\\simeq (m\\leq n).\n    \\end{equation*}\n  \\item Show that $n\\mapsto mn$ is an embedding, for each $m>0$ in $\\N$. Conclude that the divisibility relation\n    \\begin{equation*}\n      d\\mid n\n    \\end{equation*}\n    is a proposition for each $d,n:\\N$ such that $d>0$. \n  \\end{subexenum}\n\\item \\label{ex:diagonal}Let $A$ be a type, and let the \\define{diagonal} of $A$ be the map $\\delta_A:A\\to A\\times A$ given by $\\lam{x}(x,x)$. \n\\begin{subexenum}\n\\item Show that\n\\begin{equation*}\n{\\isequiv(\\delta_A)}\\leftrightarrow{\\isprop(A)}.\n\\end{equation*}\n\\item Construct an equivalence $\\eqv{\\fib{\\delta_A}{(x,y)}}{(x=y)}$ for any $x,y:A$.\n\\item Show that $A$ is $(k+1)$-truncated if and only if $\\delta_A:A\\to A\\times A$ is $k$-truncated.\n\\end{subexenum}\n\\item \\label{ex:istrunc_sigma}\n\\begin{subexenum}\n\\item Let $B$ be a type family over $A$. Show that if $A$ is a $k$-type, and $B(x)$ is a $k$-type for each $x:A$, then so is $\\sm{x:A}B(x)$. Conclude that for any two $k$-types $A$ and $B$, the type $A\\times B$ is also a $k$-type. 
Hint: for the base case, use \\cref{ex:contr_in_sigma,ex:contr_equiv}.\n\\item Show that for any $k$-type $A$, the identity types of $A$ are also $k$-types.\n\\item Show that any map $f:A\\to B$ between $k$-types $A$ and $B$\nis a $k$-truncated map.\n\\item Use \\cref{ex:proj_fiber} to show that for any type family $B:A\\to \\UU$, if $A$ and $\\sm{x:A}B(x)$ are $k$-types, then so is $B(x)$ for each $x:A$. \n\\end{subexenum}\n\\item \\label{ex:eq_bool}Show that $\\bool$ is a set by applying \\cref{lem:prop_to_id} with the observational equality on $\\bool$ defined in \\cref{ex:obs_bool}.\n\\item \\label{ex:set_coprod}Show that for any two $(k+2)$-types $A$ and $B$, the disjoint sum $A+B$ is again a $(k+2)$-type. Conclude that $\\mathbb{Z}$ is a set.\n\\item Use \\cref{ex:contr_retr,ex:retr_id} to show that if $A$ is a retract of a $k$-type $B$, then $A$ is also a $k$-type.\n\\item Show that a type $A$ is a $(k+1)$-type if and only if the map $\\mathsf{const}_x:\\unit\\to A$ is $k$-truncated for every $x:A$.\n\\item Consider a commuting triangle\n\\begin{equation*}\n\\begin{tikzcd}[column sep=tiny]\nA \\arrow[rr,\"h\"] \\arrow[dr,swap,\"f\"] & & B \\arrow[dl,\"g\"] \\\\\n& X\n\\end{tikzcd}\n\\end{equation*}\nwith $H: f \\htpy g \\circ h$, and suppose that $g$ is $k$-truncated. Show that $f$ is $k$-truncated if and only if $h$ is $k$-truncated.\n% Suppose that $h$ is $k$-truncated and surjective. Show that $f$ is $(k+1)$-truncated if and only if $g$ is $(k+1)$-truncated.\n\\end{exercises}\n", "meta": {"hexsha": "1531c1cd716352cd28d7f566e532a7f6ee8d8cdc", "size": 23036, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/hierarchy.tex", "max_stars_repo_name": "tadejpetric/HoTT-Intro", "max_stars_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/hierarchy.tex", "max_issues_repo_name": "tadejpetric/HoTT-Intro", "max_issues_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/hierarchy.tex", "max_forks_repo_name": "tadejpetric/HoTT-Intro", "max_forks_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5256916996, "max_line_length": 454, "alphanum_fraction": 0.6753776697, "num_tokens": 8163, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.5909378029589041}}
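To connect the general definition of truncatedness with the earlier sections, note that the first two truncation levels unfold, directly from the definitions above, to the notions introduced before:
\begin{align*}
\istrunc{-1}(A) &\jdeq \prd{x,y:A}\iscontr(\id{x}{y}) \jdeq \isprop(A),\\
\istrunc{0}(A) &\jdeq \prd{x,y:A}\istrunc{-1}(\id{x}{y}) \jdeq \isset(A).
\end{align*}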
{"text": "\\chapter{Introduction}\n\\label{chap:intro}\n\n\\textbf{by Bastian Boll} \\\\\n\nThere is a constant need to generate subtitles for video. For large video quantities such as on sites like YouTube, machine learning systems have been employed to perform this task \\cite{youtubeFAQ}. \n\nIn this work, we aim to solve a related problem: given an audio file $A$ and a finished transcript $\\mathcal{T}$, we try to compute time alignment information. Let $\\mathcal{T}$ be an $l$-tuple of words and $A$ be a spoken-word audio signal with known length such that every word $w$ in $\\mathcal{T}$ occurs in $A$ at time $t_w$ . We try to find the mapping\n\\begin{align*}\n\tf&\\colon\\textrm{Transcripts}_l\\to\\R^l\\\\\n\t\\mathcal{T}&\\mapsto f(\\mathcal{T})\\textrm{ such that }(f(\\mathcal{T}))_w=t_w\n\\end{align*}\nBecause the transcript is given, this problem is easier to solve than general speech-to-text. Specifically, a respective model does not need to solve the tasks addressed by the language model which is typically present in a speech-to-text system \\cite{anusuya2010speech}.\n\nHowever, a good solution for the above problem is still a useful tool in generating subtitles as the laborious process of aligning a transcript to the video could be automated.\n\n\\section{The dataset}\n\n\\textbf{by Bastian Boll} \\\\\n\nWe use a dataset consisting of 2436 \\href{https://www.ted.com}{TED-talks}. Part of the metadata for these talks can be found in \\href{https://www.kaggle.com/rounakbanik/ted-talks}{a kaggle dataset}. Additional data, such as subtitles and audio files can be aquired from the TED website. Both the subtitles and the audio are permitted to be used for the described purpose by their \\href{https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy}{usage policy and license}.\\\\\nThe dataset contains approximately 560 hours of high quality English spoken audio recordings. Speakers come from a very diverse international pool. The used language is relatively erudite. Both subtitles and transcripts for each talk are of consistently good quality.\\\\\nThis dataset was chosen to be representative of the type of data which can be cheaply acquired but still contains structural information with regard to the problem at hand. This specifically represents a tradeoff between data accuracy and available data quantity. Most acoustic modelling approaches do not predict words directly. Instead, sequences of phonemes or even sub-phonetic states (as in e.g. \\cite{maas2015building}) are predicted from short intervals of an audio signal to be used in conjunction with a language model. Using much more coarsely labeled data from sources such as the one we chose prohibits the use of single phonemes or sub-phonetic states as labels. It is therefore to be expected that any model constructed to work with this data is presented with a fairly hard classification task. On the other hand, one can try and make up for this challenge by being able to source form an extremely large pool of available data. In chapter \\ref{chap:algorithm} we also describe how pre-existing knowledge such as word order can be effectively employed to reduce the accuracy requirements for the language model.\n\n\\section{Implementation}\n\n\\textbf{by Bastian Boll} \\\\\n\nAll presented methods are implemented in Python 3.6 making prevalent use of \\href{https://www.tensorflow.org/}{Tensorflow 1.4} and \\href{http://www.numpy.org/}{numpy}. 
The source code is available at \\url{https://github.com/bbboll/ml_subtitle_align}. The \\emph{Readme.md} provides additional information on how to get started.", "meta": {"hexsha": "a5f313eca659650bebc6b8c79b2b2b6d23599810", "size": 3562, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/chapters/intro.tex", "max_stars_repo_name": "bbboll/ml_subtitle_align", "max_stars_repo_head_hexsha": "3abb5628902db1021af8ff1666f37a1663574487", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-01T20:02:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T15:59:20.000Z", "max_issues_repo_path": "doc/chapters/intro.tex", "max_issues_repo_name": "bbboll/ml_subtitle_align", "max_issues_repo_head_hexsha": "3abb5628902db1021af8ff1666f37a1663574487", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/chapters/intro.tex", "max_forks_repo_name": "bbboll/ml_subtitle_align", "max_forks_repo_head_hexsha": "3abb5628902db1021af8ff1666f37a1663574487", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 122.8275862069, "max_line_length": 1126, "alphanum_fraction": 0.7888826502, "num_tokens": 831, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152324893520001, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.5909377900141101}}
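As a point of reference for the mapping $f$ defined above, the following sketch (a hypothetical baseline, not the project's model) spreads the $l$ words of a transcript uniformly over the audio duration; any learned alignment model should at least improve on it.

\begin{verbatim}
# Hypothetical baseline for f: Transcripts_l -> R^l, assigning each
# word the midpoint of its slot under a uniform speaking rate.
def uniform_alignment(transcript_words, audio_duration_s):
    l = len(transcript_words)
    return [audio_duration_s * (i + 0.5) / l for i in range(l)]
\end{verbatim}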
{"text": "\\chapter{Gradient Boosting \\label{chapter:gradientboosting}}\n\nFor several years, no one knew why AdaBoost (Chapter~\\ref{chapter:boosting}) worked. Then, in 2001, Jerome Friedman published a paper on \\textbf{gradient boosting}, which generalized the boosting idea to many problem classes. In the gradient boosting framework, of which AdaBoost is a subclass, there are three components:\n\n\\begin{enumerate}\n\\item A loss function to be optimized.\n\\item Weak learners to make predictions.\n\\item An additive model that adds the contributions of different weak learners to minimize the loss function.\n\\end{enumerate}\n\nGiven a predefined loss function and a method for building new weak learners, gradient boosting provides a principled way to create new learners and add them to the combined model. It can be used to boost learners for a wide array of different objective functions. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Gradient Boosting for Classification}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Gradient Boosting for Regression}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Gradient Boosting for Survival Analysis}", "meta": {"hexsha": "4618ebfe5994f98b930fce85f3673ab9037a426a", "size": 1277, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/mcds-gradient-boosting-xgboost.tex", "max_stars_repo_name": "blpercha/mcds-notes", "max_stars_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-12-10T16:51:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-03T01:31:23.000Z", "max_issues_repo_path": "tex/mcds-gradient-boosting-xgboost.tex", "max_issues_repo_name": "blpercha/mcds-notes", "max_issues_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/mcds-gradient-boosting-xgboost.tex", "max_forks_repo_name": "blpercha/mcds-notes", "max_forks_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-14T17:16:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-14T17:16:44.000Z", "avg_line_length": 55.5217391304, "max_line_length": 322, "alphanum_fraction": 0.646045419, "num_tokens": 223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.59093779001411}}
{"text": "\\documentclass{article}\n% Packages\n\\usepackage{booktabs}\n\\usepackage[table]{xcolor}\n\\usepackage[fixamsmath]{mathtools}\n    \\newcommand{\\vecw}{\\mathbf{w}}\n    \\newcommand{\\vecx}{\\mathbf{x}}\n    \\newcommand{\\vecy}{\\mathbf{y}}\n    \\newcommand{\\vecz}{\\mathbf{z}}\n    \\newcommand{\\vecp}{\\mathbf{p}}\n    \\newcommand{\\rvx}{\\mathbf{X}}\n    \\newcommand{\\matx}{\\mathbf{X}}\n    \\newcommand{\\matd}{\\mathbf{D}}\n    \\newcommand{\\dd}{\\mathrm{d}}\n    \\newcommand{\\diff}[2]{\\frac{\\dd #1}{\\dd #2}}\n    \\newcommand{\\partialdd}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\usepackage{unicode-math}\n    \\unimathsetup{math-style=ISO, bold-style=ISO, mathrm=sym}\n    \\setsansfont{FiraGO}[BoldFont=* SemiBold, Numbers=Monospaced]\n    \\setmathfont{Fira Math}\n\\usepackage{hyperref}\n    \\hypersetup{\n        pdfauthor={lenovo},\n        pdfcreator={Microsoft\u00ae Word 2017},\n        pdfproducer={Microsoft\u00ae Word 2017},\n    }\n\\usepackage{tikz}\n    \\usetikzlibrary{ducks,external}\n\\usepackage{fancyhdr}\n    \\pagestyle{fancy}\n    \\fancyhead{}\n        \\renewcommand{\\headrulewidth}{0pt}\n    \\fancyfoot[C]{%\n        \\tikzset{external/export=false}%\n        \\shuffleducks\n        \\begin{tikzpicture}[scale=0.5]\n            \\duck[signpost=\\scalebox{0.6}{\\thepage},\\randomhead]\n        \\end{tikzpicture}\n    }\n\\usepackage{geometry}\n    \\geometry{margin=1in}\n\\usepackage{tcolorbox}\n    \\tcbuselibrary{listings,breakable}\n    \\newtcblisting{PyListing}[1]{%\n        listing options={language=Python,numbers=left,breaklines,\n            numberstyle=\\tiny\\color{red!75!black},firstnumber=last,\n            basicstyle=\\small,keywordstyle=\\color{blue!85!black},\n            commentstyle=\\color{white!75!black}\\textit},\n        title=\\texttt{#1},listing only,breakable,\n        left=6mm,right=6mm,top=2mm,bottom=2mm,\n        colback=violet!5!white,colframe=violet!75!black}\n    \\newtcblisting{Output}{%\n        title=\\texttt{Output},listing only,breakable,\n        left=6mm,right=6mm,top=2mm,bottom=2mm,\n        colback=blue!5!white,colframe=blue!75!black}\n    \\tcbuselibrary{xparse}\n        \\DeclareTotalTCBox{\\verbbox}{ O{orange} v !O{} }{\n            fontupper=\\ttfamily,nobeforeafter,tcbox raise base,%\n            arc=0pt,outer arc=0pt,top=0pt,bottom=0pt,left=0mm,%\n            right=0mm,leftrule=0pt,rightrule=0pt,toprule=0.3mm,%\n            bottomrule=0.3mm,boxsep=0.5mm,bottomrule=0.3mm,boxsep=0.5mm,%\n            colback=#1!10!white,colframe=#1!50!black,#3}{#2}%\n    \\DeclareTotalTCBox{\\commandbox}{ s v }{\n            verbatim,colupper=white,colback=black!75!white,colframe=black}{\n                \\IfBooleanTF{#1}{\\textcolor{red}{\\ttfamily\\bfseries >> }}{}%\n                    \\lstinline[language=sh,morekeywords={python,python2,python3},keywordstyle=\\color{blue!35!white}\\bfseries]^#2^}%\n% Information\n\\title{Project 1}\n\\author{Iydon Leong}\n\\date{April 3, 2019}\n\\begin{document}\n\\maketitle\n\n\n% Question 1\n\\section{Question 1}\\label{S:1}\nAccording to the question, we can write equation~\\eqref{E:1-1}.\n\\begin{equation}\\label{E:1-1}\n    \\begin{dcases}\n        p_1(x;\\vecw) = P(y=1|\\rvx=\\vecx) = \\frac{\\exp(\\vecw^T\\vecx)}{1+\\exp(\\vecw^Tx)} \\\\\n        p_0(x;\\vecw) = P(y=0|\\rvx=\\vecx) = \\frac{1}{1+\\exp(\\vecw^Tx)}\n    \\end{dcases}\n\\end{equation}\n\nTherefore, we can conclude that\n\\begin{equation*}\n    p_{t}(x;\\vecw) = P(y=t|\\rvx=\\vecx) = 
\\frac{\\exp(t\\vecw^T\\vecx)}{1+\\exp(\\vecw^T\\vecx)},\\quad\\text{where $t\\in\\{0,1\\}$.}\n\\end{equation*}\n\nThen the log-likelihood function can be rewritten as equation~\\eqref{E:1-2}.\n\\begin{equation}\\label{E:1-2}\n    \\begin{aligned}\n        l(\\vecw) &= \\sum_{i=1}^n\\log\\left[\\frac{\\exp(y_i\\vecw^T\\vecx_i)}{1+\\exp(\\vecw^T\\vecx_i)}\\right] \\\\\n                 &= \\sum_{i=1}^n\\left[y_i\\vecw^T\\vecx_i-\\log\\left(1+e^{\\vecw^T\\vecx_i}\\right)\\right]\n    \\end{aligned}\n\\end{equation}\n\n\n% Question 2\n\\section{Question 2}\\label{S:2}\nSince $\\vecw=(w_0,w_1,\\ldots,w_d)^T$ and $\\vecx=(x_0,x_1,\\ldots,x_d)^T$, we have equation~\\eqref{E:2-1}.\n\\begin{equation}\\label{E:2-1}\n    l(w_j) = \\sum_{i=1}^n\\left[y_iw_jx_{ij}-\\log\\left(1+e^{w_jx_{ij}}\\right)\\right],\\quad j=1,2,\\ldots,d.\n\\end{equation}\n\nTo find the maximum of each log-likelihood function, we just let $\\partialdd{l(w_j)}{w_j}=0$, where $j=1,2,\\ldots,d$. That is,\n\\begin{equation}\\label{E:2-2}\n    \\begin{aligned}\n        0 &= \\partialdd{l(w_j)}{w_j} \\\\\n          &= \\sum_{i=1}^n\\left[y_ix_{ij}-\\frac{x_{ij}e^{w_jx_{ij}}}{1+e^{w_jx_{ij}}}\\right] \\\\\n          &= \\sum_{i=1}^nx_{ij}\\left(y_i-p(\\vecx_i;\\vecw)\\right) \\\\\n          &= \\vecx_{\\cdot j}^T\\left(\\vecy-p(\\matx;\\vecw)\\right).\n    \\end{aligned}\n\\end{equation}\n\nMoreover, we can get the vector form\n\\begin{equation}\\label{E:2-3}\n    \\partialdd{l(\\vecw)}{\\vecw} = \\matx^T\\left(\\vecy-p(\\matx;\\vecw)\\right).\n\\end{equation}\n\n\n% Question 3\n\\section{Question 3}\\label{S:3}\nFrom equation~\\eqref{E:2-2}, we have the specific forms of $\\partialdd{l(w_j)}{w_j}$, where $j=1,2,\\ldots,d$. In addition, we can derive the expression of $\\partialdd{^2l(w_j)}{w_j^2}$.\n\\begin{equation}\\label{E:3-1}\n    \\begin{aligned}\n        \\partialdd{^2l(w_j)}{w_j^2} &= \\partialdd{}{w_j}\\partialdd{l(w_j)}{w_j} \\\\\n                                    &= \\sum_{i=1}^n-\\frac{x_{ij}x_{ij}e^{w_jx_{ij}}\\left(1+e^{w_jx_{ij}}\\right)-x_{ij}e^{w_jx_{ij}}x_{ij}e^{w_jx_{ij}}}{\\left(1+e^{w_jx_{ij}}\\right)^2} \\\\\n                                    &= \\sum_{i=1}^n-\\frac{x_{ij}^2e^{w_jx_{ij}}}{\\left(1+e^{w_jx_{ij}}\\right)^2}\n    \\end{aligned}\n\\end{equation}\n\nWe can rewrite equation~\\eqref{E:3-1} in vector form.\n\\begin{equation}\\label{E:3-2}\n    \\begin{aligned}\n        \\partialdd{^2l(\\vecw)}{\\vecw\\partial\\vecw^T} &= -\\sum_{i=1}^n\\vecx_i\\vecx_i^Tp\\left(\\vecx_i;\\vecw\\right)\\left(1-p(\\vecx_i;\\vecw)\\right) \\\\\n                                                     &= -\\matx^T\\Lambda\\matx\n    \\end{aligned}\n\\end{equation}\nwhere $\\Lambda$ is an $n\\times n$ diagonal matrix with diagonal entries $p\\left(\\vecx_i;\\vecw\\right)\\left(1-p(\\vecx_i;\\vecw)\\right)$.\n\n\n% Question 4\n\\section{Question 4}\\label{S:4}\n\\begin{equation}\\label{E:4-1}\n    \\vecw^{(k)} = \\arg\\min_{\\vecw}(\\vecz-\\matx\\vecw)^T\\matd(\\vecz-\\matx\\vecw)\n\\end{equation}\n\nFirst of all, we should know the sizes of the matrices and vectors to avoid incompatible operations. 
This can be checked through table~\\ref{T:4}.\n\\begin{table}[htb]\n    \\centering\n    \\begin{tabular}{cm{7em}cm{15em}}\n        \\toprule\n        Symbols   & Visual Size                         & Size        & Description \\\\ \\midrule\n        \\rowcolor[HTML]{EFEFEF} \n        $\\vecp$   & \\textcolor{violet!50!white}{\\rule{1em}{7em}} & $n\\times 1$ & the column vector of $p(\\vecx_i;\\vecw^{(k-1)})$ \\\\\n        $\\vecw$   & \\textcolor{violet!50!white}{\\rule{1em}{3em}} & $d\\times 1$ & weight \\\\\n        \\rowcolor[HTML]{EFEFEF} \n        $\\vecx_i$ & \\textcolor{violet!50!white}{\\rule{1em}{3em}} & $d\\times 1$ & the column vector sample $x_i$ \\\\\n        $\\vecy$   & \\textcolor{violet!50!white}{\\rule{1em}{7em}} & $n\\times 1$ & the column vector of $y_i$ \\\\\n        \\rowcolor[HTML]{EFEFEF} \n        $\\vecz$   & \\textcolor{violet!50!white}{\\rule{1em}{7em}} & $n\\times 1$ & $\\matx\\vecw^{(k-1)}+\\matd^{-1}(\\vecy-\\vecp)$ \\\\\n        $\\matd$   & \\textcolor{violet!50!white}{\\rule{7em}{7em}} & $n\\times n$ & matrix with diagonal entries as $p(\\vecx_i;\\vecw^{(k-1)})(1-p(\\vecx_i;\\vecw^{(k-1)}))$ \\\\\n        \\rowcolor[HTML]{EFEFEF} \n        $\\matx$   & \\textcolor{violet!50!white}{\\rule{3em}{7em}} & $n\\times d$ & matrix with row entries as $\\vecx_i$ \\\\ \\bottomrule\n    \\end{tabular}\n    \\caption{Visual Size of Matrices and Vectors}\\label{T:4}\n\\end{table}\n\nInspired by the least-squares algorithm, we then differentiate the evaluation function in equation~\\eqref{E:4-2}.\n\\begin{equation}\\label{E:4-2}\n    \\begin{aligned}\n        \\partialdd{(\\vecz-\\matx\\vecw)^T\\matd(\\vecz-\\matx\\vecw)}{\\vecw} &= -2\\matx^T\\matd(\\vecz-\\matx\\vecw) \\\\\n                                                                       &= 2\\matx^T\\matd\\matx\\vecw-2\\matx^T\\matd\\vecz \\\\\n                                                                       &= 0\n    \\end{aligned}\n\\end{equation}\nwhere $\\vecz=\\matx\\vecw^{(k-1)}+\\matd^{-1}(\\vecy-\\vecp)$. We can then get equation~\\eqref{E:4-3} by solving $(\\matx^T\\matd\\matx)\\vecw = \\matx^T\\matd\\vecz$.\n\\begin{equation}\\label{E:4-3}\n    \\begin{aligned}\n        \\vecw &= (\\matx^T\\matd\\matx)^{-1}\\matx^T\\matd\\vecz \\\\\n              &= (\\matx^T\\matd\\matx)^{-1}\\matx^T\\matd\\left(\\matx\\vecw^{(k-1)}+\\matd^{-1}(\\vecy-\\vecp)\\right) \\\\\n              &= \\vecw^{(k-1)}+(\\matx^T\\matd\\matx)^{-1}\\matx^T(\\vecy-\\vecp)\n    \\end{aligned}\n\\end{equation}\n\nGoing back to question~\\ref{S:2} and question~\\ref{S:3}, or more precisely, to equation~\\eqref{E:2-3} and equation~\\eqref{E:3-2}, the Newton-Raphson update is\n\\begin{equation}\\label{E:4-4}\n    \\vecw^{(k)} = \\vecw^{(k-1)} - \\left[\\left(\\partialdd{^2l(\\vecw)}{\\vecw\\partial\\vecw^T}\\right)^{-1}\\partialdd{l(\\vecw)}{\\vecw}\\right]_{\\vecw=\\vecw^{(k-1)}}\n\\end{equation}\n\nThat is, with the new notation, the Newton-Raphson update in equation~\\eqref{E:4-4} can be rewritten as equation~\\eqref{E:4-1}, where $\\vecz=\\matx\\vecw^{(k-1)}+\\matd^{-1}(\\vecy-\\vecp)$. This is the so-called iteratively reweighted least squares (IRLS) algorithm.\n\n\\clearpage\n\n\n% Question 5-7\n\\section{Question 5--7}\\label{S:5-7}\nThere are two \\verbbox{Python} programs: one implements the IRLS algorithm, the other computes and compares the results between my own program and the sklearn program. 
\n\n\\clearpage\n\n\n% Question 5-7\n\\section{Question 5--7}\\label{S:5-7}\nThere are two \\verbbox{Python} programs: one implements the IRLS algorithm, and the other computes and compares the results between my own program and the sklearn program. The results are shown at the end.\n\\begin{PyListing}{logistic\\_regression.py}\nfrom numpy import array, diag, exp, matrix, power, zeros\nfrom numpy.linalg import inv, norm\n\n\nclass logistic_regression_IRLS(object):\n    \"\"\"\n    Maximum likelihood approach for logistic regression\n    \"\"\"\n    def __init__(self):\n        self.w = None\n\n    def fit(self, X, y, w0=None, erf=None, maxit=None):\n        self.__newton_raphson(X, y, w0, erf, maxit)\n\n    def predict(self, X):\n        y = X * self.w\n        return array((y>0).astype(int).T)[0]\n\n    def __newton_raphson(self, X, y, w0=None, erf=None, maxit=None):\n        \"\"\"\n        Args:\n            X:     nxd matrix.\n            y:     nx1 column vector.\n            w0:    dx1 column vector.\n            erf:   error function with two parameters.\n                   erf = lambda old,new: ...\n            maxit: maximum number of iterations.\n        \"\"\"\n        # Pre-process\n        X   = matrix(X)\n        n,d = X.shape\n        y   = matrix(y).reshape(n, 1)\n        if w0 is None:\n            w0 = zeros((d, 1))\n        if erf is None:\n            erf = lambda old,new: norm(old-new,2)<1e-6\n        if maxit is None:\n            maxit = 64\n        w1 = zeros((d, 1))\n        ex = zeros((n, 1))\n        # Iterations\n        for i in range(maxit):\n            ex = exp(X*w0).T\n            D  = diag(array(ex/power(1+ex,2))[0])\n            w1 = w0 + inv(X.T*D*X)*X.T*(y-(ex/(1+ex)).T)\n            if erf(w0, w1):\n                print(\"Convergence @ the %d iteration(s).\"%i)\n                break\n            w0 = w1\n        self.w = w1\n\\end{PyListing}\n\n\\begin{PyListing}{main.py}\nimport numpy as np\n\nfrom logistic_regression import logistic_regression_IRLS\nfrom sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score\n\nfrom sklearn.linear_model import LogisticRegression as Regression\n\n\ndef read_dat(path):\n    # raw text\n    encode = \"utf-8\"\n    with open(path, \"r\", encoding=encode) as f:\n        while True:\n            content = f.readline()\n            if content[0] != \"@\":\n                break\n        content += f.read()\n    # analysis\n    data = [line.split(\", \") for line in content.splitlines()]\n    one_hot = {\"Absent\":0, \"Present\":1}\n    for i,k in enumerate(data):\n        data[i][4] = one_hot.get(k[4], -1)\n    data = np.array(data, dtype=float)\n    # input and output\n    return data[:,:-1], data[:,-1].astype(int)\n\ndef evaluation(y_true, y_pred):\n    # data\n    c = confusion_matrix(y_true, y_pred)\n    a = accuracy_score(y_true, y_pred)\n    p = precision_score(y_true, y_pred)\n    r = recall_score(y_true, y_pred)\n    # output\n    print(\"Accuracy classification score:  %.3f\"%a)\n    print(\"Precision classification score: %.3f\"%p)\n    print(\"Recall classification score:    %.3f\"%r)\n    print(\"Confusion matrix:\\n%s\"%c)\n\n\nfor i in range(5):\n    train = \"saheart-5-%dtra.dat\"%(i+1)\n    test  = \"saheart-5-%dtst.dat\"%(i+1)\n    X_train,y_train = read_dat(train)\n    X_test,y_test   = read_dat(test)\n\n    model1 = logistic_regression_IRLS()\n    model1.fit(X_train, y_train)\n\n    model2 = Regression(solver=\"newton-cg\")\n    model2.fit(X_train, y_train)\n\n    y_pred1 = model1.predict(X_test)\n    y_pred2 = model2.predict(X_test)\n\n    print(train, test)\n    print(\"-\"*37)\n    print(\"IRLS Algorithm\" + \"-\"*23)\n    evaluation(y_test, y_pred1)\n    print(\"LogisticRegression\" + \"-\"*19)\n    evaluation(y_test, y_pred2)\n    print(\"-\"*37, \"\\n\"*3)\n\\end{PyListing}
\"\\n\"*3)\n\\end{PyListing}\n\n\\twocolumn\n\\begin{Output}\nsaheart-5-1tra.dat saheart-5-1tst.dat\n-------------------------------------\nIRIS Algorithm-----------------------\nAccuracy classification score:  0.731\nPrecision classification score: 0.684\nRecal classification score:     0.406\nConfusion matrix:\n[[55  6]\n [19 13]]\nLogisticRegression-------------------\nAccuracy classification score:  0.731\nPrecision classification score: 0.640\nRecal classification score:     0.500\nConfusion matrix:\n[[52  9]\n [16 16]]\n-------------------------------------\n\n\n\nsaheart-5-2tra.dat saheart-5-2tst.dat\n-------------------------------------\nIRIS Algorithm-----------------------\nAccuracy classification score:  0.774\nPrecision classification score: 0.739\nRecal classification score:     0.531\nConfusion matrix:\n[[55  6]\n [15 17]]\nLogisticRegression-------------------\nAccuracy classification score:  0.763\nPrecision classification score: 0.727\nRecal classification score:     0.500\nConfusion matrix:\n[[55  6]\n [16 16]]\n-------------------------------------\n\n\n\nsaheart-5-3tra.dat saheart-5-3tst.dat\n-------------------------------------\nIRIS Algorithm-----------------------\nAccuracy classification score:  0.739\nPrecision classification score: 0.682\nRecal classification score:     0.469\nConfusion matrix:\n[[53  7]\n [17 15]]\nLogisticRegression-------------------\nAccuracy classification score:  0.728\nPrecision classification score: 0.667\nRecal classification score:     0.438\nConfusion matrix:\n[[53  7]\n [18 14]]\n-------------------------------------\n\n\n\nsaheart-5-4tra.dat saheart-5-4tst.dat\n-------------------------------------\nIRIS Algorithm-----------------------\nAccuracy classification score:  0.707\nPrecision classification score: 0.586\nRecal classification score:     0.531\nConfusion matrix:\n[[48 12]\n [15 17]]\nLogisticRegression-------------------\nAccuracy classification score:  0.739\nPrecision classification score: 0.633\nRecal classification score:     0.594\nConfusion matrix:\n[[49 11]\n [13 19]]\n-------------------------------------\n\n\n\nsaheart-5-5tra.dat saheart-5-5tst.dat\n-------------------------------------\nIRIS Algorithm-----------------------\nAccuracy classification score:  0.609\nPrecision classification score: 0.429\nRecal classification score:     0.375\nConfusion matrix:\n[[44 16]\n [20 12]]\nLogisticRegression-------------------\nAccuracy classification score:  0.685\nPrecision classification score: 0.552\nRecal classification score:     0.500\nConfusion matrix:\n[[47 13]\n [16 16]]\n-------------------------------------\n\\end{Output}\n\n\n% Question 8\n\\section{Question 8}\nYes, I can improve my results for the following reasons exist.\n\\begin{enumerate}\n    \\item The data I use does not have regularization.\n    \\item The diagonal matrices can use sparse matrix algorithm to calculate its inverse and multiplication.\n    \\item Python is flexible but slow, we can use some compiled language to speed up computing.\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "a656bdee35b9eeb2c8912855d5e5e136baf0fb94", "size": 15611, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MA333/Project1/main.tex", "max_stars_repo_name": "iydon/homework", "max_stars_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-10-20T08:18:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-11T12:14:56.000Z", 
"max_issues_repo_path": "MA333/Project1/main.tex", "max_issues_repo_name": "AllenYZB/homework", "max_issues_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2022-01-13T03:04:10.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:10.000Z", "max_forks_repo_path": "MA333/Project1/main.tex", "max_forks_repo_name": "AllenYZB/homework", "max_forks_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-11-02T05:46:01.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-12T23:11:28.000Z", "avg_line_length": 36.4742990654, "max_line_length": 251, "alphanum_fraction": 0.6048299276, "num_tokens": 5208, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5908436474786977}}
{"text": "\\documentclass[a4paper,man,natbib]{apa6}\n\\usepackage[english]{babel}\n\n\\usepackage[cache=false]{minted}\n\\usemintedstyle{vs}\n\\usepackage{xcolor}\n\\definecolor{bg}{rgb}{.95,.95,.95}\n\n\\graphicspath{ {./images/} }\n\\usepackage{graphicx}\n\\usepackage{caption}\n\n\\usepackage{setspace}\n%\\usepackage{titlesec}\n%\\titleformat{\\subsection}[runin]% runin puts it in the same paragraph\n%\t{\\normalfont\\bfseries}% formatting commands to apply to the whole heading\n%\t{\\thesubsection}% the label and number\n%\t{0.5em}% space between label/number and subsection title\n%\t{}% formatting commands applied just to subsection title\n%\t[]% punctuation or other commands following subsection title\n% End Packages %\n\n\\title{Advanced Statistical Methods Homework 3}\n\\shorttitle{DAT 530 HW3}\n\\author{Brandon Hosley}\n\\date{\\today}\n\\affiliation{University of Illinois - Springfield}\n%\\abstract{}\n\n\\begin{document}\n\\maketitle\n\\singlespacing\n\n\\section{Introduction to Statistical Learning \\\\ Chapter 3: Problem 15}\nThis problem involves the \\textbf{\\textcolor{red}{Boston}} data set, \nwhich we saw in the lab for this chapter. \nWe will now try to predict per capita crime rate\nusing the other variables in this data set. \nIn other words, per capita crime rate is the response, \nand the other variables are the predictors.\n\n\\begin{minted}[bgcolor=bg]{r}\n# Load Boston data set\nlibrary(MASS)\nmount(Boston)\n\\end{minted}\n\n\\subsection{(a)} \n\\emph{\n\tFor each predictor, \n\tfit a simple linear regression model to predict the response. \n\tDescribe your results. \n\tIn which of the models is there a statistically significant association\n\tbetween the predictor and the response? \n\tCreate some plots to back up your assertions.}\n\\includegraphics[width=\\linewidth]{LinearMatrix}\n\\begin{minted}[bgcolor=bg]{r}\npar(mfrow=c(2,7))\nplot(crim,  crim); abline(lm(crim~crim)) \nplot(zn,    crim); abline(lm(crim~zn)) \nplot(indus, crim); abline(lm(crim~indus)) \nplot(chas,  crim); abline(lm(crim~chas))\nplot(nox,   crim); abline(lm(crim~nox)) \nplot(rm,    crim); abline(lm(crim~rm))  \nplot(age,   crim); abline(lm(crim~age))  \nplot(dis,   crim); abline(lm(crim~dis))  \nplot(rad,   crim); abline(lm(crim~rad))  \nplot(tax,   crim); abline(lm(crim~tax))    \nplot(ptratio, crim); abline(lm(crim~ptratio))\nplot(black, crim); abline(lm(crim~black))\nplot(lstat, crim); abline(lm(crim~lstat))\nplot(medv,  crim); abline(lm(crim~medv))\n\\end{minted}\n\nIt appears that only certain predictors have slopes not near-zero or near-infinite:\nzn, indus, age, rad, lstat, medv.\n\n\\subsection{(b)}\n\\emph{\n\tFit a multiple regression model to predict the \n\tresponse using all of the predictors. \n\tDescribe your results. \n\tFor which predictors can we reject the null hypothesis \n\t$H_0 : \\beta_j = 0$?\n}\n\\begin{minted}[bgcolor=bg]{r}\nsummary(lm(crim~., data=Boston))\n\\end{minted}\nZn, dis, rad, black, and medv are the predictors that allow rejection of null hypothesis with a fairly high confidence.\n\n\\subsection{(c)}\n\\emph{\n\tHow do your results from (a) compare to your results from (b)?\n\tCreate a plot displaying the univariate regression coefficients\n\tfrom (a) on the x-axis, and the multiple regression coefficients\n\tfrom (b) on the y-axis. That is, each predictor is displayed as a\n\tsingle point in the plot. 
Its coefficient in a simple linear regression model is shown on the x-axis, and its coefficient estimate\n\tin the multiple linear regression model is shown on the y-axis.\n} \\\\\n\\begin{minted}[bgcolor=bg]{r}\npredictors <- names(Boston)\ny <- coefficients(lm(crim~., data=Boston))\n# placeholder zero so that x lines up with the 14 entries of names(Boston)\nx <- vector(mode=\"numeric\", length=1)\nx <- append(x,coefficients(lm(crim~zn))[[2]])\nx <- append(x,coefficients(lm(crim~indus))[[2]])\nx <- append(x,coefficients(lm(crim~chas))[[2]])\nx <- append(x,coefficients(lm(crim~nox))[[2]])\nx <- append(x,coefficients(lm(crim~rm))[[2]])\nx <- append(x,coefficients(lm(crim~age))[[2]])\nx <- append(x,coefficients(lm(crim~dis))[[2]])\nx <- append(x,coefficients(lm(crim~rad))[[2]])\nx <- append(x,coefficients(lm(crim~tax))[[2]])\nx <- append(x,coefficients(lm(crim~ptratio))[[2]])\nx <- append(x,coefficients(lm(crim~black))[[2]])\nx <- append(x,coefficients(lm(crim~lstat))[[2]])\nx <- append(x,coefficients(lm(crim~medv))[[2]])\n\ndf <- data.frame(predictors,x,y)\n\nlibrary(ggplot2)\nlibrary(ggrepel)\np <- ggplot(df, aes(x,y))\np <- p + geom_point()\np <- p + geom_text_repel(aes(x,y,label=predictors))\np <- p + coord_cartesian(xlim=c(-3,2),ylim=c(-1,0.75))\np <- p + geom_abline(intercept=0,slope=1)\np\n\\end{minted}\n\\includegraphics[width=\\linewidth]{c-ratio}\nThis figure shows that for all predictors other than nox, the coefficient ratios are close to 1. For points above the plotted line the predictors are given a higher coefficient in the multiple regression model than they would receive on their own.\n\n\\subsection{(d)}\n\\emph{\n\tIs there evidence of non-linear association between any of the\n\tpredictors and the response? To answer this question, for each\n\tpredictor $X$, fit a model of the form }\\\\\n\t$Y = \\beta_0 + \\beta_1X + \\beta_2X^2 + \\beta_3X^3 + \\epsilon$.\n\t\n\\begin{minted}[bgcolor=bg]{r}\nsummary(lm(crim~poly( zn,    3, raw=TRUE)))\nsummary(lm(crim~poly( chas,  3, raw=TRUE)))\nsummary(lm(crim~poly( indus, 3, raw=TRUE)))\nsummary(lm(crim~poly( nox,   3, raw=TRUE)))\nsummary(lm(crim~poly( rm,    3, raw=TRUE)))\nsummary(lm(crim~poly( age,   3, raw=TRUE)))\nsummary(lm(crim~poly( dis,   3, raw=TRUE)))\nsummary(lm(crim~poly( rad,   3, raw=TRUE)))\nsummary(lm(crim~poly( tax,   3, raw=TRUE)))\nsummary(lm(crim~poly(ptratio,3, raw=TRUE)))\nsummary(lm(crim~poly( black, 3, raw=TRUE)))\nsummary(lm(crim~poly( lstat, 3, raw=TRUE)))\nsummary(lm(crim~poly( medv,  3, raw=TRUE)))\n\\end{minted}\n\nFrom this information we can determine that certain predictors bear a non-linear relationship to the crime statistic. 
\\\\\ndis, medv, nox, ptratio, and rm all have significant coefficients associated with their $X^2$ and $X^3$ terms.\n\t\n\\end{document}", "meta": {"hexsha": "202373a30baf1793b374571d44913a95a599d2a5", "size": 5746, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2020-Fall Advanced Statistical Methods/HW 03/HW 03.2 - Hosley.tex", "max_stars_repo_name": "bhosley/Schoolwork", "max_stars_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2020-Fall Advanced Statistical Methods/HW 03/HW 03.2 - Hosley.tex", "max_issues_repo_name": "bhosley/Schoolwork", "max_issues_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2020-Fall Advanced Statistical Methods/HW 03/HW 03.2 - Hosley.tex", "max_forks_repo_name": "bhosley/Schoolwork", "max_forks_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3670886076, "max_line_length": 247, "alphanum_fraction": 0.7198050818, "num_tokens": 1773, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5908436393361247}}
{"text": "\\documentclass[inequalities.tex]{subfile}\n\n\\begin{document}\n\t\\section{IrMO}\\label{sec:irmo}\n\t\n\t\t\\begin{problem}[$2002$ Team Selection Test, problem $6$]\n\t\t\tLet $x_{1},\\ldots,x_{n}$ be positive real numbers such that $x_{1}^{2}+\\ldots+x_{n}^{2}=n$, $\\lambda$ be a real number such that $0\\leq \\lambda\\leq 1$ and $s$ be a positive real number such that $\\sum_{i=1}^{n}x_{i}\\geq s$. Prove that at least\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left\\lceil\\dfrac{s^{2}(1-\\lambda)^{2}}{n}\\right\\rceil\n\t\t\t\t\\end{align*}\n\t\t\tof these numbers are larger than $\\frac{\\lambda s}{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$ Round $2$, problem $1$]\n\t\t\tLet $x,y,z$ be real numbers such that $xyz=-1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx^{4}+y^{4}+z^{4}+3(x+y+z)\n\t\t\t\t\t\t& \\geq \\dfrac{x^{2}}{y}+\\dfrac{y^{2}}{z}+\\dfrac{z^{2}}{x}+\\dfrac{y^{2}}{x}+\\dfrac{z^{2}}{x}+\\dfrac{x^{2}}{z}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$ Team Selection Test, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{n}$ be real numbers and\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{a_{1}+\\ldots+a_{n}}{n}\n\t\t\t\t\t\t& = m\\\\\n\t\t\t\t\t\\dfrac{a_{1}^{2}+\\ldots+a_{n}^{2}}{n}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tIf there is an $i$ such that $a_{i}\\leq m$, prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tn-i\n\t\t\t\t\t\t& \\geq n(m-a_{i})^{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, problem $4$]\n\t\t\tLet $x_{1},\\ldots,x_{n}$ be real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i,j=1}^{n}|x_{i}+x_{j}|\n\t\t\t\t\t\t& \\geq n\\sum_{i=1}^{n}|x_{i}|\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Team Selection Test, problem $5$]\n\t\t\tLet $a,b,c$ be positive real numbers such that $ab+bc+ca=1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{a^{3}+a}+\\sqrt{b^{3}+b}+\\sqrt{c^{3}+c}\n\t\t\t\t\t\t& \\geq 2\\sqrt{a+b+c}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Team Selection Test, problem $3$]\n\t\t\tLet $a,b,c$ be positive real numbers such that $a+b+c=3$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{2+a^{2}+b^{2}}+\\dfrac{1}{2+b^{2}+c^{2}}+\\dfrac{1}{2+c^{2}+a^{2}}\n\t\t\t\t\t\t& \\leq \\dfrac{3}{4}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Third Round, problem $6$]\n\t\t\tWe call a sequence of real numbers $a_{10},\\ldots,a_{1389}$ \\textit{concave} if $2a_{i}\\geq a_{i-1}+a_{i+1}$ for $0<i<1389$. Find the largest real number $c$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=0}^{1389}ia_{i}\n\t\t\t\t\t\t& \\geq c\\sum_{i=0}^{1389}a_{i}^{2}\n\t\t\t\t\\end{align*}\n\t\t\tfor all concave sequence of non-negative real numbers $(a_{n})$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$ Third Round, problem $10$]\n\t\t\tFind the smallest real number $k$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{(a^{2}+1)(b^{2}+1)(c^{2}+1)}+\\sqrt{(b^{2}+1)(c^{2}+1)(d^{2}+1)}+\\sqrt{(c^{2}+1)(d^{2}+1)(a^{2}+1)}+\\sqrt{(d^{2}+1)(a^{2}+1)(b^{2}+1)}\n\t\t\t\t\t\t& \\geq 2(ab+bc+cd+da+ac+bd)-k\n\t\t\t\t\\end{align*}\n\t\t\tfor all real numbers $a,b,c,d$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2012$ Third Round, problem $10$]\n\t\t\tLet $a,b,c$ be positive real numbers such that $ab+bc+ca=1$. 
Show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{3}(\\sqrt{a}+\\sqrt{b}+\\sqrt{c})\n\t\t\t\t\t\t& \\leq \\dfrac{a\\sqrt{a}}{bc}+\\dfrac{b\\sqrt{b}}{ca}+\\dfrac{c\\sqrt{c}}{ab}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2013$ Team Selection Test, problem $11$]\n\t\t\tLet $a,b,c$ be the sides of a triangle such that $a\\geq b\\geq c$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{a(a+b-\\sqrt{ab})}+\\sqrt{b(b+c-\\sqrt{bc})}+\\sqrt{c(c+a-\\sqrt{ca})}\n\t\t\t\t\t\t& \\geq a+b+c\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Round $2$, problem $3$]\n\t\t\tLet $x,y,z$ be non-negative real numbers such that $x^{2}+y^{2}+z^{2}=2(xy+yz+zx)$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{x+y+z}{3}\n\t\t\t\t\t\t& \\geq \\sqrt[3]{2xyz}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Team Selection Test, problem $5$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n+1}$ be positive real numbers such that $x_{1}\\cdots x_{n}=1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt[x_{1}]{n}+\\ldots+\\sqrt[x_{n+1}]{n}\n\t\t\t\t\t\t& \\geq n^{\\sqrt[n]{x_{1}}}+\\ldots+n^{\\sqrt[n]{x_{n+1}}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Team Selection Test $2$, problem $5$]\n\t\t\tLet $x,y,z$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx^{2}+y^{2}+z^{2}\n\t\t\t\t\t\t& = x^{2}y^{2}+y^{2}z^{2}+z^{2}x^{2}\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left((x-y)(y-z)(z-x)\\right)^{2}\n\t\t\t\t\t\t& \\leq 2\\left((x^{2}-y^{2})^{2}+(y^{2}-z^{2})^{2}+(z^{2}-x^{2})^{2}\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2015$ Team Selection Test, problem $6$]\n\t\t\tIf $a,b,c$ are positive real numbers such that $a+b+c=abc$, prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{abc}{3\\sqrt{2}}\\left(\\sum_{cyc}\\dfrac{\\sqrt{a^{3}+b^{3}}}{ab+1}\\right)\n\t\t\t\t\t\t& \\geq \\sum_{cyc}\\dfrac{a}{a^{2}+1}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2016$ Team Selection Test, problem $1$]\n\t\t\tLet $a,b,c,d$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{a+1}+\\dfrac{1}{b+1}+\\dfrac{1}{c+1}+\\dfrac{1}{d+1}\n\t\t\t\t\t\t& = 2\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{\\dfrac{a^{2}+1}{2}}+\\sqrt{\\dfrac{b^{2}+1}{2}}+\\sqrt{\\dfrac{c^{2}+1}{2}}+\\sqrt{\\dfrac{d^{2}+1}{2}}\n\t\t\t\t\t\t& \\geq 3(\\sqrt{a}+\\sqrt{b}+\\sqrt{c}+\\sqrt{d})-8\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2016$ Round $2$, problem $1$]\n\t\t\tLet $a,b,c$ be positive real numbers such that $c\\geq b\\geq a$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{(c-a)^{2}}{6c}\n\t\t\t\t\t\t& \\leq \\dfrac{a+b+c}{3}-\\dfrac{3}{\\dfrac{1}{a}+\\dfrac{1}{b}+\\dfrac{1}{c}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2017$ Round $2$, problem $4$]\n\t\t\tLet $x,y$ be two positive real numbers such that $x^{4}-y^{4}=x-y$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{x-y}{x^{6}-y^{6}}\n\t\t\t\t\t\t& \\leq \\dfrac{4}{3}(x+y)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2017$ Team Selection Test, problem $1$]\n\t\t\tLet $a,b,c,d$ be positive real numbers such that $a+b+c+d=2$. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{(a+c)^{2}}{ad+bc}+\\dfrac{(b+d)^{2}}{ac+bd}+4\n\t\t\t\t\t\t& \\geq 4\\left(\\dfrac{a+b+1}{c+d+1}+\\dfrac{c+d+1}{a+b+1}\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2018$ Team Selection Test, problem $2$]\n\t\t\tFind the smallest real number $k$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\dfrac{2a}{a-b}\\right)^{2}+\\left(\\dfrac{2b}{b-c}\\right)^{2}+\\left(\\dfrac{2c}{c-a}\\right)^{2}+k\n\t\t\t\t\t\t& \\geq 4\\left(\\dfrac{2a}{a-b}+\\dfrac{2b}{b-c}+\\dfrac{2c}{c-a}\\right)\n\t\t\t\t\\end{align*}\n\t\t\tfor all real numbers $a,b,c$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2020$ Second Round, problem $2$]\n\t\t\tLet $x,y,z$ be positive real numbers such that $x+y+z=1399$. Find\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\left(\\lfloor x\\rfloor y,\\lfloor y\\rfloor z,\\lfloor z\\rfloor x\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "ab3466b6b77712ef10ad68dc1d6a77f7cfae062b", "size": 6700, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "irmo.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "irmo.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "irmo.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.4130434783, "max_line_length": 246, "alphanum_fraction": 0.566119403, "num_tokens": 2992, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5908436350205422}}
{"text": "\\section{Thursday, 18 October 2018}\n\n\\subsection{Steps}\n\\begin{itemize}\n\\item We get the columns that ends with ``MEAN'' because that columns represent the average values. In that way the dataset is more representative. After that, we work on the visualization.\n\\item We have done a \\texttt{PCA} attempt but the \\texttt{PCA} fails because we haven't deleted the version column and there are rows where the version number is not a float, it's a string like ``x.x.x'', so we eliminate the version column.\nThe covariance is pretty high, about $0.85$.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{../../reports/figures/PCA_AccelerometerStat.png}\n\\caption{Principal Component Analysis}\n\\label{fig:pca}\n\\end{figure}\n\n\\item Next, we start clustering with k-means. We analyse different groups in order to interpret the plots. We distribute the job: Ivan analyse the magnetic field, Benjamin the gyroscope and D\u00eddimo the accelerometer.\n\\item If we do k-means with the complete dataset, the computer is out of memory, that is the reason why we only consider the values of 5 days.\n\\item We do k-means in a range of 2 to 50 clusters to know what is the ideal number of clusters. In order to get it, we use the \\textit{silhouette} method and we analyse which is the best coefficient. When we already know the ideal number of clusters, we plot the graphic, showing the different groups. \n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{../../reports/figures/KMeans_AccelerometerStat.png}\n\\caption{K-means clustering algorithm with best number of clusters}\n\\label{fig:k-means}\n\\end{figure}\n\\end{itemize}\n", "meta": {"hexsha": "3065af2ce7d4794622c65e7aaa401a6e8e0f1789", "size": 1638, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/LabBook/18_10_18.tex", "max_stars_repo_name": "ivangarrera/MachineLearning", "max_stars_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/LabBook/18_10_18.tex", "max_issues_repo_name": "ivangarrera/MachineLearning", "max_issues_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/LabBook/18_10_18.tex", "max_forks_repo_name": "ivangarrera/MachineLearning", "max_forks_repo_head_hexsha": "c13584cdcb7c4df1ab2814cf42a3c2bd3c203e75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.6666666667, "max_line_length": 303, "alphanum_fraction": 0.7753357753, "num_tokens": 419, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624688140726, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5908436273665612}}
{"text": "\\chapter{Assignment-Probability}\n\\begin{enumerate}\n\t\\item Consider a random walker on a square lattice. At each step the walker moves to a nearest neighbour site with equal probability for each of the four sites. The walker starts at the origin and takes 3 steps. The probability that during this walk no site is visited more than once is\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{9}{16}$\n\t\t\\task[\\textbf{b.}]$\\frac{27}{64}$\n\t\t\\task[\\textbf{c.}]$\\frac{3}{8}$\n\t\t\\task[\\textbf{d.}]$\\frac{4}{9}$\n\t\\end{tasks}\n\t\\item There are two baskets. Baskets I contains 3 red and 4 blue balls while Basket II contains 4 red and 3 blue balls. A ball is transferred from box I to box II and then a ball from box II is drawn, what is the probability that it is blue?\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{17}{56}$\n\t\t\\task[\\textbf{b.}]$\\frac{19}{56}$\n\t\t\\task[\\textbf{c.}]$\\frac{21}{56}$\n\t\t\\task[\\textbf{d.}] $\\frac{25}{56}$\n\t\\end{tasks}\n\t\\item If the distribution function of $x$ is $f(x)=x e^{-x / \\lambda}$ over the interval $0<x<\\infty$, the mean value of $x$ is\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\lambda$\n\t\t\\task[\\textbf{b.}]$2 \\lambda$\n\t\t\\task[\\textbf{c.}] $\\frac{\\lambda}{2}$\n\t\t\\task[\\textbf{d.}] 0\n\t\\end{tasks}\n\t\\item If $x(t)$ denote the position of a Brownian particle at time $t$, given that its position coincide with the point $x=0$ at $t=0$. The particle is to jump a distance $l$ of constant magnitude with equally likely either positive or negative direction along $x$ axis. If the particle take 30 jumps then what is probability that particle have 10 more jump towards right.\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]\\color{red}{?}\n\t\t\\task[\\textbf{b.}]\\color{red}{?}\n\t\t\\task[\\textbf{c.}]\\color{red}{?}\n\t\t\\task[\\textbf{d.}]\\color{red}{?}\n\t\\end{tasks}\n\t\\item In a square lattice as shown in the figure, a particle starts from origin. 
If the particle can move only in the $x$ or $y$ direction and the probability of each of the four directions is $\\frac{1}{4}$, the probability that the particle returns to the origin in four steps is\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3.5cm,width=4cm]{P -assignment-01}\n\t\\end{figure}\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{1}{4}$\n\t\t\\task[\\textbf{b.}] $\\frac{1}{8}$\n\t\t\\task[\\textbf{c.}]$\\frac{1}{16}$\n\t\t\\task[\\textbf{d.}] $\\frac{3}{16}$\n\t\\end{tasks}\n\t\\item If the density function of a random variable $X$ is $f(x)= \\begin{cases}k x^{2}, & 1 \\leq x \\leq 5 \\\\ 0, & \\text { otherwise }\\end{cases}$\n\tThen the probability that the random variable assumes a value between 2 and 4 is\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{15}{31}$\n\t\t\\task[\\textbf{b.}] $\\frac{14}{31}$\n\t\t\\task[\\textbf{c.}]$\\frac{17}{31}$\n\t\t\\task[\\textbf{d.}] $\\frac{18}{31}$\n\t\\end{tasks}\n\t\\item For a continuous random variable $x$ with probability distribution function\n\t$$\n\tf(x)= \\begin{cases}c x^{2} & \\text { for }|x| \\leq 1 \\\\ 0 & \\text { for }|x|>1\\end{cases}\n\t$$\n\tthe values of $c$ and $\\operatorname{var}(x)$ are respectively\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{3}{2}, \\frac{3}{5}$\n\t\t\\task[\\textbf{b.}] $\\frac{3}{2}, \\frac{5}{3}$\n\t\t\\task[\\textbf{c.}]$\\frac{2}{3}, \\frac{3}{5}$\n\t\t\\task[\\textbf{d.}] $\\frac{2}{3}, \\frac{5}{3}$\n\t\\end{tasks}\n\t\\item Consider the following two experiments:\n\tExperiment I: A pair of dice is thrown 6 times. Let $P_{1}$ be the probability of getting a doublet on 5 throws (a doublet on a throw means the same number on both dice).\n\t\n\tExperiment II: A coin is tossed 6 times. Let $P_{2}$ be the probability of getting a head on 4 throws.\n\tThe ratio of $P_{1}$ to $P_{2}$ is\n\t \\begin{tasks}(4)\n\t\t\\task[\\textbf{a.}]$\\frac{1}{2^{5}}$\n\t\t\\task[\\textbf{b.}]$\\frac{2}{3^{5}}$\n\t\t\\task[\\textbf{c.}]$\\frac{1}{2^{6}}$\n\t\t\\task[\\textbf{d.}]$\\frac{2}{3^{6}}$\n\t\\end{tasks}\n\\end{enumerate}", "meta": {"hexsha": "e3ed083e5d8553cb91923edbae894e9797781d3b", "size": 3805, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Probability.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Probability.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Probability.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 34.5909090909, "max_line_length": 373, "alphanum_fraction": 0.6417871222, "num_tokens": 1388, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914975839675, "lm_q2_score": 0.7826624738835051, "lm_q1q2_score": 0.5908436225623865}}
{"text": "\\documentclass{beamer}\n\n\\input{../../shared_slides.tex}\n\\usepackage{changepage}\n% https://openai.com/blog/deep-double-descent/\n\n\\title{(Sub)-gradient method}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\\frame{\\tableofcontents}\n\n\\section{Intro}%\n\n\\begin{frame}\n  \\frametitle{Spoiler: Smooth vs.\\ nonsmooth}\n  We consider the convex optimization problem\n  \\begin{equation}\n    \\min_x \\, f(x)\n  \\end{equation}\n  \\vspace{-0.5cm}\n  \\begin{block}{}\n    \\begin{equation}\n      x_{k+1} = x_k - \\alpha g_k\n    \\end{equation}\n  \\end{block}\n  \\begin{adjustwidth}{-1.5em}{-1.5em}\n    \\begin{minipage}{0.52\\textwidth}\n      \\begin{block}{}\n        \\begin{itemize}\n          \\item If $f$ is \\textcolor{blue}{smooth} we take $g_k = \\nabla f(x_k)$ $\\rightarrow$ \\textbf{Gradient Descent}.\n          \\item stepsize can be constant $1/L$ (smoothness constant)\n          \\item convergence rate $f(x_k)-f^* = \\mathcal{O}(1/k)$\n        \\end{itemize}\n      \\end{block}\n    \\end{minipage}\n    \\hfill\n    \\begin{minipage}{0.52\\textwidth}\n      \\begin{block}{}\n        \\begin{itemize}\n          \\item If \\textcolor{blue}{not} we take $g_k$ a \\textit{subgradient} $\\rightarrow$ \\textbf{Subgradient method}.\n          \\item stepsize has to be chosen \\emph{small or decreasing} $\\approx 1/\\sqrt{k}$\n          \\item convergence rate is \\emph{worse} $f(x_k)-f^* = \\mathcal{O}(1/\\sqrt{k})$\n        \\end{itemize}\n      \\end{block}\n    \\end{minipage}\n  \\end{adjustwidth}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Intuition behind GD}\n  \\begin{itemize}\n    \\item derivative (gradient) points in the direction of steepest ascent\n          $\\rightarrow$ GD is also called \\textbf{steepest descent}\n    \\item GD update is equivalent to\n          \\begin{equation}\n            x_{k+1} = \\argmin_{x\\in \\R^d} \\Big\\{ \\underbrace{f(x_k) + \\langle \\nabla f(x_k), x-x_k \\rangle }_{\\text{linearization of $f$}}+ \\frac{1}{2 \\alpha} \\Vert x-x_k \\Vert^2 \\Big\\}\n          \\end{equation}\n          \\begin{itemize}\n            \\item solves a linear model of $f$\n            \\item minimizing unconstrained linear models is no good\n            \\item so we add a ``proximity term''\n          \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n\n\\section{Subgradient theory}%\n\n\\begin{frame}\n  \\frametitle{Subgradients}\n  What if $f$ is not differentiable?\n  \\begin{definition}\n    $g \\in \\R^d$ is a \\textcolor{blue}{subgradient} of $f$ at $x$ if\n    \\begin{equation}\n      f(y) \\ge f(x) + g^T (y-x)\n    \\end{equation}\n  \\end{definition}\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\textwidth,height=\\textheight,keepaspectratio]{subgradients}\n    % \\caption{\\label{fig:label} }\n  \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Subgradients II}\n  \\begin{definition}\n    The \\textcolor{blue}{subdifferential} $\\partial f(x)$ is the \\emph{set of all subgradients} of $f$ at $x$.\n  \\end{definition}\n  Example\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth,keepaspectratio]{subdifferential}\n    % \\caption{\\label{fig:label} }\n  \\end{figure}\n  Subgradient condition at $x=0$ is $f(y)\\ge f(0) + g(y-0) = gy$.\n  \\textcolor{purple}{What is $\\partial f(0)$?}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Subgradients III}\n  \\begin{lemma}%\n    If $f$ is differentiable at $x$ then $\\partial f(x) \\subset \\{\\nabla f(x)\\}$\n  
\\end{lemma}\n  So either one subgradient or none.\\\\\n  \\begin{minipage}{0.48\\textwidth}\n    \\begin{figure}[ht]\n      \\centering\n      \\includegraphics[width=\\textwidth,keepaspectratio]{graph_above_tangent}\n      % \\caption{\\label{fig:label} }\n    \\end{figure}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}{0.48\\textwidth}\n    \\begin{figure}[ht]\n      \\centering\n      \\includegraphics[width=\\textwidth,keepaspectratio]{differentiable_function}\n      % \\caption{\\label{fig:label} }\n    \\end{figure}\n  \\end{minipage}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Subgradient characterization of convexity}\n  \\begin{lemma}%\n    A function $f: \\R^d \\to \\R$ is convex \\textit{if and only if} $\\partial f(x)$ is not empty for all $x$.\n  \\end{lemma}\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\textwidth,keepaspectratio]{subgradients}\n    \\caption{Subgradients at every point.}\n  \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lipschitz = bounded subgradients}\n  \\begin{definition}\n    We call $f$ $L$-Lipschitz (continuous) if\n    \\begin{equation}\n      \\Vert f(x)-f(y) \\Vert \\le L \\Vert x-y \\Vert.\n    \\end{equation}\n  \\end{definition}\n  \\begin{lemma}%\n    Let $f$ be convex. Then the following two are equivalent.\n    \\begin{enumerate}\n      \\item All \\textbf{subgradients are uniformly bounded}.\n            \\begin{equation}\n              \\Vert g \\Vert \\le L \\quad \\forall x, \\forall g \\in \\partial f(x)\n            \\end{equation}\n      \\item $f$ is $L$-Lipschitz\n    \\end{enumerate}\n  \\end{lemma}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Subgradient optimality condition}\n  \\begin{lemma}%\n    Let \\textcolor{blue}{$0 \\in \\partial f(\\bar{x})$}, then $\\bar{x}$ is a \\textcolor{blue}{global minimum}.\n  \\end{lemma}\n  \\begin{proof}\n    By the definition of subgradients, $g=0 \\in \\partial f(\\bar{x})$ gives\n    \\begin{equation}\n      f(y) \\ge f(\\bar{x}) + g^T(y-\\bar{x}) = f(\\bar{x}).\n    \\end{equation}\n  \\end{proof}\n\\end{frame}\n\n\n\\section{Convergence subgradient method}%\n\\label{sec:}\n\n\\begin{frame}\n  \\frametitle{Convergence statement: Subgradient method}\n  \\textcolor{gray}{We assume there exists minimizer $x^*$ and we write $f^*=f(x^*)$.}\n  \\begin{theorem}\n    $f$ is convex, subgradients are bounded $\\Vert g(x) \\Vert \\le G$ for all $g(x)\\in \\partial f(x)$. 
Then,\n    \\begin{equation}\n      f(\\bar{x}_k) - f^* \\le \\frac{\\Vert x_0-x^*\\Vert G}{\\sqrt{k}}\n    \\end{equation}\n    for the \\textbf{averaged} iterates $\\bar{x}_k = \\frac{1}{k} \\sum_{i=0}^{k-1} x_i $\n  \\end{theorem}\n  \\begin{itemize}\n    \\item Also holds for the \\textbf{``best''} iterate.\n    \\item \\textcolor{blue}{Dimension independent!} (no $d$)\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Proof}\n  \\vspace{-0.5cm}\n  \\begin{equation}\n    \\begin{aligned}\n      \\Vert x_{k+1} - x^* \\Vert^2 &= \\Vert x_k - \\alpha g_k - x^* \\Vert^2 \\\\\n      &= \\Vert x_k-x^* \\Vert^2 + 2 \\alpha \\langle g_k, x^*-x_k \\rangle + \\alpha^2 \\Vert g_k \\Vert^2.\n    \\end{aligned}\n  \\end{equation}\n  Using the \\textbf{subgradient inequality} $\\langle g_k , x^* -x_k \\rangle \\le f(x^*) - f(x_k)$\n  % \\begin{equation}\n  %   \\langle g_k , x^* -x_k \\rangle \\le f(x^*) - f(x_k)\n  % \\end{equation}\n  \\begin{equation}\n    \\Vert x_{k+1} - x^* \\Vert^2 \\le \\Vert x_k-x^* \\Vert^2 + 2 \\alpha(f(x^*) - f(x_k)) + \\alpha^2 \\Vert g_k \\Vert^2.\n  \\end{equation}\n  Summing up (telescoping) yields\n  \\begin{equation}\n    \\label{eq:telescoping}\n    2\\sum_{i=0}^{k-1}  \\alpha(f(x_i) - f(x^*)) + \\Vert x_{k} - x^* \\Vert^2 \\le \\Vert x_0-x^* \\Vert^2 +  \\alpha^2 \\sum_{i=0}^{k-1} \\Vert g_i \\Vert^2.\n  \\end{equation}\n  Via the \\emph{bounded subgradient} assumption\n  \\begin{equation}\n    2\\sum_{i=0}^{k-1}  \\alpha(f(x_i) - f(x^*)) \\le \\Vert x_0-x^* \\Vert^2 +  \\alpha^2 k G^2.\n  \\end{equation}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Proof [contd]}\n  We divide by $2\\alpha$ and $k$\n  \\begin{equation}\n    \\frac{1}{k}\\sum_{i=0}^{k-1}  f(x_i) - f^*  \\le \\frac{1}{2 \\alpha k} \\Vert x_0-x^* \\Vert^2 +  \\frac{\\alpha G^2}{2}\n  \\end{equation}\n  Using Jensen's inequality (convexity with more than $2$ points)\n  \\begin{equation}\n    \\frac{1}{k}\\sum_{i=0}^{k-1} f(x_i) \\ge f \\left( \\frac{1}{k}\\sum_{i=0}^{k-1}x_i \\right)\n  \\end{equation}\n  we obtain\n  \\begin{equation}\n    f(\\bar{x}_k) - f^*  \\le \\frac{1}{2 \\alpha k} \\Vert x_0-x^* \\Vert^2 + \\frac{\\alpha G^2}{2}.\n  \\end{equation}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{How to choose the stepsize?}\n  We have\n \\begin{equation}\n    f(\\bar{x}_k) - f^*  \\le \\frac{1}{2 \\alpha k} \\Vert x_0-x^* \\Vert^2 + \\frac{\\alpha G^2}{2}.\n  \\end{equation}\n  Choose $\\alpha$ such that \\textit{RHS is minimized}, i.e.\\\n  \\begin{equation}\n    \\alpha = \\frac{\\Vert x_0-x^*\\Vert }{G \\sqrt{k}},\n  \\end{equation}\n  which gives\n  % Clearly $\\alpha_i = \\ell_2 \\textbackslash \\ell_1$ leads convergence, for example $1/i$.\n  % However, $\\alpha = \\mathcal{O}(1/\\sqrt{i})$ gives\n  % \\begin{equation}\n  %   \\sum \\alpha_i = (\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots +  \\frac{1}{\\sqrt{k}}) > \\sqrt{k}\n  %   \\sum \\alpha^2 = (\\frac{1}{1} + \\frac{1}{2} + \\cdots + \\frac{1}{k}) \\approx \\log(k)\n  % \\end{equation}\n\n  \\begin{equation}\n    f(\\bar{x}_k) - f^* \\le \\frac{\\Vert x_0-x^*\\Vert G }{\\sqrt{k}}. \\qed\n  \\end{equation}\n  When ignoring constants (and focusing on the rate) we sometimes write\n  \\begin{equation}\n    \\mathcal{O}\\left(\\frac{1}{\\sqrt{k}}\\right).\n  \\end{equation}\n\\end{frame}
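\n\n% A small worked instance of the bound above (illustrative example, added for concreteness).\n\\begin{frame}\n  \\frametitle{Example: $f(x) = \\vert x \\vert$}\n  Take $f(x) = \\vert x \\vert$ with minimizer $x^* = 0$:\n  \\begin{itemize}\n    \\item subgradients are bounded by $G = 1$ (all slopes lie in $[-1,1]$)\n    \\item starting at $x_0$, we have $\\Vert x_0 - x^* \\Vert = \\vert x_0 \\vert$\n  \\end{itemize}\n  The theorem gives\n  \\begin{equation}\n    f(\\bar{x}_k) - f^* \\le \\frac{\\vert x_0 \\vert}{\\sqrt{k}},\n  \\end{equation}\n  so for $x_0 = 1$ and target accuracy $\\epsilon = 0.1$ we need $k \\ge 100$ steps.\n\\end{frame}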
\n\\begin{frame}\n  \\frametitle{Complexity}\n  For convex Lipschitz functions we require \\textcolor{blue}{$\\mathcal{O}(\\epsilon^{-2})$} iterations. For $D:= \\Vert x_0 -x^* \\Vert$\n  \\begin{equation}\n    f(\\bar{x}_k) - f^* \\le \\frac{D G}{\\sqrt{k}}\n  \\end{equation}\n  \\textbf{Q:} How many iterations to get\n  \\begin{equation}\n    f(\\bar{x}_k) - f^* \\le \\epsilon ?\n  \\end{equation}\n  \\textbf{A:} We get this if\n  \\begin{equation}\n    \\frac{D G}{\\sqrt{k}} \\le \\epsilon\n  \\end{equation}\n  Equivalently\n  \\begin{equation}\n    k \\ge \\frac{D^2 G^2}{\\epsilon^2}.\n  \\end{equation}\n\\end{frame}\n\n\n\n\\begin{frame}\n  \\frametitle{Polyak stepsize}\n  Let's revisit the convergence proof of the subgradient method\n  \\begin{equation}\n    \\begin{aligned}\n      \\Vert x_{k+1} - x^* \\Vert^2 &= \\Vert x_k - \\alpha_k g_k - x^* \\Vert^2 \\\\\n      &= \\Vert x_k-x^* \\Vert^2 + 2 \\alpha_k \\langle g_k, x^*-x_k \\rangle + \\alpha_k^2 \\Vert g_k \\Vert^2\\\\\n      &\\le \\Vert x_k-x^* \\Vert^2 + 2 \\alpha_k (f^* - f(x_k))+ \\alpha_k^2 \\Vert g_k \\Vert^2.\n    \\end{aligned}\n  \\end{equation}\n  Can we pick $\\alpha_k$ such that the RHS is minimized?\n  \\begin{equation}\n    \\min_\\alpha \\, \\alpha^2 \\Vert g_k \\Vert^2 + 2 \\alpha (f^* - f(x_k))\n  \\end{equation}\n  gives\n  \\begin{equation}\n    \\alpha_k^* = \\frac{f(x_k)-f^*}{\\Vert g_k \\Vert^2}\n  \\end{equation}\n  \\begin{equation}\n      \\Vert x_{k+1} - x^* \\Vert^2 \\le \\Vert x_k-x^* \\Vert^2 - {\\left( \\frac{f(x_k)-f^*}{\\Vert g_k \\Vert} \\right)}^2\n  \\end{equation}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Polyak stepsize [contd]}\n  \\begin{itemize}\n    \\item Requires us to know the optimal objective function value\n    \\item can be the case in certain settings:\n          separable data, feasibility problems\n    \\item modern deep learning interpolation setting\n  \\end{itemize}\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.7\\textwidth]{modern-regime}\n    \\caption{from openai.com}\n  \\end{figure}\n\\end{frame}\n\n\n\\section{Smooth case}%\n\\label{sec:}\n\n\\begin{frame}\n  \\frametitle{Can we do better if the function is smooth?}\n  \\begin{definition}\n    We call a function $L$-\\textcolor{blue}{smooth} if\n    \\begin{equation}\n      f(y) \\le f(x) + \\langle \\nabla f(x), y-x \\rangle + \\frac{L}{2} \\Vert y-x \\Vert^2.\n    \\end{equation}\n  \\end{definition}\n  \\begin{center}\n    \\textit{Can be upper bounded by a quadratic.}\n  \\end{center}\n  \\begin{lemma}%\n    If the gradient of $f$ is $L$-Lipschitz\n    \\begin{equation}\n      \\Vert \\nabla f(x)- \\nabla f(y) \\Vert \\le L \\Vert x-y \\Vert.\n    \\end{equation}\n    then it is also $L$-smooth.\n  \\end{lemma}\n  Note: Definition does not require convexity.\n\\end{frame}\n\n\n\\begin{frame}\n  \\frametitle{Smoothness}\n  If $f$ is convex we get upper and lower bounds:\n\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\textwidth,keepaspectratio]{smoothness}\n    % \\caption{\\label{fig:label} }\n  \\end{figure}\n\\end{frame}\n\n
\\begin{frame}\n  \\frametitle{Smooth vs.\\ Lipschitz}\n  \\begin{itemize}\n    \\item Bounded (sub)gradients $\\Leftrightarrow$ Lipschitz continuity of $f$\n    \\item Smoothness $\\Leftrightarrow$ Lipschitz continuity of $\\nabla f$ (if convex)\n  \\end{itemize}\n\n  \\begin{lemma}%\n    Let $f$ be convex and differentiable, then the following are \\textit{equivalent}\n    \\begin{enumerate}\n      \\item $f$ is smooth with parameter $L$\n      \\item $\\nabla f$ is $L$-Lipschitz\n    \\end{enumerate}\n  \\end{lemma}\n\\end{frame}\n\n\n\\begin{frame}\n  \\frametitle{Sufficient decrease}\n  \\begin{lemma}%\n    If $f$ is $L$-smooth with stepsize $\\alpha = 1/L$, then gradient descent satisfies\n    \\begin{equation}\n      f(x_{k+1}) \\le f(x_k) - \\frac{1}{2L} \\Vert \\nabla f(x_k) \\Vert^2\n    \\end{equation}\n  \\end{lemma}\n  \\begin{proof}\n    \\begin{equation}\n      \\begin{aligned}\n        f(x_{k+1}) &\\le f(x_k) + \\langle \\nabla f(x_k), x_{k+1}-x_k \\rangle + \\frac{L}{2}\\Vert x_{k+1}-x_k \\Vert^2 \\\\\n        &= f(x_k) - \\alpha \\Vert \\nabla f(x_k) \\Vert^2 + \\frac{\\alpha^2 L}{2} \\Vert \\nabla f(x_k) \\Vert^2 \\\\\n        &= f(x_k) - \\left(\\frac{1}{L} - \\frac{1}{2L}\\right) \\Vert \\nabla f(x_k) \\Vert^2\n      \\end{aligned}\n    \\end{equation}\n  \\end{proof}\n\\end{frame}\n\n\n\\begin{frame}\n  \\frametitle{Smooth convex functions}\n  \\begin{theorem}\n    Let $f: \\R^d \\to \\R$ be convex and $L$-smooth and the stepsize $\\alpha=1/L$, then\n    \\emph{gradient descent} yields\n    \\begin{equation}\n      f(x_k)-f^* \\le \\frac{L}{2k}\\Vert x_0-x^* \\Vert^2.\n    \\end{equation}\n  \\end{theorem}\n  \\begin{itemize}\n    \\item holds for last iterate\n    \\item independent of dimension $d$\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Complexity of gradient method}\n  Denote $D^2 := \\Vert x_0- x^* \\Vert^2$\n  \\begin{equation}\n    \\text{iteration: } k \\ge \\frac{D^2 L}{2 \\epsilon} \\Rightarrow \\text{error} \\le \\frac{L D^2}{2 k} \\le \\epsilon\n  \\end{equation}\n  A target error of $\\epsilon=0.01$ results in\n  \\begin{itemize}\n    \\item $50 \\cdot D^2 L$ iterations for the \\textit{smooth} case\n      \\item $10 000 \\cdot D^2 G^2$ for the nonsmooth but Lipschitz case\n  \\end{itemize}\n\n  What if we don't know $L$?\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Proof of $\\mathcal{O}(\\epsilon^{-1})$ for smooth functions}\n    Subgradient analysis gave us\n    \\begin{equation}\n      \\sum_{i=0}^{k-1} (f(x_i) - f^*) \\le \\frac{1}{2 \\alpha}\\Vert x_0-x^* \\Vert^2 +  \\frac{\\alpha}{2} \\sum_{i=0}^{k-1}  \\Vert g_i \\Vert^2,\n    \\end{equation}\n    see~\\eqref{eq:telescoping}. 
This time we use \\textcolor{blue}{sufficient decrease} to bound the gradient norms\n    \\begin{equation}\n      \\frac{1}{2 L} \\sum_{i=0}^{k-1} \\Vert \\nabla f(x_i) \\Vert^2 \\le \\sum_{i=0}^{k-1} (f(x_i) - f(x_{i+1})) = f(x_0) - f(x_k)\n    \\end{equation}\n    Combining the above two (with $\\alpha=1/L$)\n    \\begin{equation}\n      \\begin{aligned}\n        \\sum_{i=0}^{k-1} (f(x_i) - f^*)  &\\le \\frac{L}{2} \\Vert x_0-x^* \\Vert^2 +  \\frac{1}{2L} \\sum_{i=0}^{k-1}  \\Vert \\nabla f(x_i) \\Vert^2 \\\\\n        &\\le \\frac{L}{2} \\Vert x_0-x^* \\Vert^2 + f(x_0) - f(x_k)\n      \\end{aligned}\n    \\end{equation}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Proof II}\n  By rewriting:\n    \\begin{equation}\n      \\sum_{\\textcolor{red}{i=1}}^{\\textcolor{red}{k}} (f(x_i) - f^*) \\le \\frac{L}{2} \\Vert x_0-x^* \\Vert^2\n    \\end{equation}\n    As the last iterate is the best (sufficient decrease):\n    \\begin{equation}\n      f(x_k) - f^* \\le \\frac{1}{k} \\sum_{i=1}^{k} f(x_i) - f^* \\le \\frac{L}{2 k} \\Vert x_0 - x^* \\Vert^2 \\qed\n    \\end{equation}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "75b4181aee4fa8abd068b301ae75bdd73d24246c", "size": 15002, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/02_(sub-)gradient-method/Subgradient_method.tex", "max_stars_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_stars_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-10-03T14:40:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:34:36.000Z", "max_issues_repo_path": "slides/02_(sub-)gradient-method/Subgradient_method.tex", "max_issues_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_issues_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-10-21T13:02:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-06T19:50:32.000Z", "max_forks_repo_path": "slides/02_(sub-)gradient-method/Subgradient_method.tex", "max_forks_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_forks_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-10-05T21:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T15:38:30.000Z", "avg_line_length": 32.7554585153, "max_line_length": 185, "alphanum_fraction": 0.6208505533, "num_tokens": 5607, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.8333246015211008, "lm_q1q2_score": 0.5908430973797697}}
{"text": "\\documentclass[12pt,timesnewroman,letterpaper]{article}\n\\input{ramesh_abbreviations}\n\\usepackage{times}\n\\usepackage{mathrsfs}\n\\usepackage{relsize}\n\\usepackage{tabto}\n\\usepackage{amsmath}\n\\usepackage{helvet}\n\\usepackage{courier}\n\\usepackage{fancyheadings}\n\\pagestyle{fancy}\n\\usepackage{pmc}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks=true,\n    urlcolor=magenta,\n}\n\\setlength\\textwidth{6.5in}\n\\setlength\\textheight{8.5in}\n\\begin{document}\n\\title{Logistic Regression in Q}\n\\author{ Tara Mirmira and Ramesh Subramonian }\n\\maketitle\n\\thispagestyle{fancy}\n\\lfoot{{\\small Data Analytics Team}}\n\\cfoot{}\n\\rfoot{{\\small \\thepage}}\n\n\\section{Objective}\n\nWe have observations that fall into one of two classes: $0$ or $1$. Given a new observation, we want to predict if the observation will be in class $0$ or class $1$. \\\\\n\n\\noindent We would like to use a linear model to make our predictions, meaning we would like to express the class of an observation as a linear combination of the explanatory variables. We consider the simplest linear model, the linear probability model, in the next section.\n\n\\section{Linear Probability Model}\n\nConsider the model $y_i = \\alpha + \\beta x_i + \\epsilon_i$\n\n\n\\begin{itemize}\n    \\item $y_i$ is the class associated with $x_i$, and is what we want to predict\n    \\item $\\epsilon_i$ are independent and identically distributed with mean of $0$ and a variance of $\\sigma^2$\n    \\item $E(y_i) = \\alpha + \\beta x_i$\n    \\item $E(y_i) = 0 * P(y_i = 1) + 1*P(y_i = 0) = P(y_i = 1) = \\pi_i$\n    \\item $\\pi_i = \\alpha + \\beta_i$\n\\end{itemize}\n\n\\noindent The problem is $\\epsilon_i$ can only take on the values $0$ or $1$ so $Var(\\epsilon_i) = (1 - \\pi_i)^2 \\pi_i + -\\pi_i^2(1 - \\pi_i) = \\pi_i(1-\\pi_i)$ which is not constant and non constant variance causes all sorts of problems for linear modelling.\\\\\n\n\\noindent This simple model does not work so we will present a more complex linear model in the next section.\n\n\\section{Alternate Model to Linear Probability Model - Linear Model Using Link Function}\n\n\nGeneral overview:\n\\begin{itemize}\n    \\item Let $\\pi_i$ be the probability that observation $i$ is in class $1$.\n    \\item Find a function $G$ such that $\\pi_i = G(\\alpha + \\beta_i)$\n    \\item We want $G$ to be invertible so we can say $G^{-1}(\\pi_i) = \\alpha + \\beta x_i$\n    \\item For logistic regression, we will use the function $G(z) = \\frac{1}{1+e^{-z}}$\n    \\item $\\frac{1}{1+e^{-z}}$ is bounded between $0$ and $1$ for all values of $z$.\n\\end{itemize}\n\n\\noindent Using the above link function:\n\\begin{itemize}\n    \n\n\\item $P(y_i = 1) = \\pi_i = G(\\alpha + \\beta_i) = \\dfrac{1}{1 + e^{-(\\alpha + \\beta x_i)}}  = \\dfrac{1}{1 + e^{-(\\alpha + \\beta x_i)}} * \\dfrac{e^{\\alpha + \\beta x_i}}{e^{\\alpha + \\beta x_i}} = \\dfrac{e^{\\alpha + \\beta x_i}}{e^{\\alpha + \\beta x_i} + 1}$\n\\item $P(y_i = 0) = 1 - P(y_i = 1) = 1 - \\dfrac{e^{\\alpha + \\beta x_i}}{e^{\\alpha + \\beta x_i} + 1} = \\dfrac{1}{1 + e^{\\alpha + \\beta x_i}}$\n\n\\end{itemize}\n\n\\noindent \\textbf{The odds ratio}: the ratio of the probability of the observation being in one category versus the probability of being in the other category\n\\begin{itemize}\n    \n\\item odds ratio = $\\dfrac{\\pi}{1-\\pi} = \\dfrac{e^{\\alpha + \\beta x_i}}{1} = e^{\\alpha + \\beta x_i} = e^{\\alpha}e^{\\beta x_i}$ which means that when we increase $x_i$ by 
one unit, the odds increase by a factor of $e^{\\beta}$\n\n\\item $\\log(\\text{odds ratio}) = \\log\\big(\\dfrac{\\pi}{1-\\pi}\\big) = \\log(e^{\\alpha + \\beta x_i}) = \\alpha + \\beta x_i$\n\\end{itemize}\n\n\\noindent From the above derivation, we see that we have found the \\textbf{link function}:\\\\\\\\\n$\\log\\big(\\dfrac{\\pi_i}{1-\\pi_i}\\big) = \\alpha + \\beta x_i$\n\\\\\\\\\nThe log of the odds ratio is linear in the $x_i$'s. The link function connects $E(y_i) = \\pi_i$ to a linear function of the explanatory variables $x_i$.\n\n\\section{Fitting a Model to the Data Using Maximum Likelihood}\nGeneral overview:\n\\begin{itemize}\n    \\item We will use maximum likelihood to estimate $\\alpha$ and $\\beta$. We want to find the $\\alpha$ and $\\beta$ that make the given data most likely to occur and use these values to predict future values.\n    \\item The response variables are the classes $0$ or $1$, which means the response variables are $Bernoulli(\\pi_i)$ variables.\n\\end{itemize}\n\n\\noindent Maximum Likelihood:\n\\\\\n\\\\\nRecall a few probabilities:\n\\begin{itemize}\n    \\item $P(i^{th} \\text{ observation } = 1) = \\pi_i$\n    \\item $P(i^{th} \\text{ observation } = 0) = 1 - \\pi_i$\n    \\item $P(i^{th} \\text{ observation } = y_i) = \\pi_i^{y_i}$ if $y_i = 1$\n    \\item $P(i^{th} \\text{ observation } = y_i) = (1-\\pi_i)^{1-y_i}$ if $y_i = 0$\n    \\item Using the above two bullet points, $P(i^{th} \\text{ observation } = y_i) =\\\\ \\pi_i^{y_i}(1-\\pi_i)^{1-y_i}$\n\\end{itemize}\n\n$$\n\\text{Likelihood } = \\mathscr{L}(y_1 \\ldots y_n) = \\prod_{i=1}^{n} p(y_i) = \\prod_{i=1}^n P(i^{th} \\text{ observation } = y_i) = \\prod_{i=1}^n \\pi_i^{y_i}(1-\\pi_i)^{1-y_i} = \n$$\n\n$$\\prod_{i=1}^n \\big(\\dfrac{\\pi_i}{1-\\pi_i}\\big)^{y_i}(1-\\pi_i) = \\prod_{i=1}^n (e^{\\alpha + \\beta x_i})^{y_i}\\big(\\dfrac{1}{1 + e^{\\alpha + \\beta x_i}}\\big)$$\n\n\\noindent Log Likelihood:\n\n$$\\log \\mathscr{L} = \\sum_{i=1}^n y_i(\\alpha + \\beta x_i) - \\log(1 + e^{\\alpha + \\beta x_i})$$\n\n\\noindent We want to find the $\\alpha$ and $\\beta$ that maximize the likelihood of observing the data. 
To maximize the likelihood (equivalent to maximizing the log likelihood), the next steps are to take the derivatives of the likelihood (or log likelihood) with respect to $\\alpha$ and $\\beta$, set the equations to $0$, and solve for $\\alpha$ and $\\beta$.\\\\\\\\\n\n\\noindent Take the derivative with respect to $\\alpha$, set to $0$ and solve:\n\n$$\n\\dfrac{\\partial}{\\partial \\alpha} \\log \\mathscr{L} = \\sum \\big(y_i - \\dfrac{1}{1 + e^{\\alpha + \\beta x_i}}e^{\\alpha + \\beta x_i}\\big) = 0\n$$\n\n$$\n\\sum y_i = \\sum \\dfrac{e^{\\hat{\\alpha} + \\hat{\\beta}x_i}}{1 + e^{\\hat{\\alpha} + \\hat{\\beta}x_i}}\n$$\n\n\\noindent Take the derivative with respect to $\\beta$, set to $0$ and solve:\n\n$$\n\\dfrac{\\partial}{\\partial \\beta} \\log \\mathscr{L} = \\sum \\big(y_i x_i - \\dfrac{1}{1 + e^{\\alpha + \\beta x_i}}e^{\\alpha + \\beta x_i}x_i\\big) = 0\n$$\n\n$$\n\\sum y_i x_i = \\sum x_i \\dfrac{e^{\\hat{\\alpha} + \\hat{\\beta} x_i}}{1 + e^{\\hat{\\alpha} + \\hat{\\beta} x_i}} \\quad \\quad (1)\n$$\n\n\\noindent If we use the design matrix\n$$\n\\boldsymbol{X} = \\begin{bmatrix}\n    \\boldsymbol{1}       & \\boldsymbol{x_1} & \\dots & \\boldsymbol{x_p}\n\\end{bmatrix}\n$$\nwith $p$ explanatory variables, and the response vector $\\boldsymbol{y}$, the equation $(1)$ becomes\n\n$$\n\\boldsymbol{X^t y} = \\boldsymbol{X^t \\hat{\\pi}}\n$$\n\\\\\nwhere $\\hat{\\pi}$ is the vector of estimated $\\pi_i$ values for the observations with values $(1, x_{1,i}, \\ldots, x_{p,i})$. The $x_{j,i}$ are the observed data values and the $1$ is added so that an intercept $\\alpha$ is included in the linear model. \n\\\\\\\\\nThis equation is nonlinear in $\\boldsymbol{\\hat{\\beta}}$, so instead of solving for it directly, we will use an iterative method called Newton-Raphson.\n\n\\section{Newton-Raphson}\n\nThe Newton-Raphson method can be used to approximate the roots of a real-valued function. \\\\\\\\\nSteps:\n\\begin{enumerate}\n    \\item Start with an initial guess $x_0$ for the root\n    \\item Iterative step: $x_{i+1} = x_i - \\dfrac{f(x_i)}{f'(x_i)}$. Note that $(x_{i+1}, 0)$ is the intersection of the $x$-axis and the line tangent to $f$ at $x_i$\n    \\item Repeat the iterative step until $f(x_i)$ is sufficiently close to $0$. Then the value $x_i$ is the approximation for the root.\n\\end{enumerate}\n\n\\noindent The diagram at the top of the next page provides a visual for the above steps.\\\\\n\n\\begin{figure}[hbtp]\n        \\centering\n        \\includegraphics[width=4.5in]{NewtonRaphson.png}\n        \\caption{Newton Raphson Diagram}\n        \\label{Newton Raphson}\n    \\end{figure}\n\n\\noindent Next, we will show how to apply the Newton-Raphson method to find the estimate $\\boldsymbol{\\hat{\\beta}}$ for $\\boldsymbol{\\beta}$. Note that we have switched to vector notation, so instead of estimating $\\alpha$ and $\\beta$, we want to estimate $\\boldsymbol{\\beta} = \n\\begin{bmatrix} \n    \\alpha \\\\\n    \\beta_1 \\\\\n    \\vdots \\\\\n    \\beta_p\n\\end{bmatrix}$ where $\\beta_i$ is the coefficient for the explanatory variable $x_i$ and $\\alpha$ is the intercept. \\\\\n\n\n\n\\noindent Suppose we want to find the maximum likelihood estimate $\\hat{\\theta}_n$ for the parameter $\\theta$. Let the derivative of the log likelihood function be the function $\\ell'$. 
We want to find the roots of this function; that is, we want $\\hat{\\theta}_n$ such that $\\ell'(\\hat{\\theta}_n) = 0$, which makes $\\hat{\\theta}_n$ the maximum likelihood estimate for $\\theta$.\n\\\\\\\\\nRecall the general Taylor Series expansion formula:\n$$\nf(x) = f(a) + \\dfrac{f'(a)}{1!}(x - a) + \\dfrac{f''(a)}{2!}(x-a)^2 + \\ldots\n$$\nwhere $f$ is infinitely differentiable at $a$.\n\\\\\\\\\\\\\nUsing a first-order Taylor Series expansion (dropping the higher-order terms), we get:\n$$\n\\ell'(\\hat{\\theta}_n) \\approx \\ell'(\\theta_0) + \\ell''(\\theta_0)(\\hat{\\theta}_n - \\theta_0)\n$$\n\\\\\nIt can be shown that $\\ell''(\\theta_0)$ can be estimated by $E[\\ell''(\\theta_0)] = -I(\\theta_0)$ where $I$ is the Fisher information. \\\\\\\\\n\nNow we get:\n$$\n\\ell'(\\hat{\\theta}_n) \\approx \\ell'(\\theta_0) - I(\\theta_0)(\\hat{\\theta}_n - \\theta_0)\n$$\n\\\\\nSetting $\\ell'(\\hat{\\theta}_n) = 0$ and rearranging terms, we get:\n$$\n\\hat{\\theta}_n = \\theta_0 + \\ell'(\\theta_0)[I(\\theta_0)]^{-1}\n$$\n\n\\noindent Now, we can implement Newton-Raphson in the following manner:\n\n\\begin{enumerate}\n    \\item Start with the initial value $\\theta_0$\n    \\item $\\theta_{k+1} = \\theta_k + \\ell'(\\theta_k)[I(\\theta_k)]^{-1}$\n    \\item Iterate until $\\theta_k \\approx \\theta_{k+1}$ or until $\\ell'(\\theta_{k+1}) \\approx 0$\n\\end{enumerate}\n\n\\noindent For the case of logistic regression, the parameter is $\\theta = \\boldsymbol{\\beta}$.\\\\\n\n\\noindent Although not discussed here, it can be shown that for Bernoulli data $I(\\boldsymbol{\\beta}_{(k)}) = \\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X}$.\\\\\n\n\\noindent $\\boldsymbol{W}_{(k)}$ is an $N \\times N$ diagonal matrix whose $i^{th}$ diagonal element is $\\pi_i(1-\\pi_i)$, where $\\pi_i$ is the fitted probability for the $i^{th}$ observation using $\\boldsymbol{\\beta}_{(k)}$.\\\\\n\n\\noindent The following is the Newton-Raphson implementation for logistic regression:\n\\begin{enumerate}\n    \\item Start with the initial value $\\boldsymbol{\\beta}_0 = \\begin{bmatrix} \n    0 \\\\\n    \\vdots \\\\\n    0\n    \\end{bmatrix}$\n    \\item $\\boldsymbol{\\hat{\\beta}}_{(k+1)} = \\boldsymbol{\\hat{\\beta}}_{(k)} + (\\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X})^{-1}[\\boldsymbol{X}^t\\boldsymbol{y} - \\boldsymbol{X}^t\\boldsymbol{\\hat{\\pi}}] = \\boldsymbol{\\hat{\\beta}}_{(k)} + (\\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X})^{-1}\\boldsymbol{X}^t(\\boldsymbol{y - \\hat{\\pi}})$\n    \\\\\n    Matrix inversion can be costly. (See \\href{https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/}{Don't invert that matrix} for more details). 
We will manipulate the above equation so the iteration can be performed without needing to invert any matrices.\n    \\\\\\\\\n    $\\boldsymbol{\\hat{\\beta}}_{(k+1)} - \\boldsymbol{\\hat{\\beta}}_{(k)} = (\\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X})^{-1}\\boldsymbol{X}^t(\\boldsymbol{y - \\hat{\\pi}})$\n    \\\\\\\\\n    $(\\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X})(\\boldsymbol{\\hat{\\beta}}_{(k+1)} - \\boldsymbol{\\hat{\\beta}}_{(k)}) = \\boldsymbol{X}^t(\\boldsymbol{y - \\hat{\\pi}})$\n    \\\\\\\\\n    Use a matrix solver to solve for $\\boldsymbol{x}$ in $\\boldsymbol{Ax} = \\boldsymbol{b}$ where \n    \\begin{itemize}\n    \\item $\\boldsymbol{A} = \\boldsymbol{X}^t \\boldsymbol{W}_{(k)} \\boldsymbol{X}$\n    \\item $\\boldsymbol{x} = \\boldsymbol{\\hat{\\beta}}_{(k+1)} - \\boldsymbol{\\hat{\\beta}}_{(k)}$\n    \\item $\\boldsymbol{b} = \\boldsymbol{X}^t(\\boldsymbol{y - \\hat{\\pi}})$\n    \\end{itemize}\n    \n    Note that $\\boldsymbol{\\hat{\\beta}}_{(k+1)} = \\boldsymbol{x} + \\boldsymbol{\\hat{\\beta}}_{(k)}$ and $\\boldsymbol{\\hat{\\pi}} = \\frac{e^{\\boldsymbol{X\\beta_{(k)}}}}{1 + e^{\\boldsymbol{X\\beta_{(k)}}}}$\n    \n    \\item Iterate until $\\boldsymbol{\\hat{\\beta}}$ has converged. At convergence, the gradient $\\boldsymbol{X}^t(\\boldsymbol{y - \\hat{\\pi}})$ is $\\approx \\boldsymbol{0}$, so the update $\\boldsymbol{x}$ is $\\approx \\boldsymbol{0}$ as well\n\\end{enumerate}\n\n\\section{How to Predict the Class of a New Observation}\n\nRecall the link function: \n$$\n\\log \\big( \\dfrac{\\pi_i}{1-\\pi_i} \\big) = \\alpha + \\beta x_i\n$$\n\n\\noindent We can rewrite this in vector notation as the following:\n$$\n\\log \\big( \\dfrac{\\pi_i}{1-\\pi_i} \\big) = \\boldsymbol{x_i}^t \\boldsymbol{\\hat{\\beta}}\n$$\n\nwhere $\\boldsymbol{x_i} = \\begin{bmatrix} \n    1 \\\\\n    x_{1,i} \\\\\n    \\vdots \\\\\n    x_{p,i}\n    \\end{bmatrix}$ is the vector that contains a leading $1$ (matching the intercept) followed by the data values for each explanatory variable, and $\\boldsymbol{\\hat{\\beta}}$ is the vector of coefficients found by the Newton-Raphson calculation.\\\\\n    \n\\noindent Inverting the link function gives:\n$$\n\\pi_i = \\dfrac{e^{\\alpha + \\beta x_i}}{e^{\\alpha + \\beta x_i} + 1}\n$$\n\n\\noindent In vector notation, this is:\n$$\n\\pi_i = \\dfrac{e^{\\boldsymbol{x_i}^t \\boldsymbol{\\hat{\\beta}}}}{e^{\\boldsymbol{x_i}^t \\boldsymbol{\\hat{\\beta}}} + 1}\n$$\n\n\\noindent Given a new vector of observations $\\boldsymbol{z}$, we can plug $\\boldsymbol{z}$ in for $\\boldsymbol{x_i}$ in the above formula and compute $\\pi_i$. \n\nA basic classification procedure is the following:\n\n\\begin{itemize}\n    \\item If $\\pi_i \\geq 0.5$, we classify the new observation as class $1$\n    \\item If $\\pi_i < 0.5$, we classify the new observation as class $0$\n\\end{itemize}\n\n\\noindent The above classification procedure can be modified as desired.\n
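\\noindent To make the whole pipeline concrete, the following is a minimal NumPy sketch of the fit-and-predict procedure described above. It is an illustration only, not the Q implementation; the names (\\texttt{fit\\_logistic}, \\texttt{predict}) are ours, and a production version would need safeguards (e.g. for a singular $\\boldsymbol{A}$).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_logistic(X, y, iters=25, tol=1e-10):\n    # X is N x M with a leading column of ones; y holds 0/1 labels.\n    beta = np.zeros(X.shape[1])                  # initial guess: all zeros\n    for _ in range(iters):\n        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities\n        w = pi * (1.0 - pi)                      # diagonal of W\n        A = X.T @ (w[:, None] * X)               # X^t W X\n        b = X.T @ (y - pi)                       # X^t (y - pi)\n        delta = np.linalg.solve(A, b)            # solve A x = b; no inverse\n        beta = beta + delta\n        if np.max(np.abs(delta)) < tol:          # converged: update ~ 0\n            break\n    return beta\n\ndef predict(z, beta):\n    # z is a new observation with a leading 1 for the intercept.\n    pi = 1.0 / (1.0 + np.exp(-(z @ beta)))\n    return 1 if pi >= 0.5 else 0\n\\end{verbatim}\n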
\\begin{thebibliography}{9}\n\n\\bibitem{notes citation} \nLecture notes from Professor Deborah Nolan, UC Berkeley, STAT 151A, Spring 2017\n\n\\bibitem{stats textbook} \nTrevor Hastie, Robert Tibshirani, and Jerome Friedman. \n\\textit{The Elements of Statistical Learning 2nd Edition}. \nSpringer, New York, NY, 2009.\n\n\\bibitem{image citation} \nImage citation: Ruye Wang - Newton-Raphson method (univariate)\n\\\\\\texttt{http://fourier.eng.hmc.edu/e176/lectures/NM/node20.html}\n\n\\bibitem{don't invert that matrix}\n``Don't invert that matrix\" citation: John D. Cook\n\\\\\\texttt{https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/}\n\\end{thebibliography}\n\n\\section{Data Structures}\n\n\\bi\n\\item \\(X\\) is an \\(N \\times M\\) matrix containing the input data together with a leading column of $1$s (for the intercept).\n  \\(X_{i, j}\\) is the value of the \\(j^{th}\\) attribute of the \\(i^{th}\\)\n  instance. \\(X\\) is stored as \\(M\\) columns, where \\(X_j\\) holds the\n  observations for attribute \\(j\\)\n\n\\item \\(y\\) is an \\(N \\times 1\\) classification vector. \\(y_i\\) is the\n  classification of instance \\(i\\) and can be 1 or 0.\n\\item \\(\\beta\\) is an \\(M \\times 1\\) coefficient vector. \\(\\beta_j\\)\n  is the coefficient for attribute \\(j\\). \n\\item \\(\\beta^{\\mathrm new}\\) is an \\(M \\times 1\\) vector, which holds the new\n  coefficients that we solve for in each iteration\n\\item \\(A\\) is an \\(M \\times M\\) matrix. Since it is symmetric, we can skip\n  computing the lower diagonal elements.\n\\item \\(W\\) is a diagonal \\(N \\times N\\) matrix with \\(W_{i, i} = \\pi_i(1 - \\pi_i)\\). Since the\n  off-diagonal elements are zero, we can represent it as an \\(N \\times\n  1\\) vector.   When used as a vector, we will use lower case \\(w\\). When used\n  as a matrix, we will use upper case \\(W\\). Note that \\(W_{i, i} = w_i\\) and\n  that \\( i \\neq j \\Rightarrow W_{i,j} = 0\\)\n\\item \\(b\\) is an \\(M \\times 1\\) vector, \\(X^T (y - p)\\), the right-hand side of the iteration below\n  \\ei\n\nWith these elements, the Newton Raphson iteration becomes the following:\n\\\\\n$(X^TWX)\\times (\\beta^{new} - \\beta) = X^T(y-p)$\n\nwhere \n\\be\n\\item \\(W\\) is symmetric and positive definite\n\\item Because \\(W\\) is symmetric and positive definite, \\(A = \n  X^T W X\\) is at least positive semi-definite.\n\\item If the attributes are linearly independent, then \\(A\\) will actually be\n  positive definite; else, the dependent attributes should be removed prior to\n  starting the computation.\n\n\\framebox{{\\bf ANDREW Any easy way to do the above? }}\n\n\\ee\n\n
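\\noindent One possible answer to the question in the box (a sketch, not a vetted recipe; \\texttt{independent\\_columns} is our name, not a Q operator): detect dependent attributes numerically before the iteration starts with a greedy rank check.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef independent_columns(X, tol=1e-10):\n    # Greedily keep columns that increase the numerical rank, so the\n    # kept columns are linearly independent (up to the tolerance).\n    keep = []\n    for j in range(X.shape[1]):\n        if np.linalg.matrix_rank(X[:, keep + [j]], tol=tol) == len(keep) + 1:\n            keep.append(j)\n    return keep\n\\end{verbatim}\n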
\\section{computations}\n\nStep-by-step computations are listed in Table~\\ref{step_by_step_calc}.\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|l|l|l|l|} \\hline \\hline\n  {\\bf Name} & {\\bf Description} & {\\bf Type} & {\\bf Code} \\\\ \\hline \\hline\n  \\(t1\\) & \\(X \\beta\\) & \\((N \\times M) \\times (M \\times 1)\\)  &  \\\\\n  & & \\(= N \\times 1\\) & \\(t1 = \\mathrm{mvmul} (X, beta)\\) \\\\ \\hline\n  \\(p\\) & fitted probabilities & \\(N \\times 1\\) & \\( p = \\mathrm{logit}(t1) = e^{t1}/(1 + e^{t1})\\) \\\\ \\hline\n  \\(t2\\) &  \\(y - p\\) & \\(N \\times 1\\) & \\( t2 = \\mathrm{vvsub}(y, p)\\) \\\\ \\hline\n  \\(w\\) & diagonal of \\(W\\) & \\(N \\times 1\\) & \\( w = \\mathrm{logit2}(t1) = e^{t1}/(1 + e^{t1})^2\\) \\\\ \\hline\n  \\(b\\) & \\(X^T (y-p)\\) & \\((M\\times N) \\times (N \\times 1)\\)\n  & \\(\\forall_{j=1}^{j=M} b_j = \\) \\\\ \n        & & \\( = M \\times 1 \\) & \\(\\mathrm{sumprod}(X_j, t2)\\) \\\\ \\hline\n  \\(A\\) & \\(X^T W X\\) & \\((M \\times N) \\times (N \\times N) \\times (N \\times M)\\)\n  & \\(\\forall_{j=1}^{j=M} \\forall_{k=j}^{k=M} A_{j, k} = \\) \\\\ \n  & & \\(= (M \\times M)\\) & \\(\\mathrm{sumprod2}(X_j, w, X_k)\\) \\\\ \\hline\n\\hline\n\\end{tabular}\n\\caption{Listing of individual steps and intermediate values}\n\\label{step_by_step_calc}\n\\end{table}\n\n\\begin{table}[hb]\n\\centering\n\\begin{tabular}{|l|l|l|l|} \\hline \\hline\n  {\\bf Name} & {\\bf Input Type} & {\\bf Output Type} & {\\bf Return Value} \\\\ \\hline \\hline\n  logit & Vector \\(x\\) & Vector \\(y\\) & \\(y = \\frac{e^x}{1 + e^x}\\) \\\\ \\hline\n  logit2 & Vector \\(x\\) & Vector \\(y\\) & \\(y = \\frac{e^x}{(1 + e^x)^2}\\) \\\\ \\hline\n  vvadd & Vector \\(x\\), Vector \\(y\\) & Vector \\(z\\) & \\(z = x + y \\)  \\\\ \\hline\n  vvsub & Vector \\(x\\), Vector \\(y\\) & Vector \\(z\\) & \\(z = x - y \\)  \\\\ \\hline\n  vvmul & Vector \\(x\\), Vector \\(y\\) & Vector \\(z\\) & \\(z = x \\times y \\)  \\\\ \\hline\n  vvdiv & Vector \\(x\\), Vector \\(y\\) & Vector \\(z\\) & \\(z = x / y \\)  \\\\ \\hline\n  vsmul & Vector \\(x\\), Scalar \\(y\\) & Vector \\(z\\) & \\(\\forall i:~z_i = x_i \\times y \\)  \\\\ \\hline\n  sumprod & Vector \\(x\\), Vector \\(y\\) & Scalar \\(z\\) & \\(z = \\sum_i (x_i \\times y_i)\\) \\\\ \\hline\n  sumprod2 & Vector \\(x\\), Vector \\(y\\), Vector \\(z\\) & Scalar \\(w\\) &  \\(w = \\sum_i (x_i \\times y_i \\times z_i)\\) \\\\ \\hline\n\\hline\n\\end{tabular}\n\\caption{Necessary Operators}\n\\label{tbl_custom_ops}\n\\end{table}\n\n\n\\section{Details}\n\n\\subsection{Notes}\n\n\\be\n\\item The calculation of \\(A\\) is simplified by the fact that the off-diagonal\n  elements of \\(W\\) are 0 and that \\(A\\) is a symmetric matrix. 
See the last row of\n  Table~\\ref{step_by_step_calc}.\n\\ee\n\\subsection{Clarifications needed}\n\n\\be\n\\item Initial guess for \\(\\beta\\)\n\\ee\n\n\\section{Putting it all together}\nThe Q code for one iteration will look like the following.\n\n\\begin{verbatim}\nt1 = mvmul(X, beta)  \np = logit(t1)\nw = logit2(t1)\nt2 = vvsub(y, p)\nfor j in 1 to M do \n  b[j] = sumprod(Xj, t2)\nend\nfor j in 1 to M do \n  for k in j to M do \n    A[j][k] = sumprod2(Xj, w, Xk)\n  end\nend\n\\end{verbatim}\n\nEach pass then solves \\(A x = b\\) with a matrix solver, updates \\(\\beta^{\\mathrm new} = \\beta + x\\), and repeats until the update is negligible.\n\n\\end{document}\n", "meta": {"hexsha": "669f160773fba32c9e94bbfc487384f69eef2165", "size": 18788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ML/LOGREG/doc/log_reg.tex", "max_stars_repo_name": "subramon/qlu", "max_stars_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ML/LOGREG/doc/log_reg.tex", "max_issues_repo_name": "subramon/qlu", "max_issues_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-07-29T16:48:25.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-26T23:47:22.000Z", "max_forks_repo_path": "ML/LOGREG/doc/log_reg.tex", "max_forks_repo_name": "subramon/qlu", "max_forks_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-14T22:34:13.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-14T22:34:13.000Z", "avg_line_length": 43.6930232558, "max_line_length": 390, "alphanum_fraction": 0.6486587183, "num_tokens": 6430, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879991, "lm_q2_score": 0.8333245994514084, "lm_q1q2_score": 0.5908430754173762}}
{"text": "% Macros were moved to macros.tex because I need them in an earlier section.\n\n\\section{The Input Language}\n\\label{sec:input-language}\n\n\\iflong\n\\begin{figure}\n\t\\[\n\t\\begin{array}{rl@{\\quad}l}\n\t\t\\noalign{\\textbf{Propositions}}\n\t\t\\ip,\\ipp ::= \n\t\t    & \\iP \\ibar \\iselect{\\im}{\\iP}&\\mbox{Predicate names}\\\\\n\t\t  | & \\itrue \\ | \\ \\ifalse \\ibar \\inot{\\ip_1} \\ibar {\\ip_1}{\\land}{\\ip_2} \\ibar \n\t\t       {\\ip_1}{\\lor}{\\ip_2}\\ | \\ {\\ip_1}{\\Rightarrow}{\\ip_2} & \\mbox{Predicate logic}\\\\\n                  | & \\ipcases{\\ie}{\\il_1}{\\ix_1}{\\ip_1}{\\il_n}{\\ix_n}{\\ip_n} & \\mbox{Propositional case}\\\\\n\t\t  | & \\ilambda{\\ix}{\\is}{\\ip}  \\ibar \\iapp{\\ip}{\\ie} & \\mbox{Predicates and application}\\\\\n\t\t  | & \\iequal{\\ie_1}{\\ie_2} & \\mbox{Term equality}\\\\\n\t\t  | & \\iforall{\\ix}{\\is}{\\ip}  \\ibar \n\t\t      \\iexists{\\ix}{\\is}{\\ip} \\ |\\\n\t\t      \\iunique{\\ix}{\\is}{\\ip} & \\mbox{Term quantifiers}\\\\[5pt]\n\t\t\n\t\t\\noalign{\\textbf{Sets}}\n\t\t\\is ::= \n\t\t    & \\iS  \\ibar \\iselect{\\im}{\\iS} &\\mbox{Set names}\\\\\n\t\t  | & \\isunit \\ibar \\iprod{\\ix}{\\is_1}{\\is_2}\n                  &\\mbox{Unit and (dependent) cartesian product}\\\\\n\t\t  | & \\isvoid \\ibar \\isum{\\il_1}{\\is_1}{\\il_2}{\\is_2} &\\mbox{Void and disjoint union}\\\\\n\t\t  | & \\idarrow{\\ix}{\\is_1}{\\is_2} & \\mbox{(Dependent) function space} \\\\\n\t\t  | & \\ilambda{\\ix}{\\is_1}{\\is_2} \\ibar \n\t\t      \\iapp{\\is}{\\ie} &\\mbox{Dependent set and application}\\\\\n\t\t  | & \\iquot{\\is}{\\ipp} & \\mbox{Set quotient by an equivalence relation}\\\\\n\t\t  | & \\isubset{\\ix}{\\is}{\\ipp} & \\mbox{Subset satisfying a predicate}\\\\\n\t\t  | & \\irz{\\is}&\\mbox{Realizers of a set}\\\\[5pt] \n\t\t\n\t\t\\noalign{\\textbf{Terms}}\t\n\t\t\\ie ::=\n\t\t    & \\ix \\ibar \\iselect{\\im}{\\ix} &\\mbox{Term names}\\\\\n\t\t  | & \\ilambda{\\ix}{\\is_1}{\\ie} \\ibar \n\t\t      \\iapp{\\ie_1}{\\ie_2} &\\mbox{Function and application}\\\\\n\t\t  | & \\ituple{\\ie_1}{\\ie_2} \n\t\t      \\ibar \\iproj{\\ie}{n}&\\mbox{Tuple and projection}\\\\\n\t\t  | & \\iinj{\\il}{\\ie} \n\t\t      \\ibar (\\imatch{\\ie_0}{\\il_1}{\\ix_1}{\\ie_1}{\\il_2}{\\ix_2}{\\ie_2})&\\mbox{Injection and projection from a union}\\\\\n\t\t  | & \\ieclass{\\ie}{\\ipp}\n\t\t      \\ibar \\ileteclass{\\ix}{\\ipp}{\\ie_1}{\\ie_2}&\\mbox{Equivalence class and picking a representative}\\\\\n\t\t  | & \\irz{\\ie}\n\t\t      \\ibar \\iletrz{\\ix}{\\ie_1}{\\ie_2}&\\mbox{Realized value and picking a realizer}\\\\\n\t\t  | & \\icoerce{\\ie}{\\is} &\\mbox{Type coercion (e.g., in and out of a subset)}\\\\\n\t\t  | & \\ithe{\\ix}{\\is}{\\ip}&\\mbox{Definite description}\\\\\n\t\t  | & \\ilet{\\ix}{\\ie_1}{\\ie_2}&\\mbox{Local definition}\\\\[5pt]\n\t\t\n\t\t\\noalign{\\textbf{Models}}\t\t\n\t\t\\im ::= \n\t\t    & \\iM  \\ibar \\iselect{\\im}{\\iM}&\\mbox{Model names}\\\\\n\t\t  | & \\iapp{\\im_1}{\\im_2}&\\mbox{Application of parameterized model}\\\\[5pt]\n\t\t\n\t\t\\noalign{\\textbf{Proposition Kinds}}\n\t\t\\ipt ::=\n\t\t    & \\iProp \\ibar \\iStable & \\mbox{Classifiers for all propositions/stable propositions}\\\\\n\t\t  | & \\iEquiv{\\is} &\\mbox{Classifier for stable equivalences on $\\is$}\\\\\n\t\t  | & \\idarrow{\\ix}{\\is}{\\ipt} & \\mbox{Classifier for a predicate/relation}\\\\[5pt] \n\t\t\n\t\t\\noalign{\\textbf{Set Kinds}}\n\t\t\\ik ::= \n\t\t    & \\iSet &\\mbox{Classifier for a proper set}\\\\\n\t\t   | & \\idarrow{\\ix}{\\is}{\\ik} 
&\\mbox{Classifier for a dependent set}\\\\[5pt]\n\t\t\n\n\t\t\\noalign{\\textbf{Theory Elements}}\n\t\t\\ite ::=\n\t\t     & \\iDefinition{\\ix}{\\ie}.\\ibar\\iDefinition{\\iS}{\\is}.&\\mbox{Give a name to a term or set}\\\\\n\t\t   | & \\iDefinition{\\iP}{\\ip}.\\ibar\\iDefinition{\\iTH}{\\ith}.&\\mbox{Give a name to a predicate or theory}\\\\\n\t\t   | & \\iParameter{\\ix}{\\is}.\\ibar\\iParameter{\\iS}{\\ik}.&\\mbox{Require an element in the given set or kind}\\\\\n\t\t   | & \\iParameter{\\iP}{\\ipt}.\\ibar\\iParameter{\\iM}{\\ith}.&\\mbox{Require a predicate or model of the given sort}\\\\\n\t\t   | & \\iAxiom{\\iP}{\\ip}.&\\mbox{Axiom that must hold}\\\\[5pt]\n\n  \t\t\\noalign{\\textbf{Theories}}\n\t\t\\ith ::= \n\t\t     & \\iTH&\\mbox{Theory name}\\\\\n%\t\t   | & \\iselect{\\im}{\\iTH}\\\\\n\t\t   \t| & \\ithy{\\ite_1,\\ldots,\\ite_n} &\\mbox{Theory of a model}\\\\\n\t\t \t| & \\idarrow{\\iM}{\\ith_1}{\\ith_2} &\\mbox{Theory of a uniform family of models}\\\\\n\t\t  \t| & \\ilambda{\\iM}{\\ith_1}{\\ith_2} \\ibar \n\t\t      \\iapp{\\ith}{\\iM}&\\mbox{Parameterized theory and application}\\\\\n\t\\end{array}\n\t\\]\n\t\\caption{Input Syntax (Simplified)}\n\t\\label{fig:input}\n\\end{figure}\n\\fi % iflong\n\nThe input to RZ consists of one or more theories.\nAn RZ \\emph{theory} is a generalized logical signature with associated\naxioms, similar to a Coq module signature. Theories describe\n\\emph{models}, or implementations. \n\\iflong\nA summary of the input language appears in Figure~\\ref{fig:input}.\n\\fi % iflong\n\nThe simplest theory $\\ith$ is a list of \\emph{theory element}\\/s\n$\\ithy{\\ite_1 \\ldots \\ite_n}$. A theory element may specify that a certain\nset, set element, proposition or predicate, or model must exist (using\nthe \n\\texttt{Parameter} keyword). It may also provide a definition\nof a set, term, proposition, predicate, or theory (using the \n\\texttt{Definition} keyword). Finally, a theory element can be\na named axiom (using the \\texttt{Axiom} keyword).\n\nWe allow model parameters in theories; \ntypical examples in mathematics include\nthe theory of a vector space parameterized by a field of scalars%\n\\iflong\n, or the theory of the real numbers parameterized by a model of the\nnatural numbers.\n\\else % \\iflong\n.\n\\fi % \\iflong\n\n\\iflong\nFollowing Sannella, Sokolowski, and \nTarlecki\\footnote{``parameterized (program specification) $\\neq$ (parameterized program) specification''}~\\cite{sannella92:_towar},\nRZ supports two forms of parameterization.  \n\\fi % \\iflong\nA theory of a parameterized implementation\n$\\idarrow{\\iM}{\\ith_1}{\\ith_2}$ describes a uniform family of models (i.e.,\na single implementation; a functor in OCaml) that maps every\nmodel~$\\iM$ satisfying~$\\ith_1$ to a model of~$\\ith_2$.  In contrast,\na theory $\\ilambda{\\iM}{\\ith_1}{\\ith_2}$\nmaps models to theories; if $\\iTH$ is such a theory, then\n$\\iTH(\\im_1)$ and $\\iTH(\\im_2)$ are theories whose implementations might be completely unrelated.\\iflong\\footnote{\nFurther, in some cases $\\iTH(\\im_1)$ might be implementable while $\\iTH(\\im_2)$ is not.\n}\n\\fi % \\iflong\n\n\nPropositions and predicates appearing in theories may use full\nfirst-order constructive logic, not just the negative fragment. \n\\iflong\nThe grammar for logical inputs is shown in Figure~\\ref{fig:input}. 
Most of\nthis should be familiar, including the use of lambda abstraction to\ndefine predicates.\n\\fi\n\nThe language of sets is rich, going well beyond the type systems of\ntypical programming languages. In addition to any base sets postulated\nin a theory, one can construct dependent cartesian products and\ndependent function spaces. RZ also supports disjoint unions (with\nlabeled tags), quotient spaces (a set modulo a stable equivalence\nrelation), and subsets (elements of a set satisfying a predicate). It even\npermits explicit references to sets of realizers.\n\nThe term language includes introduction and elimination constructs\nfor the set level. For product sets we have\ntuples and projections ($\\iproj{\\ie}{1}$, $\\iproj{\\ie}{2}$, \\ldots),\nand for function spaces we have lambda abstractions and application.\nOne can inject a term into a tagged union, or do case analysis\non the members of a union. We can produce an equivalence class or pick a representative from an equivalence class (as\nlong as what we do with it does not depend on the choice of\nrepresentative). We can produce a set of realizers\nor choose a representative from a given set of realizers\n(as long as what we do with it does not depend on the choice of\nrepresentative). We can inject a term into a subset (if it satisfies\nthe appropriate predicate), or project an element out of a subset. \nFinally, the term language also allows local\ndefinitions of term variables, and definite descriptions (as long as\nthere is a unique element satisfying the predicate in question).\n\nFrom the previous paragraph, it is\nclear that checking the well-formedness of terms is not decidable. RZ\nchecks what it can, but does not attempt serious theorem proving.\nUncheckable constraints remain as obligations\nin the final output, and should be verified by other means before\nthe output can be used.\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"cie\"\n%%% End: \n\n", "meta": {"hexsha": "eadb6178164e2cf4d930ee652066a03000dd342b", "size": 8095, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "private/cie/logic.tex", "max_stars_repo_name": "andrejbauer/rz", "max_stars_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-08-28T10:12:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-07T21:04:22.000Z", "max_issues_repo_path": "private/cie/logic.tex", "max_issues_repo_name": "andrejbauer/rz", "max_issues_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "private/cie/logic.tex", "max_forks_repo_name": "andrejbauer/rz", "max_forks_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.9722222222, "max_line_length": 130, "alphanum_fraction": 0.6741198271, "num_tokens": 2653, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891479496521, "lm_q2_score": 0.7185944046238982, "lm_q1q2_score": 0.5908205212591104}}
{"text": "\\section{Computer Algorithms}\nTwo computer science concepts are core to the clustering and classification techniques investigated in this thesis.\nOne is a clustering algorithm that builds clusters by categorizing the datapoints into three different types, clustering some and denoting the unclustered as noise.\nAnother is a classification technique that searches for nearby datapoints in order to classify an unknown datapoint.\n\n\n%%%%% DBSCAN %%%%%\n\\subsection{Density-Based Clustering of \\Isols{}}\\label{sec:background:dbscan}\nDensity-based clustering algorithms build clusters based on two parameters: the minimum number of neighbors \\minneigh{},  a point must have to be a core point of a cluster, and \\eps{}, the radius that those neighbors must be within. \nThese algorithms define clusters with respect to core points and border points --- points within \\eps{} of a core point --- labeling everything else --- the singletons --- as noise.\nFor this work, we chose to use \\dbscan{} as the clustering technique for grouping \\isols{} because dense groupings of similar \\isols{} fits our intuition of bacterial \\isol{} strains.\nClosely related ``families'' of \\isols{} will appear in the same cluster and we want these clusters to have sufficient purity to aid us in \\mst{}.\n\n\n\\dbscan{}\\cite{ester1996density} provides the framework for our clustering algorithm. \nIt uses a \\distmetric{}, a minimum neighbors value \\minneigh{}, and an \\eps{} range to categorize data points as one of three types:\n\\index{\\dbscan{}}\n\\index{\\distmetric{}}\n\\index{\\minneigh{}}\n\\index{\\eps{}}\na core point, a border point, or noise.\nA \\textit{core point} is a point that has at least \\minneigh{} data points within \\eps{} of it. \n\\index{core point}\nA \\textit{border point} is a point that is within \\eps{} of a core point, but that does not have \\minneigh{} points within \\eps{} of it.\n\\index{border point}\nEvery other point is \\textit{noise}. \nA \\textit{cluster} is a group of neighboring core points with their associated border points.\n\\index{cluster}\nAccording to this definition of a cluster, all clusters must have at least \\minneigh{} points in them.\n\\autoref{fig:density-based-clustering} depicts this process.\n\\input{figures/density}\n\nDensity-based clustering techniques require a distance metric, often times the \\euclid{}, between data points in order to cluster.\nPerforming fast range queries greatly improves the speed of clustering.\nIf the range query can finish in \\bigo{\\log{n}} time, then \\dbscan{} can run in \\bigo{n\\log{n}} time.\nOrganizing the data into a spatial index can optimize these spatial queries.\n\nSpatial indexes structure the data into a search tree, similar to a binary search tree, organizing the points by distance.\nWhen querying for nearby points, the algorithm can traverse this search tree, ignoring certain points along the way.\nWhile this can speed up the range query to a \\bigo{\\log{n}}, many spatial indexes degenerate into a \\bigo{n} operation. 
In \\dbscan{}, the \\codefn{RangeQuery} function handles range quer\\-ies by taking as parameters the data point and a distance and returning all data points within range of the query point.\nMathematically, a \\textit{range query} is a function $\\mathit{RangeQuery}: D\\times \\R \\rightarrow \\{D\\}$ that takes a query point $q\\in D$ and a real-valued $\\eps{}\\in\\R$ and returns $\\{d\\in D \\mid \\mathit{Dist}(q, d) < \\eps{}\\}$ --- the set of all other points within \\eps{} of the query point --- where $\\mathit{Dist}$ is some \\distmetric{} $\\mathit{Dist}: D\\times D \\rightarrow \\R $.\n\\index{range query}\nIf the \\distmetric{} forms a proper metric space, then data structures like quad trees and octrees can speed up \\eps{} range queries for a point.\nOne can imagine the range query as a hypersphere centered at the query point with a radius of the query range.\nIn order to make \\codefn{RangeQuery} fast, we had to make some optimizations.\n\n\n%%%%% kNN %%%%%\n\\subsection{\\kNNlong{}}\\label{sec:background:knn}\nThe \\kNNlong{} classification algorithm (\\kNN{}) is a straightforward algorithm for classifying an unclassified object using a library of classified objects. \nUsing a \\compfunc{}, it compares the unclassified object to ``nearby'' classified objects.\nIt uses the \\compfunc{} to formulate an idea of ``closeness,'' asserting that an unknown object likely shares the class of the classified objects it most resembles.\n\\autoref{fig:knn} illustrates this idea.\n\n\nTo outline the process:\nGiven an unclassified object \\UNKNOWN{}, a library of classified objects \\LIB{}, and a \\compfunc{}, \\COMP{}:\n\\begin{enumerate}\n\\item Compare \\UNKNOWN{} to each object in \\LIB{} using \\COMP{}\n\\item Add the classified object and the result to a list of neighbors, $N$\n\\item Sort $N$ by most similar\n\\item Consider only the top $k$ entries in $N$, called the \\knnlong{} \\label{knn:filter}\n\\item Classify \\UNKNOWN{} as the \\textit{most plural} classification in the \\knnlong{} list\n\\end{enumerate}\n\\autoref{alg:knn} describes this process in pseudocode.\n\\input{algorithms/knn}\n\n\\input{figures/knn}\n\nThe motivation is that the unclassified object must be ``close'' to some of the classified objects in our database, using an appropriate measure of closeness --- the \\textit{\\compfunc{}} --- for the data. \n\\index{\\compfunc{}}\nBy choosing the \\textit{most plural} or \\textit{dominant} classification --- the classification that shows up the highest number of times ---  in the \\knnlong{} we can, with some accuracy, classify our unknown object.\n\\index{most plural classification}\n\\index{dominant classification}\n\n\\autoref{fig:knn} depicts an example graph of datapoints in a coordinate space. \nAll but one of the datapoints has a class associated with it, any of \\achar{}, \\bchar{}, or \\cchar{}. \nThe class of one point, denoted by \\unknownchar{}, requires determination. 
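A minimal Python sketch of the five steps above (the names \\texttt{knn\\_classify} and \\texttt{comp} are ours; \\texttt{comp} plays the role of the \\compfunc{}, with larger values meaning more similar):\n\\begin{verbatim}\nfrom collections import Counter\n\ndef knn_classify(unknown, library, comp, k):\n    # library is a list of (object, classification) pairs.\n    neighbors = [(comp(unknown, obj), label) for obj, label in library]\n    neighbors.sort(key=lambda t: t[0], reverse=True)  # most similar first\n    top_k = [label for _, label in neighbors[:k]]     # the k nearest\n    return Counter(top_k).most_common(1)[0][0]        # most plural class\n\\end{verbatim}\n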
\n\\Compfuncs{} like \\euclid{} or \\manhattan{} may be the most appropriate way to compare these datapoints.\n\n\\autoref{fig:knn} shows how using \\kNN{} with \\euclid{} and various \\k{} can classify this point.\nFigures~\\ref{fig:knn:4}, \\ref{fig:knn:6}, and \\ref{fig:knn:9} show the \\knnlong{} lists as \\k{} changes from 4, to 6, to 9.\nAt \\k{}=4, we see that \\kNN{} classifies the unknown as \\achar{} because there are 2 \\achar{}s, but only one each of \\bchar{} and \\cchar{}.\nFor \\k{}=6, \\bchar{} is the resulting classification, since \\bchar{} shows up 3 times --- more than the 2 \\achar{} and 1 \\cchar{}.\nFinally, as we extend \\k{} all the way out to 9, \\kNN{} classifies it as \\cchar{}, since such an extension exposes the \\knnlong{} list to all 4 \\cchar{}'s, more than any other classification.", "meta": {"hexsha": "88ba7bd063435baf4fb5f9fcce6efaf57c546a62", "size": 6649, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/background/computational.tex", "max_stars_repo_name": "jmcgover/thesis", "max_stars_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/background/computational.tex", "max_issues_repo_name": "jmcgover/thesis", "max_issues_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/background/computational.tex", "max_forks_repo_name": "jmcgover/thesis", "max_forks_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.4252873563, "max_line_length": 385, "alphanum_fraction": 0.7513911866, "num_tokens": 1695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189134878876, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5908205069115837}}
{"text": "\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\n% partial derivative as a fraction\n\\newcommand{\\fracpd}[2]{\n  \\ensuremath{\\frac{\\partial #1}{\\partial #2}}\n}\n\n\n\\begin{document}\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{Envelope Method}\n  Original problem is\n\n  \\begin{align*}\n    J_t(k_t, x_t) &= \\max_{\\chi_{t}} \\left[(1 - \\beta) c_t^{\\rho} + \\beta \\mu(J_{t+1}(k_{t+1}, x_{t+1}) g_{t+1})^{\\rho} \\right]^{\\frac1\\rho} \\\\\n    \\text{Subject To }& \\\\\n    c_t &= \\left[ \\eta k_{t}^{\\nu} + (1 - \\eta) \\right]^{\\frac1\\nu} + (1 - \\delta) k_t - \\chi_t \\\\\n    \\chi_t &= k_{t+1} g_{t+1} \\\\\n    x_{t+1} &= A x_{t} + B \\bar{b}^{\\frac{1}{2}} \\varepsilon_{1, t+1} \\\\\n    \\log(g_t) &= \\log(z_t) - \\log(z_{t-1}) = \\log(\\bar{g}) + x_t\n  \\end{align*}\n\n  We can take first order conditions to get\n\n  \\begin{align*}\n    (1 - \\beta) c_t^{\\rho - 1} &= \\beta \\mu(J_{t+1} g_{t+1})^{\\rho - \\alpha} E_t \\left[ (g_{t+1} J_{t+1})^{\\alpha - 1} \\fracpd{J_{t+1}}{k_{t+1}} \\right]\n  \\end{align*}\n\n  The envelope condition reveals\n\n  \\begin{align*}\n    \\fracpd{J_t(k_t, x_t)}{k_t} &= J_t^{1 - \\rho} (1 - \\beta) c_t^{\\rho - 1} (\\eta y_t^{1 - \\nu} k_t^{\\nu - 1} + 1 - \\delta)\n  \\end{align*}\n\n  Notice for any pair $(k_t, x_t)$ we can solve the envelope condition to get\n\n  \\begin{align*}\n    c_t &= \\left( \\frac{J_{k, t}(k_t, x_t)}{(1 - \\beta) J_t^{1 - \\rho} (\\eta y_t^{1 - \\nu} k_t^{\\nu - 1} + 1 - \\delta)} \\right)^{\\frac{1}{\\rho - 1}} \\\\\n    \\rightarrow \\chi_t &= y_t + (1 - \\delta) k_t - c_t \\\\\n    \\chi_t &= y_t + (1 - \\delta) k_t - \\left( \\frac{J_{k, t}(k_t, x_t)}{(1 - \\beta) J_t^{1 - \\rho} (\\eta y_t^{1 - \\nu} k_t^{\\nu - 1} + 1 - \\delta)} \\right)^{\\frac{1}{\\rho - 1}} \\\\\n  \\end{align*}\n\n\n\\end{document}\n", "meta": {"hexsha": "4c44bcc9b781afa462ebe26a14b346fea1a2d4a2", "size": 1771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Models/OneAgent/OneGood/RecursivePref/ConstantVolatility/envelopemethod.tex", "max_stars_repo_name": "NYUEcon/GrowthModels", "max_stars_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-08-19T23:28:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T11:07:10.000Z", "max_issues_repo_path": "Models/OneAgent/OneGood/RecursivePref/ConstantVolatility/envelopemethod.tex", "max_issues_repo_name": "NYUEcon/GrowthModels", "max_issues_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-08-18T20:18:34.000Z", "max_issues_repo_issues_event_max_datetime": "2015-08-20T17:21:15.000Z", "max_forks_repo_path": "Models/OneAgent/OneGood/RecursivePref/ConstantVolatility/envelopemethod.tex", "max_forks_repo_name": "NYUEcon/GrowthModels", "max_forks_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2015-11-09T18:43:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-01T18:49:32.000Z", "avg_line_length": 35.42, "max_line_length": 179, "alphanum_fraction": 0.5155279503, "num_tokens": 749, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5907973935761871}}
{"text": "\\subsection{Test 2}\r\n\r\n\\begin{enumerate}[label=\\arabic*.]\r\n\t\\item \r\n\t\tFind the general form of a particular solution to the differential equation\r\n\t\t\\begin{equation*}\r\n\t\t\ty'' + 10y' + 25y = f(t)\r\n\t\t\\end{equation*}\r\n\t\tfor each of the following cases:\r\n\t\t\\begin{enumerate}[label=(\\alph*)]\r\n\t\t\t\\item \r\n\t\t\t\t$f(t) = \\cos{(-5t)} + 7t\\sin{(-5t)}$\r\n\t\t\t\\item\r\n\t\t\t\t$f(t) = te^{-5t}$.\r\n\t\t\\end{enumerate}\r\n\t\\item\r\n\t\tConsider the matrix $A = \\begin{bmatrix}\r\n\t\t\t1  & 0 & 6 \\\\\r\n\t\t\t3  & 1 & 3 \\\\\r\n\t\t\t-3 & 3 & -8\r\n\t\t\\end{bmatrix}$.\r\n\tCompute its characteristic polynomial and find all its eigenvalues.\r\n\tPick one of the eigenvalues and compute a corresponding eigenvector.\r\n\t\\item\r\n\t\tChoose one of the two parts.\r\n\t\t\\begin{enumerate}[label=(\\alph*)]\r\n\t\t\t\\item \r\n\t\t\t\tCompute the general solution for the equation $y'' - 5y' + 6y = 6e^{4t} - 10e^{t}$.\r\n\t\t\t\\item Use the method of variation of parameters to find a particular solution to $y'' + 9y = \\csc{(3t)}$.\r\n\t\t\\end{enumerate}\r\n\t\\item\r\n\t\tA 1kg mass is attached to a spring with stiffness 8N/m. The damping constant is 6Ns/m.\r\n\t\tAt time $t=0$, an external force $F(t) = 8\\sin{(2t)}$ is applied to the system as the mass is pushed rightward from equilibrium with a velocity of 1m/s.\r\n\t\t\\begin{enumerate}[label=(\\alph*)]\r\n\t\t\t\\item\r\n\t\t\t\tFind the displacement function of the mass.\r\n\t\t\t\\item\r\n\t\t\t\tDetermine the steady-state solution for the system.\r\n\t\t\t\\item\r\n\t\t\t\tWrite the steady-state solution in the form $A\\cos{(\\omega t - \\phi)}$ indicating the amplitude, frequency, period, and phase-shift.\r\n\t\t\\end{enumerate}\r\n\t\\item\r\n\t\tChoose one of the following two parts.\r\n\t\t\\begin{enumerate}[label=(\\alph*)]\r\n\t\t\t\\item \r\n\t\t\t\tConsider the following linear system of differential equations with given initial conditions:\r\n\t\t\t\t\\begin{equation*}\r\n\t\t\t\t\t\\begin{cases}\r\n\t\t\t\t\t\tx_1' = 2x_1 - 3x_2 \\\\\r\n\t\t\t\t\t\tx_2' = x_1 - 2x^2\r\n\t\t\t\t\t\\end{cases} \\text{, } x_1(0)=2 \\text{, } x_2(0)=3\r\n\t\t\t\t\\end{equation*}\r\n\t\t\t\t\\begin{enumerate}[label=(\\roman*)]\r\n\t\t\t\t\t\\item\r\n\t\t\t\t\t\tWrite the system in normal form.\r\n\t\t\t\t\t\\item\r\n\t\t\t\t\t\tShow that the vectors $\\begin{bmatrix}\r\n\t\t\t\t\t\t\t3e^t \\\\\r\n\t\t\t\t\t\t\te^t\r\n\t\t\t\t\t\t\\end{bmatrix}$ and $\\begin{bmatrix}\r\n\t\t\t\t\t\t\te^{-t} \\\\\r\n\t\t\t\t\t\t\te^{-t} \r\n\t\t\t\t\t\t\\end{bmatrix}$ are solutions to the homogeneous system.\r\n\t\t\t\t\t\\item\r\n\t\t\t\t\t\tWrite the general solution for the system and then compute the unique solution satisfying the initial conditions.\r\n\t\t\t\t\\end{enumerate}\r\n\t\t\t\\item\r\n\t\t\t\tThe matrix $A = \\begin{bmatrix}\r\n\t\t\t\t\t-1 & 2 \\\\\r\n\t\t\t\t\t-1 & 3\r\n\t\t\t\t\\end{bmatrix}$ has eigenvalues $\\lambda = -2 \\pm i$.\r\n\t\t\t\tFind the general solution to the system $\\vec{x}' = A\\vec{x}$.\r\n\t\t\\end{enumerate}\r\n\t\\item\r\n\t\tConsider the system\r\n\t\t$\\vec{x}' = \\begin{bmatrix}\r\n\t\t\t2 & -1 \\\\\r\n\t\t\t3 & -2\r\n\t\t\\end{bmatrix}\\vec{x} + \\begin{bmatrix}\r\n\t\t\t2t \\\\\r\n\t\t\t3t + 3\r\n\t\t\\end{bmatrix}$.\r\n\t\tIt is known that the corresponding homogeneous sysstem has general solution\r\n\t\t$\\vec{x_h} = C_1e^t\\begin{bmatrix}\r\n\t\t\t1 \\\\\r\n\t\t\t1\r\n\t\t\\end{bmatrix} + C_2e^{-t}\\begin{bmatrix}\r\n\t\t\t1 
\\\r\n\t\t\t3\r\n\t\t\\end{bmatrix}$.\r\n\t\tFind the general solution for the given nonhomogeneous system.\r\n\\end{enumerate}", "meta": {"hexsha": "c1c05e8e06cbe77d8655875bc0c3afbff1972ce5", "size": 3071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/additionalResources/tests/test2.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "diffEq/additionalResources/tests/test2.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "diffEq/additionalResources/tests/test2.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 33.3804347826, "max_line_length": 155, "alphanum_fraction": 0.6154347118, "num_tokens": 1043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7401743677704878, "lm_q1q2_score": 0.590797393576187}}
{"text": "\n\n\n\\subsection{Adjustment}\n\n\\index{C}{adjustment!computation}\n\nThere are two basic approaches to adjusting for covariates.\nConceptually, the simplest one is to hold the covariates constant at\nsome level when collecting data or by extracting a subset of data\nwhich holds those covariates constant.  The other approach is to\ninclude the covariates in your models.\n\nFor example, suppose you want\nto study the differences in the wages of male and females.  The very\nsimple model \\model{\\VN{wage}}{\\VN{sex}} might give some insight, but\nit attributes to \\VN{sex} effects that might actually be due to level\nof education, age, or the sector of the economy in which the person\nworks.  Here's the result from the simple model:\\datasetCPS\n\\begin{Schunk}\n\\begin{Sinput}\n> cps = fetchData(\"cps.csv\")\n> mod0 = lm( wage ~ sex, data=cps)\n> summary(mod0)\n\\end{Sinput}\n\\begin{Soutput}\n...\n            Estimate Std. Error t value Pr(>|t|)    \n(Intercept)    7.879      0.322   24.50  < 2e-16 ***\nsexM           2.116      0.437    4.84  1.7e-06 ***\n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1 \n\nResidual standard error: 5.03 on 532 degrees of freedom\nMultiple R-squared: 0.0422,\tAdjusted R-squared: 0.0404 \nF-statistic: 23.4 on 1 and 532 DF,  p-value: 1.7e-06 \n\\end{Soutput}\n\\end{Schunk}\nThe coefficients indicate that a typical male makes \\$2.12 more per\nhour than a typical female.  (Notice that $R^2 = 0.0422$ is very small:\n\\VN{sex} explains hardly any of the person-to-person variability in wage.)\n\nBy including the variables \\VN{age}, \\VN{educ}, and \\VN{sector} in the\nmodel, you can adjust for these variables:\n\\begin{Schunk}\n\\begin{Sinput}\n> mod1 = lm( wage ~ age + sex + educ + sector, data=cps)\n> summary(mod1)\n\\end{Sinput}\n\\begin{Soutput}\n...\n              Estimate Std. Error t value Pr(>|t|)    \n(Intercept)    -4.6941     1.5378   -3.05  0.00238 ** \nage             0.1022     0.0166    6.17  1.4e-09 ***\nsexM            1.9417     0.4228    4.59  5.5e-06 ***\neduc            0.6156     0.0944    6.52  1.6e-10 ***\nsectorconst     1.4355     1.1312    1.27  0.20500    \nsectormanag     3.2711     0.7668    4.27  2.4e-05 ***\nsectormanuf     0.8063     0.7311    1.10  0.27064    \nsectorother     0.7584     0.7592    1.00  0.31829    \nsectorprof      2.2478     0.6698    3.36  0.00085 ***\nsectorsales    -0.7671     0.8420   -0.91  0.36273    \nsectorservice  -0.5687     0.6660   -0.85  0.39356    \n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1 \n\nResidual standard error: 4.33 on 523 degrees of freedom\nMultiple R-squared: 0.302,\tAdjusted R-squared: 0.289 \nF-statistic: 22.6 on 10 and 523 DF,  p-value: <2e-16 \n\\end{Soutput}\n\\end{Schunk}\nThe adjusted difference between the sexes is \\$1.94 per hour.  (The\n$R^2=0.30$ from this model is considerably larger than for \\texttt{mod0},\nbut still a lot of the person-to-person variation in wages has not\nbe captured.)\n\nIt would be wrong to claim that simply including a covariate in a\nmodel guarantees that an appropriate adjustment has been made.  The\neffectiveness of the adjustment depends on whether the model design is\nappropriate, for instance whether appropriate interaction terms have\nbeen included.  
However, it's certainly the case that if you {\\bf don't}\ninclude the covariate in the model, you have {\\bf not} adjusted for\nit.\n\nThe other approach is to subsample the data so that the levels of the\ncovariates are approximately constant.  For example, here is a subset\nthat considers workers between the ages of 30 and 35 with 10\nto 12 years of education and working in the sales sector of the\neconomy:\n\\begin{Schunk}\n\\begin{Sinput}\n> small = subset(cps, age >= 30 & age <= 35 & \n                       educ >= 10 & educ <= 12 & \n                       sector == \"sales\" )\n\\end{Sinput}\n\\end{Schunk}\nThe choice of these particular levels of \\VN{age}, \\VN{educ}, and\n\\VN{sector} is arbitrary, but you need to choose some level if you\nwant to hold the covariates approximately constant.\n\nThe subset of the data can be used to fit a simple model:\n\\begin{Schunk}\n\\begin{Sinput}\n> mod4 = lm( wage ~ sex, data=small)\n> summary(mod4)\n\\end{Sinput}\n\\begin{Soutput}\n...\n            Estimate Std. Error t value Pr(>|t|)  \n(Intercept)    4.500      0.500     9.0     0.07 .\nsexM           4.500      0.866     5.2     0.12  \n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1 \n\nResidual standard error: 0.707 on 1 degrees of freedom\nMultiple R-squared: 0.964,\tAdjusted R-squared: 0.929 \nF-statistic:   27 on 1 and 1 DF,  p-value: 0.121 \n\\end{Soutput}\n\\end{Schunk}\nAt first glance, there might seem to be nothing wrong with this\napproach and, indeed, for very large data sets it can be effective.\nIn this case, however, there are only 3 cases that satisfy the various\ncriteria: two women and one man.\n\\begin{Schunk}\n\\begin{Sinput}\n> table( small$sex )\n\\end{Sinput}\n\\begin{Soutput}\nF M \n2 1 \n\\end{Soutput}\n\\end{Schunk}\n\nSo, the \\$4.50 wage difference between\nthe sexes depends entirely on the data from a single male!\n(Chapter \\ref{chap:confidence} describes how to assess the precision\nof model coefficients.  
This one works out to be $4.50 \\pm 11.00$ ---\nnot at all precise.)\n", "meta": {"hexsha": "ea2e8be420ba14e389a1bad7848fc81d2d8ac78f", "size": 5182, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ComputationalTechnique-Orig/TotalPartial/computer-total-partial.tex", "max_stars_repo_name": "dtkaplan/SM3", "max_stars_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-01T01:28:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T01:28:07.000Z", "max_issues_repo_path": "ComputationalTechnique-Orig/TotalPartial/computer-total-partial.tex", "max_issues_repo_name": "BriannaBarry/SM3", "max_issues_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ComputationalTechnique-Orig/TotalPartial/computer-total-partial.tex", "max_forks_repo_name": "BriannaBarry/SM3", "max_forks_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-02-14T05:22:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T12:42:15.000Z", "avg_line_length": 37.2805755396, "max_line_length": 74, "alphanum_fraction": 0.6748359707, "num_tokens": 1712, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7981867777396212, "lm_q1q2_score": 0.5907973890014022}}
{"text": "\\chapter{Summarising the Posterior Distribution}\\label{chapter:summaries}\nThe posterior distribution is the full answer to any Bayesian problem. It gives\na complete description of our state of knowledge and our uncertainty about the\nvalue(s) of unknown\nparameters. From the posterior distribution, we can calculate any probability\nwe want. For example, if we had a posterior distribution $p(\\theta|x)$ and we\nwanted to know the probability that $\\theta$ is greater than or equal to 100, we could do:\n\\begin{eqnarray}\nP(\\theta \\geq 100 | x) &=& \\int_{100}^\\infty p(\\theta | x) \\, d\\theta\n\\end{eqnarray}\nor\n\\begin{eqnarray}\nP(\\theta \\geq 100 | x) &=& \\sum_{100}^\\infty p(\\theta | x)\n\\end{eqnarray}\ndepending on whether the set of possible $\\theta$ values is continuous or\ndiscrete. We could also work out the probability of anything else.\nHowever, the posterior distribution is sometimes too much\ninformation for us to think about easily. Maybe a giant list of $\\theta$\nvalues and probabilities isn't easy to digest. Sometimes,\nwe need to {\\it summarise} the posterior distribution to help us\ncommunicate our results with others. A giant Bayes' Box (or a million MCMC\nsamples of the parameter, we'll see that later), might technically\ncontain everything we want, but it's not easy to talk about.\n\nFor example, say you were trying to estimate\na parameter, and a colleague asked you to state your uncertainty about the\nparameter. Well, your posterior distribution might be complicated. It might\nhave bumps and wiggles in it, or some other kind of structure. If there were\ntwo or more unknown parameters, there might be dependence in the posterior\ndistribution. In some cases there might even me multiple separate peaks!\nFigure~\\ref{fig:complicated_posterior} shows an example of what a complicated\nposterior distribution might look like. If this was your result, your colleague\nmight not care about all the little wiggles in this plot. They just want to know\nthe ``big picture'' of your results.\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.6]{Figures/complicated_posterior.pdf}\n\\caption{\\it A complicated posterior distribution. When communicating with others,\nit is often useful to summarise the posterior distribution with a few numbers. In\nthis case, something like ``the parameter = $5 \\pm 1$'' might be a useful summary.\n\\label{fig:complicated_posterior}}\n\\end{center}\n\\end{figure}\n\nThe idea of summarising the posterior distribution is very closely related to\nthe idea of summarising a data set, which you probably encountered when you\nstudied descriptive statistics.\n\\begin{framed}\n{\\bf In descriptive statistics, you often make summaries of a complex data set\n(e.g. the mean and the standard deviation) so that you can communicate about\nthe data set in a concise way. 
In Bayesian statistics, you often do a similar\nthing, but instead of giving a concise description of the {\\it data}, you give a\nconcise description of the {\\it posterior distribution}.}\n\\end{framed}\n\n\\section{Point Estimates}\nA ``point estimate'' refers to a single number guess for the value of a parameter.\nIf you have several parameters, a point estimate would be a single guess for the\nvalue of each parameter (like a single point in a multidimensional space).\nIf you look at the\nposterior distribution plotted in Figure~\\ref{fig:complicated_posterior}, you\ncan see that the true value of the parameter is probably somewhere around 5,\nbut with some uncertainty. If you were to provide a single number as a guess of\nthe parameter, you would probably say something close to 5. In classical statistics, a\nsingle number guess is called an ``estimate'', and a rule for generating such\nguesses is called an ``estimator''. Estimates are usually written by putting a\nhat over the name of the parameter. So, by looking at the plot of the\nposterior, you could give an estimate like this:\n\\begin{eqnarray}\n\\hat{\\theta} = 5.\n\\end{eqnarray}\nBut there are better things you could do than just looking at the plot, and you've\nprobably learnt some of them\nin previous statistics courses. Here are three methods you could use to\nchoose a point estimate using the posterior distribution: the posterior mean\n(the expectation value of the parameter), the posterior median\n(the value that divides the probability\nin half), and the posterior mode (the value where the posterior distribution has its\npeak). In our illustrative example, the values of these three point estimates\nare:\n\\begin{eqnarray}\n\\hat{\\theta} &=& 4.988 \\textnormal{ (the posterior mean)}\\\\\n\\hat{\\theta} &=& 4.924 \\textnormal{ (the posterior median)}\\\\\n\\hat{\\theta} &=& 4.996 \\textnormal{ (the posterior mode)}\n\\end{eqnarray}\nIn this example, there's not much of a difference between these three methods.\nBut in other situations, they can be quite different (this usually happens if\nthe posterior distribution is skewed, or has multiple modes; you may notice a\nstrong analogy between this topic and descriptive statistics). Is there a way\nto say which one is the {\\it best}? It turns out there is, but that\ndepends on what you mean by ``best''.\n\nBefore we move on to the formal ways of deciding what constitutes a good estimate, I would\nlike to mention a very common method that is easy to use. If the posterior distribution\nlooks even vaguely like a normal distribution, it is common to summarise it like\nthis:\n\\begin{eqnarray}\n\\theta = \\textnormal{posterior mean }\\pm\\textnormal{posterior standard deviation}.\n\\end{eqnarray}\nI use this kind of summary frequently in my own research.\n\n\\subsection{A Very Brief Introduction to Decision Theory}\nDecision theory is a very important topic. In this course we will use a\n{\\it tiny} amount of it, just enough to solve the problem of ``which point\nestimate is best?''. If you think about it, this is a bit of a weird question.\nObviously, the best point estimate is the true value. Of course it is, how could it\nbe otherwise? Our only problem is that we can't actually implement this suggestion.\nWe don't know the true value. We only have the posterior distribution\n(which is based on all the evidence we have), and we have to do \nthe best we can with our incomplete information. 
To think about which decision is best,\nthe first thing we should think\nabout is which decisions are {\\it possible}. For estimating a single parameter, any\nreal number is a possible guess.\n\nThe key idea in decision theory is the concept of {\\it utility}, and the related\nconcept of {\\it loss} (loss is just negative utility). Utility is\na numerical measure of how good it would be if a certain outcome came true.\nConversely, loss is a measure of how bad it would be if a certain outcome\ncame true.\nUtilities are often subjective (not unlike prior probabilities), but in some\napplications utility can be more concrete. For example, in betting or investment\ndecisions the utility can be measured in dollars. The problem with utility is\nthat we have uncertainty about what is going to happen, or about what is true,\nso we can't just choose the decision that gives us the greatest utility. Instead\nwe will use our posterior probabilities and choose the decision that gives us\nthe maximum possible {\\it expected value} of the utility.\n\nImagine we were estimating a parameter $\\theta$ and we wanted to give a\npoint estimate $\\hat{\\theta}$. One idea for what the utility or loss might be is the\n{\\it quadratic} loss function, which is given by\n\\begin{eqnarray}\nL(\\theta, \\hat{\\theta}) = \\left(\\hat{\\theta} - \\theta\\right)^2.\n\\end{eqnarray}\nThe expression inside the parentheses is the difference between our point estimate\nand the true value. This formula says that if our point estimate is off by 2,\nthat is four times worse than if we were off by 1. If we were off by 10, that is\n100 times worse than if we were off by 1, due to the squaring in the quadratic\nloss function formula.\n\nIt turns out (we will prove this below) that {\\it if the loss function is\nquadratic, the best estimate you can give is the posterior mean}.\nHere is the proof. The expected value of the loss is\n\\begin{eqnarray}\n\\mathds{E}\\left[L(\\theta, \\hat{\\theta})\\right] =\n\\int p(\\theta|x)(\\hat{\\theta} - \\theta)^2 \\, d\\theta.\n\\end{eqnarray}\nSince we are summing (integrating) over all possible true $\\theta$ values, \nthe expected loss is only a function of our estimate $\\hat{\\theta}$. To minimise a function\nof one variable, you differentiate it and then set the derivative to zero.\nThe derivative is\n\\begin{eqnarray}\n\\frac{d}{d\\hat{\\theta}}\\mathds{E}\\left[L(\\theta, \\hat{\\theta})\\right] &=&\n\\int p(\\theta|x)\\frac{d}{d\\hat{\\theta}}(\\hat{\\theta} - \\theta)^2 \\, d\\theta \\\\\n&=& \\int p(\\theta|x)2(\\hat{\\theta} - \\theta) \\, d\\theta.\n\\end{eqnarray}\nSetting this equal to zero and then solving for $\\hat{\\theta}$ gives the final\nresult:\n\\begin{eqnarray}\n\\hat{\\theta} &=& \\int \\theta p(\\theta|x) \\, d\\theta,\n\\end{eqnarray}\nwhich is the posterior mean. Some people call the posterior mean the ``Bayes\nEstimate'' for this reason. I don't like that term because I don't think\npoint estimates are really Bayesian.\nThe actual output of a Bayesian analysis is the posterior distribution.\n\nNote that I didn't verify that $\\hat{\\theta}$ actually minimises the expected loss,\nbecause setting the derivative to zero would also find a maximum. To make sure\nit really does minimise the expected loss, you can calculate the second\nderivative and verify that it is\npositive. But that's not really needed. 
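(If you do want the check: the second derivative is\n\\begin{eqnarray}\n\\frac{d^2}{d\\hat{\\theta}^2}\\mathds{E}\\left[L(\\theta, \\hat{\\theta})\\right] &=& \\int p(\\theta|x) \\, 2 \\, d\\theta \\;=\\; 2,\n\\end{eqnarray}\nwhich is positive no matter what the posterior distribution is, so the turning point really is a minimum.) 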
It would be pretty bizarre if the\nposterior mean was the {\\it worst} estimate!\n\n\\subsection{Absolute Loss}\nSometimes, the quadratic loss/utility is not a reasonable model\nfor the consequences of an incorrect estimate. Another plausible form for the loss\nfunction is the {\\it absolute} loss. This looks like:\n\\begin{eqnarray}\nL(\\theta, \\hat{\\theta}) &=& |\\hat{\\theta} - \\theta|.\n\\end{eqnarray}\nWith this assumption, the ``badness'' of an incorrect estimate is proportional\nto how far the estimate is from the true value. If the estimate is twice as\nfar from the true value, it is twice as bad. We will not prove it (although you\nare welcome to derive this yourself), but for this loss function\nthe best estimate is the posterior median, which is the value of $\\hat{\\theta}$\nfor which $P(\\theta \\leq \\hat{\\theta}) = P(\\theta > \\hat{\\theta}) = 0.5$.\n\n\\subsection{All-or-nothing Loss}\nThe third kind of loss function we will look at is the ``all-or-nothing'' loss,\nalso sometimes called {\\it 0-1 loss}.\nSometimes, you may need your estimate to be\ncompletely correct, and if it isn't correct, then it is irrelevant how far your\nestimate was from the true value. All incorrect estimates are equally bad.\nThe all-or-nothing loss looks like:\n\\begin{eqnarray}\nL(\\theta, \\hat{\\theta}) = \\left\\{\n\\begin{array}{lr}\n0, & \\hat{\\theta} = \\theta\\\\\n1, & \\textnormal{otherwise.}\n\\end{array}\n\\right.\n\\end{eqnarray}\n\nIf you were in this situation you would want to make your chances of being\nexactly right as high as\npossible, which implies you should simply choose the most probable value\nof $\\theta$ as your point estimate $\\hat{\\theta}$. That is, the appropriate\nestimate is the posterior mode. This intuition is correct: with all-or-nothing loss,\nthe best estimate is the posterior mode. The three loss functions we consider\nin STATS 331 are shown in Figure~\\ref{fig:utility}.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.5]{Figures/utility.pdf}\n\\caption{\\it Three kinds of loss function, which measure how bad it is for our\npoint estimate $\\hat{\\theta}$ to be different from the true value of the parameter\n$\\theta$. Note that the all-or-nothing loss has a small amount of width in this\nplot, just so that we can clearly see the spike at\n$\\hat{\\theta} - \\theta = 0$.\\label{fig:utility}}\n\\end{center}\n\\end{figure}\n\n\\subsection{Invariance of Decisions}\nYou may be wondering about the definitions of our loss functions. For example,\nwe defined the quadratic loss as $(\\hat{\\theta} - \\theta)^2$, but what if we\ndefined it as $3(\\hat{\\theta} - \\theta)^2 + 5$ instead? Would our\nbest decision change? Luckily, the answer is no. The decision (estimate) which\nminimises the expected value of a loss function $L$ also minimises the expected\nvalue of a different loss function $aL + b$, where $a$ is any positive number\nand $b$ is any other number.\nFor the mathematicians, the optimal decision is invariant under positive affine\ntransformations of the utility or loss function. Phew!\n\n\\subsection{Computing Point Estimates from a Bayes' Box}\nWe have just discussed three different point estimates, and under what\ncircumstances we can consider them to be the best possible estimate we could\nmake. Now, we will look at how to actually obtain the point estimates.\nThe posterior mean is straightforward. It's the expectation value of the parameter\nusing the posterior distribution. 
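Since a Bayes' Box describes a discrete distribution, this expectation is just a probability-weighted sum over the parameter values listed in the box (writing $\\theta_i$ for those values):\n\\begin{eqnarray}\n\\mathds{E}(\\theta | x) &=& \\sum_i \\theta_i \\, p(\\theta_i | x).\n\\end{eqnarray}\n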
In R, the code is:\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\npost_mean = sum(theta*post)\n\\end{minted}\nYou should also know how to compute this manually from a Bayes' Box, using a\ncalculator.\n\nThe posterior mode is also fairly straightforward. First, we can find the\nhighest probability in the Bayes' Box. Then we find the corresponding parameter\nvalue.\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\nhighest_probability = max(post)\npost_mode = theta[post == highest_probability]\n\\end{minted}\nIn the case of a tie, {\\tt post\\_mode} might be a vector, indicating\nthat there isn't a single mode.\n\nThe posterior median is a little harder. We need to find the $\\theta$ value\nwhich has 50\\% of the probability to the left and 50\\% of the probability to the\nright. Note that this isn't precisely defined in some cases, particularly with\ndiscrete distributions. For example, if $\\theta$ could be 1, 2, or 3, and the\nprobabilities of these were 0.3, 0.6, and 0.1, then what is the median? It is\nnot entirely clear. However, if there are a large number of possibilities then\nthe definition becomes more clear.\n\nTo calculate the posterior median in R, we need to use the cumulative distribution\nwhich is defined as $F(t) = P(\\theta \\leq t)$. If we then find the value of $t$\nwhere $F(t) = 0.5$, we have found the posterior median. This isn't always\npossible but we can always find the value of $t$ which makes $F(t)$ very close to\n0.5.\nTo obtain the cumulative distribution\nin R you can use the {\\tt cumsum} function, which calculates the cumulative sum\nof a vector. The posterior vector contains the probabilities of $\\theta$ equalling\ncertain values. If we want the probability that $\\theta$ is less than or\nequal to a certain value, we sum all the probabilities up to and including that value. The\ncumulative sum function achieves this. Here is the code for calculating the\nposterior median:\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\nF = cumsum(post)\ndist = abs(F - 0.5) # Distance of the F-values from 0.5\npost_median = theta[dist == min(dist)]\n\\end{minted}\nNote that this may also produce more than one result. Like the mode, the\nposterior median is not always uniquely defined.\n\n\\subsection{Computing Point Estimates from Samples}\nWhen we use a Bayes' Box (or the equivalent R commands which represent the\ncolumns of a Bayes' Box as vectors), we end up with a vector of possible parameter\nvalues and another vector containing the posterior distribution.\nWhen we use MCMC and JAGS, the output is different.\nWe will only have a vector of parameter\nvalues, without corresponding probabilities. The vector of parameter values is\nmeant to be a random sample of values drawn from the posterior distribution.\nIt's like saying ``here are a bunch of guesses for the parameter'', and any\nregion where there are a lot of guesses is considered to be a region with high\nprobability.\n\nWhen we have samples instead of an exhaustive list of parameter values and\nprobabilities, the methods for computing the summaries are different. 
For a\nparameter called $\\theta$, the methods for computing the summaries are given\nbelow.\n\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\n# Posterior mean using samples\npost_mean = mean(theta)\n# Posterior mode using samples\n# post_mode = ??? (this can't be done easily with samples!)\n# If you have a really large number of samples,\n# visually finding the peak of\n# a histogram can work.\n# Posterior median using samples\nsorted = sort(theta)\n# round() makes sure the index is a whole number\npost_median = sorted[round(0.5*length(theta))]\n\\end{minted}\n\n\\section{Credible Intervals}\nCredible intervals are another useful kind of summary. They are used to make\nstatements like ``There is a 95\\% probability the parameter is between\n100 and 150''. The basic idea is to\nuse the posterior distribution to find an interval $[a, b]$ such that\n\\begin{eqnarray}\nP(a \\leq \\theta \\leq b | x) &=& \\alpha\n\\end{eqnarray}\nwhere $\\alpha$ is some pre-defined probability. 95\\% seems to be the\nmost popular choice. An example of a 95\\% credible interval is given in\nFigure~\\ref{fig:credible_interval}.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.5]{Figures/credible_interval.pdf}\n\\caption{\\it A central 95\\% credible interval is defined as an interval that\ncontains 95\\% of the posterior probability, while having 2.5\\% of the probability\nabove the upper limit and 2.5\\% of the probability below the lower limit. The\ncredible interval is formed by finding the edges of the grey region. In this\ncase the credible interval is $[3.310, 7.056]$.\n\\label{fig:credible_interval}}\n\\end{center}\n\\end{figure}\nNote that the interval shown in Figure~\\ref{fig:credible_interval} is not the\nonly possible interval that would contain 95\\% of the probability. However, to\nmake the notion of a credible interval precise, we usually use a {\\it central}\ncredible interval. A central credible interval containing an amount of probability\n$\\alpha$ will leave $(1-\\alpha)/2$ of the probability to its left and\nthe same amount $(1-\\alpha)/2$ of the probability to its right.\n\n\\subsection{Computing Credible Intervals from a Bayes' Box}\nThe method for computing credible intervals is closely related to the method\nfor computing the posterior median. With the median, we found the value of\n$\\theta$ which has 50\\% of the posterior probability to its left and 50\\% to its\nright. To find the lower end of a 95\\% credible interval, we find the $\\theta$\nvalue that has 2.5\\% of the probability to its left. To find the upper end we\nfind the value of $\\theta$ that has 2.5\\% of the posterior probability to its\nright, or 97.5\\% to the left.\n\n\\subsection{Computing Credible Intervals from Samples}\nIf you have used MCMC and have obtained random samples from the posterior\ndistribution, you can find a credible interval in a similar way to how you would\nfind the posterior median. That is, instead of finding the 0.5 quantile of the\nposterior distribution, you would find the 0.025 quantile and the 0.975 quantile\n(if you wanted a central 95\\% credible interval). 
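In R, assuming the vector {\\tt theta} contains the posterior samples, a minimal sketch (in the same style as the posterior median code above) is:\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\n# Central 95% credible interval using samples\n# (assumes a reasonably large number of samples)\nsorted = sort(theta)\nn = length(theta)\nci_lower = sorted[round(0.025*n)]\nci_upper = sorted[round(0.975*n)]\n\\end{minted}\n\n\\section{Confidence Intervals}\nIn previous stats courses you have probably come across the concept of a\n{\\it confidence interval}. 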
A confidence interval is a concept in classical\nstatistics that is somewhat similar to a credible interval in Bayesian statistics.\nWhen people calculate confidence intervals, they usually want to say they are\n95\\% sure that the parameter is in that interval,\ngiven the data. This is what Bayesian credible intervals do, but {\\it it is not\nwhat classical confidence intervals do}!\n\nLuckily, a lot of the time, the classical and the Bayesian methods for making\nintervals will actually give the same interval. But this isn't always the case!\nIn lectures we will study an example (taken from an Ed Jaynes paper from the\n70s)\nwhere the Bayesian credible interval and\nthe classical confidence interval give completely different results. The result\nis shown in Figure~\\ref{fig:jaynes}. The key thing to note is the classical\nconfidence interval lies entirely in a region where we are certain\n(from the data) that $\\theta$ cannot possibly be!\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.5]{Figures/jaynes.pdf}\n\\caption{\\it An example of a Bayesian credible interval and a frequentist confidence\ninterval applied to a particular problem where they give different answers.\nThe posterior distribution in blue shows that the parameter $\\theta$ is probably\nsomewhere between 11 and 12, and values above 12 are completely impossible.\nHowever, the entire frequentist confidence interval lies above $\\theta=12$.\n\\label{fig:jaynes}}\n\\end{center}\n\\end{figure}\n\n", "meta": {"hexsha": "5e1cf71d90853adeb856ca6cc465ec167760373a", "size": 20788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "summaries.tex", "max_stars_repo_name": "xulinpan/stat331", "max_stars_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 55, "max_stars_repo_stars_event_min_datetime": "2015-03-09T18:03:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-25T03:36:54.000Z", "max_issues_repo_path": "summaries.tex", "max_issues_repo_name": "xulinpan/stat331", "max_issues_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-07-07T05:00:32.000Z", "max_issues_repo_issues_event_max_datetime": "2015-07-10T08:48:27.000Z", "max_forks_repo_path": "summaries.tex", "max_forks_repo_name": "xulinpan/stat331", "max_forks_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2015-07-29T14:34:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-04T20:04:47.000Z", "avg_line_length": 51.3283950617, "max_line_length": 91, "alphanum_fraction": 0.7635174139, "num_tokens": 5085, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7981867777396212, "lm_q1q2_score": 0.5907973890014022}}
{"text": "\\section{Mathematical Appendix}\\label{appx: b} \nTo simplify notation for \\autoref{appx: b}, I denote the continuation value at budget $b$ by: \n\\begin{equation*}\n    \\begin{aligned}\n        &K_{b}:=\\alpha \\,\\mathbb{E}_{\\theta}\\left[V_w(\\theta', b)\\right] \n    \\end{aligned} \n\\end{equation*}\n\n\\subsection{Proof for \\autoref{prop:piecewiseV} and \\autoref{cor:optpolicy}} \n\\begin{proof}\n    Fix some $b\\in\\mathcal{B}_w$ and, starting from \\autoref{eq:full bellman}, consider the following:\n    \\begin{equation*}\n        \\begin{aligned} \n            V_w(\\theta,b) \\;&=\\;\\max\\left\\{\\,\\overline{\\mu} \\, u(\\theta) +\\alpha \\,\\mathbb{E}_\\theta \\Big[V_w(\\theta', b-1)\\Big]\\,,\\; \\alpha\\,\\mathbb{E}_\\theta \\Big[ V_w(\\theta', b)\\Big]\\,\\right\\}\\\\\n            &=\\; \\max\\left\\{\\,\\overline{\\mu} \\, u(\\theta) + K_{b-1} \\,,\\; K_b \\,\\right\\}\\\\\n            &=\\; K_{b-1} + \\max\\left\\{\\,\\overline{\\mu} \\, u(\\theta) \\,,\\; K_b - K_{b-1}\\,\\right\\}\n        \\end{aligned}\n    \\end{equation*}\n    \n    First, note that the difference between any two consecutive continuation values $K_b$ and $K_{b-1}$ must necesarily lie between 0 and $\\overline{\\mu}u(1)$. \n    This is true since the value function denotes the expected lifetime sum of discounted payoffs, and an additional right-swipe can provide an agent with, at most, an additional expected payoff of $\\overline{\\mu}u(1)$ and, at least, an additional payoff of $0$.\n    Furthermore, since $u(\\theta)$ is, by assumption, continuous and increasing over $\\Theta$ (and we assume $\\overline\\mu>0$ to prune out degenerate equilibria), then, by the Intermediate Value Theorem, there exists a unique root, $\\widetilde\\omega_b$, satisfying:\n    \\begin{equation*} \n            \\overline\\mu u(\\widetilde\\omega_b) = K_b-K_{b-1}   \n    \\end{equation*}\n    Consider now two cases. First, if $\\theta\\leq\\widetilde\\omega_b$, then:\n    \\begin{equation*}\n        \\begin{aligned} \n            V_w(\\theta,b) \\;&=\\; K_{b-1} + \\max\\left\\{\\,\\overline{\\mu} \\, u(\\theta) \\,,\\; K_b - K_{b-1}\\,\\right\\}\\\\\n            &=\\; K_{b-1} + K_b - K_{b-1}\\\\\n            &=\\; K_b.\n        \\end{aligned}\n    \\end{equation*}\n    Analogously, if $\\theta\\leq\\widetilde\\omega_b$, then:\n    \\begin{equation*}\n        V_w(\\theta,b) = \\overline{\\mu} \\, u(\\theta) + K_{b-1}. \n    \\end{equation*} \n\n    Thus, by considering the above function over the intervals $[0, \\widetilde\\omega_b]$ and $[\\widetilde\\omega_b, 1]$ separately, and substituting back the expressions for $K_b, K_{b-1}$, we conclude that:\n    \\begin{equation*}\n    \\begin{split}\n        V_w(\\theta,b)=\\begin{cases}\n            \\overline\\mu u(\\theta) +\\alpha \\,\\mathbb{E}_{\\theta}\\Big[V_w(\\theta', b-1)\\Big],& \\theta \\geq \\widetilde \\omega_b \\\\[10pt]\n            \\alpha \\,\\mathbb{E}_{\\theta}\\Big[V_w(\\theta', b)\\Big],& \\theta\\leq\\widetilde \\omega_b\n        \\end{cases} \n    \\end{split}\n    \\end{equation*} \n\n    Furthermore, \\autoref{cor:optpolicy} follows trivially from the above by considering a cutoff policy over the above intervals such that $V_w(\\theta,b)$ is attained. 
\n\\begin{comment} \n    When a woman with budget $b$ is presented a candidate with attractiveness $\\theta \\geq \\widetilde\\omega_b$ and she swipes right, her expected lifetime sum of discounted payoffs is:\n    \\begin{equation*}\n        \\begin{split}\n            \\overline\\mu u(\\theta) +\\alpha \\,\\mathbb{E}_{\\theta}\\Big[V_w(\\theta', b-1)\\Big]\\\\\n            = V_w(\\theta,b)\n        \\end{split}\n    \\end{equation*}\n    Alternatively, when presented a candidate with attractiveness $\\theta<\\widetilde\\omega_b$ and she swipes left:\n    \\begin{equation*}\n        \\begin{split}\n            \\alpha \\,\\mathbb{E}_{\\theta}\\Big[V_w(\\theta', b)\\Big]\\\\\n            = V_w(\\theta,b)\n        \\end{split}\n    \\end{equation*} \n\\end{comment}\n\\end{proof}\n\n\\subsection{Proof for \\autoref{prop:recurrence relation}} \n\\begin{proof} \nFix some $b\\in\\mathcal{B}_w$ and consider the result presented by \\autoref{prop:piecewiseV}, which guarantees the existence and uniqueness of some $\\widetilde \\omega_b$ satisfying:\n\\begin{align}\n    \\begin{split}\\label{eq:A.1}\n        V_w(\\theta, b)&=\\begin{cases} \n            \\overline\\mu u(\\theta) + K_{b-1},& \\theta> \\widetilde \\omega_b \\\\\n            K_b,& \\theta\\leq\\widetilde \\omega_b\n        \\end{cases}\n    \\end{split}\\\\ \n    \\begin{split}\\label{eq:A.2}\n        \\overline\\mu u(\\widetilde\\omega_b) &= K_b-K_{b-1}\n    \\end{split} \n\\end{align}  \n\nStarting out with \\autoref{eq:A.2} and expanding out the expectation operator, we can use \\eqref{eq:A.1} to substitute in the piecewise definitions of $V_w(\\theta,b)$ over the appropriate intervals: \n\\begin{equation}\\label{eq:A.3}\n    \\begin{split}\n        \\overline\\mu u(\\widetilde\\omega_b) &= K_b-K_{b-1}\\\\\n                                           &= \\alpha \\,\\int^1_0 V_w(\\theta',b)-V_w(\\theta',b-1)\\,dF_m(\\theta')\\\\\n                                           &=\\alpha \\int^{\\widetilde\\omega_b}_0 K_b\\,dF_m(\\theta') \\;+\\; \\alpha \\int^1_{\\widetilde\\omega_b}\\,\\overline\\mu u(\\theta') + K_{b-1}\\,dF_m(\\theta')\\\\ \n                                           & \\quad -\\,\\alpha \\int^{\\widetilde\\omega_{b-1}}_0 K_{b-1}\\,dF_m(\\theta') \\;-\\; \\alpha \\int^1_{\\widetilde\\omega_{b-1}} \\overline\\mu u(\\theta') + K_{b-2}\\,dF_m(\\theta')\n    \\end{split}\n\\end{equation}\n\nFurthermore, \\autoref{eq:A.2} implies that:\n\n$$\n\\overline\\mu u(\\widetilde\\omega_b) +K_{b-1}= K_b\n$$\n\n$$\n\\overline\\mu u(\\widetilde\\omega_{b-1}) +K_{b-2}=K_{b-1}\n$$\n\nThen, by substituting these expressions into \\eqref{eq:A.3}, we arrive at \\eqref{eq:A.4}: \n\\begin{equation}\\label{eq:A.4}\n    \\begin{split}\n        \\overline\\mu u(\\widetilde\\omega_b) &=\\alpha \\int^{\\widetilde\\omega_b}_0 \\overline\\mu u(\\widetilde\\omega_b) +K_{b-1}\\,dF_m(\\theta') \\;+\\; \\alpha \\int^1_{\\widetilde\\omega_b} \\,\\overline\\mu u(\\theta') + K_{b-1}\\,dF_m(\\theta')\\\\ \n                                           & \\quad -\\,\\alpha \\int^{\\widetilde\\omega_{b-1}}_0 K_{b-1}\\,dF_m(\\theta') \\;-\\; \\alpha \\int^1_{\\widetilde\\omega_{b-1}} \\overline\\mu u(\\theta') + K_{b-1}-\\overline\\mu u(\\widetilde\\omega_{b-1})\\,dF_m(\\theta')\n    \\end{split}\n\\end{equation}\n\nWith some algebra, this simplifies down to the recurrence relation in \\autoref{eq:recurrence relation}:  \n\\begin{equation}\n    u(\\widetilde\\omega_b)=\\alpha   u(\\widetilde\\omega_b)F_m(\\widetilde\\omega_b) \\;+\\; \\alpha  
u(\\widetilde\\omega_{b-1})\\Big[1  - F_m(\\widetilde\\omega_{b-1})\\Big] \\;+\\; \\alpha\\int^{\\widetilde\\omega_{b-1}}_{\\widetilde\\omega_b} u(\\theta') \\,dF_m(\\theta') \n\\end{equation}\n\nFurthermore, to obtain the initial condition for the above, note that the right-swiping budget constraint imposes $V_w(\\theta,0)=0$ for all $\\theta\\in\\Theta$. Then, \\eqref{eq:A.1} and \\eqref{eq:A.2} simplify to: \n\\begin{align}\n    \\begin{split}\\label{eq:A.5} \n        V_w(\\theta, 1)&=\\begin{cases} \n            \\overline\\mu u(\\theta),& \\theta> \\widetilde \\omega_1 \\\\  \n            K_1,& \\theta\\leq\\widetilde \\omega_1\n        \\end{cases}\n    \\end{split}\\\\ \n    \\begin{split}\\label{eq:A.6}\n        \\overline\\mu u(\\widetilde\\omega_1) &= K_1 \n    \\end{split} \n\\end{align}\n\nBeginning with \\autoref{eq:A.6}, we simplify until arriving at \\autoref{eq:initial condition}: \n\\begin{equation*} \n    \\begin{split}\n        \\overline\\mu u(\\widetilde\\omega_1) &= \\alpha \\, \\mathbb{E}_\\theta\\Big[\\,V_w(\\theta',1)\\,\\Big]\\\\\n        &= \\alpha \\,\\int^{\\widetilde\\omega_1}_0\\,K_1\\,dF_m(\\theta') + \\alpha \\,\\int_{\\widetilde\\omega_1}^1 \\overline\\mu u(\\theta')\\,dF_m(\\theta')\\\\\n        &= \\alpha \\,\\int^{\\widetilde\\omega_1}_0 \\overline\\mu u(\\widetilde\\omega_1)\\,dF_m(\\theta') + \\alpha \\,\\int_{\\widetilde\\omega_1}^1 \\overline\\mu u(\\theta')\\,dF_m(\\theta')\\\\\n        &= \\alpha \\overline\\mu u(\\widetilde\\omega_1)F_m(\\widetilde\\omega_1) + \\alpha \\,\\int_{\\widetilde\\omega_1}^1 \\overline\\mu u(\\theta')\\,dF_m(\\theta')\\\\\n        \\implies u(\\widetilde\\omega_1) &= \\alpha u(\\widetilde\\omega_1)F_m(\\widetilde\\omega_1) + \\alpha \\,\\int_{\\widetilde\\omega_1}^1 u(\\theta')\\,dF_m(\\theta') \n    \\end{split}\n\\end{equation*}  \n\n%To conclude the proof, note that the existence and uniqueness of some $\\widetilde\\omega_b$ that satisfies \\ref{eq:A.2} is guaranteed given the assumptions on $u(\\theta)$ being continuous and strictly increasing. Since the difference between any two consecutive continuation values must lie strictly between 0 and $\\overline{\\mu}u(1)$, then, by the Intermediate Value Theorem, there exists one and only one root $\\widetilde\\omega_b$ satisfying \\ref{eq:A.2} and, by extension, the above recurrence relation. 
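As a quick sanity check of \\autoref{eq:initial condition} (an illustration under assumed primitives, not part of the formal argument): take $u(\\theta)=\\theta$ and let $F_m$ be uniform on $[0,1]$. The initial condition then reads\n\\begin{equation*}\n    \\widetilde\\omega_1 = \\alpha\\,\\widetilde\\omega_1^{\\,2} + \\alpha\\int_{\\widetilde\\omega_1}^{1}\\theta'\\,d\\theta' = \\alpha\\,\\widetilde\\omega_1^{\\,2} + \\frac{\\alpha}{2}\\left(1-\\widetilde\\omega_1^{\\,2}\\right),\n\\end{equation*}\nso $\\alpha\\widetilde\\omega_1^{\\,2}-2\\widetilde\\omega_1+\\alpha=0$, whose root in $[0,1]$ is $\\widetilde\\omega_1=\\big(1-\\sqrt{1-\\alpha^2}\\big)/\\alpha$. As $\\alpha\\to0$ the cutoff tends to $0$ (an impatient agent swipes right on almost anyone), and as $\\alpha\\to1$ it tends to $1$, as one would expect. 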
\n\\end{proof}", "meta": {"hexsha": "eb44a211397f9b3766054ebdd8ded9e79ec95198", "size": 8453, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dissertation/appendices/ap-b.tex", "max_stars_repo_name": "patohdzs/project-tinder", "max_stars_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dissertation/appendices/ap-b.tex", "max_issues_repo_name": "patohdzs/project-tinder", "max_issues_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dissertation/appendices/ap-b.tex", "max_forks_repo_name": "patohdzs/project-tinder", "max_forks_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.9503546099, "max_line_length": 508, "alphanum_fraction": 0.6304270673, "num_tokens": 2941, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5907973890014021}}
{"text": "% !TeX spellcheck = en_US\n\\documentclass[]{article}\n\n\\usepackage[natbib]{TiPi}\n\n\\title{Scaling to integer}\n\\author{\u00c9ric T.}\n\n\n\\newcommand*{\\Lag}{\\mathscr{L}}\n\\newcommand*{\\IntRange}[1]{[\\![#1]\\!]}\n\\newcommand*{\\half}{\\ensuremath{{}^1\\!/\\!{}_2}}\n\\renewcommand*{\\proxy}[1]{\\widetilde{#1}}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nWe discuss the best way to rescale floating-point data to integer values by an\naffine transform.\n\\end{abstract}\n\n\\section{Digitization}\n\nAssume that we have data values $\\V d = \\{d_1,d_2,\\ldots,d_m\\} \\in \\Reals^m$\nwhich we want to approximate as integers $\\V k = \\{k_1,k_2,\\ldots,k_m\\} \\in\n\\IntRange{k_\\Tag{min},k_\\Tag{max}}^m\\subset\\Integers^m$ through the affine transform:\n\\begin{equation}\n  \\label{eq:approx-data}\n  \\V d \\approx \\proxy{\\V d} = \\alpha\\,\\V k + \\beta\\,\\One \\, .\n\\end{equation}\nIf $\\alpha \\not= 0$, a possible approximate reciprocal formula to compute $\\V\nk$ is given by:\n\\begin{equation}\n  \\label{eq:digitization}\n  \\V k = \\Round{(\\V d - \\beta\\,\\One)/\\alpha} \\, ,\n\\end{equation}\nwhere $\\Round{\\cdot}: \\Reals\\mapsto\\Integers$ is applied elementwise and rounds\nits argument(s) to the nearest integer(s).  This function has the following\nproperties:\n\\begin{equation}\n  \\label{eq:rounding}\n  u - 1/2 < \\Round{u} \\le u + 1/2\n  \\quad\\Longleftrightarrow \\quad\n  \\Round{u} - 1/2 \\le u < \\Round{u} + 1/2 \\, ,\n\\end{equation}\nfor any $u\\in\\Reals$.\n\n\n\\section{Criteria for the affine parameters}\n\nA first suggestion is to find the parameters $\\alpha$ and $\\beta$ such that the\napproximation in Eq.~(\\ref{eq:approx-data}) yields the least worst error:\n\\begin{equation}\n  \\label{eq:criterion}\n  (\\estim{\\alpha},\\estim{\\beta})\n  = \\argmin_{\\alpha,\\beta} \\Norm[\\big]{\\V d - \\proxy{\\V d}}_{\\infty} \\, .\n\\end{equation}\nNote that we could have also required to optimize the criterion with respect to\nthe values of $\\V k$ but the problem would have been far more complex.  The\nassumed digitization formula~(\\ref{eq:digitization}) to compute $\\V k$ given\n$\\V d$, $\\alpha$ and $\\beta$ is only a possibility but it is simple and\nuniformly accurate.\n\nAnother point to consider is the stability of the transform when\nEq.~(\\ref{eq:approx-data}) and Eq.~(\\ref{eq:digitization}) are alternately\napplied several times with $\\alpha$ and $\\beta$ computed each time according to\nthe actual data range.  To avoid any drift, it may be advisable that the choice\nof $\\alpha$ and $\\beta$ ensures that some determined data value remains exactly\nrepresented.  This suggests to choose $\\beta$ to be a multiple of $\\alpha$ as\nthis implies that, for instance, a value of zero for the data will always be\nexactly represented.  \\oops{Study the successive transforms and check that no\nshrink occurs.} \n\nTo simplify the reasoning we assume in the sequel that $\\alpha \\ge 0$ and that\nthe data bounds:\n\\begin{align}\n   d_\\Tag{min} &\\bydef \\min_{i\\in\\IntRange{1,m}} d_i \\, , \\\\ \n   d_\\Tag{max} &\\bydef \\max_{i\\in\\IntRange{1,m}} d_i \\, , \n\\end{align}\nare both finite.  
We also note that if $d_\\Tag{max} = d_\\Tag{min}$, we can\nchoose $\\estim{\\alpha}$, $\\estim{\\beta}$ and $\\V k = \\Zero$ such that the worst\nerror is exactly zero\\footnote{ For instance, if $0 \\in\n\\IntRange{k_\\Tag{min},k_\\Tag{max}}$, taking $\\estim{\\alpha} = 1$,\n$\\estim{\\beta} = d_\\Tag{max} = d_\\Tag{min}$ and $\\V k = \\Zero$ yields\n$\\Norm[\\big]{\\V d - \\proxy{\\V d}}_{\\infty} = 0$; otherwise, if $0 \\not\\in\n\\IntRange{k_\\Tag{min},k_\\Tag{max}}$, taking $\\estim{\\alpha} = 0$,\n$\\estim{\\beta} = d_\\Tag{max} = d_\\Tag{min}$ and any $\\V k \\in\n\\IntRange{k_\\Tag{min},k_\\Tag{max}}^m$ also yields $\\Norm[\\big]{\\V d - \\proxy{\\V\nd}}_{\\infty} = 0$.}.  In what follows, we therefore consider that $d_\\Tag{max}\n> d_\\Tag{min}$.\n\n\n\\section{Choosing the scale}\n\nIf $\\alpha=0$ then $\\proxy{\\V d}$ given by Eq.~(\\ref{eq:approx-data}) does not\ndepend on $\\V k$ and the worst error is:\n\\begin{displaymath}\n  \\Norm[\\big]{\\V d - \\proxy{\\V d}}_{\\infty} = \\max\\Brace{\n    \\Abs{d_\\Tag{min} - \\beta}, \\Abs{d_\\Tag{max} - \\beta}\n  } \\qquad\\text{(when $\\alpha = 0$)} \\, .\n\\end{displaymath}\nThis error is obviously minimized when $\\beta$ is the central value of the data range:\n\\begin{displaymath}\n  \\estim{\\beta} = (d_\\Tag{min} + d_\\Tag{max})/2\n  \\qquad\\text{(when $\\estim{\\alpha} = 0$)} \\, .\n\\end{displaymath}\n\nWe now consider the other possibility and assume that $\\alpha > 0$.  For $\\V k$\ngiven by Eq.~(\\ref{eq:digitization}), the pointwise error is then given by:\n\\begin{displaymath}\n  \\V d - \\proxy{\\V d} = \\alpha\\,\\Paren[\\big]{\\V u - \\Round{\\V u}} \\, ,\n\\end{displaymath}\nwith $\\V u \\bydef (\\V d - \\beta\\,\\One)/\\alpha$.  Since, from\nEq.~(\\ref{eq:rounding}), $-1/2 \\le u - \\Round{u} < 1/2$, the worst error is\nthen:\n\\begin{equation}\n  \\label{eq:worst-error}\n  \\Norm[\\big]{\\V d - \\proxy{\\V d}}_{\\infty} = \\Abs{\\alpha}/2 = \\alpha/2 \\, ,\n\\end{equation}\nwhere the last equality follows from our assumption that $\\alpha \\ge 0$.  As a\nresult of the chosen digitization formula~(\\ref{eq:digitization}), the worst\nerror does not depend on $\\beta$ but solely on $\\alpha$.  According to\nEq.~(\\ref{eq:worst-error}), to have the least worst error, we should choose the\nsmallest $\\alpha$ (in magnitude) such that all $\\V k$ given by\nEq.~(\\ref{eq:digitization}) are in the range\n$\\IntRange{k_\\Tag{min},k_\\Tag{max}}$.  This corresponds to the intuition that\nthe smaller the discretization step $\\alpha$, the more accurate the resulting\ndigitization.\n\nWhatever $\\beta$, the smaller the magnitude of $\\alpha$ the larger is the\ninterval spanned by the values of $\\V k$ computed according to\nEq.~(\\ref{eq:digitization}), thus the smallest possible $\\alpha$ is such that\nthe bounds $k_\\Tag{min}$ and $k_\\Tag{max}$ are reached when the data span the\nrange $[d_\\Tag{min},d_\\Tag{max}]$.  Since the mapping $\\V d \\mapsto \\V k$\nimplemented by the digitization formula~(\\ref{eq:digitization}) is\nmonotonically increasing (because $\\alpha > 0$), the following constraints must\nhold:\n\\begin{align}\n   &\\left\\{\n   \\begin{array}{l}\n   \\Round{(d_\\Tag{min} - \\beta)/\\alpha} = k_\\Tag{min} \\\\[1ex]\n   \\Round{(d_\\Tag{max} - \\beta)/\\alpha} = k_\\Tag{max}\n   \\end{array}\\right. 
\\notag  \\\\\n   \\Longleftrightarrow \\quad\n   &\\left\\{\n   \\begin{array}{l}\n   k_\\Tag{min} - 1/2 \\le (d_\\Tag{min} - \\beta)/\\alpha < k_\\Tag{min} + 1/2 \\\\[1ex]\n   k_\\Tag{max} - 1/2 \\le (d_\\Tag{max} - \\beta)/\\alpha < k_\\Tag{max} + 1/2\n   \\end{array}\\right.\n   \\label{eq:bounds}\n\\end{align}\nTaking the difference between the last two pairs of inequalities yields:\n\\begin{displaymath}\n  k_\\Tag{max} - k_\\Tag{min} - 1 < (d_\\Tag{max} - d_\\Tag{min})/\\alpha <\n  k_\\Tag{max} - k_\\Tag{min} + 1 \\, .\n\\end{displaymath}\nAs we assumed that $d_\\Tag{max} > d_\\Tag{min}$, the above inequalities become:\n\\begin{displaymath}\n  \\frac{d_\\Tag{max} - d_\\Tag{min}}{k_\\Tag{max} - k_\\Tag{min} + 1}\n  < \\alpha <\n  \\frac{d_\\Tag{max} - d_\\Tag{min}}{k_\\Tag{max} - k_\\Tag{min} - 1} \\, .\n\\end{displaymath}\nThus:\n\\begin{equation}\n  \\label{eq:best-alpha}\n  \\boxed{\n    \\alpha = \\frac{d_\\Tag{max} - d_\\Tag{min}}\n    {k_\\Tag{max} - k_\\Tag{min} + \\eta}\n  } \\, ,\n\\end{equation}\nwith $\\eta \\in (-1,1)$.  The smallest possible $\\alpha$ corresponds to $\\eta =\n1$ but this is a strict lower bound.  For now, we keep the freedom to choose\n$\\eta \\in (-1,1)$ and consider how to determine the bias $\\beta$.\n\n\n\\section{Choosing the bias}\n\nThe inequalities in Eq.~(\\ref{eq:bounds}) can be combined to bound the value of\n$\\beta/\\alpha$:\n\\begin{displaymath}\n  \\max\\Brace{\\sigma_0, \\sigma_1} - 1/2\n  < \\beta/\\alpha \\le\n  \\min\\Brace{\\sigma_0, \\sigma_1} + 1/2 \\, ,\n\\end{displaymath}\nwith $\\sigma_0 = d_\\Tag{min}/\\alpha - k_\\Tag{min}$ and $\\sigma_1 =\nd_\\Tag{max}/\\alpha - k_\\Tag{max}$.  The difference $\\sigma_1 - \\sigma_0$ has a\nsimple expression for $\\alpha$ given by Eq.~(\\ref{eq:best-alpha}):\n\\begin{displaymath}\n  \\sigma_1 - \\sigma_0 =\n  (d_\\Tag{max} - d_\\Tag{min})/\\alpha - k_\\Tag{max} + k_\\Tag{min} = \\eta \\, ,\n\\end{displaymath}\nand putting it all together yields:\n\\begin{equation}\n  \\boxed{\n    \\gamma_0 < \\beta/\\alpha \\le \\gamma_1\n  } \\, ,\n  \\label{eq:beta-bounds}\n\\end{equation}\nwith:\n\\begin{align}\n  \\gamma_0 &= \\max\\Brace{\\sigma_0, \\sigma_1} - 1/2 \\notag \\\\\n  %&= \\sigma_0 + \\max\\Brace{0, \\eta} - 1/2 \\notag \\\\\n  &= \\sigma_0 + (\\eta)_{+} - 1/2\n   = \\sigma_1 - (\\eta)_{-} - 1/2 \\, , \\\\\n  \\gamma_1 &= \\min\\Brace{\\sigma_0, \\sigma_1} + 1/2 \\notag \\\\\n  %&= \\sigma_0 + \\min\\Brace{0, \\eta} + 1/2 \\notag \\\\\n  &= \\sigma_0 + (\\eta)_{-} + 1/2\n   = \\sigma_1 - (\\eta)_{+} + 1/2 \\, ,\n\\end{align}\nand where $(\\eta)_{+} = \\max\\Brace{0, \\eta}$ and $(\\eta)_{-} = \\min\\Brace{0,\n\\eta}$.  The range of possible values for $\\gamma = \\beta/\\alpha$ is the\nsemi-open interval $(\\gamma_0,\\gamma_1]$.  The width of the interval is:\n\\begin{displaymath}\n  \\gamma_1 - \\gamma_0 = 1 - (\\eta)_{+} + (\\eta)_{-} = 1 - \\Abs{\\eta} \\, .\n\\end{displaymath}\nSince $\\eta \\in (-1,1)$, $\\gamma_1 - \\gamma_0  \\in (0,1]$ so the semi-open\ninterval $(\\gamma_0,\\gamma_1]$ is always non-empty.  The center of the interval\nis:\n\\begin{displaymath}\n  \\gamma_\\Tag{cen} = (\\gamma_0 + \\gamma_1)/2 = \\sigma_0 + \\eta/2 = \\sigma_1 - \\eta/2 \\, .\n\\end{displaymath}\n\nIf $d_\\Tag{max} > d_\\Tag{min}$ and $k_\\Tag{max} > k_\\Tag{min}$, we have the\nflexibility to choose $\\eta$ in the range $(-1,1)$.   
We can either:\n\\begin{itemize}\n\\item Take $\\eta = 0$, then $\\gamma_1 - \\gamma_0 = 1$ and there is thus exactly\none integer value in the range  $(\\gamma_0,\\gamma_1]$ which is\n$\\Floor{\\gamma_1}$.  This lets us choose $\\gamma=\\beta/\\alpha$ to be an integer so\nthat a zero in the data is exactly represented in the digitized data.  To have\nan integer $\\gamma$, we can take $\\gamma = \\Floor{\\gamma_1}$ but noting that, when\n$\\eta=0$, we have:\n\\begin{equation}\n\t\\sigma_0 = \\sigma_1 = \\gamma_\\Tag{cen}\n\t= \\frac{d_\\Tag{min}\\,k_\\Tag{max} - d_\\Tag{max}\\,k_\\Tag{min}}\n\t       {d_\\Tag{max} - d_\\Tag{min}} \\, ,\n\t       \\label{eq:gamma-center}\n\\end{equation}\nwhich belongs to $(\\gamma_0,\\gamma_1]$ and thus $\\Round{\\gamma_\\Tag{cen}} =\n\\Floor{\\gamma_1}$.  A possibility is thus to take $\\eta=0$, and $\\gamma =\n\\Round{\\gamma_\\Tag{cen}}$ or $\\gamma = \\gamma_\\Tag{cen}$ depending on whether or\nnot we want to exactly represent specific data values such as zero.\n\n\\item Choose $\\eta \\rightarrow 1^-$ to minimize the worst error, which is\nslightly better than the previous case, but imposes $\\gamma = \\beta/\\alpha\n\\rightarrow \\gamma_1^-$ (or equivalently $\\gamma = \\beta/\\alpha \\rightarrow\n\\gamma_0^+$) which is not guaranteed to be an integer.  In practice, the value of\n$\\eta$ must be such that the denominator in Eq.~(\\ref{eq:best-alpha}) is as\nclose as possible but numerically strictly smaller than $k_\\Tag{max} -\nk_\\Tag{min} + 1$; this leads to taking $\\eta$ such that:\n\\begin{align}\n  &k_\\Tag{max} - k_\\Tag{min} + \\eta\n  = (k_\\Tag{max} - k_\\Tag{min} + 1)\\,(1 - \\varepsilon) \\notag \\\\\n  \\Longrightarrow\\quad& \\eta = 1 - (k_\\Tag{max} - k_\\Tag{min} + 1)\\,\\varepsilon \\, ,\n  \\label{eq:max-eta}\n\\end{align}\nwith $\\varepsilon$ the smallest value such that $1\\pm\\varepsilon$ is\nnumerically different from 1.  The value of $\\eta$ in Eq.~(\\ref{eq:max-eta}) is\nnonnegative if the number of digitization levels $k_\\Tag{max} - k_\\Tag{min} +\n1$ is at most $1/\\varepsilon$.  For a larger number of digitization levels, taking\n$\\eta=0$ is better.  
For 64-bit IEEE floating point values, $\\varepsilon =\n2^{-52}$ and thus using Eq.~(\\ref{eq:max-eta}) restricts us to digitizing to at most\n52-bit integers.\n\n\\item Taking $\\eta < 0$ is worse than the previous cases and so can be\ndisregarded.\n\\end{itemize}\n\nTo summarize, taking $\\eta = 0$ is only slightly worse than $\\eta\\rightarrow1^-$ in\nterms of least worst error but is more general (it applies to any number of\ndigitization levels) and offers more flexibility for choosing the bias $\\beta$;\nfor instance, we can have $\\beta$ a multiple of $\\alpha$.\n\n\n\\section{Stable transform}\n\nAs the computations only depend on the four given parameters, $k_\\Tag{min}$,\n$k_\\Tag{max}$, $d_\\Tag{min}$ and $d_\\Tag{max}$, a simple constraint to preserve\nthe stability of the resulting transforms can be guaranteed if the data bounds\nare exactly preserved, \\emph{i.e.}:\n\\begin{displaymath}\n  \\left\\{\n  \\begin{array}{l}\n    d_\\Tag{min} = \\alpha\\,k_0 + \\beta \\\\[1ex]\n    d_\\Tag{max} = \\alpha\\,k_1 + \\beta\n  \\end{array}\\right.\n  \\quad\\Longleftrightarrow \\quad\n  \\left\\{\n  \\begin{array}{l}\n    \\displaystyle\n    \\alpha = \\frac{d_\\Tag{max} - d_\\Tag{min}}{k_1 - k_0} \\\\[2ex]\n    \\displaystyle\n    \\beta = \\frac{d_\\Tag{min}\\,k_1 - d_\\Tag{max}\\,k_0}{k_1 - k_0}\n  \\end{array}\n  \\right.\n\\end{displaymath}\nfor some $(k_0,k_1) \\in \\IntRange{k_\\Tag{min},k_\\Tag{max}}^2$ and assuming that\n$k_0\\not=k_1$.  As discussed above, the smaller $\\alpha$, the more accurate the\ndigitization; thus $k_0 = k_\\Tag{min}$ and $k_1 = k_\\Tag{max}$ is the best\nchoice if we further impose that $k_0 < k_1$.   In that case the digitization\nparameters read:\n\\begin{equation}\n  \\left\\{\n  \\begin{array}{l}\n    \\displaystyle\n    \\alpha = \\frac{d_\\Tag{max} - d_\\Tag{min}}\n                  {k_\\Tag{max} - k_\\Tag{min}} \\\\[2ex]\n    \\displaystyle\n    \\beta = \\frac{d_\\Tag{min}\\,k_\\Tag{max} - d_\\Tag{max}\\,k_\\Tag{min}}\n                 {k_\\Tag{max} - k_\\Tag{min}} \\\\[2ex]\n    \\displaystyle\n    \\gamma = \\beta/\\alpha\n    = \\frac{d_\\Tag{min}\\,k_\\Tag{max} - d_\\Tag{max}\\,k_\\Tag{min}}\n           {d_\\Tag{max} - d_\\Tag{min}}\n  \\end{array}\n  \\right.\n  \\label{eq:exact-interpolation}\n\\end{equation}\nThis yields the same value for $\\alpha$ as in Eq.~(\\ref{eq:best-alpha}) with\n$\\eta = 0$ and $\\gamma = \\beta/\\alpha = \\gamma_\\Tag{cen}$ with\n$\\gamma_\\Tag{cen}$ given in Eq.~(\\ref{eq:gamma-center}).  In other words, the\nparameters derived from exact interpolation of the bounds are compatible with\nEquations~(\\ref{eq:best-alpha}) and (\\ref{eq:beta-bounds}) for $\\eta = 0$.\n\n\n\\section{Proposed parameters}\n\nTo summarize all previous considerations, taking $\\eta = 0$ seems the best\nchoice. 
Assuming $k_\\Tag{max} > k_\\Tag{min}$ and $d_\\Tag{max} > d_\\Tag{min}$, a\ngood selection of digitization parameters is then:\n\\begin{equation}\n  \\boxed{\n  \\begin{array}{l}\n    \\displaystyle\n    \\alpha = \\frac{d_\\Tag{max} - d_\\Tag{min}}\n                  {k_\\Tag{max} - k_\\Tag{min}} \\, , \\\\[2ex]\n    \\displaystyle\n    \\gamma_\\Tag{cen} = \\frac{d_\\Tag{min}\\,k_\\Tag{max} - d_\\Tag{max}\\,k_\\Tag{min}}\n                            {d_\\Tag{max} - d_\\Tag{min}} \\, , \\\\[2ex]\n    \\displaystyle\n    \\beta = \\alpha\\times\\begin{cases}\n    \\Round{\\gamma_\\Tag{cen}} & \\text{to preserve zeros,} \\\\\n    \\gamma_\\Tag{cen} & \\text{to preserve data bounds.} \\\\\n    \\end{cases}\n  \\end{array}\n  }\n\\end{equation}\nWhen $k_\\Tag{max} > k_\\Tag{min}$ and $d_\\Tag{max} = d_\\Tag{min}$, taking\n$\\alpha = 0$ and $\\beta = d_\\Tag{max} = d_\\Tag{min}$ yields an exact\nrepresentation.  Finally, when $k_\\Tag{max} = k_\\Tag{min}$ (which means that\nthe values are digitized on a single level!), $\\alpha = 0$ and $\\beta =\n(d_\\Tag{max} + d_\\Tag{min})/2$ yields the least worst error.\n\n\n\\section{Digitization with non-finite values}\n\nIn order to cope with non-finite data (NaN or $\\pm\\infty$), the following\ndigitization rule can be applied:\n\\begin{equation}\n  \\label{eq:complex-digitization}\n  k_i = \\begin{cases}\n     k_\\Tag{nan}     & \\text{if $d_i$ is not a number;}\\\\\n     k_\\Tag{+\\infty} & \\text{if $d_i > d_\\Tag{max}$;}\\\\\n     k_\\Tag{-\\infty} & \\text{if $d_i < d_\\Tag{min}$;}\\\\\n     \\Round{(d_i - \\beta)/\\alpha} & \\text{otherwise.}\n  \\end{cases}\n\\end{equation}\nwhere $k_\\Tag{nan}$, $k_\\Tag{+\\infty}$ and $k_\\Tag{-\\infty}$ are chosen\nintegers while $d_\\Tag{min}$ and $d_\\Tag{max}$ (with $d_\\Tag{max} \\ge\nd_\\Tag{min}$) are both finite and specify the range of valid data values. \n\\oops{Perhaps add cases for $k_\\Tag{min}$ and $k_\\Tag{max}$ to avoid unwanted\nbehavior due to rounding errors.}\n\nThe integers $k_\\Tag{nan}$, $k_\\Tag{+\\infty}$ and $k_\\Tag{-\\infty}$ can be\nchosen outside the range $\\IntRange{k_\\Tag{min},k_\\Tag{max}}$ (thus reserving a\nfew values of the integer range for representing these special cases) but this\nis not mandatory.  
For instance, if we take $k_\\Tag{-\\infty} = k_\\Tag{min}$ and\n$k_\\Tag{+\\infty} = k_\\Tag{max}$, then the above rule implements \\emph{data\nclipping}:\n\\begin{equation}\n  \\label{eq:digitization-with-clipping}\n  k_i = \\begin{cases}\n     k_\\Tag{nan} & \\text{if $d_i$ is not a number;}\\\\\n     k_\\Tag{min} & \\text{if $d_i \\le d_\\Tag{min}$;}\\\\\n     k_\\Tag{max} & \\text{if $d_i \\ge d_\\Tag{max}$;}\\\\\n     \\Round{(d_i - \\beta)/\\alpha} & \\text{otherwise.}\n  \\end{cases}\n\\end{equation}\n\n\\bibliographystyle{plainnat}\n\\bibliography{journals-short,biblio}\n\n\\end{document}\n", "meta": {"hexsha": "798efd96ce248b93575e8bada9c5d2c37a99b3ec", "size": 16423, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/digitization.tex", "max_stars_repo_name": "emmt/TiPi", "max_stars_repo_head_hexsha": "dfcdf5b21eeaaf6b9d0372cb86f527ff47ba3dfd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2015-03-28T01:47:32.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-22T11:56:31.000Z", "max_issues_repo_path": "notes/digitization.tex", "max_issues_repo_name": "emmt/TiPi", "max_issues_repo_head_hexsha": "dfcdf5b21eeaaf6b9d0372cb86f527ff47ba3dfd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-03-02T14:42:52.000Z", "max_issues_repo_issues_event_max_datetime": "2016-10-20T20:59:01.000Z", "max_forks_repo_path": "notes/digitization.tex", "max_forks_repo_name": "emmt/TiPi", "max_forks_repo_head_hexsha": "dfcdf5b21eeaaf6b9d0372cb86f527ff47ba3dfd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2015-01-29T10:34:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-24T15:10:57.000Z", "avg_line_length": 41.5772151899, "max_line_length": 89, "alphanum_fraction": 0.6585276746, "num_tokens": 5809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.798186768138228, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.590797386469482}}
{"text": "\\documentclass{subfile}\n\n\\begin{document}\n\t\\section{ChMO}\\label{sec:chmo}\n\t\n\t\t\\begin{problem}[$2019$, problem $1$]\n\t\t\tLet $a,b,c,d,e\\geq-1$ be real numbers such that $a+b+c+d+e=5$. Find the minimum and maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(a+b)(b+c)(c+d)(d+e)(e+a)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2018$, problem $6$]\n\t\t\tLet $n,k$ be natural numbers such that $n>k$ and $a_{1},\\ldots,a_{n}$ be real numbers in the interval $(k-1,k)$. Let $x_{1},\\ldots,x_{n}$ be positive real numbers such that for any subset $I$ of $\\{1,2,\\ldots,n\\}$ with $k$ elements,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i\\in I}x_{i}\n\t\t\t\t\t\t& \\leq\\sum_{i\\in I}a_{i}\n\t\t\t\t\\end{align*}\n\t\t\tFind the maximum possible value of $x_{1}\\cdots x_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2017$, problem $6$]\n\t\t\tGiven an integer $n\\geq2$ and real numbers $a,b$ such that $0 < a < b$. Let $x_{1},\\ldots,x_{n}$ be real numbers in the interval $[a,b]$. Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\dfrac{x_{1}^{2}}{x_{2}}+\\ldots+\\dfrac{x_{n-1}^{2}}{x_{n}}+\\dfrac{x_{n}^{2}}{x_{1}}}{x_{1}+\\ldots+x_{n}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2015$, problem $1$]\n\t\t\tLet $z_{1},\\ldots,z_{n}$ be complex numbers such that $|z_{i}-1|\\leq r$ for some real number $r\\in(0,1)$. Show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left|\\sum_{i=1}^{n}z_{i}\\right|\\cdot\\left|\\sum_{i=1}^{n}\\dfrac{1}{z_{i}}\\right|\n\t\t\t\t\t\t& \\geq n^{2}(1-r^{2})\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{n}$ be real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}a_{i}^{2}-\\sum_{i=1}^{n}a_{i}a_{i+1}\n\t\t\t\t\t\t& \\leq \\left\\lfloor\\dfrac{n}{2}\\right\\rfloor(M-m)\n\t\t\t\t\\end{align*}\n\t\t\twhere $a_{n+1}=a_{1},M=\\max\\{a_{1},\\ldots,a_{n}\\},m=\\min\\{a_{1},\\ldots,a_{n}\\}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$, problem $5$]\n\t\t\tLet $n\\geq4$ be an integer and $a_{1},\\ldots,a_{n},b_{1},\\ldots,b_{n}$ be non-negative real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}+\\ldots+a_{n}\n\t\t\t\t\t\t& = b_{1}+\\ldots+b_{n}>0\n\t\t\t\t\\end{align*}\n\t\t\tFind the maximum of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\sum_{i=1}^{n}a_{i}(a_{i}+b_{i})}{\\sum_{i=1}^{n}b_{i}(a_{i}+b_{i})}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$, problem $4$]\n\t\t\tLet $n>3$ be an integer and $a_{1},\\ldots,a_{n}$ be real numbers satisfying $\\min\\{a_{i}-a_{j}\\}\\leq1$ for $1\\leq i<j\\leq n$. 
Find the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}a_{i}^{3}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$, problem $3$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n},y_{1},\\ldots,y_{n}$ be real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{1}\n\t\t\t\t\t\t& \\leq x_{2}\\leq\\ldots\\leq x_{n}\\\\\n\t\t\t\t\ty_{1}\n\t\t\t\t\t\t& \\geq y_{2}\\geq\\ldots\\geq y_{n}\\\\\n\t\t\t\t\t\\sum_{i=1}^{n}ix_{i}\n\t\t\t\t\t\t& = \\sum_{i=1}^{n}iy_{i}\n\t\t\t\t\\end{align*}\n\t\t\tShow that for any real number $\\alpha$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}x_{i}\\lfloor i\\alpha\\rfloor\n\t\t\t\t\t\t& \\geq\\sum_{i=1}^{n}y_{i}\\lfloor i\\alpha\\rfloor\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$, problem $1$]\n\t\t\tLet $a,b,c$ be complex numbers and $|a+b|=m,|a-b|=n$. If $mn\\neq0$, show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\{|ac+b|,|a+bc|\\}\n\t\t\t\t\t\t& \\geq\\dfrac{mn}{\\sqrt{m^{2}+n^{2}}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{k}$ be real numbers such that $a_{1}+\\ldots+a_{k}=0$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\{a_{1}^{2},\\ldots,a_{k}^{2}\\}\n\t\t\t\t\t\t& \\leq \\dfrac{k}{3}\\left((a_{1}-a_{2})^{2}+\\ldots+(a_{k-1}-a_{k})^{2}\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$, problem $5$]\n\t\t\tLet $(a_{n})$ be a sequence such that $a_{1}=\\frac{1}{2},a_{k+1}=-a_{k}+\\frac{1}{2-a_{k}}$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\dfrac{n}{2(a_{1}+\\ldots+a_{n})}-1\\right)^{n}\n\t\t\t\t\t\t& \\leq\\left(\\dfrac{a_{1}+\\ldots+a_{n}}{n}\\right)^{n}\\left(\\dfrac{1}{a_{1}}-1\\right)\\cdots\\left(\\dfrac{1}{a_{n}}-1\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$, problem $1$]\n\t\t\tLet $\\theta_{i}\\in\\left(-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right)$ for $1\\leq i\\leq 4$. Prove that there exists a real number $x$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\cos^{2}{\\theta_{1}}\\cos^{2}{\\theta_{2}}-(\\sin{\\theta_{1}}\\sin{\\theta_{2}}-x)^{2}\n\t\t\t\t\t\t& \\geq0\\\\\n\t\t\t\t\t\\cos^{2}{\\theta_{3}}\\cos^{2}{\\theta_{4}}-(\\sin{\\theta_{3}}\\sin{\\theta_{4}}-x)^{2}\n\t\t\t\t\t\t& \\geq0\n\t\t\t\t\\end{align*}\n\t\t\tif and only if\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{4}\\sin^{2}{\\theta_{i}}\n\t\t\t\t\t\t& \\leq 2\\left(1+\\prod_{i=1}^{4}\\sin{\\theta_{i}}+\\prod_{i=1}^{4}\\cos{\\theta_{i}}\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2004$, problem $2$]\n\t\t\tLet $n\\geq2$ be an integer and $a_{1},\\ldots,a_{n}$ be positive integers such that $a_{1}<\\ldots<a_{n}$ and\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{a_{1}}+\\ldots+\\dfrac{1}{a_{n}}\n\t\t\t\t\t\t& \\leq1\n\t\t\t\t\\end{align*}\n\t\t\tProve that for any real number $x$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}\\dfrac{1}{a_{i}^{2}+x^{2}}\\right)^{2}\n\t\t\t\t\t\t& \\leq \\dfrac{1}{2}\\dfrac{1}{a_{1}(a_{1}-1)+x^{2}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$, problem $3$]\n\t\t\tLet $n$ be a positive integer. 
Find the smallest positive real number $\\lambda$ such that for any $x_{1},\\ldots,x_{n}\\in\\left(0,\\frac{\\pi}{2}\\right)$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\prod_{i=1}^{n}\\tan{x_{i}}\n\t\t\t\t\t\t& = 2^{\\frac{n}{2}}\n\t\t\t\t\\end{align*}\n\t\t\timplies\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\cos{x_{i}}\n\t\t\t\t\t\t& \\leq \\lambda\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$, problem $3$]\n\t\t\tLet $a,b,c,d$ be positive real numbers such that $ab+cd=1$ and $x_{1},x_{2},x_{3},x_{4},y_{1},y_{2},y_{3},y_{4}$ are real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{i}^{2}+y_{i}^{2}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tfor $1\\leq i\\leq 4$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(ax_{1}+bx_{2}+cx_{3}+dx_{4}\\right)^{2}+(ay_{4}+by_{3}+cy_{2}+dy_{1})^{2}\n\t\t\t\t\t\t& \\leq 2\\left(\\dfrac{a^{2}+b^{2}}{ab}+\\dfrac{c^{2}+d^{2}}{cd}\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2002$, problem $1$]\n\t\t\tFor every four points $P_{1},P_{2},P_{3},P_{4}$ on the plane, find the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\sum_{1\\leq i<j\\leq 4}P_{i}P_{j}}{\\min\\limits_{1\\leq i<j\\leq 4}P_{i}P_{j}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2002$, problem $3$]\n\t\t\tLet $c$ be a real number such that $c\\in\\left(\\frac{1}{2},1\\right)$. Find the least real number $M$ such that for every integer $n\\geq2$ and real numbers $0\\leq a_{1}\\leq \\ldots\\leq a_{n}$, if\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{n}\\sum_{i=1}^{n}ia_{i}\n\t\t\t\t\t\t& = c\\sum_{i=1}^{n}a_{i}\n\t\t\t\t\\end{align*}\n\t\t\tthen we always have that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}a_{i}\n\t\t\t\t\t\t& \\leq M\\sum_{i=1}^{m}a_{i}\n\t\t\t\t\\end{align*}\n\t\t\twhere $m=\\lfloor cn\\rfloor$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1999$, problem $5$]\n\t\t\tDetermine the maximum value of $\\lambda$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(x)\n\t\t\t\t\t\t& \\geq \\lambda(x-a)^{3}\n\t\t\t\t\\end{align*}\n\t\t\tfor all non-negative real numbers $x$, where\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(x)\n\t\t\t\t\t\t& = x^{3}+ax^{2}+bx+c\n\t\t\t\t\\end{align*}\n\t\t\thas non-negative roots. Find the equality condition.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1998$, problem $2$]\n\t\t\tGiven a positive integer $n>1$. 
Determine with proof if there exist $2n$ distinct positive integers $a_{1},\\ldots,a_{n},b_{1},\\ldots,b_{n}$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}+\\ldots+a_{n}\n\t\t\t\t\t\t& = b_{1}+\\ldots+b_{n}\\\\\n\t\t\t\t\tn-1\n\t\t\t\t\t\t& > \\sum_{i=1}^{n}\\dfrac{a_{i}-b_{i}}{a_{i}+b_{i}}>n-1-\\dfrac{1}{1998}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1998$, problem $6$]\n\t\t\tLet $n\\geq2$ be a positive integer and $x_{1},\\ldots,x_{n}$ be real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}x_{i}^{2}+\\sum_{i=1}^{n-1}x_{i}x_{i+1}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tFor each $k$, find the maximum value of $|x_{k}|$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1997$, problem $1$]\n\t\t\tLet $x_{1},\\ldots,x_{1997}$ be real numbers satisfying\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t-\\dfrac{1}{\\sqrt{3}}\n\t\t\t\t\t\t& \\leq x_{i}\\leq\\sqrt{3}\\\\\n\t\t\t\t\tx_{1}+\\ldots+x_{1997}\n\t\t\t\t\t\t& = -318\\sqrt{3}\n\t\t\t\t\\end{align*}\n\t\t\tDetermine the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{1}^{12}+\\ldots+x_{1997}^{12}\n\t\t\t\t\\end{align*}\n\t\t\twith proof.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1997$, problem $6$]\n\t\t\tLet $(a_n)$ be a sequence of real numbers satisfying\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{n+m}\n\t\t\t\t\t\t& \\leq a_{n}+a_{m}\n\t\t\t\t\\end{align*}\n\t\t\tfor all non-negative integers $m,n$. Prove that, if $n\\geq m$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{n}\n\t\t\t\t\t\t& \\leq ma_{1}+\\left(\\dfrac{n}{m}-1\\right)a_{m}\n\t\t\t\t\\end{align*} \n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1996$, problem $2$]\n\t\t\tLet $n$ be a natural number. Suppose that $x_{0}=0$ and $x_{i}>0$ for all $i\\in\\{1,\\ldots,n\\}$. If $\\sum_{i=1}^{n}x_{i}=1$, prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t1\n\t\t\t\t\t\t& \\leq \\sum_{i=1}^{n}\\dfrac{x_{i}}{\\sqrt{1+x_{0}+\\ldots+x_{i-1}}\\sqrt{x_{i}+\\ldots+x_{n}}}\n\t\t\t\t\t\t\t< \\dfrac{\\pi}{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1995$, problem $1$]\n\t\t\tLet $n\\geq3$ be an integer and $a_{1},\\ldots,a_{n},b_{1},\\ldots,b_{n}$ be real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}+\\ldots+a_{n}\n\t\t\t\t\t\t& = b_{1}+\\ldots+b_{n}\\\\\n\t\t\t\t\t0<a_{1}=a_{2},\n\t\t\t\t\t\t& a_{i}+a_{i+1}=a_{i+2}\\\\\n\t\t\t\t\t0<b_{1}\\leq b_{2},\n\t\t\t\t\t\t& b_{i}+b_{i+1}\\leq b_{i+2}\n\t\t\t\t\\end{align*}\n\t\t\tProve that $a_{n-1}+a_{n}\\leq b_{n-1}+b_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1995$, problem $5$]\n\t\t\tLet $a_{1},\\ldots,a_{10}$ be distinct natural numbers such that $a_{1}+\\ldots+a_{10}=1995$. Find the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}a_{2}+a_{2}a_{3}+\\ldots+a_{9}a_{10}+a_{10}a_{1}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1993$, problem $2$]\n\t\t\tGiven a positive integer $k$ and a positive real number $a$. 
Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta^{k_1}+\\ldots+a^{k_r}\n\t\t\t\t\\end{align*}\n\t\t\twhere $r,k_{1},\\ldots,k_{r}$ are positive integers with $1\\leq r\\leq k$ and $k_{1}+\\ldots+k_{r}=k$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2019$ Team Selection Test, problem $3$]\n\t\t\tLet $n$ be a positive integer and $a_{1},\\ldots,a_{n}$ be non-negative real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}+\\ldots+a_{n}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tFind the maximum possible value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{1\\leq i<j\\leq n}\\min\\{(i-j)^{2},(n+i-j)^{2}\\}a_{i}a_{j}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2019$ Team Selection Test, problem $7$]\n\t\t\tLet $x,y,z$ be complex numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t|x|^{2}+|y|^{2}+|z|^{2}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left|x^{3}+y^{3}+z^{3}-3xyz\\right|\n\t\t\t\t\t\t& \\leq1\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2019$ Team Selection Test, problem $5$]\n\t\t\tFind all positive integers $n$ such that for any positive real numbers $a,b,c,x,y,z$ satisfying\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\{a,b,c,x,y,z\\}\n\t\t\t\t\t\t& = a\\\\\n\t\t\t\t\ta+b+c\n\t\t\t\t\t\t& = x+y+z\\\\\n\t\t\t\t\tabc\n\t\t\t\t\t\t& = xyz\n\t\t\t\t\\end{align*}\n\t\t\twe have\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta^{n}+b^{n}+c^{n}\n\t\t\t\t\t\t& \\geq x^{n}+y^{n}+z^{n}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2018$ Team Selection Test, problem $5$]\n\t\t\tLet $n,k$ be positive integers such that $n>4k$. Find the minimum value of $\\lambda(n,k)=\\lambda$ such that for any positive real numbers $a_{1},\\ldots,a_{n}$, we have\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{a_{i}}{\\sqrt{a_{i}^{2}+a_{i+1}^{2}+\\ldots+a_{i+k}^{2}}}\n\t\t\t\t\t\t& \\leq \\lambda\n\t\t\t\t\\end{align*}\n\t\t\twhere $a_{n+i}=a_i$ for $1\\leq i\\leq k$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2018$ Team Selection Test, problem $3$]\n\t\t\tLet $H(n)$ be the \\textit{harmonic sum}\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t1+\\dfrac{1}{2}+\\ldots+\\dfrac{1}{n}\n\t\t\t\t\\end{align*}\n\t\t\tProve that there exists a positive constant $C$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tH(a_{1})+\\ldots+H(a_{n})\n\t\t\t\t\t\t& \\leq C\\sqrt{\\sum_{i=1}^{n}ia_{i}}\n\t\t\t\t\\end{align*}\n\t\t\tfor arbitrary positive integer $n$ and positive integers $a_{1},\\ldots,a_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2017$ Team Selection Test, problem $2$]\n\t\t\tLet $n$ be a positive integer and $x>1$ be a real number. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{\\{ix\\}}{\\lfloor ix\\rfloor}\n\t\t\t\t\t\t& < \\sum_{i=1}^{n}\\dfrac{1}{2i-1}\n\t\t\t\t\\end{align*}\n\t\t\twhere $\\{a\\}$ and $\\lfloor a\\rfloor$ are the \\textit{fractional} and \\textit{integer} parts of $a$, respectively.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2017$ Team Selection Test, problem $5$]\n\t\t\tLet $m\\geq2$ be a positive integer and $x_{1},\\ldots,x_{m}$ be positive real numbers. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(m-1)^{m-1}\\left(x_{1}^{m}+\\ldots+x_{m}^{m}\\right)\n\t\t\t\t\t\t& \\geq (x_{1}+\\ldots+x_{m})^{m}-m^{m}x_{1}\\cdots x_{m}\n\t\t\t\t\\end{align*}\n\t\t\tFind out when equality holds.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2016$ Team Selection Test, problem $2$]\n\t\t\tFind the smallest positive real number $\\lambda$ such that for any complex numbers $z_{1},z_{2},z_{3}$ with $|z_{i}|<1$ and $z_{1}+z_{2}+z_{3}=0$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t|z_{1}z_{2}+z_{2}z_{3}+z_{3}z_{1}|^{2}+|z_{1}z_{2}z_{3}|^{2}\n\t\t\t\t\t\t& < \\lambda\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2016$ Team Selection Test, problem $7$]\n\t\t\tLet $n>1$ be an integer and $\\alpha$ be a real number such that $0<\\alpha<2$ and $a_{1},\\ldots,a_{n},c_{1},\\ldots,c_{n}$ be positive real numbers. For a positive real number $y$, let\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(y)\n\t\t\t\t\t\t& = \\left(\\sum_{a_{i}<y}c_{i}a_{i}^{2}\\right)^{\\frac{1}{2}}+\\left(\\sum_{a_{i}>y}c_{i}a_{i}^{\\alpha}\\right)^{\\frac{1}{\\alpha}}\n\t\t\t\t\\end{align*}\n\t\t\tIf a positive real number $x$ satisfies $x\\geq f(y)$ for some positive real number $y$, prove that $f(x)\\leq 8^{\\frac{1}{\\alpha}}x$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2015$ Team Selection Test, problem $4$]\n\t\t\tLet $n\\geq2$ be an integer and $x_{1},x_{2},\\ldots,$ be a non-decreasing sequence of positive real numbers such that $x_{1},\\frac{x_{2}}{2},\\frac{x_{3}}{3},\\ldots$ is a non-increasing sequence. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\sum_{i=1}^{n}x_{i}}{n\\sqrt[n]{\\prod_{i=1}^{n}x_{i}}}\n\t\t\t\t\t\t& \\leq \\dfrac{n+1}{2\\sqrt[n]{n!}}\n\t\t\t\t\\end{align*} \n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Team Selection Test, problem $4$]\n\t\t\tLet $(x_{n})$ be a sequence of real numbers and $(y_{n})$ be a sequence such that $y_{1}=x_{1}$ and for $n\\geq1$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ty_{n+1}\n\t\t\t\t\t\t& = x_{n+1}-\\sqrt{\\sum_{i=1}^{n}x_{i}^{2}}\n\t\t\t\t\\end{align*}\n\t\t\tFind the smallest positive real number $\\lambda$ such that for any $(x_{n})$ and positive integer $m$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{m}\\sum_{i=1}^{m}x_{i}^{2}\n\t\t\t\t\t\t& \\leq \\sum_{i=1}^{m}\\lambda^{m-i}y_{i}^{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Team Selection Test, problem $5$]\n\t\t\tLet $n\\geq2$ be a positive integer. Find the greatest constant $\\lambda(n)$ such that for any non-zero complex numbers $z_{1},\\ldots,z_{n}$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}|z_{i}|^{2}\n\t\t\t\t\t\t& \\geq \\lambda(n)\\min\\limits_{1\\leq i\\leq n}|z_{i+1}-z_{i}|^{2}\n\t\t\t\t\\end{align*}\n\t\t\twhere $z_{n+1}=z_{1}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2013$ Team Selection Test, problem $4$]\n\t\t\tLet $n,k>1$ be integers, $a_{1},\\ldots,a_{n}$ be non-negative real numbers such that $a_{1}\\geq\\ldots\\geq a_{n}$ and\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}+\\ldots+a_{n}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tand let $c_{1},\\ldots,c_{n}$ be non-negative real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tc_{1}+\\ldots+c_{m}\n\t\t\t\t\t\t& \\leq m^{k}\n\t\t\t\t\\end{align*}\n\t\t\tfor every positive integer $m\\leq n$. 
Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tc_{1}a_{1}^{k}+\\ldots+c_{n}a_{n}^{k}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2013$ Team Selection Test, problem $17$]\n\t\t\tLet $n\\geq2$ be an integer and $a_{1},\\ldots,a_{n},b_{1},\\ldots,b_{n}$ be non-negative real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\dfrac{n}{n-1}\\right)^{n-1}\\left(\\dfrac{1}{n}\\sum_{i=1}^{n}a_{i}^{2}\\right)+\\left(\\dfrac{1}{n}\\sum_{i=1}^{n}b_{i}^{2}\\right)\n\t\t\t\t\t\t& \\geq \\prod_{i=1}^{n}\\sqrt[n]{a_{i}^{2}+b_{i}^{2}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2012$ Team Selection Test, problem $1$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n},y_{1},\\ldots,y_{n}$ be complex numbers such that $|x_{i}|=|y_{i}|=1$ for $1\\leq i\\leq n$. Let\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}x_{i}\\\\\n\t\t\t\t\ty\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}y_{i}\\\\\n\t\t\t\t\tz_{i}\n\t\t\t\t\t\t& = xy_{i}-yx_{i}-x_{i}y_{i}\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}|z_{i}|\n\t\t\t\t\t\t& \\leq n\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2012$ Team Selection Test, problem $4$]\n\t\t\tLet $m,n>1$ be integers and $r,s$ be positive real numbers such that $r<s$. Let $(a_{ij})$ be an $m\\times n$ non-zero matrix such that $a_{ij}\\geq0$. Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\left(\\sum_{j=1}^{n}\\left(\\sum_{i=1}^{m}a_{ij}^{s}\\right)^{\\frac{r}{s}}\\right)^{\\frac{1}{r}}}{\\left(\\sum_{i=1}^{m}\\left(\\sum_{j=1}^{n}a_{ij}^{r}\\right)^{\\frac{s}{r}}\\right)^{\\frac{1}{s}}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$ Team Selection Test, problem $6$]\n\t\t\tLet $n$ be a positive integer. Find the largest real number $\\lambda$ such that for all positive real numbers $x_{1},\\ldots,x_{2n}$ satisfying\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{2n}\\sum_{i=1}^{2n}(x_{i}+2)^{n}\n\t\t\t\t\t\t& \\geq \\prod_{i=1}^{2n}x_{i}\n\t\t\t\t\\end{align*}\n\t\t\tthe following inequality is also true:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{2n}\\sum_{i=1}^{2n}(x_{i}+1)^{n}\n\t\t\t\t\t\t& \\geq \\lambda\\prod_{i=1}^{2n}x_{i}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$ Team Selection Test, problem $7$]\n\t\t\tLet $n\\geq 3$ be an integer. Find the largest real number $M$ such that for any positive real numbers $x_{1},\\ldots,x_{n}$ there is an arrangement $y_{1},\\ldots,y_{n}$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{y_{i}^{2}}{y_{i+1}^{2}-y_{i+1}y_{i+2}+y_{i+2}^{2}}\n\t\t\t\t\t\t& \\geq M\n\t\t\t\t\\end{align*}\n\t\t\twhere $y_{n+2}=y_{2},y_{n+1}=y_{1}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test, problem $1$]\n\t\t\tLet $n$ be a positive integer. The real numbers $a_{0},\\ldots,a_{2n},b_{0},\\ldots,b_{2n}$ satisfy $a_{i}+a_{i+1}\\geq 0$ for $1\\leq i\\leq 2n-1$ and $a_{2i+1}\\leq 0$ for $1\\leq i\\leq n-1$. 
For any integers $p,q$ such that $0\\leq p\\leq q\\leq n$, we have\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=2p}^{2q}b_{i}\n\t\t\t\t\t\t& > 0\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=0}^{2n}(-1)^{i}a_{i}b_{i}\n\t\t\t\t\t\t& \\geq 0\n\t\t\t\t\\end{align*}\n\t\t\tDetermine when equality holds.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test, problem $5$]\n\t\t\tFind all positive real numbers $\\lambda$ such that for every integer $n\\geq 2$ and all positive real numbers $a_{1},\\ldots,a_{n}$ such that $a_{1}+\\ldots+a_{n}=n$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{a_{i}}-\\lambda\\prod_{i=1}^{n}\\dfrac{1}{a_{i}}\n\t\t\t\t\t\t& \\leq n-\\lambda\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test, problem $7$]\n\t\t\tLet $n\\geq 2$ be an integer and $a$ be a positive real number. Find the smallest positive real number $M(n,a)=M$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{a+S-x_{i}}\n\t\t\t\t\t\t& \\leq M\n\t\t\t\t\\end{align*}\n\t\t\twhere $S=\\sum_{i=1}^{n}x_{i}$, for any positive real numbers $x_{1},\\ldots,x_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test $2$]\n\t\t\tLet $n\\geq2$ be an integer and $x_{1},\\ldots,x_{n}$ be real numbers in the interval $[0,1]$. Prove that there exist real numbers $a_{0},\\ldots,a_{n}$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{0}+a_{n}\n\t\t\t\t\t\t& = 0\\\\\n\t\t\t\t\t|a_{i}|\n\t\t\t\t\t\t& \\leq 1 \\quad (0\\leq i\\leq n)\\\\\n\t\t\t\t\t|a_{i}-a_{i-1}|\n\t\t\t\t\t\t& = x_{i} \\quad (1\\leq i\\leq n)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Team Selection Test $2$]\n\t\t\tLet $n\\geq2$ be an integer. Find the maximum constant $\\lambda(n)$ so that if a sequence of real numbers $a_{0},a_{1},\\ldots$ satisfies $0=a_{0}\\leq a_{1}\\leq\\ldots\\leq a_{n}$ and for $1\\leq i\\leq n-1$, $2a_{i}\\geq a_{i-1}+a_{i+1}$, then\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}ia_{i}\\right)^{2}\n\t\t\t\t\t\t& \\geq \\lambda(n)\\sum_{i=1}^{n}a_{i}^{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Team Selection Test, problem $5$]\n\t\t\tLet $m>1$ be an integer and $n$ be an odd integer such that $3\\leq n<2m$. Consider an $m\\times n$ matrix $(a_{ij})$ such that for any $1\\leq j\\leq n$, $a_{1j},a_{2j},\\ldots,a_{mj}$ is a permutation of $1,2,\\ldots,m$ and for any $1\\leq i\\leq m$ and $1\\leq j\\leq n-1$, $|a_{ij}-a_{i(j+1)}|\\leq1$. Find the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\limits_{1\\leq i\\leq m}\\sum_{j=1}^{n}a_{ij}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Quiz $1$, problem $3$]\n\t\t\tLet $m,n$ be positive integers and $x_{1},\\ldots,x_{m},y_{1},\\ldots,y_{n}$ be positive real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t2XY\\sum_{i=1}^{m}\\sum_{j=1}^{n}|x_{i}-y_{j}|\n\t\t\t\t\t\t& \\geq X^{2}\\sum_{i=1}^{n}\\sum_{j=1}^{n}|y_{i}-y_{j}|+Y^{2}\\sum_{i=1}^{m}\\sum_{j=1}^{m}|x_{i}-x_{j}|\n\t\t\t\t\\end{align*}\n\t\t\twhere $X=x_{1}+\\ldots+x_{m}$ and $Y=y_{1}+\\ldots+y_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Quiz $5$, problem $3$]\n\t\t\tLet $a_{1},a_{2},a_{3},a_{4}$ be non-negative real numbers such that $a_{1}+a_{2}+a_{3}+a_{4}=1$. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\left\\{\\sum_{i=1}^{4}\\sqrt{a_{i}^{2}+a_{i}a_{i-1}+a_{i-1}^{2}+a_{i-1}a_{i-2}},\\sum_{i=1}^{4}\\sqrt{a_{i}^{2}+a_{i}a_{i+1}+a_{i+1}^{2}+a_{i+1}a_{i+2}}\\right\\}\n\t\t\t\t\t\t& \\geq 2\n\t\t\t\t\\end{align*}\n\t\t\twhere the indices are taken modulo $4$, i.e. $a_{i+4}=a_i$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Team Selection Test, problem $5$]\n\t\t\tLet $m,n>1$ be integers and $(a_{ij})$ be an $m\\times n$ non-zero matrix of non-negative real numbers. Find the minimum and maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{m\\sum_{i=1}^{m}\\left(\\sum_{j=1}^{n}a_{ij}\\right)^{2}+n\\sum_{j=1}^{n}\\left(\\sum_{i=1}^{m}a_{ij}\\right)^{2}}{\\left(\\sum_{i=1}^{m}\\sum_{j=1}^{n}a_{ij}\\right)^{2}+mn\\sum_{i=1}^{m}\\sum_{j=1}^{n}a_{ij}^{2}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Team Selection Test, problem $6$]\n\t\t\tFind the maximum constant $M$ such that for any integer $n\\geq 3$, there exist two sequences of positive real numbers $a_{1},\\ldots,a_{n}$ and $b_{1},\\ldots,b_{n}$ satisfying\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}b_{i}\n\t\t\t\t\t\t& = 1\\\\\n\t\t\t\t\t2b_{i}\n\t\t\t\t\t\t& \\geq b_{i-1}+b_{i+1}\\\\\n\t\t\t\t\ta_{k}^{2}\n\t\t\t\t\t\t& \\geq 1+\\sum_{i=1}^{k}a_{i}b_{i}\n\t\t\t\t\\end{align*}\n\t\t\tand $a_{n}=M$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Quiz $2$, problem $3$]\n\t\t\tLet $z_{1},z_{2},z_{3}$ be complex numbers such that $|z_{i}|\\leq 1$ and $\\omega_{1},\\omega_{2}$ are the roots of the equation\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(z-z_{1})(z-z_{2})+(z-z_{2})(z-z_{3})+(z-z_{3})(z-z_{1})\n\t\t\t\t\t\t& = 0\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\min\\{|z_{j}-\\omega_{1}|,|z_{j}-\\omega_{2}|\\}\n\t\t\t\t\t\t& \\leq 1\n\t\t\t\t\\end{align*}\n\t\t\tfor $j=1,2,3$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Quiz $4$, problem $2$]\n\t\t\tLet $x,y,z$ be positive real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{xy}{z}+\\dfrac{yz}{x}+\\dfrac{zx}{y}\n\t\t\t\t\t\t& > 2\\sqrt[3]{x^{3}+y^{3}+z^{3}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Quiz $5$, problem $2$]\n\t\t\tLet $n\\geq 2$ be an integer and $a_{1},\\ldots,a_{n}$ be real numbers not all zero. Determine the necessary and sufficient condition so that there exists a sequence of integers $x_{1},\\ldots,x_{n}$ which satisfies\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t0\n\t\t\t\t\t\t& < x_{1} <\\ldots<x_{n}\\\\\n\t\t\t\t\ta_{1}x_{1}+\\ldots+a_{n}x_{n}\n\t\t\t\t\t\t& \\geq 0\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Quiz $5$, problem $3$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n},y_{1},\\ldots,y_{n}$ be real numbers such that $0<x_{1}\\leq\\frac{x_{2}}{2}\\leq\\ldots\\leq\\frac{x_{n}}{n}$ and $0<y_{n}\\leq y_{n-1}\\leq\\ldots\\leq y_{1}$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}x_{i}y_{i}\\right)^{2}\n\t\t\t\t\t\t& \\leq \\left(\\sum_{i=1}^{n}y_{i}\\right)\\left(\\sum_{i=1}^{n}\\left(x_{i}^{2}-\\dfrac{1}{4}x_{i-1}x_{i}\\right)y_{i}\\right)\n\t\t\t\t\\end{align*}\n\t\t\twhere $x_{0}=0$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Team Selection Test, problem $5$]\n\t\t\tLet $n>1$ be a positive integer and $x_{1},\\ldots,x_{n}$ be real numbers satisfying $A=\\left|\\sum_{i=1}^{n}x_{i}\\right|\\neq0$ and $B=\\max\\limits_{1\\leq i<j\\leq n}|x_{i}-x_{j}|\\neq0$. 
Prove that for any $n$ vectors $\\overrightarrow{\\alpha_{i}}$ in the plane, there exists a permutation $k_{1},\\ldots,k_{n}$ of the numbers $1,\\ldots,n$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left|\\sum_{i=1}^{n}x_{k_{i}}\\overrightarrow{\\alpha_{i}}\\right|\n\t\t\t\t\t\t& \\geq \\dfrac{AB}{2A+B}\\max_{1\\leq i\\leq n}\\left|\\overrightarrow{\\alpha_{i}}\\right|\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Quiz $2$, problem $1$]\n\t\t\tLet $u,v,w$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tu+v+w+\\sqrt[3]{uvw}\n\t\t\t\t\t\t& = 4\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{\\dfrac{uv}{w}}+\\sqrt{\\dfrac{vw}{u}}+\\sqrt{\\dfrac{wu}{v}}\n\t\t\t\t\t\t& \\geq u+v+w\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Quiz $4$, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{n}$ be positive real numbers satisfying $a_{1}+\\ldots+a_{n}=1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(a_{1}a_{2}+\\ldots+a_{n}a_{1})\\left(\\dfrac{a_{1}^{2}}{a_{2}^{2}+a_{2}}+\\ldots+\\dfrac{a_{n}^{2}}{a_{1}^{2}+a_{1}}\\right)\n\t\t\t\t\t\t& \\geq \\dfrac{n}{n+1}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Quiz $5$, problem $3$]\n\t\t\tFind the smallest constant $k$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{x}{\\sqrt{x+y}}+\\dfrac{y}{\\sqrt{y+z}}+\\dfrac{z}{\\sqrt{z+x}}\n\t\t\t\t\t\t& \\leq k\\sqrt{x+y+z}\n\t\t\t\t\\end{align*}\n\t\t\tfor all positive real numbers $x,y,z$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, problem $3$]\n\t\t\tLet $n$ be a positive integer and $a_{1},\\ldots,a_{n}$ be real numbers. Prove that there exist real numbers $b_{1},\\ldots,b_{n}$ such that $a_{i}-b_{i}$ is a positive integer for $1\\leq i\\leq n$ and\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{1\\leq i<j\\leq n}(b_{i}-b_{j})^{2}\n\t\t\t\t\t\t& \\leq \\dfrac{n^{2}-1}{12}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, problem $8$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be positive real numbers such that $x_{1}+\\ldots+x_{n}=1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}\\sqrt{x_{i}}\\right)\\left(\\sum_{i=1}^{n}\\dfrac{1}{\\sqrt{1+x_{i}}}\\right)\n\t\t\t\t\t\t& \\leq\\dfrac{n^{2}}{\\sqrt{n+1}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, problem $11$]\n\t\t\tGiven positive real numbers $x,y,z$ such that $x+y+z=1$, prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{xy}{\\sqrt{xy+yz}}+\\dfrac{yz}{\\sqrt{yz+zx}}+\\dfrac{zx}{\\sqrt{zx+xy}}\n\t\t\t\t\t\t& \\leq\\dfrac{\\sqrt{2}}{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$ Team Selection Test, problem $4$]\n\t\t\tLet $a_{1},\\ldots,a_{6};b_{1},\\ldots,b_{6};c_{1},\\ldots,c_{6}$ be permutations of $1,\\ldots,6$. Find the minimum value of $\\sum_{i=1}^{6}a_{i}b_{i}c_{i}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$ Quiz $1$, problem $2$]\n\t\t\tLet $a,b,c$ be non-negative real numbers such that $ab+bc+ca=\\frac{1}{3}$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{a^{2}-bc+1}+\\dfrac{1}{b^{2}-ca+1}+\\dfrac{1}{c^{2}-ab+1}\n\t\t\t\t\t\t& \\leq 3\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$ Quiz $2$, problem $3$]\n\t\t\tLet $a,b,c,d$ be positive real numbers such that $abcd=1$. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{(1+a)^{2}}+\\dfrac{1}{(1+b)^{2}}+\\dfrac{1}{(1+c)^{2}}+\\dfrac{1}{(1+d)^{2}}\n\t\t\t\t\t\t& \\geq 1\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2004$ Quiz $4$, problem $2$]\n\t\t\tFind the greatest positive real number $k$ such that for any positive real numbers $a,b,c,d$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(a+b+c)\\left(3^{4}(a+b+c+d)^{5}+2^{4}(a+b+c+2d)^{5}\\right)\n\t\t\t\t\t\t& \\geq kabcd^{3}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$ Quiz $1$, problem $1$]\n\t\t\t$x,y,z$ are positive real numbers such that $x+y+z=xyz$. Find the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx^{7}(yz-1)+y^{7}(zx-1)+z^{7}(xy-1)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$ Quiz $2$, problem $3$]\n\t\t\tLet $n$ be a positive integer and the roots of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(z)\n\t\t\t\t\t\t& = z^{n}+a_{1}z^{n-1}+\\ldots+a_{n}\n\t\t\t\t\\end{align*}\n\t\t\tbe $z_{1},\\ldots,z_{n}$, where $a_{1},\\ldots,a_{n}$ are complex numbers. If $\\sum_{i=1}^{n}|a_{i}|^{2}\\leq 1$, then prove that $\\sum_{i=1}^{n}|z_{i}|^{2}\\leq n$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$ Quiz $6$, problem $1$]\n\t\t\tLet $n$ be a positive integer and $a_{1},\\ldots,a_{n},x$ be real numbers. Define\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tg(x)\n\t\t\t\t\t\t& = \\sum_{i=1}^{n}a_{i}\\cos{ix}\n\t\t\t\t\\end{align*}\n\t\t\tIf $g(x)\\geq -1$ for all real numbers $x$, then prove that $\\sum_{i=1}^{n}a_{i}\\leq n$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2003$ Quiz $8$, problem $3$]\n\t\t\tLet $n\\geq 2$ be an integer and $a_{1},\\ldots,a_{n}$ be positive real numbers not all of which are equal such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{a_{i}^{2n}}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}a_{i}^{2n}\\right)-n^{2}\\sum_{1\\leq i < j\\leq n}\\left(\\dfrac{a_{i}}{a_{j}}-\\dfrac{a_{j}}{a_{i}}\\right)^{2}\n\t\t\t\t\t\t& > n^{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2002$ Team Selection Test, problem $6$]\n\t\t\tLet\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(x_{1},x_{2},x_{3})\t\n\t\t\t\t\t\t& = -2(x_{1}^{3}+x_{2}^{3}+x_{3}^{3})+3x_{1}^{2}(x_{2}+x_{3})+3x_{2}^{2}(x_{3}+x_{1})+3x_{3}^{2}(x_{1}+x_{2})-12x_{1}x_{2}x_{3}\n\t\t\t\t\\end{align*}\n\t\t\tFor real numbers $r,s,t$, define\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tg(r,s,t)\n\t\t\t\t\t\t& = \\max\\limits_{t\\leq x_{3}\\leq t+2}|f(r, r+2, x_{3})+s|\n\t\t\t\t\\end{align*}\n\t\t\tFind the minimum value of $g(r,s,t)$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2001$ Team Selection Test, problem $4$]\n\t\t\tLet $n>3$ be an integer. The real numbers $x_{1},\\ldots,x_{n+2}$ satisfy the condition $0<x_{1}<\\ldots<x_{n+2}$. 
Find the minimum possible value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\left(\\sum_{i=1}^{n}\\dfrac{x_{i+1}}{x_{i}}\\right)\\left(\\sum_{i=1}^{n}\\dfrac{x_{i+2}}{x_{i+1}}\\right)}{\\left(\\sum_{i=1}^{n}\\dfrac{x_{i+1}x_{i+2}}{x_{i+1}^{2}+x_{i}x_{i+2}}\\right)\\left(\\sum_{i=1}^{n}\\dfrac{x_{i+1}^{2}+x_{i}x_{i+2}}{x_{i}x_{i+1}}\\right)}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2001$ Team Selection Test, problem $6$]\n\t\t\tFind the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\limits_{1\\leq x\\leq 3}|x^{3}-ax^{2}-bx-c|\n\t\t\t\t\\end{align*}\n\t\t\tas $a,b,c$ run through all real numbers.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1999$ Team Selection Test, problem $1$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be non-negative real numbers such that $x_{1}+\\ldots+x_{n}=1$. Find the largest possible value of $\\sum_{i=1}^{n}(x_{i}^{4}-x_{i}^{5})$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1998$ Team Selection Test, problem $3$]\n\t\t\tFor a fixed real number $\\theta\\in\\left[0,\\frac{\\pi}{2}\\right]$, find the smallest positive real number $a$ for which\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{\\sqrt{a}}{\\cos{\\theta}}+\\dfrac{\\sqrt{a}}{\\sin{\\theta}}\n\t\t\t\t\t\t& > 1\n\t\t\t\t\\end{align*}\n\t\t\tand there exists $x\\in\\left[1-\\frac{\\sqrt{a}}{\\sin{\\theta}},\\cos{\\theta}\\right]$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left((1-x)\\sin{\\theta}-\\sqrt{a-x^{2}\\cos^{2}{\\theta}}\\right)^{2}+\\left(x\\cos{\\theta}-\\sqrt{a-(1-x)^{2}\\sin^{2}{\\theta}}\\right)^{2}\n\t\t\t\t\t\t& \\leq a\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1996$ Team Selection Test, problem $5$]\n\t\t\tLet $n\\geq 4$ be an integer and $\\alpha_{1},\\ldots,\\alpha_{n},\\beta_{1},\\ldots,\\beta_{n}$ be real numbers such that $\\sum_{i=1}^{n}\\alpha_{i}^{2}<1$ and $\\sum_{i=1}^{n}\\beta_{i}^{2}<1$. Define\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tA^{2}\n\t\t\t\t\t\t& = 1-\\sum_{i=1}^{n}\\alpha_{i}^{2}\\\\\n\t\t\t\t\tB^{2}\n\t\t\t\t\t\t& = 1-\\sum_{i=1}^{n}\\beta_{i}^{2}\\\\\n\t\t\t\t\tW\n\t\t\t\t\t\t& = \\dfrac{1}{2}\\left(1-\\sum_{i=1}^{n}\\alpha_{i}\\beta_{i}\\right)^{2}\n\t\t\t\t\\end{align*}\n\t\t\tFind all real numbers $\\lambda$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx^{n}+\\lambda(x^{n-1}+\\ldots+x^{3}+Wx^{2}+ABx+1)\n\t\t\t\t\t\t& = 0\n\t\t\t\t\\end{align*}\n\t\t\tonly has real roots.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1995$ Team Selection Test, problem $4$]\n\t\t\tLet $n$ be a positive integer and $r_{1},\\ldots,r_{n},s_{1},\\ldots,s_{n},t_{1},\\ldots,t_{n},u_{1},\\ldots,u_{n},v_{1},\\ldots,v_{n}$ be $5n$ real numbers, all greater than $1$. Define\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tR\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}r_{i}\\\\\n\t\t\t\t\tS\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}s_{i}\\\\\n\t\t\t\t\tT\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}t_{i}\\\\\n\t\t\t\t\tU\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}u_{i}\\\\\n\t\t\t\t\tV\n\t\t\t\t\t\t& = \\dfrac{1}{n}\\sum_{i=1}^{n}v_{i}\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\prod_{i=1}^{n}\\dfrac{r_{i}s_{i}t_{i}u_{i}v_{i}+1}{r_{i}s_{i}t_{i}u_{i}v_{i}-1}\n\t\t\t\t\t\t& \\geq \\left(\\dfrac{RSTUV+1}{RSTUV-1}\\right)^{n}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1993$ Team Selection Test, problem $2$]\n\t\t\tLet $n\\geq 2$ be an integer and $a,b,c,d$ be positive integers such that $\\frac{a}{b}+\\frac{c}{d}<1$ and $a+c\\leq n$. 
Find the maximum value of $\\frac{a}{b}+\\frac{c}{d}$ for a fixed $n$.\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "54483c76e0c1557f4d53644b418d74211ea0691c", "size": 32255, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chmo.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "chmo.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chmo.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2396593674, "max_line_length": 344, "alphanum_fraction": 0.5714772903, "num_tokens": 14373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8080672204860316, "lm_q1q2_score": 0.590744473646017}}
{"text": "\\section{Question 2}\nSystem is:\n$$\nG(s) = \\frac{-s + 3}{s(s+2)(s^2+2s+4)}\n$$\n\\subsection{part a}\nIntegral plus Time Delay:\n$$\nG = e^{-1.15s}\\dfrac{0.375}{s(0.378s +  1)}       \n$$\n\\begin{figure}[H]\n    \\caption{system and ITD step responde}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/IDT1.png}\n\\end{figure}\nNow we use foipdt function.\n\\begin{figure}[H]\n    \\caption{step responde with foipdt PD controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/foipdt_PD.png}\n\\end{figure}\n\\begin{figure}[H]\n    \\caption{step responde with foipdt PID controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/foipdt_PID.png}\n\\end{figure}\n\\subsection{part b}\nIntegral plus Time Delay:\n$$\nG = e^{-1.33s}\\dfrac{0.375}{s}      \n$$\n\\begin{figure}[H]\n    \\caption{system and ITD step responde}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/IDT2.png}\n\\end{figure}\nNow we use ipdtctrl function.\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ISE PD controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pd1.png}\n\\end{figure}\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ITSE PD controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pd2.png}\n\\end{figure}\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ISTSE PD controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pd3.png}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ISE PID controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pid1.png}\n\\end{figure}\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ITSE PID controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pid2.png}\n\\end{figure}\n\\begin{figure}[H]\n    \\caption{step responde with ipdtctrl for ISTSE PID controller}\n    \\centering\n    \\includegraphics[width=12cm]{../Figure/Q2/pid3.png}\n\\end{figure}\n\n\\subsection{conclusion}\nFirt ITD function doesn't fit well but PD controller work very good but PID controller work very bad too slow for setteling time and has a high overshoot. 
\\subsection{conclusion}\nThe first ITD model does not fit the system very well, but the PD controller designed from it works very well; the PID controller works badly: it is too slow in settling time and has a high overshoot. The system already contains an integrator, so we do not need another integrator in the controller; adding one only makes the system slow and oscillatory.\nThe second ITD model fits very well, but the PD and PID controllers designed from it do not work well.\nPD is far better than PID; PID is too slow and oscillatory.\nIn conclusion, the PD controller designed with the first method is the best for this system.", "meta": {"hexsha": "e6ca5ab0a600ec1e97dfbf8f0b59c40b7074717f", "size": 2436, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW VI/Report/Q2/Q2.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW VI/Report/Q2/Q2.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW VI/Report/Q2/Q2.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9189189189, "max_line_length": 267, "alphanum_fraction": 0.7126436782, "num_tokens": 762, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672043084051, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5907444712889512}}
{"text": "\\documentclass[a4paper]{article}\n\n\\input{temp}\n\n\\begin{document}\n\n\\title{Vector Calculus}\n\\date{Lent 2016}\n\n\\maketitle\n\n\\newpage\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Differential operators}\n\\subsection{grad, div, curl}\n\nView $\\nabla f=\\left(\\frac{\\partial f}{\\partial x},\\frac{\\partial f}{\\partial y},\\frac{\\partial f}{\\partial z}\\right)$ as obtained from $f$ by applying the vector operator:\\\\\n$\\nabla = \\left(\\frac{\\partial}{\\partial x},\\frac{\\partial}{\\partial y},\\frac{\\partial}{\\partial z}\\right) = \\mathbf{e_{i}} \\frac{\\partial}{\\partial x_{i}}$,\nwhere $\\mathbf{e_{i}}$ is an orthonormal, right-handed system,\n$\\mathbf{e_{i}}\\cdot \\mathbf{e_{j}} = \\delta_{ij}$, $\\mathbf{e_{i}}\\times \\mathbf{e_{j}} = \\epsilon_{ijk} \\mathbf{e_{k}}$.\\\\\nCall it \\emph{grad}, write $\\nabla f = \\grad f$.\\\\\nNow define\n\\begin{equation*}\n\\begin{aligned}\n\\nabla \\cdot \\mathbf{F} &= \\left(\\mathbf{e_{i}}\\frac{\\partial}{\\partial x_{i}}\\right)\\left(F_{j} \\mathbf{e_{j}}\\right)\\\\\n&= \\frac{\\partial F_{i}}{\\partial x_{i}}\\\\\n&= \\frac{\\partial F_1}{\\partial x} + \\frac{\\partial F_2}{\\partial y} + \\frac{\\partial F_3}{\\partial z}.\n\\end{aligned}\n\\end{equation*}\nAlso written as $\\vdiv \\mathbf{F}$, called \\emph{divergence} of $\\mathbf{F}$.\\\\\nAnd\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\nabla \\times\\mathbf{F} &= \\left(\\mathbf{e_{i}}\\frac{\\partial}{\\partial x_{i}}\\right)\\times\\left(F_j\\mathbf{e_{j}}\\right)\\\\\n&= \\epsilon_{ijk} \\frac{\\partial F_j}{\\partial x_i}\\mathbf{e_k}.\n\\end{aligned}\n\\end{equation*}\nAlso written as $\\curl \\mathbf{F}$.\\\\\nwrite $\\left(\\partial_1,\\partial_2,\\partial_3\\right)=\\left(\\frac{\\partial}{\\partial x},\\frac{\\partial}{\\partial y},\\frac{\\partial}{\\partial z}\\right)$, then\n\\begin{equation*}\n\\begin{aligned}\n\\curl \\mathbf{F} &=\n\\begin{vmatrix}\n\\mathbf{e_1}& \\mathbf{e_2}& \\mathbf{e_3}\\\\\n\\partial_1& \\partial_2& \\partial_3\\\\\nF_1& F_2& F_3\n\\end{vmatrix}\\\\\n&=\\left(\\partial_2 F_3-\\partial_3 F_2,\\partial_3 F_1-\\partial_1 F_3,\\partial_1 F_2,\\partial_2 F_1\\right).\n\\end{aligned}\n\\end{equation*}\nNote: with $\\nabla$, order is important. 
e.g.:\\\\\n$\\mathbf{F}\\cdot\\nabla=F_i \\frac{\\partial}{\\partial x_i}$ is a scalar operator;\\\\\n$\\nabla\\cdot\\mathbf{F}=\\frac{\\partial F_i}{\\partial x_i}$ is a scalar function.\\\\\n$\\grad,\\vdiv,\\curl$ are all linear operators.\\\\\nLeibniz properties hold for all of them:\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\nabla\\left(fg\\right)=\\left(\\nabla f\\right)g+f\\left(\\nabla g\\right);\\\\\n\\nabla\\cdot\\left(f\\mathbf{F}\\right)=\\left(\\nabla f\\right)\\cdot\\mathbf{F}+f\\vdiv \\mathbf{F};\\\\\n\\curl\\left(f\\mathbf{F}\\right)=\\left(\\nabla f\\right)\\times\\mathbf{F}+f\\curl\\mathbf{F}.\n\\end{aligned}\n\\end{equation*}\nAlso many more (can be proven by index notation):\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\nabla\\cdot\\left(\\mathbf{F}\\times\\mathbf{G}\\right)&=\\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{G}-\\mathbf{F}\\cdot\\left(\\nabla\\times\\mathbf{G}\\right);\\\\\n\\nabla\\times\\left(\\mathbf{F}\\times\\mathbf{G}\\right)&=\\mathbf{F}\\left(\\nabla\\cdot\\mathbf{G}\\right)-\\mathbf{G}\\left(\\nabla\\cdot\\mathbf{F}\\right)+\\left(\\mathbf{G}\\cdot\\nabla\\right)\\mathbf{F}-\\left(\\mathbf{F}\\cdot\\nabla\\right)\\mathbf{G};\\\\\n\\nabla\\left(\\mathbf{F}\\cdot\\mathbf{G}\\right) &= \\mathbf{F}\\times\\left(\\nabla\\times\\mathbf{G}\\right)+\\mathbf{G}\\times\\left(\\nabla\\times\\mathbf{F}\\right)+\\left(\\mathbf{F}\\cdot\\nabla\\right)\\mathbf{G}+\\left(\\mathbf{G}\\cdot\\nabla\\right)\\mathbf{F}.\n\\end{aligned}\n\\end{equation*}\n\n\\begin{eg}\nRecall $\\left(\\mathbf{a}\\times\\mathbf{b}\\right)_k=\\epsilon_{ijk}a_i b_j$.\\\\\nSo:\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\nabla\\cdot\\left(\\mathbf{F}\\times\\mathbf{G}\\right)&=\\frac{\\partial}{\\partial x_i}\\left(\\mathbf{F}\\times\\mathbf{G}\\right)_i\\\\\n&=\\frac{\\partial}{\\partial x_i}\\left(\\epsilon_{ijk}F_j G_k\\right)\\\\\n&=\\epsilon_{ijk}\\frac{\\partial F_j}{\\partial x_i} G_k + \\epsilon_{ijk} F_j \\frac{\\partial G_k}{\\partial x_i}\\\\\n&=\\left(\\nabla\\times\\mathbf{F}\\right)_{k} G_{k}-F_j \\epsilon_{ikj}\\frac{\\partial G_k}{\\partial x_i}\\\\\n&=\\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{G}-\\mathbf{F}\\cdot\\left(\\nabla\\times\\mathbf{G}\\right).\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n
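These identities are also easy to sanity-check with a computer algebra system. Below is a minimal sketch using SymPy's vector module; the fields $\\mathbf{F}$ and $\\mathbf{G}$ are arbitrary smooth fields chosen only for the check:\n\\begin{verbatim}\nimport sympy as sp\nfrom sympy.vector import CoordSys3D, divergence, curl\n\n# symbolic check of  div(F x G) = (curl F).G - F.(curl G)\nN = CoordSys3D('N')\nx, y, z = N.x, N.y, N.z\n\nF = x*y*N.i + y*z**2*N.j + sp.sin(x)*N.k\nG = z*N.i + x**2*N.j + (x + y)*N.k\n\nlhs = divergence(F.cross(G))\nrhs = curl(F).dot(G) - F.dot(curl(G))\nprint(sp.simplify(lhs - rhs))  # expect 0\n\\end{verbatim}\n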
\\begin{eg}\nLet $\\mathbf{r}=\\left(x,y,z\\right)$. $r=\\sqrt{x^2+y^2+z^2}$.\\\\\nCalculate $\\vdiv \\left(r^\\alpha\\mathbf{r}\\right) = \\left(\\nabla\\left(r^\\alpha\\right)\\right)\\cdot\\mathbf{r}+r^\\alpha\\left(\\nabla\\cdot\\mathbf{r}\\right)$.\\\\\nBut\n\\begin{equation*}\n\\begin{aligned}\n\\nabla r^\\alpha &= \\alpha r^{\\alpha-1} \\nabla r\\\\\n&=\\alpha r^{\\alpha-1}\\left(\\frac{1}{r}\\right)\\left(x,y,z\\right)\\\\\n&=\\alpha r^{\\alpha-2}\\mathbf{r},\\\\\n\\nabla\\cdot\\mathbf{r} &= \\frac{\\partial x}{\\partial x} + \\frac{\\partial y}{\\partial y} + \\frac{\\partial z}{\\partial z} = 3.\n\\end{aligned}\n\\end{equation*}\n\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\vdiv\\left(r^\\alpha \\mathbf{r}\\right)&=\\left(\\alpha r^{\\alpha-2}\\mathbf{r}\\right)\\cdot\\mathbf{r}+3r^\\alpha\\\\\n&=\\alpha r^\\alpha + 3r^\\alpha;\\\\\n\\curl\\left(r^\\alpha\\mathbf{r}\\right)&=\\nabla\\left(r^\\alpha\\right)\\times\\mathbf{r}+r^\\alpha\\left(\\nabla\\times\\mathbf{r}\\right)\\\\\n&=\\mathbf{0}+\\mathbf{0}\\\\\n&=\\mathbf{0}.\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n\n\\subsection{Second order derivatives}\nWe have\\\\\n$\\nabla\\times\\left(\\nabla f\\right)=\\mathbf{0}$ for any $f$ -- \"$\\curl\\grad=0$\";\\\\\n$\\nabla\\cdot\\left(\\nabla\\times\\mathbf{A}\\right)=0$ for any $\\mathbf{A}$ -- \"$\\vdiv\\curl=0$\".\\\\\nConverse results also hold for suitable regions $D \\subset \\R^n$.\\\\\n1) If $\\mathbf{F}$ is defined on all $\\R^3$ (or any simply connected region $D$), then $\\nabla\\times\\mathbf{F}=0 \\implies \\mathbf{F}=\\nabla\\varphi$ for some $\\varphi$.\\\\\n(freedom: $\\varphi \\to \\varphi +$ constant also works)\\\\\n2) If $\\mathbf{F}$ is defined on all $\\R^3$ (or any $D \\subset \\R^3$ such that any sphere in $D$ can be contracted continuously to a point in $D$), then $\\vdiv \\mathbf{F}=0 \\implies \\mathbf{F}=\\nabla\\times\\mathbf{A}$.\\\\\n(freedom: $\\mathbf{A}\\to\\mathbf{A}+\\nabla\\varphi$, called the gauge freedom of $\\mathbf{A}$)\\\\\n\n\\begin{defi}\nA vector field is called:\\\\\n1) \\emph{irrotational} if $\\nabla\\times\\mathbf{F}=0$;\\\\\n2) \\emph{conservative} if $\\mathbf{F}=\\nabla\\varphi$ for some $\\varphi$;\\\\\n3) \\emph{solenoidal} if $\\nabla\\cdot\\mathbf{F}=0$.\n\\end{defi}\n\n\\textbf{Laplacian operator}, $\\nabla^2=\\nabla\\cdot\\nabla=\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}+\\frac{\\partial^2}{\\partial z^2}$.\n\nFor scalar fields $f$,\n\n\\begin{equation*}\n\\nabla^2f=\\nabla\\cdot\\left(\\nabla f\\right)=\\vdiv\\left(\\grad f\\right).\n\\end{equation*}\n\nFor vector fields $\\mathbf{F}=\\left(F_{1},F_{2},F_{3}\\right)$,\n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2\\mathbf{F}&=\\nabla\\left(\\nabla\\cdot\\mathbf{F}\\right)-\\nabla\\times\\left(\\nabla\\times\\mathbf{F}\\right)\\\\\n&=\\grad\\left(\\vdiv\\mathbf{F}\\right)-\\curl\\left(\\curl\\mathbf{F}\\right).\n\\end{aligned}\n\\end{equation*}\n\n\\begin{eg} (irrotational and conservative fields).\\\\\n\tConsider $\\mathbf{F}=\\left(-\\frac{y}{x^2+y^2},\\frac{x}{x^2+y^2},0\\right)$:\n\tCheck that $\\nabla\\times\\mathbf{F}=0$.\\\\\n\tConsider $f\\left(x,y,z\\right)=\\arctan \\frac{y}{x}$:\n\t$\\frac{\\partial f}{\\partial x}=-\\frac{y}{x^2+y^2},\\frac{\\partial f}{\\partial y}=\\frac{x}{x^2+y^2}$.\\\\\n\tIt looks like $\\mathbf{F}=\\nabla f$, but $\\arctan \\frac{y}{x}$ is not continuous on the whole of $D=\\R^3-(\\text{z axis})$.\\\\\n\tNote that $\\arctan \\frac{y}{x}=$ polar angle $\\varphi$ in the x-y plane.\\\\\n\tTake a closed curve in $D$ going once around the z axis, starting at a point $A$ where $\\varphi = 0$ and ending at $B=A$.\\\\\n\tInsist that $\\varphi$ varies continuously along the curve. Then must have $\\varphi=2\\pi$ at $B$. But $B=A$.\\\\\n\tNow $\\mathbf{F}$ is defined on $D$, so $\\mathbf{F}\\neq\\nabla f$ on all $D$ with $f$ smooth.\\\\\n\tSo $\\mathbf{F}$ is not conservative on all $D$ ($D$ is not simply connected).\\\\\n\tRemove the offending part, e.g. the z-x half plane with $x\\geq 0$:\\\\\n\t$D'=\\R^3-\\left\\{(\\text{z axis})\\cup(\\text{z-x plane},x\\geq 0)\\right\\}$.\\\\\n\tThen on $D'$, $\\mathbf{F}=\\nabla f$ everywhere and $f$ is smooth, i.e. $\\mathbf{F}$ is conservative on $D'$. Also $D'$ is simply connected.\n\t\n\\end{eg}\n
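The failure of $\\mathbf{F}$ to be conservative on $D$ can also be seen directly: the circulation of $\\mathbf{F}$ around any loop encircling the z axis is $2\\pi$, not $0$. A minimal SymPy sketch of this computation (the unit circle is an arbitrary choice of loop):\n\\begin{verbatim}\nimport sympy as sp\n\n# circulation of F = (-y/(x^2+y^2), x/(x^2+y^2), 0)\n# around the unit circle in the x-y plane\nt = sp.symbols('t')\nx, y = sp.cos(t), sp.sin(t)\nFx = -y/(x**2 + y**2)\nFy = x/(x**2 + y**2)\nintegrand = Fx*sp.diff(x, t) + Fy*sp.diff(y, t)  # simplifies to 1\nprint(sp.integrate(integrand, (t, 0, 2*sp.pi)))  # 2*pi\n\\end{verbatim}\n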
\\newpage\n\n\\section{Integral Theorems}\n\\subsection{Green's Theorem}\n\\begin{thm} (Green's)\\\\\n\tFor smooth functions $P\\left(x,y\\right)\\text{ and }Q\\left(x,y\\right)$,\\\\\n\t\\begin{equation*}\n\t\\int_{A} \\left(\\frac{\\partial Q}{\\partial x}-\\frac{\\partial P}{\\partial y}\\right) dA = \\int_{C} \\left(Pdx+Qdy\\right).\n\t\\end{equation*}\\\\\n\tWhere $A$ is a simple bounded region in the x-y plane, i.e. with boundary $\\partial A=C$ a piecewise smooth, non-self intersecting closed curve, and it is traversed \\emph{anticlockwise}.\\\\\n\tEquivalently, $A$ is on the left hand side of the direction of traverse.\n\\end{thm}\n\n\\begin{eg}\n\tLet $P=x^2 y, Q=xy^2$.\\\\\n\tGreen's theorem says:\\\\\n\t\\begin{equation*}\n\t\\int_{A} \\left(y^2-x^2\\right)dA = \\int_{C}\\left(x^2 ydx+xy^2 dy\\right)\n\t\\end{equation*}\n\tfor any simple $A$, where $C$ is the boundary of $A$ traversed anticlockwise.\\\\\n\\end{eg}\n\n\\begin{rem}\nGreen's theorem holds also for non-simple regions $A\\subseteq \\R^2$, with holes -- with $C=\\partial A$ consisting of a number of disconnected components.\\\\\nOrientations of $\\partial A$ must always be chosen to have $A$ on the left-hand-side for the direction of traverse.\\\\\n\\end{rem}\n\n\\subsection{Stokes' Theorem}\n\\begin{thm}(Stokes')\\\\\n\tFor a smooth vector field $\\mathbf{F}\\left(\\mathbf{r}\\right)$,\\\\\n\t\\begin{equation*}\n\t\\int_{S} \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot d\\mathbf{S} = \\int_{C}\\mathbf{F}\\cdot d\\mathbf{r}.\n\t\\end{equation*}\n\twhere $S$ is a smooth bounded surface with boundary $\\partial S=C$, a piecewise smooth, non-self intersecting closed curve (or a union of such), \\underline{and} $S$ and $C$ have \\emph{compatible orientations}, i.e. if $\\mathbf{n}$ is normal to $S$ and $\\mathbf{t}$ is unit tangent to $C$ (so that $d\\mathbf{S}=\\mathbf{n}\\,dS$, $d\\mathbf{r}=\\mathbf{t}\\,ds$, $s=$ arclength), then \"if you stand on the boundary with head up in the direction of $\\mathbf{n}$ and face the direction $\\mathbf{t}$, then $S$ is on the left hand side.\" $\\mathbf{t}\\times\\mathbf{n}$ (which is tangent to $S$ too) points \\underline{out} from $S$ along $C$.\n\\end{thm}\n\n\\begin{eg}\nConsider $S=$ section of sphere of radius $a$ for $0\\leq \\theta \\leq \\alpha$.\\\\\nSpherical polar coordinates ($r=a,\\theta,\\varphi$).\\\\\nOrient $S$ with outward pointing normal $\\mathbf{n}=\\mathbf{e_{r}}$.\\\\\nThen compatible orientation for $C$ is anticlockwise looking down from the north pole, i.e. 
in the direction of \\emph{increasing} $\\varphi$ when $C$ is parameterised by $\\varphi$.\\\\\n$S$ is $\\theta : 0\\to \\alpha$, $\\varphi:0\\to 2\\pi$ in $\\left(\\theta,\\varphi\\right)$ parameters.\\\\\n$\\mathbf{r}\\left(\\theta,\\varphi\\right) = \\left(a\\sin\\theta\\cos\\varphi,a\\sin\\theta\\sin\\varphi,a\\cos\\theta\\right)$.\\\\\nSo find the partial derivatives of $\\mathbf{r}$ with respect to $\\theta$ and $\\varphi$, and get\n$|\\frac{\\partial \\mathbf{r}}{\\partial\\theta}\\times\\frac{\\partial\\mathbf{r}}{\\partial \\varphi}|=a^2 \\sin\\theta$,\\\\\nso $d\\mathbf{S}=a^2 \\sin\\theta \\mathbf{e_{r}}d\\theta d\\varphi$.\\\\\nFor $C$: $r=a$ and $\\theta=\\alpha$:\\\\\n$\\mathbf{r}\\left(\\varphi\\right)=\\left(a\\sin\\alpha\\cos\\varphi,a\\sin\\alpha\\sin\\varphi,a\\cos\\alpha\\right)$.\\\\\n$d\\mathbf{r}=\\frac{d\\mathbf{r}}{d\\varphi}d\\varphi=a\\sin\\alpha\\left(-\\sin\\varphi,\\cos\\varphi,0\\right)d\\varphi$.\\\\\nStokes says:\n\\begin{equation*}\n\\int_{S} \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot \\mathbf{dS} = \\int_{C}\\mathbf{F}\\cdot\\mathbf{dr}.\n\\end{equation*}\nSo e.g. take $\\mathbf{F}=\\left(0,xz,0\\right)$,\\\\\nhave $\\nabla\\times\\mathbf{F}=\\left(-x,0,z\\right)$.\\\\\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}&=\\left(-x,0,z\\right)\\cdot\\mathbf{e_{r}}\\, a^{2}\\sin\\theta\\, d\\theta d\\varphi\\\\\n&=\\left(-a\\sin\\theta\\cos\\varphi,0,a\\cos\\theta\\right)\\cdot\\left(\\sin\\theta\\cos\\varphi,\\sin\\theta\\sin\\varphi,\\cos\\theta\\right) a^{2}\\sin\\theta\\, d\\theta d\\varphi\\\\\n&=\\left(-a^3 \\sin^3 \\theta \\cos^2 \\varphi + a^3 \\sin\\theta\\cos^2 \\theta\\right) d\\theta d\\varphi\\\\\n\\end{aligned}\n\\end{equation*}\n\\begin{equation*}\n\\begin{aligned}\n\\mathbf{F}\\cdot\\mathbf{dr}&=\\left(0,xz,0\\right)\\cdot\\left(-\\sin\\varphi,\\cos\\varphi,0\\right)a\\sin\\alpha d\\varphi\\\\\n&=a^3 \\sin^2 \\alpha\\cos\\alpha\\cos^2 \\varphi d\\varphi\\\\\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\int_{S} \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}=\\int_{\\varphi=0}^{2\\pi} \\int_{\\theta=0}^\\alpha \\left(-a^3 \\sin^3 \\theta\\cos^2 \\varphi + a^3 \\sin\\theta\\cos^2 \\theta\\right) d\\theta d\\varphi\\\\\n\\int_{C} \\mathbf{F}\\cdot\\mathbf{dr}=\\int_{0}^{2\\pi} a^3 \\sin^2 \\alpha\\cos\\alpha\\cos^2 \\varphi d\\varphi\\\\\n\\end{aligned}\n\\end{equation*}\n\nCheck both $= \\pi a^3 \\sin^2 \\alpha\\cos\\alpha$.\n\\end{eg}\n
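Both sides of this example can also be evaluated by computer algebra; here is a minimal SymPy sketch of the check (it confirms that the surface and line integrals agree):\n\\begin{verbatim}\nimport sympy as sp\n\n# Stokes' theorem check for the spherical-cap example above\na, alpha, theta, phi = sp.symbols('a alpha theta phi', positive=True)\n\nsurf = sp.integrate(-a**3*sp.sin(theta)**3*sp.cos(phi)**2\n                    + a**3*sp.sin(theta)*sp.cos(theta)**2,\n                    (theta, 0, alpha), (phi, 0, 2*sp.pi))\nline = sp.integrate(a**3*sp.sin(alpha)**2*sp.cos(alpha)*sp.cos(phi)**2,\n                    (phi, 0, 2*sp.pi))\nprint(sp.simplify(surf - line))  # expect 0\n\\end{verbatim}\n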
\\subsection{Gauss' Theorem}\n\\begin{thm}(Gauss', aka divergence theorem)\\\\\nFor a smooth vector field $\\mathbf{F}\\left(\\mathbf{r}\\right)$,\n\\begin{equation*}\n\\begin{aligned}\n\\int_{V} \\nabla\\cdot\\mathbf{F} dV = \\int_{S} \\mathbf{F}\\cdot\\mathbf{dS}\\\\\n\\end{aligned}\n\\end{equation*}\nwhere $V$ is a bounded volume, with boundary $\\partial V=S$ a piecewise smooth closed surface (or union of such), and with $\\mathbf{n}$ pointing \\emph{outwards} from $V$.\\\\\n\n(2D Gauss' theorem)\\\\\nFor a smooth vector field $\\mathbf{G}=\\left(G_{1},G_{2}\\right)$ on $\\R^2$,\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_{A} \\nabla\\cdot\\mathbf{G} dA = \\int_{C} \\mathbf{G}\\cdot\\mathbf{n} ds\\\\\n\\end{aligned}\n\\end{equation*}\nwhere $A$ is a bounded region in $\\R^2$, $C=\\partial A$ is its boundary, $ds$ is the (scalar) arclength element on $C$, and $\\mathbf{n}$ is the outward pointing normal along $C$.\\\\\n\nNote: if $C$ is $\\left(x\\left(s\\right),y\\left(s\\right)\\right)$ parameterised by arclength with $s$ increasing anticlockwise,\\\\\nunit tangent:\\\\\n$\\mathbf{t}=\\left(\\frac{dx}{ds},\\frac{dy}{ds}\\right)$,\\\\\nunit normal:\\\\\n$\\mathbf{n}=\\left(\\frac{dy}{ds},-\\frac{dx}{ds}\\right)$\\\\\nhas the correct sign to be outward pointing.\\\\\n\nCheck:\\\\\non top: $\\frac{dx}{ds}<0$, so $\\mathbf{n}$ has positive $y$ component (correct).\\\\\non bottom: $\\frac{dx}{ds}>0$, so $\\mathbf{n}$ has negative $y$ component (correct).\\\\\nSo\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\mathbf{n}\\cdot\\mathbf{G}=G_{1}\\frac{dy}{ds}-G_{2}\\frac{dx}{ds}\\\\\n\\end{aligned}\n\\end{equation*}\nso 2D Gauss says:\n\\begin{equation*}\n\\begin{aligned}\n\\int_{A} \\left(\\frac{\\partial G_{1}}{\\partial x}+\\frac{\\partial G_{2}}{\\partial y}\\right) dA = \\int_{C} G_{1} dy - G_{2} dx\\\\\n\\end{aligned}\n\\end{equation*}\n\\end{thm}\n\n\\subsection{Proof of theorems}\nRelating and proving all the theorems:\\\\\nstrategy: we'll show (in order) that:\\\\\n(a) Stokes $\\implies$ Green's;\\\\\n(b) Green $\\iff$ 2D Gauss;\\\\\n(c) Prove 2D Gauss;\\\\\n(d) Green $\\implies$ Stokes.\\\\\n\n\\begin{proof}\n(a) consider just simple regions:\\\\\nIn Stokes' theorem, let $S$ be a region $A$ in the x-y plane of 3D with $\\mathbf{n}=\\mathbf{k}$ (positive $z$ direction).\\\\\nThen the consistent orientation of $\\partial A$ is anticlockwise.\\\\\nConsider $\\mathbf{F}=\\left(P,Q,0\\right)$,\\\\\nso $\\nabla\\times\\mathbf{F}=\\left(0,0,\\frac{\\partial Q}{\\partial x}-\\frac{\\partial P}{\\partial y}\\right)$,\\\\\nso $\\mathbf{F}\\cdot\\mathbf{dr}=Pdx+Qdy$, and\\\\\n$\\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{n}=\\frac{\\partial Q}{\\partial x}-\\frac{\\partial P}{\\partial y}$.\\\\\nStokes gives:\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_{A} \\left(\\frac{\\partial Q}{\\partial x}-\\frac{\\partial P}{\\partial y}\\right)dA = \\int_{C} Pdx+Qdy\\\\\n\\end{aligned}\n\\end{equation*}\ni.e. 
Green's theorem.\\\\\n\n(b) Green's theorem is \\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_{A} \\left(\\frac{\\partial Q}{\\partial x}-\\frac{\\partial P}{\\partial y}\\right)dA = \\int_{C} Pdx+Qdy\\\\\n\\end{aligned}\n\\end{equation*}\n2D Gauss (with $\\mathbf{G}=\\left(G_{1},G_{2}\\right)$) is\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_{A} \\left(\\frac{\\partial G_{1}}{\\partial x}+\\frac{\\partial G_{2}}{\\partial y}\\right)dA = \\int_{C} G_{1} dy-G_{2} dx\\\\\n\\end{aligned}\n\\end{equation*}\nMake the correspondence/replacements\\\\\n$Q\\to G_{1}$, $P\\to -G_{2}$;\\\\\nthen each gives the other.\\\\\n\n(c) Proving the 2D Gauss (divergence) theorem:\n\\begin{equation}\\label{eq:1}\n\\begin{aligned}\n\\int_{A} \\nabla\\cdot\\mathbf{G} dA=\\int_{C=\\partial A} \\mathbf{G}\\cdot\\mathbf{n} ds\\\\\n\\end{aligned}\n\\end{equation}\nwith $\\mathbf{n}$ the outward pointing normal on $C$.\\\\\nSuppose we can prove (\\ref{eq:1}) for $\\mathbf{G}=b\\left(x,y\\right)\\mathbf{j}$; then the same argument (with the roles of $x$ and $y$ interchanged) will give (\\ref{eq:1}) for $\\mathbf{H}=a\\left(x,y\\right)\\mathbf{i}$ ($\\vdiv \\mathbf{G}=\\frac{\\partial b}{\\partial y}$, $\\vdiv \\mathbf{H} = \\frac{\\partial a}{\\partial x}$).\\\\\nThen since both sides of (\\ref{eq:1}) are linear in $\\mathbf{G}$, it must hold for any $\\mathbf{G}=a\\left(x,y\\right)\\mathbf{i}+b\\left(x,y\\right)\\mathbf{j}$.\\\\\nSo let $\\mathbf{G}=G\\left(x,y\\right)\\mathbf{j}$:\\\\\nuse vertical strips $Y_{x}$ for the area integral.\\\\\n\\includegraphics[scale=0.20]{VC01}\\\\\nRecall the 2D Gauss theorem:\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_A \\nabla\\cdot\\mathbf{G}dA = \\int_{C=\\partial A} \\mathbf{G}\\cdot\\mathbf{n}ds\n\\end{aligned}\n\\end{equation*}\nRemark: $\\int_C \\mathbf{G}\\cdot\\mathbf{n}ds$ is different from $\\int_C \\mathbf{G}\\cdot\\mathbf{dr} = \\int_C \\mathbf{G}\\cdot\\mathbf{t}ds$ (e.g. as in Stokes' theorem).\\\\\n
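Before carrying out the strip argument, here is a quick symbolic sanity check of (\\ref{eq:1}) on the unit disc for the arbitrarily chosen field $\\mathbf{G}=(x^{3},y^{3})$, written in polar coordinates (a minimal SymPy sketch):\n\\begin{verbatim}\nimport sympy as sp\n\n# 2D divergence theorem check for G = (x^3, y^3) on the unit disc\nr, t = sp.symbols('r t', nonnegative=True)\ndiv_G = 3*(r*sp.cos(t))**2 + 3*(r*sp.sin(t))**2  # div G = 3x^2 + 3y^2\narea = sp.integrate(div_G * r, (r, 0, 1), (t, 0, 2*sp.pi))\n# on r = 1: G.n = x^4 + y^4 = cos^4 t + sin^4 t, and ds = dt\nflux = sp.integrate(sp.cos(t)**4 + sp.sin(t)**4, (t, 0, 2*sp.pi))\nprint(area, flux)  # both 3*pi/2\n\\end{verbatim}\n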
Now, integrating over the strips (where $G_{+}\\left(x\\right)$ and $G_{-}\\left(x\\right)$ denote the values of $G$ on the upper and lower boundary curves $C_{+}$ and $C_{-}$):\n\\begin{equation*}\n\\begin{aligned}\n\\int_A \\nabla\\cdot\\mathbf{G}dA &= \\int_{x_L}^{x_R} \\left(\\int_{Y_x} \\frac{\\partial G}{\\partial y}dy\\right) dx\\\\\n&= \\int_{x_L}^{x_R} \\left(G_+ \\left(x\\right) - G_- \\left(x\\right)\\right) dx\\\\\n&=\\int_{x_L}^{x_R} G_+ \\left(x\\right)dx - \\int_{x_L}^{x_R} G_- \\left(x\\right)dx\n\\end{aligned}\n\\end{equation*}\nChange variable to $s$ on $C_+$, i.e. opposite orientation to $x$ ($s: R\\to L$), for the first integral, and change variable to $s$ on $C_-$, i.e. same orientation as $x$ ($s: L\\to R$), for the second integral:\n\\begin{equation*}\n\\begin{aligned}\n\\int_{x_L}^{x_R} G_+ dx = \\int_{S_2}^{S_1} G_+ \\frac{dx}{ds} ds &= - \\int_{C_+} \\left(G_+ \\frac{dx}{ds}\\right) ds\\\\\n\\int_{x_L}^{x_R} G_- dx = \\int_{S_0}^{S_1} G_- \\frac{dx}{ds} ds &= \\int_{C_-} \\left(G_- \\frac{dx}{ds}\\right) ds\n\\end{aligned}\n\\end{equation*}\nNow relate the $\\frac{dx}{ds}$'s to the outward unit normal $\\mathbf{n}$:\\\\\n$\\mathbf{t}=\\left(\\frac{dx}{ds},\\frac{dy}{ds}\\right)$ unit tangent;\\\\\n$\\mathbf{n}=\\left(\\frac{dy}{ds},-\\frac{dx}{ds}\\right)$ is the correct choice of signs to point outward.\\\\\nVertical component of $\\mathbf{n}$:\\\\\n$\\mathbf{j}\\cdot\\mathbf{n}=-\\frac{dx}{ds}$ is positive on $C_+$, negative on $C_-$.\\\\\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\int_{x_L}^{x_R} \\left(G_+ - G_-\\right) dx &=-\\int_{C_+} G\\left(s\\right) \\frac{dx}{ds}ds - \\int_{C_-} G\\left(s\\right) \\frac{dx}{ds}ds\\\\\n&=\\int_{C_+} G \\mathbf{j}\\cdot\\mathbf{n} ds + \\int_{C_-} G \\mathbf{j}\\cdot\\mathbf{n} ds\\\\\n&=\\int_C \\mathbf{G}\\cdot\\mathbf{n}ds \\text{  as  } \\mathbf{G}=G\\mathbf{j}.\n\\end{aligned}\n\\end{equation*}\nFor more general shapes, e.g.\\\\\n\\includegraphics[scale=0.20]{VC02}\\\\\nwe either divide the region into simple pieces, and the contributions over all the dotted parts cancel;\\\\\nor proceed as above with $Y_x$ now possibly a sum of intervals.\\\\\n\\begin{rem}\nThe same method can be used to prove the 3D Gauss theorem: look at $\\mathbf{G}$ of the form $\\mathbf{G}=G\\left(x,y,z\\right)\\mathbf{k}$ and vertical columns of volume for $\\int_V \\vdiv \\mathbf{G}\\, dV$.\n\\end{rem}\n(d) Green's theorem $\\implies$ Stokes' theorem.\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot \\mathbf{dS} = \\int_{C=\\partial S} \\mathbf{F}\\cdot\\mathbf{dr}\n\\end{aligned}\n\\end{equation*}\n$S$ and $C$ have consistent orientations.\\\\\nSuppose $S$ (the surface) is parameterised as $\\mathbf{r}\\left(u,v\\right)$ for $\\left(u,v\\right)\\in A\\subseteq \\R^2$. (Note that there are two different meanings of $S$ here: the surface, and the arclength of the boundary -- kindly pointed out by the lecturer.)\\\\\nThen $\\mathbf{dS}=\\left(\\frac{\\partial \\mathbf{r}}{\\partial u}\\times \\frac{\\partial\\mathbf{r}}{\\partial v}\\right) dudv$,\\\\\nand the parameterisation $\\left(u,v\\right)$ has fixed the orientation of $S$ ($\\mathbf{n}$ in the direction of $\\frac{\\partial \\mathbf{r}}{\\partial u}\\times \\frac{\\partial\\mathbf{r}}{\\partial v}$),\\\\\nso also a fixed (consistent) orientation on $C=\\partial S$.\\\\\nSo \n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}=\\int_A \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\left(\\frac{\\partial \\mathbf{r}}{\\partial u}\\times \\frac{\\partial\\mathbf{r}}{\\partial v}\\right)dudv.\n\\end{aligned}\n\\end{equation*}\nFact:\n\\begin{equation*}\n\\begin{aligned}\n\\curl \\mathbf{F}\\left(\\mathbf{r}\\left(u,v\\right)\\right)\\cdot \\left(\\frac{\\partial \\mathbf{r}}{\\partial u}\\times \\frac{\\partial\\mathbf{r}}{\\partial v}\\right)\n&= \\frac{\\partial \\mathbf{F}}{\\partial u}\\cdot \\frac{\\partial \\mathbf{r}}{\\partial v}-\\frac{\\partial \\mathbf{F}}{\\partial v}\\cdot \\frac{\\partial \\mathbf{r}}{\\partial u}.\n\\end{aligned}\n\\end{equation*}\nProof outline: just use suffix notation:\n\\begin{equation} \\label{eq:2}\n\\begin{aligned}\nLHS &= \\left(\\epsilon_{ijk} \\frac{\\partial F_j}{\\partial
x_i}\\right)\\left(\\epsilon_{pqk} \\frac{\\partial x_p}{\\partial u}\\frac{\\partial x_q}{\\partial v}\\right)\n\\end{aligned}\n\\end{equation}\nThen use $\\epsilon_{ijk}\\epsilon_{pqk}=\\delta_{ip}\\delta_{jq}-\\delta_{iq}\\delta_{jp}$\\\\\nand note:\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\frac{\\partial \\mathbf{F}}{\\partial u}\\right)_i &= \\frac{\\partial F_i}{\\partial u}\\\\\n&= \\frac{\\partial F_i}{\\partial x_p}\\frac{\\partial x_p}{\\partial u}\\\\\n&= \\frac{\\partial F_i}{\\partial x_j}\\frac{\\partial x_p}{\\partial u}\\delta_{jp}; \\text{  note the suffix notation used}\\\\\n\\left(\\frac{\\partial \\mathbf{r}}{\\partial v}\\right)_i &= \\frac{\\partial x_i}{\\partial v}\n\\end{aligned}\n\\end{equation*}\nsubstitute into (\\ref{eq:2}) to get the result.\\\\\n\nSo we have\n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}&=\\int_A \\left(\\frac{\\partial \\mathbf{F}}{\\partial u}\\cdot\\frac{\\partial \\mathbf{r}}{\\partial v}-\\frac{\\partial \\mathbf{F}}{\\partial v} \\cdot \\frac{\\partial \\mathbf{r}}{\\partial u}\\right) dudv\\\\\n&=\\int_A \\left[\\frac{\\partial}{\\partial u}\\left(\\mathbf{F}\\left(\\mathbf{r}\\left(u,v\\right)\\right)\\cdot\\frac{\\partial \\mathbf{r}}{\\partial v}\\right)-\\frac{\\partial}{\\partial v}\\left(\\mathbf{F}\\cdot\\frac{\\partial \\mathbf{r}}{\\partial u}\\right)\\right]dudv\n\\end{aligned}\n\\end{equation*}\nThink Green!\\\\\nFor xy: \n\\begin{equation*}\n\\begin{aligned}\n\\int_A \\frac{\\partial Q}{\\partial x} - \\frac{\\partial P}{\\partial y}dxdy = \\int_C Pdx + Qdy \\text{  anticlockwise}.\n\\end{aligned}\n\\end{equation*}\nSo Green's theorem in the u-v plane gives:\n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}=\\int_C \\left(\\mathbf{F}\\cdot\\frac{\\partial \\mathbf{r}}{\\partial u}\\right)du + \\left(\\mathbf{F}\\cdot\\frac{\\partial \\mathbf{r}}{\\partial v}\\right)dv\n\\end{aligned}\n\\end{equation*}\nAlso, along $C=\\partial S$: $\\mathbf{dr} = \\frac{\\partial \\mathbf{r}}{\\partial u}du + \\frac{\\partial \\mathbf{r}}{\\partial v}dv$,\\\\\nso we get \n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{dS}=\\int_C \\mathbf{F}\\cdot\\mathbf{dr}.\n\\end{aligned}\n\\end{equation*}\nFinally we should check that the anti-clockwise orientation for $\\partial A$ (needed in Green's theorem) agrees with the orientation of $\\partial S$ imposed by the choice of parameters, i.e. $\\mathbf{n}$ in the direction of $\\frac{\\partial \\mathbf{r}}{\\partial u}\\times\\frac{\\partial \\mathbf{r}}{\\partial v}$.\\\\\nIt always does! 
To see why:\\\\\n\\includegraphics[scale=0.25]{VC03}\\\\\nConsider $P$ on $\\partial A$ where the tangent is parallel to the $u$ axis, mapping to $P'$ on $\\partial S$.\\\\\nThen a small inward normal step $\\delta v$ maps to $\\frac{\\partial \\mathbf{r}}{\\partial v}\\delta v$, which must also point inward on $S$.\\\\\nA small step $\\delta u$ along $\\partial A$, in the direction of traverse induced by the anticlockwise orientation on $\\partial A$, maps to $\\frac{\\partial \\mathbf{r}}{\\partial u}\\delta u$, tangent to $\\partial S$ at $P'$; it can point to $L$ or $R$ on $\\partial S$.\\\\\nIf it points $L$ on $\\partial S$ then $\\mathbf{n} \\sim \\frac{\\partial \\mathbf{r}}{\\partial u}\\times\\frac{\\partial \\mathbf{r}}{\\partial v}$ points downwards (in the picture) and the consistent orientation on $\\partial S$ is then to $L$.\\\\\nIf it points $R$ then $\\mathbf{n}$ points upwards, and it is consistent again.\\\\\n\\end{proof}\n\n\\newpage\n\\section{Geometrical characterisation of div and curl}\nApply the divergence theorem to a small volume $V$ (size $V$), containing $\\mathbf{r}$:\n\\begin{equation*}\n\\begin{aligned}\n\\int_{\\partial V} \\mathbf{F}\\cdot \\mathbf{dS} = \\int_V \\nabla\\cdot\\mathbf{F}dV \\approx \\nabla\\cdot\\mathbf{F}|_\\mathbf{r} V\n\\end{aligned}\n\\end{equation*}\nThis becomes exact as $V\\to 0$:\n\\begin{equation}\\label{eq:3}\n\\begin{aligned}\n\\vdiv \\mathbf{F} = \\lim_{V\\to 0} \\frac{1}{V} \\int_{\\partial V} \\mathbf{F}\\cdot \\mathbf{dS}\n\\end{aligned}\n\\end{equation}\nSimilarly, Stokes' theorem for a small planar surface $S$ (area $S$) containing $\\mathbf{r}$, unit normal $\\mathbf{n}$:\n\\begin{equation*}\n\\begin{aligned}\n\\int_{\\partial S} \\mathbf{F}\\cdot\\mathbf{dr} &= \\int_S \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{n} dS\\\\\n&\\approx \\left(\\mathbf{n}\\cdot\\nabla\\times\\mathbf{F}\\right) S\n\\end{aligned}\n\\end{equation*}\nso\n\\begin{equation}\\label{eq:4}\n\\begin{aligned}\n\\mathbf{n}\\cdot\\nabla\\times\\mathbf{F}=\\lim_{S\\to 0} \\frac{1}{S} \\int_{\\partial S} \\mathbf{F}\\cdot\\mathbf{dr}\n\\end{aligned}\n\\end{equation}\n\n(\\ref{eq:3}) and (\\ref{eq:4}) are manifestly coordinate independent -- purely geometrical.\\\\\nThese could be used as definitions. Then Stokes' and Gauss' theorems become easy to prove.\\\\\n\n\\begin{eg} (Stokes' theorem)\\\\\nDivide $S$ up into many small loops:\\\\\n\\includegraphics[scale=0.15]{VC04}\\\\\nBy (\\ref{eq:4}), $\\int \\left(\\nabla\\times\\mathbf{F}\\right)\\cdot\\mathbf{n}dS = $ sum of all circulations of $\\mathbf{F}$, all anticlockwise (or all clockwise); where adjacent loops are in contact, the circulations cancel since the directions are opposite.\\\\\nHence the circulations all cancel except at the boundary, where there is no matching adjacent loop. The result is simply the circulation on the boundary, i.e. $\\int_C \\mathbf{F}\\cdot\\mathbf{dr}$.\n\\end{eg}\n\n\\begin{eg} (Gauss' theorem)\\\\\n\\includegraphics[scale=0.15]{VC05}\\\\\nSimilarly the outward flows from all small regions cancel on all internal surface contacts, leaving only the outward flow on the boundary, i.e. 
\begin{eg} (Fluid dynamics)\\
Let $\mathbf{u}$ be the velocity field of a fluid flow in 3D.\\
For div: we had before that $\int_S \mathbf{u}\cdot\mathbf{dS}$ is the rate of fluid volume crossing the surface.\\
For a closed surface (and outward $\mathbf{n}$) it is the total outflow from $S$ per unit time.\\
So $\vdiv \mathbf{u}$ is the outflow rate per unit volume for small local volumes (as volume $\to 0$).\\
If the density of the fluid is constant, i.e. the flow is incompressible, then we must have $\vdiv \mathbf{u}=0$ (assuming no cavities develop).\\
The above is for a fixed volume $V$. We can also consider moving volumes: consider the fluid `particles' in $V$ at time $t$, moving for a small time $\delta t$; under the flow they occupy new positions and a new volume.\\
A small surface element $\mathbf{\delta S}$ moves through $\mathbf{u}\delta t$, appending extra volume $\mathbf{\delta S}\cdot\mathbf{u}\delta t$ to $V$.\\
\includegraphics[scale=0.15]{VC06}\\
So summing over the whole surface:
\begin{equation*}
\begin{aligned}
V\left(t+\delta t\right)-V\left(t\right)=\delta V = \sum \mathbf{\delta S}\cdot \mathbf{u}\delta t \text{  + o terms}.
\end{aligned}
\end{equation*}
In the limit as $\delta t\to 0$ and the $\delta S$'s $\to 0$:
\begin{equation*}
\begin{aligned}
\frac{dV}{dt} = \int_S \mathbf{u}\cdot\mathbf{dS}=\int_V \vdiv \mathbf{u} dV
\end{aligned}
\end{equation*}
So for small volumes around a point $\mathbf{r}$ we get $\vdiv \mathbf{u}=\frac{1}{V}\dot{V}$ (rate of change of volume per unit volume), if you ``go with the flow''.\\
Again incompressibility = volume is preserved, i.e. $\vdiv \mathbf{u}=0$.

For curl:\\
Let $A$ be a planar disc, centre $\mathbf{r}$, radius $a$, normal $\mathbf{n}$:\\
\includegraphics[scale=0.15]{VC07}\\
then
\begin{equation*}
\begin{aligned}
\int_{\partial A} \mathbf{u}\cdot\mathbf{dr} &= \int_{\partial A} \mathbf{u}\cdot\mathbf{t}\, ds\\
&= 2\pi a \cdot (\text{average tangential component of }\mathbf{u}, \text{ written }\overline{u}_{tang})\\
&= 2\pi a^2 \omega
\end{aligned}
\end{equation*}
where $\omega = \frac{1}{a} \bar{u}_{tang}$ is the angular velocity corresponding to speed $\bar{u}_{tang}$ at radius $a$.\\
Then by (\ref{eq:4})
\begin{equation*}
\begin{aligned}
\mathbf{n}\cdot\nabla\times\mathbf{u}=\lim_{a\to 0} \frac{1}{\pi a^2}\, 2\pi a^2 \omega = 2\omega
\end{aligned}
\end{equation*}
i.e. $\left(\curl \mathbf{u}\right)\cdot\mathbf{n}=$ twice the local rate of rotation of the fluid near $\mathbf{r}$.\\
Note $\bar{u}_{tang}$ is independent of overall motion $\mathbf{u}\to\mathbf{u} + $ \underline{constant}, since the average tangential component of a constant vector around the circle is $0$.
\end{eg}

\newpage
\section{Conservation laws}
For space and time dependent scalar and vector fields $\rho\left(\mathbf{r},t\right),\mathbf{j}\left(\mathbf{r},t\right)$:
\begin{equation}\label{eq:5}
\begin{aligned}
\frac{\partial \rho}{\partial t}+\nabla\cdot\mathbf{j}=0
\end{aligned}
\end{equation}
is called the \emph{conservation equation} (or continuity equation).\\
Why? -- let $V$ be a fixed volume in space, with boundary $S=\partial V$.
Then
\begin{equation*}
\begin{aligned}
Q\left(t\right)=\int_V \rho\left(\mathbf{r},t\right) dV
\end{aligned}
\end{equation*}
satisfies
\begin{equation}\label{eq:6}
\begin{aligned}
\frac{dQ}{dt}&=\int_V \frac{\partial \rho}{\partial t} dV\\
&= -\int_V \vdiv \mathbf{j}dV \text{  (by (\ref{eq:5}))}\\
&= -\int_S \mathbf{j}\cdot\mathbf{dS} \text{  (by the divergence theorem)}
\end{aligned}
\end{equation}
If we think of $\rho$ as the density of some quantity $q$ (mass, charge, ...) and $\mathbf{j}$ as the current density for it, i.e. for any small surface element $\mathbf{dS}$ in space,\\
$\mathbf{j}\cdot\mathbf{dS}=$ amount of $q$ crossing $\mathbf{dS}$ per unit time (flux of $q$ across $\mathbf{dS}$),\\
then $Q\left(t\right)$ = total amount of $q$ in $V$, and (\ref{eq:6}) expresses conservation of $q$, in the sense that any change in the amount of $q$ in $V$ must be associated to the same amount of $q$ passing across the boundary $\partial V$.\\
Equivalently (\ref{eq:5}) expresses conservation of $q$ (for any $V$).\\
$\bullet$ This is more than just overall conservation -- it imposes local conservation.\\
E.g.\\
\includegraphics[scale=0.15]{VC08}\\
We cannot have $q$ decreasing in $V_A$ while increasing in distant $V_B$ by the same amount (so that $q$ is conserved overall) but nothing happening in between.\\
A decrease in $V_A$ can only happen if some $q$ flows out across its boundary.\\

\begin{eg}
1) conservation of electric charge:\\
$\rho\left(\mathbf{r},t\right) \sim$ charge density;\\
$\mathbf{j}\left(\mathbf{r},t\right) \sim$ electric current density.\\

2) conservation of mass for fluid motion:\\
$\rho\left(\mathbf{r},t\right) \sim$ mass density;\\
$\mathbf{j}=\rho\mathbf{u}$ is the mass current if $\mathbf{u}$ is the velocity field.\\
The volume crossing a surface element $\delta S$ in time $\delta t$ is $\delta V \sim \mathbf{n}\cdot\mathbf{u}\delta t\delta S$, so the mass crossing is $\sim \rho\mathbf{n}\cdot\mathbf{u}\delta t\delta S$, consistent with $\mathbf{j}=\rho\mathbf{u}$.\\
So $\frac{\partial \rho}{\partial t}+\vdiv\left(\rho\mathbf{u}\right)=0$ for fluid motion.\\
If $\rho$ is also constant, we get $\vdiv\left(\mathbf{u}\right)=0$.\\
\end{eg}
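As an illustration (a standard example, not from the lectures): for a uniform flow with \emph{constant} velocity $\mathbf{u}$ and $\mathbf{j}=\rho\mathbf{u}$, the conservation equation (\ref{eq:5}) reduces to the advection equation
\begin{equation*}
\begin{aligned}
\frac{\partial \rho}{\partial t}+\mathbf{u}\cdot\nabla\rho=0,
\end{aligned}
\end{equation*}
whose solutions are $\rho\left(\mathbf{r},t\right)=\rho_0\left(\mathbf{r}-\mathbf{u}t\right)$ for any initial profile $\rho_0$: the density pattern is carried along unchanged at velocity $\mathbf{u}$, so the total amount of $q$ is manifestly conserved.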
We know $\mathbf{F}$ is conservative $\left(\mathbf{F}=\nabla f\right) \implies \mathbf{F}$ is irrotational ($\nabla \times \mathbf{F}=\mathbf{0}$).\\
Now we prove the converse: suppose $\mathbf{F}$ is irrotational on a simply connected region $D\subseteq \R^3$.\\
\includegraphics[scale=0.15]{VC09}\\
For any fixed $\mathbf{r_0} \in D$ and variable $\mathbf{r}$, consider\\
\begin{equation*}
\begin{aligned}
f\left(\mathbf{r}\right) = \int_C \mathbf{F}\cdot \mathbf{dr}
\end{aligned}
\end{equation*}
for a choice of curve $C$ from $\mathbf{r_0}$ to $\mathbf{r}$.\\
This is independent of the choice of $C$ by Stokes' theorem:\\
if $C'$ is any other such curve, it can be smoothly deformed into $C$ (using simple connectedness), defining a spanning surface $S$.\\
With $\partial S$ oriented in the direction of $C$ (so opposite on $C'$), Stokes' theorem implies
\begin{equation*}
\begin{aligned}
\int_S \left(\nabla\times\mathbf{F}\right)\cdot\mathbf{dS} &= \int_C \mathbf{F}\cdot\mathbf{dr} - \int_{C'} \mathbf{F}\cdot\mathbf{dr}
\end{aligned}
\end{equation*}
But $\nabla \times \mathbf{F}=\mathbf{0}$, so the two integrals are equal.\\
Hence $f\left(\mathbf{r}\right)$ is well defined on all of $D$.\\

Claim: $\mathbf{F}=\nabla f$.\\
Consider extending $C$ by a small $\mathbf{\delta r}$ at $\mathbf{r}$, giving $C+\delta C$:\\
\includegraphics[scale=0.15]{VC10}\\
\begin{equation*}
\begin{aligned}
f\left(\mathbf{r}+\mathbf{\delta r}\right) &= \int_{C+\delta C} \mathbf{F}\cdot\mathbf{dr}\\
&= \int_C \mathbf{F}\cdot\mathbf{dr} + \int_{\delta C} \mathbf{F}\cdot \mathbf{dr}\\
&=f\left(\mathbf{r}\right)+\mathbf{F}\left(\mathbf{r}\right)\cdot\mathbf{\delta r} + o\left(|\mathbf{\delta r}|\right)
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\delta f &= \mathbf{F}\left(\mathbf{r}\right)\cdot\mathbf{\delta r} + \text{ o terms}
\end{aligned}
\end{equation*}
But by definition of $\nabla f$,
\begin{equation*}
\begin{aligned}
\delta f = \nabla f \cdot \mathbf{\delta r} + \text{ o terms}
\end{aligned}
\end{equation*}
So $\mathbf{F}=\nabla f.$\\

\newpage
\section{Orthogonal curvilinear coordinates}
Consider coordinates $\left(u,v,w\right)$ on $\R^3$, i.e. a smooth invertible function $\mathbf{r}\left(u,v,w\right)$.\\

\subsection{Line element, surface element, volume element}
The line element is
\begin{equation*}
\begin{aligned}
\mathbf{dr}&=\frac{\partial \mathbf{r}}{\partial u}du + \frac{\partial \mathbf{r}}{\partial v}dv + \frac{\partial \mathbf{r}}{\partial w}dw
\end{aligned}
\end{equation*}
Here $\frac{\partial \mathbf{r}}{\partial u},\frac{\partial \mathbf{r}}{\partial v},\frac{\partial \mathbf{r}}{\partial w}$ are tangent to the coordinate lines of $u,v,w$ respectively (since the other two variables are held constant).\\
Require these to be linearly independent, i.e.:
\begin{equation*}
\begin{aligned}
\frac{\partial \mathbf{r}}{\partial u}\cdot\left(\frac{\partial \mathbf{r}}{\partial v}\times\frac{\partial \mathbf{r}}{\partial w}\right) \neq 0
\end{aligned}
\end{equation*}
(also equal to the \emph{Jacobian}, $\frac{\partial \left(x,y,z\right)}{\partial \left(u,v,w\right)}$).\\
If $\frac{\partial \mathbf{r}}{\partial u},\frac{\partial \mathbf{r}}{\partial v},\frac{\partial \mathbf{r}}{\partial w}$ are moreover orthogonal vectors, then $\left(u,v,w\right)$ are called \emph{orthogonal} curvilinear coordinates; in that case set
\begin{equation}\label{eq:7}
\begin{aligned}
\frac{\partial \mathbf{r}}{\partial u} &= h_u\mathbf{e}_u\\
\frac{\partial \mathbf{r}}{\partial v} &= h_v\mathbf{e}_v\\
\frac{\partial \mathbf{r}}{\partial w} &= h_w\mathbf{e}_w
\end{aligned}
\end{equation}
with $h_u,h_v,h_w$ all greater than 0, and $\mathbf{e}_u,\mathbf{e}_v,\mathbf{e}_w$ orthonormal and forming a right-handed system, i.e. $\mathbf{e}_u\times \mathbf{e}_v = \mathbf{e}_w$ etc. This can always be achieved by re-ordering the coordinates appropriately.\\
Then the \emph{line element} is
\begin{equation}\label{eq:8}
\begin{aligned}
\mathbf{dr} &= h_u\mathbf{e}_u du + h_v\mathbf{e}_v dv + h_w\mathbf{e}_w dw
\end{aligned}
\end{equation}
So the square of the distance induced by small changes in $u,v,w$ is (up to o-terms):
\begin{equation*}
\begin{aligned}
|\mathbf{\delta r}|^2 &= h_u^2\delta u^2+h_v^2\delta v^2+h_w^2\delta w^2
\end{aligned}
\end{equation*}
So $h_u,h_v,h_w$ are positive scale factors for distance along the coordinate curves.\\

For surface elements:\\
Consider coordinate surfaces $w=$ constant, parameterised by $u,v$.\\
Area element:
\begin{equation*}
\begin{aligned}
\mathbf{dS} &= \left(\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right) dudv \text{  (general formula)}\\
&= h_u\mathbf{e}_u\times h_v\mathbf{e}_v dudv\\
&= h_u h_v \mathbf{e}_w dudv \text{  (for orthogonal coordinates)}
\end{aligned}
\end{equation*}
Geometrically, $\mathbf{dS}$ corresponds to a small rectangle of side lengths $h_u \delta u$ and $h_v \delta v$ with normal $\mathbf{e}_w$.\\
Similarly for surface elements of $u=$ constant and $v=$ constant surfaces.\\

For volume elements:\\
The Jacobian formula gives
\begin{equation*}
\begin{aligned}
dV&=\frac{\partial \mathbf{r}}{\partial u}\cdot\left(\frac{\partial \mathbf{r}}{\partial v}\times\frac{\partial\mathbf{r}}{\partial w}\right)dudvdw\\
&=h_u h_v h_w dudvdw
\end{aligned}
\end{equation*}
($\sim$ a small cuboid with sides $h_u\delta u,h_v\delta v,h_w\delta w$).\\
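As a concrete worked example (standard conventions assumed): for cylindrical polar coordinates $\left(\rho,\phi,z\right)$ with $\mathbf{r}=\left(\rho\cos\phi,\rho\sin\phi,z\right)$,
\begin{equation*}
\begin{aligned}
\frac{\partial \mathbf{r}}{\partial \rho}=\left(\cos\phi,\sin\phi,0\right),\quad
\frac{\partial \mathbf{r}}{\partial \phi}=\left(-\rho\sin\phi,\rho\cos\phi,0\right),\quad
\frac{\partial \mathbf{r}}{\partial z}=\left(0,0,1\right),
\end{aligned}
\end{equation*}
which are mutually orthogonal, so $h_\rho = 1$, $h_\phi = \rho$, $h_z = 1$, giving
\begin{equation*}
\begin{aligned}
\mathbf{dr} = \mathbf{e}_\rho d\rho + \rho\,\mathbf{e}_\phi d\phi + \mathbf{e}_z dz, \qquad dV = \rho\, d\rho d\phi dz.
\end{aligned}
\end{equation*}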
\subsection{Grad, div and curl in curvilinear coordinates}
The coordinate-independent definition of $\grad$ is
\begin{equation*}
\begin{aligned}
df &= \nabla f \cdot \mathbf{dr}
\end{aligned}
\end{equation*}
for any scalar function $f$. But also
\begin{equation*}
\begin{aligned}
df &= \frac{\partial f}{\partial u}du + \frac{\partial f}{\partial v}dv + \frac{\partial f}{\partial w}dw.
\end{aligned}
\end{equation*}
Substitute (\ref{eq:8}) for $\mathbf{dr}$ above, compare coefficients of $du,dv,dw$ and get
\begin{equation*}
\begin{aligned}
\nabla f &=\frac{1}{h_u}\mathbf{e}_u \frac{\partial f}{\partial u} +\frac{1}{h_v}\mathbf{e}_v \frac{\partial f}{\partial v} +\frac{1}{h_w}\mathbf{e}_w \frac{\partial f}{\partial w}
\end{aligned}
\end{equation*}
expressing $\grad$ in curvilinear coordinates\\
(both the variables in the derivatives and the unit vectors carrying the components now refer to the curvilinear system).\\

For a vector field
\begin{equation*}
\begin{aligned}
\mathbf{F} = F_u \mathbf{e}_u + F_v \mathbf{e}_v + F_w \mathbf{e}_w
\end{aligned}
\end{equation*}
in the $\left(u,v,w\right)$ system,
$\vdiv \mathbf{F}$ and $\curl \mathbf{F}$ are more complicated, as $\mathbf{e}_u,\mathbf{e}_v,\mathbf{e}_w$ are generally \emph{not} constant now.\\

There are several ways to derive the formulas:\\
e.g. use the coordinate-independent formulas for $\nabla \cdot \mathbf{F},\nabla \times \mathbf{F}$ or algebraic methods, and get (derivations not part of the course):
\begin{equation*}
\begin{aligned}
\nabla \times \mathbf{F}&=\frac{1}{h_u h_v h_w}
\begin{vmatrix}
h_u \mathbf{e}_u & h_v \mathbf{e}_v & h_w \mathbf{e}_w\\
\frac{\partial}{\partial u} & \frac{\partial}{\partial v} & \frac{\partial}{\partial w}\\
h_u F_u & h_v F_v & h_w F_w
\end{vmatrix}\\
&= \frac{1}{h_v h_w}\left[\frac{\partial}{\partial v}\left(h_w F_w\right)-\frac{\partial}{\partial w}\left(h_v F_v\right)\right]\mathbf{e}_u + \text{  (two similar terms)},\\
\nabla \cdot \mathbf{F} &= \frac{1}{h_u h_v h_w}\left[\frac{\partial}{\partial u}\left(h_v h_w F_u\right)+ \text{  (two similar terms)}\right].
\end{aligned}
\end{equation*}

\begin{eg}
\begin{equation*}
\begin{aligned}
\mathbf{A} &= \frac{1}{r}\tan\frac{\theta}{2}\mathbf{e}_\varphi
\end{aligned}
\end{equation*}
in spherical polar coordinates, where
\begin{equation*}
\begin{aligned}
h_r = 1, h_\theta = r, h_\varphi = r\sin\theta.
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\nabla\times\mathbf{A} &= \frac{1}{r^2 \sin\theta}
\begin{vmatrix}
\mathbf{e}_r & r\mathbf{e}_\theta & r\sin\theta\mathbf{e}_\varphi\\
\frac{\partial}{\partial r} & \frac{\partial}{\partial \theta} & \frac{\partial}{\partial \varphi}\\
0 & 0 & r\sin\theta \cdot \frac{1}{r} \tan \frac{\theta}{2}
\end{vmatrix}\\
&= \frac{1}{r^2 \sin\theta} \mathbf{e}_r \frac{\partial}{\partial \theta} \left(2 \sin^2 \frac{\theta}{2}\right) \text{  (using } \sin\theta\tan\tfrac{\theta}{2} = 2\sin^2\tfrac{\theta}{2}\text{)}\\
&= \frac{1}{r^2 \sin\theta} \mathbf{e}_r \sin\theta\\
&= \frac{1}{r^2} \mathbf{e}_r
\end{aligned}
\end{equation*}
\end{eg}
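As a quick check of the divergence formula (a standard exercise): take $\mathbf{F} = r\,\mathbf{e}_r$ (i.e. $\mathbf{F} = \mathbf{r}$) in spherical polars, so $F_r = r$, $F_\theta = F_\varphi = 0$:
\begin{equation*}
\begin{aligned}
\nabla\cdot\mathbf{F} = \frac{1}{r^2\sin\theta}\frac{\partial}{\partial r}\left(r^2\sin\theta \cdot r\right) = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^3\right) = 3,
\end{aligned}
\end{equation*}
agreeing with the Cartesian result $\nabla\cdot\mathbf{r}=3$.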
\begin{eg} (Algebraic method for the $\curl$ and $\vdiv$ formulas (outline))\\
$\bullet$ for $\curl$:\\
Note first
\begin{equation*}
\begin{aligned}
\nabla\times\left(\frac{\mathbf{e}_u}{h_u}\right)=0
\end{aligned}
\end{equation*}
since
\begin{equation*}
\begin{aligned}
\nabla u = \frac{\mathbf{e}_u}{h_u} \text{  and  } \curl\grad = 0.
\end{aligned}
\end{equation*}
Similarly
\begin{equation*}
\begin{aligned}
\nabla\times\left(\frac{\mathbf{e}_v}{h_v}\right)=\nabla\times\left(\frac{\mathbf{e}_w}{h_w}\right)=0
\end{aligned}
\end{equation*}
Now for $\nabla\times\mathbf{F}$, write
\begin{equation*}
\begin{aligned}
\mathbf{F} &= F_u \mathbf{e}_u+F_v \mathbf{e}_v+F_w \mathbf{e}_w\\
&=F_u h_u \left(\frac{\mathbf{e}_u}{h_u}\right) +F_v h_v \left(\frac{\mathbf{e}_v}{h_v}\right)+F_w h_w \left(\frac{\mathbf{e}_w}{h_w}\right)
\end{aligned}
\end{equation*}
Then recall
\begin{equation*}
\begin{aligned}
\nabla\times\left(\varphi\mathbf{V}\right)=\nabla\varphi\times\mathbf{V}+\varphi\left(\nabla\times\mathbf{V}\right)
\end{aligned}
\end{equation*}
So (the second terms vanishing by the above)
\begin{equation*}
\begin{aligned}
\nabla\times\mathbf{F}=\nabla\left(F_u h_u\right)\times\frac{\mathbf{e}_u}{h_u}+\nabla\left(F_v h_v\right)\times\frac{\mathbf{e}_v}{h_v}+\nabla\left(F_w h_w\right)\times\frac{\mathbf{e}_w}{h_w}
\end{aligned}
\end{equation*}
\end{eg}

$\bullet$ for $\vdiv$:\\
The starting trick is to note
\begin{equation}\label{eq:9}
\begin{aligned}
\vdiv\left(\frac{\mathbf{e}_w}{h_u h_v}\right)=\vdiv\left(\frac{\mathbf{e}_v}{h_u h_w}\right)=\vdiv\left(\frac{\mathbf{e}_u}{h_v h_w}\right)=0
\end{aligned}
\end{equation}
because
\begin{equation*}
\begin{aligned}
\frac{\mathbf{e}_u}{h_u}=\nabla u,\frac{\mathbf{e}_v}{h_v}=\nabla v,\frac{\mathbf{e}_w}{h_w}=\nabla w
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\frac{\mathbf{e}_w}{h_u h_v} = \nabla u \times \nabla v
\end{aligned}
\end{equation*}
and using
\begin{equation*}
\begin{aligned}
\nabla\cdot\left(\mathbf{A}\times\mathbf{B}\right) = \mathbf{B}\cdot\left(\nabla\times \mathbf{A}\right)-\mathbf{A}\cdot\left(\nabla\times\mathbf{B}\right)
\end{aligned}
\end{equation*}
we again get (\ref{eq:9}) by $\curl\grad = 0$.\\
Then write
\begin{equation*}
\begin{aligned}
\mathbf{F}=F_u h_v h_w \left(\frac{\mathbf{e}_u}{h_v h_w}\right) + \text{  two similar terms}
\end{aligned}
\end{equation*}
and use
\begin{equation*}
\begin{aligned}
\vdiv\left(\varphi\mathbf{V}\right) = \varphi\left(\nabla\cdot\mathbf{V}\right)+\mathbf{V}\cdot\left(\nabla\varphi\right).
\end{aligned}
\end{equation*}

\newpage
\section{Laplace equation and Poisson equation}

\subsection{Gauss' Law and Poisson equation}
Consider the gravitational force $\mathbf{F}\left(\mathbf{r}\right)$ on a point mass $m$ at $\mathbf{r}$, from some distribution of mass in space.\\
Write
\begin{equation*}
\begin{aligned}
\mathbf{F}\left(\mathbf{r}\right)=m\mathbf{g}\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
$\mathbf{g}\left(\mathbf{r}\right)$ is the gravitational field (force per unit mass, i.e. acceleration due to gravity).\\

\begin{thm} (Gauss' law in integral form)\\
This relates $\mathbf{g}$ to the mass distribution:
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot\mathbf{dS}=-4\pi GM
\end{aligned}
\end{equation*}
for any closed surface $S$, bounding a volume $V$.\\
$M$ is the total mass contained in $V$.
$G$ is the Newtonian gravitational constant.
This can be used as the law of gravitation:\\
$\bullet$ Newton's inverse square law follows from Gauss (plus an extra symmetry assumption);\\
$\bullet$ Gauss follows from Newton too (in Part 1B Methods -- Green's functions for PDEs, $\delta$-functions).
\end{thm}

To get Newton from Gauss:\\
Consider a total mass $M$, distributed with spherical symmetry about $\mathbf{0}$, all contained in a sphere of radius $a$.\\
Assume that spherical symmetry $\implies$ $\mathbf{g}\left(\mathbf{r}\right)$ is radial,
\begin{equation*}
\begin{aligned}
\mathbf{g}\left(\mathbf{r}\right) = g\left(r\right)\mathbf{e}_r
\end{aligned}
\end{equation*}
Now apply Gauss' law for $S$ = a sphere of radius $R>a$:
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot \mathbf{dS} &= \int_S g\left(R\right)\mathbf{e}_r\cdot\mathbf{e}_r dS\\
&=\int_S g\left(R\right)dS\\
&=4\pi R^2 g\left(R\right)
\end{aligned}
\end{equation*}
as $g$ is constant on $S$.\\
So Gauss implies
\begin{equation*}
\begin{aligned}
4\pi R^2 g\left(R\right) = -4\pi GM,\\
\text{i.e.  } g\left(R\right) = -\frac{GM}{R^2},\\
\mathbf{g}\left(\mathbf{r}\right) = -\frac{GM}{r^2}\mathbf{e}_r
\end{aligned}
\end{equation*}
for any $\mathbf{r}$ with $|\mathbf{r}| > a$.\\
So
\begin{equation*}
\begin{aligned}
\mathbf{F}\left(\mathbf{r}\right) = -\frac{GMm}{r^2}\mathbf{e}_r
\end{aligned}
\end{equation*}
Taking the limit $a\to 0$ (fixed $M$) we get Newton's law for point masses.\\

If the mass distribution is not sufficiently symmetric, it is hard to get $\mathbf{g}\left(\mathbf{r}\right)$ from Gauss' integral law. But we can recast it as:

\begin{thm} (Gauss' law in differential form)\\
Suppose the mass distribution has mass density $\rho\left(\mathbf{r}\right)$. Then
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot\mathbf{dS} &= -4 \pi GM,\\
M &= \int_V \rho dV
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot\mathbf{dS} &= -4\pi G\int_V \rho dV
\end{aligned}
\end{equation*}
while
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot\mathbf{dS} = \int_V \nabla\cdot\mathbf{g} dV
\end{aligned}
\end{equation*}
by the divergence theorem. So the volume integrals on the two right-hand sides are equal.\\
This holds for any region $V$, so
\begin{equation*}
\begin{aligned}
\nabla\cdot\mathbf{g} = -4\pi G\rho
\end{aligned}
\end{equation*}
\end{thm}
Now $\mathbf{g}\left(\mathbf{r}\right)$ is also a conservative field.
So
\begin{equation*}
\begin{aligned}
\mathbf{g} = -\nabla\varphi
\end{aligned}
\end{equation*}
for some $\varphi$ (called the gravitational potential -- the potential energy per unit mass).\\
So
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = 4\pi G\rho,
\end{aligned}
\end{equation*}
the Poisson equation with source term $4\pi G\rho$.\\
For the previous spherically symmetric example,
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right)=-\frac{GM}{r}
\end{aligned}
\end{equation*}
for $r>a$, with $\varphi \to 0$ as $r\to\infty$, fixing the overall constant freedom in $\varphi$.\\
The above applies to any situation with an inverse-square force law.

\begin{eg}
Assume a distribution of electric charges at rest,
with charge density $\rho\left(\mathbf{r}\right)$ and the force on a charge $q$ at $\mathbf{r}$ given by
\begin{equation*}
\begin{aligned}
\mathbf{F}=q\mathbf{E}\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
where $\mathbf{E}$ is the electric field (force per unit charge).\\
Gauss' law for electrostatics:
\begin{equation*}
\begin{aligned}
\int_S \mathbf{E}\cdot\mathbf{dS} = \frac{Q}{\epsilon_0}
\end{aligned}
\end{equation*}
$S$ is a closed surface, bounding a volume $V$, and $Q = \int_V \rho dV$ = total charge in $V$. $\epsilon_0$ is a constant, the \emph{permittivity of free space}.\\
The electrostatic field $\mathbf{E}$ is conservative:
\begin{equation*}
\begin{aligned}
\mathbf{E} = -\nabla \varphi
\end{aligned}
\end{equation*}
Here $\varphi$ is the electrostatic potential. So
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = -\frac{\rho}{\epsilon_0},
\end{aligned}
\end{equation*}
the Poisson equation for $\varphi$ with source $-\frac{\rho}{\epsilon_0}$.\\
A spherically symmetric configuration (as for gravity) gives
\begin{equation*}
\begin{aligned}
\mathbf{E}\left(\mathbf{r}\right) = \frac{Q}{4\pi \epsilon_0} \frac{\mathbf{e}_r}{r^2},\\
\varphi\left(\mathbf{r}\right) = \frac{Q}{4\pi \epsilon_0}\frac{1}{r}
\end{aligned}
\end{equation*}
for $r>a$.\\
Here $Q$ is the total charge in the sphere of radius $a$.\\
As $a\to 0$, this gives Coulomb's inverse square law.\\

Note: in empty regions of space $\rho=0$, so both potentials satisfy the Laplace equation:
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = 0
\end{aligned}
\end{equation*}
\end{eg}

We now find explicit solutions of the Poisson and Laplace equations in 3 dimensions for either cylindrical polar or spherical polar symmetry, i.e.
$\varphi$ depends only on the radial coordinate.\\
Write $r$ for the radial coordinate in either coordinate system.\\

Laplace equation $\nabla^2 \varphi = 0$ with $\varphi = \varphi\left(r\right)$:\\
$\bullet$ 1) Spherical symmetry: the $\nabla^2$ formula with the $\frac{\partial}{\partial \theta}$, $\frac{\partial}{\partial \varphi}$ terms set to 0 gives ($'=\frac{d}{dr}$):
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = \varphi'' + \frac{2}{r} \varphi' = 0
\end{aligned}
\end{equation*}
So $\varphi = \frac{A}{r}+B$ is the general solution.\\

$\bullet$ 2) Cylindrical symmetry: the $\nabla^2$ formula gives
\begin{equation*}
\begin{aligned}
\varphi'' +\frac{1}{r}\varphi'=\frac{1}{r}\left(r\varphi'\right)'=0
\end{aligned}
\end{equation*}
So $\varphi = A\log r+B$ is the general solution.\\

Poisson equation
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = f\left(r\right):
\end{aligned}
\end{equation*}
find any single particular integral (PI); then the general solution of Poisson = PI + general solution of Laplace.\\

\begin{eg}
Consider a spherically symmetric solution of $\nabla^2\varphi = \rho_0$ (constant).
The $\nabla^2$ formula in spherical polars gives
\begin{equation*}
\begin{aligned}
\nabla^2 \left(r^\alpha\right) = \alpha\left(\alpha+1\right)r^{\alpha-2}
\end{aligned}
\end{equation*}
So $\alpha = 2$ gives a particular integral of $\frac{1}{6} \rho_0 r^2$. So the general solution is
\begin{equation*}
\begin{aligned}
\varphi\left(r\right) = \frac{A}{r}+B+\frac{1}{6}\rho_0 r^2
\end{aligned}
\end{equation*}
\end{eg}

\begin{eg}
We seek a spherically symmetric $\varphi\left(r\right)$ with
\begin{equation*}
\begin{aligned}
\nabla^2\varphi = \left\{
\begin{array}{ll}
4\pi G \rho_0 & r \leq a\\
0 & r > a
\end{array}
\right.
\end{aligned}
\end{equation*}
$\rho_0$ constant, $\varphi$ non-singular at $r=0$, $\varphi \to 0$ as $r \to \infty$, $\varphi$, $\varphi'$ continuous at $r=a$.\\
Physically: the gravitational potential inside/outside a planet, with constant density $\rho_0$, radius $a$, total mass $M=\frac{4}{3}\pi a^3 \rho_0$.\\
Our general solutions (and previous example) give
\begin{equation*}
\begin{aligned}
\varphi = \left\{
\begin{array}{ll}
\frac{A}{r}(=0)+ B+\frac{1}{6}4\pi G \rho_0 r^2 & r\leq a\\\\
\frac{C}{r} + D(=0) = \frac{C}{r} & r>a
\end{array}
\right.
\end{aligned}
\end{equation*}
(with $A=0$ for non-singularity at $r=0$, and $D=0$ so that $\varphi\to 0$ as $r\to\infty$). Matching $\varphi$, $\varphi'$ at $r=a$ gives
\begin{equation*}
\begin{aligned}
\varphi&: B+\frac{1}{6}4\pi G \rho_0 a^2 = \frac{C}{a}\\
\varphi'&: \frac{1}{3}4\pi G\rho_0 a = -\frac{C}{a^2}
\end{aligned}
\end{equation*}
and using $M=\frac{4}{3}\pi a^3\rho_0$, we get
\begin{equation*}
\begin{aligned}
\varphi\left(r\right) = \left\{
\begin{array}{ll}
\frac{GM}{2a}\left(\left(\frac{r}{a}\right)^2-3\right) & r\leq a\\\\
-\frac{GM}{r} & r > a
\end{array}
\right.
\end{aligned}
\end{equation*}
Gravitational field:
\begin{equation*}
\begin{aligned}
\mathbf{g} = -\nabla \varphi \equiv g\left(r\right) \mathbf{e}_r,\\
g\left(r\right) = -\varphi'\left(r\right) = \left\{
\begin{array}{ll}
-\frac{GMr}{a^3} & r\leq a\\\\
-\frac{GM}{r^2} & r>a
\end{array}
\right.
\end{aligned}
\end{equation*}
\includegraphics[scale=0.30]{VC11}\\
Alternative solution: use Gauss' law for $\mathbf{g}=g\left(r\right)\mathbf{e}_r$ with $S=$ a sphere of
radius $R$:\\
$\bullet$ $R\geq a$: done before (Newton from Gauss);\\
$\bullet$ $R<a$: the mass enclosed is $M\left(\frac{R}{a}\right)^3$, so
\begin{equation*}
\begin{aligned}
\int_S \mathbf{g}\cdot\mathbf{dS} &= 4\pi R^2 g\left(R\right)\\
&= -4\pi GM\left(\frac{R}{a}\right)^3
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
g\left(R\right) = -\frac{GMR}{a^3}
\end{aligned}
\end{equation*}
for any $R\leq a$.
\end{eg}

To get a unique solution of the Poisson/Laplace equation, we need to impose boundary conditions.\\
Two common kinds of boundary conditions on $\varphi$ at the boundary $S=\partial V$, for $\varphi$ in a volume $V$:\\
$\bullet$ 1) Dirichlet condition (D): specify $\varphi$ on the boundary;\\
$\bullet$ 2) Neumann condition (N): specify the outward normal derivative $\mathbf{n} \cdot \nabla\varphi$ on the boundary. Write $\mathbf{n}\cdot\nabla\varphi = \frac{\partial\varphi}{\partial n}$.\\
(One can also impose these on different parts of $\partial V$, or $\alpha \varphi + \beta \frac{\partial \varphi}{\partial n}$ etc.)\\
In applications, boundary conditions generally model physical conditions.\\

\begin{eg}
$\bullet$ Electrostatic potential:\\
(D) $\varphi$ = constant on $S$ for a perfectly conducting surface.\\
$\bullet$ Also other equations (e.g. the 1B Methods heat/diffusion equation), with $T\left(\mathbf{r},t\right)$ the temperature in space and time:\\
(D) $T$ specified on $S$;\\
(N) $\frac{\partial T}{\partial n} = 0$ on $S$: perfectly insulating boundary.
\end{eg}

\begin{thm} (Uniqueness theorem)\\
Suppose $\varphi\left(\mathbf{r}\right)$ satisfies the Poisson equation $\nabla^2\varphi = \rho$ for some $\rho\left(\mathbf{r}\right)$ in a bounded volume $V$, with boundary $S=\partial V$, a closed surface, outward normal $\mathbf{n}$.\\
Then:\\
$\bullet$ 1) if
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = f\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
on $S$ (i.e. D boundary condition), then $\varphi\left(\mathbf{r}\right)$ is unique.\\
$\bullet$ 2) if
\begin{equation*}
\begin{aligned}
\frac{\partial \varphi}{\partial n} = \mathbf{n}\cdot\nabla\varphi = g\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
on $S$ (i.e. N boundary condition), then $\varphi\left(\mathbf{r}\right)$ is unique up to a constant, i.e.
unique up to the replacement $\varphi \to \varphi +$ constant.\\
(Note: the Laplace equation is included as the special case $\rho \equiv 0$.)
\begin{proof}
Let $\varphi_1\left(\mathbf{r}\right)$ and $\varphi_2\left(\mathbf{r}\right)$ be any two solutions of the Poisson equation, each obeying the boundary condition (D) or (N) above.\\
Then
\begin{equation*}
\begin{aligned}
\psi\left(\mathbf{r}\right)=\varphi_1\left(\mathbf{r}\right)-\varphi_2\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
satisfies
\begin{equation*}
\begin{aligned}
\nabla^2\psi = 0
\end{aligned}
\end{equation*}
in $V$, and\\
for (D): $\psi = 0$ on $S$,\\
or for (N): $\frac{\partial\psi}{\partial n}=\mathbf{n}\cdot\nabla\psi = 0$ on $S$.\\
So by the divergence theorem:
\begin{equation*}
\begin{aligned}
\int_V \nabla\cdot\left(\psi\nabla\psi\right)dV &= \int_S \psi\nabla\psi\cdot\mathbf{dS}\\
&= \int_S \psi \frac{\partial\psi}{\partial n} dS
\end{aligned}
\end{equation*}
But
\begin{equation*}
\begin{aligned}
\nabla\cdot\left(\psi\nabla\psi\right)&=\nabla\psi\cdot\nabla\psi + \psi\nabla^2\psi \text{  ($=0$ in $V$)}\\
&= |\nabla\psi|^2
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\int_V |\nabla\psi|^2 dV = \int_S \psi \frac{\partial \psi}{\partial n}dS
\end{aligned}
\end{equation*}
But RHS $=0$ for (D) or (N)\\
(or even if (D) and (N) are used on different parts of $S$).\\
Since $|\nabla\psi|^2\geq 0$, the integral $\int_V$ can only vanish if $|\nabla\psi|=0$, i.e. $\nabla\psi = \mathbf{0}$, i.e. $\psi = c$ constant in $V$.\\
So for (D): $\psi = 0$ on $S$, so $c=0$, and $\varphi_1 = \varphi_2$ in $V$;\\
or for (N): $\frac{\partial\psi}{\partial n} = 0$ on $S$, so any constant $c$ is possible, i.e. $\varphi_1 = \varphi_2 +$ any constant.
\end{proof}
\end{thm}
\begin{rem}
$\bullet$ 1) The theorem says that if a solution exists then it is unique, but says nothing about its \emph{existence}.\\
E.g. for (N), if
\begin{equation*}
\begin{aligned}
\nabla^2\varphi = \rho
\end{aligned}
\end{equation*}
in $V$, and
\begin{equation*}
\begin{aligned}
\frac{\partial \varphi}{\partial n} = g
\end{aligned}
\end{equation*}
on $\partial V$, then by the divergence theorem,
\begin{equation*}
\begin{aligned}
\int_V \nabla^2\varphi dV = \int_{\partial V} \frac{\partial \varphi}{\partial n}dS
\end{aligned}
\end{equation*}
so
\begin{equation*}
\begin{aligned}
\int_V \rho dV = \int_{\partial V} g dS
\end{aligned}
\end{equation*}
So $\rho$ and $g$ cannot be specified freely and independently -- a solution can exist only if they satisfy this consistency condition.\\

$\bullet$ 2) The theorem applies similarly for regions of $\R^n$ for any $n=2,3,4,...$; we need just the definitions of $\grad$, $\vdiv$, and the divergence theorem.\\

$\bullet$ 3) The results can be extended to some \emph{un}bounded regions $V$ with suitable asymptotic conditions on $\varphi$ (and then on $\psi$ in the proof as well, since $\psi = \varphi_1 - \varphi_2$).\\
Consider a sphere of radius $R$ (we will take $R\to\infty$):\\
suppose
\begin{equation*}
\begin{aligned}
&|\psi\left(\mathbf{r}\right)| = O\left(\frac{1}{R}\right)\\
&\left|\frac{\partial \psi}{\partial n}\right| = O\left(\frac{1}{R^2}\right)
\end{aligned}
\end{equation*}
for $|\mathbf{r}| = R$.
Then
\begin{equation*}
\begin{aligned}
\left|\int_S \psi\frac{\partial \psi}{\partial n} dS\right| &\leq \max_{|\mathbf{r}| = R} \left| \psi \frac{\partial \psi}{\partial n}\right|\cdot 4\pi R^2\\
&\leq \text{constant   }\cdot \frac{1}{R}\cdot\frac{1}{R^2}\cdot 4\pi R^2\\
&=O\left(\frac{1}{R}\right)\\
&\to 0
\end{aligned}
\end{equation*}
as $R\to\infty$.\\
So the uniqueness proof goes through for the unbounded region as $R\to\infty$.\\
These conditions on $\psi$ and $\frac{\partial \psi}{\partial n}$ are ensured by imposing
\begin{equation*}
\begin{aligned}
|\varphi\left(\mathbf{r}\right)| = O\left(\frac{1}{R}\right) \text{   and   } |\nabla \varphi| = O\left(\frac{1}{R^2}\right),
\end{aligned}
\end{equation*}
so with these asymptotic conditions on $\varphi$, the uniqueness theorem extends to all of $\R^3$.\\

$\bullet$ 4) The starting point of the proof is a special case of the identity
\begin{equation*}
\begin{aligned}
\nabla \cdot \left(u\nabla v\right) = \nabla u \cdot \nabla v + u \nabla^2 v
\end{aligned}
\end{equation*}
so by the divergence theorem
\begin{equation*}
\begin{aligned}
\int_S u\nabla v \cdot \mathbf{dS} = \int_V \nabla u \cdot \nabla v dV + \int_V u \nabla^2 v dV
\end{aligned}
\end{equation*}
This is called \emph{Green's first identity}.\\
Swapping $u$ and $v$ and subtracting the two equations,
\begin{equation*}
\begin{aligned}
\int_S \left(u\nabla v-v\nabla u\right)\cdot \mathbf{dS} = \int_V \left(u\nabla^2 v - v \nabla^2 u\right) dV,
\end{aligned}
\end{equation*}
called \emph{Green's second identity}.
\end{rem}

\subsection{Laplace equation and harmonic functions}
Solutions of the Laplace equation
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = 0
\end{aligned}
\end{equation*}
are called \emph{harmonic functions}.

\begin{thm} (Mean value property)\\
Suppose $\varphi\left(\mathbf{r}\right)$ is harmonic in $V$ containing a solid sphere
\begin{equation*}
\begin{aligned}
V_R: |\mathbf{r}-\mathbf{a}| \leq R
\end{aligned}
\end{equation*}
with centre $\mathbf{a}$ and boundary
\begin{equation*}
\begin{aligned}
S_R: |\mathbf{r}-\mathbf{a}| = R
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{a}\right) = \bar{\varphi}\left(R\right)
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\bar{\varphi}\left(R\right) = \frac{1}{4\pi R^2}\int_{S_R} \varphi\left(\mathbf{r}\right) dS = \text{   average of  } \varphi \text{   over  } S_R.
\end{aligned}
\end{equation*}
\begin{proof}
Note
\begin{equation*}
\begin{aligned}
\bar{\varphi}\left(R\right) \to \varphi\left(\mathbf{a}\right)
\end{aligned}
\end{equation*}
as $R\to 0$, by continuity of $\varphi$. Take spherical polar coordinates ($u,\theta,\chi$) centred at $\mathbf{a}$.\\
Then on $S_R$ (i.e.
$u=R$),
\begin{equation*}
\begin{aligned}
dS = R^2 \sin\theta d\theta d\chi
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\bar{\varphi}\left(R\right) = \frac{1}{4\pi R^2}\int_{\chi=0}^{2\pi} \int_{\theta = 0}^\pi \varphi\left(R,\theta,\chi\right) \sin\theta R^2 d\theta d\chi
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\frac{d}{dR} \bar{\varphi}\left(R\right) &= \frac{1}{4\pi} \int_\chi \int_\theta \frac{\partial \varphi}{\partial R}\sin\theta d\theta d\chi\\
&= \frac{1}{4\pi R^2} \int_{S_R} \left(\nabla\varphi \cdot \mathbf{e}_u \right) dS\\
&= \frac{1}{4\pi R^2} \int_{V_R} \nabla^2 \varphi dV\\
&=0
\end{aligned}
\end{equation*}
($\nabla\varphi \cdot \mathbf{e}_u$ is the radial component $\frac{\partial \varphi}{\partial u}$ evaluated at $u=R$; the second-last step is by the divergence theorem, and the last uses $\nabla^2 \varphi = 0$).\\
So $\bar{\varphi}\left(R\right)$ is constant in $R$, and hence equal to its limit $\varphi\left(\mathbf{a}\right)$ as $R\to 0$.
\end{proof}
\end{thm}

\begin{defi}
$\varphi\left(\mathbf{r}\right)$ on $V$ has a \emph{local maximum} at $\mathbf{a}\in V$ if, for some $\epsilon >0$,
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{a}\right) > \varphi\left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
for all $\mathbf{r}\in V$ with $0<|\mathbf{r}-\mathbf{a}|<\epsilon$.\\
(Similarly for a \emph{local minimum}: if $\varphi\left(\mathbf{a}\right)<\varphi\left(\mathbf{r}\right)$ ...)
\end{defi}

\begin{coro}(Maximum and minimum principle)\\
Suppose $\varphi$ is harmonic on $V$. Then $\varphi$ cannot have \emph{any} local maximum or minimum at any interior point of $V$.
\begin{proof}
\emph{If} $\varphi$ has a local maximum at an \emph{interior} point $\mathbf{a}$, there is $\epsilon$ as above so that
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right)<\varphi\left(\mathbf{a}\right)
\end{aligned}
\end{equation*}
for all $0<|\mathbf{r}-\mathbf{a}|<\epsilon$ with $\mathbf{r}\in V$.\\
By taking $\epsilon$ small enough, we can ensure that the \emph{whole} ball $\mathbf{r}\in \R^3$ with $|\mathbf{r}-\mathbf{a}|<\epsilon$ lies within $V$ (this is always possible for an \emph{interior} point).\\
So for any $0<R<\epsilon$, $\varphi\left(\mathbf{r}\right) < \varphi\left(\mathbf{a}\right)$ for $|\mathbf{r}-\mathbf{a}| = R$.
So $\bar{\varphi}\left(R\right) < \varphi\left(\mathbf{a}\right)$, contradicting the mean value property.\\
So an internal local maximum (similarly minimum) cannot exist.
\end{proof}
\end{coro}

\subsection{Integral solution of Poisson equation}
(Statement and informal derivation)\\
Recall electrostatics: for a point charge $Q$ at $\mathbf{r}_0$, the potential is
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \frac{Q}{4\pi \epsilon_0} \frac{1}{|\mathbf{r}-\mathbf{r}_0|}
\end{aligned}
\end{equation*}
Let's set $\epsilon_0 = 1$.\\
For a distribution of charges with density $\rho\left(\mathbf{r}\right)$, all contained in a bounded volume $V'$, the potential satisfies the Poisson equation
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = -\frac{\rho}{\epsilon_0} = -\rho
\end{aligned}
\end{equation*}
If we have many point charges $Q_\alpha$ at $\mathbf{r}_\alpha$, then by linearity the potential is just the sum:
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \sum_\alpha \frac{1}{4\pi} \frac{Q_\alpha}{|\mathbf{r}-\mathbf{r}_\alpha|}
\end{aligned}
\end{equation*}

Analogously, we can view the potential $\varphi$ of a density $\rho$ as a sum of contributions
\begin{equation*}
\begin{aligned}
\frac{1}{4\pi}\frac{\rho\left(\mathbf{r}'\right)dV'}{|\mathbf{r}-\mathbf{r}'|}
\end{aligned}
\end{equation*}
from charge elements $\rho\left(\mathbf{r}'\right)dV'$ ranging over $V'$, with $\sum_\alpha$ being replaced by $\int_{V'} dV'$.\\
This suggests that
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \frac{1}{4\pi} \int_{V'} \frac{\rho\left(\mathbf{r}'\right)}{|\mathbf{r}-\mathbf{r}'|}dV'
\end{aligned}
\end{equation*}
is a solution of the Poisson equation
\begin{equation*}
\begin{aligned}
\nabla^2\varphi = -\rho
\end{aligned}
\end{equation*}
Here the integral is taken with respect to $\mathbf{r}'$, and $\mathbf{r}\in\R^3$ is a parameter in the integral.\\
This is indeed \emph{the} unique solution with boundary conditions
\begin{equation*}
\begin{aligned}
&|\varphi\left(\mathbf{r}\right)| = O\left(\frac{1}{|\mathbf{r}|}\right),\\
&|\nabla\varphi\left(\mathbf{r}\right)| = O\left(\frac{1}{|\mathbf{r}|^2}\right)
\end{aligned}
\end{equation*}

\begin{eg}
Using the integral formula, we can calculate again the solution to
\begin{equation*}
\begin{aligned}
\nabla^2\varphi = \left\{
\begin{array}{ll}
-\rho_0 \text{   (constant)   } & |\mathbf{r}|\leq a\\
0 & |\mathbf{r}|\geq a
\end{array}
\right.
\end{aligned}
\end{equation*}
We have
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \frac{1}{4\pi}\int_{V'} \frac{\rho_0}{|\mathbf{r}-\mathbf{r}'|} dV'
\end{aligned}
\end{equation*}
Here $V'$ is the sphere $|\mathbf{r}'| \leq a$, centred at the origin.\\
Introduce spherical polar coordinates $\left(r',\theta,\chi\right)$,\\
(diagram to be inserted -- vc12)\\
with NS pole axis along $\mathbf{r}$, so $\theta$ is the latitude down from $\mathbf{r}$, $\chi$ is the angle around $\mathbf{r}$, and
\begin{equation*}
\begin{aligned}
dV' = r'^2 \sin\theta dr' d\theta d\chi
\end{aligned}
\end{equation*}
By the cosine rule,
\begin{equation*}
\begin{aligned}
|\mathbf{r}-\mathbf{r}'| &= \left(r^2+r'^2 - 2rr' \cos \theta\right)
^\frac{1}{2}
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) &= \frac{1}{4\pi} \int_0^a dr' \int_0^\pi d\theta \int_0^{2\pi} d\chi \frac{\rho_0 r'^2\sin\theta}{\sqrt{r^2+r'^2-2rr'\cos\theta}}\\
&=\frac{\rho_0}{2} \int_0^a dr' r'^2 \frac{1}{rr'}\left[\sqrt{r^2+r'^2 - 2rr'\cos\theta}\right]_{\theta=0}^{\theta=\pi}
\end{aligned}
\end{equation*}
Now
\begin{equation*}
\begin{aligned}
\left[\sqrt{r'^2+r^2-2rr'\cos\theta}\right]_{\theta=0}^{\theta=\pi} = \left[|\mathbf{r}-\mathbf{r'}|\right]_{\theta=0}^{\theta=\pi}:
\end{aligned}
\end{equation*}
when $\theta =\pi$: $\mathbf{r}$, $\mathbf{r'}$ are antiparallel, so $|\mathbf{r}-\mathbf{r}'| = r+r'$;\\
when $\theta = 0$: $\mathbf{r}$, $\mathbf{r'}$ are parallel, so $|\mathbf{r}-\mathbf{r'}| = |r-r'|$.\\
Then
\begin{equation*}
\begin{aligned}
\left(r+r'\right) - |r-r'| = \left\{
\begin{array}{ll}
2r' & r>r'\\
2r & r<r'
\end{array}
\right.
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \left\{
\begin{array}{ll}
\rho_0 \int_0^a dr' \frac{r'^2}{r} & r>a\\
\rho_0 \int_0^r dr' \frac{r'^2}{r} + \rho_0\int_r^a dr' r' & r\leq a
\end{array}
\right.
\end{aligned}
\end{equation*}
In the first case $r>r'$ throughout, as $r'\leq a<r$; in the second case $r'$ varies from $0$ to $a$, so we split the integral into $r'\in\left[0,r\right]$ (where $r'<r$) and $r'\in\left[r,a\right]$ (where $r'>r$). So
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \left\{
\begin{array}{ll}
\rho_0 \frac{1}{3}a^3 \frac{1}{r} & r>a\\
\rho_0 \left(-\frac{1}{6}r^2 + \frac{1}{2}a^2\right) & r\leq a
\end{array}
\right.
\end{aligned}
\end{equation*}
\emph{as before}.
\end{eg}

\subsection{Point sources and delta functions (non-examinable)}
For
\begin{equation*}
\begin{aligned}
\varphi\left(\mathbf{r}\right) = \frac{1}{4\pi} \int_{V'} \frac{\rho\left(\mathbf{r}'\right)}{|\mathbf{r}-\mathbf{r'}|} dV'
\end{aligned}
\end{equation*}
we have
\begin{equation}\label{eq:10}
\begin{aligned}
\nabla^2 \varphi = \frac{1}{4\pi} \int_{V'} \rho \left(\mathbf{r}'\right) \nabla^2 \left(\frac{1}{|\mathbf{r}-\mathbf{r}'|}\right) dV'
\end{aligned}
\end{equation}
Write $\mathbf{r}' = \mathbf{a}$ and consider
\begin{equation*}
\begin{aligned}
\psi = \frac{1}{4\pi}\frac{1}{|\mathbf{r}-\mathbf{a}|}
\end{aligned}
\end{equation*}
then calculate
\begin{equation*}
\begin{aligned}
&\nabla\psi = -\frac{1}{4\pi} \frac{\mathbf{r}-\mathbf{a}}{|\mathbf{r}-\mathbf{a}|^3},\\
&\nabla^2 \psi = 0\,(!)
\end{aligned}
\end{equation*}
for all $\mathbf{r}\neq \mathbf{a}$.\\
Note these are singular at $\mathbf{r}=\mathbf{a}$ and \emph{ok} for all $\mathbf{r}\neq \mathbf{a}$.\\
Also
\begin{equation*}
\begin{aligned}
\int_S \nabla\psi \cdot \mathbf{dS} = -1
\end{aligned}
\end{equation*}
(e.g.
by using $|\mathbf{r}-\mathbf{a}|$ as the radial coordinate $u$, $\left(\mathbf{r}-\mathbf{a}\right) = u \mathbf{e}_u$ etc.) for $S$ the surface of any sphere $V$ centred on $\mathbf{a}$.\\
So the divergence theorem would require (!):
\begin{equation*}
\begin{aligned}
\int_V \nabla^2 \psi dV = -1
\end{aligned}
\end{equation*}
yet $\nabla^2 \psi$ is zero for all $\mathbf{r}\neq \mathbf{a}$.\\
This holds if we take
\begin{equation*}
\begin{aligned}
\nabla^2 \left(\frac{1}{4\pi} \frac{1}{|\mathbf{r}-\mathbf{a}|}\right) = -\delta \left(\mathbf{r}-\mathbf{a}\right)
\end{aligned}
\end{equation*}
The RHS is a 3-D delta function -- a generalised function satisfying
\begin{equation*}
\begin{aligned}
\int_V f\left(\mathbf{r}\right) \delta \left(\mathbf{r}-\mathbf{a}\right) dV = f\left(\mathbf{a}\right)
\end{aligned}
\end{equation*}
for any $V$ containing $\mathbf{a}$.\\
Then (\ref{eq:10}) gives
\begin{equation*}
\begin{aligned}
\nabla^2 \varphi = -\rho \left(\mathbf{r}\right)
\end{aligned}
\end{equation*}
as wanted.\\
For a point source $Q$ at $\mathbf{a}$, the potential is
\begin{equation*}
\begin{aligned}
&\varphi\left(\mathbf{r}\right) = \frac{Q}{4\pi} \frac{1}{|\mathbf{r}-\mathbf{a}|},\\
&\nabla^2 \varphi = -Q\delta \left(\mathbf{r}-\mathbf{a}\right)=-\left(\text{charge density of point source  } Q \text{  at  } \mathbf{r} = \mathbf{a}\right)
\end{aligned}
\end{equation*}
This charge density is zero except at $\mathbf{r}=\mathbf{a}$, infinite at $\mathbf{r}=\mathbf{a}$, but any $\int_V$ about $\mathbf{a}$ gives $Q$.

\newpage
\section{Maxwell's equations}
Laws of electromagnetism (full dynamic case).\\
Let\\
$\mathbf{E}\left(\mathbf{r},t\right)$ be the electric field,\\
$\mathbf{B}\left(\mathbf{r},t\right)$ be the magnetic field,\\
$\rho\left(\mathbf{r},t\right)$ be the charge density,\\
$\mathbf{j}\left(\mathbf{r},t\right)$ be the current density.\\
Then
\begin{equation*}
\begin{aligned}
&\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0} & (1)\\
&\nabla\times\mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0 & (2)\\
&\nabla\cdot \mathbf{B} = 0 & (3)\\
& \nabla\times\mathbf{B}-\mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} = \mu_0 \mathbf{j} & (4)
\end{aligned}
\end{equation*}
($\epsilon_0$, $\mu_0$ are respectively the permittivity and permeability of free space -- constants).\\

\begin{thm} (Lorentz force law)\\
The force on a point charge $q$ at $\mathbf{r}\left(t\right)$ due to the $\mathbf{E}$ and $\mathbf{B}$ fields is
\begin{equation*}
\begin{aligned}
\mathbf{F} = q\left(\mathbf{E}+\dot{\mathbf{r}}\times\mathbf{B}\right)
\end{aligned}
\end{equation*}
\end{thm}

\subsection{Conservation of electric charge from Maxwell's equations}
Take the div of the fourth equation (noting $\nabla\cdot\left(\nabla\times\mathbf{B}\right)=0$) and use the first one:
\begin{equation*}
\begin{aligned}
-\mu_0 \epsilon_0 \frac{\partial}{\partial t}\left(\frac{\rho}{\epsilon_0}\right) = \mu_0 \nabla \cdot \mathbf{j}
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,
\end{aligned}
\end{equation*}
i.e. the conservation (continuity) equation for $\rho$, $\mathbf{j}$.
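A similar one-line check (not spelled out above): taking the div of the second equation gives
\begin{equation*}
\begin{aligned}
\nabla\cdot\left(\nabla\times\mathbf{E}\right) + \frac{\partial}{\partial t}\left(\nabla\cdot \mathbf{B}\right) = 0 \implies \frac{\partial}{\partial t}\left(\nabla\cdot \mathbf{B}\right) = 0,
\end{aligned}
\end{equation*}
so if $\nabla\cdot\mathbf{B}=0$ holds at one time, equation (2) preserves it for all time -- the equations are mutually consistent.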
Integral forms:\\
For (1) and (3) use the divergence theorem, and get respectively:
\begin{equation*}
\begin{aligned}
\int_S \mathbf{E}\cdot \mathbf{dS} = \frac{Q}{\epsilon_0}\\
\int_S \mathbf{B}\cdot \mathbf{dS} = 0
\end{aligned}
\end{equation*}
Here $S$ is any closed surface, enclosing total charge $Q$. For (2) and (4), use Stokes' theorem. For (2):
\begin{equation*}
\begin{aligned}
\int_C \mathbf{E}\cdot \mathbf{dr} = -\frac{d}{dt}\int_S \mathbf{B}\cdot \mathbf{dS}
\end{aligned}
\end{equation*}
$S$ is any surface with closed curve boundary $C$.\\
For (4):
\begin{equation*}
\begin{aligned}
\int_C \mathbf{B}\cdot\mathbf{dr} = \mu_0 \int_S \mathbf{j}\cdot \mathbf{dS} + \mu_0 \epsilon_0 \frac{d}{dt}\int_S \mathbf{E}\cdot \mathbf{dS}
\end{aligned}
\end{equation*}

Fully static case: $\rho$, $\mathbf{j}$, $\mathbf{E}$, $\mathbf{B}$ are all time independent. Then $\mathbf{E}$ and $\mathbf{B}$ decouple in the equations (they are not related in any single equation).\\
$\bullet$ Electrostatics:
\begin{equation*}
\begin{aligned}
&\nabla\cdot\mathbf{E} = \frac{\rho}{\epsilon_0}\\
&\nabla\times\mathbf{E} = 0\\
&\mathbf{E} = -\nabla\varphi\\
&\nabla^2 \varphi = -\frac{\rho}{\epsilon_0}
\end{aligned}
\end{equation*}
$\varphi$ is the electrostatic potential, satisfying the scalar Poisson equation.\\

$\bullet$ Magnetostatics:
\begin{equation*}
\begin{aligned}
&\nabla\cdot \mathbf{B} = 0 \implies \mathbf{B} = \nabla\times\mathbf{A}\\
&\nabla\times\mathbf{B} = \mu_0 \mathbf{j}
\end{aligned}
\end{equation*}
Here $\mathbf{A}$ is a vector potential (this exists in the full time-dependent case too). The freedom in the choice of $\mathbf{A}$ is
\begin{equation*}
\begin{aligned}
\mathbf{A} \to \mathbf{A} + \nabla \chi
\end{aligned}
\end{equation*}
for any $\chi$ (called \emph{gauge freedom}).\\

We can choose $\chi$ to make $\nabla\cdot\mathbf{A} = 0$, by solving
\begin{equation*}
\begin{aligned}
\nabla^2\chi = -\nabla\cdot\mathbf{A}_{\text{previous}}
\end{aligned}
\end{equation*}
for the previous choice of $\mathbf{A}$. Then
\begin{equation*}
\begin{aligned}
\nabla\times\mathbf{B} = \nabla\times\left(\nabla\times\mathbf{A}\right) = \nabla\underbrace{\left(\nabla\cdot\mathbf{A}\right)}_{=0}-\nabla^2 \mathbf{A}
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\nabla^2 \mathbf{A} = -\mu_0 \mathbf{j},
\end{aligned}
\end{equation*}
the \emph{vector} Poisson equation for $\mathbf{A}$.

\subsection{Electromagnetic waves}
Consider Maxwell's equations in empty space, i.e.
\begin{equation*}
\begin{aligned}
\rho = 0, \mathbf{j} = 0
\end{aligned}
\end{equation*}
Then as shown previously
\begin{equation*}
\begin{aligned}
\nabla^2 \mathbf{E} &= \nabla\underbrace{\left(\nabla\cdot\mathbf{E}\right)}_{=0 \text{ by } (1)}-\nabla\times\left(\nabla\times\mathbf{E}\right)\\
&= \nabla\times\frac{\partial \mathbf{B}}{\partial t} \text{  (by (2))}\\
&= \frac{\partial}{\partial t}\left(\mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) \text{  (by (4))}\\
&= \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}
\end{aligned}
\end{equation*}
i.e.,
\begin{equation*}
\begin{aligned}
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) \mathbf{E} = \mathbf{0},
\end{aligned}
\end{equation*}
and we get the same for $\mathbf{B}$, where $c^2 = \frac{1}{\epsilon_0 \mu_0}$, i.e. the components of $\mathbf{E}$ and $\mathbf{B}$ satisfy the wave equation in 3 dimensions with speed of propagation
\begin{equation*}
\begin{aligned}
c=\frac{1}{\sqrt{\epsilon_0 \mu_0}}
\end{aligned}
\end{equation*}
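For instance (a standard illustrative solution; the constants $E_0$, $k$ are introduced here just for the example): the plane wave $\mathbf{E} = E_0 \cos\left(k\left(z-ct\right)\right)\mathbf{e}_1$ satisfies this, since
\begin{equation*}
\begin{aligned}
\nabla^2 \mathbf{E} = \frac{\partial^2 \mathbf{E}}{\partial z^2} = -k^2\mathbf{E} = \frac{1}{c^2}\frac{\partial^2 \mathbf{E}}{\partial t^2},
\end{aligned}
\end{equation*}
and also $\nabla\cdot\mathbf{E}=0$, as $\mathbf{E}$ depends only on $z$ but points along $\mathbf{e}_1$.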
Recall the 1D wave equation:
\begin{equation*}
\begin{aligned}
\left(\frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) f\left(x,t\right) = 0
\end{aligned}
\end{equation*}
has solutions
\begin{equation*}
\begin{aligned}
f\left(x,t\right) = g\left(x\pm ct\right)
\end{aligned}
\end{equation*}
for any (twice differentiable) profile $g$; $g\left(x-ct\right)$/$g\left(x+ct\right)$ is a right/left moving wave of shape $g\left(x\right)$.\\
So Maxwell's equations predict the existence of electromagnetic waves moving at speed
\begin{equation*}
\begin{aligned}
c=\frac{1}{\sqrt{\epsilon_0 \mu_0}}\approx 2.998\times 10^8 \text{ m/s}
\end{aligned}
\end{equation*}
from the measured values of $\epsilon_0$, $\mu_0$, i.e. we get the expected value of the speed of light!\\
Light is identified as waves of electromagnetism.

\newpage
\section{Tensors}
\subsection{Introduction}
A generalisation of the idea of vectors as \emph{geometrical objects}.\\
Consider a vector $\mathbf{v}$ in space (e.g. the velocity of a particle).
It is described by 3 real numbers (components) only after we have chosen a basis (or Cartesian coordinates).\\
The components depend on the \emph{choice} of basis.\\
But $\mathbf{v}$ (velocity) has physical significance (an existence independent of the choice of any basis, i.e. it is a geometrical object).\\
To get a description independent of any \emph{particular choice} of basis, we can give its components for all possible choices of orthonormal bases.\\
Consider right-handed orthonormal bases
\begin{equation*}
\begin{aligned}
B=\left\{\mathbf{e}_i\right\},B' = \left\{\mathbf{e}_i'\right\}
\end{aligned}
\end{equation*}
in 3D space; any two such bases are related by a \emph{rotation} $R$:
\begin{equation*}
\begin{aligned}
\mathbf{e}_i' = R_{ip} \mathbf{e}_p
\end{aligned}
\end{equation*}
For any vector
\begin{equation*}
\begin{aligned}
\mathbf{v}=v_i \mathbf{e}_i = v_i' \mathbf{e}_i'
\end{aligned}
\end{equation*}
the components are related by
\begin{equation*}
\begin{aligned}
v_i' = R_{ip}v_p
\end{aligned}
\end{equation*}
Recall: a rotation matrix $R$ has
\begin{equation*}
\begin{aligned}
RR^T = R^TR = I,
\end{aligned}
\end{equation*}
i.e.
\begin{equation*}
\begin{aligned}
R_{ip}R_{jp} = R_{qi} R_{qj} = \delta_{ij}
\end{aligned}
\end{equation*}
So
\begin{equation}\label{eq:11}
\begin{aligned}
R_{ip} R_{jq}\delta_{pq} = \delta_{ij}
\end{aligned}
\end{equation}
We have $\det R = 1$, so
\begin{equation}\label{eq:12}
\begin{aligned}
\epsilon_{pqr} R_{ip} R_{jq} R_{kr} = \epsilon_{ijk} \det R = \epsilon_{ijk}
\end{aligned}
\end{equation}
So a vector $\mathbf{v}$ is a triple of numbers $\left(v_1,v_2,v_3\right)$ for each orthonormal basis $B=\left\{\mathbf{e}_i\right\}$ such that if $B'$ is related to $B$ by a rotation $R$ (as above), the corresponding triples are related by
\begin{equation*}
\begin{aligned}
v'_i = R_{ip} v_p,
\end{aligned}
\end{equation*}
called the \emph{vector transformation rule}.
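As a concrete instance (a standard example): for $B'$ obtained from $B$ by a rotation through angle $\theta$ about the $\mathbf{e}_3$ axis,
\begin{equation*}
\begin{aligned}
R = \begin{pmatrix}
\cos\theta & \sin\theta & 0\\
-\sin\theta & \cos\theta & 0\\
0 & 0 & 1
\end{pmatrix},
\end{aligned}
\end{equation*}
one checks $RR^T = I$ and $\det R = 1$, and the rule gives e.g. $v'_1 = \cos\theta\, v_1 + \sin\theta\, v_2$, the familiar change of components under a rotated basis.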
\begin{defi}
A (Cartesian) tensor of rank $n$ has components $T_{ij...k}$ ($n$ indices) with respect to each orthonormal basis $B=\left\{\mathbf{e}_i\right\}$, satisfying the \emph{tensor transformation law}:
\begin{equation*}
\begin{aligned}
T'_{ij...k} = R_{ip} R_{jq} ... R_{kr} T_{pq...r}
\end{aligned}
\end{equation*}
when bases $B$ and $B'$ are related by a rotation $R$.
\end{defi}

\begin{eg}
Tensor of rank 0: no indices, $T=T'$ -- a scalar.\\
Tensor of rank 1: $T'_i = R_{ip} T_p$ -- a vector.\\
Tensor of rank 2: $T'_{ij} = R_{ip} R_{jq} T_{pq}$ -- the components form a matrix.\\
If $\mathbf{u}$, $\mathbf{v}$, ..., $\mathbf{w}$ are $n$ vectors, then
\begin{equation*}
\begin{aligned}
T_{ij...k} = u_i v_j ... w_k
\end{aligned}
\end{equation*}
is a tensor of rank $n$, called the \emph{outer product} or \emph{tensor product} of the vectors.
\end{eg}

Note that (\ref{eq:11}) and (\ref{eq:12}) say exactly that $\delta_{ij}$ and $\epsilon_{ijk}$ are tensors of ranks 2 and 3 with the \emph{special property} that their components are the same with respect to any basis or coordinate system -- called \emph{isotropic} or \emph{invariant} tensors.

\subsection{Rank 2 tensors, linear maps, matrices}
A linear map on vectors is represented as a matrix acting on components, given a choice of basis,\\
e.g. in $B$ or $B'$:
\begin{equation}\label{eq:13}
\begin{aligned}
v_i = M_{ij} u_j \text{  or  } v'_i = M'_{ij} u'_j
\end{aligned}
\end{equation}
The matrix depends on the choice of basis.\\
If
\begin{equation*}
\begin{aligned}
\mathbf{e}'_i = R_{ij} \mathbf{e}_j
\end{aligned}
\end{equation*}
where $R$ is the rotation matrix, then
\begin{equation*}
\begin{aligned}
M' = RMR^{-1}
\end{aligned}
\end{equation*}
But $R$ is a rotation matrix, so orthogonal. So
\begin{equation*}
\begin{aligned}
M'_{ij} = R_{ip} M_{pq} \left(R^{-1}\right)_{qj} = R_{ip} M_{pq} R_{jq},
\end{aligned}
\end{equation*}
i.e. $M$ satisfies the rank 2 tensor transformation law.
(One can also see this from (\ref{eq:13}) by substituting the vector transformation laws $v'_i = R_{ij}v_j$ and $v_j = R_{ij} v'_i$.)

\begin{eg} (Conductivity tensor)\\
In many substances, an applied electric field $\mathbf{E}$ induces a current density $\mathbf{j}$ via a linear relation (Ohm's law):
\begin{equation*}
\begin{aligned}
j_i = \sigma_{ik} E_k
\end{aligned}
\end{equation*}
so by the above $\sigma$ is a tensor -- called the conductivity tensor.\\
In general, substances may conduct differently in different directions, and $\mathbf{j}$ is not parallel to $\mathbf{E}$.\\
For an isotropic substance, $\sigma_{ik} = \sigma \delta_{ik}$, so $\mathbf{j} = \sigma \mathbf{E}$ and they are parallel.
\end{eg}

\subsection{Tensor algebra}
\subsubsection{Addition and scalar multiplication}
If $S,T$ are tensors of the same rank $n$, then:\\
$\bullet$ $\left(S+T\right)$, \emph{defined} by (in any basis)
\begin{equation*}
\begin{aligned}
\left(S+T\right)_{ij...k} = S_{ij...k} + T_{ij...k},
\end{aligned}
\end{equation*}
$\bullet$ $\alpha T$ for scalar $\alpha$, defined by
\begin{equation*}
\begin{aligned}
\left(\alpha T\right)_{ij...k} = \alpha T_{ij...k},
\end{aligned}
\end{equation*}
are also \emph{tensors} of rank $n$.\\
(Easy check, e.g.
\\subsection{Tensor algebra}\n\\subsubsection{Addition and scalar multiplication}\nIf $S,T$ are tensors of some rank $n$, then:\\\\\n$\\bullet$ $\\left(S+T\\right)$ \\emph{defined} by (in any basis):\n\\begin{equation*}\n\\begin{aligned}\n\\left(S+T\\right)_{ij...k} = S_{ij...k} + T_{ij...k}\n\\end{aligned}\n\\end{equation*}\n$\\bullet$ $\\alpha T$ for a scalar $\\alpha$, defined by\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\alpha T\\right)_{ij...k} = \\alpha T_{ij...k}\n\\end{aligned}\n\\end{equation*}\nare also \\emph{tensors} of rank $n$.\\\\\nEasy to check, e.g. for rank 2:\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\left(S+T\\right)'_{ij} &= S'_{ij} + T'_{ij}\\\\\n&= R_{ip} R_{jq} S_{pq} + R_{ip} R_{jq} T_{pq}\\\\\n&= R_{ip} R_{jq} \\left(S_{pq} + T_{pq}\\right)\\\\\n&= R_{ip} R_{jq} \\left(S+T\\right)_{pq}\n\\end{aligned}\n\\end{equation*}\n\n\\subsubsection{Tensor products}\nIf $S,T$ are tensors of ranks $m,n$ respectively, then the tensor product $S\\otimes T$ has rank $m+n$, and is defined by\n\\begin{equation*}\n\\begin{aligned}\n\\left(S\\otimes T\\right)_{ij...k pq...r} = S_{ij...k} T_{pq...r}\n\\end{aligned}\n\\end{equation*}\nThis clearly satisfies the correct transformation rule (for $m+n$ suffixes).\\\\\nThe definition of $S\\otimes T\\otimes U\\otimes...$ is similar.\\\\\n$\\otimes$ is associative, but \\emph{not} commutative.\\\\\nE.g. for vectors $\\mathbf{u}$, $\\mathbf{v}$,\n\\begin{equation*}\n\\begin{aligned}\nT=\\mathbf{u}\\otimes \\mathbf{v}\n\\end{aligned}\n\\end{equation*}\nhas components $T_{ij} = u_i v_j$, but\n\\begin{equation*}\n\\begin{aligned}\nS=\\mathbf{v}\\otimes \\mathbf{u}\n\\end{aligned}\n\\end{equation*}\nhas components $S_{ij} = v_i u_j$.\\\\\n\n\\subsubsection{Contractions}\nFor a rank $n$ tensor $T_{ijp...q}$, introduce $S_{p...q}$ with $\\left(n-2\\right)$ indices defined by\n\\begin{equation*}\n\\begin{aligned}\nS_{p...q} = \\delta_{ij} T_{ijp...q} = T_{iip...q}\n\\end{aligned}\n\\end{equation*}\ni.e. \\emph{contracting} the two indices $i$ and $j$.\\\\\nThen $S$ is a \\emph{tensor} of rank $\\left(n-2\\right)$.\\\\\nTo prove this, see\n\\begin{equation*}\n\\begin{aligned}\nS'_{p...q} &= \\delta_{ij} T'_{ijp...q}\\\\\n&= \\delta_{ij} R_{ia} R_{jb} R_{pc} ... R_{qd} T_{abc...d}\\\\\n&= \\delta_{ab} R_{pc} ... R_{qd} T_{abc...d}\\\\\n&= R_{pc} ... R_{qd} S_{c...d}\n\\end{aligned}\n\\end{equation*}\nwhich is the correct transformation law for $n-2$ indices.\\\\\n\n\\begin{eg}\nFor rank 2 tensors: the scalar $T_{ii}$ = trace of the $3\\times 3$ matrix $T_{ij}$.\\\\\nIn matrix notation:\n\\begin{equation*}\n\\begin{aligned}\nT'_{ii} = \\tr\\left(T'\\right) = \\tr\\left(RTR^T\\right) = \\tr\\left(T\\right) = T_{ii}.\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n
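\\begin{eg}\nContracting an outer product gives a scalar product: for $T_{ij} = u_i v_j$,\n\\begin{equation*}\n\\begin{aligned}\n\\delta_{ij} T_{ij} = u_i v_i = \\mathbf{u}\\cdot\\mathbf{v}\n\\end{aligned}\n\\end{equation*}\na tensor of rank 0 (a scalar), as expected.\n\\end{eg}\n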
\\subsection{Symmetric and antisymmetric tensors}\nIf \n\\begin{equation*}\n\\begin{aligned}\nT_{...i...j...} = \\left\\{\n\\begin{array}{ll}\n+T_{...j...i...} & T \\text{ symmetric in indices } i,j\\\\\n-T_{...j...i...} & T \\text{ antisymmetric in indices } i,j\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\n(If this holds in one coordinate system, then it holds in all coordinate systems.)\\\\\n$T$ is \\emph{totally} symmetric/antisymmetric if it is symmetric/antisymmetric in all pairs of indices.\\\\\n\n\\begin{eg}\n$\\delta_{ij}$ is a totally symmetric rank 2 tensor.\\\\\n$\\epsilon_{ijk}$ is a totally antisymmetric rank 3 tensor.\n\\end{eg}\n\nFor any $n$, there are totally symmetric tensors of rank $n$, e.g. let $\\mathbf{u}$ be a vector and $T_{ij...k} = u_i u_j ... u_k$.\\\\\nIn 3D:\\\\\nany totally antisymmetric rank 3 tensor $T$ has the form\n\\begin{equation*}\n\\begin{aligned}\nT_{ijk} = \\lambda \\epsilon_{ijk}\n\\end{aligned}\n\\end{equation*}\nsince, if $T_{123} = \\lambda$, then all the other components are fixed to be $\\pm \\lambda$ or 0 due to antisymmetry.\\\\\nAny totally anti-symmetric tensor $T$ of rank greater than 3 must be trivial, i.e. $T=0$. This is because for any choice of more than 3 indices $ij...k \\in \\left\\{1,2,3\\right\\}$ in $T_{ij...k}$, at least two of them must be the same.\n\n\\subsection{Tensors as multi-linear maps, quotient rule}\nA multi-linear map $T$ from $n$ vectors $\\mathbf{a},\\mathbf{b},...,\\mathbf{c}$ to $\\R$ is a map $T\\left(\\mathbf{a},\\mathbf{b},...,\\mathbf{c}\\right)$ that is linear in each argument separately.\\\\\nSo in a basis $B=\\left\\{\\mathbf{e}_i\\right\\}$, $\\mathbf{a} = a_i \\mathbf{e}_i$ etc.\\\\\nLinearity gives\n\\begin{equation*}\n\\begin{aligned}\nT\\left(\\mathbf{a}, \\mathbf{b},...,\\mathbf{c}\\right) = T_{ij...k} a_i b_j ... c_k\n\\end{aligned}\n\\end{equation*}\nwhere the coefficients are\n\\begin{equation*}\n\\begin{aligned}\nT_{ij...k} = T\\left(\\mathbf{e}_i, \\mathbf{e}_j,...,\\mathbf{e}_k\\right)\n\\end{aligned}\n\\end{equation*}\nFor any other basis $B'=\\left\\{\\mathbf{e}'_i\\right\\}$ related by rotation $R$,\n\\begin{equation*}\n\\begin{aligned}\nT'_{ij...k} &= T\\left(\\mathbf{e}'_i, \\mathbf{e}'_j, ..., \\mathbf{e}'_k\\right)\\\\\n&= T\\left(R_{ip} \\mathbf{e}_p, R_{jq} \\mathbf{e}_q, ..., R_{kr} \\mathbf{e}_r\\right)\\\\\n&= R_{ip} R_{jq} ... R_{kr} T_{pq...r}\n\\end{aligned}\n\\end{equation*}\nSo $T_{ij...k}$ \\emph{is} a tensor of rank $n$.\n\nConversely, given a tensor $T$ of rank $n$, define $T\\left(\\mathbf{a},\\mathbf{b},...,\\mathbf{c}\\right)$ to be\n\\begin{equation*}\n\\begin{aligned}\nT_{ij...k} a_i b_j ... c_k\n\\end{aligned}\n\\end{equation*}\nfor components in some basis - \\emph{independent} of the choice of basis (as $T$ is a tensor and the expression is a complete contraction) - so this defines a multi-linear map on \\emph{vectors}.\\\\\nThus there is a 1-1 correspondence between rank $n$ tensors and multi-linear maps from $n$ vectors to $\\R$.\n
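\\begin{eg}\nUnder this correspondence, $\\delta_{ij}$ is the bilinear map $\\left(\\mathbf{a},\\mathbf{b}\\right) \\mapsto \\delta_{ij} a_i b_j = \\mathbf{a}\\cdot\\mathbf{b}$, and $\\epsilon_{ijk}$ is the trilinear map $\\left(\\mathbf{a},\\mathbf{b},\\mathbf{c}\\right) \\mapsto \\epsilon_{ijk} a_i b_j c_k = \\mathbf{a}\\cdot\\left(\\mathbf{b}\\times\\mathbf{c}\\right)$.\n\\end{eg}\n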
\\subsection{Quotient rule}\nWe know that if $T_{i...jp...q}$ ($n$ indices in $i...j$ and $m$ indices in $p...q$) is a tensor of rank $m+n$ and $u_{p...q}$ is any tensor of rank $m$, then the contraction\n\\begin{equation*}\n\\begin{aligned}\nv_{i...j} = T_{i...jp...q} u_{p...q}\n\\end{aligned}\n\\end{equation*}\nis a tensor of rank $n$.\\\\\nThe converse is called the \\emph{quotient rule}:\nSuppose $T_{i...jp...q}$ is any array of numbers (given for each basis) such that for any tensor $u_{p...q}$,\n\\begin{equation*}\n\\begin{aligned}\nv_{i...j} = T_{i...jp...q} u_{p...q}\n\\end{aligned}\n\\end{equation*}\nis a \\emph{tensor} of rank $n$; then $T_{i...jp...q}$ is also a tensor (rank $m+n$).\n\\begin{proof}\nTake\n\\begin{equation*}\n\\begin{aligned}\nu_{p...q} = c_{p}...d_q\n\\end{aligned}\n\\end{equation*}\nfor $m$ vectors $\\mathbf{c},...,\\mathbf{d}$. Then the resulting $v_{i...j}$ is a \\emph{tensor}.\\\\\nWe have: for any $n$ vectors $\\mathbf{a},...,\\mathbf{b}$,\n\\begin{equation*}\n\\begin{aligned}\nv_{i...j} a_i...b_j = T_{i...jp...q} a_i...b_j c_p...d_q\n\\end{aligned}\n\\end{equation*}\nis a \\emph{scalar}, so it defines a multi-linear map on $m+n$ vectors, so the coefficients $T_{i...jp...q}$ must be a tensor (of rank $m+n$).\n\\end{proof}\n\n\\subsection{Tensor calculus}\nA tensor \\emph{field} is a tensor $T_{ij...k} \\left(\\mathbf{x}\\right)$ at each point.\\\\\nIt is smooth if each component function is smooth (in \\emph{any} coordinate system).\\\\\nThen the \\emph{derivative}\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial}{\\partial x_p}...\\frac{\\partial}{\\partial x_q} T_{ij...k}\n\\end{aligned}\n\\end{equation*}\n($m$ of the partial derivatives) is a new \\emph{tensor} of rank $\\left(m+n\\right)$.\\\\\nThe correct transformation law on the new indices follows from the chain rule: if\n\\begin{equation*}\n\\begin{aligned}\nx'_p = R_{pq}x_q\n\\end{aligned}\n\\end{equation*}\n($R$ rotation, constant) then\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial}{\\partial x'_p} = R_{pq} \\frac{\\partial}{\\partial x_q}\n\\end{aligned}\n\\end{equation*}\nso e.g. for a rank 1 tensor $T_i$ and one derivative\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial T'_i}{\\partial x'_p} = R_{pq} \\frac{\\partial}{\\partial x_q} \\left(R_{ij}T_j\\right) = R_{pq}R_{ij} \\left(\\frac{\\partial T_j}{\\partial x_q}\\right)\n\\end{aligned}\n\\end{equation*}\n\n\\emph{Integrals}: of a tensor field e.g.\n\\begin{equation*}\n\\begin{aligned}\n\\int_V T_{ij...k} dV\n\\end{aligned}\n\\end{equation*}\nis defined by integrals of the component functions. The transformation rule for the result holds (so it is still a tensor), since integrals are limits of sums.\\\\\n\nThe divergence theorem can be generalised from vectors to tensors:\n\n\\begin{thm}\nLet $V$ be a volume bounded by a smooth closed surface $S=\\partial V$, outward unit normal $\\mathbf{n}$, and let $T_{ij...kl}$ be a smooth tensor field. Then\n\\begin{equation}\\label{eq:14}\n\\begin{aligned}\n\\int_S T_{ij...kl} n_l dS = \\int_V \\frac{\\partial}{\\partial x_l} \\left(T_{ij...kl}\\right) dV\n\\end{aligned}\n\\end{equation}\n\\begin{proof}\nApply the vector divergence theorem to the vector field $\\mathbf{v}$ defined by\n\\begin{equation*}\n\\begin{aligned}\nv_l = a_i b_j ... c_k T_{ij...kl}\n\\end{aligned}\n\\end{equation*}\nwhere $\\mathbf{a}$, $\\mathbf{b}$, ... $\\mathbf{c}$ are any constant vectors. Then\n\\begin{equation*}\n\\begin{aligned}\n&\\nabla\\cdot\\mathbf{v} = \\frac{\\partial v_l}{\\partial x_l} = a_i b_j ... c_k \\frac{\\partial}{\\partial x_l} T_{ij...kl}\\\\\n&\\mathbf{n}\\cdot\\mathbf{v} = a_i b_j ... c_k T_{ij...kl} n_l\n\\end{aligned}\n\\end{equation*}\nso we get (\\ref{eq:14}) contracted with $a_i b_j ... c_k$ on both sides.\\\\\nSetting $\\mathbf{a}$, $\\mathbf{b}$, ..., $\\mathbf{c}$ to all choices of basis vectors gives (\\ref{eq:14}).\n\\end{proof}\n\\end{thm}\n
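\\begin{eg}\nFor rank 1 ($T_l$, a vector field), (\\ref{eq:14}) is the usual divergence theorem; taking $T_{il} = \\delta_{il}\\phi$ for a scalar field $\\phi$ gives\n\\begin{equation*}\n\\begin{aligned}\n\\int_S \\phi \\, n_i dS = \\int_V \\frac{\\partial \\phi}{\\partial x_i} dV\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n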
\\subsection{Tensors of rank 2}\n\\emph{Decomposition}: any $T_{ij}$ can be written as a sum of symmetric and anti-symmetric parts:\n\\begin{equation*}\n\\begin{aligned}\nT_{ij} = S_{ij} + A_{ij}\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\nS_{ij} = \\frac{T_{ij} + T_{ji}}{2},\\\\\nA_{ij} = \\frac{T_{ij} - T_{ji}}{2}\n\\end{aligned}\n\\end{equation*}\nThen further reduce the symmetric part:\n\\begin{equation*}\n\\begin{aligned}\nS_{ij} = P_{ij} + \\frac{1}{3}\\delta_{ij} Q\n\\end{aligned}\n\\end{equation*}\nwith $P_{ij}$ traceless (i.e. $P_{ii} = 0$) and $Q=S_{ii} = T_{ii}$, since $A_{ii} = 0$.\\\\\nIf $T$ is a tensor then $A,S,P,Q$ are all tensors too.\\\\\n\\begin{equation*}\n\\begin{aligned}\n\\text{Tensor} &\\quad \\text{no. of independent components}\\\\\nT &\\quad 9\\\\\nA &\\quad 3\\\\\nS &\\quad 6\\\\\nP &\\quad 5\\\\\nQ &\\quad 1\n\\end{aligned}\n\\end{equation*}\nFor the anti-symmetric part: any anti-symmetric $3\\times 3$ matrix $A$ can be written\n\\begin{equation*}\n\\begin{aligned}\n\\left(A_{ij}\\right) = \\left[\n\\begin{matrix}\n0 & B_3 & -B_2 \\\\\n-B_3 & 0 & B_1 \\\\\nB_2 & -B_1 & 0 \\\\\n\\end{matrix}\\right]\n\\end{aligned}\n\\end{equation*}\nThen \n\\begin{equation*}\n\\begin{aligned}\nA_{ij} = \\epsilon_{ijk} B_k\n\\end{aligned}\n\\end{equation*}\nbecause, by the $\\epsilon-\\delta$ identity,\n\\begin{equation*}\n\\begin{aligned}\nB_k = \\frac{1}{2} \\epsilon_{ijk} A_{ij} \\iff A_{ij} = \\epsilon_{ijk} B_k\n\\end{aligned}\n\\end{equation*}\nNote also\n\\begin{equation*}\n\\begin{aligned}\nB_k = \\frac{1}{2}\\epsilon_{ijk} T_{ij}\n\\end{aligned}\n\\end{equation*}\nas $\\epsilon_{ijk} S_{ij} = 0$ ($\\epsilon_{ijk}$ is antisymmetric in $i,j$, so the symmetric part of $T$ drops out).\\\\\nSummary:\n\\begin{equation*}\n\\begin{aligned}\nT_{ij} = P_{ij} + \\epsilon_{ijk} B_k + \\frac{1}{3}\\delta_{ij} Q\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n&B_k = \\frac{1}{2}\\epsilon_{ijk} T_{ij}\\\\\n&Q = T_{kk}\\\\\n&P_{ij} = \\frac{T_{ij} + T_{ji}}{2} - \\frac{1}{3} \\delta_{ij} T_{kk}\n\\end{aligned}\n\\end{equation*}\n\n\\begin{eg}\nIf $F_i\\left(\\mathbf{r}\\right)$ is a vector field, then its derivative $T_{ij} = \\frac{\\partial F_i}{\\partial x_j}$ is a rank 2 tensor with\n\\begin{equation*}\n\\begin{aligned}\n&P_{ij} = \\frac{1}{2}\\left(\\frac{\\partial F_i}{\\partial x_j} + \\frac{\\partial F_j}{\\partial x_i}\\right) - \\frac{1}{3}\\delta_{ij} \\left(\\nabla\\cdot\\mathbf{F}\\right)\\\\\n&B_k = \\frac{1}{2}\\epsilon_{ijk} \\frac{\\partial F_i}{\\partial x_j} = -\\frac{1}{2} \\left(\\nabla\\times\\mathbf{F}\\right)_k\\\\\n&Q = \\frac{\\partial F_k}{\\partial x_k} = \\nabla\\cdot\\mathbf{F}.\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n\n\\subsection{The inertia tensor}\nConsider masses $m_\\alpha$, positions $\\mathbf{r}_\\alpha$, all rotating with angular velocity $\\boldsymbol{\\omega}$ about $\\mathbf{0}$.\\\\\nSo the velocities are $\\mathbf{v}_\\alpha = \\boldsymbol{\\omega} \\times \\mathbf{r}_\\alpha$.\\\\\nThe total angular momentum about $\\mathbf{0}$ is\n\\begin{equation*}\n\\begin{aligned}\n\\mathbf{L} = \\sum_\\alpha \\mathbf{r}_\\alpha \\times m_\\alpha \\mathbf{v}_\\alpha = \\sum_\\alpha m_\\alpha \\mathbf{r}_\\alpha \\times \\left(\\boldsymbol{\\omega} \\times \\mathbf{r}_\\alpha\\right)\n\\end{aligned}\n\\end{equation*}\nIn components $L_i = I_{ij} \\omega_j$, where\n\\begin{equation*}\n\\begin{aligned}\nI_{ij} = \\sum_\\alpha m_\\alpha \\left(\\left(\\mathbf{r}_\\alpha\\right)_k \\left(\\mathbf{r}_\\alpha\\right)_k \\delta_{ij} - \\left(\\mathbf{r}_\\alpha\\right)_i \\left(\\mathbf{r}_\\alpha\\right)_j\\right)\n\\end{aligned}\n\\end{equation*}\nis the \\emph{inertia tensor} about $\\mathbf{0}$.\n\n\\end{document}", "meta": {"hexsha": "11e96876889cf39a2447c9b12db1f1c98ef8851c", "size": 90385, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/Vector Calculus.tex", "max_stars_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_stars_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T17:34:25.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-01-25T17:34:25.000Z", "max_issues_repo_path": "Notes/Vector Calculus.tex", "max_issues_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_issues_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/Vector Calculus.tex", "max_forks_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_forks_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.1176209498, "max_line_length": 606, "alphanum_fraction": 0.6793383858, "num_tokens": 34257, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971211, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.5907444658658315}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\usepackage{graphicx}\n\\usepackage{tabularx}\n\\usepackage{multicol}\n\n\\usepackage[english]{babel}\n\\newtheorem{theorem}{Theorem}\n\n% Geometry \n\\usepackage{geometry}\n\\geometry{letterpaper, left=15mm, top=20mm, right=15mm, bottom=20mm}\n\n% Fancy Header\n\\usepackage{fancyhdr}\n\\renewcommand{\\footrulewidth}{0.4pt}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\chead{MAT 341 - Linear Algebra}\n\\lfoot{CALU Fall 2021}\n\\rfoot{RDK}\n\n% Add vertical spacing to tables\n\\renewcommand{\\arraystretch}{1.4}\n\n% Macros\n\\newcommand{\\definition}[1]{\\underline{\\textbf{#1}}}\n\n\\newenvironment{rcases}\n  {\\left.\\begin{aligned}}\n  {\\end{aligned}\\right\\rbrace}\n\n% Begin Document\n\\begin{document}\n\n\\section*{Section 1.3: Vector Equations}\n\n\\begin{itemize}\n\n  \\item A matrix with only one column is called a \\definition{column vector}, or simply a \\definition{vector}.\n  \n  \\item An example of a vector with two entries, where $w_1$ and $w_2$ are any real numbers, is:\n  \\begin{equation*}\n      w = \\begin{bmatrix}\n        w_1 \\\\ w_2\n      \\end{bmatrix}\n  \\end{equation*}\n\n  \\item The set of all vectors with 2 entries is denoted by $\\mathbb{R}^2$.\n  \n  \\item The $\\mathbb{R}$ stands for the real numbers that appear as entries in the vector, and the exponent 2 indicates that each vector contains 2 entries.\n  \n  \\item Two vectors in $\\mathbb{R}^2$ are \\textbf{equal} if and only if their corresponding entries are equal.\n  \n  \\item Given two vectors \\textbf{u} and \\textbf{v} in $\\mathbb{R}^2$, their \\textbf{sum} is the vector $\\textbf{u} + \\textbf{v}$ obtained by adding the corresponding entries of \\textbf{u} and \\textbf{v}.\n  \n  \\item Given a vector \\textbf{u} and a real number \\textbf{c}, the \\definition{scalar multiplication} of \\textbf{u} by \\textbf{c} is the vector \\textbf{cu} obtained by multiplying each entry in \\textbf{u} by \\textbf{c}.\n  \n  \\item Consider a rectangular coordinate system in the plane. 
Because each point in the plane is determined by an ordered pair of numbers, we can identify a geometric point $(a,b)$ with the column vector \n  $\\begin{bmatrix}\n    a \\\\ b\n  \\end{bmatrix}$.\n\n  \\item We may regard $\\mathbb{R}^2$ as the set of all points in the plane.\n  \n  \\item The vector whose entries are all zero is called the \\definition{zero vector} and is denoted by \\textbf{0}.\n  \n  \\item For all \\textbf{u}, \\textbf{v}, \\textbf{w} in $\\mathbb{R}^2$ and scalars \\textit{c} and \\textit{d}:\n  \\begin{enumerate}\n      \\item $u + v = v + u$\n      \\item $(u + v) + w = u + (v + w)$\n      \\item $u + 0 = 0 + u = u$\n      \\item $u + (-u) = 0$\n      \\item $c(u + v) = cu + cv$\n      \\item $(c+d)u = cu + du$\n      \\item $c(du) = (cd)u$\n      \\item $1u = u$\n  \\end{enumerate}\n\n  \\item The vector $y = c_1v_1 + \\cdots + c_pv_p$ is called a \\definition{linear combination} of $v_1, \\ldots, v_p$.\n  \n  \\item A vector equation\n  \\begin{equation*}\n      x_1a_1 + \\cdots + x_na_n = b\n  \\end{equation*}\n  has the same solution set as the linear system whose augmented matrix is\n  \\begin{equation*}\n      \\begin{bmatrix}\n          a_1 & a_2 & \\cdots & a_n & b\n      \\end{bmatrix}\n  \\end{equation*}\n\n  \\item If $v_1, \\ldots, v_p$ are in $\\mathbb{R}^2$, then the set of all linear combinations of $v_1, \\ldots, v_p$ is denoted by\n  $Span\\{v_1, \\ldots, v_p\\}$ and is called the \\definition{subset of $\\mathbb{R}^2$ spanned by $v_1, \\ldots, v_p$}.\n\n\\end{itemize}\n\n\n\\end{document}", "meta": {"hexsha": "2b54bb066071d3f5fe926474902810c430ecfead", "size": 3384, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter 1/Section 3/notes.tex", "max_stars_repo_name": "Bkrenz/calu-mat341", "max_stars_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter 1/Section 3/notes.tex", "max_issues_repo_name": "Bkrenz/calu-mat341", "max_issues_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter 1/Section 3/notes.tex", "max_forks_repo_name": "Bkrenz/calu-mat341", "max_forks_repo_head_hexsha": "2628f0755dde2e4a933131e23cbe8168444fd77c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1764705882, "max_line_length": 220, "alphanum_fraction": 0.6713947991, "num_tokens": 1139, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110202, "lm_q2_score": 0.8080672181749422, "lm_q1q2_score": 0.590744462486748}}
{"text": "\\subsection{Performance Measure}\\label{sssection:PerfMeas}\nThe agent's performance measure maps sequences of environment states to the real numbers. Given the above definitions, we decided that environment states of the form\n\\[ <x_i, terminated\\_x_i> \\]\nshould be of high value, as they indicate that the agent has correctly identified the location of the target in the environment. Secondary to this, sequences of environment states that take longer to end in a terminal state should be valued lower than shorter ones, reflecting our desire for the agent to terminate its search in the minimum possible amount of time. Therefore, the performance measure primarily gives high values to the agent when it correctly identifies the location of the target or correctly concludes that the target is not present, with a secondary ordering on value determined by the time taken to come to a conclusion. The actual value of the function only needs to adhere to this ordering, but we arbitrarily defined it as:\n%\\note{Be careful that this agrees with the rest}\n\\[\nPerformance Measure(state_1,..., state_t) = \n\\begin{cases}\n\\frac{1}{t} \\quad \\text{ if } state_t \\text{ = } <x_i, terminated\\_x_i>\n%agent returns correct target location.} \n\\\\\n-1 \\quad \\text { otherwise. }\n\\end{cases}\n\\]\n\n%\\[\n%Performance Measure(state_1,..., state_t) = \n%\\begin{cases}\n%\\frac{1}{t} \\quad \\text{ if } state_t \\text{ = } <x_i, TERMINATED\\_x_i>\n%agent returns correct target location.} \n%\\\\\n%\\frac{1}{t} \\quad \\text{ if agent correctly returns target is not present.}\n%\\\\\n%-1 \\quad \\text { if agent returns incorrect target location.}\n%\\\\\n%-1 \\quad \\text{if agent incorrectly returns target is not present}\n%\\end{cases}\n%\\]\n\n%The performance measure is assumed to ignore subsequent terminal states. \nIt is worth noting that this performance measure provides goal states for the agent:\n\\[ <x_i, terminated\\_x_i> \\]", "meta": {"hexsha": "b1cef8b3aac7c11f1d597adf4d52573c3840478c", "size": 1883, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/MultiAgentTargetDetection/InitialAgentDesign/PerformanceMeasure.tex", "max_stars_repo_name": "DavidLSmyth/ResearchMScThesis", "max_stars_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/MultiAgentTargetDetection/InitialAgentDesign/PerformanceMeasure.tex", "max_issues_repo_name": "DavidLSmyth/ResearchMScThesis", "max_issues_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-18T11:59:42.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-18T11:59:42.000Z", "max_forks_repo_path": "Chapters/MultiAgentTargetDetection/InitialAgentDesign/PerformanceMeasure.tex", "max_forks_repo_name": "DavidLSmyth/ResearchMScThesis", "max_forks_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.84375, "max_line_length": 747, "alphanum_fraction": 0.7588953797, "num_tokens": 451, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.8740772351648677, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5906704597886486}}
{"text": "\\section{Ensembles}\n\nFor a given macrostate $(N, V, E)$, a statistical system, at any time t, may be in any one of an extremely large number of distinct microstates. As time passes, the system transitions between microstates, with the result that, over a long enough time period the behaviour of the systems is ``averaged'' over the collection of microstates which have been visited by the system.\nA useful approach is to consider, at a single instant of time, a large (infinite) number of ``copies'' of the same system, existing in all possible microstates that satisfy the macroscopic conditions. Then, we can expect the average behaviour of any system for this collection (or ensemble) to be identical with the time-averaged behaviour of the system. This forms the basis of the so\u2013called ensemble theory.\n\nIn the previous section, we considered an isolated system where we could keep track of the dynamics of every particle and use that to calculate the values of extensive, macroscopic properties of the system. An important aspect of this was the conservation of energy for the system. We refer to such systems as a \\emph{micro-canonical ensemble} --- the system is isolated with no heat flux and no change in the number of particles, hence the internal energy of the system is constant and it is described entirely by the Hamiltonian dynamics. Note that this term is distinct from several similar terms used in physics, canonical systems and grand-canonical ensembles. In canonical systems, $(N, T, V)$ are fixed instead of $(N, E, V)$, with temperature being controlled by a heat reservoir; and in grand-canonical ensembles, $(\\mu, T, V)$ are fixed (where $\\mu$ is the chemical potential of the system), with the number of particles $N$ being allowed to vary.\n\nWe also saw (a couple of sections earlier) that we can equate the entropy of a system with the accessible volume of the phase space of that system. This was part of what motivated us to study the microscopic dynamics of the system using molecular dynamics.  We'd now like to consider some slightly more realistic cases where, for example, we may want to allow for multiple systems in contact.\n\nWe'll start by considering the simplest case: two (isolated) systems in contact, without any exchange of energy. If the state of system 1 corresponds to the region of phase space $\\Gamma^1$ and similarly, the state of system 2 to $\\Gamma^2$ then the state of the composite system $1\\cup 2$ corresponds to the phase space regions given by the Cartesian product $\\Gamma^{1\\cup 2}=\\Gamma^1\\times\\Gamma^2$ and the volume of this accessible volume of phase space is given by $|\\Gamma^{1\\cup 2}|=|\\Gamma^1||\\Gamma^2|$. From this, it's easy to see that the entropy of the composite system is given by\n\\begin{eqnarray*}\n\tS &=& k_B\\ln|\\Gamma^{1\\cup 2}| = k_B\\ln(|\\Gamma^1||\\Gamma^2|)\\\\\n\t\t&=& k_B\\ln|\\Gamma^1| + k_B\\ln|\\Gamma^2| = S^1 + S^2,\n\\end{eqnarray*}\nwhich is fortunate, since entropy is an extensive variable and we therefore expect it to be additive.\nIn this example the two sub-system were completely isolated from each other; the dynamics of one system had no influence on the dynamics of the other. This condition of dynamic independence corresponds to the independence of the observables that pertain to these sub-systems.\n\nNow, let's look at how the entropy of a composite system changes if we allow for the exchange of energy between the two sub-systems. 
Now, let's look at how the entropy of a composite system changes if we allow for the exchange of energy between the two sub-systems. This has the effect of increasing the accessible volume of the phase space, since exchange of energy means that there are more possible configurations for the overall system.\n\nWithout energy exchange, the volume of the phase space accessible to the total system is given by\n$$\n\t|\\Gamma_0| = |\\Gamma^1||\\Gamma^2|.\n$$\nOnce we allow for the exchange of energy, this becomes\n$$\n\t|\\Gamma| = \\sum_{E^1}|\\Gamma^1(E^1)||\\Gamma^2(E_\\text{tot}-E^1)|.\n$$\nThat is, we must now consider all possible configurations where the total energy of the composite system is partitioned between the two sub-systems. This volume is bounded below by $|\\Gamma_0|$ since the expression for $|\\Gamma_0|$ is just one of the terms in the sum. At first glance, it may look like this increase in the volume of the accessible phase space is enormous (we have added many more possible configurations); however, the volume of the accessible phase space for the composite system corresponds almost entirely to the states where $E^1=E^1_\\text{eqm}$ and $E^2=E^2_\\text{eqm}$. As a consequence, for large enough $N$, the difference between $|\\Gamma|$ and $|\\Gamma^1(E^1_\\text{eqm})||\\Gamma^2(E^2_\\text{eqm})|$ is negligible. It's not too hard to show (see section 3.7 of \\emph{Statistical Mechanics in a Nutshell}) that the contribution to the accessible phase space volume due to the exchange of energy between the two systems is of order $\\sqrt{N}$ compared with the total system size $N$ --- small enough to be negligible when $N$ is large.\n
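For instance (illustrative numbers): for a macroscopic sample with $N \\sim 10^{22}$ particles, this correction is of relative order $\\sqrt{N}/N = N^{-1/2} \\sim 10^{-11}$, so ignoring it is harmless.\n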
\\subsection{The canonical ensemble}\n\nRather than considering a perfectly isolated system, for which energy is conserved, a more realistic experimental situation may be to consider a system S in thermal contact with some much larger reservoir R. This has the effect of holding the total system at constant temperature. In such a situation we want to be able to calculate the average value $\\langle A\\rangle$ of an observable $A$ for the system S; we are not interested in the state of the reservoir R, except to the extent that it helps us determine the state of S.\n\nIf, for the composite system S$\\cup$R, we write $x_S,x_R$ for the state, then the average value of the observable $A$ is given by\n$$\n\t\\langle A\\rangle = \\frac{1}{|\\Gamma|}\\int_{\\Gamma}dx_Sdx_RA(x_S),\n$$\nwhere $\\Gamma$ is the region of the phase space for the composite system, when it has total internal energy of $E$.\n\nIn order to make explicit the parts of the total phase space that are accessible to the composite system, we write the above expression as the Cartesian product of the phase space for the system and the reservoir, i.e.\n$$\n\t\\langle A\\rangle =\\frac{1}{|\\Gamma|}\\int dx_SA(x_S)\\times\\int dx_R\\delta(H^R(x_R)-(E-H^S(x_S))).\n$$\nThe delta function in the last term is zero, except when $x_S$ and $x_R$ in the two sub-systems take values such that $H^S+H^R=E$. That is, the delta function defines the accessible phase space volume when the two sub-systems can exchange internal energy between them, but the total internal energy of the composite system is conserved.\n\nRecalling the fundamental postulate of statistical mechanics ($S=k_B\\ln|\\Gamma|$), we rewrite the last expression to replace the phase space volume with the corresponding expression for entropy:\n$$\n\t\\int dx_R\\delta(H^R(x_R)-(E-H^S(x_S))) \\simeq \\exp\\left(\\frac{1}{k_B}S^R(E-H^S)\\right).\n$$\n\nSince $H^S$ is much less than $E$ it makes sense to now expand the exponential in a Taylor series about $E$:\n$$\n\t\\exp\\left(\\frac{1}{k_B}S^R(E-H^S)\\right)\\simeq \\exp\\left(\\frac{1}{k_B}S^R(E)\\right)\\exp\\left(\\frac{-1}{k_B}\\left.\\frac{\\partial S^R}{\\partial E}\\right|_E H^S(x_S)\\right)\\ldots\n$$\n\nEarlier in the course, we identified $\\frac{\\partial S}{\\partial E}$ with $\\frac{1}{T}$, and, for a canonical ensemble, $T$ is constant (that's the point of the reservoir), so we write the expected value of the observable as\n$$\n\t\\langle A\\rangle \\simeq \\frac{1}{|\\Gamma|}\\int dx_SA(x_S)\\times\\exp\\left(\\frac{1}{k_B}S^R(E)\\right)\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right).\n$$\n\nIt remains to make the normalisation precise. When we do this, the factors of $\\exp\\left(\\frac{1}{k_B}S^R(E)\\right)$ cancel from the integral and its normalisation and we get\n\\begin{equation}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dx_SA(x_S)\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right),\n\t\\label{eq:z1}\n\\end{equation}\nwhere $Z$ is known as the \\emph{partition function} and is given by\n\\begin{equation}\n\tZ = \\int dx_S\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right).\n\t\\label{eq:z2}\n\\end{equation}\n\nOne way to think about equations \\ref{eq:z1} and \\ref{eq:z2} is that we no longer need to keep track of which parts of phase space are accessible. Instead we can integrate over the entire phase space, with each region weighted appropriately, according to $\\exp\\frac{-H}{k_BT}$ --- the so-called \\emph{Boltzmann factor}, a probability density in the phase space.\n\nSimilar to the case of the micro-canonical ensemble, for the canonical ensemble the contributions to $A$ are dominated by the part of phase space that corresponds to the system's internal energy being at the equilibrium value.\n
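As a concrete illustration (a standard example, not from the readings): for a single two-level system with energies $0$ and $\\epsilon$, the sum replacing the integral gives $Z = 1 + e^{-\\epsilon/k_BT}$, so\n$$\n\t\\langle E\\rangle = \\frac{\\epsilon e^{-\\epsilon/k_BT}}{1 + e^{-\\epsilon/k_BT}},\n$$\nwhich goes to $0$ as $T\\to0$ and to $\\epsilon/2$ as $T\\to\\infty$, as expected.\n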
You should read through, and understand, section 3.12 of \\emph{Statistical Mechanics in a Nutshell} and sections 4.1--4.3 of \\emph{Statistical Mechanics Made Simple} for some extra details and explanation.\n\n\\subsection{Other ensembles}\nWe can follow a similar approach to what we did for the canonical ensemble and generalise to other types of ensembles. If $f_i$ is some intensive variable that we want to hold fixed and $X_i$ is the conjugate extensive variable, then putting the system in contact with a reservoir with which it can exchange $X_i$ gives\n\\begin{equation*}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dx_SA(x_S)\\exp\\left(\\frac{f_iX^S_i(x_S)}{k_BT}\\right),\n\\end{equation*}\nwhere we have used the relation $\\left.\\frac{\\partial S^R}{\\partial X_i^R}\\right|_E=-\\frac{f_i}{T}$.\n\nCalculating the partition function proceeds similarly (see SMiaN, section 3.13) and gives\n\\begin{equation*}\n\tZ \\simeq \\exp\\left(\\frac{TS^S(X_i^*)+f_iX_i^*}{k_BT}\\right),\n\\end{equation*}\nwhere $X_i^*$ is the value of $X_i$ which maximises the value of the exponential.\n\\subsection{The grand-canonical ensemble}\nRather than holding the number of particles $N$ fixed, we can allow it to vary (due to, for example, chemical reactions) and instead fix the chemical potential $\\mu$. The corresponding ensemble for this case is the so-called \\emph{grand-canonical ensemble} (or \\emph{grand ensemble}). In this case, the expected value of an observable $A$ is given by\n$$\n\t\\langle A\\rangle = \\frac{1}{Z}\\sum_{N=1}^{\\infty}\\int\\mathrm{d}xA(x)\\exp\\left(-\\frac{H_N(x)-\\mu N}{k_B T}\\right)\n$$\nand the corresponding partition function is given by\n$$\n\tZ=\\exp\\left(-\\frac{E-TS-\\mu N}{k_B T}\\right).\n$$\n(See section 3.15 of SMiaN and section 4.4 of SMMS for more discussion of the grand-canonical ensemble.)\n
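As a minimal illustration (a standard textbook example, not from these notes): a single adsorption site that is either empty (energy $0$) or occupied by one particle (energy $-\\epsilon$) has\n$$\n\tZ = 1 + e^{(\\mu+\\epsilon)/k_BT}, \\qquad \\langle N\\rangle = \\frac{e^{(\\mu+\\epsilon)/k_BT}}{1 + e^{(\\mu+\\epsilon)/k_BT}},\n$$\nso the mean occupancy is set by the reservoir through $\\mu$.\n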
\\subsection{The $p-T$ ensemble}\nThe \\emph{$p-T$ ensemble} is one specific example of a generalised ensemble. In this case the pressure and temperature are fixed, while the internal energy and volume (their conjugate variables) are allowed to fluctuate.\nUsing the generalised formula above, and dropping the subscripts that we were previously using to denote the system, the $p-T$ ensemble is given by\n\\begin{equation*}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dxA(x)\\exp\\left(-\\frac{E(x)+pV(x)}{k_BT}\\right),\n\\end{equation*}\nwhile the partition function is given by\n$$\n\t\\ln Z = -\\frac{E-TS+pV}{k_BT}.\n$$\nThe quantity on the top of the fraction is the \\emph{Gibbs free energy}. (See section 3.14 of SMiaN.)\n\n\\subsection{Information theory and the Gibbs Formula for entropy}\nTo finish off this section, we'll look at one last application of entropy; not because it is particularly useful to physical systems in statistical mechanics, but because it gives a result that forms the basis of information theory.\n\nWe'll return to considering a generalised ensemble, like the one we looked at a couple of sections earlier. However, in this case we'll assume that the phase space is discretized and that the index $n$ runs over all of the microstates of the system. If we consider an intensive variable $f$ and its corresponding extensive variable $X$ then the expression for the expected value of any observable $A$ is\n$$\n\t\\langle A \\rangle = \\frac{1}{Z}\\sum_n A_n\\exp\\left(\\frac{fX_n}{k_BT}\\right),\n$$\nwhile the partition function $Z$ is given by\n$$\n\tZ =\\sum_n\\exp\\left(\\frac{fX_n}{k_BT}\\right).\n$$\n\nThe partition function is related to the thermodynamic potentials (see the previous section on generalised ensembles and SMiaN sections 3.12 and 3.13) via\n\\begin{equation}\n\t\\ln Z = \\frac{TS+f\\langle X\\rangle}{k_BT}.\n\t\\label{eq:gibbZ}\n\\end{equation}\n\nNow, for any individual microstate $n$ the probability is therefore given by\n$$\n\tp_n = \\frac{1}{Z}\\exp\\left(\\frac{fX_n}{k_BT}\\right).\n$$\nTaking the log of both sides of this expression gives\n$$\n\t\\ln p_n = \\frac{fX_n}{k_BT} - \\ln Z,\n$$\nwhich after substituting \\ref{eq:gibbZ} for $\\ln Z$ gives\n$$\n\t\\ln p_n = \\frac{1}{k_BT}(fX_n -TS -f\\langle X\\rangle).\n$$\n\nNow we can calculate the expected value of both sides of the above equation:\n$$\n\t\\langle \\ln p_n \\rangle = \\sum_n p_n\\ln p_n = \\frac{1}{k_BT}(f\\langle X\\rangle -TS -f\\langle X \\rangle) = \\frac{-S}{k_B}.\n$$\n\nAfter rearranging this for $S$ (and making explicit the sum for the expected value of $\\ln p_n$) we arrive at the \\emph{Gibbs formula for entropy}:\n$$\n\tS = -k_B\\sum_np_n\\ln p_n.\n$$\n\nAlthough this is elegant, it's generally useless in the context of physical systems since the number of microstates it would be necessary to sum over is far too large to be practical and, in any case, we generally don't know the probability distribution $p_n$ for each of the microstates. The value of this expression lies in its application to other systems, particularly information theory, where it can be used to quantify the amount of information in, for example, a digital signal, in which case the $p_n$ represent the probability of receiving the $n$-th possible value in the list of signals.\n\n\n\n\\subsection{Recommended reading}\nMost of the notes in this section follow closely the second half of chapter 3 in \\emph{Statistical Mechanics in a Nutshell}; specifically, sections 3.6--3.18. Chapter four of \\emph{Statistical Mechanics Made Simple} covers the same material in sections 4.0--4.4. It gives some intuitive and succinct explanations, but I find it to be of more use \\emph{after} you've already looked at SMiaN. Also useful, and with a slightly different presentation (perhaps with more traditional notation), is \\emph{Entropy, Order Parameters, and Complexity}. Here the content is spread around a bit over chapters three through six. 
Much of the useful content, including some relevant to early sections of these notes, is in sections 3.1, 3.5, 6.1, 6.2, 6.3, and 5.3.\n", "meta": {"hexsha": "89ecffcff50cf300fabca6f18d61447fd6666132", "size": 14715, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "07-ensembles.tex", "max_stars_repo_name": "RenzDC/708Notes2018", "max_stars_repo_head_hexsha": "05c3220cf500bbfe5878c528548c78ee187b73ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "07-ensembles.tex", "max_issues_repo_name": "RenzDC/708Notes2018", "max_issues_repo_head_hexsha": "05c3220cf500bbfe5878c528548c78ee187b73ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07-ensembles.tex", "max_forks_repo_name": "RenzDC/708Notes2018", "max_forks_repo_head_hexsha": "05c3220cf500bbfe5878c528548c78ee187b73ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.5892857143, "max_line_length": 1059, "alphanum_fraction": 0.7536527353, "num_tokens": 3961, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085909370422, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5905619346662448}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{setspace}\n\\usepackage{amsmath}\n\n\\title{Chapter 2\\\\Determinants and Matrices}\n\\author{solutions by Hikari}\n\\date{July 2021}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{2.1 Determinants}\n\n\\paragraph{2.1.1}\n(a) $1\\times(-1\\times1)=-1$\n\n(b) $1\\times(1\\times1-2\\times3)-2\\times(3\\times1-2\\times0)=-11$\n\n(c) $\\frac{1}{\\sqrt{2}}(-\\sqrt{3})\\times\\sqrt{3}\\times(-\\sqrt{3}\\times\\sqrt{3})=\\frac{9}{\\sqrt{2}}$\n\n\\paragraph{2.1.2}\n\n$\\begin{vmatrix}\n1&3&3\\\\\n1&-1&1\\\\\n2&1&3\n\\end{vmatrix}\n=2$\n, So the homogeneous linear independent equations have no nontrivial solutions.\n\n\\paragraph{2.1.3}\n(a)$\n\\begin{vmatrix}\n1&2\\\\\n2&4\n\\end{vmatrix}\n=0$\n\n(b)\n$\n\\begin{vmatrix}\n3&2\\\\\n6&4\n\\end{vmatrix}\n=0$\n\n(c) $(1,1)$, $(2,2)$\n\n\\paragraph{2.1.4}\n(a)\n$|A|=\\sum_{ij\\cdots}\\varepsilon_{ij\\cdots}a_{1i}a_{2j}\\cdots$, which is the sum of all products formed by choosing one entry in each row that they are all in different columns (call it a valid combination), multiplying them together, and multiplying $+1$ or $-1$ depending on the parity of permutation $c_1c_2\\cdots$, where $c_i$ is the column of the entry chosen in row i. $a_{ji}C_{ji}=a_{ji}M_{ji}(-1)^{j+i}$, is the sum of all products of valid combinations in $M_{ji}$, multiplying $a_{j+i}$, multiplying $(-1)^{j+i}$. If it takes n steps for a permutation in $M_{ji}$ to return to reference order, then it will take $n+|j-i|$ steps for the permutation appended $a_{ji}$ to return to reference order, so $a_{ij}M_{ij}(-1)^{|j-i|}=a_{ij}M_{ij}(-1)^{j+i}$ will contribute to the sum of products of valid combinations in A that contains $a_{ji}$, so $\\sum_{i}a_{ji}C_{ji}$ contains all products of valid combinations, and is therefore equal to $|A|$.\n\\medskip\n\nExample:\n\\[\n\\begin{vmatrix}\n &o& & \\\\\n o& & & \\\\\n & & &o\\\\\n & &o&\n\\end{vmatrix}\n\\xrightarrow{2\\,steps}\n\\begin{vmatrix}\n o&& & \\\\\n &o& & \\\\\n & &o&\\\\\n & &&o\n\\end{vmatrix}\n\\]\n\\[\n\\begin{vmatrix}\n &o&&&\\\\\n &&&a_{24}&\\\\\n o&&&&\\\\\n &&&&o\\\\\n &&o&&\n\\end{vmatrix}\n\\xrightarrow{2\\,steps}\n\\begin{vmatrix}\n o&&&&\\\\\n &&&a_{24}&\\\\\n &o&&&\\\\\n &&o&&\\\\\n &&&&o\n\\end{vmatrix}\n\\xrightarrow{|4-2|\\,steps}\n\\begin{vmatrix}\n o&&&&\\\\\n &a_{24}&&&\\\\\n &&o&&\\\\\n &&&o&\\\\\n &&&&o\n\\end{vmatrix}\n\\]\n(b) If $A'$ is the matrix whose $k^{th}$ column is the $j^{th}$ column of $A$ and all the other columns is the same with $A$,  then $\\sum_ia_{ij}C_{ik}=\\sum_{i}a'_{ik}C'_{ik}$ is the determinant of $A'$, but $A'$ has two equal rows ($j^{th}$ and $k^{th}$ rows), so it equals to zero.\n\n\\paragraph{2.1.5}\n(a) $\\det(H_1)=1$, $\\det(H_2)=8.3333\\times10^{-2}$, $\\det(H_3)=4.62963\\times10^{-4}$\n\n(b)\nfor $\\det(H_4)$: \n\nSubtract the last row from each row above it:\n\\renewcommand{\\arraystretch}{1.5}\n\\[\n\\begin{vmatrix}\n\\frac{1}{1}&\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}\\\\\n\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}\\\\ \n\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}\\\\\n\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}&\\frac{1}{7}\\\\\n\\end{vmatrix}\n=\n\\begin{vmatrix}\n\\frac{3}{(1)(4)}&\\frac{3}{(2)(5)}&\\frac{3}{(3)(6)}&\\frac{3}{(4)(7)}\\\\\n\\frac{2}{(2)(4)}&\\frac{2}{(3)(5)}&\\frac{2}{(4)(6)}&\\frac{2}{(5)(7)}\\\\ 
\n\\frac{1}{(3)(4)}&\\frac{1}{(4)(5)}&\\frac{1}{(5)(6)}&\\frac{1}{(6)(7)}\\\\\n\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}&\\frac{1}{7}\\\\\n\\end{vmatrix}\n=\n\\frac{(1)(2)(3)}{(4)(5)(6)(7)}\n\\begin{vmatrix}\n\\frac{1}{1}&\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}\\\\\n\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}\\\\ \n\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}\\\\\n1&1&1&1\\\\\n\\end{vmatrix}\n=\n\\frac{(3!)^2}{7!}\n\\begin{vmatrix}\n\\frac{1}{1}&\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}\\\\\n\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}\\\\ \n\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}\\\\\n1&1&1&1\\\\\n\\end{vmatrix}\n\\]\nSubtract the last column from each column that precedes it:\n\\[\n\\frac{(3!)^2}{7!}\n\\begin{vmatrix}\n\\frac{1}{1}&\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}\\\\\n\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}\\\\ \n\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}\\\\\n1&1&1&1\\\\\n\\end{vmatrix}\n=\n\\frac{(3!)^2}{7!}\n\\begin{vmatrix}\n\\frac{3}{(1)(4)}&\\frac{2}{(2)(4)}&\\frac{1}{(3)(4)}&\\frac{1}{4}\\\\\n\\frac{3}{(2)(5)}&\\frac{2}{(3)(5)}&\\frac{1}{(4)(5)}&\\frac{1}{5}\\\\ \n\\frac{3}{(3)(6)}&\\frac{2}{(4)(6)}&\\frac{1}{(5)(6)}&\\frac{1}{6}\\\\\n0&0&0&1\\\\\n\\end{vmatrix}\n=\n\\frac{(3!)^2}{7!}\\frac{(3!)^2}{6!}\n\\begin{vmatrix}\n\\frac{1}{1}&\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}\\\\\n\\frac{1}{2}&\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}\\\\ \n\\frac{1}{3}&\\frac{1}{4}&\\frac{1}{5}&\\frac{1}{6}\\\\\n0&0&0&1\\\\\n\\end{vmatrix}\n=\n\\frac{(3!)^4}{(7!)(6!)}\\det(H_3)\n\\]\nBy this procedure, we find that\n\\[\n\\det(H_n)=\\frac{(n-1)!^4}{(2n-1)!(2n-2)!}\\det(H_{n-1})\n\\]\nSo $\\det(H_4)=\\frac{3!^4}{7!6!}\\det(H_3)=1.65344\\times10^{-7}$, $\\det(H_5)=\\frac{4!^4}{9!8!}\\det(H_4)=3.74930\\times10^{-12}$, $\\det(H_6)=\\frac{5!^4}{11!10!}\\det(H_5)=5.36730\\times10^{-18}$\n\n\\paragraph{2.1.6}\nLinear dependence implies one row (or column) can be expressed as a linear combination of the other rows (columns), so $A_{ni}=a_1A_{1i}+a_2A_{2i}+\\cdots$. Add $-a_j$ times the $j^{th}$ row to the $n^{th}$ row for each $j$; the determinant remains the same, but all entries in the $n^{th}$ row become $0$, so the determinant equals zero.\n\n\\paragraph{2.1.7}\nBy Gauss's elimination, $x_1=1.88282$, $x_2=-0.36179$, $x_3=-0.96889$, $x_4=0.44221$, $x_5=0.41022$, $x_6=0.39219$\n\n\\paragraph{2.1.8}\n(a) $\\sum_{i}\\delta_{ii}=\\delta_{11}+\\delta_{22}+\\delta_{33}=3$\n\n(b) $\\sum_{ij}\\delta_{ij}\\varepsilon_{ijk}=\\sum_{i}\\varepsilon_{iik}=0$\n\n(c) If $i\\neq j$, then for any $p,q$ at least one of $\\varepsilon_{ipq},\\varepsilon_{jpq}$ has a repeated index, so $\\varepsilon_{ipq}\\varepsilon_{jpq}=0$. If $i=j=1$, then \n$\\sum_{pq}\\varepsilon_{ipq}\\varepsilon_{jpq}=\\varepsilon_{123}\\varepsilon_{123}+\\varepsilon_{132}\\varepsilon_{132}=2$, and the case is similar when $i=j=2$ and $i=j=3$. Therefore, $\\sum_{pq}\\varepsilon_{ipq}\\varepsilon_{jpq}=2\\delta_{ij}$ \n\n(d) $\\sum_{ijk}\\varepsilon_{ijk}\\varepsilon_{ijk}=(\\pm1)^2\\times6=6$\n\n\\paragraph{2.1.9}\nThe only case in which $\\varepsilon_{ijk}\\varepsilon_{pqk}\\neq0$ is: $k$ is one of $1,2,3$ and $(i,j),(p,q)$ are the other two, respectively. So $i=p,j=q$ or $i=q,j=p$. For the former case, $\\varepsilon_{ijk}\\varepsilon_{pqk}=(\\pm1)^2=1=\\delta_{ip}\\delta_{jq}$, and for the latter case, $\\varepsilon_{ijk}\\varepsilon_{pqk}=(1)(-1)=-1=-\\delta_{iq}\\delta_{jp}$. 
Therefore, $\\sum_{k}\\varepsilon_{ijk}\\varepsilon_{pqk}=\\delta_{ip}\\delta_{jq}-\\delta_{iq}\\delta_{jp}$\n\n\\section*{2.2 Matrices}\n\n\\paragraph{2.2.1}\n\\[((AB)C)_{il}=\\sum_m(AB)_{im}C_{ml}=\\sum_m\\sum_k A_{ik}B_{km}C_{ml}=\\sum_k\\sum_m A_{ik}B_{km}C_{ml}=\\sum_k A_{ik}(BC)_{kl}=(A(BC))_{il}\\]\n\n\\paragraph{2.2.2}\nIf $(A+B)(A-B)=A^2-B^2$, then $A^2+BA-AB-B^2=A^2-B^2$, so $AB-BA=[A,B]=0$. If $[A,B]=0$, then $(A+B)(A-B)=A^2-B^2-(AB-BA)=A^2-B^2$\n\n\\paragraph{2.2.3}\n(a)\n\\renewcommand{\\arraystretch}{1}\n\\[(a+ib)+(c+id)\\longleftrightarrow\n\\begin{pmatrix}\na&b\\\\-b&a\n\\end{pmatrix}+\n\\begin{pmatrix}\nc&d\\\\-d&c\n\\end{pmatrix}=\n\\begin{pmatrix}\na+c&b+d\\\\-(b+d)&a+c\n\\end{pmatrix}\\longleftrightarrow\n(a+c)+i(b+d)\n\\]\n\\[(a+ib)(c+id)\\longleftrightarrow\n\\begin{pmatrix}\na&b\\\\-b&a\n\\end{pmatrix}\n\\begin{pmatrix}\nc&d\\\\-d&c\n\\end{pmatrix}=\n\\begin{pmatrix}\nac-bd&ad+bc\\\\-(ad+bc)&ac-bd\n\\end{pmatrix}\\longleftrightarrow\n(ac-bd)+i(ad+bc)\n\\]\n(b)\n\\[(a+ib)^{-1}=\\frac{1}{a^2+b^2}(a-ib)\\longleftrightarrow\\frac{1}{a^2+b^2}\n\\begin{pmatrix}\na&-b\\\\b&a\n\\end{pmatrix}\n\\]\n\n\\paragraph{2.2.4}\nMultiplying each row of $A$ by $-1$ turns $A$ into $-A$, so $\\det(-A)=(-1)^n\\det(A)$.\n\n\\paragraph{2.2.5}\n(a) If $A^2=0$ and $A=\\begin{pmatrix}\nx&y\\\\z&t\n\\end{pmatrix}$\n\\[A^2=\n\\begin{pmatrix}\nx&y\\\\z&t\n\\end{pmatrix}\n\\begin{pmatrix}\nx&y\\\\z&t\n\\end{pmatrix}=0\n\\]\nThen $x^2+yz=0$, $t^2+yz=0$, $y(x+t)=0$, $z(x+t)=0$. Let $y=b^2$, $z=-a^2$, then $x=\\pm ab$, $t=\\pm ab$. Without loss of generality let $x=ab$, because the signs of $a$ and $b$ are arbitrary. If $y\\neq0$, then $t=-x=-ab$; if $y=0$, then $t=x=ab=0$ so $t=-ab$. Therefore, in all cases we can find $a,b$ such that $\\begin{pmatrix}\nx&y\\\\z&t\n\\end{pmatrix}=\\begin{pmatrix}\nab&b^2\\\\-a^2&-ab\n\\end{pmatrix}$\n\n(b)\nLet $A=\\begin{pmatrix}\n1&0\\\\0&1\n\\end{pmatrix}$, $B=\\begin{pmatrix}\n-1&0\\\\0&-1\n\\end{pmatrix}$, then $\\det{(A+B)}=\\det\\begin{pmatrix}\n0&0\\\\0&0\n\\end{pmatrix}=0$ but $\\det{A}+\\det{B}=1+1=2$\n\n\\paragraph{2.2.6}\n$K=\n\\begin{pmatrix}\n0&0&i\\\\\n-i&0&0\\\\\n0&-1&0\n\\end{pmatrix}\n$, \n$K^2=\n\\begin{pmatrix}\n0&-i&0\\\\\n0&0&1\\\\\ni&0&0\n\\end{pmatrix}\n$, \n$K^3=\n\\begin{pmatrix}\n-1&0&0\\\\\n0&-1&0\\\\\n0&0&-1\n\\end{pmatrix}=-I\n$, \n$K^4=-K$, $K^5=-K^2$, $K^6=I$. So if $n=6k$, $k$ a positive integer, then $K^n=I$\n\n\\paragraph{2.2.7}\n\\[[A,[B,C]]=[A,BC-CB]=(ABC-ACB)-(BCA-CBA)=ABC-ACB-BCA+CBA\\]\n\\[[B,[A,C]]-[C,[A,B]]=[B,AC-CA]-[C,AB-BA]\\]\n\\[=(BAC-BCA)-(ACB-CAB)-[(CAB-CBA)-(ABC-BAC)]=ABC-ACB-BCA+CBA\\]\nSo $[A,[B,C]]=[B,[A,C]]-[C,[A,B]]$\n\n\\paragraph{2.2.8}\nUse the definition of the commutator and carry out the corresponding matrix multiplication; then all three relations are trivially satisfied.\n\n\\paragraph{2.2.9}\nCarry out the corresponding matrix multiplication; then all the relations are trivially satisfied.\n\n\\paragraph{2.2.10}\nIf $A$ and $B$ are upper right triangular matrices, then $A_{ij}=0$ when $j<i$, $B_{ij}=0$ when $j<i$. $(AB)_{ij}=\\sum_k A_{ik}B_{kj}$; when $j<i$: if $k>j$, then $B_{kj}=0$, and if $k\\leq j$, then $k<i$ and $A_{ik}=0$. So in all cases $(AB)_{ij}=\\sum_k A_{ik}B_{kj}=0$ when $j<i$, so $AB$ is also an upper right triangular matrix.\n\n\\paragraph{2.2.11}\n(a)(b) By matrix multiplication the relations hold trivially.\n\n(c) When $i=j$, $\\sigma_i\\sigma_j+\\sigma_j\\sigma_i=2(\\sigma_i)^2=2I_2=2\\delta_{ij}I_2$. 
When $i\\neq j$: by (a) we know the inverse matrix of $\\sigma_i$ is itself, and by (b) we have $\\sigma_i\\sigma_j=i\\sigma_k$, so $(\\sigma_i\\sigma_j)^{-1}=\\sigma_j^{-1}\\sigma_i^{-1}=\\sigma_j\\sigma_i=(i\\sigma_k)^{-1}=-i\\sigma_k$, so $\\sigma_i\\sigma_j+\\sigma_j\\sigma_i=i\\sigma_k-i\\sigma_k=0=2\\delta_{ij}I_2$. So $\\sigma_i\\sigma_j+\\sigma_j\\sigma_i=2\\delta_{ij}I_2$ holds for all cases.\n\n\\paragraph{2.2.12}\n(a)(b) By the definition of the commutator and matrix multiplication, the relations can be easily verified.\n\n(c) $[M^2,M_i]=2IM_i-M_i2I=0$; $[M_z,L^+]=[M_z,M_x]+i[M_z,M_y]=iM_y+i(-i)M_x=M_x+iM_y$; $[L^+,L^-]=[M_x+iM_y,M_x-iM_y]=i[M_y,M_x]-i[M_x,M_y]=2M_z$\n\n\\paragraph{2.2.13}\nIt is similar to Exercise 2.2.12.\n\n\\paragraph{2.2.14}\nIf the $i^{th}$ diagonal entry of $A$ is $a_i$, then $(AB)_{ij}=a_iB_{ij}$, and $(BA)_{ij}=B_{ij}a_j$. So if $i\\neq j$, then by $a_iB_{ij}=a_jB_{ij}$ and $a_i\\neq a_j$, we have $B_{ij}=0$, which means $B$ is a diagonal matrix.\n\n\\paragraph{2.2.15}\n$(AB)_{ij}=\\sum_k A_{ik}B_{kj}=\\sum_k a_i\\delta_{ik}b_j\\delta_{kj}=a_ib_j\\delta_{ij}=a_ib_i\\delta_{ij}$. $(BA)_{ij}=\\sum_k B_{ik}A_{kj}=\\sum_k b_i\\delta_{ik}a_j\\delta_{kj}=a_jb_i\\delta_{ij}=a_ib_i\\delta_{ij}$. So $(AB)_{ij}=(BA)_{ij}$, and $A$ and $B$ commute.\n\n\\newcommand{\\trace}{\\mathrm{trace}}\n\n\\paragraph{2.2.16}\nFor any two matrices $X,Y$ we have $\\trace(XY)=\\trace(YX)$. If $A,B$ commute, $\\trace(ABC)=\\trace(BAC)=\\trace(CBA)$; if $B,C$ commute, $\\trace(ABC)=\\trace(ACB)=\\trace(CBA)$; if $A,C$ commute, $\\trace(ABC)=\\trace(CAB)=\\trace(ACB)=\\trace(CBA)$.\n\n\\paragraph{2.2.17}\n$\\trace([M_j,M_k])=\\trace(M_jM_k-M_kM_j)=\\trace(M_jM_k)-\\trace(M_kM_j)=0=\\trace(iM_l)=i\\trace(M_l)$, so $\\trace(M_l)=0$, and so are $\\trace(M_j)$ and $\\trace(M_k)$, because the commutation relation is cyclic.\n\n\\paragraph{2.2.18}\n$\\trace(A)=\\trace(ABB)=\\trace(BAB)=\\trace(-ABB)=\\trace(-A)=-\\trace(A)$, so $\\trace(A)=0$. The same holds for $\\trace(B)$.\n\n\\paragraph{2.2.19}\n(a) If $AB=-BA$ and both are non-singular (so the matrix inverses exist): $\\trace(A)=\\trace(ABB^{-1})=\\trace(B^{-1}AB)=\\trace(-B^{-1}BA)=\\trace(-A)=-\\trace(A)$, so $\\trace(A)=0$. The same holds for $\\trace(B)$.\n\n(b) $A,B$ being non-singular means $\\det(A),\\det(B)\\neq0$. Suppose they are anti-commuting and $n$ is odd; then $\\det(A)\\det(B)=\\det(AB)=\\det(-BA)=\\det(-B)\\det(A)=(-1)^n\\det(B)\\det(A)=-\\det(A)\\det(B)$, so $\\det(A)\\det(B)=0$. So either $\\det(A)$ or $\\det(B)$ equals zero, contradicting $\\det(A),\\det(B)\\neq0$. \n\n\\paragraph{2.2.20}\n$(A^{-1}A)_{ik}=\\sum_j(A^{-1})_{ij}A_{jk}=\\sum_j\\frac{(-1)^{i+j}M_{ji}}{\\det(A)}A_{jk}=\\frac{1}{\\det(A)}\\sum_j(-1)^{i+j}M_{ji}A_{jk}$. If $i=k$, notice that $\\det(A)=\\det(A^T)=\\sum_{j}(-1)^{i+j}M_{ji}A_{ji}$, so $(A^{-1}A)_{ik}=\\frac{\\det(A)}{\\det(A)}=1$; if $i\\neq k$, then $(A^{-1}A)_{ik}=\\frac{1}{\\det(A)}\\sum_{j}(-1)^{i+j}M_{ji}A_{jk}=\\frac{1}{\\det(A)}\\sum_{j}A_{jk}C_{ji}=0$ by Exercise 2.1.4(b) (it is obvious by noticing that $\\sum_{j}(-1)^{i+j}M_{ji}A_{jk}$ is the determinant of $A$ with its $i^{th}$ column replaced by the $k^{th}$ column, and the determinant of a matrix with two equal columns is zero). 
Therefore, $(A^{-1}A)_{ik}=\\delta_{ik}$, so $A^{-1}A=I$.\n\n\\paragraph{2.2.21}\n(a) The unit matrix with $M_{ii}$ replaced by $k$.\n\n(b) The unit matrix with $M_{im}$ replaced by $-k$.\n\n(c) The unit matrix with $M_{ii},M_{mm}$ replaced by 0 and $M_{im},M_{mi}$ replaced by 1.\n\n\\paragraph{2.2.22}\n(a) The unit matrix with $M_{ii}$ replaced by $k$.\n\n(b) The unit matrix with $M_{mi}$ replaced by $-k$.\n\n(c) The unit matrix with $M_{ii},M_{mm}$ replaced by 0 and $M_{im},M_{mi}$ replaced by 1.\n\n\\paragraph{2.2.23}\nBy Gauss-Jordan matrix inversion, $A^{-1}=\n\\begin{pmatrix}\n1&-1&0\\\\\n-1&\\frac{11}{7}&-\\frac{1}{7}\\\\\n0&-\\frac{1}{7}&\\frac{2}{7}\n\\end{pmatrix}\n$\n\n\\paragraph{2.2.24}\n(a) $\\sum_{i=1}^nT_{ij}$ is the sum of the fractions of the population of the $j$th area having moved to the other (including the $j$th) areas, so the sum adds up to 1.\n\n(b) $\\sum_{i=1}^nQ_i=\\sum_{i=1}^n(TP)_i=\\sum_{i=1}^n\\sum_{j=1}^n T_{ij}P_j=\\sum_{j=1}^n\\left(\\sum_{i=1}^nT_{ij} \\right)P_j=\\sum_{j=1}^nP_j=1$\n\n\\paragraph{2.2.25}\n\\renewcommand{\\arraystretch}{1.5}\n\\[\n\\begin{pmatrix}\n1&\\frac{1}{2}&\\frac{1}{4}&\\frac{1}{8}&\\frac{1}{16}&\\frac{1}{32}\\\\\n\\frac{1}{2}&1&\\frac{1}{2}&\\frac{1}{4}&\\frac{1}{8}&\\frac{1}{16}\\\\\n\\frac{1}{4}&\\frac{1}{2}&1&\\frac{1}{2}&\\frac{1}{4}&\\frac{1}{8}\\\\\n\\frac{1}{8}&\\frac{1}{4}&\\frac{1}{2}&1&\\frac{1}{2}&\\frac{1}{4}\\\\\n\\frac{1}{16}&\\frac{1}{8}&\\frac{1}{4}&\\frac{1}{2}&1&\\frac{1}{2}\\\\\n\\frac{1}{32}&\\frac{1}{16}&\\frac{1}{8}&\\frac{1}{4}&\\frac{1}{2}&1\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n1&0&0&0&0&0\\\\\n0&1&0&0&0&0\\\\\n0&0&1&0&0&0\\\\\n0&0&0&1&0&0\\\\\n0&0&0&0&1&0\\\\\n0&0&0&0&0&1\\\\\n\\end{pmatrix}\n\\xrightarrow{R_i-\\frac{1}{2}R_{i-1}\\rightarrow R_i \\,i=6\\sim2}\n\\]\n\\[\n\\begin{pmatrix}\n1&\\frac{1}{2}&\\frac{1}{4}&\\frac{1}{8}&\\frac{1}{16}&\\frac{1}{32}\\\\\n0&\\frac{3}{4}&\\frac{3}{8}&\\frac{3}{16}&\\frac{3}{32}&\\frac{3}{64}\\\\\n0&0&\\frac{3}{4}&\\frac{3}{8}&\\frac{3}{16}&\\frac{3}{32}\\\\\n0&0&0&\\frac{3}{4}&\\frac{3}{8}&\\frac{3}{16}\\\\\n0&0&0&0&\\frac{3}{4}&\\frac{3}{8}\\\\\n0&0&0&0&0&\\frac{3}{4}\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n1&0&0&0&0&0\\\\\n-\\frac{1}{2}&1&0&0&0&0\\\\\n0&-\\frac{1}{2}&1&0&0&0\\\\\n0&0&-\\frac{1}{2}&1&0&0\\\\\n0&0&0&-\\frac{1}{2}&1&0\\\\\n0&0&0&0&-\\frac{1}{2}&1\\\\\n\\end{pmatrix}\n\\xrightarrow[R_i-\\frac{1}{2}R_{i+1}\\rightarrow R_i \\,i=1\\sim5]{\\frac{3}{4}R_1\\rightarrow R_1}\n\\]\n\\[\n\\begin{pmatrix}\n\\frac{3}{4}&0&0&0&0&0\\\\\n0&\\frac{3}{4}&0&0&0&0\\\\\n0&0&\\frac{3}{4}&0&0&0\\\\\n0&0&0&\\frac{3}{4}&0&0\\\\\n0&0&0&0&\\frac{3}{4}&0\\\\\n0&0&0&0&0&\\frac{3}{4}\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n1&-\\frac{1}{2}&0&0&0&0\\\\\n-\\frac{1}{2}&\\frac{5}{4}&-\\frac{1}{2}&0&0&0\\\\\n0&-\\frac{1}{2}&\\frac{5}{4}&-\\frac{1}{2}&0&0\\\\\n0&0&-\\frac{1}{2}&\\frac{5}{4}&-\\frac{1}{2}&0\\\\\n0&0&0&-\\frac{1}{2}&\\frac{5}{4}&-\\frac{1}{2}\\\\\n0&0&0&0&-\\frac{1}{2}&1\\\\\n\\end{pmatrix}\n\\xrightarrow{\\frac{4}{3}R_i\\rightarrow R_i\\,i=1\\sim 6}\n\\]\n\\[\n\\begin{pmatrix}\n1&0&0&0&0&0\\\\\n0&1&0&0&0&0\\\\\n0&0&1&0&0&0\\\\\n0&0&0&1&0&0\\\\\n0&0&0&0&1&0\\\\\n0&0&0&0&0&1\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n\\frac{4}{3}&-\\frac{2}{3}&0&0&0&0\\\\\n-\\frac{2}{3}&\\frac{5}{3}&-\\frac{2}{3}&0&0&0\\\\\n0&-\\frac{2}{3}&\\frac{5}{3}&-\\frac{2}{3}&0&0\\\\\n0&0&-\\frac{2}{3}&\\frac{5}{3}&-\\frac{2}{3}&0\\\\\n0&0&0&-\\frac{2}{3}&\\frac{5}{3}&-\\frac{2}{3}\\\\\n0&0&0&0&-\\frac{2}{3}&\\frac{4}{3}\\\\\n\\end{pmatrix}\n\\]\n\n\\paragraph{2.2.26}\nIf $A,B$ are orthogonal, $AA^T=I$ and $BB^T=I$, then 
$(AB)(AB)^T=ABB^TA^T=AA^T=I$, so $AB$ is also orthogonal.\n\n\\paragraph{2.2.27}\n$\\det(AA^T)=\\det(A)\\det(A^T)=(\\det(A))^2=\\det(I)=1$, so $\\det(A)=\\pm1$\n\n\\paragraph{2.2.28}\nIf $A=A^T$ and $B=-B^T$, then $\\trace(AB)=\\trace(BA)=\\trace(-B^TA^T)=\\trace(-(AB)^T)=-\\trace(AB)$, so $\\trace(AB)=0$.\n\n\\paragraph{2.2.29}\n\\renewcommand{\\arraystretch}{1}\n$AA^T=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\begin{pmatrix}a&c\\\\b&d\\end{pmatrix}=\\begin{pmatrix}1&0\\\\0&1\\end{pmatrix}$, so $a^2+b^2=1$, $c^2+d^2=1$, $ac+bd=0$. Let $\\theta=\\tan^{-1}\\frac{b}{a}$, then $a=\\cos\\theta$, $b=\\sin\\theta$, $\\frac{c}{d}=-\\tan{\\theta}$, $c=-\\sin\\theta$, $d=\\cos\\theta$. So the most general form is $\\begin{pmatrix}\\cos\\theta&\\sin\\theta\\\\-\\sin\\theta&\\cos\\theta\\end{pmatrix}$.\n\n\\paragraph{2.2.30}\n$\\det(A^*)=\\sum_{ij\\cdots}\\varepsilon_{ij\\cdots}a_{1i}^*a_{2j}^*\\cdots=(\\sum_{ij\\cdots}\\varepsilon_{ij\\cdots}a_{1i}a_{2j}\\cdots)^*=(\\det A)^*$\n\n$\\det(A^*)=\\det((A^*)^T)=\\det(A^\\dagger)$\n\n\\paragraph{2.2.31}\nIf two of the matrices are real, then their commutator is real, so $i$ times the third matrix must be real, so the third matrix must be pure imaginary.\n\n\\paragraph{2.2.32}\n$(AB)^\\dagger=((AB)^T)^*=(B^TA^T)^*=((B)^T)^*((A)^T)^*=B^\\dagger A^\\dagger$\n\n\\paragraph{2.2.33}\n$S^\\dagger_{ij}=S^*_{ji}$, so \n$\\trace(S^\\dagger S)=\\sum_i(S^\\dagger S)_{ii}=\\sum_i\\sum_j S^\\dagger_{ij}S_{ji}=\\sum_i\\sum_j|S_{ji}|^2>0$ when $S$ is not the null matrix.\n\n\\paragraph{2.2.34}\n$A^\\dagger=A$, $B^\\dagger=B$. $(AB+BA)^\\dagger=B^\\dagger A^\\dagger+A^\\dagger B^\\dagger=BA+AB=AB+BA$; $[i(AB-BA)]^\\dagger=-i(B^\\dagger A^\\dagger-A^\\dagger B^\\dagger)=-i(BA-AB)=i(AB-BA)$. So both $(AB+BA)$ and $i(AB-BA)$ are Hermitian.\n\n\\paragraph{2.2.35}\n$(C+C^\\dagger)^\\dagger=C^\\dagger+C=C+C^\\dagger$; $[i(C-C^\\dagger)]^\\dagger=-i(C^\\dagger-C)=i(C-C^\\dagger)$. So both matrices are Hermitian.\n\n\\paragraph{2.2.36}\n$C=-i(AB-BA)$, $C^\\dagger=i(B^\\dagger A^\\dagger-A^\\dagger B^\\dagger)=i(BA-AB)=-i(AB-BA)=C$ so $C$ is Hermitian.\n\n\\paragraph{2.2.37}\nIf $AB=BA$, then $(AB)^\\dagger=B^\\dagger A^\\dagger=BA=AB$ so $AB$ is Hermitian; if $AB$ is Hermitian, then $AB=(AB)^\\dagger=B^\\dagger A^\\dagger=BA$. Therefore, $AB=BA$ is a necessary and sufficient condition for $AB$ to be Hermitian.\n\n\\paragraph{2.2.38}\n$UU^\\dagger=I$, $U^\\dagger=U^{-1}$, $U=(U^{-1})^\\dagger$, $U^{-1}U=I=U^{-1}(U^{-1})^\\dagger$, so $U^{-1}$ is unitary.\n\n\\paragraph{2.2.39}\nIt is obvious that $(A\\otimes B)^T=A^T\\otimes B^T$, so $(A\\otimes B)^\\dagger=A^\\dagger\\otimes B^\\dagger$. If $A$ and $B$ are unitary, then $(A\\otimes B)(A\\otimes B)^\\dagger =(A\\otimes B)(A^\\dagger\\otimes B^\\dagger)=(AA^\\dagger\\otimes BB^\\dagger)=I_1\\otimes I_2=I$\n\n\\paragraph{2.2.40}\n$\\boldsymbol{\\sigma}\\cdot\\mathbf{p}=\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3=\n\\begin{pmatrix}\n0&p_1\\\\p_1&0\n\\end{pmatrix}+\n\\begin{pmatrix}\n0&-ip_2\\\\ip_2&0\n\\end{pmatrix}+\n\\begin{pmatrix}\np_3&0\\\\0&-p_3\n\\end{pmatrix}=\n\\begin{pmatrix}\np_3&p_1-ip_2\\\\p_1+ip_2&-p_3\n\\end{pmatrix}\n$,\nso $(\\boldsymbol{\\sigma}\\cdot\\mathbf{p})^2=\\begin{pmatrix}\np_3&p_1-ip_2\\\\p_1+ip_2&-p_3\n\\end{pmatrix}\\begin{pmatrix}\np_3&p_1-ip_2\\\\p_1+ip_2&-p_3\n\\end{pmatrix}=\n\\begin{pmatrix}\n\\mathbf{p}^2&0\\\\0&\\mathbf{p}^2\n\\end{pmatrix}=\\mathbf{p}^2\\mathbf{1}_2\n$\n\n\\paragraph{2.2.41}\n$(\\gamma^0)^2=(\\sigma_3)^2\\otimes(1_2)^2=1_2\\otimes1_2=1_4$. 
$(\\gamma^i)^2=\\gamma^2\\otimes(\\sigma_i)^2=(-1_2)\\otimes1_2=-1_4$. When $\\mu\\neq0$ and $\\mu\\neq i$, $\\gamma^\\mu\\gamma^i+\\gamma^i\\gamma^\\mu=\\gamma^2\\otimes(\\sigma_\\mu\\sigma_i)+\\gamma^2\\otimes(\\sigma_i\\sigma_\\mu)=\\gamma^2\\otimes(\\sigma_\\mu\\sigma_i+\\sigma_i\\sigma_\\mu)=\\gamma^2\\otimes0=0$; when $\\mu=0$, $\\gamma^0\\gamma^i+\\gamma^i\\gamma^0=(\\sigma_3\\gamma)\\otimes(\\sigma_i)+(\\gamma\\sigma_3)\\otimes(\\sigma_i)=\n\\begin{pmatrix}0&1\\\\1&0\\end{pmatrix}\\otimes(\\sigma_i)+\n\\begin{pmatrix}0&-1\\\\-1&0\\end{pmatrix}\\otimes(\\sigma_i)=0\n$\n\n\\paragraph{2.2.42}\n$\\gamma^5\\gamma^\\mu=i\\gamma^0\\gamma^1\\gamma^2\\gamma^3\\gamma^\\mu$. Switch $\\gamma^\\mu$ with each $\\gamma^i$ to its left: if $i=\\mu$, the product won't change; if $i\\neq\\mu$, it will be multiplied by $(-1)$ by the anti-commuting property. So after switching four times to move $\\gamma^\\mu$ to the left-most side, three factors of $(-1)$ have been picked up, so $\\gamma^5\\gamma^\\mu=-i\\gamma^\\mu\\gamma^0\\gamma^1\\gamma^2\\gamma^3=-\\gamma^\\mu\\gamma^5$, so $\\gamma^5$ anti-commutes with all four $\\gamma^\\mu$.\n\n\\paragraph{2.2.43}\n(The definition $\\gamma_\\mu=\\sum g_{\\nu\\mu}\\gamma^\\mu$ in the problem statement should read $\\gamma_\\nu=\\sum_\\mu g_{\\nu\\mu}\\gamma^\\mu$.) $\\gamma_0=\\gamma^0$ and $\\gamma_i=-\\gamma ^i,\\,i=1,2,3$. Along with $(\\gamma^0)^2=1$ and $(\\gamma^i)^2=-1$, we have $\\gamma_\\mu\\gamma^\\mu=1,\\,\\mu=0,1,2,3$.\n\\medskip\n\n(a) If $\\mu=\\alpha$, $\\gamma_\\mu\\gamma^\\alpha\\gamma^\\mu=\\gamma_\\mu\\gamma^\\mu\\gamma^\\alpha=\\gamma^\\alpha$; if $\\mu\\neq\\alpha$, $\\gamma_\\mu\\gamma^\\alpha\\gamma^\\mu=-\\gamma_\\mu\\gamma^\\mu\\gamma^\\alpha=-\\gamma^\\alpha$. So $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\mu=(1-3)\\gamma^\\alpha=-2\\gamma^\\alpha$\n\\medskip\n\n(b) If $\\alpha=\\beta$, $\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=\\gamma^\\alpha\\gamma^\\beta=(\\gamma^\\alpha)^2=g^{\\alpha\\alpha}=g^{\\alpha\\beta}$ for all $\\mu=0,1,2,3$, so $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=4g^{\\alpha\\beta}$. If $\\alpha\\neq\\beta$, for $\\mu=\\alpha$ or $\\mu=\\beta$, $\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=-\\gamma^\\alpha\\gamma^\\beta$, and for $\\mu\\neq\\alpha$ and $\\mu\\neq\\beta$, $\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=\\gamma^\\alpha\\gamma^\\beta$, so $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=(-2+2)\\gamma^\\alpha\\gamma^\\beta=0=g^{\\alpha\\beta}$. So $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\mu=4g^{\\alpha\\beta}$.\n\\medskip\n\n(c)\nIf $\\alpha,\\beta,\\nu$ are all different, then $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu\\gamma^\\mu=(3-1)\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu=2(-1)^3\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha=-2\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha$. If only two of $\\alpha,\\beta,\\nu$ are the same, then $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu\\gamma^\\mu=(-3+1)\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu=-2(1)(-1)^2\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha\\\\=-2\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha$. If $\\alpha,\\beta,\\nu$ are all the same, then $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu\\gamma^\\mu=(-3+1)\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu=-2\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha$. 
Therefore, in all cases, $\\sum\\gamma_\\mu\\gamma^\\alpha\\gamma^\\beta\\gamma^\\nu\\gamma^\\mu=-2\\gamma^\\nu\\gamma^\\beta\\gamma^\\alpha$.\n\n\\paragraph{2.2.44}\n$(\\gamma^5)^2=-\\gamma^0\\gamma^1\\gamma^2\\gamma^3\\gamma^0\\gamma^1\\gamma^2\\gamma^3=-(-1)^{3+2+1}(\\gamma^0)^2(\\gamma^1)^2(\\gamma^2)^2(\\gamma^3)^2=1$, so\\\\ $M^2=\\frac{1}{4}(1+2\\gamma^5+(\\gamma^5)^2)=\\frac{1}{4}(2+2\\gamma^5)=\\frac{1}{2}(1+\\gamma^5)=M$.\n\n\\paragraph{2.2.45}\nBy direct evaluation, we find that the 16 Dirac matrices are equal to $(i)^n\\sigma_i\\otimes\\sigma_j,\\,i,j=0,1,2,3$, with each Dirac matrix having a different $(i,j)$, where we let $\\sigma_0=1_2$ and $n$ depends on $(i,j)$. Then the problem is equivalent to proving that the 16 $\\sigma_i\\otimes\\sigma_j$ form a linearly independent set. Since $\\trace\\left((\\sigma_i\\otimes\\sigma_j)(\\sigma_k\\otimes\\sigma_l)\\right)=\\trace(\\sigma_i\\sigma_k)\\trace(\\sigma_j\\sigma_l)=4\\delta_{ik}\\delta_{jl}$, the 16 matrices $\\sigma_i\\otimes\\sigma_j$ are mutually orthogonal with respect to the trace form, hence form a linearly independent set, and so the 16 Dirac matrices form a linearly independent set.\n\n\\paragraph{2.2.46}\n(The 16 Dirac matrices are defined as $E_{ij}=\\sigma_i\\otimes\\sigma_j$, $i,j=0,1,2,3$, where $\\sigma_0=I_2$.) \\\\For $(i,j)\\neq(0,0)$, $\\trace(E_{ij})=\\trace(\\sigma_i)\\trace(\\sigma_j)=0$ because $\\trace(\\sigma_k)=0$ when $k=1,2,3$. So $\\trace(E_{ij}E_{mn})\\neq0$ only when $(\\sigma_i\\otimes\\sigma_j)(\\sigma_m\\otimes\\sigma_n)=\\sigma_0\\otimes\\sigma_0$, that is, $i=m$ and $j=n$. Relabel the $E_{mn}$ as $\\Gamma_1,\\dots,\\Gamma_{16}$ and the coefficients $c_{mn}$ correspondingly, so that $\\sum_{k=1}^{16}c_k\\Gamma_k=\\sum_{m,n=0}^3c_{mn}E_{mn}$. Then for $\\Gamma_k=E_{mn}$, $\\trace(A\\Gamma_k)=\\trace(\\sum_{m',n'=0}^3c_{m'n'}E_{m'n'}E_{mn})=c_{mn}\\trace(E_{mn}E_{mn})=c_{mn}\\trace(I_4)=4c_{mn}=4c_k$, so $c_k=\\frac{1}{4}\\trace(A\\Gamma_k)$.
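\n\nThese expansion coefficients are easy to test numerically; the sketch below (our addition, assuming NumPy) decomposes a random $4\\times4$ matrix in the basis $E_{ij}=\\sigma_i\\otimes\\sigma_j$:\n\\begin{verbatim}\nimport numpy as np\n\n# Exercise 2.2.46: any 4x4 matrix A satisfies A = sum_ij c_ij E_ij\n# with E_ij = kron(sigma_i, sigma_j) and c_ij = trace(A E_ij) / 4.\ns = [np.eye(2, dtype=complex),\n     np.array([[0, 1], [1, 0]], dtype=complex),\n     np.array([[0, -1j], [1j, 0]], dtype=complex),\n     np.array([[1, 0], [0, -1]], dtype=complex)]\nrng = np.random.default_rng(0)\nA = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))\nB = np.zeros((4, 4), dtype=complex)\nfor i in range(4):\n    for j in range(4):\n        E = np.kron(s[i], s[j])\n        B += np.trace(A @ E) / 4 * E\nassert np.allclose(A, B)\n\\end{verbatim}\n\n\\paragraph{2.2.47}\nNote that $(\\gamma^0)^T=\\gamma^0$, $(\\gamma^1)^T=-\\gamma^1$, $(\\gamma^2)^T=\\gamma^2$, $(\\gamma^3)^T=-\\gamma^3$. $C^{-1}=-i(\\gamma^0)^{-1}(\\gamma^2)^{-1}=i\\gamma^0\\gamma^2$, so $C\\gamma^\\mu C^{-1}=-\\gamma^2\\gamma^0\\gamma^\\mu\\gamma^0\\gamma^2$. If $\\mu=0$ or $2$, $-\\gamma^2\\gamma^0\\gamma^\\mu\\gamma^0\\gamma^2=-(-1)(1)(-1)\\gamma^\\mu=-\\gamma^\\mu=-(\\gamma^\\mu)^T$; if $\\mu=1$ or $3$, $-\\gamma^2\\gamma^0\\gamma^\\mu\\gamma^0\\gamma^2=-(-1)^2(1)(-1)\\gamma^\\mu=\\gamma^\\mu=-(\\gamma^\\mu)^T$. 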
So in all cases $C\\gamma^\\mu C^{-1}=-(\\gamma^\\mu)^T$.\n\n\\paragraph{2.2.48}\n(a) \\[\\gamma^0mc^2=mc^2\\sigma_3\\otimes I_2=\n\\begin{pmatrix}mc^2&0\\\\0&-mc^2\\end{pmatrix}\\] \\[c(\\alpha_1p_1+\\alpha_2p_2+\\alpha_3p_3)=c\\gamma^0(\\gamma^1p_1+\\gamma^2p_2+\\gamma^3p_3)=c(\\sigma_3\\otimes I_2)(\\gamma\\otimes\\sigma_1p_1+\\gamma\\otimes\\sigma_2p_2+\\gamma\\otimes\\sigma_3p_3)\\]\n\\[=c\\sigma_1\\otimes(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)=\n\\begin{pmatrix}\n0&c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)\\\\c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)&0\\end{pmatrix}\\]\n\\[-EI_4=-EI_2\\otimes I_2=\n\\begin{pmatrix}\n-E&0\\\\0&-E\n\\end{pmatrix}\n\\]\n\\[\\psi=\n\\begin{pmatrix}\n\\psi_L\\\\\\psi_S\n\\end{pmatrix}\n\\]\nSo $\\left[\\gamma^0mc^2+c(\\alpha_1p_1+\\alpha_2p_2+\\alpha_3p_3)-E\\right]\\psi=0$ becomes\n\\[\n\\begin{pmatrix}\nmc^2-E&c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)\\\\\nc(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)&-mc^2-E\n\\end{pmatrix}\n\\begin{pmatrix}\n\\psi_L\\\\\\psi_S\n\\end{pmatrix}=0\n\\]\n\n(b) By the indicated approximation, the equation becomes\n\\[\n\\begin{pmatrix}\n-\\varepsilon&c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)\\\\\nc(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)&-2mc^2\n\\end{pmatrix}\n\\begin{pmatrix}\n\\psi_L\\\\\\psi_S\n\\end{pmatrix}=0\n\\]\nIt separates into $\\varepsilon\\psi_L=c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)\\psi_S$ and $c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)\\psi_L=2mc^2\\psi_S$. Eliminating $\\psi_S$ and using $(\\boldsymbol{\\sigma}\\cdot\\mathbf{p})^2=\\mathbf{p}^2$ from Exercise 2.2.40, we obtain $\\frac{1}{2m}\\left(p_1^2+p_2^2+p_3^2\\right)\\psi_L=\\varepsilon\\psi_L$\n\n(c) From the two separated equations, we can get $(\\frac{\\psi_S}{\\psi_L})^2=\\frac{\\varepsilon}{2mc^2}\\ll1$ in the non-relativistic approximation.\n\n\\paragraph{2.2.49}\n\\[(\\gamma^0)^2=\n\\begin{pmatrix}\nI_2&0\\\\0&I_2\n\\end{pmatrix}=I_4\n\\]\n\\[\n(\\gamma^i)^2=\n\\begin{pmatrix}\n-\\sigma_i^2&0\\\\0&-\\sigma_i^2\n\\end{pmatrix}=\n\\begin{pmatrix}\n-I_2&0\\\\0&-I_2\n\\end{pmatrix}=-I_4 ,\\, i=1,2,3\n\\]\n\\[\n\\gamma^\\mu\\gamma^i+\\gamma^i\\gamma^\\mu=\n\\begin{pmatrix}\n-\\sigma_\\mu\\sigma_i&0\\\\\n0&-\\sigma_\\mu\\sigma_i\n\\end{pmatrix}+\n\\begin{pmatrix}\n-\\sigma_i\\sigma_\\mu&0\\\\\n0&-\\sigma_i\\sigma_\\mu\n\\end{pmatrix}=\n0,\\,\\mu\\neq i\n\\]\n\n\\paragraph{2.2.50}\nAs in Exercise 2.2.48 but with $\\gamma^0$ and $\\gamma^i$ defined in Exercise 2.2.49, the Dirac equation becomes\n\\[\n\\begin{pmatrix}\n-c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)-E&mc^2\\\\\nmc^2&c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)-E\n\\end{pmatrix}\n\\begin{pmatrix}\n\\psi_L\\\\\\psi_S\n\\end{pmatrix}=0\n\\]\nIn the limit that $m$ approaches zero, the equation becomes\n\\[\n\\begin{pmatrix}\n-c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)-E&0\\\\\n0&c(\\sigma_1p_1+\\sigma_2p_2+\\sigma_3p_3)-E\n\\end{pmatrix}\n\\begin{pmatrix}\n\\psi_L\\\\\\psi_S\n\\end{pmatrix}=0\n\\]\nwhich separates into independent $2\\times2$ blocks.\n\n\\paragraph{2.2.51}\n(a) $|r'|^2=r'^\\dagger r'=r^\\dagger U^\\dagger Ur=r^\\dagger r=|r|^2$\n\n(b) $r^\\dagger r=r'^\\dagger r'=r^\\dagger U^\\dagger Ur$ for any $r$, so $U^\\dagger U=I$.\n\n\n\\end{document}\n", "meta": {"hexsha": "3419e8e403485f17efd849becfdba09148d167fe", "size": 26770, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mathematical Methods for Physicists/Chapter 02/main.tex", "max_stars_repo_name": "hikarimusic2002/Solutions", "max_stars_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": 
null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematical Methods for Physicists/Chapter 02/main.tex", "max_issues_repo_name": "hikarimusic2002/Solutions", "max_issues_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematical Methods for Physicists/Chapter 02/main.tex", "max_forks_repo_name": "hikarimusic2002/Solutions", "max_forks_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.8702290076, "max_line_length": 952, "alphanum_fraction": 0.6252895032, "num_tokens": 12259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7520125793176223, "lm_q1q2_score": 0.5905619314736645}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{III}\n\n\\def\\ntitle{Profinite Groups}\n\\def\\nlecturer{G.\\ R.\\ Wilkes}\n\n\\def\\nterm{Lent}\n\\def\\nyear{2020}\n\n\\input{header}\n\n\\renewcommand{\\c}[1]{\\mathbf{#1}} % category\n\\DeclareMathOperator{\\Mat}{Mat} % matrix group\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\setcounter{section}{-1}\n\n\\section{Introduction}\n\nQuestion: how can we tell when two objects are different? This is in general a difficult question and to answer it we need a variety of techniques.\n\n\\begin{itemize}\n\\item \\(\\N \\cong \\Q\\) is easier thatn \\(\\Q \\ncong \\R\\).\n\\item To show two finite dimensional vector spaces are not isomorphic we can compute their dimensions.\n\\item For simplicial complexes, we can compute that homology groups \\(H_n(X)\\). Note that this is a partial invariant, in contrast to dimensiona which is a complete invariant for finite dimensional spaces.\n\\item We can also consider \\(\\pi_1(X)\\). However in contrast to \\(H_1(X)\\), \\(\\pi_1(X)\\) is not necessarily abelian. The Adian-Rabin theorem says that there can be no algorithm which decides if a finitely presented group is trivial or not. Can we build algorithm which sometimes work? We can solve the problem for finite groups. For infinite groups we can build upon this idea. We can write out the lists of finite quotiesnts of two groups \\(G_1\\) and \\(G_2\\) and compare them. The question: when does this work?\n\n  Before we answer this question, note that a list of quotient groups is a very unpleasant object. Instead, we can combine this list into a single ``limiting'' object, called the profinite completion. This technique works in other situations:\n  \\begin{itemize}\n  \\item \\(p\\)-adic integers \\(\\Z_p\\) being the ``limit'' of \\(\\Z/p^n\\Z\\),\n  \\item Galois theory: let\n    \\begin{align*}\n      K &= \\Q(n\\text{th root of unit for all } n) \\\\\n      K_N &= \\Q(n\\text{th root of unit for } n \\leq N)\n    \\end{align*}\n    Then \\(K = \\bigcup K_N\\) and is a Galois extension over \\(\\Q\\) so we can consider \\(\\gal(K/\\Q)\\), which is the ``limit'' of \\(\\gal(K_n/\\Q)\\).\n  \\item \u00e9tale fundamental groups in algebraic geometry.\n  \\end{itemize}\n\\end{itemize}\n\nAside from profinite groups, we will also study group cohomolgy in this course. It is another invariant of groups. This is related to the homology of a simplicial complex and gives abelian invariants. Among other things, it tells if a group is free. It answers the question: given a group \\(G\\) and an abelian group \\(A\\), how many groups \\(E\\) exists such that \\(A \\normal E\\) and \\(E/A \\cong G\\)?\n\n\\section{Inverse limits}\n\n\\subsection{Categories \\& Limits}\n\nRecap on categories and limits. Refer to III Category Theory.\n\n\\subsection{Inverse limits and profinite groups}\n\n\\begin{definition}[profinite completion]\\index{profinite completion}\n  Let \\(G\\) be a group. Let \\(\\c N\\) be the category whose objects are finite index normal subgroups \\(N \\normal_f G\\) and with an arrow \\(N_1 \\to N_2\\) if and only if \\(N_1 \\subseteq N_2\\). 
This is a poset category.\n\n  The assignment\n  \\begin{align*}\n    N &\\mapsto G/N \\\\\n    (N_1 \\to N_2) &\\mapsto (G/N_1 \\to G/N_2)\n  \\end{align*}\n  is a functor \\(\\c N \\to \\c{Grp}\\).\n\n  The limit of this diagram is a group called the \\emph{profinite completion} \\(\\hat G\\) of \\(G\\).\n\\end{definition}\n\n\\begin{notation}\n  \\(\\hat G\\) is equipped with homomorphisms making the following diagram commute:\n  \\[\n    \\begin{tikzcd}\n      \\hat G \\ar[d] \\ar[dr] \\\\\n      G/N_1 \\ar[r] & G/N_2\n    \\end{tikzcd}\n  \\]\n  In this course we refer to them as \\emph{projection maps} and the horizontal map as a \\emph{transition map}. In addition, we have a \\emph{canonical map} \\(i: G \\to \\hat G\\) which exists by the definition of limit.\n\\end{notation}\n\nWe haven't shown that the profinite completion exists. We will prove it shortly in a more general context, by showing that it is an example of a particular kind of, in some sense well-behaved, limit.\n\n\\begin{definition}[inverse system]\\index{inverse system}\n  A poset \\((J, \\preceq)\\) is called an \\emph{inverse system} if for all \\(i, j \\in J\\) there exists \\(k \\in J\\) such that \\(k \\preceq i\\) and \\(k \\preceq j\\).\n\\end{definition}\n\n\\begin{eg}\n  In \\(\\c{N}\\), \\(N_1 \\cap N_2\\) is a finite index normal subgroup contained in both \\(N_1\\) and \\(N_2\\).\n\\end{eg}\n\n\\begin{definition}[inverse system, inverse limit]\\index{inverse system}\\index{inverse limit}\n  An \\emph{inverse system} (of groups, sets etc) is a functor \\(\\c J \\to \\c C\\) where \\(\\c J\\) is the poset category corresponding to an inverse system.\n\n  If \\(F: \\c J \\to \\c{Grp}\\) is an inverse system, the limit of \\(F\\) is called the \\emph{inverse limit} of the objects \\(F(j)\\).\n\\end{definition}\n\nSince this is the central subject of this course, we spell out this definition explicitly.\n\n\\begin{definition}\n  An \\emph{inverse system of groups}, indexed over an inverse system \\((J, \\preceq)\\), consists of\n  \\begin{itemize}\n  \\item a group \\(G_j\\) for all \\(j \\in J\\),\n  \\item for all \\(i \\preceq j\\), a transition map \\(\\phi_{ij}: G_i \\to G_j\\) such that \\(\\phi_{ii} = \\id_{G_i}, \\phi_{jk} \\compose \\phi_{ij} = \\phi_{ik}\\).\n  \\end{itemize}\n  The \\emph{inverse limit of the system} \\((G_j)_{j \\in J}\\) is a group \\(\\varprojlim_{j \\in J} G_j\\) with projection maps \\(p_j: \\varprojlim G_j \\to G_j\\) such that \\(\\phi_{ij} \\compose p_i = p_j\\), and such that for any \\(Z\\) with compatible maps \\(q_j: Z\\to G_j\\) (i.e.\\ \\(\\phi_{ij} \\compose q_i = q_j\\)), there exists a unique map \\(q: Z \\to \\varprojlim G_j\\) such that \\(p_j \\compose q = q_j\\).\n\\end{definition}\n\n\\begin{definition}[profinite group]\\index{profinite group}\n  A \\emph{profinite group} is the inverse limit of an inverse system of finite groups.\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item The profinite completion of a group \\(G\\) is a profinite group.\n  \\item \\(\\Z_p\\), the \\(p\\)-adic integers, is \\(\\varprojlim \\Z/p^n\\Z\\).\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{proposition}\n  Let \\((G_j)_{j \\in J}\\) be an inverse system of groups indexed by an inverse system \\(J\\). \\(\\varprojlim G_j\\) exists and is equal to\n  \\[\n    L = \\{ (g_j)_{j \\in J} \\in \\prod_{j \\in J} G_j: \\varphi_{ij} (g_i) = g_j \\text{ for all } i \\preceq j \\}.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(p_j: L \\to G_j\\) be the restriction of the projection \\(\\prod G_j \\to G_j\\). Then \\(\\varphi_{ij} \\compose p_i = p_j\\). Now let \\(q_j: Z \\to G_j\\) be a cone. 
There is a unique map \\(f: Z \\to \\prod G_j\\) such that \\(p_j \\compose f = q_j\\), and compatibility of the \\(q_j\\) gives \\(f(Z) \\subseteq L\\).\n\\end{proof}\n\nNote that we do not use any properties of inverse systems or posets. The construction works equally well for sets (except that the resulting inverse limit is not a group). We will see that the finiteness and inverse system requirements ensure that the construction gives a nonempty set. To do so we need to bring in topology.\n\n\\subsection{Topology on a profinite group/set}\n\nGive each finite group \\(G_j\\) in an inverse system the discrete topology. Then give \\(\\prod G_j\\) the product topology and \\(\\varprojlim G_j \\subseteq \\prod G_j\\) the subspace topology. \\(\\prod G_j\\) is Hausdorff and compact (Tychonoff). It follows that \\(\\varprojlim G_j\\) is Hausdorff. Since the conditions defining \\(\\varprojlim G_j\\) are closed conditions, it is a closed subset of \\(\\prod G_j\\), hence also compact.\n\n\\begin{proposition}\n  If \\((X_j)_{j \\in J}\\) is an inverse system of nonempty finite sets then \\(\\varprojlim X_j \\ne \\emptyset\\).\n\\end{proposition}\n\n\\begin{proof}\n  Consider the set\n  \\[\n    Y_I = \\{(x_j) \\in \\prod X_j: \\phi_{ij}(x_i) = x_j \\text{ for all } i \\preceq j,\\, i, j \\in I\\}\n  \\]\n  where \\(I \\subseteq J\\). The \\(Y_I\\)'s are closed and \\(\\bigcap_{I \\text{ finite}} Y_I = \\varprojlim X_j\\).\n\n  To show that \\(Y_I \\ne \\emptyset\\) for \\(I\\) finite: by definition of inverse system there exists \\(k \\in J\\) such that \\(k \\preceq i\\) for all \\(i \\in I\\). Now \\(X_k\\) is nonempty so there exists \\(x_k \\in X_k\\). For \\(i \\in I\\), set \\(x_i = \\varphi_{ki}(x_k)\\). For \\(j \\notin I\\), set \\(x_j\\) to be arbitrary. Then this gives a sequence \\((x_j) \\in Y_I\\).\n\n  Now use the finite intersection property: suppose \\(I_1, \\dots, I_m\\) are finite, then\n  \\[\n    Y_{I_1} \\cap \\cdots \\cap Y_{I_m} \\supseteq Y_{I_1 \\cup \\cdots \\cup I_m} \\ne \\emptyset.\n  \\]\n  Since \\(\\prod X_j\\) is compact, a family of closed sets with the finite intersection property has nonempty intersection, so \\(\\varprojlim X_j = \\bigcap_{I \\subseteq J \\text{ finite}} Y_I \\ne \\emptyset\\).\n\\end{proof}\n\nIt is perhaps psychologically comforting to point out that the topology on a profinite group is metrisable, thanks to\n\n\\begin{proposition}\n  If \\((X_i)\\) is a countable family of metric spaces then \\(\\prod X_i\\) is metrisable.\n\\end{proposition}\n\n\\begin{proof}\n  IB Metric and Topological Spaces.\n\\end{proof}\n\nIn applications to profinite completions this is often implied by\n\n\\begin{proposition}\n  If \\(G\\) is a finitely generated group then it has only countably many finite index normal subgroups.\n\\end{proposition}\n\n\\begin{proof}\n  Every finite index normal subgroup arises as the kernel of some homomorphism \\(G \\to S_n\\). For a fixed \\(n\\) each such homomorphism is determined by the images of the generators of \\(G\\), so there are only finitely many of them, and hence countably many kernels in all.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a profinite group. Then multiplication \\(\\mu: G \\times G \\to G\\) and inversion \\(i: G \\to G\\) are continuous maps.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 1.\n\\end{proof}\n\nThus \\(G\\) is a \\emph{topological group}\\index{topological group}.\n\n\\begin{definition}\n  An \\emph{isomorphism of topological groups} is an isomorphism \\(f: G \\to H\\) which is also a homeomorphism.\n\\end{definition}\n\nFrom now on we only consider homomorphisms between profinite groups which are continuous.
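\n\nTo make the compatible-sequence picture concrete, here is a small Python sketch (our own illustration, not from the lectures) enumerating the first three levels of \\(\\varprojlim \\Z/p^n\\) by brute force:\n\\begin{verbatim}\nfrom itertools import product\n\n# Finite-depth slice of the inverse limit L = {(g_j) : phi_ij(g_i) = g_j}\n# for Z/p^3 -> Z/p^2 -> Z/p with the reduction maps.\np = 3\nL = [(g1, g2, g3)\n     for g1, g2, g3 in product(range(p), range(p ** 2), range(p ** 3))\n     if g2 % p == g1 and g3 % p ** 2 == g2]\n# A compatible triple is determined by its deepest level, so |L| = p^3.\nassert len(L) == p ** 3\n\\end{verbatim}\n\n\\begin{lemma}\n  Let \\(H\\) be a topological group and let \\(G = \\varprojlim G_j\\) be an inverse limit of finite groups with projections \\(p_j: G \\to G_j\\). 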
Then a homomorphism \\(f: H \\to G\\) is continuous if and only if \\(p_j \\compose f: H \\to G_j\\) is continuous for all \\(j\\), if and only if \\(\\ker(p_j \\compose f)\\) is an open subgroup of \\(H\\) for all \\(j\\).\n\\end{lemma}\n\n\\begin{proof}\n  The first iff is by definition of the product topology on \\(\\prod G_j\\). For the second, let \\(f_j = p_j \\compose f\\). If \\(f_j: H \\to G_j\\) is continuous then \\(\\ker f_j = f_j^{-1}(1)\\) is open. Conversely, if \\(\\ker f_j\\) is open then the fibre \\(f_j^{-1}(g_j)\\), if nonempty, is a coset \\(h \\cdot \\ker(f_j)\\) with \\(h \\in f_j^{-1}(g_j)\\), so it is open for all \\(g_j \\in G_j\\). Thus \\(f_j^{-1}(U)\\) is open for all \\(U \\subseteq G_j\\).\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a compact topological group. Then a subgroup \\(U \\subseteq G\\) is open if and only if it is closed and has finite index.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 1.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G = \\varprojlim G_j\\) be an inverse system of finite groups. Then the open subgroups \\(U_j = \\ker p_j\\) form a basis of open neighbourhoods of the identity.\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(V \\ni 1\\) be open. By definition of the product topology, \\(V\\) contains a basic open set of the form \\(p_{j_1}^{-1}(X_{j_1}) \\cap \\cdots \\cap p_{j_m}^{-1}(X_{j_m}) \\ni 1\\), where \\(X_{j_i} \\subseteq G_{j_i}\\) open. Then \\(1 \\in X_{j_i} \\subseteq G_{j_i}\\) so by shrinking wlog \\(X_{j_i} = \\{1\\}\\). Thus \\(1 \\in \\ker p_{j_1} \\cap \\cdots \\cap \\ker p_{j_m}\\). Now we can find \\(k \\in J\\) such that \\(k \\leq j_i\\) for all \\(i\\). Then \\(1 \\in U_k \\subseteq \\ker p_{j_1} \\cap \\cdots \\cap \\ker p_{j_m} \\subseteq V\\).\n\\end{proof}\n\n\\begin{corollary}\n  A basis of open sets in \\(G\\) is \\(\\{p_j^{-1}(g_j): j \\in J, g_j \\in G_j\\}\\).\n\\end{corollary}\n\n\\begin{corollary}\n  Let \\(X \\subseteq G = \\varprojlim G_j\\) be a subset. Then \\(X\\) is dense in \\(G\\) if and only if \\(p_j(X) = p_j(G)\\) for all \\(j\\).\n\\end{corollary}\n\n\\begin{proof}\n  If \\(X\\) is not dense then there exists a nonempty open set \\(U\\) such that \\(U \\cap X = \\emptyset\\); wlog \\(U = p_j^{-1}(g_j)\\) with \\(g_j \\in p_j(G)\\). Then \\(g_j \\in p_j(G) \\setminus p_j(X)\\). Similarly if \\(X\\) is dense and \\(U\\) is nonempty open, wlog \\(U = p_j^{-1}(g_j)\\), then \\(g_j \\in p_j(G) = p_j(X)\\) so \\(X \\cap U \\ne \\emptyset\\).\n\\end{proof}\n\n\\begin{proposition}\n  Let \\((G_j)\\) be an inverse system of finite groups and \\(G = \\varprojlim G_j\\). Let \\(X \\subseteq G\\) be a subset. Then \\(\\overline X = \\varprojlim X_j\\) where \\(X_j = p_j(X)\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let\n  \\begin{align*}\n    X'\n    &= \\varprojlim X_j\n    = \\{(g_j) \\in \\prod G_j: g_j \\in X_j \\text{ for all } j, \\phi_{ij}(g_i) = g_j\\} \\\\\n    &= \\bigcap p_j^{-1}(X_j) \\\\\n    &= \\bigcap p_j^{-1}(p_j(X))\n  \\end{align*}\n  which is closed. \\(X \\subseteq X'\\) so \\(\\overline X \\subseteq X'\\). Let \\(g \\in G \\setminus \\overline X\\). Then there exists a basic open set \\(p_j^{-1}(g_j)\\) containing \\(g\\) with \\(p_j^{-1}(g_j) \\subseteq G \\setminus \\overline X\\). Hence \\(\\overline X \\cap p_j^{-1}(g_j) = \\emptyset\\), so \\(g_j \\notin X_j\\), so \\(g \\notin X'\\).\n\\end{proof}\n\n\\begin{corollary}\n  \\(X\\) is closed if and only if \\(X = \\varprojlim X_j\\).\n\\end{corollary}
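\n\n\\begin{eg}\n  For instance (our illustration), take \\(X = m\\Z \\subseteq \\Z_p\\) where \\(m = p^k u\\) with \\(u\\) coprime to \\(p\\). Then \\(X_n = p_n(X) = p^k(\\Z/p^n)\\), since \\(u\\) is invertible mod \\(p^n\\), so \\(\\overline{m\\Z} = \\varprojlim p^k(\\Z/p^n) = p^k\\Z_p\\).\n\\end{eg}\n\nAlong the same lines\n\n\\begin{proposition}\n  Let \\(G\\) be a profinite group. 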
Then\n  \\[\n    \\overline X = \\bigcap_{N \\normal_o G} XN.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Since \\(\\ker p_j\\) form a neighbourhood basis of the identity,\n  \\[\n    \\overline X = \\bigcap p_j^{-1}(p_j(X)) = \\bigcap X \\cdot \\ker p_j \\subseteq \\bigcap_{N \\normal_o G} XN.\n  \\]\n  Conversely if \\(g \\notin \\overline X\\) then can find a neighbourhood \\(p_j^{-1}(g_j)\\) of \\(g\\) that is disjoint from \\(X\\). Then \\(g \\notin X \\cdot \\ker p_j\\).\n\\end{proof}\n\n\\begin{eg}\n  If \\(\\Gamma\\) is an abstract group, \\(\\hat \\Gamma\\) its profinite completion with \\(i: \\Gamma \\to \\hat \\Gamma\\) then \\(p_j(i(\\Gamma)) = \\Gamma/N_j = p_j(\\hat \\Gamma)\\), so \\(i(\\Gamma)\\) is dense in \\(\\hat \\Gamma\\).\n\\end{eg}\n\n\\subsection{Change of inverse system}\n\n\\subsubsection{Surjective inverse system}\n\nLet \\((G_j)_{j \\in J}\\) be an inverse system with transition funcitons \\(\\varphi_{ij}\\) and projections \\(p_j\\).\n\n\\begin{definition}[surjective inverse system]\\index{inverse system!surjective}\n  An inverse system is \\emph{surjective} if the transition maps \\(\\varphi_{ij}\\) are all surjective.\n\\end{definition}\n\n\\begin{proposition}\n  Let \\((X_j)\\) be a surjective inverse system of nonempty finite sets. Then all projections \\(p_j: \\varprojlim X_j \\to X_j\\) are surjective.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 1.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\((X_j)\\) be an inverse system of finite sets. Then there exists a surjective inverse system with the same inverse limit.\n\\end{proposition}\n\n\\begin{proof}\n  Recall that\n  \\[\n    \\varprojlim X_j = \\left\\{ (x_j) \\in \\prod X_j: \\varphi_{ij}(x_i) = x_j \\right\\}.\n  \\]\n  Define \\(Y_j = p_j(\\varprojlim X_j)\\). Then \\(Y_j\\) with transition maps \\(\\varphi_{ij}|_{Y_i}\\) form an inverse system: if \\(y_i \\in Y_i\\) then \\(\\varphi_{ij}(y_i) \\in Y_J\\). Then \\(\\varprojlim Y_j = \\varprojlim X_j\\), and this is a surjective inverse system.\n\\end{proof}\n\n\\subsubsection{Cofinal subsystems}\n\n\\begin{definition}[cofinal]\\index{cofinal inverse system}\\index{inverse system!cofinal}\n  If \\(J\\) is an inverse system, \\(I \\subseteq J\\) is \\emph{cofinal} if for all \\(j \\in J\\) exists \\(i \\in I\\) such that \\(i \\leq j\\).\n\\end{definition}\nTherefore \\(I\\) is also an inverse system.\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item In the system of finite index subgroups of \\(\\Z\\), one cofinal system is \\(\\{n! \\Z\\}\\).\n  \\item If \\(k \\in J\\), then \\(J_{\\leq k} = \\{j \\in J: j \\leq k\\}\\) is a \\emph{principal cofinal system}.\n  \\item A cofinal system of \\(\\N^{\\text{op}}\\) is the same as an increasing sequence of integers.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{definition}[linearly ordered inverse system]\\index{inverse system!linearly ordered}\n  An inverse system is \\emph{linearly ordered} if it is isomorphic to a subset of \\(\\N^{\\text{op}}\\).\n\\end{definition}\n\n\\begin{proposition}\n  Let \\(J\\) be a countable inverse system with no initial element. Then \\(J\\) has a linearly ordered cofinal system.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 1.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\((X_j)\\) be an inverse system of (finite) sets. Let \\(I \\subseteq J\\) be a cofinal system. Then \\(\\varprojlim_{j \\in J} X_j \\cong \\varprojlim_{i \\in I} X_i\\).\n\\end{proposition}\n\n\\begin{proof}\n  We prove the proposition for profinite groups. 
Let \\(G = \\varprojlim_{j \\in J} G_j, H = \\varprojlim_{i \\in I} G_i\\). The map \\(\\prod G_j \\to \\prod G_i\\) is a continuous homomorphism and restricts to a map \\(f: G \\to H\\). It remains to check that this is a bijection. Suppose \\((g_j) \\in \\ker f\\), then \\(p_i(g) = g_i = 1\\) for all \\(i \\in I\\). For every \\(j \\in J\\) there exists \\(i \\in I\\) such that \\(i \\leq j\\), so \\(g_j = \\varphi_{ij}(g_i) = 1\\). Thus \\((g_j) = 1\\). For surjectivity, let \\((g_i) \\in H\\). For \\(j \\notin I\\), let \\(i \\in I\\) be such that \\(i \\leq j\\) and set \\(g_j = \\varphi_{ij}(g_i)\\). This is well-defined (independent of the choice of \\(i\\)) and \\((g_j) \\in G\\).\n\\end{proof}\n\n\\section{Profinite groups}\n\n\\subsection{\\(\\Z_p\\), the \\(p\\)-adic integers}\n\nLet \\(p\\) be a prime. Consider the inverse system\n\\[\n  \\begin{tikzcd}\n    \\cdots \\ar[r] & \\Z/p^n \\ar[r] & \\cdots \\ar[r] & \\Z/p^2 \\ar[r] & \\Z/p \\ar[r] & 1\n  \\end{tikzcd}\n\\]\nof finite rings. The inverse limit is the profinite ring of \\emph{\\(p\\)-adic integers}\\index{\\(p\\)-adic integer}\n\\[\n  \\Z_p = \\varprojlim \\Z/p^n.\n\\]\nAn element \\(\\alpha \\in \\Z_p\\) is a sequence \\((a_n)\\) of elements of \\(\\Z/p^n\\) such that \\(a_m = a_n \\bmod{p^m}\\) if \\(n \\geq m\\), where \\(a_n = p_n(\\alpha)\\) is the image of \\(\\alpha\\) in \\(\\Z/p^n\\).\n\nAddition and multiplication are done component-wise. One way to get such an \\(\\alpha\\) is to take \\(a \\in \\Z\\) and let \\(a_n\\) be the reduction of \\(a\\) mod \\(p^n\\). This gives \\(\\iota: \\Z \\to \\Z_p\\).\n\n\\begin{definition}[pro-\\(p\\) group, pro-\\(p\\) completion]\\index{pro-\\(p\\) group}\\index{pro-\\(p\\) completion}\n  A \\emph{pro-\\(p\\) group} is an inverse limit of \\(p\\)-groups.\n\n  The \\emph{pro-\\(p\\) completion} of \\(\\Gamma\\) is\n  \\[\n    \\hat \\Gamma_{(p)} = \\varprojlim_{\\substack{N \\normal \\Gamma \\\\ \\Gamma/N \\text{ a \\(p\\)-group}}} \\Gamma/N.\n  \\]\n\\end{definition}\n\nTherefore \\(\\Z_p = \\hat \\Z_{(p)}\\). Usually we suppress \\(\\iota\\) and regard \\(\\Z \\subseteq \\Z_p\\).\n\nThere is a natural metric on \\(\\Z_p\\): let \\(\\alpha = (a_n), \\beta = (b_n)\\). If \\(\\alpha = \\beta\\) then \\(d(\\alpha, \\beta) = 0\\). Otherwise let \\(n\\) be the smallest integer such that \\(a_n \\ne b_n\\) and set \\(d(\\alpha, \\beta) = p^{-n}\\). The restriction of this metric to \\(\\Z\\) is the ``\\(p\\)-adic metric'' on \\(\\Z\\).\n\nThe open ball is\n\\begin{align*}\n  B(0, r)\n  &= \\{(a_n): a_m = 0 \\text{ for } p^{-m} \\geq r\\} \\\\\n  &= \\{(a_n): a_m = 0 \\text{ for } m \\leq -\\log_p r\\} \\\\\n  &= \\ker(\\Z_p \\to \\Z/p^{\\floor{-\\log_p r}})\n\\end{align*}\nwhich is an open subgroup of \\(\\Z_p\\).
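\n\nThe metric is easy to compute with; here is a small Python sketch (our own illustration) reading the distance off the truncations \\(a_n = a \\bmod p^n\\):\n\\begin{verbatim}\n# p-adic distance between two integers, following the definition above:\n# d(a, b) = p^{-n} for the smallest n with a != b mod p^n.\ndef p_adic_distance(a, b, p=3, depth=20):\n    for n in range(1, depth + 1):\n        if (a - b) % p ** n != 0:\n            return float(p) ** -n\n    return 0.0  # indistinguishable to this depth\n\n# 29 - 2 = 27 = 3^3, so the truncations first differ mod 3^4:\nassert p_adic_distance(2, 29, p=3) == 3.0 ** -4\n\\end{verbatim}\n\n\\begin{proposition}\n  \\(\\Z_p\\) is abelian and torsion-free.\n\\end{proposition}\n\n\\begin{proof}\n  Abelian is obvious. For torsion-free, let \\(\\alpha = (a_n) \\in \\Z_p\\) with \\(\\alpha \\ne 0\\) and \\(m\\alpha = 0\\) for some \\(m > 0\\). Write \\(m = p^rs\\) where \\(s\\) is coprime to \\(p\\). As \\(\\alpha \\ne 0\\), there exists \\(n\\) such that \\(\\alpha \\ne 0 \\pmod{p^n}\\), i.e.\\ \\(a_n \\ne 0 \\in \\Z/p^n\\). Now consider \\(ma_{n + r} \\pmod{p^{n + r}}\\). 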
Claim this is nonzero: if \\(p^{n + r} \\divides ma_{n + r} = p^r s a_{n + r}\\) then \\(p^n \\divides s a_{n + r}\\), and since \\(s\\) is coprime to \\(p\\), \\(p^n \\divides a_{n + r}\\); hence \\(a_n = a_{n + r} = 0 \\pmod{p^n}\\), contradicting \\(a_n \\ne 0\\). So \\(m\\alpha \\ne 0\\).\n\\end{proof}\n\n\\begin{proposition}\n  The ring \\(\\Z_p\\) is an integral domain.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 1.\n\\end{proof}\n\n\\subsection{The profinite integers \\(\\hat \\Z\\)}\n\n\\begin{proposition}\n  \\(\\hat \\Z\\) is abelian and torsion-free.\n\\end{proposition}\n\n\\begin{proposition}\n  \\(\\hat \\Z\\) has zero divisors.\n\\end{proposition}\n\nBoth follow from\n\n\\begin{theorem}[Chinese remainder theorem]\n  There is an isomorphism of topological rings\n  \\[\n    \\hat \\Z \\cong \\prod_{p \\textup{ prime}} \\Z_p.\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  There is a continuous homomorphism \\(\\hat \\Z \\to \\Z_p\\) for every \\(p\\) as\n  \\[\n    \\begin{tikzcd}\n      \\hat \\Z \\ar[d] \\ar[dr] \\\\\n      \\Z/p^n \\ar[r] & \\Z/p^m\n    \\end{tikzcd}\n  \\]\n  We thus have a continuous homomorphism \\(f: \\hat \\Z \\to \\prod_{p \\text{ prime}} \\Z_p\\).\n\n  \\(f\\) is surjective if and only if \\(\\im f \\subseteq \\prod \\Z_p\\) is dense (the image is compact, hence closed), if and only if \\(\\im f\\) intersects all basic open sets of \\(\\prod \\Z_p\\) non-trivially. A basic open set has the form \\(\\phi^{-1}(x_1, \\dots, x_r)\\) where \\(\\phi: \\prod \\Z_p \\to \\Z/p_1^{n_1} \\times \\dots \\times \\Z/p_r^{n_r}\\). Now invoke the classical Chinese remainder theorem: let \\(m = p_1^{n_1} \\cdots p_r^{n_r}\\), then we have a commutative diagram\n  \\[\n    \\begin{tikzcd}\n      \\hat \\Z \\ar[r, \"f\"] \\ar[d, two heads] & \\prod \\Z_p \\ar[d, \"\\phi\"] \\\\\n      \\Z/m \\ar[r, \"\\cong\"] & \\Z/p_1^{n_1} \\times \\cdots \\times \\Z/p_r^{n_r}\n    \\end{tikzcd}\n  \\]\n  As \\((x_1, \\dots, x_r) \\in \\im (\\phi \\compose f)\\), have \\(\\im f \\cap \\phi^{-1}(x_1, \\dots, x_r) \\ne \\emptyset\\).\n  \n  Now suppose \\(g \\in \\hat \\Z \\setminus \\{0\\}\\). Then there exists \\(m\\) such that the image of \\(g\\) under \\(\\hat \\Z \\to \\Z/m\\) is nonzero; chasing the diagram and using the bottom isomorphism, \\(\\phi(f(g)) \\ne 0\\), so \\(f(g) \\ne 0\\). Thus \\(f\\) is injective.\n\\end{proof}\n\n\\subsection{Profinite matrix group}\n\nIf \\(R\\) is a commutative ring with 1 then there is a matrix ring \\(\\Mat_n(R)\\) of \\(n \\times n\\) matrices whose entries are in \\(R\\). In particular\n\\begin{align*}\n  \\Mat_n(\\Z_p) &\\cong \\varprojlim \\Mat_n(\\Z/p^m) \\\\\n  \\Mat_n(\\hat \\Z) &\\cong \\varprojlim \\Mat_n(\\Z/m)\n\\end{align*}\nby a similar argument to the above.\n\nDefine\n\\begin{align*}\n  \\SL_n(R) &= \\{A \\in \\Mat_n(R): \\det A = 1\\} \\\\\n  \\GL_n(R) &= \\{A \\in \\Mat_n(R): \\det A \\in R^\\times\\}\n\\end{align*}\nAs \\(\\det: \\Mat_n(\\Z_p) \\to \\Z_p\\) is a polynomial and so continuous, \\(\\SL_n(\\Z_p) \\subseteq \\Mat_n(\\Z_p)\\) is a closed subset and is a group under multiplication. We will show in the example sheet that \\(\\Z_p^\\times\\) and \\(\\hat \\Z^\\times\\) are closed subsets of \\(\\Z_p\\) and \\(\\hat \\Z\\), and in fact they are isomorphic to \\(\\varprojlim (\\Z/p^m)^\\times\\) and \\(\\varprojlim (\\Z/m)^\\times\\). We have\n\\[\n  \\SL_n(\\Z_p) = \\varprojlim \\SL_n(\\Z/p^m)\n\\]\netc. A version of the Chinese remainder theorem also holds.
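\n\nThe classical Chinese remainder theorem at the bottom of the diagram is easily demonstrated; a brute-force Python sketch (our own illustration):\n\\begin{verbatim}\n# The square Z/m ~ Z/p1^n1 x Z/p2^n2 for coprime prime powers:\n# an element of Z/675 is pinned down by its components mod 27 and mod 25.\nmoduli = [27, 25]   # 3^3 and 5^2\nm = 27 * 25         # 675\nx = 123\ncomponents = [x % q for q in moduli]\ny = next(z for z in range(m)\n         if all(z % q == c for q, c in zip(moduli, components)))\nassert y == x % m   # the solution is unique mod m\n\\end{verbatim}\n\nProblem: consider the inclusion \\(\\SL_n(\\Z) \\subseteq \\SL_n(\\hat \\Z)\\). What does this inclusion look like? For example, is this inclusion dense? (the answer is yes, see example sheet 2). 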
We know from general theory this holds if and only if \\(\\SL_n(\\Z) \\to \\SL_n(\\Z/m)\\) is surjective. But this is not obvious at all. For example how can we find an element that is mapped to \\(\n\\begin{psmallmatrix}\n  7 & 9 \\\\\n  4 & 9\n\\end{psmallmatrix}\n\\in \\SL_2(\\Z/13)\\)? (One answer is \\(\n\\begin{psmallmatrix}\n  7 & 9 \\\\\n  17 & 22\n\\end{psmallmatrix}\n\\), which has determinant \\(154 - 153 = 1\\) and the right reduction mod 13, but it takes some searching.)\n\nAnother question: do we have \\(\\SL_n(\\hat \\Z) = \\widehat{\\SL_n(\\Z)}\\), i.e.\\ does \\(\\SL_n(\\Z)\\) have no finite quotients beyond those arising from the \\(\\SL_n(\\Z/m)\\)? The answer is no for \\(n = 2\\) (example sheet 2), and yes for \\(n \\geq 3\\) (a hard theorem of Bass-Lazard-Serre).\n\n\\subsection{Subgroups, quotients and homomorphisms}\n\nA reminder that we are working in the category of topological groups so subgroups are closed and homomorphisms are continuous (non-closed subgroups can be pretty wild: \\(\\hat \\Z \\cong \\prod \\Z_p \\supseteq \\prod \\Z\\)).\n\n\\begin{proposition}\n  A closed subgroup of a profinite group is a profinite group.\n\\end{proposition}\n\n\\begin{proposition}\n  Let \\(G = \\varprojlim G_j\\) be a profinite group of a surjective inverse system, \\(H \\leq G\\) a closed subgroup. Let \\(H_j = p_j(H)\\). Then \\(H\\) has finite index (i.e.\\ open) if and only if \\([G_j: H_j]\\) is constant for \\(j \\in I\\) for some cofinal subsystem \\(I \\subseteq J\\), in which case \\([G: H] = [G_j: H_j]\\) for \\(j \\in I\\).\n\\end{proposition}\n\n\\begin{proof}\n  Exercise.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a profinite group and \\(N \\normal_c G\\). Then \\(G/N\\) equipped with the quotient topology is a profinite group.\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(G = \\varprojlim G_j\\) be a surjective inverse system. Define \\(N_j = p_j(N)\\). Then \\(N_j \\normal G_j\\) and define \\(Q_j = G_j/N_j\\). There exist \\(\\psi_{ij}\\) such that the following diagram commutes:\n  \\[\n    \\begin{tikzcd}\n      G_i \\ar[r, \"\\phi_{ij}\"] \\ar[d] & G_j \\ar[d] \\\\\n      G_i/N_i \\ar[r, \"\\psi_{ij}\"] & G_j/N_j\n    \\end{tikzcd}\n  \\]\n  Check that \\((Q_j, \\psi_{ij})\\) is an inverse system. Let \\(Q = \\varprojlim Q_j\\). There exists a continuous map \\(\\prod G_j \\to \\prod Q_j\\) restricting to \\(f: G \\to Q\\). \\((g_j) \\in \\ker f\\) if and only if \\(g_j \\in \\ker (G_j \\to Q_j) = N_j\\) for all \\(j\\), if and only if \\(g \\in N\\). Thus \\(\\ker f = N\\). By the first isomorphism theorem for groups, there exists a group isomorphism \\(\\overline f: G/N \\to Q\\) making the diagram commute\n  \\[\n    \\begin{tikzcd}\n      G \\ar[r, \"f\"] \\ar[d] & Q \\\\\n      G/N \\ar[ur, \"\\overline f\"', dashed]\n    \\end{tikzcd}\n  \\]\n  \\(\\overline f\\) is continuous by definition of quotient topology. 
\\(\\overline f\\) is a homeomorphism because \\(G/N\\) is compact and \\(Q\\) is Hausdorff.\n\\end{proof}\n\n\\begin{theorem}[first isomorphism theorem for profinite groups]\n  If \\(G\\) and \\(Q\\) are profinite groups, \\(f: G \\to Q\\) is a continuous surjective homomorphism, then there exists an isomorphism of topological groups \\(\\overline f: G/\\ker f \\to Q\\) making the following diagram commute\n  \\[\n    \\begin{tikzcd}\n      G \\ar[r, \"f\"] \\ar[d] & Q \\\\\n      G/\\ker f \\ar[ur, \"\\overline f\"', dashed]\n    \\end{tikzcd}\n  \\]\n\\end{theorem}\n\n\\begin{corollary}\n  A (closed) quotient of a profinite group is a profinite group when given the quotient topology.\n\\end{corollary}\n\n\\begin{definition}[morphism of inverse system]\\index{inverse system!morphism}\n  Let \\((G_j)\\) and \\((H_j)\\) be inverse systems of finite groups, indexed over the same poset \\(J\\). A \\emph{morphism of inverse systems} is a family of group homomorphisms \\(f_j: G_j \\to H_j\\) such that for all \\(i \\leq j\\), the following diagram commutes\n  \\[\n    \\begin{tikzcd}\n      G_i \\ar[r, \"f_i\"] \\ar[d, \"\\varphi_{ij}^G\"] & H_i \\ar[d, \"\\varphi_{ij}^H\"] \\\\\n      G_j \\ar[r, \"f_j\"] & H_j\n    \\end{tikzcd}\n  \\]\n\\end{definition}\n\n\\begin{proposition}\n  Given a morphism of inverse systems as above, there exists a unique continuous homomorphism \\(f: G \\to H\\) such that\n  \\[\n    \\begin{tikzcd}\n      G \\ar[r, \"f\"] \\ar[d, \"p_j^G\"] & H \\ar[d, \"p_j^H\"] \\\\\n      G_j \\ar[r, \"f_j\"] & H_j\n    \\end{tikzcd}\n  \\]\n  commutes for all \\(j\\).\n\\end{proposition}\n\n\\begin{proof}\n  Exercise.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\((G_j)_{j \\in J}, (H_i)_{i \\in I}\\) be inverse systems of finite groups, \\(G = \\varprojlim G_j, H = \\varprojlim H_j\\). Let \\(f: G \\to H\\) be a continuous homomorphism. Assume \\(J\\) and \\(I\\) are countable. Then there are cofinal subsystems \\(J' \\subseteq J, I' \\subseteq I\\) with an isomorphism \\(\\alpha: J' \\to I'\\) and a morphism \\(f_{j'}: G_{j'} \\to H_{\\alpha(j')}\\) of inverse systems which induces \\(f\\).\n\\end{proposition}\n\n\\begin{proof}\n  We may assume \\(J\\) and \\(I\\) are linearly ordered, i.e.\\ isomorphic to \\(\\N^{\\text{op}}\\). Also we may assume \\((G_j)_{j \\in J}\\) is surjective. Set \\(I' = I\\). Construct an increasing sequence \\(k_n\\) of natural numbers inductively as follows: the composition \\(G \\to H \\to H_n\\) is continuous so the kernel is open, hence contains a basic open subgroup \\(\\ker p_{k_n}^G\\) of \\(G\\). As \\(\\ker p_{k_n}^G \\subseteq \\ker(p_n^H \\compose f)\\), there exists an induced map \\(f_n: G_{k_n} \\to H_n\\). Increase \\(k_n\\) if necessary so that \\(k_n > k_{n - 1}\\). Set \\(J' = \\{k_n\\}\\).\n\\end{proof}\n\n\\subsection{Generators of profinite groups}\n\n\\begin{definition}[topological generating set]\\index{topological generating set}\n  Let \\(G\\) be a topological group. A subset \\(S \\subseteq G\\) is a \\emph{(topological) generating set} for \\(G\\) if \\(\\langle S\\rangle\\) is dense in \\(G\\). \\(G\\) is \\emph{(topologically) finitely generated} (tfg) if it has a finite topological generating set.\n\\end{definition}
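\n\n\\begin{eg}\n  For instance, \\(\\hat \\Z\\) is topologically generated by the element \\(1\\) (the image of \\(1 \\in \\Z\\)): \\(\\langle 1 \\rangle\\) is the dense subgroup \\(i(\\Z)\\). As an abstract group \\(\\hat \\Z\\) is uncountable, so it is far from finitely generated in the abstract sense.\n\\end{eg}\n\n\\begin{definition}\n  Let \\(G\\) be a topological group and \\(S \\subseteq G\\). The \\emph{closed subgroup generated by \\(S\\)} is the smallest closed subgroup of \\(G\\) which contains \\(S\\). 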
Equivalently, it is the intersection of all closed subgroups which contain \\(S\\).\n\\end{definition}\n\n\\begin{ex}\n  The closed subgroup generated by \\(S\\) is \\(\\overline{\\langle S\\rangle}\\). More generally, the closure of a subgroup is a subgroup.\n\\end{ex}\n\n\\begin{proposition}\n  If \\(G\\) is a profinite group, \\(U \\leq_o G\\) then \\(G\\) is tfg if and only if \\(U\\) is tfg. \n\\end{proposition}\n\n\\begin{proof}\n  If \\(G = \\overline{\\langle S\\rangle}\\) for some finite \\(S\\) then \\(U \\cap \\langle S\\rangle\\) is finitely generated, being a finite index subgroup of the finitely generated group \\(\\langle S\\rangle\\). As \\(U\\) is open, \\(U \\cap \\langle S\\rangle\\) is dense in \\(U\\).\n\n  Conversely if \\(U = \\overline{\\langle S\\rangle}\\) and \\(T\\) is a set of coset representatives for \\(U\\) in \\(G\\), then \\(S \\cup T\\) is a finite generating set for \\(G\\).\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G = \\varprojlim G_j\\) be an inverse system of finite groups. Then \\(S \\subseteq G\\) is a topological generating set if and only if \\(p_j(S)\\) generates \\(p_j(G)\\) for all \\(j\\).\n\\end{proposition}\n\n\\begin{proof}\n  \\(\\langle S\\rangle\\) is dense if and only if \\(p_j(G) = p_j(\\langle S\\rangle) = \\langle p_j(S)\\rangle\\) for all \\(j\\).\n\\end{proof}\n\n\\begin{proposition}\n  \\(\\alpha \\in \\Z_p\\) is a generator if and only if \\(\\alpha \\ne 0 \\pmod p\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(\\alpha = (a_n)\\). \\(\\alpha\\) is a generator if and only if \\(a_n\\) generates \\(\\Z/p^n\\) for all \\(n\\), if and only if \\(a_n\\) is coprime to \\(p^n\\), if and only if \\(a_n\\) is coprime to \\(p\\).\n\\end{proof}\n\nIf \\(\\alpha\\) is a generator then consider multiplication by \\(\\alpha\\), \\(f_\\alpha: \\Z_p \\to \\Z_p\\). The image of \\(f_\\alpha\\) contains \\(\\langle\\alpha\\rangle\\), and is compact hence closed, so \\(\\im f_\\alpha \\supseteq \\overline{\\langle \\alpha\\rangle} = \\Z_p\\) and there exists \\(\\beta\\) such that \\(\\alpha\\beta = 1\\). The converse is also true so we can identify the set of generators of \\(\\Z_p\\) with \\(\\Z_p^\\times\\).\n\n\\(\\Z_p^\\times\\) is an open subset of \\(\\Z_p\\) and for all \\(n\\) the natural map \\(\\Z_p^\\times \\to (\\Z/p^n)^\\times\\) is surjective. Thus many elements of \\(\\Z\\) are invertible in \\(\\Z_p\\). For example to compute the inverse of \\(2 \\in \\Z_3^\\times\\), we can compute its inverse in \\((\\Z/3^n)^\\times\\) for increasingly large \\(n\\) to get\n\\[\n  2^{-1} = (\\dots, 41, 14, 5, 2) \\in \\prod \\Z/3^n\n\\]\nor equivalently \\(2^{-1} = 2 + 3 + 3^2 + 3^3 + \\cdots \\in \\Z_3\\).\n\n\\begin{proposition}\n  \\(\\alpha \\in \\hat \\Z\\) is a generator if and only if \\(\\alpha \\ne 0 \\pmod p\\) for all \\(p\\).\n\\end{proposition}\n\nBy the Chinese remainder theorem this set is \\(\\prod \\Z_p^\\times\\), and thus can be identified with \\(\\hat \\Z^\\times\\). It is a closed subset of \\(\\hat \\Z\\) and just as in the \\(p\\)-adic integer case, \\(\\hat \\Z^\\times \\to (\\Z/n)^\\times\\) is surjective for all \\(n\\).
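\n\nThe computation of \\(2^{-1} \\in \\Z_3\\) above is easy to reproduce; a short Python check (our own illustration; \\texttt{pow} with a negative exponent needs Python 3.8+):\n\\begin{verbatim}\n# Invert 2 in Z/3^n for increasing n and watch the compatible sequence.\ninverses = [pow(2, -1, 3 ** n) for n in range(1, 5)]\nassert inverses == [2, 5, 14, 41]   # matches (..., 41, 14, 5, 2)\n# compatibility with the transition maps (reduction mod 3^n):\nassert all(inverses[n] % 3 ** n == inverses[n - 1] for n in range(1, 4))\n\\end{verbatim}\n\n\\begin{theorem}[Gasch\u00fctz's lemma for finite groups]\\index{Gasch\u00fctz's lemma}\n  Let \\(f: G \\to H\\) be a surjective homomorphism of finite groups. Assume that \\(G\\) has a generating set of size \\(d\\). 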
Then for any generating set \\(\\{z_1, \\dots, z_d\\}\\) of \\(H\\), there exists a generating set \\(\\{x_1, \\dots, x_d\\}\\) of \\(G\\) such that \\(f(x_i) = z_i\\).\n\\end{theorem}\n\n\\begin{proof}\n  We formulate the theorem in terms of \\emph{generating vectors}: \\((x_1, \\dots, x_d) \\in G^d\\) whose underlying set generates \\(G\\). \\(f\\) extends to \\(f^d: G^d \\to H^d\\). Let \\(y\\) be a generating vector for \\(H\\) and let \\(N_G(y)\\) be the cardinality of the set of generating vectors \\(x\\) for \\(G\\) such that \\(f^d(x) = y\\). We show \\(N_G(y)\\) is independent of \\(y\\); the result then follows, since \\(N_G(f^d(x_0)) > 0\\) for any generating vector \\(x_0\\) of \\(G\\).\n\n  Induction on \\(|G|\\). Let \\(y\\) be a generating vector for \\(H\\). Let \\(\\mathcal C\\) be the set of \\(\\leq d\\)-generated proper subgroups of \\(G\\). Then for all \\(x\\) such that \\(f^d(x) = y\\), either \\(\\langle x \\rangle = G\\) or \\(\\langle x\\rangle \\in \\mathcal C\\). Therefore\n  \\[\n    |\\ker f^d| = |\\{x \\in G^d: f^d(x) = y\\}| = N_G(y) + \\sum_{C \\in \\mathcal C} N_C(y).\n  \\]\n  As \\(|\\ker f^d|\\) is manifestly independent of \\(y\\) and each \\(N_C(y)\\) is independent of \\(y\\) by the induction hypothesis, \\(N_G(y)\\) is independent of \\(y\\).\n\\end{proof}\n\n\\begin{theorem}[Gasch\u00fctz's lemma for profinite groups]\\index{Gasch\u00fctz's lemma}\n  Let \\(f: G \\to H\\) be a continuous surjective homomorphism of profinite groups. Assume \\(G\\) has a topological generating set of size \\(d\\). Then for every generating set \\(\\{z_1, \\dots, z_d\\}\\) of \\(H\\), there exists a generating set \\(\\{x_1, \\dots, x_d\\}\\) of \\(G\\) such that \\(f(x_i) = z_i\\).\n\\end{theorem}\n\n\\begin{proof}\n  wlog \\(G = \\varprojlim_{j \\in J} G_j, H = \\varprojlim_{j \\in J} H_j\\) and \\(f\\) is induced by a morphism of inverse systems \\(f_j: G_j \\to H_j\\) (see lemma below) and the inverse systems are surjective. Let\n  \\[\n    X_j = \\{x_j \\text{ generating vector for } G_j: f_j(x_j) = p_j^H(z)\\}\n  \\]\n  which is nonempty by the finite case of the lemma. \\((X_j)_{j \\in J}\\) forms an inverse system of nonempty finite sets, so\n  \\[\n    \\varprojlim X_j = \\{x \\in G^d: x \\text{ a generating vector for } G,\\, f^d(x) = z\\}\n  \\]\n  is nonempty.\n\\end{proof}
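\n\nThe counting argument can be seen in a tiny example; the following Python sketch (our own illustration) checks that \\(N_G(y)\\) is constant for the surjection \\(\\Z/9 \\to \\Z/3\\) with \\(d = 1\\):\n\\begin{verbatim}\nfrom math import gcd\n\n# G = Z/9 -> H = Z/3, d = 1: a generating vector is just a generator.\nG, H = 9, 3\ndef gens(m):\n    return [x for x in range(m) if gcd(x, m) == 1]\nfor y in gens(H):\n    lifts = [x for x in gens(G) if x % H == y]\n    assert len(lifts) == 3   # N_G(y) = 3, independent of y\n\\end{verbatim}\n\n\\begin{proposition}\n  Let \\(G\\) be a profinite group and \\(\\mathcal U\\) be a collection of open normal subgroups of \\(G\\) which form a neighbourhood basis at \\(1\\). 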
Then \\(G = \\varprojlim_{U \\in \\mathcal U} G/U\\).\n\\end{proposition}\n\n\\begin{proof}\n  There exists a homomorphism \\(f: G \\to \\varprojlim_{U \\in \\mathcal U} G/U\\), which is surjective since its image is compact, hence closed, and dense (\\(G\\) surjects onto each \\(G/U\\)); and injective because \\(\\mathcal U\\) is a neighbourhood basis: for all \\(g \\in G \\setminus \\{1\\}\\) there exists \\(V \\normal_o G\\) such that \\(g \\notin V\\), and there exists \\(U \\in \\mathcal U\\) such that \\(U \\subseteq V\\), so \\(g \\notin U\\).\n\\end{proof}\n\n\\begin{eg}\n  If \\(G\\) is tfg, take \\(\\mathcal U = \\{U_n\\}\\) where\n  \\[\n    U_n = \\bigcap \\{\\text{normal subgroups of \\(G\\) of index \\(\\leq n\\)}\\}\n  \\]\n  which is open since the collection is finite (a tfg profinite group has only finitely many open normal subgroups of index \\(\\leq n\\)).\n\\end{eg}\n\nAs a corollary\n\n\\begin{lemma}\n  If \\(G\\) is a tfg profinite group then \\(G = \\varprojlim_{j \\in J} G_j\\) where \\(J\\) is countable.\n\\end{lemma}\n\n\\section{Profinite completion}\n\n\\subsection{Residual finiteness}\n\nLet \\(\\Gamma\\) be an abstract group (usually finitely generated), \\(\\hat \\Gamma = \\varprojlim_{N \\normal_f \\Gamma} \\Gamma/N\\) its profinite completion, and \\(\\iota = \\iota_\\Gamma: \\Gamma \\to \\hat \\Gamma\\) the canonical map.\n\nWe have seen for \\(\\Gamma = \\Z\\) this canonical map is an injection. This injection is sufficiently important that it deserves its own name.\n\n\\begin{definition}[residually finite]\\index{residually finite}\n  Let \\(\\Gamma\\) be an abstract group. \\(\\Gamma\\) is called \\emph{residually finite} if for every \\(\\gamma \\in \\Gamma \\setminus \\{1\\}\\) there exists \\(N \\normal_f \\Gamma\\) such that \\(\\gamma \\notin N\\). Equivalently, \\(\\gamma\\) is not in the kernel of \\(\\Gamma \\to \\Gamma/N\\).\n\\end{definition}\n\n\\begin{proposition}\n  \\(\\Gamma\\) is residually finite if and only if \\(\\iota_\\Gamma: \\Gamma \\to \\hat \\Gamma\\) is injective.\n\\end{proposition}\n\n\\begin{proposition}\n  A subgroup of a residually finite group is residually finite.\n\\end{proposition}\n\n\\begin{proof}\n  Suppose \\(\\Delta \\leq \\Gamma\\). If \\(\\gamma \\in \\Delta \\setminus \\{1\\}\\), there exists \\(N \\normal_f \\Gamma\\) such that \\(\\gamma \\notin N\\). Then \\(N \\cap \\Delta \\normal_f \\Delta\\) and \\(\\gamma \\notin N \\cap \\Delta\\).\n\\end{proof}\n\nA partial converse holds, provided the subgroup has finite index:\n\n\\begin{proposition}\n  Let \\(\\Gamma\\) be an abstract group, \\(\\Delta \\leq \\Gamma\\) of finite index. Then if \\(\\Delta\\) is residually finite so is \\(\\Gamma\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(\\gamma \\in \\Gamma \\setminus \\{1\\}\\). If \\(\\gamma \\notin \\Delta\\), take\n  \\[\n    N = \\mathrm{Core}_\\Gamma(\\Delta) = \\bigcap_{g \\in \\Gamma} g\\Delta g^{-1} \\normal_f \\Gamma\n  \\]\n  and \\(\\gamma \\notin N\\). If \\(\\gamma \\in \\Delta\\) then there exists \\(M \\normal_f \\Delta\\) such that \\(\\gamma \\notin M\\). Then \\(M \\leq_f \\Gamma\\) so \\(N = \\mathrm{Core}_\\Gamma(M) \\normal_f \\Gamma\\) and \\(\\gamma \\notin N\\).\n\\end{proof}\n\n\\begin{proposition}\n  Direct product of residually finite groups is residually finite.\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 2.\n\\end{proof}\n\n\\begin{proposition}\n  All finitely generated abelian groups are residually finite.\n\\end{proposition}
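\n\nFor \\(\\Z\\) this is completely effective; a tiny Python sketch (our own illustration):\n\\begin{verbatim}\n# Residual finiteness of Z, effectively: for g != 0 the quotient\n# Z -> Z/(|g| + 1) does not kill g, since 0 < |g| < |g| + 1.\ndef witness(g):\n    assert g != 0\n    return abs(g) + 1\n\nfor g in (1, -5, 360):\n    assert g % witness(g) != 0\n\\end{verbatim}\n\n\\begin{remark}\n  Finite generation is necessary here. 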
For example \\(\\Q\\) has no nontrivial finite quotient: \\(\\Q\\) is divisible, quotients of divisible groups are divisible, and a nontrivial finite group cannot be divisible.\n\\end{remark}\n\nA source of residually finite groups is matrix groups.\n\n\\begin{proposition}\n  \\(\\SL_N(\\Z)\\) and \\(\\GL_N(\\Z)\\) are residually finite for all \\(N\\).\n\\end{proposition}\n\n\\begin{proof}\n  If \\(A \\in \\GL_N(\\Z) \\setminus \\{I\\}\\), take a prime \\(p\\) greater than the absolute value of every entry of \\(A - I\\). Then \\(A \\ne I \\bmod p\\), so \\(A\\) is not in the kernel of \\(\\GL_N(\\Z) \\to \\GL_N(\\Z/p)\\).\n\\end{proof}\n\n\\begin{proposition}[Mal'cev's theorem, non-examinable]\n  Let \\(\\Gamma\\) be a finitely generated subgroup of \\(\\GL_N(K)\\) (resp. \\(\\SL_N, \\PSL_N\\)) where \\(K\\) is a field. Then \\(\\Gamma\\) is residually finite.\n\\end{proposition}\n\nAs a corollary, fundamental groups of surfaces are residually finite (as they are contained in \\(\\PSL_2(\\R)\\) by hyperbolic geometry).\n\n\\begin{theorem}\n  Let \\(G\\) and \\(H\\) be tfg profinite groups. Then \\(G \\cong H\\) if and only if the sets of isomorphism types of continuous finite quotients of \\(G\\) and \\(H\\) are equal.\n\\end{theorem}\n\n\\begin{proof}\n  Let \\(G_n\\) be the intersection of all open normal subgroups of \\(G\\) of index \\(\\leq n\\). Then \\(G_n \\normal_o G\\) and \\(G = \\varprojlim G/G_n\\). Define \\(H_n\\) similarly. Now \\(G/G_n\\) is a continuous finite quotient of \\(G\\) so it is also a continuous finite quotient of \\(H\\). Thus there exists \\(V \\normal_o H\\) such that \\(G/G_n \\cong H/V\\). By definition the intersection of all normal subgroups of \\(G/G_n\\) of index \\(\\leq n\\) is trivial, so upon taking their preimages under \\(H \\to H/V\\) we have \\(H_n \\subseteq V\\). Then\n  \\[\n    |G/G_n| = |H/V| \\leq |H/H_n|.\n  \\]\n  By symmetry we have equality so \\(H_n = V\\) and thus \\(G/G_n \\cong H/H_n\\).\n\n  To show there exists an isomorphism of inverse systems, let \\(S_n\\) be the set of isomorphisms \\(G/G_n \\to H/H_n\\), which is nonempty. If \\(f_n \\in S_n\\) then it takes normal subgroups of \\(G/G_n\\) of index \\(\\leq n - 1\\) to those in \\(H/H_n\\), so defines an isomorphism \\(G_{n - 1}/G_n \\to H_{n - 1}/H_n\\). Thus \\(f_n\\) descends to a map \\(\\phi_{n, n - 1}(f_n): G/G_{n - 1} \\to H/H_{n - 1}\\) which makes the following diagram commute\n  \\[\n    \\begin{tikzcd}\n      G/G_n \\ar[r, \"f_n\"] \\ar[d] & H/H_n \\ar[d] \\\\\n      G/G_{n - 1} \\ar[r, \"\\phi_{n, n - 1}(f_n)\"] & H/H_{n - 1}\n    \\end{tikzcd}\n  \\]\n  \\((S_n, \\phi_{n, n - 1})\\) is an inverse system of nonempty finite sets so \\(\\varprojlim S_n\\) is nonempty and its elements define isomorphisms of inverse systems.\n\\end{proof}\n\nAs a corollary\n\n\\begin{theorem}\n  Let \\(\\Gamma, \\Delta\\) be finitely generated abstract groups. Then \\(\\hat \\Gamma \\cong \\hat \\Delta\\) if and only if the sets of isomorphism types of finite quotients of \\(\\Gamma\\) and \\(\\Delta\\) are the same.\n\\end{theorem}\n\nThe following lemma characterises open subgroups of the profinite completion\n\n% TODO: consider simplifying notations in the proof\n% merge the proof with basic correspondence\n% check if finite generation is necessary\n\n\\begin{lemma}\n  \\label{lem:subgroup correspondence for profinite completion}\n  Let \\(\\Gamma\\) be a finitely generated abstract group. Then the open subgroups of \\(\\hat \\Gamma\\) are precisely the subgroups \\(\\overline{\\iota_\\Gamma(\\Delta)}\\) for \\(\\Delta \\leq_f \\Gamma\\).\n\\end{lemma}\n\n\\begin{proof}\n  If \\(\\Delta \\leq_f \\Gamma\\) then \\(\\overline{\\iota_\\Gamma(\\Delta)}\\) is closed. 
It also has finite index so open: suppose \\(\\Gamma = \\bigcup_{i = 1}^r g_i \\Delta\\) is a finite (disjoint) union, then\n  \\[\n    \\hat \\Gamma = \\overline{\\iota_\\Gamma(\\Gamma)}\n    = \\overline{\\bigcup \\iota_\\Gamma(g_i \\Delta)}\n    = \\bigcup \\iota_\\Gamma(g_i) \\overline{\\iota_\\Gamma(\\Delta)}\n  \\]\n  so \\(\\overline{\\iota_\\Gamma(\\Delta)}\\) has finite index in \\(\\hat \\Gamma\\).\n\n  Conversely, let \\(U \\leq_o \\hat \\Gamma\\), then \\(U \\cap \\iota_\\Gamma(\\Gamma)\\) is dense in \\(U\\). Let \\(\\Delta = \\iota_\\Gamma^{-1}(U) = \\iota_\\Gamma^{-1}(U \\cap \\iota_\\Gamma(\\Gamma))\\). Then \\(\\Delta \\leq_f \\Gamma\\) and \\(\\iota_\\Gamma(\\Delta) = U \\cap \\iota_\\Gamma(\\Gamma)\\), so \\(\\overline{\\iota_\\Gamma(\\Delta)} = U\\).\n\\end{proof}\n\n\nQuestion: how much can we learn about \\(\\Gamma\\) from \\(\\hat \\Gamma\\), if \\(\\Gamma\\) is residually finite?\n\n\\begin{proposition}\n  If \\(\\hat \\Gamma \\cong \\hat \\Delta\\), \\(\\Delta\\) is abelian and \\(\\Gamma\\) is residually finite then \\(\\Gamma\\) is abelian.\n\\end{proposition}\n\n\\begin{proof}\n  If \\(\\Delta\\) is abelian, all its quotients are abelian, so \\(\\hat \\Delta\\) is abelian. Then \\(\\iota_\\Gamma: \\Gamma \\to \\hat \\Gamma = \\hat \\Delta\\) is injective, so \\(\\Gamma\\) embeds in an abelian group and is abelian.\n\\end{proof}\n\n\\begin{proposition}\n  If \\(G\\) and \\(H\\) are finitely generated and \\(\\hat G \\cong \\hat H\\) then \\(G_{\\mathrm{ab}} \\cong H_{\\mathrm{ab}}\\). In particular if \\(G\\) is abelian and \\(H\\) is residually finite then \\(G \\cong H\\).\n\\end{proposition}\n\n\\begin{proof}\n  If \\(G\\) and \\(H\\) have the same finite quotients then they have the same finite abelian quotients, which are precisely the finite quotients of the abelianisations. Thus it suffices to show we can recover a finitely generated abelian group \\(G \\cong \\Z^r \\times T\\) from its finite quotients. Have\n  \\[\n    r = \\max\\{k: G \\surj (\\Z/n)^k \\text{ for all } n\\}\n  \\]\n  and \\(T\\) the largest finite abelian group such that \\(G \\surj (\\Z/n)^r \\times T\\).\n\\end{proof}\n\n\\begin{eg}[Baumslag]\n  One does not have to go far from abelian groups to show this fails in general. Let \\(\\phi: C_{25} \\to C_{25}\\) be the automorphism \\(t \\mapsto t^6\\) where \\(t\\) is a fixed generator. \\(\\phi\\) has order \\(5\\). Form semidirect products \\(G_1 = C_{25} \\rtimes_\\phi \\Z, G_2 = C_{25} \\rtimes_{\\phi^2} \\Z\\). Write \\(\\Z = \\langle s\\rangle\\) multiplicatively. Claim \\(G_1 \\ncong G_2\\) but \\(\\hat G_1 \\cong \\hat G_2\\).\n\n  \\begin{proof}\n    Suppose \\(\\psi: G_2 \\to G_1\\) is an isomorphism. Then \\(\\psi(C_{25}) = C_{25}\\) so \\(\\psi(t, 1) = (t^a, 1)\\) where \\(t^a\\) generates \\(C_{25}\\). Let \\(\\psi(1, s) = (t^b, s^c)\\). As \\(s^c\\) generates \\(\\Z\\), \\(c = \\pm 1\\). Now compute\n    \\[\n      \\psi((1, s) \\bullet_2 (t, 1) \\bullet_2 (1, s^{-1}))\n      = \\psi(\\phi^2(t), 1)\n      = (\\phi^2(t^a), 1).\n    \\]\n    On the other hand\n    \\begin{align*}\n      (t^b, s^c) \\bullet_1 (t^a, 1) \\bullet_1 (\\phi^{-c}(t^{-b}), s^{-c})\n      &= (t^b \\phi^c(t^a), s^c) \\bullet_1 (\\phi^{-c} (t^{-b}), s^{-c}) \\\\\n      &= (t^b \\phi^c(t^a) \\phi^c \\phi^{-c}(t^{-b}), 1) \\\\\n      &= (\\phi^c(t^a), 1)\n    \\end{align*}\n    so \\(\\phi^2(t^a) = \\phi^c(t^a)\\), so \\(2 = c \\pmod 5\\), absurd.\n\n    To show the profinite completions are isomorphic, note \\(\\hat G_1 = C_{25} \\rtimes_\\phi \\hat \\Z, \\hat G_2 = C_{25} \\rtimes_{\\phi^2} \\hat \\Z\\): let \\(G_1 \\to Q\\) be a finite quotient. 
If the composition \\(\\Z \\to G_1 \\to Q\\) has image of order \\(m\\), then the quotient map factors through the finite quotient \\(C_{25} \\rtimes_\\phi (\\Z/5m)\\), so the quotients \\(C_{25} \\rtimes \\Z/5m\\) are cofinal in the finite quotients, so\n    \\[\n      \\hat G_1 = \\varprojlim (C_{25} \\rtimes \\Z/5m) = C_{25} \\rtimes \\hat \\Z.\n    \\]\n    Same for \\(G_2\\).\n  \\end{proof}\n\n  It is worth noting that if we try to compute the same expressions in the profinite completions, we can merely conclude that \\(c\\) is a topological generator of \\(\\hat \\Z\\) --- but \\(\\hat \\Z\\) has many generators! In fact, for any \\(c \\in \\hat \\Z^\\times\\) such that \\(c = 2 \\pmod 5\\) we can define \\(\\psi': \\hat G_2 \\to \\hat G_1, (t^a, s^b) \\mapsto (t^a, s^{bc})\\) which is continuous and injective. As \\(c\\) is a generator, it is also surjective.\n\\end{eg}\n\nOpen problem: If \\(G\\) is finitely generated and residually finite and \\(F\\) is a finitely generated free group, does \\(\\hat F \\cong \\hat G\\) imply \\(F \\cong G\\)? Equivalently, does there exist a finitely generated residually finite group \\(G\\) and \\(n \\in \\N\\) such that a finite group \\(Q\\) is a quotient of \\(G\\) if and only if \\(d(Q) \\leq n\\), where \\(d\\) is the minimum number of generators?\n\n\\begin{proposition}\n  If \\(F_1, F_2\\) are both finitely generated free and \\(\\hat F_1 \\cong \\hat F_2\\) then \\(F_1 \\cong F_2\\).\n\\end{proposition}\n\n\\begin{proof}\n  Immediate: compare abelianisations using the proposition above.\n\\end{proof}\n\nWhat about surface groups? They are fundamental groups of genus \\(g\\) orientable surfaces and have presentation\n\\[\n  S_g = \\langle a_1, b_1, \\dots, a_g, b_g| [a_1, b_1] \\cdots [a_g, b_g]\\rangle.\n\\]\nCan we tell them apart from free groups? If \\(\\hat S_g = \\hat F_r\\) then \\(\\Z^{2g} \\cong \\Z^r\\) so \\(r = 2g\\). What next?\n\n\\begin{proposition}[basic correspondence]\n  Let \\(G_1, G_2\\) be finitely generated residually finite groups. Suppose \\(\\hat G_1 \\cong \\hat G_2\\). Then there is a bijection\n  \\[\n    \\psi: \\{H \\leq_f G_1\\} \\to \\{H \\leq_f G_2\\}\n  \\]\n  such that if \\(K \\leq_f H \\leq_f G_1\\) then\n  \\begin{itemize}\n  \\item \\([H: K] = [\\psi(H): \\psi(K)]\\).\n  \\item \\(K \\normal H\\) if and only if \\(\\psi(K) \\normal \\psi(H)\\).\n  \\item If \\(K \\normal H\\) then \\(H/K \\cong \\psi(H)/\\psi(K)\\).\n  \\item \\(\\hat H \\cong \\widehat{\\psi(H)}\\).\n  \\end{itemize}\n\\end{proposition}\n\nEvery finite-sheeted cover of a surface is a surface, so every finite index subgroup of \\(S_g\\) is a surface group and has abelianisation \\(\\Z^{2g'}\\). However \\(F_r\\) has an index \\(2\\) subgroup, which is free of rank \\(2(r - 1) + 1\\) by Nielsen-Schreier. Thus from the correspondence we deduce \\(S_g\\) is not free.\n\nThe proposition itself follows from the correspondence between subgroups of \\(G\\) and \\(\\hat G\\).
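\n\nThe parity arithmetic in this argument is worth making explicit; a tiny Python sketch (our own illustration):\n\\begin{verbatim}\n# Nielsen-Schreier: an index-k subgroup of F_r is free of rank k*(r-1)+1.\n# Finite index subgroups of S_g abelianise to Z^{2g'}, an even rank,\n# while an index-2 subgroup of F_r has odd rank -- a contradiction.\ndef nielsen_schreier_rank(r, k):\n    return k * (r - 1) + 1\n\nfor r in range(2, 10):\n    assert nielsen_schreier_rank(r, 2) % 2 == 1  # odd, never 2g'\n\\end{verbatim}\n\n\\begin{proposition}\n  Let \\(G\\) be finitely generated residually finite and regard \\(G\\) as a subgroup of \\(\\hat G\\). 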
Then there is a bijection\n  \\begin{align*}\n    \\{H \\leq_f G\\} &\\to \\{U \\leq_o \\hat G\\} \\\\\n    H &\\mapsto \\overline H \\\\\n    U \\cap G &\\mapsfrom U\n  \\end{align*}\n  Furthermore if \\(K \\leq_f H \\leq_f G\\) then\n  \\begin{itemize}\n  \\item \\([H : K] = [\\overline H: \\overline K]\\),\n  \\item \\(K \\normal H\\) if and only if \\(\\overline K \\normal \\overline H\\),\n  \\item If \\(K \\normal H\\) then \\(H/K \\cong \\overline H/\\overline K\\),\n  \\item \\(\\overline H \\cong \\hat H\\).\n  \\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\n  Surjectivity is done in \\Cref{lem:subgroup correspondence for profinite completion}. To show injectivity enough to show \\(\\overline H \\cap G = H\\). Consider the action of \\(G\\) on \\(G/H\\), inducing a continuous \\(\\alpha: \\hat G \\to \\mathrm{Sym}(G/H)\\). Then \\(g \\in \\mathrm{Stab}_{\\hat G}(H)\\), an open subgroup. If \\(g \\in G \\setminus H\\) then it is disjoint from \\(H\\), so \\(g \\notin \\overline H\\).\n\n  Let \\(\\{g_i\\}\\) be a set of coset representatives of \\(H\\) in \\(G\\). Then \\(\\hat G = \\bigcup g_i \\overline H\\). This is a disjoint union since \\(\\overline H \\cap G = H\\), so \\(\\{g_i\\}\\) is also a coset representative of \\(\\overline H\\) in \\(\\hat G\\), so there is a natural bijection \\(G/H \\to \\hat G/\\overline H\\).\n\n  If \\(\\overline K \\normal \\overline H\\) then \\(K = \\overline K \\cap G \\normal \\overline H \\cap G = H\\). \\(H\\) normalises \\(K\\) if and only if \\(K\\) lies in the kernel of the action \\(H \\to \\mathrm{Sym}(H/K)\\), so \\(\\overline K \\subseteq \\overline H\\) so \\(\\overline K \\normal \\overline H\\).\n\n  Any finite quotient \\(K\\) of \\(H\\) is the quotient of \\(\\overline H\\) by the open subgroup \\(\\overline K\\). Thus by universal property there is a continuous homomorphism \\(\\overline H \\to \\hat H\\). It is injective since for any \\(1 \\ne h \\in \\overline H\\), exists \\(U\\) open in \\(\\hat G\\) such that \\(h \\notin U\\). Then \\(h\\) is not mapped to identity under the composition \\(\\overline H \\to \\hat H \\to \\overline H/\\overline H \\cap U\\).\n\\end{proof}\n\n\\begin{definition}[Hopf property]\\index{Hopf property}\n  A (topological respectively) group \\(G\\) has the \\emph{Hopf property} (or is Hopfian, or Hopf) if every (continuous respectively) surjective homomorphism \\(G \\to G\\) is an isomorphism.\n\\end{definition}\n\nUsually surjectivity is the easier condition to check --- if we know a set of generators then we only have to show that they lie in the image. On the other hand to show injectivity requires understanding the image of each element.\n\n\\begin{proposition}\n  Let \\(G\\) be a tfg profinite group. Then \\(G\\) has the Hopf property.\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(G_n\\) be the intersection of all open normal subgroups of \\(G\\) of index \\(\\leq n\\). \\(G_n\\) is open and \\(G = \\varprojlim G/G_n\\). If \\(U \\leq_o G\\) with \\([G: U] \\leq n\\) then because \\(f\\) is surjective, \\([G: U] = [G: f^{-1}(U)] \\leq n\\) so \\(G_n \\leq f^{-1}(U)\\). Thus \\(G_n \\subseteq \\bigcap f^{-1}(U) = f^{-1}(G_n)\\) so \\(f(G_n) \\subseteq G_n\\), inducing a surjective map \\(f_n: G/G_n \\to G/G_n\\). Finite groups are Hopfian so \\(f_n\\) is an isomorphism. 
The result thus follows.\n\\end{proof}\n\n\\begin{corollary}\n  If \\(\\Gamma\\) is finitely generated and residually finite then \\(\\Gamma\\) has the Hopf property.\n\\end{corollary}\n\n\\begin{proof}\n  Let \\(f: \\Gamma \\to \\Gamma\\) be a surjection. Then \\(f\\) induces a unique map \\(\\hat f: \\hat \\Gamma \\to \\hat \\Gamma\\). \\(\\hat \\Gamma\\) is tfg and \\(\\hat f\\) is surjective because its image is compact and contains a dense subset. The result then follows from the Hopf property of \\(\\hat \\Gamma\\).\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G, H\\) be (topological respectively) groups, \\(G\\) with Hopf property. If there exists (continuous respectively) surjections \\(f: G \\to H, f': H \\to G\\) then both \\(f\\) and \\(f'\\) are isomorphisms.\n\\end{proposition}\n\n\\begin{proof}\n  \\(f' \\compose f: G \\to G\\) is an isomorphism by Hopf property so \\(f\\) is injective and an isomorphism. Then \\(f' = (f'\\compose f) \\compose f^{-1}\\) is also an isomorphism.\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(\\Gamma\\) be a group and suppose exists a finite quotient \\(Q\\) of \\(\\Gamma\\) such that \\(d(\\Gamma) = d(Q)\\), where \\(d\\) is the minimum number of generators. Then for a free group \\(F\\), \\(\\hat F \\ncong \\hat \\Gamma\\) unless \\(\\Gamma\\) is free, i.e.\\ \\(\\Gamma \\cong F\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(F\\) be a free group with \\(\\hat F \\cong \\hat \\Gamma\\). Then \\(Q\\) is also a quotient of \\(F\\) so \\(d(F) \\geq d(Q) = d(\\Gamma)\\) so exists a surjection \\(f: F \\to \\Gamma\\), which induces a continuous surjection \\(\\hat f: \\hat F \\to \\hat \\Gamma\\). By Hopf property \\(\\hat f\\) is an isomorphism so \\(f\\) is injective.\n\\end{proof}\n\n\\begin{eg}\n  Non-Hopfian groups are not residually finite. For an example of non-Hopfian group, let\n  \\[\n    \\mathrm{BS}(n, m) = \\langle a, t|a^nt^{-1} = a^m \\rangle\n  \\]\n  where \\(n, m\\) coprime. Define\n  \\begin{align*}\n    f: \\mathrm{BS}(n, m) &\\to \\mathrm{BS}(n, m) \\\\\n    y &\\mapsto t \\\\\n    a &\\mapsto a^n\n  \\end{align*}\n  \\(f\\) is a surjection as it surjects the generators. But \\(f\\) is not injective: \\(a\\) does not commute with \\(tat^{-1}\\) (which we quote as a fact). But\n  \\[\n    f([a, tat^{-1}]) = [a^n, ta^nt^{-1}] = [a^n, a^m] = 1\n  \\]\n  so \\(\\mathrm{BS}(n, m)\\) is not Hopfian. residually finite.\n\\end{eg}\n\n\\subsection{Finite quotients of free groups}\n\n\\begin{theorem}\n  Free groups are residually finite.\n\\end{theorem}\n\nThere are two proofs. The first proof in online notes is non-examinable (and omitted). The proof actually shows that\n\n\\begin{corollary}\n  For all \\(p\\), a free group \\(F\\) is residually \\(p\\)-finite. In particular \\(F\\) injects to its pro-\\(p\\) completion.\n\\end{corollary}\n\nWe record the second proof here, which is an algorithm to produce finite quotients.\n\n\\begin{proof}\n  Let \\(F\\) be free on finite generating set \\(S\\) so \\(F = \\pi_1 \\bigvee_S S^1\\). Write \\(g \\in F \\setminus \\{1\\}\\) as a reduced word \\(s_1 \\cdots s_n\\).  Write out \\(g\\) ``along a line'' to get a labelled graph \\(Y\\) and there is a continuous map \\(Y \\to X\\). We seek to make \\(Y\\) a covering space of \\(X\\) by adding edges.\n\n  For each \\(s \\in S\\), the number of vertices having an outgoing \\(s\\) edge is the same as those having an incoming one. Thus we can add \\(s\\) edges so that every vertex has exactly one \\(s\\) edge entering and one leaving. 
This gives a finite covering \\(\\overline Y\\) of \\(X\\) so corresponds to a finite index subgroup \\(\\pi_1\\overline Y\\) of \\(F\\). \\(g \\notin \\pi_1 \\overline Y\\) since it is not a loop.\n\n  For each \\(s \\in S\\), following the edge \\(s\\) is a permutation of vertices to we have an action of \\(F\\) on vertices of \\(Y\\). \\(g\\) does not fix the initial vertex of \\(Y\\) as \\(g\\) is not contained in the finite index normal subgroup corresponding to the kernel of the action.\n\\end{proof}\n\n\\begin{theorem}[Marshall Hall's theorem]\\index{Marshal Hall's theorem}\n  Let \\(S \\subseteq F\\) be a finite subset. If \\(y \\notin \\langle S\\rangle\\) then exists a finite group \\(Q\\) and a homomorphism \\(f: F \\to Q\\) such that \\(f(y) \\notin f(\\langle S\\rangle)\\).\n\\end{theorem}\n\n\\begin{corollary}\n  If \\(S\\) does not generate \\(F\\) then exists \\(Q, f\\) such that \\(f(\\langle S\\rangle) \\ne f(F)\\).\n\\end{corollary}\n\n\\begin{remark}\n  Marshall Hall's theorem actually says that exists \\(H \\leq_f F\\) such that \\(S \\subseteq H\\) and \\(H = \\langle S \\rangle * H'\\).\n\\end{remark}\n\n\\begin{corollary}\n  \\(S\\) generates \\(F\\) if and only if \\(S\\) (topologically) generates \\(\\hat F\\).\n\\end{corollary}\n\n\\begin{note}\n  As a result \\(\\overline{\\langle S\\rangle} \\cap F = \\langle S\\rangle\\), which generalises the basic correspondence, which holds only for finite index subgroup.\n\\end{note}\n\nThe proof uses monodromy action of the fundamental group. Let \\(X\\) be a wedge of circles whose fundamental group is \\(F\\). If we can find a finite covering space \\(Y\\) of \\(X\\) such that \\(S\\) is contained in the image of \\(\\pi_1Y\\) then \\(\\langle S\\rangle\\) is contained in the stabiliser of a vertex of the covering action. Similar to the proof of residually finiteness of free groups, if \\(y \\notin \\langle S\\rangle\\) then we can construct \\(Y\\) so that \\(g\\) does not lie in the stabiliser.\n\nTo construct the covering space first let \\(Y\\) be the graph with a distinguished vertex and loops corresponding to words in \\(S\\) going around the vertex. We would like to add edges as before so that \\(Y\\) becomes a covering space. Here we encounter a problem as there might be more than one edge, say \\(a\\), coming from the distinguished vertex. We need to apply \\emph{Stallings' fold}\\index{Stalling' fold} to identify the repeated edges, and it is a fact that this procedure does not change the fundamental group.\n\nThe formal proof is non-examinable. A worked example can be found in the lecturer's online notes.\n\n\\section{Pro-\\(p\\) groups}\n\nRecall that a \\emph{pro-\\(p\\) group}\\index{pro-\\(p\\) group} is an inverse limit of finite \\(p\\)-groups --- finite groups of order a power of \\(p\\). The \\emph{pro-\\(p\\) completion}\\index{pro-\\(p\\) completion} of \\(\\Gamma\\) is\n\\[\n  \\hat \\Gamma_{(p)} = \\varprojlim_{\\substack{N \\normal \\Gamma \\\\ \\Gamma/N p\\text{-group}}} \\Gamma/N.\n\\]\nFor example \\(\\Z_p = \\hat \\Z_{(p)}\\).\n\n\\subsection{Generators of pro-\\(p\\) groups}\n\n\\begin{definition}[Frattini subgroup]\\index{Frattini subgroup}\n  Let \\(G\\) be a finite group. The \\emph{Frattini subgroup} of \\(G\\) is\n  \\[\n    \\Phi(G) = \\bigcap \\{M: M \\text{ maximal proper subgroup of } G\\}.\n  \\]\n\\end{definition}\n\n\\begin{proposition}\n  If \\(f: G \\to H\\) is a surjective group homomorphism then \\(f(\\Phi(G)) \\subseteq \\Phi(H)\\). 
In particular \\(\\Phi(G)\\) is a characteristic normal subgroup\\index{characteristic subgroup}, i.e.\\ if \\(f: G \\to G\\) is an automorphism then \\(f(\\Phi(G)) = \\Phi(G)\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(M\\) be a maximal proper subgroup of \\(H\\). Then \\(f^{-1}(M)\\) is a proper subgroup and it is maximal: if \\(f^{-1}(M) \\subseteq N \\subseteq G\\), assume \\(f^{-1}(M) \\ne N\\). Then \\(f(N) = H\\) as \\(M\\) is maximal. Therefore \\(G = N \\cdot \\ker f = N\\) as \\(N \\supseteq \\ker f\\). Therefore \\(\\Phi(G) \\subseteq f^{-1}(M)\\), hence \\(f(\\Phi(G)) \\subseteq M\\). Taking intersection to get \\(f(\\Phi(G)) \\subseteq \\Phi(H)\\).\n\\end{proof}\n\nNote that we did not use finiteness anywhere.\n\n\\begin{proposition}\n  Let \\(G\\) be a finite group. For \\(S \\subseteq G\\) a subset, TFAE:\n  \\begin{enumerate}\n  \\item \\(S\\) generates \\(G\\).\n  \\item \\(S \\Phi(G)\\) generates \\(G\\).\n  \\item \\(S \\Phi(G)/\\Phi(G)\\), the image of \\(S\\) in \\(G/\\Phi(G)\\), generates \\(G/\\Phi(G)\\).\n  \\end{enumerate}\n\\end{proposition}\n\nIn other words, the elements of \\(\\Phi(G)\\) are precisely the non-generators.\n\n\\begin{proof}\n  Only \\(3 \\implies 1\\) is nonobvious. Suppose \\(S\\) does not generate \\(G\\). Then \\(\\langle S\\rangle\\) is contained in some maximal proper subgroup. Here we used the crucial fact that \\(G\\) is finite. Since \\(\\Phi(G) \\subseteq M\\), \\(M/\\Phi(G)\\) is a proper subgroup of \\(G/\\Phi(G)\\). Thus \\(S \\Phi(G)/\\Phi(G) \\subseteq M/\\Phi(G) \\subsetneq G/\\Phi(G)\\).\n\\end{proof}\n\n\\begin{definition}\n  Let \\(G\\) be a group, \\(H, K\\) subgroups of \\(G\\) and \\(m \\in \\Z\\). Define\n  \\[\n    HK = \\{hk: h \\in H, k \\in K\\}\n  \\]\n  which is a priori a set, but is a subgroup if either \\(H\\) or \\(K\\) is normal, and is a normal subgroup if both \\(H\\) and \\(G\\) are.\n\n  Define the commutator to be\n  \\[\n    [H, K] = \\langle [h, k] = h^{-1}k^{-1}hk: h \\in H, k \\in K \\rangle\n  \\]\n  which is a subgroup and is a normal subgroup if both \\(H\\) and \\(K\\) are normal.\n\n  Finally define\n  \\[\n    H^m = \\langle h^m: h \\in H\\rangle.\n  \\]\n  which is a subgroup.\n\\end{definition}\n\n\\begin{proposition}\n  Let \\(G\\) be a \\(p\\)-group. Then\n  \\[\n    \\Phi(G)\n    = [G, G] G^p\n    = \\ker (G \\to G_{\\mathrm{ab}} \\to G_{\\mathrm{ab}}/p G_{\\mathrm{ab}})\n    = \\langle [g_1, g_2] g_3^p \\rangle.\n  \\]\n\\end{proposition}\n\nNote that \\(G_{\\mathrm{ab}}/p G_{\\mathrm{ab}}\\) is an \\(\\F_p\\)-vector space, so is isomorphic to \\((\\Z/p)^d\\) for some \\(d\\). Thus \\(G/\\Phi(G) \\cong \\F_p^d\\) where \\(d = d(G)\\) is the minimum size of a generating set of \\(G\\).\n\n\\begin{proof}\n  Example sheet 2.\n\\end{proof}\n\n\\begin{definition}\n  Let \\(G\\) be a profinite group. Define\n  \\[\n    \\Phi(G) = \\bigcap \\{M: M \\text{ maximal proper closed subgroups of } G\\},\n  \\]\n  where \\(M\\) maximal proper closed means that if \\(N\\) is a closed subgroup then \\(M \\subseteq N \\subseteq G\\) implies \\(N = M\\) or \\(N = G\\).\n\\end{definition}\n\n\\begin{proposition}\n  Any proper closed subgroup of a profinite group \\(G\\) is contained in a proper open subgroup, and hence is contained in a maximal proper closed subgroup, and maximal proper closed subgroups are open.\n\\end{proposition}\n\n\\begin{proof}\n  Suppose \\(H \\leq_c G\\), \\(G \\ne H\\) and \\(G = \\varprojlim G_j\\). Since \\(H\\) is not dense, exists \\(j\\) such that \\(p_j(H) \\ne p_j(G)\\). 
Then \\(p_j^{-1}(p_j(H))\\) is open proper and contains \\(H\\).\n\\end{proof}\n\nSimilar to the finite case we have\n\n\\begin{lemma}\n  If \\(f: G \\to H\\) is a surjective continuous homomorphism of profinite groups then \\(f(\\Phi(G)) \\subseteq \\Phi(H)\\).\n\\end{lemma}\n\n\\begin{proposition}\n  If \\(S \\subseteq G\\) where \\(G\\) is a profinite group then TFAE\n  \\begin{enumerate}\n  \\item \\(S\\) is a tgs for \\(G\\).\n  \\item \\(S \\Phi(G)\\) is a tgs for \\(G\\).\n  \\item \\(S \\Phi(G)/\\Phi(G)\\) is a tgs for \\(G/\\Phi(G)\\).\n  \\end{enumerate}\n\\end{proposition}\n\n\\begin{proposition}\n  Let \\((G_j)_{j \\in J}\\) be a surjective inverse system of finite groups. Let \\(G = \\varprojlim G_j\\). Then\n  \\[\n    \\Phi(G) = \\varprojlim \\Phi(G_j).\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(p_j: G \\to G_j\\) be the projection. Then \\(p_j(\\Phi(G)) \\subseteq \\Phi(G_j)\\) for all \\(j\\). Hence \\(\\Phi(G) \\subseteq \\varprojlim \\Phi(G_j)\\).\n\n  Now let \\(M\\) be a maximal proper closed subgroup of \\(G\\). \\(M\\) is open so exists \\(i\\) such that \\(\\ker p_i \\subseteq M\\). Thus \\(\\ker p_j \\subseteq M\\) for all \\(j \\leq i\\). Then \\(p_j(M)\\) is a maximal proper subgroup of \\(G_j\\) so \\(p_j(M) \\supseteq \\Phi(G_j)\\) for all \\(j \\leq i\\). Thus\n  \\[\n    M \\supseteq \\varprojlim_{j \\leq i} \\Phi(G_j) = \\varprojlim \\Phi(G_j).\n  \\]\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a tfg pro-\\(p\\) group. Then\n  \\[\n    \\Phi(G) = \\overline{[G, G] G^p}\n  \\]\n  and \\(G/\\Phi(G) \\cong \\F_p^d\\) where \\(d = d(G)\\).\n\\end{proposition}\n\nLater we will see that \\([G, G] G^p\\) is in fact closed.\n\n\\begin{note}\n  \\(G/\\Phi(G)\\) is also denoted \\(H_1(G, \\F_p)\\).\n\\end{note}\n\n\\begin{proof}\n  Write \\(G = \\varprojlim G_j\\) as a surjective inverse limit of finite \\(p\\)-groups. Then \\(\\Phi(G) = \\varprojlim [G_j, G_j] G_j^p\\). If \\(g_1, g_2, g_3 \\in G\\) then\n  \\[\n    p_j([g_1, g_2]g_3^p) = [p_j(g_1), p_j(g_2)]p_j(g_3)^p \\in [G_j, G_j] G_j^p\n  \\]\n  so \\(p_j(\\overline{[G, G] G^p}) \\subseteq [G_j, G_j] G_j^p\\) for all \\(j\\) so \\(\\overline{[G, G] G^p} \\subseteq \\Phi(G)\\). Now \\(G/\\overline{[G, G] G^p}\\) is tfg abelian and every element has order \\(p\\). Therefore it is isomorphic to \\(\\F_p^d\\) for some \\(d\\) (if \\(a_1, \\dots, a_d\\) tgs of \\(G/\\overline{[G, G]G^p}\\) then \\(\\{a_1^{n_1} \\cdots a_d^{n_d}: n_1, \\dots, n_d \\in \\{0, \\dots, p - 1\\}\\}\\) is a finite dense subgroup). Then since \\(\\Phi(\\F_p^d) = \\{0\\}\\), we find \\(\\Phi(G) \\subseteq \\overline{[G, G] G^p}\\) as required.\n\\end{proof}\n\n\\begin{corollary}\n  Let \\(f: G \\to H\\) be a continuous homomorphism of tfg pro-\\(p\\) groups. Then \\(f(\\Phi(G)) \\subseteq \\Phi(H)\\) and hence there is an induced map \\(f_*: G/\\Phi(G) \\to H/\\Phi(H)\\), which is a map of vector spaces over \\(\\F_p\\). \\(f\\) is surjective if and only if \\(f_*\\) is surjective.\n\\end{corollary}\n\n\\begin{proof}\n  If \\(g_1, g_2, g_3 \\in G\\) then\n  \\[\n    f([g_1, g_2] g_3^p) = [f(g_1), f(g_2)] f(g_3)^p \\in \\Phi(H)\n  \\]\n  so \\(f(\\Phi(G)) \\subseteq \\Phi(H)\\).\n\n  \\(f(G)\\) generates \\(H\\) if and only if \\(f(G) \\Phi(H)/\\Phi(H) = f_*(G/\\Phi(G))\\) generates \\(H/\\Phi(H)\\). As the image of both are compact so closed, the result follows.\n\\end{proof}\n\n\\begin{eg}\n  Let \\(F = \\langle a, b\\rangle\\), the free group of rank 2 and \\(G = \\hat F_{(p)}\\). Then \\(G/\\Phi(G) = \\F_p^2\\). 
Let \\(S = \\{a^4b^2a, ba^{-2}b\\}\\). Map \\(S\\) to \\(G/\\Phi(G)\\) to test generation:\n  \\begin{align*}\n    a^4b^2a &\\mapsto\n              \\begin{psmallmatrix}\n                5 \\\\\n                2\n              \\end{psmallmatrix}\n    \\\\\n    ba^{-2}b &\\mapsto\n               \\begin{psmallmatrix}\n                 -2 \\\\\n                 2\n               \\end{psmallmatrix}\n  \\end{align*}\n  They generate \\(\\F_p^2\\) if and only if \\(\\det\n  \\begin{psmallmatrix}\n    5 & -2 \\\\\n    2 & 2\n  \\end{psmallmatrix}\n  = 14 \\ne 0\\), if and only if \\(p \\ne 2, 7\\).\n\\end{eg}\n\n\\subsection{Nilpotent groups}\n\n\\begin{definition}\n  A commutator of length 2 is a commutator\n  \\[\n    [g_1, g_2] = g_1^{-1}g_2^{-1}g_1g_2 = g_1^{-1}g_1^{g_2}.\n  \\]\n  Define iteratively a commutator of length \\(n\\)\n  \\[\n    [g_1, g_2, \\dots, g_n] = [g_1, [g_2, \\dots, g_n]].\n  \\]\n\\end{definition}\n\n\\begin{definition}[lower central series]\\index{lower central series}\n  The \\emph{lower central series} of \\(G\\) is the following sequence of subgroups:\n  \\begin{align*}\n    G_1 &= G \\\\\n    G_{n + 1} &= [G, G_n] = \\langle [g, h]: g \\in G, h \\in G_n \\rangle\n  \\end{align*}\n  We sometimes denote \\(G_n\\) by \\(\\gamma_n(G)\\).\n\\end{definition}\n\n\\begin{definition}[nilpotent group]\\index{nilpotent group}\n  A group \\(G\\) is \\emph{nilpotent of class \\(c\\)} if \\(\\gamma_{c + 1}(G) = 1\\) but \\(\\gamma_c(G) \\ne 1\\).\n\\end{definition}\n\nFor a nilpotent group we have\n\\[\n  G = \\gamma_1 \\geq \\gamma_2 \\geq \\dots \\geq \\gamma_{c + 1} = 1.\n\\]\n\n\\begin{note}\n  \\(\\gamma_c(G)\\) is central in \\(G\\) and nilpotent of class \\(1\\) is the same as abelian.\n\\end{note}\n\n\\begin{proposition}\n  \\(\\gamma_n\\) is \\emph{fully characteristic}\\index{fully characteristic subgroup} in the sense that if \\(f: G \\to H\\) is a homomorphism then \\(f(\\gamma_n(G)) \\subseteq \\gamma_n(H)\\).\n\\end{proposition}\n\n\\begin{proposition}\n  Subgroups and quotients of nilpotent groups of class \\(c\\) are nilpotent of class \\(\\leq c\\).\n\\end{proposition}\n\n\\begin{proposition}\n  A finite \\(p\\)-group is nilpotent.\n\\end{proposition}\n\n\\begin{ex}\n  If \\(R\\) is a ring, then the set of upper trianglular \\(n \\times n\\) matrices with \\(1\\) on the diagonal is a nilpotent group.\n\\end{ex}\n\n\\begin{definition}\n  For \\(G\\) is tfg pro-\\(p\\) group, the \\emph{lower central \\(p\\)-series} is defined by\n  \\begin{align*}\n    G_1 &= G \\\\\n    G_{n + 1} &= \\overline{[G, G_n] G_n^p}\n  \\end{align*}\n  \\(\\gamma_n(G) \\subseteq G_n = \\gamma_n^{(p)}(G)\\).\n\\end{definition}\n\nWhy are we doing this?\n\\begin{enumerate}\n\\item \\(G_n/G_{n + 1}\\) are vector spaces over \\(\\F_p\\).\n\\item \\(\\gamma_n^{(p)}(G)\\) is open in \\(G\\): inductively \\(\\Phi(G) = \\gamma_2^{(p)}(G)\\) and \\(\\Phi(G_{n - 1}) \\leq \\gamma_n^{(p)}(G)\\), and Frattini subgroup of a tfg pro-\\(p\\) group is open.\n\\item \\(\\{\\gamma_n^{(p)}(G)\\}\\) forms a neighbourhood basis for the identity: if \\(N \\normal_o G\\), \\(G/N\\) is a finite \\(p\\)-group so \\(\\gamma_n^{(p)}(G/N)\\) vanish for some \\(n\\). Therefore \\(\\gamma_n^{(p)}(G) \\subseteq N\\).\n\\end{enumerate}\n\n\\subsection{Invariance of topology}\n\n\\begin{theorem}\n  \\label{thm:homomorphism from pro-p group is continuous}\n  Let \\(G\\) be a tfg pro-\\(p\\) group, \\(H\\) a profinite group and \\(f: G \\to H\\) a homomorphism. 
Then \\(f\\) is continuous.\n\\end{theorem}\n\n\\begin{corollary}\n  Let \\(G\\) be a tfg pro-\\(p\\) group. There is no other topology on \\(G\\) making it into a profinite group.\n\\end{corollary}\n\n\\begin{proof}\n  If \\(\\tau_1\\) is our given topology and \\(\\tau_2\\) is another topology such that \\((G, \\tau_2)\\) is profinite, then the identity homomorphism from \\(\\tau_1\\) to \\(\\tau_2\\) is continuous so a homeomorphism.\n\\end{proof}\n\n``The group structure determines the topology''.\n\n\\Cref{thm:homomorphism from pro-p group is continuous} follows from\n\n\\begin{theorem}\n  \\label{thm:finite index subgroup of pro-p group is open}\n  Let \\(G\\) be a tfg pro-\\(p\\) group. Then any finite index subgroup of \\(G\\) is open.\n\\end{theorem}\n\n\\begin{proof}[Proof of \\Cref{thm:homomorphism from pro-p group is continuous}]\n  Suppose \\(f: G \\to H\\) is a homomorphism. Then for all \\(U \\normal_o H\\), \\(U\\) has finite index so \\(f^{-1}(H)\\) has finite index and hence open in \\(G\\). Thus \\(f\\) is continuous.\n\\end{proof}\n\n\\begin{lemma}\n  Let \\(G\\) be a nilpotent group generated by \\(a_1, \\dots, a_d\\). Then every \\(g \\in [G, G]\\) can be written as\n  \\[\n    g = [a_1, x_1] \\cdots [a_d, x_d]\n  \\]\n  for some \\(x_1, \\dots, x_d \\in G\\).\n\\end{lemma}\n\n\\begin{proof}\n  Induction on nilpotency class. \\(c = 1\\) is trivial. Assume true for nilpotency class \\(c - 1\\). In particular it is true for \\(G/\\gamma_c(G)\\) so\n  \\[\n    g = [a_1, x_1] \\cdots [a_d, x_d] \\cdot u\n  \\]\n  for some \\(u \\in \\gamma_c(G)\\). We can write \\(u\\) as a product\n  \\[\n    u = \\prod_{i = 1}^N [g_i, v_i]\n  \\]\n  where \\(g_i \\in G, v_i \\in \\gamma_{c - 1}(G)\\). Recall the commutator relations\n  \\begin{align*}\n    [xy, z] &= [x, z]^y [y, z] \\\\\n    [x, yz] &= [x, z] [x, y]^z\n  \\end{align*}\n  which imply, for \\(v \\in \\gamma_{c - 1}(G)\\)\n  \\begin{align*}\n    [a_ia_j, v] &= [a_i, v] [a_j v] \\\\\n    [a_i^{-1}, v] &= [a_i, v^{-1}] = [a_i, v]^{-1} \\\\\n    [a_i, v] [a_i, v] &= [a_i, v^2] \\\\\n    [a_i, w] [a_i, v] &= [a_i, vw]\n  \\end{align*}\n  Now we can rewrite \\(u\\) in the form \\([a_1, v_1'] \\cdots [a_d, v_d']\\) so\n  \\[\n    g = [a_1, x_1] \\cdots [a_d, x_d] [a_1, v_1'] \\cdots [a_d, v_d']\n    = [a_1, x_1v_1'] \\cdots [a_d, x_d v_d'].\n  \\]\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a tfg pro-\\(p\\) group. Then \\([G, G]\\) is closed in \\(G\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(a_1, \\dots, a_d\\) be a tgs for \\(G\\). Let\n  \\[\n    X = \\{[a_1, x_1] \\cdots [a_d, x_d]: x_i \\in G\\}.\n  \\]\n  \\(X\\) is the image of the map\n  \\begin{align*}\n    G^d &\\to G \\\\\n    (x_1, \\dots x_d) &\\mapsto [a_1, x_1] \\cdots [a_d, x_d]\n  \\end{align*}\n  so is compact so closed. Obviously \\(X \\subseteq [G, G]\\). Let \\(g \\in [G, G]\\). Let \\(G = \\varprojlim G_j\\) where \\(p_j: G \\to G_j\\). Then \\(p_j(g) \\in [G_j, G_j]\\). \\(G_j\\) is nilpotent as it is a \\(p\\)-group. By the previous lemma\n  \\[\n    p_j(g) = [p_j(a_1), x_1] \\cdots [p_j(a_d), x_d]\n  \\]\n  for some \\(x_1, \\dots, x_d \\in G_j\\). Hence \\(p_j(g) \\in p_j(X)\\) for all \\(j\\) so \\(g \\in \\overline X = X\\).\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(G\\) be a pro-\\(p\\) group and let \\(K\\) be a finite index subgroup. Then \\([G: K]\\) is a power of \\(p\\).\n\\end{proposition}\n\n\\begin{proof}\n  wlog assume \\(K\\) is normal (otherwise replace \\(K\\) by its core). 
Let \\([G: K] = m = p^r m'\\) where \\(m'\\) is coprime to \\(p\\). Consider \\(X = \\{g^m:g \\in G\\}\\). Then \\(X \\subseteq K\\) by Lagrange. \\(X\\) is closed so\n  \\[\n    X = \\overline X = \\bigcap_{N \\normal_o G} XN.\n  \\]\n  We will show that \\(g^{p^r} \\in X \\subseteq K\\) for all \\(g \\in G\\), and this shows \\([G: K]\\) divides \\(p^r\\) (?). Let \\(N \\normal_o G\\) be open normal. Let \\([G: N] = p^s\\) for some \\(s\\). Let \\(t = \\max(r, s)\\). Then \\(g^{p^s} \\in N\\) for all \\(g \\in G\\). But \\((m, p^t) = p^r\\) so exists \\(a, b \\in \\Z\\)  such that \\(am + bp^t = p^r\\). Then\n  \\[\n    g^{p^r} = (g^a)^m \\cdot (g^b)^{p^t} \\in XN.\n  \\]\n  This holds for all \\(N \\normal_o G\\) so the result follows.\n\\end{proof}\n\n\\begin{proposition}\n  If \\(G\\) is a tfg pro-\\(p\\) group then \\([G, G]G^p\\) is closed in \\(G\\), hence equal to \\(\\Phi(G)\\).\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(G^{\\{p\\}} = \\{g^p: g \\in G\\}, G^p = \\langle G^{\\{p\\}} \\rangle\\). As \\(G/[G, G]\\) is abelian,\n  \\[\n    [G, G]G^p = [G, G]G^{\\{p\\}}\n  \\]\n  and \\([G, G]G^{\\{p\\}}\\) is closed as it is the image of the map\n  \\begin{align*}\n    [G, G] \\times G &\\to G \\\\\n    (x, g) &\\mapsto xg^p\n  \\end{align*}\n\\end{proof}\n\n\\begin{proof}[Proof of \\Cref{thm:finite index subgroup of pro-p group is open}]\n  Again wlog look at normal subgroups only. Suppose \\(K \\normal_f G\\) is a counterexample with \\([G: K]\\) is minimal. Consider\n  \\[\n    M = [G, G]G^p K \\normal_f G\n  \\]\n  which contains \\(K\\) as a finite index subgroup.\n\n  Now \\(G/K\\) is a non-trivial \\(p\\)-group so\n  \\[\n    \\Phi(G/K) = [G/K, G/K](G/K)^p,\n  \\]\n  which is the image of \\(M\\) in \\(G/K\\). Hence \\(M\\) is a proper subgroup of \\(G\\) so either \\(M = K\\), so \\(K \\geq [G, G] G^p = \\Phi(G)\\) open, so \\(K\\) open, or \\(M \\ne K\\), therefore by minimality \\(K\\) is open in \\(M\\) and \\(M\\) is open in \\(G\\) so \\(K\\) is open in \\(G\\).\n\\end{proof}\n\n\\subsection{Hensel's lemma \\& \\(p\\)-adic arithmetic}\n\nWe saw earlier that solving the equation \\(ax = 1\\) in \\(\\Z_p\\) just depends on the image of \\(a\\) in \\(\\Z/p\\). Hensel's lemma allows us to do so for all polynomials, and gives an algorithm for finding the root.\n\n\\begin{lemma}\n  Let \\(f(x)\\) be a polynomial with \\(\\Z_p\\) coefficients. Then \\(f\\) has a root in \\(\\Z_p\\) if and only if it has a root modulo \\(\\Z/p^k\\) for all \\(k\\).\n\\end{lemma}\n\nThe aim is to reduce just to mod \\(p\\). To do so we use the method of \\emph{Hensel lifting}\\index{Hensel lifting}. As an example let \\(p = 7\\) and \\(f(x) = x^2 - 2\\). \\(f(3) = 0 \\pmod 7\\) so to find a solution mod \\(49\\), consider the element \\(3 + 7a\\), \\(0 \\leq a \\leq 6\\). Then\n\\[\n  (3 + 7a)^2 = 9 + 7 \\cdot 6a + 49 a^2 = 2 + 7 (1 + 6a) \\pmod{49}\n\\]\nso we only have to solve a linear equation since the square term vanishes. \\(a = 1\\), for example, gives a solution so \\((3 + 7 \\cdot 1)^2 = 100 = 2 \\pmod{49}\\). Next we can consider \\(10 + 7^2 \\cdot a \\in \\Z/343\\) etc.\n\n\\begin{proposition}[Hensel's lemma for square roots]\n  Let \\(p \\ne 2\\) be prime. Suppose \\(\\lambda \\in \\Z_p\\) is congruent to a nonzero square \\(r_1^2 \\pmod p\\). 
Then exists a unique \\(\\rho \\in \\Z_p\\) such that \\(\\rho^2 = \\lambda\\) and \\(\\rho = r_1 \\pmod p\\).\n\\end{proposition}\n\n\\begin{proof}\n  We construct a sequence \\(r_k \\in \\Z\\), unique modulo \\(p^k\\), such that\n  \\begin{itemize}\n  \\item \\(r_{k + 1} = r_k \\pmod{p^k}\\),\n  \\item \\(r_k^2 = \\lambda \\pmod{p^k}\\).\n  \\end{itemize}\n  The first condition can be either interpreted as \\((z_k)\\) forming a Cauchy sequence in \\(\\Z_p\\), or as \\((r_k) \\in \\prod \\Z/p^k\\) compatible with transition functions. In either case it gives an elemnt \\(\\rho \\in \\Z_p\\). The second condition then says \\(\\rho^2 = \\lambda\\).\n\n  Suppose we have constructed \\(r_k\\). Consider the elements \\(r_k + ap^k\\), \\(0 \\leq a < p\\). Since \\(r_k^2 = \\lambda \\pmod{p^k}\\), we can write \\(r_k^2 = \\lambda + b_k p^k\\) for some \\(b_k \\in \\Z_p\\). Then\n  \\[\n    (r_k + p^ka)^2 = \\lambda + p^kb_k + 2p^ka r_k + p^{2k} a^2\n    = \\lambda + p^k(b_k + 2a r_k) \\pmod{p^{k + 1}}\n  \\]\n  Now modulo \\(p\\),\n  \\[\n    b_k + 2ar_k = b_k + 2ar_1 \\pmod p\n  \\]\n  has a unique root for \\(a \\pmod p\\), since \\(2r_1 \\ne 0 \\pmod p\\). Set \\(r_{k + 1} = r_k + p^k a\\).\n\\end{proof}\n\n\\begin{proposition}[Hensel's lemma]\\index{Hensel's lemma}\n  Let \\(f(x)\\) be a polynomial with \\(\\Z_p\\) coefficients and let \\(K \\in \\N\\). Let \\(r \\in \\Z_p\\) be such that \\(f(r) = 0 \\pmod{p^K}, f'(r) \\ne 0 \\pmod p\\). Then exist a unique \\(\\rho \\in \\Z_p\\) such that \\(f(\\rho) = 0\\) and \\(\\rho = r \\pmod{p^K}\\).\n\\end{proposition}\n\nThis follows immediately from\n\n\\begin{lemma}\n  For \\(r, a \\in \\Z_p\\) and \\(k \\geq 1\\),\n  \\[\n    f(r + p^ka) = f(r) + p^k af'(r) \\pmod{p^{k + 1}}.\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  The statement is linear in \\(f\\) so enough to show for \\(f(x) = x^n\\). By binomial formula,\n  \\begin{align*}\n    (r + p^ka)^n\n    &= r^n + np^ka r^{n - 1} + \\sum_{i = 2}^n \\binom{n}{i} p^{ki}a^i r^{n - i} \\\\\n    &= r^n + np^kar^{n - 1} \\pmod{p^{k + 1}}\n  \\end{align*}\n\\end{proof}\n\nWe can adapt Hensel's lemma to matrix groups.\n\n\\begin{definition}\n  Define filtrations\n  \\begin{align*}\n    \\GL_N^{(k)}(\\Z_p)\n    &= \\ker(\\GL_N(\\Z_p) \\to \\GL_N(\\Z/p^k)) \\\\\n    &= \\{I + p^k A: A \\in \\Mat_{N \\times N}(\\Z_p)\\} \\\\\n    \\SL_N^{(k)}(\\Z_p)\n    &= \\ker(\\SL_N(\\Z_p) \\to \\SL_N(\\Z/p^k)) \\\\\n    &= \\{I + p^k A: A \\in \\Mat_{N \\times N}(\\Z_p), \\det(I + p^kA) = 1\\}\n  \\end{align*}\n\\end{definition}\n\n\\begin{proposition}\n  \\(\\GL_N^{(1)}(\\Z_p)\\) and \\(\\SL_N^{(1)}(\\Z_p)\\) are pro-\\(p\\) groups.\n\\end{proposition}\n\n\\begin{proof}\n  Write\n  \\[\n    \\GL_N^{(1)}(\\Z_p) = \\varprojlim_{k \\in \\Z} \\GL_N^{(1)}(\\Z/p^k)\n  \\]\n  and \\(\\GL_N^{(1)}(\\Z/p^k) = \\{I + pA: A \\in \\Mat_{N \\times N}(\\Z/p^k)\\}\\) has order \\(p^{N^2(k - 1)}\\). \\(\\SL_N^{(1)}(\\Z_p)\\) is a closed subgroup of \\(\\GL_N^{(1)}(\\Z_p)\\) and is also a pro-\\(p\\) group.\n\\end{proof}\n\nFor the rest of the section we assume \\(p\\) is an odd prime.\n\n\\begin{proposition}\n  The continuous function \\(A \\mapsto A^p\\) maps \\(\\GL_N^{(k)}(\\Z_p)\\) onto \\(\\GL_N^{(k + 1)}(\\Z_p)\\). Same for \\(\\SL\\).\n\\end{proposition}\n\nSlogan: every element in \\(\\GL_n^{k + 1}(\\Z_p)\\) has a \\(p\\)th root.\n\n\\begin{proof}\n  The proof is by Hensel-like successive approximations. 
Claim for all \\(r \\geq 1\\), for all \\(A\\),\n  \\[\n    (I + p^rA)^p\n    = I + p^{r + 1}A + p^{r + 2}B\n    = I + p^{r + 1}A \\pmod{p^{r + 2}}\n  \\]\n  where \\(B\\) is some polynomial of \\(A\\): for \\(\\ell \\geq 2\\) the term\n  \\[\n    \\binom{p}{p - \\ell} p^{r\\ell} A^{\\ell}\n  \\]\n  always has a factor \\(p^{r + 2}\\).\n\n  Let \\(I + p^{k + 1}A \\in \\GL_N^{(k + 1)}(\\Z_p)\\). We show inductively that: for all \\(n \\geq 1\\), exist matrices \\(B_n, E_n\\) both expressible as polynomials in \\(A\\) such that\n  \\begin{itemize}\n  \\item \\(B_{n + 1} = B_n \\pmod{p^n}\\),\n  \\item \\((I + p^k B_n)^p = I + p^{k + 1}A + p^{k + n + 1}E_n\\).\n  \\end{itemize}\n  Note that \\(B_n, E_n\\)'s commute and therefore we can apply binomial theorem. For \\(n = 1\\), choose \\(B_1 = A\\). Then\n  \\[\n    (I + p^kA)^p = I + p^{k + 1}A + p^{k + 2}E_1.\n  \\]\n  Inductively define \\(B_{n + 1} = B_n - p^n E_n\\),\n  \\begin{align*}\n    (I + p^k B_{n + 1})^p\n    &= (I + p^k B_n - p^{k + n}E_n)^p \\\\\n    &= (I + p^kB_n)^p - p \\cdot p^{k + n} E_n (I + p^k B_n)^{p - 1} + O(p^{k + n + 2}) \\\\\n    &= I + p^{k + 1}A + p^{k + n + 1}E_n - p^{k + n + 1} E_n (I + O(p^k)) + O(p^{k + n + 2}) \\\\\n    &= I + p^{k + 1}A + p^{k + n + 2}E_{n + 1}\n  \\end{align*}\n  for some \\(E_{n + 1}\\). Thus the proposition holds for \\(\\GL\\).\n\n  For \\(\\SL\\), suffcies to show \\(\\det C^p = 1\\) then \\(\\det C = 1\\). See example sheet 3 Q10: \\(\\Z_p^\\times = \\Z_p \\times C_{p - 1}\\).\n\\end{proof}\n\n\\begin{lemma}\n  \\[\n    (I + p^kA)(I + p^kB) = (I + p^k B)(I + p^kA) = I + p^k(A + B) \\pmod{p^{k + 1}}.\n  \\]\n\\end{lemma}\n\n\\begin{proposition}\n  For all \\(k\\),\n  \\[\n    \\Phi(\\GL_N^{(k)}(\\Z_p)) = \\GL_N^{(k + 1)}(\\Z_p)\n  \\]\n  and\n  \\[\n    \\GL_N^{(k)}/\\GL_N^{(k + 1)} \\cong \\F_p^{N^2}.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  By the previous proposition\n  \\[\n    \\GL_N^{(k + 1)} \\subseteq (\\GL_N^{(k)})^p \\subseteq \\Phi(\\GL_n^{(k)}).\n  \\]\n  By the lemma \\(\\GL_N^{(k)}/\\GL_N^{(k + 1)}\\) is abelian and of exponent \\(p\\) so is isomorphic to \\(\\F_p^d\\) for some \\(d\\). But we have already seen that \\(|\\GL_N^{(k)}/\\GL_N^{(k + 1)}| = p^{N^2}\\) so \\(d = N^2\\). As \\(\\Phi(\\F_p^{N^2}) = 1\\), \\(\\Phi(\\GL_N^{(k)}) \\subseteq \\GL_N^{(k + 1)}\\).\n\\end{proof}\n\n\\begin{corollary}\n  For any \\(k\\), the continuous map \\(A \\mapsto A^p\\) induces an isomorphism\n  \\[\n    \\GL_N^{(k)}/\\GL_N^{(k + 1)} \\to \\GL_N^{(k + 1)}/\\GL_N^{(k + 2)}.\n  \\]\n\\end{corollary}\n\n\\begin{theorem}\n  If \\(H\\) is a closed subgroup of \\(\\GL_N^{(1)}\\) then \\(d(H) \\leq N^2\\).\n\\end{theorem}\n\n\\begin{proof}\n  Suffices to show \\(d(H) \\leq N^2\\) for any subgroup \\(H\\) of each finite group \\(G = \\GL_N^{(1)}/\\GL_N^{(k + 1)}\\) for each \\(k\\). For each \\(H \\leq G\\), for \\(m \\leq k\\), set\n  \\begin{align*}\n    G_m &= \\GL_N^{(m)}/\\GL_N^{(k + 1)} \\\\\n    H_m &= H \\cap G_m\n  \\end{align*}\n  Induction to show \\(d(H_m) \\leq N^2\\): for \\(m = k\\)\n  \\[\n    H_k \\leq G_k = \\GL_N^{(k)}/\\GL_N^{(k + 1)} \\cong \\F^{N^2}_p.\n  \\]\n  Inductively, let \\(e\\) be the dimension of\n  \\[\n    H_m/H_{m + 1} \\leq G_m/G_{m + 1} \\cong \\F_p^{N^2}.\n  \\]\n  Have a surjection \\(H_m/\\Phi(H_m) \\to H_m/H_{m + 1}\\) (?). Take \\(e\\) elements \\(h_1, \\dots, h_e\\) of \\(H_m\\) which generate \\(H_m/H_{m + 1}\\). 
The \\(p\\)th-power map gives an isomorphism \\(G_m/G_{m + 1} \\cong G_{m + 1}/G_{m + 2}\\) and hence \\(h_1^p, \\cdots, h_e^p\\) are linearly independent in \\(G_{m + 1}/G_{m + 2}\\), thus linearly independent in \\(H_{m + 1}/\\Phi(H_{m + 1})\\) (?). By extending to a basis, we can find \\(y_1, \\dots, y_{d - e}\\) elements of \\(H_{m + 1}\\) such that \\(H_{m + 1} = \\langle h_1^p, \\dots, h_e^p, y_1, \\dots, y_{d - e} \\rangle\\). Then\n  \\[\n    H_m = \\langle h_1, \\dots, h_e \\rangle H_{m + 1} = \\langle h_1, \\dots, h_e, y_1, \\dots, y_{d - e} \\rangle\n  \\]\n  so \\(d(H_m) \\leq d(H_{m + 1})\\).\n\\end{proof}\n\n\\begin{corollary}[Non-examinable]\n  There is no closed nonabelian free pro-\\(p\\) subgroup in \\(\\GL_N(\\Z_p)\\).\n\\end{corollary}\n\n\\begin{proof}[Sketch proof]\n  \\(\\hat F_{(p)}\\) has normal subgroups of index \\(p^n\\) for all \\(n\\). These subgroups, by a form of basic correspondence, are free pro-\\(p\\) of rank \\(p^n(r - 1) + 1\\), absurd.\n\\end{proof}\n\nCompare this with the result that \\(\\SL_2(\\Z)\\) contains a nonabelian free subgroup.\n\nAs a converse we have\n\n\\begin{theorem}[Non-examinable]\n  If \\(G\\) is a pro-\\(p\\) group and exists \\(R\\) such that \\(d(H) \\leq R\\) for all \\(H \\leq_c G\\) then there exists an abelian normal subgroup \\(A \\cong \\Z_p^e \\leq G\\) (\\(e \\leq R\\)) such that \\(G/A \\embed \\GL_R(\\Z_p) \\times F\\) where \\(F\\) is a finite \\(p\\)-group.\n\\end{theorem}\n\n\\section{Cohomology of groups}\n\n\\subsection{Group rings and chain complexes}\n\nLet \\(G\\) be an abstract group.\n\n\\begin{definition}[group ring]\\index{group ring}\n  The \\emph{group ring} \\(\\Z G\\) is the free abelian group with basis \\(G\\), with multiplication given on basis elements by \\(g \\cdot h = gh\\).\n\\end{definition}\n\n\\(\\Z G\\) is in general noncommutative, but is commutative if \\(G\\) is abelian. The multiplicative identity is \\(1 e = e\\). Note that \\(\\Z G\\) is not necessarily an integral domain, for example if \\(G\\) has torsion element.\n\n\\begin{definition}[\\(G\\)-module]\\index{\\(G\\)-module}\n  A \\emph{\\(G\\)-module} is a \\(\\Z G\\)-module.\n\\end{definition}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item We only need to think about the action of basis elements.\n  \\item A \\(G\\)-module \\(M\\) is trivial if \\(g \\cdot m = m\\) for all \\(g \\in G, m \\in M\\).\n  \\end{enumerate}\n\\end{remark}\n\n\\begin{definition}\n  If \\(M_1, M_2\\) are \\(G\\)-modules, a morphism of \\(G\\)-modules or a \\(G\\)-linear map is a \\(\\Z G\\)-module homomorphism \\(M_1 \\to M_2\\).\n\\end{definition}\n\n\\begin{definition}\n  Let \\(M_1, M_2\\) be \\(G\\)-modules. The \\emph{\\(\\Hom\\)-group} \\(\\Hom_G(M_1, M_2)\\) is the set of \\(G\\)-linear maps \\(M_1 \\to G_2\\) with the structure of an abelian group.\n\\end{definition}\n\n\\begin{definition}[chain complex]\\index{chain complex}\n  A \\emph{chain complex of \\(G\\)-modules} is a sequence of \\(G\\)-modules and \\(G\\)-module maps\n  \\[\n    \\begin{tikzcd}\n      M_s \\ar[r] & \\cdots \\ar[r] & M_n \\ar[r, \"d_n\"] & \\cdots \\ar[r, \"d_{t + 1}\"] & M_t \\ar[r] & \\cdots \\ar[r] & M_t\n    \\end{tikzcd}\n  \\]\n  such that \\(d_n \\compose d_{n + 1} = 0\\) for all \\(n\\). Sometimes the chain complex is written as \\((M_n, d_n)\\).\n\n  The complex is \\emph{exact at \\(M_n\\)} if \\(\\im d_{n + 1} = \\ker d_n\\). 
The complex is \\emph{exact} if it is exact at \\(M_n\\) for all \\(t < n < s\\).\n\n  The \\emph{homology} of the complex is the sequence of abelian groups \\(H_S(M_\\bullet) = \\ker d_S, H_m(M_\\bullet) = \\ker d_n/\\im d_{n + 1}, H_t(M_\\bullet) = M_t/\\im d_{t + 1}\\).\n\\end{definition}\n\nRevision on free/projective modules and free/projective resolution.\n\n\\begin{eg}\n  Let \\(X\\) be a simplicial complex whose universal cover \\(\\widetilde X\\) is contractible. let \\(G = \\pi_1X\\) and let \\(X_n\\) be the set of \\(n\\)-simplices of \\(X\\). Now \\(G\\) acts on \\(\\widetilde X\\) with quotient \\(X\\) and no fixed points. Thus the set of \\(n\\)-simplices of \\(\\widetilde X\\) is in bijection with \\(G \\times X_n\\). The reduced simplicial chain complex of \\(\\widetilde X\\) takes the form\n  \\[\n    \\begin{tikzcd}\n      \\cdots \\ar[r] & \\Z G\\{X_2\\} \\ar[r] & \\Z G \\{X_1\\} \\ar[r] & \\Z G\\{X_0\\} \\ar[r] & \\Z \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  which is a free resolution of \\(\\Z\\) by \\(G\\)-modules.\n\\end{eg}\n\n\\begin{definition}[group cohomology]\\index{group cohomology}\n  Let \\(F_\\bullet\\) be a projective resolution of \\(\\Z\\) by \\(G\\)-modules. Let \\(M\\) be a \\(G\\)-module. Apply the functor \\(\\Hom_G(-, M)\\) to get a chain complex\n  \\[\n    \\begin{tikzcd}\n      \\Hom_G(F_0, M) \\ar[r, \"d^1\"] & \\Hom_G(F_1, M) \\ar[r, \"d^2\"] & \\cdots\n    \\end{tikzcd}\n  \\]\n  The \\emph{\\(n\\)th cohomoogy group with coefficients in \\(M\\)} is then\n  \\[\n    H^n(G, M) = \\ker d^{n + 1}/\\im d^n.\n  \\]\n  Elements of \\(\\ker d^{n + 1}\\) and \\(\\im d^n\\) are called \\emph{\\(n\\)-cocycles} and \\emph{\\(n\\)-coboundaries} respectively.\n\\end{definition}\n\n\\begin{eg}\n  Let \\(G = \\Z = \\langle t\\rangle\\) (written multiplicatively). Consider the chain complex\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & \\Z G \\ar[r, \"d_1\"] & \\Z G \\ar[r, \"\\varepsilon\"] & \\Z \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  where\n  \\begin{align*}\n    \\varepsilon(\\sum n_g g) &= \\sum n_g \\\\\n    d_1(x) &= x(t - 1)\n  \\end{align*}\n  It is easy to check this is exact with perhaps the exception \\(\\ker \\varepsilon \\subseteq \\im d_1\\). To do so suppose \\(x = \\sum_{k = K}^L n_k t^k \\in \\ker \\varepsilon\\) so \\(\\sum n_k = 0\\). Then\n  \\begin{align*}\n    x &= n_L t^{L - 1} (t - 1) + n_L t^{L - 1} + \\sum_{k = K}^{L - 1} n_k t^k \\\\\n      &= n_L t^{L - 1} (t - 1) + (n_L + n_{L - 1})t^{L - 2} (t - 1) + (n_L + n_{L - 1})t^{L - 2} + \\sum_{k = K}^{L - 2} n_k t^k \\\\\n      &= \\cdots \\\\\n      &= \\text{ some expression } \\cdot (t - 1) + \\underbrace{(n_L + \\dots + n_K) t^{K - 1}}_{= 0}\n  \\end{align*}\n\n  Now let \\(M\\) be a \\(G\\)-module and apply \\(\\Hom_G(-, M)\\) to get\n  \\[\n    \\begin{tikzcd}\n      \\Hom_G(\\Z G, M) \\ar[r, \"d^1\"] & \\Hom_G(\\Z G, M) \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  Note that \\(\\Hom_G(\\Z G, M) \\cong M\\) as abelian groups so we can rewrite the chain complex as\n  \\[\n    \\begin{tikzcd}\n      M \\ar[r, \"(t - 1) \\cdot \"] & M \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  so\n  \\begin{align*}\n    H^0(G, M) &= \\ker d^1 = \\{m \\in M: tm = m\\} = M^G \\\\\n    H^1(G, M) &= M/\\im d^1 = M/\\langle (t - 1) M\\rangle = M_G\n  \\end{align*}\n  which are called \\emph{invariants}\\index{invariant} and \\emph{coinvariants} of \\(M\\) respectively. 
\\(H^n(G, M) = 0\\) for \\(n \\geq 2\\).\n\\end{eg}\n\n\\begin{proposition}\\label{prop:cohomological dimension of free group}\n  If \\(G\\) is a free group then \\(H^n(G, M) = 0\\) for all \\(n \\geq 2\\).\n\\end{proposition}\n\n\\begin{proof}[Non-examinable]\n  Let \\(X\\) be a wedge of circles with \\(\\pi_1 X = G\\). The universal cover \\(\\widetilde X\\) is a tree so contractible. The simplicial chain complex of \\(\\widetilde X\\) gives a free resolution of \\(G\\)-modules of length \\(1\\).\n\\end{proof}\n\n\\begin{definition}[cohomological dimension]\\index{cohomological dimension}\n  A group \\(G\\) has \\emph{cohomological dimension} \\(n\\), written \\(\\mathrm{cd}(G) = n\\), if \\(H^m(G, M) = 0\\) for all \\(M\\) for all \\(m > n\\) but exists \\(M\\) such that \\(H^n(G, M) \\ne 0\\). If no such \\(n\\) exists then \\(\\mathrm{cd}(G) = \\infty\\).\n\\end{definition}\n\nThus free groups have cohomological dimension \\(1\\). The converse is a also true, by a deep theorem of Stallings and Swan.\\index{Stallings-Swan theorem}\n\nWe now investigate morphisms between complexes of \\(G\\)-modules, which should really be done in the context of homological algebra using derived categories. The proofs are omitted.\n\n\\begin{definition}[chain map]\n  Let \\((A_n, \\alpha_n)\\) and \\((B_n, \\beta_n)\\) be chain complexes of \\(G\\)-modules. A \\emph{chain map} is a sequence of \\(G\\)-linear maps \\(f_n: A_n \\to B_n\\) such that\n  \\[\n    f_{n - 1} \\compose \\alpha_n = \\beta_n \\compose f_n\n  \\]\n  for all \\(n\\).\n\\end{definition}\n\n\\begin{proposition}\n  A chain map \\((f_n): (A_n) \\to (B_n)\\) induces maps \\(f_*: H_n(A_\\bullet) \\to H_n(B_\\bullet)\\).\n\\end{proposition}\n\n\\begin{corollary}\n  A \\(G\\)-linear map \\(f: M \\to N\\) induces maps \\(f_*: H^*(G, M) \\to H^*(G, N)\\).\n\\end{corollary}\n\n\\begin{proposition}\n  If\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & M_1 \\ar[r] & M_2 \\ar[r] & M_3 \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  is a short exact sequence of \\(G\\)-modules then there is a long exact sequence\n  \\[\n    \\begin{tikzcd}[column sep=small]\n      \\cdots \\ar[r] & H^n(G, M_1) \\ar[r] & H^n(G, M_1) \\ar[r] & H^n(G, M_3) \\ar[r] & H^{n + 1}(G, M_1) \\ar[r] & \\cdots\n    \\end{tikzcd}\n  \\]\n\\end{proposition}\n\n\\subsection{Different projective resolutions}\n\n\\begin{theorem}\n  The definition of \\(H^n(G, M)\\) does not depend on the choice of projective resolutions.\n\\end{theorem}\n\nThis follows from the homological algebra fact that projective resolutions are unique up to quasi-isomorphism. The proof is non-examniable \\emph{except} for the definition of the chain map between two complexes.\n\n\\subsection{Bar resolution}\n\nWe define the so-called bar resolution\\index{bar resolution} for the trivial \\(G\\)-module \\(\\Z\\). Let \\(G^{(n)}\\) be the set of symbols \\([g_1|g_2| \\cdots |g_n]\\), \\(g_i \\in G\\). Let \\(F_n = \\Z G \\{G^{(n)}\\}\\). Define\n\\begin{align*}\n  d_0: F_0 &\\to \\Z \\\\\n  \\sum n_g g &\\mapsto \\sum n_g\n\\end{align*}\ni.e.\\ the augmentation map. 
Define\n\\begin{align*}\n  d_n: F_n &\\to F_{n - 1} \\\\\n  [g_1| \\cdots |g_n] &\\mapsto g_1[g_2| \\cdots | g_n] \\\\\n           &+ \\sum (-1)^i [g_1| \\cdots| g_ig_{i + 1}| \\cdots | g_n] \\\\\n           &+ (-1)^n [g_1| \\cdots |g_{n - 1}]\n\\end{align*}\n\n\\begin{proposition}\n  The bar resolution is a free resolution of \\(\\Z\\) by \\(G\\)-modules.\n\\end{proposition}\n\n\\begin{proof}[Non-examinable]\n  \\(F_n\\) is \\(G\\)-isomorphic to \\(\\Z\\{G^{n + 1}\\}\\) (with diagonal \\(G\\)-action) via \\([g_1| \\cdots |g_n] \\mapsto (1, g_1, g_1g_2, \\dots, g_1 \\cdots g_n)\\). The latter is a chain complex with boundary maps\n  \\[\n    (g_0, \\dots, g_n) \\mapsto \\sum (-1)^i (g_0, \\dots, \\hat g_i, \\dots, g_n)\n  \\]\n  and the complex is acyclic via the obvious chain homotopy\n  \\[\n    (g_0, \\dots, g_n) \\mapsto (1, g_0, \\dots, g_n).\n  \\]\n\\end{proof}\n\nSince different projective resolutions give the same cohomology groups we might reinterpret group cohomology in terms of bar resolution.\n\n\\begin{definition}\n  The group of \\emph{\\(n\\)th cochains} of \\(G\\) with coefficients in \\(M\\) is\n  \\[\n    C^n(G, M) = \\{G^n \\to M\\} = \\Hom_G(F_n, M).\n  \\]\n  The \\emph{\\(n\\)th coboundary map} is\n  \\begin{align*}\n    d^n: C^{n - 1}(G, M) &\\to C^n(G, M) \\\\\n    \\phi &\\mapsto ((g_1, \\dots, g_n) \\mapsto g_1 \\cdot \\phi(g_2, \\dots, g_n) \\\\\n                         &+ \\sum (-1)^i \\phi(g_1g_2, \\dots, g_n) \\\\\n                         &+ (-1)^n \\phi(g_1, \\dots g_{n - 1})\n  \\end{align*}\n  The \\emph{\\(n\\)th cocycle} and \\emph{\\(n\\)th coboundary} are\n  \\begin{align*}\n    Z^n(G, M) &= \\ker d^{n + 1} \\\\\n    B^n(G, M) &= \\im d^n\n  \\end{align*}\n  The \\emph{\\(n\\)th cohomology} is defined to be\n  \\[\n    H^n(G, M) = Z^n(G, M)/B^n(G, M).\n  \\]\n\\end{definition}\n\nWe can write out the low-dimensional cocyles and coboundaries explicitly.\n\\[\n  H^0(G, M) = \\ker d^1 = \\{m \\in M: gm = m \\text{ for all } g\\} = M^G\n\\]\nis called the set of \\emph{invariants} of \\(M\\).\n\\begin{align*}\n  \\ker d^2 &= \\{\\phi: G \\to M: \\phi(gh) = g \\phi(h) + \\phi(g)\\} \\\\\n  \\im d^1 &= \\{\\phi: G \\to M: \\text{ exists } m \\text{ such that } \\phi(g) = (g - 1)m\\}\n\\end{align*}\nare called \\emph{crossed homomorphisms} and \\emph{principal crossed homomorphisms}.\n\n\\begin{eg}\n  If \\(M\\) has trivial \\(G\\)-action then\n  \\[\n    H^1(G, M) = \\Hom(G, M).\n  \\]\n\\end{eg}\n\n\\begin{proposition}\n  Let \\(\\alpha: G_1 \\to G_2\\) be a group homomorphism and \\(M\\) a \\(G_2\\)-module. Then there is a natural homomorphism \\(\\alpha^*: H^n(G_2, M) \\to H^n(G_1, M)\\).\n\\end{proposition}\n\n\\begin{proof}\n  Given \\(f \\in C^n(G_2, M)\\), set \\(\\alpha^*f \\in C^n(G_1, M)\\) to be the composition \\(f \\compose \\alpha^n\\).\n\\end{proof}\n\nSuppose we have a short sequence of groups\n\\[\n  \\begin{tikzcd}\n    1 \\ar[r] & H \\ar[r] & G \\ar[r] & Q \\ar[r] & 1.\n  \\end{tikzcd}\n\\]\nThere is in general no long exact sequence on cohomologies in the style of the snake lemma.\n\n\\begin{eg}\n  Let \\(M = \\Z\\) (with trivial actions) and a short exact sequence \\(0 \\to \\Z \\to \\Z^2 \\to \\Z \\to 0\\). 
Then the sequence\n  \\[\n    \\begin{tikzcd}[row sep=small]\n      H^2(\\Z, \\Z) \\ar[r] \\ar[d, equal] & H^2(\\Z^2, \\Z) \\ar[r] & H^2(\\Z, \\Z) \\ar[d, equal] \\\\\n      0 & & 0\n    \\end{tikzcd}\n  \\]\n  cannot be exact as \\(H^2(\\Z^2, \\Z) \\ne 0\\).\n\\end{eg}\n\nThere is, however, \\emph{some} exact sequences coming from spectral sequences, namely the five-term exact sequence. In low dimensions they can be described explicitly. We state the result below and sketch the proof.\n\n\\begin{lemma}\n  Let \\(H \\normal G\\) and let \\(M\\) be a \\(G\\)-module. Let \\(G\\) act on \\(C^n(H, M)\\) via\n  \\[\n    (g \\cdot \\phi)(h_1, \\dots, h_n) = g\\phi(g^{-1} h_1g, \\dots, g^{-1}h_n g).\n  \\]\n  Then this gives an action of \\(G\\) on \\(H^n(H, M)\\). Moreover \\(H\\) acts trivially.\n\\end{lemma}\n\n\\begin{proof}\n  The first part is an easy computation:\n  \\begin{align*}\n    (g \\cdot d^n \\phi)(h_1, \\dots, h_n)\n    &= g d^n\\phi(g^{-1}h_1g, \\dots, g^{-1}h_ng) \\\\\n    &= g(g^{-1}h_1g) \\phi(g^{-1}h_2g, \\dots, g^{-1}h_ng) \\\\\n    &+ \\sum (-1)^i g \\phi(g^{-1}h_1g, \\dots, g^{-1} h_ih_{i + 1} g, \\dots, g^{-1} h_n g) \\\\\n    &+ (-1)^n g \\phi(g^{-1}h_1g, \\dots, g^{-1}h_{n - 1}g) \\\\\n    &= h_1 (g \\cdot \\phi)(h_2, \\dots, h_n) \\\\\n    &+ \\sum (-1)^i d (g \\cdot \\phi) (h_1, \\dots, h_i h_{i + 1}, \\dots, h_n) \\\\\n    &+ (-1)^n d(g \\cdot \\phi) (h_1, \\dots, h_{n - 1}) \\\\\n    &= (d^n(g \\cdot \\phi))(h_1, \\dots, h_n)\n  \\end{align*}\n  \n  To show \\(H\\) acts trivially, we show if \\(h \\in H\\), \\(\\phi \\in Z^n(G, M)\\) then \\(h \\cdot \\phi - \\phi \\in \\im d^n\\). Induction on \\(n\\). \\(n = 1\\),\n  \\[\n    0 = (d^2 \\phi)(h_1, h_2) = h_1\\phi(h_2) - \\phi(h_1h_2) + \\phi(h_1)\n  \\]\n  so\n  \\[\n    \\phi(h_1h_2) = h_1\\phi(h_2) + \\phi(h_1)\n  \\]\n  then\n  \\begin{align*}\n    (h \\cdot \\phi - \\phi)(h_1)\n    &= (h \\cdot \\phi)(h_1) - \\phi(h_1) \\\\\n    &= h \\phi(h^{-1}h_1h) - \\phi(h_1) \\\\\n    &= h(h^{-1} \\phi(h_1h) + \\phi(h^{-1})) - \\phi(h_1) \\\\\n    &= h_1\\phi(h) + \\phi(h_1) + h\\phi(h^{-1}) - \\phi(h_1) \\\\\n    &= h_1\\phi(h) - \\phi(h) \\\\\n    &= (h_1 - 1) \\phi(h) \\in \\im d^1\n  \\end{align*}\n  The induction process is another messy calculation and is left as an exercise.\n\\end{proof}\n\nNote if \\(G\\) has trivial action and \\(\\phi \\in C^1(H, M)\\) then\n\\[\n  (g \\cdot \\phi)(h) = \\phi(g^{-1}hg)\n\\]\nso \\(\\phi \\in H^1(H, M)^G\\) if and only if \\(\\phi(h) = \\phi(g^{-1}hg)\\) for all \\(g \\in G, h \\in H\\). Such a \\(\\phi: H \\to M\\) is called a \\(G\\)-invariant homomorphism.\n\n\\begin{theorem}[inflation-restriction exact sequence]\\index{inflation-restriction exact sequence}\n  Let \\(H \\normal G\\) and \\(Q = G/H\\). Let \\(M\\) be a \\(G\\)-module. Then there is an exact sequence\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & H^1(Q, M^H) \\ar[r] & H^1(G, M) \\ar[r] & H^1(H, M)^G \\ar[dll, out=0, in=180, overlay] \\\\\n      & H^2(Q, M^H) \\ar[r] & H^2(G, M)\n    \\end{tikzcd}\n  \\]\n\\end{theorem}\n\nNote that \\(H^1(H, M)^G = H^1(H, M)^Q\\) by the previous lemma.\n\n\\begin{proof}[Non-examinable]\n  We only define the maps appearing in the sequence. 
This is done via the restriction map\n  \\begin{align*}\n    H^n(G, M) &\\to H^n(H, M)^G \\\\\n    \\phi &\\mapsto \\phi|_{H^n}\n  \\end{align*}\n  the inflation map\n  \\begin{align*}\n    H^n(Q, M^H) &\\to H^n(G, M) \\\\\n    \\phi &\\mapsto \\phi \\compose q^n\n  \\end{align*}\n  where \\(q: G \\to Q\\), and the transgression map \\(Tg: H^1(H, M) \\to H^2(Q, M^H)\\) defined as follow: choose a set-theoretic section \\(s: Q \\to G\\), i.e.\\ a transversal, with \\(s(1) = 1\\). Define\n  \\begin{align*}\n    \\rho: G &\\to H \\\\\n    g &\\mapsto g \\cdot s(gH)^{-1}\n  \\end{align*}\n  If \\(\\phi: H \\to M\\) is a \\(Q\\)-invariant cocycle, define\n  \\begin{align*}\n    Tg(\\phi): G^2 &\\to M \\\\\n    (g_1, g_2) &\\mapsto \\phi(\\rho(g_1)\\rho(g_2)) - \\phi(\\rho(g_1g_2))\n  \\end{align*}\n  \\(Tg(\\phi)\\) descends to \\(Q^2 \\to M\\).\n\\end{proof}\n\n\\begin{corollary}[Hopf formula]\\index{Hopf formula}\n  Let \\(F\\) be free, \\(R \\normal F\\) and \\(Q = F/R\\). If \\(A\\) is abelian with trivial \\(F\\)-action then\n  \\[\n    H^2(Q, A) \\cong \\frac{\\{\\text{\\(F\\)-invariant homomorphisms } R \\to A\\}}{\\{\\text{restrictions of homomorphisms } F \\to A\\}}.\n  \\]\n\\end{corollary}\n\n\\begin{eg}\n  Suppose \\(Q = \\langle x_1, \\dots, x_d| r_1, \\dots, r_n \\rangle\\) is a presentation, so \\(F = F \\{x_1, \\dots, x_d\\}, R = \\langle\\langle r_1, \\dots, r_m \\rangle\\rangle\\). Then\n  \\begin{align*}\n    d(H^1(Q, \\Z)) &\\leq d \\\\\n    d(H^2(Q, \\Z)) &\\leq m\n  \\end{align*}\n\\end{eg}\n\n\\subsection{Cohomology and group extensions}\n\nLet \\(E\\) be a group, with an abelian normal subgroup \\(M\\). Let \\(G = E/M\\). Such an \\(E\\) is called an \\emph{extension of \\(G\\) by \\(M\\)}\\index{extension}. Two extensions are \\emph{equivalent}\\index{extension!equivalent} if there is a commutative diagram of homomorphisms\n\\[\n  \\begin{tikzcd}\n    1 \\ar[r] & M \\ar[r] \\ar[d, equal] & E \\ar[r] \\ar[d] & G \\ar[r] \\ar[d, equal] & 1 \\\\\n    1 \\ar[r] & M \\ar[r] & E' \\ar[r] & G \\ar[r] & 1\n  \\end{tikzcd}\n\\]\n\n\\begin{lemma}\n  Equivalent extensions are isomorphic as groups.\n\\end{lemma}\n\n\\begin{proof}\n  Same as five lemma.\n\\end{proof}\n\nNote that \\(M\\) comes with the structure of a \\(G\\)-module: the conjugation action of \\(E\\) on \\(M\\) descends to a \\(G\\)-action. If this action is trivial, the extension is called a \\emph{central extension}\\index{extension!central}.\n\nGiven a \\(G\\)-module \\(M\\), the group extension problem is concerned about the classification of the extensions. We can always form the \\emph{semidirect product} \\(E = M \\rtimes G\\). It is sometimes also called a \\emph{split extension}\\index{extension!split}.\n\n\\begin{definition}[splitting]\n  The \\emph{splitting} of an extension \\(E\\) is a group homomorphism \\(G \\to E\\) that is a section to \\(E \\to G\\).\n\\end{definition}\n\n\\begin{proposition}\n  Extensions which have a splitting are equivalent to \\(M \\rtimes G\\).\n\\end{proposition}\n\n\\begin{proof}\n  Set \\(M \\rtimes G \\to E, (m, g) \\mapsto i(m) s(g)\\) where \\(s\\) is a section of \\(E \\to G\\).\n\\end{proof}\n\nLet \\(E\\) be an arbitrary extension of \\(G\\) by \\(M\\). Let \\(s: G \\to E\\) be a set-theoretic section with \\(s(1) = 1\\). To measure how far \\(s\\) is from being a homomorphism, consider the function\n\\[\n  \\phi(g_1, g_2) = s(g_1) s(g_2) s(g_1g_2)^{-1}.\n\\]\nThen \\(s\\) is a homomorphism if and only if \\(\\phi\\) is constant. 
The image of \\(\\phi\\) is in \\(M\\) so \\(\\phi: G^2 \\to M\\) is an element of \\(C^2(G, M)\\). Claim that in fact \\(\\phi \\in Z^2(G, M)\\).\n\n\\begin{proof}\n  We compute \\(s(g_1)s(g_2)s(g_3)\\) in two ways:\n  \\begin{align*}\n    s(g_1)s(g_2)s(g_3)\n    &= \\phi(g_1, g_2) s(g_1g_2) s(g_3) \\\\\n    &= \\phi(g_1, g_2) \\phi(g_1g_2, g_3) s(g_1g_2g_3)\\\\\n    s(g_1)s(g_2)s(g_3)\n    &= s(g_1) \\phi(g_2, g_3) s(g_2g_3) \\\\\n    &= s(g_1) \\phi(g_2, g_3) s(g_1)^{-1} s(g_1) s(g_2g_3) \\\\\n    &= s(g_1) \\phi(g_2, g_3) s(g_1)^{-1} \\phi(g_1, g_2g_3) s(g_1g_2g_3)\n  \\end{align*}\n  so\n  \\[\n    \\phi(g_1, g_2) \\phi(g_1g_2, g_3) = s(g_1) \\phi(g_2, g_3) s(g_1)^{-1} \\phi(g_1, g_2g_3).\n  \\]\n  Recognising the first three terms on RHS as the action of \\(G\\) on \\(M\\) and convert to additive notation in \\(M\\), we get\n  \\[\n    \\phi(g_1, g_2) + \\phi(g_1g_2, g_3) = g_1 \\phi(g_2, g_3) + \\phi(g_1, g_2, g_3).\n  \\]\n\\end{proof}\n\nIn addition \\(\\phi\\) is a \\emph{normalised cocycle}\\index{normalised cocycle}, i.e.\n\\[\n  \\phi(1, g) = \\phi(g, 1) = 0.\n\\]\n\nIf we had chosen a different section \\(s': G \\to E\\), consider \\(\\psi(g) = s'(g) s(g)^{-1}\\) so \\(\\psi \\in C^1(G, M)\\). Have\n\\begin{align*}\n  s'(g_1)s'(g_2)\n  &= \\psi(g_1)s(g_1) \\psi(g_2)s(g_2) \\\\\n  &= \\psi(g_1)s(g_1) \\psi(g_2)s(g_1)^{-1} s(g_1)s(g_2) \\\\\n  &= \\psi(g_1)s(g_1)\\psi(g_2)s(g_1)^{-1} \\phi(g_1, g_2) s(g_1g_2) \\\\\n  &= \\psi(g_1)s(g_1)\\psi(g_2)s(g_1)^{-1} \\phi(g_1, g_2)\n\\end{align*}\nand a comparison with \\(s'(g_1)s'(g_2) = \\phi'(g_1, g_2)s'(g_1g_2)\\) shows \\([\\phi] \\in H^2(G, M)\\) is well-defined. This shows part of\n\n\\begin{theorem}\n  Let \\(G\\) be a group and \\(M\\) a \\(G\\)-module. There exists a bijection\n  \\[\n    \\{\\text{equivalence classes of extensions of \\(G\\) by \\(M\\)}\\} \\longleftrightarrow H^2(G, M).\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  Let to construct the inverse map. Let \\([\\phi] \\in H^2(G, M)\\) where \\(\\phi\\) is a \\emph{normalised cocycle}. Define a group structure on the set \\(M \\times G\\) by\n  \\[\n    (m_1, g_1) \\cdot (m_2, g_2) = (m_1 + g_1 \\cdot m_2 + \\phi(g_1, g_2), g_1g_2).\n  \\]\n  By the assumption on \\(\\phi\\), this defines a group structure with identity \\((0, 1)\\) and the inverse of \\((m, g)\\) is \\((-g^{-1} \\cdot (m + \\phi(g, g^{-1})), g^{-1})\\). This is an extension.\n\n  If we choose a different normalised cocycle \\(\\phi'\\) such that \\(\\phi - \\phi' = d^2 \\psi\\), then the obvious map \\(E_\\phi \\to E_{\\phi'}, (m, g) \\mapsto (m + \\psi(g), g)\\) is an equivalence of extensions.\n\n  It is an exercise to check the two maps are inverses to each other.\n\\end{proof}\n\nOne result that is used in the proof is\n\n\\begin{lemma}\n  Every cohomology class is represented by some normalised cocycle.\n\\end{lemma}\n\n\\begin{proof}\n  Let \\(\\phi \\in Z^2(G, M)\\). Let \\(\\psi(g) = \\phi(1, g)\\). Then \\(\\phi - d^2\\psi\\) is normalised:\n  \\begin{align*}\n    (\\phi - d^2\\psi)(1, g)\n    &= \\phi(1, g) - (\\phi(1, g) - \\phi(1, g) + \\phi(1, 1))\n      = \\phi(1, g) - \\phi(1, 1) \\\\\n    (\\phi - d^2\\psi)(g, 1)\n    &= \\phi(g, 1) - g \\phi(1, 1)\n  \\end{align*}\n  and they both vanish since \\(\\phi\\) is a cocycle.\n\\end{proof}\n\nWe may recover Hopf formula from this identification. Suppose \\(G\\) has presentation \\(\\langle x_1, \\dots, x_n | r_1, \\dots, r_m \\rangle\\). Then take \\(F\\) to be the free groups on \\(x_i\\)'s and \\(R = \\ker (F \\to G)\\). 
Suppose \\(E\\) is a central extension of \\(G\\) by \\(A\\). Choose some preimages \\(\\overline x_i \\in E\\) of generators \\(x_i\\) of \\(G\\). Let \\(\\overline r_i\\) be the element of \\(E\\) given by replacing each occurrence of \\(x_j\\) in \\(r_i\\) with \\(\\overline x_j\\). Then \\(\\overline r_i \\in A = \\ker (E \\to G)\\), say \\(\\overline r_i = a_i\\). Write down a group presentation\n\\[\n  \\overline E = \\langle \\overline x_1, \\dots, \\overline x_n, A | \\overline r_1 = a_1, \\dots, \\overline r_m = a_m, A \\text{ central}, \\text{relations of A}\\rangle\n\\]\nThere exists a natural diagram of exact rows\n\\[\n  \\begin{tikzcd}\n    & A \\ar[r] \\ar[d, equal] & \\overline E \\ar[r] \\ar[d] & G \\ar[r] \\ar[d, equal] & 1 \\\\\n    1 \\ar[r] & A \\ar[r] & E \\ar[r] & G \\ar[r] & 1\n  \\end{tikzcd}\n\\]\nIt follows that \\(A \\embed \\overline E\\) and \\(\\overline E \\cong E\\). We can try to define an \\(F\\)-invariant homomorphism \\(R \\to A\\) by \\(r_i \\mapsto a_i\\). Fact (non-examinable): this is a well-defined \\(F\\)-invariant homomorphism if and only if \\(A \\to \\overline E\\) is an injection. We made a choice of preimages \\(\\overline x_i\\) of \\(x_i\\) in \\(E\\). A different choice \\(\\overline x_i'\\) differs from \\(\\overline x_i\\) by an element \\(b_i \\in A\\). Then \\(x_i \\mapsto b_i\\) gives a homomorphism \\(f: F \\to A\\) and, since the \\(b_i\\) are central, \\(\\overline r_i' = \\overline r_i f(r_i)\\).\n\n\\begin{eg}\n  Let \\(G = \\langle x_1, x_2| x_1x_2x_1^{-1}x_2^{-1}x_1 \\rangle\\). We show \\(H^2(G, \\Z) = 0\\). Any central extension of \\(G\\) by \\(\\Z\\) has a presentation\n  \\[\n    E = \\langle \\overline x_1, \\overline x_2, t | \\overline x_1 \\overline x_2 \\overline x_1^{-1} \\overline x_2^{-1} \\overline x_1 = t^k, t \\text{ central} \\rangle.\n  \\]\n  Now we can make a substitution \\(\\overline x_1 \\mapsto \\overline x_1 t^{-k} = \\overline x_1'\\),\n  \\[\n    E \\cong \\langle \\overline x_1', \\overline x_2, t | \\overline x_1' t^k \\cdot \\overline x_2 (\\overline x_1' t^k)^{-1} \\overline x_2^{-1} (\\overline x_1' t^k) = t^k, t \\text{ central}\\rangle = E'\n  \\]\n  \\(E\\) and \\(E'\\) are equivalent as extensions and we can simplify \\(E'\\):\n  \\begin{align*}\n    \\langle \\overline x_1', \\overline x_2, t | \\overline x_1' \\overline x_2 (\\overline x_1')^{-1} \\overline x_2^{-1} \\overline x_1' = 1, t \\text{ central}\\rangle = \\Z \\times G\n  \\end{align*}\n  It follows that \\(H^2(G, \\Z) = 0\\).\n\\end{eg}\n\n\\subsection{Worked example: \\(\\Z^2\\)}\n\nLet \\(T = \\Z^2 = \\langle a, b \\rangle\\). We will classify all central extensions of \\(T\\) by \\(\\Z\\). 
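Before the computation, a hedged preview (an added remark, not from the lectures) via the Hopf formula: writing \\(T = \\langle a, b | [a, b] \\rangle\\) gives \\(R = \\langle\\langle [a, b] \\rangle\\rangle = [F, F]\\) (the quotient by either subgroup is \\(\\Z^2\\)). Every homomorphism \\(F \\to \\Z\\) kills \\([F, F]\\), so the Hopf formula reduces to the \\(F\\)-invariant homomorphisms \\(R \\to \\Z\\), i.e.\\ \\(\\Hom(R/[R, F], \\Z)\\); one can check \\(R/[R, F] \\cong \\Z\\), generated by the image of \\([a, b]\\) (this is the integral Hopf formula for \\(H_2(T) \\cong \\Z\\)). So we should expect \\(H^2(T, \\Z) \\cong \\Z\\), which the computation below confirms. 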
Start with a free resolution derived from topology\n\\[\n  \\begin{tikzcd}\n    0 \\ar[r] & \\Z T \\ar[r, \"\\beta\"] & (\\Z T)^2 \\ar[r, \"\\alpha\"] & \\Z T \\ar[r, \"\\varepsilon\"] & \\Z \\ar[r] & 0\n  \\end{tikzcd}\n\\]\nwhere\n\\begin{itemize}\n\\item \\(\\varepsilon\\) is the augmentation map.\n\\item \\(\\alpha(x, y) = x(a - 1) + y(b - 1)\\).\n\\item \\(\\beta(z) = (z(1 - b), z(a - 1))\\).\n\\end{itemize}\nExactness can either be derived from topology (square tiling of the plane), or by direct computation.\n\nApply \\(\\Hom_{\\Z T}(-, \\Z)\\) to get\n\\[\n  \\begin{tikzcd}\n    \\Hom_{\\Z T}(\\Z T, \\Z) \\ar[r, \"\\alpha^*\"] & \\Hom_{\\Z T}((\\Z T)^2, \\Z) \\ar[r, \"\\beta^*\"] & \\Hom_{\\Z T}(\\Z T, \\Z) \\ar[r] & 0\n  \\end{tikzcd}\n\\]\nwhere\n\\begin{align*}\n  \\beta^*(f)(z)\n  &= f(z (1 - b), z(a - 1)) \\\\\n  &= f(z - zb, 0) + f(0, za - z) \\\\\n  &= (1 - b) \\cdot f(z, 0) + (a - 1) \\cdot f(0, z) \\\\\n  &= 0\n\\end{align*}\nsince the \\(T\\)-action on \\(\\Z\\) is trivial, so \\(\\beta^* = 0\\). Similarly \\(\\alpha^* = 0\\). Thus\n\\[\n  H^i(T, \\Z) =\n  \\begin{cases}\n    \\Z & i = 0\\\\\n    \\Z^2 & i = 1 \\\\\n    \\Z & i = 2\n  \\end{cases}\n\\]\n\nTo get extensions, we turn to bar resolutions. We seek a chain map\n\\[\n  \\begin{tikzcd}\n    \\dots \\ar[r] & \\Z T\\{T^2\\} \\ar[r, \"d_2\"] \\ar[d, \"f_2\"] & \\Z T \\{T\\} \\ar[r, \"d_1\"] \\ar[d, \"f_1\"] & \\Z T \\ar[r, \"\\varepsilon\"] \\ar[d, \"f_0\"] & \\Z \\ar[r] \\ar[d, equal] & 0 \\\\\n    0 \\ar[r] & \\Z T \\ar[r, \"\\beta\"] & (\\Z T)^2 \\ar[r, \"\\alpha\"] & \\Z T \\ar[r, \"\\varepsilon\"] & \\Z \\ar[r] & 0\n  \\end{tikzcd}\n\\]\nObviously \\(f_0 = \\id\\). \\(f_1\\) should satisfy \\(\\alpha f_1 = f_0 d_1 = d_1\\). Want to find an element \\((x_{r, s}, y_{r, s}) \\in (\\Z T)^2\\) such that\n\\[\n  \\alpha(x_{r, s}, y_{r, s}) = d_1([a^rb^s]) = a^rb^s - 1 = (a^r - 1)b^s + (b^s - 1)\n\\]\nthen define \\(f_1\\) by \\([a^rb^s] \\mapsto (x_{r, s}, y_{r, s})\\). By commutativity of \\(T\\),\n\\begin{align*}\n  x_{r, s} &= \\frac{a^r - 1}{a - 1} b^s = S(a, r) b^s \\\\\n  y_{r, s} &= \\frac{b^s - 1}{b - 1} = S(b, s)\n\\end{align*}\nwhere\n\\[\n  S(a, r) =\n  \\begin{cases}\n    1 + a + \\dots + a^{r - 1} & r \\geq 0 \\\\\n    -(a^{-1} + \\dots + a^r) & r < 0\n  \\end{cases}\n\\]\n\nSo \\(f_1([a^rb^s]) = (S(a, r)b^s, S(b, s))\\). By a messy calculation we similarly find\n\\[\n  f_2([a^rb^s|a^tb^u]) = S(a, r)b^sS(b, u).\n\\]\n\nA cohomology class \\(p \\in \\Z \\cong \\Hom_{\\Z T}(\\Z T, \\Z)\\) is represented by the \\(2\\)-cochain given by the composition \n\\[\n  (a^rb^s, a^tb^u) \\mapsto S(a, r)b^sS(b, u) \\mapsto p r u\n\\]\nso the group structure on the set \\(\\Z \\times T\\) corresponding to this cochain is\n\\[\n  (m, a^rb^s) \\cdot (n, a^tb^u) = (m + n + pru, a^{r + t} b^{s + u}).\n\\]\nMore concretely, this group has a \\(3\\)-dimensional representation\n\\[\n  (m, a^rb^s) \\mapsto\n  \\begin{pmatrix}\n    1 & pr & m \\\\\n    0 & 1 & s \\\\\n    0 & 0 & 1\n  \\end{pmatrix}\n\\]\nThe group of central extensions of \\(T\\) by \\(\\Z\\) is infinite cyclic, generated by the class \\(p = 1\\).\n\n\\subsection{Cohomology of profinite groups}\n\nThe cohomology theory works for profinite groups.\n\n\\begin{definition}[\\(G\\)-module]\\index{\\(G\\)-module}\n  Let \\(G\\) be a profinite group. 
A \\emph{finite \\(G\\)-module} is a finite abelian group \\(M\\) with a continuous \\(G\\)-action \\(G \\times M \\to M\\).\n\\end{definition}\n\nTo avoid defining the group ring of a profinite group, we use the ad hoc definition\n\n\\begin{definition}\\index{group cohomology!profinite group}\n  Let \\(G\\) be a profinite group and \\(M\\) a finite \\(G\\)-module. Define\n  \\[\n    C^n(G, M) = \\{G^n \\to M \\text{ continuous}\\}\n  \\]\n  and \\(d^n\\) given by the same formula as before. Define \\(H^n(G, M) = \\ker d^{n + 1}/\\im d^n\\).\n\\end{definition}\n\nCourse convention: all general results in this chapter and example sheet 4 may be assumed to hold for profinite groups, where all groups are profinite, all functions are continuous and all modules are finite.\n\n\\begin{eg}\n  An extension of \\(G\\) by \\(M\\) is a profinite group \\(E\\) with \\(M \\normal E\\) such that \\(E/M \\cong G\\). An equivalence of extensions is a continuous homomorphism respecting \\(M\\) and \\(G\\). Then there exists a bijection between equivalence classes of extensions and \\(H^2(G, M)\\).\n\\end{eg}\n\nOne can treat profinite groups as discrete groups, but the resulting cohomology theory is horrid. Another question: why finite modules only? For example consider the exact sequence of \\(\\hat \\Z\\)-modules with trivial \\(\\hat \\Z\\)-action\n\\[\n  \\begin{tikzcd}\n    0 \\ar[r] & \\Z \\ar[r] & \\Q \\ar[r] & \\Q/\\Z \\ar[r] & 0\n  \\end{tikzcd}\n\\]\nHave\n\\[\n  H^1(\\hat \\Z, \\Z) = H^1(\\hat \\Z, \\Q) = 0\n\\]\nas any continuous homomorphism \\(\\hat \\Z \\to \\Z\\) or \\(\\hat \\Z \\to \\Q\\) has compact image, hence is trivial. \\(H^1(\\hat \\Z, \\Q/\\Z) \\cong \\Q/\\Z\\). But we should have a long exact sequence, which would force \\(H^2(\\hat \\Z, \\Z) \\ne 0\\). Thus the free profinite group \\(\\hat \\Z\\) does not have ``cohomological dimension'' \\(1\\).\n\n\\subsubsection{Pro-\\(p\\) groups of cohomological dimension 1}\n\nFor simplicity, in this section we assume all pro-\\(p\\) groups are tfg. The aim of this section is to prove Stallings-Swan for tfg pro-\\(p\\) groups\\index{Stallings-Swan theorem!pro-\\(p\\) group}\\index{cohomological dimension!pro-\\(p\\) group}, i.e.\\ \\Cref{prop:cohomological dimension of free group} and its converse:\n\n\\begin{theorem}\n  \\label{thm:Stallings-Swan for pro-p group}\n  A tfg pro-\\(p\\) group \\(G\\) is free if and only if \\(\\mathrm{cd}(G) = 1\\), if and only if \\(H^2(G, \\F_p) = 0\\).\n\\end{theorem}\n\nNote that for any non-trivial tfg pro-\\(p\\) group \\(G\\), in particular free pro-\\(p\\) groups, \\(\\mathrm{cd}(G) \\geq 1\\) as\n\\[\n  H^1(G, \\F_p) \\cong \\Hom(G, \\F_p) \\ne 0.\n\\]\n\nWe first show the equivalence of the characterisations.\n\n\\begin{theorem}\n  Let \\(G\\) be a pro-\\(p\\) group. Then\n  \\[\n    \\mathrm{cd}(G) = \\max\\{n : H^n(G, \\F_p) \\ne 0\\}.\n  \\]\n\\end{theorem}\n\n\\begin{definition}[simple]\\index{simple}\n  A \\(G\\)-module \\(M\\) is \\emph{simple} if the only \\(G\\)-submodules are \\(0\\) and \\(M\\).\n\\end{definition}\n\n\\begin{proposition}\n  Fix \\(n \\geq 0\\). Let \\(G\\) be a profinite group and suppose \\(H^n(G, S) = 0\\) for all simple finite \\(G\\)-modules \\(S\\). Then \\(H^n(G, M) = 0\\) for all finite \\(M\\).\n\\end{proposition}\n\n\\begin{proof}\n  Suppose for contradiction that \\(M\\) of minimal size has nonvanishing \\(H^n(G, M)\\). 
\\(M\\) is not simple so exists a proper non-trivial \\(G\\)-submodule \\(N \\leq M\\), giving rise to a short exact sequence\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & N \\ar[r] & M \\ar[r] & M/N \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  of \\(G\\)-modules. By the long exact sequence and minimality of \\(M\\) (both \\(N\\) and \\(M/N\\) are smaller), \\(H^n(G, M) = 0\\), absurd.\n\\end{proof}\n\n\\begin{definition}[\\(p\\)-primary component]\\index{\\(p\\)-primary component}\n  Let \\(M\\) be a finite \\(G\\)-module. Let \\(M_p\\) be the Sylow \\(p\\)-subgroup of \\(M\\), called the \\emph{\\(p\\)-primary component} of \\(M\\). Then \\(M = \\bigoplus_p M_p\\).\n\\end{definition}\n\n\\begin{proposition}\n  Let \\(G\\) be a pro-\\(p\\) group, \\(M\\) a finite \\(G\\)-module. Then\n  \\[\n    H^n(G, M) = H^n(G, M_p)\n  \\]\n  for \\(n \\geq 1\\).\n\\end{proposition}\n\n\\begin{proof}\n  Write \\(M = M_p \\oplus M'\\) where \\(M'\\) is the direct sum of the other \\(q\\)-primary components. Then\n  \\[\n    H^n(G, M) = H^n(G, M_p) \\oplus H^n(G, M')\n  \\]\n  (the finite \\(p\\)-group case is an exercise in example sheet 4) and we show \\(H^n(G, M') = 0\\). Let \\(\\phi: G^n \\to M'\\) be a continuous cocycle. Claim \\(\\phi\\) factors as \\(G^n \\to (G/K)^n \\xrightarrow{\\phi_K} M'\\) for some \\(K \\normal_o G\\).\n\n  \\begin{proof}\n    We want to find \\(K\\) such that each fibre of \\(\\phi\\) is a union of cosets of \\(K^n\\). For each \\(m \\in M'\\), \\(\\phi^{-1}(m)\\) is open and closed, so can be written as \\(\\phi^{-1}(m) = \\bigcup a_i K_i^n\\) where \\(K_i \\normal_o G\\). The cover may be taken to be finite and we take \\(K\\) to be the intersection of all the \\(K_i\\)'s.\n  \\end{proof}\n\n  But \\(H^n(G/K, M') = 0\\) as \\(G/K\\) is a finite \\(p\\)-group and \\(M'\\) has order coprime to \\(p\\), so exists \\(\\psi_K: (G/K)^{n - 1} \\to M'\\) such that \\(\\phi_K = d^n\\psi_K\\). Now set \\(\\psi: G^{n - 1} \\to (G/K)^{n - 1} \\xrightarrow{\\psi_K} M'\\), so \\(\\phi = d^n \\psi\\).\n\\end{proof}\n\n\\begin{remark}\n  The argument actually shows that if \\(G = \\varprojlim G/K\\) then \\(H^n(G, M) = \\varinjlim H^n(G/K, M)\\).\n\\end{remark}\n\n\\begin{proposition}\n  Let \\(G\\) be a pro-\\(p\\) group. The only simple \\(p\\)-primary \\(G\\)-module is \\(\\F_p\\).\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet 4.\n\\end{proof}\n\nCombining these gives\n\n\\begin{proposition}\n  Let \\(G\\) be a pro-\\(p\\) group. If \\(H^n(G, \\F_p) = 0\\) then \\(H^n(G, M) = 0\\) for all finite modules \\(M\\).\n\\end{proposition}\n\n\\begin{proposition}\n  Suppose there exists \\(n\\) such that \\(H^n(G, M) = 0\\) for all \\(M\\). Then \\(\\mathrm{cd}(G) \\leq n - 1\\).\n\\end{proposition}\n\n\\begin{proof}[Non-examinable]\n  By course convention we shall prove this for an abstract group \\(G\\). The main idea is \\emph{dimension shifting}: suppose there is a short exact sequence of \\(G\\)-modules\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & M \\ar[r] & N \\ar[r] & M' \\ar[r] & 0\n    \\end{tikzcd}\n  \\]\n  where \\(N\\) is cohomologically trivial. Then from the long exact sequence one knows \\(H^i(G, M') \\cong H^{i + 1}(G, M)\\).\n  \n  Example sheet 4 shows that for \\(K \\leq G\\), the following holds for the coinduced module \\(\\mathrm{coind}^K_G(M) = \\Hom_{\\Z K}(\\Z G, M)\\):\n  \\[\n    H^n(G, \\mathrm{coind}^K_G(M)) \\cong H^n(K, M).\n  \\]\n  So take \\(K = 1\\): \\(\\mathrm{coind}^1_G(M) = \\Hom_\\Z(\\Z G, M)\\) is cohomologically trivial. 
Finally the map\n  \\begin{align*}\n    \\alpha: M &\\to \\Hom(\\Z G, M) \\\\\n    m &\\mapsto (x \\mapsto xm)\n  \\end{align*}\n  is an injection of \\(G\\)-modules, so gives the required short exact sequence.\n\\end{proof}\n\n\\begin{corollary}\n  Free pro-\\(p\\) groups have cohomological dimension \\(1\\).\n\\end{corollary}\n\n\\begin{proof}\n  It suffices to show \\(H^2(G, \\F_p) = 0\\), i.e.\\ to show every extension of \\(G\\) by \\(\\F_p\\) splits. Let \\(G\\) be free on \\(X\\) for some finite \\(X\\). Suppose we have an extension\n  \\[\n    \\begin{tikzcd}\n      1 \\ar[r] & \\F_p \\ar[r] & E \\ar[r] & F(X) \\ar[r] & 1\n    \\end{tikzcd}\n  \\]\n  \\(E\\) is again a pro-\\(p\\) group. For each \\(x \\in X\\) choose a preimage \\(e_x \\in E\\). The map \\(x \\mapsto e_x\\) extends to a unique continuous homomorphism \\(F(X) \\to E\\), which is a splitting.\n\\end{proof}\n\nNote that the proof works for free groups as well, thus providing an algebraic proof of \\Cref{prop:cohomological dimension of free group}.\n\n\\begin{theorem}\n  Let \\(G\\) and \\(G'\\) be tfg pro-\\(p\\) groups. Let \\(f: G \\to G'\\) be a continuous homomorphism. Assume\n  \\begin{itemize}\n  \\item \\(f^*: H^1(G', \\F_p) \\to H^1(G, \\F_p)\\) is an isomorphism,\n  \\item \\(f^*: H^2(G', \\F_p) \\to H^2(G, \\F_p)\\) is an injection.\n  \\end{itemize}\n  Then \\(f\\) is an isomorphism.\n\\end{theorem}\n\nHeuristics: \\(H^1(G, \\F_p) = \\Hom(G, \\F_p) = \\Hom(G/\\Phi(G), \\F_p)\\), the dual of the \\(\\F_p\\)-vector space \\(G/\\Phi(G)\\). The first condition tells us something about generators. In particular \\(f\\) is a surjection. \\(H^2\\) is related to relations, and the second condition says that we impose no additional relations so \\(f\\) is an injection.\n\n\\begin{proof}[Non-examinable]\n  Let \\(G_n = \\gamma_n^{(p)}(G)\\), the lower central \\(p\\)-series. Recall that the \\(G_n\\) are all open and \\(G = \\varprojlim G/G_n\\). The \\(G_n\\)'s are fully characteristic, therefore \\(f\\) induces maps \\(f_n: G/G_n \\to G'/G_n'\\). We will show that they are all isomorphisms, and hence so is \\(f\\).\n\n  Induction on \\(n\\). For \\(n = 2\\), \\(f_2: G/\\Phi(G) \\to G'/\\Phi(G')\\). By the remark before, the transpose of \\(f_2\\) is an isomorphism, hence so is \\(f_2\\) itself.\n\n  Suppose the result holds for \\(n\\). If we can show \\(G_n/G_{n + 1} \\to G_n'/G_{n + 1}'\\) is an isomorphism then combining with the induction hypothesis we deduce the result for \\(n + 1\\) from the diagram\n  \\[\n    \\begin{tikzcd}\n      1 \\ar[r] & G_n/G_{n + 1} \\ar[r] \\ar[d, \"\\cong\"] & G/G_{n + 1} \\ar[r] \\ar[d] & G/G_n \\ar[r] \\ar[d, \"\\cong\"] & 1 \\\\\n      1 \\ar[r] & G_n'/G_{n + 1}' \\ar[r] & G'/G_{n + 1}' \\ar[r] & G'/G_n' \\ar[r] & 1\n    \\end{tikzcd}\n  \\]\n\n  \\(G_n/G_{n + 1}\\) is a finite-dimensional \\(\\F_p\\)-vector space so \\(G_n/G_{n + 1} \\to G_n'/G_{n + 1}'\\) is an isomorphism if and only if its dual \\(H^1(G_n'/G_{n + 1}', \\F_p) \\to H^1(G_n/G_{n + 1}, \\F_p)\\) is. A homomorphism \\(\\phi: G_n \\to \\F_p\\) factors through \\(G_n/G_{n + 1}\\) if and only if \\(\\phi([g, g']) = 0\\) for all \\(g \\in G, g' \\in G_n\\), if and only if\n  \\[\n    0 = \\phi(g^{-1}(g')^{-1}gg') = - \\phi(g^{-1}g'g) + \\phi(g'),\n  \\]\n  if and only if \\(\\phi\\) is \\(G\\)-invariant. Thus \\(H^1(G_n/G_{n + 1}, \\F_p) = H^1(G_n, \\F_p)^G\\). 
The five term exact sequence induced by\n  \\[\n    \\begin{tikzcd}\n      1 \\ar[r] & G_n \\ar[r] & G \\ar[r] & G/G_n \\ar[r] & 1\n    \\end{tikzcd}\n  \\]\n  says we have a commutative diagram of exact sequences\n  \\[\n    \\begin{tikzcd}\n      H^1(G'/G'_n) \\ar[r] \\ar[d, \"\\cong\"] & H^1(G') \\ar[r] \\ar[d, \"\\cong\"] & H^1(G'_n)^{G'} \\ar[r] \\ar[d] & H^2(G'/G'_n) \\ar[r] \\ar[d, \"\\cong\"] & H^2(G') \\ar[d, hook] \\\\\n      H^1(G/G_n) \\ar[r] & H^1(G) \\ar[r] & H^1(G_n)^G \\ar[r] & H^2(G/G_n) \\ar[r] & H^2(G)\n    \\end{tikzcd}\n  \\]\n  By the induction hypothesis and injectivity on \\(H^2\\), the middle vertical map is an isomorphism by the five lemma. Therefore \\(G_n/G_{n + 1} \\cong G_n'/G_{n + 1}'\\).\n\\end{proof}\n\nIn fact we get for free from the proof\n\n\\begin{theorem}\n  If \\(\\Gamma\\) and \\(\\Gamma'\\) are finitely generated abstract groups and \\(f: \\Gamma \\to \\Gamma'\\) is a homomorphism satisfying the same conditions, then \\(\\hat f: \\hat \\Gamma_{(p)} \\to \\hat \\Gamma_{(p)}'\\) is an isomorphism.\n\\end{theorem}\n\n\\begin{proof}\n  Set \\(\\Gamma_n = \\gamma_n^{(p)}(\\Gamma)\\). Then \\(\\hat \\Gamma_{(p)} = \\varprojlim \\Gamma/\\Gamma_n\\). Proceed as before.\n\\end{proof}\n\n\\begin{proof}[Proof of \\Cref{thm:Stallings-Swan for pro-p group}]\n  Suppose \\(x_1, \\dots, x_d\\) is a generating set of minimal size. Let \\(F\\) be the free pro-\\(p\\) group on the \\(x_i\\)'s and consider \\(f: F \\to G\\). \\(F/\\Phi(F) \\to G/\\Phi(G)\\) is an isomorphism since both are \\(\\F_p\\)-vector spaces of dimension \\(d\\) and the map is surjective, and \\(H^2(G, \\F_p) \\to H^2(F, \\F_p)\\) is an injection as \\(H^2(G, \\F_p) = 0\\) by assumption. Thus \\(f\\) is an isomorphism.\n\\end{proof}\n\n\\begin{eg}\n  Let \\(\\Gamma = \\langle x_1, x_2| x_1x_2x_1^{-1}x_2^{-1}x_1 \\rangle\\). Recall \\(H^2(\\Gamma, \\Z) = 0\\). The same argument shows \\(H^2(\\Gamma, \\F_p) = 0\\). \\(H^1(\\Gamma, \\F_p) = \\Hom(\\Gamma, \\F_p)\\). Let \\(\\phi: \\Gamma \\to \\F_p\\). Then\n  \\[\n    0 = \\phi(x_1x_2x_1^{-1}x_2^{-1}x_1) = \\phi(x_1)\n  \\]\n  and there is no further relation, so \\(H^1(\\Gamma, \\F_p) = \\F_p\\) generated by \\(x_1 \\mapsto 0, x_2 \\mapsto 1\\). Let \\(f: \\Z \\to \\Gamma, 1 \\mapsto x_2\\). Then \\(f\\) induces \\(\\hat \\Gamma_{(p)} \\cong \\Z_p\\).\n\\end{eg}\n\n\\subsubsection{Presentation of pro-\\(p\\) groups}\n\n\\begin{definition}[presentation of pro-\\(p\\) group]\\index{presentation}\n  Let \\(X\\) be a finite set and \\(F\\) a free pro-\\(p\\) group on \\(X\\). Let \\(R \\subseteq F\\). The pro-\\(p\\) group with \\emph{presentation} \\(\\floor{X|R}_p\\) is defined to be \\(F/\\overline{\\langle\\langle R\\rangle\\rangle}\\).\n\\end{definition}\n\nNote that not all elements in \\(\\floor{X|R}_p\\) can be written as a product of elements of \\(X\\).\n\n\\begin{lemma}\n  Let \\(F_{\\mathrm{abs}}\\) be the abstract free group on \\(X\\) and let \\(R \\subseteq F_{\\mathrm{abs}}\\). Let \\(\\Gamma = \\langle X | R\\rangle, G = \\floor{X|R}_p\\). Then \\(G = \\hat \\Gamma_{(p)}\\).\n\\end{lemma}\n\n\\begin{proof}\n  It suffices to look at the \\(p\\)-group quotients of \\(G\\) and \\(\\Gamma\\). A quotient \\(\\Gamma \\to P\\), where \\(P\\) is a \\(p\\)-group, is the same as a function \\(X \\to P\\) such that its extension to \\(F_{\\mathrm{abs}}\\) contains \\(R\\) in its kernel. But this is exactly the same as a continuous quotient \\(G \\to P\\).\n\\end{proof}\n\n\\begin{theorem}\n  \\label{thm:size of generators and relators of pro-p group}\n  Let \\(G\\) be a tfg pro-\\(p\\) group. Let \\(X\\) be a finite tgs of \\(G\\). 
Let \\(r_X\\) be the minimal size of a set \\(R \\subseteq F(X)\\) such that \\(G = \\floor{X|R}_p\\). Then\n  \\[\n    |X| - r_X = \\dim_{\\F_p} H^1(G, \\F_p) - \\dim_{\\F_p} H^2(G, \\F_p).\n  \\]\n  In particular if \\(X\\) is a minimal generating set then \\(r_X = \\dim H^2(G, \\F_p)\\).\n\\end{theorem}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item One may ask the same question for abstract groups. If \\(\\Gamma\\) is a finitely generated (abstract) group, \\(X\\) a finite generating set, let \\(\\rho_X\\) be the minimal size of an \\(R\\) such that \\(\\Gamma = \\langle X| R\\rangle\\). Then what can we say about \\(|X| - \\rho_X\\)? It is a subtle question and in general the answer depends on \\(X\\).\n  \\item For \\(G\\) a finite \\(p\\)-group, let \\(X\\) be a generating set. One may ask: must \\(\\rho_X = r_X\\)? Certainly \\(r_X \\leq \\rho_X\\) by the lemma. The other direction is open.\n  \\end{enumerate}\n\\end{remark}\n\n\\begin{lemma}\n  Let \\(G\\) and \\(L\\) be profinite groups. Assume \\(L\\) acts continuously on \\(G\\) by automorphisms via \\(\\rho: L \\times G \\to G\\). Then there is a proper open normal subgroup of \\(G\\) which is invariant under the action of \\(L\\).\n\\end{lemma}\n\nNote that if we assume \\(G\\) is tfg then this follows from the old trick of taking all intersections.\n\n\\begin{proof}\n  Let \\(U\\) be a proper open normal subgroup of \\(G\\). Claim \\(\\tilde L = \\{\\ell \\in L: \\ell \\cdot U = U\\}\\) is open in \\(L\\). If this claim holds, there are finitely many subgroups of the form \\(\\ell \\cdot U\\) by orbit-stabiliser. Their intersection is then an \\(L\\)-invariant proper open normal subgroup.\n\n  Let \\(\\ell \\in \\tilde L\\). For each \\(u \\in U\\) we have \\(\\ell \\cdot u \\in U\\). Can find \\(A_u \\subseteq L, B_u \\subseteq U\\) open such that\n  \\[\n    (\\ell, u) \\in A_u \\times B_u \\subseteq \\rho^{-1}(U).\n  \\]\n  \\(B_u\\)'s cover \\(U\\) which is compact so we can take a finite subcover \\(B_{u_1}, \\dots, B_{u_k}\\). Take \\(A = A_{u_1} \\cap \\cdots \\cap A_{u_k}\\). Left to show \\(A \\subseteq \\tilde L\\): if \\(a \\in A, u \\in U\\) then exists \\(u_i\\) such that \\(u \\in B_{u_i}\\) and \\(a \\in A_{u_i}\\), hence \\((a, u) \\in A_{u_i} \\times B_{u_i} \\subseteq \\rho^{-1}(U)\\). Thus \\(a \\cdot U \\subseteq U\\) for each \\(a \\in A\\); since \\(a\\) acts by an automorphism, \\(a \\cdot U\\) is an open subgroup of the same index as \\(U\\), so \\(a \\cdot U = U\\) and \\(A \\subseteq \\tilde L\\).\n\\end{proof}\n\n\\begin{lemma}\n  Let \\(F\\) be a free pro-\\(p\\) group, \\(N \\normal F\\) a closed proper normal subgroup of \\(F\\). Then there exists a set \\(R \\subseteq N\\) of size \\(r\\) such that \\(N = \\overline{\\langle\\langle R\\rangle\\rangle}\\) if and only if \\(\\dim_{\\F_p} H^1(N, \\F_p)^F \\leq r\\).\n\\end{lemma}\n\n\\begin{proof}\n  Recall\n  \\begin{align*}\n    H^1(N, \\F_p)^F\n    &= \\{\\phi: N \\to \\F_p \\text{ homomorphism such that } \\phi(f^{-1}nf) = \\phi(n)\\} \\\\\n    &= \\{\\phi: N/N^p[N, F] \\to \\F_p\\}\n  \\end{align*}\n  If \\(N = \\overline{\\langle\\langle R\\rangle\\rangle}\\), an \\(F\\)-invariant map \\(\\phi: N \\to \\F_p\\) is determined by its restriction to \\(R\\), so there is an injection \\(H^1(N, \\F_p)^F \\embed \\F_p^{|R|}\\).\n\n  Conversely suppose \\(\\dim H^1(N, \\F_p)^F = r\\), then \\(\\dim N/N^p[N, F] = r\\). Let \\(R \\subseteq N\\) be a lift of a basis of this vector space. \\(R\\) has the property that every \\(F\\)-invariant homomorphism \\(N \\to \\F_p\\) which kills \\(R\\) is trivial. Claim \\(N = \\overline{\\langle\\langle R\\rangle\\rangle}\\): suppose \\(N' = \\overline{\\langle\\langle R\\rangle\\rangle} \\ne N\\). 
Then \\(N' \\Phi(N) \\ne N\\) by definition of the Frattini subgroup, so \\(M = N/N'\\Phi(N) \\ne 0\\). \\(M\\) is an abelian pro-\\(p\\) group with a continuous action of \\(F\\). By the previous lemma \\(M\\) has an \\(F\\)-invariant proper open subgroup \\(U\\). Now \\(M/U\\) is a finite \\(F\\)-module which is an abelian \\(p\\)-group, so by the characterisation of simple modules there is a surjection \\(M/U \\to \\F_p\\). But then the composition\n  \\[\n    N \\to N/N'\\Phi(N) = M \\to M/U \\to \\F_p\n  \\]\n  is a non-trivial \\(F\\)-invariant homomorphism which kills all of \\(R\\), a contradiction.\n\\end{proof}\n\n\\begin{proof}[Non-examinable proof of \\Cref{thm:size of generators and relators of pro-p group}]\n  The last part follows from the fact that a minimal generating set has size equal to \\(\\dim G/\\Phi(G) = \\dim H^1(G, \\F_p)\\).\n\n  Let \\(N = \\ker(F(X) \\to G)\\). We obtain the five term exact sequence\n  \\[\n    \\begin{tikzcd}\n      0 \\ar[r] & H^1(G) \\ar[r] & H^1(F) \\ar[r] & H^1(N)^F \\ar[r] & H^2(G) \\ar[r] & H^2(F) = 0 \\\\\n      & \\dim H^1(G) & {|X|} & r_X & \\dim H^2(G)\n    \\end{tikzcd}\n  \\]\n  where the second row records dimensions; \\(\\dim H^1(N)^F = r_X\\) by the previous lemma. Now take the Euler characteristic.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\printindex\n\\end{document}\n\n% https://www.dpmms.cam.ac.uk/~grw46/partiiiprofinite.html", "meta": {"hexsha": "6a45098a5d1dac8751921d853c4a65da74183b90", "size": 119986, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "III/profinite_groups.tex", "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_issues_repo_path": "III/profinite_groups.tex", "max_issues_repo_name": "geniusKuang/tripos", "max_issues_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_forks_repo_path": "III/profinite_groups.tex", "max_forks_repo_name": "geniusKuang/tripos", "max_forks_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "avg_line_length": 48.1291616526, "max_line_length": 781, "alphanum_fraction": 0.6172470122, "num_tokens": 44048, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7853085808877581, "lm_q1q2_score": 0.5905619271090569}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc} % allow utf-8 input\n\\usepackage[T1]{fontenc}    % use 8-bit T1 fonts\n\\usepackage{hyperref}       % hyperlinks\n\\usepackage{url}            % simple URL typesetting\n\\usepackage{booktabs}       % professional-quality tables\n\\usepackage{amsfonts}       % blackboard math symbols\n\\usepackage{nicefrac}       % compact symbols for 1/2, etc.\n\\usepackage{microtype}      % microtypography\n\\usepackage{natbib}         % bibliography \\usepackage{apalike}\n\\usepackage{xcolor}\n\\usepackage{graphicx}\n\n\\title{REINFORCE Variants}\n\n\\author{\nSotetsu KOYAMADA\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\tableofcontents\n\n\\begin{abstract}\nWe summarize the famous REINFORCE~\\citep{Williams1992-rp} algorithm and its variants.\n\\end{abstract}\n\n\\section{Notation}\n\n\\begin{itemize}\n\t\\item Assumes Episodic MDPs.\n\t\\item Discount factor is omitted for the simplicity.\n\\end{itemize}\n\nThe reinforcement learning objective is defined by:\n\n\\begin{eqnarray}\nJ(\\theta) := \\mathbb{E}_{(s_1, a_1, \\ldots, s_T, a_T) \\sim p_\\theta(s_1, a_1, \\ldots, s_T, a_T)} \\Biggl[ \\sum_{t=1}^{T} r(s_t, a_t) \\Biggr].\n\\end{eqnarray}\n\n\\section{REINFORCE Variants}\n\nPolicy gradient $\\nabla_\\theta J(\\theta)$ is derived as\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n= \\mathbb{E}_{(s_1, a_1, \\ldots, s_T, a_T) \\sim p_\\theta(s_1, a_1, \\ldots, s_T, a_T)} \\Biggl[ \\Biggl(\\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{t^\\prime}|s_{t^\\prime}) \\Biggr) \\Biggl( \\sum_{t=1}^T r(s_t, a_t) \\Biggr) \\Biggr].\n\\end{eqnarray}\n\n\\subsection{Vanilla REINFORCE}\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n&\\approx& \\frac{1}{N} \\sum_{n=1}^{N} \\Biggl( \\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime}) \\Biggr) \\Biggl( \\sum_{t=1}^T r(s_{n, t}, a_{n, t}) \\Biggr) \\\\\n&=& \\frac{1}{N} \\sum_{n=1}^{N} \\mathcal{R}_n \\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\nwhere $\\mathcal{R}_{n} := \\sum_{t=1}^T r(s_{n, t}, a_{n, t})$.\n\n\\subsection{Future Return}\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n&\\approx& \\frac{1}{N} \\sum_{n=1}^{N} \\Biggl( \\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime}) \\Biggl(\\, \\sum_{{\\color{red}{t=t^\\prime}}}^{T} r(s_{n, t}, a_{n, t}) \\Biggr) \\Biggr)  \\\\\n&=& \\frac{1}{N} \\sum_{n=1}^{N} \\sum_{t^\\prime=1}^{T} \\mathcal{R}_{n, {\\color{red}{t^\\prime}}} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\nwhere $\\mathcal{R}_{n, t^\\prime} := \\sum_{t=t^\\prime}^T r(s_{n, t}, a_{n, t})$.\n\n\\subsection{Average Baselines}\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n\\approx \\frac{1}{N} \\sum_{n=1}^{N} \\Biggl(\\mathcal{R}_n {\\color{red}{- \\frac{1}{N} \\sum_{n^\\prime=1}^N \\mathcal{R}_n }} \\Biggr)\\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\n\nFor the future return case,\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n\\approx \\frac{1}{N} \\sum_{n=1}^{N} \\sum_{t^\\prime=1}^{T} \\Biggl(\\mathcal{R}_{n, t^\\prime} {\\color{red}{- \\frac{1}{N} \\sum_{n^\\prime=1}^N \\mathcal{R}_{n^\\prime, t^\\prime} }} \\Biggr)  \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\n\n\\subsubsection{Debiasing Factor}\nWhile it is well known that introducing 
action-independent baseline keeps the gradient estimator unbiased,\nestimating the baseline using the same samples used in return estimation may introduce some bias.\nRescaling the gradient can make the estimator unbiased.\n\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n\\approx \\frac{1}{{\\color{red}{N-1}}} \\sum_{n=1}^{N} \\Biggl(\\mathcal{R}_n - \\frac{1}{N} \\sum_{n^\\prime=1}^N \\mathcal{R}_{n^\\prime} \\Biggr)\\sum_{t^\\prime=1}^{T} \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\nSee Section 2.4 in \\citet{Parmas2020-tr} for a detailed explanation.\nFor the future return case,\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n\\approx \\frac{1}{{\\color{red}{N-1}}} \\sum_{n=1}^{N} \\sum_{t^\\prime=1}^{T} \\Biggl(\\mathcal{R}_{n, t^\\prime} - \\frac{1}{N} \\sum_{n^\\prime=1}^N \\mathcal{R}_{n^\\prime, t^\\prime} \\Biggr)  \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime})\n\\end{eqnarray}\n\n\\subsection{Entropy regularization}\nWe can add an entropy regularization term to enhance exploration~\\citep{Mnih2016-eh}\n\\begin{eqnarray}\n\\nabla_\\theta J(\\theta)\n&:=& \\frac{1}{N} \\sum_{n=1}^{N} \\sum_{t^\\prime=1}^{T} \\Bigl( \\mathcal{R}_n \\nabla_\\theta \\log \\pi_\\theta (a_{n, t^\\prime}|s_{n, t^\\prime}) +\n{\\color{red}{\\beta \\nabla_\\theta \\mathcal{H} \\bigl(\\pi_\\theta(\\, \\cdot \\,|\\, s_{n, t^\\prime}) \\bigr)}} \\Bigr).\n\\end{eqnarray}\n\n\\subsection{Maximum Entropy}\nLet us define the maximum entropy RL objective \\citep{Haarnoja2017-xl} (TODO: check other ref) by\n\\begin{eqnarray}\nJ(\\theta) := \\mathbb{E}_{(s_1, a_1, \\ldots, s_T, a_T) \\sim p_\\theta(s_1, a_1, \\ldots, s_T, a_T)} \\Biggl[ \\sum_{t=1}^{T} r(s_t, a_t) {\\color{red}{ + \\alpha \\mathcal{H} \\bigl(\\pi_\\theta(\\, \\cdot \\,|\\, s_t) \\bigr) }}\\Biggr],\n\\end{eqnarray}\nwhere $\\mathcal{H} \\bigl(\\pi_\\theta(\\, \\cdot \\,|\\, s_t) \\bigr)$ is the entropy term of policy $\\pi_\\theta$ at state $s_t$ and $\\alpha$ is a hyperparameter.\nThen, the gradient is derived as\n\\begin{eqnarray}\n\\textrm{TBA.}\n\\end{eqnarray}\nFor the future return case,\n\\begin{eqnarray}\n\\textrm{TBA.}\n\\end{eqnarray}\n\n\\bibliographystyle{apalike}\n\\bibliography{reference}\n\n\\end{document}\n", "meta": {"hexsha": "0ab2b3138dfe4ddc157dbf9a109413bfafb2e3d8", "size": 5264, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/main.tex", "max_stars_repo_name": "sotetsuk/reinforce", "max_stars_repo_head_hexsha": "8d98de1d42134c7184625e70f2c3d54153636069", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-28T08:40:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T04:55:37.000Z", "max_issues_repo_path": "tex/main.tex", "max_issues_repo_name": "sotetsuk/reinforce", "max_issues_repo_head_hexsha": "8d98de1d42134c7184625e70f2c3d54153636069", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-08-18T03:25:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-19T04:13:44.000Z", "max_forks_repo_path": "tex/main.tex", "max_forks_repo_name": "sotetsuk/reinforce", "max_forks_repo_head_hexsha": "8d98de1d42134c7184625e70f2c3d54153636069", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.125, "max_line_length": 246, "alphanum_fraction": 0.6662234043, "num_tokens": 2121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7853085708384735, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5905619108226531}}
{"text": "\\documentclass[12pt, a4paper]{article}\n% \\usepackage{ctex}\n\\usepackage[margin=1in]{geometry} \n\\usepackage{amsmath,amsthm,amssymb}\n\\usepackage{bm}\n\\usepackage{cases}\n\\usepackage{graphicx}\n\\usepackage{subfig}\n\\usepackage{hyperref}\n\\usepackage{amsfonts}\n\\usepackage{authblk}\n\\usepackage{mathrsfs}\n\\newcommand{\\E}{\\mathbb{E}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\N}{\\mathcal{N}}\n\\hypersetup{hidelinks}\n\\title{PRML Note\\\\C09 Mixture Models and EM}\n\\author{Yang Zhao}\n\\affil{Department of Automation, Tsinghua University}\n\\date{}\n\n\\begin{document}\n    \\maketitle\n    \\section{K-means Clustering}\n    \\begin{itemize}\n        \\item Suppose we have a data set $\\{\\bm{x}_1,\\bm{x}_2,\\cdots,\\bm{x}_N\\}$ and \n        our goal is to partition the data set into some number $K$ of clusters, where\n        we shall suppose for the moment that the value of $K$ is given.\n        \\item We might think of a cluster as comprising a group of data points whose\n        inter-point distances are small compared with the distances to points outside\n        of the cluster. To do this, we introduce a set of D-dimensional vectors \n        $\\bm{\\mu}_k$, where $k=1,\\cdots,K$, in which $\\bm{\\mu}_k$ is a prototype \n        associated with the $k^{th}$ cluster.\n        \\item For each data point, we introduce a corresponding set of binary indicator\n        variables $r_{nk}\\in\\{ 0,1\\}$, where $k=1,\\cdots,K$ describing which of $K$\n        clusters the data point $\\bm{x}_n$ is designed to, so that if data point \n        $\\bm{x}_n$ is designed to cluster $k$, then $r_{nk}=1$ and $r_{nj}=0$ for \n        $j\\neq k$. This is known as the $1$-of-$K$ coding scheme. We can define an \n        objective function, sometimes called a \\textit{distortion measure}, given by\n        \\begin{equation}\n            J=\\sum_{n=1}^N\\sum_{k=1}^K r_{nk}\\Vert\\bm{x}_n-\\bm{\\mu}_k\\Vert^2\n        \\end{equation}\n        Our goal is to find values for the $r_{nk}$ and the $\\bm{\\mu}_k$ so as to \n        minimize $J$.\n        \\item First we choose some initial values for the $\\bm{\\mu}_k$. Then in the \n        first phase we minimize $J$ with respect to the $r_{nk}$, keeping the \n        $\\bm{\\mu}_k$ fixed. In the second phase we minimize $J$ with respect to the \n        $\\bm{\\mu}_k$, keeping the $r_{nk}$ fixed. This two-stage optimization is then \n        repeated until convergence. \n        \\item Consider first the detemination of the $r_{nk}$. 
We can simply assign the\n        $n^{th}$ data point to the closest cluster centre.\n        \\begin{equation}\n            r_{nk}=\\begin{cases}\n                1&\\textit{if $k=argmin_j\\Vert\\bm{x}_n-\\bm{\\mu}_j\\Vert^2$}\\\\\n                0&\\textit{otherwise}\n            \\end{cases}\n        \\end{equation}\n        \\item Consider the optimization of the $\\bm{\\mu}_k$ with the $r_{nk}$ held fixed.\n        The objective function can be minimized by setting its derivative with respect to\n        $\\bm{\\mu}_k$ to zero giving\n        \\begin{equation}\n            \\bm{\\mu}_k=\\frac{\\sum_nr_{nk}\\bm{x}_n}{\\sum_nr_{nk}}\n        \\end{equation}\n        \\item This $K$-means algorithm may converge to a local rather than global \n        minimum of $J$.\n        \\item In practice, a better initialization procedure would be to choose the \n        cluster centres $\\bm{\\mu}_k$ to be equal to a random subset of $K$ data points.\n        \\item It is worth noting that the $K$-means algorithm itself is often used to \n        initialize the parameters in a Gaussian mixture model before applying the EM\n        algorithm.\n        \\item The $K$-means algorithm can be generalized by introducing a more general\n        dissimilarity measure $\\nu(\\bm{x},\\bm{x}')$ instead of the Euclidean distance, \n        which gives the $K$-medoids algorithm.\n    \\end{itemize}\n    \\section{Mixtures of Gaussians}\n    \\begin{itemize}\n        \\item The Gaussian mixture distribution can be written as a linear superposition\n        of Gaussians in the form\n        \\begin{equation}\n            p(\\bm{x})=\\sum_{k=1}^K\\pi_k\\N(\\bm{x}|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)\n        \\end{equation}\n        Let us introduce a $K$-dimensional binary random variable $\\bm{z}$ to represent \n        the state of the Gaussian distribution using $1$-of-$K$ code. \n        \\begin{figure}[htbp]\n            \\centering\n            \\includegraphics[width=0.75in]{figures/MixtureModel.PNG}\n            \\caption{Graphical representation of a mixture model}\n        \\end{figure}\n        \\item The joint distribution is expressed in the form $p(\\bm{x},\\bm{z})=p(\\bm{z})\n        p(\\bm{x}|\\bm{z})$. The marginal distribution over $\\bm{z}$ is specified in terms \n        of the mixing coefficients $\\pi_k$, such that\n        \\begin{equation}\n            p(z_k=1)=\\pi_k\n        \\end{equation}\n        and the conditional distribution of $\\bm{x}$ given a particular value for $\\bm{z}$\n        is a Gaussian \n        \\begin{equation}\n            p(\\bm{x}|z_k=1)=\\N(\\bm{x}|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)\n        \\end{equation}\n        So the marginal distribution of $\\bm{x}$ is given by\n        \\begin{equation}\n            p(\\bm{x})=\\sum_{k=1}^K\\pi_k\\N(\\bm{x}|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)\n        \\end{equation}\n        \\item We use $\\gamma(z_k)$ to denote $p(z_k=1|\\bm{x})$, whose value can be found\n        using Bayes' theorem\n        \\begin{equation}\n            \\label{eq:responsibility}\n            \\gamma(z_k)\\equiv p(z_k=1|\\bm{x})=\\frac{\\pi_k\\N(\\bm{x}|\\bm{\\mu}_k,\n            \\mathbf{\\Sigma}_k)}{\\sum_{j=1}^K\\pi_j\\N(\\bm{x}|\\bm{\\mu}_j,\\mathbf{\\Sigma}_j)}\n        \\end{equation}\n        the $\\gamma(z_k)$ can be viewed as the \\textit{responsibility} that component $k$\n        takes for explaining the observation $\\bm{x}$. 
$\\gamma(z_{nk})\\equiv \n        p(z_k=1|\\bm{x}_n)$.\n        \\item Suppose we have a data set of observations $\\{\\bm{x}_1,\\cdots,\\bm{x}_N\\}$. \n        The log of the likelihood function is given by\n        \\begin{equation}\n            \\label{eq:MLEforGM}\n            lnp(\\mathbf{X}|\\bm{\\pi},\\bm{\\mu},\\mathbf{\\Sigma})=\\sum_{n=1}^Nln\n            \\Big\\{\\sum_{k=1}^K\\pi_k\\N(\\bm{x}_n|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)\\Big\\}\n        \\end{equation}\n        It is worth emphasizing that there is a significant problem associated with the \n        maximum likelihood framework applied to Gaussian mixture models, due to the \n        presence of singularities. \n        \\item These singularities provide another example of the severe over-fitting that\n        can occur in a maximum likelihood approach. If we adopt a Bayesian approach, this\n        difficulty does not occur. \n        \\item \\textit{EM algorithm}. The expectation-maximization algorithm is an elegant\n        and powerful method for finding maximum likelihood solutions for models with \n        latent variables. The EM algorithm can be generalized to obtain the variational\n        inference framework. \n        \\item Setting the derivatives of $lnp(\\mathbf{X}|\\bm{\\pi},\\bm{\\mu},\n        \\mathbf{\\Sigma})$ in equation (\\ref{eq:MLEforGM}) with respect to the $\\bm{\\mu}_k$\n        and $\\mathbf{\\Sigma}_k$ to zero, we obtain\n        \\begin{align}\n            \\label{eq:mean}\n            \\bm{\\mu}_k&=\\frac{1}{N_k}\\sum_{n=1}^N\\gamma(z_{nk})\\bm{x}_n\\\\\n            \\label{eq:covariance}\n            \\mathbf{\\Sigma}_k&=\\frac{1}{N_k}\\sum_{n=1}^N\\gamma(z_{nk})\n            (\\bm{x}_n-\\bm{\\mu}_k)(\\bm{x}_n-\\bm{\\mu}_k)^T\n        \\end{align}\n        where we have defined\n        \\begin{equation}\n            N_k=\\sum_{n=1}^N\\gamma(z_{nk})\n        \\end{equation}\n        \\item If we want to maximize $lnp(\\mathbf{X}|\\bm{\\pi},\\bm{\\mu},\\mathbf{\\Sigma})$\n        with respect to the $\\pi_k$, we must take account of the constraint $\\sum_{k=1}\n        ^K\\pi_k=1$. This can be achieved using a Lagrange multiplier and maximizing the \n        following quantity\n        \\begin{equation}\n            lnp(\\mathbf{X}|\\bm{\\pi},\\bm{\\mu},\\mathbf{\\Sigma})+\\lambda\\Big(\n                \\sum_{k=1}^K\\pi_k-1\\Big)\n        \\end{equation}\n        which gives\n        \\begin{equation}\n            0=\\sum_{n=1}^N\\frac{\\N(\\bm{x}_n|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)}\n            {\\sum_j\\pi_j\\N(\\bm{x}_n|\\bm{\\mu}_j,\\mathbf{\\Sigma}_j)}+\\lambda\n        \\end{equation}\n        If we multiply both sides by $\\pi_k$ and sum over $k$ making use of the \n        constraint $\\sum_{k=1}^K\\pi_k=1$, we can find $\\lambda=-N$ and we can obtain\n        \\begin{equation}\n            \\label{eq:coefficient}\n            \\pi_k=\\frac{N_k}{N}\n        \\end{equation}\n        \\item For the EM algorithm, in the expectation step, or E step, we use the current\n        values for the parameters to evaluate the posterior probabilities, or \n        responsibilities, given by equation (\\ref{eq:responsibility}); in the maximization\n        step, or M step, we use the probabilities to re-estimate the means, covariances\n        and mixing coefficients using equations (\\ref{eq:mean}), \n        (\\ref{eq:covariance}), (\\ref{eq:coefficient}). 
\n    \\end{itemize}\n    \\section{An Alternative View of EM}\n    \\begin{itemize}\n        \\item The goal of the EM algorithm is to find maximum likelihood solutions for \n        models having latent variables. We denote the set of all observed data by \n        $\\mathbf{X}$, in which the $n^{th}$ row represents $\\bm{x}_n^T$, and similarly\n        we denote the set of all latent variables by $\\mathbf{Z}$, with a corresponding\n        row $\\bm{z}_n^T$. The set of all model parameters is denoted by $\\bm{\\theta}$, \n        and so the log likelihood function is given by\n        \\begin{equation}\n            ln p(\\mathbf{X}|\\bm{\\theta})=ln\\Big\\{\\sum_{\\mathbf{Z}}p(\\mathbf{X},\n            \\mathbf{Z}|\\bm{\\theta}) \\Big\\}\n        \\end{equation}\n        \\item The presence of the sum prevents the logarithm from acting directly on the \n        joint distribution, resulting in complicated expressions for the maximum likelihood\n        solution. \n        \\item \\textit{The General EM Algorithm}. \\begin{enumerate}\n            \\item Choose an initial setting for the parameters $\\bm{\\theta}^{old}$.\n            \\item \\textbf{E Step}. Evaluate $p(\\mathbf{Z}|\\mathbf{X},\\bm{\\theta}^{old})$.\n            \\item \\textbf{M Step}. Evaluate $\\bm{\\theta}^{new}$ given by \n            \\begin{equation*}\n                \\bm{\\theta}^{new}=argmax_{\\bm{\\theta}} Q(\\bm{\\theta},\\bm{\\theta}^{old})\n            \\end{equation*}\n            where \\begin{equation*}\n                Q(\\bm{\\theta},\\bm{\\theta}^{old})=\\sum_{\\mathbf{Z}}p(\\mathbf{Z}|\\mathbf{X},\n                \\bm{\\theta}^{old})ln p(\\mathbf{X},\\mathbf{Z}|\\bm{\\theta})\n            \\end{equation*}\n            \\item Check for convergence of either the log likelihood or the parameter values.\n            If the convergence criterion is not satisfied, then let\n            \\begin{equation*}\n                \\bm{\\theta}^{old}\\leftarrow\\bm{\\theta}^{new}\n            \\end{equation*}\n            and return to step 2. \n        \\end{enumerate}\n        \\item The EM algorithm can also be used to find MAP solutions for models in which a \n        prior $p(\\bm{\\theta})$ is defined over the parameters. In this case the E step remains\n        the same as in the maximum likelihood case, whereas in the M step the quantity to be \n        maximized is given by $Q(\\bm{\\theta},\\bm{\\theta}^{old})+lnp(\\bm{\\theta})$. Suitable \n        choices for the prior will remove the singularities. \n        \\item Consider the problem of maximizing the likelihood for the complete data set \n        $\\{\\mathbf{X},\\mathbf{Z}\\}$. This likelihood function takes the form\n        \\begin{equation}\n            p(\\mathbf{X},\\mathbf{Z}|\\bm{\\mu},\\mathbf{\\Sigma},\\bm{\\pi})=\n            \\prod_{n=1}^N\\prod_{k=1}^K\\pi^{z_{nk}}_k\\N(\\bm{x}_n|\\bm{\\mu}_k,\n            \\mathbf{\\Sigma}_k)^{z_{nk}}\n        \\end{equation}\n        where $z_{nk}$ denotes the $k^{th}$ component of $\\bm{z}_n$. Taking the logarithm, we\n        obtain\n        \\begin{equation}\n            lnp(\\mathbf{X},\\mathbf{Z}|\\bm{\\mu},\\mathbf{\\Sigma},\\bm{\\pi})=\n            \\sum_{n=1}^N\\sum_{k=1}^Kz_{nk}\\{ln\\pi_k+ln\\N(\\bm{x}_n|\\bm{\\mu}_k,\\mathbf{\\Sigma}_k)\\}\n        \\end{equation}\n        Comparison with the log likelihood function (\\ref{eq:MLEforGM}) for the incomplete data\n        shows that summation over $k$ and the logarithm have been interchanged. 
\n    \\end{itemize}\n    \\section{The EM Algorithm in General}\n    \\begin{itemize}\n        \\item Consider a probabilistic model in which we collectively denote all of the observed\n        variables by $\\mathbf{X}$ and all of the hidden variables by $\\mathbf{Z}$. The joint \n        distribution $p(\\mathbf{X},\\mathbf{Z}|\\bm{\\theta})$\n        is governed by a set of parameters denoted $\\bm{\\theta}$.\n        \\item We shall suppose that direct optimization of $p(\\mathbf{X}|\\bm{\\theta})$ \n        is difficult, but that optimization\n        of the complete-data likelihood function $p(\\mathbf{X},\\mathbf{Z}|\\bm{\\theta})$ \n        is significantly easier. \n        \\item For any choice of $q(\\mathbf{Z})$, the following decomposition holds\n        \\begin{equation}\n            lnp(\\mathbf{X}|\\bm{\\theta})=\\mathcal{L}(q,\\bm{\\theta})+KL(q||p)\n        \\end{equation}\n        where we have defined\n        \\begin{align}\n            \\mathcal{L}(q,\\bm{\\theta})=\\sum_{\\mathbf{Z}}q(\\mathbf{Z})ln\\Big\\{\\frac\n            {p(\\mathbf{X},\\mathbf{Z}|\\bm{\\theta})}{q(\\mathbf{Z})}\\Big\\}\\\\\n            KL(q||p)=-\\sum_{\\mathbf{Z}}q(\\mathbf{Z})ln\\Big\\{\\frac\n            {p(\\mathbf{Z}|\\mathbf{X},\\bm{\\theta})}{q(\\mathbf{Z})}\\Big\\}\n        \\end{align}\n        Because the Kullback-Leibler divergence satisfies $KL(q||p)\\ge 0$, we see that the quantity\n        $\\mathcal{L}(q,\\bm{\\theta})$ is a lower bound on the log\n        likelihood function $lnp(\\mathbf{X}|\\bm{\\theta})$.\n        \\item The EM algorithm is a two-stage iterative optimization technique for finding\n        maximum likelihood solutions. \n        \\begin{enumerate}\n            \\item In the \\textbf{E} step, the\n            lower bound $\\mathcal{L}(q,\\bm{\\theta}^{old})$ is maximized with respect to \n            $q(\\mathbf{Z})$ while holding $\\bm{\\theta}^{old}$ fixed; the maximum occurs when\n            $q(\\mathbf{Z})$ equals $p(\\mathbf{Z}|\\mathbf{X},\\bm{\\theta}^{old})$. \n            \\item In the subsequent \\textbf{M} step, the distribution $q(\\mathbf{Z})$ is held \n            fixed and the lower bound $\\mathcal{L}(q,\\bm{\\theta})$ is maximized with respect to \n            $\\bm{\\theta}$ to give some new value $\\bm{\\theta}^{new}$. 
\n        \\end{enumerate}\n    \\end{itemize}\n    \\section{Appendix}\n\\end{document}\n", "meta": {"hexsha": "14c066f22d66eb65ab1a166e96c58c38f0e87212", "size": 14443, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "C09/C09.tex", "max_stars_repo_name": "RemindYZ/PRML", "max_stars_repo_head_hexsha": "f37f64d5ecef81488a943bb3c48f48b8c7edb063", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "C09/C09.tex", "max_issues_repo_name": "RemindYZ/PRML", "max_issues_repo_head_hexsha": "f37f64d5ecef81488a943bb3c48f48b8c7edb063", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "C09/C09.tex", "max_forks_repo_name": "RemindYZ/PRML", "max_forks_repo_head_hexsha": "f37f64d5ecef81488a943bb3c48f48b8c7edb063", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.9163498099, "max_line_length": 99, "alphanum_fraction": 0.6115073046, "num_tokens": 4265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.828938806208442, "lm_q1q2_score": 0.5903968965462637}}
{"text": "\\title{Model}\n\n{{navbar}}\n\n\\subsubsection{Model}\n\nA probabilistic model is a joint distribution $p(\\mathbf{x},\n\\mathbf{z})$ of data $\\mathbf{x}$ and latent variables $\\mathbf{z}$.\nFor background, see the \\href{/tutorials/model}{Probabilistic Models tutorial}.\n\nIn Edward, we specify models using a simple language of random variables.\nA random variable $\\mathbf{x}$ is an object parameterized by\ntensors $\\theta^*$, where\nthe number of random variables in one object is determined by\nthe dimensions of its parameters.\n\n\\begin{lstlisting}[language=Python]\nfrom edward.models import Normal, Exponential\n\n# univariate normal\nNormal(loc=tf.constant(0.0), scale=tf.constant(1.0))\n# vector of 5 univariate normals\nNormal(loc=tf.zeros(5), scale=tf.ones(5))\n# 2 x 3 matrix of Exponentials\nExponential(rate=tf.ones([2, 3]))\n\\end{lstlisting}\n\nFor multivariate distributions, the multivariate dimension is the\ninnermost (right-most) dimension of the parameters.\n\n\\begin{lstlisting}[language=Python]\nfrom edward.models import Dirichlet, MultivariateNormalTriL\n\n# K-dimensional Dirichlet\nDirichlet(concentration=tf.constant([0.1] * K))\n# vector of 5 K-dimensional multivariate normals with lower triangular cov\nMultivariateNormalTriL(loc=tf.zeros([5, K]), scale_tril=tf.ones([5, K, K]))\n# 2 x 5 matrix of K-dimensional multivariate normals\nMultivariateNormalTriL(loc=tf.zeros([2, 5, K]), scale_tril=tf.ones([2, 5, K, K]))\n\\end{lstlisting}\n\nRandom variables are equipped with methods such as\n\\texttt{log\\_prob()}, $\\log p(\\mathbf{x}\\mid\\theta^*)$,\n\\texttt{mean()}, $\\mathbb{E}_{p(\\mathbf{x}\\mid\\theta^*)}[\\mathbf{x}]$,\nand \\texttt{sample()}, $\\mathbf{x}^*\\sim p(\\mathbf{x}\\mid\\theta^*)$.\nFurther, each random variable is associated to a tensor $\\mathbf{x}^*$ in the\ncomputational graph, which represents a single sample $\\mathbf{x}^*\\sim\np(\\mathbf{x}\\mid\\theta^*)$.\n\nThis makes it easy to parameterize random variables with complex\ndeterministic structure, such as with deep neural networks, a diverse\nset of math operations, and compatibility with third party libraries\nwhich also build on TensorFlow.\nThe design also enables compositions of random variables\nto capture complex stochastic structure.\nThey operate on $\\mathbf{x}^*$.\n\n\\includegraphics[width=375px]{/images/random_variable_ops.png}\n\n\\begin{lstlisting}[language=Python]\nfrom edward.models import Normal\n\nx = Normal(loc=tf.zeros(10), scale=tf.ones(10))\ny = tf.constant(5.0)\nx + y, x - y, x * y, x / y\ntf.tanh(x * y)\nx[2]  # 3rd normal rv in the vector\n\\end{lstlisting}\n\nIn the \\href{/api/model-compositionality}{compositionality page}, we\ndescribe how to build models by composing random variables.\n\n\\begin{center}\\rule{3in}{0.4pt}\\end{center}\n\n\\begin{itemize}\n  \\item @{ed.models.RandomVariable}\n  \\item {{models}}\n\\end{itemize}\n", "meta": {"hexsha": "98dd254e581f22e2696218397d171e9c34d50ff8", "size": 2779, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/api/model.tex", "max_stars_repo_name": "zhangyewu/edward", "max_stars_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5200, "max_stars_repo_stars_event_min_datetime": "2016-05-03T04:59:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:32:26.000Z", "max_issues_repo_path": "docs/tex/api/model.tex", "max_issues_repo_name": "zhangyewu/edward", "max_issues_repo_head_hexsha": 
"8ec452eb0a3801df8bda984796034a9e945faec7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 724, "max_issues_repo_issues_event_min_datetime": "2016-05-04T09:04:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-28T02:41:12.000Z", "max_forks_repo_path": "docs/tex/api/model.tex", "max_forks_repo_name": "zhangyewu/edward", "max_forks_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1004, "max_forks_repo_forks_event_min_datetime": "2016-05-03T22:45:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T00:08:08.000Z", "avg_line_length": 35.1772151899, "max_line_length": 81, "alphanum_fraction": 0.7491903562, "num_tokens": 769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387914176259, "lm_q2_score": 0.7122321964553657, "lm_q1q2_score": 0.590396896138432}}
{"text": "\n\\subsection{Triangles}\n\\subsubsection{Area of a triangle}\n\n\\subsubsection{Circumference of a triangle}\n\n\\subsubsection{Sum of angles of a triangle}\n\nAngles in a triangle add to \\(\\pi \\).\n\n", "meta": {"hexsha": "cc3625d6ad0b9adea458cba7ec8e57b8b76eb508", "size": 189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/polygon2D/01-01-triangle.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/polygon2D/01-01-triangle.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/polygon2D/01-01-triangle.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.1818181818, "max_line_length": 43, "alphanum_fraction": 0.7566137566, "num_tokens": 49, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8918110425624792, "lm_q2_score": 0.6619228625116081, "lm_q1q2_score": 0.5903101181124178}}
{"text": "\\chapter*{Appendix A \\\\ Effective Window Method Polynomials \\label{chap:append1}}\r\n\\addcontentsline{toc}{chapter}{Appendices}\r\n\\addcontentsline{toa}{appendix}{Appendix A}\r\n\\addtocontents{toa}{\\addvspace{10pt}}%just to separate the entries in the list\r\n\r\nFor reference, here we list the first few polynomials $H_{l_> l_<}(x)$ from Eq.~(\\ref{foffdiag})\r\n\\beqa\r\nH_{20}(x)&=&x^2-1, \\\\ & & \\nonumber \\\\\r\nH_{40}(x)&=&{7\\over 4}x^4-{5\\over 2}x^2 +{3\\over 4}, \\\\  & & \\nonumber \\\\\r\nH_{42}(x)&=&x^4-x^2, \\\\& & \\nonumber \\\\\r\nH_{60}(x)&=&    \\frac{33}{8} x^6 - \\frac{63}{8}x^4 + \\frac{35}{8}x^2 - \\frac{5}{8}  , \\\\ & & \\nonumber \\\\\r\nH_{62}(x) &=&   \\frac{11}{4}x^6 - \\frac{9}{2}x^4 + \\frac{7}{4}x^2, \\\\ & & \\nonumber \\\\\r\nH_{64}(x) &=&  x^6 -  x^4 \r\n\\label{Hpoly}\r\n\\eeqa\r\n", "meta": {"hexsha": "5b7b49844356a9e26657b3bde0927081c7e81418", "size": 760, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "app1.tex", "max_stars_repo_name": "changhoonhahn/DisThesis", "max_stars_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "app1.tex", "max_issues_repo_name": "changhoonhahn/DisThesis", "max_issues_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "app1.tex", "max_forks_repo_name": "changhoonhahn/DisThesis", "max_forks_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5, "max_line_length": 106, "alphanum_fraction": 0.5828947368, "num_tokens": 332, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9111797148356994, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5902605955046781}}
{"text": "% Preamble.\n\\documentclass[12pt]{article}\n\\usepackage[margin=1.25in]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n%% Title macros.\n\\newcommand{\\HOMEWORKNUM}{18}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-06-19}\n\n\\title{\\vspace{-2\\baselineskip}MATH 225 - Homework \\#\\HOMEWORKNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%% Formatting options.\n%\\pagenumbering{gobble}  % Include for single-page document.\n\n% Macros.\n%% Contextualized by input/output bases.\n\\newcommand{\\based}[3]{{\\{#1\\}}_{#2}^{#3}}\n\n\n% Document.\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{$T: \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2$ reflects vectors across\nline $y = \\tan(30 \\degree)x$ in standard basis.}\n\\begin{gather*}\n\t\\mathcal{A}\n\t=\n\t\\left\\{\n\t\t\\begin{pmatrix} \\cos(20 \\degree) \\\\ \\sin(20 \\degree) \\end{pmatrix},\n\t\t\\begin{pmatrix} -\\sin(20 \\degree) \\\\ \\cos(20 \\degree) \\end{pmatrix}\n\t\\right\\}\n\t\\\\\n\t\\mathcal{B}\n\t=\n\t\\left\\{\n\t\t\\begin{pmatrix} \\cos(40 \\degree) \\\\ \\sin(40 \\degree) \\end{pmatrix},\n\t\t\\begin{pmatrix} -\\sin(40 \\degree) \\\\ \\cos(40 \\degree) \\end{pmatrix}\n\t\\right\\}\n\\end{gather*}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{Find $\\based{T}{\\mathcal{A}}{\\mathcal{B}}$.}\n\t\\\\[\\baselineskip]\n\tLet $\\mathcal{S}$ be the standard basis. \\\\\n\tIn standard basis, $T$ can be expressed by the matrix\n\t\\begin{equation*}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t=\n\t\t\\begin{pmatrix}\n\t\t\t\\cos(60 \\degree) & \\sin(60 \\degree) \\\\\n\t\t\t\\sin(60 \\degree) & -\\cos(60 \\degree)\n\t\t\\end{pmatrix}\n\t\t.\n\t\\end{equation*}\n\tUsing $\\based{T}{\\mathcal{S}}{\\mathcal{S}}$,\n\t\\begin{equation*}\n\t\t\\based{T}{\\mathcal{A}}{\\mathcal{B}}\n\t\t=\n\t\t\\based{I}{\\mathcal{S}}{\\mathcal{B}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{I}{\\mathcal{A}}{\\mathcal{S}}\n\t\t.\n\t\\end{equation*}\n\tThus,\n\t\\footnotesize\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\t\\based{T}{\\mathcal{A}}{\\mathcal{B}}\n\t\t\t=\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(40 \\degree) & -\\sin(40 \\degree) \\\\\n\t\t\t\t\\sin(40 \\degree) & \\cos(40 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t^{-1}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(60 \\degree) & \\sin(60 \\degree) \\\\\n\t\t\t\t\\sin(60 \\degree) & -\\cos(60 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(20 \\degree) & -\\sin(20 \\degree) \\\\\n\t\t\t\t\\sin(20 \\degree) & \\cos(20 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\t.\n\t\\end{equation*}\n\t\\normalsize\n\t\n\t\\item \\textit{Find $\\based{T}{\\mathcal{B}}{\\mathcal{A}}$.}\n\t\\begin{equation*}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{A}}\n\t\t=\n\t\t\\based{I}{\\mathcal{S}}{\\mathcal{A}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{I}{\\mathcal{B}}{\\mathcal{S}}\n\t\t.\n\t\\end{equation*}\n\tThus,\n\t\\footnotesize\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\t\\based{T}{\\mathcal{B}}{\\mathcal{A}}\n\t\t\t=\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(20 \\degree) & -\\sin(20 \\degree) \\\\\n\t\t\t\t\\sin(20 \\degree) & \\cos(20 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t^{-1}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(60 \\degree) & \\sin(60 \\degree) \\\\\n\t\t\t\t\\sin(60 \\degree) & -\\cos(60 
\\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(40 \\degree) & -\\sin(40 \\degree) \\\\\n\t\t\t\t\\sin(40 \\degree) & \\cos(40 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\t.\n\t\\end{equation*}\n\t\\normalsize\n\t\n\t\\item \\textit{Find $\\based{T}{\\mathcal{A}}{\\mathcal{A}}$.}\n\t\\begin{equation*}\n\t\t\\based{T}{\\mathcal{A}}{\\mathcal{A}}\n\t\t=\n\t\t\\based{I}{\\mathcal{S}}{\\mathcal{A}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{I}{\\mathcal{A}}{\\mathcal{S}}\n\t\t.\n\t\\end{equation*}\n\tThus,\n\t\\footnotesize\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\t\\based{T}{\\mathcal{A}}{\\mathcal{A}}\n\t\t\t=\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(20 \\degree) & -\\sin(20 \\degree) \\\\\n\t\t\t\t\\sin(20 \\degree) & \\cos(20 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t^{-1}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(60 \\degree) & \\sin(60 \\degree) \\\\\n\t\t\t\t\\sin(60 \\degree) & -\\cos(60 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(20 \\degree) & -\\sin(20 \\degree) \\\\\n\t\t\t\t\\sin(20 \\degree) & \\cos(20 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\t.\n\t\\end{equation*}\n\t\\normalsize\n\t\n\t\\item \\textit{Find $\\based{T}{\\mathcal{B}}{\\mathcal{B}}$.}\n\t\\begin{equation*}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{B}}\n\t\t=\n\t\t\\based{I}{\\mathcal{S}}{\\mathcal{B}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{I}{\\mathcal{B}}{\\mathcal{S}}\n\t\t.\n\t\\end{equation*}\n\tThus,\n\t\\footnotesize\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\t\\based{T}{\\mathcal{B}}{\\mathcal{B}}\n\t\t\t=\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(40 \\degree) & -\\sin(40 \\degree) \\\\\n\t\t\t\t\\sin(40 \\degree) & \\cos(40 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t^{-1}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(60 \\degree) & \\sin(60 \\degree) \\\\\n\t\t\t\t\\sin(60 \\degree) & -\\cos(60 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\cos(40 \\degree) & -\\sin(40 \\degree) \\\\\n\t\t\t\t\\sin(40 \\degree) & \\cos(40 \\degree)\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\t.\n\t\\end{equation*}\n\t\\normalsize\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "ba4dddb070167726e7d7af07168119bba2fd3051", "size": 4699, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/hw18/main.tex", "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "usc-20202-math-225-39425/hw18/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/hw18/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.7323232323, "max_line_length": 74, "alphanum_fraction": 0.5965098957, "num_tokens": 2008, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.5902366613553575}}
{"text": "% Created 2020-04-01 mi\u00e9 08:59\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=1610]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\usepackage{pgfplots}\n\\usepackage{pdfpages}\n\\usepgfplotslibrary{groupplots}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{System identification of the pneumatic tank model}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={System identification of the pneumatic tank model},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.3.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Intro}\n\\label{sec:org067222b}\n\\begin{frame}[label={sec:orge13f9f7}]{Lab experiment}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{../../figures/tank-lab-setup.png}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org9936e26}]{Simulink simulation}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../../figures/tank-simscape-model.png}\n\\end{center}\n\\end{frame}\n\n\\section{Models}\n\\label{sec:org432c17d}\n\\begin{frame}[label={sec:orgcb96d48}]{System identification}\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input, node distance=20mm] (plant)  {System};\n    \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3, color=blue!80!black] {$u(t)$} (plant);\n    \\draw[->] (plant) -- node[above, near end, color=orange!80!black] {$p(t)$} (output);\n  \\end{tikzpicture}\n\\end{center}   \n\nGiven input-output data \\(\\{ \\big( \\textcolor{blue!80!black}{u(1)}, \\textcolor{orange!80!black}{p(1)} \\big), \\big( \\textcolor{blue!80!black}{u(2)}, \\textcolor{orange!80!black}{p(2)} \\big), \\ldots, \\big( \\textcolor{blue!80!black}{u(N)}, \\textcolor{orange!80!black}{p(N)} \\big) \\}\\) find a \\alert{good description} of the system that generated the data.\n\\end{frame}\n\n\\begin{frame}[label={sec:org0d412e7}]{Flow through the valve}\n\\begin{center}\n  \\begin{tikzpicture}\n    \\node {\\includegraphics[width=0.7\\linewidth]{../../figures/valve-volt-opening.png}};\n    \\node[coordinate] (five) at (0.54,-2.8) {};\n    \\node[coordinate, below of=five, node distance=1.8cm] (origin) {};\n    \\draw[color=red!80!black] (five) to (origin);\n    \\draw[color=blue!80!black, ->, thick] (origin) ++(0,0.4cm) -- node[below, near end] {$u(t)$} ++(1.5cm, 0); \n   \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org2499e40}]{Nonlinear model 1}\n\\begin{block}{Flow into the tank, \\(V_{in}(t) > 5\\)}\n\\[ \\dot{p}(t) = a_0(\\underbrace{V_{in}(t) - 5}_{u(t)})\\sqrt{|\\underbrace{p_s - p(t)}_{\\Delta p(t)}|} \\]\n\\end{block}\n\\begin{block}{Flow out the tank, \\(V_{in}(t) < 5\\)}\n\\[ \\dot{p}(t) = a_0u(t)\\sqrt{|\\underbrace{p(t)-0}_{\\Delta p(t)}|} \\]\n\\end{block}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgc8a4915}]{Nonlinear model 2}\n\\[ \\dot{p}(t) = a_0u(t)|\\Delta p(t))|^{a_1} \\]\n\\end{frame}\n\n\\begin{frame}[label={sec:org5ac1def}]{Converting to a regression 
model which is linear in the parameters}\n\\[ \\dot{p}(t) = a_0u(t)|\\Delta p(t)|^{a_1} \\]\n\nTake the logarithm of the equation to get\n\n\\[ \\log \\dot{p} = \\log a_0 + \\log u + a_1 \\log |\\Delta p|\\]\n\\end{frame}\n\n\\begin{frame}[label={sec:org3d7146d}]{Linear model}\nIntroduce \\(y(t) = p(t)-p_0\\), where \\(p_0\\) is a chosen operating point.\n\n\\[ \\dot{p}(t) = \\dot{y}(t) = -a y(t) + ku(t)\\]\n\\end{frame}\n\n\n\\section{Experiments and results}\n\\label{sec:orga687e3f}\n\n\\begin{frame}[label={sec:org687b35d}]{Input-output data}\n\\begin{center}\n\\includegraphics[width=0.88\\linewidth]{../../figures/tank-sysid-input-output.png}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgd55b88c}]{Fitting nonlinear model 1}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{../../figures/tank-sysid-sqrt-deltaP}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgc3a7756}]{Fitting nonlinear model 2}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../../figures/tank-sysid-log}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org54e1900}]{Fitting linear model}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../../figures/tank-sysid-linear-model}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org23ab816}]{Validation}\n\\begin{center}\n\\includegraphics[width=0.55\\linewidth]{../../figures/sysid-tank-validation-model.png}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgc0ea29b}]{Validation results}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{../../figures/sysid-tank-valudation-results.png}\n\\end{center}\n\\end{frame}\n\\end{document}", "meta": {"hexsha": "b1c2860b541b9c90f1492d112dd144f802b1f1e3", "size": 4870, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modules/tank-pid/screen-cast-2.tex", "max_stars_repo_name": "kjartan-at-tec/mr2015", "max_stars_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "modules/tank-pid/screen-cast-2.tex", "max_issues_repo_name": "kjartan-at-tec/mr2015", "max_issues_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modules/tank-pid/screen-cast-2.tex", "max_forks_repo_name": "kjartan-at-tec/mr2015", "max_forks_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1292517007, "max_line_length": 351, "alphanum_fraction": 0.6973305955, "num_tokens": 1733, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.737158174177441, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5902366525905774}}
{"text": "\\chapter{Sums of Combinatorial Games}\n\\marginurl{%\n    Sum of Games:\\\\\\noindent\n    Introduction to Combinatorial Game Theory \\#5\n}{youtu.be/dRaqJKZh3y0}\n\n\nLet us consider the following game.\n\\begin{game}[Take-Away Game]\n\\label{game:two-pile-take-away-n-m-2-1}\n  In this game there are two players.\n  \\begin{itemize}\n    \\item They have two piles of $n$ and $m$ chips, respectively.\n    \\item They make moves in turns with player I starting,\n      each move consists of moving one or two chips out of the pile.\n    \\item The player that removes the last chip wins.\n  \\end{itemize}\n\\end{game}\nIt is not hard to draw the graph corresponding to the game for small $n$ and\n$m$.\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[thick]\n    \\node[circle, draw, minimum size=6pt] (v1) at (0,0) {$(0, 0)$};\n    \\node[circle, draw, minimum size=6pt] (v2) at (1.5,1.5) {$(0, 1)$};\n    \\node[circle, draw, minimum size=6pt] (v3) at (-1.5,1.5) {$(1, 0)$};\n    \\node[circle, draw, minimum size=6pt] (v4) at (0,3) {$(1, 1)$};\n    \\node[circle, draw, minimum size=6pt] (v5) at (3,3) {$(0, 2)$};\n    \\node[circle, draw, minimum size=6pt] (v6) at (-3,3) {$(2, 0)$};\n\n\n    \\draw[->] (v2) -- (v1);\n    \\draw[->] (v3) -- (v1);\n    \\draw[->] (v4) -- (v2);\n    \\draw[->] (v4) -- (v3);\n    \\draw[->] (v5) to[out=-90, in=0] (v1);\n    \\draw[->] (v6) to[out=-90, in=180] (v1);\n    \\draw[->] (v6) -- (v3);\n    \\draw[->] (v5) -- (v2);\n  \\end{tikzpicture}\n  \\caption{Part of the graph of \\Cref{game:two-pile-take-away-n-m-2-1}}\n\\end{figure}\nUsing this graph it is easy to see that all drawn positions, except $(1, 1)$ and\n$(0, 0)$ are N-positions.\n\nIn the rest of the chapter we will discuss a method to study similar games.\nAssume we have two combinatorial games $\\mathcal{G}_1$ and $\\mathcal{G}_2$.\nOne may form another game played as follows: the initial position of the new game\nconsists of the pair of initial positions of $\\mathcal{G}_1$ and $\\mathcal{G}_2$,\nplayers alternate moves, and on each turn a player make a move in one of the game\nleaving the position in the second untouched.  
The new game is called\n\\emph{sum of $\\mathcal{G}_1$ and $\\mathcal{G}_2$}.\n\nLet us give a formal definition.\n\\begin{definition}\n    Let $G_1 = (V_1, F_1)$ and $G_2 = (V_1, F_2)$ be directed graphs.\n    We say that $G$ is the sum of $G_1$ and $G_2$, denoted $G_1 + G_2$, is\n    a graph $(V_1 \\times V_2, F)$  such that\n    \\[\n        F(x_1, x_2) =\n            \\set[y_1 \\in F_1(x_1)]{(y_1, x_2)} \\cup\n            \\set[y_2 \\in F_2(x_2)]{(x_1, y_2)}.\n    \\]\n\\end{definition}\n\\Cref{figure:sum-of-graphs-G1-G2} gives an example of this operation.\n\n\\begin{figure}\n    \\centering\n    \\subfloat[$G_1$\\label{figure:sum-of-graphs-G1}]{\n        \\begin{tikzpicture}[thick]\n          \\node[circle, draw, minimum size=6pt] (v1) at (0,0) {$1$};\n          \\node[circle, draw, minimum size=6pt] (v2) at (1,1) {$2$};\n          \\node[circle, draw, minimum size=6pt] (v3) at (-1,1) {$3$};\n\n\n          \\draw[->] (v1) -- (v2);\n          \\draw[->] (v1) -- (v3);\n          \\draw[->] (v2) -- (v3);\n        \\end{tikzpicture}\n    }\n    \\qquad\\qquad\n    \\subfloat[$G_2$\\label{figure:sum-of-graphs-G2}] {\n        \\begin{tikzpicture}[thick]\n          \\node[circle, draw, minimum size=6pt] (v1) at (0,0) {$1$};\n          \\node[circle, draw, minimum size=6pt] (v2) at (1,1) {$2$};\n          \\node[circle, draw, minimum size=6pt] (v3) at (-1,1) {$3$};\n\n\n          \\draw[->] (v2) -- (v1);\n          \\draw[->] (v3) -- (v1);\n          \\draw[->] (v3) -- (v2);\n        \\end{tikzpicture}\n    }\n    \\qquad\\qquad\n    \\subfloat[$G_1 + G_2$\\label{figure:sum-of-graphs-G1-G2}] {\n        \\begin{tikzpicture}[thick]\n          \\node[circle, draw, minimum size=6pt] (v11) at (0, 0)  {$(1, 1)$};\n          \\node[circle, draw, minimum size=6pt] (v21) at (1.5, 1.5)  {$(2, 1)$};\n          \\node[circle, draw, minimum size=6pt] (v31) at (-1.5, 1.5) {$(3, 1)$};\n          \\node[circle, draw, minimum size=6pt] (v12) at (0, -1.5) {$(1, 2)$};\n          \\node[circle, draw, minimum size=6pt] (v22) at (3, 3)  {$(2, 2)$};\n          \\node[circle, draw, minimum size=6pt] (v32) at (-3, 3) {$(3, 2)$};\n          \\node[circle, draw, minimum size=6pt] (v13) at (0,-3)  {$(1, 3)$};\n          \\node[circle, draw, minimum size=6pt] (v23) at (4.5,4.5)  {$(2, 3)$};\n          \\node[circle, draw, minimum size=6pt] (v33) at (-4.5,4.5) {$(3, 3)$};\n\n\n          \\draw[->] (v11) -- (v21);\n          \\draw[->] (v11) -- (v31);\n          \\draw[->] (v21) -- (v31);\n          \\draw[->] (v12) to[out = 0, in = -90] (v22);\n          \\draw[->] (v12) to[out = 180, in = -90] (v32);\n          \\draw[->] (v22) -- (v32);\n          \\draw[->] (v13) to[out = 0, in = -90] (v23);\n          \\draw[->] (v13) to[out = 180, in = -90] (v33);\n          \\draw[->] (v23) -- (v33);\n          \\draw[->] (v12) -- (v11);\n          \\draw[->] (v13) to[out = 45, in = -45] (v11);\n          \\draw[->] (v13) -- (v12);\n          \\draw[->] (v22) -- (v21);\n          \\draw[->] (v23) to[out = -100, in = 0] (v21);\n          \\draw[->] (v23) -- (v22);\n          \\draw[->] (v32) -- (v31);\n          \\draw[->] (v33) to[out = -80, in = 180] (v31);\n          \\draw[->] (v33) -- (v32);\n        \\end{tikzpicture}\n    }\n    \\caption{\\Cref{figure:sum-of-graphs-G1-G2} depicts sum of graphs\n        from \\Cref{figure:sum-of-graphs-G1} and \\Cref{figure:sum-of-graphs-G2}.\n    }\n\\end{figure}\n\n\nAnother example is given by the game of Nim; it is easy to see that\n$2$-pile Nim is a sum of two $1$-pile Nims. 
This observation leads to a\ngeneralization of Bouton's Theorem (\\Cref{theorem:bouton}).\n\\begin{theorem}[The Sprague--Grundy Theorem]\n    Let $G_1$ and $G_2$ be some graphs and $g_1$ and $g_2$ be the corresponding\n    Sprague--Grundy functions. Then the graph $G_1 + G_2$ has a\n    Sprague--Grundy function $g$ such that\n    $g(x_1, x_2) = g_1(x_1) \\bitwisexor g_2(x_2)$.\n\\end{theorem}\n\\begin{proof}\n    Let $G_1 = (V_1, F_1)$, $G_2 = (V_2, F_2)$, and $G = G_1 + G_2$.\n    Consider some $x_1 \\in V_1$ and $x_2 \\in V_2$.\n    Let $a = g_1(x_1) \\bitwisexor g_2(x_2)$. To prove the statement we need to\n    show that\n    \\begin{enumerate}\n        \\item for any $0 \\le b < a$, there is $(y_1, y_2) \\in F(x_1, x_2)$ such\n            that $g(y_1, y_2) = b$;\n        \\item for any $(y_1, y_2) \\in F(x_1, x_2)$, $g(y_1, y_2) \\neq a$.\n    \\end{enumerate}\n\n    We start by proving the first statement.\n    Let us fix some $0 \\le b < a$ and let $c = a \\bitwisexor b$.\n    Let $g_i(x_i) = (p_{i, \\ell}, \\dots, p_{i, 0})$ for each $i \\in \\set{1, 2}$ and\n    $c = (1, q_{k - 1}, \\dots, q_0)$ where $k \\le \\ell$.\n    For some $j \\in \\set{1, 2}$, $p_{j, k} = 1$ since\n    $a = g_1(x_1) \\bitwisexor g_2(x_2)$. Without loss of generality $j = 1$.\n    Hence, $c \\bitwisexor g_1(x_1) < g_1(x_1)$, whence there is $x'_1$ such\n    that $g_1(x'_1) = c \\bitwisexor g_1(x_1)$. As a result, there is a move in\n    $G$ from $(x_1, x_2)$ to $(x'_1, x_2)$ and\n    $g(x'_1, x_2) = g_1(x'_1) \\bitwisexor g_2(x_2) =\n        c \\bitwisexor g_1(x_1) \\bitwisexor g_2(x_2) =\n        c \\bitwisexor a = b$.\n\n    To prove the second statement, assume that there is\n    $(y_1, y_2) \\in F(x_1, x_2)$ so that $g(y_1, y_2) = a$. Without loss of\n    generality we may assume that $x_2 = y_2$. Hence,\n    $0 = g(y_1, x_2) \\bitwisexor g(x_1, x_2) = g_1(y_1) \\bitwisexor g_1(x_1)$.\n    However, $g_1(y_1) \\neq g_1(x_1)$ since there is a move from $x_1$ to $y_1$.\n    Therefore $g_1(y_1) \\bitwisexor g_1(x_1) \\neq 0$ which is a contradiction.\n\\end{proof}\nIt is also easy to see that if $G_1$ and $G_2$ satisfy the ending condition,\nthen $G_1 + G_2$ also satisfies the ending condition. Therefore, if $G_1$ and\n$G_2$ satisfy the ending condition and $g_1$, $g_2$ are Sprague--Grundy\nfunctions of them, $G_1 + G_2$ has a unique Sprague--Grundy function $g$ such that\n$g(x_1, x_2) = g_1(x_1) \\bitwisexor g_2(x_2)$.\n\nA simple example of an application of this theorem is the analysis of the\nfollowing game.\n\\begin{game}\n\\label{game:subtraction-1-2-3-out-of-10-11}\n  Alice and Bob have two piles with $10$ and $11$ chips respectively.\n  They take turns and remove $1$ or $2$ chips from one of the piles.\n  If one of them cannot make a move he/she loses.\n\\end{game}\n\nTo determine who is the winner in this game, we start with a subtraction game\n$G$ with the subtraction set $\\set{1, 2}$. It is easy to see that \n$g : \\N_0 \\to \\N_0$ such that\n\\[\n  g(x) =\n  \\begin{cases}\n    0 & \\text{if } x \\equiv 0 \\pmod{3} \\\\\n    1 & \\text{if } x \\equiv 1 \\pmod{3} \\\\\n    2 & \\text{if } x \\equiv 2 \\pmod{3}\n  \\end{cases}\n\\]\nis the Sprague--Grundy function for $G$. \nIt is also clear that \\Cref{game:subtraction-1-2-3-out-of-10-11} is equal to \n$G + G$. 
Therefore, the function $g' : \\N_0^2 \\to \\N_0$ defined by \n\\[\n  g'(x, y) =\n  \\begin{cases}\n    0 & \\text{if } x \\equiv 0 \\pmod{3} \\text{ and } y \\equiv 0 \\pmod{3}\\\\\n    1 & \\text{if } x \\equiv 0 \\pmod{3} \\text{ and } y \\equiv 1 \\pmod{3}\\\\\n    2 & \\text{if } x \\equiv 0 \\pmod{3} \\text{ and } y \\equiv 2 \\pmod{3}\\\\\n    1 & \\text{if } x \\equiv 1 \\pmod{3} \\text{ and } y \\equiv 0 \\pmod{3}\\\\\n    0 & \\text{if } x \\equiv 1 \\pmod{3} \\text{ and } y \\equiv 1 \\pmod{3}\\\\\n    3 & \\text{if } x \\equiv 1 \\pmod{3} \\text{ and } y \\equiv 2 \\pmod{3}\\\\\n    2 & \\text{if } x \\equiv 2 \\pmod{3} \\text{ and } y \\equiv 0 \\pmod{3}\\\\\n    3 & \\text{if } x \\equiv 2 \\pmod{3} \\text{ and } y \\equiv 1 \\pmod{3}\\\\\n    0 & \\text{if } x \\equiv 2 \\pmod{3} \\text{ and } y \\equiv 2 \\pmod{3}\n  \\end{cases}\n\\]\nis the Sprague--Grundy function of $G + G$.\nSince $10 \\equiv 1 \\pmod{3}$ and $11 \\equiv 2 \\pmod{3}$, we get\n$g'(10, 11) = 3 \\neq 0$; therefore, the position $(10, 11)$ is an N-position.\n\n\n\\marginurl{%\n    Applications of the Sprague-Grundy Theorem:\\\\\\noindent\n    Introduction to Combinatorial Game Theory \\#6\n}{youtu.be/V4yI_1P1Jcc}\nA surprising example of the application of this theorem is the following game.\n\\begin{game}\n\\label{game:polygon}\n  In this game a position is described by a polygon with several diagonals. \n  The game starts with a polygon with $n$ sides. On each turn a player draws a\n  new diagonal so that it does not intersect the previously drawn diagonals.\n  Players take turns and the one who cannot make a move loses.\n\\end{game}\nIt is easy to see that we do not care about the shape of the polygon and\ndiagonals; the only important information is the number of nodes and which nodes\nare connected by a diagonal. \n\nLet $g(n)$ be the value of the Sprague--Grundy function at the polygon with $n$\nsides. It is easy to see that if we split the polygon by a diagonal into two\nparts with $\\ell$ and $m$ sides, then the resulting position is essentially a\nposition in the sum of two games; \nhence, \n$g(n) = \n  \\mex \\set[\n    \\ell, m \\ge 3 \\text{ and } \\ell + m = n + 2\n  ]{g(\\ell) \\bitwisexor g(m)}$.\nUsing this observation it is easy to compute the value of $g(n)$ for small $n$.\n\\begin{table}[h!]\n  \\centering\n  \\begin{tabular}{l l l l l l l}\n      \\toprule\n      $n$ & 3 & 4 & 5 & 6 & 7 & 8 \\\\\n      \\midrule\n      $g(n)$ & 0 & 1 & 0 & 1 & 0 & 1 \\\\\n      \\bottomrule\n  \\end{tabular}\n  \\caption{Sprague--Grundy function for \\Cref{game:polygon}}\n\\end{table}\nFrom this table it is natural to conjecture that $g(n) = 0$ for odd $n$ and $g(n) =\n1$ for even $n$. Let us prove this using induction. The base case follows from\nthe computation necessary to write the table. Let us prove the induction step\nfrom $1, \\dots, n$ to $n + 1$.\n\\begin{itemize}\n  \\item Let $n + 1$ be even. It is clear that if $\\ell + m = (n + 1) + 2$, then\n    $\\ell$ and $m$ have the same remainder modulo $2$. Therefore, $g(m) =\n    g(\\ell)$ by the induction hypothesis. As a result, \n    \\[\n      \\set[\n        \\ell, m \\ge 3 \\text{ and } \\ell + m = (n + 1) + 2\n      ]{g(\\ell) \\bitwisexor g(m)} = \\set{0}\n    \\] \n    and $g(n + 1) = 1$.    \n  \\item Let $n + 1$ be odd. It is clear that if $\\ell + m = (n + 1) + 2$, then\n    $\\ell$ and $m$ have different remainders modulo $2$. Therefore, $g(m) \\neq\n    g(\\ell)$ by the induction hypothesis. 
As a result, \n    \\[\n      \\set[\n        \\ell, m \\ge 3 \\text{ and } \\ell + m = (n + 1) + 2\n      ]{g(\\ell) \\bitwisexor g(m)} = \\set{1}\n    \\] \n    and $g(n + 1) = 0$.\n\\end{itemize}\n\\begin{chapterendexercises}\n    \\exercise Compute the Sprague--Grundy function for the states of the\n      subtraction game with two piles of chips and the subtraction set \n      $\\set{1, 2, 5}$.\n    \\exercise Let $G_1$ be the subtraction game with the subtraction set\n      $\\set{1, 2}$.\n      Let $G_2$ be the game of Nim with three piles.\n      Find all the moves from $(11, (1, 6, 7))$ to P-positions in \n      $G_1 + G_2$.\n\\end{chapterendexercises}\n", "meta": {"hexsha": "42a04ebde0483adfe3314cb858e3f8e7bdfabd5f", "size": 12409, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "parts/part_2/chapter_13_sum_of_games.tex", "max_stars_repo_name": "alexanderknop/I2DM", "max_stars_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-01-12T05:01:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-12T11:44:11.000Z", "max_issues_repo_path": "parts/part_2/chapter_13_sum_of_games.tex", "max_issues_repo_name": "aaknop/I2DM", "max_issues_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 69, "max_issues_repo_issues_event_min_datetime": "2019-01-09T00:19:58.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-04T00:27:16.000Z", "max_forks_repo_path": "parts/part_2/chapter_13_sum_of_games.tex", "max_forks_repo_name": "aaknop/I2DM", "max_forks_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-01-08T23:55:41.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-12T07:14:44.000Z", "avg_line_length": 42.6426116838, "max_line_length": 81, "alphanum_fraction": 0.5872350713, "num_tokens": 4830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.8006920092299293, "lm_q1q2_score": 0.5902366503554034}}
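The chapter's examples are easy to reproduce mechanically. The sketch below (illustrative code, not from the text) computes Sprague--Grundy values of the subtraction game with subtraction set $\{1, 2\}$ via the minimum excludant and combines two piles with XOR, confirming that $(10, 11)$ is an N-position:
\begin{verbatim}
from functools import lru_cache

def mex(values):
    """Minimum excludant: the least non-negative integer not in `values`."""
    n = 0
    while n in values:
        n += 1
    return n

@lru_cache(maxsize=None)
def g(x):
    """Sprague-Grundy value of one pile in the subtraction game S({1, 2})."""
    return mex({g(x - k) for k in (1, 2) if x >= k})

def g_sum(x, y):
    """Sprague-Grundy theorem: the value of a sum is the XOR of the values."""
    return g(x) ^ g(y)

assert all(g(x) == x % 3 for x in range(30))   # the pattern from the text
print(g_sum(10, 11))   # 1 XOR 2 = 3 != 0, so (10, 11) is an N-position
\end{verbatim}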
{"text": "\\chapter{Introduction}\n\\emph{\u201c\u4f60\u4eecCS\u6bd4CS\u7684\u5f3a\u5417\uff1f\u90a3\u4f60\u4eec\u7684EE\u6bd4EE\u7684\u5f3a\u5417\uff1f\u201d}\n\\newpage\n\n\\section{Basics}\nThe basic assumption of machine learning is that \\textbf{data samples are i.i.d.}.\n\nThe goal of training a model is to \\textbf{minimize the generalization error of the model}. Since we only have limited amount of data, what we can actually do is to minimize the emprical error.\n\nHowever, we do not always want the emprical error to be as small as possible due to the risk of overfitting.\n\n\\section{Overfitting and Underfitting}\n\\paragraph{Overfitting.} High variance. The model performs well on training sets but performs poorly on new unseen samples. Using a high-order model to fit low-order distribution of data usually leads to overfitting.\n\\paragraph{Underfitting.} High bias. The model has not fully captured the underlying structure of the data. Conduct more training or change a more complicated model.\n\n\\section{Methods for Splitting data}\nTo train a model, we first need to divide data into training set and test set. Training set and test set should be disjoint.\n\n\\subsection{Hold-Out}\nDivide dataset $\\mathcal{D}$ into traning set $\\mathcal{S}$ and test set $\\mathcal{T}$ s.t.\n\\[ \\mathcal{S} \\cup \\mathcal{T} = \\mathcal{D} \\quad \\mathcal{S} \\cap \\mathcal{T} = \\emptyset \\]\nTypical proportion of $\\mathcal{S}$ and $\\mathcal{T}$ is 30\\% and 70\\%.\n\n\\subsection{Cross-Validation}\nDivide $\\mathcal{D}$ into $k$ disjoint sets of similar size.\n\\[ \\mathcal{D} = \\mathcal{D}_1 \\cup \\mathcal{D}_2 \\cup \\dots \\mathcal{D}_k \\quad \\text{s.t.} \\quad \\mathcal{D}_i \\cap \\mathcal{D}_j = \\emptyset \\]\nEach time use $k-1$ sets for training and the remaining set for testing. \nA typical value of $k$ is $10$.\n\n\\subsection{Leave-One-Out}\nA special case of cross-validation, wehre each set $\\mathcal{D}_i$ contains only one sample.\n\n\\subsection{Bootstrapping}\nSuppose $\\mathcal{D}$ has $m$ samples. Randomly pick a sample from $\\mathcal{D}$, copy it into some $\\mathcal{D}'$ and put it back to $\\mathcal{D}$. Repeat the process for $m$ times.\n\\[ \\lim_{m\\to\\infty}(1-\\frac{1}{m})^m = \\frac{1}{e} \\approx 0.368 \\]\nAbout $36.8\\%$ samples in $\\mathcal{D}$ will not be in $\\mathcal{D}'$. So we can use $\\mathcal{D}'$ for training and $\\mathcal{D}\\backslash\\mathcal{D}'$ for testing.\n\n\\section{Performance Evaluation}\n\\subsection{Measure}\n\\paragraph{Regression} Common performance measure for a regression model is \\textbf{Mean Squared Error}.\n\\[ E = \\frac{1}{m}\\sum_{i=1}^m(f(x^{(i)}) - y^{(i)})^2 \\]\n\\paragraph{Classification} Common measure for a classification model is \\textbf{Error Rate}\n\\[ E = \\frac{1}{m}\\sum_{i=1}^m\\mathbb{I}[f(x^{(i)}) \\neq y^{(i)}] \\]\n\n\\subsection{TPR and FPR}\n\\begin{definition}[Sensitivity/TPR]\n    \\[ TPR = \\frac{TP}{TP + FN} \\]\n\\end{definition}\n\\begin{definition}[FPR]\n    \\[ FPR = \\frac{FP}{TN + FP} \\]\n\\end{definition}\n\n\\subsection{Receiver Operating Characteristic}\nMany classification models output a real value and compare it to a certain threshold.\n\nThe \\textbf{ROC Curve} uses $FPR$ as its $x$-axis, and $TPR$ as its $y$-axis. It can be plotted by setting different thresholds for dividing positive and negative samples.\n\nThe \\textbf{Area Under Curve, AUC} is used to evaluate different models. 
Usually a model with a larger AUC is considered to have better performance.\n\n\\subsection{Precision and Recall}\n\\begin{definition}[Precision]\n    \\[ P = \\frac{TP}{TP + FP} \\]\n\\end{definition}\n\\begin{definition}[Recall]\n    \\[ R = \\frac{TP}{TP + FN} \\]\n\\end{definition}\nSimilar to the ROC Curve, we can also plot the \\textbf{P-R Curve}. The \\textbf{Break-Even Point, BEP}, defined as the value at which $P = R$, is used to evaluate different models.\n\nA more common measure is the $F1$ rate.\n\\begin{definition}[$F1$ Rate]\n    \\[ F1 = \\frac{2 \\times P \\times R}{P + R} = \\frac{2 \\times TP}{\\#Samples + TP - TN} \\]\n\\end{definition}\n\\begin{remark}\n    The $F1$ rate is the harmonic mean of Precision and Recall.\n\\end{remark}\n\n\\begin{definition}[$F_{\\beta}$ Rate]\n    \\[ F_{\\beta} = \\frac{(1+\\beta^2)\\times P \\times R}{(\\beta^2 \\times P)+R} \\]\n\\end{definition}\n\\begin{remark}\n    $F_{\\beta}$ is the weighted harmonic mean. When $\\beta > 1$, recall has a higher weight. When $0 < \\beta < 1$, precision has a higher weight.\n\\end{remark}\n\n\\section{Error Analysis}\n\\paragraph{Bias.} The \\textbf{bias} is the difference between the model prediction and the ground truth. This is usually because the model is not well-trained, or because the model is not complex enough to fit the data distribution.\n\\paragraph{Variance.} The \\textbf{variance} is the variance of the predictions of models of the same form trained on different training sets. This is usually because the model is too complex and mistakenly fits the noise or specific features in the dataset.\n\\paragraph{Noise.} The irreducible error inherent in the data itself.\n\nHigh variance $\\to$ Overfitting.\n\nHigh bias $\\to$ Underfitting.\n\n\\subsection{Bias-Variance Decomposition}\nLet $\\bar{f}(x) = \\mathbb{E}_{\\mathcal{D}}[f(x;\\mathcal{D})]$ be the expected prediction, and let\n\\[ bias(x) = \\bar{f}(x) - y \\]\n\\[ var(x) = \\mathbb{E}_{\\mathcal{D}}[(f(x;\\mathcal{D}) - \\bar{f}(x))^2] \\]\nThe generalization error of a model $f$ trained on $\\mathcal{D}$ can be represented by\n\\[ E(f;\\mathcal{D}) = bias^2(x) + var(x) + \\varepsilon^2 \\]\nwhere $\\varepsilon^2$ is the noise term.", "meta": {"hexsha": "338cdece73add4b10f2daa8cc7666cb00e17b1f3", "size": 5101, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Machine Learning/Introduction.tex", "max_stars_repo_name": "YBRua/CourseNotes", "max_stars_repo_head_hexsha": "58a4ccb6b8f8d1de9ec10b627a45442519855dfc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-03-20T10:40:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T08:15:15.000Z", "max_issues_repo_path": "Machine Learning/Introduction.tex", "max_issues_repo_name": "YBRua/CourseNotes", "max_issues_repo_head_hexsha": "58a4ccb6b8f8d1de9ec10b627a45442519855dfc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Machine Learning/Introduction.tex", "max_forks_repo_name": "YBRua/CourseNotes", "max_forks_repo_head_hexsha": "58a4ccb6b8f8d1de9ec10b627a45442519855dfc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-14T11:31:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-14T11:31:00.000Z", "avg_line_length": 52.0510204082, "max_line_length": 228, "alphanum_fraction": 0.7157420114, "num_tokens": 1543, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.8006920020959543, "lm_q1q2_score": 0.5902366450965355}}
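The $1/e$ claim in the bootstrapping subsection is easy to check empirically. A minimal sketch (illustrative, with an arbitrary sample size):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = 100_000
picks = rng.integers(0, m, size=m)          # m draws with replacement
out_of_bag = 1 - np.unique(picks).size / m  # fraction never drawn
print(out_of_bag)                           # close to 1/e = 0.368
\end{verbatim}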
{"text": "\\section{Controllers}\n\\label{sec:controllers}\n\nIn an imitation learning setting, there are two controllers involved: an \\emph{omniscient} controller, which performs the desired task with perfect knowledge of the environment, and a \\emph{learned} controller, which is trained to imitate the behaviour of the omniscient controller.\n\n\\subsection{Omniscient controller}\n\nWe implemented a controller from the literature that simultaneously controls position and orientation of the robot toward a certain goal pose~\\cite{park2011smooth}.\n\nWe chose this particular controller because it promised smooth and intuitive \ntrajectories, which globally converge to arbitrary goal poses without \nsingularities, from any initial pose. Furthermore, this specific formulation \nmakes it easy to impose limits on the velocity, acceleration and jerk of the \nresulting paths, ensuring that they would be physically realizable if executed\non a real robot.\n\nThe control law is described in an egocentric polar coordinate system, relative to the current pose of the robot. The control variables are the linear $v$ and angular $\\omega$ velocities. Given a target pose $T$, the state of the robot is expressed as the triple $(r, \\theta, \\delta)$, where $r$ is the Euclidean distance from the target position; $\\theta \\in (-\\pi, \\pi]$ the target orientation with respect to the line of sight from the robot to the target position; $\\delta \\in (-\\pi, \\pi]$ the vehicle orientation with respect to the line of sight, as shown in Figure~\\ref{fig:egocentric-coordinates}.\n\nIt can be seen that $(r, \\theta)$ completely identify the position of the robot, while $\\delta$ identifies its orientation. In this formulation, moving the robot to the target pose corresponds to bringing the state to the origin, $(r, \\theta, \\delta) = (0, 0, 0)$.\n\n\\begin{figure}[htbp]\n\t\\centerline{\\includegraphics[width=\\columnwidth]{controller/egocentric-coordinates}}\n\t\\caption{Egocentric polar coordinate system, relative to the current pose of the robot.}\n\t\\label{fig:egocentric-coordinates}\n\\end{figure}\n\nAssuming initially the linear velocity $v$ is nonzero positive and given (although not constant), the authors show that the angular velocity $\\omega$ only influences $\\delta$ directly, which in turn influences $(r, \\theta)$. As such, the control problem is decomposed in a slow and a fast subsystem.\n\nThe slow subsystem first computes a reference orientation $\\hat{\\delta}$ to steer the robot toward the origin:\n\\begin{IEEEeqnarray}{l}\n\t\\hat{\\delta} = \\arctan(-k_1 \\theta)\n\\end{IEEEeqnarray}\n\nThe fast subsystem then controls the angular velocity $\\omega$ to bring the current orientation $\\delta$ toward the reference orientation $\\hat{\\delta}$ computed by the slow subsystem (somewhat confusingly, the original paper uses $\\delta$ for both the current and reference orientations):\n\\begin{IEEEeqnarray}{l}\n\t\\omega = -\\frac{v}{r} [\n\t\tk_2 (\\delta - \\underbrace{\\arctan(-k_1 \\theta)}_{\\hat{\\delta}}) +\n\t\t(1 + \\underbrace{\\frac{k_1}{1 + (k_1\\theta)^2}}_{-\\dot{\\hat{\\delta}}})\\sin\\theta\n\t] \\notag \\\\\n\t\\label{eq:angular-vel}\n\\end{IEEEeqnarray}\n\nIt can be seen from \\eqref{eq:angular-vel} that there is a linear relation between $v$ and $\\omega$. In particular, $\\, \\omega = \\kappa(r, \\theta, \\delta) \\, v$ where $\\kappa$ is the curvature of the resulting path. 
It is possible to rewrite \\eqref{eq:angular-vel}, such that\n\\begin{IEEEeqnarray}{l}\n\t\\kappa = -\\frac{1}{r} [k_2 (\\delta - \\hat{\\delta}) + (1 - \\dot{\\hat{\\delta}})\\sin\\theta]\n\t\\label{eq:curvature}\n\\end{IEEEeqnarray}\nwhich implies that the shape of the path does not depend on the choice of $v$. To ensure a smooth and comfortable trajectory, the authors suggest to choose $v$ so that $v \\rightarrow 0$ as $\\kappa \\rightarrow \\pm\\infty$ and $v \\rightarrow v_\\text{max}$ as $\\kappa \\rightarrow 0$:\n\\begin{IEEEeqnarray}{l}\n\tv = \\frac{v_\\text{max}}{1 + \\beta |\\kappa(r, \\theta, \\delta)|^\\lambda}\n\t\\label{eq:linear-vel}\n\\end{IEEEeqnarray}\n\nAs written, the control law has a singularity as $r \\rightarrow 0$, in other words when the robot approaches the target. We address this problem as suggested in the original paper, by setting $v = k_3 r$ in the neighbourhood of $r = 0$:\n\\begin{IEEEeqnarray}{l}\n\tv' = \\min(v, k_3 r)\n\t\\label{eq:final-linear-vel}\n\\end{IEEEeqnarray}\n\nIn the equations above $k_1 > 0$, $k_2 > 0$, $k_3 > 0$, $\\beta > 0$ and $\\lambda > 1$ are design parameters. Figure~\\ref{fig:omniscient-trajectories} shows some trajectories generated by this controller, obtained with parameters $k_1 = 1$, $k_2 = 3$, $k_3 = 2$, $\\beta = 0.4$ and $\\lambda = 2$.\n\n\\begin{figure}[htbp]\n\t\\centerline{\\includegraphics[width=.8\\columnwidth]{controller/demo-circle-omniscient-trajectories}}\n\t\\caption{Trajectories of the omniscient controller from 9 initial poses.}\n\t\\label{fig:omniscient-trajectories}\n\\end{figure}\n\nFinally, we implemented support for reverse gear as suggested by the authors: by simultaneously flipping the orientation of both the robot and the target when $v$ is negative. \n\nWe use the reverse gear to teach the learned controller how to behave when it overshoots the target. This normally never happened with the omniscient controller, so the neural network wouldn't know what to do if it didn't stop precisely over the goal. We used an augmentation technique to ensure that this situation would be present in the training set: we made the robot move in reverse if its initial position was inside a region in front of the goal (i.e. between the arms of the object in Figure~\\ref{fig:omniscient-trajectories}).\n\n\\subsection{Learned controller}\n\nOur learned controller is implemented as an end-to-end neural network, which \nreceives the sensor inputs and produces commands for the motors. 
\nSection~\\ref{sec:models} will describe the specific architectures that we used.\n", "meta": {"hexsha": "d04978a61ad4c53055dfa96cb7cde38dcf080519", "size": 5792, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/controller.tex", "max_stars_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_stars_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-31T19:12:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-31T19:12:50.000Z", "max_issues_repo_path": "report/sections/controller.tex", "max_issues_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_issues_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/controller.tex", "max_forks_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_forks_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.2207792208, "max_line_length": 605, "alphanum_fraction": 0.757769337, "num_tokens": 1531, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5902366404730366}}
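For concreteness, here is a minimal Python sketch of the control law above (our own transcription, not code from the project; it assumes the state $(r, \theta, \delta)$ is already expressed in the egocentric polar coordinates with $r > 0$, and uses the parameter values quoted in the text):
\begin{verbatim}
import numpy as np

def omniscient_control(r, theta, delta,
                       k1=1.0, k2=3.0, k3=2.0,
                       beta=0.4, lam=2.0, v_max=1.0):
    """Return (v, omega) for the pose controller described above."""
    delta_hat = np.arctan(-k1 * theta)          # reference orientation
    # Curvature of the path; its shape is independent of the choice of v.
    kappa = -(k2 * (delta - delta_hat)
              + (1 + k1 / (1 + (k1 * theta)**2)) * np.sin(theta)) / r
    v = v_max / (1 + beta * abs(kappa)**lam)    # slow down on sharp turns
    v = min(v, k3 * r)                          # remove the r -> 0 singularity
    return v, kappa * v                         # omega = kappa * v
\end{verbatim}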
{"text": "\\chapter{Type theory (old computation book chapter draft)}\n\nThe type theory described here is inspired by\nMartin-L\\\"of type theory\nand the Haskell programming language.\nThe type theory is constructive (intuitionistic) and dependent.\n\n\\section{Notation}\n\nIff \\(a\\) is a type then \\(|a|\\) is the \\emph{cardinality} of \\(a\\).\nIt is the number of inhabitants of that type.\n\n\\section{Common types}\n\n\\begin{itemize}\n    \\item\n        $\\Void$ is the empty type.\n        It has no inhabitant.\n    \\item\n        $\\Bool$ is the type of boolean values.\n        This type has two inhabitants: $\\true$ and $\\false$.\n    \\item\n        $\\Bit$ has two inhabitants: $0$ and $1$.\n    \\item\n        $\\Fun{a}{b}$ is the type of functions from $a$ to $b$.\n        The function can be partial.\n        This type is also be written $a \\to b$.\n    \\item\n        $\\Pred{a}$ is the type of\n        logical predicates about objects of type $a$.\n        We define\n        \\[ \\Pred{a} = \\Fun{a}{\\Bool}. \\]\n    \\item\n        $\\Nat$ is the type of natural numbers.\n        This type is defined using Peano axioms.\n    \\item\n        $\\Set{a}$ is the type of the set of elements of type $a$.\n    \\item $\\List{a} = \\Kleene{a}$ is the Kleene closure of $a$.\n        A list of $a$ is an ordered collection of elements of type $a$ with duplicates allowed.\n        so we can write $[x,y,z] 1 = y$.\n    \\item $\\Bits$ is the type of bitstrings.\n        \\[\n            \\Bits = \\Kleene{\\Bit}\n        \\]\n    \\item $\\Either{a}{b}$ is the sum type or the union type\n        that consists of $\\Left{x}$ for all $x : a$\n        and $\\Right{y}$ for all $y : b$.\n    \\item $\\Pair{a}{b}$ is the product type\n        that consists of $(x,y)$ for all $x : a$ and $y : b$.\n    \\item\n        $\\Vect{a}{n}$ is the type of vectors\n        whose element type is $a$ and whose length is $n : \\Nat$:\n        \\[ \\Vect{a}{n} = \\Fun{(I n)}{a} \\]\n        where\n        \\[ I n = \\{ 0, 1, 2, \\ldots, n - 1 \\} \\]\n        or alternatively using predicate logic\n        \\[ \\Fa{x} (x : \\Nat \\wedge x < n \\iff x : I n) \\]\n    \\item $\\Typ$ is the type of types.\n        This implies\n        \\[ \\Typ : \\Typ \\]\n        (Does $\\Typ : \\Typ$ imply inconsistency?\n        Russell's paradox?\n        Unrestricted comprehension?)\n\n        This means that $\\sfSet$ and $\\sfRel$ can be thought as the \\emph{type functions}\n        \\begin{align*}\n            \\sfSet &: \\Typ \\to \\Typ,\n            \\\\\n            \\sfRel &: \\Typ \\to \\Typ \\to \\Typ.\n        \\end{align*}\n\\end{itemize}\n\nThe cardinality of $\\Nat$ is $\\aleph_0$ (aleph-null).\n\n\\section{Cardinality of types}\n\n$|\\List{a}| = |\\Fun{\\Nat}{a}|$.\n\n\\(|a \\to b| = |b|^{|a|}\\)\n\nIf a type has finitely many inhabitants,\nthen the cardinality of that type is the number of its inhabitants.\n\nIf there is a bijection between two types,\nthen those types have the same cardinality.\n\nType-theoretic restatement of Cantor's theorem?\nThere is no bijection between $a$ and $\\Set{a}$.\n$|a| < |\\Set{a}|$.\n\nKind of ordering on cardinalities:\n\\begin{itemize}\n    \\item Iff there is an injection from $a$ to $b$, then $|a| \\le |b|$.\n    \\item Iff there is an surjection from $a$ to $b$, then $|a| \\ge |b|$.\n    \\item Iff there is a bijection between $a$ and $b$, then $|a| = |b|$.\n\\end{itemize}\n\nTwo sets are equinumerous (have the same cardinality) if and only if there is a 
bijection between them.\n\n$|\\sfT a| = |\\sfT (\\Set{a})|$?\n\n\\section{Cardinality theorems}\n\n\\begin{mthm}\n    \\[\n        |a| \\lneq |\\Set{a}|\n    \\]\n\\begin{proof}\n    This was proved by Georg Cantor, whose diagonal argument shows\n    that the cardinality of a set is strictly less than that of its power set.\n    Beth numbers:\n    $\\beth_n \\lneq \\beth_{n+1}$ for each natural number $n$.\n\\end{proof}\n\\end{mthm}\n\n\\begin{mthm}[Equinumerosity among one-parameter types]\n    For each $a$, all these types have the same cardinality:\n    $\\Pred{a}$, $\\Set{a}$.\n    \\begin{proof}\n        Let $p : \\Pred{a}$ be a predicate and $s : \\Set{a}$ be a set.\n        We define $p$ and $s$ such that each object that satisfies the predicate $p$ is an element of the set $s$\n        and also such that each element of the set $s$ satisfies the predicate $p$.\n        \\begin{align*}\n            F p &= \\{ x \\,|\\, p x \\}\n            \\\\\n            G s &= \\lambda x \\to x \\in s\n        \\end{align*}\n        The relationship is\n        \\[ \\FA{x} (p x \\iff x \\in s) \\]\n\n        But what if $p x = x \\not\\in S$?\n        Or what if $p x = \\neg\\exists S ~ x \\in S$?\n        Or what if $p x = \\Fa{S} x \\in S$?\n        Or what if $p x = x \\in x$?\n        What if $p x = \\neg (p x)$?\n        Isn't this prone to Russell's paradox?\n        Unrestricted comprehension?\n        FIXME?\n\n        Or is this not prone?\n        $p$ cannot refer to $s$?\n        Can it?\n    \\end{proof}\n\\end{mthm}\n\nThus a predicate is a set and a set is a predicate.\nIt turns out that there is a name for this concept:\nthat set is the \\emph{extension} of that predicate.\nIf $p$ is a predicate, then $p$ is also a set,\nso we can write $x \\in p$ to mean that $p x$ is true.\nWhat if we assume that a predicate is equal to its own extension?\nNow we make a bold but reasonable claim:\na predicate \\emph{is} a set and a set \\emph{is} a predicate.\nThis has some interesting consequences.\n\nIf we assume the equality, then $p$ becomes a fixed point of $\\phi \\mu$.\nTo see this, we have to define several functions.\nLet $\\phi$ be the flip combinator, that is $\\phi f x y = f y x$.\nLet $\\mu$ be the set membership function, that is $\\mu x y = x \\in y$.\nRecall that the $\\eta$-reduction transforms $p x = q x$ to $p = q$.\n\\begin{align*}\n    p x &= x \\in s\n    \\\\\n    &= \\mu x s\n    \\\\\n    &= \\phi \\mu s x\n    \\\\\n    p &= \\phi \\mu s\n    \\\\\n    p &= s\n    \\\\\n    p &= \\phi \\mu p\n    \\\\\n    p &= \\phi \\mu (\\phi \\mu p)\n    \\\\\n    &= \\phi \\mu (\\phi \\mu (\\phi \\mu p))\n    \\\\\n    &= \\ldots\n\\end{align*}\nThat implies that we can write strange but provable things like these:\n\\begin{align*}\n    1 \\in \\{0,1,2\\} &= \\{0,1,2\\} 1 = \\true\n    \\\\\n    3 \\in \\{0,1,2\\} &= \\{0,1,2\\} 3 = \\false\n    \\\\\n    (\\lambda x \\to x = 1) 1 &= 1 \\in (\\lambda x \\to x = 1) = \\true\n\\end{align*}\nbut this can be confusing at first.\nShould we distinguish predicate and set?\nShould we treat them as the same thing?\nThe membership operator $\\in$ becomes swapped function application.\n\nWe can even generalize the notation $f x = x \\in f$ to every function $f : a \\to b$, not just predicates.\nLet $f x = x + 1$. 
Then $f 0 = 0 \\in f = 1$.\nThis may need some effort and time to get accustomed to,\nbut once you master it, you will be another mathematician.\n\n\\begin{mthm}[Equinumerosity among two-parameter types]\n    All these types have the same cardinality:\n    \\begin{itemize}\n        \\item $\\Relab{a}{b}$, $\\Relab{b}{a}$\n        \\item $\\Pred(a,b)$, $\\Pred(b,a)$\n        \\item $\\Set{(a,b)}$, $\\Set{(b,a)}$\n        \\item $\\Fun{a}{(\\Set{b})}$, $\\Fun{(\\Set{a})}{b}$\n    \\end{itemize}\n    \\begin{proof}\n        Proving $|\\Relab{a}{b}| = |\\Relab{b}{a}|$ is simple.\n\n        Proving $|\\Pred(a,b)| = |\\Pred(b,a)|$ is simple.\n\n        $r : \\Relab{a}{b}$ and $p : \\Pred(a,b)$ and $f : a \\to b \\to \\Bool$.\n        $r$ relates $x$ to $y$ if and only if $p(x,y)$ is true.\n        \\begin{align*}\n            p z &= z \\in r\n             \\\\ &= \\mu z r\n             \\\\ &= \\phi \\mu r z\n            \\\\\n            p &= \\phi \\mu r\n        \\end{align*}\n        Then let $p = r$.\n\n        Since there is a bijection between $\\Pred{(a,b)}$ and $\\Relab{a}{b}$\n        and between $\\Pred{a}$ and $\\Set{a}$,\n        there is a bijection between $\\Relab{a}{b}$ and $\\Set{(a,b)}$.\n\n        To prove that there is a bijection between $\\Relab{a}{b}$ and $\\Fun{a}{(\\Set{b})}$,\n        we choose any $r : \\Relab{a}{b}$ that is a relation\n        from objects of type $a$ to objects of type $b$.\n        Define the \\emph{image of $x$ in $r$} as\n        $i r x = \\{ y \\,|\\, \\text{$r$ relates $x$ to $y$} \\}$\n        where the type of $i$ is $\\Relab{a}{b} \\to a \\to \\Set{b}$.\n        We define the \\emph{relation functionization} function $F$ as\n        \\[ F r = \\{ (x,Y) \\,|\\, i r x = Y \\} \\]\n        we capitalize $Y$ to highlight the fact that it is a set.\n        $G : \\Fun{a}{(\\Set{b})} \\to \\Relab{a}{b}$ is the \\emph{function relationization} function.\n        \\[ G f = \\{ (x,y) \\,|\\, y \\in f x \\} \\]\n        $G f$ relates $x$ to $y$ iff $y \\in f x$.\n        We can see that $F(G f) = f$ and $G(F r) = r$.\n        Thus $F \\circ G$ is the identity of $\\Fun{a}{(\\Set{b})}$\n        and $G \\circ F$ is the identity of $\\Relab{a}{b}$.\n        Thus $F$ and $G$ are inverses of each other.\n\n        ???\n    \\end{proof}\n\\end{mthm}\n\nThere is a mapping from $\\Fun{a}{b}$ to $\\Relab{a}{b}$.\nThere is a bijection between $\\Relab{a}{b}$ and $\\Relab{b}{a}$.\nThere is a bijection between $\\Relab{b}{a}$ and $\\Fun{b}{(\\Set{a})}$.\nThis means that there is a bijection between $\\Fun{a}{(\\Set{b})}$ and $\\Fun{b}{(\\Set{a})}$.\n", "meta": {"hexsha": "d1b1498ff5ddcbb7f86531978017397d2f9524c9", "size": 8889, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research/type-old.tex", "max_stars_repo_name": "edom/work", "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "research/type-old.tex", "max_issues_repo_name": "edom/work", "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_forks_repo_path": "research/type-old.tex", "max_forks_repo_name": "edom/work", 
"max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "avg_line_length": 35.4143426295, "max_line_length": 113, "alphanum_fraction": 0.5633929576, "num_tokens": 2905, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.800691997339971, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5902366369671246}}
{"text": "\\documentclass[main.tex]{subfiles}\n \n\\begin{document}\n%\\section{*Abstract Integration}\n%\\towrite{Lebesgue-Stieltjes integral}\n%\\todo{See Rynne or Garling or Analyse de Hilbert et de Fourier course notes}\n%\\section{*Probability Measure}\n%\\todo{See Pestman or MIT course notes or...}\n%\\towrite{Should cover: p. measure, cont. distr., stochastic vectors, E, moment gen?, CLT, multivar norm dist}\n%\\section{Large Deviation Theory}\n%\\todo{See Hugo or MIT course notes or ...}\n%\\towrite{Should cover: application to multivariate normal dist}\n\n\\section{Multivariate Gaussian distribution}\n\\emph{This presentation of the multivariate Gaussian distribution is heavily based on Chapter VIII of \\emph{Mathematical Statistics} by \\cite{Pestman1998}, which provides proofs to all theorems listed below.}\n\nOne distribution that will be particularly useful in our analysis is the \\emph{multivariate Gaussian distribution}, which generalises the (one-dimensional) normal distribution. In the simplest case, we have a stochastic vector $\\mat{E}=(\\mel{E}_1, \\dots, \\mel{E}_p)$, which is the combination of $p$ Gaussian distributed, \\emph{independent} scalar variables. In general, however, we wish to study stochastic vectors produced by applying a \\emph{linear transformation} $\\mat{L} \\in \\mathcal{L}(\\mathbb{R}^p, \\mathbb{R}^q)$ to $\\mat{E}$. In this case, the coordinate variables $(\\mel{LE})_1,\\dots,(\\mel{LE})_q$ are not always independent! For example, the map $(\\mel{E}_1, \\mel{E}_2) \\mapsto (\\mel{E}_1, \\mel{E}_1 + \\mel{E}_2)$ transforms two independent normals into two dependent ones.\n%In particular, we answer the following question: ``For $i,j \\in \\range{q}$ and $x \\in \\mathbb{R}$, what is the distribution of $(\\mel{LE})_i$, \\emph{given that} $(\\mel{LE})_j \\geq x$?''\n\\subsection{Normal and Gaussian}\nAlthough they are often used interchangeably, we make a clear distinction between a \\emph{normal} distribution and a \\emph{Gaussian} distribution. The former should be familiar:\n\n\\begin{definition}\nA scalar variable $E$ is said to be \\emph{normally distributed} with parameters $\\mu$ and $\\sigma$ if\n\\begin{equation}\nx \\mapsto \\frac{1}{\\sigma \\sqrt{2\\pi}} \\exp\\left[\\frac{-(x-\\mu)^2}{2\\sigma^2}\\right]\n\\end{equation}\nis the probability density of $E$.\n\\end{definition}\n\n\\begin{definition}\nA scalar variable $E$ is said to be \\emph{Gaussian distributed} if it is either normally distributed or constant. \n\\end{definition}\n\nOne could interpret a constant $E$ with value $\\mu$ as a normally distributed variable with mean $\\mu$ and `standard deviation $0$'. 
A linear combination of normally distributed \\emph{scalar} variables is also normally distributed, and the same is true for Gaussian distributed scalars.\n\n\\begin{definition}\nA stochastic vector $\\mat{E}=(\\mel{E}_1, \\dots, \\mel{E}_p)$ is \\emph{elementary normally distributed}\\index{distribution!elementary normal} if the scalar variables $\\mel{E}_i$ are independent and normally distributed ($i \\in \\range{p}$).\n$\\mat{E}$ is \\emph{elementary Gaussian distributed}\\index{distribution!elementary Gaussian} if the scalar variables $\\mel{E}_i$ are independent and Gaussian distributed.\n\\end{definition}\n\n\\begin{definition}\nA stochastic vector $\\mat{X}=(\\mel{X}_1, \\dots, \\mel{X}_p)$ is \\emph{normally distributed}\\index{distribution!normal} if there exists an orthogonal operator $\\mat{Q}$ such that $\\mat{QX}$ is elementary normally distributed.\n$\\mat{X}$ is \\emph{Gaussian distributed}\\index{distribution!Gaussian} if there exists an orthogonal operator $\\mat{Q}$ such that $\\mat{QX}$ is elementary Gaussian distributed.\n\\end{definition}\n%\n%\\towrite{Plaatjes vullen gaatjes}\n%\nWe state, without proof, the following properties of Gaussian distributed vectors:\n\\begin{proposition}\\label{thm:normaliffinvertible}\nThe distribution of a Gaussian distributed vector $\\mat{X}$ is uniquely determined by its expectation $\\mat{\\mu} =\\EXP\\left[\\mat{X}\\right]$ and covariance matrix $\\mat{\\Sigma}=\\COVMAT(\\mat{X})$, and we write $\\mat{X} \\sim \\gaussdistr(\\mat{\\mu}, \\mat{\\Sigma})$. $\\mat{X}$ is normally distributed if and only if $\\mat{\\Sigma}$ is invertible.\n\\end{proposition}\n\nTranslating or applying a linear map to a Gaussian distribution results in a new Gaussian distribution. Note that this can be \\emph{any} linear map, not necessarily a bijective, orthogonal one.\n\n\\begin{theorem}\\label{thm:linearmapofgaussian}\nSuppose $\\mat{X}$ is a $\\gaussdistr(\\mat{\\mu}, \\mat{\\Sigma})$-distributed $p$-vector. \n\\begin{align}\n\\intertext{For any $\\mat{a} \\in \\mathbb{R}^p$:}\n\\mat{X}+\\mat{a} \\, &\\sim \\, \\gaussdistr(\\mat{\\mu}+\\mat{a}, \\mat{\\Sigma}), \\\\\n\\intertext{and for any linear map $\\mat{L} \\in \\mathcal{L}(\\mathbb{R}^p, \\mathbb{R}^q)$:}\n\\mat{L}\\mat{X} \\, &\\sim \\, \\gaussdistr(\\mat{L}\\mat{\\mu}, \\mat{L}\\mat{\\Sigma}\\mat{L}^*).\n\\end{align}\n\\end{theorem}\n\\begin{remark}\nIf $\\mat{X}$ is normally distributed, and $\\mat{L} \\in \\mathcal{L}(\\mathbb{R}^p, \\mathbb{R}^q)$, then $\\mat{L}\\mat{X}$ is Gaussian distributed, but not necessarily normally distributed! 
A trivial example is the zero map: any normally distributed scalar is mapped to a constant (0), which does not admit a probability density function.\n\\end{remark}\n\\begin{corollary}\\label{cor:innerproductisgaussian}\nFor any $\\mat{b} \\in \\mathbb{R}^p$, the mapping\n\\[\n\\left\\langle\\mat{b}, \\,\\cdot\\,\\right\\rangle : \\mathbb{R}^p \\rightarrow \\mathbb{R} : \\mat{x} \\mapsto \\left\\langle\\mat{b}, \\mat{x}\\right\\rangle\n\\]\nis \\emph{linear}, and therefore the scalar variable $\\left\\langle\\mat{b}, \\mat{X}\\right\\rangle$ is Gaussian distributed.\n\\end{corollary}\nIn particular, when applying Corollary \\ref{cor:innerproductisgaussian} to each element of the standard basis $(\\mat{e}_1, \\dots, \\mat{e}_p)$ of $\\mathbb{R}^p$, we find that each of the \\emph{coordinates} of $\\mat{X}$ is Gaussian distributed:\n\\begin{proposition}\\label{prop:gaussianmarginaldistr}\nSuppose $\\mat{X}=(\\mel{X}_1,\\dots,\\mel{X}_p)$ is a $\\gaussdistr(\\mat{\\mu}, \\mat{\\Sigma})$-distributed $p$-vector. Then for each $i \\in \\range{p}$, the marginal distribution is given by:\n\\[\n\\mel{X}_i \\, \\sim \\, \\gaussdistr(\\mel{\\mu}_i, \\mel{\\Sigma}_{ii}).\n\\]\n\\end{proposition}\n\n\nFor normally distributed vectors, a probability density function exists:\n\\begin{theorem}\nSuppose $\\mat{X}$ is a \\emph{normally} distributed $p$-vector with expectation $\\mat{\\mu} =\\EXP\\left[\\mat{X}\\right]$ and covariance matrix $\\mat{\\Sigma}=\\COVMAT(\\mat{X})$. Then $\\mat{X}$ has a \\emph{probability density function} given by\n\\begin{equation}\\label{eq:multivarnormaldensity}\n\\mat{x} \\mapsto \\frac{1}{\\sqrt{\\det(\\mat{\\Sigma})} \\left(2\\pi\\right)^{p/2}} \\exp\\left[ -\\frac{1}{2} (\\mat{x} - \\mat{\\mu})^* \\mat{\\Sigma}^{-1} (\\mat{x} - \\mat{\\mu}) \\right]\n\\end{equation}\n\\end{theorem}\n\\begin{remark}\nThe condition that $\\mat{X}$ is normally distributed is necessary: if $\\mat{X}$ is Gaussian, but not normal, it will take values in an \\emph{affine subspace of $\\mathbb{R}^p$} (such as a plane, as a subspace of $\\mathbb{R}^3$). If a density function were to exist, it would have support of zero measure, and integrating over $\\mathbb{R}^p$ would yield $0$, instead of $1$.\n\\end{remark}\n%\\begin{theorem}\n%The mode of a Gaussian distribution is its mean.\\todo{(where) should this go?}\n%\\end{theorem}\n\\section{One linear condition}\n\\begin{definition}\nGiven a vector $\\mat{r} \\in \\mathbb{R}^p$ and $b \\in \\mathbb{R}$, the set $A\\subseteq \\mathbb{R}^p$ of solutions to the equation\n\\[\n\\left\\langle \\mat{r}, \\mat{x} \\right\\rangle = b\n\\]\nis called a \\emph{plane in $\\mathbb{R}^p$}\\index{plane}. The set $B \\subseteq \\mathbb{R}^p$ of solutions to\n\\[\n\\left\\langle \\mat{r}, \\mat{x} \\right\\rangle \\leq b\n\\]\nis a \\emph{half-space in $\\mathbb{R}^p$}.\nWhen $b=1$, we write $\\mat{r}_A$ (or $\\mat{r}_B$), and we say that $\\mat{r}_A$ ($\\mat{r}_B$) is a \\define{pillar} for $A$ (or $B$). Two planes are called \\emph{parallel} if they do not intersect.\n\\end{definition}\n\\begin{remark}\nWhen $b \\neq 1$, we can scale $\\mat{r}$ to create a pillar for $A$ or $B$, unless $b=0$ (which is the case if and only if $\\mat{0} \\in A$).\n\\end{remark}\n\\begin{remark}\n$A$ is the \\emph{boundary} of $B$, \\ie $A = \\partial B$.\n\\end{remark}\n\nGeometrically, a pillar can be interpreted as a vector from the origin that crosses the plane orthogonally. 
Its length is \\emph{not} the distance between the origin and the plane, but rather the \\emph{inverse} of this distance.\n%\n%\\towrite{Orthogonal projection onto a plane}\n\n\\begin{definition}\nA \\emph{convex polyhedron in $\\mathbb{R}^p$}\\index{polyhedron} is the intersection of finitely many half-spaces in $\\mathbb{R}^p$, and can be written as the set of solutions to\n\\[\n\\mat{R}\\mat{x} \\leq \\mat{b},\n\\]\nwhere the $i$th row of $\\mat{R}$, together with $\\mel{b}_i$, defines one of the intersecting half-spaces.\n\\end{definition}\nThe boundary of a convex polyhedron is contained in the union of the planes corresponding to the half-spaces. This inclusion is strict, in general.\n\nThe convex polyhedra in $\\mathbb{R}$ are exactly all (possibly infinite) closed intervals. Convex polyhedra in $\\mathbb{R}^3$ are familiar shapes, like a cube or a pyramid, but they might also be unbounded, like a pyramid with infinitely deep foundations. \nObjects that are \\emph{not} convex polyhedra are spheres (no finite intersection of half-spaces) and donuts (not convex, which, as the name suggests, is a property of any convex polyhedron\\footnote{Using the linearity of the inner product, one easily finds that half-spaces are convex, and so are their intersections.}).\n\n\\subsection{Feasibility region}\nIn this thesis, we model the \\emph{power injection} of a grid as a normally distributed stochastic vector, where each coordinate corresponds to the amount of power injected at a node of the network. Positive values denote net generation (injection) and negative values are assigned to net consumption. The stochastic behaviour originates from \\emph{renewable sources}: wind and solar, and their correlations arise from correlated weather. \n\nThe transmission network was built to transfer power from one node to another, \\ie to have a non-zero power injection.\nIn Chapter \\ref{chap:grid}, we will learn that not all power injections are \\emph{feasible}, as some might cause one of the transmission lines to overload. The \\emph{feasibility region} (the set of power injections that can be used) is, to some approximation,\\footnote{We make this \\emph{DC approximation} more precise in Chapter \\ref{chap:model}.} a \\emph{convex polyhedron}.\n\nUsing historical generation series, we can estimate the covariance of this power injection. Because the amount of current flowing through a line is a linear function of the power injection (by the same approximation), we can use Corollary \\ref{cor:innerproductisgaussian} to determine the marginal distribution of line current, which gives the probability of an overload failure. A failure of this kind (\\ie caused by a fluctuation of renewable injection) is called an \\define{emergent failure}.\n\nCurrent studies (notably \\cite{Nesti2018emergentfailures} and \\cite{Chertkov2011}) take this one step further, by determining \\emph{the most probable power injection that caused the emergent failure to occur}. We follow the approach of \\cite{Nesti2018emergentfailures}, which is generalised in the following Theorem. 
Here, the plane $A$ can be taken to be one of the \\emph{boundary planes of the feasibility region, corresponding to one of the lines}, and the event $\\mat{X} \\in A$ is the \\emph{event that this line overloads}.\n\n\\begin{theorem}\\label{thm:modeofaplaneconditional}\nLet $\\mat{X} \\, \\sim \\, \\gaussdistr(\\mat{\\mu}, \\mat{\\Sigma})$ be a normally distributed $p$-vector with density function $f_{\\mat{X}}$, and let $A\\subseteq \\mathbb{R}^p$ be a plane, given by a pillar $\\mat{r}_A \\in \\mathbb{R}^p$. Then $\\mat{X} \\mid \\mat{X} \\in A$ is Gaussian distributed, and has mode\n\\begin{equation}\n\\tilde{\\mat{x}}_{A} = \\argmax_{\\mat{x} \\in A} f_{\\mat{X}}(\\mat{x}) = \\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\mu}, \\mat{r}_A \\right\\rangle}{\\left\\langle  \\mat{\\Sigma} \\mat{r}_A, \\mat{r}_A \\right\\rangle} \\mat{\\Sigma} \\mat{r}_A.\\label{eq:modeofaplaneconditional}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}%\\todo{inner product rx is the wrong way around}\nWe will start with the special case of $\\mat{\\Sigma} = \\mat{I}$, and work our way towards the general case.\\footnote{This process uses the so-called \\emph{standardised}\\index{distribution!standardised normal} form of $\\mat{X}$.}\n\n\\emph{Step 1. The case of unit covariance.}\\\\\nSuppose $\\mat{X}$ is \\emph{elementary} normally distributed, with all marginal variances equal to one, \\ie $\\mat{\\Sigma} = \\mat{\\Sigma}^{-1} = \\mat{I}$. The probability density function of $\\mat{X}$ (see Equation (\\ref{eq:multivarnormaldensity})) then reduces to:\n\\[\n\\mat{x} \\mapsto \\frac{1}{\\left(2\\pi\\right)^{p/2}} \\exp\\left[ \\frac{-\\norm{\\mat{x} - \\mat{\\mu}}^2}{2}  \\right].\n\\]\nThis is a \\emph{decreasing} function of the \\emph{distance between $\\mat{x}$ and $\\mat{\\mu}$}, and so its mode is obtained when this distance is minimal.\n\nBecause $A$ is an affine subspace of $\\mathbb{R}^p$, the distance between $\\mat{x} \\in A$ and $\\mat{\\mu}$ is minimal when $\\mat{x}$ is the \\emph{orthogonal projection} of $\\mat{\\mu}$ onto $A$, which is given by:\n\\[\n\\tilde{\\mat{x}}_{A} = \\mat{\\mu} + \\frac{1 - \\left\\langle \\mat{\\mu}, \\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{r}_A, \\mat{r}_A \\right\\rangle} \\mat{r}_A.\n\\]\n\n\\emph{Step 2. The case of an arbitrary elementary normal distribution.}\\\\\nLet us drop the assumption that all marginal variances are equal to one. 
There exist $\\lambda_1, \\dots, \\lambda_p \\in \\mathbb{R}_{>0}$ such that $\\mat{\\Sigma} = \\mat{\\Lambda} = \\diag(\\lambda_1, \\dots, \\lambda_p)$, and $\\mat{\\Lambda}^{t} = \\diag(\\lambda_1^{t}, \\dots, \\lambda_p^{t})$ for any $t \\in \\mathbb{R}$.%\\todo{These are the variances, use sigma squared?}\nWith $t=-\\frac{1}{2}$, we find that $\\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{X}$ is elementary normally distributed, with mean $\\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{\\mu}$ and unit covariance matrix (Theorem \\ref{thm:linearmapofgaussian}).\nApplying the same transformation $\\mat{\\Lambda}^{-\\frac{1}{2}}$ to $A$ yields a new plane, which has pillar $\\mat{\\Lambda}^{\\frac{1}{2}}\\mat{r}_A$ (not $\\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{r}_A$!):\n\nFor each $\\mat{x} \\in \\mathbb{R}^p$, we have\n\n$\\mat{x} \\in A \\iff $\n\\begin{gather*}\n\\left\\langle \\mat{x}, \\mat{r}_A \\right\\rangle = 1\n\\iff \\sum_{i=1}^p \\mel{x}_i \\mel{r}_{A\\,i} = 1\n\\iff \\sum_{i=1}^p \\lambda_{i}^{-\\frac{1}{2}}\\mel{x}_i \\lambda_{i}^{\\frac{1}{2}} \\mel{r}_{A\\,i} = 1 \n\\iff \\\\\n\\left\\langle \\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{x}, \\mat{\\Lambda}^{\\frac{1}{2}}\\mat{r}_A \\right\\rangle = 1\n\\iff\n\\left\\langle \\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{x}, \\mat{r}_{\\mat{\\Lambda}^{-\\frac{1}{2}}(A)} \\right\\rangle = 1\n\\end{gather*}\n\n\\hfill $\\iff \\mat{\\Lambda}^{-\\frac{1}{2}}\\mat{x} \\in \\mat{\\Lambda}^{-\\frac{1}{2}}(A).$\n\nThis means that the plane $\\mat{\\Lambda}^{-\\frac{1}{2}}(A)$ is defined by the pillar $\\mat{\\Lambda}^{\\frac{1}{2}}\\mat{r}_A$.\n\nWe can now apply our earlier result, and we find:\n\n\\begin{gather*}\n\\tilde{\\mat{x}}_{A} = \\mat{\\Lambda}^{\\frac{1}{2}} \n\\oversortoftilde{\\left(\n\\mat{\\Lambda}^{-\\frac{1}{2}} \\mat{x}\\right)_{\\mat{\\Lambda}^{-\\frac{1}{2}}(A)}\n}\n=\n\\mat{\\Lambda}^{\\frac{1}{2}} \n\\left(\n\\mat{\\Lambda}^{-\\frac{1}{2}} \\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\Lambda}^{-\\frac{1}{2}} \\mat{\\mu}, \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A, \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A \\right\\rangle} \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A\n\\right) \\\\\n=\n\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\Lambda}^{-\\frac{1}{2}} \\mat{\\mu}, \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A, \\mat{\\Lambda}^{\\frac{1}{2}} \\mat{r}_A \\right\\rangle} \\mat{\\Lambda} \\mat{r}_A\n=\n\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\mu}, \\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{\\Lambda}  \\mat{r}_A,\\mat{r}_A \\right\\rangle} \\mat{\\Lambda} \\mat{r}_A.\n\\end{gather*}\n\n\\emph{Step 3. The general case.}\\\\\nFinally, we consider the general case where $\\mat{X}$ is normally distributed. If so, there exists orthogonal $\\mat{Q}$ such that $\\mat{\\Sigma}=\\mat{Q}\\mat{\\Lambda}\\mat{Q}^{-1}$, with $\\mat{\\Lambda}$ a diagonal matrix. In other words, the orthogonal map $\\mat{Q}^{-1}$ maps $\\mat{X}$ into an \\emph{elementary} normally distributed vector.\\footnote{$\\mat{Q}^{-1}\\mat{X}$ is elementary normally distributed, but it does not necessarily have a unit covariance matrix.}\nApplying the same transformation $\\mat{Q}^{-1}$ to $A$ yields a new plane, which is defined by the pillar $\\mat{Q}^{-1}\\mat{r}_A$. 
This derivation is more straightforward, since $\\mat{Q}^{-1}$ is orthogonal:\n\nFor each $\\mat{x} \\in \\mathbb{R}^p$, we have\n\n$\\mat{x} \\in A \\iff $\n\\begin{gather*}\n\\left\\langle \\mat{x}, \\mat{r}_A \\right\\rangle = 1\n\\iff \\left\\langle \\mat{Q}^{-1} \\mat{x}, \\mat{Q}^{-1} \\mat{r}_A \\right\\rangle = 1\n\\iff \\left\\langle \\mat{Q}^{-1} \\mat{x}, \\mat{r}_{\\mat{Q}^{-1}(A)} \\right\\rangle = 1\n\\end{gather*}\n\n\\hfill $\\iff \\mat{Q}^{-1}\\mat{x} \\in\\mat{Q}^{-1}(A).$\n\nUsing our previous result, we find:\n\n\\begin{gather*}\n\\tilde{\\mat{x}}_{A}\n=\n\\mat{Q}\n\\oversortoftilde{\\left(\n\\mat{Q}^{-1}\\mat{x}\\right)_{\\mat{Q}^{-1}(A)}\n}\n=\n\\mat{Q}\n\\left(\n\\mat{Q}^{-1}\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{Q}^{-1} \\mat{\\mu}, \\mat{Q}^{-1}\\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A,\\mat{Q}^{-1} \\mat{r}_A \\right\\rangle} \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A\n\\right) \\\\\n=\n\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{Q}^{-1} \\mat{\\mu}, \\mat{Q}^{-1}\\mat{r}_A \\right\\rangle}{\\left\\langle \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A,\\mat{Q}^{-1} \\mat{r}_A \\right\\rangle} \\mat{Q} \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A\n=\n\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\mu}, \\mat{r}_A \\right\\rangle}{\\left\\langle  \\mat{Q} \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A, \\mat{r}_A \\right\\rangle} \\mat{Q} \\mat{\\Lambda} \\mat{Q}^{-1} \\mat{r}_A \\\\\n=\n\\mat{\\mu}  + \\frac{1 - \\left\\langle \\mat{\\mu}, \\mat{r}_A \\right\\rangle}{\\left\\langle  \\mat{\\Sigma} \\mat{r}_A, \\mat{r}_A \\right\\rangle} \\mat{\\Sigma} \\mat{r}_A.\n\\end{gather*}\n\n\\end{proof}\n\nWhen we replace the \\emph{plane} $A$ by a \\emph{half-space} $B$ in Theorem \\ref{thm:modeofaplaneconditional}, there are two cases to consider. If $\\mat{\\mu} \\in B$, then $\\tilde{\\mat{x}}_{B} = \\mat{\\mu}$, because the mode of $\\mat{X}$ is $\\mat{\\mu}$, which is contained in $B$. \n\nIn the non-trivial case of $\\mat{\\mu} \\notin B$, we find that $\\tilde{\\mat{x}}_{B} = \\tilde{\\mat{x}}_{A}$, where $A \\defeq \\partial B$ is the \\emph{boundary} of $B$, which is a plane, given by the same pillar: $\\mat{r}_A = \\mat{r}_B$. A geometrical argument is as follows: by reducing to the case of an elementary normally distributed $\\mat{X}$ with a unit covariance matrix, the mode is the point of $B$ that minimises the distance to $\\mat{\\mu}$. Since $\\mat{\\mu} \\notin B$, this point must lie on the boundary of $B$, \\ie $\\tilde{\\mat{x}}_{B} \\in A$. Because $A$ is a subset of $B$, we can proceed to compute $\\tilde{\\mat{x}}_{A}$, which is then necessarily the mode of $\\mat{X} \\, \\mid \\, B$.\n\nFor a half-space $B$, we now have an explicit expression for $\\tilde{\\mat{x}}_{B}$, \\ie the \\emph{most likely value} of $\\mat{X}$, given that $\\mat{X} \\in B$. What can we say about the \\emph{expected value} of $\\mat{X} \\, \\mid \\, B$? Geometrically, one can imagine that $\\EXP [\\mat{X} \\, \\mid \\, \\mat{X} \\in B]$ lies close to $\\tilde{\\mat{x}}_{B}$, but `shifted inwards', away from $\\partial B$ (in the direction of its pillar). We have not found a direct method to compute this expected value. 
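As a numerical sanity check of Theorem \\ref{thm:modeofaplaneconditional} and of this geometric picture, one can compare the closed-form mode with a Monte Carlo estimate of the conditional mean. The sketch below (Python with NumPy; the numbers are arbitrary, and the rejection-sampling approach is only an illustration, not the method of \\cite{Nesti2018emergentfailures}) conditions on a half-space $B$ with $\\mat{\\mu} \\notin B$:\n\\begin{verbatim}\n# Sketch: closed-form mode of X | X in B versus a Monte Carlo\n# estimate of E[X | X in B], for the half-space <r, x> <= 1.\nimport numpy as np\n\nrng = np.random.default_rng(1)\nmu  = np.array([1.5, 0.5])   # <r, mu> = 2.5 > 1, so mu is not in B\nSig = np.array([[1.0, 0.4],\n                [0.4, 0.5]])\nr   = np.array([1.0, 2.0])   # pillar of A, the boundary of B\n\n# the mode formula from the theorem:\nmode = mu + (1 - r @ mu) / (r @ Sig @ r) * (Sig @ r)\n\nX = rng.multivariate_normal(mu, Sig, size=1000000)\nXB = X[X @ r <= 1.0]         # rejection sampling: keep X in B\nprint(mode, mode @ r)        # the mode lies on the boundary plane\nprint(XB.mean(axis=0))       # close to the mode, shifted inwards\n\\end{verbatim}\n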
\\cite{Nesti2018emergentfailures} introduce a scaling factor $\\epsilon>0$ to the covariance of $\\mat{X}$ (\\ie $\\mat{X}_{\\epsilon} \\, \\sim \\, \\gaussdistr(\\mat{\\mu}, \\epsilon\\mat{\\Sigma})$), and show that the expected value \\emph{converges} to $\\tilde{\\mat{x}}_{B}$, the mode,\\footnote{The factor $\\epsilon$ disappears in (\\ref{eq:modeofaplaneconditional}).} as $\\epsilon \\rightarrow 0$, using the tools of Large Deviations Theory.\n\nOne can interpret this result as saying that the probability density of $\\mat{X}_{\\epsilon} | B$ gets \\emph{concentrated close to the boundary of $B$}. In the one-dimensional case, this statement\\footnote{\\ie the \\emph{tail distribution} of $X \\, \\sim \\, \\gaussdistr(0,\\epsilon^2)$, given $X \\geq 1$, becomes \\emph{steeper} as $\\epsilon \\rightarrow 0$.} is easy to demonstrate, see \\eg \\cite{Touchette2009}.\n%\n%\\todo[inline]{Is the marginal distribution Gaussian distributed? (It is not normally distributed.) What is the covariance matrix? As Step 0, you could transform to an $\\mathbf{r}_A$ that runs parallel to one of the axes, and then set that coordinate equal to $\\norm{\\mathbf{r}_A}$? But then the expected value is no longer correct?}\n%\n%\\todo[inline]{It would be nicer to apply this result to a general Gaussian distribution, which is not necessarily normal. The marginal distribution then just does not always exist, namely when $A \\cap \\Ima(\\mathbf{X}) = \\emptyset$. If it does exist, then I guess it is Gaussian.\n%\n%It cannot currently be applied to line flows (since those are not normal, only Gaussian), but it can be applied to the injection. Once you have the most likely injection, you can derive what the flow is. Do these two agree?\n%\n%The covariance matrix of the flow is also nearly singular (numerically singular), because the correlations are so strong. But I think you can argue that the probability of a singular covariance matrix is $0$, and then the theorem can be applied after all. Numerically, the near-singularity is not a problem either (I think), because the inverse of sigma does not appear in the final expression.}\n\\end{document}", "meta": {"hexsha": "e2f325830f21e916e373fd0527dc1789debc6990", "size": 21647, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/thesis/probability.tex", "max_stars_repo_name": "fons-/grid-failures", "max_stars_repo_head_hexsha": "947ccefe4ced7aa45b7b77339a6f28d7b0881c44", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-07-15T21:43:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T08:14:11.000Z", "max_issues_repo_path": "doc/thesis/probability.tex", "max_issues_repo_name": "fons-/grid-analysis", "max_issues_repo_head_hexsha": "947ccefe4ced7aa45b7b77339a6f28d7b0881c44", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/thesis/probability.tex", "max_forks_repo_name": "fons-/grid-analysis", "max_forks_repo_head_hexsha": "947ccefe4ced7aa45b7b77339a6f28d7b0881c44", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-01-23T17:12:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-05T22:22:44.000Z", "avg_line_length": 84.2295719844, "max_line_length": 923, "alphanum_fraction": 0.7013442971, "num_tokens": 7114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7371581510799253, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5902366340965814}}
{"text": "\\documentclass[a4paper, 12pt]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage[]{amsfonts}\n\\usepackage[]{graphicx}\n\\usepackage[]{amsthm}\n\n\\title{CS231A Course Notes 2: Single View Metrology}\n\\author{Kenji Hata and Silvio Savarese}\n\\date{}\n\n\n\\renewcommand\\emph{\\textbf}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nIn the previous lecture notes, we discussed how we can transform points from the real, 3D world into digital images using the extrinsic and intrinsic properties of cameras. We looked at how we can use known structure in a calibration rig and its corresponding image to deduce these camera properties. This time, we will look at a related problem: can we recover known structure of the 3D world if we have a single image and know the properties of the camera that took the image? We will then more generally discuss what information can be deduced from a single image.\n\n\\section{Transformations in 2D}\nTo better understand how we can learn from images, we should be able to first know about the various transformations in 2D space. \n\n\\emph{Isometric transformations} are transformations that preserve distances. In its most basic form, an isometry can be described as a rotation $R$  and translation $t$. Therefore, mathematically, they are defined as\n\\begin{equation*}\n    \\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix} = \\begin{bmatrix}R & t\\\\ 0 & 1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}\n\\end{equation*}\nwhere $\\begin{bmatrix}x'&y'&1\\end{bmatrix}^T$ is the point achieved after the isometric transformation. A useful property of rotation matrices is that $RR^T = I$, which can be used to quickly check if a matrix is a rotation.\n\n\\emph{Similarity transformations} are transformations that preserve shape. Intuitively, they can do everything that isometric transformations can plus scaling. Mathematically, they are denoted as \n\\begin{equation*}\n    \\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix} = \\begin{bmatrix}SR & t\\\\ 0 & 1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix},\\ S = \\begin{bmatrix}s & 0\\\\ 0 & s\\end{bmatrix}\n\\end{equation*}\nSince they preserve shapes, they also preserve ratio of lengths and angles. Note that every isometric transformation is a specific form of a similarity transformation when $s=1$. The converse does not hold true however.\n\n\\emph{Affine transformations} are transformations that preserve points, straight lines, and parallelism. For some vector $v$, an affine transformation $T$ is defined as \n\\begin{equation*}\n    T(v) = Av + t\n\\end{equation*}\nwhere $A$ is a linear transformation of $\\mathbb{R}^n$. In homogeneous coordinates, affine transformations are often written as\n\\begin{equation*}\n    \\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix} = \\begin{bmatrix}A & t\\\\ 0 & 1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}\n\\end{equation*}\nFrom the above equation, it is easy to see that all similarities (and thus isometries) are a specific case of affinities.\n\n\\emph{Projective transformations} or \\emph{homographies} are any transformations that maps lines to lines, but does not necessarily preserve parallelism. 
In homogeneous coordinates, projective transformations are represented as\n\\begin{equation*}\n    \\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix} = \\begin{bmatrix}A & t\\\\ v & b\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}\n\\end{equation*}\nWe see that the form is a further generalization of affine transformations, as extra degrees of freedom are added with the addition of $v$. \n\nDespite not preserving parallelism, projective transformations do preserve collinearity of points, as they map lines to lines. Furthermore, it can be shown that the cross ratio of four collinear points remains invariant under projective transformations. The cross ratio takes four points $P_1, P_2, P_3, P_4$ on a line and computes\n\\begin{equation}\n    \\mathrm{cross\\ ratio} = \\frac{\\|P_3-P_1\\|\\|P_4-P_2\\|}{\\|P_3-P_2\\|\\|P_4-P_1\\|}\n\\end{equation}\nWe leave proving the invariance of the cross ratio under projective transformation as a class exercise.\n\n\\section{Points and Lines at Infinity}\nLines are important for determining structure in images, so knowing their definitions in both 2D and 3D is essential. A line in 2D can be represented with the homogeneous vector $\\ell = \\begin{bmatrix}a & b & c \\end{bmatrix}^T$. The ratio $-\\frac{a}{b}$ captures the slope of the line and the ratio $-\\frac{c}{b}$ captures the y-intercept. Formally, the relationship between 2D lines and points on them is defined by:\n\\begin{equation}\n\\forall p = \\begin{bmatrix}x\\\\y\\end{bmatrix} \\in \\ell,\\ \\  \\begin{bmatrix}a & b & c\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix} = 0\n\\end{equation}\n\nIn general, two lines $\\ell$ and $\\ell'$ will intersect at a point $x$. This point is defined as the cross product between $\\ell$ and $\\ell'$.\n\n\\begin{proof} \n    Given two intersecting lines $\\ell$ and $\\ell'$, the intersection point $x$ should lie on both lines $\\ell$ and $\\ell'$. Therefore, $x^T \\ell = 0$ and $x^T\\ell' = 0$. If we set $x = \\ell \\times \\ell'$, then by definition of cross product, the vector $x$ is orthogonal to both vectors $\\ell$ and $\\ell'$. By the definition of orthogonality, $x^T \\ell = 0$ and $x^T\\ell' = 0$. Thus, this definition of $x$ satisfies the constraints. \n\\end{proof}\n\nWhat about the case of parallel lines? Everyday knowledge expects these lines to never intersect. However, this definition could be rewritten to say that these lines intersect at infinity. In homogeneous coordinates, a point at infinity is written as $\\begin{bmatrix}x & y & 0\\end{bmatrix}^T$. Recall that the Euclidean coordinates are recovered by dividing all coordinates by the last coordinate. In this case, the last coordinate is zero, yielding a point at infinity. Therefore, homogeneous coordinates give a good formulation of determining intersections, even in cases of parallel lines.\n\nNow, let's consider two parallel lines $\\ell$ and $\\ell'$. When two lines are parallel, their slope is equal and thus $\\frac{a}{b} = \\frac{a'}{b'}$. If we compute the point of intersection using homogeneous coordinates, then we verify that\n\\begin{equation}\n\\ell \\times \\ell' \\propto \\begin{bmatrix}b \\\\ -a \\\\ 0  \\end{bmatrix}= x_\\infty\n\\end{equation}\n\nThus, we have confirmed our intuition that two parallel lines intersect at infinity. The point of intersection at infinity of two parallel lines is also called an \\emph{ideal point}. 
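To see this numerically, consider the following short sketch (Python with NumPy; the specific line coefficients are arbitrary examples), which computes the intersection of two parallel lines as a cross product and recovers an ideal point:\n\\begin{verbatim}\n# Sketch: two parallel lines in homogeneous form intersect\n# at an ideal point (last coordinate 0).\nimport numpy as np\n\nl1 = np.array([1.0, -2.0,  3.0])  # x - 2y + 3 = 0\nl2 = np.array([1.0, -2.0, -5.0])  # x - 2y - 5 = 0, same slope\nx_inf = np.cross(l1, l2)\nprint(x_inf)                   # [16. 8. 0.], proportional to [b, -a, 0]\nprint(l1 @ x_inf, l2 @ x_inf)  # 0.0 0.0, so it lies on both lines\n\\end{verbatim}\n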
One interesting property of a point at infinity is that all parallel lines with the same slope $-\\frac{a}{b}$ pass through the ideal point as shown below:\n\\begin{equation}\n\\ell ^T x_\\infty = \\begin{bmatrix}a & b & c\\end{bmatrix} \\begin{bmatrix}b \\\\ -a \\\\ 0\\end{bmatrix} = 0\n\\end{equation}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/line_infinity.png}\n\\caption{Points at infinity form lines at infinity.}\n\\label{fig:line_infinity}\n\\end{figure}\n\nThe concept of points of infinity can be extended to define \\emph{lines at infinity}. Consider two or more pairs of parallel lines. Each pair of parallel lines intersects at a point at infinity $\\{x_{\\infty,1},...,x_{\\infty,n}\\}$. The line $\\ell_\\infty$ that passes through all these points at infinity must satisfy $\\forall i, \\ell_\\infty^T x_{\\infty,i}= 0$. This means that $\\ell_\\infty = \\begin{bmatrix}0 & 0 & c\\end{bmatrix}^T$. Since $c$ is an arbitrary value, we can simply define $\\ell_\\infty = \\begin{bmatrix}0 & 0 & 1\\end{bmatrix}^T$.\n\nIf we apply a generic projective transformation $H$ to a point at infinity $p_\\infty$, what will happen? \n\\begin{equation}\np' =Hp_\\infty = \\begin{bmatrix} A &t \\\\ v&b \\end{bmatrix}\\begin{bmatrix}1\\\\1\\\\0\\end{bmatrix} = \\begin{bmatrix}p'_x\\\\ p'_y\\\\p'_z\\end{bmatrix}\n\\end{equation}\n\nNotice that the last element of $p'$ may become non-zero, which suggests that a projective transformation generally maps points at infinity to points that are no longer at infinity. However, this is not the case for affine transformations, which map points at infinity to points at infinity:\n\\begin{equation}\np' =Hp_\\infty = \\begin{bmatrix} A &t \\\\ 0&1 \\end{bmatrix}\\begin{bmatrix}1\\\\1\\\\0\\end{bmatrix} = \\begin{bmatrix}p'_x\\\\ p'_y\\\\0\\end{bmatrix}\n\\end{equation}\n\nNow let's apply a projective transformation $H$ to a line $\\ell$ to get a new line $\\ell'$. All points $x$ that lie on a line must satisfy the property $x^T\\ell = 0$. In the transformed space, we know that lines still map to lines, which means that $x'^T\\ell' = 0$. We can use the identity property to get \\[x^TI\\ell = x^TH^T H^{-T} \\ell = 0\\]If we apply a projective transformation to the line, then all the points become transformed as well, giving $x' = Hx$. Thus we get $x^TH^T H^{-T} \\ell = x'^T\\ell'$, and we find the projective transformation of a line is $\\ell' = H^{-T}\\ell$. Similar to our observations with points at infinity, we find that the projective transformation of a line at infinity does not necessarily map to another line at infinity. Additionally, affine transformations still map lines at infinity to lines at infinity.\n\n\\section{Vanishing Points and Lines}\nSo far, we have introduced the concepts of lines and points at infinity in 2D. Let us now introduce the equivalent concepts for 3D in their corresponding homogeneous coordinates. \n\nIn the 3D world, we are now introduced to the concept of planes. We can represent a plane as a vector $\\begin{bmatrix}a&b&c&d\\end{bmatrix} ^T$, where $(a,b,c)$ form a normal vector and $d$ is the distance from the origin to the plane in that normal vector's direction. Formally, a plane is defined as all the points $x$ which satisfy\n\\begin{equation}\nx^T\\begin{bmatrix}a\\\\b\\\\c\\\\d\\end{bmatrix} = ax_1 + bx_2 + cx_3 + d = 0\n\\end{equation}\nLines in 3D are defined as the intersection of two planes. 
Since they have four degrees of freedom (a defined intercept location and slopes in each of the three dimensions), they are difficult to represent nicely in 3D space. Please see Section 3.2.2 of the Hartley \\& Zisserman textbook for more details.\n\nPoints, however, are defined similarly in 3D as they are in 2D. Points at infinity in 3D are again defined as the intersection point of parallel lines in 3D. Furthermore, if we apply a projective transformation to one of these points at infinity $x_\\infty$, then we obtain a point $p_\\infty$ in the image plane, which is no longer at infinity in homogeneous coordinates. This point $p_\\infty$ is known as a \\emph{vanishing point}. But, what can we do with vanishing points?\n\nWe can derive a useful relationship between parallel lines in 3D, their corresponding vanishing point in the image, and the camera parameters $K,R,T$. Let us define $d = (a,b,c)$ as the direction of a set of 3D parallel lines in the camera reference system. These lines intersect at a point at infinity and the projection of such a point in the image returns the vanishing point $v$, which is defined by\n\\begin{equation}\nv = Kd\n\\end{equation}\nWe leave the derivation of the above equation as an exercise. This equation can be rewritten to extract the direction $d$: \n\\begin{equation}\nd = \\frac{K^{-1}v}{\\|K^{-1}v\\|}\n\\end{equation}\n\nIf we consider a plane $\\Pi$ as a union of sets of parallel lines, each set of parallel lines intersects at a point at infinity. The line that passes through such a set of points at infinity is the line at infinity $\\ell_\\infty$ associated to $\\Pi$. A line at infinity is also defined as the line where two parallel planes intersect. The projective transformation of $\\ell_\\infty$ to the image plane is no longer a line at infinity and is called the vanishing line or the \\emph{horizon line} $\\ell_{\\mathrm{horiz}}$. The horizon line is a line that passes through the corresponding vanishing points in the image. The horizon line can be computed as \n\\begin{equation}\n\\ell_{\\mathrm{horiz}} = H_P^{-T} \\ell_\\infty\n\\end{equation}\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/horizon.png}\n\\caption{The computed horizon line from a set of vanishing points.}\n\\label{fig:horizon}\n\\end{figure}\n\nThe concept of a horizon line allows us as humans to intuitively deduce properties about the image that may not be easily apparent mathematically. For example, in Figure~\\ref{fig:horizon}, although the lines on the ground are not parallel in image coordinates, we have a natural understanding that they are parallel in the 3D world.\n\nFurthermore, the horizon line allows us to compute useful properties about the world. 
For example, we can derive an interesting relationship between the normal $n$ of a plane in 3D with the corresponding horizon line $\\ell_{\\mathrm{horiz}}$ in an image:\n\\begin{equation}\nn = K^T \\ell_{\\mathrm{horiz}}\n\\end{equation}\nThis means that if we can recognize the horizon line associated with a plane, and if our camera is calibrated, then we can estimate the orientation of that plane.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/plane_infinity.png}\n\\caption{A set of two or more vanishing lines (the blue lines) defines the plane at infinity $\\Pi_\\infty$ (the yellow plane).}\n\\label{fig:plane_infinity}\n\\end{figure}\nBefore introducing the last property that relates vanishing points and lines, we first need to define the plane at infinity $\\Pi_\\infty$. This plane is defined by a set of 2 or more vanishing lines and is described by the vector $\\begin{bmatrix} 0 & 0 & 0 &1\\end{bmatrix}^T$ in homogeneous coordinates. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/angles.png}\n\\caption{Deriving the angle between two lines.}\n\\label{fig:angles}\n\\end{figure}\n\nThe last property we introduce relates lines and planes in 3D with the corresponding vanishing points and lines in the image plane. Suppose that two pairs of parallel lines in 3D have directions $d_1$ and $d_2$, and are associated with the points at infinity $x_{1,\\infty}$ and $x_{2,\\infty}$. Let $v_1$ and $v_2$ be the corresponding vanishing points. Then, we find that the angle $\\theta$ between $d_1$ and $d_2$ is given by the cosine rule:\n\\begin{equation}\n    \\begin{split}\n        \\cos\\theta &= \\frac{d_1 \\cdot d_2}{\\|d_1\\| \\|d_2\\|}\\\\\n        &= \\frac{v_1^T\\omega v_2}{\\sqrt{v_1^T \\omega v_1}\\sqrt{v_2^T \\omega v_2}}\n    \\end{split}\n    \\label{eq:angles}\n\\end{equation}\nwhere $\\omega = (KK^T)^{-1}$. \n\nWe can extend this idea further to the 3D planar case, in which we want to relate different planes in 3D. Recall that for any plane, we can compute its associated vanishing line $\\ell_\\mathrm{{horiz}}$ and its normal $K^T \\ell_{\\mathrm{horiz}}$. Therefore, we can determine the angle $\\theta$ between two planes by computing the angle between each of the planes' normal vectors $n_1$ and $n_2$. We derive the angle $\\theta$ between two planes with vanishing lines $\\ell_1$ and $\\ell_2$, respectively:\n\\begin{equation}\n    \\begin{split}\n        \\cos\\theta &= \\frac{n_1 \\cdot n_2}{\\|n_1\\| \\|n_2\\|}\\\\\n        &= \\frac{\\ell_1^T\\omega^{-1} \\ell_2}{\\sqrt{\\ell_1^T \\omega^{-1} \\ell_1}\\sqrt{\\ell_2^T \\omega^{-1} \\ell_2}}\n    \\end{split}\n\\end{equation}\n\n\\section{A Single View Metrology Example}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/example1.png}\n\\caption{The example setup with two vanishing points for a pair of perpendicular planes.}\n\\label{fig:example1}\n\\end{figure}\nSuppose that we can identify two planes in an image of the 3D world. Additionally, let's suppose that we can identify a pair of parallel lines on each of these planes. This allows us to estimate two vanishing points $v_1$ and $v_2$ in the image. Finally, let's suppose that we know that these planes are perpendicular in 3D. In this case, we know that from Equation~\\ref{eq:angles}, that $v_1\\omega v_2 = 0$. \n\nBut recall that $\\omega$ depends on the camera matrix $K$, which is potentially unknown at this time. 
Therefore, is knowing these two vanishing points sufficient for accurately estimating the camera parameters? Considering that $K$ has 5 degrees of freedom and that $v_1 \\omega v_2 = 0$ provides only one constraint, we do not have enough information to calculate $K$. \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/example2.png}\n\\caption{The example setup with three vanishing points for a set of mutually perpendicular planes.}\n\\label{fig:example2}\n\\end{figure}\nWhat if we are able to find another vanishing point $v_3$ for another mutually orthogonal plane? Then we know that $v_1\\omega v_2 = v_1\\omega v_3 = v_2\\omega v_3 = 0$. Since each pair gives a constraint, we only end up with 3 out of the 5 constraints needed to compute $K$. However, if we make the assumption that the camera has zero-skew and square pixels, then we can add the additional two constraints needed. Under these assumptions, we know that $\\omega$ takes on the form \n\\begin{equation}\n    \\omega = \\begin{bmatrix}\\omega_1 & 0 & \\omega_4 \\\\ 0 & \\omega_1 & \\omega_5 \\\\ \\omega_4 & \\omega_5 &\\omega_6 \\end{bmatrix}\n\\end{equation}\n\nIf you look carefully, there are four variables in the definition of $\\omega$. However, we can only know $\\omega$ up to scale, which reduces the number of unknown variables to three, allowing it to be solved. Once we have $\\omega$, we can use Cholesky decomposition to compute $K$. Thus, we have managed to calibrate the camera using a single image!\n\nOnce $K$ is known, we can reconstruct the 3D geometry of the scene. For example, we can compute the orientation of all the planes identified above. Therefore, a single image can be readily used to uncover a wealth of information about the scene it captures.\n\n\\end{document}\n", "meta": {"hexsha": "6b08a4cec152d94ecd4b029295aa592d03b7b8c5", "size": 17498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "02-single-view-metrology/02-single-view-metrology.tex", "max_stars_repo_name": "zishanqin/cs231a-notes", "max_stars_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 287, "max_stars_repo_stars_event_min_datetime": "2017-04-03T00:30:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T03:52:04.000Z", "max_issues_repo_path": "02-single-view-metrology/02-single-view-metrology.tex", "max_issues_repo_name": "zishanqin/cs231a-notes", "max_issues_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2019-06-26T11:23:10.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-16T09:00:43.000Z", "max_forks_repo_path": "02-single-view-metrology/02-single-view-metrology.tex", "max_forks_repo_name": "zishanqin/cs231a-notes", "max_forks_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 112, "max_forks_repo_forks_event_min_datetime": "2017-04-09T10:44:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T09:19:59.000Z", "avg_line_length": 85.7745098039, "max_line_length": 847, "alphanum_fraction": 0.7518573551, "num_tokens": 4888, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7879312006227324, "lm_q1q2_score": 0.5902291960278362}}
{"text": "\\section*{VALIDATION}\\label{validation}\nThe accuracy of the fitted models have been estimated using two test\ndata sets, containing 20\\% and 30\\% of the data. The results from this\nevaluation in seen in Tab.\\ref{tab:test_validataion} and\nFig.\\ref{fig:test_evaluation}. The actual predictions on the\ntest dataset is also plotted in Fig.\\ref{fig:predictions}. The\nSupport Vector Regressor seems to be the model that performs the best.\nHowever, none of the models seem to perform very well on this dataset\nand the accuracy changes very much with the test size, which means that\nthe ranking between the models is very unreliable. So one might as well\nuse the simple polynomial regression. The explicit formula from this\nregression is shown in Eq.\\ref{eq:model_polynomial}. It can be\nseen from this expression that the $Power$ depends on the ship draught\n$T$ and one of the wind components $U_{wind}$. There is however also\na very high interception term, which means that most of the $Power$ is\nnot explained by this model, if we assume that $Power$ is zero when\nthe ship is at rest in the harbour.\n\\begin{table}[H]\n\\scriptsize\n\\center\n\\caption{Models evaluated with the test sets}\n\\label{tab:test_validataion}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\\addlinespace\nmodel name & test size & r2 score & mean absolute error\\\\\n&  &  & \\\\\n\\hlineSVR & 0.2 & 0.64 & 687.07\\\\\nXGBoost & 0.2 & 0.27 & 976.96\\\\\nlasso & 0.2 & 0.29 & 999.13\\\\\npolynomial & 0.2 & 0.3 & 997.36\\\\\nridge & 0.2 & 0.3 & 997.54\\\\\nSVR & 0.3 & 0.83 & 510.07\\\\\nXGBoost & 0.3 & 0.55 & 856.03\\\\\nlasso & 0.3 & 0.45 & 1063.0\\\\\npolynomial & 0.3 & 0.45 & 1060.51\\\\\nridge & 0.3 & 0.45 & 1061.05\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{figure}[H]\n\\begin{center}\\includegraphics[width = 0.95\\textwidth]{figures/test_evaluation.pdf}\\end{center}\n\\vspace{-0.7cm}\n\\caption{Evaluation of the fitted models accuracy}\n\\label{fig:test_evaluation}\n\\end{figure}\n\\begin{figure}[H]\n\\begin{center}\\includegraphics[width = 0.95\\textwidth]{figures/predictions.pdf}\\end{center}\n\\vspace{-0.7cm}\n\\caption{Predictions on the test data sets}\n\\label{fig:predictions}\n\\end{figure}\n\\begin{equation}\ny = - 944.923049 T + 244.319815 U_{wind} + 2964.727832\n\\label{eq:model_polynomial}\n\\end{equation}\n", "meta": {"hexsha": "3bb4688f46fe0aa5865d21c4433d63b2a963a03e", "size": 2214, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "validation.tex", "max_stars_repo_name": "martinlarsalbert/ship_power_prediction_latex", "max_stars_repo_head_hexsha": "a264e9cfb44423b2f0f9954377088e6e97c5f40b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "validation.tex", "max_issues_repo_name": "martinlarsalbert/ship_power_prediction_latex", "max_issues_repo_head_hexsha": "a264e9cfb44423b2f0f9954377088e6e97c5f40b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "validation.tex", "max_forks_repo_name": "martinlarsalbert/ship_power_prediction_latex", "max_forks_repo_head_hexsha": "a264e9cfb44423b2f0f9954377088e6e97c5f40b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5357142857, "max_line_length": 95, 
"alphanum_fraction": 0.7348690154, "num_tokens": 726, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5902291848368375}}
{"text": "\\section{Arithmetic task}\n\nThe aim of the ``Arithmetic task'' is to directly test arithmetic models ability to extrapolate beyond the training range. Additionally, our generalized version provides a high degree of flexibility in how the input is shaped, sampled, and the problem complexity.\n\nOur ``arithmetic task'' is identical to the ``simple function task'' in the NALU paper \\cite{trask-nalu}. However, as they do not describe their setup in details, we use the setup from \\citet{maep-madsen-johansen-2019}, which provide Algorithm \\ref{tab:simple-function-task-defaults}, an evaluation-criterion to if and when the model has converged, the sparsity error, as well as methods for computing confidence intervals for success-rate and the sparsity error.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.7]{graphics/function_task_static_problem.pdf}\n\\caption{Shows how the dataset is parameterized.}\n\\label{fig:simple-function-task-problem}\n\\end{figure}\n\n\\subsection{Dataset generation}\n\\label{sec:appendix:simple-function-task:data-generation}\n\nThe goal is to sum two random subsets of a vector $\\mathbf{x}$ ($a$ and $b$), and perform an arithmetic operation on these ($a \\circ b$).\n\n\\begin{equation}\n    a = \\sum_{i=s_{1,\\mathrm{start}}}^{s_{1,\\mathrm{end}}} x_i, \\quad b = \\sum_{i=s_{2,\\mathrm{start}}}^{s_{2,\\mathrm{end}}} x_i, \\quad t = a \\circ b\n\\end{equation}\n\nAlgorithm \\ref{alg:simple-function-task-generator} defines the exact procedure to generate the data, where an interpolation range will be used for training and validation and an extrapolation range will be used for testing. Default values are defined in table \\ref{tab:simple-function-task-defaults}.\n\n\\begin{table}[h]\n\\caption{Default dataset parameters for ``Arithmetic task''}\n\\label{tab:simple-function-task-defaults}\n\\centering\n\\begin{tabular}{cc}\n\\begin{minipage}{.4\\linewidth}\n\\begin{tabular}{r l}\n\\toprule\n Parameter name & Default value \\\\\n \\midrule\n Input size & 100 \\\\\n Subset ratio & 0.25 \\\\\n Overlap ratio & 0.5 \\\\\n \\bottomrule\n\\end{tabular}\n\\end{minipage} &\n\\begin{minipage}{.4\\linewidth}\n\\begin{tabular}{r l}\n\\toprule\n Parameter name & Default value \\\\\n \\midrule\n Interpolation range & $U[1,2]$ \\\\\n Extrapolation range & $U[2,6]$ \\\\\n \\\\\n \\bottomrule\n\\end{tabular}\n\\end{minipage}\n\\end{tabular}\n\\end{table}\n\n\\begin{algorithm}[h]\n  \\caption{Dataset generation algorithm for ``Arithmetic task''}\n  \\begin{algorithmic}[1]\n    \\Function{Dataset}{${\\Call{Op}{\\cdot, \\cdot}: \\mathrm{Operation}}$, ${i: \\mathrm{Input Size}}$, ${s: \\mathrm{Subset Ratio}}$, ${o: \\mathrm{Overlap Ratio}}$, ${\\hspace{3cm}R: \\mathrm{Range}}$}\n      \\Let{$\\mathbf{x}$}{\\Call{Uniform}{$R_{lower}, R_{upper}, i$}} \\Comment{Sample $i$ elements uniformly}\n      \\Let{$k$}{\\Call{Uniform}{$0, 1 - 2s - o$}} \\Comment{Sample offset}\n      \\Let{$a$}{\\Call{Sum}{$\\mathbf{x}[ik:i(k+s)]$}} \\Comment{Create sum $a$ from subset}\n      \\Let{$b$}{\\Call{Sum}{$\\mathbf{x}[i(k+s-o):i (k+2s-0)]$}} \\Comment{Create sum $b$ from subset}\n      \\Let{$t$}{\\Call{Op}{$a, b$}} \\Comment{Perform operation on $a$ and $b$}\n      \\State \\Return{$x, t$}\n    \\EndFunction\n  \\end{algorithmic}\n  \\label{alg:simple-function-task-generator}\n\\end{algorithm}\n\n\\subsection{Model defintions and setup}\n\nModels are defined in table \\ref{tab:simple-function-task-model-defintions} and are all optimized with Adam optimization \\cite{adam-optimization} 
using default parameters, and trained over $5 \\cdot 10^6$ iterations. Training takes about 8 hours on a single CPU core (\\text{8-Core Intel Xeon E5-2665 2.4GHz}). We run 19150 experiments on an HPC cluster.\n\nThe training dataset is continuously sampled from the interpolation range, with a different seed used for each experiment. All experiments use a mini-batch size of 128 observations, a fixed validation dataset with $1 \\cdot 10^4$ observations sampled from the interpolation range, and a fixed test dataset with $1 \\cdot 10^4$ observations sampled from the extrapolation range.\n\n\\begin{table}[h]\n\\caption{Model definitions}\n\\label{tab:simple-function-task-model-defintions}\n\\centering\n\\begin{tabular}{r l l l l l}\n\\toprule\n Model & Layer 1 & Layer 2 & $\\hat{\\lambda}_{\\mathrm{sparse}}$ & $\\lambda_{\\mathrm{start}}$ & $\\lambda_{\\mathrm{end}}$ \\\\\n \\midrule\n NMU & NAU & NMU & 10 & $10^6$ & $2 \\cdot 10^6$ \\\\\n NAU & NAU & NAU & 0.01 & $5 \\cdot 10^3$ & $5 \\cdot 10^4$ \\\\\n $\\mathrm{NAC}_{\\bullet}$ & $\\mathrm{NAC}_{+}$ & $\\mathrm{NAC}_{\\bullet}$ & -- & -- & -- \\\\\n $\\mathrm{NAC}_{\\bullet,\\sigma}$ & $\\mathrm{NAC}_{+}$ & $\\mathrm{NAC}_{\\bullet,\\sigma}$ & -- & -- & -- \\\\\n $\\mathrm{NAC}_{\\bullet,\\mathrm{NMU}}$ & $\\mathrm{NAC}_{+}$ & $\\mathrm{NAC}_{\\bullet,\\mathrm{NMU}}$ & 10 & $10^6$ & $2 \\cdot 10^6$ \\\\\n $\\mathrm{NAC}_{+}$ & $\\mathrm{NAC}_{+}$ & $\\mathrm{NAC}_{+}$ & -- & -- & -- \\\\\n NALU & NALU & NALU & -- & -- & -- \\\\\n Linear & Linear & Linear & -- & -- & -- \\\\\n ReLU & ReLU & ReLU & -- & -- & -- \\\\\n ReLU6 & ReLU6 & ReLU6 & -- & -- & -- \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\subsection{Ablation study}\n\\label{sec:appendix:ablation-study}\nTo validate our model, we perform an ablation on the multiplication problem. Some noteworthy observations:\n\n\\begin{enumerate}\n    \\item None of the $W$ constraints, such as $\\mathcal{R}_{sparse}$ and clamping W to be in $[0, 1]$, are necessary when the hidden size is just $2$.\n    \\item Removing the $\\mathcal{R}_{sparse}$ causes the NMU to immediately fail for larger hidden sizes.\n    \\item Removing the clamping of W does not cause much difference. This is because $\\mathcal{R}_{sparse}$ also constrains $W$ outside of $[0, 1]$. The regularizer used here is $\\mathcal{R}_{sparse} = \\min(|W|, |1 - W|)$, which is identical to the one used in other experiments in $[0, 1]$, but is also valid outside $[0, 1]$. Doing this gives only a slightly slower convergence. Although this cannot be guaranteed in general, as the regularizer is omitted during the initial optimization.\n    \\item Removing both constraints gives a somewhat satisfying solution, but with a lower success-rate, slower convergence, and higher sparsity error.\n\\end{enumerate}\n\nIn conclusion, both constraints are valuable, as they provide faster convergence and a sparser solution, but they are not critical to the success-rate of the NMU.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{results/simple_function_static_mul_hidden_size_ablation.pdf}\n\\caption{Ablation study where $\\mathcal{R}_{sparse}$ is removed and the clamping of W is removed. 
There are 50 experiments with different seeds, for each configuration.}\n\\label{fig:simple-function-static-ablation}\n\\end{figure}\n\n\\subsection{Effect of dataset parameter}\n\\label{sec:appendix-simple-function-task:dataset-parameter-effect}\n\nTo stress test the models on the multiplication task, we vary the dataset parameters one at a time while keeping the others at their default value (default values in table \\ref{tab:simple-function-task-defaults}). Each configuration runs for 50 experiments with different seeds. The results are visualized in figure \\ref{fig:simple-function-static-dataset-parameters-boundary}.\n\nIn figure \\ref{fig:simple-function-static-theoreical-claims-experiment}, the interpolation-range is changed; therefore, the extrapolation-range needs to be changed such that it doesn't overlap. For each interpolation-range the following extrapolation-range is used: ${\\mathrm{U}[-2,-1] \\text{ uses } \\mathrm{U}[-6,-2]}$, ${\\mathrm{U}[-2,2] \\text{ uses } \\mathrm{U}[-6,-2] \\cup \\mathrm{U}[2,6]}$, ${\\mathrm{U}[0,1] \\text{ uses } \\mathrm{U}[1,5]}$, ${\\mathrm{U}[0.1,0.2] \\text{ uses } \\mathrm{U}[0.2,2]}$, ${\\mathrm{U}[1.1,1.2] \\text{ uses } \\mathrm{U}[1.2,6]}$, ${\\mathrm{U}[1,2] \\text{ uses } \\mathrm{U}[2,6]}$, ${\\mathrm{U}[10, 20] \\text{ uses } \\mathrm{U}[20, 40]}$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\linewidth,trim={0 1.3cm 0 0},clip]{results/simple_function_static_mul_input_size.pdf}\n\\includegraphics[width=\\linewidth,trim={0 1.3cm 0 0.809cm},clip]{results/simple_function_static_mul_overlap.pdf}\n\\includegraphics[width=\\linewidth,trim={0 0 0 0.809cm},clip]{results/simple_function_static_mul_subset.pdf}\n\\caption{Shows the effect of the dataset parameters.}\n\\label{fig:simple-function-static-dataset-parameters-boundary}\n\\end{figure}\n\n\\subsection{Gating convergence experiment}\n\\label{sec:appendix:nalu-gate-experiment}\n\nIn the interest of adding some understanding of what goes wrong in the NALU gate, and the shared weight choice that NALU employs to remedy this, we introduce the following experiment.\n\nWe train two models to fit the arithmetic task. Both use the $\\mathrm{NAC}_{+}$ in the first layer and NALU in the second layer. The only difference is that one model shares the weight between $\\mathrm{NAC}_{+}$ and $\\mathrm{NAC}_{\\bullet}$ in the NALU, and the other treats them as two separate units with separate weights. In both cases NALU should gate between $\\mathrm{NAC}_{+}$ and $\\mathrm{NAC}_{\\bullet}$ and choose the appropriate operation. Note that this NALU model is different from the one presented elsewhere in this paper, including the original NALU paper \\cite{trask-nalu}. The typical NALU model is just two NALU layers with shared weights.\n\nFurthermore, we also introduce a new gated unit that simply gates between our proposed NMU and NAU, using the same sigmoid gating-mechanism as in the NALU. This combination is done with separate weights, as the NMU and NAU use different weight constraints, and the weights can therefore not be shared.\n\nThe models are trained and evaluated over 100 different seeds on the multiplication and addition task. A histogram of the gate-value for all seeds is presented in figure \\ref{fig:simple-function-static-nalu-gate-graph} and table \\ref{tab:simple-function-static-nalu-gate-table} contains a summary. 
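For reference, the sigmoid gating-mechanism under discussion can be sketched as follows (a minimal NumPy sketch; the gate weights $G$ and the sub-unit outputs $a$ and $m$ are placeholders, and biases and initialization details are omitted):\n\\begin{verbatim}\n# Sketch of the sigmoid gate between the additive sub-unit\n# output a and the multiplicative sub-unit output m.\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef gated_output(x, G, a, m):\n    g = sigmoid(G @ x)            # gate in (0, 1), one per output\n    return g * a + (1.0 - g) * m  # g -> 1: addition, g -> 0: multiplication\n\\end{verbatim}\n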
Some noteworthy observations:\n\n\\vspace{-0.3cm}\\begin{enumerate}\n    \\item When the NALU weights are separated, far more trials converge to select $\\mathrm{NAC}_{+}$ for both the addition and multiplication task. Sharing the weights between $\\mathrm{NAC}_{+}$ and $\\mathrm{NAC}_{\\bullet}$ makes the gating less likely to converge for addition.\n    \\item The performance of the addition task is dependent on NALU selecting the right operation. In the multiplication task, when the right gate is selected, $\\mathrm{NAC}_{\\bullet}$ does not converge consistently, unlike our NMU, which converges more consistently.\n    \\item Which operation the gate converges to appears to be mostly random and independent of the task. These issues are caused by the sigmoid gating-mechanism and thus exist independently of the sub-units used.\n\\end{enumerate}\n\n\\vspace{-0.2cm}These observations validate that the NALU gating-mechanism does not converge as intended. This becomes a critical issue when more gates are present, as is normally the case, e.g.\\ when stacking multiple NALU layers together.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.93\\linewidth]{results/function_task_static_nalu.pdf}\n\\vspace{-0.2cm}\\caption{Shows the gating-value in the NALU layer and a variant that uses NAU/NMU instead of $\\mathrm{NAC}_{+}$/$\\mathrm{NAC}_{\\bullet}$. Separate/shared refers to the weights in $\\mathrm{NAC}_{+}$/$\\mathrm{NAC}_{\\bullet}$ used in NALU.}\n\\label{fig:simple-function-static-nalu-gate-graph}\n\\end{figure}\n\n\\input{results/function_task_static_nalu.tex}\n\n\\subsection{Regularization}\n\\label{sec:appendix:simple-function-task:regualization}\n\nThe $\\lambda_{start}$ and $\\lambda_{end}$ are simply selected based on how much time it takes for the model to converge. 
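As a minimal sketch (Python; the function and variable names are ours), the piecewise-linear warm-up defined by the equation below can be implemented and checked against the NMU defaults from the model table:\n\\begin{verbatim}\n# Sketch: piecewise-linear warm-up of the sparsity regularizer,\n# matching the lambda_sparse equation below.\ndef sparsity_scale(t, scale, start, end):\n    return scale * max(min((t - start) / (end - start), 1.0), 0.0)\n\n# NMU defaults: scale = 10, start = 1e6, end = 2e6\nassert sparsity_scale(5e5, 10, 1e6, 2e6) == 0.0    # off before start\nassert sparsity_scale(1.5e6, 10, 1e6, 2e6) == 5.0  # ramping up\nassert sparsity_scale(3e6, 10, 1e6, 2e6) == 10.0   # full strength\n\\end{verbatim}\n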
The sparsity regularizer should not be used during early optimization as this part of the optimization is exploratory and concerns finding the right solution by getting each weight on the right side of $\\pm 0.5$.\n\nIn figures \\ref{fig:simple-fnction-static-regularizer-add}, \\ref{fig:simple-fnction-static-regularizer-sub} and \\ref{fig:simple-fnction-static-regularizer-mul}, the scaling factor $\\hat{\\lambda}_{\\mathrm{sparse}}$ is optimized.\n\\begin{equation}\n\\lambda_{\\mathrm{sparse}} = \\hat{\\lambda}_{\\mathrm{sparse}} \\max(\\min(\\frac{t - \\lambda_{\\mathrm{start}}}{\\lambda_{\\mathrm{end}} - \\lambda_{\\mathrm{start}}}, 1), 0)\n\\end{equation}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\linewidth,trim={0 1.3cm 0 0},clip]{results/simple_function_static_regualization_add.pdf}\n\\caption{Shows effect of $\\hat{\\lambda}_{\\mathrm{sparse}}$ in NAU on the arithmetic dataset for the $\\bm{+}$ operation.}\n\\label{fig:simple-fnction-static-regularizer-add}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\linewidth,trim={0 1.3cm 0 0},clip]{results/simple_function_static_regualization_sub.pdf}\n\\caption{Shows effect of $\\hat{\\lambda}_{\\mathrm{sparse}}$ in NAU on the arithmetic dataset for the $\\bm{-}$ operation.}\n\\label{fig:simple-fnction-static-regularizer-sub}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\linewidth,trim={0 1.3cm 0 0},clip]{results/simple_function_static_regualization_mul.pdf}\n\\caption{Shows effect of $\\hat{\\lambda}_{\\mathrm{sparse}}$ in NMU on the arithmetic dataset for the $\\bm{\\times}$ operation.}\n\\label{fig:simple-fnction-static-regularizer-mul}\n\\end{figure}\n\n\\subsection{Comparing all models}\n\\label{sec:appendix:comparison-all-models}\n\nTable \\ref{tab:function-task-static-defaults-all} compares all models on all operations used in NALU \\cite{trask-nalu}. All variations of models and operations are trained for 100 different seeds to build confidence intervals. Some noteworthy observations are:\n\n\\begin{enumerate}\n    \\item Division does not work for any model, including the $\\mathrm{NAC}_{\\bullet}$ and NALU models. This may seem surprising but is actually in line with the results from the NALU paper (\\citet{trask-nalu}, table 1) where there is a large error given the interpolation range. The extrapolation range has a smaller error, but this is an artifact of their evaluation method where they normalize with a random baseline. Since a random baseline will have a higher error for the extrapolation range, errors just appear to be smaller. A correct solution to division should have both a small interpolation and extrapolation error. \n    \\item $\\mathrm{NAC}_{\\bullet}$ and NALU are barely able to learn $\\sqrt{z}$, with just 2\\% success-rate for NALU and 7\\% success-rate for $\\mathrm{NAC}_{\\bullet}$.\n    \\item NMU is fully capable of learning $z^2$. It learns this by learning the same subset twice in the NAU layer, which is also how $\\mathrm{NAC}_{\\bullet}$ learns $z^2$.\n    \\item The Gated NAU/NMU (discussed in section \\ref{sec:appendix:nalu-gate-experiment}) works very poorly, because the NMU initialization assumes that $E[z_{h_{\\ell-1}}] = 0$. This is usually true, as discussed in section \\ref{sec:methods:moments-and-initialization}, but not in this case for the first layer. 
In the recommended NMU model, the NMU layer appears after the NAU layer, which ensures that this assumption is satisfied.\n\end{enumerate}\n\n\input{results/function_task_static_all.tex}", "meta": {"hexsha": "a2cb73238312aa51ce9a6ff232af449cbdad831f", "size": 14909, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/appendix/simple-function-task.tex", "max_stars_repo_name": "AndreasMadsen/stable-nalu", "max_stars_repo_head_hexsha": "b3296ace137ffa4854edeef3759f1578b7650210", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 147, "max_stars_repo_stars_event_min_datetime": "2019-10-07T11:01:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-16T02:51:18.000Z", "max_issues_repo_path": "paper/appendix/simple-function-task.tex", "max_issues_repo_name": "AndreasMadsen/stable-nalu", "max_issues_repo_head_hexsha": "b3296ace137ffa4854edeef3759f1578b7650210", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-12-03T12:40:21.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-03T12:40:21.000Z", "max_forks_repo_path": "paper/appendix/simple-function-task.tex", "max_forks_repo_name": "AndreasMadsen/stable-nalu", "max_forks_repo_head_hexsha": "b3296ace137ffa4854edeef3759f1578b7650210", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2019-12-21T15:58:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-03T08:32:38.000Z", "avg_line_length": 72.7268292683, "max_line_length": 662, "alphanum_fraction": 0.7415654974, "num_tokens": 4131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5902291848368375}}
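To make the warm-up rule above concrete, here is a minimal sketch of the $\lambda_{\mathrm{sparse}}$ scheduling equation from the regularization subsection; the function and argument names are ours, and $t$ is assumed to be the optimization-step counter (an illustration, not the reference implementation).\n\begin{verbatim}\ndef sparse_regularizer_scale(t, lambda_hat, lambda_start, lambda_end):\n    # Zero before lambda_start, lambda_hat after lambda_end,\n    # and a linear ramp in between -- matching the equation above.\n    ramp = (t - lambda_start) / (lambda_end - lambda_start)\n    return lambda_hat * max(min(ramp, 1.0), 0.0)\n\end{verbatim}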
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%\n%\n%   Thesis template by Youssif Al-Nashif\n%\n%   May 2020\n%\n%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Alternate Clustering Method with KDE}\n\nAs an alternative to popular clustering methods that were used in the preceding section, another clustering method was attempted that features use of kernel density estimation. By using a kernel density estimation (KDE) on the kernel values, we can cluster the documents into similar groups, defined by local maxima. \\\\\nFirst, the graph kernel matrix, $K$ is taken, and we extract a row, $i$, and we compute a KDE using R's \\texttt{density()} function. Now, the default value for bandwidth will likely produce a smooth, unimodal or bimodal distribution, but this is not what the goal is. The goal is to use the KDE to find clusters through their value appearing in a local maxima. So, through producing a KDE with few local maxima, we produce very few clusters. If the number of clusters needs to increase, we can essentially overfit the KDE and abuse the use of the bandwidth parameter to create a KDE with many more local maxima and minima.\\\\\n\n%% INSERT GRAPHIC HERE.\n**GRAPHIC**\\\\ \n\nOnce a KDE with a sufficient number of local maxima, which is determined by the user, then cluster breaks are located. If we consider the estimated KDE to be a function $k(x)$, where $x$ is a kernel value, and $k(x)$ is the estimated density at a value $x$, then we can do calculus to locate the break points. \n\n", "meta": {"hexsha": "058412ce1e810260e58c8a044aedf8d4a1292e2b", "size": 1457, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_stars_repo_name": "Levi-Nicklas/GraphDocNLP", "max_stars_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-27T02:08:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-27T02:08:34.000Z", "max_issues_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_issues_repo_name": "Levi-Nicklas/GraphDocNLP", "max_issues_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2021-02-18T16:07:14.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-25T14:18:51.000Z", "max_forks_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/04_KDE_Cluster.tex", "max_forks_repo_name": "Levi-Nicklas/GraphDocNLP", "max_forks_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.6842105263, "max_line_length": 624, "alphanum_fraction": 0.7419354839, "num_tokens": 333, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8519528094861981, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5900982889050204}}
{"text": "\\documentclass[11pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}\n\\usepackage{hyperref}\n\\title{Three-Dimensional Idealized Model for Tidal Motion in Tidal Estuaries. Part II: An Efficient Finite-Element Multigrid Approach}\n\\author{Matthias M\\\"oller}\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\n...\n\\end{abstract}\n\n\\section{Introduction}\n...\n\n\\section{Mathematical Model}\n\nThe linear elliptic partial differential equation for the sea surface elevation as derived in [?] reads\n\\begin{alignat}{2}\n\\nabla\\cdot(\\mathbb{A}\\nabla N)+i\\omega N&=0  \\quad && \\text{in } \\Omega,\n\\label{eq:sse_scalar1}\\\\\nN&=A_{M_2} \\quad && \\text{on } \\Gamma_D,\n\\label{eq:sse_scalar2}\\\\\n(\\mathbb{A}\\nabla N)\\cdot\\mathbf{n}&=0 \\quad && \\text{on } \\Gamma_N,\n\\label{eq:sse_scalar3}\n\\end{alignat}\nwhere $\\Gamma_D$ and $\\Gamma_N$ denote the Dirichlet and Neumann boundary part and $A_{M_2}$ is the amplitude of sea surface elevation at the Dirichlet boundary. Both the sea surface elevation $N\\in\\mathbb{C}$ and the matrix\n$$\n\\mathbb{A}=\n\\begin{bmatrix}\nC_1 & C_2\\\\\nC_3 & C_4\n\\end{bmatrix},\\quad\\text{where}\\quad C_k\\in\\mathbb{C}, k=1,\\dots,4\n$$\nconsist of complex values, where $i$ denotes the imaginary number, i.e. $i^2=-1$.\n\nLet us define the $z$-dependent function $C:\\mathbb{C}\\mapsto\\mathbb{C}$ as\n\\begin{equation}\nC(\\alpha,z)=\\frac{g}{\\alpha^3A_v}\\left[\\frac{\\alpha^2A_vz\\sinh(\\alpha h)-s\\sinh(\\alpha z)+\\alpha z s \\cosh(\\alpha h)}{\\alpha A_v\\sinh(\\alpha h)+s\\cosh(\\alpha h)}\\right]\n\\end{equation}\nand introduce the auxiliary real-valued parameters\n$$\n\\alpha_1=\\sqrt{i\\frac{\\omega+f}{A_\\nu}},\\qquad\n\\alpha_2=\\sqrt{i\\frac{\\omega-f}{A_\\nu}}.\n$$\nThen the complex-valued coefficients $C_1,\\dots,C_4$ are given by\n\\begin{alignat}{4}\nC_1&=&\\frac{C(\\alpha_1,-h)+C(\\alpha_2,-h)}{2},&& \\qquad\nC_2&=&i\\frac{C(\\alpha_1,-h)-C(\\alpha_2,-h)}{2},\\\\\nC_3&=&-i\\frac{C(\\alpha_1,-h)-C(\\alpha_2,-h)}{2},&&\\qquad\nC_4&=&\\frac{C(\\alpha_1,-h)+C(\\alpha_2,-h)}{2}.\n\\end{alignat}\nNote that $C_1=C_4$ and $C_2=-C_3$. 
The other parameters are given in the following table\n\n\begin{center}\n\begin{tabular}{|l|l|l|l|}\n\hline\nSymbol & Value & Description & Units\\\n\hline\n$(x,y,z)$ & & Cartesian coordinates & $m$\\\n$t$ & & time & $s$\\\n$L$ & & length scale & $m$\\\n$H$ & & mean depth & $m$\\\n$h(x,y)$ & 10 & bed profile & $m$\\\n$\eta(x,y,t)$ & & sea surface elevation above $z=0$ & $m$\\\n$\mathbf{u}(x,y,z,t)$ & & velocity vector & $m s^{-1}$\\\n$f(x,y)$ & 0 & Coriolis acceleration (Earth rotation) & $s^{-1}$\\\n$g$ & 10 & gravitational acceleration & $ms^{-2}$\\\n$A_h(x,y)$ & & horizontal eddy viscosity & $m^2 s^{-1}$\\\n$A_\nu(x,y)$ & 0.012 & vertical eddy viscosity & $m^2 s^{-1}$\\\n$s(x,y)$ & 0.049 & bottom stress & $m s^{-1}$\\\n$A_{M_2}(x,y)$ & 1.0 & $M_2$ amplitude at seaward side & $m$\\\n$\omega$ & $1.4\times 10^{-4}$ & tidal frequency of $M_2$ tide & $s^{-1}$\\\n\hline\n\end{tabular}\n\end{center}\n\n\subsection{Variational formulation}\nThe variational formulation reads: find $N\in V_{A_{M_2}}$ such that\n\begin{equation}\n\int_\Omega\nabla \varphi\cdot(-\mathbb{A}\nabla N)+ i\omega \varphi N\,\mathrm{d}\mathbf{x}=0\qquad \forall \varphi\in V_0.\n\end{equation}\nThe essential boundary conditions have been built into the trial and test spaces\n\begin{align}\nV_{A_{M_2}}&=\{v\in H^1(\Omega)\,:\, v=A_{M_2}\text{ on }\Gamma_D\},\\\nV_0&=\{\varphi\in H^1(\Omega)\,:\, \varphi=0\text{ on }\Gamma_D\}.\n\end{align}\n\n\subsection{Finite element approximation}\nLet the variable for the sea surface elevation be approximated by finite elements as follows\n\begin{equation}\nN(\mathbf{x})=\sum_{j}\varphi_j(\mathbf{x})N_j,\n\end{equation}\nwhere $\{\varphi_j(\mathbf{x})\}_j$ denotes a basis of \emph{real}-valued functions and the coefficients $N_j\in\mathbb{C}$ are complex-valued.\n\nInstead of working with complex values directly, we split them into their real and imaginary parts, i.e. $N_j=N_j^\mathrm{Re}+iN_j^\mathrm{Im}$, and use separate scalar finite element fields $N^\mathrm{Re}$ and $N^\mathrm{Im}$ for discretizing each one of them. Here and below the superscripts $(\cdot)^\mathrm{Re}$ and $(\cdot)^\mathrm{Im}$ denote the real and imaginary part, respectively. 
Then the problem at hand reads: find $(N^\\mathrm{Re},N^\\mathrm{Im})\\in V_{A_{M_2}^\\mathrm{Re}}\\times V_{A_{M_2}^\\mathrm{Im}}$ such that\n\\begin{equation}\n\\sum_j\\int_\\Omega \\nabla\\varphi_i\\cdot\\left(-\\mathbb{A}^\\mathrm{Re}\\nabla N_j^\\mathrm{Re}+\\mathbb{A}^\\mathrm{Im}\\nabla N_j^\\mathrm{Im}\\right)-\\omega \\varphi_i N_j^\\mathrm{Im}\\,\\mathrm{d}\\mathbf{x}=0\n\\end{equation}\nand\n\\begin{equation}\n\\sum_ji\\int_\\Omega \\nabla\\varphi_i\\cdot\\left(-\\mathbb{A}^\\mathrm{Im}\\nabla N_j^\\mathrm{Re}-\\mathbb{A}^\\mathrm{Re}\\nabla N_j^\\mathrm{Im}\\right)+\\omega \\varphi_i N_j^\\mathrm{Re}\\,\\mathrm{d}\\mathbf{x}=0\n\\end{equation}\nfor all admissible real valued test functions $\\varphi_i\\in V_{0}$.\nTo simplify the notation let us define the following auxiliary stiffness matrices and the standard consistent mass matrix\n\\begin{alignat}{2}\nS^\\mathrm{Re}&=\\{s_{ij}^\\mathrm{Re}\\} \\qquad s_{ij}^\\mathrm{Re}&=&\\int_\\Omega \\nabla\\varphi_i\\cdot(-\\mathbb{A}^\\mathrm{Re}\\nabla\\varphi_j)\\,\\mathrm{d}\\mathbf{x},\\\\\nS^\\mathrm{Im}&=\\{s_{ij}^\\mathrm{Im}\\} \\qquad s_{ij}^\\mathrm{Im}&=&\\int_\\Omega \\nabla\\varphi_i\\cdot(-\\mathbb{A}^\\mathrm{Im}\\nabla\\varphi_j)\\,\\mathrm{d}\\mathbf{x},\\\\\nM&=\\{m_{ij}\\} \\qquad m_{ij}&=&\\int_\\Omega \\varphi_i\\varphi_j\\,\\mathrm{d}\\mathbf{x}.\n\\end{alignat}\nThen the problem at hand can be written in compact matrix notation as follows:\n\\begin{equation}\n\\begin{bmatrix}\nS^\\mathrm{Re} & -S^\\mathrm{Im}-\\omega M\\\\\nS^\\mathrm{Im}+\\omega M &  S^\\mathrm{Re}\n\\end{bmatrix}\n\\begin{bmatrix}\nN^\\mathrm{Re}\\\\\nN^\\mathrm{Im}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0\\\\\n0\n\\end{bmatrix}.\n\\end{equation}\n\n\\section{First-order system formulation}\n\nIn what follows, we present two alternative reformulations of the second-order scalar model problem as a first order system. 
In essence, the two formulations vary in the treatment of the anisotropic diffusion operator $\mathbb{A}$.\n\n\subsection{Model problems}\nLet $\boldsymbol{\sigma}=\mathbb{A}\nabla N$ in order to convert the scalar problem \eqref{eq:sse_scalar1}--\eqref{eq:sse_scalar3} into the first-order system\n\begin{alignat}{2}\n\nabla\cdot\boldsymbol{\sigma}+i\omega N&=0  \quad && \text{in } \Omega,\n\label{eq:sse_system1}\\\n\boldsymbol{\sigma}-\mathbb{A}\nabla N&=0 \quad && \text{in } \Omega,\n\label{eq:sse_system2}\\\nN&=A_{M_2} \quad && \text{on } \Gamma_D,\n\label{eq:sse_system3}\\\n\boldsymbol{\sigma}\cdot\mathbf{n}&=0 \quad && \text{on } \Gamma_N.\n\label{eq:sse_system4}\n\end{alignat}\n\nIt is also possible to define $\boldsymbol{\tau}=\nabla N$ as the gradient of the sea surface elevation, which yields the alternative first-order system resulting from the scalar problem \eqref{eq:sse_scalar1}--\eqref{eq:sse_scalar3}\n\begin{alignat}{2}\n\nabla\cdot(\mathbb{A}\boldsymbol{\tau})+i\omega N&=0  \quad && \text{in } \Omega,\n\label{eq:sse_system5}\\\n\boldsymbol{\tau}-\nabla N&=0 \quad && \text{in } \Omega,\n\label{eq:sse_system6}\\\nN&=A_{M_2} \quad && \text{on } \Gamma_D,\n\label{eq:sse_system7}\\\n(\mathbb{A}\boldsymbol{\tau})\cdot\mathbf{n}&=0 \quad && \text{on } \Gamma_N.\n\label{eq:sse_system8}\n\end{alignat}\nBoth approaches lead to slightly different variational formulations and, consequently, system matrices.\n\n\subsection{Variational formulation I}\nThe variational formulation associated with \eqref{eq:sse_system1}--\eqref{eq:sse_system4} reads: find $N\in W$ and $\boldsymbol{\sigma}\in\mathbf{Q}_0$ such that \n\begin{alignat}{2}\n\int_\Omega\varphi\left(\nabla\cdot\boldsymbol{\sigma}+i\omega N\right)\,\mathrm{d}\mathbf{x}&=0 \qquad && \forall \varphi\in W,\n\label{eq:weak1}\\\n\int_\Omega\boldsymbol{\psi}\cdot\left(\boldsymbol{\sigma}-\mathbb{A}\nabla N\right)\,\mathrm{d}\mathbf{x}&=0 \qquad && \forall \boldsymbol{\psi}\in \mathbf{Q}_0,\n\label{eq:weak2}\n\end{alignat}\nwhere the test and trial spaces are defined as follows:\n\begin{equation}\nW=L^2(\Omega),\quad\n\mathbf{Q}_0=\{\mathbf{q}\in H(\text{div};\Omega)\,:\,\mathbf{q}\cdot\mathbf{n}=0 \text{ on }\Gamma_N\}.\n\end{equation}\nNote that the Neumann boundary condition \eqref{eq:sse_scalar3} of the second-order scalar problem becomes a Dirichlet boundary condition \eqref{eq:sse_system4} in the first-order system formulation, and hence, it has been implemented into the test and trial spaces. 
Since $\mathbb{A}^*=\overline{\mathbb{A}}^T$ and $\overline{\boldsymbol{\psi}}=\boldsymbol{\psi}$, it is clear that\n\begin{equation}\n\boldsymbol{\psi}\cdot(\mathbb{A}\nabla N)\n=\boldsymbol{\psi}^T(\mathbb{A}\nabla N)\n=(\boldsymbol{\psi}^T\mathbb{A})\nabla N\n=(\mathbb{A}^*\boldsymbol{\psi})^*\nabla N\n=\overline{\left(\overline{\mathbb{A}}^T\boldsymbol{\psi}\right)}^T\nabla N\n=(\mathbb{A}^T\boldsymbol{\psi})\cdot\nabla N.\n\end{equation}\nIntegration by parts is applied to \eqref{eq:weak2} to include the boundary condition \eqref{eq:sse_system3} as a natural one\n\begin{align*}\n\int_\Omega\boldsymbol{\psi}\cdot(\boldsymbol{\sigma}-\mathbb{A}\nabla N)\,\mathrm{d}\mathbf{x}\n&=\int_\Omega\boldsymbol{\sigma}\cdot\boldsymbol{\psi}-(\mathbb{A}^T\boldsymbol{\psi})\cdot\nabla N\,\mathrm{d}\mathbf{x}\\\n&=\int_\Omega\boldsymbol{\psi}\cdot\boldsymbol{\sigma}+\nabla\cdot(\mathbb{A}^T\boldsymbol{\psi})N\,\mathrm{d}\mathbf{x}-\int_\Gamma N(\mathbb{A}^T\boldsymbol{\psi})\cdot\mathbf{n}\,\mathrm{d}s\\\n&=\int_\Omega\boldsymbol{\psi}\cdot\boldsymbol{\sigma}+\nabla\cdot(\mathbb{A}^T\boldsymbol{\psi})N\,\mathrm{d}\mathbf{x}-\int_{\Gamma_D} A_{M_2}(\mathbb{A}^T\boldsymbol{\psi})\cdot\mathbf{n}\,\mathrm{d}s-\int_{\Gamma_N} N(\mathbb{A}^T\boldsymbol{\psi})\cdot\mathbf{n}\,\mathrm{d}s\n\end{align*}\nThus, the variational formulation for the problem at hand reads: find $(N,\boldsymbol{\sigma})\in W\times\mathbf{Q}_0$ such that\n\begin{align}\n&\int_\Omega\varphi\left(\nabla\cdot\boldsymbol{\sigma}+i\omega N\right)+\boldsymbol{\psi}\cdot\boldsymbol{\sigma}+\nabla\cdot(\mathbb{A}^T\boldsymbol{\psi})N\,\mathrm{d}\mathbf{x}\\\n=&\int_{\Gamma_D} A_{M_2}(\mathbb{A}^T\boldsymbol{\psi})\cdot\mathbf{n}\,\mathrm{d}s+\int_{\Gamma_N} N(\mathbb{A}^T\boldsymbol{\psi})\cdot\mathbf{n}\,\mathrm{d}s \qquad \forall (\varphi,\boldsymbol{\psi})\in W\times\mathbf{Q}_0.\n\end{align}\n\n\subsection{Variational formulation II}\nThe variational formulation associated with \eqref{eq:sse_system5}--\eqref{eq:sse_system8} reads: find $(N,\boldsymbol{\tau})\in W\times\mathbf{Q}_0$ such that \n\begin{alignat}{2}\n\int_\Omega\varphi\left(\nabla\cdot(\mathbb{A}\boldsymbol{\tau})+i\omega N\right)\,\mathrm{d}\mathbf{x}&=0 \qquad && \forall \varphi\in W,\n\label{eq:weak3}\\\n\int_\Omega\boldsymbol{\psi}\cdot\left(\boldsymbol{\tau}-\nabla N\right)\,\mathrm{d}\mathbf{x}&=0 \qquad && \forall \boldsymbol{\psi}\in \mathbf{Q}_0.\n\label{eq:weak4}\n\end{alignat}\nIn order to impose the natural boundary condition \eqref{eq:sse_system7}, we perform integration by parts in \eqref{eq:weak4} \n\begin{equation}\n\int_\Omega\boldsymbol{\psi}\cdot\boldsymbol{\tau}+\n\nabla\cdot\boldsymbol{\psi}N\,\mathrm{d}\mathbf{x}=\n\int_{\Gamma_D}A_{M_2}\boldsymbol{\psi}\cdot\mathbf{n}\,\mathrm{d}s+\n\int_{\Gamma_N}N\boldsymbol{\psi}\cdot\mathbf{n}\,\mathrm{d}s.\n\end{equation}\nThus, the variational formulation for the problem at hand reads: find $(N,\boldsymbol{\tau})\in W\times\mathbf{Q}_0$ such that\n\begin{align}\n&\int_\Omega\varphi\left(\nabla\cdot(\mathbb{A}\boldsymbol{\tau})+i\omega N\right)+\boldsymbol{\psi}\cdot\boldsymbol{\tau}+\nabla\cdot\boldsymbol{\psi}N\,\mathrm{d}\mathbf{x}\\\n=&
\int_{\Gamma_D}A_{M_2}\boldsymbol{\psi}\cdot\mathbf{n}\,\mathrm{d}s+\n\int_{\Gamma_N}N\boldsymbol{\psi}\cdot\mathbf{n}\,\mathrm{d}s\n\qquad \forall (\varphi,\boldsymbol{\psi})\in W\times\mathbf{Q}_0.\n\end{align}\n\n\subsection{Finite element approximation}\nLet us approximate the sea surface elevation and its derivative by finite elements, again separating the real and imaginary parts as in the scalar case. That is,\n\begin{alignat}{3}\nN(\mathbf{x})&=\sum_{j}\varphi_j(\mathbf{x})N_j && \qquad N_j&=N_j^\mathrm{Re}+iN_j^\mathrm{Im}\\\n\boldsymbol{\sigma}(\mathbf{x})=(\sigma_1(\mathbf{x}),\sigma_2(\mathbf{x}))^T&=\sum_{j}\psi_j(\mathbf{x})\boldsymbol{\sigma}_j && \qquad \boldsymbol{\sigma}_j&=\boldsymbol{\sigma}_j^\mathrm{Re}+i\boldsymbol{\sigma}_j^\mathrm{Im}\\\n\boldsymbol{\tau}(\mathbf{x})=(\tau_1(\mathbf{x}),\tau_2(\mathbf{x}))^T&=\sum_{j}\psi_j(\mathbf{x})\boldsymbol{\tau}_j && \qquad \boldsymbol{\tau}_j&=\boldsymbol{\tau}_j^\mathrm{Re}+i\boldsymbol{\tau}_j^\mathrm{Im}\n\end{alignat}\nNote that $\{\psi_j\}$ are scalar basis functions which are multiplied by vector-valued coefficients in order to obtain vector-valued finite element functions. Here, we do not use vector-valued finite elements such as Raviart-Thomas or Brezzi-Douglas-Marini finite elements.\n\n\paragraph{First formulation.} The problem reads: find $(N^\mathrm{Re},N^\mathrm{Im},\boldsymbol{\sigma}^\mathrm{Re},\boldsymbol{\sigma}^\mathrm{Im})\in [W]^2\times [\mathbf{Q}_0]^2$ such that\n\begin{align}\n&\sum_j\int_\Omega \varphi_i\left(\nabla\cdot\boldsymbol{\sigma}_j^\mathrm{Re}-\omega N^\mathrm{Im}_j\right)+\n\boldsymbol{\psi}_i\cdot\boldsymbol{\sigma}^\mathrm{Re}_j+\n\nabla\cdot({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)N^\mathrm{Re}_j-\n\nabla\cdot({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)N^\mathrm{Im}_j\n\,\mathrm{d}\mathbf{x}\\\n=&\n\int_{\Gamma_D}\left(A^\mathrm{Re}_{M_2}({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)-\nA^\mathrm{Im}_{M_2}({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)\n\right)\cdot\mathbf{n}\,\mathrm{d}s\n+\n\int_{\Gamma_N}\left(N^\mathrm{Re}_j({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)-\nN^\mathrm{Im}_j({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)\n\right)\cdot\mathbf{n}\,\mathrm{d}s\n\end{align}\nand\n\begin{align}\n&\sum_ji\int_\Omega \varphi_i\left(\nabla\cdot\boldsymbol{\sigma}_j^\mathrm{Im}+\omega N^\mathrm{Re}_j\right)+\n\boldsymbol{\psi}_i\cdot\boldsymbol{\sigma}^\mathrm{Im}_j+\n\nabla\cdot({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)N^\mathrm{Im}_j+\n\nabla\cdot({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)N^\mathrm{Re}_j\n\,\mathrm{d}\mathbf{x}\\\n&=\ni\int_{\Gamma_D}\left(A^\mathrm{Re}_{M_2}({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)+\nA^\mathrm{Im}_{M_2}({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)\n\right)\cdot\mathbf{n}\,\mathrm{d}s\n+\ni\int_{\Gamma_N}\left(N^\mathrm{Re}_j({(\mathbb{A}^\mathrm{Im})}^T\boldsymbol{\psi}_i)+\nN^\mathrm{Im}_j({(\mathbb{A}^\mathrm{Re})}^T\boldsymbol{\psi}_i)\n\right)\cdot\mathbf{n}\,\mathrm{d}s\n\end{align}\nfor all admissible pairs of real-valued test functions $(\varphi_i,\boldsymbol{\psi}_i)\in W\times\mathbf{Q}_0$.\n\nLet us define the following auxiliary matrices and vectors\n\begin{alignat}{2}\nA&=\{a_{ij}\} \qquad a_{ij}&=&\int_\Omega 
\psi_i\psi_j\,\mathrm{d}\mathbf{x}\\\nB_x&=\{b_{ij,x}\} \qquad b_{ij,x}&=&\int_\Omega \partial_x\psi_i\varphi_j\,\mathrm{d}\mathbf{x}\\\nB_y&=\{b_{ij,y}\} \qquad b_{ij,y}&=&\int_\Omega \partial_y\psi_i\varphi_j\,\mathrm{d}\mathbf{x}\n\end{alignat}\n\n\begin{equation}\n\begin{bmatrix}\n0 & -\omega M & B_x^T & 0 & B_y^T & 0\\\n\omega M & 0 & 0 & B_x^T & 0 & B_y^T\\\nB_x & 0 & A & 0 & 0 & 0\\\n0 & B_x & 0 & A & 0 & 0\\\nB_y & 0 & 0 & 0 & A & 0\\\n0 & B_y & 0 & 0 & 0 & A\n\end{bmatrix}\n\begin{bmatrix}\nN^\mathrm{Re}\\\nN^\mathrm{Im}\\\n\sigma_1^\mathrm{Re}\\\n\sigma_1^\mathrm{Im}\\\n\sigma_2^\mathrm{Re}\\\n\sigma_2^\mathrm{Im}\n\end{bmatrix}\n=\n\begin{bmatrix}\n0\\\n0\\\n\phantom{i}\int_{\Gamma_D}A^\mathrm{Re}_{M_2}\partial_x\psi^x_i\,\mathrm{d}s\\\ni\int_{\Gamma_D}A^\mathrm{Im}_{M_2}\partial_x\psi^x_i\,\mathrm{d}s\\\n\phantom{i}\int_{\Gamma_D}A^\mathrm{Re}_{M_2}\partial_y\psi^y_i\,\mathrm{d}s\\\ni\int_{\Gamma_D}A^\mathrm{Im}_{M_2}\partial_y\psi^y_i\,\mathrm{d}s\n\end{bmatrix}\n\end{equation}\n\n\paragraph{Second formulation.} The problem reads: find $(N^\mathrm{Re},N^\mathrm{Im},\boldsymbol{\tau}^\mathrm{Re},\boldsymbol{\tau}^\mathrm{Im})\in [W]^2\times [\mathbf{Q}_0]^2$ such that\n\begin{align}\n&\sum_j\int_\Omega \varphi_i\left(\nabla\cdot(\mathbb{A}^\mathrm{Re}\boldsymbol{\tau}_j^\mathrm{Re}-\mathbb{A}^\mathrm{Im}\boldsymbol{\tau}_j^\mathrm{Im})-\omega N^\mathrm{Im}_j\right)+\n\boldsymbol{\psi}_i\cdot\boldsymbol{\tau}^\mathrm{Re}_j+\n\nabla\cdot\boldsymbol{\psi}_iN^\mathrm{Re}_j\n\,\mathrm{d}\mathbf{x}\\\n&=\n\int_{\Gamma_D}(\boldsymbol{\psi}_i\cdot\mathbf{n})A^\mathrm{Re}_{M_2}\,\mathrm{d}s\n+\n\sum_j\int_{\Gamma_N}(\boldsymbol{\psi}_i\cdot\mathbf{n})N^\mathrm{Re}_j\,\mathrm{d}s\n\end{align}\nand\n\begin{align}\n&\sum_ji\int_\Omega \varphi_i\left(\nabla\cdot(\mathbb{A}^\mathrm{Re}\boldsymbol{\tau}_j^\mathrm{Im}+\mathbb{A}^\mathrm{Im}\boldsymbol{\tau}_j^\mathrm{Re})+\omega N^\mathrm{Re}_j\right)+\n\boldsymbol{\psi}_i\cdot\boldsymbol{\tau}^\mathrm{Im}_j+\n\nabla\cdot\boldsymbol{\psi}_iN^\mathrm{Im}_j\n\,\mathrm{d}\mathbf{x}\\\n&=\ni\int_{\Gamma_D}(\boldsymbol{\psi}_i\cdot\mathbf{n})A^\mathrm{Im}_{M_2}\,\mathrm{d}s\n+\n\sum_ji\int_{\Gamma_N}(\boldsymbol{\psi}_i\cdot\mathbf{n})N^\mathrm{Im}_j\,\mathrm{d}s\n\end{align}\nfor all admissible pairs of real-valued test functions $(\varphi_i,\boldsymbol{\psi}_i)\in W\times\mathbf{Q}_0$.\nIn addition to the mass matrices $A$ and $M$ introduced above, let us define the following auxiliary matrices and vectors\n\begin{alignat}{2}\nC^\mathrm{Re}_k&=\{c^\mathrm{Re}_{ij,k}\} \qquad c^\mathrm{Re}_{ij,k}&=&\n\int_\Omega \varphi_i\nabla\cdot(\mathbf{a}^\mathrm{Re}_k\psi_j)\,\mathrm{d}\mathbf{x}\n=\n\int_\Omega \varphi_i\left(\partial_x(a^\mathrm{Re}_{1k}\psi_j)+\partial_y(a^\mathrm{Re}_{2k}\psi_j)\right)\,\mathrm{d}\mathbf{x}\\\nC^\mathrm{Im}_k&=\{c^\mathrm{Im}_{ij,k}\} \qquad c^\mathrm{Im}_{ij,k}&=&\n\int_\Omega \varphi_i\nabla\cdot(\mathbf{a}^\mathrm{Im}_k\psi_j)\,\mathrm{d}\mathbf{x}\n=\n\int_\Omega \varphi_i\left(\partial_x(a^\mathrm{Im}_{1k}\psi_j)+\partial_y(a^\mathrm{Im}_{2k}\psi_j)\right)\,\mathrm{d}\mathbf{x}\\\nD_k&=\{d_{ij,k}\} \qquad 
d_{ij,k}&=&\n\\int_{\\Gamma_N}\\psi_i\\varphi_jn_k\\,\\mathrm{d}s\\\\\nb^\\mathrm{Re}_k&=\\{b^\\mathrm{Re}_{i,k}\\} \\qquad b^\\mathrm{Re}_{i,k}&=&\n\\int_{\\Gamma_D}A^\\mathrm{Re}_{M_2}\\psi_in_k\\,\\mathrm{d}s\\\\\nb^\\mathrm{Im}_k&=\\{b^\\mathrm{Im}_{i,k}\\} \\qquad b^\\mathrm{Im}_{i,k}&=&\n\\int_{\\Gamma_D}A^\\mathrm{Im}_{M_2}\\psi_in_k\\,\\mathrm{d}s\n\\end{alignat}\nwhere $\\mathbf{a}^\\mathrm{Re}_k$ and $\\mathbf{a}^\\mathrm{Im}_k$ denotes the real and imaginary parts of the $k$-th column of matrix $\\mathbb{A}$ and $n_k$ represents the $k$-th component of the outward unit normal vector $\\mathbf{n}=(n_1,n_2)$. \n\nThen, the matrix form reads\n\\begin{equation}\n\\begin{bmatrix}\n0 & -\\omega M & C_1^\\mathrm{Re} & -C_1^\\mathrm{Im} & C_2^\\mathrm{Re} & -C_2^\\mathrm{Im}\\\\[1ex]\n\\omega M & 0 & C_1^\\mathrm{Im} & C_1^\\mathrm{Re} & C_2^\\mathrm{Im} & C_2^\\mathrm{Re}\\\\[1ex]\nB_x+D_1 & 0 & A & 0 & 0 & 0\\\\[1ex]\n0 & B_x+D_1 & 0 & A & 0 & 0\\\\[1ex]\nB_y+D_2 & 0 & 0 & 0 & A & 0\\\\[1ex]\n0 & B_y+D_2 & 0 & 0 & 0 & A\n\\end{bmatrix}\n\\begin{bmatrix}\nN^\\mathrm{Re}\\\\[1ex]\nN^\\mathrm{Im}\\\\[1ex]\n\\tau_1^\\mathrm{Re}\\\\[1ex]\n\\tau_1^\\mathrm{Im}\\\\[1ex]\n\\tau_2^\\mathrm{Re}\\\\[1ex]\n\\tau_2^\\mathrm{Im}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0\\\\[1ex]\n0\\\\[1ex]\nb_1^\\mathrm{Re}\\\\[1ex]\nb_1^\\mathrm{Im}\\\\[1ex]\nb_2^\\mathrm{Re}\\\\[1ex]\nb_2^\\mathrm{Im}\n\\end{bmatrix}\n\\end{equation}\n\n\\bibliographystyle{apalike}\n\\bibliography{mybib.bib}\n\\end{document}", "meta": {"hexsha": "e9bccb68e73c14a3be7e1dc7c8369d1b306f9d0a", "size": 19256, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "applications/sse/doc/sse.tex", "max_stars_repo_name": "trmcnealy/Featflow2", "max_stars_repo_head_hexsha": "4af17507bc2d80396bf8ea85c9e30e9e4d2383df", "max_stars_repo_licenses": ["Intel", "Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-08-02T11:51:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-10T14:14:21.000Z", "max_issues_repo_path": "applications/sse/doc/sse.tex", "max_issues_repo_name": "tudo-math-ls3/FeatFlow2", "max_issues_repo_head_hexsha": "56159aff28f161aca513bc7c5e2014a2d11ff1b3", "max_issues_repo_licenses": ["Intel", "Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "applications/sse/doc/sse.tex", "max_forks_repo_name": "tudo-math-ls3/FeatFlow2", "max_forks_repo_head_hexsha": "56159aff28f161aca513bc7c5e2014a2d11ff1b3", "max_forks_repo_licenses": ["Intel", "Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.7634408602, "max_line_length": 520, "alphanum_fraction": 0.693653926, "num_tokens": 7468, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438951143326726, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.590097384318014}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header2.tex}\n\n\n\\title{Phys 220A -- Classical Mechanics -- HW06}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{12}{11}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\maketitle\n\n\n\\section*{Problem 1 (15 pts)}\n\\textit{\nA particle with mass $m$ and charge $e$ moves in a magnetic field $\\v B$. \n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nShow that the following Poisson brackets hold\n\\begin{eqn}\n\\cbr{m \\dot x_i, m \\dot x_j} = e \\epsilon_{ijk} B_k, \\qquad\n\\cbr{m \\dot x_i, x_j} = - \\delta_{ij}.\n\\end{eqn}\n}\n\n\n% part B\n\\item \\textit{\nConsider the magnetic field of a magnetic monopole with \n\\begin{eqn}\n\\v B = g_m \\frac{\\uv x}{\\abs{x}^2}\n\\end{eqn}\nwhere $\\uv x$ is the unit vector in the $\\v x$ direction. Show that the following generalization of the angular momentum\n\\begin{eqn}\n\\v J = m \\v x \\times \\vd x - eg \\uv x\n\\end{eqn}\ncommutes with the Hamiltonian $H$. \n}\n\n\n% part C\n\\item \\textit{\nCan you interpret the fact that the usual angular momentum is not conserved but the new $\\v J$ defined above is?\n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 2 (15 pts)}\n\\textit{\nShow that the following coordinate transformations are canonical.\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\n$P = \\frac{1}{2}(p^2 + q^2), \\quad Q = \\arctan(q/p)$. \n}\n\n\n% part B\n\\item \\textit{\n$P = 1/q, \\quad Q = pq^2$.\n}\n\n\n% part C\n\\item \\textit{\n$P = 2 \\sqrt{q} (1 + \\sqrt{q} \\cos p) \\sin p, \\quad Q = \\log(1 + \\sqrt{q} \\cos p)$. \n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 3 (10 pts)}\n\\textit{\nFor a particle moving in one dimension in a uniform force field the Lagrangian is given by\n\\begin{eqn}\nL(q, \\dot q) = \\frac{1}{2} m \\dot q^2 + m a q.\n\\end{eqn}\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nSolve the Hamilton equations and find the map\n\\begin{eqn}\ng_t : \\set{q(0), p(0)} \\mapsto \\set{q(t), p(t)}.\n\\end{eqn}\n}\n\n\n% part B\n\\item \\textit{\nDetermine the shape of a square in phase space given by $\\set{q,p} = \\set{\\set{0,0}, \\set{\\alpha,0}, \\set{0,\\beta}, \\set{\\alpha,\\beta}}$ and verify that the volume is preserved. 
\n}\n\n\n\end{enumproblem}\n\n\n\n\section*{Problem 4 (10 pts)}\n\n\begin{enumproblem}\n\n% part A\n\item \textit{\nVerify that the symplectic tensor $J$ expressed in new coordinates under a transformation $Q_i = Q_i(p_j, q_k)$, $P_i = P_i(p_j, q_k)$ takes the form\n\begin{eqn}\nJ_{IJ} = \n\begin{pmatrix}\n\set{Q_i, Q_j} & \set{Q_i, P_j} \\\n\set{P_i, Q_j} & \set{P_i, P_j}\n\end{pmatrix}.\n\end{eqn}\n}\n\n\n% part B\n\item \textit{\nVerify the Jacobi identity \n\begin{eqn}\n\set{A, \set{B,C}} + \set{B,\set{C,A}} + \set{C,\set{A,B}} = 0.\n\end{eqn}\n}\n\n\n\end{enumproblem}\n\n\n\n\n\end{document}\n", "meta": {"hexsha": "3b6eab8dc753ba851b4624b14edd10606c341ac9", "size": 2851, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classical/hw06.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classical/hw06.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classical/hw06.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.1342281879, "max_line_length": 178, "alphanum_fraction": 0.6604700105, "num_tokens": 1015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936438, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5900788287284254}}
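As a quick symbolic sanity check for Problem 2 (our own sketch using \texttt{sympy}, not part of the assignment), one can verify that $\set{Q, P} = 1$ for part (a):\n\begin{verbatim}\nimport sympy as sp\n\nq, p = sp.symbols('q p', positive=True)\n\ndef pb(F, G):\n    # Canonical Poisson bracket {F, G} for a single pair (q, p).\n    return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)\n\nP = (p**2 + q**2) / 2   # part (a)\nQ = sp.atan(q / p)\nprint(sp.simplify(pb(Q, P)))  # -> 1, so the transformation is canonical\n\end{verbatim}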
{"text": "\\chapter{Multivariable calculus done correctly}\nAs I have ranted about before, linear algebra is done wrong\nby the extensive use of matrices to obscure the structure of a linear map.\nSimilar problems occur with multivariable calculus, so here I would like to set \nthe record straight.\n\nSince we are doing this chapter using morally correct linear algebra,\nit's imperative you're comfortable with linear maps,\nand in particular the dual space $V^\\vee$ which we will repeatedly use.\n\nIn this chapter, all vector spaces have norms and\nare finite-dimensional over $\\RR$.\nSo in particular every vector space is also a metric space\n(with metric given by the norm), and we can talk about open sets as usual.\n\n\\section{The total derivative}\n\\prototype{If $f(x,y) = x^2+y^2$, then $(Df)_{(x,y)} = 2x\\ee_1^\\vee + 2y\\ee_2^\\vee$.}\nFirst, let $f : [a,b] \\to \\RR$.\nYou might recall from high school calculus that for every point $p \\in \\RR$,\nwe defined $f'(p)$ as the derivative at the point $p$ (if it existed), which we interpreted as the \\emph{slope} of\nthe ``tangent line''.\n\n\\begin{center}\n\t\\begin{asy}\n\t\timport graph;\n\t\tsize(150,0);\n\n\t\treal f(real x) {return 3-2/(x+2.5);}\n\t\tgraph.xaxis(\"$x$\");\n\t\tgraph.yaxis();\n\t\tdraw(graph(f,-2,2,operator ..), blue, Arrows);\n\n\t\treal p = -1;\n\t\treal h = 1000 * (f(p+0.001)-f(p));\n\t\treal r = 0.9;\n\t\tdraw( (p+r,f(p)+r*h)--(p-r,f(p)-r*h), red);\n\t\tdot( (p, f(p)), red );\n\t\tdraw( (p, f(p))--(p,0), dashed);\n\t\tdot(\"$p$\", (p, 0), dir(-90));\n\t\tlabel(\"$f'(p)$\", (p+r/2, f(p) + h*r/2), dir(115), red);\n\t\\end{asy}\n\\end{center}\n\nThat's fine, but I claim that the ``better'' way to interpret\nthe derivative at that point is as a \\emph{linear map},\nthat is, as a \\emph{function}.\nIf $f'(p) = 1.5$,\nthen the derivative tells me that if I move $\\eps$ away from $p$\nthen I should expect $f$ to change by about $1.5\\eps$.\nIn other words,\n\\begin{moral}\nThe derivative of $f$ at $p$ approximates $f$ near $p$ by a \\emph{linear function}.\n\\end{moral}\n\nWhat about more generally?\nSuppose I have a function like $f : \\RR^2 \\to \\RR$, say \n\\[ f(x,y) = x^2+y^2 \\]\nfor concreteness or something.\nFor a point $p \\in \\RR^2$, the ``derivative'' of $f$ at $p$ ought to represent a linear map\nthat approximates $f$ at that point $p$.\nThat means I want a linear map $T : \\RR^2 \\to \\RR$ such that\n\\[ f(p + v) \\approx f(p) + T(v) \\]\nfor small displacements $v \\in \\RR^2$.\n\nEven more generally, if $f : U \\to W$ with $U \\subseteq V$ open\n(in the $\\norm{\\bullet}_V$ metric as usual),\nthen the derivative at $p \\in U$ ought to be so that\n\\[ f(p + v) \\approx f(p) + T(v) \\in W. 
\]\n(We need $U$ open so that for small enough $v$, $p+v \in U$ as well.)\nIn fact this is exactly what we were doing earlier with $f'(p)$ in high school.\n\n\begin{center}\n\t\includegraphics{media/tangent.pdf}\n\t\\ \scriptsize Image derived from \cite{img:tangentplane}\n\end{center}\n\nThe only difference is that, by an unfortunate coincidence,\na linear map $\RR \to \RR$ can be represented by just its slope.\nAnd in the unending quest to make everything a number so that it can be AP tested,\nwe immediately forgot all about what we were trying to do in the first place\nand just defined the derivative of $f$ to be a \emph{number} instead of a \emph{function}.\n\n\begin{moral}\n\tThe fundamental idea of Calculus is the local approximation of functions by linear functions.\n\tThe derivative does exactly this.\n\end{moral}\nJean Dieudonn\'e as quoted in \cite{ref:pugh} continues:\n\begin{quote}\n\tIn the classical teaching of Calculus, this idea is immediately obscured\n\tby the accidental fact that, on a one-dimensional vector space,\n\tthere is a one-to-one correspondence between linear forms and numbers,\n\tand therefore the derivative at a point is defined as a number instead of a linear form.\n\tThis \textbf{slavish subservience to the shibboleth of numerical interpretation at any cost}\n\tbecomes much worse . . .\n\end{quote}\n\nSo let's do this right.\nThe only thing that we have to do is say what ``$\approx$'' means, and for\nthis we use the norm of the vector space.\n\begin{definition}\n\tLet $U \subseteq V$ be open.\n\tLet $f : U \to W$ be a continuous function, and $p \in U$.\n\tSuppose there exists a linear map $T : V \to W$ such that\n\t\[\n\t\t\lim_{\norm{v}_V \to 0}\n\t\t\frac{\norm{f(p + v) - f(p) - T(v)}_W}{\norm{v}_V} = 0.\n\t\]\n\tThen $T$ is the \vocab{total derivative} of $f$ at $p$.\n\tWe denote this by $(Df)_p$, and say $f$ is \vocab{differentiable at $p$}.\n\n\tIf $(Df)_p$ exists at every point, we say $f$ is \vocab{differentiable}.\n\end{definition}\n\n\begin{ques}\n\tCheck that if $V = W = \RR$, this is equivalent to the single-variable definition.\n\t(What are the linear maps from $V$ to $W$?)\n\end{ques}\n\begin{example}[Total derivative of $f(x,y) = x^2+y^2$]\n\tLet $V = \RR^2$ with standard basis $\ee_1$, $\ee_2$ and let $W = \RR$,\n\tand let $f\left( x \ee_1 + y \ee_2 \right) = x^2+y^2$.  Let $p = a\ee_1 + b\ee_2$.\n\tThen, we claim that \[ (Df)_p : \RR^2 \to \RR \quad\text{by}\quad\n\tv \mapsto 2a \cdot \ee_1^\vee(v) + 2b \cdot \ee_2^\vee(v). 
\\]\n\\end{example}\nHere, the notation $\\ee_1^\\vee$ and $\\ee_2^\\vee$ makes sense,\nbecause by definition $(Df)_p \\in V^\\vee$: these are functions from $V$ to $\\RR$!\n\nLet's check this manually with the limit definition.\nSet $v = xe_1 + ye_2$, and note that the norm on $V$ is $\\norm{(x,y)}_V = \\sqrt{x^2+y^2}$\nwhile the norm on $W$ is just the absolute value $\\norm{c}_W = \\left\\lvert c \\right\\rvert$.\nThen we compute\n\\begin{align*}\n\t\\frac{\\norm{f(p + v) - f(p) - T(v)}_W}{\\norm{v}_V} \n\t&= \\frac{\\left\\lvert (a+x)^2 + (b+y)^2 - (a^2+b^2) - (2ax+2by) \\right\\rvert}{\\sqrt{x^2+y^2}} \\\\\n\t&= \\frac{x^2+y^2}{\\sqrt{x^2+y^2}} = \\sqrt{x^2+y^2} \\\\\n\t&\\to 0\n\\end{align*}\nas $\\norm{v} \\to 0$.\nThus, for $p = ae_1 + be_2$ we indeed have $(Df)_p = 2a \\cdot e_1^\\vee + 2b \\cdot e_2^\\vee$.\n\n\\begin{remark}\n\tAs usual, differentiability implies continuity.\n\\end{remark}\n\\begin{remark}\n\tAlthough $U \\subseteq V$, it might be helpful to think of vectors from $U$ and $V$\n\tas different types of objects (in particular, note that it's possible for $0_V \\notin U$).\n\tThe vectors in $U$ are ``inputs'' on our space\n\twhile the vectors coming from $V$ are ``small displacements''.\n\tFor this reason, I deliberately try to use $p \\in U$ and $v \\in V$ when possible.\n\\end{remark}\n\n\\section{The projection principle}\nBefore proceeding I need to say something really important.\n\\begin{theorem}[Projection principle]\n\t\\label{thm:project_principle}\n\tLet $U$ be an open subset of the vector space $V$.\n\tLet $W$ be an $n$-dimensional real vector space with basis $w_1, \\dots, w_n$.\n\tThen there is a bijection between continuous functions $f : U \\to W$ and\n\t$n$-tuples of continuous $f_1, f_2, \\dots, f_n : U \\to \\RR$\n\tby projection onto the $i$th basis element, i.e.\\ \n\t\\[ f(v) = f_1(v)w_1 + \\dots + f_n(v)w_n. \\]\n\\end{theorem}\n\\begin{proof}\n\tObvious.\n\\end{proof}\nThe theorem remains true if one replaces ``continuous'' by ``differentiable'', ``smooth'', ``arbitrary'',\nor most other reasonable words. Translation:\n\\begin{moral}\nTo think about a function $f : U \\to \\RR^{n}$,\nit suffices to think about each coordinate separately.\n\\end{moral}\nFor this reason, we'll most often be interested in functions $f : U \\to \\RR$.\nThat's why the dual space $V^\\vee$ is so important.\n\n\\section{Total and partial derivatives}\n\\prototype{If $f(x,y) = x^2+y^2$, then\n$(Df) : (x,y) \\mapsto 2x \\cdot \\ee_1^\\vee + 2y \\cdot \\ee_2^\\vee$, and\n$\\fpartial fx = 2x$, $\\fpartial fy = 2y$.}\nLet $U \\subseteq V$ be open and let $V$ have a basis $e_1$, \\dots, $e_n$.\nSuppose $f : U \\to \\RR$ is a function which is differentiable everywhere,\nmeaning $(Df)_p \\in V^\\vee$ exists for every $p$.\nIn that case, one can consider $Df$ as \\emph{itself} a function:\n\\begin{align*}\n\tDf : U &\\to V^\\vee \\\\\n\tp &\\mapsto (Df)_p.\n\\end{align*}\nThis is a little crazy: to every \\emph{point} in $U$\nwe associate a \\emph{function} in $V^\\vee$.\nWe say $Df$ is the \\vocab{total derivative} of $f$,\nto reflect how much information we're dealing with.\nWe say $(Df)_p$ is the total derivative at $p$.\n\nLet's apply the projection principle now to $Df$.\nSince we picked a basis $e_1$, \\dots, $e_n$ of $V$,\nthere is a corresponding dual basis\n$e_1^\\vee$, $e_2^\\vee$, \\dots, $e_n^\\vee$.\nThe Projection Principle tells us that $Df$ can thus be thought of as just $n$ functions, so we can write\n\\[ Df = \\psi_1 e_1^\\vee + \\dots + \\psi_n e_n^\\vee.  
\\]\nIn fact, we can even describe what the $\\psi_i$ are.\n\\begin{definition}\n\tThe \\vocab{$i^{\\text{th}}$ partial derivative} of $f : U \\to \\RR$, denoted \n\t\\[ \\fpartial{f}{e_i}: U \\to \\RR \\]\n\tis defined by\n\t\\[\n\t\t\\fpartial{f}{e_i} (p)\n\t\t\\defeq \\lim_{t \\to 0} \\frac{f(p + te_i) - f(p)}{t}.\n\t\\]\n\\end{definition}\nYou can think of it as ``$f'$ along $e_i$''.\n\\begin{ques}\n\tCheck that if $Df$ exists, then \\[ (Df)_p(e_i) = \\fpartial{f}{e_i}(p). \\]\n\\end{ques}\n\\begin{remark}\n\tOf course you can write down a definition of $\\fpartial{f}{v}$\n\tfor any $v$ (rather than just the $e_i$).\n\\end{remark}\n\nFrom the above remarks, we can derive that\n\\[\n\t\\boxed{\n\tDf =\n\t\\frac{\\partial f}{\\partial e_1} \\cdot e_1^\\vee\n\t+ \\dots + \n\t\\frac{\\partial f}{\\partial e_n} \\cdot e_n^\\vee .\n\t}\n\\]\nand so given a basis of $V$, we can think of $Df$ as just\nthe $n$ partials.\n\\begin{remark}\nKeep in mind that each $\\frac{\\partial f}{\\partial e_i}$ is a function from $U$ to the \\emph{reals}.\nThat is to say,\n\\[\n\t(Df)_p =\n\t\\underbrace{\\frac{\\partial f}{\\partial e_1}(p)}_{\\in \\RR} \\cdot e_1^\\vee\n\t+ \\dots + \n\t\\underbrace{\\frac{\\partial f}{\\partial e_n}(p)}_{\\in \\RR} \\cdot e_n^\\vee\n\t\\in V^\\vee.\n\\]\n\\end{remark}\n\n\n\\begin{example}[Partial derivatives of $f(x,y) = x^2+y^2$]\n\tLet $f : \\RR^2 \\to \\RR$ by $(x,y) \\mapsto x^2+y^2$.\n\tThen in our new language, \n\t\\[ Df : (x,y) \\mapsto 2x \\cdot \\ee_1^\\vee + 2y \\cdot \\ee_2^\\vee. \\]\n\tThus the partials are\n\t\\[\n\t\t\\frac{\\partial f}{\\partial x} : (x,y) \\mapsto 2x \\in \\RR\n\t\t\\quad\\text{and}\\quad\n\t\t\\frac{\\partial f}{\\partial y} : (x,y) \\mapsto 2y \\in \\RR\n\t\\]\n\\end{example}\n\nWith all that said, I haven't really said much about how to\nfind the total derivative itself.\nFor example, if I told you\n\\[ f(x,y) = x \\sin y + x^2y^4 \\]\nyou might want to be able to compute $Df$ without going through\nthat horrible limit definition I told you about earlier.\n\nFortunately, it turns out you already know how to compute partial derivatives,\nbecause you had to take AP Calculus at some point in your life.\nIt turns out for most reasonable functions, this is all you'll ever need.\n\\begin{theorem}[Continuous partials implies differentiable]\n\t\\label{thm:apcalc_partials}\n\tLet $U \\subseteq V$ be open and pick any basis $e_1, \\dots, e_n$.\n\tLet $f : U \\to \\RR$ and suppose that $\\fpartial{f}{e_i}$ is defined\n\tfor each $i$ and moreover is \\emph{continuous}.\n\tThen $f$ is differentiable and $Df$ is given by\n\t\\[ Df = \\sum_{i=1}^n \\fpartial{f}{e_i} \\cdot e_i^\\vee. \\]\n\\end{theorem}\n\\begin{proof}\n\tNot going to write out the details, but\\dots\n\tgiven $v = t_1e_1 + \\dots + t_ne_n$,\n\tthe idea is to just walk from $p$ to $p+t_1e_1$, $p+t_1e_1+t_2e_2$, \\dots,\n\tup to $p+t_1e_1+t_2e_2+\\dots+t_ne_n = p+v$,\n\tpicking up the partial derivatives on the way.\n\tDo some calculation.\n\\end{proof}\n\n\\begin{remark}\n\tThe continuous condition cannot be dropped. The function\n\t\\[\n\t\tf(x,y)\n\t\t= \\begin{cases}\n\t\t\t\\frac{xy}{x^2+y^2} & (x,y) \\neq (0,0) \\\\\n\t\t\t0 & (x,y) = (0,0).\n\t\t\\end{cases}\n\t\\]\n\tis the classic counterexample -- the total derivative $Df$ does not exist at zero,\n\teven though both partials do.\n\\end{remark}\n\n\\begin{example}\n\t[Actually computing a total derivative]\n\tLet $f(x,y) = x \\sin y + x^2y^4$. 
Then\n\t\begin{align*}\n\t\t\fpartial fx (x,y) &= \sin y + y^4 \cdot 2x \\\n\t\t\fpartial fy (x,y) &= x \cos y + x^2 \cdot 4y^3.\n\t\end{align*}\n\tSo \Cref{thm:apcalc_partials} applies,\n\tand $Df = \fpartial fx \ee_1^\vee + \fpartial fy \ee_2^\vee$,\n\twhich I won't bother to write out.\n\end{example}\n\nThe example $f(x,y) = x^2+y^2$ is the same thing.\nThat being said, who cares about $x \sin y + x^2y^4$ anyways?\n\n\section{(Optional) A word on higher derivatives}\nLet $U \subseteq V$ be open, and take $f : U \to W$, so that $Df : U \to \Hom(V,W)$.\n\nWell, $\Hom(V,W)$ can also be thought of as a normed vector space in its own right:\nit turns out that one can define an operator norm on it by setting\n\[ \norm{T} \defeq \sup \left\{ \frac{\norm{T(v)}_W}{\norm{v}_V} \mid v \neq 0_V \right\}. \]\nThus it makes sense to write\n\[ D(Df) : U \to \Hom(V,\Hom(V,W)) \]\nwhich we abbreviate as $D^2 f$. Dropping all doubt and plunging on,\n\[ D^3f : U \to \Hom(V, \Hom(V,\Hom(V,W))). \]\nI'm sorry.\nAs consolation, we at least know that $\Hom(V,W) \cong V^\vee \otimes W$ in a natural way,\nso we can at least condense this to\n\[ D^kf : U \to (V^\vee)^{\otimes k} \otimes W \]\nrather than writing a bunch of $\Hom$'s.\n\begin{remark}\n\tIf $k=2$, $W = \RR$, then $(D^2f)_p \in (V^\vee)^{\otimes 2}$,\n\tso it can be represented as an $n \times n$ matrix,\n\twhich for some reason is called a \vocab{Hessian}.\n\end{remark}\nThe most important property of the second derivative is the following.\n\begin{theorem}\n\t[Symmetry of $D^2 f$]\n\tLet $f : U \to W$ with $U \subseteq V$ open.\n\tIf $(D^2f)_p$ exists at some $p \in U$, then it is symmetric, meaning\n\t\[ (D^2f)_p(v_1, v_2) = (D^2f)_p(v_2, v_1). \]\n\end{theorem}\nI'll just quote this without proof (see e.g. \cite[\S5, theorem 16]{ref:pugh}),\nbecause double derivatives make my head spin.\nAn important corollary of this theorem:\n\begin{corollary}\n\t[Clairaut's theorem: mixed partials are symmetric]\n\tLet $f : U \to \RR$ with $U \subseteq V$ open be twice differentiable.\n\tThen for any point $p$ such that the quantities are defined,\n\t\[\n\t\t\frac{\partial}{\partial e_i}\n\t\t\frac{\partial}{\partial e_j}\n\t\tf(p)\n\t\t=\n\t\t\frac{\partial}{\partial e_j}\n\t\t\frac{\partial}{\partial e_i}\n\t\tf(p).\n\t\]\n\end{corollary}\n\n\section{Towards differential forms}\nThis concludes the exposition of what the derivative really is:\nthe key idea I want to communicate in this chapter is that $Df$\nshould be thought of as a map from $U \to V^\vee$.\n\nThe next natural thing to do is talk about \emph{integration}.\nThe correct way to do this is through a so-called \emph{differential form}:\nyou'll finally know what all those stupid $dx$'s and $dy$'s really mean.\n(They weren't just there for decoration!)\n\n\section\problemhead\n\begin{sproblem}[Chain rule]\n\tLet $U_1 \taking f U_2 \taking g U_3$ be differentiable maps\n\tbetween open sets of normed vector spaces $V_i$, and let $h = g \circ f$.\n\tProve the Chain Rule: for any point $p \in U_1$, we have\n\t\[ (Dh)_p = (Dg)_{f(p)} \circ (Df)_p. 
\\]\n\\end{sproblem}\n\n\\begin{problem}\n\tLet $U \\subseteq V$ be open, and $f : U \\to \\RR$ be differentiable $k$ times.\n\tShow that $(D^kf)_p$ is symmetric in its $k$ arguments, meaning for any $v_1, \\dots, v_k \\in V$\n\tand any permutation $\\sigma$ on $\\left\\{ 1, \\dots, k \\right\\}$ we have\n\t\\[ (D^kf)_p(v_1, \\dots, v_k) = (D^kf)_p(v_{\\sigma(1)}, \\dots, v_{\\sigma(k)}). \\]\n\t\\begin{hint}\n\t\tSimply induct, with the work having been done on the $k=2$ case.\n\t\\end{hint}\n\\end{problem}\n", "meta": {"hexsha": "762cf9ff98fcbfdc75f8272e11a480f2b9bdd456", "size": 15027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/diffgeo/multivar.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/diffgeo/multivar.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/diffgeo/multivar.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.4409448819, "max_line_length": 114, "alphanum_fraction": 0.6705263858, "num_tokens": 5227, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8175744828610095, "lm_q1q2_score": 0.5900788286472055}}
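Since the chapter leans on computing partials (as in the $x \sin y + x^2y^4$ example above), a tiny numerical cross-check can be reassuring; this sketch (ours, with hypothetical helper names) compares central finite differences against the hand-computed partials:\n\begin{verbatim}\nimport math\n\ndef f(x, y):\n    return x * math.sin(y) + x**2 * y**4\n\ndef partial(f, p, i, h=1e-6):\n    # i-th partial derivative at p, via a central difference.\n    hi, lo = list(p), list(p)\n    hi[i] += h\n    lo[i] -= h\n    return (f(*hi) - f(*lo)) / (2 * h)\n\nx, y = 1.3, 0.7\nprint(partial(f, (x, y), 0), math.sin(y) + 2 * x * y**4)\nprint(partial(f, (x, y), 1), x * math.cos(y) + 4 * x**2 * y**3)\n\end{verbatim}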
{"text": "\\section{Our technique}\n\n\\subsection{Basic technique}\n\\label{sec-basic-technique}\n\nTo illustrate our technique, we first show a very simple version of it\nin the form of the following code:%\n\\footnote{Throughout this paper, we rely on the left-to-right\n  evaluation order mandated by the \\commonlisp{} standard.}\n\n{\\small\n\\begin{verbatim}\n(defun count-from-end (x list)\n  (labels ((aux (x list n)\n             (cond ((= n 0) 0)\n                   ((= n 1)\n                    (if (eq x (car list)) 1 0))\n                   (t (let* ((n/2 (ash n -1))\n                             (half (nthcdr n/2 list)))\n                        (+ (aux x half (- n n/2))\n                           (aux x list n/2)))))))\n    (aux x list (length list))))))\n\\end{verbatim}\n}\n\nThis function starts by computing the length of the list and then\ncalling the auxiliary function with the original arguments and the\nlength.  The auxiliary function calls \\texttt{nthcdr} in order to get\na reference to about half the list it was passed.  Then it makes two\nrecursive calls, first with the second half of the list and then with\nthe first half of the list.  The recursion terminates when the list has\na single element or no element in it.  When it has no element in it,\nclearly the count is $0$.  When it has a single element in it, the\nelement is compared to the argument \\texttt{x} and if they are the\nsame, the value $1$ is returned, otherwise $0$ is returned.\n\nThe main feature of our technique is that it trades fewer recursive\ncalls for multiple traversals of the list.  The maximum number of\nsimultaneous active invocations of this simple function is around\n$\\mathsf{lb}\\thinspace n$, where $n$ is the length of the list.  The\nmaximum value of this number is quite modest.  On a 64-bit processor,\nit can never exceed $60$ and it is significantly smaller in practice\nof course.  \n% not relevant for count but for find\n% The number of times this function computes the\n% \\texttt{cdr} of a list depends on where in the list the item to be\n% found is located.  If it is the \\emph{last} element of the list (best\n% case), each \\texttt{cons} cell is processed twice; once to compute the\n% length of the list, and once again as part of the recursive traversal.\n% When the item to be found is the \\emph{first} element of the list\n% (worst case), \nThe number of \\texttt{cdr} operations can be\napproximately expressed as $n\\thinspace (1 +\n\\frac{1}{2}\\mathsf{lb}\\thinspace n)$.  In \\refSec{sec-analyses} we\nanalyze this result in greater depth.\n\nThe best case for this function is very efficient indeed.\nThe worst case is unacceptably slow.  
Even for a list of some\nreasonable length such as a million elements, the execution time is a\nfactor $6$ slower than for the best case.\n\nThe remainder of this section is dedicated to ways of improving on the\nperformance of the basic technique.\n\n\subsection{Using more stack space}\n\label{sec-more-stack}\n\nBy far the most important improvement to the basic technique is to\ntake advantage of the available stack space to decrease the number of\nmultiple list traversals required by the basic technique.\n\nThe following example illustrates this technique by using the simple\nrecursive traversal if there are at most $10000$ elements in the\nlist.%\n\footnote{The number $10000$ was chosen to be a significant part of a\n  typical per-thread default stack while still leaving room for stack\n  space required by callers and callees of this function.  In a real\n  production implementation, the number would be chosen based on the\n  remaining space left on the stack when the function is called.}\nIf there are more elements, then it divides the list in two,\njust like the basic technique shown in \refSec{sec-basic-technique}.\n\n{\small\n\begin{verbatim}\n(defun count-from-end-2 (x list)\n  (labels ((recursive (x list n)\n             (if (zerop n)\n                 0\n                 (+ (recursive x (cdr list) (1- n))\n                    (if (eq x (car list)) 1 0))))\n           (aux (x list n)\n             (if (<= n 10000)\n                 (recursive x list n)\n                 (let* ((n/2 (ash n -1))\n                        (half (nthcdr n/2 list)))\n                   (+\n                    (aux x half (- n n/2))\n                    (aux x list n/2))))))\n    (aux x list (length list))))\n\end{verbatim}\n}\n\nWith this improvement, the number of \texttt{cdr} operations required\ncan now be expressed as ap\-proximately \n\[\nn\thinspace (1 +\n\frac{1}{2}\mathsf{lb}\thinspace \frac{n}{10000})\n\]\nwhich is significantly better than the corresponding value for the\nbasic technique.\n\nHowever, there is no particular reason to divide the list into $2$\nequal-sized parts when there are too many elements for the basic\ntechnique. \refSec{sec-benchmarks} gives a more complete explanation\nof the parameters involved and how they influence the execution time\nof the resulting code.\n\n\subsection{Analyses}\n\label{sec-analyses}\n\nIn this section we give approximate formulas for the performance of\nour technique.\nThe basic measure we are interested in is the number\nof \texttt{cdr} operations that must be performed as a function of the\nnumber of elements of the list.  We will denote the number of elements\nof the list by $N$ and the number of \texttt{cdr} operations required\nby $F(N)$.  Since our technique always starts by traversing the entire\nlist in order to compute $N$, we can always write $F(N)$ as $N +\nf(N)$, where $f(N)$ is the number of \texttt{cdr} operations required\nin the subsequent step.\n\nFor the basic technique where the list is divided into two equal-size\nsublists, we obtain the following recursive relation:\n\n\label{analyse1}\n\[ f(N) = \left\{ \begin{array}{ll}\n                    0 & \mbox{if $N = 1$} \\\n                    \left\lfloor\frac{N}{2}\right\rfloor\n                    + f(\left\lfloor\frac{N}{2}\right\rfloor)\n                    + f(\left\lceil\frac{N}{2}\right\rceil) &\n                    \mbox{otherwise}\n                  \end{array} \right. 
\]\n\nIn order to obtain an approximate solution to this relation, we can\nsolve for $N$ being a power of $2$, i.e., $N = 2^n$.  In that case,\nfor $N>1$ we obtain:\n\n\[ f(N) = \frac{N}{2} + 2f(\frac{N}{2}) \]\n\nThe details of the approximate resolution of this recursive equation\nare given in the appendix.\nThis solution yields\n\n\[ f(N) = \frac{N}{2}\mathsf{lb}~N + Nf(1) = \frac{N}{2}\mathsf{lb}~N\]\n\nIncluding the traversal to compute the number of elements of the list,\nwe obtain:\n\n\[ F(N) = \frac{N}{2}\mathsf{lb}~N + N = N(1 + \frac{1}{2}\mathsf{lb}~N)\]\n\nwhich is clearly $O(N\mathsf{log}~N)$.  More importantly, for a list\nwith around $16$ million elements (which fills the default heap of most\nimplementations we have tested), we have $N \approx 2^{24}$ which\ngives $F(N) \approx 13N$ which is probably unacceptably slow.\n\nLet us now consider what happens when we are able to handle more than\na single element with the basic recursive technique, as shown in\n\refSec{sec-more-stack}.  We denote the number of elements that the\nbasic recursive technique can handle by $K$, and again, in order to\nsimplify the analysis, we assume that both $N$ and $K$ are powers of\n$2$, i.e., $N = 2^n$, $K = 2^k$, and also that $N \ge K$.  The\nrecursion relation now looks as follows:\n\n\label{analyse2}\n\[ f(N) = \left\{ \begin{array}{ll}\n                    N-1 & \mbox{if $N \le K$} \\\n                    \frac{N}{2} + 2f(\frac{N}{2}) &\mbox{otherwise}\n                  \end{array} \right. \]\n\nThe resolution of this equation is given in the appendix (Part B).\nIt yields:\n\n\[ f(N) \approx N(1 + \frac{1}{2}\mathsf{lb}~\frac{N}{K})\]\n\nWith the best portable version of our technique and a typical stack\nbeing able to handle $K = 2^{14}$ we are now looking at a \nperformance for $N = 2^{24}$ of $F(N) \approx 6N$.  Comparing this\nresult to the technique of reversing the list, it is fair to say that\nthe overhead of allocating and subsequently garbage-collecting a\n\texttt{cons} cell can very well be comparable to $6$ times the time\ntaken by the \texttt{cdr} operation.  In other words, the performance\nof our portable version is already comparable to an implementation\nbased on first creating a reversed copy of the list and then\ntraversing that reversed copy.\n\nFinally, instead of using more stack space for the base case, let us\nanalyze what happens if we divide the original list into more than two\nparts.  For this analysis, let us assume that we divide the list into\n$M$ equal parts, and that $M$ is also a power of $2$ so that $M =\n2^m$.  We then obtain the following relation:\n\n\label{analyse3}\n\[ f(N) = \left\{ \begin{array}{ll}\n                    0 & \mbox{if $N = 1$} \\\n                    N - \frac{N}{M} + Mf(\frac{N}{M}) &\mbox{otherwise}\n                  \end{array} \right. \]\n\nThe resolution of this equation is given in the appendix (Part C).\nIt yields:\n\n\[ F(N) \approx N(1 + \frac{\mathsf{lb}~N}{\mathsf{lb}~M}) \]\n\nWhile it may appear that we can get very good performance when $M$ is\nchosen to be large, in practice, using large values of $M$ introduces\na different kind of overhead, namely large stack frames, making the\ngain smaller than the formula suggests.\n\n\subsection{Implementation-specific solutions}\n\nSo far, we have explored techniques that can mostly be implemented in\nportable \commonlisp{}.  
\subsection{Implementation-specific solutions}\n\nSo far, we have explored techniques that can mostly be implemented in\nportable \commonlisp{}.  In this section, we explore a variation on\nour technique that requires access to the control stack of the\nimplementation.\n\nRecall that at the lowest level of our technique, there is a recursive\nfunction that is used for traversing the list when the number of\nelements is small compared to the stack size.  At each invocation,\nthis function does very little work.\n\nWith direct access to the control stack, we can convert the recursive\nfunction to an iterative function that pushes the elements of the list\non the control stack, and then processes them in reverse order.  This\ntechnique has several advantages:\n\n\begin{itemize}\n\item A single word is used for each element, whereas the recursive\n  function requires space for a return address, a frame pointer,\n  saved registers, etc.  As a result, this technique can be used for\n  lists with more elements than would be possible with the recursive\n  technique, thereby further decreasing the number of times a list is\n  traversed.\n\item There is no function-call overhead involved.  The only\n  processing that is needed for an element is to store it on the stack\n  and then compare it to the item.\n\end{itemize}\n\nWe illustrate this technique in a notation similar to \commonlisp{}:\n\n{\small\n\begin{verbatim}\n(defun low-level-reverse-count (item list length)\n  (loop for rest = list then (cdr rest)\n        repeat length\n        do (push-on-stack (car rest)))\n  (loop repeat length\n        count (eq item (pop-from-stack))))\n\end{verbatim}\n}\n\nWe implemented this technique in \sbcl{}.  In order not to have to\nrecompile \sbcl{} with our additional function, we used the\nimplementation-specific foreign-function interface and wrote the\nfunction in C.  Rather than pushing and popping the\ncontrol stack, we used the built-in C function \texttt{alloca} to\nallocate a one-dimensional C array on the control stack to hold the\nlist elements.\n\nIn \sbcl{}, the default stack size is $2$ MBytes, or around $250$k\nwords on a 64-bit processor.  We tested our technique using $100000$\nwords on the stack.  The result is that for a list with $10$ million\nelements, our technique processes the list in reverse order as fast as\nan ordinary loop from the beginning of the list.\n\nThis surprising result can be explained by a few factors:\n\n\begin{itemize}\n\item Presumably in order to speed up the functions \texttt{car} and\n  \texttt{cdr}, SBCL uses the same tag for \texttt{cons} cells and for\n  the symbol \texttt{nil}.  As a result, in order to traverse a list,\n  SBCL must make \emph{two} tests for each element, namely one to\n  check whether the putative list is something other than a list\n  altogether, and another to check whether it is a \texttt{cons}\n  cell.  When our technique traverses a list for which the number of\n  elements is known, there is no need to make any additional tests,\n  simply because when the length of the list is positive, the first\n  element must be a \texttt{cons} cell.\n\item The \sbcl{} compiler cannot determine that the return value of\n  \texttt{count} must always be a \texttt{fixnum}.%\n  \footnote{On a byte-addressed processor where $n$ word-aligned bytes\n    are needed to represent a \texttt{cons} cell, the number of\n    elements in a list can be at most $N/n$ where $N$ is the maximum\n    number of possible addresses.  In a system that uses at most\n    $\mathsf{lb}~n$ tag bits for a fixnum, the value that\n    \texttt{count} returns must be a fixnum.  
While some systems might\n    use $8$ tag bits, \\sbcl{} on a $64$-bit platform uses a single tag\n    bit for fixnums.  As a consequence, \\texttt{count} must then\n    return a fixnum.}\n  When the function is implemented in C, this problem disappears.\n\\end{itemize}\n\nIf we put this technique in the perspective of the analyses in\n\\refSec{sec-analyses}, we can also see that the number of \\texttt{cdr}\noperations remains quite modest, even for lists with a very large\nnumber of elements.\n\nThere are several variations on this implementation-specific\ntechnique.  Some implementations might allocate a vector or a list\ndeclared to be \\texttt{dynamic-extent} on the stack, thus giving\nessentially the same advantage as the version we implemented in C.\nHowever, such a technique would still be implementation specific,\ngiven that it is permitted for the compiler to ignore\n\\texttt{dynamic-extent} declarations.  In the case of \\sbcl{}, using\nsuch a declaration, we were able to obtain performance almost as good\nas our C version.  However, as it turns out, \\sbcl{} only allocates a\nvector on the stack under certain circumstances thereby making this\ntechnique impossible to apply in general.\n", "meta": {"hexsha": "51a6fb89c6be26f52220efbf3433bad30844d267", "size": 13927, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Papers/Reverse-order/sec-our-method.tex", "max_stars_repo_name": "gwerbin/SICL", "max_stars_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 842, "max_stars_repo_stars_event_min_datetime": "2015-01-12T15:44:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T14:03:04.000Z", "max_issues_repo_path": "Papers/Reverse-order/sec-our-method.tex", "max_issues_repo_name": "gwerbin/SICL", "max_issues_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 85, "max_issues_repo_issues_event_min_datetime": "2015-03-25T00:31:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-18T11:06:19.000Z", "max_forks_repo_path": "Papers/Reverse-order/sec-our-method.tex", "max_forks_repo_name": "gwerbin/SICL", "max_forks_repo_head_hexsha": "ec5cc25de783ecce373081ab72d2a04359155ad6", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 80, "max_forks_repo_forks_event_min_datetime": "2015-03-06T12:52:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T05:30:33.000Z", "avg_line_length": 43.9337539432, "max_line_length": 74, "alphanum_fraction": 0.7107058232, "num_tokens": 3607, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.8104789086703225, "lm_q1q2_score": 0.5900087569344632}}
{"text": "\\input{../../style/preamble}\n\\input{../../latex-math/basic-math}\n\\input{../../latex-math/basic-ml}\n\\input{../../latex-math/ml-gp}\n\n\\newcommand{\\titlefigure}{figure_man/post-mean.png} % does not fit\n\\newcommand{\\learninggoals}{\n  \\item \\textcolor{blue}{XXX}\n  \\item \\textcolor{blue}{XXX}\n}\n\n\\title{Introduction to Machine Learning}\n\\date{}\n\n\\begin{document}\n\n\\lecturechapter{Gaussian Proccesses: Additional Material}\n\\lecture{Introduction to Machine Learning}\n\n%http://www.gaussianprocess.org/gpml\n\n\\begin{vbframe}{Notation}\n% We would like to model a function\n% \n% $$\n% f: \\mathcal{X} \\to \\Yspace\n% $$\n% \n% where\n% \n% \\begin{itemize}\n% \\item $\\Xspace$ is a p-dimensional input space (here: $\\Xspace = \\R^n$)\n% \\item $\\Yspace$ is the target space (usually $\\Yspace = \\R$ for regression and $\\Yspace = \\{0, 1\\}$ for binary classification)\n% \\item $\\bm{x} \\in \\mathcal{X}$ is called independent / predictor variable\n% \\item $y \\in \\mathcal{Y}$ is called dependent variable (target, label, output)\n% \\end{itemize}\n% \n% \\framebreak\n% \n\nIn this chapter \n\n\\begin{itemize}\n\\item $(\\xv_*, y_*)$ denotes one single test observation, excluded from training\n\\item $\\Xmat_* \\in \\R^{n_* \\times p}$ contains a set of $n_*$ test observations and  \n\\item $\\yv_* \\in \\R^{n_* \\times p}$ the corresponding outcomes, excluded from training. \n\\end{itemize}\n\n% \\framebreak\n\n% In the context of Gaussian processes \n\n% \\begin{itemize}\n% \\item the function $m: \\Xspace \\to \\R$ is called \\textbf{mean function}. We define the \\textbf{mean vector}\n\n% \\vspace*{-0.3cm}\n% $$\n% m(\\Xmat):= \\biggl(m\\left(\\bm{x}^{(1)}\\right), m\\left(\\bm{x}^{(2)}\\right), ..., m\\left(\\bm{x}^{(n)}\\right)\\biggr)^T\n% $$\n% \\item  the bivariate, positive-definite function $k: \\Xspace \\times \\Xspace \\to \\R$ is called \\textbf{covariance function} or \\textbf{kernel}; $k(\\Xmat, \\Xmat)$ denotes the $n\\times n$ matrix that is obtained by plugging in all pairs $\\bm{x}^{(i)}, \\bm{x}^{(j)}$ and is called \\textbf{kernel matrix} or \\textbf{covariance matrix}\n\n% $$\n% k(\\Xmat, \\Xmat) := k(\\bm{x}^{(i)}, \\bm{x}^{(j)})_{i, j = 1, ..., n}\n% $$ \n% \\item We sometimes use the abbreviations $\\bm{K} := k(\\Xmat, \\Xmat)$, $\\bm{K}_* := k(\\Xmat_*, \\Xmat)$, $\\bm{K}_{**} := k(\\Xmat_*, \\Xmat_*)$.\n\n\n% \\end{itemize}\n\n\\end{vbframe}\n\n\n\n\\section{Noisy Gaussian Processes}\n\n\\begin{vbframe}{Noisy Gaussian Process}\n\nIn the above equations we implicitly assumed that we had access to the true function value $\\fx$. In many cases, we only have access to a noisy version thereof \n$$\ny = \\fx + \\eps.$$ \n\nAssuming additive i.i.d. Gaussian noise, the covariance function becomes\n\n$$\n\\cov(y^{(i)}, y^{(j)}) = k(\\bm{x}^{(i)}, \\bm{x}^{(j)}) + \\sigma_n^2 \\delta_{ij}\n$$\n\nwhere $\\delta_{ij} = 1$ if $i = j$. In matrix notation, this becomes\n\n$$\n\\cov(\\yv) = \\Kmat + \\sigma_n^2\\id =: \\Kmat_y.  \n$$\n\nThe $\\sigma_n^2$ is also called \\textbf{nugget}. \n\n\\end{vbframe}\n\n\\begin{vbframe}{GP vs. 
\\end{vbframe}\n\n\\begin{vbframe}{GP vs. kernelized Ridge regression} \n\nThe predictive distribution is then \n\n\\begin{eqnarray*}\n\\bm{f}_* | \\Xmat_*, \\Xmat, \\yv \\sim \\mathcal{N}(\\bm{\\bar f}_*, \\cov(\\bm{\\bar f}_*))\n\\end{eqnarray*}\n\nwith \n\n\\begin{itemize}\n\\item $\\bm{\\bar f}_* = \\Kmat_{*}^{T} \\Kmat_y^{-1}\\yv$ and\n\\item $\\cov(\\bm{\\bar f}_*) = \\Kmat_{**}- \\Kmat_{*}^{T}\\Kmat_y^{-1}\\Kmat_*$.\n\\end{itemize}\n\nThe predicted mean values at the training points $\\bm{\\bar f} = \\bm{K}\\Kmat_y^{-1}\\bm{y}$ are a \\textbf{linear combination} of the $\\bm{y}$ values. \n\n\\lz \n\n\\textbf{Note:} Predicting the posterior mean corresponds exactly to the predictions obtained by kernelized Ridge regression. However, a GP (as a Bayesian model) gives us much more information, namely a posterior distribution, whilst kernelized Ridge regression does not. \n\n\n\\end{vbframe}\n\n\n\n\n\\section{Bayesian Linear Regression as a GP}\n\n\n\\begin{vbframe}{Bayesian linear regression as a GP}\n\nOne example of a Gaussian process is the Bayesian linear regression model covered earlier. For  $\\thetab \\sim \\mathcal{N}(\\bm{0}, \\tau^2 \\id)$, the joint distribution of any set of function values \n\n$$\nf(\\xi) = \\thetab^T \\xi + \\epsi\n$$\n\nis Gaussian. \n\n\\vspace*{0.3cm}\n\nThe corresponding mean function is $m(\\bm{x}) = \\bm{0}$ and the covariance function is\n\n\\vspace*{-0.5cm}\n\n\\begin{eqnarray*}\n\\cov(f(\\bm{x}), f(\\bm{x}^\\prime)) &=& \\E[f(\\bm{x}) f(\\bm{x}^\\prime)] - \\underbrace{\\E[f(\\bm{x})] \\E[f(\\bm{x}^\\prime)]}_{= 0} \\\\ &=& \\E[(\\thetab^T \\bm{x} + \\epsi)^T(\\thetab^T \\bm{x}^\\prime + \\epsi)] \\\\ &=&  \\tau^2 \\bm{x}^T\\bm{x}^\\prime + \\sigma^2 =: k(\\bm{x}, \\bm{x}^\\prime).\n\\end{eqnarray*}\n\n% As we have just described, the predictive distribution assuming a Gaussian process Prior for one single test point $\\bm{x}^*$  is normal with mean \n% \n% $$\n% (\\bm{x}^*)^T \\bm{X}^T (\\Xmat\\Xmat^T + \\id)^{-1} \\yv.\n% $$\n% \n% Remember that we derived also a normal predictive distribution for a Bayesian linear regression case - the predictive mean was\n% \n% $$\n% \\mu_{\\text{post}} = (\\bm{x}^*)^T(\\Xmat^T\\Xmat + \\sigma^2 \\id)^{-1}\\Xmat^T\\yv.\n% $$\n% \n% Using the matrix identity $(\\bm{AB} + \\id)\n% ^{-1}\\Amat = \\Amat(\\bm{BA} + \\id)^{-1}$^*$, it can be seen that the predictive distributions are identical.\n% \n% \\vfill\n% \\begin{footnotesize}\n% $^*$ Searl Set of Identities, see \\emph{http://matrixcookbook.com], 3.2}\n% \\end{footnotesize}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Feature Spaces and the Kernel Trick}\n\nIf one relaxes the linearity assumption by first projecting features into a higher dimensional feature space $\\mathcal{Z}$ using a basis function $\\phi: \\Xspace \\to \\mathcal{Z}$, the corresponding covariance function is\n\n$$\nk(\\bm{x}, \\bm{x}^\\prime) = \\tau^2 \\phi(\\bm{x})^T\\phi(\\bm{x}^\\prime) + \\sigma^2.\n$$\n\nTo get arbitrarily complicated functions, we would have to handle high-dimensional feature vectors $\\phi(\\bm{x})$. \n\n\\lz \n\nFortunately, all we need to know are the inner products $\\phi(\\bm{x})^T\\phi(\\bm{x}^\\prime)$ - the feature vector itself never occurs in calculations. \n\n\\framebreak\n\n\nIf we can get the inner product directly \\textbf{without} calculating the infinite feature vectors, we can infer an infinitely complicated model with a \\textbf{finite amount} of computation. 
This idea is known as \\textbf{kernel trick}.\n\n\\lz \n\n A Gaussian process can be defined by either\n\n\\begin{itemize}\n\\item deriving the covariance function explicitly via inner products of evaluations of basis functions or\n\\item choosing a positive definite kernel function (Mercer Kernel) directly, which  corresponds - according to Mercer's theorem - to taking inner products in some (possibly infinite) feature space\n\\end{itemize}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Summary: Gaussian process regression}\n\n\\begin{itemize}\n\\item Gaussian process regression is equivalent to \\textbf{kernelized} Bayesian linear regression\n\\item The covariance function describes the shape of the Gaussian process\n\\item With the right choice of covariance function, remarkably flexible models can be built\n\\item But: naive implementations of Gaussian process models scale poorly with large datasets as\n\\begin{itemize}\n\\item the kernel matrix has to be inverted / factorized, which is $\\order(n^3)$,\n\\item computing the kernel matrix uses $\\order(n^2)$ memory - running out of memory places a hard limit on problem sizes\n\\item generating predictions is $\\order(n)$ for the mean, but $\\order(n^2)$ for the variance.\n\\end{itemize}\n(...so we need special tricks)\n\\end{itemize}\n\n\\end{vbframe}\n\n\n\\endlecture\n\\end{document}\n", "meta": {"hexsha": "788514fb9e8b1663401d48592f2a66cae499e2d5", "size": 7369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/gaussian-processes/slides-x-gp-additional.tex", "max_stars_repo_name": "compstat-lmu/lecture_i2ml", "max_stars_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_issues_repo_path": "slides/gaussian-processes/slides-x-gp-additional.tex", "max_issues_repo_name": "compstat-lmu/lecture_i2ml", "max_issues_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 323, "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_forks_repo_path": "slides/gaussian-processes/slides-x-gp-additional.tex", "max_forks_repo_name": "compstat-lmu/lecture_i2ml", "max_forks_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "avg_line_length": 33.4954545455, "max_line_length": 331, "alphanum_fraction": 0.6828606324, "num_tokens": 2319, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.8104789040926008, "lm_q1q2_score": 0.5900087536019942}}
{"text": "\\section{Related Rates}\\label{sec:RelatedRates}\r\nWhen defining the derivative $f^{\\prime}\\left( x\\right)$,\r\nwe define it to be exactly the rate of change of $%\r\nf\\left( x\\right) $ with respect to $x$. Consequently, any question about\r\nrates of change can be rephrased as a question about derivatives.\r\n\\textbf{When we calculate derivatives, we are calculating rates of change}. Results\r\nand answers we obtain for derivatives translate directly into results and\r\nanswers about rates of change. Let us look at\r\nsome examples where more than one variable is involved, and where our job is to\r\nanalyze and exploit relations between the rates of change of these\r\nvariables. The mathematical step of relating the rates of change turns out\r\nto be largely an exercise in differentiation using the chain rule or\r\nimplicit differentiation. This explains why some textbooks place this\r\nsection shortly after the sections on the chain rule and implicit\r\ndifferentiation.\r\n\r\nSuppose we have two variables $x$ and $y$ (in most problems the\r\nletters will be different, but for now let's use $x$ and $y$) which\r\nare both changing with time.  A ``related rates'' problem is a problem\r\nin which we know one of the rates of change at a given instant---say,\r\n$\\ds \\dot x = dx/dt$---and we want to find the other rate $\\ds \\dot y = dy/dt$ at that\r\ninstant. (The use of $\\ds \\dot x$ to mean $dx/dt$ goes back to Newton and\r\nis still used for this purpose, especially by physicists.)\r\n\r\nIf $y$ is written in terms of $x$, i.e., $y=f(x)$, then this is easy\r\nto do using the chain rule:\r\n$$\\dot y = \\frac{dy}{dt}=\\frac{dy}{dx}\\cdot\\frac{dx}{dt}=\\frac{dy}{dx}\\dot x.$$\r\nThat is, find the derivative of $f(x)$, plug in the value of\r\n$x$ at the instant in question, and multiply by the given value of\r\n$\\ds \\dot{x}=dx/dt$ to get $\\ds \\dot{y}=dy/dt$.\r\n\r\n\\begin{example}{Speed at which a Coordinate is Changing}{SpeedCoordinate}\r\nSuppose an object is moving along a path described by $\\ds y=x^2$, that\r\nis, it is moving on a parabolic path. At a particular time, say $t=5$,\r\nthe $x$ coordinate is 6 and we measure the speed at which the $x$ coordinate of the object is\r\nchanging and find that $dx/dt = 3$. \r\n\r\nAt the same time, how fast is the $y$ coordinate changing?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nUsing the chain rule, $\\ds dy/dt = 2x\\cdot dx/dt$. At $t=5$ we know that\r\n$x=6$ and $dx/dt=3$, so $dy/dt = 2\\cdot 6\\cdot 3 = 36$.\r\n\\end{solution}\r\n \r\nIn many cases, particularly interesting ones,\r\n$x$ and $y$ will be related in some other way, for example\r\n$x=f(y)$, or $F(x,y)=k$, or perhaps $F(x,y)=G(x,y)$, where $F(x,y)$\r\nand $G(x,y)$ are expressions involving both variables.  
In all cases, you\r\ncan solve the related rates problem by taking the derivative of both sides,\r\nplugging in all the known values (namely, $x$, $y$, and $\\ds \\dot{x}$), and\r\nthen solving for $\\ds \\dot{y}$.\r\n\r\nTo summarize, here are the steps in doing a related rates problem.\r\n\r\n\\begin{formulabox}[Steps for Solving Related Rates Problems]\r\n\\begin{enumerate}\r\n\t\\item\tDecide what the two variables are.\r\n\t\\item\tFind an equation relating them.\r\n\t\\item\tTake $d/dt$ of both sides.\r\n\t\\item\tPlug in all known values at the instant in question.\r\n\t\\item\tSolve for the unknown rate.\r\n\\end{enumerate}\r\n\\end{formulabox}\r\n\r\n\\begin{example}{Receding Airplanes}{RecedingAirplane}\r\nA plane is flying directly away from you at 500 mph at an altitude of\r\n3 miles.  How fast is the plane's distance from you increasing at the\r\nmoment when the plane is flying over a point on the ground 4 miles\r\nfrom you?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nTo see what's going on, we first draw a schematic representation of\r\nthe situation, as in Figure~\\ref{fig:airplane}.\r\n\r\nBecause the plane is in level flight directly away from you, the rate\r\nat which $x$ changes is the speed of the plane, $dx/dt=500$. The\r\ndistance between you and the plane is $y$; it is $dy/dt$ that we wish\r\nto know. By the Pythagorean Theorem we know that $\\ds x^2+9=y^2$. Taking\r\nthe derivative:\r\n$$ 2x \\dot x = 2y\\dot y.$$\r\nWe are interested in the time at which $x=4$; at this time we know\r\nthat $\\ds 4^2+9=y^2$, so $y=5$. Putting together all the information we\r\nget\r\n$$2(4)(500)=2(5)\\dot y.$$\r\nThus, $\\ds \\dot y=400$ mph.\r\n\\end{solution}\r\n\r\n\\figure[H]\r\n\\centerline{\\vbox{\\beginpicture\r\n\\normalgraphs\r\n%\\ninepoint\r\n\\setcoordinatesystem units <.75truecm,.75truecm>\r\n\\setplotarea x from 0 to 6, y from 0 to 3\r\n\\axis bottom shiftedto y=0 /\r\n\\setlinear\r\n\\setdashes\r\n\\plot 0 0 5 3 /\r\n\\putrule from 5 0 to 5 3\r\n\\put {$\\longrightarrow$} [l] at 5 3\r\n\\put {$x$} [t] <0pt,-4pt> at 2.5 0\r\n\\put {$y$} [br] <-2pt,2pt> at 2.5 1.5\r\n\\put {$3$} [l] <4pt,0pt> at 5 1.5\r\n\\endpicture}}\r\n\\caption{Receding airplane. \\label{fig:airplane}}\r\n\\endfigure\r\n\r\n\\begin{example}{Spherical Balloon}{SphericalBalloon}\r\nYou are inflating a spherical balloon at the rate of 7 cm${}^3$/sec.  How\r\nfast is its radius increasing when the radius is 4 cm?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nHere the variables are the radius $r$ and the volume $V$.  We know $dV/dt$,\r\nand we want $dr/dt$.  The two variables are related by the\r\nequation $\\ds V=4\\pi r^3/3$.  Taking the derivative of both sides gives\r\n$\\ds dV/dt=4\\pi r^2\\dot r$.  We now substitute the values we know at the\r\ninstant in question: $\\ds 7=4\\pi 4^2\\dot r$, so\r\n$\\ds \\dot r=7/(64\\pi)$ cm/sec.\r\n\\end{solution}\r\n\r\n\\begin{example}{Conical Container}{ConicalContainer}\r\nWater is poured into a conical container at the rate of 10\r\ncm${}^3$/sec.  The cone points directly down, and it has a height of\r\n30 cm and a base radius of 10 cm; see Figure~\\ref{fig:cone tank}.\r\nHow fast is the water level rising when the water is 4 cm deep (at its\r\ndeepest point)?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nThe water forms a conical shape within the big cone; its\r\nheight and base radius and volume are all increasing\r\nas water is poured into the container.  
This means that we actually have\r\nthree things varying with time: the water level $h$ (the height of the cone\r\nof water), the radius $r$ of the circular top surface of water (the base\r\nradius of the cone of water), and the volume of water $V$.  \r\nThe volume of a cone is given by\r\n$\\ds V=\\pi r^2h/3$.  We know $dV/dt$, and we want $dh/dt$.  At\r\nfirst something seems to be wrong: we have a third variable, $r$, whose rate\r\nwe don't know.  \r\n\r\nHowever, the dimensions of the cone of water must have the same\r\nproportions as those of the container.  \r\nThat is, because of similar triangles, \r\n$r/h=10/30$ so $r=h/3$.  Now we can eliminate $r$ from the\r\nproblem entirely: $\\ds V=\\pi(h/3)^2h/3=\\pi h^3/27$.  We take\r\nthe derivative of both sides and plug in $h=4$ and $dV/dt=10$, obtaining\r\n$\\ds 10=(3\\pi\\cdot 4^2/27)(dh/dt)$.  Thus, $dh/dt=90/(16\\pi)$\r\ncm/sec.\r\n\\end{solution}\r\n\r\n\\figure[H]\r\n\\centerline{\\vbox{\\beginpicture\r\n\\normalgraphs\r\n%\\sevenpoint\r\n\\setcoordinatesystem units <1truecm,1truecm>\r\n\\setplotarea x from -1.5 to 1.5, y from -5 to 1\r\n\\ellipticalarc  axes ratio 3:1  360 degrees from 1.5 0 center at 0 0\r\n\\ellipticalarc  axes ratio 3:1  360 degrees from 0.75 -2.5 center at 0 -2.5\r\n\\betweenarrows {10} from 0 0.75 to 1.5 0.75\r\n\\betweenarrows {30} from 1.7 -5 to 1.7 0\r\n\\betweenarrows {$h$} from -1 -5 to -1 -2.5\r\n\\put {$r$} at 0.385 -2.1\r\n\\setlinear\r\n\\plot 1.5 0 0 -5 -1.5 0 /\r\n%\\setplotsymbol ({\\teeny.})\r\n\\plotsymbolspacing=.2pt\r\n\\arrow <2pt> [0.7, 2] from 0.25 -2.1 to 0 -2.1\r\n\\arrow <2pt> [0.7, 2] from 0.50 -2.1 to 0.75 -2.1\r\n\\endpicture}}\r\n\\caption{Conical water tank. \\label{fig:cone tank}}\r\n\\endfigure\r\n\r\n\\begin{example}{Swing Set}{SwingSet}\r\nA swing consists of a board at the end of a 10 ft long rope.  Think of the\r\nboard as a point $P$ at the end of the rope, and let $Q$ be the point of\r\nattachment at the other end.  Suppose that the swing is directly below $Q$\r\nat time $t=0$, and is being pushed by someone who walks at 6\r\nft/sec from left to right.  \r\n\r\nFind (a) how fast the swing is rising after 1\r\nsec; (b) the angular speed of the rope in deg/sec after 1 sec.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nWe  start out by asking: What is the geometric\r\nquantity whose rate of change we know, and what is the geometric quantity\r\nwhose rate of change we're being asked about?  Note that the person pushing\r\nthe swing is moving horizontally at a rate we know.  In other words,\r\nthe  horizontal coordinate of $P$ is increasing at 6 ft/sec.  In the\r\n$xy$-plane let us make the convenient choice of putting the origin at the\r\nlocation of $P$ at time $t=0$, i.e., a distance 10 directly below the point\r\nof attachment.  Then the rate we know is $dx/dt$, and in part \r\n(a) the rate we want is $dy/dt$ (the rate at which $P$ is rising).  In part\r\n(b) the rate we want is $\\ds \\dot{\\theta}=d\\theta/dt$, where $\\theta$ stands for\r\nthe angle in radians through which the swing has swung from the vertical.\r\n(Actually, since we want our answer in deg/sec, at the end we must convert\r\n$d\\theta/dt$ from rad/sec by multiplying by $180/\\pi$.)\r\n\r\n\\noindent\r\n(a)~From the diagram we see that we have a right triangle whose legs\r\nare $x$ and $10-y$, and whose hypotenuse is 10.  Hence\r\n$\\ds x^2+(10-y)^2=100$.  Taking the derivative of both sides we obtain:\r\n$2x\\dot{x}+2(10-y)(0-\\dot{y})=0$.  
We now look at what we know after 1\r\nsecond, namely $x=6$ (because $x$ started at 0 and has been increasing at\r\nthe rate of 6 ft/sec for 1 sec), thus $y=2$ (because we get $10-y=8$ from\r\nthe Pythagorean theorem applied to the triangle with hypotenuse 10 and\r\nleg 6), and $\\ds \\dot{x}=6$.  Putting in these values gives us\r\n$2\\cdot 6\\cdot 6-2\\cdot 8\\dot{y}=0$, from which we can easily solve\r\nfor $\\ds \\dot{y}$: $\\ds \\dot{y}=4.5$ ft/sec.\r\n\r\n\\noindent\r\n(b)~Here our two variables are $x$ and $\\theta$, so we want to use the\r\nsame right triangle as in part (a), but this time relate $\\theta$ to\r\n$x$.  Since the hypotenuse is constant (equal to 10), the best way to\r\ndo this is to use the sine: $\\sin\\theta=x/10$.  Taking derivatives we\r\nobtain $\\ds (\\cos\\theta)\\dot{\\theta}=0.1\\dot{x}$.  At the instant in\r\nquestion ($t=1$ sec), when we have a right triangle with sides\r\n6--8--10, $\\ds \\cos\\theta=8/10$ and $\\ds \\dot{x}=6$. Thus\r\n$(8/10)\\dot{\\theta}=6/10$, i.e., $\\ds \\dot{\\theta}=6/8=3/4$ rad/sec, or\r\napproximately $43$ deg/sec.  \r\n\\end{solution}\r\n\r\n\\figure[H]\r\n\\centerline{\\vbox{\\beginpicture\r\n\\normalgraphs\r\n%\\sevenpoint\r\n\\setcoordinatesystem units <0.5truecm,0.5truecm>\r\n\\setplotarea x from -6 to 6, y from 0 to 10\r\n\\circulararc  73.74 degrees from -6 2 center at 0 10\r\n\\circulararc  36.87 degrees from 0 8 center at 0 10\r\n%\\betweenarrows {10} from 0 0.75 to 1.5 0.75\r\n%\\betweenarrows {30} from 1.7 0 to 1.7 -5\r\n%\\betweenarrows {$h$} from -1 -2.5 to -1 -5\r\n%\\put {$r$} at 0.385 -2.1\r\n\\setlinear\r\n\\plot 0 10 6 2 /\r\n\\setdashes\r\n\\putrule from 0 0 to 0 10\r\n\\putrule from 0 2 to 6 2\r\n\\multiput {$\\bullet$} at 0 10 6 2 /\r\n\\put {$P$} [l] <4pt,0pt> at 6 2\r\n\\put {$Q$} [b] <0pt,4pt> at 0 10\r\n\\put {$x$} [b] <0pt,4pt> at 3 2\r\n\\put {$y$} [r] <-4pt,0pt> at 0 1\r\n\\put {$\\theta$} [tl] at 0.6 7.7\r\n\\endpicture}}\r\n\\caption{Swing. \\label{fig:swing}}\r\n\\endfigure\r\n\r\nWe have seen that sometimes there are apparently more than two\r\nvariables that change with time, but in reality there are just two, as\r\nthe others can be expressed in terms of just two. However sometimes there\r\nreally are several variables that change with time; as long as you\r\nknow the rates of change of all but one of them you can find the rate\r\nof change of the remaining one.  As in the case when there are just two\r\nvariables, take the derivative of both sides of the equation relating all of\r\nthe variables, and then substitute all of the known values and solve for\r\nthe unknown rate.\r\n\r\n\\begin{example}{Distance Changing Rate}{DistanceChangingRate}\r\nA road running north to south crosses a road going east to west at the\r\npoint $P$.  Car A is driving north along the first road, and car B is\r\ndriving east along the second road.  At a particular time car A is $10$\r\nkilometers to the north of $P$ and traveling at 80 km/hr, while car B\r\nis 15 kilometers to the east of $P$ and traveling at 100 km/hr.\r\nHow fast is the distance between the two cars\r\nchanging?\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nLet $a(t)$ be the distance of car A north of $P$ at time $t$, and\r\n$b(t)$ the distance of car B east of $P$ at time $t$, and let $c(t)$\r\nbe the distance from car A to car B at time $t$.  By the Pythagorean\r\nTheorem, $\\ds c(t)^2=a(t)^2+b(t)^2$. 
Taking derivatives\r\nwe get $\\ds 2c(t)c'(t)=2a(t)a'(t)+2b(t)b'(t)$, so\r\n$$\\dot{c}=\\frac{a\\dot{a}+b\\dot{b}}{c}=\\frac{a\\dot{a}+b\\dot{b}}{\\sqrt{a^2+b^2}}.$$\r\nSubstituting known values we get:\r\n$$\\dot{c}=\\frac{10\\cdot 80+15\\cdot100}{\\sqrt{10^2+15^2}}=\\frac{460}{\\sqrt{13}} \\approx 127.6 \\hbox{km/hr}$$\r\nat the time of interest.\r\n\\end{solution}\r\n\r\n\\figure[H]\r\n\\centerline{\\vbox{\\beginpicture\r\n\\normalgraphs\r\n%\\sevenpoint\r\n\\setcoordinatesystem units <0.5truecm,0.5truecm>\r\n\\setplotarea x from 0 to 10, y from 0 to 6\r\n\\axis left shiftedto x=0 /\r\n\\axis bottom shiftedto y=0 /\r\n\\multiput {$\\bullet$} at 0 0 0 4 7 0 /\r\n\\setdashes\\setlinear\r\n\\plot 0 4 7 0 /\r\n\\put {$(0,a(t))$} [r] <-4pt,0pt> at 0 4\r\n\\put {$(b(t),0)$} [t] <0pt,-6pt> at 7 0\r\n\\put {$c(t)$} [bl] <2pt,2pt> at 3.5 2\r\n\\put {$P$} [tr] <-2pt,-2pt> at 0 0\r\n\\setsolid\r\n\\setplotsymbol ({\\tenrm.}) \r\n\\arrow <5pt> [.25, 1] from 0 4 to 0 5\r\n\\arrow <5pt> [.25, 1] from 7 0 to 8 0\r\n\\endpicture}}\r\n\\caption{Cars moving apart. \\label{fig:departing cars}}\r\n\\endfigure\r\n\r\nNotice how this problem differs from\r\nExample~\\ref{exa:RecedingAirplane}.  In both cases we started with the\r\nPythagorean Theorem and took derivatives on both sides.  However, in\r\nExample~\\ref{exa:RecedingAirplane} one of the sides was a constant\r\n(the altitude of the plane), and so the derivative of the square of\r\nthat side of the triangle was simply zero.  In this Example, on the\r\nother hand, all three sides of the right triangle are variables, even\r\nthough we are interested in a specific value of each side of the\r\ntriangle (namely, when the sides have lengths 10 and 15). Make sure\r\nthat you understand at the start of the problem what are the variables\r\nand what are the constants.\r\n\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for Section \\ref{sec:RelatedRates}}\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\n\\begin{ex}\r\nAir is being pumped into a spherical balloon at a constant rate of 3 cm\\textsuperscript{3}/s. How fast is the radius of the balloon increasing when the radius reaches 5cm?\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA cylindrical tank standing upright (with one circular base on the\r\nground) has radius 20 cm.  How fast does the water level in the\r\ntank drop when the water is being drained at 25 cm${}^3$/sec?\r\n\\begin{sol}\r\n\t$1/(16\\pi)$ cm/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA cylindrical tank standing upright (with one circular base on the\r\nground) has radius 1 meter.  How fast does the water level in the\r\ntank drop when the water is being drained at 3 liters per second?\r\n\\begin{sol}\r\n\t$3/(1000\\pi)$ meters/second\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA ladder 13 meters long rests on horizontal ground and leans\r\nagainst a vertical wall.  The foot of the ladder is pulled away from\r\nthe wall at the rate of 0.6 m/sec.  How fast is the top sliding down\r\nthe wall when the foot of the ladder is 5 m from the wall?\r\n\\begin{sol}\r\n\t$1/4$ m/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA ladder 13 meters long rests on horizontal ground and leans\r\nagainst a vertical wall. 
The top of the ladder is being pulled up the\r\nwall at $0.1$ meters per second.\r\nHow fast is the foot of the ladder approaching \r\nthe wall when the foot of the ladder is 5 m from the wall?\r\n\\begin{sol}\r\n\t$-6/25$ m/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA rotating beacon is located 2 miles out in the water.  Let $A$ be the\r\npoint on the shore that is closest to the beacon.  As the beacon rotates at\r\n10 rev/min, the beam of light sweeps down the shore once each time it revolves.\r\nAssume that the shore is straight.  How fast is the point where the beam\r\nhits the shore moving at an instant when the beam is lighting up a point 2\r\nmiles along the shore from the point $A$?\r\n\\begin{sol}\r\n\t$80\\pi$ mi/min\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA baseball diamond is a square 90 ft on a side.  A player runs from first\r\nbase to second base at 15 ft/sec.  At what rate is the player's distance\r\nfrom third base decreasing when she is half way from first to second base?\r\n\\begin{sol}\r\n\t$\\ds 3\\sqrt5$ ft/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nSand is poured onto a surface at 15 cm${}^3$/sec, forming a\r\nconical pile whose base diameter is always equal to its altitude.  How\r\nfast is the altitude of the pile increasing when the pile is 3 cm\r\nhigh?\r\n\\begin{sol}\r\n\t$20/(3\\pi)$ cm/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA boat is pulled in to a dock by a rope with one end attached to the front\r\nof the boat and the other end passing through a ring attached to the dock\r\nat a point 5 ft higher than the front of the boat.  The rope is being\r\npulled through the ring at the rate of 0.6 ft/sec.  How fast is the boat\r\napproaching the dock when 13 ft of rope are out?\r\n\\begin{sol}\r\n\t$13/20$ ft/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA balloon is at a height of 50 meters, and is rising at the constant rate\r\nof 5 m/sec.  A bicyclist passes beneath it, traveling in a\r\nstraight line at the constant speed of 10 m/sec.  How fast is the distance\r\nbetween the bicyclist and the balloon increasing 2 seconds later?\r\n\\begin{sol}\r\n\t$\\ds 5\\sqrt{10}/2$ m/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA pyramid-shaped vat has square cross-section and stands on its\r\ntip.  The dimensions at the top are 2 m $\\times$ 2 m, and the depth is\r\n5 m.  If water is flowing into the vat at 3 m${}^3$/min, how fast is\r\nthe water level rising when the depth of water (at the deepest point)\r\nis 4 m?  Note: the volume of any ``conical'' shape (including\r\npyramids) is $(1/3)(\\hbox{height})(\\hbox{area of base})$.\r\n\\begin{sol}\r\n\t$75/64$ m/min\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%%\r\n%\\begin{ex} \r\n%The sun is rising at the rate of $1/4$ deg/min, and appears to be\r\n%climbing into the sky perpendicular to the\r\n%horizon, as depicted in figure~\\xrefn{fig:sunrise sunset}.\r\n%How fast is the shadow of a 200 meter building\r\n%shrinking at the moment when the shadow is 500 meters long? \r\n%\\begin{sol}\r\n%\t$145\\pi/72$ m/s\r\n%\\end{sol}\r\n%\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA woman 5 ft tall walks at the rate of 3.5 ft/sec away from a streetlight\r\nthat is 12 ft above the ground.  At what rate is the tip of her shadow\r\nmoving?  
At what rate is her shadow lengthening?\r\n\\begin{sol}\r\n\ttip: 6 ft/s, length: $5/2$ ft/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA man 1.8 meters tall walks at the rate of 1 meter per\r\nsecond toward a streetlight that is 4 meters above the ground.  At\r\nwhat rate is the tip of his shadow moving?  At what rate is his shadow\r\nshortening?\r\n\\begin{sol}\r\n\ttip: $20/11$ m/s, length: $9/11$ m/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA police helicopter is flying at 150 mph at a constant altitude of 0.5 mile\r\nabove a straight road.  The pilot uses radar to determine that an oncoming\r\ncar is at a distance of exactly 1 mile from the helicopter, and that this\r\ndistance is decreasing at 190 mph.  Find the speed of the car.\r\n\\begin{sol}\r\n\t$\\ds 380/\\sqrt3-150\\approx 69.4$ mph\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA police helicopter is flying at 200 kilometers per hour at\r\na constant altitude of 1 km above a straight road.  The pilot uses\r\nradar to determine that an oncoming car is at a distance of exactly 2\r\nkilometers from the helicopter, and that this distance is decreasing at 250\r\nkph.  Find the speed of the car.\r\n\\begin{sol}\r\n\t$\\ds 500/\\sqrt3-200\\approx 88.7$ km/hr\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\nA light shines from the top of a pole 20 m high.  An object is dropped from\r\nthe same height from a point 10 m away, so that its height at time $\\ds t$\r\nseconds is $\\ds h(t)=20-9.8t^2/2$.  How fast is the object's shadow\r\nmoving on the ground one second later?\r\n\\begin{sol}\r\n\t$4000/49$ m/s\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n\r\n%\\exercise The sun is setting at the rate of $1/4$ deg/min, and appears\r\n%to be dropping perpendicular to the horizon, as depicted in\r\n%figure~\\xrefn{fig:sunrise sunset}. How fast is the shadow of a 25\r\n%meter wall lengthening at the moment when the shadow is 50 meters long?\r\n%\\answer $25\\pi/144$ m/min\r\n%\\endanswer\r\n%\r\n%\\font\\miscsymbols \\endexamplemiscsymbols10 scaled 2000\r\n%\\begin{psfigure}{2.45in}{1.25in}{124.figME.ps}\r\n%\\figure\r\n%\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%\\sevenpoint\r\n%\\setcoordinatesystem units <0.3truecm,0.3truecm>\r\n%\\setplotarea x from -10 to 10, y from 0 to 8\r\n%\\axis bottom shiftedto y=0 /\r\n%\\setdashes\\setlinear\r\n%\\plot 5 0 -8 8 /\r\n%\\setsolid\r\n%\\setplotsymbol ({\\tenrm.}) \r\n%\\plot 0 0 0 3 /\r\n%\\put {\\miscsymbols k} at -9 8.6\r\n%\\endpicture}\\endexample\r\n%\\figrdef{fig:sunrise sunset}\r\n%\\endfigure{Sunrise or sunset.}\r\n\r\n\r\n\r\n%\\exercise\r\n%The trough shown in figure~\\xrefn{fig:trough}\r\n%is constructed by fastening together three\r\n%slabs of wood of dimensions 10 ft $\\times$ 1 ft, and then attaching the\r\n%construction to a wooden wall at each end.  The angle $\\theta$ was\r\n%originally $\\ds 30^\\circ$, but because of poor construction the sides are\r\n%collapsing.  The trough is full of water.  
At what rate (in ft${}^3$/sec) \r\n%is \r\n%the water spilling out over the top of\r\n%the trough if the s\\endexampleides have each fallen to an angle of $\\ds 45^\\circ$, and are\r\n%collapsing at the rate of $\\ds 1^\\circ$ per second?\r\n%\\answer $\\ds \\pi\\sqrt2/36$ ft$^3$/s\r\n%\\endanswer\r\n%\r\n%\\begin{psfigure}{2.5in}{1.5in}{124.figMF.ps}\r\n%\\figure\r\n%\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%\\sevenpoint\r\n%\\setcoordinatesystem units <0.9truecm,0.9truecm>\r\n%\\setplotarea x from -1 to 12, y from 0 to 4\r\n%\\setlinear\\endexample\r\n%\\plot -0.5 0.866 0 0 1 0 1.5 0.866 -0.5 0.866 /\r\n%\\plot 1 0 10.4 3.42 10.9 4.29 8.9 4.29 -0.5 0.866 /\r\n%\\plot 10.9 4.29 1.5 0.866 /\r\n%\\put {$\\theta$} [b] <0pt,3pt> at -0.15 0.3\r\n%\\put {$\\theta$} [b] <0pt,3pt> at 1.15 0.3\r\n%\\put {$1$} [t] <0pt,-4pt> at 0.5 0\r\n%\\put {$1$} [tr] <-2pt,-2pt> at -0.25 0.433\r\n%\\put {$1$} [tl] <2pt,-2pt> at 10.65 3.853\r\n%\\put {$10$} [tl] <2pt,-2pt> at 5.7 1.71\r\n%\\circulararc 30 degrees from 0 0.3 center at 0 0\r\n%\\circulararc -30 degrees from 1 0.3 center at 1 0\r\n%\\setdashes <2pt>\r\n%\\plot 0 0 9.4 3.42 10.4 3.42 / \r\n%\\plot 9.4 3.42 8.9 4.29 / \r\n%\\plot 0 0 0 0.866 /\\endexample\r\n%\\plot 1 0 1 0.866 /\r\n%\\endpicture}\r\n%\\figrdef{fig:trough}\r\n%\\endfigure{Trough.}\r\n\r\n%\\font\\miscsymbols miscsymbols10 scaled 2000\r\n%\\exercise\r\n%A light shines from the top of a pole 20 m high. A ball is falling 10\r\n%meters from the pole, casting a shadow on a building 30 meters away,\r\n%as shown in figure~\\xrefn{fig:falling ball}.\r\n%When the ball is 25 meters from the ground it is falling at 6 meters\r\n%per second. How fast is its shadow moving?\r\n%\\answer 18 m/s\r\n%\\endanswer\r\n%\r\n%\\figure\r\n%\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%\\ninepoint\r\n%\\setcoordinatesystem units <.7truecm,.7truecm>\r\n%\\setplotarea x from 0 to 4.5, y fro{DistanceChangingRate}m 0 to 4\r\n%\\axis bottom /\r\n%\\setlinear\r\n%\\linethickness1pt\r\n%\\putrule from 0 0 to 0 2\r\n%\\put {$\\bullet$} at 1 2.5\r\n%\\put {\\miscsymbols k} at 0.05 2\r\n%\\putrule from 3 0 to 3 4\r\n%\\putrule from 3 4 to 4.5 4\r\n%\\setdashes <2pt>\r\n%\\plot 0 2 3 3.5 /\r\n%\\endpicture}\r\n%\\figrdef{fig:falling ball}\r\n%\\endfigure{Falling ball.}\r\n%\r\n%\\exercise Do example~\\xrefn{exam:departing cars} assuming that the angle\r\n%between the two roads is 120${}^\\circ$ instead of 90${}^\\circ$ (that\r\n%is, the ``north--south'' road actually goes in a somewhat northwesterly\r\n%direction from $P$).  
Recall the law of cosines:\r\n%$\\ds c^2=a^2+b^2-2ab\\cos\\theta$.\r\n%\\answer $\\ds 136\\sqrt{475}/19\\approx 156$ km/hr\r\n%\\endanswer\r\n\r\n%\\exercise\r\n%Do example~\\xrefn{exam:departing cars} assuming that\r\n%car A is 300 meters north of $P$, car B is 400 meters east of $P$, both\r\n%cars are going at constant speed toward $P$, and the two cars will collide in\r\n%10 seconds.\r\n%\\answer $-50$ m/s\r\n%\\endanswer\r\n%\r\n%\\exercise\r\n%Do example~\\xrefn{exam:departing cars} assuming that\r\n%8 seconds ago car A started from rest at $P$ and has been picking up\r\n%speed at the steady rate of 5 m/sec${}^2$, and 6 seconds after car A\r\n%started car B passed $P$ moving east at constant speed 60 m/sec.\r\n%\\answer $68$ m/s\r\n%\\endanswer\r\n%\r\n%\\exercise Referring again to example~\\xrefn{exam:departing cars},\r\n%suppose that instead of car B an airplane is flying at speed $200$\r\n%km/hr to the east of $P$ at an altitude of 2 km, as depicted in\r\n%figure~\\xrefn{fig:car and airplane}. How fast is the distance between\r\n%car and airplane changing?  \r\n%\\answer $\\ds 3800/\\sqrt{329}\\approx 210$ km/hr \r\n%\\endanswer\r\n%\r\n%\\figure\r\n%\\beginpicture\r\n%\\normalgraphs\r\n%\\setcoordinatesystem units <0.3truecm,0.3truecm>\r\n%\\setplotarea x from 0 to 8, y from -2 to 10\r\n%\\setlinear\r\n%\\plot 6 3 0 0 8 -2 /\r\n%\\setdashes <2pt>\r\n%\\plot 2.5 1.25 5 8.75 /\r\n%\\plot 5 -1.25 5 8.75 /\r\n%\\multiput {$\\bullet$} at 2.5 1.25 5 8.75 /\r\n%\\put {$A$} [br] <-2pt,2pt> at 2.5 1.25\r\n%\\put {$B$} [b] <0pt,3pt> at 5 8.75\r\n%\\put {$c(t)$} [br] <-2pt,2pt> at 3.75 5\r\n%\\setplotsymbol ({\\tenrm.}) \r\n%\\setsolid\r\n%\\arrow <5pt> [.25, 1] from 2.5 1.25 to 4 2\r\n%\\arrow <5pt> [.25, 1] from 5 8.75 to 7 8.25\r\n%\\endpicture\r\n%\\figrdef{fig:car and airplane}\r\n%\\endfigure{Car and airplane.}\r\n%\r\n%\\exercise Referring again to example~\\xrefn{exam:departing cars}, suppose\r\n%that instead of car B an airplane is flying at speed $200$\r\n%km/hr to the east of $P$ at an altitude of 2 km, and that it is\r\n%gaining altitude at 10 km/hr.\r\n%How fast is\r\n%the distance between car and airplane changing?\r\n%\\answer \\hbox{$\\ds 820/\\sqrt{329}+150\\sqrt{57}/\\sqrt{47}\\approx 210$ km/hr}\r\n%\\endanswer\r\n\r\n%\\exercise\r\n%The two blades of a pair of scissors are fastened at the point $A$ as\r\n%shown in figure~\\xrefn{fig:scissors}.  Let\r\n%$a$ denote the distance from $A$ to the tip of the blade (the point $B$).\r\n%Let $\\beta$ denote the angle at the tip of the blade that is formed by the\r\n%line $\\ds \\overline{AB}$ and the bottom edge of the blade, line\r\n%$\\ds \\overline{BC}$, and let $\\theta$ denote the angle between\r\n%$\\ds \\overline{AB}$ and the horizontal.\r\n%Suppose that a piece of paper is cut in such a way that the center\r\n%of the scissors at $A$ is fixed, and the paper is also fixed.  As the\r\n%blades are closed (i.e., the angle $\\theta$ in the diagram is decreased),\r\n%the distance $x$ between $A$ and $C$ increases, cutting the paper.\r\n%\r\n%\\itemitem{\\bf a.} Express $x$ in terms of $a$, $\\theta$, and $\\beta$.\r\n%\r\n%\\itemitem{\\bf b.} Express $dx/dt$ in terms of $a$,\r\n%$\\theta$, $\\beta$, and $d\\theta/dt$.\r\n%\r\n%\\itemitem{\\bf c.} Suppose that the distance $a$ is 20 cm, and the\r\n%angle $\\beta$ is $\\ds 5^\\circ$.  Further suppose that $\\theta$ is\r\n%decreasing at 50\r\n%deg/sec.  
At the instant when $\\ds \\theta=30^\\circ$, find the rate (in\r\n%cm/sec) at which the paper is being cut.\r\n%\\answer (a) $x=a\\cos\\theta-a\\sin\\theta\\cot(\\theta+\\beta)=\r\n%\\hbox{$a\\sin\\beta/\\sin(\\theta+\\beta$), (c) $\\ds \\dot x\\approx 3.79$ cm/s}$\r\n%\\endanswer\r\n%\r\n%\\figure\r\n%\\vbox{\\beginpicture\r\n%\\normalgraphs\r\n%\\setcoordinatesystem units <0.7truecm,0.7truecm>\r\n%\\setplotarea x from -3 to 6, y from -3.5 to 3.5\r\n%\\setlinear\r\n%\\plot 0.3 0 6 3.5 2 0 0.3 0 /\r\n%\\setdashes <2pt>\r\n%\\put {\\beginpicture\r\n%\\setplotarea x from -1 to 1, y from -0.33 to 0.33\r\n%\\startrotation by 0.866 -0.5 about 0 0\r\n%\\ellipticalarc  axes ratio 3:1  360 degrees from 1 0 center at 0 0\r\n%\\stoprotation\\endpicture} at -1 1\r\n%\\put {\\beginpicture\r\n%\\setplotarea x from -1 to 1, y from -0.33 to 0.33\r\n%\\startrotation by 0.866 0.5 about 0 0\r\n%\\ellipticalarc  axes ratio 3:1  360 degrees from 1 0 center at 0 0\r\n%\\stoprotation\\endpicture} at -1 -1\r\n%\\put {\\beginpicture\r\n%\\setplotarea x from -1 to 1, y from -0.33 to 0.33\r\n%\\startrotation by 0.866 0.5 about 0 0\r\n%\\ellipticalarc  axes ratio 3:1  245 degrees from 1 0.58 center at 0 0\r\n%\\stoprotation\\endpicture} at -1 -1\r\n%\\put {\\beginpicture\r\n%\\setplotarea x from -1 to 1, y from -0.33 to 0.33\r\n%\\startrotation by 0.866 -0.5 about 0 0\r\n%\\ellipticalarc  axes ratio 3:1  -245 degrees from 1 -0.58 center at 0 0\r\n%\\stoprotation\\endpicture} at -1 1\r\n%\\multiput {$+$} at 0 0 0.27 0.9 -1 -1 /\r\n%\\setquadratic\r\n%\\plot 0.27 0.9 3 2.5 6 3.5 /\r\n%\\plot 0.27 -0.9 3 -2.5 6 -3.5 /\r\n%\\setlinear\r\n%\\plot 0.3 0 6 -3.5 2 0 /\r\n%\\plot 6 -3.5 6 3.5 /\r\n%\\plot 2 0 6 0 /\r\n%\\put {$B$} [l] <4pt,0pt> at 6 3.5\r\n%\\put {$A$} [r] <-3pt,0pt> at 0.3 0\r\n%\\put {$C$} [br] <0pt,3pt> at 2 0\r\n%\\put {$\\theta$} at 1 0.2\r\n%\\endpicture}\r\n%\\figrdef{fig:scissors}\r\n%\\endfigure{Scissors.}\r\n\r\n\\end{enumialphparenastyle}", "meta": {"hexsha": "9dfa63bc1a5f8cf16dc5baa38d724fc5f7503854", "size": 28696, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5-applications-of-derivatives/5-1-related-rates.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5-applications-of-derivatives/5-1-related-rates.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5-applications-of-derivatives/5-1-related-rates.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5181208054, "max_line_length": 172, "alphanum_fraction": 0.6834750488, "num_tokens": 9673, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.8104789155369047, "lm_q1q2_score": 0.5900087523668079}}
{"text": "\n\n    \\filetitle{wmean}{Weighted average of time series observations}{tseries/wmean}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nY = wmean(X,RANGE,BETA)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{X} {[} tseries {]} - Input tseries object whose data will be\n  averaged column by column.\n\\item\n  \\texttt{RANGE} {[} numeric {]} - Date range on which the weighted\n  average will be computed.\n\\item\n  \\texttt{BETA} {[} numeric {]} - Discount factor; the last observation\n  gets a weight of of 1, the N-minus-1st observation gets a weight of\n  \\texttt{BETA}, the N-minus-2nd gets a weight of \\texttt{BETA\\^{}2},\n  and so on.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{Y} {[} numeric {]} - Array with weighted average of individual\n  columns; the sizes of \\texttt{Y} are identical to those of the input\n  tseries object in 2nd and higher dimensions.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "d6ad7a3629953eb28d1bd35d09794ab3ca299703", "size": 1096, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/tseries/wmean.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/tseries/wmean.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/tseries/wmean.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 26.0952380952, "max_line_length": 82, "alphanum_fraction": 0.7290145985, "num_tokens": 330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.590008749249863}}
{"text": "\\chapter{Interior funcoids}\n\nHaving a funcoid $f$ let define \\emph{interior funcoid} $f^{\\circ}$.\n\n\\begin{defn}\n  Let $f \\in \\mathsf{\\mathsf{FCD}} (A , B) = \\mathsf{\\mathsf{pFCD}} \\left(\n  \\mathscr{T} A , \\mathscr{T} B \\right)$ be a co-complete funcoid. Then\n  $f^{\\circ} \\in \\mathsf{\\mathsf{pFCD}} \\left( \\dual \\mathscr{T} A ,\n  \\dual \\mathscr{T} B \\right)$ is defined by the formula $\\langle\n  f^{\\circ} \\rangle^{\\ast} X = \\overline{\\supfun{f}\n  \\overline{X}}$.\n\\end{defn}\n\n\\begin{prop}\n  Pointfree funcoid $f^{\\circ}$ exists and is unique.\n\\end{prop}\n\n\\begin{proof}\n  $X \\mapsto \\overline{\\supfun{f} \\overline{X}}$ is a component\n  of pointfree funcoid $\\dual \\mathscr{T} A \\rightarrow \\dual\n  \\mathscr{T} B$ iff $\\supfun{f}$ is a component of the\n  corresponding pointfree funcoid $\\mathscr{T} A \\rightarrow \\mathscr{T} B$\n  that is essentially component of the corresponding funcoid\n  $\\mathsf{\\mathsf{FCD}} (A , B)$ what holds for a unique funcoid.\n\\end{proof}\n\nIt can be also defined for arbitrary funcoids by the formula $f^{\\circ} =\n(\\CoCompl f)^{\\circ}$.\n\n\\begin{obvious}\n$f^{\\circ}$ is co-complete.\n\\end{obvious}\n\n\\begin{thm}\n  The following values are pairwise equal for a co-complete funcoid $f$ and $X\n  \\in \\mathscr{T} \\Src f$:\n  \\begin{enumerate}\n    \\item\\label{int-simpl} $\\rsupfun{f^{\\circ}} X$;\n    \n    \\item\\label{int-set} $\\setcond{ y \\in \\Dst f }{ \\rsupfun{ f^{-\n    1} } \\{ y \\} \\sqsubseteq X }$\n    \n    \\item\\label{int-sset-set} $\\bigsqcup \\setcond{ Y \\in \\mathscr{T} \\Dst f }{\n    \\rsupfun{f^{- 1}} Y \\sqsubseteq X }$\n    \n    \\item\\label{int-sset-flt} $\\bigsqcup \\setcond{ \\mathcal{Y} \\in \\mathscr{F} \\Dst f\n    }{ \\supfun{f^{- 1}} \\mathcal{Y}\n    \\sqsubseteq X }$\n  \\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n  ~\n  \\begin{description}\n    \\item[\\ref{int-simpl}=\\ref{int-set}] $\\setcond{ y \\in \\Dst f }{\n    \\rsupfun{f^{- 1}} \\{ y \\} \\sqsubseteq X } = \\setcond{ x \\in\n    \\Dst f }{ \\rsupfun{f^{- 1}} \\{\n    x \\} \\asymp \\overline{X} } = \\setcond{ x \\in \\Dst f\n    }{ \\{ x \\} \\asymp \\supfun{f}\n    \\overline{X} } = \\overline{\\supfun{f} \\overline{X}} =\n    \\rsupfun{f^{\\circ}} X$.\n    \n    \\item[\\ref{int-set}=\\ref{int-sset-set}] If $\\rsupfun{f^{- 1}} Y \\sqsubseteq X$ then (by\n    completeness of $f^{- 1}$) $Y = \\setcond{ y \\in Y }{\n    \\rsupfun{f^{- 1}} \\{ y \\} \\sqsubseteq X }$ and thus\n    \\[ \\bigsqcup \\setcond{ Y \\in \\mathscr{T} \\Dst f }\n       { \\rsupfun{f^{- 1}} Y \\sqsubseteq X }\n       \\sqsubseteq \\setcond{ y \\in \\Dst f }{\n       \\rsupfun{f^{- 1}} \\{ y \\} \\sqsubseteq X } . \\]\n    The reverse inequality is obvious.\n    \n    \\item[\\ref{int-sset-set}=\\ref{int-sset-flt}] It's enough to prove that if $\\supfun{f^{- 1}}\n    \\mathcal{Y} \\sqsubseteq X$ for $\\mathcal{Y} \\in \\mathscr{F} \\Dst f$\n    then exists $Y \\in \\up \\mathcal{Y}$ such that $\\langle f^{- 1}\n    \\rangle^{\\ast} Y \\sqsubseteq X$. Really let $\\supfun{f^{- 1}}\n    \\mathcal{Y} \\sqsubseteq X$. 
\\begin{prop}\n  $f^{\\circ \\circ} = f$ for every funcoid $f$.\n\\end{prop}\n\n\\begin{proof}\n  $\\rsupfun{f^{\\circ\\circ}} X = \\neg \\neg \\supfun{f}\n  \\neg \\neg X = \\supfun{f} X$.\n\\end{proof}\n\n\\begin{prop}\\label{get-rid-interior}\n  Let $g \\in \\mathsf{FCD} (A , B)$, $f \\in \\mathsf{FCD} (B ,\n  C)$, $h \\in \\mathsf{FCD} (A , C)$ for some sets $A$, $B$, $C$.\n  \n  $g \\sqsubseteq f^{\\circ} \\circ h \\Leftrightarrow f^{- 1} \\circ g \\sqsubseteq\n  h$, provided $f$ and $h$ are co-complete.\n\\end{prop}\n\n\\begin{proof}\n  $g \\sqsubseteq f^{\\circ} \\circ h \\Leftrightarrow \\forall X \\in A : \\rsupfun{ g\n  } X \\sqsubseteq \\rsupfun{ f^{\\circ} \\circ h } X\n  \\Leftrightarrow \\forall X \\in A : \\rsupfun{ g } X \\sqsubseteq\n  \\rsupfun{ f^{\\circ} } \\rsupfun{ h } X \\Leftrightarrow\n  \\forall X \\in A : \\rsupfun{ g } X \\sqsubseteq \\neg \\rsupfun{ f\n  } \\neg \\rsupfun{ h } X \\Leftrightarrow\n  \\forall X \\in A : \\rsupfun{ g } X \\asymp \\rsupfun{ f } \\neg \\rsupfun{h} X \\Leftrightarrow \\forall X \\in A : \\rsupfun{ f^{- 1}\n  } \\rsupfun{ g } X \\asymp \\neg \\rsupfun{ h\n  } X \\Leftrightarrow \\forall X \\in A : \\rsupfun{ f^{- 1}\n  } \\rsupfun{ g } X \\sqsubseteq \\rsupfun{ h\n  } X \\Leftrightarrow \\forall X \\in A : \\rsupfun{ f^{- 1} \\circ g\n  } X \\sqsubseteq \\rsupfun{ h } X \\Leftrightarrow f^{-\n  1} \\circ g \\sqsubseteq h$.\n\\end{proof}\n\n\\begin{rem}\nThe above proposition allows us to get rid of interior funcoids (and use only ``regular'' funcoids) in some formulas.\n\\end{rem}\n", "meta": {"hexsha": "4ee9107010bbfb1afda01817e281aa734dc5d52f", "size": 4684, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-interior.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-interior.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-interior.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.393442623, "max_line_length": 127, "alphanum_fraction": 0.6112297182, "num_tokens": 1832, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.727975443004307, "lm_q1q2_score": 0.5900087492498629}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{cancel}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{color}\n\n\\newtheorem{theorem}{Theorem}\n\\begin{document}\n\n\\title{Finding correlation for acmPRS}\n\n\\maketitle\n\n\\textbf{Goals:}\n\n\\begin{itemize}\n\t\\item Find $Var(\\text{acmPRS})$\n\t\\item Find $Cov(Y, \\text{acmPRS})$\n\\end{itemize}\n\n\\subsection{Finding $Var(\\text{acmPRS})$:}\n\nDefine \n$$ \n\\begin{aligned} \n\\hat{S}_i &= \\sum^{m_i}_{j = 1} \\hat{\\beta} \\mathbf{G_j} \\\\\n\\hat{S}_{acm} &= \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i \n\\end{aligned} \n$$\n\n\n\n\\begin{theorem}\n$E[\\hat{S}_{acm}] = 0$ \n\\end{theorem}\n\n\\begin{proof}\n$$ \n\\begin{aligned}\nE[\\hat{S}_{acm}] &= E[\\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i] \\\\\n&= \\sum^c_{i=1} E[n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i] \\\\\n&= \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} E[\\hat{S}_i] \\\\ \\\\\n&\\because E[\\hat{S}_i] = 0 \\\\\n&\\therefore E[\\hat{S}_{acm}] = 0\n\\end{aligned}\n$$\n\\end{proof}\n\n\\newpage\n\n\\begin{theorem}\n$Var(\\hat{S}_{acm}) =\\sum^c_{i=1}  \\frac{m_i n^2_i r^2_i h_i}{h_1}  Var(\\hat{\\beta}_i) +  \\sum^c_{i \\neq j} n_i n_j r_i r_j \\sqrt{\\frac{h_i h_j}{h^2_1}} Cov(\\hat{S}_i, \\hat{S}_j)]$\n\\end{theorem}\n\n\\begin{proof}\n\n$$\n\\begin{aligned}\nVar(\\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i) &= \\sum^c_{i=1} Var \\left( n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i \\right) + \\sum_{i \\neq j} Cov \\left( n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i, n_j r_j \\sqrt{\\frac{h_j}{h_1}} \\hat{S}_j \\right) \n\\end{aligned}\n$$\n\n\\begin{equation}\n\\begin{aligned}\n\\sum^c_{i=1} Var \\left( n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i \\right) &= \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}}  Var(\\hat{S}_i) \\\\\n&= \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} m_i Var(\\hat{\\beta}_i)      \\\\\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n\\sum_{i \\neq j} Cov \\left(  n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i,  n_j r_j \\sqrt{\\frac{h_j}{h_1}} \\hat{S}_j \\right) &= \\scriptstyle \\sum_{i \\neq j} E \\left[ \\left( n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\hat{S}_i \\right), \\left( n_j r_j \\sqrt{\\frac{h_j}{h_1}} \\hat{S}_j  \\right)  \\right] + \\scriptscriptstyle \\cancelto{0}{E \\left[ \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}}] \\hat{S}_i \\right]} \\cancelto{0}{E \\left[ \\sum^c_{j=1} n_j r_j \\sqrt{\\frac{h_j}{h_1}} \\hat{S}_j \\right]} \\\\\n&= \\sum_{i \\neq j} n_i n_j r_i r_j \\sqrt{\\frac{h_i h_j}{h^2_1}} E[\\hat{S}_i \\hat{S}_j]\\\\ \n&= \\sum_{i \\neq j} n_i n_j r_i r_j \\sqrt{\\frac{h_i h_j}{h^2_1}} Cov \\left( \\hat{S}_i, \\hat{S}_j \\right) \\\\ \n\\end{aligned}\n\\end{equation}\n\n\nThe above is because\n\n$$\n\\begin{aligned}\nCov( \\hat{S}_i, \\hat{S}_j) &= E \\left[ \\hat{S}_i \\hat{S}_j \\right] + \\cancelto{0}{E[ \\hat{S}_i]} + \\cancelto{0}{E[ \\hat{S}_j ]} \\\\\n&=  E \\left[ \\hat{S}_i \\hat{S}_j \\right]\n\\end{aligned}\n$$\n\nCombining (1) and (2): \n\n$$ Var(\\hat{S}_{acm}) =\\sum^c_{i=1}  \\frac{m_i n^2_i r^2_i h_i}{h_1}  Var(\\hat{\\beta}_i) +  \\sum^c_{i \\neq j} n_i n_j r_i r_j \\sqrt{\\frac{h_i h_j}{h^2_1}} Cov(\\hat{S}_i, \\hat{S}_j)] $$\n\n\\end{proof}\n\n\\newpage\n\n\\begin{theorem}\n$Cov(Y, \\text{acmPRS}) = \\frac{1}{c} \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} m_i Cov(\\hat{\\beta}_i, 
\\beta_{Y})$\n\\end{theorem}\n\n\\begin{proof}\n\n$$\n\\begin{aligned}\n\\text{Cov}(Y, \\text{acmPRS}) &= E[Y \\text{acmPRS}] - \\cancelto{0}{E[Y] E[\\text{acmPRS}]} \\\\\n&= E \\left[ \\left( \\sum^m_{j=1} \\beta_{j Y} G_j \\right) \\left( \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\sum^{m_i}_{j=1} \\hat{\\beta}_{j i} G_j \\right) \\right] \n\\end{aligned}\n$$   \n\n\\textcolor{red}{I'm really not sure if I'm allowed to do this.}\n\n$$ \n\\begin{aligned}\nE \\left[ \\left( \\sum^m_{j=1} \\beta_{j Y} G_j \\right) \\left( \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\sum^{m_i}_{j=1} \\hat{\\beta}_{j i} G_j \\right) \\right] &= E \\left[ \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} \\left[ \\sum^{m_i} \\beta_j G_j \\right] \\frac{1}{c} \\left[ \\sum^{m_Y} \\beta_{jY} G_j \\right] \\right]\n\\end{aligned}\n$$\n\nHere I've brought the $Y$ score inside the summation with a $\\frac{1}{c}$ coefficient.\n\n$$\n\\begin{aligned}\n&= \\frac{1}{c} \\sum^c_{i=1} \\left[ n_i r_i \\sqrt{\\frac{h_i}{h_1}} E \\left[    \\sum^{m_i} \\beta_j G_j \\sum^{m_Y} \\beta_{jY} G_j \\right] \\right]\n\\end{aligned}\n$$\n\nAnd since, by equation (7) in power and predictive accuracy, $ E \\left[ \\sum^{m_i} \\beta_j G_j \\sum^{m_Y} \\beta_{jY} G_j \\right] = m_i \\text{Cov} (\\hat{\\beta}_i, \\beta_Y)$ \n\nThen\n$$Cov(Y, \\text{acmPRS}) = \\frac{1}{c} \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} m_i Cov(\\hat{\\beta}_i, \\beta_{Y})$$\n\n\\end{proof}\n\n\\newpage\n\n\\begin{theorem}\n$R^2_{Y, \\text{acmPRS}} = \\frac{ \\frac{1}{c^2} \\left( \\sum^c_{i=1} n_i r_i \\sqrt{\\frac{h_i}{h_1}} m_i Cov(\\hat{\\beta}_i, \\beta_{Y}) \\right)^2}{ \\left( \\sum^c_{i=1}  \\frac{m_i n^2_i r^2_i h_i}{h_1}  Var(\\hat{\\beta}_i) +  \\sum^c_{i \\neq j} n_i n_j r_i r_j \\sqrt{\\frac{h_i h_j}{h^2_1}} Cov(\\hat{S}_i, \\hat{S}_j) \\right) \\left( \\sigma^2_{g} + \\sigma^2_{e} \\right)}$\n\\end{theorem}\n\n\\begin{proof}\n\n$$\n\\begin{aligned}\nR^2_{X, Y} = \\frac{Cov(X,Y)^2}{Var(X)Var(Y)}\n\\end{aligned}\n$$\n\nSubstitute the expressions from Theorems 2 and 3.\n\n\\end{proof}\n\n\\end{document}\n\n", "meta": {"hexsha": "92e7cc158b9d2ab5c3e83a3592e01fca2a8fb927", "size": 5063, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "inst/proofs/r2.tex", "max_stars_repo_name": "Chris1221/aprs", "max_stars_repo_head_hexsha": "e0e81c76c87897930086cbd0fd4a477d481d1f7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-21T13:32:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-15T14:40:03.000Z", "max_issues_repo_path": "inst/proofs/r2.tex", "max_issues_repo_name": "Chris1221/pRs", "max_issues_repo_head_hexsha": "e0e81c76c87897930086cbd0fd4a477d481d1f7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2016-11-23T11:27:26.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-01T12:16:06.000Z", "max_forks_repo_path": "inst/proofs/r2.tex", "max_forks_repo_name": "Chris1221/aprs", "max_forks_repo_head_hexsha": "e0e81c76c87897930086cbd0fd4a477d481d1f7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2016-11-21T23:36:26.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-15T14:38:49.000Z", "avg_line_length": 32.664516129, "max_line_length": 478, "alphanum_fraction": 0.6095200474, "num_tokens": 2388, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.810478913248044, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5900087459173943}}
{"text": "\\section{The mathematics of calculating \\gn{n}}\n\\label{sec:math_background}\nThis section will detail the mathematics of calculating correlation functions \\gn{n}. If you are familiar with the results, you can safely skip this section, but do note that the correlation of photon events is in many ways distinct from a standard signal correlation.\n\nIf you are not familiar with notation such as\n\\begin{align}\nc &= \\braces{(a,b)|a,b\\in\\integers;~a/b\\in\\integers} \\\\\nF&:\\wholes\\times\\integers\\rightarrow\\reals \\\\\n&\\sum_{z\\in\\integers_{n}}{z^{2}} \\\\\n&A\\cap B\n\\end{align}\nyou should read appendix~\\ref{sec:notation} before reading the remainder of this section.\n\n\\subsection{Definition of the correlation function}\n\\label{sec:correlation_function}\nA signal $\\intensity(\\time)$ is a function of real time which returns non-negative values\\footnote{Correlations can be defined for negative-valued functions, but for our purposes non-negative functions are the most physically meaningful.}:\n\\begin{equation}\n\\intensity:\\reals\\rightarrow\\reals^{*}\n\\end{equation}\nIn practice such a function may represent a physical quantity such as voltage, and we will develop our understanding of photon correlations by developing first an understanding of how to correlate real-valued functions. \n\nThe simplest non-trivial correlation of a function is the autocorrelation, which measures the average predictability of the value of the function for a time $\\time+\\timedelay$, given its value at $\\time$. By this we mean that a function is more, less, or equally likely to increase than decrease after the time delay. Formally, this is a function which maps a signal to a function mapping time vectors to scalar, and  can be calculated as\n\\begin{equation}\n\\gn{2}(\\intensity(\\time);\\timedelay) = \n\t\\frac{\\angles{\\intensity(\\time)\\intensity(\\time+\\timedelay)}}\n         {\\angles{\\intensity(\\time)}\\angles{\\intensity(\\time+\\timedelay)}}\n\\end{equation}\nwhere the angled brackets indicate an average over all values of $\\time$. Implicit in this definition is the assumption that $\\intensity(\\time)$ have a non-zero value for some time, because the average must be non-zero for the result to be well-defined. Practically, this imposes the requirement that a signal must exist in the physical sense, so it is not too onerous. Because we will typically define $\\intensity(\\time)$ in context, we will drop it from the notation:\n\\begin{equation}\n\\label{eq:autocorrelation}\n\\gn{2}(\\timedelay) = \n\t\\frac{\\angles{\\intensity(\\time)\\intensity(\\time+\\timedelay)}}\n         {\\angles{\\intensity(\\time)}\\angles{\\intensity(\\time+\\timedelay)}}\n\\end{equation}\nThis formula can be interpreted as a measure of the periodic structure of $I(t)$: the denominator represents the average density of signal over a volume in phase space, while the numerator represents the density of events where the signal has some non-random behavior relative to all origins of time. \n%Here, $\\gn{2}>1$ indicates supercorrelation, that the function is more likely than not to increase in value after a time delay $\\tau$. If $\\gn{2}<1$, the function is more likely than not to decrease in value after a time delay $\\tau$. And in the middle, if $\\gn{2}=1$, the function has equal probability to increase, decrease, or remain constant. 
The meaning of the exact magnitude of $\\gn{2}(\\tau)$ can be discussed in the context of a particular function, but these general principles should always apply.\n\nIn many cases, an autocorrelation is an oversimplification of a signal, and it is useful to be able to compare the cross-correlations of two or more channels instead. For example, if our function $\\intensity$ is endowed with a second dimension indicating the identity of a detection channel from the set $\\channels$ of detection channels, it takes the form\n\\begin{equation}\n\\intensity:\\channels\\times\\reals\\rightarrow\\reals^{*}\n\\end{equation}\nwhere $\\times$ is the Cartesian product of the sets $\\channels$ and $\\reals$. For example, if $\\channels=\\braces{0, 1}$, elements of $\\channels\\times\\reals$ take the form of 2-tuples such as (0, 10), (1, 17), (0, 0), and so on. Under this notation, $I$ is now a function of two variables and follows the form $\\intensity(\\channel, \\time)$. For clarity of future notation, we will include the channel variable as a subscript:\n\\begin{equation}\n\\intensity_{\\channel}(\\time)\\equiv \n\\intensity(\\channel,\\time) \n\\end{equation}\nGiven this, we can separate the signal $I$ into a sum of signals over all channels:\n\\begin{equation}\n\\intensity(\\time) = \\sum_{\\channel\\in\\channels}{\\intensity_{\\channel}(\\time)}\n\\end{equation}\nUnder the definition of an autocorrelation as in equation~\\ref{eq:autocorrelation}, we can substitute this new value and expand the result to obtain:\n\\begin{align}\n\\label{eq:signal_g2}\n\\gn{2}(\\tau) &= \\frac\n     {\\angles{\n       \\left(\\sum_{\\channel\\in\\channels}{\\intensity_{\\channel}(\\time)}\\right)\n       \\left(\\sum_{\\channel\\in\\channels}{\\intensity_{\\channel}(\\time+\\timedelay)}\\right)}}\n     {\\angles{\\sum_{\\channel\\in\\channels}{\\intensity_{\\channel}(\\time)}}\n      \\angles{\\sum_{\\channel\\in\\channels}{\\intensity_{\\channel}(\\time+\\timedelay)}}} \\\\\n             &= \\frac{\\angles{\\sum_{\\vec{\\channel}\\in\\channels^{2}}{\n                             \\intensity_{\\channel_{0}}(\\time)\\intensity_{\\channel_{1}}(\\time+\\timedelay)}}}\n                     {\\left(\\sum_{\\channel\\in\\channels}{\\angles{\\intensity_{\\channel}(\\time)}}\\right)\n                      \\left(\\sum_{\\channel\\in\\channels}{\\angles{\\intensity_{\\channel}(\\time+\\timedelay)}}\\right)}\n%\\gn{2}(\\tau) &= \\frac{\\angles{\\left(\\sum_{\\channel\\in\\channels}{I_{c}(t)}\\right)\n%                              \\left(\\prod_{j=1}^{n-1}{\n%                                          \\sum_{\\channel\\in\\channels}{I_{c}(t+\\tau_{j})}}\\right)}}\n%                     {\\angles{\\sum_{\\channel\\in\\channels}{I_{c}(t)}}\n%                      \\prod_{j=1}^{n-1}{\\angles{\\sum_{\\channel\\in\\channels}{I_{c}(t+\\tau_{j})}}}} \\\\\n%             &= \\frac{\\angles{\\sum_{\\vec{\\channel}\\in\\channels}{\n%                             \\left(I_{c_{0}}(t)\\prod_{j=1}^{n-1}{I_{c_{j}}(t+\\tau_{j})}\\right)}}}\n%                     {\\angles{\\sum_{\\channel\\in\\channels}{I_{c}(t)}}^{n}}\n\\end{align}\nwhere $\\vec{\\channel}\\equiv(\\channel_{0},\\channel_{1})\\in\\channels^{2}$ indicates 2-tuple elements of the set $\\channels\\times\\channels$. We see that, if $\\channels$ contains a single element (we have only one detection channel), the function returns to the form shown in equation~\\ref{eq:autocorrelation}. 
Note also that we can consider a single cross-correlation term, though these are distinct from the terms in the autocorrelation sum:\n\\begin{equation}\n\\gn{2}_{(\\channel_{0}, \\channel_{1})}(\\timedelay) =\n    \\frac{\\angles{\\intensity_{\\channel_{0}}(\\time)\n                  \\intensity_{\\channel_{1}}(\\time+\\timedelay)}}\n         {\\angles{\\intensity_{\\channel_{0}}(\\time)}\n          \\angles{\\intensity_{\\channel_{1}}(\\time+\\timedelay)}}\n\\end{equation}\nIf we ignore the normalization and retain just the cross term, we see that each cross-correlation contains the information necessary for reconstructing the full autocorrelation, though we must have knowledge of the individual average intensities to do so. As such, we will focus most of our attention on calculating the cross-correlation terms.\n\nAs an aside, note that the interchange of two channels is equivalent to inversion of their relative time delay:\n\\begin{align}\n\\gn{2}_{(\\channel_{0}, \\channel_{1})}(\\timedelay)\n      &= \\frac{\\angles{\\intensity_{\\channel_{0}}(\\time)\n                       \\intensity_{\\channel_{1}}(\\time+\\timedelay)}}\n              {\\angles{\\intensity_{\\channel_{0}}(\\time)}\n               \\angles{\\intensity_{\\channel_{1}}(\\time+\\timedelay)}} \\\\\n      &=\\frac{\\angles{\\intensity_{\\channel_{0}}(\\time-\\timedelay)\n                      \\intensity_{\\channel_{1}}(\\time)}}\n             {\\angles{\\intensity_{\\channel_{0}}(\\time-\\timedelay)}\n              \\angles{\\intensity_{\\channel_{1}}(\\time)}} \\\\\n      &= \\gn{2}_{(\\channel_{1}, \\channel_{0})}(-\\timedelay)\n\\end{align}\nThis relationship implies that, while cross-correlations may be asymmetric about $\\timedelay=0$, the full autocorrelation must be symmetric. \n\nNow that we have generalized this definition to an arbitrary number of channels, we can also generalize the correlation to an arbitrary order $n$ by drawing cross-correlation terms $\\vec{\\channel}\\in\\channels^{n}$:\n\\begin{equation}\n\\label{eq:gn}\n\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1}) = \n     \\frac{\\sum_{\\vec{\\channel}\\in\\channels^{n}}\n                {\\angles{\\prod_{j=0}^{n-1}\n                               {\\intensity_{\\channel_{j}}(\\time+\\timedelay_{j})}}}}\n     {\\prod_{j=0}^{n-1}\n            {\\parens{\\sum_{\\channel\\in\\channels}\n                          {\\angles{\\intensity_{\\channel}(\\time+\\timedelay_{j})}}}}}\n\\end{equation}\nThe interpretation of this equation is the same as before, except that the correlation is measured as a density in $n$-dimensional space.\n\n\\subsubsection{Examples of correlations of functions}\nTo become more familiar with the behavior of these correlation functions, we should evaluate them for a few familiar functions. For example, for $I(t)=1+\\sin{(t)}$:\n\\begin{align}\n\\label{eq:sine_correlation}\n\\gn{2}(\\tau) &= \\frac{\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t)}\\right)\\left(1+\\sin{(t+\\tau)}\\right)\\,dt}}\n                     {\\left(\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t)}\\right)\\,dt}\\right)\n                      \\left(\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t+\\tau)}\\right)\\,dt}\\right)} \\\\\n             &= 1 + \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\sin{(t)}\\sin{(t+\\tau)}\\,dt} \\\\\n             &= 1 + \\frac{1}{2}\\cos{(\\tau)}\n\\end{align}\nwhere the terms linear in $\\sin$ average to zero over the period. This result says that, given the value of our function at time $t$, after a small time delay $\\tau\\approx 0$ the value is likely to have increased ($\\gn{2}=1.5$). 
After a delay of $\\tau=\\pi/2\\textnormal{ or }3\\pi/2$, the function is equally likely to have increased as decreased, and its correlation is therefore indistinguishable from that of a randomly-distributed function.\n\nMoving on, consider a signal composed of two sine waves:\n\\begin{align}\nI_{0}(t) &= 1+\\sin(t) \\\\\nI_{1}(t) &= 1+\\sin(2t) \\\\\nI(t) &= I_{0}(t)+I_{1}(t)\n\\end{align}\nWe can calculate the full autocorrelation:\n\\begin{align}\n\\gn{2}(\\tau) = 1 + \\frac{1}{8}\\cos(\\tau) + \\frac{1}{8}\\cos(2\\tau)\n\\end{align}\nor individual cross-correlations:\n\\begin{align}\n\\gn{2}_{(0,0)}(\\tau) &= 1+\\frac{1}{2}\\cos(\\tau) \\\\\n\\gn{2}_{(0,1)}(\\tau) = \\gn{2}_{(1,0)}(\\tau) &= 1 \\\\\n\\gn{2}_{(1,1)}(\\tau) &= 1+\\frac{1}{2}\\cos(2\\tau)\n\\end{align}\nCalculation of higher-order correlations is also possible, though the results quickly become quite verbose:\n\\begin{align}\n\\gn{3}(\\tau_{1},\\tau_{2}) &= 1\n               + \\frac{1}{8}\\cos(\\tau_{1})\n               + \\frac{1}{8}\\cos(2\\tau_{1})\n               + \\frac{1}{8}\\cos(\\tau_{1}-\\tau_{2}) \\nonumber\\\\\n            &  + \\frac{1}{8}\\cos(2(\\tau_{1}-\\tau_{2}))\n               + \\frac{1}{8}\\cos(\\tau_{2})\n               + \\frac{1}{8}\\cos(2\\tau_{2})  \\nonumber \\\\\n            &  - \\frac{1}{32}\\sin(\\tau_{1}-2\\tau_{2})\n               - \\frac{1}{32}\\sin(2\\tau_{1}-\\tau_{2})\n               + \\frac{1}{32}\\sin(\\tau_{1}+\\tau_{2})\n\\end{align}\n\n\\subsection{Photon arrivals can be represented by $\\delta$-functions}\nTo begin to see how we may calculate correlations of photon arrival times, consider the essential character we must capture: any given detection channel $c$ may detect up to a single photon at any time, and each detection event occurs at some fixed time $t$. As such, we can uniquely define a photon as an element $\\photon$ by its channel and arrival time:\n\\begin{equation}\n\\photon=(\\channel,\\time)\\in\\channels\\times\\reals\n\\end{equation}\nFurthermore, given a set of photons $\\photons$, we can define the signal consisting of all detected photons as a sum over $\\delta$ functions\\footnote{$\\delta(\\vec{x})=0$ if and only if $\\vec{x}\\neq\\vec{0}$, and $\\int_{-\\infty}^{\\infty}{\\delta(\\vec{x})\\,d\\vec{x}}=1$.}:\n\\begin{equation}\n\\label{eq:delta_function_signal}\n\\intensity_{\\channel}(\\time)\n   = \\sum_{\\photon\\in\\photons}\n          {\\delta\\left(\\channel-\\Channel(\\photon),\\time-\\Time(\\photon)\\right)}\n\\end{equation}\nwhere $\\Channel$ is a function of a photon which returns the detection channel the photon arrived on, and $\\Time$ is a similar function which returns the arrival time. To simplify the notation, we can define a subset of photons associated with each channel:\n\\begin{equation}\n\\photons_{\\channel} = \\setbuilder{\\photon}{\\photon\\in\\photons;~\\Channel(\\photon)=\\channel}\n                      \\subseteq\\photons\n\\end{equation}\nwhere the set builder notation denotes the elements of a set and the conditions they must satisfy for inclusion in the set. In long form, the set $\\photons_{\\channel}$ is the set of all photons in $\\photons$ which arrived on channel $\\channel$. As before, we can recover the full signal by the union of all channel signals:\n\\begin{equation}\n\\photons = \\bigcup\\limits_{\\channel\\in\\channels}{\\photons_{\\channel}}\n\\end{equation}\nBecause we have discretized the signal into a set of photons, we can more efficiently describe operations on the signal as counting subsets of the signal. 
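\n\nTo make the preceding set notation concrete, the following short Python sketch (an illustration added for clarity, not part of the formal development; \\texttt{channel}, \\texttt{time}, and \\texttt{photons\\_on\\_channel} are hypothetical helper names) represents each photon as a (channel, time) 2-tuple and builds the per-channel subsets $\\photons_{\\channel}$:\n\\begin{verbatim}\n# Photons as (channel, time) tuples; a sketch, not a real data format.\nphotons = [(0, 1.0), (1, 1.5), (0, 2.0), (1, 3.5), (0, 4.0)]\n\ndef channel(photon):   # the function C(photon)\n    return photon[0]\n\ndef time(photon):      # the function T(photon)\n    return photon[1]\n\ndef photons_on_channel(photons, c):\n    # The set builder: all photons in the signal whose channel is c.\n    return [p for p in photons if channel(p) == c]\n\nprint(photons_on_channel(photons, 0))  # [(0, 1.0), (0, 2.0), (0, 4.0)]\n\\end{verbatim}\n\n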
For example, the autocorrelation of the signal $\\photons$ can be calculated by integrating the associated $\\delta$-functions:\n\\begin{align}\n\\gn{2}(\\tau) &= \n    \\frac{\\int{\\parens{\\sum_{\\photon\\in\\photons}{\\delta(\\time-\\Time(\\photon))}}\n               \\parens{\\sum_{\\photon\\in\\photons}{\\delta(\\time+\\timedelay-\\Time(\\photon))}}\n               d\\time}}\n          {\\parens{\\int{\\brackets{\n                           \\sum_{\\photon\\in\\photons}\n                                {\\delta(\\time-\\Time(\\photon))}}d\\time}}\n           \\parens{\\int{\\brackets{\n                           \\sum_{\\photon\\in\\photons}\n                                {\\delta(\\time+\\timedelay-\\Time(\\photon))}}d\\time}}}\n\\end{align}\nbut the same result can be expressed much more simply by counting events:\n\\begin{equation}\n\\label{eq:photon_correlation_prop}\n\\gn{2}(\\timedelay) \\propto\n    \\abs{\\setbuilder{(\\photon_{0},\\photon_{1})}\n                    {\\begin{aligned}\n                           \\photon_{0}\\in\\photons;\\\\\n                            \\photon_{1}\\in\\photons;\\\\\n                            \\abs{\\braces{\\photon_{0},\\photon_{1}}}=2;\\\\\n                            \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\timedelay;\n                     \\end{aligned}}}\n\\end{equation}\nwhere $(\\photon_{0},\\photon_{1})$ represents a pair of photons, and the third condition indicates that the two photons are not identical. This restriction helps distinguish the contribution of distinct pairs of photons to the correlation at $\\tau=0$, which would otherwise be a sum of all photons and all photon pairs arriving simultaneously. The normalization factor is not quite as straightforward to calculate as for the continuous case, so we should spend some time simplifying the problem until that calculation becomes practical.\n\n\\subsubsection{The photon correlation at zero delay is different from the signal correlation}\nIt is important to note that, because the photons are required to be unique, at zero time delay the correlation is not the same as the standard correlation (equation~\\ref{eq:gn}). For example, in \\gn{2} the two values are:\n\\begin{align}\n\\gn{2}_{\\textnormal{signal}}(0) =& \n\t\t\t\\frac{\\angles{\\intensity(\\time)^{2}}}{\\angles{\\intensity(\\time)}^{2}} \\\\\n\\gn{2}_{\\textnormal{photon}}(0) =& \n\t\t\t\\frac{\\angles{\\intensity(\\time)\\parens{\\intensity(\\time)-1}\\heaviside(\\intensity(\\time)-1)}}\n\t\t\t     {\\angles{\\intensity(\\time)}^{2}} \t\t\t\n\\end{align}\nwhere $\\heaviside(x)=0$ for $x\\le 0$ and $1$ for $x>0$. This extra factor ensures that would-be negative contributions are instead zero contributions. When calculating quantities related to this value, care should be taken to ensure that the correct definition is used.\n\n
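As a concrete sketch of the counting in equation~\\ref{eq:photon_correlation_prop} (illustrative Python, under the assumption of integer arrival times and no two identical photons in the set), we can count ordered pairs of distinct photons directly:\n\\begin{verbatim}\nfrom itertools import permutations\n\n# Count ordered pairs of distinct photons whose arrival times differ\n# by exactly tau; permutations() never pairs a photon with itself.\nphotons = [(0, 0), (1, 1), (0, 2), (1, 3), (0, 4)]\n\ndef pair_count(photons, tau):\n    return sum(1 for p0, p1 in permutations(photons, 2)\n               if p1[1] - p0[1] == tau)\n\nprint(pair_count(photons, 2))  # 3 pairs separated by 2 time units\n\\end{verbatim}\nThe missing normalization factor is exactly what the following discussion constructs.\n\n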
\\subsection{Correlation over a finite time range}\nIn a physical experiment, time has a well-defined beginning and end: time begins when the experiment starts, and ends when it stops. Here the term experiment can mean a complete experiment or some subset of time in a larger measurement, but ultimately the important point is that the experiment spans some fixed range of time:\n\\begin{equation}\n\\integrationtime = [\\tstart,\\tstop)\\subset\\reals\n\\end{equation}\nwhere the bracket notation indicates that the subset contains all values at least as great as $\\tstart$ and less than $\\tstop$. Moreover, we will typically divide time into equally-spaced blocks, such that each block of experiment time can be indexed by some whole number:\n\\begin{equation}\n\\integrationtime = \\braces{\\tstart,\\tstart+\\Delta\\time,\\ldots} \n       \\rightarrow \\braces{0,1,\\ldots} = \\integers_{N}\n\\end{equation}\nwhere $N=\\abs{\\integrationtime}$ is the number of time blocks in the measurement. As such, any signal $\\intensity(\\time)$ can be treated as a function of whole time:\n\\begin{equation}\n\\intensity:\\wholes\\rightarrow\\reals^{*}\n\\end{equation}\nMore simply, because there are a finite number of values for which $\\intensity(\\time)$ is defined, we can treat it as a vector in signal space:\n\\begin{equation}\n\\intensity(\\time) \\equiv \\vec{\\intensity}\\in \\left(\\reals^{*}\\right)^{N}\n\\end{equation}\nwhere each dimension of the vector space represents the possible values $\\intensity(\\time)$ may have for a particular time block. These dimensions are indexed as $\\vec{\\intensity}(\\time)$.\n\nGiven this, the autocorrelation can be defined as a sum over all times in the experiment:\n\\begin{align}\n\\gn{2}(\\tau) &= \n    \\frac{\\frac{1}{N}\n          \\sum_{j=0}^{N-1}\n               {\\vec{\\intensity}(j)\\vec{\\intensity}(j+\\timedelay)}}\n         {\\parens{\\frac{1}{N}\n                  \\sum_{j=0}^{N-1}{\\vec{\\intensity}(j)}}\n          \\parens{\\frac{1}{N}\n                  \\sum_{j=0}^{N-1}{\\vec{\\intensity}(j+\\timedelay)}}} \\nonumber \\\\\n\\label{eq:discrete_g2} &=\n    \\frac{N\n          \\sum_{j=0}^{N-1}\n               {\\vec{\\intensity}(j)\n                \\vec{\\intensity}(j+\\timedelay)}}\n         {\\parens{\\sum_{j=0}^{N-1}\n                       {\\vec{\\intensity}(j)}}\n          \\parens{\\sum_{j=0}^{N-1}\n                       {\\vec{\\intensity}(j+\\timedelay)}}}\n\\end{align}\nNote that the index of the signal extends past those for which $\\vec{\\intensity}$ is defined in the time-shifted signal. This can be reconciled by defining $\\vec{\\intensity}(\\time)=0$ for values of $t$ outside \\closedopenrange{0}{N}. Thus the time-shifted signal carries progressively less total intensity as the time delay increases, reflecting the fact that we have no information about the signal outside the bounds of our integration window.\n%Note that the normalization factor corresponding to the averaging dimension is limited to the range of values in the summation. This results from the fact that, with finite boundaries to time, not all times are valid for the correlation, and inclusion of these times underestimates the true value of \\gn{2}. This not an important effect when $\\tau\\ll N$, but for $\\tau\\approx N$ the correction is significant, and we will explicitly include it to ensure complete accuracy (at a cost of clarity).\n\nNow that we have a method for calculating the discrete correlation function, we should consider how this will be performed for photons. One method would be to choose the subdivisions of time $\\timewindow$, and define $\\intensity(\\timewindow)$ to be the number of photons found with arrival times in $\\timewindow$. Depending on the definition of $\\timewindow$ this could be any whole number up to the number of photons in the set, and equation~\\ref{eq:discrete_g2} could be applied. However, this is inefficient: for practical measurements, $\\intensity(\\time)=0$ for a large fraction of times $\\time$, and we must spend significant effort summing over zero-valued terms. 
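\n\nA minimal sketch of this binned approach (equation~\\ref{eq:discrete_g2}) in illustrative Python, with the zero-padding convention described above:\n\\begin{verbatim}\n# Binned estimate of g2: counts[j] is the number of photons in block j.\ndef discrete_g2(counts, tau):\n    N = len(counts)\n    shifted = counts[tau:] + [0] * tau   # I(t + tau), zero outside [0, N)\n    numerator = N * sum(a * b for a, b in zip(counts, shifted))\n    denominator = sum(counts) * sum(shifted)\n    return numerator / denominator\n\ncounts = [1, 0, 2, 0, 1, 1, 0, 0, 1, 0]  # photons per time block\nprint(discrete_g2(counts, 1))\n\\end{verbatim}\nMost of the work here is multiplying and summing zero-valued blocks, which is exactly the waste just described.\n\n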
As such, we will recast the problem as counting elements in a set:\n\\begin{equation}\n\\label{eq:photon_g2}\n\\gn{2}(\\tau) =\n      \\frac{\\parens{\\abs{\\integrationtime}}\n            \\abs{\\setbuilder{(\\photon_{0},\\photon_{1})}\n                            {\\begin{aligned}\n                                   \\photon_{0}\\in\\photons;\\\\\n                                    \\photon_{1}\\in\\photons;\\\\\n                                    \\abs{\\braces{\\photon_{0},\\photon_{1}}}=2;\\\\\n                                    \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\timedelay;\n                              \\end{aligned}}}}\n           {\\abs{\\setbuilder{\\photon}{\\begin{aligned}\n                                            \\photon\\in\\photons;\\\\\n                                            \\Time(\\photon)\\in\\integrationtime\n                                                      \\end{aligned}}}\n            \\abs{\\setbuilder{\\photon}{\\begin{aligned}\n                                            \\photon\\in\\photons;\\\\\n                                            \\Time(\\photon)+\\timedelay\\in\\integrationtime\n                                      \\end{aligned}}}}\n\\end{equation} \nOne final factor we have left out is the resolution of the experiment. When converting from real to whole time we implicitly assigned each subset $\\timewindow$ to some whole number index, such that the time delay $\\timedelay$ really represents a range of possible delays. In the final formula, we will include this factor, but for now we will leave it out for simplicity.\n\n\\subsection{The full \\gn{n} for photon correlation}\nBefore we extend the result of equation~\\ref{eq:photon_g2} to multiple time dimensions, it is worthwhile to write the general form of the result whose terms we must calculate. Starting with the definition of the correlation function, we note that the basic form of \\gn{n} is:\n\\begin{equation}\n\\gn{n} = \\frac{\\parens{\\textnormal{density of correlation events}}}\n              {\\parens{\\textnormal{density of randomly-distributed correlation events}}}\n\\end{equation}\nRecasting these general ideas to the photon-counting terminology we have developed, we can define a few descriptive terms. First of all, we must define the set of times accessible to a given dimension of the correlation. That is, for a given time difference between two photons, there will be some subset of times in the experiment where no correlation event can possibly exist, and inclusion of these times in any normalization will underestimate the normalized value. For a given tuple of time delays $\\vec{\\timedelay}$ and a particular time delay $\\timedelay_{j}$, we will denote this set:\n\\begin{equation}\n\\integrationtime_{\\vec{\\timedelay},\\timedelay_{j}}\n\\end{equation}\nNext, to determine the average number of randomly-distributed correlation events, we must know the average probability of finding a photon at any given time. This can be determined by counting the photons contributing to the correlation and dividing by the amount of time over which they were found, so denote the set of such photons:\n\\begin{equation}\n\\photons_{\\vec{\\timedelay},\\timedelay_{j}}\n\\end{equation}\nLastly, to determine the density of correlation events we must count the number of correlation events in a given volume of time-delay space. 
By specifying $\\timedelay$ we are really declaring that a photon arrived in some unit of time resulting from the resolution of the measurement, so the spatial normalization is implicit for now. As for the correlation events, these are determined by the given time delays and the particular channels in the cross-correlation, so denote this set:\n\\begin{equation}\n\\correlationset_{\\vec{\\timedelay},\\vec{\\channel}}\n\\end{equation}\nIn general we may wish to consider some number of possible time delays when calculating these sets, as in a histogram. We will denote each set of time delays:\n\\begin{equation}\n\\resolution = \\left[\\resolution\\upminus,\\resolution\\upplus\\right)\n\\end{equation}\nThese effective time resolutions affect the volume of phase space under consideration, and must act to normalize the correlation set to return an average number of correlation events per unit volume of phase space. \n\nAssembling these, the complete correlation function takes the following form:\n\\begin{equation}\n\\label{eq:gn_full}\n\\gn{n}\\parens{\\vec{\\resolution}} =\n       \\parens{\\frac{\\abs{\\integrationtime}^{n}}\n                    {\\prod_{j=0}^{n-1}\n                           {\\abs{\\photons_{\\vec{\\resolution},\n                                           \\resolution_{j}}}}}}\n       \\parens{\\frac{\\sum_{\\vec{\\channel}\\in\\channels^{n}}\n                          {\\abs{\\correlationset_{\\vec{\\resolution},\n                                                \\vec{\\channel}}}}}\n                    {\\prod_{j=0}^{n-1}{\\abs{\\resolution_{j}}}}}\n\\end{equation}\n\nNow that we have defined each of these terms, we can turn our focus to calculating them. First, the set $\\integrationtime_{\\vec{\\resolution},\\resolution_{j}}$ is defined as the range of times accessible to the $j$th dimension of the correlation:\n\\begin{equation}\n\\integrationtime_{\\vec{\\resolution},\\resolution_{j}} = \n   \\closedopenrange{\\integrationtime\\upminus + \\resolution_{j}\\upminus}\n   \t\t\t     {\\integrationtime\\upplus + \\resolution_{j}\\upplus}\n\\end{equation}\nNext, the set of photons accessible in these times is:\n\\begin{equation}\n\\label{eq:correlation_photons}\n\\photons_{\\vec{\\resolution}, \\resolution_{j}} = \n   \\setbuilder{\\photon}\n              {\\photon\\in\\photons;~\n               \\Time\\parens{\\photon}\\in\\integrationtime_{\\vec{\\resolution},\\resolution_{j}}}\n\\end{equation}\nFinally, the set of all correlation events is as defined in equation~\\ref{eq:photon_correlation_prop}:\n\\begin{equation}\n\\label{eq:correlation_set}\n\\correlationset_{\\vec{\\resolution},\\vec{\\channel}} =\n    \\setbuilder{\\parens{\\photon_{0},\\photon_{1},\\ldots\\photon_{n-1}}}\n               {\\begin{aligned}\n                \\photon_{0}\\in\\photons_{\\channel_{0}};\\\\\n                \\ldots;\\\\\n                \\abs{\\braces{\\photon_{0},\\photon_{1},\\ldots}} = n;\\\\\n                \\Time(\\photon_{1})-\\Time(\\photon_{0})\\in\\resolution_{1};\\\\\n                \\ldots;\n                \\end{aligned}}\n\\end{equation}\nAnd with that, we have defined all of the necessary sets to be calculated on the way to calculating the \\nth-order correlation of photons. Note that these results are exact, with no approximations. For practical purposes we will introduce some approximations, but return to this result whenever those approximations fail.\n\n%we must consider first what the normalization factor actually represents. 
In each discrete, finite-time correlation we restricted the limits of each time dimension to the values which could support a given time delay, that is, all $t$ under consideration for a given $\\tau$ were chosen such that $t+\\tau <N$. More generally:\n%\\begin{equation}\n%t\\in\\setbuilder{t}{t+\\tau\\in\\integrationtime}\n%\\end{equation}\n%For higher dimensions, we must consider all possible time spans:\n%\\begin{equation}\n%\\label{eq:allowed_time_size}\n%\\integrationtime_{\\vec{\\tau}} = \n%     \\setbuilder{t}{\\begin{aligned}\n%                   t+\\tau_{1}\\in\\integrationtime; \\\\\n%                   t+\\tau_{2}\\in\\integrationtime; \\\\\n%                   \\ldots\n%                   \\end{aligned}}\n%\\end{equation}\n%where $\\integrationtime_{\\vec{\\tau}}$ represents the subset of times valid for the given time delays. On a practical note, it is straightforward to show the following:\n%\\begin{equation}\n%\\abs{\\integrationtime_{\\vec{\\tau}}}\n%   \\equiv\\abs{\\setbuilder{t}{\\begin{aligned}\n%                   t+\\tau_{1}\\in\\integrationtime; \\\\\n%                   t+\\tau_{2}\\in\\integrationtime; \\\\\n%                   \\ldots\n%                   \\end{aligned}}}\n%   = \\abs{\\integrationtime} - \\max{\\braces{\\abs{\\tau_{1}},\\ldots}}\n%\\end{equation}\n%To simplify later results, we can define subsets of photons whose arrival times are in this subset of time:\n%\\begin{equation}\n%\\photons_{\\integrationtime_{\\vec{\\tau}},\\tau_{j}} \n%    \\equiv \\setbuilder{\\photon} \n%                      {\\begin{aligned}\n%                       \\photon\\in\\photons; \\\\\n%                       \\Time(\\photon)+\\tau_{j}\\in\\integrationtime_{\\vec{\\tau}}\n%                       \\end{aligned}}\n%\\end{equation}\n%where $\\tau_{j}$ is the $j$th index of $\\vec{\\tau}$. Omission of this value is equivalent to $\\tau_{j}=0$. 
Additionally, denote the correlation set by:\n%\\begin{equation}\n%G_{\\vec{\\tau}} \n%    \\equiv \\setbuilder{(\\photon_{0},\\ldots\\photon_{n-1})}\n%                         {\\begin{aligned}\n%                          \\photon_{0}\\in\\photons;\\\\\n%                          \\ldots;\\\\\n%                          \\photon_{n-1}\\in\\photons;\\\\\n%                          \\abs{\\braces{\\photon_{0},\\ldots\\photon_{n-1}}}=n;\\\\\n%                          \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau_{1};\\\\\n%                          \\ldots\n%                          \\end{aligned}}\n%\\end{equation}\n%\n%Given this result, we can assemble the full $n$th-order autocorrelation function:\n%\\begin{equation}\n%\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1}) = \n%   \\frac{\\abs{\\integrationtime_{\\vec{\\tau}}}^{n-1}\n%         \\abs{G_{\\vec{\\tau}}}}\n%        {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}}}}\n%         \\prod_{j=1}^{n-1}\n%               {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}},\\tau_{j}}}}}\n%\\end{equation}\n%By addition of further conditions, we can write the result for an individual cross-correlation:\n%\\begin{equation}\n%\\gn{n}_{\\vec{\\channel}}(\\tau_{1},\\ldots\\tau_{n-1}) = \n%   \\frac{\\abs{\\integrationtime_{\\vec{\\tau}}}^{n-1}\n%         \\abs{\\correlationset_{\\vec{\\tau},\\vec{\\channel}}}}\n%        {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}},\\channel_{0}}}\n%         \\prod_{j=1}^{n-1}\n%               {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}},\\channel_{j},\\tau_{j}}}}}\n%\\end{equation}\n%or express the autocorrelation as a sum over cross-correlation terms:\n%\\begin{equation}\n%\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1}) = \n%   \\frac{\\abs{\\integrationtime_{\\vec{\\tau}}}^{n-1}\n%         \\sum_{\\vec{\\channel}\\in\\channels^{n}}\n%         {\\abs{G_{\\vec{\\tau},\\vec{\\channel}}}}}\n%        {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}}}}\n%         \\prod_{j=1}^{n-1}\n%               {\\abs{\\photons_{\\integrationtime_{\\vec{\\tau}},\\tau_{j}}}}}\n%\\end{equation}\n%\n%As a final note, one important practical consideration is the fact that the time delays $\\tau$ do not really represent single times but some range of times. This was implicit in the conversion from real to whole time and ultimately is just a rescaling of the time units, which is why we did not concern ourselves with it at the time. A more important consideration relates to choices of groupings of whole number time $\\tau$, which is of importance for histogramming real data. To account for these windows $\\resolution$, the correlation term must be normalized to account for the increased effective volume of phase space: in the current formula, the term measures the average number of correlation events per volume time space, and increasing the width of any dimension increases the volume of phase space. 
Thus the extra correction is found by dividing by the effective volume of phase space:\n%\\begin{equation}\n%\\label{eq:full_gn}\n%\\gn{n}(\\resolution_{1},\\ldots\\resolution_{n-1}) = \n%   \\parens{\n%        \\frac{\\abs{\\integrationtime_{\\vec{\\resolution}}}^{n-1}}\n%             {\\prod_{j=1}^{n-1}{\\abs{\\resolution_{j}}}}}\n%   \\parens{\n%        \\frac{\\sum_{\\vec{\\channel}\\in\\channels^{n}}\n%                   {\\abs{\\correlationset_{\\vec{\\tau},\\vec{\\channel}}}}}\n%             {\\abs{\\photons_{\\integrationtime_{\\vec{\\resolution}}}}\n%              \\prod_{j=1}^{n-1}\n%                    {\\abs{\\photons_{\\integrationtime_{\\vec{\\epsilon}},\\resolution_{j}}}}}}\n%\\end{equation}\n%This formula will be the basis of all further discussion, so if the definitions of its constituent terms are not clear you should work through this section again.\n\n\\subsubsection{Calculating \\gn{n} for T3 data}\nEquation~\\ref{eq:gn_full} describes the formula for calculating the autocorrelation of a set of photons tagged with arrival channel and time. Extension of the result is a matter of introducing a second time vector $\\vec{\\pulsedelay}$ of pulse delays, and construction of sets with further restrictions $\\Pulse(\\photon_{j})-\\Pulse(\\photon_{0})=\\pulsedelay_{j}$, where the function $\\Pulse$ returns the pulse number of a photon. In many regards, this turns the \\gn{n} function into one more akin to \\gn{2n}, but with the restriction that $\\timedelay_{j}$ and $\\pulsedelay_{j}$ are associated with channel $\\channel_{j}$.\n\n\\subsubsection{Calculating a single cross-correlation}\nFollowing the logic of the preceding sections, if we wish to calculate a single cross-correlation for a given set of channels, we can use:\n\\begin{equation}\n\\label{eq:cross_correlation_full}\n\\gn{n}_{\\vec{\\channel}}\\parens{\\vec{\\resolution}}  = \n       \\parens{\\frac{\\abs{\\integrationtime}^{n}}\n                    {\\prod_{j=0}^{n-1}\n                           {\\abs{\\photons_{\n                                        \\vec{\\resolution},\n                                        \\resolution_{j},\n                                        \\channel_{j}}}}}}\n      \\parens{\\frac{\\abs{\\correlationset_{\\vec{\\resolution},\n                                          \\vec{\\channel},\n                                          \\channel}}}\n                   {\\prod_{j=1}^{n-1}{\\abs{\\resolution_{j}}}}}\n\\end{equation}\nwhere the new $\\channel$ subscripts indicate the addition of a $\\Channel(\\photon)=\\channel$ condition to the set. Note that the cross-correlations cannot be added directly to recover the full autocorrelation, though knowledge of the normalization will enable such procedures.\n\n\\subsection{Subdividing the problem of calculating \\gn{n}}\nThe efficient and accurate calculation of \\gn{n} for a given set of photons \\photons{} can be achieved by appropriate subdivision of the problem into its constituent terms. Each program described in this paper is designed to calculate some term in equation~\\ref{eq:gn_full}:\n\\begin{itemize}\n\\item $\\photons$: \\program{picoquant}, \\program{intensity}/\\program{bin\\_intensity}\n\\item $\\integrationtime$: \\program{intensity}\n\\item $\\resolution$: \\program{gn}\n\\item $\\correlationset$: \\program{correlate}, \\program{histogram}\n\\end{itemize}\nEach task is sufficiently specialized and independent of the others that the whole correlation process may be performed by piping the results of commands into each other. 
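\n\nTo tie these terms together, here is a self-contained Python sketch of equation~\\ref{eq:photon_g2} for \\gn{2} (an illustration of the quantities the programs above compute, not their actual implementation; integer arrival times are assumed):\n\\begin{verbatim}\nfrom itertools import permutations\n\n# g2 at integer delay tau for photons observed in [t_start, t_stop).\ndef g2(photons, tau, t_start, t_stop):\n    T = t_stop - t_start                     # |T|, in time units\n    pairs = sum(1 for p0, p1 in permutations(photons, 2)\n                if p1[1] - p0[1] == tau)     # correlation events\n    n0 = sum(1 for p in photons\n             if t_start <= p[1] < t_stop)    # photons with Time in T\n    n1 = sum(1 for p in photons\n             if t_start <= p[1] + tau < t_stop)\n    return T * pairs / (n0 * n1)\n\nphotons = [(0, 0), (1, 1), (0, 2), (1, 3), (0, 4)]\nprint(g2(photons, 2, 0, 5))  # 1.0 for this small example\n\\end{verbatim}\n\n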
In each section, we will discuss the design principles and algorithms which make each step possible.\n\nIn the case of \\program{intensity}/\\program{bin\\_intensity}, the former is used to approximate the intensities in equation~\\ref{eq:correlation_photons}, while \\program{bin\\_intensity} calculates the exact result. The exact result takes considerable computational expense, and is only really worthwhile when the limits of the correlation are comparable to the integration time.\n\n%The finite range of time affects the definition of an average by a scalar normalization factor, as implicitly included in equation~\\ref{eq:sine_correlation}:\n%\\begin{equation}\n%\\angles{I(t)} = \\frac{\\int_{\\integrationtime}{I(t)\\,dt}}\n%                     {\\abs{\\integrationtime}}\n%\\end{equation}\n%weere $\\int_{\\integrationtime}$ is equivalent to $\\int_{\\tstop}^{\\tstart}$ and $\\abs{\\integrationtime}=(\\tstop-\\tstart)$.. Consequently, each average in the autocorrelation picks up a factor of inverse integration time. The cross term is unfortunately not quite as simple to normalize:\n%\\begin{equation}\n%\\angles{I(t)I(t+\\tau)} = \\frac{\\int_{\\integrationtime\\setminus[0,\\tau)}{I(t)I(t+\\tau)\\,dt}}\n%                              {\\abs{\\integrationtime\\setminus[0,\\tau)}}\n%\\end{equation}\n%where $\\setminus$ indicates the removal of some values from the set of times $\\integrationtime$. \n%\\begin{align}\n%\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1})\n%  &= \n%     \\frac{\\abs{\\integrationtime}^{-1}\n%           \\sum_{\\vec{\\channel}\\in\\channels^{n}}\n%                {\\brackets{\n%                      \\int_{\\integrationtime}\n%                           {\\brackets{I_{\\channel_{0}}(t)\n%                                     \\prod_{j=1}^{n-1}{I_{\\channel_{j}}(t+\\tau_{j})}}\n%                            dt}}}}\n%     {\\abs{\\integrationtime}^{-n}\n%      \\left(\\sum_{\\channel\\in\\channels}\n%                 {\\int_{\\integrationtime}\n%                       {I_{\\channel}(t)\\,dt}}\\right)\n%      \\prod_{j=1}^{n-1}\n%            \\left({\\sum_{\\channel\\in\\channels}\n%                        {\\brackets{\\int_{\\integrationtime}\n%                                        {I_{\\channel}(t+\\tau_{j})\\,dt}}}}\\right)} \\\\\n%  &=      \\frac{\\abs{\\integrationtime}^{n-1}\n%           \\sum_{\\vec{\\channel}\\in\\channels^{n}}\n%                {\\brackets{\n%                      \\int_{\\integrationtime}\n%                           {\\brackets{I_{\\channel_{0}}(t)\n%                                     \\prod_{j=1}^{n-1}{I_{\\channel_{j}}(t+\\tau_{j})}}\n%                            dt}}}}\n%     {\\left(\\sum_{\\channel\\in\\channels}\n%                 {\\int_{\\integrationtime}\n%                       {I_{\\channel}(t)\\,dt}}\\right)\n%      \\prod_{j=1}^{n-1}\n%            \\left({\\sum_{\\channel\\in\\channels}\n%                        {\\brackets{\\int_{\\integrationtime}\n%                                        {I_{\\channel}(t+\\tau_{j})\\,dt}}}}\\right)} \n%\\end{align}\n%\n%\n%\\subsection{Correlation of photons over discrete time steps}\n%For physical reasons it is often convenient to sample the value of a signal at some fixed time interval. As such, the value of the signal is known for some values $\\tstart,\\tstart+\\Delta t,\\ldots$, which can be index as $0,1,\\ldots$. Similarly, the set of times $\\integrationtime$ ceases to represent a continuous range and instead becomes some finite subset of whole numbers. 
Formally, we now have:\n%\\begin{align}\n%I:\\wholes\\rightarrow\\reals^{*} \\\\\n%\\integrationtime \\equiv \\integers_{N}\n%\\end{align}\n%where $\\wholes$ is the set of whole numbers \\braces{0,1,\\ldots} and $N$ is the number of times sampled. One subtle but important consequence of mapping is that different experiments may carry different time steps, so we will define $\\resolution$ as the resolution in a particular experiment. Substituting these results, we obtain the discrete correlation function:\n%\\begin{equation}\n%\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1}) = \n%     \\frac{\\abs{\\integrationtime}^{n-1}\n%           \\sum_{\\vec{\\channel}\\in\\channels^{n}}\n%                {\\brackets{\n%                      \\sum_{t\\in\\integrationtime}\n%                           {\\brackets{\\resolution_{c_{0}} I_{\\channel_{0}}(t)\n%                                      \\prod_{j=1}^{n-1}{I_{\\channel_{j}}(t+\\tau_{j})}}}}}}\n%     {\\left(\\sum_{\\channel\\in\\channels}\n%                 {\\int_{\\integrationtime}\n%                       {I_{\\channel}(t)\\,dt}}\\right)\n%      \\prod_{j=1}^{n-1}\n%            \\left({\\sum_{\\channel\\in\\channels}\n%                        {\\int_{\\integrationtime}\n%                              {I_{\\channel}(t+\\tau_{j})\\,dt}}}\\right)}\n%\\end{equation}\n\n%\\subsection{The true meaning of $I(t)$}\n%\\label{sec:sampling_intensity}\n%For a real measurement, the signal $I(t)$ is real-valued and defined by averaging some signal for a time interval $\\Delta t$, such that the value $I(t)$ really is\n%\\begin{equation}\n%I(t) = \\left.\\angles{\\iota(t')}\\right|_{t'\\in[t,t+\\Delta t)} = \\frac{1}{\\Delta t}\\int_{t}^{t+\\Delta t}{\\iota(t'),dt'}\n%\\end{equation}\n%for the true function $\\iota(t)$ being approximated by the measurement. As such, the $I(t)$ over time can be represented meaningfully as a vector representing the value of the function for evenly-spaced values of $t$, and the correlations can be determined as inner products of displacements of that vector. More concretely, consider a signal $\\vec{I}\\in\\left(\\reals^{*}\\right)^{N}$ representing $N$ samples of $I(t)$ at $t=0, \\Delta t, \\ldots$, with elements indexed as $\\vec{I}(0), \\vec{I}(1),\\ldots$. To calculate $\\gn{2}(\\tau)$ for $\\tau\\in\\wholes$:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\sum_{j=0}^{N-\\tau}{\\vec{I}(j)\\vec{I}(j+\\tau)}}\n%                    {\\sum_{j=0}^{N-\\tau}{\\vec{I}(j)}\\sum_{j=0}^{N-\\tau}{\\vec{I}(j+\\tau)}}\n%\\end{equation}\n%Note that, for $\\tau\\rightarrow N$, the number of elements in the sum approaches 0. This represents the undersampled region of the correlation, and as such it is necessary to define the correlation window as significantly smaller than the sampled window in order to obtain a meaningful estimate of $\\gn{2}(\\tau)$.\n%\n%This definition of a signal is useful for many measurements of some gross quantity which can be said to sample a non-trivial range of values in $\\reals^{*}$ or \\wholes. However, photon-counting produces a signal which is fundamentally binary (in $\\integers_{2}$), indicating that either a photon has arrived in the time interval, or none has. In principle we can treat this vector in the same way as we do $\\vec{I}$, but this is inefficient: there are only a few bins which will have any signal, and a great number which contain nothing. 
Therefore, for a binary signal which only occasionally has a non-zero value, it is worthwhile to develop different forms for the correlation expressed in equation~\\ref{eq:gn}.\n%\n%\\subsection{Defining the signal of photon arrivals}\n%In practice, any given single-photon detector can detect exactly one photon at a time, such that any photon arrival can be defined unique by its detection channel and arrival time. Therefore, the photon $\\gamma$ can be represented as a 2-tuple:\n%\\begin{equation}\n%\\photon\\equiv (c, t)\\in C\\times\\wholes\n%\\end{equation}\n%We define the functions $\\Channel(\\photon)$ and $\\Time(\\photon)$ to return the channel and arrival time of a photon \\photon. From this definition, call\n%\\begin{equation}\n%\\photons\\equiv\\braces{\\photon}\\subset C\\times\\wholes\n%\\end{equation}\n%the set of all detected photons. This is the signal relevant to photon-correlation methods, and its form requires that we recast the correlation function in a way which is more directly applicable. \n%\n%To begin with, the definition of the signal on a single channel can be expressed as:\n%\\begin{equation}\n%\\photons_{c} = \\braces{\\photon\\left|\\photon\\in\\photons;~\\Channel(\\photon)=c\\right.}\n%\\end{equation}\n%In long form, this signal is the set of all detected photons, restricted to those whose channels matches the channel specified (see appendix~\\ref{sec:notation} for more details). Here, there is no longer an explicit time dependence, but we can define that with an additional parameter:\n%\\begin{equation}\n%\\photons_{c}(t) = \\abs{\\braces{\\photon\\left|\\begin{aligned}\n%                                          \\photon\\in\\photons; \\\\\n%                                           \\Channel(\\photon)=c;\\\\\n%                                           \\Time(\\photon)=t\n%                                     \\end{aligned}\\right.}}\n%\\end{equation}\n%where here the \\abs{\\cdot} indicate the number of elements in the set. To recover the complete signal \\photons:\n%\\begin{equation}\n%\\photons = \\bigcup\\limits_{c\\in C}{\\photons_{c}}\n%\\end{equation}\n%\n%\\subsubsection{T3 mode is akin to T2 mode, but a second time dimension}\n%As discussed in section~\\ref{sec:modes}, T3 mode has a definition distinct from that of T2 mode:\n%\\begin{equation}\n%\\photon\\equiv(c,p,t)\\in C\\times\\wholes\\times\\wholes\n%\\end{equation}\n%for which $\\Pulse(\\photon)$ returns the pulse number for which the photon arrived. While the definitions are not perfectly clean, we will show shortly how T3 photons can be treated exactly like a pair of T2 photons, with some restrictions.\n%\n%\\subsection{Calculating \\gn{n} by counting photons}\n%As before, we will begin our discussion of correlation functions by constructing a set formulation of the autocorrelation. Fundamentally, we must consider two halves of a problem:\n%\\begin{equation}\n%\\gn{n}(\\tau_{1},\\ldots)=\\frac{\\textnormal{number of events which satisfy a given time delay}}\n%     {\\textnormal{average number of photons per unit of time}}\n%\\end{equation}\n%The denominator is simpler to express, so we will start there. In a given experiment of finite length, there will be some absolute beginning and end of time, such that all detected photons are elements of the subset\n%\\begin{equation}\n%\\channels\\times \\wholes_{\\integrationtime}\n%\\end{equation}\n%for a total experiment time \\integrationtime. 
That is, there exists some time $\\integrationtime\\in\\wholes$ such that\n%\\begin{equation}\n%\\braces{\\photon\\left|\\photon\\in\\photons;\\Time(\\photon)\\ge\\integrationtime\\right.} = \\emptyset\n%\\end{equation}\n%Here, the integration time is defined such that there are \\integrationtime{} time units which pass during the experiment, since time begins with 0. This sets the minimal value for \\integrationtime{} to be one greater than the arrival time of the final photon in the experiment, or\n%\\begin{equation}\n%\\integrationtime\\ge\\max(\\braces{\\Time(\\photon)|\\photon\\in\\photons})\n%\\end{equation}\n%Now that we have determined the total units of time represented by \\photons{} (either by a defined value or by some maximal time \\Time(\\photon)+1), the average number of photons arriving per unit of time is\n%\\begin{equation}\n%\\angles{I(t)} = \\frac{\\abs{\\photons}}{\\integrationtime}\n%\\end{equation}\n%Next, consider the set of photons satisfying some specified time delay $\\tau$:\n%\\begin{equation}\n%\\angles{I(t)I(t+\\tau)}\\propto\n%         \\abs{\\braces{(\\photon_{0},\\photon_{1})\n%               \\left|\\begin{aligned}\n%                     \\photon_{0},\\photon_{1}\\in\\photons;\\\\\n%                     \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau\n%                     \\end{aligned}\\right.}}\n%\\end{equation}\n%This is accurate up to a normalization constant, which is related to the resolution of \\Time. As discussed in section~\\ref{sec:sampling_intensity}, the function does not have infinite resolution but instead approximates some true function by sampling for an interval we will call \\resolution. As such, the set defined is actually for a square in time space with length \\resolution{} ($t$ and $\\tau$ are really $[t,t+\\resolution)$ and $[\\tau,\\tau+\\resolution)$), so we must correct for this value to be completely general:\n%\\begin{equation}\n%\\angles{I(t)I(t+\\tau)}=    \n%         \\frac{1}{\\resolution^{2}}\n%         \\abs{\\braces{(\\photon_{0},\\photon_{1})\n%               \\left|\\begin{aligned}\n%                     \\photon_{0},\\photon_{1}\\in\\photons;\\\\\n%                     \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau\n%                     \\end{aligned}\\right.}}\n%\\end{equation}\n%For the most precise form of this calculation, $\\resolution=1$, but for practical reasons it will become necessary to undersample the correlation function by increasing the effective value of \\resolution. This resolution should also be allowed to vary with $\\tau$, so we will denote its full behavior $\\resolution_{c}(\\tau)$, where the $c$ subscript indicates its associated channel. Note that the reference channel $\\channel_{0}$ does not have a varying resolution, because it has not associated time delay (see equation~\\ref{eq:gn}). 
See section~\\ref{sec:histogram} for more details.\n%\n%Putting this together, the full autocorrelation is\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\integrationtime^{2}}{\\abs{\\photons}^{2}\\resolution\\resolution(\\tau)}\n%                \\abs{\\braces{(\\photon_{0}, \\photon_{1}),\n%                              \\left|\\begin{aligned}\n%                              \\photon_{0},\\photon_{1}\\in\\photons \\\\\n%                              \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau\n%                              \\end{aligned}\\right.}}\n%\\end{equation}\n%Extension of this result to higher dimensions and multiple channels proceeds much as before, giving the following two-channel cross-correlation:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\sum\\limits_{(c_{0},c_{1})\\in C^{2}}\n%                    {\n%                    \\frac{\\integrationtime^{2}}\n%                         {\\abs{\\photons_{c_{0}}}\\abs{\\photons_{c_{1}}}\n%                                \\resolution_{c_{0}}\\resolution_{c_{1}}(\\tau)}\n%                    \\abs{\\braces{(\\photon_{0},\\photon_{1})\n%                          \\left|\\begin{aligned}\n%                          \\photon_{0}\\in\\photons_{c_{0}};\\\\\n%                          \\photon_{1}\\in\\photons_{c_{1}};\\\\\n%                          \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau\n%                          \\end{aligned}\\right.}}\n%                    }\n%\\end{equation}\n%and this result for higher dimensions:\n%\\begin{equation}\n%\\label{eq:gn_set}\n%\\begin{split}\n%&\\gn{n}(\\tau_{1},\\ldots\\tau_{n-1})= \\\\\n%& \\sum\\limits_{\\vec{c}\\in C^{n}}\n%                    {\n%                    \\left[\n%                    \\left(\n%                    \\frac{\\integrationtime}{\\abs{\\photons_{c_{0}}}\\resolution_{c_{0}}}\n%                    \\prod_{j=1}^{n-1}{\\frac{\\integrationtime}\n%                                           {\\abs{\\photons_{c_{j}}}\\resolution_{c_{j}}(\\tau_{j})}}\n%                    \\right)\n%                    \\abs{\\braces{(\\photon_{0},\\ldots\\photon_{n-1})\n%                          \\left|\\begin{aligned}\n%                          \\photon_{0}\\in\\photons_{c_{0}};\\\\\n%                          \\photon_{1}\\in\\photons_{c_{1}};\\\\\n%                          \\ldots;\\\\\n%                          \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau_{1};\\\\\n%                          \\ldots\n%                          \\end{aligned}\\right.}}\n%                    \\right]\n%                    }\n%\\end{split}\n%\\end{equation}\n%\n%\\subsubsection{The \\gn{n} for T3 data can calculated like \\gn{2n} for T2 data}\n%Given the notation in equation~\\ref{eq:gn_set}, we see that the two timing dimensions of T3 data can be treated as separate conditions in the set, with a second unit of resolution associated with \\Pulse. 
Calling this unit of resolution $\\kappa$ and the relative pulse difference $\\rho$, the full expression is:\n%\\begin{equation}\n%\\label{eq:gn_set_t3}\n%\\begin{split}\n%&\\gn{n}(\\tau_{1},\\rho_{1},\\ldots\\tau_{n-1},\\rho_{n-1}) = \\\\\n%    &  \\sum\\limits_{\\vec{c}\\in \\channels^{n}}\n%                    {\n%                    \\left[\n%                    \\frac{\\integrationtime^{2}}\n%                         {\\abs{\\photons_{\\channel_{0}}}\\resolution_{\\channel_{0}}\\kappa_{\\channel_{0}}}\n%                    \\left(\n%                    \\prod_{j=1}^{n-1}{\\frac{\\integrationtime^{2}}\n%                                           {\\abs{\\photons_{c_{j}}}^{2}\n%                                            \\resolution_{c_{j}}(\\tau_{j})\n%                                            \\kappa_{c_{j}}(\\tau_{j})}}\n%                    \\right)\n%                    \\abs{\\braces{(\\photon_{0},\\ldots\\photon_{n-1})\n%                          \\left|\\begin{split}\n%                          \\photon_{0}\\in\\photons_{c_{0}};\\\\\n%                          \\photon_{1}\\in\\photons_{c_{1}};\\\\\n%                          \\ldots;\\\\\n%                          \\Time(\\photon_{1})-\\Time(\\photon_{0})=\\tau_{1};\\\\\n%                          \\Pulse(\\photon_{1})-\\Pulse(\\photon_{0})=\\rho_{1};\\\\\n%                          \\ldots\n%                          \\end{split}\\right.}}\n%                    \\right]\n%                    }\n%\\end{split}\n%\\end{equation}\n%This form is nearly identical to a \\gn{2n} for T2 data, except that we still sample the channel combinations from $\\channels^{n}$, reflecting the fact that $\\tau_{j}$ and $\\rho_{j}$ are associated with the same channel $\\channel_{j}$. We will continue to discuss T2-type correlation functions, but do note that the machinery developed for such uses can easily be repurposed for T3 data.\n%\n%\\subsection{Subdividing the problem of calculating \\gn{n}}\n%Examining equation~\\ref{eq:gn_set}, it is evident that there are a few distinct factors associated with each term of the sum:\n%\\begin{itemize}\n%\\item $\\integrationtime$: the integration time\n%\\item $\\abs{\\photons_{c}}$: the number of photons in a given channel \\channel.\n%\\item $\\resolution_{c}(\\tau)$: the resolution of a channel at a given time delay\n%\\item $(\\photon_{0},\\ldots)$: $n$-tuples of photons with particular properties\n%\\end{itemize}\n%Because these factors can be calculated or defined independently, we will turn our focus to the efficient determination of the value of each of these factors. Roughly, the terms can be calculated using the following programs:\n%\\begin{itemize}\n%\\item $\\integrationtime$: \\intensity\n%\\item $\\abs{\\photons_{c}}$: \\intensity\n%\\item $\\resolution_{c}(\\tau)$: \\program{picoquant}, \\program{histogram}\n%\\item $(\\photon_{0},\\ldots)$: \\program{correlate}, \\program{histogram}\n%\\end{itemize}\n%The remainder of this paper is devoted to specifying how each term may be calculated efficiently and accurately.\n
%Figure~\\ref{fig:gaussian_g2} shows this result for various values of the four adjustable parameters, demonstrating the symmetric and asymmetric behavior of the various terms in the sum.\n%\n%\\begin{figure}\n%\\centering\n%\\caption{Graphs showing different \\gn{2} for a sum of two Gaussians as parameters are tuned.}\n%\\label{fig:gaussian_g2}\n%\\end{figure}\n\n% A correlation of such a signal quantifies the randomness of its behavior over time: a signal with strong time correlation has a well-defined value $I(t+\\tau)$ given a value $I(t)$, and weaker correlation indicates that the value is less well-defined, approaching complete randomness. The correlation of a signal with itself (its autocorrelation) can be defined as:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\angles{I(t)I(t+\\tau)}}\n%                    {\\angles{I(t)}\\angles{I(t+\\tau)}}\n%\\end{equation}\n%where the angled brackets indicate an average over $t$. For this function, a value $\\gn{2}(\\tau)=1$ indicates non-correlation: at a time delay $\\tau$, the value $I(t+\\tau)$ is on average the same as $I(t)$. For $\\gn{2}(\\tau)>1$, $I(t+\\tau)$ is greater than $I(t)$, and for $\\gn{2}(\\tau)<1$, $I(t+\\tau)$ is less than $I(t)$. In terms of photon correlation methods, $\\gn{2}(\\tau)>1$ is called super-bunching (a photon arrival is likely to be followed by another), while $\\gn{2}(\\tau)<1$ is called anti-bunching (a photon arrival is likely to be followed by a lack of photons).\n%\n%For example, consider the autocorrelation of a sinusoid:\n%\\begin{align}\n%\\gn{2}(\\tau) &= \\frac{\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t)}\\right)\\left(1+\\sin{(t+\\tau)}\\right)\\,dt}}\n%                     {\\left(\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t)}\\right)\\,dt}\\right)\n%                      \\left(\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\left(1+\\sin{(t+\\tau)}\\right)\\,dt}\\right)} \\\\\n%             &= 1 + \\frac{1}{2}\\cos{(\\tau)}\n%\\end{align}\n%It is evident that there is some structure to the autocorrelation, such that there is some probability of the signal being stronger or weaker at relative time delays $\\tau$. 
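%To make the last step explicit (a sketch of the algebra the draft omits): the averages in the denominator are both 1, and in the numerator only the cross term survives, since $\\sin{(t)}$ and $\\sin{(t+\\tau)}$ integrate to zero over a full period. Using the identity $\\sin{(t)}\\sin{(t+\\tau)}=\\frac{1}{2}\\left[\\cos{(\\tau)}-\\cos{(2t+\\tau)}\\right]$,\n%\\begin{equation}\n%\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}{\\sin{(t)}\\sin{(t+\\tau)}\\,dt} = \\frac{1}{2}\\cos{(\\tau)}\n%\\end{equation}\n%because the $\\cos{(2t+\\tau)}$ term also integrates to zero over a full period.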
\n%\n%As an example which is more relevant to photon-correlation, consider a pulse train represented by\n%\\begin{equation}\n%\\label{eq:delta_train}\n%I(t) = \\sum_{n\\in\\integers}{\\delta(t-n)}\n%\\end{equation}\n%where $\\delta$ here represents the discrete delta function\n%\\begin{equation}\n%\\delta(t) = \\left\\lbrace \\begin{split}\n%                          1;~t=0 \\\\\n%                          0;~t\\not=0\n%                         \\end{split}\n%            \\right.\n%\\end{equation}\n%This signal is a regularly-spaced pulse train where a single pulse arrives every unit of time, and as such its autocorrelation is:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\sum_{n\\in\\integers}{\\delta(\\tau-n)}\n%\\end{equation}\n%\n%\n%\\subsection{Extending the correlation to arbitrary numbers of signals}\n%While the autocorrelation of a signal is often a meaningful quantity to calculate, cross-correlations are more general and have many more applications. For example, the cross-correlation of a laser pulse train and the response of a light-emitting sample can be used to measure the lifetime of the emissive state, and is the implicit measurement of the interactive mode. \n%\n%Generalization of the correlation is fairly simple: the numerator holds an average over a product of some number of signals with time delays relative to a reference channel, normalized by the average intensity at each channel. For two channels, this can be expressed as:\n%\\begin{equation}\n%\\label{eq:g2}\n%\\gn{2}(\\tau) = \\frac{\\angles{I_{0}(t)I_{1}(t+\\tau)}}\n%                    {\\angles{I_{0}(t)}\\angles{I_{1}(t+\\tau)}}\n%\\end{equation}\n%Generalization to higher dimensions is relatively straightforward:\n%\\begin{equation}\n%\\label{eq:gn}\n%\\gn{n}(\\tau_{1}, \\ldots \\tau_{n-1}) = \\frac\n%\t{\\angles{I_{0}(t)\\prod_{j=1}^{n-1}{I_{j}(t+\\tau_{j})}}}\n%\t{\\angles{I_{0}(t)}\\prod_{j=1}^{n-1}{\\angles{I_{j}(t+\\tau_{j})}}}\n%\\end{equation}\n%where the $\\prod$ notation indicates a product of elements, akin to the $\\sum$ notation for summation.\n\n%\\subsubsection{Mapping T3 correlations onto T2-like correlations}\n%One benefit of generalizing the correlation to higher dimensions is that it provides a simple way to treat correlations of T3 data as higher-dimensional T2 data. For example, if we apply the map\n%\\begin{equation}\n%\\left(c_{j}, p_{j}, t_{j}\\right) \\rightarrow \\left(\\left(c_{j}, p_{j}\\right), \\left(c_{j}, t_{j}\\right)\\right)\n%\\end{equation}\n%it is evident that we can treat the single T3 entry as containing two dimensions of time to analyze independently. 
Therefore, any calculation of a correlation of T3 data can be expressed as\n%\\begin{equation}\n%\\gn{n}(\\rho_{1}, \\tau_{1}, \\ldots \\rho_{n-1}, \\tau_{n-1}) = \\frac\n%\t{\\angles{I_{0}(p, t)\\prod_{j=1}^{n-1}{I_{j}(p+\\rho_{j},t)I_{j}(p,t+\\tau_{j})}}}\n%\t{\\angles{I_{0}(p, t)}^{2}\\prod_{j=1}^{n-1}{\\angles{I_{j}(p+\\rho_{j},t)}\n%\t                                       \\angles{I_{j}(p,t+\\tau_{j})}}}\n%\\end{equation}\n%For the rest of this paper, T3 data will be treated as higher-dimensional T2 data.\n%\n%\\subsection{The true meaning of $I(t)$}\n%For many measurements, the signals $I_{j}(t)$ are real-valued and defined by averaging some signal for a time interval $\\Delta t$, such that the value $I_{j}(t)$ really is\n%\\begin{equation}\n%I_{j}(t) = \\left.\\angles{\\iota_{j}(t')}\\right|_{t'\\in[t,t+\\Delta t)} = \\frac{1}{\\Delta t}\\int_{t}^{t+\\Delta t}{\\iota_{j}(t')\\,dt'}\n%\\end{equation}\n%for the true function $\\iota_{j}(t)$. As such, the $I(t)$ over time can be represented meaningfully as a vector representing the value of the function for evenly-spaced values of $t$, and the correlations can be determined as inner products of displacements of that vector. More concretely, consider a signal $\\vec{I}\\in\\reals^{N}$ representing $N$ samples of $I(t)$ at $t=0, \\Delta t, \\ldots$, with elements indexed as $\\vec{I}(0), \\vec{I}(1),\\ldots$. To calculate $\\gn{2}(\\tau)$ for $\\tau\\in\\integers^{*}$:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\sum_{j=0}^{N-\\tau}{\\vec{I}(j)\\vec{I}(j+\\tau)}}\n%                    {\\sum_{j=0}^{N-\\tau}{\\vec{I}(j)}\\sum_{j=0}^{N-\\tau}{\\vec{I}(j+\\tau)}}\n%\\end{equation}\n%Note that, for $\\tau\\rightarrow N$, the number of elements in the sum approaches 0. This represents the undersampled region of the correlation, and as such it is necessary to define the correlation window as significantly smaller than the sampled window in order to obtain a meaningful estimate of \\gn{n}.\n%\n%This definition of a signal is useful for many measurements of some gross quantity which can be said to sample a non-trivial range of values in \\reals{} or \\integers. However, photon-counting produces a signal which is fundamentally binary (in $\\integers_{2}$), indicating that either a photon has arrived, or none has. In principle it is possible to bin these photon arrivals to count the number of arrivals in some time interval and recover the vector-like signal discussed above, but such steps introduce a range of subtle artifacts related to the precise origin of time and definition of bin resolution. These problems are largely avoidable if we instead develop a definition of \\gn{n} which involves counting these events directly.\n%\n%Do note, however, that the ``true'' photon arrival time discussed from here on is itself a binary form of this vectorial definition, because real instruments will have some finite timing resolution. In this sense, the idea of a discrete arrival time is just a simplification of the true signal, where each sampling represents the state of a photon arriving or not arriving during that interval. Many more samples will find no photon than one, so those are simply not reported. Additionally, this means that the reported photon arrival time carries some time units defined by the resolution of the measurement, so we can declare any arrival time $t$ to be a multiple of these time steps, or $t\\in\\wholes$, where \\wholes{} is the set of all whole numbers (the positive integers and zero).  
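%As a minimal toy illustration of the counting viewpoint developed next (an assumed example, not measured data): take arrival times $T=\\braces{0,1,3}$. The ordered pairs $(t_{j},t_{k})$ with $t_{j}-t_{k}=\\tau$ are $(1,0)$ for $\\tau=1$, $(3,1)$ for $\\tau=2$ and $(3,0)$ for $\\tau=3$, so the pair count is one at each of these delays and zero at every other positive delay.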
\n%\n%\\subsection{Calculating \\gn{n} by counting photons}\n%As in equation~\\ref{eq:delta_train}, it is possible to define the signal representing arrivals of photons as a sum over a set of $\\delta$-functions. Consider the set $T\\subset\\wholes$ of photon arrival times. The signal can be defined as\n%\\begin{equation}\n%I(t) = \\sum_{t'\\in T}{\\delta(t-t')}\n%\\end{equation}\n%This notation is cumbersome, so from here we will refer to a photon arrival time as $t'$ alone, but the $\\delta$ notation could be substituted as desired. This change makes the summation notation difficult to parse, so we instead switch to a set notation:\n%\\begin{equation}\n%I(t) = \\abs{\\braces{\\left. t'\\right|t'\\in T;~t-t'=0}}\n%\\end{equation}\n%In long form, this definition counts the number of photon arrival times $t'$ which are equal to the requested time $t$. This is computationally inefficient but conceptually simple, so we will define all important quantities in this fashion before discussing how to compute the result efficiently.\n%\n%Extending this notation to \\gn{2} for a single signal:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\abs{\\braces{(t_{j}, t_{k})\\left|\n%                          \\begin{split} \n%                            t_{j}, t_{k}\\in T; \\\\\n%                            t_{j}-t_{k}=\\tau\n%                          \\end{split}\\right.\n%                    }}}\n%                    {\\abs{T}^{2}/\\left(max(T)-min(T)\\right)^{2}}\n%\\end{equation}\n%where $min$ and $max$ are the functions which return the minimum and maximum values of a set, respectively. Even this definition is not quite sufficient: the measurement carries its own unit of time $\\resolution$, which means that any time $t$ specified is really a range $[t,t+\\resolution)$, so with appropriate normalization the result becomes:\n%\\begin{equation}\n%\\gn{2}(\\tau) = \\frac{\\abs{\\braces{(t_{j}, t_{k})\\left|\n%                          \\begin{split} \n%                            t_{j}, t_{k}\\in T \\\\\n%                            t_{j}-t_{k}=\\tau\n%                          \\end{split}\\right.\n%                    }}}\n%                    {\\resolution^{2}\\abs{T}^{2}/\\left(max(T)-min(T)\\right)^{2}}\n%\\end{equation}\n%This result is identical to the normalization of histogrammed values, which will be discussed later.\n%\n%Extending this result to a number of signals, we obtain\n%\\begin{equation}\n%\\gn{n}(\\tau_{1}, \\ldots) = \\frac{\\abs{\\braces{(t_{0}, t_{1}, \\ldots)\\left|\n%                                      \\begin{split}\n%                                      t_{0}\\in T_{0}; t_{1}\\in T_{1};\\ldots \\\\\n%                                      t_{1}-t_{0} = \\tau_{1}; \\ldots\n%                                      \\end{split}\\right.}}}\n%                                {\\prod_{j=0}^{n-1}{\\resolution_{j}\\frac{\\abs{T_{j}}}{max(T_{j})-min(T_{j})}}}\n%\\end{equation}\n%Typically, the signals being correlated will come from the same device, such that all $\\resolution_{j}$ are equal, leading to the final formula:\n%\\begin{equation}\n%\\label{eq:gn_set}\n%\\gn{n}(\\tau_{1}, \\ldots) = \\frac{\\abs{\\braces{(t_{0}, t_{1}, \\ldots)\\left|\n%                                      \\begin{split}\n%                                      t_{0}\\in T_{0}; t_{1}\\in T_{1};\\ldots \\\\\n%                                      t_{1}-t_{0} = \\tau_{1}; \\ldots\n%                                      \\end{split}\\right.}}}\n%                               
 {\\prod_{j=0}^{n-1}{\\resolution\\frac{\\abs{T_{j}}}{max(T_{j})-min(T_{j})}}}\n%\\end{equation}\n%\n%\\subsection{Subdividing the problem of calculating \\gn{n}}\n%The expression in equation~\\ref{eq:gn_set} is somewhat intimidating, but we can divide its \n", "meta": {"hexsha": "e6937d0a1c8ff4698ce775aed1c3f0a5f60f61d9", "size": 65216, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/math.tex", "max_stars_repo_name": "mktt2897/photon_correlation", "max_stars_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-10-24T11:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-25T06:02:21.000Z", "max_issues_repo_path": "doc/tex/math.tex", "max_issues_repo_name": "mktt2897/photon_correlation", "max_issues_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2017-08-15T14:42:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T07:35:13.000Z", "max_forks_repo_path": "doc/tex/math.tex", "max_forks_repo_name": "mktt2897/photon_correlation", "max_forks_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-04-14T16:27:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-15T05:32:12.000Z", "avg_line_length": 74.0249716232, "max_line_length": 896, "alphanum_fraction": 0.6565413395, "num_tokens": 17423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767906859263, "lm_q2_score": 0.6723317123102956, "lm_q1q2_score": 0.5899554731944118}}
{"text": "\\documentclass{article}\n\\usepackage[hmargin=1in,vmargin=1.5in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{graphicx}\n\\usepackage{subcaption}\n\\usepackage{bm}\n\\newcommand{\\x}{\\bm x}\n\\title{Homework 5}\n\\setcounter{MaxMatrixCols}{20}\n\\author{Xinyi Gu, Songchen Tan}\n\\date{\\today}\n\\begin{document}\n\\maketitle\n\\section{}\n\\subsection*{(a)}\nDenoting the quadratic function $f$, we have $\\nabla f=Ax+b$ and $H_f=A$. The exact solution $x^*=-A^{-1}b$. Applying the Newton's method to arbitrary $x_0$, we get\n\n$$\nx_1=x_0-H_f^{-1}\\nabla f(x_0)=x_0-A^{-1}(Ax_0+b)=-A^{-1}b\n$$\n\nThrerfore the Newton's method converges in one step.\n\n\\subsection*{(b)}\nWe will need $x^*=-A^{-1}b=x_0-\\alpha \\nabla f(x_0)$ for some $\\alpha$. Therefore\n\n$$\n\\begin{aligned}\n    -A^{-1}b&=x_0-\\alpha \\nabla f(x_0)\\\\\n    -A^{-1}b&=x_0-\\alpha (Ax_0+b)\\\\\n    \\alpha (Ax_0+b)&=x_0+A^{-1}b\\\\\n    A\\alpha (Ax_0+b)&=(Ax_0+b)\n\\end{aligned}\n$$\n\nTherefore $Ax_0+b$ need to be either zero or an eigenvector of $A$.\n\n\\section{}\n\\section{}\n\\subsection*{(a)}\n\n$$\n\\min_{(x,y)\\in\\mathbb R^2}\\sum_i(x-x_i)^2+(y-y_i)^2\n$$\n\\subsection*{(b)}\n\nThe objective function $f$ is differentiable, because\n\n$$\n\\nabla f=\\left(\\sum_i2(x-x_i),\\sum_i2(y-y_i)\\right)\n$$\n\nThe problem is convex because for $r=(x,y)$ and $r'=(x',y')$\n\n$$\n\\begin{aligned}\n    &f(\\theta r+(1-\\theta)r')-\\theta f(r)-(1-\\theta)f(r')\\\\\n    =&\\sum_i(\\theta x+(1-\\theta)x'-x_i)^2+(\\theta y+(1-\\theta)y'-y_i)^2-\\theta(x-x_i)^2-(1-\\theta)(y-y_i)^2\\\\\n    =&\\sum_i(\\theta^2-\\theta)[(x-x_i)^2+(x'-x_i)^2+(y-y_i)^2+(y'-y_i)^2]+2(\\theta^2-\\theta)[(x-x_i)(x'-x_i)+(y-y_i)(y'-y_i)]\\\\\n    =&\\sum_i(\\theta^2-\\theta)[(x+x'-2x_i)^2+(y+y'-2y_i)^2]\\\\\n    \\le&0\n\\end{aligned}\n$$\n\n\\subsection*{(c)}\n\nSince the problem is convex we only need $\\nabla f=0$, which is\n\n$$\n\\sum_i2(x-x_i)=\\sum_i2(y-y_i)=0\n$$\n\n\\subsection*{(d)}\n$$\n(x^*,y^*)=\\frac1n\\left(\\sum_ix_i,\\sum_iy_i\\right)\n$$\n\\section{}\n\\subsection*{(a)}\n\nWe only need to prove that the objective function $f$ is non-convex. By random trying we can get\n\n$x=(1.0697822217051507, 1.1327854135005468, 1.707087600826763)$\n\nand\n\n$x'=(1.1270384303960612, 1.4248690078210067, 1.4724353082361537)$\n\nsuch that $f(x)+f(x')<2f((x+x')/2)$.\n\n\\subsection*{(b)}\n\nChanging variable $x_i=e^{z_i}$, we get $\\min_{z_1,z_2,z_3}\\exp(-z_1+2z_2+3z_3)$ subject to $11z_1-12z_2+13z_3\\le\\log 14$ and $15z_1+16z_2-17z_3\\le\\log 18$ and $z_1,z_2,z_3\\ge0$. Since $e^z$ is a strictly monotonic increasing function of $z$, the objective function can be changed to $\\min_{z_1,z_2,z_3}-z_1+2z_2+3z_3$. This is then solvable by simplex method.\n\n\\subsection*{(c)}\n\nChanging variable $x_i=e^{z_i}$, we get $\\min_{z_1,z_2,z_3}\\exp(-z_1+2z_2+3z_3)+5\\exp(4z_1+5z_2-6z_3)$ subject to $11z_1-12z_2+13z_3\\le\\log 14$ and $\\log(\\exp(15z_1+16z_2-17z_3)+7\\exp(18z_1-19z_2+20z_3))\\le\\log 21$ and $z_1,z_2,z_3\\ge0$. We notice that the objective function and the second constraint share the same structure, so it suffices to prove that a function $\\mathbb R^3\\to\\mathbb R$ of form\n\n$$\nf(z)=\\log(\\exp(az_1+bz_2+cz_3)+r\\exp(a'z_1+b'z_2+c'z_3))\n$$\n\nis convex. To prove this, we denote $g(y)=\\log(e^{y_1}+e^{y_2})$ and $h(z)=(az_1+bz_2+cz_3,a'z_1+b'z_2+c'z_3+\\log r)$ such that $h\\circ g=f$. 
So\n\n$$\n\\begin{aligned}\n    f(\\theta z+(1-\\theta)z')\n    &=g(h(\\theta z+(1-\\theta)z'))\\\\\n    &=g(\\theta h(z)+(1-\\theta)h(z'))&\\text{$h$ is affine}\\\\\n    &\\le\\theta g(h(z))+(1-\\theta)g(h(z'))&\\text{convexity of $g$}\\\\\n    &=\\theta f(z)+(1-\\theta)f(z')\n\\end{aligned}\n$$\n\n\\end{document}\n", "meta": {"hexsha": "f55286fbec9abae174f438c2a26612db4eeba071", "size": 3483, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5/main.tex", "max_stars_repo_name": "tansongchen/learn-optimization", "max_stars_repo_head_hexsha": "b44e902c857287ff05da449b9a639dfe534af8ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5/main.tex", "max_issues_repo_name": "tansongchen/learn-optimization", "max_issues_repo_head_hexsha": "b44e902c857287ff05da449b9a639dfe534af8ed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5/main.tex", "max_forks_repo_name": "tansongchen/learn-optimization", "max_forks_repo_head_hexsha": "b44e902c857287ff05da449b9a639dfe534af8ed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.025862069, "max_line_length": 401, "alphanum_fraction": 0.6448463968, "num_tokens": 1501, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316860482763, "lm_q2_score": 0.8774767970940974, "lm_q1q2_score": 0.5899554544585157}}
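A supporting fact the last step relies on, spelled out as a sketch (the standard log-sum-exp argument): $g(y)=\\log(e^{y_1}+e^{y_2})$ is convex because its Hessian is positive semidefinite. Writing $p_i=e^{y_i}/(e^{y_1}+e^{y_2})$, so that $\\nabla g=(p_1,p_2)$ and $\\nabla^2g=\\mathrm{diag}(p)-pp^T$,\n\n$$\nv^T\\nabla^2g(y)\\,v=\\sum_ip_iv_i^2-\\left(\\sum_ip_iv_i\\right)^2\\ge0\n$$\n\nsince, by the Cauchy-Schwarz inequality with the weights $p_i$ (which satisfy $\\sum_ip_i=1$), $\\left(\\sum_ip_iv_i\\right)^2\\le\\sum_ip_iv_i^2$.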
{"text": "\\subsection{Representation theory for the time group}\n\nTime is a linear operator\n\nInstead, we describe the time operator as a Lie group, using Lie algebra.\n\n\\(\\Psi (t_b-t_a)=e^{(t_b-t_a)X}\\)\n\n\\subsubsection{States are vectors}\n\nWe can remove a degree of freedom by using norm of 1 for vectors\n\nFor each dynamic system we define a set of possible states.\n\nWe can describe a state \\(v\\in V\\).\n\n\\subsubsection{Finite state spaces}\n\nWe can describe a system like heads or tails.\n\n\\subsubsection{Infinite state spaces}\n\nThis can describe continous position, or an angle.\n\n", "meta": {"hexsha": "a2cc33acfbfd09e7e6870fddb846773fe4fa4aa9", "size": 567, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.68, "max_line_length": 73, "alphanum_fraction": 0.7601410935, "num_tokens": 138, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.855851154320682, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5899430023430271}}
{"text": "\\documentclass[11pt, a4note]{article}\n\\usepackage{amsmath, amsthm, amssymb, geometry}\n\\geometry{left=14mm, right=14mm, top=22mm, bottom=22mm}\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{proposition}{Proposition}[section]\n\\newtheorem{theorem}{Theorem}[section]\n\\begin{document}\n\n\\title{A Note on an Elementary Formulation of the Naive Bayesian Model}\n\\author{jiyucho9145}\n\\maketitle\n\\begin{abstract}\nSome mail server programs and some mail client programs classify messages into several categories\nby the naive Bayesian classifier automatically. The algorithm of the naive Bayesian classifier is based on\nthe naive Bayesian model, which satisfies the two specific assumptions.\n\nIn this note, we define a probabilistic model (message receiving model) without the two assumptions\nand show that the message receiving model coincides with the naive Bayesian model under the two assumptions.\n\\end{abstract}\n\n\\newpage\n\\tableofcontents\n\n\\newpage\n\\section{Introduction}\nThe naive Bayesian classifier is used in some mail server programs and some mail client programs.\nMany Japanese books regarding to machine leaning have been published in recent years.\nSeveral popular machine larning algorithms are introduced in these books.\nThe naive Bayesian classifier is one of these algorithms.\n\nThe mathematical background of the naive Bayesian classifier is not introduced in some books.\nThis policy is appropriate in practice because programmers are able to implement and use\nthe naive Bayesian classifier without the mathematical background.\n\nHowever, we have an interest in the mathematical background of the naive Bayesian classifier. \nWe formulate the naive Bayesian model in elementary way by probability theory\nbecause the naive Bayesian classifier is based on the naive Bayesian model.\n\nNext section introduces a thory on a probabilistic model (message receiving model),\nwhich is a base of the theory on the naive Bayesian model, section 3 formulates\nthe naive Bayesian model.\n\n\\section{Message Receiving Model}\n\n\\subsection{Message Receiving Model}\n\n\\begin{definition}\nLet $ W $ be an nonempty finite set, $ E = \\mathrm{Map}(W, \\{0, 1\\}), C = \\{c_{1}, c_{2} \\}, c_{1} \\ne c_{2} $,\nthen we call $ (W, E, C) $ a message receiving model in this note.\n\\end{definition}\n\n\\begin{definition}\nLet $ g : F \\to C, F \\subset E, F \\ne \\O, F_{g,i} = g^{-1}(c_{i}) \\ne \\O $, then we call $ g : F \\to C $\na training data for a message receiving model $ (W, E, C) $ in this note.\n\\end{definition}\n\n\\begin{definition}\nLet $ e \\in E $, then we define a support $ \\mathrm{Supp}(e) $ of $ e $ as follows:\n\\begin{equation}\n\\mathrm{Supp}(e) = \\{ w \\in W ; e(w) \\ne 0\\}.\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}\nLet $ Z \\subset W $, then we define a set $ M_{g}(Z) $ as follows:\n\\begin{equation}\nM_{g}(Z) = \\{e \\in F ; Z \\subset \\mathrm{Supp}(e)\\}.\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}\nLet $ w \\in W $, then we define a set $ M_{g}(w) $ as follows:\n\\begin{equation}\nM_{g}(w) = M_{g}(\\{w\\}).\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}\nLet $ e \\in E $, then we define a set $ M_{g}(e) $ as follows:\n\\begin{equation}\nM_{g}(e) = \\cap_{w \\in \\mathrm{Supp}(e)}M_{g}(w).\n\\end{equation}\n\\end{definition}\n\n\\subsection{Message Receiving Measures}\n\n\\begin{definition}\nWe call the map\n\\begin{equation}\nm_{g} : \\mathrm{Pow}(F) \\to \\mathbb{R} ; G 
\\mapsto \\mathrm{card}(G)\n\\end{equation}\na message receiving measure of $ g : F \\to C $ in this note.\nHere $ \\mathrm{Pow}(S) $ denotes the power set of $ S $ for an arbitrary set $ S $.\n\\end{definition}\n\n\\begin{proposition}\n$ m_{g} : \\mathrm{Pow}(F) \\to \\mathbb{R} $ is a measure on the measurable space $ (F, \\mathrm{Pow}(F)) $.\n\\end{proposition}\n\n\\begin{proposition}\n\\begin{equation}\n\\mathrm{card}(F_{g,1}) + \\mathrm{card}(F_{g,2}) = \\mathrm{card}(F).\n\\end{equation}\n\\end{proposition}\n\n\\subsection{Message Receiving Probabilities}\n\n\\begin{definition}\nWe call the map\n\\begin{equation}\nP_{g} : \\mathrm{Pow}(F) \\to \\mathbb{R} ; G \\mapsto m_{g}(G)/m_{g}(F)\n\\end{equation}\na message receiving probability of $ g : F \\to C $ in this note.\n\\end{definition}\n\n\\begin{proposition}\n$ P_{g} : \\mathrm{Pow}(F) \\to \\mathbb{R} $ is a probability (measure) on the measurable space $ (F, \\mathrm{Pow}(F)) $.\n\\end{proposition}\n\n\\begin{proposition}\n\\begin{equation}\nP_{g}(F_{g,1}) + P_{g}(F_{g,2}) = 1.\n\\end{equation}\n\\end{proposition}\n\n\\subsection{Message Receiving Conditional Probabilities}\n\n\\begin{definition}\nWe call the conditional probability \n\\begin{equation}\nP_{g}(-|-): \\mathrm{Pow}(F) \\times \\mathrm{Pow}(F) \\to \\mathbb{R}\n\\end{equation}\na message receiving conditional probability of $ g : F \\to C $ in this note.\n\\end{definition}\n\n\\begin{proposition}\nFor arbitrary $ M, N \\subset F $, the following equation is satisfied:\n\\begin{equation}\nP_{g}(M \\cap N) = P_{g}(M | N)P_{g}(N).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proposition}\nFor arbitrary $ M \\subset F $, the following equation is satisfied:\n\\begin{equation}\nP_{g}(F_{g,1}|M) = \\frac{P_{g}(M|F_{g,1})P_{g}(F_{g,1})}{P_{g}(M|F_{g,1})P_{g}(F_{g,1}) + P_{g}(M|F_{g,2})P_{g}(F_{g,2})}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proposition}\nFor arbitrary $ M \\subset F $, the following equation is satisfied:\n\\begin{equation}\nP_{g}(F_{g,1}|M) + P_{g}(F_{g,2}|M) = 1.\n\\end{equation}\n\\end{proposition}\n\n\\section{Comparison with the Naive Bayesian Model}\n\n\\subsection{Calculating Conditional Probabilities under the Two Specific Assumptions}\n\n\\begin{theorem}\nAssume that $ g : F \\to C $ satisfies the following two conditions:\n(i) $ P_{g}(F_{g,1}) = P_{g}(F_{g,2}) = 1/2 $;\n(ii) for arbitrary $ w_{1}, w_{2}, \\dots, w_{r} \\in W $,\n\\begin{equation}\nP_{g}(M_{g}(\\{ w_{1}, w_{2}, \\dots, w_{r} \\})|F_{g,i})\n= P_{g}(M_{g}(w_{1})|F_{g,i})P_{g}(M_{g}(w_{2})|F_{g,i}) \\cdots P_{g}(M_{g}(w_{r})|F_{g,i}).\n\\end{equation}\ni.e., $ M_{g}(w_{1}), M_{g}(w_{2}), \\dots, M_{g}(w_{r}) $ are conditionally independent given $ F_{g,i} $.\nAnd, let $ Q_{g}(w_{1}, w_{2}, \\dots, w_{r}), R_{g}(w_{1}, w_{2}, \\dots, w_{r}) $ be the following functions:\n\\begin{eqnarray}\nQ_{g}(w_{1}, w_{2}, \\dots, w_{r}) = \\prod_{i=1}^{r}P_{g}(M_{g}(w_{i})|F_{g,1}), \\\\\nR_{g}(w_{1}, w_{2}, \\dots, w_{r}) = \\prod_{i=1}^{r}(\\frac{P_{g}(M_{g}(w_{i})) - P_{g}(M_{g}(w_{i})|F_{g,1})P_{g}(F_{g,1})}{1 - P_{g}(F_{g,1})}).\n\\end{eqnarray}\nThen, for arbitrary $ e \\in E $, the following equation is satisfied:\n\\begin{equation}\nP_{g}(F_{g,1}|M_{g}(e))\n= \\frac{Q_{g}(w_{1}, w_{2}, \\dots, w_{r})}{Q_{g}(w_{1}, w_{2}, \\dots, w_{r}) + R_{g}(w_{1}, w_{2}, \\dots, w_{r})},\n\\end{equation}\nwhere $ \\mathrm{Supp}(e) = \\{ w_{1}, w_{2}, \\dots, w_{r} \\} \\subset W $.\n\\end{theorem}\n\n\\end{document}\n", "meta": {"hexsha": "8975bb6f9d517ddd69daa0df32ca3334eb0a0fbe", "size": 6552, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendix.tex", "max_stars_repo_name": 
"jiyucho9145/bfcm", "max_stars_repo_head_hexsha": "448597b9978adb973cf73a674e0ec8c3e2a1de2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "appendix.tex", "max_issues_repo_name": "jiyucho9145/bfcm", "max_issues_repo_head_hexsha": "448597b9978adb973cf73a674e0ec8c3e2a1de2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendix.tex", "max_forks_repo_name": "jiyucho9145/bfcm", "max_forks_repo_head_hexsha": "448597b9978adb973cf73a674e0ec8c3e2a1de2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.4162162162, "max_line_length": 134, "alphanum_fraction": 0.6892551893, "num_tokens": 2209, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.589845504857596}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{August 29, 2014}\n\\maketitle\n\ngiven a set S contained in R bounded abouve, the supremum of S or least upper bound is a real number L such that\n\\begin{enumerate}\n\\item\nfor all x in S, $x\\le L$\n\\item\nif there is a number M such that $x\\le M$ for all $x\\in S$ then $L\\le M$\n\\end{enumerate}\n\n\\section*{approximation property of the supremum theorem}\nlet S be a subset of R, $S\\ne 0$ bounded above, Let $b=\\text{sup}(S)$ then for all $a<b,\\exists x\\in S$ such that $a<x\\le b$\n\\subsection*{proof}\nif there is no x in S such that $a<x\\le b$ then a is an upper bound for S contradicting that b is the least upper bound$\\Box$\n\\section*{2.3.3 Least upper bound principle}\ntenth axiom from yesterday\n\nevery non-empty set of real numbers that is bounded above has a supremum\n\\subsection*{proof}\nrequired because book defines real numbers as decimal expansions, not axiomatic definition\n\n\\begin{enumerate}\n\\item\nobservation: this is equivalent to proving that any non-empty set of reals that is bbounded below has an infimum. Why? homework: let S be a set of Reals, let -S=$\\{-x:x\\in S\\}$. Then you will have to prove that sup(-S)=-(inf S)\n\nso we will prove infimum statement\n\\end{enumerate}\n\nS is bounded below. let $m$ be a lower bound. $m=m_0.m_1m_2m_3m_4m_5m_5...$ wher $m_0\\in\\mathbb{Z}, m_0>0$ without loss of generality and $m_i, i>0$ is 0-9 digit. clearly $m_0$ is also a lower bound for S.\n\nConsider all integers that are lower bounds for S, (there is at least $m_0$). Take the biggest of such integers ($n_0)$.\n\n$n_0$ is a lower bound for S, but $n_0+1$ is not. we build the infimum with $n_0.\\_\\_\\_\\_$. Now pick the gretest ineger $n_1$ such that $n_0+\\frac{n_1}{10}$ is a lower bound for S. Since $n_0$ is a lower bound, $0\\le n_1$.  Since $n_0+1$ is not a lower bound, $n_1<10$\n\nNow pick the reatest integer $n_2$ such that $n_0+\\frac{n_1}{10}+\\frac{n_2}{100}$ is still a lower bound for S. Claim $n_0.n_1n_2n_3n_4...$ is inf(S)$\\Box$\n\n\\section*{properties of the supremum}\nlet $A,B$ be subset of $\\mathbb{R}$, nonempty, let $C=\\{a+b:a\\in A, b\\in B\\}$ if $A,B$ have a supremum, then so does $A+B$ and sup($A+B$)=sup$A$+sup$B$\n\\subsubsection*{proof}\nlet $z\\in C$, then $z=a+b$, where $a\\in A$, $b\\in B$\n\nlet $L_1=\\text{sup}A, L_2=\\text{sup}B$ then $a\\le L_1, b\\le L_2$ and then $z\\le L_1+L_2$ for all $z\\in C$. This shows that $L_1+L_2$ is an upper bound for $C$. choose $\\epsilon>0$ and $x\\in A, y\\in B$ such that $L_1-\\epsilon<x, L_2-\\epsilon$  by important property  of sup.\n\n$L_1+L_2-2\\epsilon<x+x\\le L_1+L_2$, $x+y\\in C$, since for all tilde $\\epsilon>0$ there exists $z\\in C$ such that $L_1+L_2-\\epsilon<z\\le L_1+L_2$ so $L_1+L_2$ is the supremum of C\n\\subsection*{2}\nLet $S,T$ be subsets of R, nonempty, bounded above. if for all s in S and t in T, s is less than or equal to t then supremum of S is less than or equal to supremum of T (exercise)\n\n\\subsubsection*{proposition}\n$\\mathbb{Z}^+$ is unbounded above.\n\\subsubsection*{proof}\n$\\mathbb{Z}^+$ is a subset of R nonempty, if Z+ were bounded above it would have a supremum m. 
by the important property of the supremum there exists some x in Z+ such that m-1 is less than x is less than or equal to m, but then m is less than x+1 which is in Z+ so we have a contradiction\n\\subsubsection*{corollary}\nfor all x in R there exists an n in Z+ such that x is less than or equal to n.\n\\subsubsection*{proposition}\nArchimedean property of R. page 12\n\nfor all x greater than 0, y in R there exists some n in Z+ such that nx is greater than y.\n\\subsubsection*{proof}\napply the previous corollary, with x replaced by $\\frac{y}{x}\\Box$\n\\subsection*{definition of absolute value}\nalso on page 12\n$\\abs{x}=\\begin{cases}x, & x\\ge0\\\\ -x, & x<0\\end{cases}$\n\\subsubsection*{properties}\n\\begin{enumerate}\n\\item\nif $a\\ge0, \\abs{x}\\le a$ iff $-a\\le x\\le a$\n\\item\nfor all $x,y\\in \\mathbb{R}, \\abs{x+y}\\le\\abs{x}+\\abs{y}$ (triangle inequality)\n\\item\nsame as above but with more than two numbers\n\\item\nreverse triangle $\\abs{\\abs{a}-\\abs{b}}\\le \\abs{a-b}$\n\\item\n$\\abs{xy}=\\abs{x}\\abs{y}$, $\\abs{x^{-1}}=\\abs{x}^{-1}$\n\\end{enumerate}\n\\subsubsection*{proof 1}\nassume $\\left\\lvert x\\right\\rvert\\le a$\n\ncases\n\\begin{enumerate}\n\\item\n$x\\ge 0$ then $\\abs{x}=x$ so $0\\le x\\le a$ since $x\\ge 0$ and $-a\\le0$, $-a\\le x$ so $-a\\le x\\le a$\n\\item\n$x<0$ then $\\left\\lvert x\\right\\rvert=-x\\le a$ so $x\\ge -a$ since $x<0$ and $a\\ge0$, $x\\le a$ so $-a\\le x\\le a$$\\Box$\n\\end{enumerate}\non the way back\n\nassume $-a\\le x\\le a$. if $x\\ge 0$ then $x=\\left\\lvert x\\right\\rvert$, hence $-a\\le\\abs{x}\\le a$; in particular $\\abs{x}\\le a$\n\nif $x<0$\n\n\\section*{Cauchy-Schwarz inequality}\nfor every $a_k,b_k\\in\\mathbb{R}$\n\\begin{align*}\n  \\left(\\sum\\limits_{k=1}^n{a_kb_k}\\right)^2\\le\\left(\\sum\\limits_{k=1}^n{{a_k}^2}\\right)\\left(\\sum\\limits_{k=1}^n{{b_k}^2}\\right)\n\\end{align*}\nwith equality iff $\\exists x \\in \\mathbb{R}$ such that $a_kx+b_k=0$ for all $k=1,...,n$\n\\subsubsection*{proof}\n\\begin{align*}\n  \\sum\\limits_{k=1}^n{\\left(a_kx+b_k\\right)^2}\\ge 0, \\forall x \\in \\mathbb{R}\n\\end{align*}\n$Ax^2+Bx+C\\ge0$, $A=\\sum\\limits_{k=1}^n{a_k^2}$\n\\end{document}\n", "meta": {"hexsha": "70107b53e2e32dd93358e67fd810073f67da1eaa", "size": 5280, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-08-29.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-08-29.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-08-29.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1282051282, "max_line_length": 285, "alphanum_fraction": 0.690530303, "num_tokens": 1990, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5898454922502515}}
{"text": "\\chapter{Calculation Packages}\n\nBroadwick supports several simulation and fitting models. In this chapter we will give an outline of these methods and how they can be simulated (and combing) using the Broadwick framework. We will, for the most part, dispense with the theory and concentrate on how the methods are implemented in Broadwick. The code snippets in this chapter are taken from the examples that are distributed with the Broadwick source code.\n\n\n\n\\section{Markov Chains}\\index{Markov Chains}\n\nA Markov chain is a random sequence of states where the current state depends solely on the previous state. In this sense, it is a ``memoryless'' process\\index{Memoryless process} as the transition from one state to the next does not depend on the sequence of states that preceeded it.\n\nA Markov Chain can be implemented in Broadwick using the MonteCarloStep\\index{Class!MonteCarloStep} and MarkovChain\\index{Class!MarkovChain} classes. The MonteCarloStep encapsulates the functionality of a state by maintaining a collection of the coordinates defining the state as a java.util.Map<String,Double> (i.e. the name and value of the state). A MarkovChain object is constructed using a MonteCarloStep object as an initial point and, optionally, a MarkovProposalFunction\\index{Class!MarkovProposalFunction} for generating the next step. The generateNextStep\\index{method!generateNextStep} method uses the proposal function to generate the next step in the chain as the following code snippet demonstrates,\n\n\\begin{lstlisting}\n\nfinal Map<String, Double> coordinates = new LinkedHashMap<>();\n        {\n            coordinates.put(\"x\", 0.0);\n            coordinates.put(\"y\", 0.0);\n        }\nfinal MonteCarloStep initialStep = new MonteCarloStep(coordinates);\n\nfinal MarkovChain mc = new MarkovChain(initialStep);\nfor (int i = 0; i < chainLength; i++) {\n    final MonteCarloStep nextStep = mc.generateNextStep(mc.getCurrentStep());\n    mc.setCurrentStep(nextStep);\n\n    log.trace(\"{}\", nextStep.toString());\n}\n\\end{lstlisting}\n\nBy default, a MarkovNormalProposal\\index{Class!MarkovNormalProposal} object is used to generate the next step by sampling from a Normal distribution\\index{Normal distribution} centered on the current coordinate and with a standard deviation of 1. New proposal classes implement the MarkovProposalFunction\\index{Class!MarkovProposalFunction} interface and using this object when creating the Markov Chain.\n\n\\begin{lstlisting}\n    final MarkovProposalFunction myProposer = new MarkovProposalFunction();\n    final MarkovChain mc = new MarkovChain(initialStep, myProposer);\n\\end{lstlisting}\n\n\n\\section{Monte Carlo Simulation}\\index{Monte Carlo Simulation}\nMonte Carlo simulation is a broad class of methods that reply on repeated simulation of [random] processes to derive numerical results. Broadwick uses the MonteCarloScenario\\index{Class!MonteCarloScenario} abstract class to encapsulate the process to be simulated, which is used by the MonteCarlo\\index{Class!MonteCarlo} class to implement the Monte Carlo process.\n\nInternally, the MonteCarlo class uses a producer-comsumer pattern\\index{pattern!producer-consumer} by creating a ThreadFactory\\index{Class!ThreadFactory} to spawn several simulation processes which are in turn consumed by a MonteCarloResults\\index{Class!MonteCarloResults} object. 
This MonteCarloResults object takes the results of each simulation (the producer threads) and calculates the required statistics.\n\nThese classes (MonteCarloScenario, MonteCarloResults) are extended for each implementation. By way of example, we will calculate $\\pi$\\index{$\\pi$} by throwing darts randomly at a square dartboard and calculating the fraction that fall within the unit circle encompassed by the square.\n\nFirst, we implement the run\\index{method!run} method of our class that extends the MonteCarloScenario\\index{Class!MonteCarloScenario} class (this class has an internal random number generator, rng, to generate random numbers).\n\\begin{lstlisting}\n    @Override\n    public MonteCarloResults run() {\n        final MyResultsConsumer results = new MyResultsConsumer();\n\n        final double x = rng.getDouble(-1, 1);\n        final double y = rng.getDouble(-1, 1);\n\n        final double r = Math.sqrt(x * x + y * y);\n        if (r < 1) {\n            results.addHit();\n        } else {\n            results.addMiss();\n        }\n        return results;\n    }\n\\end{lstlisting}\n\nThe MonteCarloResults\\index{Class!MonteCarloResults} class is responsible for both storing the results of each simulation and for acting as a consumer\nobject that maintains a collection of the results returned by the producers.\n\n\\begin{lstlisting}\n    class MyResultsConsumer implements MonteCarloResults {\n\n    @Override\n    public double getExpectedValue() {\n        return hits.getSum() / (hits.getSum() + misses.getSum());\n    }\n\n    @Override\n    public Samples getSamples() {\n        return hits;\n    }\n\n    @Override\n    public String toCsv() {\n        return String.format(\"\\%d ;  \\%d\", hits.getSize(), misses.getSize());\n    }\n\n    @Override\n    public MonteCarloResults join(final MonteCarloResults results) {\n        // This is where the results of the producers are dealt with.\n        final MyResultsConsumer r = (MyResultsConsumer) results;\n        this.hits.add(r.hits);\n        this.misses.add(r.misses);\n\n        return this;\n    }\n\n    public void addHit() {\n        hits.add(1);\n    }\n\n    public void addMiss() {\n        misses.add(1);\n    }\n    \n    @Override\n    public void reset() {\n    }\n    \n    @Getter\n    private final Samples hits = new Samples();\n    @Getter\n    private final Samples misses = new Samples();\n}\n\\end{lstlisting}\n\nThese two classes are utilised thus:\n\\begin{lstlisting}\n    MonteCarlo mc = new MonteCarlo(new Simulation(), 1000);\n    mc.setResultsConsumer(new MyResultsConsumer());\n    mc.run();\n\n    final MyResultsConsumer results = (MyResultsConsumer) mc.getResults();\n    log.info(\"Hits : Misses = {}\", results.toCsv());\n    log.info(\"Estimation of Pi = {}\", 4 * results.getExpectedValue());\n\\end{lstlisting}\n\n\\subsection{Markov Chain Monte Carlo}\\index{Markov Chain Monte Carlo}\nMarkov chains can be combined with Monte Carlo simulation\\index{Monte carlo Simulation} to explore a parameter space by using the Markov chain to `walk' through the parameter space while Monte Carlo simulation is used to determine the state of the system at each step in the walk. Thus, Markov chain Monte Carlo (MCMC) methods can be used to sample from a probability distribution by using the Markov chain (coupled with a rejection function to accept or reject proposed steps) to find the desired distribution. 
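\n\nFor reference, the acceptance rule used by the Metropolis-Hastings algorithm in its standard textbook form (the notation here is generic and is not a Broadwick-specific API): a proposed step from $x$ to $x'$ is accepted with probability\n\\begin{equation}\n\\alpha(x \\rightarrow x') = \\min\\left(1, \\frac{\\pi(x')\\,q(x|x')}{\\pi(x)\\,q(x'|x)}\\right)\n\\end{equation}\nwhere $\\pi$ is the target distribution and $q$ is the proposal density. For a symmetric proposal this reduces to the Metropolis rule $\\min\\left(1, \\pi(x')/\\pi(x)\\right)$; comparing $\\log(u)$ for uniform $u$ against a log-likelihood ratio, as in the acceptor example below, is the same rule written with log-likelihoods.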
\n\nA MarkovChainMonteCarlo\\index{Class!MarkovChainMonteCarlo} object will create a Markov chain and run the Monte Carlo simulation at each step using the Metropolis-Hastings algorithm\\index{algorithm!Metropolis-Hastings} (the MetropolisHastings\\index{Class!MetropolisHastings} class) to accept successive steps based on the results of the Monte Carlo simulation at the given step.\n\nA MarkovChainObserver\\index{Class!MarkovChainObserver} object can be added to the MarkovChainMonteCarlo\\index{Class!MarkovChainMonteCarlo} object which will be informed when a step has been completed. This observer can be used to save the results of the simulation.\n\nBy default a Metropolis-Hastings acceptor is used, but it is easy to implement an alternative (in the following example we will use the Metropolis algorithm with a log-likelihood).\n\n\\begin{lstlisting}\n    final MonteCarloStep step = new MonteCarloStep(<initial step>);\n    final MarkovChainMonteCarlo mcmc = new MarkovChainMonteCarlo(\n                    myModel, <numScenarios>,\n                    myMonteCarloScenarioResults,\n                    myMarkovStepGenerator);\n\n    mcmc.setAcceptor(new MonteCarloAcceptor() {\n                @Override\n                public boolean accept(final MonteCarloResults oldResult, final MonteCarloResults newResult) {\n                    final double ratio = newResult.getExpectedValue() - oldResult.getExpectedValue();\n                    return Math.log(generator.getDouble()) < ratio / smoothingRatio;\n                }\n            });\n\n    MyMarkovChainObserver myMcmcObserver = new MyMarkovChainObserver(mcmc);\n    mcmc.addObserver(myMcmcObserver);\n\n    mcmc.run();\n\\end{lstlisting}\n\nWe can attach a MarkovChainObserver\\index{Class!MarkovChainObserver} to the MarkovChainMonteCarlo object that observes\\index{pattern!observer} the state of the chain (at each step the chain object informs the observer of its state which can be used, e.g., to save it to file).\n\n\\begin{lstlisting}\n    final MarkovChainObserver observer = new MyMCObserver();\n    mcmc.addObserver(observer);\n\\end{lstlisting}\n\n\\section{Approximate Bayesian Computation (ABC)}\\index{Approximate Bayesian Computation (ABC)}\nApproximate Bayesian computation (ABC) is a class of computational methods based on Bayesian Statistics\\index{Bayesian Statistics}~\\cite{Toni2007,Marin2011,Marjoram2003,Wegmann2009}. It bypasses the evaluation of a likelihood function \\index{Likelihood function}, which is often computationally expensive or even impossible.\n\nAt the heart of the ABC method is the `distance function' \\index{distance function} which is a measure of how close the proposed sample is to the desired (posterior) distribution. The distance function in Broadwick is specified by overriding the AbcDistance class\\index{Class!AbcDistance} and adding it to an ApproxBayesianComp object\\index{Class!ApproxBayesianComp}; by default a simple absolute value of the difference between the proposed and observed value is used (as can be seen in the following example).\n\nAn AbcController object\\index{Class!AbcController} is used to control the ABC process (i.e. to determine when the calculation should end).\n\nThe ApproxBayesianComp class runs the Bayesian computation. 
It is constructed using observed data (in the form of an AbcNamedQuantity object\\index{Class!AbcNamedQuantity}, which is a simple name-value map), an AbcModel object\\index{Class!AbcModel}, an AbcPriorsSampler object\\index{Class!AbcPriorsSampler} (which specifies how samples are drawn from a prior distribution) and a sensitivity.\n\nAs a very simple example, we will sample from a posterior that is normally distributed around $\\pi$ (assuming a standard deviation of 1.0). Using a uniform prior (for the mean of the posterior distribution) in [3,4] we will sample from this using a simple abs() function as the distance function. We should observe a posterior distribution [normally] distributed around $\\pi$.\n\n\\begin{lstlisting}\n        // first set up the observed data (the mean value of my unknown distribution).\n        final Map<String, Double> observed = new LinkedHashMap<>();\n        observed.put(\"value\", 3.142);\n        final AbcNamedQuantity observedData = new AbcNamedQuantity(observed);\n\n        // Next create a simple controller (we will sample 20000 points from the prior distribution).\n        final AbcController controller = new AbcController() {\n            @Override\n            public boolean goOn(final ApproxBayesianComp abc) {\n                return abc.getNumSamplesTaken() <= 20000;\n            }\n        };\n\n        // Create a dummy model to run\n        final AbcModel myModel = new AbcModel() {\n            @Override\n            public AbcNamedQuantity run(final AbcNamedQuantity parameters) {\n                // this is a trivially simple model. Instead of doing any calculations we just return the parameters.\n                return parameters;\n            }\n        };\n\n        // Create a method for sampling our priors.\n        final AbcPriorsSampler priors = new AbcPriorsSampler() {\n            @Override\n            public AbcNamedQuantity sample() {\n                // Another dummy method here, we uniformly sample 'value' in the range [3,4]\n                final LinkedHashMap<String, Double> sample = new LinkedHashMap<>();\n                sample.put(\"value\", generator.getDouble(3.0, 4.0));\n                return new AbcNamedQuantity(sample);\n            }\n            private final RNG generator = new RNG(RNG.Generator.Well19937c);\n        };\n\n        final ApproxBayesianComp abc = new ApproxBayesianComp(observedData, myModel, priors, 0.05);\n        abc.setController(controller);\n        abc.run();\n\\end{lstlisting}\n\n\\section{Ordinary Differential Equations (ODEs)}\\index{Ordinary Differential Equations (ODEs)}\nOrdinary differential equations can be solved in Broadwick using the $4^\\mathrm{th}$ order Runge-Kutta method\\index{Runge-Kutta method}. The RungeKutta4 object\\index{Class!RungeKutta4} is constructed using an Ode object\\index{Class!Ode} (which specifies the ODEs by implementing methods to specify the initial values and derivatives of each variable), start and end times and the step size.\n\nAn observer can be attached to keep track of the results generated by the solver and a controller can be used to ensure that the calculation stops (if, for example, you specify a negative step size resulting in infinite loops or obtain a negative value for a population size). 
\n\n\\begin{lstlisting}\nfinal Ode myOde = new SirModel(beta, rho, s0, i0, r0);\nfinal OdeSolver solver = new RungeKutta4(myOde, tStart, tEnd, stepSize);\n\n// we will need an observer to 'observe' our simulation and record the simulation states.\nsolver.getObservers().clear();\nfinal MyObserver observer = new MyObserver(solver, outputFile);\nsolver.addObserver(observer);\n\n// Create a simple controller object to tell the simulator when to stop.\nfinal OdeController controller = new MyOdeController(tEnd);\nsolver.setController(controller);\n\nsolver.run();\n\\end{lstlisting}\n\nWe can also add triggered events\\index{triggered events} to the solver. These events occur at predetermined times and can be used to model, e.g. immigration events, vaccination or culling strategies in a population.\n\\begin{lstlisting}\n// Register theta events, these are fixed events that will be triggered at set times.\nsolver.registerNewTheta(observer, 20.0, new MyThetaEvent(solver));\n\\end{lstlisting}\n\n", "meta": {"hexsha": "3749a680241130869523d374a55514434694a296", "size": 14022, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/chap_algorithms.tex", "max_stars_repo_name": "EPICScotland/Broadwick", "max_stars_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-01-13T18:05:25.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-02T13:18:33.000Z", "max_issues_repo_path": "doc/chap_algorithms.tex", "max_issues_repo_name": "EPICScotland/Broadwick", "max_issues_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-08-13T18:32:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:14:26.000Z", "max_forks_repo_path": "doc/chap_algorithms.tex", "max_forks_repo_name": "EPICScotland/Broadwick", "max_forks_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1825726141, "max_line_length": 713, "alphanum_fraction": 0.7454000856, "num_tokens": 3177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5898454914699067}}
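For reference, the classical fourth-order Runge-Kutta update that the RungeKutta4 solver's name refers to (standard textbook form, sketched here under the assumption of a fixed step size $h$; Broadwick's internal implementation is not shown): for $y'=f(t,y)$,\n\\begin{align*}\nk_1 &= f(t_n, y_n)\\\\\nk_2 &= f(t_n + h/2, y_n + hk_1/2)\\\\\nk_3 &= f(t_n + h/2, y_n + hk_2/2)\\\\\nk_4 &= f(t_n + h, y_n + hk_3)\\\\\ny_{n+1} &= y_n + \\frac{h}{6}\\left(k_1 + 2k_2 + 2k_3 + k_4\\right)\n\\end{align*}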
{"text": "\\chapter{Unit 1}\n\\section{Plane Curves \\& Parametrics}\n\\subsection{Definition}\nTypically, functions come in rectangular form, meaning $x$ is the independent\nvariable, and $y$ is dependent upon it. These functions look recognizable:\n$y=\\ln(x)$.\n\nTo track more things, or place a function into a seperate independent variable,\nparametrics are used. When using parametrics, one function is given for every\ngraphed dimension. For example: $x=f(t)$, $y=f(t)$ or $x=f(\\theta)$,\n$y=f(\\theta)$.\n\nIn the above example, $x(t)$ and $y(t)$ are the parametric equations and $t$ is\nthe parameter. The set of points $(x, y)$ obtained as $t$ varries over the\ninterval on which it is defined, is the graph of the parametric equation.\n\n\\subsection{De-parameterizing}\n\\begin{description}\n  \\item[Table] A table of values can be created where $t$ varries independently,\n    and $(x(t), y(t))$ are the output values.\n  \\item[Algebraic Simplification] $x(t)$ can be simplified in terms of $y$ and\n    substituted back into $y(t)$ to obtain a function.\n  \\item[Trigonometric Simplification] Trigonometric identities (such as\n    $\\sin^2\\theta + \\cos^2\\theta = 1$, etc.) may be used to simplify the\n    parameter functions.\n\\end{description}\n\n\\subsection{Parameterizing}\nTo obtain the parametric curve of the rectangular equation, the\nfollowing process is implemented:\n\n\\begin{enumerate}\n  \\item Let $x(t) = t$, and write $y(t)$ in terms of $x(t)$.\n  \\item Write both $x$ and $y$ in terms of $t$ and $m=\\frac{dy}{dx}$.\n\\end{enumerate}\n\nAn example is shown below for the curve $y=1-x^2$:\n\n\\begin{align*}\n  x(t) &= t\\\\\n  y(t) &= 1-x^2\\\\\n       &= 1-t^2\\\\\\\\\n  m &= \\frac{dy}{dx}=-2x\\\\\\\\\n  x &= \\frac{-m}{2}\\\\\\\\\n  y &= 1-x^2\\\\\n    &= 1-(\\frac{-m}{2})^2\\\\\n    &= 1-\\frac{m^2}{4}\n\\end{align*}\n\n\\section{Parametric Equations and Calculus}\n\\subsection{The Derivative}\nSuppose a parametric equation is given $x=f(t)$ and $y=g(t)$. 
The slope of this\ncurve is given by:\n\\begin{equation}\n  \\frac{dy}{dx}=\\frac{\\frac{dy}{dt}}{\\frac{dx}{dt}}\n\\end{equation}\n\nThe second derivative of a parametric curve is therefore:\n\\begin{equation}\\begin{aligned}\n  \\frac{d^2y}{dx^2} &=\n  \\frac{d}{dx}\\left(\\frac{dy}{dx}\\right)\\\\\n  &= \\frac{\\frac{d}{dt}\\left(\\frac{dy}{dx}\\right)}{\\frac{dx}{dt}}\n\\end{aligned}\\end{equation}\n\n\\subsection{Arc Length}\nRecall that the formula for arc length of a curve $h(x)$ over $[x_0, x_1]$ is:\n\\begin{equation}\n  S = \\int_{x_0}^{x_1} \\sqrt{1+\\left[h'(x)\\right]^2} dx\n\\end{equation}\n\nSubstituting $dx=\\frac{dx}{dt}dt$ (and assuming $\\frac{dx}{dt}>0$ over the\ncorresponding interval $[t_0, t_1]$) makes the arc length of a parametric curve\nthe following:\n\\begin{align*}\n  S &= \\int_{x_0}^{x_1} \\sqrt{1+\\left[\\frac{dy}{dx}\\right]^2} dx\\\\\n    &= \\int_{t_0}^{t_1} \\sqrt{1+\\left[\\frac{dy/dt}{dx/dt}\\right]^2} \\frac{dx}{dt} dt\\\\\n    &= \\int_{t_0}^{t_1} \\sqrt{\\frac{(dx/dt)^2+(dy/dt)^2}{(dx/dt)^2}} \\frac{dx}{dt} dt\\\\\n    &= \\int_{t_0}^{t_1}\n  \\sqrt{\\left(\\frac{dx}{dt}\\right)^2+\\left(\\frac{dy}{dt}\\right)^2} dt\\\\\n    &= \\int_{t_0}^{t_1} \\sqrt{[f'(t)]^2+[g'(t)]^2} dt\\\\\n\\end{align*}\n\n\\subsection{Areas of Surfaces of Revolution}\nThe surface area of revolution about the $x$-axis is given by:\n\\begin{equation}\n  S=2\\pi\\int_a^b\n  g(t)\\sqrt{\\left(\\frac{dx}{dt}\\right)^2+\\left(\\frac{dy}{dt}\\right)^2} dt\n\\end{equation}\n\nand the surface area about the $y$-axis is given by:\n\\begin{equation}\n  S=2\\pi\\int_a^b\n  f(t)\\sqrt{\\left(\\frac{dx}{dt}\\right)^2+\\left(\\frac{dy}{dt}\\right)^2} dt\n\\end{equation}\n\n\\section{Vectors in the Plane}\nVectors are directed line segments that have no fixed origin. Vectors have both\nmagnitude and direction, such that a vector is the same no matter where it\nstarts. Vectors are denoted as follows:\n$$\\vec{v}=\\vec{PQ}$$\n\n\\subsection{Magnitude}\nThe magnitude of a vector $\\vec{v}$ with the components $\\langle x,y\\rangle$ may\nbe calculated as follows:\n\\begin{equation}\n  ||\\vec{v}||=\\sqrt{x^2+y^2}\n\\end{equation}\n\n\\subsection{Operations}\nLet $\\vec{u}=\\langle u_1,u_2\\rangle$ and $\\vec{v}=\\langle v_1,v_2\\rangle$.\n\\begin{enumerate}\n  \\item The vector sum of $\\vec{u}$ and $\\vec{v}$ is $\\langle u_1+v_1, u_2+v_2\n    \\rangle$.\n  \\item The scalar multiple of $c$ and $\\vec{v}$ is $\\langle cv_1, cv_2\n    \\rangle$.\n  \\item The negative of $\\vec{v}$ is $\\langle -v_1, -v_2 \\rangle$.\n  \\item The difference of $\\vec{u}$ and $\\vec{v}$ is $\\langle u_1-v_1, u_2-v_2\n    \\rangle$.\n\\end{enumerate}\n\nTo take the unit vector of a vector $\\vec{v}$ in its direction, the\nfollowing is calculated:\n\\begin{equation}\n  \\vec{u}=\\frac{\\vec{v}}{||\\vec{v}||}=\\frac{1}{||\\vec{v}||}\\vec{v}\n\\end{equation}\n\nThere are three standard unit vectors in $\\mathbb{R}^3$ space:\n\\begin{description}\n  \\item[$\\hat{i}$] = $\\langle 1,0,0 \\rangle$\n  \\item[$\\hat{j}$] = $\\langle 0,1,0 \\rangle$\n  \\item[$\\hat{k}$] = $\\langle 0,0,1 \\rangle$\n\\end{description}\n\n\\subsection{Space Coordinates and Vectors in Space}\nPoints in the $\\mathbb{R}^2$ plane have coordinates of the form $(x,y)$. Points\nin $\\mathbb{R}^3$ space have coordinates of the form $(x,y,z)$.\n\nThe distance between two such points takes the following form:\n\\begin{equation}\n  d=\\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}\n\\end{equation}\n\nVectors in $\\mathbb{R}^3$ take the form $\\langle x,y,z \\rangle$. 
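For example, the vector $\\vec{v}=\\langle 1,2,2 \\rangle$ has magnitude $||\\vec{v}||=\\sqrt{1^2+2^2+2^2}=3$, and $\\frac{1}{3}\\langle 1,2,2 \\rangle$ is the unit vector in its direction. 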
All of the same\nproperties of vectors defined in $\\mathbb{R}^2$ space apply here as well.\n\n\\section{The Dot Product}\nThe dot product determines information about the angle between two vectors. The\ndot product is calculated between two vectors $\\vec{u}=\\langle u_1,u_2\n\\rangle$ and $\\vec{v}=\\langle v_1,v_2 \\rangle$ as follows:\n\n\\begin{equation}\n  \\vec{u} \\cdot \\vec{v} = u_1v_1 + u_2v_2\n\\end{equation}\n\n\\subsection{The Angle Between Two Vectors}\nThe dot product gives information about the angle between two non-zero vectors.\nThe following theorem is presented:\n\\begin{equation}\n  \\cos\\theta = \\frac{\\vec{u}\\cdot\\vec{v}}{||\\vec{u}||||\\vec{v}||}\n\\end{equation}\n\n\\subsection{Direction Cosines}\nThe dot product can also be used to determine the angles between a vector in\n$\\mathbb{R}^3$ space and the coordinate axes. For example, for the vector\n$\\vec{v} = \\langle v_1,v_2,v_3 \\rangle$, we can determine the angle between the\nvector and each coordinate axis.\n\nThe process is outlined below:\n\\begin{enumerate}\n  \\item Take the dot-product of the vector and the standard unit vector along\n    the axis of interest (i.e., $\\hat{i}$, $\\hat{j}$, or $\\hat{k}$).\n  \\item Apply the definition of the dot-product to determine the direction\n    cosine.\n\\end{enumerate}\n\n\\begin{align}\n  \\cos\\alpha &= \\frac{v_1}{||\\vec{v}||}\\\\\n  \\cos\\beta  &= \\frac{v_2}{||\\vec{v}||}\\\\\n  \\cos\\gamma &= \\frac{v_3}{||\\vec{v}||}\n\\end{align}\n\n\\subsection{Projections}\nTo project $\\vec{u}$ onto $\\vec{v}$ is to take the component of $\\vec{u}$ in the\ndirection of $\\vec{v}$. The notation is as follows:\n\n\\begin{equation}\n  \\text{proj}_{\\vec{v}}\\vec{u}=\n  \\left(\\frac{\\vec{u}\\cdot\\vec{v}}{||\\vec{v}||^2}\\right)\\vec{v}\n\\end{equation}\n\n\\subsubsection{Work}\n\\begin{align}\n  W &= ||\\text{proj}_{\\vec{PQ}}\\vec{F}||||\\vec{PQ}|| \\\\\n    &= \\vec{F} \\cdot \\vec{PQ}\n\\end{align}\n\n
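For example, a constant force $\\vec{F}=\\langle 3,4 \\rangle$ applied along the\ndisplacement $\\vec{PQ}=\\langle 2,0 \\rangle$ does work\n$W=\\vec{F}\\cdot\\vec{PQ}=(3)(2)+(4)(0)=6$.\n\n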
\\section{The Cross Product}\nLet $\\vec{u}=\\langle u_1,u_2,u_3 \\rangle$ and $\\vec{v}=\\langle v_1,v_2,v_3\n\\rangle$. The cross product of $\\vec{u}$ and $\\vec{v}$ is said to be:\n\\begin{equation}\n  \\vec{u}\\times\\vec{v} = (u_2v_3-u_3v_2)\\hat{i} - (u_1v_3-u_3v_1)\\hat{j} +\n  (u_1v_2-u_2v_1)\\hat{k}\n\\end{equation}\n\nAnother way to calculate the cross product is with matrices:\n\n\\begin{equation}\n  \\begin{aligned}\n    \\vec{u}\\times\\vec{v} &= \\begin{vmatrix}\n      \\hat{i} & \\hat{j} & \\hat{k} \\\\\n      u_1     & u_2     & u_3 \\\\\n      v_1     & v_2     & v_3\n    \\end{vmatrix} \\\\\n  &= \\begin{vmatrix}\n    u_2 & u_3 \\\\\n    v_2 & v_3 \\\\\n  \\end{vmatrix}\\hat{i} -\n  \\begin{vmatrix}\n    u_1 & u_3 \\\\\n    v_1 & v_3 \\\\\n  \\end{vmatrix}\\hat{j} +\n  \\begin{vmatrix}\n    u_1 & u_2 \\\\\n    v_1 & v_2 \\\\\n  \\end{vmatrix}\\hat{k}\\\\\n  &= (u_2v_3-u_3v_2)\\hat{i} - (u_1v_3-u_3v_1)\\hat{j} + (u_1v_2-u_2v_1)\\hat{k}\n  \\end{aligned}\n\\end{equation}\n\n\\subsection{Algebraic Properties}\n\\begin{enumerate}\n  \\item $\\vec{u}\\times\\vec{v}=-(\\vec{v}\\times\\vec{u})$\n  \\item\n    $\\vec{u}\\times(\\vec{v}+\\vec{w})=(\\vec{u}\\times\\vec{v})+\n    (\\vec{u}\\times\\vec{w})$\n  \\item\n    $c(\\vec{u}\\times\\vec{v})=(c\\vec{u})\\times\\vec{v}=\\vec{u}\\times(c\\vec{v})$\n  \\item $\\vec{u}\\times\\vec{0}=\\vec{0}\\times\\vec{u}=\\vec{0}$\n  \\item $\\vec{u}\\times\\vec{u}=\\vec{0}$\n  \\item $\\vec{u}\\cdot(\\vec{v}\\times\\vec{w})=(\\vec{u}\\times\\vec{v})\\cdot\\vec{w}$\n\\end{enumerate}\n\n\\subsection{Geometric Properties}\n\\begin{enumerate}\n  \\item $\\vec{u}\\times\\vec{v}$ is orthogonal to both $\\vec{u}$ and $\\vec{v}$\n  \\item $||\\vec{u}\\times\\vec{v}||=||\\vec{u}||||\\vec{v}||\\sin\\theta$\n  \\item $\\vec{u}\\times\\vec{v}=\\vec{0}$ if and only if $\\vec{u}=c\\vec{v}$ for\n    some scalar $c$\n  \\item $||\\vec{u}\\times\\vec{v}||$ is the area of a parallelogram having\n    $\\vec{u}$ and $\\vec{v}$ as adjacent sides.\n\\end{enumerate}\n\n
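For example, for $\\vec{u}=\\langle 1,2,3 \\rangle$ and $\\vec{v}=\\langle 4,5,6 \\rangle$:\n$$\\vec{u}\\times\\vec{v} = (2\\cdot6-3\\cdot5)\\hat{i} - (1\\cdot6-3\\cdot4)\\hat{j} + (1\\cdot5-2\\cdot4)\\hat{k} = \\langle -3,6,-3 \\rangle$$\nwhich is easily checked to be orthogonal to both vectors, e.g.\n$\\langle -3,6,-3 \\rangle \\cdot \\langle 1,2,3 \\rangle = -3+12-9 = 0$.\n\n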
\\subsection{The Triple Scalar Product}\n\\begin{equation}\n  \\vec{u}\\cdot(\\vec{v}\\times\\vec{w})=\\begin{vmatrix}\n    u_1 & u_2 & u_3 \\\\\n    v_1 & v_2 & v_3 \\\\\n    w_1 & w_2 & w_3\n  \\end{vmatrix}\n\\end{equation}\n\n\\subsubsection{Geometric Properties}\nThe volume of a parallelepiped with adjacent edges $\\vec{u}$, $\\vec{v}$, and\n$\\vec{w}$ is defined as:\n\\begin{equation}\n  \\begin{aligned}\n    V &= |\\vec{u}\\cdot(\\vec{v}\\times\\vec{w})| \\\\\n      &= ||\\text{proj}_{\\vec{v}\\times\\vec{w}}\\vec{u}||||\\vec{v}\\times\\vec{w}|| \\\\\n      &= \\left|\\frac\n        {\\vec{u}\\cdot(\\vec{v}\\times\\vec{w})}\n        {||\\vec{v}\\times\\vec{w}||}\\right|\n        ||\\vec{v}\\times\\vec{w}|| \\\\\n      &= |\\vec{u}\\cdot(\\vec{v}\\times\\vec{w})|\n  \\end{aligned}\n\\end{equation}\n\n\\section{Lines and Planes in Space}\n\\subsection{Lines}\nGiven a line containing the point $P(x_1,y_1,z_1)$ in space parallel to the\nvector $\\vec{v}=\\langle a,b,c \\rangle$, the vector $\\vec{v}$ is the direction\nvector and the numbers $a$, $b$, and $c$ are direction numbers. One can surmise\nthat a line in space $L$ contains all points $Q(x,y,z)$ for which\n$\\vec{PQ}=t\\vec{v}$ for some scalar $t$.\n\n\\begin{equation}\n  \\begin{aligned}\n    \\vec{PQ} &= \\langle x-x_1, y-y_1, z-z_1 \\rangle \\\\\n             &= \\langle at, bt, ct \\rangle \\\\\n             &= t\\vec{v}\n  \\end{aligned}\n\\end{equation}\n\nThe following parametric equations are generated:\n\\begin{align}\n  x=x_1+at \\\\\n  y=y_1+bt \\\\\n  z=z_1+ct\n\\end{align}\n\nand the symmetric equation of the line can be generated by eliminating the\nparameter:\n\n\\begin{equation}\n  \\frac{x-x_1}{a} = \\frac{y-y_1}{b} = \\frac{z-z_1}{c}\n\\end{equation}\n\n\\subsection{Planes}\nConsider a plane in space containing the points $P(x_1,y_1,z_1)$ and\n$Q(x_2,y_2,z_2)$. There is, therefore, some vector in the plane\n$\\vec{v}=\\vec{PQ}=\\langle x_2-x_1, y_2-y_1, z_2-z_1 \\rangle$, and some other vector $\\vec{n}$ such that\n$\\vec{v}\\cdot\\vec{n}=0$. Let $\\vec{n}=\\langle a,b,c \\rangle$.\n\nIt can be said that:\n\\begin{equation}\n  \\begin{aligned}\n    \\vec{n}\\cdot\\vec{PQ} &= 0\\\\\n    \\langle a,b,c \\rangle \\cdot \\langle x-x_1, y-y_1, z-z_1 \\rangle &= 0\\\\\n    a(x-x_1)+b(y-y_1)+c(z-z_1) &= 0\\\\\n    ax+by+cz+d &= 0\n  \\end{aligned}\n\\end{equation}\n\nTo generate the vector $\\vec{n}$, it is useful to generate two vectors in the\nplane $\\vec{u}$ and $\\vec{v}$ and then take their cross product to obtain\n$\\vec{n}$ such that $\\vec{u}\\times\\vec{v}=\\vec{n}$.\n\nThe dot-product can be used to determine the angle between two planes by using\ntheir normal vectors $\\vec{n_1}$ and $\\vec{n_2}$.\n\\begin{equation}\n  \\cos\\theta=\\frac{|\\vec{n_1}\\cdot\\vec{n_2}|}{||\\vec{n_1}||||\\vec{n_2}||}\n\\end{equation}\n\n\\subsubsection{Distance between a Point and a Plane}\nThe distance $D$ between a point $Q(x_0,y_0,z_0)$ (not in the plane) and a\nplane $ax+by+cz+d=0$ containing the point $P$ (with normal vector\n$\\vec{n}=\\langle a,b,c \\rangle$) is:\n\\begin{equation}\n  \\begin{aligned}\n    D &= ||\\text{proj}_{\\vec{n}}\\vec{PQ}|| \\\\\n      &= \\frac{|\\vec{PQ}\\cdot\\vec{n}|}{||\\vec{n}||} \\\\\n      &= \\frac{|ax_0+by_0+cz_0+d|}{\\sqrt{a^2+b^2+c^2}}\n  \\end{aligned}\n\\end{equation}\n\n\\subsubsection{Distance between a Point and a Line}\nThe distance $D$ between a point $Q$ and a line in space containing the point\n$P$, with direction vector $\\vec{u}$, is:\n\\begin{equation}\n  D = \\frac{||\\vec{PQ}\\times\\vec{u}||}{||\\vec{u}||}\n\\end{equation}\n\n\\section{Vector-Valued Functions}\nVector-valued functions are functions that produce vectors ($\\langle x,y,z\n\\rangle$ in $\\mathbb{R}^3$ space, $\\langle x,y \\rangle$ in $\\mathbb{R}^2$ space).\n\nVector-valued functions are denoted as follows:\n\n\\begin{equation}\n  \\vec{r}(t) = \\langle f(t), g(t), h(t) \\rangle\n\\end{equation}\n\nThese functions are drawn as normal curves, such that each point on the curve\n$P(x,y)$ represents a vector $\\vec{v} = \\langle x,y \\rangle$. 
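For instance, $\\vec{r}(t)=\\langle \\cos t, \\sin t \\rangle$ traces the unit\ncircle: at $t=0$ the curve passes through $(1,0)$, corresponding to the vector\n$\\langle 1,0 \\rangle$, and at $t=\\frac{\\pi}{2}$ it passes through $(0,1)$. 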
Just like normal\nparametric curves, they have direction or orientation.\n\nThe limits of these functions may be taken as follows:\n\\begin{equation}\n  \\lim_{t\\to{a}} \\vec{r}(t) =\n    \\left[\\lim_{t\\to{a}}f(t)\\right]\\hat{i} +\n    \\left[\\lim_{t\\to{a}}g(t)\\right]\\hat{j} +\n    \\left[\\lim_{t\\to{a}}h(t)\\right]\\hat{k}\n\\end{equation}\n\n\\section{Calculus of Vector-Valued Functions}\n\\subsection{Differentiation}\nThe derivative of a vector-valued function $\\vec{r}(t)$ is given as:\n\\begin{equation}\n  \\vec{r}'(t)=\\lim_{\\Delta{t}\\to{0}}\n    \\frac{\\vec{r}(t+\\Delta{t})-\\vec{r}(t)}{\\Delta{t}}\n\\end{equation}\n\nSuch that:\n\\begin{equation}\n  \\vec{r}'(t) = f'(t)\\hat{i} +\n                g'(t)\\hat{j} +\n                h'(t)\\hat{k}\n\\end{equation}\n\n\\subsection{Properties}\n\\begin{enumerate}\n  \\item $$\\frac{d}{dt}[\\vec{r}(t)\\cdot\\vec{u}(t)] = \\vec{r}(t)\\cdot\\vec{u}'(t) +\n                                                    \\vec{r}'(t)\\cdot\\vec{u}(t)$$\n  \\item $$\\frac{d}{dt}[\\vec{r}(t)\\times\\vec{u}(t)]=\\vec{r}(t)\\times\\vec{u}'(t) +\n                                                   \\vec{r}'(t)\\times\\vec{u}(t)$$\n\\end{enumerate}\n\nNote that the order of the factors matters in the cross-product rule, since the\ncross product is anticommutative.\n\n\\subsection{Integration}\nThe integral of a vector-valued function $\\vec{r}(t)$ is given as: (note, this\napplies to vector-valued functions in $\\mathbb{R}^2$ and $\\mathbb{R}^3$ for both\ndefinite and indefinite integrals)\n\\begin{equation}\n  \\int \\vec{r}(t) dt = \\left(\\int f(t) dt\\right)\\hat{i} +\n                       \\left(\\int g(t) dt\\right)\\hat{j} +\n                       \\left(\\int h(t) dt\\right)\\hat{k}\n\\end{equation}\n\n\\section{Velocity \\& Acceleration}\nLet $\\vec{a}(t)$, $\\vec{v}(t)$ and $\\vec{r}(t)$ represent the acceleration,\nvelocity, and position of an object in $\\mathbb{R}^2$ or $\\mathbb{R}^3$\nrespectively.\n\\begin{equation}\n  \\vec{a}(t) = \\frac{d}{dt}\\left[\\vec{v}(t)\\right]\n             = \\frac{d^2}{dt^2}\\left[\\vec{r}(t)\\right]\n\\end{equation}\n", "meta": {"hexsha": "b231866608f03c55b9d03a16cc44fd4f7961ee52", "size": 14366, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016/bc_calculus/units/unit_1.tex", "max_stars_repo_name": "ttaylorr/midterms", "max_stars_repo_head_hexsha": "fdde0fd1a66eb5242d0dfa04a5201c3ab6d6b7eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-01-06T06:55:26.000Z", "max_stars_repo_stars_event_max_datetime": "2015-01-06T06:55:26.000Z", "max_issues_repo_path": "2016/bc_calculus/units/unit_1.tex", "max_issues_repo_name": "ttaylorr/midterms", "max_issues_repo_head_hexsha": "fdde0fd1a66eb5242d0dfa04a5201c3ab6d6b7eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016/bc_calculus/units/unit_1.tex", "max_forks_repo_name": "ttaylorr/midterms", "max_forks_repo_head_hexsha": "fdde0fd1a66eb5242d0dfa04a5201c3ab6d6b7eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.9537712895, "max_line_length": 81, "alphanum_fraction": 0.6383822915, "num_tokens": 5266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8244619350028204, "lm_q1q2_score": 0.589839851911092}}
{"text": "\n\\subsection{Continous time with Lie algebra}\n\nWe use \\(X=iH\\), what are the implications of this compared to other choices?\n\nLie algebras with \\(n\\times x\\)\n\nThis loops back? multiple dimensions, infinite, so maybe not?\n\nWith continuous time we do not have a single operator to describe movements. There is always one smaller.\n\nWith continous time there must be either a single state, or an uncountably infinite number of states.\n\n\\(U=M_n^n\\)\n\n\\(U=(I+\\dfrac{1}{n}G_n)^n\\)\n\n\\(U=\\lim_{n\\rightarrow \\infty }(I+\\dfrac{1}{n}G)^n\\)\n\nNow:\n\n\\(UU^*=I\\)\n\n\\((I+\\dfrac{1}{n}G)(I+\\dfrac{1}{n}G)^*=I\\)\n\n\\((I+\\dfrac{1}{n}G)(I+\\dfrac{1}{n}G^*)=I\\)\n\n\\(G=-G^*\\)\n\n\\(G=iH\\)\n\n\\(iH=-(iH)^*\\)\n\n\\(H=H^*\\)\n\n\\(H\\) is Hermitian\n\n\\(U=\\lim_{n\\rightarrow \\infty }(I+\\dfrac{1}{n}iH)^n\\)\n\nThis isn't quite right, need defined for different time jumps.\n\n", "meta": {"hexsha": "5ad0881d297105f8ed201a00115e89349d13c8d5", "size": 822, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/physics/QM/03-05-lie.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/physics/QM/03-05-lie.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/physics/QM/03-05-lie.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.5714285714, "max_line_length": 105, "alphanum_fraction": 0.6484184915, "num_tokens": 292, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637433190939, "lm_q2_score": 0.6859494485880928, "lm_q1q2_score": 0.5896858707009082}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm}\n\\newtheorem{problem}{Problem}\n\\newtheorem{lemma}{Lemma}\n\\usepackage{harpoon}%\n\\usepackage{float}\n\\usepackage{verbatim}\n\n\\title{Notes on Trilinear Aggregation}\n\\author{Jason Yang}\n\\date{August 2021}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\n\nThis paper outlines the notes I have taken while reading the papers ``Strassen's Algorithm is not Optimal / Trilinear Technique of Aggregating, Uniting and Canceling for / Constructing Fast Algorithms for Matrix Operations\" (Pan) \\begin{verbatim} https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4567976 \\end{verbatim} and \"On Practical Algorithms for Accelerated Matrix Multiplication\" (Laderman, Pan, Sha) \\begin{verbatim} https://www.sciencedirect.com/science/article/pii/002437959290393O \\end{verbatim}. These two papers show what is called the \\textit{trilinear aggregation} technique of multiplying two moderately-sized matrices together by using as few multiplications as possible. I think they are especially interesting because they aren't well known and aren't mentioned often by other researchers, since more attention has been given to the asymptotically fastest matrix multiplication algorithms. These algorithms are very theoretically interesting, but are incredibly complicated and are only faster than naive matrix multiplication for absurdly large matrices. Therefore, I believe trilinear aggregation deserves more attention if we want to make new fast matrix multiplication algorithms that are actually usable in the real world.\n\n\\section{Definitions}\n\nWe have matrices $A=[A_{i,j}]_{i,j}$ and $B=[B_{j,k}]_{j,k}$ of size $M\\times K$ and $K\\times N$ respectively and want to compute the matrix product $C=AB:=[\\sum_{j} A_{i,j}B_{j,k}]_{i,k}$ using as few multiplications between elements of $A$ and $B$ as possible. Here, we disregard scalar multiplications, e.g. using the $2A_{i,j}$ has no cost. We also do not care how many additions we make between elements of $A$ and $B$.\n\nThe reason we care about reducing the number of multiplications is that the elements of $A$ and $B$ don't have to be just numbers. In particular, they can also be smaller matrices, meaning any procedure for multiplying the elements of $A$ and $B$ immediately yields a divide-and-conquer algorithm for multiplying arbitrary matrices of any size (provided there are base case algorithms for small sizes). It can be shown that if we have way of multiplying a $M\\times K$ matrix with a $K\\times N$ matrix using $R$ multiplications, then we have a $O(N^{3\\log_{MKN} R})$-time algorithm for multiplying two $N\\times N$ matrices together. Additions and scalar multiplications become insignificant to the asymptotic time complexity.\n\nThe naive algorithm, which comes directly from the definition of the matrix product, requires $MKN$ multiplications and computes two $N\\times N$ matrices in $O(N^3)$ time. For $(M,K,N)=(2,2,2)$, however, V. Strassen found a clever algorithm using only 7 multiplications instead of the expected $2*2*2=8$ multiplications. This yielded the first ever sub $O(N^3)$-time algorithm for multiplying two $N\\times N$ matrices together, as it runs in $O(N^{log_2 7})=O(N^{\\sim 2.807})$ time.\n\n\\section{Tensor Representation of Matrix Multiplication}\n\nWhat is the general form of an algorithm multiplying matrices $A$ and $B$ with few multiplications? 
Since the matrix product $AB$ only involves terms of the form $ab$ for some element $a$ in $A$ and some element $b$ in $B$, it does not make sense to have triple products of the elements of $A$ and $B$, nor to have elements of $A$ and $B$ appearing by themselves, multiplied only by scalars. Thus, all multiplications are of the form $(\\sum_{ij}\\alpha_{ij}A_{ij})*(\\sum_{jk}\\beta_{jk}B_{jk})$, i.e. a linear combination of the elements of $A$ multiplied by a linear combination of the elements of $B$.\n\nIf we use $R$ multiplications, we can represent the $r$-th product as $P_r=(\\sum_{ij}\\alpha_{rij}A_{ij})*(\\sum_{jk}\\beta_{rjk}B_{jk})$, where $\\alpha_{rij},\\beta_{rjk}$ are scalar coefficients. Then the $(i,k)$-th element of the matrix product $C_{ik}=(AB)_{ik}=\\sum_{j}A_{ij}B_{jk}$ must be obtainable as a linear combination of the products $P_r$, i.e. $C_{ik}=\\sum_{r}\\gamma_{rki}P_{r}$ for coefficients $\\gamma_{rki}$ (we use $\\gamma_{rki}$ and not $\\gamma_{rik}$ because that way the resulting matrix multiplication tensor we will define shortly will have more symmetry). Plugging in the definitions of $C_{ik}$ and $P_r$ yields $\\sum_{j}A_{ij}B_{jk}=\\sum_{r}\\gamma_{rki}(\\sum_{ij}\\alpha_{rij}A_{ij})*(\\sum_{jk}\\beta_{rjk}B_{jk})$. If we want the coefficients of $\\alpha,\\beta,\\gamma$ to make this equation valid for any matrices $A,B$, then we obtain the so-called \\textit{Brent equations}:\n\n\\[\\sum_{r=0}^{R-1} \\alpha_{ri_0j_0}\\beta_{rj_1k_1}\\gamma_{rk_2i_2}=\\begin{cases}\n1 & j_0=j_1 \\wedge k_1=k_2 \\wedge i_2=i_0 \\\\\n0 & \\texttt{else} \\\\\n\\end{cases}\\]\n\n\\textbf{Any set of coefficients of $\\alpha,\\beta,\\gamma$ that satisfies this system of equations describes a valid algorithm for multiplying an $M\\times K$ matrix with a $K\\times N$ matrix using only $R$ multiplications between matrix elements}.\n\nWe can rewrite this system of equations as a tensor equation. A \\textit{tensor} for our purposes is an $n$-dimensional array $A_{i_0,\\dots i_{n-1}}$. Additionally, define the \\textit{outer product} of two tensors $A,B$ of dimensions $n,m$ respectively to be the $(n+m)$-dimensional tensor $A\\times B$ s.t. $(A\\times B)_{i_0,\\dots i_{n-1},j_0,\\dots j_{m-1}}=A_{i_0,\\dots i_{n-1}}B_{j_0,\\dots j_{m-1}}$. Finally, define the \\textit{matrix multiplication tensor} $\\mathcal{M}(M,K,N)$ to be the $M\\times K\\times K\\times N\\times N\\times M$ tensor s.t. the element at coordinate $(i_0,j_0,j_1,k_1,k_2,i_2)$ is 1 if $j_0=j_1 \\wedge k_1=k_2 \\wedge i_2=i_0$ and 0 otherwise. Then the Brent equations above are equivalent to the following tensor equation:\n\n\\[\\sum_{r=0}^{R-1} \\alpha_r\\times\\beta_r\\times\\gamma_r=\\mathcal{M}(M,K,N),\\]\n\nwhere $\\alpha_r, \\beta_r, \\gamma_r$ are the $r$-th matrices of $\\alpha,\\beta,\\gamma$ respectively. The minimum $R$ such that the equation can be satisfied is called the \\textit{rank} of the tensor $\\mathcal{M}(M,K,N)$. Thus, finding a fast matrix multiplication algorithm is equivalent to finding a low-rank decomposition of $\\mathcal{M}(M,K,N)$.\n\n\\section{Trilinear Aggregation}\nIn this section we show Pan's technique of trilinear aggregation, which yielded the first matrix multiplication algorithm asymptotically faster than Strassen's algorithm. We will only consider the case $M=K=N$ (i.e. 
we will only consider multiplying two $N\\times N$ matrices together).\n\nLet $\\mathcal{M}(N)=\\mathcal{M}(N,N,N)$, $[N]=\\{0,\\dots N-1\\}$, $[N]^3=\\{(i,j,k)|i,j,k\\in [N]\\}$, and $E_{i,j}$ be the $N\\times N$ matrix with its element at $(i,j)$ equal to 1 and all other elements equal to 0. Also, for brevity, we will omit the $\\times$ symbol when evaluating the outer product between two tensors. Finally, we will refer to a \\textit{term} as a set of three matrices that have been combined using the outer product.\n\nThe matrix multiplication tensor $\\mathcal{M}(N)$ has a trivial $N^3$-rank decomposition:\n\n\\[\\mathcal{M}(N)=\\sum_{(i,j,k)\\in [N]^3}E_{i,j}E_{j,k}E_{k,i}.\\]\n\nNotice that we can define a bijection $(i,j,k)\\Longleftrightarrow E_{i,j}E_{j,k}E_{k,i}$ between the elements of $[N]^3$ and the terms in the trivial decomposition.\n\nPan's technique starts by organizing the terms of the trivial decomposition into groups of 3 as follows:\n\n\\[\\mathcal{M}(N)=(\\sum_{(i,j,k)\\in S}E_{i,j}E_{j,k}E_{k,i}+E_{j,k}E_{k,i}E_{i,j}+E_{k,i}E_{i,j}E_{j,k})+(\\sum_{i\\in [N]} E_{i,i}E_{i,i}E_{i,i}).\\]\n\nHere we define a set $S\\subseteq [N]^3$ s.t. $\\cup_{(i,j,k)\\in S}\\{(i,j,k),(j,k,i),(k,i,j)\\}=[N]^3\\setminus\\{(i,i,i)|i\\in [N]\\}$ and the sets $S$, $\\{(j,k,i)|(i,j,k)\\in S\\}$, and $\\{(k,i,j)|(i,j,k)\\in S\\}$ are mutually disjoint. Thus, $|S|=\\frac{N^3-N}{3}$. The terms $E_{i,i}E_{i,i}E_{i,i}$ must be treated separately.\n\nNext, we replace $\\sum_{(i,j,k)\\in S}E_{i,j}E_{j,k}E_{k,i}+E_{j,k}E_{k,i}E_{i,j}+E_{k,i}E_{i,j}E_{j,k}$ with $\\sum_{(i,j,k)\\in S}(E_{i,j}+E_{j,k}+E_{k,i})(E_{j,k}+E_{k,i}+E_{i,j})(E_{k,i}+E_{i,j}+E_{j,k})$. With this method, we generate all the terms $E_{i,j}E_{j,k}E_{k,i}, E_{j,k}E_{k,i}E_{i,j}, E_{k,i}E_{i,j}E_{j,k}$ using only $\\frac{N^3-N}{3}$ terms instead of $N^3-N$ terms, a big improvement. However, we also generate several unwanted terms in the process.\n\nThe final step is to remove these unwanted terms. We can use the distributive law to our advantage to remove most of these terms with only $O(N^2)$ terms. For example, consider removing terms of the form $E_{i,j}E_{i,j}E_{j,k}$: instead of calculating $\\sum_{(i,j,k)\\in S} E_{i,j}E_{i,j}E_{j,k}$ naively, we can rewrite it as $\\sum_{i,j\\in [N]}E_{i,j}E_{i,j}(\\sum_{k|(i,j,k)\\in S}E_{j,k})$, which only takes $N^2$ terms instead of $\\frac{N^3-N}{3}$. We will call such terms \\textit{intermediate terms}, as together they can be removed using only $O(N^2)$ terms. In general, any $E_aE_bE_c$ where each $a,b,c$ represents a pair of indices is an intermediate term if $a,b,c$ are not all mutually distinct. We will initially ignore the details of actually removing the intermediate terms and talk about them later. 
For now, we have the following:\n\n\\begin{comment}\nWe will define the sets $S_{IJ},S_{JK},S_{KI}$ to be $\\{(i,j)|(i,j,k)\\in S\\},\\{(j,k)|(i,j,k)\\in S\\},\\{(k,i)|(i,j,k)\\in S\\}$.\n\\end{comment}\n\n\\[\\sum_{(i,j,k)\\in S}E_{i,j}E_{j,k}E_{k,i}+E_{j,k}E_{k,i}E_{i,j}+E_{k,i}E_{i,j}E_{j,k}\\]\n\\begin{comment}\n\\[=\\sum_{(i,j,k)\\in S}\\Big[(E_{i,j}+E_{j,k}+E_{k,i})(E_{j,k}+E_{k,i}+E_{i,j})(E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\[-E_{i,j}E_{i,j}(E_{k,i}+E_{i,j}+E_{j,k})-E_{j,k}E_{j,k}(E_{k,i}+E_{i,j}+E_{j,k})-E_{k,i}E_{k,i}(E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\[-E_{i,j}(E_{j,k}+E_{k,i})E_{i,j}-E_{j,k}(E_{i,j}+E_{k,i})E_{j,k}-E_{k,i}(E_{i,j}+E_{j,k})E_{k,i}\\]\n\\[-(E_{j,k}+E_{k,i})E_{i,j}E_{i,j}-(E_{i,j}+E_{k,i})E_{j,k}E_{j,k}-(E_{i,j}+E_{j,k})E_{k,i}E_{k,i}\\]\n\\[-E_{i,j}E_{k,i}E_{j,k}-E_{j,k}E_{i,j}E_{k,i}-E_{k,i}E_{j,k}E_{i,j}\\Big]\\]\n\\end{comment}\n\n\\[=\\sum_{(i,j,k)\\in S}(E_{i,j}+E_{j,k}+E_{k,i})(E_{j,k}+E_{k,i}+E_{i,j})(E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\begin{comment}\n\\[-\\sum_{(i,j)\\in S_{IJ}}E_{i,j}E_{i,j}(\\sum_k^*E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\[-\\sum_{(j,k)\\in S_{JK}}E_{j,k}E_{j,k}(\\sum_i^*E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\[-\\sum_{(k,i)\\in S_{KI}}E_{k,i}E_{k,i}(\\sum_j^*E_{k,i}+E_{i,j}+E_{j,k})\\]\n\\[-\\sum_{(i,j)\\in S_{IJ}}E_{i,j}(\\sum_k^*E_{j,k}+E_{k,i})E_{i,j}\\]\n\\[-\\sum_{(j,k)\\in S_{JK}}E_{j,k}(\\sum_i^*E_{i,j}+E_{k,i})E_{j,k}\\]\n\\[-\\sum_{(k,i)\\in S_{KI}}E_{k,i}(\\sum_j^*E_{i,j}+E_{j,k})E_{k,i}\\]\n\\[-\\sum_{(i,j)\\in S_{IJ}}(\\sum_k^*E_{j,k}+E_{k,i})E_{i,j}E_{i,j}\\]\n\\[-\\sum_{(j,k)\\in S_{JK}}(\\sum_i^*E_{i,j}+E_{k,i})E_{j,k}E_{j,k}\\]\n\\[-\\sum_{(k,i)\\in S_{KI}}(\\sum_j^*E_{i,j}+E_{j,k})E_{k,i}E_{k,i}\\]\n\\end{comment}\n\\[-\\Big[ O(N^2) \\texttt{many terms}\\Big]\\]\n\\[-\\Big(\\sum_{(i,j,k)\\in S}E_{i,j}E_{k,i}E_{j,k}+E_{j,k}E_{i,j}E_{k,i}+E_{k,i}E_{j,k}E_{i,j}\\Big),\\]\n\n\\begin{comment}\nwhere $\\sum_i^*,\\sum_j^*,\\sum_k^*$ are short for $\\sum_{i|(i,j,k)\\in S},\\sum_{j|(i,j,k)\\in S},\\sum_{k|(i,j,k)\\in S}$ respectively. \n\\end{comment}\n\nWe will call the terms $(E_{i,j}+E_{j,k}+E_{k,i})(E_{j,k}+E_{k,i}+E_{i,j})(E_{k,i}+E_{i,j}+E_{j,k})$ the \\textit{aggregation terms} and the terms $E_{i,j}E_{k,i}E_{j,k}, E_{j,k}E_{i,j}E_{k,i}, E_{k,i}E_{j,k}E_{i,j}$ \\textit{unacceptable terms}, as they cannot be easily evaluated with only $O(N^2)$ terms.\n\nWe can summarize the above long equation with an \\textit{aggregation table}:\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $ij$ & $jk$ & $ki$ \\\\\n        $jk$ & $ki$ & $ij$ \\\\\n        $ki$ & $ij$ & $jk$ \\\\\n    \\end{tabular}\n    \\label{group1}\n    \\caption{}\n\\end{table}\n\nIn the aggregation table, each entry represents a matrix $E_{i,j}$; the letter $E$ has simply been omitted for brevity. Then our target terms correspond to the sum of the products of the rows, our aggregation term corresponds to the product of the sum of the columns, and our unacceptable terms come from the lists of coordinates $[(0,0),(1,1),(2,2)]$, $[(1,0),(2,1),(0,2)]$, and $[(2,0),(0,1),(1,2)]$. In this specific table, the unacceptable terms can be written as $ijkijk, jkijki, kijkij$.\n\nTo eliminate the unacceptable terms, Pan uses a clever trick: expand all matrices $E_{i,j}$ to size $2N\\times 2N$ and evaluate the tensor $\\mathcal{M}(2N)$ instead of $\\mathcal{M}(N)$. 
To retain the restriction $(i,j,k)\\in S$, we can represent the trivial decomposition of $\\mathcal{M}(2N)$ as follows:\n\n\\[\\sum_{a,b,c\\in [2]}\\Big[\\Big(\\sum_{(i,j,k)\\in S}E_{i+aN,j+bN}E_{j+bN,k+cN}E_{k+cN,i+aN}\\]\\[+E_{j+bN,k+cN}E_{k+cN,i+aN}E_{i+aN,j+bN}\\]\\[+E_{k+cN,i+aN}E_{i+aN,j+bN}E_{j+bN,k+cN}\\Big)\\]\\[+\\Big(\\sum_{i\\in [N]} E_{i+aN,i+bN}E_{i+bN,i+cN}E_{i+cN,i+aN}\\Big)\\Big].\\]\n\nNow, the trilinear aggregation technique involves organizing the 24 terms $E_{i+aN,j+bN}E_{j+bN,k+cN}E_{k+cN,i+aN}$, $E_{j+bN,k+cN}E_{k+cN,i+aN}E_{i+aN,j+bN}$, and\\\\ $E_{k+cN,i+aN}E_{i+aN,j+bN}E_{j+bN,k+cN} \\forall a,b,c\\in [2]$ and for fixed $i,j,k$ into 8 groups of 3, where each group is replaced with an aggregation term and a set of intermediate terms, such that the unacceptable terms from each of the groups cancel each other out.\n\nTo accomplish this, let us abbreviate the expressions $i+N,j+N,k+N$ as $\\bar{i},\\bar{j},\\bar{k}$ respectively. The first group is described by Table 1 above and generates the unacceptable terms $E_{i,j}E_{k,i}E_{j,k},E_{j,k}E_{i,j}E_{k,i},E_{k,i}E_{j,k}E_{i,j}$, which we will abbreviate as $ijkijk,jkijki,kijkij$. Let's try removing $ijkijk$ with our second group. We will need to add a negative sign:\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $-ij$ & $j?$ & $-?i$ \\\\\n        $?k$ & $ki$ & $i?$ \\\\\n        $k?$ & $?j$ & $jk$ \\\\\n    \\end{tabular}\n\\end{table}\n\nIn the above table, we already know partial information about the elements not on the main diagonal, since each row must form a valid target term. For example, $ij\\bar{j}kki$ is invalid. In general, any target term must be of the form $abbcca$, where each occurrence of the same letter represents the same index.\n\nWe also need to add a second negative sign in the first row to counteract the initial negative sign we added to cancel out $ijkijk$. To match Pan's solution, we arbitrarily place this negative sign at the rightmost column. Finally, we must have each of the rows evaluate to new target terms we have not already evaluated, e.g. we cannot set the first row of Table 2 to $-ij|jk|-ki$ since that would simply create the target term $ijjkki$ that we already got from Table 1. 
This constraint forces our new table to be:\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $-ij$ & $j\\bar{k}$ & $-\\bar{k}i$ \\\\\n        $\\bar{j}k$ & $ki$ & $i\\bar{j}$ \\\\\n        $k\\bar{i}$ & $\\bar{i}j$ & $jk$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\nWhat this table means is that we calculate the aggregation terms $\\sum_{(i,j,k)\\in S}(-E_{i,j}+E_{\\bar{j},k}+E_{k,\\bar{i}})(E_{j,\\bar{k}}+E_{k,i}+E_{\\bar{i},j})(-E_{\\bar{k},j}+E_{i,\\bar{j}}+E_{j,k})$ and remove the resulting intermediate terms.\n\nWe continue in this fashion, cancelling out $jkijki$ with Table 3 and $kijkij$ with Table 4:\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $i\\bar{j}$ & $\\bar{j}k$ & $ki$ \\\\\n        $-jk$ & $k\\bar{i}$ & $-\\bar{i}j$ \\\\\n        $\\bar{k}i$ & $ij$ & $j\\bar{k}$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $\\bar{i}j$ & $jk$ & $k\\bar{i}$ \\\\\n        $j\\bar{k}$ & $\\bar{k}i$ & $ij$ \\\\\n        $-ki$ & $i\\bar{j}$ & $-\\bar{j}k$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\nNow, we have cancelled all unacceptable terms generated by Table 1, but we have also created new unacceptable terms $i\\bar{j}k\\bar{i}j\\bar{k}$,$j\\bar{k}i\\bar{j}k\\bar{i}$,$k\\bar{i}j\\bar{k}i\\bar{j}$,$-\\bar{i}j\\bar{k}i\\bar{j}k$,$-\\bar{j}k\\bar{i}j\\bar{k}i$,\\\\$-\\bar{k}i\\bar{j}k\\bar{i}j$ from Tables 2-4. To eliminate these terms, we simply duplicate Tables 1-4 and swap the bar variables with no-bar variables; the unacceptable terms generated from these new tables will perfectly cancel out our current unacceptable terms:\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $\\bar{i}\\bar{j}$ & $\\bar{j}\\bar{k}$ & $\\bar{k}\\bar{i}$ \\\\\n        $\\bar{j}\\bar{k}$ & $\\bar{k}\\bar{i}$ & $\\bar{i}\\bar{j}$ \\\\\n        $\\bar{k}\\bar{i}$ & $\\bar{i}\\bar{j}$ & $\\bar{j}\\bar{k}$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $-i\\bar{j}$ & $\\bar{j}k$ & $-k\\bar{i}$ \\\\\n        $j\\bar{k}$ & $\\bar{k}\\bar{i}$ & $\\bar{i}j$ \\\\\n        $\\bar{k}i$ & $i\\bar{j}$ & $\\bar{j}\\bar{k}$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $\\bar{i}j$ & $j\\bar{k}$ & $\\bar{k}\\bar{i}$ \\\\\n        $-\\bar{j}\\bar{k}$ & $\\bar{k}i$ & $-i\\bar{j}$ \\\\\n        $k\\bar{i}$ & $\\bar{i}\\bar{j}$ & $\\bar{j}k$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c}\n        $i\\bar{j}$ & $\\bar{j}\\bar{k}$ & $\\bar{k}i$ \\\\\n        $\\bar{j}k$ & $k\\bar{i}$ & $\\bar{i}\\bar{j}$ \\\\\n        $-\\bar{k}\\bar{i}$ & $\\bar{i}j$ & $-j\\bar{k}$ \\\\\n    \\end{tabular}\n    \\caption{}\n\\end{table}\n\nThus, we have a way of decomposing the tensor $\\mathcal{M}(2N)$ into $8(\\frac{N^3-N}{3})+O(N^2)+8N$ terms instead of $8N^3$ terms with the trivial decomposition. 
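Decompositions like this are straightforward to check numerically. As a minimal sketch (the code and variable names are mine, not Pan's), the following Python verifies that Strassen's classic rank-7 decomposition of $\\mathcal{M}(2)$ satisfies the Brent equations, using the $\\alpha_{rij},\\beta_{rjk},\\gamma_{rki}$ index conventions from earlier:\n\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\nN = 2\n\ndef mat(entries):\n    # Build an N x N coefficient matrix from a {(row, col): value} dict.\n    m = np.zeros((N, N))\n    for (r, c), v in entries.items():\n        m[r, c] = v\n    return m\n\n# Strassen's seven products as (alpha_r, beta_r, gamma_r) triples, where\n# P_r = (sum_ij alpha_r[i,j] A[i,j]) * (sum_jk beta_r[j,k] B[j,k]) and\n# C[i,k] = sum_r gamma_r[k,i] P_r.\nstrassen = [\n    (mat({(0,0):1,(1,1):1}),  mat({(0,0):1,(1,1):1}),   mat({(0,0):1,(1,1):1})),\n    (mat({(1,0):1,(1,1):1}),  mat({(0,0):1}),           mat({(0,1):1,(1,1):-1})),\n    (mat({(0,0):1}),          mat({(0,1):1,(1,1):-1}),  mat({(1,0):1,(1,1):1})),\n    (mat({(1,1):1}),          mat({(1,0):1,(0,0):-1}),  mat({(0,0):1,(0,1):1})),\n    (mat({(0,0):1,(0,1):1}),  mat({(1,1):1}),           mat({(0,0):-1,(1,0):1})),\n    (mat({(1,0):1,(0,0):-1}), mat({(0,0):1,(0,1):1}),   mat({(1,1):1})),\n    (mat({(0,1):1,(1,1):-1}), mat({(1,0):1,(1,1):1}),   mat({(0,0):1})),\n]\n\n# Assemble sum_r alpha_r x beta_r x gamma_r as a 6-dimensional tensor.\ndecomp = sum(np.einsum('ij,kl,mn->ijklmn', a, b, g) for a, b, g in strassen)\n\n# Build the matrix multiplication tensor M(2,2,2) from its definition.\ntarget = np.zeros((N,) * 6)\nfor i0, j0, j1, k1, k2, i2 in product(range(N), repeat=6):\n    if j0 == j1 and k1 == k2 and i2 == i0:\n        target[i0, j0, j1, k1, k2, i2] = 1\n\nassert np.array_equal(decomp, target)  # rank-7 decomposition checks out\n\\end{verbatim}\n\nThe same check, with larger coefficient matrices, applies to any candidate decomposition of $\\mathcal{M}(2N)$, including the one constructed above.\n\n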
The $8N$ terms from calculating $\\sum_{a,b,c\\in [2]}\\sum_{i\\in [N]} E_{i+aN,i+bN}E_{i+bN,i+cN}E_{i+cN,i+aN}$ can be improved to $7N$ terms using Strassen's algorithm.\n\n\\end{document}\n", "meta": {"hexsha": "403d2753c595023a76a3b2ecc460f71d74d00110", "size": 17714, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "trilinearAggregation.tex", "max_stars_repo_name": "jasonLLyang/Matrix-Multiplication-Tensor-Decomposition", "max_stars_repo_head_hexsha": "b252a1cf39b691cd4b2a87ec07eb36ea7920594b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "trilinearAggregation.tex", "max_issues_repo_name": "jasonLLyang/Matrix-Multiplication-Tensor-Decomposition", "max_issues_repo_head_hexsha": "b252a1cf39b691cd4b2a87ec07eb36ea7920594b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "trilinearAggregation.tex", "max_forks_repo_name": "jasonLLyang/Matrix-Multiplication-Tensor-Decomposition", "max_forks_repo_head_hexsha": "b252a1cf39b691cd4b2a87ec07eb36ea7920594b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.8858447489, "max_line_length": 1254, "alphanum_fraction": 0.6626961725, "num_tokens": 6351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835289107307, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5896717605392118}}
{"text": "\\hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1power}{}\\section{numpp\\+:\\+:differentiation\\+:\\+:symbolic\\+:\\+:power$<$ Left, Exp $>$ Class Template Reference}\n\\label{classnumpp_1_1differentiation_1_1symbolic_1_1power}\\index{numpp\\+::differentiation\\+::symbolic\\+::power$<$ Left, Exp $>$@{numpp\\+::differentiation\\+::symbolic\\+::power$<$ Left, Exp $>$}}\n\\subsection*{Public Types}\n\\begin{DoxyCompactItemize}\n\\item \n\\mbox{\\Hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1power_aba34fd92a6784601b20d66413423802b}\\label{classnumpp_1_1differentiation_1_1symbolic_1_1power_aba34fd92a6784601b20d66413423802b}} \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Active$>$ }\\\\using {\\bfseries derivative} = simplify\\+\\_\\+multiplication$<$ \\hyperlink{classnumpp_1_1differentiation_1_1symbolic_1_1multiply}{multiply}$<$ \\hyperlink{classnumpp_1_1differentiation_1_1symbolic_1_1constant}{constant}$<$ Exp $>$, \\hyperlink{classnumpp_1_1differentiation_1_1symbolic_1_1power}{power}$<$ Left, Exp-\\/1 $>$ $>$, typename Left\\+::template derivative$<$ Active $>$ $>$\n\\end{DoxyCompactItemize}\n\\subsection*{Static Public Member Functions}\n\\begin{DoxyCompactItemize}\n\\item \n\\mbox{\\Hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1power_a02f035c5198288fa0f22c48655574904}\\label{classnumpp_1_1differentiation_1_1symbolic_1_1power_a02f035c5198288fa0f22c48655574904}} \nstatic C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR auto {\\bfseries calculate} (auto \\&\\&values)\n\\end{DoxyCompactItemize}\n\n\nThe documentation for this class was generated from the following file\\+:\\begin{DoxyCompactItemize}\n\\item \ndifferentiation/symbolic/arithmetic.\\+hpp\\end{DoxyCompactItemize}\n", "meta": {"hexsha": "93d9c9766648ea4d7936a5fbf6a0c6e868317f52", "size": 1642, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1power.tex", "max_stars_repo_name": "szymonmaszke/numpp", "max_stars_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-06-06T01:51:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-02T15:17:00.000Z", "max_issues_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1power.tex", "max_issues_repo_name": "vyzyv/numpp", "max_issues_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-11-28T12:15:46.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T00:03:38.000Z", "max_forks_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1power.tex", "max_forks_repo_name": "szymonmaszke/numpp", "max_forks_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-08-06T13:58:27.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-06T06:45:22.000Z", "avg_line_length": 82.1, "max_line_length": 438, "alphanum_fraction": 0.8051157125, "num_tokens": 601, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.8031737940012417, "lm_q1q2_score": 0.5896255750150321}}
{"text": "\\section{Model Optimization}\n\\label{sec:optimizerStrategies}\n\nWhen analyzing the range of values obtainable by a model, frequently a key question is ``what set of\nparameters result in the best response value?''  To answer this question, RAVEN uses the \\xmlNode{Optimizer},\na powerful sampler-like entity that searches the input space to find minimum or maximum values of a response.\n\nIn the remainder of this section, we will explore how to use the optimizer using a simple analytic problem,\nwith a two-dimensional input space and single response of interest.  After getting used to running with the\noptimizer, we will add increasing complexity, including changing adaptive step sizes, initial conditions,\nparallel trajectories, input space subdivision, input space constraints, and response constraints.\n\nTo demonstrate the operation of the Optimizer entities in RAVEN, the model we consider is the Beale function,\nwhich is documented in the analytic tests for RAVEN and replicated here:\n\n\\begin{itemize}\n  \\item Function: $f(x,y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2$\n  \\item Domain: $-4.5 \\leq x,y \\leq 4.5$\n  \\item Global Minimum: $f(3,0.5)=0$\n\\end{itemize}\n\nThe two inputs are the variables $x$ and $y$, and the response is a value we'll assign to $ans$, short for\n``answer''.  The model is an external model in RAVEN, and can be found at\n\\begin{verbatim}\n  raven/tests/framework/AnalyticModes/optimizing/beale.py.\n\\end{verbatim}\nThe function's values are distributed as in Fig. \\ref{fig:beale}, with red indicating high values\nand blue indicating low values.\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.7]{../../tests/framework/user_guide/optimizing/Beale_grid.png}\n  \\caption{Plot of Beale's function for Optimization}\n  \\label{fig:beale}\n\\end{figure}\nThe objective is to minimize the function.\n\nNote that throughout this example we use the SPSA optimizer by way of demonstration, since it is the first\nadvanced algorithm for optimization included in RAVEN; many of the options and parameters apply to other\noptimizers, and details can be found in the RAVEN user manual.\n\n\\subsection{Introduction: The Optimizer Input}\nAs with other entities, the Optimizer gets its own XML block in the RAVEN input.\nHere's an example of an input for a SPSA optimizer named \\xmlString{opter}:\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Optimizers}\nThis is the smallest amount of input needed to run an optimization problem, with the exception that we include\nthe \\xmlNode{initialSeed} to maintain consistent results.  Note the required blocks included\nto define the optimizer:\n\\begin{itemize}\n  \\item \\xmlNode{objective}, which is where you indicate the variable for which you want to find the minimum (or,\n    if you change the default, maximum).  As listed here, we want to minimize the value of \\texttt{ans} given a\n    range of possible values for \\texttt{x} and \\texttt{y}.\n  \\item \\xmlNode{variable}, which is where you can define the input space variables, one for each of these\n    nodes.  Declaring a variable here informs the optimizer that you want it to find the optimal value for\n    this variable, along with the other variables declared in their own blocks. Note this follows\n    the same pattern as any other \\xmlNode{Sampler}, including a \\xmlNode{distribution} node to\n    describe the domain of the variable. 
For \\xmlNode{GradientDescent}, the shape of the\n    distribution is not significant unless performing other advanced optimizations (such as\n    optimization at risk). Nominally, this distribution simply defines the acceptable range of the\n    variable, making the \\xmlNode{Uniform} distribution a common choice. The distribution sets the\n    upper and lower bounds of the variable, which will give the optimizer some general expectations for\n    finding the optimal point; it will never try to sample a value smaller than the lower bound or larger than\n    the upper bound.  In the example we define variables \\emph{x} and \\emph{y} as our input variables, and\n    both of them coincidentally range between -4.5 and 4.5.  We set the initial values for both variables to 0\n    through the \\xmlNode{initial} block, which is required in most cases; the exception is when a\n    preconditioner sets them in multilevel optimization, but we're not concerned with that feature\n    for this example.\n  \\item \\xmlNode{TargetEvaluation}, which declares the DataObject that the optimization search evaluations are\n    going to be placed in.  All of the optimal points found as part of the optimization, as well as any other\n    points evaluated as part of the algorithm, are placed in this object so the optimizer can retrieve this\n    information later.  When this data object is defined, it is critical that the objective variable is\n    defined in the output space, and the input variables in the input space, so the optimizer can collect the\n    results of its sampling.  The type of this data object should be ``PointSet''.  In this\n    example, we use the self-descriptive \\emph{optOut} data object.\n  \\item \\xmlNode{samplerInit}, which contains initialization parameters for the optimization\n    algorithm. In this case, we set the \\xmlNode{initialSeed} to 1234 just to maintain consistent\n    results. We also set the maximum number of model evaluations through the \\xmlNode{limit} node.\n    We don't expect to need all these runs, but in case the optimizer is struggling, we set this\n    cutoff to prevent the code running ad infinitum.\n  \\item \\xmlNode{gradient}, which defines the gradient approximation algorithm to use within the\n    gradient descent algorithm. In this case,\n    we simply indicate that we want to use \\xmlNode{SPSA}, and need no additional inputs.\n  \\item \\xmlNode{stepSize}, which defines how we should control the step size during the gradient\n    descent algorithm. There are several algorithms to choose from; in this case, we choose\n    \\xmlNode{GradientHistory}, which uses the scalar product between successive steps to determine\n    what step to take in the search algorithm. A bigger growth factor results in traversing the input space more\n    quickly, but converging more slowly. A bigger shrink factor results in collapsing to a minimum\n    more quickly, converging quickly but possibly falling into local minima. We're using the default\n    growth (1.25) and shrink (1.15) factors here.\n  \\item \\xmlNode{acceptance}, which determines the algorithm by which we decide whether to accept a\n    potential new optimal point during the gradient descent algorithm. 
In this case we use\n    \\xmlNode{Strict}, which indicates any potential new optimal points in the search that are not\n    preferable to the previously-found optimal point are discarded, and the search continues from\n    the previously-found optimal point in the search.\n  \\item \\xmlNode{convergence}, which informs the searching algorithm of when to decide it has found\n    the optimal point within a sufficient tolerance. There are several stopping criteria; in this\n    case, we use the local value of the \\xmlNode{gradient}, which we want to be at most 0.1.\n\\end{itemize}\nThe other critical blocks in this input are as follows:\n\n\\subsubsection{Models}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Models}\nNote that we define the external model with the name \\xmlString{beale} and provide a path to the analytic\nmodel itself.  This model is set up with the \\texttt{run} method that allows RAVEN to run the model.  We also\nlist all our input/output variables, \\emph{x, y}, and \\emph{ans}.\n\n\\subsubsection{Data Objects}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{DataObjects}\nWe have three data objects:\n\\begin{itemize}\n\\item \\xmlString{placeholder}, which is necessary to define the input to our external\nmodel in the Steps (the external model doesn't use any input file, so we just use a placeholder\nhere);\n\\item \\xmlString{optOut}, which will hold all of the samples taken by our optimizer (optimal\ncandidates, gradient evaluation points, rejected points, etc.); and\n\\item \\xmlString{opt\\_export}, which will hold the actual solution path taken by our optimizer.  We store the\npath travelled by the optimization algorithm as successive samples, with \\texttt{iteration} keeping track of the\noptimization steps taken.  Note especially how the input of \\xmlString{opt\\_export} is set to \\texttt{trajID},\nwhich is a special keyword for the Optimizer trajectory tracking, as is the output variable \\texttt{iteration}.\nThere are several other special keyword outputs that can be written to the Solution Export data object, which\ncan be found in the user manual.\n\\end{itemize}\n\n\\subsubsection{Out Streams}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{OutStreams}\nHere we define the way to print the output of our optimization algorithm.  There's not much to note, except\nthat we'll be printing the optimization path as a CSV.\n\n\\subsubsection{Steps}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Steps}\nHere we put it all together into a workflow that RAVEN can follow.  We only need two steps: one to optimize,\nand one to print out the results.  To actually perform the optimization, we need a MultiRun step, which we\ncleverly name \\xmlString{optimize}.  For input we take the placeholder data object \\emph{placeholder}, which sets\nup the input space of the model we defined, \\emph{beale}.  Where a \\xmlNode{Sampler} would normally go, we\ninclude the \\xmlNode{Optimizer} we defined earlier.  We output to the same data object we indicated in the\nOptimizer's \\xmlNode{TargetEvaluation} node.  Finally, we note specifically the use of the\n\\xmlNode{SolutionExport} node.  The data object defined in this node is where the Optimizer will write the\noptimization path history, with the final entry being the last step taken by the optimizer.  
The IOStep is\nunremarkable, used simply to write out the optimization path history to file.\n\n\\subsubsection{Conclusion}\nAfter reviewing the components (don't forget the RunInfo block!), you can run this example and see the\nresults.  In particular, we can view the final results of the optimizer in \\texttt{Simple/opt\\_export\\_0.csv}.  Note\nthat \\texttt{opt\\_export} is the name of the \\xmlNode{Print} OutStream we defined in the input file.\n\nWhen we open the file (preferably in a CSV reader such as a spreadsheet viewer), we see a CSV with several headers,\nthe outputs defined in the data object in the input\nfile: \\emph{trajID}, \\emph{iteration}, \\emph{x}, \\emph{y}, and \\emph{ans} (not necessarily in that order).  \\emph{x},\n\\emph{y}, and \\emph{ans} are the values of the variables at each optimization iteration, while\n\\emph{iteration} gives the sequential order of the optimization iteration. \\emph{trajID} is the\ntrajectory identifying number; since we are only using one trajectory, this identifier is simply 0.\n\nWe can see there's only one line of data in the output CSV, showing the final solution discovered by\nthe optimization algorithm.\nIf we look at the line, we converged around $f(2.7, 0.42) = 0.0199$ in 40 steps, which is okay but still a little\nways from the analytic optimal point $f(3, 0.5) = 0$.  If we look at the output from the run, we can look at\nthe last time RAVEN was ``Checking convergence for Trajectory 0''.  Below that statement, there are a series\nof convergence criteria and their respective statuses.  We can see our convergence criterion\nrequested through the input file (\\texttt{gradient}, whose final accepted value is 0.0857) as well\nas the \\texttt{same point} convergence criterion, which helps determine if the optimal solution is at\na boundary even though other conditions have not converged.\n\nWe can see that the reason we converged at the end is the\n\\texttt{gradient}, which means the relative change in the gradient of \\emph{ans} was sufficiently\nsmall between steps to cause convergence.  Clearly, we claimed convergence prematurely because of\nthe loose tolerance requested in the optimizer input.  Because these convergence criteria are very\nproblem-specific, one set of parameters will not work best for all problems.\n\nWe can improve this result by changing convergence\nparameters as well as step size growth and shrink factors, all of which can be found in the user manual, and\nmany of which we'll discuss in the rest of this section. Feel free to experiment with these values,\nand see their effect on the solution discovered.\n\n\\subsection{Increasing verbosity}\nWe saw in the previous section that the output stored in \\texttt{Simple/opt\\_export.csv} only\nincludes the final optimal solution, and minimal information about that point. We can increase the\noutput to see the entire path traversed by adding a few parameters in the input file.\n\nThe first parameter to add is in \\xmlNode{Optimizers} \\xmlNode{GradientDescent}\n\\xmlNode{samplerInit}. By adding the node \\xmlNode{writeSteps} with the value \\xmlString{every}, we\ncan see the full path taken by the optimizer from initial point to final accepted solution.\n\nFurther, the optimizer has some special variables that can be used in the \\xmlNode{SolutionExport}\n\\xmlNode{DataObject} to print additional information to the CSV. For example, the special variable\n\\texttt{accepted} will tell us, for each point in the optimization path, what the result of that\npoint is. 
For SPSA, these acceptance notes can be one of the following:\n\\begin{itemize}\n  \\item \\texttt{first}, or the initial point at which the optimization search begins;\n  \\item \\texttt{accepted}, if the new proposed point is sufficiently improved to be accepted as a\n    new optimal point in the search;\n  \\item \\texttt{rejected}, if the new proposed point is \\emph{not} sufficiently improved and\n    therefore rejected;\n  \\item \\texttt{rerun}, indicating the search algorithm returned to an old optimal point after\n    rejecting a proposed optimal point; and\n  \\item \\texttt{final}, which shows that the point listed is the final accepted and converged\n    optimal point.\n\\end{itemize}\n\n\n\n\\subsection{Initial Conditions and Parallel Trajectories} \\label{subsec:opt parallel traj}\nNotice we set the optimization search to start at $(0,0)$.\nYou can change this initial value through\nthe \\xmlNode{initial} block within the \\xmlNode{variable} definition nodes.\n\nFurthermore, RAVEN offers the possibility to run multiple optimization paths in parallel.  Because many\n(perhaps most) optimization techniques get stuck in local minima, using multiple paths (or \\emph{trajectories} as\nthey are called in RAVEN) increases the likelihood that one of the trajectories will find the global minimum\npoint.  You can request multiple trajectories by providing a variety of initial conditions in the\n\\xmlNode{initial} block, as shown in this Optimizer example:\n\\xmlExample{framework/user_guide/optimizing/multiple_trajectories.xml}{Optimizers}\nNote that the ordered pairs are split across the \\xmlNode{initial} nodes, so that the first trajectory will\nstart as a point made up of all the first entries, the second trajectory starts at all the second entries, and\nso on.  In this case, we've requested starting points at (-2,-2), (-2,2), (2,-2), and (2,2).  This (and\ndefining a new working directory in the \\xmlNode{RunInfo} block) is the only input change between the original\nfile and this one.\n\nWhen run, we can see the results in the working directory \\texttt{MultipleTraj}.  There, we see the same files\nas for the base case, plus \\texttt{opt\\_export} files 0-3.  These are produced because we've\nclustered the outputs by \\texttt{trajID} in the \\xmlNode{OutStreams} definition. Each of these corresponds to the\npath that one of the initial points started at, as you can see at the top of each of these CSV files.  We can\nsee that trajectory 2 (which started at (2,-2)) ended close to the analytic optimal point, while trajectory 1\nwas far from it.\n\nIn the screen output from the RAVEN run, you can see the final summary shows the status of each\ntrajectory. Under \\emph{Trajectory Results} we see trajectories 1-3 all converged with different\noptimal values, while trajectory 0 is marked as \\emph{following 1}. This means at some point\nTrajectory 0 started following the same path as Trajectory 1 already moved along, so Trajectory 0\nwas terminated to save computational resources.\n\nFinally, we see the optimal point selected was Trajectory 2 with a function value of roughly 3.7e-3\nat (3.15, 0.54).\n\n\\subsection{Adjusting Adaptive Steps} \\label{subsec:opt stepsize}\nAs we've seen, some of the optimization paths are struggling to converge to meaningful optimal solutions.\nOne way to improve this is to tinker with the convergence tolerances as shown in the user manual.  
Another is
to change the step size modifications used as part of the search process, which we discuss in this section.
First, we briefly discuss how SPSA chooses its step size, so we can make informed choices about what
parameters to use.

Because SPSA is a gradient-based method, it operates by starting at a particular point, estimating the
gradient at that point, then taking a step in the opposite direction of the gradient in order to follow a
downhill path.  It adaptively chooses how long a step to take based on its history.  If the gradient is in
the same direction twice in a row, the algorithm assumes there's further to travel, so it increases its step size
multiplicatively by the \emph{growthFactor}, which we had defaulted to 1.25.  If, on the other hand,
the gradient switches directions, then the step size is divided by the \emph{shrinkFactor}, which we
had defaulted to 1.15.  This means that by default, if the gradient keeps going in the same direction, you
always increase your step size by 25\%, while if you're bouncing back and forth in a valley, the
step size is reduced by roughly 13\% at each iteration.

By way of note, in higher dimensions, the actual growth or shrink multiplier is scaled by a dot product
between the two previous gradients, with a maximum of the gain growth factor when the dot product is 1 (exactly
aligned) and a minimum (division by the gain shrink factor) when the dot product is $-1$ (exactly opposite).  This means if
the gradient is at right angles with the past gradient, then the step size remains unchanged (dot product is
0).

There are some additional considerations for the step size change, as well.  If the algorithm takes a step,
then discovers the new point has a worse response value than the point it's at, it will reject the new point,
re-evaluate the gradient, and flag the step size to be divided by the gain shrink factor.  Because of this, if
the gain shrink factor is too large, false convergence can be obtained when the algorithm struggles to find a
new downhill point to move to.  As a result, in practice it is often beneficial to have a gain shrink factor
that is smaller than the gain growth factor.

For this new example, we use a gain growth factor of 1.25 (meaning when the gradient continues in the same
direction our step grows by 25\% of its old value) and a gain shrink factor of 1.1 (meaning when the
gradient flips directions our step size shrinks to roughly 91\% of its old value).  We add this to the base case
(\texttt{simple.xml}) to get:
\xmlExample{framework/user_guide/optimizing/step_size.xml}{Optimizers}
Note the definition of the gain growth and shrink factors in the \xmlNode{convergence} block.  Reviewing the
output in \texttt{StepSize}, we can see more steps were taken than in the case using default step sizes, but
the final solution was $f(2.75,0.430)=0.013$ in 40 iterations, which is closer to the analytical solution of $f(3,0.5)=0$
than the original case using the same number of iterations.

It is often challenging to find the best gain growth and shrink factors, and these can have a very significant
impact on the speed and accuracy of the convergence process.
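To make the bookkeeping above concrete, here is a minimal Python sketch of the gain rule as described;
it is an illustration only, not RAVEN's implementation, and the exponential interpolation between the
two limiting factors is an assumption made for the sketch:
\begin{verbatim}
import numpy as np

def update_step(step, grad_new, grad_old, growth=1.25, shrink=1.15):
    """Scale the step size by the alignment of successive gradients.

    Alignment +1 (same direction) applies the full growth factor,
    -1 (opposite direction) divides by the full shrink factor, and
    0 (perpendicular) leaves the step size unchanged.
    """
    alignment = np.dot(grad_new, grad_old) / (
        np.linalg.norm(grad_new) * np.linalg.norm(grad_old))
    if alignment >= 0:               # still heading the same way: grow
        factor = growth ** alignment
    else:                            # direction flipped: shrink
        factor = shrink ** alignment  # shrink**(-1) = 1/shrink
    return step * factor
\end{verbatim}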
Too large a shrink factor results in poor
resolution of valleys, while too small a shrink factor results in many unnecessary evaluations of the model.

% TODO rewrite with new sampler-based denoising options
% \subsection{Denoising Stochastic Problems}
% While many of the models we see in optimization are deterministic (meaning running the same inputs into the
% model yields the same results every time), there are quite a few stochastic models that we might desire to
% optimize.  For example, a model that includes Brownian motion or is solved using unseeded random numbers might
% be considered stochastic.

% The difficulty with optimizing noisy problems rests in the unreliability of a single sample.  If we send a
% single set of inputs into a stochastic model, we can't trust the results to be consistent.  One way to measure
% the consistency of the results is through a signal-to-noise ratio (SNR).  There are many ways to define this
% value; for our purposes, we will use the ratio of the mean of the signal to the standard deviation of the
% signal, $\mu/\sigma$.

% To obtain an approximation of your SNR, you can use RAVEN to perform a Monte Carlo run on your model and then
% use the BasicStatistics postprocessor to collect the mean (expectedValue) and standard deviation (sigma) of
% your response.  It's important to make this sampling all at a single value in the input space, so replace your
% variables with constants in the Sampler input.  Once you have the mean and sigma, you have an idea of how
% noisy your model is.  An SNR of 1 means the signal is just as big as the noise, making it very difficult to
% optimize.  An SNR of less than 1 means the noise dominates the signal, and will make optimization almost
% impossible without introducing denoising.  An SNR of more than 1 indicates the signal is stronger than the
% noise, and perhaps denoising is not necessary.  If your standard deviation is 0, then you don't have any
% discernible noise!

% To denoise a model in RAVEN SPSA optimization currently, we turn our attention to the \xmlNode{Optimizer}
% subnode \xmlNode{parameter}, specifically the \xmlNode{numGradAvgIterations} node.  This parameter instructs
% RAVEN to perform multiple gradient evaluations, including multiple evaluations of each optimal point, and use
% the average to make decisions in optimization pathing.  By default, RAVEN takes one optimal point and one
% neighboring point to evaluate a gradient.  Increasing the \xmlNode{numGradAvgIterations} will increase the
% number of times the optimal point is sampled, and how many neighbor points are sampled.  This serves to
% denoise the model.

% However, this also raises the question: how many resamples do I need to denoise my model?  In a Wilks-like
% approach, we want to reduce the size of the confidence interval for our mean to be less than the noise.  The
% number of resamples required depends on the size of the confidence interval $z$ and the confidence-to-noise ratio
% $\xi$ we want to ultimately have for the optimization algorithm.  We also assume the distribution of the
% response is roughly Gaussian Normal, which may not be the case.
% The approximate equation for assuring the
% confidence interval is smaller than the noise is
% \begin{equation}
%   \frac{z\sigma}{\sqrt{n}} \leq \xi\sigma,
% \end{equation}
% \begin{equation}
%   n \geq \left(\frac{z}{\xi}\right)^2.
% \end{equation}
% Thus, the number of resamples depends on the confidence level as well as the desired ratio of the confidence interval
% to the noise.

% A few values for varying ratios are given in Table \ref{tab:confidence levels} for the 99\% confidence level ($z=2.576$).
% \begin{table}[htb]
%   \centering
%   \begin{tabular}{c c}
%     Confidence-to-noise $\xi$ & Resamples necessary \\ \hline
%     1.0 & 7 \\
%     0.9 & 9 \\
%     0.7 & 14 \\
%     0.5 & 27 \\
%     0.1 & 664 \\
%     0.05 & 2655
%   \end{tabular}
%   \caption{Estimate of the number of samples necessary to denoise models to varying confidence levels}
%   \label{tab:confidence levels}
% \end{table}
% That is, if you want the noise and confidence interval to have the same magnitude, only 7 resamples are
% required.  If, on the other hand, you want the confidence interval to be half the level of the noise, 27
% resamples are required.

% Note these are only guidelines; individual models may behave differently and require more or fewer resamples to
% provide a clear optimization path.

\subsection{Functional Constraints} \label{subsec:opt explicit constraint}
Sometimes an optimization problem has a constrained input space, possibly where there is a tradeoff between
two inputs.  In this event, RAVEN allows the user to define a \emph{constraint} function, which will cause
RAVEN to treat this constraint as it would a boundary condition.

For example, we will introduce a void in the input space where we reject inputs.  This void is defined by rejecting
all samples within $(x-1)^2 + y^2 < 1$.  We'll also include the modified step growth and shrink parameters
discussed in section \ref{subsec:opt stepsize}.

To include a constraint function, we first have to define it in the RAVEN input as a \xmlNode{Function}
entity:
\xmlExample{framework/user_guide/optimizing/constrain.xml}{Functions}
Note that the file \texttt{./Constrain/constraint.py} is located relative to the working directory.  Currently,
external functions are always Python files.  In that file,
note that the only method is \texttt{constrain}, which is RAVEN's keyword to find the constraint function.
RAVEN will pass in a \texttt{self} object, which will have the function variables defined in the
\xmlNode{Functions} input available as members.
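For this example, the contents of \texttt{constraint.py} might look like the following minimal sketch,
written from the description in this section rather than copied from the repository:
\begin{verbatim}
# constraint.py: reject samples inside the void (x-1)^2 + y^2 < 1.
# RAVEN passes an object whose members include the variables declared
# in the <Functions> input (here, x and y).

def constrain(self):
    # True means the point is allowed; False means it violates
    # the constraint and is rejected.
    return (self.x - 1.0)**2 + self.y**2 >= 1.0
\end{verbatim}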
The method \\texttt{constrain} then returns a boolean which is \\texttt{True} if\nthe evaluation does not violate the constraint, or \\texttt{False} if the constraint is violated.\n\nTo attach the constraint to the optimizer, simply add it as an assembled \\xmlNode{Function}:\n\\xmlExample{framework/user_guide/optimizing/constrain.xml}{Optimizers}\n\nAfter running, looking through the path followed by trajectory 0 shows that instead of following the path from\nsection \\ref{subsec:opt stepsize}, the path moves to lower \\emph{y} values before swinging back up toward the\noptimal point.\n\n%\\subsection{Implicit Constraints (Penalties)}\n% TODO come up with a working example\n", "meta": {"hexsha": "a73f6de3611f59bdb439fb76a14cb52682525396", "size": 26158, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_guide/optimizing.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "doc/user_guide/optimizing.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "doc/user_guide/optimizing.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 70.1286863271, "max_line_length": 127, "alphanum_fraction": 0.7801055127, "num_tokens": 6251, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.8031737916455819, "lm_q1q2_score": 0.5896255639414807}}
{"text": "\\section{Formatting}  \n\nA thesis is written with a single-column layout on one- or two-sided\nA4 sheets. The font type of the body text is usually Times New Roman \nand the font size is 12 pt. The text is fully justified and hyphenated.\nYou do not have to indent the paragraphs.\n\n\\subsection{Mathematical notations}\n\nNumbers are generally written using numerals for the sake of clarity,\nfor example ``6 stages'' rather than ``six stages''. You should\nalso use a thousand separators\\footnote{Use tilde \\~{} in LaTeX and a\n  special character \\textit{non-breaking space} in MS Word},\ni.e. instead of 55700125 write 55~700~125. Never omit the leading zero\nin decimals. For example, it is correct to write ``0.5'' and wrong to\nwrite ``.5''. A comma is used as a decimal separator in the Estonian\nlanguage and a period in the English language.\n\nLike numbers, it is advisable to abbreviate units of\nmeasurement. There is a space between the number and the unit, but you\nshould keep them on the same line. It is better to compile a table or\ngraph than include a great deal of numerical values in the body\ntext. Use precise language and put numbers on a scale.\n\nNewton's Second Law can be presented in the following way:\n\n\\begin{equation}\n  \\label{eq:newton2}\n F = ma,\n\\end{equation}\n\nwhere $m$ denotes the mass of an object, $a$ means acceleration, and\n$F$ means force. Please note that all the variables must be defined at\nthe point of their first appearance. All sentences end with a\npunctuation mark, and the main elements of a sentence are separated by\na comma in accordance with the rules of English grammar. Consider formulas \nas part of a sentence, therefore use commas or period accordingly after \nformulas (e.g. see equation (\\ref{eq:newton2})).\n\nFormulas can be numbered or not numbered. Numbered formulas are used if there is need to refer to these formulas later on. For example, let us have a formula\n\\begin{equation}\\label{eq:pythagoras}\nc^2=a^2+b^2.\n\\end{equation}\nThen, since \\eqref{eq:pythagoras} is numbered, we can always refer to it in later text. For example, specifying $a=3$ and $b=4$ and applying \\eqref{eq:pythagoras} implies $c=5$.\n\nOn the other hand, if we do not refer to some formula or equation at all, it should not have a number, for example:\n\\[\nS_n=\\sum_{i=1}^n X_n.\n\\]\n\nOccasionally mathematical notations are preceded by an\nidentifier, such as 'Definition 1' or 'Theorem 1'. Simple formulas may \nbe displayed within the body of the text without numbering.\n\nPlease note that usually different styles applied to same letter/symbol refer to different object, for example, $x$, x, $X$, \\textit{\\textbf{X}}, $\\mathbf{X}$, ..., are all different. Also, a general rule is that variables are in italic and function names are in regular font, for example, $s$, $u$ and $p$ are some variables, and supremum function $\\sup$ is in regular font so we do not confuse it with the product $sup=s\\cdot u\\cdot p$. \n\nPlease try to use unified notation (i.e. 
same letter/symbol should mean the same variable or function) throughout the whole document (as much as possible).


LaTeX is also the best editor for writing more complex equations, such as
\begin{equation}
  \label{eq:fourier}
  G^+(t,t')= \int G^+(E) \exp[-iE(t-t')/\hbar] dE.
\end{equation}

\subsection{Theorems and definitions}

Mathematical theses often include elements that require special formatting and numbering, such as theorems, definitions, propositions, remarks, corollaries, lemmas and so on. Such formatting can be achieved by defining a special environment with the \verb+\newtheorem+ command in the preamble of the document. For example:

\begin{theorem}
Let $f$ be a function whose derivative exists at every point. Then $f$ is
a continuous function.
\end{theorem}

\begin{proof}
If the derivative of a function $f$ exists at a point $x_0$ (meaning that $f$ is differentiable at $x_0$), then $f$ is continuous at $x_0$ (differentiability implies continuity; not proven here).

Assume that the function $f$ is differentiable at every point. This implies that $f$ is continuous at every point, and hence $f$ is a continuous function.
\end{proof}

For the next theorem we need a definition:

\begin{definition}
A right triangle is a triangle in which one angle is a right angle (that is, a 90-degree angle).
\end{definition}

Notice that the definition does not have a number. This can be achieved by defining the definition environment with the \verb+\newtheorem*{}+ command.

\begin{theorem}[Pythagorean theorem]
\label{pythagorean}
This is a theorem about right triangles and can be summarised in the next
equation
\[ x^2 + y^2 = z^2 \]
\end{theorem}

And a consequence of Theorem \ref{pythagorean} is the statement in the next
corollary (you can reference theorems when a label is assigned).

\begin{corollary}
There is no right triangle whose sides measure 3~cm, 4~cm, and 6~cm.
\end{corollary}

Notice that the corollary has different numbering; this is because the environment is defined with \verb+\newtheorem{corollary}{Corollary}[theorem]+, where the square brackets determine the environment that will be used for numbering (also referred to as a counter). The same logic can be applied to formulas and theorems, if there are many such elements. Often the counters are determined by section; for example, ``Theorem 2.3'' refers to the 3rd theorem in the 2nd section of a document.

By default, each theorem uses its own counter. However, it is common for similar types of theorems (e.g. Theorems, Lemmas and Corollaries) to share a counter. In this case, define subsequent theorems as:
\begin{verbatim}
\newtheorem{<name>}[<counter>]{<Printed output>}
\end{verbatim}
where counter is the name of the counter to be used.


\subsection{Figures}

You must refer to all the figures in the body text. The reference
should preferably appear on the same page as the actual figure or
before it (e.g. Figure~\ref{fig:ex_fig}). Figures and tables must be numbered consistently throughout the thesis
and are primarily placed at the top of the page, but you are free to
decide where they fit best. Never start a chapter with a figure, table
or list.
% Note: put tilde between text and ref command, like in
% 'Figure~\ref{fig:ex_fig}' above. Tilde (~) puts a white space but
% prevents line break.
Same thing applies to Table~\\ref{tab:summary}\n% as well.\n\nFigures and the caption are consistently centered  and the caption is \nplaced under the figure and always on the same page as the figure. \nAll figures must be explained in the body text, so that readers know \nwhat they are supposed to notice. \n\nThe figures should be in the same language as other text. The recommended \nfont size is the same as that of the body text but no smaller than 10 pt. \nThe figures must be readable, even if your thesis is printed in grey-scale!\n\n% Here's an example how to add a figure. Default placement of figures\n% is at top of page, i.e. placement specifier is '[t]' could be\n% omitted.  The dimensions can be relative to text width (or height)\n% or absolute (4 cm).\n\n\\begin{figure}[t]\n  \\begin{center}\n    \\includegraphics[width=0.7\\textwidth]{figure}\n  \\end{center}\n  \\caption{Diagram of achieved height and time of flight with different angles of release.}\n  \\label{fig:ex_fig}\n\\end{figure}\n\n\n\\subsection{Tables}\n\nTables have numbered captions, see Table\\ref{tab:thin_film} for\nexample. The caption is placed on the same page above the table. You \nmust refer to all the tables in the body text. In addition, you must \ndiscuss the content of any tables in the body text to ensure that readers \nunderstand their relevance.\n\n\\begin{table}[ht]\n  \\small\n  \\begin{center}\n    \\caption{Example of evaporation conditions in a thin film structure.}\n    \\label{tab:thin_film}\n    \\begin{tabular}{l | r r r r r}\n      % l = align to left (e.g. text), c=align to center, r=align to\n      % right (e.g. numbers), Pipe | creates vertical line\n      % Let's put 1 horinzontal line above the table, 2 after header rows,  and 1 below\n      \\hline\n      \\textbf{Substance} & \\textbf{Thickness}& \\textbf{Correction } & \\textbf{Pressure}  & \\textbf{Current} & \\textbf{Speed} \\\\\n                         & \\textbf{(nm)}     & \\textbf{coefficient} & \\textbf{(mbar)}    & \\textbf{(mA)}    & \\textbf{(nm/s)} \\\\\n      \\hline \n      SiO$_2$\t& 181.0\t& 1.10\t& $3.0\\cdot10^5$\t& 20-23\t &0.2 \\\\\n      TiO$_2$\t& 122.1\t& 1.55\t& $1.5\\cdot10^4$\t& 100-93 &0.1 \\\\\n      \\hline\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\n\nOften it is better to create the table in, e.g. MS Excel, and import\nit as .eps or .pdf file, for example, when you calculate some of the\nvalues automatically.\n%\\begin{table}[h]\n%  \\begin{center}\n%    \\caption{Example of evaporation conditions in a thin film structure.}\n%    \\label{tab:thin_film_graphics}\n%    \\includegraphics[width=8cm]{my_table.eps}\n%  \\end{center}\n%\\end{table}\n\nYou can use boldface to highlight the header row. Do not surround all \nthe cells with a border, as it may make your table harder to read. \nPut a line on top and bottom of the table. You can add a horizontal line\ngrouped into categories.\n\nThe numbers are right aligned (optimally lined up at the decimal\npoint) for easy comparison. You should preferably use SI units,\nestablished prefixes and rewrite large numbers so that the power of\nten should be placed in the title of the column instead of each row,\nif possible. \n\n\n\\subsection{Programs and algorithms}\n\nCodes and algorithms are written using mono-spaced font, such as\nCourier New, Consolas or their variations. If the length of the code\nor algorithm is less than 10 lines and you do not refer to it later on\nin the text, you can present it similarly to formulas.  
Here's an
example showing a snippet (install the package \verb!listings! for coloring your code according to type):

\begin{verbatim}
\begin{lstlisting}[style=console, % title={Template files} ]
all: ${TARGET}.tex
    pdflatex ${TARGET}.tex
    bibtex ${TARGET}
    pdflatex ${TARGET}.tex
\end{lstlisting}
\end{verbatim}


If the code is longer but shorter than a page, you present it like a
figure (Program 4.1) titled ``Program'' or ``Algorithm''.
You should add some comments to the code and indent it
consistently. The actions performed by the code must be outlined in
broad terms in the body text. Line numbers make it much easier to
refer to the code in the text.

LaTeX has packages that enable automatic code formatting, such as highlighting reserved words, bringing out string input, or coloring comments differently.
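For instance, a minimal \verb!listings! setup along these lines could go in the preamble; the
particular style choices are illustrative, not prescriptive:

\begin{verbatim}
\usepackage{listings}
\usepackage{xcolor}
\lstset{
  language=Python,            % default language of the listings
  basicstyle=\ttfamily\small, % mono-spaced font, as recommended above
  keywordstyle=\bfseries,     % highlight reserved words
  stringstyle=\itshape,       % bring out string input
  commentstyle=\color{gray},  % color comments differently
  numbers=left,               % line numbers ease referring to the code
  numberstyle=\tiny
}
\end{verbatim}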
{"text": "\\subsection{Circular disk with an edge crack in tension}\n\n\\paragraph{}\nNext, the present formulation is applied to problems with strong discontinuities and singularities.\nThe unique feature of the proposed framework is that the geometry is exactly represented by using NURBS and the singularities are captured semi-analytically without a priori knowledge of the asymptotic fields.\nIn the first example, consider a circular disk with an edge crack (see Fig.~\\ref{iso_fig:circular_disk_geo_bc}) with Young\u2019s modulus $E=\\SI{1}{\\newton \\per \\square \\meter}$ and Poisson's ratio $\\nu=0.3$.\n    \\begin{figure}[h!]\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{isogeometric_sbfem/images/circular_disk_geo_bc.eps}\n        }\n        \\caption{Circular disk with an edge crack}\n        \\label{iso_fig:circular_disk_geo_bc}\n    \\end{figure}\n\nThe analytical displacements solution for mode \\RN{1} are given by:\n    \\begin{subequations}\n        \\begin{align}\n            u_x &= \\frac{1}{2}\\left[\n                \\left(\n                    \\kappa + \\frac{n}{2} + (-1)^n\n                \\right) \\cos \\left(\n                    \\frac{n\\theta}{2}\n                \\right) -\n                \\frac{n}{2} \\cos \\left(\n                    \\left(\n                        \\frac{n}{2} -2\n                    \\right) \\theta\n                \\right)\n            \\right]\\\\\n            u_y &= \\frac{1}{2}\\left[\n                \\left(\n                    \\kappa - \\frac{n}{2} - (-1)^n\n                \\right) \\sin \\left(\n                    \\frac{n\\theta}{2}\n                \\right) +\n                \\frac{n}{2} \\sin \\left(\n                    \\left(\n                        \\frac{n}{2} -2\n                    \\right) \\theta\n                \\right)\n            \\right]\\\\\n        \\end{align}\n    \\end{subequations}\n\nwhere $\\kappa$ is the Kolosov constant defined in Eq.~\\ref{iso_eq:kolosov_constant}.\n\nThe analytical stress solution for mode \\RN{1} are given by:\n    \\begin{subequations}\n        \\begin{align}\n            \\sigma_{xx} &= \\frac{n}{2}\\left[\n                \\left(\n                    2 + \\frac{n}{2} + (-1)^n\n                \\right) \\cos \\left(\n                    \\left(\n                        \\frac{n}{2} - 1\n                    \\right)\\theta\n                \\right) - \\left(\n                    \\frac{n}{2} - 1\n                \\right)\n                \\cos \\left(\n                    \\left(\n                        \\frac{n}{2} -3\n                    \\right) \\theta\n                \\right)\n            \\right]\\\\\n            \\sigma_{yy} &= \\frac{n}{2}\\left[\n                \\left(\n                    2 - \\frac{n}{2} - (-1)^n\n                \\right) \\cos \\left(\n                    \\left(\n                        \\frac{n}{2} - 1\n                    \\right)\\theta\n                \\right) + \\left(\n                    \\frac{n}{2} - 1\n                \\right)\n                \\cos \\left(\n                    \\left(\n                        \\frac{n}{2} -3\n                    \\right) \\theta\n                \\right)\n            \\right]\\\\\n            \\tau_{xy} &= \\frac{n}{2}\\left[\n                \\left(\n                    \\frac{n}{2} - 1\n                \\right) \\sin \\left(\n                    \\left(\n                        \\frac{n}{2} - 3\n                    \\right)\\theta\n                \\right) - \\left(\n   
                \frac{n}{2} + (-1)^n\right) \sin\left(\left(\frac{n}{2} - 3\right)\theta\right)
            \right]
        \end{align}
    \end{subequations}

\paragraph{}
In this example, the circular disk is represented by NURBS.
The control net and the location of the control points are shown in Fig.~\ref{iso_fig:circular_disk_mesh} for different NURBS orders.
    \begin{figure}
        \begin{subfigure}[b]{0.5\linewidth}
            \centering
            \scalebox{0.4}{
                \includegraphics{isogeometric_sbfem/images/circular_disk_2nd_9cp.png}
            }
            \caption{9 control points, 2nd order}
        \end{subfigure}
        \begin{subfigure}[b]{0.5\linewidth}
            \centering
            \scalebox{0.4}{
                \includegraphics{isogeometric_sbfem/images/circular_disk_2nd_17cp.png}
            }
            \caption{17 control points, 2nd order}
        \end{subfigure}

        \begin{subfigure}[b]{0.5\linewidth}
            \centering
            \scalebox{0.4}{
                \includegraphics{isogeometric_sbfem/images/circular_disk_3rd_17cp.png}
            }
            \caption{17 control points, 3rd order}
        \end{subfigure}
        \begin{subfigure}[b]{0.5\linewidth}
            \centering
            \scalebox{0.4}{
                \includegraphics{isogeometric_sbfem/images/circular_disk_4th_17cp.png}
            }
            \caption{17 control points, 4th order}
        \end{subfigure}
        \caption{Meshing of the circular disk with an edge crack}
        \label{iso_fig:circular_disk_mesh}
    \end{figure}

\paragraph{}
The circular disk is subjected to a far-field tension, and the displacement and stress modes are computed by the proposed isogeometric SBFEM.
It is noted that only the boundary of the circular disk is discretized using the NURBS and no tensor product of the corresponding knot vectors is required to represent the unknown fields inside the domain.
The convergence of the numerical stress intensity factor (SIF) and the T-stress with mesh refinement is shown in Tab.~\ref{iso_tab:circular_disk_res}.

\begin{table}[]
\caption{T-stress and stress intensity factors for the circular disk with an edge crack.}
\label{iso_tab:circular_disk_res}
\begin{tabularx}{\textwidth}{XXXXXXX}
\toprule
    Total    &   \multicolumn{2}{c}{NURBS $p=2$} &\multicolumn{2}{c}{NURBS $p=4$} &\multicolumn{2}{c}{NURBS $p=6$}\\
    \cmidrule{2-7}
    DOF      &   SIF     &   T-stress            &SIF     &   T-stress            &SIF     &   T-stress           \\
    \cmidrule{1-1} \cmidrule{2-3} \cmidrule{4-5} \cmidrule{6-7}
    18       &   2.3520  &   2.9442              &        &                       &        &                      \\
    34       &   2.8693  &   5.1991              &2.8838  &5.4050                 &        &                      \\
    74       &   2.8838  &   5.3112              &2.8840  &5.3445                 &2.8840  &5.3447                \\
    130      &   2.8840  &   5.3318              &2.8840  &5.3444                 &2.8840  &5.3444                \\
\bottomrule
\end{tabularx}
\end{table}

\paragraph{}
It can be seen that with mesh refinement the SIF and the T-stress converge.
Increasing the order of the NURBS functions improves the convergence
behavior.\nEq.~\\ref{iso_eq:isosbfem_fracture_stress_field} is the parametric equation for the stress field in the polar coordinates $r$ and $\\theta$.\nThe terms $\\left(\n        r_\\eta^{\\lambda_i+1}(\\eta)\n        \\boldsymbol{\\psi}_{\\sigma_i}(\\eta)\n    \\right)$\nin Eq.~\\ref{iso_eq:isosbfem_fracture_stress_field} together with $\\theta(\\eta)$ in Eq.~\\ref{lr_eq:sbfem_transform} are the stress modes describing the angular distribution at a constant radial coordinate $r$.\nFor the converged result, Fig.~\\ref{iso_fig:circular_disk_modes} shows the displacement and the stress distribution at a constant radial coordinate r around the crack tip for mode \\RN{1} fracture.\nEach of the stress modes is normalized with its value of $\\sigma_{yy}=0^\\circ$ .\n    \\begin{figure}[h!]\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{isogeometric_sbfem/images/circular_disk_displacement_modes.png}\n            }\n            \\caption{displacements}\n        \\end{subfigure}\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{isogeometric_sbfem/images/circular_disk_stress_modes.png}\n            }\n            \\caption{stress}\n        \\end{subfigure}\n    \\caption{Displacement and stress modes of circular disk with an edge crack using cubic NURBS functions}\n    \\label{iso_fig:circular_disk_modes}\n    \\end{figure}\n\n\\paragraph{}\nThe stress modes from the scaled boundary formulation are compared with the analytical solutions and a very good agreement is observed in Tab.~\\ref{iso_tab:circular_disk_res}.\nThe convergence of the displacement and stress modes with h and p refinement is shown in Fig.~\\ref{iso_fig:circular_disk_convergence}.\nIt can be seen that with refinement the solution converges monotonically.\n    \\begin{figure}[h!]\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.65}{\n                \\includegraphics{isogeometric_sbfem/images/circular_disk_convergence_displacement.eps}\n            }\n            \\caption{Displacement mode}\n        \\end{subfigure}\n\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.65}{\n                \\includegraphics{isogeometric_sbfem/images/circular_disk_convergence_displacement.eps}\n            }\n            \\caption{Displacement mode}\n        \\end{subfigure}\n    \\caption{Circular disk with an edge crack: convergence of the displacement mode and stress mode}\n    \\label{iso_fig:circular_disk_convergence}\n    \\end{figure}\n\n", "meta": {"hexsha": "3e4e47c9d5f720790143de79bed7a08bca10e05e", "size": 9292, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "isogeometric_sbfem/ex_circular_disk_edge_crack.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "isogeometric_sbfem/ex_circular_disk_edge_crack.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": 
null, "max_forks_repo_path": "isogeometric_sbfem/ex_circular_disk_edge_crack.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2186046512, "max_line_length": 209, "alphanum_fraction": 0.5459535084, "num_tokens": 2422, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173791645582, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.589625559269373}}
{"text": "\\documentclass[11pt]{article}\n\n%% WRY has commented out some unused packages %%\n%% If needed, activate these by uncommenting\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n%\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n\\geometry{a4paper,left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm}\n%\\geometry{landscape}                % Activate for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\n%for figures\n%\\usepackage{graphicx}\n\n\\usepackage{color}\n\\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue\n\\definecolor{mylilas}{RGB}{170,55,241}\n%% for graphics this one is also OK:\n\\usepackage{epsfig}\n\n%% AMS mathsymbols are enabled with\n\\usepackage{amssymb,amsmath}\n\n%% more options in enumerate\n\\usepackage{enumerate}\n\\usepackage{enumitem}\n\n%% insert code\n\\usepackage{listings}\n\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{hyperref}\n\n%% colors\n\\usepackage{graphicx,xcolor,lipsum}\n\n\n\\usepackage{mathtools}\n\n\\usepackage{graphicx}\n\\newcommand*{\\matminus}{%\n  \\leavevmode\n  \\hphantom{0}%\n  \\llap{%\n    \\settowidth{\\dimen0 }{$0$}%\n    \\resizebox{1.1\\dimen0 }{\\height}{$-$}%\n  }%\n}\n\n\\title{WKB approximate solutions for standard baroclinic modes}\n\\author{Cesar B Rocha\\thanks{Scripps Institution of Oceanography, University of California, San Diego; \\texttt{crocha@ucsd.edu}}}\n\\date{\\today}\n\n\\begin{document}\n\n\\include{mysymbols}\n\n\\maketitle\n\nIn these notes I derive approximate WKB solutions to the standard baroclinic modes of physical oceanography. The elementary textbook example with constant buoyancy frequency is recovered as a special case. \n\n\\section{Pressure modes}\n\nThe standard baroclinic modes for pressure, here denoted $\\sp_n(z)$,  is defined by the regular Sturm-Liouville eigenproblem\n\n\\beq\n\\label{eigpb}\n\\sL \\sp_n = -\\kappa_n^2 \\sp_n\\com\n\\eeq\nwith homogeneous Neumann boundary conditions\n\\beq\n\\label{bc}\n@z = -h,\\,0:\\qquad \\sp_n' = 0\\com\n\\eeq\nand the self-adjoint Linear operator\n\\beq\n\\label{strech}\n\\sL \\defn \\frac{\\dd}{\\dd z}\\frac{f_0^2}{N^2}\\frac{\\dd}{\\dd z} \\per\n\\eeq\nHence the eigenmodes, $\\sp_n$, are orthogonal. The eigennvalue $\\kappa_n$ is the deformation wavenumber of the $n$'th mode. It is convenient to normalize the eigenmodes to have unit $L^2$-norm: \n\\beq\n\\label{normalization0}\n\\frac{1}{H}\\int_{-h}^{0}\\!\\! \\sp_n \\sp_m \\dd z = \\delta_{mn}\\com\n\\eeq\nwhere $\\delta_{mn}$ is the Kronecker delta. Equation \\eqref{eigpb} can be rewritten as\n\\beq\n\\label{eigpb_wkb}\n\\bur \\sp_n'' + \\left[\\bur\\right]' \\sp_n' + \\kappa_n^2 \\,\\sp_n = 0\\per\n\\eeq\nIntroducing the following definitions\n\\beq\n\\label{notation}\n\\ep \\defn \\frac{1}{\\kappa_n} \\qquad \\text{and} \\qquad S^2(z) \\defn \\ibur \\per \n\\eeq\nwe have the renotated equation\n\\beq\n\\label{dirich_eigpb_wkb_ep}\n\\ep^2\\, \\sp_n'' -\\ep^2 \\left[\\log S^2(z)\\right]' \\sp_n' + S^2(z) \\sp_n = 0\\per\n\\eeq\nIn the WKB spirit we assume that $S^2(z)$ is slowly varying i.e., the buoyancy frequency $N^2(z)$ does not vary very fast. (This assumption may be problematic near the base of the mixed-layer.) We also assume that $\\ep$ is small; the accuracy of the WKB solution improves with mode number. 
We now make the exponential approximation (e.g., Bender and Orszag)
\beq
\sp_n^e \defn \ee^{Q(z)/\ep}\per
\eeq
Hence
\beq
{\sp_n^e}' = \frac{Q'(z)}{\ep}\sp_n^e\com
\eeq
and
\beq
{\sp_n^e}'' = \left[\left(\frac{Q'(z)}{\ep}\right)^2 + \frac{Q''(z)}{\ep} \right]\sp_n^e\per
\eeq
Next we expand $Q(z)$ in powers of $\ep$:
\beq
\label{aseries}
Q(z) = Q_0(z)  + \ep\,Q_1(z) + \ep^2\,Q_2(z) + \mathcal{O}(\ep^3)\per
\eeq
Substituting \eqref{aseries} in \eqref{dirich_eigpb_wkb_ep} we obtain, to lowest order, $\mathcal{O}(\ep^0)$,
\beq
\label{lowest_order_eqn}
Q_0'^2 + S^2(z) = 0\per
\eeq
Thus
\beq
\label{Q0}
Q_0 = \pm \ii \int^z \!\!\!S(\xi) \,\dd \xi  = \pm \ii \tfrac{1}{f_0} \int^z \!\!\!N(\xi) \,\dd \xi \per
\eeq
At next order, $\mathcal{O}(\ep)$, we have
\beq
\label{first_order_eqn}
2\,Q_0'Q_1' + Q_0'' - Q_0'  \left[\log S^2(z)\right]' = 0\per
\eeq
Hence
\beq
\label{Q_1}
Q_1  =   \frac{1}{2} \log S(z) - \frac{1}{2}\log \pm \ii S(z) + \text{const}\,\, \per
\eeq
Notice that the imaginary part of the $\log$ in \eqref{Q_1} just contributes an irrelevant constant. Thus
\beq
Q_1 = \log \sqrt{S(z)} + \text{const} \,\, \per
\eeq
In the most common WKB approximation (a.k.a.\ ``physical optics'') we truncate \eqref{aseries} at $\mathcal{O}(\ep)$. The solution to \eqref{dirich_eigpb_wkb_ep}, consistent with the bottom boundary condition \eqref{bc}, is
\beq
\sp_n^{po} = A_n\, \sqrt{N(z)}\, \cos \left(\frac{\kappa_n}{f_0} \int_{-h}^{z} \!\!\!N(\xi) \dd \xi\right)\com
\eeq
where $A_n$ is a constant. By imposing the boundary condition at $z=0$ \eqref{bc}, we obtain the eigenvalues $\kappa_n$:
\beq
\label{kappan}
\kappa_n = \frac{n \pi \, f_0}{\overline{N}\,h} \com\qquad n=0,1,2,\ldots \com
\eeq
where the mean buoyancy frequency is
\beq
\label{N_avg}
\overline{N} \defn\frac{1}{h} \int_{-h}^0N(\xi)\dd \xi\per
\eeq
The constant $A_n$ is determined by the normalization condition \eqref{normalization0}. We have
\beq
\label{an_eqn}
A_n^2 \, \int_{-h}^{0}\!\! N(z) \cos^2 \left(\frac{\kappa_n}{f_0}\int_{-h}^{z}\!\!\!N(\xi) \dd \xi\right) \dd z = h\com\qquad n\ge 1\per
\eeq
The integral in \eqref{an_eqn} can be evaluated exactly by making the change of variables
\beq
\eta \defn \frac{\kappa_n}{f_0}\int_{-h}^{z}\!\!\! N(\xi) \dd \xi  \qquad \Rightarrow \qquad \dd\eta = \frac{\kappa_n}{f_0}N(z) \dd z\com
\eeq
and using the expression for the eigenvalues \eqref{kappan}. We obtain
\beq
A_n = \Big(2/\overline{N} \Big)^{1/2} \com\qquad n\ge 1\per
\eeq
Thus the WKB approximate solution to the standard pressure modes is
\beq
\sp_n^{po} = \left[\frac{2\,N(z)}{\Nb} \right]^{1/2}\!\!\cos\left( \frac{n \pi}{\Nb\,h} \,\,\,\int_{-h}^{z} \!N(\xi) \dd \xi\right)\com\qquad n\ge1\per
\eeq
The amplitude of the baroclinic modes at the boundaries is independent of the eigenvalue:
\beq
\sp_n^{po}(z=0) = (-1)^{n} \left[\frac{2 N(0)}{\Nb}\right]^{1/2}\com
\eeq
and
\beq
\sp_n^{po}(z=-h) = \left[\frac{2 N(-h)}{\Nb}\right]^{1/2}\per
\eeq

The barotropic mode is not recovered from the WKB solution because $\kappa_0 = 0$. From \eqref{eigpb} we have that with $\kappa_0 = 0$, the barotropic mode is constant, independent of the stratification.
With the normalization \\eqref{normalization0} we obtain $\\sp_0 = 1$.\n\n\\subsection*{Constant buoyancy frequency}\nWith $N = \\text{const.}$  the modes are simple sinusoids. That exact result is recovered as a special case of the WKB solution\n\\beq\n\\sp_n^{po} = \\sqrt{2} \\cos\\left[n \\pi (1+z/h)\\right]\\per\n\\eeq\n\n\\section{Density modes}\nSimilarly the baroclinic modes for density, here denoted by $\\sr_n$, are defined via the eigenproblem \n\\beq\n\\label{eig_prob_rho}\n\\sr_n'' = -\\kappa_n^2 \\ibur \\sr_n\\com\n\\eeq\nwith homogeneous Dirichlet boundary conditions\n\\beq\n\\label{rho_bc}\n@z = -h,0: \\qquad \\sr_n = 0\\com\n\\eeq\nand normalization\n\\beq\n\\label{normalization2}\n\\frac{1}{h}\\int_{-h}^0 \\!\\!\\sr_n \\sr_m \\dd z = \\delta_{mn}\\per \n\\eeq\nAlternatively, we can work on  the approximation from the beginning. The WKB approximate solution to \\eqref{eig_prob_rho}-\\eqref{rho_bc}, consistent with the bottom boundary conditions \\eqref{rho_bc}, is\n\\beq\n\\sr_n^{po} = \\frac{B_n}{\\sqrt{N(z)}} \\sin \\left(\\frac{\\kappa_n}{f_0} \\int_{-h}^{z} N(\\xi)\\dd \\xi \\right) \\com\n\\eeq\nThe eigenvalues $\\kappa_n$ are the same as before \\eqref{kappan}. (This should be no surprise because it follows  from the definition of $\\sp_n$ and $\\sr_n$. Nonetheless,  the verification is a good sanity check.) To find $B_n$ we use the normalization \\eqref{normalization2}\n\\beq\n\\label{bn_eqn}\nB_n^2 \\, \\int_{-h}^{0}\\!\\!\\frac{1}{N(z)} \\sin \\left(\\frac{\\kappa_n}{f_0}\\int_{-h}^{z}\\!\\!\\!N(\\xi) \\dd \\xi\\right) \\dd z = h\\com\\qquad n\\ge 1\\per\n\\eeq\nWe use a similar trick as above i.e., we change variables with\n\\beq\n\\eta \\defn \\frac{\\kappa_n}{N^2(z) f_0}\\int_{-h}^{z}\\!\\!\\! N(\\xi) \\dd \\xi  \\qquad \\Rightarrow \\qquad \\dd\\eta = \\frac{\\kappa_n}{N(z) f_0} \\dd z\\com\n\\eeq\nwhere, in the WKB spirit, we used the fact that $N(z)$ is slowly varying when differentiating the relation above. We obtain\n\\beq\nB_n = \\Big(2 N^2(0)/\\Nb\\Big)^{1/2}\\per\n\\eeq\nThus the WKB approximate solution to the density modes is\n\\beq\n\\label{rpo_final}\n\\sr_n^{po} = \\left(\\frac{2 N^2(0)}{\\Nb N(z)}\\right)^{1/2} \\sin \\left(\\frac{n \\pi}{\\Nb\\,h} \\int_{-h}^{z} N(\\xi)\\dd \\xi \\right)\\com\\qquad n\\ge 1 \\per\n\\eeq\nFinally, note that the modes are simply related\n\\beq\n\\frac{\\dd \\sr_n^{po}}{\\dd z} = N(0) \\underbrace{\\frac{n\\pi}{\\Nb\\,h}}_{=\\kappa_n/f_0}\\,\\sp_n^{po}\\per\n\\eeq\n\n\\subsection*{Constant buoyancy frequency}\nAgain we recover the $N = \\text{const.}$ special case from \\eqref{rpo_final}:\n\\beq\n\\sr_n^{po} = \\sqrt{2}\\sin \\left[n\\pi(1 + z/h)\\right]\\per\n\\eeq\n\n%\\section{Probing WKB}\n%How accurate is the WKB approximate solution for the standard modes? To answer this questions,\n%we first recall that the fundamental assumption in WKB is the slowly varying nature of $N^2(z)$.\n%Second, the introduction  perturbation series \\eqref{aseries} hinges on the smallness of the parameter\n%$\\ep$. Hence we expect the accuracy to increase with mode number. Specifically, the physical optics\n%approximation truncates the series at first order, so the error decreases as $n^{-2}$.\n%\n%\n%To verify the predictions above for the accuracy of the WKB solution, we consider an example with exponential stratification: $N^2(z) = N_0^2 \\ee^{-\\alpha z}$,\n%where $N_0$ is the stratification frequency at the surface and $\\alpha^{-1}$ is the e-folding depth (see figure 1). 
% In this simple example,
%the standard modes can be calculated analytically in terms of Bessel functions (see LaCasce JPO 2012).
%Figure \ref{N2} shows the stratification profiles for two different e-folding scales: $\alpha=2$ and $\alpha=5$. With $\alpha=5$,
%the buoyancy frequency is strongly surface intensified as opposed to the slowly varying $\alpha=2$ profile. Indeed, the relative
%error of the WKB eigenvalue \eqref{kappan} as compared to the exact eigenvalue is more than 10 times smaller for $\alpha=2$
%than for $\alpha=5$ (see figure \ref{rerror}) --- the relative error in the first deformation wavenumber for $\alpha=2$ is spectacularly less than $1\%$! As expected,
%the error decreases with mode number $n$ as $n^{-2}$. The eigenmodes are also well approximated by the WKB approximation (see figure \ref{eigenstructure}
%for the 4 gravest baroclinic modes with $\alpha=5$).  While there are some differences in amplitude, particularly at depth, the overall structure and zero-crossing depths
%are well captured by the WKB approximation.


%\begin{figure}[!ht]
%\label{N2}
%  \centering
%    \includegraphics[width=0.5\textwidth]{figs/N2.pdf}
%      \caption{The exponential stratification $N^2(z) = N_0^2 \ee^{-\alpha z}$ with $\alpha = 5$ and $\alpha=2$.}
%\end{figure}
%
%\begin{figure}[!ht]
%\label{rerror}
%  \centering
%    \includegraphics[width=0.5\textwidth]{figs/RelativeError.pdf}
%      \caption{The relative error in deformation wavenumber square (the eigenvalue) for exponential stratification $N^2(z) = N_0^2 \ee^{-\alpha z}$ with $\alpha = 5$ and $\alpha=2$. The magnitude of the error decreases with $\alpha$ ---  the buoyancy frequency is more slowly varying with $\alpha=2$.  The error decreases with mode
%      number as $n^{-2}$ ($-2$ slope in log$\times$log space).}
%\end{figure}
%
%\begin{figure}[!ht]
%\label{eigenstructure}
%  \centering
%    \includegraphics[width=0.9\textwidth]{figs/Eigenstructure.pdf}
%      \caption{The pressure modes for exponential stratification $N^2(z) = N_0^2 \ee^{-\alpha z}$ with $\alpha = 5$ calculated exactly (black) and using WKB (blue).}
%\end{figure}
%


\end{document}
"num_tokens": 4208, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5894109748716676}}
{"text": "\\section{Decimal and Binary Math}\n\nThe next few sections introduce some math ideas that may or may not be familiar to you. In order to really understand how computers work, you need to understand several math concepts that are not hard, but may be unfamiliar. The first concept is the \\emph{exponent}. \n\n\\subsection*{Bases and Exponents}\n\nExponentiation is a mathematical operation, written as $b^n$, involving two numbers, the base, $b$, and the exponent, $n$. Exponentiation means ``repeated multiplication of the base.\" \n\n\\noindent That is, $b^n$ is the product of multiplying the base number $n$ times:\n\n\n\\begin{equation*}\nb^{n}=\\underbrace {b\\times \\cdots \\times b} _{n}\n\\end{equation*}\n\n\\noindent In this case, $b^n$ is called the ``\\emph{n-th} power of b\", or ``b raised to the power n.\" So 3 to the 4\\textsuperscript{th} power looks like this: \n\n\\begin{equation*}\n3^4 ~=~ 3 \\times 3 \\times 3 \\times 3 ~=~ 9 \\times 9 ~=~  81\n\\end{equation*}\n\n\n\nPut another way: the exponent is a number, smaller and on the upper right hand side of a number, that means ``multiplying a number times itself zero times, once, or more than once\" depending on whether the number is 0, 1, 2, or another number (written $n$ in the description above). It is possible to use fractions as exponents, but we won't talk about that here.\n\nThe base can be any number, though most people think most easily in base-10. So in base-10, 10 to the power 2, or 10 to the second power, means $10 \\times 10$. \n\n\\subsection*{Order of Magnitude (Columns)}\n\nWhen counting up from zero, by one, in base 10, you eventually get to 9. In order to count any higher, you must ``carry the one\" over to the next column and reset the counter in the ``ones\" column (the place that counts by one, from zero to nine). When you move from the ``\\emph{ones}'' column to the ``\\emph{tens}'' column, the next column to the left represents the next \\emph{order of magnitude} of the base. ``Magnitude\" means ``size\", and in math, it means, specifically, moving from counting by one number (the base) to counting by the base times itself -- first twice, then three times, then more. For base-10, you count from 0 to 9 in the right-most column, then from 10 to 90 in the next column to the left, then from 100 to 900 in the next column to the left, then from 1000 to 9000, and so on. Each time the counter gets full (reaches 9), you cannot represent any more of the quantity being counted without moving to the next largest order of magnitude. That is, if your display only shows two digits, once you count past 99, you have no idea is the number is 0, or 100, or 200, or 900, or ten thousand, or 4 billion.\n\n\n\n\\newpage\n\\subsection*{Squares}\n\nShapes that are squares have four sides with the same length; that is, the length and width are the same. \\emph{Squaring a number} is multiplying the number times itself, just like, in a square, both sides have the same length. The most common way you will see a ``squared number\" described is with a little 2 up above the number, like  $2^2$, or $3^2$, or $10^2$. 
That smaller `2' is the exponent that we discussed above.

\medskip

Squaring the number 10 (that is, multiplying ten times itself) gets: $10 \times 10 = 100$

\bigskip

\begin{tabular}{l m{0.75in} l l }

\blockline{1}{0.5} & $1 \times 1 = $ & \blockline{1}{0.5} & $=1^2$ \\
\\
\blockline{2}{0.5} & $2 \times 2 = $ & \makeplate{2}{1}{0.5} & $=2^2$ \\
\\
\blockline{3}{0.5} & $3 \times 3 = $ & \makeplate{3}{1}{0.5} & $=3^2$\\
\\
\blockline{4}{0.5} & $4 \times 4 = $ & \makeplate{4}{1}{0.5} & $=4^2$ \\
\\
\blockline{5}{0.5} & $5 \times 5 = $ & \makeplate{5}{1}{0.5} & $=5^2$ \\
\\
\blockline{6}{0.5} & $6 \times 6 = $ & \makeplate{6}{1}{0.5} & $=6^2$ \\

\end{tabular}

\newpage

\subsection*{Cubes}

Cubes have sides of the same length, on each of six faces, in \emph{three} dimensions. When cubing a number, you are multiplying the number times itself, and then multiplying it times itself \emph{again}, because all three dimensions are the same (they have the same value). The most common way you will see a ``cubed number'' described is with a little 3 up above the number, like $2^3$, or $3^3$, or $10^3$. As above, the exponent tells you how many times you multiply the number times itself (here, that means three times).

\medskip

Cubing the number 10 gets: $10 \times 10 \times 10 = 1000$

\bigskip

\begin{tabular}{m{1.1in} m{1.0in} m{1.45in} m{1.9in}}

\blockline{1}{0.5} & $1 \times 1 \times 1 = $ & \blockline{1}{0.5} & One times itself is one.\\
\\
\blockline{2}{0.5} & $2 \times 2\times 2 = $ & \makeplate{2}{2}{0.5} & Two sets of four, or \newline$4+4$ (that is, $2^2 + 2^2$) \\
\\
\blockline{3}{0.5} & $3 \times 3 \times 3 = $ & \makeplate{3}{3}{0.5} & Three sets of nine, or \newline$9+9+9$\newline Put another way:\newline $9 \times 3 = 27$ \\
\\

\blockline{4}{0.5} & $4 \times 4 \times 4 = $ & \makeplate{4}{4}{0.5} & Cubes get big quickly---\newline Here's four sets of 16. \newline$16 \times 4 = 64$ \\
\\

\blockline{5}{0.5} & $5 \times 5 \times 5 = $ & \makeplate{5}{5}{0.5} & Five sets of 25. \newline$25 \times 5 = 125$ \\
\\


\end{tabular}

\newpage

\subsection*{Multipliers and Prefixes}

In systems of measurement, when talking about, say, weight, height, or pressure, there are base units and there are ways to refer to these base units in large multiples, or in tiny fractions. You're probably a little taller than one \emph{meter} in height. And it takes one hundred \emph{centimeters} to add up to one meter. The prefix ``centi-'' means it takes one hundred of \emph{these} to add up to one \emph{base unit}, which in this case is a meter.

The same goes for weight. You probably weigh between 20 and 40 \emph{kilograms}. The prefix ``kilo-'' means \emph{one thousand} of whatever the base unit is. Put another way, grams are pretty small amounts of weight, so measuring things like people or cars is impractical if we use grams, because cars weigh millions of grams. Medicines, on the other hand, are usually measured in quantities called \emph{milligrams} -- each milligram is only $\frac{1}{1000}$ (one one-thousandth) of a gram. Don't be confused into thinking ``milli-'' means ``million'' --- ``mega-'' means ``million.''

When thinking about computers, we hear terms that are often expressed as multiples of things like memory and storage capacity.
A kilobyte is 1,000 bytes (a \emph{byte} is 8 individual bits, and each bit is the smallest unit of computing -- it can only represent 1 or 0). A megabyte is 1,000 kilobytes (or, 1,000,000 bytes). A gigabyte is one \emph{billion} bytes, and a terabyte is one \emph{trillion} bytes.


\newpage
\subsection*{Representing Numbers in Decimal}

\emph{Decimal} means ``with tens''. You've always been taught to count in the decimal system -- by ten. When you are counting by ones, once you get past 9, you reset the right-most number to zero and add that ten to the next column, which is the \emph{tens} column (so you only increment that counter by one, since you are adding \emph{one} ``ten'' to the total). Once you fill up 9 ``tens'' (90) and count up past 9 in the ``ones'' column (that is, you add one to 99), you have to set the ones column and the tens column to zero, and add one to the ``hundreds'' column. You \emph{carry} the ten to the left from the ones, and carry the hundred to the left, to the next-largest set of tens. One hundred is ten tens, one thousand is 100 tens, and so forth.

\noindent It looks like this:
\bigskip

\begin{tabular}{l l l l l | l r }
\rot{ten thousands} & \rot{thousands} & \rot{hundreds} & \rot{tens} & \rot{ones} & \multicolumn{2}{c}{Number} \\
\hline
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 0 && 0 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 1 && 1 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 2 && 2 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 3 && 3 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 4 && 4 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 5 && 5 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 6 && 6 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 7 && 7 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 8 && 8 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 9 && 9 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 1 & 0 && 10 \\
\end{tabular}
\bigskip

When you get to the bottom of the column, you re-set the counter for that column to zero, and add one to the next column over (carry). So you go from 9 to 10, or 19 to 20, or 99 to 100. So any column can only hold between 0 and 9, before you have to carry over to the next \emph{order of magnitude}, which for a decimal system is \emph{ten times the size of one step in the column}. So when you run out of room for the ones column, you go to the \emph{tens} column, which contains ten times as many units as any single step (from 1 to 2, or from 8 to 9) in the column to the right of it. When you run out of room in the column, you have to carry to the next order of magnitude. You go from the ones, to the tens, to the hundreds (100 is $10 \times 10$), to the thousands (1000 is $10 \times 100$), and so on.

To express a number, we count up the amount of each order of magnitude and add them all together.
The number 628 is made of $(6 \\times 100) + (2 \\times 10) + (8 \\times 1)$, for instance.\n\n\\newpage\n\nHere are a few examples of numbers expressed in columns. For each one, you count up how many units in the column exist, then count them for each order of magnitude, and add each value together to get the number being described:\n\n\\bigskip\n\\begin{tabular}{p{2.7in} | l l   l l l l | l l r }\n\\hline\n\\textbf{Text Description} & & \\rot{ten thousands} & \\rot{thousands} & \\rot{hundreds} & \\rot{tens} & \\rot{ones} && \\multicolumn{2}{c}{\\textbf{Number}} \\\\[\\sep]\n\\hline\n& && & & & & &&\\\\[-2mm]\n\nno ten-thousands, no thousands, no hundreds, no tens, and no ones   && {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & 0 &&& 0 \\\\[\\widesep]\nno ten-thousands, no thousands, no hundreds, no tens, and one ones   && {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & 1 &&& 1 \\\\[\\widesep]\nno ten-thousands, no thousands, no hundreds, no tens, and two ones  && {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & 2 &&& 2 \\\\[\\widesep]\nno ten-thousands, no thousands, no hundreds, three tens, and no ones && {\\color{lightgray}0} & {\\color{lightgray}0} & {\\color{lightgray}0} & 3 & 0 &&& 30 \\\\[\\widesep]\nno ten-thousands, no thousands, four hundreds, no tens, and no ones && {\\color{lightgray}0} & {\\color{lightgray}0} & 4 & 0 & 0 &&& 400 \\\\[\\widesep]\nno ten-thousands, no thousands, five hundreds, five tens, and five ones && {\\color{lightgray}0} & {\\color{lightgray}0} & 5 & 5 & 5 &&& 555 \\\\[\\widesep]\nno ten-thousands, six thousands, five hundreds, no tens, and two ones && {\\color{lightgray}0} & 6 & 5 & 0 & 2 & & & 6502 \\\\[\\widesep]\nsix ten-thousands, eight thousands, no hundreds, three tens, and no ones && 6 & 8 & 0 & 3 & 0 & & & 68030 \\\\[\\widesep]\n\\hline\n\\end{tabular}\n\n\n\\bigskip\n\n\n\\stbox{6.0in}{\\emph{Exercise:} What if, in the above table, you counted past 99,999? What number would you see? That is, what do you see if there is not a ``hundred-thousands'' column? What happened to the hundred thousand that should be counted? Do you need to know how many hundred thousands are in the number? How would you handle the need for a larger number, if you have it? (Introduces the concept of \\emph{overflow}.)}\n\n\n\\newpage\n\\subsection*{Representing Numbers in Binary}\n\nComputers only understand ``1'' and ``0'' -- because a computer can sense the electricity in a wire that is either \\emph{on} (having a detectable voltage greater than zero) or \\emph{off} (having a reference voltage that is basically at ``zero volts'', or ``ground''). That means that a computer, in order to add, subtract, or store information, has to express \\emph{literally anything and everything it can handle} in terms of either ones or zeroes. This system is called \\emph{binary}, because computers only understand two ``states'' -- on, or off. Instead of ``base-10'' counting (where moving to the next column happens when you pass 9), binary is ``base-2'' counting; the numbers move to the next column when the number passes 1---because {\\color{webblue}\\href{https://www.youtube.com/watch?v=MOn_ySghN2Y}{there's no such number as ``two''}} if you're a computer!\n\nIn order to add numbers in binary, we can't count to ten. We must count to one, and then, if the resulting number is greater than one, carry the digit to the next column. 
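\n\nHere is a small worked example of that carrying in action: to add 3 ($011$) and 1 ($001$), start with the ones column: $1 + 1$ is ``one-zero,'' so write down 0 and carry the 1. The twos column is $1 + 0$ plus the carried 1, which is ``one-zero'' again: write down 0 and carry the 1. The fours column is $0 + 0$ plus the carried 1, which is just 1. The answer is $100$ -- binary for four, exactly what $3 + 1$ should be.\n\n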
Counting in binary is interesting both because it is different and because it introduces us to a new way of doing math.\n\nIn order to add two numbers, computers have to do the following:\n\\be\n\\+ store each number in a binary format;\n\\+ compare each column (ones, twos, fours, eights, etc.) of each number and see if the numbers in that column add up to more than one;\n\\+ carry the ``one'' to the next column (the next \\emph{order of magnitude}, which is two times the previous column). That is, if the number goes from one to two, the ``one'' will go to zero and the ``two'' will be moved -- and added -- to the next column over to the left;\n\\+ keep on adding and carrying digits until the addition is complete;\n\\+ count up the values of each order of magnitude (either one, or none) and add them together; and\n\\+ report the new number as a [binary] result.\n\\ee\n\n\n\\begin{minipage}[c]{6.5in}\n\\begin{center}\n\\textbf{Binary and decimal representation for each number from 0 to 15:}\n\n\\smallskip\n\\begin{tabular}{p{1.25in} | p{0.10in} p{0.10in} p{0.10in} p{0.10in} | l r}\n\\hline\\\\[\\negsep]\n\\textbf{Description} & \\rot{eights} & \\rot{fours} & \\rot{twos} & \\rot{ones} && \\textbf{{\\color{webblue}base 10}}\\\\[\\sep]\n\\hline\\\\[\\negsep]\n$0 + 0 + 0 + 0$   & 0 & 0 & 0 & 0 && {\\color{webblue}0} \\\\\n\n\\grr\n$0 + 0 + 0 + 1$ & 0 & 0 &  0 & 1 && {\\color{webblue}1} \\\\\n\n$0 + 0 + 2 + 0$   & 0 & 0 & 1 & 0 && {\\color{webblue}2} \\\\\n\n\\grr\n$0 + 0 + 2 + 1$   & 0 & 0 & 1 & 1 && {\\color{webblue}3} \\\\\n\n$0 + 4 + 0 + 0$   & 0 & 1 & 0 & 0 && {\\color{webblue}4} \\\\\n\n\\grr\n$0 + 4 + 0 + 1$   & 0 & 1 & 0 & 1 && {\\color{webblue}5} \\\\\n\n$0 + 4 + 2 + 0$   & 0 & 1 & 1 & 0 && {\\color{webblue}6} \\\\\n\n\\grr\n$0 + 4 + 2 + 1$   & 0 & 1 & 1 & 1 && {\\color{webblue}7} \\\\\n\n$8 + 0 + 0 + 0 $  & 1 & 0 & 0 & 0 && {\\color{webblue}8} \\\\\n\n\\grr\n$8 + 0 + 0 + 1 $  & 1 & 0 & 0 & 1 && {\\color{webblue}9} \\\\\n\n$8 + 0 + 2 + 0 $  & 1 & 0 & 1 & 0 && {\\color{webblue}10} \\\\\n\n\\grr\n$8 + 0 + 2 + 1 $  & 1 & 0 & 1 & 1 && {\\color{webblue}11} \\\\\n\n$8 + 4 + 0 + 0 $  & 1 & 1 & 0 & 0 && {\\color{webblue}12} \\\\\n\n\\grr\n$8 + 4 + 0 + 1 $  & 1 & 1 & 0 & 1 && {\\color{webblue}13} \\\\\n\n$8 + 4 + 2 + 0 $  & 1 & 1 & 1 & 0 && {\\color{webblue}14} \\\\\n\n\\grr\n$8 + 4 + 2 + 1 $  & 1 & 1 & 1 & 1 && {\\color{webblue}15} \\\\\n\n\\hline\n\\end{tabular}\n\\end{center}\n\n\\stbox{6.0in}{\\emph{Problem 1:} What happens in the above table if you add 1 to 15? What number would the computer report if it only has four columns to use? (As above with counting past 99,999, this problem refers to \\emph{overflow}).\n}\n\n\\stbox{6.0in}{\\emph{Problem 2:} Let's say the machine runs by itself at some regular speed, and adds one to the number each second. Let's also say you are using this counter as a way to control a single blinking light, since ``1'' means ``there is electricity available to that wire'' and so a ``1'' would turn the light on. Look at each column of numbers (that is, each \\emph{order of magnitude!}). Remembering that ``a line that has a voltage'' is a 1 and ``no voltage'' (usually called ``ground'') is a zero, which line (column) would make the light blink fastest? 
Which line would blink the slowest?\n}\n\\end{minipage}\n\n\\bigskip\n\n\\begin{tabular}{ llll llll llll llll l}\n\n\\multicolumn{17}{c}{\\textbf{How Computers Count to 2,020}}\\\\[\\sep]\n\n\\hline\\\\[\\negsep]\n% 2020: 0111 1110 0100\n$2^{15}$ & $2^{14}$ & $2^{13}$ & $2^{12}$ & $2^{11}$ & $2^{10}$ & $2^9$ & $2^8$ &\n$2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$  & \\\\\n\\hline\\\\[\\negsep]\n\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & zero \\\\\n\n\\grr 0 & 0 & 0 & 0 & 0  & 1 & 1  & 1 & 1 & 1  & 1 & 0 & 0 & 0 & 1 & 1 & 2019 \\\\\n\n0 & 0 & 0 & 0 & 0  & 1 & 1  & 1 & 1 & 1  & 1 & 0 & 0 & 1 & 0 & 0 & 2020 \\\\\n\n%\\\\[\\sep]\n\\hline\n\n\\end{tabular}\n\n\\bigskip\n\n\\begin{tabular}{ llll llll llll llll l}\n\n\\multicolumn{17}{c}{\\textbf{How Computers Count to 65,535}}\\\\[\\sep]\n\n\\grr \\hline\\\\[\\negsep]\n\n\\grr $2^{15}$ & $2^{14}$ & $2^{13}$ & $2^{12}$ & $2^{11}$ & $2^{10}$ & $2^9$ & $2^8$ &\n$2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$  & \\\\\n\\hline\\\\[\\negsep]\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & zero \\\\\n\\grr\n 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 32,767 \\\\\n 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 32,768 \\\\\n\\grr\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 65,535 \\\\[\\sep]\n\\hline\n\n\\end{tabular}\n\n\\bigskip\n\nIf the computer counts up from zero, then once it has filled every column up through $2^{14}$ (that is, once it reaches 32,767), the very next count carries over into the $2^{15}$ (32,768) column. \n\n\\stbox{6.0in}{\\emph{Useless skill:} Did you know you can count to 31 on one hand? You have five fingers, and each finger can be open or closed, and $2^5$ is 32. Use your thumb for the ``0 or 1'' ($2^0$, ``two to the zeroth power'') column; the digit to its left represents two to the first power (the ``twos digit''); the next digit to the left represents two to the second power (the ``fours digit''); and so on. Watch out for ``4'' though!}\n\n\\begin{tabular}{llll llll l}\n\n\\rot{128} & \\rot{64} & \\rot{32} & \\rot{sixteen} & \\rot{eight} & \\rot{four} & \\rot{two} & \\rot{one} &  \\\\[\\sep]\n\\hline\\\\[\\negsep]\n\n$2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$  & \\textbf{Result} \\\\[\\sep]\n\\hline\\\\[\\negsep]\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\\color{webblue}\\textbf{0}} \\\\\n\\grr\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & {\\color{webblue}\\textbf{2}} ($2 + 0$) \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & {\\color{webblue}\\textbf{3}} ($2 + 1$) \\\\\n\\grr\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & {\\color{webblue}\\textbf{4}} ($4 + 0 + 0$) \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & {\\color{webblue}\\textbf{7}} ($4 + 2 + 1$) \\\\\n\\grr\n0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & {\\color{webblue}\\textbf{8}} ($8 + 0 + 0 + 0$) \\\\\n\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \\makeblank{1.5in} \\\\\n\\grr\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & \\makeblank{1.5in} \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & \\makeblank{1.5in} \\\\\n\\grr\n0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & \\makeblank{1.5in} \\\\\n0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & \\makeblank{1.5in} \\\\\n\\grr\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & \\makeblank{1.5in} \\\\\n0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & \\makeblank{1.5in} \\\\\n\\grr\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \\makeblank{1.5in} \\\\[\\sep]\n\\hline\n\n\\end{tabular}\n\\bigskip\n\n\\stbox{6.0in}{\\emph{Tip:} if you have a sequence of all ones, like 7 or 15 or 31, rather than adding up each binary column, you can just subtract one from the next largest column base number. 
So \\texttt{0111} equals \\texttt{1000} minus one.}\n\n\n\\vfill\n\n\\stbox{4.25in}{\\noindent{\\color{red}\\textbf{Joke:}}\n    \n    \\medskip\n\n    \\noindent\\emph{Remember:}  There are only 10 types of people in the world;\\\\\nthose who understand binary, and those who don't.\\\\\n\n\\bigskip\n\n(If the joke needs explanation, write 10 like \\texttt{0010} and then figure up the value in binary)}", "meta": {"hexsha": "93af801fae187d4d2fc9aa3322f35dd3c7879b59", "size": 19788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/math.tex", "max_stars_repo_name": "jessehamner/TechMillForKids", "max_stars_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2017-11-13T21:45:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T09:31:54.000Z", "max_issues_repo_path": "chapters/math.tex", "max_issues_repo_name": "jessehamner/TechMillForKids", "max_issues_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-03-10T21:46:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-25T19:21:58.000Z", "max_forks_repo_path": "chapters/math.tex", "max_forks_repo_name": "jessehamner/TechMillForKids", "max_forks_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-11-14T04:40:14.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-17T05:31:36.000Z", "avg_line_length": 58.8928571429, "max_line_length": 1128, "alphanum_fraction": 0.6484232868, "num_tokens": 7205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.5894109703602043}}
{"text": " \\documentclass[../thesis.tex]{subfiles}\n \\begin{document}\n\n This appendix contains formal specifics of Multi-Agent Influence\n Diagrams and data games.\n \n\\section{Multi-Agent Influence Diagrams}\n \nMulti-Agent Influence Diagrams (MAIDs) are a game-theoretic\nextension of Bayesian networks developed by Koller and Milch\n\\cite{koller2003multi}.\nA MAID is defined by:\n\\begin{enumerate}\n\\item A set $\\mc{A}$ of agents \n\\item A set $\\mc{X}$ of chance variables\n\\item A set $\\mc{D}_a$ of decision variables for each agent $a \\in \\mc{A}$,\n  with $\\mc{D} = \\bigcup_{a \\in \\mc{A}} \\mc{D}_a$\n\\item A set $\\mc{U}_a$ of utility variables for each agent $a \\in \\mc{A}$,\n  with $\\mc{U} = \\bigcup_{a \\in \\mc{A}} \\mc{U}_a$\n\\item A directed acyclic graph $\\mc{G}$ that defines the parent function\n  $Pa$ over $\\mc{V} = \\mc{X} \\cup \\mc{D} \\cup{U}$\n\\item For each chance variable $X \\in \\mc{X}$, a CPD $Pr(X \\vert Pa(X))$\n\\item For each utility variable $U \\in \\mc{U}$, a CPD $Pr(U \\vert Pa(U))$\n\\end{enumerate}\n\nThe decision variables represent moments where agents can\nmake decisions about how to act given only the information\nprovided by the variable's parents.\n\n\\begin{dfn}[Decision rules]\n  \\label{dfn:decision-rule}\n  A \\emph{decision rule} $\\delta$ is a function that maps each instantiation\n  $\\vec{pa}$ of $Pa(D)$ to a probability distribution over $dom(D)$.\n\\end{dfn}\n\n\\begin{dfn}[Strategy]\n  \\label{dfn:strategy}\n  An assignment of decision rules to every decision $D \\in \\mc{D}_a$\n  for a particular agent $a \\in \\mc{D}_a$ for a particular agent\n  $a \\in \\mc{A}$ is called a \\emph{strategy}.\n\\end{dfn}\n\n\\begin{dfn}[Strategy profile]\n  An assignment $\\sigma$ of decision rules to every decision\n  $D \\in \\mc{D}$ is called a \\emph{strategy profile}.\n  A \\emph{partial strategy profile} $\\sigma_\\mc{E}$ is\n  an assignment of decision rules to a subset $\\mc{E} \\subset \\mc{D}$.\n  $\\sigma_{-\\mc{E}}$ refers to a restriction of $\\sigma$ to variables\n  not in $\\mc{E}$.\n\\end{dfn}\n\nDecision rules are of the same form as CPDs, and so a MAID\ncan be transformed into a Bayes network by replacing every\ndecision variable with a random variable with the CPD of the\ndecision rule of a strategy profile.\n\n\\begin{dfn}\n  If $\\mc{M}$ is a MAID and $\\sigma$ is a strategy profile for\n  $\\mc{M}$, then the \\emph{joint distribution for $\\mc{M}$\n    induced by $\\sigma$}, denoted $P_{\\mc{M}[\\sigma]}$, is the\n  joint distribution over $\\mc{V}$ defined by the Bayes\n  net where:\n  \\begin{itemize}\n  \\item the set of variables is $\\mc{V}$;\n  \\item for $X, Y \\in \\mc{V}$, there is an edge $X \\rightarrow Y$\n    if and only if $X \\in Pa(Y)$;\n  \\item for all $X \\in \\mc{X} \\cup \\mc{U}$, the CPD for $X$ is $Pr(X)$;\n  \\item for all $D \\in \\mc{D}$, the CPD for $D$ is $\\sigma(D)$.\n  \\end{itemize}\n\\end{dfn}\n\n\\begin{dfn}\n  Let $\\mc{E}$ be a subset of $\\mc{D}_a$ and let $\\sigma$ be a strategy\n  profile.\n  We say that $\\sigma*_\\mc{E}$ is \\emph{optimal for the strategy profile}\n  $\\sigma$ if, in the induced MAID $\\mc{M}[\\sigma_{-\\mc{E}}]$,\n  where the only remaining decisions are those in $\\mc{E}$,\n  the strategy $\\sigma*_{\\mc{E}}$ is optimal, i.e., for all\n  strategies $\\sigma'_{\\mc{E}}$:\n  $$EU_a((\\sigma_{-\\mc{E}},\\sigma*_{\\mc{E}})) \\geq EU_a((\\sigma_{\\mc{E}}, \\sigma'_{\\mc{E}}))$$\n\\end{dfn}\n\nA major contribution of \\citet{koller2003multi} is their 
analysis\nof how to efficiently discover Nash Equilibrium strategy profiles\nfor MAIDs.\nTheir method involves analyzing the qualitative graphical\nstructure of the MAID to discover the \\emph{strategic reliance}\nof decision variables.\nWhen a decision variable $D$ strategically relies on $D'$,\nthen in principle the choice of the optimal decision rule for\n$D$ depends on the choice of the decision rule for $D'$.\n\n\\begin{dfn}[Strategic reliance]\n  \\label{dfn:strategic-reliance}\n  Let $D$ and $D'$ be decision nodes in a MAID $\\mc{M}$.\n  $D$ \\emph{strategically relies on} $D'$ if there exist\n  two strategy profiles $\\sigma$ and $\\sigma'$ and a\n  decision rule $\\delta$ for $D$ such that:\n  \\begin{itemize}\n  \\item $\\delta$ is optimal for $\\sigma$;\n  \\item $\\sigma'$ differs from $\\sigma$ only at $D'$;\n  \\end{itemize}\n  but no decision rule $\\delta^*$ that agrees with $\\delta$ on\n  all parent instantiations $\\vec{pa} \\in dom(Pa(D))$\n  where $P_{\\mc{M}[\\sigma]}(\\vec{pa}) > 0$ is optimal for $\\sigma'$.\n\\end{dfn}\n\n\\begin{dfn}[s-reachable]\n  \\label{dfn:s-reachable}\n  A node $D'$ is \\emph{s-reachable} from a node $D$ in a MAID\n  $\\mc{M}$ if there is some utility node $U \\in \\mc{U}_D$ such\n  that if a new parent $\\widehat{D'}$ were added to $D'$, there would\n  be an active path in $\\mc{M}$ from $\\widehat{D'}$ to $U$ given\n  $Pa(D) \\cup \\{D\\}$, where a path is active in a MAID if it\n  is active in the same graph, viewed as a BN.\n\\end{dfn}\n\n\\begin{thm}\n  \\label{thm:strategic-non-reliance}\n  If $D$ and $D'$ are two decision nodes in a MAID $\\mc{M}$\n  and $D'$ is not s-reachable from $D$ in $\\mc{M}$, then $D$\n  does not strategically rely on $D'$.\n\\end{thm}\n\n\\subsection{Tactical independence}\n\nThis dissertation introduces a new concept\nrelated to Multi-Agent Influence Diagrams: tactical independence.\n\n\\begin{dfn}[Tactical independence]\n  \\label{dfn:tactical-independence}\n  For decision variables $D$ and $D'$ in MAID $\\mc{M}$,\n  $D$ and $D'$ are \\emph{tactically independent} for\n  conditioning set $\\mc{C}$ iff\n  for all strategy profiles $\\sigma$ on $\\mc{M}$,\n  in $P_{\\mc{M}[\\sigma]}$, the joint distribution for\n  $\\mc{M}$ induced by $\\sigma$,\n  $$D \\independent D' \\vert \\mc{C}$$\n\\end{dfn}\n\nBecause tactical independence depends on the\nindependence of variables in an induced probability\ndistribution that is representable by a Bayesian\nnetwork, the d-separation tests for independence\napply readily.\n\n\\begin{thm}\n  For decision variables $D$ and $D'$ in MAID $\\mc{M}$,\n  and for conditioning set $\\mc{C}$, if\n  $D$ and $D'$ are d-separated given $\\mc{C}$ on\n  $\\mc{M}$ considered as a Bayesian network,\n  then $D$ and $D'$ are tactically independent\n  given $\\mc{C}$.\n\\end{thm}\n\n\\begin{proof}\n  Suppose $D$ and $D'$ are d-separated given $\\mc{C}$\n  on $\\mc{M}$ considered as a Bayesian network.\n\n  For any strategy profile $\\sigma$,\n  the joint distribution for $\\mc{M}$\n  induced by $\\sigma$, $P_{\\mc{M}[\\sigma]}$\n  has the same graphical structure as $\\mc{M}$\n  considered as a Bayesian network.\n\n  Therefore, $D$ and $D'$ are d-separated given $\\mc{C}$\n  in the graph corresponding to $P_{\\mc{M}[\\sigma]}$\n  for all $\\sigma$.\n\n  Because $D$ and $D'$ are d-separated given $\\mc{C}$\n  in the Bayesian network, $D \\independent D' \\vert \\mc{C}$.\n\\end{proof}\n\n% Utility CPDs are deterministic\n% Utility nodes have to be leaves (I bend this in my 
models!)\n\n\\subsection{Notation}\n\\label{sec:maid-notation}\n\nWe will use a slightly different graphical notation than that used by\n\\citet{koller2003multi}.\n\nIn the models in this paper, we will denote random variables\nwith undecorated capital letters, e.g. $A, B, C$.\nWe will denote strategic nodes with a tilde over a capital\nletter, e.g. $\\tilde{A}, \\tilde{B}, \\tilde{C}$.\nThe random variable defined by the optimal strategy at a\ndecision node, when such a variable is well-defined,\nwill be denoted with a hat, e.g. $\\hat{A}, \\hat{B}, \\hat{C}$.\nNodes that represent the payoff or utility to an\nagent will be denoted with a breve, e.g.\n$\\breve{A}, \\breve{B}, \\breve{C}$.\nParticular agents will be identified by a lower case\nletter and the assignment of strategic and utility nodes\nto them will be denoted by subscript.\nE.g., $\\tilde{A}_q$ and $\\breve{U}_q$ denote an action\ntaken by agent $q$ and a payoff awarded to $q$,\nrespectively.\n\n\\section{Data Games}\n\\label{sec:value-of-data}\n\nWhat distinguishes a data game from a MAID is the use\nof optional arrows to support mechanism design.\nA dotted arrow in a data game is an optional arrow.\nThe diagram defines two separate models, one including the\narrow and one without.\nWhen considering an instantiation of the model with the dotted\nedge present, we will say the model or edge is \\emph{open}.\nWhen the edge is absent, we will say it is \\emph{closed}.\n\nAs we have distinguished between strategic reliance and\ntactical independence, we can distinguish between the\nstrategic and tactical value of information.\n\nThe strategic value of an information flow to an agent\nis the difference in utility to that agent in the open\nand closed conditions of the game, given that each game\nis at strategic equilibrium for all players.\n\n\\begin{dfn}[Strategic value of information]\n  \\label{dfn:strategic-value}\n  Given two MAID diagrams $\\mc{M}_{o}$ and $\\mc{M}_{c}$\n  that differ only by a single edge, $e$,\n  and a strategic profile solution for each diagram, $\\hat{\\sigma}_{o}$\n  and $\\hat{\\sigma}_{c}$, the \\emph{strategic value of $e$ to $a$}\n  is the difference in expected utility to $a$ under the\n  two respective induced joint distributions:\n\n  $$E(P_{\\mc{M}_{o}[\\hat{\\sigma}_{o}]}(U_a)) - E(P_{\\mc{M}_{c}[\\hat{\\sigma}_{c}]}(U_a))$$\n\\end{dfn}\n\nDefinition \\ref{dfn:strategic-value} is an incomplete\ndefinition because it leaves open what \\textit{solution concept}\nis used to determine the strategic profile solutions.\nFor the purpose of the results in this paper, we use\nNash Equilibrium as the solution concept for determining\nstrategic value of information.\n\nIn contrast with the strategic value of information,\nthe tactical value of information is the value of\nthe information to an agent given an otherwise fixed\nstrategy profile.\nWe allow the agent receiving the data to make a tactical\nadjustment to their strategy at the decision variable\nat the head of the new information flow.\n\n\\begin{dfn}[Best tactical response to information]\n  \\label{dfn:best-tactical-response}\n  Given two MAID diagrams $\\mc{M}_{o}$ and $\\mc{M}_{c}$\n  differing only in optional edge $e$ with head in decision\n  variable $D_a$,\n  the \\emph{best tactical response to $e$} given\n  strategy profile solution $\\hat{\\sigma}$, $\\hat{\\delta}_{\\hat{\\sigma},e}$,\n  is the decision rule $\\delta$ for $D_a$ such\n  that $\\delta$ is optimal for $\\hat{\\sigma}$\n  for player $a$.\n\\end{dfn}\n\n\\begin{dfn}[Tactical value of 
information]\n  \\label{def:tactical-value}\n  Given two MAID diagrams $\\mc{M}_{o}$ and $\\mc{M}_{c}$\n  differing only in optional edge $e$ with head in decision\n  variable $D$,\n  the \\emph{tactical value of $e$} to agent $a$ given\n  strategy profile solution $\\hat{\\sigma}$\n  is the difference in expected utility of\n  the open condition with the best tactical response to $e$\n  and the closed condition using the original strategy:\n\n  $$EU_a((\\hat{\\sigma}_{-D},\\hat{\\delta}_{\\hat{\\sigma},e})) - EU_a(\\hat{\\sigma})$$\n\\end{dfn}\n\n\nNote that the uniqueness of a best tactical response\nhas not yet been proven.\nHowever, if the best tactical response is not unique,\nthen the tactical value of the information will be the\nsame for any best tactical response.\nThis definition, like Definition \\ref{dfn:strategic-value},\ndepends on an implicit solution concept.\n\n \n\\end{document}\n", "meta": {"hexsha": "6327f63301a8537b100684d4ef592b0ba319a2ec", "size": 10944, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendixB.tex", "max_stars_repo_name": "sbenthall/dissertation", "max_stars_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendixB.tex", "max_issues_repo_name": "sbenthall/dissertation", "max_issues_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2018-04-19T13:07:29.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-02T21:33:06.000Z", "max_forks_repo_path": "chapters/appendixB.tex", "max_forks_repo_name": "sbenthall/dissertation", "max_forks_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4, "max_line_length": 94, "alphanum_fraction": 0.7067799708, "num_tokens": 3284, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.5894109703602042}}
{"text": "\\subsubsection{Feature importance}\n\\label{sec:methods_pipeline_feature_importance}\n\nLopez de Prado's eloquent words in section 8.2 of \\cite{lopez_de_prado} are very\nappropriate to illustrate what this section is about.\n\n\\say{Hunters do not blindly eat everything their smart dogs retrieve for them, do they?}\n\nAs a financial machine learning researcher once we are satisfied\nwith the performance of a machine learning model it is advised to understand\nwhich, how and when features contribute to improve the performance of the model.\nThe author focuses on the \\emph{importance} of the features with and without\nsubstitution effects. In this research pipeline, only a method to consider\nsubstitution effects is implemented.\n\nThe method is called Mean Decrease Accuracy and it uses model's estimation\naccuracy as guiding principle. Description of the method follows:\n\n\\begin{itemize}\n  \\item Let $X$ of be the feature matrix and let $Y$ be the output vector.\n  \\item Let $X_1, X_2, X_3, ..., X_m$ be the columns of $X$.\n  \\item Fit via the desired training process $m + 1$ models and measure their\n        accuracy. One model will be fitted with $X$ as is. And the other $m$\n        models will be fitted with one randomly permutated column $X_i$.\n  \\item For each of the aforementioned $m$ models, compute the relative loss in\n        accuracy with respect to the base without permutation model. \n\\end{itemize}\n\nThis method allows to create a rank in which once can inspect the relative loss\nof performance measured in accuracy, but it could also be F1-score or\nnegative log-loss when working with classifiers. Having a high value means\nthat the predictive importance of the feature is relevant. Even though this method\nis flexible and adaptable to multiple types of models as it is based on out of\nsample performance, it comes with some important drawbacks:\n\n\\begin{itemize}\n  \\item It is relatively slow because it requires the training of $m + 1$ models and\n        the evaluation of their performance. When this is done with purged $K$-fold cross\n        validation with embargo\\emph{ed} datasets, it will definitely take time.\n  \\item It is susceptible to \\emph{substitution effects}. The effect can be described with two\n        or more features that are highly correlated. The performance loss will be similar so\n        a researcher might decide to remove them but with that the overall predictive \n        capacity diminishes more than just keeping one of the features in the set\n        under study.\n  \\item A possible result is that all features are detrimental or unimportant\n        for the model what is somewhat hard to interpret.\n\\end{itemize}\n\nFeature orthogonalization could help to reduce the substitution effect. Two\nprocedures are proposed. First, a direct application of Frisch\u2013Waugh\u2013Lovell theorem\nanalysis can be used. Each feature is analyzed individually via a linear\nregression and residues are inspected to derive relative importance. See\nchapters 2 and 3 of \\cite{econometric_theory_and_methods} for an in depth description\nof the theorem and applications. Secondly, Principal Component Analysis, i.e. PCA,\nas suggested in section 8.4.2 of \\cite{lopez_de_prado} can also be used to\ndetermine the set of features whose eigenvalues in the orthogonalized space are\ngreater. 
Once all features are ranked, the researcher is required to select which\nfeatures to drop and which ones to keep. The final set of features will be used\nto build a new model. Heuristic rules are used to drop features but, as mentioned\nbefore, researchers should be aware of the substitution effect. In this research\nproject, the rule applied consists of computing the mean loss in\nnegative log-loss and keeping those features that produce a higher-than-mean loss\nin performance.\n\nFurthermore, researchers often face the requirement of explaining the economic\nmechanism that generates the excess return of the strategy with respect to a\nbenchmark. Understanding which features contribute most to predicting labels with\nincreased confidence is considerably simpler when unimportant extra features are\nremoved. This stage of the process, together with feature engineering, is very\nimportant for understanding how the model behaves.", "meta": {"hexsha": "b429d193edaaa64a871ea4c25efa1f81bd24d86f", "size": 4350, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/methods/pipeline/feature_selection.tex", "max_stars_repo_name": "agalbachicar/swing_for_the_fences", "max_stars_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/methods/pipeline/feature_selection.tex", "max_issues_repo_name": "agalbachicar/swing_for_the_fences", "max_issues_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/methods/pipeline/feature_selection.tex", "max_forks_repo_name": "agalbachicar/swing_for_the_fences", "max_forks_repo_head_hexsha": "3871e88884a90e5c9dd80d71b20b811485007273", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.5890410959, "max_line_length": 94, "alphanum_fraction": 0.7926436782, "num_tokens": 939, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5894109703602042}}
{"text": "\\section{Time Series}\n\nA time series, also known as discrete time signal, is a sequence of observations taken periodically in time. We can use time series to perform many tasks such as predictions of future values, behaviour analysis or information extraction. Examples of time series are audio signals, industrial instrument measures or diary finantial activity.\n\n\\begin{figure}[H]\n  \\centering\n  \\input{Figures/TimeSeries}\n  \\caption{Example of a random time series.}\n\\end{figure}\nA system can be determined comparing the input and the output. We call the system a filter if it is linear and time invariant. Considering the dynamic system as a black box, we can estimate the transference function or the impulse response to taht filter.\n\nWe can also consider \\textbf{multivariate} time series, where some values of the time series have an influence on the other values in different or the same time instant. We can \\textbf{classify} the time series in two wide types:\n\n\\begin{itemize}\n  \\item Determinist: based in dynamic systems, they exploit the phisics of the generation algorithm of the time series.\n        \\item Stochastic: where the series are realizations of a stochastic process, which can be modelated.\n\\end{itemize}\n\nIn this subject, we will focus on stochastic models.\n\n\\subsection{Stochastic Models}\n\nWe can make three big considerations on the stochastic models.\n\\begin{itemize}\n  \\item Stationary models.\n        \\begin{ndef}\n          Let \\(\\{X_{t}\\}\\) be a stochastic process and let \\(F_{X}\\left(x_{t_{1} + \\tau}, \\dots, x_{t_{n} +\\tau}\\right)\\) represent the CDF of the \\textbf{unconditional} joint distribution of \\(\\{X_{t}\\}\\) at times \\(t_{1 }+ \\tau,\\dots, t_{n} + \\tau\\). Then \\(\\{X_{t}\\}\\) is strictly stationary if\n          \\[\n            F_{X}\\left( x_{t_{1} + \\tau}, \\dots, x_{t_{n} +\\tau}\\right) = F_{X}\\left(X_{t_{1}},\\dots,x_{t_{n}}\\right)\n          \\]\n\n        \\end{ndef}\n        However, we will use the case of \\textbf{weak stationarity}, where we assume that the expectation of the stochastic process and the covariance at times \\(t,t+\\tau\\) are constant.\n\n        \\begin{example}\n          AR, MA, ARMA\n        \\end{example}\n\n  \\item Non stationary models, where we do not make the assumption that the average of the process is constant in time and that there is seasonality\n        \\begin{example}\n          ARIMA, SARIMA\n        \\end{example}\n\n  \\item Influenced by exogenous(extern) variables. In this cases, the exogenous variable affects the model, but the model does not affect this variable.\n        \\begin{example}\n          SARIMAX\n        \\end{example}\n\\end{itemize}\n\nLet us introduce some \\textbf{notation} for the following explanations\n\n\\begin{ndef}\n  Let \\(z_{t}\\) be the value of the time series at instant \\(t\\).\n  \\begin{itemize}\n    \\item The \\textbf{backward shift} operator is \\(z_{t-m} = B^{m}z_{t}\\)\n    \\item The \\textbf{forward shift} operator is \\(z_{t+m} = F^{m}z_{t} = B^{-m}z_{t}\\)\n    \\item The difference or discrete gradient operator is \\(\\nabla z_{t} = z_{t} - z_{t-1} = (1-B)z_{t}\\)\n  \\end{itemize}\n\\end{ndef}\n\nRecall that, having a time series we can consider its \\textbf{Z-transform}, that converts the discrete-time signal into a complex frequency-domain representation. 
In the Z-transform representation, the previously introduced notation is:\n\\begin{itemize}\n  \\item The backward shift is \\(z_{t-m} = B^{m}z_{t} = Z^{-m}z_{t}\\)\n  \\item The forward shift is \\(z_{t+m} = B^{-m}z_{t} = Z^{m}z_{t}\\)\n        \\item The difference or discrete gradient is \\(\\nabla z_{t} = (1-Z^{-1})z_{t}\\)\n\\end{itemize}\n\n\n\\section{Linear filter based models}\n\nThe stochastic models we use are based on time series \\(z_{t}\\) in which successive values are highly dependent. In these cases, we can see the time series as generated from a series of independent ``shocks''.\n\n\\begin{ndef}\n  Let \\(a_{t} \\sim \\mathcal N\\left(0,\\sigma_{a}^{2}\\right)\\) be \\emph{white noise} (where each \\emph{shock} is related to \\(a_{t}\\)) which is not observed. Consider a linear filter that transforms the unobserved \\(a_{t}\\) to an observed time series \\(z_{t}\\). We say that a \\textbf{linear filter model} is\n  \\begin{equation}\\label{linear:filter}\n    z_{t} = \\mu + a_{t} + \\psi_{1}a_{t-1} + \\psi_{2}a_{t-2} + \\dots = \\mu + \\psi(B)a_{t},\n  \\end{equation}\n  where\n  \\[\n    \\psi(B) = 1 + \\psi_{1}B + \\psi_{2}B^{2} + \\dots\n  \\]\n  is called the \\textbf{transfer function} of the filter.\n\\end{ndef}\n\n\\begin{figure}[H]\n\n  \\centering\n  \\includegraphics{Figures/LinearFilter}\n\n\\end{figure}\n\nAs we can see, we express the filter as an infinite sum with coefficients \\(\\psi_{i}\\). If the sequence of coefficients is finite or \\emph{absolutely summable}, that is, \\(\\sum_{j = 0}^{\\infty}\\abs{\\psi_{j}} < \\infty\\) (equivalently, the vector of coefficients has finite \\(\\ell^{1}\\) norm), we say that the filter is \\textbf{stable} and the process \\(z_{t}\\) is \\textbf{stationary}.\n\nIn the case where the \\(\\ell^{1}\\) norm is not finite, the filter is unstable and produces nonstationary series.\n\n\\subsection{Autoregressive Models (AR)}\n\nLet us first consider the simplest case of a linear filter. An \\textbf{autoregressive model} is a linear filter where the current value of the process \\(\\tilde z_{t}\\) is expressed as a finite sum of the previous values and a random shock \\(a_{t}\\).\n\n\\begin{ndef}\n  Let us denote the values of a process at equally spaced times \\(t,t-1,\\dots\\) by \\(z_{t}, z_{t-1}, \\dots\\). Consider that the values are centered, that is \\(\\tilde z_{t} = z_{t} - \\mu\\). 
Then, the \\textbf{autoregressive (AR) process} of \\textbf{order p} is\n  \\begin{equation}\\label{model:AR}\n    \\tilde z_{t} = \\phi_{1}\\tilde z_{t-1} + \\phi_{2}\\tilde z_{t-2}+ \\dots  + \\phi_{p}\\tilde z_{t-p} + a_{t}\n    \\end{equation}\n\\end{ndef}\n\nNote that it is called autoregressive since, if you consider \\(\\tilde z_{t-k}\\) for \\(k = 1,\\dots,p\\) as points, you are doing a \\emph{linear regression} over the past values.\\\\\n\nNow, if we define the \\textbf{autoregressive operator of order p} using the backward shift operator \\(B\\) as:\n\\[\n\\phi(B) = 1- \\phi_{1}B - \\phi_{2}B^{2} - \\dots - \\phi_{p}B^{p},\n\\]\nwe can economically write the autoregressive model in \\eqref{model:AR} as\n\\begin{equation}\\label{model:ar:red}\n  \\phi(B)\\tilde z_{t} = a_{t}\n\\end{equation}\n\nIn practice, this model has \\(p+2\\) unknown parameters \\(\\mu,\\phi_{1},\\dots,\\phi_{p},\\sigma_{a}^{2}\\) which have to be estimated from the data.\n\n\\begin{nprop}\nThe autoregressive model is a particular case of a linear filter\n\\end{nprop}\n\\begin{proof}\n  Although we will not be strictly formal in this proof, we will give an intuition on the iterative process that has to be done.\n\n  Consider the term \\(\\tilde z_{t-1}\\); let us eliminate it. Recall that\n  \\[\n    \\tilde z_{t-1} = \\phi_{1}\\tilde z_{t-2} + \\dots + \\phi_{p} \\tilde z_{t-p-1} + a_{t-1}.\n  \\]\n  We can substitute this term in the expression of the AR model given in Equation \\eqref{model:AR}. The same can be done for \\(\\tilde z_{t-2}\\) and so on, to yield eventually an infinite series in the \\(a\\) terms.\n\n\\end{proof}\n\n  In the case where \\(p=1\\), we have the AR process \\(\\tilde z_{t} = \\phi \\tilde z_{t-1} + a_{t}\\). After \\(m\\) successive substitutions of \\(\\tilde z_{t-j} = \\phi \\tilde z_{t-j-1} + a_{t-j} \\), with \\(j = 1,\\dots,m\\), we obtain\n  \\[\n    \\tilde z_{t} = \\phi^{m+1}\\tilde z_{t-m-1}+ a_{t} + \\phi a_{t-1} + \\phi^{2}a_{t-2} + \\dots + \\phi^{m}a_{t-m}\n  \\]\n  Now, if we take the limit \\(m\\to \\infty\\) this leads to the \\emph{convergent infinite series representation} \\(\\tilde z_{t} = \\sum_{j=0}^{\\infty}\\phi^{j}a_{t-j}\\), with \\(\\psi_{j} = \\phi^{j}, j \\geq 1\\), provided that \\(\\abs{\\phi} < 1\\). In the general AR case,\n  \\[\n    \\phi(B) \\tilde z_{t} = a_{t}\n  \\]\n  is equivalent to\n  \\[\n    \\tilde z_{t} = \\phi^{-1}(B) a_{t} = \\psi(B)a_{t}, \\quad \\quad \\psi(B) = \\phi^{-1}(B) = \\sum_{j=0}^{\\infty}\\psi_{j}B^{j}.\n  \\]\n\n  AR processes can be stationary or nonstationary. From the definition, it is clear that for an AR process to be stationary, the coefficients \\(\\phi\\) must be such that the weights \\(\\psi_{1},\\psi_{2},\\dots\\) in \\(\\psi(B) = \\phi^{-1}(B)\\) form a convergent series. A \\textbf{necessary requirement} for stationarity is that the autoregressive operator \\(\\phi(B)\\), considered as a polynomial in \\(B\\) of degree \\(p\\), must have all its roots greater than \\(1\\) in absolute value.\n\n  \\subsection{Application: Linear Prediction Coefficients in Speech Coding}\n\n\n  Let us now turn to the topic of \\textbf{Speech Coding}. It is considered that a speech sample can be approximated as a linear combination of the past samples, which is how an AR model behaves. We have to find the coefficients that best suit our problem, using, for instance, the mean squared prediction error. 
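\n\nAs an illustration, here is a minimal least-squares sketch of that fit, written with numpy. It is one possible estimator, not the only one (classical LPC implementations often use the autocorrelation method with the Levinson--Durbin recursion instead), and all names below are illustrative.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef lpc_coefficients(x, p):\n    # Estimate AR/LPC coefficients a_1..a_p for a frame x by\n    # minimizing the mean squared prediction error (least squares).\n    n = len(x)\n    # Row t holds the p past samples x[t-1], ..., x[t-p].\n    A = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])\n    b = x[p:]\n    a, *_ = np.linalg.lstsq(A, b, rcond=None)\n    return a, b - A @ a  # coefficients and prediction error e[n]\n\n# Toy usage: recover the coefficients of a synthetic AR(2) signal.\nrng = np.random.default_rng(0)\nx = np.zeros(500)\nfor t in range(2, 500):\n    x[t] = 1.3 * x[t - 1] - 0.4 * x[t - 2] + rng.normal()\na, e = lpc_coefficients(x, p=2)  # a is close to (1.3, -0.4)\n\\end{verbatim}\n\n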
We use the obtained \\textbf{Linear Prediction Coefficients (LPCs)} to represent the signal frame.\n\n  Using this technique, we would be \\textbf{reducing} the signal size significantly. However, since we are only approximating the signal, we would most probably be losing information. Two examples of audio coding formats are:\n\n  \\begin{itemize}\n    \\item MP3: which produces a different audio signal, involving loss of information\n          \\item FLAC: where the output is equal to the input; no loss of information\n\\end{itemize}\n\nSignals are digitized using a coding system.\n\n\\begin{figure}[!h]\n  \\centering\n  \\includegraphics[scale=0.5]{Figures/SpeechCodingSystem}\n  \\caption{Block diagram of a speech coding system.}\n\\end{figure}\n\nThe filter eliminates aliasing and the sampler performs the continuous-to-discrete-time conversion.\n\n\\begin{example}\n  In this example, we present the digital CD audio signal and why we would like to reduce its size without losing information. This signal has the following properties:\n\n  \\begin{enumerate}\n    \\item Sample rate \\(\\Omega_{s} = 44.1\\ \\operatorname{kHz}\\)\n    \\item Bits per sample: \\(16\\)\n          \\item 2 channels (although sometimes 3 are used)\n  \\end{enumerate}\n\n  With these properties, the input bit rate is\n  \\[\n    R = \\Omega_{s} \\ \\cdot \\ \\operatorname{Bits/sample} \\ \\cdot \\ \\operatorname{Channels} = 44.1\\times 10^{3} \\cdot 16 \\cdot 2 = 1411200 \\ \\frac{\\operatorname{bits}}{s} = 1.41 \\ \\frac{\\operatorname{Mb}}{s},\n  \\]\n  which implies that a single minute would need\n  \\[\n    60 \\operatorname{s} \\ \\times \\ 1.4112 \\ \\frac{\\operatorname{Mb}}{s} \\ \\times \\ \\frac{1 \\operatorname{byte}}{8 \\operatorname{bits}} = 10.09 \\operatorname{MB},\n  \\]\n  which is a lot of storage for a single minute of audio.\n\\end{example}\n\n\\begin{example}\n  In this example, we will present the input bit rate for the digital speech signal. Its common properties are:\n    \\begin{enumerate}\n    \\item Sample rate \\(\\Omega_{s} = 8 \\ \\operatorname{kHz}\\)\n    \\item Bits per sample: \\(16\\)\n          \\item 1 channel\n    \\end{enumerate}\n\n  With these properties, the input bit rate is \\(128\\) kb per second.\n\n\\end{example}\n\nAs a quick note, remember that \\textbf{to quantize} a continuous time series is to assign it discrete amplitude values. When we do this, we are introducing a \\textbf{quantization error},\n\\[\n\\operatorname{error}(t) = z_{\\operatorname{quantized}}(t) - z_{\\operatorname{original}}(t)  \n\\]\nIn each \\(t \\in \\mathbb R\\), this error can be positive or negative. We can consider that the error is additive.\\\\\n\nLet us consider a simplified version of the speech production model. Consider that there is a source and a filter, such as in the next figure:\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=0.6]{Figures/SpeechGeneration}\n\\caption{LPC model of speech production.}\n\\end{figure}\n\nThen,\n\n\\begin{itemize}\n  \\item We assume that we can separate the voice into non-overlapping frames that are short enough to keep the model parameters constant.\n  \\item Then, we estimate the model parameters for each frame. These parameters are: voicing, gain (energy level of the frame), filter coefficients (response of the synthesis filter), pitch period (time length between consecutive excitation impulses)\n\\end{itemize}\n\n\\subsubsection*{Using the AR model to compute the synthesis filter coefficients}\n\nWe can use the autoregressive model to predict a speech sample. 
Let \\(x[n]\\) be the discrete signal. The prediction aims to find the coefficients \\(a_k\\), \\(k = 1,\\dots, p\\) such that we can compute that sample using the previous samples\n\n\\[\nx[n] \\approx \\sum_{k=1}^p a_k \\ x[n-k] \\implies x[n] = \\sum_{k=1}^p a_k \\ x[n-k] + e[n] = \\sum_{k=1}^p a_k \\ x[n-k] + Ge'[n],\n\\]\nwhere \\(e[n]\\) is the error at time step \\(n\\), \\(e'[n]\\) is the theoretical excitation and \\(G\\) is the gain. We \\textbf{minimize the mean squared error} (also called mean energy) of the prediction error \\(e[n]\\) in order to fit this model.\n\n\\section{ARMA, (S)ARIMA(X) and Multivariate series}\n\n\nIn this section we will generalize the Autoregressive model, adding complexity (and thus more generalization capability) to it.\n\n\\subsection{MA and ARMA}\n\nFirstly, we bring back the general expression of a linear filter given in Equation \\eqref{linear:filter}:\n\\[\n\\tilde z_t = a_t + \\sum_{j=1}^\\infty \\psi_j a_{t-j}. \n\\]\n\nWe can consider a special case when only the first \\(q \\in \\mathbb N\\) coefficients are nonzero.\n\n\\begin{ndef}\nA \\textbf{Moving Average (MA)} process of order \\(q\\) is a linear filter where only the first \\(q\\) terms are nonzero\n\\begin{equation}\\label{model:MA}\n\\tilde z_t = a_t - \\theta_1 a_{t-1} - \\dots - \\theta_q a_{t-q}.\n\\end{equation}\n\\end{ndef}\nAs can be seen, we now use the symbols \\(-\\theta_1,\\dots, -\\theta_q\\) for the finite set of \\emph{weights}. What we are doing is \\emph{smoothing} the white noise \\(a_t\\).\n\n\nRecall that, as we did before, we can express the moving average model as\n\\[\n\\tilde z_t = (1- \\theta_1 B - \\theta_2 B^2 - \\dots - \\theta_q B^q) a_t = \\theta(B) a_t.\n\\]\n\nMoving Average models are \\textbf{not adequate} when the series has \\emph{autocorrelation} (that is, a relation to its past values). Also, real-life time series are ``more than white noise'' to smooth.\n\nA solution to these disadvantages is to combine the MA model with the AR model linearly:\n\n\\begin{ndef}\nThe \\textbf{Autoregressive-Moving Average (ARMA)} process of order \\(p,q\\) is defined as a linear combination of both models\n\\[\n\\tilde z_t = \\sum_{i = 1}^p \\phi_i \\tilde z_{t-i} + a_t - \\sum_{j = 1}^q \\theta_j a_{t-j}, \n\\]\nor \n\\[\n\\phi(B)\\tilde z_t = \\theta(B)a_t. \n\\]\n\\end{ndef}\n\nUsing an ARMA model, we are not only capturing the relationships between a point \\(\\tilde z_t\\) and its previous ones (AR), but also smoothing the influence of the white noise (MA).\n\nA great \\textbf{disadvantage} of the (stationary) ARMA models is precisely that they are \\textbf{stationary}, so we cannot model nonstationary time series with them.\n\nThe following proposition presents a very interesting result on ARMA models:\n\n\\begin{nprop}\n  An ARMA process is stationary if all the roots of \\(\\phi(B) = 0\\) have modulus greater than one, and it is (explosively) nonstationary if some roots have modulus less than one.\n\n\\end{nprop}\n\nThe remaining case, where roots of \\(\\phi(B) = 0\\) lie \\textbf{on} the unit circle, is very interesting. Nonseasonal series are often well represented by models in which one or more roots are unitary.\n\\section{Nonstationarity}\n\nMany time series appearing in real life have nonstationary behaviour. Thus, we have to obtain new models to be able to make predictions of future values.\n\nHowever, we can decompose these time series and treat them separately. 
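\n\nAs a quick practical aside, one common off-the-shelf way to obtain such a decomposition of the components named below is the moving-average based \\texttt{seasonal\\_decompose} from the Python \\texttt{statsmodels} library. The following sketch, on a synthetic monthly series with period 12, is only an illustration of the idea, not a method used later in these notes.\n\n\\begin{verbatim}\nimport numpy as np\nimport pandas as pd\nfrom statsmodels.tsa.seasonal import seasonal_decompose\n\n# Synthetic monthly series: linear trend + seasonality + noise.\nrng = np.random.default_rng(0)\nt = np.arange(120)\nz = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)\nseries = pd.Series(z, index=pd.date_range('2010-01', periods=120, freq='MS'))\n\n# Additive decomposition z_t = T_t + S_t + R_t with period 12.\nresult = seasonal_decompose(series, model='additive', period=12)\ntrend, seasonal, resid = result.trend, result.seasonal, result.resid\n\\end{verbatim}\n\n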
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.7]{Figures/Decomp}\n\\caption{Multiplicative decomposition of a time series.}\n\\end{figure}\n\n\n\nTypically, we use the following components (all of them at time \\(t\\)):\n\n\\begin{enumerate}\n\\item \\(T_t\\), the \\textbf{trend} component, reflecting the long-term progression of the series. It exists when there is a persistent increasing or decreasing direction on the data. It is not necessarily linear.\n\n\\item \\(S_t\\), the \\textbf{seasonal} component, reflecting the seasonal variation. A seasonal pattern exists when a time series is influenced by seasonal factors. Seasonality occurs over a fixed known period of time.\n\n\\item \\(R_t\\), the \\textbf{residual} component, describing randomness or irregular influences.\n\\end{enumerate}\n\nOccasionally, an additional cyclical component \\(C_t\\) is considered, but we will not consider that case. With the considered components, using an \\textbf{additive} model (used when the variations around the trend do not vary with the level of the time series), we can think of the time series as \n\\[\nz_t = T_t + S_t + R_t.\n\\]\nUsing a \\textbf{multiplicative} model (used when the trend is proportional to the level of the time series), our time series can be written as:\n\\[\nz_t = T_t \\times S_t \\times R_t. \n\\]\n\nHaving the different components of a time series, we can look at the properties of each of the individual components and study them separately. Some properties that will help us in the creation of new models are:\n\n\\begin{enumerate}\n\\item The residual \\(R_t\\) is usually stationary, so we can use an already known model.\n\\item The trend \\(T_t\\) is usually a smooth function, which we can difference (one or multiple times) in order to \\emph{make it disappear}.\n\n\\item The seasonality \\(S_t\\) has a periodic component.\n\\end{enumerate}\n\n\n\n\\subsection{ARIMA}\n\nFirstly, we will deal with the trend \\(T_t\\). As we have said, we can make it \\emph{disappear} by differencing it.\n\nIt can be shown that if \\(d\\) roots of the generalized autoregressive operator \\(\\varphi(B)\\) are unitary, then this operator can be written as\n\\[\n\\varphi(B) = \\phi(B)(1-B)^{d},\n\\]\nwhere \\(\\phi(B)\\) is a stationary autoregressive operator. Thus, a model that can represent homogeneous nonstationary behaviour has the form:\n\\[\n\\varphi(B)z_{t} = \\phi(B)(1-B)^{d}z_{t} = \\theta(B)a_{t}.\n\\]\n\nNow, if we name\n\\[\nw_{t} = (1-B)^{d}z_{t} = \\nabla^{d}z_{t},\n\\]\nwe can rewrite the previous equation as\n\\[\n\\phi(B)w_{t} = \\theta(B)a_{t},\n\\]\nand we are representing homogeneous nonstationary behaviour using the \\(d\\)-th difference of the process, which we take to be stationary. 
In practice, \\(d\\) is not usually greater than \\(2\\).\n\n\nWe can now use this reasoning to give a formal definition of the ARIMA process.\n\n\\begin{ndef}\n  The \\textbf{Autoregressive integrated moving average process (ARIMA)} of order \\(p,d,q\\) is defined by:\n  \\[\n    w_{t} = \\sum_{i = 1}^{p} \\phi_{i}w_{t-i} + a_{t} - \\sum_{j = 1}^{q} \\theta_{j}a_{t-j},\n  \\]\n  where \\(w_{t} = \\nabla^{d}z_{t}\\).\n\\end{ndef}\n\n\\begin{remark}\nIf we replace \\(w_{t}\\) by \\(z_{t} - \\mu\\), in the \\(d = 0\\) case, the model includes the \\emph{stationary mixed (ARMA) model}, the AR and the MA models.\n\\end{remark}\n\n\nThe following explanation gives an intuition of why the model is called \\emph{integrated} (although it probably should be called \\emph{summed}):\\\\\n\nLet us find the \\textbf{inverse relation} for \\(w_{t} = (1-B)^{d}z_{t} = \\nabla^d z_{t}\\). Consider\n\\[\nS = \\nabla^{-1} = (1-B)^{-1} = \\sum_{i = 0}^{\\infty}B^{i}.\n\\]\nThen, we can consider this \\emph{inverse relation} expressed as:\n\\[\nz_{t} = S^{d}w_{t}, \\quad \\text{where} \\quad Sw_{t} = \\sum_{j = 0}^{\\infty}w_{t-j}.\n\\]\n\nHence, it can be said that ARIMA may be generated by \\emph{summing} (or integrating) the stationary ARMA process \\(w_{t}\\), \\(d\\) times.\\\\\n\nTo sum up, ARIMA has \\textbf{three steps}:\n\n\\begin{enumerate}\n  \\item Difference \\(d\\) times to \\emph{remove} the trend:\n        \\[\n        w_{t} = (1-B)^{d} z_{t} = \\nabla^{d}z_{t}.\n        \\]\n  \\item Apply the stationary ARMA model to \\(w_{t}\\):\n        \\[\n  w_{t} = \\sum_{i = 1}^{p} \\phi_{i}w_{t-i} + a_{t} - \\sum_{j = 1}^{q} \\theta_{j}a_{t-j}.\n        \\]\n  \\item Predict, \\emph{reversing} the differencing:\n        \\[\n z_{t} = S^{d}w_{t}\n        \\]\n\n\\end{enumerate}\n\n\\subsubsection{SARIMA}\n\nHaving eliminated the trend component, we would now like to estimate the \\textbf{seasonality} of the time series, assuming a multiplicative or additive relation. We perform the following extension of ARIMA.\n\n\\begin{ndef}\n  Consider a nonstationary model whose \\textbf{seasonal} component has period \\(S\\). Apply ARIMA to obtain a model that has \\(S\\) temporal units. Then, we obtain the \\textbf{Seasonal ARIMA (SARIMA)} of period \\(S\\):\n  \\[\n    \\Phi(B^{S})\\nabla_{S}^{D}s_{t} = \\Theta (B^{S}) \\alpha_{t}.\n  \\]\n\\end{ndef}\n\nIt is common to assume that the seasonality is \\emph{multiplicative}, obtaining the model \\emph{SARIMA \\((p,d,q)\\times (P,D,Q)\\)}. This seasonal component modulates the amplitude of the rest of the components of the time series.\n\n\n\\subsection{Exogenous variables}\n\nThere is a special case in which we consider variables outside our model that directly affect our time series. To model this case, we use AR and MA models, and add extra coefficients to the already known equations. Let us define one of the models:\n\n\\begin{ndef}\n  Consider an ARMA model. Adding exogenous variables to it results in the \\textbf{ARMAX} model, which has the following expression:\n  \\[\n    \\tilde z_{t} = \\sum_{i = 1}^{p}\\phi_{i} \\tilde z_{t-i} + a_{t} - \\sum_{j = 1}^{q} \\theta_{j}a_{t-j} - \\sum_{k=1}^{r} \\beta_{k} e_{t-k}.\n  \\]\n\\end{ndef}\n\nIn the same way, \\textbf{ARX, MAX, ARIMAX and SARIMAX} models can be defined.\n\n\n\\section{Model selection and fitting. Box-Jenkins method.}\n\nNow that we know all these fantastic models, we would like to apply them in time series analysis. 
We would like to choose which of the models \\textbf{best fits} a given time series.\n\nFirstly, we will introduce two definitions of functions that will be used to determine the best \\emph{hyperparameters} of our models. Since our purpose is to look at the \\emph{already-computed} functions, the following definitions will only give an intuitive idea and will not be very formal.\n\n\\begin{ndef}\nThe \\textbf{autocorrelation function (ACF)} is the correlation between a signal and a delayed copy of the signal. It is useful for finding repeating patterns (periodicity) or a missing fundamental frequency.\n\\end{ndef}\n\n\\begin{note}\nUnit root processes, trend-stationary processes, AR and MA processes are specific forms of processes with autocorrelation.\n\\end{note}\n\n\\begin{ndef}\nThe \\textbf{partial autocorrelation function (PACF)} gives the partial correlation of a stationary time series with its own lagged values, \\emph{regressing out the values of the time series at all shorter lags}. That is, it eliminates the influence of the other lag values.\n\\end{ndef}\n\nWith these two definitions, we proceed to describe the Box-Jenkins Method. We execute the following steps:\n\n\\begin{enumerate}\n  \\item Model class postulation. In this step, we select a family of models that we postulate our model will be in. As an example, we can consider \\emph{linear filter based models}.\n\n  \\item Model identification: There are variations of what should be done in this step. We will consider the following:\n        \\begin{enumerate}\n          \\item Identifying stationarity:\n                \\begin{enumerate}\n                  \\item Trend: it can be detected by differencing the time series and checking that the autocorrelation function collapses to (approximately) zero at every lag other than \\(0\\)\n                        \\item Seasonality: Using the autocorrelation, we must find if there is a peak in the ACF at a specific lag.\n                \\end{enumerate}\n          \\item Eliminating \\emph{seasonality and trend}\n\n          \\item Determining the type of stationary model that we will use, looking at the ACF signal. As a \\textbf{guide}, Table \\ref{acf:select:model} can be used.\n\n                \\begin{table}[H]\n\\begin{tabular}{l|l}\nACF                                                                  & Model                      \\\\ \\hline\nExponential decay as \\(|lag|\\) increases                             & AR                         \\\\\nAlternating positive/negative decay as \\(|lag|\\) increases, or a smoothed sinusoid & AR                         \\\\\nA few peaks at lags other than \\(0\\)               & MA                         \\\\\nDecay as \\(|lag|\\) increases after an initial \\(lag_0\\)               & ARMA                       \\\\\nAll values equal to zero except a peak at lag \\(0\\)                       & White Noise                \\\\\nHigh values at fixed intervals                                       & Seasonal component    \\\\\nNo decay to zero                                                     & Nonstationary time series\n\\end{tabular}\n\\caption{Guide to select model using ACF.}\n\\label{acf:select:model}\n\\end{table}\n        \\end{enumerate}\n\n\\end{enumerate}\n\nHaving selected our model, we have to find the optimal hyperparameters \\(p\\) and \\(q\\) that best fit our data. 
Usually, we will use:\n\n\\begin{itemize}\n  \\item \\(p\\) such that the last \\emph{important peak} in the PACF occurs at lag \\(p\\).\n        \\item \\(q\\) such that the last \\emph{important peak} in the ACF occurs at lag \\(q\\).\n\\end{itemize}\n\nThere exist other theoretical strategies that can be followed, but we will not explain them in these notes.\n\nLastly, it has to be mentioned that there are different measures that quantify the \\emph{goodness} of the fitted model, such as the \\textbf{Akaike Information Criterion} or the \\textbf{Bayesian Information Criterion}.\n\n\n\\begin{note}\nThe exogenous-variable models may have some variations in the Box-Jenkins method.\n\\end{note}\n", "meta": {"hexsha": "1719018b9d1acb8b8bc7f9f16ca69876840d50b2", "size": 25253, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/1-basic-models.tex", "max_stars_repo_name": "fjsaezm/pit-notes", "max_stars_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/1-basic-models.tex", "max_issues_repo_name": "fjsaezm/pit-notes", "max_issues_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/1-basic-models.tex", "max_forks_repo_name": "fjsaezm/pit-notes", "max_forks_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.8108651911, "max_line_length": 470, "alphanum_fraction": 0.6953233279, "num_tokens": 7260, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7931059487389966, "lm_q1q2_score": 0.5894109685430726}}
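A minimal sketch of using the information criteria mentioned at the end of the notes above to compare candidate ARIMA orders (assuming the Python \\texttt{statsmodels} library; its \\texttt{ARIMA} class and \\texttt{aic} attribute are the assumed API, and the data is a toy series):
\\begin{verbatim}
# Choose (p, q) for fixed d = 1 by minimising the AIC.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
z = np.cumsum(rng.standard_normal(300))   # toy integrated series

scores = {(p, q): ARIMA(z, order=(p, 1, q)).fit().aic
          for p in range(3) for q in range(3)}
print(min(scores, key=scores.get))        # (p, q) with the smallest AIC
\\end{verbatim}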
{"text": "\\documentclass{article}\n\n\\usepackage{texing361}\n\n\\title{CS 361 Formula Mega Thread}\n\\author{Anonymous Comp}\n\\date{January 11, 2021}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{TeXing361 Commands}\n\\begin{itemize}\n    \\item Set: $\\set{1, 2, 3, \\cdots}$\n    \\item Vector (Arrow): $\\avec{a}$\n    \\item Vector (Bold): $\\bvec{a}$\n    \\item Matrix: $\\mat{A}$\n    \\item Eigenspace: $\\eig_{2}\\mat{A}$\n    \\item Determinant: $\\det\\mat{A}$\n    \\item Kernel: $\\ker\\mat{A}$\n    \\item Diagonal Matrix: $\\diag{(1, 2, 3)}$\n    \\item Norm: $\\norm{\\bvec{a}}$\n    \\item Inner Product: $\\vecdot{\\avec{a}}{\\avec{b}}$\n    \\item Expected Value: $\\E{X}$\n    \\item Variance: $\\V{X}$\n    \\item Covariance: $\\Cov{X}$\n    \\item Argmin/Argmax: $\\argmin{x}{f(x)}$\n\\end{itemize}\n\n\\newpage\n\\section{Common Probability Distributions \\protect\\footnote{Excerpted from \\emph{Probability and Statistical Inference, 9th Edition}}}\n    \\subsection{Discrete}\n        \\begin{enumerate}[(a)]\n            \\item Bernoulli $(0 < p < 1)$\n            $$\n            \\begin{aligned}\n                & p(x) = p^x(1 - p)^{1 - x}, \\;\\;\\;\\; x = 0, 1 \\\\\n                & M(t) = 1 - p + pe^t \\\\\n                & \\mu = p, \\; \\sigma^2 = p(1 - p) \\\\\n            \\end{aligned}\n            $$\n            \n            \\item Binomial $(0 < p < 1)$\n            $$\n            \\begin{aligned}\n                & p(x) = \\binom{n}{x} p^x(1 - p)^{n - x}, \\;\\;\\;\\; x = 0, 1, \\cdots, n \\\\\n                & M(t) = (1 - p + pe^t )^n \\\\\n                & \\mu = np, \\; \\sigma^2 = np(1 - p) \\\\\n            \\end{aligned}\n            $$\n            \n            \\item Geometric $(0 < p < 1)$\n            $$\n            \\begin{aligned}\n                & p(x) = (1 - p)^{x - 1}p, \\;\\;\\;\\; x = 1, 2, \\cdots \\\\\n                & M(t) = \\frac{pe^t}{1 - (1 - p)e^t}, \\;\\;\\;\\; t < -\\ln{(1 - p)}\\\\\n                & \\mu = \\frac{1}{p}, \\; \\sigma^2 = \\frac{1 - p}{p^2}\n            \\end{aligned}\n            $$\n\n            \\item Hypergeometric $(N_1 > 0, \\; N_2 > 0, \\; N = N_1 + N_2)$\n            $$\n            \\begin{aligned}\n                & p(x) = \\frac{\\binom{N_1}{x}\\binom{N_2}{n - x}}{\\binom{N}{n}}, \\;\\;\\;\\; x < N_1, \\; n - x < N_2 \\\\\n                & \\mu = n\\left(\\frac{N_1}{N}\\right), \\; \\sigma^2 = n\\left(\\frac{N_1}{N}\\right)\\left(\\frac{N_2}{N}\\right)\\left(\\frac{N - n}{N - 1}\\right)\n            \\end{aligned}\n            $$\n            \n            \\item Negative binomial $(0 < p < 1, \\; r = 1, 2, 3, \\cdots)$\n            $$\n            \\begin{aligned}\n                & p(x) = \\binom{x - 1}{r - 1}p^r(1 - p)^{x - r}, \\;\\;\\;\\; x = r, r + 1, r + 2, \\cdots \\\\\n                & M(t) = \\frac{(pe^t)^r}{[1 - (1 - p)e^t]^r}, \\;\\;\\;\\; t < -\\ln{(1 - p)} \\\\\n                & \\mu = r\\left(\\frac{1}{p}\\right), \\; \\sigma^2 = \\frac{r(1 - p)}{p^2}\n            \\end{aligned}\n            $$\n            \n            \\item Poisson $(\\lambda > 0)$\n            $$\n            \\begin{aligned}\n                & p(x) = \\frac{\\lambda^x e^{-\\lambda}}{x!}, \\;\\;\\;\\; x = 0, 1, 2, \\cdots \\\\\n                & M(t) = e^{\\lambda(e^t - 1)} \\\\\n                & \\mu = \\lambda, \\sigma^2 = \\lambda\n            \\end{aligned}\n            $$\n            \n            \\item Uniform $(m > 0)$\n            $$\n            \\begin{aligned}\n                & p(x) = 
\\frac{1}{m}, \\;\\;\\;\\; x = 1, 2, \\cdots, m \\\\\n                & \\mu = \\frac{m + 1}{2}, \\sigma^2 = \\frac{m^2 - 1}{12}\n            \\end{aligned}\n            $$\n        \\end{enumerate}\n\n\n    \\subsection{Continuous}\n        \\begin{enumerate}[(a)]\n            \\item Beta $(\\alpha > 0, \\beta > 0)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}x^{\\alpha - 1}(1 - x)^{\\beta - 1}, \\;\\;\\;\\; 0 < x < 1 \\\\\n                & \\mu = \\frac{\\alpha}{\\alpha + \\beta}, \\sigma^2 = \\frac{\\alpha\\beta}{(\\alpha + \\beta + 1)(\\alpha + \\beta)^2}\n            \\end{aligned}\n            $$\n            \n            \\item Chi-square $(r = 1, 2, \\cdots)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{1}{\\Gamma(r / 2) 2^{r / 2}}x^{r / 2 - 1}e^{-x/2}, \\;\\;\\;\\; 0 \\leq x < \\infty \\\\\n                & M(t) = \\frac{1}{(1 - 2t)^{r / 2}}, \\;\\;\\;\\; t < \\frac{1}{2} \\\\\n                & \\mu = r, \\sigma^2 = 2r\n            \\end{aligned}\n            $$\n            \n            \\item Exponential $(\\theta > 0)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{1}{\\theta}e^{-x/\\theta}, \\;\\;\\;\\; 0 \\leq x < \\infty \\\\\n                & M(t) = \\frac{1}{1 - \\theta t}, \\;\\;\\;\\; t < \\frac{1}{\\theta} \\\\\n                & \\mu = \\theta, \\sigma^2 = \\theta^2\n            \\end{aligned}\n            $$\n            \n            \\item Gamma $(\\alpha > 0, \\theta > 0)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{1}{\\Gamma(\\alpha)\\theta^\\alpha}x^{\\alpha - 1}e^{-x/\\theta}, \\;\\;\\;\\; 0 \\leq x < \\infty \\\\\n                & M(t) = \\frac{1}{(1 - \\theta t)^\\alpha}, \\;\\;\\;\\; t < \\frac{1}{\\theta} \\\\\n                & \\mu = \\alpha\\theta, \\sigma^2 = \\alpha\\theta^2\n            \\end{aligned}\n            $$\n            \n            \\item Normal $(-\\infty < \\mu < \\infty, \\sigma > 0)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-(x - \\mu)^2 / 2\\sigma^2}, \\;\\;\\;\\; -\\infty < x < \\infty \\\\\n                & M(t) = e^{\\mu t + \\sigma^2t^2 / 2} \\\\\n                & \\E{X} = \\mu, \\V{X} = \\sigma^2\n            \\end{aligned}\n            $$\n            \n            \\item Uniform $(a < b)$\n            $$\n            \\begin{aligned}\n                & f(x) = \\frac{1}{b - a}, \\;\\;\\;\\; a < x < b \\\\\n                & M(t) = \\frac{e^{tb} - e^{ta}}{t(b - a)}, \\;\\;\\;\\; t \\neq 0; \\; M(0) = 1 \\\\\n                & \\mu = \\frac{a + b}{2}, \\sigma^2 = \\frac{(b - a)^2}{12}\n            \\end{aligned}\n            $$\n        \\end{enumerate}\n\n\\newpage\n\\section{Boilerplate}\n    \\subsection{Maximum Likelihood Estimation}\n    \\begin{itemize}\n        \\item Finding the likelihood function,\n        $$\n        \\begin{aligned}\n            \\mathcal{L}(\\Psi) &= \\prod_{i = 1}^n f(x_i; \\Psi)\\\\\n            &= \\prod_{i = 1}^n \\frac{2}{\\sqrt{\\pi\\Psi}}e^{-x_i^2/\\Psi}\\\\\n            &= \\left(\\frac{2}{\\sqrt{\\pi\\Psi}}\\right)^n\\prod_{i = 1}^n e^{-x_i^2 / \\Psi}.\n        \\end{aligned}\n        $$\n        \\item Taking the natural log of both sides,\n        $$\n        \\ln\\mathcal{L}(\\Psi) = n\\ln\\frac{2}{\\sqrt{\\pi\\Psi}} - \\frac{1}{\\Psi}\\sum_{i = 1}^n x_i^2.\n        $$\n        \\item Taking the derivative w.r.t.\\ $\\Psi$ and setting it to 
zero,\n        $$\n        \\frac{\\text{d}}{\\text{d}\\Psi} \\ln\\mathcal{L}(\\Psi) = \\frac{2 \\sum_{i = 1}^n x_i^2 - n \\Psi }{2 \\Psi ^2} = 0.\n        $$\n        \\item Solving for $\\Psi$,\n        $$\n        \\boxed{\\hat{\\Psi} = \\frac{2\\sum_{i = 1}^n x_i^2}{n}}.\n        $$\n    \\end{itemize}\n\n    \\subsection{Hypothesis Testing for Population Mean}\n    \\begin{itemize}\n        \\item Calculate the test statistic.\n        If the population standard deviation is known,\n        $$\n        Z = \\frac{\\Bar{X} - \\mu_0}{\\sigma / \\sqrt{n}}.\n        $$\n        If the population standard deviation is unknown,\n        $$\n        T = \\frac{\\Bar{X} - \\mu_0}{s / \\sqrt{n}}\n        $$\n        where $s$ is the sample standard deviation.\n        \\item Determine the $p$-value by integration, table, or technology (or rejection region). Be careful with the underlying distribution you are choosing.\n        \\item Compare and reach a conclusion (reject or fail to reject).\n    \\end{itemize}\n    \n    \\subsection{Code Highlight}\n    \\begin{itemize}\n        \\item Insert Code Block Directly\n        \\usemintedstyle{vs} % Or choose your own favourite\n        \\begin{minted}{python}\n            import numpy as np\n                \n            def incmatrix(genl1, genl2):\n                m = len(genl1)\n                n = len(genl2)\n                M = None # to become the incidence matrix\n                VT = np.zeros((n * m, 1), int)  # dummy variable\n                \n                # compute the bitwise xor matrix\n                M1 = bitxormatrix(genl1)\n                M2 = np.triu(bitxormatrix(genl2), 1) \n            ...\n        \\end{minted}\n        \\item Insert Code Block from External File\n        \\inputminted{python}{minted_example.py}\n    \\end{itemize}\n    \n\\end{document}", "meta": {"hexsha": "696eec0a28addec814b9e8e700a84a4e634211d7", "size": 8233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pastebin361.tex", "max_stars_repo_name": "cs361-illinois/TeXing361", "max_stars_repo_head_hexsha": "536ca1d31c02115669580476b8c9b7b6f0fde1b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pastebin361.tex", "max_issues_repo_name": "cs361-illinois/TeXing361", "max_issues_repo_head_hexsha": "536ca1d31c02115669580476b8c9b7b6f0fde1b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pastebin361.tex", "max_forks_repo_name": "cs361-illinois/TeXing361", "max_forks_repo_head_hexsha": "536ca1d31c02115669580476b8c9b7b6f0fde1b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7660550459, "max_line_length": 159, "alphanum_fraction": 0.4139438844, "num_tokens": 2836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5893413130114858}}
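A quick numerical sanity check of the boxed maximum likelihood estimator above: the density $f(x; \\Psi) = \\frac{2}{\\sqrt{\\pi\\Psi}}e^{-x^2/\\Psi}$ is the half-normal density with $\\sigma^2 = \\Psi/2$, so samples can be drawn as $|N(0, \\Psi/2)|$ (a sketch assuming \\texttt{numpy}):
\\begin{verbatim}
# Verify that psi_hat = 2 * sum(x_i^2) / n recovers the true parameter.
import numpy as np

rng = np.random.default_rng(0)
psi_true = 3.0
x = np.abs(rng.normal(0.0, np.sqrt(psi_true / 2), size=100_000))
psi_hat = 2 * np.sum(x**2) / len(x)
print(psi_hat)   # approximately 3.0
\\end{verbatim}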
{"text": "\\documentclass[a4paper,12pt]{article}\n% decent example of doing mathematics and proofs in LaTeX.\n% An Incredible degree of information can be found at\n% http://en.wikibooks.org/wiki/LaTeX/Mathematics\n\n% Use wide margins, but not quite so wide as fullpage.sty\n\\marginparwidth 0.1in \n\\oddsidemargin 0.05in \n\\evensidemargin 0.05in \n\\marginparsep 0.05in\n\\topmargin 0.05in \n\\textwidth 6in \\textheight 8 in\n% That's about enough definitions\n\n\\usepackage{amsmath, amsthm, amssymb, amsfonts}\n\\usepackage{mathtools}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\n\\makeatletter\n\\renewenvironment{proof}[1][\\proofname] {\\par\\pushQED{\\qed}\\normalfont\\topsep6\\p@\\@plus6\\p@\\relax\\trivlist\\item[\\hskip\\labelsep\\bfseries#1\\@addpunct{.}]\\ignorespaces}{\\popQED\\endtrivlist\\@endpefalse}\n\\makeatother\n\n\\newtheoremstyle{break}\n  {\\topsep}{\\topsep}%\n  {\\itshape}{}%\n  {\\bfseries}{}%\n  {\\newline}{}%\n\\theoremstyle{break}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{definition}[theorem]{Definition}\n\\DeclarePairedDelimiter{\\norm}{\\lVert}{\\rVert}\n\n\n\\begin{document}\n\n\\title{Calculus of Multivariate Functions}\n\\author{Hyeonsu Lyu, Department of Electrical Engineering, UNIST}\n\\date{July 3, 2018}\n\\maketitle\n\n\\section{Inverse and Implicit function theorem}\n\\begin{theorem} [The inverse function theorem] \n    \\label{thm:InverseFunctionTheorem}\n    Suppose $f$ is a $\\mathcal{C}^1$-mapping of an open set $E\\subset \\mathbb{R}^n$ into $\\mathbb{R}^n$, that $f'(a)$ is invertible for some $a\\in E$, and that $b=f(a)$. Then,\\\\\n    \\indent (a)  there exist open sets $U$ and $V$ in $\\mathbb{R}^n$ such that $a\\in U, b\\in V$, $f$ is one-to-one on $U$, and $f(U)=V$ \\\\\n    \\indent (b)  if $g$ is the inverse of $f$ [which exists, by (a)], defined in $V$ by\n    \\begin{align}\n        g(f(x))=x\\ \\  (x\\in U),\n    \\end{align}\n    then $g\\in \\mathcal{C}^1(V)$.\n\\end{theorem}\n\n\\begin{theorem} [The implicit function theorem]\n    \\label{thm:ImplicitFunctionTheorem}\n    Let $f$ be a $\\mathcal{C}^1$-mapping of an open set $E\\subset \\mathbb{R}^{n+m}$ into $\\mathbb{R}^n$, such that $f(a,b)=0$ for some point $(a,b)\\in E$.\\\\\n    \\indent Put $A=f'(a,b)$ and assume that $A_{x}$ is invertible.\\\\\n    \\indent Then there exist open sets $U\\subset \\mathbb{R}^{n+m}$ and $W\\subset \\mathbb{R}^{m}$, with $(a,b)\\in U$ and $b\\in W$, having the following property:\\\\\n    \\indent To every $y\\in W$ corresponds a unique $x$ such that\\\\\n    \\begin{align}\n        (x,y)\\in\\ U\\ \\ \\ \\ \\text{and} \\ \\ \\ f(x,y)=0.\n    \\end{align}\n    \\indent If this $x$ is defined to be $g(y)$, then $g$ is a $\\mathcal{C}^1$-mapping of $W$ into $\\mathbb{R}^{n}$, $g(b)=a$,\n    \\begin{align}\n        f(g(y),y)=0\\ \\ (y\\in W),\n    \\end{align}\n    \\indent and\n    \\begin{align}\n        g'(b)=-(A_x)^{-1}A_y.\n    \\end{align}\n    \\indent The function $g$ is ``implicitly'' defined by (3). 
Hence the name of the theorem.\n\\end{theorem}\n\n\\begin{proof} [Proof of theorem \\ref{thm:InverseFunctionTheorem}]\n    \\indent (a) Put $f'(a)=A$, and choose $\\lambda$ so that\n    \\begin{align}\n        2\\lambda\\norm{A^{-1}}=1.\n        \\label{p1:1}\n    \\end{align}\n    \\indent Since $f'$ is continuous at $a$, there is an open ball $U\\subset E$, with center at $a$, such that\n    \\begin{align}\n        \\norm{f'(x)-A}<\\lambda\\ \\ (x\\in U).\n        \\label{p1:2}\n    \\end{align}\n    We associate to each $y\\in \\mathbb{R}^n$ a function $\\varphi$, defined by\n    \\begin{align}\n        \\varphi(x)=x+A^{-1}(y-f(x)) \\ \\ (x \\in E)\n        \\label{p1:3}\n    \\end{align}\n    Note that $f(x)=y$ if and only if $x$ is a fixed point of $\\varphi$.\n    Since $\\varphi'(x)=I-A^{-1}f'(x)=A^{-1}(A-f'(x))$, (\\ref{p1:1}) and (\\ref{p1:2}) imply that\n    \\begin{align}\n        \\norm{\\varphi'(x)}<\\frac{1}{2}\\ \\ (x\\in U)\n        \\label{p1:4}\n    \\end{align}\n    Hence\n    \\begin{align}\n        |\\varphi(x_1)-\\varphi(x_2)|\\leq \\frac{1}{2}|x_1-x_2|\\ \\ \\ \\ (x_1,x_2 \\in U),\n        \\label{p1:5}\n    \\end{align}\n    Then, by the contraction theorem, it follows that $\\varphi$ has at most one fixed point in $U$, so that $f(x)=y$ for at most one $x\\in U$.\n    Thus $f$ is $1-1$ in $U$.\n\n    Next, put $V=f(U)$, and pick $y_0 \\in V$. Then $y_0=f(x_0)$ for some $x_0\\in U$. Let $B$ be an open ball with center at $x_0$ and radius $r>0$, so small that its closure $\\bar{B}$ lies in $U$. We will show that $y\\in V$ whenever $|y-y_0|<\\lambda r$. This proves that $V$ is open. (This proves that every $y\\in V$ is an interior point.) \\\\\n\n    Fix $y$ with $|y-y_0| < \\lambda r$. With $\\varphi$ as in (\\ref{p1:3}),\n    \\begin{align*}\n        |\\varphi(x_0)-x_0|=|A^{-1}(y-y_0)|<\\norm{A^{-1}}\\lambda r=\\frac{r}{2}.\n    \\end{align*}\n    If $x \\in \\bar{B}$, it therefore follows from (\\ref{p1:5}) that\n    \\begin{align*}\n        |\\varphi(x)-x_0| &\\leq |\\varphi(x) - \\varphi(x_0)|+|\\varphi(x_0)-x_0| \\\\\n        &<\\frac{1}{2}|x-x_0|+\\frac{r}{2}\\leq r;\n    \\end{align*}\n    hence $\\varphi(x) \\in B$. Note that (\\ref{p1:5}) holds if $x_1,x_2\\in\\bar{B}$.\n\n    Thus $\\varphi$ is a contraction of $\\bar{B}$ into $\\bar{B}$. Being a closed subset of $\\mathbb{R}^n$, $\\bar{B}$ is complete. By the contraction theorem, $\\varphi$ has a fixed point $x\\in \\bar{B}$. For this $x, f(x)=y$. Thus, $y\\in f(\\bar{B})\\subset f(U)=V$.\n\n    This proves part (a) of the theorem.\n    \n    (b) Pick $y\\in V$, $y+k\\in V$. Then there exists $x\\in U$, $x+h \\in U$, so that $y=f(x),y+k=f(x+h)$. With $\\varphi$ as in (\\ref{p1:3}),\n    \\begin{align}\n        \\varphi(x+h)-\\varphi(x)=h+A^{-1}[f(x)-f(x+h)]=h-A^{-1}k.\n        \\label{p1:6}\n    \\end{align}\n    By (\\ref{p1:5}), $|h-A^{-1}k|\\leq\\frac{1}{2}|h|$. Hence $|A^{-1}k|\\geq\\frac{1}{2}|h|,$ and\n    \\begin{align}\n        |h|\\leq2\\norm{A^{-1}}|k|=\\lambda^{-1}|k|\n        \\label{p1:7}\n    \\end{align}\n\n    By (\\ref{p1:1}), (\\ref{p1:2}) and theorem 9.8 [Rudin], $f'(x)$ has an inverse, say $T$. Since \n    \\begin{align*}\n        g(y+k)-g(y)-Tk=h-Tk=-T[f(x+h)-f(x)-f'(x)h],\n    \\end{align*}\n    (\\ref{p1:7}) implies\n    \\begin{align*}\n        \\frac{|g(y+k)-g(y)-Tk|}{|k|}\\leq\\frac{\\norm{T}}{\\lambda} \\cdot\\frac{|f(x+h)-f(x)-f'(x)h|}{|h|}.\n    \\end{align*}\n    \n    As $k\\to0$, (\\ref{p1:7}) shows that $h\\to0$. The right side of the last inequality thus tends to 0. Hence the same is true of the left. 
We have thus proved that $g'(y)=T$. But $T$ was chosen to be the inverse of $f'(x)=f'(g(y))$. Thus\n    \\begin{align}\n        g'(y)={f'(g(y))}^{-1}\\ \\ \\ (y\\in V).\n        \\label{p1:8}\n    \\end{align}\n     \n    Finally, note that $g$ is a continuous mapping of $V$ onto $U$ (since $g$ is differentiable), that $f'$ is a continuous mapping of $U$ into the set $\\Omega$ of all invertible elements of $L(\\mathbb{R}^n)$, and that inversion is a continuous mapping of $\\Omega$ onto $\\Omega$, by theorem 9.8 [Rudin]. If we combine these facts with (\\ref{p1:8}), we see that $g\\in\\mathcal{C}^1(V)$.\n\n\\end{proof}\n\n\n\n\\end{document}\n", "meta": {"hexsha": "6004299fbd7e200ff64ff4d8fe4f444fbf3fa932", "size": 6783, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mathematics/Calculus.tex", "max_stars_repo_name": "lhs55349780/Latex-Suite", "max_stars_repo_head_hexsha": "3ccd4c123c25571ac21ad2ccbaa5600a7f4c5568", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Calculus.tex", "max_issues_repo_name": "lhs55349780/Latex-Suite", "max_issues_repo_head_hexsha": "3ccd4c123c25571ac21ad2ccbaa5600a7f4c5568", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Calculus.tex", "max_forks_repo_name": "lhs55349780/Latex-Suite", "max_forks_repo_head_hexsha": "3ccd4c123c25571ac21ad2ccbaa5600a7f4c5568", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.4807692308, "max_line_length": 375, "alphanum_fraction": 0.6037151703, "num_tokens": 2580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5893413089556714}}
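The contraction $\\varphi(x)=x+A^{-1}(y-f(x))$ at the heart of the proof above can also be run numerically; the following sketch (the map $f$ and the base point are illustrative choices, with \\texttt{numpy} assumed) solves $f(x)=y$ for $y$ near $b=f(a)$ by iterating $\\varphi$:
\\begin{verbatim}
# Iterate phi(x) = x + A^{-1}(y - f(x)) with A = f'(a), as in the proof.
import numpy as np

def f(x):                                # a C^1 map from R^2 to R^2
    return np.array([x[0] + x[1]**2, x[0]**3 + x[1]])

a = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])               # f'(a); invertible (det = -5)
y = f(a) + np.array([0.01, -0.02])       # target close to b = f(a)

x = a.copy()
for _ in range(50):
    x = x + np.linalg.solve(A, y - f(x))
print(f(x) - y)                          # ~ [0, 0]; x plays the role of g(y)
\\end{verbatim}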
{"text": "\\section{Operator}\n\n% Normal\n\\subsection{Normal}\n\n\\begin{theorem}\\label{eigenvectorforadjointoperator}\n    If $T$ has an eigenvector, then $T^*$ has an eigenvector.    \n\\end{theorem}\n\\begin{proof}\n    Let $v$ be an eigenvector of $T$ with eigenvalue $\\lambda$. For every $x$, $0 = \\innerproduct{0}{x} = \\innerproduct{(T - \\lambda I)(v)}{x} = \\innerproduct{v}{(T - \\lambda I)^* (x)} = \\innerproduct{v}{(T^* - \\overline{\\lambda} I)(x)}$. Since $v \\neq 0$ is orthogonal to the range of $T^* - \\overline{\\lambda} I$, $v \\notin \\rangespace{T^* - \\overline{\\lambda} I}$, so $T^* - \\overline{\\lambda} I$ is not onto and, in finite dimension, $\\nullspace{T^* - \\overline{\\lambda} I} \\neq \\{ 0 \\}$.\n\\end{proof}\n\n\\begin{theorem}[\\cindex{Schur}]\\label{schurincomplexfield}\n    Suppose the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\\beta$ for $V$ such that $\\coordinate{T}_\\beta$ is upper triangular. Note:\n    \\begin{enumerate}\n        \\item $\\beta$ does \\emph{not} need to be eigenvectors of $T$.\n        \\item It works in $\\mathcal{R}$ as long as $T$ splits.\n    \\end{enumerate} \n\\end{theorem}\n\\begin{proof}\n    Use induction. Since $T$ splits, it has an eigenvector. By \\thmref{eigenvectorforadjointoperator} $T^*$ has an eigenvector; normalise it to a unit eigenvector $z$. Let $W = \\text{span}\\{z\\}$. Then prove $W^\\bot$ is $T$-invariant: for every $y \\in W^\\bot$ and $x = cz \\in W$:\n    \\begin{equation*}\n        \\begin{aligned}\n            \\innerproduct{T(y)}{x} = \\innerproduct{T(y)}{cz} = \\innerproduct{y}{T^*(cz)} = \\innerproduct{y}{cT^*(z)} = \\innerproduct{y}{c \\lambda z} = \\overline{c\\lambda} \\innerproduct{y}{z} = 0\n        \\end{aligned}\n    \\end{equation*}\n    According to induction, $\\dimension{W^\\bot} = n - 1$ and there exists an orthonormal basis $\\gamma$ such that $\\coordinate{T_{W^\\bot}}_\\gamma$ is upper triangular. Take $\\gamma \\cup \\set{z}$.\n\\end{proof}\n\n\\begin{theorem}\n    If $\\beta$ is an orthonormal basis and $\\coordinate{T}_\\beta$ is a diagonal matrix, $\\coordinate{T^*}_\\beta = \\left(\\coordinate{T}_\\beta\\right)^*$ is also a diagonal matrix.\n\\end{theorem}\n\n\\begin{theorem}\n    If an operator $T$ has orthogonal eigenvectors $\\beta$ that form a basis of the inner product space, then $\\coordinate{T}_\\beta$ is a diagonal matrix.\n\\end{theorem}\n\n\n\n\n\\begin{definition}\n    $T$ is \\cindex{normal} if $T T^* = T^* T$. 
A square matrix $A$ is \\cindex{normal} if $AA^* = A^* A$.\n\\end{definition}\n\n\\begin{theorem}\n    $T$ is normal if and only if $\\coordinate{T}_\\beta$ is normal under an orthonormal basis $\\beta$.\n\\end{theorem}\n\n\\begin{theorem}\\label{propertyofnormaloperator}\n    Properties of a normal operator $T$ on $V$:\n    \\begin{enumerate}\n        \\item $\\forall x \\in V$, $\\norm{T(x)} = \\norm{T^*(x)}$\n        \\item $\\forall c \\in F$, $T - cI$ is normal.\n        \\item If $x$ is an eigenvector with eigenvalue $\\lambda$ for $T$, $T^*(x) = \\overline{\\lambda} x$, so $x$ is also an eigenvector with eigenvalue $\\overline{\\lambda}$ for $T^*$.\n        \\item If $x_1$ and $x_2$ are eigenvectors for distinct eigenvalues $\\lambda_1$ and $\\lambda_2$, then $\\innerproduct{x_1}{x_2} = 0$\n    \\end{enumerate}    \n\\end{theorem}\n\\begin{proof}\n    \\begin{equation*}\n        \\norm{T(x)}^2 = \\innerproduct{T(x)}{T(x)} = \\innerproduct{T^* T (x)}{x} = \\innerproduct{TT^*(x)}{x} = \\innerproduct{T^*(x)}{T^*(x)} = \\norm{T^*(x)}^2\n    \\end{equation*}\n    \n    \\begin{equation*}\n        0 = \\norm{(T - \\lambda I)(x)} = \\norm{(T - \\lambda I)^*(x)} = \\norm{(T^* - \\overline{\\lambda} I)(x) }\n    \\end{equation*}\n    \n    \\begin{equation*}\n        \\lambda_1 \\innerproduct{x_1}{x_2} = \\innerproduct{\\lambda_1 x_1}{x_2} = \\innerproduct{T(x_1)}{x_2} = \\innerproduct{x_1}{T^*(x_2)} = \\innerproduct{x_1}{\\overline{\\lambda_2} x_2} = \\lambda_2 \\innerproduct{x_1}{x_2}\n    \\end{equation*}\n    So $(\\lambda_1 - \\lambda_2) \\innerproduct{x_1}{x_2} = 0$. Since $\\lambda_1 \\neq \\lambda_2$, $\\innerproduct{x_1}{x_2} = 0$.\n\\end{proof}\n\n\n\\begin{theorem}\n    If $T$ is normal, $\\nullspace{T} = \\nullspace{T^*}$ and $\\rangespace{T} = \\rangespace{T^*}$. So being normal will refine \\thmref{nullandreciprocaladjoint}.\n\\end{theorem}\n\\begin{proof}\n    If $x \\in \\nullspace{T}$, $\\norm{T(x)} = \\norm{T^*(x)} = 0$, so $T^*(x) = 0$ and $x \\in \\nullspace{T^*}$.\n\\end{proof}\n\n\n\n\\begin{theorem}\n    In $\\mathcal{C}$, let $V$ be a finite dimensional inner product space. $T$ is normal if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors of $T$.\n\\end{theorem}\n\\begin{proof}\n    In $\\mathcal{C}$ the polynomial always splits. According to \\thmref{schurincomplexfield} there exists an orthonormal basis $\\beta = \\{v_1, v_2, \\dots, v_n\\}$ such that $\\coordinate{T}_\\beta = A$ is upper triangular. $v_1$ is an eigenvector because $T(v_1)=A_{1,1} v_1$. Assuming $v_1, v_2, \\dots, v_{k-1}$ are eigenvectors of $T$, we prove that $v_k$ is also an eigenvector of $T$. Because $A$ is upper triangular, \n    \\begin{equation*}\n        T(v_k) = A_{1,k} v_1 + A_{2,k} v_2 + \\dots + A_{j,k} v_j + \\dots + A_{k,k} v_k\n    \\end{equation*}\n    Because $\\forall j < k$, $A_{j,k} = \\innerproduct{T(v_k)}{v_j} = \\innerproduct{v_k}{T^*(v_j)} = \\innerproduct{v_k}{\\overline{\\lambda_j} v_j} = \\lambda_j \\innerproduct{v_k}{v_j} = 0$, we have $T(v_k) = A_{k,k} v_k$, so $v_k$ is an eigenvector of $T$.\n    \n    Note that this does not hold in an infinite dimensional complex inner product space.\n\\end{proof}\n\n\n\n\n\n\n% self-adjoint\n\\subsection{Hermitian}\n\n\\begin{definition}\n    $T$ is \\cindex{self-adjoint} (\\cindex{Hermitian}) if $T = T^*$, or $A = A^*$. For a real matrix, it means $A$ is symmetric.\n\\end{definition}\n\n\\begin{theorem}\n    Let $T$ be a linear operator on a complex inner product space. 
Then $T$ is self-adjoint if and only if $\\forall x \\in V$, $\\innerproduct{T(x)}{x} \\in \\mathcal{R}$.\n\\end{theorem}\n\\begin{proof}\n    If $T$ is self-adjoint, $\\overline{\\innerproduct{T(x)}{x}} = \\innerproduct{x}{T(x)} = \\innerproduct{T^*(x)}{x} = \\innerproduct{T(x)}{x}$. So $\\innerproduct{T(x)}{x} \\in \\mathcal{R}$.\n    \n    If $\\innerproduct{T(x)}{x} \\in \\mathcal{R}$, $\\innerproduct{T(x)}{x} = \\overline{\\innerproduct{T(x)}{x}} = \\innerproduct{x}{T(x)} = \\innerproduct{T^*(x)}{x}$. So $\\forall x \\in V$, $\\innerproduct{(T - T^*)(x)}{x} = 0$. According to Theorem (\\ref{zerotforalltx}), $T - T^* = 0$.\n\\end{proof}\n\n\n\\begin{theorem}\n    Let $T$ be a self-adjoint operator on a finite dimensional inner product space $V$. Then:\n    \\begin{enumerate}\n        \\item every eigenvalue is real.\n        \\item If $V$ is a real inner product space, the characteristic polynomial for $T$ splits.\n    \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n    Because $T$ is self-adjoint, $T$ is also normal. So according to \\thmref{propertyofnormaloperator} if $\\lambda$ is an eigenvalue of $T$,  $\\overline{\\lambda}$ is an eigenvalue of $T^*$. So:\n    \\begin{equation*}\n        \\lambda x = T(x) = T^*(x) = \\overline{\\lambda} x\n    \\end{equation*}\n    So $\\lambda = \\overline{\\lambda}$, and $\\lambda$ is real.\n    \n    For an orthonormal basis $\\beta$, $A = \\coordinate{T}_\\beta$ is self-adjoint because $A^* = (\\coordinate{T}_\\beta)^* = [T^*]_\\beta = \\coordinate{T}_\\beta = A$. Define $L_A(x) = Ax$ in $\\mathcal{C}^n$. Here we create a function in $\\mathcal{C}^n$ from a function in $\\mathcal{R}^n$. Let $\\gamma$ be the standard basis for $\\mathcal{C}^n$, which is orthonormal. $[L_A]_\\gamma = A$ is self-adjoint, so $L_A$ is self-adjoint in $\\mathcal{C}^n$. The characteristic polynomial of $L_A$ splits. Since $L_A$ is self-adjoint, all eigenvalues are real, so the polynomial splits in $\\mathcal{R}$. But $L_A$, $A$ and $T$ have the same characteristic polynomial.\n\\end{proof}\n\n\\begin{theorem}\\label{selfadjointmatrixhasorthonormalbasis}\n    Let $T$ be a linear operator on a finite dimensional real inner product space. $T$ is self-adjoint if and only if there exists an orthonormal basis $\\beta$ for $V$ consisting of eigenvectors of $T$.    \n\\end{theorem}\n\\begin{proof}\n    By \\thmref{schurincomplexfield} there exists an orthonormal basis $\\beta$ for $V$ such that $A = \\coordinate{T}_\\beta$ is upper triangular. Because $A^* = (\\coordinate{T}_\\beta)^* = [T^*]_\\beta = \\coordinate{T}_\\beta = A$ and $A$ is upper triangular, $A$ is a diagonal matrix.\n\\end{proof}\n\n\n\\begin{theorem}\n    For the problem of finding an orthonormal basis of eigenvectors of $T$, we have:\n    \\begin{enumerate}\n        \\item If $T$ splits, we have an orthonormal basis that makes $T$ upper triangular in $\\mathcal{R}$ or $\\mathcal{C}$. This basis may not be eigenvectors, or $T$ may not have eigenvectors.\n        \\item $T$ is complex normal.\n        \\item $T$ is real symmetric.\n    \\end{enumerate}\n\\end{theorem}\n\n\n\\begin{theorem}\\label{zerotforalltxforselfadjoint}\n    Let $T$ be a self-adjoint operator. If $\\innerproduct{T(x)}{x} = 0$ for all $x \\in V$, then $T = 0$.\\footnote{Self-adjointness is not needed if $V$ is a complex inner product space. See \\thmref{zerotforalltx} on page \\pageref{zerotforalltx}.}\n\\end{theorem}\n\\begin{proof}\n    Choose an orthonormal basis $\\beta$ that consists of eigenvectors of $T$. For $x\\in \\beta$, $T(x) = \\lambda x$. 
So\n    \\begin{equation*}\n        0 = \\innerproduct{x}{T(x)} = \\innerproduct{x}{\\lambda x} = \\overline{\\lambda} \\innerproduct{x}{x}\n    \\end{equation*}\n    Hence $\\overline{\\lambda} = 0$ and $\\forall x \\in \\beta,  T(x) = 0$.\n\\end{proof}\n\n\n\\subsection{Positive Operator}\n\n\\begin{definition}\n    An operator $T$ is called a \\cindex{positive operator} if $T$ is self-adjoint and $\\forall x \\in V$:\n    \\begin{equation}\n        \\innerproduct{Tx}{x} \\geq 0\n    \\end{equation}\n\\end{definition}\n\n\\begin{definition}\n    An operator $R$ is called a \\cindex{square root} of an operator $T$ if\n    \\begin{equation}\n        R^2 = T\n    \\end{equation}\n\\end{definition}\n\n\\begin{theorem}\\label{positiveoperatorproperty}\n    All the following are equivalent:\n    \\begin{enumerate}\n        \\item \\label{positiveoperatorproperty1} $T$ is positive.\n        \\item \\label{positiveoperatorproperty2} $T$ is self-adjoint and all eigenvalues of $T$ are non-negative.\n        \\item \\label{positiveoperatorproperty3} $T$ has a positive square root.\n        \\item \\label{positiveoperatorproperty4} $T$ has a self-adjoint square root.\n        \\item \\label{positiveoperatorproperty5} $\\exists R: T = R^* R$\n    \\end{enumerate}    \n\\end{theorem}\n\\begin{proof}\n    For \\ref{positiveoperatorproperty2}, if $T$ is positive, $0 \\leq \\innerproduct{Tv}{v} = \\innerproduct{\\lambda v}{v} = \\lambda \\innerproduct{v}{v}$, so $\\lambda \\geq 0$.\n    \n    For \\ref{positiveoperatorproperty3}, if $T$ is self-adjoint, by \\thmref{selfadjointmatrixhasorthonormalbasis} there is an orthonormal basis $\\beta=\\set{v_i}$ of eigenvectors with eigenvalues $\\lambda_i$. Define $R(v_i) = \\sqrt{\\lambda_i} v_i$. Then $\\forall v_i \\in \\beta,  R^2(v_i) = T(v_i)$.\n    \n    For \\ref{positiveoperatorproperty1}, $\\innerproduct{Tv}{v} = \\innerproduct{R^*Rv}{v} = \\innerproduct{Rv}{Rv} \\geq 0$.    \n\\end{proof}\n\n\\begin{theorem}\n    A positive operator has a unique positive square root.    \n\\end{theorem}\n\n\\begin{definition}\n    If $T$ is a positive operator, $\\sqrt{T}$ is its positive square root.\n\\end{definition}\n\n\n\n% unitary and orthogonal operator\n\\subsection{Isometry}\n\n\\begin{definition}\n    Let $T$ be a linear operator on a finite dimensional inner product space $V$ over $F$. If $\\forall x \\in V$, $\\norm{T(x)} = \\norm{x}$, we call $T$ a \\cindex{unitary operator} if $F = \\mathcal{C}$ or an \\cindex{orthogonal operator} if $F=\\mathcal{R}$. Unitary and orthogonal operators are also called \\cindex{isometry}.\n\\end{definition}\n\n\\begin{definition}\n    A square matrix $A$ is called a \\cindex{unitary matrix} if $AA^* = A^*A = I$ and an \\cindex{orthogonal matrix} if $AA^\\top = A^\\top A = I$.\n\\end{definition}\n\n\\begin{theorem}\\label{unitaryproperty}\n    Let $T$ be a linear operator. Then the following are equivalent:\n    \\begin{enumerate}\n        \\item $TT^* = T^* T = I$.\\label{unitaryisnormal}\n        \\item $\\innerproduct{T(x)}{T(y)} = \\innerproduct{x}{y}$.\n        \\item If $\\beta$ is an orthonormal basis for $V$, then $T(\\beta)$ is an orthonormal basis.\n        \\item $\\norm{T(x)} = \\norm{x}$.\n    \\end{enumerate}\n    \n    So unitary or orthogonal operators preserve inner product and norm.\n\\end{theorem}\n\\begin{proof}\n    $\\innerproduct{x}{y} = \\innerproduct{T^* T x}{y} = \\innerproduct{T(x)}{T(y)}$.\n    \n    If $\\beta = \\{v_1,v_2,\\dots,v_n \\}$ is an orthonormal basis, then 
$\\innerproduct{T(v_i)}{T(v_j)} = \\innerproduct{v_i}{v_j} = \\delta_{ij}$.\n    \n    If $\\beta$ and $T(\\beta)$ are both orthonormal bases, expand $\\norm{T(x)}$ and $\\norm{x}$ to prove they are equal.\n    \n    $\\innerproduct{x}{x} = \\norm{x}^2 = \\norm{T(x)}^2 = \\innerproduct{T(x)}{T(x)} = \\innerproduct{x}{T^*Tx}$. So $\\forall x \\in V, \\innerproduct{x}{(I - T^*T)(x)} = 0$. $I - T^*T$ is self-adjoint, so according to \\thmref{zerotforalltxforselfadjoint}, $I - T^* T = 0$.\n\\end{proof}\n\n\\begin{theorem}\n    A unitary operator is normal.    \n\\end{theorem}\n\\begin{proof}\n    See \\thmref{unitaryproperty} property (\\ref{unitaryisnormal}).\n\\end{proof}\n\n\n\n\\begin{theorem}\n    Let $T$ be a linear operator on a \\emph{real} inner product space $V$. $V$ has an orthonormal basis of eigenvectors of $T$ with absolute value of all eigenvalues equal to $1$ if and only if $T$ is self-adjoint and orthogonal.    \n\\end{theorem}\n\\begin{proof}\n    If $T$ is self-adjoint, there is an orthonormal basis $\\beta$ of eigenvectors. If $T$ is orthogonal, $\\forall v_i \\in \\beta$, $\\absolutevalue{\\lambda_i} \\times \\norm{v_i} = \\norm{\\lambda_i v_i} = \\norm{T(v_i)} = \\norm{v_i}$, so $\\absolutevalue{\\lambda_i} = 1$.\n    \n    If $V$ has an orthonormal basis $\\beta$ of eigenvectors, $T$ is self-adjoint. $\\forall v_i \\in \\beta$, we have $TT^* (v_i) = T(\\lambda_i v_i ) = \\lambda_i T(v_i) = \\lambda_i^2 v_i$. If $\\absolutevalue{\\lambda_i} = 1$, $TT^* = I$.\n\\end{proof}\n\n\\begin{theorem}\n    Let $T$ be a linear operator on a \\emph{complex} inner product space $V$. $V$ has an orthonormal basis of eigenvectors of $T$ with absolute value of all eigenvalues equal to  $1$ if and only if $T$ is unitary.\n\\end{theorem}\n\\begin{proof}\n    If $T$ is unitary, it is normal, so there is an orthonormal basis $\\beta$ of eigenvectors. If $T$ is unitary, $\\forall v_i \\in \\beta$, $\\absolutevalue{\\lambda_i} \\times \\norm{v_i} = \\norm{\\lambda_i v_i} = \\norm{T(v_i)} = \\norm{v_i}$, so $\\absolutevalue{\\lambda_i} = 1$.\n    \n    If $V$ has an orthonormal basis $\\beta$ of eigenvectors, $T$ is normal. If $\\absolutevalue{\\lambda_i} = 1$, $\\forall v_i \\in \\beta$, $\\absolutevalue{\\lambda_i} \\times \\norm{v_i} = \\norm{\\lambda_i v_i} = \\norm{T(v_i)} = \\norm{v_i}$, so $\\norm{T(v_i)} = \\norm{v_i}$ and $T$ is unitary.\n\\end{proof}\n\n\\begin{theorem}\n    $T$ is an isometry if $\\coordinate{T}_\\beta$ is an isometry for an orthonormal basis $\\beta$ of $V$.\n\\end{theorem}\n\n\\begin{definition}\n    $A$ is \\cindex{unitarily equivalent} or \\cindex{orthogonally equivalent} to $D$ if and only if there exists a unitary or orthogonal matrix $P$ such that $A = P^* D P$.\n\\end{definition}\n\n\\begin{theorem}\n    Let $A$ be a complex square matrix. $A$ is normal if and only if it is unitarily equivalent to a diagonal matrix.    \n\\end{theorem}\n\n\\begin{theorem}\n    Let $A$ be a real square matrix. $A$ is symmetric if and only if it is orthogonally equivalent to a diagonal matrix.    \n\\end{theorem}\n\n\n\n\n\n\n% rigid motion\n\\subsection{Rigid motion}\n\n\\begin{definition}\n    Let $V$ be a real inner product space. $f: V \\rightarrow V$ is a \\cindex{rigid motion} if \n    \\begin{equation}\n        \\norm{f(x) - f(y)} = \\norm{x - y}\n    \\end{equation}\n\\end{definition}\n\n\\begin{definition}\n    Let $V$ be a real inner product space. 
$g: V \\rightarrow V$ is a \\cindex{translation} by $v_0 \\in V$ if\n    \\begin{equation}\n        \\forall x \\in V \\left( g(x) = x + v_0 \\right)\n    \\end{equation}\n\\end{definition}\n\n\\begin{theorem}\n    A translation is a rigid motion, and a composite of rigid motions is a rigid motion.    \n\\end{theorem}\n\n\n\\begin{theorem}\n    Let $f$ be a rigid motion. Then there exists a unique orthogonal operator $T$ and a unique translation $g$ such that $f = g \\circ T$.\n\\end{theorem}\n\\begin{proof}\n    Define $T(x) = f(x) - f(0)$. $T$ is a composite of rigid motions, so it is a rigid motion. Therefore $\\norm{T(x)} = \\norm{f(x) - f(0)} = \\norm{x - 0} = \\norm{x}$. Since\n    \\begin{equation*}\n        \\begin{aligned}\n            \\norm{T(x) - T(y)}^2 &= \\norm{x}^2 - 2 \\innerproduct{T(x)}{T(y)} + \\norm{y}^2 \\\\\n            \\norm{x - y}^2 &= \\norm{x}^2 - 2 \\innerproduct{x}{y} + \\norm{y}^2 \\\\\n            \\norm{T(x) - T(y)}^2 &= \\norm{x - y}^2\n        \\end{aligned}\n    \\end{equation*}\n    We have $\\innerproduct{T(x)}{T(y)} = \\innerproduct{x}{y}$.\n    \n    Expanding shows $\\norm{T(ax + y) - aT(x) - T(y)}^2 = 0$, so $T$ is linear. So $T$ is an orthogonal operator, and we have unique $T$ and $g$ such that\n    \\begin{equation}\n        \\begin{aligned}\n            T(x) &= f(x) - f(0) \\\\\n            g(x) &= x + f(0)\n        \\end{aligned}\n    \\end{equation}\n\\end{proof}\n\n\\begin{theorem}\n    Let $T$ be an orthogonal operator on $R^2$, and let $A = \\coordinate{T}_\\beta$ where $\\beta$ is the standard basis of $R^2$. Then one of the following is satisfied:\n    \\begin{enumerate}\n        \\item $T$ is a rotation, so $\\determinate{T} = 1$.\n        \\item $T$ is a reflection about a line through the origin, so $\\determinate{T} = -1$.\n    \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n    Because $T$ is orthogonal, $T(\\beta) = \\set{T(e_1), T(e_2)}$ is an orthonormal basis of $R^2$. Since $T(e_1)$ is a unit vector, it has the form $T(e_1) = (\\cos{\\theta}, \\sin{\\theta})$. Since $T(e_2)$ is orthogonal to $T(e_1)$, it has the form $T(e_2) = (-\\sin{\\theta}, \\cos{\\theta})$ or $T(e_2) = (\\sin{\\theta}, -\\cos{\\theta})$.\n\\end{proof}\n\n\\begin{theorem}\n    For expression $f(x,y) = a x^2 + 2b xy + c y^2$, let $A = \\begin{pmatrix}\n        a & b \\\\\n        b & c\n    \\end{pmatrix}$ and $X = \\begin{pmatrix}\n        x \\\\\n        y\n    \\end{pmatrix}$, the formula is $f(X) = X^\\top A X = \\innerproduct{AX}{X}$. Since $A$ is symmetric, there is an orthogonal matrix $P$ and a diagonal matrix $D$ such that $A = P^\\top D P$. Define $X_0 = \\begin{pmatrix}\n        x_0 \\\\\n        y_0\n    \\end{pmatrix}$ such that $X = P^\\top X_0$. We have $f(X) = X^\\top A X = (P^\\top X_0)^\\top A (P^\\top X_0) = X_0^\\top D X_0 = \\lambda_1 x_0^2 + \\lambda_2 y_0^2$. So the $xy$ term could be removed by rotation.\n\\end{theorem}\n\n\n\n\n\n\n\n% spectral theorem\n\\subsection{Spectral Theorem}\n\n\\begin{definition}\n    Let $V = W_1 \\oplus W_2$. 
$T$ is a \\cindex{projection} on $W_1$ along $W_2$ if for every $x = x_1 + x_2$ with $x_1 \\in W_1$ and $x_2 \\in W_2$, $T(x) = x_1$.\n\\end{definition}\n\n\\begin{theorem}\n    $T$ is a projection if and only if $T^2 = T$.\n\\end{theorem}\n\n\\begin{definition}\n    $T$ is an \\cindex{orthogonal projection} if $\\rangespace{T}^\\bot = \\nullspace{T}$ and $\\rangespace{T}= \\nullspace{T}^\\bot$\\footnote{In finite dimensional space $V$, $\\rangespace{T}^\\bot = \\nullspace{T} \\leftrightarrow \\rangespace{T}= \\nullspace{T}^\\bot$}. \n\\end{definition}\n\n\n\n\\begin{theorem}\n    $T$ is an orthogonal projection if and only if $T$ has an adjoint $T^*$ such that $T^2 = T = T^*$.\n\\end{theorem}\n\\begin{proof}\n    $T^2 = T$ because $T$ is a projection. Let $x=x_1+x_2$ and $y=y_1+y_2$ where $x_1,y_1 \\in \\rangespace{T}$ and $x_2,y_2 \\in \\nullspace{T}$. So\n    \\begin{equation*}\n        \\begin{aligned}\n            \\innerproduct{x}{T(y)} &= \\innerproduct{x_1 + x_2}{y_1} = \\innerproduct{x_1}{y_1} \\\\\n            \\innerproduct{T(x)}{y} &= \\innerproduct{x_1}{y_1 + y_2} = \\innerproduct{x_1}{y_1}\n        \\end{aligned}\n    \\end{equation*}\n    So $T = T^*$ and $T^2 = T = T^*$.\n    \n    For the reverse direction, prove that $\\rangespace{T}^\\bot = \\nullspace{T}$ and $\\rangespace{T}= \\nullspace{T}^\\bot$.\n\\end{proof}\n\n\\begin{theorem}[\\cindex{Spectral Theorem}]\n    Let $T$ be real symmetric or complex normal with distinct eigenvalues $\\lambda_i$ and corresponding eigenspaces $W_i$. Let $T_i$ be the orthogonal projection on $W_i$. We have:\n    \\begin{enumerate}\n        \\item $T_i T_j = \\delta_{ij} T_i$\n        \\item $\\displaystyle I = \\sum_{i=1}^k T_i$\n        \\item $\\displaystyle T = \\sum_{i=1}^k \\lambda_i T_i$\n    \\end{enumerate}\n    \n    The set of the $\\lambda_i$ is the \\cindex{spectrum} of $T$. $I$ is the resolution of the identity operator induced by $T$. $\\displaystyle T = \\sum_{i=1}^k \\lambda_i T_i$ is the \\cindex{spectral decomposition} of $T$.\n\\end{theorem}\n\\begin{proof}\n    Let $\\displaystyle x= \\sum_{i=1}^k x_i$ where $x_i \\in W_i$. Then\n    \\begin{equation*}\n        T(x) = \\sum_{i=1}^k T(x_i) = \\sum_{i=1}^k \\lambda_i x_i= \\sum_{i=1}^k \\lambda_i T_i (x_i) = \\sum_{i=1}^k \\lambda_i T_i (x) = \\left(\\sum_{i=1}^k \\lambda_i T_i \\right) x\n    \\end{equation*}\n\\end{proof}\n\n\\begin{theorem}\n    Let $F=\\mathcal{C}$. $T$ is normal if and only if $\\exists g \\in P$, $T^* = g(T)$.\n\\end{theorem}\n\\begin{proof}\n    Let $\\displaystyle T = \\sum_{i=1}^k \\lambda_i T_i$ be the spectral decomposition of $T$. Take the adjoint of both sides and we have\n    \\begin{equation}\n        T^* = \\sum_{i=1}^k \\overline{\\lambda_i} T_i^* = \\sum_{i=1}^k \\overline{\\lambda_i} T_i,\n    \\end{equation}\n    since each orthogonal projection $T_i$ is self-adjoint. According to Lagrange formula\\footnote{Theorem (\\ref{lagrangeinterpolationformula}) on page \\pageref{lagrangeinterpolationformula}.}, $\\exists g$, $g(\\lambda_i) = \\overline{\\lambda_i}$. So $g(T) = T^*$. The reverse is easy to prove.\n\\end{proof}\n\n\\begin{theorem}\n    Let $F=\\mathcal{C}$. $T$ is unitary if and only if $T$ is normal and $\\absolutevalue{\\lambda} = 1$ for every eigenvalue $\\lambda$ of $T$.\n\\end{theorem}\n\\begin{proof}\n    Let $\\displaystyle T = \\sum_{i=1}^k \\lambda_i T_i$ be the spectral decomposition of $T$. 
We have\n    \\begin{equation*}\n    TT^* = \\left( \\sum_{i=1}^k \\lambda_i T_i  \\right) \\times \\left( \\sum_{j=1}^k \\overline{\\lambda_j} T_j \\right) = \\sum_{i=1}^k \\absolutevalue{\\lambda_i}^2 T_i^2 = \\sum_{i=1}^k \\absolutevalue{\\lambda_i}^2 T_i = \\sum_{i=1}^k T_i  = I    \n    \\end{equation*}\n\\end{proof}\n\n\n\\begin{theorem}\n    Let $F=\\mathcal{C}$ and $T$ normal. $T$ is self-adjoint if and only if every eigenvalue of $T$ is real.    \n\\end{theorem}\n\\begin{proof}\n    $\\displaystyle T^* = \\sum_{i=1}^k \\overline{\\lambda_i} T_i = \\sum_{i=1}^k \\lambda_i T_i = T$, so $\\overline{\\lambda_i} = \\lambda_i$.\n\\end{proof}\n\n\n\n% singular value decomposition\n\\subsection{Singular Value Decomposition}\n\n\\begin{theorem}\n    Let $T:V \\rightarrow W$ be a linear transformation with rank $r$. Then there exist orthonormal bases $\\beta = \\{v_1, v_2, \\dots, v_n \\}$ for $V$ and $\\gamma = \\{ u_1, u_2, \\dots, u_m \\}$ for $W$ and positive scalars, the \\cindex{singular values} $\\sigma_1 \\geq \\sigma_2 \\geq \\dots \\geq \\sigma_r$, such that\n    \\begin{equation}\n        T(v_i) = \\begin{cases}\n            \\sigma_i u_i & \\text{if } 1 \\leq i \\leq r \\\\\n            0 & \\text{if } i > r\n        \\end{cases}\n    \\end{equation}\n    \n    Conversely, for $1 \\leq i \\leq n$, $v_i$ is an eigenvector of $T^*T$ with corresponding eigenvalue $\\sigma_i^2$ if $1 \\leq i \\leq r$ and $0$ if $i > r$. \n\\end{theorem}\n\\begin{proof}\n    $T^*T$ has rank $r$ according to \\thmref{rankofadjoint}, and is positive semidefinite by \\thmref{positiveoperatorproperty}. So there is an orthonormal basis $v_i$ for $V$ consisting of eigenvectors of $T^*T$ with corresponding eigenvalues $\\lambda_i$ where $\\lambda_1 \\geq \\lambda_2 \\geq \\dots \\geq \\lambda_r > 0$ and $\\lambda_i = 0$ for $i > r$. For $1 \\leq i \\leq r$, define $\\sigma_i = \\sqrt{\\lambda_i}$ and $u_i = \\dfrac{1}{\\sigma_i} T(v_i)$. We have:\n    \\begin{equation*}\n    \\innerproduct{u_i}{u_j} = \\innerproduct{\\frac{1}{\\sigma_i} T(v_i)}{\\frac{1}{\\sigma_j} T(v_j)} = \\frac{1}{\\sigma_i \\sigma_j} \\innerproduct{T^*T(v_i)}{v_j} = \\frac{1}{\\sigma_i \\sigma_j} \\innerproduct{\\lambda_i v_i}{v_j} = \\frac{\\sigma_i^2}{\\sigma_i \\sigma_j} \\innerproduct{v_i}{v_j} = \\delta_{ij}\n    \\end{equation*}\n    \n    So $\\{u_1, u_2, \\dots, u_r \\}$ are orthogonal. Because of the choice $\\sigma_i = \\sqrt{\\lambda_i}$, they are unit vectors and therefore orthonormal. 
Extend it to an orthonormal basis $\\{u_1, u_2, \\dots, u_m \\}$.\n\\end{proof}\n\n\\begin{definition}\n    The \\cindex{singular values} of $A$ are the singular values of $L_A$.\n\\end{definition}\n\n\\begin{theorem}[Singular Value Decomposition Theorem]\n    Let $A_{m \\times n}$ be of rank $r$ with positive singular values $\\sigma_1 \\geq \\sigma_2 \\geq \\dots \\geq \\sigma_r$, and let $\\Sigma_{m \\times n}$ be\n    \\begin{equation}\n        \\Sigma_{ij} = \\begin{cases}\n            \\sigma_i & \\text{if } i = j \\leq r \\\\\n            0\n        \\end{cases}\n    \\end{equation}\n    Then there exists a \\cindex{singular value decomposition} with unitary matrices $U_{m \\times m}$ and $V_{n \\times n}$ such that\n    \\begin{equation}\n        A = U \\Sigma V^*\n    \\end{equation}\n    \n    The process to find a singular value decomposition is:\n    \\begin{enumerate}\n        \\item find the singular values of $A$ by calculating the eigenvalues $\\lambda_i$ of $A^*A$ and taking $\\sigma_i = \\sqrt{\\lambda_i}$.\n        \\item sort the singular values from big to small.\n        \\item for non-zero singular value $\\sigma_i$, put $\\sigma_i$ in the $i$-th diagonal of $\\Sigma$.\n        \\item form $V$ from the normalized eigenvectors $v_i$ of $A^*A$.\n        \\item for non-zero singular value $\\sigma_i$, calculate the orthonormal vector $u_i = \\dfrac{1}{\\sigma_i} L_A(v_i)$.\n        \\item extend the $u_i$ to an orthonormal basis and form $U$.\n    \\end{enumerate}\n\\end{theorem}\n\n\n\n% Polar Decomposition\n\\subsection{Polar Decomposition}\n\n\\begin{theorem}[Polar Decomposition]\n    For any square matrix $A$, there exists a \\cindex{Polar Decomposition} with a unitary matrix $W$ and a positive semidefinite matrix $P$ such that \n    \\begin{equation}\n        A = WP\n    \\end{equation}\n    If $A$ is invertible, the Polar Decomposition is unique.\n\\end{theorem}\n\\begin{proof}\n    Use the singular value decomposition on $A$ and we get $A = U \\Sigma V^* = U V^* V \\Sigma V^* = ( U V^*) ( V \\Sigma V^*) =  WP$.\n    So let $W = U V^*$ and $P = V \\Sigma V^*$.\n\\end{proof}\n\n\n\n% Pseudoinverse\n\\subsection{Pseudoinverse}\n\n\\begin{definition}\n    Let $T: V \\rightarrow W$ be a linear transformation. Let $L: \\nullspace{T}^\\bot \\rightarrow \\rangespace{T}$ be the linear transformation such that $\\forall x \\in \\nullspace{T}^\\bot$, $L(x) = T(x)$. The \\cindex{pseudoinverse} (or \\cindex{Moore-Penrose generalised inverse}) of $T$ is the unique linear transformation from $W$ to $V$ such that\n    \\begin{equation}\n        T^\\dag (y) = \\begin{cases}\n            L^{-1}(y) & \\text{for } y \\in \\rangespace{T} \\\\\n            0 & \\text{for } y \\in \\rangespace{T}^\\bot\n        \\end{cases}\n    \\end{equation}\n    Let $\\set{v_1, v_2,\\dots, v_r}$ be a basis for $\\nullspace{T}^\\bot$, $\\set{v_{r+1}, v_{r+2}, \\dots, v_{n}}$ be a basis for $\\nullspace{T}$, $\\set{u_1, u_2, \\dots, u_r}$ be a basis for $\\rangespace{T}$, $\\set{u_{r+1}, u_{r+2}, \\dots, u_m}$ be a basis for $\\rangespace{T}^\\bot$ (the bases coming from the singular value decomposition of $T$), then:\n    \\begin{equation*}\n        T^\\dag (u_i) = \\begin{cases}\n            \\dfrac{1}{\\sigma_i} v_i & \\text{if } 1 \\leq i \\leq r \\\\\n            0\n        \\end{cases}\n    \\end{equation*}\n    So although not every $T$ has an inverse, the restriction $\\left. T \\right|_{\\nullspace{T}^\\bot}$ always has a proper inverse.\n\\end{definition}\n\n\\begin{theorem}\n    Let $A_{m \\times n}$ be a matrix of rank $r$ with singular value decomposition $A = U \\Sigma V^*$ and non-zero singular values $\\sigma_1 \\geq \\sigma_2 \\geq \\dots \\geq \\sigma_r$. 
Let $\\Sigma^\\dag_{n \\times m}$ be the matrix with \n    \\begin{equation}\n        \\Sigma^\\dag_{ij} = \\begin{cases}\n            \\frac{1}{\\sigma_i} & \\text{if } i = j \\leq r \\\\\n            0 \n        \\end{cases}\n    \\end{equation}\n    \n    Then the pseudoinverse of $A$ is given by $A^\\dag = V \\Sigma^\\dag U^*$.\n\\end{theorem}\n\n\n\\begin{theorem}\n    Let $T: V \\rightarrow W$ be a linear transformation, then\n    \\begin{enumerate}\n        \\item $T^\\dag T$ is the orthogonal projection of $V$ on $\\nullspace{T}^\\bot$.\n        \\item $TT^\\dag$ is the orthogonal projection of $W$ on $\\rangespace{T}$.\n    \\end{enumerate}    \n\\end{theorem}\n\\begin{proof}\n    Define $L: \\nullspace{T}^\\bot \\rightarrow W$ by $L(x) = T(x)$. If $x \\in \\nullspace{T}^\\bot$, then $T^\\dag T (x) = L^{-1} L (x) = x$. If $x \\in \\nullspace{T}$, then $T^\\dag T (x) = T^\\dag (0) = 0$.\n\\end{proof}\n\n\n\\begin{theorem}\n    For a system of linear equations $Ax = b$, if $z = A^\\dag b$, then\n    \\begin{enumerate}\n        \\item If $Ax=b$ is consistent, then $z$ is the unique solution with minimal norm.\n        \\item If $Ax=b$ is inconsistent, then $z$ is the best approximation: $\\forall y$, $\\norm{Az-b} \\leq \\norm{Ay-b}$. Also if $Az = Ay$, then $\\norm{z} \\leq \\norm{y}$.\n    \\end{enumerate}\n    \n    $A^\\dag b$ is the optimal solution discussed in section \\ref{consistentandinconsistentequation} on page \\pageref{consistentandinconsistentequation}.\n\\end{theorem}\n\\begin{proof}\n    Let $z = A^\\dag b$. If the equation is consistent, then $b \\in \\rangespace{T}$, then $Az = AA^\\dag b = TT^\\dag (b) = b$ because $TT^\\dag$ is the orthogonal projection onto $\\rangespace{T}$ and $b \\in \\rangespace{T}$, so $z$ is a solution to the linear system.\n    \n    If $y$ is any solution, then $T^\\dag T (y) = A^\\dag A y = A^\\dag b = z$. So $z$ is the orthogonal projection of $y$ on $\\nullspace{T}^\\bot$. So $\\norm{z} \\leq \\norm{y}$.\n    \n    If the equation is inconsistent, then $Az = AA^\\dag b$ is the orthogonal projection of $b$ on $\\rangespace{T}$, so $Az$ is the nearest vector to $b$.\n\\end{proof}\n\n\n\n\n\n% conditioning\n\\subsection{Conditioning}\n\n\\begin{definition}\n    For $Ax=b$, if a small change to $A$ and $b$ causes a small change to $x$, the property is called \\cindex{well-conditioned}. Otherwise the system is \\cindex{ill-conditioned}.\n\\end{definition}\n\n\\begin{definition}\n    The \\cindex{relative change} in $b$ is $\\dfrac{\\norm{\\dif{b}}}{\\norm{b}}$ where $\\norm{\\cdot}$ is the standard norm on $\\mathcal{C}^n$.\n\\end{definition}\n\n\\begin{definition}\n    The \\cindex{Euclidean norm} of a square matrix $A$ is \n    \\begin{equation}\n        \\norm{A} = \\max_{x \\neq 0} \\frac{\\norm{Ax}}{\\norm{x}}\n    \\end{equation}\n\\end{definition}\n\n\n\\begin{definition}\n    Let $B$ be a self-adjoint matrix. The \\cindex{Rayleigh quotient} for $x \\neq 0$ is $R(x) = \\dfrac{\\innerproduct{Bx}{x}}{\\norm{x}^2}$\n\\end{definition}\n\n\n\\begin{theorem}\nFor a self-adjoint matrix $B$, the $\\displaystyle \\max_{x \\neq 0} R(x)$ is the largest eigenvalue of $B$ and $\\displaystyle \\min_{x \\neq 0} R(x)$ is the smallest eigenvalue of $B$.\n\\end{theorem}\n\\begin{proof}\n    Choose an orthonormal basis $v_i$ of eigenvectors of $B$ such that $Bv_i = \\lambda_i v_i$ where $\\lambda_1 \\geq \\lambda_2 \\geq \\dots \\geq \\lambda_n$. $\\forall x \\in F^n$, $\\exists a_i$ such that $\\displaystyle x = \\sum_{i=1}^n a_i v_i$. 
So\n    \\begin{equation*}\n        R(x) = \\frac{\\innerproduct{Bx}{x}}{\\norm{x}^2} = \\frac{\\innerproduct{\\sum_{i=1}^n a_i \\lambda_i v_i}{\\sum_{j=1}^n a_j v_j}}{\\norm{x}^2} = \\frac{\\sum_{i=1}^n \\lambda_i \\absolutevalue{a_i}^2}{\\norm{x}^2} \\leq \\frac{\\lambda_1 \\sum_{i=1}^n \\absolutevalue{a_i}^2}{\\norm{x}^2} = \\frac{\\lambda_1 \\norm{x}^2}{\\norm{x}^2} = \\lambda_1\n    \\end{equation*}\n\\end{proof}\n\n\\begin{theorem}\n    $\\norm{A} = \\sqrt{\\lambda}$ where $\\lambda$ is the largest eigenvalue of $A^* A$.\n\\end{theorem}\n\n\\begin{theorem}\n    $\\lambda$ is an eigenvalue of $A^* A$ if and only if $\\lambda$ is an eigenvalue of $AA^*$.\n\\end{theorem}\n\n\\begin{theorem}\n    Let $A$ be an invertible matrix. Then $\\norm{A^{-1}} = \\dfrac{1}{\\sqrt{\\lambda}}$ where $\\lambda$ is the smallest eigenvalue of $A^*A$.\n\\end{theorem}\n\n\\begin{definition}\n    $\\norm{A} \\times \\norm{A^{-1}}$ is the \\cindex{condition number} of $A$ and is denoted as $\\text{cond}(A)$.\n\\end{definition}\n\n\\begin{theorem}\n    For the system $Ax=b$ where $A$ is invertible and $b \\neq 0$, we have:\n    \\begin{enumerate}\n        \\item For any norm $\\norm{\\cdot}$, we have $\\dfrac{1}{\\text{cond}(A)} \\dfrac{\\norm{\\dif b}}{\\norm{b}} \\leq \\dfrac{\\norm{\\dif x}}{\\norm{x}} \\leq \\text{cond}(A) \\dfrac{\\norm{\\dif b}}{\\norm{b}}$.\n        \\item If $\\norm{\\cdot}$ is the Euclidean norm, then $\\text{cond}(A) = \\sqrt{\\dfrac{\\lambda_1}{\\lambda_n}}$ where $\\lambda_1$ and $\\lambda_n$ are the largest and smallest eigenvalues of $A^*A$.\n    \\end{enumerate}\n    \n    Note that $\\text{cond}(A) \\geq 1$. If $\\text{cond}(A)$ is close to $1$, the relative error in $x$ is small when the relative error of $b$ is small. However when $\\text{cond}(A)$ is large, the relative error in $x$ could be large or small. \n    \n    $\\text{cond}(A)$ is seldom calculated exactly because, when calculating $A^{-1}$ on a computer, there are rounding errors which are themselves related to $\\text{cond}(A)$.\n\\end{theorem}\n\n\n", "meta": {"hexsha": "b0e43dda534f3aae11584ca415936bb9202d03c6", "size": 32031, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/linear_algebra/la.7.operator.tex", "max_stars_repo_name": "elvisren/machine-learning-notes", "max_stars_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-07T03:05:08.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-04T17:28:22.000Z", "max_issues_repo_path": "src/linear_algebra/la.7.operator.tex", "max_issues_repo_name": "elvisren/machine-learning-notes", "max_issues_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/linear_algebra/la.7.operator.tex", "max_forks_repo_name": "elvisren/machine-learning-notes", "max_forks_repo_head_hexsha": "d12ac08d30be4341776714ad895116a243ec026f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-01T23:34:47.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-01T23:34:47.000Z", "avg_line_length": 49.6604651163, "max_line_length": 647, "alphanum_fraction": 0.6377571727, "num_tokens": 11059, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5893413048998568}}
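The singular value, polar and pseudoinverse decompositions and the condition number from the notes above can all be checked numerically in a few lines; a sketch assuming \\texttt{numpy}:
\\begin{verbatim}
# SVD, polar decomposition, pseudoinverse and condition number.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(A)               # A = U @ diag(s) @ Vh
W = U @ Vh                                # unitary factor of A = W P
P = Vh.conj().T @ np.diag(s) @ Vh         # positive semidefinite factor
print(np.allclose(A, W @ P))              # True

A_dag = Vh.conj().T @ np.diag(1 / s) @ U.conj().T   # V Sigma^+ U^*
print(np.allclose(A_dag, np.linalg.pinv(A)))        # True

print(np.isclose(s[0] / s[-1], np.linalg.cond(A)))  # cond = s_max / s_min
\\end{verbatim}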
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage[final]{pdfpages}\n\n\\begin{document}\n\n% =======================================================================================\n\\section*{The ADM evolution equations.}\n\nThe vacuum ADM equations, exactly as written in the following Cadabra code, are as follows.\n\n\\IfFileExists{adm-eqtns.cdbtex}{}{Where is {\\tt adm-eqtns.cdbtex}?}\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb{dotgab.lhs} = \\Cdb*{dotgab.101} \\end{dmath*}\n   \\begin{dmath*} \\cdb{dotKab.lhs} = \\Cdb*{dotKab.101} \\end{dmath*}\n   \\begin{dmath*} \\cdb{dotN.lhs}   = \\Cdb*{dotN.101}   \\end{dmath*}\n   \\begin{dmath*} \\cdb{Ham.lhs}    = \\Cdb*{Ham.101}    \\end{dmath*}\n   \\begin{dmath*} \\cdb{Mom.lhs}    = \\Cdb*{Mom.101}    \\end{dmath*}\n\\end{dgroup*}\n\n% Note that the covariant derivative for the 3-metric is usually denoted by the vertical bar\n\nCadabra's job was to express $R_{ab}$, $R$, $N_{ab}$ and $D_c$ in terms of the ADM\nvariables and their partial derivatives. It's all plain sailing from here, so cutting to\nthe chase, here are the results.\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb{Rab.lhs} = \\Cdb*{Rab.112} \\end{dmath*}\n   \\begin{dmath*} \\cdb{R.lhs}   = \\Cdb*{R.110} \\end{dmath*}\n   \\begin{dmath*} \\cdb{Nab.lhs} = \\Cdb*{Nab.102} \\end{dmath*}\n   \\begin{dmath*} \\cdb{Mom.lhs} = \\Cdb*{Mom.110}   \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n% =======================================================================================\n\\section*{The ADM evolution equations. The big picture.}\n\n\\begin{cadabra}\n   from shared import *\n   import cdblib\n\n   jsonfile = 'adm-eqtns.json'\n   cdblib.create (jsonfile)\n\n   # ------------------------------------------------------------------------------------\n   # generic rules for covariant derivs\n\n   deriv1 := A?_{; m}   -> \\partial_{m}{A?}.                                              # cdb(deriv1.lhs,deriv1)\n   deriv2 := A?_{; m n} -> \\partial_{m}{A?_{; n}} - \\Gamma^{c}_{m n} A?_{; c}.            # cdb(deriv2.lhs,deriv2)\n\n   substitute (deriv2, deriv1)                                                            # cdb (deriv2.101,deriv2)\n\n   deriv3 := A?_{m n ; p} -> \\partial_{p}{A?_{m n}} - \\Gamma^{c}_{m p} A?_{c n}\n                                                    - \\Gamma^{c}_{n p} A?_{m c}.          # cdb(deriv3.lhs,deriv3)\n\n   # ------------------------------------------------------------------------------------\n   # partial derivs of g_{ab} in terms of partial of g^{ab}\n\n   defDgab := {g^{a e} g^{b f} \\partial_{c}{g_{e f}} -> - \\partial_{c}{g^{a b}},\n               g^{e a} g^{b f} \\partial_{c}{g_{e f}} -> - \\partial_{c}{g^{a b}},\n               g^{a e} g^{f b} \\partial_{c}{g_{e f}} -> - \\partial_{c}{g^{a b}},\n               g^{e a} g^{f b} \\partial_{c}{g_{e f}} -> - \\partial_{c}{g^{a b}}}.         # cdb (defDgab.lhs,defDgab)\n\n   # ------------------------------------------------------------------------------------\n   # standard definitions\n\n   defGamma := \\Gamma^{a}_{b c} ->\n               (1/2) g^{a e} (   \\partial_{b}{g_{e c}}\n                               + \\partial_{c}{g_{b e}}\n                               - \\partial_{e}{g_{b c}}).                                  # cdb (defGamma.lhs,defGamma)\n\n   defRabcd := R^{a}_{b c d} ->\n               \\partial_{c}{\\Gamma^{a}_{b d}} + \\Gamma^{a}_{e c} \\Gamma^{e}_{b d}\n             - \\partial_{d}{\\Gamma^{a}_{b c}} - \\Gamma^{a}_{e d} \\Gamma^{e}_{b c}.        
# cdb (defRabcd.lhs,defRabcd)\n\n   defRab := R_{a b} -> R^{c}_{a c b}.                                                    # cdb (defRab.lhs,defRab)\n\n   # ------------------------------------------------------------------------------------\n   # Ricci tensor\n\n   Rab := R_{a b}.                                         # cdb (Rab.lhs,Rab)\n\n   substitute     (Rab, defRab)                            # cdb (Rab.101,Rab)\n   substitute     (Rab, defRabcd)                          # cdb (Rab.102,Rab)\n   substitute     (Rab, defGamma)                          # cdb (Rab.103,Rab)\n   product_rule   (Rab)                                    # cdb (Rab.104,Rab)\n   distribute     (Rab)                                    # cdb (Rab.105,Rab)\n\n   Rab = product_sort (Rab)                                # cdb (Rab.106,Rab)\n\n   rename_dummies (Rab)                                    # cdb (Rab.107,Rab)\n   canonicalise   (Rab)                                    # cdb (Rab.108,Rab)\n   substitute     (Rab, defDgab)                           # cdb (Rab.109,Rab)\n\n   Rab = product_sort (Rab)                                # cdb (Rab.110,Rab)\n\n   rename_dummies (Rab)                                    # cdb (Rab.111,Rab)\n   canonicalise   (Rab)                                    # cdb (Rab.112,Rab)\n\n   defRab := R_{a b} -> @(Rab).\n\n   # ------------------------------------------------------------------------------------\n   # Ricci scalar\n\n   Rscalar := R.                                           # cdb (R.lhs,Rscalar)\n   Rscalar := g^{a b} R_{a b}.                             # cdb (R.101,Rscalar)\n\n   substitute     (Rscalar, defRab)                        # cdb (R.102,Rscalar)\n   distribute     (Rscalar)                                # cdb (R.103,Rscalar)\n\n   Rscalar = product_sort (Rscalar)                        # cdb (R.104,Rscalar)\n\n   rename_dummies (Rscalar)                                # cdb (R.105,Rscalar)\n   canonicalise   (Rscalar)                                # cdb (R.106,Rscalar)\n   substitute     (Rscalar, defDgab)                       # cdb (R.107,Rscalar)\n\n   Rscalar = product_sort (Rscalar)                        # cdb (R.108,Rscalar)\n\n   rename_dummies (Rscalar)                                # cdb (R.109,Rscalar)\n   canonicalise   (Rscalar)                                # cdb (R.110,Rscalar)\n\n   defRscalar := R -> @(Rscalar).\n\n   # ------------------------------------------------------------------------------------\n   # Hessian\n\n   Nab := N_{; a b}.                                       # cdb (Nab.lhs,Nab)\n\n   substitute (Nab, deriv2)                                # cdb (Nab.101,Nab)\n   substitute (Nab, defGamma)                              # cdb (Nab.102,Nab)\n\n   defHess := N_{; a b} -> @(Nab).                         # cdb (Hess.lhs,defHess)\n\n   # ------------------------------------------------------------------------------------\n   # ADM evolution equations\n\n   DgabDt := \\partial_{t}{g_{a b}}.                        # cdb (dotgab.lhs,DgabDt)\n   DKabDt := \\partial_{t}{K_{a b}}.                        # cdb (dotKab.lhs,DKabDt)\n   DNDt   := \\partial_{t}{N}.                              # cdb (dotN.lhs,DNDt)\n\n   DgabDt := -2 N K_{a b}.                                                        # cdb (dotgab.101,DgabDt)\n   DKabDt := -N_{; a b} + N (R_{a b} + trK K_{a b} - 2 K_{a c} K_{b d} g^{c d}).  # cdb (dotKab.101,DKabDt)\n   # DNDt := -2 N trK.     # 1+log\n   # DNDt := -N*N trK.     # Harmonic\n   DNDt := 0.               
                               # cdb (dotN.101,DNDt)  # Static\n\n   substitute (DKabDt,defHess)                             # cdb (dotKab.102,DKabDt)\n   distribute (DKabDt)                                     # cdb (dotKab.103,DKabDt)\n\n   # ------------------------------------------------------------------------------------\n   # The Hamiltonian constraint\n\n   defHam := Ham     -> R + K_{a b} g^{a b} K_{c d} g^{c d} - K_{a b} K_{c d} g^{a c} g^{b d}.\n   Ham    := Ham.                                          # cdb (Ham.lhs,Ham)\n   substitute     (Ham, defHam)                            # cdb (Ham.101,Ham)\n\n   canonicalise   (Ham)                                    # cdb (Ham.102,Ham)\n\n   # ------------------------------------------------------------------------------------\n   # The momentum constraint\n\n   defMom := Mom_{c} -> g^{a b} K_{a c ; b} - \\partial_{c}{g^{a b} K_{a b}}.\n   Mom    := Mom_{c}.                                      # cdb (Mom.lhs,Mom)\n   substitute     (Mom, defMom)                            # cdb (Mom.101,Mom)\n\n   substitute     (Mom, deriv3)                            # cdb (Mom.102,Mom)\n   product_rule   (Mom)                                    # cdb (Mom.103,Mom)\n   distribute     (Mom)                                    # cdb (Mom.104,Mom)\n   substitute     (Mom, defGamma)                          # cdb (Mom.105,Mom)\n   distribute     (Mom)                                    # cdb (Mom.106,Mom)\n   substitute     (Mom, defDgab)                           # cdb (Mom.107,Mom)\n\n   Mom = product_sort (Mom)                                # cdb (Mom.108,Mom)\n\n   rename_dummies (Mom)                                    # cdb (Mom.109,Mom)\n   canonicalise   (Mom)                                    # cdb (Mom.110,Mom)\n\n   cdblib.put ('Rscalar', Rscalar, jsonfile)\n   cdblib.put ('Rab',     Rab,     jsonfile)\n   cdblib.put ('Nab',     Nab,     jsonfile)\n   cdblib.put ('DgabDt',  DgabDt,  jsonfile)\n   cdblib.put ('DKabDt',  DKabDt,  jsonfile)\n   cdblib.put ('DNDt',    DNDt,    jsonfile)\n   cdblib.put ('Ham',     Ham,     jsonfile)\n   cdblib.put ('Mom',     Mom,     jsonfile)\n\n\\end{cadabra}\n\n\\clearpage\n\n% =======================================================================================\n\\section*{The Hessian of the lapse.}\n\n% \\begin{dgroup*}[spread=5pt]\n%    \\begin{dmath*}\n%       \\cdb{Nab.lhs}\n%          = \\Cdb*{Nab.101}\n%    \\end{dmath*}\n%    \\begin{dmath*}\n%       \\phantom{\\cdb{Nab.lhs}}\n%          = \\Cdb*{Nab.102}\n%    \\end{dmath*}\n% \\end{dgroup*}\n\n\\begin{align*}\n   \\cdb{Nab.lhs}\n      &= \\Cdb{Nab.101}\\\\\n      &= \\Cdb{Nab.102}\n\\end{align*}\n\n% =======================================================================================\n\\section*{The Ricci curvature.}\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{Rab.lhs}\n         = \\Cdb*{Rab.101}\n         = \\Cdb*{Rab.102}\n         = \\Cdb*{Rab.103}\n         = \\Cdb*{Rab.104}\n         = \\Cdb*{Rab.105}\n         = \\Cdb*{Rab.106}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{Rab.lhs}\n         = \\Cdb*{Rab.107}\n         = \\Cdb*{Rab.108}\n         = \\Cdb*{Rab.109}\n         = \\Cdb*{Rab.110}\n         = \\Cdb*{Rab.111}\n         = \\Cdb*{Rab.112}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n% =======================================================================================\n\\section*{The Ricci 
scalar.}\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{R.lhs}\n         = \\Cdb*{R.101}\n         = \\Cdb*{R.102}\n         = \\Cdb*{R.103}\n         = \\Cdb*{R.104}\n         = \\Cdb*{R.105}\n         = \\Cdb*{R.106}\n         = \\Cdb*{R.107}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{R.lhs}\n          = \\Cdb*[\\hskip 2cm\\hfill]{R.108}\n          = \\Cdb*[\\hskip 2cm\\hfill]{R.109}\n          = \\Cdb*{R.110}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\n% =======================================================================================\n\\section*{The ADM constraints.}\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{Ham.lhs}\n         = \\Cdb*{Ham.101}\n         = \\Cdb*{Ham.102}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\begin{dgroup*}[spread=5pt]\n   \\begin{dmath*}\n      \\cdb{Mom.lhs}\n         = \\Cdb*{Mom.101}\n         = \\Cdb*{Mom.102}\n         = \\Cdb*{Mom.103}\n         = \\Cdb*{Mom.104}\n         = \\Cdb*{Mom.105}\n         = \\Cdb*{Mom.106}\n         = \\Cdb*{Mom.107}\n         = \\Cdb*{Mom.108}\n         = \\Cdb*{Mom.109}\n         = \\Cdb*{Mom.110}\n   \\end{dmath*}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "36931491c009599b4b630c4f4527d37d13455fa6", "size": 11370, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "adm/cadabra/adm-eqtns.tex", "max_stars_repo_name": "leo-brewin/adm-bssn-numerical", "max_stars_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-25T11:36:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T11:36:06.000Z", "max_issues_repo_path": "adm/cadabra/adm-eqtns.tex", "max_issues_repo_name": "leo-brewin/adm-bssn-numerical", "max_issues_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "adm/cadabra/adm-eqtns.tex", "max_forks_repo_name": "leo-brewin/adm-bssn-numerical", "max_forks_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.9, "max_line_length": 119, "alphanum_fraction": 0.3984168865, "num_tokens": 3495, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936377487305, "lm_q2_score": 0.76908023177796, "lm_q1q2_score": 0.5893412885297697}}
{"text": "\\documentclass{article}\n\n\\usepackage{fancyhdr}\n\\usepackage{extramarks}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\\usepackage[plain]{algorithm}\n\\usepackage{algpseudocode}\n\\usepackage{matlab-prettifier}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n%\n% Basic Document Settings\n%\n\\lstMakeShortInline[style=Matlab-editor]\"\n\n\\topmargin=-1in\n\\evensidemargin=0in\n\\oddsidemargin=0in\n\\textwidth=6.5in\n\\textheight=9.0in\n\\headsep=0.25in\n\n\\linespread{1.1}\n\n\n\\rhead{\\firstxmark}\n\\lfoot{\\lastxmark}\n\\cfoot{\\thepage}\n\n\\renewcommand\\headrulewidth{0.4pt}\n\\renewcommand\\footrulewidth{0.4pt}\n\n%\n% Homework Details\n%   - Title\n%   - Due date\n%   - Class\n%   - Section/Time\n%   - Instructor\n%   - Author\n%\n\n\\newcommand{\\hmwkTitle}{AMATH 482 Homework 4: Extended Yale Faces B Database \u2013 Eigenfaces \\& Music Genre Identification}\n\\newcommand{\\hmwkDueDate}{March 8, 2019}\n\\newcommand{\\hmwkClassInstructor}{Professor Nathan Kutz}\n\\newcommand{\\hmwkAuthorName}{\\textbf{Skyler Hallinan}}\n\n%\n% Title Page\n%\n\n\\title{\n    \\textmd{\\textbf{\\text{ } \\hmwkTitle}}\\\\\n}\n\n\\author{\\hmwkAuthorName}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section*{\\fontsize{19}{15}\\selectfont Abstract}\n\t\tIn part one, we demonstrated the reduction of images by doing an SVD analysis on cropped and uncropped sets, and reconstructed them with lower dimensions. In part two, we trained various supervised machine learning algorithms by using the singular value decomposition of short segments of fourier transformed audio. We showed that some models had remarkably high accuracy, and that SVD was not always beneficial.\n\\section*{\\fontsize{19}{15}\\selectfont Introduction and Overview}\n\tOften, we have too many features in our data. In this homework, we explored the feature reduction in images by using the Singular Value Decomposition on a matrix of images. We determined how many \"significant\" features were needed to reconstruct the faces after applying this SVD transform, and plotted the images again after projecting the images to a new orthonormal basis and reducing the features. \\\\ \\\\\n\tMachine learning extracts meaningful featuers from data, and bins data into distinct patterns that can be later used for decision making; it can learn from and make predictions on the data. There is both supervised and unsupervised machine learning. No labels are given in unsupervised learning, and the model must organize and cluster data by itself, based on observe patterns, and try to go to a lower rank subspace.  Supervised machine learning is a great tool to classify and predict new data after training a model with sample data and labels. In all of these machine learning algorithms, the main goal is to construct a low-rank feature space of a given dataset; this can be either done automatically with unsupverised learning, or manually, with PCA modes generated by SVD, where only a certain $r$ number of features are needed to represent the data. In this homework, we explored the Naive Bayes, Support Vector Machine, and Random Forests algorithm to classify our data using this supervised machine learning. 
We gave three test sets of songs, and saw how each model performed.\n\\section*{\\fontsize{19}{15}\\selectfont Theoretical Background}\n\tThe Singular Value Decomposition factors a matrix as follows: \n\t\\begin{align*}\n\\mathbf { A } = \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\text{ where} \\\\\n\\begin{array} { l } { \\mathbf { U } \\in \\mathbb { C } ^ { m \\times m } \\text { is unitary } } \\\\ { \\mathbf { V } \\in \\mathbb { C } ^ { n \\times n } \\text { is unitary } } \\\\ { \\boldsymbol { \\Sigma } \\in \\mathbb { R } ^ { m \\times n } \\text { is diagonal } } \\end{array}\n\t\\end{align*}\n\tThe diagonal values of $\\boldsymbol{\\Sigma}$ are nonnegative and ordered from largest to smallest. We can also compute the SVD with: \n\\begin{align*}\n\\mathbf { A } ^ { T } \\mathbf { A } & = \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) ^ { T } \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) \\\\ & = \\mathbf { V } \\boldsymbol { \\Sigma } \\mathbf { U } ^ { * } \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\\\ & = \\mathbf { V } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { V } ^ { * }  \\\\ \\\\\n\\mathbf { A } \\mathbf { A } ^ { T } & = \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) ^ { T } \\\\ & = \\mathbf { U \\Sigma V } ^ { * } \\mathbf { V } \\boldsymbol { \\Sigma } \\mathbf { U } ^ { * } \\\\ & = \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { U } ^ { * } \\\\ \\\\\n\\mathbf { A } ^ { T } \\mathbf { A V } & = \\mathbf { V } \\Sigma ^ { 2 } \\\\ \\mathbf { A } \\mathbf { A } ^ { T } \\mathbf { U } & = \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 }\n\\end{align*}\nComputing the normalized eigenvectors for the final two equations gives orthonormal basis vectors for $U$ and $V$. In addition, the square roots of the eigenvalues of these equations produce the singular values $\\sigma_j$. The SVD shows that every matrix is diagonal if the proper bases for the domain and the range are used. \\\\ \\\\\n\tOne of the primary applications of the SVD is Principal Component Analysis (PCA). PCA allows large, complex, and even somewhat random sets of data to be reduced to lower-dimensional dynamics without any knowledge of the underlying behavior. \n\tThe covariance matrix shows the correlations between all variables in the system; strongly statistically dependent variables can be classified as redundant. Since we are looking for the covariance of a specific matrix, we have $\\mathbf { C } _ { \\mathbf { X } } = \\frac { 1 } { n - 1 } \\mathbf { X X } ^ { T }$, where $C_X$ is a square $m \\times m$ matrix. This covariance matrix is key to understanding redundancies in data; large off-diagonal values correspond to redundancy. However, large diagonal terms correspond to large variances, which suggest strong fluctuations in specific variables and help identify important components. Large variances are thus dynamics of interest, while small variances are uninteresting. Thus, our goal is to bring $C_X$ to a form with its diagonal ordered from largest to smallest and with off-diagonal values of 0 (diagonalization). \\\\ \\\\\n\tThe SVD does this. Each singular direction in the SVD captures as much energy as possible, which is measured by the singular values $\\sigma_j$. We know that the SVD can diagonalize any matrix by using the appropriate bases $U$ and $V$, as shown above. We define the transformed variable $Y = \\mathbf{U}^{*}\\mathbf{X}$ where $U$ is the unitary transformation associated with the SVD ($\\mathbf { X } = \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * }$). We then calculate the variance in $Y$:\n \\begin{align*}\nC_Y = \\frac { 1 } { n - 1 } \\mathbf { Y } \\mathbf { Y } ^ { T }\\\\\n= \\frac { 1 } { n - 1 } \\left( \\mathbf { U } ^ { * } \\mathbf { X } \\right) \\left( \\mathbf { U } ^ { * } \\mathbf { X } \\right) ^ { T } \\\\\n= \\frac { 1 } { n - 1 } \\mathbf { U } ^ { * } \\left( \\mathbf { X } \\mathbf { X } ^ { T } \\right) \\mathbf { U } \\\\\n= \\frac { 1 } { n - 1 } \\mathbf { U } ^ { * } \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { U } ^ { * } \\mathbf { U } \\\\\nC_Y= \\frac { 1 } { n - 1 } \\mathbf { \\Sigma } ^ { 2 }\n\\end{align*}\n\n\tWe can apply the SVD to a matrix of images in order to isolate the highest singular components and reduce the number of features in the data. The resulting matrices give us information on the most important features across all the images in the matrix, and give us a new orthonormal set onto which we can project the images. We can also examine the energy captured by each component by looking at the squared singular values and dividing them by their sum.\\\\ \\\\\n
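As a minimal illustration of this reconstruction step (a sketch only: \"A\" is a placeholder matrix whose columns are vectorized images, not one of the variables in our actual script), the rank-$r$ approximation and its captured energy can be computed as follows.\n\\begin{lstlisting}[style=Matlab-editor]\n% Sketch: rank-r reconstruction from the SVD (A is a placeholder)\n[u, s, v] = svd(A, 'econ');          % economy-size SVD\nlambda = diag(s).^2;                 % energies = squared singular values\nenergy = cumsum(lambda)/sum(lambda); % cumulative energy fraction\nr = find(energy >= 0.999, 1);        % smallest rank capturing 99.9%\nAr = u(:,1:r)*s(1:r,1:r)*v(:,1:r)';  % best rank-r approximation of A\n\\end{lstlisting}\n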
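For concreteness, the way the FFT is applied to audio clips (described in detail in Algorithm Development below) can be sketched as follows; \"clips\" and \"r\" are placeholder names, and note that this sketch projects the spectra themselves onto the leading modes, which is one natural variant of the pipeline.\n\\begin{lstlisting}[style=Matlab-editor]\n% Sketch: magnitude spectra as classifier features (clips is a\n% placeholder matrix whose columns are 5-second mono clips)\nX = abs(fft(clips));         % column-wise FFT; magnitude drops phase\n[u, s, ~] = svd(X, 'econ');  % PCA modes of the spectra\nr = 200;                     % illustrative number of retained modes\nfeatures = u(:,1:r)'*X;      % r-dimensional feature vector per clip\n\\end{lstlisting}\n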
Naive Bayes classifier is a conditional probability model; given a vector $x$ with some $n$ features, it assigns to an instance a probability for each class (label) $C_k$ in the system. Using Bayes' Theorem, we can write the probability as $p\\left(C_{k} | \\mathbf{x}\\right)=\\frac{p\\left(C_{k}\\right) p(\\mathbf{x} | C_{k})}{p(\\mathbf{x})}$. Under assumptions of independence, we see that Naive Bayes has the conditional distribution over a class $C$ of $p\\left(C_{k} | x_{1}, \\ldots, x_{n}\\right)=\\frac{1}{Z} p\\left(C_{k}\\right) \\prod_{i=1}^{n} p\\left(x_{i} | C_{k}\\right)$. \\\\ \\\\\nLinear SVM is based on constructing a hyperplane $\\mathbf{w} \\cdot \\mathbf{x}+b=0$, where the vector $w$ and constant $b$ parameterize the hyperplane. SVM chooses the decision boundary that makes the fewest labeling errors on the data while maximizing the margin between the classes. $w$ and $b$ must be optimized in a principled way, which is done by stating a loss function:\t$\\ell\\left(\\mathbf{y}_{j}, \\overline{\\mathbf{y}}_{j}\\right)=\\left\\{\\begin{aligned} 0 & \\text { if data is correctly labeled } \\\\+1 & \\text { if data is incorrectly labeled } \\end{aligned}\\right.$. This can be simplified down to $\\underset{\\mathbf{w}, b}{\\operatorname{argmin}} \\sum_{j=1}^{m} H\\left(\\mathbf{y}_{j}, \\overline{\\mathbf{y}}_{j}\\right)+\\frac{1}{2}\\|\\mathbf{w}\\|^{2} \\quad$ subject to $\\min _{j}\\left|\\mathbf{x}_{j} \\cdot \\mathbf{w}\\right|=1$. There are also kernel methods for SVM, which address the curse of dimensionality when there are too many additional features. We define the kernel function as $K\\left(\\mathbf{x}_{j}, \\mathbf{x}\\right)=\\Phi\\left(\\mathbf{x}_{j}\\right) \\cdot \\Phi(\\mathbf{x})$, and we get a new optimization problem $\\underset{\\boldsymbol{\\alpha}, b}{\\operatorname{argmin}} \\sum_{j=1}^{m} H\\left(\\mathbf{y}_{j}, \\overline{\\mathbf{y}}_{j}\\right)+\\frac{1}{2}\\left\\|\\sum_{j=1}^{m} \\alpha_{j} \\Phi\\left(\\mathbf{x}_{j}\\right)\\right\\|^{2} \\quad$ subject to $\\min _{j}\\left|\\mathbf{x}_{j} \\cdot \\mathbf{w}\\right|=1$. This allows us to represent Taylor series expansions of observables in a compact way. The radial basis function and polynomial kernels are commonly used. \\\\ \\\\\nRandom forests are an example of decision tree learning. The decision tree is a hierarchical construct that optimally splits data for classification and regression. The model scans through each feature and compares the prediction accuracy across different features. It branches when it reaches the feature giving the best segmentation.\n
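A minimal sketch of how these three classifiers are fit and scored in MATLAB is given below; \"trainX\", \"trainY\", \"testX\", and \"testY\" are placeholder names (rows are observations), not the variables used in our actual script.\n\\begin{lstlisting}[style=Matlab-editor]\n% Sketch: fitting and scoring the three classifiers\nnb   = fitcnb(trainX, trainY);     % Naive Bayes\nsvm  = fitcecoc(trainX, trainY);   % multiclass SVM via ECOC\ntree = fitctree(trainX, trainY);   % decision tree\nerrNB = loss(nb, testX, testY);    % held-out misclassification rate\ncvnb  = fitcnb(trainX, trainY, 'crossval', 'on');\nerrCV = kfoldLoss(cvnb);           % 10-fold cross-validation error\n\\end{lstlisting}\n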
\\section*{\\fontsize{19}{15}\\selectfont Algorithm Development}\n\t\\textbf{Part 1} \\\\\n\tWe first loaded in the images for both the cropped and uncropped sets. We used \"dir\" and iteratively went through all the folders, loading in each image and using \"double\" to convert our uint8 data matrices into workable formats. We reshaped each image using \"reshape\" and placed it in a matrix, so that each row represented one image. \\\\ \\\\\nWe then applied \"svd\" to this matrix. We plotted the energies of the singular values and looked at how many modes were needed to represent a high percentage of energy (99.9\\%) in the system. Once we found a value, we reconstructed the matrix by subsetting the \"u\", \"s\", and \"v\" matrices by this rank $r$, then multiplied \"u*s*v.'\". Each column of this corresponded to a reconstruction of an image with lower dimension. We plotted the first 9 of these reconstructed faces. \\\\ \\\\\n\\textbf{Part 2} \\\\ \n\tWe first selected our audio. For test case one, we needed to choose audio from three different artists/bands from different genres. We chose audio from Billy Joel (Rock), Travis Scott (Rap / Hip Hop), and Liszt (Classical Piano), three diverse and distinct categories. We downloaded five pieces/songs from each of the artists, and loaded them into MATLAB as a vector using \"[mus, Fs] = audioread(file)\". \\\\ \\\\\nThe procedure for one song was as follows: The songs were in stereo, but for the purpose of this analysis we converted them to mono. To do this, if both channels were nonzero, we averaged their values, but if at least one channel was 0, we took the max of the two channels instead. This resulted in a 1D vector representing the music. We then resampled this data by taking every 10th data point from each of these vectors, effectively reducing our data size by one order of magnitude. We then removed leading and trailing 0s from the audio using \"find\" and conditional statements. We split these into 5 second segments by subsetting from an index to the index plus five times the sampling rate (\"Fs\"), then reshaped these into a matrix, where each column of the matrix represented a 5 second clip of the mono version of the song, using \"reshape\". Finally, we created a label vector whose length corresponded to the number of 5 second clips from this artist, with the label being the artist's name. We repeated this for all songs and all artists, iteratively adding them to a large dataset of 5 second clips, along with labels. This resulted in a large data matrix where each column was a vector representation of a 5 second clip from a song, and a large 1D array of labels. \\\\ \\\\\nNext, we Fourier-transformed each song (each column of the data) using the fast Fourier transform \"fft\". We took the absolute value of this so that all of our models could use the data (like the SVM). We then took the SVD of this data, using \"svd\" and getting the \"u\", \"s\", and \"v\" matrices. We used \"diag\" to isolate the singular values from the \"s\" matrix into a 1D array, and calculated the number of modes at which 95\\% of the energy was captured. We then truncated our \"u\" matrix to this many columns, which removed redundant and less important information. We multiplied the transpose of u by our original data matrix. With this transformed dataset, we used \"randperm\" to generate a random set of data to extract for training, and a random set to extract for testing. We used roughly seven times as many clips for training as for testing. \\\\ \\\\\nWe then trained supervised machine learning methods using this training data and the accompanying labels. We used Naive Bayes, Random Forests, and Support Vector Machine, via \"fitcnb\", \"fitctree\", and \"fitcecoc\" respectively. We created a cross-validated model and a non-cross-validated model for each of these three. We then assessed the inaccuracy using \"loss\" with the test data and the correct labels as input, which returned a scalar from 0 to 1 representing the inaccuracy of the model. We used \"kfoldLoss\" to assess the inaccuracy of the cross-validated models on their held-out folds. We subtracted these inaccuracies from 1 to obtain the accuracies of the models. We then plotted these to compare how each model performed.\n\tWe repeated the above procedure for test case two, where we distinguished between three bands of the same genre. For this case, we used the artists Drake, Travis Scott, and Migos. Finally, we repeated this for case three, where we distinguished between three genres in general. For this case, we used Pop, Rock, and Motown.\n
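The per-song preprocessing described above can be summarized by the following sketch. It is a simplification: it averages the two channels unconditionally rather than using the zero-aware mono rule above, and \"song\" and \"Fs\" are assumed to come from \"audioread\".\n\\begin{lstlisting}[style=Matlab-editor]\n% Sketch: one song -> matrix of 5-second mono clips\nsong = song(1:10:end, :);           % downsample by a factor of 10\nFs = Fs/10;\nmono = mean(song, 2);               % simple stereo-to-mono average\nmono = mono(find(mono,1,'first'):find(mono,1,'last')); % trim silence\nchunk = 5*Fs;                       % samples per 5-second clip\nnclips = floor(numel(mono)/chunk);\nclips = reshape(mono(1:nclips*chunk), chunk, nclips);  % clip per column\n\\end{lstlisting}\n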
\\section*{\\fontsize{19}{15}\\selectfont Computational Results}\n\t\\textbf{Part 1: Faces} \\\\\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{var1}\n\\includegraphics[width = 8cm]{var2}\n\\caption{ Plots of energy for cropped (left) and uncropped (right)}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{crop}\n\\includegraphics[width = 8cm]{uncrop}\n\\caption{ Reconstructed faces for cropped (left) and uncropped (right)}\n\\end{center}\n\\end{figure}\nWe looked at the energy diagrams for both sets of images to reconstruct them with a lower rank. For the uncropped images, we found that a rank of 108 was sufficient to convey 99.9\\% of the energy. For the cropped images, we found this value to be 160. We were able to come up with clear reconstructions of the data, as shown in the above images. We see that the rank of the face space is 108 and 160 respectively for uncropped and cropped. In our SVD decomposition, U is the eigenface space. This means that U*S gives our weighted eigenfaces; they are the \"modes\", or the basis vectors in n-dimensional space. The S matrix tells us how strongly each of these vectors explains the variance, i.e., how important each of these principal components is to our data. Finally, V gives the coefficients for each data point's projection onto the new basis; the ith column of the V matrix holds the coefficients of the linear combination of the columns of U*S that reconstructs the ith face. We see that, compared to the cropped faces, the uncropped faces do not need as many singular modes to reconstruct the data at a high energy level. \\\\ \\\\\n\t\\textbf{Part 2: Music Classification} \\\\ \\\\\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 16cm]{acc1}\n\\caption{\\label{fig:acc1}  Plots of accuracy of supervised learning algorithms (test 1)}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 18cm]{use1}\n\\caption{\\label{fig:use1} Plots of energy (right) and accuracy with variable mode number (left)  (test 1) }\n\\end{center}\n\\end{figure}\n\t\\textbf{Test 1: Band Classification}\n\tWe saw that we were able to capture 95\\% of the energy with 217 modes, as shown in Figure 4, so we used this number for the truncation of our \"u\" matrix. We then compared the prediction accuracy of each of the models, as shown in Figure 3. We see that Naive Bayes performed remarkably well on non-cross-validated and cross-validated data. With SVD, it had 88.73\\% and 84.14\\% accuracy respectively, and without SVD, 95.16\\% and 94.26\\% accuracy. We see that SVM, with SVD, had low classification accuracies on non-cross-validated and cross-validated data, both under 45\\%, while for the non-SVD data in these categories, it had 97.06\\% and 54.08\\% classification. We see that there is a large discrepancy in SVM's accuracy on this non-SVD data; it has high classification accuracy for non-cross-validated data, but not for cross-validated data. For RF, we observed somewhat high accuracies for the SVD data (0.7760 and 0.6692 for non-cross-validated and cross-validated), and very high accuracies for both non-cross-validated and cross-validated non-SVD data, both being above 91\\%. Figure 4 shows that for SVD data, Naive Bayes had consistently high accuracy across a variable number of modes, followed by random forests and SVM. However, all three models were above the 50\\% mark, which shows that they are all competent. \\\\ \\\\\n\\textbf{Test 2: The Case for Seattle}\nWe saw that we were able to capture 95\\% of the energy with 180 modes, as shown in Figure 6. Figure 5 shows the relative accuracies for each of our models. For NB, we saw that it did not perform well on the SVD data, with accuracies near or slightly below 0.5 for the non-cross-validated and cross-validated cases. However, for the non-SVD data, it performed well for both cross-validated and non-cross-validated, with accuracies of 87.26\\% and 84.64\\% respectively. We saw similar results with RF; low accuracy (under 0.5) for SVD data, and high (>75\\%) accuracy for non-SVD data, both cross-validated and non-cross-validated. For our SVM, it did not perform well on 3/4 of our tests, with classification under 50\\%. However, for the non-SVD data with no cross-validation, it had an astonishing 99.01\\% accuracy. 
This was the highest we observed, and it was especially impressive given the general trend of low accuracies for this test case. In addition, Figure 6 shows the huge variability in accuracy of all six SVD classification methods as the number of modes changes. Since we were classifying within the same genre, we expected this: the models should theoretically do worse since there are not as many differences between the classes. This is why we see such low classification rates, as well as large variability with the number of modes. \\\\ \\\\\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 16cm]{acc2}\n\\caption{\\label{fig:acc2}  Plots of accuracy of supervised learning algorithms (test 2)}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 18cm]{use2}\n\\caption{\\label{fig:use2} Plots of energy (right) and accuracy with variable mode number (left) (test 2)}\n\\end{center}\n\\end{figure}\n\\textbf{Test 3: Genre Classification}\nWe found that at 239 modes we were able to represent 95\\% of the energy, as shown in Figure 8. In terms of accuracy, all three models (NB, SVM, RF) were not very good at classifying the SVD data, for both non-cross-validated and cross-validated cases. SVM and RF had under 50\\% classification in these categories. NB was slightly better, with 63.21\\% and 64.48\\% classification for those categories respectively. Once again, our models were much better at classifying non-SVD data: NB was solid with two classification accuracies above 83\\% for cross-validated and non-cross-validated, while random forests was slightly worse, with two values above 73\\%. Once again, SVM showed strange behavior on the non-SVD data: it had remarkably high classification accuracy of 92.37\\% for the non-cross-validated data, and a much lower 45.56\\% for the cross-validated data. Figure 8's plot of the accuracies of models on SVD data showed that NB (no cross validation) and NB (cross validation) had consistently high accuracies across the number of modes, while some models, such as both SVM models, did not perform well at any number. Overall, although this graph was less sporadic than Figure 6 for test case 2, there is still some variability, especially compared to test case 1. This shows that it is harder to distinguish between genres when each genre contains multiple artists, as there is more variability within each class. In addition, one reason for the reduced accuracy may be that the genres were slightly more similar to each other (Rock, Pop, and Motown) than they were in test case 1 (Rap, Classical, Rock). \\\\ \\\\\nOverall, we found that SVM on non-SVD, non-cross-validated data was consistently accurate for all three test cases. Naive Bayes also had consistently high accuracy on non-SVD data, both non-cross-validated and cross-validated, but not as high. We also observed that the SVD classification had much lower accuracy than the non-SVD classification. This makes sense, because we lower the dimension of the feature space with SVD, so we may lose information and accuracy in our model in exchange for speed (a decrease in computation time). Finally, we note that we had our worst overall results in test 2, where we had to distinguish between artists in the same genre. 
This makes sense because their music will be similar.\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 16cm]{acc3}\n\\caption{\\label{fig:acc3}  Plots of accuracy of supervised learning algorithms (test 3)}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 18cm]{use3}\n\\caption{\\label{fig:use3} Plots of energy (right) and accuracy with variable mode number (left) (test 3)}\n\\end{center}\n\\end{figure}\n\n\n\\section*{\\fontsize{19}{15}\\selectfont Summary and Conclusions}\nIn part 1, we were able to perform an SVD across faces and lower the rank after identifying the number of modes that captured most of the energy. We were then able to reproduce these faces. In part 2, we developed classification models for audio identification using supervised machine learning. We found that our models performed best when the genres were very different, and worst when they were most similar, which is to be expected. In addition, SVM was our best model overall, and non-SVD datasets had better classification accuracies than SVD ones.\n\\pagebreak\n\n\n\n\n\\section*{\\fontsize{19}{15}\\selectfont Appendix A}\n\\subsection*{MATLAB functions used and implementation}\n\"audioread(file)\": Loads an audio file as a one- or two-column matrix (depending on whether the file is mono or stereo). We used this to load in our music for classification in part 2. \\\\\n\"diag(X)\" : Returns a column vector of the diagonal values of a matrix. We used this to extract our variance numbers from the SVD. \\\\ \\\\\n\"find(X)\" : Returns a vector of the indices of the non-zero entries in a matrix. We used this to locate the first and last non-zero samples when trimming leading and trailing silence from the audio. \\\\ \\\\\n\"fitcnb(data, labels)\" : Fits a Naive Bayes model across the rows of \"data\", using \"labels\" to create a model. Cross validation can be specified with (..., \"crossval\", \"on\"). We used this for our music classification. \\\\ \\\\\n\"fitcecoc(data, labels)\" : Fits a multiclass Support Vector Machine with \"data\", using \"labels\" to create a model. Cross validation can be specified with (..., \"crossval\", \"on\"). We used this for our music classification. \\\\ \\\\\n\"fitctree(data, labels)\" : Fits a binary decision tree (our \"Random Forests\" model) with \"data\", using \"labels\" to create a model. Cross validation can be specified with (..., \"crossval\", \"on\"). We used this for our music classification. \\\\ \\\\\n\"floor(x)\" : Returns a double rounded down to the nearest integer. We used this to calculate 5 second segments of our sample audio and reshape it. \\\\ \\\\\n\"imread(file)\": Reads in an image file as a 3D or 2D matrix (depending on whether the image is color or grayscale). We used this to load in the Yale Faces for part 1. \\\\ \\\\\n\"kfoldLoss(Mdl)\" : Returns a number between 0 and 1 representing the loss (inaccuracy) of a given cross-validated model \"Mdl\", computed on the held-out folds. We used this to assess the accuracy of our cross-validated models. \\\\ \\\\\n\"loss(Mdl, data, labels)\" : Returns a number between 0 and 1 representing the loss (inaccuracy) of a given model \"Mdl\", where predicted labels are compared to the actual ones (\"labels\"). We used this to assess the accuracy of our various models. \\\\ \\\\\n\"max(A)\" : If \"A\" is a vector, returns the maximum value of this vector. We used this in the stereo-to-mono conversion when one channel was zero. 
\\\\ \\\\\n\"randperm(limit, n)\" : Returns a random permutation from 1 to \"limit\" of size \"n\". We used this to randomly select training and testing data for model classification. \\\\ \\\\\n\"reshape(A,sz)\" : Reshapes the array A to the size \"sz\". We used this to resize our image matrices into a vector for part 1, and to resize our audio into 5 second segments in part 2. \\\\ \\\\\n\"size(X)\" : Returns the dimensions of the matrix \"X\". We used this to calculate size for audio. \\\\ \\\\\n\"[u,s,v] = svd(X)\" : Returns the $U$, $S$, and $V$ matrices corresponded with the singular value decomposition of \"X\". We used this for the eigen faces in part 1, and for music classification in part 2. \\\\ \\\\\n\n\\section*{\\fontsize{19}{15}\\selectfont Appendix B}\n\\subsection*{MATLAB code}\n\\begin{lstlisting}[style=Matlab-editor]\nclear all; close all; clc; \n\nclose all; clear all; clc;\n%% Importing all images from directories (uncropped)\nstoragestan = [];\nDcy = dir(\"yalefaces\");\nfor k = 3:length(Dcy)\n    data = double(imread(strcat(\"yalefaces\\\", Dcy(k).name)));\n    storagestan = [storagestan; reshape(data,1,243*320)];\nend\n%% Importing all images from directories (Cropped)\n\nstoragecrop = [];\nDcy = dir(\"CroppedYale\");\nfor k = 1:length(Dcy)\n   curD = Dcy(k).name;\n   Dtemp = dir(strcat(\"CroppedYale\\\",curD, \"\\*.pgm\"));\n   \n   for j = 1:length(Dtemp)\n       data = double(imread(strcat(\"CroppedYale\\\",curD, \"\\\", Dtemp(j).name)));\n       storagecrop = [storagecrop; reshape(data,1,192*168)];\n   end\nend\n%% Computation (cropped)\ntic\n[u,s,v] = svd(storagecrop', 'econ');\ntoc\n\n%% Plotting and reconstrunction (cropped)\nsig = diag(s);\nlambda = sig.^2;\n\nsubplot(1,2,1)\nplot(cumsum(lambda/sum(lambda)), 'bo')\nrefline(0,0.99)\nlegend(\"Energy Captured\", \"99% of Energy Captured\", \"Location\", \"Southeast\");\nylabel(\"Cumulative Energy Captured by Diagonal Variances \"); xlabel(\"Diagonal Variances\");\ntitle(\"Cumulative Energy Captured (Cropped)\", \"Fontsize\", 14);\nsubplot(1,2,2)\nplot(lambda/sum(lambda), 'ro')\nset(gca, 'YScale', 'log')\nylim([10e-7, 1]); ylabel(\"Log of Energy Captured by Each Diagonal Variance\"); xlabel(\"Diagonal Variances\");\ntitle(\"Log Of Energy Captured (Cropped)\", \"Fontsize\", 14);\n%% Eigen Faces\nreconstruct = u(:,1:160) *s(1:160,:)*v(1:160,:)';\nfor z = 1:9\n    curimg = reshape(reconstruct(:,z), [192,168]);\n    subplot(3,3,z);\n    pcolor(flip(curimg)), shading interp, colormap(gray)\nend\n\n\n%% NEW PART\ntic\n[u1,s1,v1] = svd(storagestan', 'econ');\ntoc\n\nsig1 = diag(s);\nlambda1 = sig1.^2;\n\nreconstruct1 = u1(:,1:108) *s1(1:108,:)*v1(1:108,:)';\nfor z = 1:9\n    curimg = reshape(reconstruct1(:,z), [243,320]);\n    subplot(3,3,z);\n    pcolor(flip(curimg)), shading interp, colormap(gray)\nend\n\nsig1 = diag(s1);\nlambda1 = sig1.^2;\n\nsubplot(1,2,1)\nplot(cumsum(lambda1/sum(lambda1)), 'bo')\nrefline(0,0.99)\nlegend(\"Energy Captured\", \"99% of Energy Captured\", \"Location\", \"Southeast\");\nylabel(\"Cumulative Energy Captured by Diagonal Variances\"); xlabel(\"Diagonal Variances\");\ntitle(\"Cumulative Energy Captured  (Uncropped)\", \"Fontsize\", 14);\nsubplot(1,2,2)\nplot(lambda1/sum(lambda1), 'ro')\nset(gca, 'YScale', 'log')\nylim([10e-7, 1]); ylabel(\"Log of Energy Captured by Each Diagonal Variance\"); xlabel(\"Diagonal Variances\");\ntitle(\"Log Of Energy Captured (Uncropped)\", \"Fontsize\", 14);\n\nclose all; clear all; clc;\n\n%% Import audio files\nalldata = [];\nAlabels = [];\nfor str = 
{\"travi\", \"billy\", \"liszt\"}\n    for i = 1:5\n        [song, Fs] = audioread(strcat(\"Music\\\",str{1}, num2str(i),\".mp3\"));\n        Fs = Fs/10;\n\n        %Lower sampling rate\n        song = song(1:10:end,:);\n        \n        monosong = [];\n        %Convert from stereo to mono\n        for z = 1:length(song)\n            if (song(z,1) == 0 || song(z,2) == 0)\n                monosong(z,1) = max(song(z,:));\n            else\n                monosong(z,1) = (song(z,1) + song(z,2))/2;\n            end\n        end\n\n\n        %Remove leading and trailing 0s\n        monosong = monosong(find(monosong,1,'first'):find(monosong,1,'last'));\n\n        %5 second intervals\n        chunk_size = Fs*5;\n\n        endpoint = floor(length(monosong) / chunk_size);\n        monosong = monosong(1:(chunk_size * endpoint),1);\n\n        data = reshape(monosong, [chunk_size, endpoint]);\n        alldata = [alldata, data];\n        Alabels = [Alabels; repmat(str{1}, size(data,2) ,1)];\n    end\nend\nAlabels = Alabels.';\ntraindata = alldata;\ntrainlabels = Alabels;\n%% Rearrange and apply fft\n[u,s,v] = svd(abs(fft(alldata)), 'econ');\ntrainlabels = Alabels;\n\nsig = diag(s);\nlambda = sig.^2;\n\nutrunc = u(:, 1:217);\ntraindata = utrunc'*alldata;\n\nn = floor(size(traindata, 2)/(8));\nc = randperm(size(traindata,2), n);\n    \ntestdata = traindata(:,c);\ntestlabels = trainlabels(:,c);\n\ntraindata(:,c) = [];\ntrainlabels(:,c) = [];\n\nnosvdtraindata = abs(fft(alldata));\nnosvdtestdata = nosvdtraindata(:,c);\nnosvdtraindata(:,c) = [];\n%% Modeling\nnbMdl = fitcnb(traindata.', trainlabels);\nnbL = loss(nbMdl, testdata.', testlabels)\n\n%Cross validation model\nnbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\nnbcvL = kfoldLoss(nbcvMdl,'LossFun','ClassifErr')\n\nsvmMdl = fitcecoc(traindata.', trainlabels);\nsvmL = loss(svmMdl, testdata.', testlabels)\n\n%Cross validation model\nsvmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\nsvmcvL = kfoldLoss(svmcvMdl,'LossFun','ClassifErr')\n\nrfMdl = fitctree(traindata.', trainlabels);\nrfL = loss(rfMdl, testdata.', testlabels)\n\n%Cross validation model\nrfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\nrfcvL = kfoldLoss(rfcvMdl,'LossFun','ClassifErr')\n\n%% Modeling (no SVD)\nnosvdnbMdl = fitcnb(nosvdtraindata.', trainlabels);\nnosvdnbL = loss(nosvdnbMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdnbcvMdl = fitcnb(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdnbcvL = kfoldLoss(nosvdnbcvMdl,'LossFun','ClassifErr')\n\nnosvdsvmMdl = fitcecoc(nosvdtraindata.', trainlabels);\nnosvdsvmL = loss(nosvdsvmMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdsvmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\nnosvdsvmcvL = kfoldLoss(nosvdsvmcvMdl,'LossFun','ClassifErr')\n\nnosvdrfMdl = fitctree(nosvdtraindata.', trainlabels);\nnosvdrfL = loss(nosvdrfMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdrfcvMdl = fitctree(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdrfcvL = kfoldLoss(nosvdrfcvMdl,'LossFun','ClassifErr')\n\n%% Plotting\nclose all;\n\nbar(1-[nbL nbcvL nosvdnbL nosvdnbcvL; svmL svmcvL nosvdsvmL nosvdsvmcvL; rfL rfcvL nosvdrfL nosvdrfcvL])\nxticklabels([\"Naive Bayes\", \"Support Vector Machine\", \"Random Forests\"]);\nlegend(\"SVD, No Cross Validation\", \"SVD, Cross Validation\", \"No SVD, No Cross Validation\", \"No SVD, Cross Validation\",...\n    \"Location\", \"south\");\nylabel(\"Classification Accuracy (Scale of 0 to 1)\"); 
xlabel(\"Supervised Learning Algorithms\");\ntitle(\"Test 1: Accuracy of Trained Models of Three Bands in Different Genres (217 Modes)\", 'FontSize', 20);\ngrid on\n\n%% Loop\ntic\nclassify= []\nfor z = 1:25:size(u,2)\n    trunc = u(:, 1:z);\n    traindata = utrunc'*alldata;\n    trainlabels = Alabels;\n    \n    n = floor(size(traindata, 2)/(8));\n    c = randperm(size(traindata,2), n);\n    \n    testdata = traindata(:,c);\n    testlabels = trainlabels(:,c);\n    traindata(:,c) = [];\n    trainlabels(:,c) = [];\n    \n    nbMdl = fitcnb(traindata.', trainlabels);\n    nbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\n\n    svmMdl = fitcecoc(traindata.', trainlabels);\n    svmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\n\n    rfMdl = fitctree(traindata.', trainlabels);\n    rfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\n    \n    classify= [classify; loss(nbMdl, testdata.', testlabels) , kfoldLoss(nbcvMdl,'LossFun','ClassifErr'), ...\n                       loss(svmMdl, testdata.', testlabels), kfoldLoss(svmcvMdl,'LossFun','ClassifErr'), ...\n                        loss(rfMdl, testdata.', testlabels), kfoldLoss(rfcvMdl,'LossFun','ClassifErr')];\n                    \nend\ntoc\n\n%% Plot bigger stuff\nclose all;\n\nsubplot(1,2,1)\nplot(1:25:size(u,2), 1-classify, \"Linewidth\", 2)\ntitle(\"Test 1: Model Accuracy vs Number of Modes\", \"Fontsize\", 20);\nxlabel(\"Number of Modes\"); ylabel(\"Accuracy (Scale of 0 to 1)\");\nrefline(0,0.5)\nlegend(\"Naive Bayes\", \"Naive Bayes (Cross Validation)\", \"SVM\", \"SVM (Cross Validation)\", ...\n    \"Random Forests\", \"Random Forests (Cross Validation)\",\"50% Accuracy Mark\", \"Location\", \"best\");\n\nsubplot(1,4,3)\nplot(cumsum(lambda/sum(lambda)), 'bo')\nrefline(0,0.95)\nlegend(\"Energy Captured\", \"95% of Energy Captured\", \"Location\", \"Southeast\");\nylabel(\"Cumulative Energy Captured by Diagonal Variances\"); xlabel(\"Diagonal Variances\");\ntitle(\"Cumulative Energy Captured\", \"Fontsize\", 14);\nsubplot(1,4,4)\nplot(lambda/sum(lambda), 'ro')\nset(gca, 'YScale', 'log')\nylim([10e-5, 1]); ylabel(\"Log of Energy Captured by Each Diagonal Variance\"); xlabel(\"Diagonal Variances\");\ntitle(\"Log Of Energy Captured\", \"Fontsize\", 14);\n\nclose all; clear all; clc;\n\n%% Import audio files\nalldata = [];\nAlabels = [];\nfor str = {\"drake\", \"travi\", \"migos\"}\n    for i = 1:5\n        [song, Fs] = audioread(strcat(\"Music\\\",str{1}, num2str(i),\".mp3\"));\n        Fs = Fs/10;\n\n        %Lower sampling rate\n        song = song(1:10:end,:);\n        \n        monosong = [];\n        %Convert from stereo to mono\n        for z = 1:length(song)\n            if (song(z,1) == 0 || song(z,2) == 0)\n                monosong(z,1) = max(song(z,:));\n            else\n                monosong(z,1) = (song(z,1) + song(z,2))/2;\n            end\n        end\n\n\n        %Remove leading and trailing 0s\n        monosong = monosong(find(monosong,1,'first'):find(monosong,1,'last'));\n\n        %5 second intervals\n        chunk_size = Fs*5;\n\n        endpoint = floor(length(monosong) / chunk_size);\n        monosong = monosong(1:(chunk_size * endpoint),1);\n\n        data = reshape(monosong, [chunk_size, endpoint]);\n        alldata = [alldata, data];\n        Alabels = [Alabels; repmat(str{1}, size(data,2) ,1)];\n    end\nend\nAlabels = Alabels.';\ntraindata = alldata;\ntrainlabels = Alabels;\n%% Rearrange and apply fft\n[u,s,v] = svd(abs(fft(alldata)), 'econ');\ntrainlabels = 
Alabels;\n\nsig = diag(s);\nlambda = sig.^2;\n\nutrunc = u(:, 1:180);\ntraindata = utrunc'*alldata;\n\nn = floor(size(traindata, 2)/(8));\nc = randperm(size(traindata,2), n);\n    \ntestdata = traindata(:,c);\ntestlabels = trainlabels(:,c);\n\ntraindata(:,c) = [];\ntrainlabels(:,c) = [];\n\nnosvdtraindata = abs(fft(alldata));\nnosvdtestdata = nosvdtraindata(:,c);\nnosvdtraindata(:,c) = [];\n%% Modeling\nnbMdl = fitcnb(traindata.', trainlabels);\nnbL = loss(nbMdl, testdata.', testlabels)\n\n%Cross validation model\nnbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\nnbcvL = kfoldLoss(nbcvMdl,'LossFun','ClassifErr')\n\nsvmMdl = fitcecoc(traindata.', trainlabels);\nsvmL = loss(svmMdl, testdata.', testlabels)\n\n%Cross validation model\nsvmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\nsvmcvL = kfoldLoss(svmcvMdl,'LossFun','ClassifErr')\n\nrfMdl = fitctree(traindata.', trainlabels);\nrfL = loss(rfMdl, testdata.', testlabels)\n\n%Cross validation model\nrfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\nrfcvL = kfoldLoss(rfcvMdl,'LossFun','ClassifErr')\n\n%% Modeling (no SVD)\nnosvdnbMdl = fitcnb(nosvdtraindata.', trainlabels);\nnosvdnbL = loss(nosvdnbMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdnbcvMdl = fitcnb(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdnbcvL = kfoldLoss(nosvdnbcvMdl,'LossFun','ClassifErr')\n\nnosvdsvmMdl = fitcecoc(nosvdtraindata.', trainlabels);\nnosvdsvmL = loss(nosvdsvmMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdsvmcvMdl = fitcecoc(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdsvmcvL = kfoldLoss(nosvdsvmcvMdl,'LossFun','ClassifErr')\n\nnosvdrfMdl = fitctree(nosvdtraindata.', trainlabels);\nnosvdrfL = loss(nosvdrfMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdrfcvMdl = fitctree(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdrfcvL = kfoldLoss(nosvdrfcvMdl,'LossFun','ClassifErr')\n\n%% Plotting\nclose all;\n\nbar(1-[nbL nbcvL nosvdnbL nosvdnbcvL; svmL svmcvL nosvdsvmL nosvdsvmcvL; rfL rfcvL nosvdrfL nosvdrfcvL])\nxticklabels([\"Naive Bayes\", \"Support Vector Machine\", \"Random Forests\"]);\nlegend(\"SVD, No Cross Validation\", \"SVD, Cross Validation\", \"No SVD, No Cross Validation\", \"No SVD, Cross Validation\");\nylabel(\"Classification Accuracy (Scale of 0 to 1)\"); xlabel(\"Supervised Learning Algorithms\");\ntitle(\"Test 2: Accuracy of Trained Models of Three Bands in Same Genre (180 Modes)\", 'FontSize', 20);\ngrid on\n%% Loop\ntic\nclassify= []\nfor z = 1:25:size(u,2)\n    utrunc = u(:, 1:z);\n    traindata = utrunc'*alldata;\n    trainlabels = Alabels;\n    \n    n = floor(size(traindata, 2)/(8));\n    c = randperm(size(traindata,2), n);\n    \n    testdata = traindata(:,c);\n    testlabels = trainlabels(:,c);\n    traindata(:,c) = [];\n    trainlabels(:,c) = [];\n    \n    nbMdl = fitcnb(traindata.', trainlabels);\n    nbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\n\n    svmMdl = fitcecoc(traindata.', trainlabels);\n    svmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\n\n    rfMdl = fitctree(traindata.', trainlabels);\n    rfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\n    \n    classify= [classify; loss(nbMdl, testdata.', testlabels) , kfoldLoss(nbcvMdl,'LossFun','ClassifErr'), ...\n                       loss(svmMdl, testdata.', testlabels), kfoldLoss(svmcvMdl,'LossFun','ClassifErr'), ...\n                        loss(rfMdl, testdata.', testlabels), 
kfoldLoss(rfcvMdl,'LossFun','ClassifErr')];\n                    \nend\ntoc\n\n%% Plot bigger stuff\nclose all;\n\nsubplot(1,2,1)\nplot(1:25:size(u,2), 1-classify, \"Linewidth\", 2)\ntitle(\"Test 2: Model Accuracy vs Number of Modes\", \"Fontsize\", 20);\nxlabel(\"Number of Modes\"); ylabel(\"Accuracy (Scale of 0 to 1)\");\nrefline(0,0.5)\nlegend(\"Naive Bayes\", \"Naive Bayes (Cross Validation)\", \"SVM\", \"SVM (Cross Validation)\", ...\n    \"Random Forests\", \"Random Forests (Cross Validation)\",\"50% Accuracy Mark\", \"Location\", \"Southwest\");\n\nsubplot(1,4,3)\nplot(cumsum(lambda/sum(lambda)), 'bo')\nrefline(0,0.95)\nlegend(\"Energy Captured\", \"95% of Energy Captured\", \"Location\", \"Southeast\");\nylabel(\"Cumulative Energy Captured by Diagonal Variances\"); xlabel(\"Diagonal Variances\");\ntitle(\"Cumulative Energy Captured\", \"Fontsize\", 14);\nsubplot(1,4,4)\nplot(lambda/sum(lambda), 'ro')\nset(gca, 'YScale', 'log')\nylim([10e-5, 1]); ylabel(\"Log of Energy Captured by Each Diagonal Variance\"); xlabel(\"Diagonal Variances\");\ntitle(\"Log Of Energy Captured\", \"Fontsize\", 14);\n\nclose all; clear all; clc;\n\n%% Import audio files\nalldata = [];\nAlabels = [];\nfor str = {\"pop\", \"rock\", \"motown\"}\n    for i = 1:5\n        [song, Fs] = audioread(strcat(\"Music\\\",str{1}, num2str(i),\".mp3\"));\n        Fs = Fs/10;\n\n        %Lower sampling rate\n        song = song(1:10:end,:);\n        \n        monosong = [];\n        %Convert from stereo to mono\n        for z = 1:length(song)\n            if (song(z,1) == 0 || song(z,2) == 0)\n                monosong(z,1) = max(song(z,:));\n            else\n                monosong(z,1) = (song(z,1) + song(z,2))/2;\n            end\n        end\n\n\n        %Remove leading and trailing 0s\n        monosong = monosong(find(monosong,1,'first'):find(monosong,1,'last'));\n\n        %5 second intervals\n        chunk_size = Fs*5;\n\n        endpoint = floor(length(monosong) / chunk_size);\n        monosong = monosong(1:(chunk_size * endpoint),1);\n\n        data = reshape(monosong, [chunk_size, endpoint]);\n        alldata = [alldata, data];\n        Alabels = [Alabels; repmat(str{1}, size(data,2) ,1)];\n    end\nend\nAlabels = Alabels.';\ntraindata = alldata;\ntrainlabels = Alabels;\n%% Rearrange and apply fft\n[u,s,v] = svd(abs(fft(alldata)), 'econ');\ntrainlabels = Alabels;\n\nsig = diag(s);\nlambda = sig.^2;\n\nutrunc = u(:, 1:239);\ntraindata = utrunc'*alldata;\n\nn = floor(size(traindata, 2)/(8));\nc = randperm(size(traindata,2), n);\n    \ntestdata = traindata(:,c);\ntestlabels = trainlabels(:,c);\n\ntraindata(:,c) = [];\ntrainlabels(:,c) = [];\n\nnosvdtraindata = abs(fft(alldata));\nnosvdtestdata = nosvdtraindata(:,c);\nnosvdtraindata(:,c) = [];\n%% Modeling\nnbMdl = fitcnb(traindata.', trainlabels);\nnbL = loss(nbMdl, testdata.', testlabels)\n\n%Cross validation model\nnbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\nnbcvL = kfoldLoss(nbcvMdl,'LossFun','ClassifErr')\n\nsvmMdl = fitcecoc(traindata.', trainlabels);\nsvmL = loss(svmMdl, testdata.', testlabels)\n\n%Cross validation model\nsvmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\nsvmcvL = kfoldLoss(svmcvMdl,'LossFun','ClassifErr')\n\nrfMdl = fitctree(traindata.', trainlabels);\nrfL = loss(rfMdl, testdata.', testlabels)\n\n%Cross validation model\nrfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\nrfcvL = kfoldLoss(rfcvMdl,'LossFun','ClassifErr')\n\n%% Modeling (no SVD)\nnosvdnbMdl = fitcnb(nosvdtraindata.', 
trainlabels);\nnosvdnbL = loss(nosvdnbMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdnbcvMdl = fitcnb(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdnbcvL = kfoldLoss(nosvdnbcvMdl,'LossFun','ClassifErr')\n\nnosvdsvmMdl = fitcecoc(nosvdtraindata.', trainlabels);\nnosvdsvmL = loss(nosvdsvmMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdsvmcvMdl = fitcecoc(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdsvmcvL = kfoldLoss(nosvdsvmcvMdl,'LossFun','ClassifErr')\n\nnosvdrfMdl = fitctree(nosvdtraindata.', trainlabels);\nnosvdrfL = loss(nosvdrfMdl, nosvdtestdata.', testlabels)\n\n%Cross validation model\nnosvdrfcvMdl = fitctree(nosvdtraindata.', trainlabels, 'crossval', 'on');\nnosvdrfcvL = kfoldLoss(nosvdrfcvMdl,'LossFun','ClassifErr')\n\n%% Plotting\nclose all;\n\nbar(1-[nbL nbcvL nosvdnbL nosvdnbcvL; svmL svmcvL nosvdsvmL nosvdsvmcvL; rfL rfcvL nosvdrfL nosvdrfcvL])\nxticklabels([\"Naive Bayes\", \"Support Vector Machine\", \"Random Forests\"]);\nlegend(\"SVD, No Cross Validation\", \"SVD, Cross Validation\", \"No SVD, No Cross Validation\", \"No SVD, Cross Validation\");\nylabel(\"Classification Accuracy (Scale of 0 to 1)\"); xlabel(\"Supervised Learning Algorithms\");\ntitle(\"Test 3: Accuracy of Trained Models of Three Genres (239 Modes)\", 'FontSize', 20);\ngrid on\n%% Loop\ntic\nclassify= []\nfor z = 1:25:size(u,2)\n    utrunc = u(:, 1:z);\n    traindata = utrunc'*alldata;\n    trainlabels = Alabels;\n    \n    n = floor(size(traindata, 2)/(8));\n    c = randperm(size(traindata,2), n);\n    \n    testdata = traindata(:,c);\n    testlabels = trainlabels(:,c);\n    traindata(:,c) = [];\n    trainlabels(:,c) = [];\n    \n    nbMdl = fitcnb(traindata.', trainlabels);\n    nbcvMdl = fitcnb(traindata.', trainlabels, 'crossval', 'on');\n\n    svmMdl = fitcecoc(traindata.', trainlabels);\n    svmcvMdl = fitcecoc(traindata.', trainlabels, 'crossval', 'on');\n\n    rfMdl = fitctree(traindata.', trainlabels);\n    rfcvMdl = fitctree(traindata.', trainlabels, 'crossval', 'on');\n    \n    classify= [classify; loss(nbMdl, testdata.', testlabels) , kfoldLoss(nbcvMdl,'LossFun','ClassifErr'), ...\n                       loss(svmMdl, testdata.', testlabels), kfoldLoss(svmcvMdl,'LossFun','ClassifErr'), ...\n                        loss(rfMdl, testdata.', testlabels), kfoldLoss(rfcvMdl,'LossFun','ClassifErr')];\n                    \nend\ntoc\n\n%% Plot bigger stuff\nclose all;\n\nsubplot(1,2,1)\nplot(1:25:size(u,2), 1-classify, \"Linewidth\", 2)\ntitle(\"Test 3: Model Accuracy vs Number of Modes\", \"Fontsize\", 20);\nxlabel(\"Number of Modes\"); ylabel(\"Accuracy (Scale of 0 to 1)\");\nrefline(0,0.5)\nlegend(\"Naive Bayes\", \"Naive Bayes (Cross Validation)\", \"SVM\", \"SVM (Cross Validation)\", ...\n    \"Random Forests\", \"Random Forests (Cross Validation)\",\"50% Accuracy Mark\", \"Location\", \"Southwest\");\n\nsubplot(1,4,3)\nplot(cumsum(lambda/sum(lambda)), 'bo')\nrefline(0,0.95)\nlegend(\"Energy Captured\", \"95% of Energy Captured\", \"Location\", \"Southeast\");\nylabel(\"Cumulative Energy Captured by Diagonal Variances\"); xlabel(\"Diagonal Variances\");\ntitle(\"Cumulative Energy Captured\", \"Fontsize\", 14);\nsubplot(1,4,4)\nplot(lambda/sum(lambda), 'ro')\nset(gca, 'YScale', 'log')\nylim([10e-5, 1]); ylabel(\"Log of Energy Captured by Each Diagonal Variance\"); xlabel(\"Diagonal Variances\");\ntitle(\"Log Of Energy Captured\", \"Fontsize\", 14);\n\n\\end{lstlisting}\n\\end{document}\n", "meta": {"hexsha": 
"9f3001a718409e702232147d5cb6ab18b2ed265b", "size": 47005, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW4/HW4.tex", "max_stars_repo_name": "shallinan1/AMATH-482", "max_stars_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW4/HW4.tex", "max_issues_repo_name": "shallinan1/AMATH-482", "max_issues_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW4/HW4.tex", "max_forks_repo_name": "shallinan1/AMATH-482", "max_forks_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.0308641975, "max_line_length": 1702, "alphanum_fraction": 0.7091586001, "num_tokens": 13410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5893412884808267}}
{"text": "\n%\\section{Entropy and approximation rates}  \nThe presentation in this section follows the paper \\cite{siegel2021optimal}. \nThe recent success of deep learning \\cite{lecun2015deep} has spurred a large amount of research into the mathematical foundations of neural networks. In addition, there is a rapidly growing interest in using neural networks as a function class for solving partial differential equations \\cite{han2018solving,CiCP-28-1707} and simulating physical systems \\cite{raissi2018hidden}. Of particular importance in both of these research directions are the approximation theory of neural networks, specifically the determination of how effectively neural networks can approximate high dimensional functions. Many recent theoretical results indicate that a wide class of functions, especially in high dimensions, can be efficiently approximated by both shallow \\cite{wojtowytsch2020representation,ma2019priori,siegel2020approximation} and deep neural networks \\cite{yarotsky2017error,lu2020deep,opschoor2020deep,daubechies2019nonlinear,devore2020neural,li2019better}, and that solving PDEs in high dimensions using neural networks is a viable approach \\cite{lu2021priori,li2020multipole,luo2020two}.\n\nAn important consideration when studying the approximation properties of neural networks, and non-linear approximation in general, is the existence of a stable numerical algorithm which can realize a given approximation rate. This is intimitely connected with the metric entropy of the class of functions under consideration, as observed in \\cite{cohen2020optimal}. Consequently, calculating the metric entropy of neural network function classes is important for determining the theoretical limitations of using neural networks.\n\nIn this work, we calculate the metric entropy of the class of functions which can be efficiently approximated by shallow ReLU$^k$ neural networks. This class of functions has been extensively studied in the statistics and machine learning literature \\cite{barron1993universal,jones1992simple,klusowski2018approximation}. \n\nWe begin by considering a somewhat more general class of functions arising in the study of non-linear approximation by a dictionary of functions $\\mathbb{D}\\subset H$ in a Hilbert space $H$ \\cite{devore1998nonlinear,barron2008approximation}. Let $H$ be Hilbert space and $\\mathbb{D}\\subset H$ a dictionary with $\\sup_{d\\in \\mathbb{D}} \\|d\\|_H = K_\\mathbb{D} < \\infty$ (note here dictionary is simply another name for subset). \nWe introduce the set\n\\begin{equation}\\label{unit-ball-definition}\n B_1(\\mathbb{D}) = \\overline{\\left\\{\\sum_{j=1}^n a_jh_j:~n\\in \\mathbb{N},~h_j\\in \\mathbb{D},~\\sum_{i=1}^n|a_i|\\leq 1\\right\\}},\n\\end{equation}\nwhich is the closure of the convex, symmetric hull of $\\mathbb{D}$. Further, we define a norm, $\\|\\cdot\\|_{\\mathcal{K}_1(\\mathbb{D})}$, on $H$ given by the guage (see for instance \\cite{rockafellar1970convex}) of $B_1(\\mathbb{D})$,\n\\begin{equation}\\label{norm-definition}\n \\|f\\|_{\\mathcal{K}_1(\\mathbb{D})} = \\inf\\{c > 0:~f\\in cB_1(\\mathbb{D})\\},\n\\end{equation}\nwhich is defined so that $B_1(\\mathbb{D})$ is the unit ball of $\\|\\cdot\\|_{\\mathcal{K}_1(\\mathbb{D})}$. 
We also denote the space $\mathcal{K}_1(\mathbb{D})$ by
\begin{equation}\label{space-definition}
\mathcal{K}_1(\mathbb{D}) := \{f\in H:~\|f\|_{\mathcal{K}_1(\mathbb{D})} < \infty\}.
\end{equation}

This norm has been introduced in different forms in the literature \cite{devore1998nonlinear,kurkova2001bounds,kurkova2002comparison,barron2008approximation}. The notation and definition we use here were introduced in \cite{devore1998nonlinear}, where a more general $\mathcal{K}_\tau(\mathbb{D})$ space is considered for $0<\tau\leq \infty$. We restrict ourselves to the case $\tau = 1$, which is the most important space for general dictionaries. In Section \ref{spectral-barron-section}, we discuss the properties of the $\mathcal{K}_1(\mathbb{D})$ space in more detail and compare with previously introduced notions, such as the Barron space introduced in \cite{ma2019barron}.

The significance of the space $\mathcal{K}_1(\mathbb{D})$ lies in its connection with approximation from the set of $\ell^1$-bounded $n$-term linear combinations of dictionary elements,
\begin{equation}
 \Sigma_{n,M}(\mathbb{D}) = \left\{\sum_{j=1}^n a_jh_j:~h_j\in \mathbb{D},~\sum_{i=1}^n|a_i|\leq M\right\},
\end{equation}
where the coefficients $a_i$ are taken as either real or complex depending upon whether $H$ is a real or complex Hilbert space. A classical result by Maurey \cite{pisier1981remarques} (see also \cite{jones1992simple,barron1993universal,devore1998nonlinear}) is the following approximation rate for functions $f\in \mathcal{K}_1(\mathbb{D})$:
\begin{equation}\label{fundamental-bound}
 \inf_{f_n\in \Sigma_{n,M}(\mathbb{D})} \|f - f_n\|_H \leq K_\mathbb{D}\|f\|_{\mathcal{K}_1(\mathbb{D})}n^{-\frac{1}{2}},
\end{equation}
where the bound $M$ can be taken as $M = \|f\|_{\mathcal{K}_1(\mathbb{D})}$. An equivalent formulation of this result, which is sometimes more convenient, is that for $f\in B_1(\mathbb{D})$ we have
\begin{equation}
 \inf_{f_n\in \Sigma_{n,1}(\mathbb{D})} \|f - f_n\|_H \leq K_\mathbb{D}n^{-\frac{1}{2}}.
\end{equation}

In this work, we are primarily interested in the following two types of dictionaries, which are related to approximation by neural networks. Throughout the paper, we will consider the unit ball $B_1^d := \{x\in \mathbb{R}^d:~|x| \leq 1\}$ of $\mathbb{R}^d$. We remark, however, that the results we obtain generalize in a straightforward manner to any bounded domain $\Omega\subset \mathbb{R}^d$. In particular, the upper bounds transfer to $\Omega$ since $\Omega$ is contained in a ball of sufficiently large radius, and the lower bounds transfer to $\Omega$ since $\Omega$ contains a ball of some sufficiently small positive radius. Thus, in passing to $\Omega$ only the implied constants will change.

The first type of dictionary we will be interested in arises when studying networks with ReLU$^k$ activation function $\sigma_k(x) = \text{ReLU}^k(x) := [\max(0,x)]^k$ (when $k=0$, we interpret $\sigma_k(x)$ to be the Heaviside function). Consider the dictionary
\begin{equation}\label{relu-k-space-definition}
 \mathbb{P}^d_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-2,2]\}\subset L^2(B_1^d),
\end{equation}
where $S^{d-1} = \{\omega\in \mathbb{R}^d:~|\omega| = 1\}$ is the unit sphere. We remark that the constant $2$ above can be replaced by any $c > 1$ to obtain an equivalent norm.
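To illustrate these definitions in the simplest case (a sketch, with constants not optimized): take $d=1$ and $k=0$, so that $\mathbb{P}^1_0$ consists of the Heaviside functions $\sigma_0(\pm x + b)$ on $[-1,1]$. For any $-1\leq s < t\leq 1$, the indicator function of $[s,t)$ satisfies
\begin{equation}
 \mathbf{1}_{[s,t)}(x) = \sigma_0(x - s) - \sigma_0(x - t)\in \Sigma_{2,2}(\mathbb{P}^1_0),
\end{equation}
so that $\|\mathbf{1}_{[s,t)}\|_{\mathcal{K}_1(\mathbb{P}^1_0)} \leq 2$. Limits of $\ell^1$-bounded combinations of such indicators recover the functions of bounded variation; this is made precise (for all $k$) in Theorem \ref{barron-space-1-d-characterization-theorem} below.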
In addition, when $k=1$ this norm is equivalent to the Barron norm studied in \cite{ma2019barron,ma2019priori}. We discuss the definition \eqref{relu-k-space-definition} and its relationship with the Barron norm in more detail in Section \ref{spectral-barron-section}. The relationship with ReLU$^k$ networks arises because
\begin{equation}
 \Sigma_{n,M}(\mathbb{P}^d_k) = \left\{\sum_{j=1}^n a_j\sigma_k(\omega_j \cdot x + b_j):~\omega_j\in S^{d-1},~|b_j| \leq 2,~\sum_{i=1}^n|a_i|\leq M\right\}
\end{equation}
is the set of shallow ReLU$^k$ neural networks with bounded coefficients and $n$ hidden neurons.

The second type of dictionary is the spectral dictionary of order $s \geq 0$, given by
\begin{equation}
 \mathbb{F}^d_s = \{(1+|\omega|)^{-s}e^{2\pi {\mathrm{i}\mkern1mu} \omega\cdot x}:~\omega\in \mathbb{R}^d\}\subset L^2(B_1^d).
\end{equation}
For this dictionary the space $\mathcal{K}_1(\mathbb{F}_s)$ can be completely characterized in terms of the Fourier transform. In particular,
\begin{equation}\label{barron-integral-condition}
 \|f\|_{\mathcal{K}_1(\mathbb{F}^d_s)} = \inf_{f_e|_{B_1^d}= f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi,
\end{equation}
where the infimum is taken over all extensions $f_e\in L^1(\mathbb{R}^d)$. For reference, we provide a detailed proof of this result in Section \ref{spectral-barron-section}. The connection with ReLU$^k$ neural networks is due to the fact that $\mathcal{K}_1(\mathbb{F}^d_{k+1})\subset \mathcal{K}_1(\mathbb{P}^d_k)$, which was first observed in the case $k=0$ in \cite{barron1993universal}, in the cases $k=1,2$ in \cite{klusowski2018approximation}, and extended to $k > 2$ in \cite{CiCP-28-1707}. Thus the integral condition \eqref{barron-integral-condition} defines a subspace of $\mathcal{K}_1(\mathbb{P}^d_k)$ which can be characterized via the Fourier transform. However, we remark that the inclusion here is strict \cite{wojtowytsch2020representation}, a point which we will come back to later.

Next, we recall the notion of metric entropy first introduced by Kolmogorov \cite{kolmogorov1958linear}. The (dyadic) entropy numbers $\epsilon_n(A)_H$ of a set $A\subset H$ are defined by
\begin{equation}
 \epsilon_n(A)_H = \inf\{\epsilon > 0:~\text{$A$ is covered by $2^n$ balls of radius $\epsilon$}\}.
\end{equation}
Roughly speaking, the entropy numbers indicate how precisely we can specify elements of $A$ given $n$ bits of information.
It is not necessary for the space $H$ to be a Hilbert space, although that is the case we will be interested in here; see for instance \cite{lorentz1996constructive}, Chapter 15, for the general theory.

Our main contribution is to calculate the entropy numbers of the unit balls $B_1(\mathbb{P}_k^d)$ and $B_1(\mathbb{F}_s^d)$ in $H = L^2(B_1^d)$. These are given in the following theorem.
\begin{theorem}
Let $k \geq 0$, $s > 0$ and $H = L^2(B_1^d)$. Then
\begin{equation}\label{metric-entropy-rates}
 \epsilon_n(B_1(\mathbb{P}_k^d))_H \eqsim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~\epsilon_n(B_1(\mathbb{F}_s^d))_H \eqsim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}}.
\end{equation}
\end{theorem}
The estimates given here are weak equivalences, i.e.
we have
\begin{equation}
 C_1n^{-\frac{1}{2} - \frac{2k+1}{2d}} \leq \epsilon_n(B_1(\mathbb{P}_k^d))_H \leq C_2n^{-\frac{1}{2} - \frac{2k+1}{2d}},
\end{equation}
for some constants $C_1 = C_1(k,d)$ and $C_2 = C_2(k,d)$, and an equivalent statement holds for $\epsilon_n(B_1(\mathbb{F}_s^d))$. (Generally, throughout this manuscript, we will use the notation $X\lesssim Y$ to mean that $X\leq CY$ for some constant $C$, $X\gtrsim Y$ to mean that $X \geq cY$ for some constant $c$, and $X\eqsim Y$ to mean that $X\gtrsim Y$ and $X\lesssim Y$. Moreover, if the constants may depend on a small number of parameters, these will be indicated as subscripts of the corresponding symbol. For dependence upon many parameters, the dependence (or independence) will be indicated in the text.) Let us discuss some consequences of these metric entropy rates.

The first consequence concerns approximation rates from $\Sigma_{n,M}(\mathbb{P}^d_k)$ with sufficiently large, but fixed, $M$ (i.e. the $\ell^1$-norm of the coefficients $a_j$ is kept bounded). An important result, first observed by Makovoz \cite{makovoz1996random}, is that for certain dictionaries the rate in \eqref{fundamental-bound} can be improved. In particular, for the dictionary $\mathbb{P}^d_0$ corresponding to neural networks with Heaviside activation function, Makovoz showed that for $f\in B_1(\mathbb{P}^d_0)$,
\begin{equation}\label{makovoz-original}
 \inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_0)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_d n^{-\frac{1}{2}-\frac{1}{2d}}.
\end{equation}
(Note that here and in what follows the implied constant is independent of $n$, and the bound $M$ is fixed and independent of $n$.)
Furthermore, improved rates have been obtained for other dictionaries. In particular, in \cite{klusowski2018approximation}, the dictionaries $\mathbb{P}^d_k$ corresponding to neural networks with activation function $\sigma_k(x) = [\max(0,x)]^k$ are studied for $k=1,2$, and it is shown that for $f\in B_1(\mathbb{P}^d_k)$,
\begin{equation}
 \inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\frac{1}{d}}.
\end{equation}
This analysis is extended to $k\geq 3$ in \cite{CiCP-28-1707}, where the same approximation rate is attained. This raises the natural question of what the optimal approximation rates for $\Sigma_{n,M}(\mathbb{P}^d_k)$ are.

Specifically, for each $k=0,1,2,...$ and dimension $d=2,3,...$ (the case $d=1$ is comparatively trivial), what is the largest possible value of $\alpha := \alpha(k,d)$ such that for $f\in B_1(\mathbb{P}^d_k)$ we have
\begin{equation}
 \inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\alpha(k,d)}?
\end{equation}
The results above imply that $\alpha(k,d) \geq \frac{1}{2d}$ for $k=0$ and $\alpha(k,d) \geq \frac{1}{d}$ for $k > 0$. When $d > 1$, the best available upper bounds on $\alpha(k,d)$ are $\alpha(k,d) \leq \frac{k+1}{d}$ (see \cite{makovoz1996random,klusowski2018approximation}), except in the case $k=0$, $d=2$, where Makovoz obtains the sharp bound $\alpha(0,2) = \frac{1}{4}$ \cite{makovoz1996random}.

A consequence of the entropy calculation \eqref{metric-entropy-rates}, specifically the lower bound in Theorem \ref{relu-k-lower-bound-corollary} and the approximation rate in Theorem \ref{relu-k-rate-corollary}, is that $\alpha(k,d) = \frac{2k+1}{2d}$, i.e.
that for $f\\in B_1(\\mathbb{P}_k^d)$, we have the rate\n\\begin{equation}\\label{reluk-approximation-rate}\n \\inf_{f_n\\in \\Sigma_{n,M}(\\mathbb{P}_k)} \\|f - f_n\\|_{L^2(B_1^d)} \\lesssim_{k,d} n^{-\\frac{1}{2}-\\frac{2k+1}{2d}},\n\\end{equation}\nand that this exponent can not be improved. This solves the problems posed in \\cite{makovoz1996random} and \\cite{klusowski2018approximation} for approximation rates in $L^2(B_1^d)$. In particular, it shows that the rate \\eqref{makovoz-original} obtained by Makovoz \\cite{makovoz1996random} is optimal for all $d \\geq 3$, and closes the gap between the best upper and lower bounds obtained in \\cite{klusowski2018approximation} for approximation in $L^2(B_1^d)$ by neural networks with ReLU$^k$ activation function.\n\nThe second important consequence concerns the more general stable non-linear approximation studied in \\cite{cohen2020optimal}, instead of approximation by $\\Sigma_{n,M}(\\mathbb{P}^d_k)$. In \\cite{cohen2020optimal}, for a subset $A\\subset H$ and a fixed $\\gamma > 0$, approximation schemes are considered which consist of a pair of $\\gamma$-Lipschitz functions $a:H\\rightarrow \\mathbb{R}^n$, $M:\\mathbb{R}^n\\rightarrow H$. Here, one can think of $a$ as an encoding map and $M$ as a decoding map, which are both required to be Lipschitz. Then the stable manifold $n$-widths are defined as the reconstruction error of the best encoding scheme $a,M$,\n\\begin{equation}\n \\delta^*_{n,\\gamma}(A)_H = \\inf_{a,M} \\sup_{x\\in A} \\|x - M(a(x))\\|_H.\n\\end{equation}\nNote that in general we must choose a norm on $R^n$ as well, but since $H$ is a Hilbert space we may take the Euclidean norm (this follows from the results in \\cite{cohen2020optimal}).\n\nThe main results of \\cite{cohen2020optimal} relate the stable manifold $n$-widths $\\delta^*_{n,\\gamma}(A)_H$ to the entropy numbers $\\epsilon_n(A)_H$. Combining this with our calculation of the entropies of $B_1(\\mathbb{P}_k^d)$ and $B_1(\\mathbb{F}_s^d)$, we are able to calculate the stable manifold $n$-widths of these sets as well. In particular, combining the entropy rates \\eqref{metric-entropy-rates} with Theorems 3.3 and 4.1 of \\cite{cohen2020optimal}, we get the weak equivalences\n\\begin{equation}\n \\delta^*_{n,2}(B_1(\\mathbb{P}_k^d))_{H}  \\eqsim_{k,d} n^{-\\frac{1}{2} - \\frac{2k+1}{2d}},~ \\delta^*_{n,2}(B_1(\\mathbb{F}_s^d))_{H} \\eqsim_{s,d} n^{-\\frac{1}{2} - \\frac{s}{d}},\n\\end{equation}\nwhere $H = L^2(B_1^d)$.\nThus, the entropy rates \\eqref{metric-entropy-rates} combined with the results of \\cite{cohen2020optimal} give the theoretically best possible approximation rate that can be attained for the $B_1(\\mathbb{P}_k^d)$, and thus for the Barron space when $k=1$, using any stable approximation scheme. Combined with the approximation rate \\eqref{reluk-approximation-rate}, this shows that no stable approximation scheme can approximate functions $f\\in B_1(\\mathbb{P}_k^d)$ more efficiently than shallow neural networks.\n\nWe note also that Carl's inequality \\cite{carl1981entropy} can also be used in combination with \\eqref{metric-entropy-rates} to derive lower bounds on the Kolmogorov $n$-widths of $B_1(\\mathbb{P}_k^d)$ and $B_1(\\mathbb{F}_s^d)$. Recall that the Kolmogorov $n$-widths of a set $A\\subset H$ is given by\n\\begin{equation}\n d_n(A)_H = \\inf_{Y_n}\\sup_{x\\in A}\\inf_{y\\in Y_n}\\|x - y\\|_H,\n\\end{equation}\nwhere the first infemum is over the collection of subspaces $Y_n$ of dimension $n$. 
Using Carl's inequality, the entropy rates \eqref{metric-entropy-rates} imply the lower bounds
\begin{equation}
 d_n(B_1(\mathbb{P}_k^d))_{H}  \gtrsim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~ d_n(B_1(\mathbb{F}_s^d))_{H} \gtrsim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}},
\end{equation}
with $H = L^2(B_1^d)$. These results give a lower bound on how effectively the unit balls in these spaces can be approximated by linear methods.

Further, the entropy rates \eqref{metric-entropy-rates} allow the comparison of the spaces $\mathcal{K}_1(\mathbb{P}_k^d)$ and $\mathcal{K}_1(\mathbb{F}_s^d)$ with each other and with more traditional function spaces. For instance, it is known that the entropy of the Sobolev unit ball $B(H^r) = \{f\in L^2(B_1^d):~\|f\|_{H^r(B_1^d)} \leq 1\}$ in the space $H = L^2(B_1^d)$ is given by (see \cite{lorentz1996constructive}, Chapter 15)
\begin{equation}
 \epsilon_n(B(H^r)) \eqsim_{r,d} n^{-\frac{r}{d}}.
\end{equation}
We observe that for fixed smoothness $k$, the entropy numbers of $B(H^k)$ decay very slowly in high dimensions. This phenomenon is known as the curse of dimensionality, and it has the consequence that general high dimensional functions are difficult to approximate accurately. Comparing with the entropy rates \eqref{metric-entropy-rates}, we see that the entropies of $B_1(\mathbb{P}_1^d)$ and $B_1(\mathbb{F}_s^d)$ exhibit a decay rate of at least $O(n^{-\frac{1}{2}})$ regardless of dimension. In general, to overcome the curse of dimensionality, it is necessary to find low entropy sets of functions which capture the phenomenon to be modelled.

Finally, it is observed in \cite{wojtowytsch2020representation} that the inclusion $\mathcal{K}_1(\mathbb{F}_{k+1})\subset \mathcal{K}_1(\mathbb{P}_k)$ is strict. This leaves open the question of how much larger the space $\mathcal{K}_1(\mathbb{P}_k)$ is, i.e. how much is lost by considering Barron's integral condition \eqref{barron-integral-condition} on the Fourier transform instead of the more natural space $\mathcal{K}_1(\mathbb{P}_k)$. The entropy rates \eqref{metric-entropy-rates} give an answer to this question. In particular, we see that the entropy numbers $\epsilon_n(B_1(\mathbb{F}_{k+1}))$ decay faster by a factor of $n^{-\frac{1}{2d}}$, which, comparing with the entropy of Sobolev balls, is analogous to about half a derivative.

The paper is organized as follows. In Section \ref{spectral-barron-section} we discuss some of the technical subtleties in defining the $\mathcal{K}_1(\mathbb{D})$ spaces and the dictionaries $\mathbb{P}^d_k$. We also give a characterization of $\mathcal{K}_1(\mathbb{F}^d_s)$ in terms of the Fourier transform. Then in Section \ref{main-result-1-section} we give our first main result, which gives approximation rates from $\Sigma_{n,M}(\mathbb{D})$ and upper bounds on the entropy of $B_1(\mathbb{D})$ for dictionaries $\mathbb{D}$ which are parameterized by a compact manifold. We apply this to obtain an upper bound on the entropy numbers of $B_1(\mathbb{P}^d_k)$ and $B_1(\mathbb{F}^d_s)$. In Section \ref{main-result-2-section} we give our second main result, which gives a lower bound on the metric entropy numbers of the convex hull of ridge functions. We use this to obtain matching lower bounds on the entropy numbers of $B_1(\mathbb{P}^d_k)$ and $B_1(\mathbb{F}^d_s)$.
Finally, we give some concluding remarks and further research directions.



\section{Properties of the spaces $\mathcal{K}_1(\mathbb{D})$, $\mathcal{K}_1(\mathbb{P}^d_k)$, and $\mathcal{K}_1(\mathbb{F}^d_s)$}\label{spectral-barron-section}
In this section, we derive some fundamental properties of the spaces $\mathcal{K}_1(\mathbb{D})$ for a general dictionary $\mathbb{D}$. Further, we establish basic properties of the specific spaces $\mathcal{K}_1(\mathbb{P}_k)$ and their relationship with the Barron space considered in \cite{ma2019barron, wojtowytsch2020representation}. Finally, we characterize the $\mathcal{K}_1(\mathbb{F}_s)$ spaces in terms of the Fourier transform.

\subsection{Basic Properties of $\mathcal{K}_1(\mathbb{D})$}
We begin with an elementary and well-known lemma concerning the unit ball $B_1(\mathbb{D})$, the norm $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$, and the space $\mathcal{K}_1(\mathbb{D})$. The most important point here is that the space $\mathcal{K}_1(\mathbb{D})$ is a Banach space.
\begin{lemma}\label{fundamental-norm-lemma}
 Suppose that $\sup_{d\in \mathbb{D}} \|d\|_H = K_\mathbb{D} < \infty$. Then the $\mathcal{K}_1(\mathbb{D})$ norm satisfies the following properties.
 \begin{itemize}
  \item $B_1(\mathbb{D}) = \{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\}$
  \item $\|f\|_H\leq K_\mathbb{D}\|f\|_{\mathcal{K}_1(\mathbb{D})}$
  \item $\mathcal{K}_1(\mathbb{D}) := \{f\in H:~\|f\|_{\mathcal{K}_1(\mathbb{D})} < \infty\}$ is a Banach space with the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm
 \end{itemize}

\end{lemma}
\begin{proof}
 From definition \eqref{norm-definition} we see that $B_1(\mathbb{D}) \subset \{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\}$, since $c=1$ is admissible in the infimum in \eqref{norm-definition}. For the reverse inclusion, let $\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1$. By \eqref{norm-definition} this means that for every $n$ we must have $f\in (1 + \frac{1}{n})B_1(\mathbb{D})$, or in other words that $f_n = \frac{n}{n+1}f\in B_1(\mathbb{D})$. However, it is clear that $f_n\rightarrow f$ in $H$, and thus since $B_1(\mathbb{D})$ is closed, we have $f\in B_1(\mathbb{D})$. Thus $\{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\} = B_1(\mathbb{D})$, proving the first statement.

 For the second statement, note that $\|d\|_H\leq K_\mathbb{D}$ for all $d\in \mathbb{D}$. This immediately implies that $\|f\|_H\leq K_\mathbb{D}$ for all $f\in B_1(\mathbb{D})$, which proves the result by an elementary scaling argument.

 Finally, for the third statement we must show that the set $\mathcal{K}_1(\mathbb{D})$ is complete with respect to the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm.

 So let $\{f_n\}_{n=1}^\infty$ be a Cauchy sequence with respect to the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm. By the second statement, we have $\|f_n - f_m\|_H\leq K_\mathbb{D}\|f_n - f_m\|_{\mathcal{K}_1(\mathbb{D})}$, so that the sequence is Cauchy with respect to the $H$-norm as well. Thus, there exists an $f\in H$ such that $f_n\rightarrow f$ in $H$.

 We will show that in fact $f_n\rightarrow f$ in the $\mathcal{K}_1(\mathbb{D})$-norm as well (note that this automatically implies that $\|f\|_{\mathcal{K}_1(\mathbb{D})}<\infty$).
 To this end, let $\epsilon > 0$ and choose $N$ such that $\|f_n - f_m\|_{\mathcal{K}_1(\mathbb{D})} < \epsilon / 2$ for $n,m \geq N$ ($\{f_n\}$ is Cauchy, so this is possible). In particular, this means that $\|f_N - f_m\|_{\mathcal{K}_1(\mathbb{D})}\leq \epsilon / 2$ for all $m \geq N$. Now the first statement implies that $f_m - f_N \in (\epsilon / 2)B_1(\mathbb{D})$, or in other words that $f_m \in f_N + (\epsilon / 2)B_1(\mathbb{D})$. Since $f_m\rightarrow f$ in $H$, and $B_1(\mathbb{D})$ is closed in $H$, we get $f\in f_N + (\epsilon / 2)B_1(\mathbb{D})$. Hence $\|f - f_N\|_{\mathcal{K}_1(\mathbb{D})} \leq \epsilon / 2$ and the triangle inequality finally implies that $\|f - f_m\|_{\mathcal{K}_1(\mathbb{D})} \leq \epsilon$ for all $m \geq N$. Thus $f_n\rightarrow f$ in the $\mathcal{K}_1(\mathbb{D})$-norm and $\mathcal{K}_1(\mathbb{D})$ is complete.
\end{proof}

% Note that the properties proved in Lemma \ref{fundamental-norm-lemma} depend upon the fact that the unit ball $B_1(\mathbb{D})$ is closed in $H$. If the norm is instead defined differently, for instance using integral representations, we argue that it is important to ensure that the unit ball is closed in $H$ to ensure that the resulting space is well-behaved. For instance, let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $\sigma:\mathbb{R}\rightarrow \mathbb{R}$ an activation function. Following \cite{ma2019barron}, which inspired the present work, define
% \begin{equation}\label{E-barron-norm}
%  \|f\|_{\mathcal{B}^\sigma(\Omega)} = \inf\left\{\int_{S_\sigma} d|\mu|(\omega,b):~f(x)=\int_{S_\sigma} \sigma(\omega\cdot x + b)d\mu(\omega,b)~\text{for}~x\in \Omega\right\},
% \end{equation}
% where $S_\sigma\subset \mathbb{R}^d\times\mathbb{R}$ is a subset of parameters depending upon $\sigma$.
%
% We compare this space to the space $\mathcal{K}_1(\mathbb{D})$ for the dictionary
% \begin{equation}
%  \mathbb{D}^\sigma = \{\sigma(\omega\cdot x + b):~(\omega,b)\in S_\sigma\}\subset L^2(\Omega).
% \end{equation}
% It is clear that $\{f:\|f\|_{\mathcal{B}^\sigma(\Omega)} \leq 1\}\subset B_1(\mathbb{D}^\sigma)$, which implies that $\|\cdot\|_{\mathcal{K}_1(\mathbb{D}^\sigma)}\leq \|\cdot\|_{\mathcal{B}^\sigma(\Omega)}$ and $\mathcal{B}^\sigma(\Omega)\subset \mathcal{K}_1(\mathbb{D}^\sigma)$. However, if there are elements in the closure $B_1(\mathbb{D})$ which are not given by integral representations of the form in \eqref{E-barron-norm}, the reverse inclusion may not hold, as the following result shows.
%
% \begin{proposition}
%  Suppose $\Omega = [-1,1]^d$ and $\sigma$ is a smooth sigmoidal function. Let $S_\sigma = \mathbb{R}^d\times \mathbb{R}$. Then $\mathcal{B}^\sigma(\Omega)\subsetneq \mathcal{K}_1(\mathbb{D}^\sigma)$.
%
% \end{proposition}
% \begin{proof}
%  Consider the Heaviside function
%  \begin{equation}
%   H(\tau\cdot x) = \begin{cases}
%       0 & \tau\cdot x < 0 \\
%       1 & \tau\cdot x\geq 0
%    \end{cases}
%  \end{equation}
%  for $\tau\in S^{d-1}:=\{x\in \mathbb{R}^d:~|x| = 1\}$. Since $\sigma$ is sigmoidal, we have
%  \begin{equation}
%   \lim_{r\rightarrow \infty}\|H(\tau\cdot x) - \sigma(r\tau\cdot x)\|_{L^2(\Omega)} = 0.
%  \end{equation}
%  Thus $H(\tau\cdot x)\in B_1(\mathbb{D})$.
% However, since $\sigma$ is smooth, the discontinuous function $H(\tau\cdot x)$ cannot have an integral representation of the form \eqref{E-barron-norm} and so $H(\tau\cdot x)\notin \mathcal{B}^\sigma(\Omega)$.
%
% \end{proof}
% It is also not clear whether the space $\mathcal{B}^\sigma$ satisfies the conclusions of Lemma \ref{fundamental-norm-lemma} in general. Despite these issues for general activation functions $\sigma$, for specific activation functions such as the rectified linear unit, which are of primary interest in \cite{ma2019barron}, the space $\mathcal{B}^\sigma$ may in fact be better behaved.

% We begin by noting that the definition \eqref{unit-ball-definition} of $B_1(\mathbb{D})$ contains a closure in $H$ instead of being written in terms of an infimum over integral representations \cite{ma2019barron} or representations by finite sums \cite{barron2008approximation}. This follows the approach taken previously in the literature \cite{devore1998nonlinear,kurkova2001bounds,kurkova2002comparison} and can result in a larger space for some dictionaries. As an example, consider the following situation.
%
% Suppose that $\Omega\subset \mathbb{R}^d$ is bounded and $\sigma$ is a smooth sigmoidal function. Consider the dictionary
% \begin{equation}
%  \mathbb{D}_{\sigma} = \{\sigma(\omega\cdot x + b):~\omega\in \mathbb{R}^d,~b\in \mathbb{R}\}.
% \end{equation}
%
% By taking $\omega\rightarrow \infty$, we easily see that $B_1(\mathbb{P}_0)\subset B_1(\mathbb{D}_{\sigma})$ (this is the essence of the argument by Barron \cite{barron1993universal}). However, the discontinuous Heaviside function cannot be written as an integral representation of the smooth dictionary elements in $\mathbb{D}_\sigma$. Consequently the definition in terms of integral representations given in \cite{ma2019barron} would fail to capture such functions.

Let us remark that for some dictionaries $\mathbb{D}$ the $\mathcal{K}_1(\mathbb{D})$ space can be substantially smaller than $H$. In fact, if the dictionary $\mathbb{D}$ is contained in a closed subspace of $H$, then we have the following elementary result.
\begin{lemma}\label{subspace-lemma}
 Let $K\subset H$ be a closed subspace of $H$. Then $\mathbb{D}\subset K$ iff $\mathcal{K}_1(\mathbb{D})\subset K$.
\end{lemma}
\begin{proof}
 We have $\mathbb{D}\subset\mathcal{K}_1(\mathbb{D})$, so that the reverse implication is trivial. For the forward implication, since $\mathbb{D}\subset K$ and $K$ is closed, it follows that $B_1(\mathbb{D})\subset K$. Then, from the definition \eqref{norm-definition}, it follows that
 \begin{equation}
  \mathcal{K}_1(\mathbb{D}) = \bigcup_{r > 0} rB_1(\mathbb{D})\subset K.
 \end{equation}

\end{proof}
 A simple example occurs when considering a shallow neural network with an activation function $\sigma$ which is a polynomial of degree $k$. In this case the space $\mathcal{K}_1(\mathbb{D})$ is contained in the finite-dimensional space of polynomials of degree at most $k$, and the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm is infinite on non-polynomial functions. This is related to the well-known result that neural network functions are dense iff the activation function is not a polynomial \cite{leshno1993multilayer}.
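Concretely, consider the case $k=2$ (a small worked instance of the preceding remark): for $\sigma(t) = t^2$ we have
\begin{equation}
 \sigma(\omega\cdot x + b) = (\omega\cdot x)^2 + 2b\,(\omega\cdot x) + b^2,
\end{equation}
so every dictionary element lies in the space of polynomials of degree at most $2$ in $x$, which is a closed, finite-dimensional subspace of $L^2(\Omega)$ for bounded $\Omega$. Lemma \ref{subspace-lemma} then shows that $\mathcal{K}_1(\mathbb{D})$ is contained in this subspace, so the corresponding networks can never be dense in $L^2(\Omega)$.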
\n\\begin{proposition}\n Let $\\Omega\\subset \\mathbb{R}^d$ be a bounded domain and $\\mathbb{D} = \\{\\sigma(\\omega\\cdot x + b):(\\omega,b)\\in \\mathbb{R}^d\\times \\mathbb{R}\\}\\subset L^2(\\Omega)$, where the activation function $\\sigma\\in L^\\infty_{loc}(\\mathbb{R})$. Suppose further that the set of discontinuities of $\\sigma$ has Lebesgue measure $0$. Then $\\mathcal{K}_1(\\mathbb{D})$ is finite dimensional iff $\\sigma$ is a polynomial (a.e.).\n\\end{proposition}\n\\begin{proof}\n If $\\sigma$ is a polynomial, $\\mathbb{D}$ is contained in the space of polynomials of degree at most $\\text{deg}(\\sigma)$, which is finite dimensional. This implies the result by Lemma \\ref{subspace-lemma}. For the reverse implication, we use Theorem 1 of \\cite{leshno1993multilayer}, which states that if $\\sigma$ is not a polynomial, then \n $$\n C(\\Omega) \\subset \\overline{\\left\\{\\sum_{i=1}^na_i\\sigma(\\omega_i\\cdot x + b_i)\\right\\}},\n $$\n where the closure is taken in $L^\\infty(\\Omega)$ (note that this cumbersome statement is necessary since $\\sigma$ may not be continuous). This immediately implies that $\\mathcal{K}_1(\\mathbb{D})$ is dense in $L^2(\\Omega)$ (since $C(\\Omega)$ is dense in $L^2(\\Omega)$), and thus obviously not finite dimensional.\n\\end{proof}\n\nNext, we note that Maurey's approximation rate has a converse. In particular, if a function can be approximated by elements from $\\Sigma_{n,M}(\\mathbb{D})$ with fixed $M$, then it must be in the space $\\mathcal{K}_1(\\mathbb{D})$. In particular, we have\n\\begin{theorem}\n$\\quad$\n \\begin{enumerate}\n \\item Let $f\\in H$ and suppose that $f_n\\rightarrow f$ with $f_n\\in \\Sigma_{n,M}(\\mathbb{D})$ for a fixed $M < \\infty$. Then $f\\in \\mathcal{K}_1(\\mathbb{D})$ and\n \\begin{equation}\n  \\|f\\|_{\\mathcal{K}_1(\\mathbb{D})} \\leq M.\n  \\end{equation}\n  \\item If $f\\in \\mathcal{K}_1(\\mathbb{D})$,\n  $$\n  \\inf_{f_n\\in \\Sigma_{n,M}(\\mathbb{D})} \\|f-f_n\\|_H\\le n^{-\\frac12}\\|f\\|_{H}\n  $$\n  \\end{enumerate}\n \\end{theorem} \n\\begin{proof}\n It is clear from the definitions that $\\Sigma_{n,M}(\\mathbb{D}) \\subset MB_1(\\mathbb{D})$ for every $n$. Thus $f_n\\in MB_1(\\mathbb{D})$ and since $MB_1(\\mathbb{D})$ is closed, we get $f\\in MB_1(\\mathbb{D})$, so that $\\|f\\|_{\\mathcal{K}_1(\\mathbb{D})} \\leq M$.\n\n\n We follow the argument of \\cite{barron1993universal}, see also \\cite{jones1992simple,pisier1981remarques} to prove the second statement. The result is trivial if $M = 0$, so suppose that $M > 0$. By normalizing both $f$ and the coefficients $a_i$  by $M$, we reduce to the case where $M = 1$. In this case $f\\in \\mathcal{C}_\\psi$ by Lemma \\ref{fundamental-norm-lemma}.\n\n  Let $\\epsilon > 0$. Then since $f\\in \\mathcal{C}_\\psi$, i.e. $f$ is in the closure of the convex hull of $\\{e^{{\\mathrm{i}\\mkern1mu}\\phi}\\psi_\\theta:~\\phi\\in \\mathbb{R},~\\theta\\in\\Theta\\}$, there exist $a_i$, $\\theta_i$ with $i=1,...,N$, such that\n \\begin{equation}\\label{eq_129}\n  \\left\\|f - \\sum_{i=1}^Na_i\\psi_{\\theta_i}\\right\\|_H \\leq \\epsilon,\n \\end{equation}\n and $\\sum_{i=1}^N |a_i| = 1$ (note here that $N$ may depend upong $\\epsilon$ and in particular may be very large). 
Next, draw $n$ samples $(i_1,...,i_n)$ independently from the discrete distribution on $\{1,...,N\}$ with the probability of index $i$ given by $|a_i|$, and form the random variable
\begin{equation}
 f_n = \frac{1}{n}\sum_{j=1}^n \frac{a_{i_j}}{|a_{i_j}|}h_{i_j} \in \Sigma_{n,1}(\mathbb{D}).
\end{equation}
We evidently have $\mathbb{E}(f_n) = \mathbb{E}(f_1) = \sum_{i=1}^Na_ih_i$ and $$\mathbb{V}(f_n) = \mathbb{E}(\|f_n-\mathbb{E}(f_n)\|_H^2) = \frac{1}{n}\mathbb{V}(f_1) = \frac{1}{n}(\mathbb{E}(\|f_1\|_H^2) - \|\mathbb{E}(f_1)\|_H^2)\leq \frac{\sup_{h\in \mathbb{D}} \|h\|^2_H}{n} = \frac{K_\mathbb{D}^2}{n}.$$
This means that there must exist a realization $\tilde{f}_n\in\Sigma_{n,1}(\mathbb{D})$ such that
\begin{equation}
 \Big\|\tilde{f}_n - \sum_{i=1}^Na_ih_i\Big\|_H^2 \leq \frac{K_\mathbb{D}^2}{n}.
\end{equation}
Combining this with \eqref{eq_129}, we see that
\begin{equation}
 \inf_{f_n\in\Sigma_{n,1}(\mathbb{D})} \|f-f_n\|_H \leq K_\mathbb{D}n^{-\frac{1}{2}} + \epsilon.
\end{equation}
Since $\epsilon > 0$ was arbitrary, and undoing the normalization multiplies both sides by $M = \|f\|_{\mathcal{K}_1(\mathbb{D})}$, we obtain the desired result.
\end{proof}


Finally, we give a lemma which relates the space $\mathcal{K}_1(\mathbb{D})$ to the set of functions which have integral representations by elements of $\mathbb{D}$.
\begin{lemma}\label{prokhorov-lemma}
 Suppose that $\mathbb{D}\subset H$ is compact. Then $f\in \mathcal{K}_1(\mathbb{D})$ iff there exists a finite Borel measure $\mu$ on $\mathbb{D}$ such that
 \begin{equation}
  f = \int_\mathbb{D} hd\mu(h).
 \end{equation}
 Moreover,
 \begin{equation}
  \|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\left\{\int_\mathbb{D} d|\mu|(h):~f = \int_\mathbb{D} hd\mu(h)\right\}.
 \end{equation}

\end{lemma}
\begin{proof}
 It suffices to show that
 $$B_1(\mathbb{D}) = M(\mathbb{D}):=\left\{\int_\mathbb{D} hd\mu(h):~\int_\mathbb{D} d|\mu|(h) \leq 1\right\}.$$
 By approximating the integral using simple functions, we immediately see that $M(\mathbb{D})\subset B_1(\mathbb{D})$. To prove the reverse inclusion, we must show that $M(\mathbb{D})$ is closed. This follows from Prokhorov's theorem \cite{prokhorov1956convergence} (see also \cite{dudley2018real}, Theorem 11.5.4, for instance). Indeed, let $f_n\rightarrow f$ with $f_n\in M(\mathbb{D})$ and let $\mu_n$ be a corresponding sequence of Borel measures on $\mathbb{D}$. By the compactness of $\mathbb{D}$ and Prokhorov's theorem, by taking a subsequence if necessary we may assume that $\mu_n\rightarrow \mu$ weakly. This implies $f = \int_\mathbb{D} hd\mu(h)$ and $\int_\mathbb{D} d|\mu|(h) \leq 1$, so that $f\in M(\mathbb{D})$.
\end{proof}


\subsection{Properties of $\mathcal{K}_1(\mathbb{P}^d_k)$ and relationship with the Barron space}
Next, we explain the precise definition \eqref{relu-k-space-definition}, i.e. how we define an appropriate dictionary corresponding to the ReLU$^k$ activation function. The problem with letting $\sigma_k(x) = [\max(0,x)]^k$ and setting
\begin{equation}
 \mathbb{D} = \{\sigma_k(\omega\cdot x + b):~\omega\in \mathbb{R}^d,~b\in \mathbb{R}\}
\end{equation}
is that unless $k=0$ the dictionary elements are not bounded in $L^2(B_1^d)$, since $\sigma_k$ is not bounded and we can shift $b$ arbitrarily.
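To make the unboundedness concrete: for $\omega\in S^{d-1}$ and $b\geq 2$ we have $\omega\cdot x + b \geq b - 1 > 0$ on $B_1^d$, so that $\sigma_k(\omega\cdot x + b) = (\omega\cdot x + b)^k \geq (b-1)^k$ pointwise, and hence
\begin{equation}
 \|\sigma_k(\omega\cdot x + b)\|_{L^2(B_1^d)} \geq |B_1^d|^{\frac{1}{2}}\,(b-1)^k \rightarrow \infty
\end{equation}
as $b\rightarrow\infty$ whenever $k\geq 1$ (here $|B_1^d|$ denotes the Lebesgue measure of the unit ball).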
This manifests itself in the fact that $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ is then only a semi-norm, which contains the set of polynomials of degree at most $k-1$ in its kernel (this occurs since the arbitrarily large elements in $\mathbb{D}$ are polynomials on $B_1^d$).

We rectify this issue by considering the dictionary
\begin{equation}
 \mathbb{P}^d_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-2,2]\}.
\end{equation}
We remark that the constant $2$ above can be replaced by any $c > 1$, which results in an equivalent norm. This follows since the elements of $\mathbb{P}^d_k$ for which $|b| > 1$ are polynomials, and we only need finitely many of them to span the space of polynomials.

% \begin{proposition}\label{constant-independence-proposition}
%  Let $c > 1$ and consider the dictionary
%  \begin{equation}
%   \mathbb{P}^c_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-cR_\Omega,cR_\Omega]\}.
%  \end{equation}
%  Then we have
%  \begin{equation}
%   \|f\|_{\mathcal{K}_1(\mathbb{P}_k^c)} \eqsim \|f\|_{\mathcal{K}_1(\mathbb{P}_k)},
%  \end{equation}
%  where the implied constant depends only upon $c$ and $k$.
% \end{proposition}
% For the proof of this proposition, we will need the following well-known lemma, which we include for completeness.
% \begin{lemma}\label{polynomial-basis-lemma}
%  Let $b_1,...,b_{k+1}\subset \mathbb{R}$ be distinct points. Then $(x+b_1)^k,...,(x+b_{k+1})^k$ is a basis for the space of polynomials of degree at most $k$.
% \end{lemma}
% \begin{proof}
%  We expand out the polynomials to get
%  \begin{equation}
%   (x+b_i)^k = \sum_{j=0}^k \binom{k}{j}b_i^jx^{k-j}.
%  \end{equation}
%  Thus the change of basis matrix from the monomials $1,x,...,x^k$ to $(x+b_1)^k,...,(x+b_{k+1})^k$ has entries $M_{ij} = \binom{k}{j}b_i^j$. This is a Vandermonde matrix whose $j$-th column has been scaled by $\binom{k}{j}\neq 0$. Since the $b_i$ are distinct, its determinant is non-zero and thus $(x+b_1)^k,...,(x+b_{k+1})^k$ is a basis as claimed.
%
% \end{proof}
%
% \begin{proof}[Proof of Proposition \ref{constant-independence-proposition}]
%  We will show that $\mathcal{K}_1(\mathbb{P}_k^{c_1})$ and $\mathcal{K}_1(\mathbb{P}_k^{c_2})$ are equivalent for any $c_1 > c_2 > 1$. We clearly have $\mathbb{P}_k^{c_1}\supset \mathbb{P}_k^{c_2}$, so that $\|f\|_{\mathcal{K}_1(\mathbb{P}_k^{c_1})} \leq \|f\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)}$. We must prove the reverse inequality.
%
%   This will follow if for some constant $K$ we can show that $\|g\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)} \leq K$, i.e. $g\in KB_1(\mathbb{P}^{c_2}_k)$, for every $g\in \mathbb{P}_k^{c_1}$. To this end, let $g\in \mathbb{P}_k^{c_1}$. Then $g(x) = \sigma_k(\omega\cdot x + b)$ with $\omega\in S^{d-1}$ and $b\in [-c_1R_\Omega, c_1R_\Omega]$. If $b\in [-c_2R_\Omega, c_2R_\Omega]$, then clearly $g(x)\in \mathbb{P}_k^{c_2}$, so that $\|g\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)} \leq 1$. So suppose that $b\notin [-c_2R_\Omega, c_2R_\Omega]$. Then since $c_2 > 1$ and $|\omega| = 1$, the quantity $\omega\cdot x + b$ does not change sign on $\Omega$ and so either
%   $$g(x) = \sigma_k(\omega\cdot x + b) = (\omega\cdot x + b)^k,$$
%  or $g = 0$. In the latter case the conclusion is clear, so consider the case where $g = (\omega\cdot x + b)^k$.
%
%  Choose $k+1$ distinct numbers $b_1,...,b_{k+1}\in [R_\Omega, c_2R_\Omega]$.
By Lemma \\ref{polynomial-basis-lemma} the $(x+b_i)^k$ span the space of polynomials of degree $k$ and thus we can write $(\\omega\\cdot x + b)^k$ as a linear combination of $(\\omega\\cdot x+b_i)^k = \\sigma_k(\\omega\\cdot x+b_i)\\in \\mathbb{P}_k^{c_2}$ for $i=1,...,k+1$. Moreover, the coefficients are continuous as a function of $b$ and thus can be uniformly bounded for $b\\in [-c_1R_\\Omega,c_1R_\\Omega]$. This proves that there is a constant $K$ independent of $b$ such that $g(x) = \\sigma_k(\\omega\\cdot x + b)\\in KB_1(\\mathbb{P}_k^{c_2})$, which completes the proof.\n% \\end{proof}\nNext, we consider the relationship between $\\mathcal{K}_1(\\mathbb{P}_k^d)$ and the Barron norm introduced in \\cite{ma2019barron}, which is given by\n\\begin{equation}\\label{barron-norm}\n \\|f\\|_{\\mathcal{B}} = \\inf\\left\\{\\mathbb{E}_\\rho(|a|(|\\omega|_1 + |b|)):~f(x) = \\int_{\\mathbb{R}\\times\\mathbb{R}^d\\times\\mathbb{R}} a\\sigma_1(\\omega\\cdot x + b)\\rho(da,d\\omega,db)\\right\\},\n\\end{equation}\nwhere we recall that $\\sigma_1$ is the rectified linear unit and the infemum is taken over all integral representations of $f$. Here $\\rho$ is a probability distribution on $\\mathbb{R}\\times\\mathbb{R}^d\\times\\mathbb{R}$, and the expectation is taken with respect to $\\rho$. It turns out that the $\\mathcal{K}_1(\\mathbb{P}^d_1)$ space is equivalent to the Barron space. \n\n\\begin{proposition}\n We have\n \\begin{equation}\n  \\|f\\|_{\\mathcal{K}_1(\\mathbb{P}^d_1)} \\eqsim_d \\|f\\|_{\\mathcal{B}}.\n \\end{equation}\n\n\\end{proposition}\n\\begin{proof}\n Consider the dictionary\n \\begin{equation}\n \\mathbb{B} = \\{(|\\omega|_1 + |b|)^{-1}\\sigma_1(\\omega\\cdot x + b):~\\omega\\in \\mathbb{R}^d,~b\\in \\mathbb{R}\\}\\subset L^2(B_1^d).\n\\end{equation}\nFrom lemma \\ref{prokhorov-lemma}, it is easy to see that $\\|f\\|_{\\mathcal{K}_1(\\mathbb{B})} = \\|f\\|_{\\mathcal{B}}$, so it suffices to show that $\\|f\\|_{\\mathcal{K}_1(\\mathbb{P}^d_1)} \\eqsim_d \\|f\\|_{\\mathcal{K}_1(\\mathbb{B})}$.\n\n \n It suffices to show that $\\mathbb{P}^d_1\\subset CB_1(\\mathbb{B})$ and $\\mathbb{B}\\subset CB_1(\\mathbb{P}^d_1)$ for some constant $C$. \n \n So let $g\\in \\mathbb{P}^d_1$. This means that $g(x) = \\sigma_1(\\omega \\cdot x + b)$ for some $\\omega\\in S^{d-1}$ and $b\\in [-2,2]$. Thus $$(|\\omega|_1 + |b|) \\leq (\\sqrt{d} + 2) \\leq C:=C(d),$$\n and since $(|\\omega|_1 + |b|)^{-1}\\sigma_1(\\omega \\cdot x + b)\\in \\mathbb{B}$, we see that $g\\in CB_1(\\mathbb{B})$. Thus $\\mathbb{P}^d_1\\subset CB_1(\\mathbb{B})$.\n \n Now, let $g\\in \\mathbb{B}$. Then $g(x) = (|\\omega|_1 + |b|)^{-1}\\sigma_1(\\omega \\cdot x + b)$ for some $\\omega\\in \\mathbb{R}^d$ and $b\\in \\mathbb{R}$. \n \n Consider first the case when $\\omega \\neq 0$. Note that by the positive homogeneity of $\\sigma_1$ we can assume that $|\\omega| = 1$, i.e. that $\\omega\\in S^{d-1}$. Further, we have that $(|\\omega|_1 + |b|)^{-1} \\leq (1+|b|)^{-1}$. Thus, we must show that\n \\begin{equation}\n  \\tilde g(x) := (1+|b|)^{-1}\\sigma_1(\\omega \\cdot x + b)\\in CB_1(\\mathbb{P}^d_1) \n \\end{equation}\n for $\\omega\\in S^{d-1}$ and $b\\in \\mathbb{R}$. For $b\\in [-2,2]$ this clearly holds with $C=1$ since $(1 + |b|)^{-1} \\leq 1$ and for such values of $b$, we have $\\sigma_1(\\omega\\cdot x + b)\\in \\mathbb{P}^d_1$. If $b < -2$, then $\\tilde g(x) = 0$, so we trivially have $\\tilde g\\in B_1(\\mathbb{P}^d_1)$. 
Finally, if $b > 2$, then $\omega\cdot x + b$ is positive on $B_1^d$, so that
 $$
 \tilde g(x) = (1+|b|)^{-1}(\omega\cdot x + b) = (1+|b|)^{-1}\omega\cdot x + b(1+|b|)^{-1}.
 $$
 Now $\omega\cdot x = \sigma_1(\omega\cdot x) - \sigma_1(-\omega\cdot x)\in 2B_1(\mathbb{P}^d_1)$ and $1 = [\sigma_1(\omega\cdot x + 2) - \sigma_1(\omega\cdot x + 1)]\in 2 B_1(\mathbb{P}^d_1)$.
 Combined with the above and the fact that $(1+|b|)^{-1},|b|(1+|b|)^{-1}\leq 1$, we get $\tilde g\in CB_1(\mathbb{P}^d_1)$ (with $C = 4$, say).

 Finally, if $\omega = 0$, then $g$ is constant (either $0$ or $1$), and by the above paragraph we clearly also have $g\in CB_1(\mathbb{P}^d_1)$. This completes the proof.
\end{proof}

Note that it follows from this result that the Barron space $\mathcal{B}$ is a Banach space, which was first proven in \cite{wojtowytsch2020representation}.

\subsection{Characterization of $\mathcal{K}_1(\mathbb{P}^d_k)$}
In one dimension, the space $\mathcal{K}_1(\mathbb{P}^1_k)$ has a relatively simple characterization in terms of the space of functions of bounded variation (see \cite{wojtowytsch2020representation}, Section 4, for a proof in the case $k=1$ in the context of the Barron space).

\begin{theorem}\label{barron-space-1-d-characterization-theorem}
 We have
 \begin{equation}
 \mathcal{K}_1(\mathbb{P}^1_k) = \{f\in L^2([-1,1]):~\text{$f$ is $k$-times differentiable a.e. and }f^{(k)}\in BV([-1,1])\}.
 \end{equation}
 In particular, it holds that
 \begin{equation}
  \|f\|_{\mathcal{K}_1(\mathbb{P}^1_k)} \eqsim_k \sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])}.
 \end{equation}

\end{theorem}
\begin{proof}
 We first prove that
 \begin{equation}\label{upper-bound-barron-1-d}
  \sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])} \lesssim_k \|f\|_{\mathcal{K}_1(\mathbb{P}^1_k)}.
 \end{equation}
Note that the left hand side is uniformly bounded for all $f = \sigma_k(\pm x + b)\in \mathbb{P}^1_k$, since $\sigma_k^{(k)}$ is a multiple of the Heaviside function and $b$ is bounded by $2$. By taking convex combinations, this means that for some constant $C$ we have
\begin{equation}
 \left\{\sum_{j=1}^n a_jh_j:~h_j\in \mathbb{P}^1_k,~\sum_{i=1}^n|a_i|\leq 1\right\} \subset CB^1_{BV,k},
\end{equation}
where
\begin{equation}
 B^1_{BV,k}:=\left\{f\in L^2([-1,1]):~\sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])} \leq 1\right\}.
\end{equation}
It is well-known that $B^1_{BV,k}$ is compact in $L^1([-1,1])$ (see, for instance, Theorem 4 of Chapter 5 in \cite{evans2015measure}). This implies that $B^1_{BV,k}$ is closed in $L^2([-1,1])$: if $f_n\rightarrow_{L^2} f$ with $f_n\in B^1_{BV,k}$, then there must exist a subsequence $f_{k_n}\rightarrow_{L^1} f'\in B^1_{BV,k}$, and clearly $f=f'$. From this it follows that $B_1(\mathbb{P}^1_k) \subset CB^1_{BV,k}$, and we obtain \eqref{upper-bound-barron-1-d}.

Next, we prove the reverse inequality. So let $f\in B^1_{BV,k}$. By Theorem 2 in Chapter 5 of \cite{evans2015measure}, there exist $f_n\in C^\infty\cap B^1_{BV,k}$ such that $f_n\rightarrow f$ in $L^1([-1,1])$. Further, since $f_n,f\in B^1_{BV,k}$, we have that $\|f - f_n\|_{L^\infty([-1,1])}$ is uniformly bounded.
Thus $$\\|f - f_n\\|^2_{L^2([-1,1])} \\leq \\|f - f_n\\|_{L^1([-1,1])}\\|f - f_n\\|_{L^\\infty([-1,1])} \\rightarrow 0$$\nand so $f_n\\rightarrow f$ in $L^2([-1,1])$ as well.\n\nUsing the Peano kernel formula, we see that\n\\begin{equation}\n f_n(x) = \\sum_{j=0}^{k} \\frac{f_n^{(j)}(-1)}{j!}(x+1)^j + \\int_{-1}^1 \\frac{f_n^{(k+1)}(b)}{k!}\\sigma_k(x-b)db.\n\\end{equation}\nFrom the definition of the $BV$-norm and the fact that $f_n\\in B^1_{BV,k}$, we see that\n\\begin{equation}\n \\sum_{j=0}^{k} \\frac{|f_n^{(j)}(-1)|}{j!}+ \\int_{-1}^1 \\frac{|f_n^{(k+1)}(b)|}{k!}db \\leq C_1\n\\end{equation}\nfor a fixed constant $C_1$. Choose $k+1$ distinct $b_1,...,b_{k+1}\\in [1, 2]$. Then by construction $\\sigma_k(x+b_i) = (x+b_i)^k$ is a polynomial on $[-1,1]$. Moreover, it is well-known that the polynomials $(x+b_i)^k$ span the space of polynomials of degree at most $k$. Combined with the coefficient bound\n\\begin{equation}\n \\sum_{j=0}^{k} \\frac{|f_n^{(j)}(-1)|}{j!} \\leq C_1,\n\\end{equation}\nwe see that\n\\begin{equation}\n \\sum_{j=0}^{k} \\frac{f_n^{(j)}(-1)}{j!}(x-a)^j \\in C_2B_1(\\mathbb{P}^1_k)\n\\end{equation}\nfor a fixed constant $C_2$ (independent of $f_n$). Furthermore, since also\n\\begin{equation}\n \\int_{-1}^1 \\frac{|f_n^{(k+1)}(b)|}{k!}db \\leq C_1,\n\\end{equation}\nwe obtain\n\\begin{equation}\n \\int_{-1}^1 \\frac{f_n^{(k+1)}(b)}{k!}\\sigma_k(x-b)db\\in C_1B_1(\\mathbb{P}^1_k).\n\\end{equation}\nThis implies that $f_n\\in CB_1(\\mathbb{P}^1_k)$ for $C = C_1 + C_2$ and since $f_n\\rightarrow f$ and $B_1(\\mathbb{P}^1_k)$ is closed in $L^2([-1,1])$, we get $f\\in CB_1(\\mathbb{P}^1_k)$, which completes the proof.\n\n\\end{proof}\n\nTheorem \\ref{barron-space-1-d-characterization-theorem} only serves to characterize the space $\\mathcal{K}_1(\\mathbb{P}_k^1)$, but this result can be used to bound the $\\|\\cdot\\|_{\\mathcal{K}_1(\\mathbb{P}^d_k)}$-norm of ridge functions which only vary in one direction in higher dimensions as well.\n\\begin{corollary}\\label{ridge-corollary}\n If $f\\in L^2([-1,1])$ is $k$-times differentiable a.e. and satisfies\n \\begin{equation}\\label{bound-530}\n  \\sum_{j=0}^{k-1} |f^{(j)}(-1)| + \\|f^{(k)}\\|_{BV([-1,1])} \\leq 1,\n \\end{equation}\n then for any $\\omega\\in S^{d-1}$, $\\|f(\\omega\\cdot x)\\|_{\\mathcal{K}_1(\\mathbb{P}^d_k)} \\lesssim_k 1$.\n\\end{corollary}\n\n\\begin{proof}\n This follows immediately from the one dimensional result, Theorem \\ref{barron-space-1-d-characterization-theorem}, by considering the dictionary $\\mathbb{P}_k^\\omega = \\{\\sigma_k(\\omega\\cdot x + b):~b\\in [-2,2]\\}$.\n\\end{proof}\n\nAn important application of Corollary \\ref{ridge-corollary} is to note that the functions $f_\\omega(x) = e^{2\\pi i \\omega\\cdot x}$ satisfy\n\\begin{equation}\\label{eq-550}\n \\sum_{j=0}^{k-1} |f_\\omega^{(j)}(-1)| + \\|f_\\omega^{(k)}\\|_{BV([-1,1])} \\lesssim_k (1 + |\\omega|)^{k+1},\n\\end{equation}\nwhich leads immediately to the following result.\n\\begin{theorem}\n For $k \\geq 0$ and $d \\geq 1$, we have\n \\begin{equation}\n  \\mathcal{K}_1(\\mathbb{F}^d_{k+1}) \\subset \\mathcal{K}_1(\\mathbb{P}^d_k).\n \\end{equation}\n\n\\end{theorem}\n\nUsing different language, an essentially equivalent result first appears for $k=0$ in \\cite{barron1993universal}, for $k=1,2$ in \\cite{klusowski2018approximation} and for $k \\geq 3$ in \\cite{CiCP-28-1707}. It is the basis of the Fourier integral condition introduced by Barron \\cite{barron1993universal}. 
In \\cite{wojtowytsch2020representation} it is remarked that $\\mathcal{K}_1(\\mathbb{P}_k)$ is actually significantly larger than $\\mathcal{K}_1(\\mathbb{F}_{k+1})$ when $k=1$. In later sections we quantify this observation by calculating the entropy of the unit balls of both spaces.\n\nIn general, a function $f\\in \\mathcal{K}_1(\\mathbb{P}^d_k)$ can be written as a superposition of one-dimensional ridge functions which satisfy \\eqref{bound-530}. This leads to the following bound on $\\mathcal{K}_1(\\mathbb{P}^d_k)$ in higher dimensions.\n\\begin{theorem}\\label{bs-theorem}\n We have\n \\begin{equation}\\label{bs-label}\n \\|f\\|_{\\mathcal{K}_1(\\mathbb{P}^d_k)} \\lesssim_{k,d} \\inf_{f_e|_{B_1^d} = f}\\left\\{\\int_{S^{d-1}}\\sum_{j=0}^{k-1} |g_\\omega^{(j)}(-1)| + \\|g_\\omega^{(k)}\\|_{BV([-1,1])} d\\omega,~g_\\omega(t) = \\int_{-\\infty}^\\infty e^{2\\pi i ts}\\hat{f}_e(\\omega s)s^{d-1}dx\\right\\}.\n\\end{equation}\nwhere the infemum is over all extensions $f_e$ which satisfy $f_e, \\hat{f}_e\\in L^1(\\mathbb{R}^d)$.\n\\end{theorem}\n\\begin{proof}\n By the Fourier inversion formula, we have\n \\begin{equation}\n  f_e(x) = \\int_{\\mathbb{R}^d} e^{2\\pi i \\xi\\cdot x}\\hat{f}_e(\\xi)d\\xi = C_d\\int_{S^{d-1}}\\int_{0}^\\infty e^{2\\pi i s(\\omega\\cdot x)}\\hat{f}_e(\\omega s) s^{d-1}ds d\\omega.\n \\end{equation}\n This means that\n \\begin{equation}\n  f_e(x) = C_d\\int_{\\omega\\in S^{d-1}} g_\\omega(\\omega\\cdot x)d\\omega.\n \\end{equation}\n Combined with Corollary \\ref{ridge-corollary}, this completes the proof.\n\n\\end{proof}\nWe conjecture that the bound in Theorem \\ref{bs-theorem} in fact characterizes the space $\\mathcal{K}_1(\\mathbb{P}^d_k)$.\n\n\\subsection{Characterization of $\\mathcal{K}_1(\\mathbb{F}^d_s)$}\nHere we characterize the space $\\mathcal{K}_1(\\mathbb{F}^d_s)$. In particular, have the following theorem.\n\\begin{theorem}\\label{spectral-barron-theorem}\nWe have\n\\begin{equation}\\label{fourier-integral-condition}\n \\|f\\|_{\\mathcal{K}_1(\\mathbb{F}^d_s)} = \\inf_{f_e|_{B_1^d} = f} \\int_{\\mathbb{R}^d} (1+|\\xi|)^s|\\hat{f}_e(\\xi)|d\\xi,\n\\end{equation}\nwhere the infemum is taken over all extensions $f_e\\in L^1(\\mathbb{R}^d)$.\n\\end{theorem}\n\nThe proof is a bit more involved due to the failure of Lemma \\ref{prokhorov-lemma} when $s=0$, and requires the technical fact that the unit ball\n\\begin{equation}\n B_1^s(\\Omega) = \\left\\{f:\\Omega\\rightarrow \\mathbb{R}:~\\inf_{f_e|_\\Omega = f} \\int_{\\mathbb{R}^d} (1+|\\xi|)^s|\\hat{f}_e(\\xi)|d\\xi\\leq 1\\right\\}\n\\end{equation}\nis closed in $L^2(\\Omega)$ (it is shown in \\cite{siegel2020approximation} that $B_1^s\\subset L^2(\\Omega)$). Throughout the proof we will use the notation\n\\begin{equation}\\label{spectral-barron-integral-condition}\n \\|f\\|_{\\mathcal{B}^s(\\Omega)} = \\inf_{f_e|_\\Omega = f} \\int_{\\mathbb{R}^d} (1+|\\xi|)^s|\\hat{f}_e(\\xi)|d\\xi\n\\end{equation}\nfor this infemum.\n\nWe need the following simple lemmas.\n\\begin{lemma}\\label{fourier-cutoff-lemma}\n  Suppose that $\\Omega\\subset \\mathbb{R}^d$ is bounded. Let $\\epsilon > 0$ and $s\\geq 0$. Then there exists a function $\\phi\\in L^1(\\mathbb{R}^d)$, such that $\\phi(x) = 1$ for $x\\in \\Omega$ and \n  \\begin{equation}\n  \\int_{\\mathbb{R}^d}(1+|\\xi|)^s|\\hat{\\phi}(\\xi)|d\\xi \\leq 1 + \\epsilon.\n  \\end{equation}\n \\end{lemma}\n \\begin{proof}\n  Since $\\Omega$ is bounded, it suffices to consider the case where $\\Omega = [-L,L]^d$ for a sufficiently large $L$. 
We consider separable $\phi = \phi_1(x_1)\cdots\phi_d(x_d)$, and note that
  \begin{equation}
   \int_{\mathbb{R}^d}(1+|\xi|)^s|\hat{\phi}(\xi)|d\xi \leq \int_{\mathbb{R}^d}\prod_{i=1}^d(1+|\xi_i|)^s|\hat{\phi}_i(\xi_i)|d\xi \leq \prod_{i=1}^d \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\phi}_i(\xi)|d\xi,
  \end{equation}
  and this reduces us to the one-dimensional case where $\Omega = [-L,L]$.

  For the one-dimensional case, consider a Gaussian $g_R(x) = e^{-\frac{x^2}{2R}}$. A simple calculation shows that, with the convention $\hat{g}(\xi) = \int_{\mathbb{R}} g(x)e^{-2\pi {\mathrm{i}\mkern1mu} x\xi}dx$ used throughout, the Fourier transform of the Gaussian is $\hat{g}_R(\xi) = \sqrt{2\pi R}\,e^{-2\pi^2 R\xi^2}$. This implies that
  \begin{equation}
   \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{g}_R(\xi)|d\xi = 1,
  \end{equation}
  and thus by choosing $R$ large enough, we can make this integral arbitrarily close to $1$.

  Now consider $\tau_R\in C^{m}(\mathbb{R})$ for $m > s+2$ such that $\tau_R(x) = 1 - g_R(x)$ for $x\in [-L,L]$. Then we have
  $$\|\tau_R\|_{L^\infty([-L,L])}, \|\tau_R^\prime\|_{L^\infty([-L,L])}, \cdots, \|\tau_R^{(m)}\|_{L^\infty([-L,L])} \rightarrow 0$$
  as $R\rightarrow \infty$.
 Consequently, it is possible to extend $\tau_R$ to $\mathbb{R}$ so that
 \begin{equation}
  \|\tau_R\|_{L^1(\mathbb{R})}, \|\tau_R^{(m)}\|_{L^1(\mathbb{R})} \rightarrow 0
 \end{equation}
 as $R\rightarrow \infty$. For instance, for $x > L$ we can take $\tau_R$ to be a polynomial which matches the first $m$ derivatives at $L$ times a fixed smooth cutoff function which is identically $1$ in some neighborhood of $L$ (and similarly at $-L$).

 This implies that $\|\hat{\tau}_R(\xi)\|_{L^\infty(\mathbb{R})},\|\xi^{m}\hat{\tau}_R(\xi)\|_{L^\infty(\mathbb{R})}\rightarrow 0$ as $R\rightarrow \infty$. Together, these imply that
 \begin{equation}
  \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\tau}_R(\xi)|d\xi = 0,
 \end{equation}
 since $m-2 > s$.

 Finally, set $\phi_R = g_R(x) + \tau_R(x)$. Then clearly $\phi_R = 1$ on $[-L,L]$ and also
 \begin{equation}
  \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\phi}_R(\xi)|d\xi \leq \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\tau}_R(\xi)|d\xi + \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{g}_R(\xi)|d\xi = 1.
 \end{equation}
 Choosing $R$ large enough, we obtain the desired result.
\end{proof}

Using this lemma, we can show that the infimum in \eqref{spectral-barron-integral-condition} can alternatively be taken over integral representations by Borel measures.
\begin{lemma}
 Let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $s \geq 0$.
Then\n \\begin{equation}\\label{barron-norm-form-2}\n  \\|f\\|_{\\mathcal{B}^s(\\Omega)} = \\inf\\left\\{\\int_{\\mathbb{R}^d} (1+|\\xi|)^sd|\\mu|(\\xi):~f(x)=\\int_{\\mathbb{R}^d} e^{2\\pi {\\mathrm{i}\\mkern1mu}   \\xi\\cdot x}d\\mu(\\xi)~\\text{for}~x\\in\\Omega\\right\\}.\n \\end{equation}\n\n\\end{lemma}\n\\begin{proof}\n By taking $d\\mu = \\hat{f}_e(\\xi)d\\xi$ for an arbitrary admissible extension $f_e$, it is clear that\n \\begin{equation}\n  \\|f\\|_{\\mathcal{B}^s(\\Omega)} \\geq \\inf\\left\\{\\int_{\\mathbb{R}^d} (1+|\\xi|)^sd|\\mu|(\\xi):~f(x)=\\int_{\\mathbb{R}^d} e^{2\\pi {\\mathrm{i}\\mkern1mu}   \\xi\\cdot x}d\\mu(\\xi)~\\text{for}~x\\in\\Omega\\right\\}.\n \\end{equation}\n The content of the lemma is the reverse inequality.\n\n Let $\\mu$ be a regular Borel measure such that the integral in \\eqref{barron-norm-form-2} is finite (note this must mean that $\\mu$ has finite mass) and \n \\begin{equation}\n  f(x)=\\int_{\\mathbb{R}^d} e^{2\\pi {\\mathrm{i}\\mkern1mu} \\xi\\cdot x}d\\mu(\\xi)\n \\end{equation}\n for $x\\in \\Omega$. Choose $\\epsilon > 0$. By Lemma \\ref{fourier-cutoff-lemma} we can find a $\\phi\\in L^1(\\mathbb{R}^d)$ such that $\\phi|_\\Omega = 1$ and $$\\int_{\\mathbb{R}^d}(1+|\\xi|)^s|\\hat{\\phi}(\\xi)|d\\xi \\leq 1 + \\epsilon.$$ \n We now set\n \\begin{equation}\n  f_e(x) = \\phi(x)\\left[\\int_{\\mathbb{R}^d} e^{2\\pi {\\mathrm{i}\\mkern1mu} \\xi\\cdot x}d\\mu(\\xi)\\right],\n \\end{equation}\n which lies in $L^1(\\mathbb{R}^d)$ since $\\phi\\in L^1(\\mathbb{R}^d)$ and the second factor is bounded ($\\mu$ has finite mass).\n\n Then we have that for $x\\in \\Omega$,\n \\begin{equation}\n  f(x) = f(x)\\phi(x) = f_e(x),\n \\end{equation}\n and $\\hat{f}_e = \\hat{\\phi} * \\mu$,\n where the function $\\hat{\\phi} * \\mu$ is given by\n \\begin{equation}\n  (\\hat{\\phi} * \\mu)(\\xi) = \\int_{\\mathbb{R}^d} \\hat{\\phi}(\\xi - \\nu)d\\mu(\\nu).\n \\end{equation}\n We now calculate\n \\begin{equation}\n  \\int_{\\mathbb{R}^d}(1+|\\xi|)^s|(\\hat{\\phi} * \\mu)(\\xi)|d\\xi \\leq \\int_{\\mathbb{R}^d}\\int_{\\mathbb{R}^d}(1+|\\xi|)^s |\\hat{\\phi}(\\xi - \\nu)|d|\\mu|(\\nu)d\\xi.\n \\end{equation}\n Finally, we use the simple inequality $(1+|\\xi|)^s \\leq (1+|\\nu|)^s(1+|\\xi - \\nu|)^s$ combined with a change of variables, to get\n \\begin{equation}\n \\begin{split}\n \\int_{\\mathbb{R}^d}(1+|\\xi|)^s|(\\hat{\\phi} * \\mu)(\\xi)|d\\xi &\\leq \\left(\\int_{\\mathbb{R}^d}(1+|\\xi|)^s|\\hat{\\phi}(\\xi)|d\\xi\\right)\\left(\\int_{\\mathbb{R}^d}(1+|\\nu|)^s d|\\mu|(\\nu)\\right)\\\\\n &\\leq (1+\\epsilon)\\left(\\int_{\\mathbb{R}^d}(1+|\\nu|)^s d|\\mu|(\\nu)\\right).\n \\end{split}\n \\end{equation}\n This shows that\n \\begin{equation}\n  \\|f\\|_{\\mathcal{B}^s(\\Omega)} \\leq (1+\\epsilon)\\inf\\left\\{\\int_{\\mathbb{R}^d} (1+|\\xi|)^sd|\\mu|(\\xi):~f(x)=\\int_{\\mathbb{R}^d} e^{2\\pi {\\mathrm{i}\\mkern1mu}   \\xi\\cdot x}d\\mu(\\xi)~\\text{for}~x\\in\\Omega\\right\\}.\n \\end{equation}\n Since $\\epsilon > 0$ was arbitrary, we get the desired result.\n\\end{proof}\n\nFinally, we prove that the unit ball $B_1^s(\\Omega)$ is closed in $L^2(\\Omega)$, from which Theorem \\ref{spectral-barron-theorem} follows easily.\n\n\\begin{proposition}\n Let $\\Omega\\subset \\mathbb{R}^d$ be a bounded domain and $s \\geq 0$.
Then\n \\begin{equation}\n B_1^s(\\Omega) = \\{f\\in L^2(\\Omega):~\\|f\\|_{\\mathcal{B}^s(\\Omega)}\\leq 1\\} = \\left\\{f:\\Omega\\rightarrow \\mathbb{R}:~\\inf_{f_e|_\\Omega = f} \\int_{\\mathbb{R}^d} (1+|\\xi|)^s|\\hat{f}_e(\\xi)|d\\xi\\leq 1\\right\\}\n\\end{equation}\nis closed in $L^2(\\Omega)$.\n\\end{proposition}\n\\begin{proof}\n Let $f_n\\rightarrow f$ in $L^2(\\Omega)$ with $\\|f_n\\|_{\\mathcal{B}^s(\\Omega)} \\leq 1$. Choose $\\epsilon > 0$ and consider the corresponding sequence of $h_n = \\hat{f}_{n,e}$ in \\eqref{spectral-barron-integral-condition} which satisfy\n \\begin{equation}\n  \\int_{\\mathbb{R}^d}(1+|\\xi|)^s|h_n(\\xi)|d\\xi \\leq 1 + \\epsilon,~f_n(x)=\\hat{h}_n(x) = \\int_{\\mathbb{R}^d} h_n(\\xi)e^{2\\pi {\\mathrm{i}\\mkern1mu} \\xi\\cdot x}d\\xi.\n \\end{equation}\n By assumption $f_n\\rightarrow f$ in $L^2(\\Omega)$ so that for any $g\\in L^2(\\Omega)$, we have\n \\begin{equation}\n  \\langle f_n, g\\rangle_{L^2(\\Omega)} \\rightarrow \\langle f, g\\rangle_{L^2(\\Omega)}.\n \\end{equation}\n Choose $g$ to be any element in the dense subset $C^\\infty_c(\\Omega)\\subset L^2(\\Omega)$ and note that in this case we have by Plancherel's theorem\n \\begin{equation}\n  \\langle f_n, g\\rangle_{L^2(\\Omega)} = \\langle h_n, \\hat{g}\\rangle_{L^2(\\mathbb{R}^d)}.\n \\end{equation}\n Note that $\\hat{g}$ is a Schwartz function and so is in the space $C_{s,0}(\\mathbb{R}^d)$, defined to be the following space of continuous, decaying functions\n \\begin{equation}\n  C_{s,0}(\\mathbb{R}^d) = \\{\\phi\\in C(\\mathbb{R}^d):\\lim_{|\\xi|\\rightarrow \\infty} |(1+|\\xi|)^s\\phi(\\xi)| = 0\\}\n \\end{equation}\n with norm\n \\begin{equation}\n  \\|\\phi\\|_{C_{s,0}(\\mathbb{R}^d)} = \\sup_{\\xi\\in \\mathbb{R}^d} |(1+|\\xi|)^s\\phi(\\xi)|.\n \\end{equation}\n \n This implies that the map\n \\begin{equation}\n  h:\\phi \\rightarrow \\lim_{n\\rightarrow \\infty}\\langle h_n, \\phi\\rangle_{L^2(\\mathbb{R}^d)} \n \\end{equation}\n defines a bounded linear functional, with norm at most $1 + \\epsilon$, on the subspace of $C_{s,0}(\\mathbb{R}^d)$ spanned by $\\{\\hat{g}:g\\in C^\\infty_c(\\Omega)\\}$. \n \n By the Hahn-Banach theorem, we can extend $h$ to an element $\\mu\\in C^*_{s,0}(\\mathbb{R}^d)$, such that $\\|\\mu\\|_{C^*_{s,0}(\\mathbb{R}^d)}\\leq 1 + \\epsilon$. By the Riesz-Markov theorem (Theorem 22 in \\cite{markoff1938mean}), the dual space $C^*_{s,0}(\\mathbb{R}^d)$ is exactly the space of Borel measures with norm given by\n \\begin{equation}\n  \\|\\mu\\|_{C^*_{s,0}(\\mathbb{R}^d)} = \\int_{\\mathbb{R}^d} (1+|\\xi|)^s d|\\mu|(\\xi) \\leq 1 + \\epsilon.\n \\end{equation}\n But we also have that for every $g\\in C^\\infty_c(\\Omega)$, $\\langle \\mu, \\hat{g}\\rangle = \\langle f,g\\rangle$. Taking the inverse Fourier transform, we see that the function\n \\begin{equation}\n  f_\\mu = \\int_{\\mathbb{R}^d}e^{2\\pi {\\mathrm{i}\\mkern1mu}  \\xi\\cdot x}d\\mu(\\xi)\n \\end{equation}\n satisfies $\\langle f_\\mu, g\\rangle = \\langle f,g\\rangle$ for all $g\\in C^\\infty_c(\\Omega)$. Thus $f = f_\\mu$ in $L^2(\\Omega)$ and so by \\eqref{barron-norm-form-2}, we have $\\|f\\|_{\\mathcal{B}^{s}(\\Omega)} \\leq 1 + \\epsilon$.
Since $\\epsilon$ was arbitrary, this completes the proof.\n\n\\end{proof}\n\n\\subsection{Relationship Between Entropy Numbers and Approximation Rates}\\label{entropy-lemma-section}\nWe end this section by proving a lemma which specifies the relationship between approximation rates from $\\Sigma_{n,M}(\\mathbb{D})$ and the entropy numbers of $B_1(\\mathbb{D})$. Note also that this lemma implicitly appears in \\cite{makovoz1996random,klusowski2018approximation} for the dictionaries $\\mathbb{P}^d_k$ with $k=0,1,2$. See also the very similar Theorem 3.6 in \\cite{cohen2020optimal}, which draws the same conclusion under different assumptions; the lemma can be thought of as a variant of Carl's inequality for approximation from $\\Sigma_{n,M}(\\mathbb{D})$. The use of this lemma is in proving lower bounds on approximation rates from $\\Sigma_{n,M}(\\mathbb{D})$.\n\\begin{lemma}\\label{entropy-lemma}\n Let $H$ be a Hilbert space and $\\mathbb{D}\\subset H$ be a dictionary with $K_\\mathbb{D}:=\\sup_{h\\in \\mathbb{D}} \\|h\\|_H < \\infty$. Suppose that for some constants $0 < l < \\infty$, $C < \\infty$, the dictionary $\\mathbb{D}$ can be covered by $C\\epsilon^{-l}$ sets of diameter $\\epsilon$ for any $\\epsilon > 0$. If there exist $M,K < \\infty$ and $\\alpha > 0$ such that for all $f\\in B_1(\\mathbb{D})$\n \\begin{equation}\\label{approx-bound-estimate}\n  \\inf_{f_n\\in \\Sigma_{n,M}(\\mathbb{D})} \\|f - f_n\\|_H \\leq Kn^{-\\alpha},\n \\end{equation}\n then the entropy numbers of $B_1(\\mathbb{D})$ are bounded by\n \\begin{equation}\n  \\epsilon_{n\\log{n}}(B_1(\\mathbb{D})) \\lesssim n^{-\\alpha},\n \\end{equation}\n where the implied constant is independent of $n$.\n\\end{lemma}\n\\begin{proof}\n The proof essentially follows the argument in the proof of Theorem 4 in \\cite{makovoz1996random}, but we provide it for completeness. In what follows, all implied constants will be independent of $n$.\n \n Using our assumption on $\\mathbb{D}$ and setting $\\epsilon = n^{-\\alpha}$, we see that there is a subset $\\mathcal{D}_n\\subset \\mathbb{D}$ such that $|\\mathcal{D}_n| \\leq Cn^{\\alpha l}$ and\n \\begin{equation}\n  \\sup_{d\\in \\mathbb{D}} \\inf_{s\\in \\mathcal{D}_n} \\|d - s\\|_H \\leq n^{-\\alpha}.\n \\end{equation}\n \n Furthermore, we can cover the unit ball in $\\ell^1$ by $(1+\\frac{2}{\\epsilon})^n$ $\\ell^1$-balls of radius $\\epsilon$ (see \\cite{pisier1999volume}, page $63$). Thus, setting $\\epsilon = M^{-1}n^{-\\alpha}$, we can find a subset $\\mathcal{L}_n$ of the $n$-dimensional $\\ell^1$-ball of radius $M$, $B_M(\\ell_1^n) = \\{x\\in \\mathbb{R}^n:~|x|_1\\leq M\\}$, such that $|\\mathcal{L}_n| \\leq (1 + 2Mn^\\alpha)^n \\lesssim n^{2\\alpha n}$, and\n \\begin{equation}\n  \\sup_{x\\in B_M(\\ell_1^n)} \\inf_{s\\in \\mathcal{L}_n} |x - s|_1 \\leq n^{-\\alpha}.\n \\end{equation}\n\n Let $\\mathcal{S}_n$ consist of all linear combinations of $n$ elements of $\\mathcal{D}_{n}$ with coefficients in $\\mathcal{L}_{n}$. Then clearly \n \\begin{equation}\\label{eq-707}\n |\\mathcal{S}_n| \\leq |\\mathcal{D}_{n}|^n|\\mathcal{L}_{n}| \\lesssim n^{\\alpha ln + 2\\alpha n} = n^{(l + 2)\\alpha n}.\n \\end{equation}\n \n By \\eqref{approx-bound-estimate}, we have for every $f\\in B_1(\\mathbb{D})$ an $f_n\\in \\Sigma_{n,M}(\\mathbb{D})$ such that\n \\begin{equation}\n  f_n = \\sum_{j=1}^n a_jh_j\n \\end{equation}\n and $\\|f - f_n\\|_H \\lesssim n^{-\\alpha}$, $h_j\\in \\mathbb{D}$ and $\\sum_{j=1}^n|a_j| \\leq M$.
\n \n We now replace the $h_j$ by their closest elements in $\\mathcal{D}_{n}$ and the coefficients $a_j$ by their closest point in $\\mathcal{L}_{n}$. Since $\\|h_j\\|_H\\leq K_\\mathbb{D}$ and $\\sum_{j=1}^n|a_j| \\leq M$, this results in a point $\\tilde{f}_n\\in \\mathcal{S}_n$ with \n $$\\|f_n - \\tilde{f}_n\\|_H \\leq Mn^{-\\alpha} + K_\\mathbb{D}n^{-\\alpha} \\lesssim n^{-\\alpha}.$$ Thus $\\|f - \\tilde{f}_n\\|_H\\lesssim n^{-\\alpha}$ and so\n \\begin{equation}\n  \\epsilon_{\\log{|\\mathcal{S}_n|}}(B_1(\\mathbb{D})) \\lesssim n^{-\\alpha}.\n \\end{equation}\n By equation \\eqref{eq-707}, we see that $\\log{|\\mathcal{S}_n|} \\lesssim n\\log{n}$, which completes the proof.\n\\end{proof}\n\nWe note that the assumptions of Lemma \\ref{entropy-lemma} hold for both of the dictionaries $\\mathbb{P}^d_k$ and $\\mathbb{F}^d_s$ for $s > 0$.\n\n\\subsection{Upper bounds for smoothly parameterized dictionaries}\\label{main-result-1-section}\nLet $H$ be a Hilbert space and consider a dictionary $\\mathbb{D}\\subset H$ which is parameterized by a smooth manifold $\\mathcal{M}$, i.e. we have a surjection\n\\begin{equation}\n \\mathcal{P}:\\mathcal{M}\\rightarrow \\mathbb{D}.\n\\end{equation}\nIn this section, we consider dictionaries $\\mathbb{D}$ which are parameterized in this way by a smooth compact manifold. For this class of dictionaries, we give upper bounds on the entropy of $B_1(\\mathbb{D})$ and on the approximation rates for $B_1(\\mathbb{D})$ from sparse convex combinations $\\Sigma_{n,M}(\\mathbb{D})$, which depend on the degree of smoothness of the parameterization map $\\mathcal{P}$.\n\nWe begin by discussing the relevant notion of smoothness. These notions are well-studied for functions whose range is $\\mathbb{R}$ (see, for instance \\cite{lorentz1996constructive}), but we need to extend them to functions with range contained in the Hilbert space $H$.\n\\begin{definition}\n Let $H$ be a Hilbert space, $U\\subset \\mathbb{R}^d$ an open set, $k \\geq 0$ an integer and $\\alpha\\in (0,1]$. A function $\\mathcal{F}:U\\rightarrow H$ is of smoothness class $k+\\alpha$, written $\\mathcal{F}\\in {\\rm Lip}(k+\\alpha, L^\\infty(U\\rightarrow H))$, if\n \\begin{itemize}\n  \\item The derivatives $D^j\\mathcal{F}:(\\mathbb{R}^d)^{\\otimes j}\\rightarrow H$ exist for $j\\leq k$.\n  \\item The $k$-th derivative $D^k\\mathcal{F}$ is $\\alpha$-H\\\"older continuous on $U$, i.e.\n  $$\n  \\|D^k\\mathcal{F}(x) - D^k\\mathcal{F}(y)\\|_{(\\mathbb{R}^d)^{\\otimes k}\\rightarrow H} \\lesssim_{\\mathcal{F},\\alpha} |x-y|^\\alpha,\n  $$\n  where the norm on the left is the operator norm.\n \\end{itemize}\n\n\\end{definition}\n\n\\begin{definition}\n Let $H$ be a Hilbert space and $\\mathcal{M}$ a smooth $d$-dimensional manifold, $k \\geq 0$ an integer and $\\alpha\\in (0,1]$. A map $\\mathcal{P}:\\mathcal{M}\\rightarrow H$ is of smoothness class $k+\\alpha$, written $\\mathcal{P}\\in {\\rm Lip}(k+\\alpha, L^\\infty(M\\rightarrow H))$ if for each coordinate chart $(U,\\phi)$ we have $\\mathcal{P}\\circ \\phi\\in {\\rm Lip}(k+\\alpha, L^\\infty(U\\rightarrow H))$. \n \\end{definition}\n \n To illustrate this definition, we consider the two examples which arise in the study of neural networks, $\\mathbb{P}^d_k$ and $\\mathbb{F}^d_s$.
\n \n First, note that the dictionary $\\mathbb{P}^d_k$ is parameterized by the manifold $S^{d-1}\\times [-2,2]$ via the map\n\\begin{equation}\\label{p-k-parameterization-definition}\n \\mathcal{P}^d_k(\\omega,b) = \\sigma_k(\\omega\\cdot x + b)\\in L^2(B_1^d).\n\\end{equation}\nWe claim that the map $\\mathcal{P}^d_k$ is of smoothness class $k + \\frac{1}{2}$. Indeed, differentiating $k$ times we obtain\n\\begin{equation}\n\\begin{split}\n D^k\\mathcal{P}^d_k(\\omega,b)\\cdot[(\\omega_1,b_1)\\otimes\\cdots\\otimes(\\omega_k,b_k)] &= \\left[\\prod_{i=1}^k (\\omega_i\\cdot x + b_i)\\right]\\sigma_k^{(k)}(\\omega\\cdot x + b) \\\\\n & = k!\\left[\\prod_{i=1}^k (\\omega_i\\cdot x + b_i)\\right]\\sigma_0(\\omega\\cdot x + b)\\in L^2(B_1^d),\n\\end{split}\n\\end{equation}\nand so\n\\begin{equation}\n \\|D^k\\mathcal{P}^d_k(\\omega,b) - D^k\\mathcal{P}^d_k(\\omega',b')\\|_{(\\mathbb{R}^d)^{\\otimes k}\\rightarrow H} \\lesssim_{k,d} \\|\\sigma_0(\\omega\\cdot x + b) - \\sigma_0(\\omega'\\cdot x + b')\\|_{L^2(B_1^d)}.\n\\end{equation}\nSince $\\sigma_0$ is the Heaviside function and $B_1^d$ is the unit ball of $\\mathbb{R}^d$, it is easy to see that $\\sigma_0(\\omega\\cdot x + b) - \\sigma_0(\\omega'\\cdot x + b')$ is non-zero only on a strip of width $\\lesssim_d |\\omega - \\omega'| + |b - b'|$ (see for instance the argument in \\cite{makovoz1996random}, Section 4). This means that\n\\begin{equation}\n \\|\\sigma_0(\\omega\\cdot x + b) - \\sigma_0(\\omega'\\cdot x + b')\\|_{L^2(B_1^d)} \\lesssim_d (|\\omega - \\omega'| + |b - b'|)^{\\frac{1}{2}},\n\\end{equation}\nand so $\\mathcal{P}^d_k$ is of smoothness class $k+\\frac{1}{2}$.\n\nNext, we observe that the dictionary $\\mathbb{F}_s^d$ is parameterized by $S^d$ via the stereographic projection map $\\mathcal{F}_s^d:S^d\\rightarrow L^2(B_1^d)$ given by\n\\begin{equation}\\label{f-s-parameterization-definition}\n \\mathcal{F}^d_s(\\nu) = \\begin{cases} \n          \\left(1 + |\\omega|\\right)^{-s}e^{2\\pi i \\omega\\cdot x} & \\nu_{d+1} \\neq -1 \\\\\n          0 & \\nu_{d+1} = -1,\n       \\end{cases}\n\\end{equation}\nwhere $$\\omega = (1+\\nu_{d+1})^{-1}(\\nu_1,...,\\nu_d),~|\\omega| = \\sqrt{\\frac{1-\\nu_{d+1}}{1+\\nu_{d+1}}}.$$\n\nIt is easy to check that this map is infinitely smooth at every point except the south pole where $\\nu_{d+1} = -1$. Moreover, the factor $\\left(1 + |\\omega|\\right)^{-s}$ implies that $\\mathcal{F}^d_s$ decays to order $s$ at the south pole. Taken together, this means that the map $\\mathcal{F}^d_s$ is of smoothness class $s$.\n\nWe proceed to bound the approximation rates from sparse convex combinations $\\Sigma_{n,M}(\\mathbb{D})$ and the dyadic entropy numbers of $B_1(\\mathbb{D})$ for dictionaries which are smoothly parameterized by a compact manifold.\nWe will need the following simple lemma.\n\\begin{lemma}\\label{image-of-union-of-cubes-lemma}\n Suppose that $\\mathcal{M}$ is a $d$-dimensional compact smooth manifold and we are given a parameterization $\\mathcal{P}\\in {\\rm Lip}(k+\\alpha, L^\\infty(M\\rightarrow H))$.
Then there exist finitely many maps $\\mathcal{P}_j:[-1,1]^d\\rightarrow H$, $j=1,...,T$, such that $\\mathcal{P}_j\\in {\\rm Lip}(k+\\alpha, L^\\infty([-1,1]^d\\rightarrow H))$ for each $j$ and\n \\begin{equation}\n  \\mathcal{P}(\\mathcal{M})\\subset \\bigcup_{j=1}^T \\mathcal{P}_j([-1,1]^d).\n \\end{equation}\n\n\\end{lemma}\n\\begin{proof}\n Let $\\phi_j: U_j\\rightarrow \\mathcal{M}$, $j=1,...,T:=T_{\\mathcal{M}}$, be a coordinate atlas for $\\mathcal{M}$, which can be taken finite since $\\mathcal{M}$ is compact. Further, we can assume that the $U_j$ are bounded. By assumption the composition $\\mathcal{P} \\circ \\phi_j$ is of smoothness class $k+\\alpha$. We translate and dilate each $U_j$ such that they are contained in the unit cube $C = [-1,1]^d$ and apply Whitney's extension theorem \\cite{whitney1934analytic} to obtain maps $\\mathcal{P}_j:C\\rightarrow H$ such that $\\mathcal{P}_j|_{U_j} = \\mathcal{P} \\circ \\phi_j$ which are still of smoothness class $k+\\alpha$. Then the maps $\\mathcal{P}_j$ satisfy the conclusion of the lemma.\n\\end{proof}\n\n\\section{Approximation rates for smoothly parameterized dictionaries}\nHere we give upper bounds on the approximation rates of $B_1(\\mathbb{D})$ from sparse convex combinations $\\Sigma_{n,M}(\\mathbb{D})$ for smoothly parameterized dictionaries. In particular, we have the following theorem. \n\\begin{theorem}\\label{upper-bound-theorem}\n Let $k \\geq 0$ be an integer and $\\alpha\\in (0,1]$. Suppose that $\\mathcal{M}$ is a compact $d$-dimensional smooth manifold, $\\mathcal{P}\\in {\\rm Lip}(k+\\alpha, L^\\infty(M\\rightarrow H))$, and the dictionary $\\mathbb{D}\\subset \\mathcal{P}(\\mathcal{M})$. Then there exists an $M > 0$ such that for $f\\in B_1(\\mathbb{D})$ we have\n \\begin{equation}\n \\inf_{f_n\\in \\Sigma_{n,M}(\\mathbb{D})} \\|f - f_n\\|_{H} \\lesssim n^{-\\frac{1}{2} - \\frac{k+\\alpha}{d}},\n \\end{equation}\n where both $M$ and the implied constant are independent of $n$.\n\\end{theorem}\nWe note that although the implied constants here are independent of $n$, they may indeed be very large. The proof of this theorem is a higher-order generalization of the stratified sampling argument of \\cite{makovoz1996random,klusowski2018approximation}.\nBefore proving this theorem, we note a corollary obtained when applying it to the dictionary $\\mathbb{P}^d_k$. \n\\begin{theorem}\\label{relu-k-rate-corollary}\n Let $k\\geq 0$. Then there exists an $M = M(k,d) > 0$ such that for all $f\\in B_1(\\mathbb{P}^d_k)$ we have\n \\begin{equation}\n  \\inf_{f_n\\in \\Sigma_{n,M}(\\mathbb{P}^d_k)} \\|f - f_n\\|_{L^2(B_1^d)} \\lesssim_{k,d} n^{-\\frac{1}{2}-\\frac{2k+1}{2d}}.\n \\end{equation}\n\\end{theorem}\n\\begin{proof}\n This follows immediately from Theorem \\ref{upper-bound-theorem} given the smoothness condition of the map $\\mathcal{P}^d_k$ defined in \\eqref{p-k-parameterization-definition} and the fact that $S^{d-1}\\times [-2,2]$ is a compact $d$-dimensional manifold.
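Explicitly, applying Theorem \\ref{upper-bound-theorem} with smoothness class $k+\\alpha = k+\\frac{1}{2}$ and parameter dimension $d$ gives the exponent\n \\begin{equation}\n  \\frac{1}{2} + \\frac{k+\\frac{1}{2}}{d} = \\frac{1}{2} + \\frac{2k+1}{2d}.\n \\end{equation}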
\n\\end{proof}\n\nNote also that the approximation rates for cosine networks obtained in \\cite{siegel2020high} follow from Theorem \\ref{upper-bound-theorem} by considering the parameterization \\eqref{f-s-parameterization-definition}.\n\n\\begin{proof}[Proof of Theorem \\ref{upper-bound-theorem}]\n We apply Lemma \\ref{image-of-union-of-cubes-lemma} to $\\mathcal{P}$ and $\\mathcal{M}$ to obtain a collection of maps $\\mathcal{P}_j:C:=[-1,1]^d\\rightarrow H$ such that $\\mathbb{D}\\subset \\cup_{j=1}^T\\mathcal{P}_j(C)$ and $\\mathcal{P}_j\\in {\\rm Lip}(k+\\alpha, L^\\infty(C\\rightarrow H))$. \n \n It suffices to prove the result for $\\mathbb{D} = \\mathbb{D}_j := \\mathcal{P}_j(C)$, since $B_1(\\mathbb{D}) \\subset \\text{conv}(\\cup_{j=1}^T B_1(\\mathbb{D}_j))$ and if $f = \\alpha_1f_1 +\\cdots + \\alpha_Tf_T$ with $f_j\\in B_1(\\mathbb{D}_j)$ and $\\sum_{j=1}^T \\alpha_j = 1$, then\n \\begin{equation}\n  \\inf_{f_n\\in \\Sigma_{Tn,M}(\\mathbb{D})} \\|f - f_n\\|_{H} \\leq \\sum_{j=1}^T \\alpha_j\\inf_{f_{n,j}\\in \\Sigma_{n,M}(\\mathbb{D}_j)} \\|f_j - f_{n,j}\\|_{H},\n \\end{equation}\n which easily follows by setting $f_n = \\sum_{j=1}^T\\alpha_jf_{n,j}$.\n \nSo in what follows we consider $\\mathbb{D} = \\mathbb{D}_j$, $\\mathcal{P} = \\mathcal{P}_j$ and $\\mathcal{M} = C$. In other words, we assume without loss of generality that $T=1$ (at the cost of introducing a constant which depends upon $T$ and thus upon $\\mathcal{P}$ and $\\mathcal{M}$).\n \n Now let $f\\in B_1(\\mathbb{D})$ and $\\delta > 0$. Then there exists a convex combination (with potentially very large $N:=N_\\delta$)\n \\begin{equation}\\label{eq-176}\n  f_\\delta = \\sum_{i=1}^N a_id_i,\n \\end{equation}\n with $d_i\\in \\mathbb{D}$, $\\sum|a_i| \\leq 1$, and $\\|f - f_\\delta\\|_H  < \\delta$. Since $\\mathbb{D} = \\mathcal{P}(C)$, each $d_i = \\mathcal{P}(x_i)$ for some $x_i\\in C$, so we get \n \\begin{equation}\n  f_\\delta = \\sum_{i=1}^{N} a_i\\mathcal{P}(x_i).\n \\end{equation}\n We remark that in what follows all implied constants will be independent of $n$ and $\\delta$.\n \n Let $n \\geq 1$ be given and subdivide the cube $C$ into $n$ sub-cubes $C_1,...,C_n$ such that each $C_l$ has diameter $O(n^{-\\frac{1}{d}})$. This can easily be done by considering a uniform subdivision in each direction.\n \n We proceed to approximate the map $\\mathcal{P}$ by a piecewise polynomial on the subcubes $C_1,...,C_n$. To this end, let $z_1,...,z_{(k+1)^d}\\in C$ be the $d$-fold tensor product of the $k+1$ roots of the Chebyshev polynomial of degree $k+1$ (any other interpolation points will do just as well). Further, let $p_1,...,p_{(k+1)^d}$ be the corresponding Lagrange polynomials satisfying $p_i(z_j) = \\delta_{ij}$.
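To fix ideas, in the one-dimensional case $d=1$, $k=1$ the nodes are two points $z_1,z_2\\in C$ and the Lagrange basis is\n \\begin{equation}\n  p_1(z) = \\frac{z-z_2}{z_1-z_2},\\qquad p_2(z) = \\frac{z-z_1}{z_2-z_1},\n \\end{equation}\n while in higher dimensions the basis consists of the tensor products $p_{m_1}(z^{(1)})\\cdots p_{m_d}(z^{(d)})$ of the one-dimensional Lagrange polynomials in each coordinate.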
\n \n We note that these polynomials span the space of polynomials whose monomials $z^\\alpha$ satisfy $|\\alpha|_\\infty \\leq k$, which in particular contains the space of polynomials of degree at most $k$.\n \n Considering the images of these points and polynomials on the sub-cube $C_l$, which we write $z_1^l,...,z_{(k+1)^d}^l$ and $p^l_1,...,p^l_{(k+1)^d}$, we rewrite $f_\\delta$ as\n \\begin{equation}\\label{eq-194}\n  f_\\delta = \\sum_{l=1}^{n}\\sum_{x_i\\in C_l}a_iP_l(x_i) +  \\sum_{l=1}^{n}\\sum_{x_i\\in C_l}a_iE_l(x_i),\n \\end{equation}\n where the polynomial approximation is given by\n \\begin{equation}\\label{eq-198}\n  P_l(x_i) = \\sum_{m=1}^{(k+1)^d} \\mathcal{P}(z_m^l)p_m^l(x_i),\n \\end{equation}\n and the error in the approximation is given by\n \\begin{equation}\n  E_l(x_i) = \\mathcal{P}(x_i) - P_l(x_i).\n \\end{equation}\n We now utilize the fact that in \\eqref{eq-198} we only evaluate $\\mathcal{P}$ at the fixed interpolation points $z_m^l$ and that the Lagrange polynomials are bounded to see that the first term of \\eqref{eq-194} satisfies\n \\begin{equation}\n  g_{\\delta,1} = \\sum_{l=1}^{n}\\sum_{x_i\\in C_l}a_iP_l(x_i) \\in \\Sigma_{(k+1)^dn,M}(\\mathbb{D}),\n \\end{equation}\n for some constant $M = M(k,d) = \\sup_{x\\in C}\\sum_{m=1}^{(k+1)^d} \\left|p_m(x)\\right|$.\n \n The next step is to bound the error $E_l(x_i)$ uniformly and to apply a sampling argument to the second term in \\eqref{eq-194}. To bound $E_l(x_i)$ we use the smoothness of the parameterization $\\mathcal{P}_j$ and a standard Bramble-Hilbert lemma \\cite{bramble1970estimation} type argument from numerical analysis (see also \\cite{xu1982error} for instance). There are two important points concerning the bound here. First, we are dealing with Hilbert space-valued functions, and second, it is important that we are bounding the error to the interpolation polynomial instead of to averaged Taylor polynomials, as is commonly done when proving the Bramble-Hilbert lemma.\n \n We proceed as follows: consider any point $x\\in C_l$ and let $x_m(t) = x(1-t) + tz^l_m$ be the line segment from $x$ to the Lagrange point $z^l_m$. Using the differentiability of the parameterization map $\\mathcal{P}$ and Taylor's theorem with integral remainder (we take $k\\geq 1$ here; for $k=0$ the argument below only requires the H\\\"older continuity of $\\mathcal{P}$ itself), we get\n \\begin{equation}\n  \\mathcal{P}(z^l_m) - \\mathcal{P}(x) = \\mathcal{P}(x_m(1)) - \\mathcal{P}(x_m(0)) = \\sum_{s=1}^k \\frac{1}{s!}r_{m}^{(s)}(0) + \\frac{1}{(k-1)!}\\int_0^1 (1-t)^{k-1}[r_{m}^{(k)}(t) - r_{m}^{(k)}(0)]dt,\n \\end{equation}\n where $r_{m}(t) = \\mathcal{P}(x_m(t))$. We now calculate that\n \\begin{equation}\n  r_{m}^{(s)}(t) = D^s\\mathcal{P}(x_m(t))\\cdot(z_m^l - x)^{\\otimes s}.\n \\end{equation}\n This gives us that\n \\begin{equation}\n  \\mathcal{P}(z^l_m) - \\mathcal{P}(x) = \\sum_{s=1}^k \\frac{1}{s!}D^s\\mathcal{P}(x)\\cdot(z_m^l - x)^{\\otimes s} + \\frac{1}{(k-1)!}\\left(\\int_0^1 (1-t)^{k-1}[D^k\\mathcal{P}(x_m(t)) - D^k\\mathcal{P}(x)]dt\\right)\\cdot(z_m^l - x)^{\\otimes k}.\n \\end{equation}\n We now multiply this equation by the interpolation weights $p_m^l(x)$, sum over $m$, and use the following standard facts. First, for every $x$\n \\begin{equation}\n  \\sum_{m=1}^{(k+1)^d} p_m^l(x) = 1,\n \\end{equation}\n since this is just the interpolation of the constant function $1$. Second, for $s=1,...,k$ and every $x$ we have\n \\begin{equation}\n  \\sum_{m=1}^{(k+1)^d} p_m^l(x)(z_m^l - x)^{\\otimes s} = 0,\n \\end{equation}\n since this is the interpolation of the function $g(z) = (z-x)^{\\otimes s}$ evaluated at $x$.
Note that $g$ is a polynomial of degree at most $k$ (hence is reproduced exactly) and vanishes at $x$. This gives the bound (using $(1-t)^{k-1}\\leq 1$)\n \\begin{equation}\n  \\|E_l(x)\\|_H = \\|\\mathcal{P}(x) - P_l(x)\\|_H \\leq \\frac{1}{(k-1)!}\\sum_{m=1}^{(k+1)^d} |p_m^l(x)||z_m^l - x|^k\\int_0^1\\|D^k\\mathcal{P}(x_m(t)) - D^k\\mathcal{P}(x)\\|dt.\n \\end{equation}\n Finally, we use the fact that the Lagrange polynomials are bounded, combined with the smoothness condition on the parameterization $\\mathcal{P}$ and the fact that $|z_m^l - x|$ is at most the diameter of $C_l$ (which is $O(n^{-\\frac{1}{d}})$ by construction) to conclude that\n \\begin{equation}\n  \\|E_l(x)\\|_H \\lesssim n^{-\\frac{k+\\alpha}{d}}.\n \\end{equation}\n \n We use this bound, combined with the sampling argument of Lemma 1 in \\cite{barron1993universal} (essentially the approximation rate \\eqref{fundamental-bound}), to conclude that there exists an $n$-term convex combination\n \\begin{equation}\n  g_{\\delta,2} = \\frac{1}{n}\\sum_{s=1}^n E_{l_s}(x_{i_s}),\n \\end{equation}\nsuch that\n \\begin{equation}\n  \\left\\|g_{\\delta,2} -  \\sum_{l=1}^{n}\\sum_{x_{i}\\in C_l}a_{i}E_l(x_{i})\\right\\|_H \\lesssim n^{-\\frac{1}{2}-\\frac{k+\\alpha}{d}}.\n \\end{equation}\n Adding this to $g_{\\delta,1}$, we get\n \\begin{equation}\n  f_n = g_{\\delta,1} + g_{\\delta,2}\\in \\Sigma_{[(k+1)^d+1]n,2M}(\\mathbb{D}),\n \\end{equation}\n such that\n \\begin{equation}\n  \\|f_\\delta - f_n\\|_H \\lesssim n^{-\\frac{1}{2}-\\frac{k+\\alpha}{d}}.\n \\end{equation}\n Since $\\|f - f_\\delta\\|_H < \\delta$ and $\\delta > 0$ was arbitrary, this yields the desired result.\n\n\\end{proof}\n\n\\subsection*{Entropy bounds for smoothly parameterized dictionaries}\nNext, we bound the entropy of $B_1(\\mathbb{D})$ for smoothly parameterized dictionaries $\\mathbb{D}$. We have the following theorem.\n\\begin{theorem}\\label{entropy-upper-bound-theorem}\n Let $k \\geq 0$ be an integer and $\\alpha\\in (0,1]$. Suppose that $\\mathcal{M}$ is a compact $d$-dimensional smooth manifold, $\\mathcal{P}\\in {\\rm Lip}(k+\\alpha, L^\\infty(M\\rightarrow H))$, and the dictionary $\\mathbb{D}\\subset \\mathcal{P}(\\mathcal{M})$. Then\n \\begin{equation}\\label{entropy-bound-equation}\n \\epsilon_n(B_1(\\mathbb{D})) \\lesssim n^{-\\frac{1}{2} - \\frac{k+\\alpha}{d}},\n \\end{equation}\n where the implied constant is independent of $n$.\n\\end{theorem}\nCombining Theorem \\ref{upper-bound-theorem} with Lemma \\ref{entropy-lemma}, we obtain a bound of $\\epsilon_{n\\log{n}}(B_1(\\mathbb{D})) \\lesssim n^{-\\frac{1}{2} - \\frac{k+\\alpha}{d}}$. The content of Theorem \\ref{entropy-upper-bound-theorem} is to show that the logarithmic factor can be removed with a much more careful analysis.\n\nBefore proving this theorem, we note that by using the smoothness of the parameterizations \\eqref{p-k-parameterization-definition} and \\eqref{f-s-parameterization-definition}, we obtain\n\\begin{equation}\n \\epsilon_n(B_1(\\mathbb{P}_k^d)) \\lesssim_{k,d} n^{-\\frac{1}{2} - \\frac{2k+1}{2d}},~\\epsilon_n(B_1(\\mathbb{F}_s^d)) \\lesssim_{s,d} n^{-\\frac{1}{2} - \\frac{s}{d}}.\n\\end{equation}\nFurther, Theorem 4.1 in \\cite{cohen2020optimal} implies that these rates can be attained with a stable (i.e. Lipschitz) non-linear approximation scheme.\n\nIn the proof of Theorem \\ref{entropy-upper-bound-theorem}, which draws heavily on the ideas in \\cite{ball1990entropy}, it will be convenient to use the notion of entropy numbers of an operator $T:X\\rightarrow Y$ between two Banach spaces $X$ and $Y$, which we briefly recall.
For such an operator $T$, we simply define $\\epsilon_n(T) = \\epsilon_n(T(B_X))$ where $B_X = \\{x\\in X:~\\|x\\|_X\\leq 1\\}$ is the unit ball in $X$. This notion has itself been studied extensively and measures the degree of compactness of the operator $T$. We will make use of the following two lemmas in the proof of Theorem \\ref{entropy-upper-bound-theorem}.\n\nThe first is well-known and follows from the triangle inequality and the definition of the entropy numbers.\n\\begin{lemma}\\label{triangle-inequality-entropy-lemma}\n Let $S,T:X\\rightarrow Y$. Then for any $0 \\leq m\\leq n$\n \\begin{equation}\n  \\epsilon_n(S+T) \\leq \\epsilon_m(S) + \\epsilon_{n-m}(T).\n \\end{equation}\n\n\\end{lemma}\n\nThe second, due to Carl (Proposition 1 in \\cite{carl1985inequalities}), is the following bound on the entropy of operators whose domain is a finite-dimensional $\\ell^1$-space.\n\\begin{lemma}\\label{carls-lemma}\n Let $T:\\ell_1^n\\rightarrow H$, where $\\ell_1^n$ is the $n$-dimensional $\\ell^1$ space (i.e. $\\mathbb{R}^n$ with the $\\ell^1$-norm) and $H$ is a Hilbert space. Then\n \\begin{equation}\n  \\epsilon_m(T) \\lesssim \\begin{cases} \n         \\|T\\| & m=0 \\\\\n          \\sqrt{1+\\log{\\frac{n}{m}}}m^{-\\frac{1}{2}}\\|T\\| & 1\\leq m\\leq n \\\\\n          2^{-\\frac{m}{n}}n^{-\\frac{1}{2}}\\|T\\| & m\\geq n.\n       \\end{cases}\n \\end{equation}\n (Note here the implied constant is absolute.)\n\n\\end{lemma}\n\n\n\\begin{proof}[Proof of Theorem \\ref{entropy-upper-bound-theorem}]\n As in the proof of Theorem \\ref{upper-bound-theorem}, we apply Lemma \\ref{image-of-union-of-cubes-lemma} to $\\mathcal{P}$ and $\\mathcal{M}$ to obtain a collection of maps $\\mathcal{P}_j:C:=[-1,1]^d\\rightarrow H$ such that $\\mathbb{D}\\subset \\cup_{j=1}^T\\mathcal{P}_j(C)$ and $\\mathcal{P}_j\\in {\\rm Lip}(k+\\alpha, L^\\infty(C\\rightarrow H))$. \n \n It suffices to prove the result for $\\mathbb{D} = \\mathbb{D}_j := \\mathcal{P}_j(C)$, since $B_1(\\mathbb{D}) \\subset \\sum_{j=1}^T B_1(\\mathbb{D}_j)$ and so by Lemma \\ref{triangle-inequality-entropy-lemma}\n \\begin{equation}\n  \\epsilon_{Tn}(B_1(\\mathbb{D})) \\leq \\sum_{j=1}^T \\epsilon_n(B_1(\\mathbb{D}_j)).\n \\end{equation}\n So in what follows, we assume without loss of generality that $T=1$, i.e. that $\\mathcal{M} = C$ (doing so introduces at most a constant independent of $n$).\n \n Consider the $\\ell_1$ space on the set $C = [-1,1]^d$, i.e. \n \\begin{equation}\n \\ell_1(C) = \\left\\{f:C\\rightarrow \\mathbb{R}:~\\|f\\|_{1,C} := \\sup \\left\\{\\sum_{i=1}^N |f(x_i)|:~\\text{$x_1,...,x_N\\in C$ are distinct}\\right\\} < \\infty\\right\\},\n \\end{equation}\n and its unit ball $B_1(\\ell_1(C)) = \\{f\\in \\ell_1(C):~\\|f\\|_{1,C} \\leq 1\\}$.\n \n We observe that $B_1(\\mathbb{D}) = \\overline{\\mathcal{S}(B_1(\\ell_1(C)))}$, where the operator $\\mathcal{S}:\\ell_1(C)\\rightarrow H$ is given by $\\mathcal{S}(f) = \\sum_{x\\in C} f(x)\\mathcal{P}(x)$. Since the entropy numbers do not change when taking the closure, the problem is reduced to bounding the entropy numbers of the operator $\\mathcal{S}$.\n \n We do this by decomposing $\\mathcal{S} = \\sum_{i=1}^\\infty \\mathcal{S}_i$ as follows. For each $i$, consider a decomposition of the cube $C$ into $N_i = 2^{id}$ subcubes $C_1,...,C_{N_i}$ with side length $2^{-i}$.
As in the proof of Theorem \\ref{upper-bound-theorem}, we introduce the interpolation points $z_1^l,...,z_{(k+1)^d}^l$ and Lagrange polynomials $p_1^l,...,p_{(k+1)^d}^l$ on each subcube $C_l$. Given a function $\\mathcal{F}:C\\rightarrow H$, consider the piecewise polynomial interpolation of $\\mathcal{F}$ on the cubes $C_1,...,C_{N_i}$, which we denote by\n \\begin{equation}\n  \\pi_i(\\mathcal{F})(x) = \\sum_{m=1}^{(k+1)^d}p_m^l(x)\\mathcal{F}(z_m^l)~\\text{for $x\\in C_l$}.\n \\end{equation}\n Further, we let $\\pi_0(\\mathcal{F}) = 0$. We now set\n \\begin{equation}\n  \\mathcal{S}_i(f) = \\sum_{x\\in C} f(x)(\\pi_i(\\mathcal{P}) - \\pi_{i-1}(\\mathcal{P}))(x).\n \\end{equation}\n It is evident that $\\sum_{i=1}^\\infty \\mathcal{S}_i(f) = \\lim_{i\\rightarrow \\infty} \\sum_{x\\in C} f(x)\\pi_i(\\mathcal{P})(x) = \\sum_{x\\in C} f(x)\\mathcal{P}(x) = \\mathcal{S}(f)$.\n \nSince $\\pi_j(\\mathcal{P})$ is a piecewise polynomial on $C_1,...,C_{N_j}$ and the cubes $C_1,...,C_{N_i}$ for $i > j$ refine the cubes $C_1,...,C_{N_j}$, we clearly have $\\pi_i(\\pi_j(\\mathcal{F})) = \\pi_j(\\mathcal{F})$ whenever $i > j$. This allows us to write\n \\begin{equation}\n  \\mathcal{S}_i(f) = \\sum_{x\\in C} f(x)\\pi_i(\\mathcal{P} - \\pi_{i-1}(\\mathcal{P}))(x).\n \\end{equation}\nFrom this, we see that each $\\mathcal{S}_i$ factors through an $\\ell^1$ subspace of dimension $n_i := (k+1)^d2^{id}$. Namely, $\\mathcal{S}_i = \\mathcal{U}_i\\circ \\mathcal{V}_i$, where $\\mathcal{V}_i:\\ell_1(C)\\rightarrow \\ell_1^{n_i}$ is given by\n\\begin{equation}\n \\mathcal{V}_i(f) = \\left(\\sum_{x\\in C_l}f(x)p_m^l(x)\\right)_{(m,l)},\n\\end{equation}\nand $\\mathcal{U}_i: \\ell_1^{n_i}\\rightarrow H$ is given by\n\\begin{equation}\n \\mathcal{U}_i((y)_{(m,l)}) = \\sum_{l=1}^{N_i} \\sum_{m=1}^{(k+1)^d} y_{(m,l)} \\left[\\mathcal{P}(z_m^l) - \\pi_{i-1}(\\mathcal{P})(z_{m}^l)\\right].\n\\end{equation}\nHere the indexing set $(m,l)$ runs over $\\{1,...,(k+1)^d\\}\\times \\{1,...,N_i\\}$.\n\nFrom this it is evident that $\\|\\mathcal{V}_i\\| \\leq M = M(k,d) = \\sup_{x\\in C}\\sum_{m=1}^{(k+1)^d} \\left|p_m(x)\\right|$. \n\nFurthermore, via a Bramble-Hilbert lemma \\cite{bramble1970estimation} type argument analogous to that in the proof of Theorem \\ref{upper-bound-theorem}, we get that \n\\begin{equation}\n \\|\\mathcal{P}(z_m^l) - \\pi_{i-1}(\\mathcal{P})(z_{m}^l)\\|_H \\lesssim 2^{-i(k+\\alpha)},\n\\end{equation}\nsince $\\mathcal{P}$ is of smoothness class $k+\\alpha$ and the diameter of each of the cubes $C_1,...,C_{N_i}$ is $O(2^{-i})$. This means that $\\|\\mathcal{U}_i\\| \\lesssim 2^{-i(k+\\alpha)}$. Note that here and in what follows all implied constants are independent of $i$.\n\nWe now bound, using Lemma \\ref{triangle-inequality-entropy-lemma},\n\\begin{equation}\\label{eq-912}\n \\epsilon_n(\\mathcal{S}) \\leq \\sum_{i=1}^\\infty \\epsilon_{m_i}(\\mathcal{S}_i)\\leq \\sum_{i=1}^\\infty \\|\\mathcal{V}_i\\|\\epsilon_{m_i}(\\mathcal{U}_i) \\leq M\\sum_{i=1}^\\infty\\epsilon_{m_i}(\\mathcal{U}_i),\n\\end{equation}\nfor any $m_i$ which satisfy $\\sum_{i=1}^\\infty m_i \\leq n$ (the inequality suffices by the monotonicity of the entropy numbers).\n\nNow let $K = (k+1)^d$, and let $c > 0$ be a fixed integer to be specified later. Note that by the monotonicity of the entropy numbers it suffices to prove \\eqref{entropy-bound-equation} for $n = K2^{rd}$ for integers $r \\geq 2c$.
This corresponds to showing that\n\\begin{equation}\n \\epsilon_n(\\mathcal{S}) \\lesssim 2^{-r\\left(k+\\alpha+\\frac{d}{2}\\right)}\n\\end{equation}\nfor $n = K2^{rd}$, where here and in what follows the implied constant is independent of $r$ (and also $i$, if applicable).\n\nTo simplify the notation in what follows, we set $\\beta = k+\\alpha+\\frac{d}{2}$. Further, we introduce two indices $i_1 = r-c$ and $i_2 = \\lfloor r\\beta(k+\\alpha)^{-1}\\rfloor$. In equation \\eqref{eq-912} we set\n\\begin{equation}\n m_i = \\begin{cases} \n         K\\lceil(r-i)(\\beta + 1)2^{id}\\rceil & 1\\leq i\\leq i_1 \\\\\n         K\\lfloor2^{i_1d - (i - i_1)\\delta}\\rfloor & i_1 < i\\leq i_2 \\\\\n         0 & i > i_2,\n       \\end{cases}\n\\end{equation}\nwhere $\\delta = \\frac{d}{2}\\left(1 + \\frac{d}{2(k+\\alpha)}\\right)^{-1} > 0$.\n\nThis choice of $\\delta$ ensures that, since $r \\geq 2c$,\n\\begin{equation}\n\\begin{split}\ni_1d - (i_2 - i_1)\\delta \\geq i_1d - i_2\\delta &\\geq (r-c)d - r\\beta(k+\\alpha)^{-1}\\delta \\\\\n& = (r - c)d - r\\delta\\left(1 + \\frac{d}{2(k+\\alpha)}\\right) \\\\\n& \\geq r\\left(\\frac{d}{2} - \\delta\\left(1 + \\frac{d}{2(k+\\alpha)}\\right)\\right) \\geq 0.\n\\end{split}\n\\end{equation}\nThis means that for $i_1 < i\\leq i_2$, we have $2^{i_1d - (i - i_1)\\delta} \\geq 1$ so that\n\\begin{equation}\\label{delta-choice-bound}\n \\lfloor2^{i_1d - (i - i_1)\\delta}\\rfloor \\geq 2^{i_1d - (i - i_1)\\delta - 1}.\n\\end{equation}\nIn addition, it is easy to verify that $\\delta < k+\\alpha$. These facts will be important later when we bound the sum in \\eqref{eq-912}.\n\nWe proceed to check that for $c$ sufficiently large (independently of $r$) we can ensure that $\\sum_{i=1}^\\infty m_i \\leq n$. To this end, we calculate\n\\begin{equation}\\label{eq-932}\n \\sum_{i=1}^\\infty m_i = K\\sum_{i=1}^{i_1} \\lceil(r-i)(\\beta + 1)2^{id}\\rceil + K\\sum_{i=i_1+1}^{i_2} \\lfloor2^{i_1d - (i - i_1)\\delta}\\rfloor.\n\\end{equation}\nThe first sum above, without the factor of $K$, is bounded by (recall that $i_1 = r-c$)\n\\begin{equation}\n \\sum_{i=1}^{i_1} \\lceil(r-i)(\\beta + 1)2^{id}\\rceil \\leq (r-c) + (\\beta + 1)\\sum_{i=1}^{r-c} (r-i)2^{id} \\leq [2(\\beta + 1)(c+1) + 1]2^{(r-c)d},\n\\end{equation}\nby noting that $(r-c) \\leq 2^{(r-c)d}$ and by writing \n$$\\sum_{i=1}^{r-c} (r-i)2^{id} = c\\sum_{i=1}^{r-c} 2^{id} + \\sum_{i=1}^{r-c-1}\\sum_{j=1}^{i}2^{jd},$$ \nand bounding the geometric series.\n\nThe second sum in \\eqref{eq-932}, again without the factor of $K$, is bounded by (recall that $i_1 = r-c$)\n\\begin{equation}\n \\sum_{i=i_1+1}^{i_2} \\lfloor2^{i_1d - (i - i_1)\\delta}\\rfloor \\leq \\sum_{i=i_1+1}^{\\infty} 2^{i_1d - (i - i_1)\\delta} = 2^{-\\delta}(1 - 2^{-\\delta})^{-1}2^{(r-c)d}.\n\\end{equation}\nThus, if we choose $c$ large enough so that\n$$\n [2(\\beta + 1)(c+1) + 1]2^{-cd} \\leq \\frac{1}{2}~\\text{and}~2^{-\\delta}(1 - 2^{-\\delta})^{-1}2^{-cd} \\leq \\frac{1}{2},\n$$\nthen we will have that $\\sum_{i=1}^\\infty m_i \\leq K2^{rd} = n$. (Note that such a $c$ can be chosen independently of $r$.)\n\nFinally, we bound the sum in equation \\eqref{eq-912} using Lemma \\ref{carls-lemma}.
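The three ranges $i\\leq i_1$, $i_1 < i\\leq i_2$ and $i > i_2$ will be handled by the three cases of Lemma \\ref{carls-lemma}: in the first range $m_i\\geq n_i$ and we use the exponentially decaying bound $2^{-\\frac{m_i}{n_i}}n_i^{-\\frac{1}{2}}\\|\\mathcal{U}_i\\|$, in the second range $1\\leq m_i\\leq n_i$ and we use the bound $\\sqrt{1+\\log{\\frac{n_i}{m_i}}}m_i^{-\\frac{1}{2}}\\|\\mathcal{U}_i\\|$, and in the third range $m_i = 0$ and we simply use the operator norm $\\|\\mathcal{U}_i\\|$.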
We note that for $i \\leq i_1 = r-c$, we have \n$$m_i = K\\lceil(r-i)(\\beta + 1)2^{id}\\rceil \\geq K2^{id} = n_i.$$ \nThus Lemma \\ref{carls-lemma} gives the bound (recall that $\\|\\mathcal{U}_i\\|\\lesssim 2^{-i(k+\\alpha)}$)\n\\begin{equation}\n\\begin{split}\n \\epsilon_{m_i}(\\mathcal{U}_i) \\lesssim 2^{-\\frac{m_i}{n_i}}n_i^{-\\frac{1}{2}}\\|\\mathcal{U}_i\\| \\lesssim 2^{-(r-i)(\\beta + 1)}\\sqrt{2^{-id}}2^{-i(k+\\alpha)} &= 2^{-(r-i)(\\beta + 1)}2^{-i\\beta} \\\\\n & = 2^{-r\\beta}2^{-(r-i)},\n \\end{split}\n\\end{equation}\nsince $\\frac{m_i}{n_i} \\geq (r-i)(\\beta + 1)$.\n\nSimilarly, for $i_1 < i\\leq i_2$, we note that\n$$m_i = K\\lfloor2^{i_1d - (i - i_1)\\delta}\\rfloor \\leq K2^{id} = n_i,$$\nand thus Lemma \\ref{carls-lemma}, using \\eqref{delta-choice-bound}, gives the bound\n\\begin{equation}\n\\begin{split}\n \\epsilon_{m_i}(\\mathcal{U}_i) \\lesssim \\sqrt{1+\\log{\\frac{n_i}{m_i}}}m_i^{-\\frac{1}{2}}\\|\\mathcal{U}_i\\| &\\lesssim \\sqrt{2 + (i-i_1)(d + \\delta)}2^{-i_1\\frac{d}{2}+(i-i_1)\\frac{\\delta}{2}}2^{-i(k+\\alpha)} \\\\\n & = \\sqrt{2 + (i-i_1)(d + \\delta)}2^{-i_1\\left(\\frac{d}{2}+k+\\alpha\\right)}2^{-(i-i_1)\\left(k+\\alpha - \\frac{\\delta}{2}\\right)} \\\\\n & \\lesssim 2^{-r\\beta}\\sqrt{2 + (i-i_1)(d + \\delta)}2^{-(i-i_1)\\left(k+\\alpha - \\frac{\\delta}{2}\\right)}.\n \\end{split}\n\\end{equation}\nHere we have used that $\\log{\\frac{n_i}{m_i}} \\leq 1 + (i-i_1)(d + \\delta)$, which follows from \\eqref{delta-choice-bound}, and that $i_1 = r-c$ and $\\beta = \\frac{d}{2}+k+\\alpha$, so that $2^{-i_1\\left(\\frac{d}{2}+k+\\alpha\\right)} \\lesssim 2^{-r\\beta}$.\n\nFinally, if $i > i_2$, then $m_i = 0$ so that Lemma \\ref{carls-lemma} implies that\n\\begin{equation}\n \\epsilon_{m_i}(\\mathcal{U}_i) \\lesssim \\|\\mathcal{U}_i\\| \\lesssim 2^{-i(k+\\alpha)}.\n\\end{equation}\n\nPlugging these bounds into equation \\eqref{eq-912}, we get\n\\begin{equation}\\label{eq-998}\n\\begin{split}\n \\epsilon_n(\\mathcal{S})\n & \\lesssim 2^{-r\\beta}\\sum_{i=1}^{i_1}2^{-(r-i)} + 2^{-r\\beta}\\sum_{i=i_1+1}^{i_2}\\sqrt{2 + (i-i_1)(d + \\delta)}2^{-(i-i_1)\\left(k+\\alpha - \\frac{\\delta}{2}\\right)} + \\sum_{i=i_2+1}^\\infty2^{-i(k+\\alpha)} \\\\\n & \\leq 2^{-r\\beta}\\sum_{i=c}^{\\infty}2^{-i} + 2^{-r\\beta}\\sum_{i=1}^{\\infty}\\sqrt{2 + i(d + \\delta)}2^{-i\\left(k+\\alpha - \\frac{\\delta}{2}\\right)} + \\sum_{i=i_2+1}^\\infty2^{-i(k+\\alpha)}.\n \\end{split}\n\\end{equation}\nFinally, we have the following bounds:\n\\begin{equation}\n\\begin{split}\n &\\sum_{i=c}^{\\infty}2^{-i} \\leq 2^{1-c} \\leq 1,\\\\\n &\\sum_{i=1}^{\\infty}\\sqrt{2 + i(d + \\delta)}2^{-i\\left(k+\\alpha - \\frac{\\delta}{2}\\right)} \\lesssim 1, \\\\\n &\\sum_{i=i_2+1}^\\infty2^{-i(k+\\alpha)} \\lesssim 2^{-i_2(k+\\alpha)} \\lesssim 2^{-r\\beta(k+\\alpha)^{-1}(k+\\alpha)} = 2^{-r\\beta},\n \\end{split}\n\\end{equation}\nby summing geometric series and using that $k+\\alpha - \\frac{\\delta}{2} > 0$ since $\\delta < k+\\alpha$.\n\nPlugging these bounds into \\eqref{eq-998}, we finally get for $n=K2^{rd}$\n\\begin{equation}\n \\epsilon_n(\\mathcal{S}) \\lesssim 2^{-r\\beta} = 2^{-r\\left(k+\\alpha+\\frac{d}{2}\\right)},\n\\end{equation}\nwhere, importantly, the implied constant is independent of $r$ (and thus $n$). This completes the proof.\n\\end{proof}\n\n\\section{Lower bounds for ridge function dictionaries}\\label{main-result-2-section}\nIn this section, we consider lower bounds on the entropy of convex subsets $A$ of $L^2(B_1^d)$.
We show that if $A$ contains a certain class of ridge functions, then its entropy must be bounded below. This result is useful for analyzing the entropy of $B_1(\\mathbb{D})$ when $\\mathbb{D}$ is a dictionary of ridge functions.\n\nWe begin with a general lemma which is useful for lower bounding the entropy numbers of convex subsets of a Hilbert space. This lemma is a modest generalization of Lemma 3 in \\cite{makovoz1996random}. A slightly different version has also appeared in \\cite{klusowski2018approximation} in the context of lower bounding approximation rates of ReLU networks. The proofs given in these references rely on a combinatorial lemma which concerns covering numbers of the cube by Hamming balls (see Lemma 8 in \\cite{lorentz1966metric} or Lemma 2.2 of Chapter 15 in \\cite{lorentz1996constructive}, for instance). For completeness, we provide here a simpler proof which we found more enlightening.\n\\begin{lemma}\\label{lower-eigenvalue-lemma}\n Let $H$ be a Hilbert space and $A\\subset H$ a convex and symmetric set. Suppose that $g_1,...,g_n\\in A$. Then\n \\begin{equation}\\label{lemma-lower-bound}\n  \\epsilon_{n}(A)\\geq \\frac{1}{2}\\sqrt{\\frac{\\lambda_{min}}{n}},\n \\end{equation}\n where $\\lambda_{min}$ is the smallest eigenvalue of the Gram matrix $G$ defined by $G_{ij} = \\langle g_i,g_j\\rangle_H$.\n\\end{lemma}\n\\begin{proof}\n Consider a maximal set of points $x_1,...,x_N\\in b_1^n(0,1):=\\{x\\in \\mathbb{R}^n:~|x|_1\\leq 1\\}$ in the $\\ell^1$-unit ball satisfying $|x_i - x_j|_1 \\geq \\frac{1}{2}$ for each $i\\neq j$. We claim that $N \\geq 2^n$. Indeed, if the set $\\{x_i\\}_{i=1}^N$ is maximal, then the balls \n $$b^n_1(x_i,1/2) = \\left\\{x\\in \\mathbb{R}^n:~|x-x_i|_1\\leq \\frac{1}{2}\\right\\}$$\n must cover the ball $b_1^n(0,1)$. This implies that\n \\begin{equation}\n  \\sum_{i=1}^N |b^n_1(x_i,1/2)| \\geq |b_1^n(0,1)|.\n \\end{equation}\n Since we obviously have $|b^n_1(x_i,1/2)| = (1/2)^n|b_1^n(0,1)|$ for each $i$, it follows that $N \\geq 2^n$.\n \nConsider the collection of elements $f_1,...,f_N\\in H$ defined by\n \\begin{equation}\n  f_i = \\sum_{k=1}^nx^k_ig_k.\n \\end{equation}\n Since $A$ is symmetric and convex, we have $f_i\\in A$ for each $i=1,...,N$. Moreover, if $i\\neq j$, then\n \\begin{equation}\n  \\|f_i-f_j\\|^2_H = v^T_{ij}Gv_{ij},\n \\end{equation}\n where $v_{ij} = x_i - x_j$. Since $|x_i - x_j|_1 \\geq \\frac{1}{2}$, it follows from H\\\"older's inequality that $|v_{ij}|^2_2 \\geq \\frac{1}{4n}$. From the smallest eigenvalue of $G$ we then see that $\\|f_i-f_j\\|^2_H \\geq \\frac{\\lambda_{min}}{4n}$ for all $i\\neq j$. This gives the lower bound \\eqref{lemma-lower-bound}.\n\\end{proof}\n\nThis lemma can be applied to sequences of almost orthogonal vectors to obtain Lemma 3 from \\cite{makovoz1996random}, which we state here as a corollary for completeness.\n\\begin{corollary}\\label{entropy-lower-bound-corollary}\n Let $H$ be a Hilbert space and $A\\subset H$ a convex and symmetric set.
Suppose that $g_1,...,g_n\\in A$ and that the $g_i$ are almost orthogonal in the sense that for all $i = 1,...,n$,\n \\begin{equation}\\label{diagonal-dominant}\n  \\sum_{j\\neq i}|\\langle g_i,g_j\\rangle_H| \\leq \\frac{1}{2}\\|g_i\\|_H^2.\n \\end{equation}\n Then\n \\begin{equation}\n  \\epsilon_{n}(A)\\geq \\frac{\\min_i \\|g_i\\|_H}{\\sqrt{8n}}.\n \\end{equation}\n\\end{corollary}\n\\begin{proof}\n This follows from Lemma \\ref{lower-eigenvalue-lemma} if we can show that the Gram matrix $G$ satisfies\n \\begin{equation}\n  \\lambda_{min}(G) \\geq \\frac{1}{2}\\min_i \\|g_i\\|^2_H.\n \\end{equation}\n This follows immediately from the diagonal dominance condition \\eqref{diagonal-dominant} and the Gerschgorin circle theorem (see the proof in \\cite{makovoz1996random} for details).\n\\end{proof}\n\nUsing these results, it is a relatively simple matter to obtain lower bounds on the entropy of $B_1(\\mathbb{F}_s^d)$.\n\\begin{proposition}\n Let $d \\geq 1$. Then\n \\begin{equation}\n  \\epsilon_n(B_1(\\mathbb{F}_s^d)) \\gtrsim_{s,d} n^{-\\frac{1}{2}-\\frac{s}{d}}.\n \\end{equation}\n\\end{proposition}\n\\begin{proof}\n Consider the cube $C = [-d^{-\\frac{1}{2}},d^{-\\frac{1}{2}}]^d\\subset B_1^d$. It suffices to lower bound $\\epsilon_n(B_1(\\mathbb{F}_s^d))$ with respect to $H = L^2(C)$. For this, we consider the collection of functions $g_\\xi(x) = (1 + \\sqrt{d}|\\xi|)^{-s}e^{2\\pi i \\sqrt{d}\\xi \\cdot x}\\in B_1(\\mathbb{F}_s^d)$ for $\\xi\\in \\mathbb{Z}^d$ with $|\\xi|_\\infty \\leq N$. This collection of functions is clearly orthogonal, and so satisfies the condition of Corollary \\ref{entropy-lower-bound-corollary}. Thus we get\n \\begin{equation}\n  \\epsilon_n(B_1(\\mathbb{F}_s^d)) \\gtrsim n^{-\\frac{1}{2}}\\min_{\\xi}\\|g_{\\xi}\\|_H \\gtrsim_d n^{-\\frac{1}{2}}(1+dN)^{-s} \\gtrsim_{s,d} n^{-\\frac{1}{2}}N^{-s},\n \\end{equation}\n where $n = (2N+1)^d$ is the total number of functions $g_\\xi$. Thus $N \\lesssim_d n^{\\frac{1}{d}}$ and so\n \\begin{equation}\n  \\epsilon_n(B_1(\\mathbb{F}_s^d))\\gtrsim_{s,d} n^{-\\frac{1}{2}-\\frac{s}{d}}.\n \\end{equation}\n This only holds a priori for $n$ of the form $n = (2N+1)^d$, but the monotonicity of the entropy extends the bound to all $n$.\n\\end{proof}\n\nSince $\\mathcal{K}_1(\\mathbb{F}_{k+1}^d)\\subset \\mathcal{K}_1(\\mathbb{P}_k^d)$, this result immediately implies that $\\epsilon_n(B_1(\\mathbb{P}_k^d)) \\gtrsim_{k,d} n^{-\\frac{1}{2}-\\frac{k+1}{d}}$, as observed in \\cite{makovoz1996random,klusowski2018approximation}. However, it is known that this inclusion is strict \\cite{wojtowytsch2020representation}, and so it may be possible to get a better lower bound on $\\epsilon_n(B_1(\\mathbb{P}_k^d))$ which does not come from $\\mathcal{K}_1(\\mathbb{F}_{k+1}^d)$. This would separate the two spaces in a quantifiable way, showing precisely how much larger $\\mathcal{K}_1(\\mathbb{P}_k^d)$ is. \n\nThe first such improved lower bound on $\\epsilon_n(B_1(\\mathbb{P}_k^d))$ was obtained by Makovoz \\cite{makovoz1996random} in the case $k=0$, $d=2$, and it was conjectured there that an improved lower bound holds more generally. We settle this conjecture by deriving an improved lower bound for all $k \\geq 0$ and $d\\geq 2$, which requires a much more careful analysis.\n\n\\begin{theorem}\\label{lower-bound-theorem}\n Let $d \\geq 2$, $k\\geq 0$, and $A\\subset L^2(B_1^d)$ be a convex and symmetric set.
Suppose that for every profile $\\phi\\in C_c^\\infty([-2,2])$ such that $\\|\\phi^{(k+1)}\\|_{L^1(\\mathbb{R})}\\leq 1$, and any direction $\\omega\\in S^{d-1}$, the ridge function $\\phi(\\omega\\cdot x)\\in L^2(B_1^d)$ satisfies\n \\begin{equation}\n  \\phi(\\omega\\cdot x)\\in A.\n \\end{equation}\n Then\n \\begin{equation}\n  \\epsilon_n(A) \\gtrsim_{k,d} n^{-\\frac{1}{2}-\\frac{2k+1}{2d}}.\n \\end{equation}\n\n\\end{theorem}\nThe argument we give here adapts the argument in the proof of Theorem 4 in \\cite{makovoz1996random}. A careful analysis allows us to extend the result to higher dimensions and remove a logarithmic factor. The key is to consider profiles $\\phi$ whose higher order moments vanish. \n\nBefore we give the proof, we observe that the Peano kernel formula\n\\begin{equation}\n \\phi(x) = \\frac{1}{k!}\\int_{-2}^2 \\phi^{(k+1)}(t)[\\max(0,x-t)]^kdt = \\frac{1}{k!}\\int_{-2}^2 \\phi^{(k+1)}(t)\\sigma_k(x-t)dt,\n\\end{equation}\nwhich holds for all $\\phi\\in C_c^\\infty([-2,2])$, implies that for a constant $C = C(k,d)$, the unit ball $CB_1(\\mathbb{P}^d_k)$ satisfies the conditions of Theorem \\ref{lower-bound-theorem}. This yields the result\n\\begin{theorem}\\label{relu-k-lower-bound-corollary}\n Let $d \\geq 2$ and $k \\geq 0$. Then\n \\begin{equation}\n  \\epsilon_n(B_1(\\mathbb{P}^d_k)) \\gtrsim_{k,d} n^{-\\frac{1}{2}-\\frac{2k+1}{2d}}.\n \\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}[Proof of Theorem \\ref{lower-bound-theorem}]\n We introduce the weight $$d\\mu = (1-|x|^2)_+^{\\frac{d}{2}}dx$$ of Bochner-Riesz type on $B_1^d$ and consider the space $H = L^2(B_1^d,d\\mu)$. Since $1-|x|^2 \\leq 1$, it follows that\n $\\|f\\|_H \\leq \\|f\\|_{L^2(B_1^d)}$, and so it suffices to lower bound the entropy of $A$ with respect to the weighted space $H$.\n \n Choose $0\\neq \\psi\\in C^\\infty_c([-1,1])$ such that $2d-1$ of its moments vanish, i.e. such that\n \\begin{equation}\n  \\int_{-1}^1 x^r\\psi(x)dx = 0,\n \\end{equation}\n for $r=0,...,2d-2$. Such a function $\\psi$ can easily be obtained by convolving a compactly supported function whose moments vanish (such as a Legendre polynomial, extended by zero) with a $C^\\infty$ bump function.\n \n Our assumptions on the set $A$ imply that by scaling $\\psi$ appropriately, we can ensure that for $0 < \\delta < 1$\n \\begin{equation}\n   \\delta^{k}\\psi(\\delta^{-1}\\omega\\cdot x + b)\\in A,\n  \\end{equation}\n for any $\\omega\\in S^{d-1}$ and $b\\in[-\\delta^{-1},\\delta^{-1}]$. Note that $\\psi$, which will be fixed in what follows, depends upon both $d$ and $k$.\n \n Let $N \\geq 1$ be an integer and fix $n = N^{d-1}$ directions $\\omega_1,...,\\omega_n\\in S^{d-1}$ with $\\min(|\\omega_i - \\omega_j|_2, |\\omega_i + \\omega_j|_2) \\gtrsim_d N^{-1}$. This can certainly be done since projective space $P^{d-1} = S^{d-1}/\\{\\pm\\}$ has dimension $d-1$. In particular, if $\\omega_1,...,\\omega_n$ is a maximal set satisfying $\\min(|\\omega_i - \\omega_j|_2, |\\omega_i + \\omega_j|_2) \\geq cN^{-1}$, then balls of radius $cN^{-1}$ centered at the $\\omega_i$ must cover $P^{d-1}$. So we must have $n = \\Omega(N^{d-1})$, and by choosing $c$ appropriately we can arrange $n = N^{d-1}$.\n \n Further, let $a \\leq \\frac{1}{4}$ be a sufficiently small constant to be specified later and consider for $\\delta = aN^{-1}$ the collection of functions\n \\begin{equation}\n  g_{p,l}(x) = \\delta^{k}\\psi(\\delta^{-1}\\omega_p \\cdot x + 2l)\\in A,\n \\end{equation}\n for $p=1,...,n$ and $l = -\\frac{N}{2},...,\\frac{N}{2}$.
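Anticipating the conclusion of the argument: there are $m \\eqsim N^d$ functions $g_{p,l}$, and we will show below (see \\eqref{lower-bound}) that $\\|g_{p,l}\\|_H \\gtrsim_{k,d} \\delta^{k+\\frac{1}{2}}$ with $\\delta = aN^{-1}$. Once the near-orthogonality of the $g_{p,l}$ is established, Corollary \\ref{entropy-lower-bound-corollary} therefore gives\n \\begin{equation}\n  \\epsilon_m(A) \\gtrsim_{k,d} m^{-\\frac{1}{2}}N^{-k-\\frac{1}{2}} \\eqsim m^{-\\frac{1}{2}-\\frac{2k+1}{2d}},\n \\end{equation}\n which is the claimed rate.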
\n \n The intuition here is that $g_{p,l}$ is a ridge function which varies in the direction $\\omega_p$ and has the compactly supported profile $\\psi$ dilated to have width $\\delta$ (and scaled appropriately to remain in $A$). The different values of $l$ give different non-overlapping shifts of these functions. The proof proceeds by checking that the $g_{p,l}$ can be made `nearly orthogonal' by choosing $a$ sufficiently small.\n \n Indeed, we claim that if $a$ is chosen small enough, then the $g_{p,l}$ satisfy the conditions of Corollary \\ref{entropy-lower-bound-corollary}, i.e. for each $(p,l)$\n \\begin{equation}\n  \\sum_{(p',l')\\neq (p,l)} |\\langle g_{p,l}, g_{p',l'}\\rangle_H| \\leq \\frac{1}{2}\\|g_{p,l}\\|^2_H.\n \\end{equation}\n \n To see this, we begin by estimating $\\|g_{p,l}\\|^2_H$, as follows\n \\begin{equation}\n  \\|g_{p,l}\\|^2_H = \\delta^{2k}\\int_{B_1^d} |\\psi(\\delta^{-1}\\omega_p \\cdot x + 2l)|^2(1-|x|^2)^{\\frac{d}{2}}dx.\n \\end{equation}\n We proceed to complete $\\omega_p$ to an orthonormal basis of $\\mathbb{R}^d$, $b_1 = \\omega_p, b_2,...,b_d$, and denote the coordinates of $x$ with respect to this basis by $y_i = x\\cdot b_i$. Rewriting the above integral in this new orthonormal basis, we get\n \\begin{equation}\n \\begin{split}\n  \\|g_{p,l}\\|^2_H &= \\delta^{2k}\\int_{B_1^d}|\\psi(\\delta^{-1}y_1 + 2l)|^2\\left(1-\\sum_{i=1}^d y_i^2\\right)^{\\frac{d}{2}}dy_1\\cdots dy_d \\\\\n  &= \\delta^{2k}\\int_{-1}^1|\\psi(\\delta^{-1}y_1 + 2l)|^2 \\rho_d(y_1)dy_1,\n  \\end{split}\n \\end{equation}\n where\n \\begin{equation}\n \\begin{split}\n  \\rho_d(y) &= \\int_0^{\\sqrt{1-y^2}} (1-y^2-r^2)^{\\frac{d}{2}}r^{d-2}dr\\\\ \n  &= (1-y^2)^{d-\\frac{1}{2}}\\int_0^{1} (1-r^2)^{\\frac{d}{2}}r^{d-2}dr = K_d(1-y^2)^{d-\\frac{1}{2}},\n  \\end{split}\n \\end{equation}\n for a dimension dependent constant $K_d$.\n\n Further, we change variables, setting $y = \\delta^{-1}y_1 + 2l$ and use the fact that $\\psi$ is supported in $[-1,1]$, to get\n \\begin{equation}\n  \\|g_{p,l}\\|^2_H = K_d\\delta^{2k+1}\\int_{-1}^1 |\\psi(y)|^2 (1-[\\delta(y-2l)]^2)^{d-\\frac{1}{2}} dy.\n \\end{equation}\n  Since $|y| \\leq 1$ and $|2l| \\leq N$, as long as $\\delta(N+1) \\leq 1/2$, which is guaranteed by $a \\leq \\frac{1}{4}$, the coordinate $y_1 = \\delta(y-2l)$ will satisfy $|y_1| \\leq 1/2$. This means that $$(1-[\\delta(y-2l)]^2)^{d-\\frac{1}{2}} = (1-y_1^2)^{d-\\frac{1}{2}} \\geq (3/4)^{d-\\frac{1}{2}}$$ uniformly in $p,l,N$ and $\\delta$, and thus\n \\begin{equation}\\label{lower-bound}\n  \\|g_{p,l}\\|^2_H \\geq K_d(3/4)^{d-\\frac{1}{2}}\\delta^{2k+1}\\int_{-1}^1 |\\psi(y)|^2dy \\gtrsim_{k,d} \\delta^{2k+1}.\n \\end{equation}\n \n Next consider $|\\langle g_{p,l}, g_{p',l'}\\rangle_H|$ for $(p,l)\\neq (p',l')$. \n \n If $p=p'$, then $\\omega_p = \\omega_{p'}$, but $l\\neq l'$. In this case, we easily see that the supports of $g_{p,l}$ and $g_{p,l'}$ are disjoint and so the inner product $\\langle g_{p,l}, g_{p',l'}\\rangle_H = 0$. \n \n On the other hand, if $p\\neq p'$ we get\n \\begin{equation}\n  \\langle g_{p,l}, g_{p',l'}\\rangle_{H} = \\delta^{2k}\\int_{B_1^d} \\psi(\\delta^{-1}\\omega_p\\cdot x + 2l)\\psi(\\delta^{-1}\\omega_{p'}\\cdot x + 2l')(1-|x|^2)^{\\frac{d}{2}}dx.\n \\end{equation}\n Since $p\\neq p'$, the vectors $\\omega_p$ and $\\omega_{p'}$ are linearly independent and we complete them to a basis $b_1 = \\omega_p, b_2 = \\omega_{p'}, b_3,...,b_d$, where $b_3,...,b_d$ is an orthonormal basis for the subspace orthogonal to $\\omega_p$ and $\\omega_{p'}$.
\n \n Letting $b_1',b_2',b_3'=b_3,...,b_d'=b_d$ be a dual basis (i.e. satisfying $b_i'\\cdot b_j = \\delta_{ij}$) and making the change of variables $x = y_1b_1' + \\cdots + y_db_d'$ in the above integral, we get\n \\begin{equation}\\label{inner-product-equation}\n  \\langle g_{p,l}, g_{p',l'}\\rangle_{H} = \\delta^{2k}\\det(D_{p,p'})^{-\\frac{1}{2}} \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty \\psi(\\delta^{-1}y_1+2l)\\psi(\\delta^{-1}y_2+2l') \\gamma_d(|y_1b_1' + y_2b_2'|) dy_1dy_2,\n \\end{equation}\n where $D_{p,p'}$ is the Gram matrix of $\\omega_p$ and $\\omega_{p'}$ (notice that then $D_{p,p'}^{-1}$ is the Gram matrix of $b_1'$ and $b_2'$) and\n\\begin{equation}\n \\begin{split}\n  \\gamma_d(y) &= \\int_0^{\\sqrt{1-y^2}} (1-y^2-r^2)^{\\frac{d}{2}}r^{d-3}dr\\\\ \n  &= (1-y^2)_+^{d-1}\\int_0^{1} (1-r^2)^{\\frac{d}{2}}r^{d-3}dr = K'_d(1-y^2)_+^{d-1},\n  \\end{split}\n \\end{equation}\n for a second dimension dependent constant $K'_d$. (Note that if $d=2$, then the above calculation is not correct, but we still have $\\gamma_d(y) =  (1-y^2)_+^{\\frac{d}{2}} = (1-y^2)_+^{d-1}$.) We remark that the choice of Bochner-Riesz weight $d\\mu = (1 - |x|^2)_+^{\\frac{d}{2}}$ was made precisely so that $\\gamma_d$ is a piecewise polynomial with continuous derivatives up to order $d-2$, which will be important in what follows.\n \n Next, we fix $y_1$ and analyze, as a function of $z$,\n $$\\tau_{p,p'}(y_1,z) = \\gamma_d(|y_1b_1' + zb_2'|) = K'_d(1-q_{p,p'}(y_1,z))_+^{d-1},$$\n where $q_{p,p'}$ is the quadratic \n \\begin{equation}\\label{definition-of-q}\n q_{p,p'}(y_1,z) = (b_1'\\cdot b_1')y_1^2+2(b_1'\\cdot b_2')y_1z+(b_2'\\cdot b_2')z^2.\n \\end{equation}\n We observe that, depending upon the value of $y_1$, $\\tau_{p,p'}(y_1,z)$ is either identically $0$ or is a piecewise polynomial function of degree $2d-2$ with exactly two breakpoints at the roots $z_1,z_2$ of $q_{p,p'}(y_1,z) = 1$. Furthermore, utilizing Fa\\`a di Bruno's formula \\cite{di1857note} and the fact that $q_{p,p'}(y_1,\\cdot)$ is quadratic, we see that\n \\begin{equation}\\label{derivative-of-tau}\n  \\left.\\frac{d^k}{dz^k} \\tau_{p,p'}(y_1,z)\\right|_{z_i} = K'_d\\sum_{m_1+2m_2=k} \\frac{k!}{m_1!m_2!2^{m_2}}f_d^{(m_1+m_2)}(1)\\left[\\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\\right]^{m_1}\\left[\\frac{d^2}{dz^2}q_{p,p'}(y_1,z)|_{z_i}\\right]^{m_2},\n \\end{equation}\n where $f_d(x) = (1-x)^{d-1}$. \n \n Since $f^{(m)}_d(1) = 0$ for all $m \\leq d-2$, we see that\n the derivative in \\eqref{derivative-of-tau} is equal to $0$ for $0 \\leq k\\leq d-2$. Thus the function $\\tau_{p,p'}(y_1,\\cdot)$ has continuous derivatives up to order $d-2$ at the breakpoints $z_1$ and $z_2$.
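For instance, when $d = 2$ we have $\\tau_{p,p'}(y_1,z) = K'_2(1-q_{p,p'}(y_1,z))_+$, which is continuous in $z$ (there are $d-2 = 0$ continuous derivatives to check) while its first derivative jumps at the two breakpoints $z_1$ and $z_2$.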
Moreover, if we consider the derivative of order $k=d-1$, then only the term with $m_2 = 0$ in \\eqref{derivative-of-tau} survives and we get\n \\begin{equation}\n  \\left.\\frac{d^{d-1}}{dz^{d-1}} \\tau_{p,p'}(y_1,z)\\right|_{z_i} = f_d^{(d-1)}(1)\\left[\\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\\right]^{d-1} = (-1)^{d-1}(d-1)!\\left[\\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\\right]^{d-1}.\n \\end{equation}\n Utilizing the fact that the derivative of a quadratic $q(x) = ax^2 + bx + c$ at the two solutions of $q(x) = 1$ is given by $\\pm\\sqrt{b^2 - 4a(c-1)}$, combined with the formula for $q_{p,p'}$ \\eqref{definition-of-q} and the fact that $D_{p,p'}^{-1}$ is the Gram matrix of $b_1',b_2'$ (so that, $\\omega_p$ and $\\omega_{p'}$ being unit vectors, $b_2'\\cdot b_2' = \\det(D_{p,p'})^{-1}$ and $(b_1'\\cdot b_2')^2 - (b_1'\\cdot b_1')(b_2'\\cdot b_2') = -\\det(D_{p,p'})^{-1}$), we get\n \\begin{equation}\n  \\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i} = \\pm 2\\sqrt{\\left[(b_1'\\cdot b_2')^2 - (b_1'\\cdot b_1')(b_2'\\cdot b_2')\\right]y_1^2 + (b_2'\\cdot b_2')} = \\pm 2\\det(D_{p,p'})^{-\\frac{1}{2}}\\sqrt{1-y_1^2},\n \\end{equation}\n which has magnitude at most $2\\det(D_{p,p'})^{-\\frac{1}{2}}$.\n Taken together, this shows that the jump in the $(d-1)$-st derivative of $\\tau_{p,p'}(y_1,z)$ at the breakpoints $z_1$ and $z_2$ has magnitude\n \\begin{equation}\\label{derivative-bound}\n \\left|\\left.\\frac{d^{d-1}}{dz^{d-1}} \\tau_{p,p'}(y_1,z)\\right|_{z_i}\\right| \\lesssim_d \\det(D_{p,p'})^{-\\frac{d-1}{2}}.\n \\end{equation}\n\n Going back to equation \\eqref{inner-product-equation}, we see that due to the compact support of $\\psi$, the integral in \\eqref{inner-product-equation} is supported on a square with side length $2\\delta$ in $y_1$ and $y_2$. To clarify this, we make the change of variables $s = \\delta^{-1}y_1+2l$, $t = \\delta^{-1}y_2+2l'$, and use that $\\psi$ is supported on $[-1,1]$, to get (for notational convenience we let $y(s,l) = \\delta (s-2l)$)\n \\begin{equation}\n  \\langle g_{p,l}, g_{p',l'}\\rangle_{H} = \\delta^{2k+2}\\det(D_{p,p'})^{-\\frac{1}{2}} \\int_{-1}^1 \\int_{-1}^1 \\psi(s)\\psi(t) \\tau_{p,p'}(y(s,l), y(t,l'))ds dt.\n \\end{equation}\n We now estimate the sum over $l'$ as\n  \\begin{equation}\\label{big-equation}\n  \\begin{split}\n  \\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}} |\\langle g_{p,l}, g_{p',l'}\\rangle_{H}| &= \\delta^{2k+2}\\det(D_{p,p'})^{-\\frac{1}{2}}\\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\left|\\int_{-1}^1\\int_{-1}^1 \\psi(s)\\psi(t)\\tau_{p,p'}(y(s,l), y(t,l'))dsdt\\right| \\\\\n  &\\leq \\delta^{2k+2}\\det(D_{p,p'})^{-\\frac{1}{2}}\\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\int_{-1}^1\\left|\\int_{-1}^1 \\psi(s)\\psi(t)\\tau_{p,p'}(y(s,l), y(t,l'))dt\\right|ds \\\\\n  & = \\delta^{2k+2}\\det(D_{p,p'})^{-\\frac{1}{2}}\\int_{-1}^1|\\psi(s)|\\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\left|\\int_{-1}^1 \\psi(t)\\tau_{p,p'}(y(s,l), y(t,l'))dt\\right|ds.\n  \\end{split}\n \\end{equation}\n For fixed $s$ and $l$, consider the inner sum\n \\begin{equation}\\label{sum-to-bound}\n  \\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\left|\\int_{-1}^1 \\psi(t)\\tau_{p,p'}(y(s,l), y(t,l'))dt\\right| = \\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\left|\\int_{-1}^1 \\psi(t)\\tau_{p,p'}(y(s,l), \\delta (t - 2l'))dt\\right|.\n \\end{equation}\n In the integrals appearing in this sum, the variable $z = \\delta (t - 2l')$ runs over the line segment $[\\delta(2l'-1),\\delta(2l'+1)]$. These segments are disjoint for distinct $l'$ and are each of length $2\\delta$. \n \n Further, recall that for fixed $y_1 = y(s,l)$, the function $\\tau_{p,p'}(y_1, z)$ is a piecewise polynomial of degree $2d-2$ with at most two breakpoints $z_1$ and $z_2$. 
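\n \n Recall also, from the construction of $\\psi$ earlier in the proof, that the first $2d-1$ moments of $\\psi$ vanish; we restate this property here in the form in which it is used below:\n \\begin{equation*}\n  \\int_{-1}^1 t^m\\psi(t)dt = 0 \\text{ for } 0\\leq m\\leq 2d-2, \\qquad\\text{so that}\\qquad \\int_{-1}^1 \\psi(t)P(t)dt = 0\n \\end{equation*}\n for every polynomial $P$ of degree at most $2d-2$ (and likewise for $P$ precomposed with an affine map such as $t\\mapsto \\delta(t-2l')$, which leaves the degree unchanged).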
Combined with this vanishing-moment property of $\\psi$, this implies that at most two terms in the above sum are non-zero, namely those where the corresponding integral contains a breakpoint.\n \n Furthermore, the bound on the jump in the $(d-1)$-st order derivatives at the breakpoints \\eqref{derivative-bound} implies that in the intervals (of length $2\\delta$) which contain a breakpoint, there exists a polynomial $q_i$ of degree $d-2$ for which\n \\begin{equation}\n  |\\tau_{p,p'}(y_1,z) - q_i(z)| \\leq \\frac{(2\\delta)^{d-1}}{(d-1)!}M_d \\det(D_{p,p'})^{-\\frac{d-1}{2}} \\lesssim_d \\delta^{d-1} \\det(D_{p,p'})^{-\\frac{d-1}{2}}\n \\end{equation}\n on the given interval, where $M_d$ is a dimension dependent constant. Using again the vanishing moments of $\\psi$, we see that the nonzero integrals in the sum \\eqref{sum-to-bound} (of which there are at most $2$) satisfy\n $$\n \\left|\\int_{-1}^1 \\psi(t)\\tau_{p,p'}(y(s,l), \\delta (t - 2l'))dt\\right| \\lesssim_{k,d} \\delta^{d-1}\\det(D_{p,p'})^{-\\frac{d-1}{2}}.\n $$\n So for each fixed $s$ and $l$, we get the bound\n \\begin{equation}\n  \\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}}\\left|\\int_{-1}^1 \\psi(t)\\tau_{p,p'}(y(s,l), y(t,l'))dt\\right| \\lesssim_{k,d} \\delta^{d-1} \\det(D_{p,p'})^{-\\frac{d-1}{2}}.\n \\end{equation}\n Plugging this into equation \\eqref{big-equation}, we get\n \\begin{equation}\n  \\sum_{l'=-\\frac{N}{2}}^{\\frac{N}{2}} |\\langle g_{p,l}, g_{p',l'}\\rangle_{H}| \\lesssim_{k,d} \\delta^{2k+d+1}\\det(D_{p,p'})^{-\\frac{d}{2}}\\int_{-1}^1|\\psi(s)| ds \\lesssim_{k,d} \\delta^{2k+d+1}\\det(D_{p,p'})^{-\\frac{d}{2}}.\n \\end{equation}\nWe analyze the $\\det(D_{p,p'})^{-\\frac{d}{2}}$ term using that $\\omega_p$ and $\\omega_{p'}$ are on the sphere to get\n\\begin{equation}\n \\det(D_{p,p'})^{-\\frac{d}{2}} = (1-\\langle \\omega_p,\\omega_{p'}\\rangle^2)^{-\\frac{d}{2}} = \\frac{1}{\\sin(\\theta_{p,p'})^d},\n\\end{equation}\nwhere $\\theta_{p,p'}$ represents the angle between $\\omega_p$ and $\\omega_{p'}$.\n\nSumming over $p'\\neq p$, we get\n\\begin{equation}\\label{eq-1357}\n \\sum_{(p',l')\\neq (p,l)} |\\langle g_{p,l}, g_{p',l'}\\rangle_H| \\lesssim_{k,d} \\delta^{2k+d+1}\\sum_{p'\\neq p}\\frac{1}{\\sin(\\theta_{p,p'})^d}.\n\\end{equation}\nThe final step is to bound the above sum. This is done in a relatively straightforward manner by noting that this sum is comparable to the following integral \n\\begin{equation}\n \\sum_{p'\\neq p}\\frac{1}{\\sin(\\theta_{p,p'})^d} \\eqsim_d N^{d-1}\\int_{P^{d-1}-B(p,r)} |x-p|^{-d}dx,\n\\end{equation}\nwhere we are integrating over projective space minus a ball of radius $r \\gtrsim_d N^{-1}$ around $p$. Integrating around this pole of order $d$ in the $(d-1)$-dimensional $P^{d-1}$, this gives\n\\begin{equation}\n \\sum_{p'\\neq p}\\frac{1}{\\sin(\\theta_{p,p'})^d} \\eqsim_d N^d.\n\\end{equation}\nTo be more precise, we present the detailed estimates in what follows.\n\nWe bound the sum over one hemisphere\n\\begin{equation}\n \\sum_{0 < \\theta_{p,p'}\\leq \\frac{\\pi}{2}}\\frac{1}{\\sin(\\theta_{p,p'})^d},\n\\end{equation}\nand note that the sum over the other hemisphere can be handled in an analogous manner. 
To this end, we decompose this sum as\n\\begin{equation}\\label{eq-1365}\n \\sum_{0 < \\theta_{p,p'}\\leq \\frac{\\pi}{2}}\\frac{1}{\\sin(\\theta_{p,p'})^d} = \\sum_{0 < \\theta_{p,p'}\\leq \\frac{\\pi}{4}}\\frac{1}{\\sin(\\theta_{p,p'})^d} + \\sum_{\\frac{\\pi}{4} < \\theta_{p,p'}\\leq \\frac{\\pi}{2}}\\frac{1}{\\sin(\\theta_{p,p'})^d}.\n\\end{equation}\nFor the second sum, we note that $\\sin(\\theta_{p,p'}) \\geq \\frac{1}{\\sqrt{2}}$, and the number of terms is at most the total number of directions, which is $N^{d-1}$, so that the second sum is $\\lesssim N^{d-1}$. \n\nTo bound the first sum in \\eqref{eq-1365}, we rotate the sphere so that $\\omega_p = (0,...,0,1)$ is the north pole. We then take the $\\omega_{p'}$ for which $\\theta_{p,p'}\\leq \\frac{\\pi}{4}$ and project them onto the tangent plane at $\\omega_p$. Specifically, this corresponds to the map $\\omega_{p'} = (x_1,\\dots,x_{d-1},x_d)\\rightarrow x_{p'} = (x_1,\\dots,x_{d-1})$, which removes the last coordinate. \n\nIt is now elementary to check that this map distorts distances by at most a constant factor (since the $\\omega_{p'}$ are all contained in a spherical cap of radius $\\frac{\\pi}{4}$), i.e. that for $p'_1\\neq p'_2$, we have\n\\begin{equation}\n |x_{p'_1} - x_{p'_2}| \\leq |\\omega_{p'_1} - \\omega_{p'_2}| \\lesssim |x_{p'_1} - x_{p'_2}|,\n\\end{equation}\nand also that $\\sin(\\theta_{p,p'}) = |x_{p'}|$.\n\nThis allows us to write the first sum in \\eqref{eq-1365} as\n\\begin{equation}\n \\sum_{0 < \\theta_{p,p'}\\leq \\frac{\\pi}{4}}\\frac{1}{\\sin(\\theta_{p,p'})^d} = \\sum_{0<|x_{p'}|\\leq \\frac{1}{\\sqrt{2}}}\\frac{1}{|x_{p'}|^d},\n\\end{equation}\nwhere by construction we have $|\\omega_{p'_1} - \\omega_{p'_2}| \\gtrsim_d N^{-1}$ for $p'_1\\neq p'_2$, and thus $|x_{p'_1} - x_{p'_2}|\\gtrsim_d N^{-1}$ as well. In addition, $|\\omega_p - \\omega_{p'}| \\gtrsim_d N^{-1}$ and thus also $|x_{p'}| \\gtrsim_d N^{-1}$.\n\nNow let $r\\gtrsim_d N^{-1}$ be such that the balls of radius $r$ around each of the $x_{p'}$, and around $0$, are disjoint. 
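\nThe one remaining ingredient is the subharmonicity of $|x|^{-d}$ away from the origin, which can be checked directly (an elementary verification we include for completeness): for a radial function in $\\mathbb{R}^{d-1}$ the Laplacian is $\\partial_r^2 + \\frac{(d-1)-1}{r}\\partial_r$, so for $r = |x| > 0$\n\\begin{equation*}\n \\Delta\\left(r^{-d}\\right) = d(d+1)r^{-d-2} - d(d-2)r^{-d-2} = 3d\\, r^{-d-2} > 0.\n\\end{equation*}\n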
Notice that since $|x|^{-d}$ is a subharmonic function on $\\mathbb{R}^{d-1}\\setminus\\{0\\}$, we have\n\\begin{equation}\n \\frac{1}{|x_{p'}|^d} \\leq \\frac{1}{|B(x_{p'},r)|}\\int_{B(x_{p'},r)}|y|^{-d}dy \\lesssim_d N^{d-1}\\int_{B(x_{p'},r)}|y|^{-d}dy.\n\\end{equation}\nSince all of the balls $B(x_{p'},r)$ are disjoint and are disjoint from $B(0,r)$, we get (note that these integrals are in $\\mathbb{R}^{d-1}$)\n\\begin{equation}\n \\sum_{0<|x_{p'}|\\leq \\frac{1}{\\sqrt{2}}}\\frac{1}{|x_{p'}|^d} \\lesssim_d N^{d-1}\\int_{r \\leq |y| \\leq \\frac{1}{\\sqrt{2}} + r} |y|^{-d}dy \\leq N^{d-1}\\int_{r \\leq |y|} |y|^{-d}dy \\lesssim_d N^{d-1}r^{-1} \\lesssim_d N^d.\n\\end{equation}\nPlugging this into equation \\eqref{eq-1365} and bounding the sum over the other hemisphere in a similar manner, we get\n\\begin{equation}\n \\sum_{p'\\neq p}\\frac{1}{\\sin(\\theta_{p,p'})^d} \\lesssim_d N^d.\n\\end{equation}\nUsing equation \\eqref{eq-1357}, we finally obtain\n\\begin{equation}\n \\sum_{(p',l')\\neq (p,l)} |\\langle g_{p,l}, g_{p',l'}\\rangle_H| \\lesssim_{k,d} \\delta^{2k+d+1}N^d.\n\\end{equation}\nCombined with the lower bound \\eqref{lower-bound}, which gives $\\|g_{p,l}\\|_H^2 \\gtrsim_{k,d} \\delta^{2k+1}$, we see that by choosing the factor $a$ in $\\delta = aN^{-1}$ small enough (independently of $N$, of course), we can in fact guarantee that the conditions of Corollary \\ref{entropy-lower-bound-corollary} are satisfied.\n\nApplying the Corollary, we see that\n\\begin{equation}\n \\epsilon_{n}(A) \\geq \\frac{\\min_{(p,l)} \\|g_{p,l}\\|_H}{\\sqrt{8n}} \\gtrsim_{k,d} n^{-\\frac{1}{2}} \\delta^{\\frac{2k+1}{2}} \\gtrsim_{k,d,a} n^{-\\frac{1}{2}}N^{-\\frac{2k+1}{2}},\n\\end{equation}\nwhere $n = N^d$ is the total number of functions $g_{p,l}$. This finally gives (since $a$ is fixed depending upon $k$ and $d$)\n\\begin{equation}\n \\epsilon_{n}(A)\\gtrsim_{k,d} n^{-\\frac{1}{2}-\\frac{2k+1}{2d}}.\n\\end{equation}\nAs before, the monotonicity of the entropy extends this bound to all $n$. This completes the proof.\n\n\\end{proof}\n\n\n\\section{Conclusion}\nWe introduce the natural approximation spaces for shallow neural networks with ReLU$^k$ and cosine activation functions and show that these spaces are equivalent to the Barron space when $\\sigma = \\text{ReLU}$ and to the spectral Barron space when $\\sigma = \\cos$. Further, we calculate the precise asymptotics of the metric entropy of these spaces with respect to the $L^2$-norm. This has allowed us to calculate optimal approximation rates for shallow ReLU$^k$ neural networks, closing the gap between upper and lower bounds previously attained. \n\nThere are a few further questions we would like to propose. First, it is unclear how to extend these bounds to Banach spaces which are not Hilbert spaces, such as $L^1$ or $L^\\infty$, which have been studied in \\cite{klusowski2018approximation,makovoz1998uniform}, for example. Second, we would like to extend this theory to approximation by deeper neural networks. 
Finally, although it is known that greedy algorithms can attain an $O(n^{-\\frac{1}{2}})$ approximation rate \\cite{barron2008approximation} when approximating functions from $\\mathcal{K}_1(\\mathbb{D})$, it remains open, as far as we know, whether the higher order approximation rates derived here can be attained algorithmically.\n\n\n", "meta": {"hexsha": "41961325304da3f54cce65192d23a8920691c709", "size": 121440, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Entropy.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Entropy.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Entropy.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.2210526316, "max_line_length": 1090, "alphanum_fraction": 0.6778820817, "num_tokens": 44875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7690802264851919, "lm_q1q2_score": 0.5893412803691976}}
{"text": "\\section{Modelling precision}\nIn this section, we try to build a basic model of precision errors that we can use to obtain a rough but reliable estimate of a program's precision.\n\n\\subsection{The issue with ``absolute or relative'' error}\\label{ss:abs-rel}\nWhen the output of a problem is some real value like a distance or an area, the problem statement often specifies a constraint such as: ``The answer should be accurate to an absolute or relative error of at most $10^{-5}$.'' While considering the relative accuracy of an answer can be a useful and convenient way to specify the required precision of an answer in some cases (for example in tasks where only addition and multiplication of positive values are performed), we think that for most geometry problems it is unsuitable.\n\nThe reason for this is the need to subtract\\footnote{Strictly speaking, we mean both subtraction of values of the same sign, and addition of values of opposite signs.} large values of similar magnitude. For example, suppose that we are able to compute two values with relative precision $10^{-6}$, such as $A=1000 \\pm 10^{-3}$ and $B = 999 \\pm 10^{-3}$. If we compute their difference, we obtain $A-B = 1 \\pm 2 \\times 10^{-3}$. The absolute error remains of a comparable size, being only multiplied by 2, but on the other hand relative error increases drastically from $10^{-6}$ to $2\\times 10^{-3}$ because of the decrease in magnitude. This phenomenon is called \\emph{catastrophic cancellation}.\n\nIn fact, whenever a certain relative error can affect big numbers, catastrophic cancellation can cause the corresponding absolute error to appear on very small values. The consequence is that if a problem statement has a certain tolerance on the relative error of the answer, and a correct solution has an error close to it for the biggest possible values, then the problem statement also needs to specify a tolerance on the corresponding absolute error in case catastrophic cancellation happens. And since that tolerance on absolute error is at least as tolerant as the tolerance on relative error for all possible values, it makes it redundant. This is why we think that tolerance on ``absolute or relative error'' is misleading at best.\\footnote{In fact, working with relative error tolerances would make sense if this ``relative error'' was defined based on the magnitude of the input coordinates rather than on the magnitude of the answer, as we will see starting from section~\\ref{ss:biggest-mag}. For example, if all input coordinates are bounded by $M$, it would make sense to require an absolute precision of $M^2\\times 10^{-6}$ on an area. But since the answer can remain very small even if the magnitude of the input grows, requiring a fixed relative precision on it is usually too constraining for test cases with inputs of large magnitude.}\n\nCatastrophic cancellation shows that relative precision is not a reliable way to think about precision whenever subtractions are involved --- and that includes the wide majority of geometry problems.\nIn fact, the most common geometric operations (distances, intersections, even dot/cross products) all involve subtractions of values which could be very similar in magnitude.\n\nExamples of this appear in two of the case studies of section~\\ref{s:case-studies}: in problem ``Keeping the Dogs Apart'' and when finding the solution of a quadratic equation.\n\nAnother example occurs when computing areas of polygons made of imprecise points. 
Even when the area ends up being small, the imprecision on it can be large if there were computations on large values in intermediate steps, which is the case when the coordinates have large magnitudes.\n\n\\centerFig{modelling0}\n\nBecause of this, we advise against trying to use relative error to build precision guarantees on the global scale of a whole algorithm, and we recommend reasoning about them based on absolute error instead, as we describe below.\n\n\\subsection{Precision guarantees from IEEE 754}\n%\\emph{Note: This section is not meant to be a full description of how floating-point numbers work, but only a reminder of some useful guarantees. If you are completely unfamiliar with how floating-point numbers work or want to know more details, a good reference is \\cite{goldberg-floating}.}\n\nNearly all implementations of floating-point numbers obey the specifications of the IEEE 754 standard. This includes \\lstinline|float| and \\lstinline|double| in Java and C++, and \\lstinline|long double| in C++. The IEEE 754 standard gives strong guarantees that ensure floating-point numbers will have similar behavior even in different languages and over different platforms, and gives users a basis to build guarantees on the precision of their computations.\n\nThe basic guarantees given by the standard are:\n\\begin{enumerate}\n\\item decimal values entered in the source code or a file input are represented by the closest representable value;\n\\item the five basic operations ($+,-,\\times,/,\\sqrt{x}$) are performed as if they were performed with infinite precision and then rounded to the closest representable value.\n\\end{enumerate}\n\nThere are several implications. First, this means that integers are represented exactly, and basic operations on them ($+,-,\\times$) will have exact results, as long as they are small enough to fit within the significant digits of the type: up to at least $9\\times 10^{15}$ for \\lstinline|double|, and up to at least $1.8 \\times 10^{19}$ for \\lstinline|long double|. In particular, \\lstinline|long double| can perform exactly all the operations that a 64-bit integer type can perform.\n\nSecondly, if the inputs are exact, the relative error on the result of any of those five operations ($+,-,\\times,/,\\sqrt{x}$) will be bounded by a small constant that depends on the number of significant digits in the type.\\footnote{This assumes the magnitudes do not go outside the allowable range ($\\approx 10^{\\pm 308}$ for \\lstinline|double| and $\\approx 10^{\\pm 4932}$ for \\lstinline|long double|) which almost never happens for geometry problems.} This constant is $< 1.2 \\times 10^{-16}$ for \\lstinline|double| and $< 5.5 \\times 10^{-20}$ for \\lstinline|long double|. It is called the \\emph{machine epsilon} and we will often write it $\\epsilon$.\n\n\\subsection{Considering the biggest possible magnitude}\\label{ss:biggest-mag}\nWe explained earlier why we need to work with absolute error, but since IEEE 754 gives us guarantees in terms of relative errors, we need to consider the biggest magnitude that will be reached during the computations. 
In other words, if all computations are precise up to a relative error of $\\epsilon$, and the magnitude of the values never goes over $M$, then the absolute error of an operation is at most $M\\epsilon$.\n\nThis allows us to give good guarantees for numbers obtained after a certain number of $+$ and $-$ operations: a value that is computed in $n$ operations\\footnote{Note that when we say a value is ``computed in $n$ operations'' we mean that it is computed by a single formula that contains $n$ operations, and not that $n$ operations are necessary to actually compute it. For example $(a+b)+(a+b)$ is considered to be ``computed in 3 operations'' even though we can implement this with only 2 additions.} will have an absolute error of at most $nM\\epsilon$ compared to the theoretical result.\n%\\footnote{By this we mean, a value that is computed by a single formula involving only $+$ and $-$ operations, and at most $n$ of them.}\n\n%To clarify what we mean here by ``computed in $n$ operations'', we define it this way:\n%\\begin{itemize}\n%\\item exact inputs are ``computed in $0$ operations'';\n%\\item if $a$ is ``computed in $n_a$ operations'' and $b$ is ``computed in $n_b$ operations'' then the sum $a+b$ and subtraction $a-b$, is ``computed in $n_a+n_b+1$ operations''.\n%\\end{itemize}\n\n\n\nWe can prove the guarantee by induction: let's imagine we have two intermediate results $a$ and $b$ which were computed in $n_a$ and $n_b$ operations respectively. By the inductive hypothesis their imprecise computed values $a'$ and $b'$ satisfy the following conditions:\n\\[|a'-a| \\leq n_aM\\epsilon \\qquad |b'-b| \\leq n_bM\\epsilon\\]\n\nThe result of the floating-point addition of $a'$ and $b'$ is $\\round(a'+b')$ where $\\round()$ is the function that rounds a real value to the closest representable floating-point value. We know that $|\\round(x)-x| \\leq M\\epsilon$, so we can find a bound on the error of the addition:\n\\begin{align*}\n|\\round(a'+b') &- (a+b)|\\\\\n&= \\left|\\left[\\round(a'+b') - (a'+b')\\right] + \\left[(a'+b') - (a+b)\\right]\\right|\\\\\n&\\leq |\\round(a'+b') - (a'+b')| + |(a'+b') - (a+b)|\\\\\n&\\leq M\\epsilon + |(a'-a) + (b'-b)| \\\\\n&\\leq M\\epsilon + |a'-a| + |b'-b| \\\\\n&\\leq M\\epsilon + n_aM\\epsilon + n_bM\\epsilon \\\\\n&= (n_a + n_b + 1) M \\epsilon\n\\end{align*}\nwhere the first two steps follow from the triangle inequality.\nSince the sum is ``computed in $n_a+n_b+1$ operations'', the bound of $(n_a + n_b + 1) M \\epsilon$ that is obtained is small enough. The proof for subtraction is very similar.\n\n\\subsection{Incorporating multiplication}\nThe model above gives good guarantees but is very limited: it only works for computations that use only addition and subtraction. Multiplication does not give guarantees of the form $nM\\epsilon$. However, we can still say interesting things if we take a closer look at the different types of values we use in geometry:\n\\begin{itemize}\n\\item Adimensional ``0D'' values: e.g. angles, constant factors;% Their magnitude is typically limited by some small constant (e.g. $2\\pi$ for angles).\n\\item 1D values: e.g. coordinates, lengths, distances, radii;% Their magnitude is limited by a constant $M$, which can be deduced from the input limits.\n\\item 2D values: e.g. areas, dot products, cross products;% Since they are based on products of 1D numbers, their magnitude is typically within a small constant factor of $M^2$.\n\\item 3D values: e.g. 
volumes, mixed products.% For the same reasons, their magnitude is typically within a small constant factor of $M^3$.\n\\end{itemize}\n\nUsually, the problem statement gives guarantees on the magnitude of coordinates, so we can find some constant $M$ so that all 1D values that will be computed in the code have a magnitude less than $M$. And since 2D and 3D values are usually created by products of 1D values, we can usually say that 2D values are bounded in magnitude by $M^2$ and 3D values by $M^3$ (we may need to multiply $M$ by a constant factor).\n\nIt turns out that computations made of $+,-,\\times$ and in which all $d$-dimensional values are bounded in magnitude by $M^d$ have good precision guarantees. In fact, we can prove that the absolute error of a $d$-dimensional number computed in $n$ operations is at most $M^d\\left((1+\\epsilon)^n-1\\right)$, which assuming $n\\epsilon \\ll 1$ is about $nM^d\\epsilon$.\n\nThe proof is similar in spirit to what we did with only $+$ and $-$ earlier. Since it is a bit long, we will not detail it here, but it can be found in section~\\ref{sec:proof-precision}, along with a more precise definition of the precision guarantees and their underlying assumptions.\n\nNote that this does \\emph{not} cover multiplication by an adimensional factor bigger than 1: this makes sense, since for example successive multiplication by 2 of a small value could make the absolute error grow out of control even if the magnitude remains under $M^d$ for a while.\n\nIn all other cases, this formula $nM^d\\epsilon$ gives us a quick and reliable way to estimate precision errors.\n\n\\subsection{Why other operations do not work as well}\\label{ss:other-operations}\nNow that we have precision guarantees for $+,-,\\times$ operations, one might be tempted to try and include division as well. However, if that were possible, then it would be possible to give strong precision guarantees for line intersection, and we saw in subsection~\\ref{ss:numerically-unstable} that this is not the case.\n\nThe core of the problem is: if some value $x$ is very close to zero, then a small absolute error on $x$ will create a large absolute error on $1/x$. In fact, if $x$ is smaller than its absolute error, the computed value $1/x$ might be arbitrarily big, in either the positive or negative direction, and might not even exist. This is why it is hard to give guarantees on the results of a division whose operands are already imprecise.\n\nAn operation that also has some problematic behavior is $\\sqrt{x}$. If $x$ is smaller than its absolute error, then $\\sqrt{x}$ might or might not be defined in the reals. However, if we ignore the issue of existence by assuming that the theoretical and actual value of $x$ are both nonnegative, then we \\emph{can} say some things on the precision.\n\nBecause $\\sqrt{x}$ is a concave increasing function, a small imprecision on $x$ will have the most impact on $\\sqrt{x}$ near 0.\n\n\\centerFig{modelling1}\n\nTherefore for a given imprecision $\\delta$, the biggest imprecision on $\\sqrt{x}$ it might cause is $\\sqrt{\\delta}$. This is usually pretty bad: if the argument of the square root had an imprecision of $nM^2\\epsilon$ then in the worst case the result will have an imprecision of $\\sqrt{n}M\\sqrt{\\epsilon}$, instead of the $nM\\epsilon$ bound that we have for $+,-,\\times$ operations.\n\nFor example let us consider a circle $\\mathcal{C}$ of radius $1$ tangent to a line $l$. 
If $\\mathcal{C}$ gets closer to $l$ by $10^{-6}$, then the intersection points will move by about\n\\[\\sqrt{1^2 - (1-10^{-6})^2} \\approx \\sqrt{2\\times 10^{-6}} = \\sqrt{2} \\times 10^{-3}\\]\naway from the tangency point, as pictured below.\n\n\\centerFig{modelling2}\n\nNote that here we have only shown that $1/x$ and $\\sqrt{x}$ perform poorly on imprecise inputs. Please bear in mind that on exact inputs, the IEEE 754 guarantees that the result is the closest represented floating-point number. So when the lines and circles are defined by integers, line intersections and circle-line intersections have a relative precision error proportional to $\\epsilon$ and thus an absolute error proportional to $M\\epsilon$.\n", "meta": {"hexsha": "ee19844d244b8e546e3f19ae4f0e28086d7bebe1", "size": 14013, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archives/codelibraries/cp-geo-master/precision/modelling.tex", "max_stars_repo_name": "cbarnson/UVa", "max_stars_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_stars_repo_licenses": ["Unlicense", "MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-09-07T17:00:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-05T02:08:35.000Z", "max_issues_repo_path": "archives/codelibraries/cp-geo-master/precision/modelling.tex", "max_issues_repo_name": "cbarnson/UVa", "max_issues_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_issues_repo_licenses": ["Unlicense", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archives/codelibraries/cp-geo-master/precision/modelling.tex", "max_forks_repo_name": "cbarnson/UVa", "max_forks_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_forks_repo_licenses": ["Unlicense", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 132.1981132075, "max_line_length": 1353, "alphanum_fraction": 0.7626489688, "num_tokens": 3380, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.8311430520409023, "lm_q1q2_score": 0.5892963370301834}}
{"text": "\\documentclass[11pt, numbers=endperiod, parskip=half]{scrartcl}\n\n\\usepackage{amsmath}\n\\usepackage{color}\n\\usepackage{semantic}\n\\usepackage{soul}\n\\usepackage{pdflscape}\n\n\\title{Assignment 2}\n\\subtitle{COS30023 - Languages in Software Development}\n\\author{Daniel Parker - 971328X}\n\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section{Problem 1}\n\\subsection{Hoare Triple}\n\\begin{verbatim}\n{a > 4, b = 7}\n  x := b;\n  y := a;\n{a > 4, b = 7, x = 7, y > 4}\n\\end{verbatim}\n\n\\subsection{Rules}\n%%%%%%%%%%%%%%%%%%%%%%%%%\n%\t\t  RULES         %\n%%%%%%%%%%%%%%%%%%%%%%%%%\n\\inference[C = \\textsuperscript{def} C.Target := C.Source : ]{true}{\\{Q[C.Target \\\\\\ C.Source]\\} C \\{Q\\}}\n\\bigskip\n\\inference[C = \\textsuperscript{def} C\\textsubscript{1}; C\\textsubscript{2} : ]{\\{P\\}C\\textsubscript{1}\\{R\\} & \\{R\\}C\\textsubscript{2}\\{Q\\}}{\\{P\\}C\\textsubscript{1};C\\textsubscript{2}\\{Q\\}}\n\n\\subsection{Proof}\n\n\\inference{\\{ a > 4, b = 7 \\} x := b \\{R\\} & \\{R\\} y := a \\{ a > 4, b = 7, x = 7, y > 4 \\} }{ \\{ a > 4, b = 7 \\} x := b; y := a; \\{ a > 4, b = 7, x = 7, y > 4 \\} }\n\n\\bigskip\n\\(\\{R\\} C_2: y := a; \\{ a > 4, b = 7, x = 7, y > 4\\}\\)\n\\begin{flalign*}\n\\{R\\} &= \\{ a > 4, b = 7, x = 7, y > 4\\}\\lbrack y \\backslash a \\rbrack &\\\\\n      &= \\{ a > 4, b = 7, x = 7, a > 4\\} \\\\\n      &= \\{ a > 4, b = 7, x = 7, \\hbox{\\st{a $>$ 4}}\\} \\\\\n\\{R\\} &= \\{ a > 4, b = 7, x = 7\\}\n\\end{flalign*}\n\n\\bigskip\n\\(\\{ a > 4, b = 7\\} C_1: x := b; \\{R: a > 4, b = 7, x = 7\\}\\)\n\\begin{flalign*}\n\\{ a > 4, b = 7\\}\t&= \\{ a > 4, b = 7, x = 7\\}\\lbrack x \\backslash b \\rbrack &\\\\\n\t\t\t\t\t&= \\{ a > 4, b = 7, b = 7\\} \\\\\n\t\t\t\t\t&= \\{ a > 4, b = 7, \\hbox{\\st{b = 7}}\\} \\\\\n\\{ a > 4, b = 7\\}\t&= \\{ a > 4, b = 7\\} \\\\\n\\end{flalign*}\n\n\\section{Problem 2}\n\\subsection{Hoare Triple}\n\\begin{verbatim}\n{true}\n  if x < 0 then val := -x; else val := x;\n{val = abs(x)}\n\\end{verbatim}\n\n\\subsection{Rules}\n\\inference[C = \\textsuperscript{def} if C.Test then C.Then else C.Else : ]{ \\{ P \\wedge C.Test \\} C.Then \\{ Q \\} & \\{ P \\wedge \\neg C.Test \\} C.Else \\{ Q \\} }{ \\{P\\}C\\{Q\\} }\n\\bigskip\n\\inference{P => P' & \\{P'\\} C \\{Q'\\} & Q' => Q }{\\{P\\}C\\{Q\\}}\n\n\\subsection{Proof}\n\\inference{ \\{true \\wedge x < 0\\} val := -x; \\{ val = abs(x) \\} & \\{ true \\wedge x \\geq 0 \\} val := x; \\{ val = abs(x) \\} }{ \\{ true \\}\\ if\\ x < 0\\ then\\ val := -x;\\ else\\ val := x; \\{ val = abs(x) \\} }\n\n\\subsubsection{Premise 1}\n\\inference{x < 0 => -x = abs(x) & \\inference{true}{ \\{ -x = abs(x) \\} val := -x; \\{ val = abs(x) \\} } }{ \\{ x < 0 \\} val := -x; \\{ val = abs(x) \\} }\n\n\\subsubsection{Premise 2}\n\\inference{x \\geq 0 => x = abs(x) & \\inference{true}{ \\{ x = abs(x) \\} val := x; \\{ val = abs(x) \\} } }{ \\{ x \\geq 0 \\} val := x; \\{ val = abs(x) \\} }\n\n\\setpremisesspace{1em}\n\\setpremisesend{0.5em}\n\\begin{landscape}\n\\subsubsection{Full Proof}\n\\inference{ \\inference{x < 0 => -x = abs(x) & \\inference{true}{ \\{ -x = abs(x) \\} val := -x; \\{ val = abs(x) \\} } }{ \\{ x < 0 \\wedge true \\} val := -x; \\{ val = abs(x) \\} } & \\inference{x \\geq 0 => x = abs(x) & \\inference{true}{ \\{ x = abs(x) \\} val := x; \\{ val = abs(x) \\} } }{ \\{ x \\geq 0 \\wedge true \\} val := x; \\{ val = abs(x) \\} } }{ \\{ true \\}\\ if\\ x < 0\\ then\\ val := -x;\\ else\\ val := x; \\{ val = abs(x) \\} }\n\\end{landscape}\n\\end{document}\n", 
"meta": {"hexsha": "5f0023b9b9a08af142ca90fb4dace2dac7f0408b", "size": 3194, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignments/2/assignment-2.tex", "max_stars_repo_name": "rlgod/languages-in-software-development", "max_stars_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/2/assignment-2.tex", "max_issues_repo_name": "rlgod/languages-in-software-development", "max_issues_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/2/assignment-2.tex", "max_forks_repo_name": "rlgod/languages-in-software-development", "max_forks_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.1395348837, "max_line_length": 418, "alphanum_fraction": 0.4934251722, "num_tokens": 1473, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.589289982905649}}
{"text": "         %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n         % Information sheet for the Matlab lab - Maths 6111 %\n         %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\documentclass[10pt]{article} \n\\input ma_no_html_header\n\n\\usepackage{color}\n\\usepackage{hyperref}\n\\hypersetup{breaklinks=true,colorlinks=true}\n%\\input ma_header\n\\setlength{\\parindent}{0pt}\n\\pagestyle{myheadings}\n% \\markright{\n% \\protect {\\protect \\epsfxsize=0.2 true cm \\protect \\epsffile {dolph.line.eps}}\n% \\it Maths 3018/6111 - Numerical methods \\hfill}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{document}\n\n\\thispagestyle{empty}\n\\begin{center}\n\\textbf{\\Large Maths 3018/6111 - Numerical Methods \\\\*[8mm]\nWorksheet 7 - Solutions}\\\\*[.8cm]\n\\end{center}\n\n\\section*{Theory}\n\n\\begin{enumerate}\n\\item Find the linear system that results from using central\n  differencing to solve the elliptic equation\n  \\begin{equation*}\n    u_{xx} + u_{yy} = \\sin ( \\pi y ) \\left( 2 - 6 x - (\\pi x)^2 (1 -\n      x) \\right).\n  \\end{equation*}\n  Central differencing, trivial boundary conditions, and a $3 \\times\n  2$ grid should be used.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n  The question is incomplete as it does not specify the spatial\n  domain. However, it is reasonably to expect it to be the unit square\n  $(x, y) \\in [0, 1]^2$. So we first specify the grid using the\n  coordinates\n  \\begin{align*}\n    x_i & = i h_x & i & = 0, \\dots, 4, & h_x & = \\frac{1}{4}, \\\\\n    y_j & = j h_y & j & = 0, \\dots, 3, & h_y & = \\frac{1}{3}.\n  \\end{align*}\n  This gives an evenly spaced grid with 3 interior points in $x$ and 2\n  interior points in $y$. 
The boundary conditions are imposed as\n  \\begin{align*}\n    u_{0, j} & = 0, & u_{4, j} & = 0, \\\\\n    u_{i, 0} & = 0, & u_{i, 3} & = 0.\n  \\end{align*}\n  We are of course using the notation $u_{i,j} \\approx u(x_i, y_j)$.\n\n  We use central differencing at all interior points to get\n  \\begin{align*}\n    u_{xx} & \\approx \\frac{u_{i+1,j} + u_{i-1,j} - 2 u_{i,j}}{h_x^2}, \\\\\n    u_{yy} & \\approx \\frac{u_{i,j+1} + u_{i,j-1} - 2 u_{i,j}}{h_y^2}.\n  \\end{align*}\n  We then substitute these approximations into the differential\n  equation to find the difference equation satisfied at all interior\n  points,\n  \\begin{equation*}\n    \\left(u_{i+1,j} + u_{i-1,j} - 2 u_{i,j} \\right) + \\left(\n      \\frac{h_x}{h_y} \\right)^2 \\left( u_{i,j+1} + u_{i,j-1} - 2\n      u_{i,j} \\right) = h_x^2 \\sin ( \\pi y_j ) \\left( 2 - 6 x_i - (\\pi\n      x_i)^2 (1 - x_i) \\right).\n  \\end{equation*}\n  We will write $\\left( h_x / h_y \\right)^2 = \\alpha$, so that $\\alpha$ is the coefficient multiplying the $y$-direction neighbours.\n\n  We choose natural ordering by rows to write the unknowns $u_{i,j}$\n  as a vector ${\\bf u}$ as\n  \\begin{equation*}\n    {\\bf u} = \\left[ u_{1,1}, u_{1,2}, u_{2,1}, u_{2,2}, u_{3,1},\n      u_{3,2} \\right]^T.\n  \\end{equation*}\n  It then follows that the matrix that defines the system has the (symmetric) form\n  \\begin{equation*}\n    A =\n    \\begin{pmatrix}\n      -2 ( 1 + \\alpha ) & \\alpha & 1 & 0 & 0 & 0 \\\\\n      \\alpha & -2 ( 1 + \\alpha ) & 0 & 1 & 0 & 0 \\\\\n      1 & 0 & -2 ( 1 + \\alpha ) & \\alpha & 1 & 0 \\\\\n      0 & 1 & \\alpha & -2 ( 1 + \\alpha ) & 0 & 1 \\\\\n      0 & 0 & 1 & 0 & -2 ( 1 + \\alpha ) & \\alpha \\\\\n      0 & 0 & 0 & 1 & \\alpha &  -2 ( 1 + \\alpha ) \n    \\end{pmatrix}\n  \\end{equation*}\n  and the right hand side vector is solely in terms of the right hand\n  side data\n  \\begin{equation*}\n    h_x^2 \\sin ( \\pi y_j ) \\left( 2 - 6 x_i - (\\pi x_i)^2 (1 - x_i) \\right)\n  \\end{equation*}\n  as all the boundary data is trivial.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n\\item Explain the strategy for solving evolutionary PDEs using finite\n  differencing methods.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n  The domain is discretized using a set of points in space. These\n  points will usually be evenly spaced with a (representative) step\n  length $h$. The grid is extended to cover the time domain as well by\n  discretizing time using a time step $\\delta$. The derivatives\n  defining the differential equation are replaced with finite\n  difference approximations, the initial data fixes the unknowns at\n  the initial time, and the boundary conditions are imposed using\n  equivalent (or consistent) discrete boundary data. This implies a\n  finite difference formula which gives the unknown data at timestep\n  $n+1$ in terms of the known data at timestep $n$ and the boundary\n  data. 
\n\n  Depending on the finite difference formula derived, there may be a\n  relation between the time step $\\delta$ and space step $h$ for which\n  the algorithm is stable and convergent, which can be checked using\n  von Neumann analysis.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n\\item Write out a FTCS method for the equations\n  \\begin{align*}\n    \\partial_t y - \\partial_{xx} y & = \\sin(x), & x & \\in [0,1], &\n    y(t, 0) & = 0 = y(t, 1), \\\\\n    \\partial_t y + \\partial_{x} y & = e^{-(x-1/2)^2}, & x & \\in [0,1], &\n    y(t, 0) & = 0 = y(t, 1).\n  \\end{align*}\n  In both cases the initial data should be taken to be $y(0, x) =\n  f(x)$. Full details of the grid, the discrete initial and boundary\n  conditions, and the discrete update algorithm should be given.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n  We shall use $N+1$ points in space, denoted $x_i = i h$ with $i = 0,\n  \\dots, N$ and the space step $h = 1 / N$; this is an evenly spaced\n  grid with boundary points $x_0 = 0$ and $x_N = 1$. We will also\n  discretize time using $t^n = n \\delta$ where $\\delta$, the timestep,\n  is unconstrained. We will denote our approximate solution at $(x, t)\n  = (x_i, t^n)$ as $y_i^n \\approx y(x_i, t^n)$.\n\n  The initial data will be imposed by the equivalent discrete\n  condition $y_i^0 = f(x_i)$. The (trivial Dirichlet) boundary\n  conditions to be imposed in both cases will be given by the discrete\n  conditions\n  \\begin{equation*}\n    y_0^n = 0, \\qquad y_N^n = 0, \\qquad \\forall n \\ge 0.\n  \\end{equation*}\n\n  To form the discrete update algorithm we use forward differencing in\n  time (FT),\n  \\begin{equation*}\n    y_t|_{x=x_i, t = t^n} \\approx \\frac{y_i^{n+1} - y_i^{n}}{\\delta},\n  \\end{equation*}\n  and central differencing in space (CS), \n  \\begin{align*}\n    y_x|_{x=x_i, t = t^n} & \\approx \\frac{y_{i+1}^{n} - y_{i-1}^{n}}{2\n      h}, \\\\\n    y_{xx}|_{x=x_i, t = t^n} & \\approx \\frac{y_{i+1}^{n} + y_{i-1}^{n}\n      - 2 y_i^n}{h^2}.\n  \\end{align*}\n  We then replace the derivatives in the PDE with the finite\n  difference approximations and rearrange, to get for the heat equation\n  \\begin{align*}\n    \\frac{y_i^{n+1} - y_i^{n}}{\\delta} - \\frac{y_{i+1}^{n} + y_{i-1}^{n}\n      - 2 y_i^n}{h^2} & = \\sin(x_i) \\\\\n    \\Rightarrow \\qquad y_i^{n+1} = y_i^n + \\delta \\sin(x_i) + s \\left(\n      y_{i+1}^{n} + y_{i-1}^{n} - 2 y_i^n \\right),\n  \\end{align*}\n  where $s = \\delta / h^2$, and for the advection equation\n  \\begin{align*}\n    \\frac{y_i^{n+1} - y_i^{n}}{\\delta} + \\frac{y_{i+1}^{n} -\n      y_{i-1}^{n}}{2 h} & = e^{-(x_i - 1/2)^2} \\\\\n    \\Rightarrow \\qquad y_i^{n+1} & = y_i^n + \\delta e^{-(x_i - 1/2)^2}\n    - c \\left( y_{i+1}^n - y_{i-1}^n \\right)\n  \\end{align*}\n  where $c = \\delta / (2 h)$. In both cases these equations hold for\n  the interior points $n \\ge 0$ and $i = 1, \\dots, N-1$. 
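\n\n  As a concrete illustration of how the heat-equation update above can\n  be implemented (the code and its names are ours, and enforcing a\n  suitable stability relation between $\\delta$ and $h$ is left to the\n  caller, cf. the remark in the previous question):\n\\begin{verbatim}\n#include <cmath>\n#include <utility>\n#include <vector>\n\n// FTCS sketch for y_t - y_xx = sin(x) on [0,1], trivial Dirichlet\n// boundary conditions, initial data y(0,x) = f(x).\nstd::vector<double> ftcs_heat(double (*f)(double), int N,\n                              double delta, int steps) {\n    const double h = 1.0 / N, s = delta / (h * h);\n    std::vector<double> y(N + 1), ynew(N + 1);\n    for (int i = 0; i <= N; ++i) y[i] = f(i * h);  // y_i^0 = f(x_i)\n    y[0] = y[N] = 0.0;                             // boundary data\n    for (int n = 0; n < steps; ++n) {\n        for (int i = 1; i < N; ++i)                // interior points\n            ynew[i] = y[i] + delta * std::sin(i * h)\n                    + s * (y[i + 1] + y[i - 1] - 2.0 * y[i]);\n        ynew[0] = ynew[N] = 0.0;                   // y_0 = y_N = 0\n        std::swap(y, ynew);\n    }\n    return y;\n}\n\\end{verbatim}\n 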
The boundary\n  points are set using the above initial and boundary data.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n\\item Using von Neumann analysis, find when the BTCS method is stable\n  for the advection equation.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n  The finite difference formula for the BTCS method for the advection\n  equation\n  \\begin{equation*}\n    \\partial_t y + v \\partial_x y = 0\n  \\end{equation*}\n  is straightforwardly checked to be\n  \\begin{equation*}\n    y_i^{n+1} = y_i^n - c \\left( y_{i+1}^{n+1} - y_{i-1}^{n+1} \\right)\n  \\end{equation*}\n  where the convection number $c = v \\delta / (2 h)$ with $\\delta$ the\n  timestep and $h$ the space step. \n\n  We use the standard von Neumann analysis ansatz, which is that\n  $y_{\\ell}^k = q^k e^{\\alpha \\ell j h}$ where $q$ is the (unknown)\n  growth rate to be found, $\\alpha$ is related to the frequency of the\n  (generic) initial data (and hence can take any real value), $h$ is\n  the space step and $j = \\sqrt{-1}$.\n\n  Identifying $\\ell \\leftrightarrow i$ and $k \\leftrightarrow n$ and\n  substituting the ansatz into the finite difference equation we see\n  that\n  \\begin{equation*}\n    q = 1 - q c \\left( e^{\\alpha j h} - e^{- \\alpha j h} \\right).\n  \\end{equation*}\n  which can be written as\n  \\begin{align*}\n    q & = 1 - 2 j q c \\sin \\left(\\alpha h \\right) \\\\\n    \\Rightarrow \\qquad q & = \\frac{1}{1 + 2 j c \\sin \\left( \\alpha h\n      \\right)} \\\\\n    \\Rightarrow \\qquad |q|^2 & = \\left| \\frac{1 - 2 j c \\sin \\left(\n          \\alpha h \\right)}{1 + 4 c^2 \\sin^2 \\left( \\alpha h \\right)}\n    \\right|^2 \\\\\n    & = \\frac{1 + 4 c^2 \\sin^2 \\left( \\alpha h \\right)}{\\left[ 1 + 4\n        c^2 \\sin^2 \\left( \\alpha h \\right) \\right]^2} \\\\\n    & = \\frac{1}{1 + 4 c^2 \\sin^2 \\left( \\alpha h \\right)} \\\\\n    & \\le 1 \\qquad \\forall \\alpha.\n  \\end{align*}\n  Therefore BTCS is unconditionally stable for the advection equation.\n  % \n  \\begin{center}\n    \\rule{0.9\\textwidth}{.1pt}\n  \\end{center}\n  % \n\\end{enumerate}\n\n\\end{document}\n\n", "meta": {"hexsha": "ecde8642e27b3bc3d502495ef6827d5d21a6a858", "size": 9397, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Worksheets/Worksheet7_Solutions.tex", "max_stars_repo_name": "alistairwalsh/NumericalMethods", "max_stars_repo_head_hexsha": "fa10f9dfc4512ea3a8b54287be82f9511858bd22", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-01T09:15:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-01T09:15:04.000Z", "max_issues_repo_path": "Worksheets/Worksheet7_Solutions.tex", "max_issues_repo_name": "indranilsinharoy/NumericalMethods", "max_issues_repo_head_hexsha": "989e0205565131057c9807ed9d55b6c1a5a38d42", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Worksheets/Worksheet7_Solutions.tex", "max_forks_repo_name": "indranilsinharoy/NumericalMethods", "max_forks_repo_head_hexsha": "989e0205565131057c9807ed9d55b6c1a5a38d42", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-13T02:58:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-13T02:58:54.000Z", "avg_line_length": 37.438247012, "max_line_length": 80, "alphanum_fraction": 0.5997658827, 
"num_tokens": 3347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.589289982905649}}
{"text": "\\documentclass{article}\n\n\\usepackage{pgfplots} %http://www.ctan.org/pkg/pgfplots\n\\pgfplotsset{compat=newest, set layers=standard}\n\n\\usepgfplotslibrary{fillbetween}\n\\usetikzlibrary{intersections}\n\n\\title{Numerical Integration}\n\\begin{document}\n\\def\\horzbar{\\text{magic}}\n\n  \\pagenumbering{gobble}\n  \\maketitle\n  \\newpage\n  \\pagenumbering{arabic}\n\n\n\\section*{Trapeziod method}\n\n\\pgfplotsset{\n    integral axis/.style={\n        axis lines=middle,\n        enlarge y limits=upper,\n        axis equal image, width=12cm,\n        xlabel=$x$, ylabel=$y$,\n        ytick=\\empty,\n        xticklabel style={font=\\small, text height=1.5ex, anchor=north},\n        samples=100\n    },\n    integral/.style={\n            domain=2:10,\n            samples=9\n    },\n    integral fill/.style={\n            integral,\n            draw=none, fill=#1,\n            on layer=axis background\n        },\n        integral fill/.default=cyan!10,\n        integral line/.style={\n            integral,\n            very thick,\n            draw=#1\n        },\n        integral line/.default=black\n}\n\n\n\\begin{tikzpicture}[\n    % The function that is used for all the plots\n    declare function={f=x/5-cos(deg(x*1.85))/2+2;}\n]\n\\begin{axis}[\n    integral axis,\n    ymin=0,\n    xmin=0.75, xmax=11.25,\n    domain=1.5:10.5,\n    xtick={2,...,10},\n    xticklabels={$a=x_0$, $x_1$,,,$x_{j-1}$,$x_j$,,$x_{n-1}$,$b=x_n$},\n]\n% The function\n\\addplot [very thick, cyan!75!blue] {f} node [anchor=south] {$y=f(x)$};\n\n% The filled area under the approximate integral\n\\addplot [integral fill=cyan!15] {f} \\closedcycle;\n\n% The approximate integral\n\\addplot [integral line=black] {f};\n\n% The vertical lines between the segments\n\\addplot [integral, ycomb] {f};\n\n% The highlighted segment\n\\addplot [integral fill=cyan!35, domain=6:7, samples=2] {f} \\closedcycle;\n\\end{axis}\n\\end{tikzpicture}\n\n$h = \\frac{b-a}{N -1}$\n\nN - number of points\n\nTotal area (integral)\n\n$$\\int_a^b f(x)dx \\approx h\\sum_{k=1}^{N} \\frac{f(x_{k-1})+f(x_k)}{2}$$\n\n\\section*{Simpson's method}\n\n\\begin{tikzpicture}\n\\begin{axis}[axis lines=center, ytick=\\empty, \nymax=1.3, ymin=0, xmax=2.6,xmin=0,\nxtick={0.3,1.3,2.3}, xticklabels={$a$,$\\frac{a+b}{2}$,$b$},\naxis line style={-}, xlabel=$x$, ylabel=$y$]\n\\draw[name path=A, purple] (2.3,1.2) parabola (0.3,0.5);\n\\node[] at (1.6,1.2){$y=f(x)$};\n\\path[name path=B] (\\pgfkeysvalueof{/pgfplots/xmin},0) \n                   --(\\pgfkeysvalueof{/pgfplots/xmax},0);\n\\addplot[gray!50] fill between[of=A and B,soft clip={domain=0.3:2.3}];\n\\draw[name path=C] (1.3,0) -- (1.3,1.025);\n\\path[name intersections={of=A and C}] coordinate (midpoint) at (intersection-1);\n\\draw (0.3,0.5) ..controls ++(0.1,0.3) and ++(-0.2,-0.05) .. (midpoint)\n                node[pos=0.5,above left]{$y=p_2(x)$} \n                ..controls ++(0.2,0.05) and ++(-0.1,-0.05) .. 
(2.3,1.2);\n\\end{axis}\n\\end{tikzpicture}\n\n$h = \\frac{b-a}{N}$\n\nN - number of subintervals (must be even); the figure shows the case $N = 2$, where $h = \\frac{b-a}{2}$\n\n$$\\int_a^b f(x)dx \\approx \\frac{h}{3} \\sum_{k=1}^{N/2} \\{f(x_{2k-2})+4f(x_{2k-1})+f(x_{2k})\\}$$\n\n\n\\end{document}", "meta": {"hexsha": "511ced4816c129016be6eb3d70841848660b2619", "size": 2972, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/numerical_integration.tex", "max_stars_repo_name": "djeada/Numerical-Methodes", "max_stars_repo_head_hexsha": "45a5288f4719568a62a82374efbb3fc06d33ec46", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/numerical_integration.tex", "max_issues_repo_name": "djeada/Numerical-Methodes", "max_issues_repo_head_hexsha": "45a5288f4719568a62a82374efbb3fc06d33ec46", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/numerical_integration.tex", "max_forks_repo_name": "djeada/Numerical-Methodes", "max_forks_repo_head_hexsha": "45a5288f4719568a62a82374efbb3fc06d33ec46", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.5357142857, "max_line_length": 96, "alphanum_fraction": 0.6022880215, "num_tokens": 1073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5892899827588338}}
{"text": "\\section{$\\cHH^\\ast$ as a cohomology theory, and the fully relative $\\cap$ product}\npset 6 is now doe December 7. If you read any book about Poincar\\'{e} duality, there'll be an incomprehensible smear of stuff about cap products. I want to give you a cleaner explanation. The motivation is that it's the main step in the proof of Poincar\\'{e} duality.\n\nLet $X$ be any space, and let $K\\subseteq X$ be a closed subspace. We'll write $\\cHH^p(K)=\\varinjlim_{U\\in\\mathcal{U}_K} H^p(U)$ (and call it the \\v{C}ech cohomology -- but there's a different definition, although in certain cases they are isomorphic). We should write down a relative version of this as well. Suppose $L\\subseteq K$ be closed. Define $\\cHH^p(K,L)=\\varinjlim_{(U,V)\\in\\mathcal{U}_{K,L}} H^p(U,V)$ where $K\\supseteq L$ and $K\\subseteq U$, and $L\\subseteq V$ such that $V\\subseteq U$. Here $\\mathcal{U}_{K,L}$ is the directed set:\n\\begin{equation*}\n\\mathcal{U}_{K,L}\\text{ is the set of }\\xymatrix{K\\ar@{^(->}[d] & L\\ar@{^(->}[d]\\ar@{^(->}[l] \\\\ U & V\\ar@{^(->}[l]}\n\\end{equation*}\nsuch that $U$ and $V$ are open.\n\\begin{theorem}\nThere is a lexseq:\n\\begin{equation*}\n\\cdots\\to\\cHH^p(K,L)\\to\\cHH^p(K)\\to\\cHH^p(L)\\xrightarrow{\\delta}\\cHH^{p+1}(K,L)\\to\\cdots\n\\end{equation*}\n\\end{theorem}\nAll the maps in the lexseq are mysterious.\n\nAlso, excision holds:\n\\begin{theorem}[Excision]\nSuppose $A,B\\subseteq X$ are closed. Then $\\cHH^p(A\\cup B,A)\\cong\\cHH^p(B,A\\cap B)$. This is a form of excision that doesn't hold for ordinary cohomology.\n\\end{theorem}\nSo \\v{C}ech cohomology is better suited for talking about closed sets. \n\nWe defined $\\cap:\\cHH^p(K)\\otimes H_n(X,X-K)\\to H_q(X,X-K)$, such that $p+q=n$. Fix $x_K\\in H_n(X,X-K)$. Then capping with $x_K$ gives a map $\\cap x_k:\\cHH^p(K)\\to H_q(X,X-K)$.\n\nNow, if $K\\supseteq L$, then $X-K\\subseteq X-L$, so I get $H_n(X,X-K)\\xrightarrow{i_\\ast} H_n(X,X-L)$. Then $x_K\\mapsto x_L$. We can now extend our lexseq:\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots\\ar[r] & \\cHH^p(K,L)\\ar[r] & \\cHH^p(K)\\ar[r]\\ar[d]_{-\\cap x_k} & \\cHH^p(L)\\ar[r]^\\delta\\ar[d]_{-\\cap x_L} & \\cHH^{p+1}(K,L)\\ar[r] & \\cdots\\\\\n\t& H_q(X-L,X-K)\\ar[r]& H_q(X,X-K)\\ar[r] & H_q(X,X-L)\\ar[r]^\\partial & H_{q-1}(X-L,X-K)\\ar[r] & \\cdots\n}\n\\end{equation*}\nWhere the bottom thing comes from the lexseq of the triple (see exercise 9 on your pset) $X-K\\subseteq X-L\\subseteq X$ (I think). Then we can extend our theorem on the lexseq:\n\\begin{theorem}\nThere is a lexseq and a ladder:\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots\\ar[r] & \\cHH^p(K,L)\\ar[r]\\ar@{-->}[d]_{-\\cap x_{K}} & \\cHH^p(K)\\ar[r]\\ar[d]_{-\\cap x_k} & \\cHH^p(L)\\ar[r]^\\delta\\ar[d]_{-\\cap x_L} & \\cHH^{p+1}(K,L)\\ar[r]\\ar[d]_{-\\cap x_{K}} & \\cdots\\\\\n\t\\cdots\\ar[r] & H_q(X-L,X-K)\\ar[r]& H_q(X,X-K)\\ar[r] & H_q(X,X-L)\\ar[r]^\\partial & H_{q-1}(X-L,X-K)\\ar[r] & \\cdots\n}\n\\end{equation*}\n\\end{theorem}\nWhat I have to do is define a cap product of the following form (bottom row):\n\\begin{equation*}\n\\xymatrix{\n\t\\cHH^p(K)\\otimes H_n(X,X-K)\\ar[r]^{\\cap} & H_q(X,X-K)\\\\\n\t\\cHH^p(K,L)\\otimes H_n(X,X-K)\\ar[u]\\ar[r]^{\\cap} & H_q(X-L,X-K)\n}\n\\end{equation*}\n(where $p+q=n$)\n\nI want to define this fully relative cap product $\\cHH^p(K,L)\\otimes H_n(X,X-K)\\to H_q(X-L,X-K)$. 
We'll use this in the inductive proof of (some) important theorem.\n\nOur map $\\cHH^p(K)\\otimes H_n(X,X-K)\\to H_q(X,X-K)$ came from $S^p(U)\\otimes S_n(U,U-K)\\to S_q(U,U-K)$ where $U\\supseteq K$, defined via $\\beta\\otimes\\sigma\\mapsto\\beta(\\sigma\\circ\\alpha_p)\\cdot(\\sigma\\circ\\omega_q)$. I'm hoping to get:\n\\begin{equation*}\n\\xymatrix{\n\tS^p(U)\\otimes S_n(U,U-K)\\ar[r] & S_q(U,U-K)\\\\\n\tS^p(U,V)\\otimes S_n(U-L)/S_n(U-K)\\ar[r]\\ar[u] & S_q(U-L)/S_q(U-K)\\ar[u]\n}\n\\end{equation*}\nwhere again we have inclusions ($U,V$ open and $K,L$ closed):\n\\begin{equation*}\n\\xymatrix{K\\ar@{^(->}[d] & L\\ar@{^(->}[d]\\ar@{^(->}[l] \\\\ U & V\\ar@{^(->}[l]}\n\\end{equation*}\nThe bottom map $S^p(U,V)\\otimes S_n(U-L)/S_n(U-K)\\to S_q(U-L)/S_q(U-K)$ makes sense. We can evaluate a cochain that kills everything on $V$. This means that we can add in $S_n(V)$ to get $S^p(U,V)\\otimes (S_n(U-L)+S_n(V))/S_n(U-K)\\to S_q(U-L)/S_q(U-K)$ by sending $\\beta\\otimes\\tau\\mapsto 0$ where $\\tau:\\Delta^n\\to V$. This means that the diagram:\n\\begin{equation*}\n\\xymatrix{\n\tS^p(U)\\otimes S_n(U,U-K)\\ar[r] & S_q(U,U-K)\\\\\n\tS^p(U,V)\\otimes (S_n(U-L)+S_n(V))/S_n(U-K)\\ar[r]\\ar[u] & S_q(U-L)/S_q(U-K)\\ar[u]\n}\n\\end{equation*}\ncommutes. It's not that far off from where we want to go.\n\nNow, $(U-L)\\cup V=U$. I have this covering of $U$ by two open sets. In $S_n(U-L)+S_n(V)$ we're taking the sum of $n$-chains. We have a map $S_\\ast(U-L)+S_\\ast(V)\\to S_\\ast(U)$. We have already worked through this -- the locality principle! This tells us that $S_\\ast(U-L)+S_\\ast(V)\\to S_\\ast(U)$ is a homotopy equivalence. Hence we can extend our diagram:\n\\begin{equation*}\n\\xymatrix{\n\tS^p(U)\\otimes S_n(U,U-K)\\ar[r] & S_q(U,U-K)\\\\\n\tS^p(U,V)\\otimes (S_n(U-L)+S_n(V))/S_n(U-K)\\ar[r]\\ar[d]^{\\simeq}\\ar[u] & S_q(U-L)/S_q(U-K)\\ar[u]\\\\\n\tS^p(U,V)\\otimes S_n(U)/S_n(U-K) & \n}\n\\end{equation*}\nWe want the homology of $S_n(U)/S_n(U-K)$ to approximate $H_n(X,X-K)$.\n\\begin{claim}\nThere is an isomorphism $H_n(S_\\ast(U)/S_\\ast(U-K))=H_n(U,U-K)\\to H_n(X,X-K)$.\n\\end{claim}\n\\begin{proof}\nThis is exactly excision! Remember our recasting of excision in the previous lecture.\n\\end{proof}\nThis means that what we've constructed really \\emph{is} what we want! We now have our large lexseq:\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots\\ar[r] & \\cHH^p(K,L)\\ar[r]\\ar@{-->}[d]_{-\\cap x_{K}} & \\cHH^p(K)\\ar[r]\\ar[d]_{-\\cap x_K} & \\cHH^p(L)\\ar[r]^\\delta\\ar[d]_{-\\cap x_L} & \\cHH^{p+1}(K,L)\\ar[r]\\ar[d]_{-\\cap x_{K}} & \\cdots\\\\\n\t\\cdots\\ar[r] & H_q(X-L,X-K)\\ar[r]& H_q(X,X-K)\\ar[r] & H_q(X,X-L)\\ar[r]^\\partial & H_{q-1}(X-L,X-K)\\ar[r] & \\cdots\\\\\n\t\\cdots\\ar[r] & H_q(U-L,U-K)\\ar[r]\\ar[u]^{\\cong,\\text{ five-lemma}} & H_q(U,U-K)\\ar[r]\\ar[u]^{\\cong,\\text{ locality}} & H_q(U,U-L)\\ar[r]\\ar[u]^{\\cong,\\text{ locality}} & \\cdots\n}\n\\end{equation*}\nAs desired.\n\nThe diagram:\n\\begin{equation*}\n\\xymatrix{\n\t\\cHH^p(L)\\ar[r]^\\delta\\ar[d]^{-\\cap x_L} & \\cHH^{p+1}(K,L)\\ar[d]^{-\\cap x_K}\\\\\n\tH_q(X,X-L)\\ar[r]^{\\partial} & H_{q-1}(X-L,X-K)\n}\n\\end{equation*}\nsays that:\n\\begin{equation*}\n(\\delta b)\\cap x_K=\\partial(b\\cap x_L)\n\\end{equation*}\nIt's rather wonderful! You have a decreasing sequence below and an increasing one above.\n\nI want to reformulate all of this in a more useful fashion, from Mayer-Vietoris. 
We had two different proofs, one from locality, and another one that we'll remind you of: given a ladder of lexseqs\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots\\ar[r] & A_n\\ar[r]\\ar[d] & B_n\\ar[r]\\ar[d]^\\cong & C_n\\ar[r]\\ar[d] & A_{n-1}\\ar[r]\\ar[d] & \\cdots\\\\\n\t\\cdots\\ar[r] & A_n^\\prime\\ar[r] & B_n^\\prime\\ar[r] & C^\\prime_n\\ar[r] & A^\\prime_{n-1}\\ar[r] & \\cdots\n}\n\\end{equation*}\nin which the maps $B_n\\to B_n^\\prime$ are isomorphisms, you get a lexseq:\n\\begin{equation*}\n\\cdots\\to C_{n+1}\\to C^\\prime_{n+1}\\oplus A_n\\to A^\\prime_n\\xrightarrow{\\partial} C_n\\to\\cdots\n\\end{equation*}\nYou can use this to prove Mayer-Vietoris -- I will do this in a special case. (This is exactly what I did in a homework assignment\\footnote{Suppose $A\\subseteq X$ is a subspace of $X$. Then there is a lexseq in reduced homology $\\cdots\\to \\widetilde{ H}_n(A)\\to \\widetilde{ H}_n(X)\\to H_n(X,A)\\to\\widetilde{ H}_{n-1}(A)\\to\\cdots$ that can be obtained by using the lexseq in homology of the sexseq $0\\to\\widetilde{S}_\\ast(A)\\to\\widetilde{S}_\\ast(X)\\to S_\\ast(X,A)\\to 0$.\n\nNow suppose $X=A\\cup B$. Consider the ladder:\n\\begin{equation*}\n\\xymatrix@C=10pt{\\cdots\\ar[r] & H_{n+1}(A,A\\cap B)\\ar[r]\\ar[d] & \\widetilde{ H}_n(A\\cap B)\\ar[r]\\ar[d] & \\widetilde{ H}_n(A)\\ar[r]\\ar[d] & H_n(A,A\\cap B)\\ar[r]\\ar[d] & \\cdots\\\\\n\\cdots\\ar[r] & H_{n+1}(X,B)\\ar[r] & \\widetilde{ H}_n(B)\\ar[r] & \\widetilde{ H}_n(X)\\ar[r] & H_n(X,B)\\ar[r] & \\cdots}\n\\end{equation*}\nThe first and fourth maps as shown are isomorphisms because of excision. The lexseq from the ladder (see above) therefore yields the Mayer-Vietoris sequence $\\cdots\\to \\widetilde{ H}_n(A\\cap B)\\to \\widetilde{ H}_n(B)\\oplus \\widetilde{ H}_n(A)\\to \\widetilde{ H}_n(X)\\to \\widetilde{ H}_{n-1}(A\\cap B)\\to\\cdots$.}!) We have a ladder of lexseqs:\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots\\ar[r] & H_q(X,X-A\\cup B)\\ar[r]\\ar[d] & H_q(X,X-A)\\ar[r]\\ar[d] & H_{q-1}(X-A,X-A\\cup B)\\ar[r]\\ar[d]^{\\cong,\\text{ excision}} & \\cdots\\\\\n\t\\cdots\\ar[r] & H_q(X,X-B)\\ar[r] & H_q(X,X-A\\cap B)\\ar[r] & H_{q-1}(X-A\\cap B,X-B)\\ar[r] & \\cdots\n}\n\\end{equation*}\nThis means that (using the lexseq of the ladder) you have a lexseq:\n\\begin{equation*}\n\\cdots\\to H_q(X,X-A\\cup B)\\to H_q(X,X-A)\\oplus H_q(X,X-B)\\to H_q(X,X-A\\cap B)\\to H_{q-1}(X,X-A\\cup B)\\to\\cdots\n\\end{equation*}\nThis can be used to give a lexseq for \\v{C}ech cohomology:\n\\begin{equation*}\n\\cdots\\to \\cHH^p(A\\cup B)\\to \\cHH^p(A)\\oplus \\cHH^p(B)\\to \\cHH^p(A\\cap B)\\to \\cHH^{p+1}(A\\cup B)\\to\\cdots\n\\end{equation*}\nso that we're going to get a commutative Mayer-Vietoris ladder:\n\\begin{theorem}\nThere's a ``Mayer-Vietoris'' ladder:\n\\begin{equation*}\n\\xymatrix{\n\t\\to\\cHH^p(A\\cup B)\\ar[r]\\ar[d]^{-\\cap x_{A\\cup B}} & \\cHH^p(A)\\oplus \\cHH^p(B)\\ar[r]\\ar[d]^{(-\\cap x_A)\\oplus(-\\cap x_B)} & \\cHH^p(A\\cap B)\\ar[r]\\ar[d]^{-\\cap x_{A\\cap B}} & \\cHH^{p+1}(A\\cup B)\\ar[r]\\ar[d]^{-\\cap x_{A\\cup B}} & \\cdots\\\\\n\t\\to H_q(X,X-A\\cup B)\\ar[r]& H_q(X,X-A)\\oplus H_q(X,X-B)\\ar[r]& H_q(X,X-A\\cap B)\\ar[r]& H_{q-1}(X,X-A\\cup B)\\ar[r]&\\cdots\n}\n\\end{equation*}\nwhere I have four homology classes $x_{A \\cup B},x_A,x_B,x_{A\\cap B}$ that are compatible, in the sense that the following diagram commutes:\n\\begin{equation*}\n\\xymatrix{\n\t & H_n(X,X-A)\\ar[dr] & \\\\\n\tH_n(X,X-A\\cup B)\\ar[ur]\\ar[dr] & & H_n(X,X-A\\cap B)\\\\\n\t & H_n(X,X-B)\\ar[ur]\n}\n\\end{equation*}\n\\end{theorem}\nThis is the most complicated blackboard for the rest of the course. 
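A quick degree check on the ladder: with $p+q=n$, the connecting map sends $\\cHH^p(A\\cap B)$ to $\\cHH^{p+1}(A\\cup B)$, and capping with $x_{A\\cup B}\\in H_n(X,X-A\\cup B)$ lands in degree $n-(p+1)=q-1$, which is exactly the $H_{q-1}(X,X-A\\cup B)$ appearing in the bottom row. 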
Also xymatrix is not compiling properly because the diagram is too big!\n", "meta": {"hexsha": "92a4ba9662c15b479bf79c63117a500a8ec53151", "size": 9631, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-905/lec-33-fully-relative-cap-product.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "old-905/lec-33-fully-relative-cap-product.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "old-905/lec-33-fully-relative-cap-product.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 59.0858895706, "max_line_length": 544, "alphanum_fraction": 0.6441698681, "num_tokens": 3921, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8128673269042767, "lm_q1q2_score": 0.5892233734108997}}
{"text": "\n\\section{Introduction}\n\n\\noindent Catalan numbers were introduced with this name by\nEugene Catalan in $1838$, although they have been used even before by Ming Antu\nin $1730$. Let $C_{n}$ be the $n$-th Catalan number, then the following\ntable shows the first fifteen element of the infinite sequence\n$\\left(C_{n}\\right)_{n\\in\\mathbb{N}}$,\n\\begin{displaymath}\n    \\footnotesize\n    \\begin{array}{c|ccccccccccccccc}\n        n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\\\\n        \\hline\n        C_{n} & 1 & 1 & 2 & 5 & 14 & 42 & 132 & 429 & 1430 & 4862 & 16796 & 58786 & 208012 & 742900 & 2674440\n    \\end{array}\n\\end{displaymath} \nknown as $A000108$\\footnote{Sequences mentioned in this paper are\nidentified by keys of the form $Annnnnn$ that refer to entries in the\nencyclopedia.}in the Online Encyclopedia of Integer Sequences \\cite{sloane:oeis}.\n\nA comprehensive manuscript having origin in $1970$, entirely devoted to these\nnumbers, has been recently published by Stanley \\cite{stanley:2015} where it is\npossible to find their history, formal characterizations and $214$\ncombinatorial interpretations.  For the sake of clarity, we report three\nclasses of objects counted by $C_{n}$ and many more can be found in\n\\cite{ConcreteMath}: first, binary trees with $n$ nodes; second, admissible\nbracketing of a string with length $n+1$ using $n-1$ pairs of curly braces;\nthird, Dyck paths of length $2n$ using raising $\\diagup$ and falling\n$\\diagdown$ steps.\n\nIt is possible to augment $C_{n}$ with an additional dimension, obtaining\ncoefficients $C_{n,k}$ defining a lower triangular, infinite matrix\n$\\mathcal{C}$ ($A033184$): in \\autoref{tab:catalan:array} we report the upper\nchunk of that matrix where a coefficient at row $n$ and column $k$ can be\ninterpreted as the number of Dyck paths of length $2(n+1)$ with $k$ returns to\nthe ground, start and end points excluded -- conventionally, indexes are\n$0$-based.\n\n\\input{catalan/catalan-traditional-standard-ignore-negatives-centered-colouring-127-rows-mod2-partitioning-include-matrix.tex}\n\nIn the literature, matrix $\\mathcal{C}$ is known as \\emph{Catalan triangle} and\nit turns out to be a Riordan array. Those arrays form a family of matrices\nenjoying nice properties and, looking at them from the perspective of abstract\nalgebra, they are a \\textit{group}. Formally, a Riordan array $\\mathcal{R}$ is\ndenoted by a pair of functions $d$ and $h$ which provides a unique\ncharacterization for the generic coefficient $d_{n,k}\\in\\mathcal{R}$ lying at\nrow $n$ and column $k$, by means of the $n$-th coefficient extraction from\n$\\displaystyle d_{n,k} = [t^{n}]d(t)h(t)^{k}$.\nRiordan arrays have been introduced by Shapiro et al. 
\\cite{shapiro:1991} in $1991$;\nthereafter, in $1994$, Sprugnoli \\cite{sprugnoli:1991} pointed out their importance\nfor solving combinatorial sums.\n\nIn this paper we apply a modular transformation to $\\mathcal{C}$ which maps\nevery coefficient to its remainder with respect to the congruence relation\n$\\equiv_{2}$: the resulting matrix is denoted by $\\mathcal{C}_{\\equiv_{2}}$ and\nis graphically represented in\n\\autoref{fig:catalan-traditional-standard-ignore-negatives-centered-colouring-127-rows-mod2-partitioning-triangle}.\nThis is a parallel study of the same modular transformation applied to the\nPascal array $\\mathcal{P}$, represented in\n\\autoref{fig:pascal-standard-handle-negatives-centered-colouring-127-rows-mod2-partitioning-triangle},\nwhere the coefficient lying at row $n$ and column $k$ counts the number of\nsubsets with $k$ elements out of a set with $n$ elements, ${{n}\\choose{k}}$ in\nsymbols.  In both pictures, we center the root of the triangle and represent\nevery coefficient with a colored dot, using colors blue and orange for\n\\textcolor{blue}{\\emph{even}} and \\textcolor{orange}{\\emph{odd}} remainders,\nrespectively.\n\nAlthough the array $\\mathcal{P}_{\\equiv_{2}}$ has already been deeply studied,\nbecause of both (i)~its relation to congruences of binomial coefficients and\n(ii)~its connection with the Sierpinski gasket \\cite{sokolov,\nstewart:four:encounters:sierpinski}, to the best of our knowledge the array\n$\\mathcal{C}_{\\equiv_{2}}$ appears to be new, and our approach interleaves with\nstudies about modular transformations of Catalan numbers\n\\cite{alter:kubota:prime:power:catalan:divisibility,\negecioglu:parity:via:lattice:paths,\nkonvalinka:divisibility:generalized:catalan:numbers}, where their remainders \nhave been characterized both arithmetically and combinatorially.  
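\n\nAs a concrete, deliberately naive illustration of the objects in play, the\nfollowing Python sketch tabulates the first rows of $\\mathcal{C}$ directly from\nthe coefficient extraction $c_{n,k}=[t^{n}]d(t)h(t)^{k}$ and then reduces every\ncoefficient modulo $2$; here we assume the Riordan pair\n$(d(t),h(t))=(C(t),tC(t))$, with $C(t)$ the generating function of Catalan\nnumbers, which reproduces $A033184$ (the function names below are ours and\npurely illustrative):\n\\begin{verbatim}\n# Naive tabulation of the Catalan triangle and its mod-2 reduction,\n# assuming the Riordan pair (d, h) = (C(t), t*C(t)).\ndef catalan_series(n):\n    # coefficients of C(t), using C(t) = 1 + t*C(t)^2\n    c = [1] + [0] * n\n    for m in range(1, n + 1):\n        c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))\n    return c\n\ndef series_mul(a, b, n):\n    # product of two power series truncated at degree n\n    p = [0] * (n + 1)\n    for i in range(n + 1):\n        for j in range(n + 1 - i):\n            p[i + j] += a[i] * b[j]\n    return p\n\ndef catalan_triangle(rows):\n    n = rows - 1\n    c = catalan_series(n)             # d(t) = C(t)\n    col, cols = c[:], [c[:]]\n    for k in range(1, rows):\n        col = series_mul(col, c, n)   # C(t)^(k+1)\n        cols.append(col[:])\n    # c_{m,k} = [t^(m-k)] C(t)^(k+1)\n    return [[cols[k][m - k] for k in range(m + 1)] for m in range(rows)]\n\nfor row in catalan_triangle(6):\n    print(row, [x % 2 for x in row])\n\\end{verbatim}\n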
\n\nThe present study deepens our previous work \\cite{merlini:nocentini:lecco} and\ntakes the long track toward to a formal characterization of\n$\\mathcal{C}_{\\equiv_{2}}$; first, using the Riordan array definition of\n$\\mathcal{C}$ we find different closed formulae for the coefficient\n$c_{n,k}\\in\\mathcal{C}$; second, we prove congruences over coefficients lying\nin different regions of the matrix; third, we use such congruences to implement\nan efficient procedure that builds $\\mathcal{C}_{\\equiv_{2}}$ inductively.\nTo support our work, a bunch of functions have been implemented using the\nPython programming language on top of the Sage framework \\cite{sage}, that\nallow us (i)~to table a Riordan array the hard way by series expansion, (ii)~to\ngenerate the \\LaTeX\\,code of graphical representations of triangles and, finally,\n(iii)~to build $\\mathcal{C}_{\\equiv_{2}}$ in a cheaper way.\n\n\\input{catalan/catalan-traditional-standard-ignore-negatives-centered-colouring-127-rows-mod2-partitioning-include-figure.tex}\n\\input{pascal/pascal-standard-handle-negatives-centered-colouring-127-rows-mod2-partitioning-include-figure}\n\nOur work is organized as follows: the current section introduces Catalan\nnumbers and the concept of Riordan arrays together with the modular\ntransformation under study; \\Cref{sec:previous:work} recalls important results\nfrom the existing literature about the application of modular arithmetic to\ninfinite, lower triangular matrices; \\Cref{sec:C:precisely} formally defines\nthe Catalan array $\\mathcal{C}$ and presents some facts about it;\n\\Cref{sec:catalan:characterization} takes the matrix $\\mathcal{C}_{\\equiv_{2}}$\napart in order to reason on it piece-wise; finally, \\Cref{sec:conclusions}\nconcludes with final comments and leaves open questions.\n\n\\vfill\n", "meta": {"hexsha": "bcee9571df816b11ae5b7104f26261138b4317ff", "size": 6278, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modular-article/preamble.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "modular-article/preamble.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modular-article/preamble.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.2264150943, "max_line_length": 126, "alphanum_fraction": 0.771264734, "num_tokens": 1748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8128673246376009, "lm_q1q2_score": 0.5892233717678538}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage[left=1cm,right=1cm,\n    top=2cm,bottom=2cm,bindingoffset=0cm]{geometry}\n\\usepackage{braket}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\\usepackage[T2A]{fontenc}\n\\usepackage[utf8x]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{graphicx}\n\\parindent=0.5cm\n\n\\usepackage{hyperref} \n\\usepackage{indentfirst}\n\n\\numberwithin{equation}{section}\n%\\usepackage{showkeys}\n\n\\begin{document}\n\\section*{MSAI Statistics \\& Probability \u2013 Week 6 Seminar \\& HW}\\\\\n\n\\textbf{Problem 1:} Consider an Erd\\H{o}s-R\\'{e}nyi random graph $G(n,p)$ on $n$ vertices (that is, for any two vertices, there is an edge between them with probability $p$ and no edge with probability $1-p$; all edges are independent of each other). Find the \\textbf{variance} of the number of triangles in $G(n,p)$ (triples of vertices all pairwise connected with an edge). (In previous week HW you had to find the \\textbf{expectation}).\n\\\\\n\n\\textbf{Problem 2:} Find $\\textrm{Var}(\\xi)$ if $\\xi$ have the following distribution function:\n$$F(x)=\n\\begin{cases}\n0, ~~~ x<-2,\\\\\n1/5, ~~~-2\\leq x<1 \\\\\nx^2/4, ~~~1\\leq x<2 \\\\\n1, ~~~ x\\geq 2\n\\end{cases}$$\n(In previous week HW you had to find the expectation of $\\xi$).\n\\\\\n\n\\textbf{Problem 3:} Can you think of two such \\textbf{dependent} random variables $\\xi,~\\eta$, such that $\\textrm{Cov}(\\xi,\\eta)=0$?\n\n\\end{document}\n", "meta": {"hexsha": "4ff231204a4907d4e63ebe844fdcb636ec3c892b", "size": 1403, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week06_variance/Week06_HW_Theory.tex", "max_stars_repo_name": "girafe-ai/msai-statistics", "max_stars_repo_head_hexsha": "c9de8ca20bbb9f266e06598d376be50c35f00756", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-04-07T05:10:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T15:58:35.000Z", "max_issues_repo_path": "week06_variance/Week06_HW_Theory.tex", "max_issues_repo_name": "girafe-ai/msai-statistics", "max_issues_repo_head_hexsha": "c9de8ca20bbb9f266e06598d376be50c35f00756", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-08T17:08:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-08T17:08:53.000Z", "max_forks_repo_path": "week06_variance/Week06_HW_Theory.tex", "max_forks_repo_name": "girafe-ai/msai-statistics", "max_forks_repo_head_hexsha": "c9de8ca20bbb9f266e06598d376be50c35f00756", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-03-25T15:23:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T14:28:21.000Z", "avg_line_length": 33.4047619048, "max_line_length": 439, "alphanum_fraction": 0.7141838917, "num_tokens": 473, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5892233701248079}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{ambifuwb}\n\\section*{\\hspace*{-1.6cm} ambifuwb}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nWide-band ambiguity function.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[waf,tau,theta] = ambifuwb(x)\n[waf,tau,theta] = ambifuwb(x,fmin,fmax)\n[waf,tau,theta] = ambifuwb(x,fmin,fmax,N)\n[waf,tau,theta] = ambifuwb(x,fmin,fmax,N,trace)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty ambifuwb} calculates the asymetric wide-band ambiguity\n        function, defined as \n\\begin{eqnarray*}\n\\Xi_x(a,\\tau) = \\frac{1}{\\sqrt{a}}\\ \\int_{-\\infty}^{+\\infty} x(t)\\\nx^*(t/a-\\tau)\\ dt = \\sqrt{a} \\int_{-\\infty}^{+\\infty} X(\\nu)\\ X^*(a\\nu)\\\ne^{j2\\pi a \\tau\\nu}\\ d\\nu. \n\\end{eqnarray*}\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty x}     & signal (in time) to be analyzed (the analytic associated\n                signal is considered), of length {\\ty Nx} &\\\\\n        {\\ty fmin, fmax} & respectively lower and upper frequency bounds of\n                the analyzed signal. When specified, these parameters fix\n                the equivalent frequency bandwidth (both are expressed in\n                Hz)             & {\\ty 0, 0.5}\\\\\n        {\\ty N}     & number of Mellin points. This number is needed when {\\ty fmin}\n                and {\\ty fmax} are forced     & {\\ty Nx}\\\\\n        {\\ty trace} & if non-zero, the progression of the algorithm is shown\n                                        & 0\\\\\n\\hline\n        {\\ty waf}    & matrix containing the coefficients of the ambiguity\n                function. X-coordinate corresponds to the dual variable of \n                scale parameter ; Y-coordinate corresponds to time delay,\n                dual variable of frequency.\\\\\n        {\\ty tau}   & X-coordinate corresponding to time delay\\\\\n        {\\ty theta} & Y-coordinate corresponding to the $\\log(a)$ variable,\n\t\twhere $a$ is the scale\\\\\n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\nWhen called without output arguments, {\\ty ambifuwb} displays the squared\nmodulus of the ambiguity function by means of {\\ty contour}.\n\\end{minipage}\n\n\\newpage\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nConsider a BPSK signal (see {\\ty anabpsk}) of 256 points, with a keying\nperiod of 8 points, and analyze it with the wide-band ambiguity\nfunction\\,:\n\\begin{verbatim}\n         sig=anabpsk(256,8);\n         ambifunb(sig);\n\\end{verbatim}\nThe result, to be compared with the one obtained with the narrow-band\nambiguity function, presents a thin high peak at the origin of the\nambiguity plane, but with more important sidelobes than with the\nnarrow-band ambiguity function. 
It means that the narrow-band assumption is\nnot very well adapted to this signal, and that the ambiguity in the\nestimation of its arrival time and mean frequency is not so small.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nambifunb.\n\\end{verbatim}\n\\end{minipage}\n\n", "meta": {"hexsha": "0657906cbcf9df94ebc2b9c79ec02ce6931989cc", "size": 3408, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/ambifuwb.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/ambifuwb.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/ambifuwb.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 32.1509433962, "max_line_length": 84, "alphanum_fraction": 0.6511150235, "num_tokens": 1064, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673269042765, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.58922336857956}}
{"text": "\\section{Grothendieck's construction of Chern classes}\n%No class Monday because I'll be out!\n%We have a few lectures left and I'll use them to talk about characteristic classes, and we'll see some applications.\n%Maybe we'll have time to give a construction of Steenrod operations as well.\n\\subsection{Generalities on characteristic classes}\nWe would like to apply algebraic techniques to study $G$-bundles on a space.\nLet $A$ be an abelian group, and $n\\geq 0$ an integer.\n\\begin{definition}\n    A \\emph{characteristic class} for principal $G$-bundles (with values in\n    $H^n(-;A)$) is a natural transformation of functors $\\Top\\to \\Ab$:\n    $$\\Bun_G(X) \\xar{c} H^n(X;A)$$\n    Concretely: if $P\\to Y$ is a principal $G$-bundle over a space $X$, and\n    $f:X\\to Y$ is a continuous map of spaces, then\n    $$c(f^\\ast P) = f^\\ast c(P).$$\n\\end{definition}\nThe motivation behind this definition is that $\\Bun_G(X)$ is still rather\nmysterious, but we have techniques (developed in the last section) to compute\nthe cohomology groups $H^n(X;A)$. It follows by construction that if two\nbundles over $X$ have two different characteristic classes, then they cannot be\nisomorphic. Often, we can use characteristic classes to distinguish a given\nbundle from the trivial bundle.\n\n\\begin{example}\n    The Euler class takes an oriented real $n$-plane vector bundle (with a\n    chosen orientation) and produces an $n$-dimensional cohomology class\n    $e:\\Vect^{or}_n(X) = \\Bun_{SO(n)}(X)\\to H^n(X;\\Z)$. This is a\n    characteristic class. To see this, we need to argue that if $\\xi\\downarrow\n    X$ is a principal $G$-bundle, we can pull the Euler class back via $f:X \\to\n    Y$. The bundle $f^\\ast\\xi\\downarrow Y$ has a orientation if $\\xi$ does, so\n    it makes sense to even talk about the Euler class of $f^\\ast\\xi$. Since all\n    of our constructions were natural, it follows that $e(f^\\ast\\xi) = f^\\ast\n    e(\\xi)$.\n\n    Similarly, the {mod $2$ Euler class} is $e_2:\\Vect_n(X) = \\Bun_{O(n)}(X)\n    \\to H^n(X;\\Z/2\\Z)$ is another Euler class. Since everything has an\n    orientation with respect to $\\Z/2\\Z$, the mod $2$ Euler class is\n    well-defined. \n\\end{example}\n\nBy our discussion in \\S \\ref{classifying-g-bundles}, we know that $\\Bun_G(X) =\n[X,BG]$. Moreover, as we stated in Theorem \\ref{brown-rep}, we know that\n$H^n(X;A) = [X,K(A,n)]$ (at least if $X$ is a CW-complex). One moral reason for\ncohomology to be easier to compute is that the spaces $K(A,n)$ are infinite\nloop spaces (i.e., they can be delooped infinitely many times). It follows from\nthe Yoneda lemma that characteristic classes are simply maps $BG\\to K(A,n)$,\ni.e., elements of $H^n(BG;A)$.\n\n\\begin{example}\n    The Euler class $e$ lives in $H^n(BSO(n);\\Z)$; in fact, it is $e(\\xi)$, the\n    Euler class of the universal oriented $n$-plane bundle over $BSO(n)$. A\n    similar statement holds for $e_2\\in H^n(BO(n);\\Z/2\\Z)$. For instance, if\n    $n=2$, then $SO(2) = S^1$. It follows that\n    $$BSO(2) = BS^1 = \\CP^\\infty.$$\n    We know that $H^\\ast(\\CP^\\infty;\\Z) = \\Z[e]$ --- it's the polynomial\n    algebra on the ``universal'' Euler class! 
Similarly, $O(1) = \\Z/2\\Z$, so\n    $$BO(1) = B\\Z/2 = \\RP^\\infty.$$\n    We know that $H^\\ast(\\RP^\\infty;\\FF_2) = \\FF_2[e_2]$ --- as above, it is\n    the polynomial algebra over $\\Z/2\\Z$ on the ``universal'' mod $2$ Euler\n    class.\n\\end{example}\n\n\\subsection{Chern classes}\nThese are among the most fundamental examples of characteristic classes.\n\\begin{theorem}[Chern classes]\\label{chern-classes}\n    There is a unique family of characteristic classes for complex vector\n    bundles that assigns to a complex $n$-plane bundle $\\xi$ over $X$ its\n    \\emph{Chern classes} $c^{(n)}_k(\\xi)\\in H^{2k}(X;\\Z)$, such that:\n    \\begin{enumerate}\n\t\\item $c^{(n)}_0(\\xi) = 1$.\n\t\\item If $\\xi$ is a line bundle, then $c^{(1)}_1(\\xi) = -e(\\xi)$.\n\t\\item The \\emph{Whitney sum formula} holds: if $\\xi$ is a $p$-plane\n\t    bundle and $\\eta$ is a $q$-plane bundle (and if $\\xi\\oplus\\eta$\n\t    denotes the fiberwise direct sum), then\n\t    \\begin{equation*}\n\t\tc^{(p+q)}_k(\\xi\\oplus \\eta) = \\sum_{i+j=k} c^{(p)}_i(\\xi)\\cup\n\t\tc^{(q)}_j(\\eta) \\in H^{2k}(X;\\Z).\n\t    \\end{equation*}\n    \\end{enumerate}\n    Moreover, if $\\xi_n$ is the universal $n$-plane bundle, then\n    $$H^\\ast(BU(n);\\Z) \\simeq \\Z[c_1^{(n)}, \\cdots, c^{(n)}_n],$$\n    where $c^{(n)}_k = c^{(n)}_k(\\xi_n)$.\n\\end{theorem}\nThis result says that all characteristic classes for complex vector bundles are\ngiven by polynomials in the Chern classes, since the cohomology of $BU(n)$\nconsists precisely of the characteristic classes.  It also says that there are no universal\nalgebraic relations among the Chern classes: you can specify them\nindependently.\n\n\\begin{remark}\n    The $(p+q)$-plane bundle $\\xi_p\\times \\xi_q = \\pr_1^\\ast \\xi_p\\oplus\n    \\pr_2^\\ast\\xi_q$ over $BU(p)\\times BU(q)$ is classified by a map\n    $BU(p)\\times BU(q) \\xar{\\mu} BU(p+q)$. The Whitney sum formula computes the\n    effect of $\\mu$ on cohomology:\n    $$\\mu^\\ast(c^{(p+q)}_k) = \\sum_{i+j = k} c^{(p)}_i \\times c^{(q)}_j \\in\n    H^{2k}(BU(p)\\times BU(q)),$$\n    where, you'll recall,\n    $$x\\times y := \\pr_1^\\ast x \\cup \\pr_2^\\ast y.$$\n\\end{remark}\n\nThe Chern classes are ``stable'', in the following sense. Let $\\epsilon$ be the\ntrivial one-dimensional complex vector bundle, and let $\\xi$ be an\n$n$-dimensional vector bundle. What is $c^{(n+q)}_k(\\xi\\oplus\\epsilon^q)$? For\nthis, the Whitney sum formula is valuable.\n\nThe trivial bundle is characterized by the pullback:\n\\begin{equation*}\n    \\xymatrix{\n\tX\\times \\cC^n = n\\epsilon \\ar[r]\\ar[d] & \\cC^n\\ar[d]\\\\\n\tX\\ar[r] & \\ast\n    }\n\\end{equation*}\nBy naturality, we find that if $k>0$, then $c^{(n)}_k(n\\epsilon) = 0$. The\nWhitney sum formula therefore implies that\n$$c^{(n+q)}_k(\\xi\\oplus \\epsilon^q) = c^{(n)}_k(\\xi).$$\nThis phenomenon is called stability: the Chern class only depends on the\n``stable equivalence class'' of the vector bundle (really, they are only\ndefined on ``K-theory'', for those in the know). For this reason, we will drop\nthe superscript on $c^{(n)}_k(\\xi)$, and simply write $c_k(\\xi)$.\n\n\\subsection{Grothendieck's construction}\\label{grothendieck-chern}\nLet $\\xi$ be an $n$-plane bundle. 
We can consider the fiber bundle\n$\\pi:\\PP(\\xi)\\to X$, the projectivization of $\\xi$: an element of the fiber of\n$\\PP(\\xi)$ over $x\\in X$ is a line inside $\\xi_x$, so the fibers are all\nisomorphic to $\\CP^{n-1}$.\n\nLet us compute the cohomology of $\\PP(\\xi)$. For this, the Serre spectral\nsequence will come in handy:\n$$\nE_2^{s,t} = H^s(X; H^t(\\CP^{n-1})) \\Rightarrow H^{s+t}(\\PP(\\xi)).\n$$\n\\begin{remark}\n    Why is the local coefficient system constant? The space $X$ need not be\n    simply connected, but $BU(n)$ is simply connected since $U(n)$ is\n    connected. Consider the projectivization of the universal bundle\n    $\\xi_n\\downarrow BU(n)$; pulling back via $f:X\\to BU(n)$ gives the bundle\n    $\\pi:\\PP(\\xi)\\to X$. The map on fibers $H^\\ast(\\PP(\\xi_n)_{f(x)}) \\to\n    H^\\ast(\\PP(\\xi)_{x})$ is an isomorphism which is equivariant with respect\n    to the action of the fundamental group $\\pi_1(X)$, acting via the map $\\pi_1 (X)\n    \\to \\pi_1(BU(n)) = 0$; the action is therefore trivial, so the coefficient\n    system is constant.\n\\end{remark}\nBecause $H^\\ast(\\CP^{n-1})$ is torsion-free and finitely generated in each\ndimension, we know that\n$$E_2^{s,t} \\simeq H^s(X) \\otimes H^t(\\CP^{n-1}).$$\nWe claim that the spectral sequence collapses at $E_2$, i.e., that $E_2 \\simeq E_\\infty$:\nthere are no nonzero differentials. We know that the $E_2$-page is generated as\nan algebra by elements in the cohomology of the fiber and elements in the\ncohomology of the base. Thus, it suffices to check that elements in the\ncohomology of the fiber survive to $E_\\infty$. We know that\n$$E_2^{0,2t} = \\Z\\langle x^t\\rangle,\\text{ and }E_2^{0,2t+1} = 0,$$\nwhere $x = e(\\lambda)$ is the Euler class of the canonical line bundle\n$\\lambda\\downarrow\\CP^{n-1}$.\n\nIn order for the Euler class to survive the spectral sequence, it suffices to\ncome up with a two-dimensional cohomology class in $\\PP(\\xi)$ that restricts to\nthe Euler class on the fiber $\\CP^{n-1}$. We know that $\\lambda$ itself is the\nrestriction of the tautologous line bundle over $\\CP^\\infty$. There is a\ntautologous line bundle $\\lambda_\\xi \\downarrow \\PP(\\xi)$, given by the\ntautologous line bundle on each fiber. Explicitly:\n$$E(\\lambda_\\xi)=\\{(\\ell,y)\\in \\PP(\\xi)\\times_X E(\\xi) | y\\in \\ell\\subseteq\n\\xi_x\\}.$$\nThus, $x$ is the restriction $e(\\lambda_\\xi)|_\\text{fiber}$ of the Euler class\nto the fiber. It follows that the class $x$ survives to the $E_\\infty$-page.\n\nUsing the Leray-Hirsch theorem (Theorem \\ref{leray-hirsch}), we conclude that\n$$H^\\ast(\\PP(\\xi)) = H^\\ast(X)\\langle 1, e(\\lambda_\\xi), e(\\lambda_\\xi)^2,\n\\cdots, e(\\lambda_\\xi)^{n-1}\\rangle.$$\nFor simplicity, let us write $e = e(\\lambda_\\xi)$. Unfortunately, we don't know\nwhat $e^n$ is, although we do know that it is a linear combination of the $e^k$\nfor $k<n$. In other words, we have a relation\n$$e^n + c_1e^{n-1} + \\cdots + c_{n-1} e + c_n = 0,$$\nwhere the $c_k$ are elements of $H^{2k}(X)$. These are the Chern classes of\n$\\xi$. 
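In other words, setting $c_0=1$, the defining relation reads uniformly as $\\sum_{k=0}^{n} c_k\\,e^{n-k}=0$, which matches the normalization $c^{(n)}_0(\\xi)=1$ in Theorem \\ref{chern-classes} above. 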
By construction, they are unique!\n\nTo prove Theorem \\ref{chern-classes}(2), note that when $n=1$ the above\nequation reads\n$$e+c_1 = 0,$$\nas desired.\n\n%\\begin{question}\n%    Why do we know that they are the Chern classes?\n%    My margin is too narrow to provide a proof.\n%\\end{question}\n%We'll prove this on Wednesday.\n", "meta": {"hexsha": "8015ea2f5cfe6c87d9157a2cd864f126be827ba1", "size": 9374, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "906/lec-70-grothendieck-chern-classes.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "906/lec-70-grothendieck-chern-classes.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "906/lec-70-grothendieck-chern-classes.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 50.9456521739, "max_line_length": 117, "alphanum_fraction": 0.6758054192, "num_tokens": 3106, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.8128673155708975, "lm_q1q2_score": 0.5892233651956699}}
{"text": "\\documentclass[modern]{aastex62}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n% Derivatives\n\\newcommand{\\dd}{\\mathrm{d}}\n\\newcommand{\\diff}[2]{\\frac{\\dd #1}{\\dd #2}}\n\n% Abbreviated Variables\n\\newcommand{\\Ndet}{N_\\mathrm{det}}\n\\newcommand{\\Nndet}{N_\\mathrm{ndet}}\n\\newcommand{\\Nnobs}{N_\\mathrm{nobs}}\n\\newcommand{\\Nobs}{N_\\mathrm{obs}}\n\\newcommand{\\Ntotal}{N_\\mathrm{total}}\n\n% Vector shortcuts\n\\newcommand{\\vd}{\\vec{d}}\n\\newcommand{\\vlambda}{\\vec{\\lambda}}\n\\newcommand{\\vtheta}{\\vec{\\theta}}\n\n\\begin{document}\n\\title{Incorporating Selection Effects in Population Analyses}\n\n\\author[0000-0003-1540-8562]{Will M. Farr}\n\\email{will.farr@stonybrook.edu}\n\\affiliation{Department of Physics and Astronomy, Stony Brook University, Stony Brook NY 11794, USA}\n\\affiliation{Center for Computational Astronomy, Flatiron Institute, 162 5th Ave., New York NY 10010, USA}\n\n\\author[0000-0002-6134-8946]{Ilya Mandel}\n\\email{imandel@star.sr.bham.ac.uk}\n\\affiliation{Birmingham Institute for Gravitational Wave Astronomy and School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom}\n\n\\author{Jonathan R. Gair}\n\\email{J.Gair@ed.ac.uk}\n\n\\section*{}\n\nConsider a population of objects, each described by some set of parameters\n$\\vtheta$, that follows a number density $\\diff{N}{\\vtheta}$.  Let the number\ndensity be parameterized by parameters, $\\vlambda$.  For each object in the\npopulation we make a noisy measurement of $\\vtheta$, represented by a likelihood\nfunction relating the measured data, $\\vd$, to the parameters of the object,\n$\\vtheta$: $p\\left( \\vd \\mid \\vtheta \\right)$.  If we have observed a\nrepresentative sample (i.e.\\ a ``fair draw''), then the appropriate\n(unnormalised) joint distribution for the parameters $\\left\\{ \\vtheta_i\n\\right\\}_{i=1}^{\\Ntotal}$ and observations $\\left\\{ \\vd_i \\right\\}$ of the $i =\n1, \\ldots, \\Ntotal$ objects given the parameters $\\vlambda$ describing the\npopulation\n%\n\\begin{equation}\n  \\pi\\left(\\left\\{ \\vtheta_i \\right\\}, \\left\\{ d_i \\right\\} \\mid \\vlambda \\right) \\propto \\left[ \\prod_{i=1}^{\\Ntotal} p\\left( \\vd_i \\mid \\vtheta_i \\right) \\diff{N}{\\vtheta_i}\\left( \\vlambda \\right) \\right] \\exp\\left[ - N\\left( \\vlambda \\right) \\right],\n\\end{equation}\n%\nwhere\n%\n\\begin{equation}\nN\\left( \\vlambda \\right) \\equiv \\int \\dd \\vd \\, \\dd \\vtheta p\\left( \\vd \\mid \\vtheta \\right) \\diff{N}{\\vtheta}\\left( \\lambda \\right)\n\\end{equation}\n%\nis the expected number of objects in the population\\footnote{The rationale for\nwriting this as a double-integral, when the integral over $\\vd$ is in fact\ntrivial---since the likelihood is normalised over $\\vd$---will become apparent\nbelow.}.  This is the standard posterior for a hierarchical analysis of an\ninhomogeneous Poisson process\n\\citep{Loredo1995,Hogg2010,Mandel2010,Youdin2011,Foreman-Mackey2014,Farr2015,Barrett2018}.\n\nNow suppose that, based on the observed data, some objects are classed as\n``observable'' and others are ``un-observable.''  For example, a survey may\nimpose a per-pixel or per-apeture threshold on the flux for inclusion of\npoint-sources in a catalog, or a gravitational wave detector my only report\nevents whose signal-to-noise ratio rises above some predetermined threshold,\nor\\ldots.  
The key point here is that the selection of ``observable'' objects is\nmade by examining the data, $\\vd_i$, for each object; this is by far the most\ncommon case for astronomical observations.  Then the complete set of\nobservations partitions into two subsets:\n%\n\\begin{multline}\n  \\pi\\left(\\left\\{ \\vtheta_i \\right\\}, \\left\\{ \\vtheta_j \\right\\}, \\left\\{ d_i \\right\\}, \\left\\{ d_j \\right\\} \\mid \\vlambda \\right) \\propto \\left[ \\prod_{i=1}^{\\Nobs} p\\left( \\vd_i \\mid \\vtheta_i \\right) \\diff{N}{\\vtheta_i}\\left( \\vlambda \\right) \\right] \\\\ \\times \\left[ \\prod_{j=1}^{\\Nnobs} p\\left( \\vd_j \\mid \\vtheta_j \\right) \\diff{N}{\\vtheta_j}\\left( \\vlambda \\right) \\right] \\exp\\left[ - N\\left( \\vlambda \\right) \\right].\n\\end{multline}\n%\nAgain, a key point is that we can perform this partitioning simply by examining\nthe \\emph{data} obtained for each object.\n\nIt is common for the data associated with ``non-observable'' objects to be\ncompletely \\emph{censored}; that is, it often does not appear in a catalog, or\nanywhere else at all.  In this case, it is appropriate to marginalize over the\nparameters and (unknown) data for the ``non-observable'' objects.  Doing so\ndestroys the distinguishability inherent in the inhomogeneous Poisson\ndistribution, so we must introduce a factor of $1/\\Nnobs!$ to account for the\nover-counting:\n%\n\\begin{equation}\n  \\pi\\left(\\left\\{ \\vtheta_i \\right\\}, \\left\\{ d_i \\right\\} \\mid \\vlambda \\right) \\propto \\left[ \\prod_{i=1}^{\\Nobs} p\\left( \\vd_i \\mid \\vtheta_i \\right) \\diff{N}{\\vtheta_i}\\left( \\vlambda \\right) \\right] \\frac{\\Nndet^{\\Nnobs}\\left( \\vlambda \\right)}{\\Nnobs!} \\exp\\left[ - N\\left( \\vlambda \\right) \\right],\n\\end{equation}\n%\nwhere\n%\n\\begin{equation}\n\\Nndet\\left( \\vlambda \\right) \\equiv \\int_{\\left\\{ \\vd \\mid \\textnormal{non-detection} \\right\\}} \\dd \\vd \\, \\dd \\vtheta \\, p\\left( \\vd \\mid \\vtheta \\right) \\diff{N}{\\vtheta}\\left( \\vlambda \\right)\n\\end{equation}\n%\nis the expected number of non-detections in the population model.  It is\nfurthermore common not to even know \\emph{how many} non-detected objects there were in a\ngiven survey or data set.  
In this case we must marginalize---sum, since\ncounting is a discrete operation---over the unknown number of non-detections,\n$\\Nnobs$; since $\\sum_{\\Nnobs=0}^{\\infty} \\Nndet^{\\Nnobs}\\left( \\vlambda \\right)/\\Nnobs! = \\exp\\left[ \\Nndet\\left( \\vlambda \\right) \\right]$, this yields\n%\n\\begin{equation}\n\\pi\\left(\\left\\{ \\vtheta_i \\right\\}, \\left\\{ d_i \\right\\} \\mid \\vlambda \\right) \\propto \\left[ \\prod_{i=1}^{\\Nobs} p\\left( \\vd_i \\mid \\vtheta_i \\right) \\diff{N}{\\vtheta_i}\\left( \\vlambda \\right) \\right] \\exp\\left[ - \\left( N\\left( \\vlambda \\right) - \\Nndet\\left( \\vlambda \\right) \\right) \\right],\n\\end{equation}\n%\nor\n%\n\\begin{equation}\n  \\pi\\left(\\left\\{ \\vtheta_i \\right\\}, \\left\\{ d_i \\right\\} \\mid \\vlambda \\right) \\propto \\left[ \\prod_{i=1}^{\\Nobs} p\\left( \\vd_i \\mid \\vtheta_i \\right) \\diff{N}{\\vtheta_i}\\left( \\vlambda \\right) \\right] \\exp\\left[ - \\Ndet\\left( \\vlambda \\right) \\right],\n\\end{equation}\n%\nwhere $\\Ndet$---the complement of $\\Nndet$---is the expected number of\ndetections under the population model:\n%\n\\begin{equation}\n  \\Ndet\\left( \\vlambda \\right) \\equiv \\int_{\\left\\{ \\vd \\mid \\textnormal{detection} \\right\\}} \\dd \\vd \\, \\dd \\vtheta \\, p\\left( \\vd \\mid \\vtheta \\right) \\diff{N}{\\vtheta}\\left( \\vlambda \\right).\n\\end{equation}\n%\nThis equation is the posterior for a hierarchical analysis of the number density\nand properties of objects from a data set subject to selection effects\n\\citep[e.g.][]{Gair2010,Youdin2011,Fishbach2018,Wysocki2018}.\n\nIf we re-parameterize $\\diff{N}{\\vtheta}$ so that we can write\n%\n\\begin{equation}\n  \\diff{N}{\\vtheta} \\equiv \\Lambda p\\left( \\vtheta \\mid \\vlambda{}' \\right)\n\\end{equation}\n%\nwith $p\\left( \\vtheta \\mid \\vlambda{}'\\right)$ integrating to 1 over the\npopulation for any value of the new parameters $\\vlambda{}'$, impose a prior\n$p\\left( \\Lambda \\right)\\propto 1/\\Lambda$, and marginalize over $\\Lambda$, we\narrive at the treatment of selection functions for estimating population\ndistributions from \\citet{Loredo2004,O1-BBH,Mandel2016}, as noted by\n\\citet{Fishbach2018}.\n\nNote that the commonly-employed technique of modifying $\\diff{N}{\\vtheta}$ to\naccount for the selection function is not correct, and will lead to biased\nresults whenever the selection depends only on the observed\ndata\\footnote{An example where the selection may be parameter- rather than\ndata-dependent is in surveys of objects that have been selected based on data in\nyet other surveys.  For example, X-ray selected populations of galaxy clusters\nin a weak-lensing catalog.}.\n\n\\acknowledgments\n\nWe thank Maggie Lieu for pointing out the counterexample given in the final\nfootnote of this document.  
An example of a worked population analysis using\nthis method can be found at \\url{https://github.com/farr/SelectionExample}.\nIM's work was performed in part at Aspen Center for Physics, which is supported\nby National Science Foundation grant PHY-1607611; IM's visit there was partially\nsupported by a grant from the Simons Foundation.\n\n\\newpage\n\n\\bibliography{selection}\n\n\\end{document}\n", "meta": {"hexsha": "3b9cc1af562973731f0ce434bc3ab047bd0b0e8f", "size": 8180, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RNAAS/selection.tex", "max_stars_repo_name": "farr/SelectionExample", "max_stars_repo_head_hexsha": "c69dbafb218208e6b85d01065f8d64a1ec3f9f31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-01-25T16:20:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-12T00:10:23.000Z", "max_issues_repo_path": "RNAAS/selection.tex", "max_issues_repo_name": "farr/SelectionExample", "max_issues_repo_head_hexsha": "c69dbafb218208e6b85d01065f8d64a1ec3f9f31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RNAAS/selection.tex", "max_forks_repo_name": "farr/SelectionExample", "max_forks_repo_head_hexsha": "c69dbafb218208e6b85d01065f8d64a1ec3f9f31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-01-25T18:22:37.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-28T03:51:32.000Z", "avg_line_length": 49.5757575758, "max_line_length": 427, "alphanum_fraction": 0.7343520782, "num_tokens": 2556, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673269042767, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.5892233637482203}}
{"text": "\\documentclass[11pt,twoside,a5paper]{article}\n\n\\usepackage[a4paper,top=2.5cm,bottom=2.5cm,left=2.2cm,right=2.2cm]{geometry}\n\\usepackage[T1]{fontenc}\n\\usepackage{verbatim}\n\\usepackage[normalem]{ulem} % for striking out with \\sout\n\\usepackage{amsfonts,amsmath,amssymb} % for more math support\n\\usepackage{times}\n\\usepackage[italic]{mathastext} % use normal text font (times) in equations\n\\usepackage{enumitem}\n%\\usepackage[draft]{hyperref}\n\\usepackage[colorlinks,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}\n\\usepackage{color}\n\n\\newcommand{\\bfx}{{\\bf x}}\n\\newcommand{\\bfy}{{\\bf y}}\n\\newcommand{\\bfz}{{\\bf z}}\n\n\\newcommand{\\norm}{{\\cal N}}\n\n\\begin{document}\n\\title{Emulators: How they work}\n\\author{Tom McClintock}\n\\maketitle\n\n\\noindent Note: all of this comes from David Barber's book \n{\\bf Bayesian Reasoning and Machine Learning}.\n\n\\section{Multivariate Gaussian}\n\nA multivariate Gaussian distribution is given by\n\\begin{equation}\n  \\label{eq:multivariate_gaussian}\n  p(\\bfx|\\mu,\\Sigma) = \\norm(\\bfx|\\mu,\\Sigma)= \\frac{1}{\\sqrt{\\det(2\\pi\\Sigma)}}\\exp\\left(-\\frac{1}{2}(\\bfx-\\mu)^T\\Sigma^{-1}(\\bfx-\\mu)\\right)\n\\end{equation}\nwhere $\\mu$ is the mean vector of the distribution, $\\Sigma$ is the covariance matrix. Note that the inverse covariance $\\Sigma^{-1}$ is called the precision matrix.\n\n\\section{Partitioned Gaussian}\n\nA feature of multivariate Gaussians for our purposes is the idea of a {\\it partitioned} multivariate Gaussian. Consider $\\norm(\\bfz|\\mu,\\Sigma)$ defined jointly over two vectors $\\bfx$ and $\\bfy$ of potentially different dimensions,\n\\begin{equation}\n  \\label{eq:z}\n  \\bfz = \\begin{pmatrix}\n    \\bfx\\\\ \\bfy\n    \\end{pmatrix}\n\\end{equation}\nwith corresponding mean and partitioned covariance\n\\begin{equation}\n  \\label{eq:partitioned_mean}\n  \\mu = \\begin{pmatrix}\n    \\mu_x\\\\ \\mu_y\n  \\end{pmatrix}\n\\end{equation}\n\\begin{equation}\n  \\label{eq:partitioned_cov}\n  \\Sigma = \\begin{pmatrix}\n    \\Sigma_{xx} & \\Sigma_{xy} \\\\\n    \\Sigma_{yx} & \\Sigma_{yy}\n  \\end{pmatrix}\n\\end{equation}\nwhere $\\Sigma_{xy}=\\sigma_{yx}$. The marginal distribution is given by\n\\begin{equation}\n  \\label{eq:marginal_distribution}\n  p(\\bfx) = \\norm(\\bfx|\\mu_x,\\ \\Sigma_{xx})\n\\end{equation}\nbut the {\\it conditional} distribution is given by\n\\begin{equation}\n  \\label{eq:conditional_distribution}\n  p(\\bfx|\\bfy) = \\norm(\\bfx|\\mu_x+\\Sigma_{xy}\\Sigma_{yy}^{-1}(\\bfy-\\mu_y),\\ \n  \\Sigma_{xx} - \\Sigma_{xy}\\Sigma_{yy}^{-1}\\Sigma_{yx}).\n\\end{equation}\nIn other words, we have separated two parts of a multivarate Gaussian, and written the conditional probability of one vector given the other. Note that if there is no correlation $\\Sigma_{xy}=0$ then \\ref{eq:conditional_distribution} reduces to \\ref{eq:marginal_distribution}.\n\n\\section{Gaussian Process}\n\nA Gaussian process is a process by which we assume that given some set of observations $\\bfy(\\bfx)$, a future observation $y^*(x^*)$ is drawn from a multivariate Gaussian, given by \\ref{eq:conditional_distribution}. 
({\\bf Note}: Gaussian processes always assume that $\\bar{\\bfy}=0$, or that $\\mu_y$ has already been subtracted off of $\\bfy$ and can be added on later.)\n\nFurther, since every $y$ is a function of some location in the domain $x$, we assume that we have some function (the {\\it kernel} function) that describes the covariance between observations made at different locations in the domain $k(x_1,x_2)$. Therefore, any element of a covariance matrix constructed between observations is given by\n\\begin{equation}\n  \\label{eq:kernel}\n  [K]_{1,2} = k(x_1,x_2).\n\\end{equation}\nThis means that the covariance matrix of the observations $\\bfy$ is\n\\begin{equation}\n  \\label{eq:K_matrix}\n  [K_{x,x}]_{i,j} = k(x_i,x_j),\\ \\ \\ i,j=1,...,N\n\\end{equation}\nwhere $N$ is the number of observations. Additionally, the covariance between the prediction $y^*$ and the observations $\\bfy$ is a vector\n\\begin{equation}\n  \\label{eq:K_vector}\n  [K_{x,x^*}]_{i} = k(x_i,x^*),\\ \\ \\ i=1,...,N.\n\\end{equation}\n\nSince $\\bfy$ and $y^*$ form a multivariate Gaussian, we can immediately write down a prediction and uncertainty on that prediction from \\ref{eq:conditional_distribution}\n\\begin{equation}\n  \\label{eq:gaussian_process}\n  p(y^*|\\bfy) = \\norm(y^* | K_{x^*x}K_{xx}^{-1}\\bfy,\\ \n  K_{x^*x^*} - K_{x^*x}K_{xx}^{-1}K_{xx^*})\n\\end{equation}\nwhere $K_{xx^*} = K_{x^*x}^T$. If the observations $\\bfy$ have variances $\\sigma^2$ associated with them, then we make the transformation $K_{xx}\\rightarrow K_{xx}+{\\bf I}\\sigma^2$.\n\n\\section{Kernel Functions}\n\nThe kernel function (aka covariance function) isn't specified a priori, and in general could take almost any form. In Barber's book he explains in detail how different functions are appropriate for different purposes. In general, we will use the squared exponential kernel\n\\begin{equation}\n  \\label{eq:se_kernel}\n  k(\\bfx_1,\\bfx_2) = k(|\\bfx_1-\\bfx_2|) = k_0\\exp\\left(-\\frac{1}{2}\\frac{(\\bfx_1-\\bfx_2)^2}{L}\\right).\n\\end{equation}\nwhere $k_0$ is the covariance amplitude, and $L$ is known as the ``kernel length''. The domain $\\bfx$ can be multidimensional, with each dimension having slightly different meanings (e.g. $\\Omega_m$, $w$, $\\sigma_8$ in cosmology), so $L$ is actually an array containing a kernel length for each dimension. These parameters $k_0$ and $L$ are known as {\\it hyperparameters} in the literature.\n\n\\section{Hyperparameters}\n\nValues for the hyperparameters aren't random, but are informed by the observations. In general, $L$ corresponds to the ``feature size'', or the approximate size in the domain of features in your observations, while $\\sqrt{k_0}$ is the error bar on an uncorrelated prediction. Another way to interpret $\\sqrt{k_0}$ is the error bar on a prediction from a Gaussian process that is extrapolating.\n\nThe best way to find an optimal choice of hyperparameters is to write a likelihood of your observations which you can then maximize. 
This likelihood is\n\\begin{equation}\n  \\label{eq:likelihood}\n  \\log p(\\bfy | \\bfx) = -\\frac{1}{2}\\bfy^TK_{xx}^{-1}\\bfy - \\frac{1}{2}\\log\\det(2\\pi K_{xx}).\n\\end{equation}\nThis also lets you obtain full posteriors for your hyperparameters, which can be propagated forward into uncertainties on $y^*$ if you want to be thorough.\n\n\\section{Complexity and Limitations}\n\nTraining a Gaussian process is $O(N^3)$ in time because it requires matrix inversion (faster if you have smart inverters), and $O(N^2)$ in space because it requires the matrix to be stored completely. Therefore, as your training data becomes large the algorithm becomes unwieldy. Note: in very specific instances this can be mitigated, see Foreman-Mackey et al. 2017 (https://arxiv.org/abs/1703.09710).\n\nIf an optimizer is used to find the hyperparameters, then the resulting Gaussian process will be sensitive to the optimizer used. The best advice I can give is to try a few methods and see what works best.\n\n\\end{document}\n", "meta": {"hexsha": "f43c0e2069e2034495ee93878d11e982f170ee7b", "size": 6864, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "emulator_writeup/How_Emulators_Work.tex", "max_stars_repo_name": "tmcclintock/TeX_Documents", "max_stars_repo_head_hexsha": "8fd83d85b2c0060d61b2cc6c9c72a408f936c4b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "emulator_writeup/How_Emulators_Work.tex", "max_issues_repo_name": "tmcclintock/TeX_Documents", "max_issues_repo_head_hexsha": "8fd83d85b2c0060d61b2cc6c9c72a408f936c4b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "emulator_writeup/How_Emulators_Work.tex", "max_forks_repo_name": "tmcclintock/TeX_Documents", "max_forks_repo_head_hexsha": "8fd83d85b2c0060d61b2cc6c9c72a408f936c4b4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.2093023256, "max_line_length": 402, "alphanum_fraction": 0.7348484848, "num_tokens": 2046, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702642896702, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.589223350799449}}
{"text": "\\chapter{More on generalized limit}\n\n\\begin{defn}\nI will call a permutation group \\emph{fixed point free} when\nevery element of it except of identity has no fixed points.\n\\end{defn}\n\n\\begin{defn}\nA funcoid~$f$ is \\emph{Kolmogorov} when\n$\\rsupfun{f}\\{x\\}\\ne\\rsupfun{f}\\{y\\}$ for every distinct points\n$x,y\\in\\dom f$.\n\\end{defn}\n\n\\section{Hausdorff funcoids}\n\n\\begin{defn}\n\\emph{Limit}~$\\lim\\mathcal{F}=x$ of a filter~$\\mathcal{F}$\nregarding funcoid~$f$ is such a point that $\\rsupfun{f}\\{x\\}\\sqsupseteq\\mathcal{F}$.\n\\end{defn}\n\n\\begin{defn}\n\\emph{Hausdorff} funcoid is such a funcoid that every proper\nfilter on its image has at most one limit.\n\\end{defn}\n\n\\begin{prop}\nThe following are pairwise equivalent for every funcoid~$f$:\n\\begin{enumerate}\n\\item\\label{hd:eq-d} $f$~is Hausdorff.\n\\item\\label{hd:eq-op}\n$x\\ne y\\Rightarrow\\rsupfun{f}\\{x\\}\\asymp\\rsupfun{f}\\{y\\}$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n\\item[\\ref{hd:eq-d}$\\Rightarrow$\\ref{hd:eq-op}]\nIf~\\ref{hd:eq-op} does not hold,\nthen there exist distinct points~$x$ and~$y$ such that\n$\\rsupfun{f}\\{x\\}\\nasymp\\rsupfun{f}\\{y\\}$.\nSo~$x$ and~$y$ are both limit points of\n$\\rsupfun{f}\\{x\\}\\sqcap\\rsupfun{f}\\{y\\}$, and thus~$f$ is not\nHausdorff.\n\\item[\\ref{hd:eq-op}$\\Rightarrow$\\ref{hd:eq-d}]\nSuppose~$\\mathcal{F}$ is proper.\n\\[ \\rsupfun{f}\\{x\\}\\sqsupseteq\\mathcal{F}\\land\n\\rsupfun{f}\\{y\\}\\sqsupseteq\\mathcal{F}\\Rightarrow\n\\rsupfun{f}\\{x\\}\\nasymp\\rsupfun{f}\\{y\\}\\Rightarrow x=y. \\]\n\\end{description}\n\\end{proof}\n\n\\begin{cor}\nEvery entirely defined Hausdorff funcoid is Kolmogorov.\n\\end{cor}\n\n\\begin{rem}\nIt is enough to be ``almost entirely defined'' (having nonempty\nvalue everywhere except of one point).\n\\end{rem}\n\n\\begin{obvious}\nFor a complete funcoid induced by a topological space this\ncoincides with the traditional definition of a Hausdorff\ntopological space.\n\\end{obvious}\n\n\\section{Restoring functions from limit}\n\nConsider alternative definition of generalized limit:\n\\[ \\xlim f = \\lambda r\\in G: \\nu\\circ f\\circ\\uparrow r. \\]\n\nOr:\n\\[ \\xlim_a f = \\setcond{(\\rsupfun{r^{-1}}a, \\nu\\circ f\\circ\\uparrow r)}{r\\in G} \\]\n(note this requires explicit filter in the definition of generalized limit).\n\nOperations on the set of generalized limits can be defined (twice) pointwise. \\fxwarning{First define operations on\nfuncoids.}\n\n\\begin{prop}\nThe above defined $\\xlim_{\\rsupfun{\\mu}\\{x\\}} f$ is a monovalued\nfunction if~$\\mu$ is Kolmogorov and~$G$ is fixed point free.\n\\end{prop}\n\n\\begin{proof}\nWe need to prove $\\supfun{r^{-1}}\\rsupfun{\\mu}\\{x\\}\\ne\n\\supfun{s^{-1}}\\rsupfun{\\mu}\\{x\\}$ for $r,s\\in G$, $r\\ne s$.\nReally, by definition of generalized limit, they commute, so our\nformula is equivalent to\n$\\rsupfun{\\mu}\\rsupfun{r^{-1}}\\{x\\}\\ne\n\\rsupfun{\\mu}\\rsupfun{s^{-1}}\\{x\\}$;\n$\\rsupfun{\\mu}\\rsupfun{r^{-1}\\circ s}\\rsupfun{s^{-1}}\\{x\\}\\ne\n\\rsupfun{\\mu}\\rsupfun{s^{-1}}\\{x\\}$.\nBut $r^{-1}\\circ s\\ne e$, so because it is fixed point free,\n$\\rsupfun{r^{-1}\\circ s}\\rsupfun{s^{-1}}\\{x\\}\\ne\n\\rsupfun{s^{-1}}\\{x\\}$ and thus by kolmogorovness, we have the\nthesis.\n\\end{proof}\n\n\\begin{lem}\nLet~$\\mu$ and~$\\nu$ be Hausdorff funcoids. 
If the function~$f$ is defined at the point~$x$, then\n\\[ fx=\n\\lim\\rsupfun{(\\xlim_{\\rsupfun{\\mu}\\{x\\}}f)\\rsupfun{\\mu}\\{x\\}}\\{x\\}\n\\]\n\\end{lem}\n\n\\begin{rem}\nThe right-hand side is well defined because\n$\\xlim_a f$ is monovalued.\n\\end{rem}\n\n\\begin{proof}\n$\\lim\\rsupfun{(\\xlim_{\\rsupfun{\\mu}\\{x\\}}f)\\rsupfun{\\mu}\\{x\\}}\\{x\\}=\n\\lim\\rsupfun{\\nu\\circ f}\\{x\\}=\\lim\\rsupfun{\\nu}fx=fx$.\n\\end{proof}\n\n\\begin{cor}\nLet~$\\mu$ and~$\\nu$ be Hausdorff funcoids.\nThen the function~$f$ can be restored from the values of $\\xlim_{\\rsupfun{\\mu}\\{x\\}}f$.\n\\end{cor}", "meta": {"hexsha": "6c0471aa95ab3da0be56c0fa731e823c47ef7316", "size": 3630, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-limit-more.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-limit-more.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-limit-more.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.25, "max_line_length": 115, "alphanum_fraction": 0.6873278237, "num_tokens": 1391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721305, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5891873688370521}}
{"text": "\\documentclass[12pt]{article}\n\n\\title{$LDL^T$ Notes}\n\\author{Nick Henderson}\n\\date{\\today}\n\n\\usepackage{amsmath}\n\n\\begin{document}\n\\maketitle\n\n\\section{Cholesky: $A=LL^T$}\n\nOverall $LDL^T = A$ factorization:\n\n\\begin{equation*}\n\\begin{pmatrix}\nL_{11} &        &        \\\\\nL_{21} & L_{22} &        \\\\\nL_{31} & L_{32} & L_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nL_{11}^T & L_{21}^T & L_{31}^T \\\\\n         & L_{22}^T & L_{32}^T \\\\\n         &          & L_{33}^T\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33}\n\\end{pmatrix}\n\\end{equation*}\n\n\\section{$A=LDL^T$}\n\nOverall $LDL^T = A$ factorization:\n\n\\begin{equation*}\n\\begin{pmatrix}\nL_{11} &        &        \\\\\nL_{21} & L_{22} &        \\\\\nL_{31} & L_{32} & L_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nD_{11} &        &        \\\\\n       & D_{22} &        \\\\\n       &        & D_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nL_{11}^T & L_{21}^T & L_{31}^T \\\\\n         & L_{22}^T & L_{32}^T \\\\\n         &          & L_{33}^T\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33}\n\\end{pmatrix}\n\\end{equation*}\n\nMultiply $LD$:\n\n\\begin{equation*}\n\\begin{pmatrix}\nL_{11} D_{11} &               &              \\\\\nL_{21} D_{11} & L_{22} D_{22} &              \\\\\nL_{31} D_{11} & L_{32} D_{22} & L_{33} D_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nL_{11}^T & L_{21}^T & L_{31}^T \\\\\n         & L_{22}^T & L_{32}^T \\\\\n         &          & L_{33}^T\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33}\n\\end{pmatrix}\n\\end{equation*}\n\nEquations that focus on $D_{22}$, $L_{22}$, and $L_{32}$:\n\n\\begin{align*}\nA_{22} &= L_{21}D_{11}L_{21}^T + L_{22}D_{22}L_{22}^T \\\\\nA_{32} &= L_{31}D_{11}L_{21}^T + L_{32}D_{22}L_{22}^T\n\\end{align*}\n\nOut of place algorithm for when $L_{22}$ is element:\n\\begin{align*}\nL_{12} & \\gets D_{11}L_{21}^T        & \\text{no blas function for this} \\\\\nD_{22} & \\gets A_{22} - L_{21}L_{12} & \\texttt{dot} \\\\\nL_{32} & \\gets A_{32} - L_{31}L_{12} & \\texttt{gemv} \\\\\nL_{32} & \\gets L_{32} / D_{22}       & \\texttt{scal}\n\\end{align*}\n\nIn place algorithm for when $L_{22}$ is element:\n\\begin{align*}\nA_{12} & \\gets D_{11}A_{21}^T        & \\text{no blas function for this, $D_{11}=\\mathsf{diag}(A_{11})$} \\\\\nA_{22} & \\gets A_{22} - A_{21}A_{12} & \\texttt{dot} \\\\\nA_{32} & \\gets A_{32} - A_{31}A_{12} & \\texttt{gemv} \\\\\nA_{32} & \\gets A_{32} / A_{22}       & \\texttt{scal}\n\\end{align*}\n\n\\section{$A = U^TDU$}\n\nOverall $U^TDU = A$ factorization:\n\n\\begin{equation*}\n\\begin{pmatrix}\nU_{11}^T &         &        \\\\\nU_{12}^T & U_{22}^T &        \\\\\nU_{13}^T & U_{23}^T & U_{33}^T\n\\end{pmatrix}\n\\begin{pmatrix}\nD_{11} &        &        \\\\\n       & D_{22} &        \\\\\n       &        & D_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nU_{11} & U_{12} & U_{13} \\\\\n       & U_{22} & U_{23} \\\\\n       &        & U_{33}\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33}\n\\end{pmatrix}\n\\end{equation*}\n\nMultiply $U^TD$:\n\n\\begin{equation*}\n\\begin{pmatrix}\nU_{11}^TD_{11} &               &              \\\\\nU_{12}^TD_{11} & U_{22}^TD_{22} &              \\\\\nU_{13}^TD_{11} & U_{23}^TD_{22} & 
U_{33}^TD_{33}\n\\end{pmatrix}\n\\begin{pmatrix}\nU_{11} & U_{12} & U_{13} \\\\\n       & U_{22} & U_{23} \\\\\n       &        & U_{33}\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nA_{11} & A_{12} & A_{13} \\\\\nA_{21} & A_{22} & A_{23} \\\\\nA_{31} & A_{32} & A_{33}\n\\end{pmatrix}\n\\end{equation*}\n\nEquations that involve $A_{22}$, $U_{22}$, and $U_{23}$:\n\n\\begin{align*}\nA_{22} &= U_{12}^TD_{11}U_{12} + U_{22}^TD_{22}U_{22} \\\\\nA_{23} &= U_{12}^TD_{11}U_{13} + U_{22}^TD_{22}U_{23}\n\\end{align*}\n\nOut-of-place algorithm for when $U_{22}$ is a single element:\n\\begin{align*}\nU_{21} & \\gets U_{12}^TD_{11}        & \\text{no BLAS function for this} \\\\\nD_{22} & \\gets A_{22} - U_{21}U_{12} & \\texttt{dot} \\\\\nU_{23} & \\gets A_{23} - U_{21}U_{13} & \\texttt{gemv} \\\\\nU_{23} & \\gets U_{23} / D_{22}       & \\texttt{scal}\n\\end{align*}\n\nIn-place algorithm for when $U_{22}$ is a single element:\n\\begin{align*}\nA_{21} & \\gets A_{12}^TD_{11}        & \\text{no BLAS function for this, $D_{11}=\\mathsf{diag}(A_{11})$} \\\\\nA_{22} & \\gets A_{22} - A_{21}A_{12} & \\texttt{dot} \\\\\nA_{23} & \\gets A_{23} - A_{21}A_{13} & \\texttt{gemv} \\\\\nA_{23} & \\gets A_{23} / A_{22}       & \\texttt{scal}\n\\end{align*}\n\n\\end{document}\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: t\n%%% End:\n", "meta": {"hexsha": "7786b4382a60e544058f913ddcb669868aaf6372", "size": 4448, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ldl.tex", "max_stars_repo_name": "nwh/QuasiDefinite.jl", "max_stars_repo_head_hexsha": "e6816731779e130d37ef0920482f77363c2c16e8", "max_stars_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ldl.tex", "max_issues_repo_name": "nwh/QuasiDefinite.jl", "max_issues_repo_head_hexsha": "e6816731779e130d37ef0920482f77363c2c16e8", "max_issues_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ldl.tex", "max_forks_repo_name": "nwh/QuasiDefinite.jl", "max_forks_repo_head_hexsha": "e6816731779e130d37ef0920482f77363c2c16e8", "max_forks_repo_licenses": ["BSD-3-Clause-Open-MPI"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0432432432, "max_line_length": 106, "alphanum_fraction": 0.5168615108, "num_tokens": 1949, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5891873527599071}}
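The scalar (single-element) column algorithms above are straightforward to prototype. Below is a minimal NumPy sketch of the unpivoted $LDL^T$ factorization; the function name and layout are mine, not part of these notes, and the comments mark which update plays the role of the dot, gemv and scal calls. Since there is no pivoting, it assumes all pivots $d_j$ are nonzero (e.g. a positive definite $A$).

    import numpy as np

    def ldlt(A):
        # Unblocked LDL^T without pivoting: returns unit lower-triangular L and
        # a vector d such that A == L @ np.diag(d) @ L.T for symmetric A.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.eye(n)
        d = np.zeros(n)
        for j in range(n):
            v = d[:j] * L[j, :j]                                 # D_11 L_21^T (the "no BLAS function" step)
            d[j] = A[j, j] - L[j, :j] @ v                        # dot
            L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ v) / d[j]   # gemv, then scal
        return L, d

    # quick self-check on a random symmetric positive definite matrix
    M = np.random.rand(4, 4)
    A = M @ M.T + 4 * np.eye(4)
    L, d = ldlt(A)
    assert np.allclose(L @ np.diag(d) @ L.T, A)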
{"text": "%!TEX root = ../notes.tex\n\\section{April 12, 2022}\n\\subsection{Ideals and Fractional Ideals}\nLet $R$ be a commutative ring with an identity.\n\nRecall that if $I, J$ are ideals of $R$, then\n\\[I + J := \\left\\{ a_i + b_j \\mid a_i\\in I, b_j\\in J \\right\\}\\]\nand\n\\[IJ := \\left\\{\\sum a_ib_j \\mid a_i\\in I, b_j\\in J\\right\\}\\]\nLet $K$ be a number field. An ideal of $riO_K$ is sometimes called an \\ul{intgral ideal}. This is to contrast them with fractional ideals.\n\\begin{definition}\n    A fractional ideal of $\\riO_K$ is a set of the form $c^{-1}\\mathfrak{b}$ when $\\mathfrak{b}$ is an ideal of $\\riO_K$ and $0\\neq c\\in\\riO_K$.\n\\end{definition}\n\\begin{example}\n    The fractional ideals of $\\ZZ$ are of the form $r\\ZZ$ where $r\\in\\QQ$.\n\n    $\\frac{2}{5}\\ZZ$ is a fractional ideal of $\\ZZ$.\n\\end{example}\n\\textbf{Caution!} If $\\riO_K$ is a PID, then fractional ideals are of the form\n\\[c^{-1}\\langle d\\rangle\\]\nfor $0\\neq c\\in\\riO_K$ and $d\\in\\riO_K$. This is just $c^{-1}d\\riO_K = \\alpha\\riO_k$ where $\\alpha = c^{-1}d$.\n\nAddition/multiplication of fractional ideals works similarly as in the case of ideals:\n\nIf $\\mathfrak{a}, \\mathfrak{b}$ are fractional ideals, then\n\\begin{align*}\n    \\mathfrak{a}\\mathfrak{b}    & := \\left\\{ \\text{finite sums }\\sum a_ib_j\\mid a_i\\in\\mathfrak{a}, b_j\\in\\mathfrak{b} \\right\\} \\\\\n    \\mathfrak{a} + \\mathfrak{b} & := \\left\\{a_i + b_j\\mid a_i\\in\\mathfrak{a}, b_j\\in\\mathfrak{b} \\right\\}\n\\end{align*}\nIf $a_1 = c_1^{-1}\\mathfrak{b}_1$ and $a_2 = c_2^{-1}\\mathfrak{b}_2$ where $\\mathfrak{b}_1, \\mathfrak{b}_2$ are integral ideals, then we have\n\\[\\mathfrak{a}_1\\mathfrak{a}_2 = (c_1c_2)^{-1}\\mathfrak{b}_1\\mathfrak{b}_2\\]\nThe multiplication is obviously associative and commutative, with $\\riO_K$ as the identity.\n\nThus, the set of \\emph{nonzero} fractional ideals forms a monoid\\footnote{Also a commutative ring with addition, actually.} under (commutative) multiplication.\n\nIf we want the structure of an Abelian group, we need to build the inverses in.\n\\begin{theorem}[p. 109 \\cite{stewart2015algebraic}]\\label{thm:fractional-ideals-group}\n    The nonzero fractional ideals of $\\riO_K$ form a group under multiplication.\n\\end{theorem}\nFor each ideal $\\mathfrak{a}\\subseteq \\riO_K$, define\n\\[\\mathfrak{a}^{-1} := \\left\\{ x\\in K\\mid x\\mathfrak{a}\\subseteq \\riO_K \\right\\}\\]\nAutomatically, this set contains all of $\\riO_K$.\n\nIf $\\mathfrak{a}\\neq 0$, then for any $0\\neq c\\in \\mathfrak{a}$, $c\\mathfrak{a}^{-1}\\subseteq \\riO_K$. Fixing such a $c$, we have that $c\\mathfrak{a}^{-1} =: \\mathfrak{b}$ is an ideal of $\\riO_K$. (Why? $c\\mathfrak{a}^{-1}$ is an $\\riO_K$-submodule of $\\riO_K$, i.e. that is to say, an ideal of $\\riO_K$.)\n\n\\begin{example}\n    Let's take $K = \\QQ$ so that $\\riO_K = \\ZZ$\n    \\begin{align*}\n        \\mathfrak{a}      & = 5\\ZZ           \\\\\n        \\mathfrak{a}^{-1} & = \\frac{1}{5}\\ZZ\n    \\end{align*}\n    Thus $\\mathfrak{a}^{-1} = c^{-1}\\mathfrak{b}$, so that $\\mathfrak{a}^{-1}$ is a fractional ideal.\n\\end{example}\n\nBy definition,\n\\[\\mathfrak{a}\\mathfrak{a}^{-1} = \\mathfrak{a}^{-1}\\mathfrak{a} \\subseteq \\riO_K\\]\nHarder to show: $\\mathfrak{a}\\mathfrak{a}^{-1} = \\riO_K$. We blackbox this for the moment (p.110-112 \\cite{stewart2015algebraic}, uses fact that $\\riO_K$ is a Dedekind domain).\nWe can extend this discussion to fractional ideals $\\mathfrak{a}$. 
Assuming this, we have shown \\cref{thm:fractional-ideals-group}.\n\n\\begin{theorem*}[p. 109 \\cite{stewart2015algebraic}]\n    The nonzero fractional ideals of $\\riO_K$ form a group under multiplication.\n\\end{theorem*}\n\\begin{proof}\n    Let $\\mathfrak{a}$ be a nonzero fractional ideal of $\\riO_K$. We have $\\mathfrak{a} = c^{-1}\\mathfrak{b}$ with $\\mathfrak{b}$ integral. We define $\\mathfrak{a}' = c\\mathfrak{b}^{-1}$, which is a fractional ideal, and $\\mathfrak{a}\\mathfrak{a}' = \\riO_K$. So $\\mathfrak{a}'$ is the inverse of $\\mathfrak{a}$.\n\\end{proof}\n\n\\recall A prime ideal of a commutative ring $R$ can be defined in a couple of different ways:\n\n\\begin{definition}[Prime Ideal]\n    A proper ideal $\\mathfrak{p}$ is \\ul{prime} if $IJ\\subseteq \\mathfrak{p}$ implies $I\\subseteq \\mathfrak{p}$ or $J\\subseteq \\mathfrak{p}$.\n\\end{definition}\n\\begin{definition*}[Prime Ideal (alternative)]\n    A proper ideal $\\mathfrak{p}$ is \\ul{prime} if $ab\\in\\mathfrak{p}$ implies that $a\\in\\mathfrak{p}$ or $b\\in\\mathfrak{p}$.\n\\end{definition*}\n\nTo prove unique factorization of nonzero ideals, we first need to prove $\\riO_K$ is a Dedekind domain.\n\\begin{theorem}[p.108 \\cite{stewart2015algebraic}]\n    The ring of integers $\\riO_K$:\n    \\begin{enumerate}[a)]\n        \\item is an integral domain,\n        \\item is Noetherian (every ascending chain of ideals terminates\\footnote{A chain of ideals is a sequence of inclusions $I_1\\subseteq I_2\\subseteq I_3\\subseteq \\cdots$; and for such a chain to \\ul{terminate} means that $\\exists N$ such that $I_n = I_N$ for all $n\\geq N$.}, or equivalently, every ideal is finitely generated),\n        \\item is integrally closed in its field of fractions (that is, if $\\alpha\\in\\mathsf{Frac}(\\riO_K) = K$ satisfies a monic polynomial equation with coefficients in $\\riO_K$, then $\\alpha\\in\\riO_K$),\n        \\item has the property that every nonzero prime ideal of $\\riO_K$ is maximal.\n    \\end{enumerate}\n    We note that a ring satisfying (a)--(d) is called a \\ul{Dedekind domain}.\n\\end{theorem}\n\n", "meta": {"hexsha": "bddf41e6250dca2151086fbf2d60e106db13b981", "size": 5283, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-04-12.tex", "max_stars_repo_name": "jchen/math1560-notes", "max_stars_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-02T15:41:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T20:28:48.000Z", "max_issues_repo_path": "lectures/2022-04-12.tex", "max_issues_repo_name": "jchen/math1560-notes", "max_issues_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-04-12.tex", "max_forks_repo_name": "jchen/math1560-notes", "max_forks_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.0340909091, "max_line_length": 319, "alphanum_fraction": 0.6818095779, "num_tokens": 1889, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178138, "lm_q2_score": 0.8198933337131076, "lm_q1q2_score": 0.5891707422302557}}
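To make the inverse construction concrete, here is a standard worked example (mine, not from the lecture): take $K = \QQ(\sqrt{-5})$, so $\riO_K = \ZZ[\sqrt{-5}]$, and let $\mathfrak{a} = \langle 2, 1+\sqrt{-5}\rangle$, a non-principal ideal. Then

\[\mathfrak{a}^2 = \left\langle 4,\; 2+2\sqrt{-5},\; (1+\sqrt{-5})^2\right\rangle = \left\langle 4,\; 2+2\sqrt{-5},\; -4+2\sqrt{-5}\right\rangle = \langle 2\rangle,\]

since each generator is divisible by $2$ and $2 = (2+2\sqrt{-5}) - (-4+2\sqrt{-5}) - 4 \in \mathfrak{a}^2$. Hence $\mathfrak{a}\cdot\frac{1}{2}\mathfrak{a} = \riO_K$, so $\mathfrak{a}^{-1} = \frac{1}{2}\mathfrak{a}$: an inverse that is genuinely fractional, exactly as \cref{thm:fractional-ideals-group} promises.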
{"text": "\\section{Mathematical logic}\\label{sec:mathematical_logic}\n\nMathematical logic uses mathematics to study logic and vice versa.\n\nWe start with objects that are purely logical in nature --- formulas --- which are strings of symbols that represent truth values. Formal definitions for formulas are given here using \\hyperref[def:formal_grammar]{grammars}, which in turn depend on \\hyperref[def:formal_language]{languages}. Formal definitions for truth values are given using \\hyperref[def:heyting_algebra]{Heyting} and \\hyperref[def:boolean_algebra]{Boolean algebras}\n\nThese definitions help us define the theory necessary to study following two important intertwined topics:\n\\begin{itemize}\n  \\item We are interested in establishing whether the formula \\( \\varphi \\) logically entails the formula \\( \\psi \\). This is done using \\hyperref[def:deductive_system]{deductive systems} which specify precisely how we can manipulate strings of symbols. This aspect is called \\term{syntactic} or \\term{logical} and is the basis or \\hyperref[def:proof_derivability]{proof theory}. Formulas allow us to express statements about mathematics and proof theory allows us to systematically study the relationships between them.\n\n  \\item Given a formula, we are interested in assigning a meaning to it. Different logical systems provide different syntax that is useful for different purposes - \\hyperref[def:propositional_syntax/formula]{propositional formulas} allow us to express complex relationships between propositions via \\hyperref[subsec:boolean_functions]{Boolean functions} while \\hyperref[def:first_order_syntax/formula]{first-order logic formulas} allow us to go one level lower and give a precise meaning to these propositions via \\hyperref[def:first_order_structure]{structures}. This aspect of logic is called \\term{semantical} and is the basis of \\hyperref[subsec:first_order_models]{model theory}. Model theory allows us to study logical formulas using pre-existing mathematics.\n\\end{itemize}\n\nThere are two aspects in which logical systems are categorized:\n\\begin{itemize}\n  \\item \\hyperref[subsec:propositional_logic]{Propositional} and \\hyperref[subsec:first_order_logic]{first-order logic} (among others) differ in what their syntax allows us to express. This also means that they differ in what their semantics can express, but, just as the syntax of first-order logic is a superset of the syntax of propositional logic, \\hyperref[subsec:boolean_functions]{Boolean functions} can express relations between quantifierless atomic formulas in any \\hyperref[def:first_order_structure]{structures}. In other words, semantics are identical in places where the syntax is the same.\n\n  \\item \\hyperref[def:classical_logic]{Classical} and \\hyperref[def:intuitionistic_logic]{intuitionistic logic} (among others) differ in their semantics and their logical \\hyperref[def:judgment/inference_rule]{inference rules}. This has two implications\n  \\begin{itemize}\n     \\item Boolean functions describe \\hyperref[def:classical_logic]{classical logic}, however they fail to describe \\hyperref[def:intuitionistic_logic]{intuitionistic logic} because double negation elimination \\eqref{eq:thm:minimal_propositional_negation_laws/dne} no longer holds and neither do other similar statements. 
So, while retaining the same syntax, we must resort to much more complicated semantical frameworks like \\hyperref[def:propositional_heyting_algebra_semantics]{Heyting} or \\hyperref[def:propositional_topological_semantics]{topological semantics}.\n\n     \\item The proof theory that describes classical logic no longer matches the semantics, hence we must resort to other proof systems. This turns out not to be trivial because we need a clear understanding of which logical axioms imply the others. \\Fullref{subsec:deductive_systems} lists different proof systems and their corresponding semantics.\n  \\end{itemize}\n\\end{itemize}\n\n\\begin{remark}\\label{rem:metalogic}\n  The statements of mathematical logic can themselves be studied logically. We distinguish between the \\term{object logic} which we study and the \\term{metalogic} which we use to study it. It is possible, for example, to study intuitionistic propositional logic using classical first-order logic. The metalogic is usually less formal and its statements are written in prose for the sake of easier understanding.\n\n  It is an exercise in futility to try to completely formalize the language, syntax and theory of the metalogic --- the \\term{metalanguage}, \\term{metasyntax} and \\term{metatheory}. We must take a given metalogical framework for granted and then study a certain object logical framework. This is not to say that the principles and rules that hold in the metalogic are immaterial --- see for example the discussion of the differences between \\hyperref[def:intuitionistic_logic]{intuitionistic logic} and \\hyperref[def:classical_logic]{classical logic}. This is to say that it makes little sense to attempt to study the metalogic because at that point it becomes the object logic and the still more abstract conceptual framework in which we reason about the metalogic now becomes the new metalogic. We can thus form a hierarchy that is unbounded in both directions --- we can study a more concrete object logic within the object logic, and we can jump from one metalogical level to the next.\n\n  An important connection between the logic and metalogic is given in \\fullref{rem:set_definition_recursion}.\n\\end{remark}\n\n\\begin{definition}\\label{def:classical_logic}\n  Classical logic is a vague term that we use to describe a semantic framework, \\fullref{def:propositional_semantics}, and a matching \\hyperref[def:deductive_system]{deductive system}, \\fullref{def:classical_propositional_deductive_systems}, for \\hyperref[subsec:propositional_logic]{propositional logic} and also a semantic framework, \\fullref{def:first_order_semantics}, and a matching deductive system, \\fullref{def:first_order_natural_deduction_system}, for \\hyperref[subsec:first_order_logic]{first-order logic}, among others.\n\n  It is characterized by the ability to use the law of double negation elimination \\eqref{eq:thm:minimal_propositional_negation_laws/dne}. A more popular (but less accurate due to \\fullref{thm:minimal_propositional_negation_laws}) characterization is that the law of the excluded middle \\eqref{eq:thm:minimal_propositional_negation_laws/lem} holds. Within the metalogic, this law is called the \\term{principle of bivalence} and states that either a statement holds or its negation holds.\n\\end{definition}\n\n\\begin{definition}\\label{def:intuitionistic_logic}\n  Intuitionistic logic is a generalization of \\hyperref[def:classical_logic]{classical logic}. 
It is also called \\term{constructive logic} due to the \\hyperref[def:brouwer_heyting_kolmogorov_interpretation]{Brouwer-Heyting-Kolmogorov interpretation}. See \\fullref{rem:brouwer_heyting_kolmogorov_interpretation_compatibility} for further discussion of the topic.\n\n  Instead of the law of the excluded middle \\eqref{eq:thm:minimal_propositional_negation_laws/lem}, we have the strictly weaker principle of explosion \\eqref{eq:thm:minimal_propositional_negation_laws/efq} stating that everything can be proved given a contradiction.\n\n  To these ideas there correspond \\hyperref[def:propositional_heyting_algebra_semantics]{Heyting algebra semantics} and \\hyperref[def:propositional_topological_semantics]{topological semantics} and a matching deductive system, \\fullref{def:intuitionistic_propositional_deductive_systems}, for \\hyperref[subsec:propositional_logic]{propositional logic}.\n\\end{definition}\n\n\\begin{definition}\\label{def:minimal_logic}\n  Minimal logic is a generalization of \\hyperref[def:intuitionistic_logic]{intuitionistic logic}.\n\n  Instead of the law of the excluded middle \\eqref{eq:thm:minimal_propositional_negation_laws/lem} and the strictly weaker principle of explosion \\eqref{eq:thm:minimal_propositional_negation_laws/efq}, we have the even weaker law of non-contradiction \\eqref{eq:thm:minimal_propositional_negation_laws/lnc}.\n\n  Metalogically speaking, we can only conclude that there is no statement such that both the statement and its negation are true. If the statement instead does not hold, we cannot automatically conclude that its negation holds.\n\n  \\Fullref{def:minimal_propositional_axiomatic_deductive_system} provides a deductive system for \\hyperref[subsec:propositional_logic]{propositional logic}, but we avoid studying the semantics of minimal logic.\n\\end{definition}\n\n\\begin{remark}\\label{rem:mathematical_logic_conventions}\n  We will only work in \\hyperref[def:classical_propositional_deductive_systems]{classical metalogic}. Outside the section on logic, we will use formulas and, more generally, use object logic only in dedicated places like \\fullref{def:group/theory} describing the \\hyperref[def:first_order_theory]{logical theory} of groups. Most axioms like \\ref{def:norm/N1}-\\ref{def:norm/N3} for norms are formulated entirely within the metalanguage under the assumption that we are working within a model of set theory. To keep a clear distinction between logical formulas and non-logical axioms and, more generally, to distinguish between logic and metalogic, we use the following conventions:\n\n  \\begin{thmenum}\n    \\thmitem{rem:mathematical_logic_conventions/variable_symbols} Variables in the object language are denoted by small Greek letters, usually \\( \\xi, \\eta, \\zeta \\), while variables in the metalanguage are denoted by small Latin letters, usually \\( x, y, z \\). If needed, we add subscripts with indices.\n\n    \\thmitem{rem:mathematical_logic_conventions/formula_term_symbols} Formulas, which we only consider in the object language, are also denoted by small Greek letters --- \\( \\varphi, \\psi, \\theta, \\chi \\) --- and so are terms --- \\( \\tau, \\sigma, \\rho, \\kappa, \\mu, \\nu \\).\n\n    \\thmitem{rem:mathematical_logic_conventions/propositional_constants} The propositional constants denoting truth and falsity are denoted by \\( \\top \\) and \\( \\bot \\) in the object language and by \\( T \\) and \\( F \\) in the metalanguage. 
This is only for the sake of following an established convention, and we still use \\( \\top \\) and \\( \\bot \\) in general \\hyperref[def:semilattice/lattice]{lattices}.\n\n    \\thmitem{rem:mathematical_logic_conventions/connective_symbols} We usually prefer prose to symbolic quantifiers and connectives in the metalanguage. The longer double arrows \\( \\implies \\) and \\( \\iff \\) are sometimes used within the metalogic outside this section.\n\n    \\thmitem{rem:mathematical_logic_conventions/structure_pairs} We conflate structures in the metalogic (i.e. sets with functions and/or relations defined on them) with their domain --- see \\fullref{rem:first_order_model_notation} for a discussion.\n\n    \\thmitem{rem:mathematical_logic_conventions/shorthands} We additionally use syntactic shorthands like \\fullref{rem:propositional_formula_parentheses} and \\fullref{rem:first_order_formula_conventions} when writing formulas.\n\n    \\thmitem{rem:mathematical_logic_conventions/quantification} We avoid writing excessive universal quantification and instead rely on implicit universal quantification as described in \\fullref{thm:implicit_universal_quantification}. If we need the formulas to be closed, such as in the case of \\hyperref[def:first_order_theory]{first-order theories} for example, we assume all formulas are closed and if they are not, we add explicit universal quantifiers in front.\n  \\end{thmenum}\n\n  Some axioms like \\eqref{eq:def:magma/idempotent} are formulated within the metalogic for convenience and clarity, but are used as formulas in the object language in theorems like \\fullref{thm:positive_formulas_preserved_under_homomorphism}. In places like this, it is usually straightforward to translate axioms from the metalogic into logical formulas.\n\\end{remark}\n\n\\begin{remark}\\label{rem:higher_order_logic}\n  Since we describe first-order logic, it may be helpful to clarify why it is so named. It is merely a shorthand for \\enquote{first-order predicate logic}. There are other predicate logical frameworks, namely second-order predicate logic (described in \\cite[ch. VIII]{OpenLogicFull}) and higher-order predicate logic, also known as \\enquote{simple type theory} (described in \\cite[sec. 3]{Farmer2008}).\n\n  Second-order logic allows us to quantify over relations between variables. In that case, we refer to the variables of first-order logic as \\enquote{individuals} and to the relations as \\enquote{relation variables}. This allows us, for example, to avoid axiom schemas like the \\hyperref[def:zfc/specification]{axiom schema of specification} by instead replacing them with a single axiom that quantifies over unary relations. A downside of second-order logic is that it has worse properties --- it is incomplete in the sense that there exists no \\hyperref[def:deductive_system]{deductive system} that is both sound and complete\\footnote{refer to \\cite[thm. 39.6]{OpenLogicFull}} and it is not compact in the sense that the analogue to \\fullref{thm:first_order_compactness_theorem} does not hold\\footnote{refer to \\cite[thm. 39.7]{OpenLogicFull}}. This is attributed to the expressive power of second-order logic because a first-order axiom schema may have only a countable number of axioms while a second-order quantifier may range over uncountably many relations.\n\n  Clearly anything that extends second-order logic must suffer from the same consequences; however, higher-order logic is still useful because it allows us to utilize some very powerful concepts. 
Rather than quantifying over relations over relations over individuals, as would happen in third-order logic, we instead consider the more abstract frameworks of type theory. Type theory itself comes in many flavors, but simple type theory can be viewed as a generalization of first-order logic --- see \\cite[thm. 2]{Farmer2008}. The rough idea is that rather than having individual variables, relation variables, etc., we have \\term{base types} and \\term{type constructors}. The individual variables have a dedicated base type, for example, and the types of functions and predicates are easily constructed using the basic type constructors, hence it is also easy to construct higher-order functions and predicates. The syntax of simple type theory is inspired by \\( \\lambda \\)-calculus, which is a huge topic in itself and one of the frameworks for studying computability theory. The semantics of simple type theory are merely an extension of first-order semantics with different universes for different types. Like second-order logic, however, type theories have worse properties than those of first-order logic.\n\n  Another benefit of type theories is that they allow for multiple base types. For example, in the definition of a \\hyperref[def:vector_space]{vector space}, we have scalars and vectors, and we introduce an axiom schema parameterized by the scalars. In contrast, we could have a type for scalars and a type for vectors. This is also easily achievable in first-order logic via the so-called \\enquote{many-sorted first-order logic}, where the types are called \\enquote{sorts}. We lack type constructors, and thus we are restricted in how our functions and predicates are defined; however, for simple cases many-sorted first-order logic is just as useful as simple type theory. As a matter of fact, both many-sorted first-order logic languages and simple type theory languages can be reformulated as first-order logic languages --- see \\cite[ch. 8]{Farmer2008}.\n\n  We circumvent the need for any of these higher-order logical frameworks by using set theory --- see \\fullref{rem:first_order_theories_in_zfc}.\n\\end{remark}\n", "meta": {"hexsha": "92d012501e30a134d03a4324a944495f2123c821", "size": 15725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/mathematical_logic.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/mathematical_logic.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/mathematical_logic.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 172.8021978022, "max_line_length": 1314, "alphanum_fraction": 0.8116375199, "num_tokens": 3642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8807970904940926, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5891477872237154}}
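As a tiny concrete instance of the classical/intuitionistic split discussed in this section (a standard example; the $\llbracket\cdot\rrbracket$ notation for the interpretation is my own): in the topological semantics over $\mathbb{R}$, where each formula denotes an open set and negation is the interior of the complement, take

\[\llbracket\varphi\rrbracket = \mathbb{R}\setminus\{0\},\qquad \llbracket\lnot\varphi\rrbracket = \operatorname{int}\{0\} = \varnothing,\qquad \llbracket\varphi\lor\lnot\varphi\rrbracket = \mathbb{R}\setminus\{0\}\ne\mathbb{R},\]

so the law of the excluded middle is not valid intuitionistically, while under the two-element Boolean semantics of classical logic it holds for every valuation.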
{"text": "\n\\subsection{Linear probability model}\n\n\\(p=xB\\). can be outside \\([0,1]\\).\n", "meta": {"hexsha": "4ed193d89cfd070b020712472cf8b1e23a70ac13", "size": 76, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/glm/03-08-linear.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/glm/03-08-linear.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/glm/03-08-linear.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 15.2, "max_line_length": 37, "alphanum_fraction": 0.6447368421, "num_tokens": 24, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8807970779778824, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.5891477846656606}}
{"text": "\\chapter{Linear Models: Regression}\n\\label{ch:regress}\n\nAims of this chapter\\footnote{Here you work with the script file {\\tt regress.R}}:\n\n\\begin{compactitem}\n\t\\item More functions for plotting data and models.\n\t\\item Calculating correlation coefficients.\n\t\\item Fitting a regression model and significance testing.\n\t\\item Using diagnostic plots to assess model suitability.\n\\end{compactitem}\n\nAs with the previous chapter, we'll start with creating a new blank \nscript for you to fill in during the practical. We'll also be using the \ngenome size data again, so:\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Open R and change to the {\\tt code} directory.\n\t\\item Create a new blank script called `Regression.R' and add some \n\tintroductory comments.\n\t\\item Add code to your script to load the genome size data into R and \n\tcheck it.\n\\end{compactitem}\n\n\\section{Exploring the data}\n\nIn previous chapters we used {\\tt plot} to create a scatterplot between \ntwo variables. If you have a set of variables to explore, writing code \nfor each plot is tiresome, so R provides a the function {\\tt pairs}, \nwhich creates a grid of scatterplots between each pair of variables. \nAll it needs is a dataset.\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Add {\\tt pairs(genome, col=genome\\$Suborder)} into your script \n\tand run the code. \n\\end{compactitem}\n\nThe result is messy! There are far too many variables in {\\tt genome} \nfor this to be useful. We need to cut down the data to fewer variables. \nIn Chapter \\ref{ch:ExpDesign}, we used indices to select colours; here, we can use \nindices to select columns from the data frame. This again uses square \nbrackets ({\\tt x[ ]}), but a data frame has two dimensions, rows and \ncolumns, so  you need to provide an index for each dimension, separated \nby commas. If an index is left blank, then all of that dimension (i.e. \nall rows or columns) are selected. Try the following to re-acquaint \nyourself to access data frame content using indices:  \n\n\\begin{lstlisting}\n# create a small data frame:\n> dat <- data.frame(A = c(\"a\", \"b\", \"c\", \"d\", \"e\"), B = c(1, 2, 3, 4, 5))\n> dat[1, ] # select row 1 (all columns selected)\n\tA B\n1 a 1\n\n> dat[, 2] # select column 2 (all rows selected)\n[1] 1 2 3 4 5\n> dat[2, 1] # select row 2, column 1\n[1] \"b\"\n\n\\end{lstlisting}\n\nNow let's get started with the actual analysis. We will look at five \nkey variables: genome size, body weight, total length, forewing length \nand forewing area. If you look at the output of {\\tt str(genome)}, \nyou'll see that these are in columns 4, 7, 8, 12 and 14. We can record \nthe indices of these columns and use this to select the data in the \npairs plot.\n\n\\begin{lstlisting}\nmorpho <- c(4, 7, 8, 12, 14)\npairs(genome[, morpho], col = genome$Suborder)\t\n\\end{lstlisting}\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Add the code above to your script and run it\n\\end{compactitem}\n\nThe {\\tt pairs} plot should give you something like the plot below:\n\n\\begin{center}\n\t\\includegraphics[width=\\textwidth]{pairs.pdf} \n\\end{center}\n\nEach scatterplot is shown twice, with the variables swapping between \nthe $x$ and $y$ axes. 
You can see immediately that the relationships \nbetween the four morphological measurements and genome size are fairly \nscattered but that the plots comparing morphology show much clearer \nrelationships.\n\n\\section{Correlations}\n\nOne way of summarising how strong the relationships between these \nvariables are is to calculate a correlation coefficient. Pearson \ncorrelations look at the difference of each point from the mean of each \nvariable (and since they use means, they are parametric statistics). \n\nThe coefficient is calculated using the differences from the mean on each axis. \nThe key calculation is --- for each point --- to get the product of the \ndifferences on each axis and add them up. If the points are mostly top \nleft ($-x$, $y$) or bottom right ($x$, $-y$) then these products are \nmostly negative ($-xy$); if the points are mostly top right ($x$, $y$) \nor bottom left ($-x$, $-y$) then the products are mostly positive \n($xy$).  \n\n\\begin{center}\n\t\\includegraphics[width=\\textwidth]{corr.pdf} \n\\end{center}\n\nThe plots above show three clear cases where all the values of $xy$ are \nnegative or positive or where both are present and sum to zero. The \nPearson correlation coefficient simply scales these sums of $xy$ to be \nbetween -1 (perfectly negatively correlated) and 1 (perfectly \npositively correlated) via zero (no correlation).\n\nWe will use two functions to look at correlations. The first is {\\tt \ncor}, which can calculate correlations between pairs of variables, so \nis a good partner for {\\tt pairs} plots. The second is {\\tt cor.test}, \nwhich can only compare a single pair of variables, but uses a $t$ test \nto assess whether the correlation is significant. \n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Try the following (and include it in your R script file)\n\\end{compactitem}\n\n\n\\begin{lstlisting}\n> cor(genome[, morpho], use = \"pairwise\")\n\\end{lstlisting}\n\nYou should see a correlation matrix. Then,\n\n\\begin{lstlisting}\n> cor.test(genome$GenomeSize, genome$TotalLength, use = \"pairwise\")\t\n\nPearson's product-moment correlation\n\ndata: genome$GenomeSize and genome$TotalLength\nt = 3.551, df = 96, p-value = 0.0005972\n\nalternative hypothesis: true correlation is not equal to 0 \n95 percent confidence interval:\n  0.1526 0.5050 \nsample estimates:\n    cor \n 0.3407   \n\\end{lstlisting}\n\nThe {\\tt use='pairwise'} in the above tells R to omit observations with \nmissing data and use complete pairs of observations. The first function \nconfirms our impressions from the graphs: the correlations between \ngenome size and morphology are positive but comparatively weak and the \ncorrelations between morphological measurements are positive and very \nstrong (i.e. close to 1). The correlation test tells us that genome \nsize and body length are positively correlated (r=0.34, $t$ = 3.5507, \ndf = 96, $p$ = 0.0006).\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Again, remember this example when reporting correlations!\n\\end{compactitem}\n\n\\section{Transformations and allometric scaling}\n\nThere is one problem with the correlations above: {\\it correlations \nassume a straight line relationship}. Some of the scatterplots above \nare fairly straight but there are some strongly curved relationships. 
\nThis is due to the allometric scaling mentioned in Chapter \\ref{ch:ExpDesign}: two of \nthe variables are in linear units (total and forewing length), one is \nin squared units (forewing area) and one in cubic units (body weight, \nwhich is approximately volume).\n\nThe relationships between these variables can be described using a \npower law: $y = ax^b$. Fortunately, if we log transform this equation, \nwe get $\\log(y) = \\log(a) + b \\log(x)$. This is the equation of a \nstraight line ($y=a+bx$), so we should be able to make these plots \nstraighter by logging both axes. We saw in Chapter \\ref{ch:ExpDesign} that we can \ncreate a new logged variable in the data frame like this:\n\n\\begin{lstlisting}\n> genome$logGS <- log(genome$GenomeSize)\n\\end{lstlisting}\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Using this line as a template, create a new logged version of \n\tthe five variables listed above.\n\t\\item Using {\\tt str}, work out which column numbers the logged \n\tvariables are and create a new variable called {\\tt logmorpho} \n\tcontaining these numbers.\n\t\\item Copy the {\\tt pairs} and {\\tt cor} test from earlier in your \n\tscript and modify them to run these functions for the columns given \n\tin {\\tt logmorpho}.\n\\end{compactitem}\n\nThe correlations should give the following output:\n\\begin{lstlisting}\n> cor(genome[, logmorpho], use = \"pairwise\")\n\n         logGS   logBW  logTL  logFL   logFA\n logGS 1.00000 0.08406 0.2224 0.1150 0.06808\n logBW 0.08406 1.00000 0.8892 0.9456 0.94996\n logTL 0.22244 0.88919 1.0000 0.9158 0.86207\n logFL 0.11500 0.94565 0.9158 1.0000 0.97916\n logFA 0.06808 0.94996 0.8621 0.9792 1.00000\n\\end{lstlisting}\n\nThe scatterplots should look like this and show that logging the data \nhas very successfully removed allometric scaling effects in the data:\n\\begin{center}\n\\includegraphics[width=\\textwidth]{pairsLog.pdf} \n\\end{center}\n\n\\section{Regression}\n\nWe'll now look at fitting the first linear model of this course to \nexplore whether log genome size explains log body weight. The first \nthing to do is to plot the data:\n\n\\begin{center}\n\t\\includegraphics[width=0.5\\textwidth]{gsVbw.pdf} \n\\end{center}\n\nIt is clear that the two suborders have very different relationships: \nto begin with we will look at dragonflies (Anisoptera). We will \ncalculate two linear models:\n\n\\begin{compactdesc}\n\t\\item [The null model] This is the simplest linear model: nothing is \n\tgoing on and the response variable just has variation around the \n\tmean: $y = \\beta_1$. This is written as an R formula as {\\tt y \n\t\\textasciitilde{} 1}.\n\t\\item [Linear regression] This models a straight line relationship \n\tbetween the response variable and a continuous explanatory variable: \n\t$y= \\beta_1 + \\beta_{2}x$.\n\\end{compactdesc}\n\nThe code below fits these two models.\n\n\\begin{lstlisting}\n> nullModelDragon <- lm(logBW ~ 1, data = genome, subset = Suborder == \n\"Anisoptera\")\n> genomeSizeModelDragon <- lm(logBW ~ logGS, data = genome, subset = \nSuborder == \"Anisoptera\")\n\\end{lstlisting}\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Note the long names for the models. Short names are easier to \n\ttype but calling R objects names like {\\tt mod1},  {\\tt mod2},  {\\tt \n\txxx} swiftly gets confusing!   \n\t\\item Enter these models into your script and run them.\n\\end{compactitem}\n\nNow we want to look at the output of the model. 
Remember from the \nlecture that a model has {\\it coefficients} (the $\\beta$ values in the \nequation of the model) and {\\it terms} which are the explanatory \nvariables in the model. We'll look at the {\\it coefficients} first:\n\n\\begin{lstlisting}\n> summary(genomeSizeModelDragon) \n Call:\n lm(formula = logBW ~ logGS, data = genome, subset = Suborder == \n     \"Anisoptera\")\n \n Residuals:\n    Min     1Q Median     3Q    Max \n -1.324 -0.612  0.097  0.519  1.324 \n \n Coefficients:\n             Estimate Std. Error t value Pr(>|t|)    \n (Intercept)  -2.3995     0.0908  -26.41  < 2e-16 ***\n logGS         1.0052     0.2398    4.19  9.5e-05 ***\n ---\n Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 \n \n Residual standard error: 0.697 on 58 degrees of freedom\n   (2 observations deleted due to missingness)\n Multiple R-squared: 0.233,\tAdjusted R-squared: 0.219 \n F-statistic: 17.6 on 1 and 58 DF,  p-value: 9.54e-05  \n\\end{lstlisting}\n\nThere is a lot of information there: the model description (`{\\tt \nCall}'), a summary of the residuals, a table of coefficients and then \ninformation on residual standard error, r squared and an $F$ test. All \nof these will become clearer during this course --- for the moment, \nconcentrate on the coefficients table.\n\nThere are two rows in the coefficient table, one for each coefficient \nin $y=\\beta_1 + \\beta_2x$ --- these are the intercept and the slope of \nthe line. The rest of the details on each row are a $t$ test of whether \nthe slope and intercept are significantly different from zero. \n\nNow we will look at the {\\it terms} of the model using the {\\tt anova} \nfunction. We will have a proper look at ANOVA (Analysis of Variance) in \nchapter \\ref{ch:ANOVA}. Meanwhile, for our current purposes, all you need to \nknow is that ANOVA tests how much variation in the response variable is \nexplained by each explanatory variable. We only have one variable and \nso there is only one row in the output:\n\n\\begin{lstlisting}\n> anova(genomeSizeModelDragon)\n\n Analysis of Variance Table\n \n Response: logBW\n           Df Sum Sq Mean Sq F value  Pr(>F)    \n logGS      1   8.53    8.53    17.6 9.5e-05 ***\n Residuals 58  28.14    0.49                    \n ---\n Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 \n\\end{lstlisting}\n\nThis table is comparing the variation in log body weight explained by \nlog genome size to the total variation in log body weight. We are \ninterested in how much smaller the residuals are for the genome size \nmodel than the null model. Graphically, how much shorter are the red \nresiduals than the blue residuals:\n\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{regResid.pdf} \n\\end{center}\n\nWe can get the sums of the squares of these residuals from the two \nmodels using the function {\\tt resid}, and then square them and add \nthem up:\n\n\\begin{lstlisting}\n> sum(resid(nullModelDragon) ^ 2)\n [1] 36.67\n \n> sum(resid(genomeSizeModelDragon) ^ 2)\n [1] 28.14\n\\end{lstlisting}\n\nSo we have five columns in the table:\n\\begin{compactdesc}\n\t\\item[Df] This shows the degrees of freedom. Each fitted parameter/coefficient takes up \n\ta degree of freedom from the total sample size, and the ones left over are the residual degrees of freedom. In this \n\tcase, genome size adds a slope (compare the null model $y=\\beta_1$ \n\tand this model $y=\\beta_1 + \\beta_2x$ --- there is one more $\\beta$). \n\t\\item[Sum Sq] This shows sums of squares. 
The bottom line is the \n\tresidual sum of squares for the model and the one above is the \n\tvariation explained by genome size. Using the two values from above, \n\tthe sum of squared residuals for the null model is 36.67. In the \n\tgenome size model, the sum of squared residuals is 28.14, and so \n\t$36.67-28.14=8.53$ units of variance have been explained by this \n\tmodel.\n\t\\item[Mean Sq] These are just the Sum Sq (Sum of Squares) values divided by the \n\tdegrees of freedom. The idea behind this is simple: if we explain \n\ta lot of variation for each degree of freedom we spend, that is good, and if we only explain a \n\tsmall amount of extra variation at the cost of a degree of freedom (by adding and then estimating more parameters), then that is bad.\n\t\\item[F value] This is the ratio of the Mean Sq for the variable and \n\tthe residual Mean Sq. This is used to test whether the explained \n\tvariation is large or small.\n\t\\item[Pr(>F)] This is a $p$ value --- the probability of the variable \n\texplaining this much variance by chance. \n\\end{compactdesc}\n\nIn this case, it is clear that genome size explains significant \nvariation in body weight. \n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Include the {\\tt summary} and {\\tt anova} commands for {\\tt \n\tgenomeSizeModelDragon} in your script, run them and check you are \n\thappy with the output.\n\t\\item Using this code as a template, create a new model called {\\tt \n\tgenomeSizeModelDamsel} that fits log body weight as a function of log \n\tgenome size for damselflies.\n\t\\item Write and run code to get the  {\\tt summary} and {\\tt anova} \n\ttables for this new model.\n\\end{compactitem}\n\n\\section{Plotting the model}\n\nNow we can plot the data and add lines to show the models. For simple \nregression models, we can use the function {\\tt abline(modelName)} to \nadd a line based on the model.\n\\begin{compactitem}[$\\quad\\star$]\n \\item You already know how to create and customise scatterplots from \n previous chapters. Create a plot of log body weight as a function of \n log genome size, picking your favourite colours for the points.\n \\item Use {\\tt abline} to add a line for each model and use the {\\tt \n col} option in the function to colour each line to match the points. \n For example: {\\tt abline(genomeSizeModelDragon, col='red')}.\n\\end{compactitem}\n\nYou should get something like Figure \\ref{fig:GenoRegModels}.\n\n\\begin{figure} \\centering\n\t\\includegraphics[width=0.5\\textwidth]{GenoRegModels.pdf}\n\t\\caption{Linear regression models fitted to body weight vs. \n\tgenome size for the Dragonfly (red) and Damselfly (blue) subsets of \n\tthe data.}\n\t\\label{fig:GenoRegModels}\n\\end{figure}\n\n\\section{Model diagnostics}\n\nNow that we have our models, we need to check that they are appropriate \nfor the data. For this, we will inspect ``diagnostic plots''. Producing \ndiagnostic plots is easy in R --- if you {\\tt plot} a model, then R \nproduces a set of diagnostic plots! 
\n\n\\begin{compactitem}[$\\quad\\star$]\n\t\\item Try the following code (and include it in the R script file):\n\\end{compactitem}\n\\begin{lstlisting}\n> par(mfrow = c(2, 2), mar = c(5, 5, 1.5, 1.5))\n> plot(genomeSizeModelDragon)\n\\end{lstlisting}\nThis should give the plots shown in figure \\ref{fig:DiagModDragon}.\n\\begin{figure} \\centering\n\t\\includegraphics[width=0.7\\textwidth]{DiagModDragon.pdf}\n\t\\caption{Diagnostics for the {\\tt lm} fit to the Dragonfly data \n\tsubset.}\n\t\\label{fig:DiagModDragon} \n\\end{figure}\nAnd,\n\\begin{lstlisting}\n> par(mfrow = c(2, 2), mar = c(5, 5, 1.5, 1.5))\n> plot(genomeSizeModelDamsel)\n\\end{lstlisting}\nThis should give the plots shown in figure \\ref{fig:DiagModDamsel}.\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=0.7\\textwidth]{DiagModDamsel.pdf}\n\t\\caption{Diagnostics for the {\\tt lm} fit to the Damselfly data \n\tsubset.}\n\t\\label{fig:DiagModDamsel} \n\\end{figure}\n\nThe diagnostic plots are:\n\\begin{compactdesc}\n\t\t\n\t\t\\item[Residuals vs Fitted] This plot is used to spot if the \n\t\tdistribution of the residuals (the vertical distance from a point \n\t\tto the regression line) has {\\it similar variance} for different \n\t\tpredicted values (the y-value on the line corresponding to each \n\t\tx-value). There should be no obvious patterns (such as curves) or \n\t\tbig gaps. If there was no scatter, if all the points fell exactly \n\t\ton the line, then all of the dots on this plot would lie on the \n\t\tgray horizontal dashed line. The red line is a smoothed curve to \n\t\tmake it easier to see trends in the residuals. It is flat in the \n\t\tDragonfly model fit (Figure \\ref{fig:DiagModDragon}), and a bit \n\t\tmore wavy than we would like in the Damselfly model fit \n\t\t(Figure \\ref{fig:DiagModDamsel}), but there are no clear trends in \n\t\teither, which is what you hope to see. \n\t\t\n\t\t\\item[Normal Q--Q] This plot is to check whether the residuals are \n\t\t{\\it normally distributed } --- are the values of the observed \n\t\tresiduals similar to those expected under a normal distribution? \n\t\tIdeally, the points should form a perfectly straight line, \n\t\tindicating that the observed residuals exactly match the expected. \n\t\tHere, note that the points lie pretty close to the dashed line in \n\t\tboth Figures \\ref{fig:DiagModDragon} \\& \\ref{fig:DiagModDamsel}, \n\t\tbut deviate at the ends, especially for Damselflies. However, some \n\t\tdeviation is to be expected near the ends --- here these deviations \n\t\tare just about acceptable.\n\n\t\t\\item[Scale--Location] The x-axis on this plot is identical to the \n\t\tResiduals vs Fitted plot --- these are the fitted values. The y-axis \n\t\tis the square root of the {\\it standardized residuals}, which are \n\t\tresiduals rescaled so that they have a mean of zero and a variance \n\t\tof one. As a result, all y-axis values are positive. Thus large \n\t\tresiduals (both positive and negative) plot at the top, and small \n\t\tresiduals plot at the bottom (so only their {\\it scale} is \n\t\tretained). Thus, all of the numbered points (which will be the same \n\t\tin all plots) plot at the top here. The red line here shows the \n\t\ttrend, just like the Residuals vs Fitted plot. The regression \n\t\tanalysis has assumed homoscedasticity, that the variance in the \n\t\tresiduals doesn't change as a function of the predictor. If that \n\t\tassumption is correct, the red line should be relatively flat. 
It \n\t\tis not quite as flat as we would like, especially for the Dragonfly \n\t\tanalysis (Figure \\ref{fig:DiagModDragon}).\n\t\t\n\t\t\\item[Residuals vs Leverage] This plot shows the standardized \n\t\tresiduals against leverage. ``Leverage'' is a measure of how much \n\t\teach data point influences the linear model's coefficient \n\t\testimates. Because the regression line must pass through the \n\t\tcentroid (``pivot point'') of the data (Figure \\ref{fig:Leverage}), \n\t\tpoints that lie far from the centroid have greater leverage, and \n\t\ttheir leverage increases if there are fewer points nearby. There \n\t\tare two key things to note about this plot:\n\t\t\\begin{figure} \\centering\n\t\t\t\\includegraphics[width=0.4\\textwidth]{Leverage.pdf}\n\t\t\t\\caption{Leverage of data points on slope of a regression. The \n\t\t\tpoints further away from the centroid in the x-axis direction \n\t\t\thave more leverage, and can therefore move the regression line up \n\t\t\tor down (dashed red lines).}\n\t\t\t\\label{fig:Leverage} \n\t\t\\end{figure}\n\n\t\t\\begin{enumerate}\n\t\t\t\\item The standardized residuals (y-axis) are centered around \n\t\t\tzero and reach 2-3 standard deviations away from zero. They \n\t\t\tshould also lie symmetrically about zero, as would be expected \n\t\t\tfor a normal distribution. This is the case for the Damselfly \n\t\t\tplot (Figure \\ref{fig:DiagModDamsel}) , but not so much for the \n\t\t\tDragonfly plot Figure \\ref{fig:DiagModDragon}. \n\t\t\t\\item The contours values show {\\it Cook's distance} (only \n\t\t\tvisible in the Damsefly plot), which measures how much the \n\t\t\tregression would change if a point was deleted. Cook's distance \n\t\t\tis increased by leverage and by large residuals: a point far from \n\t\t\tthe centroid with a large residual can severely distort the \n\t\t\tcoefficient estimates from the regression. On this plot, you want to see that the red smoothed \n\t\t\tline stays close to the horizontal gray dashed line and that no \n\t\t\tpoints have a large Cook's distance (i.e, >0.5). Both are true \n\t\t\there.\n\t\t\\end{enumerate}\n\t\tThis is an important diagnostic plot in regression analyses in \n\t\tparticular because it tells you whether your estimate of the slope \n\t\tcoefficient in particular is strongly affected by certain data \n\t\tpoints.  \n\n\\end{compactdesc}\nNote that certain points are numbered in all the plots --- these are \npoints to pay special attention to because they are {\\it potential} \noutliers. The numbers correspnd to the row number for that dataset in \nyour data frame. You can easily identify these points in your data plot \n(Figure \\ref{fig:GenoRegModels}) because the order of the points along \nthe fitted values axis (y-axis) in the diagnostic plot matches the \norder along the x-axis in the data plot. So, fo example here, in Figure \n\\ref{fig:DiagModDragon}, the two numbered points (46, 10) near the \nbottom correspond in the data plot (Figure \\ref{fig:GenoRegModels}) to \nthe two red points near the center-left that lie farthest below the red \nline.\n\nThus, neither the Drangonfly nor the Damselfly diagnostic plots look \nperfect, but this level of deviation from assumptions of linear models \nis acceptable. 
The main worrying factors are that the QQ plot for \nDamselflies indicates the observed residuals are a bit more extreme \nthan expected, and the Scale--Location plot for Dragonflies suggests \nsome pattern in the standardized residuals with respect to the location of the fitted \nvalues.\n\n\\begin{compactitem}[$\\quad\\star$]\n \\item Copy the code to create the diagnostic plots into your script to \n keep a record of the code and run it.\n\\end{compactitem}\n\n\\section{Reporting the model}\n\nNow that we know the models are appropriate and we have a plot, the \nlast thing is to report the statistics. For the damselfly model, here \nis one summary that would do: log genome size explains significant \nvariation in log body weight in damselflies (F=10.5, df=1,36, p=0.0025) \nand shows that body weight decreases with genome size (intercept: \n-4.65,  se=0.09; slope: -1.14, se=0.35).\n", "meta": {"hexsha": "25f078e10d12196b7d964f2e68a0075af20a4b10", "size": 23204, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archived/SilCompBioStat/regress.tex", "max_stars_repo_name": "mathemage/TheMulQuaBio", "max_stars_repo_head_hexsha": "63a0ad6803e2aa1b808bc4517009c18a8c190b4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-12T13:33:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-12T13:33:14.000Z", "max_issues_repo_path": "archived/SilCompBioStat/regress.tex", "max_issues_repo_name": "OScott19/TheMulQuaBio", "max_issues_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archived/SilCompBioStat/regress.tex", "max_forks_repo_name": "OScott19/TheMulQuaBio", "max_forks_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4981684982, "max_line_length": 126, "alphanum_fraction": 0.7421996208, "num_tokens": 6319, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145998, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5890334680943693}}
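If you would rather pull the numbers in that summary straight out of the fitted model than read them off the console, something like the following works (a sketch: it assumes the damselfly suborder is coded as "Zygoptera" in {\tt genome\$Suborder}; check with {\tt levels(genome\$Suborder)}):

    # refit the damselfly model, then extract the pieces used in the write-up
    genomeSizeModelDamsel <- lm(logBW ~ logGS, data = genome,
                                subset = Suborder == "Zygoptera")
    summary(genomeSizeModelDamsel)$coefficients[, c("Estimate", "Std. Error")]
    anova(genomeSizeModelDamsel)  # F value, degrees of freedom and p value for logGS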
{"text": "\\begin{verbatim}\n\\subsection{LU Decomposition}\n\nGiven the linear system \\eqref{eq:varyRHS}\n\\begin{equation}\n  \\label{eq:varyRHS}\n  \\begin{cases}\n    3x + y + z = m \\\\\n    x - 5y + 2z = 5 \\\\\n    2x + y + 5z = 10\n  \\end{cases}\n\\end{equation}\nwhere $m = 0, 1, 2, \\ldots, 20$. Using LU Decomposition we can obtain the\nsolution to the linear system \\eqref{eq:varyRHS} for corresponding $m$\n(see Table \\ref{tab:solution} and Figure \\ref{fig:solution}).\n\n\\input{../src/solution.tex}\n\n\\begin{figure}[!hbtp]\n  \\centering\n  \\includegraphics[width=0.85\\textwidth]{../src/lab_06_plot.pdf}\n  \\caption{Solution to the linear system vs. $m$}\n  \\label{fig:solution}\n\\end{figure}\n\\end{verbatim}\n", "meta": {"hexsha": "ce5284ba70de4da3d163b756b3fee9285413dc0b", "size": 678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "courses/template/MATH3341/Math.3341.Lab.06/Math.3341.Lab.06.Report/LaTeX/latex.tex", "max_stars_repo_name": "butlerm0405/math3341", "max_stars_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "courses/template/MATH3341/Math.3341.Lab.06/Math.3341.Lab.06.Report/LaTeX/latex.tex", "max_issues_repo_name": "butlerm0405/math3341", "max_issues_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "courses/template/MATH3341/Math.3341.Lab.06/Math.3341.Lab.06.Report/LaTeX/latex.tex", "max_forks_repo_name": "butlerm0405/math3341", "max_forks_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0769230769, "max_line_length": 73, "alphanum_fraction": 0.6784660767, "num_tokens": 246, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5890334674095485}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CS240: Programming in C\n% Copyright 2016 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% https://github.com/ghorbanzade/UMB-CS240-2016S/blob/master/LICENSE\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def \\topDirectory {.}\n\\def \\resDirectory {\\topDirectory/src/c/main/ls05}\n\\def \\texDirectory {\\topDirectory/src/tex}\n\\def \\styDirectory {\\texDirectory/sty}\n\\def \\cfgDirectory {\\texDirectory/cfg}\n\\def \\imgDirectory {\\texDirectory/img}\n\n\\documentclass[compress]{beamer}\n%\\mode<presentation>\n%\\usetheme{default}\n\n\\usepackage{\\styDirectory/directives}\n\\input{\\cfgDirectory/config}\n\\usepackage{\\styDirectory/beamerthemePejman}\n\\doc{number}{5}\n%\\setbeamertemplate{footline}[text line]{}\n\n\\usepackage{booktabs}\n\\usepackage{graphics}\n\n\\begin{document}\n\n\\prepareCover\n\n\\section{Operators}\n\n\\begin{slide}\n\t\\begin{block}{Definition}\n\n\tAn operation is an action performed on one or more values either to modify the value or to produce a new one.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Arithmetic Operators}\n\n\t\\begin{columns}\n\t\\column{0.4\\textwidth}\n\n\t\\begin{terminal}\n\tint a = 10;\n\tint b = 25;\n\t\\end{terminal}\n\n\t\\column{0.6\\textwidth}\n\t\\begin{table}\n\t\\resizebox{\\columnwidth}{!}{\n\t\\begin{tabular}{lcc}\n\t\\toprule\n\tOpeartor & Example & Output \\\\\n\t\\midrule\n\taddition & \\texttt{a + b} & 35 \\\\\n\tsubtraction & \\texttt{a - b} & -15 \\\\\n\tmultiplication & \\texttt{a * b} & 250 \\\\\n\tdivision & \\texttt{b / a} & 2 \\\\\n\tmodulus & \\texttt{b \\% a} & 5 \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\end{table}\n\t\\end{columns}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Quiz}\n\n\tWhat is the final value of variables \\texttt{d} and \\texttt{e}?\n\n\t\\begin{terminal}\n\tchar c = 'a';\n\tint d = c + 3;\n\tchar e = d + 1;\n\t\\end{terminal}\n\n\t\\end{block}\n\n\t\\pause\n\n\t\\begin{block}{Answer}\n\n\t\\begin{terminal}\n\tprintf(\"%d\", c); // prints: 97\n\tprintf(\"%d\", d); // prints: 100\n\tprintf(\"%c\", e); // prints: 'e'\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Relational Operators}\n\n\t\\begin{columns}\n\t\\column{0.4\\textwidth}\n\n\t\\begin{terminal}\n\tint a = 99;\n\tint b = 100;\n\tchar c = 'c';\n\tchar d = 'd';\n\t\\end{terminal}\n\n\t\\column{0.6\\textwidth}\n\t\\begin{table}\n\t\\resizebox{\\columnwidth}{!}{\n\t\\begin{tabular}{lcc}\n\t\\toprule\n\tOperation & Example & Output \\\\\n\t\\midrule\n\tgreater than & \\texttt{a > b} & 0 \\\\\n\tless than & \\texttt{a < b} & 1 \\\\\n\tgreater than or equal to & \\texttt{a >= c} & 1 \\\\\n\tless than or equal to & \\texttt{b <= d} & 1 \\\\\n\tequal to & \\texttt{a == c} & 1 \\\\\n\tnot equal to & \\texttt{b != d} & 0 \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\end{table}\n\t\\end{columns}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Logical Operators}\n\n\t\\begin{columns}\n\t\\column{0.4\\textwidth}\n\n\t\\begin{terminal}\n\tint a = 99;\n\tint b = 100;\n\tchar c = 'c';\n\tchar d = 'd';\n\t\\end{terminal}\n\n\t\\column{0.6\\textwidth}\n\t\\begin{table}\n\t\\resizebox{\\columnwidth}{!}{\n\t\\begin{tabular}{lcc}\n\t\\toprule\n\tOperation & Example & Output \\\\\n\t\\midrule\n\tlogical AND & \\texttt{a < c \\&\\& a < d} & 0 
\\\\\n\tlogical OR & \\texttt{a < b || a < c} & 1 \\\\\n\tlogical NOT & \\texttt{!(a < b)} & 0 \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\end{table}\n\t\\end{columns}\n\n\tExpressions connected by logical operators are evaluated from left to right. Evaluation terminates once the final value is determined.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Increment and Decrement Operators}\n\n\t\\begin{columns}\n\t\\column{0.4\\textwidth}\n\n\t\\begin{terminal}\n\tint a = 1;\n\t\\end{terminal}\n\n\t\\column{0.6\\textwidth}\n\t\\begin{table}\n\t\\resizebox{\\columnwidth}{!}{\n\t\\begin{tabular}{lcc}\n\t\\toprule\n\tOperation & Example & Output \\\\\n\t\\midrule\n\tincrement & \\texttt{a++} & 1 \\\\\n\tincrement & \\texttt{++a} & 2 \\\\\n\tdecrement & \\texttt{a--} & 1 \\\\\n\tdecrement & \\texttt{--a} & 0 \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\end{table}\n\t\\end{columns}\n\n\tEach example starts from \\texttt{a = 1}; the output is the value of the expression itself, while \\texttt{a} ends up at 2 (increment) or 0 (decrement).\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Quiz}\n\n\tWhat is the output of the following code snippet?\n\n\\begin{terminal}\nint a = 1;\nint b = 2;\nif (++a && a == b)\n    printf(\"Programming\");\nif (b++ == a)\n    printf(\" is fun\");\n\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Bitwise Operators}\n\n\t\\begin{columns}\n\t\\column{0.45\\textwidth}\n\t\\begin{terminal}\n\tint a = 0b11110000;\n\tint b = 0b10101010;\n\t\\end{terminal}\n\n\t\\column{0.55\\textwidth}\n\t\\begin{table}\n\t\\resizebox{\\columnwidth}{!}{\n\t\\begin{tabular}{lcc}\n\t\\toprule\n\tOperation & Example & Output \\\\\n\t\\midrule\n\tbitwise AND & \\texttt{a \\& b} & 0b10100000 \\\\\n\tbitwise inclusive OR & \\texttt{a | b} & 0b11111010 \\\\\n\tbitwise exclusive OR & \\texttt{a \\^{} b} & 0b01011010 \\\\\n\tleft binary shift & \\texttt{a << 1} & 0b11100000 \\\\\n\tright binary shift & \\texttt{a >> 1} & 0b01111000 \\\\\n\tone's complement & \\texttt{\\~{}a} & 0b00001111 \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t}\n\t\\end{table}\n\t\\end{columns}\n\n\tOutputs are shown as 8-bit patterns, i.e. the low byte of each result.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Assignment Operators}\n\n\tAn assignment operator (\\alert{\\texttt{=}}) stores a new value into a variable.\n\n\tAssignment statements such as ${exp}_1 = {exp}_1 {op} {exp}_2$ can equivalently be written as ${exp}_1 {op}= {exp}_2$ where ${op}$ is one of the following operators:\n\n\t\\texttt{+}\\hfill \\texttt{-}\\hfill \\texttt{*}\\hfill \\texttt{/}\\hfill \\texttt{\\%}\\hfill \\texttt{<<}\\hfill \\texttt{>>}\\hfill \\texttt{\\&}\\hfill \\texttt{\\^{}}\\hfill \\texttt{|}\n\n\tAn assignment statement has a value and can occur in expressions.\n\n\t\\end{block}\n\\end{slide}\n\n\\section{Conditionals}\n\n\\begin{slide}\n\t\\begin{block}{Statements}\n\n\tA statement is the smallest standalone element that provides a meaningful machine instruction. 
A statement is terminated by a semicolon.\n\n\tThe behavior of a program is determined by the statements in its \\texttt{main} function, executed in succession.\n\tA program terminates once all statements in its \\texttt{main} function are executed.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{if} statement}\n\n\tAn \\texttt{if} statement contains an expression followed by a block of statements.\n\tStatements within the block are executed if the expression evaluates to non-zero.\n\n\t\\begin{terminal}\n\tif (@*\\textit{expression}*@) {\n\t    @*$\\mathit{statement}_1$*@;\n\t    @*$\\mathit{statement}_2$*@;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Coding Style Convention}\n\n\t\\begin{terminal}\n\tif (number_statements > 1) {\n\t    printf(\"MUST\");\n\t    printf(\"preserve braces\");\n\t}\n\tif (number_statements == 1)\n\t    printf(\"remove braces\");\n\tif (number_statements == 0);\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Coding Style Convention}\n\n\t\\begin{terminal}\n\tint a = 2;\n\tif (a != 0)\n\t    printf(\"Hello\");\n\t\\end{terminal}\n\n\tIt is often recommended to avoid redundancy in expressions.\n\n\t\\begin{terminal}\n\tint a = 2;\n\tif (a)\n\t    printf(\"Hello\");\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{if-else} statement}\n\n\t\\begin{terminal}\n\tint a = 2;\n\tif (a == 2)\n\t    printf(\"Hello\");\n\telse\n\t    printf(\"Can you hear me?\");\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{if-else-if} statement}\n\n\t\\begin{terminal}\n\tint a = 3;\n\tif (a == 1)\n\t    printf(\"Hello\");\n\telse if (a == 2)\n\t    printf(\"Can you hear me?\");\n\telse\n\t    printf(\"How are you?\");\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Coding Style Convention}\n\n\tBraces should be used for all branches if at least one of them contains more than one statement.\n\n\t\\begin{terminal}\n\tint a = 3;\n\tif (a == 1) {\n\t    printf(\"Hello\");\n\t} else {\n\t    printf(\"How are you?\");\n\t    printf(\"Can you hear me?\");\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{switch} statement}\n\n\tThe \\texttt{switch} statement is a multi-way decision that tests whether an expression matches one of a number of \\alert{constant} integer values, and branches accordingly.\n\n\t\\begin{terminal}\n\tswitch (@*\\textit{expression}*@) {\n\tcase @*$\\mathit{val}_1$*@:\n\t    @*\\textit{statements}*@;\n\tcase @*$\\mathit{val}_2$*@:\n\t    @*\\textit{statements}*@;\n\tdefault:\n\t    @*\\textit{statements}*@;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\section{Loops}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{while} statement}\n\n\tA \\texttt{while} statement contains an expression followed by a block of statements.\n\n\tStatements within the block are executed \\emph{as long as} the expression evaluates to non-zero.\n\n\t\\begin{terminal}\n\twhile (@*\\textit{expression}*@) {\n\t    @*\\textit{statements}*@;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{do-while} statement}\n\n\tA \\texttt{do-while} statement is a \\texttt{while} loop whose expression is evaluated after execution of its block of statements.\n\n\t\\begin{terminal}\n\tdo {\n\t    @*\\textit{statements}*@;\n\t} while 
(@*\\textit{expression}*@);\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{for} statement}\n\n\tA \\texttt{for} statement contains three expressions followed by a block of statements.\n\n\t\\begin{itemize}\n\t\\item[] $\\mathit{exp}_1$ pre-loop expression\n\t\\item[] $\\mathit{exp}_2$ iteration condition\n\t\\item[] $\\mathit{exp}_3$ post-iteration expression\n\t\\end{itemize}\n\n\t\\begin{terminal}\n\tfor (@*$\\mathit{exp}_1$*@; @*$\\mathit{exp}_2$*@; @*$\\mathit{exp}_3$*@) {\n\t    @*\\textit{statements}*@;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Remember}\n\n\t\\begin{itemize}\n\t\\item[] A \\texttt{for} statement is often used when the number of loop iterations is already known.\n\t\\item[] Mastering the \\texttt{while} statement can boost your code quality.\n\t\\end{itemize}\n\n\t\\begin{terminal}\n\tint i = 0;\n\twhile (str[i++] != '\\0');\n\tprintf(\"length of str: %d\\n\", i - 1);\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Infinite Loops}\n\n\t\\begin{terminal}\n\tfor (;;) {\n\t    @*\\textit{statements}*@;\n\t}\n\t\\end{terminal}\n\n\t\\begin{terminal}\n\twhile (1) {\n\t    @*\\textit{statements}*@;\n\t}\n\t\\end{terminal}\n\n\tAn infinite \\texttt{while} loop is often preferred for its superior readability.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{figure}\n\t\\includegraphics[width=0.5\\textwidth]{\\imgDirectory/wonka.jpg}\n\t\\end{figure}\n\\end{slide}\n\n\\section{Unconditional Branches}\n\n\\begin{slide}\n\t\\begin{block}{Definition}\n\n\tUnconditional branches are statements that change program control flow without the need to evaluate an expression.\n\n\tThe Bohm--Jacopini theorem states that any computation can be realized without the use of unconditional branches.\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{break} statement}\n\n\tA \\texttt{break} statement immediately terminates the loop regardless of its iteration condition and moves program control flow to the next statement after the loop.\n\n\t\\begin{terminal}\n\tint i = 2;\n\tint num = 25;\n\tint prime = 1;\n\twhile (i <= num / 2) {\n\t    if (num % i == 0) {\n\t        prime = 0;\n\t        break;\n\t    }\n\t    i++;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{continue} statement}\n\n\tA \\texttt{continue} statement skips the remaining statements of the loop and moves program control flow to the next iteration of the loop.\n\n\t\\begin{terminal}\n\tint i = 2;\n\tint num = 25;\n\tint prime = 1;\n\twhile (i <= num / 2) {\n\t    if (num % i++)\n\t        continue;\n\t    prime = 0;\n\t}\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{\\texttt{goto} statement}\n\n\tA \\texttt{goto} statement changes program control flow to the label it points to.\n\t\\begin{terminal}\n\t    a = 2;\n\t    if (a == 2)\n\t        goto ERROR;\n\t    printf(\"Hello\");\n\nERROR:\n\t    printf(\"Goodbye\");\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{block}{Labels}\n\n\tA label is simply a name for a certain point in the program control flow that may be referred to by a \\texttt{goto} statement.\n\n\t\\begin{terminal}\n\t    a = 2;\n\t    if (a == 2)\n\t        goto ERROR;\n\t    printf(\"Hello\");\n\nERROR:\n\t    printf(\"Goodbye\");\n\t\\end{terminal}\n\n\t\\end{block}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{quotation} 
\\scriptsize \\normalfont\n\n\tC provides the infinitely-abusable \\texttt{goto} statement and labels to branch to.\n\t[...]\n\tThere are a few situations where \\texttt{goto}s may find a place such as breaking out of two or more loops at once.\n\n\tCode involving a \\texttt{goto} can always be written without one, though perhaps at the price of some repeated tests or an extra variable.\n\t[...]\n\tWith a few exceptions code that relies on \\texttt{goto} statements is generally harder to understand and to maintain than code without \\texttt{goto}s.\n\tAlthough we are not dogmatic about the matter, it does seem that \\texttt{goto} statements should be used rarely, if at all.\n\n\t\\begin{flushright}-- The C Programming Language (K\\&R)\\end{flushright}\n\n\t\\end{quotation}\n\\end{slide}\n\n\\begin{slide}\n\t\\begin{quotation} \\scriptsize \\normalfont\n\n\tAlbeit deprecated by some people, the equivalent of the \\texttt{goto} statement is used frequently by compilers in form of the unconditional jump instruction.\n\n\tThe \\texttt{goto} statement comes in handy when a function exits from multiple locations and some common work such as cleanup has to be done.\n\tIf there is no cleanup needed then just return directly.\n\n\tThe rationale for using \\texttt{goto}s is:\n\t(1) Unconditional statements are easier to understand and follow.\n\t(2) Nesting is reduced.\n\t(3) Errors by not updating individual exit points when making modifications are prevented.\n\t(4) Unconditional statements save the compiler work to optimize redundant code away.\n\n\t\\begin{flushright}-- Linux Kernel Coding Style\\end{flushright}\n\n\t\\end{quotation}\n\\end{slide}\n\n\\end{document}\n", "meta": {"hexsha": "48807ca85a75b8018a6f07ef819d67e351b0a6ef", "size": 13664, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/main/ls05/ls05.tex", "max_stars_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_stars_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:24.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:24.000Z", "max_issues_repo_path": "src/tex/main/ls05/ls05.tex", "max_issues_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_issues_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2016-05-16T23:55:39.000Z", "max_issues_repo_issues_event_max_datetime": "2016-06-20T03:04:35.000Z", "max_forks_repo_path": "src/tex/main/ls05/ls05.tex", "max_forks_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_forks_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.3633387889, "max_line_length": 173, "alphanum_fraction": 0.6716920375, "num_tokens": 4266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145998, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5890334641849652}}
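(Editorial note, appended after the slide deck above.) The quiz slide in the Operators section leaves its answer to the audience. Here is a complete, runnable version (my own addition, not part of the original deck) that you can compile to check your reasoning about increment operators and short-circuit evaluation:

\begin{verbatim}
#include <stdio.h>

int main(void) {
    int a = 1;
    int b = 2;
    /* ++a increments first, so a becomes 2. The left operand is
       non-zero, so the right operand is evaluated: a == b is
       2 == 2, which is true. */
    if (++a && a == b)
        printf("Programming");
    /* b++ yields the old value 2, which equals a; b only becomes 3
       after the comparison. */
    if (b++ == a)
        printf(" is fun");
    printf("\n");   /* prints: Programming is fun */
    return 0;
}
\end{verbatim}

Note how the result depends on both the pre/post distinction and the guaranteed left-to-right, short-circuit evaluation of \texttt{\&\&} described on the Logical Operators slide.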
{"text": "%------------------------------------------------------------\n%\tFILENAME: final_exam.tex\n%\t PROJECT: mathbootcamp\n%\t  AUTHOR: Brett R. Devine\n%\t   EMAIL: brett.devine@wsu.edu\n%------------------------------------------------------------\n\\documentclass[a4paper, 11pt]{article}\n\\usepackage{final_exam_style}\n\\newcounter{problem}\n\\newenvironment{problem}[1][]{%\n\t\\refstepcounter{problem}\\par \\medskip\n\t\\noindent \\textbf{Problem~\\theproblem.#1 \\rmfamily}{\\medskip}\n}\n\n\\newenvironment{solution}{ \\noindent \\textbf{Solution: \\medskip}}{}\n% To make solutions not visible use the command \\excludecomment{solution} %\n%\\excludecomment{solution}\n\n\\title{ Final Exam 2016 \\\\ \\vspace{1em }\n\t\\large WSU Economics PhD Mathematics Bootcamp\n}\n\n\\begin{document}\n\\maketitle\n\n\\begin{problem}\\\\\n\tConsider the market for commodity $x$.\n\tThe quantity demanded in this market depends on the market price for $x$, $p$, and the value of an external unpriced factor, $r$, such as the ``status symbol'' value of owning commodity $x$ which increases quantity demanded at any price.\n\tGiven a known constant $b_D$ we have the demand function:\n\t\\[\n\t\tq = b_D - 2p + r\n\t\\]\n\tThe supply of $x$ to the market increases with market price and we assume for some reason that higher values of $r$, the status symbol value, decrease the willingness to supply at any given price.\n\tImportant cost factors faced by suppliers result in a known constant $b_S$ and we have the supply function:\n\t\\[\n\t\tq = b_S + 2p - r\n\t\\]\n\tFinally, the status symbol value $r$ increases the more expensive the commodity is and it decreases as more of the commodity is supplied because it becomes more common.\n\tGiven some known constant value $b_r$, the status symbol value is determined by the function:\n\t\\[\n\t\tr = 2b_r + 4p - 2q\n\t\\]\n\t\\begin{enumerate}[(a)]\n\t\t\\item Using the functions provided, construct a proper system of linear equations to represent the market for commodity $x$.\n\t\t\\item Derive a matrix expression for the system in part (a) that takes the form $A\\mathbf{x} = \\mathbf{b}$.\n\t\t\\item The system of equations describing the market for commodity $x$ has three equations in three unknowns.\n\t\tThe system either has no solution, infinitely many solutions or a unique solution.\n\t\tPerform a test on the matrix $A$ (show your process and results) from part(b) that can tell you whether there is a unique solution and interpret the result.\n\t\t\\item Find the inverse matrix of $A$ (show it) and use it to solve the system for the equilibrium values of $q,p,r$ when we have the constants\n\t\t\\[\n\t\t\t\\begin{bmatrix}\n\t\t\t\tb_D \\\\\n\t\t\t\tb_S \\\\\n\t\t\t\tb_r\n\t\t\t\\end{bmatrix} = \n\t\t\t\\begin{bmatrix}\n\t\t\t\t10 \\\\\n\t\t\t\t8 \\\\\n\t\t\t\t2\n\t\t\t\\end{bmatrix}\n\t\t\\]\n\t\\end{enumerate}\n\\end{problem}\n\\begin{solution}\n\t\\begin{enumerate}[(a)]\n\t\t\\item The system of linear equations is\n\t\t\\begin{align}\n\t\t\tq + 2p - r &= b_D \\nonumber \\\\\n\t\t\tq - 2p + r &= b_S \\nonumber \\\\\n\t\t\tq - 2p + \\frac{1}{2}r &= b_r \\nonumber\n\t\t\\end{align}\n\t\t\\item\n\t\t\\begin{align}\n\t\t\t\\begin{bmatrix}\n\t\t\t\t1 & 2 & -1 \\\\\n\t\t\t\t1 & -2 & 1 \\\\\n\t\t\t\t1 & -2 & \\frac{1}{2}\n\t\t\t\\end{bmatrix}\n\t\t\t\\begin{bmatrix}\n\t\t\t\tq \\\\\n\t\t\t\tp \\\\\n\t\t\t\tr\n\t\t\t\\end{bmatrix} &= \n\t\t\t\\begin{bmatrix}\n\t\t\t\tb_D \\\\\n\t\t\t\tb_S \\\\\n\t\t\t\tb_r\n\t\t\t\\end{bmatrix} \\nonumber \\\\\n\t\t\tA\\mathbf{x} &= 
\\mathbf{b} \\nonumber\n\t\t\\end{align}\n\t\t\\item $\\det A = 2 \\neq 0$ so $A$ is invertible and there exists a unique solution to the system.\n\t\t\\item \\begin{align}\n\t\t\tA^{-1} &=\n\t\t\t\\begin{bmatrix}\n\t\t\t\t\\frac{1}{2} & \\frac{1}{2} & 0 \\\\\n\t\t\t\t\\frac{1}{4} & \\frac{3}{4} & -1 \\\\\n\t\t\t\t0 & 2 & -2\n\t\t\t\\end{bmatrix} \\nonumber\n\t\t\\end{align}\n\t\tand the solution is found as $\\mathbf{x} = (q,p,r) = A^{-1}\\mathbf{b}$ where $\\mathbf{b}=(10,8,2)$.\n\t\tThe solution is $(q,p,r) = (9, \\frac{13}{2}, 12)$.\n\t\\end{enumerate}\n\\end{solution}\n\n\n\n\\begin{problem}\\\\\n\tConsider the market for another commodity, $y$, with the following demand and supply functions where $b_D$ and $b_S$ are constants.\n\t\\begin{align}\n\t\t\\text{Supply} \\; q_S &= b_S e^{p} \\nonumber \\\\\n\t\t\\text{Demand} \\; q_D &= b_D e^{-p} \\nonumber\n\t\\end{align}\n\t\\begin{enumerate}[(a)]\n\t\t\\item Find the equilibrium price function $p^*:\\mathbb{R}^2_{++} \\rightarrow \\mathbb{R}_{++}$ such that for some values $(b_D,b_S)$ the equilibrium price for $y$ will be $p^*(b_D,b_S)$.\n\t\t\\item Derive the gradient function $\\nabla p^*$.\n\t\t\\item We are interested in the behavior of the equilibrium price in the market for commodity $y$ as the structural aspects $b_D$ and $b_S$ change.\n\t\tSuppose through econometric analysis it is determined that the current values are $(b_D, b_S) = (5,1)$.\n\t\tCalculate the total differential of equilibrium price $p^*$ as the values of $b_D$ and $b_S$ change slightly $(db_D, db_S) = (1/9, -1/7)$.\n\t\t\\item Using the values from part (c), calculate a linear approximation of the equilibrium market price as the structural parameters change as in part (c).\n\t\\end{enumerate}\n\\end{problem}\n\\begin{solution}\n\t\\begin{enumerate}[(a)]\n\t\t\\item $p^*(b_D,b_S) = (1/2)\\ln(b_D) - (1/2)\\ln(b_S)$.\n\t\t\\item $\\nabla p^* = (1/(2b_D), -1/(2b_S))$.\n\t\t\\item $dp^* = \\frac{26}{315}$.\n\t\t\\item $p^*(5,1) + dp^* = (1/2)\\ln(5) - (1/2)\\ln(1) + \\frac{26}{315} = \\frac{26}{315} + \\frac{1}{2}\\ln(5)$.\n\t\\end{enumerate}\n\\end{solution}\n\n\\begin{problem}\\\\\n\tLet $F:\\mathbb{R}^n \\rightarrow \\mathbb{R}$ be a continuously differentiable function.\n\tLet $V = \\{ \\mathbf{v} \\in \\mathbb{R}^n : \\| \\mathbf{v} \\| = 1\\}$ be the set of all vectors in $\\mathbb{R}^n$ of unit length.\n\tAlso, let $\\mathbf{x}_0 \\in \\mathbb{R}^n$ such that $\\nabla F(\\mathbf{x}_0) \\neq \\mathbf{0}$.\n\tProve that over the set $V$, the direction $\\mathbf{v}$ in which $F$ increases most rapidly at the point $\\mathbf{x}_0$ is the direction of $\\nabla F(\\mathbf{x}_0)$.\n\\end{problem}\n\\begin{solution}\\\\\n\tLet $F:\\mathbb{R}^n \\rightarrow \\mathbb{R}$ and let $\\mathbf{x}_0 \\in \\mathbb{R}^n$ such that $\\nabla F(\\mathbf{x}_0) \\neq \\mathbf{0}$. 
\n\tFurthermore, let $V$ be the set of $n$-dimensional unit vectors.\n\tFor unit vectors $\\mathbf{v} \\in V$, the derivative of $F$ at $\\mathbf{x}_0$ in the direction $\\mathbf{v}$ is\n\t\\[\n\t\tD_{v}F(\\mathbf{x}_0) = \\nabla F(\\mathbf{x}_0) \\cdot \\mathbf{v}\n\t\\]\n\tThe dot product can be expressed in terms of the norms of the vectors and the angle between them.\n\t\\[\n\t\t\\nabla F(\\mathbf{x}_0)\\cdot \\mathbf{v} = \\| \\nabla F(\\mathbf{x}_0)\\| \\| \\mathbf{v} \\| \\cos \\theta\n\t\\]\n\twhere $\\theta$ is the angle between the vectors $\\nabla F(\\mathbf{x}_0)$ and $\\mathbf{v}$.\n\tSince $\\mathbf{v}$ is a unit vector, $\\| \\mathbf{v} \\| = 1$, so we have\n\t\\[\n\t\tD_{v}F(\\mathbf{x}_0) = \\| \\nabla F(\\mathbf{x}_0) \\| \\cos \\theta\n\t\\]\n\tAs we vary $\\mathbf{v}$ over $V$, the value of $\\cos \\theta$ changes.\n\tIt takes its greatest value when $\\theta = 0$, meaning that $\\mathbf{v}$ and $\\nabla F$ point in the same direction.\n\tThus, $D_vF(\\mathbf{x}_0)$ is greatest when $\\mathbf{v} = \\nabla F(\\mathbf{x}_0)/\\| \\nabla F(\\mathbf{x}_0) \\|$, the unit vector in the direction of $\\nabla F(\\mathbf{x}_0)$. \\qedsymbol\n\\end{solution}\n\n\\begin{problem}\n\tProve or disprove the following statement.\n\t\\emph{There is no largest integer}.\n\\end{problem}\n\\begin{solution}\\\\\n\tLet $N \\in \\mathbb{Z}$ and assume to the contrary that $N$ is the largest integer.\n\tNow let $n = N + 1$.\n\tSince for any integer $z \\in \\mathbb{Z}$ we know that $z + 1 \\in \\mathbb{Z}$, we know that $n$ is an integer.\n\tBut $n > N$ which contradicts our assumption that $N$ is the largest integer.\n\tTherefore, there is no largest integer. \\qedsymbol\n\t\n\\end{solution}\n\n\\begin{problem}\\\\\n\tConsider two firms engaging in Cournot competition in a market $M$.\n\tBoth firms face an inverse demand curve $P(Q,\\mu)$ where $P$ is the market price when total market quantity is $Q$ and consumer taste intensity is $\\mu > 0$.\n\tFurthermore $P(Q,\\mu)$ follows the law of demand so that $P_Q(Q,\\mu) < 0$ and more intense preferences increase demand $P_{\\mu}(Q,\\mu) > 0$.\n\t\n\tFirm 1 has total production costs $C_1(q_1)$ when it produces $q_1$ units.\n\tFirm 2 has total production costs $C_2(q_2)$ when it produces $q_2$ units.\n\tFor both firms costs increase in $q$, so that $C_i'(q_i) > 0$.\n\tTotal market output $Q = q_1 + q_2$.\n\tThe firms' profits are as follows\n\t\\begin{align}\n\t\t\\pi_1(q_1,q_2,\\mu) &= P(q_1 + q_2; \\mu)q_1 - C_1(q_1) \\nonumber \\\\\n\t\t\\pi_2(q_1,q_2,\\mu) &= P(q_1 + q_2; \\mu)q_2 - C_2(q_2) \\nonumber\n\t\\end{align}\n\tEach firm wants to choose its output quantity $q_i$ to maximize its profit $\\pi_i$ given the output decision of the other firm.\n\tThe solution to this maximization problem yields a \\emph{best response} function $R_i(q_{-i},\\mu)$ which has the interpretation that, for example, firm 1 will choose its output $q_1 = R_1(q_2,\\mu)$.\n\tSo $R$ is a function that takes firm 2's output as an argument and tells firm 1 what output $q_1$ will maximize its own profit.\n\tFirm 2 has a similar function $q_2 = R_2(q_1,\\mu)$.\n\t\n\tUsing $q_1 = R_1(q_2,\\mu)$ and $q_2 = R_2(q_1,\\mu)$ we can create a composite function\n\t\\[\n\t\tq_1' = R_1( R_2(q_1,\\mu), \\mu) = F(q_1,\\mu)\n\t\\]\n\tAn equilibrium will be a \\emph{fixed point} of this composite function, which means that if you put a $q$ in for some fixed $\\mu$ and you get the same $q$ back out, then that $q$ is a Nash Equilibrium.\n\tLet $q_e$ represent such values.\n\t\\begin{enumerate}[(a)]\n\t\t\\item Calculate the partial 
derivative $\\pi_{q_1\\mu}$ for firm 1's profit function.\n\t\t\\item Calculate the partial derivative $\\pi_{q_1q_2}$ for firm 1's profit function.\n\t\t\\item For the identity $q_e = F(q_e, \\mu) = R_1(R_2(q_e,\\mu),\\mu)$, calculate the derivative $\\frac{d q_e}{d \\mu}$, telling us how the Nash equilibrium quantity changes as consumer taste intensity changes.\n\t\t[Hint: Consider the use of total differentials and um...the chain rule.]\n\t\\item (Bonus) Can you sign the derivative in part (c)? i.e., does the equilibrium output increase or decrease as $\\mu$ increases?\n\t\\end{enumerate}\n\\end{problem}\n\\begin{solution}\n\t\\begin{enumerate}[(a)]\n\t\t\\item \\begin{align}\n\t\t\t\\frac{\\partial \\pi_1}{\\partial q_1} &= P_Q(q_1 + q_2;\\mu)q_1 + P(q_1+q_2;\\mu) - C_1'(q_1) \\nonumber \\\\\n\t\t\t\\frac{\\partial^2 \\pi_1}{\\partial q_1 \\partial \\mu } &=  P_{Q\\mu}(q_1+q_2;\\mu)q_1 + P_{\\mu}(q_1+q_2;\\mu) \\nonumber \n\t\t\\end{align}\n\t\t\\item \\begin{align}\n\t\t\t\\pi_{q_1q_2} &= P_{QQ}(q_1 + q_2;\\mu)q_1 + P_Q(q_1+q_2;\\mu) \\nonumber\n\t\t\\end{align}\n\t\t\\item Because it is an identity, the total change on one side has to equal the total change on the other side.\n\t\t\\begin{align}\n\t\t\tdq_e &= \\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial q_e}d q_e + \\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial \\mu}d\\mu + \\frac{\\partial R_1}{\\partial \\mu}d\\mu \\nonumber \\\\\n\t\t\tdq_e \\left[1 - \\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial q_e}\\right] &=  \\left[\\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial \\mu} + \\frac{\\partial R_1}{\\partial \\mu} \\right] d\\mu \\nonumber\n\t\t\\end{align}\n\t\tFinally we have\n\t\t\\begin{align}\n\t\t\t\\frac{d q_e}{d \\mu} &= \\frac{\\left[\\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial \\mu} + \\frac{\\partial R_1}{\\partial \\mu} \\right]}{\\left[1 - \\frac{\\partial R_1}{\\partial R_2}\\frac{\\partial R_2}{\\partial q_e}\\right]} \\nonumber\n\t\t\\end{align}\n\t\\end{enumerate}\n\\end{solution}\n\n\n\\paragraph{True, False or Uncertain}\nFor the following problems, determine whether the statement, as given, is true, false, or uncertain.\nIf you determine the statement is false or uncertain, provide an explanation.\n\n\\begin{problem}\\\\\n\t\\textbf{True, False or Uncertain}: Suppose $f(x,y)$ is defined on a set $D$ that contains a point $(a,b)$.\n\tIf the partial derivative functions $f_{xy}$ and $f_{yx}$ are both continuous on $D$, then $f_{xy}(a,b) = f_{yx}(a,b)$.\n\\end{problem}\n\\begin{solution}\n\tTrue\n\\end{solution}\n\n\\begin{problem}\\\\\n\t\\textbf{True, False or Uncertain}: For a function $f(x,y)$, if the partial derivatives $f_x$ and $f_y$ exist near a point $(a,b)$ then $f$ is differentiable at $(a,b)$.\n\\end{problem}\n\\begin{solution}\\\\\n\tFalse: The implication isn't true: even though $f_x$ and $f_y$ exist near $(a,b)$, if they are not continuous then discontinuities can prevent the existence of a limiting value for the rate of change of the function.\n\tAlso, note that differentiability is sufficient for continuity.\n\tThe implication is true if $f_x$ and $f_y$ exist and are continuous near $(a,b)$.\n\tThis is an important result.\n\tFor very general functions such as $F:\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ we can know that this function is differentiable at some point $\\mathbf{x}$ if all the partial derivatives of $F$ (the Jacobian Matrix) exist and are 
continuous around $\\mathbf{x}$.\n\\end{solution}\n\n\\begin{problem}\\\\\n\t\\textbf{True, False or Uncertain}: If $f$ is a differentiable function of $x$ and $y$, then $f$ has a directional derivative in the direction of any vector $\\mathbf{v}$ and \n\t\\[\n\t\tD_v f(x,y) = \\nabla f \\cdot \\mathbf{v}\n\t\\]\n\\end{problem}\n\\begin{solution}\\\\\n\tFalse or Uncertain: Generally, this requires $\\mathbf{v}$ to be a unit vector, i.e., $\\| \\mathbf{v} \\| = 1$.\n\tSome people don't require this, but then some of the meaning is lost: when $\\mathbf{v}$ is a unit vector, the expression is the projection of $\\nabla f$ onto the direction of $\\mathbf{v}$, i.e. the rate of change per unit length.\n\\end{solution}\n\n\\end{document}", "meta": {"hexsha": "23513de8da658c6e6e7d3a57ecb5e24540b9a94d", "size": 13087, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/pdfs/math_bootcamp/bootcamp_repo/final_exam/final_exam.tex", "max_stars_repo_name": "joepatten/joepatten.github.io", "max_stars_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/pdfs/math_bootcamp/bootcamp_repo/final_exam/final_exam.tex", "max_issues_repo_name": "joepatten/joepatten.github.io", "max_issues_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-08-09T16:28:31.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-10T14:48:57.000Z", "max_forks_repo_path": "pages/teaching/math_bootcamp/final_exam/final_exam.tex", "max_forks_repo_name": "joepatten/joepatten.github.io", "max_forks_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.3346153846, "max_line_length": 259, "alphanum_fraction": 0.674027661, "num_tokens": 4407, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5890334595907405}}
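(Editorial note, appended to the exam above.) A quick arithmetic check of Problem 1(d): multiplying out $A^{-1}\mathbf{b}$ with $\mathbf{b}=(10,8,2)^\top$ gives
\begin{align*}
	q &= \tfrac{1}{2}(10) + \tfrac{1}{2}(8) + 0(2) = 9, \\
	p &= \tfrac{1}{4}(10) + \tfrac{3}{4}(8) - 1(2) = \tfrac{13}{2}, \\
	r &= 0(10) + 2(8) - 2(2) = 12,
\end{align*}
confirming the stated equilibrium $(q,p,r) = (9, \tfrac{13}{2}, 12)$.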
{"text": "\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\n\\usepackage{graphicx}\n\n\\usepackage{geometry}\n\\newgeometry{tmargin=2.5cm, bmargin=2.5cm, lmargin=2.5cm, rmargin=2.5cm}\n\\linespread{1.1}\n\n\\begin{document}\n\n\\section*{Hyperelect - Leader election in directed hypercube network}\n\n\\subsection*{Idea behind the algorithm}\nThe algorithm will work over several stages. During each stage a \\textit{candidate} (called a \\textit{duellist}) will have a match against another duellist. One of them will win and proceed to the next stage, while the other will become \\textit{defeated}. After each stage only half of the duellists will enter the next stage. At the end, only one duellist will be standing. This duellist will become \\textit{leader} and notify the others.\n\nLet us take a closer look on how to execute aforementioned idea. We will start by explaning how to pair the duellist, then how to perform a match and how to notify all nodes. \n\nLet $H_{k}$ be a $k$-dimensional hypercube and $H_{k:i}$ be a collection of $i$-dimensional hypercubes obtained from $H_{k}$ by removing all connections between nodes in dimensions $>i$, e.g. $H_{3:1}$ will be collection of 4 segments (1-dimensional hypercubes). After stage $i$ we want to have exactly one duellist left in each hypercube $H_{k:i}$. So after stage 2 we want to have only two duellist in $H_{3}$, one in each hypercube $H_{3:2}$. Let us also observe that hypercubes from collection $H_{k:i}$ will be nicely paired in colection $H_{k:i+1}$. So we start from $H_{k:0}$ where every node is a duellist, pair them up according to collection $H_{k:1}$ and have a match between them to determine the winner. We continue this process up to $H_{k:k}$. At the end, we are left with one duellist that will become the leader.\n\nIn order to perform a match, a \\textit{match} message has to get to the other duellist. We will do so in two steps. First, send a message to the other hypercube. Second, forward a message to a duellist. A duellist can perform the first step by itself as it has the connection to the other hypercube. In order to forward the message every node that was defeated will remember the shortest path (what dimensions to travel) to its opponent (that won). The message will be forwarded in that fashion until it reaches the duellist.\n\nAfter electing a leader we need to notify all nodes. In $k$-dimensional hypercube we will perform that process over $k$ rounds. In a round $i\\in[1, k]$ every node that is a leader or a \\textit{follower} will notify their neighbor over connection in dimension $k-i+1$ to become a follower. So after round $i$ exactly one node in each hypercube $H_{k:k-i}$ will be a leader or a follower.\n\n\\subsection*{Implementation details}\n\nLet us now discuss some aspects that might be helpful while implementing the algorithm. We will explain how to retrieve shortest path, then how to process received messages and how to close communication at the end.\n\nAs the \\textit{match} message travels to its duellist, nodes can mark in which dimensions is the message forwarded. Observe, that we are only interested whether the message was transmitted odd or even number of times in a particular dimension. 
The list of dimensions that were used an odd number of times constitutes the shortest route to the other duellist.\n\nAs some paths might be shorter than others, some \\textit{match} messages might be sent to the other hypercube before the recipient reaches the appropriate stage. In that case every node will store such messages (at most one per stage); a message is delayed until the node reaches the given stage as a duellist, or until the node becomes defeated, in which case the message is automatically forwarded.\n\nTo gently close communication over the network every node that is/becomes a leader or a follower will send only \\textit{follow} messages in that round. That will not disrupt the algorithm as the leader was already selected.\n\n\\subsection*{Complexity - number of messages}\n\nLet $N = 2^{k}$ be the number of nodes. Let $d(i)$ be the maximal length of the shortest path from a node defeated in stage $i$ to the winner. Clearly, $d(i) = i$.\n\nSending a \\textit{match} message in stage $i$ can cost up to \n$$l(i) = 1 + \\sum_{j=1}^{i-1}d(j) = 1 + \\sum_{j=1}^{i-1}j = 1 + \\frac{i(i-1)}{2}.$$\n\nIn stage $i$ there will be $2\\cdot 2^{k-i} = 2^{k-i+1}$ \\textit{match} messages.\n\nSo in total the communication will cost us\n\\begin{align*}\n    M[Hyperelect] &\\leq N-1 + \\sum_{i=1}^{k}2^{k-i+1}\\cdot l(i) \n    = N-1 + \\sum_{i=1}^{k}2^{k-i+1} + \\sum_{i=1}^{k}2^{k-i}i(i-1)\\\\\n    &= N-1 + 6\\cdot 2^{k}-k^{2}-3k-6 = 7N - (\\log N)^{2} - 3\\log N - 7.\n\\end{align*}\n\n\\subsection*{Complexity - time}\n\nThe time complexity can be determined using the above definitions\n\\begin{align*}\n    T[Hyperelect] &\\leq k + \\sum_{i=1}^{k}l(i) \n    = k + \\sum_{i=1}^{k}1+\\sum_{i=1}^{k}\\frac{i(i-1)}{2} \n    = 2k + \\frac{(k-1)k(k+1)}{6} \\\\\n    &= O(\\log^{3} N).\n\\end{align*}\n\n\\subsection*{Useful resources}\nFor more information take a look at:\n\\begin{itemize}\n    \\item N. Santoro, Design and analysis of distributed algorithms, section 3.5 (Election in cube networks)\n    \\item P. Flocchini, B. Mans - Optimal elections in labeled hypercubes\n    \\item S. Robbins, K. A. Robbins - Choosing a leader on a hypercube\n\\end{itemize}\n\n\n\\end{document}\n", "meta": {"hexsha": "79dbab24f25f48b832f272d3c5b556165f7bf719", "size": 5307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/1164808/hyperelect.tex", "max_stars_repo_name": "tmarzec/distributed-framework", "max_stars_repo_head_hexsha": "cce9a5e0bd3e876ae9f74494a107d5e77e86e4a1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "text/1164808/hyperelect.tex", "max_issues_repo_name": "tmarzec/distributed-framework", "max_issues_repo_head_hexsha": "cce9a5e0bd3e876ae9f74494a107d5e77e86e4a1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/1164808/hyperelect.tex", "max_forks_repo_name": "tmarzec/distributed-framework", "max_forks_repo_head_hexsha": "cce9a5e0bd3e876ae9f74494a107d5e77e86e4a1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.76, "max_line_length": 829, "alphanum_fraction": 0.7365743358, "num_tokens": 1542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.757794360334681, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5890334589059193}}
{"text": "\\section{Heterogeneous Systems}\r\n\\noindent\r\nNow that we know how to solve the ``easy case'' of homogeneous systems of linear differential equations, we can tackle the general case of heterogeneous systems.\r\nMuch like for single equations, we'll look for a homogeneous solution and particular solution, and we'll use undetermined coefficients or variation of parameters to find the particular solution.\r\nThese methods will then lead into seeing how systems of first order equations relate to single higher order equations.\r\n\r\n% Undetermined coefficients\r\n\\input{./linearSystems/heterogeneousSystems/undeterminedCoefficients.tex}\r\n% Variation of Parameters\r\n\\input{./linearSystems/heterogeneousSystems/variationOfParameters.tex}", "meta": {"hexsha": "72fc7263a4f64e540b5f625a5a59425c71db8a3e", "size": 726, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "diffEq/linearSystems/heterogeneousSystems/heterogeneousSystems.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 72.6, "max_line_length": 195, "alphanum_fraction": 0.8236914601, "num_tokens": 139, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.5890334475201178}}
{"text": "\\section{Special Relativity}\r\nThe Newtonian Mechanics works perfectly (maybe not) well in low-speed cases, but when the object has gotten a pretty big velocity, Newtonian physics is no longer a good approximation to the situation that arises.\r\nTherefore, in 1905, Albert Einstein proposed the Special Theory of Relativity, in which the main differences involved are due to the treatment of the speed of light $c=299792458{\\rm ms^{-1}}\\approx 3\\times 10^8{\\rm ms^{-1}}$.\\\\\r\nSpecial Relativity is based on two postulates:\r\n\\begin{postulate}[Principle of Relativity]\r\n    The laws of physics are the same in all inertial frames.\r\n\\end{postulate}\r\n\\begin{postulate}[Speed of Light]\r\n    The speed of light in vacuum is the same in all inertial frames.\r\n\\end{postulate}\r\nThe need for the second postulate arises from many experiments that failed to detect the dependence of speed of light relative to inertial frames.\r\nBut the addition of this postulate then leads to a radical revision of our understanding of space and time and the relationships of energy, momentum and mass.\\\\\r\nConsider two frames $S,S'$, then if they are related by Galilean transformation, we have\r\n$$x'=x-vt,y'=y,z'=z,t'=t$$\r\nWrite the path of light ray in $S$ as $x=ct$, then in $S'$, we have $x'=x-vt=(c-v)t'$, so it doesn't work.\r\nTherefore we need a new form of transformation to describe inertial frames in order to accomodate our postulates.\r\nWe have to treat space and time equally.\r\n\\subsection{Lorentz Transformation}\r\nConsider inertial frames $S,S'$.\r\nAssume their origins coincide, i.e. the spacial origins of the frames coincide when $t=t'=0$.\r\nSuppose $S'$ is moving along the $x$ direction relative to $S$ with speed $v$, then we can ignore the $y,z$ directions for the moment.\r\nSo we are interested in the relationship between $(x,t)$ and $(x',t')$.\r\nBy the Principle of Relativity, something moving in constant velocity in $S$ must also do so in $S'$.\r\nIn $(x,t)$ plane, the constant velocity path is a straight line, so it is also the case in $(x',t')$.\r\nSo the transformation must be linear.\r\nThe origin of $S'$ moves with speed $v$ in $S$, this implies that $x'=\\gamma(x-vt)$.\r\nwhere $\\gamma$ depends on $|v|$.\r\nBy symmetry, $x=\\gamma(x'+vt')$.\r\nConsider a light ray going through the origins at time $t=t'=0$.\r\nIn $S$, the equation of the light ray in $S$ is $x=ct$ and also in $S'$, $x'=ct'$.\r\nSo if we plug these in, then $ct=\\gamma(c+v)t'$ and $ct'=\\gamma(c-v)t$.\r\nWe then have\r\n$$\\gamma^2(1-v/c)(1+v/c)=1\\implies \\gamma=\\frac{1}{\\sqrt{1-v^2/c^2}}$$\r\nWe call $\\gamma$ the Lorentz factor.\r\nConsequently we obtain the Lorentz transformation (or Lorentz Boost):\r\n$$\\begin{cases}\r\n    x'=\\gamma(x-vt)\\\\\r\n    t'=\\gamma(t-vx/c^2)\r\n\\end{cases},\\begin{cases}\r\n    x=\\gamma(x+vt)\\\\\r\n    t=\\gamma(t'+vx'/c^2)\r\n\\end{cases}$$\r\nThe coordinates $y,z,y',z'$ does not change if the velocity is entirely on the $x$-direction.\r\nNow $\\gamma>1$ whenever $v\\neq 0$ and $\\gamma\\to\\infty$ if $|v|\\to c$.\r\nWhen $v$ is small, we can approximate $\\gamma=1$, which gives us the standard Galilean transformation.\\\\\r\nTo check that the speed of light indeed remains constant in two frames.\r\nSuppose a light ray travels in $x$ direction, then $x=ct$, so\r\n$$x'=\\gamma(x-vt)=\\gamma(c-v)t=\\gamma^2(c-v)\\left( t'+\\frac{vx'}{c^2} \\right)\\implies x'=ct'$$\r\nFor a light ray that is travelling in the $y$ direction, $y=ct,x=z=0$.\r\nIn $S'$, 
we have\r\n$$x'=\\gamma(-vt)=-\\gamma vt,t'=\\gamma t,y'=ct,z'=0$$\r\nSo the speed of the light ray will be\r\n$$\\sqrt{\\left( \\frac{\\mathrm dx'}{\\mathrm dt'} \\right)^2+\\left( \\frac{\\mathrm dy'}{\\mathrm dt'} \\right)^2}=\\sqrt{v^2+\\frac{c^2}{\\gamma^2}}=\\sqrt{c^2}=c$$\r\nSo the speed of light is not changed, but the direction of the light ray has.\\\\\r\nFrom a more general viewpoint, we consider the metric\r\n$$c^2t'^2-r'^2=(ct')^2-(x'^2+y'^2+z'^2)=(ct)^2-(x^2+y^2+z^2)=c^2t^2-r^2$$\r\nA direct calculation shows that this quantity is preserved under the Lorentz transformation.\\\\\r\nConsider the case where there is only one spatial dimension $x$ in an inertial frame $S$ with time $t$.\r\nConventionally we plot $x$ on the horizontal axis and $ct$ on the vertical axis.\r\nThe trajectory of a particle in space-time then is a curve in the plane.\r\nWe call this the Minkowski space-time, where each point $(x,ct)$ in the space-time represents an event.\r\nWe call the curve that represents the motion of some particle a world line.\r\nIn particular, the world line is straight iff the particle moves with uniform velocity.\r\nLight rays through the origin then travel along lines of the form $x=\\pm ct$, which are inclined at $\\pi/4$ to either axis.\r\nAs a particle is not allowed to move with velocity greater than the speed of light, its motion (assuming that it goes through the origin) is restricted to the upper and lower cones that are split by the lines $x=\\pm ct$.\\\\\r\nHow about viewing from another inertial frame $S'$?\r\nThe $t'$ axis corresponds to $x'=0$, so it corresponds to $x=vt=(v/c)ct$, and the $x'$ axis is $t'=0$, hence the axis is $ct=(v/c)x$.\r\nThus both axes move by the same angle towards the diagonal (where the light ray travels) if $v\\ge 0$ and away from the diagonal otherwise.\r\nThis is consistent with the postulate that the speed of light doesn't change across the frames.\r\n\\subsection{Relativistic Physics}\r\nConsider two events $P_1=(x_1,t_1),P_2=(x_2,t_2)$ that are points in the frame $S$ in one-dimensional Minkowski space-time.\r\nThey are called simultaneous if $t_1=t_2$.\\\\\r\nSo the line $P_1P_2$ is then parallel to the $x$-axis.\r\nThis is called the line of simultaneity in $S$.\r\nBut in $S'$, assuming $v\\neq 0$, $P_1,P_2$ no longer lie on a line of simultaneity in $S'$ (which is of the form $t-vx/c^2=d$ where $d$ is a constant).\r\nIn particular, if $x_1<x_2$, then in $S'$ the event $P_2$ occurs first (with $v>0$).\r\nHence simultaneity is in general frame-dependent.\\\\\r\nThe question of causality then arises, as different observers in different frames see different orders of events.\r\nSo we want to see a consistent ordering of cause and effect.\r\nNote that the lines of simultaneity in $S'$ viewed in $S$ cannot incline more than $\\pi/4$ since $|v|<c$.\r\nIn higher dimensions, lines and surfaces emerging from an event $P$ with $\\pi/4$ inclination to the axes form the light cones, the past light cone and the future light cone (depending on signs).\r\nAll observers agree that the event $Q$ occurs after $P$ if $Q$ is in the future light cone, but whether or not the event $R$, not in the light cones, occurs after $P$ is frame dependent.\r\nThe fact that $R$ is outside of the light cone of $P$ then implies that $R$ cannot be influenced by $P$, and vice versa, since matter cannot travel faster than the speed of light (which is the boundary of the light cone).\r\nIn general, an event can only be influenced by events in its past light cone and influence events in its future light 
cone.\r\nSo causality is preserved.\\\\\r\nNow consider a clock stationary in $S'$ that ticks at constant intervals $\\delta t'$.\r\nWe want to know what time interval is perceived by observers in $S$.\r\nRecall the inverse Lorentz transformation gives $t=\\gamma(t'+x'v/c^2)$, but $x'$ is constant since the clock is stationary in $S'$.\r\nSo $\\delta t=\\gamma\\delta t'>\\delta t'$: a moving clock appears to run slow.\r\nThis is called time dilation.\r\nWe call the time measured in the rest frame of a particular object its proper time.\\\\\r\nConsider two twins, Luke and Leia.\r\nLuke stays home while Leia travels to a far planet and returns home with speed $v$ relative to Luke.\r\nIn Luke's frame of reference, take the origin to be home.\r\nSuppose the planet is at $x=P$ and Leia arrives at the planet at time $t=T$; call this event $A$.\r\nTime experienced by Leia in this part of the journey is then\r\n$$T'=\\gamma(T-\\frac{v}{c^2}vT)=\\frac{T}{\\gamma}$$\r\nSame for going back.\r\nSo during the entire journey, Leia aged $2T/\\gamma$ while Luke aged $2T$, so Leia becomes younger than Luke.\r\nFrom Leia's perspective, Luke travels away from her and returns, so if the problem is symmetric, then Luke should be younger, which is a contradiction.\r\nSo the paradox is the lack of symmetries in this problem.\r\nLet $X$ be the intersection of the line of simultaneity through $A$ in Leia's outward frame with Luke's world line $x=0$. At $A$ we have $x=P=vT$, $t=T$ and $t'=T/\\gamma$, so $X$ has $x=0,t'=T/\\gamma$ and hence $t=T/\\gamma^2$: at the moment of arrival, Leia reckons that Luke has only aged $T/\\gamma^2$.\r\nAs for the return journey, the line of simultaneity changes sign.\r\nSo in the return journey, Luke sees Leia aging from $A$ to $R$ (the reunion event) and Leia sees Luke aging from $Z$ to $R$ (where $Z$ is the event with $x=0$ that is simultaneous with $A$ in the frame of Leia on her return journey).\r\nThe reason for the paradox is the discontinuity of time (from $X$ to $Z$) when Leia changes direction, so as seen by Leia, Luke has aged instantaneously from $X$ to $Z$.\\\\\r\nNow we shall talk about length contraction.\r\nConsider a rod of length $L'$ stationary in $S'$; we want to know the length of the rod in $S$.\r\nSuppose the ends of the rod are at $x'=0$ and $x'=L'$, so the world lines of the ends are simply the two vertical lines described by these equations.\r\nSo $x'=0$ maps to $\\gamma(x-vt)=0$ in $S$ and $x'=L'$ maps to $\\gamma(x-vt)=L'$, so these two lines are still parallel but the horizontal (in $S$) distance between them is now $L=L'/\\gamma$, so moving objects are contracted in the direction in which they move.\r\nWe define the proper length to be the length measured in the rest frame of the rod, which is the greatest length measured in any frame.\\\\\r\nA practical problem: does a train of proper length $2L$ fit in a platform of proper length $L$ if it travels at a suitable speed?\r\nFor the train to contract to length $L$ in the platform frame we want $\\gamma=2$.\r\nNow for the observers at the platform, this would work if the train attains the desired speed.\r\nAs for the observers on the train, the platform contracts to length $L/\\gamma=L/2$ so it doesn't fit.\r\nSuppose the platform is defined by $x=0$ and $x=L$.\r\nThe train is defined by $x'=0$ and $x'=2L$, which are mapped to some slanted lines in $S$, the frame of the platform.\r\nConsider the event $E$ where the rear of the platform and the rear of the train coincide.\r\nFor simplicity, this happens at $t=t'=0$.\r\nNow the front of the train is $x'=2L$ and the platform is $x=L$.\r\nLet $F$ be the event which is simultaneous with $E$ in $S$ at the 
front of the train. At $t=0$ the front of the train ($x'=2L$) satisfies $2L=\\gamma(x-vt)=\\gamma x$, so $x=2L/\\gamma=L$: in the platform frame the train exactly fits, with $F$ at the far end of the platform. But in the train frame $S'$, we have $t'=\\gamma(t-vx/c^2)=-\\gamma vL/c^2<0$ at $F$.\r\nSo in the train $F$ occurs before $E$ in $S'$.\\\\\r\nNow that both length and time become different in different frames, what about velocities?\r\nSuppose a particle moves with constant velocity $u'$ in $S'$, which itself moves with constant velocity $v$ relative to $S$.\r\nWe want to know the velocity $u$ of the particle as measured in $S$.\r\nThe world line of the particle in $S'$ can be taken as $x'=u't'$, so we have $\\gamma(x-vt)=u'\\gamma(t-vx/c^2)$ which gives $u=(u'+v)/(1+u'v/c^2)$.\r\nIn particular, if $u',v\\ll c$, then $u\\approx u'+v$ which is the standard Galilean transformation.\r\nNote also that successive boosts with $u',v<c$ can never combine to reach the speed of light.\r\n\\subsection{The Geometry of Space-time}\r\n\\begin{definition}\r\n    Consider two points $P,Q$ in space-time having coordinates $(x_1,ct_1),(x_2,ct_2)$, so the time separation is $\\delta t=t_2-t_1$ and the space separation is $\\delta x=x_2-x_1$.\r\n    We define the invariant interval between $P,Q$ to be $\\delta s^2=c^2\\delta t^2-\\delta x^2$.\r\n\\end{definition}\r\nNote that as we observed before, one can show that all observers agree on the value of $\\delta s^2$.\r\n\\begin{definition}\r\n    If we have three spatial dimensions $(x,y,z)$, we define $\\delta s^2=c^2\\delta t^2-\\delta x^2-\\delta y^2-\\delta z^2$.\r\n\\end{definition}\r\nIf the separation between $P,Q$ becomes small, then $\\mathrm ds^2=c^2\\,\\mathrm dt^2-(\\mathrm dx^2+\\mathrm dy^2+\\mathrm dz^2)$ which looks like a distance (no it doesn't).\r\nWe can (no we can't) say that space-time is topologically equivalent to $\\mathbb R^4$ endowed with the distance measure $\\delta s$, but note that this is not even positive definite.\r\nThe space-time endowed with this ``measure of distance'' is called the Minkowski space-time.\r\n\\begin{definition}\r\n    Two events having $\\delta s^2>0$ are said to be time-like separated, and two that have $\\delta s^2<0$ are said to be space-like separated.\r\n\\end{definition}\r\nSo two time-like separated events occur at the same spatial position in some frame of reference, and space-like separated events occur at the same time in some frame.\r\n\\begin{definition}\r\n    If $\\delta s^2=0$, we say $P,Q$ are light-like separated, so they can be connected by a light ray.\r\n\\end{definition}\r\nNote also that light-like separated events need not coincide, even though their ``distance'' $\\delta s^2$ vanishes.\r\n\\begin{definition}\r\n    Given an event $P$ in $S$, we can write its coordinates as a $4$-vector $X^\\mu=(ct,x,y,z),\\mu=0,1,2,3$, so $X^0=ct$ etc.\r\n\\end{definition}\r\nWe can define a new ``inner product'' on $4$-vectors by $X\\cdot Y=X^\\top\\eta Y=X^\\mu\\eta_{\\mu\\nu}Y^\\nu$ where\r\n$$\\eta=\\begin{pmatrix}\r\n    1&0&0&0\\\\\r\n    0&-1&0&0\\\\\r\n    0&0&-1&0\\\\\r\n    0&0&0&-1\r\n\\end{pmatrix}$$\r\nSo we have $X\\cdot X=c^2t^2-x^2-y^2-z^2$.\r\nWe call this the Minkowski metric.\r\n$4$-vectors with $X\\cdot X>0$ are time-like, those with $X\\cdot X<0$ are space-like and those with $X\\cdot X=0$ are light-like (or null).\r\nThe Lorentz transformation is a linear transformation that takes the components of a $4$-vector in $S$ to those of a $4$-vector in $S'$.\r\nHence we can write it as a matrix $\\Lambda$ where $X'=\\Lambda X$.\r\nThe set of all $\\Lambda$ that preserves the Minkowski metric then forms a group, called the Lorentz 
\\subsection{The Geometry of Space-time}\r\n\\begin{definition}\r\n    Consider two points $P,Q$ in space-time having coordinates $(x_1,ct_1),(x_2,ct_2)$, so the time separation is $\\delta t=t_2-t_1$ and the space separation is $\\delta x=x_2-x_1$.\r\n    We define the invariant interval between $P,Q$ to be $\\delta s^2=c^2\\delta t^2-\\delta x^2$.\r\n\\end{definition}\r\nNote that, as we observed before, one can show that all observers agree on the value of $\\delta s^2$.\r\n\\begin{definition}\r\n    If we have three spatial dimensions $(x,y,z)$, we define $\\delta s^2=c^2\\delta t^2-\\delta x^2-\\delta y^2-\\delta z^2$.\r\n\\end{definition}\r\nIf the separation between $P,Q$ becomes small, then $\\mathrm ds^2=c^2\\,\\mathrm dt^2-(\\mathrm dx^2+\\mathrm dy^2+\\mathrm dz^2)$, which looks like a distance (no it doesn't).\r\nWe can (no we can't) say that space-time is topologically equivalent to $\\mathbb R^4$ endowed with the distance measure $\\delta s$, but note that this is not even positive definite.\r\nThe space-time endowed with this ``measure of distance'' is called Minkowski space-time.\r\n\\begin{definition}\r\n    Two events having $\\delta s^2>0$ are said to be time-like separated, and two that have $\\delta s^2<0$ are said to be space-like separated.\r\n\\end{definition}\r\nSo two time-like separated events happen at the same spatial position in some frame of reference, and two space-like separated events are simultaneous in some frame.\r\n\\begin{definition}\r\n    If $\\delta s^2=0$, we say $P,Q$ are light-like separated, so they can be connected by a light ray.\r\n\\end{definition}\r\nNote also that light-like separated events need not be the same event.\r\n\\begin{definition}\r\n    Take an event $P$ in $S$; we can write its coordinates as a $4$-vector $X^\\mu=(ct,x,y,z),\\mu=0,1,2,3$, so $X^0=ct$ etc.\r\n\\end{definition}\r\nWe can define a new ``inner product'' on $4$-vectors by $X\\cdot Y=X^\\top\\eta Y=X^\\mu\\eta_{\\mu\\nu}Y^\\nu$ where\r\n$$\\eta=\\begin{pmatrix}\r\n    1&0&0&0\\\\\r\n    0&-1&0&0\\\\\r\n    0&0&-1&0\\\\\r\n    0&0&0&-1\r\n\\end{pmatrix}$$\r\nSo we have $X\\cdot X=c^2t^2-x^2-y^2-z^2$.\r\nWe call $\\eta$ the Minkowski metric.\r\n$4$-vectors with $X\\cdot X>0$ are time-like, those with $X\\cdot X<0$ are space-like and those with $X\\cdot X=0$ are light-like (or null).\r\nThe Lorentz transformation is a linear transformation that takes the components of a $4$-vector in $S$ to those of a $4$-vector in $S'$.\r\nHence we can write it as a matrix $\\Lambda$ where $X'=\\Lambda X$.\r\nThe set of all $\\Lambda$ that preserve the Minkowski metric then forms a group, called the Lorentz group.\r\nI.e. we want $X\\cdot X=(\\Lambda X)\\cdot (\\Lambda X)$ for all $4$-vectors $X$.\r\nBy substitution we have $\\Lambda^\\top\\eta\\Lambda=\\eta$.\\\\\r\nIf $\\Lambda$ is a purely spatial transformation, i.e.\r\n$$\\Lambda=\\begin{pmatrix}\r\n    1&0&0&0\\\\\r\n    0&&&\\\\\r\n    0&&R&\\\\\r\n    0&&&\r\n\\end{pmatrix}$$\r\nthen $R$ must be a rotation.\r\nIf that is not the case, we can also have a boost (WLOG in the $x$-direction)\r\n$$\\Lambda=\\begin{pmatrix}\r\n    \\gamma&-\\gamma\\beta&0&0\\\\\r\n    -\\gamma\\beta&\\gamma&0&0\\\\\r\n    0&0&1&0\\\\\r\n    0&0&0&1\r\n\\end{pmatrix},\\beta=\\frac{v}{c}$$\r\nThe Lorentz group $\\operatorname{O}(1,3)$ also contains spatial reflections and time reversals.\r\nIts subgroup $\\operatorname{SO}(1,3)$ with determinant $1$ is called the proper Lorentz group.\r\nThis includes compositions of a time reversal with a spatial reflection.\r\nThe subgroup that preserves the direction of time and spatial orientation is called the restricted Lorentz group $\\operatorname{SO}^+(1,3)$, which is generated by spatial rotations and boosts (in all directions).\\\\\r\nA way to label the Lorentz transformations is by the concept of rapidity.\r\nWe now focus on the $(ct,x)$ space (i.e. the $2\\times 2$ submatrix in the top left corner operating on $(ct,x)$), where we define\r\n$$\\Lambda[\\beta]=\\begin{pmatrix}\r\n    \\gamma&-\\gamma\\beta\\\\\r\n    -\\gamma\\beta&\\gamma\r\n\\end{pmatrix}$$\r\nSo if we combine two boosts in the $x$ direction, then we have $\\Lambda[\\beta_1]\\Lambda[\\beta_2]=\\Lambda[(\\beta_1+\\beta_2)/(1+\\beta_1\\beta_2)]$ with the appropriate values of the $\\gamma$'s.\r\nRecall that for spatial rotations we have $R(\\theta_1+\\theta_2)=R(\\theta_1)R(\\theta_2)$.\r\nFor Lorentz boosts, we define the rapidity $\\phi$ by $\\beta=\\tanh\\phi$, so $\\gamma=\\cosh\\phi$ and $\\gamma\\beta=\\sinh\\phi$.\r\nHence\r\n$$\\Lambda[\\phi]=\\begin{pmatrix}\r\n    \\cosh\\phi&-\\sinh\\phi\\\\\r\n    -\\sinh\\phi&\\cosh\\phi\r\n\\end{pmatrix}$$\r\nand thus $\\Lambda[\\phi_1]\\Lambda[\\phi_2]=\\Lambda[\\phi_1+\\phi_2]$.\r\nThis suggests that Lorentz transformations are hyperbolic rotations of space-time.
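As a consistency check (a direct computation, not spelled out in the notes): multiplying the matrices gives\r\n$$\\Lambda[\\phi_1]\\Lambda[\\phi_2]=\\begin{pmatrix}\r\n    \\cosh\\phi_1\\cosh\\phi_2+\\sinh\\phi_1\\sinh\\phi_2&-(\\sinh\\phi_1\\cosh\\phi_2+\\cosh\\phi_1\\sinh\\phi_2)\\\\\r\n    -(\\sinh\\phi_1\\cosh\\phi_2+\\cosh\\phi_1\\sinh\\phi_2)&\\cosh\\phi_1\\cosh\\phi_2+\\sinh\\phi_1\\sinh\\phi_2\r\n\\end{pmatrix}$$\r\nwhich is $\\Lambda[\\phi_1+\\phi_2]$ by the addition formulae for $\\cosh$ and $\\sinh$; the velocity composition rule above is then just $\\tanh(\\phi_1+\\phi_2)=(\\tanh\\phi_1+\\tanh\\phi_2)/(1+\\tanh\\phi_1\\tanh\\phi_2)$.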
\\subsection{Relativistic Kinematics}\r\nConsider a particle moving along some trajectory $\\underline{x}(t)$ with velocity $\\underline{u}(t)=\\mathrm d\\underline{x}/\\mathrm dt$, so its path in space-time is parameterized by $t$.\r\nBut in special relativity the time coordinate $t$ itself changes between frames, so the description of the path in a new frame is non-trivial.\r\nConsider a particle at rest in $S'$, so $\\underline{x}'=\\underline{x}'_0$ is constant; the invariant interval is then $\\delta s^2=c^2\\delta t'^2$.\r\nDefine the proper time to be the time $\\tau$ with $c^2\\delta\\tau^2=\\delta s^2$, so $\\delta\\tau$ is the time experienced by the particle.\r\nBy invariance, this equation holds in all frames, and $\\tau$ is real for time-like intervals.\r\nSo the world line of a particle can be parameterized by $\\tau$.\r\nIn terms of an infinitesimal interval, if $\\underline{u}$ is the velocity of the particle, we have\r\n$$\\mathrm d\\tau=\\frac{\\mathrm ds}{c}=\\frac{1}{c}\\sqrt{c^2\\,\\mathrm dt^2-\\mathrm dx^2}=\\sqrt{1-\\frac{|\\underline{u}|^2}{c^2}}\\mathrm dt$$\r\nHence $\\mathrm dt/\\mathrm d\\tau=\\gamma_u$ where $\\gamma_u=1/\\sqrt{1-u^2/c^2}$.\r\nThe total time experienced by the particle is then\r\n$$T=\\int\\mathrm d\\tau=\\int\\frac{\\mathrm dt}{\\gamma_u}$$\r\nTo study this, we introduce the concept of a $4$-velocity.\r\nThe position $4$-vector of a particle is the column vector $X(\\tau)=(ct(\\tau),\\underline{x}(\\tau))^\\top$ where $\\underline{x}$ is a $3$-vector.\r\n\\begin{definition}\r\n    The $4$-velocity is\r\n    $$U=\\frac{\\mathrm dX}{\\mathrm d\\tau}=\\begin{pmatrix}\r\n        c\\,\\mathrm dt/\\mathrm d\\tau\\\\\r\n        \\mathrm d\\underline{x}/\\mathrm d\\tau\r\n    \\end{pmatrix}=\\gamma_u\\begin{pmatrix}\r\n        c\\\\\r\n        \\underline{u}\r\n    \\end{pmatrix},\\underline{u}=\\frac{\\mathrm d\\underline{x}}{\\mathrm dt}$$\r\n\\end{definition}\r\nIf we have two frames $S,S'$ such that the components $X,X'$ of the position vector are related by $X'=\\Lambda X$, then $U'=\\Lambda U$.\r\nIn general, anything that transforms in this way is called a $4$-vector.\r\nIn particular, $U$ is a $4$-vector since $X$ is and $\\tau$ is invariant.\r\nNote that $\\mathrm dX/\\mathrm dt$ is however not a $4$-vector.\r\nThe scalar product $U\\cdot U$ is hence Lorentz invariant, that is, $U\\cdot U=U'\\cdot U'$.\r\nIn the rest frame of a particle with $4$-velocity $U$ we have $U=(c,\\underline{0})^\\top$, so $U\\cdot U=c^2$; indeed, for any $u$ we have $\\gamma_u^2(c^2-|\\underline{u}|^2)=c^2$.\r\nWe have seen that the transformation rule for velocities in special relativity is not as simple as for Galilean transformations.\r\nHowever, we do have a fairly simple transformation law for $4$-vectors, which we can apply to the $4$-velocity: $U'=\\Lambda U$.
\\begin{example}\r\n    Work in a frame $S$ where our favourite particle is moving with speed $u$ at an angle $\\theta$ to the $x$-axis in the $x$-$y$ plane.\r\n    Its $4$-velocity is then $U=\\gamma_u(c,u\\cos\\theta,u\\sin\\theta,0)^\\top$.\r\n    Consider another frame $S'$ which moves with speed $v$ in the $x$ direction of $S$.\r\n    Suppose the particle has speed $u'$ in $S'$ and makes an angle $\\theta'$ with the $x'$-direction.\r\n    So $U'=\\gamma_{u'}(c,u'\\cos\\theta',u'\\sin\\theta',0)^\\top$ with\r\n    $$U'=\\begin{pmatrix}\r\n        \\gamma_v&-\\gamma_v\\beta_v&0&0\\\\\r\n        -\\gamma_v\\beta_v&\\gamma_v&0&0\\\\\r\n        0&0&1&0\\\\\r\n        0&0&0&1\r\n    \\end{pmatrix}U$$\r\n    which we can solve to get $\\theta',u'$ in terms of the other quantities:\r\n    $$u'\\cos\\theta'=\\frac{u\\cos\\theta-v}{1-uv\\cos\\theta/c^2},\\tan\\theta'=\\frac{u\\sin\\theta}{\\gamma_v(u\\cos\\theta-v)}$$\r\n    This change in angle, i.e. the apparent change of the direction of motion of the particle due to the motion of the observer, is called aberration.\r\n    This also applies when the particle is a photon, so $u=c$: although the speed of light cannot change across inertial frames, the direction of a light ray can.\r\n\\end{example}\r\nWe also want to talk about $4$-momentum.\r\nThe $4$-momentum of a particle of mass $m$ moving with $4$-velocity $U$ is given by $P=mU=m\\gamma_u(c,\\underline{u})^\\top$ with components $\\mu=0,1,2,3$, where the component $\\mu=0$ is interpreted as time.\r\nFor $P$ to be a $4$-vector, $m$ must be an invariant, so we must take $m$ to be the rest mass of the particle.\r\nThe $4$-momentum of a system of particles is the sum of the individual $4$-momenta, and it is conserved in the absence of external forces.\r\nThe spatial components of $P$ correspond to the relativistic $3$-momentum $\\underline{p}=\\gamma_um\\underline{u}$, which is the Newtonian expression with the mass modified to $\\gamma_um$, interpreted as the apparent mass of the moving particle.\r\nIn particular, when $|\\underline{u}|\\to c$, the apparent mass tends to infinity.\r\nFor the zero component,\r\n$$P^0=\\gamma_umc=\\frac{1}{c}\\left(mc^2+\\frac{1}{2}m|\\underline{u}|^2+\\cdots\\right)$$\r\nWe see the kinetic energy in the second term, so the natural interpretation of $P$ is $P=(E/c,\\underline{p})$ where $E$ is called the relativistic energy; thus $E=\\gamma_umc^2=mc^2+m|\\underline{u}|^2/2+\\cdots$, and $P$ is sometimes called the energy-momentum $4$-vector.\r\nNote that $E\\to\\infty$ as $|\\underline{u}|\\to c$.\r\nSo for a stationary particle we have $E=mc^2$, and for a moving particle $E=mc^2+(\\gamma_u-1)mc^2$, where the second term is the kinetic energy, which reduces to the Newtonian kinetic energy for small $u$.\r\nNow $P\\cdot P=E^2/c^2-|\\underline{p}|^2$ is Lorentz invariant, and evaluating it in the rest frame shows it equals $m^2c^2$, so we have $E^2=|\\underline{p}|^2c^2+m^2c^4$.\r\nIn Newtonian physics, mass and energy are separate ideas, in the sense that they are separately conserved.\r\nBut in relativity, mass is not conserved and is a form of energy, i.e. we can convert mass into kinetic energy and vice versa.\\\\
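As a numerical illustration (again with convenient numbers, not from the lecture): at $u=\\frac{3}{5}c$ we have $\\gamma_u=\\frac{5}{4}$, so the kinetic energy is $(\\gamma_u-1)mc^2=\\frac{1}{4}mc^2$, noticeably larger than the Newtonian value $\\frac{1}{2}m|\\underline{u}|^2=\\frac{9}{50}mc^2$.\\\\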
Consider a massless particle (i.e. a particle with zero rest mass), such as a photon.\r\nIt can still have non-zero ($4$-)momentum and hence non-zero relativistic energy.\r\nSuch a particle travels at the speed of light, and $0=m^2c^2=P\\cdot P=E^2/c^2-|\\underline{p}|^2$.\r\nWe say this particle is light-like and it travels along a light-like trajectory.\r\nConsequently, there is no proper time for this particle.\r\nNote that in this case $E=c|\\underline{p}|$, so\r\n$$P=\\frac{E}{c}\\begin{pmatrix}\r\n    1\\\\\r\n    \\underline{n}\r\n\\end{pmatrix}$$\r\nwhere $\\underline{n}$ is a unit vector.\r\nIn special relativity, we can write Newton's Law as\r\n$$\\frac{\\mathrm dP}{\\mathrm d\\tau}=F$$\r\nwhere $F$ is the $4$-force, i.e.\r\n$$F=\\gamma_u\\begin{pmatrix}\r\n    \\underline{F}\\cdot\\underline{u}/c\\\\\r\n    \\underline{F}\r\n\\end{pmatrix}$$\r\nwhich is also a $4$-vector.\r\nNote that if we rewrite this in terms of the coordinate time $t$ instead of the proper time, Newton's Second Law reappears, so the definition is consistent.\r\nEquivalently, for a particle with rest mass $m$, one can write $F=mA$ where $A=\\mathrm dU/\\mathrm d\\tau$ is the $4$-acceleration.\r\n\\subsection{Examples in Particle Physics}\r\nWe want to explore the use of the conservation of total $4$-momentum in problems in particle physics.\r\nConsider $P=(E/c,\\underline{p})^\\top$ for a system of particles.\r\nA useful way to consider the system is to introduce the notion of a center-of-momentum frame (CM frame), which is the frame where the total $3$-momentum is $\\underline{0}$ (possible whenever all particles have positive rest mass).\r\n\\begin{example}\r\n    Particle decay.\r\n    Consider a particle of mass $m_1$ with $4$-momentum $P_1$ which decays into two particles of masses $m_2,m_3$ and $4$-momenta $P_2,P_3$ respectively.\r\n    So we have $P_1=P_2+P_3$.\r\n    The zero component gives $E_1=E_2+E_3$, and the spatial components give $\\underline{p_1}=\\underline{p_2}+\\underline{p_3}$.\r\n    In the CM frame, $\\underline{p_1}=\\underline{0}$, therefore $\\underline{p_2}=-\\underline{p_3}$.\r\n    Also\r\n    $$m_1c=E_1/c=E_2/c+E_3/c=\\sqrt{|\\underline{p_2}|^2+m_2^2c^2}+\\sqrt{|\\underline{p_3}|^2+m_3^2c^2}\\ge (m_2+m_3)c$$\r\n    So this decay is possible only if $m_1\\ge m_2+m_3$.\r\n    Note that equality need not hold (unlike in Newtonian mechanics): the mass deficit is converted into kinetic energy.\r\n\\end{example}
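One can in fact solve for the energies of the decay products in the CM frame (a standard computation, added here for illustration): writing $P_3=P_1-P_2$, squaring, and using $P_i\\cdot P_i=m_i^2c^2$ together with $P_1=(m_1c,\\underline{0})^\\top$, so that $P_1\\cdot P_2=m_1E_2$, gives\r\n$$E_2=\\frac{(m_1^2+m_2^2-m_3^2)c^2}{2m_1},\\qquad E_3=\\frac{(m_1^2+m_3^2-m_2^2)c^2}{2m_1}.$$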
\\begin{example}\r\n    A Higgs particle $h$ decays into two photons $\\gamma$; then $P_h=P_{\\gamma_1}+P_{\\gamma_2}$, and in the rest frame of $h$ we have $P_h=(m_hc,\\underline{0})$.\r\n    So if we look at the spatial components, then $\\underline{0}=\\underline{p_{\\gamma_1}}+\\underline{p_{\\gamma_2}}$.\r\n    And since the photons have zero rest mass,\r\n    $$\\frac{E_{\\gamma_1}}{c}=|\\underline{p_{\\gamma_1}}|=|\\underline{p_{\\gamma_2}}|=\\frac{E_{\\gamma_2}}{c}$$\r\n    So each of the photons carries half of the Higgs particle's total energy.\r\n    Note that in this case mass is not conserved.\r\n\\end{example}\r\n\\begin{example}\r\n    Consider two identical particles which collide and retain their identities.\r\n    Let $P_1,P_2$ be the $4$-momenta before the collision and $P_3,P_4$ those after.\r\n    Suppose $S$ is the laboratory frame, where $\\underline{p_2}=0$; take the horizontal to be the line joining the two particles, and let $\\theta$ be the inclination of particle $1$ after the collision and $\\phi$ that of particle $2$.\r\n    We want to study the relationship between $\\theta$ and $\\phi$.\\\\\r\n    Now we go to the CM frame, where the particles move horizontally before the collision, and the outgoing trajectories form two lines crossing each other.\r\n    Let $\\theta'$ be the angle these outgoing lines make with the horizontal.\r\n    Let $v$ be the common speed before the collision and $w$ the common speed after.\r\n    We put a $'$ on quantities to indicate we are in the CM frame.\r\n    Then\r\n    $$P_1'=\\begin{pmatrix}\r\n        m\\gamma_vc\\\\\r\n        m\\gamma_vv\\\\\r\n        0\\\\\r\n        0\r\n    \\end{pmatrix},P_2'=\\begin{pmatrix}\r\n        m\\gamma_vc\\\\\r\n        -m\\gamma_vv\\\\\r\n        0\\\\\r\n        0\r\n    \\end{pmatrix}$$\r\n    and\r\n    $$P_3'=\\begin{pmatrix}\r\n        m\\gamma_wc\\\\\r\n        m\\gamma_ww\\cos\\theta'\\\\\r\n        m\\gamma_ww\\sin\\theta'\\\\\r\n        0\r\n    \\end{pmatrix},P_4'=\\begin{pmatrix}\r\n        m\\gamma_wc\\\\\r\n        -m\\gamma_ww\\cos\\theta'\\\\\r\n        -m\\gamma_ww\\sin\\theta'\\\\\r\n        0\r\n    \\end{pmatrix}$$\r\n    Conservation of the zeroth component gives $\\gamma_v=\\gamma_w$, hence $v=w$.\r\n    Now we apply the Lorentz transformation from the CM frame $S'$ back to $S$.\r\n    The velocity of the transformation is $v$, so\r\n    $$\\Lambda=\\begin{pmatrix}\r\n        \\gamma_v&\\gamma_vv/c&0&0\\\\\r\n        \\gamma_vv/c&\\gamma_v&0&0\\\\\r\n        0&0&1&0\\\\\r\n        0&0&0&1\r\n    \\end{pmatrix}$$\r\n    Now before the collision,\r\n    $$P_1=\\begin{pmatrix}\r\n        m\\gamma_v^2(c+v^2/c)\\\\\r\n        m\\gamma_v^2(v+v)\\\\\r\n        0\\\\\r\n        0\r\n    \\end{pmatrix}=\\begin{pmatrix}\r\n        m\\gamma_uc\\\\\r\n        m\\gamma_uu\r\n    \\end{pmatrix}$$\r\n    where $u$ is the initial velocity of particle $1$.\r\n    Considering the situation after the collision, and writing $q$ for the velocity of particle $1$ after the collision, we get\r\n    $$P_3=\\begin{pmatrix}\r\n        m\\gamma_v^2(c+(v^2/c)\\cos\\theta')\\\\\r\n        m\\gamma_v^2(v+v\\cos\\theta')\\\\\r\n        m\\gamma_vv\\sin\\theta'\\\\\r\n        0\r\n    \\end{pmatrix}=\\begin{pmatrix}\r\n        m\\gamma_qc\\\\\r\n        m\\gamma_qq\\cos\\theta\\\\\r\n        m\\gamma_qq\\sin\\theta\\\\\r\n        0\r\n    \\end{pmatrix}$$\r\n    So by comparing the $1$ and $2$ components (the ratio of the $y$- and $x$-momenta), we get\r\n    $$\\tan\\theta=\\frac{m\\gamma_v}{m\\gamma_v^2}\\frac{v\\sin\\theta'}{v(1+\\cos\\theta')}=\\frac{1}{\\gamma_v}\\tan\\frac{\\theta'}{2}$$\r\n    Similarly,\r\n    $$\\tan\\phi=\\frac{1}{\\gamma_v}\\cot\\frac{\\theta'}{2}$$\r\n    So $\\tan\\theta\\tan\\phi=1/\\gamma_v^2=2/(1+\\gamma_u)$.
    When $\\gamma_u\\to 1$ (i.e. in the Newtonian limit), we get $\\tan\\theta\\tan\\phi=1$, so the two outgoing trajectories are perpendicular, recovering the familiar Newtonian result.\r\n\\end{example}", "meta": {"hexsha": "fbab3a66f7f27ddfe35c1876b882234a16086392", "size": 27187, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8/rel.tex", "max_stars_repo_name": "david-bai-notes/IA-Dynamics-and-Relativity", "max_stars_repo_head_hexsha": "9a37539f19e62c795ad837062801e51e7adc75b2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8/rel.tex", "max_issues_repo_name": "david-bai-notes/IA-Dynamics-and-Relativity", "max_issues_repo_head_hexsha": "9a37539f19e62c795ad837062801e51e7adc75b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8/rel.tex", "max_forks_repo_name": "david-bai-notes/IA-Dynamics-and-Relativity", "max_forks_repo_head_hexsha": "9a37539f19e62c795ad837062801e51e7adc75b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.1701570681, "max_line_length": 271, "alphanum_fraction": 0.6931253908, "num_tokens": 8218, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.757794360334681, "lm_q2_score": 0.7772998560157663, "lm_q1q2_score": 0.5890334471777072}}
{"text": "\\paragraph{}\nThe main objective of this chapter is to introduce a way that can develop an adaptive mesh automatically based on the posteriori error estimator.\nMost of the unnecessary refinement in the region which contributes few to the improvement of the accuracy will be prevented.\nThe expressions related to the eigenvalues of the SBFEM formulation representing the quantity of the error in the interpolation are adopted as one of the error indicators, together with the area and other geometric properties of the Scaled Boundary Finite Element.\nA machine learning model using the Multilayer Perceptron (MLP) is trained to determine whether a Scaled Boundary Finite Element needs refinement or not based on all these information based on all error estimators mentioned above.\n\n\\paragraph{}\nThe proposed method enhances the SBFEM with quad-tree mesh introduced in Sec.~\\ref{qdt_sec:main} and the outstanding features of the method are:\n\\begin{itemize}\n    \\item No human effort involvement in mesh refinement\n    \\item Smart refinement detection\n    \\item Highly extensible criteria, any other error indicators can be added to the existing framework\n\\end{itemize}\n\n\\paragraph{}\nThis chapter is organized as follows.\nThe error indicators used in the proposed method will be introduced first.\nAfter that, a machine learning algorithm that can be trained to determine the necessity of the refinement of a cell based on these indicators is presented.\nFurthermore, a triangle merging algorithm is developed to bypass the lack of eigenvalue error indicators in first order triangular element in the SBFEM.\nFinally, the matrix representation of NURBS curves is introduced to improve the computational efficiency and the robustness of the NURBS related calculation.\nThe accuracy and the convergence properties of the proposed techniques are demonstrated with benchmark problems in the context of linear elasticity, followed by concluding remarks in the last section.\n", "meta": {"hexsha": "95769e2714893827ea53cd7977c3b1027b895ec0", "size": 1961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "adaptivity/intro.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "adaptivity/intro.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "adaptivity/intro.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.1363636364, "max_line_length": 264, "alphanum_fraction": 0.8225395207, "num_tokens": 369, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146848, "lm_q2_score": 0.7310585844894971, "lm_q1q2_score": 0.5889643345190952}}
{"text": "\n\\section{World Map}\n\\label{sec:worldMap}\n\nThe world map is one of the basic requirements needed for both localization and path finding.\nIn this project the world map is used to do fictive particle measurements for the particle filter, and provide a read map containing obstacles for the A* algorithm.\nThe world map class is containing the length and width of the world, and a list of objects in the world.\nIt has been decided to only use squares objects in the world map, and these are modelled as a set of 4 lines.\nEach square object is defined by 4 parameters, a corner position, length, width and an orientation offset of the x axis in degrees.\nA useful geometry library called \\emph{MathNet.Spatial} is used to easy modulation of points, lines and angles, along easy access to geometry algebra as line length and the intersect point of two lines.\n\n\\subsection{Particle measurements}\n\nThe robot is measuring the distance to surrounding objects as the means of localization, and the particles therefore needs the ability to do the same action in the world map.\nThis is implemented by creating a line from the particle in the particle orientation direction, and calculate the intersect points with the world edge lines and object lines, as seen on figure \\ref{fig:particleMeasurement}.\nThe distance to the closest intersect point from the particle is chosen as the fictive distance measurement of that particle.\n\n\\myFigure{Implementation/WorldMap/particleMeasurement}{The principle of particle distance measurements in the world map.}{fig:particleMeasurement}{1}\n\n\\subsection{A* road map generation}\n\nThe A* algorithm needs a grid based map, where each tile is marked as free or blocked, to be able to plan a driving route.\nFor simplicity it has been decided to have the world map and road map at the same scale, this means a 100 by 100 cm world map, will be a 100*100 tiles road map.\nIt is necessary to determined whether each of the tiles is inside one of the world objects or not, to draw a correct read map.\n\nThis is done by area calculation as seen on figure \\ref{fig:ReadMapAreaCalc}.\nFour triangles is created from the tile coordinate to each of the object corners, marked A to D on figure \\ref{fig:ReadMapAreaCalc}.\nThe area sum of these four triangles is compared to the area of the object, if the triangle area sum is equal to the object area, the tile is inside the object and is marked as block, else it is marked free.\n\n\\myFigure{Implementation/WorldMap/ReadMapAreaCalc}{The principle of calculating area to determined if a point is inside a square.}{fig:ReadMapAreaCalc}{0.6}\n\nFigure \\ref{fig:BlockPlot} show a print out of a test read map containing three obstacles that the A* algorithm have to navigate around.\n\n\\myFigure{Implementation/WorldMap/BlockPlot}{A plot of the read map, with three obstacles placed at various positions. 
All blocked tiles are marked black.}{fig:BlockPlot}{0.4}\n\n\\pagebreak", "meta": {"hexsha": "75329a8fd363bdad7879eb413369481a3ce25e92", "size": 2920, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter/implementation/worldMap.tex", "max_stars_repo_name": "Rotvig/AI-Robotics-Project", "max_stars_repo_head_hexsha": "af8d96a429df4c55d9716c4ff0453188d9c8c799", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/chapter/implementation/worldMap.tex", "max_issues_repo_name": "Rotvig/AI-Robotics-Project", "max_issues_repo_head_hexsha": "af8d96a429df4c55d9716c4ff0453188d9c8c799", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter/implementation/worldMap.tex", "max_forks_repo_name": "Rotvig/AI-Robotics-Project", "max_forks_repo_head_hexsha": "af8d96a429df4c55d9716c4ff0453188d9c8c799", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.1111111111, "max_line_length": 223, "alphanum_fraction": 0.7982876712, "num_tokens": 642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812552, "lm_q2_score": 0.7310585903489892, "lm_q1q2_score": 0.5889643324164069}}
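The point-in-square area test described in the World Map section above is straightforward to implement. A self-contained sketch (illustrative Python rather than the project's C\#/MathNet.Spatial code; the corner order is assumed to follow the square's perimeter):

\begin{verbatim}
def triangle_area(p, q, r):
    # Shoelace formula for the unsigned area of a triangle.
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def tile_is_blocked(tile, corners, eps=1e-9):
    # corners: the object's 4 corners in perimeter order (A, B, C, D).
    # Sum the areas of the 4 triangles spanned by the tile coordinate
    # and each edge of the square.
    tri_sum = sum(triangle_area(tile, corners[i], corners[(i + 1) % 4])
                  for i in range(4))
    # Area of the square itself, split into two triangles.
    square_area = (triangle_area(corners[0], corners[1], corners[2])
                   + triangle_area(corners[0], corners[2], corners[3]))
    # Inside (blocked) iff the triangle area sum equals the object area.
    return abs(tri_sum - square_area) < eps

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tile_is_blocked((0.5, 0.5), square))  # True  -> mark blocked
print(tile_is_blocked((2.0, 0.5), square))  # False -> mark free
\end{verbatim}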
{"text": "\n\\subsection{Broadcasting}\n\nLoosen standards, can do addition subtraction if one matrix is \\(1\\times n\\).\n\n", "meta": {"hexsha": "e12566e0a5395502bc7eb23d218f38231b95ccde", "size": 107, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/computer/linear/01-03-broadcasting.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/computer/linear/01-03-broadcasting.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/computer/linear/01-03-broadcasting.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.8333333333, "max_line_length": 77, "alphanum_fraction": 0.7570093458, "num_tokens": 26, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321796478255, "lm_q2_score": 0.7310585844894971, "lm_q1q2_score": 0.5889643208725276}}
{"text": "\\chapter{Higher group theory}\\label{chap:higher_groups}\n\nIn this section I describe joint work with Ulrik Buchholtz and Floris van Doorn \\cite{BuchholtzDoornRijke}.\n\n\\section{Higher groups}\n\\label{sec:higher-groups}\n\nRecall that types in HoTT may be viewed as $\\infty$-groupoids:\nelements are objects, paths are morphisms, higher paths are higher\nmorphisms, etc.\n\nIt follows that \\emph{pointed connected} types $B$ may be viewed as higher\ngroups, with \\define{carrier} $\\loopspacesym B$.\nThe neutral element is the identity path,\nthe group operation is given by path composition,\nand higher paths witness the unit and associativity laws.\nOf course, these higher paths are themselves subject to further laws,\netc., but the beauty of the type-theoretic definition is\nthat we don't have to worry about that:\nall the (higher) laws follow from the rules of the identity types.\nWriting $G$ for the carrier $\\loopspacesym B$, it is common to write $BG$ for the pointed\nconnected type $B$, which comes equipped with an identification $G = \\loopspacesym BG$.\nWe call $BG$ the \\define{delooping} of $G$.\n\nThe type of pointed types is\n$\\UU_\\pt \\defeq  \\sm{A:\\UU} A$. The type of $n$-truncated types is\n$\\UU^{\\le n} \\defeq  \\sm{A:\\UU}\\istrunc{n}{A}$ and for $n$-connected types it is\n$\\UU^{>n} \\defeq  \\sm{A:\\UU}\\mathsf{is\\usc{}conn}_n(A)$. We will combine these notations as needed.\n\n\\begin{defn}\nWe define the type of \\define{higher groups}, or \\define{$\\infty$-groups}, to be\n\\begin{equation*}\n\\infty\\mathsf{Grp}\\defeq \\sm{G:\\UU}{BG:\\UU_\\pt^{>0}} \\eqv{G}{\\loopspacesym BG}.\n\\end{equation*}\nWhen $G$ is an $\\infty$-group, we also write $G$ for its first projection, called the \\define{carrier} of $G$.\n\\end{defn}\n\n\\begin{rmk}\nNote that we have equivalences\n\\begin{align*}\n  \\infty\\mathsf{Grp}\n  &\\jdeq   \\sm{G:\\UU}{BG:\\UU_\\pt^{>0}} \\eqv{G}{\\loopspacesym BG} \\\\\n  &\\eqvsym \\sm{G:\\UU_\\pt}{BG:\\UU_\\pt^{>0}} G \\eqvsym_\\pt \\loopspacesym BG \\\\\n  &\\eqvsym \\UU_\\pt^{>0}\n\\end{align*}\nfor the type of higher groups. \n\\end{rmk}\n\nAutomorphism groups form a major class of examples of $\\infty$-groups.\nGiven \\emph{any} type $A$ and any object $a : A$, the automorphism group at $a$ is defined as\n\\define{automorphism group} $\\Aut a\\defeq (a=a)$. \nThis is indeed an $\\infty$-group, because it is the loop space of the connected component of $A$ at $a$, i.e. we define~$\\BAut a \\defeq  \\im(a : 1 \\to A) = (x : A) \\times \\trunc{-1}{a=x}$.\nFrom this definition it is immediate that $\\Aut a=\\loopspacesym\\BAut a$, so we see that $\\Aut a$ is indeed an example of an $\\infty$-group. \n\nIf we take $A = \\mathsf{Set}$, we get the usual symmetric groups\n$\\Sym_n \\defeq  \\Aut(\\Fin(n))$, where $\\Fin(n)$ is a set with $n$\nelements. (Note that $\\BS_n = \\BAut(\\Fin (n))$ is the type of all\n$n$-element sets.)\n\nWe recover the ordinary set-level groups by requiring that $G$ is a $0$-type, or equivalently, that $BG$\nis a $1$-type. This leads us to introduce:\n\n\\begin{defn}\nWe define the type of \\define{groupal $(n-1)$-gropuoids}, or \\define{$n$-groups}, to be\n\\begin{equation*}\nn\\mathsf{Grp} \\defeq \\sm{G:\\UU_\\pt^{<n}}{BG :\\UU_\\pt^{>0}} G \\eqvsym_\\pt \\loopspacesym BG.\n\\end{equation*}\nWe write $\\mathsf{Grp}$ for the type of $1$-groups.\n\\end{defn}\n\nThe type of $n$-groups is therefore equivalent to the type of pointed connected $(n+1)$-types. 
Note that if $A$ is an $(n+1)$-type, then $\\Aut a$ is an $(n+1)$-group because $\\Aut a$ is $n$-truncated.\n\nFor example, the integers $\\mathbb{Z}$ as an additive group are from this\nperspective represented by their delooping $\\mathop{\\mathrm{B}\\mathbb{Z}}=\\bS^1$, i.e., the circle.\nIndeed, any set-level group $G$ is represented as its delooping $BG\\defeq K(G,1)$.\n\nMoving across the homotopy hypothesis, for every pointed type $(X,x)$\nwe have the \\define{fundamental $\\infty$-group of $X$},\n$\\Pi_\\infty(X,x)\\defeq \\Aut x$. Its $(n-1)$-truncation (an instance of\ndecategorification, see \\cref{sec:stabilization}) is the\n\\define{fundamental $n$-group of $X$}, $\\Pi_n(X,x)$,\nwith corresponding delooping $\\mathrm{B}\\Pi_n(X,x) = \\trunc{n}{\\BAut x}$.\n\nDouble loop spaces are better behaved than mere loop\nspaces. For example, they are commutative up to homotopy\nby the Eckmann-Hilton argument~\\cite[Theorem~2.1.6]{hottbook}.\nTriple loop spaces are even better behaved than double loop spaces, and so on.\n\n\\begin{defn}\nA type $G$ is said to be \\define{$k$-tuply groupal} if it comes equipped with a \\define{$k$-fold delooping}, i.e.~ a pointed $(k-1)$-connected\n$B^kG : \\UU_\\pt^{\\ge k}$ and an equivalence $G \\eqvsym \\loopspacesym^kB^kG$.\n\nMixing the two directions, we also define\n\\begin{align*}\n  (n,k)\\GType\n  &\\defeq  \\sm{G : \\UU_\\pt^{\\le n}}{B^kG : \\UU_\\pt^{\\ge k}}\n    G \\eqvsym_\\pt \\loopspacesym^kB^kG \\\\\n  & \\phantom{:}\\eqvsym \\UU_\\pt^{\\ge k,\\le n+k}\n\\end{align*}\nfor the type of \\define{$k$-tuply groupal $n$-groupoids}\\footnote{This\n  is called $n\\UU_k$ in \\cite{BaezDolan1998}, but here we give equal\n  billing to $n$ and $k$,\n  and we add the ``G'' to indicate group-structure.}.\nWe allow taking $n=\\infty$, in which case the truncation requirement\nis simply dropped.\n\\end{defn}\n\nNote that $n\\mathsf{Grp} = (n-1,1)\\GType$. This shift in indexing is slightly\nannoying, but we keep it to stay consistent with the literature.\n\nNote that for each $k\\geq 0$ there is a forgetful map\n\\begin{equation*}\n(n,k+1)\\GType \\to (n,k)\\GType,\n\\end{equation*}\ngiven by $B^{k+1}G\\mapsto \\loopspacesym B^{k+1}G$, defining a sequence\n\\begin{equation*}\n\\begin{tikzcd}\n\\cdots \\arrow[r] & (n,2)\\GType \\arrow[r] & (n,1)\\GType \\arrow[r] & (n,0)\\GType.\n\\end{tikzcd}\n\\end{equation*}\nThus we define $(n,\\infty)\\GType$ as the limit of this sequence:\n\\begin{align*}\n(n,\\infty)\\GType & \\defeq  \\lim_k{}(n,k)\\GType \\\\\n&\\phantom{:}\\eqvsym \\sm{B^{\\blank}G : \\prd{k : \\bN}\\UU_\\pt^{\\ge k,\\le n+k}}\\prd{k : \\bN} B^kG \\eqvsym_\\pt \\loopspacesym B^{k+1}G.\n\\end{align*}\nIn \\cref{sec:stabilization} we prove the stabilization theorem\n(\\cref{thm:stabilization}), from which it follows that\n$(n,\\infty)\\GType=(n,k)\\GType$ for $k\\geq n+2$.\n\nThe type $(\\infty,\\infty)\\GType$ is the type of \\define{stably groupal $\\infty$-groups},\nalso known as \\define{connective spectra}. 
If we also relax the\nconnectivity requirement, we get the type of all spectra, and we can\nthink of a spectrum as a kind of $\\infty$-groupoid with $k$-morphisms\nfor all $k\\in\\mathbb{Z}$.\n\nThe double hierarchy of higher groups is summarized in~\\cref{tab:periodic}.\nWe shall prove the correctness of the $n=0$ column in~\\cref{sec:n=0}.\n\\begin{table}\n  \\caption{\\label{tab:periodic}Periodic table of $k$-tuply groupal $n$-groupoids.}\n  \\centering\n  \\begin{tabular}{clllll} \\toprule\n    $k\\setminus n$ & $0$ & $1$ & $2$ & $\\cdots$ & $\\infty$ \\\\\n    \\midrule\n    $0$ & pointed set & pointed groupoid & pointed $2$-groupoid & $\\cdots$ & pointed $\\infty$-groupoid \\\\\n    $1$ & group & $2$-group & $3$-group & $\\cdots$ & $\\infty$-group \\\\\n    $2$ & abelian group & braided $2$-group & braided $3$-group & $\\cdots$ & braided $\\infty$-group \\\\\n    $3$ & \\ditto & symmetric $2$-group & sylleptic $3$-group & $\\cdots$ & sylleptic $\\infty$-group \\\\\n    $4$ & \\ditto & \\ditto & symmetric $3$-group & $\\cdots$ & ?? $\\infty$-group \\\\\n    $\\vdots$ & \\mbox{}\\quad$\\vdots$ & \\mbox{}\\quad$\\vdots$ & \\mbox{}\\quad$\\vdots$ & $\\ddots$ & \\mbox{}\\quad$\\vdots$ \\\\\n    $\\loopspacesym$ & \\ditto & \\ditto & \\ditto & $\\cdots$ & connective spectrum \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\nA homomorphism between higher groups is any\nfunction that can be suitably delooped.\n\n\\begin{defn}\nFor $G,H : (n,k)\\GType$, we define\n\\begin{align*}\n\\hom_{(n,k)}(G,H) & \\defeq  \n\\sm{h: G \\to_\\pt H}{B^k h: B^kG \\to_\\pt B^kH} \\loopspacesym^k(B^k h) \\sim_\\pt h \\\\\n&\\phantom{:}\\eqvsym (B^k h: B^kG \\to_\\pt B^kH).\n\\end{align*}\nFor (connective) spectra we need\npointed maps between all the deloopings and pointed homotopies showing\nthey cohere.\n\\end{defn}\n\nNote that if $h,k : G \\to H$ are homomorphisms between set-level\ngroups, then $h$ and $k$ are \\define{conjugate} if $Bh, Bk : BG \\to_\\pt BH$ are\n\\define{freely} homotopic (i.e., equal as maps $BG \\to BH$).\n\nAlso observe that \n\\begin{align*}\n\\pi_j(B^kG \\to_\\pt B^kH) & \\eqvsym \\trunc{0}{B^kG \\to_\\pt \\loopspacesym^jB^kH} \\\\\n& \\eqvsym \\trunc{0}{\\Sigma^jB^kG \\to_\\pt B^kH} \\\\\n& \\eqvsym 0\n\\end{align*}\nfor $j>n$, which suggests that $\\hom_{(n,k)}(G,H)$ is $n$-truncated. To prove this, we deviate slightly from the approach in \\cite{BuchholtzDoornRijke} and use the following intermediate result.\n\n\\begin{prp}\nConsider a pointed $(k+1)$-connected type $X$, and a family $Y:X\\to\\UU^{\\le n+k}$ of $(n+k)$-truncated types over $X$. Then the map\n\\begin{equation*}\n\\evpt : \\Big(\\prd{x:X}Y(x)\\Big) \\to Y(\\pt)\n\\end{equation*}\ninduced by the point inclusion $\\unit\\to X$, is an $(n-2)$-truncated map.\n\\end{prp}\n\n\\begin{proof}\nNote that we have a commuting triangle\n\\begin{equation*}\n\\begin{tikzcd}[column sep=-1em]\n& \\Big(\\prd{x:X}Y(x)\\Big) \\arrow[dl,swap,\"\\blank\\circ \\mathsf{const}_\\pt\"] \\arrow[dr,\"\\evpt\"] & \\phantom{\\Big(\\prd{t:\\unit} Y(\\pt)\\Big)} \\\\\n\\Big(\\prd{t:\\unit} Y(\\pt)\\Big) \\arrow[rr,swap,\"\\evpt\",\"\\eqvsym\"'] & & Y(\\pt),\n\\end{tikzcd}\n\\end{equation*}\nso the map on the left is an $(n-2)$-truncated map if and only if the map on the right is. 
For the map on the left, the claim follows immediately from Lemma 8.6.1 of \\cite{hottbook}, since the point inclusion $\\mathsf{const}_\\pt:\\unit\\to X$ is a $k$-connected map by \\cref{cor:ptd_connected}.\n\\end{proof}\n\n\\begin{defn}\n  If $X : \\UU_\\pt$ and $Y : X \\to \\UU_\\pt$, then we introduce the\n  type of \\define{pointed sections},\n\\begin{equation*}\n\\textstyle{\\prod_{(x:X)}^\\ast Y(x)} \\defeq \\sm{s:\\prd{x:X}Y(x)}s(\\pt)=\\pt\n\\end{equation*}\n  This type is itself pointed by the trivial section $\\lam{x}\\pt$.\n\\end{defn}\n\n\\begin{cor}\nConsider a pointed $k$-connected type $X$, and a family $Y:X\\to\\UU_\\pt^{\\le n+k}$ of pointed $(n+k)$-truncated types over $X$. Then the type $\\prod_{(x:X)}^\\ast Y(x)$ is $(n-1)$-truncated.\n\\end{cor}\n\n\\begin{proof}\nNote that we have a pullback square\n\\begin{equation*}\n\\begin{tikzcd}\n\\textstyle{\\prod_{(x:X)}^\\ast Y(x)} \\arrow[r] \\arrow[d] & \\unit \\arrow[d] \\\\\n\\prd{x:X}Y(x) \\arrow[r,swap,\"\\evpt\"] & Y(\\ast),\n\\end{tikzcd}\n\\end{equation*}\nso the claim follows from the fact that $\\evpt$ is an $(n-1)$-truncated map.\n\\end{proof}\n\n\\begin{thm}\n  The type $\\hom_{(n,k)}(G,H)$ is an $n$-type for any $G,H:(n,k)\\GType$.\n\\end{thm}\n\n\\begin{proof}\n  If $X$ is $(k-1)$-connected, and $Y$ is $(n+k)$-truncated, then the type of pointed maps $X \\to_\\pt Y$ is $n$-truncated.\n\\end{proof}\n\n\\begin{cor}\n  The type $(n,k)\\GType$ is $(n+1)$-truncated.\n\\end{cor}\n\\begin{proof}\n  This follows immediately from the preceding corollary, as the type\n  of equivalences $G \\eqvsym H$ is a subtype of the homomorphisms from\n  $G$ to $H$.\n\\end{proof}\n\nIf $k\\ge n+2$ (so we're in the stable range), then $\\hom_{(n,k)}(G,H)$\nbecomes a stably groupal $n$-groupoid. This generalizes the\nfact that the homomorphisms between abelian groups form an abelian\ngroup.\n\n\\begin{cor}\nThe automorphism group $\\Aut G$ of a higher group $G:(n,k)\\GType$ is a $1$-groupal $(n+1)$-group, equivalent to the automorphism group of the pointed type $B^kG$.\n\\end{cor}\n\n\\section{Stabilization}\n\\label{sec:stabilization}\n\n\\begin{defn}\nThe \\define{decategorification} $\\Decat G$ of a $k$-tuply groupal $(n+1)$-group is defined to be the $k$-tuply groupal $n$-group $\\trunc{n-1}{G}$, which has delooping $\\trunc{n+k-1}{B^kG}$. Thus, decategorification is an operation\n\\begin{equation*}\n\\Decat : (n,k)\\GType \\to (n-1,k)\\GType.\n\\end{equation*}\nThe functorial action of $\\Decat$ is defined in the expected way. We also define the \\define{$\\infty$-decategorification} $\\iDecat G$ of a $k$-tuply groupal $\\infty$-group as the $k$-tuply groupal $n$-group $\\trunc{n}{G}$, which has delooping $\\trunc{n+k}{B^k G}$. \n\\end{defn}\n\n\\begin{defn}\nThe \\define{discrete categorification} $\\Disc G$ of a $k$-tuply-groupal $(n+1)$-group is defined to be the same $\\infty$-group $G$, now considered as a $k$-tuply groupal $(n+2)$-group. Thus, the discrete categorification is an operation\n\\begin{equation*}\n\\Disc : (n,k)\\GType \\to (n+1,k)\\GType.\n\\end{equation*}\nSimilarly, the \\define{discrete $\\infty$-decategorification} $\\iDisc G$ of a $k$-tuply groupal $(n+1)$-group is defined to be the same group, now considered as a $k$-tuply groupal $\\infty$-group.\n\\end{defn}\n\n\\begin{rmk}\nThe decategorification and discrete categorification functors make the $(n+1)$-category $(n,k)\\GType$ a reflective sub-$(\\infty,1)$-category of $(n+1,k)\\GType$. That is, there is an adjunction ${\\Decat} \\dashv {\\Disc}$. 
These properties are straightforward consequences of the universal property of truncation.\nSimilarly, we have ${\\iDecat} \\dashv {\\iDisc}$ such that the counit induces an isomorphism ${\\iDecat} \\circ {\\iDisc} = \\idfunc$.\n\\end{rmk}\n\nFor the next constructions, we need the following properties.\n\\begin{defn}\n  For $A : \\UU_\\pt$ we define the \\define{$n$-connected cover} of $A$ to be \n  $A{\\angled n} \\defeq  \\fibf{A \\to \\trunc{n}{A}}$. We have the projection $p_1: A{\\angled n} \\to_\\pt A$.\n\\end{defn}\n\n\\begin{lem} \\label{lem:connected-cover-univ}\n  The universal property of the $n$-connected cover states the following. For any $n$-connected pointed type $B$, the pointed map\n  $$(B \\to_\\pt A{\\angled n}) \\to_\\pt (B \\to_\\pt A),$$\n  given by postcomposition with $p_1$, is an equivalence.\\\\\n\\end{lem}\n\n\\begin{proof}\n  Given a map $f:B\\to_\\pt A$, we can form a map $\\widetilde f: B \\to A{\\angled n}$. First note that for $b:B$ the type $\\truncunit{fb}_n=_{\\trunc{n}{A}}\\truncunit{\\pt}_n$ is $(n-1)$-truncated and inhabited for $b=\\pt$. Since $B$ is $n$-connected, the universal property for connected types shows that we can construct a $qb:\\truncunit{fb}_n=\\truncunit{\\pt}_n$ for all $b$ such that $q_0:qb_0\\cdot\\mathsf{ap}_{\\truncunit{\\blank}_n}(f_0)=1$. Then we can define the map $\\widetilde f(b)\\defeq (fb, qb)$. Now $\\widetilde f$ is pointed, because $(f_0,q_0):(fb_0,qb_0)=(a_0,1)$.\n\n  Now we show that this is indeed an inverse to the given map. On the one hand, we need to show that if $f: B \\to_\\pt A$, then $\\proj 1 \\circ \\widetilde f=f$. The underlying functions are equal because they both send $b$ to $f(b)$. They respect points in the same way, because\n  $\\mathsf{ap}{p_1}(\\widetilde f_0)=f_0$. The proof that the other composite is the identity follows from a computation using fibers and connectivity, which we omit here, but can be found in the formalization.\n\\end{proof}\n\nThe next reflective sub-$(\\infty,1)$-category is formed by looping and delooping.\n\\begin{description}\n\\item[looping] $\\loopspacesym : (n,k)\\GType \\to (n-1,k+1)\\GType$ \\\\\n  $\\angled{G,B^kG} \\mapsto \\angled{\\loopspacesym G,B^kG{\\angled k}}$\n\\item[delooping] $\\B : (n,k)\\GType \\to (n+1,k-1)\\GType$ \\\\\n  $\\angled{G,B^kG} \\mapsto \\angled{\\loopspacesym^{k-1}B^kG,B^kG}$\n\\end{description}\nWe have ${\\B} \\dashv {\\loopspacesym}$, which follows from Lemma \\ref{lem:connected-cover-univ} %note: autoref writes \"Theorem\"\nand $\\loopspacesym\\circ{\\B} = \\idfunc$, which follows from the fact that $A{\\angled n}=A$ if $A$ is $n$-connected.\n\nThe last adjoint pair of functors is given by stabilization and forgetting. This does not form a reflective sub-$(\\infty,1)$-category.\n\\begin{description}\n\\item[forgetting] $F : (n,k)\\GType \\to (n,k-1)\\GType$ \\\\\n  $\\angled{G,B^kG} \\mapsto \\angled{G,\\loopspacesym B^kG}$\n\\item[stabilization] $S : (n,k)\\GType \\to (n,k+1)\\GType$ \\\\\n  $\\angled{G,B^kG} \\mapsto \\angled{SG,\\trunc{n+k+1}{\\susp B^kG}}$,\\\\\n  where $SG = \\trunc{n}{\\loopspacesym^{k+1}\\susp B^kG}$\n\\end{description}\nWe have the adjunction ${S} \\dashv {F}$ which follows from the suspension-loop adjunction $\\Sigma\\dashv\\loopspacesym$ on pointed types.\n\nThe next main goal in this section is the stabilization theorem,\nstating that the ditto marks in~\\cref{tab:periodic} are justified.\n\nThe following corollary is almost \\cite[Lemma~8.6.2]{hottbook}, but\nproving this in Book HoTT is a bit tricky. 
See the\nformalization for details.\n\\begin{lem}[Wedge connectivity]\n  \\label{lem:wedge-connectivity}\n  If $A : \\UU_\\pt$ is $n$-connected and $B: \\UU_\\pt$ is\n  $m$-connected, then the map $A \\vee B \\to A \\times B$ is\n  $(n+m)$-connected.\n\\end{lem}\n\nLet us mention that there is an alternative way to prove the wedge\nconnectivity lemma: Recall that if $A$ is $n$-connected and $B$ is\n$m$-connected, then $A \\ast B$ is\n$(n+m+2)$-connected~\\cite[Theorem~6.8]{joinconstruction}. Hence the\nwedge connectivity lemma is also a direct consequence of the following lemma.\n\\begin{lem}\nLet $A$ and $B$ be pointed types.\nThe fiber of the wedge inclusion $A\\vee B\\to A\\times B$ is equivalent to\n$\\loopspacesym{A}\\ast\\loopspacesym{B}$. \n\\end{lem}\n\\begin{proof}\nNote that the fiber of $A\\to A\\times B$ is $\\loopspacesym B$, the fiber of $B\\to A\\times B$ is $\\loopspacesym A$, and of course the fiber of $1\\to A\\times B$ is $\\loopspacesym A\\times \\loopspacesym B$. We get a commuting cube\n\\begin{equation*}\n\\begin{tikzcd}\n& \\loopspacesym A\\times \\loopspacesym B \\arrow[dl] \\arrow[d] \\arrow[dr] \\\\\n\\loopspacesym B \\arrow[d] & 1 \\arrow[dl] \\arrow[dr] & \\loopspacesym A \\arrow[dl,crossing over] \\arrow[d] \\\\\nA \\arrow[dr] & 1 \\arrow[d] \\arrow[from=ul,crossing over] & B \\arrow[dl] \\\\\n& A\\times B\n\\end{tikzcd}\n\\end{equation*}\nin which the vertical squares are pullback squares. \n\nBy the descent theorem for pushouts it now follows that $\\loopspacesym A\\ast \\loopspacesym B$ is the fiber of the wedge inclusion.\n\\end{proof}\n\nThe second main tool we need for the stabilization theorem is:\n\\begin{thm}[Freudenthal]\n  If $A : \\UU_\\pt^{>n}$ with $n\\ge 0$, then the map\n  $A \\to \\loopspacesym\\susp A$ is $2n$-connected.\n\\end{thm}\nThis is \\cite[Theorem~8.6.4]{hottbook}.\n\nThe final building block we need is:\n\\begin{lem}\n  There is a pullback square\n  \\[\n    \\begin{tikzcd}\n      \\susp\\loopspacesym A \\ar[d,\"\\varepsilon_A\"']\\ar[r] & A \\vee A \\ar[d] \\\\\n      A \\ar[r,\"\\Delta\"'] & A \\times A\n    \\end{tikzcd}\n  \\]\n  for any $A : \\UU_\\pt$.\n\\end{lem}\n\n\\begin{proof}\nNote that the pullback of $\\Delta:A\\to A\\times A$ along either inclusion $A\\to A\\times A$ is contractible. So we have a cube\n\\begin{equation*}\n\\begin{tikzcd}\n& \\loopspacesym A \\arrow[dl] \\arrow[d] \\arrow[dr] \\\\\n1 \\arrow[d] & 1 \\arrow[dl] \\arrow[dr] & 1 \\arrow[dl,crossing over] \\arrow[d] \\\\\nA \\arrow[dr] & A \\arrow[d,\"\\Delta\"] \\arrow[from=ul,crossing over] & A \\arrow[dl] \\\\\n& A\\times A\n\\end{tikzcd}\n\\end{equation*}\nin which the vertical squares are all pullback squares. Therefore, if we pull back along the wedge inclusion, we obtain by the descent theorem for pushouts that the square in the statement is indeed a pullback square.\n\\end{proof}\n\n\\begin{thm}[Stabilization]\n  \\label{thm:stabilization}\n  If $k\\ge n+2$, then $S : (n,k)\\GType \\to (n,k+1)\\GType$ is an\n  equivalence, and any $G : (n,k)\\GType$ is an infinite loop space.\n\\end{thm}\n\\begin{proof}\n  We show that $F\\circ S=\\idfunc=S\\circ F : (n,k)\\GType \\to (n,k)\\GType$\n  whenever $k\\ge n+2$.\n\n  For the first, the unit map of the adjunction factors as\n  \\[\n    B^kG \\to \\loopspacesym\\susp B^kG \\to \\loopspacesym\\trunc{n+k+1}{\\susp B^kG}\n  \\]\n  where the first map is $2k-2$-connected by Freudenthal, and the\n  second map is $n+k$-connected. 
Since the domain is $n+k$-truncated,\n  the composite is an equivalence whenever $2k-2 \\ge n+k$.\n\n  For the second, the counit map of the adjunction factors as\n  \\[\n    \\trunc{n+k}{\\susp\\loopspacesym B^kG} \\to \\trunc{n+k}{B^kG} \\to B^kG,\n  \\]\n  where the second map is an equivalence. By the two lemmas above, the\n  first map is $2k-2$-connected.\n\\end{proof}\nFor example, for $G : (0,2)\\GType$ an abelian group, we have\n$B^nG = K(G,n)$, an Eilenberg-MacLane space.\n\nThe adjunction ${S} \\dashv {F}$ implies that the free group on a\npointed set $X$ is $\\loopspacesym\\trunc{1}{\\susp X}=\\pi_1(\\susp X)$.  If $X$\nhas decidable equality, $\\susp X$ is already $1$-truncated. It is an\nopen problem whether this is true in general.\n\nAlso, the abelianization of a set-level group $G : 1\\mathsf{Grp}$ is\n$\\pi_2(\\susp BG)$. If $G : (n,k)\\GType$ is in the stable range ($k \\ge\nn+2$), then $SFG=G$.\n\n\\section{Group actions}\n\\label{sec:actions}\n\nIn this section we consider a fixed group $G : \\GType$ with delooping\n$BG$. \n\n\\begin{defn}\nConsider a type $A$. A \\define{$G$-action} on $a:A$ is simply a function $X : BG \\to A$ such that $X(\\pt)=a$. If $a$ comes equipped with a $G$-action, we also say that $a$ is a \\define{$G$-object}. \n\nFor any $a:A$, the $G$-object $a^{\\mathrm{triv}}$ consists of $a$ equipped with the \\define{trivial action}, i.e. the constant function $\\mathsf{const}_a:BG\\to A$.\n\\end{defn}\n\n\\begin{rmk}\nClearly, an action of $G$ on $a:A$ is the same as a homomorphism $G \\to \\Aut a$.\n\\end{rmk}\n\n\\begin{defn}\nA \\define{$G$-type} is a type $X:\\UU$ equipped with a $G$-action, and a \\define{map of $G$-types} from $X$ to $Y$ is just a fiberwise transformation\n\\begin{equation*}\n\\alpha : \\prd{t:BG} X(t)\\to Y(t).\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nConsider a type $X$ equipped with the structure of a $G$-type. The type of \\define{invariants} of $X$ is defined to be\n\\begin{equation*}\nX^{hG} \\defeq  \\prd{t:BG} X(t).\n\\end{equation*}\nThe type of \\define{coinvariants} of $X$ is defined to be\n\\begin{equation*}\nX_{hG} \\defeq \\sm{t:BG}X(t)\n\\end{equation*}\nWe also write $X \\dblslash G$ for the type of coinvariants of $X$.\n\\end{defn}\n\n\\begin{rmk}\nThe type of invariants of a $G$-type $X$ is also known as the type of \\define{homotopy fixed points}. The type of coinvariants is also known as the \\define{homotopy orbit space}, or the \\define{homotopy quotient}.\n\\end{rmk}\n\nIt is easy to see that these constructions are respectively the right and left\nadjoints of the functor that sends a type $X$ to the trivial\n$G$-action on $X$, $X^{\\mathrm{triv}} : BG \\to \\UU$, which is just\nthe constant family at $X$. 
Indeed, the adjunctions are just the usual argument\nswap and (un)currying equivalences, for $Y:\\UU$,\n\\begin{align*}\n  \\hom(Y, X^{hG})\n  & \\jdeq  Y \\to \\prd{z:BG}X(z) \\\\\n  & \\eqvsym \\prd{z:BG} Y \\to X(z) \\\\\n  & \\jdeq \\hom(Y^{\\mathrm{triv}}, X), \\\\\n  \\hom(X_{hG}, Y)\n  & \\jdeq \\Big(\\sm{z:BG}X(z)\\Big) \\to Y \\\\\n  & \\eqvsym \\prd{z:BG} X(z) \\to Y \\\\\n  & \\jdeq \\hom(X, Y^{\\mathrm{triv}}).\n\\end{align*}\nIf we think of an action $X : BG\\to \\UU$\nas a type-valued diagram on $BG$, this means that the homotopy fixed\npoints and the homotopy orbit space form the homotopy limit and\nhomotopy colimit of this diagram, respectively.\n\n\\begin{prp}\n  \\label{prop:action-hom-pullback}\n  Let $f : H \\to G$ be a homomorphism of higher groups with delooping\n  $Bf : BH \\to_\\pt BG$, and let $\\alpha : \\hom(X,Y)$ be a map of\n  $G$-types. By composing with $f$ we can also view $X$ and $Y$ as\n  $H$-types, in which case we get a homotopy pullback square:\n  \\[\n    \\begin{tikzcd}\n      X_{hH} \\ar[r]\\ar[d] & Y_{hH}\\ar[d] \\\\\n      X_{hG} \\ar[r]       & Y_{hG}.\n    \\end{tikzcd}\n  \\]\n\\end{prp}\n\\begin{proof}\n  The vertical maps are induced by $Bf$, and the horizontal maps are\n  induced by $\\alpha$. The homotopy pullback corner type $C$ is calculated as\\index{fix this notation}\n  \\begin{align*}\n    C\n    &\\eqvsym (z : BG) \\times (x : X\\,z)\n      \\times (w : BH) \\times (y : Y(Bf\\,w)) \\\\\n    &\\qquad \\times (z = Bf\\,w) \\times (y = \\alpha\\,z\\,x) \\\\\n    &\\eqvsym (w : BH) \\times (x : X(Bf\\,w))\n      = X_{hH},\n  \\end{align*}\n  and under this equivalence the top and the left maps are the\n  canonical ones.\n\\end{proof}\n\nEvery group $G$ carries two canonical actions on itself:\n\\begin{description}\n\\item[the right action] $G : BG \\to \\UU$, $G(x) = (\\pt = x)$, and\n\\item[the adjoint action] $G^{\\mathrm{ad}} : BG \\to \\UU$,\n  $G^{\\mathrm{ad}}(x) = (x = x)$ (by conjugation).\n\\end{description}\n\nWe have $1 \\dblslash G = BG$,\n$G \\dblslash G = 1$ and $G^{\\mathrm{ad}} \\dblslash G = LBG \\defeq \n(\\bS^1 \\to BG)$, the free loop space of $BG$. Recalling that $\\mathop{\\mathrm{B}\\mathbb{Z}} =\n\\bS^1$, we see that $G^{\\mathrm{ad}} \\dblslash G = (\\mathop{\\mathrm{B}\\mathbb{Z}} \\to BG)$, i.e., the\nconjugacy classes of homomorphisms from $\\mathbb{Z}$ to $G$. Since the\nintegers are the free (higher) group on one generator, this is just\nthe conjugacy classes of elements of $G$. But that is exactly what we\nshould get for the homotopy orbits of $G$ under the conjugation\naction.\n\nThe above proposition has an interesting corollary:\n\\begin{cor}\n  \\label{cor:hfiber-hom}\n  If $f : H \\to G$ is a homomorphism of higher groups,\n  then $G \\dblslash H$ is equivalent to the homotopy\n  fiber of the delooping $Bf : BH \\to_\\pt BG$,\n  where $H$ acts on $G$ via the $f$-induced right action.\n\\end{cor}\n\\begin{proof}\n  We apply Proposition~\\ref{prop:action-hom-pullback} with $\\alpha : G \\to 1$\n  being the canonical map from the right action of $G$ to the action\n  of $G$ on the unit type. Then the square becomes:\n  \\[\n    \\begin{tikzcd}[baseline=(O.base)]\n      G \\dblslash H \\ar[r]\\ar[d] & BH\\ar[d] \\\\\n      |[alias=O]| 1 \\ar[r]       & BG\n    \\end{tikzcd}\\qedhere\n  \\]\n\\end{proof}\n\nBy definition, $BG$ classifies \\define{principal $G$-bundles}: pullbacks\nof the right action of $G$. 
That is, a principal $G$-bundle over a\ntype $A$ is a family $F : A \\to \\UU$ represented by a map $\\chi:A\n\\to BG$ such that $F(x) \\eqvsym (\\pt = \\chi(x))$ for all $x : A$.\n\nFor example, for every higher group $G$ we have the corresponding Hopf\nfibration $\\Sigma G \\to \\UU$ represented by the map $\\chi_H : \\Sigma\nG \\to BG$ corresponding under the loop-suspension adjunction to the\nidentity map on $G$. (This particular fibration can be defined using\nonly the induced $H$-space structure on $G$.)\n\nThis perspective underlies the construction of the first and the third\nnamed author of the real projective spaces in homotopy type\ntheory~\\cite{realprojective}. The fiber sequences $\\bS^0 \\to \\bS^n \\to\n\\RP^n$ are principal bundles for the $2$-elements group\n$\\bS^0=\\Sym_2$ with delooping $\\BS_2\\eqvsym \\RP^\\infty$,\nthe type of $2$-element types.\n\n\\subsection{The center of a group}\n\\label{sec:center}\n\nGiven a $k$-tuply groupal $(n+1)$-group $G$, we can also consider the automorphism group\n$\\aut_c G$ of $B^kG:\\UU^{\\ge k,\\le n+k}$ at an arbitrary base point $c:B^kG$. \n\n\\begin{defn}\nWe define the \\define{generalized\n  center} of $G$ to be $ZG \\defeq  \\loopspacesym^k\\aut_c G : (n,k+1)\\GType$. Thus, $G$ acts on the center by \\define{conjugation}.\n\\end{defn}\n\nIndeed, if $G:1\\mathsf{Grp}$ is a set-level group, then an element of\n$ZG$ corresponds to an element of $\\loopspacesym^2\\BAut_cG$, or equivalently,\na map from the $2$-sphere $\\bS^2$ to $\\UU$\nsending the basepoint to $BG$.\nBy the universal property of\n$\\bS^2$ as a HIT, this again corresponds to a homotopy from the\nidentity on $BG$ to itself, $c : \\prd{z:BG} z=z$.\nThis is precisely a homotopy fixed point of the adjoint action of\n$G$ on itself, i.e., a central element.\n\n\\subsection{Equivariant homotopy theory}\n\\label{sec:equivariant}\n\nFix a group $G : \\GType$. Suppose that $G$ is actually the (homotopy)\ntype of a topological group. Consider the\ntype $BG \\to \\UU$ of (small) \\emph{types with a $G$-action}. Naively,\none might think that this represents $G$-equivariant homotopy types,\ni.e., sufficiently nice\\footnote{Sufficiently nice means the\n  $G$-CW-spaces. The same homotopy category arises by taking all\n  spaces with a $G$-action, but then the weak equivalences are the\n  $G$-maps $f : X \\to Y$ that induce weak equivalences on $H$-fixed\n  point spaces $f^H : X^H \\to Y^H$ for all closed subgroups $H$ of $G$.}\ntopological spaces with a $G$-action considered up to $G$-equivariant\nhomotopy equivalence. But this is not so.\n\nBy Elmendorf's theorem~\\cite{Elmendorf1983}, this homotopy theory is\nrather that of presheaves of (ordinary) homotopy types on the\n\\define{orbit category} $\\mathcal{O}_G$ of $G$. This is the full\nsubcategory of the category of $G$-spaces spanned by the homogeneous\nspaces $G/H$, where $H$ ranges over the closed subgroups of $G$.\n\nInside the orbit category we find a copy of the group $G$, namely as\nthe endomorphisms of the object $G/1$ corresponding to the trivial\nsubgroup $1$. Hence, a $G$-equivariant homotopy type gives rise to\ntype with a $G$-action by restriction along the inclusion $BG\n\\hookrightarrow \\mathcal{O}_G$. 
(Here we consider $BG$ as a (pointed\nand connected) topological groupoid on one object.)\n\nAs remarked by Shulman~\\cite{shulman:inverseEI}, when $G$ is a \\emph{compact} Lie\ngroup, then $\\mathcal{O}_G$ is an inverse EI $\\infty$-category, and\nhence we know how to model type theory in the presheaf $\\infty$-topos\nover $\\mathcal{O}_G$. And in certain simple cases we can even define\nthis model internally. For instance, if $G=\\mathbb{Z}/p\\mathbb{Z}$ is a cyclic group\nof prime order, then a small $G$-equivariant type consists of a type with a\n$G$-action, $X : BG \\to \\UU$ together with another type family $X^G : X^{hG}\n\\to \\UU$, where $X^G$ gives for each homotopy fixed point a type of\nproofs or ``special reasons'' why that point should be considered\nfixed~\\cite[7.6]{shulman:inverseEI}. Hence the total space of $X^G$ is the\ntype of actual fixed points,\nand the projection to $X^{hG}$ implements the map from actual fixed\npoints to homotopy fixed points.\n\nEven without going to the orbit category, we can say something about\ntopological groups through their classifying types in type theory. For\nexample~\\cite{Camarena}, if $f : H \\to G$ is injective, then the\nhomotopy fiber of $Bf$ is, by Corollary~\\ref{cor:hfiber-hom},\nthe homotopy orbit space $G \\dblslash H$, which in this case is\njust the coset space $G/H$, and hence in type\ntheory represents the homotopy type of this coset space. And if\n\\[\n  1 \\to K \\to G \\to H \\to 1\n\\]\nis a short exact sequence of topological groups,\nthen $BK \\to BG \\to BH$ is a fibration sequence,\ni.e., we can recover the delooping $BK$ of $K$ as the homotopy fiber\nof the map $BG \\to BH$.\n\n\\subsection{The semi-direct product and the wreath product}\n\\label{sec:elementary-constructions}\n\nIf we are given a homomorphism $\\varphi : H \\to \\Aut(N)$, represented\nby a pointed map\n$B\\varphi : BH \\to_\\pt \\BAut_\\pt(BN)$ where $\\BAut_\\pt(BN)$ is the\ntype of pointed types merely equivalent to $BN$,\nwe can build a new group, the\n\\define{semidirect product}, $G \\defeq  N \\mathbin{\\rtimes_\\varphi} H$\nwith classifying type $BG \\defeq  (z : BH) \\times (B\\varphi\\,z)$.\nThe type $BG$ is indeed pointed (by the pair of the basepoint $\\pt$ in $BH$\nand the basepoint in the pointed type $B\\varphi(\\pt)$), and\nconnected, and hence presents a higher group $G$.\nAn element $g$ of $G$ is given by a pair of an element $h : H$ and an\nidentification $h\\cdot\\pt = \\pt$ in $B\\varphi(\\pt) \\eqvsym_\\pt BN$. But\nsince the action is via pointed maps, the second component is\nequivalently an identification $\\pt = \\pt$ in $BN$, i.e., an element of\n$N$. Under this equivalence, the product of $(h,n)$ and $(h',n')$ is\nindeed $(h\\cdot h', n\\cdot \\varphi(h)(n'))$.\n\nAs a special case we obtain the \\define{direct product} when $\\varphi$\nis the trivial action. Here, $B(H \\times N) \\eqvsym BH \\times BN$.\n\nAs another special case we obtain the \\define{wreath products} $N \\wr\n\\Sym_n$ of a group $N$ and a symmetric group $\\Sym_n$. Here,\n$\\Sym_n$ acts on the direct power $N^{\\Fin n}$ by permuting the\nfactors. Indeed, using the representation of $\\BS_n$ as the type of\n$n$-element types, the map $B\\varphi$ is simply $A \\mapsto (A \\to\nBN)$. 
\n\n\\section{\\texorpdfstring{$1$}{1}-groups}\n\\label{sec:n=0}\n\n\\begin{defn}\nA \\define{$1$-group} $G$ is a set $G$ equipped with a unit $1:G$, a multiplication $x,y\\mapsto xy$ on $G$, and an inverse operation $x\\mapsto x^{-1}$, satisfying the usual group laws.\n\\end{defn}\n\nWe recall that the \\define{homotopy groups} of a pointed type $A$ are defined to be $\\pi_k A\\defeq \n\\trunc{0}{\\loopspacesym^k A}$. These are groups in the usual sense when\n$k\\ge 1$, with neutral element $\\truncunit{\\refl{\\ast}}$ and group\noperation induced by path concatenation.\n\nSince the subuniverse of $n$-types is a reflective subuniverse for any univalent universe closed under homotopy pushouts, it follows from Theorem 5.4 of \\cite{FinsterLicata} that the Eilenberg-Mac Lane spaces $K(G,n)$ can be constructed for any abelian group $G$, and any $n\\geq 1$, and moreover $K(G,n)$ is essentially small whenever $G$ is. Following \\cite{FinsterLicata}, we define $K(A,n+1)\\defeq \\trunc{n+1}{\\Sigma K(A,n)}$ inductively. Then we have the following result.\n\n\\begin{prp}[Theorem 5.4 of \\cite{FinsterLicata}]\\label{thm:EM-spaces}\n  Let $G$ be a group and $n\\geq 1$, and assume that $G$ is abelian\n  when $n>1$. The space $K(G,n)$ is $(n-1)$-connected and\n  $n$-truncated and there is a group isomorphism $\\pi_nK(G,n) \\cong G$.\\qed\n\\end{prp}\n\nIn this section we give a proof that the $n=0$ column of~\\cref{tab:periodic} is correct. \nNote that for $n=0$ the hom-types $\\hom_{(0,k)}(G,H)$ are sets, which means that $(0,k)\\GType$ forms a 1-category. % (F) should we define precategory/category here? I use the word \"category\" for a not necessarily univalent category.\nLet $\\Group$ be the category of ordinary set-level groups (a set with multiplication, inverse and unit satisfying the group laws) and $\\AbGroup$ the category of abelian groups.\n\\begin{thm}\\label{theorem:catofgroups}\n  We have the following equivalences of categories\n  {\\normalfont(}\\/for $k\\geq2${\\normalfont):}\n  \\begin{align*}\n    % (0,0)\\GType &\\eqvsym \\mathsf{Set}_\\pt\\\\\n    (0,1)\\GType &\\eqvsym \\Group;&&\\\\\n    (0,k)\\GType &\\eqvsym \\AbGroup.&&\n  \\end{align*}\n\\end{thm}\nSince this theorem has been formalized, we will not give all details of the proof.\n\\begin{proof}\n  Let $k\\ge1$ and $G$ be a group which is abelian if $k>1$ and let $X:\\UU_\\pt^{\\ge k,\\le k}$. If we have a group homomorphism $\\phi : G \\to \\loopspacesym^k X$ we get a map $e_\\phi^k:K(G,k)\\to_\\pt X$. For $k=1$ this follows directly from the induction principle of $K(G,1)$. For $k>1$ we can define the group homomorphism $\\widetilde\\phi$ as the composite $G \\xrightarrow{\\phi} \\loopspacesym^k X \\eqvsym \\loopspacesym^{k-1}(\\loopspacesym X)$, and apply the induction hypothesis to get a map\n  $e_{\\widetilde\\phi}^{k-1}:K(G,k-1)\\to_\\pt \\loopspacesym X$. By the adjunction $\\Sigma\\dashv\\loopspacesym$ we get a pointed map $\\Sigma K(G,k-1)\\to_\\pt X$, and by the elimination principle of the truncation we get a map $K(G,k)=\\trunc{k}{\\Sigma K(G,k-1)}\\to_\\pt X$. 
\n\n  We can now show that $\\loopspacesym^k e_\\phi^k$ is the expected map, that is, the following diagram commutes, but we omit this proof here.\n\\begin{equation*}\n\\begin{tikzcd}[column sep=0]\n\\loopspacesym^kK(G,k) \\arrow[rr,\"\\eqvsym\"] \\arrow[dr,swap,\"\\loopspacesym^k e_\\phi^k\"] & & G \\\\\n& \\loopspacesym^k X \\arrow[ur,swap,\"\\phi\"] & \\phantom{\\loopspacesym^kK(G,k)}\n\\end{tikzcd}\n\\end{equation*}\n  Now if $\\phi$ is a group isomorphism, by Whitehead's Theorem for truncated types \\cite[Theorem 8.8.3]{hottbook} we know that $e_\\phi^k$ is an equivalence, since it induces an equivalence on all homotopy groups (trivially on the levels other than $k$). We can also show that $e_\\phi^k$ is natural in $\\phi$.\n\n  Note that if we have a group homomorphism $\\psi:G\\to G'$, we also get a group homomorphism $G\\to\\loopspacesym^k K(G',k)$, and by the above construction we get a pointed map $K(\\psi,k):K(G,k)\\to_\\pt K(G',k)$. This is functorial, which follows from naturality of $e_\\phi^k$. \n\n  Finally, we can construct the equivalence explicitly. We have a functor\n  $\\pi_k:(0,k)\\GType \\to \\AbGroup$ which sends $G$ to $\\pi_k BG$. Conversely, we have the functor $K({-},k):\\AbGroup\\to (0,k)\\GType$. We have natural isomorphisms\n  $\\pi_k K(G,k)\\eqvsym G$ by~\\cref{thm:EM-spaces} and $K(\\pi_k X,k)\\eqvsym_\\pt X$ by the application of Whitehead described above. The construction is exactly the same for $k=1$ after replacing $\\AbGroup$ by $\\Group$.\n\\end{proof}\n\nFrom the symmetric groups $\\Sym_n$, we can get other finite groups using\nthe constructions of~\\cref{sec:elementary-constructions}. Other\ngroups can be constructed more directly. For example,\n$BA_n$, the classifying type of the alternating group, can be taken to\nbe the type of $n$-element sets $X$ equipped with a \\define{sign\n  ordering}: this is an equivalence class of an ordering\n$\\Fin n \\eqvsym X$ modulo even permutations. Indeed, there are only two\npossible sign orderings, so this definition corresponds to\nfirst considering the short exact sequence\n\\[\n  1 \\to A_n \\to \\Sym_n \\xrightarrow{\\mathrm{sgn}}{} \\Sym_2 \\to 1\n\\]\nwhere the last map is the sign map, then realizing the sign map\nas given by the map $\\mathrm{Bsgn} : \\BS_n \\to \\BS_2$ that takes\nan $n$-element set to its set of sign orderings, and finally\nletting $BA_n$ be the homotopy fiber of $\\mathrm{Bsgn}$.\n\nSimilarly, $BC_n$, the classifying type of the cyclic group on $n$\nelements, can be taken to be the type of $n$-element sets $X$\nequipped with a \\define{cyclic ordering}: an equivalence class of an\nordering $\\Fin n \\eqvsym X$ modulo cyclic permutations. But unlike\nthe above, where we had the coincidence that $\\Aut(\\Sym_2) \\eqvsym\n\\Sym_2$,\nthis doesn't correspond to a short exact sequence. Rather,\nit corresponds to a sequence\n\\[\n  1 \\to C_n \\to \\Sym_n \\to \\Aut(\\Fin(n-1)) \\eqvsym \\Sym_{(n-1)!}\n\\]\nwhere the delooping of the last map is the map from $\\BS_n$ to\n$\\BS_{(n-1)!}$ that maps an $n$-element set to the set of cyclic\norderings, of which there are $(n-1)!$ many -- since once we fix the\nposition in the ordering of a particular element,\nwe are free to permute the rest.\n\nAs another example,\nconsider the map $p : \\BS_4 \\to_\\pt \\BS_3$ that maps a 4-element set\n$X$ to its set of 2-by-2 partitions, of which there are $3$. 
Using this\nconstruction, we can realize some famous semidirect and wreath product identities,\nsuch as $A_4 \\cong S_2^2 \\rtimes A_3$, $S_4 \\cong S_2^2 \\rtimes\nS_3$, and, for the octahedral group, $O_h \\cong S_2^3 \\rtimes S_3\n\\cong S_2 \\wr S_3$.\n\n\\smallskip\n\nLet us turn to a different way of getting new groups from old, namely\nvia covering space theory.\n\n\\subsection{\\texorpdfstring{$1$}{1}-groups and covering spaces}\n\\label{sec:covering}\n\nThe connection between covering spaces of a pointed connected type\n$X$ and sets with an action of the fundamental group of $X$ has\nalready been established in homotopy type\ntheory~\\cite{FavoniaHarper2016}. Let us recall this connection and\nexpand a bit upon it.\n\nFor us, a pointed connected type $X$ is equivalently\nan $\\infty$-group $G:\\infty\\mathsf{Grp}$\nwith delooping $BG \\defeq  X$.\nA covering space over $BG$ is simply a type family $C : BG \\to \\mathsf{Set}$\nthat lands in the universe of sets.\nHence by our discussion of actions in~\\cref{sec:actions}\nit is precisely a set with a $G$-action.\nSince $\\mathsf{Set}$ is a 1-type, $C$ extends uniquely to a type family\n$C' : \\trunc{1}{BG} \\to \\mathsf{Set}$,\nbut $\\trunc{1}{BG}$ is the delooping of the fundamental group\nof $X$, and hence $C'$ is the uniquely determined\nchoice of a set with an action of the fundamental group.\n\nThe universal covering space is the simply connected cover of $BG$,\n\\[\n  \\widetilde{BG} : BG \\to \\mathsf{Set}, \\quad\n  z \\mapsto \\trunc{0}{\\pt = z}.\n\\]\nNote that the total space of $\\widetilde{BG}$ is indeed the\n$1$-connected cover $BG\\angled1$,\nsince $\\trunc{0}{\\pt =_{BG} \\pt} \\eqvsym (\\truncunit{\\pt} =_{\\trunc{1}{BG}} \\truncunit{\\pt})$.\nAlso note that if $G$ is already a 1-group, then this is just the right\naction of $G$ on itself, and in general, it is the right action of $G$\non the fundamental group\n(i.e., the decategorification of $G$)\nvia the truncation homomorphism from $G$ to $\\pi_1(BG)$,\nwhere we can also view $\\pi_1(BG)$ as the $1\\mathsf{Grp}$ decategorification\nof $G$.\n\nIn general, there is a Galois correspondence between connected covers\nof $BG$ and conjugacy classes of subgroups of the fundamental group.\nIndeed, if $C : BG \\to \\mathsf{Set}$ has a connected total space,\nthen the space $(g : \\trunc{1}{BG}) \\times C'(g)$\nis itself a connected, 1-truncated type,\nand the projection to $\\trunc{1}{BG}$\ninduces an inclusion of fundamental groups\nonce a point $\\pt : C'(\\pt)$ has been chosen.\n\n\\begin{thm}[Fundamental theorem of Galois theory for covering spaces]\n  \\label{thm:galois-zero}\n  $\\phantom{42}$\n  \\begin{enumerate}\n  \\item  The automorphism group of the universal covering space\n    $\\widetilde{BG}$ is isomorphic to\n    the $1$-group decategorification of $G$,\n    \\[\n      \\Aut(\\widetilde{BG}) \\eqvsym \\Decat_1(G) \\eqvsym \\pi_1(BG).\n    \\]\n  \\item Furthermore, there is a contravariant correspondence between\n    conjugacy classes of subgroups of $\\Decat_1(G)$ and connected\n    covers of $BG$.\n  \\item This lifts to a Galois correspondence between subgroups of\n    $\\Decat_1(G)$ and pointed, connected covers of $BG$.  
The normal\n    subgroups correspond to Galois covers.\n  \\end{enumerate}\n\\end{thm}\nNote that the universal covering space\nand the trivial covering space\n(constant at the unit type)\nare canonically pointed,\nreflecting the fact that\nthe two trivial subgroups are normal.\n\nThe first part of the fundamental theorem has a clear generalization to\nhigher groups:\n\\begin{thm}[Fundamental theorem of Galois theory for $n$-covers,\n  part one]\n  The automorphism group of the universal $n$-type cover $U_n(BG)$,\n  \\[\n    U_n(BG) : BG \\to \\UU^{\\le n},\n    \\quad\n    z \\mapsto \\trunc{n}{\\pt = z}\n  \\]\n  of $BG$ is\n  isomorphic to the $(n+1)$-group decategorification of $G$,\n  \\[\n    \\Aut(U_n(BG)) \\eqvsym \\Decat_{n+1}(G) \\eqvsym \\Pi_{n+1}(BG).\n  \\]\n\\end{thm}\n\\begin{proof}\n  Note that\n  $\\BAut(U_n(BG))$ is the image of the map $1 \\to (BG \\to \\UU^{\\le\n    n})$ that sends the canonical element to $U_n(BG)$. Since $BG$ is\n  connected, this image is exactly $\\trunc{n+1}{BG}$ by\n  \\cite[Theorem~7.1]{joinconstruction}. Then we are done,\n  since $\\B\\Pi_{n+1}(BG) \\eqvsym \\trunc{n+1}{BG}$, by definition.\n\\end{proof}\nIt is possible to use the other parts\nof~\\cref{thm:galois-zero} in order to \\emph{define} the notions of\nsubgroup and normal subgroup for $n$-groups, which then become\n\\emph{structure on} rather than a \\emph{property of} a homomorphism $f\n: K \\to G$.\nExplicitly, the structure of a \\define{normal subgroup} on such an $f$\nis a delooping $B(G \\dblslash K)$ of the type $G \\dblslash K$\ntogether with a map $Bq : BG \\to_\\pt B(G \\dblslash K)$ giving rise to a\nfiber sequence\n\\begin{equation}\\label{eq:normal-fiber-sequence}\n  G \\dblslash K \\to BK \\xrightarrow{Bf}{} BG\n  \\xrightarrow{Bq}{} B(G \\dblslash K).\n\\end{equation}\n\n\\subsection{Central extensions and group cohomology}\n\\label{sec:group-cohomology}\n\nThe cohomology of a higher group $G$ is simply the cohomology of its\ndelooping $BG$. Indeed, for any spectrum $A$, we define\n\\[\n  H_{\\mathrm{Grp}}^k(G, A) \\defeq  \\trunc{0}{BG \\to_\\pt B^kA}.\n\\]\nOf course, to define the $k$'th cohomology group, we only need the\n$k$-fold delooping $B^kA$.\n\nIf $A:(\\infty,2)\\GType$ is a braided $\\infty$-group, then\nwe have the second cohomology group $H_{\\mathrm{Grp}}^2(G, A)$, and an\nelement $c:BG \\to_\\pt B^2A$ gives rise to a \\define{central extension}\n\\[\n  BA \\to BH \\to BG \\xrightarrow{c}{} B^2A,\n\\]\nwhere $BH$ is the homotopy fiber of $c$.  
This lifts to the world of\nhigher groups the usual result that isomorphism classes of central\nextensions of a $1$-group $G$ by an abelian $1$-group $A$\nare given by cohomology classes in $H_{\\mathrm{Grp}}^2(G, A)$.\n\n\\smallskip\n\nIn the Spectral repository there is full formalization of the Serre\nspectral sequence for cohomology \\cite{SerreSpectralSequence}.\nIf we have any normal subgroup fiber sequence for $\\infty$-groups as\nin~\\eqref{eq:normal-fiber-sequence}, then we get a corresponding\nspectral sequence with $E_2$-page\n\\[\n  H_{\\mathrm{Grp}}^p(G \\dblslash K, H_{\\mathrm{Grp}}^q(K, A))\n\\]\nand converging to $H_{\\mathrm{Grp}}^n(G, A)$, where $A$ is any\ntruncated, connective spectrum, which could even be a left $G$-module,\nin which case we reproduce the \\define{Hochschild-Serre spectral\n  sequence}.\n\n\\section{From RPn to higher groups}\n\n\\marginnote{Need to find a place for this}\n\\begin{defn}\nFor any type $X$, we define \\define{the connected component of $X$ in $\\UU$} to be the type \\[\\UU_X\\defeq\\sm{A:\\UU}\\brck{A=X}.\\] In particular, we have the type \\[\\UU_{\\sphere{0}}\\jdeq\\sm{A:\\UU}\\brck{A={\\sphere{0}}}\\] of $2$\\nobreakdash-element sets.\n\nAn \\define{$X$\\nobreakdash-bundle} over a type $A$ is defined to be a type family $B:A\\to\\UU_X$. \n\\end{defn}\n\nA term of type $\\UU_X$ is formally a pair of a small type $A:\\UU$ together with a term of type $\\brck{A=X}$, but since the latter is a mere proposition we usually omit it, and consider the term itself as a small type.\n", "meta": {"hexsha": "685019106ab823d90bd07e58d2c8d995ad1dd60b", "size": 44517, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "higher_groups.tex", "max_stars_repo_name": "EgbertRijke/dissertation", "max_stars_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-07-06T10:37:12.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-06T10:37:12.000Z", "max_issues_repo_path": "higher_groups.tex", "max_issues_repo_name": "EgbertRijke/dissertation", "max_issues_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "higher_groups.tex", "max_forks_repo_name": "EgbertRijke/dissertation", "max_forks_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9709051724, "max_line_length": 572, "alphanum_fraction": 0.6957342139, "num_tokens": 15303, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110202, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5889643169456665}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath, amssymb, tikz, xcolor, sfmath, systeme, tcolorbox}\n\\renewcommand{\\familydefault}{\\sfdefault}\n\\usepackage[margin = 0.5in]{geometry}\n\\tikzset{>=stealth}\n\\pagestyle{empty}\n\\raggedright\n\n\\begin{document}\n\n\\subsection*{Systems of Equations: $\\mathbf{2 \\times 2}$}\n\n\\begin{tcolorbox}[colframe=orange!70!white, coltitle=black, title=\\textbf{Summary}]\n\\begin{enumerate}\n    \\item Multiplying by a matrix $A$ transforms the coordinate plane.\n    \\item Using our new coordinate system, we want to get to the point using vector arithmetic.\n    \\item If it is impossible to get to that point using vector arithmetic, we have no solution.\n    \\item If there is more than one vector combination to get to that point, we have an infinite number of solutions.\n    \\item Inverse matrices allow us to find the solution via a process.\n\\end{enumerate}\n\\end{tcolorbox}\n\\vfill\n\nIn Algebra 1, you learned about solving systems of equations:\n\\begin{center}\n\\systeme{x+y=2, 2x+y=1}\n\\end{center}\n\\vspace{1.5in}\n% \\begin{tcolorbox}[colback=white!50!green, title=\\textbf{Solution to a System of Equations}]\n% A \\textbf{solution} to a system of equations is an ordered pair, $(x,y)$, such that substituting the values of $x$ and $y$ into each equation \n% \\end{tcolorbox}\n% \\bigskip\n\nOne method for solving a system of equations is to find the intersection point of the graphs of each equation:\n\\begin{center}\n\\begin{tikzpicture}[scale=0.65]\n\\draw[gray!50] (-4,-4) grid (5,5);\n\\draw[<->, thick] (-4.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-4.5) -- (0,5.5) node [above] {$y$};\n\\draw[<->, very thick, red] (4.5,-2.5) node [right] {$x+y=2$} -- (-2.5,4.5);\n\\draw[<->, very thick, blue, domain=-2:2.5] plot (\\x, -2*\\x+1) node [below right] {$2x+y=1$};\n\\draw[color=violet,fill=violet] (-1,3) circle [radius=3pt];\n\\end{tikzpicture}\n\\end{center}\n\n\\newpage\n\nPreviously, we looked at matrix multiplication. 
The system of equations\n\\begin{center}\n    \\systeme{x+y=2, 2x+y=1}\n\\end{center}\nin matrix form is\n\\[\n\\begin{bmatrix}\n1 & 1 \\\\\n2 & 1 \n\\end{bmatrix}\n\\begin{bmatrix}\nx \\\\ y\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n2 \\\\ 1\n\\end{bmatrix}\n\\qquad \\text{or} \\qquad\nx\\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix} + y \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}\n\\]\n\n\\vspace{1.5in}\n\nTransforming $\\hat{\\imath}$ and $\\hat{\\jmath}$ by the matrix $\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix}$, we get \n\\begin{center}\n\\begin{tikzpicture}[scale=0.6]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2) node [below right] {$\\hat{\\imath}'$};\n\\draw[->, very thick, blue] (0,0) -- (1,1) node [below right] {$\\hat{\\jmath}'$};\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\n\\newpage\n\n\\underline{Using these vectors}, we want to get to the point $(2,1)$:\n\\begin{center}\n\\begin{tikzpicture}[scale=0.6]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2) node [below right] {$\\hat{\\imath}'$};\n\\draw[->, very thick, blue] (0,0) -- (1,1) node [below right] {$\\hat{\\jmath}'$};\n\\draw[fill=black] (2,1) circle [radius=3pt];\n\\end{tikzpicture}\n\\end{center}\n\n\\vspace{1.5in}\n\nKnowing our solution is $(-1,3)$:\n\\[\n-1\\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix} + 3 \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}\n\\]\n\n\\vfill\nVisually, we get\n\\begin{center}\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2);\n\\draw[->, very thick, blue] (0,0) -- (1,1);\n\\draw[fill=black] (2,1) circle [radius=3pt];\n% \\draw[->, very thick, dashed, red] (0,0) -- (-1,-2) node [above left] {$-1\\hat{\\imath}'$};\n% \\draw[->, very thick, dashed, blue] (-1,-2) -- (0,-1);\n% \\draw[->, very thick, dashed, blue] (0,-1) -- (1,0);\n% \\draw[->, very thick, dashed, blue] (1,0) -- (2,1) node [below right] {$3\\hat{\\jmath}'$};\n\\pgftransformcm{1}{2}{1}{1}{\\pgfpoint{0cm}{0cm}}\n\\draw[violet] (-2,-3) grid (2,3);\n\\end{tikzpicture}\n\\end{center}\n\n\n\\newpage\n\n{\\color{red}\\textbf{Example 1.}} Using the new basis vectors $\\hat{\\imath} = \\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}$ and $\\hat{\\jmath} = \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix}$ \\newline\\\\ show that $(2,-1)$ is the solution to  \\systeme{x+y=1, 2x+y=3}\n\\bigskip\n\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2);\n\\draw[->, very thick, blue] (0,0) -- (1,1);\n\\draw[fill=black] (1,3) circle [radius=3pt];\n\\end{tikzpicture}\n\n\n\\vfill\n\n\\newpage\n\nLike single equations, some systems of equations have special answers \n\\begin{itemize}\n    \\item No solution: $\\varnothing$\n    \\item Infinite number of solutions\n\\end{itemize}\n\\vfill\n\n{\\color{red}\\textbf{Example 2.}} Using the new basis vectors $\\hat{\\imath} = \\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}$ and $\\hat{\\jmath} = 
\\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}$ show that the system below has no solution.    \\newline\\\\\n\\systeme{x+y=2, 2x+2y=1}\n\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n% \\draw[->, very thick, red] (0,0) -- (1,2);\n% \\draw[->, very thick, blue] (0,0) -- (1,2);\n\\draw[fill=black] (2,1) circle [radius=3pt];\n\\end{tikzpicture}\n\\vfill\n\n\\newpage\n\n$\\begin{bmatrix}1 & 1 \\\\ 2 & 2\\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}$  \\newline\\\\\n\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n% \\draw[->, very thick, red] (0,0) -- (1,2);\n% \\draw[->, very thick, blue] (0,0) -- (1,2);\n\\draw[fill=black] (2,1) circle [radius=3pt];\n% \\draw[->, very thick, dashed, red] (0,0) -- (-1,-2) node [above left] {$-1\\hat{\\imath}'$};\n% \\draw[->, very thick, dashed, blue] (-1,-2) -- (0,-1);\n% \\draw[->, very thick, dashed, blue] (0,-1) -- (1,0);\n% \\draw[->, very thick, dashed, blue] (1,0) -- (2,1) node [below right] {$3\\hat{\\jmath}'$};\n\\pgftransformcm{1}{2}{1}{2}{\\pgfpoint{0cm}{0cm}}\n\\draw[dashed, violet] (-1,-1) grid (1,1);\n\\end{tikzpicture}\n\n\\vfill\n\nIf we graph the equations $x + y = 2$ and $2x + 2y = 1$, we find that the graphs do not intersect:\n\\begin{center}\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[<->, red, very thick, domain=-3:5] plot (\\x, 2 - \\x);\n\\draw[<->, blue, very thick, domain=-4:4.5] plot (\\x, 0.5-\\x);\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\n\\newpage\n\n{\\color{red}\\textbf{Example 3.}} Show that the system below has an infinite number of solutions.    
\\newline\\\\\n\n\\systeme{2x-y=1, 4x-2y=2}\n\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[fill=black] (1,2) circle [radius=3pt];\n\\end{tikzpicture}\n\n\\vfill\n\n$\\begin{bmatrix} 2 & -1 \\\\ 4 & -2 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}$ \\newline\\\\\n\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n% \\draw[->, very thick, red] (0,0) -- (1,2);\n% \\draw[->, very thick, blue] (0,0) -- (1,1);\n\\draw[fill=black] (1,2) circle [radius=3pt];\n% \\draw[->, very thick, dashed, red] (0,0) -- (-1,-2) node [above left] {$-1\\hat{\\imath}'$};\n% \\draw[->, very thick, dashed, blue] (-1,-2) -- (0,-1);\n% \\draw[->, very thick, dashed, blue] (0,-1) -- (1,0);\n% \\draw[->, very thick, dashed, blue] (1,0) -- (2,1) node [below right] {$3\\hat{\\jmath}'$};\n\\pgftransformcm{2}{4}{-1}{-2}{\\pgfpoint{0cm}{0cm}}\n\\draw[dashed, violet] (-1,-1) grid (1,1);\n\\end{tikzpicture}\n\n\\vfill\n\n\\newpage\n\n\\subsubsection*{Matrix Methods for Solving a System of Equations}\n\nIf we graph the equations $2x-y=1$ and $4x-2y=2$, we find that they are equations for the same line:  \\newline\\\\\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.7]\n\\draw[gray!50] (-5,-5) grid (5,5);\n\\draw[<->, thick] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[<->, very thick, violet, domain=-2.5:2.5] plot (\\x, 2*\\x);\n\\end{tikzpicture}\n\\end{center}\n\nIn the intro, we saw that the system of equations   \n\\begin{center}\\systeme{x+y=2, 2x+y=1}\\end{center} \nin matrix form is\n\\[\n\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}\n\\]\n\n\\vspace{0.75in}\n\nWe were able to get a solution $(-1,3)$. \n\n\\vfill \n\nIn matrix form, this is\n\\begin{flalign*}\n\\begin{bmatrix}\n1 & 0 \\\\ 0 & 1 \n\\end{bmatrix}\n\\begin{bmatrix} x \\\\ y \\end{bmatrix}\n=\n\\begin{bmatrix} -1 \\\\ 3 \\end{bmatrix}   &&\\\\\n\\end{flalign*}\n\n\\vfill\n\nSo, if we can transform \n\\begin{flalign*}\n\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}\n\\text{ into }\n\\begin{bmatrix}\n1 & 0 \\\\ 0 & 1 \n\\end{bmatrix}\n\\begin{bmatrix} x \\\\ y \\end{bmatrix}   \n&&\\\\\n\\end{flalign*}\nthat will transform the right side (point) we are starting with, $\\begin{bmatrix} 2 \\\\ 1\\end{bmatrix}$, to our solution of $\\begin{bmatrix} -1 \\\\ 3 \\end{bmatrix}$\n\n\\newpage\n\nAnother way to look at this is, given $\\hat{\\imath}'$ and $\\hat{\\jmath}'$, how can we get back to our original vectors $\\hat{\\imath}$ and $\\hat{\\jmath}$ \n\n\\emph{**while using the rules for $\\hat{\\imath}'$ and $\\hat{\\jmath}'$**}? 
\\newline\\\\\n\n\\begin{minipage}{0.45\\textwidth}\n\\begin{tikzpicture}[scale=0.65]\n\\draw[gray!50] (-3,-3) grid (3,3);\n\\draw[<->, thick] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2) node [above right] {$\\hat{\\imath}'$};\n\\draw[->, very thick, blue] (0,0) -- (1,1) node [below right] {$\\hat{\\jmath}'$};\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-1.25in}\nTO $\\rightarrow$ \\qquad\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}[scale=0.65]\n\\draw[gray!50] (-3,-3) grid (3,3);\n\\draw[<->, thick] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\\draw[->, ultra thick, red] (0,0) -- (1,0) node [below right] {$\\hat{\\imath}$};\n\\draw[->, ultra thick, blue] (0,0) -- (0,1) node [above right] {$\\hat{\\jmath}$};\n\\end{tikzpicture}\n\\end{minipage}\n\n\\vfill\n\nMultiplying $\\hat{\\imath}'$ by the vector $\\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix}$ gives us the following picture:\n\\begin{center}\n\\begin{tikzpicture}[scale=0.65]\n\\draw[gray!50] (-3,-3) grid (3,3);\n\\draw[<->, thick] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2);\n\\draw[->, very thick, blue] (0,0) -- (1,1);\n\\draw[->, very thick, red, dashed] (0,0) -- (-1,-2) node [above left] {$-1\\hat{\\imath}'$};\n\\draw[->, very thick, blue, dashed] (-1,-2) -- (0,-1);\n\\draw[->, very thick, blue, dashed] (0,-1) -- (1,0) node [below right] {$2\\hat{\\jmath}'$};\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\nMultiplying $\\hat{\\jmath}'$ by the vector $\\begin{bmatrix} 1 \\\\ -1 \\end{bmatrix}$ gives us the following picture:\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.65]\n\\draw[gray!50] (-3,-3) grid (3,3);\n\\draw[<->, thick] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\\draw[<->, thick] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\\draw[->, very thick, red] (0,0) -- (1,2);\n\\draw[->, very thick, blue] (0,0) -- (1,1);\n\\draw[->, very thick, blue, dashed] (1,2) -- (0,1) node [above left] {$-1\\hat{\\jmath}'$};\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\n\\newpage\n\nPutting these ideas together gives us the \\textbf{inverse matrix} \n\\[\n\\begin{bmatrix} -1 & 1 \\\\ 2 & -1 \\end{bmatrix}\n\\]\n\nof $\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix}$ which we can then use to solve our system of equations:  \\vspace{1in}\n\n\\begin{flalign*}\n    \\begin{bmatrix} -1 & 1 \\\\ 2 & -1 \\end{bmatrix}\n    \\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix}\n    \\begin{bmatrix} x \\\\ y \\end{bmatrix} &=\n    \\begin{bmatrix} -1 & 1 \\\\ 2 & -1 \\end{bmatrix}\n    \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix} &&\\\\\n    \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}\n    \\begin{bmatrix} x \\\\ y \\end{bmatrix} &= \n    \\begin{bmatrix} -1 \\\\ 3 \\end{bmatrix}   &&\\\\\n    \\begin{bmatrix} x \\\\ y \\end{bmatrix} &=\n    \\begin{bmatrix} -1 \\\\ 3 \\end{bmatrix} &&\\\\\n\\end{flalign*}\n\nWhich gives us our solution $(-1, 3)$.  \\vfill\n\nWorking with higher dimension matrices may be more difficult to apply the previous visual techniques. \\newline\\\\\n\nThus, we can alternatively work with the \\emph{rows} of the matrix as well to find our solution. 
\\vfill\n\nTo do this, we have 3 \\textit{elementary row operations}:\n\\begin{itemize}\n    \\item We can interchange 2 rows\n    \\item We can multiply a row by a nonzero number\n    \\item We can add (or subtract) two rows\n\\end{itemize}\n\n\\vfill\n\n\\newpage\n\n{\\color{red}\\textbf{Example 4.}} Using elementary row operations, transform the matrix $\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix}$ to $\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$   \\vfill\n\n{\\color{red}\\textbf{Example 5.}} Apply the same elementary row operations from the previous example to the matrix $\\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}$   \\vfill\n\n\\newpage\n\n{\\color{red}\\textbf{Example 6.}} Using elementary row operations, transform the matrix $\\begin{bmatrix} 1 & 1 \\\\ 2 & 1 \\end{bmatrix}$ to $\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$  \n\n\\vfill\n\n{\\color{red}\\textbf{Example 7.}} Apply the same elementary row operations from the previous example to the matrix $\\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}$   \n\n\\vfill\n\n\\newpage\n\n{\\color{red}\\textbf{Example 8.}} Using elementary row operations, transform the matrix $\\begin{bmatrix} 2 & -1 \\\\ 4 & -2 \\end{bmatrix}$ to $\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$ \n\\vfill\n\n{\\color{red}\\textbf{Example 9.}} Apply the same elementary row operations from the previous example to the matrix $\\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}$   \n\n\\vfill\n\n\n% While graphing methods may be easier for $2 \\times 2$ systems of equations, graphing methods get more difficult for larger systems such as $3 \\times 3$, $4 \\times 4$, etc. We will use elementary row operations and matrices for those.\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "9a9c17134305f28adb462a38c9399f9d210f9a4a", "size": 14654, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "System_of_Equations_2_by_2.tex", "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "System_of_Equations_2_by_2.tex", "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "System_of_Equations_2_by_2.tex", "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "avg_line_length": 35.4818401937, "max_line_length": 246, "alphanum_fraction": 0.6126654838, "num_tokens": 5924, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019594, "lm_q2_score": 0.7956581073313275, "lm_q1q2_score": 0.58892574111572}}
{"text": "\\documentclass[a4paper]{tufte-handout}\n\n\\usepackage{algorithm,algorithmic}\n\\usepackage{graphicx}\n\\usepackage{framed}\n\\usepackage{amsmath}\n\\usepackage[T1]{fontenc}\n\\usepackage{todonotes}\n\\hypersetup{\n    colorlinks=true,\n    linkcolor=blue,\n    filecolor=magenta,      \n    urlcolor=blue,\n}\n\n\\title{Deep Learning Crash Course}\n\\author{Sasank Chilamkurthy, Qure.ai}\n\\date{May 10, 2017}\n\n\\begin{document}\n\\maketitle\n\\tableofcontents\n\n\\section{Calculus: Recap}\\label{calculus-recap}\n\nLet's begin with a short recap of calculus.\n\n\\subsection{Derivative}\\label{derivative}\n\nThe derivative of a function \\(f(v)\\) (\\(f'(v)\\) or \\(\\frac{df}{dv}\\))\nmeasures the sensitivity of change in \\(f(v)\\) with respect to change in\n\\(v\\).\n\n\\begin{marginfigure}\n  \\includegraphics[width=\\linewidth]{differential.png}\n  \\caption{Derivative illustration. Red is for positive \\(v\\)\ndirection and green is for negative \\(v\\) direction.\n\\href{https://en.wikipedia.org/wiki/Derivative}{Source}.}\n\\end{marginfigure}\n\nThe direction (i.e., sign) of the derivative at a point gives the direction\nof (greatest) increase of the function at that point.\n\n\\subsection{Gradient}\\label{gradient}\n\nThe gradient is a multi-variable generalization of the derivative. It is\na vector-valued function. For a function \\(f(v_1, v_2, \\ldots, v_n)\\),\nthe gradient is a vector whose components are the \\(n\\) partial derivatives of\n\\(f\\):\n\n\\[ \\nabla f = (\\frac{\\partial f}{\\partial v_1 }, \\frac{\\partial f}{\\partial v_2 }, \\ldots, \\frac{\\partial f}{\\partial v_n })\\]\n\n\n\\begin{marginfigure}\n  \\includegraphics[width=\\linewidth]{gradient.png}\n  \\caption{Gradient of the 2D function \\(f(x, y) = xe^{-(x^2 + y^2)}\\) is\nplotted as blue arrows over the pseudocolor (red is for high values\nwhile blue is for low values) plot of the function.\n\\href{https://en.wikipedia.org/wiki/Gradient}{Source}.}\n\\end{marginfigure}\n\n\nSimilar to the derivative, the direction of the gradient at a point is the\ndirection of steepest ascent of the function starting from that point.\n\n\\section{Optimization}\\label{optimization}\n\nGiven a function \\(C(v_1, v_2, \\ldots, v_n)\\), how do we optimize it?\nI.e., how do we find a point which minimizes this function globally?\n\nThis is a very generic problem; a lot of practical and theoretical\nproblems can be posed in this form. So, there is no general answer.\n\n\\subsection{Gradient Descent}\\label{gradient-descent}\n\nHowever, for a class of functions called convex functions, there is a\nsimple algorithm which is guaranteed to converge to the global minimum. Convex\nfunctions have only one minimum. They look something like a valley.\n\nTo motivate our algorithm, imagine a ball\nis put at a random point on our valley and allowed to roll (figure \\ref{fig:valley}). Our common\nsense tells us that the ball will eventually roll to the bottom of the valley.\n\n\\begin{figure}[htb!]\n  \\includegraphics[width=60mm]{valley_with_ball}\n  \\caption{Gradient descent illustration: a ball on a valley.\n\\href{http://neuralnetworksanddeeplearning.com/chap1.html}{Source}.}\n\\label{fig:valley}\n\\end{figure}\n\n\nLet's roughly simulate the motion of the ball! The key observation is that\nthe ball moves in the direction of steepest descent. This is the negative\n\\sidenote{The gradient gives us the steepest direction of \\emph{ascent}.}\nof the gradient's direction.\n\nGreat! 
Let's put together our algorithm:\n\n\n\\begin{algorithm}\n\\caption{Gradient Descent}\n\\begin{algorithmic}[1]\n  \\STATE Start at a random point: \\(v\\)\n  \\WHILE{\\(v\\) hasn't converged}\n  \\STATE Update \\(v\\) in the direction of steepest descent: \n      \\[v \\rightarrow v' = v -\\eta \\nabla C\\]\n  \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{marginfigure}\n  \\includegraphics[width=\\linewidth]{Gradient_descent}\n  \\caption{Gradient descent on a series of level sets.\n  \\href{https://en.wikipedia.org/wiki/Gradient_descent}{Source}.}\n\\end{marginfigure}\n\nHere \\(\\eta\\) is called the \\emph{learning rate}. If it is too small,\nthe algorithm can be very slow and might take too many iterations to\nconverge. If it is too large, the algorithm might not even converge.\n\n\\subsection{Example: Regression}\\label{example-regression}\n\nLet's apply the ideas above to the linear regression problem. Here's\na recap of linear regression:\n\n\\begin{quote}\nOur model is \\(y(x) = wx + b\\). Parameters are \\(v = (w, b)\\). We are\ngiven training pairs \\((x^1, y^1), (x^2, y^2), \\ldots, (x^n, y^n)\\)\n\\sidenote{Notation clarification: Here, superscripts are indices, not powers}.\n\nWe want to find \\(w\\) and \\(b\\) which minimize the following cost/loss\nfunction:\n\n\\[ C(w, b) = \\frac{1}{n} \\sum_{i = 1}^{n} C_i(w, b) = \\frac{1}{2n} \\sum_{i = 1}^{n} \\| y(x^i) - y^i\\|^2 \\]\n\nwhere\n\\(C_i(w, b) = \\frac{1}{2} \\| y(x^i) - y^i\\|^2 = \\frac{1}{2} \\| wx^i + b - y^i\\|^2\\)\nis the loss of the model for the \\(i\\)th training pair.\n\\end{quote}\n\n\nLet's calculate the gradients:\n\n\\[ \\nabla C = \\frac{1}{n} \\sum_{i = 1}^{n} \\nabla C_i \\]\n\nwhere \\(\\nabla C_i\\) is computed using:\n\n\\[ \\nabla C_i = \\left( \\frac{\\partial C_i}{\\partial w}, \\frac{\\partial C_i}{\\partial b}  \\right) = \\left( (wx^i + b - y^i)x^i, (wx^i + b - y^i) \\right)\\]\n\nThe update rule is\n\n\\[ v \\rightarrow v' = v -\\eta \\nabla C = v -\\frac{\\eta}{n} \\sum_{i = 1}^{n} \\nabla C_i \\]\n\n\n\\subsection{Stochastic Gradient Descent}\\label{stochastic-gradient-descent}\n\nIn the above example, what if we have a very large number of samples, i.e.,\n\\(n \\gg 0\\)? At every optimization step, we have to compute\n\\(\\nabla C_i\\) for each sample \\(i = 1, 2, \\ldots, n\\). This can be very\ntime consuming!\n\nCan we approximate \\(\\nabla C\\) with very few samples? Yes!\n\n\\[ \\nabla C = \\frac{1}{n} \\sum_{i = 1}^{n} \\nabla C_i \\approx \\frac{1}{m} \\sum_{j \\in S_m} \\nabla C_j \\]\n\nwhere \\(S_m\\) is a random subset of size \\(m \\ll n\\) of\n\\(\\{1, 2, \\ldots,n\\}\\). It turns out that this approximation, even though\nestimated only from a small random subset of samples, is good enough for\nconvergence of gradient descent. This subset of data is called a\n\\emph{minibatch} and this technique is called \\emph{stochastic gradient\ndescent}.\n\nStochastic gradient descent thus works by picking a randomly chosen\nsubset of the data and training (i.e., updating \\(v\\)) with the gradient\napproximation computed from it. Next, another subset is picked and\ntrained on. And so on, until we've exhausted all the training\ndata, which is said to complete an \\emph{epoch} of training. Concretely,\nthe following is the stochastic gradient descent algorithm.\n\n\\begin{algorithm}\n\\caption{Stochastic Gradient Descent}\n\\begin{algorithmic}[1]\n  \\STATE Start at a random point: \\(v\\)\n  \\FOR{a fixed number of epochs}\n      \\STATE Randomly partition the data into minibatches each of size $m$ \n      \\FOR{each minibatch $S_m$}\n        \\STATE Update the parameters using \\[v \\rightarrow v' = v -\\frac{\\eta}{m} \\sum_{j \\in S_m} \\nabla C_j\\]\n      \\ENDFOR\n  \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}
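\n\nTo make this concrete, here is a minimal code sketch of minibatch SGD for the linear regression example above (an added illustration in Python with NumPy; the synthetic data and hyperparameters are arbitrary choices):\n\n\\begin{verbatim}\nimport numpy as np\n\n# synthetic data: y = 3x + 1 plus noise\nrng = np.random.default_rng(0)\nx = rng.uniform(-1, 1, size=200)\ny = 3 * x + 1 + 0.1 * rng.normal(size=200)\n\nw, b = 0.0, 0.0     # parameters v = (w, b)\neta, m = 0.1, 20    # learning rate and minibatch size\n\nfor epoch in range(100):\n    perm = rng.permutation(len(x))        # random partition\n    for start in range(0, len(x), m):\n        idx = perm[start:start + m]       # one minibatch S_m\n        err = w * x[idx] + b - y[idx]     # w x^i + b - y^i\n        w -= eta * np.mean(err * x[idx])  # gradient w.r.t. w\n        b -= eta * np.mean(err)           # gradient w.r.t. b\n\nprint(w, b)  # close to (3, 1)\n\\end{verbatim}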
\n\n\\section{Neural Networks}\\label{neural-networks}\n\nWith this background, we are ready to start with neural networks.\n\n\\subsection{Perceptron}\\label{perceptron}\n\nThe perceptron, a type of artificial neuron, was developed in the 1950s and\n1960s. Today, it's more common to use Rectified Linear Units (ReLUs).\nNevertheless, it's worth taking time to understand the older models.\n\nSo how do perceptrons work? A perceptron takes several inputs,\n\\(x_1, x_2, \\ldots x_n\\) and produces a single output (figure \\ref{fig:percp}).\n\n\n\\begin{figure}\n  \\includegraphics[width=0.5\\linewidth]{tikz0}\n  \\caption{ Perceptron Model.\n\\href{http://neuralnetworksanddeeplearning.com/chap1.html}{Source}. }\n\\label{fig:percp}\n\\end{figure}\n\n\n\\emph{Weights} \\(w_1, w_2, \\ldots w_n\\) decide the importance of each of\nthe inputs on the output \\(y(x)\\). There is also a \\emph{threshold} \\(b\\) to\ndecide the output. These are the parameters of the model.\nThe expression of the output is\n\n\\[ y(x) = \\sigma\\left(\\sum_j w_j x_j - b\\right) \\]\n\nwhere \\(\\sigma(z)\\) is the step function, \n\n\\begin{eqnarray*}\n  \\sigma(z) & = & \\left\\{ \\begin{array}{ll}\n      0 & \\mbox{if } z \\leq 0 \\\\\n      1 & \\mbox{if } z > 0\n      \\end{array} \\right.\n\\end{eqnarray*}\n\nTherefore \n\\begin{eqnarray*}\n    y(x) & = & \\left\\{ \\begin{array}{ll}\n      0 & \\mbox{if } \\sum_j w_j x_j \\leq b \\\\\n      1 & \\mbox{if } \\sum_j w_j x_j > b\n      \\end{array} \\right.\n\\end{eqnarray*}\n\nThat's the basic mathematical model. A way you can think about the\nperceptron is that it's a device that makes decisions by weighing up\nevidence. An example\n\\sidenote{This example is straight from\n\\href{http://neuralnetworksanddeeplearning.com/chap1.html}{here}}:\n\n\\begin{quote}\nSuppose the weekend is coming up, and you've heard that there's going to be a cheese festival in your city. You like cheese, and are trying to decide whether or not to go to the festival. You might make your decision by weighing up three factors:\n\n\\begin{itemize}\n\\item Is the weather good?\n\\item Does your boyfriend or girlfriend want to accompany you?\n\\item Is the festival near public transit? (You don't own a car).\n\\end{itemize}\nWe can represent these three factors by corresponding binary variables \\(x_1\\), \\(x_2\\) and \\(x_3\\).\n \nNow, suppose you absolutely adore cheese, so much so that you're happy to go to the festival even if your boyfriend or girlfriend is uninterested and the festival is hard to get to. But perhaps you really loathe bad weather, and there's no way you'd go to the festival if the weather is bad.\n \nYou can use perceptrons to model this kind of decision-making. One way to do this is to choose a weight \\(w_1=6\\) for the weather, and \\(w_2=2\\) and \\(w_3=2\\) for the other conditions, and a threshold (or more aptly, \\emph{bias}) term \\(b = 5\\). By varying the weights and the threshold, we can get different models of decision-making.\n\\end{quote}
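\n\nAs a tiny runnable illustration (an added sketch in plain Python, using the weights chosen in the example above):\n\n\\begin{verbatim}\ndef perceptron(x, w, b):\n    # fire iff the weighted evidence exceeds the threshold\n    return 1 if sum(wj * xj for wj, xj in zip(w, x)) > b else 0\n\nw, b = [6, 2, 2], 5\nprint(perceptron([1, 0, 0], w, b))  # good weather alone: 6 > 5, go\nprint(perceptron([0, 1, 1], w, b))  # bad weather: 2 + 2 <= 5, stay\n\\end{verbatim}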
\n\n\nAnother way perceptrons can be used is to compute the elementary logical\nfunctions we usually think of as underlying computation, functions such\nas \\texttt{AND}, \\texttt{OR}, and \\texttt{NAND}. Check that the\nperceptron in figure~\\ref{fig:nand} implements \\texttt{NAND}:\n\n\\begin{figure}\n  \\includegraphics[width=0.5\\linewidth]{tikz2}\n  \\caption{\\texttt{NAND} implemented by perceptron.\n\\href{http://neuralnetworksanddeeplearning.com/chap1.html}{Source} }\n\\label{fig:nand}\n\\end{figure}\n\nIf you are familiar with digital logic, you will know that the \\texttt{NAND}\ngate is a universal gate. That is, any logical computation can be\ncomputed using just \\texttt{NAND} gates. Then, the same property follows\nfor perceptrons.\n\n\\subsection{Multi Layer Perceptrons and Sigmoid}\n\\label{multi-layer-perceptrons-and-sigmoid}\n\nAlthough the perceptron isn't a complete model of human decision-making, the\nabove example illustrates how a perceptron can weigh up different kinds\nof evidence in order to make decisions. And it should seem plausible\nthat a complex network of perceptrons could make quite subtle decisions (figure \\ref{fig:mlp}).\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{tikz1}\n  \\caption{Multi layer perceptron.\n\\href{http://neuralnetworksanddeeplearning.com/chap1.html}{Source} }\n\\label{fig:mlp}\n\\end{figure}\n\nThis network is called a \\emph{multilayer perceptron (MLP)}. In this\nMLP, the first column of perceptrons - what we'll call the first\n\\emph{layer} of perceptrons - is making three very simple decisions, by\nweighing the input evidence. What about the perceptrons in the second\nlayer? Each of those perceptrons is making a decision by weighing up the\nresults from the first layer of decision-making. In this way a\nperceptron in the second layer can make a decision at a more complex and\nmore abstract level than perceptrons in the first layer. And even more\ncomplex decisions can be made by the perceptron in the third layer. In\nthis way, a many-layer network of perceptrons can engage in\nsophisticated decision making.\n\nHow do we learn the parameters of the above model? Gradient Descent!\n\\sidenote{ The MLP model is far from convex.\nTherefore, gradient descent is not guaranteed to converge! But it turns\nout that it works fine with a few tweaks which we describe below.}\nHowever, the network is very discontinuous. In fact, a small change in\nthe weights or bias of any single perceptron in the network can\nsometimes cause the output of that perceptron to completely flip, say\nfrom 0 to 1. This makes it very difficult for gradient descent to\nconverge.\n\nHow do we overcome this? What is the source of this discontinuity?\nRemember that the output of a perceptron is given by\n\\(y(x) = \\sigma\\left(\\sum_j w_j x_j - b\\right)\\) where \\(\\sigma(z)\\) is the\nstep function\n\n\\begin{eqnarray*}\n  \\sigma(z) & = & \\left\\{ \\begin{array}{ll}\n      0 & \\mbox{if } z \\leq 0 \\\\\n      1 & \\mbox{if } z > 0\n      \\end{array} \\right.\n\\end{eqnarray*}\n\nThis \\(\\sigma(z)\\) is the source of discontinuity. Can we replace the\nstep function with a smoother version of it?\n\nCheck out the following function:\n\n\\begin{eqnarray*} \n  \\sigma(z) = \\frac{1}{1+e^{-z}}.\n\\end{eqnarray*}\n\nIf you graph it, it's quite clear that this function is a smoothed-out\nversion of the step function. 
This function is called \\emph{sigmoid}.\n\n\\begin{marginfigure}\n  \\includegraphics[width=\\linewidth]{sigmoid}\n  \\caption{Sigmoid function. When \\(z\\) is large and\n  positive, Then \\(e^{-z} \\approx 0\\) and so \\(\\sigma(z) \\approx 1\\).\n  Suppose on the other hand that \\(z\\) is very negative. Then\n  \\(e^{-z} \\rightarrow \\infty\\), and \\(\\sigma(z) \\approx 0\\).\n  \\href{http://neuralnetworksanddeeplearning.com/chap1.html}{Source}.}\n\\end{marginfigure}\n\nWith the \\emph{sigmoid neuron}, gradient descent can converges.\nBefore computing gradients for gradient descent, we need to discuss loss\nand activation functions.\n\n\\subsection{ReLU Activation}\n\nBy now, you have seen that general form of a artificial neuron is\n\\(y(x) = \\sigma\\left(\\sum_j w_j x_j - b\\right)\\).\nHere the function \\(\\sigma(z)\\) is called \\emph{activation\nfunction}. So far, we have seen two different activations:\n\n\\begin{enumerate}\n\\item\n  Step function\n\\item\n  Sigmoid function\n\\end{enumerate}\n\n\nLet me introduce another activation function, \\emph{rectifier} or\n\\emph{rectified linear unit} (ReLU):\n\n\\begin{marginfigure}\n\\includegraphics[width=\\linewidth]{relu}\n\\caption{ ReLU.\n\\href{http://neuralnetworksanddeeplearning.com/chap3.html}{Source}. }\n\\end{marginfigure}\n\n\\[\\sigma(z) = \\max(0, z)\\]\n\nBecause of reasons we describe later\n\\sidenote{Vanishing gradients problem. Read\nmore \\href{http://neuralnetworksanddeeplearning.com/chap5.html}{here}},\nReLUs are preferred activation functions these days. You will almost\nnever see sigmoid activation function in modern deep neural networks.\n\n\\subsection{Loss functions}\n\nTo train any machine learning model, we need to measure how well \nthe model fits to training set. Such a function is called loss/cost\n\\sidenote{I use terms loss function and\ncost function interchangeably.} function.\nIn regression problem we discussed before, cost function was \n\\(C(w, b) = \\frac{1}{2n} \\sum_{i = 1}^{n} \\| y(x^i) - y^i\\|^2\\). \nMinimizing the cost function trains the model.\n\nWe will make two assumptions about our cost function:\n\n\\begin{enumerate}\n\\item\n  The cost function can be written as an average over cost functions\n  \\(C_i\\) for individual training examples, \\((x^i, y^i)\\). i.e,\n  \\(C = \\frac{1}{n} \\sum_{i = 1}^{n} C_i\\)\n\\item\n  Cost can be written as a function of the outputs from the neural\n  network. i.e, \\(C_i = L(y(x^i), y^i)\\) where \\(y(x)\\) is the output\n  from the network.\n\\end{enumerate}\n\nIn the case of regression, we used \\(L_2\\) loss,\n\\(L_2(o, y) = \\frac{1}{2} \\| o - y\\|^2\\). We could have also used \\(L_1\\)\nloss, \\(L_1(o, y) = | o - y |\\).\n\n\\subsection{Cross Entropy Loss}\n\nWhat if we have a classification problem? What loss do we use? Consider\na MLP for digit classification on the right (figure \\ref{fig:mlp-digit}).\n\n\\begin{marginfigure}[20mm]\n  \\includegraphics[width=65mm]{tikz12}\n  \\caption{ MLP for digit classification.\n  \\href{http://neuralnetworksanddeeplearning.com/chap1.html\\%22}{Source}.\n  }\n\\label{fig:mlp-digit}\n\\end{marginfigure}\n\nHere, we have output \\(o\\) of size 10 and a target class \\(y\\). We could\nhave used \\(L_1(o, e_y)\\) or \\(L_2(o, e_y)\\) loss where \\(e_y\\) is\n\\(y\\)th unit vector. 
But it turns out that this doesn't work very well.\n\nInstead, we will consider the outputs as a probability distribution over the\n10 classes and use what is called a cross entropy loss:\n\n\\[ L(o, y) = - \\log(o_y) \\]\n\nTo understand this function, realize that \\(o_y\\), the \\(y\\)th output, is the predicted\nprobability of the target class \\(y\\). Minimizing the negative log of\nthis probability maximizes the probability. Also, \\(L \\geq 0\\) because\n\\(\\log(x) \\leq 0\\) for \\(x \\in [0, 1]\\).\n\n\\subsection{Softmax Activation}\n\nIf we use cross entropy loss, we cannot use sigmoids as activations of\nthe output layer because sigmoids do not guarantee a probability\ndistribution (although each component of the output will be in \\([0, 1]\\),\nthey need not add up to 1).\n\nWe therefore use an activation layer called \\emph{softmax}. According to\nthis function, the activation \\(a_j\\) of the \\(j\\)th output neuron is\n\n\\[ a_j = \\frac{e^{z_j}}{\\sum_k e^{z_k}} \\]\n\nwhere in the denominator we sum over all the output neurons.\nThis expression may seem opaque if you are not familiar with it. Observe\nthe following:\n\n\\begin{enumerate}\n\\item\n  Activations are positive: \\(a_j \\geq 0\\)\n\\item\n  Activations sum to 1: \\(\\sum_j a_j = 1\\)\n\\item\n  If you increase \\(z_m\\) keeping the others constant, \\(a_m\\) increases.\n  The other activations decrease to ensure the sum remains 1.\n\\item\n  If \\(z_m\\) is much larger than the others, \\(a_m \\approx 1\\) and\n  \\(a_k \\approx 0\\) for \\(k \\neq m\\).\n\\end{enumerate}\n\nTherefore softmax is a probability distribution which behaves like a\nsmooth version of \\texttt{argmax}.\n\n\\subsection{Backpropagation}\\label{backpropogation}\n\nWe have so far discussed the model component of neural networks. We\nhaven't yet discussed how we learn the parameters of the network,\ni.e., the weights and biases of the layers.\n\nAs expected, we will use stochastic gradient descent. For this, we need\nthe gradients of \\(C_i\\), the loss for the \\(i\\)th training example,\nwith respect to all the parameters of the network. The computation\nof this quantity, \\(\\nabla C_i\\), is slightly involved. Let's start with\nwriting the expression for \\(C_i\\). Let's represent all the parameters\nof the network with \\(\\theta\\):\n\n\\[ C_i(\\theta) = L\\left(y(x^i, \\theta), y^i \\right)\\]\n\nLet's break the above function into a composition of layers (or functions\nin general)\\sidenote{Dropping the subscript for brevity}\n\n\\[ C = f_L \\circ \\ f_{L-1} \\circ \\cdots \\circ f_l \\circ \\cdots \\circ f_1 \\]\n\n\\begin{figure*}\n  \\includegraphics[width=\\linewidth]{backprop.png}\n  \\caption{Cost function as a composition of functions}\n\\end{figure*}\n\n\nHere, the \\(l\\)th \\sidenote[][5mm]{ In particular,\n\\(C = o_L = f_L(u_L) = L(u_L, y^i)\\) } layer/function takes input\n\\(u_l\\) and outputs \\(o_l\\)\n\n\\begin{equation}\no_l = f_l(u_l, \\theta_l) \\label{eq:1}\n\\end{equation}\n\n\nwhere \\(\\theta_l\\) are the learnable parameters of this layer. 
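\n\nFor instance (an illustrative concretization, not spelled out in the original notes), a fully connected layer with weight matrix \\(W_l\\) and bias \\(b_l\\) has\n\n\\[ f_l(u_l, \\theta_l) = \\sigma(W_l u_l + b_l), \\qquad \\theta_l = (W_l, b_l), \\]\n\nso the derivatives \\(\\frac{\\partial f_l}{\\partial \\theta_l}\\) and \\(\\frac{\\partial f_l}{\\partial u_l}\\) appearing below are concrete, computable quantities.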
Since output\nof \\(l-1\\)th layer is fed to \\(l\\) th layer as input,\n\n\n\\begin{equation}\nu_l = o_{l-1} \\label{eq:2}\n\\end{equation}\n\n\nWe require\n\\(\\nabla C = \\left(\\frac{\\partial C}{\\partial\\theta_1}, \\frac{\\partial C}{\\partial\\theta_2}, \\ldots, \\frac{\\partial C}{\\partial\\theta_L}\\right)\\).\nTherefore, we need to compute, for  \\( l = 1, 2, \\dots L \\)\n\n\\[ \\frac{\\partial C}{\\partial\\theta_l} = \\frac{\\partial o_L}{\\partial\\theta_l} \\]\n\n\nTo compute this quantity, we will compute generic\n\\sidenote{ You will see ahead why this quantity is useful. }:\n\n\\[ \\frac{\\partial o_m}{\\partial \\theta_l} \\]\n\nBefore getting started, let's write down the chain rule. Chain rule is\nthe underlying operation of our algorithm.\n\n\\begin{framed}\n\\textbf{Chain Rule}: For function f(x, y), \n\\begin{equation}\n\\frac{\\partial f}{\\partial t} = \\frac{\\partial f}{\\partial x} * \\frac{\\partial x}{\\partial t} + \\frac{\\partial f}{\\partial y} * \\frac{\\partial y}{\\partial t}\n\\label{eq:3}\n\\end{equation}\n\n\\end{framed}\n\nIf \\(m > l\\),\n\n\\begin{equation}\n\\frac{\\partial o_m}{\\partial \\theta_l} = 0 \\label{eq:4}\n\\end{equation}\n\n\nbecause output of the earlier layers doesn't depend on parameters of the later\nlayer.\n\nIf \\(m = l\\), using equation \\eqref{eq:1} and the fact that \\(u_l\\) and\n\\(\\theta_l\\) are independent,\n\n\\begin{equation}\n\\frac{\\partial o_l}{\\partial \\theta_l} = \\frac{\\partial f_l}{\\partial \\theta_l}\n\\label{eq:5}\n\\end{equation}\n\n\\(\\frac{\\partial f_l}{\\partial \\theta_l}\\) is a computable quantity\nwhich depends on the form of layer \\(f_l\\).\n\nIf \\(m < l\\), using equation \\eqref{eq:1}, \\eqref{eq:2}, chain rule \\eqref{eq:3},\n\n\\begin{align*}\n\\frac{\\partial o_m}{\\partial \\theta_l} &= \\frac{\\partial f_m}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_m}{\\partial u_m} * \\frac{\\partial u_m}{\\partial \\theta_l} + \\frac{\\partial f_m}{\\partial \\theta_m} * \\frac{\\partial \\theta_m}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_m}{\\partial u_m} * \\frac{\\partial u_m}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_m}{\\partial u_m} * \\frac{\\partial o_{m-1}}{\\partial \\theta_l}\n\\end{align*}\n\n\nTherefore,\n\n\\begin{equation}\n\\frac{\\partial o_m}{\\partial \\theta_l} = \\frac{\\partial f_m}{\\partial u_m} * \\frac{\\partial o_{m-1}}{\\partial \\theta_l}\n\\label{eq:6}\n\\end{equation}\n\nLike \\(\\frac{\\partial f_l}{\\partial \\theta_l}\\),\n\\(\\frac{\\partial f_l}{\\partial u_l}\\) is a computable quantity which\ndepends on layer \\(f_l\\).\n\n\\noindent Let's put everything together and compute the required quantity\n\\sidenote{Apply \\eqref{eq:6} repetitively}:\n\n\\begin{align*}\n\\frac{\\partial C}{\\partial \\theta_l} &= \\frac{\\partial o_L}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial o_{L-1}}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial f_{L-1}}{\\partial u_{L-1}} * \\frac{\\partial o_{L-2}}{\\partial \\theta_l} \\\\\n&\\vdots \\\\\n&= \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial f_{L-1}}{\\partial u_{L-1}} * \\cdots * \\frac{\\partial f_{l-1}}{\\partial u_{l-1}} * \\frac{\\partial o_l}{\\partial \\theta_l}\\\\\n&= \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial f_{L-1}}{\\partial u_{L-1}} * \\cdots * \\frac{\\partial f_{l-1}}{\\partial u_{l-1}} * \\frac{\\partial f_l}{\\partial \\theta_l}\n\\end{align*}\n\nNow, algorithm to compute gradients \\(\\nabla 
C\\), i.e.\n\\(\\frac{\\partial C}{\\partial \\theta_l}\\) for all \\(l\\) is fairly clear:\n\n\\begin{algorithm}[H]\n\\caption{Back Propogation}\n\\begin{algorithmic}[1]\n  \\STATE \\{Forward pass\\}\n  \\STATE Set \\(u_0 = x\\)\n  \\FOR {\\(l = 1, \\dots L\\)}\n  \t\\STATE Store \\(u_l = f_l(u_{l-1}, \\theta_l)\\)\n  \\ENDFOR\n  \\STATE \\{Backward pass\\}\n  \\STATE {Set \\(\\texttt{buffer} = 1\\)}\n  \\FOR {\\(l = L, L-1, \\dots 1\\)}\n  \t\\STATE Store \\(\\frac{\\partial C}{\\partial \\theta_l} = \\frac{\\partial f_l}{\\partial \\theta_l} * \\texttt{buffer}\\)\n  \t\\STATE Update \\(\\texttt{buffer} = \\frac{\\partial f_l}{\\partial u_l} * \\texttt{buffer}\\)\n  \\ENDFOR \n  \\RETURN \\(\\left(\\frac{\\partial C}{\\partial\\theta_1}, \\frac{\\partial C}{\\partial\\theta_2}, \\ldots, \\frac{\\partial C}{\\partial\\theta_L}\\right)\\).\n\\end{algorithmic}\n\\end{algorithm}\n\n\nAlthough this derivation is for scalar functions, it will work with\nvector functions with a few modifications.\n\n\\subsection{Deep Networks and why they are hard to train}\\label{deep-neural-networks}\n\nWhenever you are asked to do any complex task, you usually break it down\nto sub tasks and solve the component subtasks. For instance, suppose\nyou're designing a logical circuit to multiply two numbers. Chances are\nyour circuit will look something like figure \\ref{fig:circuitmult}.\n\n\\begin{figure}\n\\includegraphics[width=0.7\\linewidth]{circuit_multiplication}\n\\caption{Logical circuit for multiplication.\n\\href{http://neuralnetworksanddeeplearning.com/chap5.html\\%22}{Source}.\n}\n\\label{fig:circuitmult}\n\\end{figure}\n\nSimilarly deep neural networks (i.e lot of layers) can build up multiple\nlayers of abstraction. Consider the network in figure \\ref{fig:dnn}.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{tikz36}\n\\caption{Deep Neural Network.\n\\href{http://neuralnetworksanddeeplearning.com/chap5.html\\%22}{Source}.}\n\\label{fig:dnn}\n\\end{figure}\n\n\nIf we're doing visual pattern recognition, then the neurons in the first\nlayer might learn to recognize edges, the neurons in the second layer\ncould learn to recognize more complex shapes, say triangle or\nrectangles, built up from edges. The third layer would then recognize\nstill more complex shapes. And so on. These multiple layers of\nabstraction seem likely to give deep networks a compelling advantage in\nlearning to solve complex pattern recognition problems.\n\nHow do we train such deep networks? Stochastic gradient descent as\nusual. But we'll run into trouble, with our deep networks not performing\nmuch (if at all) better than shallow networks.\nLet's try to understand why are deep networks hard to train:\n\n\\begin{enumerate}\n\\item\n  Consider the number of parameters in the network. They are huge! If we\n  have to connect 1000 unit hidden layer to 224x224 (50,176) image, we\n  have \\(50,176*1000 + 1000 \\approx 50e6\\) parameters in that layer alone!\n  There are so many parameters that network can easily overfit on the\n  data without generalization.\n\\item\n  Gradients are unstable. 
\\subsection{Deep Networks and why they are hard to train}\\label{deep-neural-networks}\n\nWhenever you are asked to do any complex task, you usually break it down\ninto subtasks and solve the component subtasks. For instance, suppose\nyou're designing a logical circuit to multiply two numbers. Chances are\nyour circuit will look something like figure \\ref{fig:circuitmult}.\n\n\\begin{figure}\n\\includegraphics[width=0.7\\linewidth]{circuit_multiplication}\n\\caption{Logical circuit for multiplication.\n\\href{http://neuralnetworksanddeeplearning.com/chap5.html}{Source}.\n}\n\\label{fig:circuitmult}\n\\end{figure}\n\nSimilarly, deep neural networks (i.e., networks with many layers) can build up multiple\nlayers of abstraction. Consider the network in figure \\ref{fig:dnn}.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{tikz36}\n\\caption{Deep Neural Network.\n\\href{http://neuralnetworksanddeeplearning.com/chap5.html}{Source}.}\n\\label{fig:dnn}\n\\end{figure}\n\n\nIf we're doing visual pattern recognition, then the neurons in the first\nlayer might learn to recognize edges, the neurons in the second layer\ncould learn to recognize more complex shapes, say triangles or\nrectangles, built up from edges. The third layer would then recognize\nstill more complex shapes. And so on. These multiple layers of\nabstraction seem likely to give deep networks a compelling advantage in\nlearning to solve complex pattern recognition problems.\n\nHow do we train such deep networks? Stochastic gradient descent as\nusual. But we'll run into trouble, with our deep networks not performing\nmuch (if at all) better than shallow networks.\nLet's try to understand why deep networks are hard to train:\n\n\\begin{enumerate}\n\\item\n  Consider the number of parameters in the network. They are huge! If we\n  have to connect a 1000-unit hidden layer to a 224x224 (50,176-pixel) image, we\n  have \\(50,176*1000 + 1000 \\approx 5\\times10^7\\) parameters in that layer alone!\n  There are so many parameters that the network can easily overfit on the\n  data without generalizing.\n\\item\n  Gradients are unstable. Recall the expression for the gradients,\n  \\(\\frac{\\partial C}{\\partial \\theta_l} = \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial f_{L-1}}{\\partial u_{L-1}} * \\cdots * \\frac{\\partial f_{l+1}}{\\partial u_{l+1}} * \\frac{\\partial f_l}{\\partial \\theta_l}\\).\n  If a few of the \\(\\frac{\\partial f_m}{\\partial u_m} \\ll 1\\), they will\n  multiply up and make\n  \\(\\frac{\\partial C}{\\partial \\theta_l} \\approx 0\\)\n  \\sidenote{ This is the reason why sigmoids are avoided. For sigmoid,\n  \\(\\frac{\\partial f}{\\partial u} = \\frac{d \\sigma}{d z}|_{z=u}\\) is\n  close to zero if \\(u\\) is either too large or too small. Its maximum\n  is only \\(1/4\\) }. Similarly, if a few of the\n  \\(\\frac{\\partial f_m}{\\partial u_m} \\gg 1\\), they make\n  \\(\\frac{\\partial C}{\\partial \\theta_l} \\to \\infty\\). \n\\end{enumerate}\n\nKeep these two points in mind. We will see several approaches to deep\nlearning that to some extent manage to overcome or route around these\nproblems.\n\n\n\\section{Convolutional Neural\nNetworks}\\label{convolutional-neural-networks}\n\nLet's go back to the problem of handwritten digit recognition. The MLP\nlooks like the one in figure \\ref{fig:mnistmlp}.\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{mnist_net.png}\n  \\caption{MLP for MNIST.\n  \\href{http://neuralnetworksanddeeplearning.com/chap5.html}{Source}.}\n  \\label{fig:mnistmlp}\n\\end{figure}\n\n\nIn particular, we have connected all the pixels in the 28x28 images, i.e., 784\npixels, to each neuron in hidden layer 1.\n\nUpon reflection, it's strange to use networks with fully-connected\nlayers to classify images. The reason is that such a network\narchitecture does not take into account the spatial structure of the\nimages. For instance, it treats input pixels which are far apart and\nclose together on exactly the same footing. Such concepts of spatial\nstructure must instead be inferred from the training data.\n\nBut what if, instead of starting with a network architecture which is\n\\emph{tabula rasa}, we used an architecture which tries to take\nadvantage of the spatial structure? In this section I describe\nconvolutional neural networks. These networks use a special architecture\nwhich is particularly well adapted to classifying images. Using this\narchitecture makes convolutional networks fast to train. This, in turn,\nhelps us train deep, many-layer networks, which are very good at\nclassifying images. Today, deep convolutional networks or some close\nvariant are used in most neural networks for image recognition.\n\nConvolutional neural networks use three basic ideas:\n\n\\begin{enumerate}\n\\item\n  Local receptive fields\n\\item\n  Shared weights\n\\item\n  Pooling.\n\\end{enumerate}\n\n\\subsection{Local receptive fields}\n\nAs usual, we'll connect the input pixels to a layer of hidden\nneurons. But we won't connect every input pixel to every hidden neuron.\nInstead, we only make connections in small, localized regions of the\ninput image.\n\n\\begin{figure}\n\\includegraphics[width=0.8\\linewidth]{conv1}\n\\caption{Local receptive fields of convolution.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}.\n}\n\\end{figure}\n\n\nThat region in the input image is called the \\emph{local receptive\nfield} for the hidden neuron. It's a little window on the input pixels.\nEach connection learns a weight. And the hidden neuron learns an overall\nbias as well. 
You can think of that particular hidden neuron as learning\nto analyze its particular local receptive field.\n\nWe then slide the local receptive field across the entire input image.\nFor each local receptive field, there is a different hidden neuron in\nthe first hidden layer. To illustrate this concretely, let's start with\na local receptive field in the top-left corner (figure \\ref{fig:conv2s}).\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{conv2}\n  \\caption{Convolution.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}.\n}\n\\label{fig:conv2s}\n\\end{figure}\n\n\nThen we slide the local receptive field over by one pixel\n\\sidenote[][-10mm]{ Sometimes a different\n\\emph{stride} length (e.g., 2) is used.} to the right (i.e., by one\nneuron), to connect to a second hidden neuron\n\\sidenote[][-8mm]{ Note that if we have a \\(28\\times 28\\) input\nimage, and \\(5\\times 5\\) local receptive fields, then there will be \\(24\\times 24\\) neurons\nin the hidden layer.} (figure \\ref{fig:conv2sd}).\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{conv3}\n  \\caption{Slide the convolution.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}.\n}\n\\label{fig:conv2sd}\n\\end{figure}\n\n\n\\subsection{Shared weights and biases}\n\nI've said that each hidden neuron has a bias and 5x5 weights connected\nto its local receptive field. What I have not yet mentioned is that we're\ngoing to use the same weights and bias for each of the 24x24 hidden\nneurons.\n\nSharing weights and biases means that all the neurons in the first\nhidden layer detect exactly the same feature, just at different locations\nin the input image. To see why this makes sense, consider the\nconvolution filter in figure \\ref{fig:1}.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=0.7\\linewidth]{conv_example1.png}\n\\caption{A convolutional filter.\n\\href{https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/}{Source}.\n}\n\\label{fig:1}\n\\end{figure}\n\nLet's take an example image and apply convolution on a receptive field (figures \\ref{fig:conv2} and \\ref{fig:conv3}).\n\n\\begin{figure}[!htb]\n   \\includegraphics[width=\\linewidth]{conv_example2.png}\n   \\caption{Convolution on an example image.\n            \\href{https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/}{Source}.\n}\n\t\\label{fig:conv2}\n\\end{figure}\n\n\\begin{figure}[!htb]\n   \\includegraphics[width=\\linewidth]{conv_example3.png}\n   \\caption{Apply convolution.\n    \\href{https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/}{Source}.\n\\label{fig:conv3}\n}\n\\end{figure}\n\nBasically, in the input image, if there is a shape that generally\nresembles the curve that this filter is representing, then all of the\nmultiplications summed together will result in a large value! Now let's\nsee what happens when we move our filter (figure \\ref{fig:conv4}).\n\n\\begin{figure}[!htb]\n\\includegraphics[width=\\linewidth]{conv_example4.png}\n\\caption{Apply convolution at a different receptive field.\n\\href{https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/}{Source}.\n}\n\\label{fig:conv4}\n\\end{figure}\n\nThe output turns out to be zero.\nThis convolution therefore picks up a right-bending curve wherever it\nappears in the image.\n\n
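As a minimal sketch of this sliding computation, here is the same idea in pure Python (the tiny image, the diagonal ``stroke'', and the helper name are our own illustrative choices):\n\n\\begin{verbatim}\ndef valid_convolution(image, kernel):\n    # Slide the kernel over every receptive field (stride 1, no\n    # padding) and sum the element-wise products.\n    n, k = len(image), len(kernel)\n    out_size = n - k + 1\n    out = [[0.0] * out_size for _ in range(out_size)]\n    for i in range(out_size):\n        for j in range(out_size):\n            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]\n                            for a in range(k) for b in range(k))\n    return out\n\nimage = [[0] * 6 for _ in range(6)]          # toy 6x6 image\nimage[1][3] = image[2][2] = image[3][1] = 1  # a short diagonal stroke\nkernel = [[0, 0, 1],                         # matching 3x3 detector\n          [0, 1, 0],\n          [1, 0, 0]]\nprint(valid_convolution(image, kernel))      # peaks where shapes align\n\\end{verbatim}\n\nNote the output size: an \\(n\\times n\\) input with a \\(k\\times k\\) receptive field gives \\((n-k+1)\\times(n-k+1)\\) hidden neurons, matching the \\(24\\times 24\\) count mentioned in the margin earlier.\n\n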
To put it in slightly more abstract terms, convolutional\nnetworks are well adapted to the translation invariance of images: move\na picture of a cat (say) a little ways, and it's still an image of a cat.\n\nThe network structure I've described so far can detect just a single\nkind of localized feature. To do image recognition we'll need more than\none \\textbf{feature map}. And so a complete convolutional layer consists\nof several different feature maps.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{conv4.png}\n\\caption{A convolutional layer with several feature maps.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}.\n}\n\\end{figure}\n\nA big advantage of sharing weights and biases is that it greatly reduces\nthe number of parameters involved in a convolutional network. For each\nfeature map we need 25=5x5 shared weights, plus a single shared bias. So\neach feature map requires 26 parameters. If we have 20 feature maps,\nthat's a total of 20x26=520 parameters defining the convolutional layer.\nBy comparison, suppose we had a fully connected first layer, with\n784=28x28 input neurons, and a relatively modest 30 hidden neurons.\nThat's 784x30 weights, plus an extra 30 biases: a total\nof 23,550 parameters.\n\n\n\\subsection{Pooling layers}\n\nIn addition to the convolutional layers just described, convolutional\nneural networks also contain pooling layers. Pooling layers are usually\nused immediately after convolutional layers. What the \\emph{pooling\nlayers} do is simplify the information in the output from the\nconvolutional layer.\n\nIn detail, a pooling layer takes each feature map output from the\nconvolutional layer and prepares a condensed feature map. For instance,\neach unit in the pooling layer may summarize a region of (say) 2x2\nneurons in the previous layer. As a concrete example, one common\nprocedure for pooling is known as \\emph{max-pooling}. In max-pooling, a\npooling unit simply outputs the maximum activation in the 2x2 input\nregion, as illustrated in figure \\ref{fig:pool}.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=0.8\\linewidth]{max_pooling}\n\\caption{Pooling Layer.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}. }\n\\label{fig:pool}\n\\end{figure}\n\nNote that since we have 24x24 neurons output from the convolutional\nlayer, after pooling we have 12x12 neurons.\n\nAs mentioned above, the convolutional layer usually involves more than a\nsingle feature map. We apply max-pooling to each feature map separately.\nSo if there were three feature maps, the combined convolutional and\nmax-pooling layers would look like figure \\ref{fig:pool2}.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{max_pooling2}\n\\caption{Convolution and pooling layer.\n\\href{http://neuralnetworksanddeeplearning.com/chap6.html}{Source}. }\n\\label{fig:pool2}\n\\end{figure}\n\nWe can think of max-pooling as a way for the network to ask whether a\ngiven feature is found anywhere in a region of the image. It then throws\naway the exact positional information. The intuition is that once a\nfeature has been found, its exact location isn't as important as its\nrough location relative to other features. A big benefit is that there\nare many fewer pooled features, and so this helps reduce the number of\nparameters needed in later layers.\n\n
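In the same minimal style, a sketch of 2x2 max-pooling (again assuming plain list-of-lists feature maps):\n\n\\begin{verbatim}\ndef max_pool_2x2(feature_map):\n    # Replace each non-overlapping 2x2 block by its maximum\n    # activation, halving each dimension (e.g. 24x24 -> 12x12).\n    n = len(feature_map)\n    return [[max(feature_map[i][j], feature_map[i][j + 1],\n                 feature_map[i + 1][j], feature_map[i + 1][j + 1])\n             for j in range(0, n, 2)]\n            for i in range(0, n, 2)]\n\nprint(max_pool_2x2([[1, 3, 2, 0],\n                    [4, 2, 1, 1],\n                    [0, 0, 5, 6],\n                    [1, 2, 7, 8]]))  # [[4, 2], [2, 8]]\n\\end{verbatim}\n\n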
\\subsection{Case Study: LeNet}\\label{case-study-lenet}\n\nLet's put everything we've learnt together and analyze one of the very\nearly successes \\sidenote{LeNet was published in\n1998! CNNs are not exactly new.} of convolutional networks: LeNet. This\nis the \\emph{architecture} of LeNet:\n\n\\begin{figure*}\n\\includegraphics[width=\\linewidth]{lenet.png}\n\\caption{LeNet architecture.\n\\href{http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf}{Source}. }\n\\end{figure*}\n\nLet's go over each of the component layers of LeNet:\n\\sidenote{ I actually describe a slightly modified\nversion of LeNet. }\n\n\\begin{itemize}\n\\item\n  \\textbf{Input}: Gray scale image of size 32 x 32.\n\\item\n  \\textbf{C1}: Convolutional layer of 6 feature maps, kernel size (5, 5)\n  and stride 1. Output size therefore is 6x28x28. Number of\n  trainable parameters is \\((5*5 + 1) * 6 = 156\\).\n\\item\n  \\textbf{S2}: Pooling/subsampling layer with kernel size (2, 2) and\n  stride 2. Output size is 6x14x14. Number of trainable parameters =\n  0.\n\\item\n  \\textbf{C3}: Convolutional layer of 16 feature maps. Each feature map\n  is connected to all the 6 feature maps from the previous layer. Kernel\n  size and stride are the same as before. Output size is 16x10x10.\n  Number of trainable parameters is \\((6 * 5 * 5 + 1) * 16 = 2416\\).\n\\item\n  \\textbf{S4}: Pooling layer with the same \\emph{hyperparameters} as above.\n  Output size = 16 x 5 x 5.\n\\item\n  \\textbf{C5}: Convolutional layer of 120 feature maps and kernel size\n  (5, 5). This amounts to a \\emph{full connection} with the outputs of the\n  previous layer. Number of trainable parameters is\n  \\((16 * 5 * 5 + 1)*120 = 48120\\).\n\\item\n  \\textbf{F6}: \\emph{Fully connected layer}\\sidenote{This is the same as the layers in the MLPs\n  we've seen before.} of 84 units, i.e., all units\n  in this layer are connected to the previous layer's\n  outputs. Number of trainable parameters is \\((120 + 1)*84 = 10164\\).\n\\item\n  \\textbf{Output}: Fully connected layer of 10 units with softmax\n  activation\\sidenote{Ignore `Gaussian connections'.\n  It is for an older loss function no longer in use.}.\n\\end{itemize}\n\nThe dataset used was MNIST, which has 60,000 training images and 10,000 test\nexamples.\n\n
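All the parameter counts above follow the same two rules, which the following small sketch reproduces (the helper names are ours):\n\n\\begin{verbatim}\ndef conv_params(in_maps, out_maps, k):\n    # (k*k weights per input map + 1 shared bias) per output map\n    return (in_maps * k * k + 1) * out_maps\n\ndef fc_params(n_in, n_out):\n    # fully connected: every unit gets n_in weights and 1 bias\n    return (n_in + 1) * n_out\n\nprint(conv_params(1, 6, 5))     # C1: 156\nprint(conv_params(6, 16, 5))    # C3: 2416\nprint(conv_params(16, 120, 5))  # C5: 48120\nprint(fc_params(120, 84))       # F6: 10164\n\\end{verbatim}\n\n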
\\section{Tricks of the Trade}\\label{tricks-of-the-trade}\n\n\\subsection{Dropout}\\label{dropout}\n\nWith so many parameters in neural networks, overfitting is a real\nproblem. For example, LeNet has about the same number of parameters as\nthere are training examples. There are a few techniques like\n\\(L_1\\)/\\(L_2\\) regularization you might be familiar with. These modify\nthe cost function by adding the \\(L_1\\)/\\(L_2\\) norm of the parameters, respectively.\n\nDropout is a radically different approach to regularization. We modify the network\nitself instead of the cost function. Suppose we're trying to train the\nnetwork in figure \\ref{fig:dropout1}.\n\n\\begin{figure}[!hbt]\n\\includegraphics[width=60mm]{dropout1.png}\n\\caption{Network before dropout}\n\\label{fig:dropout1}\n\\end{figure}\n\n\nWith dropout, we start by randomly (and temporarily) deleting half the\nhidden neurons in the network, while leaving the input and output\nneurons untouched. After doing this, we'll end up with a network like in figure \\ref{fig:dropout2}.\n\n\n\\begin{figure}[!hbt]\n\\includegraphics[width=60mm]{dropout2.png}\n\\caption{Network after dropout}\n\\label{fig:dropout2}\n\\end{figure}\n\n\nWe forward-propagate the input \\(x\\) through the modified network, and\nthen backpropagate the result, also through the modified network. After\ndoing this over a mini-batch of examples, we update the appropriate\nweights and biases. We then repeat the process, first restoring the\ndropout neurons, then choosing a new random subset of hidden neurons to\ndelete, estimating the gradient for a different mini-batch, and updating\nthe weights and biases in the network.\n\nThe weights and biases will have been learnt under conditions in which\nhalf the hidden neurons were dropped out. When we actually run the full\nnetwork that means that twice as many hidden neurons will be active. To\ncompensate for that, we halve the weights outgoing from the hidden\nneurons.\n\nWhy would dropout help? Explanation from the\n\\href{https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf}{AlexNet}\npaper \\sidenote{AlexNet is the paper which led\nto the renaissance of CNNs.}:\n\n\\begin{quote}\nThis technique reduces complex co-adaptations of neurons, since a neuron\ncannot rely on the presence of particular other neurons. It is,\ntherefore, forced to learn more robust features that are useful in\nconjunction with many different random subsets of the other neurons.\n\\end{quote}\n\nIn other words, we can think of dropout as a way of making sure that the\nmodel is robust to the loss of any individual piece of evidence. Of\ncourse, the true measure of dropout is that it has been very successful\nin improving the performance of neural networks.\n\n\\subsection{Data Augmentation}\\label{data-augmentation}\n\nYou probably already know that more data leads to better accuracy. It's\nnot surprising that this is the case, since less training data means our\nnetwork will be exposed to fewer variations in the way human beings\nwrite digits.\n\nObtaining more training data can be expensive, and so is not always\npossible in practice. However, there's another idea which can work\nnearly as well, and that's to artificially expand the training data.\n\nSuppose, for example, that we take an MNIST training image of a five,\nand rotate it by a small amount, let's say 15 degrees (figures \\ref{fig:mnist1}, \\ref{fig:mnist2}).\n\n\\begin{marginfigure}\n  \\includegraphics[width=30mm]{more_data_5.png}\n  \\caption{Example training image}\n  \\label{fig:mnist1}\n\\end{marginfigure}\n\n\\begin{marginfigure}\n  \\includegraphics[width=30mm]{more_data_rotated_5.png}\n  \\caption{Rotated training image}\n  \\label{fig:mnist2}\n\\end{marginfigure}\n\n\nIt's still recognizably the same digit. And yet at the pixel level it's\nquite different from any image currently in the MNIST training data. We\ncan expand our training data by making \\emph{many} small rotations of\nall the MNIST training images, and then using the expanded training data\nto improve our network's performance.\n\nWe call such expansion \\emph{data augmentation}. Rotation is not the\nonly way to augment the data; a few other examples are cropping and zooming. The\ngeneral principle is to expand the training data by applying operations\nthat reflect real-world variation.\n\n
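A minimal sketch of the rotation trick with scipy (images are assumed to be numpy arrays; the number of copies and the angle range are arbitrary choices):\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import rotate\n\ndef augment(images, copies=4, max_angle=15):\n    # Add several randomly rotated variants of each image;\n    # reshape=False keeps the 28x28 shape of MNIST-style inputs.\n    out = []\n    for img in images:\n        out.append(img)\n        for _ in range(copies):\n            angle = np.random.uniform(-max_angle, max_angle)\n            out.append(rotate(img, angle, reshape=False))\n    return out\n\nimages = [np.zeros((28, 28))]  # stand-in for real training images\nprint(len(augment(images)))    # 5: the original plus 4 rotations\n\\end{verbatim}\n\n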
\\subsection{Weight initialization and Batch\nNormalization}\\label{weight-initialization-and-batch-normalization}\n\nTraining a neural network is a highly non-convex problem. Therefore, the\ninitialization of the parameters to be optimized is important. To understand this\nbetter, recall the unstable gradient problem. This is the expression for the\ngradients of the parameters in the \\(l\\)th layer:\n\n\\[\\frac{\\partial C}{\\partial \\theta_l} = \\frac{\\partial f_L}{\\partial u_L} * \\frac{\\partial f_{L-1}}{\\partial u_{L-1}} * \\cdots * \\frac{\\partial f_{l+1}}{\\partial u_{l+1}} * \\frac{\\partial f_l}{\\partial \\theta_l}\\]\n\nIf a layer is not properly initialized, it scales inputs by \\(k\\), i.e.,\n\\(\\frac{\\partial f_m}{\\partial u_m} \\approx k\\). Therefore, the gradient of the\nparameters in the \\(l\\)th layer is\n\n\\[\\frac{\\partial C}{\\partial \\theta_l} \\approx k^{L - l}\\]\n\nThus, \\(k > 1\\) leads to extremely large gradients and \\(k<1\\) to very\nsmall gradients in the initial layers. Therefore, we want\n\n\\[k \\approx 1\\]\n\nThis can be ensured with good weight initialization. Historically, bad\nweight initializations are what prevented deep neural networks from being trained.\n\n
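A quick numerical sketch of this \\(k^{L-l}\\) behaviour (the depth, width, and scales below are arbitrary illustrative choices):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef gradient_norm(scale, depth=50, width=100, seed=0):\n    # Multiply the Jacobians of `depth` random linear layers and\n    # watch the product explode or vanish with the init scale.\n    rng = np.random.default_rng(seed)\n    grad = np.eye(width)\n    for _ in range(depth):\n        W = rng.normal(0, scale / np.sqrt(width), (width, width))\n        grad = W @ grad\n    return np.linalg.norm(grad)\n\nfor scale in [0.5, 1.0, 2.0]:  # roughly k < 1, k = 1, k > 1\n    print(scale, gradient_norm(scale))\n\\end{verbatim}\n\n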
A recently developed technique called\n\\href{http://arxiv.org/abs/1502.03167}{Batch Normalization} alleviates a\nlot of these initialization headaches by explicitly normalizing activations\nthroughout the network. The core observation is that this is possible\nbecause normalization is a simple differentiable operation.\n\nIt has become a very common practice to use Batch Normalization in\nneural networks. In practice, networks that use Batch Normalization are\nsignificantly more robust to bad initialization.\n\n\\section{Practical Advice}\\label{practical-advice}\n\n\\subsection{ImageNet Dataset and\nILSVRC}\\label{imagenet-dataset-and-ilsvrc}\n\nImageNet is a \\emph{huge} dataset for visual recognition research. It\ncurrently has about \\emph{14 million} images tagged manually.\n\nThe ImageNet project runs an annual contest, the ImageNet Large\nScale Visual Recognition Challenge (ILSVRC), where algorithms compete\n to correctly classify and detect objects and scenes. The ILSVRC\nrecognition challenge is conducted on a subset of ImageNet: 1.2 million\nimages with 1000 classes.\n\nFigure \\ref{fig:imagenet} shows a few example images and their recognition results:\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{imagenet.png}\n\\caption{Images from ImageNet.\n\\href{http://mappingignorance.org/fx/media/2013/04/Deep-learning-5.png}{Source}.\n} \n\\label{fig:imagenet}\n\\end{figure}\n\n\n\\href{https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf}{AlexNet}\nwas the first CNN to win ILSVRC, taking the 2012 challenge by\na significant margin. This led to a renaissance of CNNs for visual\nrecognition. Figure \\ref{fig:alexnet} shows its architecture.\n\\begin{figure*}[!hbt]\n\\includegraphics[width=0.9\\linewidth]{alexnet.png}\n\\caption{AlexNet architecture.\n\\href{http://mappingignorance.org/fx/media/2013/04/Deep-learning-5.png}{Source}.\n}\n\\label{fig:alexnet}\n\\end{figure*}\n\n\nIt isn't too different from the LeNet we discussed before. Over the years,\nnewer CNN architectures have won this challenge. Notable ones are\n\n\\begin{itemize}\n\\item\n  \\href{https://arxiv.org/pdf/1409.1556}{VGGNet}\n\\item\n  \\href{https://research.google.com/pubs/pub43022.html}{GoogLeNet}\n\\item\n  Inception\n\\item\n  \\href{https://arxiv.org/abs/1512.03385}{ResNet}\n\\end{itemize}\n\nYou will keep hearing about these architectures if you work more with CNNs. Figure \\ref{fig:top1}\nshows the accuracies of these networks:\n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{imagenet-top1.png}\n  \\caption{Top 1 accuracies on ILSVRC.\n\\href{https://chaosmail.github.io/deeplearning/2016/10/22/intro-to-deep-learning-for-computer-vision/\\#Canziani16}{Source}.\n}\n\\label{fig:top1}\n\\end{figure}\n\n\\subsection{Transfer Learning}\\label{transfer-learning}\n\nTo train CNNs like these from scratch (i.e., from random initialization), you\nwill need a huge dataset of the ImageNet scale. In practice, very few\npeople do this.\n\nInstead, you will pretrain your network on a large dataset like ImageNet\nand use the learned weights as initializations. Usually, you will just\ndownload a trained model of one of the above architectures from the\ninternet and use it as your weight initialization. This is called\n\\emph{transfer learning}.\n\nThis is a very powerful trick. For example,\n\\href{http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html}{here}\npretrained weights of ResNet-18 are used to train a classifier to\nclassify ants and bees.\n\n\\begin{figure}\n\\includegraphics[width=110mm]{transfer_learning.png}\n\\caption{Ants/Bees classifier.\n\\href{http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html}{Source}.\n}  \n\\end{figure}\n\nThis is the dataset used:\n\\begin{description}\n\\item[Training]: 120 ants + 120 bees images\n\\item[Testing]: 75 ants + 75 bees images\n\\end{description}\n\nWith this small dataset, accuracy is around 95\\%!\n\nWhy does transfer learning work? The 1.2 million images in ImageNet cover\na wide diversity of real-world images. Filters learned on ImageNet will\ntherefore be sufficiently general to apply to any similar problem, i.e.,\nyou will not overfit on your small dataset.\n\nMoral of the story: you don't need a large dataset if you are working on\nreal-life images!\n\n
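The recipe, sketched in PyTorch (mirroring the linked tutorial; the two-class head matches the ants/bees example, the rest is standard torchvision usage):\n\n\\begin{verbatim}\nimport torch.nn as nn\nfrom torchvision import models\n\n# Download ImageNet-pretrained weights as the initialization.\nmodel = models.resnet18(pretrained=True)\n\n# Optionally freeze the pretrained filters ...\nfor param in model.parameters():\n    param.requires_grad = False\n\n# ... and replace the final layer with a fresh 2-way classifier.\nmodel.fc = nn.Linear(model.fc.in_features, 2)\n\n# Now finetune on the small ants/bees dataset as usual.\n\\end{verbatim}\n\n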
\\subsection{GPUs}\\label{gpus}\n\nGPUs dramatically speed up neural networks. This is because most of the\nneural network computation is just matrix multiplication and thus is\nreadily vectorizable. GPUs excel at such computations.\n\nFor example, look at the times taken by the above transfer learning code to\ntrain:\n\n\\begin{description}\n\\item[CPU]: \\textasciitilde{} 20 min\n\\item[GPU]: \\textasciitilde{} 1 min\n\\end{description}\n\nAnd this was with a meagre dataset of 400 images. Imagine working with a\ndataset of the scale of ImageNet. GPUs are essential if you are serious\nabout deep learning.\n\n\\subsection{Other FAQ}\\label{other-faq}\n\n\\noindent What framework to use?\n\n\\begin{quote}\nLots of frameworks are available: Keras, TensorFlow, PyTorch. I suggest\nusing Keras if you are new to deep learning.\n\\end{quote}\n\n\\noindent How do I know what architecture to use?\n\n\\begin{quote}\nDon't be a hero!  -- Andrej Karpathy\n\n\\begin{itemize}\n\\item\n  Take whatever works best on ILSVRC (latest ResNet)\n\\item\n  Download a pretrained model\n\\item\n  Potentially add/delete some parts of it\n\\item\n  Finetune it on your application.\n\\end{itemize}\n\\end{quote}\n\n\\noindent How do I know what hyperparameters to use?\n\n\\begin{quote}\nDon't be a hero! -- Andrej Karpathy\n\n\\begin{itemize}\n\\item\n  Use whatever is reported to work best on ILSVRC.\n\\item\n  Play with the regularization strength (dropout rates)\n\\end{itemize}\n\\end{quote}\n\n\\noindent But my model is not converging!\n\n\\begin{quote}\n\\begin{itemize}\n\\item\n  Take a very small subset (say, 50 samples) of your dataset and train\n  your network on this.\n\\item\n  Your network should completely overfit on this data. If not, play with\n  learning rates. If you can't get your network to overfit, either something\n  is wrong with your code/initializations or you need to pick a\n  more powerful model.\n\\end{itemize}\n\\end{quote}\n\n\\section{Recommended reading} \n\nFind a list of recommended papers below:\n\n\\begin{itemize}\n\t\\item Beginner/Essential\n\t\t\\begin{itemize}\n\t\t\t\\item \\href{https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks}{AlexNet} [2012]: Image Classification.\n\t\t\t\\item \\href{https://arxiv.org/abs/1512.03385}{ResNet} [2015]: Latest and greatest architecture on ILSVRC.\n\t\t\\end{itemize}\n\t\t\n\t\\item Intermediate\n\t\t\\begin{itemize}\n\t\t\t\\item \\href{https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf}{Dropout} [2014]\n\t\t\t\\item \\href{https://arxiv.org/abs/1502.03167}{BatchNorm} [2015]\n\t\t\t\\item \\href{https://arxiv.org/abs/1502.01852}{He init scheme} [2015]: A weight initialization scheme.\n\t\t\t\\item \\href{http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf}{FCN} [2015]: Base paper for all deep learning based segmentation approaches.\n\t\t\t\\item \\href{https://arxiv.org/abs/1506.02640}{YOLO} [2015]: Object detection pipeline in a single network.\n\t\t\\end{itemize}\n\t\\item Advanced\n\t\t\\begin{itemize}\n\t\t\t\\item \\href{https://arxiv.org/abs/1506.01497}{Faster RCNN} [2015]: Object detection. Might feel quite involved because of too many moving parts. Won the MSCOCO 2015 detection challenge.\n\t\t\\end{itemize}\n\\end{itemize}\n\n\nThese notes are based on the following sources:\n\\begin{description}\n\\item[Neural Networks and Deep Learning]: This is a free online book hosted at \\url{http://neuralnetworksanddeeplearning.com}. Many of the figures, examples, and some of the text in these notes are from this book. It is quite a simple book to read. Do read it if you want to make your fundamentals clearer.\n\\item[Andrej Karpathy's slides and notes]: Slides hosted \\href{https://docs.google.com/presentation/d/1Q1CmVVnjVJM_9CDk3B8Y6MWCavZOtiKmOLQ0XB7s9Vg/edit?usp=sharing}{here} and notes hosted \\href{http://cs231n.github.io}{here}. 
His notes are really good.\n\\end{description}\n\n\\end{document}\n", "meta": {"hexsha": "da9d8ab45c4e186b062fcd437ab9685882090780", "size": 50409, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/images/crash_course/handout.tex", "max_stars_repo_name": "sierxue/chsasank.github.io", "max_stars_repo_head_hexsha": "5626fafc64d4458a5b67e6d737a3d005c16871b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-07-13T02:49:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-21T09:42:24.000Z", "max_issues_repo_path": "assets/images/crash_course/handout.tex", "max_issues_repo_name": "sierxue/chsasank.github.io", "max_issues_repo_head_hexsha": "5626fafc64d4458a5b67e6d737a3d005c16871b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-07-13T15:47:42.000Z", "max_issues_repo_issues_event_max_datetime": "2020-11-08T14:11:16.000Z", "max_forks_repo_path": "assets/images/crash_course/handout.tex", "max_forks_repo_name": "sierxue/chsasank.github.io", "max_forks_repo_head_hexsha": "5626fafc64d4458a5b67e6d737a3d005c16871b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2016-08-04T07:15:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-30T10:48:08.000Z", "avg_line_length": 38.5980091884, "max_line_length": 354, "alphanum_fraction": 0.7485568053, "num_tokens": 14199, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.588925733788375}}
{"text": "% Gaussian processes are good for bayesian regression, Very flexible in multi-output settings, for e.g. bonilla, alvarez, wilson\nGaussian process models  \\citep[GPs,][]{rasmussen-williams-book}  are a popular choice in \nBayesian regression due to their ability to capture complex dependencies and non-linearities in data.\nIn particular, when having multiple outputs or tasks they have proved effective in modeling \nthe dependencies between the tasks, outperforming  competitive baselines and \nsingle output learners  \\citep{bonilla-et-al-nips-08,teh-et-al-aistats-05,alvarez-lawrence-nips-08,wilson-et-al-icml-12}.\n% However they are costly\nHowever, the prohibitive cost of performing exact inference in  GP models severely hinders \ntheir application to  large scale multi-output problems. For example, na\\\"{i}ve inference \nin a fully coupled Gaussian process model over $P$  outputs and $N$ data points can have \na complexity of $\\calO(N^3P^3)$  and $\\calO(N^2P^2)$ in time and memory, respectively.\n\n\n% Motivating example is good\nA motivating  example of a large scale multi-output application \nis the tracking of  movements of a robot arm using 2 or more joint torques. \nIf one of the robot motors malfunctions and fails to record a torque, data collected from the \nother motors may be used to infer the missing torque values.\nHowever, taking 100 measurements per second already results in a total of over 40,000 data points \nper torque in just 7 minutes.\nClearly this problem is well beyond the capabilities  of conventional multiple output GPs.\nBuilding  multi-output GP models that can learn correlated tasks at the scale of \nthese types of problems  is thus the main focus of this paper.\n\n% Tranditional approaches in single output setting, based e..g. on inducing points are not good enough \nIn the single output setting previous attempts to scale up  GP inference \nresort to approximate inference. Most  approximation methods  \n can be understood \nwithin a single probabilistic framework that uses a set of \\emph{inducing points}  in order to \nobtain an approximate process (or a low-rank approximate covariance) over which inference can be \nperformed more efficiently \\citep{quinonero2005unifying}. These models have been referred \nto in the literature as sparse models. \nNevertheless, straightforward application of such approximate techniques will yield a computational \ncost of at least $\\calO(PNM^2)$ in time and  $\\calO(PNM)$ in memory, where \n$M$ is the number of inducing points. This high complexity still prevents us from applying GP models \nto  large scale multi-output problems.\n\n% Useful observations\nIn this work we approach the challenge of building scalable multi-output Gaussian process models  \nbased on the following observations.\nFirstly, inducing variables are the key catalyst\nfor achieving sparsity and dealing with large scale problems in Gaussian process models.\nIn particular, they capture the \\emph{sufficient statistics} of a dataset allowing the construction of sparse \nprocesses that can approximate arbitrarily well the exact GP model \\citep{titsias2009variational}.\nSecondly, the use of global latent variables (such as the inducing points) allows us to \ninduce dependencies in a highly correlated model efficiently. 
\nThis observation is exploited in \\cite{hensmangaussian} for single output GP regression models \nwhere, by explicitly representing a distribution over the inducing variables, stochastic variational inference \ncan be used to work with millions of data points.\nFinally, the key to multi-output and multi-task learning is to model dependencies between the outputs  \nbased on realistic assumptions of what can be shared across the tasks. It turns out that \nsharing  ``sparsity structure'' can not only be a reasonable assumption but also a crucial component\nwhen modeling dependencies between different related tasks.\n\n\n% Our approach starts from mixing latent process\n% having inducing points for sparsity\n% sharing inducing points across the latent process\nBased on these observations, we propose the collaborative multi-output Gaussian Process (COGP) model \nwhere latent processes are mixed to generate dependent outputs. \nEach process is sparse and characterized by its own set of inducing points.\nThe sparsity structure enabling output correlations is thus created via the shared inducing sets. \nTo learn this structure, we maintain an explicit representation of the posterior over the inducing points which in turn allows inference to be carried out efficiently.\n% exploit stochastic variational inference\nIn particular, we obtain a variational lower bound of the model evidence that decomposes across \ninputs and outputs. \nThis decomposition makes possible the application of stochastic variational inference, \nthus allowing the model to handle a large number of observations per output and a large number of outputs.\nFurthermore, learning of all  the  parameters in the model, including\nkernel hyperparameters and inducing inputs, can be done in a single stochastic optimization framework.\n\nWe analyze our multi-output model on a toy problem  where the inducing variables are shown to be \nconducive to the sharing of information between two related tasks. \nAdditionally, we  evaluate  our model on two moderate-sized datasets in which we show that it\ncan outperform previous non-scalable multi-output approaches as well as single output baselines.\nFinally, on a  large scale experiment regarding the learning of robot inverse dynamics we show \nthe substantial benefits of collaborative learning provided by our model.\n  \n
This is known in the geostatistics community as  the ``linear model of coregionalization'' \\citep{goovaerts1997geostatistics}.\nSuch models may also be reformulated in a common Bayesian framework, for example by placing a spike and slab prior over the coefficients \\citep{titsias2011spike}.\nMore complex dependencies can be induced by using \n input-dependent coefficients \\citep{wilson-et-al-icml-12,nguyen2013efficient}  or convolving processes \\citep{boyle-frean-nips-05,alvarez-lawrence-nips-08,alvarez2010efficient}.\n \nWhile we also use the mixing construction, the key difference in our model is the role of inducing variables.\nIn particular, when used in previous models to reduce the computational costs \\citep[see e.g.][]{alvarez-lawrence-nips-08,alvarez2010efficient}, the inducing points are integrated out or collapsed. \nIn contrast, our model maintains an explicit representation of the posterior over the inducing variables that is learned using data from all outputs.\nThis explicit representation  facilitates scalable learning in a similar fashion to \nthe approach in \\citet{hensmangaussian}, making it applicable to very large datasets. \n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "5b8fc04eea49486df3f64550aff74350f844dbd4", "size": 7404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/intro.tex", "max_stars_repo_name": "fkopsaf/cogp", "max_stars_repo_head_hexsha": "3b07f621ff11838e89700cfb58d26ca39b119a35", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2015-05-28T13:46:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-10T11:02:08.000Z", "max_issues_repo_path": "paper/intro.tex", "max_issues_repo_name": "fkopsaf/cogp", "max_issues_repo_head_hexsha": "3b07f621ff11838e89700cfb58d26ca39b119a35", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-07-30T08:52:36.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-04T01:44:21.000Z", "max_forks_repo_path": "paper/intro.tex", "max_forks_repo_name": "trungngv/cogp", "max_forks_repo_head_hexsha": "3b07f621ff11838e89700cfb58d26ca39b119a35", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2016-04-03T03:18:18.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-23T13:28:55.000Z", "avg_line_length": 67.9266055046, "max_line_length": 253, "alphanum_fraction": 0.8087520259, "num_tokens": 1576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580952177051, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5889257321495274}}
{"text": "\\documentclass[12pt, a4paper]\n{article}\n\n\\usepackage{svg}\n\\setsvg{inkscape=inkscape -z -D,svgpath=images/}\n\\usepackage{float}\n\\usepackage{amsmath}\n\\usepackage{todonotes}\n\n\\providecommand{\\todos}[1]{\\todo[inline]{#1}}\n\\providecommand{\\ang}[1]{\\theta_{\\text{#1}}}\n\\providecommand{\\sub}[1]{_{\\text{#1}}}\n\\providecommand{\\subi}[1]{_{\\text{#1},i}}\n\\providecommand{\\twonorm}[1]{\\left|\\left|#1\\right|\\right|}\n\\providecommand{\\foralli}{\\quad \\forall\\, i}\n\\providecommand{\\lr}[1]{\\left( {#1} \\right)}\n\n\\providecommand{\\tp}{\\tilde{p}}\n\\providecommand{\\inert}{w}\n\\providecommand{\\bp}{\\boldsymbol{p}}\n\\providecommand{\\btp}{\\tilde{\\bp}}\n\n\\providecommand{\\xh}{\\hat{x}}\n\\providecommand{\\yh}{\\hat{y}}\n\n\\providecommand{\\qcom}{, \\qquad}\n\n\n\\title{Reference frame conversions}\n\\author{Laurens Valk}\n\\begin{document}\n\\maketitle\n\n\\tableofcontents\n\n\\section{Preliminaries}\n\n\\subsection{Points and Vectors}\n\\begin{equation}\nr_{b/a}^c = p^c_b - p^c_a\n\\end{equation}\n\\todos{add picture}\n\\subsection{Transformation}\n\n\\begin{equation}\np^i = \\alpha\\lr{R^i_j p^j + r^i_{j/i}}\n\\end{equation}\n\n\\begin{equation}\n\\begin{bmatrix}\np^i\\\\\n1\n\\end{bmatrix} = \\underbrace{\\begin{bmatrix}\nR^i_j & r^i_{j/i}\\\\\n0 & 1\n\\end{bmatrix}}_{H^i_j}\\begin{bmatrix}\np^j\\\\1\n\\end{bmatrix}\n\\end{equation}\n\n\\subsection{Inverse Transformation}\n\n\\begin{equation}\n\\begin{bmatrix}\np^j\\\\\n1\n\\end{bmatrix} = \\underbrace{\\begin{bmatrix}\n(R^i_j)^T & -(R^i_j)^Tr^i_{j/i}\\\\\n0 & 1\n\\end{bmatrix}}_{H^i_j}\\begin{bmatrix}\np^j\\\\1\n\\end{bmatrix}\n\\end{equation}\n\n\\section{From Pixels to Meters} \n\\subsection{Transformation Matrices}\nThe symbol $\\tp$ expresses points in units of pixels. 
Any other point $p$ is expressed in meters.\n\\begin{figure}[H]\n\\centering\n\\includesvg[width=1\\textwidth]{scaling}\n\\caption{Four coordinates: a) in the camera picture, b) with flipped y axis, c) with the origin at the midbase of the picture, d) scaled to physical distances.\n\\label{fig:scaling}}\n\\end{figure}\n\n\\begin{equation}\n\\begin{bmatrix}\n\\tp^f\\\\\n1\n\\end{bmatrix} = \\begin{bmatrix}\n1 & 0 & 0\\\\\n0 & -1 & 0\\\\\n0& 0 & 1\n\\end{bmatrix}\\begin{bmatrix}\n\\tp^p\\\\1\n\\end{bmatrix}=H^f_p \\begin{bmatrix}\n\\tp^p\\\\1\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n\\begin{bmatrix}\n\\tp^c\\\\\n1\n\\end{bmatrix} = \\begin{bmatrix}\n1 & 0 & -\\frac{1}{2}\\tilde{d}_x\\\\\n0 & 1 & \\frac{1}{2}\\tilde{d}_y\\\\\n0& 0 & 1\n\\end{bmatrix}\\begin{bmatrix}\n\\tp^f\\\\1\n\\end{bmatrix}=H^c_f \\begin{bmatrix}\n\\tp^f\\\\1\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n\\begin{bmatrix}\np^{\\inert}\\\\\n1\n\\end{bmatrix} = \\begin{bmatrix}\n\\varepsilon & 0 & 0\\\\\n0 & \\varepsilon & 0\\\\\n0& 0 & 1\n\\end{bmatrix}\\begin{bmatrix}\n\\tp^c\\\\1\n\\end{bmatrix}=H^{\\inert}_c \\begin{bmatrix}\n\\tp^c\\\\1\n\\end{bmatrix}\n\\end{equation}\n\n\\begin{equation}\n\\begin{bmatrix}\np^{\\inert}\\\\\n1\n\\end{bmatrix} = H^{\\inert}_c H^c_f H^f_p \\begin{bmatrix}\n\\tp^p\\\\1\n\\end{bmatrix}\n\\end{equation}\n%\nThe overall transformation from pixels to the real world representation is\n%\n\\begin{equation}\n\\begin{bmatrix}\np^{\\inert}\\\\\n1\n\\end{bmatrix} = H^{\\inert}_p \\begin{bmatrix}\n\\tp^p\\\\1\n\\end{bmatrix} \\quad \\Rightarrow \\quad \\bp^{\\inert} = H^{\\inert}_p \n\\btp^p\n\\end{equation}\n\n\n\\todos{fix scaling outside $H$}\n\n
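A small numerical sketch of this pixel-to-world chain (numpy; the image size and the scaling factor are made-up values):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef homogeneous(R, r):\n    # Build H = [[R, r], [0, 1]] from a 2x2 block and a 2-vector.\n    H = np.eye(3)\n    H[:2, :2], H[:2, 2] = R, r\n    return H\n\ndx, dy, eps = 640, 480, 0.005  # pixels, pixels, meters per pixel\nH_f_p = homogeneous(np.diag([1, -1]), [0, 0])      # flip y axis\nH_c_f = homogeneous(np.eye(2), [-dx / 2, dy / 2])  # shift origin\nH_w_c = homogeneous(eps * np.eye(2), [0, 0])       # scale to meters\nH_w_p = H_w_c @ H_c_f @ H_f_p\n\np_pixel = np.array([320, 480, 1])  # homogeneous pixel coordinates\nprint(H_w_p @ p_pixel)             # world coordinates in meters\n\\end{verbatim}\n\n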
\\subsection{Scaling Factor}\n%\n\\todos{consider working with centimeters}\nThe distance $d\\sub{apex/midbase}$ in Figure \\ref{fig:labelworld} is known, constant, and equivalent for each agent:\n%\n\\begin{equation}\nd\\sub{apex/midbase} \\equiv \\twonorm{p\\subi{apex}-p\\subi{midbase}} \\foralli\n\\end{equation}\n%\nPer agent, we can estimate the scaling factor $\\varepsilon_i$ as:\n%\n\\begin{equation}\n\\varepsilon_i = \\dfrac{d\\sub{apex/midbase}}{\\twonorm{\\tp\\subi{apex}-\\tp\\subi{midbase}}} \\quad \\text{m/pixel}\n\\end{equation}\n%\nA more accurate estimate for the overall scaling factor is:\n%\n\\begin{equation}\n\\varepsilon = \\dfrac{1}{N}\\sum^N_{i=1} \\varepsilon_i\n\\end{equation}\n%\nHowever, because $\\varepsilon$ is constant it may be more desirable to calculate it once and use it as a constant parameter. If so, we may divide the arena width by the picture width in pixels:\n%\n\\begin{equation}\n\\varepsilon = \\dfrac{d\\sub{field}}{\\tilde{d}_x} \\quad \\text{m/pixel}\n\\end{equation}\n\\pagebreak\n\\section{Local Robot Frames}\n\\todos{cross check p's and then define vector as difference between two points; and then vectors cannot be transformed; just rotated}\n\n\\subsection{From Label Markers to Label Frame}\n\n\\begin{figure}[H]\n\\centering\n\\includesvg[width=0.5\\textwidth]{labelworld}\n\\caption{Label marker and label frame definitions\n\\label{fig:labelworld}}\n\\end{figure}\n\n\nFirst, transform pixel locations to world coordinates:\n%\n\\begin{equation}\n\\bp\\sub{midbase}^w = H^{\\inert}_p \\btp\\sub{midbase}^p, \\quad \\bp\\sub{apex}^w = H^{\\inert}_p \\btp\\sub{apex}^p\n\\end{equation}\n%\nObtain the label x and y axes in world coordinates:\n%\n\\begin{align}\n\\xh_\\ell^{\\inert}= \\dfrac{p\\sub{apex}^w-p\\sub{midbase}^w}{\\twonorm{p\\sub{apex}^w-p\\sub{midbase}^w}} \\qcom \\yh_\\ell^{\\inert} = \\begin{bmatrix}\n0 & -1\\\\\n1 & 0\n\\end{bmatrix}\\xh_\\ell^{\\inert}\n\\end{align}\n%\nThe transformation matrix becomes:\n%\n\\begin{equation}\n\\bp^{\\inert} =H^{\\inert}_\\ell \n\\bp^\\ell\\qcom H^{\\inert}_\\ell = \\begin{bmatrix}\n\\xh_\\ell^{\\inert} & \\yh_\\ell^{\\inert} & r^{\\inert}\\sub{midbase/origin}\\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\end{equation}\n\n\n\\todos{Agree on a better name than center. This is confusing}\n\\todos{replace terminology: top = apex, center = midbase}\n\\todos{FIX: Inside H matrix are always VECTORS. not p's.}\n\\subsection{From Label Frame to Robot Frame}\n\n\\todos{distinguish between points and vectors: Then vectors suddenly make sense: points in a frame are always defined with respect to the origin of that frame}\n\\begin{equation}\n\\bp^b = \\begin{bmatrix}\nI_2 & r^b\\sub{midbase/robot}\\\\\n0& 1\n\\end{bmatrix}\\bp^\\ell\n\\end{equation}\n\n\\begin{figure}[H]\n\\centering\n\\includesvg[width=0.7\\textwidth]{robotframe}\n\\caption{Local vehicle frame definitions\n\\label{fig:robotframe}}\n\\end{figure}\n\n\\section{Relative Agent Positions and Orientations}\n\n\n\n\n\\end{document}", "meta": {"hexsha": "ecaa26cc68a2d1167df291d10780403922f6a2f8", "size": 5871, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math/math.tex", "max_stars_repo_name": "laurensvalk/legoswarm", "max_stars_repo_head_hexsha": "3ff629d933166ba07c02d0e936ccee586b2c5ddd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-01T23:28:07.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-01T23:28:07.000Z", "max_issues_repo_path": "math/math.tex", "max_issues_repo_name": "laurensvalk/legoswarm", "max_issues_repo_head_hexsha": "3ff629d933166ba07c02d0e936ccee586b2c5ddd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math/math.tex", "max_forks_repo_name": "laurensvalk/legoswarm", "max_forks_repo_head_hexsha": "3ff629d933166ba07c02d0e936ccee586b2c5ddd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-06-27T09:27:54.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-01T23:28:08.000Z", "avg_line_length": 24.1604938272, "max_line_length": 192, "alphanum_fraction": 0.7075455629, "num_tokens": 2176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7956581049086031, "lm_q1q2_score": 0.588925730201898}}
{"text": "\\title{Quantifying Mobility of Octahedral Intermediates}\n\\author{\n        Daniel Johnson \\\\\n                Division of Applied Mathematics\\\\\n        Brown University\n}\n\\date{\\today}\n\n\\documentclass[12pt]{article}\n\n\\usepackage{graphicx,amsfonts,amsbsy,bbm, amsmath}\n\n\\begin{document}\n\\bibliographystyle{plain}\n\\maketitle\n\n\n\\section{Introduction}\nThe number of degrees of freedom measures the rigidity of an intermediate, but some intermediates with the same number of degrees of freedom may have varying degrees of mobility. We seek to quantify an intermediate's mobility by the relative amount of movement the intermediate has for a small movement in its configuration space. Intermediates with the most mobility will move less than the less mobile intermediates for the same amount of movement in configuration space.  \n\n\\section{Methods}\n% What is the configuration space?\nEach octahedron intermediate is composed of 8 equilateral triangles connected to each other along edges. We make the assumption that these triangles are rigid and cannot be deformed. Furthermore, when two triangles meet at an edge, they move relative to each other as if connected by an ideal hinge. Every configuration has 6 trivial degrees of freedom corresponding to the 3 translation degrees of freedom and the 3 rotational degrees of freedom. Since we are interested in the motion of an intermediate's faces relative to each other, we remove these trivial degrees of freedom by picking a single face to fix in space. Depending on the connectivity of the triangle's edges, an intermediate \nmay still have degrees of freedom. The Octahedron intermediate (83), being a convex polyhedron, is rigid and has no non-trivial degrees of freedom as given by Cauchy's Theorem. However, this is not the case for most of the intermediates.\n\nWe formalize the concept of the configuration space, by defining it to be the subset of an ambient parameter space that satisfies constraint equations that correspond to our assumptions. Since the are 8 faces in each intermediate and each face has 3 vertices each with 3 spacial coordinates (x,y,z), we use $\\mathbb{R}^{8\\times 3 \\times 3} = \\mathbb{R}^{72}$ as our ambient space. We then have 3 types of constraint equations: base face constraints, rigid face constraints, and hinge constraints. Every admissible configuration of an intermediate will be a point in $\\mathbb{R}^{72}$ that satisfies the constraint equations and any admissible movement of a configuration (if possible) is a continuous movement in the subset of $\\mathbb{R}^{72}$ where these constraints are satisfied. \n\nSince our constraint equations can all be expressed as polynomials, the configuration space which is the corresponding solution set is an Algebraic variety. The number of degrees of freedom of a configuration is the local dimension of this algebraic variety as a subset of $\\mathbb{R}^{72}$. In the case of octahedron intermediates, the number of degrees of freedom is not an informative way of classifying which intermediates are dominant as intermediates $1-11$ have $7$ degrees of freedom, $12-33$ have $5$, $34-65$ have $3$, $66-82$ have $1$ and the Octahedron (83) and Boat (84) have $0$ degrees of freedom. Thus, a different measure the mobility of a configuration is required to differentiate intermediates. It is important to note that degrees of freedom of in intermediate can theoretically change based on the region of configuration space. 
However, in practice we have not observed an intermediate's degrees of freedom change in this way. \n\nWe define the \\textit{canonical configuration} of an intermediate to be that in which each hinge forms a $180^\\circ$ angle when the hinge does not have a vertex connection at either end; in the case of a closed vertex, the hinge's angle corresponds to the octahedral angle. In cases where the intermediate cannot form an octahedron, we can use the Boat configuration's angles instead.\n\n% How do we move in the configuration space?\nGiven a particular configuration $X$, if we wish to make a small move in configuration space, we first find the null space of the Jacobian matrix of the constraint equations. Since this null space corresponds to the directions in which the constraint equations are not changing, the constraints will remain satisfied. The null space can be represented by an orthonormal basis $N \\in \\mathbb{R}^{72\\times d}$, where $d$ is the number of degrees of freedom of the configuration. Then, by taking a small step of size $\\epsilon$ in each of the $d$ directions, we get the configurations $X + \\epsilon N_{\\cdot,k}$.\n% How do we measure the magnitude of a movement in configuration space?\n\nWe define the \\textit{mobility} of a configuration to be the $L^2$ norm of the gradient of the configuration space. To approximate this, we measure the mean squared distance between each point of $X$ and $X \\pm \\epsilon N_{\\cdot,k}$ in each of the $d$ directions of $N$, take the norm, and divide by $\\epsilon$. This is given by Equation~\\ref{eq:r1}, where $\\triangle_f$ refers to the $f$th face of the intermediate, $r\\left(X\\right)$ is the point we are integrating over, and $r\\left(X+\\sigma \\epsilon N_{\\cdot,k}\\right)$ is the corresponding point in the altered configuration. \n\n\nTo explicitly define the rotation and translation of each face by the movement in configuration space, we use the algorithm given by Arun et al.~\\cite{Arunetal1987}. Given two sets of points $P,P' \\in \\mathbb{R}^{3\\times n}$ in 3D space, the algorithm finds the rotation matrix $R$ and translation vector $b$ minimizing the least squares error of $P' \\approx R P + b$. By using the three vertices of face $f$ as $P$ and their perturbed values as $P'$, we use this algorithm to find the rotation matrix $R_f$ and translation vector $b_f$ that describe the rigid movement of face $f$ in the configuration space. This leads to the formulation in Equation~\\ref{eq:r3}.\n\n
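Since this rigid fit of Arun et al.~\\cite{Arunetal1987} is used repeatedly, here is a compact numpy sketch of that step (the function name and the test triangle are our own):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_rigid(P, Q):\n    # Find R, b minimizing ||Q - (R P + b)||^2 for 3xN point sets,\n    # via the SVD of the centered correlation matrix.\n    p_bar = P.mean(axis=1, keepdims=True)\n    q_bar = Q.mean(axis=1, keepdims=True)\n    U, _, Vt = np.linalg.svd((Q - q_bar) @ (P - p_bar).T)\n    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # no reflections\n    R = U @ D @ Vt\n    b = q_bar - R @ p_bar\n    return R, b\n\n# Recover a known rotation and translation of a triangle's vertices.\nP = np.array([[0.0, 1.0, 0.5],\n              [0.0, 0.0, 0.8],\n              [0.0, 0.0, 0.0]])\nt = 0.3\nRz = np.array([[np.cos(t), -np.sin(t), 0],\n               [np.sin(t),  np.cos(t), 0],\n               [0, 0, 1]])\nR, b = fit_rigid(P, Rz @ P + 1.0)\nprint(np.allclose(R, Rz), b.ravel())  # True [1. 1. 1.]\n\\end{verbatim}\n\n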
\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=0.61\\textwidth]{triangle_fig.png}\n  \\caption{Coordinates of rotated triangle for integration.}\n  \\label{fig:tri}\n\\end{figure}\n\nTo make this integral easier to solve, we introduce a change of variables that enables us to integrate in the $x-y$ plane. If the original triangle $f$ has vertices $a'$, $b'$, and $c'$, we consider the triangle with vertices at the coordinates $a = (0,0,0), b = (|a'-b'|,0,0),$ and $c = (\\alpha,\\beta,0)$ as seen in Figure~\\ref{fig:tri}. Here, the choices $\\alpha = \\frac{|a'-b'|^2 + |a'-c'|^2 - |b'-c'|^2 }{2|a'-b'|}$ and $\\beta = \\sqrt{|a'-c'|^2-\\alpha^2}$ yield a triangle congruent to the original. Using the Arun et al.~\\cite{Arunetal1987} algorithm again, we find the rotation matrix $S_f$ and translation vector $c_f$ that give $[a',b',c'] = S_f[a,b,c] + c_f$. Using this, we perform a change of variables and integrate over $s$ and $t$. This results in a simple double integral over a quadratic function of $s$ and $t$ with constants $u,v,w \\in \\mathbb{R}^3$, as seen in Equation~\\ref{eq:r5}.\n{\\tiny\n\\begin{align}\n\\mathcal{R}\\left(x\\right) &= \\frac{1}{\\epsilon}\\left(\\sum_{k=1}^d \\sum_{f=1}^8 \\int_{\\triangle_f}\\left|\\frac{r\\left(x+\\epsilon N_{\\cdot,k}\\right) - r\\left(x\\right)}{2\\sqrt{3}}\\right|^2\\right)^{\\frac{1}{2}} \\label{eq:r1}  \\\\\n&= \\frac{1}{2\\sqrt{3}\\epsilon}\\left( \\sum_{k=1}^d \\sum_{f=1}^8 \\int_{\\triangle_f}\\left|R_fr+b_f - r\\right|^2\\right)^{\\frac{1}{2}} \\label{eq:r2} \\\\\n&= \\frac{1}{2\\sqrt{3}\\epsilon}\\left( \\sum_{k=1}^d \\sum_{f=1}^8 \\int_{\\triangle_f}\\left|\\left(R_f-I\\right)r+b_f\\right|^2\\right)^{\\frac{1}{2}} \\label{eq:r3} \\\\\n&= \\frac{1}{2\\sqrt{3}\\epsilon}\\left( \\sum_{k=1}^d \\sum_{f=1}^8 \\int_{0}^{\\beta}\\int_{\\frac{\\alpha}{\\beta}t}^{|a-b| + \\frac{\\alpha- |a-b|}{\\beta}t}\\left|\\left(R_f-I\\right)\\left(S_f\\begin{bmatrix} s \\\\[0.3em] t \\\\[0.3em] 0 \\end{bmatrix} + c_f\\right)+b_f\\right|^2ds dt\\right)^{\\frac{1}{2}} \\label{eq:r4} \\\\\n&= \\frac{1}{2\\sqrt{3}\\epsilon}\\left( \\sum_{k=1}^d \\sum_{f=1}^8 \\int_{0}^{\\beta}\\int_{\\frac{\\alpha}{\\beta}t}^{|a-b| + \\frac{\\alpha- |a-b|}{\\beta}t}\\left|u + sv + tw\\right|^2ds dt\\right)^{\\frac{1}{2}} \\label{eq:r5} \n\\end{align} \n}    \n% Why does the choice of the base constraint matter and how do we deal with it?\n\nUnfortunately, the particular choice of base face affects the mobility. To rectify this bias, we compute the mobility with each of the eight faces as the base and take the average. This gives the final mobility in Equation~\\ref{eq:r6}. \n{\\tiny\n\\begin{align}\n\\mathcal{R} &= \\frac{1}{8}\\sum_{g=1}^8\\mathcal{R}_g \\\\\n&= \\frac{1}{16\\sqrt{3}\\epsilon}\\sum_{g=1}^8\\left( \\sum_{k=1}^d \\sum_{f=1}^8 \\int_{0}^{\\beta}\\int_{\\frac{\\alpha}{\\beta}t}^{|a-b| + \\frac{\\alpha- |a-b|}{\\beta}t}\\left|u + sv + tw\\right|^2ds dt\\right)^{\\frac{1}{2}} \\label{eq:r6} \n\\end{align}     \n}\n\n\\section{Results}\n\nMobility was computed for all octahedral intermediates in their canonical configuration. The perturbation parameter $\\epsilon = 10^{-6}$ was selected, but the mobility proved robust to both smaller and larger values. Figure~\\ref{fig:t1} outlines the results. From these calculations, it is clear that while mobility is highly correlated with degrees of freedom, mobility gives additional information about an intermediate's configuration space. Interestingly, all nets had the same mobility. Similarly, the symmetric pairs (14,16), (15,18), (19,20), (34,38), and (36,39) have matching mobility values. Of the intermediates with 5 degrees of freedom, 19 and 20 had the lowest mobility and 17 had the highest. As for intermediates with 3 degrees of freedom, 35 was the least mobile and 37 was the most mobile.
\n\n\\begin{figure}[h!]\n\\label{fig:t1}\n\\centering\n\\scalebox{.9}{\n\\begin{tabular}{ l | c }\n  Intermediate & Mobility  \\\\\n  \\hline\\hline\n1 &  0.20518 \\\\\n2 &  0.20518 \\\\\n3 &  0.20518 \\\\\n4 &  0.20518 \\\\\n5 &  0.20518 \\\\\n6 &  0.20518 \\\\\n7 &  0.20518 \\\\\n8 &  0.20518 \\\\\n9 &  0.20518 \\\\\n10 & 0.20518 \\\\\n11 & 0.20518 \\\\\\hline\n12 & 0.18376 \\\\\n13 & 0.18313 \\\\\n14 & 0.18249 \\\\\n15 & 0.18171 \\\\\n16 & 0.18249 \\\\\n17 & 0.18634 \\\\\n18 & 0.18171 \\\\\n19 & 0.18102 \\\\\n20 & 0.18102 \\\\\n21 & 0.18255 \\\\\n22 & 0.18485 \\\\\\hline\n34 & 0.14487 \\\\\n35 & 0.14350 \\\\\n36 & 0.14530 \\\\\n37 & 0.14973 \\\\\n38 & 0.14487 \\\\\n39 & 0.14530 \\\\\\hline\n66 & 0.07954 \\\\\\hline\n83 & 0.00000 \\\\\n\\end{tabular}\n}\n\\caption{A table with the mobility of octahedral intermediates.}\n\\end{figure}\n\n\\bibliography{Master}\n\\end{document}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "b68f47e80e4dbc1a7ce50302ac6e1def877728e1", "size": 10418, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Existing Writing Dump/Keep/rigidity_distance_writeup.tex", "max_stars_repo_name": "Danie1Johnson/thesis", "max_stars_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Existing Writing Dump/Keep/rigidity_distance_writeup.tex", "max_issues_repo_name": "Danie1Johnson/thesis", "max_issues_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Existing Writing Dump/Keep/rigidity_distance_writeup.tex", "max_forks_repo_name": "Danie1Johnson/thesis", "max_forks_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.8468899522, "max_line_length": 950, "alphanum_fraction": 0.7238433481, "num_tokens": 2990, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.588925726615421}}
{"text": "\r\n\\input{SingleAssignmentSetup.tex}\r\n\\input{../WeekTitles.tex}\r\n\\begin{document}\r\n\r\n\r\n\\begin{center}\r\n\\subsection*{MNTC P01 - Week \\#2 - \\WeekTitleTwo}\r\n\\end{center}\r\n\r\n\r\n\\subsection*{Linear Approximations and Tangent Lines}\r\n\\begin{enumerate}[1.]\r\n\\begin{multicols}{2}\r\n\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Find the equation of the tangent line to the graph of $f$ at\r\n    (1,1), where $f$ is given by $f(x) = 2x^3 - 2x^2 + 1$.\r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    $f(x) = 2x^3 - 2x^2 + 1$.  \\\\\r\n    In general, the slopes of the function are given by $f'(x) = 6x^2\r\n    -\r\n    4x$ \\\\\r\n    At the point $(1,1)$ (which you should check is actually on the graph of $f(x)$!), the slope is \\\\\r\n    $f'(1) = 6 - 4 = 2$ \\\\\r\n    Using the point/slope formula for a line (or the tangent line\r\n    formula), a line tangent to the graph of $f(x)$ at the point\r\n    $(1,1)$ is\r\n    \\begin{align*} y & = f'(1) (x-1) + f(1)\\\\\r\n      & = 2 (x-1) + 1 \\\\\r\n      \\mbox{ or } y & = 2x - 1\r\n    \\end{align*}\r\n    \r\n  \\end{Solution}\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    \\begin{enumerate}[(a)]\r\n    \\item Find the equation of the tangent line to $f(x) = x^3$ at\r\n      $x=2$.\r\n    \\item Sketch the curve and the tangent line on the same axes, and\r\n      decide whether using the tangent line to approximate $f(x) =\r\n      x^3$ would produce {\\em over-} or {\\em under-}estimates of\r\n      $f(x)$ near $x=2$.\r\n    \\end{enumerate}\r\n    \r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item $f(x) = x^3$, so $f'(x) = 3x^2$.  \\\\\r\n      At $x=2$, $f(2) = 8$ and $f'(2) = 12$, so the tangent line to\r\n      $f(x)$ at $x=2$ is\r\n$$ y = 12 (x-2) + 8$$ \r\n\\item ~ \\\\\r\n  \\includegraphics[width=2.0in]{graphics/Week02_TangentLines/Tangent_x_cubed} \\\\\r\n  From the graph of $y =x^3$, it is clear that the tangent line at\r\n  $x=2$ will lie {\\em below} the actual curve. This means that using\r\n  the tangent line to estimate $f(x)$ values will produce {\\em\r\n    underestimates} of $f(x)$.\r\n\\end{enumerate}\r\n    \r\n\\end{Solution}\r\n% ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Find the equation of the line tangent to the graph of \\(f\\) at\r\n    \\((3,57)\\), where \\(f\\) is given by \\(f(x)= 4 x^3 - 7 x^2 + 12\\).\r\n    \\par \\end{Question}\r\n  \\begin{Solution}\r\n    Differentiating gives \\(f'(x) = 12 x^2 - 14 x\\), so \\(f'(3) =\r\n    66\\).  Thus the equation of the tangent line is \\(y - 57 = 66 ( x\r\n    - 3 )\\), or \\(y = 57 + 66 (x - 3 )\\).\r\n    \\par\\end{Solution}\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Given a power function of the form \\(f(x)=ax^n\\), with \\(f'(3) =\r\n    16\\) and \\(f'(6) = 128\\), find \\(n\\) and \\(a\\).\r\n    \\par\r\n    \\par \\end{Question}\r\n  \\begin{Solution}\r\n \r\n    Since \\(f(x)=ax^n\\), \\(f'(x)=anx^{n-1}\\).  
We know that\r\n    \\(f'(3)=(an)3^{n-1} = 16\\), and \\(f'(6) = (an) 6^{n-1} = 128\\).\r\n    Therefore,\r\n    \\[\\frac{f'(6)}{f'(3)} = \\frac{128}{16} = 8.\\]\r\n    But\r\n    \\[\\frac{f'(6)}{f'(3)} = \\frac{(an)6^{n-1}}{(an)3^{n-1}} =\r\n    2^{n-1},\\] so \\(2^{n-1} = 8\\), and so \\(n = 4\\).\r\n    \\par\r\n    Substituting \\(n=4\\) into the expression for \\(f'(3)\\), we get \\(4\r\n    a 3^{3} = 16\\), so \\(a = \\frac{4}{27}\\).\r\n    \\par\\end{Solution}\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Find the equation of the line tangent to the graph of \\(f\\) at\r\n    \\((2,1)\\), where \\(f\\) is given by \\(f(x)= 2 x^3 - 5 x^2 + 5\\).\r\n    \\par \\end{Question}\r\n  \\begin{Solution}\r\n \r\n    Differentiating gives \\(f'(x) = 6 x^2 - 10 x\\), so \\(f'(2) = 4\\).\r\n    Thus the equation of the tangent line is \\(y - 1 = 4 ( x - 2 )\\),\r\n    or \\(y = 1 + 4 (x - 2 )\\).\r\n    \\par\\end{Solution}\r\n\r\n  % *******************************\r\n\\item\r\n  \\begin{Question}\r\n \r\n\\par \r\nFind all values of \\(x\\) where the tangent lines to \\(y=x^{8}\\) and\r\n\\(y=x^{9}\\) are parallel.\r\n\\par \\end{Question}\r\n\\begin{Solution}\r\n \r\n  Let \\(f(x)=x^{8}\\) and let \\(g(x)=x^{9}\\). The two graphs have\r\n  parallel tangent lines at all \\(x\\) where \\(f'(x)=g'(x)\\).\r\n  \\[f'(x) = g'(x)\\]\r\n  \\[8x^{7} = 9x^{8}\\]\r\n  \\[8x^{7}-9x^{8} = 0\\]\r\n  \\[x^{7}\\!\\left(8-9x\\right) = 0\\] hence, \\(x=0\\) or\r\n  \\(x=\\frac{8}{9}\\).\r\n\r\n  The point at $x=0$ is easy to visualize (both graphs are flat\r\n  there).  Here is a graph showing the parallel tangents at $x=8/9$.\r\n\r\n  \\includegraphics[width=0.8\\linewidth]{graphics/Week02_TangentLines/TangentTo_x8_x9}\r\n  \\par\\end{Solution}\r\n% ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Consider the function \\(f(x)=9-e^x\\).\r\n    \\begin{enumerate}[(a)]\r\n    \\item Find the slope of the graph of \\(f(x)\\) at the point where\r\n      the graph crosses the \\(x\\)-axis.\r\n      \\par\r\n    \\item\r\n\r\n      Find the equation of the tangent line to the curve at this\r\n      point.\r\n      \\par\r\n    \\item Find the equation of the line perpendicular to the tangent\r\n      line at this point. (This is the {\\it normal} line.)\r\n    \\end{enumerate}\r\n    \\par \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item \\(f(x) = 9-e^x\\) crosses the \\(x\\)-axis where \\(0 = 9\r\n      -e^x\\), which happens when \\(e^x = 9\\), so \\(x = \\ln 9\\).  
Since\r\n      \\(f'(x) = -e^{x}\\), \\(f'(\\ln 9) = -9\\).\r\n      \\par\r\n    \\item \\(y = -9 ( x - \\ln(9) )\\).\r\n      \\par\r\n    \\item The slope of the normal line is the negative reciprocal of\r\n      the slope of the tangent, so \\(y = \\frac{1}{9} (x - \\ln(9))\\).\r\n    \\end{enumerate}\r\n    \\par\\end{Solution}\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Consider the function $y = 2^x$.\r\n    \\begin{enumerate}[(a)]\r\n    \\item Find the tangent line based at $x=1$, and find where the\r\n      tangent line will intersect the $x$ axis.\r\n    \\item Find the point on the graph $x=a$ where the tangent line\r\n      will pass through the origin.\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item  We find the linearization using $f(x) = 2^x$, so\r\n$f'(x) = 2^x \\ln(2)$ (non-$e$ exponential derivative rule).\r\n\r\nAt the point $x=1$, $f(1) = 2^1 = 2$ and $f'(1) = 2^1 \\ln(2)$, so the linear approximation to $f(x)$ is \r\n\r\n$L(x) = 2+(2 \\ln(2))(x-1)$. \r\n\r\nSolving for where this line intersects the $x$ axis (or the $y=0$\r\nline), we find the $x$ intercept is approximately -0.4427.\r\n\r\n\\item This question is more general.  Instead of asking for a\r\n  linearization at a specific point, it is asking ``at what point\r\n  would the linearization pass through the origin?''  Let us give the\r\n  point a name: $x=a$ (as opposed to $x=1$ used in part (a)).\r\n\r\nFrom the function and the derivatives, the linearization at the point $x=a$ is given by:\r\n$$L_a(x) = \\underbrace{2^a}_{f(a)} + \\underbrace{2^a \\ln(2)}_{f'(a)}(x-a)$$\r\n\r\nThat is true in general, but we want the point $x=a$ where the\r\nlinearization will go through $(0,0)$, i.e. for which $L_a(0) = 0$:\r\n\\begin{align*}\r\n0 & = 2^a + 2^a \\ln(2) (0-a)  \\\\\r\n\\mbox{ Solving for }a,~~~~~ 0 & = 2^a (1 - a \\ln(2)) \\\\\r\n0 & = 1 - a \\ln(2) \\\\\r\na \\ln(2)  & = 1  \\\\\r\na & = \\frac{1}{\\ln(2)} \\approx 1.442  \\\\\r\n\\end{align*}\r\nAt that $x$ point, the graph of $y = 2^x$'s tangent line will pass\r\nexactly through the origin.\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n  % ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    \\begin{enumerate}[(a)]\r\n    \\item Find the tangent line approximation to $f(x) = e^x$ at\r\n      $x=0$.\r\n    \\item Use a sketch of $f(x)$ and the tangent line to determine\r\n      whether the tangent line produces over- or under-estimates of\r\n      $f(x)$.\r\n    \\item Use your answer from part (b) to decide whether the\r\n      statement $e^x \\ge 1 + x$ is always true or not.\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item $f(x) = e^x$, so $f'(x) = e^x$ as well.\\\\\r\nTo build the tangent line at $x=0$, we use $a=0$ as our reference point:\r\n$f(0) = e^0 = 1$, and $f'(0) = e^0 = 1$.\r\nThe tangent line is therefore \\\\\r\n$l(x) = 1 (x-0) + 1 = x + 1$\r\n\\item  ~ \\\\\r\n\\includegraphics[width=0.8\\linewidth]{graphics/Week02_TangentLines/exp_tangent}\r\n\r\nSince the exponential graph is concave up, it curves upwards away from\r\nthe graph.  
This means that the linear approximation will always be an\r\nunder-estimate of the original function.\r\n\item Since the linear function will always underestimate the value of $e^x$, we can\r\nconclude that\r\n$$1+x \le e^x,$$\r\nwith equality only at the tangent point, $x=0$.\r\n\end{enumerate}\r\n    \r\n  \end{Solution}\r\n\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n   The speed of sound in dry air is\r\n$$f(T) = 331.3 \sqrt{1 + \frac{T}{273.15}} \mbox{ m/s} $$\r\nwhere $T$ is the temperature in degrees Celsius.  Find a linear\r\nfunction that approximates the speed of sound for temperatures near\r\n$0^o$ C.\r\n  \end{Question}\r\n  \begin{Solution}\r\n    \r\n  \begin{align*}\r\n    f(T) & = 331.3 \sqrt{1 + \frac{T}{273.15}} \\\r\n f'(T) & = 331.3 \left(\frac{1}{2}\right) \left(\frac{1}{\sqrt{1 + \frac{T}{273.15}}}\right) \frac{1}{273.15} \r\n\end{align*}\r\n  \begin{align*}\r\n    \mbox{ so at } T = 0^o \mbox{ C}, ~~~ f(0) & = 331.3 & f'(0) & =\r\n    \frac{331.3}{(2)(273.15)} \approx 0.606\r\n  \end{align*}\r\n  Thus the speed of sound for air temperatures around $0^o$ C is\r\n$$f(T) \approx 0.606 (T - 0) + 331.3, \mbox{ or } f(T) \approx 0.606 T + 331.3 \mbox{ m/s}$$\r\n\r\n  \end{Solution}\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    Find the equations of the tangent lines to the graph of $y =\r\n    \sin(x)$ at $x=0$, and at $x=\pi/3$. \r\n    \begin{enumerate}\r\n    \item Use each tangent line to approximate $\sin(\pi/6)$. \r\n    \item Would you expect these results to be equally accurate, given\r\n      that they are taken at equal distances on either side of\r\n      $\pi/6$?  If there is a difference in accuracy, can you explain\r\n      it?\r\n    \end{enumerate}\r\n \r\n  \end{Question}\r\n  \begin{Solution}\r\n    \begin{enumerate}[(a)]\r\n  \item At $x=0$, the tangent line is defined by \r\n$f(0) = 0$ and $f'(0) = 1$, so \r\n$$y = 1(x-0) + 0 = x$$\r\nis the tangent line to $f(x)$ at $x=0$.\r\n\r\n  At $\ds x=\frac{\pi}{3}$, the tangent line is defined by \r\n$f(\pi/3) = \frac{\sqrt{3}}{2}$ and $f'(\pi/3) = \frac{1}{2}$, so \r\n$$y = \frac{1}{2}\left(x-\frac{\pi}{3}\right) + \frac{\sqrt{3}}{2}$$\r\nis the tangent line to $f(x)$ at $\ds x=\frac{\pi}{3}$.\r\n\r\nThe estimates of each tangent line at the point $x=\pi/6$ would be\r\n\begin{itemize}\r\n\item Based on $x=0$ tangent line, $f(x) \approx x$, so $f(\pi/6) \approx\r\n  \pi/6 \approx 0.5236 $.\r\n\item Based on $x=\pi/3$ tangent line,\r\n$\ds f(x) \approx  \frac{1}{2}\left(x-\frac{\pi}{3}\right) + \frac{\sqrt{3}}{2}$, \\\r\nso \r\n$\ds f(\pi/6) \approx  \frac{1}{2}\left(\pi/6-\frac{\pi}{3}\right) + \frac{\sqrt{3}}{2} \approx 0.6042 $\r\n\item The {\bf actual} value of $f(x) = \sin(x)$ at $x= \pi/6$ is\r\n  $\sin(\pi/6) = 0.5$. \r\n\end{itemize}\r\n\item From these calculations, the estimate obtained by using the\r\n  tangent line based at $x=0$ gives the more accurate prediction for\r\n  $f(x)$ at $x=\pi/6$.\r\n\r\nA sketch might help explain these results.\r\n\r\n\includegraphics[width=0.8\linewidth]{graphics/Week02_TangentLines/Sine_tangents}\r\n\r\nIn the interval $x \in [0, \pi/6]$, the function stays very close to\r\nlinear (i.e. 
does not curve much), which means that the tangent\r\nline stays a good approximation for a relatively long time.\r\n\r\nThe function is most curved/least linear around its peak, so the\r\nlinear approximation around $x = \\pi/3$ is less accurate even over the\r\nsame $\\Delta x$.\r\n    \\end{enumerate}\r\n\r\n  \\end{Solution}\r\n\r\n\r\n%   % ***************************************************************\r\n% \\item\r\n%   \\begin{Question}\r\n%     Find the {\\bf quadratic} polynomial $g(x) = ax^2 + bx + c$ which\r\n%     best fits the function $f(x) = e^x$ at $x=0$, in the sense that\r\n% $$ g(0) = f(0), g'(0) = f'(0), \\mbox{ and  } g''(0) = f''(0)$$\r\n% \\end{Question}\r\n% \\begin{Solution}\r\n% \\begin{align*}\r\n% f(x) & = e^x \\\\\r\n% \\mbox{ so } f'(x) = e^x \\\\\r\n% \\mbox{ and } f''(x) = e^x \\\\\r\n% \\mbox{Evaluating at $x=0$,} \\\\\r\n% f(0) = f'(0) = f''(0) & = 1 \\\\\r\n%   \\end{align*}\r\n% Comparing with the derivatives of the quadratic,\r\n% \\begin{align*}\r\n% g(x) & = ax^2 + bx + c \\\\\r\n% \\mbox{ so } g'(x) = 2ax + b \\\\\r\n% \\mbox{ and } g''(x) = 2a \\\\\r\n% \\mbox{Evaluating at $x=0$,} \\\\\r\n% g(0) = c, g'(0) = b, & \\mbox{ and } g''(0) = 2a\r\n%   \\end{align*}\r\n%   For $g(x)$ to fit the shape of $f(x)$ near $x=0$, we would then pick\r\n%   $c = 1$, $b = 1$ and $2a= 1$ or $a= 0.5$, so\r\n% $$ g(x) = 0.5 x^2 + x + 1$$\r\n% would be the best fit quadratic to $f(x) = e^x$ near $x=0$.\r\n\r\n% Here is a graph of the two functions, $f(x) = e^x$ in black, $g(x) = 0.5 x^2 + x + 1$ in red.  \r\n% Notice how similar they look near their intersection.\r\n\r\n% \\includegraphics[width=0.8\\linewidth]{graphics/Week02_TangentLines/Exp_quadratic_fit}\r\n    \r\n% \\end{Solution}\r\n\r\n% ***************************************************************\r\n\\item\r\n  \\begin{Question}\r\n    Consider the graphs of $y = \\sin(x)$ (regular sine graph), and $y\r\n    = k e^{-x}$ (exponential decay, but scaled vertically by $k$).\r\n\r\n    If $k \\ge 1$, the two graphs will intersect.  What is the smallest\r\n    value of $k$ for which two graphs will be {\\em tangent} at that\r\n    intersection point?\r\n\r\n    \r\n  \\end{Question}\r\n  \\begin{Solution}\r\nLet $f(x) = \\sin(x)$ and $g(x) = k e^{-x}$.  They intersect\r\n  when $f(x) = g(x)$, and they are tangent at that intersection if\r\n  $f'(x) = g'(x)$ as well.  Thus we must have\r\n  \\begin{align*}\r\n    \\sin(x) & = k e^{-x} & \\mbox{ and } \\cos(x) & = -k e^{-x}\r\n  \\end{align*}\r\n  We can't solve either equation on its own, but we can divide one by\r\n  the other:\r\n  \\begin{align*}\r\n    \\frac{\\sin(x)}{\\cos(x)} & = \\frac{k e^{-x}}{-ke^{-x}} \\\\\r\n    \\tan(x) & = -1 \\\\\r\n    x & = \\frac{3\\pi }{4}, \\frac{7 \\pi}{4}, \\ldots \\\\\r\n  \\end{align*}\r\n  Since we only need one value of $k$, we try the first value, $x = 3\r\n  \\pi/4$.\r\n\\begin{align*}\r\n\\sin(3\\pi/4) & = k e^{-3\\pi/4} \\\\\r\n\\frac{1}{\\sqrt{2}} e^{3\\pi/4} & = k  \\\\\r\nk & \\approx 7.46 \r\n\\end{align*}\r\nWe confirm our answer by verifying both the values and derivatives are\r\nequal at $x = 3\\pi/4$,\r\n  \\begin{align*}\r\n    \\sin(3 \\pi / 4) & = 7.46 e^{-3\\pi/4} \\approx 0.7071  \\mbox{ (same $y$: intersection)}\\\\\r\n \\mbox{ and } \\cos(3\\pi/4) & = -7.46 e^{-3\\pi/4} \\approx -0.7071  \\mbox{ (same derivative)}\r\n  \\end{align*}\r\n\r\n  The actual point of tangency is at $\\ds (x, y) = \\left(\\frac{3\\pi}{4},\r\n    \\frac{1}{\\sqrt{2}}\\right)$.  
A sketch is shown below.\r\n  \r\n  \includegraphics[width=0.8\linewidth]{graphics/Week02_TangentLines/Sine_exp_tangency} \r\n    \r\n  \end{Solution}\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    \begin{enumerate}[(a)]\r\n    \item Show that $1 + kx$ is the local linearization of $(1+x)^k$ near $x=0$.\r\n    \item Someone claims that the square root of 1.1 is about 1.05.\r\n      Without using a calculator, is this estimate about right, and\r\n      how can you decide using part (a)?\r\n    \end{enumerate}\r\n    \r\n  \end{Question}\r\n  \begin{Solution}\r\n  \begin{enumerate}[(a)]\r\n    \item\begin{align*}\r\n       f(x) & = (1 + x)^k & f'(x) & = k (1+x)^{k-1}   \\\r\n\mbox{ so at } x = 0, ~~~ f(0) & = 1^k = 1 & f'(0) &= k (1^{k-1}) = k\\\r\n  \end{align*}\r\nso the tangent line at $x=0$ will be \r\n$$y = k (x-0) + 1 \mbox{ or } y = 1 + kx$$\r\n\item As an estimate for the square root of $1.1$, we could note that\r\n  $\sqrt{1.1} = (1 + 0.1)^{1/2}$. This matches exactly the form of\r\n  $f(0.1)$ if we choose $k = \frac{1}{2}$. From our linearization\r\n  above,\r\n$$f(0.1) \approx 1 + \frac{1}{2}(0.1) = 1.05$$\r\nso yes, a good approximation for $\sqrt{1.1}$ is 1.05.  (A calculator\r\ngives the value of $\approx 1.0488$.)\r\n  \end{enumerate}\r\n  \end{Solution}\r\n\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    \begin{enumerate}\r\n    \item Find the local linearization of $e^x$ near $x=0$. \r\n    \item Square your answer to part (a) to find an approximation to\r\n      $e^{2x}$.\r\n    \item Compare your answer in part (b) to the actual linearization\r\n      of $e^{2x}$ near $x=0$, and discuss which is more accurate.\r\n    \end{enumerate}\r\n  \end{Question}\r\n  \begin{Solution}\r\n    \begin{enumerate}[(a)]\r\n    \item $e^x$ has a tangent line/local linearization near $x=0$ of\r\n      $y = x + 1$ (slope 1, point (0, 1)).\r\n\r\n    \item Multiplying this approximation by itself, we get\r\n      $(e^x)(e^x)$ or $e^{2x} \approx (x+1)(x+1) = x^2 + 2x + 1$\r\n\r\n    \item To compare with the actual linearization of $g(x) = e^{2x}$,\r\n      we find its derivative and value at $x=0$,\r\n  \begin{align*}\r\n    g(x) & = e^{2x} & g(0) & = 1 \\\r\ng'(x) & = 2 e^{2x} & g'(0) & = 2 \\\r\n  \end{align*}\r\n  so a linearization of $g(x) = e^{2x}$ near $x=0$ is $y = 2 (x-0) + 1$ or \r\n$$ y = 2x + 1$$\r\nNote that this is the same as the approximation we obtained before,\r\nexcept that our product version had an additional term, $x^2$.\r\n\r\nThese approximations give the same straight-line estimate of the\r\nfunction, but I would expect the first (multiplication) version to be\r\nmore accurate because it contains more information (the squared term\r\nthat the pure linear approximation was missing).\r\n\r\nWe will see more of this idea in Taylor polynomials and Taylor series.\r\n    \end{enumerate}\r\n  \end{Solution}\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    \begin{enumerate}\r\n    \item Show that $1-x$ is the local linearization of $\ds\r\n      \frac{1}{1+x}$ near $x=0$.\r\n    \item From your answer to part (a), show that near $x=0$, \r\n      $$\frac{1}{1+x^2} \approx 1-x^2.$$\r\n    \item Without differentiating, what do you think the derivative of\r\n      $\ds \frac{1}{1+x^2}$ is at $x=0$?\r\n    \end{enumerate}\r\n\r\n    \r\n  
\\end{Question}\r\n  \\begin{Solution}\r\n    \r\n  \\begin{enumerate}[(a)]\r\n  \\item Let $f(x) = 1/(1+x)$.  Then $f'(x) = \\frac{-1}{(1+x)^2}$. \\\\\r\n    At $x=0$, $f(0) = 1$ and $f'(0) = -1$. \\\\\r\n    So near $x=0$, $f(x) \\approx -1 x + 1 = -x + 1$ \\\\\r\n  \\item For small $x$ values (i.e. $x$ near zero), we can approximate\r\n    $1/(1+x)$ with $1-x$.  Replace the variable $x$ with $y$ (because\r\n    the name doesn't matter),\r\n$$1/(1+y)  \\approx 1-y$$\r\nIf we choose $y$ small but equal to $x^2$, then \r\n$$\\frac{1}{1 + x^2} \\approx 1-x^2$$\r\n\\item The linearization of $1/(1+x^2)$ is the linear part of $1 -\r\n  x^2$, or just $1$.  Since the derivative at $x=0$ is the coefficient\r\n  for $x$ in the linear part, this means $\\ds \\ddx ~\\frac{1}{1+x^2}$ at $x=0$\r\n  must equal zero.\r\n\\end{enumerate}\r\n  \\end{Solution}\r\n  % ***************************************************************\r\n% \\item\r\n%   \\begin{Question}\r\n%     \\begin{enumerate}[(a)]\r\n%     \\item Find the local linearization of\r\n\r\n%       \\[f(x) = \\frac{1}{1 + 2 x}\\] near \\(x = 0\\).\r\n%     \\item Using your answer to (a), what quadratic function would you\r\n%       expect to approximate \\(\\displaystyle g(x) = \\frac{1}{1 + 2\r\n%         x^2}\\)?\r\n%       \\par\r\n%     \\item Using your answer to (b), what would you expect the\r\n%       derivative of \\(\\frac{1}{1 + 2 x^2}\\) to be even without doing\r\n%       any differentiation?\r\n%     \\end{enumerate}\r\n%     \\par \\end{Question}\r\n%   \\begin{Solution}\r\n \r\n%     \\begin{enumerate}[(a)]\r\n%     \\item We know \\(f(0) = 1\\) and \\(f'(0) = -\\frac{2}{(1 + 2(0))^2} =\r\n%       -2\\), so the local linearization is \\(f(x)\\approx 1 - 2 x\\).\r\n%       \\par\r\n%     \\item Next, \\(g(x) = f(x^2)\\), so we expect that \\(g(x) \\approx 1\r\n%       - 2 x^2\\).\r\n%       \\par\r\n%     \\item Noting that the approximation we found in (b) is downward\r\n%       opening parabola with vertex on the \\(y\\)-axis, we expect that\r\n%       the derivative of \\(g(x)\\) at \\(x=0\\) will be zero.\r\n%     \\end{enumerate}\r\n%     \\par\\end{Solution}\r\n\r\n  % ***************************************************************\r\n\\end{multicols}\r\n\\hrulefill\r\n\r\n\\subsection*{MATLAB Graphing}\r\n\\item \r\n  \\begin{Question}\r\nCreate a smooth-looking graph of the function $y = \\cos(x)$, $x$\r\n  in radians, on the interval $[-\\pi, 5\\pi]$.\r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \r\n\\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02GraphCos.m}{W02GraphCos.m}\r\n\r\n\\lstinputlisting{MATLAB/W02GraphCos.m}\r\n\r\nNote that using \\verb@x = -pi:(5*pi)@ does {\\em not} create a smooth\r\ngraph, because too few points are used\r\n($x = -3.14, -2.14, -1.14, \\ldots$, etc.).  MATLAB plots {\\em points},\r\nand {\\em not functions}, and if you use too few points in a graph, it\r\nlooks awful. 
That is why defining your graphing \verb@x@ coordinates\r\nusing \texttt{linspace}, which by default uses 100 points, is\r\nrecommended, as it is usually enough points for a smooth-looking graph.\r\n  \end{Solution}\r\n\r\n% ******************************\r\n\item \r\n  \begin{Question}\r\nCreate a smooth-looking graph of the function $y = e^{-x^2}$,\r\n  over the interval $[-3, 3]$.\r\n  \end{Question}\r\n\begin{Solution}\r\n\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02GraphBell.m}{W02GraphBell.m}\r\n\r\n\lstinputlisting{MATLAB/W02GraphBell.m}\r\n\end{Solution}\r\n\r\n\item \r\n  \begin{Question}\r\n    It is common in scientific plots to draw functions as lines, and\r\n    plot data as distinct points.\r\n\r\n    The following data give the mean distance from the Sun and the\r\n    orbital period for each planet (and Pluto):\r\n\r\n\begin{tabular}{cccccccccc}\r\nPlanet/Object & Mercury& \tVenus& \tEarth& \tMars& \tJupiter& \tSaturn& \tUranus& \tNeptune& \tPluto \\\r\nMean Distance (AU), $d$& 0.39& \t0.72& \t1& \t1.52& \t5.20& \t9.54& \t19.18& \t30.06& \t39.44 \\\r\nPeriod (Earth years), $T$  & 0.24&\t0.62&\t1&\t1.88&\t11.86&\t29.46&\t84.01&\t164.8&\t247.7 \\\r\n\end{tabular}\r\n\r\nThe best-fit curve to this data is given by the formula $T = d^{3/2}$.\r\n\r\nPlot both the raw data (as points) and best fit curve (as a line) on a\r\nsingle graph. The best-fit graph should look smooth.\r\n\end{Question}\r\n\r\n\begin{Solution}\r\n\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02GraphPlanets.m}{W02GraphPlanets.m}\r\n\r\n\lstinputlisting{MATLAB/W02GraphPlanets.m}\r\n\end{Solution}\r\n\r\n\hrulefill\r\n\subsection*{Newton's Method}\r\n\begin{multicols}{2}\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    Consider the equation $e^x + x = 2$.  This equation has a solution\r\n    near $x=0$.  By replacing the left side of the equation by its\r\n    linearization near $x=0$, find an approximate value for the\r\n    solution. 
\r\n\r\n    (In other words, perform one step of Newton's method, starting at\r\n    $x=0$, by hand.)\r\n    \r\n  \end{Question}\r\n  \begin{Solution}\r\n    Our equation is \r\n\begin{align*}\r\n\underbrace{e^x + x}_{f(x)} = 2\r\n\end{align*}\r\nTo find the linearization of $f(x)$ near $x=0$, we need $f$ and its\r\nderivative $f'$, both evaluated at $x=0$.\r\n\begin{align*}\r\n  f(x) & = e^x + x   & \mbox{ so } f(0) = e^0 + 0 = 1 \\\r\n  f'(x) & = e^x + 1   & \mbox{ so } f'(0) = e^0 + 1 = 2 \\\r\n\end{align*}\r\nThe linearization is then \r\n$$ e^x + x \approx \underbrace{(2)}_{f'(0)}(x-0) + \underbrace{1}_{f(0)} = 2x + 1$$\r\nWe now replace the original (unsolvable) equation\r\n\begin{align*}\r\n  e^x + x & = 2 \\\r\n\mbox{ with the simpler approximation: ~~~~~~} 2x + 1 & = 2 \\\r\n\end{align*}\r\nSolving this second version is straightforward, yielding $x=0.5$.\r\nThis is actually a fair approximation to the solution, since\r\n$e^{0.5} + 0.5 \approx 2.149$ which is close to the equation RHS\r\nvalue, 2.\r\n\r\n(If we continued our linearizations and their approximations, we would\r\nget values even closer to the real solution, which is (to 4 decimal\r\nplaces) 0.4429.)\r\n\r\n  \end{Solution}\r\n\r\n  % ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    Use Newton's Method with the equation \(x^2=2\) and initial\r\n    value \(x_0=3\) to calculate \(x_1 , x_2 , x_3\) (the next three\r\n    solution estimates generated by Newton's method). Do the calculations by hand. \r\n  \end{Question}\r\n  \begin{Solution}\r\n \r\n\par \r\nMoving everything to the left side, we get $$ \underbrace{x^2 - 2}_{f(x)}  = 0$$\r\nDifferentiating, we have \(f'(x)= 2x\).  Therefore: \par\r\n\(\ds x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}= x_0 - \frac{x_0^2 - 2}{2\r\n  x_0} = 3 - \frac{3^2 - 2}{2 \times 3} = 1.83333\) \par\r\n\(\ds x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}= 1.83333 - \frac{1.83333^2 -\r\n  2}{2 \times 1.83333} \approx 1.46212\) \par\r\n\(\ds x_3 = x_2 - \frac{f(x_2)}{f'(x_2)}= 1.46212 - \frac{1.46212^2 -\r\n  2}{2 \times 1.46212} \approx 1.415\) \par\r\n\par\r\nThis sequence provides successive approximations to the exact\r\nsolution, which equals \par\r\n\(\sqrt 2 \approx 1.4142\)\r\n\par\end{Solution}\r\n% ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    Using MATLAB, write a script that applies Newton's Method to solve\r\n    the equation \(x^3=5\).  Use 10 iterations of Newton's method. 
\\\r\n    Compute the values of $x^3$ when you are done to confirm that it\r\n    is close to 5.\r\n  \end{Question}\r\n\r\n    \begin{Solution}\r\nMoving everything to the left side, we get\r\n$$ \underbrace{x^3 - 5}_{f(x)} = 0$$ Differentiating, we have\r\n\(f'(x)= 3x^2\).\r\n\r\nSolution script:  \\\r\n\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02Newton1.m}{W02Newton1.m}\r\n\r\n\lstinputlisting[firstline=4]{MATLAB/W02Newton1.m}\r\n\r\n\r\n\r\n\par\end{Solution}\r\n% ***************************************************************\r\n\item\r\n  \begin{Question}\r\n    Use Newton's Method to approximate \(4^{\frac{1}{3}}\) and compare\r\n    with the value obtained from a calculator.\r\n\r\n    (Hint: write out a simple equation that \(4^{\frac{1}{3}}\) would\r\n    satisfy, and use Newton's method, with MATLAB, to solve that.)\r\n    \par\r\n    \par \end{Question}\r\n  \begin{Solution}\r\n \r\n\par \r\nWe need to find an approximation to \(4^{\frac{1}{3}}\) using Newton's\r\nMethod. An equation that \(4^{\frac{1}{3}}\) would satisfy is\r\n$$x = \sqrt[3]{4},$$ but that form is just the answer restated.\r\nIf we cube both sides, though, we get\r\n$$x^3 = 4,$$\r\nwhich is a reasonable equation to solve. \r\n\r\nMoving everything to the left side, we get the equivalent equation \r\n$$x^3 - 4 = 0,$$ which means for Newton's method we will use\r\n\(f(x) = x^3-4\), and so \(f'(x) = 3x^2\). \r\n\r\nSolution script:  \\\r\n\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02NewtonCubeRoot.m}{W02NewtonCubeRoot.m}\r\n\r\n\lstinputlisting[firstline=4]{MATLAB/W02NewtonCubeRoot.m}\r\n\par\end{Solution}\r\n\r\n\r\n% ***************************************************************\r\n\item\r\n  \begin{Question}\r\nConsider the equation $ 10 x e^{-2x} = 0.4$.\r\n\begin{enumerate}[(a)]\r\n\item On a single set of axes, draw both the graphs $y = 10 x e^{-2x}$\r\n  and $y = 0.4$.  The $x$ locations of the intersections between these\r\n  two graphs are the solutions.\r\n\item Continue your MATLAB script so that you use Newton's Method to\r\n  find {\bf both} solutions to $10 x e^{-2x} = 0.4$.\r\n\item Confirm both solutions by subbing them into the original\r\n  equation and verifying that the left and right hand sides of the\r\n  equation are equal.\r\n\end{enumerate}\r\n  \end{Question}\r\n\r\n  \begin{Solution}\r\n(a) and (b)  Solution script:  \\\r\n\href{http://www.mast.queensu.ca/~apsc171/MNTCP01/PracticeProblems/MATLAB/W02NewtonXExp.m}{W02NewtonXExp.m}\r\n\r\n\lstinputlisting[firstline=4,lastline=33]{MATLAB/W02NewtonXExp.m}\r\n\begin{enumerate}\r\n\item[(c)]  The two solutions found by MATLAB were $x$ = 0.0436 and 1.9411. 
\\\\\r\n  \\begin{align*}\r\n   x  = 0.0436: & \\\\    10 (0.0436) e^{(-2)(0.0436)} & = 0.3996 \\mbox{ (very close to 0.4)} \\\\\r\n   x  = 1.9411: &  \\\\  10 (1.9411) e^{(-2)(1.9411} & = 0.4000 \\mbox{ (approx equal to 0.4)} \r\n  \\end{align*}\r\n\\end{enumerate}\r\n\r\n\\par\\end{Solution}\r\n\\end{multicols}\r\n\r\n\\end{enumerate}\r\n\\end{document}\r\n", "meta": {"hexsha": "ce183791fcbcc6730c92c89e973bc143c9f7e852", "size": 28012, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PracticeProblems/Week02.tex", "max_stars_repo_name": "aableson/MNTCP01", "max_stars_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-11-27T16:10:35.000Z", "max_stars_repo_stars_event_max_datetime": "2015-11-27T16:10:35.000Z", "max_issues_repo_path": "PracticeProblems/Week02.tex", "max_issues_repo_name": "aableson/MNTCP01", "max_issues_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PracticeProblems/Week02.tex", "max_forks_repo_name": "aableson/MNTCP01", "max_forks_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.449197861, "max_line_length": 116, "alphanum_fraction": 0.5634013994, "num_tokens": 9558, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.5889257230289439}}
{"text": "\\section{haha}\n\n\t\\lem{}{Let two rays $ h, i $ from point $ P $ and a fixed line $ l $. $ H_2 $ on $ h $ and $ E_2 $ be another point such that $ E_2\\in i $ and $ H_2E_2\\parallel l$. $ H_1 $ be another point on the plain. $ Z $ is a point on $ H_1H_2 $ such that $ E_2Z\\parallel PH_1 $. Prove that as $ H_2 $ moves on $ h $, $ Z $ moves on a line parallel to $ h $}\n\t\n\t\\proof{Let $ N\\in PH_1 $ such that $ ZN\\parallel h $. We will prove that as $ H_2 $ moves, $ \\dfrac{PN}{NZ} $ stays constant.\n\t\t\n\t\t\\figdf{.5}{ISL2009G6_lemma}{}\n\t\n\tIf we take $ M = E_2Z\\cap h $, we see that,\n\t\\[\\frac{PN}{NH_1}=\\frac{H_2Z}{ZH_1}=\\frac{H_2M}{MP}\\]\n\twhich is constant since $ PH_1 $ and $ l $ is constant. \\hfill \\qed}\n\n\n\t\\bigskip\n\t\n\t\n\tNow looking at our main problem, we see that if we move $ C $ and $ D $ on $ H_1A $ and $ H_1B $, keeping $ CD $ parallel to a fixed line, we see that if for some $ C $ the problem is true, then it's true for all other $ C $. Because, moving $ C $, moves $ H_2 $, $ E_2 $ along two fixed lines going though $ P $, and the perpendicular from $ E_2 $ to $ AB $ is parallel to $ PH_1 $. So the intersection of the perpendicular from $ E_2 $ to $ AB $ with $ H_1, H_2 $ lies on a line perpendicular to $ CD $. \n\t\n\t\\bigskip\n\t\n\t\n\tSo we are done if we can show the result for $ C=A $. \n\t\n\t\n\t\n\t\t\n\t", "meta": {"hexsha": "b66404229cd8e2fc5e943f68db89f2def3d61e9b", "size": 1307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dump/ISL2009G6.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "dump/ISL2009G6.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dump/ISL2009G6.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 48.4074074074, "max_line_length": 507, "alphanum_fraction": 0.6205049732, "num_tokens": 471, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5888283425146813}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{amsmath}\n\\usepackage{gensymb}\n\\usepackage[vmargin=1in,hmargin=1.25in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage{microtype}\n\\usepackage{lmodern}\n\n\\title{ECE/CSE 576, Spring 2019 Homework 3: Content-Based Image Retrieval}\n\\author{Philip Pham}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section{Algorithm}\n\nFor each image, we performed $k$-means clustering with $k = 8$ for 32\niterations.\\footnote{A random seed of 2020 was used for reproducibility.}\nConnected components was then applied to segment the image into contiguous\nregions. Only regions that made up at least $1/256$ of the image were kept.\n\nFor each region, the following features were computed: (1) the proportion of the\nimage occupied the region, (2) the average red, green, and blue color levels\nscaled to lie in range $[0,1]$, (3) the centroid, (4) the bounding box, and (5)\na normalized gray-level co-occurrence matrix (GLCM).\n\nFor the centroid and bounding box, coordinates were scaled to lie in range\n$[0, 1]$. The gray-levels were binned into 8 buckets each of size 32. The\nneighboring pixel of $(r, c)$ was the diagonal pixel at\n$\\left(r^\\prime = r + 1, c^\\prime = c + 1\\right)$. The GLCM was initialized with\nentries of $0.1$ which corresponds to putting a Dirichlet prior on\n$\\operatorname{GrayLevel}\\left(r^\\prime, c^\\prime\\right) \\mid\n\\operatorname{GrayLevel}\\left(r, c\\right)$ with $\\alpha = 0.1$. After counting\nco-occurrences, the matrix was normalized to define a proper joint probability\ndistribution over the gray levels of a pixel and its neighbor.\n\n\\subsection{Distance 1}\n\nLet $\\mathbf{q}$ be the query feature vector and $\\mathbf{p}$ be feature vector\nfor another image in the database. Distance 1 was simply the squared Euclidean\ndistance over the 5 classes of features:\n\\begin{equation}\n  d_1\\left(\\mathbf{q}, \\mathbf{p}\\right) = \\left\\lVert \\mathbf{q} - \\mathbf{p}\\right\\rVert_2^2.\n  \\label{eqn:distance_1}\n\\end{equation}\nFor the vectors in Equation \\ref{eqn:distance_1}, the first coordinate is the\nvolume, the colors are the next 3 entries, the centroid are the next 2 entries\nin the vector, the top, right, bottom, and left of the bounding box make the\nnext 4 entries, and the normalized GLCM are the last 64 entries for a total of\n$p = 74$ features per a feature vector.\n\n\\subsection{Distance 2}\n\nThe second distance function is more complex and uses several derived features.\n\\begin{equation}\n  d_2\\left(\\mathbf{q}, \\mathbf{p}\\right) = V(\\mathbf{q})\\left[\\left(V(\\mathbf{q}) - V(\\mathbf{p})\\right)^2 +\n  d_{2,\\text{Color}}\\left(\\mathbf{p}, \\mathbf{q}\\right) +\n  d_{2,\\text{Position}}\\left(\\mathbf{p}, \\mathbf{q}\\right) +\n  d_{2,\\text{GLCM}}\\left(\\mathbf{p}, \\mathbf{q}\\right)\\right],\n  \\label{eqn:distance_2}\n\\end{equation}\nwhere $V: \\mathcal{R} \\rightarrow [0, 1]$ is a function mapping a region into\nthe percentage of the image occupied. Suppose the query vector has region\nvectors $\\left\\{\\mathbf{q}_i\\right\\}_{i=1}^{N}$ and another image has region\nvectors $\\left\\{\\mathbf{p}_j\\right\\}_{j=1}^{M}$. 
The total distance to the image\ngiven a metric $d$ is computed as\n\\begin{equation}\n  D_d\\left(\\left\\{\\mathbf{q}_i\\right\\}_{i=1}^{N}, \\left\\{\\mathbf{p}_j\\right\\}_{j=1}^{M}\\right)\n  = \\frac{1}{N}\\sum_{i=1}^N\\min_{j}\\left\\{d\\left(\\mathbf{q}_i, \\mathbf{p}_j\\right)\\right\\},\n  \\label{eqn:total_distance}\n\\end{equation}\nso the leading $V\\left(\\mathbf{q}\\right)$ factor in Equation\n\\ref{eqn:distance_2} assigns more importance to larger regions.\n\nLet $\\operatorname{rgb}: \\mathcal{R}\\rightarrow \\left[0, 1\\right]^3$ map a\nregion to the average color levels. Then, we define\n\\begin{equation}\n  \\begin{split}\n  d_{2,\\text{Color}}\\left(\\mathbf{q}, \\mathbf{p}\\right) =\n  \\left(\\left\\lVert \\operatorname{rgb}\\left(\\mathbf{q}\\right)\\right\\rVert_2 -\n    \\left\\lVert \\operatorname{rgb}\\left(\\mathbf{p}\\right)\\right\\rVert_2\\right)^2\n  &+ \\left\\lVert \\operatorname{rgb}\\left(\\mathbf{q}\\right) -\n    \\operatorname{rgb}\\left(\\mathbf{p}\\right)\n  \\right\\rVert_1 \\\\ &+\n  \\left(1 - \\frac{\\operatorname{rgb}\\left(\\mathbf{q}\\right)\\cdot\\operatorname{rgb}\\left(\\mathbf{p}\\right)}\n    {\\left\\lVert \\operatorname{rgb}\\left(\\mathbf{q}\\right)\\right\\rVert_2\\left\\lVert \\operatorname{rgb}\\left(\\mathbf{p}\\right)\\right\\rVert_2}\\right),\n\\end{split}\n  \\label{eqn:distance_2_color}  \n\\end{equation}\nwhere the first term penalizes differing magnitude, the second term is the $l_1$\ndistance, and the third term is the cosine distance.\n\nLet $\\operatorname{Row}: \\mathcal{R} \\rightarrow \\left[0, 1\\right]$ and\n$\\operatorname{Col}: \\mathcal{R} \\rightarrow \\left[0, 1\\right]$ be the relative\nrow and columns for a region. Given two region bounding boxes, we can compute\nthe \\href{https://en.wikipedia.org/wiki/Jaccard_index}{Jaccard index} also know\nas \\emph{intersection over union}. Denote the map\n$J: \\mathcal{R} \\times \\mathcal{R} \\rightarrow [0,1]$. Based on the centroid and\nbounding box, we define\n\\begin{equation}\n  \\begin{split}\n  d_{2,\\text{Position}}\\left(\\mathbf{q}, \\mathbf{p}\\right)\n  = \\frac{1}{2}\\left(\n    \\operatorname{Row}\\left(\\mathbf{q}\\right) - \\operatorname{Row}\\left(\\mathbf{p}\\right)\n    \\right)^2 &+\n    \\frac{1}{2}\\left(\n      \\operatorname{Col}\\left(\\mathbf{q}\\right) - \\operatorname{Col}\\left(\\mathbf{p}\\right)\n    \\right)^2 \\\\\n    &+ \\left(1 - J\\left(\\mathbf{p}, \\mathbf{q}\\right)\\right)^2.\n  \\end{split}\n  \\label{eqn:distance_2_position}\n\\end{equation}\nThe first two terms penalize regions located in the different areas by their\ncentroids. 
The last term uses the region bounding boxes and penalizes regions\nwithout a lot of overlap, whether because they are located in different parts of\nthe images or because their sizes differ considerably.\n\nFrom the normalized $8 \\times 8$ GLCM, $N_{\\text{GLCM}}\\left(\\mathbf{q}\\right)$,\nthe auxiliary features \\emph{Contrast}, \\emph{Energy}, and \\emph{Entropy} can\nbe defined:\n\\begin{align}\n  \\operatorname{Contrast}\\left(\n  \\mathbf{q}\n  \\right) &= \\sum_{i=1}^8\\sum_{j=1}^8\\left(i - j\\right)^2\n            N_{\\text{GLCM}}\\left(\\mathbf{q}\\right)(i, j) \\\\\n  \\operatorname{Energy}\\left(\n  \\mathbf{q}\n  \\right) &= \\sum_{i=1}^8\\sum_{j=1}^8\n            \\left[N_{\\text{GLCM}}\\left(\\mathbf{q}\\right)(i, j)\\right]^2 \\\\\n  \\operatorname{Entropy}\\left(\n  \\mathbf{q}\n  \\right) &= -\\sum_{i=1}^8\\sum_{j=1}^8\n            N_{\\text{GLCM}}\\left(\\mathbf{q}\\right)(i, j)\\log_2N_{\\text{GLCM}}\\left(\\mathbf{q}\\right)(i, j).\n\\end{align}\n\nBecause of differing scales and levels of importance, we defined\n\\begin{equation}\n  \\begin{split}\n  d_{2,\\text{GLCM}}\\left(\\mathbf{q}, \\mathbf{p}\\right)\n  &= 2\\left[\n    \\operatorname{Contrast}\\left(\\mathbf{q}\\right) -\n    \\operatorname{Contrast}\\left(\\mathbf{p}\\right)\n  \\right]^2 \\\\\n  &+ \\frac{1}{2}\\left[\n    \\operatorname{Energy}\\left(\\mathbf{q}\\right) -\n    \\operatorname{Energy}\\left(\\mathbf{p}\\right)\n  \\right]^2 \\\\\n  &+ \\frac{1}{2}\\left[\n    \\operatorname{Entropy}\\left(\\mathbf{q}\\right) -\n    \\operatorname{Entropy}\\left(\\mathbf{p}\\right)\n  \\right]^2\n  \\end{split}\n  \\label{eqn:distance_2_glcm}\n\\end{equation}\nas a weighted sum of squared differences.\n\n\\section{Discussion}\n\nBoth distances have the property that if $\\mathbf{p} = \\mathbf{q}$,\n$d\\left(\\mathbf{p}, \\mathbf{q}\\right) = 0$, so if the image is in the\ndatabase, the image itself will always be returned as the top result.\n\nColor features were found to be very important, which explains why a simple\nmetric like $d_1$ can often be successful. In more complex images, weighting by\nthe region size, taking into account position with the bounding box, and using\ncontrast for texture analysis improved retrieval significantly. $d_2$ almost\nalways outperformed $d_1$ by taking these features into account.\n\nResults of empirical validation are shown in the next section.\n\n\\section{Empirical Results}\n\n\\subsection{\\texttt{beach}}\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{beach_1_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{beach_1_distance2.png}\n  \n  Query results for \\texttt{beach\\_1.jpg}.\n\\end{center}\n\nThe beach queries proved challenging. Euclidean distance fails completely, while\n$d_2$ manages to return only $2/4$ beach images. Still, by returning an additional beach\nimage in the top row, $d_2$ does better.\n\n\\subsection{\\texttt{boat}}\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{boat_4_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{boat_4_distance2.png}\n  \n  Query results for \\texttt{boat\\_4.jpg}.\n\\end{center}\n\n$d_1$ and $d_2$ both return other boat images for their top two results. One\nmight say that $d_2$ does slightly better since it has another boat result in\nthe top row. 
The boat in \\texttt{boat\\_1} is too different and not matched.\n\n\\subsection{\\texttt{cherry}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{cherry_3_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{cherry_3_distance2.png}\n  \n  Query results for \\texttt{cherry\\_3.jpg}.\n\\end{center}\n\nThese queries were among the easiest. $d_1$ is near-perfect barely missing\n$\\texttt{cherry\\_5}$, while $d_2$ retrieves all the relevant images as its top results.\n\n\\subsection{\\texttt{crater}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{crater_3_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{crater_3_distance2.png}\n  \n  Query results for \\texttt{crater\\_3.jpg}.  \n\\end{center}\n\n$d_1$ completely fails here and doesn't return any relevant images in the top\nrow. $d_2$ does respectably retrieving $2/4$ crater images with an additional\ncrater image in the top row.\n\n\\subsection{\\texttt{pond}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{pond_2_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{pond_2_distance2.png}\n  \n  Query results for \\texttt{pond\\_2.jpg}.  \n\\end{center}\n\n$d_1$ mostly fails here. While its top result is another pond image, the other 3\nare nowhere to be found in the top row. $d_2$ does perfectly: its top 4 results\nare the other 4 pond images.\n\n\\subsection{\\texttt{stHelens}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{stHelens_5_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{stHelens_5_distance2.png}\n  \n  Query results for \\texttt{stHelens\\_5.jpg}.  \n\\end{center}\n\n$d_1$ does okay returning another image of Mount St. Helens as its top result\nwith another in the top row. $d_2$ does very well with its top 3\nresults being of Mount St. Helens and narrowly missing \\texttt{stHelens\\_3}.\n\n\\subsection{\\texttt{sunset1}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{sunset1_1_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{sunset1_1_distance2.png}\n  \n  Query results for \\texttt{sunset1\\_1.jpg}.  \n\\end{center}\n\nBoth $d_1$ and $d_2$ seem to do equally well here. The top row is all sunsets\nexcept for one image in both cases.\n\n\\subsection{\\texttt{sunset2}}\n\n\\begin{center}\n  \\includegraphics[width=\\textwidth]{sunset2_2_distance1.png}\n  \n  \\includegraphics[width=\\textwidth]{sunset2_2_distance2.png}\n  \n  Query results for \\texttt{sunset2\\_2.jpg}.  \n\\end{center}\n\nThis sunset is more challenging. $d_1$'s top two results are sunsets, but $d_2$\ndoes better: its top 3 results are sunsets with an additional result in the top\nrow.\n\n\\section*{Appendix}\n\nAll code used to generate these images can be found at\n\\href{https://github.com/ppham27/cse576/blob/master/hw3}{\\texttt{ppham27/cse576/hw3}}. 
The\nembedded JPEG, PNG files, and the \\LaTeX can be found in\n\\href{https://github.com/ppham27/cse576/blob/master/hw3/report}{\\texttt{ppham27/cse576/hw3/report}}.\n\n\\end{document}\n", "meta": {"hexsha": "9cf67b9ab65777d917bd0a380f434e8a3b9399c2", "size": 11435, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw3/report/report.tex", "max_stars_repo_name": "ppham27/cse576", "max_stars_repo_head_hexsha": "ee51345c5f4b8e344098aa08c592e8ebd1ce151f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw3/report/report.tex", "max_issues_repo_name": "ppham27/cse576", "max_issues_repo_head_hexsha": "ee51345c5f4b8e344098aa08c592e8ebd1ce151f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw3/report/report.tex", "max_forks_repo_name": "ppham27/cse576", "max_forks_repo_head_hexsha": "ee51345c5f4b8e344098aa08c592e8ebd1ce151f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1609589041, "max_line_length": 148, "alphanum_fraction": 0.7285526891, "num_tokens": 3635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5888283300521724}}
{"text": "\\subsection{Impact of QoI on Solution Quality }\\label{sec:impact}\n\nIn the explicit (set-based) approach, a finite numerical approximation of (often uncountably many) events in $\\sa$s is required.\nThus, there are two primary sources of approximation error: (1) partitioning the parameter space $\\pspace$ to approximate events in $\\pborel$, and (2) partitioning the data space $\\dspace$ to approximate events in $\\dborel$.\n\nA non-intrusive sample-based algorithm is initially introduced in \\cite{BET+14} and further analyzed in \\cite{BET+14-arxiv}.\nWe direct the interested reader to \\cite{BET+14-arxiv} for more detailed information and analysis of this algorithm, in which $\\dspace$ is discretized by $M$ samples and $\\pspace$ is discretized by $N$ samples.\nIf $\\updatedP$ represents the updated probability measure, then we let $\\updatedPxM$ be the exact solution to the approximate inverse problem using the discretization of $\\predictedP$ by $M$ samples.\nFinally, let $\\updatedPxNM$ denote the approximate solution to the approximate problem under both aforementioned discretizations, so\n\\[\n\\updatedP = \\lim\\limits_{M\\to\\infty} \\updatedPxM = \\lim\\limits_{M\\to\\infty} \\lim\\limits_{N\\to\\infty} \\updatedPxNM.\n\\]\n\nIf numerical error in the evaluation of the QoI map $\\qoi$ is inherent (e.g. owing to a mesh choice, surrogate model, or solution basis order), in the $N$ evaluations of samples in $\\pspace$, then we let $\\updatedPxNMh$ denote the approximate solution given the model discrepancy, and we have\n\\[\n\\updatedPxNM= \\lim\\limits_{h \\downarrow 0} \\updatedPxNMh.\n\\]\n\nHere, the $h$ refers to a mesh or other numerical parameter that determines the accuracy of the numerical solution evaluation of the QoI map.\n\n\nIn \\cite{BM17}, the focus is on proving the convergence of $\\updatedPxNMh(A) \\to \\PP_{\\pspace} (A)$ for some $A\\in \\pborel$ and on estimating the error in $\\updatedPxNMh(A)$.\nThere, as well as in \\cite{JWW17}, adjoint-based a posteriori estimates in the computed QoI are combined with a statistical analysis to both estimate and bound the error in $\\updatedPxNMh (A)$.\nIn \\cite{BM17}, adjoints are used to compute both error and derivative estimates of the QoI map to improve the accuracy in $\\updatedPxNMh (A)$.\n%However, no work has to date fully explored the \\emph{convergence rates} of Algorithm \\ref{alg:inv_density}.\n%Furthermore, no work has yet to establish that these rates are independent of the choice of QoI map despite other studies establishing that the absolute error is very much affected by the geometric properties of the QoI maps \\cite{BE13}.\n\nAs mentioned earlier, stability results are with respect to the Total Variation metric, which we use to study convergence as well.\nRepeated application of the triangle inequality yields\n\\begin{equation}\n\\label{eq:triangleineq}\nd_{\\text{TV}}(\\updatedPxNMh, \\updatedP) \\leq\n\\underset{ \\text{(E1)} }{\\underbrace{ d_{\\text{TV}}(\\updatedPxNMh, \\updatedPxNM)}} +\n\\underset{ \\text{(E2)} }{\\underbrace{ d_{\\text{TV}}(\\updatedPxNM, \\updatedPxM) }}+\n\\underset{ \\text{(E3)} }{\\underbrace{ d_{\\text{TV}}(\\updatedPxM, \\updatedP) }}.\n\\end{equation}\nThe term (E1) describes the effect of the error in the numerically evaluated $Q_j$ on the solution to the stochastic inverse problem.\nThe term (E2) describes the effect of finite sampling error in $\\pspace$, and (E3) describes the effect of discretization error of $\\predictedP$ on the solution to the 
stochastic inverse problem.\nIn our experience, terms (E1) and (E3) are more easily controlled.\nThus, we limit our focus to (E2), where certain geometric properties of the QoI map are known to significantly impact this term, as we show below.\n", "meta": {"hexsha": "319519c00f2615fadbd4b1a14f03275633dc8f89", "size": 3657, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch03/impact.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "ch03/impact.tex", "max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "ch03/impact.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.1951219512, "max_line_length": 292, "alphanum_fraction": 0.7670221493, "num_tokens": 967, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.754914975839675, "lm_q1q2_score": 0.5888283291720148}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n%\\logo{./resources/pdf/logo.pdf}\n%\\institute{Rice University}\n%\\faculty{Faculty of Whatever Sciences}\n%\\department{Department of Mathematics}\n%\\title{Class Notes}\n%\\subtitle{Based on MATH xxx}\n%\\author{\\textit{Author}\\\\Gabriel \\textsc{Gress}}\n%\\supervisor{Linus \\textsc{Torvalds}}\n%\\context{Well, I was bored...}\n%\\date{\\today}\n\n\\begin{document}\n\n% \\maketitle\n\n% Notes taken on 02/05/21\n\n\\section{Modules}\n\\label{sec:modules}\n\nAn \\(R\\)-module \\(M\\) is an abelian group that comes equipped with a binary operation \\(\\cdot \\) that maps from \\(R\\times M\\) to \\(M\\) that is compatible with operations of both \\(M\\) and \\(R\\). It is the natural generalization of vector spaces to rings, but with the key difference that we may not have multiplicative inverses for elements in our \\(R\\)-module.\n\n\\begin{defn}[Left \\(R\\)-module]\nLet \\(R\\) be a ring. A \\textbf{left \\(R\\)-module} is a pair \\(\\prescript{}{R}{M} := (M,\\cdot :R\\times M\\to M)\\) where \\(M\\) is an abelian group, and \\(\\cdot \\) is a binary operation so that\n\t\\begin{align*}\n\t\t\\forall r,s \\in R, \\; m,n \\in M:\\\\\n\t\tr\\cdot (m+n) = (r\\cdot m) + (r\\cdot n)\\\\\n\t\t(r+s)\\cdot m = (r\\cdot m) + (s\\cdot m)\\\\\n\t\t(rs)\\cdot m = r\\cdot(s\\cdot m)\n\t\\end{align*}\n\tIf \\(R\\) is unital, then we also require\n\t\\begin{align*}\n\t\t1_R \\cdot  m = m.\n\t\\end{align*}\n\tThe map is called the (left) \\textbf{\\(R\\)-action map}.\n\\end{defn}\n\n\\begin{exmp}[Free Module of Rank \\(n\\)]\n\t\\begin{enumerate}\n\t\t\\item If \\(R\\) is a field \\(F\\), then the \\(R\\)-module is an \\(F\\)-vector space.\n\t\t\\item Take \\(M = R^{n} := \\left\\{(t_1,\\ldots,t_n) \\mid t_i \\in R \\right\\}\\). Let the \\(R\\)-action map of \\(\\prescript{}{R}M\\) be defined by\n\t\t\t\\begin{align*}\n\t\t\t\tR\\times M \\to M\\\\\n\t\t\t\t(r,(t_1,\\ldots,t_n)) \\mapsto (rt_1,\\ldots,rt_n).\n\t\t\t\\end{align*}\n\t\t\tOne can check that this satisfies the necessary properties of a left \\(R\\)-action on \\(M = R^{n}\\). This left \\(R\\)-module \\(\\prescript{}{R}R^{n}\\) (which we will simply denote here on out by \\(R^{n}\\)) is called the \\textbf{free left \\(R\\)-module of rank \\(n\\)}.\n\t\\end{enumerate}\n\\end{exmp}\n\n\\begin{exmp}[\\(\\Z\\)-Modules]\nAn abelian group \\(M\\) can be made into a module \\(\\prescript{}{\\Z}M\\) over the integers in exactly one way. Consider the \\(\\Z\\)-action map defined by\n\\begin{align*}\n\t\\Z\\times M \\to M\\\\\n\t(n,m) \\mapsto  m + \\ldots_{n} + m\n\\end{align*}\nOne can check that this indeed is a \\(\\Z\\)-module over \\(M\\). \n\\end{exmp}\n\n\\begin{hw}\n\tProve that the \\(\\Z\\)-action given above is the unique \\(\\Z\\)-action for any \\(\\Z\\)-module. Furthermore, show that \\(\\prescript{}{\\Z}M\\) is isomorphic to an abelian group.\n\\end{hw}\n\n\\begin{exmp}[\\(F{[}x{]}\\) Modules]\n\tLet \\(R = F[x]\\) be a polynomial ring over a field \\(F\\), and let \\(V\\) be a vector space over \\(F\\) with a linear operator \\(T \\in \\mathcal{L}(V)\\). We can construct an \\(F[x]\\)-module on \\(V\\) via \\(T\\) (denoted \\(\\prescript{}{F[x]}V)\\)). 
To see this, let \\(p(x) \\in F[x]\\) be a polynomial given by\n\t\\begin{align*}\n\t\tp(x) = a_nx^{n}+ a_{n-1}x^{n-1} + \\ldots + a_1 x + a_0.\n\t\\end{align*}\n\tFor each \\(v \\in V\\), we define the action of \\(p(x)\\) on \\(v\\) by\n\t\\begin{align*}\n\t\tp(x)\\cdot v &= (a_n T^{n} + a_{n-1}T^{n-1} + \\ldots + a_1 T + a_0) (v)\\\\\n\t\t       &= a_n T^{n}(v) + a_{n-1}T^{n-1}(v) + \\ldots + a_1 T(v) + a_0v\n\t\\end{align*}\n\tInformally, we are defining an action of \\(x\\) on \\(V\\) by \\(T\\), and then extending it onto \\(F[x]\\) in a natural way.\\\\\n\n\tRecall that \\(F\\leq F[x]\\) (as constant polynomials), and the action of the constant polynomials is exactly the standard action of \\(F\\) on \\(V\\). In other words, this action extends the action of \\(F\\) to the larger ring \\(F[x]\\).\\\\\n\n\tBecause this action is dependent on the choice of \\(T\\), this gives us many different \\(F[x]\\)-module structures on the same vector space \\(V\\). One can check that \\(T=0\\) also yields the standard action of \\(F\\) on \\(V\\).\\\\\n\n\tWhat is interesting to note is that the action of \\(F[x]\\) via \\(T\\) encapsulates \\textit{all} possible \\(F[x]\\)-module structures on \\(V\\)---this holds because the action of \\(x \\in F[x]\\) on \\(V\\) is a linear transformation from \\(V\\) to \\(V\\), and hence must correspond to some \\(T\\), which then determines the action of every \\(p(x)\\).\\\\\n\n\tOne might ask what \\(F[x]\\)-submodules look like. We can see immediately that an \\(F[x]\\)-submodule \\(\\prescript{}{F[x]}W \\leq \\prescript{}{F[x]}V\\) must also be an \\(F\\)-submodule, and hence \\(W<V\\) as a vector subspace. Furthermore, in order for the \\(F[x]\\)-action to be well-defined on \\(W\\), \\(W\\) must be \\textbf{\\(T\\)-invariant}, that is, \\(T(W) \\subset  W\\). In fact, this correspondence is a bijection: the \\(F[x]\\)-submodules of \\(V\\) are exactly the \\(T\\)-invariant subspaces of \\(V\\).\n\\end{exmp}\nThis example shows that the ideal structure of \\(F[x]\\) greatly restricts the module structure on \\(V\\) (and in fact can be used to derive the Jordan canonical form of \\(T\\)). In fact, the reasoning above can be applied to any PID \\(R\\), and in the special case \\(R = \\Z\\) we can obtain the fundamental theorem of finitely generated abelian groups. 
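\n\nFor a minimal concrete check of the action defined above (the space, operator, and polynomial here are our own illustrative choices, not from the lecture), take \\(V = F^2\\) and \\(T(a,b) = (b,0)\\), so that \\(T^{2} = 0\\). Then for \\(p(x) = x^{2} + 3x + 1\\),\n\\begin{align*}\n\tp(x)\\cdot (a,b) &= T^{2}(a,b) + 3T(a,b) + 1\\cdot (a,b)\\\\\n\t&= (0,0) + (3b,0) + (a,b) = (a+3b,\\; b).\n\\end{align*}\nThe \\(T\\)-invariant subspace \\(W = F\\times \\{0\\}\\) is then an \\(F[x]\\)-submodule, consistent with the bijection above: \\(p(x)\\cdot (a,0) = (a,0) \\in W\\).\n\n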
In general, it is always interesting to see how the structure of a ring \\(R\\) will affect its modules.\n\n\\end{document}\n", "meta": {"hexsha": "9ed278c57ecb36af6823093776414064ad1e753d", "size": 5193, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture5 - IntroModules.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture5 - IntroModules.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture5 - IntroModules.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.0113636364, "max_line_length": 465, "alphanum_fraction": 0.6458694396, "num_tokens": 1762, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7799928951399098, "lm_q1q2_score": 0.5888283261913884}}
{"text": "\\documentclass[]{article}\n\n\\usepackage{graphicx}\n\n\\usepackage[margin=1in]{geometry}\n\n\\setlength\\parindent{0pt}\n\n\\usepackage{physics}\n\\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\n\\usepackage{listings}\n\n\\usepackage{enumitem}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\renewcommand*{\\thesection}{Problem \\arabic{section}}\n\\renewcommand*{\\thesubsection}{\\alph{subsection})}\n\\renewcommand*{\\thesubsubsection}{\\quad \\quad \\roman{subsubsection})}\n\n%Custom Commands\n\\newcommand{\\Rel}{\\mathcal{R}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\toI}{\\xrightarrow{\\textsf{\\tiny I}}}\n\\newcommand{\\toS}{\\xrightarrow{\\textsf{\\tiny S}}}\n\\newcommand{\\toB}{\\xrightarrow{\\textsf{\\tiny B}}}\n\n\\newcommand{\\divisible}{ \\ \\vdots \\ }\n\n\n% Theorem Definition\n\\newtheorem{definition}{Definition}\n\\newtheorem{theorem}{Theorem}\n\n\n%opening\n\n\\title{MATH 5301 Elementary Analysis - Homework 4}\n\n\\author{Jonas Wagner}\n\n\\date{2021, September 24}\n\n\\begin{document}\n\n\\maketitle\n\n% Problem 1\n\\section{}\nUse the axioms of the ordered field, prove the following:\n% Part a\n\\subsection{$(a > c) \\land (b > d) \\implies a + b > c + d$}\n\\begin{align*}\n    (a > c) \\land (b > d) &\\implies (a + b) > (c + d)\n\\end{align*}\nFrom (O3):\n\\begin{align*}\n    (a > c) &\\implies ((a + b) \\geq (b + c)) \\land ((a + d) \\geq (c + d))\\\\\n    (b > d) &\\implies ((a + b) \\geq (a + d)) \\land ((b + c) \\geq (c + d))\\\\\n\\end{align*}\nFrom (02):\n\\begin{align*}\n    ((a + b) \\geq (b + c)) \\land ((b + c) \\geq (c + d))\n        &\\implies (a + b) > (c + d)\n\\end{align*}\n\n% Part b\n\\subsection{$(a > c > 0) \\land (b > d > 0) \\implies ab > cd > 0$}\n\\begin{align*}\n    (a > c > 0) \\land (b > d > 0) &\\implies ab > cd > 0\n\\end{align*}\nFrom (O4):\n\\begin{align*}\n    (a > c > 0) \\land (b > 0) &\\implies ab > bc > 0\\\\\n    % (a > c > 0) \\land (d > 0) &\\implies ad > cd > 0\\\\\n    % (b > d > 0) \\land (a > 0) &\\implies ab > ad > 0\\\\\n    (b > d > 0) \\land (c > 0) &\\implies bc > cd > 0\n\\end{align*}\nFrom (O2):\n\\begin{align*}\n    (ab > bc > 0) \\land (bc > cd > 0) &\\implies ab > cd > 0\n\\end{align*}\n\n\\newpage\n% Part c\n\\subsection{$a > b > 0 \\implies \\frac{1}{a} < \\frac{1}{b}$}\n\\begin{align*}\n    a > b > 0 &\\implies \\frac{1}{b} < \\frac{1}{b}\n\\end{align*}\nFrom \n\\begin{align*}\n    a > 0 & \\implies a^{-1} > 0\\\\\n    b > 0 & \\implies b^{-1} > 0\n\\end{align*}\nFrom (O4):\n\\begin{align*}\n    (a > b > 0) \\land (a^{-1} > 0) &\\implies a a^{-1} = 1 > b a^{-1} = \\frac{b}{a}> 0\\\\\n    (a > b > 0) \\land (b^{-1} > 0) &\\implies a b^{-1} = \\frac{a}{b} > b b^{-1} = 1 > 0\\\\\n    (\\frac{a}{b} > 1 > 0) \\land (a^{-1} > 0)\n        &\\implies \\frac{a}{b} a^{-1} = \\frac{1}{b} > (1)(a^{-1}) = \\frac{1}{a} > 0\n\\end{align*}\nTherefore, $$\\frac{1}{a} < \\frac{1}{b}$$\n\n\\newpage\n% Part d\n\\subsection{\n    Let,\n    $$\\abs*{x} = \n        \\begin{cases}\n            x, & x \\geq 0\\\\\n            -x,& X < 0\n        \\end{cases}\n    $$\n    prove,\n    $$\\forall a, b \\implies \\abs*{a - b} \\geq \\abs*{\\abs*{a} - \\abs*{b}}$$\n}\n\n\\begin{align*}\n    \\forall a, b \\implies \\abs*{a - b} \\geq \\abs*{\\abs*{a} - \\abs*{b}}\n\\end{align*}\nWhen $a>b>0$ (or $b>a>0$),\n\\begin{align*}\n    \\abs*{a - b}&= a - b\\\\\n    \\abs*{a}    &= a\\\\\n    \\abs*{b}    &= 
b\\\\\n    \\abs*{\\abs*{a} - \\abs*{b}} &= a - b\\\\\n    \\abs*{a - b} = a-b &= \\abs*{\\abs*{a} - \\abs*{b}}\n\\end{align*}\nThe cases $0 < a < b$ and $a, b \\leq 0$ follow by similar arguments.\\\\\nFor $a > 0 > b$ (the case $b > 0 > a$ is symmetric),\n\\begin{align*}\n    \\abs*{a}    &= a\\\\\n    \\abs*{b}    &= -b\\\\\n    \\abs*{a - b}&= \\abs*{a} + \\abs*{b}\\\\\n    \\abs*{a} - \\abs*{b} &= a - (-b) = a + b\\\\\n    \\abs*{\\abs*{a} - \\abs*{b}} &= \n        \\begin{cases}\n            \\abs*{a} - \\abs*{b}   & \\abs*{a} > \\abs*{b}\\\\\n            \\abs*{b} - \\abs*{a}   & \\abs*{a} < \\abs*{b}\n        \\end{cases}\n    \\intertext{From (O3):}\n    \\abs*{a - b} = \\abs*{a} + \\abs*{b} &\\geq \\abs*{a} - \\abs*{b}\\\\\n    \\abs*{a - b} = \\abs*{a} + \\abs*{b} &\\geq \\abs*{b} - \\abs*{a}\\\\\n    \\therefore \\ \\abs*{a - b} &\\geq \\abs*{\\abs*{a} - \\abs*{b}}\n\\end{align*}\nTherefore $\\forall a, b$,\n\\begin{align*}\n     \\abs*{a - b} \\geq \\abs*{\\abs*{a} - \\abs*{b}}\n\\end{align*}\n\n% Problem 2\n\\newpage\n\\section{}\nDetermine which of the axioms satisfied by the set of real numbers \nare not satisfied by the following sets:\n\n% Part a\n\\subsection{Set $\\Q$ of all rational numbers.}\nThe set $\\Q$ of rational numbers can be an ordered field, \n$\\langle \\Q, +, 0, \\cdots, 1\\rangle$, but lacks (C) completeness: there are nonempty subsets of $\\Q$ that are bounded above yet have no supremum in $\\Q$, e.g.\n$$A = \\qty{x \\in \\Q : x^2 < 2}, \\quad \\nexists \\ c \\in \\Q : c = \\sup A$$\n\n% Part b\n\\subsection{Set $\\Q(\\sqrt{2})$ of all numbers of form $a + b\\sqrt{2}$, where $a,b\\in\\Q$}\nThe set $\\Q(\\sqrt{2}) := \\qty{a + b \\sqrt{2} : a, b \\in \\Q}$ can be an ordered field, \n$\\langle \\Q(\\sqrt{2}), +, 0, \\cdots, 1\\rangle$, but likewise lacks completeness (C): there are nonempty bounded subsets with no supremum in $\\Q(\\sqrt{2})$, e.g.\n$$A = \\qty{x \\in \\Q(\\sqrt{2}) : x^2 < 3}, \\quad \\nexists \\ c \\in \\Q(\\sqrt{2}) : c = \\sup A$$\n\n% Part c\n\\subsection{Set $\\C$ of all pairs of real numbers $(a,b)$ \n    with addition ${(a,b) + (c,d) = (a+c,b+d)}$, \n    multiplication $(a,b)\\cdot(c,d) = (ac-bd,ad+bc)$, \n    and order relation $(a,b)<(c,d)\\iff(b \\leq d)\\land((b=d \\lor a<c))$.\n}\nThe set $\\C := \\qty{(a,b) : a,b \\in \\R}$ can satisfy the field conditions, \n$\\langle \\C, +, 0, \\cdots, 1\\rangle$,\nbut it is not ordered because the given relation does not satisfy (O1): for example, $(0,0)$ and $(0,1)$ are incomparable, since $b < d$ but neither $b = d$ nor $a < c$ holds.\n\n% Problem 3\n\\newpage\n\\section{}\nUsing the method of mathematical induction, \nprove the following statements: $(n \\in \\N)$\n\n% Part a\n\\subsection{Bernoulli inequality: $\\forall n\\in\\N, \\ \\forall x > -1$,\n    $(1+x)^n \\geq 1 + nx$\n}\n\n\\begin{theorem}\n    $\\forall n\\in\\N, \\ \\forall x > -1$, \n        $$(1+x)^n \\geq 1 + nx$$\n\\end{theorem}\n\n\\begin{proof}\n    Proof by induction:\\\\\n    For $n=1$,\n    \\begin{align*}\n        (1+x)^n &\\geq 1 + nx\\\\\n        (1+x)^1 &\\geq 1 + (1)x\\\\\n        1+x &\\geq 1 + x\n    \\end{align*}\n    For $n > 1$, assume $(1+x)^n \\geq 1 + nx$; multiplying both sides by $1 + x > 0$ preserves the inequality:\n    \\begin{align*}\n        (1+x)^n &\\geq 1 + n x\\\\\n        (1+x)^n (1+x) &\\geq (1+nx) (1+x)\\\\\n        (1+x)^{n+1} &\\geq (1 + x + nx + nx^2)\\\\\n            &= 1 + (n+1)x + nx^2\\\\\n    \\end{align*}\n    Since $nx^2 \\geq 0$,\n    \\begin{align*}    \n        1 + (n+1)x + nx^2 &\\geq 1 + (n+1)x\\\\\n    \\end{align*}\n    From (O2):\n        $$(1+x)^{n+1} \\geq 1 + (n+1)x$$\n    Therefore $\\forall n > 1$,\n        $$(1+x)^n \\geq 1 + nx \\implies (1+x)^{n+1} \\geq 1 + (n+1)x$$\n    Therefore $\\forall n\\in\\N, \\ \\forall x > -1$,\n        $$(1+x)^n \\geq 1 + nx$$\n\\end{proof}\n\n\\newpage\n% Part b\n\\subsection{For $n \\in \\N$, \n    $\\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} = 2 - 
\\frac{n+2}{2^n}$\n}\n\\begin{theorem}\n    For $n \\in \\N$, \n        $$\\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} = 2 - \\cfrac{n+2}{2^n}$$\n\\end{theorem}\n\\begin{proof}\n    Proof by induction:\n    For $n = 1$,\n    \\begin{align*}\n        \\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} \n            &= 2 - \\cfrac{n+2}{2^n}\\\\\n        \\frac{1}{2} = 2 - \\cfrac{1+2}{2^1} \n            &= 2 - \\frac{3}{2}\\\\\n        \\frac{1}{2} &= \\cfrac{1}{2}\\\\\n    \\end{align*}\n    For $n > 1$,\n    $$\\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} = 2 - \\cfrac{n+2}{2^n}\n    \\implies \\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n+1}{2^{n+1}} = 2 - \\cfrac{(n+1)+2}{2^{n+1}}$$\n    \\begin{align*}\n        \\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} \n            &= 2 - \\cfrac{n+2}{2^n}\\\\\n        \\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} + \\frac{n+1}{2^{n+1}}\n            &= 2 - \\cfrac{n+2}{2^n} + \\cfrac{n+1}{2^{n+1}}\\\\\n            &= 2 - \\cfrac{n+2}{2^n}\\cfrac{2}{2} + \\cfrac{n+1}{2^{n+1}}\\\\\n            &= 2 - \\cfrac{2(n+2)}{2^{n+1}} + \\cfrac{n+1}{2^{n+1}}\\\\\n            &= 2 + \\cfrac{n+1 - 2(n+2) }{2^{n+1}}\\\\\n            &= 2 + \\cfrac{n+1 - 2n - 4}{2^{n+1}}\\\\\n            &= 2 + \\cfrac{-n - 3}{2^{n+1}}\\\\\n            &= 2 + \\cfrac{-(n+1) -2}{2^{n+1}}\\\\\n            &= 2 - \\cfrac{(n+1) +2}{2^{n+1}}\\\\\n    \\end{align*}\n    Therefore $\\forall n > 1$,\n    $$\\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n}\n        = 2 - \\cfrac{n+2}{2^n}\n        \\implies \\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} + \\frac{n+1}{2^{n+1}}\n        = 2 - \\cfrac{(n+1) +2}{2^{n+1}}$$\n    Therefore, for $n \\in \\N$,\n    $$\\frac{1}{2} + \\frac{2}{2^2} + \\cdots + \\frac{n}{2^n} = 2 - \\frac{n+2}{2^n}$$\n\\end{proof}\n\n\\newpage\n% Part c\n\\subsection{For $q \\in \\R, q \\neq 1, n \\in \\N$,\n    $(1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n}) = \\frac{1-q^{2^{n+1}}}{1-q}$\n}\n\\begin{theorem}\n    For $q \\in \\R$ with $q \\neq 1$, and $n \\in \\N$,\n    $$(1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n}) = \\cfrac{1-q^{2^{n+1}}}{1-q}$$\n\\end{theorem}\n\\begin{proof}\n    Proof by induction:\\\\\n    For $n=1$,\n    \\begin{align*}\n        (1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n})\n            &= \\cfrac{1-q^{2^{n+1}}}{1-q}\\\\\n        (1+q)(1+q^{2^1}) \n            &= \\cfrac{1-q^{2^{1+1}}}{1-q}\\\\\n        (1+q^{2} + q + q^{3}) \n            &= \\cfrac{1-q^{4}}{1-q}\\\\\n        (1 + q + q^{2} + q^{3}) (1-q)\n            &= \\cfrac{1-q^{4}}{1-q}(1-q)\\\\\n        1 + q + q^{2} + q^{3} - q - q^2 - q^3 -q^4\n            &= 1-q^{4}\\\\\n        1 - q^4\n            &= 1-q^{4}\\\\\n    \\end{align*}\n    For $n>1$\n    \\begin{align*}\n        (1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n}) &= \\cfrac{1-q^{2^{n+1}}}{1-q}\\\\\n        (1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n})(1+q^{2^{n+1}})\n            &= \\cfrac{1-q^{2^{n+1}}}{1-q}(1+q^{2^{n+1}})\\\\\n            &= \\cfrac{\\qty(1-q^{2^{n+1}})\\qty(1+q^{2^{n+1}})}{1-q}\\\\\n            &= \\cfrac{1-q^{2^{n+1}} + q^{2^{n+1}} + \\qty(-q^{2^{n+1}})\\qty(q^{2^{n+1}})}{1-q}\\\\\n            &= \\cfrac{1 -q^{2^{n+1} + 2^{n+1}}}{1-q}\\\\\n            &= \\cfrac{1 -q^{2\\qty(2^{n+1})}}{1-q}\\\\\n        (1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n})(1+q^{2^{n+1}})\n            &= \\cfrac{1 -q^{2^{n+2}}}{1-q}\\\\\n    \\end{align*}\n    Therefore $\\forall n>1$,\n    $$(1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n}) = \\cfrac{1-q^{2^{n+1}}}{1-q}\n    \\implies (1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n})(1+q^{2^{n+1}}) = \\cfrac{1 
-q^{2^{n+2}}}{1-q}\n    $$\n    Therefore $\\forall n \\in \\N$,\n    $$(1+q)(1+q^2)(1+q^4)\\cdots(1+q^{2^n}) = \\cfrac{1-q^{2^{n+1}}}{1-q}$$\n\\end{proof}\n\n\\newpage\n% Part d\n\\subsection{For $n \\in \\N$,\n    $1^3 + 3^3 + \\cdots + (2n + 1)^3 = (n+1)^2 (2n^2 + 4n + 1)$\n}\n\\begin{theorem}\n    For $n \\in \\N$,\n    $$1^3 + 3^3 + \\cdots + (2n + 1)^3 = (n+1)^2 (2n^2 + 4n + 1)$$\n\\end{theorem}\n\\begin{proof}\n    Proof by induction:\\\\\n    For $n = 1$,\n    \\begin{align*}\n        1^3 + 3^3 + \\cdots + (2n + 1)^3 &= (n+1)^2 (2n^2 + 4n + 1)\\\\\n        1^3 + (2(1) + 1)^3 &= ((1)+1)^2 (2(1)^2 + 4(1) + 1)\\\\\n        1^3 + 3^3 &= (2)^2 (2(1) + 4 + 1)\\\\\n        1 + 27 &= (4) (2 + 4 + 1)\\\\\n        28 &= (4)(7)\\\\\n        28 &= 28\n    \\end{align*}\n    For $n > 1$,\n    \\begin{align*}\n        1^3 + 3^3 + \\cdots + (2n + 1)^3 &= (n+1)^2 (2n^2 + 4n + 1)\\\\\n        1^3 + 3^3 + \\cdots + (2n + 1)^3 + (2(n+1)+1)^3\n            &= (n+1)^2 (2n^2 + 4n + 1) + (2(n+1)+1)^3\\\\\n            &= (n+1)^2 (2n^2 + 4n + 1) + (2n + 3)^3\n    \\end{align*}\n    \\begin{align*}\n        &= (n+1)(n+1) (2n^2 + 4n + 1) + (2n + 3)(2n + 3)(2n + 3)\\\\\n        &= (n^2 + 2n + 1) (2n^2 + 4n + 1) + 27 + 54 n + 36 n^2 + 8 n^3\\\\\n        &= 2n^4 + 8n^3 + 11n^2 + 6n + 1 + 8n^3 + 36 n^2 + 54n + 27\\\\\n        &= 2n^4 + 16n^3 + 47n^2 + 60n + 28\\\\\n        &= (n+2)^2(2n^2 + 8n + 7)\\\\\n        &= ((n+1)+1)^2 (2(n+1)^2 - 4n - 2 + 4(n+1) + 4n - 4 + 7)\\\\\n        &= ((n+1)+1)^2 (2(n+1)^2 + 4(n+1) + 1)\\\\\n    \\end{align*}\n    Therefore $\\forall n > 1$,\n    $$\n    1^3 + 3^3 + \\cdots + (2n + 1)^3 = (n+1)^2 (2n^2 + 4n + 1)\\implies\n    $$\n    $$\\implies 1^3 + 3^3 + \\cdots + (2n + 1)^3 + (2(n+1)+1)^3\n    = ((n+1)+1)^2 (2(n+1)^2 + 4(n+1) + 1)\n    $$\n    Therefore $\\forall n \\in \\N$,\n    $$1^3 + 3^3 + \\cdots + (2n + 1)^3 = (n+1)^2 (2n^2 + 4n + 1)$$\n\\end{proof}\n\n\\newpage\n% Part e\n\\subsection{For $n,k \\in \\N$,\n    $\\sum_{k=0}^{n} \\qty(-1)^k \\cfrac{n!}{k!(n-k)!} = 0, \n    \\sum_{k=0}^{n} \\cfrac{n!}{k!(n-k)!} = 2^n$\n}\n\\begin{definition}\n    The factorial of a number, $n!$, is defined as\n    $$n! 
:= (1)(2)(3)\\cdots(n-1)(n)$$\n\\end{definition}\n\\begin{definition}\n    The combination (binomial coefficient) of two numbers, $\\mqty(n\\\\k)$, is defined as\n    $$\\mqty(n\\\\k) := \\cfrac{n!}{k!(n-k)!}$$\n\\end{definition}\n\\subsubsection{$\\sum_{k=0}^{n} \\qty(-1)^k \\cfrac{n!}{k!(n-k)!} = 0$}\n\\begin{theorem}\n    For $n,k \\in \\N$,\n    $$\\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n\\\\k) = 0$$\n\\end{theorem}\n\\begin{proof}\n    For $n=1$,\n    \\begin{align*}\n        \\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n\\\\k) &= 0\\\\\n        \\sum_{k=0}^{1} \\qty(-1)^k \\mqty(1\\\\k) \n            &= (-1)^0 \\mqty(1\\\\0) + (-1)^1 \\mqty(1\\\\1)\\\\\n            &= (1)(1) + (-1) (1)\\\\\n            &=0\n    \\end{align*}\n    Therefore,\n    \\begin{align*}\n        \\sum_{k=0}^{1} \\qty(-1)^k \\mqty(1\\\\k) &= 0\n    \\end{align*}\n    For $n \\geq 2$, we use Pascal's identity $\\mqty(n\\\\k) = \\mqty(n-1\\\\k-1) + \\mqty(n-1\\\\k)$, with the conventions $\\mqty(n-1\\\\-1) = \\mqty(n-1\\\\n) = 0$:\n    \\begin{align*}\n        \\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n\\\\k) \n            &= \\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n-1\\\\k-1) + \\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n-1\\\\k)\\\\\n            &= -\\sum_{j=0}^{n-1} \\qty(-1)^{j} \\mqty(n-1\\\\j) + \\sum_{k=0}^{n-1} \\qty(-1)^k \\mqty(n-1\\\\k)\\\\\n            &= 0\n    \\end{align*}\n    where the first sum was reindexed with $j = k - 1$.\n    Therefore $\\forall n \\in \\N$,\n    $$\\sum_{k=0}^{n} \\qty(-1)^k \\mqty(n\\\\k) = 0$$\n\\end{proof}\n\n\\newpage\n\\subsubsection{$\\sum_{k=0}^{n} \\cfrac{n!}{k!(n-k)!} = 2^n$}\n\\begin{theorem}\n    For $n,k \\in \\N$,\n    $$\\sum_{k=0}^{n} \\mqty(n\\\\k) = 2^n$$\n\\end{theorem}\n\\begin{proof}\n    By induction:\\\\\n    For $n=2$,\n    \\begin{align*}\n        \\sum_{k=0}^{n} \\mqty(n\\\\k) &= 2^n\\\\\n        \\sum_{k=0}^{2} \\mqty(2\\\\k) &= 2^2\\\\\n        \\mqty(2\\\\0) + \\mqty(2\\\\1) + \\mqty(2\\\\2) &= 4\\\\\n        1 + 2 + 1 &= 4\\\\\n        4 &= 4\n    \\end{align*}\n    For $n \\geq 2$, assume $\\sum_{k=0}^{n} \\mqty(n\\\\k) = 2^n$. By Pascal's identity (with the same boundary conventions as above),\n    \\begin{align*}\n        \\sum_{k=0}^{n+1} \\mqty(n+1\\\\k) \n            &= \\sum_{k=0}^{n+1} \\qty[\\mqty(n\\\\k-1) + \\mqty(n\\\\k)]\\\\\n            &= \\sum_{j=0}^{n} \\mqty(n\\\\j) + \\sum_{k=0}^{n} \\mqty(n\\\\k)\\\\\n            &= 2^n + 2^n\\\\\n            &= 2^{n+1}\n    \\end{align*}\n    Therefore $\\forall n \\in \\N, n \\geq 2$,\n    $$\\sum_{k=0}^{n} \\mqty(n\\\\k) = 2^n$$\n\\end{proof}\n\n% Problem 4\n\\newpage\n\\section{}\nShow that $\\forall n \\in \\N, n \\geq 2$,\n\n% Part a\n\\subsection{\n    
$\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n}} > \\sqrt{n}$\n}\n\\begin{theorem}\n    For $n \\in \\N$ and $n \\geq 2$,\n    $\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n}} > \\sqrt{n}$\n\\end{theorem}\n\\begin{proof}\n    By Induction:\\\\\n    For $n=2$,\n    \\begin{align*}\n        \\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} &> \\sqrt{2}\n    \\end{align*}\n    Since $\\frac{1}{\\sqrt{2}} > \\frac{1}{2}$,\n    \\begin{align*}\n        1 + \\frac{1}{\\sqrt{2}} &> 1 + \\frac{1}{2} = \\frac{3}{2}\\\\\n        \\qty(\\frac{3}{2})^2 = \\frac{9}{4} &> 2 = \\qty(\\sqrt{2})^2\\\\\n        \\frac{3}{2} &> \\sqrt{2}\n    \\end{align*}\n    Therefore, for $n=2$,\n    $$\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n}} > \\sqrt{n}$$\n    For $n>2$,\n    \\begin{align*}\n        \\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n}} \n            &> \\sqrt{n}\\\\\n        \\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots +\n            \\frac{1}{\\sqrt{n}} + \\frac{1}{\\sqrt{n+1}}\n            &> \\sqrt{n} + \\frac{1}{\\sqrt{n+1}}\\\\\n        \\qty(\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots +\n            \\frac{1}{\\sqrt{n}} + \\frac{1}{\\sqrt{n+1}}) \\sqrt{n+1}\n            &> \\qty(\\sqrt{n} + \\frac{1}{\\sqrt{n+1}})\\sqrt{n+1}\\\\\n        \\frac{\\sqrt{n+1}}{\\sqrt{1}} + \\frac{\\sqrt{n+1}}{\\sqrt{2}} + \\cdots + \\frac{\\sqrt{n+1}}{\\sqrt{n}} + \\frac{\\sqrt{n+1}}{\\sqrt{n+1}}\n            &> \\sqrt{n} \\sqrt{n+1} + 1\\\\\n    \\end{align*}\n    Since $\\sqrt{n}\\sqrt{n+1} = \\sqrt{n(n+1)} > \\sqrt{n^2} = n$, the right-hand side exceeds $n + 1$. Dividing both sides by $\\sqrt{n+1} > 0$ then gives\n    $$\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n+1}} > \\frac{n+1}{\\sqrt{n+1}} = \\sqrt{n+1}$$\n    Therefore $\\forall n \\in \\N, n \\geq 2$,\n    $$\\frac{1}{\\sqrt{1}} + \\frac{1}{\\sqrt{2}} + \\cdots + \\frac{1}{\\sqrt{n}} > \\sqrt{n}$$\n\\end{proof}\n\n\\newpage\n% Part b\n\\subsection{\n    $\\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} > 1$\n}\n\\begin{theorem}$\\forall n \\in \\N, n \\geq 2$,\n    $$\\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} > 1$$\n\\end{theorem}\n\\begin{proof}\n    For $n = 2$,\n    \\begin{align*}\n        \\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} > 1\\\\\n        \\cfrac{1}{(2)+1} + \\cfrac{1}{(2)+2} + \\cdots + \\frac{1}{3(2)+1} > 1\\\\\n        \\cfrac{1}{3} + \\cfrac{1}{4} + \\cdots + \\frac{1}{7} > 1\\\\\n        \\sum_{k=3}^{7} \\frac{1}{k} > 1\\\\\n        \\frac{1}{3} + \\frac{1}{4} + \\frac{1}{5} + \\frac{1}{6} + \\frac{1}{7} > 1\\\\\n        \\frac{140}{420} + \\frac{105}{420} + \\frac{84}{420} + \\frac{70}{420} + \n            \\frac{60}{420} > 1\\\\\n        \\frac{459}{420} > 1\\\\\n        459 > 420\n    \\end{align*}\n    Therefore for $n = 2$,\n    $$\\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} > 1$$\n    For $n > 2$,\n    \\begin{align*}\n        \\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} &> 1\\\\\n        \\sum_{k=n+1}^{3n+1} \\frac{1}{k} &> 1\\\\\n        \\sum_{k=n+1}^{3n+1} \\frac{1}{k} + \\sum_{k=3n+2}^{3(n+1) + 1} \\frac{1}{k} \n            &> 1 + \\sum_{k=3n+2}^{3(n+1) + 1} \\frac{1}{k}\\\\\n        \\frac{1}{n+1} + \\sum_{k=(n+1) + 1}^{3(n+1)+1} \\frac{1}{k}\n            &> 1 + \\sum_{k = 3n+2}^{3(n+1) + 1} \\frac{1}{k}\\\\\n    \\end{align*}\n    Since $\\frac{1}{3n+2} + \\frac{1}{3n+4} > \\frac{2}{3n+3}$ (by cross-multiplication), the three added terms satisfy\n    $$\\sum_{k = 3n+2}^{3(n+1) + 1} \\frac{1}{k} > \\frac{3}{3n+3} = \\frac{1}{n+1}$$\n    and therefore\n    $$\\sum_{k=(n+1) + 1}^{3(n+1)+1} \\frac{1}{k} > 1 + \\sum_{k = 3n+2}^{3(n+1) + 1} \\frac{1}{k} - \\frac{1}{n+1} > 1$$\n    Therefore for $n \\geq 2$,\n    $$ \\cfrac{1}{n+1} + \\cfrac{1}{n+2} + \\cdots + \\frac{1}{3n+1} > 1$$\n\\end{proof}\n\n\\newpage\n% Part c\n\\subsection{\n    $\\qty(\\cfrac{n+1}{2})^n > n!$\n}\n\\begin{theorem} For $n \\in \\N, n \\geq 2$,\n    $$\\qty(\\cfrac{n+1}{2})^n > 
n!$$\n\\end{theorem}\n\\begin{proof}\n    For $n=2$,\n    \\begin{align*}\n        \\qty(\\cfrac{n+1}{2})^n &> n!\\\\\n        \\qty(\\cfrac{2+1}{2})^2 &> 2!\\\\\n        \\qty(\\frac{3}{2})^2 &> (2)(1)\\\\\n        \\frac{9}{4} &> 2\\\\\n        9 &> 8\n    \\end{align*}\n    For $n>2$,\n    \\begin{align*}\n        \\qty(\\cfrac{n+1}{2})^n &> n!\\\\\n        \\qty(\\cfrac{n+1}{2})^n (n+1) &> n! (n+1)\\\\\n        \\qty(\\cfrac{n+1}{2})^n (n+1) &> (n+1)!\\\\\n        \\cfrac{(n+1)^{n+1}}{2^n} &> (n+1)!\\\\\n    \\end{align*}\n    By Bernoulli's inequality (part a), $\\qty(1 + \\frac{1}{n+1})^{n+1} \\geq 1 + (n+1)\\cfrac{1}{n+1} = 2$, so $((n+1)+1)^{n+1} \\geq 2(n+1)^{n+1}$, and hence\n    \\begin{align*}\n        \\qty(\\cfrac{(n+1)+1}{2})^{n+1} = \\cfrac{((n+1)+1)^{n+1}}{2^{n+1}} \\geq \\cfrac{(n+1)^{n+1}}{2^n} &> (n+1)!\\\\\n    \\end{align*}\n    Therefore for $n\\geq 2$,\n    $$\\qty(\\cfrac{n+1}{2})^n > n!$$\n\\end{proof}\n\n\\newpage\n% Part d\n\\subsection{\n    $(2^{2^n} - 6) \\divisible 10$\n}\n\\begin{theorem}\n    For $n \\in \\N, n \\geq 2$,\n    $$(2^{2^n} - 6) \\divisible 10$$\n\\end{theorem}\n\\begin{proof}\n    By induction,\\\\\n    For $n=2$,\n    \\begin{align*}\n        (2^{2^n} - 6) \\divisible 10\\\\\n        (2^{2^{2}} - 6) \\divisible 10\\\\\n        2^4 - 6 \\divisible 10\\\\\n        16 - 6 \\divisible 10\\\\\n        10 \\divisible 10\n    \\end{align*}\n    For $n>2$, since $(2^{2^n})^2 = 2^{2 \\cdot 2^n} = 2^{2^{n+1}}$,\n    \\begin{align*}\n        (2^{2^n} - 6) \\divisible 10&\\\\\n        (2^{2^n} - 6) \\mod 10 &= 0\\\\\n        2^{2^n} \\mod 10 &= 6\\\\\n        (2^{2^n})^2 \\mod 10 &= 6^2 \\mod 10 = 36 \\mod 10 = 6\\\\\n        2^{2^{n+1}} \\mod 10 &= 6\\\\\n        (2^{2^{n+1}} - 6) \\mod 10 &= 0\\\\\n        (2^{2^{n+1}} - 6) \\divisible 10&\\\\\n    \\end{align*}\n    Therefore $\\forall n \\in \\N, n \\geq 2$,\n    $$(2^{2^n} - 6) \\divisible 10$$\n\\end{proof}\n\n% Problem 5\n\\newpage\n\\section{}\n\n% Part a\n\\subsection{Show that $\\sqrt{2} \\notin \\Q$}\n\\begin{definition}\n    $\\sqrt{2} := x \\in \\R, \\ x > 0 : x^2 = 2$\n\\end{definition}\n\\begin{theorem}\n    $\\sqrt{2} \\notin \\Q$\n\\end{theorem}\n\\begin{proof}\n    Assume $\\sqrt{2} \\in \\Q$,\n    $$\\sqrt{2} \\in \\Q \\implies \\exists m,n \\in \\N : \\frac{m}{n} = \\sqrt{2}$$\n    Also assume, without loss of generality, that $m,n$ are coprime, i.e. $\\gcd(m,n)=1$.\\\\\n    Then $m = \\sqrt{2} n$, so\n    \\begin{align*}\n        m = \\sqrt{2} n \\implies m^2 = 2 n^2 \\implies m^2 \\divisible 2 \\implies m \\divisible 2\\\\\n        m \\divisible 2 \\implies \\exists k \\in \\N : m = 2k \\implies m^2 = (2k)^2 = 4 k^2\\\\\n        4k^2 = 2 n^2 \\implies 2k^2 = n^2 \\implies n^2 \\divisible 2 \\implies n \\divisible 2\n    \\end{align*}\n    This is a contradiction because, with $\\gcd(m,n)=1$, $m$ and $n$ cannot both be even.\n\\end{proof}\n\n% Part b\n\\subsection{Show that \n    $\\forall a,b \\in \\Q, a < b \\implies \\exists x \\in \\R \\backslash \\Q : a < x < b$\n}\n\\begin{theorem}\n    $\\forall a,b \\in \\Q, a < b \\implies \\exists x \\in \\R \\backslash \\Q : a < x < b$\n\\end{theorem}\n\\begin{proof}\n    Let $x = a + \\cfrac{b-a}{\\sqrt{2}}$. Since $0 < \\cfrac{1}{\\sqrt{2}} < 1$ and $b - a > 0$, we have $a < x < b$. 
\n    Furthermore, $x \\notin \\Q$: if $x$ were rational, then $\\sqrt{2} = \\cfrac{b-a}{x-a}$ would be a quotient of rational numbers, contradicting part a).\n    Therefore, $a < b \\implies \\exists x \\in \\R \\backslash \\Q : a < x < b$.\n\\end{proof}\n\n% Part c\n\\subsection{Show that\n    $\\forall a,b \\in \\R \\backslash \\Q, a < b \\implies \\exists x \\in \\Q : a < x < b$\n}\n\\begin{theorem}\n    $\\forall a,b \\in \\R \\backslash \\Q, a < b \\implies \\exists x \\in \\Q : a < x < b$\n\\end{theorem}\n\\begin{proof}\n    By the Archimedean property, there exists $n \\in \\N$ with $\\frac{1}{n} < b - a$. Let $m$ be the \n    smallest integer with $m > na$; then $na < m \\leq na + 1 < nb$, so $x = \\frac{m}{n} \\in \\Q$ \n    satisfies $a < x < b$. Therefore $a < b \\implies \\exists x \\in \\Q : a < x < b$.\n\\end{proof}\n\n\\newpage\n% Problem 6\n\\section{}\nProve that for any $n$:\n% Part a\n\\subsection{}\n\\begin{theorem}\\label{thm:n_lines}\n    For any configuration of $n$ straight lines on a plane, one can \n    color the plane in two colors so that every two parts sharing a boundary \n    would have different colors.\n\\end{theorem}\n\\begin{proof}\n    Proof by induction:\\\\\n    For $n=1$:\n    one line splits the plane into two regions, which can be painted in different \n    colors, so Theorem \\ref{thm:n_lines} holds.\n    For $n>1$:\n    suppose the regions formed by $n$ lines are validly two-colored, and add the \n    $(n+1)$-st line. Flip the color of every region on one side of the new line and \n    leave the other side unchanged. Two regions sharing a boundary segment of the \n    new line came from the same old region and now differ; any other pair of adjacent \n    regions lies entirely on one side of the new line, so they still differ.\n    Therefore Theorem \\ref{thm:n_lines}$(n)$ $\\implies$ Theorem \\ref{thm:n_lines} $(n+1)$.\n    Therefore, $\\forall n \\in \\N$, Theorem \\ref{thm:n_lines} is true.\n\\end{proof}\n\n% Part b\n\\subsection{}\n\\begin{theorem}\n    For any set of $n$ squares, one can partition them into a finite number of pieces \n    which can be reassembled into a single square.\n\\end{theorem}\n\\begin{proof}\n    First, when $n=1$, the square is already a square.\\\\\n    For the inductive step, suppose any $n$ squares can be cut and reassembled into a \n    single square $S$. The $(n+1)$-st square can be cut into finitely many pieces that \n    fill an $L$-shaped border around $S$ (each arm of the $L$ is a rectangle, and a \n    square can be dissected into any rectangle of equal area using finitely many \n    pieces), so that $S$ together with this border forms a larger square.\n\\end{proof}\n\n% Part c\n\\subsection{What's wrong with this theorem?}\n\\textbf{Theorem 1.} All the numbers are equal. In other words, \nthe statement $P_n$ is true for all $n\\in \\N$, where $P_n$ is \n``if $\\{a_1,a_2,\\dots,a_n\\}$ is a collection of $n$ numbers, \nthen $a_1 = a_2 = \\cdots = a_n$''.\\\\\n\n\\textit{Proof.} $P_1$ is true. Let $\\{a_1,a_2,\\dots,a_{n+1}\\}$ be a collection of \n$n+1$ numbers. Consider the collection $\\{a_1,a_2,\\dots,a_n\\}$. Thanks to $P_n$ \nit follows that $a_1 = \\cdots = a_n$ denote this number by $b$. \nNow, consider the collection $\\{a_2,a_3,\\dots,a_{n+1}\\}$.\nIt contains $n$ numbers, and hence by $P_n$ we get $a_2 = \\cdots = a_n = a_{n+1}$.\nBut $a_2 = \\cdots = a_n = b$, therefore $a_{n+1} = a_n = b$. \nSo $a_1 = \\cdots = a_{n+1} = b$ and $P_{n+1}$ follows.\\\\\n\nA lot of issues exist, but the central one is that the two sub-collections \n$\\{a_1,\\dots,a_n\\}$ and $\\{a_2,\\dots,a_{n+1}\\}$ must share an element for the \nargument to force $a_{n+1} = b$, and they are disjoint when $n = 1$; the step \nfrom $P_1$ to $P_2$ therefore fails. 
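Concretely (an illustrative instance of the failure): for $n = 1$ and the collection $\\{a_1, a_2\\}$, the two sub-collections are $\\{a_1\\}$ and $\\{a_2\\}$, which share no element, so nothing forces $a_1 = a_2$. 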
\nThe statement of $P_n$ also does not specify from which number system the $a_i$ are selected.\n\n\\end{document}\n", "meta": {"hexsha": "a5d4d55af53d7acfff99280df60a134f196ccb43", "size": 24002, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW4/MATH5301-HW4.tex", "max_stars_repo_name": "jonaswagner2826/MATH5301", "max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z", "max_issues_repo_path": "Homework/HW4/MATH5301-HW4.tex", "max_issues_repo_name": "jonaswagner2826/MATH5301", "max_issues_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW4/MATH5301-HW4.tex", "max_forks_repo_name": "jonaswagner2826/MATH5301", "max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.1422475107, "max_line_length": 136, "alphanum_fraction": 0.4697108574, "num_tokens": 11126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316860482763, "lm_q2_score": 0.8757869900269366, "lm_q1q2_score": 0.5888193436239553}}
{"text": "\\section{Measure and Outer Measure}\n\\subsection{Outer Measure and Measurability}\n  \\paragraph{1.}\n  \\begin{proof}\n    Suppose that $E\\subset X$ and there is a measurable $B$ with $\\bar{\\mu}B=0$\n    such that $E\\subset B$. We show that $E$ is measurable, that is, for every\n    $A\\subset X$ with finite outer measure, $\\mu^*(A)\\ge\\mu^*(A\\cap E)+\\mu^*(\n    A\\cap E^c)$. Since $A\\cap E\\subset E\\subset B$ and $\\mu^*$ is monotone,\n    $\\mu^*(A\\cap E)\\le\\mu^*(B)=\\bar{\\mu}B=0$. Again by the monotonicity, $\\mu^*(\n    A)\\ge\\mu^*(A\\cap E^c)$. Thus, $E$ is measurable, implying that $\\bar{\\mu}$\n    is complete.\n  \\end{proof}\n\n  \\paragraph{2.}\n  \\begin{proof}\n    From the countable subadditivity we obtain that $\\mu^*(A\\cap E)\\le\\sum\\mu^*\n    (A\\cap E_i)$. For the converse, first we consider just $E_1$ and $E_2$. \n    Since $E_1$ is measurable,\n    \\[\n      \\mu^*(A\\cap E)=\\mu^*(A\\cap E\\cap E_1)+\\mu^*(A\\cap E\\cap E_1^c)\n      \\ge\\mu^*(A\\cap E_1)+\\mu^*(A\\cap E_2).\n    \\]\n    By induction on $n$ we get $\\mu^*(A\\cap E)\\ge\\sum_{i=1}^n\\mu^*(A\\cap E_i)$.\n    Let $n\\to\\infty$ and the proof is completed.\n  \\end{proof}\n% end\n\\subsection{The Extension Theorem}\n  \\paragraph{4.}\n  \\begin{proof}\n    $\\,$\\par\n    (a) Since $\\{D_j\\}$ partitions $A$, $C_i\\subset A$ for each $i$ and \n    $\\mcal{C}$ is closed under intersection, by condition (i), $\\mu C_i=\\sum_j \n    \\mu(C_i\\cap D_j)$. Similarly for $\\mu D_j$. Thus,\n    \\[\n      \\sum_i\\mu C_i=\\sum_{i,j}\\mu(C_i\\cap D_j)=\n      \\sum_j\\sum_i\\mu(C_i\\cap D_j)=\\sum_j\\mu D_j.\n    \\]\n    This result implies that the definition of $\\mu$ on $\\mcal{A}$ is \n    well-defined. \\par\n    (b) Since every $A\\in\\mcal{A}$ is a finite union of sets of $\\mcal{C}$, it\n    suffices to show that $\\mu C\\ge\\sum\\mu C_i$. Then, from this condition, the\n    countably additivity follows. Since $\\mu$ is nonnegative and monotone, \n    $\\mu C\\ge\\sum_{i=1}^n\\mu C_i$ for all positive integer $n$. Let $n\\to\\infty$\n    and we get $\\mu C\\ge\\sum\\mu C_i$.\n  \\end{proof}\n\n  \\paragraph{7.}\n  \\begin{proof}\n    To prove the \"if\" part, let $\\vep_n=1/n$ and $A_n\\in\\mcal{A}_\\delta$ be such\n    that $\\mu^*(E\\setminus A_n)<\\vep_n$. Put $A=\\bigcup A_n$. Since the \n    collection of $\\mu^*$-measurable sets is a $\\sigma$-algebra, $A$ is \n    measurable. Meanwhile, since $\\mu^*(E\\setminus A)\\le\\mu^*(E\\setminus \n    A_n)<\\vep_n$, $E\\setminus A$ is of $\\mu^*$-measure zero. Hence, it is \n    measurable. Thus, $E=A\\cup(E\\setminus A)$ is also measurable.\\par\n    For the converse, suppose that $E$ is measurable and let $\\vep>0$ be fixed.\n    Note that $E^c$ is also measurable. Hence, by Prop. 
6, there is a set $A\\in\n    \\mcal{A}_\\sigma$ with $E^c\\subset A$ and\n    \\[\n      \\vep>\\mu^*(A\\setminus E^c)=\\mu^*(A\\cap E).\n    \\]\n    Then, $A^c\\in\\mcal{A}_\\delta$, $A^c\\subset E$ and $\\mu^*(E\\setminus A^c)=\n    \\mu^*(E\\cap A)<\\vep$.\n  \\end{proof}\n% end\n\n", "meta": {"hexsha": "a77fdb11f50f9c9cf29f90ad51ca91f66ba9951a", "size": 2851, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real_analysis_3rd/ch12_measure_and_outer_measure.tex", "max_stars_repo_name": "Engineev/solutions", "max_stars_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-07-13T08:36:44.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T17:37:17.000Z", "max_issues_repo_path": "real_analysis_3rd/ch12_measure_and_outer_measure.tex", "max_issues_repo_name": "Engineev/solutions", "max_issues_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real_analysis_3rd/ch12_measure_and_outer_measure.tex", "max_forks_repo_name": "Engineev/solutions", "max_forks_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-28T00:05:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-28T00:05:28.000Z", "avg_line_length": 43.196969697, "max_line_length": 80, "alphanum_fraction": 0.6075061382, "num_tokens": 1109, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8267117983401363, "lm_q1q2_score": 0.5888107498679032}}
{"text": "\\chapter{Kepler data processing}\n\\label{Chapter3}\n\n\\section{Fitting eclipsing binary minima}\nTo obtain precise time of minima in eclipsing binary systems we need to define function that will fit minima shape with best precision.\nThere is phenomenological and physical methods that allow fo fit all light curve. Because we have a large number of eclipsing binary systems in Kepler EB catalogue we prefer to use phenomenological fitting of only minima parts of light curves.\n\n\\subsection{Gauss and Lorentzian fit}\nThe main advantage of this two function is their simplicity and ability to get coordinates of centre ($x_{c}$) after fitting light curve.\nWith another more complex functions that are discussed below we must to derivative it before we can find $x_{c}$.\n\nGaussian function:\n\\begin{equation}\\label{eq:gaus}\nF(\\varphi)= A\\cdot \\exp(-\\frac{(\\varphi - x_{c})^2}{2\\sigma^2})\n\\end{equation}\n\nLorentzian function: \n\\begin{equation}\\label{eq:lorentz}\nF(\\varphi)= A \\frac{w^2}{w^2 + (\\varphi - x_{c})^2}\n\\end{equation}\n\nFunctions perform good fit when EB minima has shape similar to normal distribution. In case of narrow minima such functions have a slightly larger error. Also function perform good fit when we fit only minima part but not all light curve. \n\n\\subsection{Mikulasek phenomenological model}\nMethod is aimed to establish a general model of LCs of eclipsing\nsystems (ES; both eclipsing binaries and stars with transiting\nplanets) that could fit the LCs with an accuracy of 1\\% of their\namplitudes or better. \n\nThe model function of a monochromatic light curve (expressed in magnitudes) of eclipsing\nsystems $F(\\varphi, \\lambda)$, can be assumed as the sum of three particular functions:\n\n\\begin{equation} \\label{eq:mik_general}\nF(\\varphi, \\lambda) = F_{e}(\\varphi, \\lambda) + F_{p}(\\varphi, \\lambda) + F_{c}(\\varphi, \\lambda)\n\\end{equation}\n\nwhere $F_{e}$ describes the mutual eclipses of the components, $F_{p}$ models the proximity effects,\nwhile $F_{c}$ approximates the O\u2019Connell effect (irrespectively of its physical cause)\n\nThe profiles of both minima\nare complex functions determined primarily by the geometry\nof the system and the relative brightness of components in\na given spectral region centred at the effective wavelength $\\lambda_{eff}$.\nThe contribution of eclipses $F_{e}(\\vartheta, \\lambda_{eff})$ to an ES light curve can\nbe approximated by a sum of two special periodic functions of\nphase function $\\vartheta$. In the case of circular orbits, eclipses are exactly\nsymmetrical around their centres at phases $\\varphi_{01}$ and $\\varphi_{02}$. If\nwe put the origin of the phase function $M_{0}$ at the time of the\nprimary minimum, then $\\varphi_{01} = 0$, $\\varphi_{02} = 0.5$.\nThe model function was selected so that it describes as aptly\nas possible those parts of LCs that are in the vicinity of their\ninflex points, where their slopes are maximum. 
The functions are parameterized by their widths $D_{1}, D_{2}$, eclipse LC kurtosis coefficients $\\Gamma_{1}, \\Gamma_{2}$, dimensionless correcting factors $C_{1},C_{2}$, and central depths $A_{1}(\\lambda_{eff}), A_{2}(\\lambda_{eff})$:\n\n\\begin{equation} \\label{eq:mik_main}\nF_{e}(\\vartheta, \\lambda_{eff})=\\sum_{k=1}^{n_{e}} A_{k} \\left( 1+C_{k} \\frac{\\varphi_{k}^2}{D_{k}^2}\\right) \n\\left\\lbrace 1-\\left\\lbrace \n1-\\exp\\left[ 1-\\cosh\\left(\\frac{\\varphi_{k}}{D_{k}}\\right)\\right] \n\\right\\rbrace^{\\Gamma_{k}}\\right\\rbrace \n\\end{equation}\n\n\\begin{equation} \\label{eq:mik_2}\n\\varphi_{k} = \\vartheta - 0.5 (k - 1) - \\mathrm{round} \\left[ \\vartheta - 0.5 (k - 1)\\right] ,\n\\end{equation}\n\nwhere the summation is over the number of eclipses during one cycle, $n_{e}$: $n_{e} = 2$ or $n_{e} = 1$ (the common situation for exoplanet transits). Each eclipse in a given colour is thus described by only four parameters \u2013 its depth $A$, width $D$, kurtosis $\\Gamma$, and the correcting parameter $C$ \\parencite{mikulasek2015}.\n\nIn the case of EBs with two minima in a cycle ($n_{e} = 2$), we need eight parameters, but sometimes the number of needed parameters can be smaller. Inspecting the parameters $D, \\Gamma$, and $C$ for both eclipses of many EBs, we have concluded that they are as a rule nearly the same: especially $D_{1} \\cong D_{2}$, $\\Gamma_{1} \\cong \\Gamma_{2}$, and $C_{1} \\cong C_{2}$. We therefore usually need only five monochromatic parameters ($A_{1}, A_{2}, D, \\Gamma, C$). The parameter $C$ is mostly comparable to its uncertainty, so we can neglect it entirely. Then we need just four parameters! On the other hand, in EBs with totalities we see that the bottoms of their occultations are flat whilst transits are convex. This can be described by introducing different parameters $C_{1},C_{2}$.\n\nThe LCs of exoplanet transits ($n_{e} = 1$) need only four parameters ($A, D, \\Gamma, C$); in cases of very precise measurements, we add another dimensionless parameter $K$:\n\n\\begin{equation}\\label{eq:mik_3}\nF_{e} = A\\left( 1+C \\frac{\\varphi^2}{D^2}+K\\frac{\\varphi^4}{D^4}\\right) \n\\left\\lbrace 1- \\left\\lbrace 1-\\exp\\left[ 1-\\cosh \\left( \\frac{\\varphi}{D}\\right) \\right] \\right\\rbrace ^{\\Gamma} \\right\\rbrace  \n\\end{equation}\n\nTesting several dozen LCs of various types of ESs, we found that the standard deviation of the fit is typically well below one percent. The only minor inconvenience is the existence of a spike (a jump in derivatives) at mid-eclipse for LCs with $\\Gamma < 1$.\n\nThe contribution of proximity effects $F_{p}(\\vartheta)$ should be an even function symmetric about the phases 0.0 and 0.5; consequently it can be satisfactorily expressed as a linear combination of $n_{p}$ elementary cosine functions $\\cos(2\\pi \\vartheta)$, $\\cos(4\\pi \\vartheta)$, $\\cos(6\\pi \\vartheta)$, etc. The even terms are the consequence of the ellipticity of tidally interacting components, whilst the odd terms result from the differences between the near and far sides of the components. As a rule we can limit ourselves only to the first two or three terms in $F_{p}$ \\parencite{russell1952,kallrath1999}. 
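To make the compactness of the eclipse model concrete, the following short Python sketch (illustrative only: the parameter values are hypothetical and NumPy is assumed to be available) evaluates the eclipse term $F_{e}$ of Eq. \\ref{eq:mik_main} for a system with two eclipses per cycle:\n\\begin{verbatim}\nimport numpy as np\n\ndef F_e(theta, A, D, Gamma, C):\n    # Eclipse profile for n_e = 2; A, D, Gamma, C hold the\n    # (primary, secondary) depths, widths, kurtoses, corrections.\n    out = np.zeros_like(theta, dtype=float)\n    for k in (1, 2):\n        phi = theta - 0.5*(k - 1) - np.round(theta - 0.5*(k - 1))\n        core = 1.0 - (1.0 - np.exp(1.0 - np.cosh(phi / D[k-1])))**Gamma[k-1]\n        out += A[k-1] * (1.0 + C[k-1] * phi**2 / D[k-1]**2) * core\n    return out\n\ntheta = np.linspace(-0.25, 0.75, 1001)  # phase function over one cycle\nmags = F_e(theta, A=[0.8, 0.3], D=[0.05, 0.04],\n           Gamma=[1.5, 1.5], C=[0.0, 0.0])\n\\end{verbatim}\nThe per-eclipse parameters discussed above appear directly as the function arguments; a least-squares routine can then fit them to an observed light curve. 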
\nThe O\u2019Connell effect contribution $F_{c}(\\vartheta)$ can be modelled well by a simple sinusoid \\parencite{davidge1984, wilsey2009}:\n\n\\begin{equation}\\label{eq:mik_Fp}\nF_{p}  = \\sum_{k=n_{e}+1}^{n_{p}+n_{e}} A_{k}\\cos \\left[ 2\\pi(k-n_{e})\\vartheta\\right]\n\\end{equation}\n\n\\begin{equation}\\label{eq:mik_Fc}\nF_{c}  = \\sum_{k=n_{p}+n_{e}+1}^{n_{c}+n_{p}+n_{e}} A_{k}\\sin(2\\pi\\vartheta)\n\\end{equation}\n\nwhere $n_{p}$ is the number of terms in $F_{p}(\\vartheta)$: $n_{p} = 0, 1, 2, 3,\\ldots $, and $n_{c} = 0$ if the O\u2019Connell asymmetry is not present, otherwise $n_{c} = 1$. (A cosine term in $F_{c}$ would duplicate the first term of $F_{p}$; the sine term is what captures the asymmetry between the maxima.)\n\n\\subsection{Andronov phenomenological model}\nThe basic function (\u201especial shape\u201c) for the eclipse is $H(z)=(1-\\left| z\\right|^\\beta)^{3/2}$, $-1\\leq z \\leq +1$, where $\\beta$ is the parameter describing behaviour close to the mid-eclipse (0 \u2013 very narrow, 1 \u2013 triangular, 2 \u2013 parabolic, $\\gg2$ \u2013 flat); in Eq. \\ref{eq:andronov} below it appears as $C_{9}$ and $C_{10}$ for the primary and secondary eclipse, respectively.\n\nThe complete function includes a TP2 part (a trigonometric polynomial of the second order), which approximates three effects: reflection, ellipticity, and the O\u2019Connell effect. It has 12 parameters, including two for the corrected initial epoch and the period (Andronov, Tkachenko \\& Chinarova, 2016):\n\n\\begin{equation}\\label{eq:andronov}\n\\begin{split}\nx(\\phi)=C_{1}+C_{2}\\cos(2\\pi(\\phi-\\phi_{0}))+ C_{3}\\sin(2\\pi(\\phi-\\phi_{0}))+C_{4}\\cos(4\\pi(\\phi-\\phi_{0}))+ \\\\\nC_{5}\\sin(4\\pi(\\phi-\\phi_{0}))+ C_{6}H((\\phi-\\phi_{0})/C_{8};C_{9})+ C_{7}H((\\phi-\\phi_{0}-0.5)/C_{8};C_{10})\n\\end{split}\n\\end{equation}\n\nIn previous works \\parencite{andronov2012}, $\\phi_{0}=0$ was used, but in \\cite{andronov2016} the authors added two parameters ($C_{11}$, $C_{12}$) to correct the initial epoch and the period. Of course, when needed, it is possible to add more parameters to describe possible period changes.\n\nEven if more complicated models may be comparable in accuracy, the advantage of this method is the small number of NAV (\u201cNew Algol Variable\u201d) parameters and their clear physical meaning. \n\n", "meta": {"hexsha": "c2ddcd33e01f348a9331763052fc77e6ffc5fd41", "size": 7953, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/Chapter3_none.tex", "max_stars_repo_name": "vkudak/PhD", "max_stars_repo_head_hexsha": "898b0dfc86b04471c92050253c59c0c874e92c24", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/Chapter3_none.tex", "max_issues_repo_name": "vkudak/PhD", "max_issues_repo_head_hexsha": "898b0dfc86b04471c92050253c59c0c874e92c24", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/Chapter3_none.tex", "max_forks_repo_name": "vkudak/PhD", "max_forks_repo_head_hexsha": "898b0dfc86b04471c92050253c59c0c874e92c24", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.2291666667, "max_line_length": 243, "alphanum_fraction": 0.7374575632, "num_tokens": 2453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.588810745795085}}
{"text": "Previous chapters addressed the problem of robotic motion planning and control by leveraging techniques from control theory and optimal control. In particular, these techniques were used to generate open and closed-loop control laws to accomplish specific tasks such as trajectory generation, trajectory tracking, and stabilization or regulation about a particular robot state. One common component among all of these algorithms was the use of a model of the robot's kinematics or dynamics, which mathematically defines how the robot transitions from state to state based on control inputs.\n\nIn this chapter yet another set of algorithms for motion planning/trajectory generation is discussed\\cite{LaValle2006}. These algorithms are particularly well suited for higher-level motion planning tasks, such as motion planning in environments with obstacles. This is accomplished by focusing on formulating the motion planning problem for a robot with respect to the robot's \\textit{configuration space} rather than the \\textit{state space} that was used in previous chapters. While the robot's configuration is derivable from its state (and still characterizes all of the robot's degrees of freedom), the definition of the configuration space can be useful because it can be tailored to collision avoidance tasks\\footnote{In some cases the choice of configuration and state may end up being the same.}.\nHistorically speaking, these approaches were developed alongside many of the techniques from previous chapters, and are still being researched today.\n\n\\notessection{Search-Based Motion Planning}\nRecall the general definition of the motion planning problem:\n\\begin{definition}[Motion planning problem]\nCompute a sequence of actions to go from an initial condition to a terminal condition while respecting constraints and possibly optimizing a cost function.\n\\end{definition}\nPrevious chapters approached this problem by formulating mathematical optimization problems that minimized a cost function subject to constraints on the motion (i.e. from dynamics/kinematics, control limits, or conditions on the robot's state), or leveraged differential flatness properties of the model. In these approaches, the robot's trajectory was parameterized by its state $\\x$ and the corresponding control inputs $\\bu$ which satisfied a set of differential equations\n\\begin{equation*}\n    \\dot{\\x} = f(\\x, \\bu).\n\\end{equation*}\n\nIn this chapter, the motion planning problem will instead be addressed with respect to a \\textit{configuration space} ($C$-space).\nThe configuration $\\q$ of a robot is derivable from the full dynamics state $\\x$ and captures all of the degrees of freedom of the robot (i.e. all rigid body transformations). In some cases the state and configuration of the robot may be the same, but in other cases the definition of the configuration can be tailored to simplify the motion planning problem. One important example of this is for \\textit{geometric path planning}, where paths in the configuration space can be planned without considering the robot kinematic/dynamics model.\n\n\\begin{example}[Motivating Example] \\label{ex:mot}\n\\theoremstyle{definition}\nConsider the L-shaped robot from Figure \\ref{fig:2dworkspace} that lives in a 2D world with obstacles, and is trying to get from one point to another. 
Additionally, suppose this robot has a state $\\x = [x,y,\\theta,\\dot{x}, \\dot{y}, \\dot{\\theta}]^\\top $, and consider a configuration space defined by $\\q = [x,y,\\theta]^\\top $ which fully captures the robot's degrees of freedom. Since the motion planning problem in this case involves obstacle avoidance, it might be easier to just plan a sequence of configurations $\\q$ that are collision free (as is shown in the right-side graphic of Figure \\ref{fig:2dworkspace}).\n\nIn this case, the use of the configuration space has simplified the motion planning problem by abstracting away the consideration of the robot's dynamics. Once the geometric path has been defined in configuration space, other techniques (such as those discussed in previous chapters) could be used to perform lower-level control functions for path tracking.\n\\begin{figure*}[ht] \n    \\centering \n    \\includegraphics[width=0.85\\linewidth]{tex/figs/ch05_figs/2d_ws_obstacles.png}\n    \\caption{Motivating example: motion planning in a 2D workspace with obstacles.}\n    \\label{fig:2dworkspace} \n\\end{figure*}\n\nAdditionally, it is important to note that the $C$-space is a subset of $\\R^3$, and in particular the $C$-space is $\\R^2 \\times \\mathcal{S}^1$. This subspace is special because it includes the \\textit{manifold} $\\mathcal{S}^1$, which characterizes the fact that the rotational degree of freedom $\\theta$ satisfies $\\theta = \\theta \\pm 2\\pi k$ for all $k = 1,2,\\dots$. This distinction is important to make because it endows the planner with the ability to move from one angle to another in two different ways (i.e. the robot can turn left or turn right). For example, suppose the robot in Figure \\ref{fig:2dworkspace} has a current heading of $\\theta_0$ and wants to move to a heading $\\theta_g$ subject to the constraint of avoiding a $C$-space obstacle (see Figure \\ref{fig:rotational-dof-fig}). If the equivalence between the angles 0 and $2\\pi$ is not established in the definition of the configuration space, the robot would not be able to traverse a collision-free path to the desired heading in the configuration space (see red trajectory). Instead, since the configuration space is defined with respect to $\\mathcal{S}^1$, the robot is able to achieve the desired heading (see green trajectory).\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{tex/figs/ch05_figs/S1_obstacle.png}\n\\caption{Example trajectory planning where the description of the configuration space using the manifold $\\mathcal{S}^1$ is crucial to path planning. In particular, rotating clockwise leads to collision but rotating counter-clockwise is a feasible path.}\n\\label{fig:rotational-dof-fig}\n\\end{center}\n\\end{figure}\n\n\\end{example}\n\nIn this chapter two types of motion planning algorithms that plan in the configuration space will be discussed. The first class consists of \\textit{grid-based methods}, and the second class consists of methods referred to as \\textit{combinatorial planners}.\n\n\n\\subsection{Grid-based Motion Planners}\nSuppose the robot's configuration $\\q$ is a $d$-dimensional vector; then the $C$-space is a subset of $\\R^d$. Critically, this is a continuous space and therefore there are an infinite number of potential configurations the robot could be in. To simplify this problem, grid-based motion planners use a grid to discretize the $C$-space into a finite number of allowable configurations. 
For example, in a simple $C$-space in two dimensions the grid might look like that shown in Figure \\ref{fig:grid}.\n\\begin{figure}[ht] \n    \\centering \n    \\includegraphics[width=0.65\\linewidth]{tex/figs/ch05_figs/2d_ws_grid.png}\n    \\caption{Discretizing the configuration space using a grid.}\n    \\label{fig:grid} \n\\end{figure} \nIn grid-based planners, undesirable configurations are simply represented by marking some cells of the grid as forbidden (e.g. for obstacle avoidance). The dynamics/kinematics of the robot are also abstracted away and it is assumed that the robot has the ability to move freely between adjacent cells (configurations).\nAfter this discretization, the resulting motion planning problem is sometimes referred to as a \\textit{discrete planning} problem because only a finite number of options are available at each step, and only a finite number of configurations are possible. The planning problem then reduces to finding a way to traverse through the cells from the initial configuration to a desired final configuration.\n\nMathematically, problems of this type are commonly expressed using discrete \\textit{graphs}. A graph $G = (V,E)$ is simply defined by a set of vertices $V$ and a set of edges $E$. In the context of grid-based motion planners, each vertex $v \\in V$ represents a \\textit{free} cell of the grid, and each edge $(v,u) \\in E$ corresponds to a connection between adjacent cells. With the graph representation, the planning problem is to find a way to traverse through the graph to reach the desired vertex. Algorithms for solving such problems are referred to as \\textit{graph search methods}.\n\nThe advantages of such approaches are that they are simple and easy to use, and for some problems can be very fast. The disadvantages are primarily the result of the discretization procedure. In some cases, if the resolution of the grid is not fine enough the search algorithm may not always be able to find a solution. Additionally, for a fixed resolution the size of the graph grows exponentially with respect to the dimension of the configuration space. Therefore this approach is generally limited to simple robots with a low-dimensional configuration space.\n\n\\subsubsection{Label Correcting Algorithms}\nSince the graph is defined by a \\textit{finite} number of vertices (also referred to as \\textit{nodes}) and edges, it should be theoretically possible to solve a graph search problem in finite time. However, in order to achieve this in practice, several simple ``accounting'' tricks need to be used to keep track of how the search has progressed and to avoid redundant exploration. Additionally, it is desirable to find a ``best'' path, and so a mechanism for keeping track of the current best path is required during the search.\n\nA general set of algorithms known as \\textit{label correcting algorithms} employ such accounting techniques to guarantee good performance. In these algorithms, the notion of a ``best'' path is tracked in terms of a cost-of-arrival.\n\\begin{definition}[Cost-of-Arrival]\nThe cost-of-arrival associated with a vertex $q$ with respect to a starting vertex $q_I$ is the cost associated with taking the best known route from $q_I$ to $q$ along edges of the graph, and is denoted $C(q)$. \n\\end{definition}\nAdditionally, in a slight abuse of notation the cost of traversing an edge from vertex $q$ to vertex $q'$ is denoted as $C(q,q')$. 
To keep track of what nodes have already been visited and which still need further exploration, label correcting algorithms define a set of \\textit{frontier vertices} (sometimes also referred to as \\textit{alive}). This allows guarantees to be made that the search algorithm will avoid redundant exploration, and will terminate in finite time. It also guarantees that if a path from the initial vertex $q_I$ to the goal vertex $q_G$ exists, it will be found.\n\nIn general, label correcting algorithms take the following steps to find the best path from an initial vertex $q_I$ to a desired vertex $q_G$\\footnote{In terms of robot motion planning this would be a search over paths through the discretized configuration space. Therefore the vertices of the graph are referred to as $q$ to better connect this abstraction with their physical interpretation being a particular robot configuration $\\q$.}:\n\n\\begin{enumerate}\n    \\item Initialize the set of frontier vertices as $Q = \\{q_I\\}$ and set $C(q_I) = 0$. Initialize the cost-of-arrival of all other vertices $q'$ as $C(q') = \\infty$.\n    \\item Remove a vertex $q$ from $Q$ and explore each of its connected vertices $q'$. For each $q'$, determine the candidate cost-of-arrival $\\tilde{C}(q')$ associated with moving from $q$ to $q'$ as $\\tilde{C}(q') = C(q) + C(q,q')$. If the candidate cost-of-arrival $\\tilde{C}(q')$ is lower than the current cost-of-arrival $C(q')$ AND is lower than the current cost-of-arrival $C(q_G)$, then set $C(q') = \\tilde{C}(q')$, define $q$ as the parent of $q'$, and add $q'$ to the set $Q$ if $q'$ is not $q_G$.\n    \\item Repeat step 2 until the set of frontier vertices $Q$ is empty. \n\\end{enumerate}\n\nThe bulk of the work is done in Step 2. In particular, for the selected $q$ from $Q$, these algorithms search its connected neighbors $q'$ to see if moving from $q$ to $q'$ will lead to a lower overall cost than previously found paths to $q'$. This is why the algorithms are called ``label correcting'', since they ``correct'' the cost-of-arrival as better paths are found throughout the search process. Once the best path from $q_I$ to $q$ is found, $q$ will never again be added to the set $Q$, and therefore the algorithm is guaranteed to terminate.\n\n\\begin{theorem}[Label Correcting Algorithms]\nIf a feasible path exists from $q_I$ to $q_G$, then the label correcting algorithm will terminate in finite time with\n$C(q_G)$ equal to the optimal cost of traversal, $C^*(q_G)$.\n\\end{theorem}\n\nThe primary way in which label correcting algorithms differ from each other is in how they select the next vertex $q$ from the set of frontier nodes $Q$. In fact, the set $Q$ is often referred to as a \\textit{priority queue} since the algorithm might assign priority values to the order in which vertices are selected. Different approaches for prioritizing include \\textit{depth-first search}, \\textit{breadth-first search}, and \\textit{best-first search}.\n\n\\paragraph{Depth-First Search}\nDepth-first search in a directed graph expands each node up to the deepest level of the graph, until a chosen node has no more successors. 
\\paragraph{Depth-First Search}\nDepth-first search in a directed graph expands each node up to the deepest level of the graph, until a chosen node has no more successors. Another way to think about this in terms of the set $Q$ is ``last in/first out'': whenever a vertex $q$ is selected from $Q$, it is one of the most recently added vertices.\n\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{tex/figs/ch05_figs/depth_first.PNG}\n\\caption{Depth-First Search}\n\\end{center}\n\\end{marginfigure}\n\n\\paragraph{Breadth-First Search}\nBreadth-first search begins with the start node and explores all of its neighboring nodes. Then for each of these nodes, it explores all their unexplored neighbors and so on. In terms of $Q$, this is like storing the frontier nodes as a queue (``first in/first out''): the first node added is the first node selected.\n\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{tex/figs/ch05_figs/breadth_first.PNG}\n\\caption{Breadth-First Search}\n\\end{center}\n\\end{marginfigure}\n\n\\paragraph{Best-First Search}\nAlso commonly known as \\textit{Dijkstra\u2019s algorithm}, this approach greedily selects vertices $q$ from $Q$ by looking at the current best cost-of-arrivals. Mathematically,\n\\begin{equation*}\n   q = \\argmin_{q' \\in Q} C(q').\n\\end{equation*}\nThis approach is sometimes considered an ``optimistic'' approach since it is essentially making the assumption that the best current action will always correspond to the best overall plan.\nIn practice this approach typically provides a more efficient search procedure relative to depth-first or breadth-first approaches because it can account for the cost of the path; however, additional improvements can be made.\n\n\\subsubsection{A* Algorithm}\nA* is a label correcting algorithm that is a modified version of Dijkstra's algorithm. In Dijkstra's algorithm the goal vertex $q_G$ is not taken into account, potentially leading to wasted effort in cases where the greedy choice makes no progress towards the goal. This notion of progress is quantified by the \\textit{cost-to-go}.\n\n\\begin{definition}[Cost-to-Go]\nThe cost-to-go associated with a vertex $q$ with respect to a goal vertex $q_G$ is the cost associated with taking the best known route from $q$ to $q_G$ along edges of the graph.\n\\end{definition}\nIn practice, the cost-to-go is not usually known, and therefore \\textit{heuristics} are used to provide approximate cost-to-go values $h(q)$. In order for the heuristic to be useful, it must be a nonnegative \\textit{underestimate} of the true cost-to-go. An example of such a heuristic $h$ is to simply use the straight-line distance to the goal.\n\nWhile Dijkstra's algorithm only prioritizes a vertex $q$ based on its cost-of-arrival $C(q)$, A* prioritizes based on cost-of-arrival $C(q)$ plus an approximate cost-to-go $h(q)$. This provides a better estimate of the total quality of a path than just using the cost-of-arrival alone. 
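\n\nTo make the priority $f(q) = C(q) + h(q)$ concrete before the formal statement, here is a minimal Python sketch (a reconstruction, not the course implementation; \\texttt{edges}, \\texttt{cost}, and the heuristic \\texttt{h} are assumed inputs):\n\\begin{verbatim}\nimport heapq, math\n\ndef astar(edges, cost, h, qI, qG):\n    # edges[q]: neighbors of q; cost[(q, q2)]: edge cost C(q, q2);\n    # h(q): heuristic estimate of the cost-to-go.\n    C = {qI: 0.0}\n    parent = {}\n    Q = [(h(qI), qI)]            # frontier, ordered by f = C + h\n    while Q:\n        _, q = heapq.heappop(Q)\n        if q == qG:              # goal reached: rebuild the path\n            path = [q]\n            while q in parent:\n                q = parent[q]\n                path.append(q)\n            return path[::-1]\n        for q2 in edges.get(q, ()):\n            c_new = C[q] + cost[(q, q2)]\n            if c_new < C.get(q2, math.inf):\n                C[q2], parent[q2] = c_new, q\n                heapq.heappush(Q, (c_new + h(q2), q2))\n    return None                  # no path exists\n\\end{verbatim}\n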
The A* algorithm is defined in Algorithm \\ref{alg:astar}.\n\\begin{algorithm}[ht]\\caption{A* Algorithm} \\label{alg:astar}\n\t\\KwData{$q_{I}$, $q_{G}$, $G$}\n\t\\KwResult{path}\n\t$C(q) = \\infty, \\:\\: f(q) = \\infty, \\:\\:  \\forall q$ \\\\\n\t$C(q_I) = 0$, $f(q_I) = h(q_I)$ \\\\\n\t$Q = \\{q_I\\}$ \\\\\n\t\\While{$Q$ is not empty}{\n\t    $q = \\argmin_{q' \\in Q}f(q')$ \\\\\n\t    \\If{$q = q_{G}$}{\n\t        \\Return{path}\n\t    }\n\t    $Q$.remove($q$)\\\\\n\t    \\For{$q' \\in \\{q' \\: | \\: (q,q') \\in E\\}$}{\n\t        $\\tilde{C}(q') = C(q) + C(q,q')$ \\\\\n\t        \\If{$\\tilde{C}(q') < C(q')$}{\n\t        $q'$.parent = $q$\\\\\n\t        $C(q') = \\tilde{C}(q')$\\\\\n\t        $f(q') = C(q') + h(q')$\\\\\n\t        \\If{$q' \\not\\in Q$}{\n\t            $Q$.add($q'$) \\\\\n\t        }\n\t        }\n\t    }\n\t}\n\t\\Return{failure}\n\\end{algorithm}\nNote that in the case that the heuristic is chosen to be $h(q) = 0$ for all $q$, A* reduces to Dijkstra's algorithm.\n\n\n\\subsection{Combinatorial Motion Planning} \nCombinatorial approaches to motion planning find paths through the continuous configuration space without resorting to discretizations like in grid-based planners. Recall that in grid-based planners, cells in the discretized configuration space that were undesirable were blocked out and simply not considered in the resulting path search. However, in the case of combinatorial planners the structure of the free portion of the configuration space is considered in a different way.\n\n\\begin{figure}[ht]\n \\centering\n \t\\includegraphics[width=.5\\textwidth]{tex/figs/ch05_figs/obs_padding.png}\n\t\\caption{Free (white) and forbidden spaces (grey and red) of the configuration space for a simple circular robot in a 2D world. Note that the forbidden space accounts for the physical dimensions of the robot.}\n \\label{fig:collision-free-space-fig}\n\\end{figure}\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=0.75\\textwidth]{tex/figs/ch05_figs/2d_ws_Cfree.png}\n\\caption{Once the free (white) and forbidden (grey and red) configurations have been identified, the physical dimensions of the robot can be ignored. This figure shows an example of a path planning problem in $C$-space with obstacles.}\n\\label{fig:combinatorial-planning}\n\\end{center}\n\\end{figure}\nFirst, the subset of the configuration space $C$ that is free (i.e. results in no collisions) is denoted as $C_{free}$ and is called the \\textit{free space} (see Figures \\ref{fig:collision-free-space-fig} and \\ref{fig:combinatorial-planning}).\nCombinatorial motion planning approaches operate by computing \\textit{roadmaps} through the free space $C_{free}$. A roadmap is a graph $G$ where each vertex represents a point in $C_{free}$ and each edge represents a path through $C_{free}$ that connects a pair of vertices. The set $S$ is then defined for a particular roadmap graph $G$ as the set of all points in $C_{free}$ that are either vertices of $G$ or lie on any edge of $G$.\nThis graph structure is similar to that used in grid-based planners, with the important distinction that the vertices can potentially be \\textit{any} configuration $q \\in C_{free}$, while in grid-based planners the vertices are defined ahead of time by discretization. 
This distinction is very important because the flexibility of choosing the vertices does not result in any loss of information!\nOnce the roadmap has been constructed, a path can be found by first connecting the initial configuration $q_I$ and goal configuration $q_G$ to the roadmap and then solving a discrete graph search over the roadmap graph $G$.\n\nIn general, combinatorial planners are \\textit{complete} (i.e. the algorithm will either find a solution or will correctly report that no\nsolution exists), and can even be optimal in some cases. However, in practice they are often not computationally feasible to implement except in problems with low-dimensional configuration spaces and/or simple geometric representations of the environment. Additionally, they require that the free space be completely defined in advance, which is not necessarily a realistic requirement.\n\n\\subsubsection{Cell Decomposition}\nOne common approach for deriving the roadmap is to use \\textit{cell decomposition} to decompose $C_{free}$.\nCell decomposition refers to the process of partitioning $C_{free}$ into a finite set of regions called cells, which should generally satisfy:\n\\begin{itemize}\n    \\item Each cell should be easy to traverse and ideally convex.\n    \\item Decomposition should be easy to compute.\n    \\item Adjacencies between cells should be straightforward to determine, in order to build the roadmap.\n\\end{itemize}\n\n\\begin{example}[2D Cell Decomposition] \\label{ex:celldecomp}\nConsider a two-dimensional configuration space as shown in Figure \\ref{fig:cell-decompsition}. This space is decomposed into cells that are either line segments or trapezoids by a process called vertical cell decomposition. Once the cells have been defined, the roadmap is generated by placing a vertex in each cell (e.g. at the centroid) as well as a vertex on each shared edge between cells.\n\nIf the forbidden space is polygonal, cell decomposition methods work well and each cell can be made convex. In general, there exist several approaches for performing cell decomposition. However, cell decomposition in higher dimensions becomes increasingly challenging.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=0.74\\textwidth]{tex/figs/ch05_figs/2d_cell_decomp.png}\n\\caption{Example of 2D Cell Decomposition with $C_{free}$ colored white. A roadmap is defined as the graph $G$ with vertices shown as black dots and edges connecting them. To solve a planning problem with $q_{start}$ and $q_{goal}$ these points are first connected to the roadmap, and then the path is easily defined.}\n\\label{fig:cell-decompsition}\n\\end{center}\n\\end{figure}\n\\end{example}\n\n\\subsubsection{Other Roadmaps}\nRoadmaps can also be defined in ways other than cell decomposition; two examples are the maximum clearance and minimum distance approaches. Maximum clearance roadmaps try to keep as far from obstacles as possible, for example by following the centerline of corridors. These roadmaps are also sometimes referred to as ``generalized Voronoi diagrams''. Minimum distance roadmaps are generally the exact opposite of maximum clearance roadmaps in that they tend to graze the corners of the forbidden space. 
In practice this is likely not desirable and therefore these approaches are less commonly used (without modification).\n\n\\subsection{Exercises}\n\\subsubsection{A* Motion Planning}\nComplete \\textit{Problem 1: A* Motion Planning} located in the online repository:\n\n\\vspace{\\baselineskip}\n\n\\url{https://github.com/PrinciplesofRobotAutonomy/AA274A_HW2},\n\n\\vspace{\\baselineskip}\n\nwhere you will implement the A* grid-based motion planning algorithm for some simple 2D environments.\n", "meta": {"hexsha": "2ffed5a3b455e2dfebf3d41d20d1180a8fb304a8", "size": 22859, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/source/ch05.tex", "max_stars_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_stars_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-03-23T16:03:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T14:15:38.000Z", "max_issues_repo_path": "tex/source/ch05.tex", "max_issues_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_issues_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/source/ch05.tex", "max_forks_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_forks_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 106.8177570093, "max_line_length": 1201, "alphanum_fraction": 0.7737871298, "num_tokens": 5377, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.7122321720225276, "lm_q1q2_score": 0.5888107306458985}}
{"text": "\n\n\\section{Preliminaries}\n\n\\begin{itemize}\n    \\item Define homology groups\n    \\item Define the fundamental group\n\\end{itemize}\n\nThe Poincar\u00e9 conjecture very roughly states that every 3-dimensional shape that has no holes, has a defined inside and outside and no boundary, must be the 3-dimensional sphere. \nIt is a bit more complicated than this, but this is essentially the statement. \nSo what is a 3-dimensional shape, and how can we describe these properties that we mentioned using proper mathematics? \n\nLet's start with shapes. \nIn mathematics, and in particular the field of topology, shapes are described by manifolds. \nAn $n$-dimensional manifold is a shape that looks locally like $n$-dimensional Euclidean space. \nAn example is the surface of the earth. \nIt is locally flat, meaning that the area you see around you looks a bit like just a flat plane, i.e. 2-dimensional Euclidean space. \nThere might be some bumps and cavities, but these can be ignored. \n\n\\begin{definition}\nAn $n$-dimensional manifold is a (second countable Hausdorff) topological space $M$ such that for every point $p\\in M$, there is an open neighborhood $U$ of $p$ and a homeomorphism $\\phi:U\\longrightarrow V$, where $V$ is an open set in $\\R^n$.\n\\end{definition}\n\nHere a homeomorphism means that $\\phi$ is a continuous isomorphism where its inverse is also continuous. \n\nThe next thing we need is the notion of homology. \nI will not cover homology in full detail here.\nIt is a really important gadget, but it takes time to cover it properly. \nThat said I have added the definition below.  \n\n\\begin{definition}\nLet $M$ be a manifold. \nWe define the n'th singular homology group of $M$ to be the quotient  \n\n$$H_n(M) = \\frac{Z_n(M)}{B_n(M)} = \\frac{Ker(\\partial_n)}{Im(\\partial_{n+1})},$$\n\nwhere $Z_n(M)$ is the group of $n$-cycles and $B_n(X)$ is the group of $n$-boundaries. \n\\end{definition}\n\n\n\n", "meta": {"hexsha": "e31f3b7106e6cbd57f2b888c6cd0fadfc2082971", "size": 1882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/preliminaries.tex", "max_stars_repo_name": "torgeiraamboe/poincare_conjecture", "max_stars_repo_head_hexsha": "4c4e2bee39b2d424fad39de62d94bdb3517f21b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/preliminaries.tex", "max_issues_repo_name": "torgeiraamboe/poincare_conjecture", "max_issues_repo_head_hexsha": "4c4e2bee39b2d424fad39de62d94bdb3517f21b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/preliminaries.tex", "max_forks_repo_name": "torgeiraamboe/poincare_conjecture", "max_forks_repo_head_hexsha": "4c4e2bee39b2d424fad39de62d94bdb3517f21b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.7674418605, "max_line_length": 243, "alphanum_fraction": 0.7507970244, "num_tokens": 501, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424217727027, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.588740676266339}}
{"text": "% Template Source: https://www.overleaf.com/latex/templates/math-notes-and-homeworks-template/pqpxbnjjdtcs\n\n\\documentclass[12pt]{article}\n \n\\usepackage[margin=1in]{geometry} \n\\usepackage{amsmath,amsthm,amssymb,graphicx,mathtools,tikz,hyperref}\n\\usetikzlibrary{positioning}\n\\newcommand{\\n}{\\mathbb{N}}\n\\newcommand{\\z}{\\mathbb{Z}}\n\\newcommand{\\q}{\\mathbb{Q}}\n\\newcommand{\\cx}{\\mathbb{C}}\n\\newcommand{\\real}{\\mathbb{R}}\n\\newcommand{\\field}{\\mathbb{F}}\n\\newcommand{\\ita}[1]{\\textit{#1}}\n\\newcommand{\\com}[2]{#1\\backslash#2}\n\\newcommand{\\oneton}{\\{1,2,3,...,n\\}}\n\\newcommand\\idea[1]{\\begin{gather*}#1\\end{gather*}}\n\\newcommand\\ef{\\ita{f} }\n\\newcommand\\eff{\\ita{f}}\n\\newcommand\\proofs[1]{\\begin{proof}#1\\end{proof}}\n\\newcommand\\inv[1]{#1^{-1}}\n\\newcommand\\setb[1]{\\{#1\\}}\n\\newcommand\\en{\\ita{n }}\n\\newcommand{\\vbrack}[1]{\\langle #1\\rangle}\n\n\\newenvironment{definition}[2][Definition]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{theorem}[2][Theorem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{lemma}[2][Lemma]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{exercise}[2][Exercise]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{remark}[2][Remark]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{proposition}[2][Proposition]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{corollary}[2][Corollary]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n \\hypersetup{\n colorlinks,\n linkcolor=blue\n }\n\\begin{document}\n\\date{}\n\n \n\\title{Topology II}\n\\author{Jose Iovino\\\\Professor \\\\UTSA} \n \n\\maketitle\n\\section{Filters}\n\\subsection{Basic Notions}\nTopology is the study of limits and convergence.  Filters are important because they generalize the notion of limit.  Intuitively, a filter is collection of ``large\" sets.  Let $I$ be a set.  We claim a subset of $I$ is ``large\" if it contains ``almost all\" of the elements of $I$. \n\nConditions that ``large\" subsets must satisfy.\n\\begin{itemize}\n  \\item $I$ is a large subset of itself, since it contains ``all\" of the elements.\n  \\item An intersection of large subsets should be large.\n  \\item If $A$ is a large subset of $I$, then any $B \\supseteq A$ is large.\n\\end{itemize}\n\n\\begin{definition}{filter}\nLet $I$ be any set.  A filter $\\mathcal{F}$ on $I$ is a collection of subsets of $I$ such that\n\\begin{enumerate}\n    \\item $I \\in \\mathcal{F}$ and $\\emptyset \\not\\in \\mathcal{F}$.\n    \\item If $A, B \\in \\mathcal{F}$, then $A \\cap B \\in \\mathcal{F}$.\n    \\item If $A \\in \\mathcal{F}$ and $B \\supseteq A$, then $B \\in \\mathcal{F}$.\n\\end{enumerate}\n\\end{definition}\n\nThe universal quantifier ``$\\forall$\" says a property holds for all elements, while the existential quantifier ``$\\exists$\" says a property holds for at least element.  
Filters enable the definition of quantifiers between these two extremes, which capture the notion of a property holding for ``almost all\" elements.\n\n\\begin{definition}{almost all}\nLet $\\mathcal{F}$ be a filter on $I$ and let $P$ be a property.  We say that almost all elements of $I$ satisfy $P$ when\n\\[\n\\{ x \\in I | x \\textrm{ satisfies } P\\} \\in \\mathcal{F}.\n\\]\n\\end{definition}\n\n\\subsection{Examples}\n\\subsubsection{Co-finite Filter on $\\mathbb{N}$}\nOn $\\mathbb{N}$, let\n\\[\\mathcal{F} = \\{A \\subset \\mathbb{N} | A^c \\textrm{ is finite} \\}.\n\\]\nThis is the co-finite filter.  We check that it is indeed a filter.\n\n\\begin{enumerate}\n    \\item $\\mathbb{N} \\in \\mathcal{F}$ because $\\mathbb{N}^c = \\emptyset$.\n    \\item If $A, B \\in \\mathcal{F}$, then $A^c, B^c$ are finite.  By De Morgan, $A^c \\cup B^c = (A \\cap B)^c$ is finite.  Hence $A \\cap B \\in \\mathcal{F}$.\n    \\item If $A \\in \\mathcal{F}$ and $B \\supseteq A$, then $B^c \\subseteq A^c$ which is finite.  Hence, $B \\in \\mathcal{F}$.\n\\end{enumerate}\n\n\\subsubsection{Frechet Filter}\nIn the previous example, none of the properties used were special to the natural numbers.  Thus we can generalize to any infinite index set $I$.  Then\n\\[\n\\mathcal{F} = \\{ A \\subseteq I | A^c \\textrm{ is finite} \\}\n\\]\nis a filter.  It is called the Frechet filter on $I$.\n\n\\subsubsection{Co-countable Filter}\nThis example can be extended even more by playing with the cardinalities involved.  For example, if $I$ is an uncountable set and\n\\[\n\\mathcal{F} = \\{ A \\subseteq I| A^c \\textrm{ is countable}\\}\n\\]\nthen $\\mathcal{F}$ is the co-countable filter.\n\n\\subsubsection{A Filter from Measure Theory}\nLet $(X, \\Sigma, \\mu)$ be a probability space; then \n\\[\n\\mathcal{F} = \\{ A \\subseteq X | \\mu(A) = 1 \\}\n\\]\nis a filter.  It consists of the events which are certain to happen.\n\n\\subsubsection{Trivial Example}\nIf $X$ is a set, then\n\\[\n\\mathcal{F} = \\{ X \\}\n\\]\nis a filter on $X$.  This is called the trivial filter on $X$.\n\n\\section{Limits}\nLet $X$ be a topological space.  Let $(x_i)_{i \\in I}$ be a family of elements of $X$ and let $a \\in X$.  
If $\\mathcal{F}$ is a filter on $I$, we write\n\\[\nx_i \\xrightarrow{\\mathcal{F}} a\n\\]\nif for all neighborhoods $U$ of $a$,\n\\[\n\\{ i | x_i \\in U \\} \\in \\mathcal{F}.\n\\]\n\n\\begin{remark}{}\nIf $I = \\mathbb{N}$ and $\\mathcal{F}$ is the Frechet filter, then for any sequence $(x_n)_{n \\in \\mathbb{N}}$,\n\\[\nx_n \\xrightarrow{\\mathcal{F}} a\n\\]\nif and only if $x_n \\rightarrow a$ as $n \\rightarrow \\infty$ in the classical sense.\n\\end{remark}\n\n\\begin{definition}{Ultrafilter}\nIf $I$ is an index set then an ultrafilter $\\mathcal{F}$ on $I$ is a filter such that $\\forall A \\subseteq I$, either $A \\in \\mathcal{F}$ or $A^c \\in \\mathcal{F}$.\n\\end{definition}\n\n\\begin{theorem}\nIf $I$ is any set and $\\mathcal{F}$ is any filter on $I$, then there exists an ultrafilter $\\mathcal{U}$ on $I$ such that $\\mathcal{F} \\subseteq \\mathcal{U}$.\n\\end{theorem}\n\\end{document}", "meta": {"hexsha": "a5d62d0e12ee3acf3aaa6a37c06c74453b61c8bf", "size": 6090, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture_20190114.tex", "max_stars_repo_name": "nu11vect0r/utsa-2019-spring-topology", "max_stars_repo_head_hexsha": "e28d4742fbe88685446df49f45780ff6b24e5b55", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lecture_20190114.tex", "max_issues_repo_name": "nu11vect0r/utsa-2019-spring-topology", "max_issues_repo_head_hexsha": "e28d4742fbe88685446df49f45780ff6b24e5b55", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture_20190114.tex", "max_forks_repo_name": "nu11vect0r/utsa-2019-spring-topology", "max_forks_repo_head_hexsha": "e28d4742fbe88685446df49f45780ff6b24e5b55", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4285714286, "max_line_length": 315, "alphanum_fraction": 0.6870279146, "num_tokens": 2098, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.849971181358171, "lm_q1q2_score": 0.588725724663298}}
{"text": "\n\n\\subsection{Representation of images}\n\nBy the linear finite element discretization and bilinear finite element discretization, \nwe have \\eqref{2d-fe0} and \\eqref{2d-fe1} respectively. \n\nAs usual, the F-inner product is defined as follows:\n$$\n(u, v)_F := \\sum_{i,j} u_{i,j}v_{i,j}.\n$$\n\nmore general form for the discretization systems  is the so \ncalled\n\n\nAs discussed above, the restriction mapping is consistent with\n\nHere let us show a simple example about some details for computing deconvolution. \n\n\\subsubsection{Pooling or restriction}\\label{sec:cnn-restriction}\nSolving the residual equation \n$$\nK_A\\ast e=r^h\n$$\nin matrix form is equivalent to solve the problem \n\\begin{equation}\\label{FEres}\na(e_h, v_h)=(r_h,v_h)~~\\forall~~v_h\\in \\mathcal V_h,\n\\end{equation}\nwhere \n\\begin{equation}\\label{expresion:residual}\ne_h=\\sum_{i=1}^{m_1}\\sum_{j=1}^{n_1}e^h_{i,j}\\phi^h_{i,j}(x,y), r_h=\\sum_{i=1}^{m_1}\\sum_{j=1}^{n_1}r^h_{i,j}\\psi^h_{i,j}(x,y).\n\\end{equation}\nWe solve the equation \\eqref{FEres} on the coarse grid space $\\mathcal V_H$, namely \n\\begin{equation}\\label{FEres:coarse}\na(e_{2h}, v_{2h})=(r_h,v_{2h})~~\\forall~~v_{2h}\\in \\mathcal V_{2h},\n\\end{equation}\nwhere $e_{2h}=\\sum_{i=1}^{m_2}\\sum_{j=1}^{n_2}e^{2h}_{i,j}\\phi^{2h}_{i,j}(x,y)$, the matrix form for \\eqref{FEres:coarse} is \n\\begin{equation}\\label{coarse:matrix}\nK_A\\ast e^{2h}=r^{2h},~~r_{i,j}^{2h}=(r_h,\\phi^{2h}_{i,j})\n\\end{equation}\n\\begin{equation}\\label{restriction}\nr_{i,j}^{2h}=(r_h,\\phi^{2h}_{i,j})\n\\end{equation}\nNow we consider the expression of nodal basis functions $(\\phi_{i,j}^{2h})(x,y)\\subset \\mathcal V_{2h}$ by the nodal basis functions $(\\phi_{i,j}^{h})(x,y)\\subset \\mathcal V_{h}$. Here, we only show the details of the case of bilinear finite element space.  The case of linear finite element space can be shown similarly.\n\n for general $\\mathcal V_1=\\mathcal  V_\\ell,  \\mathcal V_2=\\mathcal  V_{\\ell+1}$, \n \n \n \\section{Multigrid for finite element methods}\nLet $\\mathcal  V_1=\\mathcal  V_h$, then the finite element method: Find $u_h\\in \\mathcal  V_h$ such that \n\\begin{equation}\\label{FEdis}\na(u_h, v_h)=(f, v_h)~~~\\forall~~v_h\\in \\mathcal  V_h,\n\\end{equation}\nwhere $a(u_h, v_h)=(\\nabla u_h, \\nabla v_h)$.\n\\begin{lemma}\nFE $\\rightarrow$ FD with $f_{i,j}=\\int_{\\Omega}f(x,y)\\phi_{ij}(x,y)dxdy$.\n\\end{lemma}\n\nFor a more comprehensive notation, let us also define \n$K\\ast$ as an operator from $\\mathbb{R}^{m\\times n} \\quad \\text{and}$\nto $\\mathbb{R}^{m\\times n}$ as $\\mathcal C_K$.\nThen we have the ``transpose'' of convolution without stride as:\n\\begin{equation}\\label{eq:def_tran_conv}\n(K \\ast u, v)_F = (\\mathcal C_K (u), v) =  (u,  \\mathcal C_K^\\top v).\n\\end{equation}\n\nFinite element method: $A_h: \\mathcal  V_h\\rightarrow \\mathcal  V_h, ~~(A_hu_h,v_h)_{L^2}=(\\nabla u_h, \\nabla v_h)$ (discrete Laplacian $A_h\\approx -\\Delta_h$. )\n\\begin{equation}\\label{FE:eq}\nA_hu_h=f_h,~~ \n\\end{equation}\nwhere~$f_h(x,y)=\\sum_{i=1}^{m_1}\\sum_{j=1}^{n_1}f_{i,j}\\psi_{i,j}(x,y)$ in terms of dual basis $\\psi_{i,j}(x,y)\\subset \\mathcal V_h$ satisfying \\eqref{dual-basis}.\n\nThe gradient descent for \\eqref{FE:eq} is equivalent to the damped Jacobi method for \\eqref{FE:eq} and can be written as follows\n$$\nu^k=u^{k-1}+S_0(f-K_A\\ast u^{k-1}),~~~1\\le k\\le m_1.\n$$\nSmoother is equivalent to filter (filtering out high frequncies). 
Given any $u^0$, $u-u^k$ is much smoother than $u-u^0$.\n\n%Fine grid smoothing $\\Longleftrightarrow$ feature extraction.\n\nAfter smoothing, we do coarse grid correction: for example, for \n$\\mathcal T_h,~\\mathcal T_H,~H=2h, \\mathcal V_2=\\mathcal V_H\\subset \\mathcal V_h=\\mathcal V_1$, we do\n\\begin{itemize}\n\\item $K_A\\ast e=f- K_A\\ast u^{m_1}=r^h$;\n\\item  $u=u^{m_1}+e$;\n\\item  $K_A\\ast u=f$.\n\\end{itemize}\nWe need to solve the residual equation\n$$\nK_A\\ast e=r^h.\n$$\nMultigrid idea: solve this residual equation on the coarse grid space $\\mathcal V_H$ with $H=2h$.\n\n\n\nLet us first briefly describe a geometric multigrid method used to solve the \nfollowing boundary value problem\n\\begin{equation}\n\\label{laplace}\n-\\Delta u = f,  \\mbox{ in } \\Omega,\\quad\n\\frac{\\partial u}{\\partial {\\bm n}} =0  \\mbox{ on } \\partial\\Omega,\\quad\n\\Omega=(0,1)^2.\n\\end{equation}\n\nAs an example, we consider a continuous linear finite element discretization of\n\\eqref{laplace} on a nested sequence of grids of sizes $n_\\ell\\times\nn_\\ell$ with $n_{\\ell}=2^{J-\\ell+1} + 1$, as shown in the right part of\nFig. \\ref{fig:2dpartition} and the corresponding sequence of finite\nelement spaces.\n%Here we need to notice that, $n_\\ell = 2^{k_\\ell} + 1$ for general PDEs grid \n%with the above boundary condition. For general images, we can take them as\n%discrete functions on grid with size $n_\\ell = 2^{k_\\ell}m$ with small $m = 1,3,\\cdots$.\n%Then generally speaking, the coarse grid size is $n_{\\ell+1} = \\frac{n_\\ell}{2}=2^{k\\ell - 1}m$.\n\nBased on the grid $\\mathcal T = \\mathcal T_\\ell$, the discretized system is\n\\begin{equation}\n\\label{laplace-h}\nAu=f.\n\\end{equation}\nHere $A:\\mathbb R^{n\\times n}\\rightarrow \\mathbb R^{n\\times n}$ is the operator satisfying\n\\begin{equation}\n\\label{uniform-laplace}\n(Au)_{i,j}=4u_{i,j}-u_{i+1,j}-u_{i-1,j}-u_{i,j+1}-u_{i,j-1},\n\\end{equation}\nwhich holds for $1\\le i,j \\le n$ with zero padding. \nHere we notice that there exists a $3\\times 3$ kernel\n\\begin{equation}\\label{eq:kernel-A}\nK_A = \\begin{pmatrix}\n0 & -1 & 0 \\\\\n-1 & 4 & -1 \\\\\n0 & -1 & 0\n\\end{pmatrix},\n\\end{equation}\nwith \n\\begin{equation}\\label{eq:convA}\nAu = K_A \\ast u,\n\\end{equation}\nwhere $\\ast$ is the standard convolution operation with zero padding as in \\eqref{con1}. \nWe now briefly describe a simple multigrid method using a mix of the terminology of \ndeep learning \\cite{goodfellow2017deep} and of multigrid methods.\n\nAn important result in multilevel finite element methods is that, \nif we take the restriction as \\eqref{eq:restriction} with the prolongation as \nthe transpose of the restriction, then the coarse-grid operator $A^2$ is still a convolution, \nwith $K_{A^2} = K_{A^1} = K_A$ as in \\eqref{eq:kernel-A}.\n\n%\tNote that $\\sigma_\\phi^{(k_0)}=\\sigma\\ast\\phi^{(k_0)}=0$, this implies that $\\sigma$ is a polynomial of degree at most $k_0-1$, which contradicts with the assumption that $\\sigma$ is not a polynomial. \n%\tAs a result, $\\overline{\\Sigma}_n$ contains all polynomials, hence is dense in $C(\\mathbb{R})$.\n%\\end{proof}\n%\n%\\begin{theorem}\n%\t\\label{prop:riemann}\n%\t\tLet $\\sigma$ be a non-polynomial Riemann integrable function and $\\sigma\\in L_{loc}^\\infty(\\mathbb{R})$. Then $\\Sigma_1$ in dense in $C(\\mathbb{R})$.\t\n%\t\\end{theorem}\n%\\begin{proof}\n%\tThe proof basically follows the proof of Proposition~\\ref{prop:conti}. 
Let $\\phi\\in C_0^\\infty(\\mathbb{R})$, and consider $\\sigma_\\phi$. Again we assume $\\mathrm{supp}(\\phi)\\subset[-\\alpha,\\alpha]$. Then we can take the Riemann sum $\\sum_{i=1}^{m}\\sigma(x-y_i)\\phi(y_i)\\Delta y_i$ where $y_i=-\\alpha+\\frac{2\\alpha}{m}i$ and $\\Delta y_i=\\frac{2\\alpha}{m}$ for $i=0:m$. Set $V_i=[y_{i-1}.y_i]$. The goal is to show that the Riemann sum uniformly converges to $\\sigma_\\phi$ on any compact set $K$. \n%\t\n%\tFor any compact set $K$, and any $x\\in K$, still we need to do estimation similar to \\eqref{uniconv}. Denote $\\tilde{K}=K-[-\\alpha,\\alpha]$. Given $\\delta>0$, since $\\sigma$ is Riemann integrable, the set of discontinuous points is of measure 0, thus there exists a finite number $n(\\delta)$ of intervals, the union of which denoted by $U$, such that the measure of $U$ is $\\delta$ and $\\sigma$ is uniformly continuous on $\\tilde{K}\\setminus U$. \n%\t\n%\tChoose $\\delta>0$ such that \n%\t\\begin{equation}\n%\t\\label{delta}\n%\t10\\delta\\|\\sigma\\|_{L^\\infty(\\tilde{K})}\\|\\phi\\|_{L^\\infty}\\le\\epsilon.\n%\t\\end{equation}\n%\t\n%\t\n%\tChoose $m$ large enough such that $m\\delta>\\alpha n(\\delta)$, and:\n%\t\\begin{equation}\n%\t\\label{m-large}\n%\t|\\phi(x)-\\phi(y)|\\le\\frac{\\epsilon}{2\\alpha\\|\\sigma\\|_{L^\\infty(\\tilde{K})}},\\qquad \\forall |x-y|\\le\\frac{2\\alpha}{m},\n%\t\\end{equation}\n%\tand \n%\t\\begin{equation}\n%\t\\label{sigma}\n%\t|\\sigma(x)-\\sigma(y)|\\le\\frac{\\epsilon}{\\|\\phi\\|_{L^1}},\\qquad x,y\\in \\tilde{K}\\setminus U, |x-y|\\le\\frac{2\\alpha}{m}.\n%\t\\end{equation}\n%\t\n%\tThen\n%\t\t\\begin{equation}\n%\t\t\\label{riemann}\n%\t\\begin{aligned}\n%\t&|\\int_{\\mathbb{R}}\\sigma(x-y)\\phi(y)dy-\\sum_{i=1}^{m}\\sigma(x-y_i)\\phi(y_i)\\Delta y_i|,\\\\\n%\t\\le&\\sum_{i=1}^{m}\\int_{V_i}|\\sigma(x-y)-\\sigma(x-y_i)||\\phi(y)|+|\\sigma(x-y_i)||\\phi(y)-\\phi(y_i)|dy.\n%\t\\end{aligned}\n%\t\\end{equation}\n%\tBy \\eqref{m-large}, we have:\n%\\begin{equation}\n%\\label{part1}\n%\\sum_{i=1}^{m}\\int_{V_i}|\\sigma(x-y_i)||\\phi(y)-\\phi(y_i)|dy\\le\\epsilon.\n%\\end{equation}\n%When $(x-V_i)\\bigcap U=\\emptyset$, then by \\eqref{sigma}\t\n%\t\\begin{equation}\n%\t\\label{part2}\n%\t\\int_{V_i}|\\sigma(x-y)-\\sigma(x-y_i)||\\phi(y)|dy\\le\\frac{\\epsilon}{\\|\\phi\\|_{L^1}}\\int_{V_i}|\\phi(y)|dy.\n%\t\\end{equation}\n%\tThus sum together we will get a term that is less than $\\epsilon$.\n%For those $V_i$'s intersect with $U$, the total length should be at most $\\delta+\\frac{4\\alpha}{m}n(\\delta)$ since there are $n(\\delta)$ intervals in $U$. By the choice of $m$, we have $\\delta+\\frac{4\\alpha}{m}n(\\delta)\\le5\\delta$. Hence,\n%\t\\begin{equation}\n%\\label{part3}\n%\\sum\\int_{V_i}|\\sigma(x-y)-\\sigma(x-y_i)||\\phi(y)|dy\\le2\\|\\sigma\\|_{L^\\infty(\\tilde{K})}\\|\\phi\\|_{L^\\infty}5\\delta\\le\\epsilon.\n%\\end{equation}\n%\tCombine together we should have\n%\t$$\n%\t|\\int_{\\mathbb{R}}\\sigma(x-y)\\phi(y)dy-\\sum_{i=1}^{m}\\sigma(x-y_i)\\phi(y_i)\\Delta y_i|\\le3\\epsilon.\n%\t$$\n%\twhich implies the convergence is uniform.\n%\t\n%\tThen it left to show that there exists $\\phi$ such that $\\sigma_\\phi$ is not a polynomial. If not, then there must be $k_0$ and $\\phi$ such that $\\sigma_\\phi^{(k_0)}(\\theta)=0$ for all $\\theta\\in\\mathbb{R}$ and all $\\phi\\in C_0^\\infty(\\mathbb{R})$. 
Since for every compact set $K$, we should have:\n%\t$$\n%\t\\|\\sigma-\\sigma_{\\eta_\\epsilon}\\|_{L^p(K)}\\to0,\\quad1\\le p<\\infty, \\quad \\epsilon\\to0.\n%\t$$\n%\tIf $\\sigma_{\\eta_\\epsilon}$'s are all polynomials of degree at most $k_0-1$, then $\\sigma$ is also a polynomial of degree at most $k_0-1$ almost everywhere.\n%\t\n%\t\\end{proof}\n\n\n\n\nHere, we have\n\\begin{equation*}\n0=x_0<x_1<\\cdots<x_{n+1}=1, \\quad x_j=\\frac{j}{n+1},\\quad (j=0,\\cdots, n+1).\n\\end{equation*}\nand\n\\begin{equation*}\n0=y_0<y_1<\\cdots<y_{n+1}=1, \\quad y_j=\\frac{j}{n+1},\\quad (j=0,\\cdots,n+1).\n\\end{equation*}\n\\begin{figure}[H]\n\\begin{center}\n\\setlength{\\unitlength}{0.5mm}\n\\begin{picture}(0,45)(10,0)\n\\linethickness{0.25mm}\n\\multiput(0,0)(10,0){5}{\\line(0,1){40}}\n\\multiput(0,0)(0,10){5}{\\line(1,0){40}}\n\\put(0,0){\\line(1,1){40}}\n\\put(10,0){\\line(1,1){30}}\n\\put(20,0){\\line(1,1){20}}\n\\put(30,0){\\line(1,1){10}}\n%\\put(40,0){\\line(1,1){10}}\n\\put(0,10){\\line(1,1){30}}\n\\put(0,20){\\line(1,1){20}}\n\\put(0,30){\\line(1,1){10}}\n%\\put(0,40){\\line(1,1){10}}\n\\put(37,24){$\\displaystyle \\left. \\begin{array}{l}~ \\\\ ~\\end{array}\n\\right\\} h={1\\over n+1}$}\n\\put(44,4){$\\displaystyle N = n^2$}\n%\\multiput(100,0)(10,0){6}{\\line(0,1){50}}\n%\\multiput(100,0)(0,10){6}{\\line(1,0){50}}\n%\\put(147,34){$\\displaystyle \\left. \\begin{array}{l}~ \\\\ ~\\end{array}\n%\\right\\} h={1\\over n+1}$}\n%\\put(154,14){$\\displaystyle N = n^2$}\n\\end{picture}\n%\\setlength{\\unitlength}{0.5mm}\n\\end{center}\n\\caption{Two-dimensional uniform grid for finite element method}\n\\label{fig:2dpartition}\n\\end{figure}\n\nThe weak form is: Find $u_h \\in V_h$, such that\n\\begin{equation} \\label{eqn:weak}\na(u_h, v_h) = (f, v_h), \\ \\forall \\ v_h \\in V_h\n\\end{equation}\nwhere $a(u_h,v_h) = \\int_{\\Omega} \\nabla u_h \\cdot \\nabla v_h $ and $(f, v_h) = \\int_{\\Omega} fv_h$, and $V_h$ is the corresponding linear finite element space.  Taking $v_h = \\phi^h_{i,j}$, where $\\phi^h_{i,j}$ is the basis function at point $(x_i, y_j)$, in \\eqref{eqn:weak}, we obtain the linear system\n\\begin{equation*}\n{A} {u} = {f},\n\\end{equation*}\nwhere ${A} = \\text{tridiag}(-{I}, {B}, -{I})$ and ${B} = \\text{tridiag}(-1,4,-1)$ and ${u} = ({u}_{i,j})$ and ${f} = ({f}_{i,j})$ with $i$ and $j$ both following the lexicographic ordering. \n\\begin{enumerate}\n\\item \\emph{Jacobi method:}  implement the Jacobi method as shown in the following algorithm\n\\\n\n\\begin{algorithm}\nJacobi method:  $[{u}^{k+1}] = \\texttt{jacobi} ({u}^k, {f}, h)$\n\\begin{algorithmic}[1]\n\\STATE $n \\gets \\frac{1}{h} - 1$\n\\FOR{$j = 1:n$}\n\\FOR{$i = 1:n$}\n\\STATE $ {u}^{k+1}_{i,j} \\gets {u}^{k}_{i,j} + ({f}_{i,j} + {u}^k_{i-1,j} + {u}^k_{i+1,j} + {u}^k_{i,j-1} + {u}^{k}_{i,j+1} -4 {u}^k_{i,j})/4$\n\\ENDFOR\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\\\n\nChoose right hand side $f$ and initial guess ${u}^0$ freely (constant right hand side and random initial guess are recommended), and use the stopping criterion $\\| {f} - {A} {u}^k \\| / \\| {f} - {A} {u}^0 \\|  \\leq 10^{-6} $.  Make a table to report the number of iterations and CPU time for $h = 2^{-5}, 2^{-6}, \\cdots, 2^{-8}$.\n\n\\item \\emph{Gauss-Seidel method:}  \n\n\n\n\nIt is easy to verify that the formulation for the linear element method is \n\\begin{equation}\n  \\label{2d-fe0}\n4u_{i,j}-(u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1})=f_{i,j},~~u_{i,j}=0~~\\hbox{if}~~i ~~\\hbox{or}~~ j\\in \\{0, n+1\\}.\n\\end{equation}\nwhere ${f}_{i,j} = (f, \\phi^h_{i,j})$.  
This leads to the following linear system of equations:\n\\begin{equation*}\nA \\ast u = f,\n\\end{equation*}\nwhere\n\\begin{equation*}\nA = \\begin{pmatrix}\n0 & -1 & 0 \\\\\n-1 & 4 & -1 \\\\\n0 & -1 & 0\n\\end{pmatrix}\n\\end{equation*}\nis the same $3\\times 3$ kernel as $K_A$ in \\eqref{eq:kernel-A}.\n\n%Implement the following gradient descent method:\n%\n%\\\n%\n%%\\begin{algorithm}\n%\\underline{\\textbf{Gradient descent method:}}  $[{u}] = \\texttt{GD} ({u}, {f}, n)$\n%\\begin{algorithmic}[1]\n%%\\STATE ${v} \\gets {u}^{k}$\n%%\\STATE $n \\gets \\frac{1}{h} - 1$\n%\\FOR{$j = 1:n$}\n%\\FOR{$i = 1:n$}\n%\\STATE $ {u}_{i,j} \\gets  {u}_{i,j} +\\frac{1}{8}({f}_{i,j} -4 {u}_{i,j} +{u}_{i-1,j} +{u}_{i+1,j} +{u}_{i,j-1}+{u}_{i,j+1} )$\n%\\ENDFOR\n%\\ENDFOR\n%\\end{algorithmic}\n%\\end{algorithm}\n\n\n\nThe gradient descent update can be written compactly as\n \\begin{equation}\nu \\leftarrow u + \\frac{1}{8}( f - K\\ast u).\n\\end{equation}\nThis can be done with a single convolution (cf. the smoothing sketch above).\n\nIn particular, if \n$$\nu^{0} = 0,\n$$\nwe have\n\\begin{equation}\nu^{1} \\leftarrow S_0 \\ast f,\n\\end{equation}\nand if we apply the gradient descent iteration twice we have \n\\begin{equation}\nu^{2} \\leftarrow S_1 \\ast f,\n\\end{equation}\nwith \n\\begin{equation}\\label{eq:kernel-S}\nS_0 = {1 \\over 8},\n\\end{equation}\nand \n\\begin{equation}\\label{eq:kernel-S2}\nS_1={1\\over 64} \\begin{pmatrix}\n0 & 1 & 0 \\\\\n1 & 12 & 1  \\\\\n0 &1  & 0\n\\end{pmatrix}.\n\\end{equation}\n\\hrule\n\n%Given an initial guess ${u}^0$, consider the gradient descent iteration as follows:\n%\n%\n%\n%\\begin{algorithmic}[1]\n%\\STATE $k \\gets 0$\n%\\WHILE{$\\| {f} - {A} {u}^k \\| / \\| {f} - {A} {u}^0 \\|  > \\texttt{Tol}$}\n%\\STATE $[{u}^{k+1}] = \\texttt{GD} ({u}^k, {f}, n)$\n%\\STATE $k \\gets k+1$\n%\\ENDWHILE\n%\\end{algorithmic}\n\n\\\n\nThe operator application in \\eqref{conA} can likewise be evaluated by convolution as\n\\begin{equation}\nu \\leftarrow K\\ast u.\n\\end{equation}\nThis can be done with a single convolution as well.\n\n%\\hrule\n\n\\\n\n%The restriction and prolongation are implemented as follows:\n%\n%\\\n%\n%%\\begin{algorithm} \\label{alg:restrict}\n%\\underline{\\textbf{Restriction:}} $[{r}^H] = \\texttt{restrict}({r}^h, n_H)$\n%\\begin{algorithmic}[1]\n%%\\STATE $H = 2h$, $n_H = \\frac{1}{H} - 1$\n%\\FOR{$j = 1:n_H$}\n%\\FOR{$i = 1:n_H$}\n%\\STATE ${r}^H_{i,j} \\gets {r}^h_{2i,2j} + \\frac{1}{2} \\left( {r}^h_{2i+1,2j} + {r}^h_{2i-1,2j}  + {r}^h_{2i,2j+1}  + {r}^h_{2i,2j-1}  + {r}^h_{2i+1,2j+1} + {r}^h_{2i-1,2j-1}   \\right)$\n%\\ENDFOR\n%\\ENDFOR\n%\\end{algorithmic}\n%\\end{algorithm}\n\n\\hrule\n\n\\begin{equation}\nr^{H} = R \\ast_2 r^h,\n\\end{equation}\nwith \n\\begin{equation}\n\\label{linear-restrict}\nR=\n\\begin{pmatrix}\n0 &\\frac{1}{2}&\\frac{1}{2}\\\\\n\\frac{1}{2}& 1&\\frac{1}{2}\\\\\n\\frac{1}{2}&\\frac{1}{2}&  0\n\\end{pmatrix}.\n\\end{equation}\n\\\nThis can be done with a stride-two convolution, as sketched below.\n\n%\\hrule\n%\n%%\\begin{algorithm}\\label{alg:prolongate}\n%\\underline{\\textbf{Prolongation:}} $[ {e}^h] = \\texttt{prolongate}({e}^H, n_H)  $\n%\\begin{algorithmic}[1]\n%%\\STATE $n_H = \\frac{1}{H} - 1$\n%\\FOR{$j = 0:n_H-1$}\n%\\FOR{$i = 0:n_H-1$}\n%\\STATE ${e}^h_{2i,2j} \\gets {e}^H_{i,j}$\n%\\STATE ${e}^h_{2i+1,2j} \\gets \\frac{1}{2} ({e}^H_{i,j} + {e}^H_{i+1,j} )$\n%\\STATE ${e}^h_{2i+1,2j+1} \\gets \\frac{1}{2}( {e}^H_{i,j} + {e}^H_{i+1,j+1}) $\n%\\STATE ${e}^h_{2i,2j+1} \\gets  \\frac{1}{2} ({e}^H_{i,j} + {e}^H_{i,j+1})$\n%\\ENDFOR\n%\\ENDFOR\n%\\end{algorithmic}\n%\\end{algorithm}\n%\\hrule\n\\hrule\n\n\n\n\\begin{equation}\ne^{h} =  R \\ast_2^\\top r^H,\n\\end{equation}\nwith the same stencil $R$ as in \\eqref{linear-restrict}.
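\n\nThe stride-two restriction $r^{H} = R \\ast_2 r^h$ and its transpose prolongation can be sketched as follows (a hedged reconstruction: the SciPy convolution, zero padding, and even-index subsampling are my assumptions; actual multilevel codes differ in indexing and boundary handling):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import convolve2d\n\nR = np.array([[0.0, 0.5, 0.5],\n              [0.5, 1.0, 0.5],\n              [0.5, 0.5, 0.0]])\n\ndef restrict(r):\n    # Convolve with R, then keep every second entry (stride 2).\n    return convolve2d(r, R, mode="same")[::2, ::2]\n\ndef prolongate(e_coarse, fine_shape):\n    # Transpose of the strided convolution: scatter the coarse\n    # values onto the fine grid, then convolve with R (R is\n    # symmetric under a 180-degree flip, so convolution equals\n    # correlation here).\n    e = np.zeros(fine_shape)\n    e[::2, ::2] = e_coarse\n    return convolve2d(e, R, mode="same")\n\\end{verbatim}\n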
&\\frac{1}{2}&\\frac{1}{2}\\\\\n\\frac{1}{2}& 1&\\frac{1}{2}\\\\\n\\frac{1}{2}&\\frac{1}{2}&  0\n\\end{pmatrix}.\n\\end{equation}\n\\\nThis can be done with \n\n\n\n%Now, one step of the two-grid method with gradient descent smoother is defined as follows:\n%\n%\\\n%\n%%\\begin{algorithm} \n%\\underline{\\textbf{Two-grid method:}} $[{u}] = \\texttt{TwoGrid}( {u}, {f}, n, n_H)$\n%\\begin{algorithmic}[1]\n% \\STATE \\emph{Smoothing:} $[{u}] = \\texttt{GD} ({u}, {f}, n)$\n%\\STATE \\emph{Compute residual:} ${r} \\gets {f} - \\texttt{Action}({u}, n) $\n%\\STATE \\emph{Restriction:} $[{r}^{H}]= \\texttt{restrict}({r}, n_H)$\n%\\STATE \\emph{Coarse grid correction:} (Suppose to use $ {e}^{H} \\gets {A}_{H}^{-1} {r}^{H}$, however, we cannot compute ${A}_H^{-1}$ since we do not form ${A}_H$ explicitly.  Therefore, we use sufficient steps of Gauss-Seidel method)\n%\\begin{enumerate}\n%\\item [\\small{4.1:}] ${e}^{H} \\gets {0}$\n%\\item [\\small{4.2:}] \\textbf{for} $m = 1: \\texttt{coarse\\_step}$ \\textbf{do}\n%\\item [\\small{4.3:}] \\quad $[{e}^H] = \\texttt{GD} ({e}^H, {r}^H, n_H)$\n%\\item [\\small{4.4:}]  \\textbf{end} \\textbf{for}\n%\\end{enumerate}\n%%\\STATE ${e}^{H} \\gets {0}$\n%%\\FOR{$m = 1: \\texttt{coarse\\_step}$}\n%%\\STATE $[{e}^H] = \\texttt{GD} ({e}^H, {r}^H, H)$\n%%\\ENDFOR \n%\\STATE \\emph{Prolongation:} ${u} \\gets {u} + \\texttt{prolongate}({e}^H, n_H) $\n%\\end{algorithmic}\n%%\\end{algorithm}\n%\n%\\\n%\n%Given an initial guess ${u}^0$, please implement the two-grid iteration as follows:\n%\n%\\begin{algorithmic}[1]\n%\\STATE $k \\gets 0$\n%\\WHILE{$\\| {f} - {A} {u}^k \\| / \\| {f} - {A} {u}^0 \\|  > \\texttt{Tol}$}\n%\\STATE $[{u}^{k+1}] = \\texttt{TwoGrid} ({u}^k, {f}, n, n_H)$\n%\\STATE $k \\gets k+1$\n%\\ENDWHILE\n%\\end{algorithmic}\n%\n%\\\n%\n%Choose right hand side $f$ and initial guess ${u}^0$ freely (constant right hand side and random initial guess are recommended), and set the tolerance to be $\\texttt{Tol} = 10^{-6}$.  Make a table to report the number of iterations, convergence factor ($\\| {f} - {A} {u}^k \\| / \\| {f} - {A} {u}^{k-1} \\|  $ ), and CPU time for $J = 3, 4, 5, 6$.\n\n\nMG method is based on the following nested spaces\n\\begin{equation*}\n\\mathcal V_1 \\subset \\mathcal V_2 \\subset \\cdots \\subset \\mathcal V_J,\n\\end{equation*}\nwhere $\\mathcal V_\\ell$, $\\ell = 1, 2, \\cdots, J$, are linear finite element spaces on uniform grids with mesh size $h_\\ell = 2^{-\\ell}$ and $n_\\ell = 2^\\ell-1$. \n%The prolongation matrix ${I}_{l-1}^l \\in \\mathbb{R}^{N_l \\times N_{l-1}}$ where $N_l = (1/h_l-1)^2$ are defined by the manner of \\eqref{eqn:prolongation}.  Moreover, we also have the restriction matrix is the transpose of the prolongation matrix, i.e. ${I}_l^{l-1}: = ( {I}_{l-1}^l )^t$. And ${A}_{l-1} = {I}_l^{l-1} {A}_l  {I}_{l-1}^l $ with ${A}_J = {A}$.  
\nRecursive implementation of the MG method with GD smoother is defined as follows:\n%\\begin{algorithm} \n\n\\\n\n%\\underline{\\textbf{MG method:}} $[ {u}_l ] = \\texttt{MultiGrid}({u}_l, {f}_l, l)$\n%\\begin{algorithmic}[1]\n%\\STATE $n_l \\gets 2^l-1$, $n_{l-1} = 2^{l-1}-1$\n%\\IF{$l=1$}\n%\\STATE ${u}_1 \\gets {A}_1^{-1} {f}_1$\n%\\ELSE \n%\\STATE \\emph{Smoothing:} $[{u}_l] = \\texttt{GD} ({u}_l, {f}_l, n_l)$\n%\\STATE \\emph{Compute residual:} ${r}_l \\gets {f}_l - \\texttt{Action}({u}_l, n_l)$\n%\\STATE \\emph{Restriction:} $[{r}_{l-1}] = \\texttt{restrict}({r}_l, n_{l-1})$\n%\\STATE \\emph{Coarse grid correction:} $[ {e}_{l-1} ] = \\texttt{MultiGrid}({0}, {r}_{l-1}, l-1) $\n%\\STATE \\emph{Prolongation:} ${u}_{l} \\gets {u}_l + \\texttt{prolongate}({e}_{l-1}, n_{l-1})$\n%\\ENDIF\n%\\end{algorithmic}\n%%\\end{algorithm}\n\n%\\underline{\\textbf{MG method:}} $[ {u}^\\ell ] = \\texttt{MultiGrid}(0, {f}^\\ell,\\ell)$\n%\\begin{algorithmic}[1]\n%\t\\STATE $n_\\ell \\gets 2^\\ell-1$, $n_{\\ell-1} = 2^{\\ell-1}-1$\n%\t\\IF{$\\ell=1$}\n%\t\\STATE ${u}^1 \\gets {A}_1^{-1} {f}^1$\n%\t\\ELSE \n%\t\\STATE \\emph{Smoothing:} $[{u}^\\ell] = S \\ast {f}^\\ell$\n%\t\\STATE \\emph{Compute residual:} ${r}^\\ell \\gets {f}^\\ell - A \\ast {u}^\\ell,$\n%\t\\STATE \\emph{Restriction:} $[{r}^{\\ell-1}] = R \\ast_2 {r}^\\ell$\n%\t\\STATE \\emph{Coarse grid correction:} $[ {e}^{\\ell-1} ] = \\texttt{MultiGrid}({0}, {r}^{\\ell-1}, \\ell-1) $\n%\t\\STATE \\emph{Prolongation:} ${u}^{\\ell} \\gets {u}^\\ell + R \\ast_2^\\top {e}^{\\ell-1}$\n%\t\\ENDIF\n%\\end{algorithmic}\n%\\end{algorithm}\n\n\\\n\n\nGiven an initial guess ${u}^0$, please implement the MG iteration as follows:\n\n%\\\n%\n%\\begin{algorithmic}[1]\n%\\STATE $k \\gets 0$\n%\\WHILE{$\\| {f} - {A} {u}^k \\| / \\| {f} - {A} {u}^0 \\|  > \\texttt{Tol}$}\n%\\STATE $[{u}^{k+1}] = \\texttt{MultiGrid} ({u}^k, {f}, J)$\n%\\STATE $k \\gets k+1$\n%\\ENDWHILE\n%\\end{algorithmic}\n\n\\\n\n", "meta": {"hexsha": "1f9f2e34755ceeda402271a407018668523a8a64", "size": 20697, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/buffer.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/buffer.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/buffer.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.374340949, "max_line_length": 497, "alphanum_fraction": 0.6362757888, "num_tokens": 8413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711642563823, "lm_q2_score": 0.6926419704455588, "lm_q1q2_score": 0.5887257020324463}}
{"text": "\\section{Spatial correlations of energetic disorder}\n\\label{sec:eanalyze}\n\n\\index{site energy!spatial correlation}\nLong-range, e.g. electrostatic and polarization, interactions often result in spatially correlated disorder~\\cite{dunlap_charge-dipole_1996}, which affects the onset of the mobility-field (Poole-Frenkel) dependence~\\cite{derrida_velocity_1983,novikov_essential_1998,nagata_atomistic_2008}.    \n\nTo quantify the degree of correlation, one can calculate the spatial correlation function of $E_i$ and $E_j$ at a distance $r_{ij}$\n\\begin{equation}\n\\label{equ:cf}\nC(r_{ij}) = \\frac{  \\langle \\left( E_i-\\langle E\\rangle \\right)\n                   \\left( E_j-\\langle E\\rangle \\right)\\rangle}\n                   {\\langle\\left( E_i -\\langle E\\rangle \\right)^2\\rangle},\n\\end{equation}\nwhere $\\langle E\\rangle$ is the average site energy. $C(r_{ij})$ is zero if $E_i$ and $E_j$ are uncorrelated and $1$ if they are fully correlated. For a system of randomly oriented point dipoles, the correlation function decays as $1/r$ at large distances~\\cite{novikov_cluster_1995}.\n\nFor systems with spatial correlations, variations in site energy differences, $\\Delta E_{ij}$, of pairs of molecules from the neighbor list are smaller than variations in site energies, $E_i$, of all individual molecules. Since only neighbor list pairs affect transport, the distribution of $\\Delta E_{ij}$ rather than that of individual site energies, $E_i$, should be used to characterize energetic disorder.\n\nNote that the \\calc{eanalyze} calculator takes into account {\\em all} contributions to the site energies \n\\votcacommand{Analyze distribution and correlations of site energeies}{\\cmdeana}\n\n", "meta": {"hexsha": "10654b79c6e7b38ae356c0b3ba7ea8376f070c18", "size": 1677, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/theory/correlations.tex", "max_stars_repo_name": "jimbach/ctp", "max_stars_repo_head_hexsha": "e5b33f074f81c6e6859dfaacada1b6c992c67c2b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/theory/correlations.tex", "max_issues_repo_name": "jimbach/ctp", "max_issues_repo_head_hexsha": "e5b33f074f81c6e6859dfaacada1b6c992c67c2b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/theory/correlations.tex", "max_forks_repo_name": "jimbach/ctp", "max_forks_repo_head_hexsha": "e5b33f074f81c6e6859dfaacada1b6c992c67c2b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 79.8571428571, "max_line_length": 410, "alphanum_fraction": 0.7650566488, "num_tokens": 430, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.841825655188238, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5886503111510614}}
{"text": "\\documentclass{article}\n\n\\usepackage{graphicx}      % include this line if your document contains figures\n\\usepackage{cite}        % required for bibliography\n\\usepackage{amsmath,mathtools,amssymb}\n\\usepackage{amsfonts}\n\\usepackage{xcolor}\n\\usepackage{tikz}\n\\usepackage{pgfplots}\n\\usetikzlibrary{external}\n\\tikzexternalize[prefix=tikz/]\n\\usepackage{epstopdf}\n\\usepackage[font=footnotesize, labelfont={sf,bf}, margin=1cm]{caption}\n\\usepackage{graphicx}\n\\usepackage{algorithm}\n\\usepackage{algpseudocode}\n\\usepackage{booktabs}\n\\usepackage{caption}\n%\n\\setlength\\parindent{0pt}\n\\setlength{\\parskip}{1em}\n%\n%\\makeatletter\n%\\def\\BState{\\State\\hskip-\\ALG@thistlm}\n%\\makeatother\n%\n%\\newcommand{\\dd}{{\\mathrm{d}}}\n%\n%\\usetikzlibrary{decorations.pathreplacing,calc}\n\n%\\newcommand{\\tikzmark}[1]{\\tikz[overlay,remember picture] \\node (#1) {};}\n%\n\\newcommand{\\R}{\\mathbb{R}}\n%\n%\\newlength\\figurewidth\n%\\newlength\\figureheight\n%\n%\\newtheorem{theorem}{Theorem}\n%\\newtheorem{corollary}{Corollary}[theorem]\n%%\\newtheorem{lemma}{Lemma}\n%\\newtheorem{lemma}[theorem]{Lemma}\n%\\newtheorem{assumption}{Assumption}\n%\\newtheorem {Remark}[theorem]{Remark}\n%\n%\\makeatletter\n%\\renewcommand*\\env@matrix[1][\\arraystretch]{%\n%\t\\edef\\arraystretch{#1}%\n%\t\\hskip -\\arraycolsep\n%\t\\let\\@ifnextchar\\new@ifnextchar\n%\t\\array{*\\c@MaxMatrixCols c}}\n%\\makeatother\n\n%\\newtheoremstyle{exampstyle}\n%  {\\topsep} % Space above\n%  {\\topsep} % Space below\n%  {} % Body font\n%  {} % Indent amount\n%  {\\bfseries} % Theorem head font\n%  {.} % Punctuation after theorem head\n%  {.5em} % Space after theorem head\n%  {} % Theorem head spec (can be left empty, meaning `normal')\n\n%\\definecolor{bluet}{RGB}{100,121,178}\n%\\definecolor{redt}{RGB}{255,86,73}\n%\\definecolor{dgreent}{RGB}{75,212,69}\n\n\\expandafter\\def\\expandafter\\normalsize\\expandafter{%\n\t\\normalsize\n\t\\setlength\\abovedisplayskip{10pt}\n\t\\setlength\\belowdisplayskip{10pt}\n\t\\setlength\\abovedisplayshortskip{10pt}\n\t\\setlength\\belowdisplayshortskip{10pt}\n}\n%\n%\n%\\def\\nonfrenchspacing{\\sfcode`\\.3000\\sfcode`\\?3000\\sfcode`\\!3000%\n%\t\\sfcode`\\:1000\\sfcode`\\;1500\\sfcode`\\,1250 }\n%%\n%\\nonfrenchspacing\n%\n%\\newcommand{\\barr}{\\begin{array}}\n%\t\\newcommand{\\earr}{\\end{array}}\n%\\newcommand{\\bvec}{ \\left[ \\!\\! \\barr{cccccccccccc} }\n%\\newcommand{\\evec}{ \\earr \\!\\! \\right] }\n%\n%\n%\\hyphenation{op-tical net-works semi-conduc-tor}\n\n\n\\title{Notes on Initialization Strategies for \\\\ Primal-Dual Interior Point Methods}\n\\author{Andrea Zanelli}\n\\let\\endtitlepage\\relax\n\n\\begin{document}\n\t\n\\begin{titlepage}\n\t\\maketitle\n\\end{titlepage}\n\n%\\section{Initialization strategy for general polytopic constraints.}\nConsider the following convex quadratic problem (QP) with general polytopic constraints:\n\\begin{equation}\\label{eq:u_qp}\n\\begin{aligned}\n&\\underset{\\begin{subarray}{c}\n\tx \\\\\n\t\\end{subarray}}{\\min}\t&&\\frac{1}{2}x^THx\\\\\n&\\quad \\text{s.t.} \t\t   \t&&Ax  = b\\\\\n& \t\t\t\t\t\t&&Cx \\leq d,\n\\end{aligned}\n\\end{equation}\nwhere $A \\in \\R^{m \\times n}$, $C \\in \\R^{p \\times n}$, $H \\in \\R ^{n\\times n}$ and $H \\succ 0$,  and $x \\in \\R^n$, $b \\in \\R^m$, $d \\in \\R^p$. We are interested in solving the above problem with a primal-dual infeasible interior-point method (IPM). 
To this end, we can write the following set of equations:\n\\begin{equation}\\label{eq:u_kkt}\n\\begin{aligned}\n&Hx + A^T\\lambda + C^T\\nu &&= 0 \\\\\n&Ax - b \t\t\t\t\t\t\t\t&&= 0 \\\\\n&Cx - d + s\t\t\t\t\t\t\t\t&&= 0 \\\\\n&S\\nu\t\t\t\t\t\t\t\t\t&&=0,\n\\end{aligned}\n\\end{equation}\nwhere the slack variable $s \\in \\R^p$ has been introduced. Equations \\eqref{eq:u_kkt}, together with the positivity constraints $s, \\,\\nu \\!> \\!0$, constitute the first-order optimality, or Karush-Kuhn-Tucker conditions (KKTs) associated with \\eqref{eq:u_qp}.\n\\par\n\nWhen using infeasible IPMs, a point that is feasible with respect to the inequalities must be available to initialize the algorithm in order to prove convergence of the iterates to the solution of the problem (if it exists). However, finding a point that lies inside the set defined by the polytopic constraints $Cx \\leq d$ might be a computationally expensive task. For this reason, the following modified problem formulation can be used:\n\\begin{equation}\\label{eq:qp}\n\\begin{aligned}\n&\\underset{\\begin{subarray}{c}\n\tx,\\,y \\\\\n\t\\end{subarray}}{\\min}\t&&\\frac{1}{2}x^THx\\\\\n&\\quad \\text{s.t.} \t\t   \t&&Ax  = b\\\\\n& \t\t\t\t\t\t&&Cx = y \\\\\n& \t\t\t\t\t\t&&y \\leq d,\n\\end{aligned}\n\\end{equation}\nwhere the additional constraint $Cx = y$ and the additional variable $y$ have been introduced in order to ``lift'' the inequality constraints and deal with the simpler constraint $y \\leq d$. The Lagrangian of the modified problem is\n\\begin{equation}\n\\mathcal{L} = \\frac{1}{2}x^THx + \\lambda^T(Ax - b) + \\mu^T(Cx - y) + \\nu^T(y - d)\n\\end{equation}\nand the KKTs associated with \\eqref{eq:qp} read \n\\begin{equation}\\label{eq:kkt}\n\\begin{aligned}\n&Hx + A^T\\lambda + C^T\\mu &&= 0 \\\\\n&-\\mu + \\nu &&= 0 \\\\\n&Ax - b \t\t\t\t\t\t\t\t&&= 0 \\\\\n&Cx - y\t\t\t\t\t\t\t\t&&= 0 \\\\\n&y - d + s\t\t\t\t\t\t\t\t\t&&=0 \\\\\n&S\\nu &&=0.\n\\end{aligned}\n\\end{equation} \nWhen solving QP problems arising from MPC formulations, due to the presence of additional equality constraints $Cx = y$, the efficient Riccati-based factorization proposed in \\cite{Frison2013} cannot be applied straightforwardly. For this reason, an efficient elimination technique is proposed below that brings the problem into the same form as in \\eqref{eq:u_kkt}. In this way the Riccati-based factorization can be applied to the reduced problem.
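\n\nTo see why the lifted formulation makes initialization easy, consider the following small NumPy sketch (the random data and the specific choice $y_0 = d - \\mathbf{1}$ are mine, purely for illustration): whatever $x_0$ is, a strictly interior point for $y \\leq d$ is immediate.\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical problem data, for illustration only.\nn, p = 4, 3\nrng = np.random.default_rng(0)\nC = rng.standard_normal((p, n))\nd = rng.standard_normal(p)\nx0 = np.zeros(n)      # no need for C @ x0 <= d to hold\n\n# Strictly interior initialization in the lifted variables:\ny0 = d - np.ones(p)   # y0 < d componentwise\ns0 = d - y0           # slack s0 = 1 > 0\nnu0 = np.ones(p)      # nu0 > 0\nmu0 = nu0.copy()      # consistent with -mu + nu = 0\nassert np.all(s0 > 0) and np.all(nu0 > 0)\n\\end{verbatim}\n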
\\par\nThe Newton system associated with \\eqref{eq:kkt} reads:\n\\begin{equation}\n\\begin{bmatrix}\nH & 0\t& A^T \t& C^T\t& 0 & 0 \t\\\\\n0 & 0\t& 0 \t& -I\t& I & 0\t\t\\\\\nA & 0\t& 0 \t&  0\t& 0 & 0\t\t\\\\\nC & -I\t& 0 \t&  0\t& 0 & 0\t\t\\\\\n0 & I\t& 0 \t&  0\t& 0 & I\t\\\\\n0 & 0\t& 0 \t& 0\t\t& S & V\t\t\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n\\Delta x \\\\ \\Delta y \\\\ \\Delta \\lambda \\\\ \\Delta \\mu \\\\\\Delta \\nu \\\\ \\Delta s\n\\end{bmatrix}\n=\n-\\begin{bmatrix}\nr_{S} \\\\ r_{S'} \\\\ r_E \\\\ r_{E'} \\\\ r_I \\\\ r_C\n\\end{bmatrix}.\n\\end{equation}\nThe system can be efficiently reduced by eliminating $\\Delta y$ using the fact that $\\Delta y = -\\Delta s - r_I$:\n\\begin{equation}\n\\begin{bmatrix}\nH \t& A^T \t& C^T\t& 0 & 0 \t\\\\\n0 \t& 0 \t& -I\t& I & 0\t\t\\\\\nA \t& 0 \t&  0\t& 0 & 0\t\t\\\\\nC \t& 0 \t&  0\t& 0 & I\t\\\\\n0 \t& 0 \t& 0\t\t& S & V\t\t\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n\\Delta x \\\\ \\Delta \\lambda \\\\ \\Delta \\mu \\\\\\Delta \\nu \\\\ \\Delta s\n\\end{bmatrix}\n=\n-\\begin{bmatrix}\nr_{S} \\\\ r_{S'} \\\\ r_E \\\\ r_{E''} \\\\ r_C\n\\end{bmatrix},\n\\end{equation}\nwhere $r_{E''} = r_{E'} + r_I$. Finally, $\\Delta \\mu$ can be eliminated using $\\Delta \\mu = \\Delta \\nu + r_{S'}$:\n\\begin{equation}\n\\begin{bmatrix}\nH \t& A^T \t& C^T \t&\t 0 \t\\\\\nA \t& 0 \t& 0 \t& 0\t\t\\\\\nC \t& 0 \t& 0 \t& I\t\\\\\n0 \t& 0 \t& S \t& V\t\t\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n\\Delta x \\\\ \\Delta \\lambda \\\\\\Delta \\nu \\\\ \\Delta s\n\\end{bmatrix}\n=\n-\\begin{bmatrix}\nr_{S''}  \\\\ r_E \\\\ r_{E''} \\\\ r_C\n\\end{bmatrix},\n\\end{equation}\nwhere $r_{S''} = r_S + C^Tr_{S'}$.\n\\bibliographystyle{plain}\n\\bibliography{syscop} \n\\end{document}", "meta": {"hexsha": "d1c8b9f0b355a1f60a839d6745d98e0342bd7adb", "size": 6886, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "experimental/andrea/notes/notes.tex", "max_stars_repo_name": "aghezz1/hpipm", "max_stars_repo_head_hexsha": "937c9d7264ac1624ef5539bb4208a8b7c8d3e140", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 235, "max_stars_repo_stars_event_min_datetime": "2017-12-12T03:41:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T08:02:15.000Z", "max_issues_repo_path": "experimental/andrea/notes/notes.tex", "max_issues_repo_name": "aghezz1/hpipm", "max_issues_repo_head_hexsha": "937c9d7264ac1624ef5539bb4208a8b7c8d3e140", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 92, "max_issues_repo_issues_event_min_datetime": "2017-09-18T09:32:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-09T04:27:11.000Z", "max_forks_repo_path": "experimental/andrea/notes/notes.tex", "max_forks_repo_name": "aghezz1/hpipm", "max_forks_repo_head_hexsha": "937c9d7264ac1624ef5539bb4208a8b7c8d3e140", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 86, "max_forks_repo_forks_event_min_datetime": "2017-09-12T09:12:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T03:41:44.000Z", "avg_line_length": 33.5902439024, "max_line_length": 459, "alphanum_fraction": 0.6662794075, "num_tokens": 2544, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8418256532040708, "lm_q1q2_score": 0.5886503097636236}}
{"text": "\\chapter{Proof Tools: Propositional Logic}\n\\label{chap:proof-tools}\n\n\\newcommand{\\naive}{na\\\"\\i{}ve}\n\nUsers of \\HOL{} can create their own theorem proving tools by\ncombining predefined rules and tactics. The \\ML{} type-discipline\nensures that only logically sound methods can be used to create values\nof type \\ml{thm}.  In this chapter, a real example is described.\n\nTwo implementations of the tool are given to illustrate various styles\nof proof programming. The first implementation is the obvious one, but\nis inefficient because of the `brute force' method used. The second\nimplementation attempts to be a great deal more intelligent.\nExtensions to the tools to allow more general applicability are also\ndiscussed.\n\nThe problem to be solved is that of deciding the truth of a closed\nformula of propositional logic.  Such a formula has the general form\n\\[\n\\begin{array}{ccl}\n\\varphi & ::= & v \\;|\\;\\neg\\varphi\\;|\\;\\varphi\n\\land \\varphi \\;|\\; \\varphi \\lor \\varphi \\;|\\; \\varphi \\Rightarrow\n\\varphi\\;|\\;\\varphi = \\varphi\n\\\\[1ex]\n\\mathit{formula} &::= & \\forall \\vec{v}. \\;\\varphi\n\\end{array}\n\\]\nwhere the variables $v$ are all of boolean type, and where the\nuniversal quantification at the outermost level captures all of the\nfree variables.\n\n\\section{Method 1: Truth Tables}\n\n\\setcounter{sessioncount}{0}\n\nThe first method to be implemented is the brute force method of trying\nall possible boolean combinations.  This approach's only real virtue\nis that it is exceptionally easy to implement.  First we will prove\nthe motivating theorem:\n\\begin{hol}\n\\begin{verbatim}\n   val FORALL_BOOL = prove(\n     ``(!v. P v) = P T /\\ P F``,\n     SRW_TAC [][EQ_IMP_THM] THEN Cases_on `v` THEN SRW_TAC [][]);\n\\end{verbatim}\n\\end{hol}\nThe proof proceeds by splitting the goal into two halves, showing\n\\[\n(\\forall v. \\;P(v))\\Rightarrow P(\\top) \\land P(\\bot)\n\\]\n(which goal is automatically shown by the simplifier), and\n\\[\nP(\\top) \\land P(\\bot) \\Rightarrow P(v)\n\\]\nfor an arbitrary boolean variable $v$.  After case-splitting on $v$,\nthe assumptions are then enough to show the goal.  (This theorem is\nactually already proved in the theory \\theoryimp{bool}.)\n\nThe next, and final, step is to rewrite with this theorem:\n\\begin{hol}\n\\begin{verbatim}\n   val tautDP = SIMP_CONV bool_ss [FORALL_BOOL]\n\\end{verbatim}\n\\end{hol}\n\nThis enables the following\n\n\\begin{session}\n\\begin{verbatim}\n- tautDP ``!p q. p /\\ q /\\ ~p``;\n> val it = |- (!p q. p /\\ q /\\ ~p) = F : thm\n\n- tautDP ``!p. p \\/ ~p``\n> val it = |- (!p. p \\/ ~p) = T : thm\n\\end{verbatim}\n\\end{session}\nand even the marginally more intimidating\n\\begin{session}\n\\begin{verbatim}\n- time tautDP\n     ``!p q c a. ~(((~a \\/ p /\\ ~q \\/ ~p /\\ q) /\\\n                    (~(p /\\ ~q \\/ ~p /\\ q) \\/ a)) /\\\n                   (~c \\/ p /\\ q) /\\ (~(p /\\ q) \\/ c)) \\/\n                 ~(p /\\ q) \\/ c /\\ ~a``;\nruntime: 0.147s,    gctime: 0.012s,     systime: 0.000s.\n> val it =\n    |- (!p q c a.\n          ~(((~a \\/ p /\\ ~q \\/ ~p /\\ q) /\\ (~(p /\\ ~q \\/ ~p /\\ q) \\/ a)) /\\\n            (~c \\/ p /\\ q) /\\ (~(p /\\ q) \\/ c)) \\/ ~(p /\\ q) \\/ c /\\ ~a) =\n       T : thm\n\\end{verbatim}\n\\end{session}\n\nThis is a dreadful algorithm for solving this problem.  The system's\nbuilt-in function, \\ml{tautLib.TAUT\\_CONV}, solves the problem above\nmuch faster.  
The only real\nmerit in this solution is that it took one line to write.  This is a\ngeneral illustration of the truth that \HOL{}'s high-level tools,\nparticularly the simplifier, can provide fast prototypes for a variety\nof proof tasks.\n\n\section{Method 2: the DPLL Algorithm}\n\nThe Davis-Putnam-Logemann-Loveland method~\cite{DPLL-paper} for\ndeciding the satisfiability of propositional formulas in CNF\n(Conjunctive Normal Form) is a powerful technique, still used in\nstate-of-the-art solvers today.  If we strip the universal quantifiers\nfrom our input formulas, our task can be seen as determining the\nvalidity of a propositional formula.  Testing the negation of such a\nformula for satisfiability is a test for validity: if the formula's\nnegation is satisfiable, then it is not valid (the satisfying\nassignment will make the original false); if the formula's negation is\nunsatisfiable, then the formula is valid (no assignment can make it\nfalse).\n\n\smallskip\n\noindent\n(The source code for this example is available in the file \texttt{examples/dpll.sml}.)\n\n\subsection*{Preliminaries}\n\nTo begin, assume that we already have code to convert arbitrary\nformulas into CNF{}, and to then decide the satisfiability of these\nformulas.  Assume further that if the input to the latter procedure is\nunsatisfiable, then it will return with a theorem of the form\n\[ \vdash \varphi = \holtxt{F}\n\]\nor if it is satisfiable, then it will return a satisfying assignment,\na map from variables to booleans.  This map will be a function from\n\HOL{} variables to one of the \HOL{} terms \holtxt{T} or \holtxt{F}.\nThus, we will assume\n\begin{hol}\n\begin{verbatim}\n   datatype result = Unsat of thm | Sat of term -> term\n   val toCNF : term -> thm\n   val DPLL : term -> result\n\end{verbatim}\n\end{hol}\n(The theorem returned by \ml{toCNF} will equate the input term to\nanother in CNF{}.)\n\n\smallskip\n\noindent\nBefore looking into implementing these functions, we will need to\nconsider\n\begin{itemize}\n\item how to transform our inputs to suit the function; and\n\item how to use the outputs from the functions to produce our desired\n  results\n\end{itemize}\n\nWe are assuming our input is a universally quantified formula.  Both\nthe CNF and DPLL procedures expect formulas without quantifiers.  We\nalso want to pass these procedures the negation of the original\nformula.  Both of the required term manipulations can be done\nby functions found in the structure \ml{boolSyntax}.  (In general,\nimportant theories (such as \theoryimp{bool}) are accompanied by\n\ml{Syntax} modules containing functions for manipulating the\nterm-forms associated with that theory.)\n\nIn this case we need the functions\n\begin{hol}\n\begin{verbatim}\n   strip_forall : term -> term list * term\n   mk_neg       : term -> term\n\end{verbatim}\n\end{hol}\nThe function \ml{strip\_forall} strips a term of all its outermost\nuniversal quantifications, returning the list of variables stripped\nand the body of the quantification.  The function \ml{mk\_neg} takes a\nterm of type \holtxt{bool} and returns the term corresponding to its\nnegation.\n\nUsing these functions, it is easy to see how we will be able to take\n$\forall\vec{v}.\;\varphi$ as input, and pass the term $\neg\varphi$\nto the function \ml{toCNF}.  A more significant question is how to\nuse the results of these calls.  
The call to \\ml{toCNF} will return a\ntheorem\n\\[\n\\vdash \\neg\\varphi = \\varphi'\n\\]\nThe formula $\\varphi'$ is what will then be passed to \\ml{DPLL}.  (We\ncan extract it by using the \\ml{concl} and \\ml{rhs} functions.) If\n\\ml{DPLL} returns the theorem $\\vdash \\varphi' = \\holtxt{F}$, an\napplication of \\ml{TRANS} to this and the theorem displayed above will\nderive the formula $\\vdash \\neg\\varphi = F$.  In order to derive the\nfinal result, we will need to turn this into $\\vdash\\varphi$.  This is\nbest done by proving a bespoke theorem embodying the equality (there\nisn't one such already in the system):\n\\begin{hol}\n\\begin{verbatim}\n   val NEG_EQ_F = prove(``(~p = F) = p``, REWRITE_TAC []);\n\\end{verbatim}\n\\end{hol}\nTo turn $\\vdash \\varphi$ into $\\vdash (\\forall\n\\vec{v}.\\;\\varphi) = \\holtxt{T}$, we will perform the following proof:\n\\[\n\\infer[\\texttt{\\scriptsize EQT\\_INTRO}]{\n  \\vdash (\\forall \\vec{v}.\\;\\varphi) = \\holtxt{T}}{\n  \\infer[\\texttt{\\scriptsize GENL}(\\vec{v})]{\\vdash \\forall \\vec{v}.\\;\\varphi}{\n    \\vdash \\varphi\n  }\n}\n\\]\nThe other possibility is that \\ml{DPLL} will return a satisfying\nassignment demonstrating that $\\varphi'$ is satisfiable.  If this is\nthe case, we want to show that $\\forall\\vec{v}.\\;\\varphi$ is false.\nWe can do this by assuming this formula, and then specialising the\nuniversally quantified variables in line with the provided map.  In\nthis way, it will be possible to produce the theorem\n\\[\n\\forall \\vec{v}.\\;\\varphi \\vdash \\varphi[\\vec{v} := \\mbox{\\emph{satisfying\n  assignment}}]\n\\]\nBecause there are no free variables in $\\forall\\vec{v}.\\;\\varphi$, the\nsubstitution will produce a completely ground boolean formula.  This\nwill straightforwardly rewrite to \\holtxt{F} (if the assignment\nmakes $\\neg\\varphi$ true, it must make $\\varphi$ false).  Turning\n$\\phi\\vdash \\holtxt{F}$ into $\\vdash \\phi = \\holtxt{F}$ is a matter of\ncalling \\ml{DISCH} and then rewriting with the built-in theorem\n\\ml{IMP\\_F\\_EQ\\_F}:\n\\[\n\\vdash \\forall t.\\;t \\Rightarrow \\holtxt{F} = (t = \\holtxt{F})\n\\]\nPutting all of the above together, we can write our wrapper function,\nwhich we will call \\ml{DPLL\\_UNIV}, with the \\ml{UNIV} suffix\nreminding us that the input must be universally quantified.\n\\begin{hol}\n\\begin{verbatim}\n   fun DPLL_UNIV t = let\n     val (vs, phi) = strip_forall t\n     val cnf_eqn = toCNF (mk_neg phi)\n     val phi' = rhs (concl cnf_eqn)\n   in\n     case DPLL phi' of\n       Unsat phi'_eq_F => let\n         val negphi_eq_F = TRANS cnf_eqn phi'_eq_F\n         val phi_thm = CONV_RULE (REWR_CONV NEG_EQ_F) negphi_eq_F\n       in\n         EQT_INTRO (GENL vs phi_thm)\n       end\n     | Sat f => let\n         val t_assumed = ASSUME t\n         fun spec th =\n             spec (SPEC (f (#1 (dest_forall (concl th)))) th)\n             handle HOL_ERR _ => REWRITE_RULE [] th\n       in\n         CONV_RULE (REWR_CONV IMP_F_EQ_F) (DISCH t (spec t_assumed))\n       end\n   end\n\\end{verbatim}\n\\end{hol}\n\nThe auxiliary function \\ml{spec} that is used in the second case\nrelies on the fact that \\ml{dest\\_forall} will raise a \\ml{HOL\\_ERR}\nexception if the term it is applied to is not universally quantified.\nWhen \\ml{spec}'s argument is not universally quantified, this means\nthat the recursion has bottomed out, and all of the original formula's\nuniversal variables have been specialised.  
Then the resulting formula\ncan be rewritten to false (\\ml{REWRITE\\_RULE}'s built-in rewrites will\nhandle all of the necessary cases).\n\nThe \\ml{DPLL\\_UNIV} function also uses \\ml{REWR\\_CONV} in two places.\nThe \\ml{REWR\\_CONV} function applies a single (first-order) rewrite at\nthe top of a term.  These uses of \\ml{REWR\\_CONV} are done within\ncalls to the \\ml{CONV\\_RULE} function.  This lifts a conversion $c$ (a\nfunction taking a term $t$ and producing a theorem $\\vdash t = t'$),\nso that $\\ml{CONV\\_RULE}\\;c$ takes the theorem $\\vdash t$ to $\\vdash t'$.\n\n\n\\subsection{Conversion to Conjunctive Normal Form}\n\\label{sec:conv-conj-norm}\n\nA formula in Conjunctive Normal Form is a conjunction of disjunctions\nof literals (either variables, or negated variables).  It is possible\nto convert formulas of the form we are expecting into CNF by simply\nrewriting with the following theorems\n\\begin{eqnarray*}\n\\neg (\\phi \\land \\psi) &=& \\neg\\phi \\lor \\neg\\psi\\\\\n\\neg (\\phi \\lor \\psi) &=& \\neg\\phi \\land \\neg\\psi\\\\\n\\phi \\lor (\\psi \\land \\xi) &=& (\\phi \\lor \\psi) \\land (\\phi \\lor \\xi)\\\\\n(\\psi \\land \\xi)\\lor\\phi \\ &=& (\\phi \\lor \\psi) \\land (\\phi \\lor\n\\xi)\\\\[1ex]\n\\phi \\Rightarrow\\psi &=& \\neg\\phi \\lor \\psi\\\\\n(\\phi = \\psi) &=& (\\phi \\Rightarrow \\psi) \\land (\\psi \\Rightarrow\n\\phi)\n\\end{eqnarray*}\nUnfortunately, using these theorems as rewrites can result in an\nexponential increase in the size of a formula.  (Consider using them\nto convert an input in Disjunctive Normal Form, a disjunction\nof conjunctions of literals, into CNF{}.)\n\nA better approach is to convert to what is known as ``definitional\nCNF''. \\HOL{} includes functions to do this in the structure\n\\ml{defCNF}.  Unfortunately, this approach adds extra, existential,\nquantifiers to the formula.  For example\n\\begin{session}\n\\begin{verbatim}\n- defCNF.DEF_CNF_CONV ``p \\/ (q /\\ r)``;\n> val it =\n    |- p \\/ q /\\ r =\n       ?x. (x \\/ ~q \\/ ~r) /\\ (r \\/ ~x) /\\ (q \\/ ~x) /\\ (p \\/ x) : thm\n\\end{verbatim}\n\\end{session}\nUnder the existentially-bound \\holtxt{x}, the code has produced a\nformula in CNF{}.  With an example this small, the formula is actually\nbigger than that produced by the \\naive{} translation, but with more\nrealistic examples, the difference quickly becomes significant.  The\nlast example used with \\ml{tautDP} is 20 times bigger when translated\n\\naive{}ly than when using \\ml{defCNF}, and the translation takes 150\ntimes longer to perform.\n\nBut what of these extra existentially quantified variables?  In fact,\nwe can ignore the quantification when calling the core DPLL procedure.\nIf we pass the unquantified body to DPLL, we will either get back an\nunsatisfiable verdict of the form $\\vdash \\varphi' = \\holtxt{F}$, or a\nsatisfying assignment for all of the free variables.  If the latter\noccurs, the same satisfying assignment will also satisfy the\noriginal.  
If the former, we will perform the following proof\n\[\n\infer{\vdash (\exists \vec{x}.\;\varphi') = \holtxt{F}}{\n  \infer{\vdash (\exists \vec{x}.\;\varphi') \Rightarrow \holtxt{F}}{\n    \infer{\vdash\forall \vec{x}.\;\varphi' \Rightarrow \holtxt{F}}{\n      \infer{\vdash\varphi' \Rightarrow \holtxt{F}}{\n        \vdash \varphi' = \holtxt{F}\n      }\n    }\n  }\n}\n\]\nproducing a theorem of the form expected by our \ml{DPLL\_UNIV}\nfunction.\n\nIn fact, there is an alternative function in the \ml{defCNF} API that\nwe will use in preference to \ml{DEF\_CNF\_CONV}.  The problem with\n\ml{DEF\_CNF\_CONV} is that it can produce a big quantification,\ninvolving lots of variables.  We will rather use\n\ml{DEF\_CNF\_VECTOR\_CONV}.  Instead of output of the form\n\[\n\vdash \varphi = (\exists \vec{x}.\; \varphi')\n\]\nthis second function produces\n\[\n\vdash \varphi = (\exists (v : \textsf{num} \rightarrow\n\textsf{bool}).\; \varphi')\n\]\nwhere the individual variables $x_i$ of the first formula are replaced\nby calls to the $v$ function $v(i)$, and there is just one quantified\nvariable, $v$.  This variation will not affect the operation of the\nproof sketched above.  And as long as we don't require literals to be\nvariables or their negations, but also allow them to be terms of\nthe form $v(i)$ and $\neg v(i)$ as well, then the action of the DPLL\nprocedure on the formula $\varphi'$ won't be affected either.\n\nUnfortunately for uniformity, in simple cases, the definitional CNF\nconversion functions may not result in any existential\nquantifications at all.  This makes our implementation of \ml{DPLL}\nsomewhat more complicated.  We calculate a \ml{body} variable that\nwill be passed on to the \ml{CoreDPLL} function, as well as a\n\ml{transform} function that will transform an unsatisfiability result\ninto something of the desired form.  If the result of conversion to\nCNF produces an existential quantification, we use the proof sketched\nabove.  Otherwise, the transformation can be the identity function,\n\ml{I}:\n\begin{hol}\n\begin{verbatim}\n   fun DPLL t = let\n     val (transform, body) = let\n       val (vector, body) = dest_exists t\n       fun transform body_eq_F = let\n         val body_imp_F = CONV_RULE (REWR_CONV (GSYM IMP_F_EQ_F)) body_eq_F\n         val fa_body_imp_F = GEN vector body_imp_F\n         val ex_body_imp_F = CONV_RULE FORALL_IMP_CONV fa_body_imp_F\n       in\n         CONV_RULE (REWR_CONV IMP_F_EQ_F) ex_body_imp_F\n       end\n     in\n       (transform, body)\n     end handle HOL_ERR _ => (I, t)\n   in\n     case CoreDPLL body of\n       Unsat body_eq_F => Unsat (transform body_eq_F)\n     | x => x\n   end\n\end{verbatim}\n\end{hol}\nwhere we have still to implement the core DPLL procedure (called\n\ml{CoreDPLL} above).  The above code uses \ml{REWR\_CONV} with the\n\ml{IMP\_F\_EQ\_F} theorem to effect two of the proof's\ntransformations.  The \ml{GSYM} function is used to flip the\norientation of a theorem's top-level equalities.  Finally, the\n\ml{FORALL\_IMP\_CONV} conversion takes a term of the form\n\[\n\forall x.\;P(x) \Rightarrow Q\n\]\nand returns the theorem\n\[\n\vdash (\forall x.\;P(x) \Rightarrow Q) = ((\exists\nx.\;P(x))\Rightarrow Q)\n\]\n\n\subsection{The Core DPLL Procedure}\n\label{sec:dpll-procedure}\n\nThe DPLL procedure can be seen as a slight variation on the basic\n``truth table'' technique we have already seen.  
As with that\nprocedure, the core operation is a case-split on a boolean variable.\nThere are two significant differences though: DPLL can be seen as a\nsearch for a satisfying assignment, so that if picking a variable to\nhave a particular value results in a satisfying assignment, we do not\nneed to also check what happens if the same variable is given the\nopposite truth-value.  Secondly, DPLL takes some care to pick good\nvariables to split on.  In particular, \\emph{unit propagation} is used\nto eliminate variables that will not cause branching in the\nsearch-space.\n\nOur implementation of the core DPLL procedure is a function that takes\na term and returns a value of type \\ml{result}: either a theorem\nequating the original term to false, or a satisfying assignment (in\nthe form of a function from terms to terms).  As the DPLL search for a\nsatisfying assignment proceeds, an assignment is incrementally\nconstructed.  This suggests that the recursive core of our function\nwill need to take a term (the current formula) and a context (the\ncurrent assignment) as parameters.  The assignment can be naturally\nrepresented as a set of equations, where each equation is either $v =\n\\holtxt{T}$ or $v = \\holtxt{F}$.\n\nThis suggests that a natural representation for our program state is a\ntheorem: the hypotheses will represent the assignment, and the\nconclusion can be the current formula.  Of course, \\HOL{} theorems\ncan't just be wished into existence.  In this case, we can make\neverything sound by also assuming the initial formula.  Thus, when we\nbegin our initial state will be $\\phi\\vdash\\phi$.  After splitting on\nvariable $v$, we will generate two new states\n$\\phi,(v\\!=\\!\\holtxt{T})\\vdash \\phi_1$, and\n$\\phi,(v\\!=\\!\\holtxt{F})\\vdash \\phi_2$, where the $\\phi_i$ are the\nresult of simplifying $\\phi$ under the additional assumption\nconstraining $v$.\n\nThe easiest way to add an assumption to a theorem is to use the\nrule \\ml{ADD\\_ASSUM}.  But in this situation, we also want to\nsimplify the conclusion of the theorem with the same assumption.  This\nmeans that it will be enough to rewrite with the theorem\n$\\psi\\vdash\\psi$, where $\\psi$ is the new assumption.  The action of\nrewriting with such a theorem will cause the new assumption to appear\namong the assumptions of the result.\n\nThe \\ml{casesplit} function is thus:\n\\begin{hol}\n\\begin{verbatim}\n   fun casesplit v th = let\n     val eqT = ASSUME (mk_eq(v, boolSyntax.T))\n     val eqF = ASSUME (mk_eq(v, boolSyntax.F))\n   in\n     (REWRITE_RULE [eqT] th, REWRITE_RULE [eqF] th)\n   end\n\\end{verbatim}\n\\end{hol}\n\nA case-split can result in a formula that has been rewritten all the\nway to true or false.  These are the recursion's base cases. If the\nformula has been rewritten to true, then we have found a satisfying\nassignment, one that is now stored for us in the hypotheses of the\ntheorem itself.  
The following function, \\ml{mk\\_satmap}, extracts\nthose hypotheses into a finite-map, and then returns the lookup\nfunction for that finite-map:\n\\begin{hol}\n\\begin{verbatim}\n   fun mk_satmap th = let\n     val hyps = hypset th\n     fun foldthis (t,acc) = let\n       val (l,r) = dest_eq t\n     in\n       Binarymap.insert(acc,l,r)\n     end handle HOL_ERR _ => acc\n     val fmap = HOLset.foldl foldthis (Binarymap.mkDict Term.compare) hyps\n   in\n     Sat (fn v => Binarymap.find(fmap,v)\n                  handle Binarymap.NotFound => boolSyntax.T)\n   end\n\\end{verbatim}\n\\end{hol}\nThe \\ml{foldthis} function above adds the equations that are stored as\nhypotheses into the finite-map.  The exception handler in\n\\ml{foldthis} is necessary because one of the hypotheses will be the\noriginal formula.  The exception handler in the function that looks up\nvariable bindings is necessary because a formula may be reduced to\ntrue without every variable being assigned a value at all.  In this\ncase, it is irrelevant what value we give to the variable, so we\narbitrarily map such variables to \\holtxt{T}.\n\nIf the formula has been rewritten to false, then we can just return\nthis theorem directly.  Such a theorem is not quite in the right form\nfor the external caller, which is expecting an equation, so if the\nfinal result is of the form $\\phi\\vdash \\holtxt{F}$, we will have to\ntransform this to $\\vdash \\phi = \\holtxt{F}$.\n\nThe next question to address is what to do with the results of\nrecursive calls.  If a case-split returns a satisfying assignment this\ncan be returned unchanged.  But if a recursive call returns a theorem\nequating the input to false, more needs to be done.  If this is the\nfirst call, then the other branch needs to be checked.  If this also\nreturns that the theorem is unsatisfiable, we will have two theorems:\n\\[\n\\phi_0,\\Delta,(v\\!=\\!\\holtxt{T})\\vdash \\holtxt{F} \\qquad\n\\phi_0,\\Delta,(v\\!=\\!\\holtxt{F})\\vdash \\holtxt{F}\n\\] where $\\phi_0$ is the original formula, $\\Delta$ is the rest of the\ncurrent assignment, and $v$ is the variable on which a split has just\nbeen performed.  
To turn these two theorems into the desired\n\\[\n\\phi_0,\\Delta\\vdash \\holtxt{F}\n\\]\nwe will use the rule of inference \\ml{DISJ\\_CASES}:\n\\[\n\\infer{\\Gamma \\cup \\Delta_1 \\cup \\Delta_2 \\vdash \\phi}{\n  \\Gamma \\vdash \\psi \\lor \\xi &\n  \\Delta_1 \\cup \\{\\psi\\}\\vdash \\phi &\n  \\Delta_2 \\cup \\{\\xi\\} \\vdash \\phi\n}\n\\]\nand the theorem \\ml{BOOL\\_CASES\\_AX}:\n\\[\n\\vdash \\forall t.\\;(t = \\holtxt{T}) \\lor (t = \\holtxt{F})\n\\]\n\nWe can put these fragments together and write the top-level\n\\ml{CoreDPLL} function, in Figure~\\ref{fig:coredpll}.\n\\begin{figure}[htbp]\n\\begin{holboxed}\n\\begin{verbatim}\nfun CoreDPLL form = let\n  val initial_th = ASSUME form\n  fun recurse th = let\n    val c = concl th\n  in\n    if c = boolSyntax.T then\n      mk_satmap th\n    else if c = boolSyntax.F then\n      Unsat th\n    else let\n        val v = find_splitting_var c\n        val (l,r) = casesplit v th\n      in\n        case recurse l of\n          Unsat l_false => let\n          in\n            case recurse r of\n              Unsat r_false =>\n                Unsat (DISJ_CASES (SPEC v BOOL_CASES_AX) l_false r_false)\n            | x => x\n          end\n        | x => x\n      end\n  end\nin\n  case (recurse initial_th) of\n    Unsat th => Unsat (CONV_RULE (REWR_CONV IMP_F_EQ_F) (DISCH form th))\n  | x => x\nend\n\\end{verbatim}\n\\end{holboxed}\n\\caption{The core of the DPLL function}\n\\label{fig:coredpll}\n\\end{figure}\n\n\nAll that remains to be done is to figure out which variable to\ncase-split on.  The most important variables to split on are those\nthat appear in what are called ``unit clauses'', a clause containing\njust one literal.  If there is a unit clause in a formula then it is\nof the form\n\\[\n\\phi \\land v \\land \\phi'\n\\]\nor\n\\[\n\\phi \\land \\neg v \\land \\phi'\n\\]\nIn either situation, splitting on $v$ will always result in a branch\nthat evaluates directly to false.  We thus eliminate a variable\nwithout increasing the size of the problem.  The process of\neliminating unit clauses is usually called ``unit propagation''.\nUnit propagation is not usually thought of as a case-splitting\noperation, but doing it this way makes our code simpler.\n\nIf a formula does not include a unit clause, then choice of the next\nvariable to split on is much more of a black art.  Here we will\nimplement a very simple choice: to split on the variable that occurs\nmost often.  Our function \\ml{find\\_splitting\\_var} takes a formula\nand returns the variable to split on.\n\\begin{hol}\n\\begin{verbatim}\n   fun find_splitting_var phi = let\n     fun recurse acc [] = getBiggest acc\n       | recurse acc (c::cs) = let\n           val ds = strip_disj c\n         in\n           case ds of\n             [lit] => (dest_neg lit handle HOL_ERR _ => lit)\n           | _ => recurse (count_vars ds acc) cs\n         end\n   in\n     recurse (Binarymap.mkDict Term.compare) (strip_conj phi)\n   end\n\\end{verbatim}\n\\end{hol}\nThis function works by handing a list of clauses to the inner\n\\ml{recurse} function.  This strips each clause apart in turn.  If a\nclause has only one disjunct it is a unit-clause and the variable can\nbe returned directly.  
Otherwise, the variables in the clause are\ncounted and added to the accumulating map by \ml{count\_vars}, and the\nrecursion can continue.\n\nThe \ml{count\_vars} function has the following implementation:\n\begin{hol}\n\begin{verbatim}\n   fun count_vars ds acc =\n     case ds of\n       [] => acc\n     | lit::lits => let\n         val v = dest_neg lit handle HOL_ERR _ => lit\n       in\n         case Binarymap.peek (acc, v) of\n           NONE => count_vars lits (Binarymap.insert(acc,v,1))\n         | SOME n => count_vars lits (Binarymap.insert(acc,v,n + 1))\n       end\n\end{verbatim}\n\end{hol}\n\nThe use of a binary tree to store variable data makes it efficient to\nupdate the data as it is being collected.  Extracting the variable\nwith the largest count is then a linear scan of the tree, which we can\ndo with the \ml{foldl} function:\n\begin{hol}\n\begin{verbatim}\n   fun getBiggest acc =\n     #1 (Binarymap.foldl(fn (v,cnt,a as (bestv,bestcnt)) =>\n                            if cnt > bestcnt then (v,cnt) else a)\n                        (boolSyntax.T, 0)\n                        acc)\n\end{verbatim}\n\end{hol}\n\n\subsection{Performance}\n\label{sec:dpll-performance}\n\nOnce inputs get even a little beyond the clearly trivial, the function\nwe have written (at the top-level, \ml{DPLL\_UNIV}) performs considerably\nbetter than the truth table implementation.  For example, the\ngeneralisation of the following term, with 29 variables, takes\n\ml{DPLL\_UNIV} two and a half minutes to demonstrate as a tautology:\n\begin{hol}\n\begin{verbatim}\n   (s0_0 = (x_0 = ~y_0)) /\ (c0_1 = x_0 /\ y_0) /\\n   (s0_1 = ((x_1 = ~y_1) = ~c0_1)) /\\n   (c0_2 = x_1 /\ y_1 \/ (x_1 \/ y_1) /\ c0_1) /\\n   (s0_2 = ((x_2 = ~y_2) = ~c0_2)) /\\n   (c0_3 = x_2 /\ y_2 \/ (x_2 \/ y_2) /\ c0_2) /\\n   (s1_0 = ~(x_0 = ~y_0)) /\ (c1_1 = x_0 /\ y_0 \/ x_0 \/ y_0) /\\n   (s1_1 = ((x_1 = ~y_1) = ~c1_1)) /\\n   (c1_2 = x_1 /\ y_1 \/ (x_1 \/ y_1) /\ c1_1) /\\n   (s1_2 = ((x_2 = ~y_2) = ~c1_2)) /\\n   (c1_3 = x_2 /\ y_2 \/ (x_2 \/ y_2) /\ c1_2) /\\n   (c_3 = ~c_0 /\ c0_3 \/ c_0 /\ c1_3) /\\n   (s_0 = ~c_0 /\ s0_0 \/ c_0 /\ s1_0) /\\n   (s_1 = ~c_0 /\ s0_1 \/ c_0 /\ s1_1) /\\n   (s_2 = ~c_0 /\ s0_2 \/ c_0 /\ s1_2) /\ ~c_0 /\\n   (s2_0 = (x_0 = ~y_0)) /\ (c2_1 = x_0 /\ y_0) /\\n   (s2_1 = ((x_1 = ~y_1) = ~c2_1)) /\\n   (c2_2 = x_1 /\ y_1 \/ (x_1 \/ y_1) /\ c2_1) /\\n   (s2_2 = ((x_2 = ~y_2) = ~c2_2)) /\\n   (c2_3 = x_2 /\ y_2 \/ (x_2 \/ y_2) /\ c2_2) ==>\n   (c_3 = c2_3) /\ (s_0 = s2_0) /\ (s_1 = s2_1) /\ (s_2 = s2_2)\n\end{verbatim}\n\end{hol}\n(If you want real speed, the built-in function \ml{tautLib.TAUT\_PROVE} does the above\nin less than a second, by using an external tool to generate the proof\nof unsatisfiability, and then translating that proof back into HOL.)\n\n\section{Extending our Procedure's Applicability}\n\label{sec:dpll-applicability-extension}\n\nThe function \ml{DPLL\_UNIV} requires its input to be universally\nquantified, with all free variables bound, and for each literal to be\na variable or the negation of a variable.  This makes \ml{DPLL\_UNIV}\na little unfriendly when it comes to using it as part of the proof of\na goal.  In this section, we will write one further ``wrapper''\nlayer to wrap around \ml{DPLL\_UNIV}, producing a tool that can be\napplied to many more goals.\n\n\paragraph{Relaxing the Quantification Requirement}  The first step is\nto allow formulas that are not closed.  
In order to hand on a formula\nthat \\emph{is} closed to \\ml{DPLL\\_UNIV}, we can simply generalise\nover the formula's free variables.   If \\ml{DPLL\\_UNIV} then says that\nthe new, ground formula is true, then so too will be the original.  On\nthe other hand, if \\ml{DPLL\\_UNIV} says that the ground formula is\nfalse, then we can't conclude anything further and will have to raise\nan exception.\n\nCode implementing this is shown below:\n\\begin{hol}\n\\begin{verbatim}\n   fun nonuniv_wrap t = let\n     val fvs = free_vars t\n     val gen_t = list_mk_forall(fvs, t)\n     val gen_t_eq = DPLL_UNIV gen_t\n   in\n     if rhs (concl gen_t_eq) = boolSyntax.T then let\n         val gen_th = EQT_ELIM gen_t_eq\n       in\n         EQT_INTRO (SPECL fvs gen_th)\n       end\n     else\n       raise mk_HOL_ERR \"dpll\" \"nonuniv_wrap\" \"No conclusion\"\n   end\n\\end{verbatim}\n\\end{hol}\n\n\\paragraph{Allowing Non-Literal Leaves}\nWe can do better than \\ml{nonuniv\\_wrap}: rather than quantifying over\njust the free variables (which we have conveniently assumed will only\nbe boolean), we can turn any leaf part of the term that is not a\nvariable or a negated variable into a fresh variable.  We first\nextract those boolean-valued leaves that are not the constants true or\nfalse.\n\\begin{hol}\n\\begin{verbatim}\n   fun var_leaves acc t = let\n     val (l,r) = dest_conj t handle HOL_ERR _ =>\n                 dest_disj t handle HOL_ERR _ =>\n                 dest_imp t handle HOL_ERR _ =>\n                 dest_bool_eq t\n   in\n     var_leaves (var_leaves acc l) r\n   end handle HOL_ERR _ =>\n     if type_of t <> bool then\n       raise mk_HOL_ERR \"dpll\" \"var_leaves\" \"Term not boolean\"\n     else if t = boolSyntax.T then acc\n     else if t = boolSyntax.F then acc\n     else HOLset.add(acc, t)\n\\end{verbatim}\n\\end{hol}\nNote that we haven't explicitly attempted to pull apart boolean\nnegations (which one might do with \\ml{dest\\_neg}).  This is because\n\\ml{dest\\_imp} also destructs terms \\holtxt{\\~{}p}, returning\n\\holtxt{p} and \\holtxt{F} as the antecedent and conclusion.  We have\nalso used a function \\ml{dest\\_bool\\_eq} designed to pull apart only\nthose equalities which are over boolean values.  Its definition is\n\\begin{hol}\n\\begin{verbatim}\n   fun dest_bool_eq t = let\n     val (l,r) = dest_eq t\n     val _ = type_of l = bool orelse\n             raise mk_HOL_ERR \"dpll\" \"dest_bool_eq\" \"Eq not on bools\"\n   in\n     (l,r)\n   end\n\\end{verbatim}\n\\end{hol}\n\nNow we can finally write our final \\ml{DPLL\\_TAUT} function:\n\\begin{hol}\n\\begin{verbatim}\n   fun DPLL_TAUT tm =\n     let val (univs,tm') = strip_forall tm\n         val insts = HOLset.listItems (var_leaves empty_tmset tm')\n         val vars = map (fn t => genvar bool) insts\n         val theta = map2 (curry (op |->)) insts vars\n         val tm'' = list_mk_forall (vars,subst theta tm')\n     in\n         EQT_INTRO (GENL univs\n                      (SPECL insts (EQT_ELIM (DPLL_UNIV tm''))))\n     end\n\\end{verbatim}\n\\end{hol}\nNote how this code first pulls off all external universal\nquantifications (with \\ml{strip\\_forall}), and then re-generalises\n(with \\ml{list\\_mk\\_forall}).  The calls to \\ml{GENL} and \\ml{SPECL}\nundo these manipulations, but at the level of theorems.  This produces\na theorem equating the original input to true.  
(If the input term is\nnot an instance of a valid propositional formula, the call to\n\\ml{EQT\\_ELIM} will raise an exception.)\n\n\\section*{Exercises}\n\n\\begin{enumerate}\n\\item Extend the procedure so that it handles conditional expressions\n  (both arms of the terms must be of boolean type).\n\\end{enumerate}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"tutorial\"\n%%% End:\n", "meta": {"hexsha": "61445e687a9ac4fed1d06425b286a3a7d61c656c", "size": 31070, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Manual/Tutorial/proof-tools.tex", "max_stars_repo_name": "LiLiming/HOL", "max_stars_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-27T07:51:47.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-27T07:51:47.000Z", "max_issues_repo_path": "Manual/Tutorial/proof-tools.tex", "max_issues_repo_name": "LiLiming/HOL", "max_issues_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Manual/Tutorial/proof-tools.tex", "max_forks_repo_name": "LiLiming/HOL", "max_forks_repo_head_hexsha": "8de43bf3176993a37fb2f917fe978964c9d0591c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8375, "max_line_length": 87, "alphanum_fraction": 0.7003218539, "num_tokens": 9211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8418256393148982, "lm_q1q2_score": 0.588650289499511}}
{"text": "% Define number set symbols.\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\n\\chapter{Functions and Graphs}\n\\section{Composite Functions}\nA function $f(x) = 3x+9$ will take in any value for $x$, and substitute it into the expression $3x+9$. If another expression was passed into $f(x)$, such as $x-2$, then the result would be $3(x-2)+9$. A composite function is one where two functions are combined, similar to above.\n\nSuppose $f(x) = 2x$ and $g(x) = x+1$, then composite functions $f(g(x))$ and $g(f(x))$ can be found as follows:\n\\begin{align*}\n\tf(g(x)) &= 2(x+1) & g(f(x)) &= (2x)+1\\\\\n\t&=2x+2 & &=2x+1\n\\end{align*}\n\nA composite function can be tiring to write out all the time, so it could be written as $h(x)=f(g(x))$.\n\n\\subsection{Example}\n$f(x)=x^2+1$ and $g(x)=\\frac{1}{x} \\left(x\\neq0\\right)$. Find $h(x)=f(g(x))$ and $h(x)=g(f(x))$.\n\\begin{align*}\n\th(x)&=f\\left(\\frac{1}{x}\\right) & k(x)&=g\\left(x^2+1\\right)\\\\\n\t&=\\left(\\frac{1}{x}\\right)^2-2 & &=\\frac{1}{x^2+1}\\\\\n\t&=\\frac{1}{x^2}-1\n\\end{align*}\n\n\n\\section{Domains and Set Notation}\nOften, a function can't take all numbers as inputs. For example, a function of the area of a rectangle might be written as $f(x)=x^2+2x-8$. Since it's a function of real-life area, the output cannot be negative. After some thinking, it can be concluded that $x$ must be 2 or larger. Therefore the function's domain is any number larger or equal to 2.\n\nAdditionally, a decimal number could also be passed in to the function, which may be undesireable. You can define the domain to only include natural numbers (positive whole numbers).\n\nApplying these two factors, the domain of the function is $\\{x \\in \\N \\mid x \\geq 2\\}$\n\nSet (or rather set-builder) notation is a way of describing a set of numbers. For domains, it details what set of numbers can be an acceptable input of the function. The kinds of set number symbols that exist commonly are:\n\\begin{itemize}\n\t\\item $\\N$, natural numbers. Non-negative integers (starting with 0, counting up in steps of 1).\n\t\\item $\\Z$, integers. Any number that can be written without a fractional component (``whole\" numbers).\n\t\\item $\\Q$, rational numbers. A number which can be written as a fraction with an integer numerator and a non-zero natural denominator\\footnote{Negative denominators can exist, but are avoided, as they can be expressed as a negative numerator instead.}.\n\t\\item $\\R$, real numbers. Any number that can be represented on a number line, including irrational numbers such as $\\pi$.\n\\end{itemize}\n\n\\subsection{Example}\nFind a suitable domain for the function $\\frac{1}{x^2-2}$.\n\n\\begin{equation*}\n\t\\{x \\in \\R \\mid x \\neq \\sqrt{2}\\}\n\\end{equation*}\n\n\n\\section{Inverse Functions}\nAn inverse function ($f^{-1}(x)$) will do the opposite as what its normal counterpart ($f(x)$) does. 
For example,\n\\begin{align*}\n\tf(x) &= x^2\\\\\n\tf^{-1}(x)&=\\sqrt{x}\n\\end{align*}\n\nFunctions are said to be inverse if $f\\left(f^{-1}(x)\\right) = f^{-1}\\left(f(x)\\right)=x$.\n\nA trick for finding the inverse of a function is to do the following:\n\\begin{enumerate}\n\t\\item replace $f(x)$ with $y$,\n\t\\item change the subject of the formula for $x$,\n\t\\item change $y$ to $x$, and $x$ to $f^{-1}(x)$.\n\\end{enumerate}\n\n\\subsection{Examples}\n\\begin{enumerate}\n\t\\item\n\t\\begin{align*}\n\t\tf(x)&=2x\\\\\n\t\ty&=2x\\\\\n\t\tx&=\\frac{y}{2}\\\\\n\t\tf^{-1}(x)&=\\frac{x}{2}\n\t\\end{align*}\n\t\\item\n\t\\begin{align*}\n\t\tf(x)&=2x+1\\\\\n\t\ty&=2x+1\\\\\n\t\t2x&=y-1\\\\\n\t\tx&=\\frac{1}{2}\\left(y-1\\right)\\footnotemark\\\\\n\t\tf^{-1}(x)&=\\frac{1}{2}\\left(x-1\\right)\n\t\\end{align*}\n\t\\footnotetext{Note that writing $x=\\frac{y-1}{2}$ is correct too. That means that $f^{-1}(x)=\\frac{x-1}{2}$ is a perfectly acceptable answer.}\n\\end{enumerate}\n\n\n\\section{Graph of $y=f(x)+a$ and \\dots$-a$}\nGiven a graph $y=f(x)$, if asked to sketch $y=f(x)+a$, the graph would be moved $a$ places up; if asked to sketch $y=f(x)-a$, the graph would be moved $a$ places down.\n\nIn other words, the graph is moved $a$ places along the $y$-axis. A value of $-3$ would move the graph $-3$ places up (i.e. $3$ places down). Practically, add $a$ to all $y$-coordinates.\n\n\\subsection{Example}\nIn figure \\ref{fig:verticalShiftExample}, the original graph $y=f(x)$ is drawn, along with $y=f(x)+3$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{subfigure}{0.475\\linewidth}\n\t\t\\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\\pgfuseplotmark{#1}}}]\n\t\t\t\\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\t\\addplot [] {x^3-3*x-1};\n\t\t\t\t\\coordinate\n\t\t\t\t[\n\t\t\t\t\tlabel=above left:{$(-1,1)$},\n\t\t\t\t\tblack,\n\t\t\t\t\tcmark=*,\n\t\t\t\t] (a) at (-1,1);\n\t\t\t\t\\coordinate\n\t\t\t\t[\n\t\t\t\t\tlabel=below right:{$(1,-3)$},\n\t\t\t\t\tblack,\n\t\t\t\t\tcmark=*,\n\t\t\t\t] (a) at (1,-3);\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.475\\linewidth}\n\t\t\\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\\pgfuseplotmark{#1}}}]\n\t\t\t\\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\t\\addplot [mark=none] {x^3-3*x-1};\n\t\t\t\t\\addplot [color=blue,mark=none] {x^3-3*x+2};\n\t\t\t\t\\coordinate\n\t\t\t\t[\n\t\t\t\t\tlabel=left:{$(-1,4)$},\n\t\t\t\t\tblack,\n\t\t\t\t\tcmark=*,\n\t\t\t\t] (a) at (-1,4);\n\t\t\t\t\\coordinate\n\t\t\t\t[\n\t\t\t\t\tlabel=above:{$(1,0)$},\n\t\t\t\t\tblack,\n\t\t\t\t\tcmark=*,\n\t\t\t\t] (a) at (1,0);\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{$y=f(x)+3$ in blue, along with the original graph.}\n\t\\end{subfigure}\n\t\\caption{An example of a typical question about 
$y=f(x)+a$ with the solution.}\n\t\label{fig:verticalShiftExample}\n\end{figure}\n\nTo find out the transformed coordinates, add $a$ to the $y$-coordinates:\n\begin{align*}\nf(x)&=(-1,1) & f(x)&=(1,-3)\\\nf(x)+3&=(-1,4) & f(x)+3&=(1,0)\n\end{align*}\n\n\n\section{Graph of $y=f(x+a)$ and \dots$-a)$}\nWhen the $a$ is inside the bracket, it moves the graph from side to side. A graph of $y=f(x+a)$ will move the graph $a$ places \textit{left}, and $y=f(x-a)$ would move $a$ places \textit{right}.\n\nIn other words, the graph is moved $-a$ places along the $x$-axis. A value of $-3$ would move the graph 3 places to the right. Practically, add $-a$ to all $x$-coordinates. The thing that may throw you off is that the shift is by $-a$, not $a$.\n\n\subsection{Example}\nFigure \ref{fig:horizontalShiftExample} shows the graphs of $y=f(x)$ and $y=f(x+5)$.\n\n\begin{figure}[h!]\n\t\centering\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\addplot [] {(x-3)^3-3*x+8};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above:{$(2,1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (2,1);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=below:{$(4,-3)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (4,-3);\n\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\end{subfigure}\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\addplot [mark=none] {(x-3)^3-3*x+8};\n\t\t\t\addplot [color=blue,mark=none] {(x+2)^3-3*(x+5)+8};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above:{$(-3,1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-3,1);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=below left:{$(-1,-3)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,-3);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{$y=f(x+5)$ in blue, along with the original graph.}\n\t\end{subfigure}\n\t\caption{An example of a typical question about $y=f(x+a)$ with the solution.}\n\t\label{fig:horizontalShiftExample}\n\end{figure}\n\nTo find out the transformed coordinates, add $-a$ to the $x$-coordinates:\n\begin{align*}\nf(x)&=(2,1) & f(x)&=(4,-3)\\\nf(x+5)&=(-3,1) & f(x+5)&=(-1,-3)\n\end{align*}\n\n\n\section{Graph of $y=-f(x)$}\n$y=-f(x)$ reflects the graph on the $x$-axis; all $y$-coordinates will change their sign.\n\nAn easy way to remember this is to think of the equation $y=x^2$ that was learnt in National 5. If the equation were $y=-x^2$, then the parabola would be ``flipped''. 
This is the same thing that can be done to any function $y=f(x)$: $y=-f(x)$ will be ``flipped''.\n\n\subsection{Example}\nAn example of reflection on the $x$-axis with coordinates $(-1,3)$ and $(1,-1)$ is shown in figure \ref{fig:xAxisReflectionExample}.\n\n\begin{figure}[h!]\n\t\centering\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-4,\n\t\t\t\txmax=4,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-4,\n\t\t\t\tymax=4,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\addplot [] {x^3-3*x+1};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above left:{$(-1,3)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,3);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=below right:{$(1,-1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,-1);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\end{subfigure}\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-4,\n\t\t\t\txmax=4,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-4,\n\t\t\t\tymax=4,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\addplot [mark=none] {x^3-3*x+1};\n\t\t\t\addplot [color=blue,mark=none] {-x^3+3*x-1};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=left:{$(-1,-3)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,-3);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above:{$(1,1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,1);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{$y=-f(x)$ in blue, along with the original graph.}\n\t\end{subfigure}\n\t\caption{An example of a typical question about $y=-f(x)$ with the solution.}\n\t\label{fig:xAxisReflectionExample}\n\end{figure}\n\nTo find out the transformed coordinates, change the $y$-coordinates' sign:\n\begin{align*}\n\tf(x)&=(-1,3) & f(x)&=(1,-1)\\\n\t-f(x)&=(-1,-3) & -f(x)&=(1,1)\n\end{align*}\n\n\n\section{Graph of $y=f(-x)$}\n$y=f(-x)$ is reflection on the $y$-axis; all $x$-coordinates will change their signs.\n\n\subsection{Example}\nFigure \ref{fig:yAxisReflectionExample} shows an example of reflection on the $y$-axis with the coordinates $(1,2)$ and $(3,-2)$.\n\n\begin{figure}[h!]\n\t\centering\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-4,\n\t\t\t\txmax=4,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-4,\n\t\t\t\tymax=4,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\addplot [] {(x-2)^3-3*x+6};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above:{$(1,2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at 
(1,2);\n\t\t\t\\coordinate\n\t\t\t[\n\t\t\t\tlabel=below:{$(3,-2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (3,-2);\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.475\\linewidth}\n\t\t\\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\\pgfuseplotmark{#1}}}]\n\t\t\t\\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-4,\n\t\t\t\txmax=4,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-4,\n\t\t\t\tymax=4,\n\t\t\t\tytick={-4,-2,...,3},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8\n\t\t\t]\n\t\t\t\\addplot [mark=none] {(x-2)^3-3*x+6};\n\t\t\t\\addplot [color=blue,mark=none] {(-x-2)^3+3*x+6};\n\t\t\t\\coordinate\n\t\t\t[\n\t\t\t\tlabel=above left:{$(-1,2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,2);\n\t\t\t\\coordinate\n\t\t\t[\n\t\t\t\tlabel=below:{$(-3,-2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-3,-2);\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{$y=f(-x)$ in blue, along with the original graph.}\n\t\\end{subfigure}\n\t\\caption{An example of a typical question about $y=f(-x)$ with the solution.}\n\t\\label{fig:yAxisReflectionExample}\n\\end{figure}\n\nTo find out the transformed coordinates, change the $x$-coordinates' sign:\n\\begin{align*}\n\tf(x)&=(1,2) & f(x)&=(3,-2)\\\\\n\tf(-x)&=(-1,2) & f(-x)&=(-3,-2)\n\\end{align*}\n\n\n\\section{Graph of $y=kf(x)$}\nA graph can be stretched or compressed on the $y$-axis by multiplying the graph by a constant, written as $y=kf(x)$. $k>1$ will stretch it, while $k<1$ will compress it.\n\nTo find out the new coordinates, multiply the $y$-coordinate by $k$.\n\n\\subsection{Example}\nFigure \\ref{fig:verticalCompressionExample} is the graph of $y=f(x)$, as well as $y=2f(x)$ and $y=\\frac{1}{2}f(x)$.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{subfigure}{0.475\\linewidth}\n\t\t\\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\\pgfuseplotmark{#1}}}]\n\t\t\t\\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,4},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8,\n\t\t\t]\n\t\t\t\\addplot [] {x^3-3*x};\n\t\t\t\\coordinate\n\t\t\t[\n\t\t\t\tlabel=above left:{$(-1,2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,2);\n\t\t\t\\coordinate\n\t\t\t[\n\t\t\t\tlabel=below right:{$(1,-2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,-2);\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.475\\linewidth}\n\t\t\\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\\pgfuseplotmark{#1}}}]\n\t\t\t\\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,5},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,4},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8,\n\t\t\t]\n\t\t\t\\addplot [color=blue,mark=none] {2*(x^3)-6*x};\n\t\t\t\\addplot [color=red,mark=none] 
{0.5*(x^3)-1.5*x};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=left:{$(-1,4)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,4);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=right:{$(1,-4)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,-4);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above left:{$(-1,1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,1);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=below right:{$(1,-1)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,-1);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{$y=2f(x)$ in blue and $y=\frac{1}{2}f(x)$ in red; original graph omitted for clarity.}\n\t\end{subfigure}\n\t\caption{An example of a typical question about $y=kf(x)$ with the solution.}\n\t\label{fig:verticalCompressionExample}\n\end{figure}\n\nTo find out the transformed coordinates, multiply the $y$-coordinates by $k$. For $k=2$:\n\begin{align*}\nf(x)&=(-1,2) & f(x)&=(1,-2)\\\n2f(x)&=(-1,4) & 2f(x)&=(1,-4)\n\end{align*}\nAnd for $k=\frac{1}{2}$:\n\begin{align*}\nf(x)&=(-1,2) & f(x)&=(1,-2)\\\n\frac{1}{2}f(x)&=(-1,1) & \frac{1}{2}f(x)&=(1,-1)\n\end{align*}\n\n\n\section{Graph of $y=f(kx)$}\nThe last transformation is stretching and compressing on the $x$-axis. Again, the opposite is true here: $k>1$ will \textit{compress} the graph, $k<1$ will stretch the graph.\n\nTo get the new coordinates, divide all $x$-coordinates by $k$.\n\n\subsection{Examples}\nFigure \ref{fig:horizontalCompressionExample} shows $y=f(x)$, and the graphs of $y=f(2x)$ and $y=f\left(\frac{1}{2}x\right)$.\n\n\begin{figure}[h!]\n\t\centering\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,4},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,4},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8,\n\t\t\t]\n\t\t\t\addplot [] {x^3-3*x};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=above left:{$(-1,2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-1,2);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=below right:{$(1,-2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (1,-2);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{Some function $y=f(x)$ with a couple of given coordinates.}\n\t\end{subfigure}\n\t\begin{subfigure}{0.475\linewidth}\n\t\t\begin{tikzpicture}[cmark/.style={label={[anchor=center]:\pgfuseplotmark{#1}}}]\n\t\t\t\begin{axis}\n\t\t\t[\n\t\t\t\trestrict y to domain=-10:10,\n\t\t\t\trestrict x to domain=-5:5,\n\t\t\t\txlabel=$x$,\n\t\t\t\tylabel=$y$,\n\t\t\t\txmin=-5,\n\t\t\t\txmax=5,\n\t\t\t\txtick={-4,-2,...,5},\n\t\t\t\tymin=-5,\n\t\t\t\tymax=5,\n\t\t\t\tytick={-4,-2,...,4},\n\t\t\t\taxis lines=center,\n\t\t\t\taxis equal,\n\t\t\t\tsmooth,\n\t\t\t\tscale=0.8,\n\t\t\t]\n\t\t\t\addplot [color=blue,mark=none] {(x/2)^3-1.5*x};\n\t\t\t\addplot [color=red,mark=none] {(2*x)^3-6*x};\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=left:{$(-2,2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (-2,2);\n\t\t\t\coordinate\n\t\t\t[\n\t\t\t\tlabel=right:{$(2,-2)$},\n\t\t\t\tblack,\n\t\t\t\tcmark=*,\n\t\t\t] (a) at (2,-2);\n\t\t\t\end{axis}\n\t\t\end{tikzpicture}\n\t\t\caption{$y=f(\frac{1}{2}x)$ in blue and $y=f(2x)$ in red, original graph and coordinates of 
$y=f(2x)$ omitted for clarity.}\n\t\end{subfigure}\n\t\caption{An example of a typical question about $y=f(kx)$ with the solution.}\n\t\label{fig:horizontalCompressionExample}\n\end{figure}\n\nTo find out the transformed coordinates, divide the $x$-coordinates by $k$. For $k=\frac{1}{2}$:\n\begin{align*}\nf(x)&=(-1,2) & f(x)&=(1,-2)\\\nf\left(\frac{1}{2}x\right)&=(-2,2) & f\left(\frac{1}{2}x\right)&=(2,-2)\n\end{align*}\nAnd for $k=2$:\n\begin{align*}\nf(x)&=(-1,2) & f(x)&=(1,-2)\\\nf(2x)&=\left(-\frac{1}{2},2\right) & f(2x)&=\left(\frac{1}{2},-2\right)\n\end{align*}\n", "meta": {"hexsha": "f57968317fb28b9cd754cf21ea0f62bbc18b5a4b", "size": 18445, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TeX_files/FunctionsAndGraphs.tex", "max_stars_repo_name": "TheSheepGuy/Open_Higher_Maths", "max_stars_repo_head_hexsha": "2667a8da00b2cab502a92fd0ec9c9db9c86bea5c", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-12-09T14:56:06.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-09T15:08:38.000Z", "max_issues_repo_path": "TeX_files/FunctionsAndGraphs.tex", "max_issues_repo_name": "TheSheepGuy/Definitive_Higher_Maths", "max_issues_repo_head_hexsha": "2667a8da00b2cab502a92fd0ec9c9db9c86bea5c", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-09T15:22:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-02-11T20:19:01.000Z", "max_forks_repo_path": "TeX_files/FunctionsAndGraphs.tex", "max_forks_repo_name": "TheSheepGuy/Definitive_Higher_Maths", "max_forks_repo_head_hexsha": "2667a8da00b2cab502a92fd0ec9c9db9c86bea5c", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-09T20:16:01.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-09T20:16:01.000Z", "avg_line_length": 27.4888226528, "max_line_length": 350, "alphanum_fraction": 0.5960965031, "num_tokens": 7009, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056167854461, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.5886068568543328}}
{"text": "\\section{Methods}\n\\label{mmf-sec:methods}\n\nWith the change from a variant focused approach to a read based method, this new method will call ``mismatches`` of a read from the reference genome, rather than a variant. This has the advantage of not requiring a matched normal and its use for virtually any sequencing data source, be it TAS, WES, WGS or even nanopore sequencing\\footnote{however nanopore is not really usefull due to the short fragments naturally occurring in cfDNA}. However it also means, that the error suppression method, which are usually used by variant calling methods like read position ranks sum (RPRS) or strand bias are not usable, which leads to a higher degree of background noise. In the following sections I will describe how we filter and curate the found mismatches to retain as much signal as possible.\n\n\\subsection{Mathematical concept}\n\\label{mmf-sec:concept}\nWith the change from site based method, the concept of a mismatch from the reference needs to be introduced. A mismatch in the following is any position in an aligned read, which does not show the same base as the reference at the aligned position. The mismatch will inherit all the metrics of the read such as mapping quality, base quality and read position. \n\nThis then means, there are three sources of mismatches in a read, which are somatic variants, germline variants and sequencing errors (\\autoref{mmf-eq:1}).\n\\begin{equation}\nn(mismatches) = n(somatic~var.) + n(germline~var.)  + n(seq.~ error)\n\\label{mmf-eq:1}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:1}]{MisMatchFinder: number of mismatches}\n\nWith the sequencing error being a function of the sequencing machine and chemistry, the error rate should be a stable almost constant, when using the same sequencing machine and chemistry \\cite{Schirmer2016,Stoler2021}. We can therefore reduce \\autoref{mmf-eq:1} to\n\\begin{equation}\nn(mismatches) = n(som.~var.) + n(germ.~var.)  + c_{seq.~err.}\n\\label{mmf-eq:2}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:2}]{MisMatchFinder: sequencing error}\n\nSecondly, the number of germline variants is approximately the same between two people \\cite{Auton2015}, which again simplifies \\autoref{mmf-eq:2} by replacing $n(germline~var.)$.\n\n\\begin{equation}\nn(mismatches) = n(som.~var.) + c_{germ.~var.} + c_{seq. err.}\n\\label{mmf-eq:3}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:3}]{MisMatchFinder: germline variants}\n\nOf course, \\autoref{mmf-eq:3} is a crude approximation and instead the constants are not real constants, but instead are better approximated with Gaussian distributions which leads to the following equation\n\n\\begin{equation}\nn(mismatches) = n(som.~var.) 
+ \\mathcal{N}(\\mu_{germ.~var.}, \\sigma_{germ.~var.}^{2}) + \\mathcal{N}(\\mu_{seq.~err.}, \\sigma_{seq.~err.}^{2})\n\\label{mmf-eq:4}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:4}]{MisMatchFinder: number of mismatches with distributions}\n\nHowever, both \\autoref{mmf-eq:3} and \\ref{mmf-eq:4} allow the conclusion that, with small enough values for either $c_{germ.~var}/c_{seq.~err.}$ or $\\mu_{germ.~var}/\\mu_{seq.~err.}$ and $\\sigma_{germ.~var}/\\sigma_{seq.~err.}$ respectively, the number of mismatches on a read correlates linearly with the number of somatic variants it contains:\n\n\\begin{equation}\nn(mismatches) \\sim n(som.~var.)\n\\label{mmf-eq:final}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:final}]{MisMatchFinder: number of mismatches correlation with somatic variants}\n\nWith the help of \\autoref{mmf-eq:final} we can approximate tumour mutational burden and signatures from individual reads. This method is therefore independent of read depth and requires no matched normal sample for somatic variant calling.\n\n\\subsection{Data preprocessing}\nAs this new method has sophisticated internal measures to filter and process sequencing data, the preprocessing steps are minimal: the reads only need to be aligned to a reference genome (\\autoref{intro-sec:mapping}). For optimal mapping and additional noise reduction, paired end sequencing of at least 75 bp is suggested. This ensures a few bases of overlap for the standard ctDNA fragment length of less than 150 bp (\\autoref{intro-sec:ctDNA}). A further optional but suggested step is duplicate marking of the BAM file.\n\n\\subsection{Mismatch detection}\nConventional variant calling approaches find regions of interest through pileups (position wise) and then realign reads in the surrounding area to accurately estimate the most likely event that led to the observed haplotype (\\autoref{intro-sec:variantcalling}). In contrast, this new method takes every individual read as a separate entity, to fully capture the heterogeneity of all cells and their genetic background. A sequencing read's ``MD''- and ``CIGAR''-tags from the preprocessed BAM file are used to reconstruct the sequence of the read and the positions where the read shows a different base than the reference. These potential mismatch sites are then filtered in multiple steps to reduce the impact of both germline variants and sequencing errors.\n\n\\subsection{Filtering steps}\nApart from the filters most variant callers employ, like mapping quality (MQ) and base quality (BQ), which are used to ignore reads and positions respectively, the method also internally filters out common sequencing errors next to homopolymer regions \\cite{Heydari2019}. While these cutoffs were preselected by me for optimal performance on our data (MQ=20, BQ=55, homopolyLength=5), the program allows the user to adjust them to their liking.\nThis is also possible for the region of interest (ROI) bed-file, which was used to restrict the analysis to only highly mappable regions of the genome (\\autoref{ch-mmfAppendix:bedfiles}), as well as for multiple other parameters unique to our method, like minimum average base quality, minimum and maximum number of mismatches per read and/or fragment, and the minimum and maximum length of a fragment \\cite{Hudecova2021}. If any of these values is not within the specified range, the read is discarded from the analysis. 
This is also the default for reads which have a secondary alignment position or are considered duplicates of any kind.\n\n\\subsection[Consensus reads]{Consensus reads - what happens when the sequencer isn't sure}\n\\label{mmf-sec:consensus}\n\nWhen paired end sequencing of ctDNA is analysed, the fraction of fragments with overlapping reads is higher than in ``normal'' tissue based sequencing, due to the shorter fragment length of ctDNA (\\autoref{intro-sec:ctDNA}). This allows a fragment internal consensus generation, by adjusting for differences between the forward and reverse read. In many variant calling methods, these differences are used by measuring the ``strand bias'' \\cite{Guo2012, Saunders2012, GATKTeam2019} or ``strand balance probability'' \\cite{Garrison2012}, by looking at a specific locus and evaluating the discrepancy of all forward and all reverse reads. As our method evaluates each read/fragment independently, this bias cannot be calculated; however, in the overlapping region of both reads a consensus can be generated. If both reads agree on the mismatch, the BQ of both reads will be added together to emphasise the increased evidence for this variant. However, if they disagree, the base with the higher quality will be used and its quality will be decreased by half of the BQ of the lower quality base (\\autoref{fig:mmf-consensus}~bottom). To increase the stringency of the method, the user can also enable the \\lq\\emph{--strictOverlap}\\rq~ option, which will only consider a mismatch if both reads agree with each other, and decrease the BQ to zero otherwise. As we are only interested in mismatches from the reference, all positions where both reads agree with the reference are irrelevant for the analysis and will be discarded (\\autoref{fig:mmf-consensus}~top). For the most stringent analysis, MisMatchFinder can additionally be configured to only use mismatches in the overlap part of a fragment (\\lq\\emph{--onlyOverlap}\\rq), which significantly reduces the number of sequencing errors which end up in the final analysis (\\autoref{mmf-sec:cleanSim}).
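\n\nTo make the consensus rule concrete, the following is a minimal sketch of the per-position logic described above (illustrative only, not the actual implementation; names and the integer quality arithmetic are ours):\n\n\\begin{verbatim}\ndef consensus(base1, bq1, base2, bq2):\n    # overlap consensus for one aligned position of a read pair\n    if base1 == base2:\n        # agreement: combine the evidence by adding base qualities\n        return base1, bq1 + bq2\n    # disagreement: keep the higher-quality base, penalised by\n    # half of the BQ of the lower-quality base\n    if bq1 >= bq2:\n        return base1, bq1 - bq2 // 2\n    return base2, bq2 - bq1 // 2\n\\end{verbatim}\n\nWith \\lq\\emph{--strictOverlap}\\rq~ enabled, the disagreement branch would instead return a BQ of zero.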
\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=.99\\linewidth]{Figures/ConsensusMethodMisMatchFinder.pdf}\n\\caption[Schematic of consensus computation method for overlapping reads]{Schematic of consensus computation method for overlapping reads in MisMatchFinder; Read 1 and Read 2 depict two overlapping paired end reads aligned to the reference sequence; Positions in the overlap are numbered for later referral; Read positions agreeing with the reference are coloured black, positions differing from the reference but agreeing in both reads are coloured purple (position 3) and differences between reads are coloured in the respective read colours (blue and red, position 6); Calculation for the resulting base quality (BQ$_{cons}$ for each possibility is shown as formulas)}\\label{fig:mmf-consensus}\n\\end{figure}\n\n\\subsection[Germline filtering]{Germline filtering - exclusion of normal variation}\n\\label{mmf-sec:germline}\n\nTo further support the claim from \\autoref{mmf-sec:concept} that the germline contribution is a very small constant, we need to remove as many mismatches as possible which stem from germline variants. For this purpose, I built a zarr based \\cite{Miles2021} storage system from the gnomAD database (v.3.1) \\cite{Karczewski2020} using scikit-allel \\cite{Miles2021a}.\nAn in-depth explanation of the generation as well as a script for an end user can be found in \\autoref{ch-mmfAppendix:germlineFilter}.\n\nThis then allows very precise filtering of known germline variant sites from the analysis. The method allows the specification of an allele frequency above which a variant is filtered; as a baseline, however, it will filter all sites which were detected in any sample in gnomAD. This even includes sites with low quality variants, as these are signs of sequencing or mapping complications, which will most likely interfere with our method as well.\n\n\n\\subsection[Count normalisation]{Count normalisation - not everyone has the same chances}\n\\label{mmf-sec:countNorm}\nFinally, having filtered all ``noise'' mismatches from the dataset, we can aggregate the remaining mismatches into oligo-nucleotide counts. With this step also comes the classification of directly neighbouring mismatches as DBS, which are counted as separate entities. SBS and DBS can both be used to identify underlying biological mutational processes, but they have very different signatures associated with them \\cite{Alexandrov2020}. The counts formed this way are influenced by the background frequency of their reference oligo-nucleotides in the analysed genomic region. As the frequencies of di- and tri-nucleotides are not uniform in the genome, the chance for a mismatch found in an ``AAA'' reference context is almost seven times higher than for a mismatch in ``CGC'' (\\autoref{A:mmf:tab:tricounts}, \\autoref{A:mmf:tab:dicounts}). To reduce this bias towards high frequency oligo-nucleotides, I implemented a count normalisation step.\n\nFirst, the di- and tri-nucleotides in the analysed regions are counted from the supplied reference, excluding any black-listed and/or restricting to white-listed regions. These counts are then either used directly to weight the observed mismatch counts, which leads to a more uniform distribution of mismatches, or, by building the fraction of observed oligo-nucleotides over the total counts in the genome (\\autoref{A:mmf:tab:tricounts}, \\autoref{A:mmf:tab:dicounts}), the weighting achieves an approximation of how the counts would be distributed over the whole genome. These two options are available with \\lq\\emph{--normaliseCounts}\\rq~ for the approximation to the full genome; by additionally adding \\lq\\emph{--flatNormalisation}\\rq~ only the observed counts are used for normalisation.
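\n\nAs an illustration, a minimal sketch of the two weighting variants (one plausible reading of the options above; array names are illustrative and NumPy is assumed, this is not the actual implementation) could look like this:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef normalise(observed, region_counts, genome_counts, flat=False):\n    # observed:      mismatch counts per oligo-nucleotide context\n    # region_counts: context frequencies in the analysed regions\n    # genome_counts: context frequencies in the whole genome\n    if flat:\n        # '--flatNormalisation': weight only by observed regions\n        return observed / region_counts\n    # '--normaliseCounts': approximate the whole-genome distribution\n    return observed * (genome_counts / region_counts)\n\\end{verbatim}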
\n\n\\subsection[Signature deconvolution]{Signature deconvolution - find the original signal}\n\\label{mmf-sec:sigDeconv}\nThe deconvolution of the involved signatures from a known set of signatures is equivalent to finding the minimal distance between $m$, the observed number of mismatches in each oligo-nucleotide context (a vector of length 96), and $\\textbf{S}w$, where $\\textbf{S}$ is the matrix of oligo-nucleotide defined contributions for each signature, a matrix of $96 \\times k$ with $k$ being the number of known signatures. Lastly, $w$ is the vector of signature weights, which we want to estimate. \n\n\\begin{align}\n \\text{minimise:} & \\quad (m - \\textbf{S}w)^T(m - \\textbf{S}w)\n = m^Tm - w^T\\textbf{S}^Tm - m^T\\textbf{S}w + w^T\\textbf{S}^T\\textbf{S}w \\label{mmf-eq:optim1}\\\\\n \\text{with:} & \\quad \\sum_j w_j = 1 \\quad \\text{and} \\quad \\forall_j w_j \\geq 0 \\label{mmf-eq:requirements}\n\\end{align}\n\\myequation[\\ref{mmf-eq:optim1}]{MisMatchFinder: optimisation for signature weights}\n\\myequation[\\ref{mmf-eq:requirements}]{MisMatchFinder: optimisation function restrictions}\n\n\\autoref{mmf-eq:optim1} can then be written as \n\n\\begin{equation}\n\\text{minimise:} \\quad - m^T\\textbf{S}w + \\frac{1}{2}w^T\\textbf{S}^T\\textbf{S}w\n\\label{mmf-eq:optim2}\n\\end{equation}\n\\myequation[\\ref{mmf-eq:optim2}]{MisMatchFinder: quadratic programming formula}\n\nwith the same restrictions as shown in \\autoref{mmf-eq:requirements}. These equations and the idea to solve them with quadratic programming (QP) have been taken from \\textcite{Lynch2016}; the iterative linear models (ILM) solving approach was adapted from deconstructSigs \\cite{Rosenthal2016}. Both methods are implemented in Python, using the quadprog package \\cite{McGibbon2021} for QP and a translation of the R code of deconstructSigs.
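\n\nAs an illustration of the QP step, a minimal sketch (assuming NumPy arrays $m$ of shape (96,) and $\\textbf{S}$ of shape (96, k); the small ridge term and all names are illustrative, not the actual implementation) could look like this:\n\n\\begin{verbatim}\nimport numpy as np\nimport quadprog\n\ndef deconvolve(m, S):\n    # minimise 1/2 w^T (S^T S) w - (S^T m)^T w\n    # subject to sum(w) = 1 and w >= 0\n    k = S.shape[1]\n    G = S.T @ S + 1e-9 * np.eye(k)  # ridge keeps G positive definite\n    a = S.T @ m\n    # columns of C are the constraints C^T w >= b;\n    # the first column is treated as an equality (meq=1)\n    C = np.hstack([np.ones((k, 1)), np.eye(k)])\n    b = np.concatenate([[1.0], np.zeros(k)])\n    return quadprog.solve_qp(G, a, C, b, meq=1)[0]\n\\end{verbatim}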
\n\nMisMatchFinder allows the use of either QP or ILM, as they produce very similar results in many cases \\cite{Lynch2016}. The default method is nevertheless QP, even though ILM is the more interpretable and more parsimonious method: with the increased number of signatures in the latest work by \\textcite{Alexandrov2020}, ILM does not lead to the right signatures if the signal is not strong enough, whereas QP seems to be more stable (\\autoref{fig:mmf-ILMerror}).\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=.99\\linewidth]{Figures/lowInputSignalDeconv.pdf}\n\\caption[Distance of deconvolution methods from truth]{Distances of the estimated weights generated with ILM and QP from the true weight used as input; Truth is a synthetic count sample with (SBS1: 0.25; SBS3: 0.05; SBS5: 0.46; SBS7a: 0.1; SBS19: 0.03; SBS21: 0.01; SBS31: 0.08; SBS57: 0.02;)}\\label{fig:mmf-ILMerror}\n\\end{figure}\n \nThe combinatorial problem in ILM, already shown by \\textcite{Lynch2016}, seems to be especially strong with ``wide'' signatures like SBS3 (\\autoref{fig:sig7a}) and low signature contributions. That makes ILM less useful for our approach, as we expect low tumour purity and therefore weak somatic signature signals in cfDNA; even then, however, using ILM as the deconvolution method impacts the detection of SBS7a less than the detection of SBS3 (\\autoref{fig:mmf-qpVSilm})\\todo{make that image}.\n\nThe deconvolution method might be a spot for further optimisation, by creating a custom deconvolution system adjusted for ctDNA detection.\n\n\\todo[color=green,inline]{maybe move the simulation description here}", "meta": {"hexsha": "6615fa1db3812116bc1920f843969073ee6efe1a", "size": 15223, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/mismatchfinder/methods.tex", "max_stars_repo_name": "SebastianHollizeck/PhDThesis", "max_stars_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/mismatchfinder/methods.tex", "max_issues_repo_name": "SebastianHollizeck/PhDThesis", "max_issues_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/mismatchfinder/methods.tex", "max_forks_repo_name": "SebastianHollizeck/PhDThesis", "max_forks_repo_head_hexsha": "3785d53be5d4adac85660fad607c7f2d2b9e3bd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 125.8099173554, "max_line_length": 1833, "alphanum_fraction": 0.7909741838, "num_tokens": 3861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8539127566694178, "lm_q2_score": 0.6893056104028797, "lm_q1q2_score": 0.5886068539668188}}
{"text": "% !TeX spellcheck = en_US\r\n\\chapter{Backtracking}\r\n\\section{Problem's specification}\r\n\r\nPattern:\r\n\r\nFor a given problem we search all the sequences  $\\bm{x_{1}x_{2} ... x_{n}}$ for which some property holds\r\n $\\bm{P_n(x_{1},x_{2}, ..., x_{n})}$\r\n\r\n\\bigskip where: $\\bm{x_k \\in D_k}$ (some given domain of integers )\r\n\\\\\r\nThe backtrack method consists in designing \"cutoff\"/\"bounding\" properties $\\bm{P_l(x_{1},x_{2}, ..., x_{l})}$ for $\\bm{1\\leq l < n}$ such that: \r\n\r\n\\begin{itemize}\r\n \\item $\\bm{P_l(x_{1},x_{2}, ..., x_{l})}$ is true whenever $\\bm{P_{l+1}(x_{1},x_{2}, ..., x_{l+1})}$ is true;\r\n \\item $\\bm{P_l(x_{1},x_{2}, ..., x_{l})}$ is simple to test, if $\\bm{P_{l-1}(x_{1},x_{2}, ..., x_{l-1})}$ holds.\r\n \r\n\\end{itemize}\r\n\r\n\r\n\r\n\r\n\\section{References}\r\n\r\nBactracking from \\cite{KnuthArtOfCompProg4-5b}", "meta": {"hexsha": "6b26fba8ce3f50d8c1d4183b87a23de1f932b4c3", "size": 809, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_stars_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_stars_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_issues_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_issues_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_forks_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_forks_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.36, "max_line_length": 145, "alphanum_fraction": 0.6056860321, "num_tokens": 292, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754471, "lm_q2_score": 0.7826624890918021, "lm_q1q2_score": 0.5885720415069775}}
{"text": "\\subsection{Intro}\n\n\\begin{frame}\n  \\frametitle{Eager and Lazy}\n\n  \\scriptsize\n\n  SMT can be reduced to SAT, but requires discovering and\n  adding incompatibilities between \\tatoms\n  \\vfill\n  Eager and Lazy refers to the time in which these incompatibilities\n  are added to the Boolean structure of the problem\n  \\begin{itemize}\n    \\item eager: immediately, before \\satsolver is called as black-box\n    \\item {\\bf lazy}: on demand, during \\satsolver's search\n  \\end{itemize}\n  \\vfill\n    \\begin{tabular}{ccc}\n      Eager & & Lazy \\\\\n      \\begin{minipage}{.4\\textwidth}\n          \\scalebox{.3}{\\input{eager.pdf_t}}\n      \\end{minipage}\n      & ~~~~~ &\n      \\begin{minipage}{.4\\textwidth}\n          \\scalebox{.3}{\\input{lazy.pdf_t}}\n      \\end{minipage}\n    \\end{tabular}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lazy Approach}\n\n  \\scriptsize\n\n  The Lazy Approach builds on top of SAT and of available and well known\n  {\\bf decision procedures}, which we call {\\bf theory solvers} (\\tsolvers)\n  \\vfill\n  Examples of these \\tsolvers are the Union-Find procedure for equality, \n  and the Simplex Algorithm for Linear Rational Arithmetic\n  \\vfill\n  These procedures are very efficient in handling {\\bf conjunctions} of \\tatoms,\n  but they don't know how to handle arbitrary Boolean operators\n  \\vfill\n  \\pause\n  Lazy SMT can be seen as an efficient mechanism to extend these procedures\n  to handle generic Boolean combinations of \\tatoms \n  \\vfill\n  This is achieved with a tight integration between a SAT-solver and the\n  \\tsolver\n  \\vfill\n  In the following we assume that \n  \\begin{enumerate}[$(i)$]\n    \\item \\T is decidable, and that \n    \\item a \\tsolver for conjunctions of \\tatoms exists\n  \\end{enumerate}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{A bit of notation}\n\n  \\scriptsize\n\n  We will use the following notation\n  \\vfill\n  \\begin{center}\n  \\ra{1.8} \n  \\begin{tabular}{cl}\n  \\hline\n  Symbol            & Meaning \\\\\n  \\hline\n  $\\varphi$         & original formula, in some background theory \\T \\\\\n  $\\babst{\\varphi}$ & the Boolean abstraction of $\\varphi$ \\\\\n  $\\mu$             & an assignment for $\\varphi$ \\\\\n  $\\babst{\\mu}$     & the assignment for $\\babst{\\varphi}$ induced by $\\mu$ \\\\\n  \\hline\n  \\end{tabular}\n  \\end{center}\n  \\vfill\n  \\pause\n  E.g., where \\T is \\Lia (Linear Integer Arithmetic)\n  $$\n  \\ra{1.8} \n  \\begin{array}{rccccccc}\n    \\varphi \\equiv         & (x + y \\leq 0) & \\wedge & (x = 0) & \\wedge & (\\neg (y = 1) & \\vee & (x = 1) ) \\\\\n    \\babst{\\varphi} \\equiv & a_1            & \\wedge & a_2     & \\wedge & (\\neg a_3 & \\vee & a_4     ) \\\\ \n    \\mu     \\equiv         & \\multicolumn{7}{l}{\\{ x \\mapsto 0, y \\mapsto 0 \\}}       \\\\\n    \\babst{\\mu} \\equiv     & \\multicolumn{7}{l}{\\{ a_1 \\mapsto \\top, a_2 \\mapsto \\top, a_3 \\mapsto \\bot, a_4 \\mapsto \\bot \\}}\n  \\end{array}\n  $$\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{A bit of notation}\n\n  \\scriptsize\n  $$\n  \\ra{1.8} \n  \\begin{array}{rccccccc}\n    \\varphi \\equiv         & (x + y \\leq 0) & \\wedge & (x = 0) & \\wedge & (\\neg (y=1) & \\vee & (x = 1) ) \\\\\n    \\babst{\\varphi} \\equiv & a_1            & \\wedge & a_2     & \\wedge & (\\neg a_3 & \\vee & a_4     ) \\\\ \n    \\mu     \\equiv         & \\multicolumn{7}{l}{\\{ x \\mapsto 0, y \\mapsto 0 \\}}       \\\\\n    \\babst{\\mu} \\equiv     & 
\\multicolumn{7}{l}{\\{ a_1 \\mapsto \\top, a_2 \\mapsto \\top, a_3 \\mapsto \\bot, a_4 \\mapsto \\bot \\}}\n  \\end{array}\n  $$\n  \\vfill\n  Notice that\n  $$\\babst{\\mu} \\equiv \\{ a_1 \\mapsto \\top, a_2 \\mapsto \\top, a_3 \\mapsto \\bot, a_4 \\mapsto \\bot \\} \\equiv \\{ a_1, a_2, \\neg a_3, \\neg a_4 \\}$$\n  is nothing but\n  $$\\{\\ (x+y \\leq 0),\\ (x=0),\\ \\neg (y=1),\\ \\neg (x=1)\\ \\}$$\n  i.e., it is a {\\bf conjunction} of constraints, whose satisfiability can be checked with a \\tsolver\n  \\vfill\\pause\n  In other words, the \\tsolver can tell if $\\babst{\\mu}$ is \\T-satisfiable\n  \\vfill\n  If so, then there is a model $\\mu$ that induces $\\babst{\\mu}$, and if\n  $\\babst{\\mu}$ is a model for $\\babst{\\varphi}$ then $\\mu$ is also a\n  model for $\\varphi$ (take some time to think about it at home)\n\n\\end{frame}\n", "meta": {"hexsha": "6cd983b5118e00bffced79ff361004096aeaa34a", "size": 4044, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture4/recall.tex", "max_stars_repo_name": "formalmethods/smtlectures", "max_stars_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-11-07T19:34:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-24T08:05:50.000Z", "max_issues_repo_path": "lecture4/recall.tex", "max_issues_repo_name": "formalmethods/smtlectures", "max_issues_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture4/recall.tex", "max_forks_repo_name": "formalmethods/smtlectures", "max_forks_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-06T00:40:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-06T00:40:41.000Z", "avg_line_length": 33.1475409836, "max_line_length": 143, "alphanum_fraction": 0.6120178042, "num_tokens": 1428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5885720376947007}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{ragged2e}\n\\usepackage{changepage}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage[margin=1in]{geometry}\n\\title{Triangulation Algorithm}\n\\author{Chloe Yugawa}\n\\date{February 2018}\n\n\\begin{document}\n\n\\maketitle\n\\paragraph{}\nThis algorithm is designed to calculate the location of a signal received by four hydrophones, all of which are equally spaced from each other in a square that is parallel to the surface of the water. To calculate the location of the signal origin, we will use the time stamps from each hydrophone (the time at which the signal was received), the speed of sound in the given environment, and some basic algebra and trigonometry. First, we write equations for the time it takes the signal to reach each hydrophone (labelled a,b,c, and d) using the formula for the distance between two points times the speed of sound in water ($s$). The origin is defined as the center of the hydrophone square, and $l$ is defined as both the $x$ and $y$ distances from the origin to each hydrophone.\\\\\n% One of your diagrams might be nice here to illustrate L, x, and y. Also, I thought that you said L was the distance between adjacent hydrophones, not their distance from the origin. Am I miss-remembering? -Noah 4/28\n% Perhaps add a brief note that 's' is a function of the conditions in the water? -Noah 4/28\n$s$ = speed of sound through water \\\\\n$l$ = $x$ and $y$ distance from origin to each hydrophone\\\\\n$a$ = hydrophone at location $(l,l)$\\\\\n$b$ = hydrophone at location $(-l,l)$\\\\\n$c$ = hydrophone at location $(-l,-l)$\\\\\n$d$ = hydrophone at location $(l,-l)$\\\\\n$t_a$ = time stamp of signal for hydrophone a\\\\\n$t_b$ = time stamp of signal for hydrophone b\\\\\n$t_c$ = time stamp of signal for hydrophone c\\\\\n$t_d$ = time stamp of signal for hydrophone d\\\\\n\\begin{gather*}\nt_a = \\frac{1}{s}*\\sqrt{(x-l)^2+(y-l)^2+z^2}\\\\\nt_b = \\frac{1}{s}*\\sqrt{(x+l)^2+(y-l)^2+z^2}\\\\\nt_c = \\frac{1}{s}*\\sqrt{(x+l)^2+(y+l)^2+z^2}\\\\\nt_d = \\frac{1}{s}*\\sqrt{(x-l)^2+(y+l)^2+z^2}\n\\end{gather*}\nNow, we define the deltas, or time differences. For this algorithm, hydrophone $a$ has been chosen to be the reference. Calculating in terms of a different hydrophone would work equivalently. 
\n\\begin{gather*}\n    delta_1 = t_b-t_a = \\frac{1}{s}*\\sqrt{(x+l)^2+(y-l)^2+z^2} - \\frac{1}{s}*\\sqrt{(x-l)^2+(y-l)^2+z^2}\\\\\n    delta_2 = t_c-t_a = \\frac{1}{s}*\\sqrt{(x+l)^2+(y+l)^2+z^2} - \\frac{1}{s}*\\sqrt{(x-l)^2+(y-l)^2+z^2}\\\\\n    delta_3 = t_d-t_a = \\frac{1}{s}*\\sqrt{(x-l)^2+(y+l)^2+z^2} - \\frac{1}{s}*\\sqrt{(x-l)^2+(y-l)^2+z^2}\n\\end{gather*}\nThese equations rewritten for clarity and for future steps:\n\\begin{gather*}\n    delta_1 = \\frac{1}{s}(\\sqrt{(x+l)^2+(y-l)^2+z^2} - \\sqrt{(x-l)^2+(y-l)^2+z^2})\\\\\n    delta_2 = \\frac{1}{s}(\\sqrt{(x+l)^2+(y+l)^2+z^2} - \\sqrt{(x-l)^2+(y-l)^2+z^2})\\\\\n    delta_3 = \\frac{1}{s}(\\sqrt{(x-l)^2+(y+l)^2+z^2} - \\sqrt{(x-l)^2+(y-l)^2+z^2})\n\\end{gather*}\nMultiplying through by $s$ and moving the distance to hydrophone $a$ to the other side of the equations, we get:\n\\begin{gather*}\n    delta_1*s +\\sqrt{(x-l)^2+(y-l)^2+z^2} = \\sqrt{(x+l)^2+(y-l)^2+z^2}\\\\\n    delta_2*s + \\sqrt{(x-l)^2+(y-l)^2+z^2} = \\sqrt{(x+l)^2+(y+l)^2+z^2}\\\\\n    delta_3*s + \\sqrt{(x-l)^2+(y-l)^2+z^2} = \\sqrt{(x-l)^2+(y+l)^2+z^2}\n\\end{gather*}\nSquaring both sides and simplifying yields\n\\begin{gather*}\n    (delta_1*s)^2 +(x-l)^2+(y-l)^2+z^2 + 2*delta_1*s*\\sqrt{(x-l)^2+(y-l)^2+z^2}= (x+l)^2+(y-l)^2+z^2\\\\\n    (delta_2*s)^2 + (x-l)^2+(y-l)^2+z^2 + 2*delta_2*s*\\sqrt{(x-l)^2+(y-l)^2+z^2} = (x+l)^2+(y+l)^2+z^2\\\\\n    (delta_3*s)^2 + (x-l)^2+(y-l)^2+z^2 + 2*delta_3*s*\\sqrt{(x-l)^2+(y-l)^2+z^2} = (x-l)^2+(y+l)^2+z^2\n\\end{gather*}\n\nNote that the only known values are $s$ and the time stamps, and thus the values of the deltas. Further simplification: \n\\begin{equation}\n\\label{equ:d1}\n    (delta_1*s)^2  + 2*delta_1*s*\\sqrt{(x-l)^2+(y-l)^2+z^2}= 4lx\n\\end{equation}\n\\begin{equation}\n\\label{equ:d2}\n     (delta_2*s)^2 + 2*delta_2*s*\\sqrt{(x-l)^2+(y-l)^2+z^2} = 4lx+4ly\n\\end{equation}\n\\begin{equation}\n\\label{equ:d3}\n     (delta_3*s)^2 + 2*delta_3*s*\\sqrt{(x-l)^2+(y-l)^2+z^2} = 4ly\n\\end{equation}\n\nHere, we see \\eqref{equ:d2} is a linear combination of \\eqref{equ:d1} and \\eqref{equ:d3}, and that the term $\\sqrt{(x-l)^2+(y-l)^2+z^2}$ shows up in all three equations. To solve this system, set $\\sqrt{(x-l)^2+(y-l)^2+z^2} = N$.\n\\begin{equation}\n\\label{equ:n1}\n    (delta_1*s)^2  + 2*delta_1*s*N= 4lx\n\\end{equation}\n\\begin{equation}\n\\label{equ:n2}\n     (delta_2*s)^2 + 2*delta_2*s*N = 4lx+4ly\n\\end{equation}\n\\begin{equation}\n\\label{equ:n3}\n     (delta_3*s)^2 + 2*delta_3*s*N = 4ly\n\\end{equation}\n\nNow, using the fact that the equations are linear combinations, we get \\eqref{equ:n1} + \\eqref{equ:n3} = \\eqref{equ:n2}:\n\\begin{equation*}\n    (delta_1*s)^2  + 2*delta_1*s*N + (delta_3*s)^2 + 2*delta_3*s*N = (delta_2*s)^2 + 2*delta_2*s*N\n\\end{equation*}\nSolving for $N$:\n\\[N = \\frac{s(delta_2^2-delta_1^2-delta_3^2)}{2(delta_1+delta_3-delta_2)}\\]\n\nSolving \\eqref{equ:n1} for $x$ and \\eqref{equ:n3} for $y$ and using $N$, which is now in terms of known values, we get\n\\begin{equation*}\n    x = \\frac{(s*delta_1)^2 + 2s*delta_1*N}{4l}\n\\end{equation*}\n\\begin{equation*}\n    y = \\frac{(s*delta_3)^2 + 2s*delta_3*N}{4l}\n\\end{equation*}\nTo find $z$, we go back to $N = \\sqrt{(x-l)^2+(y-l)^2+z^2}$. We now have equations for $N$, $x$, and $y$, so we just need to solve this for $z$:\n\\[ z = \\sqrt{N^2-(x-l)^2-(y-l)^2}\\]
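\nPutting the algebra together, a short illustrative Python sketch (function and variable names are ours, not part of the original design) computes the Cartesian coordinates from the four time stamps:\n\\begin{verbatim}\nimport math\n\ndef locate(t_a, t_b, t_c, t_d, s, l):\n    # time differences relative to hydrophone a\n    d1, d2, d3 = t_b - t_a, t_c - t_a, t_d - t_a\n    # N is the distance from the signal origin to hydrophone a\n    N = s * (d2**2 - d1**2 - d3**2) / (2 * (d1 + d3 - d2))\n    x = ((s * d1)**2 + 2 * s * d1 * N) / (4 * l)\n    y = ((s * d3)**2 + 2 * s * d3 * N) / (4 * l)\n    z = math.sqrt(N**2 - (x - l)**2 - (y - l)**2)\n    return x, y, z\n\\end{verbatim}\n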
The last calculations to do are translating the Cartesian coordinates into polar coordinates. Per the software requirements, angle measurements will be in degrees. $z$ is the same in both coordinate systems, so the only values to find are radius $r$ and angle $\\theta$. \n\\begin{gather*}\n    r = \\sqrt{x^2+y^2}\\\\\n    \\theta = \\arctan{\\left(\\frac{y}{x}\\right)}*\\frac{180}{\\pi}\n\\end{gather*}\n\n\\end{document}\n", "meta": {"hexsha": "6340e1efec369eeaabcdcd5b7adbd71fdd7d2371", "size": 5790, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/TriangulationAlgorithm/TriangulationAlgorithm.tex", "max_stars_repo_name": "31337H4X0R/crab-tracker", "max_stars_repo_head_hexsha": "c822a40010d172ba797b5de8c340931d0feea6e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-31T01:32:17.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-31T01:32:17.000Z", "max_issues_repo_path": "doc/TriangulationAlgorithm/TriangulationAlgorithm.tex", "max_issues_repo_name": "31337H4X0R/crab-tracker", "max_issues_repo_head_hexsha": "c822a40010d172ba797b5de8c340931d0feea6e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 47, "max_issues_repo_issues_event_min_datetime": "2017-11-04T02:04:42.000Z", "max_issues_repo_issues_event_max_datetime": "2018-06-16T01:00:48.000Z", "max_forks_repo_path": "doc/TriangulationAlgorithm/TriangulationAlgorithm.tex", "max_forks_repo_name": "31337H4X0R/crab-tracker", "max_forks_repo_head_hexsha": "c822a40010d172ba797b5de8c340931d0feea6e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-06-10T21:58:49.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T17:21:03.000Z", "avg_line_length": 52.1621621622, "max_line_length": 784, "alphanum_fraction": 0.6492227979, "num_tokens": 2187, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5885720289948985}}
{"text": "\n\\chapter{Data imputation}\n\\label{ch:data-imputation}\n\n\\section{Model estimation}\n\nWhenever some variables are measured, there is a chance of failing on collecting the data and missing some samples. It could be related to different reasons. % people not answering some questions in surveys or incomplete medical records or even public entities failing to report or just choosing not to disclose them. \nAlso, sampling data from sensors and reporting it via wireless communication channels is not an exception.\n\nTherefore, handling missing data is a problem that has been under research for a long time. Being Multiple Imputation\\cite{imputationRubin}, Expectation-Maximization\\cite{imputationEM}, Nearest Neighbours\\cite{imputationNN} and hot-deck\\cite{imputationHotDeck} the most popular techniques to deal with them.\n\n%As in the previous step, and based on time constraints, it has been decided not to implement algorithms from scratch but rely on existing known packages to avoid investing in developing time. \n\nMoritz et al. have analysed several univariate time series imputation implementations in R\\cite{MoritzComparison}, which results later have been compiled in the \\texttt{imputeTS} R package\\cite{imputeTS}. Therefore it is reasonable to rely on their work and use their results to decide which imputing strategy should be taken.\nThus, for the current work, the \\texttt{imputeTS} package will estimate a structural time series model from the data and then perform Kalman smoothing to fill the gaps. \n\nIt should be noted that before performing a Kalman smoothing, it is needed to have a state representation of the model, for which \\texttt{imputeTS} supports Auto-ARIMA state space estimation and structural time series model fitted by maximum likelihood. Moreover, it should be mentioned that the development has been mainly done in Python. Nonetheless, for this phase, an R package will be called from Python using the \\texttt{rpy2} module as an interface to bounce between both languages.\n\n\\pagebreak\n\\section{Auto-ARIMA}\n\n\\subsection{ARIMA models}\n\nIt is defined that a time series $\\{X_t\\}$ can be modelled by an \\emph{integrated autoregressive moving average} (ARIMA) model if there exist a $d$-th difference $\\{Y_t\\} = \\nabla^d \\{X_t\\}$ that can be modelled as an ARMA model\\cite{brockwell2016introduction}. \n\nLet $\\nabla$ and $B$ be the difference and backshift operators, respectively. 
Then:\n\n\\begin{align}\\label{eq1}\n\t\\begin{split}\n\t\t\\{Y_t\\} &= \\nabla^d \\{X_t\\}, \\quad d>0 \\\\\n\t\t\t\t&= (1-B)\\nabla^{d-1}\\{X_t\\} \\\\\n\t\t\t\t& \\quad\\quad \\vdots \\\\\n\t\t\t\t&= (1-B)^{d-1}\\nabla\\{X_t\\} \\\\\n\t\t\t\t&= (1-B)^d \\{X_t\\} \\longrightarrow \\text{ARMA}(p,q) \\\\\n\t\\end{split}\n\\end{align}\n\nTherefore, $\\{Y_t - \\mu\\}$ will be $\\text{ARMA}(p,q)$ with mean $\\mu$ if it satisfies the following difference equation:\n\n\\begin{equation*}\n\tY_t - \\phi_1Y_{t-1} - \\cdots - \\phi_pY_{t-p} = Z_t + \\theta_1Z_{t-1} + \\cdots + \\theta_qZ_{t-q}\n\\end{equation*}\n\nwhich, written using the backshift operator, is:\n\n\\begin{align}\\label{eq2}\n\t\\phi(B)\\{Y_t\\} &= \\theta(B) Z_t \\quad,\\quad Z_t \\sim \\mathcal{N}(0,\\sigma^2)\n\\end{align}\n\nSubstituting \\ref{eq1} into \\ref{eq2}, the ARIMA process can be defined as:\n\n\\begin{align}\\label{eq:arima}\n\t\\begin{split}\n\t\t\\phi(B)(1-B)^d \\{X_t\\} &= \\theta(B) Z_t \\\\\n\t\t\\{X_t\\} &\\sim \\text{ARIMA}(p,d,q)\n\t\\end{split}\n\\end{align}\n\n\n\\subsection{SARIMA}\n\n It is said that a time series has a seasonality of period $s$ if $Y_t = Y_{t-s}$. Written using the backshift operator this is $Y_t = B^sY_t$.\n \n Using the same logical development as for the ARIMA derivation, it can be said that $\\{Y_t\\}$ follows a Seasonal ARIMA (SARIMA) process with period $s$ \\cite{brockwell2016introduction}:\n \n\\begin{equation*}\n \t\\{Y_t\\} \\sim \\text{ARIMA}(p,d,q)(P,D,Q)_s\n\\end{equation*}\n \n if and only if\n \n\\begin{equation}\\label{eq3}\n \t\\{X_t\\} = (1-B)^d (1-B^s)^D \\{Y_t\\}\n\\end{equation}\n\nis a causal ARMA process defined by \n\n\\begin{equation}\\label{eq4}\n\t\\phi(B)\\Phi(B^s)X_t = \\theta(B)\\Theta(B^s)Z_t , \\quad\\{Z_t\\} \\sim \\mathcal{N}(0,\\sigma^2)\n\\end{equation}\n\nTherefore, replacing (\\ref{eq3}) in (\\ref{eq4}), the SARIMA definition can be rewritten as\n\n\\begin{equation*}\\label{eq5}\n\t\\phi(B)\\Phi(B^s)(1-B)^d (1-B^s)^D \\{Y_t\\} = \\theta(B)\\Theta(B^s)Z_t , \\quad\\{Z_t\\} \\sim \\mathcal{N}(0,\\sigma^2)\n\\end{equation*}\n\nMost of the literature omits a constant $c$ from the model, since it does not add more insight into the concepts behind these derivations. Nonetheless, as will be seen in the following section, it is considered as part of the auto-ARIMA estimation; therefore, the final ARIMA definition in the current work is given by:\n\n\\begin{equation}\\label{eq:sarima}\n\t\\phi(B)\\Phi(B^s)(1-B)^d (1-B^s)^D \\{Y_t\\} = c + \\theta(B)\\Theta(B^s)Z_t , \\quad\\{Z_t\\} \\sim \\mathcal{N}(0,\\sigma^2)\n\\end{equation}\n\n\n\\subsection{Auto-ARIMA algorithm}\n\nAlthough the procedure is called \\emph{auto-ARIMA} in the library, it also supports SARIMA models.\n\nThe main goal of auto-tuning these models is to choose the appropriate $p, q, d, P, Q$ and $D$ values so the model can make a good approximation of the process. If $d$ and $D$ are known, $p, q, P, Q$ can be selected by using an information criterion such as the \\ac{aic}\\cite{Akaike1998}:\n\n\\begin{equation}\n\t\\text{AIC} = -2\\log(L) + 2(p+q+P+Q+k)\n\\end{equation}\n\nwhere $k=1$ if $c\\neq0$ in equation (\\ref{eq:sarima}) and $0$ otherwise, and $L$ is the maximised likelihood of the model fitted to $(1-B)^d (1-B^s)^D \\{Y_t\\}$\\cite{autoarimaLib} \n\nThe \\texttt{imputeTS} library does not implement the (S)ARIMA estimation in its code. 
Instead, it uses the \\texttt{forecast} package to do it\\cite{imputeTS}.\n\nThe \\texttt{forecast} library implements the Hyndman-Khandakar algorithm for \\ref{eq:sarima}, which uses a heuristic that combines unit root tests, \\ac{aic} minimisation and MLE maximisation, as shown in Figure \\ref{alg:autoarima}\\cite{autoarimaLib}.\n\n\\begin{figure}[H]\n\t\\noindent\\fbox{\n\t{\\small\\parbox{\\textwidth}\n\t\t{\n\t\t\t\\subsubsection*{Step 1: Find $D$ and $d$}\n\t\t\t\t\n\t\t\tFor non-seasonal data, run KPSS unit-root tests\\cite{kpss}. If the results are significant, difference the data and repeat; $d$ is the number of differences needed until the test results are insignificant. \n\t\t\t\n\t\t\tFor seasonal data, ARIMA$(p,d,q)(P,D,Q)_s$ models are considered, where $s$ is the seasonal frequency and $D=0$ or $D=1$ depending on an extended Canova-Hansen test\\cite{canovahansen} (for $s > 13$ the library estimates critical values $C_s = 0.269s^{0.928}$). \n\t\t\t\n\t\t\tAfter the seasonal pattern has passed its stability test and $D$ has been selected, $d$ is chosen by successive KPSS tests on the seasonally differenced data if $D=1$, or on the original data if $D=0$.\n\t\t\t\n\t\t\t\\subsubsection*{Step 2: Initial model}\n\t\t\t\n\t\t\tFit the following models:\n\t\t\t\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item ARIMA$(2,d,2)$ if $s=1$ or ARIMA$(2,d,2)(1,D,1)$ if $s > 1$\n\t\t\t\t\\item ARIMA$(0,d,0)$ if $s=1$ or ARIMA$(0,d,0)(0,D,0)$ if $s > 1$\n\t\t\t\t\\item ARIMA$(1,d,0)$ if $s=1$ or ARIMA$(1,d,0)(1,D,0)$ if $s > 1$\n\t\t\t\t\\item ARIMA$(0,d,1)$ if $s=1$ or ARIMA$(0,d,1)(0,D,1)$ if $s > 1$\n\t\t\t\\end{itemize}\n\t\t\n\t\t\tFrom these, the one with the lowest AIC score is selected as the initial model; it will be called the \\emph{current} model and denoted as ARIMA$(p,d,q)$ if $s=1$ or ARIMA$(p,d,q)(P,D,Q)_s$ if $s > 1$.\n\t\t\t\n\t\t\t\\subsubsection*{Step 3: Explore variations}\n\t\t\t\n\t\t\t13 variations of the current model are explored. They are summarised as: \n\t\t\t\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item one of $p, q, P$ and $Q$ is allowed to vary by $\\pm1$ from the current model\n\t\t\t\t\\item $p$ and $q$ both vary by $\\pm1$ from the current model\n\t\t\t\t\\item $P$ and $Q$ both vary by $\\pm1$ from the current model\n\t\t\t\t\\item $c$ is included if the current model has $c=0$ or excluded if the current model has $c\\neq0$.\n\t\t\t\\end{itemize}\n\t\t\n\t\t\tWhenever a model with a better AIC score than the current model is found, it becomes the current model. The algorithm stops when no better AIC than the current one is found. \n\t\t\t\n\t\t\t\n\t\t\t\\emph{Note: The model updating is also subject to stability constraints that can be found in the original publication for further details.}\n\t\t\t\n\t\t}\n\t}}\n\t\\caption{Hyndman-Khandakar algorithm for Auto-ARIMA estimation}\n\t\\label{alg:autoarima}\n\\end{figure}
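\n\nFor illustration only (the present work calls the R implementation through \\texttt{imputeTS}), the same Hyndman-Khandakar search is also available in Python via the \\texttt{pmdarima} package; the series \\texttt{y} and the seasonal period \\texttt{m} below are placeholders:\n\n\\begin{verbatim}\nimport pmdarima as pm\n\n# stepwise Hyndman-Khandakar search minimising the AIC\nmodel = pm.auto_arima(y, seasonal=True, m=24,\n                      stepwise=True,\n                      information_criterion='aic')\nprint(model.order, model.seasonal_order)  # (p,d,q), (P,D,Q,s)\n\\end{verbatim}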
\n\n\n\n\\section{Structural time series estimation}\n\n\\subsection{State space representation}\n\nA state-space representation of a time series is suitable for cases in which the underlying nature of a process is hidden and cannot be determined directly. Nonetheless, indirect observations can be measured.\n\nThus, state-space representations are given by two equations. The \\emph{observation equation} (\\ref{eq:obs}) relates the $w$-dimensional observable measures as a linear function of the $v$-dimensional noisy hidden state, and the \\emph{state equation} (\\ref{eq:state}) determines how the hidden state will transition from time $t$ to $t+1$.\n\n\\begin{align}\n\t\\bm{Y}_t\t\t&=\tG\\bm{X}_t + \\bm{W}_t  \\label{eq:obs} \\\\\n\t\\bm{X}_{t+1}\t&=\tF\\bm{X}_t + \\bm{V}_t \\label{eq:state} \n\\end{align}\n\nwhere $F$ is a $v \\times v$ matrix, $G$ a $w \\times v$ matrix, $\\{\\bm{W}_t\\} \\sim \\mathcal{N}(0,R)$, $\\{\\bm{V}_t\\} \\sim \\mathcal{N}(0,Q)$ and $E(\\bm{V}_s\\bm{W}^T_t) = 0$ for all $t, s$.\n\n\\subsection{Structural models}\n\nThe classical structural models are defined in terms of trend $(m)$, seasonal $(s)$ and noise $(\\varepsilon)$ components (\\ref{eq:struct}). Although useful for some applications, they can be too deterministic for some others.\n\n\\begin{equation}\\label{eq:struct}\n\tX_t = m_t + s_t + \\varepsilon_t\n\\end{equation}\n\nState-space representations allow bringing more flexibility to these components. Therefore, it is natural to extend these concepts to a state-space domain. \n\n\\subsubsection*{Local level model}\n\nTo show how the model is built up, it will be derived by adding component after component, starting from the most straightforward model: the random walk. Let $\\{Y_t\\}$ be the observable variable of a state space (\\ref{eq:loclvl_obs}) whose hidden state follows a random walk (\\ref{eq:loclvl_st}), which defines the \\emph{local level model}\\cite{brockwell2016introduction}.\n\n\\begin{align}\n\tY_t\t\t&= M_t + W_t, \\quad W_t \\sim \\mathcal{N}(0,\\sigma^2_w) \\label{eq:loclvl_obs} \\\\\n\tM_{t+1}\t&= M_t + V_t, \\quad V_t \\sim \\mathcal{N}(0,\\sigma^2_v) \\label{eq:loclvl_st}\n\\end{align}\n\n\\subsubsection*{Local linear trend model}\n\nIt is not difficult to extend this local level model to a \\emph{local linear trend model} by adding a slope state $B_t$. 
\n\n\\begin{equation}\n\tM_t = M_{t-1} + B_{t-1} + V_{t-1} \\label{eq:loclintrend}\n\\end{equation}\n\nIntroducing randomness into the slope as well:\n\n\\begin{equation}\n\tB_t = B_{t-1} + U_t , \\quad  U_t \\sim \\mathcal{N}(0,\\sigma^2_u) \\label{eq:randslope}\n\\end{equation} \n\nAs this model now contains multiple states, in order to write the model in state-space form, we define the state vector:\n\n\\begin{equation}\n\t\\bm{X}_t = (M_t, B_t)^T \\label{eq:statevect}\n\\end{equation}\n\nThus, using (\\ref{eq:statevect}), (\\ref{eq:loclintrend}) and (\\ref{eq:randslope}) can be rewritten as\n\n\\begin{equation}\n\t\\bm{X}_{t+1} = \n\t\\begin{bmatrix}\n\t\t1 & 1 \\\\\n\t\t0 & 1 \n\t\\end{bmatrix} \\bm{X}_{t} + \\bm{V}_{t}, \\quad t = 1, 2, \\ldots\n\t\\label{eq:lintrendstates}\n\\end{equation}\n\nsuch that \n\\begin{equation}\\label{eq:V_trend}\n\t\\bm{V}_{t} = (V_t, U_t)^T\n\\end{equation}\n\nThe observation equation for the process $\\{Y_t\\}$ is then given by\n\n\\begin{equation}\\label{eq:lintrendobs}\n\tY_t = \\begin{bmatrix}1 & 0\\end{bmatrix}\\bm{X}_{t} + W_t\n\\end{equation}\n\nThus, from (\\ref{eq:lintrendstates}) and (\\ref{eq:lintrendobs}), the missing pieces to define the state-space equations are:\n\n\\begin{equation}\\label{eq:F_trend}\n\tF = \\begin{bmatrix}\n\t\t1 & 1 \\\\\n\t\t0 & 1 \n\t\\end{bmatrix}\n\\end{equation}\n\n\\begin{equation}\n  G = \\begin{bmatrix}1 & 0\\end{bmatrix}\n\\end{equation} \n\n\\begin{equation}\n  Q = \\begin{bmatrix}\n  \t\t\\sigma^2_v \t& 0 \\\\\n  \t\t0 \t\t\t& \\sigma^2_u \n  \\end{bmatrix}\n\\end{equation}\n\n\\begin{equation}\n  R = \\sigma^2_w\n\\end{equation}\n\n\\subsubsection*{Noisy seasonal model}\n\nAs in the classical structural models, the seasonal component $s_t$ with period $d$ has the properties\\cite{maravall1985structural}\n\n\\begin{align*}\n\ts_{t+d} &= s_t \\\\\n\t\\sum_{t=1}^{d}{s_t} &= 0\n\\end{align*}\n\nExpanding it as in \\cite{brockwell2016introduction} yields\n\n\\begin{equation}\\label{eq:noiseasonal}\n\ts_{t+1} = -s_t - \\cdots - s_{t-d+2}, \\quad t=1,2,\\ldots\n\\end{equation}\n\nFrom (\\ref{eq:noiseasonal}), a generalised sequence $\\{Y_t\\}$ can be constructed by adding a random variable $S_t \\sim \\mathcal{N}(0, \\sigma^2_s)$\n\n\\begin{equation}\n\tY_{t+1} = -Y_t - \\cdots - Y_{t-d+2} + S_t, \\quad t=1,2,\\ldots\n\\end{equation} \n\nNow, to put it into state-space form, we define the state vector\n\n\\begin{equation}\n\t\\bm{X}_t = (Y_t, Y_{t-1}, \\ldots, Y_{t-d+2})^T\n\\end{equation}\n\nTherefore, the observation equation for $\\{Y_t\\}$ is\n\n\\begin{equation}\n\tY_t = \\begin{bmatrix}1 & 0 & 0 & \\cdots & 0\\end{bmatrix} \\bm{X}_t, \\quad t=1,2,\\ldots\n\\end{equation}\n\nand $\\{\\bm{X}_t\\}$ satisfies the state equation\n\n\\begin{align}\n\t\\bm{X}_{t+1} &= F\\bm{X}_t + \\bm{V}_t , \\quad t=1,2,\\dots\\\\\n\t\\bm{V}_t &= (S_t, 0, \\ldots, 0)^T \\label{eq:V_noise} \\\\\n\tF &= \n\t\\begin{bmatrix}\n\t\t-1\t\t& -1\t\t& \\cdots\t& -1\t\t& -1 \t\t\\\\\n\t\t 1\t\t&  0 \t\t& \\cdots \t&  0\t\t& 0  \t\t\\\\\n\t\t 0\t\t&  1 \t\t& \\cdots \t&  0\t\t& 0  \t\t\\\\\n\t\t\\vdots\t& \\vdots\t& \\ddots\t& \\vdots\t& \\vdots\t\\\\\n\t\t0\t\t&  0\t\t& \\cdots \t&  1\t\t& 0\t\t\n\t\\end{bmatrix}\\label{eq:F_noise}\n\\end{align}\n\n\n\n\\subsubsection*{Structural time series general model}\n\nAs in (\\ref{eq:struct}), to construct a general (additive) structural time series model, the components are summed; i.e., the general model is built by merging the linear trend and the noisy seasonal 
models.\n\nThe state vector is then\n\n\\begin{equation}\n\t\\bm{X}_t = \n\t\\begin{bmatrix}\n\t\t\\bm{X}_t^1 \\\\\n\t\t\\bm{X}_t^2\n\t\\end{bmatrix} =\n\t\\begin{bmatrix}\n\t\t(M_t, B_t)^T \\\\\n\t\t(Y_t, Y_{t-1}, \\ldots, Y_{t-d+2})^T\n\t\\end{bmatrix} = \n\t\\begin{bmatrix}\n\t\t\\begin{bmatrix}\n\t\t\tM_t \\\\\n\t\t\tB_t\n\t\t\\end{bmatrix} \\\\\n\t\t\\begin{bmatrix}\n\t\t\tY_t \\\\\n\t\t\tY_{t-1} \\\\\n\t\t\t\\vdots \\\\\n\t\t\tY_{t-d+2}\n\t\t\\end{bmatrix}\n\t\\end{bmatrix}\n\\end{equation}\n\nThe state equation is\n\n\\begin{align}\n\t\\bm{X}_{t+1}\t&= \n\t\\begin{bmatrix}\n\t\tF_1\t& 0 \\\\\n\t\t0\t& F_2\n\t\\end{bmatrix} \\bm{X}_{t} +\n\t\\begin{bmatrix}\n\t\t\\bm{V}_t^1 \\\\\n\t\t\\bm{V}_t^2\n\t\\end{bmatrix}\n\\end{align}\n\nwhere $F_1$ and $F_2$ are the matrices defined in (\\ref{eq:F_trend}) and (\\ref{eq:F_noise}) respectively, and $\\bm{V}_t^1$ and $\\bm{V}_t^2$ are (\\ref{eq:V_trend}) and (\\ref{eq:V_noise}).\n\n\nAnd the observation equation is\n\n\\begin{equation}\n\tY_t = \t\\begin{bmatrix}1 & 0 & 1 & 0 & \\cdots & 0\\end{bmatrix}\\bm{X}_{t} + W_t\n\\end{equation}\n\n\n\n\\subsection{Kalman prediction}\n\nLet $\\bm{X} = (X_1, \\ldots, X_v)^T$ be a random vector. Then we can define \n\n\\begin{equation}\n\tP_t(\\bm{X}) \\coloneqq (P_t(X_1), \\ldots, P_t(X_v))^T\n\\end{equation}\n\nsuch that \n\n\\begin{equation}\n\tP_t(X_i) \\coloneqq P(X_i | Y_0, \\ldots, Y_t)\n\\end{equation}\n\nis the best linear predictor of $X_i$ in terms of all components of $Y_0, Y_1, \\ldots, Y_t$\\cite{brockwell2016introduction}.\n\n\nThen the Kalman one-step predictor for a state-space model given by (\\ref{eq:obs}) and (\\ref{eq:state}) is defined as \n\n\\begin{equation}\n\t\\hat{\\bm{X}}_t \\coloneqq P_{t-1}(\\bm{X}_t)\n\\end{equation}\n\nand the error covariance matrices as\n\n\\begin{equation}\n\t\\Omega_t = E[(\\bm{X}_t - \\hat{\\bm{X}}_t)(\\bm{X}_t - \\hat{\\bm{X}}_t)^T]\n\\end{equation}\n\nThe Kalman predictive recursions are then defined by\n\nInitial conditions:\n\\begin{align}\\label{eq:kalm_pred_init}\n\\begin{split}\n\t\t\\hat{\\bm{X}}_1 &= P(\\bm{X}_1 | \\bm{Y}_0) \\\\\n\t\t\\Omega_1 &= E[(\\bm{X}_1 - \\hat{\\bm{X}}_1)(\\bm{X}_1 - \\hat{\\bm{X}}_1)^T]\n\\end{split}\n\\end{align}\n\nRecursions:\n\\begin{align}\\label{eq:kalm_pred_1}\n\\begin{split}\n\t\\hat{\\bm{X}}_{t+1} &= F_t\\hat{\\bm{X}}_t + \\Theta_t\\Delta_t^{-1}(\\hat{\\bm{Y}}_t - G_t\\hat{\\bm{X}}_t) \\\\\n\t\\Omega_{t+1} &= F_t \\Omega_t F_t^T + Q_t - \\Theta_t \\Delta_t^{-1} \\Theta_t^T \n\\end{split}\n\\end{align}\n\nwhere\n\\begin{align}\\label{eq:kalm_pred_2}\n\\begin{split}\n\t\\Delta_t &= G_t \\Omega_t G_t^T + R_t \\\\\n\t\\Theta_t &= F_t \\Omega_t G_t^T\n\\end{split}\n\\end{align}\n\n\\subsection{Structural models estimation}\n\nConsider a vector $\\bm{\\theta}$ whose components can fully parametrise the state space given by (\\ref{eq:obs}) and (\\ref{eq:state}).\n\nTherefore, it is possible to find $\\hat{\\bm{\\theta}}_{MLE}$ by maximising the likelihood of the observations $\\{Y_t\\}$ with respect to the parameters in $\\bm{\\theta}$.\n\nIf the conditional probability density of $\\bm{Y}_t | \\bm{Y}_{t-1}, \\ldots, \\bm{Y}_0$ is $f(\\cdot | \\bm{Y}_{t-1}, \\ldots, \\bm{Y}_0)$, then the likelihood can be expressed as \n\n\\begin{equation}\\label{eq:structts_lik_1}\n\t\\mathcal{L}(\\bm{\\theta} ; \\bm{Y}_1, \\ldots, \\bm{Y}_n) = \\prod_{t=1}^{n}{f(\\bm{Y}_t | \\bm{Y}_{t-1}, \\ldots, \\bm{Y}_0)}\n\\end{equation}\n\nIn general (\\ref{eq:structts_lik_1}) is hard to solve. 
But, if it is assumed that $\\bm{Y}_0$, $\\bm{X}_1$ and $\\bm{W}_t$, $\\bm{V}_t, t=1,2,\\ldots$ are \\emph{jointly Gaussian}, then the resulting conditional densities have the form\\cite{brockwell2016introduction}\n\n\\begin{equation}\n\tf(\\cdot | \\bm{Y}_{t-1}, \\ldots, \\bm{Y}_0) =  \\left(2 \\pi\\right)^{-w/2} \\left(\\det \\Delta_t\\right)^{-1/2} \\exp \\left\\{-\\frac{1}{2} \\bm{I}_t^T \\Delta_t^{-1}\\bm{I}_t\\right\\}\n\\end{equation}\n\nwhere \n\n\\begin{equation}\n\t\\bm{I}_t = \\bm{Y}_t - P_{t-1} \\bm{Y}_t = \\bm{Y}_t - G \\hat{\\bm{X}}_t\n\\end{equation}\n\nand $P_{t-1} \\bm{Y}_t$ and $\\Delta_t, t \\geq 1$ are obtained from the Kalman prediction recursions.\n\nFinally, under the Gaussianity assumptions, the likelihood can be rewritten as\\cite{brockwell2016introduction}\n\n\\begin{equation}\\label{eq:structts_lik_2}\n\t\\mathcal{L}(\\bm{\\theta} ; \\bm{Y}_1, \\ldots, \\bm{Y}_n) =\n\t\\left(2 \\pi\\right)^{-\\frac{nw}{2}} \n\t\\left( \\prod_{j=1}^{n}{\\det \\Delta_j} \\right)^{-\\frac{1}{2}}\n\t\\exp\\left\\{ - \\frac{1}{2} \\sum_{j=1}^{n}{\\bm{I}_t^T \\Delta_t^{-1}\\bm{I}_t} \\right\\}\n\\end{equation} \n\nNow, for any value of $\\bm{\\theta}$, the likelihood $\\mathcal{L}(\\bm{\\theta} ; \\bm{Y}_1, \\ldots, \\bm{Y}_n)$ can be calculated with the help of the Kalman recursions. Thus, in order to find $\\hat{\\bm{\\theta}}_{MLE}$, a nonlinear optimisation algorithm is run to find the best $\\bm{\\theta}$ by maximising $\\mathcal{L}$\\footnote{Constrained to the optimisation algorithm behaviour}. In the current work, the L-BFGS-B algorithm is used via \\texttt{optim} calls in R.\n\n\\pagebreak\n\\section{Kalman smoothing}\n\nTime series smoothing consists of estimating the hidden states $\\bm{X}_t$ given the complete time series $\\bm{Y}_1, \\ldots, \\bm{Y}_n$ with $t < n$\\cite{timeseries_statespace_models}. This is especially suitable when there is a missing data point at a time $t < n$:\n\n\\begin{equation}\n\t\\bm{X}_{t|n} = P_n \\bm{X}_t\n\\end{equation}\n\nAnd the error covariance matrices\\cite{brockwell2016introduction}\n\n\\begin{equation}\n\t\\Omega_{t|n} = E \\left[(\\bm{X}_{t} - \\bm{X}_{t|n})(\\bm{X}_{t} - \\bm{X}_{t|n})^T \\right]\n\\end{equation}\n\nThen, the Kalman iterations for the smoothing problem are \\cite{brockwell2016introduction}\n\n\\begin{align}\n\tP_n \\bm{X}_t\t\n\t&= \n\tP_{n-1} \\bm{X}_t + \\Omega_{t,n} G_n^T \\Delta_n^{-1} (\\bm{Y}_n - G_n \\hat{\\bm{X}}_n) \n\t\\\\\n\t\\Omega_{t,n+1} \t\t\n\t&=\n\t\\Omega_{t,n} \\left[ F_n - \\Theta_n \\Delta_n^{-1} G_n \\right]^T\n\t\\\\\n\t\\Omega_{t|n}\n\t&=\n\t\\Omega_{t|n-1} - \\Omega_{t,n} G_n^T \\Delta_n^{-1} G_n \\Omega_{t,n}^T\n\\end{align}\n\ngiven the initial conditions\n\n\\begin{align}\n\t\\hat{\\bm{X}}_t &= P_{t-1} \\bm{X}_t \\\\\n\t\\Omega_{t,t} &= \\Omega_{t|t-1} = \\Omega_t\n\\end{align}\n\nwhich are found from the Kalman prediction.\n\n\\section{Comparative experiments}\n\nA set of thorough experiments has been designed to compare both model estimation techniques and empirically determine which one performs best for the current dataset.\n\nThe starting point is a database containing 500 \\acp{rbs} and approximately two months of measurements. The database is then mined to find $n=50$ signals per feature with no missing data. After that, a given ratio of the data is artificially removed using a uniform distribution, so that a \\ac{mcar} scenario can be simulated\\cite{rantou2017missing}. 
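\n\nAs an illustration, the artificial \\ac{mcar} removal can be sketched in a few lines of Python (names and the fixed seed are illustrative):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef mask_mcar(series, ratio, seed=0):\n    # remove 'ratio' of the points uniformly at random (MCAR)\n    rng = np.random.default_rng(seed)\n    y = series.astype(float).copy()\n    idx = rng.choice(y.size, size=int(round(ratio * y.size)),\n                     replace=False)\n    y[idx] = np.nan\n    return y\n\\end{verbatim}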
\n\nThe two algorithms are run to estimate the model, which is then used to run Kalman smoothing. The coefficient of determination $R^2$ is used to compare the performance of the models. \n\n\\begin{equation}\\label{eq:R2}\n\tR^2 = 1 - \\frac{\\sum_{i}{(y_i - \\hat{y}_i)^2}}{\\sum_{i}{(y_i - \\bar{y}_i)^2}}\n\\end{equation}\n\nSome of the good results are presented in Figure \\ref{fig:imp_exp_good}.\n\n\n\\begin{figure}[hptb]\n\t\\begin{subfigure}{.47\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_traffic}\n\t\t\\caption{Radio traffic load imputation}\n\t\t\\label{fig:imp_radio_traffic}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.47\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_temp}\n\t\t\\caption{Cabinet temperature}\n\t\t\\label{fig:imp_temp}\n\t\\end{subfigure}\n\t\\caption{Imputation experiments good results}\n\t\\label{fig:imp_exp_good}\n\\end{figure}\n\nAlthough the plots above show promising results, there are other cases where the experiment did not perform as expected. \n\n\\pagebreak\n\n\\subsubsection*{Unintuitive $R^2$ values}\n\nThere are cases in which the Auto-ARIMA estimation shows poor and even negative $R^2$ values, as shown in Figure \\ref{fig:imp_exp_issue}. The latter are unintuitive results, as $R^2$ values are expected to be limited to the $[0,1]$ interval. \n\nNonetheless, this holds only for linear models, where the worst fitted model is assumed to be the observations mean[add citation]. Thus, having negative $R^2$ values implies that the observations mean explains more variance than the fitted model.\n\n\\begin{figure}[hptb]\n\t\\begin{subfigure}{.47\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_sys_voltage}\n\t\t\\caption{\\ac{pdu} system voltage imputation}\n\t\t\\label{fig:imp_pdu_sys_voltage}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.47\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_power_load}\n\t\t\\caption{Average \\ac{psu} utilization imputation}\n\t\t\\label{fig:imp_psu_load}\n\t\\end{subfigure}\n\t\\caption{Imputation experiments with unintuitive results}\n\t\\label{fig:imp_exp_issue}\n\\end{figure}\n\n\\subsubsection*{Optim convergence failures}\n\nThe model estimation threw runtime exceptions for some features due to the R code calls to the \\texttt{optim} library not converging. These exceptions were found to be caused by an actual bug in the library that occurs when the function being optimised tends to a constant value. \n\nA dedicated routine was written to catch this exception whenever it was thrown and run a simple interpolation instead of estimating the model.
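\n\nA minimal sketch of this fallback, assuming \\texttt{rpy2} and the \\texttt{imputeTS} functions \\texttt{na\\_kalman} and \\texttt{na\\_interpolation} (details such as NA conversion are omitted; names are illustrative), could look like this:\n\n\\begin{verbatim}\nimport numpy as np\nfrom rpy2.robjects import FloatVector\nfrom rpy2.robjects.packages import importr\nfrom rpy2.rinterface_lib.embedded import RRuntimeError\n\nimputeTS = importr('imputeTS')\n\ndef impute(series):\n    # Kalman smoothing on a fitted structural model; fall back\n    # to simple interpolation if optim fails to converge\n    x = FloatVector(series)\n    try:\n        return np.array(imputeTS.na_kalman(x, model='StructTS'))\n    except RRuntimeError:\n        return np.array(imputeTS.na_interpolation(x))\n\\end{verbatim}\n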
\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.55\\linewidth]{imp_conv_issue}\n\t\\caption{Problematic signals for imputeTS}\n\t\\label{fig:imp_conv_issue}\n\\end{figure}\n\n\\section{Database construction algorithm}\n\nA pipeline has been implemented to build a unified and non-corrupted database from the sparse raw files, so that the forecasting section can learn from reliable data.\n\nFigure \\ref{fig:db_construction_algorithm} shows how the implemented blocks interact with each other to accomplish this task.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{db_construction_algorithm}\n\t\\caption{Database construction algorithm}\n\t\\label{fig:db_construction_algorithm}\n\\end{figure}\n\n", "meta": {"hexsha": "0865046314dd55111610856054182c63297d85c9", "size": 23126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/data_imputation.tex", "max_stars_repo_name": "agustinvalencia/MasterThesis", "max_stars_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/data_imputation.tex", "max_issues_repo_name": "agustinvalencia/MasterThesis", "max_issues_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/data_imputation.tex", "max_forks_repo_name": "agustinvalencia/MasterThesis", "max_forks_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.8724137931, "max_line_length": 489, "alphanum_fraction": 0.7023263859, "num_tokens": 7886, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5885720289948985}}
{"text": "The basic rules of logic are very simple. Logica is entirely defined as a way to manipulate symbols. These symbols are usually common signs such as numbers, letter, Greek letters, etc., but it is not forbidden to use any symbol we find useful or interesting. In this section, precisely with the goal of illustrating the \\textit{independence of logic from the symbols chosen}, we shall manipulate smileys and other icons. \\textbf{The first step is thus choosing the symbols.} Logic is really the description of most (complex) dynamic processes that search for an answer in a complex space. This search may be a purely mathematical search, or it could be a program running. The dynamic process is defined in terms of a series of rules which are activated whenever we find a matching input. Thus, \\textbf{the second step is choosing the rules that define our dynamic processes}.\n\nAfter defining symbols and rules, things become interesting and we can try to use our system to find answers and process information. We usually start with a given proposition, which is the input of the whole process. A \\textit{proposition} is just a series of symbols, such as for example: \\texttt{3 + 2 = 5 of \\dSmiley\\dSmiley\\Candle\\Candle}. We apply the rules to the proposition until we reach the desired answer, and we have also answered all of the intermediate questions that came to existance during the dynamic process.\n\n\\paragraph{A concrete example}\nLet us consider a full example. Consider the symbols of our language to be:\n\\begin{itemize}\n\\item A smiley \\dSmiley\n\\item A candle \\Candle\n\\item A tree \\Springtree\n\\item A coffee cup \\Coffeecup\n\\end{itemize}\n\nWe consider a proposition to be true if and only if we can process it until we reach a \\Coffeecup. \\footnote{Given the extreme importance of coffee in the diet of mathematicians and computer scientists, equating coffee with truth does not really seem that illogical a step} Our rules are:\n\n\\begin{itemize}\n\\item \\textbf{(G0)} A \\Coffeecup means we are done\n\\item \\textbf{(R1)} Two \\dSmiley followed by a \\Springtree, and then further followed by \\texttt{r}, means that we will have to process \\texttt{r} to find the answer\n\\item \\textbf{(R2)} Two \\Candle followed by a \\Springtree, and then further followed by \\texttt{r}, means that we will have to process \\texttt{r} to find the answer\n\\end{itemize}\n\nConsider now the input proposition of \\\\\n\\dSmiley\\dSmiley\\Springtree\\Candle\\Candle\\Springtree\\dSmiley\\dSmiley\\Springtree\\Coffeecup \\\\\n\nWe begin by using \\textbf{R2}, therefore obtaining: \\\\\n\\Candle\\Candle\\Springtree\\dSmiley\\dSmiley\\Springtree\\Coffeecup \\\\\n\nWe then use rule \\textbf{R1}, therefore obtaining: \\\\\n\\dSmiley\\dSmiley\\Springtree\\Coffeecup \\\\\n\nWe then use rule \\textbf{R2}, therefore obtaining: \\\\\n\\Coffeecup \\\\\n\nNow according to \\textbf{G0} we are done. The proof is successfull, therefore we can conclude that \\dSmiley\\dSmiley\\Springtree\\Candle\\Candle\\Springtree\\dSmiley\\dSmiley\\Springtree\\Coffeecup was \\textbf{true within our system of rules}. 
\n\nConsider now the new input proposition of \\\\\n\\dSmiley\\dSmiley\\Springtree\\dSmiley\\dSmiley\\Springtree\\Candle\\Springtree\\Coffeecup \\\\\n\nWe begin by using \\textbf{R2}, therefore obtaining: \\\\\n\\dSmiley\\dSmiley\\Springtree\\Candle\\Springtree\\Coffeecup \\\\\n\nWe then use rule \\textbf{R1}, therefore obtaining: \\\\\n\\Candle\\Springtree\\Coffeecup \\\\\n\nUnfortunately now we cannot apply rule \\textbf{R1} again, because we have no \\dSmiley at the head of our proposition; we cannot apply rule \\textbf{R2} because we have no \\Candle at the head of our proposition; and we can certainly not apply rule \\textbf{G0} because there is no lonely \\Coffeecup. If we cannot apply any of our rules, the process is \\textit{stuck}. This means that we cannot prove \\dSmiley\\dSmiley\\Springtree\\dSmiley\\dSmiley\\Springtree\\Candle\\Springtree\\Coffeecup with our rules, thus \\dSmiley\\dSmiley\\Springtree\\dSmiley\\dSmiley\\Springtree\\Candle\\Springtree\\Coffeecup was \\textbf{not true within our system of rules}.\n", "meta": {"hexsha": "cc6edd78ab171efa381e75274ae13ab3e18cea77", "size": 3977, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Course materials/Dictaat/tex/informal_introduction.tex", "max_stars_repo_name": "vs-team/metacompiler", "max_stars_repo_head_hexsha": "51eb3588394c15b31ebacba97a22c086e0c8dc6c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-12-13T09:22:28.000Z", "max_stars_repo_stars_event_max_datetime": "2018-03-03T21:48:11.000Z", "max_issues_repo_path": "Course materials/Dictaat/tex/informal_introduction.tex", "max_issues_repo_name": "cult-of-giuseppe/metacompiler", "max_issues_repo_head_hexsha": "51eb3588394c15b31ebacba97a22c086e0c8dc6c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2015-08-14T06:48:43.000Z", "max_issues_repo_issues_event_max_datetime": "2015-08-16T09:37:03.000Z", "max_forks_repo_path": "Course materials/Dictaat/tex/informal_introduction.tex", "max_forks_repo_name": "cult-of-giuseppe/metacompiler", "max_forks_repo_head_hexsha": "51eb3588394c15b31ebacba97a22c086e0c8dc6c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-10-11T17:13:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-10T19:12:15.000Z", "avg_line_length": 86.4565217391, "max_line_length": 875, "alphanum_fraction": 0.7877797335, "num_tokens": 1038, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199552262967, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5884532004373108}}
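To see the whole dynamic process as a program, here is a minimal sketch (not from the course text; the single-letter tokens S, C, T, K are stand-ins for \dSmiley, \Candle, \Springtree and \Coffeecup):
\begin{verbatim}
def prove(prop: str) -> bool:
    while True:
        if prop == 'K':              # G0: a lone coffee cup, we are done
            return True
        if prop.startswith('SST'):   # R1: two smileys and a tree,
            prop = prop[3:]          #     keep processing the rest
        elif prop.startswith('CCT'): # R2: two candles and a tree
            prop = prop[3:]
        else:                        # no rule applies: the process is stuck
            return False

print(prove('SSTCCTSSTK'))  # True  -- the first worked example
print(prove('SSTSSTCTK'))   # False -- the second example gets stuck
\end{verbatim}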
{"text": "\\section{Logistic Regression}\n\\smallskip \\hrule height 2pt \\smallskip\n\n\\begin{itemize}\n\t\\item Another probabilistic approach to classification (categorical predictions).  \\hfill \\\\\n\t\\item directly optimizing the conditional log likelihood % TA 3/15/2016\n\t\\item Discriminative: learn $P(Y|X)$ directly then discriminate between classes.    % Week 5 audio. \n\t\tWe solved similar problems using Bayes before (a generative approach).  \n\tBut if we don't want to bother with modeling all those joint or conditional probabilities, we can do this discriminative approach instead. \\hfill \\\\  %Wed 2/3 audio transcription.\n\t\\item Parameters from gradient ascent % End of slide set 7\n\t\\item Linear, uses a probabilistic model, and is discriminative.% End of slide set 7\n\t\\item produces weights, $w^*$ of dim n + 1 (n = number of features in each training point).\n\t\tThese parameters are tied together, unlike in Naive Bayes where there is independence for parameters for different classes. \n\t\\item Use the dot product of those weights against the feature vectors as an input to the sigmoid function.\n\\end{itemize}\n\nLogistic Regression is the discriminative counterpart to a Naive Bayes generative classifier over Boolean features.  % https://www.cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf\nThe difference between logistic and Naive Bayes is just one word: \"conditional\".   % wk 5 audio\nWe are maximizing conditional log likelihood, that is conditional on X.  \n\tWe are not going to spend our time encoding the distribution over X.   % wk 5 audio\n\\hfill \\\\   \n\nParameters are tied together, unlike in Naive Bayes.  % audio week 6\n\\hfill \\\\\n\nCan use discrete or continuous outputs. \\hfill \\\\ % https://www.youtube.com/watch?v=zAULhNrnuL4\nIt's a linear classifier; the decision rule is a hyperplane. \\hfill \\\\  % Slide 6 summary\nYou optimize it (find weights) by gradient ascent, which works because it is concave.    \\hfill \\\\  % Slide 6 summary\nYou can use \"maximum conditional a posteriori\" for regularization.    \\hfill \\\\  % Slide 6 summary\n\n\\hfill \\\\\nNote: $ \\displaystyle \\frac{e^x}{1 + e^x} = \\frac{1}{1 + e^-x}$ \\hfill \\\\  % student/TA question/ans Feb 8th about gradient ascent/descent.  \n\n\\hfill \\\\ \n\n\\underline{Summary from non-class sources:} \\hfill \\\\\n% https://www.youtube.com/watch?v=-Z2a_mzl9LM\nWe are still using linear regression in the inputs, but putting the result into a sigmoid function. \\hfill \\\\\nRecall $w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 = w^Tx$ and $x = (1, x_1, x_2, x_3)$.  \\hfill \\\\\n$P(death|x) = \\sigma(w^Tx)$  %https://www.youtube.com/watch?v=-Z2a_mzl9LM\nwhere $\\sigma$, the sigmoid function,  converts your regression output into a sigmoid curve. \\hfill \\\\\n$\\displaystyle \\sigma(a) = \\frac{1}{1+ e^{-a}} = \\frac{1}{1+ e^{-(w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3)}} = \\frac{1}{1+ e^{-(w^Tx)}}$   \\hfill \\\\ % https://www.youtube.com/watch?v=-Z2a_mzl9LM\n\\hfill \\\\\n\n% https://www.youtube.com/watch?v=_Po-xZJflPM  :\nWe can convert this to a linear relationship by \"taking the logit\". \\hfill \\\\\nThe logit (log odds) is the inverse of the logistic.  \\hfill \\\\ %  https://en.wikipedia.org/wiki/Logistic_regression\n$F(x) = \\sigma(a)$ above.  It is the probability that the dependent variable equals a case, given some linear combination of the predictors.  
The logit can range from $- \\infty$ to $\\infty$, while $F(x)$ itself lies in $(0,1)$.   % https://en.wikipedia.org/wiki/Logistic_regression\nThe logit is $\\ln \\frac{F(x)}{1-F(x)}$, or equivalently, after exponentiating both sides: \\hfill \\\\\n$\\frac{F(x)}{1-F(x)} = e^{w^Tx}$  \\hfill \\\\\nThe logit (i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression.\n\nNote: the odds ratio is $\\frac{p}{1-p}$.  \\hfill \\\\\n$\\logit(\\frac{1}{1+ e^{-(w^Tx)}}) = \\log(\\frac{\\frac{1}{1+ e^{-(w^Tx)}}}{1-\\frac{1}{1+ e^{-(w^Tx)}}}) $  \\hfill \\\\ % https://www.youtube.com/watch?v=_Po-xZJflPM\nWe can now proceed with linear regression.  \\hfill \\\\\nNote that our predictions are now on the log scale; this impacts interpretation of the coefficients.  \\hfill \\\\  %https://www.youtube.com/watch?v=_Po-xZJflPM\n\n\\hfill \\\\ \\hfill \\\\\n\n\\underline{Lecture's presentation:} \\hfill \\\\\n\nNotation:  \\hfill \\\\\n\\begin{itemize}\n\t\\item $x_i^j$: the $i^{th}$ attribute of data point $j$\n\t\\item $y^j$: the class label of data point $j$\n\t\\item $x^j$: the $j^{th}$ training example\n\\end{itemize}\n\nOnce again we don't want to try to estimate $P(X,Y)$; that is challenging due to the size of the distribution. \\hfill \\\\\nWe could make the Naive Bayes assumption and only need to calculate $P(X_i | Y)$,\nbut if we want $P(Y|X)$, why not learn that directly?  You can use logistic regression.\n\\includegraphics[width=1.5in]{figures/expo.pdf}     \\hfill \\\\\n\\hfill \\\\\n\nReuse ideas from regression, but let the y-intercept define the probability.  \\hfill \\\\\n$P(Y=1|\\bm{X, w}) \\propto \\exp(w_0 + \\sum_i w_i X_i)$  \\hfill \\\\\nWith normalization constants:  \\hfill \\\\\n$\\displaystyle  P(Y=0|\\bm{X, w}) = \\frac{1}{1+ \\exp(w_0 + \\sum_i w_i X_i)} $ \\hfill \\\\\n$\\displaystyle  P(Y=1|\\bm{X, w}) = \\frac{\\exp(w_0 + \\sum_i w_i X_i)}{1+ \\exp(w_0 + \\sum_i w_i X_i)} $ \\hfill \\\\\nLogistic function: \\includegraphics[width=1in]{figures/logistic.pdf}     \\hfill \\\\\n\\hfill \\\\\n\nMaking a decision boundary out of logistic equations:  \\hfill \\\\\nOutput the $Y$ with the highest $P(Y|X)$.   \\hfill \\\\\nIf binary Y, output Y=1 if $\\displaystyle 1 < \\frac{P(Y=1|X)}{P(Y=0|X)}$  \\hfill \\\\\nThat simplifies to just $1 < \\exp(w_0 + \\sum_i w_i X_i)$ or \\hfill \\\\\n$0 < w_0 + \\sum_i w_i X_i$   \\hfill \\\\\n\\includegraphics[width=.8in]{figures/logistic_boundary_linear.pdf}     \\hfill \\\\\n\\textbf{The decision boundary is a line (or hyperplane), hence we have a linear classifier!} \\hfill \\\\  \\hfill \\\\\n\nFor $ \\displaystyle P(Y=0 | \\bm{X,w}) = \\frac{1}{1 + \\exp(w_0 + w_1 x_1)}$:  \\hfill \\\\\n\n\\includegraphics[width=3.4in]{figures/decision_boundary_examples.pdf}   \\hfill \\\\\n\n\\includegraphics[width=2in]{figures/decision_boundary_example.pdf}   \\hfill \\\\\n\n(See notes for more $w_0, w_1$ values plotted.)  \\hfill \\\\\nIn these plots, Y is the probability that the class is 1.    \\hfill \\\\\nThe red curve is the sigmoid.  The blue line is the decision boundary.  \\hfill \\\\\nThe decision boundary is from the equation $0 = w_1X + w_0$.  \\hfill \\\\\n% Erick advice: ignore the blue lines entirely.  don't need them to find probability distribution.\n\n\\hfill \\\\\nLarger weights result in a sharper curve.  The bias $w_0$ shifts where the middle of the curve is.   \\hfill \\\\\nThe red sigmoid defines a probability distribution over $Y$ in \\{0,1\\} for every possible input X.
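\n\nTo make these plots concrete, here is a tiny numerical check (a sketch; the weights $w_0=-2, w_1=4$ are made up, not from the lecture). The boundary sits where $w_0 + w_1 x = 0$, i.e.\ at $x=0.5$:\n\begin{verbatim}
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

w0, w1 = -2.0, 4.0           # made-up weights, 1-D input
for x in (0.0, 0.5, 1.0):    # boundary sits at x = 0.5
    p1 = sigmoid(w0 + w1 * x)
    print(x, round(p1, 3))   # 0.119, 0.5, 0.881
\end{verbatim}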
\\hfill \\\\\n\\hfill \\\\\nThe decision boundary leads to $P(Y=0|X, w) = 0.5$ when you are at the $y=0$ point on the line.   \\hfill \\\\\n(E/J words:  when the blue line crosses the x axis, that's when the sigmoid curve is above 1/2, which corresponds to classifying it as a no/0.)  \\hfill \\\\\n\\hfill \\\\\n\n% Forum 3/15/2016:\nIf $w_0 = 0$, $P(Y=0|X, w) = 0.5$ when at the $y=0$ point on the line: the exponent just becomes 0 and the whole equation evaluates to 0.5.\nIf $w_0 \\neq 0$, this statement is no longer true.\nA similar argument holds for the 2-d case.\nThe slope of the line defines how quickly the probabilities go to 0 or 1 around the decision boundary.\n\\hfill \\\\\n\\hfill \\\\\n\n2D inputs: \\hfill \\\\\nFor $ \\displaystyle P(Y=0 | \\bm{X,w}) = \\frac{1}{1 + \\exp(w_0 + w_1 x_1 + w_2 x_2)}$:  \\hfill \\\\\n\\includegraphics[width=2in]{figures/decision_boundary_example-2D.pdf}   \\hfill \\\\\n\n$P(Y=0 | X, w)$ decreases as $w_0 + \\sum_i w_i x_i$ increases.\nAgain, if you set the stuff inside the exponential to zero, you get the decision boundary hyperplane.\n\n\\subsubsection{Finding the w coefficients: Loss Function}\n\\underline{Simple Intro}: \\hfill \\\\  % TA e-mail 3/15/2016\nYou can use the chain rule in the opposite direction to decompose the loss function into the log likelihood of $X$ as well as the conditional log likelihood of the labels $y$ given $X$.\n\n\\underline{Class Version}: \\hfill \\\\\nGenerative (Naive Bayes) loss function:\nNow $j$ is a data point with observations indexed over $i$.\n\n\n\\begin{align*}\n\t\\ln P(D | \\bm{w}) = \\sum_{j=1}^N  &  \\ln P(x^j, y^j | \\bm{w}) \\mbox{   } \\mbox{    (the full log-likelihood)}\\\\\n\t\t\t\t\t& \\mbox{use the product rule to rewrite conditionally}  \\\\\n\t\t\t\t\t= \\sum_{j=1}^N  &  \\ln [P(y^j | x^j, \\bm{w}) P(x^j | \\bm{w})] \\\\\n\t\t\t\t=  \\sum_{j=1}^N  & \\ln P(y^j | x^j , \\bm{w}) + \\sum_{j=1}^N \\ln P(x^j | \\bm{w})\n\\end{align*}\n\nWe decide to ignore the 2nd term because it won't help you get better predictions for that data anyway.\nOr, \"From a machine learning perspective, \"God gave us the data\" and we don't care about the 2nd sum.\"  % Erick 2/6/2016\n\nProfessor Farhadi is calling this first term a discriminative (logistic regression) loss function:  \\hfill \\\\\nIt is helping you discriminate between different classes.  It's not going to help you model the data.\nThis is unlike regression; we don't care about the value it puts out.  We only care about what the resulting class is.  \\hfill \\\\\n\n% Erick.\nThis is the difference between statistics and machine learning.  We only care about getting the best $\\bm{w}$ for discriminating between classes.\n\n\\textbf{Conditional Data Likelihood:}\n\"Conditional\" because you are conditioning on what $\\bm{X}$ is.\n\\begin{align*}\n\t\\ln P(D_Y | D_{\\bm{X}}, \\bm{w}) = \\sum_{j=1}^N \\ln P(y^j | \\bm{x}^j, \\bm{w})\n\\end{align*}\n$D_Y$ = the observed labels $\\{y^j\\}_{j=1}^N$  \\hfill \\\\\n$D_{\\bm{X}}$ = the observed feature vectors $\\{\\bm{x}^j\\}_{j=1}^N$   \\hfill \\\\\nDoesn't waste effort learning $P(X)$.  Focuses on $P(Y| \\bf{X})$, which is all that matters for classification. \\hfill \\\\\nDiscriminative models can't compute $P(\\bm{x}^j | \\bm{w})$; they simply do not model the distribution of $\\bm{X}$.\n\\hfill \\\\\n\n\\subsubsection{Conditional Log Likelihood}\nWe just need to figure out how to go up and find the maximum likelihood.\n\n(We treat the binary case only.)
\\hfill \\\\\n$P(Y=0 | \\bm{X}, \\bm{w}) = \\frac{1}{1 + \\exp(w_0 + \\sum_i w_i X_i)}$  \\hfill \\\\\n$P(Y=1 | \\bm{X}, \\bm{w}) = \\frac{\\exp(w_0 + \\sum_i w_i X_i)}{1 + \\exp(w_0 + \\sum_i w_i X_i)}$  \\hfill \\\\\n\n($ l( \\bm{w} )$ is the conditional data log-likelihood.)\n\\begin{align*}\n\tl( \\bm{w})  \\equiv  &  \\sum_j \\ln P(y^j | x^j, \\bm{w})  \\\\\n\t& \\mbox{Since $y^j$ is in \\{0, 1\\}, sum over the two cases: }   \\\\\n\t& \\mbox{(the $y^j$ and $(1-y^j)$ act like delta functions)}   \\\\\n\tl(\\bm{w})  =  & \\sum_j y^j \\ln P(y^j = 1 | x^j, \\bm{w}) +(1 - y^j) \\ln P(y^j = 0 | x^j, \\bm{w})  \\\\\n\t& \\mbox{plug in the definition of the likelihoods and do algebra to get:}  \\\\\n\t=& \\sum_j  y^j (w_0 + \\sum_i^n w_i x_i^j)  - \\ln(1 + \\exp(w_0 + \\sum_i^n w_i x_i^j))\n\\end{align*}\n\nWhile we can't find a closed-form solution to optimize $l(\\bm{w})$, $l(\\bm{w})$ is concave, so we can do gradient \\underline{as}cent.   % Wk 5 audio\n\n\\subsubsection{Gradient ascent to optimize w}\nTo maximize, we can't just take the derivative w.r.t.\\ $\\bm{w}$, set it to zero and solve, b/c there is no closed form solution.  % Wk 5 audio\nInstead we form the gradient vector and move along the gradient. % Wk 5 audio\nThe conditional log likelihood for Logistic Regression is concave (see above).  \\hfill \\\\\nLocal vs.\\ global optima don't matter here b/c a concave function has only the global one.  % Wk 5 audio\nIterate to find $\\bm{w}$: start somewhere, find the gradient direction, make a step in that direction, repeat, repeat.\n\\hfill \\\\\n\n\\textbf{Gradient}:  gives ascent or descent direction. \\hfill \\\\\n\\begin{align*}\n\t\\nabla_w l(\\bm{w}) = [\\frac{\\partial l(\\bm{w})}{\\partial w_0}, \\dots, \\frac{\\partial l(\\bm{w})}{\\partial w_n}]'  \\hfill \\\\\n\\end{align*}\n\t(The $'$ at the end is for transpose b/c it is usually a column vector.)  \\hfill \\\\\n\\textbf{Update Rule:} \\hfill \\\\\n\\begin{align*}\n\t\\Delta \\bm{w} &= \\eta \\nabla_{\\bm{w}} l(\\bm{w})\n\\end{align*}\n$\\eta$ is the learning rate.  $\\eta > 0$.\n\tIt can't be too big or you can get lost.\n\tIt can't be too small or you take forever to converge.\n\nYour next weights $(t+1)$ become:\n$w_i^{(t+1)} \\leftarrow w_i^{(t)} + \\eta \\frac{\\partial l(\\bm{w})}{\\partial w_i}$   \\hfill \\\\\n\\hfill \\\\\nGradient ascent is the simplest of optimization approaches.\nNote that conjugate gradient ascent is much better (see reading).  \\hfill \\\\\n\nSee slides for the full derivation of: \\hfill \\\\\n$\\displaystyle \\frac{\\partial l(w)}{\\partial w_i} = \\sum_j x_i^j (y^j - P(Y^j = 1 | x^j, w))$ \\hfill \\\\\n(Sketch: writing $z^j = w_0 + \\sum_i w_i x_i^j$, the last form of $l(\\bm{w})$ gives $\\frac{\\partial l}{\\partial w_i} = \\sum_j x_i^j y^j - x_i^j \\frac{\\exp(z^j)}{1 + \\exp(z^j)}$, which is exactly the right-hand side.)\n\n\\subsubsection{Example of gradient ascent to maximize conditional log likelihood}\n(M(C)LE; C for conditional) \\hfill\nLearning an approximation of the step function.  % Wk 5 audio.\n\nUse the equations: \\hfill \\\\\n\\begin{itemize}\n\t\\item \\textbf{eq 1:} $w_i^{(t+1)} \\leftarrow w_i^{(t)} + \\eta \\frac{\\partial l(\\bm{w})}{\\partial w_i}$\n\t\\item \\textbf{eq 2:} $\\displaystyle \\frac{\\partial l(w)}{\\partial w_i} = \\sum_j x_i^j (y^j - P(Y^j = 1 | x^j, w))$\n\t\t\t The objective is the conditional log likelihood.  % Wk 5 audio\n\t\\item \\textbf{eq 3:} $P(Y=1 | X, W) = \\frac{\\exp(w_0 + \\sum_i w_i X_i)}{1 + \\exp(w_0 + \\sum_i w_i X_i)}$\n\t\\item \\textit{note:} the superscript is for the $j^{th}$ data point and\n\t\t\tsubscripts are for the $i^{th}$ value of the input ($x$) array.\n\t\t\tDon't forget that there is an $i=0$ $(x_0 = 1)$ bias term.
\n\\end{itemize}\n\\includegraphics[width=1.5in]{figures/gradient_ascent_logistic_regression.pdf}   \\hfill \\\\\n\\begin{itemize}\n\t\\item begin with t=0 and $w = [w_0, w_1, w_2] = [0, 0, 0]$\n\t\\item calculate \\textbf{eq 3} for each Y.\n\t\t\\begin{itemize}\n\t\t\t\\item For j = 0:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tP(Y^0 = 1 | x^0, w) &= \\frac{\\exp(0 + 0*3 + 0*(-3))}{(1 +  \\exp(0 + 0*3 + 0*(-3)))} \\\\\n\t\t\t\t\t\t&= 1/(1+1) = 1/2\n\t\t\t\t\\end{align*}\n\t\t\t\\item For j = 1:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tP(Y^1 = 1 | x^1, w) &= \\frac{\\exp(0 + 0*(-2) + 0*(2))}{(1 +  \\exp(0 +  0*(-2) + 0*(2)))} \\\\\n\t\t\t\t\t\t&= 1/(1+1) = 1/2\n\t\t\t\t\\end{align*}\n\t\t\\end{itemize}\n\t\\item calculate the terms that go into \\textbf{eq 2} by looping over the 3 values of $i$ (bias and two other coefficients) and 2 values of $j$ (2 data points).\n\t\t\\begin{itemize}\n\t\t\t\\item for $i=0, j=0$: ($j=$0th (1st) training example, bias term) \\hfill \\\\\n\t\t\t\t $x_0^0(y^0 - P(Y^0=1|x^0, w)) = 1(1-0.5) = 0.5$\n\t\t\t\\item for $i=0, j=1$: ($j=$1st (2nd) training example, bias term) \\hfill \\\\\n\t\t\t\t $x_0^1(y^1 - P(Y^1=1|x^1, w)) = 1(0-0.5) = -0.5$\n\t\t\t\n\t\t\t\\item for $i=1, j=0$: ($j=$0th (1st) training example, $x_1$ term) \\hfill \\\\\n\t\t\t\t $x_1^0(y^0 - P(Y^0=1|x^0, w)) = 3(1-0.5) = 1.5$\n\t\t\t\\item for $i=1, j=1$: ($j=$1st (2nd) training example, $x_1$ term) \\hfill \\\\\n\t\t\t\t $x_1^1(y^1 - P(Y^1=1|x^1, w)) = -2(0-0.5) = 1.0$\n\t\t\t\t \n\t\t\t\\item for $i=2, j=0$: ($j=$0th (1st) training example, $x_2$ term) \\hfill \\\\\n\t\t\t\t $x_2^0(y^0 - P(Y^0=1|x^0, w)) = -3(1-0.5) = -1.5$\n\t\t\t\\item for $i=2, j=1$: ($j=$1st (2nd) training example, $x_2$ term) \\hfill \\\\\n\t\t\t\t $x_2^1(y^1 - P(Y^1=1|x^1, w)) = 2(0-0.5) = -1.0$\n\t\t\\end{itemize}\n\t\\item Now we can compute the gradient (\\textbf{eq 2}):\n\t\t\\begin{itemize}\n\t\t\t\\item $\\nabla_w l(\\bm{w}) = [\\frac{\\partial l(w)}{\\partial w_0}, \\frac{\\partial l(w)}{\\partial w_1}, \\frac{\\partial l(w)}{\\partial w_2}]$\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t&= [0.5 - 0.5, 1.5 + 1.0, -1.5 - 1.0] = [0, 2.5, -2.5]\n\t\t\t\t\\end{align*}\n\t\t\\end{itemize}\n\t\n\t\\item Use $\\eta = 0.1$ to scale the gradient (\\textbf{eq 1}): \\hfill \\\\\n\t\t\t$w = [0,0,0] + 0.1*[0, 2.5, -2.5] = [0, 0.25, -0.25]$\n\t\\item Start over with \\textbf{eq 3} and the updated $w$.\n\t\t\\includegraphics[width=2in]{figures/logistic_regression_ex_loop_2.pdf}\n\\end{itemize}\n\nHow to do this more algorithmically:  \\hfill \\\\\nYou don't have to get all the info to update all the weights at once.\nDo them one at a time.  \\hfill \\\\\n\\includegraphics[width=2.5in]{figures/gradient_descent_algorthim.pdf}\n\n\\underline{How do you decide when to stop?} \\hfill \\\\\n(Not in notes!) \\hfill \\\\\n\nYou need a criterion that tells you when to stop updating your gradient.\n\tUse a threshold for the magnitude of the gradient.\n\tWhen the amount of change is small, the weights are not changing a lot.  If the objective is concave, you are close to the global optimum.\n\nWhy don't we wait until it gets to zero?\n\tThe chances of us hitting exactly zero are low, and we might spend a lot of time wasting CPU cycles on negligible changes.\n\t\nWhen we are really far from the global optimum we can afford to take bigger steps.\n\tThink about the objective function.  We can afford to make big steps if we are far.
\nIf we make big steps when close, we can get farther away.\n\nYou could use this policy: let $\\eta$ be big at the beginning of a loop counter.\n\tIncrease the value until you see a drop.\n\t\"Line search\" asks \"what is the maximum value of this parameter that gives you no drop?\"\n\n\n\\subsubsection{Overfitting in Logistic Regression}\n\n\\underline{Intro \\#1:}  \\hfill \\\\\nBe wary of large parameters.  % Audio wk 5\n\tLarge parameters (large $a$ in $\\frac{1}{1 + \\exp(-a)}$) lead to a step function.\n\tThat's ok if you truly want a step, but you should be concerned about overfitting the minute you see a large parameter.\n\tWe could regularize again: apply a penalty for large parameters.\n\tCould do something like maximizing $w^* = \\argmax_w [l(w) - \\lambda \\|w\\|_2^2]$ instead of $w^* =  \\argmax_w l(w)$.\n\tBut that's not ideal.  Might have a hard time finding a suitable value for $\\lambda$.  \\hfill \\\\\n\t\nInstead, use a probabilistic form of regularization.\n\tWe've done this before using a prior.\n\tInstead of MLE, we did MAP. The prior regularizes.\n\t$P(w | Y, X) \\propto P(Y | X,w)P(w)$.\n\tWhat should the form of our prior ($P(w)$) be?\n\tWe want the posterior to be differentiable.\n\tThe likelihood was a sigmoid shape, so it has the form $e^{\\text{something}}$.\n\tWe already know how to get the conditional likelihood in the form of $e^{blah}$.\n\tA well behaved summation of terms.   \\hfill \\\\\n\nWe pick a Gaussian distribution for the prior for mathematical convenience.\n\tWe wanted to form a function we can optimize.\n\tIf it doesn't come from a Gaussian and does come from a multinomial, what happens?\n\tWe wouldn't be too wrong.  And it is just a regularization.\n\tWe are just trying to constrain $\\bm{w}$ with something that is mathematically well behaved.\n\tUsually the prior is off. But it is only there to keep w from getting too big.  It gives a low score when w is too big.   \\hfill \\\\\n\nQ from student: why aren't we considering $P(X,W)$ as the prior? Why $P(Y|X, w)$?\n\tP(X,W) is not discriminative.\n\tWe aren't reformulating our problem.  Just looking to estimate the parameters.\n\tWe want to regularize the parameters, not the data.    \\hfill \\\\\n\n\\underline{Intro \\#2:}   \\hfill \\\\\n\nLike in linear regression, the maximum likelihood solution prefers higher weights.\nHigher weights can give higher likelihood of properly classifying examples close to the decision boundary.\nThis over-fitting causes features to have larger influences on the decision:  \\textbf{beware of overfitting!}\nAgain, you can use regularization to penalize high weights.\nThis will be covered more later.\n\\hfill \\\\\n\n\\subsubsection{MAP for Logistic Regression}\nAbove was M(Conditional)LE.     \\hfill \\\\\nRecall that the MAP/MLE difference is \\_\\_\\_\\_\\_\\_\\_\\_\\_.  \\hfill \\\\\nPriors on $\\bm{w}$ are commonly added for regularization.    \\hfill \\\\\nHelps avoid very large weights and overfitting.    \\hfill \\\\\n$P(\\bm{w} | Y, \\bm{X}) \\propto P(Y | \\bm{X}, \\bm{w} ) P(\\bm{w} )$  \\hfill \\\\\nUse a normal distribution with zero mean and \"identity covariance\".  (\\_\\_\\_\\_):  \\hfill \\\\\n$ \\displaystyle  P(\\bm{w} ) = \\prod_i \\frac{1}{\\kappa \\sqrt{2 \\pi}} e^{\\frac{-w_i^2}{2 \\kappa^2}}$ \\hfill \\\\\n$\\kappa^2$ is the variance.
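\nFilling in the step the notes skip: taking the log of this prior and dropping the constant term identifies the regularization weight as $\lambda = 1/\kappa^2$:\n\begin{align*}\n\t\ln P(\bm{w}) = \sum_i \left[ -\frac{w_i^2}{2 \kappa^2} - \ln (\kappa \sqrt{2 \pi}) \right] \propto - \frac{1}{2 \kappa^2} \sum_i w_i^2 = - \frac{\lambda}{2} \sum_i w_i^2.\n\end{align*}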
\\hfill \\\\\n\\hfill \\\\\n\nNow we form our MAP estimate:\n\\begin{align*}\n\tw^* &= \\argmax_w \\ln[P(\\bm{w}) \\prod_{j=1}^N P(y^j | x^j, \\bm{w})]  \\\\\n\t\t&= \\argmax_w \\ln[ \\prod_i \\frac{1}{\\kappa \\sqrt{2 \\pi}} e^{\\frac{-w_i^2}{2 \\kappa^2}}   \\prod_{j=1}^N P(y^j | x^j, \\bm{w})]\n\\end{align*}\n\nInstead of having only the product term inside the bracket, we now also have $P(\\bm{w})$.\nOur prior $P(\\bm{w})$ is a product of Gaussians.\nThe $\\prod_{j=1}^N P(y^j | x^j, \\bm{w})$ term is the (conditional) likelihood.\n\nAdd $\\log P(\\bm{w})$ to the objective:\n\\begin{align*}\n\t\\ln P(\\bm{w}) & \\propto - \\frac{\\lambda}{2} \\sum_i w_i^2 \\\\\n\t\\frac{\\partial \\ln P(\\bm{w})}{\\partial w_i} &= -\\lambda w_i\n\\end{align*}\n\nWe now have a new way of optimizing w.\nThis is a quadratic penalty that drives the weights toward zero. \\hfill \\\\\nIt adds a negative linear term ($-\\lambda w_i$) to the gradients.  \\hfill \\\\\nAlso applicable in linear regression!\n\n\\subsubsection{Review: MLE vs. MAP for Logistic Regression}\n\\underline{Maximum conditional likelihood estimate:}\n\\begin{align*}\n\tw^* &= \\argmax_w \\ln[\\prod_{j=1}^N P(y^j | x^j, \\bm{w})] \\\\\n\tw_i^{(t+1)} & \\leftarrow w_i^{(t)} + \\eta \\sum_j x_i^j [y^j - \\widehat{P}(Y^j = 1 | x^j, \\bm{w})]\\\\\n\\end{align*}\n\n\\underline{Maximum conditional a posteriori estimate:}\n\\begin{align*}\n\tw^* &= \\argmax_w \\ln[P(\\bm{w}) \\prod_{j=1}^N P(y^j | x^j, \\bm{w})] \\\\\n\tw_i^{(t+1)} & \\leftarrow w_i^{(t)} + \\eta \\{ -\\lambda w_i^{(t)} +  \\sum_j x_i^j [y^j - \\widehat{P}(Y^j = 1 | x^j, \\bm{w})]\\}\\\\\n\\end{align*}\nThe $\\lambda$ term complains when the value is big.\nThis $-\\lambda w_i$ is from the derivative of the Gaussian prior.\nThe whole point: if you add a prior and take a derivative, it is a form of regularization and should act as a penalty on big $w$.\n\n\n\\hfill \\\\  \\hfill \\\\\n\n\nVocab \\hfill \\\\\n\\begin{itemize}\n\t\\item \\textbf{discriminative}:  estimates conditional probabilities directly.  E.g. $p(Zebra | Data)$, $p(No Zebra | Data)$.\n\t\\item \\textbf{generative}:  estimates joint probabilities.  E.g. $p(Data, Zebra)$, $p(Data, No Zebra)$.\n\\end{itemize}\n\n\\subsubsection{Multiclass Logistic Regression (discrete labels)}\nDefine a weight vector $w_i$ for each $y_i$ where $i$ ranges from $1$ to $R-1$ for $R$ classes.\nYou don't need a weight vector for the $R^{th}$ class b/c its probability is 1 minus the sum of the rest.  \\hfill \\\\\n\\includegraphics[width=1in]{figures/multiclass_logistic.pdf} \\hfill \\\\\nEach category will have its own w values.  \\hfill \\\\ % Wk 5 audio.\nYou don't need to train $k$ weight vectors; $k-1$ suffice, b/c the probabilities have to sum to 1.    % Wk 5 audio.\n\n\\begin{align*}\n\tP(Y=1 | X ) & \\propto \\exp(w_{10} + \\sum_i w_{1i}X_i)  \\\\\n\t\t& \\mbox{note: $\\bm{w}$ is now a matrix, not an array.}  \\\\\n\tP(Y=2 | X ) & \\propto \\exp(w_{20} + \\sum_i w_{2i}X_i)  \\\\\n\t & \\dots \\\\\n\t & \\mbox{the last probability is defined relative to the rest.}  \\\\\n\tP(Y=r | X ) &= 1- \\sum_{j=1}^{r-1} P(Y=j | X)\n\\end{align*}\nAfter normalizing, your probabilities become:\n\\begin{align*}\n\tP(Y=y_k | X ) &=  \\frac{\\exp(w_{k0} + \\sum_{i=1}^n w_{ki} X_i)}{1 + \\sum_{j=1}^{R-1} \\exp(w_{j0} + \\sum_{i=1}^n w_{ji}X_i)} \\\\\n\t& \\mbox{the last class is 1 minus the sum of the other probabilities:} \\\\\n\tP(Y=y_R | X ) &=  \\frac{1}{1 + \\sum_{j=1}^{R-1} \\exp(w_{j0} + \\sum_{i=1}^n w_{ji}X_i)}\n\\end{align*}\nNote: features can be discrete or continuous.
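\n\nA small numeric sketch of these normalized probabilities (the weights are made up for illustration; the last class carries no weight vector, matching the lone $1$ in the denominator):\n\begin{verbatim}
import math

W = [(0.5, 1.0), (-0.2, 0.3)]      # classes 1..R-1 for R = 3 (made-up)

def class_probs(x):
    scores = [math.exp(w0 + w1 * x) for (w0, w1) in W]
    z = 1.0 + sum(scores)          # the 1 is the last class's exp(0)
    return [s / z for s in scores] + [1.0 / z]

print([round(p, 3) for p in class_probs(2.0)])   # sums to 1
\end{verbatim}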
\n\\hfill \\\\  \\hfill \\\\\n\n\\subsubsection{Gaussian Naive Bayes vs Logistic Regression.}\nGaussian Naive Bayes with class-independent variances is representationally equivalent to Logistic Regression.\n(\"If you do have an infinite number of training examples, the GNB and LR produce identical classifiers.\")  % Erick 2/7\nThe solutions differ because of the objective (loss) function.  \\hfill \\\\\n\\hfill \\\\\n\nIf you do have class-independent variances and the underlying model is Gaussian, GNB does better than LR.\nYou get an incorrect model for GNB if you don't have class-independent variances.  %Erick\nLR is less biased because it does not assume conditional independence.  % Erick.\n\\hfill \\\\\n\nIf you have enough data and the class-conditional independence assumption holds, GNB will produce results identical to LR.  % week 6 audio\nGiven enough points, and if the assumption that the features are independent given the class held true, GNB will do better.\nIf these assumptions are *not* true then LR will do better.\n\n \\hfill \\\\  \\hfill \\\\\n\nYou could use either to learn $ f: X \\rightarrow Y$ for $X$ that is a vector of real-valued features ($< X_1, \\dots, X_n >$) and boolean Y. \\hfill \\\\ \\hfill \\\\\n\nThe two approaches make different assumptions:   \\hfill \\\\\n\\begin{itemize}\n\t\\item NB assumes features are independent given the class.  This is an assumption on $P(\\bm{X} | Y)$.\n\t\\item LR assumes the functional form of $P(Y|\\bm{X})$, and makes no assumption on $P(\\bm{X} | Y)$.\n\\end{itemize}\n\nA Gaussian Naive Bayes classifier would assume: \\hfill \\\\\n\\begin{itemize}\n\t\\item all $X_i$ are conditionally independent given Y.\n\t\tIf you fix what hump you are looking at, you don't need to worry about the rest of the parameters.\n\t\\item model $P(X_i | Y = y_k)$ as Gaussian with $N(\\mu_{ik}, \\sigma_i)$.  (Your inputs are each produced by a Gaussian.)\n\t\\item model $P(Y)$ as Bernoulli$(\\theta, 1-\\theta)$\n\\end{itemize}\nThis implies the form of $P(Y|X)$ is:\n$\\displaystyle P(Y=1 | X= < X_1, \\dots, X_n >) = \\frac{1}{1 + \\exp(w_0 + \\sum_i w_i X_i)}$. \\hfill \\\\\nIn the slides, he derives the form.   \\hfill \\\\\nMatching coefficients with the logistic form, you get: \\hfill \\\\\n $w_0 = \\ln \\frac{1 - \\theta}{\\theta} + \\sum_i \\frac{\\mu_{i1}^2 - \\mu_{i0}^2}{2 \\sigma_i^2}$ and $w_i = \\frac{\\mu_{i0} - \\mu_{i1}}{\\sigma_i^2}$\n \n His confusing summary: \\hfill \\\\\n \\includegraphics[width=2.5in]{figures/GNB_vs_LR.pdf} \\hfill \\\\\n Attempted translation: \n \n Note: you need more parameters to train Naive Bayes. \\hfill \\\\\n $4n + 1$ parameters for GNB,  $n + 1$ parameters for Logistic Regression. \\hfill \\\\\n The NB parameter estimates are uncoupled, so there are more of them.   \\hfill \\\\\n (The Logistic Regression parameter estimates \\underline{are} coupled.)\n\nIf you don't have infinite data (\"non-asymptotic analysis\"): \\hfill \\\\\nFor $n$ attributes in $X$, NB needs $O(\\log n)$ samples to converge and Logistic Regression needs $O(n)$.\nSo GNB converges more quickly to its (perhaps less helpful) \"asymptotic estimates.\"\n\nIn logistic regression you have to do all of the optimization at once, all together, because the weights are coupled.\nIt is easier in a certain sense to fit a Gaussian model.\nTypically we have a lot of data, and that data is not Gaussian, so Logistic Regression is probably better.\n\\hfill \\\\\n\n\\subsubsection{More notes from class}  % week 6 audio\nLogistic regression makes no assumption about $P(X|Y)$.
\nNaive Bayes assumed independence of the features, conditioned on the label.\nThey are optimizing different loss functions.  \\hfill \\\\\n\\hfill \\\\\n\nWhich is better?\nIt helps to know how many parameters each classifier needs.\nNotation: $C$ classes, $d$ training examples, $n$ features.\nFor GNB (above) the count was $4n + 1$.\nFor logistic regression, the output was the weight vector $w^*$, of dimension $n + 1$.\nNB parameters are independent (uncoupled).\nLogistic regression: the parameters are tied together.  \\hfill \\\\\n\\hfill \\\\\n\n\\underline{Gen vs Disc. for NB/LR} \\hfill \\\\\nIf you have enough data, GNB w/ class-conditional independence will produce identical results to LR:\ngiven enough points, and if the assumption of feature independence given the class holds true.\nIf this is *not* true (when the GNB assumption is false) then LR will do better.\n\\textbf{With limited data, use GNB. }\nGNB may converge faster, but to something less true.  \\hfill \\\\\n\\hfill \\\\\n\nDo we do optimization with NB?  No.\nWe start w/ the joint distribution,  $P(x|y)*prior$, $\\dots$, and take the max.\nWe are not ML optimizing.  We are not required to optimize every time we learn something.\n\\textbf{Both LR and NB are trying to do the same task, and NB doesn't require iterative training}.\n\\hfill \\\\  \\hfill \\\\\n\nNB requires fewer training examples.\n\\hfill \\\\  \\hfill \\\\\n\nUse logistic regression by default.  \\hfill \\\\  \\hfill \\\\\n\nIs there a global optimum for LR with regularization?\nWith the quadratic (Gaussian) penalty the objective stays concave, so you should still end up close to the optimum.\nIf you add a regularization that breaks the concavity of the function, you can't prove anything.\n\\hfill \\\\\n\n\\subsection{Logistic Regression Protocol}\nSteps:\n\\begin{itemize}\n\t\\item start with given data: x and labels.\n\t\\item form the conditional log likelihood function.\n\t\\item either optimize that or add a prior\n\t\\item do hill climbing until you stop seeing big changes in the weights\n\t\\item then you claim victory\n\\end{itemize}\n\n\\textbf{For binary classification:} \\hfill \\\\\nThe final output of trained regression is the weight vector, $w^*$.\nIt is d+1 dimensional for a d-dimensional feature vector.  The +1 is for the bias (its input coordinate is fixed at 1).  \\hfill \\\\\n\n\\underline{How do you use it?}  \\hfill \\\\\nYou found the form of the decision boundary was $P(Y=1|X,w) = \\exp()/(1+ \\exp())$.\nPut in the $x$ with the $w$.\nCompute the probability of Y=0 and Y=1, and pick the bigger result.\nHow do you know if your weights are good?\nTest on held-out data (NOT the test data).\n\n\\subsubsection{Andrew Ng}\n* gives numbers between 0 and 1 (good for classification)   \\hfill \\\\\n* called \"logistic regression\" but it is really for classification.  (don't be confused by \"regression\") \\hfill \\\\\n* \"sigmoid function\" and \"logistic function\" are essentially synonymous.
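\n\nPutting the protocol together, a minimal sketch of the binary case (the two training points, $\eta = 0.1$ and the starting $w$ are taken from the worked gradient-ascent example above, assuming $x^0=(3,-3), y^0=1$ and $x^1=(-2,2), y^1=0$; everything else is illustrative):\n\begin{verbatim}
import math

X = [(1.0, 3.0, -3.0), (1.0, -2.0, 2.0)]   # leading 1 = bias input
Y = [1, 0]

def p_y1(w, x):
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def step(w, eta=0.1):
    # eq 1 + eq 2: one gradient-ascent update of all the weights
    grad = [sum(x[i] * (y - p_y1(w, x)) for x, y in zip(X, Y))
            for i in range(len(w))]
    return [wi + eta * gi for wi, gi in zip(w, grad)]

print(step([0.0, 0.0, 0.0]))   # [0.0, 0.25, -0.25], as in the example
\end{verbatim}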
\\hfill \\\\\n \n", "meta": {"hexsha": "3a8b49fea03c11b52758cf77d5b6a6c7292b4012", "size": 28691, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/logistic_regression.tex", "max_stars_repo_name": "JanetMatsen/Machine-Learning", "max_stars_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2016-02-07T23:35:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-26T05:13:33.000Z", "max_issues_repo_path": "tex/logistic_regression.tex", "max_issues_repo_name": "JanetMatsen/Machine-Learning", "max_issues_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/logistic_regression.tex", "max_forks_repo_name": "JanetMatsen/Machine-Learning", "max_forks_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2016-08-29T00:15:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-06T22:36:19.000Z", "avg_line_length": 52.2604735883, "max_line_length": 236, "alphanum_fraction": 0.6752291659, "num_tokens": 9497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.7981867825403177, "lm_q1q2_score": 0.5883899112699869}}
{"text": "\\section{Dudley Integrals}\n\nGiven random variables $X_t,\\ t\\in \\Omega$. A most typical example is Guassian process, a time continuous stochastic process $X_t,\\ t\\in \\Omega$ is called Gaussian if and only if every finite set of indices $t_1$, $t_2$, $\\cdots , t_k$ in the index set $\\Omega$\n$$\nX_{t_1,\\cdots, t_k}=(X_{t_1}, \\cdots, X_{t_k})\n$$\nis a multivariate Gaussian random variables. If $k=1$, for any given $t$, $X_t$ is a random variable  with a Gaussian distribution, and $\\mathbb{E}X_t$ is the expectation of this random variable $X_t$.\n\n\n\nDudley's Entropy Integral gives a bound on \n$$\n\\mathbb{E}\\left[\\sup _{t, s \\in \\Omega} (X_{t} - X_s)\\right].\n$$\n\n\\begin{lemma}\\label{lm:dudley0}\nIf $\\mathbb{E} e^{\\lambda Z} \\leq e^{\\frac{\\lambda^{2} \\nu}{2}}$, then\n\\begin{equation}\n\\mathbb{E} \\max_{1\\le i \\le N} Z_{i} \\leq \\sqrt{2 \\nu \\log N}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nSince $e^{\\lambda z}$ is a convex function of $z$, by Jensan's inequality,\n\\begin{equation}\n\\displaystyle \ne^{\\lambda \\mathbb{E}\\max_{1\\le i \\le N} Z_{i}} \\leq \\mathbb{E} \\max_{1\\le i \\le N}  e^{\\lambda Z_{i}} \\leq  \\sum_{i=1}^{N} \\mathbb{E} e^{\\lambda Z_{i}} \\leqslant N e^{\\frac{\\lambda^{2} \\nu}{2}},\\qquad \\forall \\lambda.\n\\end{equation}\nIt follows that\n\\begin{equation}\n\\mathbb{E} \\max _{1 \\leqslant i \\leq N} Z_{i} \\leq  {\\log N\\over \\lambda}+\\frac{\\lambda \\nu}{2}\n\\end{equation} \nChoose $\\lambda=\\sqrt{2\\log N\\over \\nu}$,\n\\begin{equation}\n\\mathbb{E} \\max _{1 \\leqslant i \\leq N} Z_{i} \\leq  \\sqrt{2\\nu \\log N }.\n\\end{equation} \n\\end{proof}\n \n\\begin{definition} (Covering Number) A $ \\delta$ -cover of a set $\\mathcal{T}$ w.r.t metric $d$ is a set $\\left\\{x_{1}, x_{2} \\ldots x_{N}\\right\\} \\subseteq \\mathcal{T}$ such that for each $x \\in \\mathcal{T},$ there exists some $i \\in\\{1,2, \\ldots N\\}$ such that $d\\left(x, x_{i}\\right) \\leq \\delta .$ The $\\delta$ -covering number $N(\\delta, \\mathcal{T})$ is the cardinality of the smallest $\\delta$-cover.\n\\end{definition}\n\n\\iffalse\n\\begin{theorem}\\label{th:dudley}\n(Dudley's theorem for bounded  $\\Omega$)Suppose $\\Omega$ is a bounded set. 
Let $\\left\\{X_x: x \\in \\Omega\\right\\}$ be a collection of random variables such that \n$$\n\\mathbb{E}\\left[ e^{\\lambda (X_x - X_{x'})}\\right]\\le e^{\\nu\\lambda^2 d^2(x, x')\\over 2},\n$$\nwith metric d on a finite set $\\Omega$.\nThen, for any $x_0\\in \\Omega$,\n\\begin{equation}\\label{eq:dudley}\n\\mathbb{E}\\left[\\sup _{x \\in \\mathcal{T}} X_x-X_{x_0}\\right] \\leq 12\\sqrt{\\nu}\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu\n\\end{equation}\nis the $\\displaystyle \\delta=\\sup_{x\\in \\Omega} d(x, x_0)$.\n\\end{theorem}\n\\begin{proof}\nGiven $x_0\\in \\Omega$, let $\\displaystyle \\delta=\\sup_{x\\in \\Omega} d(x, x_0)$.\nConsider a sequence of triangulation $\\mathcal{T}_j$ such that \n$$\n\\sup_{x, x'\\in \\mathcal{T}_j}d(x, x')\\le \\delta_j=2^{-j}\\delta.\n$$\nDenote the number of elements in $\\mathcal{T}_j$ by $N_j$.\nDefine a mapping $\\Pi_j: \\mathcal{T}\\rightarrow \\mathcal{T}_j$ with\n$$\nd(x, \\Pi_j x)\\le \\delta_j\\quad \\forall x\\in \\Omega.\n$$\nSince $\\Omega$ is finite, there exists a positive integer $J$ such that for all $x\\in \\Omega$,\n$$\n\\Pi_{J+1}(x)=x.\n$$\nThus,\n$$\nX_x=X_{\\Pi_0 x} + \\sum_{j=0}^J (X_{\\Pi_{j+1}(x)} - X_{\\Pi_{j}(x)}).\n$$\nBy definition, $N_0=1$.\nFor any $x$, let $\\Pi_0 (x)=x_0$, and\n$$\n\\mathbb{E}\\left [ \\sup_{x\\in\\Omega } X_{\\Pi_0 (x)} - X_{x_0}\\right]\\leq \\sum_{j=1}^J \\mathbb{E}\\left [ \\sup_{x\\in \\Omega} (X_{\\Pi_{j+1} (x)} - X_{\\Pi_j (x)})\\right]\n$$\nBy the triangle inequality, for any $x\\in \\Omega$,\n$$\nd(\\Pi_j(x), \\Pi_{j+1}(x))\\le \\delta_{j+1} + \\delta_{j}\\le 3\\delta_{j+1}.\n$$\nNote that for every integer $j$,\n$$\n\\# \\{(\\Pi_j(x), \\Pi_{j+1}(x)): x\\in \\Omega\\} \\le N_{j+1}^2.\n$$\nSince $\\left\\{X_x: x \\in \\Omega\\right\\}$ be a zero-mean sub-Gaussian process, by Lemma \\ref{lm:dudley0},\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{x\\in \\Omega} (X_{\\Pi_{j+1} (x)} - X_{\\Pi_j (x)})\\right]\n&\\le \\sqrt{2 \\nu \\log N}d(\\Pi_j(x), \\Pi_{j+1}(x)) \n\\\\\n&\\le 6\\delta_{j+1}\\sqrt{\\nu \\log N_{j+1}}.\n\\end{split}\n\\end{equation}\nHence, summing over $j$,\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{x\\in \\Omega} X_x - X_{x_0}\\right]\n&\\le \\sum_{j=1}^J \\mathbb{E}\\left [ \\sup_{x\\in \\Omega} (X_{\\Pi_{j+1} (x)} - X_{\\Pi_j (x)})\\right]\n\\\\\n&\\le \\sum_{j=1}^J 6\\delta_{j+1}\\sqrt{\\nu \\log N_{j+1}}\n\\\\\n& = 6\\sqrt{\\nu}\\sum_{j=1}^J \\delta_{j+1}\\sqrt{\\log N_{j+1}}.\n\\end{split}\n\\end{equation}\nNote that \n\\begin{equation}\nN_{j+1}=N(\\delta_{j+1}, \\Omega)\\le N(\\delta, \\Omega) \\le N(\\delta_{j+2}, \\Omega)=N_{j+2},\\quad \\delta_{j+2}\\le \\delta \\le \\delta_{j+1}.\n\\end{equation}\nThus,\n\\begin{equation}\n\\delta_{j+1}\\sqrt{\\log N_{j+1}}\\le 2\\int^{\\delta_{j+1}}_{\\delta_{j+2}} \\sqrt{ \\log N(\\delta, \\Omega) }d\\delta.\n\\end{equation}\nConsequently,\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{x\\in \\Omega} X_x - X_{x_0}\\right]\n&\\le 12\\sqrt{\\nu} \\int^{\\delta\\over 2}_{\\delta_{J+1}}\\sqrt{\\log N(\\mu, \\Omega) }d\\mu\n\\\\\n&\\le 12\\sqrt{\\nu} \\int^{\\delta\\over 2}_{0}\\sqrt{\\log N(\\mu, \\Omega) }d\\mu.\n\\end{split}\n\\end{equation}\n\\end{proof}\n\\begin{remark}\nDudley's Entropy Integral \\eqref{eq:dudley} still holds for an infinite set $\\mathcal{T}$.\n\\end{remark}\n\n\\fi\n\\begin{theorem}\\label{th:dudley}\n(Dudley's theorem) Suppose $\\Omega$ is a bounded set. 
Let $\\left\\{X_x: x \\in \\Omega\\right\\}$ be a collection of random variables such that\n$$\n\\mathbb{E}\\left[ e^{\\lambda (X_x - X_{x'})}\\right]\\le e^{\\nu\\lambda^2 d^2(x, x')\\over 2}\n$$\nwith a metric $d$ on $\\Omega$.\nThen, for any $x_0\\in \\Omega$,\n\\begin{equation}\\label{eq:dudley}\n\\mathbb{E}\\left[\\sup _{x \\in \\Omega} X_x-X_{x_0}\\right] \\leq 12\\sqrt{\\nu}\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu,\n\\end{equation}\nwhere $\\displaystyle \\delta=\\sup_{x\\in \\Omega} d(x, x_0)$.\n\\end{theorem}\n\\begin{proof}\nLet $\\hat{\\Omega}=\\{\\hat x_1, \\cdots, \\hat x_N\\}$ be a minimal $\\epsilon$-covering of $\\Omega$ and let $N$ be the covering number. For every $x\\in\\Omega$ there exist $\\hat x, \\hat x_0\\in \\hat{\\Omega}$ such that\n$$\nd(x, \\hat x)\\le \\epsilon,\\quad d(x_0, \\hat x_0)\\le \\epsilon.\n$$\nThus,\n\\begin{equation}\nX_x - X_{x_0}\\le 2\\sup_{d(y,y')\\le \\epsilon} (X_y - X_{y'}) + \\sup_{\\hat x\\in \\hat{\\Omega}} (X_{\\hat x} - X_{\\hat x_0}).\n\\end{equation}\nLet $\\displaystyle \\hat \\delta=\\sup_{\\hat x\\in \\hat \\Omega} d(\\hat x, \\hat x_0)$.\nConsider a sequence of subsets $\\hat{\\mathcal{T}}_j\\subseteq \\hat\\Omega$, where $\\hat{\\mathcal{T}}_j$ is a minimal $\\hat\\delta_j$-cover of $\\hat\\Omega$ with\n$$\n\\hat \\delta_j=2^{-j}\\hat \\delta.\n$$\nDenote the number of elements in $\\hat{\\mathcal{T}}_j$ by $N_j$.\nDefine a mapping $\\Pi_j: \\hat\\Omega\\rightarrow \\hat{\\mathcal{T}}_j$ with\n$$\nd(\\hat x, \\Pi_j \\hat x)\\le \\hat \\delta_j\\quad \\forall \\hat x\\in \\hat \\Omega.\n$$\nSince $\\hat \\Omega$ is finite, there exists a positive integer $J$ such that for all $\\hat x\\in \\hat \\Omega$,\n$$\n\\Pi_{J+1}(\\hat x)=\\hat x.\n$$\nThus,\n$$\nX_{\\hat{x}}=X_{\\Pi_0 \\hat{x}} + \\sum_{j=0}^J (X_{\\Pi_{j+1}(\\hat{x})} - X_{\\Pi_{j}(\\hat{x})}).\n$$\nBy definition, $N_0=1$: for any $\\hat x$, $\\Pi_0 (\\hat{x})=\\hat{x}_0$, and therefore\n$$\n\\mathbb{E}\\left [ \\sup_{\\hat{x}\\in \\hat \\Omega } (X_{\\hat x} - X_{\\hat{x}_0})\\right]\\leq \\sum_{j=0}^J \\mathbb{E}\\left [ \\sup_{\\hat{x}\\in \\hat \\Omega} (X_{\\Pi_{j+1} (\\hat x)} - X_{\\Pi_j (\\hat x)})\\right].\n$$\nBy the triangle inequality, for any $\\hat x\\in \\hat \\Omega$,\n$$\nd(\\Pi_j(\\hat x), \\Pi_{j+1}(\\hat x))\\le \\hat \\delta_{j+1} + \\hat \\delta_{j}\\le 3\\hat \\delta_{j+1}.\n$$\nNote that for every integer $j$,\n$$\n\\# \\{(\\Pi_j(\\hat x), \\Pi_{j+1}(\\hat x)): \\hat x\\in \\hat \\Omega\\} \\le N_{j+1}^2.\n$$\nSince $\\left\\{X_{\\hat x}: \\hat x \\in \\hat \\Omega\\right\\}$ is a zero-mean sub-Gaussian process, by Lemma \\ref{lm:dudley0},\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{\\hat x\\in \\hat \\Omega} (X_{\\Pi_{j+1} (\\hat x)} - X_{\\Pi_j (\\hat x)})\\right]\n&\\le \\sqrt{2 \\nu \\log N_{j+1}^2}\\,\\sup_{\\hat x\\in \\hat\\Omega} d(\\Pi_j(\\hat x), \\Pi_{j+1}(\\hat x))\n\\\\\n&\\le 6\\hat\\delta_{j+1}\\sqrt{\\nu \\log N_{j+1}}.\n\\end{split}\n\\end{equation}\nHence, summing over $j$,\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{\\hat x\\in \\hat \\Omega} X_{\\hat x} - X_{\\hat x_0}\\right]\n&\\le \\sum_{j=0}^J \\mathbb{E}\\left [ \\sup_{\\hat x\\in\\hat  \\Omega} (X_{\\Pi_{j+1} (\\hat x)} - X_{\\Pi_j (\\hat x)})\\right]\n\\\\\n&\\le \\sum_{j=0}^J 6\\hat \\delta_{j+1}\\sqrt{\\nu \\log N_{j+1}}\n\\\\\n& = 6\\sqrt{\\nu}\\sum_{j=0}^J \\hat \\delta_{j+1}\\sqrt{\\log N_{j+1}}.\n\\end{split}\n\\end{equation}\nNote that
\\begin{equation}\nN_{j+1}=N(\\hat \\delta_{j+1}, \\hat\\Omega)\\le N(\\hat \\delta, \\hat\\Omega) \\le N(\\hat \\delta_{j+2}, \\hat\\Omega)=N_{j+2},\\quad \\hat \\delta_{j+2}\\le \\hat \\delta \\le \\hat\\delta_{j+1}.\n\\end{equation}\nThus,\n\\begin{equation}\n\\hat \\delta_{j+1}\\sqrt{\\log N_{j+1}}\\le 2\\int^{\\hat \\delta_{j+1}}_{\\hat \\delta_{j+2}} \\sqrt{ \\log N(\\mu, \\hat \\Omega) }\\, d\\mu.\n\\end{equation}\nConsequently,\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}\\left [ \\sup_{\\hat x\\in \\hat \\Omega} X_{\\hat x} - X_{\\hat x_0}\\right]\n&\\le 12\\sqrt{\\nu} \\int^{\\hat \\delta\\over 2}_{\\hat \\delta_{J+1}}\\sqrt{\\log N(\\mu, \\hat \\Omega) }d\\mu\n\\\\\n&\\le 12\\sqrt{\\nu} \\int^{\\hat \\delta\\over 2}_{0}\\sqrt{\\log N(\\mu, \\hat \\Omega) }d\\mu\n\\\\\n&\\leq 12\\sqrt{\\nu}\\int_{0}^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu\n\\end{split}\n\\end{equation}\nwith $\\displaystyle \\delta=\\sup_{x\\in  \\Omega} d(x, x_0)$.\nThus,\n\\begin{equation}\n\\mathbb{E}\\sup_{x\\in \\Omega} (X_{x} - X_{x_0})\n\\leq 2\\mathbb{E}\\sup_{d(y,y')\\le \\epsilon} (X_y - X_{y'}) + 12\\sqrt{\\nu}\\int_{0}^{\\delta\\over 2} \\sqrt{\\log N(\\mu,\\Omega)} d \\mu.\n\\end{equation}\nLetting $\\epsilon\\rightarrow 0$,\n\\begin{equation}\n\\mathbb{E}\\sup_{x\\in\\Omega} (X_{x} - X_{x_0})\n\\leq 12\\sqrt{\\nu}\\int_{0}^{\\delta\\over 2} \\sqrt{\\log N(\\mu,\\Omega)} d \\mu.\n\\end{equation}\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{logintegral}\nFor $0<t\\le 1$,\n\\begin{align}\n\\int_0^t\\sqrt{\\log {1\\over u}}\\, du\\le  \\sqrt{t(1-\\log t)}.\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nSince $\\sqrt{x}$ is a concave function, by Jensen's inequality applied with the normalized measure ${du\\over t}$ on $[0,t]$,\n\\begin{align}\n\\int_0^t\\sqrt{\\log {1\\over u}}\\, du&\\le t\\sqrt{{1\\over t}\\int_0^t \\log {1\\over u}\\, du} = t\\sqrt{1-\\log t}\\le \\sqrt{t(1-\\log t)},\n\\end{align}\nwhere the last step uses $t\\le \\sqrt{t}$ for $0<t\\le 1$.\n\\end{proof}\n\nDefine a Rademacher variable $\\sigma$ by\n\\begin{equation}\nP(\\sigma)=\\left\\{\\begin{array}{cc}\n1 / 2 & \\text { if } \\sigma=-1 \\\\\n1 / 2 & \\text { if } \\sigma=+1\n\\end{array}\\right.\n\\end{equation}\n\n\\begin{lemma}\\label{th:rademacherprocess}\nLet $Y_i$ be independent copies of $X_i$ and let $\\sigma_{i}$ be a sequence of independent identically distributed Rademacher variables. Then\n\\[\n\\begin{aligned}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big | \\right]\n& \\leq \\mathbb{E}_n^{X, Y, \\sigma}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} \\sigma_{i}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big |\\right].\n\\end{aligned}\n\\]\n\\end{lemma}\n\\begin{proof}\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big | \\right]\n&=\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big |\\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}_{Y_{i}} f\\left(Y_{i}, x\\right)\\big |\\right]\n\\\\\n&=\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\mathbb{E}_n^{Y} \\frac{1}{n} \\sum_{i=1}^{n}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big | \\right]\n\\\\\n& \\leq \\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\mathbb{E}_n^{Y}\\big |\\frac{1}{n} \\sum_{i=1}^{n}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big |\\right]\n\\\\\n& \\leq \\mathbb{E}_n^{X, Y}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big |\\right]\n\\end{aligned}\n\\end{equation}\nConsider the distribution of $f(X_{i}, x)-f(Y_{i}, x)$ and $\\sigma_{i}(f(X_{i}, x)-f(Y_{i}, x))$.
Define\n$$\nU_k=\\{(a, b): f(a,x)-f(b,x)=k\\}.\n$$\nFor $f(X_{i}, x)-f(Y_{i}, x)$,\n\\begin{equation}\nP(f(X_{i}, x)-f(Y_{i}, x)=k)=\\sum_{(a, b)\\in U_k} p(X_i=a)p(Y_i=b).\n\\end{equation}\nFor $\\sigma_{i}(f(X_{i}, x)-f(Y_{i}, x))$,\n\\begin{align}\nP(\\sigma_{i}(f(X_{i}, x)-f(Y_{i}, x))=k)=&\\sum_{(a, b)\\in U_k} p(\\sigma_i=1)p(X_i=a)p(Y_i=b) \\\\\n&+ \\sum_{(a, b)\\in U_k} p(\\sigma_i=-1)p(X_i=b)p(Y_i=a)\\\\\n=&\\sum_{(a, b)\\in U_k} p(X_i=a)p(Y_i=b).\n\\end{align}\nThis implies that the distribution of the difference $f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)$ is the same as the distribution of $\\sigma_{i}(f(X_{i}, x)-f(Y_{i}, x))$, so we obtain\n\\[\n\\begin{aligned}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big |\\right]\n& \\leq \\mathbb{E}_n^{X, Y, \\sigma}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} \\sigma_{i}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big |\\right].\n\\end{aligned}\n\\]\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lm:rademacher}\nSuppose $\\theta$ is a random variable and $\\{\\theta_i\\}_{i=1}^n$ are $n$ samples of $\\theta$. Let $\\sigma_{i}$ be a sequence of independent identically distributed Rademacher variables. For $h(\\theta, x)$, define\n$$\nX_x = \\sum_{i=1}^n \\sigma_ih(\\theta_i, x).\n$$\nThen it holds that\n$$\n\\mathbb{E}_\\sigma\\left[ e^{\\lambda (X_x - X_{x'})}\\right]\\le e^{\\lambda^2 d^2(x, x')\\over 2},\n$$\nwith $\\displaystyle d^2(x, x') = \\sum_{i=1}^n(h(\\theta_i, x) - h(\\theta_i, x'))^2$.\n\\end{lemma}\n\\begin{proof}\nFor any $z$,\n$$\n{e^{z} + e^{-z}\\over 2}=\\sum_{i\\ge 0} {z^{2i}\\over (2i)!} \\le \\sum_{i\\ge 0} {z^{2i}\\over 2^i i!} = e^{z^2\\over 2}.\n$$\nLet $z_i= \\lambda  (h(\\theta_i, x) - h(\\theta_i, x'))$.\n\\begin{align}\n\\displaystyle\n\\mathbb{E}_\\sigma\\left[ e^{\\lambda (X_x - X_{x'})}\\right]&=\\mathbb{E}_\\sigma\\left[ e^{\\lambda \\sum_{i=1}^n \\sigma_i(h(\\theta_i, x) - h(\\theta_i, x'))}\\right]\n\\\\\n&=\\prod_{i=1}^n \\mathbb{E}_{\\sigma_i}\\left[ e^{\\sigma_i z_i}\\right]\n\\\\\n&=\\prod_{i=1}^n  {e^{z_i} + e^{-z_i}\\over 2}  \\le \\prod_{i=1}^n  e^{z_i^2\\over 2}\n\\\\\n&= e^{{1\\over 2}\\lambda^2  \\sum_{i=1}^n(h(\\theta_i, x) - h(\\theta_i, x'))^2}.\n\\end{align}\n\\end{proof}\n\\newpage\n\\begin{remark}\\label{remark:supest}\nIt holds that\n\\begin{align}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big |\\right]  \\leq {12\\over n}\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu\n\\end{align}\nwith $\\displaystyle \\delta^2=\\sup_{x\\in \\Omega} d^2(x, x_0) = \\sup_{x\\in \\Omega}\\sum_{i=1}^n(f(X_i, x) - f(Y_i, x))^2$.\n\\end{remark}\n\\begin{enumerate}\n\\item By Lemma \\ref{th:rademacherprocess},\n\\[\n\\begin{aligned}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big |\\right]\n& \\leq \\mathbb{E}_n^{X, Y, \\sigma}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} \\sigma_{i}\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right]\\big |\\right]\n\\end{aligned}\n\\]\nwith a sequence of independent identically distributed Rademacher variables $\\sigma_{i}$.\n\\item Let\n$$\nX_x = \\sum_{i=1}^n \\sigma_i\\left[f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)\\right].\n$$\nBy Lemma \\ref{lm:rademacher}, it holds that\n$$\n
\\mathbb{E}_\\sigma\\left[ e^{\\lambda (X_x - X_{x'})}\\right]\\le e^{\\lambda^2 d^2(x, x')\\over 2},\n$$\nwith $\\displaystyle d^2(x, x') = \\sum_{i=1}^n(h(X_i, Y_i, x) - h(X_i, Y_i, x'))^2$ and $h(X_i, Y_i, x) = f\\left(X_{i}, x\\right)-f\\left(Y_{i}, x\\right)$, which satisfies the assumption in Theorem \\ref{th:dudley} with $\\nu=1$.\n\\item Let $\\displaystyle \\delta=\\sup_{x\\in \\Omega} d(x, x_0)$. By Theorem \\ref{th:dudley}, for any $x_0\\in \\Omega$,\n\\begin{equation}\n\\mathbb{E}\\left[\\sup _{x \\in \\Omega} X_x-X_{x_0}\\right] \\leq 12\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu;\n\\end{equation}\nfurthermore, dividing by $n$,\n\\begin{equation}\n\\mathbb{E}_n^{X}\\left[\\sup _{x \\in \\Omega} \\big | \\frac{1}{n} \\sum_{i=1}^{n} f\\left(X_{i}, x\\right)-\\mathbb{E}[f]\\big |\\right]  \\leq {12\\over n}\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu.\n\\end{equation}\n\\end{enumerate}\n\n\\section{Maximum norm error estimates ReLU DNN}\n\\begin{lemma}\\label{lm:Cstratifyrange1}\nSuppose $\\Omega$ is bounded, say $\\|x\\|_{\\ell_p}\\le T$, and $f\\in \\mathcal B^{m+1, q}(\\Omega)$.\n%$$\n% \\int_{\\mathbb{R}^{d}} |\\hat{f}(\\theta)|\\|\\theta\\|_{\\ell_1}^{m+1} d\\theta<\\infty.\n%$$\nThere exist $\\theta_j=(z_j, t_j, \\omega_j)$ with $\\|\\bar \\omega_j\\|_{\\ell_1}=1$, $t_j\\in [0,T]$ and $\\beta_j\\in [-1,1]$ such that\n$$\nf_n(x)= \\sum_{|\\alpha|\\le m}{1\\over \\alpha!}D^\\alpha f(0) x^\\alpha + {2\\nu\\over m!n}\\sum_{j=1}^{n}\\beta_j (\\bar \\omega_j\\cdot x - t_j)_+^m,\n$$\nwith $\\nu=\\int_{\\{-1,1\\}\\times [0,T]\\times \\mathbb{R}^{d}} \\rho(\\theta)d\\theta$ and $\\rho(\\theta)$ defined in \\eqref{eq:straglam},\nsatisfies the following estimate:\n\\begin{equation}\n\\|f - f_n \\|_{C(\\Omega)} \\leq C\\sqrt{d} n^{-{1\\over 2}-{1\\over d}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nAccording to Lemma \\ref{lem:stratifiedapprox}, there exists a partition $G=G_1\\cup \\cdots \\cup G_M$ such that\n$$\n|t -t'| + \\|\\bar\\omega - \\bar\\omega'\\|_{\\ell_1}\\leq \\epsilon \\sim M^{-{1\\over d}},\\quad \\forall\\, \\theta=(z, t, \\omega),\\ \\theta'=(z', t', \\omega')\\in G_i,\\ 1\\le i\\le M.\n$$
By \\eqref{eq:straglam},\n$$\ng(\\theta, x)= (z\\bar \\omega\\cdot x - t)_+^m {\\rm sgn}\\, s(zt,\\omega),\\quad s(zt,\\omega)=\n\\begin{cases}\n(-1)^{m+1\\over 2}\\cos(z\\|\\omega\\|_{\\ell_1}t + b(\\omega)) & m \\text{ is odd}\n\\\\\n(-1)^{m+2\\over 2}\\sin(z\\|\\omega\\|_{\\ell_1}t + b(\\omega)) & m \\text{ is even}.\n\\end{cases}\n$$\nThus,\n$$\n|g(\\theta,x) - g(\\theta',x)|\\le c_0\\epsilon,\\quad \\theta, \\theta'\\in G_i.\n$$\nLet $n_i=\\lceil \\lambda(G_i) n\\rceil$, $\\displaystyle g_{n_i}^i={1\\over n_i}\\sum_{j=1}^{n_i}g(\\theta_{i, j}, x)$ and\n$$\ng_n(x) = \\sum_{i=1}^M \\lambda(G_i)g_{n_i}^i.\n$$\nIt follows that\n\\begin{align}\n|\\mathbb{E}_G g -g_n(x)|\n&= |\\sum_{i=1}^M \\lambda(G_i)(\\mathbb{E}_{G_i} g - g_{n_i}^i)|\n\\\\\n&\\le {1\\over n}\\sum_{i=1}^M \\sum_{j=1}^{n_i}|\\mathbb{E}_{G_i} g - g(\\theta_{i, j},x)|.\n\\end{align}\nLet $\\sigma_{i,j}$ be a sequence of independent identically distributed Rademacher variables,\n\\begin{equation}\nP(\\sigma_{i, j}=k)=\\left\\{\\begin{array}{cc}\n1 / 2 & \\text { if } k=-1 \\\\\n1 / 2 & \\text { if } k=+1 \\\\\n0 & \\text { otherwise }\n\\end{array}.\n\\right.\n\\end{equation}\nBy Lemma \\ref{th:rademacherprocess},\n\\begin{align}\n\\mathbb{E}_n\\sup_x |\\mathbb{E}_{G} g(x) - g_{n}(x)|\\le &\n{1\\over n}\\mathbb{E}_n\\sup_x  |\\sum_{i=1}^M\\sum_{j=1}^{n_i} (\\mathbb{E}_{G_i} g  - g(\\theta_{ij}, x))|\n\\end{align}\nAccording to Remark \\ref{remark:supest},\n\\begin{align}\n\\mathbb{E}_n\\sup_x |\\mathbb{E}_{G} g(x) - g_{n}(x)|\n  \\leq {24\\over n }\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu\n\\end{align}\nwith\n$$\n\\displaystyle \\delta^2= \\sup_{x\\in \\Omega}\\sum_{i=1}^M\\sum_{j=1}^{n_i}(g(\\theta_{ij}, x) - g(\\theta_{ij}', x))^2 \\le c^2\\epsilon^2 \\sum_{i=1}^M n_i.\n$$\nLet $h_{i, j}(x)= g(\\theta_{ij}, x) - g(\\theta_{ij}', x)$. Note that\n\\begin{align}\nd^2(x, x') = \\sum_{i=1}^M\\sum_{j=1}^{n_i}(h_{i, j}(x) - h_{i, j}(x'))^2\n \\le c^2\\epsilon^2  \\|x -x'\\|_\\infty^2\\sum_{i=1}^M n_i.\n\\end{align}\nSince a $\\mu$-ball in the metric $d$ therefore contains an $\\ell_\\infty$-ball of radius $\\mu/(c\\epsilon\\sqrt{\\sum_{i=1}^M n_i})$, it follows that\n\\begin{align}\nN(\\mu, \\Omega)\n\\le \\left ({2c\\epsilon\\sqrt{\\sum_{i=1}^M n_i}\\over \\mu}\\right)^d.\n\\end{align}\nBy Lemma \\ref{logintegral},\n\\begin{align}
\\int_0^{\\delta\\over 2} \\sqrt{\\log N(\\mu, \\Omega)} d \\mu\n&\\le \\sqrt{d}\\int_0^{c\\epsilon\\sqrt{\\sum_{i=1}^M n_i}\\over 2} \\sqrt{\\log {2c\\epsilon\\sqrt{\\sum_{i=1}^M n_i}\\over \\mu}} d \\mu\n\\\\\n& \\le C\\epsilon\\sqrt{d\\sum_{i=1}^M n_i}.\n\\end{align}\nThus,\n\\begin{align}\n\\mathbb{E}_n\\sup_x |\\mathbb{E}_{G} g(x) - g_{n}(x)|\n\\le C\\epsilon\\sqrt{d\\sum_{i=1}^M n_i\\over n^2}.\n\\end{align}\nSince $\\epsilon=n^{-1/d}$ and $\\displaystyle M\\le 4({5\\over \\epsilon})^d$,\n\\begin{align}\n\\mathbb{E}_n\\sup_x |\\mathbb{E}_{G} g(x) - g_{n}(x)|\n\\le C\\sqrt{d} n^{-{1\\over 2}- {1\\over d}}.\n\\end{align}\nConsequently, there exists $g_n(x)$ such that\n$$\n\\sup_x |\\mathbb{E}_{G} g(x) - g_{n}(x)|\\le C\\sqrt{d} n^{-{1\\over 2}- {1\\over d}}.\n$$\nNamely, there exist $\\beta_j\\in [-1, 1]$, $\\|\\bar \\omega_j\\|_{\\ell_1}=1$, $t_j\\in [0,T]$ such that\n$$\nf_n(x)= \\sum_{|\\alpha|\\le m}{1\\over \\alpha!}D^\\alpha f(0) x^\\alpha + {2\\nu\\over m!n}\\sum_{j=1}^{n}\\beta_j (\\bar \\omega_j\\cdot x - t_j)_+^m\n$$\nsatisfies the following estimate:\n\\begin{equation}\n\\sup_x |f - f_n| \\leq C\\sqrt{d} n^{-{1\\over 2}-{1\\over d}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)}.\n\\end{equation}
\n\\end{proof}\n\n\\begin{lemma}\\label{lm:Cstratifypm1}\nSuppose $\\Omega$ is bounded, say $\\|x\\|_{\\ell_p}\\le T$, and $f\\in \\mathcal B^{m+1, q}(\\Omega)$.\nThere exist $\\theta_j=(z_j, t_j, \\omega_j)$ with $\\|\\bar \\omega_j\\|_{\\ell_1}=1$, $t_j\\in [0,T]$,  $\\beta_j\\in \\{-1,1\\}$ such that \n$$\nf_n(x)= \\sum_{|\\alpha|\\le m}{1\\over \\alpha!}D^\\alpha f(0) x^\\alpha + {\\nu\\over m!n}\\sum_{j=1}^{n}\\beta_j (\\bar \\omega_j\\cdot x - t_j)_+^m\n$$ \nwith $\\nu=\\int_{\\{-1,1\\}\\times [0,T]\\times \\mathbb{R}^{d}} \\rho(\\theta)d\\theta$ and $\\rho(\\theta)$  defined in \\eqref{eq:straglam} \nsatisfies the following estimate\n\\begin{equation}\n\\|f - f_n \\|_{C(\\Omega)} \\leq C{\\sqrt{d} \\over m!}N^{-{1\\over 2}-{1\\over d+2}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)}.\n\\end{equation}   \n\\end{lemma}\n\\begin{proof}\nAccording to Lemma \\ref{lem:stratifiedapprox}, there exists a partition $G=G_1\\cup \\cdots \\cup G_M$ such that\n$$\n|t -t'| + \\|\\bar\\omega - \\bar\\omega'\\|_{\\ell_1}\\leq \\epsilon \\sim  M^{-{1\\over d}}\n$$\nfor all $\\theta=(z, t, \\omega), \\theta'=(z', t', \\omega')\\in G_i$, $1\\le i\\le M$. By \\eqref{eq:straglam},\n$$\ng(\\theta, x)= (z\\bar \\omega\\cdot x - t)_+^m {\\rm sgn} s(zt,\\omega),\\quad s(zt,\\omega)= \n\\begin{cases}\n(-1)^{m+1\\over 2}\\cos(z\\|\\omega\\|_{\\ell_1}t + b(\\omega)) & m \\text{ is odd}\n\\\\\n(-1)^{m+2\\over 2}\\sin(z\\|\\omega\\|_{\\ell_1}t + b(\\omega)) & m \\text{ is even}.\n\\end{cases}\n$$\nThus,\n$$\n|g(\\theta,x) - g(\\theta',x)|\\le c_0\\epsilon,\\quad \\theta, \\theta'\\in G_i.\n$$\nLet  $\\theta_{i,j} \\in G_i$ ($1\\leq j\\leq n_i$), let $n_i$ take the values $\\lceil \\lambda(G_i)n\\rceil$ and $\\lfloor \\lambda(G_i)n\\rfloor$ with probabilities chosen so that its mean equals $\\lambda(G_i)n$, and let $m_i=n_i + \\mathbb{I}(n_i=0)$. Then\n\\begin{align} \n\\sum_{i=1}^M m_i&=\\sum_{i=1}^M n_i\\mathbb{I}(n_i>0) + \\sum_{i=1}^M \\mathbb{I}(n_i=0)\n\\\\\n&\\le \\sum_{i=1}^M (n\\lambda(G_i) + 1)\\mathbb{I}(n_i>0) + \\sum_{i=1}^M \\mathbb{I}(n_i=0)\n\\\\\n&\\le  n\\sum_{i=1}^M \\lambda(G_i)\\mathbb{I}(n_i>0) + M\n\\le n+M.\n\\end{align} \nDefine $\\displaystyle N=\\sum_{i=1}^Mn_i$  and\n$$\nr_{m,n}(x)= {1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} g(x,\\theta_{i,j}).\n$$\nBy the definition of $m_i$, $\\displaystyle {n_i\\over m_i}=0$ or $1$.  
This means that $r_{m,n}(x)$ is of the form  $\\displaystyle {1\\over N}\\sum_{i=1}^N\\beta_i g(x,\\theta_i)$ with $\\beta_i\\in \\{-1, 1\\}$. Define\n$$\n\\bar{r}_{m,n}(x)= \\sum_{i=1}^M {n_i\\over N}\\mathbb{E}_{G_i}g.\n$$\nSince\n$\n\\displaystyle r_m(x)=\\mathbb{E}_G g= \\sum_{i=1}^M \\lambda(G_i) \\mathbb{E}_{G_i}g,\n$  \n\\begin{align}  \nr_{m,n} - r_m  &= r_{m,n} - \\bar{r}_{m,n} + \\bar{r}_{m,n} - r_m\n\\\\\n&= {1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\big (g(x,\\theta_{i,j}) - \\mathbb{E}_{G_i}g \\big ) + {1\\over N}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g.\\label{f1}\n\\end{align} \nConsider the first term in \\eqref{f1}. By Remark \\ref{remark:supest},\n\\begin{align}  \n\\mathbb{E}_n \\left[ \\sup_{x\\in \\Omega}{1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\big (g(x,\\theta_{i,j}) - \\mathbb{E}_{G_i}g \\big )\\right] \\le {12\\over N}\\int_0^{\\delta_1} \\sqrt{\\log N_1(\\mu, \\Omega)}d\\mu\n\\end{align} \nwith $\\displaystyle \\delta_1^2=\\sup_{x\\in \\Omega} \\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\big (g(x,\\theta_{i,j}) - g(x,\\theta_{i,j}')\\big )^2\\le Nc_0^2\\epsilon^2$. Let $h_{i, j}(x)= g(\\theta_{ij}, x) - g(\\theta_{ij}', x)$. Note that\n\\begin{align}\nd^2(x, x') = \\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i}(h_{i, j}(x) - h_{i, j}(x'))^2\n \\le c^2N\\epsilon^2  \\|x -x'\\|_\\infty^2.\n\\end{align}\nIt follows that\n\\begin{align}\nN_1(\\mu, \\Omega) \\le \\left ( {2\\over \\|x -x'\\|_\\infty} \\right)^d\n\\le \\left ( {2c\\epsilon\\sqrt{N} \\over \\mu}\\right)^d.\n\\end{align}\nBy Lemma \\ref{logintegral},\n\\begin{align}  \n\\mathbb{E}_n \\left[ \\sup_{x\\in \\Omega}{1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\big (g(x,\\theta_{i,j}) - \\mathbb{E}_{G_i}g \\big )\\right] \\le {12\\over N}\\int_0^{\\delta_1} \\sqrt{\\log N_1(\\mu, \\Omega)}d\\mu\n\\le {24c\\epsilon \\sqrt{d}\\over \\sqrt{N}}.\\label{eq:dud1}\n\\end{align} \nConsider the second term in \\eqref{f1}. For independent Rademacher variables $\\sigma_i$,\n$$\n\\mathbb{E}_n\\sup_{x\\in \\Omega}\\left |\\sum_{i=1}^M (n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g \\right |\\le \\mathbb{E}_n\\sup_{x\\in \\Omega}\\left |\\sum_{i=1}^M \\sigma_i(n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g \\right |.\n$$\nLet $\\tilde h_i(x) =(n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g$ and \n$$\nd^2(x, x') = \\sum_{i=1}^M (\\tilde h_i(x) - \\tilde h_i(x'))^2\\le M\\|x-x'\\|_\\infty^2\n$$\nwith $|n_i - \\lambda(G_i)N|\\le 1$ and $|\\mathbb{E}_{G_i}g(x) - \\mathbb{E}_{G_i}g(x')|\\le \\|x-x'\\|_\\infty$. It follows from Remark \\ref{remark:supest} that\n$$\n\\mathbb{E}_n\\left [\\sup_{x\\in \\Omega}{1\\over N}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g\\right ]\\le  {12\\over N}\\int_0^{\\delta_2} \\sqrt{\\log N_2(\\mu, \\Omega)}d\\mu,\n$$\nwhere $\\delta_2\\le \\sqrt{M}$ and $N_2(\\mu, \\Omega)\\le ({2\\sqrt{M}\\over \\mu})^d$. 
Thus,\n\\begin{align} \n\\mathbb{E}_n\\left [\\sup_{x\\in \\Omega}{1\\over N}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g\\right ]\\le  {24\\sqrt{dM}\\over N}.\\label{eq:dud2}\n\\end{align}\nA combination of \\eqref{eq:dud1} and \\eqref{eq:dud2} gives\n\\begin{align} \n\\mathbb{E}_n\\sup_{x\\in \\Omega} |r_{m,n} - r_m |\\le {24c\\epsilon \\sqrt{d}\\over \\sqrt{N}} + {24\\sqrt{dM}\\over N}.\n\\end{align}\nChoosing\n$\n\\displaystyle M= c^2\\epsilon^2 N\n$\nbalances the two terms, since then $\\sqrt{dM}/N = c\\epsilon\\sqrt{d}/\\sqrt{N}$, so $\\displaystyle \\mathbb{E}_n\\sup_{x\\in \\Omega} |r_{m,n} - r_m |$ is at most\n$\n\\displaystyle {48c\\epsilon \\sqrt{d}\\over \\sqrt{N}}.\n$\nSince $M \\sim \\epsilon^{-d}$,  we have\n$\nN\\sim \\epsilon^{-(d+2)}\n$\nand \n$\n\\epsilon\\sim N^{-{1\\over d+2}}.\n$\nThus,\n\\begin{align} \n\\mathbb{E}_n\\sup_{x\\in \\Omega} |f_n - f |\\le {\\nu\\over m!}\\mathbb{E}_n\\sup_{x\\in \\Omega} |r_{m,n} - r_m| \\lesssim {\\sqrt{d} \\over m!}N^{-{1\\over 2}-{1\\over d+2}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)},\n\\end{align}\nwhich completes the proof. \n\\end{proof}\n\\begin{remark}\nSimilar to Theorem \\ref{est:stratify}, the estimates in Lemmas \\ref{lm:Cstratifyrange1} and \\ref{lm:Cstratifypm1} also hold for the $C^k$-norm of the errors.\n\\end{remark}\n\n\n", "meta": {"hexsha": "324226a636238d37b0cce66b235f9d18a4b7e10e", "size": 29747, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Dudley.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Dudley.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Dudley.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.1115942029, "max_line_length": 407, "alphanum_fraction": 0.6046996336, "num_tokens": 13508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7981867849406659, "lm_q1q2_score": 0.5883899038213573}}
{"text": "\\section{Radiation} \\label{sec:rad}\nRadiation is energy in the form of waves: some waves are visible, like light, while others are invisible, like radio signals. As a basic principle of physics, energy cannot be created or destroyed, only changed from one\nform to another.\n\n\\subsection{The First Law of Thermodynamics and the Stefan-Boltzmann Equation} \\label{sec:first thermolaw}\nThe energy that goes into an object must equal the outflowing energy plus the change in internal energy, which is exactly what happens with the atmosphere. Radiation from the sun comes in, and \nradiation from the atmosphere goes out. Along the way this heats the atmosphere and the planet, which causes less radiation to be emitted than received. At least, that is the idea for Earth, which\nmay not apply to all planets. Let one thing be clear: more radiation cannot be emitted than is received, unless the planet and atmosphere are cooling. Anyway, we assume that the planet is a black \nbody, i.e. it absorbs all radiation at all wavelengths. This is captured in Stefan-Boltzmann's law (\\autoref{eq:stefan-boltzmann}) \\cite{stefan-boltzmann}. \n\nIn \\autoref{eq:stefan-boltzmann} the symbols are:\n\n\\begin{itemize}\n    \\item $S$: The energy that reaches the top of the atmosphere, coming from the sun or a similar star, per second per square meter (\\si{Wm^{-2}}). This is also called the insolation.\n    \\item $\\sigma$: The Stefan-Boltzmann constant, $5.670373 \\cdot 10^{-8}$ (\\si{Wm^{-2}K^{-4}}) \\cite{stefan-boltzmann}.\n    \\item $T$: The temperature of the planet (\\si{K}).\n\\end{itemize}\n\nThe difference between the energy that reaches the atmosphere and the energy that the planet radiates must be equal to the change in internal energy of the planet, which is written in \n\\autoref{eq:sb rewritten}. The symbols on the right hand side are:\n\n\\begin{itemize}\n    \\item $\\Delta U$: The change of internal energy (\\si{J}) \\cite{thermo1}.\n    \\item $C$: The specific heat capacity of the object, i.e. how much energy is required to heat the object by one degree Kelvin (\\si{JK^{-1}}).\n    \\item $\\Delta T$: The change in temperature (\\si{K}).\n\\end{itemize}\n\nWe want to know the change of temperature $\\Delta T$, so we rewrite the equation into \\autoref{eq:sb rewritten2}. Here we added the $\\delta t$ term to account for the time difference (or time \nstep). This is needed as we need an interval to calculate the difference in temperature over. Also, we need the energy itself (\\si{J}) and not the energy per second (\\si{W}), and by \nadding this time step the units all match up perfectly.\n\n\\begin{subequations}\n    \\label{eq:basis}\n    \\begin{equation}\n        \\label{eq:stefan-boltzmann}\n        S = SB = \\sigma T^4\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:sb rewritten}\n        S - \\sigma T^4 = \\Delta U = C \\Delta T\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:sb rewritten2}\n        \\Delta T = \\frac{\\delta t(S - \\sigma T^4)}{C}\n    \\end{equation}\n\\end{subequations}\n\nThe set of equations in \\autoref{eq:basis} forms the basis of the temperature exchange of the planet. However, two crucial aspects are missing. Only half of the planet will be receiving light from \nthe sun at once, and the planet is a sphere. So we need to account for both in our equation. We do that in \\autoref{eq:basis sphere correction}. 
We view the energy reaching the atmosphere as a \ncircular area of energy, with the equation for the area of a circle being \\autoref{eq:basis circle} \\cite{areaCircle}. The area of a sphere is in \\autoref{eq:basis sphere} \\cite{areaSphere}. In \nboth equations, $r$ is the radius of the circle/sphere. By using \\autoref{eq:basis circle} and \\autoref{eq:basis sphere} in \\autoref{eq:sb rewritten2} we get \\autoref{eq:basis sphere2} where \n$r$ is replaced by $R$. It is common in physics literature to use capitals for large objects like planets. However, we are not done yet since we can divide some stuff out. We end up with \n\\autoref{eq:basis sphere final} as the final equation we are going to use.\n\n\\begin{subequations}\n    \\label{eq:basis sphere correction}\n    \\begin{equation}\n        \\label{eq:basis circle}\n        \\pi r^2\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:basis sphere}\n        4 \\pi r^2\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:basis sphere2}\n        \\Delta T = \\frac{\\delta t (\\pi R^2S - 4\\pi R^2\\sigma T^4)}{4\\pi CR^2}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:basis sphere final}\n        \\Delta T = \\frac{\\delta t (S - 4\\sigma T^4)}{4C}\n    \\end{equation}\n\\end{subequations}\n
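\nTo get a feel for \\autoref{eq:basis sphere final}: in equilibrium ($\\Delta T = 0$) the insolation balances the emitted radiation, so $S = 4\\sigma T^4$. Below is a minimal Python sanity check of this (the insolation value is an Earth-like assumption, not a constant defined in this section):\n\n\\begin{verbatim}\nSIGMA = 5.670373e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)\nS = 1361.0           # assumed Earth-like insolation (W m^-2)\n\n# Setting Delta T = 0 in eq:basis sphere final gives S = 4 sigma T^4,\n# so the equilibrium temperature of our bare rock in space is:\nT_eq = (S / (4.0 * SIGMA)) ** 0.25\nprint(T_eq)  # roughly 278 K, with no albedo or atmosphere yet\n\\end{verbatim}\n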
\n\\subsection{Insolation}\n\nWith the current equation we calculate the global average surface temperature of the planet itself. However, this planet does not have an atmosphere just yet. Basically we modelled the \ntemperature of a rock floating in space, so let's change that with \\autoref{eq:atmos}. Here we assume that the area of the atmosphere is equal to the area of the planet surface. Obviously\nthis assumption is false, as the atmosphere is a sphere that is larger in radius than the planet, however the difference is not significant enough to account for it. We also define the\natmosphere as a single layer. This is due to the accessibility of the model: we want to make it accessible, not university simulation grade. One thing to take into account for the \natmosphere is that it only partially absorbs energy. The sun (or a similar star) is relatively hot and sends out energy waves (radiation) with relatively short wavelengths. The planet is \nrelatively cold and sends out energy at long wavelengths. As a side note, all objects radiate energy. You can verify this by leaving something in the sun on a hot day for a while and \nalmost touching it later. You can feel the heat radiating from the object. The planet is no exception and radiates heat as well, though at a different wavelength than the sun. The \natmosphere absorbs longer wavelengths better than short wavelengths. For simplicity's sake we say that none of the sun's energy gets absorbed by the atmosphere. The planet's \nradiation will be absorbed partially by the atmosphere. Some of the energy that the atmosphere absorbs is radiated into space and some of that energy is radiated back onto the planet's \nsurface. We need to adjust \\autoref{eq:basis sphere final} to account for the energy being radiated from the atmosphere back at the planet surface.\n\nSo let us denote the temperature of the planet surface as $T_p$ and the temperature of the atmosphere as $T_a$. Let us also write the specific heat capacity of the planet surface as $C_p$ \ninstead of $C$. We add the term in \\autoref{eq:atmos on surface improved} to \\autoref{eq:basis sphere final} in \\autoref{eq:surface change}. In \\autoref{eq:atmos on surface}, $\\epsilon$ is the \nabsorptivity of the atmosphere, the fraction of energy that the atmosphere absorbs. We divided \\autoref{eq:atmos on surface} by $\\pi R^2$, as we did with \\autoref{eq:basis sphere final} as \nwell, so the terms match that division.\n\n\\begin{subequations}\n    \\label{eq:atmos}\n    \\begin{equation}\n        \\label{eq:atmos on surface}\n        4\\pi R^2 \\epsilon \\sigma T_a^4\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:atmos on surface improved}\n        4\\epsilon \\sigma T_a^4\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:surface change}\n        \\Delta T_p = \\frac{\\delta t (S + 4\\epsilon \\sigma T_a^4 - 4\\sigma T_p^4)}{4C_p}\n    \\end{equation}\n\\end{subequations}\n\nAs you probably expected, the atmosphere can change in temperature as well. This is modelled by \\autoref{eq:atmos change}, which is very similar to \\autoref{eq:surface change}. There are\nsome key differences though. Instead of subtracting the radiated heat of the atmosphere once, we do it twice. This is because the atmosphere radiates heat into space and towards the \nsurface of the planet, which are two outgoing streams of energy instead of one for the planet (as the planet obviously cannot radiate energy anywhere else than into the atmosphere).\n$C_a$ is the specific heat capacity of the atmosphere.\n\n\\begin{equation}\n    \\label{eq:atmos change}\n    \\Delta T_a = \\frac{\\delta t (\\sigma T_p^4 - 2\\epsilon\\sigma T_a^4)}{C_a}\n\\end{equation}\n
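\nBefore we put these equations on a grid, here is a minimal Python sketch of \\autoref{eq:surface change} and \\autoref{eq:atmos change} for a single global cell (the values of $\\epsilon$, $C_p$, $C_a$, $S$ and $\\delta t$ below are placeholders, not the tuned constants from \\autoref{sec:cp}):\n\n\\begin{verbatim}\nSIGMA = 5.670373e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)\n\ndef step(T_p, T_a, S, epsilon, C_p, C_a, dt):\n    # eq:surface change: heated by the sun and by back-radiation\n    dT_p = dt * (S + 4*epsilon*SIGMA*T_a**4 - 4*SIGMA*T_p**4) / (4*C_p)\n    # eq:atmos change: heated by the surface, radiating up and down\n    dT_a = dt * (SIGMA*T_p**4 - 2*epsilon*SIGMA*T_a**4) / C_a\n    return T_p + dT_p, T_a + dT_a\n\nT_p, T_a = 270.0, 240.0  # arbitrary starting temperatures (K)\nfor _ in range(100000):\n    T_p, T_a = step(T_p, T_a, 1361.0, 0.75, 1e7, 1e7, 60.0)\n\\end{verbatim}\n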
It is initialised as all zeroes for each index pair, and the values is changed based on the calculations. You can view $T_p$ like \nthe whole latitude longitude grid, where $T_p[lat, lon]$ is an individual cell of that grid indexed by a specific latitude longitude combination. Do note that from here on most, if not all\nfunctions need to be called \\footnote{In case you are unfamiliar with calls, defining a function is defining how it works and calling a function is actually using it.} from another file which I \nwill call the master. The master file decides what parts of the model to use, what information it uses for plots and the like. We do it this way because we want to be able to switch out \ncalculations. Say that I find a more efficient way, or more detailed way, to calculate the temperature change. If everything was in one file, then I need to edit the source code of the project.\nWith the master file structure, I can just swap out the reference to the project's implementation with a reference to my own implementation. This makes the life of the user (in this case the \nprogrammer who has another implementation) easier and makes changing calculations in the future easier as well. Also note that what we pass on as parameters \\footnote{Parameters are variables \nthat a function can use but are defined elsewhere. The real values of the variables are passed on to the funciton in the call.} in \\autoref{alg:stream1v1} are the \nthings that change during the execution of the model or that are calculated beforehand and not constants. $S$ for instance is not constant (well at this point it is but in \\autoref{sec:daynight} \nwe change that) amd the current time is obviously not constant. All constants can be found in \\autoref{sec:cp}.\n\n\\begin{algorithm}[hbt]\n    \\caption{The main function for the temperature calculations}\n    \\label{alg:stream1v1}\n    \\SetAlgoLined\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{time $t$, amount of energy that hits the planet $S$}\n    \\Output{Temperature of the planet $T_p$, temperature of the atmosphere $T_a$}\n    \\For{$lat \\leftarrow -90$ \\KwTo $90$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $360$}{\n            $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t (S + 4\\epsilon \\sigma (T_a[lat, lon])^4 - 4\\sigma (T_p[lat, lon])^4)}{C_p}$ \\;\n            $T_a[lat, lon] \\leftarrow T_a[lat, lon] + \\frac{\\delta t (\\sigma (T_p[lat, lon])^4 - 2\\epsilon\\sigma (T_a[lat, lon])^4)}{C_a}$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Day/Night Cycle} \\label{sec:daynight}\nAs you can see, the amount of energy that reaches the atmopsphere is constant. However this varies based on the position of the sun relative to the planet. To fix this, we have to assign a \nfunction to $S$ that gives the correct amount of energy that lands on that part of the planet surface. This is done in \\autoref{alg:solar}. In this algorithm the term insolation is mentioned, \nwhich is $S$ used in the previous formulae if you recall. We use the $\\cos$ function here to map the strength of the sun to a number between $0$ and $1$. The strength is dependent on the latitude, \nbut since that is in degrees and we need it in radians we transform it to radians by multiplying it by $\\frac{\\pi}{180}$. This function assumes the sun is at the equinox (center of the sun is \ndirectly above the equator) \\cite{equinox} at at all times. 
The second $\\cos$ is needed to account for how far the sun has moved in longitude along the equator. For that we need the \ndifference between the longitude of the point we want to calculate the energy for, and the longitude of the sun. The longitude of the sun is of course linked to the current time (as the sun is \nin a different position at 5:00 than at 15:00). So we need to map the current time in seconds to the interval $[0,$ seconds in a day$]$. Therefore we need the mod function. The mod function \nworks like this: $x$ mod $y$ means subtract multiples of $y$ from $x$ until $0 \\leq x < y$. So to map the current time to a time within one day, we do $-t$ mod $d$ where $t$ is the \ncurrent time and $d$ is the amount of seconds in a day. We need $-t$ as this ensures that the sun moves in the right direction; with $t$ the sun would move in the opposite direction in our model \ncompared to how it moves in real life. When we have done the calculation specified in \\autoref{alg:solar}, we return the final value (which means that the function call is \"replaced\" \n\\footnote{Replaced is not necessarily the right word, it is more like a mathematical function $f(x)$ where $y = f(x)$. You give it an $x$ and the value that corresponds to that $x$ is saved in \n$y$. So you can view the function call in pseudocode as a value that is calculated by a different function which is then used like a regular number.} by the value that the function calculates). \nIf the final value is less than 0, we need to return 0, as the sun cannot suck energy out of the planet (beyond what the planet radiates itself), which is what a negative value would mean.\n\n\\begin{algorithm}[hbt]\n    \\caption{Calculating the energy from the sun (or similar star) that reaches a part of the planet surface at a given latitude and time}\n    \\label{alg:solar}\n    \\SetAlgoLined\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{insolation $ins$, latitude $lat$, longitude $lon$, time $t$, time in a day $d$}\n    \\Output{Amount of energy $S$ that hits the planet surface at the given latitude-time combination.}\n    $longitude \\leftarrow 360 \\cdot \\frac{(-t \\text{ mod } d)}{d}$ \\;\n    $S \\leftarrow ins \\cdot \\cos(lat \\frac{\\pi}{180}) \\cos((lon - longitude) \\cdot \\frac{\\pi}{180})$ \\;\n    \\eIf{$S < 0$}{\n        \\Return{$0$}\n    }{\n        \\Return{$S$}\n    }\n\\end{algorithm}\n\nBy implementing \\autoref{alg:solar}, \\autoref{alg:stream1v1} must be changed as well, as $S$ is no longer constant for the whole planet surface. So let us do that in \\autoref{alg:stream1v2}. Note \nthat $S$ is defined as the call to \\autoref{alg:solar}. 
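\n\nFor reference, a direct Python transcription of \\autoref{alg:solar} could look like this (a sketch, not necessarily the exact implementation used in the project):\n\n\\begin{verbatim}\nimport math\n\ndef solar(ins, lat, lon, t, d):\n    # longitude of the sun, derived from the time of day\n    sun_lon = 360.0 * ((-t % d) / d)\n    S = ins * math.cos(math.radians(lat))\n    S = S * math.cos(math.radians(lon - sun_lon))\n    # the night side of the planet receives nothing\n    return max(S, 0.0)\n\\end{verbatim}\n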
\n\\begin{algorithm}[hbt]\n    \\caption{The main function for the temperature calculations}\n    \\label{alg:stream1v2}\n    \\SetAlgoLined\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{amount of energy that hits the planet $S$}\n    \\Output{Temperature of the planet $T_p$, temperature of the atmosphere $T_a$}\n\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t (S + 4\\epsilon \\sigma (T_a[lat, lon])^4 - 4\\sigma (T_p[lat, lon])^4)}{C_p}$ \\;\n            $T_a[lat, lon] \\leftarrow T_a[lat, lon] + \\frac{\\delta t (\\sigma (T_p[lat, lon])^4 - 2\\epsilon\\sigma (T_a[lat, lon])^4)}{C_a}$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Albedo}\nAlbedo is basically the reflectiveness of a material (in our case the planet's surface) \\cite{albedo}. The average albedo of the Earth is roughly 0.3; we use a slightly lower value of 0.2 here. Do note that we change $C_p$ from a constant \nto an array. We do this to allow adding in oceans or other terrain in the future. Same thing for the albedo: different terrain has different reflectiveness.\n\n\\begin{algorithm}[hbt]\n    \\caption{Defining albedo}\n    \\label{alg:albedo}\n    $a \\leftarrow 0.2$ \\;\n\n    $C_p \\leftarrow 10^6$ \\;\n\\end{algorithm}\n\nNow that we have that defined, we need to adjust the main loop of the program (\\autoref{alg:stream1v2}). We need to add albedo into the\nequation and change $C_p$ from a constant to an array. The algorithm after these changes can be found in \\autoref{alg:stream2v3}. We multiply by $1 - a$ since albedo represents how much energy is \nreflected instead of absorbed, whereas we need the amount that is absorbed, which is exactly $1$ minus the amount that is reflected.\n\n\\begin{algorithm}[hbt]\n    \\caption{The main function for the temperature calculations}\n    \\label{alg:stream2v3}\n    \\SetAlgoLined\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{amount of energy that hits the planet $S$}\n    \\Output{Temperature of the planet $T_p$, temperature of the atmosphere $T_a$}\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t ((1 - a[lat, lon])S + 4\\epsilon \\sigma (T_a[lat, lon])^4 - 4\\sigma (T_p[lat, lon])^4)}{C_p[lat, lon]}$ \\;\n            $T_a[lat, lon] \\leftarrow T_a[lat, lon] + \\frac{\\delta t (\\sigma (T_p[lat, lon])^4 - 2\\epsilon\\sigma (T_a[lat, lon])^4)}{C_a}$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Temperature with Varying Density}\nThe air density is not at all points exactly the same. This may be due to the winds blowing, or due to height changes in the terrain. 
We need to account for that, which is done in \n\\autoref{alg:temperature with density}.\n\n\\begin{algorithm}[hbt]\n    \\caption{The main function for the temperature calculations}\n    \\label{alg:temperature with density}\n    \\SetAlgoLined\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{amount of energy that hits the planet $S$}\n    \\Output{Temperature of the planet $T_p$, temperature of the atmosphere $T_a$}\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t ((1 - a[lat, lon])S + 4\\epsilon \\sigma (T_a[lat, lon])^4 - 4\\sigma (T_p[lat, lon])^4)}{\\rho[lat, lon]C_p[lat, lon]}$ \\;\n            $T_a[lat, lon] \\leftarrow T_a[lat, lon] + \\frac{\\delta t (\\sigma (T_p[lat, lon])^4 - 2\\epsilon\\sigma (T_a[lat, lon])^4)}{\\rho[lat, lon]C_a}$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Adding Layers} \\label{sec:rad layers}\nTo add layers, we need a vertical coordinate. You would think that height is the logical and obvious choice, right? You are right, but also quite wrong. We instead are going to use pressure. The \nreason for this is quite simple: vertical advection (see \\autoref{sec:adv}) when using height as vertical coordinate will not work, whereas using pressure allows vertical advection to work.\nTherefore we will use pressure as the vertical coordinate. This makes sense too, as there is less pressure at the top of the atmosphere than at the bottom of the atmosphere.\n\nUsing pressure as your vertical coordinate does have some effect on the temperature. Instead of using the temperature, we will need to use potential temperature (as explained in \n\\autoref{sec:thermal pot}). Therefore you will need to convert from and back to temperature when reading in and outputting thermal data. A benefit, though, is that we do not need to keep track \nof the air density anymore, as that is incorporated into the potential temperature. To avoid confusion with the previously defined temperature of the planet $T_p$ we will from now on refer to \nthe potential temperature as $T_{pot}$. The initial temperature of the atmosphere (which is read in in \\autoref{sec:usatmosp}) is henceforth referred to as $T_i$.\n\n\\subsection{Grey Radiation Scheme}\nThis scheme is inspired by the Isca project \\cite{isca} and a paper describing the grey radiation scheme \\cite{greyRad}.\n\nA radiation scheme is a model for how energy is redistributed using light in a system. Such a model is a grey radiation scheme if you split it into two parts: short and long wavelength radiation.\nSo you have two redistribution systems, one for short wavelength light and one for long wavelength light. Another assumption we make when using the grey radiation scheme is that the atmosphere \nis transparent to short wavelength radiation, meaning it lets through light with short wavelengths. Additionally we use a two stream approximation, which means that we have a stream of radiation\ngoing up, and another stream of radiation going down. Note that these two streams are both long wavelength radiation, because we said earlier we assume the atmosphere completely ignores short \nwavelength radiation.\n\nThe two long wavelength radiation streams are described in \\autoref{eq:upward radiation} and \\autoref{eq:downward radiation} \\cite{greyRad}. 
In those equations, the symbols are:\n\n\\begin{itemize}\n    \\item $U$: Upward flux.\n    \\item $D$: Downward flux.\n    \\item $B$: The Stefan-Boltzmann equation (see \\autoref{eq:stefan-boltzmann}).\n    \\item $\\tau$: Optical depth.\n\\end{itemize}\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:upward radiation}\n        \\frac{dU}{d\\tau} = U - B\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:downward radiation}\n        \\frac{dD}{d\\tau} = B - D\n    \\end{equation}\n\\end{subequations}\n\nWith \\autoref{eq:upward radiation} and \\autoref{eq:downward radiation} written down, we can discuss how they work. These equations need a boundary condition to work, a starting point if you like.\nFor those equations the boundary conditions are that $U$ at the surface is equal to $B$ and that $D$ at the top of the atmosphere is equal to $0$. This means that in the beginning the top of the \natmosphere has no downward flux as there is no heat there, and that the bottom of the atmosphere has a lot of upward flux as most if not all of the heat is located there. Then after the spin up \ntime this should stabilise. We are interested in the change of the fluxes, so $dU$ and $dD$; to get those we need to multiply the right hand sides by $d\\tau$. Then we have the flow of radiation\nthat we need. However, we cannot solely use these two equations to calculate the heat of a given layer. For that we need a few more components. These are described in \\autoref{eq:heat layer}. \nHere $Q_R$ is the amount of heat in a layer, $c_p$ is the specific heat capacity of dry air (our atmosphere), $\\rho$ is the density of the air in that layer and $\\delta z$ is the change in height. \n$\\delta(U - D)$ is the change in net radiation, meaning the amount of radiation that is left over after you transferred the upward and downward flux. See it as incoming and outgoing energy for a \ngiven layer: the net change (either cooling down or heating up) is what remains after you have subtracted the outgoing energy from the incoming energy. While this explanation is not entirely true \n(as flux is not entirely equivalent to energy), it explains the concept best.\n\n\\begin{equation}\n    \\label{eq:heat layer}\n    Q_R = \\frac{1}{c_p\\rho}\\frac{\\delta(U - D)}{\\delta z}\n\\end{equation}\n\nNow only one question remains: what is optical depth? Optical depth is the amount of work a photon has had to do to get to a certain point. This might sound really vague, but bear with me. \nOptical depth describes how much stuff a certain photon has had to go through to get to a point. As you'd expect this is $0$ at the top of the atmosphere as space is a big vacuum so no stuff to \nmove through, so no work. Then the further the photon moves into the atmosphere, the more work the photon has had to do to get there. This is because it now needs to move through gases, like air,\nwater vapour and other gases. Hence the closer the photon gets to the surface of the planet, the larger the optical depth is because the photon has had to work more to get there. This phenomenon\nis described in \\autoref{eq:optical depth}. 
The symbols in the equation mean:\n\n\\begin{itemize}\n    \\item $\\tau_0$: Optical depth at the surface.\n    \\item $p$: Atmospheric pressure (\\si{Pa}).\n    \\item $p_s$: Atmospheric pressure at the surface (\\si{Pa}).\n    \\item $f_l$: The linear optical depth parameter, with a value of 0.1.\n\\end{itemize}\n\n\\begin{equation}\n    \\label{eq:optical depth}\n    \\tau = \\tau_0[f_l(\\frac{p}{p_s}) + (1 - f_l)(\\frac{p}{p_s})^4]\n\\end{equation}\n\nAs one can see, \\autoref{eq:optical depth} has two parts: a linear part and a quartic part (to the power $4$). The quartic term approximates the structure of water vapour in the atmosphere, which \nroughly scales with $\\frac{1}{4}$ with respect to the height. The linear term is present to fix numerical behaviour, because this is an approximation which will not be completely correct (that's\nwhy it is an approximation), so we add this term to make it roughly right. The same thing holds for $f_l$, which can be manually tuned to fix weird numerical behaviour.\n\nWith these equations in our mind, let's get coding. First we add the pressure profile, the pressure of all atmospheric layers at a lat lon point. We need this to accurately represent the optical \ndepth per atmospheric layer. Then we need to use the pressure profile with regards to \\autoref{eq:optical depth}. The resulting code can be found in \\autoref{alg:optical depth}. This algorithm \nreplaces the temperature calculations we have done in \\autoref{alg:temperature layer}, as this is basically a better version of the calculations done in that algorithm. $f_l$ has a value of $0.1$\nand is located near all the other constants in the code; henceforth we will refer to this section in the code as the control panel, since most if not all of the constants can be tweaked here. \n$\\tau_0$ is a function that gives the surface optical depth for a given latitude. The corresponding equation can be found in \\autoref{eq:optical depth surface} \\cite{simon}. Translating this \ninto code is left as an exercise to the reader. $U[0]$ is the boundary condition discussed before (being the same as \\autoref{eq:stefan-boltzmann}), just as $D[nlevels - 1]$ is the boundary condition. \n$S_z$ represents the call to \\autoref{alg:gradient z}. \\texttt{solar} represents the call to \\autoref{alg:solar}. $T_{trans}$ represents the call to \\autoref{alg:temp to pot}.\n
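\nAs a small illustration, the pressure dependence in \\autoref{eq:optical depth} can be sketched in Python as follows (the surface optical depth is passed in as a plain number here, so the exercise above stays an exercise; the pressure levels are just example values):\n\n\\begin{verbatim}\ndef optical_depth(tau0, p, p_s, f_l=0.1):\n    # eq:optical depth: the linear term tames the numerics, the\n    # quartic term mimics water vapour concentrated near the surface\n    return tau0 * (f_l * (p / p_s) + (1.0 - f_l) * (p / p_s)**4)\n\nfor p in (100000.0, 50000.0, 10000.0):  # example pressure levels (Pa)\n    print(optical_depth(4.0, p, 100000.0))\n\\end{verbatim}\n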
\n\\begin{algorithm}\n    \\caption{Main function for calculating the temperature using radiation}\n    \\label{alg:optical depth}\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{amount of energy that hits the planet $S$}\n    \\Output{Potential temperature $T_{pot}$, temperature of the planet surface $T_p$}\n    $T_a \\leftarrow T_{trans}(T_{pot}, p_z, \\texttt{False})$ \\;\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        $\\tau = \\tau_0(lat)(f_l\\frac{p_z}{p_z[0]} + (1 - f_l)(\\frac{p_z}{p_z[0]})^4)$ \\;\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $U[0] \\leftarrow \\sigma T_p[lat, lon]^4$ \\;\n\n            \\For{$level \\leftarrow 1$ \\KwTo $nlevels$}{\n                $U[level] \\leftarrow U[level - 1] - \\frac{(\\tau[level] - \\tau[level - 1])(\\sigma \\cdot (mean(T_a[:, :, level]))^4)}{1 + (\\tau[level - 1] - \\tau[level])}$ \\;\n            }\n\n            $D[nlevels - 1] \\leftarrow 0$ \\;\n\n            \\For{$level \\leftarrow nlevels - 1$ \\KwTo $0$}{\n                $D[level] \\leftarrow D[level + 1] - \\frac{(\\tau[level + 1] - \\tau[level])(\\sigma \\cdot (mean(T_a[:, :, level]))^4)}{1 + (\\tau[level] - \\tau[level + 1])}$ \\;\n            }\n\n            \\For{$level \\leftarrow 0$ \\KwTo $nlevels$}{\n                $Q[level] \\leftarrow -287T_a[lat, lon, level] \\frac{S_z(U - D, p_z, level)}{10^3 \\cdot p_z[level]}$ \\;\n            }\n\n            $T_a[lat, lon, :] \\leftarrow T_a[lat, lon, :] + Q$ \\;\n\n            $S \\leftarrow \\texttt{solar}(I, lat, lon, t)$ \\;\n\n            $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t((1 - a[lat, lon]) S + (D[0] - U[0]))}{C_p[lat, lon]}$ \\;\n        }\n    }\n    $T_{pot} \\leftarrow T_{trans}(T_a, p_z, \\texttt{True})$ \\;\n    \\Return{$T_p, T_{pot}$}\n\\end{algorithm}\n\n\\begin{equation}\n    \\label{eq:optical depth surface}\n    \\tau_0 = 3.75 + \\cos(lat \\frac{\\pi}{90})\\frac{4.5}{2}\n\\end{equation}\n\n\\subsection{Adding In Some Ozone (Or Something Else That Approximates It)}\nAdding in ozone in the stratosphere is hella complicated, so we leave that as an exercise to the reader, as in true academic fashion. Just joking: if you want you can work on implementing ozone, \nhowever we opt not to because it is quite complicated. Instead we approximate it, which is decent enough for our purpose. We need to do it in \\autoref{alg:optical depth} as we need to adjust the\n$Q$. We add in a check to see if we are currently calculating the radiation in the stratosphere. If so we add some extra radiation to replicate the effect of ozone. As can be seen in \n\\autoref{alg:ozone}, where we only focus on the $Q$ part of \\autoref{alg:optical depth}, we add in some extra radiation based on how high the current layer calculation is, which scales with the\nheight. \n\n\\begin{algorithm}\n    \\caption{Replicating the effect of ozone}\n    \\label{alg:ozone}\n    $inv \\leftarrow \\frac{1}{24*60*60}$ \\;\n    \\For{$level \\leftarrow 0$ \\KwTo $nlevels$}{\n        $Q[level] \\leftarrow -287T_a[lat, lon, level]\\frac{S_z(U - D, p_z, level)}{10^3 \\cdot p_z[level]}$ \\;\n        \\uIf{$p_z[level] < 40000$}{\n            $Q[level] \\leftarrow Q[level] + \\texttt{solar}(5, lat, lon, t) inv(\\frac{100}{p_z[level]})$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Tilting the Planet}\nIn order to model a planet that has seasons, like Earth, we need to tilt the planet. 
This has the effect that the sun is not always directly above the equator but is above a certain band around\nthe equator as the year moves on. This means that some hemispheres receive more/less sun based on what part of the year it is, which corresponds to the various seasons we have on Earth. But in\norder to do that, we have to change the \\texttt{solar} function. The new version as shown in \\autoref{alg:solar tilt} will replace \\autoref{alg:solar}. Here $\\alpha$ is the tilt in degrees.\n\n\\begin{algorithm}\n    \\caption{Calculating the energy from the sun (or similar star) that reaches a part of the planet surface at a given latitude and time}\n    \\label{alg:solar tilt}\n    \\SetKwInput{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{insolation $ins$, latitude $lat$, longitude $lon$, time $t$, time in a day $d$}\n    \\Output{Amount of energy $S$ that hits the planet surface at the given latitude, longitude and time combination.}\n    $sun\\_lon \\leftarrow -t \\text{ mod } d$ \\;\n    $sun\\_lon \\leftarrow sun\\_lon \\cdot \\frac{360}{d}$ \\;\n    $sun\\_lat \\leftarrow \\alpha\\cos(\\frac{2t\\pi}{year})$ \\;\n    $S \\leftarrow insolation\\cos(\\frac{\\pi(lat - sun\\_lat)}{180})$ \\;\n\n    \\uIf{$S < 0$}{\n        \\Return{$0$} \\;\n    } \\uElse {\n        $lon\\_diff \\leftarrow lon - sun\\_lon$ \\;\n        $S \\leftarrow S\\cos(\\frac{lon\\_diff\\pi}{180})$ \\;\n\n        \\uIf{$S < 0$}{\n            \\uIf{$lat + sun\\_lat > 90$ or $lat + sun\\_lat < -90$}{\n                \\Return{$insolation\\cos(\\frac{\\pi(lat + sun\\_lat)}{180})\\cos(\\frac{lon\\_diff\\pi}{180})$} \\;\n            } \\uElse {\n                \\Return{$0$} \\;\n            }\n        } \\uElse {\n            \\Return{$S$} \\;\n        }\n    }\n\\end{algorithm}\n\nWhat the code in \\autoref{alg:solar tilt} does boils down to calculating the latitude and longitude of the sun and checking whether the planet receives any energy. If not, we return $0$ immediately.\nIf so, we take the difference between the sun's longitude and the planet's longitude and calculate how much energy would hit the planet given that the sun is not straight above the equator. \nWe do this by multiplying the energy it would receive from the sun if it were above the equator, $S$, by the cosine of the difference in longitudes, which represents the tilt. Then we check again \nif the planet is receiving energy; if not, we check whether this happens around the poles. We do this because due to the tilt it can be the case that at certain points in the year the pole is in constant\nsunlight, i.e. the sun does not go down. This creates a sort of overshoot which needs to be accounted for. If it does, then we add the latitudes of the sun and the planet together and use\nthat to calculate the energy that would hit that spot. If it is not the case that we are around the poles and we do not receive energy, then we return $0$. If we do receive \nenergy (so no negative values), then we return $S$.
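\n\nA short Python sketch of just the sun's position as used in \\autoref{alg:solar tilt} (the tilt $\\alpha$ and the year length here are assumed example values):\n\n\\begin{verbatim}\nimport math\n\ndef sun_position(t, d, year, alpha=23.5):\n    # subsolar longitude from the time of day, as in alg:solar tilt\n    sun_lon = (-t % d) * 360.0 / d\n    # subsolar latitude swings between +alpha and -alpha over a year\n    sun_lat = alpha * math.cos(2.0 * t * math.pi / year)\n    return sun_lat, sun_lon\n\\end{verbatim}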
", "meta": {"hexsha": "b54deec830894af5c676c9eada4cd8f54bad6b82", "size": 33742, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex-docs/topics/radiation.tex", "max_stars_repo_name": "davleop/claude", "max_stars_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 175, "max_stars_repo_stars_event_min_datetime": "2020-06-15T16:29:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T21:53:34.000Z", "max_issues_repo_path": "tex-docs/topics/radiation.tex", "max_issues_repo_name": "davleop/claude", "max_issues_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2020-06-26T06:47:47.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-08T09:17:45.000Z", "max_forks_repo_path": "tex-docs/topics/radiation.tex", "max_forks_repo_name": "davleop/claude", "max_forks_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2020-06-24T10:39:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-12T08:07:56.000Z", "avg_line_length": 75.14922049, "max_line_length": 197, "alphanum_fraction": 0.7215932666, "num_tokens": 9158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5883898985130485}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header.tex}\n\n\n\\title{Phys 220A -- Classical Mechanics -- HW02}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{16}{10}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\maketitle\n\n\n\\section*{Problem 1: Lenz vector (20 pts)}\n\\textit{\nIn this problem we use the fact that the Kepler problem has an additional conserved quantity to solve it without doing an integral.\n\\newline\nConsider the motion of a particle with mass $\\mu$ in a potential $V = -\\alpha / r$ and define the Lenz vector\n\\begin{equation}\n\\v{\\Lambda} = \\frac{\\mu}{\\alpha} \\left( \\frac{d}{dt} \\v{r} \\right) \\times (\\v{r} \\times \\left( \\frac{d}{dt} \\v{r} \\right) ) - \\frac{\\v{r}}{r}.\n\\end{equation}\n}\n\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\n% part A\n\\item \\textit{\nShow that $\\v{\\Lambda}$ is conserved.\n}\n\n\n% part B\n\\item \\textit{\nShow that $\\abs{\\v{\\Lambda}}$ is equal to the eccentricity $\\epsilon$ of the orbit.\n\\newline \\textbf{Note:} Remember that for the Kepler problem\n\\begin{equation}\n\\frac{L^2}{\\mu \\alpha} \\frac{1}{r} = 1 + \\epsilon \\cos\\phi.\n\\label{eq:orbit}\n\\end{equation}\n}\n\n\n% part C\n\\item \\textit{\nShow that $\\v{\\Lambda} \\cdot \\v{r}$ can be evaluated in two ways\n\\begin{align}\n\\v{\\Lambda} \\cdot \\v{r} &= \\epsilon r \\cos\\phi, \\text{and} \\\\\n\\v{\\Lambda} \\cdot \\v{r} &= \\frac{L^2}{\\mu \\alpha} - r,\n\\end{align}\nand use these results to derive the orbit equation \\eqref{eq:orbit}. \n}\n\n\n% part D\n\\item \\textit{\nCan $\\v{\\Lambda}$ be conserved for other central potentials?\n}\n\n\n\\end{enumerate}\n\n\n\n\\section*{Problem 2: Good vibrations (20 pts)}\n\\textit{\nConsider a particle of mass $\\mu$ moving in a central harmonic oscillator potential\n\\begin{equation}\nV(r) = \\alpha r^2, \\quad \\alpha > 0.\n\\end{equation}\n}\n\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\n% part A\n\\item \\textit{\nIn terms of the angular momentum and the other parameters of the problem, find the minimum of the effective potential. If the energy is equal to this minimal value what is the orbit?\n}\n\n\n% part B\n\\item \\textit{\nFor an energy which is larger than the minimal value, find the minimal and maximal radius $r_\\text{min}, r_\\text{max}$. \n}\n\n\n% part C\n\\item \\textit{\nFind the orbit equation $r = r(\\phi)$; what do these orbits look like?\n}\n\n\n% part D\n\\item \\textit{\nWhat does the problem look like if you formulate it in Cartesian coordinates in the $(x,y)$ plane (assuming that the motion is in this plane)?\n}\n\n\n\\end{enumerate}\n\n\n\\section*{Problem 3 (15 pts)}\n\\textit{\nConsider a soap-film spanned between two rings of radius $R$ located at $x = -x_0$ and $x = +x_0$. Assume that the film is rotationally symmetric around the $x$-axis. 
\n}\n\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\n% part A\n\\item \\textit{\nShow that the area functional can be expressed in terms of the location $y(x)$ of the soap-film at the cross section $z=0$, $y>0$ and is given by\n\\begin{equation}\nA[y] = 2\\pi \\int_{-x_0}^{x_0} dx y(x) \\sqrt{1+y'(x)^2}.\n\\end{equation}\n}\n\n\n% part B\n\\item \\textit{\nFind the Euler-Lagrange equations.\n}\n\n\n% part C\n\\item \\textit{\nFind the solution which minimizes the area of the soap-film.\n}\n\n\n\\end{enumerate}\n\n\n\n\\section*{Problem 4: I walk the line (10 pts)}\n\\textit{\nConsider a system with $N$ degrees of freedom and coordinates $q^i$, $i = 1, \\dots, N$. The Lagrangian for motion without a potential is given by\n\\begin{equation}\nL = \\frac{1}{2} \\sum_{ij} g_{ij} (q^k) \\dot{q}^i \\dot{q}^j\n\\end{equation}\nwhere the function (``metric'') is assumed to be symmetric, $g_{ij} = g_{ji}$, depends on the coordinates $q^k$ and has an inverse (i.e. $\\det g \\neq 0$). \n}\n\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\n% part A\n\\item \\textit{\nShow that the Euler-Lagrange equations are given by\n\\begin{equation}\n\\ddot{q}^k + \\sum_{ij} \\Gamma\\indices{^k_{ij}} \\dot{q}^i \\dot{q}^j = 0\n\\end{equation}\nwhere\n\\begin{equation}\n\\Gamma\\indices{^k_{ij}} = \\frac{1}{2} \\sum_l g^{kl} \\left( \\frac{\\pd g_{il}}{\\pd q^j} + \\frac{\\pd g_{jl}}{\\pd q^i} - \\frac{\\pd g_{ij}}{\\pd q^l} \\right)\n\\end{equation}\nwhere $g^{ij}$ is the inverse of $g_{ij}$, i.e.\n\\begin{equation}\n\\sum_k g^{ik} g_{kj} = \\delta^i_j\n\\end{equation}\n}\n\n\n% part B\n\\item \\textit{\nWhat does this equation reduce to if $g_{ij}$ is the identity matrix?\n}\n\n\n\\end{enumerate}\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "11d7674959b1fab794a997346a8a87e552b4ad88", "size": 4461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classical/hw02.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classical/hw02.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classical/hw02.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.2033898305, "max_line_length": 182, "alphanum_fraction": 0.6805648958, "num_tokens": 1476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7981867777396211, "lm_q1q2_score": 0.5883898985130483}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\n\\section{Path integral} \\label{sec:path-integral}\n\nThe formalism of the path integral is about assigning a probability density to the space of possible field configurations. \nThis space is infinite-dimensional: in order to treat it as a vector space we need to find some kind of basis, such as the values of the field on a grid of points. \n\nThe way to formally treat these problems starts from the theory of linear functionals; for this first section we will follow \\textcite[]{zaidiFunctionalMethods1983} quite closely.\n\n\\subsection{Linear Functionals}\n\n% Following \\cite[]{zaidiFunctionalMethods1983}.\n\nWe start from the space of square-integrable functions \\(q(x)\\), endowed with a product and an orthonormal basis \\(\\phi _n\\).\nWe consider (multi-)linear \\emph{functionals}, which are maps from the space of square-integrable functions (or from tuples of them) to \\(\\mathbb{R}\\) or \\(\\mathbb{C}\\). \nThese can be represented as functions of infinitely many variables, countably so if we use the basis \\(\\phi _n\\), uncountably so if we use the continuous basis \\(x\\).\n\nA functional \\(F[q]\\) can be represented as a power series \n%\n\\begin{align}\nF[q] = \\sum _{n=0}^{\\infty } \\frac{1}{n!} \\prod_{i=1}^{n} \\int \\dd{x_i} q(x_i) f(x_1, \\dots, x_n)\n\\,.\n\\end{align}\n\nExamples of this are the exponential series corresponding to the function \\(f(x)\\), mapping \\(q(x)\\) to \\(e^{(f, q)}\\) where the brackets denote the scalar product in the space, and the Gaussian series corresponding to the kernel \\(K(x, y)\\), mapping \\(q(x)\\) to \\(e^{(q, K, q)}\\), where \n%\n\\begin{align}\n(q, K, q) = \\int \\dd{x} \\dd{y} q(x) q(y) K(x, y)\n\\,.\n\\end{align}\n\n\\textbf{Functional derivatives} describe how the output of the functional changes as the argument goes from \\(q(x)\\) to \\(q(x) + \\eta (x)\\), where \\(\\eta (x)\\) is small. 
\nThis will be a linear functional of \\(\\eta \\) to first order, so we define the functional derivative with the expression \n%\n\\begin{align}\n\\eval{F[q+\\eta ] - F[q]}_{\\text{linear order}} = \\int \\eta (y) \\fdv{F}{q (y)} \\dd{y}\n\\,.\n\\end{align}\n\nThe analogy to finite-dimensional spaces is as follows: the functional derivative \\(\\fdv*{F}{q(y)}\\) corresponds to the \\emph{gradient} \\(\\nabla^{i} F\\), while the integral in the previous expression corresponds to the \\emph{directional derivative} \\((\\nabla^{i} F) \\eta^{j} g_{ij}\\).\nThe metric is present since the gradient is conventionally defined with a vector-like upper index; in our infinite-dimensional space the scalar product is given by the integral.\n\nPractically speaking, the most convenient way to calculate a functional derivative is by taking \\(\\eta_y (x)\\) to be such that it only differs from zero in a small region near \\(y\\); let us define \n%\n\\begin{align}\n\\delta \\omega = \\int \\eta _y (x) \\dd{x}\n\\,.\n\\end{align}\n\nThen, we define \n%\n\\begin{align}\n\\fdv{F}{q(y)} = \\lim_{ \\delta \\omega \\to 0} \\frac{F[q + \\eta _y] - F[q]}{ \\delta \\omega }\n\\,.\n\\end{align}\n\nIn order for the limit to be computed easily, it is convenient for \\(\\eta (x)\\) to be of the form \\(\\delta \\omega \\times \\text{fixed function}\\),\nso that we are only changing the normalization as we shrink \\(\\delta \\omega \\).\nA common choice is then \n%\n\\begin{align}\n\\eta _y (x) = \\delta \\omega  \\delta (x-y)\n\\,.\n\\end{align}\n\nIf we apply this procedure to the identity functional \\(q \\to q\\), we find \n%\n\\begin{align}\n\\fdv{q(x)}{q(y)} = \\lim_{ \\delta \\omega  \\to 0} \\frac{q (x) + \\delta \\omega \\delta (x-y) -q (x)}{ \\delta \\omega } = \\delta (x-y)\n\\,.\n\\end{align}\n\nThe variable \\(q\\) is one-dimensional; if instead we wanted to consider a multi-dimensional coordinate system \\(q_\\alpha \\), by the same reasoning we would find \n%\n\\begin{align}\n\\fdv{q_\\alpha (x)}{q_\\beta (y)} = \\delta_{\\alpha \\beta } \\delta (x-y)\n\\,.\n\\end{align}\n\nAn example: the functional derivative of a functional \\(F_n\\) defined by \n%\n\\begin{align}\nF_n[q] = \\int f(x_1 , \\dots, x_n) q(x_1 )\\dots q(x_n) \\dd{x_1} \\dots \\dd{x_n}\n\\,,\n\\end{align}\n%\nwhere \\(f\\) is a symmetric function of its arguments, is given by \n%\n\\begin{align}\n\\fdv{F_n}{q(y)} = n \\int f(x_1, \\dots, x_{n-1}, y) q(x_1 )\\dots q(x_{n-1}) \\dd{x_1 } \\dots \\dd{x_{n-1}}\n\\,,\n\\end{align}\n%\na function of \\(y\\). \n\nA \\textbf{linear transformation} is of the form \n%\n\\begin{align}\nq(x) = \\int K(x, y) q'(y) \\dd{y}\n\\,.\n\\end{align}\n\nIf this transformation has an inverse, which is characterized by the kernel \\(K^{-1}\\), then we must have the orthonormality relation \n%\n\\begin{align}\n\\int K(x,y) K^{-1} (y, z) \\dd{y} =\n\\int K^{-1}(x,y) K (y, z) \\dd{y} =\n\\delta (x-z)\n\\,.\n\\end{align}\n\nWe can do \\textbf{Legendre transforms}: if we have a functional \\(F\\) we can differentiate with respect to the coordinate \\(q\\) to find \n%\n\\begin{align}\n\\fdv{F[q]}{q(x)} = p(x)\n\\,,\n\\end{align}\n%\nin analogy to the momentum in Lagrangian mechanics. 
Then, we can map \\(F[q]\\) to a new functional \\(G[p]\\) which will only depend on the momentum: \n%\n\\begin{align}\nG[p]= F[p] - \\int q(x) p(x) \\dd{x}\n\\,.\n\\end{align}\n\nWe can also define functional integration, by \n%\n\\begin{align}\n\\int F[q] \\qty[ \\dd{q}] = \\int \\hat{F}(\\qty{q_i}) \\prod_i \\dd{q_i}\n\\,.\n\\end{align}\n\nOn the right-hand side we are using the expression of the functional as a function of infinitely many variables which we discussed above;\nwe are then integrating over each of the coordinates in this infinite dimensional function space.\nThe infinite-dimensional measure, here written as \\([ \\dd{q}]\\) is also often denoted as \\(\\mathcal{D}q\\). \n\nThis integral will not always exist, however in the cases in which it does we can change variables. \nLet us consider a linear change of variable, whose kernel is \\(K(x, y)\\), such that (compactly written) \\(q = K q'\\). \n\nThen, we want to compute the integral \n%\n\\begin{align}\n\\int F[Kq'] \\qty[ \\dd{Kq'}] \n\\,\n\\end{align}\n%\nas an integral in \\(\\qty[ \\dd{q}]\\): in order to do so, we need to relate the two functional measures. \nWe start by expressing both \\(q\\) and \\(q'\\) in terms of an orthonormal basis \\(\\phi _i\\): inserting this into the linear transformation law we get \n%\n\\begin{align}\nq(x) &= \\int K(x, y) q'(y) \\dd{y}  \\\\\n\\sum _{i} q_i \\phi _i(x) &= \\int K(x, y) \\sum _{j} q'_j \\phi _j (y) \\dd{y}  \\\\\n\\sum _{i} q_i \\underbrace{\\int \\phi _i (x) \\phi _k (x) \\dd{x}}_{ \\delta_{ik}} \n&= \n\\sum _{j} q'_j \\underbrace{\\int K(x, y) \\phi _j (y) \\phi _k (x) \\dd{y} \\dd{x}}_{ k_{jk}}  \\\\\nq_k &= \\sum _{j} q'_j k_{jk}\n\\,.\n\\end{align}\n\nThen, the measure will transform with the determinant \\(\\det K = \\det k\\), which we can now express as an infinite product of the eigenvalues of \\(k\\): \n%\n\\begin{align}\n\\qty[ \\dd{q}] = \\abs{\\pdv{q}{q'}} \\qty[ \\dd{q'}] = \\det K \\qty[ \\dd{q'}]\n\\,.\n\\end{align}\n\nUsually functional integrals cannot be computed analytically; the exception is given by \\textbf{Gaussian integrals}, which generalize the finite-dimensional result \n%\n\\begin{align}\n\\int_{\\mathbb{R}^{n}} \\exp(- \\frac{1}{2} A_{ij} x_i x_j + i b_j x_j) \\dd{x_1 } \\dots \\dd{x_j} = \\sqrt{\\frac{(2 \\pi)^{n}}{\\det A}}\n\\exp(- \\frac{1}{2} (A^{-1})_{ij} b_i b_j)\n\\,.\n\\end{align}\n\nHere \\(A_{ij}\\) is an \\(n\\)-dimensional real matrix (which without loss of generality can be taken to be symmetric) while \\(b_i\\) is an \\(n\\)-dimensional vector.\n% The result comes from a transformation of the coordinates according to the finite-dimensional.\n\nThis can be interpreted as a ``functional'' (still finite-dimensional, so just a function, but we will generalize soon) of \\(b_i\\); we write it with an additional normalization \\(N\\) for convenience:\n%\n\\begin{align}\nZ[b] = N \\int _{\\mathbb{R}^{n}} \\exp(- \\frac{1}{2} A_{ij} x_i x_j + b_j x_j) \\dd{x_1 } \\dots \\dd{x_j}\n= N \\sqrt{\\frac{(2 \\pi )^{n}}{\\det A}} \\exp(- \\frac{1}{2} (A^{-1})_{ij}b_i b_j)\n\\,,\n\\end{align}\n%\nand if we rescale the normalization \\(N\\) so that \\(Z[\\vec{0}] = 1 \\) we get \n%\n\\begin{align}\nZ[b] = \\exp(- \\frac{1}{2} (A^{-1})_{ij}b_i b_j)\n\\,.\n\\end{align}\n\nThe infinite-dimensional generalization of this result amounts to replacing all the sums (expressed implicitly with Einstein notation here) with integrals; also conventionally we change the names of the variables to \\(x \\to q\\), \\(A \\to K\\), \\(b \\to J\\): 
\n%\n\\begin{align} \\label{eq:gaussian-integral}\nZ[J] &= N \\int \\mathcal{D}q \\exp(- \\frac{1}{2} \\int \\dd{x} \\dd{y} K(x, y) q(x) q(y) + i \\int \\dd{x} q(x) J(x))  \\\\\n&= \\exp(- \\frac{1}{2} \\int \\dd{x} \\dd{y} J(x) J(y) K^{-1}(x, y))\n\\,.\n\\end{align}\n\nLet us now give an example of an application of this result: \\(K(x,y) = \\sigma^{-2} \\delta (x-y)\\) means \\(K^{-1}(x,y) = \\sigma^2 \\delta (x-y) \\), so \n%\n\\begin{align}\nZ[J] = \\exp(- \\frac{\\sigma^2}{2} \\int \\dd{x} J^2(x))\n\\,.\n\\end{align}\n\nThis, as we shall see, can be used to give us a description of white noise, whose values at distinct points are uncorrelated (its power spectrum is flat).\n\n\\subsection{The probability density functional}\n\nWe can interpret the quantity \n%\n\\begin{align}\n\\exp(- \\frac{1}{2} (q, K, q)) \\mathcal{D}q\n\\,\n\\end{align}\n%\nas a Gaussian \\emph{probability density functional} \\(\\dd{\\mathcal{P}}[q]\\), since \n\\begin{enumerate}\n    \\item it is positive definite;\n    \\item it is normalized, as long as we set its integral, \\(Z[0]\\), to 1;\n    \\item it goes to zero as \\(q \\to \\pm \\infty \\).\n\\end{enumerate}\n\nIf this is the case, then we ought to be able to compute the average value of a functional \\(F[q]\\) as \n%\n\\begin{align} \\label{eq:average-of-a-functional}\n\\expval{F[q]} = \\int F[q] \\dd{P}[q] = \\int \\mathcal{D}q \\exp(- \\frac{1}{2} (q, K, q)) F[q]  \n\\,,\n\\end{align}\n%\nwhich we can generalize to any non-gaussian probability density functional by replacing the exponential \\(\\exp(- \\frac{1}{2} (q, K, q))\\) with a generic \\(\\mathcal{P}[q]\\).\n\nA useful kind of average we can compute is given by the \\(N\\)-point correlation function, \n%\n\\begin{align}\nC^{(N)}(x_1, \\dots, x_N) = \\expval{q(x_1) \\dots q(x_N)}\n\\,.\n\\end{align}\n\nWith the formula we gave earlier, this can be computed as \n%\n\\begin{align}\nC^{(N)}(x_1, \\dots , x_N)\n= \\int \\mathcal{D}q \\mathcal{P}[q] \\prod_i q(x_i)\n\\,.\n\\end{align}\n\nHere we can make use of a trick: consider the functional derivative\n%\n\\begin{align}\n\\eval{\\frac{1}{i}\\fdv{Z[J]}{J(x_1 )}}_{J = 0} &= \\frac{1}{i} \\eval{\\fdv{}{J(x_1 )}}_{J=0} \\int \\mathcal{D}q\n\\exp(- \\frac{1}{2} (q, K, q) + i (J,q))  \\\\\n&= \\int \\mathcal{D}q \\exp(- \\frac{1}{2} (q, K, q)) q(x_1 )\n= \\expval{q(x_1 )} = C^{(1)}(x_1 )\n\\,,\n\\end{align}\n%\nwhich actually holds for any probability density functional: we did not make use of Gaussianity.\nIn general, each functional derivative ``brings down a factor \\(q(x)\\)'', so we will be able to write the \\(N\\)-point correlation function as\n%\n\\begin{align}\nC^{(N)} (x_1 \\dots x_N) = \\frac{1}{i^{N}} \\eval{\\frac{ \\delta^{N} Z[J]}{ \\delta J(x_1 )\\dots \\delta J(x_N)}}_{J =0 }\n\\,.\n\\end{align}\n\nThe correlation functions, which as we discussed in an earlier section are crucial when discussing structure formation, can be ``simply'' calculated by functional differentiation as long as we have the generating functional \\(Z[J]\\).\nThis generating functional is very similar mathematically to a partition function in statistical mechanics, and it serves an analogous role: its derivatives allow us to characterize the dynamics of the system. 
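\nAs a quick sanity check of this ``derivative trick'', here is a minimal finite-dimensional sketch (our own illustration, not part of the original derivation): for \\(Z(b) = \\expval{e^{i b \\cdot x}} = \\exp(- \\frac{1}{2} b^{\\top} \\Sigma b)\\), with an arbitrary toy covariance \\(\\Sigma\\), two derivatives and the factor \\(1/i^2 = -1\\) recover \\(\\Sigma_{ij}\\) exactly.\n%\n\\begin{lstlisting}[language=Python]\n# Finite-dimensional analogue of C = (1/i^N) d^N Z / dJ^N at J = 0.\nimport sympy as sp\n\nb1, b2 = sp.symbols('b1 b2', real=True)\nSigma = sp.Matrix([[2, sp.Rational(1, 2)], [sp.Rational(1, 2), 1]])  # toy covariance\nb = sp.Matrix([b1, b2])\nZ = sp.exp(-sp.Rational(1, 2) * (b.T * Sigma * b)[0, 0])\n\n# Each derivative brings down a factor i x_j; the 1/i^2 = -1 undoes the i's.\nC_12 = -sp.diff(Z, b1, b2).subs({b1: 0, b2: 0})\nprint(C_12, Sigma[0, 1])  # both print 1/2\n\\end{lstlisting}\n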
\n\nNow, any functional \\(\\mathscr{F} [q]\\) can be expressed through a functional Taylor series:\n%\n\\begin{align}\n\\mathscr{F}[q] = \\sum _{n=0}^{\\infty } \\frac{1}{n!}\n\\int \\dd{x_1} \\dots \\dd{x_n} \\eval{\\frac{ \\delta^{n} \\mathscr{F}[q]}{ \\delta q(x_1) \\dots \\delta q(x_n)}}_{q = 0} q(x_1) \\dots q(x_n)\n\\,,\n\\end{align}\n%\nso if we compute the average value \\(\\expval{\\mathscr{F}[q]}\\) we find \n%\n\\begin{align}\n\\expval{\\mathscr{F}[q]} &= \n\\sum _{n=0}^{\\infty } \\frac{1}{n!}\n\\int \\dd{x_1} \\dots \\dd{x_n} \\eval{\\frac{ \\delta^{n} \\mathscr{F}[q]}{ \\delta q(x_1) \\dots \\delta q(x_n)}}_{q = 0} \\underbrace{\\expval{q(x_1) \\dots q(x_n)}}_{= C^{(n)} (x_1 \\dots x_n)}  \\\\\n&=\\sum _{n=0}^{\\infty } \\frac{(-i)^{n}}{n!}\n\\int \\dd{x_1} \\dots \\dd{x_n} \\eval{\\frac{ \\delta^{n} \\mathscr{F}[q]}{ \\delta q(x_1) \\dots \\delta q(x_n)}}_{q = 0} \n\\eval{\\frac{ \\delta^{n} Z[J]}{ \\delta J(x_1 )\\dots \\delta J(x_n)}}_{J =0 } \\\\\n&= \\mathscr{F}\\qty[ -i \\fdv{}{J}]\\eval{ Z[J]}_{J=0}\n\\,.\n\\end{align}\n\nThe expression with the functional \\(\\mathscr{F}\\) being calculated ``at'' the functional derivative is purely formal: it allows us to compactly write the previous Taylor expansion, and should be interpreted as a shorthand for it.  \n\n\\subsection{Gaussian fields' correlation functions} \\label{sec:gaussian-field-correlation-functions}\n\nConsider a Gaussian field, whose partition function \\(Z[J]\\) is given, as we have shown in \\eqref{eq:gaussian-integral}, by \n%\n\\begin{align}\nZ[J] = \\exp( - \\frac{1}{2} (J, K^{-1}, J))\n\\,.\n\\end{align}\n\nThen, as before we can calculate the correlation functions through functional derivatives: the first ones are \n%\n\\begin{align}\n\\expval{q(x)} &= \\frac{1}{i} \\eval{\\fdv{Z[J]}{J(x)}}_{J=0}  \\\\\n&= \\eval{i \\int \\dd{y} K^{-1}(x, y) J(y) \\exp(- \\frac{1}{2} (J, K^{-1}, J))}_{J =0} =0\\\\\n\\expval{q(y)q(x)} &= -\\eval{\\frac{ \\delta^2 Z[J]}{ \\delta  J(y) \\delta  J (x)}}_{J=0} \\\\\n&= -\\eval{\\qty(-K^{-1}(x, y) + \\int \\dd{z} \\dd{w} K^{-1} (x, z)K^{-1} (y, w) J(z) J(w))\\exp(- \\frac{1}{2} (J, K^{-1}, J))}_{J =0}  \\\\\n&= K^{-1} (x, y)\n\\,.\n\\end{align}\n\nSo, we have our result: \\emph{for a Gaussian variable, the two-point correlation function is the inverse of the kernel}. \nThis is perfectly analogous to the result we find for a zero-mean \\(n\\)-dimensional multivariate Gaussian with covariance matrix \\(K^{-1}\\): its probability density function is given by \n%\n\\begin{align}\n\\mathcal{N}(\\vec{x} | \\vec{0}, K^{-1}) = \n\\frac{1}{(2 \\pi )^{n/2} \\sqrt{\\det K^{-1}}}\n\\exp(- \\frac{1}{2} \\vec{x}^{\\top} K \\vec{x})\n\\,,\n\\end{align}\n%\nand its one- and two-point functions read \\(\\expval{x_i} = 0\\) and \\(\\expval{x_i x_j} = K^{-1}_{ij}\\) respectively. 
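\nThis statement is easy to check by direct Monte Carlo sampling in finite dimensions (a sketch we add for illustration; the covariance below is an arbitrary toy choice): the sampled two-point function reproduces the matrix playing the role of \\(K^{-1}\\), and the sampled four-point function factorizes into symmetrized products of two-point functions, as derived next.\n%\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\nrng = np.random.default_rng(0)\nSigma = np.array([[1.0, 0.4, 0.1],\n                  [0.4, 1.5, 0.3],\n                  [0.1, 0.3, 0.8]])  # plays the role of K^{-1}\nx = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)\n\ntwo_pt = x.T @ x / len(x)            # <x_i x_j>\nprint(np.abs(two_pt - Sigma).max())  # small: only sampling noise\n\n# Four-point function vs its Wick (symmetrized) factorization:\nfour_pt = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 2]).mean()\nwick = Sigma[0, 1] * Sigma[2, 2] + 2 * Sigma[0, 2] * Sigma[1, 2]\nprint(four_pt, wick)                 # agree up to noise\n\\end{lstlisting}\n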
\n\nComing back to the infinite-dimensional scenario: a similar (albeit longer) calculation to the one before allows us to compute the \\(N\\)-point correlation function for the same Gaussian variable: we expand the exponential in \\(Z[J]\\) in a power series, and when we differentiate it an even number of times we find for the even-numbered correlation functions\n%\n\\begin{align}\nC^{2N}(x_1 \\dots x_{2N}) &= \\qty[K^{-1}(x_1, x_2 ) K^{-1} (x_3, x_4) \\dots K^{-1} (x_{2N-1}, x_{2N})]_{\\text{symmetrized}}\n\\,,\n\\end{align}\n%\nwhere ``symmetrized'' means that we must sum over all the distinct ways of pairing the variables \\(x_i\\) in the arguments of the inverse kernels; on the other hand, the odd correlation functions \\(C^{2N+1}\\) all vanish since they correspond to the integrals of odd functions over all space.\n\nAs we mentioned in section \\ref{sec:statistical-methods}, in the Gaussian case the two-point function (or, equivalently, the kernel) contains all the information. \nStarting from it, we can reconstruct the Gaussian probability density functional \\(\\dd{\\mathcal{P}}[q]\\). \n\n% In the Gaussian case, as long as we know the two-point function, which corresponds to the inverse kernel, we can reconstruct the \\(N\\)-point function. \n\n\\subsubsection{Connected correlation functions}\n\nIn the Gaussian case the \\(N\\)-point correlation functions are redundant for \\(N > 2\\). We wish to introduce a notion of ``irreducible'' correlation functions which minimize the amount of redundancy: formally, we require that in the Gaussian case the ones with \\(N > 2\\) vanish. \n\n% This can also be stated by saying that the ``irreducible'' \\(N\\)-point functions are all zero except for \\(N=2\\), since all the higher ones can be reduced to that one.\n\nThese irreducible functions are called the connected \\(N\\)-point correlation functions, and as we will check in a moment they can be calculated through the \\emph{generating functional of connected correlation functions}:\n%\n\\begin{align}\n\\mathscr{W}[J] = \\log Z[J]\n\\,,\n\\end{align}\n%\nthrough the expansion \n%\n\\begin{align}\n\\mathscr{W}[J] &= \\sum _{n=1}^{\\infty } \\frac{i^{n}}{n!} \\int \\dd{x_1} \\dots \\dd{x_n} C^{n}_{C} (x_1 \\dots x_n) J(x_1) \\dots J(x_n)  \\\\\nC^{n}_C (x_1 \\dots x_n) &= \\frac{1}{i^{n}} \n\\eval{\\frac{ \\delta^{n} \\mathscr{W}[J]}{\\delta  J(x_1) \\dots \\delta J(x_n) }}_{J = 0}\n\\,.\n\\end{align}\n\n% These connected correlation functions \\(C^{N}_{C}\\) are not the same as the \\(C^{N}\\) from before, although they are related.\n\nWe could also have defined \\(\\mathscr{W}[J] = i \\log Z[J]\\); this is a matter of convention. \n\nIn the Gaussian case we have \n%\n\\begin{align}\n\\mathscr{W}[J] = \\log Z[J] = - \\frac{1}{2} (J, K^{-1}, J)  \n\\,,\n\\end{align}\n%\nwhich, by direct comparison with the Taylor expansion, means that \n%\n\\begin{align}\nC^{2}_{C} (x_1 , x_2 ) = K^{-1} (x_1 , x_2 )\n\\,,\n\\end{align}\n%\nwhile \\(C^{n}_{C} \\equiv 0\\) for any \\(n \\neq 2\\). \nThis is a validation of our ansatz: in the Gaussian case the connected functions beyond the second order vanish, which is the construction we hoped to make. \n\n\\subsection{Computing probabilities}\n\nWe now wish to apply the construction of probability density functionals: what we will typically ask about, besides correlation functions, is the probability density that the field will attain certain values at certain points.\nIn order to describe this we will need to introduce some mathematical tools, making use of both Legendre and Fourier transforms. 
\nWhat we will achieve is a general framework, although its application to non-Gaussian cases is complicated; therefore, we will only treat Gaussian examples. \n\nWe define the \\emph{classical field}\n%\n\\begin{align}\nq _{\\text{cl}}(x) = \\fdv{\\mathscr{W}[J]}{J(x)}\n\\,,\n\\end{align}\n%\nand the effective action \\(\\Gamma [q _{\\text{cl}}]\\) as the Legendre transform of \\(\\mathscr{W}[J]\\): \n%\n\\begin{align}\n\\Gamma [q _{\\text{cl}}] = \\mathscr{W}[J] - \\int \\dd{x} q _{\\text{cl}} (x) J(x) \n\\,,\n\\end{align}\n%\nfrom which we can then recover \\(J(x)\\) as \n%\n\\begin{align}\nJ(x) = - \\fdv{\\Gamma [q _{\\text{cl}}]}{q _{\\text{cl}}(x)}\n\\,.\n\\end{align}\n\nAlso, our expression for \\(q _{\\text{cl}}\\) yields \n%\n\\begin{align}\nq _{\\text{cl}}(x) = -\\int \\dd{y} K^{-1}(x, y) J(y) \n\\,,\n\\end{align}\n%\nfrom which we can express \\(J(x)\\) by using the direct kernel \\(K(x, y)\\): \n%\n\\begin{align}\n\\int \\dd{x} q _{\\text{cl}} (x) K (x, w) = - \\int \\dd{x} \\dd{y} K^{-1}(x, y) K(x, w) J(y) = - \\int \\dd{y} \\delta (y- w) J(y) = - J(w)\n\\,.\n\\end{align}\n\nWith an analogous procedure we can show that\n%\n\\begin{align}\n(J, K^{-1}, J) = (q _{\\text{cl}}, K, q _{\\text{cl}})\n\\,.\n\\end{align}\n\n% \\todo[inline]{So in some sense \\(J\\) is a covariant vector while \\(q\\) is a contravariant one, right? can this be said in a better way?}\n\nThe effective action then reads\n%\n\\begin{align}\n\\Gamma [q _{\\text{cl}}] &= \\eval{- \\frac{1}{2} (J, K^{-1}, J) + (J, K^{-1}, J)}_{J = J(q _{\\text{cl}})} \\\\\n&= \\frac{1}{2} (q _{\\text{cl}}, K, q _{\\text{cl}})\n\\,.\n\\end{align}\n\nNow we have the tools to consider actual probabilities: starting from our field \\(q\\), we want to compute the probability that it takes on a value \\(q(\\overline{x}) \\in (\\alpha , \\alpha + \\dd{\\alpha })\\) at a point \\(\\overline{x}\\): this is expressed with a probability density function in the form \n%\n\\begin{align}\n\\dv{P_q}{\\alpha } = P_{q} (\\alpha; \\overline{x})\n\\,.\n\\end{align}\n\nWe want to write this ``\\(P(\\alpha ) \\dd{\\alpha }\\)'' in terms of the functional integral; in order to do so, we start from the Fourier transform \n%\n\\begin{align}\n\\int \\dd{\\beta } \\exp(i \\beta \\varphi ) P_q (\\beta ; \\overline{x}) = \\expval{e^{i \\beta \\varphi }}_{\\beta }\n\\,.\n\\end{align}\n\nThe integral is a definite one, with bounds corresponding to the region in which \\(P_q\\) is nonzero, typically \\(\\mathbb{R}\\).\nThe right-hand side is an average over the possible values taken on by the field \\(q\\) at the point \\(\\overline{x}\\), but it can also be computed by averaging over the possible \\emph{overall} field configurations, computed at a point \\(\\overline{x}\\):\n% This also holds if we average over the position \\(x\\): \n%\n\\begin{align}\n\\int \\dd{\\beta } \\exp(i \\beta \\varphi ) P_q (\\beta ; \\overline{x}) = \\expval{e^{i q(\\overline{x}) \\varphi }}_{q }\n\\,.\n\\end{align}\n\nAlthough the expression is similar we have made a large conceptual leap: \\(\\expval{}_\\beta \\) denotes a one-dimensional integral, the Fourier transform, while \\(\\expval{}_q\\) denotes an \\emph{infinite}-dimensional functional integral, for which we must employ calculation tools such as \\eqref{eq:average-of-a-functional}. 
\n% This will be much harder to compute: it is a proper functional integral, to be computed according to \\eqref{eq:average-of-a-functional}; however it must give the same result.\nThe need for this step is motivated by the fact that the values of the field at different points are not independent: what we want to study are precisely the correlations between them, so in order to treat a single point we must consider the whole field.\n\n% \\todo[inline]{In the notes this is denoted as ``averaging over \\(q\\)'', but I do not see how that makes sense. Is \\(\\overline{x}\\) a typical, generic position?}\n\nIf we take the inverse Fourier transform, we find \n%\n\\begin{align}\nP_q (\\alpha , \\overline{x}) &= \\frac{1}{2 \\pi } \\int \\dd{\\varphi } e^{-i \\varphi \\alpha } \\expval{e^{i \\varphi q (\\overline{x})}}_q  \\\\\n&= \\frac{1}{2 \\pi } \\int \\dd{\\varphi } \\expval{e^{i \\varphi (q(\\overline{x}) - \\alpha )}}_q  \\\\\n&= \\expval{ \\delta (q(\\overline{x}) - \\alpha ) }_q\n\\,.\n\\end{align}\n\nThis equation can be interpreted to mean the following: ``the probability that the field \\(q\\) is equal to \\(\\alpha \\) at \\(\\overline{x}\\) is given by the integral of the probabilities of all the field configurations which satisfy \\(q(\\overline{x}) = \\alpha \\)''.\n\nThis formalism can be generalized to \\(N\\)-point functions; the notation for the probability that for \\(j = 1 \\dots N\\) the field \\(q\\) takes on the value \\(\\alpha _j\\) at position \\(x_j\\) is as follows: \n%\n\\begin{align}\n\\dd{P}_q^{N} &= P_q \\qty(\\alpha_1 \\dots \\alpha _N; x_1 \\dots x_N) \\dd{\\alpha_1 } \\dots \\dd{\\alpha _N}  \\\\\n&= P_q \\qty([\\alpha _N]; [x_N]) \\dd{\\alpha_1 } \\dots \\dd{\\alpha _N}  \\\\\nP_q \\qty([\\alpha _N]; [x_N]) \n&= \\expval{\\prod_{j=1}^{N} \\delta (q(x_j) - \\alpha _j)}_q\n\\,.\n\\end{align}\n\nStatistical independence would correspond to the statement that the product can be brought out of the average; we are interested in the general case, in which this does not happen.\n\nThe question we generally ask is: what is this probability? 
We can try to find an expression for it in terms of the partition function \\(Z[J]\\): we use once again the fact that \n%\n\\begin{align}\n\\delta (x) = \\frac{1}{2 \\pi } \\int \\dd{\\varphi } e^{i \\varphi x}\n\\,\n\\end{align}\n%\nto see that \n%\n\\begin{align}\nP_q ([\\alpha _N]; [X_N]) = \\frac{1}{(2 \\pi )^{N}} \\int \\dd{\\varphi_1 } \\dots \\dd{\\varphi _N} \\exp(- i \\sum _{j=1}^{N} \\varphi _j \\alpha _j)\n\\expval{\\exp(i \\sum _{j=1}^{N}\\varphi _j q(x_j))}_q\n\\,.\n\\end{align}\n\nThis can be brought back to the partition function by making use of the fact that \n%\n\\begin{align}\nZ[J] &= \\expval{\\exp(i (J, q))}_q  \\\\\n&= \\int \\mathcal{D}q P[q] \\exp( i \\int \\dd{x} J(x) q(x))\n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\n\\expval{\\exp(i \\sum _{j=1}^{N}\\varphi _j q(x_j))}_q\n= Z \\qty[ \\sum _{j=1}^{N} \\varphi _j \\delta (x - x_j)] \n= Z[ \\widetilde{J}_\\varphi ]\n\\,,\n\\end{align}\n%\nwhere we have used the fact that \n%\n\\begin{align}\ni \\int \\dd{x} q(x) \\qty(\\sum _{j=1}^{N} \\varphi _j \\delta (x-x_j))\n= i \\sum _{j=1}^{N} \\varphi_j q(x_j) \n\\,.\n\\end{align}\n\nSo, we can compute the probability that the field reaches the values \\(\\alpha _j\\) at the points \\(x_j\\) as long as we can compute the partition function at \\(Z[\\widetilde{J}_\\varphi ]\\), and then integrate \\(N\\) times: \n%\n\\begin{align}\nP_q ([\\alpha _N], [x_N]) = \\frac{1}{(2 \\pi)^{N}} \n\\int \\dd{\\varphi_1 } \\dots \\dd{\\varphi _N }\n\\exp(-i \\sum _{j=1}^{N} \\varphi _j \\alpha_j )\nZ[\\widetilde{J}_\\varphi ]\n\\,.\n\\end{align}\n\nLet us compute this for the case of a Gaussian random field \\(q(x)\\). The partition function evaluated at \\(\\widetilde{J}_\\varphi = \\sum _{j} \\varphi _j \\delta (x- x_j)\\) is equal to \n%\n\\begin{align}\nZ[\\widetilde{J}_\\varphi ] &= \\exp( - \\frac{1}{2} (\\widetilde{J}_\\varphi , K^{-1}, \\widetilde{J}_\\varphi )) \\\\\n&= \\exp( - \\frac{1}{2} \\sum _{i, j =1}^{N} \\varphi _i \\varphi _j K^{-1} (x_i, x_j))\n\\,.\n\\end{align}\n\nSince it appears in the expression for the partition function, let us define the \\emph{covariance matrix}\n%\n\\begin{align}\nM_{ij} = K^{-1}(x_i, x_j) = C^{(2)} (x_i, x_j)\n\\,,\n\\end{align}\n%\nso that the probability reads, using the usual result about Gaussian integrals,\n%\n\\begin{align}\nP_q ([\\alpha _N], [x_N]) &= \\frac{1}{(2 \\pi)^{N}} \n\\int \\dd{\\varphi_1 } \\dots \\dd{\\varphi _N }\n\\exp(-i \\sum _{j=1}^{N} \\varphi _j \\alpha_j )\n\\exp(- \\frac{1}{2} \\varphi_i M_{ij } \\varphi _j)  \\\\\n&= \\frac{1}{\\sqrt{(2 \\pi)^{N} \\det M}}\n\\exp(- \\frac{1}{2} \\alpha _i (M^{-1})_{ij} \\alpha _j)\n\\,.\n\\end{align}\n\nThis is the standard expression for an \\(N\\)-variate Gaussian distribution whose covariance matrix is \\(M_{ij}\\). \n\n% \\textbf{Application to Brownian motion}. \n\n\\subsection{Avoiding divergences: high- and low-pass filters}\n\nWe will now discuss the application of the filter functions introduced in section \\ref{sec:statistical-methods} to the path integral formulation. 
\nIf \\(W_R (x)\\) is our filter function, then the general correlation function can be calculated by substituting the value of the field at a point with its convolution with \\(W_R\\):\n%\n\\begin{align}\nC^{(N)}_R (x_1 \\dots x_N) &= \\int \\mathcal{D}q \\mathcal{P}[q] \\prod_{r = 1}^{N}\n\\int \\dd{y_r} q(y_r) W_R (\\abs{y_r - x_r})  \\\\\n&= \\int \\prod_{r=1}^{N} \\dd{y_r} W_R(\\abs{y_r - x_r}) C^{(N)} (y_1 \\dots y_N)\n\\,.\n\\end{align}\n\nThe same procedure can be applied to the \\emph{connected} correlation functions, so we can calculate the \\(C^{(N)}_{R, C}\\): \\emph{smooth, connected} \\(N\\)-point correlation functions. \n\nWe can calculate these ``smoothed'' correlation functions through the generating functional \\(Z[J]\\) by choosing \\(J(x)\\) in the form \n%\n\\begin{align}\nJ(x) =\\int \\dd{y} \\varphi (x+y) W_R (y)\n\\,,\n\\end{align}\n%\nwhere \\(\\varphi \\) is a generic function.\n\nSince these regularized correlation functions do not diverge at vanishing distances, we can define the \\(N\\)-th order \\textbf{moment}:\n%\n\\begin{align}\n\\expval{q_R^{N}} = C^{(N)}_R (x \\dots x) \n\\,\n\\end{align}\n%\nand the \\(N\\)-th order \\textbf{cumulant}: \n%\n\\begin{align}\n\\expval{q_R^{(N)}}_C = C^{(N)}_{R, C} (x \\dots x)\n\\,.\n\\end{align}\n\nDue to homogeneity and isotropy, neither of these depends on \\(x\\). \nWe expect these to diverge if we perform no filtering, which is equivalent to taking the filtering scale \\(R \\to 0\\).\nOn the opposite limit, taking \\(R \\to \\infty \\) amounts to averaging over all space, so we expect \\(\\expval{q_R^{N}} \\sim \\expval{q}^{N}\\), while \\(\\expval{q_R^{N}}_C \\sim 0\\). \n\nThe moments of \\(q_R\\) can be obtained through the moment generating function \n%\n\\begin{align}\nZ(\\varphi ) = Z[J_\\varphi (x) ] = Z[\\varphi W_R (\\abs{x})]\n\\,,\n\\end{align}\n%\nand we can define the \\emph{cumulant} generating function \\(W(\\varphi )\\) analogously. \nWe can then define an effective action through a Legendre transform:\n%\n\\begin{align}\n\\Gamma (q_{R, \\text{cl}}) = W(\\varphi ) - q_{R, \\text{cl}} \\varphi \n\\,,\n\\end{align}\n%\nwhere the classical field \\(q_{R, \\text{cl}}\\) is given by \n%\n\\begin{align}\nq_{R, \\text{cl}} = \\dv{W(\\varphi )}{\\varphi }\n\\,.\n\\end{align}\n\nThe function \\(Z(\\varphi )\\) is just the Fourier transform of \\(P_R(\\alpha , \\overline{x})\\): \n%\n\\begin{align}\nZ(\\varphi ) = \\expval{e^{i \\varphi q_R}}_q \n= \\int  \\dd{\\alpha } e^{i \\varphi \\alpha } P_{q_R} (\\alpha ; \\overline{x}) \n\\,,\n\\end{align}\n%\nso we can insert the explicit expression for the probability inside the inverse of this transform: \n%\n\\begin{align}\nP_R(\\alpha ; \\overline{x}) &= \\frac{1}{2 \\pi } \\int \\dd{\\varphi } e^{-i \\alpha \\varphi } Z(\\varphi ) \\\\\n&= \\frac{1}{2 \\pi } \\int \\dd{\\varphi } e^{-i \\alpha \\varphi }\n\\int \\mathcal{D}q \\mathcal{P}[q] \\exp(i \\varphi \\int \\dd{y} W_R (\\abs{y - \\overline{x}}) q(y) )\n\\,.\n\\end{align}\n\nWe can expand \\(Z(\\varphi )\\) and \\(W(\\varphi )\\) as \n%\n\\begin{align}\nZ(\\varphi ) &= 1 + \\sum _{N=1}^{\\infty } \\frac{i^{N}}{N!} \\varphi^{N} \\expval{q_R^{N}} \\\\ \nW(\\varphi ) &=  \\sum _{N=1}^{\\infty } \\frac{i^{N}}{N!} \\varphi^{N} \\expval{q_R^{N}}_C  \n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\n\\expval{q_R^{N}} = \\int \\dd{\\alpha } \\alpha^{N} P_{q_R} (\\alpha ; \\overline{x})\n\\,,\n\\end{align}\n%\nwhich clarifies what was meant by saying that these are \\emph{moments}. 
\n\\(P_{q_R}\\) can be reconstructed starting from these moments. \n\n\nWe can take a different path, by transforming the probability measure: we start from the probability density functional evaluated at \\(q\\), and calculate the expectation value of a generic functional \\(F[q]\\) as \n%\n\\begin{align}\n\\int \\mathcal{D}q P[q] F[q]\n\\propto \\int \\mathcal{D}q_R P[(W^{-1}, q_R)] F[(W^{-1}, q_R)]\n\\,.\n\\end{align}\n\nThe proportionality is because we need to include a Jacobian determinant; however, probability densities must always be normalized to 1, so any constant is inessential. \n\nApplying this to a Gaussian field amounts to mapping the kernel \\(K\\) to a ``smoothed'' kernel \\(K_R\\), which corresponds to the inverse of the smoothed two-point correlation function: \\(\\expval{q_R(x) q_R(y)}\\). \n\n% Now, we move to a more physical example. We consider a Gaussian field \\(q\\) in three dimensions, whose mean is zero and whose two-point function in Fourier space is \n% %\n% \\begin{align}\n% \\expval{q(k) q(k')} = (2 \\pi )^3 P(\\abs{k}) \\delta^{(3)} (k - k')\n% \\,,\n% \\end{align}\n% %\n% where \\(P(\\abs{k})\\) is called the \\textbf{power spectral density} of the field \\(q\\): it quantifies how much of the power of the field is transmitted at each frequency. \n\nAs we mentioned in section \\ref{sec:statistical-methods}, the smoothed two-point function is given by \n% \n\\begin{align}\n\\expval{q_R (x) q_R(y)} &= G_R ( \\abs{x-y}) = \\frac{1}{(2 \\pi )^3}\n\\int \\dd[3]{k} P(k) \\widetilde{W}^2_R(k) e^{i k \\cdot (x-y)}  \\\\\n&= \\frac{1}{2 \\pi^2} \\int_0^{\\infty } \\dd{k } k^2 P(k) \\widetilde{W}^2_R (k) j_0 (k \\abs{x-y})\n\\,,\n\\end{align}\n%\nwhere \\(j_0 \\) is a spherical Bessel function of the first kind. \nRecall that the two-point function is the inverse of the kernel: therefore, from the first expression we can read off the Fourier transform of \\(K^{-1}_R\\), or, inverting, \n%\n\\begin{align}\n\\widetilde{K}_R (k) = \\frac{1}{P(k) \\widetilde{W}_R^2(k)} \n\\,.\n\\end{align}\n\nNow, let us consider a typical power spectrum: \\(P(k) = A k^{n}\\). \nIt can be shown that the smoothed two-point function will be asymptotically (in the \\(r \\ll R\\) or \\(r \\gg R\\) limits) proportional to\n%\n\\begin{align}\nG_R (r) \\propto \\qty[\\max(R, r)]^{- (n+3)}\n\\,.\n\\end{align}\n\n% The \\(n = 0\\) case is \\textbf{white noise}, for which \\(G_R(0) \\sim R^{-3}\\). \n\n% \\todo[inline]{Connection with a Poisson process\\dots not clear}\n\n% The \\(n = -3\\) case is \\textbf{flicker noise} corresponds to \\(1/f\\) noise in 1D.\n\nIn the \\(n > 0\\) case the smoothed two-point function \\(G_R(r)\\) must be equal to zero somewhere, since we have the constraint\\footnote{This comes from the following line of reasoning: since the field has zero mean, \n%\n\\begin{align}\n\\int \\dd[3]{y} \\expval{q_R (x) q_R (y)} = \\expval{q_R (x) \\int \\dd[3]{y} q_R(y)} = 0\n\\,.\n\\end{align}}\n%\n\\begin{align}\n\\int_0^{\\infty } \\dd{r} r^2 G_R(r) = 0\n\\,.\n\\end{align}\n\n% \\todo[inline]{What does this have to do with \\(n>0\\)? Isn't this constraint always present?}\n\nThe first zero-crossing of the correlation function, \\(r_0 \\) such that \\(G_R(r_0 ) = 0\\), is called the \\textbf{coherence length}. \n\n% If \\(n < 0\\), we have fractal behaviour: as long as \\(G_R(r) \\gg 1\\), the fractal dimension is \\(D_F = -n\\). 
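\nThe scaling claimed above is easy to verify numerically (a sketch with an assumed Gaussian window \\(\\widetilde{W}_R(k) = e^{-k^2 R^2 /2}\\), which is not necessarily the filter used elsewhere in these notes): for \\(P(k) = A k^{n}\\), the smoothed variance \\(\\sigma_R^2 = G_R(0)\\) should scale as \\(R^{-(n+3)}\\).\n%\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sigma2(R, n, A=1.0):\n    # trapezoidal quadrature of (1/2 pi^2) * int k^2 P(k) W^2 dk, with j_0(0) = 1\n    k = np.linspace(1e-6, 40.0 / R, 200_001)\n    integrand = k**2 * (A * k**n) * np.exp(-(k * R) ** 2)\n    dk = k[1] - k[0]\n    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dk / (2 * np.pi**2)\n\nn = 1\nfor R in (0.5, 1.0, 2.0):\n    print(R, sigma2(R, n) * R ** (n + 3))  # constant across R, ~1/(4 pi^2) for n = 1\n\\end{lstlisting}\n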
\n% \\todo[inline]{Clarify this\\dots how would we show it?}\n\n\\subsection{Saddle-point expansion}\n\nWe want to show that in general, our probability measure can be written as \n%\n\\begin{align}\n\\mathcal{P}[q] = \\exp(- \\frac{1}{2} (q, K, q) - V[q])\n\\,,\n\\end{align}\n%\nwhere the potential term encompasses all the non-quadratic parts of the probability.\nLinear terms can be removed with a change of variable, so we can expand it starting from third order: \n%\n\\begin{align}\nV[q] = \\sum _{n=3}^{\\infty } \\int \\dd{x_1 } \\dots \\dd{x_n} K^{(n)} (x_1 \\dots x_n) q(x_1 )\\dots q(x_n)\n\\,.\n\\end{align}\n\n% Suppose that we did not know this, and we wanted to write a generic functional in the form\n% %\n% \\begin{align}\n% Z[J] = \\int \\mathcal{D}q \\exp(- \\frac{1}{2} (q, K, q) - V[q] + i (q, J) )\n% \\,,\n% \\end{align}\n% %\n% with arbitrary \\(V[q]\\). \n\nWe start by introducing the ``integrating by parts'' relation:\n%\n\\begin{align}\nZ[J] &= \\exp(- V \\qty(-i \\fdv{}{J})) \n\\int \\mathcal{D}q \\exp(- \\frac{1}{2} (q, K, q) + i (q, J) )  \\\\\n&= \\exp(- V \\qty(-i \\fdv{}{J}))\n\\underbrace{\\exp(- \\frac{1}{2} (J, K^{-1}, J))}_{Z_0 [J]}  \\\\\n&= \\sum _{n=0}^{\\infty } \\frac{(-)^{n}}{n!} \\qty[V \\qty(-i \\fdv{}{J})]^{n} Z_0 [J]\n\\,.\n\\end{align}\n\nWith this approach, we can recover the probability of the field attaining a certain value as a function of the cumulants \\(\\expval{q_R^{N}}_C\\), since as we have shown earlier the function \\(W(\\varphi )\\) can be expressed in terms of them.\\footnote{The calculation needed to do so is quite involved; more details can be found in the second section of \\textcite[]{matarresePathintegralApproachLargescale1986}. Here we merely meant to show that such a procedure is possible, by introducing all the tools needed to perform it.} \n\nThe \\textbf{saddle point expansion} is the following: we want to compute an integral in the form \n%\n\\begin{align}\nI = \\int \\dd{\\tau } \\exp(\n    - f(\\tau )\n)\n\\,,\n\\end{align}\n%\nso we choose a \\(\\tau_0 \\) such that \\(f' (\\tau_0 ) = 0\\), and then approximate \\(f\\) up to second order: \n%\n\\begin{align}\nI &\\approx \\int \\dd{\\tau } \\exp(\n    - f(\\tau_0 ) - \\frac{1}{2} f''(\\tau_0 ) (\\tau - \\tau_0  )^2\n)  \\\\\n&\\approx \\sqrt{\\frac{2 \\pi }{f''(\\tau_0 )}} \\exp(- f(\\tau_0 ))\n\\,.\n\\end{align}\n\nWe are effectively approximating the integrand as a Gaussian, whose mean and variance are computed from the expansion of \\(f\\) at its stationary point. \n\nWe can apply this to the calculation of path integrals: if the probability \\(P(q)\\) can be written in terms of \\(Z(\\varphi ) = \\exp(W(\\varphi ))\\), then we have \n%\n\\begin{align}\nP(q) &= \\frac{1}{2 \\pi } \\int \\dd{\\varphi } \\exp(-i \\varphi q + W(\\varphi ))\n\\,.\n\\end{align}\n\nThe classical variable \\(q _{R, \\text{cl}} = \\overline{q}\\) is defined as \\(\\overline{q}= W' (\\varphi ) \\), and the effective action is written in terms of it: \\(\\Gamma (\\overline{q}) = W(\\varphi ) - \\overline{q} \\varphi \\). 
Therefore, \n%\n\\begin{align}\n\\dv{\\overline{q}}{\\varphi } = W''(\\varphi )\n\\,,\n\\end{align}\n%\nwhile the derivative of the effective action reads \n%\n\\begin{align}\n\\Gamma '' (\\overline{q})\n= \\dv{}{\\overline{q}}\n\\qty(- \\varphi ) \n= - \\qty[W''(\\varphi )]^{-1}\n\\,.\n\\end{align}\n\nThis means that \\(\\dd{\\varphi } = - \\Gamma'' (\\overline{q}) \\dd{\\overline{q}}\\), which allows us to change variable: \n%\n\\begin{align}\nP(q) = -\\frac{1}{2 \\pi } \\int \\Gamma '' (\\overline{q}) \\dd{\\overline{q}}\n\\exp( i q \\Gamma '(\\overline{q}) + \\Gamma (\\overline{q}) - \\overline{q} \\Gamma ' (\\overline{q}))\n\\,,\n\\end{align}\n%\nwhere we used the fact that \n%\n\\begin{align}\n- i \\varphi q + W(\\varphi ) &= - i \\varphi q + \\Gamma (\\overline{q}) + \\overline{q} \\varphi    \\\\\n&= \\Gamma (\\overline{q}) + (iq - \\overline{q}) \\dv{\\Gamma (\\overline{q})}{\\overline{q}}\n\\,.\n\\end{align}\n\nFor large \\(q\\), this oscillating integral is dominated by the points at which the phase does not change much: these are the stationary points of the argument of the exponential, \n%\n\\begin{align}\niq \\Gamma'' (\\overline{q})  \n+ \\Gamma '(\\overline{q}) - \\Gamma '(\\overline{q}) - \\overline{q} \\Gamma '' (\\overline{q}) \n=\n\\Gamma'' (\\overline{q}) (iq - \\overline{q} )\n&= 0\n\\,,\n\\end{align}\n%\nmeaning that \\(\\overline{q} = iq\\).\n\nWe can then apply the saddle-point approximation: \n%\n\\begin{align}\nP(q) &\\approx \\frac{e^{\\Gamma (iq)}}{2 \\pi } \\Gamma '' (iq)\n\\int \\dd{\\overline{q}} \\exp(- \\Gamma ''(iq) \\frac{\\qty(\\overline{q} - iq)^2}{2})  \\\\\n&\\approx \\sqrt{\\frac{\\Gamma '' (iq)}{2 \\pi }} \\exp(\\Gamma (iq))\n\\,.\n\\end{align}\n\nIn the Gaussian case the probability reads \n%\n\\begin{align}\nP(q) = \\frac{1}{\\sqrt{2 \\pi \\sigma _R^2}} \\exp(- \\frac{q^2}{2 \\sigma _R^2})\n\\,,\n\\end{align}\n%\nso we can recognize \\(\\Gamma (\\overline{q}) = \\overline{q}^2 / (2 \\sigma _R^2)\\), while \\(W (\\varphi ) = -\\sigma _R^2 \\varphi^2 / 2\\). \n\nWe can apply this approximation in order to prove the claim from the start of this section: suppose we had a generating functional in the form\n%\n\\begin{align}\nZ[J] = \\int \\mathcal{D}q \\underbrace{\\exp(- \\frac{1}{2} (q, K, q) - V[q] + i (q, J))}_{\\exp(-\\mathscr{F}[q, J])}\n\\,\n\\end{align}\n%\nwith arbitrary \\(V\\), so that \\(\\mathscr{F}[q, J] = \\frac{1}{2} (q, K, q) + V[q] - i (q, J)\\).\n\n% The functional \\(\\mathscr{F}[q, J]\\) is the solution of the \nThe functional integral will be dominated by the stationary points of \\(\\mathscr{F}[q, J]\\): we define \\(q_0 \\) as the field configuration satisfying\n%\n\\begin{align} \\label{eq:functional-equation-for-q0}\n\\eval{\\fdv{\\mathscr{F}[q, J]}{q}}_{q_0 } = (K, q_0 ) + \\eval{\\fdv{V[q]}{q}}_{q_0 } - iJ = 0\n\\,.\n\\end{align}\n\nThis will depend on \\(J\\) in general. We then expand up to second order, as usual: \n%\n\\begin{align}\n\\mathscr{F}[q, J] \\approx \\mathscr{F}[q_0 , J] + \\frac{1}{2} \\int \\dd{x} \\dd{y} \\eval{\\frac{ \\delta^2 \\mathscr{F}[q, J]}{ \\delta q(x) \\delta q(y) }}_{q_0 } \\eval{(q(x) - q_0 (x)) (q(y) - q_0 (y))}_{\\text{symm}}\n\\,.\n\\end{align}\n\nThanks to the functional equation for \\(q_0 \\) \\eqref{eq:functional-equation-for-q0}, we can write \\(\\mathscr{F}[q_0 , J]\\) as \n%\n\\begin{align}\n\\mathscr{F}[q_0 , J] &= \\frac{1}{2} (q_0 , K, q_0 ) + V[q_0 ] - i (J, q_0 )  \\\\\n&= V[q_0 ] - \\frac{1}{2} \\int \\dd{x} q_0 (x) \\eval{\\fdv{V[q]}{q(x)}}_{q_0 } - \\frac{i}{2}  (q_0, J) \n\\,.\n\\end{align}\n%\n\nThis is usually the ``bottleneck'' in the calculation, practically speaking. 
It is, in general, very hard to solve this functional equation.\nAlso, we define\n%\n\\begin{align}\n\\eval{\\frac{ \\delta^2 \\mathscr{F}}{ \\delta q(x_1 ) \\delta  q(x_2 )}}_{q_0 }\n= K(x_1 , x_2 ) + \\eval{\\frac{ \\delta^2 V[q]}{ \\delta q(x_1 ) \\delta q(x_2 )}}_{q_0 } = \\mathscr{Q}(x_1 , x_2 )\n\\,.\n\\end{align}\n\n% \\todo[inline]{This confuses me: earlier we said that \\(V\\) starts from third order, since the quadratic term is brought out. What are we doing here? If we only consider the Gaussian part of \\(V\\) what is even the point of including it?}\n\nWe then perform the integration with respect to the field \\(q - q_0 \\) (this amounts to a constant shift, so the measure does not change) and find \n%\n\\begin{align}\nZ[q_0 , J] \\approx \\mathscr{N}\\frac{\\exp(- \\mathscr{F}[q_0, J ])}{\\sqrt{\\det \\mathscr{Q} [q_0, J]}}\n\\,.\n\\end{align}\n\nUsing the identity \\(\\log \\det M = \\Tr \\log M\\) we can compute the generating functional for cumulants:\n%\n\\begin{align}\nW[q_0 , J] \\approx -\\mathscr{F}[q_0 , J] - \\frac{1}{2} \\Tr \\log \\mathscr{Q}[q_0, J] + \\const\n\\,.\n\\end{align}\n\n\\subsection{Path integrals in Quantum Mechanics}\n\n\\subsubsection{Nonrelativistic quantum mechanics}\n\nThe path integral formalism in quantum mechanics was first introduced in the first half of the twentieth century by the works of Dirac and Feynman.\nAlthough it is often more unwieldy than, say, solving the Schr\u00f6dinger equation, it is much more powerful, and it has found wide-ranging applications in Quantum Field Theory.\n\nIn extreme synthesis, in order to compute the probability amplitude that a particle is first found in a (one-dimensional) position \\(x_1 \\) at a time \\(t_1 \\) and then at a position \\(x_2 \\) at a time \\(t_2 \\) we compute a functional integral in the form \n%\n\\begin{align}\n\\braket{x_2 , t_2 }{x_1 , t_1  } \n= \\int \\mathcal{D}q \\exp( i \\int_q L (q, \\dot{q}\n, t) \\dd{t}) = \\int \\mathcal{D}q \\exp( i S[q])\n\\,,\n\\end{align}\n%\nwhere \\(L\\) is the Lagrangian of the particle, \\(q (t)\\) is a possible path it can take from \\(x_1 \\) to \\(x_2 \\), and the integral extends over all of them: the integration measure can be understood as the limit\n%\n\\begin{align}\n\\mathcal{D}q = \\lim_{N \\to \\infty } \\prod_{i=1}^{N-1} \\dd{x_i}\n\\,,\n\\end{align}\n%\nwhere we ``slice'' the time interval \\(t_2 - t_1 \\) into \\(N \\to \\infty \\) equal parts, \\(\\dd{x_i}\\) being the integration element for the \\(i\\)-th slice. \n\nThe integrand is oscillatory: if we are far from the classical path, defined by \\(\\delta S[ q _{\\text{cl}}] = 0\\), the contributions to the integral destructively interfere, while constructive interference happens near the classical path. \nThe fully classical approach is to just consider the equations of motion \\(\\delta S = 0\\); we can take a \\emph{semiclassical} approach with a saddle-point approximation, by expanding the action up to second order. The paths which will contribute will be, roughly speaking, those for which \\(S[q] - S[q _{\\text{cl}}] \\lesssim \\hbar = 1\\) (we are working in natural units). \n\nQualitatively speaking, this oscillatory version of the integral of \\(\\exp(i S[q])\\) converges just like the Euclidean integral of \\(\\exp(- S[q])\\) we have seen up to now; however, it does so much more slowly.\nIn both cases we expect the dominant contribution to come from the stationary points of \\(S[q]\\). 
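\n\nThe accuracy of the saddle-point formula introduced in the previous subsection is easy to probe numerically (an illustrative sketch of our own; the quartic test function is an arbitrary choice):\n%\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\n# f(t) = t^2/2 + g t^4 has stationary point t0 = 0, f(t0) = 0, f''(t0) = 1.\ng = 0.01\nt = np.linspace(-10, 10, 400_001)\nexact = np.exp(-(t**2 / 2 + g * t**4)).sum() * (t[1] - t[0])\nsaddle = np.sqrt(2 * np.pi)  # sqrt(2 pi / f''(t0)) * exp(-f(t0))\nprint(exact, saddle)         # ~2.44 vs ~2.51; they converge as g -> 0\n\\end{lstlisting}\nThe oscillatory Minkowskian integral of \\(e^{i S[q]}\\) converges far more slowly than this Euclidean example, which motivates the following trick. 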
\n\nBecause of this, a common technique is the \\textbf{Wick rotation}, in which we map the time to a new ``euclidean time'': \\(t \\to -i t_E\\), which allows us to move to a Euclidean path integral in 4D space with a Euclidean (positive-definite) signature.\n\n\\subsubsection{Quantum Field Theory}\n\nLet us look at a relatively simple example application of the path integral in QFT, describing the motion of a massive scalar boson with Lagrangian \n%\n\\begin{align}\n\\mathscr{L} = \\underbrace{\\frac{1}{2} \\partial_{\\mu } \\phi \\partial^{\\mu } \\phi - \\frac{1}{2} \\mu^2 \\phi^2}_{\\mathscr{L}_{0}}+ \\mathscr{L}_I (\\phi )\n\\,,\n\\end{align}\n%\nwhere the self-interaction term is some non-quadratic function of \\(\\phi \\), often taken to be proportional to  \\(\\phi^3\\)  or \\(\\phi^{4}\\). \nIn this case we are considering self-interaction only, but the interaction between different fields can also be treated in a similar, perturbative way.\n\nThe Feynman path integral corresponding to this Lagrangian is given by the functional \n%\n\\begin{align}\nZ[J] = N \\int \\mathcal{D} \\phi \\exp(i \\int \\qty( \\mathscr{L}(\\phi ) +  J \\phi ) \\dd{x} )\n\\,.\n\\end{align}\n\nLet us start with the non-interacting case, that is, we compute \\(Z_0 \\) with only the quadratic term in the Lagrangian. This can be expressed, in the formalism from before, using the kernel \n%\n\\begin{align}\nK(x, y )= (- \\square_x - \\mu^2 ) \\delta (x-y)\n\\,.\n\\end{align}\n\nNow, the expression for the functional is given in terms of \\(K^{-1}\\): what is the inverse of this kernel? The definition reduces to \n%\n\\begin{align}\n\\int K(x, y) K^{-1}(y, z) \\dd{y} &= \\delta (x-z)   \\\\\n- \\qty(\\square_x + \\mu^2) K^{-1} (x, z) &= \\delta (x-z)\n\\,,\n\\end{align}\n%\nwhich is readily solved in momentum space, with a \\(+i \\epsilon \\) prescription in order to avoid the poles in the integration: what we find is called the \\emph{Green's function}, \n%\n\\begin{align}\nK^{-1}(x, z) = G(x-z) = \\frac{1}{(2 \\pi )^{4}} \\int \\frac{e^{-ik \\cdot (x-z)}}{k^2 - \\mu^2 + i \\epsilon } \\dd{k} \n\\,,\n\\end{align}\n%\nso the unperturbed functional reads \n%\n\\begin{align}\nZ_0 [J] = \\exp(- \\frac{i}{2} \\int \\dd{x} \\dd{y} G(x-y) J(x) J(y))\n\\,.\n\\end{align}\n\n% This, evaluated at \\(J =0\\), can be used to calculate the \\emph{propagator}, which quantifies the probability amplitude that a particle localized at a certain position at a certain time will be found at another position, at another time.\n% This can be used to compute expectation \n\nThis by itself might not seem very useful: the motion of a free massive boson can be calculated with simpler methods.\nHowever, the real power of this path integral is the possibility to write the interacting term perturbatively: the interaction Lagrangian is a function of \\(\\phi \\), and each factor of \\(\\phi \\) inside the integral can be generated by applying \\(\\frac{1}{i} \\fdv{}{J(x)}\\) to the exponential appearing in \\(Z_0 [J]\\); so we can express the full functional as \n%\n\\begin{align}\nZ[J] &= \\exp( i\\int \\dd{x} \\mathscr{L}_I\\qty( \\frac{1}{i} \\fdv{}{J(x)}))\n\\underbrace{\\int \\mathcal{D} \\phi \\exp(i \\int \\dd{x} \\qty( \\mathscr{L}_0 + J \\phi )) }_{= Z_0 [J]}  \\\\\n&= \\sum _{n=0}^{\\infty } \\frac{i^{n}}{n!} \\qty[ \\int \\dd{x} \\mathscr{L}_I\\qty(\\frac{1}{i} \\fdv{}{J(x)})]^{n} Z_0 [J]\n\\,.\n\\end{align}\n\nWe can use this to compute the Green's functions: \n%\n\\begin{align}\nG(x_1 , \\dots , x_n) = \\eval{\\frac{1}{i^{n}} \\frac{ \\delta^{n} Z[J]}{ \\delta J (x_1 ) \\dots \\delta 
J(x_n)}}_{J = 0}\n\\,.\n\\end{align}\n\nWe can now also draw some physical intuition about the term \\(J \\phi \\): it is called a ``source term'', and by the way it appears in the Lagrangian we could interpret it as a part of the potential, by mapping \\(V \\to V_J = V - J \\phi \\).\nThis is then precisely equivalent to the addition of an external force \\(J\\) to the system. \n\n\\subsubsection{Diagrammatic approach}\n\n% \\todo[inline]{Review from the PI notes why this makes sense.}\nThe calculation of the path integral in the interacting case is generally intractable; we proceed perturbatively, and a very useful calculation tool is given by Feynman diagrams. \nWe will only discuss their application as a way to better understand the correlation functions we defined earlier.\n% They are whole new topic, of which we only sketch the workings.  \n\nGreen's functions, which correspond to \\(N\\)-point correlation functions in the cosmological case, can be graphically represented by drawing lines between \\(N\\) points.\n\nEach term in the interaction Lagrangian corresponds to a way to join these points: for example, if we had a \\(\\phi^3\\) term in the interaction Lagrangian (or in the potential, if we are considering self-interaction) it would correspond to a vertex with three lines coming from it. \nOn the other hand, the regular \\(\\phi^2\\) Gaussian term allows us to connect two points. \nThis allows us to understand the result from \\ref{sec:gaussian-field-correlation-functions}: if we want to connect \\(2N\\) points we can do so with lines in several ways, while if we wish to connect \\(2N+1\\) points at least one will be left out. \n\nFurther, what is meant by ``connected'' correlation functions has a direct graphical interpretation: if we connect \\(N > 2\\) points in pairs the graph is composed of disjoint parts. \nThe only possible connected graph in the Gaussian case is the \\(N=2\\) one.\n\n% In the general non-Gaussian case, on the other hand, we can have higher order terms in the potential. \n\n\\end{document}\n", "meta": {"hexsha": "17864fc130084040d4092d16d57b8cceb5a91c84", "size": 46113, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_second_semester/path_integral/path_integral.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_second_semester/path_integral/path_integral.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_second_semester/path_integral/path_integral.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 43.5850661626, "max_line_length": 525, "alphanum_fraction": 0.6628065838, "num_tokens": 15478, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5883898939040154}}
{"text": "\\subsubsection{Example: BOD mineralization under rate-limited aeration: }\nIn this example we will model a batch system containing a aerobically degrading substrate that recieves dissolved oxygen through aeration. The parameters of the experiment are assumed to be as follows:\n- \\textbf{Area: } \\textit{0.2 $m^2$}\\\\\n- \\textbf{Depth: } \\textit{0.3 m}\\\\\n- \\textbf{Aeration model: } \\textit{Rate limited} \\\\\n- \\textbf{Oxygen transfer rate coefficient: } \\textit{2 $day^{-1}$}\\\\\n- \\textbf{Initial BOD concentration: } \\textit{25 mg/L}\\\\\n- \\textbf{Initial DO concentration: } \\textit{7 mg/L}\\\\\n- \\textbf{BOD mineralization rate, ($k_d$): } \\textit{10 $day^-1$}\\\\\n- \\textbf{DO half saturation concentration: } \\textit{2 mg/L}\n- \\textbf{BOD half saturation concentration: } \\textit{5 mg/L}\n\n\nBelow are the steps to create the model:\n\n\\begin{itemize}\n\n\\item Start GIFMod or create a new project\n\\item \\textbf{Add constitients: } Add two constituents called BOD and DO by right-clicking on \\textbf{Project Explorer}$\\rightarrow$\\textbf{Water Quality}$\\rightarrow$\\textbf{Constituents} and then clicking on \\textbf{Add Constituents}\n\\item \\textbf{Creating an external flux object: } Right-click on \\textbf{Project Explorer}$\\rightarrow$\\textbf{Water Quality}$\\rightarrow$\\textbf{External Fluxes} and click on \\textbf{Add External Flux} \n\\item Set the following properties for the external flux object that was just added:  \\\\\n- \\textbf{Name: } \\textit{Aeration} \\\\\n- \\textbf{Coefficient: } \\textit{2 $day^{-1}$} \\\\\n- \\textbf{Constituent: } \\textit{DO}\\\\\n- \\textbf{Model: } \\textit{Constant rate} \\\\\n- \\textbf{Saturation: } \\textit{8.5 mg/L}\\\\\n\n\n\\item \\textbf{Add a pond: } A pond block is used to represent the batch system. From the top tool bar, click on the pond icon \\includegraphics[width=0.5cm]{Icons/pond_icon.png}.\n\\item Set the following properties for the pond that was added. \n- \\textbf{Bottom area: } \\textit{0.2$m^2$} \\\\\n- \\textbf{Initial water depth: } \\textit{0.3 m} \\\\\n- \\textbf{Constituent initial concentration: } \\textit{BOD=25 mg/L, DO=7mg/L}\\\\\n- \\textbf{External Flux: } \\textit{Aeration} \\\\\n\n\\item \\textbf{Adding reactions parameters: } Add the following reaction parameters: \n- BOD maximum decay rate, k\\_d, value = 10 $day^{-1}$\\\\\n- DO half saturation constant, K\\_o, value = 2 mg/L \\\\\n- BOD half saturation constant, K\\_s, value = 5 mg/L \\\\\n\n\\item \\textbf{Setting reactions: } Set the reaction network as shown in Figure \\ref{fig:28}.\n\\item \\textbf{Setting simulation duration: } Set the simulation duration to 20 days by setting the \\textbf{Simulation end time: } to Jan-10-1990 from \\textbf{Project settings}. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=8cm]{Images/Figure28.png} \\\\\n\\caption{Reaction network for the simple BOD model}\\label{fig:28}\n\\end{center}\n\\end{figure}\n\n\\item \\textbf{Running the model: }The model is ready to run. Click on the \\textbf{forward run bottom} \\includegraphics[width=0.5cm]{Icons/run_icon.png} and wait until the simulation ends. \n\n\\item \\textbf{Inspecting the results: } Right-click on the block identified as \\textbf{Pond (1)} and choose \\textbf{Plot Water Quality Results}$\\rightarrow$\\textbf{DO}. Similarly check the BOD results. 
The graphs should look like figure \\ref{fig:29}.\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{c c}\na) \\includegraphics[width=5cm]{Images/Figure29a.png} & b) \\includegraphics[width=5cm]{Images/Figure29b.png}\\\\\n\\end{tabular}\n\\caption{Temporal variation of a) DO and b) BOD in the simple batch test with aeration example}\\label{fig:29}\n\\end{center}\n\\end{figure}\n\n\\end{itemize}", "meta": {"hexsha": "c12dff5951b29e96faf1c849e7b6c024f7136ca8", "size": 3568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "GIFMod User's Manual/Aeration_ex.tex", "max_stars_repo_name": "ArashMassoudieh/GIFMod_", "max_stars_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-11-20T19:32:27.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-28T06:08:45.000Z", "max_issues_repo_path": "GIFMod User's Manual/Aeration_ex.tex", "max_issues_repo_name": "ArashMassoudieh/GIFMod_", "max_issues_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-07-04T05:40:30.000Z", "max_issues_repo_issues_event_max_datetime": "2017-07-04T05:43:37.000Z", "max_forks_repo_path": "GIFMod User's Manual/Aeration_ex.tex", "max_forks_repo_name": "ArashMassoudieh/GIFMod_", "max_forks_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-11-09T22:00:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-08-30T10:56:08.000Z", "avg_line_length": 57.5483870968, "max_line_length": 250, "alphanum_fraction": 0.7345852018, "num_tokens": 1080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.58838988682627}}
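For reference, the mass balances integrated in the example above have the form (a sketch under our own assumptions, namely a dual-Monod rate law and a 1:1 O$_2$:BOD stoichiometry, with $k_d$ absorbing the units needed for $r$ to be a concentration rate; the exact kinetics are the ones encoded in the reaction network of Figure \\ref{fig:28})\n\\[ \\frac{d\\,BOD}{dt} = -r, \\qquad \\frac{d\\,DO}{dt} = k_{a}\\,(DO_{sat} - DO) - r, \\qquad r = k_d\\,\\frac{BOD}{K_s + BOD}\\,\\frac{DO}{K_o + DO}, \\]\nwhere the rate-limited aeration term uses the values set above, $k_a = 2$ $day^{-1}$ and $DO_{sat} = 8.5$ mg/L. Qualitatively, one expects DO to dip while BOD is abundant and then recover toward saturation as the substrate is depleted.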
{"text": "\\section{Three Dimensions}\\label{sec:3D}\n\nIn this section we describe a two fermion system with a contact interaction, considering both unitarity and, later, a finite scattering length.\nWe implement the Hamiltonian of this system in \\eqref{p space hamiltonian} in a three-dimensional cubic box of linear size $L$ with $N$ sites and lattice spacing $\\epsilon=L/N$.\nAt first, the interaction parameter $C(\\Lambda)$ of this system is tuned in the regular way--so that the ground state energy of the system matches the first intersection of the spherical zeta function $S^{\\spherical}_3$ (evaluated using software provided by \\Refs{Morningstar:2017spu,Morningstar:2hib}) with the physical phase shifts \\eqref{spherical quantization}.\nAfter the interaction parameter is tuned to machine precision, the low-lying energy levels for the fixed volume and fixed lattice spacing are extracted using numeric exact diagonalization.\n\nThe tuning procedure to intersections of the zeta function with the physical phase shifts ensures that the finite-volume effects are incorporated in the energy levels and thus the contact interaction parameter is independent of the volume length $L$.\nHowever, the interaction strength still depends on the implementation of the kinetic operator and the lattice spacing.\nTherefore the strength has to be retuned for each lattice discretization implementation.\nThis discretization dependence has the consequence that in order to obtain \\textit{pure} finite-volume energy levels which can be used to compute physical phase shifts, each lattice energy level (besides the input ground state), has to extrapolated to the continuum first.\nOnly when using these continuum energy levels in \\Luscher's formalism can one expect to extract infinite volume scattering information.\n\nIn practice, it is not always possible to compute any energy level in the continuum limit before using it in the finite-volume \\Luscher formalism.\nWe therefore present consequences of the following scenarios; to obtain physical scattering data, we\n\\begin{enumerate}\n\t\\item perform a continuum limit of the spectrum before inserting it in \\Luscher's zeta function,\n\t\\item insert finite-spacing energy levels into \\Luscher's zeta function, followed by a continuum limit,\n\t\\item utilize the dispersion zeta function to simultaneously perform a continuum and infinite volume limit,\n\t\\item subtract lattice artifacts from finite-spacing eigenvalues before inserting them in the standard zeta function.\n\\end{enumerate}\nThe results for these approaches are obtained for the following parameters\n\\begin{equation}\n    \\{ L \\,[\\mathrm{fm}]= 1, 2 \\}\n    \\times \\left\\{ \\epsilon \\,[\\mathrm{fm}] = \\frac{1}{4}, \\frac{1}{5}, \\frac{1}{10}, \\frac{1}{20}, \\frac{1}{40}, \\frac{1}{50} \\right\\}\n    \\times \\{ n_s = 1, 2, 3, 4, \\infty \\}\n    \\, ,\n\\end{equation}\nas long as $N = L / \\epsilon \\leq 50$.\n\n\n\\input{section/three-dimensions/regular-luescher}\n\\input{section/three-dimensions/dispersion}\n", "meta": {"hexsha": "5a66253358bf50b50361a9391cce0e05dcd0618e", "size": 2965, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/luescher-nd/section/three-dimensions.tex", "max_stars_repo_name": "ckoerber/luescher-nd", "max_stars_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-12T22:19:38.000Z", "max_stars_repo_stars_event_max_datetime": 
"2021-07-26T14:06:49.000Z", "max_issues_repo_path": "paper/luescher-nd/section/three-dimensions.tex", "max_issues_repo_name": "ckoerber/luescher-nd", "max_issues_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-12-16T19:49:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-02T00:50:31.000Z", "max_forks_repo_path": "paper/luescher-nd/section/three-dimensions.tex", "max_forks_repo_name": "ckoerber/luescher-nd", "max_forks_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.2058823529, "max_line_length": 365, "alphanum_fraction": 0.7865092749, "num_tokens": 684, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5883885107428848}}
{"text": "\\section{Learning from failed attempts}\n\\label{sec:feedback}\n\n\n\\subsection{Deriving Structural Constraints}\nOne key feature of our synthesis is the feedback loop structure that allows us to learn from incorrect program structures.\nEach time that the parameter tuning phase fails to produce a program $p(c)$ that meets the cost threshold, we can extract information from that search which allows us to improve our choice of program structure from within the language of the current grammar, $\\languageOf{G}$.\n\n\n\\begin{exmp} Eliminating Unproductive DSP structures\n\nTake as an example a synthesis attempt that produced the following log of candidate DSP programs and their associated costs.\n\n\\begin{lstlisting}\n(LPF 800 $\\arrComp$ HPF 200, 80)\n(LPF 200 $\\arrComp$ HPF 800, 83)\n(HPF 800 $\\arrComp$ LPF 200, 20)\n(HPF 200 $\\arrComp$ LPF 800, 23)\n\\end{lstlisting}\n\nFrom this we want to learn something about the discrete space of program structure. \nThis is where we are working in a semantics-driven programming-by-example style\nWe can see that the structural form \\texttt{LPF \\_ $\\arrComp$ HPF \\_} is way worse than \\texttt{HPF \\_ $\\arrComp$ LPF \\_}.\nFrom this, we can learn a structural constraint that we should not allow the form \\texttt{LPF \\_ $\\arrComp$ HPF \\_}.\nWe will add this as a \\textit{derived structural constraint}, and continue synthesis.\n\\end{exmp}\n\n\\subsection{Deriving Metric weights}\nAdditionally, we can improve our next parameter search over a new program structure by learning from synthesis attempts over other program structures.\nTo do this, we map the parameters from the best program in the previous parameter search onto the new program structure as an initial starting point for metrical synthesis.\n\nThis is where we are working in a numerical-driven supervised learning style.\n\n\\begin{exmp}\nWe take the same log from above, and see that, across all structures, \\texttt{HPF 800} is slightly better than \\texttt{200}, \n  and \\texttt{LPF 200} is slightly better than \\texttt{800}.\nTo leverage this new knowledge, we can accordingly adjust our initial values for the starting point of our gradient descent.\n\\end{exmp}\n", "meta": {"hexsha": "d9065a1bd7235bd7bb5b57f71119952e166560b3", "size": 2139, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/formal/secs/feedback.tex", "max_stars_repo_name": "Yale-OMI/DSP-PBE", "max_stars_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-03T02:36:39.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-03T02:36:39.000Z", "max_issues_repo_path": "papers/formal/secs/feedback.tex", "max_issues_repo_name": "Yale-OMI/DSP-PBE", "max_issues_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2018-11-16T21:50:44.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T18:57:19.000Z", "max_forks_repo_path": "papers/formal/secs/feedback.tex", "max_forks_repo_name": "Yale-OMI/DSP-PBE", "max_forks_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8461538462, "max_line_length": 276, "alphanum_fraction": 0.7840112202, "num_tokens": 
513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5883885091422895}}
{"text": "   \section{Optimizers} \label{sec:Optimizers} The optimizer is another important entity in the\n  RAVEN framework. It drives a specified ``goal function'' or ``objective function''\n  over the model in search of an optimal value. The Optimizer can be used almost anywhere a Sampler can be\n  used, and is only distinguished from other AdaptiveSampler strategies for clarity.\n\n\subsection{GradientDescent}\n  The \xmlNode{GradientDescent} optimizer represents an a la carte option\n  for performing gradient-based optimization with a variety of gradient\n  estimation techniques, stepping strategies, and acceptance criteria. \hspace{12pt}\n  Gradient descent optimization generally behaves like a ball rolling down a hill:\n  the algorithm estimates the local gradient at a point, and attempts to move\n  ``downhill'' in the opposite direction of the gradient (if minimizing; the\n  opposite if maximizing). Once the lowest point along the iterative gradient search\n  is discovered, the algorithm is considered converged. \hspace{12pt}\n  Note that gradient descent algorithms are particularly prone to being trapped\n  in local minima; for this reason, depending on the model, multiple trajectories\n  may be needed to obtain the global solution.\n
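As an orientation for the behavior just described, the following minimal Python sketch (purely illustrative; this is not RAVEN source code, and the function names are invented for this example) performs gradient descent with finite-difference gradient estimates:\n\begin{lstlisting}[language=Python]\n# Illustrative only: generic finite-difference gradient descent.\ndef fd_gradient(f, x, h=1e-6):\n    base = f(x)\n    # perturb each dimension orthogonally to estimate the local gradient\n    return [(f(x[:i] + [x[i] + h] + x[i+1:]) - base) / h\n            for i in range(len(x))]\n\ndef descend(f, x, step=0.1, iters=200):\n    for _ in range(iters):\n        g = fd_gradient(f, x)\n        # move ``downhill'', opposite the gradient direction\n        x = [xi - step * gi for xi, gi in zip(x, g)]\n    return x\n\n# Example: minimize f(x, y) = x^2 + y^2 starting from (-2, 2).\nprint(descend(lambda v: v[0]**2 + v[1]**2, [-2.0, 2.0]))\n\end{lstlisting}\n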
\vspace{7pt} \\When used as part of a \xmlNode{MultiRun} step, this entity provides\n        additional information through the \xmlNode{SolutionExport} DataObject. The\n        following variables can be requested within the \xmlNode{SolutionExport}:\n        \begin{itemize}\n          \item \texttt{trajID}: integer identifier for different optimization starting locations and paths\n          \item \texttt{iteration}: integer identifying which iteration (or step, or generation) a trajectory is on\n          \item \texttt{accepted}: string acceptance status of the potential optimal point (algorithm dependent)\n          \item \texttt{rejectReason}: description of the reject reason: 'noImprovement' means the new optimization point was rejected because it showed no improvement over the last point, 'implicitConstraintsViolation' means it was rejected for violating implicit constraints; None is returned if the point is accepted\n          \item \texttt{\{VAR\}}: any variable from the \xmlNode{TargetEvaluation} input or output; gives the value of that variable at the optimal candidate for this iteration.\n          \item \texttt{stepSize}: the size of step taken in the normalized input space to arrive at each optimal point\n          \item \texttt{conv\_\{CONV\}}: status of each given convergence criterion\n          \item \texttt{CG\_task}: for ConjugateGradient, the current task of the line search. FD suggests continuing the search, and CONV indicates the line search converged and will pivot.\n         \end{itemize}\n\n  The \xmlNode{GradientDescent} node recognizes the following parameters:\n    \begin{itemize}\n      \item \xmlAttr{verbosity}: \xmlDesc{[silent, quiet, all, debug], optional}, \n        Desired verbosity of messages coming from this entity\n      \item \xmlAttr{name}: \xmlDesc{string, required}, \n        User-defined name to designate this entity in the RAVEN input file.\n  \end{itemize}\n\n  The \xmlNode{GradientDescent} node recognizes the following subnodes:\n  \begin{itemize}\n    \item \xmlNode{objective}: \xmlDesc{string}, \n      Name of the response variable (or ``objective function'') that should be optimized\n      (minimized or maximized).\n\n    \item \xmlNode{variable}:\n      defines the input space variables to be sampled through various means.\n      The \xmlNode{variable} node recognizes the following parameters:\n        \begin{itemize}\n          \item \xmlAttr{name}: \xmlDesc{string, optional}, \n            user-defined name of this variable. \nb As for the other objects, this is\n            the name that can be used to refer to this specific entity from other input blocks\n          \item \xmlAttr{shape}: \xmlDesc{comma-separated integers, optional}, \n            determines the number of samples and shape of samples to be taken. For\n            example, \xmlAttr{shape}=``2,3'' will provide a 2 by 3 matrix of values,\n            while \xmlAttr{shape}=``10'' will produce a vector of 10 values. Omitting\n            this optional attribute will result in a single scalar value instead. Each\n            of the values in the matrix or vector will be the same as the single sampled value.\n            \nb A model interface must be prepared to handle non-scalar inputs to use this option.\n      \end{itemize}\n\n      The \xmlNode{variable} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{distribution}: \xmlDesc{string}, \n          name of the distribution that is associated to this variable. Its name needs\n          to be contained in the \xmlNode{Distributions} block explained in Section\n          \ref{sec:distributions}. In addition, if NDDistribution is used, the\n          attribute \xmlAttr{dim} is required. \nb{Alternatively, this node must be omitted\n          if the \xmlNode{function} node is supplied.}\n          The \xmlNode{distribution} node recognizes the following parameters:\n            \begin{itemize}\n              \item \xmlAttr{dim}: \xmlDesc{integer, optional}, \n                for an NDDistribution, indicates the dimension within the NDDistribution that\n                corresponds to this variable.\n          \end{itemize}\n\n        \item \xmlNode{function}: \xmlDesc{string}, \n          name of the function that defines the calculation of this variable from\n          other distributed variables. Its name needs to be contained in the\n          \xmlNode{Functions} block explained in Section \ref{sec:functions}. This\n          function must implement a method named ``evaluate''. 
\\nb{Each\n          \\xmlNode{variable} must contain only one \\xmlNode{Function} or\n          \\xmlNode{Distribution}, but not both.}\n\n        \\item \\xmlNode{initial}: \\xmlDesc{comma-separated floats}, \n          indicates the initial values where independent trajectories for this optimization\n          effort should begin. The number of entries should be the same for all variables, unless\n          a variable is initialized with a sampler (see \\xmlNode{samplerInit} below). Note these\n          entries are ordered; that is, if the optimization variables are $x$ and $y$, and the\n          initial               values for $x$ are \\xmlString{1, 2, 3, 4} and initial values for $y$\n          are \\xmlString{5, 6, 7, 8},               then there will be four starting trajectories\n          beginning at the locations (1, 5), (2, 6),               (3, 7), and (4, 8).\n      \\end{itemize}\n\n    \\item \\xmlNode{TargetEvaluation}: \\xmlDesc{string}, \n      name of the DataObject where the sampled outputs of the Model will be collected.\n      This DataObject is the means by which the sampling entity obtains the results of requested\n      samples, and so should require all the input and output variables needed for adaptive\n      sampling.\n      The \\xmlNode{TargetEvaluation} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this entity (e.g. Samplers, Models, DataObjects)\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this entity; a subtype of the class (e.g. MonteCarlo, Code, PointSet)\n      \\end{itemize}\n\n    \\item \\xmlNode{samplerInit}:\n      collection of nodes that describe the initialization of the optimization algorithm.\n\n      The \\xmlNode{samplerInit} node recognizes the following subnodes:\n      \\begin{itemize}\n        \\item \\xmlNode{limit}: \\xmlDesc{integer}, \n          limits the number of Model evaluations that may be performed as part of this optimization.\n          For example, a limit of 100 means at most 100 total Model evaluations may be performed.\n\n        \\item \\xmlNode{writeSteps}: \\xmlDesc{[final, every]}, \n          delineates when the \\xmlNode{SolutionExport} DataObject should be written to. In case\n          of \\xmlString{final}, only the final optimal solution for each trajectory will be written.\n          In case of \\xmlString{every}, the \\xmlNode{SolutionExport} will be updated with each\n          iteration               of the Optimizer.\n\n        \\item \\xmlNode{initialSeed}: \\xmlDesc{integer}, \n          seed for random number generation. Note that by default RAVEN uses an internal seed,\n          so this seed must be changed to observe changed behavior. \\default{RAVEN-determined}\n\n        \\item \\xmlNode{type}: \\xmlDesc{[min, max]}, \n          the type of optimization to perform. \\xmlString{min} will search for the lowest\n          \\xmlNode{objective} value, while \\xmlString{max} will search for the highest value.\n      \\end{itemize}\n\n    \\item \\xmlNode{gradient}:\n      a required node containing the information about which gradient approximation algorithm to\n      use, and its settings if applicable. 
Exactly one of the gradient approximation algorithms\n      below may be selected for this Optimizer.\n\n      The \\xmlNode{gradient} node recognizes the following subnodes:\n      \\begin{itemize}\n        \\item \\xmlNode{FiniteDifference}:\n          if node is present, indicates that gradient approximation should be performed\n          using Finite Difference approximation. Finite difference makes use of orthogonal\n          perturbations         in each dimension of the input space to estimate the local gradient,\n          requiring a total of $N$         perturbations, where $N$ is dimensionality of the input\n          space. For example, if the input space         $\\mathbf{i} = (x, y, z)$ for objective\n          function $f(\\mathbf{i})$, then FiniteDifference chooses         three perturbations\n          $(\\alpha, \\beta, \\gamma)$ and evaluates the following perturbation points:\n          \\begin{itemize}           \\item $f(x+\\alpha, y, z)$,           \\item $f(x, y+\\beta, z)$,\n          \\item $f(x, y, z+\\gamma)$         \\end{itemize}         and evaluates the gradient $\\nabla\n          f = (\\nabla^{(x)} f, \\nabla^{(y)} f, \\nabla^{(z)} f)$ as         \\begin{equation*}\n          \\nabla^{(x)}f \\approx \\frac{f(x+\\alpha, y, z) - f(x, y, z)}{\\alpha},\n          \\end{equation*}         and so on for $ \\nabla^{(y)}f$ and $\\nabla^{(z)}f$.\n\n        \\item \\xmlNode{CentralDifference}:\n          if node is present, indicates that gradient approximation should be performed\n          using Central Difference approximation. Central difference makes use of pairs of\n          orthogonal perturbations         in each dimension of the input space to estimate the\n          local gradient, requiring a total of $2N$         perturbations, where $N$ is\n          dimensionality of the input space. For example, if the input space         $\\mathbf{i} =\n          (x, y, z)$ for objective function $f(\\mathbf{i})$, then CentralDifference chooses\n          three perturbations $(\\alpha, \\beta, \\gamma)$ and evaluates the following perturbation\n          points:         \\begin{itemize}           \\item $f(x\\pm\\alpha, y, z)$,           \\item\n          $f(x, y\\pm\\beta, z)$,           \\item $f(x, y, z\\pm\\gamma)$         \\end{itemize}\n          and evaluates the gradient $\\nabla f = (\\nabla^{(x)} f, \\nabla^{(y)} f, \\nabla^{(z)} f)$\n          as         \\begin{equation*}           \\nabla^{(x)}f \\approx \\frac{f(x+\\alpha, y, z) -\n          f(x-\\alpha, y, z)}{2\\alpha},         \\end{equation*}         and so on for $\n          \\nabla^{(y)}f$ and $\\nabla^{(z)}f$.\n\n        \\item \\xmlNode{SPSA}:\n          if node is present, indicates that gradient approximation should be performed\n          using the Simultaneous Perturbation Stochastic Approximation (SPSA).         SPSA makes\n          use of a single perturbation as a zeroth-order gradient approximation,         requiring\n          exactly $1$         perturbation regardless of the dimensionality of the input space. 
For example, if the input space $\mathbf{i} = (x, y, z)$ for objective function\n          $f(\mathbf{i})$, then SPSA chooses a single perturbation point $(\epsilon^{(x)},\n          \epsilon^{(y)}, \epsilon^{(z)})$ and evaluates the following perturbation point:\n          \begin{itemize}\n            \item $f(x+\epsilon^{(x)}, y+\epsilon^{(y)}, z+\epsilon^{(z)})$\n          \end{itemize}\n          and evaluates the gradient $\nabla f = (\nabla^{(x)} f, \nabla^{(y)} f, \nabla^{(z)} f)$ as\n          \begin{equation*}\n            \nabla^{(x)}f \approx \frac{f(x+\epsilon^{(x)}, y+\epsilon^{(y)}, z+\epsilon^{(z)}) - f(x, y, z)}{\epsilon^{(x)}},\n          \end{equation*}\n          and so on for $\nabla^{(y)}f$ and $\nabla^{(z)}f$. This approximation is much less robust than FiniteDifference or\n          CentralDifference, but has the benefit of being dimension agnostic.\n      \end{itemize}\n\n    \item \xmlNode{stepSize}:\n      a required node containing the information about which iterative stepping algorithm to\n      use, and its settings if applicable. Exactly one of the stepping algorithms\n      below may be selected for this Optimizer.\n\n      The \xmlNode{stepSize} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{GradientHistory}:\n          if this node is present, indicates that the iterative steps in the gradient\n          descent algorithm should be determined by the sequential change in gradient. In\n          particular, rather than using the magnitude of the gradient to determine step\n          size, the directional change of the gradient versor determines whether to take\n          larger or smaller steps. If the gradient in two successive steps changes\n          direction, the step size shrinks. If the gradient instead continues in the same\n          direction, the step size grows. The rates of shrinking and growth are controlled by the\n          \xmlNode{shrinkFactor} and \xmlNode{growthFactor}. Note these values have a large\n          impact on the optimization path taken. Large growth factors converge slowly but\n          explore more of the input space; large shrink factors converge quickly but might\n          converge before arriving at a local minimum.\n\n          The \xmlNode{GradientHistory} node recognizes the following subnodes:\n          \begin{itemize}\n            \item \xmlNode{growthFactor}: \xmlDesc{float}, \n              specifies the rate at which the step size should grow if the gradient continues in\n              the same direction through multiple iterative steps. For example, a growth factor of 2\n              means that if the gradient is identical twice, the step size is doubled.\n              \default{1.25}\n\n            \item \xmlNode{shrinkFactor}: \xmlDesc{float}, \n              specifies the rate at which the step size should shrink if the gradient changes\n              direction through multiple iterative steps. For example, a shrink factor of 2 means\n              that if the gradient completely flips direction, the step size is halved. Note that\n              for stochastic surfaces or low-order gradient approximations such as\n              SPSA, a small value for the shrink factor is recommended. 
If an optimization path appears to be converging early, increasing the shrink\n              factor might improve the search. \default{1.15}\n          \end{itemize}\n\n        \item \xmlNode{ConjugateGradient}:\n          if this node is present, indicates that the iterative steps should be determined by the\n          ConjugateGradient step manipulation algorithm, which performs a line search along each\n          search direction (see \texttt{CG\_task} above).\n      \end{itemize}\n\n    \item \xmlNode{acceptance}:\n      a required node containing the information about the acceptability criterion for iterative\n      optimization steps, i.e. when a potential new optimal point should be rejected and when\n      it can be accepted. Exactly one of the acceptance criteria below may be selected\n      for this Optimizer.\n\n      The \xmlNode{acceptance} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{Strict}:\n          if this node is present, indicates that a Strict acceptance policy for potential\n          new optimal points should be enforced; that is, for a potential optimal point to\n          become the new point from which to take another iterative optimizer step, the new response\n          value must improve on the old response value. Otherwise, the potential optimal\n          point is rejected and the search continues with the previously-discovered optimal\n          point.\n      \end{itemize}\n\n    \item \xmlNode{convergence}:\n      a node containing the desired convergence criteria for the optimization algorithm.\n      Note that convergence is met when any one of the convergence criteria is met. If no\n      convergence criteria are given, then nominal convergence on the gradient value is\n      used.\n\n      The \xmlNode{convergence} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{gradient}: \xmlDesc{float}, \n          provides the desired value for the local estimate of the gradient\n          for convergence. \default{1e-6, if no criteria specified}\n\n        \item \xmlNode{objective}: \xmlDesc{float}, \n          provides the maximum relative change in the objective function for convergence.\n\n        \item \xmlNode{stepSize}: \xmlDesc{float}, \n          provides the maximum size in relative step size for convergence.\n\n        \item \xmlNode{terminateFollowers}: \xmlDesc{[True, Yes, 1, False, No, 0, t, y, 1, f, n, 0]}, \n          indicates whether a trajectory should be terminated when it begins following the path\n          of another trajectory.\n          The \xmlNode{terminateFollowers} node recognizes the following parameters:\n            \begin{itemize}\n              \item \xmlAttr{proximity}: \xmlDesc{float, optional}, \n                provides the normalized distance at which a trajectory's head should be proximal to\n                another trajectory's path before terminating the following trajectory.\n          \end{itemize}\n\n        \item \xmlNode{persistence}: \xmlDesc{integer}, \n          provides the number of consecutive times convergence should be reached before a trajectory\n          is considered fully converged. This helps in preventing early false convergence.\n\n        \item \xmlNode{constraintExplorationLimit}: \xmlDesc{integer}, \n          provides the number of consecutive times a functional constraint boundary can be explored\n          for an acceptable sampling point before aborting the search. Only applies if using a\n          \xmlNode{Constraint}. 
\\default{500}\n      \\end{itemize}\n\n    \\item \\xmlNode{constant}: \\xmlDesc{comma-separated strings, integers, and floats}, \n      allows variables that do not change value to be part of the input space.\n      The \\xmlNode{constant} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{name}: \\xmlDesc{string, required}, \n            variable name for this constant, which will be provided to the Model.\n          \\item \\xmlAttr{shape}: \\xmlDesc{comma-separated integers, optional}, \n            determines the shape of samples of the constant value.               For example,\n            \\xmlAttr{shape}=``2,3'' will shape the values into a 2 by 3               matrix, while\n            \\xmlAttr{shape}=``10'' will shape into a vector of 10 values.               Unlike the\n            \\xmlNode{variable}, the constant requires each value be entered; the number\n            of required values is equal to the product of the \\xmlAttr{shape} values, e.g. 6 entries\n            for shape ``2,3'').               \\nb A model interface must be prepared to handle non-\n            scalar inputs to use this option.\n          \\item \\xmlAttr{source}: \\xmlDesc{string, optional}, \n            the name of the DataObject containing the value to be used for this constant.\n            Requires \\xmlNode{ConstantSource} node with a \\xmlNode{DataObject} identified for this\n            Sampler/Optimizer.\n          \\item \\xmlAttr{index}: \\xmlDesc{integer, optional}, \n            the index of the realization in the \\xmlNode{ConstantSource} \\xmlNode{DataObject}\n            containing the value for this constant. Requires \\xmlNode{ConstantSource} node with\n            a \\xmlNode{DataObject} identified for this Sampler/Optimizer.\n      \\end{itemize}\n\n    \\item \\xmlNode{ConstantSource}: \\xmlDesc{string}, \n      identifies a \\xmlNode{DataObject} to provide \\xmlNode{constant} values to the input\n      space of this entity while sampling. As an alternative to providing predefined values\n      for constants, the \\xmlNode{ConstantSource} provides a dynamic means of always providing\n      the same value for a constant. This is often used as part of a larger multi-workflow\n      calculation.\n      The \\xmlNode{ConstantSource} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, optional}, \n            The RAVEN class for this source. Options include \\xmlString{DataObject}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, optional}, \n            The RAVEN type for this source. Options include any valid \\xmlNode{DataObject} type,\n            such as HistorySet or PointSet.\n      \\end{itemize}\n\n    \\item \\xmlNode{Constraint}: \\xmlDesc{string}, \n      name of \\xmlNode{Function} which contains explicit constraints for the sampling of\n      the input space of the Model. From a practical point of view, this XML node must contain\n      the name of a function defined in the \\xmlNode{Functions} block (see\n      Section~\\ref{sec:functions}).               This external function must contain a method\n      called ``constrain'', which returns True for               inputs satisfying the explicit\n      constraints and False otherwise.\n      The \\xmlNode{Constraint} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this source. 
Options include \\xmlString{Functions}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this source. Options include \\xmlNode{External}.\n      \\end{itemize}\n\n    \\item \\xmlNode{ImplicitConstraint}: \\xmlDesc{string}, \n      name of \\xmlNode{Function} which contains implicit constraints of the Model. From a practical\n      point of view, this XML node must contain the name of a function defined in the\n      \\xmlNode{Functions}               block (see Section~\\ref{sec:functions}). This external\n      function must contain a method called               ``implicitConstrain'', which returns True\n      for outputs satisfying the implicit constraints and False otherwise.\n      The \\xmlNode{ImplicitConstraint} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this source. Options include \\xmlString{Functions}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this source. Options include \\xmlNode{External}.\n      \\end{itemize}\n\n    \\item \\xmlNode{Sampler}: \\xmlDesc{string}, \n      name of a Sampler that can be used to initialize the starting points for the trajectories\n      of some of the variables. From a practical point of view, this XML node must contain the\n      name of a Sampler defined in the \\xmlNode{Samplers} block (see\n      Section~\\ref{subsec:onceThroughSamplers}).               The Sampler will be used to\n      initialize the trajectories' initial points for some or all               of the variables.\n      For example, if the Sampler selected samples only 2 of the 5 optimization\n      variables, the \\xmlNode{initial} XML node is required only for the remaining 3 variables.\n      The \\xmlNode{Sampler} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this entity (e.g. Samplers, Models, DataObjects)\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this entity; a subtype of the class (e.g. MonteCarlo, Code, PointSet)\n      \\end{itemize}\n\n    \\item \\xmlNode{Restart}: \\xmlDesc{string}, \n      name of a DataObject. Used to leverage existing data when sampling a model. For\n      example, if a Model has               already been sampled, but some samples were not\n      collected, the successful samples can               be stored and used instead of rerunning\n      the model for those specific samples. This RAVEN               entity definition must be a\n      DataObject with contents including the input and output spaces               of the Model\n      being sampled.\n      The \\xmlNode{Restart} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, optional}, \n            The RAVEN class for this source. Options include \\xmlString{DataObject}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, optional}, \n            The RAVEN type for this source. Options include any valid \\xmlNode{DataObject} type,\n            such as HistorySet or PointSet.\n      \\end{itemize}\n\n    \\item \\xmlNode{restartTolerance}: \\xmlDesc{float}, \n      specifies how strictly a matching point from a \\xmlNode{Restart} DataObject must match\n      the desired sample point in order to be used. 
If a potential restart point is within a\n      relative Euclidean distance (as specified by the value in this node) of a desired sample\n      point, the restart point will be used instead of sampling the Model.\n      \default{1e-15}\n\n    \item \xmlNode{variablesTransformation}:\n      Allows transformation of variables via translation matrices. This defines two spaces,\n      a ``latent'' transformed space sampled by RAVEN and a ``manifest'' original space understood\n      by the Model.\n      The \xmlNode{variablesTransformation} node recognizes the following parameters:\n        \begin{itemize}\n          \item \xmlAttr{distribution}: \xmlDesc{string, optional}, \n            the name for the distribution defined in the XML node \xmlNode{Distributions}.\n            This attribute indicates the values of \xmlNode{manifestVariables} are drawn from\n            \xmlAttr{distribution}.\n      \end{itemize}\n\n      The \xmlNode{variablesTransformation} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{latentVariables}: \xmlDesc{comma-separated strings}, \n          user-defined latent variables that are used for the variables transformation.\n          All the variables listed under this node should also be mentioned in \xmlNode{variable}.\n\n        \item \xmlNode{manifestVariables}: \xmlDesc{comma-separated strings}, \n          user-defined manifest variables that can be used by the \xmlNode{Model}.\n\n        \item \xmlNode{manifestVariablesIndex}: \xmlDesc{comma-separated strings}, \n          user-defined manifest variable indices paired with \xmlNode{manifestVariables}.\n          These indices indicate the position of the manifest variables associated with the multivariate\n          normal distribution defined in the XML node \xmlNode{Distributions}.\n          The indices should be positive integers. If not provided, the code will use the positions\n          of the manifest variables listed in \xmlNode{manifestVariables} as the indices.\n\n        \item \xmlNode{method}: \xmlDesc{string}, \n          the method that is used for the variables transformation. The currently available method\n          is \xmlString{pca}.\n      \end{itemize}\n  \end{itemize}\n\n\hspace{24pt}\nGradient Descent Example:\n\begin{lstlisting}[style=XML]\n<Optimizers>\n  ...\n  <GradientDescent name=\"opter\">\n    <objective>ans</objective>\n    <variable name=\"x\">\n      <distribution>x_dist</distribution>\n      <initial>-2</initial>\n    </variable>\n    <variable name=\"y\">\n      <distribution>y_dist</distribution>\n      <initial>2</initial>\n    </variable>\n    <samplerInit>\n      <limit>100</limit>\n    </samplerInit>\n    <gradient>\n      <FiniteDifference/>\n    </gradient>\n    <stepSize>\n      <GradientHistory/>\n    </stepSize>\n    <acceptance>\n      <Strict/>\n    </acceptance>\n    <TargetEvaluation class=\"DataObjects\" type=\"PointSet\">optOut</TargetEvaluation>\n  </GradientDescent>\n  ...\n</Optimizers>\n\end{lstlisting}\n\n\n\n
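To complement the input example above, the following short Python sketch (illustrative only; it is not RAVEN source code) implements the \xmlNode{GradientHistory} stepping rule described earlier, using the default growth and shrink factors:\n\begin{lstlisting}[language=Python]\n# Illustrative sketch of the GradientHistory rule: grow the step when\n# successive gradient directions agree, shrink it when they oppose.\ndef update_step(step, prev_grad, grad, growth=1.25, shrink=1.15):\n    dot = sum(p * g for p, g in zip(prev_grad, grad))\n    if dot > 0:      # same general direction -> take a larger step\n        return step * growth\n    if dot < 0:      # direction flipped -> take a smaller step\n        return step / shrink\n    return step      # orthogonal -> leave the step unchanged\n\nprint(update_step(0.1, [1.0, 0.0], [0.9, 0.1]))  # grows to 0.125\n\end{lstlisting}\n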
\subsection{SimulatedAnnealing}\n  The \xmlNode{SimulatedAnnealing} optimizer is a metaheuristic approach\n  to perform a global search in large design spaces. The methodology arose\n  from statistical physics and was inspired by metallurgy, where it was\n  found that fast cooling may lead to small, defective crystals,\n  whereas reheating and slow, controlled cooling lead to better states.\n  This allows the search to climb out of local minima and hence facilitates\n  finding the global minimum for non-convex problems. More information\n  can be found in: Kirkpatrick, S.; Gelatt Jr, C. D.; Vecchi, M. P. (1983).\n  ``Optimization by Simulated Annealing''. Science. 220 (4598): 671\u2013680.\n\vspace{7pt} \\When used as part of a \xmlNode{MultiRun} step, this entity provides\n        additional information through the \xmlNode{SolutionExport} DataObject. The\n        following variables can be requested within the \xmlNode{SolutionExport}:\n        \begin{itemize}\n          \item \texttt{trajID}: integer identifier for different optimization starting locations and paths\n          \item \texttt{iteration}: integer identifying which iteration (or step, or generation) a trajectory is on\n          \item \texttt{accepted}: string acceptance status of the potential optimal point (algorithm dependent)\n          \item \texttt{rejectReason}: description of the reject reason: 'noImprovement' means the new optimization point was rejected because it showed no improvement over the last point, 'implicitConstraintsViolation' means it was rejected for violating implicit constraints; None is returned if the point is accepted\n          \item \texttt{\{VAR\}}: any variable from the \xmlNode{TargetEvaluation} input or output; gives the value of that variable at the optimal candidate for this iteration.\n          \item \texttt{conv\_\{CONV\}}: status of each given convergence criterion\n          \item \texttt{amp\_\{VAR\}}: amplitude associated with each variable, used to compute the step size based on the cooling method and the corresponding next neighbor\n          \item \texttt{delta\_\{VAR\}}: step size associated with each variable\n          \item \texttt{Temp}: temperature at the current state\n          \item \texttt{fraction}: current fraction of the max iteration limit\n         \end{itemize}\n\n  The \xmlNode{SimulatedAnnealing} node recognizes the following parameters:\n    \begin{itemize}\n      \item \xmlAttr{verbosity}: \xmlDesc{[silent, quiet, all, debug], optional}, \n        Desired verbosity of messages coming from this entity\n      \item \xmlAttr{name}: \xmlDesc{string, required}, \n        User-defined name to designate this entity in the RAVEN input file.\n  \end{itemize}\n\n  The \xmlNode{SimulatedAnnealing} node recognizes the following subnodes:\n  \begin{itemize}\n    \item \xmlNode{objective}: \xmlDesc{string}, \n      Name of the response variable (or ``objective function'') that should be optimized\n      (minimized or maximized).\n\n    \item \xmlNode{variable}:\n      defines the input space variables to be sampled through various means.\n      The \xmlNode{variable} node recognizes the following parameters:\n        \begin{itemize}\n          \item \xmlAttr{name}: \xmlDesc{string, optional}, \n            user-defined name of this variable. 
\\nb As for the other objects,               this is\n            the name that can be used to refer to this specific entity from other input blocks\n          \\item \\xmlAttr{shape}: \\xmlDesc{comma-separated integers, optional}, \n            determines the number of samples and shape of samples               to be taken.  For\n            example, \\xmlAttr{shape}=``2,3'' will provide a 2 by 3               matrix of values,\n            while \\xmlAttr{shape}=``10'' will produce a vector of 10 values.               Omitting\n            this optional attribute will result in a single scalar value instead.               Each\n            of the values in the matrix or vector will be the same as the single sampled value.\n            \\nb A model interface must be prepared to handle non-scalar inputs to use this option.\n      \\end{itemize}\n\n      The \\xmlNode{variable} node recognizes the following subnodes:\n      \\begin{itemize}\n        \\item \\xmlNode{distribution}: \\xmlDesc{string}, \n          name of the distribution that is associated to this variable.               Its name needs\n          to be contained in the \\xmlNode{Distributions} block explained               in Section\n          \\ref{sec:distributions}. In addition, if NDDistribution is used,               the\n          attribute \\xmlAttr{dim} is required. \\nb{Alternatively, this node must be omitted\n          if the \\xmlNode{function} node is supplied.}\n          The \\xmlNode{distribution} node recognizes the following parameters:\n            \\begin{itemize}\n              \\item \\xmlAttr{dim}: \\xmlDesc{integer, optional}, \n                for an NDDistribution, indicates the dimension within the NDDistribution that\n                corresponds               to this variable.\n          \\end{itemize}\n\n        \\item \\xmlNode{function}: \\xmlDesc{string}, \n          name of the function that               defines the calculation of this variable from\n          other distributed variables.  Its name               needs to be contained in the\n          \\xmlNode{Functions} block explained in Section               \\ref{sec:functions}. This\n          function must implement a method named ``evaluate''.               \\nb{Each\n          \\xmlNode{variable} must contain only one \\xmlNode{Function} or\n          \\xmlNode{Distribution}, but not both.}\n\n        \\item \\xmlNode{initial}: \\xmlDesc{comma-separated floats}, \n          indicates the initial values where independent trajectories for this optimization\n          effort should begin. The number of entries should be the same for all variables, unless\n          a variable is initialized with a sampler (see \\xmlNode{samplerInit} below). 
Note these entries are ordered; that is, if the optimization variables are $x$ and $y$, and the\n          initial values for $x$ are \xmlString{1, 2, 3, 4} and the initial values for $y$\n          are \xmlString{5, 6, 7, 8}, then there will be four starting trajectories\n          beginning at the locations (1, 5), (2, 6), (3, 7), and (4, 8).\n      \end{itemize}\n\n    \item \xmlNode{TargetEvaluation}: \xmlDesc{string}, \n      name of the DataObject where the sampled outputs of the Model will be collected.\n      This DataObject is the means by which the sampling entity obtains the results of requested\n      samples, and so should require all the input and output variables needed for adaptive\n      sampling.\n      The \xmlNode{TargetEvaluation} node recognizes the following parameters:\n        \begin{itemize}\n          \item \xmlAttr{class}: \xmlDesc{string, required}, \n            RAVEN class for this entity (e.g. Samplers, Models, DataObjects)\n          \item \xmlAttr{type}: \xmlDesc{string, required}, \n            RAVEN type for this entity; a subtype of the class (e.g. MonteCarlo, Code, PointSet)\n      \end{itemize}\n\n    \item \xmlNode{samplerInit}:\n      collection of nodes that describe the initialization of the optimization algorithm.\n\n      The \xmlNode{samplerInit} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{limit}: \xmlDesc{integer}, \n          limits the number of Model evaluations that may be performed as part of this optimization.\n          For example, a limit of 100 means at most 100 total Model evaluations may be performed.\n\n        \item \xmlNode{writeSteps}: \xmlDesc{[final, every]}, \n          delineates when the \xmlNode{SolutionExport} DataObject should be written to. In case\n          of \xmlString{final}, only the final optimal solution for each trajectory will be written.\n          In case of \xmlString{every}, the \xmlNode{SolutionExport} will be updated with each\n          iteration of the Optimizer.\n\n        \item \xmlNode{initialSeed}: \xmlDesc{integer}, \n          seed for random number generation. Note that by default RAVEN uses an internal seed,\n          so this seed must be changed to observe changed behavior. \default{RAVEN-determined}\n\n        \item \xmlNode{type}: \xmlDesc{[min, max]}, \n          the type of optimization to perform. \xmlString{min} will search for the lowest\n          \xmlNode{objective} value, while \xmlString{max} will search for the highest value.\n      \end{itemize}\n\n    \item \xmlNode{convergence}:\n      a node containing the desired convergence criteria for the optimization algorithm.\n      Note that convergence is met when any one of the convergence criteria is met. If no\n      convergence criteria are given, then the defaults are used.\n\n      The \xmlNode{convergence} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{objective}: \xmlDesc{float}, \n          provides the desired value for the convergence criterion of the objective function\n          ($\epsilon^{obj}$), i.e., convergence is reached when: $$ |newObjective - oldObjective|\n          \le \epsilon^{obj}$$. 
\default{1e-6}, if no criteria specified\n\n        \item \xmlNode{temperature}: \xmlDesc{float}, \n          provides the desired value for the convergence criterion of the system temperature\n          ($\epsilon^{temp}$), i.e., convergence is reached when: $$T \le \epsilon^{temp}$$.\n          \default{1e-10}, if no criteria specified\n\n        \item \xmlNode{persistence}: \xmlDesc{integer}, \n          provides the number of consecutive times convergence should be reached before a trajectory\n          is considered fully converged. This helps in preventing early false convergence.\n      \end{itemize}\n\n    \item \xmlNode{coolingSchedule}:\n      the function governing the cooling process. Currently, the user can select among\n      \xmlString{exponential}, \xmlString{cauchy}, \xmlString{boltzmann}, or \xmlString{veryfast}.\\ \\If \xmlString{exponential} is provided, the cooling process is governed by: $$ T^{k} = T^0 \cdot \alpha^k$$\n      If \xmlString{boltzmann} is provided, the cooling process is governed by: $$ T^{k} = \frac{T^0}{\log(k + d)}$$\n      If \xmlString{cauchy} is provided, the cooling process is governed by: $$ T^{k} = \frac{T^0}{k + d}$$\n      If \xmlString{veryfast} is provided, the cooling process is governed by: $$ T^{k} = T^0 \cdot \exp(-ck^{1/D}),$$\n      where $D$ is the dimensionality of the problem (i.e., the number of optimized variables), $k$ is the index of the current iteration,\n      $T^{0} = \max{(0.01,1-\frac{k}{\xmlNode{limit}})}$ is the initial temperature, and $T^{k}$ is\n      the current temperature according to the specified cooling schedule.\n      \default{exponential}.\n\n      The \xmlNode{coolingSchedule} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{exponential}: \xmlDesc{string}, \n          exponential cooling schedule\n\n          The \xmlNode{exponential} node recognizes the following subnodes:\n          \begin{itemize}\n            \item \xmlNode{alpha}: \xmlDesc{float}, \n              slowing-down constant; it should be between 0 and 1, preferably very close to 1.\n              \default{0.94}\n          \end{itemize}\n\n        \item \xmlNode{veryfast}: \xmlDesc{string}, \n          veryfast cooling schedule\n\n          The \xmlNode{veryfast} node recognizes the following subnodes:\n          \begin{itemize}\n            \item \xmlNode{c}: \xmlDesc{float}, \n              decay constant, \default{1.0}\n          \end{itemize}\n\n        \item \xmlNode{cauchy}: \xmlDesc{string}, \n          cauchy cooling schedule\n\n          The \xmlNode{cauchy} node recognizes the following subnodes:\n          \begin{itemize}\n            \item \xmlNode{d}: \xmlDesc{float}, \n              bias, \default{1.0}\n          \end{itemize}\n\n        \item \xmlNode{boltzmann}: \xmlDesc{string}, \n          boltzmann cooling schedule\n\n          The \xmlNode{boltzmann} node recognizes the following subnodes:\n          \begin{itemize}\n            \item \xmlNode{d}: \xmlDesc{float}, \n              bias, \default{1.0}\n          \end{itemize}\n      \end{itemize}\n\n    \item \xmlNode{constant}: \xmlDesc{comma-separated strings, integers, and floats}, \n      allows variables that do not change value to be part of the input space.\n      The \xmlNode{constant} node recognizes the following 
parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{name}: \\xmlDesc{string, required}, \n            variable name for this constant, which will be provided to the Model.\n          \\item \\xmlAttr{shape}: \\xmlDesc{comma-separated integers, optional}, \n            determines the shape of samples of the constant value.               For example,\n            \\xmlAttr{shape}=``2,3'' will shape the values into a 2 by 3               matrix, while\n            \\xmlAttr{shape}=``10'' will shape into a vector of 10 values.               Unlike the\n            \\xmlNode{variable}, the constant requires each value be entered; the number\n            of required values is equal to the product of the \\xmlAttr{shape} values, e.g. 6 entries\n            for shape ``2,3'').               \\nb A model interface must be prepared to handle non-\n            scalar inputs to use this option.\n          \\item \\xmlAttr{source}: \\xmlDesc{string, optional}, \n            the name of the DataObject containing the value to be used for this constant.\n            Requires \\xmlNode{ConstantSource} node with a \\xmlNode{DataObject} identified for this\n            Sampler/Optimizer.\n          \\item \\xmlAttr{index}: \\xmlDesc{integer, optional}, \n            the index of the realization in the \\xmlNode{ConstantSource} \\xmlNode{DataObject}\n            containing the value for this constant. Requires \\xmlNode{ConstantSource} node with\n            a \\xmlNode{DataObject} identified for this Sampler/Optimizer.\n      \\end{itemize}\n\n    \\item \\xmlNode{ConstantSource}: \\xmlDesc{string}, \n      identifies a \\xmlNode{DataObject} to provide \\xmlNode{constant} values to the input\n      space of this entity while sampling. As an alternative to providing predefined values\n      for constants, the \\xmlNode{ConstantSource} provides a dynamic means of always providing\n      the same value for a constant. This is often used as part of a larger multi-workflow\n      calculation.\n      The \\xmlNode{ConstantSource} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, optional}, \n            The RAVEN class for this source. Options include \\xmlString{DataObject}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, optional}, \n            The RAVEN type for this source. Options include any valid \\xmlNode{DataObject} type,\n            such as HistorySet or PointSet.\n      \\end{itemize}\n\n    \\item \\xmlNode{Constraint}: \\xmlDesc{string}, \n      name of \\xmlNode{Function} which contains explicit constraints for the sampling of\n      the input space of the Model. From a practical point of view, this XML node must contain\n      the name of a function defined in the \\xmlNode{Functions} block (see\n      Section~\\ref{sec:functions}).               This external function must contain a method\n      called ``constrain'', which returns True for               inputs satisfying the explicit\n      constraints and False otherwise.\n      The \\xmlNode{Constraint} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this source. Options include \\xmlString{Functions}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this source. 
Options include \\xmlNode{External}.\n      \\end{itemize}\n\n    \\item \\xmlNode{ImplicitConstraint}: \\xmlDesc{string}, \n      name of \\xmlNode{Function} which contains implicit constraints of the Model. From a practical\n      point of view, this XML node must contain the name of a function defined in the\n      \\xmlNode{Functions}               block (see Section~\\ref{sec:functions}). This external\n      function must contain a method called               ``implicitConstrain'', which returns True\n      for outputs satisfying the implicit constraints and False otherwise.\n      The \\xmlNode{ImplicitConstraint} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this source. Options include \\xmlString{Functions}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this source. Options include \\xmlNode{External}.\n      \\end{itemize}\n\n    \\item \\xmlNode{Sampler}: \\xmlDesc{string}, \n      name of a Sampler that can be used to initialize the starting points for the trajectories\n      of some of the variables. From a practical point of view, this XML node must contain the\n      name of a Sampler defined in the \\xmlNode{Samplers} block (see\n      Section~\\ref{subsec:onceThroughSamplers}).               The Sampler will be used to\n      initialize the trajectories' initial points for some or all               of the variables.\n      For example, if the Sampler selected samples only 2 of the 5 optimization\n      variables, the \\xmlNode{initial} XML node is required only for the remaining 3 variables.\n      The \\xmlNode{Sampler} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, required}, \n            RAVEN class for this entity (e.g. Samplers, Models, DataObjects)\n          \\item \\xmlAttr{type}: \\xmlDesc{string, required}, \n            RAVEN type for this entity; a subtype of the class (e.g. MonteCarlo, Code, PointSet)\n      \\end{itemize}\n\n    \\item \\xmlNode{Restart}: \\xmlDesc{string}, \n      name of a DataObject. Used to leverage existing data when sampling a model. For\n      example, if a Model has               already been sampled, but some samples were not\n      collected, the successful samples can               be stored and used instead of rerunning\n      the model for those specific samples. This RAVEN               entity definition must be a\n      DataObject with contents including the input and output spaces               of the Model\n      being sampled.\n      The \\xmlNode{Restart} node recognizes the following parameters:\n        \\begin{itemize}\n          \\item \\xmlAttr{class}: \\xmlDesc{string, optional}, \n            The RAVEN class for this source. Options include \\xmlString{DataObject}.\n          \\item \\xmlAttr{type}: \\xmlDesc{string, optional}, \n            The RAVEN type for this source. Options include any valid \\xmlNode{DataObject} type,\n            such as HistorySet or PointSet.\n      \\end{itemize}\n\n    \\item \\xmlNode{restartTolerance}: \\xmlDesc{float}, \n      specifies how strictly a matching point from a \\xmlNode{Restart} DataObject must match\n      the desired sample point in order to be used. 
If a potential restart point is within a\n      relative Euclidean distance (as specified by the value in this node) of a desired sample\n      point, the restart point will be used instead of sampling the Model.\n      \default{1e-15}\n\n    \item \xmlNode{variablesTransformation}:\n      Allows transformation of variables via translation matrices. This defines two spaces,\n      a ``latent'' transformed space sampled by RAVEN and a ``manifest'' original space understood\n      by the Model.\n      The \xmlNode{variablesTransformation} node recognizes the following parameters:\n        \begin{itemize}\n          \item \xmlAttr{distribution}: \xmlDesc{string, optional}, \n            the name for the distribution defined in the XML node \xmlNode{Distributions}.\n            This attribute indicates the values of \xmlNode{manifestVariables} are drawn from\n            \xmlAttr{distribution}.\n      \end{itemize}\n\n      The \xmlNode{variablesTransformation} node recognizes the following subnodes:\n      \begin{itemize}\n        \item \xmlNode{latentVariables}: \xmlDesc{comma-separated strings}, \n          user-defined latent variables that are used for the variables transformation.\n          All the variables listed under this node should also be mentioned in \xmlNode{variable}.\n\n        \item \xmlNode{manifestVariables}: \xmlDesc{comma-separated strings}, \n          user-defined manifest variables that can be used by the \xmlNode{Model}.\n\n        \item \xmlNode{manifestVariablesIndex}: \xmlDesc{comma-separated strings}, \n          user-defined manifest variable indices paired with \xmlNode{manifestVariables}.\n          These indices indicate the position of the manifest variables associated with the multivariate\n          normal distribution defined in the XML node \xmlNode{Distributions}.\n          The indices should be positive integers. If not provided, the code will use the positions\n          of the manifest variables listed in \xmlNode{manifestVariables} as the indices.\n\n        \item \xmlNode{method}: \xmlDesc{string}, \n          the method that is used for the variables transformation. The currently available method\n          is \xmlString{pca}.\n      \end{itemize}\n  \end{itemize}\n\n
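Before the full input example, the following short Python sketch (illustrative only; not RAVEN source code) transcribes the four cooling schedules defined above, with parameter defaults mirroring the node defaults listed there:\n\begin{lstlisting}[language=Python]\nimport math\n\n# Illustrative transcription of the cooling schedules above; T0 is the\n# initial temperature, k the current iteration, D the dimensionality.\ndef temperature(schedule, T0, k, alpha=0.94, c=1.0, d=1.0, D=1):\n    if schedule == 'exponential':\n        return T0 * alpha**k\n    if schedule == 'boltzmann':\n        return T0 / math.log(k + d)\n    if schedule == 'cauchy':\n        return T0 / (k + d)\n    if schedule == 'veryfast':\n        return T0 * math.exp(-c * k**(1.0 / D))\n    raise ValueError(schedule)\n\nprint(temperature('exponential', 1.0, 10))  # ~0.539\n\end{lstlisting}\n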
\hspace{24pt}\nSimulated Annealing Example:\n\begin{lstlisting}[style=XML]\n  <Optimizers>\n    ...\n    <SimulatedAnnealing name=\"simOpt\">\n      <samplerInit>\n        <limit>2000</limit>\n        <initialSeed>42</initialSeed>\n        <writeSteps>every</writeSteps>\n        <type>min</type>\n      </samplerInit>\n      <convergence>\n        <objective>1e-6</objective>\n        <temperature>1e-20</temperature>\n        <persistence>1</persistence>\n      </convergence>\n      <coolingSchedule>\n        <exponential>\n          <alpha>0.94</alpha>\n        </exponential>\n      </coolingSchedule>\n      <variable name=\"x\">\n        <distribution>beale_dist</distribution>\n        <initial>-2.5</initial>\n      </variable>\n      <variable name=\"y\">\n        <distribution>beale_dist</distribution>\n        <initial>3.5</initial>\n      </variable>\n      <objective>ans</objective>\n      <TargetEvaluation class=\"DataObjects\" type=\"PointSet\">optOut</TargetEvaluation>\n    </SimulatedAnnealing>\n    ...\n  </Optimizers>\n\end{lstlisting}\n\n", "meta": {"hexsha": "8f26c9665ad33b5730e6ef1a4b523f3860deb75e", "size": 51157, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/generated/optimizer.tex", "max_stars_repo_name": "JiaZhou-PU/raven", "max_stars_repo_head_hexsha": "a1fd82facb0f02f20770ea4df39d55a999c49017", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user_manual/generated/optimizer.tex", "max_issues_repo_name": "JiaZhou-PU/raven", "max_issues_repo_head_hexsha": "a1fd82facb0f02f20770ea4df39d55a999c49017", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/generated/optimizer.tex", "max_forks_repo_name": "JiaZhou-PU/raven", "max_forks_repo_head_hexsha": "a1fd82facb0f02f20770ea4df39d55a999c49017", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.8287752675, "max_line_length": 283, "alphanum_fraction": 0.6654221319, "num_tokens": 11801, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.721743200312399, "lm_q1q2_score": 0.5883885058635563}}
{"text": "\documentclass{article}\n\n\usepackage{graphicx}\n\usepackage[utf8]{inputenc}\n\usepackage{amsmath}\n\usepackage{listings}\n\n\graphicspath{{artifact/images/}}\n\numberwithin{equation}{section}\n\numberwithin{figure}{section}\n\numberwithin{table}{section}\n\newtheorem{theorem}{Theorem}[section]\n\newtheorem{proposition}{Proposition}[section]\n\newtheorem{definition}[theorem]{Definition}\n\n\begin{document}\n\n\title{Converting Graphic Relationships into Conditional Probabilities in Bayesian Network}\n\n\author{\nLoc Nguyen\\\nSunflower Soft Company, An Giang, Vietnam\\\nng\_phloc@yahoo.com\n}\n\n\maketitle\n\n\begin{abstract}\nBayesian network is a powerful mathematical tool for prediction and diagnosis applications. A large Bayesian network can be constituted of many simple networks, which in turn are constructed from simple graphs. A simple graph consists of one child node and many parent nodes. The strength of each relationship between a child node and a parent node is quantified by a weight, and all relationships share the same semantics, such as prerequisite, diagnostic, and aggregation. This research focuses on converting graphic relationships into conditional probabilities in order to construct a simple Bayesian network from a graph. The diagnostic relationship is the main research object, for which a sufficient diagnostic proposition is proposed to validate diagnostic relationships. Relationship conversion adheres to logic gates such as AND, OR, and XOR, which is an essential feature of the research.\n\n\textit{Keywords}: diagnostic relationship, Bayesian network, transformation coefficient\n\end{abstract}\n\n\section{Introduction}\nBayesian network (BN) is a directed acyclic graph (DAG) consisting of a set of nodes and a set of arcs. Each node is a random variable. Each arc represents a relationship between two nodes. The strength of a relationship in a graph can be quantified by a number called a \textit{weight}. There are some important relationships such as prerequisite, diagnostic, and aggregation. The difference between a BN and a normal graph is that the strength of every relationship in a BN is represented by a conditional probability table (CPT) whose entries are conditional probabilities of a child node given its parent nodes. There are two main approaches to constructing a BN:\n\n\begin{enumerate}\n\item  The first approach aims to learn the BN from training data via machine learning algorithms.\n\n\item  In the second approach, experts define graph patterns according to specific relationships, and the BN is then constructed based on such patterns along with the determined CPT(s).\n\end{enumerate}\n\nThis research focuses on the second approach, in which relationships are converted into CPT(s). Essentially, relationship conversion aims to determine conditional probabilities based on the weights and meanings of relationships. Different relationships require different ways of converting graphic weights into CPT(s). It is impossible to convert all relationships, but some of them, such as diagnostic, aggregation, and prerequisite, are mandatory ones that we must specify as computable CPT(s) of the BN. In particular, these relationships adhere to logic X-gates \cite{wikipedia:logic-gates} such as the AND-gate, OR-gate, and SIGMA-gate. The X-gate inference in this research is derived from, and inspired by, the noisy OR-gate described in the book ``Learning Bayesian Networks'' \cite[pp.~157-159]{neapolitan:bn}. The authors of \cite{diez:pmodels} also researched OR/MAX, AND/MIN, and noisy XOR inferences, but they focused on canonical models, deterministic models, and ICI models, whereas I focus on logic gates and graphic relationships. So their research differs from mine, but we share the same result, namely the AND-gate model. In general, my research focuses on applied probability attached to Bayesian networks, logic gates, and Bayesian user modeling \cite{millan:bayesiandiagnostic}. The scientific results are shared with the authors Eva Mill\u00e1n and Jos\u00e9 Luis P\u00e9rez-de-la-Cruz \cite{millan:bayesiandiagnostic}.\n\n
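For orientation, consider first the simplest, noise-free case (a toy illustration added here; the weighted gates studied in this research relax exactly this determinism): a child node $Y$ with binary parents $X_{1}$ and $X_{2}$ carrying AND semantics has the CPT\n\[P\left(Y=1 \mid X_{1}=x_{1}, X_{2}=x_{2}\right) = x_{1}x_{2}, \quad x_{1}, x_{2} \in \{0,1\},\]\nso that $Y$ is true only when both parents are true.\n\n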
Authors \\cite{diez:pmodels} also researched OR/MAX, AND/MIN, and noisy XOR inferences, but they focused on canonical models, deterministic models, and ICI models, whereas I focus on logic gates and graphic relationships. So their research is different from mine, but we share the same result, which is the AND-gate model. In general, my research focuses on applied probability in Bayesian networks, logic gates, and Bayesian user modeling \\cite{millan:bayesiandiagnostic}. The scientific results are shared with authors Eva Millán and José Luis Pérez-de-la-Cruz \\cite{millan:bayesiandiagnostic}.\n\nFactor graph \\cite{wikipedia:factor-graph} represents factorization of a global function into many partial functions. If the joint distribution of BN is considered as the global function and CPT (s) are considered as partial functions, the sum-product algorithm \\cite{kschischang:factor-graph} of factor graph is applied to calculating posterior probabilities of variables in BN. Pearl's propagation algorithm \\cite{pearl:propagation} is very successful in BN inference. The application of factor graph to BN is only realized if all CPT (s) of BN are already determined, whereas this research focuses on defining such CPT (s) first. I did not use factor graph for constructing BN. The concept ``X-gate inference'' only implies how to convert a simple graph into BN. However, the arrangement sum with a fixed variable mentioned in this research is the ``not-sum'' \\cite[p.~499]{kschischang:factor-graph} of factor graph. Essentially, the X-gate probability shown in formula~\\ref{formula:x-gate-prob} is the same as the $\\lambda$ message in Pearl's algorithm \\cite[p.~518]{kschischang:factor-graph} but I use the most basic way to prove the X-gate probability.\n\nBy default, the research is applied in the learning context in which BN is used to assess students' knowledge. Evidences are tests, exams, exercises, etc. and hypotheses are learning concepts, knowledge items, etc. Note that the diagnostic relationship is very important to Bayesian evaluation in the learning context because it is used to evaluate the student's mastery of concepts (knowledge items) over the entire BN. As a convention, all equations are called formulas having particular titles. Now we start relationship conversion with a research on the diagnostic relationship in the next section.\n\n\\section{Diagnostic relationship}\nIn some opinions, including mine, the diagnostic relationship should be from hypothesis to evidence. For example, disease is hypothesis and symptom is evidence. The symptom must be conditionally dependent on disease. Given a symptom, calculating the posterior probability of disease is essentially to diagnose the likelihood of such disease \\cite[p.~1666]{millan:bayesiannetwork}. Inversely, the arc from evidence to hypothesis implies prediction where evidence and hypothesis represent observation and event, respectively. Given an observation, calculating the posterior probability of the event is essentially to predict/assert such event \\cite[p.~1666]{millan:bayesiannetwork}. Figure~\\ref{figure:diagnosis-prediction} shows diagnosis and prediction.\n\n\\begin{figure}\n\\centering\n\\includegraphics{DiagnosisPrediction.png}\n\\caption{Diagnosis and prediction with hypothesis \\textit{X} and evidence \\textit{D}}\n\\label{figure:diagnosis-prediction}\n\\end{figure}\n%Figure 2.1. Diagnosis and prediction with hypothesis \\textit{X} and evidence \\textit{D}\n\nThe weight \\textit{w} of the relationship between \\textit{X} and \\textit{D} is 1. 
Figure~\\ref{figure:diagnosis-prediction} depicts the simplest graph with two random variables. We need to convert the diagnostic relationship into conditional probabilities in order to construct the simplest BN from the simplest graph. Note that the hypothesis is binary but the evidence can be numerical. In the learning context, evidence \\textit{D} can be a test, exam, exercise, etc. The conditional probability of \\textit{D} given \\textit{X} (likelihood function) is \\textit{P}(\\textit{D}{\\textbar}\\textit{X}). The posterior probability of \\textit{X} is \\textit{P}(\\textit{X}{\\textbar}\\textit{D}), which is used to evaluate the student's mastery of concept (hypothesis) \\textit{X} given evidence \\textit{D}. Formula~\\ref{formula:cpt-evidence-diagnostic} specifies the CPT of \\textit{D} when \\textit{D} is binary (0 and 1):\n\n\\begin{equation}\nP\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\left\\{ \\begin{array}{r}\nD\\ \\mathrm{if}\\ X=1 \\\\ \n1-D\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\n\\label{formula:cpt-evidence-diagnostic}\n\\end{equation}\n%Formula 2.1. CPT of binary evidence of diagnostic relationship\n\nFormula~\\ref{formula:cpt-evidence-diagnostic} is our first relationship conversion. It implies\n\\[P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)=D+1-D=1\\] \nEvidence \\textit{D} can be used to diagnose hypothesis \\textit{X} if the so-called \\textit{sufficient} \\textit{diagnostic proposition} is satisfied, as seen in proposition~\\ref{proposition:diagnostic-proposition}.\n\n\\begin{proposition}[Sufficient diagnostic proposition]\n\\textit{D} is equivalent to \\textit{X} in the diagnostic relationship if \\textit{P}(\\textit{X}{\\textbar}\\textit{D}) = \\textit{kP}(\\textit{D}{\\textbar}\\textit{X}) given uniform distribution of \\textit{X} and the \\textit{transformation coefficient k} is independent from \\textit{D}. In other words, \\textit{k} is constant with regard to \\textit{D} and so \\textit{D} is called \\textit{sufficient evidence}.\n\\label{proposition:diagnostic-proposition}\n\\end{proposition}\n%Table 2.1. Sufficient diagnostic proposition\n\nThe concept of sufficient evidence is borrowed from the concept of sufficient statistics and it is inspired by the equivalence of variables \\textit{T} and \\textit{T'} in the research \\cite[pp.~292-295]{millan:bayesiandiagnostic}. The proposition can be restated as follows: evidence \\textit{D} is only used to assess hypotheses if it is sufficient evidence. As a convention, the proposition is called \\textit{diagnostic condition} and hypotheses have uniform distribution. The assumption of hypothetic uniform distribution (\\textit{P}(\\textit{X}=1) = \\textit{P}(\\textit{X}=0)) implies that we cannot assert whether or not a given hypothesis is true before we observe its evidence.\n\nIn the learning context, \\textit{D} can be fully used to assess the student's mastery of \\textit{X} if the diagnostic condition is satisfied. Derived from such condition, formula~\\ref{formula:transformation-coefficient} specifies the transformation coefficient \\textit{k} given uniform distribution of \\textit{X}.\n\n\\begin{equation}\nk=\\frac{P\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)} \n\\label{formula:transformation-coefficient}\n\\end{equation}\n%Formula 2.2. 
Transformation coefficient given uniform distribution of \\textit{X}\n\nWe need to prove that formula~\\ref{formula:cpt-evidence-diagnostic} satisfies the diagnostic condition. Suppose the prior probability of \\textit{X} is uniform:\n\\[P\\left(X=0\\right)=P\\left(X=1\\right)\\] \nWe have:\n\\begin{align*}\n&P\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(D\\right)}\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)P\\left(X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)P\\left(X=1\\right)}\\\\\n&\\left(\\mathrm{due\\ to\\ Bayes'\\ rule}\\right)\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(X\\right)\\left(P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)\\right)}\\\\\n&\\left(\\mathrm{due\\ to}\\ P\\left(X=0\\right)=P\\left(X=1\\right)\\right)\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)}=1*P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\\\\\n&\\left(\\mathrm{due\\ to}\\ P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)=1\\right)\n\\end{align*}\nIt is easy to infer that the transformation coefficient \\textit{k} is 1 if \\textit{D} is binary. In practice, evidence \\textit{D} is often a test whose grade ranges within an interval $\\{$0, 1, 2,{\\dots}, \\textit{$\\eta$}$\\}$. Formula~\\ref{formula:cpt-integer-evidence} specifies the CPT of \\textit{D} in this case:\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\left\\{ \\begin{array}{r}\n\\frac{D}{S}\\ \\mathrm{if}\\ X=1 \\\\ \n\\frac{\\eta }{S}-\\frac{D}{S}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&\\mathrm{Where,}\\\\\n&D\\in \\left\\{0,1,2,\\dots ,\\eta \\right\\}\\\\\n&S=\\sum^{\\eta }_{D=0}{D}=\\frac{\\eta \\left(\\eta +1\\right)}{2}\n\\end{split}\n\\label{formula:cpt-integer-evidence}\n\\end{equation}\n%Formula 2.3. CPT of integer evidence ranging in $\\{$0, 1, 2,{\\dots}, \\textit{$\\eta$}$\\}$ of diagnostic relationship\n\nAs a convention, $P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=0,\\forall D\\notin \\left\\{0,1,2,\\dots ,\\eta \\right\\}$. Formula~\\ref{formula:cpt-integer-evidence} implies that if a student has mastered the concept (\\textit{X}=1), the probability that she/he completes the exercise/test \\textit{D} is proportional to her/his mark on \\textit{D} $\\left(P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\frac{D}{S}\\right)$. 
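As a quick numerical illustration (the numbers here are ours, not part of the cited material), let $\\eta =10$ so that $S=55$; a grade $D=7$ then yields $P\\left(D=7\\mid X=1\\right)=\\frac{7}{55}$ and $P\\left(D=7\\mid X=0\\right)=\\frac{10-7}{55}=\\frac{3}{55}$. 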
We also have:\n\\begin{align*}\nP\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)&=\\frac{D}{S}+\\frac{\\eta -D}{S}\\\\\n&=\\frac{\\eta }{S}=\\frac{2}{\\left(\\eta +1\\right)}\\\\\n\\sum^{\\eta }_{D=0}{P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)}&=\\sum^{\\eta }_{D=0}{\\frac{D}{S}}\\\\\n&=\\frac{\\sum^{\\eta }_{D=0}{D}}{S}=\\frac{S}{S}=1\\\\\n\\sum^{\\eta }_{D=0}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)}&=\\sum^{\\eta }_{D=0}{\\frac{\\eta -D}{S}}\\\\\n&=\\frac{\\sum^{\\eta }_{D=0}{\\left(\\eta -D\\right)}}{S}\\\\\n&=\\frac{\\sum^{\\eta }_{D=0}{\\eta }-\\sum^{\\eta }_{D=0}{D}}{S}\\\\\n&=\\frac{\\eta \\left(\\eta +1\\right)-S}{S}=\\frac{2S-S}{S}=1\n\\end{align*}\nWe need to prove that formula~\\ref{formula:cpt-integer-evidence} satisfies diagnostic condition. Suppose the prior probability of \\textit{X} is uniform:\n\\[P\\left(X=0\\right)=P\\left(X=1\\right)\\] \nThe assumption of prior uniform distribution of \\textit{X} implies that we do not determine if student has mastered \\textit{X} yet. Similarly, we have:\n\\begin{align*}\nP\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(D\\right)}\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)}=\\frac{\\eta +1}{2}P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\n\\end{align*}\nSo, the transformation coefficient \\textit{k} is $\\frac{\\eta +1}{2}$ if \\textit{D} ranges in $\\{$0, 1, 2,{\\dots}, \\textit{$\\eta$}$\\}$.\n\nIn the most general case, discrete evidence \\textit{D} ranges within an arbitrary integer interval $\\left\\{a,a+1,a+2,\\dots ,b\\right\\}$. In other words, \\textit{D} is bounded integer variable whose lower bound and upper bound are \\textit{a} and \\textit{b}, respectively. Formula~\\ref{formula:cpt-discrete-evidence} specifies CPT of \\textit{D} where $D\\in \\left\\{a,a+1,a+2,\\dots ,b\\right\\}$.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\left\\{ \\begin{array}{r}\n\\frac{D}{S}\\ \\mathrm{if}\\ X=1 \\\\ \n\\frac{b+a}{S}-\\frac{D}{S}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&\\mathrm{Where,}\\\\\n&D\\in \\left\\{a,a+1,a+2,\\dots ,b\\right\\}\\\\\n&S=a+\\left(a+1\\right)+\\left(a+2\\right)+\\dots +b=\\frac{\\left(b+a\\right)\\left(b-a+1\\right)}{2}\n\\end{split}\n\\label{formula:cpt-discrete-evidence}\n\\end{equation} \n%Formula 2.4. CPT of general discrete evidence of diagnostic relationship\n\nNote, $P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=0,\\forall D\\notin \\left\\{a,a+1,a+2,\\dots ,b\\right\\}$. 
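For instance (an illustrative computation of ours), with $a=1$ and $b=5$ we get $S=15$, so $P\\left(D=4\\mid X=1\\right)=\\frac{4}{15}$ and $P\\left(D=4\\mid X=0\\right)=\\frac{6-4}{15}=\\frac{2}{15}$, while the transformation coefficient below becomes $k=\\frac{5-1+1}{2}=2.5$. 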
According to the diagnostic condition, we need to prove the equality $P\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)=kP\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)$ where\n\\[k=\\frac{b-a+1}{2}\\] \nSimilarly, we have:\n\\begin{align*}\nP\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(D\\right)}\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)}\\\\\n&=\\frac{b-a+1}{2}P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\n\\end{align*}\nIf evidence \\textit{D} is continuous in the real interval [\\textit{a}, \\textit{b}] with note that \\textit{a} and \\textit{b} are real numbers, formula~\\ref{formula:cpt-continuous-evidence} specifies probability density function (PDF) of continuous evidence $D\\in \\left[a,b\\right]$. The PDF $p\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)$ replaces CPT in case of continuous random variable.\n\n\\begin{equation}\n\\begin{split}\n&p\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\left\\{ \\begin{array}{r}\n\\frac{2D}{b^2-a^2}\\ \\mathrm{if}\\ X=1 \\\\ \n\\frac{2}{b-a}-\\frac{2D}{b^2-a^2}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&\\mathrm{Where,}\\\\\n&D\\in \\left[a,b\\right]\\ \\mathrm{where}\\ a\\ \\mathrm{and}\\ b\\ \\mathrm{are\\ real\\ numbers}\\\\\n&S=\\int^b_a{D\\mathrm{d}D}=\\frac{b^2-a^2}{2}\n\\end{split}\n\\label{formula:cpt-continuous-evidence}\n\\end{equation}\n%Formula 2.5. CPT of continuous evidence of diagnostic relationship\nAs a convention, [\\textit{a}, \\textit{b}] is called domain of continuous evidence, which can be replaced by open or half-open intervals such as (\\textit{a}, \\textit{b}), (\\textit{a}, \\textit{b}], and [\\textit{a}, \\textit{b}). Of course we have $p\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=0,\\forall D\\notin \\left[a,b\\right]$. 
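For instance (our illustration), with $a=0$ and $b=10$ we get $p\\left(D\\mid X=1\\right)=\\frac{2D}{100}=\\frac{D}{50}$ and $p\\left(D\\mid X=0\\right)=\\frac{1}{5}-\\frac{D}{50}$, both of which integrate to 1 over $\\left[0,10\\right]$; note that non-negativity of $p\\left(D\\mid X=0\\right)$ requires $0\\le a<b$, which holds for usual grading scales. 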
In the learning context, evidence \\textit{D} is often a test whose grade ranges within the real interval [\\textit{a}, \\textit{b}].\n\nFunctions $p\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)$ and $p\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)$ are valid PDF (s) due to:\n\\begin{align*}\n\\int_D{p\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)\\mathrm{d}D}&=\\int^b_a{\\frac{2D}{b^2-a^2}\\mathrm{d}D}\\\\\n&=\\frac{1}{b^2-a^2}\\int^b_a{2D\\mathrm{d}D}=1\\\\\n\\int_D{p\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)\\mathrm{d}D}&=\\frac{2}{b-a}\\int^b_a{\\mathrm{d}D}-\\frac{1}{b^2-a^2}\\int^b_a{2D\\mathrm{d}D}=1\n\\end{align*}\nAccording to the diagnostic condition, we need to prove the equality\n\\[P\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)=kp\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\\] \nWhere,\n\\[k=\\frac{b-a}{2}\\] \nWhen \\textit{D} is continuous, its probability is calculated in an \\textit{$\\varepsilon$}-vicinity, where \\textit{$\\varepsilon$} is a very small number. As usual, \\textit{$\\varepsilon$} is the bias if \\textit{D} consists of measured values produced by equipment. The probability of \\textit{D} given \\textit{X} where $D+\\varepsilon \\in \\left[a,b\\right]$ and $D-\\varepsilon \\in \\left[a,b\\right]$ is:\n\\begin{align*}\nP\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)&=\\int^{D+\\varepsilon }_{D-\\varepsilon }{p\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\\mathrm{d}D}\\\\\n&=\\left\\{ \\begin{array}{r}\n\\int^{D+\\varepsilon }_{D-\\varepsilon }{\\frac{2D}{b^2-a^2}\\mathrm{d}D}\\ \\mathrm{if}\\ X=1 \\\\ \n\\int^{D+\\varepsilon }_{D-\\varepsilon }{\\left(\\frac{2}{b-a}-\\frac{2D}{b^2-a^2}\\right)\\mathrm{d}D}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&=\\left\\{ \\begin{array}{r}\n\\frac{4\\varepsilon D}{b^2-a^2}\\ \\mathrm{if}\\ X=1 \\\\ \n\\frac{4\\varepsilon }{b-a}-\\frac{4\\varepsilon D}{b^2-a^2}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&=2\\varepsilon p\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\n\\end{align*}\nIn fact, we have:\n\\begin{align*}\n&P\\left(X\\mathrel{\\left|\\vphantom{X D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)P\\left(X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)P\\left(X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)P\\left(X=1\\right)}\\\\\n&\\left(\\mathrm{due\\ to\\ Bayes'\\ rule}\\right)\\\\ \n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X=0}\\right.\\kern-\\nulldelimiterspace}X=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X=1}\\right.\\kern-\\nulldelimiterspace}X=1\\right)}\\\\\n&\\left(\\mathrm{due\\ to}\\ \\mathrm{assumption}\\ P\\left(X=0\\right)=P\\left(X=1\\right)\\right)\\\\ \n&=\\frac{b-a}{4\\varepsilon }P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=kp\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)\n\\end{align*}\nIn general, formula~\\ref{formula:cpt-evidence} 
summarizes the CPT of evidence of a single diagnostic relationship.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X}\\right.\\kern-\\nulldelimiterspace}X\\right)=\\left\\{ \\begin{array}{r}\n\\frac{D}{S}\\ \\mathrm{if}\\ X=1 \\\\ \n\\frac{M}{S}-\\frac{D}{S}\\ \\mathrm{if}\\ X=0 \\end{array}\n\\right.\\\\\n&k=\\frac{N}{2}\\\\\n&\\mathrm{Where,}\\\\\n&N=\\left\\{ \\begin{array}{l}\n2\\ \\mathrm{if}\\ D\\in \\left\\{0,1\\right\\} \\\\ \n\\eta +1\\ \\mathrm{if}\\ D\\in \\left\\{0,1,2,\\dots ,\\eta \\right\\} \\\\ \nb-a+1\\ \\mathrm{if}\\ D\\in \\left\\{a,a+1,a+2,\\dots ,b\\right\\} \\\\ \nb-a\\ \\mathrm{if}\\ D\\ \\mathrm{continuous\\ and}\\ D\\in \\left[a,b\\right] \\end{array}\n\\right.\\\\\n&M=\\left\\{ \\begin{array}{l}\n1\\ \\mathrm{if}\\ D\\in \\left\\{0,1\\right\\} \\\\ \n\\eta \\ \\mathrm{if}\\ D\\in \\left\\{0,1,2,\\dots ,\\eta \\right\\} \\\\ \nb+a\\ \\mathrm{if}\\ D\\in \\left\\{a,a+1,a+2,\\dots ,b\\right\\} \\\\ \nb+a\\ \\mathrm{if}\\ D\\ \\mathrm{continuous\\ and}\\ D\\in \\left[a,b\\right] \\end{array}\n\\right.\\\\\n&S=\\sum_D{D}=\\frac{NM}{2}=\\left\\{ \\begin{array}{l}\n1\\ \\mathrm{if}\\ D\\in \\left\\{0,1\\right\\} \\\\ \n\\frac{\\eta \\left(\\eta +1\\right)}{2}\\ \\mathrm{if}\\ D\\in \\left\\{0,1,2,\\dots ,\\eta \\right\\} \\\\ \n\\frac{\\left(b+a\\right)\\left(b-a+1\\right)}{2}\\ \\mathrm{if}\\ D\\in \\left\\{a,a+1,a+2,\\dots ,b\\right\\} \\\\ \n\\frac{b^2-a^2}{2}\\ \\mathrm{if}\\ D\\ \\mathrm{continuous\\ and}\\ D\\in \\left[a,b\\right] \\end{array}\n\\right.\n\\end{split}\n\\label{formula:cpt-evidence}\n\\end{equation}\n%Formula 2.6. CPT of evidence of diagnostic relationship\n\nIn general, if the conditional probability \\textit{P}(\\textit{D{\\textbar}X}) is specified by formula~\\ref{formula:cpt-evidence}, the diagnostic condition will be satisfied. Note that the CPT \\textit{P}(\\textit{D{\\textbar}X}) is the PDF \\textit{p}(\\textit{D{\\textbar}X}) in case of continuous evidence. The diagnostic relationship will be extended to more than one hypothesis. The next section will mention how to determine CPT (s) of a simple graph with one child node and many parent nodes based on X-gate inferences.\n\n\\section{X-gate inferences}\nConsider a simple graph consisting of one child variable \\textit{Y} and \\textit{n} parent variables \\textit{X${}_{i}$}, as shown in figure~\\ref{figure:simple-graph}. Each relationship from \\textit{X${}_{i}$} to \\textit{Y} is quantified by a normalized weight \\textit{w${}_{i}$} where 0 $\\mathrm{\\le}$ \\textit{w${}_{i}$} $\\mathrm{\\le}$ 1. A large graph is an integration of many simple graphs. Figure~\\ref{figure:simple-graph} shows the DAG of a simple BN. As aforementioned, the essence of constructing a simple BN is to convert graphic relationships of a simple graph into CPT (s) of the simple BN.\n\n\\begin{figure}\n\\centering\n\\includegraphics{SimpleGraph.png}\n\\caption{Simple graph or simple network}\n\\label{figure:simple-graph}\n\\end{figure}\n%Figure 3.1. Simple graph or simple network\n\nThe child variable \\textit{Y} is called the target and the parent variables \\textit{X${}_{i}$} (s) are called sources. Especially, these relationships adhere to X-gates such as AND-gate, OR-gate, and SIGMA-gate. These gates originate from logic gates \\cite{wikipedia:logic-gates}. For instance, AND-gate and OR-gate represent the prerequisite relationship. SIGMA-gate represents the aggregation relationship. Therefore, relationship conversion is to determine the X-gate inference. 
The simple graph shown in figure~\\ref{figure:simple-graph} is also called an X-gate graph or X-gate network. Please distinguish the letter ``X'' in the term ``X-gate inference'', which implies logic operators (AND, OR, XOR, etc.), from the ``variable \\textit{X}''.\n\nAll variables are binary and they represent events. The probability \\textit{P}(\\textit{X}) indicates that event \\textit{X} occurs. Thus, \\textit{P}(\\textit{X}) denotes \\textit{P}(\\textit{X}=1) and \\textit{P}(not(\\textit{X})) denotes \\textit{P}(\\textit{X}=0). Formula~\\ref{formula:not-gate-inference} specifies the simple NOT-gate inference.\n\n\\begin{equation}\n\\begin{split} \n&P\\left(\\mathrm{not}\\left(X\\right)\\right)=P\\left(\\overline{X}\\right)=P\\left(X=0\\right)=1-P\\left(X=1\\right)=1-P\\left(X\\right)\\\\\n&P\\left(\\mathrm{not}\\left(\\mathrm{not}\\left(X\\right)\\right)\\right)=P\\left(X\\right)\n\\end{split}\n\\label{formula:not-gate-inference}\n\\end{equation}\n%Formula 3.1. NOT-gate inference\n\nThe X-gate inference is based on three assumptions mentioned in \\cite[p.~157]{neapolitan:bn}:\n\n\\begin{enumerate}\n\\item  \\textit{X-gate inhibition}: Given a relationship from source \\textit{X${}_{i}$} to target \\textit{Y}, there is a factor \\textit{I${}_{i}$} that inhibits \\textit{X${}_{i}$} from being integrated into \\textit{Y}. Factor \\textit{I${}_{i}$} is called the inhibition of \\textit{X${}_{i}$}. The inhibition \\textit{I${}_{i}$} must be turned off for \\textit{X${}_{i}$} to be integrated into \\textit{Y}.\n\n\\item  \\textit{Inhibition independence}: Inhibitions are mutually independent. For example, inhibition \\textit{I}${}_{1}$ of \\textit{X}${}_{1}$ is independent of inhibition \\textit{I}${}_{2}$ of \\textit{X}${}_{2}$.\n\n\\item  \\textit{Accountability}: The X-gate network is established by accountable variables \\textit{A${}_{i}$} for \\textit{X${}_{i}$} and \\textit{I${}_{i}$}. Each X-gate inference owns a particular combination of \\textit{A${}_{i}$} (s).\n\\end{enumerate}\n\nFigure~\\ref{figure:extended-x-gate-network} shows the extended X-gate network with accountable variables \\textit{A${}_{i}$} (s) \\cite[p.~158]{neapolitan:bn}.\n\n\\begin{figure}\n\\centering\n\\includegraphics{ExtendedXgateNetwork.png}\n\\caption{Extended X-gate network with accountable variables \\textit{A${}_{i}$} (s)}\n\\label{figure:extended-x-gate-network}\n\\end{figure}\n%Figure 3.2. Extended X-gate network with accountable variables \\textit{A${}_{i}$} (s)\n\nThe strength of each relationship from source \\textit{X${}_{i}$} to target \\textit{Y} is quantified by a weight 0 $\\mathrm{\\le}$ \\textit{w${}_{i}$} $\\mathrm{\\le}$ 1. According to the assumption of inhibition, the probability of \\textit{I${}_{i}$}=\\textit{OFF} is \\textit{p${}_{i}$}, which is set to be the weight \\textit{w${}_{i}$}.\n\\[p_i=w_i\\] \nIf notation \\textit{w${}_{i}$} is used, we focus on the strength of the relationship. If notation \\textit{p${}_{i}$} is used, we focus on the probability of the \\textit{OFF} inhibition. In probabilistic inference, \\textit{p${}_{i}$} is also the prior probability of \\textit{X${}_{i}$}=1. However, we will assume each \\textit{X${}_{i}$} has uniform distribution later on. 
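For instance (our reading of this convention), a weight $w_i=0.8$ means the inhibition $I_i$ is turned off with probability $P\\left(I_i=OFF\\right)=0.8$, so the source $X_i$, when equal to 1, is integrated into $Y$ with probability 0.8. 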
Formula~\\ref{formula:inhibition-probs} specifies probabilities of inhibitions \\textit{I${}_{i}$} (s) and accountable variables \\textit{A${}_{i}$} (s).\n\n\\begin{equation}\n\\begin{split}\n&P\\left(I_i\\mathrm{=}OFF\\right)=p_i=w_i\\\\\n&P\\left(I_i\\mathrm{=}ON\\right)=1-p_i=1-w_i\\\\\n&P\\left(A_i\\mathrm{=}ON\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}ON X_i\\mathrm{=1,}I_i\\mathrm{=}OFF}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=1,}I_i\\mathrm{=}OFF\\right)\\mathrm{=1}\\\\\n&P\\left(A_i\\mathrm{=}ON\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}ON X_i\\mathrm{=1,}I_i\\mathrm{=}ON}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=1,}I_i\\mathrm{=}ON\\right)\\mathrm{=0}\\\\\n&P\\left(A_i\\mathrm{=}ON\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}ON X_i\\mathrm{=0,}I_i\\mathrm{=}OFF}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=0,}I_i\\mathrm{=}OFF\\right)\\mathrm{=0}\\\\\n&P\\left(A_i\\mathrm{=}ON\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}ON X_i\\mathrm{=0,}I_i\\mathrm{=}ON}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=0,}I_i\\mathrm{=}ON\\right)\\mathrm{=0}\\\\\n&P\\left(A_i\\mathrm{=}OFF\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}OFF X_i\\mathrm{=1,}I_i\\mathrm{=}OFF}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=1,}I_i\\mathrm{=}OFF\\right)\\mathrm{=0}\\\\\n&P\\left(A_i\\mathrm{=}OFF\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}OFF X_i\\mathrm{=1,}I_i\\mathrm{=}ON}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=1,}I_i\\mathrm{=}ON\\right)\\mathrm{=1}\\\\\n&P\\left(A_i\\mathrm{=}OFF\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}OFF X_i\\mathrm{=0,}I_i\\mathrm{=}OFF}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=0,}I_i\\mathrm{=}OFF\\right)\\mathrm{=1}\\\\\n&P\\left(A_i\\mathrm{=}OFF\\mathrel{\\left|\\vphantom{A_i\\mathrm{=}OFF X_i\\mathrm{=0,}I_i\\mathrm{=}ON}\\right.\\kern-\\nulldelimiterspace}X_i\\mathrm{=0,}I_i\\mathrm{=}ON\\right)\\mathrm{=1}\n\\end{split}\n\\label{formula:inhibition-probs}\n\\end{equation} \n%Formula 3.2. Probabilities of inhibitions \\textit{I${}_{i}$} (s) and accountable variables \\textit{A${}_{i}$} (s)\n\nAccording to formula~\\ref{formula:inhibition-probs}, given probability \\textit{P}(\\textit{A${}_{i}$}=\\textit{ON} {\\textbar} \\textit{X${}_{i}$}=1, \\textit{I${}_{i}$}=\\textit{OFF}), it is assured 100\\% confident that accountable variables \\textit{A${}_{i}$} is turned on if source \\textit{X${}_{i}$} is 1 and inhibition \\textit{I${}_{i}$} is turned off. Formula~\\ref{formula:accountable-probs} specifies conditional probability of accountable variables \\textit{A${}_{i}$} (s) given \\textit{X${}_{i}$} (s), which is corollary of formula~\\ref{formula:inhibition-probs}.\n\n\\begin{equation}\n\\begin{split} \n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=p_i=w_i\\\\\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=0\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=1-p_i=1-w_i\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=1\n\\end{split}\n\\label{formula:accountable-probs}\n\\end{equation}\n%Formula 3.3. 
Conditional probability of accountable variables\n\nFollowing is proof of formula~\\ref{formula:accountable-probs}.\n\\begin{align*}\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)=P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i,I_i=ON}\\right.\\kern-\\nulldelimiterspace}X_i,I_i=ON\\right)P\\left(I_i=ON\\right)\\\\\n&+P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i,I_i=OFF}\\right.\\kern-\\nulldelimiterspace}X_i,I_i=OFF\\right)P\\left(I_i=OFF\\right)\\\\\n&=0*\\left(1-p_i\\right)+P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i,I_i=OFF}\\right.\\kern-\\nulldelimiterspace}X_i,I_i=OFF\\right)p_i\\\\\n&\\left(\\mathrm{By\\ applying\\ formula\\ \\ref{formula:inhibition-probs}}\\right)\\\\\n&=p_iP\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i,I_i=OFF}\\right.\\kern-\\nulldelimiterspace}X_i,I_i=OFF\\right)\n\\end{align*}\nIt implies\n\\begin{align*}\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=p_iP\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1,I_i=OFF}\\right.\\kern-\\nulldelimiterspace}X_i=1,I_i=OFF\\right)=p_i\\\\\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=p_iP\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0,I_i=OFF}\\right.\\kern-\\nulldelimiterspace}X_i=0,I_i=OFF\\right)=0\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=1-P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=1-p_i\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=1-P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=1\n\\end{align*}\nAs a definition, the set of all \\textit{X${}_{i}$} (s) is complete if and only if\n\\[P\\left(X_1\\cup X_2\\cup \\cdots \\cup X_n\\right)=P\\left(\\mathrm{\\Omega }\\right)=\\sum^n_{i=1}{w_i}=1\\] \nThe set of all \\textit{X${}_{i}$} (s) is mutually exclusive if and only if\n\\[X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\\] \nFor each \\textit{X${}_{i}$} there is only one \\textit{A${}_{i}$} and vice versa, which establishes a bijection between \\textit{X${}_{i}$} (s) and \\textit{A${}_{i}$} (s). Obviously, the fact that the set of all \\textit{X${}_{i}$} (s) is complete is equivalent to the fact that the set of all \\textit{A${}_{i}$} (s) is complete. We will prove by contradiction that ``the fact that the set of all \\textit{X${}_{i}$} (s) is mutually exclusive is equivalent to the fact that the set of all \\textit{A${}_{i}$} (s) is mutually exclusive''. Suppose $X_i\\cap X_j=\\emptyset ,\\forall i\\neq j$ but $\\exists i\\neq j$: $A_i\\cap A_j=B\\neq \\emptyset $. Let $B^{-1}\\neq \\emptyset $ be preimage of \\textit{B}. Due to $B\\subseteq A_i$ and $B\\subseteq A_j$, we have $B^{-1}\\subseteq X_i$ and $B^{-1}\\subseteq X_j$, which causes that $X_i\\cap X_j=B^{-1}\\neq \\emptyset $. There is a contradiction and so we have:\n\\[X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\\Rightarrow A_i\\cap A_j=\\emptyset ,\\forall i\\neq j\\] \nBy similar proof, we have:\n\\[A_i\\cap A_j=\\emptyset ,\\forall i\\neq j\\Rightarrow X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\\] \nThe extended X-gate network shown in figure~\\ref{figure:extended-x-gate-network} is interpretation of simple network shown in figure~\\ref{figure:simple-graph}. 
Specifying CPT of the simple network is to determine the conditional probability \\textit{P}(\\textit{Y}=1 {\\textbar} \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}) based on extended X-gate network. The X-gate inference is represented by such probability \\textit{P}(\\textit{Y}=1 {\\textbar} \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}) specified by formula~\\ref{formula:x-gate-prob} \\cite[p.~159]{neapolitan:bn}.\n\n\\begin{equation}\nP\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\n\\label{formula:x-gate-prob}\n\\end{equation}\n%Formula 3.4. X-gate probability\n\nFollowing is the proof of formula~\\ref{formula:x-gate-prob}.\n\\begin{align*}\n&P\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\frac{P\\left(Y,X_1,X_2,\\dots ,X_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}\\\\\n&\\left(\\mathrm{Due\\ to\\ Bayes'\\ rule}\\right)\\\\\n&=\\frac{\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y,X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{Y,X_1,X_2,\\dots ,X_n A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)*P\\left(A_1,A_2,\\dots ,A_n\\right)}}{P\\left(X_1,X_2,\\dots ,X_n\\right)}\\\\\n&\\left(\\mathrm{Due\\ to\\ total\\ probability\\ rule}\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y,X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{Y,X_1,X_2,\\dots ,X_n A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)*\\frac{P\\left(A_1,A_2,\\dots ,A_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}}\\\\\n&\\begin{aligned}\n=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)*P\\left(X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{X_1,X_2,\\dots ,X_n A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)}\\\\\n*\\frac{P\\left(A_1,A_2,\\dots ,A_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}\n\\end{aligned}\\\\\n&\\left(\\mathrm{Because}\\ Y\\ \\mathrm{is\\ conditionally\\ independent\\ from}\\ X_i\\ \\mathrm{(s)}\\ \\mathrm{given}\\ A_i\\ \\mathrm{(s)}\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)*\\frac{P\\left(X_1,X_2,\\dots ,X_n,A_1,A_2,\\dots ,A_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}}\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)*P\\left(A_1,A_2,\\dots ,A_n\\mathrel{\\left|\\vphantom{A_1,A_2,\\dots ,A_n X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)}\\\\\n&\\left(\\mathrm{Due\\ to\\ Bayes'\\ rule}\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)}}\\\\\n&\\left(\\mathrm{Because}\\ A_i\\ \\mathrm{(s)}\\ \\mathrm{are\\ mutually\\ 
independent}\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y\\mathrel{\\left|\\vphantom{Y A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\\\\\n&\\left(\\mathrm{Because}\\ \\mathrm{each}\\ A_i\\ \\mathrm{is\\ only\\ dependent\\ on}\\ X_i\\right)\n\\end{align*}\n\nIt is necessary to introduce some mathematical notation because formula~\\ref{formula:x-gate-prob} is complicated and relevant to arrangements of \\textit{X${}_{i}$} (s). Let $\\Omega$ = $\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}$\\}$ be a set of binary variables and let $|\\Omega|$=\\textit{n} be the cardinality of $\\Omega$.\n\nLet \\textit{a}($\\Omega$) be an arrangement of $\\Omega$ which is a set of \\textit{n} instances $\\{$\\textit{X}${}_{1}$=\\textit{x}${}_{1}$, \\textit{X}${}_{2}$=\\textit{x}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}=\\textit{x${}_{n}$}$\\}$ where \\textit{x${}_{i}$} is 1 or 0. The number of all \\textit{a}($\\Omega$) is $2^{|\\Omega|}$. For instance, given $\\Omega$ = $\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$$\\}$, there are 2${}^{2}$=4 arrangements as follows:\n\\begin{align*}\n&a\\left(\\Omega\\right)=\\left\\{X_1=1,X_2=1\\right\\}\\\\\n&a\\left(\\Omega\\right)=\\left\\{X_1=1,X_2=0\\right\\}\\\\\n&a\\left(\\Omega\\right)=\\left\\{X_1=0,X_2=1\\right\\}\\\\\n&a\\left(\\Omega\\right)=\\left\\{X_1=0,X_2=0\\right\\}\n\\end{align*}\nLet \\textit{a}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) be the arrangement of $\\Omega$ with fixed \\textit{X${}_{i}$}. The number of all \\textit{a}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) is $2^{|\\Omega|-1}$. Similarly, for instance, \\textit{a}($\\Omega$:$\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$, \\textit{X}${}_{3}$$\\}$) is an arrangement of $\\Omega$ with fixed \\textit{X}${}_{1}$, \\textit{X}${}_{2}$, \\textit{X}${}_{3}$. The number of all \\textit{a}($\\Omega$:$\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$, \\textit{X}${}_{3}$$\\}$) is $2^{|\\Omega|-3}$.\n\nLet \\textit{c}($\\Omega$) and \\textit{c}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) be the number of arrangements \\textit{a}($\\Omega$) and \\textit{a}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$), respectively. Such \\textit{c}($\\Omega$) and \\textit{c}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) are called arrangement counters. As usual, counters \\textit{c}($\\Omega$) and \\textit{c}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) are equal to $2^{|\\Omega|}$ and $2^{|\\Omega|-1}$, respectively, but they will vary according to specific cases.\n\nLet $\\sum_a{F\\left(a\\left(\\mathrm{\\Omega }\\right)\\right)}$ and $\\prod_a{F\\left(a\\left(\\mathrm{\\Omega }\\right)\\right)}$ denote the sum and product of values generated from function \\textit{F} acting on every \\textit{a}($\\Omega$). The number of arrangements on which \\textit{F} acts is \\textit{c}($\\Omega$).\n\nLet x denote the X-gate operator, for instance, $\\mathrm{x}=\\odot $ for AND-gate, $\\mathrm{x}=\\oplus $ for OR-gate, $\\mathrm{x}=\\mathrm{not}\\odot $ for NAND-gate, $\\mathrm{x}=\\mathrm{not}\\oplus $ for NOR-gate, $\\mathrm{x}=\\otimes $ for XOR-gate, $\\mathrm{x}=\\mathrm{not}\\otimes $ for XNOR-gate, $\\mathrm{x}=\\uplus $ for U-gate, $\\mathrm{x}=+$ for SIGMA-gate. 
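To see formula~\\ref{formula:x-gate-prob} numerically before formalizing arrangement sums, the following is a minimal Java sketch (ours, not taken from the cited works; the class name \\textit{XGate}, the method names, and the gate predicates are illustrative assumptions) that sums over all $2^n$ accountable arrangements with $P\\left(A_i\\mid X_i\\right)$ given by formula~\\ref{formula:accountable-probs}:\n\n\\begin{lstlisting}[language = Java]\nimport java.util.function.Predicate;\n\npublic class XGate {\n  // Formula 3.3: P(A_i = ON | X_i) is p_i if X_i = 1 and 0 if X_i = 0.\n  static double pA(boolean on, int x, double p) {\n    double pOn = (x == 1) ? p : 0.0;\n    return on ? pOn : 1.0 - pOn;\n  }\n\n  // Formula 3.4: sum over all 2^n arrangements of the A_i (s).\n  // The predicate plays the role of P(Y = 1 | A_1,..., A_n) in {0, 1}.\n  static double xGateProb(int[] x, double[] p, Predicate<boolean[]> gate) {\n    int n = x.length;\n    double sum = 0.0;\n    for (int mask = 0; mask < (1 << n); mask++) {\n      boolean[] a = new boolean[n];\n      double prod = 1.0;\n      for (int i = 0; i < n; i++) {\n        a[i] = ((mask >> i) & 1) == 1;\n        prod *= pA(a[i], x[i], p[i]);\n      }\n      if (gate.test(a)) sum += prod;\n    }\n    return sum;\n  }\n\n  public static void main(String[] args) {\n    int[] x = {1, 1};\n    double[] p = {0.9, 0.8};\n    // AND-gate: Y = 1 if and only if every A_i is ON.\n    Predicate<boolean[]> and = a -> {\n      for (boolean b : a) if (!b) return false;\n      return true;\n    };\n    // OR-gate: Y = 1 if and only if some A_i is ON.\n    Predicate<boolean[]> or = a -> {\n      for (boolean b : a) if (b) return true;\n      return false;\n    };\n    System.out.println(xGateProb(x, p, and)); // prints 0.72\n    System.out.println(xGateProb(x, p, or));  // prints 0.98\n  }\n}\n\\end{lstlisting}\n\nWith $p_1=0.9$, $p_2=0.8$ and both sources equal to 1, the sketch prints $0.9*0.8=0.72$ for AND-gate and $1-0.1*0.2=0.98$ for OR-gate, matching the closed forms derived below. 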
Given an x-operator, let \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) and \\textit{s}($\\Omega$) be the sum of all $P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\right)$ over every arrangement of $\\Omega$ with and without fixed \\textit{X${}_{i}$}, respectively. Such \\textit{s}($\\Omega$) and \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) are called arrangement sums according to definition~\\ref{definition:binary-arrangements}; they play the role of the acting function \\textit{F}.\n\n\\begin{definition}\n$\\begin{aligned}\ns\\left(\\mathrm{\\Omega }\\right)&=\\sum_a{P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\mathrel{\\left|\\vphantom{X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n a\\left(\\mathrm{\\Omega }\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\right)\\right)}\\\\\n&=\\sum_a{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\right)\\right)}\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)&=\\sum_a{P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\mathrel{\\left|\\vphantom{X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}\\\\\n&=\\sum_a{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}\n\\end{aligned}$\n\\label{definition:binary-arrangements}\n\\end{definition}\n%Table 3.1. Binary arrangements\nFor example, \\textit{s}($\\Omega$) and \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) for OR-gate are:\n\\begin{align*}\ns\\left(\\mathrm{\\Omega }\\right)&=\\sum_a{P\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\mathrel{\\left|\\vphantom{X_1\\oplus X_2\\oplus \\dots \\oplus X_n a\\left(\\mathrm{\\Omega }\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\right)\\right)}\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)&=\\sum_a{P\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\mathrel{\\left|\\vphantom{X_1\\oplus X_2\\oplus \\dots \\oplus X_n a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}\n\\end{align*}\nNote that $\\Omega$ can be any set of binary variables.\n\nIt is not easy to produce all binary arrangements of $\\Omega$. 
The following is a code snippet, written in the Java programming language, for producing all such arrangements.\n\n\\begin{lstlisting}[language = Java]\nimport java.util.ArrayList;\n\npublic class Generator {\n  // List of generated arrangements; each is an array of length r\n  // whose entries range over {0, 1,..., n-1}.\n  private ArrayList<int[]> arrangements;\n  private int n; // number of possible values per position\n  private int r; // length of each arrangement\n\n  private Generator(int n, int r) {\n    this.n = n;\n    this.r = r;\n    this.arrangements = new ArrayList<int[]>();\n  }\n\n  // Recursively fills position i with every value 0..n-1 and stores\n  // a copy of the completed array when the last position is reached.\n  private void create(int[] a, int i) {\n    for(int j = 0; j < n; j++) {\n      a[i] = j;\n      if(i < r - 1)\n        create(a, i + 1);\n      else if(i == r - 1) {\n        int[] b = new int[a.length];\n        for(int k = 0; k < a.length; k++)\n          b[k] = a[k];\n        arrangements.add(b);\n      }\n    }\n  }\n\n  public int[] get(int i) {\n    return arrangements.get(i);\n  }\n\n  public long size() {\n    return arrangements.size();\n  }\n\n  // Generator.parse(2, n) produces all 2^n binary arrangements.\n  public static Generator parse(int n, int r) {\n    Generator arr = new Generator(n, r);\n    int[] a = new int[r];\n    arr.create(a, 0);\n    return arr;\n  }\n}\n\\end{lstlisting}\n%Table 3.2. Code snippet generating all binary arrangements\n\nEach element of the list ``\\textit{arrangements}'' is a binary arrangement \\textit{a}($\\Omega$) represented by an array of bits (0 and 1). The method ``\\textit{create}(\\textit{int}[] \\textit{a}, \\textit{int i})'', which is a recursive method, is the main one that generates arrangements. The method call ``\\textit{Generator}.\\textit{parse}(2, \\textit{n})'' will list all possible binary arrangements.\n\nFormula~\\ref{formula:arrangement-sum-counter} specifies the connection between \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=1$\\}$) and \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=0$\\}$), and between \\textit{c}($\\Omega$:$\\{$\\textit{X${}_{i}$}=1$\\}$) and \\textit{c}($\\Omega$:$\\{$\\textit{X${}_{i}$}=0$\\}$).\n\n\\begin{equation}\n\\begin{split}\n&s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)+s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)=s\\left(\\mathrm{\\Omega }\\right)\\\\\n&c\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)+c\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)=c\\left(\\mathrm{\\Omega }\\right)\n\\end{split}\n\\label{formula:arrangement-sum-counter}\n\\end{equation} \n%Formula 3.5. Connection among arrangement sum and counter\n\nIt is easy to derive formula~\\ref{formula:arrangement-sum-counter} because the set of all arrangements \\textit{a}($\\Omega$:$\\{$\\textit{X${}_{i}$}=1$\\}$) is the complement of the set of all arrangements \\textit{a}($\\Omega$:$\\{$\\textit{X${}_{i}$}=0$\\}$).\n\nLet \\textit{K} be the set of \\textit{X${}_{i}$} (s) whose values are 1 and let \\textit{L} be the set of \\textit{X${}_{i}$} (s) whose values are 0. \\textit{K} and \\textit{L} are mutually complementary. Formula~\\ref{formula:sets-k-l} determines sets \\textit{K} and \\textit{L}.\n\n\\begin{equation}\n\\left\\{ \\begin{array}{l}\nK=\\left\\{i:X_i=1\\right\\} \\\\ \nL=\\left\\{i:X_i=0\\right\\} \\\\ \nK\\cap L=\\emptyset  \\\\ \nK\\cup L=\\{1,2,\\dots ,n\\} \\end{array}\n\\right.\n\\label{formula:sets-k-l}\n\\end{equation}\n%Formula 3.6. 
Sets \\textit{K} and \\textit{L}\n\nThe \\textbf{AND-gate} inference represents prerequisite relationship satisfying AND-gate condition specified by formula~\\ref{formula:and-gate-condition}.\n\n\\begin{equation} \nP\\left(Y\\mathrm{=1}\\mathrel{\\left|\\vphantom{Y\\mathrm{=1} A_i\\mathrm{=}OFF\\mathrm{\\ for\\ some\\ }i}\\right.\\kern-\\nulldelimiterspace}A_i\\mathrm{=}OFF\\mathrm{\\ for\\ some\\ }i\\right)\\mathrm{=0}\n\\label{formula:and-gate-condition}\n\\end{equation}\n%Formula 3.7. AND-gate condition\n\nFrom formula~\\ref{formula:x-gate-prob}, we have\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\\\\ \n&=\\prod^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\ \n&\\left(\\mathrm{due\\ to\\ }P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_i=OFF\\ \\mathrm{for\\ some}\\ i}\\right.\\kern-\\nulldelimiterspace}A_i=OFF\\ \\mathrm{for\\ some}\\ i\\right)=0\\right)\\\\\n&=\\left(\\prod_{i\\in K}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\right)\\left(\\prod_{i\\notin K}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\right)\\\\\n&=\\left(\\prod_{i\\in K}{p_i}\\right)\\left(\\prod_{i\\notin K}{0}\\right)\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ all}\\ X_i\\ \\left(s\\right)\\ \\mathrm{are}\\ 1 \\\\ \n0\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ }X_i=0 \\end{array}\n\\right.\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{formula}\\ \\mathrm{\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\n\nIn general, formula~\\ref{formula:and-gate-inference} specifies AND-gate inference.\n\n\\begin{equation}\n\\begin{split}\nP\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ all}\\ X_i\\ \\left(s\\right)\\ \\mathrm{are}\\ 1 \\\\ \n0\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ }X_i=0 \\end{array}\n\\right.\\\\\nP\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)&=\\left\\{ \\begin{array}{l}\n1-\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ all}\\ X_i\\ \\left(s\\right)\\ \\mathrm{are}\\ 1 \\\\ \n1\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ }X_i=0 \\end{array}\n\\right.\n\\end{split}\n\\label{formula:and-gate-inference}\n\\end{equation}\n%Formula 3.8. AND-gate inference\n\nThe AND-gate formula was also described in \\cite[p.~33]{diez:pmodels}. 
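For instance (our numbers), with $n=2$, $p_1=0.9$, and $p_2=0.8$: if $X_1=X_2=1$ then $P\\left(X_1\\odot X_2\\right)=0.9*0.8=0.72$, whereas any $X_i=0$ forces the probability to 0. 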
Formula~\\ref{formula:and-gate-inference} varies according to two cases whose arrangement counters are listed as follows:\n\\begin{center}\n\\begin{tabular}{l|l}\n$L=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$L\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=2^{n-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=2^{n-1}\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=2^n-1\n  \\end{aligned}$\\\\\n\\end{tabular}\n\\end{center}\nThe \\textbf{OR-gate} inference represents a prerequisite relationship satisfying the OR-gate condition specified by formula~\\ref{formula:or-gate-condition} \\cite[p.~157]{neapolitan:bn}.\n\n\\begin{equation}\nP\\left(Y\\mathrm{=1}\\mathrel{\\left|\\vphantom{Y\\mathrm{=1} A_i\\mathrm{=}ON\\mathrm{\\ for\\ some\\ }i}\\right.\\kern-\\nulldelimiterspace}A_i\\mathrm{=}ON\\mathrm{\\ for\\ some\\ }i\\right)\\mathrm{=1}\n\\label{formula:or-gate-condition}\n\\end{equation}\n%Formula 3.9. OR-gate condition\n\nThe OR-gate condition implies\n\\[P\\left(Y\\mathrm{=0}\\mathrel{\\left|\\vphantom{Y\\mathrm{=0} A_i\\mathrm{=}ON\\mathrm{\\ for\\ some\\ }i}\\right.\\kern-\\nulldelimiterspace}A_i\\mathrm{=}ON\\mathrm{\\ for\\ some\\ }i\\right)\\mathrm{=0}\\] \nFrom formula~\\ref{formula:x-gate-prob}, we have \\cite[p.~159]{neapolitan:bn}:\n\\begin{align*}\n&P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\\\\ \n&=\\prod^n_{i=1}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&\\left(\\mathrm{due\\ to\\ }P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 A_i=ON\\ \\mathrm{for\\ some}\\ i}\\right.\\kern-\\nulldelimiterspace}A_i=ON\\ \\mathrm{for\\ some}\\ i\\right)=0\\right)\\\\\n&=\\left(\\prod_{i\\in K}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\right)\\left(\\prod_{i\\notin K}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\right)\\\\\n&=\\left(\\prod_{i\\in K}{\\left(1-p_i\\right)}\\right)\\left(\\prod_{i\\notin K}{1}\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n\\prod_{i\\in K}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ K\\neq \\emptyset  \\\\ \n1\\ \\ \\mathrm{if}\\ K=\\emptyset  \\end{array}\n\\right.\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{formula}\\ \\mathrm{\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\n\nIn general, formula~\\ref{formula:or-gate-inference} specifies the OR-gate inference.\n\n\\begin{equation}\n\\begin{split}\nP\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\right)&=1-P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n1-\\prod_{i\\in K}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ K\\neq \\emptyset  \\\\ \n0\\ \\ \\mathrm{if}\\ K=\\emptyset  \\end{array}\n\\right.\\\\\nP\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1,X_2,\\dots 
,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)&=\\left\\{ \\begin{array}{r}\n\\prod_{i\\in K}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ K\\neq \\emptyset  \\\\ \n1\\ \\ \\mathrm{if}\\ K=\\emptyset  \\end{array}\n\\right.\n\\end{split}\n\\label{formula:or-gate-inference}\n\\end{equation}\n%Formula 3.10. OR-gate inference\n\nWhere $K$ is the set of $X_i$ (s) whose values are 1. The OR-gate formula was mentioned in \\cite[p.~158]{neapolitan:bn} and \\cite[p.~20]{diez:pmodels}. Formula~\\ref{formula:or-gate-inference} varies according to two cases whose arrangement counters are listed as follows:\n\\begin{center}\n\\begin{tabular}{l|l}\n$K\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=2^{n-1}\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=2^{n-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=2^n-1\n  \\end{aligned}$\\\\\n\\hline\n$K=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\n\\end{tabular}\n\\end{center}\nAccording to De Morgan's rule with regard to AND-gate and OR-gate, we have:\n\\begin{align*}\nP\\left(\\mathrm{not}\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)\\right)&=P\\left(\\left(\\mathrm{not}\\left(X_1\\right)\\right)\\oplus \\left(\\mathrm{not}\\left(X_2\\right)\\right)\\oplus \\dots \\oplus \\left(\\mathrm{not}\\left(X_n\\right)\\right)\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n1-\\prod_{i\\in L}{\\left(1-\\left(1-p_i\\right)\\right)}\\ \\mathrm{if}\\ L\\neq \\emptyset  \\\\ \n0\\ \\ \\mathrm{if}\\ L=\\emptyset  \\end{array}\n\\right.\\\\\n&\\left(\\mathrm{Due\\ to\\ formula~\\ref{formula:or-gate-inference}}\\right)\n\\end{align*}\n\nAccording to formula~\\ref{formula:and-gate-inference}, we also have:\n\\begin{align*}\nP\\left(\\mathrm{not}\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\right)\\right)&=P\\left(\\left(\\mathrm{not}\\left(X_1\\right)\\right)\\odot \\left(\\mathrm{not}\\left(X_2\\right)\\right)\\odot \\dots \\odot \\left(\\mathrm{not}\\left(X_n\\right)\\right)\\right)\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{P\\left(\\mathrm{not}\\left(X_i\\right)\\right)}\\ \\mathrm{if\\ all}\\ \\mathrm{not}\\left(X_i\\right)\\ \\left(s\\right)\\ \\mathrm{are}\\ 1 \\\\ \n0\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ not}\\left(X_i\\right)=0 \\end{array}\n\\right.\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{\\left(1-p_i\\right)}\\ \\mathrm{if\\ all}\\ X_i\\ \\left(s\\right)\\ \\mathrm{are}\\ 0 \\\\ \n0\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ }X_i=1 \\end{array}\n\\right.\n\\end{align*}\nIn general, formula~\\ref{formula:nand-nor-gates-inference} specifies NAND-gate inference and NOR-gate inference derived from AND-gate and OR-gate:\n\n\\begin{equation}\n\\begin{split}\n&P\\left(\\mathrm{not}\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)\\right)=\\left\\{ \\begin{array}{r}\n1-\\prod_{i\\in L}{p_i}\\ \\mathrm{if}\\ L\\neq \\emptyset  \\\\ \n0\\ \\ \\mathrm{if}\\ L=\\emptyset  \\end{array}\n\\right.\\\\\n&P\\left(\\mathrm{not}\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\right)\\right)=\\left\\{ \\begin{array}{r}\n\\prod^n_{i=1}{q_i}\\ \\mathrm{if\\ }K=\\emptyset  \\\\ \n0\\ \\mathrm{if\\ }K\\neq \\emptyset  \\end{array}\n\\right.\n\\end{split}\n\\label{formula:nand-nor-gates-inference}\n\\end{equation}\n%Formula 3.11. 
NAND-gate inference and NOR-gate inference\nWhere $K$ and $L$ are the sets of $X_i$ (s) whose values are 1 and 0, respectively, and $q_i=1-p_i$ is the probability of the \\textit{ON} inhibition.\n\nSuppose the number of sources \\textit{X${}_{i}$} (s) is even. Let \\textit{O} be the set of \\textit{X${}_{i}$} (s) whose indices are odd. Let \\textit{O}${}_{1}$ and \\textit{O}${}_{2}$ be subsets of \\textit{O}, in which all \\textit{X${}_{i}$} (s) are 1 and 0, respectively. Let \\textit{E} be the set of \\textit{X${}_{i}$} (s) whose indices are even. Let \\textit{E}${}_{1}$ and \\textit{E}${}_{2}$ be subsets of \\textit{E}, in which all \\textit{X${}_{i}$} (s) are 1 and 0, respectively.\n\\begin{definition}\n$\\left\\{ \\begin{array}{l}\nE=\\left\\{2,4,6,\\dots ,n\\right\\} \\\\ \nE_1\\subseteq E \\\\ \nE_2\\subseteq E \\\\ \nE_1\\cup E_2=E \\\\ \nE_1\\cap E_2=\\emptyset  \\\\ \nX_i=1,\\forall i\\in E_1 \\\\ \nX_i=0,\\forall i\\in E_2 \\end{array}\n\\right.\\ \\mathrm{and}\\ \\left\\{ \\begin{array}{l}\nO=\\left\\{1,3,5,\\dots ,n-1\\right\\} \\\\ \nO_1\\subseteq O \\\\ \nO_2\\subseteq O \\\\ \nO_1\\cup O_2=O \\\\ \nO_1\\cap O_2=\\emptyset  \\\\ \nX_i=1,\\forall i\\in O_1 \\\\ \nX_i=0,\\forall i\\in O_2 \\end{array}\n\\right.$\n\\label{definition:xor-subsets}\n\\end{definition}\nThus, \\textit{O}${}_{1}$ and \\textit{E}${}_{1}$ are subsets of \\textit{K}. Sources \\textit{X${}_{i}$} (s) and target \\textit{Y} follow the \\textbf{XOR-gate} if one of the two XOR-gate conditions specified by formula~\\ref{xor-conditions} is satisfied.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(Y\\mathrm{=1}\\mathrel{\\left|\\vphantom{Y\\mathrm{=1} \\left\\{ \\begin{array}{l}\nA_i=ON\\ \\mathrm{for}\\ i\\in O\\  \\\\ \nA_i=OFF\\ \\mathrm{for}\\ i\\notin O\\  \\end{array}\n\\right\\}}\\right.\\kern-\\nulldelimiterspace}\\left\\{ \\begin{array}{l}\nA_i=ON\\ \\mathrm{for}\\ i\\in O\\  \\\\ \nA_i=OFF\\ \\mathrm{for}\\ i\\notin O\\  \\end{array}\n\\right\\}\\right)\\\\\n&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1=ON,A_2=OFF,\\dots ,A_{n-1}=ON,A_n=OFF}\\right.\\kern-\\nulldelimiterspace}A_1=ON,A_2=OFF,\\dots ,A_{n-1}=ON,A_n=OFF\\right)=1\\\\\n\\\\\n&P\\left(Y\\mathrm{=1}\\mathrel{\\left|\\vphantom{Y\\mathrm{=1} \\left\\{ \\begin{array}{l}\nA_i=ON\\ \\mathrm{for}\\ i\\in E\\  \\\\ \nA_i=OFF\\ \\mathrm{for}\\ i\\notin E\\  \\end{array}\n\\right\\}}\\right.\\kern-\\nulldelimiterspace}\\left\\{ \\begin{array}{l}\nA_i=ON\\ \\mathrm{for}\\ i\\in E\\  \\\\ \nA_i=OFF\\ \\mathrm{for}\\ i\\notin E\\  \\end{array}\n\\right\\}\\right)\\\\\n&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1=OFF,A_2=ON,\\dots ,A_{n-1}=OFF,A_n=ON}\\right.\\kern-\\nulldelimiterspace}A_1=OFF,A_2=ON,\\dots ,A_{n-1}=OFF,A_n=ON\\right)=1\n\\end{split}\n\\label{xor-conditions}\n\\end{equation} \n%Formula 3.12. 
Two XOR-gate conditions\n\nFrom formula~\\ref{formula:x-gate-prob}, we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\n\\end{align*}\nIf both XOR-gate conditions are not satisfied then,\n\\[P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=0\\] \nIf the first XOR-gate condition is satisfied, we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&\\begin{aligned}\n=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1=ON,A_2=OFF,\\dots ,A_{n-1}=ON,A_n=OFF}\\right.\\kern-\\nulldelimiterspace}A_1=ON,A_2=OFF,\\dots ,A_{n-1}=ON,A_n=OFF\\right)\\\\\n*\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\n\\end{aligned}\\\\\n&=\\left(\\prod_{i\\in O}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\left(\\prod_{i\\in E}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\n\\end{align*}\nWe have\n\\begin{align*}\n&\\prod_{i\\in O}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\left(\\prod_{i\\in O_1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\right)*\\left(\\prod_{i\\in O_2}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\right)\\\\\n&=\\left(\\prod_{i\\in O_1}{p_i}\\right)*\\left(\\prod_{i\\in O_2}{0}\\right)=\\left\\{ \\begin{array}{r}\n\\prod_{i\\in O_1}{p_i}\\ \\mathrm{if}\\ O_2=\\emptyset  \\\\ \n0\\ \\mathrm{if}\\ O_2\\neq \\emptyset  \\end{array}\n\\right.\\\\\n&\\left(\\mathrm{Due\\ to\\ formula~\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\n\nWe also have\n\\begin{align*}\n&\\prod_{i\\in E}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\left(\\prod_{i\\in E_1}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\right)*\\left(\\prod_{i\\in E_2}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\right)\\\\\n&=\\left(\\prod_{i\\in E_1}{\\left(1-p_i\\right)}\\right)\\left(\\prod_{i\\in E_2}{1}\\right)=\\left\\{ \\begin{array}{r}\n\\prod_{i\\in E_1}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ E_1\\neq \\emptyset  \\\\ \n1\\ \\mathrm{if}\\ E_1=\\emptyset  \\end{array}\n\\right.\\\\\n&\\left(\\mathrm{Due\\ to\\ formula~\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\n\nGiven the first XOR-gate condition, it implies\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left(\\prod_{i\\in O}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\left(\\prod_{i\\in E}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF 
X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n\\left(\\prod_{i\\in O_1}{p_i}\\right)\\left(\\prod_{i\\in E_1}{\\left(1-p_i\\right)}\\right)\\ \\mathrm{if}\\ O_2=\\emptyset \\ \\mathrm{and}\\ E_1\\neq \\emptyset \\  \\\\ \n\\prod_{i\\in O_1}{p_i}\\ \\mathrm{if}\\ O_2=\\emptyset \\ \\mathrm{and}\\ E_1=\\emptyset  \\\\ \n0\\ \\mathrm{if}\\ O_2\\neq \\emptyset  \\end{array}\n\\right.\n\\end{align*}\nSimilarly, given the second XOR-gate condition, we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left(\\prod_{i\\in E}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\left(\\prod_{i\\in O}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n\\left(\\prod_{i\\in E_1}{p_i}\\right)\\left(\\prod_{i\\in O_1}{\\left(1-p_i\\right)}\\right)\\ \\mathrm{if}\\ E_2=\\emptyset \\ \\mathrm{and}\\ O_1\\neq \\emptyset \\  \\\\ \n\\prod_{i\\in E_1}{p_i}\\ \\mathrm{if}\\ E_2=\\emptyset \\ \\mathrm{and}\\ O_1=\\emptyset  \\\\ \n0\\ \\mathrm{if}\\ E_2\\neq \\emptyset  \\end{array}\n\\right.\n\\end{align*}\nIf one of XOR-gate conditions is satisfied then,\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left(\\prod_{i\\in O}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\left(\\prod_{i\\in E}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)+\\\\\n&\\left(\\prod_{i\\in E}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\\left(\\prod_{i\\in O}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\right)\n\\end{align*}\nThis implies formula~\\ref{formula:xor-gate-inference} to specify XOR-gate inference.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(X_1\\otimes X_2\\otimes \\dots \\otimes X_n\\right)=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left\\{\\begin{aligned}\n\\mathrm{If}&\\ O_2=\\emptyset\\ \\mathrm{and}\\ E_2=\\emptyset\\ \\mathrm{then}\\\\\n&\\left(\\prod_{i\\in O_1}{p_i}\\right)\\left(\\prod_{i\\in E_1}{\\left(1-p_i\\right)}\\right)\\mathrm{+}\\left(\\prod_{i\\in E_1}{p_i}\\right)\\left(\\prod_{i\\in O_1}{\\left(1-p_i\\right)}\\right)\\\\\n\\mathrm{If}&\\ O_2=\\emptyset\\ \\mathrm{and}\\ E_1\\neq \\emptyset\\ \\mathrm{and}\\ E_2\\neq \\emptyset\\ \\mathrm{then}\\\\\n&\\left(\\prod_{i\\in O_1}{p_i}\\right)\\left(\\prod_{i\\in E_1}{\\left(1-p_i\\right)}\\right)\\\\\n\\mathrm{If}&\\ O_2=\\emptyset\\ \\mathrm{and}\\ E_1=\\emptyset\\ \\mathrm{then}\\ \\prod_{i\\in O_1}{p_i}\\\\\n\\mathrm{If}&\\ E_2=\\emptyset\\ \\mathrm{and}\\ O_1\\neq \\emptyset \\ \\mathrm{and}\\ O_2\\neq \\emptyset\\ \\mathrm{then}\\\\\n&\\left(\\prod_{i\\in E_1}{p_i}\\right)\\left(\\prod_{i\\in O_1}{\\left(1-p_i\\right)}\\right)\\\\\n\\mathrm{If}&\\ E_2=\\emptyset\\ \\mathrm{and}\\ O_1=\\emptyset\\ \\mathrm{then}\\ \\prod_{i\\in E_1}{p_i}\\\\\n\\mathrm{If}&\\ O_2\\neq \\emptyset\\ \\mathrm{and}\\ E_2\\neq \\emptyset\\ \\mathrm{then}\\ 0\\\\\n\\mathrm{If}&\\ n<2\\ \\mathrm{or}\\ n\\ \\mathrm{is\\ 
odd}\\ \\mathrm{then}\\ 0\n\\end{aligned}\\right.\n\\end{split}\n\\label{formula:xor-gate-inference}\n\\end{equation} \n%Formula 3.13. XOR-gate inference\nWhere $O_1$, $O_2$, $E_1$, and $E_2$ are specified in definition~\\ref{definition:xor-subsets}.\n\nGiven that \\textit{n} $\\mathrm{\\ge}$ 2 and \\textit{n} is even, formula~\\ref{formula:xor-gate-inference} varies according to six cases whose arrangement counters are listed as follows:\n\\begin{center}\n\\begin{tabular}{l|l}\n$O_2=\\emptyset \\ \\mathrm{and}\\ E_2=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$O_2=\\emptyset \\ \\mathrm{and}\\ E_1\\neq \\emptyset \\ \\mathrm{and}\\ E_2\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=2^{\\frac{n}{2}}-2\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=2^{\\frac{n}{2}}-2\n  \\end{aligned}$\\\\\n\\hline\n$O_2=\\emptyset \\ \\mathrm{and}\\ E_1=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$E_2=\\emptyset \\ \\mathrm{and}\\ O_1\\neq \\emptyset \\ \\mathrm{and}\\ O_2\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=2^{\\frac{n}{2}-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=2^{\\frac{n}{2}-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=2^{\\frac{n}{2}}-2\n  \\end{aligned}$\\\\\n\\hline\n$E_2=\\emptyset \\ \\mathrm{and}\\ O_1=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$O_2\\neq \\emptyset \\ \\mathrm{and}\\ E_2\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=\\left(2^{\\frac{n}{2}-1}-1\\right)\\left(2^{\\frac{n}{2}}-1\\right)\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=2^{\\frac{n}{2}-1}\\left(2^{\\frac{n}{2}}-1\\right)\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&={\\left(2^{\\frac{n}{2}}-1\\right)}^2\n  \\end{aligned}$\n\\end{tabular}\n\\end{center}\nSuppose the number of sources \\textit{X${}_{i}$} (s) is even. According to \\textbf{XNOR-gate} inference (Wikipedia, 2016), the output is on if all inputs have the same value, either all 1 or all 0. Sources \\textit{X${}_{i}$} (s) and target \\textit{Y} follow XNOR-gate if one of two XNOR-gate conditions specified by formula~\\ref{xnor-gate-conditions} is satisfied.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_i=ON,\\forall i}\\right.\\kern-\\nulldelimiterspace}A_i=ON,\\forall i\\right)=1\\\\\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_i=OFF,\\forall i}\\right.\\kern-\\nulldelimiterspace}A_i=OFF,\\forall i\\right)=1\n\\end{split}\n\\label{xnor-gate-conditions}\n\\end{equation}\n%Formula 3.14. 
Two XNOR-gate conditions\n\nFrom formula~\\ref{formula:x-gate-prob}, we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\n\\end{align*}\nIf both XNOR-gate conditions are not satisfied then,\n\\[P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=0\\] \nIf \\textit{A${}_{i}$}=\\textit{ON} for all \\textit{i}, we have:\n\\begin{align*}\nP\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_i=ON,\\forall i}\\right.\\kern-\\nulldelimiterspace}A_i=ON,\\forall i\\right)\\prod^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\prod^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ }L=\\emptyset  \\\\ \n0\\ \\mathrm{if\\ }L\\neq \\emptyset  \\end{array}\n\\right.\n\\end{align*}\n(Please see similar proof in AND-gate inference)\n\nIf \\textit{A${}_{i}$}=\\textit{OFF} for all \\textit{i}, we have:\n\\begin{align*}\nP\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)&=\\prod^n_{i=1}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\left\\{ \\begin{array}{r}\n\\prod_{i\\in K}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ K\\neq \\emptyset  \\\\ \n1\\ \\ \\mathrm{if}\\ K=\\emptyset  \\end{array}\n\\right.\n\\end{align*}\n(Please see similar proof in OR-gate inference)\n\nIf one of XNOR-gate conditions is satisfied then,\n\\[P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\prod^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}+\\prod^n_{i=1}{P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\] \nThis implies formula~\\ref{xnor-gate-inference} to specify XNOR-gate inference.\n\n\\begin{equation}\n\\begin{split}\nP\\left(\\mathrm{not}\\left(X_1\\otimes X_2\\otimes \\dots \\otimes X_n\\right)\\right)&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left\\{ \\begin{array}{r}\n\\prod^n_{i=1}{p_i}+\\prod^n_{i=1}{\\left(1-p_i\\right)}\\ \\mathrm{if\\ }L=\\emptyset  \\\\ \n\\prod_{i\\in K}{\\left(1-p_i\\right)}\\ \\mathrm{if}\\ L\\neq \\emptyset \\ \\mathrm{and}\\ K\\neq \\emptyset  \\\\ \n1\\ \\mathrm{if}\\ L\\neq \\emptyset \\ \\mathrm{and}\\ K=\\emptyset  \\end{array}\n\\right.\n\\end{split}\n\\label{xnor-gate-inference}\n\\end{equation}\n%Formula 3.15. XNOR-gate inference\n\nWhere \\textit{K} and \\textit{L} are the sets of \\textit{X${}_{i}$} (s) whose values are 1 and 0, respectively. 
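\n\nTo make formula~\\ref{xnor-gate-inference} concrete, the following is a minimal Python sketch (our illustration, not part of the original derivation) that evaluates the XNOR-gate probability from the weights $p_i$ and a configuration of the sources; the function name \\texttt{xnor\\_gate\\_prob} is ours.\n\\begin{verbatim}\nfrom math import prod\n\ndef xnor_gate_prob(p, x):\n    # P(not(X1 (x) X2 (x) ... (x) Xn)) per the three cases of formula 3.15.\n    K = [i for i, v in enumerate(x) if v == 1]  # sources whose value is 1\n    L = [i for i, v in enumerate(x) if v == 0]  # sources whose value is 0\n    if not L:                       # L empty: all sources are 1\n        return prod(p) + prod(1 - pi for pi in p)\n    if K:                           # L nonempty and K nonempty\n        return prod(1 - p[i] for i in K)\n    return 1.0                      # L nonempty and K empty\n\n# Example with n = 2 and weights p = (0.9, 0.8):\nprint(xnor_gate_prob([0.9, 0.8], [1, 1]))  # 0.9*0.8 + 0.1*0.2 = 0.74\nprint(xnor_gate_prob([0.9, 0.8], [1, 0]))  # 1 - 0.9 = 0.1\nprint(xnor_gate_prob([0.9, 0.8], [0, 0]))  # 1.0\n\\end{verbatim}\n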
Formula~\\ref{xnor-gate-inference} varies according to three cases whose arrangement counters are listed as follows:\n\\begin{center}\n\\begin{tabular}{l|l}\n$L=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$L\\neq \\emptyset \\ \\mathrm{and}\\ K\\neq \\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=2^{n-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=2^{n-1}-1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=2^n-2\n  \\end{aligned}$\\\\\n\\hline\n$L\\neq \\emptyset \\ \\mathrm{and}\\ K=\\emptyset$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\end{tabular}\n\\end{center}\nLet \\textit{U} be the set of indices \\textit{i} such that \\textit{A${}_{i}$}=\\textit{ON}, and let \\textit{$\\alpha$} $\\mathrm{\\ge}$ 0 and \\textit{$\\beta$} $\\mathrm{\\ge}$ 0 be predefined numbers. The U-gate inference is defined based on \\textit{$\\alpha$}, \\textit{$\\beta$}, and the cardinality of \\textit{U}. Formula~\\ref{u-gate-conditions} specifies four common U-gate conditions.\n\n\\begin{equation}\n\\begin{tabular}{l|p{3.4in}}\n$|U|=\\alpha$ & \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 1 if there are exactly $\\alpha$ variables $A_i$ = \\textit{ON} (s). Otherwise, \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 0. \\\\ \\hline \n$|U|\\ge\\alpha$ & \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 1 if there are at least $\\alpha$ variables $A_i$ = \\textit{ON} (s). Otherwise, \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 0. \\\\ \\hline \n$|U|\\le\\beta$ & \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 1 if there are at most $\\beta$ variables $A_i$ = \\textit{ON} (s). Otherwise, \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 0. \\\\ \\hline \n$\\alpha\\le|U|\\le\\beta$ & \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 1 if the number of $A_i$ = \\textit{ON} (s) is from $\\alpha$ to $\\beta$. Otherwise, \\textit{P}(\\textit{Y}=1 \\textbar $A_1$, $A_2$,\\ldots, $A_n$) = 0. \n\\end{tabular}\n\\label{u-gate-conditions}\n\\end{equation}\n%Formula 3.16. U-gate conditions\n \nNote that the U-gate condition on {\\textbar}\\textit{U}{\\textbar} can be arbitrary; it depends only on the \\textit{A${}_{i}$} (s) (\\textit{ON} or \\textit{OFF}) and on the way the \\textit{A${}_{i}$} (s) are combined. For example, AND-gate and OR-gate are specific cases of U-gate with {\\textbar}\\textit{U}{\\textbar}=\\textit{n} and {\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\ge}$1, respectively. XOR-gate and XNOR-gate are also specific cases of U-gate with specific conditions on the \\textit{A${}_{i}$} (s). However, it must be assured that at least one combination of \\textit{A${}_{i}$} (s) satisfies the predefined U-gate condition, so that the U-gate probability is not identically 0. In this research, U-gate is the most general nonlinear gate, whose probability contains products of weights (see formula~\\ref{u-gate-inference}); a brute-force sketch of these conditions is given below. 
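\n\nThe following Python sketch (ours) evaluates a U-gate by brute force, assuming only the accountable probabilities of formula~\\ref{formula:accountable-probs}, namely \\textit{P}(\\textit{A${}_{i}$}=\\textit{ON}{\\textbar}\\textit{X${}_{i}$}=1)=$p_i$ and \\textit{P}(\\textit{A${}_{i}$}=\\textit{ON}{\\textbar}\\textit{X${}_{i}$}=0)=0; the condition on {\\textbar}\\textit{U}{\\textbar} is passed as a predicate, so the four conditions above (and AND-gate, OR-gate) are special cases.\n\\begin{verbatim}\nfrom itertools import product\n\ndef u_gate_prob(p, x, condition):\n    # Sum P(Y=1|A) * prod P(Ai|Xi) over all 2^n configurations of A,\n    # where P(Y=1|A) is 1 iff condition(|U|) holds (formula 3.16).\n    n = len(p)\n    total = 0.0\n    for a in product([0, 1], repeat=n):     # a[i] = 1 means Ai = ON\n        if not condition(sum(a)):           # |U| = number of ON variables\n            continue\n        term = 1.0\n        for i in range(n):\n            p_on = p[i] if x[i] == 1 else 0.0\n            term *= p_on if a[i] == 1 else 1.0 - p_on\n        total += term\n    return total\n\np, x = [0.9, 0.8, 0.7], [1, 1, 0]\nprint(u_gate_prob(p, x, lambda u: u == 3))  # AND-gate: 0.0 since X3 = 0\nprint(u_gate_prob(p, x, lambda u: u >= 1))  # OR-gate: 1 - 0.1*0.2 = 0.98\n\\end{verbatim}\n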
Later on, we will study a so-called SIGMA-gate that contains only a linear combination of weights (a sum of weights, see formula~\\ref{sigma-gate-inference}). In short, each X-gate is a pattern owning a particular X-gate inference, namely the X-gate probability \\textit{P}(\\textit{X}${}_{1}$x\\textit{X}${}_{2}$x{\\dots}x\\textit{X${}_{n}$}). Each X-gate inference is based on particular X-gate condition (s) relevant only to the variables \\textit{A${}_{i}$} (s).\n\nFrom formula~\\ref{formula:x-gate-prob}, we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{A_1,A_2,\\dots ,A_n}{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\n\\end{align*}\nLet $\\mathcal{U}$ be the set of all possible \\textit{U} (s); we have:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\sum_{U\\in \\mathcal{U}}{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 A_1,A_2,\\dots ,A_n}\\right.\\kern-\\nulldelimiterspace}A_1,A_2,\\dots ,A_n\\right)\\prod^n_{i=1}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}}\\\\ \n&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\prod_{j\\notin U}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j}\\right.\\kern-\\nulldelimiterspace}X_j\\right)}}\n\\end{align*}\nIf $X_i=0$ for some $i\\in U$, the factor $P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=0$ annihilates the whole term of that \\textit{U}; in particular, if $X_i=0,\\forall i\\in U$ then,\n\\[P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{0}\\prod_{j\\notin U}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j}\\right.\\kern-\\nulldelimiterspace}X_j\\right)}}=0\\] \nThis implies that only the sets \\textit{U} (s) which are subsets of \\textit{K} contribute to the sum. 
The U-gate probability is rewritten as follows:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\prod_{j\\notin U}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j}\\right.\\kern-\\nulldelimiterspace}X_j\\right)}}\\\\\n&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{p_i}\\prod_{j\\notin U}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j}\\right.\\kern-\\nulldelimiterspace}X_j\\right)}}\\\\\n&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{p_i}\\prod_{j\\in K\\backslash U}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j=1}\\right.\\kern-\\nulldelimiterspace}X_j=1\\right)}\\prod_{j\\notin K}{P\\left(A_j=OFF\\mathrel{\\left|\\vphantom{A_j=OFF X_j=0}\\right.\\kern-\\nulldelimiterspace}X_j=0\\right)}}\\\\\n&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{p_i}\\prod_{j\\in K\\backslash U}{\\left(1-p_j\\right)}\\prod_{j\\notin K}{1}}=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{p_i}\\prod_{j\\in K\\backslash U}{\\left(1-p_j\\right)}}\\\\\n&\\left(\\mathrm{Due\\ to\\ formula~\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\nAs a convention,\n\\begin{align*}\n\\prod_{i\\in U}{p_i}&=1\\ \\mathrm{if}\\ \\left|U\\right|=0\\\\\n\\prod_{j\\in K\\backslash U}{\\left(1-p_j\\right)}&=1\\ \\mathrm{if}\\ \\left|U\\right|=\\left|K\\right|\n\\end{align*}\nLet \\textit{P${}_{U}$} be the U-gate probability, we define:\n\\begin{align*}\nS_U&=\\sum_{U\\in \\mathcal{U}}{\\prod_{i\\in U}{p_i}\\prod_{j\\in K\\backslash U}{\\left(1-p_j\\right)}}\\\\\nP_U&=P\\left(X_1\\uplus X_2\\uplus \\dots \\uplus X_n\\right)=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\n\\end{align*}\nFormula~\\ref{u-gate-inference} specifies U-gate inference and cardinality of $\\mathcal{U}$ where $\\mathcal{U}$ is the set of subsets (\\textit{U}) of \\textit{K}.\n\n\\begin{equation}\n\\begin{tabular}{p{0.5in}|p{2.5in}}\n{\\textbar}\\textit{U}{\\textbar}=0 & $P_U=\\left\\{ \\begin{array}{r}\n\\prod^n_{j=1}{\\left(1-p_j\\right)}\\ \\mathrm{if}\\ \\left|K\\right|>0 \\\\ \n1\\ \\ \\mathrm{if}\\ \\left|K\\right|=0 \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=1$ \\\\ \\hline \n{\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\ge}$0 & $P_U=\\left\\{ \\begin{array}{r}\nS_U\\mathrm{\\ if}\\ \\left|K\\right|>0 \\\\ \n1\\ \\mathrm{if}\\ \\left|K\\right|=0 \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=2^{\\left|K\\right|}$\\newline The case {\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\ge}$0 is the same to the case {\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\le}$\\textit{n} \\\\ \\hline \n{\\textbar}\\textit{U}{\\textbar}=\\textit{n} & $P_U=\\left\\{ \\begin{array}{r}\n\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ }\\left|K\\right|=n \\\\ \n0\\ \\mathrm{if\\ }\\left|K\\right|<n \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=\\left\\{ \\begin{array}{r}\n1\\ \\mathrm{if\\ }\\left|K\\right|=n \\\\ \n0\\ \\mathrm{if\\ }\\left|K\\right|<n \\end{array}\n\\right.$ \\\\ \\hline \n{\\textbar}\\textit{U}{\\textbar}=\\textit{$\\alpha$}\\newline 0$<$\\textit{$\\alpha$}$<$\\textit{n} & $P_U=\\left\\{ \\begin{array}{r}\nS_U\\mathrm{\\ if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$\\newline 
$\\left|\\mathcal{U}\\right|=\\left\\{ \\begin{array}{r}\n\\left(\\genfrac{}{}{0pt}{}{\\left|K\\right|}{\\alpha }\\right)\\ \\mathrm{if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$ \\\\ \\hline \n{\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\ge}$\\textit{$\\alpha$}\\newline 0$<$\\textit{$\\alpha$}$<$\\textit{n} & $P_U=\\left\\{ \\begin{array}{r}\nS_U\\mathrm{\\ if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=\\left\\{ \\begin{array}{r}\n\\sum^{\\left|K\\right|}_{j=\\alpha }{\\left(\\genfrac{}{}{0pt}{}{\\left|K\\right|}{j}\\right)}\\ \\mathrm{if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$ \\\\ \\hline \n{\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\le}$\\textit{$\\beta$}\\newline 0$<$\\textit{$\\beta$}$<$\\textit{n} & $P_U=\\left\\{ \\begin{array}{r}\nS_U\\mathrm{\\ if}\\ \\left|K\\right|>0 \\\\ \n1\\ \\mathrm{if}\\ \\left|K\\right|=0 \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=\\left\\{ \\begin{array}{r}\n\\sum^{\\mathrm{min}\\left(\\beta ,\\left|K\\right|\\right)}_{j=0}{\\left(\\genfrac{}{}{0pt}{}{\\left|K\\right|}{j}\\right)}\\ \\mathrm{if}\\ \\left|K\\right|>0 \\\\ \n1\\ \\mathrm{if}\\ \\left|K\\right|=0 \\end{array}\n\\right.$ \\\\ \\hline \n\\textit{$\\alpha$}$\\mathrm{\\le}${\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\le}$\\textit{$\\beta$}\\newline 0$<$\\textit{$\\alpha$}$<$\\textit{n}\\newline 0$<$\\textit{$\\beta$}$<$\\textit{n} & $P_U=\\left\\{ \\begin{array}{r}\nS_U\\mathrm{\\ if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$\\newline $\\left|\\mathcal{U}\\right|=\\left\\{ \\begin{array}{r}\n\\sum^{\\mathrm{min}\\left(\\beta ,\\left|K\\right|\\right)}_{j=\\alpha }{\\left(\\genfrac{}{}{0pt}{}{\\left|K\\right|}{j}\\right)}\\ \\mathrm{if}\\ \\left|K\\right|\\ge \\alpha  \\\\ \n0\\ \\mathrm{if}\\ \\left|K\\right|<\\alpha  \\end{array}\n\\right.$\n\\end{tabular}\n\\label{u-gate-inference}\n\\end{equation}\n%Formula 3.17. 
U-gate inference\n\n\\noindent Note that the notation $\\left(\\genfrac{}{}{0pt}{}{n}{j}\\right)$ denotes the number of combinations of \\textit{j} elements taken from \\textit{n} elements.\n\\[\\left(\\genfrac{}{}{0pt}{}{n}{j}\\right)=\\frac{n!}{j!\\left(n-j\\right)!}\\] \nArrangement counters relevant to U-gate inference and the set \\textit{K} are listed as follows:\n\\begin{center}\n\\begin{tabular}{l|l}\n$\\left|K\\right|=0$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$\\left|K\\right|=1$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=1\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=0\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=1\n  \\end{aligned}$\\\\\n\\hline\n$\\left|K\\right|=\\alpha \\ \\mathrm{and}\\ \\alpha >0$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=\\left(\\genfrac{}{}{0pt}{}{n-1}{\\alpha -1}\\right)\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=\\left(\\genfrac{}{}{0pt}{}{n-1}{\\alpha }\\right)\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=\\left(\\genfrac{}{}{0pt}{}{n}{\\alpha }\\right)\n  \\end{aligned}$\\\\\n\\hline\n$\\left|K\\right|\\le \\alpha \\ \\mathrm{and}\\ \\alpha >0$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=\\sum^{\\alpha }_{j=1}{\\left(\\genfrac{}{}{0pt}{}{n-1}{j-1}\\right)}\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=\\sum^{\\alpha }_{j=0}{\\left(\\genfrac{}{}{0pt}{}{n-1}{j}\\right)}\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=\\sum^{\\alpha }_{j=0}{\\left(\\genfrac{}{}{0pt}{}{n}{j}\\right)}\n  \\end{aligned}$\\\\\n\\hline\n$\\left|K\\right|\\ge \\alpha \\ \\mathrm{and}\\ \\alpha >0$ &\n  $\\begin{aligned}\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=1\\right\\}\\right)&=\\sum^n_{j=\\alpha }{\\left(\\genfrac{}{}{0pt}{}{n-1}{j-1}\\right)}\\\\\n  c\\left(\\mathrm{\\Omega }:\\left\\{X_i=0\\right\\}\\right)&=\\sum^{n-1}_{j=\\alpha }{\\left(\\genfrac{}{}{0pt}{}{n-1}{j}\\right)}\\\\\n  c\\left(\\mathrm{\\Omega }\\right)&=\\sum^n_{j=\\alpha }{\\left(\\genfrac{}{}{0pt}{}{n}{j}\\right)}\n  \\end{aligned}$\n\\end{tabular}\n\\end{center}\nThe \\textbf{SIGMA-gate} inference \\cite{nguyen:sigma} represents aggregation relationship satisfying SIGMA-gate condition specified by formula~\\ref{sigma-gate-condition}.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(Y\\right)=P\\left(\\sum^n_{i\\mathrm{=1}}{A_i}\\right)\\\\\n&\\mathrm{Where}\\ \\mathrm{the}\\ \\mathrm{set}\\ \\mathrm{of}\\ A_i\\ \\mathrm{(s)}\\ \\mathrm{is}\\ \\mathrm{complete}\\ \\mathrm{and}\\ \\mathrm{mutually}\\ \\mathrm{exclusive.}\\\\\n&\\sum^n_{i=1}{w_i}=1\\\\\n&A_i\\cap A_j=\\emptyset ,\\forall i\\neq j\n\\end{split}\n\\label{sigma-gate-condition}\n\\end{equation}\n%Formula 3.18. 
SIGMA-gate condition\n\nThe sigma sum $\\sum^n_{i\\mathrm{=1}}{A_i}$ indicates that \\textit{Y} is exclusive union of \\textit{A${}_{i}$} (s) and here, it does not express arithmetical additions.\n\\[Y=\\sum^n_{i\\mathrm{=1}}{A_i}=\\bigcup^n_{i=1}{A_i}\\] \nThis implies:\n\\[P\\left(Y\\right)=P\\left(\\sum^n_{i\\mathrm{=1}}{A_i}\\right)=P\\left(\\bigcup^n_{i=1}{A_i}\\right)=\\sum^n_{i\\mathrm{=1}}{P\\left(A_i\\right)}\\] \nThe sigma sum $\\sum^n_{i\\mathrm{=1}}{P\\left(A_i\\right)}$ now expresses arithmetical additions of probabilities \\textit{P}(\\textit{A${}_{i}$}).\n\nSIGMA-gate inference requires the set of \\textit{A${}_{i}$} (s) is complete and mutually exclusive, which means that the set of \\textit{X${}_{i}$} (s) is complete and mutually exclusive too. The SIGMA-gate probability is \\cite{nguyen:sigma}:\n\\begin{align*}\n&P\\left(Y\\mathrel{\\left|\\vphantom{Y X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n}\\right.\\kern-\\nulldelimiterspace}X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n\\right)=P\\left(\\sum^n_{i\\mathrm{=1}}{A_i}\\mathrel{\\left|\\vphantom{\\sum^n_{i\\mathrm{=1}}{A_i} X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n}\\right.\\kern-\\nulldelimiterspace}X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n\\right)\\mathrm{\\ }\\\\\n&\\left(\\mathrm{due\\ to\\ SIGMA-gate\\ condition}\\right)\\\\\n&=\\sum^n_{i\\mathrm{=1}}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n}\\right.\\kern-\\nulldelimiterspace}X_{\\mathrm{1}},X_{\\mathrm{2}}\\mathrm{,\\dots ,}X_n\\right)}\\mathrm{\\ }\\\\\n&\\left(\\mathrm{because\\ }A_i\\mathrm{\\ }\\left(\\mathrm{s}\\right)\\mathrm{\\ are\\ mutually\\ exclusive}\\right)\\\\\n&=\\sum^n_{i\\mathrm{=1}}{P\\left(A_i\\mathrel{\\left|\\vphantom{A_i X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&\\left(\\mathrm{because\\ }A_i\\mathrm{\\ is\\ only\\ dependent\\ on\\ }X_i\\right)\n\\end{align*}\nIt implies\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\sum^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\left(\\sum_{i\\in K}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}\\right)+\\left(\\sum_{i\\notin K}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\right)\\\\\n&=\\sum_{i\\in K}{w_i}+\\sum_{i\\notin K}{0}=\\sum_{i\\in K}{w_i}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{formula}\\ \\mathrm{\\ref{formula:accountable-probs}}\\right)\n\\end{align*}\n\nIn general, formula~\\ref{sigma-gate-inference} specifies the theorem of SIGMA-gate inference \\cite{nguyen:sigma}. 
The basis of this theorem was mentioned in \\cite[pp.~292-295]{millan:bayesiandiagnostic}.\n\n\\begin{equation}\n\\begin{split}\nP\\left(X_1+X_2+\\dots +X_n\\right)&=P\\left(\\sum^n_{i=1}{X_i}\\right)\\\\\n&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\sum_{i\\in K}{w_i}\\\\\nP\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)&=1-\\sum_{i\\in K}{w_i}=\\sum_{i\\in L}{w_i}\n\\end{split}\n\\label{sigma-gate-inference}\n\\end{equation}\n\\begin{align*}\n&\\mathrm{Where}\\ \\mathrm{the}\\ \\mathrm{set}\\ \\mathrm{of}\\ X_i\\ \\left(\\mathrm{s}\\right)\\ \\mathrm{is}\\ \\mathrm{complete}\\ \\mathrm{and}\\ \\mathrm{mutually}\\ \\mathrm{exclusive.}\\\\\n&\\sum^n_{i=1}{w_i}=1\\\\\n&X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\n\\end{align*}\n%Formula 3.19. SIGMA-gate inference\n\n\\noindent The arrangement counters of SIGMA-gate inference are\n\\begin{align*}\n&c\\left(\\Omega:\\{X_i=1\\}\\right)=c\\left(\\Omega:\\{X_i=0\\}\\right)=2^{n-1}\\\\\n&c\\left(\\Omega\\right)=2^n\n\\end{align*}\n\nFormula~\\ref{formula:accountable-probs} specifies the ``clockwise'' strength of relationship between \\textit{X${}_{i}$} and \\textit{Y}. Event \\textit{X${}_{i}$}=1 causes event \\textit{A${}_{i}$}=\\textit{ON} with ``clockwise'' weight \\textit{w${}_{i}$}. This raises the question: ``given \\textit{X${}_{i}$}=0, how likely is the event \\textit{A${}_{i}$}=\\textit{OFF}?'' In order to solve this problem, I define a so-called ``counterclockwise'' strength of relationship between \\textit{X${}_{i}$} and \\textit{Y} denoted \\textit{$\\omega$${}_{i}$}. Event \\textit{X${}_{i}$}=0 causes event \\textit{A${}_{i}$}=\\textit{OFF} with ``counterclockwise'' weight \\textit{$\\omega$${}_{i}$}. In other words, each arc in the simple graph is associated with a clockwise weight \\textit{w${}_{i}$} and a counterclockwise weight \\textit{$\\omega$${}_{i}$}. Such a graph is called a \\textit{bi-weight simple graph}, shown in figure~\\ref{figure:biweight-simple-graph}.\n\n\\begin{figure}\n\\centering\n\\includegraphics{BiweightSimpleGraph.png}\n\\caption{Bi-weight simple graph}\n\\label{figure:biweight-simple-graph}\n\\end{figure}\n%Figure 3.3. Bi-weight simple graph\n\nWith the bi-weight simple graph, all X-gate inferences are extended into so-called X-gate bi-inferences. Derived from formula~\\ref{formula:accountable-probs}, formula~\\ref{formula:accountable-probs-biweight} specifies the conditional probability of accountable variables with regard to the bi-weight graph.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=p_i=w_i\\\\\n&P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=1-{\\rho }_i=1-{\\omega }_i\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=1-p_i=1-w_i\\\\\n&P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)={\\rho }_i={\\omega }_i\\\\\n\\end{split}\n\\label{formula:accountable-probs-biweight}\n\\end{equation}\n%Formula 3.20. Conditional probability of accountable variables with regard to bi-weight graph
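\n\nAs a small illustration (ours, under the notation of formula~\\ref{formula:accountable-probs-biweight}), the four conditional probabilities of an accountable variable in a bi-weight graph follow directly from the clockwise weight $w_i$ and the counterclockwise weight ${\\omega }_i$:\n\\begin{verbatim}\ndef accountable_cpt(w, omega):\n    # Formula 3.20: P(Ai=ON|Xi=1), P(Ai=ON|Xi=0),\n    #               P(Ai=OFF|Xi=1), P(Ai=OFF|Xi=0)\n    return w, 1.0 - omega, 1.0 - w, omega\n\non1, on0, off1, off0 = accountable_cpt(0.9, 0.85)\nprint(on0, off1)   # d = 1-omega = 0.15, delta = 1-w = 0.1 (defined next)\n\\end{verbatim}\n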
\nThe probabilities \\textit{P}(\\textit{A${}_{i}$}=\\textit{ON} {\\textbar} \\textit{X${}_{i}$}=0) and \\textit{P}(\\textit{A${}_{i}$}=\\textit{OFF} {\\textbar} \\textit{X${}_{i}$}=1) are called the clockwise adder \\textit{d${}_{i}$} and the counterclockwise adder \\textit{$\\delta$${}_{i}$}. Typically, \\textit{d${}_{i}$} and \\textit{$\\delta$${}_{i}$} are smaller than \\textit{w${}_{i}$} and \\textit{$\\omega$${}_{i}$}, respectively. When \\textit{d${}_{i}$}=0, the bi-weight graph becomes a normal simple graph.\n\\begin{align*}\n&d_i=P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=1-{\\rho }_i=1-{\\omega }_i\\\\\n&{\\delta }_i=P\\left(A_i=OFF\\mathrel{\\left|\\vphantom{A_i=OFF X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=1-p_i=1-w_i\n\\end{align*}\nThe total clockwise weight is defined as the sum of the clockwise weight and the clockwise adder; likewise, the total counterclockwise weight is defined as the sum of the counterclockwise weight and the counterclockwise adder. Formula~\\ref{formula:total-clockwise-counterclockwise} specifies such total weights \\textit{W${}_{i}$} and ${\\mathcal{W}}_i$. These weights are also called relationship powers.\n\n\\begin{equation}\n\\begin{split}\n&W_i=w_i+d_i\\\\\n&{\\mathcal{W}}_i={\\omega }_i+{\\delta }_i\\\\\n&\\mathrm{Where,}\\\\\n&d_i=1-{\\rho }_i=1-{\\omega }_i\\\\\n&{\\delta }_i=1-p_i=1-w_i\n\\end{split}\n\\label{formula:total-clockwise-counterclockwise}\n\\end{equation}\n%Formula 3.21. Total clockwise and counterclockwise weights\n\nGiven formula~\\ref{formula:total-clockwise-counterclockwise}, the set of all \\textit{A${}_{i}$} (s) is complete if and only if $\\sum^n_{i=1}{W_i}=1$.\n\nBy extending the aforementioned X-gate inferences, we get bi-inferences for AND-gate, OR-gate, NAND-gate, NOR-gate, XOR-gate, XNOR-gate, and U-gate as shown in table~\\ref{table:bi-inferences}.\n\n\\begin{table}\n\\centering\n\\caption{Bi-inferences for AND-gate, OR-gate, NAND-gate, NOR-gate, XOR-gate, XNOR-gate, and U-gate}\n\\begin{tabular}{|p{\\columnwidth}|} \\hline\n$\\begin{aligned}\n&P\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)=\\prod_{i\\in K}{p_i}\\prod_{i\\in L}{d_i}\\\\\n&P\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\right)=1-\\prod_{i\\in K}{{\\delta }_i}\\prod_{i\\in L}{{\\rho }_i}\\\\\n&P\\left(\\mathrm{not}\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)\\right)=1-\\prod_{i\\in L}{{\\rho }_i}\\prod_{i\\in K}{{\\delta }_i}\\\\\n&P\\left(\\mathrm{not}\\left(X_1\\oplus X_2\\oplus \\dots \\oplus X_n\\right)\\right)=\\prod_{i\\in L}{d_i}\\prod_{i\\in K}{p_i}\\\\\n&P\\left(X_1\\otimes X_2\\otimes \\dots \\otimes X_n\\right)=\\prod_{i\\in O_1}{p_i}\\prod_{i\\in O_2}{d_i}\\prod_{i\\in E_1}{{\\delta }_i}\\prod_{i\\in E_2}{{\\rho }_i}\\\\\n&+\\prod_{i\\in E_1}{p_i}\\prod_{i\\in E_2}{d_i}\\prod_{i\\in O_1}{{\\delta }_i}\\prod_{i\\in O_2}{{\\rho }_i}\\\\\n\\\\\n&P\\left(\\mathrm{not}\\left(X_1\\otimes X_2\\otimes \\dots \\otimes X_n\\right)\\right)=\\prod_{i\\in K}{p_i}\\prod_{i\\in L}{d_i}+\\prod_{i\\in K}{{\\delta }_i}\\prod_{i\\in L}{{\\rho }_i}\\\\\n&P\\left(X_1\\uplus X_2\\uplus \\dots \\uplus X_n\\right)=\\sum_{U\\in \\mathcal{U}}{\\left(\\prod_{i\\in U\\cap K}{p_i}\\prod_{i\\in U\\cap L}{d_i}\\right)\\left(\\prod_{i\\in \\overline{U}\\cap K}{{\\delta }_i}\\prod_{i\\in \\overline{U}\\cap L}{{\\rho }_i}\\right)}\n\\end{aligned}$\\\\\n\\\\There are four common conditions of \\textit{U}: {\\textbar}\\textit{U}{\\textbar}=\\textit{$\\alpha$}, 
{\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\ge}$\\textit{$\\alpha$}, {\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\le}$\\textit{$\\beta$}, and \\textit{$\\alpha$}$\\mathrm{\\le}${\\textbar}\\textit{U}{\\textbar}$\\mathrm{\\le}$\\textit{$\\beta$}. Note that $\\overline{U}$ is the complement of \\textit{U},\\\\\n$\\overline{U}=\\left\\{1,2,\\dots ,n\\right\\}\\backslash U$\\\\\nThe largest cardinality of $\\mathcal{U}$ is $\\left|\\mathcal{U}\\right|=2^n$\n\\\\ \\hline \n\\end{tabular}\n\\label{table:bi-inferences}\n\\end{table}\n%Table 3.3. Bi-inferences for AND-gate, OR-gate, NAND-gate, NOR-gate, XOR-gate, XNOR-gate, and U-gate\n\nThe largest cardinality of \\textit{K} (or \\textit{L}) is 2\\textit{${}^{n}$}${}^{-1}$ with a fixed \\textit{X${}_{i}$} and 2\\textit{${}^{n}$} without one. Thus, it is possible to calculate the arrangement counters. As a convention, a product of probabilities is 1 if its index set is empty.\n\\[\\prod_{i\\in I}{f_i}=1\\ \\mathrm{if}\\ I=\\emptyset \\] \nWith regard to SIGMA-gate bi-inference, the sum of all total clockwise weights must be 1 as follows:\n\\[\\sum^n_{i=1}{W_i}=\\sum^n_{i=1}{\\left(w_i+d_i\\right)}=\\sum^n_{i=1}{\\left(w_i+1-{\\omega }_i\\right)}=1\\] \nDerived from formula~\\ref{sigma-gate-inference}, the SIGMA-gate probability for the bi-weight graph is:\n\\begin{align*}\n&P\\left(X_1+X_2+\\dots +X_n\\right)=\\sum^n_{i=1}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}\\\\\n&=\\sum_{i\\in K}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)}+\\sum_{i\\in L}{P\\left(A_i=ON\\mathrel{\\left|\\vphantom{A_i=ON X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)}\\\\\n&=\\sum_{i\\in K}{w_i}+\\sum_{i\\in L}{d_i}\n\\end{align*}\nIn short, formula~\\ref{formula:sigma-gate-biinference} specifies SIGMA-gate bi-inference.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(X_1+X_2+\\dots +X_n\\right)=\\sum_{i\\in K}{w_i}+\\sum_{i\\in L}{d_i}\\\\\n&\\mathrm{Where}\\ \\mathrm{the}\\ \\mathrm{set}\\ \\mathrm{of}\\ X_i\\ \\mathrm{(s)}\\ \\mathrm{is}\\ \\mathrm{complete}\\ \\mathrm{and}\\ \\mathrm{mutually}\\ \\mathrm{exclusive.}\\\\\n&\\sum^n_{i=1}{W_i}=1\\\\\n&X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\n\\end{split}\n\\label{formula:sigma-gate-biinference}\n\\end{equation}\n%Formula 3.22. SIGMA-gate bi-inference
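\n\nA minimal Python sketch (ours) of SIGMA-gate bi-inference follows; it assumes the completeness condition $\\sum^n_{i=1}{W_i}=1$ of formula~\\ref{formula:sigma-gate-biinference} and simply adds clockwise weights over \\textit{K} and clockwise adders over \\textit{L}.\n\\begin{verbatim}\ndef sigma_gate_bi_prob(w, d, x):\n    # Formula 3.22: P(X1+...+Xn) = sum of w_i over K plus sum of d_i over L.\n    assert abs(sum(wi + di for wi, di in zip(w, d)) - 1.0) < 1e-9  # sum W_i = 1\n    return sum(wi if v == 1 else di for wi, di, v in zip(w, d, x))\n\n# Example with n = 2, total weights W1 = 0.6 and W2 = 0.4:\nw, d = [0.5, 0.3], [0.1, 0.1]\nprint(sigma_gate_bi_prob(w, d, [1, 0]))   # w1 + d2 = 0.6\n\\end{verbatim}\n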
\nThe next section will study the diagnostic relationship, which adheres to X-gate inference.\n\n\\section{Multi-hypothesis diagnostic relationship}\nGiven a simple graph shown in figure~\\ref{figure:simple-graph}, if we replace the target source \\textit{Y} by an evidence \\textit{D}, we get a so-called \\textit{multi-hypothesis diagnostic relationship} whose property adheres to X-gate inference. There may be other diagnostic relationships in which X-gate inference is not involved. However, this research focuses on X-gate inference, and so the multi-hypothesis diagnostic relationship is called the \\textit{X-gate diagnostic relationship}. Sources \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$} become hypotheses. As a convention, these hypotheses have a uniform prior distribution.\n\nAccording to the aforementioned X-gate network shown in figures \\ref{figure:simple-graph} and \\ref{figure:extended-x-gate-network}, the target variable must be binary whereas the evidence \\textit{D} can be numeric. It is impossible to establish the evidence \\textit{D} as a direct target variable. Thus, the solution of this problem is to add an augmented target binary variable \\textit{Y}; the evidence \\textit{D} is then connected directly to \\textit{Y}. In other words, the \\textit{X-gate diagnostic network} has \\textit{n} sources $\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}$\\}$, one augmented hypothesis \\textit{Y}, and one evidence \\textit{D}. As a convention, the X-gate diagnostic network is called \\textit{X-D network}. The CPT (s) of the entire network are determined based on the combination of diagnostic relationship and X-gate inference mentioned in previous sections. Figure~\\ref{figure:augmented-xd-network} depicts the augmented X-D network. Note that variables \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}, and \\textit{Y} are always binary.\n\n\\begin{figure}\n\\centering\n\\includegraphics{XDNetwork.png}\n\\caption{Augmented X-D network}\n\\label{figure:augmented-xd-network}\n\\end{figure}\n%Figure 4.1. Augmented X-D network\n\n\\noindent The joint probability of the augmented X-D network shown in figure~\\ref{figure:augmented-xd-network} is\n\\[P\\left(X_1,X_2,\\dots ,X_n,Y,D\\right)=P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)P\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\prod^n_{i=1}{P\\left(X_i\\right)}\\] \nThe joint probability of the X-D network is\n\\[P\\left(X_1,X_2,\\dots ,X_n,D\\right)=P\\left(D\\mathrel{\\left|\\vphantom{D X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\prod^n_{i=1}{P\\left(X_i\\right)}\\] \nBy applying the total probability rule to the X-D network, we have:\n\\begin{align*}\n&P\\left(X_1,X_2,\\dots ,X_n,D\\right)=\\frac{P\\left(D,X_1,X_2,\\dots ,X_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}\\prod^n_{i=1}{P\\left(X_i\\right)}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{Bayes'}\\ \\mathrm{rule}\\right)\\\\\n&=\\frac{\\sum_Y{P\\left(D,X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{D,X_1,X_2,\\dots ,X_n Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)P\\left(Y\\right)}}{P\\left(X_1,X_2,\\dots ,X_n\\right)}\\prod^n_{i=1}{P\\left(X_i\\right)}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{total}\\ \\mathrm{probability}\\ \\mathrm{rule}\\right)\\\\\n&=\\left(\\sum_Y{P\\left(D,X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{D,X_1,X_2,\\dots ,X_n Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)*\\frac{P\\left(Y\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}}\\right)*\\prod^n_{i=1}{P\\left(X_i\\right)}\\\\\n&=\\left(\\sum_Y{P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)*\\frac{P\\left(X_1,X_2,\\dots ,X_n\\mathrel{\\left|\\vphantom{X_1,X_2,\\dots ,X_n Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)P\\left(Y\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}}\\right)*\\prod^n_{i=1}{P\\left(X_i\\right)}\\\\\n&\\left(\\mathrm{Because}\\ D\\ \\mathrm{is}\\ \\mathrm{conditionally}\\ \\mathrm{independent}\\ \\mathrm{from}\\ \\mathrm{all}\\ X_i\\ \\mathrm{(s)}\\ \\mathrm{given}\\ Y\\right)\\\\\n&=\\left(\\sum_Y{P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)*\\frac{P\\left(Y,X_1,X_2,\\dots ,X_n\\right)}{P\\left(X_1,X_2,\\dots ,X_n\\right)}}\\right)*\\prod^n_{i=1}{P\\left(X_i\\right)}\\\\\n&=\\sum_Y{P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)P\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\prod^n_{i=1}{P\\left(X_i\\right)}}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{Bayes'}\\ \\mathrm{rule}\\right)\\\\\n&=\\sum_Y{P\\left(X_1,X_2,\\dots ,X_n,Y,D\\right)}\n\\end{align*}\nIt is implied that the augmented X-D network is equivalent to the X-D network with regard to variables \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$} and \\textit{D}. As a convention, the augmented X-D network is considered the same as the X-D network.\n\n
The simplest case of X-D network is the NOT-D network, which has one hypothesis \\textit{X}${}_{1}$ and one evidence \\textit{D} and is equipped with NOT-gate inference. The NOT-D network essentially represents the single diagnostic relationship. Inferred from formulas \\ref{formula:cpt-evidence-diagnostic} and \\ref{formula:not-gate-inference}, the conditional probability \\textit{P}(\\textit{D{\\textbar}X}${}_{1}$) and posterior probability \\textit{P}(\\textit{X}${}_{1}$\\textit{{\\textbar}D}) of the NOT-D network are:\n\\[P\\left(D\\mathrel{\\left|\\vphantom{D X_1}\\right.\\kern-\\nulldelimiterspace}X_1\\right)=\\left\\{ \\begin{array}{r}\n1-D\\ \\mathrm{if}\\ X_1=1 \\\\ \nD\\ \\mathrm{if}\\ X_1=0 \\end{array}\n\\right.\\] \n\n\\begin{align*}\n&P\\left(X_1\\mathrel{\\left|\\vphantom{X_1 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X_1}\\right.\\kern-\\nulldelimiterspace}X_1\\right)P\\left(X_1\\right)}{P\\left(X_1\\right)\\left(P\\left(D\\mathrel{\\left|\\vphantom{D X_1=0}\\right.\\kern-\\nulldelimiterspace}X_1=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X_1=1}\\right.\\kern-\\nulldelimiterspace}X_1=1\\right)\\right)}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{Bayes'}\\ \\mathrm{rule}\\ \\mathrm{and}\\ \\mathrm{uniform}\\ \\mathrm{distribution}\\ \\mathrm{of}\\ X_1\\right)\\\\\n&=\\frac{P\\left(D\\mathrel{\\left|\\vphantom{D X_1}\\right.\\kern-\\nulldelimiterspace}X_1\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X_1=0}\\right.\\kern-\\nulldelimiterspace}X_1=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X_1=1}\\right.\\kern-\\nulldelimiterspace}X_1=1\\right)}=1*P\\left(D\\mathrel{\\left|\\vphantom{D X_1}\\right.\\kern-\\nulldelimiterspace}X_1\\right)\\\\\n&\\left(\\mathrm{due\\ to}\\ P\\left(D\\mathrel{\\left|\\vphantom{D X_1=0}\\right.\\kern-\\nulldelimiterspace}X_1=0\\right)+P\\left(D\\mathrel{\\left|\\vphantom{D X_1=1}\\right.\\kern-\\nulldelimiterspace}X_1=1\\right)=1\\right)\n\\end{align*}\nIt implies that the NOT-D network satisfies the diagnostic condition.
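\n\nA quick numeric check (ours) of the NOT-D conclusion above, assuming the evidence \\textit{D} is already normalized into $[0,1]$ as in the NOT-gate CPT:\n\\begin{verbatim}\ndef p_d_given_x1(D, x1):\n    # NOT-gate CPT: P(D|X1=1) = 1-D and P(D|X1=0) = D\n    return 1.0 - D if x1 == 1 else D\n\nD = 0.3\nnum = {x1: 0.5 * p_d_given_x1(D, x1) for x1 in (0, 1)}  # uniform prior\nZ = sum(num.values())                                    # equals 0.5 here\nposterior = {x1: v / Z for x1, v in num.items()}\nprint(posterior[1], p_d_given_x1(D, 1))  # both 0.7: P(X1|D) = P(D|X1)\n\\end{verbatim}\n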
\nLet\n\\begin{align*}\n\\Omega&=\\left\\{X_1,X_2,\\dots ,X_n\\right\\}\\\\\nn&=\\left|\\Omega\\right|\n\\end{align*}\nWe will validate whether the CPT of the diagnostic relationship, \\textit{P}(\\textit{D{\\textbar}X}) specified by formula~\\ref{formula:cpt-evidence}, still satisfies the diagnostic condition in the general case, the X-D network. In other words, the X-D network is the general case of the single diagnostic relationship.\n\nRecalling the dependencies shown in figure~\\ref{figure:augmented-xd-network}, formula~\\ref{formula:joint-prob-xd-network} specifies the joint probability of the X-D network.\n\n\\begin{equation}\n\\begin{split}\nP\\left(\\mathrm{\\Omega },Y,D\\right)&=P\\left(X_1,X_2,\\dots ,X_n,Y,D\\right)\\\\\n&=P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)P\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\prod^n_{i=1}{P\\left(X_i\\right)}\n\\end{split}\n\\label{formula:joint-prob-xd-network}\n\\end{equation}\n\\[\\mathrm{Where}\\ \\Omega=\\left\\{X_1,X_2,\\dots,X_n\\right\\}\\mathrm{.}\\]\n%Formula 4.1. Joint probability of X-D network\n\n\\noindent Formula~\\ref{formula:general-cond-prob} specifies the conditional probability of \\textit{D} given \\textit{X${}_{i}$} (the likelihood function) and the posterior probability of \\textit{X${}_{i}$} given \\textit{D}.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)=\\frac{P\\left(X_i,D\\right)}{P\\left(X_i\\right)}=\\frac{\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{X_i,D\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}}{\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{X_i\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}}\\\\\n&P\\left(X_i\\mathrel{\\left|\\vphantom{X_i D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{P\\left(X_i,D\\right)}{P\\left(D\\right)}=\\frac{\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{X_i,D\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}}{\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{D\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}}\n\\end{split}\n\\label{formula:general-cond-prob}\n\\end{equation}\nWhere $\\Omega$ = $\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$}$\\}$ and the sign ``{\\textbackslash}'' denotes the subtraction (excluding) operator in set theory \\cite{wikipedia:set}.\n%Formula 4.2. Conditional probability \\textit{P}(\\textit{D}{\\textbar}\\textit{X${}_{i}$}) and posterior probability \\textit{P}(\\textit{X${}_{i}$}{\\textbar}\\textit{D})
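\n\nThe two sums of formula~\\ref{formula:general-cond-prob} can be evaluated by brute force. The sketch below (ours) assumes the evidence CPT of formula~\\ref{formula:cpt-evidence} in the form used in the next derivation, namely \\textit{P}(\\textit{D}{\\textbar}\\textit{Y}=1)=\\textit{D}/\\textit{S} and \\textit{P}(\\textit{D}{\\textbar}\\textit{Y}=0)=(\\textit{M}$-$\\textit{D})/\\textit{S}, together with uniform sources; the gate \\textit{P}(\\textit{Y}=1{\\textbar}\\textit{X}${}_{1}$,{\\dots},\\textit{X${}_{n}$}) is passed as a function.\n\\begin{verbatim}\nfrom itertools import product\n\ndef joint_sums(n, gate, D, M, S, i):\n    # Returns P(Xi=1, D) and P(D) by summing the joint probability\n    # of formula 4.1 over all configurations (formula 4.2).\n    pxid = pd = 0.0\n    for x in product([0, 1], repeat=n):\n        py1 = gate(x)                        # P(Y=1 | X1..Xn)\n        for y in (0, 1):\n            p = (0.5 ** n) * (py1 if y == 1 else 1.0 - py1)\n            p *= D / S if y == 1 else (M - D) / S\n            pd += p\n            if x[i] == 1:\n                pxid += p\n    return pxid, pd\n\n# Deterministic OR-gate, n = 3, D in {0..5}, so M = 5 and S = 15:\ngate = lambda x: 1.0 if any(x) else 0.0\nprint(joint_sums(3, gate, D=2, M=5, S=15, i=0))  # (1/15, 17/120)\n\\end{verbatim}\n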
\nGiven uniform distribution of \\textit{X${}_{i}$} (s), we have:\n\\[P\\left(X_1\\right)=P\\left(X_2\\right)=\\cdots =P\\left(X_n\\right)=\\frac{1}{2}\\] \nThe joint probability becomes\n\\[P\\left(\\mathrm{\\Omega },Y,D\\right)=\\frac{1}{2^n}P\\left(Y\\mathrel{\\left|\\vphantom{Y X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\\] \nThe joint probability of \\textit{X${}_{i}$} and \\textit{D} is:\n\\begin{align*}\n&P\\left(X_i,D\\right)=\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{X_i,D\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}\\\\\n&=P\\left(X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1,Y=1,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0,Y=1,D\\right)\\\\\n&+\\cdots \\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1,Y=1,D\\right)\\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0,Y=1,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1,Y=0,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0,Y=0,D\\right)\\\\\n&+\\cdots \\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1,Y=0,D\\right)\\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0,Y=0,D\\right)\\\\\n&=\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0\\right)\\right)\\\\\n&+\\cdots \\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_i,\\dots ,X_{n-1}=1,X_n=0\\right)\\right)\\\\\n&+\\cdots \\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_i,\\dots ,X_{n-1}=0,X_n=0\\right)\\right)\\\\\n&\\left(\\mathrm{Due\\ to\\ formula\\ \\ref{formula:cpt-evidence}}\\right)\n\\end{align*}\n\nThe marginal probability of \\textit{D} is:
\n\\begin{align*}\n&P\\left(D\\right)=\\sum_{\\left\\{\\mathrm{\\Omega },Y,D\\right\\}\\backslash \\left\\{D\\right\\}}{P\\left(\\mathrm{\\Omega },Y,D\\right)}\\\\\n&=P\\left(X_1=1,X_2=1,\\dots ,X_n=1,Y=1,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_n=0,Y=1,D\\right)+\\cdots \\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_n=1,Y=1,D\\right)\\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_n=0,Y=1,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_n=1,Y=0,D\\right)\\\\\n&+P\\left(X_1=1,X_2=1,\\dots ,X_n=0,Y=0,D\\right)+\\cdots \\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_n=1,Y=0,D\\right)\\\\\n&+P\\left(X_1=0,X_2=0,\\dots ,X_n=0,Y=0,D\\right)\\\\\n&=\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=1,X_2=1,\\dots ,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=1,X_2=1,\\dots ,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_n=0\\right)\\right)+\\cdots \\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=0,X_2=0,\\dots ,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{D}{S}\\left(P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1=0,X_2=0,\\dots ,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_n=0\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=1,X_2=1,\\dots ,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=1,X_2=1,\\dots ,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=1,X_2=1,\\dots ,X_n=0\\right)\\right)+\\cdots \\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=0,X_2=0,\\dots ,X_n=1}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_n=1\\right)\\right)\\\\\n&+\\frac{1}{2^n}\\frac{M-D}{S}\\left(P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 X_1=0,X_2=0,\\dots ,X_n=0}\\right.\\kern-\\nulldelimiterspace}X_1=0,X_2=0,\\dots ,X_n=0\\right)\\right)\n\\end{align*}\nBy applying definition~\\ref{definition:binary-arrangements}, the joint probability \\textit{P}(\\textit{X${}_{i}$}, \\textit{D}) is determined as follows:\n\\begin{align*}\n&P\\left(X_i,D\\right)\\\\\n&=\\frac{1}{2^nS}\\left(D\\sum_a{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}+\\left(M-D\\right)\\sum_a{P\\left(Y=0\\mathrel{\\left|\\vphantom{Y=0 a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}\\right)\\\\\n&=\\frac{1}{2^nS}\\left(D\\sum_a{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)}+\\left(M-D\\right)\\sum_a{\\left(1-P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\right)\\right)}\\right)\\\\\n&=\\frac{1}{2^nS}\\left(\\left(2D-M\\right)s\\left(\\mathrm{\\Omega 
}\\mathrm{:}\\left\\{X_i\\right\\}\\right)+2^{n-1}\\left(M-D\\right)\\right)\n\\end{align*}\nSimilarly, formula~\\ref{formula:join-marginal-probs} specifies the joint probability \\textit{P}(\\textit{X${}_{i}$}, \\textit{D}) and the marginal probability \\textit{P}(\\textit{D}) given uniform distribution of all sources.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(X_i,D\\right)=\\frac{1}{2^nS}\\left(\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)+2^{n-1}\\left(M-D\\right)\\right)\\\\\n&P\\left(D\\right)=\\frac{1}{2^nS}\\left(\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\right)+2^n\\left(M-D\\right)\\right)\n\\end{split}\n\\label{formula:join-marginal-probs}\n\\end{equation}\n%Formula 4.3. Joint probability \\textit{P}(\\textit{X${}_{i}$}, \\textit{D}) and marginal probability \\textit{P}(\\textit{D}) given uniform distribution of all sources\nWhere \\textit{s}($\\Omega$) and \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}$\\}$) are specified in definition~\\ref{definition:binary-arrangements}.\n\nIn general, formula~\\ref{formula:join-marginal-probs-xd-network} specifies conditional probability \\textit{P}(\\textit{D{\\textbar}X${}_{i}$}), posterior probability \\textit{P}(\\textit{X${}_{i}${\\textbar}D}), and transformation coefficient for X-gate inference.\n\n\\begin{equation}\n\\begin{split}\nP\\left(D\\mathrel{\\left|\\vphantom{D X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)&=\\frac{P\\left(X_i=1,D\\right)}{P\\left(X_i=1\\right)}\\\\\n&=\\frac{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)+2^{n-1}\\left(M-D\\right)}{2^{n-1}S}\\\\\nP\\left(D\\mathrel{\\left|\\vphantom{D X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)&=\\frac{P\\left(X_i=0,D\\right)}{P\\left(X_i=0\\right)}\\\\\n&=\\frac{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)+2^{n-1}\\left(M-D\\right)}{2^{n-1}S}\\\\\nP\\left(X_i=1\\mathrel{\\left|\\vphantom{X_i=1 D}\\right.\\kern-\\nulldelimiterspace}D\\right)&=\\frac{P\\left(X_i=1,D\\right)}{P\\left(D\\right)}\\\\\n&=\\frac{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)+2^{n-1}\\left(M-D\\right)}{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\right)+2^n\\left(M-D\\right)}\\\\\nP\\left(X_i=0\\mathrel{\\left|\\vphantom{X_i=0 D}\\right.\\kern-\\nulldelimiterspace}D\\right)&=1-P\\left(X_i=1\\mathrel{\\left|\\vphantom{X_i=1 D}\\right.\\kern-\\nulldelimiterspace}D\\right)\\\\\n&=\\frac{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)+2^{n-1}\\left(M-D\\right)}{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\right)+2^n\\left(M-D\\right)}\\\\\nk&=\\frac{P\\left(X_i\\mathrel{\\left|\\vphantom{X_i D}\\right.\\kern-\\nulldelimiterspace}D\\right)}{P\\left(D\\mathrel{\\left|\\vphantom{D X_i}\\right.\\kern-\\nulldelimiterspace}X_i\\right)}=\\frac{2^{n-1}S}{\\left(2D-M\\right)s\\left(\\mathrm{\\Omega }\\right)+2^n\\left(M-D\\right)}\n\\end{split}\n\\label{formula:join-marginal-probs-xd-network}\n\\end{equation}\n%Formula 4.4. 
Conditional probability, posterior probability, and transformation coefficient of X-D network\n\n\\noindent The transformation coefficient is rewritten as follows:\n\\[k=\\frac{2^{n-1}S}{2D\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+M\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)}\\] \nNote that \\textit{S}, \\textit{D}, \\textit{M} are abstract symbols and there is no proportional connection between 2\\textit{${}^{n}$}${}^{-1}$\\textit{S} and \\textit{D} for all \\textit{D} specified by formula~\\ref{formula:cpt-evidence}. Assume that such a proportional connection 2\\textit{${}^{n}$}${}^{-1}$\\textit{S} = \\textit{aD${}^{j}$} exists for all \\textit{D}, where \\textit{a} is an arbitrary constant. In the binary case, when \\textit{D}=0 and \\textit{S}=1, we have:\n\\[2^{n-1}=2^{n-1}*1=2^{n-1}S=aD^j=a*0^j=0\\] \nThere is a contradiction, which implies that it is impossible to reduce \\textit{k} into the following form:\n\\[k=\\frac{aD^j}{bD^j}\\]\nTherefore, if \\textit{k} is constant with regard to \\textit{D} then,\n\\[2D\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+M\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)=C\\neq 0,\\forall D\\] \nWhere \\textit{C} is a constant. We have:\n\\begin{align*}\n&\\sum_D{\\left(2D\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+M\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)\\right)}=\\sum_D{C}\\\\\n&\\Rightarrow 2S\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+NM\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)=NC\\\\\n&\\Rightarrow 2^nS=NC\\quad \\left(\\mathrm{due\\ to\\ }NM=2S\\right)\n\\end{align*}\nIt is implied that\n\\begin{align*}\nk&=\\frac{2^{n-1}S}{2D\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+M\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)}\\\\\n&=\\frac{NC}{2C}=\\frac{N}{2}\n\\end{align*}\nSubstituting \\textit{k} = \\textit{N}/2 back into the definition of \\textit{k} yields\n\\begin{align*}\n&2^nS=N\\left(2D\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+M\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)\\right)\\\\\n&\\Rightarrow 2^nS=2ND\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)+2S\\left(2^n-s\\left(\\mathrm{\\Omega }\\right)\\right)\\\\\n&\\Rightarrow 2ND\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)-2S\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)=0\\\\\n&\\Rightarrow \\left(ND-S\\right)\\left(s\\left(\\mathrm{\\Omega }\\right)-2^{n-1}\\right)=0\n\\end{align*}\nAssuming \\textit{ND}=\\textit{S} for all \\textit{D}, we have:\n\\[ND=S=\\frac{NM}{2}\\Rightarrow D=\\frac{M}{2}\\] \nThere is a contradiction because this equality cannot hold for all \\textit{D} ranging from 0 to \\textit{M}. Therefore, if \\textit{k} is constant with regard to \\textit{D} then \\textit{s}($\\Omega$) = 2\\textit{${}^{n}$}${}^{-1}$. Inversely, if \\textit{s}($\\Omega$) = 2\\textit{${}^{n}$}${}^{-1}$ then \\textit{k} is:\n\\[k=\\frac{2^{n-1}S}{2D\\left(2^{n-1}-2^{n-1}\\right)+M\\left(2^n-2^{n-1}\\right)}=\\frac{S}{M}=\\frac{N}{2}\\] \nIn general, the event that \\textit{k} is constant with regard to \\textit{D} is equivalent to the event \\textit{s}($\\Omega$) = 2\\textit{${}^{n}$}${}^{-1}$. 
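As a numerical illustration of this equivalence, the following Python sketch (hypothetical, not part of the original derivation; the values of \\textit{n}, \\textit{M}, and \\textit{s}($\\Omega$) are arbitrary choices) tabulates \\textit{k} over all values of \\textit{D}:\n\\begin{verbatim}\n# Hypothetical check: k is constant in D iff s(Omega) = 2^(n-1).\n# Assumes D ranges over {0, 1, ..., M}, so N = M + 1 and S = N*M/2.\nn, M = 2, 3\nN = M + 1\nS = sum(range(M + 1))                # S = 6 = N*M/2\n\ndef k(D, s_omega):\n    return (2 ** (n - 1) * S) / ((2 * D - M) * s_omega + 2 ** n * (M - D))\n\nfor s_omega in [2 ** (n - 1), 1.5]:  # 2^(n-1) = 2 versus an arbitrary other value\n    print(s_omega, [k(D, s_omega) for D in range(M + 1)])\n# s(Omega) = 2 gives k = N/2 = 2.0 for every D; s(Omega) = 1.5 gives a varying k.\n\\end{verbatim}\n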
In general, the diagnostic theorem is stated as follows:\n\n\\begin{theorem}[Diagnostic theorem]\nThe diagnostic condition of the X-D network is satisfied if and only if\n\\begin{center}\n$\\begin{aligned}\ns\\left(\\mathrm{\\Omega }\\right)=\\sum_a{P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 a\\left(\\mathrm{\\Omega }\\right)}\\right.\\kern-\\nulldelimiterspace}a\\left(\\mathrm{\\Omega }\\right)\\right)}=2^{\\left|\\mathrm{\\Omega }\\right|-1},\\forall \\mathrm{\\Omega }\\neq \\emptyset\n\\end{aligned}$\n\\end{center}\nIn that case, the transformation coefficient becomes:\n\\begin{center}\n$\\begin{aligned}\nk=\\frac{N}{2}\n\\end{aligned}$\n\\end{center}\n\\label{theorem:diagnostic-theorem}\n\\end{theorem}\n\nWhere the X-D network is the combination of the diagnostic relationship and X-gate inference as follows:\n\\begin{align*}\n&P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\right)\\\\\n&P\\left(D\\mathrel{\\left|\\vphantom{D Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)=\\left\\{ \\begin{array}{r}\n\\frac{D}{S}\\ \\mathrm{if}\\ Y=1 \\\\ \n\\frac{M}{S}-\\frac{D}{S}\\ \\mathrm{if}\\ Y=0 \\end{array}\n\\right.\n\\end{align*}\nNote that weights $p_i=w_i$ and $\\rho_i=\\omega_i$, which are inputs of $s\\left(\\Omega\\right)$, are abstract variables. Thus, the equality $s\\left(\\Omega\\right)=2^{\\left|\\Omega\\right|-1}$ implies that all abstract variables are removed, and so $s\\left(\\Omega\\right)$ does not depend on the weights. The diagnostic theorem is the optimal way to validate the diagnostic condition.\n\nFormula~\\ref{formula:join-marginal-probs-xd-network} becomes simple with AND-gate inference. Recall that formula~\\ref{formula:and-gate-inference} specified AND-gate inference as follows:\n\\begin{align*}\nP\\left(X_1\\odot X_2\\odot \\dots \\odot X_n\\right)&=P\\left(Y=1\\mathrel{\\left|\\vphantom{Y=1 X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\\\\n&=\\left\\{ \\begin{array}{l}\n\\prod^n_{i=1}{p_i}\\ \\mathrm{if\\ all}\\ X_i\\ \\left(s\\right)\\ \\mathrm{are}\\ 1 \\\\ \n0\\ \\mathrm{if\\ there\\ exists\\ at\\ least\\ one\\ }X_i=0 \\end{array}\n\\right.\n\\end{align*}\nBecause only the case \\textit{X}${}_{1}$ = \\textit{X}${}_{2}$ ={\\dots}= \\textit{X${}_{n}$} = 1 yields a nonzero probability, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)=\\prod^n_{i=1}{p_i}\\] \nWhen \\textit{X${}_{i}$} = 0, we have:\n\\[s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)=0\\] \nDerived from formula~\\ref{formula:join-marginal-probs-xd-network}, formula~\\ref{formula:cond-post-prob-andd-network} specifies the conditional probability \\textit{P}(\\textit{D{\\textbar}X${}_{i}$}), the posterior probability \\textit{P}(\\textit{X${}_{i}${\\textbar}D}), and the transformation coefficient for the X-D network with AND-gate inference, called the \\textit{AND-D network}.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=\\frac{\\left(2D-M\\right)\\prod^n_{i=1}{p_i}+2^{n-1}\\left(M-D\\right)}{2^{n-1}S}\\\\\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=\\frac{M-D}{S}\\\\\n&P\\left(X_i=1\\mathrel{\\left|\\vphantom{X_i=1 
D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{\\left(2D-M\\right)\\prod^n_{i=1}{p_i}+2^{n-1}\\left(M-D\\right)}{\\left(2D-M\\right)\\prod^n_{i=1}{p_i}+2^n\\left(M-D\\right)}\\\\\n&P\\left(X_i=0\\mathrel{\\left|\\vphantom{X_i=0 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{2^{n-1}\\left(M-D\\right)}{\\left(2D-M\\right)\\prod^n_{i=1}{p_i}+2^n\\left(M-D\\right)}\\\\\n&k=\\frac{2^{n-1}S}{\\left(2D-M\\right)\\prod^n_{i=1}{p_i}+2^n\\left(M-D\\right)}\n\\end{split}\n\\label{formula:cond-post-prob-andd-network}\n\\end{equation}\n%Formula 4.5. Conditional probability and posterior probability of AND-D network\n\nFor convenience, we validate the diagnostic condition with a case of two sources $\\Omega$ = $\\{$\\textit{X}${}_{1}$, \\textit{X}${}_{2}$$\\}$, \\textit{p}${}_{1}$ = \\textit{p}${}_{2}$ = \\textit{w}${}_{1}$ = \\textit{w}${}_{2}$ = 0.5, $D\\in \\{0,1,2,3\\}$. According to the diagnostic theorem, if \\textit{s}($\\Omega$) $\\mathrm{\\neq}$ 2 for a given X-gate, then such X-gate does not satisfy the diagnostic condition.\n\nGiven AND-gate inference, by applying formula~\\ref{formula:and-gate-inference}, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=\\left(0.5*0.5\\right)+0+0+0=0.25\\] \nGiven OR-gate inference, by applying formula~\\ref{formula:or-gate-inference}, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=\\left(1-0.5*0.5\\right)+\\left(1-0.5\\right)+\\left(1-0.5\\right)+0=1.75\\] \nGiven XOR-gate inference, by applying formula~\\ref{formula:xor-gate-inference}, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=\\left(0.5*0.5+0.5*0.5\\right)+0.5+0.5+0=1.5\\] \nGiven XNOR-gate inference, by applying formula~\\ref{xnor-gate-inference}, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=\\left(0.5*0.5+0.5*0.5\\right)+0.5+0.5+1=2.5\\] \nGiven SIGMA-gate inference, by applying formula~\\ref{sigma-gate-inference}, we have:\n\\[s\\left(\\mathrm{\\Omega }\\right)=\\left(0.5+0.5\\right)+0.5+0.5+0=2\\] \nIt is asserted that AND-gate, OR-gate, XOR-gate, and XNOR-gate do not satisfy the diagnostic condition, and so they should not be used to assess hypotheses. However, it is not yet determined whether U-gate and SIGMA-gate satisfy such diagnostic condition. It is necessary to expand the formula for the SIGMA-gate diagnostic network (called \\textit{SIGMA-D network}) in order to validate it.\n\nIn case of SIGMA-gate inference, by applying formula~\\ref{sigma-gate-inference}, we have:\n\\begin{align*}\n\\sum_i{w_i}&=1\\\\\ns\\left(\\mathrm{\\Omega }\\right)&=2^{n-1}\\sum_i{w_i}=2^{n-1}\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)&=2^{n-1}w_i+2^{n-2}\\sum_{j\\neq i}{w_j}\\\\\n&=2^{n-1}w_i+2^{n-2}\\left(1-w_i\\right)=2^{n-2}\\left(1+w_i\\right)\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)&=s\\left(\\mathrm{\\Omega }\\right)-s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)\\\\\n&=2^{n-2}\\left(1-w_i\\right)\n\\end{align*}\nIt is necessary to validate SIGMA-D network with SIGMA-gate bi-inference. 
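The five arrangement sums computed above for the two-source example can be cross-checked numerically. The following Python sketch is hypothetical; it spells out the two-source gate probabilities implied by the worked example rather than the general formulas:\n\\begin{verbatim}\n# Hypothetical cross-check of s(Omega) for two sources with\n# p1 = p2 = w1 = w2 = 0.5. P(Y=1|X1,X2) is written out per gate.\nfrom itertools import product\n\np = [0.5, 0.5]; w = [0.5, 0.5]\n\ndef and_gate(xs):                      # prod(p_i) if all X_i = 1, else 0\n    out = 1.0\n    for x, pi in zip(xs, p):\n        out *= pi if x else 0.0\n    return out\n\ndef or_gate(xs):                       # 1 - prod(1 - p_i) over active sources\n    if not any(xs): return 0.0\n    out = 1.0\n    for x, pi in zip(xs, p):\n        if x: out *= 1.0 - pi\n    return 1.0 - out\n\ndef xor_gate(xs):                      # two-source XOR, as in the example\n    x1, x2 = xs\n    if x1 and x2: return p[0] * (1 - p[1]) + (1 - p[0]) * p[1]\n    if x1: return p[0]\n    if x2: return p[1]\n    return 0.0\n\ndef xnor_gate(xs): return 1.0 - xor_gate(xs)\n\ndef sigma_gate(xs):                    # sum of weights of active sources\n    return sum(wi for x, wi in zip(xs, w) if x)\n\ngates = [("AND", and_gate), ("OR", or_gate), ("XOR", xor_gate),\n         ("XNOR", xnor_gate), ("SIGMA", sigma_gate)]\nfor name, gate in gates:\n    print(name, sum(gate(xs) for xs in product([0, 1], repeat=2)))\n# prints 0.25, 1.75, 1.5, 2.5, 2.0 -- only SIGMA reaches 2^(n-1) = 2\n\\end{verbatim}\n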
By applying formula~\\ref{formula:sigma-gate-biinference}, we recalculate these quantities as follows:\n\\begin{align*}\ns\\left(\\mathrm{\\Omega }\\right)&=2^{n-1}\\sum_i{w_i}+2^{n-1}\\sum_i{d_i}=2^{n-1}\\sum_i{\\left(w_i+d_i\\right)}=2^{n-1}\\\\\n&\\left(\\mathrm{due\\ to\\ }\\sum_i{\\left(w_i+d_i\\right)}=1\\right)\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)&=2^{n-1}w_i+2^{n-2}\\sum_{j\\neq i}{w_j}+2^{n-2}\\sum_i{d_i}\\\\\n&=2^{n-2}w_i+2^{n-2}\\sum_i{\\left(w_i+d_i\\right)}=2^{n-2}\\left(1+w_i\\right)\\\\\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)&=s\\left(\\mathrm{\\Omega }\\right)-s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)=2^{n-2}\\left(1-w_i\\right)\n\\end{align*}\nObviously, the quantities \\textit{s}($\\Omega$), \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=1$\\}$), and \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=0$\\}$) are kept intact. According to the diagnostic theorem, we conclude that the SIGMA-D network does satisfy the diagnostic condition due to \\textit{s}($\\Omega$)=2\\textit{${}^{n}$}${}^{-1}$. Thus, the SIGMA-D network can be used to assess hypotheses.\n\nFormula~\\ref{cond-post-k-sigmad-network}, an immediate consequence of formula~\\ref{formula:join-marginal-probs-xd-network}, specifies the conditional probability \\textit{P}(\\textit{D{\\textbar}X${}_{i}$}), the posterior probability \\textit{P}(\\textit{X${}_{i}${\\textbar}D}), and the transformation coefficient for the SIGMA-D network.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=\\frac{\\left(2D-M\\right)w_i+M}{2S}\\\\\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=\\frac{\\left(M-2D\\right)w_i+M}{2S}\\\\\n&P\\left(X_i=1\\mathrel{\\left|\\vphantom{X_i=1 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{\\left(2D-M\\right)w_i+M}{2M}\\\\\n&P\\left(X_i=0\\mathrel{\\left|\\vphantom{X_i=0 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{\\left(M-2D\\right)w_i+M}{2M}\\\\\n&k=\\frac{N}{2}\n\\end{split}\n\\label{cond-post-k-sigmad-network}\n\\end{equation}\n%Formula 4.6. Conditional probability, posterior probability, and transformation coefficient of SIGMA-D network\n\nIn the case of the SIGMA-gate, the augmented variable \\textit{Y} can be removed from the X-D network. The evidence \\textit{D} is now established as the direct target variable. Figure~\\ref{figure:direct-sigmad-network} shows a so-called \\textit{direct SIGMA-gate diagnostic network} (direct SIGMA-D network).\n\n\\begin{figure}\n\\centering\n\\includegraphics{DirectSIGMADNetwork.png}\n\\caption{Direct SIGMA-gate diagnostic network (direct SIGMA-D network)}\n\\label{figure:direct-sigmad-network}\n\\end{figure}\n%Figure 4.2. 
Direct SIGMA-gate diagnostic network (direct SIGMA-D network)\n\nDerived from formula~\\ref{sigma-gate-inference}, the CPT of the direct SIGMA-D network is determined by formula~\\ref{formula:cpt-direct-sigmad-network}.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)=\\sum_{i\\in K}{\\frac{D}{S}w_i}+\\sum_{j\\in L}{\\frac{M-D}{S}w_j}\\\\\n&\\mathrm{Where}\\ \\mathrm{the}\\ \\mathrm{set}\\ \\mathrm{of}\\ X_i\\ \\mathrm{(s)}\\ \\mathrm{is}\\ \\mathrm{complete}\\ \\mathrm{and}\\ \\mathrm{mutually}\\ \\mathrm{exclusive.}\\\\\n&\\sum^n_{i=1}{w_i}=1\\\\\n&X_i\\cap X_j=\\emptyset ,\\forall i\\neq j\n\\end{split}\n\\label{formula:cpt-direct-sigmad-network}\n\\end{equation}\n%Formula 4.7. CPT of direct SIGMA-D network\n\nFormula~\\ref{formula:cpt-direct-sigmad-network} specifies a valid CPT because:\n\\begin{align*}\n\\sum_D{P\\left(D\\mathrel{\\left|\\vphantom{D X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)}&=\\frac{1}{S}\\sum_{i\\in K}{w_i\\sum_D{D}}+\\frac{1}{S}\\sum_{j\\in L}{w_j\\sum_D{\\left(M-D\\right)}}\\\\\n&=\\frac{1}{S}\\sum_{i\\in K}{Sw_i}+\\frac{1}{S}\\sum_{j\\in L}{w_j\\left(NM-S\\right)}\\\\\n&=\\frac{1}{S}\\sum_{i\\in K}{Sw_i}+\\frac{1}{S}\\sum_{j\\in L}{Sw_j}=\\sum^n_{i=1}{w_i}=1\\\\\n&\\left(\\mathrm{due\\ to\\ }NM=2S\\right)\n\\end{align*}\nFrom the dependencies shown in figure~\\ref{figure:direct-sigmad-network}, formula~\\ref{formula:joint-prob-direct-sigmad-network} specifies the joint probability of the direct SIGMA-D network.\n\n\\begin{equation}\nP\\left(X_1,X_2,\\dots ,X_n,D\\right)=P\\left(D\\mathrel{\\left|\\vphantom{D X_1,X_2,\\dots ,X_n}\\right.\\kern-\\nulldelimiterspace}X_1,X_2,\\dots ,X_n\\right)\\prod^n_{i=1}{P\\left(X_i\\right)}\n\\label{formula:joint-prob-direct-sigmad-network}\n\\end{equation}\n%Formula 4.8. Joint probability of direct SIGMA-D network\n\nInferred from formula~\\ref{formula:join-marginal-probs}, formula~\\ref{formula:joint-marginal-probs-direct-sigmad-network} specifies the joint probability \\textit{P}(\\textit{X${}_{i}$}, \\textit{D}) and the marginal probability \\textit{P}(\\textit{D}) of the direct SIGMA-D network, given uniform distribution of all sources.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(X_i,D\\right)=\\frac{1}{2^n}s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i\\right\\}\\right)\\\\\n&P\\left(D\\right)=\\frac{1}{2^n}s\\left(\\mathrm{\\Omega }\\right)\n\\end{split}\n\\label{formula:joint-marginal-probs-direct-sigmad-network}\n\\end{equation}\n%Formula 4.9. 
Joint probability \\textit{P}(\\textit{X${}_{i}$}, \\textit{D}) and marginal probability \\textit{P}(\\textit{D}) of direct SIGMA-D network\nWhere $s\\left(\\Omega\\right)$ and $s\\left(\\Omega:\\left\\{X_i\\right\\}\\right)$ are specified in definition~\\ref{definition:binary-arrangements}.\n\nBy enumerating all arrangements of the direct SIGMA-D network, we have:\n\\begin{align*}\n&s\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=1\\right\\}\\right)\\\\\n&=2^{n-1}\\frac{D}{S}w_i+2^{n-2}\\sum_{j\\neq i}{\\frac{D}{S}w_j}+2^{n-2}\\sum_{j\\neq i}{\\frac{M-D}{S}w_j}\\\\\n&=\\frac{2^{n-2}}{S}\\left(2Dw_i+M\\sum_{j\\neq i}{w_j}\\right)=\\frac{2^{n-2}}{S}\\left(2Dw_i+M\\left(1-w_i\\right)\\right)\\\\\n&\\left(\\mathrm{Due\\ to\\ }\\sum^n_{i=1}{w_i}=1\\right)\\\\\n&=\\frac{2^{n-2}}{S}\\left(\\left(2D-M\\right)w_i+M\\right)\n\\end{align*}\nSimilarly, we have:\n\\begin{align*}\ns\\left(\\mathrm{\\Omega }\\mathrm{:}\\left\\{X_i=0\\right\\}\\right)&=2^{n-1}\\frac{M-D}{S}w_i+2^{n-2}\\sum_{j\\neq i}{\\frac{M-D}{S}w_j}+2^{n-2}\\sum_{j\\neq i}{\\frac{D}{S}w_j}\\\\\n&=\\frac{2^{n-2}}{S}\\left(\\left(M-2D\\right)w_i+M\\right)\\\\\ns\\left(\\mathrm{\\Omega }\\right)&=2^{n-1}\\sum_i{\\frac{D}{S}w_i}+2^{n-1}\\sum_i{\\frac{M-D}{S}w_i}\\\\\n&=\\frac{2^{n-1}M}{S}\n\\end{align*}\nBy applying formula~\\ref{formula:joint-marginal-probs-direct-sigmad-network} with \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=0$\\}$), \\textit{s}($\\Omega$:$\\{$\\textit{X${}_{i}$}=1$\\}$), and \\textit{s}($\\Omega$), we get the same result as formula~\\ref{cond-post-k-sigmad-network}.\n\\begin{align*}\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=1}\\right.\\kern-\\nulldelimiterspace}X_i=1\\right)=\\frac{\\left(2D-M\\right)w_i+M}{2S}\\\\\n&P\\left(D\\mathrel{\\left|\\vphantom{D X_i=0}\\right.\\kern-\\nulldelimiterspace}X_i=0\\right)=\\frac{\\left(M-2D\\right)w_i+M}{2S}\\\\\n&P\\left(X_i=1\\mathrel{\\left|\\vphantom{X_i=1 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{\\left(2D-M\\right)w_i+M}{2M}\\\\\n&P\\left(X_i=0\\mathrel{\\left|\\vphantom{X_i=0 D}\\right.\\kern-\\nulldelimiterspace}D\\right)=\\frac{\\left(M-2D\\right)w_i+M}{2M}\\\\\n&k=\\frac{N}{2}\n\\end{align*}\nTherefore, it is possible to use the direct SIGMA-D network to assess hypotheses. It is asserted that the SIGMA-D network satisfies the diagnostic condition; the single relationship, the NOT-D network, and the direct SIGMA-D network are specific cases of the SIGMA-D network. This raises a question: does there exist an X-D network, different from the SIGMA-D network and those mentioned above, that satisfies the diagnostic condition?\n\nRecall that each X-D network is a pattern owning a particular X-gate inference, which in turn is based on particular X-gate condition (s) relevant only to the variables \\textit{A${}_{i}$} (s). The most general nonlinear X-D network is the U-D network, whereas the SIGMA-D network is a linear one. 
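Before turning to the U-gate, the SIGMA-D quantities admit a quick numerical sanity check. The following Python sketch is hypothetical; \\textit{M} and the weight are arbitrary choices:\n\\begin{verbatim}\n# Hypothetical check that the SIGMA-D network satisfies the diagnostic\n# condition: P(X_i|D) = (N/2) * P(D|X_i) for every D, using formula 4.6.\nM = 4\nN = M + 1\nS = sum(range(M + 1))      # S = N*M/2 = 10\nw_i = 0.3                  # weight of source X_i (all weights sum to 1)\n\nfor D in range(M + 1):\n    p_d_given_x1 = ((2 * D - M) * w_i + M) / (2 * S)   # P(D | X_i = 1)\n    p_x1_given_d = ((2 * D - M) * w_i + M) / (2 * M)   # P(X_i = 1 | D)\n    assert abs(p_x1_given_d - (N / 2) * p_d_given_x1) < 1e-12\nprint("k = N/2 =", N / 2, "for every D")\n\\end{verbatim}\n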
The U-gate inference given an arbitrary condition on \\textit{U} is:\n\\begin{align*}\n&P\\left(X_1\\uplus X_2\\uplus \\dots \\uplus X_n\\right)\\\\\n&=\\sum_{U\\in \\mathcal{U}}{\\left(\\prod_{i\\in U\\cap K}{p_i}\\prod_{i\\in U\\cap L}{\\left(1-{\\rho }_i\\right)}\\right)\\left(\\prod_{i\\in \\overline{U}\\cap K}{\\left(1-p_i\\right)}\\prod_{i\\in \\overline{U}\\cap L}{{\\rho }_i}\\right)}\n\\end{align*}\nLet \\textit{f} be the arrangement sum of U-gate inference.\n\\[f\\left(p_i,{\\rho }_i\\right)=\\sum_{a\\left(\\mathrm{\\Omega }\\right)}{\\sum_{U\\in \\mathcal{U}}{\\left(\\prod_{i\\in U\\cap K}{p_i}\\prod_{i\\in U\\cap L}{\\left(1-{\\rho }_i\\right)}\\right)\\left(\\prod_{i\\in \\overline{U}\\cap K}{\\left(1-p_i\\right)}\\prod_{i\\in \\overline{U}\\cap L}{{\\rho }_i}\\right)}}\\]\nThe function \\textit{f} is a sum of many large expressions, and each expression is a product of four possible sub-products ($\\Pi$) as follows:\n\\[Expr=\\prod_{i\\in U\\cap K}{p_i}\\prod_{i\\in U\\cap L}{\\left(1-{\\rho }_i\\right)}\\prod_{i\\in \\overline{U}\\cap K}{\\left(1-p_i\\right)}\\prod_{i\\in \\overline{U}\\cap L}{{\\rho }_i}\\] \nIn any case of degradation, there always exist expressions \\textit{Expr} having at least 2 sub-products ($\\Pi$), for example:\n\\[Expr=\\prod_{i\\in U\\cap K}{p_i}\\prod_{i\\in U\\cap L}{\\left(1-{\\rho }_i\\right)}\\] \nConsequently, there always exist expressions \\textit{Expr} having at least 5 factors relevant to \\textit{p${}_{i}$} and \\textit{$\\rho$${}_{i}$} if \\textit{n }$\\mathrm{\\ge}$ 5, for example:\n\\[Expr=p_1p_2p_3\\left(1-{\\rho }_4\\right)\\left(1-{\\rho }_5\\right)\\] \nThus, the degree of \\textit{f} will be larger than or equal to 5 given \\textit{n }$\\mathrm{\\ge}$ 5. According to the diagnostic theorem, the U-gate network satisfies the diagnostic condition if and only if \\textit{f(p${}_{i}$}, \\textit{$\\rho$${}_{i}$}) = 2\\textit{${}^{n}$}${}^{-1}$ for all \\textit{n} $\\mathrm{\\ge}$ 1 and for all abstract variables \\textit{p${}_{i}$} and \\textit{$\\rho$${}_{i}$}. Without loss of generality, each \\textit{p${}_{i}$} or \\textit{$\\rho$${}_{i}$} is the sum of a variable \\textit{x} and a variable \\textit{a${}_{i}$} or \\textit{b${}_{i}$}, respectively. Note that all \\textit{p${}_{i}$}, \\textit{$\\rho$${}_{i}$}, \\textit{a${}_{i}$}, and \\textit{b${}_{i}$} are abstract variables.\n\\begin{align*}\np_i=x+a_i\\\\\n\\rho_i=x+b_i\n\\end{align*}\nThe equation \\textit{f} $-$ 2\\textit{${}^{n}$}${}^{-1}$ = 0 becomes the equation \\textit{g}(\\textit{x}) = 0, whose degree is \\textit{m} $\\mathrm{\\ge}$ 5 if \\textit{n }$\\mathrm{\\ge}$ 5.\n\\[g\\left(x\\right)=\\pm x^m+C_1x^{m-1}+\\dots +C_{m-1}x+C_m-2^{n-1}=0\\] \nWhere the coefficients \\textit{C${}_{i}$} (s) are functions of the \\textit{a${}_{i}$} and \\textit{b${}_{i}$} (s). According to the Abel-Ruffini theorem \\cite{wikipedia:abel-ruffini}, the equation \\textit{g}(\\textit{x}) = 0 has no algebraic solution when \\textit{m} $\\mathrm{\\ge}$ 5. Thus, the abstract variables \\textit{p${}_{i}$} and \\textit{$\\rho$${}_{i}$} cannot be eliminated entirely from \\textit{g}(\\textit{x})=0, which implies that there is no specification of U-gate inference \\textit{P}(\\textit{X}${}_{1}$x\\textit{X}${}_{2}$x{\\dots}x\\textit{X${}_{n}$}) such that the diagnostic condition is satisfied.\n\nIt is concluded that there is no nonlinear X-D network satisfying the diagnostic condition, but a new question is raised: does there exist a general linear X-D network satisfying the diagnostic condition? 
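Returning briefly to the U-gate argument, the degree claim can be illustrated symbolically. The following sketch is hypothetical and only checks the sample expression \\textit{Expr} above, not the full function \\textit{f}:\n\\begin{verbatim}\n# Hypothetical symbolic check: with p_i = x + a_i and rho_i = x + b_i,\n# the sample expression p1*p2*p3*(1 - rho4)*(1 - rho5) has degree 5 in x.\nfrom sympy import symbols, expand, Poly\n\nx, a1, a2, a3, b4, b5 = symbols('x a1 a2 a3 b4 b5')\np1, p2, p3 = x + a1, x + a2, x + a3\nrho4, rho5 = x + b4, x + b5\n\nexpr = p1 * p2 * p3 * (1 - rho4) * (1 - rho5)\nprint(Poly(expand(expr), x).degree())   # prints 5, so deg(g) >= 5 for n >= 5\n\\end{verbatim}\n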
The general linear X-D network in question is called the GL-D network, and the SIGMA-D network is a specific case of it. The GL-gate probability must be a linear combination of the weights.\n\\[P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\right)=C+\\sum^n_{i=1}{{\\alpha }_iw_i}+\\sum^n_{i=1}{{\\beta }_id_i}\\] \nWhere \\textit{C} is an arbitrary constant.\n\nThe GL-gate inference is singular if \\textit{$\\alpha$${}_{i}$} and \\textit{$\\beta$${}_{i}$} are functions of only \\textit{X${}_{i}$} as follows:\n\\[P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\right)=C+\\sum^n_{i=1}{h_i\\left(X_i\\right)w_i}+\\sum^n_{i=1}{g_i\\left(X_i\\right)d_i}\\] \nThe functions \\textit{h${}_{i}$} and \\textit{g${}_{i}$} are not relevant to \\textit{A${}_{i}$} because the final formula of GL-gate inference is only relevant to the \\textit{X${}_{i}$} (s) and the weights. Because the GL-D network is a pattern, we only survey the singular GL-gate. The mentioned GL-gate is singular by default, and it depends on how the functions \\textit{h${}_{i}$} and \\textit{g${}_{i}$} are defined. The arrangement sum with regard to GL-gate is:\n\\begin{align*}\ns\\left(\\mathrm{\\Omega }\\right)&=\\sum_a{\\left(C+\\sum^n_{i=1}{h_i\\left(X_i\\right)w_i}+\\sum^n_{i=1}{g_i\\left(X_i\\right)d_i}\\right)}\\\\\n&=2^nC+2^{n-1}\\sum^n_{i=1}{\\left(h_i\\left(X_i=1\\right)+h_i\\left(X_i=0\\right)\\right)w_i}\\\\\n&+2^{n-1}\\sum^n_{i=1}{\\left(g_i\\left(X_i=1\\right)+g_i\\left(X_i=0\\right)\\right)d_i}\n\\end{align*}\nSuppose \\textit{h${}_{i}$} and \\textit{g${}_{i}$} are probability mass functions with regard to \\textit{X${}_{i}$}. For all \\textit{i}, we have:\n\\begin{align*}\n&0\\le h_i\\left(X_i\\right)\\le 1\\\\\n&0\\le g_i\\left(X_i\\right)\\le 1\\\\\n&h_i\\left(X_i=1\\right)+h_i\\left(X_i=0\\right)=1\\\\\n&g_i\\left(X_i=1\\right)+g_i\\left(X_i=0\\right)=1\n\\end{align*}\nThe arrangement sum becomes:\n\\[s\\left(\\mathrm{\\Omega }\\right)=2^nC+2^{n-1}\\sum^n_{i=1}{\\left(w_i+d_i\\right)}\\] \nThe GL-D network satisfies the diagnostic condition if\n\\begin{align*}\n&s\\left(\\mathrm{\\Omega }\\right)=2^nC+2^{n-1}\\sum^n_{i=1}{\\left(w_i+d_i\\right)}=2^{n-1}\\\\\n&\\Rightarrow 2C+\\sum^n_{i=1}{\\left(w_i+d_i\\right)}=1\n\\end{align*}\nSuppose the set of \\textit{X${}_{i}$} (s) is complete.\n\\[\\sum^n_{i=1}{\\left(w_i+d_i\\right)}=1\\] \nThis implies \\textit{C}=0. In short, formula~\\ref{formula:singular-glgate-inference} specifies the singular GL-gate inference such that the GL-D network satisfies the diagnostic condition.\n\n\\begin{equation}\n\\begin{split}\n&P\\left(X_1\\mathrm{x}X_2\\mathrm{x}\\dots \\mathrm{x}X_n\\right)=\\sum^n_{i=1}{h_i\\left(X_i\\right)w_i}+\\sum^n_{i=1}{g_i\\left(X_i\\right)d_i}\\\\\n&\\mathrm{Where}\\ h_i\\ \\mathrm{and}\\ g_i\\ \\mathrm{are}\\ \\mathrm{probability}\\ \\mathrm{mass}\\ \\mathrm{functions}\\\\\n&\\mathrm{and}\\ \\mathrm{the}\\ \\mathrm{set}\\ \\mathrm{of}\\ X_i\\ \\mathrm{(s)}\\ \\mathrm{is}\\ \\mathrm{complete.}\\\\\n&\\sum^n_{i=1}{\\left(w_i+d_i\\right)}=1\n\\end{split}\n\\label{formula:singular-glgate-inference}\n\\end{equation} \n%Formula 4.10. Singular GL-gate inference\n\n\\noindent Functions \\textit{h${}_{i}$}(\\textit{X${}_{i}$}) and \\textit{g${}_{i}$}(\\textit{X${}_{i}$}) are always linear due to \\textit{X${}_{i}$${}^{m}$}=\\textit{X${}_{i}$} for all \\textit{m}$\\mathrm{\\ge}$1 when \\textit{X${}_{i}$} is binary. 
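A brute-force check of this arrangement sum is straightforward. The following sketch is hypothetical, with randomly chosen probability mass functions and weights normalized so that the completeness condition holds:\n\\begin{verbatim}\n# Hypothetical check: for a singular GL-gate with C = 0, pmfs h_i, g_i and\n# sum_i (w_i + d_i) = 1, the arrangement sum s(Omega) equals 2^(n-1).\nimport random\nfrom itertools import product\n\nn = 4\nh1 = [random.random() for _ in range(n)]   # h_i(X_i=1); h_i(0) = 1 - h_i(1)\ng1 = [random.random() for _ in range(n)]   # g_i(X_i=1); g_i(0) = 1 - g_i(1)\nraw = [random.random() for _ in range(2 * n)]\ntot = sum(raw)\nw = [r / tot for r in raw[:n]]             # weights normalized so that\nd = [r / tot for r in raw[n:]]             # sum_i (w_i + d_i) = 1\n\ndef gl_gate(xs):\n    hw = sum((h1[i] if x else 1 - h1[i]) * w[i] for i, x in enumerate(xs))\n    gd = sum((g1[i] if x else 1 - g1[i]) * d[i] for i, x in enumerate(xs))\n    return hw + gd\n\ns_omega = sum(gl_gate(xs) for xs in product([0, 1], repeat=n))\nprint(s_omega, 2 ** (n - 1))               # both equal 8 (up to rounding)\n\\end{verbatim}\n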
It is easy to infer that the SIGMA-D network is a GL-D network with the following definition of functions \\textit{h${}_{i}$} and \\textit{g${}_{i}$}.\n\\[h_i\\left(X_i\\right)=1-g_i\\left(X_i\\right)=X_i,\\forall i\\] \nAccording to the authors of \\cite{millan:bayesiandiagnostic}, a hypothesis can have multiple evidences, as seen in figure~\\ref{figure:med-network}. This is the \\textit{multi-evidence diagnostic relationship}, as opposed to the aforementioned multi-hypothesis diagnostic relationship.\n\n\\begin{figure}\n\\centering\n\\includegraphics{MEDNetwork.png}\n\\caption{Diagnostic relationship with multiple evidences (M-E-D network)}\n\\label{figure:med-network}\n\\end{figure}\n%Figure 4.3. Diagnostic relationship with multiple evidences (M-E-D network)\n\nFigure~\\ref{figure:med-network} depicts the multi-evidence diagnostic network, called the M-E-D network, in which there are \\textit{m} evidences \\textit{D}${}_{1}$, \\textit{D}${}_{2}$,{\\dots}, \\textit{D${}_{m}$} and one hypothesis \\textit{Y}. Note that \\textit{Y} has uniform distribution.\n\nIn the simplest case, where all evidences are binary, the joint probability of the M-E-D network is:\n\\begin{align*}\nP\\left(Y,D_1,D_2,\\dots ,D_m\\right)&=P\\left(Y\\right)\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}\\\\\n&=P\\left(Y\\right)P\\left(D_1,D_2,\\dots ,D_m\\mathrel{\\left|\\vphantom{D_1,D_2,\\dots ,D_m Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\n\\end{align*}\nThe product $\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}$ is the likelihood function:\n\\[P\\left(D_1,D_2,\\dots ,D_m\\mathrel{\\left|\\vphantom{D_1,D_2,\\dots ,D_m Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)=\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}\\] \nThe posterior probability \\textit{P}(\\textit{Y} {\\textbar} \\textit{D}${}_{1}$, \\textit{D}${}_{2}$,{\\dots}, \\textit{D${}_{m}$}) given uniform distribution of \\textit{Y} is:\n\\begin{align*}\n&P\\left(Y\\mathrel{\\left|\\vphantom{Y D_1,D_2,\\dots ,D_m}\\right.\\kern-\\nulldelimiterspace}D_1,D_2,\\dots ,D_m\\right)\\\\\n&=\\frac{P\\left(Y,D_1,D_2,\\dots ,D_m\\right)}{P\\left(Y=1,D_1,D_2,\\dots ,D_m\\right)+P\\left(Y=0,D_1,D_2,\\dots ,D_m\\right)}\\\\\n&=\\frac{1}{\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)}+\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)}}*P\\left(D_1,D_2,\\dots ,D_m\\mathrel{\\left|\\vphantom{D_1,D_2,\\dots ,D_m Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\n\\end{align*}\nThe possible transformation coefficient is:\n\\[\\frac{1}{k}=\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)}+\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)}\\] \nThe M-E-D network will satisfy the diagnostic condition if \\textit{k} = 1 because all hypotheses and evidences are binary, which requires that the following equation, specified by formula~\\ref{formula:diagnostic-cond-med-network}, has 2\\textit{m} real roots \\textit{P}(\\textit{D${}_{j}${\\textbar}Y}) for all \\textit{m} $\\mathrm{\\ge}$ 2.\n\n\\begin{equation}\n\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)}+\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j 
Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)}=1\n\\label{formula:diagnostic-cond-med-network}\n\\end{equation}\n%Formula 4.11. Diagnostic condition equation of binary M-E-D network \n\nEquation~\\ref{formula:diagnostic-cond-med-network} has no real root given \\textit{m}=2, according to the following proof. Suppose equation~\\ref{formula:diagnostic-cond-med-network} has 4 real roots as follows:\n\\begin{align*}\n&a_1=P\\left(D_1=1\\mathrel{\\left|\\vphantom{D_1=1 Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)\\\\\n&a_2=P\\left(D_2=1\\mathrel{\\left|\\vphantom{D_2=1 Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)\\\\\n&b_1=P\\left(D_1=1\\mathrel{\\left|\\vphantom{D_1=1 Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)\\\\\n&b_2=P\\left(D_2=1\\mathrel{\\left|\\vphantom{D_2=1 Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)\n\\end{align*}\nFrom equation~\\ref{formula:diagnostic-cond-med-network}, it holds:\n\\begin{align*}\n&\\left\\{ \\begin{array}{l}\na_1a_2+b_1b_2=1 \\\\ \na_1\\left(1-a_2\\right)+b_1b_2=1 \\\\ \n\\left(1-a_1\\right)a_2+b_1b_2=1 \\\\ \na_1a_2+b_1\\left(1-b_2\\right)=1 \\\\ \na_1a_2+\\left(1-b_1\\right)b_2=1 \\end{array}\n\\right.\\Rightarrow \\left\\{ \\begin{array}{l}\na_1=a_2 \\\\ \nb_1=b_2 \\\\ \na^2_1+b^2_1=1 \\\\ \na_1+2b^2_1=2 \\\\ \nb_1+2a^2_1=2 \\end{array}\n\\right.\\\\\n&\\Leftrightarrow \\left\\{ \\begin{array}{l}\na_1=a_2=0 \\\\ \nb_1=b_2 \\\\ \na^2_1+b^2_1=1 \\\\ \nb_1=2 \\end{array}\n\\right.\\ \\mathrm{or}\\ \\left\\{ \\begin{array}{l}\na_1=a_2=0.5 \\\\ \nb_1=b_2 \\\\ \na^2_1+b^2_1=1 \\\\ \nb_1=1.5 \\end{array}\n\\right.\n\\end{align*}\nThe final equation leads to a contradiction (\\textit{b}${}_{1}$=2 or \\textit{b}${}_{1}$=1.5), and so it is impossible to apply the sufficient diagnostic proposition to the M-E-D network. Such proposition is only used for the one-evidence network. Moreover, X-gate inference absorbs many sources and then produces one targeted result, whereas the M-E-D network essentially splits one source into many results. It is impossible to model the M-E-D network by X-gates. The potential solution for this problem is to group the many evidences \\textit{D}${}_{1}$, \\textit{D}${}_{2}$,{\\dots}, \\textit{D${}_{m}$} into one representative evidence \\textit{D}, which in turn is dependent on the hypothesis \\textit{Y}, but this solution will be inaccurate in specifying conditional probabilities because the directions of dependencies become inconsistent (relationships from \\textit{D${}_{j}$} to \\textit{D} and from \\textit{Y} to \\textit{D}), unless all \\textit{D${}_{j}$} (s) are removed and \\textit{D} becomes a vector. However, the evidence vector does not simplify this hard problem; it only changes the current problem into a new one.\n\nAnother solution is to reverse the direction of the relationship, in which the hypothesis is dependent on evidences so as to take advantage of X-gate inference as usual. However, the reversion method violates the viewpoint in this research, where the diagnostic relationship must be from hypothesis to evidence. In other words, we should change the viewpoint.\n\nAnother solution is based on a so-called \\textit{partial diagnostic condition}, a looser case of the diagnostic condition for the M-E-D network, which is defined as follows:\n\\[P\\left(Y\\mathrel{\\left|\\vphantom{Y D_j}\\right.\\kern-\\nulldelimiterspace}D_j\\right)=kP\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\\] \nWhere \\textit{k} is constant with regard to \\textit{D${}_{j}$}. 
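Parenthetically, the impossibility result for \\textit{m} = 2 proved above can also be seen numerically. The following Python sketch is hypothetical; it scans a coarse grid of candidate conditional probabilities and shows that equation~\\ref{formula:diagnostic-cond-med-network} can never hold for all four evidence configurations at once:\n\\begin{verbatim}\n# Hypothetical brute-force check: for m = 2 binary evidences there is no\n# choice of a_j = P(D_j=1|Y=1), b_j = P(D_j=1|Y=0) in [0, 1] making\n# prod_j P(D_j|Y=1) + prod_j P(D_j|Y=0) = 1 for every configuration.\nfrom itertools import product\n\ndef worst_residual(a1, a2, b1, b2):\n    worst = 0.0\n    for d1, d2 in product([0, 1], repeat=2):\n        py1 = (a1 if d1 else 1 - a1) * (a2 if d2 else 1 - a2)\n        py0 = (b1 if d1 else 1 - b1) * (b2 if d2 else 1 - b2)\n        worst = max(worst, abs(py1 + py0 - 1.0))\n    return worst\n\ngrid = [i / 20 for i in range(21)]\nbest = min(worst_residual(a1, a2, b1, b2)\n           for a1 in grid for a2 in grid for b1 in grid for b2 in grid)\nprint(best)   # 0.5: the four left-hand sides always total 2, never 4\n\\end{verbatim}\n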
For the M-E-D network, the joint probability is:\n\\[P\\left(Y,D_1,D_2,\\dots ,D_m\\right)=P\\left(Y\\right)\\prod^m_{j=1}{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}\\] \nThe M-E-D network satisfies the partial diagnostic condition. In fact, given that all variables are binary, we have:\n\\begin{align*}\n&P\\left(Y\\mathrel{\\left|\\vphantom{Y D_j}\\right.\\kern-\\nulldelimiterspace}D_j\\right)=\\frac{\\sum_{\\mathrm{\\Psi }\\backslash \\left\\{Y,D_j\\right\\}}{P\\left(Y,D_1,D_2,\\dots ,D_m\\right)}}{\\sum_{\\mathrm{\\Psi }\\backslash \\left\\{D_j\\right\\}}{P\\left(Y,D_1,D_2,\\dots ,D_m\\right)}}\\\\\n&\\left(\\mathrm{Let}\\ \\Psi=\\left\\{Y,D_1,D_2,\\dots,D_m\\right\\}\\right)\\\\\n&=\\frac{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\\prod^m_{k=1,k\\neq j}{\\left(\\sum_{D_k}{P\\left(D_k\\mathrel{\\left|\\vphantom{D_k Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}\\right)}}{\\prod^m_{k=1,k\\neq j}{\\left(\\sum_{D_k}{P\\left(D_k\\mathrel{\\left|\\vphantom{D_k Y=1}\\right.\\kern-\\nulldelimiterspace}Y=1\\right)}\\right)}+\\prod^m_{k=1,k\\neq j}{\\left(\\sum_{D_k}{P\\left(D_k\\mathrel{\\left|\\vphantom{D_k Y=0}\\right.\\kern-\\nulldelimiterspace}Y=0\\right)}\\right)}}\\\\\n&\\left(\\mathrm{Due}\\ \\mathrm{to}\\ \\mathrm{uniform}\\ \\mathrm{distribution}\\ \\mathrm{of}\\ Y\\right)\\\\\n&=\\frac{P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\\prod^m_{k=1,k\\neq j}{1}}{\\prod^m_{k=1,k\\neq j}{1}+\\prod^m_{k=1,k\\neq j}{1}}=\\frac{1}{2}P\\left(D_j\\mathrel{\\left|\\vphantom{D_j Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)\\\\\n&\\left(\\mathrm{Due\\ to}\\ \\sum_{D_k}{P\\left(D_k\\mathrel{\\left|\\vphantom{D_k Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)}=P\\left(D_k=0\\mathrel{\\left|\\vphantom{D_k=0 Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)+P\\left(D_k=1\\mathrel{\\left|\\vphantom{D_k=1 Y}\\right.\\kern-\\nulldelimiterspace}Y\\right)=1\\right)\n\\end{align*}\nThe partial diagnostic condition expresses a different viewpoint. It is not an optimal solution because, for example, we cannot test a disease based on only one symptom while ignoring other obvious symptoms. The equality \\textit{P}(\\textit{Y{\\textbar}D${}_{j}$}) = 0.5\\textit{P}(\\textit{D${}_{j}${\\textbar}Y}) indicates that the accuracy is halved. However, a Bayesian network provides an inference mechanism based on personal belief; it is subjective. You can use the partial diagnostic condition if you think that such condition is appropriate for your application.\n\nIf we are successful in specifying the conditional probabilities of the M-E-D network, it is possible to define an extended network which is constituted of \\textit{n} hypotheses \\textit{X}${}_{1}$, \\textit{X}${}_{2}$,{\\dots}, \\textit{X${}_{n}$} and \\textit{m} evidences \\textit{D}${}_{1}$, \\textit{D}${}_{2}$,{\\dots}, \\textit{D${}_{m}$}. Such an extended network represents the \\textit{multi-hypothesis multi-evidence diagnostic relationship}, called the M-HE-D network. Figure~\\ref{figure:mhed-network} depicts the M-HE-D network.\n\n\\begin{figure}\n\\centering\n\\includegraphics{MHEDNetwork.png}\n\\caption{M-HE-D network}\n\\label{figure:mhed-network}\n\\end{figure}\n%Figure 5.1. M-HE-D network\n\nThe M-HE-D network is the most general case of a diagnostic network, which was mentioned in \\cite{millan:bayesiandiagnostic}. 
We can construct any large diagnostic BN from M-HE-D networks, and so the research is still open.\n\n\\section{Conclusion}\nIn short, relationship conversion determines conditional probabilities based on logic gates that adhere to the semantics of relationships. The weak point of logic gates is the requirement that all variables be binary. For example, in a learning context, it is inconvenient for an expert to create an assessment BN with study exercises (evidences) whose marks are only 0 and 1. In order to lessen the impact of such a weak point, I use numeric evidence to extend the capacity of a simple Bayesian network. However, the combination of binary hypothesis and numeric evidence leads to errors or biases in inference. For example, a student may get the maximum grade for an exercise while the built-in inference concludes that she/he has not fully mastered the associated learning concept (hypothesis). Therefore, I propose the sufficient diagnostic proposition so as to confirm that numeric evidence is adequate for complicated inference tasks in a BN. The probabilistic reasoning based on evidence is always accurate. Applications of the research can go beyond the learning context whenever probabilistic deduction subject to the constraints of semantic relationships is required. A large BN can be constituted of many simple BNs. Inference in a large BN is a hard problem, and there are many optimized algorithms for solving it. In the future, I will research effective inference methods for the special BN that is constituted of the X-gate BNs mentioned in this research, because X-gate BNs have precise and useful features of which we should take advantage. For instance, their CPTs are simple in some cases, and the meanings of their relationships are mandatory in many applications. Moreover, I will try my best to research in depth the M-E-D network and the M-HE-D network, whose problems I cannot solve completely now.\n\nThe two main documents I referred to in this research are the book ``Learning Bayesian Networks'' \\cite{neapolitan:bn} by the author Richard E. Neapolitan and the article ``A Bayesian Diagnostic Algorithm for Student Modeling and its Evaluation'' \\cite{millan:bayesiandiagnostic} by the authors Eva Mill\u00e1n and Jos\u00e9 Luis P\u00e9rez-de-la-Cruz. Especially, the SIGMA-gate inference is based on and derived from the work of the authors Eva Mill\u00e1n and Jos\u00e9 Luis P\u00e9rez-de-la-Cruz. This research originated from my PhD research ``A User Modeling System for Adaptive Learning'' \\cite{nguyen:zebra}. Other references relevant to user modeling, the overlay model, and Bayesian networks are \\cite{froschl:usermodeling}, \\cite{debra:aha}, \\cite{murphy:bn}, and \\cite{heckerman:bn}. 
Please consult these references.\n\n\\bibliographystyle{abbrv}\n\\bibliography{RelationshipConversionBN}\n\n\\end{document}\n\n", "meta": {"hexsha": "374fd1efb7eca22771d6d81e853a135bea4dc049", "size": 147655, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "articles/RelationshipConversionBN/RelationshipConversionBN.tex", "max_stars_repo_name": "ngphloc/mum", "max_stars_repo_head_hexsha": "e2b2682d9e61cd090b4e6dd0ffbd054ea0b64fbf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "articles/RelationshipConversionBN/RelationshipConversionBN.tex", "max_issues_repo_name": "ngphloc/mum", "max_issues_repo_head_hexsha": "e2b2682d9e61cd090b4e6dd0ffbd054ea0b64fbf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "articles/RelationshipConversionBN/RelationshipConversionBN.tex", "max_forks_repo_name": "ngphloc/mum", "max_forks_repo_head_hexsha": "e2b2682d9e61cd090b4e6dd0ffbd054ea0b64fbf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.567583047, "max_line_length": 1804, "alphanum_fraction": 0.7000914293, "num_tokens": 56640, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324848629215, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.5883884977442656}}
{"text": "\\section{Methodology}\\label{sec:metodo}\n\n\\subsection{Forward problem}\n\nLet $\\mathbf{d}^{o}$ be the observed data vector, whose $i$th element $d^{o}_{i}$, $i = 1, \\dots, N$, is the total-field anomaly produced by a 3-D source (Fig. \\ref{fig:obs}a) at the point $(x_{i}, y_{i}, z_{i})$ of a Cartesian coordinate system with $x$-, $y$- and $z$- axes pointing to north, east and down, respectively. We assume that the direction of the total magnetization vector of the source is constant and known. We approximate the volume of the source by a set of $L$ vertically juxtaposed 3-D prisms (Fig. \\ref{fig:obs}b) by following the same approach of \\cite{oliveirajr-etal2011} and \\cite{oliveirajr-barbosa2013}. The depth to the top of the shallowest prism is defined by $z_{0}$, and $m_{0}$ is the constant total-magnetization intensity of all prisms. The horizontal cross-section of each prism (Fig. \\ref{fig:prism_parameters}) \nis described by a polygon with a fixed number $V$ of vertices equi-angularly spaced from $0^{\\circ}$ to $360^{\\circ}$, which are described in polar coordinates referred to an internal origin $O^{k}$. The radii of the vertices ($r^{k}_{j}$, $j=1,\\dots , V$, $k=1,\\dots ,L$), the horizontal coordinates ($x_{0}^{k}$ and $y_{0}^{k}$, $k=1,\\dots ,L$) of the origins $O^{k}$, $k=1,\\dots ,L$, and the thickness $dz$ of the $L$ vertically stacked prisms (Fig. \\ref{fig:obs}b) are arranged in a $M \\times 1$ parameter vector $\\mathbf{p}$, $M = L (V + 2) + 1$, given by\n\\begin{equation}\n\\mathbf{p} = \\left[ \\begin{array}{@{}*{12}{c}@{}}\n{\\mathbf{r}^{1}}^{\\mathsf{T}} & x_{0}^{1} & y_{0}^{1} & \\dots & {\\mathbf{r}^{L}}^{\\mathsf{T}} & x_{0}^{L} & y_{0}^{L} & dz \\\\\n\\end{array} \\right]^{\\mathsf{T}} \\: ,\n\\label{eq:p-vector}\n\\end{equation}\nwhere ``$^{\\mathsf{T}}$\" denotes transposition and $\\mathbf{r}^{k}$ is a $V \\times 1$ vector containing the radii $r^{k}_{j}$ \nof the $k$th prism.\nLet $\\mathbf{d} (\\mathbf{p}, m_{0}, z_{0})$ be the predicted data vector, \nwhose $i$th element \n\\begin{equation}\nd_{i} (\\mathbf{p}, m_{0}, z_{0}) \\equiv m_{0} \\: \\sum\\limits_{k=1}^{L} f_{i}(\\mathbf{r}^{k}, x_{0}^{k}, y_{0}^{k}, dz, z_{0}), \\quad i = 1, \\dots, N \\: ,\n\\label{eq:predicted-data-i}\n\\end{equation}\nis the total-field anomaly produced by the ensemble of $L$ prisms at the $i$th observation point ($x_{i}, y_{i}, z_{i}$). In Eq. \\ref{eq:predicted-data-i}, $f_{i}(\\mathbf{r}^{k}, x_{0}^{k}, y_{0}^{k}, dz, z_{0})$ is the total-field anomaly produced, at the observation point ($x_{i}, y_{i}, z_{i}$), by the $k$th prism with unitary magnetization intensity and depth to the top $z_{1}^{k} = z_{0} + (k-1)dz$. We calculate $d_{i} (\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:predicted-data-i}) by using the Python package Fatiando a Terra \\cite[]{uieda-etal2013}, which implements the formulas proposed by \\cite{plouff1976}.\n\n\\subsection{Inverse problem for estimating $\\mathbf{p}$}\n\nGiven the depth to the top of shallowest prism $z_{0}$ and the total-magnetization\nintensity $m_{0}$ of all prisms, we solve a constrained nonlinear problem to estimate the parameter vector $\\mathbf{p}$ (Eq. 
\\ref{eq:p-vector}) by minimizing \nthe goal function\n\\begin{equation}\n\\Gamma (\\mathbf{p}, m_{0}, z_{0}) = \\phi (\\mathbf{p}, m_{0}, z_{0}) + \n\\sum\\limits^{7}_{\\ell =1} \\alpha_{\\ell} \\, \\varphi_{\\ell}(\\mathbf{p}) \\: ,\n\\label{eq:gamma}\n\\end{equation}\nsubject to the inequality constraints\n\\begin{equation}\np_{l}^{min} < p_{l} < p_{l}^{max}, \\quad l = 1, \\dots, M \\: .\n\\label{eq:inequality-constraint}\n\\end{equation}\nThe first term on the right-hand side of Eq. \\ref{eq:gamma} is the data-misfit \nfunction given by\n\\begin{equation}\\label{eq:misfit}\n\\phi (\\mathbf{p}, m_{0}, z_{0}) = \\frac{1}{N} \\| \\mathbf{d}^{o} - \n\\mathbf{d}(\\mathbf{p}, m_{0}, z_{0}) \\|_{2}^{2} \\: ,\n\\end{equation}\nwhich represents the normalized squared Euclidean norm of the difference between the \nobserved $\\mathbf{d}^{o}$ and predicted $\\mathbf{d}(\\mathbf{p}, m_{0}, z_{0})$ data\nvector, whose $i$th element is the predicted data $d_{i} (\\mathbf{p}, m_{0}, z_{0})$ \n(Eq. \\ref{eq:predicted-data-i}).\nThe second term on the right-hand side of Eq. \\ref{eq:gamma} represents the \nweighted sum of the seven constraint functions $\\varphi_{\\ell}(\\mathbf{p}), \\: \n\\ell = 1, \\dots, 7$, described in the following section \\textit{Constraint functions}.\nIn Eq. \\ref{eq:gamma}, $\\alpha_{\\ell}$ is a positive number representing \nthe weight of the $\\ell$th constraint function $\\varphi_{\\ell}(\\mathbf{p})$.\nIn the inequality constraints (Eq. \\ref{eq:inequality-constraint}), \n$p_{l}^{min}$ and $p_{l}^{max}$ are, respectively, the lower and upper limits for \nthe $l$th element $p_{l}$ of the parameter vector $\\mathbf{p}$ \n(Eq. \\ref{eq:p-vector}). \nThese limits are defined by the interpreter based on the knowledge about the \nhorizontal and total depth extensions of the magnetic source. \nDetails about how we set the weights $\\alpha_{\\ell}$ and the limits in the\ninequality constraints are presented in the section \n\\textit{Computational procedures} later in this article.\n\nTo solve our nonlinear inverse problem, we use a gradient-based method and,\nconsequently, we need to define an initial approximation $\\hat{\\mathbf{p}}_{(0)}$ \nfor the parameter vector $ \\mathbf{p} $ (Eq. \\ref{eq:p-vector}). \nThen our method iteratively updates this initial approximation to obtain \nan estimated parameter vector $\\hat{\\mathbf{p}}_{(f)}$ minimizing the goal function\n$\\Gamma (\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:gamma}),\nfor given values of total-magnetization intensity $m_{0}$ and depth to the top\nof the shallowest prism $z_{0}$. \nHere, we use the superscript hat ``~$\\hat{}$~\" to denote an initial approximation or\nan estimated parameter vector.\n\nSince we are using a gradient-based method, we need to define the \ngradient vector $\\boldsymbol{\\nabla}\\Gamma(\\mathbf{p})$ and Hessian matrix \n$\\mathbf{H}(\\mathbf{p})$ of the goal function $\\Gamma(\\mathbf{p}, m_{0}, z_{0})$ \n(Eq. \\ref{eq:gamma}), both of them computed with respect to the parameter vector\n$\\mathbf{p}$. That is why we define them by omitting the parameters $m_{0}$ and\n$z_{0}$. 
They are given by:\n\\begin{equation}\\label{eq:gamma_gradient}\n\\boldsymbol{\\nabla}\\Gamma (\\mathbf{p}) = \\boldsymbol{\\nabla}\\phi (\\mathbf{p}) + \n\\sum\\limits^{7}_{\\ell =1} \\alpha_{\\ell} \\, \\boldsymbol{\\nabla}\\varphi_{\\ell}(\\mathbf{p})\n\\end{equation}\nand\n\\begin{equation}\\label{eq:gamma_hessian}\n\\mathbf{H} (\\mathbf{p}) = \\mathbf{H}_\\phi (\\mathbf{p}) + \\sum\\limits^{7}_{\\ell =1} \\alpha_{\\ell} \\, \\mathbf{H}_\\ell \\: ,\n\\end{equation}\nwhere the gradient vector and the Hessian matrix of the misfit function \n$\\phi(\\mathbf{p})$ (Eq. \\ref{eq:gamma}) are respectively given by:\n\\begin{equation}\\label{eq:phi_gradient}\n\\boldsymbol{\\nabla} \\phi(\\mathbf{p}) = - \\frac{2}{N}\\mathbf{G}(\\mathbf{p})^{\\mathsf{T}}[\\mathbf{d}^o - \n\\mathbf{d}(\\mathbf{p}, m_{0}, z_{0})]\n\\end{equation} \nand \n\\begin{equation}\\label{eq:phi_hessian}\n\\mathbf{H}_{\\phi}(\\mathbf{p}) = \\frac{2}{N}\\mathbf{G}(\\mathbf{p})^{\\mathsf{T}}\\mathbf{G}(\\mathbf{p}) \\: .\n\\end{equation}\nIn Eqs \\ref{eq:gamma_gradient} and \\ref{eq:gamma_hessian}, the terms \n$\\boldsymbol{\\nabla} \\varphi_{\\ell}(\\mathbf{p})$ and $\\mathbf{H}_{\\ell}$, \n$\\ell = 1, \\dots, 7$, are the gradient vectors and Hessian matrices of the constraint functions, respectively. In Eqs \\ref{eq:phi_gradient} and \\ref{eq:phi_hessian},\n$\\mathbf{G}(\\mathbf{p})$ is an $N \\times M$ matrix whose element $il$ is the derivative of the predicted data $d_{i}(\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:predicted-data-i}) with respect to the $l$th element $p_{l}$ of \nthe parameter vector $\\mathbf{p}$ (Eq. \\ref{eq:p-vector}), $l = 1, \\dots, M$.\nDetails about the constraint functions $\\varphi_\\ell(\\mathbf{p})$, $\\ell = 1, \\dots, 7$, as well as the numerical procedure to solve this nonlinear inverse problem are given in the following sections.\n\n\\subsection{Constraint functions}\\label{sec:constraints}\n\nTo explain the constraint functions $\\varphi_{\\ell}(\\mathbf{p})$ (Eq. \\ref{eq:gamma}), $\\ell = 1, \\dots, 7$, used here to obtain stable solutions and introduce prior information about the magnetic source, we have organized them into the following three groups.\n\n\\subsubsection{Smoothness constraints}\n\nThis group is formed by variations of the first-order Tikhonov regularization \\cite[][ p. 103]{aster-etal2019} that imposes smoothness on the radii $r_{j}^{k}$ and the Cartesian coordinates $x_{0}^{k}$ and $y_{0}^{k}$ of the origin $O^{k}$, $j = 1, \\dots, V$, $k = 1, \\dots, L$, defining the horizontal section of each prism (Fig.\\ref{fig:obs}b).\nThey were proposed by \\cite{oliveirajr-etal2011} and \\cite{oliveirajr-barbosa2013} and play a very important role in introducing prior information about the shape of the source. \n\nThe first constraint of this group is the \\textit{smoothness constraint on the adjacent radii defining the horizontal section of each prism}. This constraint imposes that adjacent radii $r_{j}^{k}$ and $r_{j+1}^{k}$ within each prism must be close to each other. It forces the estimated prism to be approximately cylindrical. 
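Before stating this constraint formally, a small numerical sketch may help. The following hypothetical NumPy code (dimensions chosen arbitrarily; it is an illustration, not the paper's implementation) assembles the cyclic first-difference operator acting on the radii and checks it against the explicit sum of squared differences; it mirrors the matrix $\\mathbf{R}_{1}$ defined next.\n\\begin{verbatim}\n# Hypothetical NumPy sketch of the smoothness constraint on adjacent radii.\nimport numpy as np\n\nV, L = 4, 2                        # vertices per polygon, number of prisms\nM = L * (V + 2) + 1                # length of the parameter vector p\n\nDT = np.roll(np.eye(V), 1, axis=1)           # upshift: (DT r)_j = r_{j+1}\nS1 = np.kron(np.eye(L), np.hstack([np.eye(V) - DT, np.zeros((V, 2))]))\nR1 = np.hstack([S1, np.zeros((L * V, 1))])   # extra zero column for dz\n\np = np.random.rand(M)              # radii, origins and dz stacked as in p\nphi1 = p @ R1.T @ R1 @ p\n\n# explicit sum of squared differences of adjacent radii, prism by prism\nexplicit = 0.0\nfor k in range(L):\n    r = p[k * (V + 2): k * (V + 2) + V]\n    explicit += sum((r[j] - r[(j + 1) % V]) ** 2 for j in range(V))\nprint(np.isclose(phi1, explicit))            # True\n\\end{verbatim}\n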
Mathematically, the constraint is given by \\begin{equation}\\label{eq:phi1}\n\\begin{split}\n\\varphi_{1}(\\mathbf{p}) &= \\sum\\limits^{L}_{k=1}\\left[\\left(r^{k}_{V}-r^{k}_{1}\\right)^2 + \\sum\\limits^{V-1}_{j=1}\\left(r^{k}_{j}-r^{k}_{j+1}\\right)^2\\right]\\\\\n &= \\mathbf{p}^{\\mathsf{T}} \\mathbf{R}^{\\mathsf{T}}_{1}\\mathbf{R}_{1} \\mathbf{p} \\quad ,\n\\end{split}\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathbf{R}_{1} = \n\\begin{bmatrix}\n\\mathbf{S}_{1} &\n\\mathbf{0}_{LV \\times 1} \\\\\n\\end{bmatrix}_{LV \\times M} \\quad ,\n\\label{eq:R1-matrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{S}_{1} = \n\\mathbf{I}_{L} \\otimes \n\\begin{bmatrix}\n\\left( \\mathbf{I}_{V} - \\mathbf{D}_{V}^\\mathsf{T} \\right) & \\mathbf{0}_{V \\times 2} \\\\\n\\end{bmatrix}_{V \\times (V+2)} \\quad ,\n\\label{eq:S1-matrix}\n\\end{equation}\n$ \\mathbf{0}_{LV \\times 1} $ is an $ LV \\times 1 $ vector with null elements,\n$\\mathbf{I}_{L}$ is the identity matrix of order $L$, ``$\\otimes$\" denotes the Kronecker product \\cite[][ p. 243]{horn_johnson1991}, $\\mathbf{0}_{V \\times 2}$ is a $V \\times 2$ matrix with null elements, \n$\\mathbf{I}_{V}$ is the identity matrix of order $V$ and $\\mathbf{D}_{V}^\\mathsf{T}$ is the upshift permutation matrix of order $V$ \\cite[][ p. 20]{golub-vanloan2013}. The gradient and Hessian of function $\\varphi_{1}(\\mathbf{p})$ (Eq. \\ref{eq:phi1}) are given by:\n\\begin{equation}\\label{eq:phi1_grad}\n\\boldsymbol{\\nabla}\\varphi_{1}(\\mathbf{p}) = 2 \\mathbf{R}^\\mathsf{T}_{1}\\mathbf{R}_{1}\\mathbf{p} \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi1_hessian}\n\\mathbf{H}_{1}(\\mathbf{p}) = 2\\mathbf{R}^\\mathsf{T}_{1}\\mathbf{R}_{1} \\quad .\n\\end{equation}\n\nThe second constraint of this group is the \\textit{smoothness constraint on the adjacent radii of the vertically adjacent prisms}, which imposes that adjacent radii $r_{j}^{k}$ and $r_{j}^{k+1}$ within vertically adjacent prisms must be close to each other. This constraint forces the shape of all prisms to be similar to each other\nand is given by\n\\begin{equation}\\label{eq:phi2}\n\\begin{split}\n\\varphi_{2}(\\mathbf{p}) &= \\sum\\limits^{L-1}_{k=1}\\left[\\sum\\limits^{V}_{j=1}\\left(r^{k+1}_{j}-r^{k}_{j}\\right)^2\\right] \\\\\n&= \\mathbf{p}^{\\mathsf{T}} \\mathbf{R}^{\\mathsf{T}}_{2}\\mathbf{R}_{2}\\mathbf{p}\n\\end{split} \\quad ,\n\\end{equation}\nwhere \n\\begin{equation}\n\\mathbf{R}_{2} = \n\\begin{bmatrix}\n\\mathbf{S}_{2} & \\mathbf{0}_{(L-1)V \\times 1} \\\\\n\\end{bmatrix}_{(L-1)V \\times M} \\quad ,\n\\label{eq:R2-matrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{S}_{2} =\n\\left( \n\\begin{bmatrix} \\mathbf{I}_{L-1} & \\mathbf{0}_{(L-1) \\times 1} \\end{bmatrix} -\n\\begin{bmatrix} \\mathbf{0}_{(L-1) \\times 1} & \\mathbf{I}_{L-1} \\end{bmatrix} \n\\right) \\otimes \n\\begin{bmatrix} \\mathbf{I}_{V} & \\mathbf{0}_{V \\times 2} \\end{bmatrix} \\quad ,\n\\label{eq:S2-matrix}\n\\end{equation}\n$\\mathbf{0}_{(L-1)V \\times 1}$ is an $(L-1)V \\times 1$ vector with null elements,\n$\\mathbf{0}_{(L-1) \\times 1}$ is an $(L-1) \\times 1$ vector with null elements and \n$\\mathbf{I}_{L-1}$ is the identity matrix of order $L-1$. The gradient and Hessian of function $\\varphi_{2}(\\mathbf{p})$ (Eq. 
\\ref{eq:phi2}) are given by:\n\\begin{equation}\\label{eq:phi2_grad}\n\\boldsymbol{\\nabla}\\varphi_{2}(\\mathbf{p}) = 2\\mathbf{R}^\\mathsf{T}_{2}\\mathbf{R}_{2}\\mathbf{p} \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi2_hessian}\n\\mathbf{H}_{2}(\\mathbf{p}) = 2\\mathbf{R}^\\mathsf{T}_{2}\\mathbf{R}_{2} \\quad .\n\\end{equation}\n\nThe last constraint of this group is the \\textit{smoothness constraint on the horizontal position of \nthe arbitrary origins of the vertically adjacent prisms}. This constraint imposes that the estimated horizontal \nCartesian coordinates $(x_{0}^{k}, y_{0}^{k})$ and $(x_{0}^{k+1}, y_{0}^{k+1})$ of the origins $O^{k}$ and $O^{k+1}$ \nof adjacent prisms must be close to each other. It forces the centers of the prisms to be vertically aligned. This constraint \nis given by\n\\begin{equation}\\label{eq:phi3}\n\\begin{split}\n\\varphi_{3}(\\mathbf{p}) &= \\sum\\limits^{L-1}_{k=1}\\left[\\left(x_{0}^{k+1} - x_{0}^{k}\\right)^2 + \\left(y_{0}^{k+1} - y_{0}^{k}\\right)^2 \\right] \\\\\n&= \\mathbf{p}^{\\mathsf{T}} \\mathbf{R}^{\\mathsf{T}}_{3}\\mathbf{R}_{3}\\mathbf{p}\n\\end{split} \\quad ,\n\\end{equation}\nwhere \n\\begin{equation}\n\\mathbf{R}_{3} = \n\\begin{bmatrix}\n\\mathbf{S}_{3} & \\mathbf{0}_{(L-1)2 \\times 1} \\\\\n\\end{bmatrix}_{(L-1)2 \\times M} \\quad ,\n\\label{eq:R3-matrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{S}_{3} =\n\\left( \n\\begin{bmatrix} \\mathbf{I}_{L-1} & \\mathbf{0}_{(L-1) \\times 1} \\end{bmatrix} -\n\\begin{bmatrix} \\mathbf{0}_{(L-1) \\times 1} & \\mathbf{I}_{L-1} \\end{bmatrix} \n\\right) \\otimes \n\\begin{bmatrix} \\mathbf{0}_{2 \\times V} & \\mathbf{I}_{2} \\end{bmatrix} \\quad ,\n\\label{eq:S3-matrix}\n\\end{equation}\n$\\mathbf{0}_{(L-1)2 \\times 1}$ is an $(L-1)2 \\times 1$ vector with null elements,\n$\\mathbf{0}_{2 \\times V}$ is a $2 \\times V$ matrix with null elements and \n$\\mathbf{I}_{2}$ is the identity matrix of order $2$. The gradient and Hessian of function $\\varphi_{3}(\\mathbf{p})$ (Eq. \\ref{eq:phi3}) are given by:\n\\begin{equation}\\label{eq:phi3_grad}\n\\boldsymbol{\\nabla}\\varphi_{3}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{3}\\mathbf{R}_{3}\\mathbf{p} \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi3_hessian}\n\\mathbf{H}_{3}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{3}\\mathbf{R}_{3} \\quad .\n\\end{equation}\n\n\n\\subsubsection{Equality constraints}\n\n\nThis group is formed by two constraints that were proposed by \\cite{oliveirajr-etal2011} and \\cite{oliveirajr-barbosa2013} by following the same approach proposed by \\cite{barbosa-etal1997} and \\cite{barbosa-1999a}. \nThey introduce a priori information about the shallowest prism and are suitable for outcropping sources.\n\nThe \\textit{source\u2019s outcrop constraint} imposes that the horizontal cross-section of the shallowest prism must be close to the known outcropping boundary of the geologic source.\nThe horizontal cross-section of the known outcropping boundary separating the geologic source from the host rock is described by the radii $\\tilde{r}_{1}^{0} \\dots \\tilde{r}_{V}^{0}$. 
Mathematically, this constraint is given by\n\\begin{equation}\\label{eq:phi4}\n\\begin{split}\n\\varphi_{4}(\\mathbf{p}) &= \\left[ \\sum\\limits^{V}_{j=1}\\left(r^{1}_{j}-\\tilde{r}^{0}_{j}\\right)^2\\right] \\\\\n&= \\left(\\mathbf{R}_{4} \\mathbf{p} - \\mathbf{a} \\right)^{\\mathsf{T}} \n\\left(\\mathbf{R}_{4} \\mathbf{p} - \\mathbf{a} \\right)\n\\end{split} \\quad ,\n\\end{equation}\nwhere $\\mathbf{a}$ is a $V \\times 1$ vector containing the radii of the polygon defining the outcropping boundary\n\\begin{equation}\n\\mathbf{a} = \\left[ \\begin{array}{@{}*{12}{c}@{}}\n\\tilde{r}_{1}^{0} & \\dots & \\tilde{r}_{V}^{0} \\\\\n\\end{array} \\right]^{\\mathsf{T}} \\: ,\n\\label{eq:a-vector}\n\\end{equation}\nand\n\\begin{equation}\n\\mathbf{R}_{4} = \n\\begin{bmatrix}\n\\mathbf{I}_{V} & \\mathbf{0}_{V \\times (M-V)} \\\\\n\\end{bmatrix}_{V\\times M},\n\\label{eq:R4-matrix}\n\\end{equation}\nwhere $\\mathbf{I}_{V}$ is the identity matrix of order $V$ and \n$\\mathbf{0}_{V \\times (M-V)}$ is a $V \\times (M-V)$ matrix with null elements.\nThe gradient and Hessian of function $\\varphi_{4}(\\mathbf{p})$ (Eq. \\ref{eq:phi4}) are given by:\n\\begin{equation}\\label{eq:phi4_grad}\n\\boldsymbol{\\nabla}\\varphi_{4}(\\mathbf{p}) = 2 \\mathbf{R}_{4}^{\\mathsf{T}} \n\\left(\\mathbf{R}_{4} \\mathbf{p} - \\mathbf{a} \\right) \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi4_hessian}\n\\mathbf{H}_{4}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{4}\\mathbf{R}_{4} \\quad .\n\\end{equation}\n\nThe \\textit{source's horizontal location constraint} imposes that the horizontal Cartesian coordinates of the origin within \nthe shallowest prism must be as close as possible to a known outcropping point given by the horizontal Cartesian coordinates $(\\tilde{x}_{0}^{0}, \\: \\tilde{y}_{0}^{0})$. \nThis constraint is given by\n\\begin{equation}\\label{eq:phi5}\n\\begin{split}\n\\varphi_{5}(\\mathbf{p}) &= \\left[\\left(x_{0}^{1} - \\tilde{x}_{0}^{0}\\right)^2 \n+ \\left(y_{0}^{1} - \\tilde{y}_{0}^{0}\\right)^2\\right] \\\\\n&= \\left(\\mathbf{R}_{5} \\mathbf{p} - \\mathbf{b} \\right)^{\\mathsf{T}}\n\\left(\\mathbf{R}_{5} \\mathbf{p} - \\mathbf{b}\\right)\n\\end{split} \\quad ,\n\\end{equation}\nwhere $\\mathbf{b}$ is a $2 \\times 1$ vector containing the horizontal Cartesian coordinates of the outcropping point \n\\begin{equation}\n\\mathbf{b} = \\left[ \\begin{array}{@{}*{12}{c}@{}}\n\\tilde{x}_{0}^{0} & \\tilde{y}_{0}^{0} \\\\\n\\end{array} \\right]^{\\mathsf{T}} \\: ,\n\\label{eq:b-vector}\n\\end{equation}\nand\n\\begin{equation}\n\\mathbf{R}_{5} = \n\\begin{bmatrix}\n\\mathbf{0}_{2 \\times V} & \\mathbf{I}_{2} & \\mathbf{0}_{2 \\times (M-V-2)} \\\\\n\\end{bmatrix}_{2 \\times M},\n\\label{eq:R5-matrix}\n\\end{equation}\nwhere $\\mathbf{0}_{2 \\times (M-V-2)}$ is a $2 \\times (M-V-2)$ matrix \nwith null elements. \nThe gradient and Hessian of function $\\varphi_{5}(\\mathbf{p})$ (Eq. \\ref{eq:phi5}) are given by:\n\\begin{equation}\\label{eq:phi5_grad}\n\\boldsymbol{\\nabla}\\varphi_{5}(\\mathbf{p}) = 2\\mathbf{R}_{5}^{\\mathsf{T}}\n\\left(\\mathbf{R}_{5} \\mathbf{p} - \\mathbf{b}\\right) \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi5_hessian}\n\\mathbf{H}_{5}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{5}\\mathbf{R}_{5} \\quad .\n\\end{equation}\n
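\nContinuing the previous sketch (ours; it reuses \\texttt{np}, $V$ and $M$ from the earlier listing, and the outcrop radii and outcropping point are hypothetical numbers), the matrices $\\mathbf{R}_{4}$ and $\\mathbf{R}_{5}$ (Eqs. \\ref{eq:R4-matrix} and \\ref{eq:R5-matrix}) are simple selector matrices, and the gradients of Eqs. \\ref{eq:phi4_grad} and \\ref{eq:phi5_grad} follow directly:\n\\begin{verbatim}\n# Eq. R4: selects the V radii of the shallowest prism\nR4 = np.hstack([np.eye(V), np.zeros((V, M - V))])\n\n# Eq. R5: selects the origin coordinates (x0, y0) of the shallowest prism\nR5 = np.hstack([np.zeros((2, V)), np.eye(2),\n                np.zeros((2, M - V - 2))])\n\na = np.full(V, 1500.0)        # hypothetical outcrop radii (m)\nb = np.array([-50.0, 120.0])  # hypothetical outcropping point (m)\n\ndef grad_phi4(p):\n    # Eq. phi4_grad: 2 R4^T (R4 p - a)\n    return 2.0 * R4.T @ (R4 @ p - a)\n\ndef grad_phi5(p):\n    # Eq. phi5_grad: 2 R5^T (R5 p - b)\n    return 2.0 * R5.T @ (R5 @ p - b)\n\n# the Hessians (Eqs. phi4_hessian and phi5_hessian) are constant\nH4, H5 = 2.0 * R4.T @ R4, 2.0 * R5.T @ R5\n\\end{verbatim}\n\n\\subsubsection{Minimum Euclidean norm constraints}\n\nTwo constraints use the zeroth-order Tikhonov regularization with the purpose of obtaining stable solutions without introducing prior information about the shape of the source. 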
\nHowever, these two constraints combined with the interpretation model impose \nsource compactness. \n\nThe \\textit{Minimum Euclidean norm of the radii} imposes that \nall estimated radii within each prism must be close to null values. This constraint was proposed by \\cite{oliveirajr-etal2011} and \\cite{oliveirajr-barbosa2013} and can be rewritten as follows\n\\begin{equation}\\label{eq:phi6}\n\\begin{split}\n\\varphi_{6}(\\mathbf{p}) &= \\sum\\limits^{L}_{k=1}\\sum\\limits^{V}_{j=1}\\left(r_{j}^{k}\\right)^2 \\\\\n&= \\mathbf{p}^{\\mathsf{T}} \\mathbf{R}_{6}^{\\mathsf{T}} \\mathbf{R}_{6} \\mathbf{p}\n\\end{split} \\quad ,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathbf{R}_{6} = \n\\begin{bmatrix}\n\\mathbf{S}_{6} & \\mathbf{0}_{(M-1) \\times 1} \\\\\n\\mathbf{0}_{1 \\times (M-1)} & 0 \\\\\n\\end{bmatrix}_{M\\times M} \\quad ,\n\\label{eq:R6-matrix}\n\\end{equation}\nand \n\\begin{equation}\n\\mathbf{S}_{6} = \n\\mathbf{I}_{L} \\otimes \n\\begin{bmatrix}\n\\mathbf{I}_{V}      & \\mathbf{0}_{V \\times 2} \\\\\n\\mathbf{0}_{2 \\times V} & \\mathbf{0}_{2 \\times 2} \\\\\n\\end{bmatrix}_{(V+2) \\times (V+2)} \\quad ,\n\\label{eq:S6-matrix}\n\\end{equation}\nwhere $\\mathbf{0}_{2 \\times 2}$ is a $2 \\times 2$ matrix with null elements,\n$\\mathbf{0}_{V \\times 2}$ is a $V \\times 2$ matrix with null elements and\n$\\mathbf{0}_{2 \\times V}$ is a $2 \\times V$ matrix with null elements.\nThe gradient and Hessian of function $\\varphi_{6}(\\mathbf{p})$ (Eq. \\ref{eq:phi6}) are given by:\n\\begin{equation}\\label{eq:phi6_grad}\n\\boldsymbol{\\nabla}\\varphi_{6}(\\mathbf{p}) = 2 \\mathbf{R}_{6}^{\\mathsf{T}} \\mathbf{R}_{6} \\mathbf{p} \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi6_hessian}\n\\mathbf{H}_{6}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{6}\\mathbf{R}_{6} \\quad .\n\\end{equation}\n\nThe other constraint, the \\textit{Minimum Euclidean norm of the prism thickness}, imposes that the thickness of all prisms must be close to zero. We present this constraint to introduce a priori information about the maximum depth extent of the source which in turn is dependent on the depth to the top of the shallowest prism $z_{0}$. It is given by\n\\begin{equation}\\label{eq:phi7}\n\\begin{split}\n\\varphi_{7}(\\mathbf{p}) &= dz^2 \\\\\n&= \\mathbf{p}^{\\mathsf{T}} \\mathbf{R}_{7}^{\\mathsf{T}} \\mathbf{R}_{7} \\mathbf{p}\n\\end{split} \\quad ,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathbf{R}_{7} =\n\\begin{bmatrix}\n\\mathbf{0}_{(M-1) \\times (M-1)} & \\mathbf{0}_{(M-1) \\times 1} \\\\\n\\mathbf{0}_{1 \\times (M-1)} & 1 \\\\\n\\end{bmatrix}_{ M \\times M } \\quad .\n\\end{equation}\nThe gradient and Hessian of function $\\varphi_{7}(\\mathbf{p})$ (Eq. \\ref{eq:phi7}) are given by:\n\\begin{equation}\\label{eq:phi7_grad}\n\\boldsymbol{\\nabla}\\varphi_{7}(\\mathbf{p}) = 2 \\mathbf{R}_{7}^{\\mathsf{T}} \\mathbf{R}_{7} \\mathbf{p} \\quad ,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:phi7_hessian}\n\\mathbf{H}_{7}(\\mathbf{p}) = 2 \\mathbf{R}^{\\mathsf{T}}_{7}\\mathbf{R}_{7} \\quad .\n\\end{equation}\n\n\\subsection{Computational procedures}\n\nIn this section, we present the computational procedures \nto solve the nonlinear inverse problem for estimating a parameter vector \n$\\hat{\\mathbf{p}}_{(f)}$ that minimizes the goal function \n$\\Gamma(\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:gamma}), subject to the inequality\nconstraints (Eq. 
\\ref{eq:inequality-constraint}), for previously defined values of\ndepth to the top of the shallowest prism $z_{0}$ and total-magnetization intensity\n$m_{0}$. We use an iterative gradient-based algorithm for solving the nonlinear\ninverse problem.\n\nThis section is divided into four parts. The first part describes how to define \nthe number of prisms $L$, the number of vertices $V$, the total-magnetization \ndirection and the thickness $dz$ of all prisms, the initial approximation\n$\\hat{\\mathbf{p}}_{(0)}$ for the parameter vector $ \\mathbf{p} $ \n(Eq. \\ref{eq:p-vector}), as well as the upper and lower limits in the inequality\nconstraints ($p_{l}^{min}$ and $p_{l}^{max}$ in Eq. \\ref{eq:inequality-constraint}). \nAll these variables are previously defined by the interpreter and never updated\nin the nonlinear inverse problem.\nThe second part of this section explains how we define different \ntentative values for the depth to the top $z_{0}$ and total-magnetization\nintensity $m_{0}$, as well as how we choose their optimum values.\nIn the third part we present the gradient-based algorithm for solving the nonlinear\ninverse problem for a given tentative pair ($m_{0}$, $z_{0}$). \nFinally, we describe how to set the weights $ \\alpha_{\\ell} $ \n(Eq. \\ref{eq:gamma}) in the fourth part of this section.\n\n\\subsubsection{Initial approximation $\\hat{\\mathbf{p}}_{(0)}$ and inequality constraints}\n\nOur initial approximation $\\hat{\\mathbf{p}}_{(0)}$ is a uniformly-magnetized\ncylinder-like body formed by \n$L$ prisms, all of them with the same number of vertices $V$. The radii of the\nvertices of all prisms ($r^{k}_{j}$, $j=1,\\dots , V$, $k=1,\\dots ,L$) have the \nsame constant value, so that the initial approximation approaches a vertical\ncylinder. \nThe number of prisms $L$ and vertices $V$ are set based on the expected complexity \nof the magnetic source.\nWe start by computing the reduction to pole (RTP) anomaly, which has three purposes.\nThe first is to verify whether the assumed total-magnetization direction is valid.\nThe second is to define the upper and lower limits of the inequality\nconstraints (Eq. \\ref{eq:inequality-constraint}).\nFinally, the third is to define the radius and the horizontal\nCartesian coordinates of the center of the cylinder-like initial approximation.\n\nIt is well known that if the source has a uniform magnetization direction, \nthe RTP anomaly is predominantly positive and decays to zero close to its \nhorizontal boundaries \\cite[e.g.,][p. 331]{blakely1996}. \nTo compute this transformation, however, the interpreter must use declination and\ninclination values close to those defining the true total-magnetization direction\nof the source. \nHence, the interpreter can validate the total-magnetization direction of the source \nby verifying that the computed RTP anomaly shows predominantly positive values.\n\nBy using the computed RTP anomaly, we also define the upper and lower limits \n($p_{l}^{min}$ and $p_{l}^{max}$ in Eq. \\ref{eq:inequality-constraint}) for \nthe parameters representing the radii $r^{k}_{j}$ of the vertices and the \nhorizontal coordinates $x_{0}^{k}$ and $y_{0}^{k}$ of the origins $O^{k}$\nof all prisms ($j=1,\\dots , V$, $k=1,\\dots ,L$). 
For the parameters representing\nthe radii $r^{k}_{j}$, the lower limit $p_{l}^{min}$ is close to zero and the \nupper limit $p_{l}^{max}$ is approximately defined by the radius of a circular area \nencompassing the region where the RTP anomaly is positive and decays to zero.\nThe lowermost and uppermost $x$- and $y$-coordinates of this circular area \nare used to define, respectively, the lower and upper limits for the \nhorizontal coordinates $x_{0}^{k}$ and $y_{0}^{k}$. \n\nNext, we set the same thickness $dz$ for all prisms so that the resulting \ntotal thickness ($L \\, dz$) is greater than the thickness we expect for the true source.\nThe lower and upper limits ($p_{l}^{min}$ and $p_{l}^{max}$ in \nEq. \\ref{eq:inequality-constraint}) for the parameter $dz$ are set to be,\nrespectively, a value close to zero and a large value resulting in a \ntotal thickness ($L \\, dz$) greater than the thickness we expect for the true source.\nAt this point, we have fully defined the vector $\\hat{\\mathbf{p}}_{(0)}$ \nrepresenting the cylinder-like initial approximation for the parameter \nvector $ \\mathbf{p} $ (Eq. \\ref{eq:p-vector}).\n\n\\subsubsection{Definition of optimum values for $m_{0}$ and $z_{0}$}\n\nAfter defining the initial approximation $\\hat{\\mathbf{p}}_{(0)}$, we need\nto set the depth to the top of the shallowest prism $z_0$ \nand the total-magnetization intensity $m_0$ for all prisms.\nThese two parameters are also previously defined by the interpreter and \nremain fixed along the iterations of our gradient-based algorithm for solving the \nnonlinear inverse problem. \n\nIn the absence of a priori information, finding values close to the true ones\nfor $z_0$ and $m_0$ is very difficult due to the inherent ambiguity\nin magnetic inversion. It is expected that estimated magnetic sources obtained by\nusing different combinations of $z_0$ and $m_0$ may produce similar data fits.\nFor this reason, we do not estimate their values in the nonlinear inverse problem.\nInstead, we set ranges of tentative values for them.\nFor each tentative pair ($m_0$, $z_0$), we use the same previously defined \ncylinder-like initial approximation $\\hat{\\mathbf{p}}_{(0)}$ to \nobtain an independent estimated parameter vector $\\hat{\\mathbf{p}}_{(f)}$ \nminimizing the goal function $\\Gamma (\\mathbf{p}, m_{0}, z_{0})$ \n(Eq. \\ref{eq:gamma}).\nNote that this approach results in a set of estimated magnetic sources with\ndifferent depths to the top $z_0$, total-magnetization intensities $m_0$ and \ngeometries defined by different estimated parameter vectors $\\hat{\\mathbf{p}}_{(f)}$.\nEach estimated magnetic source results in a different value \n$\\Gamma (\\hat{\\mathbf{p}}_{(f)}, m_{0}, z_{0})$ for the goal function.\nOn the discrete map of the goal function produced by the set of estimated magnetic\nsources, each one associated with a tentative pair ($m_0$, $z_0$), \nwe select the optimum values of $m_{0}$ and $z_{0}$ as those producing the smallest\nvalue of $\\Gamma (\\hat{\\mathbf{p}}_{(f)}, m_{0}, z_{0})$. \nWe stress that all estimated parameter vectors $\\hat{\\mathbf{p}}_{(f)}$ are obtained\nby using the same values for the weights $\\alpha_{\\ell}$ (Eq. 
\\ref{eq:gamma}).\nDetails about how we set these weights are presented later, in the section\n\\textit{Choice of the weights $\\alpha_{\\ell}$}.\n\nUsually, the range of $z_0$ includes the topography ($z_0 = 0$) and the range of \n$m_0$ is based on a priori information such as petrophysical studies.\nIn order to verify our initial approximation $\\hat{\\mathbf{p}}_{(0)}$ and the\nranges of tentative values for $z_0$ and $m_0$, we compute a discrete map\nof the data-misfit function $\\phi (\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:misfit}).\nAll values in this map are produced by using the same previously defined \ninitial approximation $\\hat{\\mathbf{p}}_{(0)}$. The only differences are the \ntentative values for $z_0$ and $m_0$. \nThe computation of this discrete map is carried out before performing the \ninversions, with the purpose of verifying that there is at least one pair of $ m_0 $ and $ z_0 $ \nassociated with a data-misfit function $\\phi (\\hat{\\mathbf{p}}_{(0)}, m_{0}, z_{0})$ \nshowing a relatively low value. This means that the initial approximation $\\hat{\\mathbf{p}}_{(0)}$\nand tentative values for $m_{0}$ and $z_{0}$ produce a predicted data vector \n$\\mathbf{d}(\\hat{\\mathbf{p}}_{(0)}, m_{0}, z_{0})$ close to the observed \ndata vector $\\mathbf{d}^{o}$.\nWe stress that, at this step, we do not need a good data fit.\nHowever, if we cannot identify any point on the discrete map of the data-misfit function \n$\\phi (\\hat{\\mathbf{p}}_{(0)}, m_{0}, z_{0})$ associated with a satisfactory\ndata fit, the initial approximation $\\hat{\\mathbf{p}}_{(0)}$ \nand the ranges of $ z_0 $ and $ m_0 $ should be redefined before recomputing the discrete map.\n
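\nA minimal sketch of this tentative-value strategy is given below (ours; the ranges are hypothetical, and \\texttt{invert} and \\texttt{goal\\_function} are stand-ins for the solver of the next subsection and for $\\Gamma$ in Eq. \\ref{eq:gamma}):\n\\begin{verbatim}\nimport numpy as np\n\ndef invert(p0, m0, z0):\n    # stand-in for the gradient-based solver of the next subsection\n    return p0\n\ndef goal_function(p, m0, z0):\n    # stand-in for Gamma(p, m0, z0) in Eq. eq:gamma\n    return float(np.sum(p**2) + (m0 - 5.0)**2 + (z0 - 200.0)**2)\n\np0 = np.zeros(41)                       # cylinder-like initial guess\nz0_values = np.linspace(0.0, 600.0, 7)  # m; includes the topography\nm0_values = np.linspace(2.0, 12.0, 6)   # A/m; from a priori information\n\n# every inversion starts from the same initial approximation p0\ngoal_map = np.empty((m0_values.size, z0_values.size))\nfor i, m0 in enumerate(m0_values):\n    for j, z0 in enumerate(z0_values):\n        goal_map[i, j] = goal_function(invert(p0, m0, z0), m0, z0)\n\n# optimum pair: smallest goal-function value on the discrete map\ni, j = np.unravel_index(np.argmin(goal_map), goal_map.shape)\nm0_opt, z0_opt = m0_values[i], z0_values[j]\n\\end{verbatim}\nThe same double loop, with \\texttt{goal\\_function} replaced by the data-misfit function evaluated at $\\hat{\\mathbf{p}}_{(0)}$, produces the pre-inversion screening map described above.\n\n\\subsubsection{Inversion algorithm for estimating $\\mathbf{p}$}\n\nWe use the Levenberg-Marquardt algorithm \\cite[e.g., ][ p. 240]{aster-etal2019} \nto solve the nonlinear inverse problem for estimating the\nparameter vector $\\hat{\\mathbf{p}}_{(f)}$ minimizing the goal function\n$\\Gamma (\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:gamma}),\nsubject to the inequality constraints (Eq. \\ref{eq:inequality-constraint}), \nfor given values of total-magnetization intensity $m_{0}$ and depth to the top\nof the shallowest prism $z_{0}$.\nThe Levenberg-Marquardt algorithm is an iterative gradient-based method that, at each iteration $n$, updates the estimated parameter vector $\\hat{\\mathbf{p}}_{(n)}$ to obtain a new estimated parameter vector $\\hat{\\mathbf{p}}_{(n + 1)}$.\nWe compute this update by following the same strategy of \\cite{barbosa-1999b}, \\cite{oliveirajr-etal2011} and \\cite{oliveirajr-barbosa2013} to incorporate the inequality constraints (Eq. \\ref{eq:inequality-constraint}). \nThis strategy consists of transforming each element $\\hat{p}_{l} \\in (p_{l}^{min}, p_{l}^{max})$ of the estimated parameter vector $\\hat{\\mathbf{p}}_{(n)}$ \ninto the element $\\hat{p}^{\\dagger}_{l} \\in (- \\infty, + \\infty)$ of an \nunconstrained vector $\\hat{\\mathbf{p}}^{\\dagger}_{(n)}$ as follows:\n\\begin{equation}\\label{eq:inequality-function}\n\\hat{p}^{\\dagger}_{l} = -\\ln\\left(\\frac{p_{l}^{max} - \\hat{p}_{l}}{\\hat{p}_{l} - p_{l}^{min}}\\right) \\: ,\n\\end{equation}\nwhere $p_{l}^{min}$ and $p_{l}^{max}$ are defined in the inequality constraints \n(Eq. 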
\\ref{eq:inequality-constraint}).\nThe inverse transformation of each element $\\hat{p}^{\\dagger}_{l} \\in (- \\infty, + \\infty)$ into the element $\\hat{p}_{l} \\in (p_{l}^{min}, p_{l}^{max})$ is the following:\n\\begin{equation}\\label{eq:inv-inequality-function}\n\\hat{p}_{l} = p_{l}^{min} + \\left(\\frac{p_{l}^{max} - p_{l}^{min}}{ 1 + e^{-\\hat{p}^{\\dagger}_{l}} }\\right) \\: .\n\\end{equation}\n\nAt each iteration $n$ of our algorithm, a correction \n$\\boldsymbol{\\Delta}\\hat{\\mathbf{p}}^{\\dagger}_{(n)}$ for the\nunconstrained vector $\\hat{\\mathbf{p}}^{\\dagger}_{(n)}$\nis computed by solving the following linear system:\n\\begin{equation}\\label{eq:linear-system}\n\\mathbf{Q}_{(n)}^{-1} \\left[\\mathbf{Q}_{(n)} \\mathbf{H}^{\\dagger}(\\hat{\\mathbf{p}}_{(n)}) \\mathbf{Q}_{(n)} + \\lambda_{(n)} \\mathbf{I}_{M} \\right] \n\\mathbf{Q}_{(n)}^{-1} \\boldsymbol{\\Delta} \\hat{\\mathbf{p}}^{\\dagger}_{(n)} \n= -\\boldsymbol{\\nabla}\\Gamma(\\hat{\\mathbf{p}}_{(n)}) \\: ,\n\\end{equation}\nwhere $\\lambda_{(n)}$ is a positive scalar (known as the Marquardt parameter) which is adjusted at each iteration and is associated with the Levenberg-Marquardt method \\cite[e.g., ][ p. 240]{silva-2001,aster-etal2019},\n$\\mathbf{I}_{M}$ is the identity matrix of order $M$, \n$\\boldsymbol{\\nabla}\\Gamma(\\hat{\\mathbf{p}}_{(n)})$\nis the gradient of the goal function (Eq. \\ref{eq:gamma_gradient}), \n$\\mathbf{H}^{\\dagger}(\\hat{\\mathbf{p}}_{(n)})$ is a matrix given by\n\\begin{equation}\\label{eq:H-dagger}\n\\mathbf{H}^{\\dagger}(\\hat{\\mathbf{p}}_{(n)}) = \\mathbf{H}(\\hat{\\mathbf{p}}_{(n)})\\mathbf{T}(\\hat{\\mathbf{p}}_{(n)}) \n\\end{equation}\nand $\\mathbf{Q}_{(n)}$ is a diagonal matrix proposed by \\cite{marquardt_algorithm_1963} for scaling the parameter $\\lambda_{(n)}$ \nat each iteration.\nThe element $ll$ of this diagonal matrix is given by\n\\begin{equation}\\label{eq:Q-matrix}\nq_{ll} = \\frac{1}{\\sqrt{h^{\\dagger}_{ll}}} \\: ,\n\\end{equation}\nwhere $h^{\\dagger}_{ll}$ is the element $ll$ of the matrix $\\mathbf{H}^{\\dagger}(\\hat{\\mathbf{p}}_{(n)})$ (Eq. \\ref{eq:H-dagger}). \nIn Eq. \\ref{eq:H-dagger}, $\\mathbf{H}(\\hat{\\mathbf{p}}_{(n)})$ is the Hessian matrix of the goal function (Eq. \\ref{eq:gamma_hessian}) and $\\mathbf{T}(\\hat{\\mathbf{p}}_{(n)})$ is a diagonal matrix whose element $ll$ is given by\n\\begin{equation}\\label{eq:inequality-diag}\nt_{ll} = \\frac{(p_{l}^{max} - \\hat{p}_{l} + \\epsilon)(\\hat{p}_{l} - p_{l}^{min} + \\epsilon)}{p_{l}^{max} - p_{l}^{min}} \\: ,\n\\end{equation}\nwhere $\\hat{p}_{l}$ is the $l$th element of the estimated parameter vector \n$\\hat{\\mathbf{p}}_{(n)}$ and $\\epsilon$ is a small positive number \n($\\approx 10^{-2}$) used to prevent null values.\n\nAfter estimating $\\boldsymbol{\\Delta} \\hat{\\mathbf{p}}^{\\dagger}_{(n)}$ by \nsolving the linear system (Eq. \\ref{eq:linear-system}), we update the unconstrained \nvector by computing\n\\begin{equation}\\label{eq:p-k1-dagger}\n\\hat{\\mathbf{p}}^{\\dagger}_{(n + 1)} =\n\\hat{\\mathbf{p}}^{\\dagger}_{(n)} +\n\\boldsymbol{\\Delta} \\hat{\\mathbf{p}}^{\\dagger}_{(n)} \\: .\n\\end{equation}\nNext, we compute the elements of an updated parameter vector $\\hat{\\mathbf{p}}_{(n + 1)}$ by using Eq. \\ref{eq:inv-inequality-function}.\n
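\nA compact NumPy sketch of one such iteration is given below (ours; it assumes that the gradient \\texttt{grad} and the Hessian \\texttt{H} of the goal function at $\\hat{\\mathbf{p}}_{(n)}$ are available and that the diagonal of $\\mathbf{H}^{\\dagger}$ is positive):\n\\begin{verbatim}\nimport numpy as np\n\ndef to_unconstrained(p, pmin, pmax):\n    # Eq. eq:inequality-function\n    return -np.log((pmax - p) / (p - pmin))\n\ndef to_constrained(pd, pmin, pmax):\n    # Eq. eq:inv-inequality-function\n    return pmin + (pmax - pmin) / (1.0 + np.exp(-pd))\n\ndef lm_correction(p, grad, H, pmin, pmax, lam, eps=1e-2):\n    # Eq. eq:inequality-diag: diagonal of T evaluated at p\n    t = (pmax - p + eps) * (p - pmin + eps) / (pmax - pmin)\n    Hd = H * t                        # Eq. eq:H-dagger: H times diag(t)\n    q = 1.0 / np.sqrt(np.diag(Hd))    # Eq. eq:Q-matrix\n    # Eq. eq:linear-system, solved for y with dp_dagger = q * y\n    A = Hd * np.outer(q, q) + lam * np.eye(p.size)\n    y = np.linalg.solve(A, -q * grad)\n    return q * y\n\n# one iteration (Eq. eq:p-k1-dagger followed by the inverse mapping):\n#   pd = to_unconstrained(p, pmin, pmax)\n#   pd = pd + lm_correction(p, grad, H, pmin, pmax, lam)\n#   p  = to_constrained(pd, pmin, pmax)\n\\end{verbatim}\nFinally, we stop the iterative process by evaluating the invariance of the \ngoal function (Eq. 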
\\ref{eq:gamma}) along successive iterations.\nSpecifically, we check if the inequality\n\\begin{equation}\\label{eq:stop-criterion}\n \\left| \\frac{\\Gamma (\\hat{\\mathbf{p}}_{(n +1)}, m_{0}, z_{0}) - \n \\Gamma (\\hat{\\mathbf{p}}_{(n)}, m_{0}, z_{0})}\n {\\Gamma (\\hat{\\mathbf{p}}_{(n)}, m_{0}, z_{0})} \n \\right| \\le \\tau\n\\end{equation}\nholds, where $\\tau$ is a threshold value on the order of $10^{-3}$ to $10^{-4}$. \nUsually, the inversion algorithm converges in $8$--$12$ iterations.\nThe estimate $\\hat{\\mathbf{p}}_{(n+1)}$ that is obtained\nat the last iteration and satisfies the convergence criterion defined by\nEq. \\ref{eq:stop-criterion} is the estimated parameter vector\n$\\hat{\\mathbf{p}}_{(f)}$ minimizing the goal function\n$\\Gamma (\\mathbf{p}, m_{0}, z_{0})$ (Eq. \\ref{eq:gamma}), for the given\nvalues of depth to the top $z_{0}$ and total-magnetization intensity $m_{0}$.\nIn the evaluation of the goal function, we compute the \nconstraint functions $\\varphi_{\\ell}(\\mathbf{p})$ (Eqs \\ref{eq:phi1}, \\ref{eq:phi2},\n\\ref{eq:phi3}, \\ref{eq:phi4}, \\ref{eq:phi5}, \\ref{eq:phi6} and \\ref{eq:phi7}), \n$\\ell = 1, \\dots, 7$, by using the expressions written as sums of \nterms instead of those defined by sparse matrices.\n\n\n\\subsubsection{Choice of the weights $\\alpha_{\\ell}$}\n\nAttributing values to the weights $ \\alpha_{\\ell} $ (Eq. \\ref{eq:gamma}) is an important feature of our method. \nHowever, there is no analytical rule to define them and their values can be dependent on the particular characteristics of the type of geological setting where the method is being applied \\cite[]{silva-2001}. \n\nAt this point, we draw attention to the fact that the weights $ \\alpha_{\\ell}$ (Eq. \\ref{eq:gamma}) are dimensional quantities. \nNote that the units of the data-misfit function (Eq. \\ref{eq:misfit}) and the constraint functions \n(Eqs \\ref{eq:phi1}, \\ref{eq:phi2}, \\ref{eq:phi3}, \\ref{eq:phi4}, \\ref{eq:phi5}, \n\\ref{eq:phi6} and \\ref{eq:phi7}) are nT$^{2}$ and m$^{2}$, respectively.\nBecause we set the unit of the goal function (Eq. \\ref{eq:gamma}) as nT$^{2}$, the unit of the weights \n$ \\alpha_{\\ell} $ (Eq. \\ref{eq:gamma}) is nT$^{2}$m$^{-2}$.\n\nThe physical dimensions of the weights $ \\alpha_{\\ell}$ make the assignment of \ntheir values problem dependent. \nTo make these weights comparable to each other, we normalize the $ \\alpha_{\\ell} $ \nvalues as follows:\n\\begin{equation}\\label{eq:alphas}\n\\alpha_{\\ell} = \\tilde{\\alpha}_\\ell \\frac{E_\\phi}{E_\\ell}, \\quad \\ell = 1,\\dots, 7,\n\\end{equation}\nwhere $\\tilde{\\alpha}_\\ell$ is a positive scalar and $ E_\\phi/E_\\ell $ is a normalizing\nfactor allowing the $\\tilde{\\alpha}_\\ell$ to be independent of the physical units used.\nIn Eq. \\ref{eq:alphas}, $ E_\\ell $ represents the \ntrace of the Hessian matrix $\\mathbf{H}_{\\ell}$ (Eqs \\ref{eq:phi1_hessian}, \n\\ref{eq:phi2_hessian}, \\ref{eq:phi3_hessian}, \\ref{eq:phi4_hessian}, \\ref{eq:phi5_hessian}, \n\\ref{eq:phi6_hessian}, and \\ref{eq:phi7_hessian}) of the $\\ell$th constraining function \n$\\varphi_{\\ell}(\\mathbf{p})$ (Eqs \\ref{eq:phi1}, \\ref{eq:phi2}, \\ref{eq:phi3}, \n\\ref{eq:phi4}, \\ref{eq:phi5}, \\ref{eq:phi6}, and \\ref{eq:phi7}). \nThe constant $E_\\phi$ is the trace of the Hessian matrix \n$\\mathbf{H}_{\\phi}(\\hat{\\mathbf{p}}_{(0)})$ (Eq. \\ref{eq:phi_hessian}) of the misfit function \n$\\phi(\\mathbf{p})$ (Eq. 
\\ref{eq:misfit}) computed with the initial approximation $\\hat{\\mathbf{p}}_{(0)}$ \nfor the parameter vector $ \\mathbf{p} $ (Eq. \\ref{eq:p-vector}) at the beginning of the inversion algorithm. \nNote that the trace of the Hessian matrix $\\mathbf{H}_{\\ell}$ is dimensionless and \nthe trace of the Hessian matrix $\\mathbf{H}_{\\phi}(\\hat{\\mathbf{p}}_{(0)})$ has unit of nT$^{2}$m$^{-2}$.\nThus, the positive scalars $\\tilde{\\alpha}_\\ell$ in Eq. \\ref{eq:alphas} are dimensionless quantities.\n\nAccording to this empirical strategy, the weights $ \\alpha_{\\ell} $ \n(Eq. \\ref{eq:gamma}) are redefined using Eq. \\ref{eq:alphas}, in which the weights\n$\\tilde{\\alpha}_\\ell$ are positive scalars, independent of physical units and \nless dependent on the particular characteristics of the\ngeological setting being interpreted.\n\n%\\subsubsection{Practical considerations}\n\nThe values attributed to the dimensionless weights $\\tilde{\\alpha}_{1} - \\tilde{\\alpha}_{7}$ \n(Eq. \\ref{eq:alphas}) significantly impact the estimated models and cannot be \nautomatically set without the interpreter\u2019s judgment. \nBased on our practical experience, we suggest some \nempirical procedures for setting these parameters.\n\nThe parameters $\\tilde{\\alpha}_1$ and $\\tilde{\\alpha}_2$ impose prior information \non the shape of the horizontal cross-section of the prisms. \nThe first one forces all prisms to have a circular horizontal cross-section, while \nthe second forces all prisms to have a similar horizontal cross-section.\nGenerally, their values vary from $10^{-5}$ to $10^{-3}$ and differ from\neach other by one order of magnitude, at most.\nThe parameter $\\tilde{\\alpha}_3$ also varies from $10^{-5}$ to $10^{-3}$ and \ncontrols the relative position of adjacent prisms forming the model.\nA high value privileges a vertical estimated body, whereas a small value \ntends to generate an inclined estimated body.\n\nIn comparison to $\\tilde{\\alpha}_1$, $\\tilde{\\alpha}_2$ and $\\tilde{\\alpha}_3$,\nthe other parameters usually have smaller values varying from $10^{-8}$ to $10^{-4}$.\nThe parameters $\\tilde{\\alpha}_4$ and $\\tilde{\\alpha}_5$ control the shape and position of the shallowest prism of the interpretative model. They are used when a priori\ninformation about the source is available for the study area.\nThe parameter $\\tilde{\\alpha}_6$ imposes a minimum volume on the estimated source.\nThe values of the parameters $\\tilde{\\alpha}_4$, $\\tilde{\\alpha}_5$, and $\\tilde{\\alpha}_6$ are set to be as small as possible.\nThe parameter $\\tilde{\\alpha}_7$ controls the total vertical extent of \nthe estimated body. \nThe greater its value, the shallower the estimated depth to the bottom of the source\nwill be and vice versa.\n\nIf the data fitting is poor or if the estimated source shows an unrealistic shape,\nthe dimensionless weights $\\tilde{\\alpha}_{\\ell}$ (Eq. \\ref{eq:alphas}) must be\nfine-tuned to obtain a realistic solution. After tuning their values, they will be used in all\ninversions without changing along the iterations. 
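\nAs a concrete illustration of Eq. \\ref{eq:alphas}, the short sketch below (ours) normalizes a given set of dimensionless weights by the traces of the constraint Hessians and of the misfit Hessian evaluated at $\\hat{\\mathbf{p}}_{(0)}$:\n\\begin{verbatim}\nimport numpy as np\n\ndef normalize_weights(tilde_alpha, H_list, H_phi0):\n    # Eq. eq:alphas: alpha_l = tilde_alpha_l * E_phi / E_l, where\n    # E_l = trace(H_l) and E_phi = trace(H_phi(p0))\n    E_phi = np.trace(H_phi0)                     # nT^2 m^-2\n    E = np.array([np.trace(H) for H in H_list])  # dimensionless\n    return tilde_alpha * E_phi / E\n\n# tilde_alpha: the seven dimensionless weights discussed above\n# H_list: the constant Hessians 2 R^T R of phi_1, ..., phi_7\n# H_phi0: Hessian of the misfit function at p0 (Eq. eq:phi_hessian)\n\\end{verbatim}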
\nEducated guesses for the $\\tilde{\\alpha}_{\\ell}$ values are \n$\\tilde{\\alpha}_1 = 10^{-4}$, $\\tilde{\\alpha}_2 = 10^{-4}$, $\\tilde{\\alpha}_3 = 10^{-4}$,\n$\\tilde{\\alpha}_4 = 10^{-7}$, $\\tilde{\\alpha}_5 = 10^{-7}$, $\\tilde{\\alpha}_6 = 10^{-7}$, \n$\\tilde{\\alpha}_7 = 10^{-5}$.\n", "meta": {"hexsha": "e1ff89938af789f3fd7333d03cd114ae96fc97f1", "size": 40810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/methodology.tex", "max_stars_repo_name": "pinga-lab/magnetic-radial-inversion", "max_stars_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-15T11:35:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-15T11:35:41.000Z", "max_issues_repo_path": "manuscript/methodology.tex", "max_issues_repo_name": "pinga-lab/magnetic-radial-inversion", "max_issues_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuscript/methodology.tex", "max_forks_repo_name": "pinga-lab/magnetic-radial-inversion", "max_forks_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-01T02:14:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T02:14:31.000Z", "avg_line_length": 60.7291666667, "max_line_length": 849, "alphanum_fraction": 0.7003185494, "num_tokens": 13638, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681013541611, "lm_q2_score": 0.6859494550081925, "lm_q1q2_score": 0.5883855616472988}}
{"text": "\\label{sec:DETRavenApproach}\nThe main subject of this report is to explain the newer Adaptive Dynamic Event Tree (ADET) methodology developed within the RAVEN code. As already mentioned, the DET approach is extremely effective (achiving almost a perfect tradeoff) with respect to computational resources usage and the effective exploration of the uncertain domain (input space). The proposed approach is designed to further focus these features of the DET methodology toward the characterization of some properties of the input space. More specifically, the approach that will be introduced is an adaptive methodology that defines the branching strategy in order to identify more efficiently (lower computational costs) and more accurately the location of transitions in the input space (e.g., failure vs. success). The characterization of the input space is translated into an \u201coptimization-like\u201d problem for the identification of the so-called Limit Surface, and accelerated through the employment of artificial intelligence-based algorithms that help the selection of future branching points.\n\nThe concepts mentioned here in brief are explained in detail in the following sub-sections.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Concept of Limit Surface }\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:LimitS}\n As already mentioned, the adaptive methodology can be considered as a goal oriented sampling strategy for the research of the limit surfaces (LSs). To define the meaning of the LS, it is necessary to introduce the concept of \u201cgoal\u201d function. Without making a rigorous mathematical treatment, the \u201cgoal\u201d function is an object that is defined as a part of the system outcome space. In a safety context, the goal function usually represents the success or failure (transition) of the system. Therefore, if $\\overline{x}_{f}$ is the final status of the system, $C\\left (\\overline{x}_{f}\\right )$ the goal function ( $C\\left (\\overline{x}_{f}\\right ) = 1$ system success,  $C\\left (\\overline{x}_{f}\\right ) = -1$ system failure), ), it is possible to define $\\overline{x}_{f,L}$ as the transition surface in the output space with respect to the goal function. This statement can be translated in the following mathematical expression: $\\overline{x}_{f,L} = \\left \\{ \\overline{x}_{f}|\\left | \\overline{\\bigtriangledown} C\\left ( \\overline{x}_{f} \\right ) \\right |= \\infty \\right \\}$. If $\\overline{x}\\left (t,\\overline{p},\\overline{x}_{0}  \\right )$ represents the evolution of the system (that can be considered deterministic, once the initial condition ($\\overline{x}_{0} $) and stochastic model parameters ($\\overline{p}$) have been chosen), then is it possible to identify the set of pairs $\\left ( \\overline{p},\\overline{x}_{0} \\right )_{LS}$, in the input space, which leads the system outcome to match $\\overline{x}_{f,L}$. Such set of points $\\left ( \\overline{p},\\overline{x}_{0} \\right )$ represents the limit surface, in the input space.The LS represent therefore the boundary between inputs leading to success or failure of the system, and more generally boundaries among transition regions.\n\nSince the probability of a particular outcome is equivalent to the probability of being in the input space that leads to that outcome, the probability of a system outcome (e.g. 
success/failure) can be evaluated by computing the probability of being in the space surrounded by the LS.\nThe determination of the LS is extremely important because of its informative content. Knowledge of the LS allows risk functions to be evaluated efficiently, indicates which uncertainties are the most relevant from a risk point of view (ranking of the uncertain space), defines safe regions to be explored for plant optimization, identifies risk-neutral design directions, etc. Unfortunately, the determination of LSs is very computationally expensive, since a brute-force approach would imply the evaluation of each point of an N-dimensional grid that discretizes the input space. The number of points in this grid is proportional to the requested accuracy. In order to overcome these limitations, a possible approach is the employment of acceleration schemes based on Reduced Order Models (ROMs). ROMs are used to predict the location of the LS in order to drive the exploration of the input space toward its possible location. The Adaptive Dynamic Event Tree is based on this concept.\n\\label{sec:ADET}\n\\begin{figure}[h]\n  \\centering\n     \\includegraphics[width=0.8\\textwidth]{figures/DET_LS_pb.png}\n  \\caption{Dynamic Event Tree Limit Surface}\n  \\label{fig:LSDET}\n\\end{figure}\n%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Reduced Order Models}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:ROMs}\nAs mentioned in the previous section, the Adaptive Dynamic Event Tree uses ROMs in order to accelerate the search for the transition boundaries represented by the LS. Simplistically, a ROM is a fast-running mathematical model trained to predict a response of interest of a physical system. The training process is performed by \u201csampling\u201d the response of a physical model with respect to variations of parameters that are subject to probabilistic behavior. The results (outcomes of the physical model) of those samplings are fed into the algorithm representing the ROM, which tunes itself to replicate those results. More specifically, in the RAVEN case the reduced order model is constructed to emulate a numerical representation of a physical system. In fact, as already mentioned, the physical system model would be too expensive to be used to explore the whole input space, an exploration that would be needed for the limit surface search. Under certain assumptions, the ROM's ability to predict the output space of a system improves as the number of training samples increases (better representation of the whole domain). This is not always true, since some of the ROMs used might be subject to over-fitting issues.\nIn RAVEN, several different Reduced Order Model types are available, such as Support Vector Machines, Neighbors-based models, multi-class models, Quadratic Discriminants, etc. 
All those supervised learning algorithms have been imported via an Application Programming Interface (API) within the scikit-learn~\\cite{scikitlearn} library.\n\nThe acceleration schemes in the adaptive approach use a class of supervised learning algorithms usually referred to as \u201cclassifiers.\u201d In essence, classifiers are ROMs specialized to represent a binary response of a system (e.g., failure/success, on/off, etc.), like the goal function of interest in our case.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{figure}[h]\n  \\centering\n     \\includegraphics[width=0.5\\textwidth]{figures/AdaptiveDET.png}\n  \\caption{Adaptive Dynamic Event Tree Scheme}\n  \\label{fig:AdaptiveDET}\n\\end{figure}\n\\subsection{Adaptive DET}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe main idea of the application of the previously explained adaptive sampling approach to the DET comes from the observation that the DET is intrinsically adaptive.\n\nIn order to explain this statement, as an example, fig.~\\ref{fig:LSDET} shows a LS generated by the DET sampling methodology currently available in RAVEN. In this case, a goal function based on the maximum clad temperature has been used:\n\\begin{equation}\n\\left \\{ C \\right \\}_{i}=\\left \\{ \\begin{matrix}\n1 & T_{C,i}\\geq T_{C,F}\\\\\n-1 & T_{C,i}<  T_{C,F}\n\\end{matrix} \\right.\n\\end{equation}\nwhere $T_{C,i}$ is the clad temperature and $T_{C,F}$ is the failure temperature, sampled from a triangular distribution.\nAs can be seen, the DET method, when one or more uncertain parameters directly influence the outcome of the goal function, tends to search for the LS with accuracy equal to the user-defined grid discretization in the input space.\nFor this reason, it appears natural to use the DET approach to perform a goal-function-oriented pre-sampling of the input space. The proposed approach can be described through the following steps (see Figs.~\\ref{fig:AdaptiveDET} and~\\ref{fig:ADETflowChart}); a minimal sketch of the classifier-based steps is given at the end of this subsection:\n\\begin{enumerate}\n\\item A limited number of points in the input space $\\left ( \\overline{p},\\overline{x}_{0} \\right )_{i}$ are selected via a DET approach\n\\item The system code is then used to compute all the possible evolutions of the system starting from the above selected input points:\n\\begin{equation}\n    \\overline{x}_{i}=\\overline{x}\\left (t,\\overline{p}_{i},\\overline{x}_{0,i}  \\right )\n\\end{equation}\n\\item The goal function $C\\left (\\overline{x}_{f}\\right )$ (in this case of type Boolean) is evaluated at the final phase space coordinate of the system (outcome):\n\\begin{equation}\n \\left \\{ C \\right \\}_{i}= C\\left ( \\overline{x} \\left ( t=t_{end},\\overline{p}_{i},\\overline{x}_{0,i}\\right ) \\right )\n\\end{equation}\n\\item The set of pairs $\\left ( \\overline{p},\\overline{x}_{0} \\right )_{i} \\rightarrow \\left \\{ C \\right \\}_{i}$ is used to train a ROM\n\\item The ROM is used to predict the value of the goal function on a regular Cartesian grid in the domain space. 
The grid meshing is dictated by the user-requested accuracy in the determination of the LS location:\n\\begin{equation}\n \\begin{matrix}\n \\left \\{ c \\right \\}_{j} \\cong ROM\\left ( \\left \\{ \\overline{x}_{0} \\right \\}_{j} \\right ) & j=1,...,M\n\\end{matrix}\n\\end{equation}\nwhere $j=1, ..., M$ indexes the points on the grid\n\\item The regions identified by a change in value of $\\left \\{ c \\right \\}_{j}$ (between -1 and 1) are therefore the ROM prediction of the limit surface location:\n\\begin{equation}\n(\\overline{p},\\overline{x}_{0})_{LS}\n\\end{equation}\n\\item The position of the LS is compared with the one at the previous iteration; if no changes are consecutively detected for a user-defined number of times, then the iterations stop; otherwise a new point, in the input space, needs to be selected to increase the ROM training set.\n\\item The next selected point to be added to the training set is the one located on the limit surface that is the farthest from all the other already selected points\n\\item A hierarchical searching process is performed on the DET branches already evaluated and the starting point (branch) for the new evaluation is determined\n\\item The process restarts from point 2.\n\\end{enumerate}\n\\begin{figure}[h]\n  \\centering\n     \\includegraphics[width=0.6\\textwidth]{figures/adaptiveFlowChart.png}\n  \\caption{Adaptive Dynamic Event Tree Flow Chart}\n  \\label{fig:ADETflowChart}\n\\end{figure}\n\nIn order to understand how the algorithm works, it is necessary to explain in detail the first and ninth steps of the previous algorithm flow. The ADET methodology starts with a standard DET that performs the initial exploration of the input space, exploiting its capability to investigate the uncertain domain in a reasonably short time. As already mentioned and shown in fig.~\\ref{fig:LSDET}, when a goal function is directly influenced by one (or more) uncertain parameter(s), the DET tends to increase the sampling density in proximity of the transition boundaries (LS). This means that the acceleration ROM gets trained with an initial dataset close to the LS, which helps the convergence of the method.\n\nAfter the initial DET, the actual adaptive scheme starts to be employed following the calculation flow previously reported. Every time a new point, in the input space, is requested, a search in the DET database is performed. This search is needed to identify the branch closest to the next point that needs to be evaluated. Once the closest branch has been identified, the system code is run using that branch as the starting point (i.e. the calculation restarts from an advanced point in time). This new evaluation is then added to the DET database and can be used for subsequent requests. 
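\nTo make steps 4--8 concrete, the following minimal sketch (ours, not RAVEN code; it uses the scikit-learn support vector classifier as the ROM, and the grid-step tolerance \\texttt{h} used to locate sign changes is our own heuristic) performs one adaptive iteration on an already available training set:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.distance import cdist\nfrom sklearn.svm import SVC\n\ndef adaptive_iteration(X_train, C_train, grid, h):\n    # step 4: train the classifier ROM on pairs (p, x0) -> C\n    rom = SVC().fit(X_train, C_train)   # default RBF kernel\n    # step 5: predict the goal function on the regular Cartesian grid\n    c = rom.predict(grid)\n    # step 6: grid points with an opposite-class neighbor within one\n    # grid step approximate the limit surface location\n    d = cdist(grid[c == 1], grid[c == -1])\n    on_ls = d.min(axis=1) <= h * np.sqrt(grid.shape[1])\n    ls_points = grid[c == 1][on_ls]\n    # step 8: the next training point is the LS point farthest from\n    # all the already selected points\n    idx = cdist(ls_points, X_train).min(axis=1).argmax()\n    return ls_points, ls_points[idx]\n\\end{verbatim}\nSteps 1--3 and 9--10 (the DET runs themselves and the branch search) are carried out by the driver code and are omitted here.\n\n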
This branch-reuse approach speeds up the adaptive search and yields an effective exploration of the input space in proximity of the transition boundaries.\n\nIn essence, this methodology combines the adaptive search algorithm already present in RAVEN with a DET approach, allowing the reuse of branches of simulations already performed rather than always restarting the simulation from the beginning.\n\n\n\n", "meta": {"hexsha": "746d660cb8f630e54244f6b06d8a51fc2a3eda91", "size": 12123, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/L3MilestoneAdaptiveDETSept2014/adet.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "papers/L3MilestoneAdaptiveDETSept2014/adet.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "papers/L3MilestoneAdaptiveDETSept2014/adet.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 130.3548387097, "max_line_length": 1798, "alphanum_fraction": 0.7644147488, "num_tokens": 2711, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942041005328, "lm_q2_score": 0.6513548714339145, "lm_q1q2_score": 0.5883650801789027}}
{"text": "\\section{One to one and onto transformations}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item Determine if a linear transformation is onto or one to one.\n  \\end{enumerate}\n\\end{outcome}\n\nLet $T: \\R^n \\to \\R^m$ be a linear transformation. We define the \\textbf{range}\\index{linear transformation!range} or \\textbf{image}\\index{linear transformation!image} of $T$ as the set of vectors of $\\R^{m}$ which are of the form\n$T (\\vect{x})$ (equivalently, $A\\vect{x}$) for some $\\vect{x}\\in \\R^{n}$. It is common\nto write $T\\R^{n}$, $T(\\R^{n})$, or\n$\\func{Im}(T) $ to denote these vectors.\n\n\\begin{lemma}{Range of a matrix transformation}{Ax}\n\nLet $A$ be an $m\\times n$-matrix where $A_{1},\\ldots, A_{n}$ denote the columns of\n$A$\\index{range of matrix transformation}. Then, for a vector $\\vect{x}=\\begin{mymatrix}{c}\nx_{1} \\\\\n\\vdots \\\\\n x_{n}\n\\end{mymatrix}$ in $\\R^n$,\n\n\\begin{equation*}\nA\\vect{x}=\\sum_{k=1}^{n}x_{k}A_{k}\n\\end{equation*}\n\nTherefore, $A (\\R^n)$ is the collection of all\nlinear combinations of these products.\n\\end{lemma}\n\n\\begin{proof}\nThis follows from the definition of matrix multiplication.\n\\end{proof}\n\nThis section is devoted to studying two important characterizations of linear transformations, called one to one and onto. We define them now.\n\n\\begin{definition}{One to one}{one-to-one}\nSuppose $\\vect{x}_1$ and $\\vect{x}_2$ are vectors in $\\R^n$. A linear transformation $T: \\R^n \\to \\R^m$ is called \\textbf{one to one}\\index{one to one} (often written as $1-1)$ if whenever\n $\\vect{x}_1 \\neq \\vect{x}_2$ it follows that :\n\\begin{equation*}\nT(\\vect{x}_1) \\neq T (\\vect{x}_2)\n\\end{equation*}\n\nEquivalently, if $T(\\vect{x}_1) =T(\\vect{x}_2)$,\nthen $\\vect{x}_1 = \\vect{x}_2$. Thus,  $T$ is one to one if it never takes two different\nvectors to the same vector.\n\\end{definition}\n\nThe second important characterization is called onto.\n\n\\begin{definition}{Onto}{onto}\nLet $T: \\R^n \\to \\R^m$ be a linear transformation. Then $T$ is called \\textbf{onto}\\index{onto} if whenever $\\vect{x}_2 \\in \\R^{m}$ there exists\n$\\vect{x}_1 \\in \\R^{n}$ such that $T(\\vect{x}_1) = \\vect{x}_2. $\n\\end{definition}\n\nWe often call a linear transformation which is one-to-one an \\textbf{injection}\\index{injection}. Similarly, a linear transformation which is onto is often called a \\textbf{surjection}\\index{surjection}.\n\nThe following proposition is an important result.\n\n\\begin{proposition}{One to one}{one-to-one-matrices}\nLet $T:\\R^n \\to \\R^m$ be a linear transformation. Then $T$ is one to one if\nand only if $T(\\vect{x}) = \\vect{0}$ implies $\\vect{x}=\\vect{0}$.\n\\end{proposition}\n\n\\begin{proof}\nWe need to prove two things here. First, we will prove that if $T$ is one to one, then\n$T(\\vect{x}) = \\vect{0}$ implies that $\\vect{x}=\\vect{0}$. Second, we will show that if $T(\\vect{x})=\\vect{0}$ implies that $\\vect{x}=\\vect{0}$, then\nit follows that $T$ is one to one. Recall that a linear transformation has the property that $T(\\vect{0}) = \\vect{0}$.\n\n%%Note that since $T$ is linear, it is induced by an $m \\times n$-matrix $A$. Therefore we can rewrite the statement ``$T_A(\\vect{x}) = \\vect{0}$ implies $\\vect{x}=\\vect{0}$'' in terms of the matrix $A$ as ``$A\\vect{x}=\\vect{0}$ implies $\\vect{x}=\\vect{0}$''. 
Therefore we can prove this theorem using $A$.\n%%\n%%Observe that $A\\vect{0}=A(\\vect{0}+\\vect{0}) =A\\vect{0} +A\\vect{0}$ and so $A\\vect{0}=\\vect{0}$.\n%%\n%%Now suppose $A$ is one to one and $A\\vect{x}=\\vect{0}$. We need to show that this implies $\\vect{x}=\\vect{0}$. Since $A$ is one to one, by Definition~\\ref{def:one-to-one} $A$ can only map one vector to the zero vector $\\vect{0}$. Now $A\\vect{x}=\\vect{0}$ and $A\\vect{0}=\\vect{0}$, so it follows that $\\vect{x}=\\vect{0}$. Thus if $A$ is one to one and $A\\vect{x}=\\vect{0}$, then $\\vect{x}=\\vect{0}$.\n%%\n%%Next assume that $A\\vect{x}=\\vect{0}$ implies $\\vect{x}=\\vect{0}$.  We need to show that $A$ is one to one. Suppose $A\\vect{x}=A\\vect{y}$. Then $A\\vect{x} - A\\vect{y} = \\vect{0}$.\n%%Hence $A\\vect{x}-A\\vect{y} =  A(\\vect{x}-\\vect{y}) = \\vect{0}$. However, we have assumed that $A\\vect{x}=\\vect{0}$ implies $\\vect{x}=\\vect{0}$. This means that\n%%whenever $A$ times a vector equals $\\vect{0}$, that vector is also equal to $\\vect{0}$. Therefore,  $\\vect{x}-\\vect{y} = \\vect{0}$ and so $\\vect{x}=\\vect{y}$.\n%%Thus $A$ is one to one by Definition~\\ref{def:one-to-one}.\n%%\\end{proof}\n\nSuppose first that $T$ is one to one and consider $T(\\vect{0})$.\n\\begin{equation*}\nT(\\vect{0})=T(\\vect{0}+\\vect{0}) =T(\\vect{0})+T(\\vect{0})\n\\end{equation*}\nand so, adding the additive inverse of $T(\\vect{0})$ to both sides, one sees\nthat $T(\\vect{0})=\\vect{0}$. If $T(\\vect{x})=\\vect{0}$ it must be the\ncase that $\\vect{x}=\\vect{0}$ because it was just shown that $T(\\vect{0})=\\vect{0}$ and $T$ is assumed to be one to one.\n\nNow assume that if $T(\\vect{x})=\\vect{0}$, then it follows that $\\vect{x}=\\vect{0}$. If $T(\\vect{v})=T(\\vect{u})$, then\n\\[\nT(\\vect{v})-T(\\vect{u})=T(\\vect{v}-\\vect{u}) =\\vect{0}\n\\]\nwhich shows that $\\vect{v}-\\vect{u}=\\vect{0}$. In other words, $\\vect{v}=\\vect{u}$, and $T$ is one to one.\n\\end{proof}\n\nNote that this proposition says that if $A=\\begin{mymatrix}{ccc}\nA_{1} & \\cdots & A_{n}\n\\end{mymatrix} $ then $A$ is one to one if and only if whenever\n\\begin{equation*}\n\\vect{0} = \\sum_{k=1}^{n}c_{k}A_{k}\n\\end{equation*}\nit follows that each scalar $c_{k}=0$.\n\nWe will now take a look at an example of a one to one and onto linear transformation.\n\n\\begin{example}{A one to one and onto linear transformation}{one-to-one-onto-linear-transformation}\nSuppose\n\\begin{equation*}\nT\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix} =\\begin{mymatrix}{rr}\n1 & 1 \\\\\n1 & 2\n\\end{mymatrix} \\begin{mymatrix}{r}\nx \\\\\ny\n\\end{mymatrix}\n\\end{equation*}\nThen, $T:\\R^{2}\\rightarrow \\R^{2}$ is a linear\ntransformation. Is $T$ onto? Is it one to one?\n\\end{example}\n\n\\begin{solution} Recall that because $T$ can be expressed as matrix\nmultiplication, we know that $T$ is a linear transformation.  We will\nstart by looking at onto.  So suppose $\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix} \\in \\R^{2}$. Does there exist $\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}  \\in \\R^2 $ such that $T\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix} =\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix}$? If so, then since $\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix} $ is an arbitrary vector in $\\R^{2}$, it will follow that $T$\nis onto.\n\nThis question is familiar to you. 
It is asking whether\nthere is a solution to the equation\n\\begin{equation*}\n\\begin{mymatrix}{cc}\n1 & 1 \\\\\n1 & 2\n\\end{mymatrix} \\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix} =\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix}\n\\end{equation*}\nThis is the same thing as asking for a solution to the following system of\nequations.\n\\begin{equation*}\n\\begin{array}{c}\nx+y=a \\\\\nx+2y=b\n\\end{array}\n\\end{equation*}\nSet up the augmented matrix and row reduce.\n\\begin{equation}\n\\begin{mymatrix}{rr|r}\n1 & 1 & a \\\\\n1 & 2 & b\n\\end{mymatrix} \\rightarrow \\begin{mymatrix}{rr|r}\n1 & 0 & 2a-b \\\\\n0 & 1 & b-a\n\\end{mymatrix}\n\\label{onto-matrix}\n\\end{equation}\nYou can see from this point that the system has a solution. Therefore,\nwe have shown that for any $a, b$, there is a $\n\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}$ such that $T\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix} =\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix}$.\nThus $T$ is onto.\n\nNow we want to know if $T$ is one to one.\nBy Proposition~\\ref{prop:one-to-one-matrices} it is enough to show that $A\\vect{x}=\\vect{0}$ implies $\\vect{x}=\\vect{0}$.\nConsider the system $A\\vect{x}=\\vect{0}$ given by:\n\\begin{equation*}\n\\begin{mymatrix}{cc}\n1 & 1 \\\\\n1 & 2\\\\\n\\end{mymatrix}\n\\begin{mymatrix}{c}\nx\\\\\ny\n\\end{mymatrix}\n=\n\\begin{mymatrix}{c}\n0 \\\\\n0\n\\end{mymatrix}\n\\end{equation*}\n\nThis is the same as the system given by\n\n\\begin{equation*}\n\\begin{array}{c}\nx + y = 0 \\\\\nx + 2y = 0\n\\end{array}\n\\end{equation*}\n\nWe need to show that the solution to this system is $x = 0$ and $y = 0$. By setting up the augmented matrix and row reducing, we end up with\n\\begin{equation*} \\begin{mymatrix}{rr|r}\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{mymatrix}\n\\end{equation*}\n\nThis tells us that $x = 0$ and $y = 0$. Returning to the original system, this says that if\n\n\\begin{equation*}\n\\begin{mymatrix}{cc}\n1 & 1 \\\\\n1 & 2\\\\\n\\end{mymatrix}\n\\begin{mymatrix}{c}\nx\\\\\ny\n\\end{mymatrix}\n=\n\\begin{mymatrix}{c}\n0 \\\\\n0\n\\end{mymatrix}\n\\end{equation*}\n\nthen\n\\begin{equation*}\n\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}\n=\n\\begin{mymatrix}{c}\n0 \\\\\n0\n\\end{mymatrix}\n\\end{equation*}\n\nIn other words, $A\\vect{x}=\\vect{0}$ implies that $\\vect{x}=\\vect{0}$. By\nProposition~\\ref{prop:one-to-one-matrices}, $A$ is one to one, and so $T$ is also one to one.\n\nWe could also have seen that $T$ is one to one from our above solution for onto. By looking at the matrix given\nby {\\eqref{onto-matrix}}, you can see that there is a \\textbf{unique} solution given\nby $x=2a-b$ and $y=b-a$. Therefore, there\nis only one vector, specifically\n$\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}\n=\n\\begin{mymatrix}{c}\n2a-b\\\\\nb-a\n\\end{mymatrix} $ such that $T\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix} =\\begin{mymatrix}{c}\na \\\\\nb\n\\end{mymatrix}$. 
Hence by Definition~\\ref{def:one-to-one}, $T$ is one to one.\n\\end{solution}\n\n\\begin{example}{An onto transformation}{onto-transformation}\nLet $T: \\R^4 \\to \\R^2$ be a linear transformation defined by\n\\[\nT \\begin{mymatrix}{c}\na \\\\\nb \\\\\nc \\\\\nd\n\\end{mymatrix} =\n\\begin{mymatrix}{c}\na + d \\\\\nb + c\n\\end{mymatrix}\n\\mbox{ for all } \\begin{mymatrix}{c}\na \\\\\nb \\\\\nc \\\\\nd\n\\end{mymatrix} \\in \\R^4\n\\]\nProve that $T$ is onto but not one to one.\n\\end{example}\n\n\\begin{solution}\nYou can prove that $T$ is in fact linear.\n\nTo show that $T$ is onto, let $\\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}$ be an arbitrary vector in $\\R^2$. Taking the vector $\\begin{mymatrix}{c}\nx \\\\\ny \\\\\n0 \\\\\n0\n\\end{mymatrix} \\in \\R^4$ we have\n\\[\nT \\begin{mymatrix}{c}\nx \\\\\ny \\\\\n0 \\\\\n0\n\\end{mymatrix} =\n\\begin{mymatrix}{c}\nx + 0 \\\\\ny + 0\n\\end{mymatrix}\n= \\begin{mymatrix}{c}\nx \\\\\ny\n\\end{mymatrix}\n\\]\nThis shows that $T$ is onto.\n\nBy Proposition~\\ref{prop:one-to-one-matrices} $T$ is one to one if and only if $T(\\vect{x}) = \\vect{0}$ implies that $\\vect{x} = \\vect{0}$. Observe that\n\\[\nT \\begin{mymatrix}{r}\n1 \\\\\n0 \\\\\n0 \\\\\n-1\n\\end{mymatrix} =\n\\begin{mymatrix}{c}\n1 + (-1) \\\\\n0 + 0\n\\end{mymatrix}\n= \\begin{mymatrix}{c}\n0 \\\\\n0\n\\end{mymatrix}\n\\]\nThere exists a non-zero vector $\\vect{x}$ in $\\R^4$ such that $T(\\vect{x}) = \\vect{0}$. It follows that $T$ is not one to one.\n\\end{solution}\n\nThe above examples demonstrate a method to determine if a linear transformation $T$ is one to one or onto. It turns out that the matrix $A$ of $T$ can provide this information.\n\n\\begin{theorem}{Matrix of a one to one or onto transformation}{matrix-one-to-one-onto}\nLet $T: \\R^n \\to \\R^m$ be a linear transformation induced by the $m \\times n$-matrix $A$. Then $T$ is one to one if and only if the rank of $A$ is $n$. $T$ is onto if and only if the rank of $A$ is $m$.\n\\end{theorem}\n\nConsider Example~\\ref{exa:onto-transformation}. Above we showed that $T$ was onto but not one to one. We can now use this theorem to determine this fact about $T$.\n\n\\begin{example}{An onto transformation}{onto-transformation-matrix}\nLet $T: \\R^4 \\to \\R^2$ be a linear transformation defined by\n\\[\nT \\begin{mymatrix}{c}\na \\\\\nb \\\\\nc \\\\\nd\n\\end{mymatrix} =\n\\begin{mymatrix}{c}\na + d \\\\\nb + c\n\\end{mymatrix}\n\\mbox{ for all } \\begin{mymatrix}{c}\na \\\\\nb \\\\\nc \\\\\nd\n\\end{mymatrix} \\in \\R^4\n\\]\nProve that $T$ is onto but not one to one.\n\\end{example}\n\n\\begin{solution}\nUsing Theorem~\\ref{thm:matrix-one-to-one-onto} we can show that $T$ is onto but not one to one from the matrix of $T$. Recall that to find the matrix $A$ of $T$, we apply $T$ to each of the standard basis vectors $\\vect{e}_i$ of $\\R^4$. The result is the $2 \\times 4$-matrix $A$ given by\n\\[\nA = \\begin{mymatrix}{rrrr}\n1 & 0 & 0 & 1 \\\\\n0 & 1 & 1 & 0\n\\end{mymatrix}\n\\]\nFortunately, this matrix is already in {\\rref}. The rank of $A$ is $2$. Therefore by the above theorem $T$ is onto but not one to one.\n\\end{solution}\n
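\nA quick numerical check of Theorem~\\ref{thm:matrix-one-to-one-onto} for this example (a sketch of ours using NumPy, not part of the text):\n\\begin{verbatim}\nimport numpy as np\n\n# matrix of T from the example above\nA = np.array([[1.0, 0.0, 0.0, 1.0],\n              [0.0, 1.0, 1.0, 0.0]])\n\nm, n = A.shape\nrank = np.linalg.matrix_rank(A)\nprint(rank == n)   # one to one? False, since rank 2 < n = 4\nprint(rank == m)   # onto?       True,  since rank 2 = m = 2\n\\end{verbatim}\n\nRecall that if $S$ and $T$ are linear transformations, we can discuss their composite denoted $S \\circ T$. 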
The following examines what happens if both $S$ and $T$ are onto.\n\n\\begin{example}{Composite of onto transformations}{composite-onto}\nLet $T: \\R^k \\to \\R^n$ and $S: \\R^n \\to \\R^m$ be linear transformations.\nIf $T$ and $S$ are onto, then $S \\circ T$ is onto.\n\\end{example}\n\n\\begin{solution}\nLet $\\vect{z}\\in \\R^m$.\nSince $S$ is onto, there exists a vector $\\vect{y}\\in \\R^n$\nsuch that $S(\\vect{y})=\\vect{z}$.\nFurthermore, since $T$ is onto, there exists a vector $\\vect{x}\\in \\R^k$\nsuch that $T(\\vect{x})=\\vect{y}$.\nThus\n\\[ \\vect{z} = S(\\vect{y}) = S(T(\\vect{x})) = (S \\circ T)(\\vect{x}),\\]\nshowing that for each $\\vect{z}\\in \\R^m$ there exists an $\\vect{x}\\in \\R^k$\nsuch that $(S \\circ T)(\\vect{x})=\\vect{z}$.\nTherefore, $S \\circ T$ is onto.\n\\end{solution}\n\nThe next example shows the same concept with regard to one to one transformations.\n\n\\begin{example}{Composite of one to one transformations}{composite-one-to-one}\nLet $T: \\R^k \\to \\R^n$ and $S: \\R^n \\to \\R^m$ be linear transformations.\nProve that if $T$ and $S$ are one to one, then $S \\circ T$\nis one to one.\n\\end{example}\n\n\\begin{solution}\nTo prove that $S \\circ T$ is one to one, we need to show that if $S(T (\\vect{v})) = \\vect{0}$ it follows that $\\vect{v} = \\vect{0}$.\nSuppose that  $S(T (\\vect{v})) = \\vect{0}$. Since $S$ is one to one, it follows that  $T (\\vect{v}) = \\vect{0}$. Similarly, since $T$ is one to one, it follows that $\\vect{v} = \\vect{0}$. Hence $S \\circ T$ is one to one.\n\\end{solution}\n", "meta": {"hexsha": "80f374044efe817f4fe713e8f8f673d744fc0f10", "size": 13483, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old/content/lineartransformationsOneOneOnto.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "old/content/lineartransformationsOneOneOnto.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "old/content/lineartransformationsOneOneOnto.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 31.4289044289, "max_line_length": 400, "alphanum_fraction": 0.6629830156, "num_tokens": 4851, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8705972784807408, "lm_q1q2_score": 0.5883188225408064}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{WX Quaternion}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA WX quaternion has the form\n\\begin{equation}\n    a_{0} + a_{1} W + a_{2} X + a_{3} WX\n\\end{equation}\nThese follow from a parabolic Cayley-Dickson construct on the W binions.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Zero-Divisors}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{M\\\"{o}bius Transformations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Cross-Ratio}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "e709acf6fe438cba0f4d29231ea2f63f5a6feeab", "size": 2666, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/F.tex", "max_stars_repo_name": "meirizarrygelpi/cdc", "max_stars_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/F.tex", "max_issues_repo_name": "meirizarrygelpi/cdc", "max_issues_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/F.tex", 
"max_forks_repo_name": "meirizarrygelpi/cdc", "max_forks_repo_head_hexsha": "f9c9f027888aa8c05fd58d2fef0b21ee78002b9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.6071428571, "max_line_length": 80, "alphanum_fraction": 0.1751687922, "num_tokens": 268, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587905460026, "lm_q2_score": 0.66192288918838, "lm_q1q2_score": 0.5882897864297804}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage{enumitem}\n\\usepackage{amsmath}\n\\begin{document}\n\\title{MAT1830 --- Assignment 3}\n\\author{Dylan Pinn --- 24160547}\n\\maketitle\n\n\\section*{Question 1}\nConsider the following sentences. Under the interpretation where $x$ and $y$ range over the positive integers and $P(x, y)$ is the predicate \"$y=x+7$\", state whether each sentence is true or false and give a short explanation of why.\n\n\\begin{enumerate}[label= (\\alph*)]\n\t\\item $\\forall y \\exists x P(x, y)$\n\t\"For all $y$ there exists a $x$ such that $y = x + 7$\n\n\tThis is False, as picking an arbitrary number for $y$ such as 1, gives:\n\t$$y - x = 7$$\n\t$$1 - x = 7$$\n\t$$-x = 6$$\n\t$$x = -6$$\n\twhich does not fall in the range of positive integers.\n\t\\item $\\forall x \\exists y P(x, y)$\n\t\"For all $x$ there exists a $y$ such that $y = x + 7$\n\n\tThis is True, as the expression can be equated to $x = y - 7$ and for every value of $x$ in the range we can find a $y$ also in the range that makes the expression true. $y$ will always be 7 more than $x$.\n\t\\item $\\exists y \\forall x P(x, y)$\n\t\"There exists a $y$ for all $x$ such that $y = x + 7$\n\n\tThis is False, as picking an arbitrary value for $y$ such as 1 and picking any positive integer for $x$ such as 1 gives:\n\t$$1 + 1 = 7$$\n\t$$2 = 7$$\n\tWhich is False.\n\n\\end{enumerate}\n\\break\n\n\\section*{Question 2}\nUse logic laws to show that\n$$\\neg (\\forall xA(x) \\to \\forall x \\exists y \\neg B(x, y))$$\nis logically equivalent to\n$$\\forall xA(x) \\land \\exists x \\forall y B(x, y)$$\n\n$$\\neg (\\forall xA(x) \\to \\forall x \\exists y \\neg B(x, y))$$\n$$\\equiv \\neg (\\neg (\\forall xA(x) \\lor \\forall x \\exists y \\neg B(x, y)) \\text{ (Implication law) }$$\n$$\\equiv \\neg \\neg(\\forall xA(x)) \\land \\neg(\\forall x\\exists y \\neg B(x, y)) \\text{ (De Morgan's law)}$$\n$$\\equiv \\forall xA(x) \\land \\neg(\\forall x\\exists y \\neg B(x, y)) \\text{ (Double Negation law)} $$\n$$\\equiv \\forall xA(x) \\land \\neg(\\forall x \\neg \\forall yB(x, y))$$\n$$\\equiv \\forall xA(x) \\land \\neg(\\neg \\exists x \\forall yB(x, y))$$\n$$\\equiv \\forall xA(x) \\land \\neg\\neg \\exists x \\forall yB(x, y)$$\n$$\\equiv \\forall xA(x) \\land \\exists x \\forall yB(x, y) \\text{ (Double Negation law)} $$\n\nTherefore $$\\neg (\\forall xA(x) \\to \\forall x \\exists y \\neg B(x, y))$$ is logically equivalent to $$\\forall xA(x) \\land \\exists x \\forall y B(x, y)$$\n\n\\break\n\n\\section*{Question 3}\nIs the sentence\n$$(\\exists xQ(x) \\land \\exists xR(x)) \\to \\exists x(Q(x) \\land R(x))$$\nvalid?\n\nThe expression is not valid and is False. For this to be False the LHS must be True and the RHS must be True.\n\nLSH: $(\\exists xQ(x) \\land \\exists xR(x))$\n\nRSH: $(\\exists x(Q(x) \\land R(x))$\n\nAn implementation of this where this holds is $Q(x) \\text{ is } x = 0$ and $R(x) \\text{ is } x = 1$. The LHS of this is True as there exists a $x$ for $x=0$ and there exists a $x$ for $x=1$. The RHS is False because there doesn't exist a $x$ for which both $x=0$ and $x=1$.\n\n\\break\n\n\\section*{Question 4}\nProve using simple induction that, for each integer $n \\geq 1$,\n$$5+5^2+5^3+\\dots + 5^n = \\frac{5^{n+1}-5}{4}$$\n\nLet $P(n)$ be the statement $5+5^2+5^3+\\dots + 5^n = \\frac{5^{n+1}-5}{4}$\n\n\\emph{Base step.} The left hand side of $P(1)$ is 5 and the right hand side of $P(1)$ is $\\frac{5^{1+1}-5}{4} = \\frac{5^2-5}{4}=\\frac{25-5}{4} = \\frac{20}{4} = 5$. 
Therefore the base case is true.\n\n\\emph{Induction step.} For some integer $k \\geq 1$, assume that $P(k)$ is true. That is assume that\n$$5 + 5^2 + 5^3 + \\dots + 5^k = \\frac{5^{k+1}-5}{4}$$\n\nNow we need to prove that $P(k+1)$ is true. So we must show that\n$$5 + 5^2 + 5^3 + \\dots + 5^{k+1} = \\frac{5^{k+1+1}-5}{4}$$\n\nWorking with the left hand side of the equation we see that\n\n\\begin{align*}\n\t5+5^2+5^3+ \\dots + 5^{k+1} &= (5 + 5^2 + 5^3 + \\dots + 5^k) + 5^{k+1} \\\\\n\t&= \\frac{5^{k+1}-5}{4} + 5^{k+1} \\text{ (using our assumption)} \\\\\n\t&= \\frac{5^{k+1}-5}{4} + \\frac{20^{k+1}}{4} \\\\\n\t&= \\frac{25^{k+1}-5}{4} \\\\\n\t&= \\frac{5(5^{k+1}-1)}{4} \\\\\n\t&= \\frac{5(1^{k+1+1}-1)}{4} \\\\\n\t&= \\frac{5^{k+2}-5}{4}\n\\end{align*}\n\nWhich is the right hand side we required. Thus $P(k+1)$ is true. So we have proved by induction that $5+5^2+5^3+\\dots + 5^n = \\frac{5^{n+1}-5}{4}$ for each integer $n \\geq q$.\n\n\\end{document}\n", "meta": {"hexsha": "0081bbc90552e3be3fb01b82742277b26021a18c", "size": 4203, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignments/assignment-03.tex", "max_stars_repo_name": "dylanpinn/MAT1830", "max_stars_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-03-01T22:58:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T03:41:28.000Z", "max_issues_repo_path": "assignments/assignment-03.tex", "max_issues_repo_name": "dylanpinn/MAT1830", "max_issues_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-03-05T13:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2018-06-03T06:46:04.000Z", "max_forks_repo_path": "assignments/assignment-03.tex", "max_forks_repo_name": "dylanpinn/MAT1830", "max_forks_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-27T03:41:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T15:55:08.000Z", "avg_line_length": 42.03, "max_line_length": 273, "alphanum_fraction": 0.6164644302, "num_tokens": 1630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228625116081, "lm_q2_score": 0.8887587912826163, "lm_q1q2_score": 0.5882897632081462}}
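As a remark beyond the assignment itself, the identity can be spot-checked numerically before writing the proof. A minimal Python sketch (plain Python 3, no extra libraries):

\begin{verbatim}
# Check 5 + 5^2 + ... + 5^n == (5^(n+1) - 5) / 4 for small n
for n in range(1, 11):
    lhs = sum(5**i for i in range(1, n + 1))
    rhs = (5**(n + 1) - 5) // 4
    assert lhs == rhs, (n, lhs, rhs)
print("identity holds for n = 1..10")
\end{verbatim}

\end{document}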
{"text": "\\problemname{Disposable Switches}\n\n\\illustration{0.42}{cables_small}{\\href{https://flic.kr/p/7AMDgK}{Picture} by Ted Sakshaug on Flickr, cc by}%\n\\noindent\nHaving recently been hired by Netwerc Industries as a network engineer, your\nfirst task is to assess their large and dated office network. After mapping out\nthe entire network, which consists of network switches and cables\nbetween them, you get a hunch that, not only are some of the switches\nredundant, some of them are not used at all! Before presenting this to\nyour boss, you decide to make a program to test your claims.\n\nThe data you have gathered consists of the map of the network, as well as the\nlength of each cable. While you do not know the exact time it takes to send a\nnetwork packet across each cable, you know that it can be calculated as\n$\\ell / v + c$, where\n\\begin{itemize}\n    \\item $\\ell$ is the length of the cable,\n    \\item $v$ is the propagation speed of the cable, and\n    \\item $c$ is the overhead of transmitting the packet on and off the cable.\n\\end{itemize}\nYou have not been able to measure $v$ and $c$.  The only thing you know about\nthem is that they are real numbers satisfying $v > 0$ and $c \\ge 0$, and that\nthey are the same\nfor all cables. You also know that when a network packet is being transmitted\nfrom one switch to another, the network routing algorithms will ensure that the\npacket takes an \\emph{optimal path}---a path that minimises the total transit\ntime.\n\nGiven the map of the network and the length of each cable, determine which\nswitches could never possibly be part of an optimal path when transmitting a network packet from\nswitch $1$ to switch $n$, no matter what the values of $v$ and $c$ are.\n\n\\section*{Input}\nThe input consists of:\n\\begin{itemize}\n    \\item One line with two integers $n$, $m$ ($2 \\leq n \\leq 2\\,000$, $1 \\leq m \\leq 10^4$), the number\n    of switches and the number of cables in the network. The switches are numbered\n    from $1$ to $n$.\n    \\item $m$ lines, each with three integers $a$, $b$ and $\\ell$ ($1 \\leq a, b\n    \\leq n$, $1\\leq \\ell \\leq 10^9$), representing a network cable of length\n    $\\ell$ connecting switches $a$ and $b$.\n\\end{itemize}\n\nThere is at most one cable connecting a given pair of switches, and no cable\nconnects a switch to itself. 
It is guaranteed that there exists a path between\nevery pair of switches.\n\n\\section*{Output}\nFirst output a line with an integer $k$, the number of switches that could never possibly be\npart of an optimal path when a packet is transmitted from switch $1$ to switch $n$.\nThen output a line with $k$ integers,\nthe indices of the $k$ unused switches, in increasing order.\n\n", "meta": {"hexsha": "8e0aaf720df2c66e42bfd0998e0ef5689f66d388", "size": 2661, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ICPC_Mirrors/Nitc_9.0/nwerc2019all/disposableswitches/problem_statement/problem.en.tex", "max_stars_repo_name": "Shahraaz/CP_P_S5", "max_stars_repo_head_hexsha": "b068ad02d34338337e549d92a14e3b3d9e8df712", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ICPC_Mirrors/Nitc_9.0/nwerc2019all/disposableswitches/problem_statement/problem.en.tex", "max_issues_repo_name": "Shahraaz/CP_P_S5", "max_issues_repo_head_hexsha": "b068ad02d34338337e549d92a14e3b3d9e8df712", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICPC_Mirrors/Nitc_9.0/nwerc2019all/disposableswitches/problem_statement/problem.en.tex", "max_forks_repo_name": "Shahraaz/CP_P_S5", "max_forks_repo_head_hexsha": "b068ad02d34338337e549d92a14e3b3d9e8df712", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.2777777778, "max_line_length": 109, "alphanum_fraction": 0.7414505825, "num_tokens": 714, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7853085909370423, "lm_q1q2_score": 0.5882646238200826}}
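\section*{Note}
A remark on the cost model (not part of the original problem statement): a path using cables of lengths $\ell_1, \dots, \ell_k$ has total transit time $\left(\sum_{i} \ell_i\right)/v + c \cdot k$, so for any fixed $v$ and $c$ a path is fully described by its total length and its number of cables. In particular, if some other path has both strictly smaller total length and strictly fewer cables, the first path can never be optimal. A minimal Python illustration of the cost model:

\begin{verbatim}
# Transit time of a path, given the lengths of its cables:
#   time = total_length / v + c * num_cables
def path_time(lengths, v, c):
    return sum(lengths) / v + c * len(lengths)

p1, p2 = [3, 4], [9]            # p1 is shorter but uses two cables
print(path_time(p1, 1.0, 0.0))  # 7.0  -- p1 wins when overhead is free
print(path_time(p2, 1.0, 0.0))  # 9.0
print(path_time(p1, 1.0, 5.0))  # 17.0
print(path_time(p2, 1.0, 5.0))  # 14.0 -- p2 wins when overhead dominates
\end{verbatim}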
{"text": "\\section{Ballot-polling audits of a tolerable overstatement in votes}\n\\label{sec:ballotPollError}\n\nIn this section we develop a new method for ballot-polling audits that can test numerical margins,\nrather than just test whether a candidate won.\nThis requires a different approach than that taken by \\cite{lindemanEtal12}.\n\nExisting ballot-polling methods consider only the fraction of ballots with a vote for either \n$w$ or $\\ell$ that contain a vote for $w$,\nmaking the statistical test one for a proportion.\nTo allow the error to be partitioned across the strata via $\\lambda_s$,\nthe necessary inference is about the \n\\emph{difference} between the number of votes for $w$ and the number of votes for $\\ell$.\nThis introduces a nuisance parameter, the number of ballots with votes for either $w$ or $\\ell$.\nWe deal with the nuisance parameter by maximizing the $P$-value \nover all possible values of the nuisance parameter, which ensures that the test is conservative.\n\n\\subsection{Conditional tri-hypergeometric test}\n\nWe consider a single stratum $s$, containing $N_s$ ballots.\nOf the $N_s$ ballots,\n$A_{w,s}$ have a vote for $w$ but not for $\\ell$, $A_{\\ell,s}$ have a vote for $\\ell$ but not for $w$, and $A_{u,s} = N_s - N_{w,s} - N_{\\ell,s}$ have votes for both $w$ and $\\ell$ or neither $w$ nor $\\ell$, including undervotes and invalid ballots.\nWe might draw a simple random sample of $n$ ballots ($n$ fixed ahead of time), or we might draw \nsequentially without replacement, so the sample size $B$ could be random.\nFor instance, the rule for determining $B$ could depend on the data.\\footnote{%\n   Sampling with replacement leads to simpler arithmetic, but is not as efficient.\n}\n\nRegardless, we assume that, conditional on the attained sample size $n$, the ballots are a simple random sample of size $n$ from the $N_s$ ballots in the population.\nIn the sample, $B_w$ ballots contain a vote for $w$ but not $\\ell$, with $B_\\ell$ and $B_u$ defined analogously.\nThe conditional joint distribution of\n$(B_w, B_\\ell, B_u)$ is tri-hypergeometric: \n\n\\begin{equation}\n    \\mathbb{P}_{A_{w,s}, A_{\\ell,s}} \\{ B_w = i, B_\\ell = j \\vert B=n \\} = \n     \\frac{ {A_{w,s } \\choose i}{A_{\\ell,s} \\choose j}{N_s - A_{w,s} - A_{\\ell,s} \\choose n-i-j}}{{N_s \\choose n}}.\n\\end{equation}\n\nDefine the diluted sample margin, $D \\equiv (B_w - B_\\ell)/B$.\nWe want to test the compound hypothesis $A_{w,s} - A_{\\ell,s} \\le c$.\nThe value of $c$ is inferred from the definition\n$\\omega_{w\\ell,s} \\equiv V_{w\\ell,s} - A_{w\\ell,s} = V_{w,s} - V_{\\ell,s} - (A_{w,s} -A_{\\ell,s})$.\nThus,\n\n\\beq\n    c = V_{w,s} - V_{\\ell,s} - \\omega_{w\\ell,s} = V_{w\\ell,s} - \\lambda_s V_{w\\ell}.\n\\eeq\n\nThe alternative is the compound hypothesis \n$A_{w,s} - A_{\\ell,s} > c$.\\footnote{%\n    To use Wald's Sequential Probability Ratio Test, we might pick a simple alternative instead, e.g.,\n   $A_{w,s} = V_{w,s}$ and $A_{\\ell,s} = V_{\\ell,s}$, the reported values, provided \n   $V_{w,s} - V_{\\ell,s} > c$.\n}\nHence, we will reject for large values of $D$.\nConditional on $B=n$, the event $D = (B_w - B_\\ell)/B = d$ is the same as $B_w - B_\\ell = nd$.\\footnote{%\nIn contrast, the BRAVO ballot-polling\nmethod~\\cite{lindemanEtal12}\nconditions only on $B_w+B_\\ell = m$.\n}\n\n\nThe $P$-value of the simple hypothesis that there are $A_{w,s}$ ballots with\na vote for $w$ but not for $\\ell$, $A_{\\ell,s}$ ballots with a vote for $\\ell$ but not for $w$, \nand $N - 
A_{w,s} - A_{\\ell,s}$ ballots with votes for both $w$ and $\\ell$ or neither $w$ nor $\\ell$ \n(including undervotes and\ninvalid ballots) is the probability that $B_w - B_\\ell \\geq nd$.\nTherefore,\n\n\\begin{equation}\n   \\mathbb{P}_{A_{w,s}, A_{\\ell,s}, N_s} \\left \\{ D \\geq d \\;\\vert\\; B = n\\right \\} = \n   \\sum_{\\substack{(i, j) :  i, j\\ge 0 \\\\ i-j \\geq nd \\\\ i+j \\leq n}} \\frac{ {A_{w,s } \\choose i}{A_{\\ell,s} \\choose j}{N_s - A_{w,s} - A_{\\ell,s} \\choose n-i-j}}{{N_s \\choose n}}.\n\\end{equation}\n\n\n%\\subsection{Conditional hypergeometric test}\n%Another approach is to condition on both the events $B=n$ and $B_w+B_\\ell=m$.\n%We describe the hypothesis test here, but do not advocate for using it.\n%We found that this approach was inefficient in some simulation experiments.\n%\n%Given $B=n$, all samples of size $n$ from the ballots are equally likely, by hypothesis.\n%Hence, in particular, all samples of size $n$ for which $B_w + B_\\ell = m$ are equally likely.\n%There are ${A_{w,s}+A_{\\ell,s} \\choose m}{N_s - A_{w,s}-A_{\\ell,s} \\choose n-m}$ such samples.\n%Among these samples, $B_w$ may take values $i=0, 1, \\dots, m$.\n%For a fixed $i$, there are ${A_{w,s} \\choose i}{A_{\\ell, s} \\choose m-i}{N_s - A_{w,s} - A_{\\ell,s} \\choose n-m}$\n%samples with $B_w=i$ and $B_\\ell = m-i$.\n%\n%The factor ${N_s - A_{w,s} - A_{\\ell,s} \\choose n-m}$ counts the number of ways to sample $n-m$ of the\n%remaining ballots.\n%If we divide out this factor, we simply count the number of ways to sample ballots\n%from the group of ballots for $w$ or for $\\ell$.\n%There are ${A_{w,s}+A_{\\ell,s} \\choose m}$ equally likely samples of size $m$ from\n%the ballots with either a vote for $w$ or for $\\ell$, but not both, \n%and of these samples, ${A_{w,s} \\choose i}{A_{\\ell, s} \\choose m-i}$ contain $i$ ballots with a vote for $w$ but not $\\ell$.\n%Therefore, conditional on $B=n$ and $B_w+B_\\ell=m$, the probability that $B_w=i$ is\n%\n%$$\\frac{{A_{w,s} \\choose i}{A_{\\ell, s} \\choose m-i}}{{A_{w,s}+A_{\\ell,s} \\choose m}}.$$\n%\n%The $P$-value of the simple hypothesis that there are $A_{w,s}$ ballots with\n%a vote for $w$ but not for $\\ell$, $A_{\\ell,s}$ ballots with a vote for $\\ell$ but not for $w$, \n%and $N - A_{w,s} - A_{\\ell,s}$ ballots with votes for both $w$ and $\\ell$ or neither $w$ nor $\\ell$ \n%(including undervotes and\n%invalid ballots) is the sum of these probabilities for events when $B_w - B_\\ell \\geq nd$.\n%This event occurs for $B_w \\geq \\frac{m+nd}{2}$.\n%Therefore,\n%\n%\\begin{equation}\n%   \\mathbb{P}_{A_{w,s}, A_{\\ell,s}, N_s} \\left \\{ D \\geq d \\;\\vert\\; B = n, B_w+B_\\ell = m \\right \\} = \n%   \\sum_{i=(m+nd)/2}^{\\min\\{m, A_{w,s}\\}} \\frac{{A_{w,s} \\choose i}{A_{\\ell, s} \\choose m-i}}{{A_{w,s}+A_{\\ell,s} \\choose m}}.\n%\\end{equation}\n%\n%\n%This conditional $P$-value is thus the tail probability of the hypergeometric distribution\n%with parameters $A_{w,s}$ ``good'' items, $A_{\\ell,s}$ ``bad'' items, and a sample of size $m$.\n%This calculation is numerically stable and fast; tail probabilities of the hypergeometric distribution are available\n%and well-tested in all standard statistics software.\n\n\\subsection{Maximizing the $P$-value over the nuisance parameter}\n\nThe composite null hypothesis does not specify $A_{w,s}$ or $A_{\\ell,s}$ separately, only \nthat $A_{w,s} - A_{\\ell,s} \\le c$ for\nsome fixed, known $c$.\nDefine $\\mathcal{S}$ to be the set of pairs $(i, j)$ such that $i, j\\ge 0, i-j \\ge 
nd,$ and $ i+j \\leq n$.\nThe (conditional) $P$-value of this composite hypothesis for $D=d$ is the maximum $P$-value for all\nvalues $(A_{w,s}, A_{\\ell,s})$ that are possible under the null hypothesis,\n\\begin{equation}\n  \\max_{A_{w,s}, A_{\\ell,s} \\in \\{0, 1, \\ldots, N \\}: A_{w,s} - A_{\\ell,s} \\le c, A_{w,s} + A_{\\ell,s} \\le N_s}\n   \\sum_{\\substack{(i, j)\\in \\mathcal{S}}} \\frac{ {A_{w,s } \\choose i}{A_{\\ell,s} \\choose j}{N_s - A_{w,s} - A_{\\ell,s} \\choose n-i-j}}{{N_s \\choose n}},\n\\end{equation}\nwherever the summand is defined. \n(Equivalently, define ${m \\choose k} \\equiv 0$ if $k > m$, $k < 0$, or $m \\le 0$.)\n\n\\subsubsection{Characterizing the optimal solution}\nThe following result enables us to only test hypotheses along the boundary of the null set.\n\n\\begin{thm}\nAssume that $n < A_{w,s}+A_{\\ell,s}$.\nSuppose the composite null hypothesis is $N_w - N_\\ell \\leq c$.\nThe $P$-value is maximized on the boundary of the null region, i.e. when $N_w - N_\\ell = c$.\n\\end{thm}\n\n\\begin{proof}\nWithout loss of generality, let $c=0$ and assume that $A_{u,s}=N_s - A_{w,s} - A_{\\ell,s}$ is fixed.\nLet $N_{w\\ell, s} \\equiv A_{w,s}+A_{\\ell,s}$ be the fixed, unknown number of ballots for $w$ or for $\\ell$ in stratum $s$.\nThe $P$-value $p_0$ for the simple hypothesis that $c=0$ is\n\n\\begin{equation}\n  p_0 = \\sum_{\\substack{(i, j) \\in \\mathcal{S}}} \\frac{ {N_{w\\ell, s}/2 \\choose i}{N_{w\\ell, s}/2 \\choose j}{A_{u,s} \\choose n-i-j}}{{N_s \\choose n}} =  \\sum_{\\substack{(i, j) \\in \\mathcal{S}}}T_{ij},\n\\end{equation}\n\n\\noindent where $T_{ij}$ is defined as the $(i, j)$ term in the summand and $T_{ij} \\equiv 0$ for pairs $(i, j)$ that don't appear in the summation.\n\nAssume that $c>0$ is given.\nThe $P$-value $p_c$ for this simple hypothesis is\n\\begin{align*}\np_c &=   \\sum_{\\substack{(i, j) \\in \\mathcal{S}}}  \\frac{ {(N_{w\\ell, s}+c)/2 \\choose i}{(N_{w\\ell, s}-c)/2 \\choose j}{A_{u,s} \\choose n-i-j}}{{N_s \\choose n}}  \\\\\n   &=\\sum_{\\substack{(i, j) \\in \\mathcal{S}}} T_{ij} \\frac{ \\frac{N_{w\\ell, s}+c}{2}(\\frac{N_{w\\ell, s}+c}{2}-1)\\cdots(\\frac{N_{w\\ell, s}}{2}+1) (\\frac{N_{w\\ell, s}-c}{2} -j)\\cdots(\\frac{N_{w\\ell, s}}{2}-1-j) }\n   {(\\frac{N_{w\\ell, s}+c}{2} -i)\\cdots(\\frac{N_{w\\ell, s}}{2}+1-i)(\\frac{N_{w\\ell, s}-c}{2})\\cdots(\\frac{N_{w\\ell, s}}{2}-1)}.\n\\end{align*}\n\nTerms in the fraction can be simplified: choose the corresponding pairs in the numerator and denominator.\nFractions of the form $\\frac{\\frac{N_{w\\ell, s}}{2} + a}{\\frac{N_{w\\ell,s}}{2} + a - i}$ can be expressed as $1 + \\frac{i}{\\frac{N_{w\\ell,s}}{2} + a-i}$.\nFractions of the form $\\frac{\\frac{N_{w\\ell, s}}{2}  - a - j}{\\frac{N_{w\\ell, s}}{2}  - a}$ can be expressed as $1 - \\frac{j}{\\frac{N_{w\\ell, s}}{2} -a}$.\nThus, the $P$-value can be written as \n\n\\begin{align*}\np_c &= \\sum_{\\substack{(i, j) \\in \\mathcal{S}}} T_{ij} \\prod_{a=1}^{c/2} \\left(1 + \\frac{i}{\\frac{N_{w\\ell,s}}{2} + a-i}\\right)\\left(1 - \\frac{j}{\\frac{N_{w\\ell, s}}{2} - a}\\right) \\\\\n&> \\sum_{\\substack{(i, j) \\in \\mathcal{S}}}  T_{ij} \\left[ \\left(1 + \\frac{i}{\\frac{N_{w\\ell,s}+c}{2} -i}\\right)\\left(1 - \\frac{j}{\\frac{N_{w\\ell, s}}{2}+1}\\right) \\right]^{c/2} \\\\\n&= \\sum_{\\substack{(i, j) \\in \\mathcal{S}}} T_{ij} \\left[ 1 + \\frac{\\frac{N_{w\\ell,s}+c}{2}j + \\frac{N_{w\\ell,s}}{2}i + i}{(\\frac{N_{w\\ell,s}+c}{2}-i)(\\frac{N_{w\\ell,s}}{2}+1)}\\right]^{c/2} \\\\\n&> \\sum_{\\substack{(i, j) \\in 
\\mathcal{S}}}  T_{ij}\\\\\n&= p_0\n\\end{align*}\n\nThe last inequality follows from the fact that $i$ and $j$ are nonnegative, and \nthat $i < \\frac{N_{w\\ell,s}+c}{2}$ (it is a possible outcome under the null hypothesis).\n\n\n\\end{proof}\n\n\\subsubsection{Solving the optimization problem}\n\nWe have found empirically (but have not proven) that given $N$, $c$, and the observed sample values $B_w$ and $B_\\ell$, the tail probability $p_c$, as a function of $A_{w,s}$,\nhas a unique maximum at one of the endpoints, where $A_{w,s}$ is either as small or as large as possible.\nIf this empirical result is true in general, then finding the maximum is trivial;\notherwise, computing the unconditional $P$-value is a simple 1-dimensional optimization problem\non a bounded interval.\n\n\\subsection{Conditional testing}\nIf the conditional tests are always conducted at significance level $\\alpha$ or less, so that\n$\\mathbb{P} \\{\\mbox{Type I error} | B = n\\} \\le \\alpha$, then the\noverall procedure has significance level $\\alpha$ or less:\n\\begin{eqnarray}\n    \\mathbb{P} \\{\\mbox{Type I error}\\} &=& \\sum_{n=0}^N  \\mathbb{P}\\{\\mbox{Type I error} |  B = n\\} \\mathbb{P} \\{ B = n \\} \\nonumber \\\\\n       & \\le & \\sum_{n=0}^N \\alpha \\mathbb{P} \\{  B = n \\}  =  \\alpha.\n\\end{eqnarray}\n\nIn particular, this implies that our conditional hypergeometric test will have a conservative $P$-value  unconditionally.\n", "meta": {"hexsha": "7ad2c4b3aa63a0d9e8e06853e8f8f2d2452bdb78", "size": 11510, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ballotPollError.tex", "max_stars_repo_name": "zwt16300180060/CORLA18", "max_stars_repo_head_hexsha": "9e5826245f80aeb8923c84550b8f8e39ed0325c5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-09-10T08:15:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T08:35:06.000Z", "max_issues_repo_path": "ballotPollError.tex", "max_issues_repo_name": "ElectionAuditWare/CORLA18", "max_issues_repo_head_hexsha": "f03af7a7b514746f40426bb204c531fabcd10baf", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2018-08-14T16:43:48.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-27T22:23:38.000Z", "max_forks_repo_path": "ballotPollError.tex", "max_forks_repo_name": "ElectionAuditWare/CORLA18", "max_forks_repo_head_hexsha": "f03af7a7b514746f40426bb204c531fabcd10baf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-09-12T03:06:12.000Z", "max_forks_repo_forks_event_max_datetime": "2018-12-01T13:43:19.000Z", "avg_line_length": 58.1313131313, "max_line_length": 249, "alphanum_fraction": 0.6587315378, "num_tokens": 4092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085909370422, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5882646194211668}}
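To make the preceding computations concrete, the following brute-force Python sketch (ours, not from the audit literature; it assumes Python 3.8+ for \texttt{math.comb}) evaluates the conditional tri-hypergeometric tail probability and maximizes it along the boundary $A_{w,s} - A_{\ell,s} = c$, as justified by the theorem above:

\begin{verbatim}
from math import comb

def tail_prob(A_w, A_l, N_s, n, d):
    # P{ B_w - B_l >= n d | B = n } under the simple hypothesis
    # (A_w, A_l); math.comb returns 0 when the top index is exceeded.
    A_u = N_s - A_w - A_l
    assert A_u >= 0
    p = 0
    for i in range(min(A_w, n) + 1):
        for j in range(min(A_l, n - i) + 1):
            if i - j >= n * d:
                p += comb(A_w, i) * comb(A_l, j) * comb(A_u, n - i - j)
    return p / comb(N_s, n)

def max_p_value(c, N_s, n, d):
    # Search the boundary A_w - A_l = c (sufficient by the theorem),
    # a 1-dimensional search over A_w on a bounded interval.
    best = 0.0
    for A_w in range(max(c, 0), N_s + 1):
        A_l = A_w - c
        if A_l < 0 or A_w + A_l > N_s:
            continue
        best = max(best, tail_prob(A_w, A_l, N_s, n, d))
    return best

print(max_p_value(c=10, N_s=200, n=50, d=0.1))
\end{verbatim}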
{"text": "%!TEX root = ../notes.tex\n\\section{April 20, 2022}\n\\subsection{Short Vectors}\n\\begin{ques*}\n    How short is the shortest vector in a lattice $L$?\n\\end{ques*}\n\nA more general question we could ask:\n\\begin{ques*}\n    When does some region contain a nontrivial lattice point?\n\\end{ques*}\n\n\\begin{theorem}[Minkowski's Theorem]\n    Let $L\\subseteq \\RR^n$ be a lattice of dimension $n$. Let $S\\subseteq\\RR^n$ be a \\emph{bounded symmetric convex} set.\n\n    If $\\mathsf{Vol}(S)>2^n\\det(L)$, then $S\\cap L$ contains a nonzero lattice point.\n\\end{theorem}\n\n\\begin{definition}[Bounded Set]\n    $\\{\\text{Lengths of vectors in }S\\}$ is bounded.\n\n    In other words, there is some ball that contains $S$.\n\\end{definition}\n\n\\begin{definition}[Symmetric Set]\n    If $\\bvec{v}\\in S$, then $-\\bvec{v}\\in S$.\n\\end{definition}\n\n\\begin{definition}[Convex Set]\n    If $\\bvec{v}, \\bvec{w}$, then the line segment connecting $\\bvec{v}$ and $\\bvec{w}$ is a subset of $S$.\n\\end{definition}\n\n\\begin{proof}[Proof of Minkowski's Theorem]\n    Let $\\mathcal{F}$ be a fundamental domain.\n\n    Any vector $\\bvec{w} = t(\\bvec{w}) + v(\\bvec{w})$ where $t(\\bvec{w})\\in \\mathcal{F}$ and $v(\\bvec{w})\\in L$.\n\n    Consider map $t: \\frac{1}{2}S \\mapsto F$ by sending every vector $t: \\bvec{w}\\mapsto t(\\bvec{w})$.\n\n    What does $t$ do to volume? We cut $S$ up into finite number of regions, and `cut-and-paste' them into the fundamental domain.\n\n    Locally, $t$ preserves volume. When must two points in $\\frac{1}{2}S$ be sent to the same point in $F$? When we have `carpet' with area greater than room area. That is to say, $\\mathsf{Vol}(\\frac{1}{2} S) > \\mathsf{Vol}(\\mathcal{F})$ implies that there is an overlapping point.\n\n    This is to say\n    \\begin{align*}\n        \\mathsf{Vol}\\left(\\frac{1}{2} S\\right) & > \\mathsf{Vol}(\\mathcal{F})           \\\\\n        \\frac{1}{2^n}\\mathsf{Vol}(S)           & > \\mathsf{Vol}(\\mathcal{F}) = \\det(L) \\\\\n        \\mathsf{Vol}(S)                        & > 2^n \\det(L)\n    \\end{align*}\n\n    So given this inequality, there are two points in $\\frac{1}{2}S$ such that $t\\left( \\frac{1}{2}\\bvec{w}_1 \\right) = t\\left( \\frac{1}{2}\\bvec{w}_2 \\right)$. Then we know that\n    \\[\\frac{1}{2}\\bvec{w}_1 - \\frac{1}{2}\\bvec{w}_2 \\in L\\]\n    So then consider\n    \\[\\frac{1}{2}\\bvec{w}_1 - \\frac{1}{2}\\bvec{w}_2 = \\frac{1}{2}(\\bvec{w}_1 - \\bvec{w}_2)\\]\n    which is the midpoint of $\\bvec{w}_1$ and $\\bvec{w}_2$, which is in $S$ and in $L$. So $S$ contains a nonzero lattice point.\n\\end{proof}\n\n\\begin{theorem}[Variant of Minkowski's Theorem]\n    If $S\\subseteq \\RR^n$ is bounded, symmetric, convex and \\emph{closed} set, then if\n    \\[\\mathsf{Vol}(S)\\geq 2^n\\det(L)\\]\n    $S\\cap L$ contains a nonzero lattice point.\n\\end{theorem}\n\\begin{definition}[Closed Set]\n    Every limit point of $S$ is contained in $S$.\n\\end{definition}\n\\emph{We added the condition that $S$ be closed, and changed our bound to be a $\\geq$. }\n\\begin{proof}[Proof of variant]\n    For any $k$:\n    \\[\\left( 1 + \\frac{1}{k} \\right)S \\cap L\\]\n    contains $\\bvec{v}_k\\neq \\bvec{0}\\in L$ (which is true by our first version).\n\n    The sequence $\\bvec{v}_1, \\bvec{v}_2, \\bvec{v}_3, \\dots$ is a sequence in $2S\\cap L$. $2S$ is bounded, so we have a finite set of lattice points. 
There's some $\\bvec{v}\\neq \\bvec{0}$ is contained in $\\bigcap_{k}\\left( 1 + \\frac{1}{k} \\right) S = S$ because $S$ is closed.\n\\end{proof}\n\\begin{corollary}[Hermite's Theorem]\n    Let $L$ be a lattice of dimension $n$ in $\\RR^n$. Then, $L$ contains a vector $\\bvec{v}$ with\n    \\[||\\bvec{v|}|\\leq \\sqrt{n}\\cdot \\det(L)^\\frac{1}{n}\\]\n\\end{corollary}\n\\begin{proof}\n    \\emph{Application of Minkowski's Theorem.} Apply Minkowski's Theorem to\n    \\[\\left\\{ (x_1, \\dots, x_n)\\Bigm\\vert |x_i|\\leq \\det(L)^{1/n} \\right\\}\\]\n    which is a cube with side length $2\\cdot \\det(L)^{1/n}$. So $\\textsf{Vol}(S) = 2^n\\cdot \\det(L)$. The diagonal has length $\\sqrt{n}\\det(L)^{1/n}$.\n\\end{proof}\nA variant of Hermite's Theorem is that we can find an entire basis $\\bvec{v}_1, \\bvec{v}_2, \\dots, \\bvec{v}_n$ such that\n\\[||\\bvec{v}_1||\\cdot||\\bvec{v}_2||\\cdots ||\\bvec{v}_n|| \\leq n^{n/2}\\det(L)\\]\nand we define the Hadamard ratio to be\n\\[\\mathcal{H} = \\left( \\frac{\\det(L)}{||\\bvec{v}_1||\\cdot||\\bvec{v}_2||\\cdots ||\\bvec{v}_n||} \\right)^{1/n}\\]\nwhere $0< \\mathcal{H} \\leq 1$ and $\\mathcal{H} = 1$ when our basis is orthogonal.", "meta": {"hexsha": "cbbadf296962d8fa421bf5db9ab5105d332cce57", "size": 4344, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-04-20.tex", "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_issues_repo_path": "lectures/2022-04-20.tex", "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-04-20.tex", "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9310344828, "max_line_length": 281, "alphanum_fraction": 0.6298342541, "num_tokens": 1617, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.588264611258356}}
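As a quick numerical companion to Hermite's Theorem and the Hadamard ratio (not part of the lecture; it assumes \texttt{numpy}), the following Python snippet computes $\mathcal{H}$ for a basis given as matrix rows and checks the bound $\|\bvec{v}\|\leq \sqrt{n}\cdot\det(L)^{1/n}$ on a short basis vector:

\begin{verbatim}
import numpy as np

def hadamard_ratio(B):
    # H = (det(L) / (||v_1|| ... ||v_n||))^(1/n), rows of B = basis
    n = B.shape[0]
    det_L = abs(np.linalg.det(B))
    norms = np.linalg.norm(B, axis=1)
    return (det_L / np.prod(norms)) ** (1.0 / n)

B = np.array([[2.0, 0.0],
              [1.0, 2.0]])                 # det(L) = 4, n = 2
print(hadamard_ratio(B))                   # about 0.95, < 1 (not orthogonal)
bound = np.sqrt(2) * 4 ** (1 / 2)          # sqrt(n) * det(L)^(1/n) = 2.83
print(np.linalg.norm(B[0]), "<=", bound)   # the vector (2, 0) satisfies it
\end{verbatim}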
{"text": "\\chapter{Backgrounds}\n\\label{sec:background}\n\nIn this chapter, we briefly review related backgrounds.\n\n\\section{E-graphs and e-matching}\n\n\\paragraph{Terms.}\nLet $\\Sigma$ be a set of function symbols with associated arities. \nA function symbol is called a \\textit{constant} if it has a zero arity. \nLet $V$ be the set of variables. We define $T(\\Sigma, V)$ to be the set of terms constructed using function symbols from $\\Sigma$ and variables from $V$. \nMore formally, $T(\\Sigma, V)$ is the smallest set that (1) all variables and constants are in $T(\\Sigma,V)$ and (2) $t_1,\\dots,t_k\\in T(\\Sigma,V)$ implies $f(t_1,\\dots,t_k)\\in T(\\Sigma,V)$, where $f\\in \\Sigma$ has arity $k$. \nA \\textit{ground term} is a term in $T(\\Sigma,V)$ that contains no variables. A non-ground term is also called a \\textit{pattern}. We call a term of the form $f(t_1,\\ldots,t_k)$ an $f$-application term.\n\n\\paragraph{Congruence relation.}\nAn \\textit{equivalence relation} $\\equiv_\\Sigma$ is a binary relation over $T(\\Sigma,\\emptyset)$ that is reflexive, symmetric, and transitive. A \\textit{congruence relation} $\\cong_\\Sigma$ is an equivalence relation satisfying that if $t_i\\cong t'_i$ for all $i=1,\\dots,n$ hold, $f(t_1,\\dots,t_n)\\cong f(t'_1,\\dots,t'_n)$ holds as well, where $f$ is a $n$-ary function symbol. We wrote $\\cong$ when $\\Sigma$ is clear from the context.\n\n\n\\paragraph{\\Egraph.}\nIntuitively, an \\egraph $E$ is a set of \\eclasses $\\{c_1, \\ldots, c_k\\}$, where each \\eclass $c$ is a set of \\enodes $\\{n_1,\\ldots,n_k\\}$. Each \\enode consists of a function symbol $f$ and a list of children \\eclasses. \nSimilar to patterns, we call an \\enode of the form $f(c_1, \\ldots c_n)$ an $f$-application \\enode.\nMore formally, we define an \\egraph $E$ to be an tuple $(\\Sigma,N,C,\\textit{symbol}, \\textit{lookup}, \\textit{child})$ where \n\\begin{itemize}\n    \\item $\\Sigma$ is a set of function symbols with associated arities,\n    \\item $N$ is the set of \\enodes,\n    \\item $C$ is the set of \\eclasses,\n    \\item \\textit{symbol} is a function that maps each \\enode in $N$ to a function symbol in $\\Sigma$,\n    \\item \\textit{lookup} is a function that maps every \\enode in $N$ to the \\eclass in $C$ that contains it, and\n    \\item \\textit{child} is a function that maps an $\\enode$ to a list of \\eclass that are the children of this \\enode.\n\\end{itemize}\nFor convenience, we use $n.\\sym$, $n.\\textit{id}$, and $n.\\child_i$ to denote $\\textit{symbol}\\,(n)$, $\\textit{lookup}\\,(n)$, and the $i$th element of $\\child(n)$.\n\nAn \\egraph $E$ efficiently represents sets of ground terms in a congruence relation. \nAn \\egraph is said to \\textit{represent} a ground term $t$ if its \\eclasses represent it.\nAn \\eclass $c$ represents a ground term if $c$ contains an \\enode $a$ that represents it. \nAn \\enode $f(c_1,\\dots,c_k)$ represents a ground term $f(t_1,\\dots,t_k)$ if they have the same function symbol $f$ and each e-class $c_i$ represents term $t_i$.\nTerms represented by an \\eclass are equivalent to each other.\nConsequently, an \\egraph forms a congruence relation.\nConsider \\autoref{fig:egraph} for example. $g(f(c,c))$, $f(c, g(b))$, and $f(a, g(a))$ are three ground terms represented by \\eclass $c_1$ in this \\egraph. 
Therefore, they are congruent with each other by definition.\n\n\\begin{figure}[!t]\n    \\hspace{-2em}\n  \\begin{tabular}[b]{cc}\n    \\begin{subfigure}[b]{0.45\\linewidth}\n        \\includegraphics[width=\\linewidth]{figures/egraph.png}\n      \\caption{}\n      \\label{fig:egraph}\n    \\end{subfigure}\n    \\begin{tabular}[b]{c}\n      \\begin{subfigure}[t]{0.45\\columnwidth}\n      \\centering\n        \\begin{tabular}{|ccc|}\n            \\hline\n            eclass-id & $\\text{child}_1$ & $\\text{child}_2$ \\\\\n            \\hline\n            1         & 4        & 3        \\\\\n            2         & 4        & 4       \\\\\n            \\hline\n        \\end{tabular}\n        \\caption{}\n        \\label{fig:repr-f}\n      \\end{subfigure}\\\\\n      \\begin{subfigure}[b]{0.5\\columnwidth}\n      \\centering  \n        \\begin{tabular}{|cc|}\n            \\hline\n            eclass-id & $\\text{child}_1$ \\\\\n            \\hline\n            1         & 2         \\\\\n            3         & 4         \\\\\n            3         & 5         \\\\\n            \\hline\n        \\end{tabular}\n        \\caption{}\n        \\label{fig:repr-g}\n      \\end{subfigure}\n    \\end{tabular}\n  \\end{tabular}\n  \\caption{(a) an e-graph over $T(\\Sigma, \\emptyset)$ and  $\\cong_\\Sigma$ where $\\Sigma=\\{f,g,a,b,c\\}$ and $a,b,c$ are nullary functions. Each solid box denotes an e-node and each dashed box denotes an e-class. Every term represented by an e-class is mutually equivalent. For example, $a\\cong_\\Sigma c$, $g(a)\\cong_\\Sigma g(b)$, and $f(a, g(a)) \\cong_\\Sigma g(f(a, a))$. The labels of e-classes are at bottom left.\\quad (b) relation representing $f$. \\quad(c) relation representing $g$.}\n\\end{figure}\n\n\\paragraph{\\Ematching.}\n\\Ematching is the task of finding \\ematching substitutions that instantiate patterns to set of terms represented in the \\egraph. \nAn \\textit{\\ematching substitution} $\\sigma$ is a function that maps every variable in a pattern to e-classes.\nFor convenience, we use $\\sigma(p)$ to denote the set of terms obtained by replacing every occurrence of variable $v_i$ in $p$ with terms in $\\sigma(v_i)$.\nFormally, given an e-graph $E$ and a pattern $p$,  \\ematching finds the set of all possible pairs $(\\sigma, r)$ such that every term in $\\sigma(p)$ is represented in the e-class $r$.\nTerms in $\\sigma(p)$ are said to be matched by pattern~$p$. $r$ is said to be the root of matched terms.\nFor instance, pattern $f(\\alpha, g(\\alpha))$\n matches four terms in e-class $c_1$: $f(a, g(a))$, $f(a, g(c))$, $f(c,g(c))$, and $f(c, g(a))$; \n all of which are witnessed by the substitution $\\{\\alpha \\mapsto c_4\\}$.\n\nExisting approaches to e-matching rely on backtracking \\citep{efficient-ematching,simplify,egg}. \nFor example, \\citet{efficient-ematching} proposed a backtracking-based e-matching algorithm that is used by \\textsc{Z3} \\citep{Z3} and \\egg \\citep{egg}, two state-of-the-art e-graph implementations. 
\nTo match the pattern $f(\\alpha, g(\\alpha))$ on the e-graph in Figure \\ref{fig:egraph}, their algorithm does a depth-first search over the e-graph:\n% Conceptually, it recursively searches for the pattern $f(\\alpha, g(\\beta))$ and filters out substitutions that map $\\alpha$ and $\\beta$ to different e-classes.\nit searches for all $f$-application e-nodes $n_f$, adds $\\alpha\\mapsto n_f.\\textit{child}_1$ to substitution $\\sigma$, iterates through all $g$-application e-nodes $n_g$ in e-class $n_f.\\textit{child}_2$, and only yield $\\sigma$ if $n_g.\\textit{child}_1=\\sigma(\\alpha)$.\nIn general, this procedure runs in time that is quadratic of the \\egraph size.\nIn a large e-graph, there may be thousands of pairs of $n_f$ and $n_g$ where $n_g$ is in e-class $n_f.\\textit{child}_2$, but only a few satisfy the constraint $n_f.\\textit{child}_1=n_g.\\textit{child}_1$. Therefore, backtracking-based \\ematching enumerates an unnecessarily large pool of candidates.\nEven worse, complex query patterns may involve many variables that occur at several places, which makes na\\\"ive backtracking enumerates an excessive number of terms and therefore extremely slow. \nThis inefficiency is due to the fact that na\\\"ive backtracking does not use the equality constraints to prune the search space \\textit{globally}. \nThis is in contrast to our approach, which exploits the equality constraints during query planning for greater performance and guarantees worst-case optimality with respect to the output size.\n  \n\n\\section{Conjunctive queries}\n\n\\paragraph{Relational schema.}\nA relational schema $S_D$ over domain $D$ is a set of relation symbols with associated arities. An atom under a schema $S_D$ is an expression $R(t_1,\\ldots,t_k)$, where $k$ is the arity of $R$ in $S_D$ and $t_i$ is an element in $D$. An instance of $S_D$ is a set of atoms over $S_D$. \n\n\\paragraph{Conjunctive queries.}\nA conjunctive query over schema $S_D$ and a set of variables $V$ is a formula of the form\n\\[\n    \\textit{ans}(x_1,\\ldots x_k)\\textit{ :- } R_1(v_{1,1},\\ldots,v_{1,k_1}),\\ldots, R_n(v_{n,1},\\ldots,v_{n,k_n}),\n\\]\nwhere $n\\geq 0$, $R_1\\ldots R_n$ are relation names in $S_D$ with arities $k_1,\\ldots k_n$, \\textit{ans} is the name of the resulting relation not in $S_D$, $v_{i,j}$ are variables in $V$\\footnote{Some definitions of conjunctive queries allow both variables and constants, but we only allow variables in conjunctive queries for simplicity.}, and $x_1,\\ldots x_k$ are variables occurring in $v_{i,j}$. We call such $R_i(v_{i,1},\\ldots,v_{i,k_1})$ an atom of the conjunctive query.\n\n\\paragraph{Semantics of conjunctive queries.} \nSimilar to \\ematching substitutions, we define a conjunctive query substitution as a function that maps every variable occurring in $v_{i,j}$ to elements in $D$. The semantics of conjunctive queries is defined as follows: Let $q$ be a conjunctive query, $I$ be an instance of $S_D$, conjunctive queries yield the set of all $\\textit{ans}(\\sigma(x_1),\\ldots,\\sigma(x_k))$ where $\\sigma$ is a substitution satisfying that $R_i(\\sigma(t_1),\\ldots \\sigma(t_k))$ are atoms in $I$ for $i=1,\\ldots,n$. We denote the result set as $q(I)$.\n\nWe make the observation that the definition of conjunctive query and \\ematching\nare definitionally similar to each other: both are defined as finding\nsubstitutions whose instantiations are present in a (relational or graph)\ndatabase. 
In fact, the \\ematching problem can be viewed as a ``nested''\nconjunctive queries on a database where the atoms are nested. Therefore, it is\ntempting to reduce the \\ematching problem to a conjunctive query over the\nrelational database, thereby benefiting from well-studied techniques from the\ndatabase community, including join algorithms, query optimization, and query\nevaluations.\n\n\\section[The AGM bound]{The AGM bound and Generic Join \\footnote{This section is inspired by Remy Wang's \\href{https://gitlab.com/remywang/blog/-/blob/master/posts/wcoj.md}{introduction to generic join algorithms}.}}\n\n\\algrenewcomment[1]{\\(\\triangleright\\) #1}\n\\algnewcommand{\\LineComment}[1]{\\State \\(\\triangleright\\) {\\it #1}}\n\\paragraph{The AGM Bound}\n\\begin{figure}\n  \\centering\n\\begin{algorithmic}[1]\n\\Procedure{SolveTriangleQuery}{$R,S,T$}\n\\State $A := R(x,y).x \\cap T(z,x).x$\\;\n  \\LineComment{Compute $Q(\\alpha,y,z) \\textit{ :- } R(\\alpha,y),S(y,z),T(z,\\alpha)$}\n  \\For{$\\alpha\\in A$} \n  \\State $B := R(\\alpha,y).y \\cap S(y,z).y$\\;\n    \\LineComment{Compute $Q(\\alpha,\\beta,z) \\textit{ :- } R(\\alpha,\\beta),S(\\beta,z),T(z,\\alpha)$}\n    \\For{$\\beta \\in B$}\n        \\State $C := S(\\beta,z).z \\cap T(z,\\alpha).z$\\;\n        \\LineComment{Yield join results $Q(\\alpha, \\beta, \\gamma)$}\n        \\For{$\\gamma \\in C$}\n        \\State {\\bf output } $Q(\\alpha,\\beta,\\gamma)$\n        \\EndFor\n    \\EndFor\n  \\EndFor\n\\EndProcedure\n  \\caption{Generic join algorithm for the triangle query $Q(x,y,z) \\textit{ :- } R(x,y),S(y,z),T(z,x)$, with ordering $[x, y, z]$.}\\label{alg:gj}\n\\end{algorithmic}\n\\end{figure}\n\nGiven a database and a conjunctive query, the AGM bound \\cite{agm} is a bound on how large the query output could be. A simple bound just multiplies the cardinally of each relation, which is the size of the Cartesian product of all relations. \nFor example, for triangle query $ Q(x,y,z) \\textit{ :- } R(x,y), S(y,z), T(x,z)$, this  bound computes to $|Q| \\leq |R| \\times |S| \\times |T|$. \nIf $|R|=|S|=|T|=N$, then $|Q| \\leq N^3$.\nHowever, we can do better: we observe that $Q$ contains fewer atoms than the query\n$ Q\u2019(x,y,z)\\textit{ :- } R(x,y), S(y,z),$ as $Q$ is further filtered by joining with $T$. Therefore, we know $|Q| \\leq |R| \\times |S| = N^2$. \nIn fact, the theoretical bound of conjunctive query output size is given as the AGM bound \\citep{agm}, which is $N^{3/2}$ in this case.\n\nThe AGM bound proposes a question: is there an algorithm that always achieves the performance described by the AGM bound? In particular, \\textit{cyclic queries} is a kind of conjunctive queries whose query hypergraph contains cycles, such as the triangle query $Q$ above, and they are known to make traditional join plans (e.g., join plans using two-way joins like hash joins or merge joins) suboptimal. Most of real-world database systems therefore have bad performance on cyclic queries.\n\n\\paragraph{Generic join.} Fortunately, theoretical progress has been made in the database theory community to achieve optimal performance on conjunctive queries and on cyclic queries in particualr. In particular, the generic join algorithm for conjunctive query is developed that runs in time linear to the worst-case output size with a log factor \\citep{wcoj}. 
\nThe generic join algorithm has great performance in practice when the query is cyclic, and is competitive to traditional join algorithms in other cases.\nGiven a conjunctive query and a relational database, generic join is parameterized over a ordering of the set of variables in the query, say $[x,y,z]$ for the query above. The ordering does not matter for the optimality guarantee, but may impact practical performance greatly. \nGiven an ordering,\nwe assume the input relations are stored in tries sorted by the ordering.\nThat is, given the ordering $[x,y,z]$,\n$R(x,y)$ is sorted by $x$ and then $y$,\nand the first-level trie nodes are the $x$'s.\n% Even more concretely,\n% Figrue~\\ref{fig:trie} illustrates the trie\n% representing $R=\\{(3, 4), (2, 1), (2, 5)\\}$.\nWe can very efficiently intersect (join) relations on their\nfirst (according to the variable ordering) variable given such tries.\nAlgorithm~\\ref{alg:gj} shows the generic join algorithm for computing $Q$.\nNote that selection, e.g. $R(a, y)$ is very fast with the built tries.\n$A \\cap B$ can also be done in $\\tilde{O}(\\min(|A|, |B|))$ time\n($\\tilde{O}$ means $O$ with a log factor).\nFor general queries we may have to intersect more than two relations,\nin which case the intersection should be performed in\n$\\tilde{O}(\\min_i|A_i|)$ time (using the merge in merge-sort).\n", "meta": {"hexsha": "af2ca32e198fbae3a39e8bffacf5787a72d9b2db", "size": 14020, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/2-background.tex", "max_stars_repo_name": "yihozhang/UWThesis", "max_stars_repo_head_hexsha": "286aa777bd528b8da3ec86b717dc1f7f39d89816", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/2-background.tex", "max_issues_repo_name": "yihozhang/UWThesis", "max_issues_repo_head_hexsha": "286aa777bd528b8da3ec86b717dc1f7f39d89816", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/2-background.tex", "max_forks_repo_name": "yihozhang/UWThesis", "max_forks_repo_head_hexsha": "286aa777bd528b8da3ec86b717dc1f7f39d89816", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.7837837838, "max_line_length": 530, "alphanum_fraction": 0.7014978602, "num_tokens": 4113, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.5882530925322752}}
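To make the algorithm concrete, here is a small Python sketch of Algorithm~\ref{alg:gj} for the triangle query (an illustration only, not the implementation studied in this thesis): it follows the variable ordering $[x,y,z]$ but uses set comprehensions in place of sorted tries, so it reproduces the control flow rather than the $\tilde{O}$ intersection bounds.

\begin{verbatim}
def triangle_query(R, S, T):
    # Q(x,y,z) :- R(x,y), S(y,z), T(z,x); ordering [x, y, z].
    A = {x for (x, _) in R} & {x for (_, x) in T}
    for a in sorted(A):                       # intersect on x
        B = {y for (x, y) in R if x == a} & {y for (y, _) in S}
        for b in sorted(B):                   # intersect on y
            C = ({z for (y, z) in S if y == b}
                 & {z for (z, x) in T if x == a})
            for c in sorted(C):               # intersect on z
                yield (a, b, c)

R = {(1, 2), (1, 3)}
S = {(2, 3), (3, 1)}
T = {(3, 1), (1, 1)}
print(list(triangle_query(R, S, T)))  # [(1, 2, 3), (1, 3, 1)]
\end{verbatim}

A trie-backed implementation would replace each comprehension by a sorted-range lookup, making every intersection run in time proportional to the smaller operand, as described above.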
{"text": "\\section{Discretization of the Electrophysiology Models}\\label{sec:discretization}\n\nThe partial and ordinary differential equations described in the last section contain spatial and temporal derivatives that have to be discretized to be solved numerically. For temporal derivatives, we use timestepping schemes, for spatial derivatives, we employ the finite element method.\n\nIn this section, we describe the discretization of the subcellular and electrophysiology models that were presented in the last section. A description of the discretization of the solid mechanics model follows in \\cref{sec:discretization_mechanics}.\n\nWe begin with the discretization in time in \\cref{sec:discretization_monodomain}, followed by the spatial discretization for the monodomain (\\cref{sec:discretization_diffusion,sec:mass_lumping}) and multidomain models (\\cref{sec:discretization_multidomain,sec:discretization_body_domain}).\n\n\\subsection{Discretization of the Monodomain Model}\\label{sec:discretization_monodomain}\n\nElectrophysiology models typically consist of a reaction-diffusion equation. The diffusion term describes the electric conduction in the tissue and the reaction term includes the subcellular model. In our model, the monodomain equation  \\cref{eq:monodomain} used in the fiber based description and the second multidomain equation \\cref{eq:multidomain2} are equations of this type.\n\nThis type of partial differential equation is often solved using operator splitting schemes. A first order operator splitting scheme is Godunov splitting \\cite{Godunov2003}. It was used for the solution of the chemo-electro-mechanical model in \\cite{Roehrle2012}. In addition to Godunov splitting, we also employ the second order accurate Strang splitting scheme \\cite{Strang1968}.\n\nIn the following, the application of these two schemes is illustrated for the monodomain equation \\cref{eq:monodomain}. The right-hand sides of the diffusion and reaction terms are denoted in short as $\\mathcal{L}_1$ and $\\mathcal{L}_2$:%\n\\begin{align*}\n  \\mathcal{L}_1(V_m) := \\dfrac{1}{A_m\\,C_m} \\sigma_\\text{eff}\\dfrac{\\partial^2 V_m}{\\partial x^2}, &&\n  \\mathcal{L}_2(V_m) := -\\dfrac{1}{C_m} I_\\text{ion}(V_m,\\bfy).\n\\end{align*}\n%\nThen, the monodomain equation takes the form:\n%\n\\begin{align}\\label{eq:monodomain_operator_splitting}\n  \\p{V_m}{t} = \\mathcal{L}_1(V_m) + \\mathcal{L}_2(V_m).\n\\end{align}\n\nA timestepping scheme is constructed that starts with a given initial value $V_m^{(0)}$ and computes solution values $V_m^{(i)}$ at discrete points in time $t^{(i)} = i\\cdot\\dt$ with a fixed timestep width $\\dt$.\nGodunov splitting proceeds by alternatingly performing steps in the two directions of the right-hand sides $\\mathcal{L}_1$ and $\\mathcal{L}_2$. In the first substep per iteration, an intermediate value $V_m^{\\ast}$ is calculated, which is used as starting point for the second substep. 
Each of the substeps are performed using independent timestepping scheme, e.g., the explicit Euler scheme:\n%\n\\begin{subequations}\\label{eq:monodomain_godunov}\n  \\begin{align}\n    V_m^{\\ast} &= V_m^{(i)} + \\dt \\mathcal{L}_1(V_m^{(i)},t^{(i)}),\\\\[4mm]\n    V_m^{(i+1)} &= V_m^{\\ast} + \\dt \\mathcal{L}_2(V_m^{\\ast},t^{(i)})\n  \\end{align}\n\\end{subequations}\n\nStrang splitting uses a similar approach with three substeps per timestep and two intermediate values $V_m^{\\ast}$ and $V_m^{\\ast\\ast}$:\n%\n\\begin{subequations}\\label{eq:monodomain_strang}\n  \\begin{align}\n    V_m^{\\ast} &= V_m^{(i)} + \\dfrac{\\dt}{2} \\mathcal{L}_1(V_m^{(i)},t^{(i)}),\\\\[4mm]\n    V_m^{\\ast\\ast} &= V_m^{\\ast} + \\dt \\mathcal{L}_2(V_m^{\\ast},t^{(i)}),\\\\[4mm]\n    V_m^{(i+1)} &= V_m^{\\ast\\ast} + \\dfrac{\\dt}{2} \\mathcal{L}_1(V_m^{\\ast\\ast},t_{(i)}+\\dfrac12 \\dt).\n  \\end{align}\n\\end{subequations}\n%\n% stuff needed to create the plot\n%\\definecolor{vred}{RGB}{170,0,0}\n%\\definecolor{vyellow}{RGB}{212,170,0}\n%\\definecolor{vgreen}{RGB}{0,170,0}\n%\\begin{equation*}\n%  \\begin{array}{lll}\n%    \\frac{\\partial}{\\partial t} V_m  = \\textcolor{vred}{c_1 \\frac{\\partial^2}{\\partial x^2} V_m }\\\\[4mm]\n%    \\frac{\\partial}{\\partial t} \\bfy = \\textcolor{vyellow}{G(V_m,\\bfy)}\\\\[4mm]\n%    \\frac{\\partial}{\\partial t} V_m = \\textcolor{vyellow}{c_2\\,I_\\text{ion}(V_m,\\bfy)}\\\\[4mm]\n%    \\textcolor{vyellow}{\\dt_\\text{0D}}\\quad\n%    \\textcolor{vred}{\\dt_\\text{1D}}\\quad\n%    \\dt_\\text{splitting}\\quad\n%    \\textcolor{vgreen}{\\dt_\\text{3D}}\\quad\n%    t^{(i)}, t^{(i)} + \\dt_\\text{splitting}/2, t^{(i)} + \\dt_\\text{splitting} ...\\\\[4mm]\n%    t^{(i)} + \\textcolor{vgreen}{\\dt_\\text{3D}}\\\\[4mm]\n%    t^{(i+1)}\n%  \\end{array}\n%\\end{equation*}\n%\\textcolor{vyellow}{0D:}\n%\\textcolor{vred}{1D:}\n%\\textcolor{vgreen}{3D:}\n\nNote that each substep can either be executed as a single timestep of the chosen method as in \\cref{eq:monodomain_godunov,eq:monodomain_strang} or divided into several steps with timestep widths $\\dt_{0D}$ (for the 0D subcellular model represented by $\\mathcal{L}_1$) and $\\dt_{1D}$ (for the diffusion equation represented by $\\mathcal{L}_2$).\n\n\\Cref{fig:splitting_schemes} visualizes both splitting schemes applied to the monodomain equation. \nThe yellow arrows correspond to the solution of the 0D subcellular model. The red arrows correspond to the solution of the 1D diffusion equation. The timestep width of one splitting step is $\\dt_\\text{splitting}$. Depending on how the timestep widths are chosen in relation to each other, different numbers of subcycles are used in the solution of the 0D and 1D problems.\n\n\\begin{figure}%\n  \\centering%\n  \\begin{subfigure}[t]{0.48\\textwidth}%\n    \\centering%\n    \\def\\svgwidth{\\textwidth}\n    \\includegraphics[width=\\textwidth]{images/theory/godunov_splitting.pdf}\n    \\caption{The Godunov splitting uses two substeps: 0D, 1D.}%\n    \\label{fig:godunov_splitting}%\n  \\end{subfigure}\n  \\quad\n  \\begin{subfigure}[t]{0.48\\textwidth}%\n    \\centering%\n    \\includegraphics[width=\\textwidth]{images/theory/strang_splitting.pdf}\n    \\caption{The Strang splitting uses three substeps: 0D, 1D, 0D.}%\n    \\label{fig:strang_splitting}%\n  \\end{subfigure}\n  \\caption{Godunov and Strang splitting schemes that are used to solve the monodomain equation. 
The equation is split into a reaction part (0D,yellow) and a diffusion part (1D,red) and these parts are solved alternatingly. The visualizations show one splitting timestep starting at the left circle and completing at the right circle.}%\n  \\label{fig:splitting_schemes}%\n\\end{figure}%\n\nInstead of the explicit Euler method in \\cref{eq:monodomain_godunov,eq:monodomain_strang}, other timestepping methods can be used for the substeps. \nWe use the following schemes, which are listed as single steps for the generic ODE ${\\p V_m / \\p t = \\mathcal{L}(V_m,t)}$:\n\\begin{subequations}\\label{eq:ode_solver_schemes}\n  \\begin{align}\n    V_m^{(i+1)} &= V_m^{(i)} + \\dt \\mathcal{L}(V_m^{(i)},t^{(i)}), \\label{eq:explicit_euler}\\\\[4mm]\n    V_m^{(i+1)} &= V_m^{(i)} + \\dfrac{\\dt}{2}\\Big(\n      \\mathcal{L}(V_m^{(i)},t^{(i)}) + \\mathcal{L}\\big(V_m^{(i)} + \\dt \\mathcal{L}(V_m^{(i)},t^{(i)}),t^{(i+1)}\\big)\\Big), \\label{eq:heun}\\\\[4mm]\n    V_m^{(i+1)} &= V_m^{(i)} + \\dt \\mathcal{L}(V_m^{(i+1)},t^{(i+1)}), \\label{eq:implicit_euler}\\\\[4mm]\n    V_m^{(i+1)} &= V_m^{(i)} + \\dfrac{\\dt}{2}\\Big(\n      \\theta\\,\\mathcal{L}(V_m^{(i+1)},t^{(i+1)}) + (1-\\theta)\\,\\mathcal{L}(V_m^{(i)},t^{(i)})\\Big).\\label{eq:crank_nicolson}\n  \\end{align}\n\\end{subequations}\n%\nHere, \\cref{eq:explicit_euler} is the first-order accurate explicit Euler scheme, \\cref{eq:heun} is the second-order accurate Heun scheme, \\cref{eq:implicit_euler} is the first order accurate implicit Euler scheme, and \\cref{eq:crank_nicolson} is the Crank-Nicolson scheme \\cite{CrankNicolson1947}, which for $\\theta=0$ equals the explicit Euler and for $\\theta=1$ equals the implicit Euler scheme. For $\\theta=\\frac12$, it is second order accurate. An advantage of the implicit schemes in \\cref{eq:implicit_euler,eq:crank_nicolson} is, that, for our considered diffusion problems, they are unconditionally stable. A disadvantage is, that a linear equation has to be solved in every timestep.\n\nA second order accurate timestepping scheme yields a faster decrease of the numerical error with decreasing step size and, thus, in many cases allows a larger step size than a first order scheme.\nTo obtain a second order scheme for the monodomain equation, we use Strang splitting (\\cref{eq:monodomain_strang}) with the Crank-Nicolson scheme (\\cref{eq:crank_nicolson}) for the diffusion term $\\mathcal{L}_1$ and Heun's method (\\cref{eq:heun}) for the reaction term $\\mathcal{L}_2$. In the subcellular model, the system of ODEs with state vector $\\bfy$ given in \\cref{eq:subcellular} is solved with Heun's method along with the equation in terms of $V_m$.\n\nNext, the spatial derivatives in the diffusion part $\\mathcal{L}_2$ of the split equation have to be discretized. Then, both the multidomain and the fiber based models can be solved using the splitting scheme.\n\n\\subsection{Discretization of the Diffusion and Laplace Equations}\\label{sec:discretization_diffusion}\n\nFor the spatial discretization, we first derive the finite element formulation for a generic parabolic diffusion equation in a domain $\\Omega\\subset\\R^d$ of arbitrary dimensionality $d$. Then, specialization to 1D yields the formulation for the monodomain equation. Considering a 3D domain, the formulation is an important building block for the discretization of the multidomain model. 
Next, the spatial derivatives in the diffusion part $\\mathcal{L}_2$ of the split equation have to be discretized. Then, both the multidomain and the fiber based models can be solved using the splitting scheme.\n\n\\subsection{Discretization of the Diffusion and Laplace Equations}\\label{sec:discretization_diffusion}\n\nFor the spatial discretization, we first derive the finite element formulation for a generic parabolic diffusion equation in a domain $\\Omega\\subset\\R^d$ of arbitrary dimensionality $d$. Then, specialization to 1D yields the formulation for the monodomain equation. Considering a 3D domain, the formulation is an important building block for the discretization of the multidomain model. This is shown in more detail in \\cref{sec:discretization_multidomain}.\n\nWe consider the following diffusion problem in the variable $u: \\Omega \\times [0,t_\\text{end}] \\to \\R$ with Neumann boundary conditions on a part of the boundary $\\Gamma_f \\subset \\partial\\Omega$ with normal vector $\\bfn$:\n\\begin{align*}\n  \\p{u}{t} &= \\div(\\bfsigma \\grad u), &(\\bfsigma\\,\\grad u) \\cdot \\bfn &= f \\quad \\text{on }\\Gamma_f, & (\\bfsigma\\,\\grad u) \\cdot \\bfn &= 0 \\quad \\text{on } \\partial\\Omega\\backslash \\Gamma_f.\n\\end{align*}\nWe discretize the temporal derivative using the Crank-Nicolson scheme as in \\cref{eq:crank_nicolson}. Following the procedure of the Galerkin finite element formulation with test functions $\\phi$ from the Hilbert space $H^1(\\Omega)$, we arrive at the following weak form:\n%\n\\begin{align*}\n  \\ds\\int_\\Omega \\big(\\theta\\,\\nabla\\cdot(\\bfsigma \\nabla u^{(i+1)})  + (1-\\theta)\\,\\nabla\\cdot(\\bfsigma \\nabla u^{(i)})\\big)\\,\\phi \\,\\d\\bfx &&\\\\\n    \\qquad = \\dfrac{1}{\\dt} \\ds\\int_\\Omega(u^{(i+1)} - u^{(i)})\\,\\phi\\,\\d\\bfx, &&\\qquad \\forall \\phi \\in H^1(\\Omega).\n\\end{align*}\nFor brevity, we express divergence and gradient using the nabla operator. \n\nTo discretize the weak form in space, we choose a function space $V_h = \\text{span}\\{\\varphi_j \\mid j = 1, \\dots, N\\}$ to represent the solution as $u = \\sum_{j=1}^N u_j \\varphi_j$. Applying the divergence theorem, we obtain:\n\\begin{equation}\\label{eq:diffusion_helper1}\n  \\begin{array}{l}\n    \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big)  \n    \\left(-\\ds\\int_\\Omega \\bfsigma\\,\\nabla\\varphi_j\\cdot \\nabla\\varphi_k \\,\\d\\bfx + \\ds\\int_{\\partial\\Omega} \\big(\\bfsigma\\,\\nabla\\varphi_j\\cdot \\bfn\\big)\\,\\varphi_k \\,\\d\\bfx  \\right) \\\\\n      \\quad = \\dfrac{1}{\\dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\\Omega \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\qquad \\forall k = 1,\\dots, N.\n  \\end{array}\n\\end{equation}\nThis iteration step can be written in matrix notation in terms of the vectors of unknowns $\\bfu^{(i)}=(u^{(i)}_1,\\dots,u^{(i)}_N)^\\top$ at timestep $i$:%\n\\begin{align*}\n  \\bfA\\,\\bfu^{(i+1)} = \\bfb(\\bfu^{(i)}).\n\\end{align*}\nThe system matrix $\\bfA$ and the right-hand side $\\bfb$ are given by:\n\\begin{align*}\n  \\bfA &= \\theta\\,(\\bfK_{\\bfsigma} + \\bfB_{\\bfsigma}) -\\dfrac{1}{\\dt}\\bfM, &\n  \\bfb &= \\big((\\theta-1)\\,(\\bfK_{\\bfsigma} + \\bfB_{\\bfsigma}) - \\dfrac{1}{\\dt} \\bfM \\big)\\,\\bfu^{(i)}.\n\\end{align*}\nThe formulation uses the standard stiffness matrix $\\bfK_{\\bfsigma}$, the matrix $\\bfB_{\\bfsigma}$ of the boundary integral and the mass matrix $\\bfM$, whose components are defined as%\n\\begin{align}\\label{eq:diffusion_matrices}\n  \\bfK_{\\bfsigma,kj} &= -\\ds\\int_\\Omega (\\bfsigma\\, \\nabla\\varphi_j)\\cdot \\nabla\\varphi_k \\,\\d\\bfx,&\n     \\bfB_{\\bfsigma,kj} &= \\ds\\int_{\\Gamma_f} \\big((\\bfsigma\\,\\nabla\\varphi_j)\\cdot \\bfn\\big)\\,\\varphi_k \\,\\d\\bfx,&\n     \\bfM_{kj} &= \\ds\\int_\\Omega \\varphi_j\\,\\varphi_k\\,\\d\\bfx.\n\\end{align}\nNote that, after applying the divergence theorem, the definition of the stiffness matrix has a minus sign.
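\n\nFor linear ansatz functions on a uniform 1D mesh, the matrices in \\cref{eq:diffusion_matrices} take a particularly simple form. The following NumPy sketch, an illustration only that assumes a constant scalar conductivity and omits the boundary matrix, assembles $\\bfM$ and $\\bfK$ including this minus sign:\n\\begin{verbatim}\nimport numpy as np\n\n# Mass and stiffness matrices for linear elements on a uniform\n# 1D mesh with N nodes and element length h; sigma is a constant\n# scalar conductivity. K carries the minus sign defined above.\ndef assemble_1d(N, h, sigma):\n    M = np.zeros((N, N))\n    K = np.zeros((N, N))\n    for e in range(N - 1):                       # loop over elements\n        Me = h/6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])\n        Ke = -sigma/h * np.array([[1.0, -1.0], [-1.0, 1.0]])\n        idx = [e, e + 1]\n        M[np.ix_(idx, idx)] += Me\n        K[np.ix_(idx, idx)] += Ke\n    return M, K\n\\end{verbatim}\n\n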
Next, we take into account the Neumann boundary condition $\\bfsigma\\nabla u\\cdot \\bfn = f$ on the boundary $\\Gamma_f$. The flux $f$ over the boundary is discretized by $M$ separate ansatz functions $\\psi_j$ on $\\Gamma_f$ as $f = \\sum_{j=1}^M f_j\\, \\psi_j$.\nThe flux values are summarized in a vector $\\bff=(f_1,\\dots,f_M)^\\top$.\nPlugging this into \\cref{eq:diffusion_helper1} yields the following equation in matrix notation:%\n\\begin{align}\\label{eq:diffusion_helper2}\n  \\tilde{\\bfA}\\,\\bfu^{(i+1)} = \\tilde{\\bfb}(\\bfu^{(i)}),  \n\\end{align}\nwith the system matrix $\\tilde{\\bfA}$ and right-hand side $\\tilde{\\bfb}$: \n\\begin{align*}\n  \\tilde{\\bfA} &= \\theta\\,\\bfK_{\\bfsigma} -\\dfrac{1}{\\dt}\\bfM, &\n    \\tilde{\\bfb} &= \\big((\\theta-1)\\,\\bfK_{\\bfsigma} - \\dfrac{1}{\\dt} \\bfM \\big)\\,\\bfu^{(i)} - \\bfB_{\\Gamma_f}\\,\\big(\\theta\\,\\bff^{(i+1)} + (1-\\theta)\\,\\bff^{(i)}\\big),\n\\end{align*}\nand the boundary matrix $\\bfB_{\\Gamma_f}$ given by:\n\\begin{align}\\label{eq:definition_boundary_matrix}\n  \\bfB_{\\Gamma_f,kj} &= \\ds\\int_{\\Gamma_f} \\psi_j\\,\\varphi_k \\,\\d\\bfx.\n\\end{align}\nNote that incorporating the Neumann boundary conditions in the weak form corresponds to the following exchange of the boundary matrices $\\bfB_{\\bfsigma}$ and $\\bfB_{\\Gamma_f}$:%\n\\begin{align}\\label{eq:boundary_relation}\n  \\bfB_{\\bfsigma}\\,\\bfu = \\bfB_{\\Gamma_f}\\,\\bff.\n\\end{align}\n%\n\n\n\\Cref{eq:diffusion_helper2} is used to solve the diffusion part of the monodomain equation given in \\cref{eq:monodomain} after inserting the corresponding constant prefactors.
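\n\nCombined with the matrices from the previous sketch, one step of \\cref{eq:diffusion_helper2} amounts to a single linear solve per timestep. A minimal dense version, assuming homogeneous Neumann boundary conditions so that the boundary term drops out:\n\\begin{verbatim}\nimport numpy as np\n\n# One theta-scheme step A~ u^{i+1} = b~(u^i) of the discretized\n# diffusion equation, with homogeneous Neumann boundaries (f = 0).\ndef diffusion_step(u, M, K, dt, theta=0.5):\n    A = theta*K - M/dt\n    b = ((theta - 1.0)*K - M/dt) @ u\n    return np.linalg.solve(A, b)\n\\end{verbatim}\n\n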
When deriving or implementing new models or optimizing solver code, it is often beneficial to study certain effects in isolation. It can help to use a toy problem such as the simple Laplace problem $\\Delta u = 0$, possibly with the Neumann boundary condition $\\partial u/\\partial \\bfn = f$. \nBy specializing the formulation in \\cref{eq:diffusion_helper2} accordingly, we obtain the system\n%\n\\begin{align*}\n  (\\bfK_\\bfI + \\bfB_\\bfI)\\,\\bfu = \\bfzero\n\\end{align*}\nfor the case without boundary condition (set $\\bfB_\\bfI$ to zero to assume homogeneous Neumann boundaries) or\n\\begin{align}\\label{eq:discretization_laplace}\n  \\bfK_\\bfI\\,\\bfu = -\\bfB_{\\Gamma_f}\\,\\bff\n\\end{align}\nto include the formulated Neumann boundary condition.\n\n\\subsection{Using Mass Lumping for Implicit Timestepping}\\label{sec:mass_lumping}\nImplicit timestepping schemes such as the implicit Euler or the Crank-Nicolson scheme for $\\theta=\\frac12$ need to solve a linear system in every timestep.\nAssuming homogeneous Neumann boundary conditions for simplicity, the iteration step of the canonical Crank-Nicolson scheme follows from \\cref{eq:diffusion_helper2}:\n\\begin{subequations}\\label{eq:lumping_crank_nicolson}\n  \\begin{align}\n    \\big(\\dfrac12\\bfK-\\dfrac{1}{\\dt}\\bfM\\big)\\, \\bfu^{(i+1)} &= \\big(-\\dfrac12\\bfK - \\dfrac{1}{\\dt}\\bfM\\big)\\, \\bfu^{(i)}\\label{eq:lumping_crank_nicolson_1}\\\\[4mm]\n    \\Leftrightarrow \\quad (\\bfI - \\dfrac{\\dt}{2}\\,\\bfM^{-1}\\bfK)\\,\\bfu^{(i+1)}&= (\\bfI + \\dfrac{\\dt}{2}\\,\\bfM^{-1}\\bfK)\\,\\bfu^{(i)}.\\label{eq:lumping_crank_nicolson_2}\n  \\end{align}\n\\end{subequations}\nFor the implicit Euler method, we obtain:%\n\\begin{subequations}\\label{eq:lumping_implicit_euler}\n  \\begin{align}\n    \\ds(\\bfK-\\frac{\\bfM}{\\dt})\\,\\bfu^{(i+1)} &=\\,\\ds -\\frac{\\bfM}{\\dt}\\bfu^{(i)}\\label{eq:lumping_implicit_euler_1}\\\\[4mm]\n    \\Leftrightarrow \\quad (\\bfI - \\dt\\,\\bfM^{-1}\\bfK)\\,\\bfu^{(i+1)}&= \\,\\bfu^{(i)}.\\label{eq:lumping_implicit_euler_2}\n  \\end{align}\n\\end{subequations}\nThe two iteration steps in \\cref{eq:lumping_crank_nicolson_1,eq:lumping_crank_nicolson_2} and in \\cref{eq:lumping_implicit_euler_1,eq:lumping_implicit_euler_2} are equivalent, respectively, as the second equation follows from the first one by left multiplication with $-\\dt\\,\\bfM^{-1}$. In the second equations, the matrices to be multiplied are given by a sum of the identity matrix $\\bfI$ and another matrix term that is scaled by the potentially small timestep width $\\dt$. For the implicit Euler scheme in \\cref{eq:lumping_implicit_euler_2}, the matrix on the right-hand side even reduces to the identity matrix. This form is preferred over the first iteration steps in \\cref{eq:lumping_crank_nicolson_1,eq:lumping_implicit_euler_1} as it leads to better conditioned matrix-vector multiplications.\n\nThe required inversion of the mass matrix cannot be carried out explicitly, as the inverse would fill in numerous matrix entries and destroy the sparse structure. This is not feasible for highly resolved meshes with many degrees of freedom. Instead, \\emph{mass lumping} is used, where the mass matrix $\\bfM$ is approximated by a diagonal matrix with diagonal entries equal to the row sums of $\\bfM$ \\cite{Hinton1976}. Multiplication with the inverse lumped mass matrix then corresponds to a rescaling of the rows by the inverse diagonal entries.
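\n\nA sketch of this lumping step, using dense NumPy arrays only for brevity (the point of lumping is precisely to preserve sparsity):\n\\begin{verbatim}\nimport numpy as np\n\n# Build the iteration matrix I - dt*M_l^{-1}*K of the implicit\n# Euler scheme with the lumped mass matrix M_l (row sums of M).\ndef lumped_implicit_euler_matrix(M, K, dt):\n    m_lumped = M.sum(axis=1)            # row sums -> diagonal of M_l\n    # dividing row k of K by m_lumped[k] applies M_l^{-1} from the left\n    return np.eye(len(m_lumped)) - dt * (K / m_lumped[:, None])\n\\end{verbatim}\n\n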
\\subsection{Discretization of the Multidomain Model}\\label{sec:discretization_multidomain}\nWith the prerequisites of the temporal discretization in \\cref{sec:discretization_monodomain} and the finite element formulation of a diffusion equation in \\cref{sec:discretization_diffusion}, we can now discretize the multidomain model. Since this has, to our knowledge, not been done in the literature before using the finite element method, the subsequent derivation is more detailed.\n\nThe first multidomain equation given in \\cref{eq:multidomain1} yields the following form after applying the finite element derivation in \\cref{eq:diffusion_helper2}:\n%\n\\begin{align}\\label{eq:multidomain_discretization_helper_multidomain1}\n  \\big(\\bfK_{\\bfsigma_e + \\bfsigma_i} + \\bfB_{\\bfsigma_e + \\bfsigma_i}\\big)\\,\\bfphi_{e} +  \\s{k=1}{N_\\text{MU}} f_r^k \\big(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}\\big)\\,\\bfV_m^k = 0.  \n\\end{align}\nHere, $\\bfphi_{e}$ and $\\bfV_m^k$ are the vectors of degrees of freedom for the extracellular potential $\\phi_e$ and the membrane voltage $V_m^k$ of compartment $k$. The matrices are defined by \\cref{eq:diffusion_matrices} and do not yet include the boundary conditions.\nThe subscripts of the stiffness matrices $\\bfK$ and boundary integral matrices $\\bfB$ refer to the anisotropy tensors that occur in their definitions.\n\nThe diffusion part of the second multidomain equation, \\cref{eq:multidomain2}, discretized with Crank-Nicolson, yields the system%\n\\begin{align}\\label{eq:multidomain_discretization_helper_multidomain2}\n  \\bfA\\,\\mat{\\bfV_m^{k,(i+1)}\\\\ \\bfphi_{e}^{(i+1)}} = \\bfb,  \n\\end{align}\n%\nwith the $1 \\times 2$ block system matrix $\\bfA$ and right-hand side vector $\\bfb$ given by:%\n\\begin{subequations}\\label{eq:multidomain_discretization_helper_multidomain3}\n\\begin{align}\n   \\bfA &= \\matt{\n      \\dfrac{\\theta}{A_m^k\\,C_m^k}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) -\\dfrac{1}{\\dt}\\bfM & \\quad\n      \\dfrac{\\theta}{A_m^k\\,C_m^k}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\n    }, \\\\[4mm]\n    \\bfb &= \\Big( \\dfrac{\\theta-1}{A_m^k\\,C_m^k}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) - \\dfrac{1}{\\dt}\\bfM\\Big)\\,\\bfV_m^{k,(i)} \n      + \\dfrac{\\theta - 1}{A_m^k\\,C_m^k}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\\,\\bfphi_e^{(i)}.\n\\end{align}\n\\end{subequations}\n\nA separate instance of this equation holds for every compartment $k$. Again, the integrals over the boundary are still present in the $\\bfB_{\\bfsigma_i^k}$ matrices.\nTo resolve this and to close the formulation, we have to consider the fluxes of all involved unknowns over the boundary and replace them using the boundary conditions.\n\nOne of the boundary conditions required to solve the multidomain model without body domain is given in \\cref{eq:multidomain_bc1}. The boundary condition for compartment $k$ in terms of the intracellular potential $\\phi_i^k$,\n%\n\\begin{align}\\label{eq:multidomain_discretization_helper1}\n  (\\bfsigma_i^k\\,\\nabla\\phi_i^k) \\cdot \\bfn_m = 0 \\qquad \\text{on } \\partial\\Omega_M,\n\\end{align}\n%\nis expressed in terms of the unknowns $V_m^k$ and $\\phi_e$ to yield the condition\n%\n\\begin{align}\\label{eq:multidomain_discretization_helper2}\n  (\\bfsigma_i^k\\,\\nabla V_m^k) \\cdot \\bfn_m &= -(\\bfsigma_i^k\\,\\nabla\\phi_e)\\cdot \\bfn_m =: p^k \\qquad \\text{on }\\partial\\Omega_M.\n\\end{align}\n%\nWe define the value of this flux to be equal to a helper variable $p^k$.\nA second flux is formulated for the extracellular potential $\\phi_e$. 
We assign its value to the helper variable $q$:\n\\begin{align}\\label{eq:definition_q}\n  (\\bfsigma_e \\nabla \\phi_e)\\cdot \\bfn_m =:q \\qquad \\text{on }\\partial\\Omega_M.\n\\end{align}\n\nWe can now express the flux value $\\big((\\bfsigma_e + \\bfsigma_i)\\,\\nabla\\phi_e\\big) \\cdot \\bfn_m$, which occurs in the discretized first multidomain equation, \\cref{eq:multidomain_discretization_helper_multidomain1}, in terms of the variables $p^k$ and $q$. Using \\cref{eq:multidomain_discretization_helper1,eq:multidomain_discretization_helper2} and the relation $\\phi_e = \\phi^k_i - V_m^k$, we derive:\n\\begin{align}\\label{eq:multidomain_discretization_helper3}\n   \\big((\\bfsigma_e + \\bfsigma_i)\\,\\nabla\\phi_e\\big) \\cdot \\bfn_m\n    &= (\\bfsigma_e\\,\\nabla\\phi_e)\\cdot \\bfn_m + (\\bfsigma_i\\,\\nabla\\phi_e)\\cdot \\bfn_m = q - \\s{k=1}{N_\\text{MU}} f_r^k\\,p^k.\n\\end{align}\n\nWe discretize the flux values $p^k$ and $q$ analogously to the Neumann boundary condition flux $f$ in \\cref{sec:discretization_diffusion} and summarize the degrees of freedom in vectors $\\bfp^k$ and $\\bfq$.\n\nNext, we combine the flux values with the first and second multidomain equations.\nPlugging the generic relation \\cref{eq:boundary_relation} for boundary integral terms into the discretization of the first multidomain equation, \\cref{eq:multidomain_discretization_helper_multidomain1}, and using the derived flux values in \\cref{eq:multidomain_discretization_helper2,eq:multidomain_discretization_helper3} leads in a first step to the following equation:\n\\begin{align*}\n  \\bfK_{\\bfsigma_e + \\bfsigma_i}\\,\\bfphi_{e} + \\bfB_{\\Gamma_M}\\,\\big(\\bfq - \\s{k=1}{N_\\text{MU}} f_r^k\\,\\bfp^k\\big) +  \\s{k=1}{N_\\text{MU}} f_r^k \\big(\\bfK_{\\bfsigma_i^k}\\,\\bfV_m^k + \\bfB_{\\Gamma_M}\\,\\bfp^k\\big) = 0.  \n\\end{align*}\n\nIt can be seen that the terms involving $\\bfp^k$ cancel, such that we get:\n\\begin{align}\\label{eq:multidomain_discretization_helper4}\n    \\bfK_{\\bfsigma_e + \\bfsigma_i}\\,\\bfphi_{e} + \\s{k=1}{N_\\text{MU}} f_r^k \\bfK_{\\bfsigma_i^k}\\,\\bfV_m^k = -\\bfB_{\\Gamma_M}\\,\\bfq.\n\\end{align}\n\nIf the multidomain description is used without body fat domain, the boundary condition in \\cref{eq:multidomain_bc2} is used and the right-hand side in \\cref{eq:multidomain_discretization_helper4} vanishes. 
If a body domain is considered, the right-hand side interacts with the body domain model, which is discussed in the next section.\n\nAdding boundary conditions to the discretization of the second multidomain equation proceeds using \\cref{eq:multidomain_discretization_helper_multidomain2,eq:multidomain_discretization_helper_multidomain3}.\nCarrying out the analogous procedure to the first multidomain equation, we plug in \\cref{eq:boundary_relation} to yield the matrix equation\n\\begin{align}\\label{eq:multidomain_discretization1}\n  \\bfA\\,\\mat{\\bfV_m^{k,(i+1)}\\\\ \\bfphi_{e}^{(i+1)}} = \\bfb\n\\end{align}\n%\nwith system matrix $\\bfA$ and right-hand side vector $\\bfb$ given by\n%\n\\begin{align}\n \\bfA &= \\matt{\n    \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{\\dt}\\bfM & \\quad\n    \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k}\n  },\\label{eq:multidomain_discretization2} \\\\[4mm]\n  \\bfb &= \\Big( \\dfrac{\\theta-1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{\\dt}\\bfM\\Big)\\,\\bfV_m^{k,(i)} \n    + \\dfrac{\\theta - 1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}\\nonumber \\\\[4mm]\n  & +\\dfrac{\\theta-1}{A_m^k\\,C_m^k}\\bfB_{\\Gamma_M}\\,\\bfp^{k,(i)} - \\dfrac{\\theta-1}{A_m^k\\,C_m^k}\\bfB_{\\Gamma_M}\\,\\bfp^{k,(i)}\n  -\\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfB_{\\Gamma_M}\\,\\bfp^{k,(i+1)} + \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfB_{\\Gamma_M}\\,\\bfp^{k,(i+1)}.\\nonumber \n\\end{align}\nAgain, the boundary terms involving $\\bfp^k$ vanish, yielding the following expression for $\\bfb$:%\n\\begin{align}\\label{eq:multidomain_discretization3}\n    \\bfb &= \\Big( \\dfrac{\\theta-1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{\\dt}\\bfM\\Big)\\,\\bfV_m^{k,(i)} \n      + \\dfrac{\\theta - 1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}.\n\\end{align}\nIn summary, \\cref{eq:multidomain_discretization_helper4} with $\\bfq=\\bfzero$ coupled with $N_\\text{MU}$ instances of \\cref{eq:multidomain_discretization1,eq:multidomain_discretization2,eq:multidomain_discretization3} comprises the discretization of the multidomain model without body domain. Definitions of the involved stiffness and mass matrices are given in \\cref{eq:diffusion_matrices}.\n\n\\subsection{Discretization of the Multidomain Model for Surface EMG}\\label{sec:discretization_body_domain}\n\nTo discretize the multidomain model with the electric potential $\\phi_b$ in the body domain, we extend the formulation without body domain from \\cref{sec:discretization_multidomain}.\nThe body domain adds the electric potential $\\phi_b$ to the vector of unknowns for which the system has to be solved. As before, we discretize the field using finite element ansatz functions and solve for the vector $\\bfphi_b$ of degrees of freedom.\n\nThe model for $\\phi_b$ is the Laplace equation from \\cref{eq:body} with the homogeneous Neumann boundary conditions given in \\cref{eq:body_domain_bc3}. 
According to \\cref{eq:discretization_laplace}, the discretized equation is given by\n\\begin{align}\\label{eq:discretized_body}\n  \\bfK_{\\bfsigma_b}\\,\\bfphi_b = 0.\n\\end{align}\n\nIn addition, the value of the body potential $\\phi_b$ is coupled to the extracellular potential $\\phi_e$ in the muscle domain $\\Omega_M$ via the coupling conditions on the boundary $\\Gamma_M$ given in \\cref{eq:body_domain_coupling}.\n\nWe write the discretized and coupled multidomain equations as a linear system of equations in generic block-matrix form:\n\\begin{align}\\label{eq:discretized_multidomain_body}\n  \\left[\n  \\begin{array}{@{}c|c|c|c@{}}\n    \\bfA_{V_m,V_m}^k & \\bfB_{V_m,\\phi_e}^k & &\\\\[2mm]\n    \\bfB_{\\phi_e,V_m}^k & \\bfB_{\\phi_e,\\phi_e} & &\\bfB_{\\Gamma_M} \\\\[2mm] \\hline\n    &&\\bfC_{\\phi_b,\\phi_b} & -\\bfB_{\\Gamma_M}\\\\[2mm]\\hline\n    & \\bfI_{\\Gamma_M,\\phi_e} & -\\bfI_{\\Gamma_M,\\phi_b} &\\\\[2mm]\n  \\end{array}\n  \\right]\n  \\left[\n  \\begin{array}{@{}c@{}}\n    \\bfV_{m}^{k,(i+1)}  \\\\[2mm]\\hline \n    \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n    \\bfphi_{b}^{(i+1)}  \\\\[2mm]\\hline\n    \\bfq^{(i+1)}\n  \\end{array}\\right]\n  = \n  \\left[\\begin{array}{@{}c@{}}\n    \\bfb_{V_m}^{k,(i)} \\\\[2mm]\n    \\bfzero\\\\\\hline\n    \\bfzero\\\\\\hline \n    \\bfzero\n  \\end{array}\\right].\n\\end{align}\n\nThe vector of unknowns consists of the degrees of freedom in the finite element formulation at the next timestep $(i+1)$ of the transmembrane voltage $\\bfV_m^{k,(i+1)}$, the extracellular potential $\\bfphi_{e}^{(i+1)}$, the body potential $\\bfphi_{b}^{(i+1)}$, and additionally the flux $\\bfq^{(i+1)}$ over the shared boundary $\\Gamma_M$ of the muscle and the body domain, which was defined in \\cref{eq:definition_q}. For illustration purposes, only one compartment, $k=1$, for one MU, $N_\\text{MU}=1$, is considered.\n\nWe refer to parts of the matrix in \\cref{eq:discretized_multidomain_body} as block rows and block columns according to the given block-structure.\n\nThe first block row in the matrix equation is given by the discretized second multidomain equation. Following \\cref{eq:multidomain_discretization2,eq:multidomain_discretization3}, the matrices and the right-hand side are given by\n\\begin{align*}\n  &\\bfA^k_{V_m,V_m} = \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{\\dt}\\bfM, \\qquad\n  \\bfB^k_{V_m,\\phi_e} = \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k},\\\\[4mm]\n  &\\bfb_{V_m}^{k,(i)} = \\Big( \\dfrac{\\theta-1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{\\dt}\\bfM\\Big)\\,\\bfV_m^{k,(i)} \n      + \\dfrac{\\theta - 1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}.\n\\end{align*}\n%\n\nThe second block row describes the first multidomain equation that was derived in \\cref{eq:multidomain_discretization_helper4}. The flux term $\\bfq$ has been brought to the left-hand side and is incorporated by the boundary matrix $\\bfB_{\\Gamma_M}$ defined in \\cref{eq:definition_boundary_matrix}. The other matrices are formulated as follows:\n%\n\\begin{align*}\n  \\bfB_{\\phi_e,V_m}^k &= f_r^k \\bfK_{\\bfsigma_i^k}, & \n  \\bfB_{\\phi_e,\\phi_e} &= \\bfK_{\\bfsigma_e + \\bfsigma_i}.\n\\end{align*}\n%\n\nThe third block row is the formulation of the harmonic body potential $\\phi_b$ and the matrix $\\bfC_{\\phi_b,\\phi_b}$ equals the system matrix $\\bfK_{\\bfsigma_b}$ in \\cref{eq:discretized_body}. 
The coupling condition on the flux $q$ in \\cref{eq:body_domain_bc2} is accounted for by including the boundary matrix $\\bfB_{\\Gamma_M}$ in the last column. The minus sign comes from the fact that the outward normal vector on $\\Gamma_M$ as the boundary of $\\Omega_B$ has the opposite direction to the normal vector on $\\Gamma_M$ that is used for the models in the muscle domain $\\Omega_M$. Using the helper variable $\\bfq^{(i+1)}$, the second and third block rows of \\cref{eq:discretized_multidomain_body} are coupled according to the prescribed condition in \\cref{eq:body_domain_bc2}.\n\nThe other coupling condition, \\cref{eq:body_domain_bc1}, is accounted for by the last block row in \\cref{eq:discretized_multidomain_body}. The degrees of freedom for the extracellular potential $\\bfphi_e^{(i+1)}$ and the body potential $\\bfphi_b^{(i+1)}$ have equal values on the boundary $\\Gamma_M$. The matrices $\\bfI_{\\Gamma_M,\\phi_e}$ and $\\bfI_{\\Gamma_M,\\phi_b}$ are identity matrices that only have nonzero entries on the diagonal for the boundary degrees of freedom in the meshes of the muscle domain and body domain, respectively.\n\nBecause the vector $\\bfq^{(i+1)}$ is not an unknown in the system, the respective values in \\cref{eq:discretized_multidomain_body} have to be eliminated.\nAs a result, we obtain the following system, which is formulated for a generic number $N_\\text{MU}$ of MUs:\n%\n\\begin{align}\\label{eq:discretized_multidomain_body2}\n  \\left[\\begin{array}{@{}ccc|c|c@{}}\n    \\bfA_{V_m,V_m}^1 &&& \\bfB_{V_m,\\phi_e}^1 &\\\\[2mm]\n    &\\ddots&&\\vdots&\\\\[2mm]\n    &&\\bfA_{V_m,V_m}^{N_\\text{MU}} & \\bfB_{V_m,\\phi_e}^{N_\\text{MU}}&\\\\[2mm]\n    \\bfB_{\\phi_e,V_m}^1 & \\dots & \\bfB_{\\phi_e,V_m}^{N_\\text{MU}} & \\bfB_{\\phi_e,\\phi_e} & \\bfD \\\\[2mm] \\hline\n    &&&\\bfE & \\tilde{\\bfC}_{\\phi_b,\\phi_b}\n  \\end{array}\\right]\n  \\left[\\begin{array}{@{}c@{}}\n    \\bfV_{m}^{1,(i+1)}  \\\\[2mm]\n    \\vdots\\\\[2mm]\n    \\bfV_{m}^{N_\\text{MU},(i+1)}\\\\[2mm]\\hline \n    \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n    \\tilde{\\bfphi}_{b}^{(i+1)}\n  \\end{array}\\right]\n  = \n  \\left[\\begin{array}{@{}c@{}}\n    \\bfb_{V_m}^{1,(i)} \\\\[2mm]\n    \\vdots \\\\[2mm]\n    \\bfb_{V_m}^{N_\\text{MU},(i)}\\\\[2mm]\n    \\bfzero\\\\[2mm]\\hline\n    \\bfzero\n  \\end{array}\\right].\n\\end{align}\n\nFormally, the elimination step is carried out by adding the equations of the third block row in \\cref{eq:discretized_multidomain_body} that correspond to the boundary degrees of freedom on $\\Gamma_M$ to the corresponding equations of the same degrees of freedom in the second block row. This eliminates the last block column, which corresponds to $\\bfq^{(i+1)}$. Next, the duplicate boundary degrees of freedom that appear in both the $\\Omega_M$ and $\\Omega_B$ meshes are unified. The corresponding matrix columns in the third block column are removed. To preserve the entries of the third block row, they are added to the submatrix in block row three and block column two.\n\nNow considering the updated matrix equation in \\cref{eq:discretized_multidomain_body2}, all sub-blocks are equal to those in \\cref{eq:discretized_multidomain_body}, except for the former matrix $\\bfC_{\\phi_b,\\phi_b}$ and the new matrices $\\bfD$ and $\\bfE$. The new matrix $\\tilde{\\bfC}_{\\phi_b,\\phi_b}$ is obtained from $\\bfC_{\\phi_b,\\phi_b}$ by removing all rows and columns of boundary degrees of freedom. The removed entries are contained in the new matrices $\\bfD$ and $\\bfE$.
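\n\nThe block structure of \\cref{eq:discretized_multidomain_body2} maps directly onto sparse block assembly. The following SciPy sketch is purely schematic for the case $N_\\text{MU}=1$; the individual blocks are assumed to be given as sparse matrices:\n\\begin{verbatim}\nimport scipy.sparse as sp\n\n# Schematic assembly of the coupled system for one MU; the blocks\n# A_VV, B_Vphi, B_phiV, B_phiphi, D, E, C correspond to the block\n# matrix above. None stands for an all-zero block.\ndef assemble_block_system(A_VV, B_Vphi, B_phiV, B_phiphi, D, E, C):\n    return sp.bmat([[A_VV,   B_Vphi,   None],\n                    [B_phiV, B_phiphi, D   ],\n                    [None,   E,        C   ]], format='csr')\n\\end{verbatim}\n\n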
The size of the system matrix in \\cref{eq:discretized_multidomain_body2} equals $a\\times a$, where the number $a$ is composed of $N_\\text{MU}+1$ times the number of degrees of freedom in the muscle mesh plus the number of degrees of freedom in the fat layer mesh without the boundary degrees of freedom on $\\Gamma_M$. Accordingly, the vector $\\tilde{\\bfphi}_b^{(i+1)}$ is the same as $\\bfphi_b^{(i+1)}$ except that it does not contain the boundary degrees of freedom, which are already included in $\\bfphi_e^{(i+1)}$.\n\n\\Cref{eq:discretized_multidomain_body2} describes one iteration of the Crank-Nicolson scheme that is used to solve the multidomain model. This iteration is carried out alternately with the subcellular model according to the chosen operator splitting scheme. \n\nThe first $N_\\text{MU}$ block rows in \\cref{eq:discretized_multidomain_body2} contain the second multidomain equation for every MU. The second-to-last block row contains the first multidomain equation, and the last block row contains the body fat layer model.\n\nBecause of the implicit formulation, electric conduction in the intracellular space, the extracellular space and the body domain is bidirectionally coupled. Therefore, the model can be used to simulate the effects of natural activation in the muscle on EMG signals on the skin surface as well as the reverse effect of external stimulation on the surface on the electrophysiology.\n\n\\subsection{Discretization of the Fiber Based Electrophysiology Model}\n\nThe fiber based electrophysiology model consists of multiple independent 1D fiber domains, on which the monodomain equation \\cref{eq:monodomain} is solved. The transmembrane voltage $V_m$ is then mapped to a 3D mesh of the muscle domain and unidirectionally coupled to the first bidomain equation \\cref{eq:bidomain1}. The first bidomain equation is solved for the extracellular potential $\\phi_e$ and possibly the electric potential $\\phi_b$ in the body fat domain, which corresponds to EMG signals on the skin surface.\n\nThe temporal discretization of the monodomain equation was described in \\cref{sec:discretization_monodomain}. The diffusion term within the operator splitting requires a spatial discretization, for which we use the finite element method. This 1D diffusion equation is given as\n\\begin{align}\\label{eq:discretization_diffusion_term}\n  \\p{V_m}{t} = \\dfrac{\\sigma_\\text{eff}}{A_m\\,C_m} \\p{V_m}{x}{2}.\n\\end{align}\nIt can be solved using a timestepping scheme such as the implicit Euler method to obtain time-discrete values $V_m^{(i)}, i=1,2,\\dots$ for the transmembrane potential. The discretization leads to the matrix equation given in \\cref{eq:diffusion_helper2} and to the variants presented in \\cref{sec:mass_lumping} if mass lumping is used. 
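As an illustration, a minimal sketch of one such implicit Euler step on the fibers, reusing the lumped-mass form of \\cref{eq:lumping_implicit_euler_2}; the scalar prefactor $c$ is explained next:\n\\begin{verbatim}\nimport numpy as np\n\n# One implicit Euler step (I - dt*c*M_l^{-1}K) V^{i+1} = V^i,\n# applied to each fiber independently; M, K from a 1D assembly,\n# c = sigma_eff/(A_m*C_m). The per-fiber solves are independent.\ndef fiber_diffusion_step(V_fibers, M, K, dt, c):\n    A = np.eye(M.shape[0]) - dt * (c*K / M.sum(axis=1)[:, None])\n    return [np.linalg.solve(A, V) for V in V_fibers]\n\\end{verbatim}\n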
In the stiffness and mass matrices, the anisotropic conduction tensor is replaced by the constant scalar prefactor $c := \\sigma_\\text{eff}/(A_m\\,C_m)$ of the spatial second derivative in \\cref{eq:discretization_diffusion_term}.\n\nThe first bidomain equation \\cref{eq:bidomain1} is a 3D Poisson problem in terms of the unknown extracellular potential $\\phi_e$. According to \\cref{eq:discretization_laplace}, the finite element discretization is given by%\n\\begin{align*}\n  \\bfK_{\\bfsigma_i + \\bfsigma_e} \\bfphi_e^{(i+1)} &= - \\bfB_{\\Gamma_f}\\bff + \\textbf{rhs},\n\\end{align*}\nwhere the right-hand side $\\textbf{rhs}$ of the Poisson problem is the transmembrane flow and is given by\n\\begin{align}\\label{eq:static_bidomain_rhs}\n  \\textbf{rhs} &= -\\bfK_{\\bfsigma_i} \\bfV_{m,3D}^{(i+1)}.\n\\end{align}\nHere, $\\bfphi_e^{(i+1)}$ and $\\bfV_{m,3D}^{(i+1)}$ are the vectors of degrees of freedom on the 3D mesh for the extracellular potential $\\phi_e$ and the membrane potential $V_m$ at timestep $(i+1)$. With the homogeneous Neumann boundary conditions for $V_m$ and $\\phi_e$ given in \\cref{eq:monodomain_bc}, the boundary term $\\bfB_{\\Gamma_f}$ vanishes.\n\nIn summary, the following matrix equations are solved for the fiber based electrophysiology model with $n$ fibers:\n\\begin{subequations}\\label{eq:discretized_fibers}\n  \\begin{align}\n    \\left[\\begin{array}{@{}ccc@{}}\n      \\bfA &&\\\\\n      &\\ddots&\\\\\n      &&\\bfA\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      \\bfV_{m}^{1,(i+1)}  \\\\\n      \\vdots\\\\\n      \\bfV_{m}^{n,(i+1)}\n    \\end{array}\\right]\n    &= \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfV_{m}^{1,(i)}  \\\\\n      \\vdots\\\\\n      \\bfV_{m}^{n,(i)}\n    \\end{array}\\right],\\label{eq:discretized_fibers_1} \\\\[4mm]\n    \\bfV_{m,3D}^{(i+1)} &= \\bfP \\left[\\begin{array}{@{}c@{}}\n      \\bfV_{m}^{1,(i+1)} \\label{eq:discretized_fibers_2} \\\\\n      \\vdots\\\\\n      \\bfV_{m}^{n,(i+1)}\n    \\end{array}\\right],\\\\[4mm]\n    \\bfK_{\\bfsigma_i + \\bfsigma_e} \\bfphi_e^{(i+1)} &= -\\bfK_{\\bfsigma_i} \\bfV_{m,3D}^{(i+1)} \\label{eq:discretized_fibers_3}\n  \\end{align}\n\\end{subequations}\nwith the system matrix $\\bfA$ for a single fiber given according to \\cref{eq:lumping_implicit_euler_2} by\n\\begin{align*}\n  \\bfA &= \\bfI - \\dt\\,\\bfM_c^{-1}\\bfK_c.\n\\end{align*}\n\\Cref{eq:discretized_fibers_1} solves the diffusion part of the operator splitting in \\cref{eq:monodomain_operator_splitting}. After the values $\\bfV_{m}^{j,(i+1)}$ for the timestep $(i+1)$ are computed on the 1D fiber meshes, the homogenized vector $\\bfV_{m,3D}^{(i+1)}$ in the 3D mesh of the muscle domain $\\Omega_M$ is obtained by the prolongation operation $\\bfP$ in \\cref{eq:discretized_fibers_2}. The homogenized vector is used in the right-hand side of the bidomain model in \\cref{eq:discretized_fibers_3}, which computes the discretized extracellular potential $\\bfphi_e^{(i+1)}$.\n\n\\Cref{eq:discretized_fibers_3} can be extended by adding a body fat layer $\\Omega_B$ and the corresponding model for the electric potential $\\phi_b^{(i+1)}$. Then, the vector of unknowns contains the degrees of freedom for both $\\phi_e^{(i+1)}$ and $\\phi_b^{(i+1)}$. The stiffness matrix $\\bfK_{\\bfsigma_i + \\bfsigma_e}$ is obtained by integrating over both meshes in $\\Omega_M \\cup \\Omega_B$. 
In the elements of the finite element mesh for $\\Omega_B$, the conduction tensors are redefined as $\\bfsigma_i = \\bfzero$ and $\\bfsigma_e = \\bfsigma_b$. This sets the right-hand side of \\cref{eq:discretized_fibers_3} to zero in $\\Omega_B$, and the solution $\\phi_b$ is harmonic in accordance with the model in \\cref{eq:body}. The coupling conditions \\cref{eq:body_domain_coupling} between $\\phi_e$ and $\\phi_b$ and the outer Neumann boundary conditions \\cref{eq:body_domain_bc3} for $\\phi_b$ are satisfied automatically by this approach.\n\nA comparison of the discretized multidomain model in \\cref{eq:discretized_multidomain_body2} with the discretized fiber based model in \\cref{eq:discretized_fibers} reveals several differences. Whereas the multidomain description consists of a single coupled linear system for electric conduction in the intracellular, extracellular and body domains, the formulations are only unidirectionally coupled in the fiber based description. While the multidomain model always computes the EMG signals on the skin surface in every timestep, the corresponding model in the fiber based description can be solved with larger timestep widths, using subcycling for the action potential propagation model.\n\nAs can be seen in \\cref{eq:discretized_fibers_1}, the system matrix is decoupled and contains independent problems for every fiber. This is an advantage compared to the multidomain model, where a system describing the whole muscle domain has to be solved. On the downside, separate representations of the transmembrane voltage $V_m$ exist in the fiber based description. The representation in the 3D mesh has to be computed by interpolation from the representation on the fibers. The multidomain description has a single vector of degrees of freedom for $V_m$ with fewer entries than in the fiber based description.\n\n\\subsection{Summary of Domains and Meshes}\n\nVarious finite element meshes occur in the formulation of the multi-scale model.\nIf the fiber based description is used, finite element meshes for the 1D fiber domains $\\Omega_f^j$ for ${j=1,\\dots,n}$ are required. Further meshes are needed for the 3D muscle domain $\\Omega_M$ and for the 3D body domain $\\Omega_B$. The meshes for $\\Omega_M$ and $\\Omega_B$ share nodes on their common boundary $\\Gamma_M$. The fiber meshes are embedded in the muscle domain. Their nodes do not necessarily have to coincide with the nodes of the muscle mesh.\n\nThe subcellular model is solved at locations $\\Omega_s^i$ for $i=1,\\dots,m$. These locations are the nodes of the fiber meshes for the fiber based description and the nodes of the muscle mesh for the multidomain description. For the fiber based description, we therefore have the inclusion $\\Omega_s^i \\subset \\Omega_f^j \\subset \\Omega_M$.\n\nFor the solid mechanics model, the unified 3D domain $\\Omega = \\Omega_M \\cup \\Omega_B$ is used. The mesh for the continuum mechanics formulation can be different from the meshes used for the electrophysiology model. In fact, the continuum mechanics mesh has special requirements in order to yield a consistent formulation. Our implementation uses two overlaid meshes of quadratic and linear hexahedral elements for the displacements and the hydrostatic pressure. \n\nOften, the required accuracy of the electrophysiology model is higher than that of the continuum mechanics model, such that differently resolved meshes can be used. 
To facilitate data mapping, the nodes of the mechanics mesh should be chosen as a subset of the nodes of the electrophysiology meshes.\n\n% ---\n", "meta": {"hexsha": "a316768b6fdd6e5786e02e9f86b1434c4e4cbe9b", "size": 42938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/05_theory_2.tex", "max_stars_repo_name": "maierbn/phd_thesis_source", "max_stars_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-05T19:00:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-05T19:00:04.000Z", "max_issues_repo_path": "document/05_theory_2.tex", "max_issues_repo_name": "maierbn/phd_thesis_source", "max_issues_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document/05_theory_2.tex", "max_forks_repo_name": "maierbn/phd_thesis_source", "max_forks_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.7148760331, "max_line_length": 929, "alphanum_fraction": 0.734011831, "num_tokens": 13678, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.727975443004307, "lm_q1q2_score": 0.588253081033663}}
{"text": "\\lab{Algorithms}{Scientific Visualization}{Scientific Visualization}\n\n\\objective{Go over techniques for making publication-ready figures and other pretty things.}\n\nPython and SciPy have no built-in plotting capabilities.  These capabilities are provided by Matplotlib.  In this lab we will be focusing on producing publication quality figures in Matplotlib using Python and SciPy.  Thus far we have produced many different plots for several applications.  Here we will focus on different ways of presenting those plots so that they are more informative and aesthetically pleasing.\n\n\\section{The \\li{pyplot.plot} Command}  \nSeveral of the past labs have required the use of the \\li{pyplot.plot} command.  While what we have done so far has been fairly straightforward as far as plotting is concerned, a quick glance at the Matplotlib documentation for \\li{pyplot.plot} reveals a plethora of options that can be set to adjust the final plot.\n\nHow we use the \\li{pyplot.plot} command will vary with our application.  Plotting data requires a very different presentation approach than graphing straight lines.  In the sequel we examine several different options.\n\n\\begin{table}[h!]\n\\begin{center}\n\t\\begin{tabular}{|c|c|}\n\t\n\t\\hline\n\t\n\tUsage & Property Options\\\\\n\t\n\t\\hline\n\t\n\t\\li{color} & \\li{yellow, green, blue} etc., or user-defined colors \\\\& given by $(r, g, b)$, a three-element red, green, \\\\& blue tuple.  See documentation for full list. \\\\\n\t\n\t\\hline\n\t\n\t\\li{linestyle} & \\li{-, --, :, -., 'None'} \\\\\n\t\n\t\\hline\n\t\n\t\\li{linewidth} & Width in points.  $1$ point is $\\frac{1}{72}$ inch.  Default is $1.0$ points. \\\\\n\t\n\t\\hline\n\t\n\t\\li{marker} & \\li{+, *, x, .} etc. See documentation for full list. \\\\\n\t\n\t\\hline\n\t\\end{tabular}\n\\end{center}\n\\caption{Common plot properties in Matplotlib}\n\\end{table}\n\nFor example, if we wanted to plot $f(x) = x^2$, we don't need to extend our typical plotting knowledge much, except maybe we like green better than blue:\n\n\\begin{lstlisting}[style=python]\n: import matplotlib.pyplot as plt\n: import scipy as sp\n: x = sp.linspace(-1,5,50)\n: f = lambda x: x**2\n: plt.plot(x, f(x), color='green')\n: plt.show()\n\\end{lstlisting}\n\nSee figure 37.1.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{./FiguresMAT/plot1}\n\\caption{Graph of $f(x) = x^2$}\n\\end{center}\n\\end{figure}\n\n\nHowever, if we wanted to plot data drawn from a certain distribution, we would want to use the plot command in a very different way.  For example:\n\n\\begin{lstlisting}[style=python]\n: X = sp.random.normal(0, 1, 100)\n: plt.plot(X, marker='*', linestyle='None')\n: plt.show()\n\\end{lstlisting}\n\nSee figure 37.2.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{./FiguresMAT/plot3}\n\\caption{100 data points pulled from a normal distribution}\n\\end{center}\n\\end{figure}\n
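\nThe properties from the table above can be combined freely in a single call.  A quick sketch (the particular style choices here are arbitrary):\n\n\\begin{lstlisting}[style=python]\n: x = sp.linspace(0, 5, 20)\n: plt.plot(x, x**2, color=[0.8, 0.2, 0.2], linestyle='--', marker='+', linewidth=2.0)\n: plt.show()\n\\end{lstlisting}\n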
\nThe \\li{hold} command will also be useful when plotting more than one graph on a single plot.  For example, if we wanted to graph $f(x) = \\sin{x}$ and its derivative $f'(x) = \\cos{x}$, we would do the following:\n\n\\begin{lstlisting}[style=python]\n: f = lambda x: sp.sin(x)\n: df = lambda x: sp.cos(x)\n: x = sp.linspace(0, 2*sp.pi, 50)\n: plt.hold(True)\n: plt.plot(x, f(x), color='blue')\n: plt.plot(x, df(x), color='green')\n: plt.hold(False)\n: plt.show()\n\\end{lstlisting}\n\nSee figure 37.3.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{./FiguresMAT/plot2}\n\\caption{$f(x) = \\sin{x}$ and $f'(x) = \\cos{x}$. Note the color choice makes for better presentation.}\n\\end{center}\n\\end{figure}\n\n\\begin{problem}  Using the Matplotlib documentation and the aforementioned commands, do the following:\n\\begin{itemize}\n\\item Generate 25 data points from a distribution of your choosing.  Plot the data points and the least squares regression line on the same plot.  Use different colors and symbols for the data and the regression line.\n\\end{itemize}\n\\end{problem}\n\n\\subsection{Changing the Plot window}\n\nWithin Matplotlib we have several options for changing the \\li{plot} window.  This includes adding captions and other labels, legends, and altering the axes (among other things).  For example, we plot 3 trajectories of a projectile shot from a cannon at 30 meters per second.\n\n\\begin{lstlisting}[style=python]\n: g = 9.8\n: v = 30.0\n: t = sp.linspace(0,6,100)\n:\n: h1 = lambda t: v*sp.sin(sp.pi/6)*t - 0.5*g*t**2\n: h2 = lambda t: v*sp.sin(sp.pi/4)*t - 0.5*g*t**2\n: h3 = lambda t: v*sp.sin(sp.pi/3)*t - 0.5*g*t**2\n:\n: plt.hold(True)\n: plt.plot(v*sp.cos(sp.pi/6)*t, h1(t), color='green')\n: plt.plot(v*sp.cos(sp.pi/4)*t, h2(t), color='blue')\n: plt.plot(v*sp.cos(sp.pi/3)*t, h3(t), color='red')\n: plt.axis([0, 100, 0, 40])\n: plt.xlabel('Distance')\n: plt.ylabel('Height')\n: plt.hold(False)\n: plt.show()\n\\end{lstlisting}\n\nSee figure 37.4\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{./FiguresMAT/plot4}\n\\caption{Trajectory of a projectile fired at three different angles.}\n\\end{center}\n\\end{figure}\n\n\\begin{problem} \n\\end{problem}\n
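\nA legend is another common addition to the plot window.  Here is a minimal sketch continuing the projectile example above (the label strings are arbitrary):\n\n\\begin{lstlisting}[style=python]\n: plt.plot(v*sp.cos(sp.pi/6)*t, h1(t), color='green', label='30 degrees')\n: plt.plot(v*sp.cos(sp.pi/4)*t, h2(t), color='blue', label='45 degrees')\n: plt.legend()\n: plt.show()\n\\end{lstlisting}\n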
\n\\section{The \\li{subplot} command}  \nOftentimes we want to include two or more plots in a single figure.  This is done using the \\li{subplot} command.  This command breaks the plot window into an $m\\times n$ matrix of plots.  The subplot windows are numbered starting at $1$ in the top left cell and then ascending from left to right, top to bottom.  For example, instead of using different colors on the previous example, we could do separate plots in a single window.  We could also include a graph of, for example, vertical velocity vs. time, which is informative but cannot be included in the same plot since it requires different units.  Let's see how this is done.\n\n\\begin{lstlisting}[style=python]\n: g = 9.8\n: v = 30.0\n: t = sp.linspace(0,6,100)\n:\n: h1 = lambda t: v*sp.sin(sp.pi/6)*t - 0.5*g*t**2\n: h2 = lambda t: v*sp.sin(sp.pi/4)*t - 0.5*g*t**2\n:\n: plt.subplot(221)\n: plt.plot(v*sp.cos(sp.pi/6)*t, h1(t), color='green')\n: plt.axis([0, 100, 0, 40])\n: plt.xlabel('Distance')\n: plt.ylabel('Height')\n: plt.title('30 degrees')\n:\n: plt.subplot(222)\n: plt.plot(t, v*sp.sin(sp.pi/6) - g*t, color='green')\n: plt.xlabel('Time')\n: plt.ylabel('Vertical Velocity')\n:\n: plt.subplot(223)\n: plt.plot(v*sp.cos(sp.pi/4)*t, h2(t), color='blue')\n: plt.axis([0, 100, 0, 40])\n: plt.xlabel('Distance')\n: plt.ylabel('Height')\n: plt.title('45 degrees')\n:\n: plt.subplot(224)\n: plt.plot(t, v*sp.sin(sp.pi/4) - g*t, color='blue')\n: plt.xlabel('Time')\n: plt.ylabel('Vertical Velocity')\n: plt.show()\n\\end{lstlisting}\n\nSee figure 37.5.\n\n\\begin{figure}\n\\begin{center}\n%\\includegraphics[scale=0.5]{./Figures/plot5}\n\\caption{Trajectory of a projectile fired at two different angles and the projectile's corresponding vertical velocity at time $t$.}\n\\end{center}\n\\end{figure}\n\n\\section{Time graphics?}\n\n\n\n", "meta": {"hexsha": "a0d78ca58d8ff792bab9b58eeb3564d441fb36b7", "size": 6788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algorithms/SciViz/SciViz.tex", "max_stars_repo_name": "jasongrout/numerical_computing", "max_stars_repo_head_hexsha": "fa29838af62417703c65f680b167e81828de01c5", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Algorithms/SciViz/SciViz.tex", "max_issues_repo_name": "jasongrout/numerical_computing", "max_issues_repo_head_hexsha": "fa29838af62417703c65f680b167e81828de01c5", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Algorithms/SciViz/SciViz.tex", "max_forks_repo_name": "jasongrout/numerical_computing", "max_forks_repo_head_hexsha": "fa29838af62417703c65f680b167e81828de01c5", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-21T23:06:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-21T23:06:27.000Z", "avg_line_length": 34.8102564103, "max_line_length": 643, "alphanum_fraction": 0.7060989982, "num_tokens": 2025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8840392725805822, "lm_q1q2_score": 0.5882490780534732}}
{"text": "\\section{Moment Generating Functions}\r\n\\subsection{Moment Generating Function of One Random Variable}\r\n\\begin{definition}\r\n    Let $X$ have density $f$, then the moment generating function (MGF) is defined by\r\n    $$m(\\theta)=\\mathbb E[e^{\\theta X}]=\\int_{-\\infty}^\\infty e^{\\theta x}f(x)\\,\\mathrm dx$$\r\n    wherever it is finite.\r\n\\end{definition}\r\nNote that $m(0)=1$.\r\n\\begin{theorem}\r\n    The MGF uniquely determines the distribution of a random variable provided it is defined on some open interval around $0$.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Omitted.\r\n\\end{proof}\r\n\\begin{theorem}\r\n    Suppose the MGF is defined on some open interval around $0$, then $m^{(r)}(0)=\\mathbb E[X^r]$.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Omitted as well.\r\n\\end{proof}\r\n\\begin{example}\r\n    1. Gamma distribution $X\\sim\\Gamma(n,\\lambda)$ for $n\\in\\mathbb N,\\lambda\\ge 0$:\r\n    $$f(x)=e^{-\\lambda x}\\lambda^n\\frac{x^{n-1}}{(n-1)!}$$\r\n    defined for $x\\ge 0$.\r\n    For $n=1$, we get $\\operatorname{Exp}(\\lambda)$.\r\n    One can show by the reduction formula that this is indeed a density.\\\\\r\n    In this case, for $\\theta<\\lambda$ we have\r\n    \\begin{align*}\r\n        m(\\theta)&=\\mathbb E[e^{\\theta X}]\\\\\r\n        &=\\int_0^\\infty e^{-(\\lambda-\\theta)x}\\frac{\\lambda^n}{(\\lambda-\\theta)^n}(\\lambda-\\theta)^n\\frac{x^{n-1}}{(n-1)!}\\,\\mathrm dx\\\\\r\n        &=\\left(\\frac{\\lambda}{\\lambda-\\theta} \\right)^n,\r\n    \\end{align*}\r\n    since the remaining integrand is the density of $\\Gamma(n,\\lambda-\\theta)$ and integrates to $1$;\r\n    taking $n=1$ then gives the MGF of the exponential.\r\n    Suppose now that $X_1,\\cdots,X_n$ are independent, then the MGF of the sum is\r\n    $$m(\\theta)=\\mathbb E[e^{\\theta(X_1+\\cdots+X_n)}]=\\mathbb E[e^{\\theta X_1}]\\cdots\\mathbb E[e^{\\theta X_n}]$$\r\n    So if $X\\sim\\Gamma(n,\\lambda),Y\\sim\\Gamma(m,\\lambda)$ are independent, then\r\n    $$\\mathbb E[e^{\\theta(X+Y)}]=\\left(\\frac{\\lambda}{\\lambda-\\theta} \\right)^n\\left(\\frac{\\lambda}{\\lambda-\\theta} \\right)^m=\\left(\\frac{\\lambda}{\\lambda-\\theta} \\right)^{m+n}$$\r\n    so $X+Y\\sim\\Gamma(n+m,\\lambda)$.\r\n    In particular, if $X_1,\\ldots,X_n$ are i.i.d. $\\operatorname{Exp}(\\lambda)$, then $X_1+\\cdots+X_n\\sim\\Gamma(n,\\lambda)$.\\\\\r\n    2. Suppose $X\\sim\\mathcal N(\\mu,\\sigma^2)$, then the MGF of $X$ is\r\n    \\begin{align*}\r\n        m(\\theta)&=\\mathbb E[e^{\\theta X}]\\\\\r\n        &=\\int_{-\\infty}^\\infty e^{\\theta x}\\frac{1}{\\sqrt{2\\pi \\sigma^2}}e^{-(x-\\mu)^2/(2\\sigma^2)}\\,\\mathrm dx\\\\\r\n        &=e^{\\mu\\theta+\\theta^2\\sigma^2/2},\r\n    \\end{align*}\r\n    after completing the square in the exponent.\r\n    Now if $X\\sim\\mathcal N(\\mu,\\sigma^2)$, then we know that $aX+b\\sim\\mathcal N(a\\mu+b,a^2\\sigma^2)$.\r\n    We can prove this again using the MGF; indeed,\r\n    $$\\mathbb E[e^{\\theta(aX+b)}]=e^{\\theta b}e^{a\\theta\\mu+(a\\theta)^2\\sigma^2/2}=e^{\\theta(a\\mu +b)+\\theta^2(a^2\\sigma^2)/2}$$\r\n    So $aX+b\\sim\\mathcal N(a\\mu+b,a^2\\sigma^2)$.\r\n    Now let $Y\\sim\\mathcal N(\\nu,\\tau^2)$ be independent of $X$, then by using the same trick (MGF), we get $\\mathbb E[e^{\\theta(X+Y)}]=e^{\\theta(\\mu+\\nu)+\\theta^2(\\sigma^2+\\tau^2)/2}$ so $X+Y\\sim\\mathcal N(\\mu+\\nu,\\sigma^2+\\tau^2)$.\\\\\r\n    3. 
(non-example) The Cauchy distribution is given by the density $f(x)=1/(\\pi(1+x^2))$ for $x\\in\\mathbb R$.\r\n    One can check that $m(\\theta)=\\infty$ for any $\\theta\\neq 0$, so if $X\\sim f$ then $X,2X,\\ldots$ all have the same MGF but not the same distribution.\r\n    So the assumption that $m(\\theta)$ is finite on an open interval is essential.\r\n\\end{example}\r\n\\subsection{Multivariate Moment Generating Function}\r\n\\begin{definition}\r\n    Let $X=(X_1,\\ldots,X_n)$ be a random variable in $\\mathbb R^n$, then the MGF of $X$ is defined to be the function $m:\\mathbb R^n\\to\\mathbb R$ with\r\n    $$m(\\theta)=\\mathbb E[e^{\\theta^\\top X}]=\\mathbb E[e^{\\theta_1X_1+\\cdots+\\theta_nX_n}]$$\r\n    where $\\theta=(\\theta_1,\\ldots,\\theta_n)^\\top$.\r\n\\end{definition}\r\nProvided that $m(\\theta)$ is finite for a suitable range of values of $\\theta$ (the precise condition is beyond the scope of this course), it uniquely characterizes the distribution of $X$.\r\nAlso\r\n$$\\left.\\frac{\\partial^rm}{\\partial\\theta_i^r}\\right|_{\\theta=0}=\\mathbb E[X_i^r],\\left.\\frac{\\partial^{r+s}m}{\\partial\\theta_i^r\\partial\\theta_j^s}\\right|_{\\theta=0}=\\mathbb E[X_i^rX_j^s]$$\r\nMoreover, $X_1,\\ldots,X_n$ are independent iff\r\n$$m(\\theta)=\\prod_{i=1}^n\\mathbb E[e^{\\theta_iX_i}]$$\r\n\\begin{definition}\r\n    Let $(X_n,n\\in\\mathbb N)$ be a sequence of random variables, and let $X$ be a random variable.\r\n    We say $X_n\\to X$ in distribution if\r\n    $$F_{X_n}(x)\\to F_X(x),\\quad\\text{where } F_{X_n}(x)=\\mathbb P(X_n\\le x),\\ F_X(x)=\\mathbb P(X\\le x),$$\r\n    for each $x\\in\\mathbb R$ such that $F_X$ is continuous at $x$.\r\n\\end{definition}\r\n\\begin{theorem}[Continuity Theorem for MGFs]\r\n    Let $X$ be a random variable with MGF $m$ such that $m(\\theta)<\\infty$ for some $\\theta\\neq 0$.\r\n    Suppose that\r\n    $$m_n(\\theta)=\\mathbb E[e^{\\theta X_n}]\\to m(\\theta)\\quad\\text{for all }\\theta\\in\\mathbb R.$$\r\n    Then $X_n\\to X$ in distribution.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Omitted.\r\n\\end{proof}\r\n", "meta": {"hexsha": "1ce93253259ea931a24e425b783d02384d11d742", "size": 4861, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "12/moment.tex", "max_stars_repo_name": "david-bai-notes/IA-Probability", "max_stars_repo_head_hexsha": "47487b998f0975ea0a342e17b5b9dffa0bb8e9ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "12/moment.tex", "max_issues_repo_name": "david-bai-notes/IA-Probability", "max_issues_repo_head_hexsha": "47487b998f0975ea0a342e17b5b9dffa0bb8e9ec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "12/moment.tex", "max_forks_repo_name": "david-bai-notes/IA-Probability", "max_forks_repo_head_hexsha": "47487b998f0975ea0a342e17b5b9dffa0bb8e9ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.012345679, "max_line_length": 230, "alphanum_fraction": 0.6338202016, "num_tokens": 1758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891479496521, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5882138455159495}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\n% Required to call the command \\operatorname{name} and \\DeclareMathOperator{command}{definition}\n\\usepackage{amsmath}\n\n% Required to render the clickable url\n\\usepackage[hyphens]{url}\n\\usepackage{hyperref}\n\n% Declare a new math function\n\\DeclareMathOperator{\\abc}{abc}\n\n\\begin{document}\n\t\n\\section*{Existing functions}\n\n\\LaTeX{} comes with a lot of built-in mathematical functions, such as $\\cos x$, $\\ln x$ or $\\max x$.\nA complete list of symbols can be found in \\emph{The Comprehensive \\LaTeX{} Symbol List}, hosted in the \\emph{Best Practices} part of this repository (\\url{https://github.com/ZenLulz/LatexCompendium/blob/master/best-practices/the-comprehensive-latex-symbol-list.pdf}).\n\n\\section*{Custom functions}\n\\subsection*{The command \\emph{operatorname}}\n\nThe following formula uses a custom function called \\emph{abc}.\n\n\\[\\operatorname{abc} x\\]\n\n\\subsection*{The command \\emph{DeclareMathOperator}}\n\nThis command enables defining math functions or operators in the preamble so they can be reused everywhere.\n\n\\[\\abc x\\]\n\n\\subsection*{The command \\emph{mathrm}}\n\nMany people use the command \\emph{mathrm}; nevertheless, this leads to spacing issues, as illustrated below.\n\n\\[\\mathrm{abc}x\\]\n\n\\end{document}", "meta": {"hexsha": "320f3f7d3fb3ab7b2042f12eb47775ae165cdf12", "size": 1292, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "compendium/mathematics/functions-and-custom-functions.tex", "max_stars_repo_name": "ZenLulz/LatexCompendium", "max_stars_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-07-30T21:43:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-23T20:16:19.000Z", "max_issues_repo_path": "compendium/mathematics/functions-and-custom-functions.tex", "max_issues_repo_name": "ZenLulz/LatexCompendium", "max_issues_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "compendium/mathematics/functions-and-custom-functions.tex", "max_forks_repo_name": "ZenLulz/LatexCompendium", "max_forks_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7619047619, "max_line_length": 268, "alphanum_fraction": 0.7670278638, "num_tokens": 340, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8221891261650247, "lm_q1q2_score": 0.5882138299307043}}
{"text": "%!TEX root = ../notes.tex\n\\section{March 18, 2022}\n\\subsection{Elliptic Curves over Finite Fields \\emph{continued}}\n\\recall from last class, we had our toy example\n\\begin{example*}\n    We take elliptic curve\n    \\[y^2 = x^3 + x + 2\\quad \\text{ over }\\ZZ/5\\ZZ.\\]\n    with group law\n    \\[\\begin{array}{c|cccc}\n            +     & \\cO   & (1,2) & (4,0) & (1,3) \\\\ \\hline\n            \\cO   & \\cO   & (1,2) & (4,0) & (1,3) \\\\\n            (1,2) & (1,2) & (4,0) & (1,3) & \\cO   \\\\\n            (4,0) & (4,0) & (1,3) & \\cO   & (1,2) \\\\\n            (1,3) & (1,3) & \\cO   & (1,2) & (4,0)\n        \\end{array}\\]\n\\end{example*}\nWe define some more useful functions:\n\\begin{lstlisting}\ndef minus(P, p):\n    if P == O:\n        return O\n    else:\n        x, y = P\n        return (x, (-y) % p)\n\\end{lstlisting}\nWhat about multiplication? We can repeatedly add:\n\\begin{lstlisting}\ndef multiply(P, n, a, p):\n    S = O\n    for _ in range(n):\n        S = add(P, S, a, p)\n    return S\n\\end{lstlisting}\nbut this might be very slow (it does $n$ iterations). We can do something similar to fast powering for exponentiation, except we repeatedly double our point:\n\\begin{lstlisting}\ndef multiply(P, n, a, p):\n    S = O\n    while n != 0:\n        if n % 2 == 1:\n            S = add(S, P, a, p)\n        n = n // 2\n        P = add(P, P, a, p)\n    return S\n\\end{lstlisting}\n\n\\begin{ques*}\n    What is the order of $E(\\FF_p)$? That is, how many points are there on the elliptic curve?\n\\end{ques*}\nSuppose we take some pair $(x, y)$. We want to know whether\n\\[y^2\\overset{?}{=}x^3 + ax + b\\]\nholds. There are $p^2$ different pairs $(x, y)$, and the probability that this equality holds is \\emph{like} $\\frac{1}{p}$. So there are about $p^2\\cdot \\frac{1}{p} + 1\\approx p + 1$ (adding one for the point $\\cO$) elements in $E(\\FF_p)$.\n\n\\begin{theorem}\n    \\[|(p+1) - \\#E(\\FF_p)| \\leq 2\\sqrt{p}.\\] That is, the difference between our estimate $p+1$ and the actual number of points in $E(\\FF_p)$ is bounded by $2\\sqrt{p}$.\n\\end{theorem}\n\\begin{proof}\n    \\emph{(Beyond the scope of this class.)}\n\\end{proof}\n\n\\begin{remark}\n    ~\\begin{enumerate}\n        \\item We note that this number can be efficiently computed (in polynomial time in the digits of $p$). \\emph{(Again, beyond the scope of this class.)}\n        \\item We call the signed quantity $(p+1) - \\#E(\\FF_p)$ the \\ul{trace of Frobenius}. \\emph{(You guessed it: again, beyond the scope of this class.)}\n    \\end{enumerate}\n\\end{remark}\n\nJust for funsies, we can compute the \\emph{trace of Frobenius}\\footnote{Using \\emph{Sage}.}:\n\\begin{lstlisting}\np = next_prime(92834712736591432)\nE = EllipticCurve([34123498, GF(p)(2349182347)])\nE.trace_of_frobenius()\n\\end{lstlisting}\n
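\nAs a quick sanity check of \\texttt{multiply}, we can reproduce the group law table of our toy curve $y^2 = x^3 + x + 2$ over $\\ZZ/5\\ZZ$ (so $a = 1$ and $p = 5$), assuming the \\texttt{add} function from last class is in scope:\n\\begin{lstlisting}\nP = (1, 2)\nmultiply(P, 2, 1, 5)  # (4, 0), matching the table entry (1,2)+(1,2)\nmultiply(P, 3, 1, 5)  # (1, 3)\nmultiply(P, 4, 1, 5)  # O, the identity\n\\end{lstlisting}\n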
We could also create greater security at smaller key sizes.\n\n\\subsubsection{Sharing Secrets}\n\\emph{Public information:} we have some $p$ prime and $E$ an elliptic curve over $\\FF_p$. We have $P$ a point in the elliptic curve $E$.\n\nAlice and Bob do the following:\n\n\\begin{enumerate}\n    \\item Alice and Bob each generate $a$ and $b$. Alice computes $a\\cdot P$ and Bob computes $b\\cdot P$. This is shared to each other (and public).\n    \\item Alice now computes $a\\cdot (b\\cdot P)$ and Bob computes $b\\cdot (a\\cdot P)$. These are all equal to $(a\\cdot b)\\cdot P$ which is a shared secret.\n\\end{enumerate}\n", "meta": {"hexsha": "65a2fa8f1fd4cf463a484d264d0c3fef06b10095", "size": 3884, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-03-18.tex", "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_issues_repo_path": "lectures/2022-03-18.tex", "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-03-18.tex", "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.2173913043, "max_line_length": 331, "alphanum_fraction": 0.6333676622, "num_tokens": 1313, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.588213826813655}}
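To see the exchange run end to end, here is a small self-contained Python sketch (an editorial addition, not from the lecture). It re-implements the add and multiply routines above for the toy curve $y^2 = x^3 + x + 2$ over $\ZZ/5\ZZ$; the modular inverse relies on Python 3.8+'s pow(x, -1, p), and the secrets $a = 2$, $b = 3$ are toy values:

\begin{lstlisting}
O = 'O'  # the point at infinity

def add(P, Q, a, p):
    # Group law on y^2 = x^3 + ax + b over Z/pZ
    if P == O: return Q
    if Q == O: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                 # P + (-P) = O
    if P == Q:                                   # tangent line (doubling)
        lam = (3*x1*x1 + a) * pow(2*y1, -1, p) % p
    else:                                        # secant line
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam*lam - x1 - x2) % p
    return (x3, (lam*(x1 - x3) - y1) % p)

def multiply(P, n, a, p):                        # double-and-add, as above
    S = O
    while n != 0:
        if n % 2 == 1:
            S = add(S, P, a, p)
        n = n // 2
        P = add(P, P, a, p)
    return S

a, p, P = 1, 5, (1, 2)       # public: curve y^2 = x^3 + x + 2 over Z/5Z
alice, bob = 2, 3            # toy secrets
A = multiply(P, alice, a, p)                     # Alice publishes a*P
B = multiply(P, bob, a, p)                       # Bob publishes b*P
assert multiply(B, alice, a, p) == multiply(A, bob, a, p)  # both get (ab)*P
\end{lstlisting}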
{"text": "\\section{Planes}\nIn this section we will discuss how to represent planes, and many ways we can use them. We will see that they play a very similar role to the role lines play in 2D, and many of the operations we defined in section~\\ref{s:lines} have a direct equivalent here. Since the explanations are nearly identical, we make them a bit shorter here; please refer to section~\\ref{s:lines} if you want more details.\n\n\\subsection{Defining planes}\nPlanes are sets of points $(x,y,z)$ which obey an equation of the form $ax+by+cz=d$. Here, $a,b,c$ determine the orientation of the plane, while $d$ determines its position relative to the origin.\n\n\\centerFig{plane0}\n\nVector $\\vv{n}=(a,b,c)$ is perpendicular to the plane and is called a \\emph{normal} of the plane. The equation can be rewritten as $\\vv{n}\\cdot(x,y,z) = d$: that is, the plane is formed by all points whose dot product with $\\vv{n}$ is equal to constant $d$. This makes sense, because we know that the dot product doesn't change when one vector (here, the point $(x,y,z)$) moves perpendicularly to the other (here, the normal $\\vv{n}$).\n\n\\centerFig{plane1}\n\nHere are some other ways to define a plane, and how to find $\\vv{n}$ and $d$ in each case:\n\\begin{itemize}\n\\item if we know the normal $\\vv{n}$ and a point $P$ belonging to the plane: we can find $d$ as $\\vv{n}\\cdot P$;\n\\item if we know a point $P$ and two (non-parallel) vectors $\\vv{v}$ and $\\vv{w}$ that are parallel to the plane: we can define $\\vv{n}=\\crossv{v}{w}$ then find $d$ as above;\n\\item if we know 3 non-collinear points $P,Q,R$ on the plane: we first find two vectors $\\vv{v}=\\vv{PQ}$ and $\\vv{w} = \\vv{PR}$ that are parallel from the plane, then find $\\vv{n}$ and $d$ as above.\n\\end{itemize}\n\nWe implement this with the following structure and constructors:\n\\begin{lstlisting}\nstruct plane {\n    p3 n; T d;\n    // From normal n and offset d\n    plane(p3 n, T d) : n(n), d(d) {}\n    // From normal n and point P\n    plane(p3 n, p3 p) : n(n), d(n|p) {}\n    // From three non-collinear points P,Q,R\n    plane(p3 p, p3 q, p3 r) : plane((q-p)*(r-p), p) {}\n    \n    // Will be defined later:\n    // - these work with T = int\n    T side(p3 p);\n    double dist(p3 p);\n    plane translate(p3 t);\n    // - these require T = double\n    plane shiftUp(double dist);\n    p3 proj(p3 p);\n    p3 refl(p3 p);\n};\n\\end{lstlisting}\n\n\\subsection{Side and distance}\nThe first thing we are interested for a plane $\\Pi$ in is the value of $ax+by+cz-d$. If it's zero, the point is on $\\Pi$, and otherwise it tells us which side of $\\Pi$ the point lies. We name it \\lstinline|side()|, like its 2D line equivalent:\n\\begin{lstlisting}\nT side(p3 p) {return (n|p)-d;}\n\\end{lstlisting}\nwhich we will denote $\\side_{\\Pi}(P)$.\n\n$\\side_{\\Pi}(P)$ is positive if $P$ is on the side of $\\Pi$ pointed by $\\vv{n}$, and negative for the other side. 
In fact, for given points $P,Q,R,S$, \\lstinline|plane(p,q,r).side(s)| gives the same value as \\lstinline|orient(p,q,r,s)|.\n\n\\centerFig{plane2}\n\nAnd just like the \\lstinline|side()| for 2D lines, we can get the distance from it, if we compensate for the norm of $\\vv{n}$:\n\\begin{lstlisting}\ndouble dist(p3 p) {return abs(side(p))/abs(n);}\n\\end{lstlisting}\n\n\\subsection{Translating a plane}\nIf we translate a plane by a vector $\\vv{t}$, the normal $\\vv{n}$ remains unchanged, but the offset $d$ changes.\n\n\\centerFig{plane3}\n\nTo find the new value $d'$, we use the same argument as for 2D lines: suppose point $P$ is on the old plane, that is, $\\vv{n}\\cdot P = d$. Then $P+\\vv{t}$ should be in the new plane:\n\\[d' = \\vv{n}\\cdot(P+\\vv{t}) = \\vv{n}\\cdot P + \\dotv{n}{t} = d + \\dotv{n}{t}\\]\n\n\\begin{lstlisting}\nplane translate(p3 t) {return {n, d+(n|t)};}\n\\end{lstlisting}\n\nAnd if we want to shift perpendicularly (in the direction of $\\vv{n}$) by some distance $\\delta$, then $\\dotv{n}{t}$ becomes $\\delta\\normv{n}$, which gives the following code:\n\\begin{lstlisting}\nplane shiftUp(double dist) {return {n, d + dist*abs(n)};}\n\\end{lstlisting}\n\n\\subsection{Orthogonal projection and reflection}\nThe orthogonal projection of a point $P$ on a plane $\\Pi$ is the point on $\\Pi$ that is closest to $P$. The reflection of point $P$ by plane $\\Pi$ is the point on the other side of $\\Pi$ that is at the same distance and has the same orthogonal projection.\n\n\\centerFig{plane4}\n\nWe use a similar reasoning as in section~\\ref{ss:proj-refl}. Clearly, to go from $P$ to its projection on $\\Pi$, we need to move perpendicularly to $\\Pi$, that is, we need to move by $k\\vv{n}$ for some $k$, so that the resulting point $P+k\\vv{n}$ is on $\\Pi$.\n\nFrom this we find $k$:\n\\begin{align*}\n\\vv{n}\\cdot(P+k\\vv{n}) = d\n&\\Leftrightarrow \\vv{n}\\cdot P + k(\\dotv{n}{n}) = d \\\\\n&\\Leftrightarrow k\\normv{n}^2 = -(\\vv{n}\\cdot P - d) \\\\\n&\\Leftrightarrow k = -\\frac{\\side_{\\Pi}(P)}{\\normv{n}^2}\n\\end{align*}\n\nAnd to find the reflection, we need to move $P$ twice as far to $P+2k\\vv{n}$, so we get the following implementation for both:\n\\begin{lstlisting}\np3 proj(p3 p) {return p - n*side(p)/sq(n);}\np3 refl(p3 p) {return p - n*2*side(p)/sq(n);}\n\\end{lstlisting}\n\n\\subsection{Coordinate system based on a plane}\nWhen we have a plane $\\Pi$, sometimes we will want to know what are the coordinates of a point in $\\Pi$. That is, suppose we have a few points that we know are coplanar, and we want to use some 2D algorithm on them. How do we get the 2D coordinates of those points?\n\nTo do this, we need to chose an origin point $O$ on $\\Pi$ and two vectors $\\vv{d_x}$ and $\\vv{d_y}$ indicating the desired $x$ and $y$ directions. Let's first assume $\\vv{d_x}$ and $\\vv{d_y}$ are perpendicular and their norms are 1. 
Then, to find the $x$- and $y$-coordinate of a point $P$ on $\\Pi$, we simply need to compute\n\\begin{align*}\nx &= \\dotv{OP}{d_x} \\\\\ny &= \\dotv{OP}{d_y}\n\\end{align*}\nand if we also have vector $\\vv{d_z} = \\crossv{d_x}{d_y}$ perpendicular to the first two, we can find the ``height'' of $P$ respective to $\\Pi$:\n\\[z = \\dotv{OP}{d_z}\\]\n\n\\centerFig{plane5}\n\nIf we have three non-collinear points $P,Q,R$ that form plane $\\Pi$, how can we choose $O$, $\\vv{d_x}$, $\\vv{d_y}$ and $\\vv{d_z}$?\n\\begin{enumerate}\n\\item First, we choose the origin $O$ to be $P$.\n\\item Then, we choose $\\vv{d_x}$ to be $\\vv{PQ}$, and scale it to have a norm of 1.\n\\item Next, we compute $\\vv{d_z} = \\vv{d_x}\\times\\vv{PR}$, and scale it to have a norm of 1. It is perpendicular to $\\Pi$ because it is perpendicular to two non-parallel vectors in it.\n\\item Finally, we find $\\vv{d_y}$ as $\\crossv{d_z}{d_x}$.\n\\end{enumerate} \n\nThis gives the following code. Method \\lstinline|pos2d()| gives the position on the plane as a 2D point, and method \\lstinline|pos3d()| gives the position and height as a 3D point (so in a way this structure represents a change of coordinate system).\n\\begin{lstlisting}\nstruct coords {\n    p3 o, dx, dy, dz;\n    // From three points P,Q,R on the plane:\n    // build an orthonormal 3D basis\n    coords(p3 p, p3 q, p3 r) : o(p) {\n        dx = unit(q-p);\n        dz = unit(dx*(r-p));\n        dy = dz*dx;\n    }\n    // From four points P,Q,R,S:\n    // take directions PQ, PR, PS as is\n    coords(p3 p, p3 q, p3 r, p3 s) :\n        o(p), dx(q-p), dy(r-p), dz(s-p) {}\n    \n    pt pos2d(p3 p) {return {(p-o)|dx, (p-o)|dy};}\n    p3 pos3d(p3 p) {return {(p-o)|dx, (p-o)|dy, (p-o)|dz};}\n};\n\\end{lstlisting}\n\n\\begin{mathy}\nThe second constructor allows us specify points $P,Q,R,S$ and choose $\\vv{d_x}=\\vv{PQ}$, $\\vv{d_y}=\\vv{PR}$ and $\\vv{d_z}=\\vv{PS}$ directly. This can be useful if we don't care that the 2D coordinate system $(\\vv{d_x},\\vv{d_y})$ is not orthonormal (perpendicular and of norm 1), because it allows us to keep using integer coordinates.\n\nIf $\\vv{d_x}$ and $\\vv{d_y}$ are not perpendicular or do not have a norm of 1, the computed distances and angles will not be correct. But if we are only interested in the relative positions of lines and points, and finding the intersections of lines or segments, then everything works fine. 
Computing the 2D convex hull of a set of points is an example of such a problem, because it only requires that the sign of $\\orient()$ is correct.\n\\end{mathy}\n", "meta": {"hexsha": "80c78e5c616f5a1d7d333445e4f2a0771f1e5546", "size": 8115, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archives/codelibraries/cp-geo-master/3d/plane.tex", "max_stars_repo_name": "cbarnson/UVa", "max_stars_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_stars_repo_licenses": ["Unlicense", "MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-09-07T17:00:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-05T02:08:35.000Z", "max_issues_repo_path": "archives/codelibraries/cp-geo-master/3d/plane.tex", "max_issues_repo_name": "cbarnson/UVa", "max_issues_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_issues_repo_licenses": ["Unlicense", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archives/codelibraries/cp-geo-master/3d/plane.tex", "max_forks_repo_name": "cbarnson/UVa", "max_forks_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_forks_repo_licenses": ["Unlicense", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.3541666667, "max_line_length": 437, "alphanum_fraction": 0.6762784966, "num_tokens": 2537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850402140659, "lm_q2_score": 0.8333245911726382, "lm_q1q2_score": 0.5881480300921504}}
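As a standalone sanity check of the side/projection/reflection formulas in this section (an editorial addition; the library above is C++, but a few lines of Python with numpy are enough to verify the math on a plane whose answers we know):

\begin{lstlisting}
import numpy as np

def make_plane(p, q, r):
    # Normal from three non-collinear points, then d = n . p
    n = np.cross(q - p, r - p)
    return n, np.dot(n, p)

def side(n, d, pt): return np.dot(n, pt) - d
def proj(n, d, pt): return pt - n * side(n, d, pt) / np.dot(n, n)
def refl(n, d, pt): return pt - 2 * n * side(n, d, pt) / np.dot(n, n)

p = np.array([0.0, 0.0, 0.0])
q = np.array([1.0, 0.0, 0.0])
r = np.array([0.0, 1.0, 0.0])   # P, Q, R span the plane z = 0
n, d = make_plane(p, q, r)
pt = np.array([2.0, 3.0, 5.0])
print(proj(n, d, pt))           # [2. 3. 0.]
print(refl(n, d, pt))           # [2. 3. -5.]
\end{lstlisting}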
{"text": "\\documentclass[]{BasiliskReportMemo}\n\n\\newcommand{\\submiterInstitute}{Autonomous Vehicle Simulation (AVS) Laboratory,\\\\ University of Colorado}\n\n\\newcommand{\\ModuleName}{RWJitterBackSub}\n\\newcommand{\\subject}{Derivation of Reaction Wheel Jitter Back-Substitution}\n\\newcommand{\\status}{Draft}\n\\newcommand{\\preparer}{J. Alcorn, C. Allard}\n\\newcommand{\\summary}{This document derives the back substitution method for reaction wheel jitter to conform with Basilisk's dynamics architecture.}\n\n\n\\begin{document}\n\n\n\\makeCover\n\n\n%\n%\tenter the revision documentation here\n%\tto add more lines, copy the table entry and the \\hline, and paste after the current entry.\n%\n\\pagestyle{empty}\n{\\renewcommand{\\arraystretch}{1.1}\n\\noindent\n\\begin{longtable}{|p{0.5in}|p{4.0in}|p{1.64in}|}\n\\hline\n{\\bfseries Rev}: & {\\bfseries Change Description} & {\\bfseries By} \\\\\n\\hline\nDraft & Initial draft. & J. Alcorn, C. Allard \\\\\n\\hline\n\n\\end{longtable}\n}\n\n\\newpage\n\\setcounter{page}{1}\n\\pagestyle{fancy}\n\n\\tableofcontents\n~\\\\ \\hrule ~\\\\\n\n\\section{Introduction}\nThe goal is to manipulate the reaction wheel equations of motion to conform to the following form\n\\begin{equation}\n\\begin{bmatrix}\n[A] & [B]\\\\\n[C] & [D]\n\\end{bmatrix} \\begin{bmatrix}\n\\ddot{\\bm r}_{B/N}\\\\\n\\dot{\\bm\\omega}_{\\cal B/N}\n\\end{bmatrix} = \\begin{bmatrix}\n\\bm v_{\\text{trans}}\\\\\n\\bm v_{\\text{rot}}\n\\end{bmatrix}\n\\end{equation}\nSolving the system-of-equations by\n\\begin{equation}\n\\dot{\\bm\\omega}_{\\cal B/N} = \\Big([D] - [C]][A]^{-1}[B]\\Big)^{-1}(\\bm v_{\\text{rot}} - [C][A]^{-1}\\bm v_{\\text{trans}})\n\\end{equation}\n\\begin{equation}\n\\ddot{\\bm r}_{B/N} = [A]^{-1} (\\bm v_{\\text{trans}} - [B]\\dot{\\bm\\omega}_{\\cal B/N})\n\\end{equation}\n\n\n\\section{Balanced Reaction Wheel Back-Substitution}\n\n\\subsection{Equations of Motion}\nThe translational equation of motion is not coupled with $\\dot{\\Omega}$ as seen in the equation below.\n\\begin{equation}\nm_{\\text{sc}} [I_{3 \\times 3}]\\ddot{\\bm r}_{B/N}\n-m_{\\text{sc}} [\\tilde{\\bm{c}}] \\dot{\\bm\\omega}_{\\cal B/N} \n= \\bm F_{\\text{ext}}- 2 m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm c'\n-m_{\\text{sc}} [\\tilde{\\bm\\omega}_{\\cal B/N}][\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c}\n\\label{eq:Rbddot35}\n\\end{equation}\nThe rotational equation of motion includes $\\dot{\\Omega}$ terms, and is thus coupled with wheel motion as seen below.\n\\begin{multline}\nm_{\\text{sc}}[\\tilde{\\bm{c}}] \\ddot{\\bm r}_{B/N}\n+[I_{\\text{sc},B}] \\dot{\\bm\\omega}_{\\cal B/N}\n+\\sum_{i=1}^{N}J_{\\text{s}_i} \\hat{\\bm g}_{\\text{s}_i}\\dot{\\Omega}_i\n= -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] [I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} - \\sum_{i=1}^{N}(\\bm\\omega_{\\cal B/N}\\times J_{\\text{s}_i}\\Omega_i\\hat{\\bm g}_{\\text{s}_i})\n+ \\bm{L}_B\n\\label{eq:Final7}\n\\end{multline}\nThe motor torque equation can be seen below.\n\\begin{equation}\n\\dot{\\Omega}_i = \\frac{u_{\\text{s}_i}}{J_{\\text{s}_i}}-\\hat{\\bm g}_{\\text{s}_i}^T \\dot{\\bm{\\omega}}_{\\cal B/N}\n\\label{eq:mtorque}\n\\end{equation}\n\n\\subsection{Back-Substitution Derivation}\nPlugging Eq.~\\eqref{eq:mtorque} into Eq.~\\eqref{eq:Final7}\n\\begin{multline}\nm_{\\text{sc}}[\\tilde{\\bm{c}}] \\ddot{\\bm r}_{B/N}\n+([I_{\\text{sc},B}]-\\sum_{i=1}^{N}J_{\\text{s}_i} \\hat{\\bm g}_{\\text{s}_i} \\hat{\\bm g}_{\\text{s}_i}^T) \\dot{\\bm\\omega}_{\\cal B/N}\n= -[\\bm{\\tilde{\\omega}}_{\\cal B/N}] 
[I_{\\text{sc},B}] \\bm\\omega_{\\cal B/N} - \\sum_{i=1}^{N}(\\hat{\\bm g}_{\\text{s}_i} u_{\\text{s}_i} + \\bm\\omega_{\\cal B/N}\\times J_{\\text{s}_i}\\Omega_i\\hat{\\bm g}_{\\text{s}_i})\n\\\\- [I'_{\\text{sc},B}] \\bm\\omega_{\\cal B/N}\n+ \\bm{L}_B\n\\label{eq:Final72}\n\\end{multline}\n\n\n\\subsection{Back-Substitution Contribution Matrices}\nThe following can be defined:\n\\begin{equation}\n[A_\\text{contr}] = [0_{3 \\times 3}]\n\\end{equation}\n\n\\begin{equation}\n[B_\\text{contr}] = [0_{3 \\times 3}]\n\\end{equation}\n\n\\begin{equation}\n[C_\\text{contr}] = [0_{3 \\times 3}]\n\\end{equation}\n\n\\begin{equation}\n[D_\\text{contr}] = -\\sum_{i=1}^{N}J_{\\text{s}_i} \\hat{\\bm g}_{\\text{s}_i} \\hat{\\bm g}_{\\text{s}_i}^T\n\\end{equation}\n\n\\begin{equation}\n\\bm v_{\\text{trans,contr}} = \\bm 0\n\\end{equation}\n\n\\begin{equation}\n\\bm v_{\\text{rot,contr}} =  - \\sum_{i=1}^{N}(\\hat{\\bm g}_{\\text{s}_i} u_{\\text{s}_i} + \\bm\\omega_{\\cal B/N}\\times J_{\\text{s}_i}\\Omega_i\\hat{\\bm g}_{\\text{s}_i})\n\\end{equation}\n\n\\section{Imbalanced Reaction Wheel Back-Substitution}\n\\subsection{Equations of Motion}\nThe translational equation of motion is\n\\begin{equation}\n\t\\ddot{\\bm r}_{B/N} -[\\tilde{\\bm{c}}]\\dot{\\bm\\omega}_{\\cal B/N} +\\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\hat{\\bm{w}}_{3_i}\\dot{\\Omega}_i = \\ddot{\\bm r}_{C/N}   - 2[\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c}'-[\\tilde{\\bm\\omega}_{\\cal B/N}][\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c} + \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\Omega_i^2\\hat{\\bm{w}}_{2_i}\n\t\\label{eq:rBddot}\n\\end{equation}\nThe rotational equation of motion is\n\\begin{equation}\n\t\\begin{split}\n\t\tm_{\\text{sc}}[\\tilde{\\bm c}]\\ddot{\\bm r}_{B/N} +& [I_{\\text{sc},B}]\\dot{\\bm\\omega}_{\\cal B/N} +\\sum\\limits_{i=1}^{N}\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)\\dot{\\Omega}_i\n\t\t\\\\=& \n\t\t\\sum\\limits_{i=1}^{N}\\Big[ m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]d_i \\Omega_i^2\\hat{\\bm{w}}_{2_i}-[I_{\\text{rw}_i,W_{c_i}}]'\\Omega_i \\hat{\\bm{g}}_{s_i} -[\\tilde{\\bm\\omega}_{\\cal B/N}]\\Big([I_{\\text{rw}_i,W_{c_i}}]\\Omega_i \\hat{\\bm{g}}_{s_i}+ m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\bm{r}'_{W_{c_i}/B}\\Big)\\Big]\n\t\t\\\\&  -[\\tilde{\\bm\\omega}_{\\cal B/N}][I_{\\text{sc},B}]\\bm\\omega_{\\cal B/N}-  [I_{\\text{sc},B}]'\\bm\\omega_{\\cal B/N} + \\bm{L}_B\n\t\\end{split}\n\t\\label{eq:rotationalEOM}\n\\end{equation}\nThe motor torque equation is (note that $J_{12_i} = J_{23_i} = 0$)\n\\begin{multline}\n\t\\big[m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T\\big]\\ddot{\\bm{r}}_{B/N} + \\big[(J_{11_i} + m_{\\text{rw}_i} d_i^2)\\hat{\\bm g}_{\\text{s}_i}^T  + J_{13_i}\\hat{\\bm w}_{3_i}^T -m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T [\\tilde{\\bm r}_{W_i/B}]\\big]\\dot{\\bm\\omega}_{\\cal B/N} + \\big[J_{11_i} + m_{\\text{rw}_i} d_i^2\\big]\\dot{\\Omega}_i\n\t\\\\=   - J_{13_i} \\omega_{w_{2_i}}\\omega_{s_i}  \n\t+ \\omega_{w_{2_i}} \\omega_{w_{3_i}} (J_{22_i} - J_{33_i} - m_{\\text{rw}_i} d_i^2\n\t) \n\t -m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm r_{W_i/B} + u_{s_i}\n\t\\label{eq:motorTorqueFinal}\n\\end{multline}\n\n\n\\subsection{Derivation of Back-Substitution}\n\nSolve motor torque equation for $\\dot{\\Omega}_i$ in terms of $\\ddot{\\bm{r}}_{B/N}$ and 
$\\dot{\\bm\\omega}_{\\cal B/N}$\n\\begin{multline}\n\\dot{\\Omega}_i\n= \\frac{-m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T }{J_{11_i} + m_{\\text{rw}_i} d_i^2}\\ddot{\\bm{r}}_{B/N} + \\frac{-\\big[(J_{11_i} + m_{\\text{rw}_i} d_i^2)\\hat{\\bm g}_{\\text{s}_i}^T  + J_{13_i}\\hat{\\bm w}_{3_i}^T -m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T [\\tilde{\\bm r}_{W_i/B}]\\big]}{J_{11_i} + m_{\\text{rw}_i} d_i^2}\\dot{\\bm\\omega}_{\\cal B/N} \n\\\\\n+\\frac{1}{J_{11_i} + m_{\\text{rw}_i} d_i^2}\\left[\\omega_{w_{2_i}} \\omega_{w_{3_i}} (J_{22_i} - J_{33_i} - m_{\\text{rw}_i} d_i^2\n)-J_{13_i} \\omega_{w_{2_i}}\\omega_{s_i} -m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm r_{W_i/B} + u_{s_i} \\right]\n\\label{eq:motorTorqueFinal2}\n\\end{multline}\n\n\\begin{equation}\n\\bm{a}_{\\Omega_i} = -\\frac{m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i} }{J_{11_i} + m_{\\text{rw}_i} d_i^2}\n\\end{equation}\n\n\\begin{equation}\n\\bm{b}_{\\Omega_i} = -\\frac{(J_{11_i} + m_{\\text{rw}_i} d_i^2)\\hat{\\bm g}_{\\text{s}_i}  + J_{13_i}\\hat{\\bm w}_{3_i} +m_{\\text{rw}_i} d_i  [\\tilde{\\bm r}_{W_i/B}]\\hat{\\bm{w}}_{3_i}}{J_{11_i} + m_{\\text{rw}_i} d_i^2}\n\\end{equation}\n\n\\begin{equation}\nc_{\\Omega_i} = \\frac{1}{J_{11_i} + m_{\\text{rw}_i} d_i^2}\\left[\\omega_{w_{2_i}} \\omega_{w_{3_i}} (J_{22_i} - J_{33_i} - m_{\\text{rw}_i} d_i^2\n) - J_{13_i} \\omega_{w_{2_i}}\\omega_{s_i} -m_{\\text{rw}_i} d_i \\hat{\\bm{w}}_{3_i}^T [\\tilde{\\bm\\omega}_{\\cal B/N}] [\\tilde{\\bm\\omega}_{\\cal B/N}] \\bm r_{W_i/B} + u_{s_i} \\right]\n\\end{equation}\n\n\n\n\\begin{equation}\n\\dot{\\Omega}_i = \\bm{a}_{\\Omega_i}^T\\ddot{\\bm r}_{B/N} + \\bm{b}_{\\Omega_i}^T\\dot{\\bm\\omega}_{\\cal B/N} + c_{\\Omega_i}\n\\end{equation}\nPlugging the equation above into Eq.~\\eqref{eq:rBddot} and multiplying both sides by $m_\\text{sc}$, (plug $\\dot{\\Omega}_i$ into translation)\n\\begin{multline}\n\\left[ [I_{3\\times3}] +\\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\hat{\\bm{w}}_{3_i}\\bm{a}_{\\Omega_i}^T \\right] \\ddot{\\bm r}_{B/N} +\\left[ -[\\tilde{\\bm{c}}] + \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\hat{\\bm{w}}_{3_i}\\bm{b}_{\\Omega_i}^T \\right] \\dot{\\bm\\omega}_{\\cal B/N} \n\\\\= \\ddot{\\bm r}_{C/N}   - 2[\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c}'-[\\tilde{\\bm\\omega}_{\\cal B/N}][\\tilde{\\bm\\omega}_{\\cal B/N}]\\bm{c} + \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\left(\\Omega_i^2\\hat{\\bm{w}}_{2_i}-c_{\\Omega_i}\\hat{\\bm{w}}_{3_i}\\right)\n\\end{multline}\nMoving on to rotation, (plug $\\dot{\\Omega}_i$ into rotation)\n\\begin{multline}\n\\left[ m_{\\text{sc}}[\\tilde{\\bm c}]+ \\sum\\limits_{i=1}^{N}\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)\\bm{a}_{\\Omega_i}^T \\right] \\ddot{\\bm r}_{B/N}\n\\\\ \n+ \\left[ [I_{\\text{sc},B}] + \\sum\\limits_{i=1}^{N}\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)\\bm{b}_{\\Omega_i}^T \\right] \\dot{\\bm\\omega}_{\\cal B/N} \n\\\\= \n\\sum\\limits_{i=1}^{N}\\Big[ m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]d_i \\Omega_i^2\\hat{\\bm{w}}_{2_i}-[I_{\\text{rw}_i,W_{c_i}}]'\\Omega_i \\hat{\\bm{g}}_{s_i} -[\\tilde{\\bm\\omega}_{\\cal B/N}]\\Big([I_{\\text{rw}_i,W_{c_i}}]\\Omega_i \\hat{\\bm{g}}_{s_i}+ 
m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\bm{r}'_{W_{c_i}/B}\\Big) \\\\ \n-\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)c_{\\Omega_i}\\Big]\n\\\\  -[\\tilde{\\bm\\omega}_{\\cal B/N}][I_{\\text{sc},B}]\\bm\\omega_{\\cal B/N}-  [I_{\\text{sc},B}]'\\bm\\omega_{\\cal B/N} + \\bm{L}_B\n\\end{multline}\nNow we have two equations containing $\\ddot{\\bm{r}}_{B/N}$ and $\\dot{\\bm\\omega}_{\\cal B/N}$\n\n\\subsection{Back-Substitution Contribution Matrices}\n\n\\begin{equation}\n[A_\\text{contr}] = \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\hat{\\bm{w}}_{3_i}\\bm{a}_{\\Omega_i}^T\n\\end{equation}\n\n\\begin{equation}\n[B_\\text{contr}] = \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\hat{\\bm{w}}_{3_i}\\bm{b}_{\\Omega_i}^T\n\\end{equation}\n\n\\begin{equation}\n[C_\\text{contr}] = \\sum\\limits_{i=1}^{N}\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)\\bm{a}_{\\Omega_i}^T\n\\end{equation}\n\n\\begin{equation}\n[D_\\text{contr}] = \\sum\\limits_{i=1}^{N}\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)\\bm{b}_{\\Omega_i}^T\n\\end{equation}\n\n\\begin{equation}\n\\bm{v}_\\text{trans,contr} = \\frac{1}{m_{\\text{sc}}}\\sum\\limits_{i=1}^{N}m_{\\text{rw}_i}d_i\\left(\\Omega_i^2\\hat{\\bm{w}}_{2_i}-c_{\\Omega_i}\\hat{\\bm{w}}_{3_i}\\right)\n\\end{equation}\n\n\\begin{multline}\n\\bm{v}_\\text{rot,contr} = \\sum\\limits_{i=1}^{N} \\Big[ m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]d_i \\Omega_i^2\\hat{\\bm{w}}_{2_i}-[I_{\\text{rw}_i,W_{c_i}}]'\\Omega_i \\hat{\\bm{g}}_{s_i} -[\\tilde{\\bm\\omega}_{\\cal B/N}]\\Big([I_{\\text{rw}_i,W_{c_i}}]\\Omega_i \\hat{\\bm{g}}_{s_i}+ m_{\\text{rw}_i}[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\bm{r}'_{W_{c_i}/B}\\Big) \\\\ \n-\\Big([I_{\\text{rw}_i,W_{c_i}}]\\hat{\\bm{g}}_{s_i} + m_{\\text{rw}_i}d_i[\\tilde{\\bm{r}}_{W_{c_i}/B}]\\hat{\\bm{w}}_{3_i}\\Big)c_{\\Omega_i} \\Big]\n\\end{multline}\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "2f2bd98f47997d5e54ed7e4b4241e25cb9b4f957", "size": 11582, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/simulation/dynamics/reactionWheels/_Documentation/RWJitterBackSubstitution/Basilisk-RWJitterBackSub-20161221.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/simulation/dynamics/reactionWheels/_Documentation/RWJitterBackSubstitution/Basilisk-RWJitterBackSub-20161221.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/simulation/dynamics/reactionWheels/_Documentation/RWJitterBackSubstitution/Basilisk-RWJitterBackSub-20161221.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.0762711864, "max_line_length": 383, "alphanum_fraction": 0.6268347436, "num_tokens": 5277, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245994514082, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5881480256210784}}
{"text": "\\documentclass{notes}\n\n  \\title{Heap Sort}\n  \\author{ian.mcloughlin@gmit.ie}\n  \\date{\\today}\n\n\\begin{document}\n\n  \n\n  \\section*{Sorting}  \n    \\[l = (l_0, l_1, l_2, \\ldots, l_{n-1})\\]\n    \\[l_0 \\leq l_1 \\leq \\cdots \\leq l_{n-1}\\]\n\n  \\section*{On a tree}\n\n  \\begin{center}\n    \\begin{forest}\n      for tree={circle, draw}\n      [\\(l_0\\)\n        [\\(l_1\\)\n          [\\(l_3\\)\n            [\\(l_7\\)]\n            [\\(l_8\\)]\n          ]\n          [\\(l_4\\)\n            [\\(l_9\\)]\n          ]\n        ]\n        [\\(l_2\\)\n          [\\(l_5\\)]\n          [\\(l_6\\)]\n        ]\n      ]\n    \\end{forest}\n  \\end{center}\n    \n  \\section*{Heaps}\n  Almost complete binary tree with heap property.  \n    \\begin{description}\n      \\item[Max heap:] each parent bigger than children.\n      \\item[Min heap:] each parent smaller than children.\n    \\end{description}\n\n  \\section*{To min or max heap}\n    \\begin{enumerate}\n      \\item Start with last node, moving backwards.\n      \\item Compare node to children, swap if needed.\n      \\item Swap parent down tree until we have a heap.\n    \\end{enumerate}\n\n    \\section*{Example heap}\n    Sort ascending \\(\\rightarrow\\) use max heap.\n\n    \\begin{center}\n      \\begin{forest}\n        for tree={circle, draw}\n        [\\(1\\)\n          [\\(3\\)\n            [\\(2\\)\n              [\\(7\\)]\n              [\\(5\\)]\n            ]\n            [\\(8\\)\n              [\\(9\\)]\n            ]\n          ]\n          [\\(10\\)\n            [\\(4\\)]\n            [\\(6\\)]\n          ]\n        ]\n      \\end{forest}\n    \\end{center}\n    \n    Last five nodes have no children.\n    Sixth-last has one child, is smaller so swap.\n    Now have a heap from sixth-last.\n    Same for seventh-last: swap 2 for 7.\n    Third node is a heap.\n    Second node swaps 9 for 3, and filters down swapping 3 for 8.\n    Finally, the root is swapped with 10 and then 6.\n\n    \\begin{center}\n      \\begin{forest}\n        for tree={circle, draw}\n        [\\(10\\)\n          [\\(9\\)\n            [\\(7\\)\n              [\\(2\\)]\n              [\\(5\\)]\n            ]\n            [\\(8\\)\n              [\\(3\\)]\n            ]\n          ]\n          [\\(6\\)\n            [\\(4\\)]\n            [\\(1\\)]\n          ]\n        ]\n      \\end{forest}\n    \\end{center}\n  \n  \\section*{As an array}\n    \n  \\begin{center}\n    \\begin{forest}\n      for tree={circle, draw}\n      [\\(l_i\\)\n        [\\(l_{2i + 1}\\)]\n        [\\(l_{2i + 2}\\)]\n      ]\n    \\end{forest}\n  \\end{center}\n\n  \\begin{align*}\n  &(&l_0&,&l_1&,&l_2&,&l_3&,&l_4&,&l_5&,&l_6&,&l_7&,&l_8&,&l_9\\ &)&\\\\\n  &(&1&,&3&,&10&,&2&,&{\\color{gmitred}8}&,&4&,&6&,&7&,&5&,&{\\color{gmitred}9}\\ &)&\\\\\n  &(&1&,&3&,&10&,&{\\color{gmitred}2}&,&9&,&4&,&6&,&{\\color{gmitred}7}&,&{\\color{gmitred}5}&,&8\\ &)&\\\\\n  &(&1&,&3&,&{\\color{gmitred}10}&,&7&,&9&,&{\\color{gmitred}4}&,&{\\color{gmitred}6}&,&2&,&5&,&8\\ &)&\\\\\n  &(&1&,&{\\color{gmitred}3}&,&10&,&{\\color{gmitred}7}&,&{\\color{gmitred}9}&,&4&,&6&,&2&,&5&,&8\\ &)&\\\\\n  &(&1&,&9&,&10&,&7&,&{\\color{gmitred}3}&,&4&,&6&,&2&,&5&,&{\\color{gmitred}8}\\ &)&\\\\\n  &(&{\\color{gmitred}1}&,&{\\color{gmitred}9}&,&{\\color{gmitred}10}&,&7&,&8&,&4&,&6&,&2&,&5&,&3\\ &)&\\\\\n  &(&10&,&9&,&{\\color{gmitred}1}&,&7&,&8&,&{\\color{gmitred}4}&,&{\\color{gmitred}6}&,&2&,&5&,&3\\ &)&\\\\\n  &(&10&,&9&,&6&,&7&,&8&,&4&,&1&,&2&,&5&,&3\\ &)&\\\\\n  \\end{align*}\n  \n  
\\section*{Heap Sort}\n    \\begin{enumerate}\n      \\item Convert complete binary tree to heap.\n      \\item Swap root for last child, ignore last child.\n      \\item Repeat \\(n-1\\) times.\n    \\end{enumerate}\n\n    \\begin{align*}\n      &(&l_0&,&l_1&,&l_2&,&l_3&,&l_4&,&l_5&,&l_6&,&l_7&,&l_8&,&l_9\\ &)&\\\\\n      &(&10&,&9&,&6&,&7&,&8&,&4&,&1&,&2&,&5&,&3\\ &)&\\\\\n      &(&3&,&9&,&6&,&7&,&8&,&4&,&1&,&2&,&5&,&{\\color{gmitblue}10}\\ &)&\\\\\n      &(&9&,&8&,&6&,&7&,&3&,&4&,&1&,&2&,&5&,&{\\color{gmitblue}10}\\ &)&\\\\\n      &(&5&,&8&,&6&,&7&,&3&,&4&,&1&,&2&,&{\\color{gmitblue}9}&,&{\\color{gmitblue}10}\\ &)&\\\\\n      &(&8&,&7&,&6&,&5&,&3&,&4&,&1&,&2&,&{\\color{gmitblue}9}&,&{\\color{gmitblue}10}\\ &)&\\\\\n      &(&2&,&7&,&6&,&5&,&3&,&4&,&1&,&{\\color{gmitblue}8}&,&{\\color{gmitblue}9}&,&{\\color{gmitblue}10}\\ &)&\\\\\n    \\end{align*}\n\n  \\section*{Comparisons}\n\n    \\begin{description}\n      \\item[To heap:] \\(O(n \\log n)\\)\n      \\item[Replace root:] \\(O(\\log n)\\) but \\( O(n) \\) times.\n      \\item[Check heap:] \\(O(n)\\) \n    \\end{description}\n\n  %\\bibliography{bibliography}\n\\end{document}", "meta": {"hexsha": "c4a470017ed09d5a747f6e14112215723fb350f9", "size": 4301, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "heapsort.tex", "max_stars_repo_name": "ianmcloughlin/latex-notes", "max_stars_repo_head_hexsha": "2ce8e4de828f7ff916d8e21d46ad610f62653acb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "heapsort.tex", "max_issues_repo_name": "ianmcloughlin/latex-notes", "max_issues_repo_head_hexsha": "2ce8e4de828f7ff916d8e21d46ad610f62653acb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "heapsort.tex", "max_forks_repo_name": "ianmcloughlin/latex-notes", "max_forks_repo_head_hexsha": "2ce8e4de828f7ff916d8e21d46ad610f62653acb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.7483870968, "max_line_length": 108, "alphanum_fraction": 0.4447802837, "num_tokens": 1717, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8333245911726382, "lm_q1q2_score": 0.5881480197780465}}
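For reference, here is a compact Python sketch of the whole procedure (an editorial addition, not from the notes), using the array layout above in which the children of index $i$ sit at $2i+1$ and $2i+2$:

\begin{lstlisting}
def sift_down(l, i, n):
    # Swap l[i] down until the subtree rooted at i is a max heap,
    # considering only the first n elements.
    while 2 * i + 1 < n:
        child = 2 * i + 1                    # left child
        if child + 1 < n and l[child + 1] > l[child]:
            child += 1                       # right child is bigger
        if l[i] >= l[child]:
            break
        l[i], l[child] = l[child], l[i]
        i = child

def heap_sort(l):
    n = len(l)
    for i in range(n // 2 - 1, -1, -1):      # heapify: last parent backwards
        sift_down(l, i, n)
    for end in range(n - 1, 0, -1):          # swap root for last child,
        l[0], l[end] = l[end], l[0]          # ignore it, restore the heap
        sift_down(l, 0, end)

l = [1, 3, 10, 2, 8, 4, 6, 7, 5, 9]          # the example list above
heap_sort(l)
print(l)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
\end{lstlisting}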
{"text": "\\documentclass[]{article}\n\n\\usepackage{graphicx}\n\n\\usepackage[margin=1in]{geometry}\n\n\\setlength\\parindent{0pt}\n\n\\usepackage{physics}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\usepackage{listings}\n\n\\usepackage{enumitem}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\renewcommand*{\\thesection}{Problem \\arabic{section}}\n\\renewcommand*{\\thesubsection}{\\arabic{section}\\alph{subsection})}\n\\renewcommand*{\\thesubsubsection}{\\quad \\roman{subsubsection})}\n\n%Custom Commands\n\\newcommand{\\Rel}{\\mathcal{R}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\toI}{\\xrightarrow{\\textsf{\\tiny I}}}\n\\newcommand{\\toS}{\\xrightarrow{\\textsf{\\tiny S}}}\n\\newcommand{\\toB}{\\xrightarrow{\\textsf{\\tiny B}}}\n\n\\newcommand{\\divisible}{ \\ \\vdots \\ }\n\n\n%opening\n\n\\title{MATH 5301 Elementary Analysis - Homework 3}\n\n\\author{Jonas Wagner}\n\n\\date{2021, September 17}\n\n\n\n\\begin{document}\n\n\\maketitle\n\n% Problem 1\n\\section{}\nLet $X$ denote the universal set. Two subsets $A$ and $B$ are said to have \nthe same cardinality if there is a bijection $f : A \\to B$.\nNotation: $\\abs*{A} = \\abs*{B}$.\n\n\\subsection{\n\tProve that $\\abs{A} = \\abs{B}$ is an equivalence relation on the power set of $X$\n}\n\nThe relation $\\Rel = ``\\abs{A} = \\abs{B}''$\n\\footnote{using $\\Rel$ for simplicyity/reusability} \nis defined as: \n$$\\Rel = ``\\abs{A} = \\abs{B}'' = \\{(A,B) : \\exists (f : A \\xrightarrow{\\textsf{\\tiny B}} B)\\}$$\n\nEquivilence can be demonstrated by proven by demonstrating: \n(i) Reflexivity, (ii) Symetry, and (iii) Transivity\n\n\n\\subsubsection{Reflective:}\n\\begin{align*}\n\tA \\Rel A\n\t\t&= \\{(A,A) : \\exists (f : A \\toB A)\\}\n\\end{align*}\nSince $f : A \\toB A = {a \\in A}$ is true $\\forall A \\in 2^X$, $\\R$ is Reflective.\n\n\\subsubsection{Symetric:}\n\\begin{align*}\n\tA \\Rel B \n\t\t&\\implies B \\Rel A\\\\\n\t\\{(A,B) : \\exists (f : A \\xrightarrow{\\textsf{\\tiny B}} B)\\}\n\t\t&\\implies \\{(B,A) : \\exists (f : B \\xrightarrow{\\textsf{\\tiny B}} A)\\}\n\\end{align*}\nSince $f : A \\toB B \\implies g : B \\toB A = f^{-1}$ is true $\\forall A,B \\in 2^X$, \n$\\R$ is Symetric.\\\\\n(Essentially if $f$ is bijective one way, \n$f^{-1}$ is bijective for the other way)\n\n\\subsubsection{Transative:}\n\\begin{align*}\n\t(A \\Rel B) \n\t\t\\land (B \\Rel C) \n\t\t&\\implies A \\Rel C\\\\\n\t(\\{(A,B) : \\exists (f : A \\xrightarrow{\\textsf{\\tiny B}} B)\\})\n\t\t\\land \\{(B,C) : \\exists (g : B \\xrightarrow{\\textsf{\\tiny B}} C)\\}\n\t\t&\\implies \\{(A,C) : \\exists (h : A \\xrightarrow{\\textsf{\\tiny B}} C)\\}\n\\end{align*}\nSince $(f : A \\toB B) \\land (g : B \\toB C) \n\\implies (h : A \\toB C = A \\xrightarrow{f} B \\xrightarrow{g} C)$ \nis true $\\forall A,B,C \\in 2^X$, $\\Rel$ is Transative.\\\\\n\nTherefore $\\Rel = ``\\abs{A} = \\abs{B}''$ is an equivalence relation over $2^X$.\n\n\\newpage\n\\subsection{\n\tIs it true that if $\\abs{A_1}=\\abs{B_1}$ and $\\abs{A_2}=\\abs{B_2}$ \n\tthen $\\abs{A_1 \\cup A_2} = \\abs{B_1 \\cup B_2}$?\n}\n\\begin{align*}\n\t(\\abs{A_1} = \\abs{B_1}) \\land (\\abs{A_2} = \\abs{B_2})\n\t\t&\\implies \\abs{A_1 \\cup A_2} = \\abs{B_1 \\cup B_2}\\\\\n\t(A_1 \\Rel B_1) \\land (A_2 \\Rel B_2)\n\t\t&\\implies (A_1 \\cup A_2) \\Rel (B_1 \\cup B_2)\\\\\n\t\\{(A_1,B_1) : \\exists (f_1 : A_1 \\xrightarrow{\\textsf{\\tiny 
B}} B_1)\\}\n\t\t\\land \\{(A_2,B_2) : \\exists (f_2 : A_2 \\xrightarrow{\\textsf{\\tiny B}} B_2)\\}\n\t\t&\\implies \\{((A_1 \\cup A_2), (B_1 \\cup B_2)) \n\t\t\t: \\exists (f : (A_1 \\cup A_2) \\xrightarrow{\\textsf{\\tiny B}} (B_1 \\cup B_2))\\}\n\\end{align*}\n\nThis itself is false, as in the case when \n\\begin{align*}\n\t(A_1 \\cap A_2 \\neq \\emptyset) \\land (B_1 \\cap B_2 = \\emptyset)\n\t\\implies (f : (A_1 \\cup A_2) \\toI (B_1 \\cup B_2))\n\t\\land (f^{-1} : (B_1 \\cup B_2) \\toS (B_1 \\cup B_2))\n\\end{align*}\nBut, $f$ is not surjective and $f^{-1}$ is not injective, so $f$ cannot be bijective.\n\n\\newpage\n% Problem 2\n\\section{}\nFinish the proof of the Cantor-Bernstein theorem:\nFor the sets $A$ and $B$, such that $\\abs{A} \\leq \\abs{B}$ and $\\abs{B} \\leq \\abs{A}$\ndefine $A_\\infty$ as the set of all elements of $A$ having infinite order,\n$A_0$ as the set of all elements of $A$ having even order,\nand $A_1$ the set of all elements of $A$ having odd order. Similarly for $B$.\n\nLet\n$$A,B : (\\abs*{A} \\leq \\abs*{B}) \\land (\\abs*{B} \\leq \\abs*{B})$$\n\nDefine\n\\begin{align*}\n\tA_\\infty &= \\{a \\in A : \\order*{a} = \\infty\\}\\\\\n\tA_0 &= \\{a \\in A : \\order*{a} \\divisible 2\\}\\\\\n\tA_1 &= \\{a \\in A : (\\order*{a} + 1) \\divisible 2\\}\\\\\n\tB_\\infty &= \\{b \\in B : \\order*{b} = \\infty\\}\\\\\n\tB_0 &= \\{b \\in B : \\order*{b} \\divisible 2 \\}\\\\\n\tB_1 &= \\{b \\in B : (\\order*{b} + 1) \\divisible 2\\}\\\\\n\\end{align*}\n\n\\subsection{Show that $\\abs{A_\\infty}=\\abs{B_\\infty}$.}\n\nSince,\n$$\\abs{A} \\leq \\abs{B} \\implies \\exists (f_{AB} : A \\toI B) \\therefore f_{AB}(A_\\infty) = B_\\infty$$\nSimilarly,\n$$\\abs{B} \\leq \\abs{A} \\implies \\exists (f_{BA} : B \\toI A) \\therefore f_{BA}(B_\\infty) = A_\\infty$$\nSince,\n$$(\\exists f_{AB} : A \\toI B) \\land (\\exists f_{BA} : B \\toI A) \n\\implies (\\exists f : A \\toB B)$$\nTherefore, since a bijective mapping exists,\n$$\\abs*{A_\\infty} = \\abs*{B_\\infty}$$\n\n\\subsection{Construct an injective mapping $A_1 \\rightarrow B_0$.}\n\nFrom the definition of $A_1$,\nit is known that $A_1$ contains all elements whose ancestors (for $f$ and $g$) are of odd order.\\\\\nSimilarly, from the definition of $B_0$ it is known that $B_0$ contains all elements whose \nancestors (for $f$ and $g$) are of even order.\\\\\nSince all elements of $B_0$ are of even order, then that implies that acording to the mappings\n$f$ and $g$, $$a \\in \\{a \\in A : \\exists b \\in B_0 : f(a) = b\\} \\implies a \\in A_1$$\ntherefore, $f$ is a direct mapping from $A_1$ to $B_0$ \n(which is technically bijective so it is also injective)\n\n\\subsection{Show that this mapping is also surjective.}\nSince, from the previous argument, $f$ is bijective so it is also surjective.\n\n\n\\newpage\n% Problem 3\n\\section{}\nSet $A$ is called countable if $\\abs{A} \\leq \\abs{\\N}$.\nProve that the following sets are countable.\n\n\\subsection{Set $\\Z_+$ of all non-negative integer numbers}\nDefine $f_0 : \\N \\to \\Z_+$, \n\\begin{displaymath}\n\tf_0(x) := x - 1\n\\end{displaymath}\nClaim $f_0 : \\N \\toS \\Z_+$ (surjective).\nThis is true becouse $\\forall x \\in \\N \\ \\exists f_0(x) \\in \\Z_+$.\\\\\nWhen (i) $x \\in \\N = 1$, $f_0(x) = x - 1 = 0$ which is in $\\Z_+$.\\\\\nSimilarly, when (ii) $x \\in \\N > 1$, $f_0(x) = x - 1$ which is in $\\Z_+$.\\\\\n\nSince $f_0$ is in fact surjective, $\\Z_+$ is countable. 
(i.e)\n$$f_0 : \\N \\toS \\Z_+ \\implies \\abs{\\Z_+} \\leq \\abs{\\N}$$\n\n\\subsection{Set $2\\N$ of all even numbers}\nDefine $f : \\N to 2\\N$,\n\\begin{displaymath}\n\tf(x) := 2x\n\\end{displaymath}\nClaim $f : \\N \\toS 2\\N$ (surjective).\nThis is true becouse $\\forall x \\in \\N \\exists f(x) \\in 2\\N$.\\\\\nWhen $x \\in \\N$, $f(x) = 2 x$ which is in $2 \\N$.\\\\\n\nSince $f$ is surjective, $2\\N$ is countable. (i.e)\n$$f : \\N \\toS 2\\N \\implies \\abs{2\\N} \\leq \\abs{\\N}$$\n\n\\subsection{Set $\\Z^2$ of all ordered pairs of integer numbers}\nDefine $g : \\N \\to \\Z^2$,\n\\begin{displaymath}\n\tg(x) :=\n\t\\begin{cases}\n\t\t(0,0)\t& x = 1\\\\\n\t\t(1,0)\t& x = 2\\\\\n\t\t(1,1)\t& x = 3\\\\\n\t\t(0,1)\t& x = 4\\\\\n\t\t(-1,1)\t& x = 5\\\\\n\t\t(-1,0)\t& x = 6\\\\\n\t\t(-1,-1)\t& x = 7\\\\\n\t\t(0,-1)\t& x = 8\\\\\n\t\t(1,-1)\t& x = 9\\\\\n\t\t(2,-1)\t& x = 10\\\\\n\t\t(2,0)\t& x = 11\\\\\n\t\t\\quad \\vdots & \\quad \\vdots\n\t\\end{cases}\n\\end{displaymath}\nClaim $g : \\N \\toS \\Z^2$ (surjective).\nThis is true becouse $\\forall x \\in \\N \\exists g(x) \\in \\Z^2$ as is \nclearly evident in the spiraling mapping discribed by $g$ mapping to all $\\Z^2$.\\\\\n\nSince $g$ is surjective, $\\Z^2$ is countable. (i.e)\n$$g : \\N \\toS \\Z^2 \\implies \\abs{\\Z^2} \\leq \\abs{\\N}$$\n\n\\newpage\n\\subsection{Set $\\Q$ of all rational numbers}\nDefine $h : \\N \\to \\Q$,\n\\begin{displaymath}\n\th(x) := \n\t\\begin{cases}\n\t\t0\t& x = 1\\\\\n\t\t1\t& x = 2\\\\\n\t\t-1\t& x = 3\\\\\n\t\t2\t& x = 4\\\\\n\t\t\\frac{3}{2}\t& x = 5\\\\\n\t\t\\frac{1}{2} & x = 6\\\\\n\t\t\\frac{-1}{2}& x = 7\\\\\n\t\t\\frac{-3}{2}& x = 8\\\\\n\t\t-2\t& x = 9\\\\\n\t\t3\t& x = 10\\\\\n\t\t\\frac{8}{3} & x = 11\\\\\n\t\t\\frac{5}{2} & x = 12\\\\\n\t\t\\frac{7}{3} & x = 13\\\\\n\t\t\\frac{5}{3} & x = 14\\\\\n\t\t\\frac{4}{3} & x = 15\\\\\n\t\t\\frac{2}{3} & x = 16\\\\\n\t\t\\frac{1}{3} & x = 17\\\\\n\t\t\\frac{-1}{3} & x = 18\\\\\n\t\t\\quad \\vdots & \\quad \\vdots\n\t\\end{cases}\n\\end{displaymath}\nClaim $h : \\N \\toS \\Q$ (surjective).\nThis is true becouse $\\forall x \\in \\N \\exists h(x) \\in \\Q$ as is \nclearly evident in the spiraling mapping discribed by $h$ mapping to all $\\Q$.\\\\\n\nSince $h$ is surjective, $Q$ is countable. 
(i.e)\n$$g : \\N \\toS \\Q \\implies \\abs{Q} \\leq \\abs{\\N}$$\n\n\\newpage\n\\subsection{Set $\\Q^2$ of all ordered pairs of rational numbers}\nDefine $m : \\N \\to \\Q^2$,\n\\begin{displaymath}\n\tm(x) :=\n\t\\begin{cases}\n\t\t(0,0)\t&x=1\\\\\n\t\t(0,1)\t&x=2\\\\\n\t\t(1,0)\t&x=3\\\\\n\t\t(1,1)\t&x=4\\\\\n\t\t(0,-1)\t&x=5\\\\\n\t\t(1,-1)\t&x=6\\\\\n\t\t(-1,0)\t&x=7\\\\\n\t\t(-1,1)\t&x=8\\\\\n\t\t(-1,-1)\t&x=9\\\\\n\t\t(0,2)\t&x=10\\\\\n\t\t\\quad \\vdots &\\quad \\vdots\\\\\n\t\t(2,2)\t&\\quad \\vdots\\\\\n\t\t(0,\\frac{3}{2}) &\\quad \\vdots\\\\\n\t\t\\quad \\vdots\t&\\quad \\vdots\\\\\n\t\t(\\frac{3}{2},\\frac{3}{2}) &\\quad \\vdots\\\\\n\t\t(0,\\frac{1}{2}) &\\quad \\vdots\\\\\n\t\t\\quad \\vdots\t&\\quad \\vdots\\\\\n\t\t(\\frac{1}{2},\\frac{1}{2}) &\\quad \\vdots\\\\\n\t\t(0,\\frac{-1}{2}) &\\quad \\vdots\\\\\n\t\t\\quad \\vdots\t&\\quad \\vdots\\\\\n\t\t(\\frac{-1}{2},\\frac{-1}{2}) &\\quad \\vdots\\\\\n\t\t(0,\\frac{-3}{2}) &\\quad \\vdots\\\\\n\t\t\\quad \\vdots\t&\\quad \\vdots\\\\\n\t\t(\\frac{-3}{2},\\frac{-3}{2}) &\\quad \\vdots\\\\\n\t\t(0,3)\t&\\quad \\vdots\\\\\n\t\t\\quad \\vdots &\\quad \\vdots\\\\\n\t\t(3,3)\t&\\quad \\vdots\\\\\n\t\t(0, \\frac{8}{3})\t&\\quad \\vdots\\\\\n\t\t\\quad \\vdots\t&\\quad \\vdots\n\t\\end{cases}\n\\end{displaymath}\nNote: $m(x)$ continues to spiral folowing the pattern in a way that ultimently\ncombines the mappings of $f_0$, $g$, and $h$.\\\\\n\nClaim $m : \\N \\toS \\Q^2$ (surjective).\nThis is true becouse $\\forall x \\in \\N \\exists m(x) \\in \\Q^2$ as is \nclearly evident in the spiraling mapping discribed by $m$ mapping to all $\\Q^2$.\\\\\n\nSince $m$ is surjective, $Q^2$ is countable. (i.e)\n$$g : \\N \\toS \\Q^2 \\implies \\abs{\\Q^2} \\leq \\abs{\\N}$$\n\n\n\\newpage\n% Problem 4\n\\section{}\nProve that the following sets are countable.\n\n% 4a\n\\subsection{Set $\\mathbf{P}_5(\\Z)$ of all polynomials of degree 4 with integer coefficients}\n\nSet $\\mathbf{P}_5(\\Z)$ can be defined as:\n\\begin{displaymath}\n\t\\mathbf{P}_5(\\Z) := \\{ax^4 + bx^3 + cx^2 + dx + e : a,b,c,e,d \\in \\Z\\}\n\\end{displaymath}\n\nA direct mapping $f_0 : \\Z^5 \\to \\mathbf{P}_5(\\Z)$ can be created by either redefining \nor mapping into:\n\\begin{displaymath}\n\t\\mathbf{P}_5(\\Z^5) := \\{ax^4 + bx^3 + cx^2 + dx + e : (a,b,c,e,d) = \\in \\Z^5\\}\n\\end{displaymath}\nClearly $\\abs{\\mathbf{P}_5(\\Z)} = \\abs{\\mathbf{P}_5(\\Z^5)} = \\abs{\\Z^5}$ so if now if \n$\\Z^5$ can be shown to be countable, then $\\mathbf{P}_5(\\Z)$ would also be countable.\\\\\n\nDefine $f_1 : \\N \\to \\Z^5$,\n\\begin{displaymath}\n\tf_1 := \n\t\\begin{cases}\n\t\t(0,0,0,0,0)\t&x=1\\\\\n\t\t(0,0,0,0,1)\t&x=2\\\\\n\t\t(0,0,0,1,0)\t&x=3\\\\\n\t\t(0,0,0,1,1)\t&x=4\\\\\n\t\t\\quad \\vdots &\\quad \\vdots\\\\\n\t\t(1,1,1,1,1) &x=33\\\\\n\t\t(0,0,0,0,-1)&x=34\\\\\n\t\t(0,0,0,1,-1)&x=35\\\\\n\t\t(0,0,0,-1,0)&x=35\\\\\n\t\t(0,0,0,-1,1)&x=36\\\\\n\t\t(0,0,0,-1,-1)&x=37\\\\\n\t\t(0,0,-1,0,0)&x=38\\\\\n\t\t\\quad \\vdots &\\quad \\vdots\\\\\n\t\t(-1,-1,-1,-1,-1) &\\quad \\vdots\\\\\n\t\t(0,0,0,0,2) &\\quad \\vdots\\\\\n\t\t\\quad \\vdots &\\quad \\vdots\n\t\\end{cases}\n\\end{displaymath}\nClaim $f_1 : \\N \\toS \\Z^5$ (surjective).\nClearly,\n\\begin{displaymath}\n\t\\forall x \\in \\N \\exists f_1(x) \\in \\Z^5\n\\end{displaymath}\nTherefore, $f_1$ is surjective which implies $\\abs*{\\Z^5} \\leq \\abs*{\\N}$.\\\\\n\nSince $f_1$ is surjective and $f_0$ is bijective (and therefore surjective) the \nset $\\mathbf{P}_5(\\Z)$ is countable:\n\\begin{displaymath}\n\t(f_1 : \\N \\toS 
\\Z^5) \\land (f_0 : \\Z^5 \\toB \\mathbf{P}_5(\\Z))\n\t\\implies \\abs*{\\mathbf{P}_5(\\Z)} = \\abs*{\\Z^5} \\leq \\abs*{\\N} = \\aleph_0\n\\end{displaymath}\n\n% 4b\n\\newpage\n\\subsection{Any collection of non-intersecting discs on a plane}\nLet,\n\\begin{displaymath}\n\tA := \\qty{(x,y,r) \\in \\R^3 : \\forall (x_i,y_i,r_i) \\in A : \n\t\t\t(x - x_i)^2 + (y - y_i)^2 \\geq \\qty(r + r_i)^2}\n\\end{displaymath}\n\nClearly, $A \\subseteq \\qty{(x,y,r) \\in \\R^3}$, therefore \n$\\abs*{A} \\leq \\abs*{\\R^3} = \\abs*{\\R}= \\aleph_1$.\nThis itself does not actually say anything about if it is countable.\\\\\n\nAlternativly, the restction of $r_0$ being arbritrarily selected would redefine the set as\n\\begin{displaymath}\n\tA := \\qty{(x,y) \\in \\R^2 : \\forall (x_i,y_i) \\in A : \n\t\t\t(x - x_i)^2 + (y - y_i)^2 \\geq \\qty(2r_0)^2}\n\\end{displaymath}\nAlthough this could also be looked at as just \n$A \\subseteq \\qty{(x,y) \\in \\R^2} \\implies \\abs*{A} \\leq \\abs*{\\R^2} = \\aleph_1$\nthis could also be seen as the subset of the optimaly organized set $A_0^*$,\n\\begin{displaymath}\n\tA_0^* := \\qty{(x,y,r_0) : (x,y) \\in 2 r_0 \\N^2}\n\\end{displaymath}\nwhere $2 r_0 \\N^2$ is defined as\n\\begin{displaymath}\n\t2 r_0 \\N^2 := \\qty{(2 r_0 x, 2 r_0 y) : (x,y) \\in \\N^2}\n\\end{displaymath}\nClearly, $\\abs*{A_0^*} \\leq \\abs{2 r_0 \\N^2}$. \nUsing a simple 1-1 mapping between $2 r_0 \\N^2$ and $\\N$, $2 r_0 \\N^2$ is countable. \nAdditionally, this implies that $A_0^*$ is also countable.\n\\begin{align*}\n\t(f : \\N \\toS 2 r_0 \\N^2) \\land (g : 2 r_0 \\N^2 \\toB A_0^*) \n\t\t&\\implies \\abs*{A_0^*} \\leq \\abs{2\\N^2} = \\abs{\\N} = \\aleph_0\n\\end{align*}\n\nThis can all be used then to claim (and subsequently prove) that $A$ is countable.\n\\begin{align*}\n\tA \\subseteq A_0^* &\\implies \\abs*{A} \\leq \\abs*{A_0^*}\\\\\n\t(\\abs*{A} \\leq \\abs*{A_0^*}) \\land (\\abs*{A_0^*} \\leq 2 \\abs{2 r_0 \\N^2} \n\t\t= \\abs*{\\N} = \\aleph_0) \n\t\t&\\implies \\abs*{A} \\leq \\aleph_0\n\\end{align*}\n\n\\newpage\nThis result can then be expanded to include an additional set of non-intersecting discs \noptimaly placed in the unconstrained space. \nThis next set of dics $A_1^*$ can be defined as\n\\begin{displaymath}\n\tA_1^* := \\qty{(x + r_0, y + r_0, r_1) : (x,y) \\in 2 r_0 \\N^2}\n\\end{displaymath}\nwith a selected $r_1$ such that the two sets of discs do not intersect.\\\\\nSuch $r_1$ can be calculated so that $(2r_0)^2 + (2r_0)^2 \\leq (2r_0 + 2 r_1)$:\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.7\\textwidth]{fig/pblm4b_1.png}\n\\end{figure}\nthus $r_1 \\leq -1 + \\sqrt{1 + r_0}$.\\\\\nThis is then proven as countable by the same logic as $A_0^*$.\\\\\nAdditionally, this process can be repeated for more collections of disks of decreasing size \nuntil $r_i$ is infinitey small.\\\\\nAt first glance the union of all these sets may not actually be proven to be coundable, \nbut each additional set will no longer consist of all potential optimal pairs so it \nmay be possible to limit this to $\\aleph_0$\n\n% 4c\n\\newpage\n\\subsection{Any collection of non-intersecting T-shapes on a plane}\n\\textbf{Note:} T-shape consists of two perpendicular line segments such that \none of the segments is attached by one of its endpoints to the center of the other segment. \nThe lengths of these segments can be arbitrary. 
The orientation of the T-shape can be arbitrary.\\\\\n\nDefine set $A$,\n\\begin{align*}\n\tA := &\\Bigg{\\{}\n\t\t((x_1,y_1),(x_2,y_2),(x_3,y_3)) \\in (\\R^2)^3: \\\\\n\t\t&\\quad\\forall \\qty((x_1^{(i)},y_1^{(i)}),(x_2^{(i)},y_2^{(i)}),(x_3^{(i)},y_3^{(i)})) \\in (\\R^2)^3 : \\\\ \n\t\t&\\quad ((x_1,y_1)\\Rel_1(x_2,y_2) \\cap ((x_1^{(i)},y_1^{(i)})\\Rel_1(x_2^{(i)},y_2^{(i)}) = \\emptyset)\\\\\n\t\t&\\quad \\land \\qty((\\cfrac{x_1+x_2}{2},\\cfrac{y_1+y_2}{c})\\Rel(x_3,y_3)) \n\t\t\t\\cap \\qty(\\cfrac{x_1^{(i)}+x_2^{(i)}}{2},\\cfrac{y_1^{(i)}+y_2^{(i)}}{c}) = \\emptyset)\n\t\\Bigg{\\}}\n\\end{align*}\nwith $\\Rel_1$ defined as\n\\begin{align*}\n\t\\Rel_1 := &\\Bigg{\\{}\n\t\t\\qty(((x_1,y_1),(x_2,y_2),(x_3,y_3)), ((x_1,y_1),(x_2,y_2),(x_3,y_3)))\n\t\t\\in (\\R^2)^3 \\cross (\\R^2)^3, \\forall t \\in \\R, 0 \\leq 1 : \\\\\n\t\t\t&\\quad \\qty((x_1,y_1) \\Rel_0 (x_2,y_2)) \n\t\t\t\\land \\qty(\\qty(\\cfrac{x_1 + x_2}{2}, \\cfrac{y_1 + y_2}{2}) \\Rel_0 (x_3,y_3))\n\t\\Bigg{\\}}\n\\end{align*}\nand $\\Rel_0$ defined as\n\\begin{displaymath}\n\t\\Rel_0 := \\qty{\n\t\t(x,y) \\in x \\cross y, \\ t \\in \\R : \n\t\t(x = (1-t) x_a + t x_b) \\land (y = (1-t) y_a + t y_b)\n\t\t\\land (0 \\leq t \\leq 1)\n\t}\n\\end{displaymath}\n\nFrom this definition, the sum of an infinite collection of non-intersecting T-shapes on a plane\ncan be bounded by $\\aleph_1$:\n\\begin{displaymath}\n\t\\qty(A \\subset (\\R^2)^3) \\land \\qty(\\exists f: \\R \\toB (\\R^2)^3) \n\t\t\\implies \\abs*{A} \\leq \\abs*{(\\R^2)^3} = \\abs*{\\R} = \\aleph_1\n\\end{displaymath}\nThis doesn't imply countability, but it is an important bound for every possible \ncombinination of $T-shapes$.\\\\\n\nWhen restricting to an individual set of arbritray parameters $l_1, l_2$ and orientation $\\theta$ \nset $A^*$ can be defined as:\n\\begin{align*}\n\tA^* := &\\bigg{\\{}\n\t\t((x_1,y_1),(x_2,y_2),(x_3,y_4)) \\in A : \\\\\n\t\t% &\\quad\\forall \\qty((x_1^{(i)},y_1^{(i)}),(x_2^{(i)},y_2^{(i)}),(x_3^{(i)},y_3^{(i)})) \\in (\\R^2)^3 : \\\\ \n\t\t% &\\quad ((x_1,y_1)\\Rel_1(x_2,y_2) \\cap ((x_1^{(i)},y_1^{(i)})\\Rel_1(x_2^{(i)},y_2^{(i)}) = \\emptyset)\\\\\n\t\t% &\\quad \\land \\qty(\\qty(\\cfrac{x_1+x_2}{2},\\cfrac{y_1+y_2}{c})\\Rel(x_3,y_3)) \n\t\t% \t\\cap \\qty(\\qty(\\cfrac{x_1^{(i)}+x_2^{(i)}}{2},\\cfrac{y_1^{(i)}+\n\t\t% \t\ty_2^{(i)}}{c})\\Rel(x_3^{(i)},y_3^{(i)})) = \\emptyset)\\\\\n\t\t&\\quad \\land \\qty((x_2 - x_1) = l_1 * \\cos(\\theta))\n\t\t\t\\land \\qty((x_2^{(i)} - x_1^{(i)}) = l_2 * \\cos(\\theta))\\\\\n\t\t&\\quad \\land \\qty((y_2 - y_1) = l_1 * \\sin(\\theta))\n\t\t\t\\land \\qty((y_2^{(i)} - y_1^{(i)}) = l_2 * \\sin(\\theta))\n\t\\bigg{\\}}\n\\end{align*}\nThis can then be bounded follwoing a similar procedure as shown in the previous proof.\n% 4d\n\\subsection{Set $\\mathbb{P}$ of all prime numbers}\nBy definition, all prime numbers are natural numbers, thus\n\\begin{displaymath}\n\t\\mathbb{P} \\subseteq \\N\n\\end{displaymath}\nThis then implies that $\\abs*{\\mathbb{P}} \\leq \\abs*{\\N}$, therefore $\\mathbb{P}$ is countable.\n\\begin{displaymath}\n\t\\mathbb{P} \\subseteq \\N \\implies \\abs*{\\mathbb{P}} \\leq \\abs*{\\N}\n\\end{displaymath}\n\n% 4e\n\\subsection{Set $\\mathbb{A}$ of all algebraic numbers}\n\\textbf{Note:} Algebraic numbers are numbers which are roots of \nsome polynomials with integer coefficients.\\\\\n\nDefine $\\mathbb{A}$ as,\n\\begin{displaymath}\n\t\\mathbb{A} := \\qty{x \\in \\real : \\forall i \\in \\N \\exists p \\in \\mathbf{P}_i(\\Z) : \n\tp(x) = 0}\n\\end{displaymath}\nSince 
$\\forall i \\in \\N : \\abs{\\mathbf{P}_i(\\Z)} \\leq \\abs{\\N} = \\aleph_0$,\nand $\\exists f: \\mathbf{P}_i(\\Z) \\toS \\mathbb{A}$.\n$\\mathbb{A}$ is countable.\n\\begin{displaymath}\n\t\\qty(\\forall i \\in \\N : \\abs{\\mathbf{P}_i(\\Z)} \\leq \\abs{\\N} = \\aleph_0)\n\t\\land (\\exists f: \\mathbf{P}_i(\\Z) \\toS \\mathbb{A})\n\t\\implies \\abs{\\mathbb{A}} \\leq \\abs{\\mathbf{P}_i(\\Z)} \\leq \\abs{\\N} = \\aleph_0\n\\end{displaymath}\n\n\\newpage\n% Problem 5\n\\section{}\nProve that for any infinite set $A$ there exists $B \\subseteq A$, so that $\\abs{B}=\\abs{\\N}$.\\\\\n\nBy definition an infinite set, $\\abs*{A} \\geq \\abs{\\N} = \\aleph_0$.\nA very, very simple proof of the fact $\\exists B \\subseteq A : \\abs{B}=\\abs{\\N} = \\aleph_0$ \nis as follows:\n\nDefine $f : \\N \\toB B$ (bijective) to directly map $\\N$ to an element of subset $B$. \nSince $f$ is bijective, $\\abs{B}=\\abs{\\N} = \\aleph_0$.\\\\\nNext, define $g : B \\toI A$ (injective). Becouse $A$ is an infinite set, \nand injective function $g$ can in fact map the infinite set $B$ into $A$, \nwhich then implies $B \\subseteq A$.\n\n\\begin{displaymath}\n\t(f : \\N \\toB B) \\land (g : B \\toI A) \\implies \\abs*{A} \\geq \\abs*{B} = \\abs*{\\N} = \\aleph_0\n\\end{displaymath}\n\n\n\\newpage\n% Problem 6\n\\section{}\nProve that the following sets have the same cardinality.\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{fig/pblm6.png}\n\\end{figure}\n\nThese sets can all be represented as a set of real number ordered pairs.\\\\\nThese are constructed with arbritrary constants: \n$x_a,x_b,x_c,x_d,x_e,x_f,y_a,y_b,y_c,y_d,y_e,y_f,x_s,y_s,r_s,m_l,b_l$\n\n\\begin{align*}\n\tAB &= \\{(x,y) \\in \\R^2, \\ t \\in \\R : \n\t(x = (1-t) x_a + t x_b) \\land (y = (1-t) y_a + t y_b)\n\t\\land (0 \\leq t \\leq 1)\\}\\\\\n\tCD &= \\{(x,y) \\in \\R^2, \\ t \\in \\R : \n\t(x = (1-t) x_c + t x_d) \\land (y = (1-t) y_c + t y_d)\n\t\\land (0 \\leq t < 1)\\}\\\\\n\tEF &= \\{(x,y) \\in \\R^2, \\ t \\in \\R : \n\t(x = (1-t) x_e + t x_f) \\land (y = (1-t) y_e + t y_f)\n\t\\land (0 < t < 1)\\}\\\\\n\t\\mathbb{S} &= \\{(x,y) \\in \\R^2, \\ t \\in \\R : \n\t(x = r_s cos(2 \\pi t) + x_s) \\land (y = sin(2 \\pi t) + y_s)\n\t\\land (0 \\leq t < 1)\\}\\\\\n\t\\mathit{l} &= \\{(x,y) \\in \\R^2, \\ t \\in \\R: (x = t) \\land (y = m_l t + b_l)\\}\n\\end{align*}\n\nIt is clear that each set is defined parmetrically with bijective equations maping \nthe parameter $t$ into a 2-D corndates $(x,y)$, thus proving that the cardinality \nof each sets parameter is sufficent to showing the each set has the same cardinality.\\\\\n\nLet $T_i = \\{t \\in A_i\\}$ for each of the sets $A_i$, then (from the reasoning above) \nthe following can be said:\n\n\\begin{align*}\n\tT_{AB} &= \\{t \\in \\R : 0 \\leq t \\leq 1\\}, \\ &\\abs{T_{AB}} = \\abs{AB}\\\\\n\tT_{CD} &= \\{t \\in \\R : 0 \\leq t < 1\\}, \\ &\\abs{T_{CD}} = \\abs{CD}\\\\\n\tT_{EF} &= \\{t \\in \\R : 0 < t < 1\\}, \\ &\\abs{T_{EF}} = \\abs{EF}\\\\\n\tT_{\\mathbb{S}} &= \\{t \\in \\R : 0 \\leq t < 1\\}, \\ &\\abs{T_{\\mathbb{S}}} = \\abs{\\mathbb{S}}\\\\\n\tT_{\\mathit{l}} &= \\{t \\in \\R\\}, \\ &\\abs{T_{\\mathit{l}}} = \\abs{\\mathit{l}}\\\\\n\\end{align*}\n\nClearly, the equivalent definition of $T_{CD}$ and $T_{\\mathbb{S}}$ indicates\n$$\\abs*{CD} = \\abs*{T_CD} = \\abs*{T_\\mathbb{S}} = \\abs*{\\mathbb{S}}$$\n\nThe equivalence of the other sets is more difficult then via definition.\\\\\nFirst, the baseline cardinality can be shown to be $\\aleph_1$ as (by 
definition of $T_{\\mathit{l}})$)\n$$\\abs{T_\\mathit{l}} = \\abs{\\R} = \\aleph_1$$\n\nNext, the equivalence of $T_{EF}$ and $T_\\mathit{l}$ can be shown with the biforjective mapping \n$$f_1 : \\R \\rightarrow T_{EF} = \\frac{2 \\pi \\tan^{-1}(x) + 1}{2}$$\nTherefore,\n$$\\abs{EF} = \\abs{T_{EF}} = \\abs{T_{\\mathit{l}}} = \\abs{\\R} = \\aleph_1$$\n\nNext, due to the nature of infinite sets, the addition of $t=0$ from $T_{EF}$ to $T_{CD}$ \ndoes not affect the overall cardinality of $T_{CD}$, thus\n$$\\abs{CD} = \\abs{T_{CD}} = \\abs{T_{EF}} = \\abs{\\R} = \\aleph_1$$\n\nSimilarly, the addition of $t=1$ from $T_{CD}$ to $T_{AB}$ will stil result in \n$$\\abs{AB} = \\abs{T_{AB}} = \\abs{T_{CD}} = \\abs{\\R} = \\aleph_1$$\n\nUltimently this means that\n$$\\abs{AB} = \\abs{CD} = \\abs{EF} = \\abs{\\mathbb{S}} = \\abs{\\mathit{l}} = \\abs{\\R} = \\aleph_1$$\n\n\n\n\\end{document}\n", "meta": {"hexsha": "6bf668361bde1574293bd1301576d22d5d01f434", "size": 21758, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW3/MATH5301-HW3.tex", "max_stars_repo_name": "jonaswagner2826/MATH5301", "max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z", "max_issues_repo_path": "Homework/HW3/MATH5301-HW3.tex", "max_issues_repo_name": "jonaswagner2826/MATH5301", "max_issues_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW3/MATH5301-HW3.tex", "max_forks_repo_name": "jonaswagner2826/MATH5301", "max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2071197411, "max_line_length": 108, "alphanum_fraction": 0.6027208383, "num_tokens": 9510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8615382165412809, "lm_q1q2_score": 0.5880633578035157}}
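To make the spiral enumerations in Problem 3 concrete, here is a short Python sketch (an editorial addition, not part of the submitted homework). It lists every rational exactly once by sweeping the pairs $(p, q)$ in order of $|p| + q$, one explicit way to realize a surjection $\N \toS \Q$:

\begin{lstlisting}
from fractions import Fraction
from itertools import count

def rationals():
    # Enumerate Q by increasing s = |p| + q, skipping duplicates,
    # in the spirit of the spiral used for h(x) above.
    seen = set()
    for s in count(1):
        for q in range(1, s + 1):
            for p in (s - q, -(s - q)):
                r = Fraction(p, q)
                if r not in seen:
                    seen.add(r)
                    yield r

gen = rationals()
print([next(gen) for _ in range(8)])
# [Fraction(0, 1), Fraction(1, 1), Fraction(-1, 1), Fraction(2, 1),
#  Fraction(-2, 1), Fraction(1, 2), Fraction(-1, 2), Fraction(3, 1)]
\end{lstlisting}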
{"text": "\\section{Heuristics}\n\nIn trying to determine a heuristic function, we created a simple heuristic based on Hamming distance in order to serve as a baseline, as shown in Figure 8. Essentially, h1(n) was the number of 1\u2019s (since our goal state is a board full of 0\u2019s). Given the nature of the problem, this heuristic makes a lot of sense, but fails to consider groups and ordering. For example, a board with 3 scattered 1\u2019s is definitely not closer to the goal state than a board with 5 1\u2019s in the shape of a plus (this would reach the goal state in one move).\n\\\\\n\nFrom the heuristics above, we were able to stem some different variations and test them out. We created and tested heuristic functions that determined the number of pairs of 1's (h2) and number of three consecutive 1's (h3) as well (both overlapping and not) but this proved to yield a longer search result than our original heuristic. Both algorithms are shown in Figure 9 and 10, respectively. In retrospect, this makes sense as they don\u2019t take into consideration other alternatives that could prove to be better moves. Taking the same example as above, a board with some scattered pairs of 1\u2019s would be chosen over a board with 5 1\u2019s in the shape of a plus. Due to the size restriction of h2 and h3, you would have several boards that, while yielding the same heuristics, would not be comparable at all.\n\\\\\n\nAnother variation, shown in Figure 11, involved selecting the board containing a move that could clear the most amount of 1\u2019s (haround). However, this suffers from the same problem as mentioned above. This performs a more exhaustive search since it goes through every board that could potentially clear 5 1\u2019s first, and then 4, and so on. \n\\\\\n\nLastly, we tested a heuristic that would return the number of cleared rows (hrows) from top to bottom, as shown in Figure 12. The Indonesian Dot puzzle is based on a popular game called Light\u2019s Out, and one strategy involves performing such technique. However, this proved not as efficient as we had planned as towards the end, the algorithm involved going through each row once more in order to clear any remaining 1\u2019s. \n\\\\\n\nFrom all these heuristics, the one selected was the initial one proposed (h1) that we will term h(n), which instead of returning the number of 1\u2019s, simply returns the number of potential moves that could be done, as shown in Figure 13. This involves first verifying that the modulus of the number of 1\u2019s for 5, 4 and 3, respectfully in that order, is not 1 and 2 has no moves could flip exactly 1 or 2 nodes. 
This heuristic is not without its faults, as it merely works by counting the number of 1\u2019s and assuming their positions instead of taking their ordering into account.\n", "meta": {"hexsha": "b5e65789ba690662f833306f3b6d452318bb80b6", "size": 2709, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/sections/heuristics.tex", "max_stars_repo_name": "Ra-Ni/Indonesian-Dot-Solver", "max_stars_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documentation/sections/heuristics.tex", "max_issues_repo_name": "Ra-Ni/Indonesian-Dot-Solver", "max_issues_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documentation/sections/heuristics.tex", "max_forks_repo_name": "Ra-Ni/Indonesian-Dot-Solver", "max_forks_repo_head_hexsha": "2baf507d23816b686f046f89d4c833728b25f2dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-03-18T15:23:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-18T15:23:24.000Z", "avg_line_length": 169.3125, "max_line_length": 806, "alphanum_fraction": 0.7833148763, "num_tokens": 624, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5879062007736983}}
{"text": "\\documentclass[main.tex]{subfiles}\n\n\\begin{document}\n\n\\section{Real Analysis}\n\nWe review over Banach and Hilbert spaces, measure theory and Lebesgue integration, Fourier theory (series and functions). \n\nWe are following these notes: \\url{https://www.math.ucdavis.edu/~hunter/measure_theory/measure_notes.pdf}\n\n\\todo[inline]{Sobolev theory and Baire category theorem (open mapping theorme, closed graph theorem, and uniform boundedness principal}\n\n\\subsection{Cheat sheet}\nI begin to realize that this is getting way to long, so I am going to have a cheat sheet of all the most important theorems. \n\n\\subsubsection{Basic Real Analysis}\nBasic questions in real analysis, like continuity and uniform convergence and stuff:\n\nUniform convergent implies continuity of the limit:\n\n\\begin{theorem}\nIf $f_n$ uniformly converges to $f$, then if $f_n$ is continuous, then so is $f$.\n\\end{theorem}\n\nAlso derivatives:\n\n\\begin{theorem}\nIf $f_n$ are differentiable with $f_n$ uniformly converges to $f$ and $f' _n$ uniformly converges to $g$, then $g = f'$.\n\\end{theorem}\n\nWe also have the Arzela-Ascoli theorem, which tells us gives us a way to prove a linear operator between Banach spaces is compact.\n\n\\begin{theorem}[Arzela-Ascoli Theorem]\nConsider a sequence of real-valued continuous functions ${f}_n$ defined on a closed and bounded interval [a, b] of the real line. If this sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence ${f}_k$that converges uniformly.\n\\end{theorem}\n\n\n\n\\subsubsection{Topology}\nA space is compact iff any decreasing sequence of nonempty closed subset has a nonempty intersection.\n \nWe have the all-important contraction mapping theorem:\n\\begin{theorem}\nIf $T: X \\rightarrow X$ is a contraction on a complete metric space $X$, that is, $d(Tx, Ty) \\leq c d(x,y)$ for some constant $c \\leq 1$, then there exists exactly one solution $x$ for the fix point equation $T(x) = x$.\n\\end{theorem}\n\n\n\n\\subsubsection{Banach Spaces}\nTo show that $f_j \\rightarrow f$ implies that they are $L^1$ convergent (ofc with additional hypothesis), normally, we can separate $g_j = |f - f_j|$ into two parts, one part is $h_j = g_j \\chi_{g_j < M}$ for some $M$. This part always goes to zero as it is dominated by integrable function $M$ (on a finite measure space). Thus we only have to worry about \"the large part\" $k_j = g_j - h_j = g_j \\chi_{g_j \\geq M}$, where hopefully the other hypothesis will help us.\n\n\\subsubsection{$L^p$ functions}\n\n$L^p$ function on measure space $X$ can be realized as Lebesgue measureable function that has a finite $L^p$ norm modulo thus those integral is zero.\n\nWhen $X$ is $\\mathbb{R}^n$, regularity of Lebesgue measure tells us that compact generated (smooth) continuous functions are dense in $L^p$.\n\nWhen $X$ is a finite measure space, then by Holder inequality we see that for $1 \\leq p \\leq q \\leq \\infty$, then $L^q \\subset L^p$, and this map is continuous, that is, the $L^p$ norm can be bounded by the $L^q$ norm. By above, since compact generated continuous functions are dense in both (with their respective topology), this is not a close map.\n\nThus we have uniform convergence implies uniform a.e. - $L^\\infty$ implies $L^p$ implies $L^1$ implies convergence in measure, for finite measure spaces.\n\n\\begin{theorem}[Arzela-Ascoli Theorem]\nConsider a sequence of real-valued continuous functions ${f}_n$ defined on a closed and bounded interval [a, b] of the real line. 
\n\n\\subsubsection{Fourier series}\nFourier series give an equivalence of Hilbert spaces $L^2(S^1) \\cong l^2$, where $l^2 \\coloneqq L^2(\\mathbb{Z})$ (with counting measure). It takes the derivative to multiplication by $in$ on $\\mathbb{Z}$, and it takes convolution to pointwise multiplication.\n\nThe decay of the Fourier coefficients $a_n$ relates to the smoothness of $f$; namely, if $f$ is $k$-times differentiable, then $a_n = O(n^{-k})$, as $n^k a_n$ is bounded.\n\nIf $f$ is piecewise smooth, then the Fourier series converges to $f$ at points of continuity and to the average of the left and right limits at jump points. So if $f$ is smooth (continuous with continuous first derivative), then the Fourier series converges.\n\nWe have the Poisson Summation theorem:\n\\begin{theorem}[Poisson Summation Theorem]\nFor a large class of functions $f$, we have that\n$$\n\\sum_{n = -\\infty}^{\\infty} f(2\\pi n) = \\frac{1}{\\sqrt{2 \\pi}} \\sum_{n = -\\infty}^{\\infty} \\hat{f}(n)\n$$\n\\end{theorem}\n\n\\subsubsection{Fourier Transform}\nIf $f$ is compactly supported, then $\\hat{f}$ can be extended to a holomorphic function. For example, this implies that if both $f$ and $\\hat{f}$ are compactly supported, then $f = 0$.\n\n\nThe Fourier transform of a Gaussian is another Gaussian.\n\nWe have the Plancherel theorem, a version of the Parseval theorem for continuous spectrum:\n\n\\begin{theorem}\nIf $f$ is both $L^1(\\mathbb{R})$ and $L^2(\\mathbb{R})$, then $\\hat{f}$ is $L^2$ and\n$$\n\\norm{f}_{L^2} = \\norm{\\hat{f}}_{L^2}\n$$\nIn addition, this map extends to a map $L^2 \\rightarrow L^2$ that is an isometry.\n\\end{theorem}\n\nThus for any function $f \\in L^2(\\mathbb{R})$, we can approximate it by $L^1$ functions, and the Fourier transform of $f$ is the limit of the Fourier transforms of the $L^1$ functions. It is a unitary transformation; that is, it is invertible, with the inverse given by the inverse Fourier transform.\n\\subsubsection{Inequalities}\n\nWe have the important Cauchy's inequality:\nlet $V$ be an inner product space, then\n\\begin{theorem}[Cauchy's inequality]\n$$(u,v)^2 \\leq (u,u) (v,v)$$\nwith equality iff $u,v$ are linearly dependent.\n\\end{theorem}\n\nWe also have Holder's inequality:\n\n\\begin{theorem}[Holder's Inequality]\nGiven $r, p, q$ with $\\frac{1}{r} = \\frac{1}{p} + \\frac{1}{q}$, then\n$$\n\\norm{fg}_r \\leq \\norm{f}_{p}  \\norm{g}_q\n$$\n\\end{theorem}\n\nWe also have the Jensen's inequality:\n\n\\begin{theorem}[Jensen's Inequality]\nGiven a finite measure space $X$ with total measure $\\mu(X) = 1$ and an $L^1$ function $f$ on it. Let $\\phi$ be a convex function on $\\mathbb{R}$ (like $x \\log x$, $x^2$), then\n$$\n\\phi\\left(\\int f \\, d\\mu\\right) \\leq \\int \\phi(f) \\, d\\mu\n$$\n\\end{theorem}\n
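\nA minimal instance (taking $\\phi(x) = x^2$, which is convex):\n$$\n\\left(\\int f \\, d\\mu\\right)^2 \\leq \\int f^2 \\, d\\mu ,\n$$\nso on a probability space the $L^1$ norm is controlled by the $L^2$ norm, recovering a special case of the inclusion $L^2 \\subset L^1$ from above.\n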
\nIn probability we work with a measure space with total measure $1$. Given a random variable $X$ (that is, a measurable function), we have Markov's inequality:\n\n\\begin{theorem}[Markov's Inequality]\nIf $X$ is nonnegative, then\n$$\n\\mathbb{P}(X \\geq a) \\leq \\frac{\\mathbb{E}(X)}{a}\n$$\n\\end{theorem}\n\nFrom this we have the Chebyshev's Inequality:\n\n\\begin{theorem}[Chebyshev's Inequality]\nFor any $X$, we have\n$$\n\\mathbb{P}[|X - \\mathbb{E}[X]| \\geq a] \\leq \\frac{var(X)}{a^2}\n$$\n\\end{theorem}\n\n\n\n\n\n\n\\subsection{Banach Spaces}\nIn analysis there are many infinite-dimensional spaces, and by themselves they are hard to control. Banach spaces are vector spaces complete with respect to a norm, and they are easier to control.\n\n\\begin{remark}\nGiven a Banach space and its norm, we have a notion of convergence. Normally the Banach space comes from the metric completion of a dense subset, which is normally nice (like compactly supported smooth, i.e. infinitely differentiable, functions). Therefore the space is entirely controlled by these nice functions together with the notion of convergence.\n\\end{remark}\n\n\\subsubsection{Basics}\n\nWe first start with the definition of a norm:\n\n\\begin{definition}\nA norm on a vector space $V$ is a real-valued map $\\norm{-} : V \\rightarrow \\mathbb{R}$, with the condition that\n\\begin{enumerate}\n    \\item $\\norm{x + y} \\leq \\norm{x} + \\norm{y}$\n    \\item $\\norm{s x} = |s| \\norm{x}$, where $s \\in \\mathbb{R}$ is a scalar and $|s|$ is its absolute value.\n    \\item If $\\norm{x} = 0$ then $x = 0$.\n\\end{enumerate}\n\nA norm gives a metric on $V$ by $d(x, y) = \\norm{x - y}$\n\\end{definition}\n\n\\begin{example}\nExamples of normed spaces are the $L^p$ spaces. An inner product $(-,-)$ also gives a norm by $\\norm{v} \\coloneqq \\sqrt{(v,v)}$.\n\\end{example}\n\n\n\\begin{definition}\nA Banach space $V$ is a normed vector space that is complete with respect to the metric derived from the norm. A Banach space is separable if there exists a countable dense subset of it.\n\\end{definition}\n\nThe dual of a separable Banach space is not necessarily separable; however,\n\n\\begin{proposition}\nIf $X^*$ is separable, then so is $X$.\n\\end{proposition}\n\n\nA closed subspace of a Banach space is always Banach. However, there might be dense proper subspaces (of which the Banach space is the completion).\n\nFor linear maps between normed spaces, there is a notion of bounded linear maps:\n\n\\begin{definition}\n$T: X \\rightarrow Y$ is bounded if there exists $M \\geq 0$ such that\n$$\n\\norm{Tx} \\leq M \\norm{x}\n$$\nfor all $x \\in X$. The operator norm of $T$ is\n$$\n\\norm{T} \\coloneqq \\inf \\{ M : \\norm{Tx} \\leq M \\norm{x} \\text{ for all } x \\}\n$$\n\\end{definition}\n\n\nMany linear maps are not bounded, such as taking the derivative on (sup-normed) smooth functions.\n\nIn fact, a linear map is bounded iff it is continuous:\n\n\\begin{theorem}\nA linear map between normed spaces is bounded iff it is continuous.\n\\end{theorem}\n
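\nFor contrast, a standard sketch of unboundedness (with $D$ the derivative on $C^1[0,1] \\subset C[0,1]$, both carrying the sup norm): take $u_n(x) = x^n$, so that\n$$\n\\norm{u_n}_\\infty = 1, \\qquad \\norm{D u_n}_\\infty = \\norm{n x^{n-1}}_\\infty = n \\rightarrow \\infty,\n$$\nhence no constant $M$ works and $D$ is not bounded (equivalently, not continuous).\n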
\nMoreover, it suffices to define a bounded map on a dense subspace. This is called the BLT theorem:\n\n\\begin{theorem}[Bounded Linear Transformation]\nLet $X$ be a normed linear space and $Y$ a Banach space. If $M$ is a dense linear subspace of $X$ and $T: M \\rightarrow Y$ is a bounded linear map, then there is a unique extension $\\hat{T}: X \\rightarrow Y$ with the same operator norm.\n\\end{theorem}\n\nWe only care about norms up to topology:\n\n\\begin{definition}\nTwo norms on $X$ are equivalent if there exist constants $c, C > 0$ such that\n$$\nc\\norm{x}_1 \\leq \\norm{x}_2 \\leq C\\norm{x}_1\n$$\n\\end{definition}\n\n\\begin{theorem}\nTwo norms induce the same topology iff they are equivalent\n\\end{theorem}\n\nHere's the open mapping theorem:\n\n\\begin{theorem}[Open Mapping Theorem]\nSuppose $T: X \\rightarrow Y$ is a one-to-one, onto bounded linear map between Banach spaces; then $T^{-1} : Y \\rightarrow X$ is bounded.\n\\end{theorem}\n\n\nNote that every finite-dimensional normed vector space is a Banach space and all its norms are equivalent, so their topologies are the same.\n\nGiven a Banach space $X$ and a closed subspace $M$, we have the quotient $X/M$ with the quotient norm:\n$$\n\\norm{[x]} = \\inf_{v \\in M} \\norm{x + v}\n$$\nThis in fact makes $X/M$ a Banach space and the projection map $X \\rightarrow X/M$ continuous.\n\n\\subsubsection{Bounded Operators}\nNow we study bounded operators.\n\nFirst of all, bounded operators compose: for $T: X \\rightarrow Y$ and $S: Y \\rightarrow Z$, the composition $ST$ is bounded with operator norm $\\norm{ST} \\leq \\norm{S}\\norm{T}$.\nThe space of bounded maps $B(X,Y)$ is a Banach space if $Y$ is one:\n\\begin{theorem}\nIf $X$ is a normed linear space and $Y$ is a Banach space, then $B(X,Y)$ is a Banach space with respect to the operator norm.\n\\end{theorem}\n\nAn important class of bounded operators are the compact ones:\n\n\\begin{definition}\nA linear operator $T: X \\rightarrow Y$ is compact if $T(B)$ is a precompact subset of $Y$ for every bounded subset $B$ of $X$.\n\\end{definition}\n\nEquivalently, $T$ is compact iff every bounded sequence $(x_n)$ in $X$ has a subsequence whose image converges in $Y$. A subset is precompact if its closure is compact.\n\nTo show a map is compact, we normally use the Arzela-Ascoli theorem. To show it is not, we normally find functions with disjoint support that are bounded; this works really well when we are trying to show that an inclusion of Banach spaces given by different norms is not compact.\n\n\\begin{proposition}\n$K(X,Y)$, the space of compact linear operators, is a closed linear subspace of $B(X,Y)$. It is also a two-sided ideal; that is, the composition of a compact operator and a bounded operator is compact.\n\\end{proposition}\n\nCompact operators on infinite-dimensional spaces behave like operators on finite-dimensional spaces. On Hilbert spaces they are uniform limits of finite-rank operators.\n\nThere are also different notions of convergence. Converging in the operator norm is uniform convergence. We also have another notion, called strong convergence.\n\n\\begin{definition}\nA sequence $T_n$ in $B(X,Y)$ converges strongly if it converges pointwise to $T$.\n\\end{definition}\nUniform convergence implies strong convergence, but not conversely (the same relation as uniform versus pointwise convergence of functions).\n\nWe can define the exponential of a bounded linear operator:\n$$\ne^A \\coloneqq I + A + \\frac{1}{2!}A^2 + ...\n$$\nIt has norm at most $e^{\\norm{A}}$.\n\nIf $A$ and $B$ commute, we have $e^A e^B = e^{A + B}$. This gives rise to a flow $t \\mapsto e^{tA}$, which is a one-parameter, uniformly continuous group.
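\n\nA sketch of the flow interpretation (standard, with initial datum $u_0 \\in X$ assumed): the curve\n$$\nu(t) = e^{tA} u_0 \\qquad \\text{solves} \\qquad u'(t) = A u(t), \\quad u(0) = u_0,\n$$\nsince the norm-convergent power series for $e^{tA}$ may be differentiated term by term, giving $\\frac{d}{dt} e^{tA} = A e^{tA}$.\n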
\n\n\n\\subsubsection{Dual Spaces}\nThe topological dual of $X$ is the space of continuous (bounded) functionals.\n\nFor Hilbert spaces, the Riesz representaiton theorem tells us that the vector space can be identified with the origin space. Not true for general Banach spaces.\n\nHans-Banach says that we can extend bounded linear operator defined on a subspace:\n\n\\begin{theorem}[Hans-Banach]\nIf $Y$ is a linear subspace of normed space $X$ and $\\phi: Y \\rightarrow \\mathbb{R}$ is a bounded linear functional on $Y$ with norm $M$, then exists an extension $\\phi': X \\rightarrow \\mathbb{R}$ that restricts to $\\phi$ on $Y$ and has the same norm.\n\\end{theorem}\n\nThis gives us a notion of weak convergence:\n\\begin{definition}\nA sequence $x_n$ in Banach space $X$ converges weakly to $x$ if \n$$\n\\phi_(x_n) \\rightarrow \\phi(x)\n$$\nfor every bounded linear functional $\\phi$ in $X^*$.\n\\end{definition}\n\nWe also have weak $*$ convergence for the dual space $X^*$:\n\\begin{definition}\nA sequence $\\phi_n$ in the dual Banach space $X^*$ weakly $*$ converge to $\\phi$ if\n$$\n\\phi_n(x) \\rightarrow \\phi(x)\n$$\nfor every $x \\in X$.\n\\end{definition}\nIt generates the weakest topology such that the pairing $X \\times X^* \\rightarrow \\mathbb{R}$ is continuous.\n\n\n\n\\subsection{Hilbert Spaces}\nBanach spaces are pretty nice spaces, however, they are still not very intuitive. Hilbert spaces are Banach spaces with the norm coming from an inner product. They behave closer to finite dimensional vector spaces. And as we will see, in some sense, there are \"only one\" Hilbert space of a given (orthonormal basis) size.\n\n\n\\subsubsection{Basics}\nWe will define it or complex linear spaces, for real vector spaces just omit the complex conjugates:\n\n\\begin{definition}\nAn inner product on $V$ is a map \n$$\n(-, -): X \\times X \\rightarrow \\mathbb{C}\n$$\nsuch that:\n\\begin{enumerate}\n    \\item It is linear in the second factor.\n    \\item $(y,x) = \\overline{(x,y)}$ (Hermitian symmetric)\n    \\item $(x,x) \\geq 0$\n    \\item $(x,x)$ = 0 iff $x = 0$\n\\end{enumerate}\n\\end{definition}\n\nFrom an inner product, we can define a norm by \n$$\n\\norm{x} = \\sqrt{(x,x)}\n$$\n\n\\begin{definition}\nA Hilbert space is a complete inner product space.\n\\end{definition}\n\n\\begin{example}\nThe standard inner product on $\\mathbb{C}^n$ makes $\\mathbb{C}^n$ a Hilbert Space. $L^2(X)$ is an inner product space with inner product\n$$\n(f,g) = \\int_a ^b \\overline{f}g d\\mu\n$$\nThe other $L^p$ spaces are not Hilbert spaces.\n\nLet $C^k([a,b])$ the space of functions with $k$ continuous derivatives. We have an inner product:\n$$\n(f,g) = \\sum_{j=0} ^k \\int_a ^b \\overline{f^{(j)}(x)} g^{(j)}(x) dx,\n$$\n$f^{(j)}$ is the $j$-th derivative. Then the completion is called the Sobolev space $H^k((a,b)) = W^{k,2}((a,b))$.\n\\end{example}\n\nFrom a norm, it is a condition if it comes from an inner product or not:\n\n\\begin{theorem}\nA norm comes from an inner product iff \n$$\n\\norm{x + y}^2 + \\norm{x - y}^2 = 2\\norm{x}^2 + 2\\norm{y}^2\n$$\n\\end{theorem}\n\nLastly, the inner product $X \\times X \\rightarrow \\mathbb{C}$ is a continuous map.\n\nIn Hilbert spaces, we have the notion of orthogonality. The orthogonal complement of a subset if a closed linear subspace.\n\n\\begin{theorem}[Projection]\nFor a closed subspace $M \\subset X$, for any point $x$, there is a unique closest point. It is the point $y$ where $(x - y)$ is orthogonal to $M$. 
\n\n\\subsubsection{Orthonormal Bases}\nOrthonormal bases are nice. Two Hilbert spaces whose orthonormal bases have the same cardinality are isomorphic. A separable Hilbert space has a finite or countably infinite orthonormal basis. For infinite-dimensional Hilbert spaces, the notion of orthonormal basis is about infinite sums, not finite sums. There is a notion of absolute convergence and of sums over a (possibly uncountable) set of elements.\nWe have Bessel's inequality:\n\n\\begin{theorem}[Bessel's inequality]\nGiven an orthonormal set $u_\\alpha$ and $x \\in H$, then \\begin{enumerate}\n    \\item $\\sum_{\\alpha} |(u_\\alpha, x)|^2 \\leq \\norm{x}^2$\n    \\item $x_U = \\sum_\\alpha (u_\\alpha, x)u_\\alpha$ is a convergent sum\n    \\item $x - x_U \\in U^\\perp$\n\\end{enumerate}\n\\end{theorem}\n\nGiven a subset $U$ of $H$, there is a notion of the closed linear span, consisting of the infinite sums that converge unconditionally. It is the smallest closed linear subspace that contains $U$.\n\nNow we can define orthonormal bases (this is really a theorem):\n\n\\begin{definition}\nA set of orthonormal elements $u_\\alpha$ is complete (an orthonormal basis) if one of the following equivalent conditions is satisfied:\n\\begin{enumerate}\n    \\item $(u_\\alpha, x) = 0$ for all $\\alpha$ implies $x = 0$\n    \\item $x = \\sum_\\alpha (u_\\alpha, x) u_\\alpha$\n    \\item $\\norm{x}^2 = \\sum_\\alpha |(u_\\alpha, x)|^2$\n    \\item the closed linear span $[U] = H$\n    \\item $U$ is a maximal orthonormal set.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark}\nThe first condition is the easiest to verify, and the second is used most often.\n\\end{remark}\n\nWe have Parseval's identity:\n\\begin{theorem}[Parseval's Identity]\nSuppose $u_\\alpha$ is an orthonormal basis of $H$; if $x = \\sum_\\alpha a_\\alpha u_\\alpha$, $y = \\sum_\\alpha b_\\alpha u_\\alpha$, then\n$$\n(x,y) = \\sum_\\alpha \\overline{a_\\alpha} b_{\\alpha}\n$$\n\\end{theorem}\n\nA corollary of this is that any Hilbert space $H$ with orthonormal basis $U$ is isomorphic to $l^2(U)$. By Zorn's lemma, any Hilbert space has an orthonormal basis.\n\nThe Gram-Schmidt orthonormalization procedure constructs an orthonormal basis from any countable linearly independent set whose linear span is dense in $H$.\n\n\nExamples of orthonormal bases are the standard basis of $\\mathbb{C}^n$, the delta basis of $l^2(\\mathbb{Z})$, and\n$$\ne_n(x) = \\frac{1}{\\sqrt{2\\pi}} e^{inx},\n$$\nwhich is an orthonormal basis of $L^2(T)$.\n\n\\subsubsection{Bounded Operators on Hilbert Spaces}\nBounded operators behave better on Hilbert spaces, because we have the adjoint of a bounded linear map.\n\nFirst we have the theory of orthogonal projections:\n\n\\begin{definition}\nAn orthogonal projection on a Hilbert space $H$ is $P : H \\rightarrow H$ such that $P^2= P$ and $<Px, y> = <x, Py>$.\n\\end{definition}\nA nonzero orthogonal projection is bounded with norm $1$. Projections correspond to closed subspaces (the image); the kernel is the orthogonal complement.\n\n\\begin{example}\nTake $H = L^2(T)$ and $u = \\frac{1}{\\sqrt{2 \\pi}}$, the constant function with norm one. Then the projection $P_u$ takes $f$ to (the constant function equal to) its mean $<f> = \\frac{1}{2 \\pi} \\int_0 ^{2\\pi} f(x) dx$. The orthogonal decomposition is\n$$\nf(x) = <f> + f'(x)\n$$\nwith $f'(x)$ the fluctuation around the mean (it is not the derivative).\n\\end{example}\n
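\nAs a classical application of Parseval's identity (a sketch; we take the standard sine series of $f(x) = x$ on $(-\\pi, \\pi)$ as given):\n$$\nx = \\sum_{n=1}^{\\infty} \\frac{2(-1)^{n+1}}{n} \\sin(nx)\n\\quad \\Longrightarrow \\quad\n\\frac{1}{\\pi} \\int_{-\\pi}^{\\pi} x^2 \\, dx = \\sum_{n=1}^{\\infty} \\frac{4}{n^2},\n$$\nand since the left-hand side equals $\\frac{2\\pi^2}{3}$, this gives $\\sum_{n \\geq 1} \\frac{1}{n^2} = \\frac{\\pi^2}{6}$.\n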
From this we can easily deduce that all bounded linear functionals are of the form $\\phi_y(x) \\coloneqq <y, x>$ for some $y \\in H$:\n\n\\begin{theorem}[Riesz representation]\nFor any bounded linear functional $\\phi$ on $H$, there exists a unique vector $y$ such that\n$$\n\\phi(x) = <y,x>\n$$\n\\end{theorem}\nThe map $y \\mapsto <y, ->: H \\rightarrow H^*$ is an isometric isomorphism. For complex Hilbert spaces, it is antilinear. So Hilbert spaces are self-dual: $H$ and $H^*$ are isomorphic as Banach spaces and anti-isomorphic as Hilbert spaces. This is a special aspect of Hilbert spaces.\n\n\\subsubsection{Adjoint of an operator}\nFrom the Riesz representation theorem we also get the adjoint of a bounded operator. For $A \\in B(H)$, the adjoint satisfies the property that\n$$\n<x, Ay> = <A^* x, y>\n$$\nThis exists and is unique by the Riesz representation theorem. The adjoint of a matrix (a linear map $\\mathbb{R}^n$ to $\\mathbb{R}^m$ with standard inner products) is its transpose matrix. For complex $\\mathbb{C}^n$, it is the Hermitian conjugate.\n\nThe adjoint is important for studying the solvability of $Ax = y$: any $z$ with $A^* z = 0$ is perpendicular to the image of $A$.\n\n\\begin{theorem}\nFor $A: H \\rightarrow H'$, we have\n$$\n\\overline{ran A} = (ker A^*)^\\perp, \\qquad ker A = (ran A^*)^\\perp\n$$\n\\end{theorem}\nGiven $A: H \\rightarrow H'$ bounded, we then have the direct sum\n$$\nH' = \\overline{ran A} \\oplus ker A^*\n$$\nThus if $A$ has closed image, it is even simpler. Recall that if we have $c \\norm{x} \\leq \\norm{Ax}$ for some $c > 0$, then the map is injective and has closed range.\n\nA bounded operator with closed range can have no kernel and finite cokernel ($ker A^*$), or vice versa. Think about the right and left shift operators on $l^2$.\n\n\\begin{definition}\nA bounded linear operator $A$ on a Hilbert space is Fredholm if\n\\begin{enumerate}\n    \\item $ran A$ is closed\n    \\item $ker A$ and $ker A^*$ are finite-dimensional\n\\end{enumerate}\n\\end{definition}\nIt has an invariant, called the index:\n$$\nind A \\coloneqq dim\\ ker\\ A - dim\\ ker\\ A^*\n$$\n\nIf $A$ is Fredholm and $K$ is compact, then $A + K$ is Fredholm and $ind(A + K) = ind A$. So the index is unchanged by compact perturbations. It is actually a topological invariant (we have the space of Fredholm operators, which is open in the Banach space of bounded operators).\n\n\\subsubsection{Self-adjoint and unitary operators}\nThe two most important classes are the self-adjoint and unitary operators. We fix $H$ and $A: H \\rightarrow H$ bounded.\n\n\\begin{definition}\n$A$ is self-adjoint if $A^* = A$.\n\\end{definition}\n\n\\begin{example}\nFor $\\mathbb{R}^n$, $A$ is self-adjoint iff it is symmetric. On $\\mathbb{C}^n$ it is Hermitian.\n\\end{example}\n\nGiven $A$, we have the sesquilinear form $a: H \\times H \\rightarrow \\mathbb{C}$ with $a(x,y) \\coloneqq <x, Ay>$. If $A$ is self-adjoint, then it is symmetric:\n$$\na(x,y) = \\overline{a(y,x)}\n$$\nso $q(x) = a(x,x) = <x, Ax>$ is real-valued. $A$ is nonnegative if it is self-adjoint and $q \\geq 0$. It is positive definite if it is self-adjoint and $q(x) > 0$ for non-zero $x$; then $a$ defines an inner product on $H$. If $<x, Ax> \\geq c \\norm{x}^2 = c <x,x>$ for some $c > 0$, $A$ is called bounded from below.
When $A$ is bounded from below, the norm associated to $a(-,-)$ is equivalent to that of $<-,->$.\n\nThe norm of a self-adjoint operator is $\\norm{A} = \\sup_{\\norm{x} = 1}|<x, Ax>|$.\nGiven $A$ bounded, $A^* A$ is self-adjoint, and $\\norm{A^* A} = \\norm{A}^2$.\nThis also shows that $\\norm{A} = \\norm{A^*}$.\n\nWe also have unitary operators, which give the notion of isomorphism of Hilbert spaces:\n\n\\begin{definition}\nA linear map $U: H \\rightarrow H$ is unitary if it is invertible and\n$$\n<Ux, Uy> = <x,y>\n$$\nA map $U$ is unitary iff $U^*U = Id$ and $U U^* = Id$.\n\\end{definition}\nIn finite dimensions, a real matrix $Q$ is orthogonal if $Q^T = Q^{-1}$; a complex matrix $U$ is unitary if $U^* = U^{-1}$.\n\nGiven $A$ a bounded self-adjoint operator, $e^{iA}$ is unitary. A bounded operator $S$ is skew-adjoint if $S^* = -S$; equivalently $S = iA$ for $A$ self-adjoint. The Lie algebra of the Lie group of unitary operators is the space of bounded, skew-adjoint operators with the commutator Lie bracket.\n\nFor example, the translation operator is unitary on $L^2$. A good class of operators is the normal ones:\n\n\\begin{definition}\n$T: H \\rightarrow H$ is normal if $TT^* = T^* T$, that is, $T$ commutes with its adjoint. Normal operators have a nice spectral theory.\n\\end{definition}\n\n\\subsubsection{Weak convergence in Hilbert Space}\nA sequence $(x_n)$ in $H$ converges weakly to $x \\in H$ if\n$$\n\\lim_{n \\rightarrow \\infty}<x_n, y> = <x,y>\n$$\nStrong convergence implies weak convergence, but not the other way:\n\n\\begin{example}\nLet $H = l^2(\\mathbb{N})$ and let $e_n$ be the $n$-th standard basis vector. Then $<e_n, y> = y_n \\rightarrow 0$, as the coefficients of $y$ converge to $0$ (since $y$ is $l^2$). Thus $e_n$ converges weakly to $0$ but not strongly.\n\\end{example}\n\nWe have a criterion for weakly convergent sequences:\n\n\\begin{theorem}\nGiven a sequence $(x_n)$ in $H$ and $D$ a dense subset, $(x_n)$ converges weakly to $x$ iff\n\\begin{enumerate}\n    \\item $\\norm{x_n} < M$ for some $M$ independent of $n$\n    \\item $<x_n, y> \\rightarrow <x,y>$ for all $y \\in D$.\n\\end{enumerate}\n\\end{theorem}\n\nThus given an orthonormal basis $e_\\alpha$, $x_n$ converges weakly to $x$ iff it is bounded and its coordinates converge for every $\\alpha$.\n\nWe like weak convergence because it is easier for sets to be compact in the weak topology than in the strong one. In fact, a set is weakly precompact iff it is bounded. This is a version of the Heine-Borel theorem for infinite-dimensional spaces.\n\n\\begin{theorem}[Banach-Alaoglu]\nThe closed unit ball of a Hilbert space is weakly compact, that is, compact in the weak topology.\n\\end{theorem}\n\n\\subsubsection{Spectrum of Bounded Linear Operators}\nIn many cases, we would like to diagonalize a linear map, that is, find its eigenvalues and eigenvectors. This is called spectral theory. There are continuous and point spectra. Compact operators are nice.\n\nIn the finite-dimensional case we have that\n\\begin{theorem}\nAn $n \\times n$ matrix $A$ is normal iff there is an orthonormal basis consisting of eigenvectors of $A$.\n\\end{theorem}\n\nIn the infinite-dimensional case, a bounded linear operator might not have eigenvalues at all. We define the spectrum of $A$ as follows:\n\\begin{definition}\nThe resolvent set of $A$ is the set of complex numbers $\\lambda$ such that $A - \\lambda I: H \\rightarrow H$ is an isomorphism. The spectrum of $A$, $\\sigma(A)$, is the complement.\n\\end{definition}\n
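\nA standard example to keep in mind (a sketch): let $M$ be multiplication by $x$ on $L^2([0,1])$, $(Mf)(x) = x f(x)$. For any $\\lambda$, $(M - \\lambda I)f = (x - \\lambda) f$ vanishes only for $f = 0$ a.e., so $M$ has no eigenvalues. For $\\lambda \\notin [0,1]$, multiplication by $(x - \\lambda)^{-1}$ is a bounded inverse, so such $\\lambda$ lie in the resolvent set; for $\\lambda \\in [0,1]$, the range of $M - \\lambda I$ is dense but proper. Hence $\\sigma(M) = [0,1]$, consisting purely of continuous spectrum.\n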
\nThe points in the spectrum can be divided further:\n\\begin{definition}\n\\begin{enumerate}\n    \\item The point spectrum of $A$ consists of those $\\lambda$ with $A - \\lambda I$ not one-to-one; such $\\lambda$ are called eigenvalues.\n    \\item The continuous spectrum of $A$ consists of those $\\lambda$ such that $A - \\lambda I$ is one-to-one but not onto, and $ran(A - \\lambda I)$ is dense.\n    \\item The residual spectrum consists of those $\\lambda$ for which $A - \\lambda I$ is one-to-one, not onto, and the range is not dense.\n\\end{enumerate}\n\\end{definition}\nAt resolvent points, we have the inverse $R_\\lambda$. This is defined on the subset of $\\mathbb{C}$ of resolvent points of $A$. An operator-valued function $F: \\Omega \\rightarrow B(H)$, with $\\Omega$ an open subset of $\\mathbb{C}$, is called analytic at $z_0 \\in \\Omega$ if it has a convergent power series with respect to the operator norm on $B(H)$. It is called analytic or holomorphic if it is analytic at every point in $\\Omega$.\n\nIf $A$ is bounded, then the resolvent set $\\rho(A)$ is an open set, and the resolvent is an operator-valued analytic function of $\\lambda$. This is because we can write a Neumann series. Thus the spectrum is closed. The spectral radius is the radius of the smallest disk centered at zero containing $\\sigma(A)$.\n\nIf $A$ is bounded, then $r(A) = \\lim_{n \\rightarrow \\infty} \\norm{A^n}^{1/n}$; if $A$ is self-adjoint, then $r(A) = \\norm{A}$.\n\nThe analyticity of the resolvent also implies that the spectrum of a bounded operator is not empty.\n\n\\subsubsection{Spectral theorem for self-adjoint operators}\nFor (bounded) self-adjoint operators, the eigenvalues are real and eigenvectors for distinct eigenvalues are orthogonal.\nThus we are looking at invariant subspaces. The spectrum of $A$ is contained in the interval $[-\\norm{A}, \\norm{A}]$. In fact, the residual spectrum of a bounded, self-adjoint operator is empty.\n\n\nThe bounded, compact, self-adjoint operators behave much more like finite-dimensional ones.\n\nA nonzero eigenvalue of a compact, self-adjoint operator has finite multiplicity. A countably infinite set of nonzero eigenvalues has zero as an accumulation point, and no others.\n\nThe spectral theorem says that any compact self-adjoint operator looks like a countable sum of projections onto finite-dimensional eigenspaces, multiplied by the $\\lambda_k$:\n\n\\begin{theorem}\nLet $A : H \\rightarrow H$ be bounded, compact, self-adjoint. Then there exists an orthonormal basis of $H$ consisting of eigenvectors of $A$. The nonzero eigenvalues form a countable set $\\{\\lambda_k\\}$, and $A = \\sum_k \\lambda_k P_k$, where $P_k$ is the orthogonal projection onto the finite-dimensional eigenspace with eigenvalue $\\lambda_k$. The sum converges in the operator norm.\n\\end{theorem}\n\n\\subsubsection{Compact operators}\nTo check that an operator is compact, it is best to use Arzela-Ascoli or the following:\n\n\\begin{theorem}\nA bounded linear operator on a Hilbert space is compact iff it maps weakly convergent sequences to strongly convergent sequences.\n\\end{theorem}\n\n\\subsection{Linear Differential Operators and Green's functions}\n\nLinear differential operators are important; however, they are often unbounded. There are two approaches to studying them. One is to use a weak topology in which differential operators are continuous; this gives rise to distribution theory. The other is to consider operators that are defined on a dense subspace of a Hilbert or Banach space.\n\nThe inverse of a linear differential operator is an integral operator whose kernel is the Green's function. We use the bounded inverse to study the unbounded differential operator.
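\n\nAs a preview, a standard computation (a sketch; the general construction below specializes to this): for $Au = -u''$ on $[0,1]$ with Dirichlet conditions $u(0) = u(1) = 0$, the Green's function is\n$$\ng(x,y) = \\begin{cases} x(1-y), & 0 \\leq x \\leq y \\leq 1 \\\\\\\\ y(1-x), & 0 \\leq y \\leq x \\leq 1 \\end{cases}\n$$\nand $u(x) = \\int_0^1 g(x,y) f(y)\\, dy$ solves $-u'' = f$ with the stated boundary conditions.\n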
\n\n\\subsubsection{Unbounded operators}\nThey are normally defined on a dense subspace $A: D(A) \\subset H \\rightarrow H$. An extension of the operator is an extension of its domain. The domain encodes smoothness and boundary conditions.\n\nThe adjoint of an unbounded operator $A$ is the operator $A^*$, defined on the largest subspace such that\n$$\n<Ax,y> = <x, A^* y>\n$$\nholds for all $x \\in D(A)$. $D(A^*)$ is the space of all $y$ such that $\\phi_y \\coloneqq <y, A(-)>$ is bounded. It is possible that $D(A^*)$ is not dense, so we can't define $A^{**}$.\n\n\\begin{remark}\nNote that this $A^*$ depends on the domain $D(A)$.\n\\end{remark}\n\nThe adjoint of a differential operator is another differential operator, by integration by parts. The domain $D(A)$ defines boundary conditions for $A$ and $D(A^*)$ defines adjoint boundary conditions; they are there to make sure the boundary terms from integration by parts vanish. An important class is the self-adjoint ones, $A = A^*$; this also means the domains are equal. This is a self-adjointness of the boundary conditions.\n\n\\begin{definition}\nAn unbounded operator is self-adjoint if $A = A^*$. An unbounded operator $A$ is symmetric if $A^*$ is an extension of $A$.\n\\end{definition}\n\nDifferential operators also have the notion of closedness:\n\n\\begin{definition}\nAn operator $A$ is closed if for every sequence $(x_n)$ in $D(A)$ such that $x_n \\rightarrow x$ and $A x_n \\rightarrow y$, we have $x \\in D(A)$ and $Ax = y$.\n\\end{definition}\n\nThis means the graph of $A$, $\\Gamma(A) \\subset H \\times H$, is a closed subset.\n\nAn operator is closable if for every $(x_n)$ in $D(A)$ such that $x_n \\rightarrow 0$ and $A x_n \\rightarrow y$, we have $y = 0$. Then we can define the closure of a closable operator; its graph is the closure of the graph of $A$. Every symmetric operator is closable. A symmetric operator $A$ is essentially self-adjoint if its closure is self-adjoint.\n\nIf $A$ is one-to-one and onto, then we have $A^{-1}: H \\rightarrow H$; its range is $D(A)$. If $A$ is closed, then by the closed graph theorem, $A^{-1}$ is bounded. Then $(A^*)^{-1} = (A^{-1})^*$. We also have the resolvent set, spectrum, and resolvent operator. By the closed graph theorem, if $A$ is closed, $R_\\lambda$ is bounded whenever $A - \\lambda I$ is one-to-one and onto. An unbounded operator may have an empty spectrum.\n\n\\subsubsection{Adjoint of a differential operator}\nA linear ordinary differential operator of order $n$ is a linear map $A$ acting on $n$-times continuously differentiable functions $u$ by\n$$\nAu = \\sum_{j=0} ^n a_j u^{(j)}\n$$\n\nWe want to study boundary value problems (BVP)\n$$\nAu = f, \\qquad Bu = 0\n$$\nwhere $Bu = 0$ is a set of linear boundary conditions.\n\nFor example, consider functions on $[0,1]$, with\n$$\nAu = a u'' + b u' + cu,\n$$\nfor $a, b, c$ sufficiently smooth and $a(x) > 0$. For a second-order operator, we expect two boundary conditions to get uniqueness (though sometimes that is not enough); problems can also be overdetermined or underdetermined.\nExamples are\n\\begin{enumerate}\n    \\item $u(0) = 0, u(1) = 0$ Dirichlet\n    \\item $u'(0) =0, u'(1) = 0$ Neumann\n    \\item $u(0) = u(1), u'(0) = u'(1)$ periodic\n    \\item $u(0) = u'(0) = 0$ initial\n    \\item $u(1) = u'(1) = 0$ final\n\\end{enumerate}\n\nNonhomogeneous boundary conditions form an affine space over the homogeneous ones.\n\nWe first start by formulating the adjoint boundary value problem. We have $A^* v = (\\overline{a}v)'' - (\\overline{b}v)' + \\overline{c}v$; then $<v, Au> - <A^*v, u>$ is a boundary term. $A^*$ is called the formal adjoint.
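\n\nThe basic computation behind this is a single integration by parts (shown here for the first-order term only):\n$$\n\\int_0^1 \\overline{v}\\, u' \\, dx = \\left[\\overline{v}\\, u\\right]_0^1 - \\int_0^1 \\overline{v}'\\, u \\, dx,\n$$\nso $<v, Du> = <-Dv, u> + \\text{boundary terms}$; iterating this for higher-order terms produces the formal adjoint above, and the boundary conditions are exactly what kill the bracketed terms.\n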
\nThe formal adjoint of $D = \\frac{d}{dx}$ is $-D$, thus $(iD)^* = iD$ and $(D^2)^* = D^2$.\n\nThe dual boundary condition for $B$ is the requirement that the boundary term vanish. So $B^*v = 0$ iff $<v, Au> = <A^* v, u>$ for all $u \\in C^2([0,1])$ with $Bu = 0$.\n\n\\begin{example}\nThe dual of the Dirichlet condition for $A = D^2$ is Dirichlet, so the Dirichlet boundary value problem for $D^2$ is self-adjoint. Similarly, the Neumann, mixed, and periodic problems are also self-adjoint. However, the adjoint of the initial condition is the final condition. The adjoint of no boundary condition is $v(0) = v'(0) = v(1) = v'(1) = 0$, so the adjoint of an underdetermined BVP is overdetermined.\n\\end{example}\n\n\\subsubsection{Green's functions}\nConsider the Dirichlet boundary value problem for $A$:\n$$\nAu = f, \\qquad u(0) = u(1) = 0\n$$\nWe are looking for a solution of the form\n$$\nu(x) = \\int_0 ^1 g(x,y) f(y) dy\n$$\nwhere $g(x,y)$ is called the Green's function.\n\nWe regard $A : D(A) \\rightarrow C([0,1])$ as an operator, with\n$$\nGf(x) = \\int_0 ^1 g(x,y)f(y) dy\n$$\nas the inverse.\n\n\\begin{remark}\nSo $G$ is one-to-one but not onto; its image is a dense subspace. These are symptoms of inverses of unbounded operators.\n\\end{remark}\n\nWe will use the delta function $\\delta$ as a heuristic (it is really a distribution). It is the derivative of the Heaviside step function.\n\nThe Green's function $g(x,y)$ is the solution to the BVP:\n$$\nAg(x,y) = \\delta(x-y), \\qquad g(0,y) = g(1,y) = 0\n$$\nwhere $A$ is the differential operator acting with respect to $x$. As a function of $x$, we want $g(x,y)$ to satisfy the homogeneous ODE when $x \\neq y$, and we want its first derivative to jump at $x = y$.\n\nWe can construct the Green's function: think about the homogeneous ODE\n$$\nau'' + bu' + cu = 0,\n$$\nwhich has a two-dimensional space of solutions. We can find a basis $u_1, u_2$ determined by the initial conditions $u(0) = 1, u'(0) = 0$ and $u(0) = 0, u'(0) = 1$.\n\nNote that in nice situations, we have $g^*(x,y)$, the Green's function for the adjoint operator, given by $g^*(x,y) = \\overline{g(y,x)}$.\n\n\\subsubsection{Weak derivatives}\nClassical differential operators that act on continuously differentiable functions lack desirable properties such as closedness or self-adjointness. To get them, we extend the domains of classical differentiation to include functions whose weak derivatives belong to $L^2$. Weak derivatives are defined using integration against test functions. They can be approximated by smooth functions. The space of functions with $k$ square-integrable derivatives is the Sobolev space $H^k$. These spaces give domains for simple self-adjoint ordinary differential operators.\n\nA function $\\phi: \\mathbb{R} \\rightarrow \\mathbb{C}$ is a test function if it has compact support and continuous derivatives of all orders; call the space $C^\\infty _c(\\mathbb{R})$. Test functions are dense in $L^2$. We can approximate an $L^2$ function by its convolution with a smooth approximate identity; this is called mollification.\n\nLet $\\phi$ be a test function with support $[-1, 1]$ and\n$$\n\\int \\phi(x) dx = 1;\n$$\nan example (up to normalization) is $\\phi(x) = e^{-1/(1-x^2)}$ for $|x| < 1$ and $0$ otherwise.\nThen $\\phi_\\epsilon(x) = \\frac{1}{\\epsilon} \\phi(\\frac{x}{\\epsilon})$ is an approximate identity (for convolution, so a delta at $0$) as $\\epsilon \\rightarrow 0$.
Then the mollification $u_\\epsilon = \\phi_\\epsilon * u$, that is,\n$$\nu_\\epsilon(x) = \\int_\\mathbb{R} \\phi_\\epsilon(x-y) u(y) dy,\n$$\nis in $C^\\infty(\\mathbb{R})$, as\n$$\nu^{(k)} _\\epsilon (x) = \\int \\phi^{(k)} _\\epsilon (x-y) u(y) dy.\n$$\nIt converges to $u$ in $L^2$.\n\nWe can use integration by parts to define the weak derivative:\n\n\\begin{theorem}\nA function $u \\in L^2$ has a weak derivative $v \\in L^2$ if\n$$\n<v, \\phi> = -<u, \\phi'>\n$$ for all $\\phi \\in C^\\infty _c (\\mathbb{R})$.\nThe Sobolev space $H^k(\\mathbb{R})$ consists of functions with $k$ square-integrable weak derivatives, equipped with the inner product\n$$\n<u,v> = \\int_\\mathbb{R} \\sum_{j=0}^{k} \\overline{u^{(j)}} v^{(j)} \\, dx\n$$\n\\end{theorem}\n\nThe differential operator, defined via weak derivatives, is now a closed operator.\n\nClosedness of $D$ implies that $H^k(\\mathbb{R})$ is complete, thus a Hilbert space.\n\nAnother way to define $L^2$ derivatives is as $L^2$ limits of smooth derivatives: we say $u' = v$ if there is a sequence of smooth functions $u_n$ with $u_n \\rightarrow u$ and $u' _n \\rightarrow v$. The definitions are equivalent, as any $H^k$ function can be approximated in the $H^k$ norm by test functions; that is, $C^\\infty _c (\\mathbb{R})$ is dense in $H^k(\\mathbb{R})$.\n\nWe have that\n$$\n\\int u v' dx = - \\int u' v dx\n$$\nfor $u, v \\in H^1(\\mathbb{R})$.\n\nNow we can have self-adjoint operators: $A = iD: H^1(\\mathbb{R}) \\rightarrow L^2(\\mathbb{R})$ is self-adjoint. Namely, the domain satisfies $D(A^*) = H^1(\\mathbb{R})$.\n\nWe have the Sobolev embedding theorem, which says that the elements of the Sobolev spaces are actually continuous (have a continuous representative), differentiable, etc., for large enough $k$.\n\n\\subsection{Distribution}\nA distribution is a continuous linear functional on test functions. Distributions are an extension of functions. The delta function is a distribution. Every distribution is differentiable, and differentiation is a continuous operation on spaces of distributions. The Fourier transform works well with tempered distributions. One problem is that there is no product of distributions extending the pointwise product of functions (they are homological, not cohomological).\n\n\\subsubsection{Schwartz space}\nIt is the space of test functions consisting of smooth, rapidly decreasing functions.\n\n\\begin{definition}\nA function $\\phi \\in C^\\infty(\\mathbb{R}^n)$ is in the Schwartz space $S$ if all of its derivatives decay faster than any polynomial as $|x| \\rightarrow \\infty$.\n\\end{definition}\n\nAn example is a polynomial $q(x)$ times $e^{-c |x - x_0|^2}$.\n\nWe want a notion of convergence and a topology on $S$. The appropriate topology comes from a countable family of seminorms. Given a family of seminorms, we have a topology on $X$ such that a sequence $(x_n)$ converges to $x$ iff $p_\\alpha(x - x_n) \\rightarrow 0$ for every seminorm $p_\\alpha$.\n\nFor a countable family of seminorms, the topology is metrizable. A metrizable, locally convex space that is complete is called a Frechet space.\n\nSchwartz space $S$ carries the countable family of seminorms $p_{\\alpha, \\beta}: \\phi \\mapsto \\sup_{x \\in \\mathbb{R}^n} |x^\\alpha \\partial^\\beta \\phi(x)|$, where $\\alpha, \\beta \\in \\mathbb{Z}^n _+$ are multi-indices.\n\n\\begin{remark}\nThe basic idea here is that taking a derivative goes from $C^k \\rightarrow C^{k-1}$. So to keep it at the same $k$ we need $k = \\infty$, where we don't have a good notion of norm.
So we need this family of seminorms.\n\\end{remark}\n\n\\begin{proposition}\nPartial differentiation $\\partial^\\alpha: S \\rightarrow S$ is a continuous linear operator on $S$.\n\\end{proposition}\n\n\\subsubsection{Tempered Distributions}\nThe topological dual of $S$, denoted $S^*$, is the space of continuous linear functionals on $S$. Its elements are called tempered distributions.\n\n$T$ is continuous iff\n$$\n|T (\\phi)| \\leq \\sum_{|\\alpha|, |\\beta| \\leq d} c_{\\alpha, \\beta}\\, p_{\\alpha, \\beta}(\\phi)\n$$\nfor some $d$ and constants $c_{\\alpha,\\beta}$.\n\n\\begin{example}\nThe fundamental example is the delta function:\n$$\n\\delta(\\phi) = \\phi(0)\n$$\n\\end{example}\n\nGiven $f$ Lebesgue measurable with\n$$\n|f(x)| \\leq g(x) (1 + |x|^2)^{d/2}\n$$\na.e. for some $d \\geq 0$ and some nonnegative, integrable function $g$, then\n$$\nT_f(\\phi) = \\int f(x) \\phi(x) dx\n$$\ndefines a tempered distribution. We can recover $f$ up to a.e. equality from $T_f$ by convolving with an approximate identity, like a Gaussian.\n\nDistributions given by integration against a function $f$ are called regular distributions. Those not of this form are called singular. Tempered distributions are a generalization of locally integrable functions with polynomial growth.\n\nThe tempered distributions form a subspace of $D^*$, the space of distributions, which consists of the continuous linear functionals on $D$, the space of smooth, compactly supported test functions. Distributions in $D^*$ can grow faster than polynomially at infinity, and their Fourier transforms need not belong to $D^*$; for $S^*$, the Fourier transform stays in $S^*$.\n\n\\subsubsection{Operations on distributions}\nGiven $f$ such that it and all its derivatives have polynomial growth, we have $f\\phi \\in S$ for $\\phi \\in S$. Thus multiplication by $f$ also acts on $S^*$ continuously.\n\n$f \\delta = f(0) \\delta$.\n\nWe can also take derivatives:\n\n\\begin{definition}\n$\\partial^\\alpha T$ is defined by\n$$\n<\\partial^\\alpha T, \\phi> = (-1)^{|\\alpha|}<T, \\partial^\\alpha \\phi>\n$$\n\\end{definition}\nTaking the derivative is continuous, and every tempered distribution has a derivative. $S^*$ is the minimal extension of the space of functions of polynomial growth that is closed under differentiation:\n\n\\begin{theorem}\nFor every $T \\in S^*$, there exist a continuous $f$ of polynomial growth and $\\alpha$ such that $T = \\partial^\\alpha f$.\n\\end{theorem}\n\nIf $T_f$ is a regular distribution whose distributional derivative is also regular, say $T_g$, then $g$ is the weak derivative of $f$, $g = \\partial^\\alpha f$. The weak $L^2$ derivative is a version of this.\n\nLet $f(x)$ be $0$ for $x \\leq 0$ and $x$ for $x \\geq 0$. It is not classically differentiable at $0$, but it has a weak derivative, which is the step function $H$. The distributional derivative of $H$ is the delta function: indeed,\n$$\n<H', \\phi> = -<H, \\phi'> = -\\int_0^\\infty \\phi'(x) dx = \\phi(0) = <\\delta, \\phi>.\n$$\nSince $\\delta$ is not a function, $H$ is not weakly differentiable.\n\nGiven a continuous linear transform $K: S \\rightarrow S$, of course we get the transpose $K'$ on $S^*$. For example, the translation operator gives a translation on distributions.\n\nConvolution of test functions is defined as usual. We can also convolve a test function with a distribution:\n$$\n(\\phi * T)(x) = <T, R\\tau_x \\phi>\n$$\nwhere $R$ is reflection and $\\tau_x$ is the shift by $x$. The result has at most polynomial growth.\n\nConvolving with $\\delta$ gives the identity: $(\\phi * \\delta) = \\phi$.\n\nThere is a notion of weak convergence. It is the convergence corresponding to the weakest topology such that all functionals $T \\mapsto <T, \\phi>$ are continuous.
It is called the weak $*$ topology, generated by the semi-norms $p_\\phi$:\n$$\np_\\phi(T) = |<T, \\phi>|\n$$\n\n\\begin{theorem}\nSchwartz space is dense in the space of tempered distributions.\n\\end{theorem}\n$S$ is also dense in $C_0$ with respect to uniform convergence, and in $L^p$ for all $1 \\leq p < \\infty$.\n\n\n%%%%%%%%%%\n\n\n\\subsection{Fourier Series}\nThe Fourier series takes a function on $T = S^1$ (a periodic function) to a series. It turns out the behavior of the series says a lot about the function.\n\n\\subsubsection{Basics}\nLet $C(T)$ be the space of continuous functions, and $L^2(T)$ the completion with respect to the $L^2$-norm.\nIt can be concretely realized as equivalence classes of Lebesgue measurable, square-integrable functions from $T$ to $\\mathbb{C}$. It is a Hilbert space.\n\nThe Fourier basis elements are the functions\n$$\ne_n(x) = \\frac{1}{\\sqrt{2 \\pi}} e^{inx}\n$$\n\n\\begin{theorem}\nThey form an orthonormal basis of $L^2(T)$, thus $L^2(T) \\cong l^2(\\mathbb{Z})$ by sending $f$ to its Fourier coefficients.\n\\end{theorem}\n\nThis means that the partial Fourier sums converge to $f$ in the $L^2$ norm.\n\nThe Fourier coefficient $\\hat{f}_n$ is\n$$\n\\hat{f}_n = \\frac{1}{\\sqrt{2 \\pi}} \\int_T f(x) e^{-inx} dx\n$$\nWe have Parseval's identity:\n$$\n\\int_T \\overline{f(x)} g(x) dx = \\sum_{n = -\\infty} ^{\\infty} \\overline{\\hat{f}_n} \\hat{g}_n\n$$\nIn particular, we can find the $L^2$ norm of $f$ from its Fourier coefficients:\n$$\n\\int_T |f(x)|^2 dx = \\sum_{n = -\\infty} ^{\\infty} |\\hat{f}_n|^2\n$$\n\nWe have the convolution (which is really just the group algebra multiplication on $L^2(T)$ induced by the group structure on $T$):\n$$\nf * g(x) \\coloneqq \\int_T f(x-y)\\ g(y)\\  dy\n$$\n\nConvolution of $L^2$ functions is $L^2$. And we can calculate its Fourier coefficients:\n\nThe Fourier transform maps the convolution of two functions to the pointwise product. This is a classic part of abelian duality:\n\\begin{theorem}\n$$\n\\widehat{f * g}_n = \\sqrt{2\\pi} \\hat{f}_n \\hat{g}_n\n$$\n\\end{theorem}\n\nWe can also ask about other modes of convergence. For $L^2$ functions the Fourier series converges pointwise a.e. (a deep theorem of Carleson), and for continuously differentiable functions the convergence is uniform.\n\nThe behavior of the partial sums near a point of discontinuity is interesting: they do not converge uniformly, but oscillate in an interval containing the point of discontinuity. The width of the interval shrinks to zero as $N \\rightarrow \\infty$, but the size of the oscillations does not; the overshoot is approximately $9\\%$ of the jump in $f$ at the jump discontinuity (the Gibbs phenomenon).\n\nWe have the real-valued orthogonal basis\n\n$$\n1, \\cos nx, \\sin nx\n$$\nThis is good for real-valued functions, as they have real Fourier coefficients. If $f$ is even, then it has a cosine expansion, and if $f$ is odd then it has a Fourier sine expansion.\n\nWe can also extend to general tori. Then the theory says that $L^2(T)$ is isomorphic to $l^2(\\hat{\\Gamma})$, where $\\hat{\\Gamma}$ is the dual of the lattice $\\Gamma$ associated to $T$.\n\n\\subsubsection{Smoothness and decay of Fourier coefficients}\n\nThe smoother a function is (the more times it is differentiable), the faster its Fourier coefficients decay. That is, a smooth function contains only a small amount of high-frequency components.\n
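\nA concrete instance (a standard computation, stated without proof): for the sawtooth $f(x) = x$ on $(-\\pi, \\pi)$, extended periodically,\n$$\nf(x) \\sim \\sum_{n=1}^{\\infty} \\frac{2(-1)^{n+1}}{n} \\sin(nx),\n$$\nso the coefficients decay only like $1/n$, consistent with the jump discontinuity at $\\pi$; each additional degree of smoothness buys one more power of $n$ in the decay.\n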
\nIf $f \\in C^1(T)$ is continuously differentiable, then we can calculate the coefficients of $f'$:\n$$\n\\hat{f'}_n = in\\hat{f}_n\n$$\nGenerally $\\hat{f^{(k)}}_n= (in)^k \\hat{f}_n$.\n\nWe can use this to define a notion of derivative for functions whose derivative is square-integrable but not continuous. This is called the weak derivative. The space of $L^2$ functions whose weak derivative is $L^2$ is called $H^1$, a Sobolev space:\n\n\\begin{definition}\nThe Sobolev space $H^1(T)$ consists of the functions $f \\in L^2(T)$ such that\n$$\n\\sum_{n = -\\infty} ^{\\infty} n^2 |\\hat{f}_n|^2 < \\infty\n$$\nThe weak $L^2$ derivative $f' \\in L^2(T)$ for $f \\in H^1(T)$ is the $L^2$-convergent Fourier series\n$$\nf'(x) = \\frac{1}{\\sqrt{2\\pi}} \\sum_{n = -\\infty} ^{\\infty} in \\hat{f}_n e^{inx}\n$$\n\\end{definition}\n\nIt is a Hilbert space with respect to the inner product\n$$\n(f, g)_{H^1} = \\int_T \\overline{f(x)}g(x) + \\overline{f'(x)}g'(x) dx\n$$\nBy Parseval's theorem, the inner product is\n$$\n(f,g)_{H^1} = \\sum_{n = -\\infty} ^{\\infty} (1 + n^2)\\overline{\\hat{f}_n} \\hat{g}_n\n$$\n\nWe can also define the weak derivative and $H^1(T)$ by integrating against a smooth test function:\n\nLet $f \\in L^2(T)$ and suppose the linear functional $F: C^1(T) \\rightarrow \\mathbb{C}$,\n$$\nF(\\phi) = - \\int_T f\\phi' dx,\n$$\nis bounded. Then it uniquely extends to a bounded linear functional on $L^2(T)$, and by the Riesz representation theorem, we have a unique function $f' \\in L^2(T)$ such that $F(\\phi) = (\\overline{f'}, \\phi)$ for all $\\phi \\in C^1(T)$. This can be taken as our definition of the weak $L^2$ derivative.\n\nFor any $k \\geq 0$, we define the Sobolev space\n$$\nH^k(T) = \\{f \\in L^2(T) | f(x) = \\sum_{n = -\\infty} ^{\\infty} c_n e^{inx}, \\sum_{n = -\\infty} ^{\\infty} |n|^{2k}|c_n|^2 < \\infty \\}\n$$\n\nNote this even makes sense when $k$ is not a natural number.\n\nHere's a version of the Sobolev embedding theorem, which implies that if a function on $T$ has a square-integrable weak derivative, then it is continuous. Of course, the way to show continuity is to show that the partial Fourier sums (which are continuous) converge uniformly.\n\n\\begin{theorem}[Sobolev embedding]\nIf $f \\in H^k(T)$ for $k > 1/2$, then $f \\in C(T)$.\n\\end{theorem}\n\nGenerally speaking, if $f \\in H^k(T)$, then the Fourier series for $f^{(j)}$ converges uniformly when $k > j + 1/2$, so $f \\in C^l(T)$, where $l$ is the greatest integer strictly less than $k - 1/2$. For functions of several variables, $f$ is continuous when $k > d/2$ and $j$-times continuously differentiable when $k > j + d/2$. There is a loss of slightly more than one-half a derivative per space dimension in passing from $L^2$ derivatives to continuous derivatives.\n\n\\subsubsection{Convergence}\nWhether the Fourier series of $f$ converges to $f$ is an important question.
Here are some results.\n\nA function is piecewise continuous if it is continuous away from finitely many points and has jump discontinuities at those finitely many points.\n\nA function is piecewise smooth if it and its derivative are piecewise continuous.\n\\begin{theorem}\nIf $f(x)$ is piecewise smooth, then the Fourier series converges pointwise at the points of continuity, and to $\\frac{f^-(x) + f^+(x)}{2}$ at the jump discontinuities.\n\\end{theorem}\n\n\\begin{theorem}\nIf $f(x)$ is continuous and piecewise smooth, then the Fourier series converges uniformly.\n\\end{theorem}\n\nOf course, for any square-integrable function $f$, the Fourier series converges to $f$ in $L^2$.\n\n\\subsection{Fourier Transform}\n\n\\subsubsection{FT on Schwartz space and Distributions}\nWe define the Fourier transform of a Schwartz function; it is a continuous, one-to-one mapping from $S$ onto $S$. Thus we get a continuous, one-to-one map on $S^*$.\n\n\\begin{definition}\nGiven $\\phi$, we have the Fourier transform $\\hat{\\phi}$:\n$$\n\\hat{\\phi}(k) = \\frac{1}{(2\\pi)^{n/2}}\\int \\phi(x)e^{-i k x} dx\n$$\nThe operator is denoted $\\mathcal{F}$.\n\\end{definition}\n\nIt takes derivatives to pointwise multiplication by the frequency variable (times $i$), and vice versa. It takes translation to modulation by the corresponding frequency, and vice versa. It takes convolution to pointwise multiplication.\n\nWe have the inverse Fourier transform $\\mathcal{F}^*$:\n$$\n\\check{\\phi}(k) = \\frac{1}{(2\\pi)^{n/2}}\\int \\phi(x)e^{i k x} dx\n$$\n\n\\subsubsection{Fourier transform of tempered distributions}\n\nWe can define the Fourier transform on tempered distributions as $\\hat{T} = \\mathcal{F} T$:\n$$\n<\\hat{T}, \\phi> = <T, \\hat{\\phi}>\n$$\nNote that for regular distributions this agrees with the Fourier transform definition.\nIt is an isomorphism.\n\nThe Fourier transform of the delta function is a constant.\n\n\\subsubsection{Fourier transform of $L^1$ functions}\n\nThe Fourier transform of $f$ is well-defined if $f \\in L^1$.\nWe have the Riemann-Lebesgue lemma:\n\\begin{theorem}[Riemann-Lebesgue]\nFor $f \\in L^1$, then $\\hat{f} \\in C_0(\\mathbb{R}^n)$, and\n$$\n(2\\pi)^{n/2} \\norm{\\hat{f}}_\\infty \\leq \\norm{f}_1\n$$\n\\end{theorem}\n\nThus the Fourier transform is a bounded linear map $L^1 \\rightarrow C_0$. With $L^1$ carrying convolution and $C_0$ carrying the pointwise product, the Fourier transform maps convolution to the pointwise product (up to a factor).\n\n\\subsubsection{Fourier Transform on $L^2$}\nThe Fourier transform also gives an isomorphism of $L^2(\\mathbb{R}^n)$. We define the Fourier transform of an $L^2$ function by extension from the dense subspace $S$.\n\nNote that the Fourier transform preserves the inner product; thus it extends to an isometry $\\mathcal{F}: L^2 \\rightarrow L^2$.\n\nThus we have the following:\n\n\\begin{theorem}\nThe Fourier transform $\\mathcal{F}$ is a unitary map.\nThus $(\\hat{f}, \\hat{g}) = (f, g)$ and\n$$\n\\int |f(x)|^2 dx = \\int |\\hat{f}(k)|^2 dk\n$$\n\\end{theorem}\n\nIt is a unitary operator, so its spectrum lies on the unit circle; it consists entirely of eigenvalues (namely $\\pm 1, \\pm i$).\n\nWe can use the Fourier transform to define Sobolev spaces of functions with square-integrable derivatives.\n
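\nA worked special case (standard, under the normalization above, in dimension $n = 1$): the unit Gaussian is fixed by $\\mathcal{F}$,\n$$\n\\mathcal{F}\\left[e^{-x^2/2}\\right](k) = \\frac{1}{\\sqrt{2\\pi}} \\int e^{-x^2/2} e^{-ikx} dx = e^{-k^2/2},\n$$\nconsistent with unitarity, since $\\norm{e^{-x^2/2}}_{L^2}$ is preserved; the Gaussian is an eigenvector of $\\mathcal{F}$ with eigenvalue $1$.\n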
The Sobolev space $H^s(\\mathbb{R}^n)$ consists of all distributions $f \\in S^*$ whose Fourier transform $\\hat{f}$ is a regular distribution and \n$$\n\\int (1 + |k|^2)^s |\\hat{f}(k)|^2 dk < \\infty\n$$\n\\end{definition}\n\nWe have that if $f \\in H^s(\\mathbb{R}^n)$ with $s > n/2$, then $f \\in C_0(\\mathbb{R}^n)$.\n%%%%%%%\n\\subsubsection{Facts}\nFor periodic functions we have Fourier series; for non-periodic functions we need all frequencies, so the Fourier transform gives a function on the frequency space.\n\nIf $f$ is a compactly supported continuous function, then $\\hat{f}$ can be extended to an entire function, that is, a holomorphic function on the entire $\\mathbb{C}$ plane.\n\nHere is a useful fact: $f$ and $\\hat{f}$ cannot both be compactly supported:\n\n\\begin{theorem}\nIf a continuous function $f$ on $\\mathbb{R}$ has both $f$ and $\\hat{f}$ compactly supported, then $f = 0$.\n\\end{theorem}\n\n\\begin{proof}\nHere are three proof sketches:\n\n1. Note that the Fourier transform extends to a holomorphic function of $s$ on the entire complex plane,\n$$\n\\hat{f}(s) \\coloneqq \\frac{1}{\\sqrt{2\\pi}} \\int f(x) e^{isx} dx,\n$$\nsince $f$ is compactly supported. A nonzero entire function has only isolated zeros, so $\\hat{f}$ cannot vanish on an interval; in particular it cannot be compactly supported unless $f = 0$.\n\n2. We restrict $f$ to an interval containing its support; on that interval $f$ is equal to its Fourier series. If $\\hat{f}$ is compactly supported, this means that $f$ is a finite sum of trigonometric functions, thus analytic, and an analytic function cannot be compactly supported unless it is zero.\n\n3. One can show directly that $f$ itself is analytic, which again forces $f = 0$.\n\\end{proof}\n\n\n\n\\subsection{General Measure Theory}\n\n\\subsubsection{$\\sigma$-Algebras, Measurable Spaces, and Measures}\nThe story of measure theory is about trying to generalize the notion of volume or size. \n\nWe first start with the notion of outer measure:\n\n\\begin{definition}\nAn outer measure $\\mu^*$ on a set $X$ is a function \n\n$$\n\\mu^* : 2^X \\rightarrow [0, \\infty]\n$$ \nsuch that \n\\begin{enumerate}\n    \\item $\\mu^*(\\emptyset) = 0$;\n    \\item if $E \\subset F \\subset X$, then $\\mu^*(E) \\leq \\mu^*(F)$;\n    \\item (Countable subadditivity) if $\\{E_i\\}$ is a countable collection of subsets of $X$, then \n    $$\n    \\mu^*(\\bigcup_{i = 1} ^{\\infty} E_i)\\leq \\sum_{i = 1} ^{\\infty} \\mu^*(E_i)\n    $$\n\\end{enumerate}\n\\end{definition}\nNote that $\\mu^*$ is not assumed to be additive even if the $E_i$ are disjoint. In fact, it often is not.\n\nThe condition we really want is countable additivity: when the $E_i$ are disjoint subsets, the inequality above should become an equality. However, this cannot hold for all subsets (it fails over the real line $\\mathbb{R}$). So we instead restrict the collection of subsets that we are looking at. This is where we need $\\sigma$-algebras:\n\n\\begin{definition}\nA $\\sigma$-algebra on a set $X$ is a collection $\\mathcal{A}$ of subsets of $X$ such that\n\\begin{enumerate}\n    \\item $\\emptyset, X \\in \\mathcal{A}$;\n    \\item if $A \\in \\mathcal{A}$, then $A^c \\in \\mathcal{A}$. 
$A^c$ is the complement.\n    \\item if $A_i \\in \\mathcal{A}$ for $i \\in \\mathbb{N}$, then \n    $$\n    \\bigcup_{i=1} ^{\\infty} A_i,  \\ \\bigcap_{i=1} ^{\\infty} A_i\n    $$\n    are in $\\mathcal{A}$\n\\end{enumerate}\n\\end{definition}\nThat is, a $\\sigma$-algebra is a collection of subsets that contains the empty set and is closed under taking complements and countable unions.\nExamples are $\\{ \\emptyset, X \\}$, and $\\mathcal{P}(X) = 2^X$.\n\nNow we can define a measurable space, that is, a setting where we can define measures:\n\n\\begin{definition}\nA measurable space is a pair $(X, \\mathcal{A})$, where $X$ is a set and $\\mathcal{A}$ is a $\\sigma$-algebra on $X$.\n\\end{definition}\n\nAny topological space gives a measurable space:\n\n\\begin{definition}\nGiven a topological space $X$ with topology $\\tau$, the Borel $\\sigma$-algebra is the smallest $\\sigma$-algebra containing all the open sets (thus also all closed sets) of $X$. \n\\end{definition}\n\nGiven a measurable space $(X, \\mathcal{A})$, we can now define a measure:\n\n\\begin{definition}\nA measure $\\mu$ on $(X, \\mathcal{A})$ is a function \n$$\n\\mu: \\mathcal{A} \\rightarrow [0, \\infty]\n$$\nsuch that \n\\begin{enumerate}\n    \\item $\\mu(\\emptyset) = 0$;\n    \\item it is countably additive.\n\\end{enumerate}\n\\end{definition}\n\nOf course, a measure restricts to any measurable subset. \n\n\\begin{example}\nLet $X$ be a set; then $\\nu: \\mathcal{P}(X) \\rightarrow [0, \\infty]$ given by cardinality is a measure, called the counting measure. \n\\end{example}\n\nA measure zero set is a measurable set $N$ such that $\\mu(N) = 0$. A property that holds for all elements away from a measure zero set is said to hold almost everywhere, or a.e.\n\n\\begin{definition}\nA measure space $(X, \\mathcal{A}, \\mu)$ is complete if every subset of a set of measure zero is measurable.\n\\end{definition}\n\nThere is a completion operation which takes a measure space to a complete measure space.\n\n\\subsubsection{Measurable Functions}\nA measurable function is similar to a continuous function: a continuous function pulls back open sets to open sets, while a measurable function pulls back measurable sets to measurable sets.\n\n\nThe ones we care about are the real-valued functions. We equip the target $\\mathbb{R}$ with the Borel $\\sigma$-algebra. \n\n\\begin{definition}\nLet $(X, \\mathcal{A})$ be a measurable space; then $f : X \\rightarrow \\mathbb{R}$ is measurable if it pulls back Borel sets to measurable subsets.\n\\end{definition}\n\nAs the Borel $\\sigma$-algebra is generated by $(-\\infty, b)$, $(-\\infty, b]$, $(a, \\infty)$, $[a, \\infty)$, it suffices for measurability that the pullbacks of these sets are measurable.\n\nOf course, products, sums, and (where defined) reciprocals of measurable functions are measurable. So are $|f|$, $\\max(f,g)$, $\\min(f, g)$.\n\n\\todo[inline]{ Finish chapter 3 in this subsubsection}\n\n\n\n\\begin{lemma}\n\nPointwise limits ($\\sup$, $\\inf$, $\\limsup$, $\\liminf$) of measurable (extended) functions are measurable (extended) real-valued functions. \n\\end{lemma}\n\nTherefore if a sequence of measurable functions is pointwise convergent, then its pointwise limit is measurable (since the limit equals both the $\\limsup$ and the $\\liminf$).\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nMeasurable functions are limits of simple functions:\nThe characteristic function of $E \\subset X$ is the function $\\chi_E: X \\rightarrow \\mathbb{R}$ defined by $\\chi_E(x) = 1$ if $x \\in E$ and $0$ otherwise. 
It is measurable iff $E$ is a measurable set.\n\n\\begin{definition}\nA simple function $\\phi: X \\rightarrow \\mathbb{R}$ is a finite $\\mathbb{R}$-linear combination of characteristic functions of measurable sets.\n\\end{definition}\n\n\n\\begin{theorem}\nAny positive measurable function is the pointwise limit of a monotone increasing sequence of positive simple functions. \n\\end{theorem}\n\n\\begin{remark}\nIn the Lebesgue integral we approximate by simple functions; in the Riemann integral we partition the domain. So in the Lebesgue integral we partition the range, while in the Riemann integral we partition the domain.\n\\end{remark}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%\nIn a complete measure space, the propositions above only require their hypotheses to hold almost everywhere; for example, if $f_n \\rightarrow f$ almost everywhere (away from a measure-zero set), then $f$ is measurable.\n\n\n\\subsubsection{Integration}\n\nWe develop the general theory of integration of a real-valued function on a measure space. Since every measurable function is the pointwise limit of simple functions, we define the integral of simple functions first and extend from there:\n\nFor a simple function $\\phi = \\sum_{i = 1} ^N c_i \\chi_{E_i}$, the integral with respect to the measure $\\mu$ is \n$$\n\\int \\phi d\\mu = \\sum_{i = 1} ^N c_i \\mu(E_i).\n$$\n\n\\begin{example}\nThe characteristic function of the rationals $\\chi_\\mathbb{Q}: \\mathbb{R} \\rightarrow \\mathbb{R}$ is not Riemann integrable, but it is Lebesgue integrable with $\\int \\chi_\\mathbb{Q} d\\mu = 0$.\n\\end{example}\n\nFor a positive function $f$, \n$$\n\\int f d\\mu = \\sup\\{\\int \\phi d\\mu: 0 \\leq \\phi \\leq f, \\ \\phi \\ \\text{simple}\\}\n$$\nIt is integrable if it is measurable and $\\int f d\\mu < \\infty$.\n\n\nLet $A$ be a measurable set; then \n$$\n\\int_A f d\\mu = \\int f \\chi_A d\\mu\n$$\n\nHere is the important Monotone Convergence Theorem:\n\n\\begin{theorem}[Monotone Convergence Theorem]\nIf $\\{f_n\\}$ is a monotone increasing sequence of positive, measurable, extended real-valued functions, and \n$$\nf = \\lim_{n\\rightarrow \\infty} f_n,\n$$\nthen \n$$\n\\lim_{n \\rightarrow \\infty} \\int f_n d\\mu = \\int f d\\mu\n$$\n\\end{theorem}\n\nThis means that the integral of $f$ is the limit of the integrals of an increasing sequence of simple functions. This gives the following:\n\n\\begin{lemma}\nIn the $L^1$ norm, the space of simple functions is dense. That is, any integrable function is a limit of simple functions in the $L^1$ norm.\n\\end{lemma}\n\nThe integral of an extended real-valued function is the integral of its positive part minus the integral of its negative part.\nSuch a function is integrable iff $\\int |f| d\\mu < \\infty$. Note that absolute convergence of a series is the counting-measure version of integrability.\n\nAn important question of integration theory is: given $f_n$, does $$\n\\lim_{n \\rightarrow \\infty} \\int f_n d \\mu = \\int \\lim_{n \\rightarrow \\infty} f_n d \\mu?\n$$\nIt is definitely not true in general. 
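For example, on $\\mathbb{R}$ with the Lebesgue measure, take $f_n = n\\chi_{(0,1/n)}$. Then $f_n \\rightarrow 0$ pointwise, but\n$$\n\\lim_{n \\rightarrow \\infty} \\int f_n d\\mu = 1 \\neq 0 = \\int \\lim_{n \\rightarrow \\infty} f_n d\\mu.\n$$\n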
\n\nHere are two important theorems about it:\n\n\\begin{theorem}[Fatou's lemma]\nLet $f_n$ be a sequence of positive measurable functions; then \n$$\n\\int \\liminf f_n d\\mu \\leq \\liminf \\int f_n d\\mu\n$$\n\\end{theorem}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\nWe also have the dominated convergence theorem:\n\\begin{theorem}[Dominated Convergence Theorem]\nIf $f_n$ is a sequence of measurable functions $f_n: X \\rightarrow \\mathbb{R}$ such that $f_n \\rightarrow f$ pointwise, and $|f_n| \\leq g$ for some integrable $g$, then\n$$\n\\int f_n d\\mu \\rightarrow \\int f d\\mu \n$$\n\\end{theorem}\n\nOf course, we can generalize all of the above to complex-valued functions.\n\n\\begin{theorem}\nAny Riemann integrable function on a bounded interval is Lebesgue integrable, and its Riemann integral is equal to its Lebesgue integral. Moreover, a bounded function on a bounded interval is Riemann integrable iff its set of discontinuities has Lebesgue measure zero.\n\\end{theorem}\n\n\\subsubsection{$L^p$ spaces}\n\n\\begin{definition}\nGiven a measure space $(X, \\mathcal{A}, \\mu)$ and $1 \\leq p < \\infty$, the space $L^p(X)$ consists of equivalence classes of measurable functions $f: X \\rightarrow \\mathbb{R}$ such that\n$$\n\\int |f|^p d\\mu < \\infty,\n$$\nwhere two measurable functions are equivalent if they are equal $\\mu$-a.e. The $L^p$-norm of $f$ is \n$$\n\\norm{f}_{L^p} = (\\int |f|^p d\\mu )^{1/p}\n$$\n\n\\end{definition}\nWe also have the $L^\\infty$ norm:\n\\begin{definition}\nLet $f$ be a measurable function. The essential supremum is \n$$\n\\mathrm{ess\\,sup} f = \\inf \\{a \\in \\mathbb{R}: \\mu \\{x \\in X : f(x) > a \\} = 0 \\}\n$$\n\nThe function is essentially bounded if $\\mathrm{ess\\,sup}|f| < \\infty$.\n\nThe space $L^\\infty(X)$ consists of pointwise a.e.-equivalence classes of essentially bounded functions with norm $\\mathrm{ess\\,sup}|f|$.\n\\end{definition}\n\nHere are two important inequalities:\n\n\\begin{theorem}[Minkowski inequality]\nIf $f, g \\in L^p(X)$ for $1 \\leq p \\leq \\infty$, then $f + g \\in L^p(X)$ and \n$$\n\\norm{f + g}_{L^p} \\leq \\norm{f}_{L^p} + \\norm{g}_{L^p}\n$$\n\\end{theorem}\n\n\\begin{definition}\nFor $p$ with $1 \\leq p \\leq \\infty$, \nthe Holder conjugate of $p$ is $p'$ with \n$$\n\\frac{1}{p} + \\frac{1}{p'} = 1\n$$\nwhere $1' = \\infty$ and $\\infty' = 1$.\n\\end{definition}\n\n\\begin{theorem}[Holder's inequality]\nIf $f \\in L^p(X)$ and $g \\in L^{p'}(X)$, then $fg \\in L^1(X)$ and \n$$\n\\int |fg| d\\mu \\leq \\norm{f}_{L^p}\\norm{g}_{L^{p'}}\n$$\n\\end{theorem}\n\nHere is an immediate generalization, which is even better:\n\n\\begin{theorem}[Holder's Inequality]\nGiven $r, p, q$ with $\\frac{1}{r} = \\frac{1}{p} + \\frac{1}{q}$, then \n$$\n\\norm{fg}_r \\leq \\norm{f}_{p}  \\norm{g}_q \n$$\n\n\\end{theorem}\n\nSimple functions are dense in $L^p$ for all $p$:\n\n\\begin{theorem}\nThe simple functions that belong to $L^p(X)$ are dense in $L^p(X)$.\n\\end{theorem}\n\nNote that a simple function $\\phi = \\sum_{i=1} ^n c_i \\chi_{A_i}$ with nonzero coefficients $c_i$ belongs to $L^p$ for $p < \\infty$ iff each $\\mu(A_i) < \\infty$. 
Every simple function belongs to $L^\\infty$.\n\nIn addition, the spaces $L^p(X)$ are complete:\n\n\\begin{theorem}[Riesz-Fischer Theorem]\nIf $X$ is a measure space and $1 \\leq p \\leq \\infty$, then $L^p(X)$ is complete.\n\\end{theorem}\n\nNote that for $1 \\leq p < \\infty$, any sequence converging in $L^p$ has a subsequence converging pointwise a.e.\n\n%%%%%%%%%%%%%%%%%%%%%\nWe also have duality for $L^p$ spaces:\nThe dual $X^*$ of a Banach space $X$, as a set, is all bounded linear functionals on $X$.\n\n\\begin{theorem}[Duality of $L^p$ spaces]\nGiven $1 < p \\leq \\infty$, if $f \\in L^{p'}(X)$, then \n$$\nF(g) = \\int fg d\\mu \n$$\ndefines a bounded linear functional $F: L^p(X) \\rightarrow \\mathbb{R}$ with \n$$\n\\norm{F} = \\norm{f}_{L^{p'}}.\n$$\nFor $1 < p < \\infty$, this defines an isomorphism $L^{p'}(X) \\cong L^p(X)^*$.\n\\end{theorem}\n\n\\begin{corollary}\nFor $1 < p < \\infty$, $L^p(X)$ is reflexive, that is, the canonical map $L^p(X) \\rightarrow L^p(X)^{**}$ is an isomorphism.\n\\end{corollary}\n\nFor $X$ a finite measure space and $p < q$, we have an inclusion $L^q(X) \\xrightarrow{} L^p(X)$:\n\n\\begin{theorem}\nFor $X$ a finite measure space and $1 \\leq p \\leq q \\leq \\infty$, the $L^q$ norm bounds the $L^p$ norm up to a constant depending on $\\mu(X)$. Thus we have an inclusion $L^q(X) \\xrightarrow{} L^p(X)$. \n\\end{theorem}\n\nThus uniform convergence implies $L^\\infty$ convergence, which implies $L^p$ convergence for every $p$, down to $L^1$ convergence.\n\n\\subsubsection{Convergences}\nOn a measure space, there is also a notion of convergence in measure:\n\n\\begin{definition}\nA sequence $f_n \\rightarrow f$ converges in measure if for every $\\epsilon > 0$,\n$$\n\\mu(\\{x \\in X : |f(x) - f_n(x)| > \\epsilon\\}) \\rightarrow 0\n$$\nas $n \\rightarrow \\infty$.\n\\end{definition}\nSo on a measure space, we have uniform convergence, $L^p$ convergence for $1 \\leq p \\leq \\infty$, pointwise convergence a.e., and convergence in measure.\n\nIn general, $L^p$ convergence implies convergence in measure. In a finite measure space, pointwise a.e. convergence implies convergence in measure. And convergence in measure implies the existence of a subsequence that is pointwise convergent a.e.\n\\subsection{Lebesgue measure on $\\mathbb{R}^n$}\n\nThe Lebesgue measure is constructed from elementary geometrical objects like cubes. First we describe the Lebesgue $\\sigma$-algebra.\n\n\\begin{theorem}\nThe Lebesgue $\\sigma$-algebra is simply the completion of the Borel $\\sigma$-algebra on $\\mathbb{R}^n$.\n\\end{theorem}\n\nHere is a characterization of the measure zero sets:\n\\begin{lemma}\nA subset $N \\subset \\mathbb{R}^n$ has Lebesgue measure zero iff for every $\\epsilon > 0$, there exists a countable collection of rectangles $R_i$ such that $N \\subset \\bigcup_{i = 1}^\\infty R_i$ and $\\sum_{i = 1} ^\\infty \\mu(R_i) < \\epsilon$.\n\\end{lemma}\n\nEvery countable set has measure zero; the Cantor set is an uncountable measure zero set.\n\nThe Lebesgue measure is translation invariant: it is the Haar measure of the additive group $\\mathbb{R}^n$. It is also rotation invariant. \n\n\n%%%%%%%%%%%%%%%%\n\\subsubsection{Regularity Results}\nWe also have regularity results for the Lebesgue measure. 
\n\n\\begin{theorem}\nIf $A \\subset \\mathbb{R}^n$, then \n$$\n\\mu^*(A) = inf\\{\\mu(G): A \\subset G, G open\\},\n$$\nand if $A$ is Lebesgue measurable, then \n$$\n\\mu(A) = sup \\{\\mu(K): K \\subset A, K compact\\}.\n$$\n\\end{theorem}\n\nA subset $A$ is Lebesgue measureable iff there are open sets containing $A$ and difference is arbitrary small.\n\nHere's another characterization, which says the Lebesgue measureable sets are the ones that can be \"squeezed\" between open and closed sets:\n\n\\begin{theorem}\nA subset $A$ is Lebesgue measurable iff if for every $\\epsilon > 0$ there is an open set $G$ and closed set $F$ s.y. $F \\subset A \\subset G$ and $\\mu(G\\\\F)< \\epsilon.$ If $\\mu(A) < \\infty$, then $F$ can be chosen to be compact.\n\\end{theorem}\n\nThus we can approximate a Lebesgue measureable set by an open containing it and a closed contained in it, with arbitrary small, but generally nonzero difference.\n\n\n\\subsubsection{$L^p$ functions on $\\mathbb{R}^n$}\n\\begin{theorem}\nFor $1 \\leq p < \\infty$, we have that the space of compactly supported functions is dense in $L^p(\\mathbb{R}^n)$.\n\\end{theorem}\n\nLet $\\mu$ be the Lebesgue Measure on the Real number line $\\mathbb{R}$. We want to study how integration on the real number work:\n\n\\begin{definition}\nLet $f$ be a non-negative function on $\\mathbb{R}$, it is Lebesgue measureable if \n\\end{definition}\n\n\n\n\\subsection{Probability}\nIn measure theory point-of-view, probability theory is just the study of measure theory on a particular nice measure space, namely, one that's total measure 1. However, they are their own languages, which I will try to describe here.\n\n\\begin{defintion}\n\nA probability space $\\Omega$ is a finite measure space with total measure 1. A random variable $X$ is a real-valued measureable function.\n\\end{defintion}\n\n\n\\begin{definition}\nThe mean/expectation of $X$, $\\mathbb{E}[X]$ is simply the Lebesgue integral \n$$\n\\int X d\\mu\n$$\nMore generally, given a function $\\phi: \\mathbb{R} \\rightarrow \\mathbb{R}$, we have $\\mathbb{E}[\\phi] \\coloneqq \\int \\phi(X) d\\mu$. \n\\end{definition}\n\nWe also use $\\mathbb{P}$:\n\\begin{definition}\n$$\n\\mathbb{P}[X < a] \\coloneqq \\mu (X^{-1}(-\\infty, a))\n$$\n\\end{definition}\n\\begin{definition}\nThe variance of $X$ is $var(X) \\coloneqq \\mathbb{E}[(X - \\mathbb{E}[X])^2] = \\mathbb{E}[X^2] - (\\mathbb{E}[X])^2$\n\\end{definition}\n\nGiven a random variable $X$, we have its cumulative distribution function $F(X,x)$:\n\n\\begin{definition}\nThe cumulative distribution function (CDF) $F_X(x)$ is a real-valued function on $\\mathbb{R}$: \n\n$$\nF_X(x) = \\mathbb{P}[X \\leq x]\n$$\n\n\\end{definition}\n\nThe derivative of this in the density function, it is the Radon-Nikodym derivative $f = \\frac{d X_*\\mu}{d \\mu_{L}}$, where $\\mu_{L}$ is the Lebesgue measure on $\\mathbb{R}$. It has the property that\n$$\n\\int_a ^b f dx = \\mathbb{P}[a < X < b]\n$$\n\nNote that \n$$\n\\mathbb{E}[\\phi] = \\int_\\Omega \\phi(X) d\\mu = \\int_\\mathbb{R} \\phi d X_*(\\mu) = \\int_\\mathbb{R} \\phi f dx\n$$\n\n\\begin{example}\nThe most important distribution is of course the Gaussian (normal) distribution $N(\\mu, \\sigma)$, where $\\mu$ is the mean and $\\sigma$ is the standard deviation. 
Its density function is \n$$\n\\frac{1}{\\sigma \\sqrt{2\\pi}} e^{-\\frac{1}{2} (\\frac{x-\\mu}{\\sigma} )^2}\n$$\n\n\\end{example}\n\n\n\\begin{definition}\n\nTwo random variables are independent iff $F_{X,Y}(x,y) = F_X(x) F_Y(y)$, where $F_{X,Y}(x,y)$ is the probability that $X \\leq x$ and $Y \\leq y$. \n\n$n$ random variables are pairwise independent if any two of them are independent. They are mutually independent iff $F_{X_1, \\dots, X_n}(x_1, \\dots, x_n) = F_{X_1}(x_1) \\cdots F_{X_n}(x_n)$.\n\\end{definition}\n\nEquivalently, $X_1, \\dots , X_n$ are independent iff the pushforward measure $(X_1, \\dots, X_n)_* \\mu$ of the map \n$(X_1, \\dots, X_n): \\Omega \\rightarrow \\mathbb{R}^n$ is the product measure $(X_1)_* \\mu \\times \\cdots \\times (X_n)_* \\mu$.\n\nThus by Fubini's theorem, for independent $X$ and $Y$ we have\n$$\n\\mathbb{E}[XY] = \\mathbb{E}[X] \\mathbb{E}[Y]\n$$\nand thus \n$$\nvar(X + Y) = var(X) + var(Y)\n$$\n\n\n\nWe have the central limit theorem, which describes what happens when we look at the partial sums of the same independent experiment done again and again, in the limit of infinitely many experiments:\n\n\n\n\\begin{theorem}\nLet $X_1, X_2, \\dots$ be i.i.d. (independent, identically distributed) random variables with mean $\\mu$ and standard deviation $\\sigma$. Let $S_n = \\frac{X_1 + X_2 + \\dots + X_n - n\\mu}{\\sigma\\sqrt{n}}$; then $S_n$ tends in distribution to the standard normal distribution $N(0, 1)$. \n\\end{theorem}\n\n\\end{document}", "meta": {"hexsha": "c80bb9c21e3e2b154972be20aea08e21073c6280", "size": 70527, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RA.tex", "max_stars_repo_name": "leon2k2k2k/harvard-qual", "max_stars_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RA.tex", "max_issues_repo_name": "leon2k2k2k/harvard-qual", "max_issues_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RA.tex", "max_forks_repo_name": "leon2k2k2k/harvard-qual", "max_forks_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.779047619, "max_line_length": 539, "alphanum_fraction": 0.7113162335, "num_tokens": 21099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7879312006227324, "lm_q1q2_score": 0.5879061926101563}}
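A small Monte Carlo sketch of the central limit theorem stated above (an illustration only; the Exponential(1) distribution, the sample sizes, and the seed are arbitrary choices):\n\\begin{verbatim}\nimport numpy as np\n\n# Standardized partial sums of i.i.d. Exponential(1) draws (mu = sigma = 1)\n# should be approximately N(0, 1) for large n.\nrng = np.random.default_rng(0)\nn, trials = 2000, 50000\nX = rng.exponential(scale=1.0, size=(trials, n))\nS = (X.sum(axis=1) - n * 1.0) / (1.0 * np.sqrt(n))\nprint(S.mean(), S.std())  # approximately 0 and 1\nprint(np.mean(S <= 1.0))  # approximately Phi(1) ~ 0.8413\n\\end{verbatim}\n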
{"text": "\\documentclass{article}\n\n\\usepackage{enumitem}\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{parskip}\n\\usepackage{graphicx}\n\\usepackage{mathtools}\n\\usepackage{mathrsfs}\n\\usepackage{amsmath,amsthm,hyperref}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\DeclareMathOperator{\\Tr}{tr}\n\\usepackage{bbm}\n\n% Margins\n\\usepackage[top=2.5cm, left=3cm, right=3cm, bottom=4.0cm]{geometry}\n% Colour table cells\n\\usepackage[table]{xcolor}\n\n% Get larger line spacing in table\n\\newcommand{\\tablespace}{\\\\[1.25mm]}\n\\newcommand\\Tstrut{\\rule{0pt}{2.6ex}}         % = `top' strut\n\\newcommand\\tstrut{\\rule{0pt}{2.0ex}}         % = `top' strut\n\\newcommand\\Bstrut{\\rule[-0.9ex]{0pt}{0pt}}   % = `bottom' /\n\n% my new commands\n\\newcommand\\partialkj{\\frac{\\partial^2}{\\partial\\theta_k\\partial\\theta_j}}\n\\makeatletter\n\\newcommand*\\bigcdot{\\mathpalette\\bigcdot@{.5}}\n\\newcommand*\\bigcdot@[2]{\\mathbin{\\vcenter{\\hbox{\\scalebox{#2}{$\\m@th#1\\bullet$}}}}}\n\\makeatother\n\\newcommand{\\minus}{\\scalebox{0.5}[1.0]{$-$}}\n\\newcommand{\\zero}{\\scalebox{0.6}[0.75]{$^{(0)}$}}\n\\newcommand{\\supi}[1]{\\scalebox{0.6}[0.75]{$^{(#1)}$}}\n\\newcommand{\\bigDash}{\\scalebox{3.0}[1.0]{$-$}}\n\n%%%%%%%%%%%%%%%%%\n%     Title     %\n%%%%%%%%%%%%%%%%%\n\\title{Problem Set #1: Supervised Learning}\n\\author{Eitan Joseph \\and Caroline Wang}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n%%%%%%%%%%%%%%%%%\n%   Problem 1   %\n%%%%%%%%%%%%%%%%%\n\\section{Problem 1}\n\\textbf{Newton\u2019s method for computing least squares\\\\\\\\}\nIn this problem, we will prove that if we use Newton\u2019s method solve the least squares optimization problem, then we only need one iteration to converge to $\\theta$.\n\\begin{enumerate}[label=(\\alph*)]\n    \\item Find the Hessian of the cost function $J(\\theta) = \\frac{1}{2}\\sum_{i=1}^{m}\\left(\\theta^T x^{(i)}-y^{(i)}\\right )^2$.\\\\\n    \\textit{answer:}\n    \\begin{align*}\n        H_k_j = \\partialkj J(\\theta) = \\partialkj \\frac{1}{2}\\sum_{i=1}^{m}\\left(\\theta^T x^{(i)}-y^{(i)}\\right )^2 = \\frac{\\partial}{\\partial\\theta_k} \\sum_{i=1}^{m}\\left(\\theta^T x^{(i)}-y^{(i)}\\right )(x_j^{(i)}) = \\sum_{i=1}^{m}x_k^{(i)}x_j^{(i)} \n    \\end{align*}\n    The sum can be understood as $x_k^{(i)}\\bigcdot x_j^{(i)}\\quad \\forall i$\n    \\begin{align*}\n         H = X^TX\n    \\end{align*}\n    \\item Show that the first iteration of Newton\u2019s method gives us $\\theta^* = \\left(X^TX\\right)^{\\minus1}X^T\\Vec{y}$, the\nsolution to our least squares problem.\\\\\n    \\textit{answer:}\n    \\begin{align*}\n        &&\\text{The first iteration of Newton's Method is given by}\n        &&\\theta^* = \\theta^{\\zero} - H^{\\minus1}\\nabla_{\\theta^{\\zero}} J(\\theta^{\\zero}) \\\\\n        &&\\text{Via lecture 2 we know that}\n        &&\\nabla_\\theta J(\\theta) = X^TX\\theta - X^T\\Vec{y}\\\\\n        &&\\text{Which means we need to solve}\n        &&\\theta^* = \\theta^{\\zero} - H^{\\minus1}\\left(X^TX\\theta^{\\zero} - X^T\\Vec{y}\\right)\\\\\n        &&\\text{We can substitute our result from part (a) to get}\n        &&\\theta^* = \\theta^{\\zero} - \\left(X^TX\\right)^{\\minus1}\\left(X^TX\\theta^{\\zero} - X^T\\Vec{y}\\right)\\\\\n        &&\\text{This reduces to}\n        &&\\theta^* = \\left(X^TX\\right)^{\\minus1}X^T\\Vec{y}\n    \\end{align*}\n\\end{enumerate}\n\n%%%%%%%%%%%%%%%%%\n%   Problem 3   %\n%%%%%%%%%%%%%%%%%\n\\section{Problem 3}\n\\textbf{Multivariate least 
squares\\\\\\\\}\nSo far in class, we have only considered cases where our target variable $y$ is a scalar value. Suppose that instead of trying to predict a single output, we have a training set with multiple outputs for each example:\\\\\n\\begin{align*}\n    \\{(x^{\\supi{i}}, y^{\\supi{i}}),i=1,...,m\\},\\; x^{\\supi{i}}\\in\\mathbb{R}^n,\\; y^{\\supi{i}}\\in\\mathbb{R}^p\n\\end{align*}\nThus for each training example, $y^{\\supi{i}}$ is vector-valued, with $p$ entries. We wish to use a linear model to predict the outputs, as in least squares, by specifying the parameter matrix $\\Theta$ in\n\\begin{align*}\n    y = \\Theta^Tx \\text{, where }\\Theta \\in \\mathbb{R}^{n\\times p}\n\\end{align*}\n\n\\begin{enumerate}[label=(\\alph*)]\n    \\item The cost function for this case is\n    \\begin{align*}\n        J(\\Theta) = \\frac{1}{2}\\sum_{i=1}^{m}\\sum_{j=1}^{p}\\left((\\Theta^T x^{\\supi{i}})_j - y_j^{\\supi{i}}\\right)^2\n    \\end{align*}\n    Write $J(\\Theta)$ in matrix-vector notation (i.e., without using any summations). You may use the $m\\times n$ design matrix\\\\\n    \\begin{center}\n    $X=$\n        \\begin{bmatrix}\n            \\bigDash && (x^{(1)})^T && \\bigDash\\\\\n            \\bigDash && (x^{(2)})^T && \\bigDash\\\\\n            && \\vdots &&\\\\\n            \\bigDash && (x^{(m)})^T && \\bigDash\\\\\n        \\end{bmatrix}\n    \\end{center}\n    and the $m\\times p$ target matrix\n    \\begin{center}\n    $Y=$\n        \\begin{bmatrix}\n            \\bigDash && (y^{(1)})^T && \\bigDash\\\\\n            \\bigDash && (y^{(2)})^T && \\bigDash\\\\\n             && \\vdots &&\\\\\n            \\bigDash && (y^{(m)})^T && \\bigDash\\\\\n        \\end{bmatrix}\n    \\end{center}\n    \\textit{answer:}\n    \\begin{align*}\n        J(\\Theta) &{}= \\frac{1}{2}\\sum_{i=1}^{m}\\sum_{j=1}^{p}\\left((\\Theta^T x^{\\supi{i}})_j - y_j^{\\supi{i}}\\right)^2\\\\ \n        &{}= \\frac{1}{2}\\sum_{i=1}^{m}\\sum_{j=1}^{p}\\left(X\\Theta - Y\\right)^2_{i,j}\\\\\n        &{}= \\frac{1}{2}\\sum_{j=1}^{p}\\left[\\left(X\\Theta - Y\\right)^T\\left(X\\Theta - Y\\right)\\right]_{j,j}\\\\\n        &{}= \\frac{1}{2}\\Tr\\left(\\left(X\\Theta - Y\\right)^T\\left(X\\Theta - Y\\right)\\right)\n    \\end{align*}\n    \\item  Find the closed form solution for \u0398 which minimizes $J(\\Theta)$. 
This is equivalent to the normal equations for the multivariate case.\\\\\\\\\n    \\textit{answer:}\\\\\n    We now have an optimization problem of the form:\n    \\begin{align*}\n        \\min_{\\Theta}\\;J(\\Theta)\n    \\end{align*}\n    We can solve this by setting the gradient of $J(\\Theta)$ to 0 and solving for $\\Theta$.\n    \\begin{align*}\n        \\nabla_{\\Theta}J(\\Theta) &{}= \\frac{1}{2}\\nabla_{\\Theta}\\Tr\\left(\\left(X\\Theta - Y\\right)^T\\left(X\\Theta - Y\\right)\\right)\\\\\n        &{}=\\frac{1}{2}\\nabla_{\\Theta}\\Tr\\left(\\Theta^TX^TX\\Theta - \\Theta^TX^TY - Y^TX\\Theta + Y^TY\\right)\\\\\n        % maybe make a note here that you need to use the property of trace and gradient\n        &{}=\\frac{1}{2}\\left(\\nabla_{\\Theta}\\Tr\\Theta^TX^TX\\Theta - 2\\nabla_{\\Theta}\\Tr\\Theta^TX^TY + \\nabla_{\\Theta}\\Tr Y^TY\\right)\\\\\n        &{}=\\frac{1}{2}\\left(X^TX\\Theta + X^TX\\Theta - 2\\nabla_{\\Theta}\\Tr\\Theta^TX^TY\\right)\\\\\n        &{}=X^TX\\Theta - X^TY \n    \\end{align*}\n    Now after setting this result to 0 we get\n    \\begin{align*}\n        &{}X^TX\\Theta - X^TY = 0\\\\\n        &{}\\Theta = \\left(X^TX\\right)^{\\minus1} X^TY\n    \\end{align*}\n    \\item Suppose instead of considering the multivariate vectors $y^{\\supi{i}}$ all at once, we instead compute each variable $y^{\\supi{i}}_j$ separately for each $j = 1,\\dots, p$. In this case, we have $p$ individual linear models, of the form\n    \\begin{align*}\n        y^{\\supi{i}}_j = \\theta^T_jx^{\\supi{i}},\\;j = 1,\\dots, p. \n    \\end{align*}\n    How do the parameters from these $p$ independent least squares problems compare to the multivariate solution?\\\\\\\\\n    \\textit{answer:}\\\\\n    We first realize that $\\Theta$ can be written in terms of each of the $\\theta_j$s as\n    \\begin{align*}\n        \\sum_{i=1}^p\n        \\begin{bmatrix}\n            \\theta_i^{\\supi{1}}\\mathbbm{1}\\{i=1\\} & \\theta_i^{\\supi{1}}\\mathbbm{1}\\{i=2\\} & \\dots & \\theta_i^{\\supi{1}}\\mathbbm{1}\\{i=p\\} \\\\\n            \\theta_i^{\\supi{2}}\\mathbbm{1}\\{i=1\\} & \\theta_i^{\\supi{2}}\\mathbbm{1}\\{i=2\\} & \\dots & \\theta_i^{\\supi{2}}\\mathbbm{1}\\{i=p\\} \\\\\n            \\vdots &\\vdots &\\ddots &\\vdots\\\\\n            \\theta_i^{\\supi{n}}\\mathbbm{1}\\{i=1\\} &\\theta_i^{\\supi{n}}\\mathbbm{1}\\{i=2\\} &\\dots &\\theta_i^{\\supi{n}}\\mathbbm{1}\\{i=p\\}\n        \\end{bmatrix}_{n\\times p}\n         = \n        \\begin{bmatrix}\n            \\theta_1 &\\theta_2 &\\dots &\\theta_p\n        \\end{bmatrix}\n    \\end{align*}\n    Combining this with the original result of part (b) gives us\n    \\begin{align*}\n        \\begin{bmatrix}\n            \\theta_1 &\\theta_2 &\\dots &\\theta_p\n        \\end{bmatrix}\n        &{}= \\begin{bmatrix}\n            \\left(X^TX\\right)^{\\minus1} X^T\\Vec{y_1} &\\left(X^TX\\right)^{\\minus1} X^T\\Vec{y_2} &\\dots &\\left(X^TX\\right)^{\\minus1} X^T\\Vec{y_p}\n        \\end{bmatrix}\\\\\n        &{}= \\left(X^TX\\right)^{\\minus1} X^T\n        \\begin{bmatrix}\n            \\Vec{y_1} &\\Vec{y_2} &\\dots &\\Vec{y_p}\n        \\end{bmatrix}\\\\\n        &{}= \\left(X^TX\\right)^{\\minus1} X^TY\\\\\n        &{}= \\Theta\n    \\end{align*}\n    This result implies that evaluating the parameters separately yields the same result as evaluating them together.\n\\end{enumerate}\n\n%%%%%%%%%%%%%%%%%\n%   Problem 4   %\n%%%%%%%%%%%%%%%%%\n\\section{Problem 4}\n\\textbf{Naive Bayes\\\\\\\\}\nIn this problem, we look at maximum likelihood parameter 
estimation using the naive Bayes assumption. Here, the input features $x_j, j = 1,\\dots, n$ to our model are discrete, binary-valued variables, so $x_j \\in \\{0, 1\\}.$ We write $x = [x_1 x_2 \\dots x_n]^T$ for the input vector. For each training example, the output target is a single binary value $y \\in \\{0, 1\\}$. Our model is then parameterized by $\\phi_{j|y=0} = p(x_j = 1|y = 0)$, $\\phi_{j|y=1} = p(x_j = 1|y = 1)$, and\n$\\phi_y = p(y = 1)$. We model the joint distribution of $(x, y)$ according to\n\\begin{align*}\n    p(y) \\quad=&{}\\quad (\\phi_y)^y(1-\\phi_y)^{1-y}\\\\\n    p(x|y=0) \\quad=&{}\\quad \\prod_{j=1}^n p(x_j|y=0)\\\\\n    =&{}\\quad \\prod_{j=1}^n(\\phi_{j|y=0})^{x_j}(1-\\phi_{j|y=0})^{1-x_j}\\\\\n    p(x|y=1) \\quad=&{}\\quad \\prod_{j=1}^n p(x_j|y=1)\\\\\n    =&{}\\quad \\prod_{j=1}^n(\\phi_{j|y=1})^{x_j}(1-\\phi_{j|y=1})^{1-x_j}\\\\\n\\end{align*}\n\\begin{enumerate}[label=(\\alph*)]\n    \\item Find the joint likelihood function $\\ell(\\varphi) = \\log\\prod_{i=1}^mp(x^{\\supi{i}}, y^{\\supi{i}};\\varphi)$ in terms of the model parameters given above. Here, $\\varphi$ represents the entire set of parameters $\\{\\phi_y, \\phi_{j|y=0}, \\phi_{j|y=1} | j = 1,\\dots,n\\}$.\\\\\\\\\n    \\textit{answer:}\n    \\begin{align*}\n        \\ell(\\varphi) =&{} \\log\\prod_{i=1}^mp(x^{\\supi{i}}, y^{\\supi{i}};\\varphi)\\\\\n        =&{}\\log\\prod_{i=1}^m\\left(\\prod_{j=1}^{n}p(x_j^{\\supi{i}}|y^{\\supi{i}};\\varphi)\\right)p(y^{\\supi{i}};\\varphi)\\\\\n        =&{}\\sum_{i=1}^m\\left(\\log p(y^{\\supi{i}};\\varphi) + \\log\\prod_{j=1}^{n}p(x_j^{\\supi{i}}|y^{\\supi{i}};\\varphi)\\right)\\\\\n         =&{}\\sum_{i=1}^m\\left(\\log p(y^{\\supi{i}};\\varphi) + \\sum_{j=1}^{n}\\log p(x_j^{\\supi{i}}|y^{\\supi{i}};\\varphi)\\right)\\\\\n         =&{}\\sum_{i=1}^m\\left(\\log\\left((\\phi_y)^{y^{\\supi{i}}}(1-\\phi_y)^{1-y^{\\supi{i}}}\\right) + \\sum_{j=1}^{n}\\log\\left( (\\phi_{j|y})^{x_j}(1-\\phi_{j|y})^{1-x_j}\\right)\\right)\\\\\n         =&{}\\sum_{i=1}^m\\left(y^{\\supi{i}}\\log(\\phi_y) + (1-y^{\\supi{i}})\\log(1-\\phi_y) + \\sum_{j=1}^{n}x_j\\log (\\phi_{j|y})+(1-x_j)\\log(1-\\phi_{j|y})\\right)\n    \\end{align*}\n    where in the last two lines we abbreviate $x_j = x_j^{\\supi{i}}$ and $\\phi_{j|y} = \\phi_{j|y^{\\supi{i}}}$.\n    \\item Show that the parameters which maximize the likelihood function are the same as those given in the lecture notes.\\\\\\\\\n    \\textit{answer:}\\\\\\\\\n    To find the $\\phi_y$ that maximizes the likelihood function, we set the gradient of the result above with respect to $\\phi_y$ to zero.\n    \\begin{align*}\n       &{}\\nabla_{\\phi_y} \\sum_{i=1}^m\\left(y^{\\supi{i}}\\log(\\phi_y) + (1-y^{\\supi{i}})\\log(1-\\phi_y) + \\sum_{j=1}^{n}x_j\\log (\\phi_{j|y})+(1-x_j)\\log(1-\\phi_{j|y})\\right) \\overset{set}= 0\\\\\n       =&{}\\quad \\nabla_{\\phi_y} \\sum_{i=1}^my^{\\supi{i}}\\log(\\phi_y) + (1-y^{\\supi{i}})\\log(1-\\phi_y)\\\\\n       =&{}\\quad \\sum_{i=1}^m\\frac{y^{\\supi{i}}}{\\phi_y} - \\frac{1-y^{\\supi{i}}}{1-\\phi_y}\\\\\n       =&{}\\quad\\sum_{i=1}^my^{\\supi{i}}(1-\\phi_y) - \\phi_y(1-y^{\\supi{i}})\\\\\n       =&{}\\quad\\sum_{i=1}^my^{\\supi{i}} - \\phi_y = 0\\\\\n       \\implies&{}\\quad\\sum_{i=1}^my^{\\supi{i}} = \\sum_{i=1}^m\\phi_y\\\\\n       \\implies&{}\\quad\\sum_{i=1}^my^{\\supi{i}} = m\\phi_y\\\\\n       \\implies&{}\\quad\\phi_y = \\frac{\\sum_{i=1}^m\\mathbbm{1}\\{y^{\\supi{i}} = 1\\}}{m}\n    \\end{align*}\n    Similarly for $\\phi_{j|y}$ we can do the same thing and solve\n    \\begin{align*}\n       &{}\\nabla_{\\phi_{j|y}} \\sum_{i=1}^m\\left(y^{\\supi{i}}\\log(\\phi_y) + 
(1-y^{\\supi{i}})\\log(1-\\phi_y) + \\sum_{j=1}^{n}x_j\\log (\\phi_{j|y})+(1-x_j)\\log(1-\\phi_{j|y})\\right) \\overset{set}= 0\\\\\n       =&{}\\quad \\sum_{i=1}^m\\frac{x_j^{\\supi{i}}}{\\phi_{j|y}} - \\frac{1-x_j^{\\supi{i}}}{1-\\phi_{j|y}} = 0\\\\\n       =&{}\\quad \\sum_{i=1}^m x_j^{\\supi{i}}(1-\\phi_{j|y}) - \\phi_{j|y}(1-x_j^{\\supi{i}}) = 0\\\\\n       =&{}\\quad \\sum_{i=1}^m x_j^{\\supi{i}} - \\phi_{j|y} = 0\n    \\end{align*}\n    We can now separate this into two cases.\\\\\\\\\n    $\\phi_{j|y=0}$:\n    \\begin{align*}\n        &{}\\sum_{i=1}^m (x_j^{\\supi{i}} - \\phi_{j|y=0})\\mathbbm{1}\\{y^{\\supi{i}} = 0\\} = 0\\\\\n        \\implies&{}\\sum_{i=1}^m x_j^{\\supi{i}}\\mathbbm{1}\\{y^{\\supi{i}} = 0\\} = \\sum_{i=1}^m \\phi_{j|y=0}\\mathbbm{1}\\{y^{\\supi{i}} = 0\\}\\\\\n        \\implies&{}\\phi_{j|y=0} =  \\frac{\\sum_{i=1}^m x_j^{\\supi{i}}\\mathbbm{1}\\{y^{\\supi{i}} = 0\\}}{ \\sum_{i=1}^m\\mathbbm{1}\\{y^{\\supi{i}} = 0\\}}\\\\\n        \\implies&{}\\phi_{j|y=0} =  \\frac{\\sum_{i=1}^m\\mathbbm{1}\\{x_j^{\\supi{i}} = 1 \\wedge y^{\\supi{i}} = 0\\}}{ \\sum_{i=1}^m\\mathbbm{1}\\{y^{\\supi{i}} = 0\\}}\n    \\end{align*}\n    $\\phi_{j|y=1}$:\n    \\begin{align*}\n        &{}\\sum_{i=1}^m (x_j^{\\supi{i}} - \\phi_{j|y=1})\\mathbbm{1}\\{y^{\\supi{i}} = 1\\} = 0\\\\\n        \\implies&{}\\sum_{i=1}^m x_j^{\\supi{i}}\\mathbbm{1}\\{y^{\\supi{i}} = 1\\} = \\sum_{i=1}^m \\phi_{j|y=1}\\mathbbm{1}\\{y^{\\supi{i}} = 1\\}\\\\\n        \\implies&{}\\phi_{j|y=1} =  \\frac{\\sum_{i=1}^m x_j^{\\supi{i}}\\mathbbm{1}\\{y^{\\supi{i}} = 1\\}}{ \\sum_{i=1}^m\\mathbbm{1}\\{y^{\\supi{i}} = 1\\}}\\\\\n        \\implies&{}\\phi_{j|y=1} =  \\frac{\\sum_{i=1}^m\\mathbbm{1}\\{x_j^{\\supi{i}} = 1 \\wedge y^{\\supi{i}} = 1\\}}{ \\sum_{i=1}^m\\mathbbm{1}\\{y^{\\supi{i}} = 1\\}}\n    \\end{align*}\n    \n    %commenting out this question\n    \\iffalse\n    \\item Consider making a prediction on some new data point $x$ using the most likely class estimate generated by the naive Bayes algorithm. Show that the hypothesis returned by Naive Bayes is a linear classifier\u2014i.e., if $p(y = 0|x)$ and $p(y = 1|x)$ are the class probabilities returned by naive Bayes, show that there exists some $\\theta\\inR^{n+1}$ such that\n    \\begin{align*}\n        p(y = 1|x) \\geq p(y = 0|x) \\iff \\theta^{T} \\begin{bmatrix}\n            1\\\\\n            x\n        \\end{bmatrix} \\geq 0\n    \\end{align*}\n    \\fi\n\n\\end{enumerate}\n\\section{Problem 5}\n\\textbf{Exponential family and the geometric distribution}\\\\\n\\begin{enumerate}[label=(\\alph*)]\n    \\item Consider the geometric distribution parameterized by $\\phi$:\\begin{align*}\n        p(y;\\phi)=(1-\\phi)^{y-1}\\phi, \\quad y=1,2,3,\\dots\n    \\end{align*}Show that the geometric distribution is in the exponential family, and give $b(y)$, $\\eta$, $T(y)$ and $a(\\eta)$.\\\\\\\\\n    \\textit{answer:}\\\\\\\\\n    Recall that to be a member of the exponential family distribution, a distribution's PDF must be in the form $p(y;\\phi ) = b(y)\\exp[T(y)\\cdot \\eta-a(\\eta)]$. 
\\\\\\\\Here we have\n    \\begin{align*}\n        p(y;\\phi)=&(1-\\phi)^{y-1}\\phi\\\\\n        =&\\exp[\\log(1-\\phi)^{y-1} + \\log(\\phi)]\\\\\n        =&\\exp[(y-1)\\log(1-\\phi) + \\log(\\phi)]\\\\\n        =&\\exp[y\\log (1-\\phi)-\\log(1-\\phi)+\\log(\\phi)]\n    \\end{align*}which can be decomposed as\\begin{align*}\n        &b(y)=1\\\\\n        &\\eta = \\log (1-\\phi)\\\\\n        &T(y) = y\\\\\n        &a(\\eta)=\\eta - \\log (1-e^\\eta)\n    \\end{align*}\n    \\item Consider performing regression using a GLM model with a geometric response variable. What is the canonical response function for the family? You may use the fact that the mean of a geometric distribution is given by $1/\\phi$.\\\\\\\\\n    \\textit{answer}:\\begin{align*}\n        g(\\eta) = E[y;\\phi ] = \\frac{1}{\\phi } = \\frac{1}{1-e^\\eta}\n    \\end{align*}\n    \\item For a training set $\\{(x^{(i)}, y^{(i)}); i=1,\\dots m\\}$, let the log-likelihood of an example be $\\log p(y^{(i)}|x^{(i)};\\theta)$. By taking the derivative of the log-likelihood with respect to $\\theta_j$, derive the stochastic gradient ascent rule for learning using a GLM model with geometric responses y and the canonical response function.\\\\\\\\\n    \\textit{answer:}\\\\\\\\\n    The log-likelihood of $\\theta$ with respect to a training example $(x^{(i)}, y^{(i)})$ is defined as $l_i(\\theta) = \\log p(y\\supi{i} | x\\supi{i}; \\theta)$. We use the GLM assumption that $\\eta = \\theta^Tx$. Therefore, we obtain \\begin{align*}\n        l_i(\\theta ) =& \\log [\\exp(\\theta^Tx\\supi{i}\\cdot y\\supi{i}-\\theta^Tx\\supi{i} +\\log(1-e^{\\theta^Tx\\supi{i}}))]\\\\\n        =&\\log [\\exp(\\theta^Tx\\supi{i}\\cdot y\\supi{i}-\\log(e^{\\theta^Tx\\supi{i}}) +\\log(1-e^{\\theta^Tx\\supi{i}}))]\\\\\n        =&\\log \\left[\\exp\\left(\\theta^Tx\\supi{i}\\cdot y\\supi{i}-\\log\\left(\\frac{e^{\\theta^Tx\\supi{i}}}{1-e^{\\theta^Tx\\supi{i}}}\\right)\\right) \\right]\\\\\n        =&\\log \\left[\\exp\\left(\\theta^Tx\\supi{i}\\cdot y\\supi{i}-\\log\\left(\\frac{1}{e^{-\\theta^Tx\\supi{i}}-1}\\right)\\right) \\right]\\\\\n        =&\\theta^Tx\\supi{i}\\cdot y\\supi{i} +\\log \\left(e^{-\\theta^Tx\\supi{i}}-1\\right)\n    \\end{align*}Then we take the derivative with respect to $\\theta_j$\\begin{align*}\n        \\nabla_{\\theta_j} l_i(\\theta) =& x\\supi{i}_j\\cdot y\\supi{i}+\\frac{e^{-\\theta^Tx\\supi{i}}}{e^{-\\theta^Tx\\supi{i}}-1}(-x\\supi{i}_j)\\\\\n        =& x\\supi{i}_j \\left(y\\supi{i}-\\frac{e^{-\\theta^Tx\\supi{i}}}{e^{-\\theta^Tx\\supi{i}}-1}\\right)\\\\\n        =&x\\supi{i}_j \\left(y\\supi{i}-\\frac{1}{1-e^{\\theta^Tx\\supi{i}}}\\right)\n    \\end{align*}Finally, we can derive the stochastic gradient ascent update rule \\begin{align*}\n        \\theta_j \\coloneqq \\theta_j + \\alpha\\left(x\\supi{i}_j \\left(y\\supi{i}-\\frac{1}{1-e^{\\theta^Tx\\supi{i}}}\\right)\\right)\n    \\end{align*}\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "4fa5602e944ca3f2dd38029e377e4b090dd4fa5a", "size": 17391, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Supervised Learning/Problem Set 1 - Supervised Learning.tex", "max_stars_repo_name": "EitanJoseph/Standford-Machine-Learning", "max_stars_repo_head_hexsha": "5b1609a3fc1c7f32494a70ebc3f89d2ed8aed941", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-01T02:53:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-01T02:53:32.000Z", "max_issues_repo_path": "Supervised Learning/Problem Set 1 - Supervised Learning.tex", 
"max_issues_repo_name": "EitanJoseph/Standford-Machine-Learning", "max_issues_repo_head_hexsha": "5b1609a3fc1c7f32494a70ebc3f89d2ed8aed941", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Supervised Learning/Problem Set 1 - Supervised Learning.tex", "max_forks_repo_name": "EitanJoseph/Standford-Machine-Learning", "max_forks_repo_head_hexsha": "5b1609a3fc1c7f32494a70ebc3f89d2ed8aed941", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.3590604027, "max_line_length": 482, "alphanum_fraction": 0.5746650566, "num_tokens": 6874, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.787931188173138, "lm_q1q2_score": 0.5879061922168103}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{Some Remarks on Lumping}\n\\label{LUMPING}\n\nExplicit time integration schemes (two examples are discussed later in this \nsection), require very small time steps in order to maintain numerical stability. \nUnfortunately, these small time increments often result in a prohibitive \ncomputational cost. \nIn order to minimise these costs, a technique termed lumping can be utilised.\nLumping is applied to the coefficient matrix, reducing it to a simple diagonal \nmatrix. This can significantly improve the computational speed, because the \nsolution updates are simplified to a simple component-by-component \nvector-vector product. However, some care is required when making radical\napproximations such as these. In this section, two commonly applied lumping\ntechniques are discussed, namely row sum lumping \n\\index{linear solver!row sum lumping}\\index{row sum lumping}and HRZ \nlumping\\index{linear solver!HRZ lumping}\\index{HRZ lumping}.\n\n\\subsection{Scalar wave equation}\nOne example where lumping can be applied to a hyperbolic problem, is  \nthe scalar wave equation\n\\begin{eqnarray} \\label{LUMPING WAVE} \nu_{,tt}=c^2 u_{,ii} \\; .\n\\end{eqnarray}\nIn this example, both of the lumping schemes are tested against the reference solution\n\\begin{eqnarray} \\label{LUMPING WAVE TEST} \nu=sin(5 \\pi (x_0-c*t) )\n\\end{eqnarray}\nover the 2D unit square. Note that $u_{,i}n_i=0$ on faces $x_1=0$ and $x_1=1$.\nThus, on the faces $x_0=0$ and $x_0=1$ the solution is constrained.\n\nTo solve this problem the explicit Verlet scheme\\index{Verlet scheme} was used \nwith a constant time step size $dt$ given by \n\\begin{eqnarray} \\label{LUMPING WAVE VALET}\nu^{(n)}=2u^{(n-1)}-u^{(n-2)} + dt^2 a^{(n)}\n\\end{eqnarray}\nfor all $n=2,3,\\ldots$ where the upper index ${(n)}$ refers to values at \ntime $t^{(n)}=t^{(n-1)}+h$ and $a^{(n)}$ is the solution of \n\\begin{eqnarray} \\label{LUMPING WAVE VALET 2} \na^{(n)}=c^2 u^{(n-1)}_{,ii} \\; .\n\\end{eqnarray}\nThis equation can be interpreted as a PDE for the unknown value $a^{(n)}$,\nwhich must be solved at each time-step. \nIn the notation of equation~\\ref{LINEARPDE.SINGLE.1} we thus set $D=1$ and \n$X=-c^2 u^{(n-1)}_{,i}$. Furthermore, in order to maintain stability, \nthe time step size needs to fulfill the Courant-Friedrichs-Lewy condition \n(CFL condition).\n\\index{Courant condition}\n\\index{explicit scheme!Courant condition} \nFor this example, the CFL condition takes the form \n\\begin{eqnarray} \\label{LUMPING WAVE CFL} \ndt = f \\cdot \\frac{dx}{c} .\n\\end{eqnarray}\nwhere $dx$ is the mesh size and $f$ is a safety factor. In this example, \nwe use $f=\\frac{1}{6}$.\n\nFigure~\\ref{FIG LUMPING VALET A} depicts a temporal comparison between four \nalternative solution algorithms: the exact solution; using a full mass matrix;\nusing HRZ lumping; and row sum lumping. 
The domain utilised rectangular order 1 \nelements (element size is $0.01$) with observations taken at the point \n$(\\frac{1}{2},\\frac{1}{2})$. \nAll four solutions appear to be identical for this example. This is not the case\nfor order $2$ elements, as illustrated in Figure~\\ref{FIG LUMPING VALET B}.\nFor the order $2$ elements, the row sum lumping has become unstable. Row sum\nlumping is unstable in this case because for order $2$ elements, a row sum can \nresult in a value of zero. HRZ lumping does not display the same problems, but \nrather exhibits behaviour similar to the full mass matrix solution. When using\nboth the HRZ lumping and full mass matrix, the wave-front is slightly delayed \nwhen compared with the analytical solution.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_valet_a_1}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the acceleration formulation~\\ref{LUMPING WAVE VALET 2} of the \nVerlet scheme with order $1$ elements, element size $dx=0.01$, and $c=1$.}\n\\label{FIG LUMPING VALET A}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_valet_a_2}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the acceleration formulation~\\ref{LUMPING WAVE VALET 2} of the \nVerlet scheme with order $2$ elements, element size $0.01$, and $c=1$.}\n\\label{FIG LUMPING VALET B}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_valet_u_1}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the displacement formulation~\\ref{LUMPING WAVE VALET 3} of the \nVerlet scheme with order $1$ elements, element size $0.01$ and $c=1$.}\n\\label{FIG LUMPING VALET C}\n\\end{figure}\n\nAlternatively, one can directly solve for $u^{(n)}$ by inserting \nequation~\\ref{LUMPING WAVE VALET 2} into equation~\\ref{LUMPING WAVE VALET}:\n\\begin{eqnarray} \\label{LUMPING WAVE VALET 3} \nu^{(n)}=2u^{(n-1)}-u^{(n-2)} + (dt\\cdot c)^2 u^{(n-1)}_{,ii} \\; .\n\\end{eqnarray}\nThis can also be interpreted as a PDE that must be solved at each time-step, but \nfor the unknown $u^{(n)}$. \nAs per equation~\\ref{LINEARPDE.SINGLE.1} we set the general form coefficients to:\n$D=1$; $Y=2u^{(n-1)}-u^{(n-2)}$; and $X=-(dt\\cdot c)^2 u^{(n-1)}_{,i}$. \nFor the full mass matrix, the acceleration~\\ref{LUMPING WAVE VALET 2} and displacement~\\ref{LUMPING WAVE VALET 3} formulations\nare identical. \n\nThe displacement solution is depicted in Figure~\\ref{FIG LUMPING VALET C}. The\ndomain utilised order $1$ elements (for order $2$, both \nlumping methods are unstable). The solutions for the exact and the full mass \nmatrix approximation are almost identical while the lumping solutions, whilst \nidentical to each other, exhibit a considerably faster wave-front propagation \nand a decaying amplitude.\n\n\\subsection{Advection equation}\nConsider now a second example that demonstrates the advection equation\n\\begin{eqnarray} \\label{LUMPING ADVECTIVE} \nu_{,t}=-(v_i u)_{,i} \\; .\n\\end{eqnarray}\nwhere $v_i$ is a given velocity field. 
To simplify this example, set $v_i=(1,0)$ and\n\\begin{equation} \\label{LUMPING ADVECTIVE TEST} \nu(x,t)= \n\\left\\{\n   \\begin{array}{cl}\n   1 &  x_0 < t \\\\\n   0 &  x_0 \\ge t \n   \\end{array}\n\\right\\}.\n\\end{equation}\nThe solution scheme implemented, is the two-step Taylor-Galerkin scheme \n\\index{Taylor-Galerkin scheme}\n(which is in this case equivalent to SUPG\\index{SUPG}):\nthe incremental formulation is given as\n\\begin{eqnarray} \\label{LUMPING SUPG 1} \ndu^{(n-\\frac{1}{2})} = -\\frac{dt}{2} (v_i u^{(n-1)})_{,i} \\\\\ndu^{(n)} = -dt (v_i (u^{(n-1)}+du^{(n-\\frac{1}{2})}) )_{,i} \\\\\nu^{(n)} = u^{(n-1)} + du^{(n)} \n\\end{eqnarray}\nThis can be reformulated to calculate $u^{(n)}$ directly:\n\\begin{eqnarray} \\label{LUMPING SUPG 2} \nu^{(n-\\frac{1}{2})} = u^{(n-1)} - \\frac{dt}{2} (v_i u^{(n-1)})_{,i} \\\\\nu^{(n)} =  u^{(n-1)} - dt (v_i u^{(n-\\frac{1}{2})} )_{,i} \n\\end{eqnarray}\nIn some cases it may be possible to combine the two equations to calculate \n$u^{(n)}$ without the intermediate step. This approach is not discussed, because\n it is inflexible when additional terms (e.g. a diffusion term) are\n added to the right hand side. \n\nThe advection problem is thus similar to the wave propagation problem, because \nthe time step also needs to satisfy the CFL condition\n\\index{Courant condition}\\index{explicit scheme!Courant condition}. For the \nadvection problem, this takes the form\n\\begin{eqnarray} \\label{LUMPING ADVECTION CFL} \ndt = f \\cdot \\frac{dx}{\\|v\\|} .\n\\end{eqnarray}\nwhere $dx$ is the mesh size and $f$ is a safety factor. \nFor this example, we again use $f=\\frac{1}{6}$.\n\nFigures~\\ref{FIG LUMPING SUPG INC A} and~\\ref{FIG LUMPING SUPG INC B} illustrate\nthe four incremental formulation solutions: the true solution; the exact mass matrix;\nthe HRZ lumping; and the row sum lumping. Observe that for the order $1$ elements\ncase, there is little deviation from the exact solution before the wave front,\nwhilst there is a significant degree of oscillation after the wave-front has \npassed. For the order $2$ elements example, all of the numerical techniques fail. \n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_SUPG_du_1}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the incremental formulation~\\ref{LUMPING SUPG 1} of the \nTaylor-Galerkin scheme with order $1$ elements, element size $dx=0.01$, $v=(1,0)$.}\n\\label{FIG LUMPING SUPG INC A}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_SUPG_du_2}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the incremental formulation~\\ref{LUMPING SUPG 1} of the \nTaylor-Galerkin scheme  with order $2$ elements, element size $0.01$, $v=(1,0)$.}\n\\label{FIG LUMPING SUPG INC B}\n\\end{figure}\n\nFigure~\\ref{FIG LUMPING SUPG A} depicts the results from the direct formulation\nof the advection problem for an order $1$ mesh. Generally, the results have\nimproved when compared with the incremental formulation. The full mass matrix \nstill introduces some oscillation both before and after the arrival of the \nwave-front at the observation point. The two lumping solutions are identical, and\nhave introduced additional smoothing to the solution. There are no oscillatory\neffects when using lumping for this example. In Figure~\\ref{FIG LUMPING SUPG Ab}\nthe mesh or element size has been reduced from 0.01 to 0.002 units. 
As predicted\nby the CFL condition, this significantly improves the results when lumping is \napplied. However, when utilising the full mass matrix, a smaller mesh size will \nresult in post wave-front oscillations which are of higher frequency and slower to \ndecay.\n\nFigure~\\ref{FIG LUMPING SUPG B} illustrates the results when utilising elements \nof order $2$. The full mass matrix and HRZ lumping formulations are unable to \ncorrectly model the exact solution. Only the row sum lumping was capable of \nproducing a smooth and sensible result. \n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_SUPG_u_1}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the direct formulation~\\ref{LUMPING SUPG 2} of the \nTaylor-Galerkin scheme using order $1$ elements, element size $dx=0.01$, $v=(1,0)$.}\n\\label{FIG LUMPING SUPG A}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_SUPG_u_1b}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the direct formulation~\\ref{LUMPING SUPG 2} of the \nTaylor-Galerkin scheme using order $1$ elements, element size $dx=0.002$, $v=(1,0)$.}\n\\label{FIG LUMPING SUPG Ab}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=7cm]{lumping_SUPG_u_2}}\n\\caption{Amplitude at point $(\\frac{1}{2},\\frac{1}{2})$ using the direct formulation~\\ref{LUMPING SUPG 2} of the \nTaylor-Galerkin scheme  using order $2$ elements, element size $0.01$, $v=(1,0)$.}\n\\label{FIG LUMPING SUPG B}\n\\end{figure}\n\n\\subsection{Summary}\nThe examples in this section have demonstrated the capabilities and limitations\nof both HRZ and row sum lumping, with comparisons to the exact and full mass \nmatrix solutions. Wave propagation type problems that utilise lumping produce \nresults similar to those of the full mass matrix at significantly \nlower computational cost. An acceleration-based formulation with HRZ lumping \nshould be implemented for such problems, and can be applied to both order $1$ and\n order $2$ elements. \n\nIn transport type problems, it is essential that row sum lumping is used to \nachieve a smooth solution. 
Additionally, it is not recommended that second order\nelements be used in advection type problems.\n\n\n\n", "meta": {"hexsha": "630a0cf4e26beac20b5b8e83e6484298bd93381b", "size": 11896, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user/lumping.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user/lumping.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/user/lumping.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.5551020408, "max_line_length": 128, "alphanum_fraction": 0.7334398117, "num_tokens": 3576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.787931188173138, "lm_q1q2_score": 0.5879061833210284}}
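To make the row sum lumping of the section above concrete, here is a toy NumPy sketch (an assumed 1D linear-element setup chosen for illustration; this is not escript code): the consistent mass matrix is assembled and then lumped by summing each row onto the diagonal, so that a \"solve\" with the mass matrix reduces to a component-by-component division:\n\\begin{verbatim}\nimport numpy as np\n\n# Consistent mass matrix for linear elements on a uniform 1D mesh.\nn_el, h = 10, 0.1\nM = np.zeros((n_el + 1, n_el + 1))\nm_el = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # element mass matrix\nfor e in range(n_el):\n    M[e:e + 2, e:e + 2] += m_el\n\n# Row sum lumping: replace M by the diagonal matrix of its row sums.\nM_lumped = np.diag(M.sum(axis=1))\n\n# Solving M_lumped a = b is now just a vector-vector division,\n# which is the speed-up exploited by explicit time integrators.\nb = np.ones(n_el + 1)\na = b / np.diag(M_lumped)\nprint(np.allclose(M_lumped @ a, b))  # True\n\\end{verbatim}\n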
{"text": "\\section{Established Vortex models}\nSeveral vortex models are already commonly used to describe the behavior of \naxial wake vortices. Among these models are, in order of introduction, the \nRankine, Lamb-Oseen, Burnham-Hallock, and Proctor models \\cite{ahmad2014}. The \nsimplest and oldest model, the Rankine vortex, approximates the azimuthal \nvelocity of the vortex within the core zone as a rigid body of rotating fluid, \nwith that component of velocity decreasing as dictated by an inviscid potential \nflow model, predicting that the azimuthal velocity decreases with the inverse \nof radius beyond the central core boundary \\cite{rankine1869}. The azimuthal \nvelocity profile of a Rankine vortex is given by\n\n\\begin{equation}\nv_{\\theta}(r) = \\frac{\\Gamma_0}{2 \\pi r_{core}} \\frac{r}{r_{core}} \n\t\\text{ for } r \\leq r_{core}\n\t\\label{eq:rankine1}\n\\end{equation}\n\n\\begin{equation}\nv_{\\theta}(r) = \\frac{\\Gamma_0}{2 \\pi r}\n\t\\text{ for } r > r_{core}\n\t\\label{eq:rankine2}\n\\end{equation}\n\n\\noindent\nwhere $\\Gamma_0$ is the circulation and $r_{core}$ is the core radius,\nwhich is defined as the location of the maximum azimuthal velocity, \n$v_{\\theta, max}$. The azimuthal velocity $v_{\\theta}$, is proportional to \nradius $r$, in the core zone, but proportional to $1/r$ in the region outside \nthe core zone. This shear stress discontinuity makes the model unsuitable in \nthe vicinity of the core region of the vortex. The model is therefore \nunsuitable for the study of unsteady flow or turbulence, and the rigid body \nassumption within the core has been shown experimentally to be invalid.\n\nMuch later, the Lamb-Oseen vortex model was developed from an \nanalytical solution to the Navier-Stokes equations \\cite{lamb1932}. In this \nmodel, a potential \nline vortex with an infinite velocity at the centerline was introduced \ninstantaneously as a point, then subjected to viscous decay. This is an \nunsteady solution for the azimuthal \nvelocity profile at any time after the introduction of the discontinuity, given \nas\n\n\\begin{equation}\nv_{\\theta}(r,t) = \\frac{\\Gamma_0}{2 \\pi r_{core}}[1 - \n\t\t\t\t\t\t\texp(-(r / r_{core})^2)]\n\t\\label{eq:lamb1}\n\\end{equation}\n\n\\noindent\nwhere the core radius dilates according to \n\n\\begin{equation}\nr_{core}(t) = \\sqrt{4 \\nu t}\n\t\\label{eq:lamb2}\n\\end{equation}\n\n\\noindent\nUtilizing the kinematic viscosity, $\\nu$. Alternatively, the azimuthal velocity \nprofile can be expressed in terms of the maximum azimuthal velocity, \n$v_{\\theta, max}$, as\n\n\\begin{equation}\nv_{\\theta}(r,t) = v_{\\theta, max} (1 + \\frac{1}{2 \\alpha}) \\frac{r_{max}}{r} \n[1 - exp(- \\alpha \\frac{r^2}{r_{max}^2})]\n\\label{eq:lamb3}\n\\end{equation}\n\n\\noindent\nwhere the core radius dilates according to \n\n\\begin{equation}\nr_{core}(t) = \\sqrt{\\alpha 4 \\nu t}\n\\label{eq:lamb4}\n\\end{equation}\n\n\\noindent\nIn the case where $\\alpha = 1.25643$ \\cite{davenport1996}. Like the Rankine \nvortex, a \nLamb-Oseen vortex does not allow for unsteady flow. In addition, the predicted \ncenterline velocity at $r=0$ is undefined. It has utility in estimating average \nvelocity profiles, and has been used to initialize large eddy \nsimulations \\cite{hennemann2011}.\n\nA model which is widely used in the study of wake vortices, and demonstrates \ngreat suitability for modeling the velocity profiles of large experimental \nvortices is the Burnham-Hallock model \\cite{burnham1982}. 
This \nempirically-based model has been independently discovered by many \n\\cite{burnam2013}, and describes the azimuthal velocity profile as\n\n\\begin{equation}\nv_{\\theta}(r) = \\frac{\\Gamma_0}{2 \\pi r} \\Big(\\frac{r^2}{r^2 + \nr_{core}^2} \\Big)\n\\label{eq:burnham-hallock}\n\\end{equation}\n\n\\noindent\nSimilar to the Lamb-Oseen vortex, the Burnham-Hallock model has been widely \nused to initialize large eddy simulations and to model aircraft response to wake \nencounters \\cite{ahmad2014}.\n\n\\section{Vortex Model with non-equilibrium Pressure}\n\nNon-equilibrium pressure influences on an incompressible, Newtonian fluid can \nbe introduced through the application of the Hamilton variational principle.\nThe authors of \\cite{zuckerwar2006} employed this approach in an attempt to clarify confusion \nassociated with the volume or bulk viscosity coefficient for simple fluids. \nThey were concerned primarily with its relevance to high-speed compressible \nflows. When Hamilton's principle was subjected to conservation of mass, \nconservation of reacting species, and material entropy constraints, the \nprocedure resulted in \ntwo dissipative coefficients in the Navier-Stokes equation: a traditional \nvolume viscosity term and a term that was proportional to the material time \nderivative of the pressure gradient \\cite{zuckerwar2009}. The resulting \nmodified Navier-Stokes equation can be expressed as\n\n\\begin{equation}\n\\rho \\frac{Dv}{Dt} = - \\nabla\\Big[P - \\eta_p \\frac{DP}{Dt}\\Big] -\n\\rho \\nabla \\Omega + \n\\nabla[(\\eta_v - \\frac{2}{3} \\mu)(\\nabla \\cdot \\pmb{\\text{v}})] + \n\\nabla \\times (\\mu \\nabla \\times \\pmb{\\text{v}}) + \n2[\\nabla \\cdot (\\mu \\nabla)] \\pmb{\\text{v}}\n\\label{eq:modified_ns1}\n\\end{equation}\n\n\\noindent\nwhere $\\eta_p$ is the pressure relaxation coefficient, $\\eta_v$ is the volume \nviscosity, $P$ is pressure, $\\rho$ is density of the fluid, and $\\mu$ is the \ndynamic viscosity. If each of the thermophysical parameters is considered \nto be constant, and body forces are \nneglected, the resulting conservation of momentum equation simplifies to \n\n\\begin{equation}\n\\rho \\frac{Dv}{Dt} = - \\nabla P + \\eta_p \\nabla \\frac{DP}{Dt} + \n(\\eta_v + \\frac{4}{3}\\mu) \\nabla (\\nabla \\cdot\\pmb{\\text{v}})\n- \\mu \\nabla \\times (\\nabla \\times \\pmb{\\text{v}})\n\\label{eq:modified_ns2}\n\\end{equation}\n\n\\noindent\nAlternatively, Equation \\ref{eq:modified_ns2} can be expressed in index \nnotation as \n\n\\begin{equation}\n\\rho \\frac{Dv_i}{Dt} = -\\frac{\\partial P}{\\partial x_i} +\n\\eta_p \\frac{D}{Dt} \\Bigg( \\frac{\\partial P}{\\partial x_i} \\Bigg) + \n\\mu \\frac{\\partial^2 v_i}{\\partial x_{k}^2} + \n\\eta_p \\Bigg[\\frac{\\partial v_k}{\\partial x_i} \\frac{\\partial P}{\\partial x_k} -\n\\frac{\\eta_v + \\frac{1}{3}\\mu}{\\eta_p} \\frac{\\partial}{\\partial \nx_i} \\Bigg( \\frac{1}{\\rho} \\frac{D\\rho}{Dt} \\Bigg) \\Bigg]\n\\label{eq:modified_ns3}\n\\end{equation}\n\nThe term in brackets has been examined and appears to be related to sound \nproduction \nin incompressible flows \\cite{ash1998}, but in any scenario, it should be \nnegligibly small \nwhen multiplied by the pressure relaxation coefficient. 
Thus, Equation \n\\ref{eq:modified_ns3} can be dramatically simplified \\cite{ash2011}.\n\n\\begin{equation}\n\\rho \\frac{Dv_i}{Dt} = -\\frac{\\partial P}{\\partial x_i} +\n\\eta_p \\frac{D}{Dt} \\Bigg( \\frac{\\partial P}{\\partial x_i} \\Bigg) + \n\\mu \\frac{\\partial^2 v_i}{\\partial x_{k}^2}\n\\label{eq:modified_ns4}\n\\end{equation}\n\nWhen this simplified equation is applied to a steady, incompressible axial \nvortex with zero axial velocity, \nthe azimuthal velocity profile is described by\n\n\\begin{equation}\nv_\\theta(r) = \\frac{\\Gamma_0}{\\pi} \\frac{\\sqrt{2}}{R_\\Gamma \\sqrt{\\nu \\eta_p}}\n\\frac{(r / r_{core})}{(r/r_{core})^2 + 1}\n\\label{eq:ash_vortex_model}\n\\end{equation}\n\n\\noindent\nwhere $R_\\Gamma$ is the circulation based Reynolds number. The maximum \nazimuthal velocity occurs where $r = r_{core}$, and can be represented as\n\n\\begin{equation}\nv_{\\theta, max} = \\frac{\\Gamma_0}{\\pi} \\frac{1}{R_\\Gamma \\sqrt{2 \\nu \n\\eta_p}}\n\\label{eq:ash_vthetamax}\n\\end{equation}\n\n\\noindent\nTherefore, in terms of the maximum azimuthal velocity, Equation \\ref{eq:ash_vortex_model} \nbecomes\n\n\\begin{equation}\nv_\\theta(r) = 2 v_{\\theta, max}\\frac{(r / r_{core})}{(r/r_{core})^2 + 1}\n\\label{eq:ash_vortex_model2}\n\\end{equation}\n\nEquation \\ref{eq:ash_vortex_model2} has the same form as the \nempirically based Burnham-Hallock model, but is an exact solution to the \nmodified Navier-Stokes equations. As the pressure relaxation coefficient \ntends to zero, the solution becomes a potential vortex similar to the \nLamb-Oseen model. For very large pressure relaxation coefficients, the velocity \ndistribution becomes similar to that of a rigid body rotation, as in the inner \nportion of the Rankine model.\n\n\n\\section{Turbulent Structure of Axial Vortices}\n\nUnderstanding the turbulent structure of an axial vortex is critical to \nunderstanding the mechanisms controlling vortex decay. Axial wake vortices \ntypically \nstart with axial velocity deficits at their centers and strong rotational \nmomentum about their cores. Turbulence is responsible for the observed rapid \ndecay of the axial velocity deficit in the core region, but its influence on \nthe relatively slow decay of the rotational velocity profile is unclear. \nVortices have been shown to quickly absorb large \nscale turbulent structures, and break them up into smaller ones \n\\cite{ragab1994,beninati2005}. This activity \nis associated with the exchange of momentum between the vortex core and the \nrotating surroundings, strengthening the near rigid body rotation of the core \nregion and subsequently relaminarizing the inner vortex region \n\\cite{bandyopadhyay1991}. Organized motions surrounding the core \ncause intermittent ejections of fluid with low momentum and high vorticity, and \nentrainment and laminarization of highly turbulent fluid into the core \n\\cite{bandyopadhyay1991}. Direct Numerical Simulation (DNS) has shown \nthat intense turbulent structures in the free stream impact the longevity of a \nvortex formed within it, with large scale motions causing long wave instability \nand slight periodic movements of the vortex core (wobbling), and small scale \nmotions enhancing the diffusion of vorticity \nbetween the core region and the free stream \\cite{risso1997}. It is \nunsurprising, then, that the turbulent structure of an axial vortex depends\nupon local atmospheric conditions, and thus the ambient environment can \ninfluence the rate at which a vortex decays. 
This has a significant impact on \naviation safety \\cite{ash1998}. \n\nThe vortex model developed by Ash, Zardadkhan \nand Zuckerwar comes from an exact solution to the modified Navier-Stokes \nequations, employing a simple turbulent eddy viscosity assumption \n\\cite{ash2011}. \nSince that model merely changes the ratio of the effective \nviscosity to the pressure relaxation coefficient, the resulting azimuthal velocity \nprofile remains identical to the empirical Burnham-Hallock model. \nSpecific assumptions for the turbulent viscosity model and Reynolds stresses \nwill be examined in the present study.\n", "meta": {"hexsha": "841ae68ae43ad497a6e9445c09d0821652ba9723", "size": 10346, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "texdocs/docs/intro/vortices.tex", "max_stars_repo_name": "Jwely/thesis-pivpr", "max_stars_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "texdocs/docs/intro/vortices.tex", "max_issues_repo_name": "Jwely/thesis-pivpr", "max_issues_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "texdocs/docs/intro/vortices.tex", "max_forks_repo_name": "Jwely/thesis-pivpr", "max_forks_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.7520661157, "max_line_length": 80, "alphanum_fraction": 0.7629035376, "num_tokens": 2873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.587803650069705}}
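As a quick numerical cross-check of the vortex models above, the following minimal Python sketch (not part of the original thesis; the values of $\Gamma_0$ and $r_{core}$ are illustrative) evaluates the Rankine, Lamb-Oseen, and Burnham-Hallock azimuthal velocity profiles on a common radial grid:

\begin{verbatim}
import numpy as np

GAMMA0 = 1.0   # circulation Gamma_0, m^2/s (illustrative value)
R_CORE = 1.0   # core radius r_core, m (illustrative value)

def rankine(r):
    # Rigid-body rotation inside the core, potential flow outside.
    inner = GAMMA0 * r / (2.0 * np.pi * R_CORE**2)
    outer = GAMMA0 / (2.0 * np.pi * np.maximum(r, 1e-12))
    return np.where(r <= R_CORE, inner, outer)

def lamb_oseen(r):
    # The centerline value is taken as the r -> 0 limit, which is 0.
    rs = np.maximum(r, 1e-12)
    return GAMMA0 / (2.0 * np.pi * rs) * (1.0 - np.exp(-(rs / R_CORE)**2))

def burnham_hallock(r):
    # Algebraically identical to Gamma0/(2 pi r) * r^2/(r^2 + r_core^2).
    return GAMMA0 * r / (2.0 * np.pi * (r**2 + R_CORE**2))

r = np.linspace(0.0, 5.0, 11)
for profile in (rankine, lamb_oseen, burnham_hallock):
    print(profile.__name__, np.round(profile(r), 4))
\end{verbatim}

Far from the core all three profiles approach the potential-vortex value $\Gamma_0 / (2 \pi r)$, while near the centerline the Lamb-Oseen and Burnham-Hallock profiles roll off smoothly and the Rankine profile is linear in $r$.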
{"text": "\n\\chapter{Periodicity, Reciprocal Lattice, Floquet Modes}\n\\label{chap:periodicity}\n\nThis chapter discusses the direct and reciprocal lattice vectors, and\ndefines the Floquet modes that can exist in the dielectric regions.\n\n\\section{The Direct Lattice}\nWe consider a structure with discrete translational invariance in two space\ndimensions.  The periodicity is characterized by the {\\em direct lattice\n  vectors} $\\s_1$ and $\\s_2$, a pair of real vectors satisfying\n\\begin{equation}\n  \\s_1 \\bdot \\z = \\s_2 \\bdot \\z = 0, \\quad A \\equiv \\z \\bdot \\s_1 \\cross \\s_2 > 0.\n\\end{equation}\nThe structure is invariant to a translation consisting of any integer number of\nshifts in the $\\s_1$ or $\\s_2$ directions.  Such periodicity\nis exhibited by idealized models of frequency selective surfaces\n(FSSs) and phased arrays, for example.\nThis periodicity gives rise to the concept of the {direct lattice},\nthe set of points $\\vecrho_{mn} = \\x x_{mn} + \\y y_{mn}$ satisfying\n\\begin{equation}\n  \\vecrho_{mn} = m \\s_1 + n \\s_2, \\quad \\text{for $m$ and $n$ any integers.}\n\\end{equation}\nA periodic structure and its direct lattice is shown in\nFigure~\\ref{fig:direct}.\n\\begin{figure}[htbp]\n  \\begin{center}\n        \\fbox{%\n      \\psset{unit=0.005in}\n      \\pspicture*(-1,-10)(660,410)\n                                % Paint it on a white, opaque background:\n      \\psframe*[linecolor=white,fillcolor=white,fillstyle=solid](-1,-10)(660,410)\n      \\multips(-60.62,-105.0)(60.62,105.0){5}%\n      {%\n        \\multips(-242.48,0)(121.24,0){8}%\n        {%\n          \\pspolygon(73.68,18.0)(108.18,18)(130.77,52.5)%\n          (108.18,87.0)(73.68,87.0)(51.096,52.5)%\n          %\n          \\psline[linestyle=dashed,linewidth=0.5pt](0,0)(121.24,0)\n          \\psline[linestyle=dashed,linewidth=0.5pt](0,0)(60.62,105.0)\n          \\qdisk(0,0){1.5pt}\n        }%\n      }\n    \\rput(181.86,105){%\n      \\psline[linewidth=1pt]{->}(0,0)(121.24,0)\n      \\psline[linewidth=1pt]{->}(0,0)(60.62,105.0)\n      \\rput(90,-13){$\\s_1$}\n      \\rput[l](10,90){$\\s_2$}\n      }\n    \\endpspicture\n      }\n    \\caption{A frequency selective surface consisting of a thin metal plate\n        with hexagonal perforations, and the associated direct\n        lattice. The location selected for the lattice origin is arbitrary.}\n    \\label{fig:direct}\n  \\end{center}\n\\end{figure}\n\n\\section{Periodic Boundary Conditions and the Unit Cell}\n\\label{sec:pbcuc}\nWe now assume that an electromagnetic excitation of some type is\napplied to the structure.  In the case of a FSS, the excitation takes\nthe form of an incident plane wave.  In the case of a phased array,\nthe excitation may be an incident plane wave, or perhaps a set of\nincoming waveguide modes in each of the excitation ports of the\nradiating elements.  Denote the spatial variation of the excitation by\nthe function $V(\\r)$.  We insist that the function $V$ satisfy the\nfollowing quasi-periodicity condition:\n\\begin{equation}\n  \\label{eq:floquetbc}\n  V(\\r + m\\s_1 + n\\s_2) = V(\\r) e^{-j(m\\psi_1 + n\\psi_2)}, \\quad\n  \\text{for any integers $m$ and $n$}\n\\end{equation}\nwhere $\\psi_1$ and $\\psi_2$ are given real numbers, which we will\nrefer to as the ``unit cell incremental phase shifts''.  
By the\ntranslational invariance of Maxwell's equations and given the discrete\ntranslational invariance of the structure, it is clear that all\nelectromagnetic fields, charges, etc., resulting from the given\nexcitation must also satisfy \\eqref{eq:floquetbc}, which we refer to\nas the ``Floquet boundary condition.''\n\nSince the fields throughout the structure satisfy\nEquation~\\eqref{eq:floquetbc}, it suffices to restrict consideration\nto a single unit cell $U$, defined\\footnote{The definition of a unit\n  cell is not unique.  The present definition is most useful for our\n  purposes.}\nas the set of points $\\r$\nsatisfying\n\\begin{equation}\n  \\label{eq:unitcell}\n  U = \\{\\r: \\; \\r = \\xi_1 \\s_1 + \\xi_2 \\s_2 + \\z z, \\quad 0 \\leq \\xi_1 , \\xi_2\n  \\leq 1\\},\n\\end{equation}\nwhere $\\xi_1$ and $\\xi_2$ are the so-called ``normalized area\ncoordinates,'' each constrained to the interval $[0,1]$.\nWe seek a set of modes that can propagate in the unit cell, subject to\nan appropriate set of boundary conditions to be stated below.  Let\n$E(\\r)$ be some rectangular component of electric or magnetic field\nevaluated at a point $\\r = \\x x + \\y y + \\z z = \\xi_1 \\s_1 + \\xi_2\n\\s_2 + \\z z$ within the unit cell.  \n%Let $f(\\xi_1,\\xi_2) = \n%E(\\xi_1\\s_1+\\xi_2\\s_2+\\z z) =E(\\r)$.\nThen the quasi-periodic boundary condition can be expressed as\n\\begin{subequations}\n  \\label{eq:cellfloquetbc}\n  \\begin{align}\n    E(\\s_1 + \\xi_2 \\s_2 + \\z z) &= E(\\xi_2 \\s_2 + \\z z) e^{-j\\psi_1} \\\\\n    E(\\xi_1 \\s_1 + \\s_2 + \\z z) &= E(\\xi_1 \\s_1 + \\z z) e^{-j\\psi_2} \n  \\end{align}\n\\end{subequations}\n  which must hold for all $z$ and for all $\\xi_1$ and $\\xi_2$ in the\n  interval $[0,1]$.  \n\\subsection{Mode Potentials}\n\nFollowing the formalism of Section~5.1 of \\cite{coll:91}, for both TE\nand TM modes we seek mode potentials $\\Psi(\\vecrho) = \\Psi(x,y)$ that satisfy the\ntwo-dimensional Helmholtz equation\n\\begin{equation}\n  \\label{eq:laplace2d}\n  \\laplace_t \\Psi + k_c^2 \\Psi = 0\n\\end{equation}\nwithin the unit cell in addition to the boundary \nconditions \\eqref{eq:cellfloquetbc}.\nTo simplify the following derivation, let $f(\\xi_1,\\xi_2) =\n\\Psi(\\xi_1\\s_1 + \\xi_2\\s_2) = \\Psi(x,y)$.\nThen the boundary condition \\eqref{eq:cellfloquetbc} satisfied by $\\Psi$\ncan be expressed more simply in terms of $f$ as\n\\begin{subequations}\n  \\label{eq:cellfloquetbcf}\n  \\begin{align}\n    f(1, \\xi_2) &= f(0,\\xi_2) e^{-j\\psi_1} \\\\\n    f(\\xi_1, 1) &= f(\\xi_1, 0) e^{-j\\psi_2} \n  \\end{align}\n\\end{subequations}\nNote that $f$ is periodic in $\\xi_1$ and $\\xi_2$ with unit period \nexcept for the\nprogressive phase shifts $\\psi_1$ and $\\psi_2$.  This motivates us to\nconsider the function $f(\\xi_1,\\xi_2) e^{j(\\xi_1\\psi_1 +\n  \\xi_2\\psi_2)}$ which is indeed periodic and can therefore be\nexpanded in a double Fourier series:\n\\begin{equation*}\n  f(\\xi_1,\\xi_2) e^{j(\\xi_1\\psi_1 + \\xi_2\\psi_2)} = \n  \\sum_{m=-\\infty}^{\\infty} \\sum_{n=-\\infty}^{\\infty} \\!\\!\\!\n  f_{mn} \\, e^{-j(m2\\pi\\xi_1 + n2\\pi\\xi_2)}\n\\end{equation*}\nor equivalently\n\\begin{equation}\n  \\label{eq:fourier}\n  f(\\xi_1,\\xi_2) =\n  \\sum_{m=-\\infty}^{\\infty} \\sum_{n=-\\infty}^{\\infty} \\!\\!\\!\n  f_{mn} \\, e^{-j[\\xi_1(\\psi_1+m2\\pi) + \\xi_2(\\psi_2+n2\\pi)]}.\n\\end{equation}\nWe wish to write Equation~\\eqref{eq:fourier} explicitly in terms of\n$\\vecrho = \\x x + \\y y$.  
Recalling that $\\vecrho = \\xi_1\\s_1 +\n\\xi_2\\s_2$ and writing the relation in matrix form yields\n\\begin{equation}\n  \\colvec{x\\\\y} = \n  \\begin{bmatrix}\n    s_{1x} & s_{2x} \\\\\n    s_{1y} & s_{2y} \n  \\end{bmatrix}\n    \\colvec{\\xi_1 \\\\ \\xi_2}.\n\\end{equation}\nInverting, we obtain\n\\begin{align}\n\\colvec{\\xi_1 \\\\ \\xi_2} &=\n  \\frac{1}{A}\n  \\begin{bmatrix}\n    s_{2y} & -s_{2x} \\\\\n    -s_{1y} & s_{1x} \n  \\end{bmatrix}\n      \\colvec{x\\\\y} \\nonumber \\\\\n      &=\n      \\frac{1}{A}\n      \\colvec{s_{2y}x -s_{2x}y \\\\\n        -s_{1y}x + s_{1x}y}  \\nonumber \\\\\n      &=\n      \\frac{1}{A}\n      \\colvec{\\s_2 \\cross \\z \\bdot \\vecrho \\\\\n        \\z \\cross \\s_1 \\bdot \\vecrho}  \\nonumber \\\\\n      &=\n      \\frac{1}{2\\pi}\n      \\colvec{\n        \\vecbeta_1 \\bdot \\vecrho \\\\\n        \\vecbeta_2 \\bdot \\vecrho\n        }\n\\end{align}\nwhere \n\\begin{equation}\n  \\label{eq:betadef}\n  \\vecbeta_1 = \\frac{2\\pi}{A} \\s_2\\cross\\z, \\quad\n  \\vecbeta_2 = \\frac{2\\pi}{A} \\z \\cross \\s_1,\n\\end{equation}\nare the {\\em reciprocal lattice vectors} \\cite{dufo:67,kitt:66}\nand $A = \\z \\bdot \\s_1 \\cross \\s_2$ is the area of the unit cell.\nSubstituting \\eqref{eq:betadef} into \\eqref{eq:fourier}, we obtain the\ndesired representation of the mode potential:\n\\begin{equation}\n    f(\\xi_1,\\xi_2) = \\Psi(x,y) = \n  \\sum_{m=-\\infty}^{\\infty} \\sum_{n=-\\infty}^{\\infty}\n  f_{mn} e^{-j\\vecbeta_{mn} \\bdot \\vecrho}\n\\end{equation}\nwhere \n\\begin{subequations}\n  \\begin{align}\n    \\vecbeta_{mn} &= \\vecbeta_{00} + m\\vecbeta_1 + n\\vecbeta_2, \\\\\n    \\vecbeta_{00} &= \\frac{\\psi_1}{2\\pi}\\vecbeta_1 +  \\frac{\\psi_2}{2\\pi}\\vecbeta_2. \n  \\end{align}\n\\end{subequations}\nWe see that the mode potentials assume the form of a discrete set of plane waves for\nboth TE and TM modes.  The cutoff wavenumber $k_c$ of a plane wave\nwith transverse propagation vector $\\vecbeta_{mn}$ is given by \n\\begin{equation}\n  k_c = \\beta_{mn} \\equiv \\sqrt{\\vecbeta_{mn} \\bdot \\vecbeta_{mn}}.\n\\end{equation}\nFor a lossless medium a finite number of modes may satisfy $k >\n\\beta_{mn}$; these are the propagating modes.  The remaining modes,\ncomprising a denumerably infinite set, are cut-off (or evanescent).\nThe situation is depicted in Figure~\\ref{fig:reciprocal} for the\nstructure of Figure~\\ref{fig:direct}.\n\\begin{figure}[tbp]\n  \\begin{center}\n        \\fbox{%\n      \\psset{unit=0.006in}\n      \\pspicture*(20,10)(660,400)\n                                % Paint it on a white, opaque background:\n      \\psframe*[linecolor=white,fillcolor=white,fillstyle=solid](20,10)(660,400)\n      \\multips(-105,303.1)(105.0,-60.62){8}%\n      {%\n        \\multips(0,-242.48)(0,121.24){8}%\n        {%\n          \\psline[linestyle=dashed,linewidth=0.5pt](0,0)(0,121.24)\n          \\psline[linestyle=dashed,linewidth=0.5pt](0,0)(105.0,-60.62)\n          \\psline[linewidth=1pt]{->}(0,0)(35,25)\n          \\qdisk(0,0){1.5pt}\n        }%\n      }\n    \\rput(315,181.86){%\n      \\psline[linewidth=1pt]{->}(0,0)(105.0,-60.62)\n      \\psline[linewidth=1pt]{->}(0,0)(0,121.24)\n      \\rput(75,-65){$\\vecbeta_1$}\n      \\rput(-20,95){$\\vecbeta_2$}\n      \\pscircle[linewidth=0.75pt,linestyle=dashed](0,0){70}\n      \\rput*(62,25){$\\vecbeta_{0,0}$}\n      }\n    \\endpspicture\n      }\n    \\caption[Reciprocal lattice.]{The reciprocal lattice for the\n      structure of Figure~\\ref{fig:direct}.  
Note that this lattice is\n        a scaled and rotated (by $90^\\circ$) version of the direct\n        lattice. \n        Modes are located at the tips of the small, offset vectors.\n        Propagating modes lie within the dashed circle of radius $k$\n        centered on the origin.  The offset vector $\\vecbeta_{00}$\n        accounts for the effects of the impressed phase shift.\n        }\n    \\label{fig:reciprocal}\n  \\end{center}\n\\end{figure}\n\n\nFollowing the prescription given in \\cite{coll:91}, we may now write\ndown the explicit forms of the modal fields:\n\n\\subsection{TE modes}\n\\label{sec:temodes}\n\n\\subsubsection{Oblique Incidence}\nWe first assume that $\\beta_{mn} \\neq 0$, in which case \nthe modal fields are\n\\begin{subequations}\n  \\label{eq:TEmodes}\n  \\begin{align}\n    \\Psi_{mn}\\TE(\\vecrho) &= \\frac{c_{mn}\\TE}{k\\eta\\beta_{mn}}\n    e^{-j\\vecbeta_{mn} \\bdot \\vecrho} \\\\\n    \\gamma_{mn} &= \\sqrt{\\beta_{mn}^2 - k^2} \\qquad \\text{(1st quadrant)} \\\\\n    Z_{mn}\\TE &= \\frac{1}{Y_{mn}\\TE} = \\frac{jk\\eta}{\\gamma_{mn}} \\\\\n    (\\x\\x+\\y\\y) \\bdot \\H_{mn}\\TE(\\r) &= \\pm \\gamma_{mn} e^{\\pm\\gamma_{mn}z}\n    \\,\\gradient_t \\Psi_{mn}\\TE \\nonumber \\label{eq:htransTE} \\\\\n    &= \\mp j \\gamma_{mn} \\Psi_{mn}\\TE e^{\\pm\\gamma_{mn}z}  \\vecbeta_{mn}\n    \\nonumber \\\\\n    &= \\pm c_{mn}\\TE Y_{mn}\\TE e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm\n      \\gamma_{mn} z} \\betahat_{mn}\n    \\\\\n    \\z \\bdot \\H_{mn}\\TE(\\r) &= \\beta_{mn}^2 \\Psi_{mn}\\TE\n    e^{\\pm\\gamma_{mn} z} \\nonumber  \\\\\n    &= c_{mn}\\TE \\frac{\\beta_{mn}}{k\\eta} e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm\n      \\gamma_{mn} z} \\\\\n    \\E_{mn}\\TE(\\r) &= \\pm Z_{mn}\\TE \\z \\cross \\H_{mn}(\\r) \\nonumber \\\\\n     &= c_{mn}\\TE  e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm \\gamma_{mn} z}\n     \\z\\cross\\betahat_{mn} \\label{eq:etransTE}\n  \\end{align}\n\\end{subequations}\nwhere we have used  $  \\betahat_{mn} = \\vecbeta_{mn} / \\beta_{mn}$.\n\n\n\n\n\\subsubsection{Normal Incidence}\nIn the case where $\\beta_{mn} = 0$, we use the following convention:\n\\begin{equation}\n  \\betahat_{mn} = \\x, \\label{eq:betahatnormal}\n\\end{equation}\nso that the final formulas in \\eqref{eq:TEmodes} remain valid.\n\n\\subsection{TM modes}\n\\label{sec:tmmodes}\n\n\\subsubsection{Oblique Incidence}\nWe first assume that $\\beta_{mn} \\neq 0$, in which case \n\\begin{subequations}\n  \\label{eq:TMmodes}\n  \\begin{align}\n    \\Psi_{mn}\\TM(\\vecrho) &= \\frac{\\pm j c_{mn}\\TM}{\\gamma_{mn}\\beta_{mn}}\n    e^{-j\\vecbeta_{mn} \\bdot \\vecrho} \\\\\n    \\gamma_{mn} &= \\sqrt{\\beta_{mn}^2 - k^2} \\qquad \\text{(1st quadrant)} \\\\\n    Y_{mn}\\TM &= \\frac{1}{Z_{mn}\\TM} = \\frac{jk}{\\eta\\gamma_{mn}} \\\\\n    (\\x\\x+\\y\\y) \\bdot \\E_{mn}\\TM(\\r) &= \\pm \\gamma_{mn} e^{\\pm\\gamma_{mn}z}\n    \\,\\gradient_t \\Psi_{mn}\\TM \\nonumber \\\\\n    &= \n    \\mp j \\gamma_{mn} \\Psi_{mn}\\TM e^{\\pm\\gamma_{mn}z}  \\vecbeta_{mn}\n    \\nonumber \\\\\n    &= c_{mn}\\TM e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm \\gamma_{mn} z}\n    \\betahat_{mn} \\\\\n    %\n    \\z \\bdot \\E_{mn}\\TM(\\r) &= \n    \\beta_{mn}^2 \\Psi_{mn}\\TM e^{\\pm\\gamma_{mn} z} \\nonumber \\\\\n    &= \\pm j c_{mn}\\TM \\frac{\\beta_{mn}}{\\gamma_{mn}} \n    e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm \\gamma_{mn} z} \\\\\n    %\n    \\H_{mn}\\TM(\\r) &= \\mp Y_{mn}\\TM \\z \\cross \\E_{mn}\\TM(\\r) \\nonumber \\\\\n     &= \\mp c_{mn}\\TM Y_{mn}\\TM\n     
e^{-j\\vecbeta_{mn}\\bdot\\vecrho \\pm \\gamma_{mn} z}\n     \\z\\cross\\betahat_{mn} \n  \\end{align}\n\\end{subequations}\n\n\\subsubsection{Normal Incidence}\nIn the case where $\\beta_{mn} = 0$, we again employ the convention\n\\eqref{eq:betahatnormal}, so that the final formulas in\n\\eqref{eq:TMmodes}\nremain valid.\n\n\\subsection{Mode Normalization}\nSo far we have not specified values for \nthe set of mode normalization constants $\\{c_{mn}\\TE\\}$ and\n$\\{c_{mn}\\TM\\}$.  These can be specified in any convenient manner.  We\nwill choose a normalization that allows us to easily interpret the\nincident and reflected traveling wave variables in terms of power\ntransported by the modes.  Consider a source-free slab of the unit cell bounded by\n$z = \\text{constant}$ planes, filled with homogeneous dielectric\nmaterial.  Taking account of the results of Sections~\\ref{sec:temodes}\nand~\\ref{sec:tmmodes}, we see that the transverse components of the\nmost general electromagnetic field that can exist in\nthis region can be written as\n\\begin{subequations}\n  \\label{eq:modalexpansions}\n  \\begin{align}\n    (\\x\\x+\\y\\y) \\bdot \\E(\\r) &= \n    \\mkern -15mu \\sum_{(p,m,n) \\in S} \\mkern -15mu\n    \\e_{pmn}(x,y) \n    \\left(\n      a_{pmn} e^{-\\gamma_{mn} z} + b_{pmn} e^{\\gamma_{mn} z}\n    \\right), \\\\\n    %\n    (\\x\\x+\\y\\y) \\bdot \\H(\\r) &= \n    \\mkern -15mu \\sum_{(p,m,n) \\in S} \\mkern -15mu\n    \\h_{pmn}(x,y) \n    \\left(\n      a_{pmn} e^{-\\gamma_{mn} z} - b_{pmn} e^{\\gamma_{mn} z}\n    \\right),\n  \\end{align}\n\\end{subequations}\nwhere the summations are taken over the set of integer triples \n$S = \\{(p,m,n) \\in \\{1,2\\}\n\\cross \\Integers \\cross \\Integers\\}$, $\\Integers$ is the set of\nall integers, $p=1$ corresponds to TE modes, and $p=2$ corresponds to\nTM modes.  The modal fields $\\e_{pmn}$ and $\\h_{pmn}$ are given\nexplicitly by\n\\begin{subequations}\n  \\label{eq:modaldefs}\n  \\begin{align}\n    \\e_{1mn} &= \n      c_{1mn}  \\z \\cross \\betahat_{mn} e^{-j\\vecbeta_{mn} \\bdot \\vecrho}\n       \\\\\n    %\n    \\e_{2mn} &= \n      c_{2mn}  \\betahat_{mn} e^{-j\\vecbeta_{mn} \\bdot \\vecrho}\n      \\hphantom{\\z \\cross \\mbox{}}\n      \\\\\n    %\n    \\h_{pmn} &= Y_{pmn} \\z \\cross \\e_{pmn} \n  \\end{align}\n\\end{subequations}\nwhere \n\\begin{subequations}\n  \\begin{align}\n    Y_{1mn} &= Y_{mn}\\TE, \\quad  c_{1mn} = c_{mn}\\TE \\\\\n    Y_{2mn} &= Y_{mn}\\TM, \\quad  c_{2mn} = c_{mn}\\TM.\n  \\end{align}\n\\end{subequations}\n\n\nBy virtue of the orthogonality of the Floquet modes (see\nAppendix~\\ref{app:orthogonal}), the \ncomplex power $P$ traveling down the unit cell in the $z$ direction can\nbe expressed as a sum of the individual complex powers transported by each mode:\n\\begin{align}\n  P &= \\iint_{U'} \\E \\cross \\H^* \\bdot \\z \\; \\d A\n  \\nonumber \\\\\n  &= \\sum_{(p,m,n) \\in S}\n  (a_{pmn} + b_{pmn})(a_{pmn}^* - b_{pmn}^*) \\iint_{U'}\n  \\e_{pmn} \\cross \\h_{pmn}^* \\bdot \\z \\; \\d A \\nonumber \\\\\n  %\n  &= \\sum_{(p,m,n) \\in S} \\mkern -16 mu\n  \\left[\n    \\abs{a_{pmn}}^2 - \\abs{b_{pmn}}^2 - 2j\\Imag{a_{pmn}b_{pmn}^*}\n  \\right]\n  P_{pmn}\n\\end{align}\nwhere $U'$ is the restriction of the unit cell to the plane $z=0$,\nand\n\\begin{equation}\n  P_{pmn} = \\iint_{U'} \\mkern -8 mu\n  \\e_{pmn} \\cross \\h_{pmn}^* \\bdot \\z \\; \\d A  \n  = \\abs{c_{pmn}}^2\n  Y_{pmn}^* A\n\\label{eq:Ppmn}\n\\end{equation}\nis the complex power associated with each unit-strength positive-going\nmode.  
Its value depends on the choice of mode normalization constant\n$c_{pmn}$, which has not yet been specified.  Note that if $P_{pmn}$\nis equal to $1$, then the time-average (real) power carried down the\nguide in the $+z$ direction is just $\\abs{a_{pmn}}^2 -\n\\abs{b_{pmn}}^2$, consistent\nwith the usual definition of traveling wave variables \\cite{bbse:69}.\nSuch a normalization is not always possible, since in many cases\nof practical interest, $P_{\\mkern -4mu pmn}$ has zero real part.\nConsider the case of a lossless medium\nwith $\\beta_{mn} > k$.  Then $\\gamma_{mn}$ is pure real, so that\n$Y_{pmn}$ is pure imaginary.  It follows from Equation~\\eqref{eq:Ppmn}\nthat $P_{\\mkern -4mu pmn}$ is pure imaginary, since $A$ and $\\abs{c_{pmn}}$ are\nboth pure real.  Therefore, we content ourselves with setting the\nmagnitude of $P_{\\mkern -4mu pmn}$ equal to $P_0 \\equiv \\text{one watt}$:\n\\begin{equation}\n  P_{pmn} = \\frac{Y_{pmn}^*}{\\abs{Y_{pmn}}} P_0.\n  \\label{eq:Pset}\n\\end{equation}\nSubstituting \\eqref{eq:Pset} into \\eqref{eq:Ppmn} determines the value\nof the mode normalization constant (up to an arbitrary phase):\n\\begin{equation}\n  \\abs{c_{pmn}} = \\sqrt{\\frac{P_0}{A \\abs{Y_{pmn}}}}.\n\\end{equation}\nThis choice results in a unitary scattering matrix for propagating\nmodes in lossless media.  It will be convenient for later work to\nchoose the phase of $c_{pmn}$ as follows:\n\\begin{equation}\n  c_{pmn} = \\sqrt{\\frac{P_0}{A Y_{pmn}}},\n  \\label{eq:modenormalization}\n\\end{equation}\nwhere we agree to take that square root of $Y_{pmn}$ having positive\nreal part.\n\n", "meta": {"hexsha": "d69080c28d89cc5b25a340c107e888ed36f57d7d", "size": 17742, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/TheoryDocs/chapter2.tex", "max_stars_repo_name": "mortenpi/PSSFSS.jl", "max_stars_repo_head_hexsha": "9a3a6503d9266eee57771612a47a3e9b77bc9a2e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-05-21T15:44:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T17:29:19.000Z", "max_issues_repo_path": "docs/TheoryDocs/chapter2.tex", "max_issues_repo_name": "mortenpi/PSSFSS.jl", "max_issues_repo_head_hexsha": "9a3a6503d9266eee57771612a47a3e9b77bc9a2e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2020-10-08T22:20:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-26T01:01:09.000Z", "max_forks_repo_path": "docs/TheoryDocs/chapter2.tex", "max_forks_repo_name": "mortenpi/PSSFSS.jl", "max_forks_repo_head_hexsha": "9a3a6503d9266eee57771612a47a3e9b77bc9a2e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-20T13:58:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-24T09:16:48.000Z", "avg_line_length": 38.1548387097, "max_line_length": 85, "alphanum_fraction": 0.6480103709, "num_tokens": 6472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.8006920020959544, "lm_q1q2_score": 0.5878036430867845}}
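To tie the chapter's construction together, the following minimal Python sketch (all numeric values are illustrative assumptions) computes the reciprocal lattice vectors of Equation~\eqref{eq:betadef}, the offset vector $\vecbeta_{00}$, and the finite set of propagating modes satisfying $\beta_{mn} < k$ in a lossless medium:

\begin{verbatim}
import numpy as np

s1 = np.array([1.0, 0.0])            # direct lattice vectors (assumed)
s2 = np.array([0.5, np.sqrt(3)/2])
psi1, psi2 = 0.3, -0.7               # incremental phase shifts (assumed)
k = 7.0                              # wavenumber (assumed)

A = s1[0]*s2[1] - s1[1]*s2[0]        # unit cell area, z . (s1 x s2)
beta1 = 2*np.pi/A * np.array([ s2[1], -s2[0]])   # (2 pi / A) s2 x z
beta2 = 2*np.pi/A * np.array([-s1[1],  s1[0]])   # (2 pi / A) z x s1
beta00 = psi1/(2*np.pi)*beta1 + psi2/(2*np.pi)*beta2

# Propagating Floquet modes: |beta00 + m beta1 + n beta2| < k
propagating = [(m, n)
               for m in range(-3, 4) for n in range(-3, 4)
               if np.linalg.norm(beta00 + m*beta1 + n*beta2) < k]
print("propagating (m, n) pairs:", propagating)
\end{verbatim}

All remaining $(m,n)$ pairs correspond to cut-off (evanescent) modes, for which $\gamma_{mn}$ is pure real.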
{"text": "%\n% CMPT 310: Artificial Intelligence - A Course Overview\n% Section: Flashcard Questions\n%\n% Author: Jeffrey Leung\n%\n\n\\section{Flashcard Questions}\n\t\\label{sec:flashcard-questions}\n\\begin{easylist}\n\n& What characteristics does a task environment consist of?\n\t&& Performance measure, Environment, Actuators, Sensors (PEAS)\n& What does a problem formulation consist of?\n\t&& Initial state, goal test, successor function, and cost function\n\n& How is uniform-cost search different from breadth-first search?\n\t&& Uniform-cost search uses a priority queue to follow the path of least cost\n& Which search algorithm is iterative deepening search derived from?\n\t&& Depth-first search\n& How is iterative deepening search unique?\n\t&& Maximum depth of depth-first search increases until the goal is found\n\n& What is an admissible heuristic?\n\t&& Function which underestimates the true cost to the goal\n& What is a dominant heuristic?\n\t&& Admissible heuristic which is greater than or equal to another admissible heuristic\n& How is greedy/heuristic search different from A* search?\n\t&& Heuristic search does not consider the distance already travelled\n& What is the equation for A* search?\n\t\\end{easylist}\n\t\\begin{align*}\n\t\tf(n) & = g(n) + h(n) \\\\\n\t\t\\textrm{where }\n\t\t& f(n) = \\textrm{ estimated total cost of the path through } n \\textrm{ to the goal} \\\\\n\t\t& g(n) = \\textrm{ cost so far to reach } n \\\\\n\t\t& h(n) = \\textrm{ heuristic-estimated cost from } n \\textrm{ to the goal}\n\t\\end{align*}\n\t\\begin{easylist}\n\n& Which of $\\alpha / \\beta$ is the upper bound on the potential minimum node, and which is the lower bound on the potential maximum node?\n\t&& $\\alpha$: Lower bound on the potential value of a maximum node\n\t&& $\\beta$: Upper bound on the potential value of a minimum node\n\n& When does backtracking search backtrack?\n\t&& When a variable has no possible valid values, given the assignments of the other variables\n& What algorithm is DPLL derived from?\n\t&& Backtracking search\n& How does WalkSAT `walk'?\n\t&& The value of a random variable from a false clause is flipped\n\n& What is a normalization constant?\n\t&& A value which every probability is divided by, to reduce a probability function to a total probability of 1\n& What is the difference between dependence and conditional dependence?\n\t&& Dependence: Whether knowing a variable has an effect on the probability of another variable (share a parent node in Bayesian network)\n\t&& Conditional dependence: Given an evidence variable(s), whether knowing a variable has an effect on the probability of another variable (share a child node in Bayesian network)\n& What is the formula for the Chain Rule?\n\t\\end{easylist}\n\t\t\\begin{align*}\n\t\tP(a, b) &= P(a) P(b|a) = P(b) P(a|b) \\\\\n\t\tP(x_1, \\dotsc, x_n) &= \\prod_{i=1}^n P(x_i | x_1, \\dotsc, x_{i-1})\n\t\t\\end{align*}\n\t\\begin{easylist}\n\n& What is the formula for Bayes' Rule?\n\t\\[\n\t\tP(B | A) = \\frac{P(A | B) \\times P(B)}{P(A)}\n\t\\]\n\t\n& What is rejection sampling?\n\t&& Sampling each variable multiple times to find approximate probabilities of all variables\n& What is Gibbs sampling?\n\t&& Fixing evidence variables, randomly assigning values to non-evidence variables, sampling a non-evidence variable, and iterating through all non-evidence variables\n\n& What is filtering?\n\t&& Finding a hidden probability given evidence of all previous probabilities (i.e. 
finding $x_t$ given $e_{1:t}$)\n& What is smoothing?\n\t&& Finding previous hidden probabilities given evidence of previous and future probabilities (i.e. finding $x_t$ given $e_{1:T}$ where $1 \\leq t < T$)\n& What does the Viterbi algorithm do?\n\t&& Find the most likely sequence of states ending in $x_t$\n\n& How do you avoid overfitting?\n\t&& Use more training data, or reduce model complexity (e.g. prune the decision tree)\n\n& What is the equation used for entropy?\n\t&& $H(p_1, p_2, \\dotsc, p_n) = - \\sum_i p_i \\log_2 p_i$\n& What is the equation used for reduction in uncertainty?\n\t&& $H(root) - \\sum_{i \\in \\textrm{children}} \\frac{\\textrm{samples of } i}{\\textrm{samples of root}} H(i)$\n& What is the entropy of a 50/50 choice?\n\t&& 1\n& What is the entropy of a 0/100 choice?\n\t&& 0\n& Do you make a choice at a decision tree node to increase or decrease entropy?\n\t&& Decrease entropy (i.e. maximize reduction of entropy)\n\n& What is the equation for a threshold function?\n\t\\end{easylist}\n\t\\[\n\t\ta = \\begin{dcases}\n\t\t\t-1 & \\textrm{if } in < 0 \\\\\n\t\t\t1 & \\textrm{otherwise}\n\t\t\\end{dcases}\n\t\\]\n\t\\begin{easylist}\n\n& What is the equation for a ReLU function?\n\t\\end{easylist}\n\t\\[\n\t\ta = \\max(0, in)\n\t\\]\n\t\\begin{easylist}\n\n\\end{easylist}\n\\clearpage\n", "meta": {"hexsha": "c62bf640eae89c2bb822263fb95b066302a5b40c", "size": 4517, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/flashcard-questions.tex", "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_issues_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/flashcard-questions.tex", "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/flashcard-questions.tex", "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "avg_line_length": 39.2782608696, "max_line_length": 179, "alphanum_fraction": 0.7234890414, "num_tokens": 1230, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5878036384291135}}
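The entropy flashcards above can be checked directly; here is a minimal Python sketch (the split counts are hypothetical, chosen only for illustration):

\begin{verbatim}
import math

def entropy(*probs):
    # H(p_1,...,p_n) = -sum_i p_i log2 p_i; terms with p = 0 contribute 0.
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h + 0.0  # normalize -0.0 to 0.0

print(entropy(0.5, 0.5))   # 50/50 choice -> 1.0
print(entropy(1.0, 0.0))   # 0/100 choice -> 0.0

# Reduction in uncertainty for a hypothetical decision-tree split:
# a root with 10 samples (6/4), split into children with 5 samples
# (4/1) and 5 samples (2/3).
h_root = entropy(0.6, 0.4)
gain = h_root - (5/10)*entropy(0.8, 0.2) - (5/10)*entropy(0.4, 0.6)
print(round(gain, 4))      # positive, so this split decreases entropy
\end{verbatim}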
{"text": "\\documentstyle[fullpage]{article}\n\n\\input{tex_stuff.tex}\n\n\n\\begin{document}\n\\bibliographystyle{alpha}\n\\title{{\\bf The Omega Calculator and Library}, {\\em version 1.1.0}}\n\n\n\\author{\nWayne Kelly, Vadim Maslov, William Pugh, \\\\\nEvan Rosser, Tatiana Shpeisman, Dave Wonnacott\n}\n\n\\maketitle\n\n\n\\section{Introduction}\n\nThis document gives an overview of the Omega library and \ndescribes the Omega Calculator, a text-based interface to\nthe Omega library. A separate document describes the C++ interface \nto the Omega library and we are still working on a third document that describes\nsome of the algorithms used in implementing the Omega library.\n\nThe Omega library \nmanipulates integer tuple relations and sets, such as\n$$\\{[i,j] \\rightarrow [j,j'] : 1 \\leq i < j < j' \\leq n\\}\n{\\rm\\ and\\ } \\{ [ i ,j ] : 1 \\leq i < j \\leq n\\}$$\nTuple relations and\nsets are described using Presburger \nformulas\\cite{presburger1,Shostak77,presburger2,presburger2a} a class of logical\nformulas which can be built from affine constraints over integer\nvariables, the logical connectives $\\neg$, $\\land$ and $\\lor$, and the\nquantifiers $\\forall$ and $\\exists$.\nThe best known upper bound on the performance of an algorithm for\nverifying Presburger formulas is $2^{2^{2^{n}}}$ \\cite{presburger3},\nand we have no\nreason to believe that our method provides better worst-case\nperformance. However,  we have found it\nto be reasonably efficient for our applications.\n\nThe following relation, which maps 2-tuples to 1-tuples:\n$\\{[i, j] \\rightarrow [i] : 1 \\le i,j \\le 2\\}$\nrepresents the following set of mappings: $\\{[1,1]\\rightarrow[1],\\\n[1,2]\\rightarrow[1],\\ [2,1]\\rightarrow[2],\\ [2,2]\\rightarrow[2]\\}$.\nIn addition to variables in the input and output tuples, the Presburger \nformulas may also contain free variables. This allows parameterized relations to be described. \nFor example, $n$ and $m$ are free in \n$\\{ [i, j] \\rightarrow [i] : 1 \\le i \\le n \\land 1 \\le j \\le m \\}$. \nThe language allows new relations to be defined in terms of existing relations \nby providing a number of relational operators.  The relational operations \nprovided include intersection, union, composition, inverse, domain, range and complement.  \nFor example, $\\{[p] \\rightarrow [2p, q]\\}$ {\\tt compose}\n$\\{[i,j]\\rightarrow[i]\\}$ evaluates to\n$\\{[i,j]\\rightarrow[2i,q]\\}$.\n\nRelations are simplified before being displayed to the user.  This\ninvolves transforming them into disjunctive normal form and removing\nredundant constraints and conjuncts.  During simplification, it may be determined\nthat the relation contains no points or contains all points, in which\ncase the simplified constraints will be False or True respectively.\nFor example, $\\{[i]\\rightarrow[i]\\}$ {\\tt intersect}\n$\\{[i]\\rightarrow[i+1]\\}$ evaluates to $\\{[i]\\rightarrow[i'] : {\\tt\nFalse}\\}$.\n\nThe copyright notice and legal fine print for the Omega calculator and library are contained in\nthe README and omega.h files. Basically, you can do anything you want with\nthem (other than sue us); if you redistribute it\nyou must include a copy of our copyright notice and legal fine print.\n\n\\section{Omega Calculator invocation, syntax and semantics}\n\nThe calculator reads from the file given as the first argument, or\nstandard input, and prints results on standard output.  \nYou can specify several command line flags. 
Using {\\tt -D}{\\em mk},\nwhere $m$ is a character and $k$ is a digit, sets the debugging level\nto $k$ for module $m$. Using {\\tt -G}$g$ sets the maximum number\nof inequalities in a conjunct to $g$. Using {\\tt -E}$e$ sets the maximum number\nof equalities in a conjunct to $e$. In version 1.1.0, we intend to \nchange our data structures so that these will not need to be specified.\nThere is also a limit on the maximum number of variables in a conjunct,\nbut this cannot be changed at run-time. It is given by\n{\\tt maxVars} in {\\tt oc.h}. We also intend to make this go away.\n\n\nThe input is a\nseries of statements terminated by semicolons. A \\# indicates that\nthe rest of the line is a comment.  \nThe syntax {\\tt <<}{\\em filename}{\\tt >>} can occur anywhere,\nand indicates that the text of the file is to be included at that point.\nThe statements can have one of the\nforms listed in Figure~\\ref{IPIstatements}.  {\\em RelExpr} is a short\nform of {\\em Relational Expression}.\n\n\\begin{figure}[tbh]\n\\begin{tabular}{l|p{4.5in}}\nSyntax & Semantics \\\\\\hline\n\n{\\tt symbolic} {\\em varList} & \n  Defines the variable names as symbolic\n  constants that can be used in all later\n  expressions. \\\\\n\n{\\em var} := {\\em RelExpr} & \n  Computes the relational expression and binds the\n  variable name to the result. \\\\\n\n{\\em RelExpr} & \n  Computes and prints the relational expression. \t\\\\\n\n{\\tt codegen} $\\ldots$ &\n\tThis is described in Section \\ref{section:codegen}.\n\t\\\\\n\n{\\em RelExpr} {\\tt subset} {\\em RelExpr} &\n  Prints True if the first relation is a subset of the second,\n  otherwise prints False. \\\\\n\\end{tabular}\n\\caption{Omega Calculator statements}\n\\label{IPIstatements}\n\\end{figure}\n\n\nFigures \\ref{figure:example1} and \\ref{figure:example2} show a sample\nsession with the Omega\nCalculator. Lines starting with \\# are input to the Omega\nCalculator; the other lines are output from the calculator.\n\n\\input{calculator-ex.tex}\n\nSome relational operations may not preserve\nthe names of input and output variables.\nIf this happens, the variables get default names:\n{\\tt In\\_}$n$ for input variables and\n{\\tt Out\\_}$n$ for output variables,\nwhere $n$ is the position of the variable in its tuple.\nWhen printing, primes are added to variables to distinguish\nbetween multiple variables with the same name. 
\nIn input, primes may also be used to distinguish between multiple variables \nwith the same name: the primes are stripped before being passed\nto the Omega library.\n\n\\section{Relations}\n\n\\subsection{Building relations}\n\nA relation is an operand of a relational expression.\nIts syntax is:\n\\begin{quote}\n{\\tt \\{ [} {\\em InputList} {\\tt ] -> [} {\\em OutputList} {\\tt ] :} \n{\\em formula} {\\tt \\}}\n\\end{quote}\n\n{\\em InputList} and {\\em OutputList} are lists of tuple elements.\nA tuple element can be:\n\n\\begin{tabular}{cp{5in}}\n{\\em var} & The corresponding tuple variable is given this name.\n\tIf a variable with that name is already in scope,\n\tan equality is added\n\tforcing this tuple variable to be equal to the value\n\tof the previous definition.\n\\\\\n{\\em exp} & The tuple variable is unnamed and forced to be equal\n\tto the expression.\n\\\\\n{\\em exp}:{\\em exp} & The tuple variable is unnamed and forced to be\n\tgreater than or equal to the first expression and less than\n\tor equal to the second.\n\\\\\n{\\em exp}:{\\em exp}:{\\em int} & The tuple variable is unnamed and forced to be\n\tgreater than or equal to the first expression and less than\n\tor equal to the second, and the difference between the tuple\nvariable and the first expression must be an integer multiple of the\n\tinteger.\n\\\\\n{\\tt *} &The tuple variable is unnamed.\n\\end{tabular}\n\n\nThe {\\em formula} is optional.  If it is omitted, no constraints other than\nthose introduced by the input and output expressions are imposed upon\nthe relation's variables.  Otherwise, the formula describes additional\nconstraints on variables used in the relation.  \n\n\\subsection{Sets}\nIn addition to relations, the system can represent {\\em sets}.\n\nWhen a relation is declared with only one tuple, as in:\n\\begin{quote}\n{\\tt \\{ [} {\\em SetList} {\\tt ] :} {\\em formula} {\\tt \\}}\n\\end{quote}\nthen the relation becomes a set.\nThe variables that are used to describe a set ({\\em SetList}) \nare called {\\em set variables} rather than input or output variables.\n\n\\subsection{Presburger formula operations}\nAs mentioned above, the tuples belonging to the relation are defined\nby a {\\em Presburger formula}.  This formula is built from constraints\nusing the operations described in Figure~\\ref{FormulaOps}.\n\n\\begin{figure}[tbh]\n\\begin{tabular}{l|lll}\nName & \\multicolumn{3}{l}{Notation} \\\\\\hline\n\nAnd & {\\em formula} {\\tt \\&} {\\em formula} & {\\em formula} {\\tt \\&\\&} {\\em formula} & {\\em formula} {\\tt and} {\\em formula} \\\\\n\nOr  & {\\em formula} {\\tt |} {\\em formula}  & {\\em formula} {\\tt ||} {\\em formula}   & {\\em formula} {\\tt or} {\\em formula} \\\\\n\nNot & {\\tt !} {\\em formula}  &                    & {\\tt not} {\\em formula}\\\\\n\nExists & \\multicolumn{3}{l}{{\\tt exists} ($v_1,...,v_n$: {\\em formula})} \\\\\n\nForall & \\multicolumn{3}{l}{{\\tt forall} ($v_1,...,v_n$: {\\em formula})} \\\\\n\nParentheses & ({\\em formula}) \\\\\nConstraint & {\\em constraint} \\\\\n\\end{tabular}\n\\caption{Presburger formula syntax}\n\\label{FormulaOps}\n\\end{figure}\n\n\n\\subsection{Constraints and arithmetic and comparison operations}\n\nA Presburger formula is built from constraints using the operations\ndescribed in the previous subsection.  
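Before turning to individual constraints, a short input sketch shows the formula operations of Figure~\ref{FormulaOps} in use, written in the calculator's own syntax (the relations are illustrative and not taken from the distribution's example files):

\begin{verbatim}
# Illustrative input only:
symbolic n;
Evens := {[i] : exists (k : i = 2k) && 1 <= i <= n};
Odds  := {[i] : not (exists (k : i = 2k)) && 1 <= i <= n};
Evens union Odds;
Evens subset {[i] : 1 <= i <= n};
\end{verbatim}

One would expect the union to simplify to $\{[i] : 1 \leq i \leq n\}$ and the {\tt subset} query to print True.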
In this subsection we describe\nthe syntax of individual constraints.\n\nA constraint is a series of expression lists, connected with\nthe arithmetic relational operators {\\tt =, !=, >, >=, <, <=}.\nAn example is {\\tt 2i + 3j <= 5k,p <= 3x-z = t}.\n\nWhen a constraint contains a comma-separated list of expressions, it\nindicates that the same constraints should be introduced for each of\nthe expressions.  The constraint {\\tt a,b <= c,d > e,f} translates to\nthe constraints\n$$a \\leq c \\land a \\leq d \\land b \\leq c \\land b \\leq d \\land \nc > e \\land c > f \\land d > e \\land d > f$$\n\nExpressions can be of the forms (where {\\em var} is a variable, \n{\\em integer} is an integer constant, and $e$, $e_1$, and $e_2$ are \nexpressions):\n{\\em var}, {\\em integer}, $e$, {\\em integer} $e$,\n$e_1 + e_2$, $e_1 - e_2$, \n$e_1 \\ast e_2$, $- e$, $(e)$.\n\nAn important restriction is that all expressions in\nthe constraints must be affine functions of the variables.\nFor example, $2\\ast x$ is legal, $x \\ast 2$ is legal, but $x \\ast x$ is\nillegal.\n\n\\section{Relational and set operations}\n\nA relational expression is an expression over individual relations.\nThe relational operations defined in the system are listed\nin Figures \\ref{RelOps1} and \\ref{RelOps2}. \nHere $r, r_1, r_2$ are relations and $s, s_1,\ns_2$ are sets.\n\n\n\\begin{figure}[tbp]\n\\begin{tabular}{l|l|p{3.5in}}\nName & Syntax & Explanation \\\\\\hline\n\nUnion & \n$r = r_1$ {\\tt union} $r_2$ \n& $x \\rightarrow y \\in  r$ iff $x \\rightarrow y \\in  r_1$\nor $x \\rightarrow y \\in  r_2$\n\\\\\n\n& \n$s = s_1$ {\\tt union} $s_2$ \n& $x \\in  s$ iff $x \\in  s_1$\nor $x \\in  s_2$\n\\\\\n\nIntersection & \n$r = r_1$ {\\tt intersection} $r_2$\n& $x \\rightarrow y \\in  r$ iff $x \\rightarrow y \\in  r_1$\nand $x \\rightarrow y \\in  r_2$\n\\\\\n\n& \n$s = s_1$ {\\tt intersection} $s_2$\n& $x \\in  s$ iff $x \\in  s_1$\nand $x \\in  s_2$\n\\\\\n\nDifference &\n$r = r_1 - r_2$ \n& $x \\rightarrow y \\in  r$ iff $x \\rightarrow y \\in  r_1$\nand $x \\rightarrow y \\not\\in  r_2$\n\\\\\n\n&\n$s = s_1 - s_2$ \n& $x \\in  s$ iff $x \\in  s_1$\nand $x \\not\\in  s_2$\n\\\\\n\nComplement &\n$r =$ {\\tt complement} $r_1$ \n& $x \\rightarrow y \\in  r$ iff $x \\rightarrow y \\not\\in  r_1$\n\\\\\n\n&\n$s =$ {\\tt complement} $s_1$ \n& $x \\in  s$ iff $x \\not\\in  s_1$\n\\\\\n\nComposition &\n$r = r_1$ {\\tt compose} $r_2$ &\n$x \\rightarrow z \\in r$ iff $\\exists y \\st x \\rightarrow y \\in r_2$\nand $ y \\rightarrow z \\in r_1$\n\\\\\n\nApplication &\n$s = r_1 (s_2)$ &\n$y \\in s$ iff $\\exists x \\st x \\in s_2$\nand $ x \\rightarrow y \\in r_1$\n\\\\\n\n\n&\n$r = r_1 (r_2)$ &\nEquivalent to $r_1$ {\\tt compose} $r_2$ \\\\\n\nJoin &\n$ r = r_1 . 
r_2 $\n& Equivalent to $r_2$ {\\tt compose} $r_1$\\\\\n& & $x \\rightarrow z \\in r$ iff $\\exists y \\st x \\rightarrow y \\in r_1$\nand $ y \\rightarrow z \\in r_2$\n\\\\\n\n\n\nInverse &\n$r =$ {\\tt inverse} $r_1$ &\n$x \\rightarrow y \\in r $ iff $y \\rightarrow x \\in r_1$\\\\\n\nDomain &\n$s =$ {\\tt domain} $r_1$ \n& $x \\in s$ iff $\\exists y \\st x \\rightarrow y \\in r_1$\\\\\n\nRange &\n$s =$ {\\tt range} $r_1$ \n& $y \\in s$ iff $\\exists x \\st x \\rightarrow y \\in r_1$\\\\\n\nRestrict Domain &\n$r = r_1 ~\\verb+\\+~ s_2$ \n& $x \\rightarrow y \\in r$ iff $x \\rightarrow y \\in r_1$ and $x \\in s_2$\n\\\\\n\nRestrict Range &\n$r = r_1 ~/~ s_2$ \n& $x \\rightarrow y \\in r$ iff $x \\rightarrow y \\in r_1$ and $y \\in s_2$\n\\\\\n\nGist &\n$r=$ {\\tt gist} $r_1$ {\\tt given} $r_2$ &\nComputes the gist of relation $r_1$ given relation $r_2$. \\\\\n\n\n\\end{tabular}\n\\caption{Relational operations, part 1}\n\\label{RelOps1}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\\begin{tabular}{l|l|p{3.5in}}\nName & Syntax & Explanation \\\\\\hline\n\nHull & $r=$ {\\tt hull} $r_1$ & Computes a single convex region that\ncontains all of $r_1$.  Not as tight as the convex hull, but intended\nto be fairly fast.  \\\\\n\nConvex Hull &\n$r=$ {\\tt ConvexHull} $r_1$ &\nComputes the convex hull \\cite{Schrijver86,Wilde93} of $r_1$.\nMay be prohibitively expensive to compute\nand/or induce numeric overflow of coefficients (leading\nto a core dump)\n\\\\\n\nAffine Hull &\n$r=$ {\\tt AffineHull} $r_1$ &\nComputes the affine hull \\cite{Schrijver86,Wilde93} of $r_1$.\nMay be prohibitively expensive to compute\nand/or induce numeric overflow of coefficients (leading\nto a core dump)\n\\\\\n\nLinear Hull &\n$r=$ {\\tt LinearHull} $r_1$ &\nComputes the linear hull \\cite{Schrijver86,Wilde93} of $r_1$.\nMay be prohibitively expensive to compute\nand/or induce numeric overflow of coefficients (leading\nto a core dump)\n\\\\\n\nConic Hull &\n$r=$ {\\tt ConicHull} $r_1$ &\nComputes the conic or cone hull \\cite{Schrijver86,Wilde93} of $r_1$.\nMay be prohibitively expensive to compute\nand/or induce numeric overflow of coefficients (leading\nto a core dump)\n\\\\\n\nFarkas Lemma &\n$r=$ {\\tt farkas} $r_1$ &\nApplies Farkas's lemma \\cite{Schrijver86,Wilde93} to $r_1$.\nIn the result, the values of the variables\ncorrespond to legal values for coefficients\nof equations implied by the convex hull of $r_1$.\nAlso happens to represent the convex hull of $r_1$\nusing a points and rays representation. 
\t \nSee \\cite{Wilde93} for a more detailed explanation.\n\\\\\n\nConvex Check &\n$r=$ {\\tt ConvexCheck} $r_1$ &\nReturns the convex hull of $r_1$ if we can easily determine\nthat it is equal to $r_1$, otherwise returns $r_1$.\n\\\\\n\nPairwise Convex Check &\n$r=$ {\\tt PairwiseCheck} $r_1$ &\nChecks to see if any two conjuncts in $r_1$ can be replaced\nexactly by their convex hull (doing so if possible).\n\\\\\n\nTransitive Closure &\n$r = r_1 {\\tt +}$ &\nLeast fixed point of $r^+ \\equiv r \\cup (r \\circ r^+)$\n\\\\\n\n&\n$r = r_1 {\\tt +}$ {\\tt within} $b$ &\nLeast fixed point of $r^+$ for dependence relation $r$ within iteration space $b$\n\\\\\n\nConic Closure &\n$r = r_1 {\\tt @}$ &\nGives a simple dependence relation $r$ such that\nany linear schedule for all of the dependences in \n$r$ is non-negative if and only\nif it is legal for all of the dependences in $r_1$.\nNote that $r_1 {\\tt +} \\subseteq r_1 {\\tt @}$.\nWe previously referred to this as affine closure \\cite{uniform2.4}.\n\\\\\n\n\nCross-product &\n$r = r_1$ {\\tt *} $r_2$ &\n$x \\rightarrow y \\in r $ iff $x \\in r_1$ and $y \\in r_2$\n\\\\\n\nCreate superset &\n$r= $ {\\tt supersetof} $r_1$ &\n$r$ is inexact with lower bound $r_1$ \n\\\\\n\nCreate subset &\n$r= $ {\\tt subsetof} $r_1$ &\n$r$ is inexact with upper bound $r_1$ \n\\\\\n\nCreate upper bound &\n$r= $ {\\tt upper\\_bound} $r_1$ &\n$r$ is an exact relation and is an upper bound on $r_1$ (all UNKNOWN\nconstraints in $r_1$ are interpreted as TRUE)\n\\\\\n\nCreate lower bound &\n$r= $ {\\tt lower\\_bound} $r_1$ &\n$r$ is an exact relation and a lower bound on $r_1$ (all UNKNOWN\nconstraints in $r_1$ are interpreted as FALSE)\n\\\\\n\n\nGet an example &\n$r= $ {\\tt example} $r_1$ &\n$r \\subseteq r_1$ and all variables in $r$ are single-valued\n\\\\\n\nSymbolic example &\n$r= $ {\\tt sym\\_example} $r_1$ &\n$r \\subseteq r_1$ and all non-symbolic variables in $r$ are single-valued\n\n\\end{tabular}\n\\caption{Relational operations, part 2}\n\\label{RelOps2}\n\\end{figure}\n\n\\section{Code Generation}\n\\label{section:codegen}\nThe Omega Calculator incorporates an algorithm for generating code for\nmultiple, overlapping iteration spaces \\cite{Uniform4}.  Each\niteration space has an associated statement or block of statements.\nThe syntax is \n\\centerline{{\\tt codegen} [{\\em effort}] $IS_1, IS_2, \\ldots ,IS_n$ [ {\\tt given} {\\em known}] }\nEach\niteration space $IS_i$ can be specified either as a set representing\nthe iteration space, or as an original iteration space and a\ntransformation, {\\em T}:{\\em IS}, where {\\em IS} is the original\niteration space and {\\em T} is a relation defining an affine, 1-1 mapping\nto a new iteration space. That is, given a point in the original\niteration space, the mapping specifies the point in the new iteration\nspace at which to execute that iteration.\n\nThe effort value specifies the amount of effort to be used to\neliminate sources of overhead in the generated code. Sources of\noverhead include {\\tt if} statements and {\\tt min} and {\\tt max} functions\nin loop bounds. If not specified, the effort level is 0. The different effort\nlevels are:\n\\begin{description}\n\\item[-2]\tMinimal possible effort. 
Loop bounds may not be finite.\n\\item[-1]\tForces finite bounds.\n\\item[0]\tForces finite bounds and tries to remove {\\tt if}'s \n\t\twithin most deeply nested loops (at a possible cost\n\t\tof code duplication).\n\\item[1]\tRemoves {\\tt if}'s within most deeply nested loops\n\t\tand loops one short of being most deeply nested.\n\\item[2]\t$\\ldots$ and loops two short of being most deeply nested.\n\\item[$x$]\t$\\ldots$ and loops $x$ short of being most deeply nested.\n\\end{description}\n\nThe known information, if specified, is used to simplify the generated\ncode. The generated code will not be correct if {\\em known} is not true.\nCurrently, the known relation needs to be a set with the same arity as the transformed\niteration space.\n\nA discussion of program transformations using this framework is\ngiven in \\cite{Uniform2.1,Uniform2.3,Uniform3}.\n\nThe following is an example of code generation, given three original\niteration spaces and mappings.  The transformed code requires the\ntraditional transformations loop distribution and imperfectly nested\ntriangular loop interchange.   Below, the program information has been\nextracted and presented to the Omega Calculator in relation form.\n\n\\vspace{0.2in}\n\n{\\bf Original code}\n\\begin{verbatim}\n      do 30 i=2,n\n10      sum(i) = 0.\n        do 20 j=1,i-1\n20        sum(i) = sum(i) + a(j,i)*b(j)\n30      b(i) = b(i) - sum(i)\n\\end{verbatim}\n\n{\\bf Schedule} (for parallelism)\n$$T10:\\{\\ [\\ i\\ ] \\rightarrow\\   [\\ 0,i,0,0\\ ]\\ \\}$$\n$$T20:\\{\\ [\\ i,j\\ ] \\rightarrow\\ [\\ 1,j,0,i\\ ]\\ \\}$$\n$$T30:\\{\\ [\\ i\\ ] \\rightarrow\\   [\\ 1,i-1,1,0\\ ]\\ \\}$$\n\n{\\bf Omega Calculator output:}\n\\input{calculator-ex2.tex}\n\n\\section{Inexact Relations}\n\nThe special constraint UNKNOWN represents one or more additional\nconstraints that are not known to the Omega Library.  Such constraints\ncan arise from uses of uninterpreted function symbols or transitive\nclosure (as described below), or when the user explicitly requests it.\nSuch relations require conservative treatment from the library (e.g.,\nsubtracting a conjunct containing UNKNOWN from a relation must\nreturn that relation, since the unknown constraints might make the\nconjunct unsatisfiable).\n\nThe {\\tt upper\\_bound} and {\\tt lower\\_bound} operations can be used to\nproduce exact relations from inexact relations.  They are produced by\ntreating UNKNOWN constraints as TRUE or FALSE, respectively.\n\n\n\n\\section{Presburger Arithmetic with Uninterpreted Function Symbols}\nThe Omega Calculator allows certain restricted uses of \n{\\em uninterpreted function symbols} in a Presburger formula.\nFunctions may be declared in the {\\tt symbolic} statement as\n\\begin{quote}\n{\\tt symbolic} {\\em Function} ({\\em Arity})\n\\end{quote}\nwhere {\\em Function} is the function name and \n{\\em Arity} is its number of arguments.\nFunctions of arity 0 are symbolic constants.\n\nFunctions may only be applied to a prefix of the input or output tuple\nof a relation, or a prefix of the tuple of a set.  The function\napplication may list the names of the argument variables explicitly\n({\\em not yet supported}), or use the abbreviations {\\em Function({\\tt\nIn})}, {\\em Function({\\tt Out})}, and {\\em Function({\\tt Set})}, to\ndescribe the application of a function to the appropriate length\nprefix of the desired tuple.\n\nOur system relies on the following observation: Consider a formula $F$\nthat contains references $f(i)$ and $f(j)$, where $i$ and $j$ are free in\n$F$.  
Let $F'$ be $F$ with $f_{i}$ and $f_{j}$ substituted for $f(i)$\nand $f(j)$.  $F$ is satisfiable iff $((i = j) \\Rightarrow (f_{i} =\nf_{j})) \\land F'$ is satisfiable.  For more details, see \\cite{Shostak79}.\n% Robert\n% Shostak's paper ``A Practical Decision Procedure for Arithmetic with\n% Function Symbols'' \\cit(JACM, April 1979).\n\nPresburger Arithmetic with uninterpreted function symbols is in\ngeneral undecidable, so in some circumstances we will have to produce\napproximate results (as we do with the transitive closure operation)\n\\cite{transitiveClosure}. \n\nThe following examples show some legal uses of uninterpreted function\nsymbols in the Omega Calculator:\n\n\\input{calculator-ex3.tex}\n\n\n\\section{Reachability}\n\nConsider a graph where each directed edge is specified as a tuple\nrelation.  Given a tuple set for each node representing starting\nstates at each node, the library can compute which nodes of the graph are\nreachable from those start states, and the values the tuples can take\non.  \n\nThe syntax is:\n\\begin{verbatim}\nreachable ( [list of nodes] ) { [node:startstates] | [node_i->node_j:transition] }\n\\end{verbatim}\n\nFor example,\n\n\\begin{verbatim}\nreachable (a,b,c)\n\t  { a->b:{[1]->[2]},\n\t     b->c:{[2]->[3]},\n\t     a:{[1]}};\n\\end{verbatim}\n\nThe transitions and start states may be any expression that evaluates\nto a relation.\n\nYou can also compute a tuple set containing the reachable values at a\ngiven node; for example:\n\n\\begin{verbatim}\nR := reachable of c in (a,b,c)\n\t                { a->b:{[1]->[2]},\n\t                  b->c:{[2]->[3]},\n\t                  a:{[1]}};\n\\end{verbatim}\n\nThe current implementation is very straightforward and can be very slow.\n\n\\section{Current limitations}\n\nThe transitive closure operation will not work on a relation with\nuninterpreted function symbols of arity $> 0$.  
Any operation that\nrequires the projection of input or output variables (such as\ncomposition) may return inexact results if variables in the argument\nlist of a function symbol are projected.\n\n\n\n\n\n\\bibliography{para}\n\n\\end{document}\n \n\n", "meta": {"hexsha": "6d3eeb8f563ed09b921d5d84eb8363bd70a3ce4f", "size": 22151, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "omega_calc/doc/calculator.tex", "max_stars_repo_name": "Alessandro-Barbieri/the-omega-project", "max_stars_repo_head_hexsha": "e43941452fbb612b60f31d3741286334b8fa27e3", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 43, "max_stars_repo_stars_event_min_datetime": "2015-06-02T21:09:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T12:16:18.000Z", "max_issues_repo_path": "omega_calc/doc/calculator.tex", "max_issues_repo_name": "holycrap872/the-omega-project", "max_issues_repo_head_hexsha": "2e2890cacc1fe6e25f11255aecda063717f71f5b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-08-07T23:32:34.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-06T02:06:58.000Z", "max_forks_repo_path": "omega_calc/doc/calculator.tex", "max_forks_repo_name": "holycrap872/the-omega-project", "max_forks_repo_head_hexsha": "2e2890cacc1fe6e25f11255aecda063717f71f5b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2015-07-21T08:11:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-07T23:00:27.000Z", "avg_line_length": 32.8162962963, "max_line_length": 126, "alphanum_fraction": 0.7105322559, "num_tokens": 6355, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5878036384291135}}
{"text": "% introduction-to-linear-algebra.tex\n% xelatex ./introduction-to-linear-algebra.tex \n\\documentclass[UTF8]{ctexart}\n\n\\title{Introduction to Linear Algebra\u5b66\u4e60\u7b14\u8bb0}\n\\author{Andy Yao}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction to Vectors}\n  \\subsection{Vectors and Linear Combinations}\n  \\subsection{Lengths and Dot Porducts}\n  \\subsection{Matrices}\n\n\\section{Solving Linear Equations}\n  \\subsection{Vectors and Linear Equations}\n  \\subsection{The Idea of Elimination}\n  \\subsection{Elimination Using Matrices}\n1. $Ax = x_{1}$ times column 1 + ... + $x_{n}$ times column n. And $(Ax)_{i} = \\sum_{j=1}^n a_{ij}x_{j}$.\n\n2. Identity matrix $= I$, elimination matrix = $E_{ij}$ using $l_{ij}$, exchange matrix $= P_{ij}$.\n\n3. Multiplying Ax = b by E21 subtracts a multiple l21 of equation 1 from equation 2. The number -l21 is the (2.1) entry\nof the elimination matrix E21.\n\n4. For the augmented matrix [A b], that elimination step gives [E21A E21b].\n\n5. When A multiplies any matrix B, it multiplies each column of B separately.\n\n  \\subsection{Rules for Matrix Operations}\n  \\subsection{Inverse Matrices}\n  \\subsection{Elimination = Factorization: A = LU}\n  \\subsection{Transposes and Permutations}\n\n\\end{document}\n\n\n", "meta": {"hexsha": "fac17c93f1be1665154202380beea6e9463e0d0b", "size": 1231, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "introduction-to-linear-algebra.tex", "max_stars_repo_name": "andyzyao/linear-algebra", "max_stars_repo_head_hexsha": "db68e915422115f60ebf3e1fde6bfc238c3f4fc6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "introduction-to-linear-algebra.tex", "max_issues_repo_name": "andyzyao/linear-algebra", "max_issues_repo_head_hexsha": "db68e915422115f60ebf3e1fde6bfc238c3f4fc6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "introduction-to-linear-algebra.tex", "max_forks_repo_name": "andyzyao/linear-algebra", "max_forks_repo_head_hexsha": "db68e915422115f60ebf3e1fde6bfc238c3f4fc6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0243902439, "max_line_length": 119, "alphanum_fraction": 0.7351746548, "num_tokens": 361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660687, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.5878036314390204}}
{"text": "\\lab{Data Visualization}{Data Visualization}\n\\label{lab:DataVis}\n\\objective{Correctly presenting and interpreting data with visualizations is both a science and an art.\n% Indeed, effectively communicating complex relationships through data visualization is crucial to gaining the understanding needed to make important and difficult decisions.\nIn this lab we discuss and demonstrate the principles of good data visualization, including the key characteristics of good graphs, common mistakes, and how to avoid unethical visualizations.\n\\\\ \\indent We strongly recommend completing this lab as a Jupyter Notebook.}\n\n\\section*{The Importance of Visualizations} % =================================\n\nVisualizations of data often reveal insights that may not immediately obvious from simple statistics.\nThe following data set, known as \\emph{Anscombe's quartet}, is a famous example of the importance of graphing data.\n\n\\begin{table}[H]\n\\small{\n\\begin{tabular}{rr|rr|rr|rr}\n    \\multicolumn{2}{c|}{I}    & \\multicolumn{2}{|c|}{II} &\n    \\multicolumn{2}{|c|}{III} & \\multicolumn{2}{|c}{IV} \\\\\n    \\multicolumn{1}{c}{$x$}   & \\multicolumn{1}{c|}{$y$} &\n    \\multicolumn{1}{|c}{$x$}  & \\multicolumn{1}{c|}{$y$} &\n    \\multicolumn{1}{|c}{$x$}  & \\multicolumn{1}{c|}{$y$} &\n    \\multicolumn{1}{|c}{$x$}  & \\multicolumn{1}{c}{$y$} \\\\\n    \\hline\n    10.0 & 8.04  & 10.0 & 9.14 & 10.0 & 7.46  & 8.0  & 6.58  \\\\\n    8.0  & 6.95  & 8.0  & 8.14 & 8.0  & 6.77  & 8.0  & 5.76  \\\\\n    13.0 & 7.58  & 13.0 & 8.74 & 13.0 & 12.74 & 8.0  & 7.71  \\\\\n    9.0  & 8.81  & 9.0  & 8.77 & 9.0  & 7.11  & 8.0  & 8.84  \\\\\n    11.0 & 8.33  & 11.0 & 9.26 & 11.0 & 7.81  & 8.0  & 8.47  \\\\\n    14.0 & 9.96  & 14.0 & 8.10 & 14.0 & 8.84  & 8.0  & 7.04  \\\\\n    6.0  & 7.24  & 6.0  & 6.13 & 6.0  & 6.08  & 8.0  & 5.25  \\\\\n    4.0  & 4.26  & 4.0  & 3.10 & 4.0  & 5.39  & 19.0 & 12.50 \\\\\n    12.0 & 10.84 & 12.0 & 9.13 & 12.0 & 8.15  & 8.0  & 5.56  \\\\\n    7.0  & 4.82  & 7.0  & 7.26 & 7.0  & 6.42  & 8.0  & 7.91  \\\\\n    5.0  & 5.68  & 5.0  & 4.74 & 5.0  & 5.73  & 8.0  & 6.89  \\\\\n\\end{tabular}}\n\\end{table}\n\nEach section of Anscombe's quartet shares the following statistical properties:\n\\begin{itemize}\n\\setlength\\itemsep{0em}\n    \\item The means are $9$ for $x$ and $7.5$ for $y$.\n    \\item The sample variances are $11$ for $x$ and $3.75$ for $y$.\n    \\item The correlation between $x$ and $y$ is $.816$.\n    \\item The linear least squares regression line is $y=\\frac{1}{2}x+3$.\n\\end{itemize}\n\nDespite these similarities, each section is quite different from the others.\n\n\\begin{problem} % Describe Anscombe's quartet.\nThe file \\texttt{anscombe.npy} contains Anscombe's quartet in the same format as displayed above.\nPlot each section of the quartet separately as a scatter plot.\nAlso plot the regression line $y = \\frac{1}{2}x + 3$ on the domain $x\\in[0,20]$ over each scatter plot.\n\nWrite a few sentences describing what makes each section unique.\n\\label{prob:anscombes-quartet}\n\\end{problem}\n\n\\section*{Principles of Good Data Visualization} % ============================\n\nUnderstanding a data set through visualizations is an iterative process.\nExamine the data, start with an initial visualization, then adjust the original visualization or create new ones.\nAsk the following questions while searching for insights:\n%\n\\begin{enumerate}\n\\item Does the visualization represent the data accurately?\n\\item Would a different visualization 
communicate more information?\n\\item Would visualizing a subset of the data provide more information?\n\\item Would transforming the data reveal a hidden pattern?\n\\end{enumerate}\n\nEffectively visualizing data involves technical expertise combined with design knowledge.\nNo visualization is perfect, but every good visualization must contain the following essential elements.\n\n\\begin{itemize}\n    \\item \\textbf{Clarity}.\n    Good visualizations are as self-explanatory as possible.\n    Always use specific titles and axis labels, and include units of measure.\n    Use a legend or other annotations where appropriate.\n\n    \\item \\textbf{Simplicity}.\n    A good visualization communicates everything that it needs to, but nothing more.\n    Anything in a plot that fails to communicate information or that misrepresents the data is called \\emph{chartjunk}, a term coined by Edward Tufte.\n    Make visualizations as simple, clean, and readable as possible.\n\n    \\item \\textbf{Integrity}.\n    Tell the truth, the whole truth, and nothing but the truth.\n    It is alarmingly easy to manipulate a visualization so that it supports a specific agenda.\n    Resist the temptation to morph a visualization into something that misrepresents the true nature of the data, even though that misrepresentation might support your hypotheses about the data.\n\n    Every visualization should be presented together with information on who created it, where the data was obtained, how it was collected, whether it was cleaned or transformed, and whether there are conflicts of interest or possible biases present.\n    Cite your sources.\n\\end{itemize}\n\nThis list could be expanded, but virtually every good data visualization principle fits into these three categories in one way or another.\n\n\\newpage\n\n\\section*{Improving Specific Types of Visualizations} % =======================\n\nData can be visualized in many forms and styles.\nHowever, most data sets are more naturally described with one type of visualization than another.\nIn the following sections, we explore how the plots we commonly use can be improved and refined.\n\n\\subsection*{Line Plots} % ----------------------------------------------------\n\nA line plot connects ordered $(x,y)$ points with straight lines, and is therefore best for visualizing one or two ordered arrays, such as functional outputs over an ordered domain or a sequence of values over time.\n\nWhen creating a line plot, consider the following details.\n%\n\\begin{itemize}\n    \\item What do the axes represent? 
How should they be labeled?\n    \\item Would a linear scale or a logarithmic scale most clearly reveal patterns?\n    \\item What should the window limits be?\n    \\item Is the line an appropriate thickness and color?\n    \\item Should each point be distinctly marked, or is a smooth line preferable?\n    \\item Should multiple lines be in the same plot, or in separate subplots?\n\\end{itemize}\n\nThe \\emph{Chebyshev polynomials} are a family of orthogonal polynomials that are recursively defined as follows.\n\\[T_0(x) = 1 \\qquad T_1(x) = x \\qquad\\qquad T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)\\]\nNumPy's \\li{polynomial} module has a convenient tool for constructing these and other important polynomials.%\n\\footnote{\\li{numpy.polynomial} also has tools for computing other important polynomial families, including the Legendre, Hermite, and Laguerre polynomials.}\nHowever, plotting several of the polynomials on top of each other, especially with a legend, results in a very cluttered visual.\n\n\\begin{lstlisting}\n>>> import numpy as np\n>>> from matplotlib import pyplot as plt\n>>> % matplotlib inline                     # Display notebook plots inline.\n\n# Plot the first 9 Chebyshev polynomials in the same plot.\n>>> T = np.polynomial.Chebyshev.basis\n>>> x = np.linspace(-1, 1, 200)\n>>> for n in range(9):\n...     plt.plot(x, T(n)(x), label=\"n = \"+str(n))\n...\n>>> plt.axis([-1.1, 1.1, -1.1, 1.1])        # Set the window limits.\n>>> plt.legend()\n\\end{lstlisting}\n\n\\begin{figure}[H] % Chebyshev polynomials (bad example).\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{figures/chebyshev_bad.pdf}\n\\end{figure}\n\nThis line plot can be improved in several easy ways.\n%\n\\begin{enumerate}\n    \\item\n    Use subplots to split the visualization into smaller, comparable pieces.\n    Instead of using a legend, give each subplot a title.\n    This method, called \\emph{small multiples}, was made famous by Edward Tufte.\n    \\item Increase the line thicknesses to 2 or 3 (the default is 1).\n    \\item Remove extra tick marks and axis labels.\n\\end{enumerate}\n\nMatplotlib's \\li{plt.tick_params()}, summarized below, controls which tick marks and labels are displayed.\n%\n\\begin{table}[H]\n\\begin{tabular}{r|c|l}\n    Argument & Options & Description\n    \\\\ \\hline\n    \\li{axis} & \\li{'x'}, \\li{'y'}, \\li{\"both\"} & Axis on which to operate. \\\\\n    \\li{which} & \\li{\"major\"}, \\li{\"minor\"}, \\li{\"both\"} & Operate on major or minor ticks.\\\\\n    \\li{color} & Any Matplotlib color & Tick color. \\\\\n    \\li{labelcolor} & Any Matplotlib color & Tick label color. \\\\\n    \\li{bottom}, \\li{top}, \\li{left}, \\li{right} & \\li{\"on\"}, \\li{\"off\"} & Turn ticks on or off. \\\\\n    \\li{labelbottom}, \\li{labeltop}, & \\li{\"on\"}, \\li{\"off\"} & Turn tick labels on or off. \\\\\n    \\li{labelleft}, \\li{labelright} & &\n\\end{tabular}\n\\end{table}\n\n\\begin{lstlisting}\n>>> for n in range(9):\n...     plt.subplot(3, 3, n+1)\n...     plt.plot(x, T(n)(x), lw=2)\n...     plt.axis([-1.1, 1.1, -1.1, 1.1])\n...\n...     # Turn off extra tick marks and axis labels.\n...     plt.tick_params(which=\"both\", top=\"off\", right=\"off\")\n...     if n < 6:                   # Remove x-axis label on upper plots.\n...         plt.tick_params(labelbottom=\"off\")\n...     if n % 3:                   # Remove y-axis label on right plots.\n...         plt.tick_params(labelleft=\"off\")\n...     
plt.title(\"n = \"+str(n))\n\\end{lstlisting}\n\n\\begin{figure}[H] % Chebyshev polynomials (bad example).\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{figures/chebyshev_good.pdf}\n\\end{figure}\n\n\\begin{info} % LaTex with Matplotlib text.\nMatplotlib titles and annotations can be formatted with \\LaTeX, a system for creating technical documents.%\n\\footnote{See \\url{http://www.latex-project.org/} for more information.}\n% (this lab manual, for example, is written in \\LaTeX).\nTo do so, use an `r' before the string quotation mark and surround the text with dollar signs.\nFor example, try replacing the final line of code in the previous example with the following line.\n\n\\begin{lstlisting}\n...     plt.title(r\"$T_{}(x)$\".<<format>>(n))\n\\end{lstlisting}\n\nThe string's \\li{<<format>>()} method inserts the input $n$ at the curly braces.\nThe title of the sixth subplot, instead of being ``n = 5,'' will then be ``$T_5(x)$.''\n\\end{info}\n\n% TODO: Make sure the Bernstein polynomial notation is consistent with the book (it is consistent with Wikipedia currently)\n\n\\begin{problem} % Plot the Bernstein polynomials.\nThe $n+1$ Bernstein basis polynomials of degree $n$ are defined as follows.\n\\[b_{v,n} = {{n} \\choose {v}} x^v (1-x)^{n-v},\\qquad v = 0,\\ 1,\\ \\ldots,\\ n\\]\n\nPlot at least the first $10$ Bernstein basis polynomials ($n = 0,\\ 1,\\ 2,\\ 3$) as small multiples on the domain $[0,1] \\times [0,1]$.\nLabel the subplots for clarity, adjust tick marks and labels for simplicity, and set the window limits of each plot to be the same.\nConsider arranging the subplots so that the rows correspond with $n$ and the columns with $v$.\n\\\\(Hint: The constant ${{n} \\choose {v}} = \\frac{n!}{v!(n-v)!}$ is called the \\emph{binomial coefficient} and can be efficiently computed with \\li{scipy.special.binom()} or \\li{scipy.misc.comb()}.)\n\\end{problem}\n\n\\subsection*{Scatter Plots} % -------------------------------------------------\n\nA scatter plot draws $(x,y)$ points without connecting them.\nConnecting the points would imply an order or relation between the points, so scatter plots are best for displaying data sets without a natural order, or where each point is a distinct, individual instance.\n\nConsider the following questions when making a scatter plot.\nNote that some of them are the same questions that should be asked when creating a line plot.\n%\n\\begin{itemize}\n    \\item What do the axes represent? How should they be labeled?\n    \\item Would a linear scale or a logarithmic scale most clearly reveal patterns?\n    \\item What should the window limits be?\n    \\item Which marker is best? 
Are the markers an appropriate size and color?\n\\end{itemize}\n\nA scatter plot can be drawn with either \\li{plt.plot()} (specify a point marker such as \\li{'.'}, \\li{','}, \\li{'o'}, or \\li{'+'}) or \\li{plt.scatter()}.\nWhile \\li{plt.plot()} is the more flexible function in general, \\li{plt.scatter()} provides a few extra tools.\nMost useful are the keywords \\li{s} and \\li{c}, which correspond to marker size and marker color, respectively.\nEach keyword can either be a single entry or an array.\nUsing an array specifies the sizes or colors of each individual marker, allowing a scatter plot to have up to four dimensions of information.\n\n\\begin{comment}\n% Window size, also should go later.\nFigure \\ref{fig:scatter_correlation} displays two scatter plots.\nThe first appears to have a weak correlation and the second appears to have a strong correlation.\nHowever, the same data is being plotted and the only difference is the scale and window size.\nManipulating these can change your interpretation and should be done with careful consideration.\n\\end{comment}\n\nConsider a collection of rectangular boxes where the lengths, widths, and heights are given.\nA scatter plot of length against width mostly describes the sizes of the boxes; tying the third dimension (height) to the color of the points can provide the additional information.\nSetting the marker size as the volume of the boxes also adds some depth to the visualization, though modifying both the color and the size might be considered overkill.\n\nSince adjusting the marker size may lead to overlapping points, we specify the \\emph{alpha value} of the color to make the markers slightly transparent.\nThe keyword \\li{alpha} accepts a value in the interval $[0,1]$; 0 makes the markers completely transparent, while a $1$ makes the markers completely opaque.\n\nFinally, as with heat maps and contour plots, a color bar can be added with \\li{plt.colorbar()} to indicate the values that the colors represent.\nThis color bar can be given a label as well.\n\n\\begin{lstlisting}\n>>> length, width, height = np.random.randint(1, 20, (3,50))\n\n>>> plt.subplot(221)                # Plot length against width.\n>>> plt.scatter(length, width, s=100)\n>>> plt.grid()\n>>> plt.ylabel(\"Width (inches)\")\n>>> plt.tick_params(labelbottom=\"off\")\n>>> plt.axis([0, 20, 0, 20])\n\n# Continued on the next page.\n\\end{lstlisting}\n\n\\newpage\n\n\\begin{lstlisting}\n>>> plt.subplot(222)                # Set the marker color to the height.\n>>> plt.scatter(length, width, c=height, s=100)\n>>> cbar = plt.colorbar()\n>>> cbar.set_label(\"Height (inches)\")\n>>> plt.grid()\n>>> plt.tick_params(labelbottom=\"off\", labelleft=\"off\")\n>>> plt.axis([0, 20, 0, 20])\n\n>>> plt.subplot(223)                # Set the marker size to half the volume.\n>>> plt.scatter(length, width, s=length*width*height/2., alpha=.7)\n>>> plt.grid()\n>>> plt.xlabel(\"Length (inches)\")\n>>> plt.ylabel(\"Width (inches)\")\n>>> plt.axis([0, 20, 0, 20])\n\n>>> plt.subplot(224)                # Use color and marker size together.\n>>> plt.scatter(length, width, c=height, s=length*width*height/2., alpha=.7)\n>>> cbar = plt.colorbar()\n>>> cbar.set_label(\"Height (inches)\")\n>>> plt.grid()\n>>> plt.tick_params(labelleft=\"off\")\n>>> plt.xlabel(\"Length (inches)\")\n>>> plt.axis([0, 20, 0, 20])\n\\end{lstlisting}\n\n\\begin{figure}[H] % Scatter plots.\n\\centering\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    
\\includegraphics[width=\\linewidth]{figures/scatter_1.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/scatter_2.pdf}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/scatter_3.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/scatter_4.pdf}\n\\end{subfigure}\n\\end{figure}\n\nIn scatter plots, connecting the points into a line plot usually results in extreme clutter.\nA regression line, however, highlights a pattern in the data without overshadowing the actual data points.%\n\\footnote{See the Least Squares lab (QR 2), especially Problems 2--4, for a refresher on regression lines.}\n\n\\begin{problem} % Scatter plots with visualizations.\nThe file \\texttt{MLB.npy} contains measurements from over 1,000 recent Major League Baseball players, compiled by UCLA.\\footnote{See \\url{http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_MLB_HeightsWeights}.}\nEach row in the array represents a different player; the columns are the player's height (in inches), weight (in pounds), and age (in years), in that order.\n\nDescribe the data with at least one scatter plot.\nYour graph(s) should demonstrate whether height, weight, or age are correlated with each other in the MLB.\nConsider plotting linear regression lines to indicate trends.\n\\end{problem}\n\n\\subsection*{Histograms} % ----------------------------------------------------\n\nA histogram partitions an interval into a number of bins and counts the number of values that fall into each bin.\nHistograms are ideal for visualizing how unordered data in a single array is distributed over an interval.\nIf the data are draws from a probability distribution, a histogram approximates the distribution's probability density function (PDF).\n\nThe following are important factors to consider when constructing a histogram.\n%\n\\begin{itemize}\n    \\item What does the $x$-axis represent? How should it be labeled?\n    \\item Is a linear or a logarithmic scale more appropriate for the frequency axis?\n    \\item How many bins should be used? 
Over what range should the bins be?\n    This is perhaps the most important question, as a histogram with too few or too many bins usually fails to give a clear view of the distribution.\n\\end{itemize}\n\n% Use \\li{plt.hist()} to create a histogram.\n% The arguments \\li{bins} and \\li{<<range>>} specify the number of bins to draw and over what domain.\n\nFor most histograms, the most desirable insight is the general shape of the distribution.\nThe lines separating the bins, axis tick marks, and even the labels on the $y$ axis are all unnecessary (and potentially distracting) details.\nRemoving the lines between bins is easy: set the line width to $0$.\nTo ensure the gaps where the lines used to be are filled, specify the \\li{histtype} as \\li{\"stepfilled\"} (using \\li{histtype=\"step\"} will draw the outline without filling it in).\nFinally, to get rid of extra markings on the axes, use \\li{plt.tick_params()}.\n\nA histogram can be converted into a line plot by using \\li{np.histogram()}.\nThis function returns the number of values in each bin and the locations of the edges of the bins (in fact, \\li{plt.hist()} employs this function).\nUse the edges of the bins to calculate the center of the bins, then plot the bin centers against the frequency.\n\n\\begin{lstlisting}\n# Get 10,000 random samples from a Beta distribution.\n>>> data = np.random.beta(a=5, b=2, size=10000)\n\n>>> plt.subplot(131)                # Draw a regular histogram.\n>>> plt.hist(data, bins=30)\n\n>>> plt.subplot(132)                # Draw a clean histogram.\n>>> plt.hist(data, bins=30, lw=0, histtype=\"stepfilled\")\n>>> plt.tick_params(left=\"off\", top=\"off\", right=\"off\", labelleft=\"off\")\n# Continued on the next page.\n\\end{lstlisting}\n\n\\newpage\n\n\\begin{lstlisting}\n>>> plt.subplot(133)                # Convert the histogram to a line plot.\n>>> freq, bin_edges = np.histogram(data, bins=30)\n>>> bin_centers = (bin_edges[:-1] + bin_edges[1:])/2.\n>>> plt.plot(bin_centers, freq, 'k-', lw=4)\n>>> plt.tick_params(left=\"off\", top=\"off\", right=\"off\", labelleft=\"off\")\n\\end{lstlisting}\n\n\\begin{figure}[H] % Cleaning up a histogram.\n\\centering\n\\begin{subfigure}{.325\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/histogram_2.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.325\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/histogram_3.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.325\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/histogram_4.pdf}\n\\end{subfigure}\n\\end{figure}\n\nFinally, if the frequency domain is better visualized on a logarithmic scale, use \\li{log=True} as an argument to \\li{plt.hist()}.\nThis is the histogram equivalent of using \\li{plt.semilogy()} for line or scatter plots.\n\n\\begin{warn} % Line plots versus histograms.\nLine plots should \\textbf{not} be used for data that has no natural progression.\nFor example, consider a collection of random draws from a statistical distribution.\nIn this case, a plain line plot is completely useless, because consecutive random draws are completely unrelated.\nA histogram, on the other hand, provides an approximation of the distribution's probability density function.\n\n\\begin{lstlisting}\n# Get 1,000 random samples from the standard normal distribution.\n>>> data = np.random.normal(size=1000)\n\n>>> plt.subplot(121)                # Regular line plot of the data.\n>>> plt.plot(data)\n\n>>> plt.subplot(122)                # 
Histogram of the data.\n>>> plt.hist(data, bins=30)\n\\end{lstlisting}\n\n\\begin{figure}[H] % line plot versus a histogram.\n\\centering\n\\begin{subfigure}{.496\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/histogram_1_bad.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/histogram_1_good.pdf}\n\\end{subfigure}\n\\end{figure}\n\\end{warn}\n\n\\begin{problem} % Earthquake data\nThe file \\texttt{earthquakes.npy} contains data from over 17,000 earthquakes between 2000 and 2010 that were at least a 5 on the Richter scale.\\footnote{See \\url{http://earthquake.usgs.gov/earthquakes/search/}.}\nEach row in the array represents a different earthquake;\nthe columns are the earthquake's date (as a fraction of the year), magnitude (on the Richter scale), longitude, and latitude, in that order.\n\nBecause each earthquake is a distinct event, a good way to start visualizing this data might be a scatter plot of the years versus the magnitudes of each earthquake.\n\n\\begin{lstlisting}\n>>> year, magnitude, longitude, latitude = np.load(\"earthquakes.npy\").T\n>>> plt.plot(year, magnitude, '.')\n>>> plt.xlabel(\"Year\")\n>>> plt.ylabel(\"Magnitude\")\n\\end{lstlisting}\n\n\\begin{figure}[H] % Bad visualization: earthquake data, year vs magnitude.\n    \\centering\n    \\includegraphics[width=.7\\textwidth]{figures/earthquake.pdf}\n\\end{figure}\n\nUnfortunately, this plot communicates very little information because the data is so cluttered.\nDescribe the data with two or three better visualizations, including line plots, scatter plots, and histograms as appropriate.\nYour plots should clearly answer the following questions:\n\\begin{enumerate}\n    \\item How many earthquakes happened every year?\n    \\item How often do stronger earthquakes happen compared to weaker ones?\n    \\item Where do earthquakes happen? 
Where do the strongest earthquakes happen?\n    \\\\(Hint: Use \\li{plt.axis(\"equal\")} to fix the aspect ratio, which may improve comparisons between longitude and latitude.)\n\\end{enumerate}\n\\end{problem}\n\n\\subsection*{Heat Maps and Contour Plots} % -----------------------------------\n\nLet $f:\\mathbb{R}^2\\rightarrow\\mathbb{R}$ be a scalar-valued function on a 2-dimensional domain.\nA heat map of $f$ assigns a color to each $(x,y)$ point in the domain based on the value of $f(x,y)$, while a contour plot is a drawing of the \\emph{level curves} of $f$.\nThe level curve corresponding to the constant $c$ is the set $\\left\\{(x,y)\\mid c = f(x,y)\\right\\}$.\nA filled contour plot, which colors in the sections between the level curves, is a discretized version of a heat map.\n\nConsider the following questions when plotting a heat map or contour plot:\n%\n\\begin{itemize}\n    \\item Is the domain sufficiently refined?\n    \\item Which color scheme is most clear and effective?\n    \\item How many / which contour lines should be drawn, if any?\n    \\item Is a linear or a logarithmic scale more appropriate for the color?\n\\end{itemize}\n\nIt is often sufficient to choose a fixed number of level curves.\nIn this case, the values of $c$ corresponding to the level curves are automatically chosen to be evenly spaced over the range of values of $f$ on the domain.\nHowever, it is sometimes better to strategically specify the curves by providing a list of $c$ constants.\n\nConsider the function $f(x,y) = y^2 - x^3 + x^2$ on the domain $[-\\frac{3}{2}, \\frac{3}{2}] \\times [-\\frac{3}{2}, \\frac{3}{2}]$.\nA heat map of $f$ reveals that it has a large basin around the origin.\nSince $f(0,0) = 0$, choosing several level curves close to $0$ describes the topography of the basin more closely.\nThe fourth subplot in the following example uses the curves with $c = -1,\\ -\\frac{1}{4},\\ 0,\\ \\frac{1}{4},\\ 1,$ and $4$.\n\n\\begin{lstlisting}\n# Construct a 2-D domain with np.meshgrid() and calculate f on the domain.\n>>> x = np.linspace(-1.5, 1.5, 200)\n>>> X, Y = np.meshgrid(x, x.copy())\n>>> Z = Y**2 - X**3 + X**2\n\n>>> plt.subplot(221)                # Plot a heat map of f.\n>>> plt.pcolormesh(X, Y, Z, cmap=\"viridis\")\n>>> plt.colorbar()\n\n>>> plt.subplot(222)                # Plot a contour map with 6 level curves.\n>>> plt.contour(X, Y, Z, 6, cmap=\"viridis\")\n>>> plt.colorbar()\n\n>>> plt.subplot(223)                # Plot a filled contour map with 12 levels.\n>>> plt.contourf(X, Y, Z, 12, cmap=\"viridis\")\n>>> plt.colorbar()\n\n>>> plt.subplot(224)                # Plot specific level curves and a heat map.\n>>> plt.contour(X, Y, Z, [-1, -.25, 0, .25, 1, 4], colors=\"white\")\n>>> plt.pcolormesh(X, Y, Z, cmap=\"viridis\")\n>>> plt.colorbar()\n\\end{lstlisting}\n\n\\begin{figure}[H] % Heat maps and contour plots.\n\\centering\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/heatmap_1.png}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/contour_1.pdf}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/contour_2.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/heatmap_2.png}\n\\end{subfigure}\n\\end{figure}\n\nThere are two main kinds of color maps: sequential and diverging.\nSequential 
color maps, like \\li{\"hot\"} and \\li{\"cool\"}, transition very gradually between two colors; diverging color maps, like \\li{\"seismic\"} and \\li{\"coolwarm\"}, transition very rapidly from one color to another at the mean value.\nWhen in doubt, use \\li{\"viridis\"} or \\li{\"plasma\"}, two specialized sequential color schemes.\nFor the complete list of Matplotlib color maps, see \\url{http://matplotlib.org/examples/color/colormaps_reference.html}.\n\nThe color map can be changed to a logarithmic scale by using the keyword argument \\li{norm=matplotlib.colors.LogNorm()} (this works for \\li{plt.scatter()} as well).\nAs an example, consider the same $f$ defined above on the larger domain $[-6,6]\\times [-6,6]$.\nLog scaling can only be done on arrays of all positive values, so we visualize $|f|$.\n\n\\begin{lstlisting}\n>>> from matplotlib.colors import LogNorm\n\n>>> x = np.linspace(-6, 6, 200)\n>>> X, Y = np.meshgrid(x, x.copy())\n>>> Z = np.<<abs>>(Y**2 - X**3 + X**2)\n\n>>> plt.subplot(121)              # Plot a regular heat map of |f|.\n>>> plt.pcolormesh(X, Y, Z, cmap=\"plasma\")\n>>> plt.colorbar()\n\n>>> plt.subplot(122)              # Plot a filled contour plot with log scaling.\n>>> plt.contourf(X, Y, Z, 6, cmap=\"plasma\", norm=LogNorm())\n>>> plt.colorbar()\n\\end{lstlisting}\n\n\\begin{figure}[H] % Heat maps and contour plots.\n\\centering\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/heatmap_3.png}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/contour_3.pdf}\n\\end{subfigure}\n\\end{figure}\n\n\\begin{problem} % Contour map problem. TODO: ROSENBROCK!!!\nThe \\emph{Rosenbrock function} is defined as follows.\n\\[f(x,y) = (1-x)^2 + 100(y-x^2)^2\\]\nThe minimum value of $f$ is $0$, which occurs at the point $(1,1)$ at the bottom of a steep, banana-shaped valley of the function.\n\nUse heat maps and contour plots to visualize the Rosenbrock function in such a way that the minimum value, and the valley that it lies in, is apparent.\nConsider plotting the minimizer $(1,1)$ on top of the plot as well.\n\\end{problem}\n\n\\subsection*{Bar Charts} % ----------------------------------------------------\n\nA bar chart plots categorical data in a sequence of bars.\nThey are best for small, discrete, one-dimensional data sets.\nUse \\li{plt.bar()} (vertical) or \\li{plt.barh()} (horizontal) to create a bar chart in Matplotlib.\nThese functions receive the locations of each bar, then the height of each bar (as lists or arrays).\n\nConsider the following questions when constructing a bar chart.\n\n\\begin{itemize}\n    \\item What are the different categories in the data?\n    \\item How should the data be sorted?\n    \\item Should the value axis have a linear or a logarithmic scale?\n\\end{itemize}\n\nHorizontal bar charts are usually preferable to vertical bar charts because horizontal labels are easier to read than vertical labels.\nLabels can be rotated, but it is better when the reader doesn't have to turn his or her head.\n\nData in a bar chart should also be sorted in a logical way.\nSort the categories by bar size, alphabetize the labels, or use some other intuitive ordering.\n\n\\newpage\n\n\\begin{lstlisting}\n>>> labels = [\"Lobster Thermador\", \"Baked Beans\", \"Crispy Bacon\",\n...             
\"Smoked Sausage\", \"Hannibal Ham\", \"Eggs\", \"Spam\"]\n>>> values = [10, 11, 18, 19, 20, 21, 22]\n>>> positions = np.arange(len(labels))\n\n>>> plt.subplot(121)\n>>> plt.bar(positions, values, align=\"center\")\n>>> plt.xticks(positions, labels)\n\n>>> plt.subplot(122)\n>>> plt.barh(positions, values, align=\"center\")\n>>> plt.yticks(positions, labels)\n\\end{lstlisting}\n\n\\begin{figure}[H] % Horizontal vs. vertical bar chart.\n\\centering\n\\begin{subfigure}{.47\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/bar_1.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.52\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/bar_2.pdf}\n\\end{subfigure}\n\\end{figure}\n\n\\section*{Practices to Avoid} % ===============================================\n\n\\subsection*{Bad Types of Visualization} % ------------------------------------\n\nSome kinds of visualizations, especially for categorical data, are popular even though they interfere with the reader's ability to interpret the displayed information.\nFor example, it is more difficult to see the differences in a pie chart than on a bar chart since differences in area are more difficult to detect than differences in length.\nBy the same logic, bar chart with several bars per category are almost always better with the bars side by side rather than stacked.\n\n\\begin{figure}[H] % Pie chart --> Bar chart\n\\centering\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/bad_pie_chart.png}\n\\end{subfigure}\n%\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/bar_2.pdf}\n\\end{subfigure}\n\\caption{Transforming the horrendous 3-D pie chart on the left into the horizontal bar chart on the right both simplifies and clarifies the information.}\n\\end{figure}\n\n\\subsection*{Misrepresenting Data} % --------------------------------------\n\nTo conclude, we return to the issue of integrity.\nIt is very easy to misrepresent the true nature of a set of data with a visualization.\nPerhaps the easiest way to do so is with poorly chosen window limits or axis scales.\nConsider the following example of a scatter plot, presented four different ways.\nThe fourth subplot is the only one that correctly shows the position of the points in relation to the origin and that has a reasonable scales on both axes.\n\n\\begin{lstlisting}\n>>> x = np.linspace(5, 10, N) + np.random.normal(size=N)/3.\n>>> y = .5*x + 4 + np.random.normal(size=N)/2.\n\n>>> plt.subplot(221)                # Plot a default scatter plot of the data.\n>>> plt.plot(x, y, 'o', ms=10)\n\n>>> plt.subplot(222)                # Change the window limits so that the\n>>> plt.plot(x, y, 'o', ms=10)          # data appears steep in the y direction.\n>>> plt.xlim(-5,20)\n\n>>> plt.subplot(223)                # Change the y-axis scale so that the data\n>>> plt.semilogy(x, y, 'o', ms=10)      # appears flat in the y direction.\n\n>>> plt.subplot(224)                # Plot the data with equal axis scales and\n>>> plt.plot(x, y, 'o', ms=10)          # presenting the relation to the origin.\n>>> plt.xlim(xmin=0)\n>>> plt.ylim(ymin=0)\n\\end{lstlisting}\n\n\\begin{figure}[H] % Heat maps and contour plots.\n\\centering\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/dishonest_1.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    
\\includegraphics[width=\\linewidth]{figures/dishonest_2.pdf}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/dishonest_3.pdf}\n\\end{subfigure}\n%\n\\begin{subfigure}{.495\\textwidth}\n    \\centering\n    \\includegraphics[width=\\linewidth]{figures/honest.pdf}\n\\end{subfigure}\n\\end{figure}\n\nDo everything in your power to avoid visualizing data unethically.\n\n\\begin{problem}\nThe file \\texttt{countries.npy} contains information from 20 different countries.\nEach row in the array represents a different country; the columns are the 2015 population (in millions of people), the 2015 GDP (in billions of US dollars), the average male height (in centimeters), and the average female height (in centimeters), in that order.%\n\\footnote{\nSee \\url{https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)}, \\url{http://www.averageheight.co/}, and \\url{https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population}.}\n\nThe corresponding countries are listed below, in order.\n\n\\begin{lstlisting}\ncountries = [\"Austria\", \"Bolivia\", \"Brazil\", \"China\",\n            \"Finland\", \"Germany\", \"Hungary\", \"India\",\n            \"Japan\", \"North Korea\", \"Montenegro\", \"Norway\",\n            \"Peru\", \"South Korea\", \"Sri Lanka\", \"Switzerland\",\n            \"Turkey\", \"United Kingdom\", \"United States\", \"Vietnam\"]\n\\end{lstlisting}\n\nVisualize this data set with at least four plots, using at least one scatter plot, one histogram, and one bar chart.\nWrite a few sentences summarizing the major insights that your visualizations reveal.\n\\\\(Hint: consider using \\li{np.argsort()} and fancy indexing to sort the data for the bar chart.)\n\\end{problem}\n\n\n\\begin{comment} % TODO: Add sections on box plots and hexbin plots.\n\n\\newpage\n\n\\subsection*{Additional Material} % ===========================================\n\nThere are many software packages that facilitate the visual exploration of data.\nOne Python library is Glue (see \\cite{glue}).\n\nSome packages for making nicer looking plots include \\li{Seaborn} and \\li{prettyplotlib}.\n\nFor more about visualization of data, we highly recommend the following books and websites:\n\n% TODO: get full refs for the following\n\n\\begin{itemize}\n    \\item \\emph{How to Lie with Statistics} by Darrell Huff (1954)\n    \\item \\emph{Envisioning Information} by Edward Tufte\n    \\item \\emph{The Visual Display of Quantitative Information} by Edward Tufte (2nd edition)\n    \\item \\emph{Beautiful Evidence} by Edward Tufte\n    \\item \\emph{Now you see it} by Stephen Few\n    \\item \\url{http://www.informationisbeautiful.net/}\n\\end{itemize}\n\\end{comment}\n", "meta": {"hexsha": "d6ba5192baa338bd8a14d867d9c0c37be7f37688", "size": 35307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Vol1A/DataVisualization/DataVisualization.tex", "max_stars_repo_name": "joshualy/numerical_computing", "max_stars_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Vol1A/DataVisualization/DataVisualization.tex", "max_issues_repo_name": "joshualy/numerical_computing", "max_issues_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Vol1A/DataVisualization/DataVisualization.tex", "max_forks_repo_name": "joshualy/numerical_computing", "max_forks_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.5791556728, "max_line_length": 262, "alphanum_fraction": 0.7060922763, "num_tokens": 9374, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6477982179521103, "lm_q2_score": 0.9073122125886485, "lm_q1q2_score": 0.5877552344411128}}
{"text": "\\section{Macroscopic observables}\n\\label{sec:analysis}%\nSpatial distributions of charge and current densities can provide a better insight in the microscopic mechanisms of charge transport.\n%\nIf $O$ is an observable which has a value $O_\\alpha$ in a state $\\alpha$, its ensemble average at time $t$ is a sum over all states weighted by the probability $P_\\alpha$ to be in a state $\\alpha$ at time $t$\n%\n\\begin{equation}\n\\label{equ:ensemble}\n\\left< O \\right> = \\sum_{\\alpha} O_\\alpha P_\\alpha.\n\\end{equation}\n%\nIf $O$ does not explicitly depend on time, the time evolution of $\\left< O \\right>$ can be calculated as\n\\begin{equation}\n\\begin{split}\n\\frac{d \\left< O \\right>}{dt} = \\sum_{ \\alpha, \\beta} \n      \\left[ P_\\beta \\Omega_{\\beta \\alpha} - \n       P_\\alpha \\Omega_{\\alpha \\beta} \\right] \n      O_\\alpha %\\\\\n %     \n      = \\sum_{ \\alpha, \\beta} \n      P_\\beta \\Omega_{\\beta \\alpha}  \n      \\left[ O_\\alpha - O_\\beta \\right] .\n\\end{split}\n\\end{equation}\n%\nIf averages are obtained from KMC trajectories, $P_\\alpha = s_\\alpha / s$, where $s_\\alpha$ is the number of Markov chains ending in the state $\\alpha$ after time $t$, and $s$ is the total number of chains.\n\nAlternatively, one can calculate time averages by analyzing a single Markov chain. If the total occupation time of the state $\\alpha$ is $\\tau_\\alpha$ then\n\\begin{align}\n\\label{equ:time}\n\\overline{ O } \n= \\frac{1}{\\tau} \\sum_{\\alpha} O_\\alpha \\tau_\\alpha \\,,\n\\end{align}\nwhere $\\tau = \\sum_{\\alpha} \\tau_\\alpha$ is the total time used for time averaging.\n\nFor ergodic systems and sufficient sampling times, ensemble and time averages should give identical results. \nIn many cases, the averaging procedure reflects a specific experimental technique. For example, an ensemble average over several KMC trajectories with different starting conditions corresponds to averaging over injected charge carriers in a time-of-flight experiment.  In what follows, we focus on the single charge carrier (low concentration of charges) case. \n\n\\subsection{Charge density}\n\\label{sec:occupation}\n\nFor a specific type of particles, the microscopic charge density of a site $i$ is proportional to the occupation probability of the site, $p_i$\n\\begin{equation}\n \\rho_i = e p_i / V_i\\, ,\n\\end{equation}\nwhere,  for an irregular lattice, the effective volume $V_i$ can be obtained from a Voronoi tessellation of space. For reasonably uniform lattices (uniform site densities) this volume is almost independent of the site and a constant volume per cite, $V_i = V/N$, can be assumed.  In the macroscopic limit, the charge density can be calculated using a sxtpthing kernel function, i.e. a distance-weighted average over multiple sites. Site occupations $p_i$ can be obtained from \\equ{ensemble} or  \\equ{time} by using the occupation of site $i$ in state $\\alpha$ as an observable.\n\nIf the system is in thermodynamic equilibrium, that is without sources or sinks and without circular currents (and therefore no net flux) a condition, known as detailed balance, holds\n%\n\\begin{equation}\n\\label{equ:detailed_balance}\n  p_j \\omega_{ji} = p_i \\omega_{ij},\n\\end{equation}\n%\nIt can be used to test whether the system is ergodic or not by correlating $\\log p_i$ and the site energy $E_i$. 
\n\n\\subsection{Current}\n\\label{sec:vaverage}\n\\index{current}\nIf the position of the charge, $\\vec{r}$, is an observable, the time evolution of its average $\\left<\\vec{r}\\right>$ is the total current in the system\n\\begin{equation}\n \\vec{J} = e \\left< \\vec{v} \\right> = e \\frac{d \\left< \\vec{r}\n   \\right>} {dt} = e \\sum_{i, j} p_{j} \\omega_{ji} ( \\vec{r}_i -\n \\vec{r}_j ) .\n\\label{equ:current_def}\n\\end{equation}\nSymmetrizing this expression, we obtain\n\\begin{equation}\n  \\vec{J} = \\frac{1}{2} e \\sum_{i, j} \\left( p_{j} \n  \\omega_{ji} - p_{i} \\omega_{ij} \\right) \\vec{r}_{ij} ,\n \\label{equ:current}\n\\end{equation}\nwhere $\\vec{r}_{ij} = \\vec{r}_{i} - \\vec{r}_{j}$. Symmetrization ensures equal flux\nsplitting between neighboring sites and absence of local average fluxes in equilibrium. It allows one to define a local current through site $i$ as\\index{current!local}\n\\begin{equation}\n  \\vec{J_i} = \\frac{1}{2} e \\sum_{ j} \\left( p_{j}  \\omega_{ji} - p_{i} \\omega_{ij} \\right) \\vec{r}_{ij} .\n \\label{equ:site_current}\n\\end{equation}\nA large value of the local current indicates that the site contributes considerably to the total current. A collection of such sites thus represents the most favorable charge pathways~\\cite{van_der_holst_modeling_2009}.\n\n\\subsection{Mobility and diffusion constant}\n\\label{sec:mobility}\n\\index{mobility}\nFor a single particle, e.g. a charge or an exciton, a zero-field mobility can be determined by studying particle diffusion in the absence of external fields. Using the particle displacement squared, $\\Delta {\\bm r}_i^2$, as an observable, we obtain\n \\begin{equation}\n\\begin{split}\n2d D_{\\gamma \\delta} =  \\frac{d \\left<  \\Delta{r}_{i, \\gamma} \\Delta{r}_{i, \\delta} \\right>}{dt} \n= \\sum_{\\substack{i,j \\\\ i\\ne j}} p_j\\omega_{ji} \n \\left( \\Delta r_{i,\\gamma}\\Delta r_{i,\\delta} - \\Delta r_{j,\\gamma}\\Delta r_{j,\\delta} \\right)  \n= \\sum_{\\substack{i,j\\\\ i\\ne j}} p_j \\omega_{ji} \\left( r_{i,\\gamma} r_{i,\\delta} - r_{j,\\gamma} r_{j,\\delta} \\right) \\, .\n\\end{split}\n\\label{equ:diffusion}\n\\end{equation}\nHere $\\vec{r}_i$ is the coordinate of the site $i$, $D_{\\gamma \\delta}$ is the diffusion tensor, $\\gamma, \\delta = x,y,z$, and $d=3$ is the system dimension. Using the Einstein relation, \n\\begin{equation}\n D_{\\gamma \\delta} = k_\\text{B}T \\mu_{\\gamma \\delta} \\, ,\n\\end{equation}\none can, in principle, obtain the zero-field mobility tensor $\\mu_{\\gamma \\delta}$. \\Equ{diffusion}, however, does not take into account the use of periodic boundary conditions when simulating charge dynamics. In this case, the simulated occupation probabilities can be compared to the solution of the Smoluchowski equation with periodic boundary conditions  (see the supporting information for details). \n\nAlternatively, one can directly analyze the time evolution of the KMC trajectory and obtain the diffusion tensor from a linear fit to the mean square displacement, $\\overline{ \\Delta{r}_{i, \\gamma} \\Delta{r}_{i, \\delta}} = 2d D_{\\gamma \\delta} t$. 
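\n\nA minimal sketch of such a fit (the trajectory file names and the restriction to a single, isotropically averaged trajectory are assumptions for illustration):\n\\begin{verbatim}\nimport numpy as np\n\n# Assumed inputs: KMC times t (shape K) and unwrapped charge\n# positions r (shape K x 3); in practice the mean square\n# displacement should be averaged over many trajectories.\nt = np.load("kmc_times.npy")\nr = np.load("kmc_positions.npy")\n\n# Squared displacement relative to the starting point.\nmsd = np.sum((r - r[0])**2, axis=1)\n\n# Fit msd = 2*d*D*t with d = 3 to extract the isotropic D.\nslope, intercept = np.polyfit(t, msd, 1)\nD = slope / (2.0 * 3.0)\n\\end{verbatim}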
\n\nThe charge carrier mobility tensor, $\\hat{\\mu}$, for any value of the external field can be determined either from the average charge velocity defined in\n\\equ{current_def} \n\\begin{equation}\n%\\begin{split}\n \\langle \\vec{v} \\rangle =  \\sum_{i,j}  p_j  \\omega_{ji}  (\\vec{r}_i - \\vec{r}_j) = \\hat{\\mu} \\vec{F} \\, ,\n%\\end{split}\n\\end{equation}\nor directly from the KMC trajectory. In the latter case the velocity is calculated from the unwrapped (if periodic boundary conditions are used) charge displacement vector divided by the total simulation time. Projecting this velocity on the direction of the field $\\vec{F}$ yields the charge carrier mobility in this particular direction. In order to improve statistics, mobilities can be averaged over several KMC trajectories and MD snapshots.\n", "meta": {"hexsha": "2b15490eefabd8da646c61e4a326443022ab8c0c", "size": 7189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/theory/analysis.tex", "max_stars_repo_name": "mbarbry/xtp", "max_stars_repo_head_hexsha": "e79828209d11ec25bf1750ab75499ecf50f584ef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-03-05T17:36:53.000Z", "max_stars_repo_stars_event_max_datetime": "2018-03-05T17:36:53.000Z", "max_issues_repo_path": "manual/theory/analysis.tex", "max_issues_repo_name": "choudarykvsp/xtp", "max_issues_repo_head_hexsha": "9a249fd34615abcf790d5f0ecd3ddf1ed0ac0e7a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/theory/analysis.tex", "max_forks_repo_name": "choudarykvsp/xtp", "max_forks_repo_head_hexsha": "9a249fd34615abcf790d5f0ecd3ddf1ed0ac0e7a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.3545454545, "max_line_length": 577, "alphanum_fraction": 0.7287522604, "num_tokens": 2055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388083214155, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5877334809336666}}
{"text": "% Preamble.\n\\documentclass[12pt]{article}\n\\usepackage[margin=1.25in]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n%% Title macros.\n\\newcommand{\\HOMEWORKNUM}{26}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-07-01}\n\n\\title{\\vspace{-2\\baselineskip}MATH 225 - Homework \\#\\HOMEWORKNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%% Formatting options.\n%\\pagenumbering{gobble}  % Include for single-page document.\n\n\n% Document.\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{Verify that}\n\\begin{equation*}\n\ty = C_1 e^x + C_2 x e^x\n\\end{equation*}\n\\textit{is a solution to the differential equation}\n\\begin{equation*}\n\ty^{\\prime\\prime} - 2y^\\prime + y = 0\n\t.\n\\end{equation*}\nFirst, find $y^\\prime$ and $y^{\\prime\\prime}$.\n\\begin{align*}\n\ty &= C_1 e^x + C_2 e^x x, \\\\\n\ty^\\prime &= C_1 e^x + C_2 e^x x + C_2 e^x, \\\\\n\ty^{\\prime\\prime} &= C_1 e^x + C_2 e^x x + 2C_2 e^x.\n\\end{align*}\nSubstitute the right-hand sides into the equation.\n\\begin{gather*}\n\t0 \\overset{?}{=}\n\ty^{\\prime\\prime} - 2y^\\prime + y\n\t\\\\\n\t0 \\overset{?}{=}\n\t(C_1 e^x + C_2 e^x x + 2C_2 e^x)\n\t- 2(C_1 e^x + C_2 e^x x + C_2 e^x)\n\t+ (C_1 e^x + C_2 e^x x)\n\t\\\\\n\t0 \\overset{?}{=}\n\t(2 - 2)(C_1 e^x)\n\t+\n\t(2 - 2)(C_2 e^x x)\n\t+\n\t(2 - 2)(C_2 e^x)\n\t\\\\\n\t0 =\n\t0\n\t.\n\\end{gather*}\n\n\\section*{2.}\nVerify the solution to the differential equation.\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item\n\t\\textit{Equation: $y^\\prime = 2x$.}\n\t\\textit{Solution: $y = x^2$.}\n\t\\begin{equation*}\n\t\ty^\\prime = \\frac{d}{dx} y = \\frac{d}{dx} x^2 = 2x.\n\t\\end{equation*}\n\t\\newpage\n\t\n\t\\item\n\t\\textit{Equation: $y^\\prime + 3y = 6x + 11$.}\n\t\\textit{Solution: $y = e^{-3x} + 2x + 3$.}\n\t\\begin{equation*}\n\t\ty^\\prime = -3e^{-3x} + 2.\n\t\\end{equation*}\n\tThus,\n\t\\begin{gather*}\n\t\t6x + 11 \\overset{?}{=} y^\\prime + 3y \\\\\n\t\t6x + 11 \\overset{?}{=} (-3e^{-3x} + 2) + 3(e^{-3x} + 2x + 3) \\\\\n\t\t6x + 11 \\overset{?}{=} (3 - 3)(e^{-3x}) + 3(2x) + 3(3) + 2 \\\\\n\t\t6x + 11 = 6x + 11.\n\t\\end{gather*}\n\t\n\t\\item\n\t\\textit{Equation: $y^{\\prime\\prime} - 3y^\\prime + 2y = 24e^{-2x}$.}\n\t\\textit{Solution: $y = 3e^x - 4e^{2x} + 2e^{-2x}$.}\n\t\\begin{align*}\n\t\ty &= 3e^x - 4e^{2x} + 2e^{-2x}, \\\\\n\t\ty^\\prime &= 3e^x - 8e^{2x} - 4e^{-2x}, \\\\\n\t\ty^{\\prime\\prime} &= 3e^x - 16e^{2x} + 8e^{-2x}.\n\t\\end{align*}\n\tThus,\n\t\\begin{align*}\n\t\t24e^{-2x} &\\overset{?}{=} y^{\\prime\\prime} - 3y^\\prime + 2y \\\\\n\t\t24e^{-2x} &\\overset{?}{=}\n\t\t(3e^x - 16e^{2x} + 8e^{-2x}) \\\\\n\t\t&\\quad - 3(3e^x - 8e^{2x} - 4e^{-2x}) \\\\\n\t\t&\\quad + 2(3e^x - 4e^{2x} + 2e^{-2x}) \\\\\n\t\t24e^{-2x} &\\overset{?}{=}\n\t\t(3 - 9 + 6)(e^x) + (-16 + 24 - 8)(e^{2x}) + (8 + 12 + 4)(e^{-2x}) \\\\\n\t\t24e^{-2x} &= 24e^{2x}.\n\t\\end{align*}\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "f87a7787ef48e77a90331985aad3e1959448a072", "size": 2757, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/hw26/main.tex", "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "usc-20202-math-225-39425/hw26/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/hw26/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.3982300885, "max_line_length": 70, "alphanum_fraction": 0.5640188611, "num_tokens": 1343, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8289388062084421, "lm_q1q2_score": 0.5877334743387589}}
{"text": "\n\\section{Additive Schwarz methods}\n\n\\begin{intro}\n  In this section, we study preconditioners, which are related to\n  subspace decompositions of the space $V$ or its finite dimensional\n  subspaces. We will develop the theory in an abstract way, but always\n  keep the model problem~\\eqref{eq:itintro:1} in mind when we do so. In\n  particular, the subspaces chosen will be associated with either\n  coarser mesh levels or with meshes on subdomains of $\\Omega$.\n  \n  This section follows in part~\\cite[Chapter 7]{BrennerScott02}. A\n  more detailed discussion with extension of the methods developed\n  here can be found in~\\cite{ToselliWidlund05}\n\\end{intro}\n\n\\subsection{The abstract framework}\n\n\\begin{intro}\n  Let $V$ be a Hilbert space with inner product $\\scal(.,.)$ and let\n  $a(.,.): V\\times V$ be a symmetric and $V$-elliptic not necessarily\n  bounded bilinear form. Let a set of subspaces\n  $\\{V_j\\}_{j=1,\\dots,J}$ of $V$ be chosen such that\n  \\begin{gather*}\n    V = \\sum_{j=1}^J V_j.\n  \\end{gather*}\n  The sum is not required to be direct, that is, a vector $v\\in V$ may\n  have several decompositions $v = \\sum \\alpha_j v_j$ with\n  $v_j\\in V_j$. Methods below based on the subspaces $V_j$ are called\n  \\define{subspace correction} methods.\n\n  Alternatively, we consider methods based on abstract auxiliary\n  spaces $\\hat V_j$, such that\n  \\begin{gather*}\n    V = \\sum_{j=1}^J R_j^T \\hat V_j,\n  \\end{gather*}\n  where $R_j^T$ is an operator embedding the auxiliary space\n  $\\hat V_j$ into $V$.  Such methods are called \\define{auxiliary\n    space methods}. Subspace correction and auxiliary space methods\n  are equivalent by considering $V_j = R_j^T \\hat V_j$.\n\\end{intro}\n\n\\begin{assumption}\n  \\label{lemma:schwarz:1}\n  There are bilinear forms $a_j(.,.)$ defined on $V_j$ such that the weak formulation\n  \\begin{gather}\n    \\label{eq:schwarz:1}\n    a_j(u_j,v_j) = f(v_j),\n    \\quad\\forall v_j\\in V_j,\n  \\end{gather}\n  has a unique solution $u_j\\in V_j$ for all $f\\in V_j^*$. In the\n  context of auxiliary space methods, we assume that there are\n  bilinear forms $\\hat a_j(.,.)$ defined on $\\hat V_j$ such\n  that the weak formulation\n  \\begin{gather}\n    \\label{eq:schwarz:1a}\n    \\hat a_j(u_j,v_j) = f(v_j),\n    \\quad\\forall v_j\\in \\hat V_j,\n  \\end{gather}\n  has a unique solution $u_j\\in \\hat V_j$ for all $f\\in \\hat V_j^*$.\n\\end{assumption}\n\n\\begin{Definition}{subspace-projection}\n  \\label{definition:schwarz:1}\n  \\index{Pj@$P_j$|see {Ritz projection}} Let the \\define{subspace\n    projection} operator $P_j: V \\to V_j$ be defined such that\n  $P_j u \\in V_j$ is the unique (Assumption~\\ref{lemma:schwarz:1})\n  solution to the problem\n  \\begin{gather}\n    \\label{eq:schwarz:2}\n    a_j(P_j u,v_j) = a(u,v_j),\\quad\\forall v_j\\in V_j.\n  \\end{gather}\n  Note that this is a projection only if $a_j(.,.)$ is the restriction\n  of the bilinear form $a(.,.)$ to the subspace $V_j$. In this\n  case,\\index{A-orthogonal projection@$A$-orthogonal projection|see\n    {Ritz projection}} we call $P_j$ the $A$-orthogonal projection or\n  \\define{Ritz projection} to $V_j$. If $a_j(.,.)$ is not the\n  restriction of $a(.,.)$, the operator $P_j$ still maps into $V_j$,\n  but it is not idempotent.\n\n  Since $V_j$ is a subspace of $V$, we will also understand $P_j$ as\n  an endomorphism of $V$ iteself. 
The left hand side of this equation\n  induces an operator $A_j: V_j \\to V_j^*$ by $A_j u_j = a_j(u_j,.)$.\n\\end{Definition}\n\n\\begin{Definition}{auxiliary-space-projection}\n  \\label{definition:schwarz:1a}\n  Let the operator $\\hat P_j: V \\to \\hat V_j$ be defined such that $\\hat P_j u \\in \\hat V_j$ is\n  the unique (Assumption~\\ref{lemma:schwarz:1}) solution to the problem\n  \\begin{gather}\n    \\label{eq:schwarz:2a}\n    \\hat a_j(\\hat P_j u,v_j) = a(u,R_j^T v_j),\\quad\\forall v_j\\in \\hat V_j.\n  \\end{gather}\n  The left hand side of this equation induces an operator $\\hat A_j:\n  \\hat V_j \\to \\hat V_j^*$ by $\\hat A_j u_j = \\hat a_j(u_j,.)$.\n\\end{Definition}\n\n\\begin{Lemma}{schwarz-ritz}\n  \\label{lemma:schwarz:ritz}\n  If the local bilinear forms $a_j(.,.)$ are the restrictions of\n  $a(.,.)$ to $V_j$, then the projections $P_j$ as mappings from $V$\n  to itself are self-adjoint with respect to the $a(.,.)$-inner\n  product and positive semi-definite. Furthermore, $P_j$ acts as\n  identity on $V_j$ and there holds $P_j^2 = P_j$.\n\\end{Lemma}\n\\begin{proof}\n  This is a well-known fact about orthogonal projections, which we\n  will prove shortly.\n  First, we note that by the uniqueness in\n  Assumption~\\ref{lemma:schwarz:1} $P_j u_j = u_j$\n  for all $u_j\\in V_j$. Thus, for all $u\\in V$: $P_j P_j u = P_j u$.\n  Let now $u,v\\in V$ be arbitrary. Then, there holds\n  \\begin{gather*}\n    a(u, P_j v) = a(P_j u, P_j v) = a(P_j v, P_j u)\n    = a(v, P_j u) = a(P_j u, v).\n  \\end{gather*}\n  Furthermore,\n  \\begin{gather*}\n    a(P_j u,u) = a(u, P_j u) = a(P_j u, P_j u)\\ge 0,\n  \\end{gather*}\n  since $a(.,.)$ is positive definite.\n\\end{proof}\n\n\\begin{Definition}{l2-projection}\n  \\label{definition:schwarz:1b}\n  For subspace and auxiliary space methods, we define the operators\n  \\begin{gather}\n    \\begin{split}\n      \\Pi_j: V &\\to V_j\\\\\n      R_j: V &\\to\\hat V_j\n    \\end{split}\n  \\end{gather}\n  \\index{Pij@$\\Pi_j$}\n  such that $\\Pi_j u\\in V_j$ is the solution to the problem\n  \\begin{gather}\n    \\scal(\\Pi_j u,v_j) = \\scal(u,v_j),\\quad\\forall v_j\\in V_j.\n  \\end{gather}\n\n  For auxiliary spaces $\\hat V_j$, we define the projection-like\n  operators  through\n  \\begin{gather}\n    \\scal(R_j u,v_j)_{\\hat V_j} = \\scal(u,R_j^T v_j),\\quad\\forall v_j\\in \\hat V_j.\n  \\end{gather}\n  \n  % \\index{Pijt@$\\Pi^T_j$}\n  % We define its dual $\\Pi_j^T: V_j^* \\to V^*$ by\n  % \\begin{gather}\n  %   \\label{eq:schwarz:3}\n  %   \\scal(\\Pi_j^T \\phi_j,v)_{V^*\\times V} =  \\scal( \\phi_j,\\Pi_j\n  %   v)_{V_j^*\\times V_j}\n  % \\end{gather}\n\\end{Definition}\n\n% \\begin{todo}\n%   Show that $\\Pi^T$ is an orthogonal projection, and onto which space.\n% \\end{todo}\n\n\\begin{Lemma}{schwarz-ritz-representation}\n  \\label{lemma:schwarz:2}  \n  There holds\n  \\begin{gather}\n    \\label{eq:schwarz:15}\n    A_j P_j = \\Pi^T_j A.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  Let $u,v\\in V$ be arbitrary. Let $v_j=\\Pi_j v$. 
We rewrite\n  equation~\\eqref{eq:schwarz:2} as\n  \\begin{gather*}\n    \\scal(A_j P_j u, v)_{V^*\\times V}\n    = \\scal(A_j P_j u, v_j)_{V^*\\times V}\n    = \\scal(A u, v_j)_{V^*\\times V}\n    = \\scal(\\Pi^T_j A u, v)_{V^*\\times V}.\n  \\end{gather*}\n\\end{proof}\n\n\\begin{Definition}{additive-schwarz}\n  The \\define{additive Schwarz preconditioner} for the operator $A$ associated\n  with the symmetric and $V$-elliptic bilinear form $a(.,.)$ with\n  respect to the subspace decomposition $V_j$ is the mapping $B:\n  V \\to V^*$ such that\n  \\begin{gather}\n    \\label{eq:schwarz:4}\n    B^{-1} = \\sum_{j=1}^J P_j A^{-1}.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{example}\n  \\label{example:schwarz:Jacobi}\n  The \\putindex{Jacobi method} may serve as a guiding example for the\n  definition of these methods. To this end, let $V = \\R^n$ with its\n  Euclidean inner product $\\scal(.,.)$. Let $V_j =\n  \\operatorname{span}\\{e_j\\}$ be the space spanned by the $j$th unit\n  vector. Let $A$ be a symmetric, positive definite matrix and $a(u,v)\n  = v^T A u$. Then, equation~\\eqref{eq:schwarz:2} becomes\n  \\begin{gather}\n    \\label{eq:schwarz:27}\n    e_j^T A u_j = e_j^T A u\n    \\quad \\Leftrightarrow \\quad\n    P_j u = u_j = \\frac1{a_{j j}}(A u)_j e_j.\n  \\end{gather}\n  Since for this decomposition the sum $V=\\bigoplus V_j$ is direct,\n  we obtain with $D=\\operatorname{diag}(a_{11},\\dots,a_{n n})$ the\n  matrix representation\n  \\begin{gather*}\n    (B^{-1} v)_j = \\frac1{a_{j j}}(A A^{-1} v)_j = \\frac1{a_{j j}} v_j\n    \\quad \\Leftrightarrow \\quad\n    B^{-1} = D^{-1}.\n  \\end{gather*}\n  We enter this preconditioner into the Richardson method in operator\n  form~\\eqref{eq:richardson:11} to obtain the iteration\n  \\begin{gather}\n    \\label{eq:schwarz:28}\n    \\begin{split}\n      u^{(k+1)} &= u^{(k)} - \\omega_k \\sum_{j=1}^J P_j \\bigl(u^{(k)} -\n      A^{-1}f\\bigr)\\\\\n      &= u^{(k)} - \\omega_k D^{-1} \\bigl(A u^{(k)} - f\\bigr).\n    \\end{split}\n  \\end{gather}\n\\end{example}
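\n\\begin{remark}\n  For readers who like to experiment, the following small sketch (in\n  Python with NumPy; the matrix, right hand side and relaxation\n  parameter are placeholders chosen only for illustration) carries out\n  iteration~\\eqref{eq:schwarz:28}, that is, Richardson with the\n  additive Schwarz preconditioner $B^{-1}=D^{-1}$ built from the\n  one-dimensional subspaces $V_j$:\n\\begin{verbatim}\nimport numpy as np\n\n# A small s.p.d. test matrix and right hand side (illustrative only).\nA = np.array([[4.0, 1.0, 0.0],\n              [1.0, 4.0, 1.0],\n              [0.0, 1.0, 4.0]])\nf = np.array([1.0, 2.0, 3.0])\n\nd = np.diag(A)         # diagonal of A, hence B^{-1} = D^{-1}\nu = np.zeros_like(f)   # initial guess u^(0)\nomega = 0.8            # fixed relaxation parameter omega_k\n\nfor k in range(50):\n    # One Richardson step; the division by d realizes the sum of\n    # the one-dimensional subspace corrections applied additively.\n    u = u - omega * (A @ u - f) / d\n\nprint(np.linalg.norm(A @ u - f))   # residual norm after 50 steps\n\\end{verbatim}\n\\end{remark}\n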
\n\\begin{Lemma}{additive-positive-definite}\n  \\label{lemma:schwarz:3}\n  If $A$ is symmetric and positive definite, so is $B^{-1}$ as defined\n  in~\\eqref{eq:schwarz:4}.\n\\end{Lemma}\n\n\\begin{proof}\n  By Lemma~\\ref{lemma:schwarz:2} and the fact that $P_j$ maps into $V_j$, we have that\n  \\begin{gather}\n    \\label{eq:schwarz:16}\n    B^{-1} = \\sum_{j=1}^J A_j^{-1} \\Pi^T_j.\n  \\end{gather}\n  Due to equation~\\eqref{eq:schwarz:1}, $A_j$ inherits its symmetry\n  and positive definiteness from $A$, and thus $A_j^{-1}$ is s.p.d.\n  Therefore, for each term in this sum and arbitrary elements\n  $\\phi,\\psi\\in V^*$, we have\n  \\begin{gather*}\n    \\scal(A_j^{-1} \\Pi^T_j \\phi, \\psi)_{V\\times V^*}\n    = \\scal(A_j^{-1} \\Pi^T_j \\phi, \\Pi^T_j \\psi)_{V\\times V^*}\n    = \\scal(\\Pi^T_j \\phi, A_j^{-1}\\Pi^T_j \\psi)\n    = \\scal(\\phi, A_j^{-1}\\Pi^T_j \\psi).\n  \\end{gather*}\n  The result now follows by linearity.\n\\end{proof}\n\n\\begin{Lemma}{additive-minimum}\n  \\label{lemma:schwarz:5}\n  For $v\\in V$ there holds\n  \\begin{gather}\n    \\label{eq:schwarz:5}\n    b(v,v) \\equiv \\scal(B v,v) = \\min_{v=\\sum v_j} \\sum_{j=1}^J a(v_j,v_j),\n  \\end{gather}\n  where the minimum is taken over all possible decompositions of $v$\n  into a sum of elements $v_j\\in V_j$ with $j=1,\\dots,J$.\n\\end{Lemma}\n\n\\begin{proof}\n  Since $B^{-1}$ is s.p.d., so is $B$. Therefore, $\\scal(.,A_j^{-1}.)$\n  is an inner product on $V_j^*$, for which the \\putindex{Bunyakovsky-Cauchy-Schwarz\n  inequality} holds. Thus, for an arbitrary decomposition $v=\\sum v_j$\n  with $v_j\\in V_j$, the computation\n  \\begin{align*}\n    b(v,v)\n    &= \\sum_{j=1}^J b(v, A_j^{-1} A_j v_j)\n    = \\sum_{j=1}^J \\scal(\\Pi^T_j B v, A_j^{-1} A_j v_j) \\\\\n    &\\le \\sum_{j=1}^J \\sqrt{\\scal(\\Pi^T_j B v, A_j^{-1}\\Pi^T_j B v)}\n    \\sqrt{\\scal(A_j v_j, A_j^{-1}A_j v_j)} \\\\\n    & \\le \\sqrt{\\sum_{j=1}^J \\scal(\\Pi^T_j B v, A_j^{-1}\\Pi^T_j\n      B v) }\n    \\sqrt{\\sum_{j=1}^J \\scal(A_j v_j,A_j^{-1}A_j v_j)} \\\\\n    &= \\sqrt{\\scal(B v, {\\sum A_j^{-1}\\Pi^T_j} B v)}\n    \\sqrt{\\sum_{j=1}^J \\scal(A_j v_j, v_j)} \\\\\n    &= \\sqrt{b(v,v)} \\sqrt{\\sum_{j=1}^J a(v_j, v_j)},\n  \\end{align*}\n  yields for arbitrary decompositions\n  \\begin{gather}\n    \\label{eq:schwarz:17}\n    b(v,v) \\le \\sum_{j=1}^J a(v_j, v_j),\n  \\end{gather}\n  and thus in particular, that the left hand side is bounded by the\n  minimum of the right. Now we choose a special decomposition, showing\n  that it cannot be less than the minimum. To this end, let\n  \\begin{gather}\n    \\label{eq:schwarz:18}\n    v_j = A_j^{-1} \\Pi^T_j B v.\n  \\end{gather}\n  By Lemma~\\ref{lemma:schwarz:2}, we have\n  \\begin{gather*}\n    \\sum v_j = \\sum A_j^{-1} \\Pi^T_j B v = B^{-1} B v = v.\n  \\end{gather*}\n  Furthermore,\n  \\begin{align*}\n    \\sum_{j=1}^J \\scal(A_j v_j, v_j)\n    &= \\sum_{j=1}^J \\scal(A_j A_j^{-1} \\Pi^T_j B v, A_j^{-1} \\Pi^T_j\n    B v) \\\\\n    &= \\sum_{j=1}^J \\scal(\\Pi^T_j B v, A_j^{-1} \\Pi^T_j B v) \\\\\n    &= \\scal(B v, \\sum A_j^{-1} \\Pi^T_j B v) = b(v, v).\n  \\end{align*}\n\\end{proof}\n\n\\begin{Theorem}{schwarz-equivalence}\n  \\label{theorem:schwarz:1}\n  Let $A$ be s.p.d. and $B$ defined by\n  equation~\\eqref{eq:schwarz:4}. Then, the \\putindex{spectral\n    equivalence}~\\eqref{eq:richardson:12} holds with\n  positive constants\n  \\begin{gather}\n    \\label{eq:schwarz:19}\n%    \\begin{split}\n    \\Lambda(B,A) = \\max_{v\\in V} \\frac{a(v,v)}{\\min\\limits_{v=\\sum v_j}\n      \\sum\\limits_{j=1}^J a(v_j, v_j)}\n    ,\\qquad\n    \\lambda(B,A) = \\min\\limits_{v\\in V} \\frac{a(v,v)}{\\min\\limits_{v=\\sum v_j}\n      \\sum\\limits_{j=1}^J a(v_j, v_j)}.\n%    \\end{split}\n  \\end{gather}\n\\end{Theorem}\n\n\\begin{proof}\n  Here we use the fact that $b(.,.)$ is an inner product on\n  $V$ and that by\n  \\begin{gather*}\n    b(B^{-1}A v,v) = a(v,v) = b(v, B^{-1}A v),\n  \\end{gather*}\n  the operator $B^{-1}A$ is symmetric with respect to this inner\n  product. Thus, the \\putindex{Rayleigh quotient} qualifies to\n  estimate the extremal eigenvalues, for instance,\n  \\begin{gather*}\n    \\Lambda(B^{-1}A)\n    = \\max_{v\\in V} \\frac{b(B^{-1}A v,v)}{b(v,v)}\n    = \\max_{v\\in V} \\frac{a(v,v)}{\\min\\limits_{v=\\sum v_j}\n      \\sum\\limits_{j=1}^J a(v_j, v_j)},\n  \\end{gather*}\n  and the same for the minimum.\n\\end{proof}\n\n\\begin{remark}\n  \\label{note:schwarz:1}\n  In order to estimate the condition number $\\Lambda(B,A)/\\lambda(B,A)$ of a Schwarz\n  preconditioner, it is now sufficient to bound the two quotients\n  in~\\eqref{eq:schwarz:19} from above and below. 
In particular, in\n  order to find a bound for $\\Lambda(B,A)$, we have to find an estimate of\n  the form\n  \\begin{gather}\n    \\label{eq:schwarz:23}\n    a(v,v) \\lesssim \\min_{v=\\sum v_j}\\sum_{j=1}^J a(v_j, v_j),\n  \\end{gather}\n  or in other words, $a(v,v)$ has to be bounded by the sum on the\n  right for any decomposition $v=\\sum v_j$. On the other hand, in\n  order to bound $1/\\lambda(B,A)$, we need an estimate in the opposite direction,\n  where it is sufficient to find one decomposition $v=\\sum v_j$ such\n  that it holds. We reduce these\n  conditions to the following two abstract assumptions, which guarantee that\n  Theorem~\\ref{theorem:schwarz:1} holds true.\n\\end{remark}\n\n\\begin{assumption}[Stable decomposition]\n  \\label{assumption:schwarz:stable-decomposition}\n  For each $v\\in V$ there is a decomposition\n  \\begin{gather*}\n    v = \\sum_{j=1}^J v_j, \\qquad v_j\\in V_j,\n  \\end{gather*}\n  such that there holds\n  \\begin{gather}\n    \\label{eq:schwarz:24}\n     \\min_{v=\\sum v_j}\\sum_{j=1}^J a(v_j, v_j) \\lesssim a(v,v).\n  \\end{gather}\n\\end{assumption}\n\n\\begin{assumption}[Strengthened Cauchy-Schwarz inequalities]\n  \\label{assumption:schwarz:1}\n  \\defindex{strengthened Cauchy-Schwarz inequalities}\n  \\index{E@$\\mathcal E$} There is a symmetric $J\\times J$-matrix\n  $\\mathcal E$ with entries $\\epsilon_{ij} \\in [0,1]$ and a constant\n  $C$ independent of $J$, such that for the spectral radius\n  $\\rho(\\mathcal E)$ there holds\n  \\begin{gather*}\n    \\rho(\\mathcal E) \\le C\n  \\end{gather*}\n  and for all $1 \\le i,j \\le J$, $v_i \\in V_i$ and $v_j \\in V_j$ there\n  holds\n  \\begin{gather}\n    \\label{eq:schwarz:25}\n    \\left| a(v_i, v_j)\\right|\n    \\le \\epsilon_{ij} \\sqrt{a(v_i,v_i)} \\sqrt{a(v_j,v_j)}.\n  \\end{gather}\n\\end{assumption}\n\n\\begin{remark}\n  Inequality~\\eqref{eq:schwarz:25} with $\\epsilon_{ij} \\equiv 1$ holds\n  trivially by the regular \\putindex{Bunyakovsky-Cauchy-Schwarz\n    inequality}.  But for such a matrix, the spectral radius is\n  $J$. As the following lemma will reveal, it is necessary to have\n  $\\rho(\\mathcal E)$ independent of $J$ in order to obtain\n  estimate~\\eqref{eq:schwarz:23}.\n\\end{remark}\n\n\\begin{Lemma}{schwarz-maximum}\n  \\label{lemma:schwarz:7}\n  Let the estimate~\\eqref{eq:schwarz:25} hold. Then,\n  estimate~\\eqref{eq:schwarz:23} holds with the constant\n  $\\rho(\\mathcal E)$.\n\\end{Lemma}\n\n\\begin{proof}\n  Let $v\\in V$ and its decomposition $v=\\sum v_j$ with $v_j\\in V_j$ be\n  chosen arbitrarily. Then,\n  \\begin{multline*}\n    a(v,v)\n    = a\\left(\\sum_i v_i, \\sum_j v_j\\right)\n    = \\sum_{i,j=1}^J a(v_i, v_j)\n    \\le \\sum_{i,j=1}^J \\epsilon_{ij} \\sqrt{a(v_i,v_i)} \\sqrt{a(v_j,v_j)}.\n  \\end{multline*}\n  The latter sum corresponds to a matrix-vector product of the form\n  $\\vec x^T \\mathcal E \\vec x$, where the entries of $\\vec x$ are of the\n  form $\\sqrt{a(v_i,v_i)}$. 
Since $\\mathcal E$ is symmetric, this\n  product can be estimated by $\\rho(\\mathcal E) |\\vec x|^2$, and thus\n  \\begin{gather}\n    \\label{eq:schwarz:38}\n    a(v,v) \\le \\rho(\\mathcal E) \\sum_{j=1}^J a(v_j,v_j).\n  \\end{gather}\n\\end{proof}
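\n\\begin{remark}\n  The effect of Assumption~\\ref{assumption:schwarz:1} can be observed\n  numerically. The following sketch (in Python with NumPy; the\n  tridiagonal 0/1 pattern is a placeholder for subdomains interacting\n  only with a bounded number of neighbors) computes $\\rho(\\mathcal E)$\n  for $\\epsilon_{ij}=1$ whenever $|i-j|\\le 1$ and zero otherwise, for\n  growing $J$:\n\\begin{verbatim}\nimport numpy as np\n\n# Every subdomain interacts only with its two neighbors, so\n# Gershgorin bounds rho(E) by the row sum 3, independently of J.\nfor J in (10, 100, 1000):\n    i = np.arange(J)\n    E = (np.abs(np.subtract.outer(i, i)) <= 1).astype(float)\n    rho = max(abs(np.linalg.eigvalsh(E)))\n    print(J, rho)   # rho stays below 3 for every J\n\\end{verbatim}\n  In contrast, $\\epsilon_{ij}\\equiv 1$ gives $\\rho(\\mathcal E)=J$.\n\\end{remark}\n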
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Two-level additive Schwarz preconditioner}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  This preconditioner is in the class of \\putindex{domain\n    decomposition} methods. The attribute \\putindex{two-level} refers\n  to the fact that we are considering finite element discretizations\n  of~\\eqref{eq:itintro:1} on two finite element meshes, the\n  \\putindex{fine mesh} $\\T_h$ on which we desire to compute the\n  solution, and the auxiliary \\putindex{coarse mesh} $\\T_H$. Both\n  meshes cover the whole domain $\\Omega$ (see\n  Figure~\\ref{fig:schwarz:ddmeshes}), and each cell of the coarse mesh\n  is the union of cells of the fine mesh ($4\\times 4$ fine cells in\n  the figure).\n\n  In addition to these two meshes, we introduce subdomains\n  $\\Omega_1,\\Omega_2,\\dots,\\Omega_J$ of $\\Omega$ such that each\n  $\\Omega_j$ is the union of cells in $\\T_h$. We require that those\n  subdomains overlap each other like the three examples in\n  Figure~\\ref{fig:schwarz:ddmeshes} on the right. A more precise\n  definition of the required overlap follows.\n\\end{intro}\n\n\\begin{figure}[tp]\n  \\centering\n  \\begin{tikzpicture}\n    \\draw[step=.25cm] (-2,-2) grid (2,2);\n    \\draw[step=1cm,thick] (-2,-2) grid (2,2);\n    \\draw[step=1cm,thick] (3,-2) grid (7,2);\n    \\draw[step=.25cm] (3.74,-1.25) grid (5.25,0.25);\n    \\draw[step=.25cm,red] (3,-2) grid (4.25,-0.75);\n    \\draw[step=.25cm,green] (4.74,-0.25) grid (7,1.25);\n  \\end{tikzpicture}\n  \\caption{Fine mesh and coarse mesh (left) for overlapping domain\n    decomposition. Examples for a subdomain decomposition on the\n    right.}\n  \\label{fig:schwarz:ddmeshes}\n\\end{figure}\n\n\\begin{definition}{dd-overlap}\n  \\label{definition:schwarz:overlap}\n  A covering of $\\Omega$ with subdomains $\\Omega_j$ is called\n  \\define{overlapping} with minimal \\define{overlap} $\\delta$, if for\n  each $\\Omega_j$ and all $x\\in\\Omega_j$ there holds:\n  \\begin{gather*}\n    \\operatorname{dist}(x,\\partial\\Omega_j\\setminus\\partial\\Omega) <\n    \\delta\\;\n    \\Rightarrow\\; \\exists k\\neq j: x\\in \\Omega_k.\n  \\end{gather*}\n\\end{definition}\n\n\\begin{Definition}{dd-finite-covering}\n  \\label{definition:schwarz:finite-covering}\n\\defindex{NO@$N_O$}\n  \\defindex{finite covering}\n  We say that a family of coverings is finite, if\n  there is a constant $N_O$ independent of $\\T_h$ and the number of\n  subdomains, such that for each $j$ the intersection\n  $\\Omega_j\\cap\\Omega_k$ is nonempty for at most $N_O$ subdomains\n  $\\Omega_k$.\n\\end{Definition}\n\n\\begin{Definition}{partition-of-unity}\n  A smooth \\define{partition of unity} with respect to the subdomains\n  $\\Omega_1,\\Omega_2,\\dots,\\Omega_J$ of $\\Omega$ is a set of\n  nonnegative functions $\\{\\phi_1,\\dots,\\phi_J\\}\\subset\n  C^\\infty(\\overline\\Omega)$ such that\n  \\begin{xalignat}2\n    \\label{eq:schwarz:6}\n    \\phi_j(x) &  = 0\n    & \\forall x & \\in \\Omega\\setminus\\Omega_j, \\quad j=1,\\dots,J\n    \\\\\n    \\label{eq:schwarz:7}\n    \\sum_{j=1}^J \\phi_j(x) &= 1\n    & \\forall x&\\in\\overline\\Omega.\n  \\end{xalignat}\n  Similarly, we can define partitions of unity in $H^1(\\Omega)$ or\n  partitions of unity which are piecewise $C^1$.\n  \n  Furthermore, we assume that there is a positive constant $\\delta$,\n  called \\define{overlap}, such that for all $j=1,\\dots,J$ there holds\n  \\begin{gather}\n    \\label{eq:schwarz:8}\n    \\norm{\\nabla\\phi_j}_{L^\\infty(\\Omega)} \\lesssim \\frac 1\\delta,\n  \\end{gather}\n  where the implicit constant is independent of $h$, $\\delta$ and $J$.\n\\end{Definition}\n\n\\begin{remark}\n  The term overlap for $\\delta$ is justified by the following\n  consideration. Let $x \\in \\Omega_j$ be a point which is not in any\n  other $\\Omega_i$. Then, $\\phi_j(x) = 1$. If~\\eqref{eq:schwarz:8} is\n  to hold, then it is necessary that\n  $\\operatorname{dist}(x,\\partial\\Omega_j) \\ge \\delta$ (up to a\n  constant, but this constant is already\n  in~\\eqref{eq:schwarz:8}). Thus, the points of distance less than\n  $\\delta$ from $\\partial\\Omega_j$ must be elements of another\n  subdomain as well, which is then said to overlap with $\\Omega_j$.\n\\end{remark}\n\n\\begin{example}\n  \\label{example:schwarz:2}\n  On uniform meshes, overlapping subdomains with an overlap of\n  $\\delta = n h$ with $n=1,2,\\ldots$ can be achieved easily by the\n  following procedure (see also the sketch after this example):\n  \\begin{enumerate}\n  \\item Begin with a non-overlapping subdivision $\\{\\Omega_j^0\\}$,\n    for instance aligned with the mesh cells of $\\T_H$.\n  \\item Add all cells that share at least a vertex with a cell in\n    $\\Omega_j^0$ to obtain a domain $\\Omega_j^1$. After this\n    procedure, two neighboring domains will overlap by two cells,\n    resulting in $\\delta = 2h$.\n  \\item Repeat this procedure to obtain larger overlaps.\n  \\end{enumerate}\n  \n  On the resulting partitions, a partition of unity in $H^1$ can be\n  constructed with piecewise linear (bilinear on quadrilaterals)\n  functions. 
For instance for $\\Omega_j^1$ this function is\n  constructed as follows:\n  \\begin{enumerate}\n  \\item Choose $\\phi_j(x) = 1/2$ at all vertices on $\\partial \\Omega_j^0$.\n  \\item Choose $\\phi_j(x) = 0$ at all vertices on $\\partial \\Omega_j^1$ and outside\n    $\\Omega_j^1$.\n  \\item Choose $\\phi_j(x) = 1$ at all remaining vertices inside\n    $\\Omega_j^0$.\n  \\item Connect these values by linear (on simplicial meshes), bilinear\n    (quadrilateral meshes) or trilinear (hexahedral meshes)\n    polynomials inside each mesh cell $T\\in \\T_h$.\n  \\end{enumerate}\n  This partition of unity achieves the estimate~\\eqref{eq:schwarz:8}\n  with a constant of $1/2$.\n\\end{example}
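\n\\begin{remark}\n  A small sketch (in Python; the reduction to a one-dimensional mesh\n  of cell indices and all names are only for illustration) of the\n  growing procedure from the example above: starting from\n  non-overlapping blocks, one growing step adds the neighboring\n  cells, so that two neighboring subdomains overlap by two cells,\n  i.e.~$\\delta = 2h$.\n\\begin{verbatim}\ndef grow(subdomain, n_cells):\n    # Add every cell sharing a vertex with the subdomain; in 1d\n    # these are the left and right neighbor cells.\n    grown = set(subdomain)\n    for c in subdomain:\n        grown.update({max(c - 1, 0), min(c + 1, n_cells - 1)})\n    return grown\n\nn_cells = 16\n# Non-overlapping start: four blocks of four cells each.\nblocks = [set(range(4 * j, 4 * j + 4)) for j in range(4)]\noverlapping = [grow(b, n_cells) for b in blocks]\nprint(sorted(overlapping[1]))   # [3, 4, 5, 6, 7, 8]\n\\end{verbatim}\n\\end{remark}\n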
\n\\begin{notation}\n  \\label{par:schwarz:1}\n  The solution space of our problem is the space $V=V_h$ given by the\n  finite element space on the mesh $\\T_h$. We define finite element\n  spaces on $\\Omega_j$ by\n  \\begin{gather}\n    \\label{eq:schwarz:9}\n    V_j = \\bigl\\{ v\\in V_h \\big| \\forall x\\in\\Omega\\setminus\\Omega_j :\n    v(x) =0\\bigr\\}.\n  \\end{gather}\n  Additionally, we define the space $V_0 \\equiv V_H$ as the finite\n  element space on the coarse mesh $\\T_H$.  Since the meshes are\n  nested, $V_H$ is indeed a subspace of $V_h$. Thus, we obtain a\n  decomposition of $V_h$ into $J+1$ subspaces\n  \\begin{gather*}\n    V_h = V_0 + \\sum_{j=1}^J V_j,\n  \\end{gather*}\n  where the last $J$ are associated with the subdomains. In fact,\n  Lemma~\\ref{lemma:schwarz:4} below states that already the spaces\n  $V_1$ to $V_J$ are sufficient to span $V_h$. Nevertheless, the\n  coarse grid space plays a crucial role in the efficiency of the\n  method due to Lemma~\\ref{lemma:schwarz:stable-decomposition}.\n\\end{notation}\n\n\\begin{Definition}{two-level-additive}\n  Let the spaces $V_j$, $j=0,\\dots,J$ be defined as above. Then, the\n  \\define{two-level additive Schwarz preconditioner} is defined as\n  \\begin{gather}\n    \\label{eq:schwarz:10}\n    B^{-1}_{\\text{TLS}} = \\sum_{j=0}^J P_j A^{-1} = \\sum_{j=0}^J A_j^{-1} \\Pi^T_j,\n  \\end{gather}\n  where $P_j$ is defined according to~\\eqref{eq:schwarz:2} and $A_j:\n  V_j\\to V_j^*$ by\n  \\begin{gather}\n    \\scal(A_j u_j, v_j)_V = a(u_j, v_j),\\quad\\forall u_j, v_j \\in V_j.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Notation}{H-equals-0}\n  In order to simplify notation, we have assigned index zero to\n  $V_H$. Thus, sums in future terms may either start at one, summing\n  over subdomains, or at zero, summing over all subspaces.\n\\end{Notation}\n\n\\begin{Lemma}{schwarz4}\n  \\label{lemma:schwarz:4}\n  There holds\n  \\begin{gather}\n    \\label{eq:schwarz:11}\n    V_h = \\sum_{j=1}^J V_j.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  Let $I_h: C(\\overline\\Omega) \\to V_h$ be the interpolation operator\n  of the finite element space. Then, for any given $v\\in V_h$ define\n  $v_j = I_h(\\phi_j v)$, where $\\phi_j$ is the function associated\n  with $\\Omega_j$ in a partition of unity for $\\Omega_1,\\dots,\\Omega_J$.\n  \n  By definition of $\\phi_j$, there holds $\\phi_j v = 0$ on\n  $\\Omega\\setminus\\Omega_j$. Furthermore, we assumed that a mesh cell\n  of $\\T_h$ is either completely in $\\Omega_j$ or completely in its\n  complement. Since nodal values of a cell are located in the cell\n  itself, this implies that $I_h (\\phi_j v) = 0$ on\n  $\\Omega\\setminus\\Omega_j$. Therefore, $I_h (\\phi_j v) \\in V_j$.\n  \n  On the other hand, we use the linearity of the interpolation\n  operator to obtain\n  \\begin{gather*}\n    \\sum_{j=1}^J v_j = \\sum_{j=1}^J I_h(\\phi_j v)\n    = I_h\\left(v\\sum_{j=1}^J \\phi_j\\right)\n    = I_h v = v,\n  \\end{gather*}\n  thus, the $v_j$ are indeed a decomposition of $v$. Since $v\\in V_h$\n  was chosen arbitrarily, the lemma is proven.\n\\end{proof}\n\n\\begin{Lemma}{finite-overlap-cauchy-schwarz}\n  \\label{lemma:schwarz:8}\n  Let the covering $\\{\\Omega_j\\}_{j=1,\\dots,J}$ for $\\Omega$ be finite\n  according to\n  Definition~\\ref{definition:schwarz:finite-covering}. Then, the\n  \\putindex{strengthened Cauchy-Schwarz inequalities}~\\eqref{eq:schwarz:25} hold\n  with a spectral radius\n  \\begin{gather}\n    \\label{eq:schwarz:26}\n    \\rho(\\mathcal E) \\le N_O.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  The term $a(v_i, v_j)$ is nonzero only if the supports of the two\n  functions have a nonempty intersection. Accordingly, for each index\n  $i$ at most $N_O$ of the coefficients $\\epsilon_{ij}$ are\n  nonzero. We set these equal to one and use Gershgorin's theorem to\n  estimate the greatest eigenvalue.\n\\end{proof}\n\n\\begin{Lemma}{two-level-bound1}\n  \\label{lemma:schwarz:6}\n  Let $v_j\\in V_j$ for $j=0,\\dots,J$ be a decomposition of $v\\in V_h$\n  such that\n  \\begin{gather*}\n    v=\\sum_{j=0}^J v_j.\n  \\end{gather*}\n  Then,\n  \\begin{gather}\n    \\label{eq:schwarz:12}\n    a(v,v) \\lesssim \\sum_{j=0}^J a(v_j, v_j),\n  \\end{gather}\n  where the implicit constant does not depend on $h$, $H$, or $J$.\n\\end{Lemma}\n\n\\begin{proof}\n  First a note: the inequality would be obvious if $V_h$ were a direct\n  sum of the spaces $V_j$, and it would hold with a constant of one if\n  they were mutually orthogonal. Thus, we have to show some kind of\n  orthogonality between the spaces.\n\n  We start out by stating that\n  \\begin{align*}\n    a(v,v) &= a\\left(v_0+\\sum_{j=1}^J v_j, v_0+\\sum_{j=1}^J v_j\\right)\n    \\\\\n    &\\le 2 \\left(a(v_0, v_0) + a\\!\\left(\\sum_{j=1}^J v_j,\\sum_{j=1}^J\n        v_j\\right)\\right)\n    \\\\\n    &= 2 \\left(a(v_0, v_0) + \\sum_{j,k=1}^J a(v_j,v_k)\\right) \\\\\n    & \\le 2 a(v_0, v_0) + 2 N_O \\sum_{j=0}^J a(v_j, v_j),\n  \\end{align*}\n  where the last inequality is due to Lemma~\\ref{lemma:schwarz:7} and\n  Lemma~\\ref{lemma:schwarz:8}. Since $N_O$ is assumed independent of\n  $h$, $H$, and $J$, the lemma is proven.\n\\end{proof}\n\n\n\\begin{Lemma}{stable-decomposition}\n  \\index{stable decomposition}\n  \\label{lemma:schwarz:stable-decomposition}\n  For each $v\\in V_h$ there exists a decomposition $v=\\sum_{j=0}^J\n  v_j$ with $v_j\\in V_j$, such that\n  \\begin{gather}\n    \\label{eq:schwarz:13}\n    \\sum_{j=0}^J a(v_j, v_j)\n    \\lesssim \\left(1+\\frac H\\delta\\right)^2 a(v,v).\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  Let $\\tilde I_H: H^1_0(\\Omega) \\to V_H$ be an interpolation operator\n  continuous on $H^1(\\Omega)$, for instance the interpolation operator\n  by Cl\\'ement or the one by Scott and Zhang. For a given function\n  $v\\in V_h$, let $v_H = \\tilde I_H v$. 
Then\n  \\begin{gather}\n    \\label{eq:schwarz:20}\n    \\begin{split}\n      |v_H|_1 &\\lesssim |v|_1 \\\\\n      \\norm{v-v_H}_0 &\\lesssim H |v|_1.\n    \\end{split}\n  \\end{gather}\n  From~\\eqref{eq:schwarz:20} there holds\n  \\begin{gather}\n    \\label{eq:schwarz:22}\n    a(v_H,v_H) = |v_H|_1^2 \\lesssim |v|_1^2 = a(v,v).\n  \\end{gather}\n  Let $w=v-v_H$. Using a partition of unity $\\{\\phi_j\\}$ for the\n  subdomains $\\{\\Omega_j\\}$ and the nodal interpolant $I_h$ as in the\n  proof of Lemma~\\ref{lemma:schwarz:4}, we let\n  \\begin{gather*}\n    v_j = I_h(\\phi_j w), \\qquad j=1,\\dots,J.\n  \\end{gather*}\n  As in that proof, $\\sum_{j=1}^J v_j = w$; together with $v_0 = v_H$,\n  this yields $v = \\sum_{j=0}^J v_j$. We point out that we can use the nodal\n  interpolant, since $w$ is a finite element function on $\\T_h$ and\n  $\\phi_j$ is either smooth or piecewise polynomial. For the remainder\n  of this proof, we will assume the piecewise polynomial case (see\n  Example~\\ref{example:schwarz:2}) and leave the arguments for a\n  smooth function $\\phi_j$ to the reader.\n  \n  The interpolation operator is exact for polynomials of degree $k$\n  (assuming such an order for the finite element being\n  used). Therefore,\n  \\begin{gather*}\n    a(v_j,v_j) = |v_j|_1^2 \\lesssim |\\phi_j w|_1^2\n    \\lesssim \\norm{\\nabla \\phi_j w}_0^2 + \\norm{\\phi_j \\nabla w}_0^2.\n  \\end{gather*}\n  Using the properties of the partition of unity, we obtain\n  \\begin{gather*}\n    a(v_j,v_j) \\lesssim \\frac1{\\delta^2} \\norm{\\chi(\\Omega_j) w}_0^2\n    + |\\chi(\\Omega_j)w|_1^2.\n  \\end{gather*}\n  Summing up yields\n  \\begin{gather}\n    \\label{eq:schwarz:21}\n    \\begin{split}\n      \\sum_{j=1}^J a(v_j, v_j)\n      &\\lesssim \\sum_{j=1}^J \\left(\n        \\frac1{\\delta^2} \\norm{\\chi(\\Omega_j) w}_0^2\n        + |\\chi(\\Omega_j)w|_1^2\\right) \\\\\n      &\\le N_O \\left(\\frac1{\\delta^2}\\norm{w}_0^2 + |w|_1^2\\right) \\\\\n      &=  N_O \\left(\\frac1{\\delta^2} \\norm{v-v_H}_0^2 + |v-v_H|_1^2\\right)\n      \\\\\n      & \\lesssim \\frac{H^2}{\\delta^2} |v|_1^2 + |v|_1^2 \\\\\n      &= \\left(1+\\frac{H^2}{\\delta^2}\\right) a(v,v).\n    \\end{split}\n  \\end{gather}\n  The estimate~\\eqref{eq:schwarz:13} now follows\n  from~\\eqref{eq:schwarz:22} and~\\eqref{eq:schwarz:21}.\n\\end{proof}\n\n\\begin{Theorem}{two-level-additive-convergence}\n  \\label{theorem:schwarz:two-level-convergence}\n  Under the assumptions made so far in this section, there holds\n  \\begin{gather}\n    \\label{eq:schwarz:14}\n    \\kappa(B^{-1}_{\\text{TLS}} A_h)\n    = \\frac{\\Lambda(B_{\\text{TLS}}, A_h)}{\\lambda(B_{\\text{TLS}},\n      A_h)}\n    \\lesssim \\left(1+\\frac H\\delta\\right)^2,\n  \\end{gather}\n  where the implicit constant is independent of $h$, $\\delta$, $H$, and $J$.\n\\end{Theorem}\n\n\\begin{proof}\n  The proof follows Note~\\ref{note:schwarz:1}. Indeed,\n  Lemma~\\ref{lemma:schwarz:6} proves inequality~\\eqref{eq:schwarz:23}\n  and Lemma~\\ref{lemma:schwarz:stable-decomposition}\n  proves~\\eqref{eq:schwarz:24}.\n\\end{proof}\n\n\\begin{remark}\n  We have constructed a preconditioner $B_{\\text{TLS}}$ for the finite\n  element discretization of the Poisson problem~\\eqref{eq:itintro:1}\n  such that the preconditioned system has a bounded condition number\n  independent of the mesh size $h$. 
Thus, a Richardson iteration or a\n  conjugate gradient method using this preconditioner will reduce the\n  error by a certain amount within a fixed number of steps.\n  \n  Closer inspection of the estimate~\\eqref{eq:schwarz:14} in view of\n  Example~\\ref{example:schwarz:2} reveals a problem though: typically,\n  $\\delta$ is of the order of $h$, such that the estimate becomes\n  \\begin{gather*}\n    \\kappa(B^{-1}_{\\text{TLS}} A_h) \\lesssim \\left(1+\\frac H h\\right)^2.\n  \\end{gather*}\n  If we choose $H$ constant, this is exactly as bad as the condition\n  number of $A$ itself. Under mild further assumptions, the square on\n  the right hand side can be avoided~\\cite{DryjaWidlund94}, which\n  is an improvement compared to the operator without preconditioning,\n  but is not uniform with respect to $h$. Therefore, we are left with\n  two options:\n  \\begin{enumerate}\n  \\item Increase the overlap such that it is $\\mathcal O(H)$. This\n    procedure yields a \\putindex{uniform preconditioner}, but it\n    introduces a problem: when refining $h$, more and more\n    cells belong to several subdomains and thus computations on them\n    have to be performed several times. Therefore, the effort per\n    preconditioning step is increased considerably.\n  \\item Keep the overlap at a small multiple of $h$ and choose $H$\n    such that $H/h$ is bounded by a constant. Then, the preconditioner\n    remains uniform and the overlap remains small. This way, the\n    difficulty has been transferred to the coarse grid problem on\n    $\\T_H$, since now this problem becomes more and more difficult to\n    solve when $h$ decreases.\n  \\end{enumerate}\n  \n  While the problems of the first option above are inherent and\n  unavoidable, the second option is at least seemingly optimal and\n  more creativity may be invested into the solution of the coarse grid\n  problem. Therefore, the latter is usually preferred. The coarse grid\n  problem then leads to the idea of multigrid methods, which will be\n  dealt with in Chapter~\\ref{cha:iteration:multigrid-methods}.\n\\end{remark}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Multiplicative Schwarz methods}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{example}\n  If we write the Jacobi method in equation~\\eqref{eq:schwarz:28} line\n  by line, we get for each $j=1,\\ldots,J$:\n  \\begin{gather}\n    \\label{eq:schwarz:29}\n      u_j^{(k+1)} = u_j^{(k)} - \\omega_k P_j \\bigl(u^{(k)} - A^{-1}f\\bigr).\n  \\end{gather}\n  Since the updates are orthogonal, this is equivalent to the sum\n  in~\\eqref{eq:schwarz:28}. In the \\putindex{Gauss-Seidel method}, these\n  projections are done consecutively. For instance, if we introduce\n  broken indices, we can write it in the form\n  \\begin{gather}\n    \\label{eq:schwarz:30}\n      u^{(k+\\frac{j}{J})} = u^{(k+\\frac{j-1}{J})}\n      - \\omega_k P_j \\bigl(u^{(k+\\frac{j-1}{J})} - A^{-1}f\\bigr).\n  \\end{gather}\n  This means we apply the corrections consecutively one after the\n  other. In order to understand convergence of this method, we bring\n  it into a different form and study the propagation of the\n  error. Let $u = A^{-1}f$ be the solution of the problem. 
Then, the\n  error propagates like\n  \\begin{multline}\n    \\label{eq:schwarz:31}\n      u^{(k+\\frac{j}{J})} - u = u^{(k+\\frac{j-1}{J})} - u\n      - \\omega_k P_j \\left(u^{(k+\\frac{j-1}{J})} - u\\right) \\\\\n      = (I-\\omega_k P_j) \\left(u^{(k+\\frac{j-1}{J})} - u\\right).\n  \\end{multline}\n  The error after a whole Gauss-Seidel step is\n  \\begin{gather}\n    \\label{eq:schwarz:32}\n      u^{(k+1)} - u\n      = (I-\\omega_k P_J)(I-\\omega_k P_{J-1})\\dots(I-\\omega_k P_1) \\left(u^{(k)} - u\\right).\n  \\end{gather}\n  The corresponding error equation for the Jacobi method is\n  \\begin{gather}\n    \\label{eq:schwarz:33}\n    u^{(k+1)} - u = \\left(I-\\omega_k \\sum_{j=1}^J P_j\\right) \\left(u^{(k)} - u\\right).\n  \\end{gather}\n  Hence the names multiplicative and additive methods.\n\\end{example}
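\n\\begin{remark}\n  The two error propagation operators~\\eqref{eq:schwarz:32}\n  and~\\eqref{eq:schwarz:33} are easily compared numerically. The\n  following sketch (in Python with NumPy, for $\\omega_k=1$ and a small\n  s.p.d. placeholder matrix) builds the Ritz projections $P_j$ onto\n  the coordinate subspaces $\\operatorname{span}\\{e_j\\}$ and prints the\n  spectral radii of both operators:\n\\begin{verbatim}\nimport numpy as np\n\nA = np.array([[4.0, 1.0, 0.0],\n              [1.0, 4.0, 1.0],\n              [0.0, 1.0, 4.0]])\nn = A.shape[0]\nI = np.eye(n)\n\n# Matrix of P_j u = (1/a_jj)(A u)_j e_j, i.e. P_j = e_j e_j^T A / a_jj.\nP = [np.outer(I[j], A[j]) / A[j, j] for j in range(n)]\n\nE_add = I - sum(P)     # additive error propagation (Jacobi)\nE_mult = I.copy()      # multiplicative error propagation (Gauss-Seidel)\nfor j in range(n):\n    E_mult = (I - P[j]) @ E_mult   # corrections applied consecutively\n\nprint(max(abs(np.linalg.eigvals(E_add))),\n      max(abs(np.linalg.eigvals(E_mult))))\n\\end{verbatim}\n  For this matrix the multiplicative operator has the smaller spectral\n  radius, in line with the usual comparison of Gauss-Seidel and Jacobi.\n\\end{remark}\n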
\n\\begin{intro}\n  The example of the Jacobi and Gauss-Seidel methods is generic in the\n  way that for any subspace decomposition of $V$ into the sum of\n  $V_j$, we can define an additive and a multiplicative method. Not\n  surprisingly, their analysis also rests on the same ingredients.\n\\end{intro}\n\n\\begin{Definition}{multiplicative-schwarz}\n  The \\define{multiplicative Schwarz preconditioner} for the operator\n  $A$ associated with the symmetric and $V$-elliptic bilinear form\n  $a(.,.)$ with respect to the subspace decomposition $V_j$ is the\n  mapping $B_m: V \\to V^*$ such that\n  \\begin{gather}\n    \\label{eq:schwarz:34}\n    B_m^{-1} = (I -  E_J) A^{-1},\n  \\end{gather}\n  where $E_J$ is the multiplicative \\define{error propagation operator}\n  \\begin{gather}\n    \\label{eq:schwarz:35}\n    E_J = (I - P_J)(I - P_{J-1})\\dots(I - P_1).\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Lemma}{error-propagation}\n  The error after one step of the iteration\n  \\begin{gather*}\n    u^{(k+1)} = u^{(k)} - B_m^{-1}\\left(A u^{(k)} - f\\right)\n  \\end{gather*}\n  is given by\n  \\begin{gather*}\n    u^{(k+1)} - u = E_J \\left(u^{(k)} - u\\right).\n  \\end{gather*}\n\\end{Lemma}\n\n\\begin{proof}\n  First, we use the definition of $B_m$ to obtain\n  \\begin{gather*}\n    u^{(k+1)} = u^{(k)} - (I -  E_J) \\left(u^{(k)} - u\\right).\n  \\end{gather*}\n  Therefore,\n  \\begin{gather*}\n    u^{(k+1)} - u = u^{(k)} - u - (I -  E_J) \\left(u^{(k)} - u\\right)\n    = (I-I+E_J) \\left(u^{(k)} - u\\right).\n  \\end{gather*}\n\\end{proof}\n\n\\begin{remark}\n  We can define the error propagation operator $E_j$ through the\n  recursion\n  \\begin{gather}\n    \\label{eq:schwarz:45}\n    E_{0} = I, \\qquad E_{k} = (I - P_k) E_{k-1}, \\quad k=1,\\dots,J.\n  \\end{gather}\n\\end{remark}\n\n\\begin{Lemma}{error-propagation-recursion}\n  \\label{lemma:schwarz:9}\n  The error propagation operator $E_j$ has the following properties:\n  \\begin{align}\n    \\label{eq:schwarz:44}\n    E^*_{j-1}E_{j-1} - E^*_{j}E_{j} &= E^*_{j-1} P_j E_{j-1}\n    \\\\\n    \\label{eq:schwarz:46}\n    I - E_j &= \\sum_{k=1}^{j} P_k E_{k-1}.\n  \\end{align}\n  Here, $E^*$ is the $a(.,.)$-adjoint of $E$.\n\\end{Lemma}\n\n\\begin{proof}\n  In order to prove the first identity, we use the recursion\n  formula~\\eqref{eq:schwarz:45} to obtain (using $P_j = P_j^*$\n  and $P_j^2 = P_j$)\n  \\begin{align*}\n    E^*_{j}E_{j} &= E^*_{j-1} (I-P_j^*) (I-P_j) E_{j-1} \\\\\n    &= E^*_{j-1} E_{j-1} - E^*_{j-1}P_jE_{j-1}.\n  \\end{align*}\n  The second identity is proven by induction with\n  \\begin{align*}\n    (I-E_0) &= I - I = 0, \\\\\n    (I-E_j) &= I-(I-P_j) E_{j-1} = P_j E_{j-1} + I - E_{j-1} = P_j\n    E_{j-1} +\\sum_{k=1}^{j-1} P_k E_{k-1}.\n  \\end{align*}\n\\end{proof}\n\n\\begin{lemma}{schwarz10}\n  \\label{lemma:schwarz:10}\n  Let Assumption~\\ref{assumption:schwarz:1} (\\putindex{strengthened\n    Cauchy-Schwarz inequalities}) be satisfied. Then the following\n  inequalities hold for $0\\le j,k \\le J$ and for $u,v\\in V$:\n  \\begin{align}\n    a(P_j u,v) &\\le \\sqrt{a(P_j u,u)} \\sqrt{a(P_j v,v)}, \\\\\n    a(P_j u,P_k v) &\\le \\epsilon_{jk}\n    \\sqrt{a(P_j u,u)} \\sqrt{a(P_k v,v)}.\n  \\end{align}\n\\end{lemma}\n\n\\begin{proof}\n  By the definition of $P_j$, the fact that $P_j$ is $a(.,.)$-self\n  adjoint (Lemma~\\ref{lemma:schwarz:ritz}) and the \\putindex{Bunyakovsky-Cauchy-Schwarz\n  inequality}, we have\n  \\begin{multline*}\n    a(P_j u,v) = a(u, P_j v) = a(P_j u, P_j v)\n    \\\\\n    \\le \\sqrt{a(P_j u,P_j u)} \\sqrt{a(P_j v,P_j v)}\n    \\le \\sqrt{a(P_j u,u)} \\sqrt{a(P_j v,v)}.\n  \\end{multline*}\n  The second inequality follows readily by\n  \\begin{gather*}\n    a(P_j u,P_k v)\n    \\le \\epsilon_{jk}\\sqrt{a(P_j u,P_j u)} \\sqrt{a(P_k v,P_k v)}\n    = \\epsilon_{jk}\\sqrt{a(P_j u,u)} \\sqrt{a(P_k v,v)}.\n  \\end{gather*}\n\\end{proof}\n\n\\begin{Theorem}{schwarz-multiplicative-convergence}\n  Let Assumption~\\ref{assumption:schwarz:1} and the\n  estimate~\\eqref{eq:schwarz:24} be satisfied. Then, the error\n  propagation operator satisfies\n  \\begin{gather}\n    \\label{eq:schwarz:47}\n    \\norm{E_J}_A^2 \\le 1 - \\frac{1}{\\rho(\\mathcal E)^2 \\constref{eq:schwarz:24}^2} < 1.\n  \\end{gather}\n\\end{Theorem}\n\n\\begin{proof}\n  We use equation~\\eqref{eq:schwarz:44} of Lemma~\\ref{lemma:schwarz:9} to obtain\n  \\begin{gather*}\n    I- E^*_J E_J = \\sum_{j=1}^{J} \\left(E^*_{j-1}E_{j-1} -\n      E^*_{j}E_{j}\\right)\n    = \\sum_{j=1}^J E^*_{j-1} P_j E_{j-1}.\n  \\end{gather*}\n  Since the operators $P_j$ are positive semi-definite, by rearranging\n  we obtain for all $v\\in V$ the estimate\n  \\begin{gather}\n    \\label{eq:schwarz:49}\n    a(E_J v, E_J v) \\le a(v,v) - \\sum_{j=1}^J a(E_{j-1}v, P_j E_{j-1}v).\n  \\end{gather}\n  Thus, $\\norm{E_J}_A^2 \\le 1$, but we have to show that the sum on the\n  right is sufficiently positive to get an estimate of the convergence\n  rate. Therefore, we start with equation~\\eqref{eq:schwarz:46} of\n  Lemma~\\ref{lemma:schwarz:9}, yielding by Lemma~\\ref{lemma:schwarz:10}\n  \\begin{align*}\n    a(P_j v,v)\n    &= a(P_j v, E_{j-1}v) + \\sum_{k=1}^{j-1} a(P_j v, P_k E_{k-1} v)\n    \\\\\n    &\\le \\sqrt{a(P_j v,v)} \\left(\\sqrt{a(P_j E_{j-1}v,E_{j-1}v)}\n    + \\sum_{k=1}^{j-1} \\epsilon_{jk} \\sqrt{a(P_k E_{k-1}v,E_{k-1}v)}\n    \\right) \\\\\n    & \\le \\sqrt{a(P_j v,v)} \\left(\\sum_{k=1}^{j} \\epsilon_{jk}\n      \\sqrt{a(P_k E_{k-1} v,E_{k-1} v)}\n    \\right).\n  \\end{align*}\n  Now let $z \\in \\R^J$ be the vector with entries $z_k = \\sqrt{a(P_k\n    E_{k-1}v,E_{k-1}v)}$. 
Then, we rewrite the previous estimate as\n  \\begin{gather*}\n    a(P_j v,v) \\le (\\mathcal E z)_j^2.\n  \\end{gather*}\n  Summing over $j$ yields\n  \\begin{gather}\n    \\label{eq:schwarz:48}\n    \\sum_{j=1}^J a(P_j v,v) \\le \\norm{\\mathcal E z}^2\n    \\le \\rho(\\mathcal E)^2 \\norm{z}^2\n    = \\rho(\\mathcal E)^2 \\sum_{k=1}^{J} a(P_k E_{k-1} v,E_{k-1} v).\n  \\end{gather}\n  At this point, we use Lemma~\\ref{lemma:schwarz:11} to estimate\n  \\begin{gather*}\n    \\frac1{\\constref{eq:schwarz:24}^2} a(v,v)\n    \\le \\sum_{j=1}^J a(P_j v,v)\n    \\le \\rho(\\mathcal E)^2 \\sum_{k=1}^{J} a(P_k E_{k-1} v,E_{k-1} v).\n  \\end{gather*}\n  Finally, entering this estimate into~\\eqref{eq:schwarz:49}, we\n  obtain\n  \\begin{gather*}\n    a(E_J v, E_J v)\n    \\le a(v,v) \\left(1-\\frac1{\\rho(\\mathcal\n        E)^2\\constref{eq:schwarz:24}^2}\n      \\right),\n  \\end{gather*}\n  which is the statement of the theorem.\n\\end{proof}\n\n\\begin{Lemma}{schwarz11}\n  \\label{lemma:schwarz:11}\n  Let inequality~\\eqref{eq:schwarz:24} hold for some \\putindex{stable\n    decomposition} $v=\\sum v_j$ with the constant\n  $\\constref{eq:schwarz:24}$. Then,\n  \\begin{gather}\n    \\label{eq:schwarz:50}\n    \\sum_{j=1}^J a(P_j v,v) \\ge \\frac1{\\constref{eq:schwarz:24}^2} a(v,v).\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  From inequality~\\eqref{eq:schwarz:24} for the decomposition $v=\\sum\n  v_j$, the definition of $P_j$, and the\n  \\putindex{Bunyakovsky-Cauchy-Schwarz inequality}, we obtain\n  \\begin{align*}\n    a(v,v) &= \\sum_{j=1}^J a(v,v_j) \\\\\n    &= \\sum_{j=1}^J a(P_j v, v_j) \\\\\n    &\\le \\sqrt{\\sum_{j=1}^J a(P_j v, P_j v)} \\;\\sqrt{\\sum_{j=1}^J a(v_j,\n      v_j)} \\\\\n    & \\le \\sqrt{\\sum_{j=1}^J a(P_j v, P_j v)}\\;\n    \\constref{eq:schwarz:24} \\sqrt{a(v,v)}.\n  \\end{align*}\n  Thus,\n  \\begin{gather*}\n    a(v,v) \\le \\constref{eq:schwarz:24}^2 \\sum_{j=1}^J a(P_j v, P_j v)\n    = \\constref{eq:schwarz:24}^2 \\sum_{j=1}^J a(v, P_j v).\n  \\end{gather*}\n  This proves the lemma.\n\\end{proof}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Extensions}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  In order to keep the presentation and the proofs simple, we have\n  presented Schwarz methods in their most straightforward form. The\n  framework allows for extensions, which each add minor\n  complications to the proofs and yield slightly modified results. We\n  are going to mention a few of them.\n\\end{intro}\n\n\\begin{remark}\n  The definition of the additive Schwarz operator in\n  equation~\\eqref{eq:schwarz:4} and the multiplicative Schwarz\n  operator~\\eqref{eq:schwarz:34} rely on the Ritz projections $P_j$,\n  which in turn, through their definition in~\\eqref{eq:schwarz:2},\n  require solving the local problems with the operators $A_j$ exactly.\n  \n  In some cases, it might be advantageous either to solve a different\n  local problem, or to solve the local problem approximately. 
In both\n  cases, we can rewrite the algorithm as using a different local\n  projection $\\widetilde P_j$ which instead of the Ritz\n  projection~\\eqref{eq:schwarz:2} is defined by the equation\n  \\begin{gather}\n    \\label{eq:schwarz:42}\n    \\widetilde a_j (\\widetilde P_j u,v_j) = a(u,v_j),\\quad\\forall v_j\\in V_j,\n  \\end{gather}\n  with a corresponding operator $\\widetilde A_j: V_j \\to V_j^*$. We\n  will continue to assume that $\\widetilde a_j (.,.)$ is symmetric and\n  elliptic in the same way as $a(.,.)$ is, but possibly with different\n  constants.\n\n  It is obvious that we will need assumptions on the modified\n  bilinear forms $\\widetilde a_j (.,.)$ and their relationship with\n  $a(.,.)$. But it turns out that, if we make these additional\n  assumptions, we can replace $P_j$ by $\\widetilde\n  P_j$ in the algorithm and the analysis carries through with just one\n  additional parameter involved.\n  \n  The modifications in the analysis are as follows: first, replace the\n  stable decomposition Lemma~\\ref{lemma:schwarz:stable-decomposition}\n  by Assumption~\\ref{assumption:schwarz:stable-decomposition-2}. Then,\n  introduce the additional\n  Assumption~\\ref{assumption:schwarz:local-stability}, which is ellipticity of\n  the modified forms in $V_j$ with respect to the norm established by the\n  original bilinear form. Both assumptions together establish a\n  relaxed form (only for the sum over $j$) of spectral equivalence for\n  $\\widetilde a_j(.,.)$ and $a(.,.)$ on $V_j$. The \\putindex{strengthened\n  Cauchy-Schwarz inequalities} in Assumption~\\ref{assumption:schwarz:1}\n  remain the same.\n\\end{remark}\n\n\\begin{assumption}[Stable decomposition]\n  \\index{stable decomposition}\n  \\label{assumption:schwarz:stable-decomposition-2}\n    For each $v\\in V_h$ there exists a decomposition $v=\\sum_{j=0}^J\n  v_j$ with $v_j\\in V_j$, such that\n  \\begin{gather}\n    \\label{eq:schwarz:36}\n    \\sum_{j=0}^J \\widetilde a_j(v_j, v_j)\n    \\lesssim a(v,v),\n  \\end{gather}\n  where the implicit constant is independent of the number of\n  subdomains $J$.\n\\end{assumption}\n\n\\begin{assumption}[Local stability]\n  \\index{local stability}\n  \\label{assumption:schwarz:local-stability}\n  There is a constant $\\omega > 0$ such that\n  \\begin{gather}\n    \\label{eq:schwarz:37}\n    a(v_j, v_j) \\le \\omega \\widetilde a_j(v_j, v_j)\n    \\quad\\forall j=1,\\dots,J \\; \\forall v_j\\in V_j.\n  \\end{gather}\n\\end{assumption}\n\n\\begin{remark}\n  With these two assumptions, the convergence estimates change as\n  follows: first, using the \\putindex{strengthened Cauchy-Schwarz\n  inequalities}~\\eqref{eq:schwarz:25}, we extend~\\eqref{eq:schwarz:38}\n  to\n  \\begin{gather}\n    \\label{eq:schwarz:39}\n    a(v,v) \\le \\omega \\rho(\\mathcal E) \\sum_{j=1}^J \\widetilde a_j(v_j, v_j).\n  \\end{gather}\n  Then, the proof of Theorem~\\ref{theorem:schwarz:1} can be conducted\n  in the very same way as before, using~\\eqref{eq:schwarz:36}\n  and~\\eqref{eq:schwarz:39}.\n\\end{remark}\n\n\\begin{example}\n  A simple example is the introduction of a relaxation parameter\n  $\\omega$ such that\n  \\begin{gather*}\n    \\widetilde a_j(v_j, v_j) = \\frac1\\omega a(v_j, v_j).\n  \\end{gather*}\n  Then, obviously, the local stability~\\eqref{eq:schwarz:37}\n  holds. Furthermore, the constant in the stable decomposition\n  estimate changes by a factor $1/\\omega$. 
Thus, in this case, the\n  upper and lower bounds $\\Lambda(B,A)$ and $\\lambda(B,A)$ change by\n  the same factor and the condition number stays the same.\n\\end{example}\n\n\\begin{remark}\n  The second extension is that we can replace the subspaces $V_j$ by\n  \\putindex{auxiliary space}s $X_j$, which are not subspaces of\n  $V$. In such a situation, we have to require the existence of a\n  \\putindex{prolongation} or \\putindex{extension operator} $R_j^T: X_j\n  \\to V$. This situation is adapted most easily to our existing\n  framework by introducing subspaces $V_j$ as the range of $R_j^T$,\n  namely,\n  \\begin{gather}\n    \\label{eq:schwarz:40}\n    V_j = \\bigl\\{ R_j^T x \\in V \\big| x\\in X_j \\bigr\\}.\n  \\end{gather}\n  Then, the local forms are defined on the auxiliary spaces,\n  \\begin{gather}\n    \\label{eq:schwarz:41}\n    \\widetilde a_j(.,.) : X_j\\times X_j \\to \\R, \\quad j=1,\\dots,J,\n  \\end{gather}\n  and wherever we need a vector in $V_j$, we use the extension\n  operator $R_j^T$. Thus, we introduce decompositions of the form $v = \\sum\n  \\alpha_j R_j^T x_j$ and the operator $\\widetilde P_j: V \\to X_j$ is\n  defined by\n  \\begin{gather}\n    \\label{eq:schwarz:43}\n    \\widetilde a_j(\\widetilde P_j u, y_j) = a(u, R_j^T y_j),\n    \\quad \\forall y_j \\in X_j.\n  \\end{gather}\n  These modifications introduce new operators, but they do not affect\n  the analysis.\n\\end{remark}\n\n\\begin{example}\n  When we implement a block-Jacobi method, we have $V = \\R^n$ and\n  $m$-dimensional subspaces with $J=n/m$ (we assume the quotient is\n  integer). In the standard formulation with subspaces $V_j$, the\n  operators $A_j$, which have to be inverted, are $n\\times n$-matrices\n  with only $m$ rows and columns different from zero. This is not a\n  useful description of the local problems. Instead, we want an\n  invertible matrix $A_j \\in \\R^{m\\times m}$. This can be achieved by\n  choosing the extension operator\n  \\begin{gather}\n    R_j^T : \\R^m \\to \\R^n,\n    \\qquad\n    \\begin{pmatrix}\n      x_1 \\\\ \\vdots \\\\ x_m\n    \\end{pmatrix}\n    \\mapsto\n    \\begin{pmatrix}\n      0\\\\ \\vdots \\\\ 0 \\\\\n      v_{mj+1} = x_1 \\\\ \\vdots \\\\ v_{m(j+1)} = x_m\n      \\\\ 0\\\\ \\vdots \\\\ 0\n    \\end{pmatrix}.\n  \\end{gather}\n\\end{example}
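\n\\begin{remark}\n  A minimal sketch (in Python with NumPy; the dimensions and the\n  tridiagonal matrix are placeholders, and blocks are indexed from\n  zero) of this extension operator and of the resulting invertible\n  local matrices $A_j = R_j A R_j^T$:\n\\begin{verbatim}\nimport numpy as np\n\nn, m = 6, 2\nJ = n // m            # number of blocks, assuming m divides n\n\ndef R_T(j):\n    # n x m matrix whose columns are the unit vectors of block j,\n    # i.e. the matrix representation of R_j^T.\n    E = np.zeros((n, m))\n    E[j * m:(j + 1) * m, :] = np.eye(m)\n    return E\n\nA = (4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)\n     + np.diag(np.ones(n - 1), -1))\n\n# Local m x m problems instead of singular n x n representations.\nA_loc = [R_T(j).T @ A @ R_T(j) for j in range(J)]\nprint(A_loc[0])       # [[4. 1.], [1. 4.]]\n\\end{verbatim}\n\\end{remark}\n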
\n\\begin{example}\n  Other examples are finite element methods with nonnested spaces on\n  different levels, for instance, when the meshes of a hierarchy are\n  not nested.\n\\end{example}\n\n\\begin{remark}\n  We have discussed additive and multiplicative Schwarz methods, but\n  we are not forced to consider only methods organized strictly in one\n  way or the other. Instead, some of the operations can be performed\n  in the additive, some in the multiplicative way.\n  \n  This is advantageous for instance on multicore hardware: an\n  inspection of the algorithms shows that the application of all\n  operators $P_j$ can be implemented in parallel, while the\n  application of the operators $(I-P_j)$ in the multiplicative method\n  is sequential.\n\\end{remark}\n\n\\begin{example}\n  Let the indices $j=1,\\dots,J$ be grouped into $M$ subsets $I_m$,\n  such that\n  \\begin{gather*}\n    P_iP_k = P_k P_i = 0,\\qquad \\forall i,k\\in I_m.\n  \\end{gather*}\n  Such a distribution of indices is also called\n  \\define{coloring}. Then,\n  \\begin{gather*}\n    (I-P_i)(I-P_k) = I- (P_i+P_k),\n  \\end{gather*}\n  and by application to the whole subset and all subsets, the error\n  propagation operator of the multiplicative method can be rewritten\n  as\n  \\begin{gather*}\n    E_J = \\left(I-\\sum_{j\\in I_1} P_j \\right)\n    \\cdots\n    \\left(I-\\sum_{j\\in I_M} P_j \\right).\n  \\end{gather*}\n  While the operations inside each ``color'' are arranged in an\n  additive and thus parallelizable manner, the colors themselves are\n  arranged in a multiplicative way.\n\\end{example}\n\n\\begin{example}\n  Another example is a rearrangement of the two-level Schwarz method\n  in a way that the domain decomposition subspaces are still dealt\n  with in an additive fashion, while the coarse space is connected\n  multiplicatively. An example for a symmetric version of this method\n  is described by the error propagation operator\n  \\begin{gather}\n    \\label{eq:schwarz:51}\n    E_{TL} = \\left(I-\\sum_{j=1}^J P_j \\right) (I-P_0)\n    \\left(I-\\sum_{j=1}^J P_j \\right),\n  \\end{gather}\n  which is the two-level operator with Schwarz smoother discussed in\n  the next chapter.\n\\end{example}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End: \n", "meta": {"hexsha": "7bd4ab62863e714e08f6cfc3c6e425de6e7244f4", "size": 50030, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "iteration/schwarz.tex", "max_stars_repo_name": "ahumanita/notes", "max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "iteration/schwarz.tex", "max_issues_repo_name": "ahumanita/notes", "max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-05-24T07:31:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-31T12:58:14.000Z", "max_forks_repo_path": "iteration/schwarz.tex", "max_forks_repo_name": "ahumanita/notes", "max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-05-15T19:28:53.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-05T19:07:29.000Z", "avg_line_length": 38.0167173252, "max_line_length": 95, "alphanum_fraction": 0.6571657006, "num_tokens": 17316, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723317123102956, "lm_q2_score": 0.8740772482857833, "lm_q1q2_score": 0.5876698530314521}}
{"text": "%!TEX root = ../OGUSAdoc.tex\n\nThe production side of the \\ogindia model is populated by a unit measure of identical perfectly competitive firms that rent capital $K_t$ and hire labor $L_t$ to produce output $Y_t$. Firms also face a flat corporate income tax $\\tau^{corp}$ as well as a tax on the amount of capital they depreciate $\\tau^\\delta$.\n\n\n\\section{Production Function}\\label{EqFirmsProdFunc}\n\n  Firms produce output $Y_t$ using inputs of capital $K_t$ and labor $L_t$ according to a general constant elasticity (CES) of substitution production function,\n  \\begin{equation}\\label{EqFirmsCESprodfun}\n    Y_t = F(K_t, L_t) \\equiv Z_t\\biggl[(\\gamma)^\\frac{1}{\\ve}(K_t)^\\frac{\\ve-1}{\\ve} + (1-\\gamma)^\\frac{1}{\\ve}(e^{g_y t}L_t)^\\frac{\\ve-1}{\\ve}\\biggr]^\\frac{\\ve}{\\ve-1} \\quad\\forall t\n  \\end{equation}\n  where $Z_t$ is an exogenous scale parameter (total factor productivity) that can be time dependent, $\\gamma$ represents the capital share of income, and $\\ve$ is the constant elasticity of substitution between capital and labor. We have included constant productivity growth $g_y$ as the rate of labor augmenting technological progress.\n\n  A nice feature of the CES production function is that the Cobb-Douglas production function is a nested case for $\\ve=1$.\n  \\begin{equation}\\label{EqFirmsCDprodfun}\n    Y_t = Z_t(K_t)^\\gamma(e^{g_y t}L_t)^{1-\\gamma} \\quad\\text{for}\\quad \\ve=1 \\quad\\forall t\n  \\end{equation}\n\n\n\\section{Optimality Conditions}\\label{EqFirmsFOC}\n\n  The profit function of the representative firm is the following.\n  \\begin{equation}\\label{EqFirmsProfit}\n    PR_t = (1 - \\tau^{corp})\\Bigl[F(K_t,L_t) - w_t L_t\\Bigr] - \\bigl(r_t + \\delta\\bigr)K_t + \\tau^{corp}\\delta^\\tau K_t \\quad\\forall t\n  \\end{equation}\n  Gross income for the firms is given by the production function $F(K,L)$ because we have normalized the price of the consumption good to 1. Labor costs to the firm are $w_t L_t$, and capital costs are $(r_t +\\delta)K_t$. The per-period economic depreciation rate is given by $\\delta$.\n\n  Taxes enter the firm's profit function \\eqref{EqFirmsProfit} in two places. The first is the corporate income tax rate $\\tau^{corp}$, which is a flat tax on corporate income. As is the case in the U.S., corporate income is defined as gross income minus labor costs. This will cause the corporate tax to only distort the firms' capital demand decision.\n\n  The next place where tax policy enters the profit function \\eqref{EqFirmsProfit} is through a refund of a percent of depreciation costs $\\delta^\\tau$ refunded at the corporate income tax rate $\\tau^{corp}$. When $\\delta^\\tau=0$, no depreciation expense is deducted from the firm's tax liability. 
When $\\delta^\\tau=\\delta$, all economic depreciation is deducted from corporate income.\n\n  Setting the derivative of the profit function \\eqref{EqFirmsProfit} with respect to labor $L_t$ equal to zero and the derivative with respect to capital $K_t$ equal to zero characterizes, respectively, the optimal labor and capital demands.\n  \\begin{align}\n    w_t &= e^{g_y t}(Z_t)^\\frac{\\ve-1}{\\ve}\\left[(1-\\gamma)\\frac{Y_t}{e^{g_y t}L_t}\\right]^\\frac{1}{\\ve} \\quad\\forall t \\label{EqFirmFOC_L} \\\\\n    r_t &= (1 - \\tau^{corp})(Z_t)^\\frac{\\ve-1}{\\ve}\\left[\\gamma\\frac{Y_t}{K_t}\\right]^\\frac{1}{\\ve} - \\delta + \\tau^{corp}\\delta^\\tau \\quad\\forall t \\label{EqFirmFOC_K}\n  \\end{align}\n\n  We discuss how to calibrate the values of $\\tau^{corp}$ and $\\delta^\\tau$ from the \\btax microsimulation model in Chapter \\ref{Chap_BTax}.\n", "meta": {"hexsha": "ecb7fdb44714be33d9e753d0fdbdfdd712fa3104", "size": 3550, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/LaTeXsource/Chapters/Chap_Firms.tex", "max_stars_repo_name": "keshavchoudhary87/OG-India", "max_stars_repo_head_hexsha": "269ee172b837882c826ee7f99507d93f9643128e", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-17T19:49:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-17T19:49:22.000Z", "max_issues_repo_path": "docs/LaTeXsource/Chapters/Chap_Firms.tex", "max_issues_repo_name": "keshavchoudhary87/OG-India", "max_issues_repo_head_hexsha": "269ee172b837882c826ee7f99507d93f9643128e", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2019-08-16T15:40:52.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-16T07:07:15.000Z", "max_forks_repo_path": "docs/LaTeXsource/Chapters/Chap_Firms.tex", "max_forks_repo_name": "keshavchoudhary87/OG-India", "max_forks_repo_head_hexsha": "269ee172b837882c826ee7f99507d93f9643128e", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 44, "max_forks_repo_forks_event_min_datetime": "2019-08-16T15:10:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-08T07:03:26.000Z", "avg_line_length": 91.0256410256, "max_line_length": 385, "alphanum_fraction": 0.7352112676, "num_tokens": 1024, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772318846387, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.587669830526926}}
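A minimal numerical sketch of the production function \eqref{EqFirmsCESprodfun} and the first-order conditions \eqref{EqFirmFOC_L}--\eqref{EqFirmFOC_K}; all parameter values below are illustrative, not calibrated values from the model.

\begin{verbatim}
import numpy as np

# Illustrative parameter values only
Z, gamma, eps, g_y, t = 1.0, 0.35, 0.8, 0.02, 10
tau_corp, delta, delta_tau = 0.25, 0.05, 0.05
K, L = 5.0, 2.0

eL = np.exp(g_y * t) * L  # labor in efficiency units

# CES production function, eq. (EqFirmsCESprodfun)
Y = Z * (gamma**(1/eps) * K**((eps-1)/eps)
         + (1-gamma)**(1/eps) * eL**((eps-1)/eps))**(eps/(eps-1))

# Firm first-order conditions, eqs. (EqFirmFOC_L) and (EqFirmFOC_K)
w = np.exp(g_y * t) * Z**((eps-1)/eps) * ((1-gamma) * Y / eL)**(1/eps)
r = ((1 - tau_corp) * Z**((eps-1)/eps) * (gamma * Y / K)**(1/eps)
     - delta + tau_corp * delta_tau)

print(Y, w, r)
\end{verbatim}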
{"text": "\\section{Signal modelling}\n\\label{sec:signal_model}\n\nThe parameterization of \\mfl distributions based on simulated samples for signals are described in this section.\nSeveral signal models are studied, including heavy Higgs like narrow-width signal (NWA) and large-width signal (LWA), as well as the modelling of Randall-Sundrum graviton (RSG) signal.\n\n%% ========================================================================================================================\n%% NWA\n\n\\subsection{Modelling of narrow-width signal}\n\\label{sec:signal_nwa}\n\nFor narrow-width (NWA) signal, the \\mfl width is totally determined by detector resolution, which is modelled \nby the sum of a Crystal Ball ($\\mathcal{C}$) function~\\cite{CrystalBall1,CrystalBall2} and a Gaussian ($\\mathcal{G}$) function:\n\n\\begin{equation}\n    \\label{eq:cb_plus_g}\n    P_{s} (\\mfl) = f_{\\mathcal{C}} \\cdot \\mathcal{C}(\\mfl; \\mu, \\sigma_{\\mathcal{C}}, \\alpha_{\\mathcal{C}}, n_{\\mathcal{C}})\n                   + (1 - f_{\\mathcal{C}}) \\cdot \\mathcal{G}(\\mfl; \\mu, \\sigma_{\\mathcal{G}})\n\\end{equation}\n\nThe two functions share the same central value $\\mu$, while the resolution parameters, $\\sigma_{\\mathcal{C}}$ and $\\sigma_{\\mathcal{G}}$, are different.\nIn the Crystal Ball function, the parameters $\\alpha_{\\mathcal{C}}$ and $n_{\\mathcal{C}}$ model the shape of non-Gaussian tail,\nand the fraction parameter $f_{\\mathcal{C}}$ is used to ensure the relative normalization between two functions.\n\nThe parameters are obtained by fitting to signal MC simulations combining the mc16a, mc16d and mc16e campaigns for each category at each mass points from 200~\\gev~ to 2000~\\gev~ respectively,\nand the shape of ggF and VBF signals are found to be similar.\nFigure~\\ref{fig:ggf_mass_signalParam_2mu2e} shows the \\mfl distribution and fitted curves for ggF production at mass from 200~\\gev~ to 2000~\\gev~ in 2$e$2$\\mu$ channel as examples.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_200_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_300_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_400_H4l_2mu2e.eps}\\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_500_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_600_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_700_H4l_2mu2e.eps}\\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_800_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_900_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_1000_H4l_2mu2e.eps}\\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_1200_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_1400_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_1600_H4l_2mu2e.eps}\\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_1800_H4l_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_mass_signal_2000_H4l_2mu2e.eps}\n    
\\caption{Distributions of $m_{2\\mu 2e}$ and fit projections for signal samples from 200 to 2000~\\gev for the ggF production mode. \n    Three MC campaigns, mc16a, mc16d and mc16e, are combined. \n    The lower panel in each plot shows the pull distribution.}\n    \\label{fig:ggf_mass_signalParam_2mu2e}\n\\end{figure}\n\nThe $\\mathcal{C}+\\mathcal{G}$ parameters are then fitted with polynomial functions of the generated mass point (\\mH), as shown in figure~\\ref{fig:ggf_fitParams_interpolation_2mu2e} for ggF production in the 2$e$2$\\mu$ channel as an example.\nThe fit quality can be measured by Pearson's $\\chi^2$, which is within 3 (2) for the 2$e$2$\\mu$ (4$e$ and 4$\\mu$) channel.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_mass_mean_2mu2e_fit.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_f_cb_gauss_2mu2e_fit.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_mass_gauss_sigma_2mu2e_fit.eps} \\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_mass_cb_sigma_2mu2e_fit.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_mass_cb_alpha_2mu2e_fit.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_mass_cb_n_2mu2e_fit.eps}\n    \\caption{Polynomial fits of the parameters $\\mu$, $f_{\\mathcal{C}}$, $\\sigma_{\\mathcal{G}}$, $\\sigma_{\\mathcal{C}}$,\n    $n_{\\mathcal{C}}$ and $\\alpha_{\\mathcal{C}}$ for the signal $\\mathcal{C}+\\mathcal{G}$ model in the $2\\mu 2e$ channel as a function of \\mH for\n    the ggF production mode. The combination of the mc16a, mc16d and mc16e MC campaigns is used.}\n    \\label{fig:ggf_fitParams_interpolation_2mu2e}\n\\end{figure}\n\nIn addition, the possible difference between the signal yield extracted from the parameterization and from MC simulation is studied.\nFigure~\\ref{fig:ggf_graph_YieldCheckAll} shows this difference by computing $\\frac{N_{\\text{reco}}-N_{\\text{fit}}}{N_{\\text{fit}}}$, where $N_{\\text{reco}}$ denotes the total number of reconstructed events observed from MC simulation at that mass point\nand $N_{\\text{fit}}$ denotes the number of events obtained from the fitted pdf.\nThe differences are treated as an additional systematic uncertainty of 2\\% (1\\%) for the 2$e$2$\\mu$ (4$e$ and 4$\\mu$) channel in the statistical fit.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_YieldCheck_4mu_fixed.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_YieldCheck_4e_fixed.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_graph_YieldCheck_2mu2e_fixed.eps}\n    \\caption{The difference between MC simulation and parameterization of\n    $4\\mu$ (left), $4e$ (middle) and $2\\mu 2e$ (right) for the ggF production\n    mode. 
The combination of the mc16a, mc16d and mc16e MC campaigns is used.}\n    \\label{fig:ggf_graph_YieldCheckAll}\n\\end{figure}\n\nIn summary, the final interpolated signal shapes for the ggF production mode are shown together in figure~\\ref{fig:ggf_multipeak} for mass points in steps of 100~\\gev~ from 200~\\gev~ to 3000~\\gev.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_multipeakplot_4mu.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_multipeakplot_4e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/NWA//ggf_multipeakplot_2mu2e.eps}\n    \\caption{The final signal shapes for the ggF production mode, interpolated from the polynomial fit parameters.}\n    \\label{fig:ggf_multipeak}\n\\end{figure}\n\n%%========================================================================================================================\n%% LWA\n\n\\subsection{Modelling of large-width signal}\n\\label{sec:signal_lwa}\n\nThe \\mfl shape of the heavy Higgs model in the large-width (LWA) hypothesis can be described by the convolution of a truth-level distribution with a detector resolution function.\nThe detector resolution is modelled by the function described in the NWA parameterization, since in the NWA model the truth-level width is negligible.\n\nThe differential parton cross section for the heavy Higgs model can be written as~\\cite{Goria:2011wa}:\n\\begin{equation}\n    \\sigma_{gg \\to H \\to ZZ} (s) = \\frac{1}{2s}  \\int d \\Omega \\left | A_{gg \\to H}(s,\\Omega) \\right |^2 \\frac{1}{ \\left | s - s_H \\right |^2}  \\left | A_{H \\to ZZ}(s,\\Omega) \\right |^2\n\\end{equation}\nwhere $A_{gg \\to H}(s,\\Omega)$ and $A_{H \\to ZZ}(s,\\Omega)$ are the corresponding Higgs production and decay amplitudes,\n$\\frac{1}{s - s_H}$ denotes the Higgs propagator, and $\\Omega$ represents the phase space of the process.\n\nUsing the definition of a partial width,\n\\begin{equation} \\label{eq:PartialWidth}\n    \\Gamma_{H \\to F} (s) = \\frac{1}{2\\sqrt{s}}  \\int d \\Omega \\left | A_{H \\to F}(s,\\Omega) \\right |^2\n\\end{equation}\nthe parton cross section can be rewritten as\n\\begin{equation} \\label{eq:HiggsPartonXSection}\n    \\sigma_{gg \\to H \\to ZZ} (s) = 2 \\frac{1}{ \\left | s - s_H \\right |^2}  \\times \\Gamma_{H \\to gg} (s) \\times \\Gamma_{H \\to ZZ} (s)\n\\end{equation}\nwith the components computed in Refs.~\\cite{Goria:2011wa,Spira1995}:\n\\begin{align} \\label{eq:propagator}\n\\begin{split}\n    \\frac{1}{s - s_H} &= \\frac{ 1 + i \\cdot \\overline{\\Gamma}_H / \\overline{m}_H} { s - \\overline{m}_H^2 +  i \\cdot s \\cdot\\overline{\\Gamma}_H / \\overline{m}_H }  \\\\\n    \\overline{m}_H &= \\sqrt{\\Gamma_H^2 + m_H^2} \\\\\n    \\overline{\\Gamma}_H &= \\overline{m}_H  \\cdot \\frac{\\Gamma_H}{m_H}\n\\end{split}\n\\end{align}\n\n\\begin{equation} \\label{eq:WidthZZ}\n    \\Gamma_{H \\to ZZ} (s) = C \\cdot s^{\\frac{3}{2}} \\cdot \\left [ 1 - \\frac{4m_Z^2}{s} + \\frac{3}{4}\\left ( \\frac{4m_Z^2}{s} \\right )^2 \\right ] \\cdot \\left [ 1 - \\frac{4m_Z^2}{s} \\right ]^{\\frac{1}{2}}\n\\end{equation}\n\n\\begin{align} \\label{eq:WidthGG}\n\\begin{split}\n    \\Gamma_{H \\to gg} (s) &= C \\cdot s^{\\frac{3}{2}} \\cdot \\left | A_t(\\tau_t) \\right |^{2} \\\\\n    A_t(\\tau) &= 2 \\frac{\\tau+(\\tau-1)f(\\tau)}{\\tau^2} \\\\\n    \\tau_t &= \\frac{s}{4m_t^2} \\\\\n    f(\\tau) &= \\left\\{\\begin{matrix}\n    \\arcsin^2 (\\sqrt{\\tau}) , ~ \\tau \\leqslant 1\n    \\\\\n    
- \\frac{1}{4} \\left [  \\log{\\frac{1+\\sqrt{1-\\tau^{-1}}}{1-\\sqrt{1-\\tau^{-1}}}}  - i  \\pi \\right ]^2 , ~ \\tau > 1\n    \\end{matrix}\\right.\n\\end{split}\n\\end{align}\n\nwhere $m_t$ denotes the top-quark mass, and $\\Gamma_H$ denotes an assumed total width of the heavy Higgs boson.\n\nAt the LHC, the \\mfl line shape can be defined by a hadron cross section that is derived from\nequation~\\ref{eq:HiggsPartonXSection} by multiplication with the gluon-gluon luminosity $\\mathcal{L}_{gg}$ described in\n\\cite{Ball:2012wy}. \nThe cross section is also rewritten as a function of \\mfl instead of $s$, which gives an extra power of mass dependence in the formula:\n\\begin{equation} \\label{eq:HiggsHadronXSection}\n    \\sigma_{pp \\to H \\to ZZ} (m_{4\\ell}) = 2 \\cdot m_{4\\ell} \\cdot \\mathcal{L}_{gg} \\cdot  \\frac{1}{ \\left | s - s_H \\right |^2}  \\cdot \\Gamma_{H \\to gg} (m_{4\\ell}^2) \\cdot \\Gamma_{H \\to ZZ} (m_{4\\ell}^2)\n\\end{equation}\nThe analytical shapes are compared to the truth-level \\mfl distributions of the gg2VV MC samples in figure~\\ref{fig:truthShape}.\n\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/450_5.png}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/450_10.png}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/450_15.png}\\\\\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/700_5.png}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/700_10.png}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/LWA/700_15.png}\\\\\n    \\caption{Comparison of the analytical shape to a truth $\\mfl$ distribution of gg2VV MC samples for $\\mH$ = 450~\\gev (top), 700~\\gev (bottom) \n    and width equal to 5$\\%$ (left), 10$\\%$ (middle), 15$\\%$ (right) of the mass.}\n    \\label{fig:truthShape}\n\\end{figure}\n\nThe reconstruction-level signal shape can then be modelled by the analytical truth shape convolved with the detector effects modelled in section~\\ref{sec:signal_nwa}.\nA comparison between the modelled shape and reconstruction-level MC simulation for signal masses above 400~\\gev~ (for ggF production in the 2$e$2$\\mu$ channel as an example) is shown in figure~\\ref{fig:recoShape_2mu2e};\nthe shapes agree well.\nThis modelling is not valid for lower masses due to the rapid change of the detector resolution.\n\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_400GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_500GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_600GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\\\\\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_700GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_800GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_900GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\\\\\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_1200GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_1400GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    
\\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_1600GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\\\\\n    \\includegraphics[width=.32\\textwidth]{figures/HMHZZ/signal/LWA/cmp_1800GeV_ATLAS_Signal_ggF_ggF_2mu2e_13TeV_conv.pdf}\n    \\caption{Comparison between the analytical shape convolved with detector effects and the reconstructed $m_{2\\mu 2e}$ MC\n    distribution for mass points ranging from 400 to 1800~$\\gev$ and width equal to $15\\%$ of the mass.}\n    \\label{fig:recoShape_2mu2e}\n\\end{figure}\n\n\\subsection{Modelling of interference}\n\\label{sec:int_model}\n\nThere are three processes sharing the same $gg$ initial state and $ZZ$ final state:\n\\begin{itemize}\n\t\\item The SM \\ggZZ process, with amplitude $A_{B}$\n\t\\item The SM (light) Higgs boson at a mass of around 125~\\gev, with amplitude $A_{h}$\n\t\\item The BSM heavy Higgs boson searched for in this analysis, with amplitude $A_{H}$\n\\end{itemize}\nThe three processes can interfere with each other because they share the same initial and final states.\nThe parton cross section for these processes can be written as:\n\n\\begingroup\n%\\small\n\\begin{equation} \\label{eq:xsec_parton_int}\n\\begin{split}\n    \\sigma_{gg \\to (X) \\to ZZ} (s) &= \\frac{1}{2s}  \\int d \\Omega \\left | A_h(s,\\Omega) + A_H(s,\\Omega) + A_B(s,\\Omega) \\right |^2 \\\\\n    &= \\frac{1}{2s}  \\int d \\Omega  \\left (  \\left | A_h(s,\\Omega)  \\right |^2 +  \\left | A_H(s,\\Omega)  \\right |^2 +  \\left | A_B(s,\\Omega)  \\right |^2  \\right )  + \\\\\n    &+ \\frac{1}{s}  \\int d \\Omega  \\big( Re \\left [ A_h(s,\\Omega) \\cdot A^*_B(s,\\Omega)  \\right ] \\\\\n    &+ Re \\left [ A_H(s,\\Omega) \\cdot A^*_B(s,\\Omega)  \\right ] + Re \\left [ A_H(s,\\Omega) \\cdot A^*_h(s,\\Omega)  \\right ]  \\big) \\\\\n    %&+ \\frac{1}{s} \\mathrm{Re} \\left [ \\frac{1}{s-s_H}  \\int d \\Omega \\cdot A_H^P(s,\\Omega)  \\cdot A_H^D(s,\\Omega) \\cdot A^*_B(s,\\Omega)  \\right ] \\\\\n    %&+ \\frac{1}{s} \\int d \\Omega \\cdot \\mathrm{Re} \\left [A_H^P(s,\\Omega)\\cdot  \\frac{1}{s-s_H}   \\cdot A_H^D(s,\\Omega) \\cdot A_h^{P*}(s,\\Omega)\\cdot  \\frac{1}{(s-s_h)^*}   \\cdot A_h^{D*}(s,\\Omega)   \\right ]\n\\end{split}\n\\end{equation}\n\\endgroup\n\nThe first term in equation~\\ref{eq:xsec_parton_int} denotes the on-shell SM Higgs contribution, which is negligible in this analysis.\nThe second term corresponds to the heavy Higgs contribution, whose line shape has been described in the previous section.\nThe third term is the \\ggZZ continuum process, while the fourth term is the interference between the SM Higgs and the \\ggZZ continuum.\nThe fifth and sixth terms are the interferences between the heavy Higgs and the \\ggZZ continuum (H-B), and between the heavy Higgs and the SM Higgs (H-h), which we are interested in.\nMore details about the parameterization of these two interferences are given below.\n\n\\subsubsection{Interference between heavy Higgs and \\ggZZ continuum}\n\nThe parton cross section of this interference term can be expressed as:\n\\begin{equation}\n\t\\sigma_{gg} (s) = \\frac{1}{s} \\mathrm{Re} \\left [ \\frac{1}{s-s_H}  \\int d \\Omega \\cdot A_H^P(s,\\Omega)  \\cdot A_H^D(s,\\Omega) \\cdot A^*_B(s,\\Omega)  \\right ]\n\\end{equation}\nBy assuming that this function has a smooth behaviour, it can be approximated by a complex polynomial:\n\\begin{equation}\n    \\int d \\Omega \\cdot A_H^P(s,\\Omega)  \\cdot A_H^D(s,\\Omega) \\cdot A^*_B(s,\\Omega)  \\approx  (a_0 + a_1 \\cdot \\sqrt{s} + \\dots) + i \\cdot (b_0 + b_1 \\cdot \\sqrt{s} + 
\\dots)\n\\end{equation}\n\nThe parameters $a_i$ and $b_i$ can be extracted by fitting to the \\mfl distribution from truth-level MC simulation after the analysis selection.\nSince the signal mass and width do not enter this function, the parameters should be independent of the tested signal hypothesis.\n\nAs for equation~\\ref{eq:HiggsHadronXSection}, the parton cross section can be transformed into a hadron cross section as a function of \\mfl:\n\n\\begingroup\n\\small\n\\begin{equation} \\label{eq:HBHadronXSection}\n    \\sigma_{pp} (m_{4\\ell}) = \\mathcal{L}_{gg} \\cdot \\frac{1}{m_{4\\ell}} \\cdot \\mathrm{Re} \\left [   \\frac{1}{s-s_H}  \\cdot  \\left ( (a_0 + a_1 \\cdot m_{4\\ell} + \\dots) + i \\cdot (b_0 + b_1 \\cdot m_{4\\ell} + \\dots) \\right )\\right ]\n\\end{equation}\n\\endgroup\nwhere the propagators are shown in equation~\\ref{eq:propagator}.\n\nFigure~\\ref{fig:HB_Int_2mu2e} shows the interference function obtained by a simultaneous fit to the truth-level \\mfl shape of the H-B interference simulation at different masses in the 2$e$2$\\mu$ channel as an example.\n\n\\begingroup\n\\small\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=1.\\textwidth]{figures/HMHZZ/signal/Interference/HB_Int_2mu2e.pdf}\n    \\caption{The interference (H-B) model fitted to the truth~\\mfl~MC distribution after signal region selection for the $2\\mu2e$~channel. \\label{fig:HB_Int_2mu2e} }\n\\end{figure}\n\\endgroup\n\n\\subsubsection{Interference between heavy Higgs and SM Higgs}\n\nThe parton cross section of this interference term can be written as:\n\\begingroup\n\\small\n\\begin{equation}\n\t\\sigma_{gg} (s) = \\frac{1}{s} \\int d \\Omega \\cdot \\mathrm{Re} \\left [A_H^P(s,\\Omega)\\cdot  \\frac{1}{s-s_H}   \\cdot A_H^D(s,\\Omega) \\cdot A_h^{P*}(s,\\Omega)\\cdot  \\frac{1}{(s-s_h)^*}   \\cdot A_h^{D*}(s,\\Omega)   \\right ]\n\\end{equation}\nBy assuming that the production and decay amplitudes are the same for the heavy Higgs boson and the SM Higgs boson, the cross section can be simplified to:\n\\begin{equation}\n    \\sigma_{gg} (s) = \\frac{1}{s}  \\int d \\Omega \\cdot \\mathrm{Re} \\left [\\frac{1}{s-s_H} \\cdot  \\frac{1}{(s-s_h)^*} \\right ] \\cdot \\left | A_{gg \\to H}(s,\\Omega) \\right |^2 \\left | A_{H \\to ZZ}(s,\\Omega) \\right |^2\n\\end{equation}\n\\endgroup\n\nTaking into account equation~\\ref{eq:PartialWidth}:\n\n\\begin{equation} \\label{eq:HhInterference_2}\n    \\sigma_{gg} (s) = 4 \\cdot \\mathrm{Re} \\left [\\frac{1}{s-s_H} \\cdot  \\frac{1}{(s-s_h)^*} \\right ] \\cdot  \\Gamma_{H \\to gg} (s) \\cdot \\Gamma_{H \\to ZZ} (s)\n\\end{equation}\n\nwhere the propagators are described in equation~\\ref{eq:propagator}, and the partial widths are described in equations~\\ref{eq:WidthZZ}~and~\\ref{eq:WidthGG}.\n\nFollowing the same procedure as before, the parton cross section can be transformed to a hadron cross section as a function of \\mfl:\n\n\\begin{equation} \\label{eq:HhInterference}\n    \\sigma_{pp} (m_{4\\ell}) = 4 \\cdot m_{4\\ell} \\cdot  \\mathcal{L}_{gg}\\cdot \\mathrm{Re} \\left [\\frac{1}{s-s_H} \\cdot  \\frac{1}{(s-s_h)^*} \\right ] \\cdot  \\Gamma_{H \\to gg} (m_{4\\ell}) \\cdot \\Gamma_{H \\to ZZ} (m_{4\\ell})\n\\end{equation}\n\n%This equation is similar to the differential cross section of signal-only production, with the only difference in the propagator.\n%Since the propagator does not affect the kinematic performance of the final state, this interference process can be reproduced by reweighting the signal events with 
weights defined as:\n%\n%\\begin{equation}\n%    w(m_{4\\ell}) = \\frac{2 \\cdot \\mathrm{Re} \\left [\\frac{1}{s-s_H} \\cdot  \\frac{1}{(s-s_h)^*} \\right ] }{\\frac{1}{\\left | s - s_H \\right |^2}}\n%\\end{equation}\n\nThe interference is modelled in the same way as the large-width signal described in section~\\ref{sec:signal_lwa}.\nThe truth-level line shape is taken from the analytical function in equation~\\ref{eq:HhInterference}, and then convolved with the detector effects from the NWA parameterization to obtain the reconstruction-level shape.\n\nFor the LWA signal model, these two interferences are taken into account, and the pure LWA signal combined with the interferences is used for further studies.\nFigure~\\ref{fig:LWASignalModel} shows the signal model for the large-width scenario at mass points of 400~\\gev, 600~\\gev~ and 800~\\gev, for three different signal widths, 5\\%, 10\\% and 15\\%, with and without interference.\nAdditionally, the contribution of the interference between the heavy Higgs and the SM Higgs (H-h) is shown together with the one between the heavy Higgs and the SM \\ggZZ background (H-B).\nThe interference effect on the signal shape becomes less important at higher masses.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figures/HMHZZ/signal/LWA/LWAModel.pdf}\n    \\caption{The signal modelling for the large-width scenario at \\mH of 400 GeV (top), 600 GeV (middle) and 800 GeV (bottom), as well as three different signal widths:  5\\%  (left), 10\\%  (middle) and 15\\% (right). \n    The contribution of the interference between heavy Higgs and SM Higgs (H-h) is shown together with the one between heavy Higgs and SM \\ggZZ background (H-B).}\n    \\label{fig:LWASignalModel}\n\\end{figure}\n\n\n\\subsection{Modelling of spin-2 RS Graviton signal}\n\nThe search for a Randall-Sundrum (RS) graviton is performed in the mass region between 600 and 2000~\\gev.\nThe width of the resonance is determined by \\kOverMpl, which, as mentioned in section~\\ref{sec:hmhzz_signal_mc}, is set to 1. 
\nIn this configuration, the width of the signal is expected to be about 6\\% of its mass.\n\nThe reconstructed \\mfl lineshape of the graviton is also built by convolving the truth-level lineshape with a detector resolution function,\nwhere the detector resolution is modelled by a Gaussian + Crystal Ball function whose parameters are taken from the NWA signal parameterization in section~\\ref{sec:signal_nwa}.\nThe truth-level shape is modelled as the product of a relativistic Breit-Wigner (RBW) term, a term corresponding to the squared matrix element of the production process, and a parton luminosity term $\\mathcal{L}$, as given in~\\cite{Bijnens_2001}.\nThe truth lineshape of \\mfl is thus taken from:\n\\begin{equation*}\n        \\mfl^{\\text{Truth}} \\sim \\mathcal{L}_{gg} \\cdot s^2 \\cdot \\frac{s(1 + s)(1 + 2s + 2s^2)}{(s^2 - \\mG^2)^2 + \\mG^2 \\Gamma^2}\n\\end{equation*}\nThe truth-level signal model is extracted by fitting to truth-level MC simulation with the mass $\\mG$ and width $\\Gamma$ parameters floating at each mass point.\nThe two parameters are then parameterized as a function of \\mH by a linear fit, as shown in figure~\\ref{fig:graviton_truth_param_vs_mG}.\n\\begin{figure}[htbp]\n        \\centering\n        \\includegraphics[width=0.4\\textwidth]{figures/HMHZZ/signal/RSGraviton/graviton_param_m_RBW.eps}\n        \\includegraphics[width=0.4\\textwidth]{figures/HMHZZ/signal/RSGraviton/graviton_param_G_RBW.eps}\n        \\caption{Fitted parameters of the graviton RBW, $m_{\\text{RBW}}$ and $\\Gamma_{\\text{RBW}}$, as a function of the\n        graviton resonance mass, \\mG.}\n        \\label{fig:graviton_truth_param_vs_mG}\n\\end{figure}\n\nThe final signal model is obtained by convolving the truth-level lineshape with the detector resolution function.\nTo verify the result, figure~\\ref{fig:graviton_reco_shape_2mu2e} compares the \\mfl lineshape from the parameterization with the one observed in reconstruction-level MC simulation in the 2$e$2$\\mu$ channel at masses of 600~\\gev, 1600~\\gev~and 2000~\\gev~as examples.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/RSGraviton/graviton_reco_shape_m4l_constrained_HM_600GeV_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/RSGraviton/graviton_reco_shape_m4l_constrained_HM_1600GeV_2mu2e.eps}\n    \\includegraphics[width=0.32\\textwidth]{figures/HMHZZ/signal/RSGraviton/graviton_reco_shape_m4l_constrained_HM_2000GeV_2mu2e.eps}\n    \\caption{Reconstructed \\mfl distributions in the $2\\mu 2e$ channel with the final signal\n        model superimposed for each RS graviton signal sample at masses of 600~$\\gev$, 1600~$\\gev$ and 2000~$\\gev$. The lower panel in each plot shows\n        the pull distribution. 
The dashed green lines show the truth-level graviton signal models for reference.}\n    \\label{fig:graviton_reco_shape_2mu2e}\n\\end{figure}\n", "meta": {"hexsha": "07589be5f6dc8ed53bb4456ed4b0a533ec063278", "size": 24630, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/HMHZZ/signal.tex", "max_stars_repo_name": "zhuhel/PhDthesis", "max_stars_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_stars_repo_licenses": ["LPPL-1.3c"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/HMHZZ/signal.tex", "max_issues_repo_name": "zhuhel/PhDthesis", "max_issues_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_issues_repo_licenses": ["LPPL-1.3c"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/HMHZZ/signal.tex", "max_forks_repo_name": "zhuhel/PhDthesis", "max_forks_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_forks_repo_licenses": ["LPPL-1.3c"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.775862069, "max_line_length": 257, "alphanum_fraction": 0.7299228583, "num_tokens": 8011, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772286044095, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5876698225827657}}
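The convolution step shared by the LWA and graviton models is straightforward to prototype numerically. The sketch below is our own illustration: it convolves a toy relativistic Breit-Wigner truth shape with a pure Gaussian resolution (a simplification of the $\mathcal{C}+\mathcal{G}$ model above); the mass, width and resolution values are hypothetical.

\begin{verbatim}
import numpy as np

# Toy truth lineshape: relativistic Breit-Wigner (illustrative values)
m = np.linspace(300.0, 1100.0, 1601)        # GeV grid, 0.5 GeV spacing
mH, GH = 700.0, 70.0                        # 10% width, for illustration
truth = m**2 * GH * mH / ((m**2 - mH**2)**2 + (mH * GH)**2)

# Simplified detector response: a pure Gaussian on the same 0.5 GeV grid
sigma = 15.0                                # GeV, hypothetical resolution
kern_x = np.arange(-5 * sigma, 5 * sigma + 0.5, 0.5)
kernel = np.exp(-0.5 * (kern_x / sigma)**2)
kernel /= kernel.sum()

reco = np.convolve(truth, kernel, mode="same")   # reconstruction-level shape
print(m[np.argmax(truth)], m[np.argmax(reco)])   # peak positions before/after
\end{verbatim}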
{"text": "\\section*{Imbalance}\n\\subsection*{Cost Sensitive Classification}\nReplace loss by: $l_{CS}(w;x,y) = c_y l(w;x,y)$\n\n\\subsection*{Metrics}\nAccuracy: $\\frac{TP+TN}{TP+TN+FP+FN}$, Precision: $\\frac{TP}{TP+FP}$\\\\ \nTPR = Recall: $\\frac{TP}{TP+FN}$, F1 score: $\\frac{2TP}{2TP+FP+FN}$\\\\\nFPR = $\\frac{FP}{TN+FP}$", "meta": {"hexsha": "4f5d5989e1a9d02141c0f119db977bd5545a7499", "size": 302, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/Imbalance.tex", "max_stars_repo_name": "meck93/intro_ml_ethz", "max_stars_repo_head_hexsha": "e769cf628739959efd6ded517d80f0f14b5aa39d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-04-24T14:58:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-19T14:02:08.000Z", "max_issues_repo_path": "source/Imbalance.tex", "max_issues_repo_name": "meck93/intro_ml_ethz", "max_issues_repo_head_hexsha": "e769cf628739959efd6ded517d80f0f14b5aa39d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/Imbalance.tex", "max_forks_repo_name": "meck93/intro_ml_ethz", "max_forks_repo_head_hexsha": "e769cf628739959efd6ded517d80f0f14b5aa39d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.75, "max_line_length": 71, "alphanum_fraction": 0.6390728477, "num_tokens": 121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026573249611, "lm_q2_score": 0.640635868562172, "lm_q1q2_score": 0.5876569846097649}}
{"text": "\\begin{center}\n\t\\LARGE{\\textbf{\u00d8ving 8}}\n\\end{center}\n\n\n\\section*{13.3}\n\n\\subsection*{15}\n\n\n\\begin{gather*}\n\tz = x + i y\n\t|z|^2 Im \\left(\\frac{1}{z}\\right)\n\t=\n\t\\left(x^2 + y^2\\right) \\frac{y}{x^2 + y^2}\n\t=\n\ty\n\t\\\\\n\tz = 0 \\Rightarrow y = 0 \\Rightarrow |z|^2 \\text{ Im} \\left(\\frac{1}{z}\\right) = 0\n\t\\Rightarrow f(z) \\text{ is continous}\n\\end{gather*}\n\n\n\\subsection*{16}\n\n\n\\begin{gather*}\n\t\\frac{\\text{ Im } z^2}{|z|^2}\n\t=\n\t\\frac{2 x y}{x^2 + y^2}\n\t\\\\\n\t\\text{Limit along } x = y:\n\t\\\\\n\t\\lim_{x \\rightarrow 0}{\\frac{2 x^2}{x^2 + x^2}}\n\t=\n\t1 \\neq 0 \\Rightarrow f(z) \\text{ is discontinous}\n\\end{gather*}\n\n\n\\section*{13.4}\n\n\\subsection*{10}\n\n\n\\begin{gather*}\n\tf(z) = \\ln{|z|} + i \\text{ Arg } z = \\ln{r} + i \\theta = u(r, \\theta) + i v(r, \\theta)\n\t\\\\\n\tu_r = \\frac{1}{r} = \\frac{1}{r} v_\\theta\n\t\\\\\n\tu_\\theta = 0 = v_r\n\t\\\\\n\tf(z) \\text{ is therefore analytic}\n\\end{gather*}\n\n\n\\subsection*{18}\n\n\n\\begin{gather*}\n\tu = x^3 - 3 x y^2\n\t\\\\\n\tu_{xx} + u_{yy} = 6 x - 6 x = 0 \\Rightarrow u(x, y) \\text{ is harmonic}\n\t\\\\\n\tu_x = 3 x^2 - 3 y^2 = v_y\n\t\\\\\n\tv = \\int{\\left(3 x^2 - 3 y^2\\right) dy}\n\t=\n\t3 x^2 y - y^3 + C(x)\n\t\\\\\n\tu_y = -6 x y = -v_x\n\t\\\\\n\tv = \\int{6 x y dx} = 3 x^2 y + C(y)\n\t\\\\\n\tv = 3 x^2 y - y^3 + C\n\\end{gather*}\n\n\n\\subsection*{19}\n\n\n\\begin{gather*}\n\tv = e^{-x} \\sin{2 y}\n\tv_{xx} + v_{yy} = e^{-x} \\sin{2 y} - 4 e^{-x} \\sin{2 y} = -3 e^{-x} \\sin{2 y} \\neq 0\n\t\\\\\n\tv(x, y) \\text{ is therefore not harmonic}\n\\end{gather*}\n\n\n\\section*{13.5}\n\n\\subsection*{20}\n\n\n\\begin{gather*}\n\te^z = 4 + 3 i\n\t\\\\\n\tz = \\ln{(4 + 3 i)} = \\ln{5} + i (\\text{ Arg } (4 + 3 i) + 2 \\pi n)\n\\end{gather*}\n\n\n\\begin{center}\n\t\\plot{Figurer/f1.tikz}{5}{4}\n\\end{center}\n\n\n\\section*{13.6}\n\n\\subsection*{10}\n\n\n\\begin{gather*}\n\t\\sinh{(3 + 4 i)}\n\t=\n\t\\frac{1}{2} \\left(\n\t\te^{3 + 4 i} - e^{- 3 - 4 i}\n\t\\right)\n\t=\n\t\\frac{1}{2} \\left(\n\t\te^3 e^{4 i} - e^{-3} e^{-4 i}\n\t\\right) =\n\t\\\\\n\t\\frac{1}{2} \\left(\n\t\te^3 (\\cos{4} + i \\sin{4}) - e^{-3} (\\cos{4} - i \\sin{4})\n\t\\right)\n\t=\n\t\\\\\n\t\\cos{4} \\cdot \\frac{1}{2} \\left(\n\t\te^3 - e^{-3}\n\t\\right)\n\t+\n\ti \\sin{4} \\cdot \\frac{1}{2} \\left(\n\t\te^{3} + e^{-3}\n\t\\right)\n\t=\n\t\\\\\n\t\\sinh{3} \\cos{4} + i \\cosh{3} \\sinh{4}\n\t\\\\\n\t\\\\\n\t\\\\\n\t\\cosh{(3 + 4 i)} = \\cosh{3} \\cos{4} + i \\sinh{3} \\sin{4}\n\\end{gather*}\n\n\n\\subsection*{16}\n\n\n\\begin{gather*}\n\t\\sin{z} = 100\n\t\\Rightarrow\n\t\\frac{1}{2 i} \\left(\n\t\te^{i z} - e^{-i z}\n\t\\right) = 100\n\t\\\\\n\te^{i z} - e^{-i z} = 200 i\n\t\\\\\n\te^{2 i z} - 1 = 200 i e^{i z}\n\t\\Rightarrow\n\t\\left(e^{i z}\\right)^2 - 200 i e^{i z} - 1 = 0\n\t\\\\\n\te^{i z} = \\frac{200 i \\pm \\sqrt{(200 i)^2 - 4 \\cdot 1 \\cdot (-1)}}{2}\n\t=\n\t\\frac{200 i \\pm \\sqrt{- 39996}}{2}\n\t=\n\t\\\\\n\t100 i \\pm 3 i \\sqrt{1111} \\approx 199.995 \\vee 0.005 i\n\t\\\\\n\ti z = \\ln{199.995 i} = \\pm \\ln{199.995} + \\ln{i} = \\pm 5.298 + i \\left(\\frac{\\pi}{2} + 2 \\pi n\\right)\n\t\\\\\n\tz = \\frac{\\pi}{2} + 2 \\pi n \\pm 5.298 i\n\\end{gather*}\n\n\n\\subsection*{19}\n\n\n\\begin{gather*}\n\t\\sinh{z} = 0\n\t\\Rightarrow\n\t\\frac{1}{2} \\left(\n\t\te^{z} - e^{-z}\n\t\\right) = 0\n\t\\\\\n\te^{2 z} - 1 = 0\n\t\\\\\n\t2 z = \\ln{1} = 2 \\pi n i\n\t\\\\\n\tz = \\pi n 
i\n\\end{gather*}\n\n\n\\section*{13.7}\n\n\\subsection*{15}\n\n\n\\begin{gather*}\n\t\\ln{e^i}\n\t=\n\ti + 2 \\pi n i\n\\end{gather*}\n\n\\begin{center}\n\t\\plot{Figurer/f2.tikz}{5}{4}\n\\end{center}\n\n\n\\subsection*{17}\n\n\n\\begin{gather*}\n\t\\ln{(4 - 3 i)} = \\ln{5} + i \\text{ Arg } (4 - 3 i)\n\t=\n\t\\ln{5} + i \\arctan{\\left(-\\frac{3}{4}\\right)} + 2 i \\pi n\n\\end{gather*}\n\n\n\\begin{center}\n\t\\plot{Figurer/f3.tikz}{5}{4}\n\\end{center}\n\n\n\\subsection*{26}\n\n\\begin{gather*}\n\ti^{\\frac{i}{2}}\n\t=\n\te^{\\frac{i}{2} \\ln{i}}\n\t=\n\te^{\\frac{i}{2} \\cdot i \\frac{\\pi}{2}}\n\t=\n\te^{-\\frac{\\pi}{4}} \\approx 0.4559\n\\end{gather*}\n\n\\begin{minted}[mathescape, fontsize=\\small, xleftmargin=0.5em]{julia}\nusing Plots\nR = 0.0821               # gas constant in L atm / (mol K)\nT = 243.6                # temperature in K\nisoterm(x) = R * T / x   # ideal-gas isotherm P = RT/V\nv = 1 : 0.1 : 10\np = isoterm.(v)\nplot(xlabel = \"V[L]\", ylabel = \"P[atm]\", xlims = (0, 15), ylims = (0, 25),\n\t\txticks = 0 : 1 : 15, yticks = 0 : 2 : 25)\nplot!(reverse(v), reverse(p);\n\t\tcolor = :red, label = \"Step 1\", arrow = :arrow, linewidth = 2)\nplot!([1, 10], [20, 20];\n\t\tcolor = :blue, label = \"Step 2\", arrow = :arrow, linewidth = 2)\nplot!([10, 10], [20, 2];\n\t\tcolor = \"#007f00\", label = \"Step 3\", arrow = :arrow, linewidth = 2)\n\\end{minted}\n\\includegraphics[width=\\linewidth]{figures/oppg_1_1.pdf}\n", "meta": {"hexsha": "ff2a89044eca2b4233ed74c50e0f6ee6061d6c94", "size": 4156, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "19Var/TKJ4202/Test/Dokumenter/oppg.tex", "max_stars_repo_name": "MarcusTL12/School", "max_stars_repo_head_hexsha": "f7302f2d390e99ad9d06004e15da032c05ec59e7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "19Var/TKJ4202/Test/Dokumenter/oppg.tex", "max_issues_repo_name": "MarcusTL12/School", "max_issues_repo_head_hexsha": "f7302f2d390e99ad9d06004e15da032c05ec59e7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "19Var/TKJ4202/Test/Dokumenter/oppg.tex", "max_forks_repo_name": "MarcusTL12/School", "max_forks_repo_head_hexsha": "f7302f2d390e99ad9d06004e15da032c05ec59e7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 16.5577689243, "max_line_length": 102, "alphanum_fraction": 0.510587103, "num_tokens": 2033, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744850834648, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5875044404899828}}
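The numerical values above are easy to verify. The following sketch is our own addition, using Python's cmath/math on the principal branch, and checks problems 13.6.10 and 13.7.26:

\begin{verbatim}
import cmath, math

# 13.6.10: sinh(3+4i) should equal sinh(3)cos(4) + i cosh(3)sin(4)
lhs = cmath.sinh(3 + 4j)
rhs = complex(math.sinh(3) * math.cos(4), math.cosh(3) * math.sin(4))
print(lhs, rhs)                                    # both ~ -6.548 - 7.619i

# 13.7.26: i^(i/2) = e^(-pi/4) on the principal branch
print((1j ** 0.5j).real, math.exp(-math.pi / 4))   # ~0.4559 for both
\end{verbatim}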
{"text": "\\chapter{Applications in Machine Learning }\n\n\\section{Certigrad}\nA recent paper\\cite{certigrad} by \\textit{Daniel Selsam} verifies the computation graph of machine learning model. The project \\textbf{Certigrad}\\cite{certigrad_video} explains that it is very possible that conventional debugging techniques fail to explain ambiguous nature of a ML model failure (like memory overflow and corruption) that may be a result of setting some hyper-parameters or a flaw in program logic. Say for example the one of the functions diverge at a certain point then it might be very difficult to track that simple using a debugger. However when verified using a proof assistant, it correctly suggestes that the unexpected behaviour is outcome of mathematical error and not program logic. Similar logic can thus be extended to verify computation graphs of every machine learning model. \n\n\\section{Automated Proving}\nWhile correcting ML model using program verification, new kind of proof assistant \\textbf{Proving Ground}\\cite{provingground} by Professor \\textit{Siddhartha Gadgil} is also being developed that uses AI to help reduce steps furthermore.\n", "meta": {"hexsha": "925fa201fbfabdbd9e51c56a84d373cbd12bea9e", "size": 1140, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "files/applications-in-ml.tex", "max_stars_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_stars_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/applications-in-ml.tex", "max_issues_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_issues_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/applications-in-ml.tex", "max_forks_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_forks_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 142.5, "max_line_length": 808, "alphanum_fraction": 0.8219298246, "num_tokens": 238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744673038222, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5875044326407611}}
{"text": "\\section{View Spectrum Data}\n\tThis drop down has options that come directly from the data loaded into the main interface. \n\n\t\t\\subsection{View Energies}\n\t\t\tThis takes a set of energies, in MeV, separated by commas to draw vertical lines on the graph. These lines will be drawn as a dotted red line.\n\t\t\\subsection{Change Zoom Location}\n\t\t\tChanges the minimum and maximum energies shown on the graph. Much like a zoom feature. \n\t\t\\subsection{Calibration Energies}\n\t\t\tTakes a set of values, separated by commas to draw vertical lines representing calibration energies. These will appear as solid black lines. \n\t\t\\subsection{Count Rates}\n\t\t\tWill display the count rate of all loaded spectrum in counts per second. If a time of 1 second has been entered for the accumulation time, the value displayed will be the total number of counts in the total spectrum.\n\t\t\\subsection{ROI Uncertainties}\n\t\tThis will compute the gross count rate, net count rate, and the associate uncertainties for each in a specified region of interest. The gross count uncertainties are found using $\\sigma=\\sqrt{N}$, where $\\sigma$ is the standard deviation and $N$ is the number of counts in the region of interest. The net uncertainty is found using equation \\ref{eq:net_uncer}, where $\\sigma_F$ and $\\sigma_B$ are foreground and background standard deviations respectively.\n\t\t\\begin{equation}\n\t\t\t\\sigma_{net}=\\sqrt{\\sigma_F^2+\\sigma_B^2}\n\t\t\t\\label{eq:net_uncer}\n\t\t\\end{equation}\n\t\t\n\t\t\\subsection{Energy Resolution}\n\t\t\tOnce a spectrum has been plotted and is located in the Plotted Spectrum section, the option to view the energy resolution will be enabled. This option uses a peak finding algorithm to analyze and determine the peaks. The energy resoution is calculated as the full width half maximum of the identified peak divided by the peak centroid location. This is calculated and displayed for each peak found in the spectrum. The energy resolution is displayed next to a vertical line as a percentage. ", "meta": {"hexsha": "185f0473d150d4d7f8a2838fb250516acf81c03a", "size": 1981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/Spectrum_Menu.tex", "max_stars_repo_name": "alangburl/NWIC_Spectrum_Viewer", "max_stars_repo_head_hexsha": "aac95bce19a7361d357f7203e3a7a1813bf6c380", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Documentation/Spectrum_Menu.tex", "max_issues_repo_name": "alangburl/NWIC_Spectrum_Viewer", "max_issues_repo_head_hexsha": "aac95bce19a7361d357f7203e3a7a1813bf6c380", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documentation/Spectrum_Menu.tex", "max_forks_repo_name": "alangburl/NWIC_Spectrum_Viewer", "max_forks_repo_head_hexsha": "aac95bce19a7361d357f7203e3a7a1813bf6c380", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.05, "max_line_length": 494, "alphanum_fraction": 0.7834427057, "num_tokens": 448, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5875044291746774}}
{"text": "\n\\appendix\n\n\\section{Proof of the invertibility of the Farkas representation}\nIn this appendix, we prove~\\cref{lem:farkinv} which is restated here\nas~\\cref{lem:farkinv_app}.\n\nIn the following lemma, we present an integral indentity involving the\nGreen's function of the oscillatory biharmonic equation.\n\\begin{lem}\nSuppose that $\\Gbh$ is the Green's function of the oscillatory biharmonic equation \nwith Helmholtz parameter $k$. Suppose that $v:\\R^{2} \\to \\R$ is a compactly supported smooth\nfunction. Then\n\\begin{equation}\n\\int_{\\R^{2}} \\Gbh(\\xx,\\yy) \\Delta (\\Delta + k^2) v(\\yy) d\\yy = v(\\xx)\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nSuppose that $r>0$. \nLet $B_{r}(\\xx)$ denote the ball of radius $r$ centered at the origin\nand let $\\partial B_{r}(\\xx)$ denote its boundary.\nThen applying Green's second identity twice, we get \n\\begin{align}\n\\int_{\\R^{2} \\setminus B_{r}(\\xx)} \\Gbh(\\xx,\\yy) \\Delta (\\Delta +k^{2}) v(\\yy)\n&=\n\\int_{\\R^{2} \\setminus B_{r}(\\xx)} \\Delta \\Gbh (\\xx,\\yy) (\\Delta + k^2) \nv(\\yy) + \\\\\n& \\hspace{5ex}\\int_{\\partial B_{r}(\\xx)} \\Gbh(\\xx,\\yy) (\\Delta + k^{2})\n\\frac{\\partial}{\\partial n}v(\\yy) \\, dS - \\\\\n& \\hspace{5ex}\\int_{\\partial B_{r}(\\xx)} \\frac{\\partial }{\\partial n} \\Gbh(\\xx,\\yy) \n(\\Delta + k^2) v \\, dS \\\\\n&= \\int_{\\R^{2} \\setminus B_{r}(\\xx)} \\Delta (\\Delta + k^2)G(\\xx,\\yy) v(\\yy)\nd\\yy  - \\nonumber \\\\\n& \\hspace{5ex}\\int_{\\partial B_{r}(\\xx)} (\\Delta + k^2) \n\\frac{\\partial }{\\partial n} \\Gbh(\\xx,\\yy) v \\, dS + \\nonumber\\\\\n& \\hspace{5ex}\\int_{\\partial B_{r}(\\xx)} (\\Delta + k^2) \n\\Gbh(\\xx,\\yy) \\frac{\\partial}{\\partial n}v \\, dS - \\nonumber \\\\\n& \\hspace{5ex}\\int_{\\partial B_{r}(\\xx)}  \n\\frac{\\partial }{\\partial n} \\Gbh(\\xx,\\yy) \\Delta v\\, dS + \\\\\n& \\hspace{5ex}\n\\int_{\\partial B_{r}(\\xx)} \\Gbh(\\xx,\\yy) \\Delta \\frac{\\partial v}{\\partial n}\n\\, dS \\nonumber\n\\end{align}\nFor all $\\yy \\in \\partial B_{r}(x)$, there exists a constant $C$ such that\n$\\Gbh$ satisfies the following equations\n\\begin{align}\n(\\Delta + k^2) \\frac{\\partial }{\\partial n} G(\\xx,\\yy) &= \n-\\frac{1}{2\\pi r} \\label{eq:estapp1}\\\\\n\\left|(\\Delta + k^2) G(\\xx,\\yy) \\right| &\\leq C \\log{(r)} \\label{eq:estapp2}\\\\\n\\left|\\frac{\\partial }{\\partial n} G(\\xx,\\yy) \\right| &\\leq C r\\log{(r)} \n\\label{eq:estapp3}\\\\\n\\left| G(\\xx,\\yy) \\right| &\\leq C r^{2} \\log{(r)} \\label{eq:estapp4}\\, .\n\\end{align}\nFurthermore, $\\Gbh$ satisfies $\\Delta (\\Delta + k^2) \\Gbh(\\xx,\\yy) = 0$\nfor all $\\yy \\in \\R^{2} \\setminus B_{r}(\\xx)$.\nUsing the estimates in~\\cref{eq:estapp1,eq:estapp2,eq:estapp3,eq:estapp4}, \nand the smoothness of $v$, we get\n\\begin{equation}\n\\int_{\\R^{2} \\setminus B_{r}(\\xx)} \\Gbh(\\xx,\\yy) \\Delta (\\Delta + k^2) v(\\yy) \nd \\yy = \\frac{1}{2\\pi r} \\int_{\\partial B_{r}(\\xx)} v(\\yy) dS + \nO(r \\log(r)) \\, \\label{eq:estapp5}.\n\\end{equation}\nThe result then follows by taking the limit $r\\to 0$ in~\\cref{eq:estapp5}. \n\\end{proof}\n\\label{sec:farkproof}\n\n\\section{Proof of invertibility of new representation}\nThe proof proceeds as follows:\n\\begin{itemize}\n\\item Show uniqueness of solutions to the impedence, velocity, and surface\ntraction boundary value problem in exterior domains for the oscillatory Stokes\nPDEs. 
In order to prove this, we need the following lemmas:\n\\begin{itemize}\n\\item Deriving Green's theorem for exterior domains\n\\item Rellich's lemma\n\\item Uniqueness proof\n\\end{itemize}\n\\item The proof for the clamped plate problem proceeds in a manner\nanalogous to the Dirichlet biharmonic paper.\n\\end{itemize}\n\nWith this in mind, we first discuss the PDE theory for the impedance, \nvelocity and surface traction boundary value problems on unbounded\ndomains for the oscillatory Stokes equation.\nWe focus on the proof for the velocity boundary value problem. \nThe proofs for the surface traction and impedance problems are similar.\n\nSuppose that $\\Omega$ is a bounded simply connected domain and that\n$E = \\R^{2} \\setminus \\Omega$ is its exterior.\nLet $\\Gamma = \\partial \\Omega$ denote the boundary of $E$.\nFor a given function $\\bh$ on the boundary, \nthe exterior velocity boundary value problem is given by:\n\\begin{align}\n\\Delta \\bu + \\omega^{2} \\bu &= \\nabla p \\quad \\bx \\in E \\, ,\\\\\n\\nabla \\cdot \\bu &= 0 \\quad \\bx \\in E \\, ,\\\\\n\\bu &= \\bh \\quad \\bx \\in \\Gamma \\, .\n\\end{align}\nLet $\\sigma$ denote the stress tensor associated with the velocity\nfield $\\bu$, i.e.\n\\begin{equation}\n\\sigma = -p I + e(\\bu) \\, ,\n\\end{equation}\nwhere $e(\\bu)$ is the strain tensor given by\n\\begin{equation}\ne(\\bu) = \\frac{1}{2}\\left(D\\bu + D\\bu^{T} \\right) \\, .\n\\end{equation}\nLet $\\bn$ denote the outward normal to the boundary $\\Gamma$; then\nthe surface traction $\\ff$ on the boundary $\\Gamma$ is given \nby\n\\begin{equation}\n\\ff = \\sigma \\cdot \\bn \\, .\n\\end{equation}\nAs for the Helmholtz equation, we need to impose radiation conditions\nat $\\infty$.\nWe propose the following radiation condition for the oscillatory Stokes \nequation,\n\\begin{equation}\n\\lim_{r\\to \\infty} \\sqrt{r} \\left| \\ff - i \\omega \\bu \\right| \\to 0 \\, ,\n\\label{eq:radcond}\n\\end{equation}\nuniformly in direction.\n\n\\begin{lem}\n\\label{lem:rep}\nSuppose that $\\bu$ satisfies the oscillatory Stokes equation in \nan unbounded region $E$ along with the radiation \ncondition~\\cref{eq:radcond}. \nThen \n\\begin{equation}\n\\label{eq:repinfest}\n\\lim_{r\\to\\infty}\n\\int_{|\\by|=r} \\left( |\\ff|^2 + |\\omega|^2 |\\bu|^2 \\right) ds +\n2 \\text{Im}(\\omega) \\int_{E \\cap B_{r}(0)} \\left(|\\omega|^2 |\\bu|^2 + |e(\\bu)|^2 \\right)\ndV = -2 \\text{Im} \\left( \\omega \\int_{\\Gamma} \\bu \\cdot \\overline{\\ff} ds  \\right)\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nSince $\\bu$ satisfies the radiation condition, we have that\n\\begin{equation}\n\\lim_{r\\to\\infty} \\int_{|\\by|=r} | \\ff - i \\omega \\bu|^2 ds = \n\\lim_{r\\to\\infty} \\int_{|\\by| =r} \\left( |\\ff|^2 + |\\omega|^2|\\bu|^2 + 2 \\text{Im} \n\\left( \\omega \\bu\\cdot \\overline{\\ff} \\right) \\right) ds = 0 \\, . \\label{eq:raddecayproof1}
\n\\end{equation}\nSince $\\bu$ satisfies the oscillatory Stokes equation in $E \\cap B_{r}(0)$,\ntwo applications of the divergence theorem give\n\\begin{align}\n\\int_{E\\cap B_{r}(0)} |e(\\bu)|^2 dV &= \\int_{E\\cap B_{r}(0)} \\left< e(\\bu), \\overline{e(\\bu)} \\right> dV  \\\\\n&= \\frac{1}{2}\\left(\\int_{E\\cap B_{r}(0)} \\left< e(\\bu),\\overline{D\\bu} \\right> + \\left< e(\\bu),\\overline{D\\bu}^{T} \\right> \\right) dV \\\\\n&= \\int_{E\\cap B_{r}(0)} \\left< e(\\bu),\\overline{D\\bu} \\right> dV \\\\\n&= -\\int_{\\Gamma} \\bu \\cdot \\overline{\\ff} ds\n+ \\int_{|\\by|=r} \\bu \\cdot \\overline{\\ff} ds + \\overline{\\omega}^2 \n\\int_{E \\cap B_{r}(0)} |\\bu|^2 dV \\,. \\label{eq:raddecayproof2}\n\\end{align}\nCombining~\\cref{eq:raddecayproof1,eq:raddecayproof2}, we get\n\\begin{equation}\n\\lim_{r\\to\\infty} \\int_{|\\by|=r}\\left(|\\ff|^2 + |\\omega|^2 |\\bu|^2 \\right) ds \n+ 2\\text{Im}(\\omega)\\int_{E \\cap B_{r}(0)} \\left(|e(\\bu)|^2 + |\\omega|^2 |\\bu|^2 \n\\right) dV = -2\\text{Im} \\int_{\\Gamma} \\omega \\bu \\cdot \\overline{\\ff} dS \\, .\n\\end{equation}\n\\end{proof}\n\nIn the next lemma, we prove the analogue of Rellich's lemma\nfor the oscillatory Stokes equation. \n\\begin{lem}\n\\label{lem:rellich}\nSuppose that $\\bu$ satisfies the oscillatory Stokes equation in an unbounded\nregion $E$.  \nSuppose further that \n\\begin{equation}\n\\lim_{r \\to \\infty} \\int_{|\\by|=r} |\\bu|^2 dS = 0 \n\\, . \\label{eq:decayatinf}\n\\end{equation}\nThen each component of $\\bu$ is harmonic and has a convergent Laurent\nexpansion in $E$.\n\\end{lem}\n\\begin{proof}\nWe first note that each component of $\\bu= (u_{1},u_{2})$ satisfies the \noscillatory biharmonic equation in $E$, i.e.\n\\begin{equation}\n\\Delta (\\Delta + \\omega^2) u_{j} = 0 \\quad j=1,2 \\,. \n\\end{equation}\nFor $r$ sufficiently large, we can express $u_{j}$ in the Fourier basis as\n\\begin{equation}\nu_{j}(r,\\theta) = \\sum_{n=-\\infty}^{\\infty} a_{j,n}(r) e^{i n \\theta}  \\quad \nj=1,2 \\, .\n\\end{equation}\nUsing Parseval's identity, we then have\n\\begin{equation}\n\\int_{|\\by|=r} |\\bu|^2 ds = r\\sum_{n=-\\infty}^{\\infty} \\left( |a_{1,n}(r)|^2  +\n|a_{2,n}(r)|^2 \\right) \\, .\n\\end{equation}\nSince $\\bu$ satisfies~\\cref{eq:decayatinf}, we conclude that\n\\begin{equation}\n\\lim_{r\\to\\infty} r|a_{j,n}(r)|^2 = 0 \\quad j=1,2 \\, . \\label{eq:adecay}\n\\end{equation}\nSince $u_{j}$, $j=1,2$, satisfy the oscillatory biharmonic equation,\nthe functions $a_{j,n}$ are linear combinations of \n\\begin{equation}\nr^{|n|}, r^{-|n|}, H^{1}_{n}(\\omega r), H^{2}_{n}(\\omega r) \\, , \\quad\nn\\neq 0 \\, ,\n\\end{equation}\nand\n\\begin{equation}\n1, \\log{(r)}, H^{1}_{0}(\\omega r), H^{2}_{0}(\\omega r) \\quad n=0 \\, , \n\\end{equation} \nwhere $H_{n}^{1,2}(\\cdot)$ are the Hankel functions of the first and\nsecond kind of order $n$.\nSince $a_{j,n}(r)$ satisfy~\\cref{eq:adecay}, and using the asymptotic \nexpansion of $H_{n}^{1,2}(r)$, we note that the projection of \n$a_{j,n}$ on $r^{|n|}$ (and on $\\log{(r)}$ for $n=0$) and on $H_{n}^{1,2}(\\omega r)$ must be zero. 
\nThus, \n\\begin{equation}\nu_{j}(r,\\theta) = \\sum_{n=-\\infty}^{\\infty} \\frac{a_{j,n} e^{i n \\theta}}{r^{|n|}} \n\\, .\n\\end{equation}\n\\end{proof}\n\\begin{remark}\nNote that each component of $\\bu$ is harmonic and thus $\\bu$ satisfies\n\\begin{align}\n\\omega^2 \\bu &= \\nabla p  \\quad \\label{eq:massconsred} \\\\\n\\nabla \\cdot \\bu &= 0 \\, .\n\\end{align}\nApplying $\\nabla^{\\perp} \\cdot$ to~\\cref{eq:massconsred}, \nwe note that $\\bu$ satisfies both $\\nabla^{\\perp} \\cdot \\bu = 0$ \nand $\\nabla \\cdot \\bu = 0$.\nThis remark will be useful for the impedance problem...\n\\end{remark}\n\\begin{remark}\nNote that in Rellich's lemma $\\bu$ need not be a radiating solution. \nAll we know of $\\bu$ is that it satisfies the PDE in $E$.\n\\end{remark}\n\n\\begin{thm}\n(Uniqueness for the Dirichlet problem)\nSuppose that $\\text{Im}(\\omega)\\geq 0$ and \nthat $\\bu$ is a radiating solution to the oscillatory Stokes\nequation in $E$ with $\\bu =0$ on the boundary $\\Gamma$. Then\n$\\bu \\equiv 0$ in $E$.\n\\end{thm}\n\n\\begin{proof}\nSince $\\bu = 0$ on $\\Gamma$, it follows from~\\cref{eq:repinfest} that\n\\begin{equation}\n\\lim_{r\\to\\infty}\n\\int_{|\\by|=r} \\left( |\\ff|^2 + |\\omega|^2 |\\bu|^2 \\right) ds +\n2 \\text{Im}(\\omega) \\int_{E \\cap B_{r}(0)} \\left(|\\omega|^2 |\\bu|^2 + |e(\\bu)|^2 \\right)\ndV = 0 \\, .\n\\end{equation} \nIn particular, this implies that\n\\begin{equation}\n\\lim_{r\\to\\infty} \\int_{|\\by|=r} |\\bu|^2 ds = 0 \\, .\n\\end{equation}\nThus, the conditions on $\\bu$ in Rellich's lemma (\\cref{lem:rellich})\nare satisfied, and each component of $\\bu$ is a harmonic function\nwith $\\bu \\to 0$ as $r \\to \\infty$. Furthermore, since $\\bu=0$ on\n$\\Gamma$, from the uniqueness of solutions to the Dirichlet problem\non exterior domains, we conclude that $\\bu \\equiv 0$ in $E$.\n\\end{proof}\n\nThe proof for the Neumann problem proceeds in a similar manner. \n", "meta": {"hexsha": "988480023260849e3eb30210f774535cc5c4b06f", "size": 10444, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/draft-01/08append2.tex", "max_stars_repo_name": "askhamwhat/biharm-evals", "max_stars_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/draft-01/08append2.tex", "max_issues_repo_name": "askhamwhat/biharm-evals", "max_issues_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/draft-01/08append2.tex", "max_forks_repo_name": "askhamwhat/biharm-evals", "max_forks_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5606060606, "max_line_length": 137, "alphanum_fraction": 0.6586556875, "num_tokens": 3911, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5875044291746774}}
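The decay argument in the Rellich-type lemma can be checked numerically: for real $\omega$, $r\,|H_{n}^{1}(\omega r)|^{2}$ tends to the nonzero constant $2/(\pi\omega)$, so the Hankel contributions cannot satisfy~\cref{eq:adecay}, while the $r^{-|n|}$ part does. A small sketch of ours, using SciPy (the values of $\omega$ and $n$ are arbitrary):

\begin{verbatim}
import numpy as np
from scipy.special import hankel1

# r * |H_n^1(w r)|^2 -> 2/(pi*w) (nonzero), while r * (r^{-n})^2 -> 0
w, n = 2.0, 3
for r in [1e2, 1e4, 1e6]:
    print(r, r * abs(hankel1(n, w * r))**2, r * (r ** -n)**2)
print(2 / (np.pi * w))   # limiting value of the Hankel column
\end{verbatim}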
{"text": "\\chapter{Code: Numerical Solver}\n\nSome extra bits of code to help check if a bipartite state's ket vector can be separated numerically.\\\\\nA copy can be obtained from\\\\\nhttps://github.com/saad440/undergrad-project \n\n\\begin{verbatim}\n# -*- coding: utf-8 -*-\n\"\"\"\nThis module contains simple solver functions to check if a given state\nac|00>+ad|01>+bc|10>+bd|11> is separable and if it is, solve for a,b,c,d.\n\nProject: Investigation of Entanglement Measures in QuTiP\n         (IIUI, Fall 2015)\n\n@author: Muhammad Saad <muhammad.saad@yandex.com>\n\"\"\"\n\n__all__ =  ['solve_coefficients', 'verify_separability',\n            'is_separable_coeffs', 'is_separable']\n\nfrom scipy.optimize import fsolve\nimport numpy as np\n\ndef solve_coefficients(coeffprod):\n    \"\"\" Simple solver to separate a,b,c,d from products ac,ad,bc,bd\n    \n    Parameters\n    ----------\n    coeffprod : list\n                [ac,ad,bc,bd] of which to separate a, b, c and d.\n    \n    -------\n    z :       list\n                The solutions [a,b,c,d]\n\n    Examples\n    --------\n    >>> \n    \n    \"\"\"\n    ac,ad,bc,bd = coeffprod\n    def f(x):\n        a,b,c,d = x\n        r = np.zeros(4)\n        r[0] = a*c - ac\n        r[1] = a*d - ad\n        r[2] = b*c - bc\n        r[3] = b*d - bd\n        return r\n    z = fsolve(f,[1,1,1,1])\n    return z\n\ndef verify_separability(coeffprod,coeffsepd,tol=6):\n    \"\"\" Verify that the solutions obtained are actually valid\n    \n    Parameters\n    ----------\n    coeffprod : list\n                [ac,ad,bc,bd] of which to separate a, b, c and d.\n    coeffsepd : list\n                [a,b,c,d], the solutions obtained from solve_coefficients\n    \n    -------\n    vrfy :      bool\n                True: match, False: donot match\n    \n    \"\"\"\n    a,b,c,d = coeffsepd\n    ac,ad,bc,bd = coeffprod\n    vrfy = [np.round((a*c-ac),tol)==0, np.round((a*d-ad),tol)==0,\n            np.round((b*c-bc),tol)==0, np.round((b*d-bd),tol)==0]\n    if all(vrfy):\n        return True\n        \ndef is_separable_coeffs(coeffprod,tol=6):\n    \"\"\" Check whether a list of coefficients to a state is separable\n    \n    Parameters\n    ----------\n    coeffprod : list\n                [ac,ad,bc,bd] of which to separate a, b, c and d.\n    tol :       int\n                rounding error tolerance\n    \n    -------\n    separable : bool\n                True: Coefficients are separable, False: not separable\n    \n    \"\"\"\n    ac,ad,bc,bd = coeffprod\n    a,b,c,d = solve_coefficients(coeffprod)\n    coeffsepd = a,b,c,d\n    separable = verify_separability(coeffprod,coeffsepd)\n    return bool(separable)\n    \ndef is_separable(kt,tol=6):\n    \"\"\" Check whether a state vector represents a separable state\n    Note: Currently limited to 2-level product states of two particles\n    \n    Parameters\n    ----------\n    kt  :       ket state vector\n                [ac,ad,bc,bd] of which to separate a, b, c and d.\n    tol :       int\n                rounding error tolerance\n    \n    -------\n    separable : bool\n                True: State is separable, False: not separable\n    \n    \"\"\"\n    # Current limitations. 
Need to generalize\n    limitations = [ kt.isket, kt.shape==[4,1] ] # kt.dims==[[2,2],[1,1]]\n    if not all(limitations):\n        raise ValueError(\"Currently limited to 2-level product states of 2 particles\")\n    ac,ad,bc,bd = kt.full()\n    ac,ad,bc,bd = ac[0],ad[0],bc[0],bd[0]\n    coeffprod = ac,ad,bc,bd\n    is_sp = is_separable_coeffs(coeffprod,tol)\n    return is_sp\n\\end{verbatim}\n\n", "meta": {"hexsha": "a07b862592227f933a3e76e44a0bddd786db0c13", "size": 3449, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix-b.tex", "max_stars_repo_name": "saad440/undergrad-project", "max_stars_repo_head_hexsha": "6e4ddf112219c1e884ca2b2657852d54524c282c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-06-13T00:02:19.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-13T00:02:19.000Z", "max_issues_repo_path": "chapters/appendix-b.tex", "max_issues_repo_name": "saad440/undergrad-project", "max_issues_repo_head_hexsha": "6e4ddf112219c1e884ca2b2657852d54524c282c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix-b.tex", "max_forks_repo_name": "saad440/undergrad-project", "max_forks_repo_head_hexsha": "6e4ddf112219c1e884ca2b2657852d54524c282c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.814516129, "max_line_length": 103, "alphanum_fraction": 0.5761090171, "num_tokens": 969, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744584140003, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5875044262525849}}
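A brief usage sketch for the module above (an editorial addition, not part of the project; the module name \texttt{solver} is hypothetical, and an era-appropriate QuTiP is assumed, since the \texttt{kt.shape==[4,1]} check predates versions in which \texttt{shape} is a tuple):

\begin{verbatim}
from qutip import basis, tensor
from solver import is_separable   # hypothetical module name

zero, one = basis(2, 0), basis(2, 1)

# Product state |0> (x) (|0>+|1>)/sqrt(2): its coefficients factor,
# so the residual check should pass.
product = tensor(zero, (zero + one).unit())
print(is_separable(product))   # expected: True

# Bell state (|00>+|11>)/sqrt(2): entangled, so no factorization
# exists, fsolve cannot zero the residuals, and the check fails.
bell = (tensor(zero, zero) + tensor(one, one)).unit()
print(is_separable(bell))      # expected: False
\end{verbatim}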
{"text": "% Part: first-order-logic\n% Chapter: sequent-calculus\n% Section: quantifier-rules\n\n\\documentclass[../../../include/open-logic-section]{subfiles}\n\n\\begin{document}\n\n\\olfileid{fol}{seq}{qrl}\n\n\\olsection{Quantifier Rules}\n\n\\subsection{Rules for $\\lforall$}\n\n\\begin{defish}\n\\Axiom$ !A(t), \\Gamma \\fCenter \\Delta$\n\\RightLabel{\\LeftR{\\lforall}}\n\\UnaryInf$ \\lforall[x][!A(x)],\\Gamma \\fCenter \\Delta$\n\\DisplayProof\n\\hfill\n\\Axiom$ \\Gamma \\fCenter \\Delta, !A(a) $\n\\RightLabel{\\RightR{\\lforall}}\n\\UnaryInf$ \\Gamma \\fCenter \\Delta, \\lforall[x][!A(x)]$\n\\DisplayProof\n\\end{defish}\n\nIn \\LeftR{\\lforall}, $t$ is a closed term (i.e., one without\nvariables). In \\RightR{\\lforall}, $a$~is !!a{constant} which must\nnot occur anywhere in the lower sequent of the \\RightR{\\lforall}\nrule. We call $a$ the \\emph{eigenvariable} of the \\RightR{\\forall}\ninference.\n\n\\subsection{Rules for $\\lexists$}\n\n\\begin{defish}\n\\Axiom$ !A(a), \\Gamma \\fCenter \\Delta $\n\\RightLabel{\\LeftR{\\lexists}}\n\\UnaryInf$ \\lexists[x][!A(x)], \\Gamma \\fCenter \\Delta$\n\\DisplayProof\n\\hfill\n\\Axiom$ \\Gamma \\fCenter \\Delta, !A(t) $\n\\RightLabel{\\RightR{\\lexists}}\n\\UnaryInf$ \\Gamma \\fCenter \\Delta, \\lexists[x][!A(x)]$\n\\DisplayProof\n\\end{defish}\n\nAgain, $t$~is a closed term, and $a$~is !!a{constant} which does\nnot occur in the lower sequent of the \\LeftR{\\lexists} rule. We call\n$a$ the \\emph{eigenvariable} of the \\LeftR{\\lexists} inference.\n\nThe condition that an eigenvariable not occur in the lower sequent of\nthe \\RightR{\\lforall} or \\LeftR{\\lexists} inference is called the\n\\emph{eigenvariable condition}.\n\n\\begin{explain}\nWe use the term ``eigenvariable'' even though $a$ in the above rules\nis !!a{constant}. This has historical reasons.\n\nIn \\RightR{\\lexists} and \\LeftR{\\lforall} there are no restrictions on\nthe term~$t$. On the other hand, in the \\LeftR{\\lexists} and\n\\RightR{\\lforall} rules, the eigenvariable condition requires that the\n!!{constant}~$a$ does not occur anywhere outside of~$!A(a)$ in the\nupper sequent. It is necessary to ensure that the system is sound,\ni.e., only !!{derive}s sequents that are valid. 
Without this\ncondition, the following would be allowed:\n\\begin{prooftree}\n  \\Axiom$!A(a) \\fCenter !A(a)$\n  \\RightLabel{*\\LeftR{\\lexists}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter !A(a)$\n  \\RightLabel{\\RightR{\\lforall}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter \\lforall[x][!A(x)]$\n  \\DisplayProof\\bottomAlignProof\n  \\qquad\n  \\Axiom$!A(a) \\fCenter !A(a)$\n  \\RightLabel{*\\RightR{\\lforall}}\n  \\UnaryInf$!A(a) \\fCenter \\lforall[x][!A(x)]$\n  \\RightLabel{\\LeftR{\\lexists}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter \\lforall[x][!A(x)]$\n\\end{prooftree}\nHowever, $\\lexists[x][!A(x)] \\Sequent \\lforall[x][!A(x)]$ is not valid.\n\\end{explain}\n\n\\end{document}\n", "meta": {"hexsha": "8d8584aee500aa44122f76906f0d13c0fc2854be", "size": 2717, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_stars_repo_name": "jzc/OpenLogic", "max_stars_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 754, "max_stars_repo_stars_event_min_datetime": "2015-01-13T20:57:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T23:18:26.000Z", "max_issues_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_issues_repo_name": "jzc/OpenLogic", "max_issues_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 229, "max_issues_repo_issues_event_min_datetime": "2015-01-12T23:00:15.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T19:14:08.000Z", "max_forks_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_forks_repo_name": "jzc/OpenLogic", "max_forks_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 241, "max_forks_repo_forks_event_min_datetime": "2015-02-28T22:05:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T18:47:05.000Z", "avg_line_length": 32.3452380952, "max_line_length": 71, "alphanum_fraction": 0.7004048583, "num_tokens": 932, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680199891789, "lm_q2_score": 0.7905303162021596, "lm_q1q2_score": 0.5874968498333785}}
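(An editorial addition, not part of the Open Logic source:) the entailment in the converse direction is valid, and its derivation needs no eigenvariable at all. A sketch using only \LeftR{\lforall} and \RightR{\lexists}, which place no restriction on the closed term~$t$:
\begin{prooftree}
  \Axiom$!A(t) \fCenter !A(t)$
  \RightLabel{\RightR{\lexists}}
  \UnaryInf$!A(t) \fCenter \lexists[x][!A(x)]$
  \RightLabel{\LeftR{\lforall}}
  \UnaryInf$\lforall[x][!A(x)] \fCenter \lexists[x][!A(x)]$
\end{prooftree}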
{"text": "\\section{Skewness and Vector-Valued Data-Constructed QoI Maps}\nIn our first example, we show that for the problem in \\ref{subsec:pde-example}, a small change in the decision of how to aggregate the measurements collected on the response surface can induce a 2-dimensional map $\\qoi_\\text{2D}^\\prime$ which is ``effectively'' one-dimensional; the two components provide highly correlated information.\nSuch a map is, by definition, more skewed than the one we presented with $\\qoi_\\text{2D}$.\nTherefore, $\\qoi_\\text{2D}^\\prime$ leads to MUD-point solutions that exhibit similar behavior to those obtained with $\\qoi_\\text{1D}$.\nThis decreased precision demonstrates that skewness\\---although reviewed and analyzed in the previous chapter within the context of accuracy of set-valued solutions\\---is a relevant measure of a map's utility for solving parameter identification problems.\n\nThe examples in Sections~\\ref{subsec:ode-example} and \\ref{subsec:pde-example} motivate the use of a data-constructed QoI in order to incorporate an arbitrary number of measurements in a system into a scalar-valued map.\nThese examples are chosen so that $\\text{dim}({\\Lambda}) = 1$ for simplicity and to establish a baseline for convergence results.\nThe linear examples in Section~\\ref{sec:high-dim-linear-example} demonstrate that the DCI framework maintains the accuracy of least-squares while incorporating initial beliefs for higher-dimensional linear maps.\nIn those examples, we show that the ability to resolve a true parameter improves as the gap between input dimension and operator row-rank decreases.\n\nThe rank-deficiency of an operator is attributed to either the ill-conditioning of an operator $A:P\\to P$, or when $P>D$ for a full-rank $A:P\\to D$.\nIn scenarios where $S>P$ observations are available, we are motivated to leverage the form of Eq.~\\eqref{eq:qoi_WME} to construct a vector-valued version of the QoI map incorporating subsets of $S$ for each component.\nFor example, a system for which spatial measurements are available over time may motivate constructing a scalar-valued QoI map using the WME functional for each spatial location.\nIf distinct observable quantities are available (e.g., perhaps with different physical units), then these may be collapsed into each component of the map.\n\nThe discussion of how to optimally construct such maps is beyond the scope of this work, and is highly problem specific, requiring nuances involving measurement sensitivities and combinatorial design-spaces.\nHowever, we summarize that the extension of the equations presented in Section~\\ref{sec:MUD_analysis} follows directly by constructing the resultant $1\\times P$ matrices $A$ and scalar-valued $b$ for each component and then stacking them to form a $D\\times P$ system, where we are motivated to minimize $P-D$.\nThe choice of how to form each of the (up to) $\\dimP$ components will directly impact the precision of the parameter estimate.\n", "meta": {"hexsha": "a1c91f85237fdafd5f7aa508cc22d58125c29f58", "size": 2942, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "extensions/mud_vector_map.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "extensions/mud_vector_map.tex", 
"max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "extensions/mud_vector_map.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 147.1, "max_line_length": 336, "alphanum_fraction": 0.7987763426, "num_tokens": 655, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5874968471703446}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\begmath 9.2 Local Minimum of a Multivariate Function \\hbox{with Linear Constraints}\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThe subroutine DMLC01 finds a local minimum of a continuously differentiable\nfunction $f$ of $n$ variables. Variables may be subject to bounds, and\nlinear equality and inequality constraints. The problem statement is%\n\\begin{gather}\n\\text{Minimize }f({\\bf x}),\\text{ subject to}\\notag\\\\\n\\label{O1}A{\\bf x} \\leq {\\bf b},\\text{ and}\\\\\n\\label{O2}XL_j\\leq x_j\\leq XU_j,\\quad j=1,...,n\n\\end{gather}\nwhere $A$ is an $m\\times n$ matrix, ${\\bf b}$ is an $m$-vector, and ${\\bf x}$ is\nthe $n $-vector to be determined. It can be specified that the first MEQ\nrows of the system $A{\\bf x} \\leq  {\\bf b}$ are to be equality constraints.\nThis subroutine is also applicable to an unconstrained problem as one can\nset $m=0 $ and set the contents of XL() and XU() to have large magnitudes.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Double Precision}\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf N, M, MEQ, LDA, IPRINT,\\\\ MXEVAL, LIW, LW, IW}(LIW)\n\\item[DOUBLE PRECISION]  {\\bf A}(LDA,$\\geq $N){\\bf , B}($\\geq $M){\\bf ,\\\\\nXL}($\\geq $N){\\bf, XU}($\\geq $N){\\bf , X}($\\geq $N){\\bf, ACC, W}(LW)\n\\item[EXTERNAL]  {\\bf \\ DMLCFG }\n\\end{description}\nAssign values to N, M, MEQ, A(,), B(), XL(), XU(), X(), ACC, IPRINT, MXEVAL,\nLIW, and LW.\n\\begin{center}\n\\fbox{\\begin{tabular}{@{\\bf }c}\nCALL DMLC01 (DMLCFG, N, M, MEQ, A,\\\\\nLDA, B, XL, XU, X, ACC, IPRINT,\\\\\nMXEVAL, IW, LIW, W, LW)\\\\\n\\end{tabular}}\n\\end{center}\nResults are returned in X(), IW(), and W().\n\\subsubsection{Argument Definitions}\n\\begin{description}\n\\item[DMLCFG]  \\ [in] The name of a subroutine provided by the user to\ncompute the function value, $f({\\bf x})$, and optionally the gradient\nvector, ${\\bf g}$, $g_j=\\partial f/\\partial x_j$, $j=1$, ..., N,\nusing the current ${\\bf x}$-vector given in X(). It must have an interface\nof the form:\n{\\tt \\begin{tabbing}\nSUBROUTINE DMLCFG(N, X, F, G, HAVEG)\\\\\nINTEGER N\\\\\nDOUBLE PRECISION X(N), F, G(N)\\\\\nLOGICAL HAVEG\n\\end{tabbing}}\nDMLCFG must store the function value in F.\n\nDMLCFG may either compute the gradient vector or let the package compute an\nestimate of the gradient vector by computing finite differences of function\nvalues. Performance will generally be more reliable if DMLCFG computes the\ngradient vector.\n\nIf DMLCFG computes the gradient vector it must store it in G(1:N) and set\nHAVEG = .true. If DMLCFG does not compute the gradient it must set HAVEG =\n.false. and must not store anything into G().\n\nDMLCFG must not change the values of N or X().\n\nIn some applications DMLCFG may need additional data that may have been\ninput by the user's main program. One way to handle this in Fortran~77 is by\nuse of named COMMON blocks.\n\n\\item[N]  \\ [in] Number of components in the vector ${\\bf x}$. Require N $\\geq\n1.$\n\n\\item[M]  \\ [in] Number of general linear constraints, $i.e.$, number of\nrows in the matrix $A$ and components in the vector ${\\bf b}$. Require\nM $\\geq 0.$\n\n\\item[MEQ]  \\ [in] Specifies that rows~1 through MEQ of the system $A{\\bf x} %\n\\leq {\\bf b}$ are to be interpreted as equality constraints. 
Require $0\\leq\n\\text{MEQ}\\leq \\text{M}.$\n\n\\item[A(,)]  \\ [in] Array containing the M$\\times $N matrix $A$. Even if M $=0$\nthe array A(,) must be declared and given positive dimensions. This is a\nFortran~77 language requirement.\n\n\\item[LDA]  \\ [in] The leading dimension of the array A(,). Require LDA $\\geq\n\\max (1,\\text{M}).$\n\n\\item[B()]  \\ [in] Array containing the M-vector {\\bf b}. Even if M = 0 the\narray B() must be declared and given a positive dimension. This is a Fortran\n77 language requirement.\n\n\\item[XL(), XU()]  \\ [in] Lower and upper bounds on the variables. See\nEq.\\,(2) above. Require XL($j)\\leq $ XU($j)$ for $j=1$,\\ ...,\\ N. To\nleave a solution component effectively unconstrained, set its lower bound to\na negative number of large magnitude and its upper bound to a large positive\nnumber. Do not set these bounds to such large magnitudes that computation of\nthe difference, XU($j)-$ XL($j)$, would overflow for some $j$. Setting XL($%\nj)=$ XU($j)$ is permitted and has the effect of fixing $x_j$ at the value XL(%\n$j).$\n\n\\item[X()]  \\ [inout] The vector of variables of the optimization\ncalculation. On entry X() must contain an initial estimate of ${\\bf x}$. The\nsubroutines can usually cope with a poor estimate, and the initial ${\\bf x}$\nis not required to satisfy Eqs.~(1) and (2). On return X() will contain the\nsubroutine's best estimate of the solution vector ${\\bf x}$.\n\n\\item[ACC]  \\ [in] A tolerance on the first order Kuhn-Tucker\nconditions that the solution must satisfy. See Eq.\\,(4),\npage~\\pageref{O4} and Remark~4 in Section~C.  Setting ACC $=0.0$ is\npermitted and essentially means the user wants as much accuracy as\npossible on the host computer. The termination will then usually be\nwith INFO $=2$, meaning Eq.\\,(4) could not be satisfied.\n\n\\item[IPRINT]  \\ [in] Contains five print control parameters, packed as\nfollows:\n\\begin{tabbing}\n\\hspace{.2in}\\=IPRINT = IPFREQ $\\times $ 100 + IPTOL $\\times $ 8 +\\\\\n\\>\\quad IPFRST $\\times $ 4 + IPMORE $\\times $ 2 + IPLAST\n\\end{tabbing}\n\nIteration counting only begins after a first feasible ${\\bf x}$ is found. The\nitems to be printed are selected by IPMORE. The other parameters specify\nwhen to print. If any of IPFREQ, IPTOL, IPFRST, or IPLAST are nonzero, there\nwill be an initial printing of N, M, MEQ, ACC, RELACC, and TOL.\n\\begin{description}\n\\item[\\rm IPFREQ] is zero or positive. If positive, printing is done on\niterations 0, IPFREQ, 2*IPFREQ, etc.\n\n\\item[\\rm IPTOL] = 0 or 1. 1 means to print the new TOL value and the standard items\neach time TOL is changed. TOL is described in Section D.\n\n\\item[\\rm IPFRST] = 0 or 1. 1 means to print on the first iteration, $i.e$. on\niteration number~0.\n\n\\item[\\rm IPMORE] = 0 or 1. Items always printed are\n\\begin{description}\n\\item[\\rm ITERC ---]  The iteration count.\n\n\\item[\\rm NFVALS ---]  The count of function evaluations, which is tested\nagainst MXEVAL.\n\n\\item[\\rm F ---]  The current value of $f({\\bf x}).$\n\n\\item[\\rm TOL ---]  See Section D.\n\n\\item[\\rm $\\|\\rho \\|$ ---]  The Euclidean norm of RESKT(1:N).  This is the\nquantity that will be compared with ACC for the main convergence test. See\nSection D.\n\n\\item[\\rm X(1:N) ---]  The current value of ${\\bf x}$.\n\\end{description}\n\nWhen IPMORE = 1 the package also prints G(1:N), RESKT(1:N), IACT(1:NACT),\nand PAR(1:NACT). These quantities are described in Section D.\n\n\\item[\\rm IPLAST] = 0 or 1. 
1 means to print the final results, and the\nreason for termination.\n\\end{description}\n\n\\item[MXEVAL]  \\ [in] If positive, this sets an upper limit on the number of\nfunction evaluations. Gradient evaluations and function evaluations for\nestimating the gradient are not counted. If MXEVAL = 0, there is no upper\nlimit.\n\n\\item[IW()]  \\ [work, out] Integer work space of length LIW. Also used to\nreturn status information. On return IW() contains information as follows:\n\n\\begin{description}\n\\item[IW(1)]  contains INFO. Indicates reason for termination as follows:\n\n\\begin{itemize}\n\\item[1]  means successful termination. X() is feasible and the\nconvergence test involving ACC is satisfied.\n\n\\item[2]  means X() is feasible and satisfaction of the ACC test\nseems to be prevented by the precision of arithmetic being used. This mode\nof termination will commonly occur if the user sets ACC = 0.\n\n\\item[3]  means X() is feasible but the objective function fails to\ndecrease, although a decrease is predicted by the current gradient vector.\nThis may be due to limitation of computational precision as with INFO = 2,\nhowever, if the final RESKT() has components of large magnitude and the user\nhas provided code to compute the gradient vector, this could be due to an\nerror in the gradient code. One should also question the coding of the\ngradient when the final rate of convergence is slow.\n\n\\item[4]  means bad input values. See requirements on N, M, MEQ,\nXL(), XU(), LIW, and LW. No solution is computed in this case.\n\n\\item[5]  means the equality constraints are inconsistent. These\nconstraints include any components of ${\\bf x}$ that are frozen by setting XL($%\nj)=$ XU($j).$\n\n\\item[6]  means the equality constraints and the bounds on the\nvariables are inconsistent.\n\n\\item[7]  means there is no ${\\bf x}$ satisfying all of the\nconstraints. Specifically, when this return or an INFO = 6 return occurs,\nthe current active constraints indexed in IACT(1:NACT) prevent any change in\nX() that reduces the sum of constraint violations.\n\n\\item[8]  means the limit set by MXEVAL has been reached, and there\nwould have been further calculation otherwise.\n\n\\item[9]  means the solution is uniquely determined by the\nconstraints, so there is no further freedom for minimization of $f({\\bf x})$. In\nthis case the results returned in W() will include $f$ evaluated at the\nsolution, but will not include PAR() and RESKT(), and will include G() only\nif DMLCFG computes G().\n\\end{itemize}\n\n\\item[IW(2)]  contains ITERC. Number of iterations used.\n\n\\item[IW(3)]  contains NFVALS. Number of function evaluations, not counting\nextra function evaluations done to estimate the gradient when the gradient\nis not computed explicitly. In this latter case the actual number of\nfunction evaluations will be ($\\text{K}+1)\\times \\text{NFVALS}$, where K is the number\nof solution components whose lower and upper bounds are not equal.\n\n\\item[IW(4)]  contains NACT. The number of active constraints at the final\n${\\bf x}$. Will be in the interval [MEQ, N].\n\n\\item[IW(5:4+NACT)]  contains IACT(1:NACT).\\newline\nIACT() is used as work space of length $\\text{M} +2\\times $ N. On return\nthe first NACT locations contain the\nindices of the constraints that are active at the final point. Indices [1:M]\nrefer to rows of the system $A{\\bf x} \\leq  {\\bf b}$. Indices [M+1:M+N]\nrefer to component lower bounds. 
Indices [M+N+1:M+2$\\times $N] refer to\ncomponent upper bounds.\n\\end{description}\n\n\\item[LIW]  \\ [in] Dimension of IW(). Require\\newline\n\\rule{.4in}{0pt} LIW $\\geq  4 + \\text{M} + 2 \\times \\text{N}$.\n\n\\item[W()]  \\ [work, out] Working space of floating point variables of\nlength LW. Also used to return auxiliary results. On return W() contains\ninformation as follows:\n\\begin{description}\n\\item[W(1)] contains FVAL, the final value of the objective function, $f.$\n\n\\item[W(2)] contains the Euclidean norm of the Kuhn-Tucker residual vector, $i.e$. $%\n\\Vert \\rho \\Vert $ as described in Section D.\n\n\\item[W(3:2+N)] contains the final gradient vector, G(1:N).\n\n\\item[W(3+N:2+2$\\times $N)] contains the Kuhn-Tucker residual vector, RESKT(1:N).\n\n\\item[W(3+2$\\times $N:2+2$\\times $N+NACT)] contains the Lagrange multipliers,\nPAR(1:NACT) where NACT has a value in [MEQ, N].\n\\end{description}\nG(), RESKT(), and PAR() are described in Section D.\n\n\\item[LW]  \\ [in] Dimension of W(). Require\\newline\n\\rule{.4in}{0pt} LW $\\geq $ 3 + M + N $\\times $ (16 + N).\n\\end{description}\n\n\\subsubsection{Modifications for Single Precision}\n\nFor single precision usage change all subroutine names beginning with D,\nexcept D1MACH, to begin with S. Change the name D1MACH to R1MACH. Change all\nDOUBLE PRECISION type statements to REAL.\n\nWe suggest that the double precision version of this package be used,\nexcept on machines such as the Cray J90, where single precision arithmetic\nhas precision of about~14.4 decimal places.\n\n\\subsection{Examples and Remarks}\n\n{\\bf Example}: Minimize ${\\displaystyle f({\\bf x})=\\sum_{k=1}^4x_j\\ln x_j.}$\n\nLet us bound each variable in [0,1] and require the sum of variables to be\n1. With just these constraints a unique minimum would occur when $x_j=0.25$,\nfor all $j$. We shall add two more constraints to break up the symmetry:%\n\\vspace{-4pt}%\n\\begin{align*}\nx_1-x_2&=0.25\\vspace{-4pt}\\\\\nx_2-x_3&\\geq 0.10\n\\end{align*}\nSince inequality constraints must be expressed in less than or equal form,\nwe rewrite this last constraint as%\n\\begin{equation*}\n-x_2+x_3\\leq -0.10\n\\end{equation*}\nIn matrix form the three general linear constraints are%\n\\begin{equation*}\n\\left[\n\\begin{array}{rrrr}\n1 & 1 & 1 & 1 \\\\\n1 & -1 & 0 & 0 \\\\\n0 & -1 & 1 & 0\n\\end{array}\n\\right] \\ {\\bf x}\\leq \\left[\n\\begin{array}{r}\n1.00 \\\\\n0.25 \\\\\n-0.10\n\\end{array}\n\\right]\n\\end{equation*}\nand we will specify that the first two rows are to be treated as equality\nconstraints by setting MEQ $=2.$\n\nThe program DRDMLC01 illustrates the use of DMLC01 to solve this problem,\nand also uses DCKDER to check the consistency of the function and gradient\ncalculations. The output is shown in ODDMLC01. We have set ACC $= 0.0$,\nmeaning we want as much accuracy as the host computer can provide. Note that\nthe termination code is~2 indicating that the requested accuracy of~0.0\ncould not be attained. 
This is the usual termination code when the user has\nset ACC $= 0.$\n\nWe have set the lower bounds in XL() to~1.0D-6 rather than zero to avoid the\npossibility of the package requesting a function evaluation with some $x_j =\n0$, since attempting to compute LOG(0.0D0) would cause an error stop.\nAlternatively this issue could be handled with appropriate logic in the\nfunction evaluation subroutine and XL() could be set to zero.\n\nAs print options we have selected printing only at the end (IPLAST = 1), and\nthe more extensive number of printed items (IPMORE = 1). This is indicated\nby setting IPRINT $= 2 \\times \\text{IPMORE} + \\text{IPLAST} = 3.$ Setting\nMXEVAL = 0 means we are not setting any limit on the number of function\nevaluations.\n\nTo illustrate the contents of IW() and W() on return we have printed the\nresults from these arrays.\n\n\\subparagraph{Remarks}\n\n1. It is important to the success of DMLC01 that the initial guess be as\ngood as possible.\n\n2. DMLC01 only finds a local minimum. Problems may have more than one local\nminimum, so caution in accepting results is suggested. It may be useful to\nsolve the problem several times using significantly different starting\npoints.\n\n3. Minimization of a nonlinear function is inherently difficult in many\ncases, and sometimes may require some interaction with the user. The\nintermediate output available from the subroutine may be useful if one has\nquestions about the performance of the subroutine. It is not uncommon to\nmake mistakes in writing the code for computing partial derivatives. This\nmistake is likely to cause a return with IW($1) = 3$. The user can use the\nsubroutine DCKDER of Chapter~8.3 to check the consistency of code for\nfunction and gradient evaluation.\n\n4. It is reasonable to set ACC $=0.0$ the first time this subroutine is\napplied to a new mathematical model. If one intends to solve more problems\nof similar form and wishes to reduce the number of iterations, one could\nturn on the intermediate output and try to determine a nonzero value of ACC\nthat gives a result of adequate accuracy in fewer iterations. From Eq.\\,(3)\none sees that an appropriate nonzero value of ACC depends on the norms\nof ${\\bf g}$ and of the row vectors of $A$, and the constraints active at\nthe solution. If ${\\bf g}$ is computed by finite-differences the magnitude\nof $f$ must be considered also.\n\n\\subsection{Functional Description}\n\nThe algorithm provides for the bounds on the variables to be specified and\ntreated separately from the general linear constraints because this is\nconvenient for the user and allows for efficiencies in storage and execution\ntime. However, for analysis of the problem it is more convenient to treat\nthe bounds as additional general linear constraints. Thus the full set of\nconstraints can be expressed as%\n\\begin{equation*}\n\\left[\n\\begin{array}{r}\nA \\\\\n-I \\\\\nI\n\\end{array}\n\\right] {\\bf x\\leq }\\left[\n\\begin{array}{r}\n{\\bf b} \\\\ -XL \\\\\nXU\n\\end{array}\n\\right]\n\\end{equation*}\nwith the first MEQ rows to be treated as equality constraints. Here I\ndenotes the N $\\times $ N identity matrix, $XL$ denotes the N-vector of lower\nbounds, and $XU$ denotes the N-vector of upper bounds.\n\nLet $C$ denote the left-side matrix and {\\bf d} the right-side vector in the\nabove expression. Thus the constraints can be written simply as $C{\\bf x} %\n\\leq  {\\bf d}$, where $C$ and ${\\bf d}$ each have M + 2N rows. 
Let ${\\bf c}_i$\ndenote the column vector which is the transpose of row $i$ of $C$.\n\nIf $XL_j = XU_j$ for some $j$, then row $\\text{M}+j$ of $C{\\bf x}\n\\leq {\\bf d}$ will be treated as an equality constraint and row $\\text{M}+\n\\text{N}+j$ will be ignored (not changing the indexing of\nother rows).  A set {${\\cal C}$} of linearly independent equality\nconstraints is identified. This will generally consist of the\nconstraints due to equality of lower and upper bounds, plus the first\nMEQ rows. However, it may consist of a proper subset of these if this\nset is not linearly independent. An internal variable, MEQL, is set\nto the number of these constraints and their indices are stored in\nIACT(1:MEQL). After setting MEQL and the contents of IACT(1:MEQL)\nthese will remain unchanged throughout the rest of the algorithm.\n\nA vector ${\\bf x}$ is feasible if it satisfies $C{\\bf x} \\leq  {\\bf d}$, and\nin addition achieves equality for the rows in the set {${\\cal C}$}. With any\nfeasible ${\\bf x}$ the algorithm considers, it associates a list of indices of\nat most N of the rows of $C{\\bf x} \\leq  {\\bf d}$ that are satisfied with\nequality. These rows are called the active constraints (for ${\\bf x}$) and the\nindices are denoted by IACT$(k)$, $k = 1$, ..., NACT, where NACT depends on\n${\\bf x}$, but will satisfy MEQL $\\leq $ NACT $\\leq $ N.\n\nThe first-order Kuhn-Tucker necessary conditions for a solution ${\\bf x}$ are\nthat ${\\bf x}$ must be feasible and that the gradient, ${\\bf g}$, of $f$ at ${\\bf x}$\nmust be a linear combination of vectors ${\\bf c}_i$ in the active set for ${\\bf x}$,\nwith combining coefficients (called Lagrange coefficients) that are of\nunrestricted sign for constraints in set {${\\cal C}$}, and are nonpositive\nfor constraints not in set {${\\cal C}$}. Thus define the {\\em Kuhn-Tucker\nresidual vector}\n\\begin{equation}\n\\hspace{-15pt}\\label{O3}{\\bf \\rho} ={\\bf g}-\\sum_{k=1}^{NACT}\\lambda _k\n{\\bf c}_{IACT(k)},\\ \\ \\text{with }\\lambda _k\\leq 0\\text{ for } k>\\text{MEQL}\n\\end{equation}\nThen the Kuhn-Tucker condition on a vector ${\\bf x}$ can be stated as the\nrequirement that there exist $\\lambda _k$'s such that ${\\bf \\rho }$ is a\nzero vector. The convergence test used in the package is\n\\begin{equation}\n\\label{O4}\\Vert {\\bf \\rho} \\Vert \\leq \\text{ACC}\n\\end{equation}\nwhere $\\Vert \\cdot \\Vert $ denotes the Euclidean vector norm.\n\nInternally the package stores ${\\bf g}$ in G(), ${\\bf \\rho }$ in RESKT(), and\nthe $\\lambda_k$'s in PAR(). These can be printed using the IPRINT argument\nand are returned in the W() array.\n\nThe package also uses internal tolerance parameters, RELACC and TOL. RELACC\nis set to about 100~times the machine precision. TOL is initially set to\n0.01, and is reduced as the algorithm progresses until it reaches the value\nRELACC. TOL is used as a relative tolerance in checking the satisfaction of\nconstraints. The technique of starting with TOL fairly large and later\nreducing it is a unique design feature of this algorithm. It has the effect\nof avoiding many small changes to ${\\bf x}$ in the early stages of the\nalgorithm.\n\nThe algorithm is described in detail by the author in\n\\cite{Powell:1989:TOL} and \\cite{Powell:ATA:1988}.  
He characterizes the\nalgorithm as an active set method as described in \\cite{Gill:1987:PO},\nusing BFGS updating of second derivative approximations as described in\n\\cite{Powell:UCD:1987} and the matrix factorizations of\n\\cite{Goldfarb:1983:ANS}.  Examples from \\cite{Hock:1981:TEN} were used by\nthe author in testing the package.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nOn all returns, successful or not, the reason for the return is indicated by\nINFO, stored in IW(1). See the specification of IW(1) in Section B for the\ninterpretation of these values. On returns with IW(1) from~3 to~8 an error\nmessage will be printed using the error message printing subroutines of\nChapter~19.2 with an error level of 0.\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDMLC01 & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\n AMACH, DERV1, DMLC, ERFIN, ERMOR, ERMSG, IERM1, IERV1\\rule[-5pt]{0pt}{8pt}}\\\\\nSMLC01 & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\n AMACH, ERFIN, ERMOR, ERMSG, IERM1, IERV1, SERV1, SMLC}\\\\\n\\end{tabular}\n\nConverted from Powell's code cited above by C. L. Lawson, April, 1990 (with\nminor contribution from F. T. Krogh).\n\n\\begcode\n\n\\medskip\\\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRDMLC01}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{dmlc01}}\n\\newpage\n\\vspace{30pt}\\centerline{\\bf \\large ODDMLC01}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{dmlc01}}\n\\end{document}\n", "meta": {"hexsha": "67b571efd831ed68de3fbc82c45cb75f330216c5", "size": 21183, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch09-02.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch09-02.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch09-02.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 43.2306122449, "max_line_length": 98, "alphanum_fraction": 0.7322853231, "num_tokens": 6178, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5874968471703446}}
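(An editorial addition, not part of the MATH77 documentation: readers without a Fortran toolchain can sanity-check the example problem with SciPy's SLSQP. The sketch below mirrors DRDMLC01's setup, with the inequality row rewritten in SciPy's $g(\mathbf{x}) \geq 0$ convention and the same lower bounds of $10^{-6}$.)

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

f    = lambda x: float(np.sum(x * np.log(x)))  # sum x_j ln x_j
grad = lambda x: np.log(x) + 1.0               # its gradient

cons = [
    {"type": "eq",   "fun": lambda x: x.sum() - 1.0},       # row 1
    {"type": "eq",   "fun": lambda x: x[0] - x[1] - 0.25},  # row 2
    # row 3 is -x2 + x3 <= -0.10, i.e. x2 - x3 - 0.10 >= 0:
    {"type": "ineq", "fun": lambda x: x[1] - x[2] - 0.10},
]
res = minimize(f, np.full(4, 0.25), jac=grad,
               bounds=[(1.0e-6, 1.0)] * 4, constraints=cons)
print(res.x, res.fun)
\end{verbatim}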
{"text": "\\documentclass[t]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, tikz, bm, tkz-euclide,pgfplots}\n\\pgfplotsset{compat = 1.16}\n\\usetkzobj{all}\n\n\\title{Complex Numbers}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Objectives}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame} \n\\maketitle\n\\end{frame}\n\n\\section{Write square roots of negative numbers as imaginary numbers}\n\n\\begin{frame}{Background}\nImaginary numbers arose from trying to solve equations such as \n\\[x^2 = -1\\] \\pause\n\nIf we take the square root of both sides, we get \n\\[ x = \\sqrt{-1} \\]\t\\pause\n\nWe define the imaginary unit $i$ to be that value:\n\\[ i = \\sqrt{-1} \\]\n\\end{frame}\n\n\\begin{frame}{Writing Square Roots of Negative Numbers}\nWhen writing square roots of negative values as imaginary numbers, we can factor out \\[\\sqrt{-1}\\] from the expression \\newline\\\\\n\n{\\color{blue}\\textbf{as long as what remains is the square root of a positive number.}}\n\\begin{align*}\n\\onslide<2->{\\sqrt{-25} &= \\sqrt{-1} \\cdot \\sqrt{25} }\\\\[6pt] \n\\onslide<3->{&= i \\cdot 5} \\\\[6pt]\t\\pause\n\\onslide<4->{&= 5i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nWrite each as an imaginary number.\t\\newline\\\\\n(a) \\quad $\\sqrt{-36}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{-36} &= \\sqrt{-1} \\cdot \\sqrt{36}} \\\\[6pt]\n\\onslide<3->{&= i \\cdot 6} \\\\[6pt]\n\\onslide<4->{&= 6i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\n(b) \\quad $\\sqrt{-100}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{-100} &= \\sqrt{-1} \\cdot \\sqrt{100}} \\\\[6pt]\n\\onslide<3->{&= i \\cdot 10} \\\\[6pt]\n\\onslide<4->{&= 10i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\n(c) \\quad $\\sqrt{-8}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{-8} &= \\sqrt{-1} \\cdot \\sqrt{8}} \\\\[6pt]\n\\onslide<3->{&= i \\cdot 2\\sqrt{2}} \\\\[6pt]\n\\onslide<4->{&= 2i\\sqrt{2}}\n\\end{align*}\n\\end{frame}\n\n\\section{Add and subtract complex numbers}\n\n\\begin{frame}{Complex Numbers}\nA \\alert{complex number} is a number written in the form\n\\[ a + bi \\]\nwhere $a$ (the \\alert{real part}) and $b$ (the \\alert{imaginary part}) are real numbers.\n\\end{frame}\n\n\\begin{frame}{Adding and Subtracting Complex Numbers}\nComplex numbers can be added and subtracted much like combining like terms in Algebra 1.\t\\newline\\\\\t\\pause\n\nThe real parts are added (or subtracted) together, as are the imaginary parts.\t\\newline\\\\\t\\pause\n\nAnswers are then typically written in $a + bi$ form.\n\\end{frame}\n\n\\begin{frame}{Example 2}\nSimplify each. 
\\newline\\\\ \n(a) \\quad $(3 + 2i) + (-1 + 8i)$\n\\begin{align*}\n\\onslide<2->{(3+2i)+(-1+8i)&= (3+(-1))+(2i+8i)} \\\\[6pt]\n\\onslide<3->{&= 2 + 10i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n(b) \\quad $(-5-i) + (2+4i)$\n\\begin{align*}\n\\onslide<2->{(-5-i) + (2+4i)&= (-5+2)+(-i+4i)} \\\\[6pt]\n\\onslide<3->{&= -3+3i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n(c) \\quad $(7-9i) - (3+5i)$\n\\begin{align*}\n\\onslide<2->{(7-9i) - (3+5i)&= 7-9i-3-5i} \\\\[6pt]\n\\onslide<3->{&= 4-14i}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n(d) \\quad $(-1+2i) - (-8+10i)$\n\\begin{align*}\n\\onslide<2->{(-1+2i) - (-8+10i)&= -1+2i+8-10i} \\\\[6pt]\n\\onslide<3->{&= 7-8i}\n\\end{align*}\n\\end{frame}\n\n\\section{Multiply complex numbers}\n\n\\begin{frame}{Multiplying Complex Numbers}\nComplex numbers can be multiplied in the same manner that binomials are multiplied in Algebra 1, such as $(x+3)(x-8)$.\t\\newline\\\\\t\\pause\n\nHowever, since $i = \\sqrt{-1}$, if we \\alert{square both sides} we get\n\\[ i^2 = -1 \\]\t\\pause\n\nSo when multiplying complex numbers, you will substitute a $-1$ whenever you see an $i^2$.\n\\end{frame}\n\n\\begin{frame}{Example 3}\nMultiply each.\t\\newline\\\\\n(a) \\quad $(2+3i)(5+6i)$\t\\newline\\\\\n\\onslide<2->{\n\\begin{center}\n\\begin{tikzpicture}\n\\draw[black] (0,0) grid (2,2);\n\\node at (0,1.5) [left] {$2$};\n\\node at (0,0.5) [left] {$3i$};\n\\node at (0.5,2) [above] {$5$};\n\\node at (1.5,2) [above] {$6i$};\n\\onslide<3->{\\node at (0.5,1.5) {$10$};}\n\\onslide<4->{\\node at (1.5,1.5) {$12i$};}\n\\onslide<5->{\\node at (0.5,0.5) {$15i$};}\n\\onslide<6->{\\node at (1.5,0.5) {$18i^2$};}\n\\onslide<7->{\\node[fill=white, scale=2.5] at (1.5,0.5) {};}\n\\onslide<8->{\\node at (1.5,0.5) {$-18$};}\n\\end{tikzpicture}\n\\end{center}\n}\n\\onslide<9->{\\[-8 + 27i\\]}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(b) \\quad $(-1+i)(4-3i)$\t\\newline\\\\\n\\onslide<2->{\n\\begin{center}\n\\begin{tikzpicture}\n\\draw[black] (0,0) grid (2,2);\n\\node at (0,1.5) [left] {$-1$};\n\\node at (0,0.5) [left] {$1i$};\n\\node at (0.5,2) [above] {$4$};\n\\node at (1.5,2) [above] {$-3i$};\n\\onslide<3->{\\node at (0.5,1.5) {$-4$};}\n\\onslide<4->{\\node at (1.5,1.5) {$3i$};}\n\\onslide<5->{\\node at (0.5,0.5) {$4i$};}\n\\onslide<6->{\\node at (1.5,0.5) {$-3i^2$};}\n\\onslide<7->{\\node[fill=white, scale=3] at (1.5,0.5) {};}\n\\onslide<8->{\\node at (1.5,0.5) {$3$};}\n\\end{tikzpicture}\n\\end{center}\n}\n\\onslide<9->{\\[-1 + 7i\\]}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(c) \\quad $(7-7i)(7+7i)$\t\\newline\\\\\n\\onslide<2->{\n\\begin{center}\n\\begin{tikzpicture}\n\\draw[black] (0,0) grid (2,2);\n\\node at (0,1.5) [left] {$7$};\n\\node at (0,0.5) [left] {$-7i$};\n\\node at (0.5,2) [above] {$7$};\n\\node at (1.5,2) [above] {$7i$};\n\\onslide<3->{\\node at (0.5,1.5) {$49$};}\n\\onslide<4->{\\node at (1.5,1.5) {$49i$};}\n\\onslide<5->{\\node at (0.5,0.5) {$-49i$};}\n\\onslide<6->{\\node at (1.5,0.5) {$-49i^2$};}\n\\onslide<7->{\\node[fill=white, scale=3.55] at (1.5,0.5) {};}\n\\onslide<8->{\\node at (1.5,0.5) {$49$};}\n\\end{tikzpicture}\n\\end{center}\n}\n\\onslide<9->{\\[98\\]}\n\\end{frame}\n\n\\section{Divide complex numbers}\n\n\\begin{frame}{Dividing Complex Numbers}\nDividing complex numbers presents a bit of a challenge. 
\\newline\\\\\t\\pause\n\nThe reason is that the denominator will have a square root, which is a big no-no in math:\t\\pause\n\n\\[\\frac{3-i}{2+i} = \\frac{3-\\sqrt{-1}}{2+\\sqrt{-1}} \\]\t\\pause\n\nTo remedy this, we need to find the \\alert{conjugate} of the {\\color{blue}\\textbf{denominator}}.\n\\end{frame}\n\n\\begin{frame}{Complex Conjugates}\nThe \\alert{conjugate} of a complex number \n\\[ a + bi \\] \nis\n\\[a - bi \\] \nand vice versa\n\\end{frame}\n\n\\begin{frame}{Conjugate Examples}\n\\begin{center}\n\\begin{tabular}{cc}\n\\textbf{Number} & \\textbf{Conjugate} \\\\ \\hline\n\\onslide<2->{$7+2i$} & \\onslide<3->{$7-2i$} \\\\[6pt]\n\\onslide<4->{$-3+i$} & \\onslide<5->{$-3-i$} \\\\[6pt]\n\\onslide<6->{$5-4i$} & \\onslide<7->{$5+4i$} \\\\[6pt]\n\\end{tabular}\t\\newline\\\\\n\\end{center}\n\\onslide<8->{When you multiply complex conjugates, you will always get a real number, like in Example 3c.}\t\\newline\\\\\n\\onslide<9->{So, to divide complex numbers, multiply both numerator and denominator by the {\\color{blue}\\textbf{conjugate of the denominator}}.}\n\\end{frame}\n\n\\begin{frame}{Example 4}\nDivide $\\dfrac{3-i}{2+i}$. Write your answer in $a+bi$ form.\t\\newline\\\\\n\\onslide<2->{The conjugate of $2+i$ is $2-i$}\n\\begin{align*}\n\\onslide<3->{\\frac{3-i}{2+i}}\\onslide<4->{\\left(\\frac{2-i}{2-i}\\right)&} \\\\[10pt]\n\\onslide<5->{&=\\frac{5-5i}{5}} \\\\[10pt]\n\\onslide<6->{&=1-i}\n\\end{align*}\n\\end{frame}\n\n\\end{document}", "meta": {"hexsha": "b14725bb90e5ede0eee39af9b7327a32bdbfaf09", "size": 6973, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Complex_Numbers(BEAMER).tex", "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Complex_Numbers(BEAMER).tex", "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Complex_Numbers(BEAMER).tex", "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "avg_line_length": 27.892, "max_line_length": 144, "alphanum_fraction": 0.6238347913, "num_tokens": 2894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303087996143, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5874968398352314}}
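(An editorial addition, not in the original deck: one more worked division, following the same conjugate recipe as Example 4.)
\[
\frac{4+2i}{1-i}\left(\frac{1+i}{1+i}\right)
= \frac{4+4i+2i+2i^{2}}{1-i^{2}}
= \frac{2+6i}{2}
= 1+3i
\]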
{"text": "%\n% 160\n%\n\\chapter{Fourier Series and Trigonometrical Series}\n\\Section{9}{1}{Definition of Fourier series}\\footnote{Throughout\n  this chapter (except in \\hardsubsectionref{9}{1}{1}) it is supposed that all\n  the numbers involved are real.}.\nSeries of the type\n\\begin{align*}\n  \\half a_{0}\n  + (a_{1} + \\cos x + b_{1} \\sin x)\n  + (a_{2} + \\cos 2x + b_{2} \\sin 2x)\n  + \\cdots\n  \\\\\n  &\n  \\hfill\n  \\half a_{0}\n  +\n  \\sum_{n=1}^{\\infty} ( a_{n} \\cos n x + b_{n} \\sin n x),\n\\end{align*}\nwhere $a_{n}$, $b_{n}$ are independent of $x$, are of great importance in many\ninvestigations. They are called \\emph{trigonometrical series}.\n\nIf there is a function $f(t)$ such that\n$\\int_{-\\pi}^{\\pi} f(t) \\dmeasure t$ exists as a Riemann\nintegral or as an improper integral which converges absolutely, and such that\n$$\n\\pi a_{n} = \\int_{-\\pi}^{\\pi} f(t) \\cos nt \\dmeasure t,\n\\quad\n\\pi b_{n} = \\int_{-\\pi}^{\\pi} f(t) \\sin nt \\dmeasure t,\n$$\nthen the trigonometrical series is called a \\emph{Fourier series}.\n\n%\\begin{smalltext}\nTrigonometrical series first appeared in analysis in connexion with\nthe investigations of Daniel Bernoulli on vibrating\nstrings\\index{Strings, vibrations of}\\index{Vibrations of!strings};\nd'Alembert had previously solved the equation of\nmotion\n$ \\ddot{y} = a^{2} \\frac{\\dd^{2} y}{\\dd x^{2}}$\nin the form\n$y = \\half \\thebrace{f(x+at) + f(x-at)}$, where $y=f(x)$ is\nthe initial shape of the string starting from rest;\nand Bernoulli shewed that a formal solution is\n$$\ny\n=\n\\sum_{n=1}^{\\infty}\nb_{n}\n\\sin \\frac{n \\pi x}{l}\n\\cos \\frac{n \\pi a t}{l},\n$$\nthe fixed ends of the string being $(0,0)$ and $(l,0)$; and he asserted\nthat this was the most general solution of the problem. This appeared\nto d'Alembert and Euler to be impossible, since such a series, having\nperiod $2l$, could not possibly represent such a function\nas\\footnote{This function gives a simple form to the initial shape of the string.}\n$c x (l-x)$ when $t = 0$.\nA controversy arose between these mathematicians, of which\nan account is given in Hobson's \\emph{Functions of a Real Variable}.\n\nFourier, in his \\emph{TODO Theorie de la Chaleur}, investigated a number of\ntrigonometrical series and shewed that, in a large number of\nparticular cases, a Fourier series \\emph{actually converged to the sum $f(x)$}.\nPoisson attempted a general proof of this theorem. TODO Journal de VEcole\npoly technique, xil. (1823), pp. 404-509. Two proofs were given by\nCauchy, TODO Men. de VAcad. R. des Sci. vi. (1823, publi-hed 1826), pp.\n603-612 Oeuvres, (1), n. pp. 12-19) and Exercices de Math. 11. (1827),\npp. 341-376 (Oeuvres, (2), vil. pp. 393-430); these proofs, which are\nbased on the theory of contour integration, are concerned with rather\nparticular classes of functions and one is invalid. The second proof\nhas been investigated by Harnack, TODO Math. Ann. xxxii. (1888), pp.\n175-202.\n\n%\\end{smalltext}\n%\n% 161\n%\n\nIn 1829, Dirichlet gave the first rigorous proof\\footnote{TODO Journal J iir Math. iv. (1829), pp. 157-169.}\nthat, for a general class of functions, the Fourier series, defined as above, does\nconverge to the sum $f(x)$. A modification of this proof was given later\nby Bonnet\\footnote{TODO Meiuoires des Savants etramjers of the Belgian Academy, xxiii.\n(1848-1850). 
Bonnet employs the second mean value theorem directly,\nwhile Dirichlet's original proof makes use of arguments precisely\nsimilar to those by which that theorem is proved. See \\hardsubsectionref{9}{4}{3}.}.\n\nThe result of Dirichlet\\index{Statement of Fourier's theorem, \\Dirichlet's}\nis that\\footnote{The conditions postulated for $f(t)$\n  are known as \\emph{Dirichlet's conditions}; as will be seen in TODO \\u00a7\\u00a7 9.2, 9.42, they are unnecessarily\n  stringent.}\nif $f(t)$ is defined and bounded in the\nrange $(-\\pi, \\pi)$ and if $f(t)$ has only a finite number of maxima and\nminima and a finite number of discontinuities in this range and,\nfurther, if $f(t)$ is defined by the equation\n$$\nf(t + 2 \\pi) = f(t)\n$$\noutside the range $(-\\pi, \\pi)$, then, provided that\n$$\n\\pi a_{n}\n=\n\\int_{-\\pi}^{\\pi} f(t) \\cos nt \\dmeasure t,\n\\quad\n\\pi b_{n}\n=\n\\int_{-\\pi}^{\\pi} f(t) \\sin nt \\dmeasure t,\n$$\nthe series\n$$\n\\half a_{0}\n+\n\\sum_{n=1}^{\\infty} (a_{n} \\cos nx + b_{n} \\sin nx)\n$$\nconverges to the sum $\\half \\thebrace{ f(x+0) + f(x-0) }$.\n\nLater, Riemann and Cantor developed the theory of trigonometrical\nseries generally, while still more recently Hurwitz, \\Fejer\\ and others\nhave investigated properties of Fourier series when the series does\nnot necessarily converge. Thus \\Fejer\\ has proved the remarkable\ntheorem that a Fourier series (even if not convergent) is `summable\n$(C1)$' at all points at which $f(x \\pm 0)$ exist, and its sum $(C1)$ is\n$\\half \\thebrace{ f(x+0) + f(x-0) }$,\nprovided that $\\int_{-\\pi}^{\\pi} f(t) \\dmeasure t$ is an absolutely convergent integral.\nOne of the investigations of the convergence of Fourier series which we shall give later\n(\\hardsubsectionref{9}{4}{2}) is based on this result.\n\nFor a fuller account of investigations subsequent to Riemann, the\nreader is referred to Hobson's \\emph{Functions of a Real Variable}, and to\nTODO de la Vall\\u00e9e Poussin's \\emph{Cours d'Analyse Infinit\\u00e9simale}.\n\n\\Subsection{9}{1}{1}{Nature of the region within which a\n  trigonometrical series converges.}\n\nConsider the series\n$$\n\\half a_{0}\n+\n\\sum_{n=1}^{\\infty}\n\\theparen{\n  a_{n} \\cos nz\n  +\n  b_{n} \\sin nz\n},\n$$\nwhere $z$ may be complex. 
If we write\n$e^{iz} = \\zeta$, the series becomes\n$$\n\\half a_{0}\n+\n\\sum_{n=1}^{\\infty}\n\\thebrace{\n  \\half (a_{n} - i b_{n}) \\zeta^{n}\n  +\n  \\half (a_{n} + i b_{n}) \\zeta^{-n}\n} \\, .\n$$\nThis Laurent series will converge, if it converges at all, in a region\nin which $a \\leq \\absval{\\zeta} \\leq b$, where $a,b$ are positive\nconstants.\n\nBut, if $z = x + iy$, $\\absval{\\zeta} = e^{-y}$, and so we get, as the\nregion of convergence of the trigonometrical series, the strip in the\n$z$ plane defined by the inequality\n$$\n\\log a \\leq -y \\leq \\log b.\n$$\n\nThe case which is of the greatest importance in practice is that in which\n$a = b = 1$, and the strip consists of a single line, namely the real axis.\n\nTODO Example 1\nLet\n$$\nf(z)\n=\n\\sin z\n- \\half \\sin 2z\n+ \\frac{1}{3} \\sin 3z\n- \\frac{1}{4} \\sin 4z\n+ \\ldots,\n$$\nwhere $z = x + iy$.\n%\n% 162\n%\n\nWriting this in the form\n$$\nf(z)\n=\n-\n\\half\ni\n\\theparen{\n  e^{iz}\n  - \\half e^{2iz}\n  + \\frac{1}{3} e^{3iz}\n  -\n  \\cdots\n}\n+\n\\half\ni\n\\theparen{\n  e^{-iz}\n  - \\half e^{-2iz}\n  + \\frac{1}{3} e^{-3iz}\n  -\n  \\cdots\n}\n$$\nwe notice that the first series converges\\footnote{The series\n  \\emph{do converge} if $y=0$, see \\hardsubsectionref{2}{3}{1} example TODO 2.}\nonly if\n$y \\geq 0$, and the second only if $y \\leq 0$.\n\nWriting $x$ in place of $z$ ($x$ being real), we see that by Abel's\ntheorem (\\hardsubsectionref{3}{7}{1}),\n\\begin{align*}\n  f(x)\n  &=\n  \\lim_{r \\rightarrow 1} \\theparen{\n    r \\sin x\n    - \\half r^{2} \\sin 2x\n    + \\frac{1}{3} r^{3} \\sin 3x\n    - \\cdots\n  }\n  \\\\\n  &=\n  \\lim_{r \\rightarrow 1} \\thebrace{\n    - \\half i \\theparen{\n      r e^{ix}\n      - \\half r^{2} e^{2ix}\n      + \\frac{1}{3} r^{3} e^{3ix}\n      - \\cdots\n    }\n    +\n    \\half i \\theparen{\n      r e^{-ix}\n      - \\half r^{2} e^{-2ix}\n      + \\frac{1}{3} r^{3} e^{-3ix}\n      - \\cdots\n    }\n  }\n\\end{align*}\n\nThis is the limit of one of the values of\n$$\n- \\half i \\log (1 + r e^{ix})\n+ \\half i \\log (1 + r e^{-ix}),\n$$\nand as $r \\rightarrow 1$ (if $-\\pi < x < \\pi$), this tends to\n$\\half x + k\\pi$, where $k$ is some integer.\n\nNow $\\sum_{n=1}^{\\infty} \\frac{(-)^{n-1} \\sin nx}{n}$ converges uniformly\n(\\hardsubsectionref{3}{3}{5} example TODO 1) and is therefore continuous in\nthe range $-\\pi + \\delta \\leq x \\leq \\pi - \\delta$, where\n$\\delta$ is any positive constant.\n\nSince $\\half x$ is continuous, $k$ has the same value wherever $x$ lies in the\nrange; and putting $x=0$, we see that $k=0$.\n\n\\emph{Therefore, when $-\\pi < x < \\pi$,\n  $$\n  f(x) = \\half x.\n  $$\n}\n\nBut, when $\\pi < x < 3\\pi$,\n$$\nf(x)\n=\nf(x - 2\\pi)\n=\n\\half (x - 2\\pi)\n=\n\\half x - \\pi,\n$$\nand generally, if $(2n - 1) \\pi < x < (2n + 1) \\pi$,\n$$\nf(x) = \\half x - n \\pi.\n$$\n\nWe have thus arrived at an example in which $f(x)$ is\nnot represented by a single analytical expression.\n\nIt must be observed that this phenomenon can only occur when the\nstrip in which the Fourier series converges is a single line.\nFor if the strip is not of zero breadth, the associated Laurent\nseries converges in an annulus of non-zero breadth and represents an\nanalytic function of $\\zeta$ in that annulus; and, since\n$\\zeta$ is an analytic function of $z$, the Fourier series\nrepresents an analytic function of $z$; such a series is given by\n$$\nr \\sin x\n- \\half r^{2} \\sin 2x\n+ \\frac{1}{3} r^{3} \\sin 3x\n- \\cdots,\n$$\nwhere $0 < r < 
1$; its sum is\n$\\arctan \\frac{r \\sin x}{1 + r \\cos x}$, the $\\arctan$ always\nrepresenting an angle between $\\pm \\half \\pi$.\n\nExample TODO\nWhen $-\\pi \\leq x \\leq \\pi$,\n$$\n\\sum_{n=1}^{\\infty}\n\\frac{(-)^{n-1} \\cos nx}{n^{2}}\n=\n\\frac{1}{12} \\pi^{2}\n-\n\\frac{1}{4} x^{2}.\n$$\n\nThe series converges only when $x$ is real; by\n\\hardsubsectionref{3}{3}{4} the convergence is then\nabsolute and uniform.\n\nSince\n$$\n\\half x\n=\n\\sin x\n- \\half \\sin 2x\n+ \\frac{1}{3} \\sin 3x\n- \\cdots\n\\quad\n(-\\pi + \\delta \\leq x \\leq \\pi - \\delta,\n\\delta > 0),\n$$\nand this series converges uniformly, we may integrate\nterm-by-term from $0$ to $x$ (\\hardsectionref{4}{7}),\nand consequently\n$$\n\\frac{1}{4} x^{2}\n=\n\\sum_{n=1}^{\\infty}\n\\frac{(-)^{n-1} (1 - \\cos nx)}{n^{2}}\n\\quad\n(-\\pi + \\delta \\leq x \\leq \\pi - \\delta).\n$$\n%\n% 163\n%\n\nThat is to say, when $-\\pi + \\delta \\leq x \\leq \\pi - \\delta$,\n$$\nC - \\frac{1}{4} x^{2}\n=\n\\sum_{n=1}^{\\infty} \\frac{(-)^{n-1} \\cos nx}{n^{2}},\n$$\nwhere $C$ is a constant, at present undetermined.\n\nBut since the series on the right converges uniformly throughout the\nrange $-\\pi \\leq x \\leq \\pi$, its sum is a continuous function of $x$ in this\nextended range; and so, proceeding to the limit when\n$x \\rightarrow \\pm \\pi$, we see\nthat the last equation is still true when $x = \\pm \\pi$.\n\nTo determine $C$, integrate each side of the equation (\\hardsectionref{4}{7}) between\nthe limits $-\\pi, \\pi$; and we get\n$$\n2 \\pi C - \\frac{1}{6} \\pi^{3} = 0.\n$$\n\nConsequently\n$$\n\\frac{1}{12} \\pi^{2} - \\frac{1}{4} x^{2}\n=\n\\sum_{n=1}^{\\infty}\n\\frac{ (-)^{n-1} \\cos nx }{ n^{2} }\n\\quad\n(-\\pi \\leq x \\leq \\pi).\n$$\n\nExample TODO.\nBy writing $\\pi - 2x$ for $x$ in example TODO 2, shew that\n$$\n\\sum_{n=1}^{\\infty} \\frac{\\sin^{2} nx}{n^{2}}\n=\n\\begin{cases}\n  \\half x (\\pi - x)                         & 0 \\leq x \\leq \\pi,   \\\\\n  \\half \\thebrace{ \\pi \\absval{x} - x^{2}} & -\\pi \\leq x \\leq \\pi.\n\\end{cases}\n$$\n\n\\Subsection{9}{1}{2}{Values of the coefficients in terms of the sum of a\ntrigonometrical series.}\nLet the trigonometrical series\n$\n\\half c_{0}\n+\n\\sum_{n=1}^{\\infty} (c_{n} \\cos nx + d_{n} \\sin nx)\n$\nbe uniformly convergent in the range $(-\\pi, \\pi)$ and let its sum be $f(x)$.\nUsing the obvious results\n\\begin{align*}\n  \\int_{-\\pi}^{\\pi} \\cos mx \\cos nx \\dmeasure x\n  =&\n  \\begin{cases}\n    0 & m \\neq n \\\\\n    \\pi & m = n \\neq 0,\n  \\end{cases}\n  \\\\\n  \\int_{-\\pi}^{\\pi} \\sin mx \\sin nx \\dmeasure x\n  =&\n  \\begin{cases}\n    0 & m \\neq n \\\\\n    \\pi & m = n \\neq 0,\n  \\end{cases}\n  \\\\\n  \\int_{-\\pi}^{\\pi} \\cos mx \\sin nx \\dmeasure x\n  =&\\; 0,\n  \\\\\n  \\int_{-\\pi}^{\\pi} \\dmeasure x\n  =& 2\\pi,\n\\end{align*}\nwe find, on multiplying the equation\n$\n\\half c_{0}\n+\n\\sum_{n=1}^{\\infty}\n(c_{n} \\cos nx + d_{n} \\sin nx)\n= f(x)\n$\nby\\footnote{Multiplying by these factors does not destroy the uniformity of the\n  convergence.}\n$\\cos nx$ or by $\\sin nx$ and integrating\nterm-by-term\\footnote{These were given by TODO Euler (with limits $0$ and $2\\pi$),\n  Nova Acta Acad. Petrop. xi. (1793).}\n(\\hardsectionref{4}{7}),\n$$\n\\pi c_{n} = \\int_{-\\pi}^{\\pi} f(x) \\cos nx \\dmeasure x,\n\\quad\n\\pi d_{n} = \\int_{-\\pi}^{\\pi} f(x) \\sin nx \\dmeasure x.\n$$\n\nTODO Corollary. A trigonometrical series uniformly convergent in the range\n$(-\\pi, \\pi)$ is a Fourier series.\n\nTODO Note. Lebesgue has given a proof (TODO S\\u00e9ries trigonom\\u00e9triques, p. 
124) of a\ntheorem communicated to him by Fatou that the trigonometrical series\n$\\sum_{n=2}^{\\infty} \\sin nx / \\log n$, which converges for all real values of $x$\n(\\hardsubsectionref{2}{3}{1} example TODO), is \\emph{not} a Fourier series.\n\n\\Section{9}{2}{On Dirichlet's conditions and Fourier's theorem.\n\\index{Statement of Fourier's theorem, \\Dirichlet's}}\nA theorem, of the type described in \\hardsectionref{9}{1}, concerning the\nexpansibility of a function of a real variable into a trigonometrical\nseries is usually described\n%\n% 164\n%\nas \\emph{Fourier's theorem}. On account of the length and difficulty of a\nformal proof of the theorem (even when the function to be expanded is\nsubjected to unnecessarily stringent conditions), we defer the proof\nuntil TODO \\u00a7\\u00a7 \\hardsubsectionref{9}{4}{2}, \\hardsubsectionref{9}{4}{3}.\nIt is, however, convenient to state here certain\n\\emph{sufficient} conditions under which a function can be expanded into a\ntrigonometrical series.\n\n\\emph{Let $f(t)$ be defined arbitrarily when $-\\pi \\leq t \\leq \\pi$\n  and defined\\footnote{This definition frequently results in $f(t)$ not being\n    expressible by a single analytical expression for all real values of $t$.\n    Cf.~\\hardsubsectionref{9}{1}{1} example TODO:1.}\n  for all other real values of $t$ by means of the equation\n  $$\n  f(t + 2\\pi) = f(t),\n  $$\n  so that $f(t)$ is a periodic function with period $2\\pi$.\n}\n\\emph{\n  Let $f(t)$ be such that\n  $\\int_{-\\pi}^{\\pi} f(t) \\dmeasure t$ exists; and if this is an improper\n  integral, let it be absolutely convergent.\n}\n\n\\emph{\n  Let $a_{n}, b_{n}$ be defined by the\n  equations\\footnote{The numbers $a_{n}, b_{n}$ are called\n    \\emph{the Fourier constants\\index{Fourier constants}} of\n    $f(t)$, and the symbols $a_{n}, b_{n}$ will be used in this sense throughout\n    \\u00a7\\u00a7 TODO \\hardsectionref{9}{2}--\\hardsectionref{9}{5}.\n    It may be shewn that the convergence and absolute convergence of the\n    integrals defining the Fourier constants are consequences of the\n    convergence and absolute convergence of\n    $\\int_{-\\pi}^{\\pi} f(t) \\dmeasure t$.\n    Cf. \\u00a7\\u00a7 TODO \\hardsubsectionref{2}{3}{2}, \\hardsectionref{4}{5}.}\n  $$\n  \\pi a_{n} = \\int_{-\\pi}^{\\pi} f(t) \\cos nt \\dmeasure t,\n  \\quad\n  \\pi b_{n} = \\int_{-\\pi}^{\\pi} f(t) \\sin nt \\dmeasure t\n  \\quad\n  (n=0,1,2,\\ldots).\n  $$\n}\n\n\\emph{\n  Then, if $x$ be an interior point of any interval $(a, b)$ in which\n  $f(t)$ has limited total fluctuation, the series\n  $$\n  \\half a_{0}\n  +\n  \\sum_{n=1}^{\\infty} (a_{n} \\cos nx + b_{n} \\sin nx)\n  $$\n  is convergent, and its sum\\footnote{The limits $f(x \\pm 0)$ exist,\n    by \\hardsubsectionref{3}{6}{4} example TODO:3.}\n  is $\\half \\thebrace{ f(x+0) + f(x-0) }$.\n  If $f(t)$ is continuous at $t=x$, this sum reduces to $f(x)$.\n}\n\nThis theorem will be assumed in\nTODO \\hardsubsectionref{9}{2}{1}--\\hardsubsectionref{9}{3}{2};\nthese sections deal with theorems concerning Fourier series which\nare of some importance\nin practical applications. 
It should be stated here that every\nfunction which Applied Mathematicians need to expand into Fourier\nseries satisfies the conditions just imposed on $f(t)$, so that the\nanalysis given later in this chapter establishes the validity of all\nthe expansions into Fourier series which are required in physical\ninvestigations.\n\nThe reader will observe that in the theorem just stated,\n$f(t)$ is subject to less stringent conditions than those contemplated by\nDirichlet, and this decrease of stringency is of\nconsiderable practical importance. Thus, so simple a series as\n$\\sum_{n=1}^{\\infty} (-)^{n-1} (\\cos nx) / n$\nis the\nexpansion of the function\\footnote{Cf. example TODO:6 at the end of the chapter (p. TODO:190).}\n$\\log \\absval{2 \\cos \\half x}$; and this function\ndoes not satisfy Dirichlet's condition of boundedness at $\\pm \\pi$.\n\nIt is convenient to describe the series\n$\\half a_{0} + \\sum_{n=1}^{\\infty} (a_{n} \\cos nx + b_{n} \\sin nx)$\nas \\emph{the Fourier series associated with $f(t)$}. This description must,\nhowever, be\n%\n% 165\n%\ntaken as implying nothing concerning the convergence of the series in\nquestion.\n\n\\Subsection{9}{2}{1}{The representation of a function by Fourier series for ranges\nother than $(-\\pi,\\pi)$.}\n\nConsider a function $f(x)$ with an (absolutely) convergent integral,\nand with limited total fluctuation in the range $a \\leq x \\leq b$.\n\nWrite\n$x = \\half (a + b) - \\half (a-b) \\pi^{-1} x',\n\\quad\nf(x) = F(x')$.\n\nThen it is known (\\hardsectionref{9}{2}) that\n$$\n\\half [F(x'+0) + F(x'-0)]\n=\n\\half a_{0} + \\sum_{n=1}^{\\infty} (a_{n} \\cos nx' + b_{n} \\sin nx'),\n$$\nand so\n\\begin{align*}\n  & \\half \\thebrace{ f(x+0) + f(x-0)}\n  \\\\\n  &\n  \\hfill\n  =\n  \\half a_{0}\n  +\n  \\sum_{n=1}^{\\infty}\n  \\thebrace{\n    a_{n} \\cos \\frac{n \\pi (2x-a-b)}{b-a}\n    +\n    b_{n} \\sin \\frac{n \\pi (2x-a-b)}{b-a}\n  },\n\\end{align*}\nwhere by an obvious transformation\n\\begin{align*}\n  \\half (b-a) a_{n} =& \\int_{a}^{b} f(x) \\cos \\frac{n \\pi (2x-a-b)}{b-a} \\dmeasure x,\n  \\\\\n  \\half (b-a) b_{n} =& \\int_{a}^{b} f(x) \\sin \\frac{n \\pi (2x-a-b)}{b-a} \\dmeasure x\n  .\n\\end{align*}\n\\Subsection{9}{2}{2}{The cosine series and the sine series.}\nLet $f(x)$ be defined in the range $(0,l)$ and let it have an\n(absolutely) convergent integral and also let it have limited\ntotal fluctuation in that range.\n\\emph{Define} $f(x)$ in the range\n$\\wandwtypo{(0,-l)}{(-l,0)}$\nby the equation\n$$\nf(-x) = f(x).\\index{Even functions}\n$$\n\nThen\n$$\n\\half \\thebrace{ f(x+0) + f(x-0) }\n=\n\\half a_{0}\n+\n\\sum_{n=1}^{\\infty} \\thebrace{\n  a_{n} \\cos \\frac{n \\pi x}{l}\n  +\n  b_{n} \\sin \\frac{n \\pi x}{l}\n},\n$$\nwhere, by \\hardsubsectionref{9}{2}{1},\n\\begin{align*}\n  l a_{n}\n  =&\n  \\int_{-l}^{l} f(t) \\cos \\frac{n \\pi t}{l} \\dmeasure t\n  =\n  2 \\int_{0}^{l} f(t) \\cos \\frac{n \\pi t}{l} \\dmeasure t,\n  \\\\\n  l b_{n}\n  =&\n  \\int_{-l}^{l} f(t) \\sin \\frac{n \\pi t}{l} \\dmeasure t\n  = 0,\n\\end{align*}\nso that when $-l \\leq x \\leq l$,\n$$\n\\half \\thebrace{ f(x+0) + f(x-0) }\n=\n\\half a_{0} + \\sum_{n=1}^{\\infty} a_{n} \\cos \\frac{n \\pi x}{l};\n$$\nthis is called the \\emph{cosine series}.\n\nIf, however, we define $f(x)$ in the range $(0,-l)$ by the equation\n$$\nf(-x) = -f(\\wandwtypo{-}{}x),\\index{Odd functions}\n$$\n%\n% 166\n%\nwe get, when $-l \\leq x \\leq l$,\n$$\n\\half \\thebrace{ f(x+0) + f(x-0) }\n=\n\\sum_{n=1}^{\\infty} b_{n} \\sin \\frac{n \\pi 
x}{l},
$$
where
$$
l b_{n}
=
2 \int_{0}^{l} f(t) \sin \frac{n \pi t}{l} \dmeasure t;
$$
this is called the \emph{sine series}.

Thus the series
$$
\half a_{0}
+
\sum_{n=1}^{\infty} a_{n} \cos \frac{n \pi x}{l},
\quad
\sum_{n=1}^{\infty} b_{n} \sin \frac{n \pi x}{l},
$$
where
$$
\half l a_{n}
=
\int_{0}^{l} f(t) \cos \frac{n \pi t}{l} \dmeasure t,
\quad
\half l b_{n}
=
\int_{0}^{l} f(t) \sin \frac{n \pi t}{l} \dmeasure t,
$$
\emph{have the same sum when $0 \leq x \leq l$;}
but their sums are numerically
equal and opposite in sign when $0 \geq x \geq -l$.

%\begin{smalltext}
The cosine series was given by Clairaut, TODO Hist. de l'Acad. R. des Sci.
1754 [published, 1759], in a memoir dated July 9, 1757; the sine
series was obtained between 1762 and 1765 by Lagrange, Oeuvres, i. p.
553.
%\end{smalltext}

TODO Example 1. Expand $\half (\pi - x) \sin x$ in a cosine series in the range
$0 \leq x \leq \pi$.
[We have, by the formula just obtained,
$$
\half (\pi - x) \sin x
=
\half a_{0}
+ \sum_{n=1}^{\infty} a_{n} \cos nx,
$$
where
$$
\half \pi a_{n}
=
\int_{0}^{\pi} \half (\pi - x) \sin x \cos nx \dmeasure x.
$$

But, integrating by parts, if $n \neq 1$,
\begin{align*}
  &
  \int_{0}^{\pi} 2 (\pi - x) \sin x \cos nx \dmeasure x
  \\
  & \quad
  =
  \int_{0}^{\pi} (\pi - x) \thebrace{
    \sin (n+1) x - \sin (n-1) x
  } \dmeasure x
  \\
  & \quad
  =
  \thebracket{
    (x - \pi)
    \thebrace{
      \frac{ \cos (n+1) x }{n+1}
      -
      \frac{ \cos (n-1) x }{n-1}
    }
  }_{0}^{\pi}
  -
  \int_{0}^{\pi}
  \thebrace{
    \frac{ \cos (n+1) x }{n+1}
    -
    \frac{ \cos (n-1) x }{n-1}
  }
  \dmeasure x
  \\
  & \quad
  =
  \pi
  \theparen{
    \frac{1}{n+1}
    -
    \frac{1}{n-1}
  }
  =
  \frac{-2\pi}{(n+1)(n-1)},
\end{align*}
whereas if $n=1$, we get
$\int_{0}^{\pi} 2 (\pi - x) \sin x \cos x \dmeasure x = \half \pi$.

Therefore the required series is
$$
\half
+ \frac{1}{4} \cos x
- \frac{1}{1 \cdot 3} \cos 2x
- \frac{1}{2 \cdot 4} \cos 3x
- \frac{1}{3 \cdot 5} \cos 4x
- \cdots.
$$

It will be observed that it is only for values of $x$ between
$0$ and $\pi$ that the sum of this series is proved to be
$\half (\pi - x) \sin x$; thus for
instance when $x$ has a value between $0$ and $-\pi$,
the sum of the series is not
$\half (\pi - x) \sin x$, but $-\half (\pi + x) \sin x$; when $x$ has a value
between $\pi$ and $2 \pi$, the sum of the series happens to be again
$\half (\pi - x) \sin x$, but this is a mere coincidence arising from the special
function considered, and does not follow from the general theorem.]

TODO Example 2. Expand $\frac{1}{8} \pi x (\pi - x)$ in a sine series,
valid when $0 \leq x \leq \pi$.

[The series is $\sin x + \frac{\sin 3x}{3^{3}} + \frac{\sin 5x}{5^{3}} + \cdots.$]
%
% 167
%
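
The coefficients quoted in example 2 may be verified directly from the
formula for the sine series; two integrations by parts give
$$
\int_{0}^{\pi} x (\pi - x) \sin nx \dmeasure x
=
\frac{2 (1 - \cos n \pi)}{n^{3}},
$$
so that
$$
\half \pi b_{n}
=
\frac{1}{8} \pi
\int_{0}^{\pi} x (\pi - x) \sin nx \dmeasure x
=
\frac{\pi (1 - \cos n \pi)}{4 n^{3}},
$$
and therefore $b_{n}$ is $1/n^{3}$ when $n$ is odd and $0$ when $n$ is even,
in agreement with the series quoted.
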
Shew that, when $0 \\leq x \\leq \\pi$,\n$$\n\\frac{1}{96}\n\\pi\n(\\pi - 2x)\n(\\pi^{2} + 2 \\pi x - 2 x^{2})\n=\n\\cos x\n+ \\frac{\\cos 3x}{3^{4}}\n+ \\frac{\\cos 5x}{5^{4}}\n+ \\cdots.\n$$\n\n[Denoting the left-hand side by $f(x)$, we have, on integrating by\nparts and observing that $f'(0) = f'(\\pi) = 0$,\n\\begin{align*}\n  \\int_{0}^{\\pi} f(x) \\cos nx \\dmeasure x\n  =&\n  \\frac{1}{n} \\thebracket{f(x) \\sin nx}_{0}^{\\pi}\n  -\n  \\frac{1}{n} \\int_{0}^{\\pi} f'(x) \\sin nx \\dmeasure x\n  \\\\\n  =&\n  \\frac{1}{n^{2}} \\thebracket{f'(x) \\cos nx}_{0}^{\\pi}\n  -\n  \\frac{1}{n^{2}} \\int_{0}^{\\pi} f''(x) \\cos nx \\dmeasure x\n  \\\\\n  =&\n  -\\frac{1}{n^{3}} \\thebracket{f''(x) \\sin nx}_{0}^{\\pi}\n  +\n  \\frac{1}{n^{3}} \\int_{0}^{\\pi} f'''(x) \\sin nx \\dmeasure x\n  =&\n  -\\frac{1}{n^{4}} \\thebracket{f'''(x) \\cos nx}_{0}^{\\pi}\n  =\n  \\frac{\\pi}{4 n^{4}} (1 - \\cos n \\pi).]\n\\end{align*}\nTODO Example 4. Shew that for values of $x$ between $0$ and $\\pi$,\n$e^{s x}$ can be expanded in the cosine series\n$$\n\\frac{2 s}{\\pi}\n\\theparen{e^{s \\pi} - 1}\n\\theparen{\n  \\frac{1}{2 s^{2}}\n  + \\frac{\\cos 2x}{s^{2} + 4}\n  + \\frac{\\cos 4x}{s^{2} + 16}\n  + \\cdots\n}\n-\n\\frac{2 s}{\\pi}\n\\theparen{e^{s \\pi} - 1}\n\\theparen{\n  \\frac{\\cos x}{s^{2} + 1}\n  + \\frac{\\cos 3x}{s^{2} + 9}\n  + \\cdots\n},\n$$\nand draw graphs of the function $e^{s x}$ and of the sum of the series.\n\nTODO Example 5. Shew that for values of $x$ between $0$ and $\\pi$,\nthe function $\\frac{1}{8} \\pi (\\pi - 2x)$ can\nbe expanded in the cosine series\n$$\n\\cos x\n+ \\frac{\\cos 3x}{3^{2}}\n+ \\frac{\\cos 5x}{5^{2}}\n+ \\cdots,\n$$\nand draw graphs of the function $\\frac{1}{8} \\pi (\\pi - 2x)$ and of the sum of the\nseries.\n\n\\Section{9}{3}{The nature of the coefficients in a Fourier series.}\\footnote{TODO The analysis of this section and of \\hardsubsectionref{9}{3}{1} is contained in Stokes'\ngreat memoir, Camb. Phil. Tratis. VIII. (1849), pp. 538-583 [Math.\nPapers, i. pp. 
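
A partial check on example 5 is afforded by the coefficient of $\cos x$:
integrating by parts,
$$
a_{1}
=
\frac{2}{\pi}
\int_{0}^{\pi} \frac{1}{8} \pi (\pi - 2x) \cos x \dmeasure x
=
\frac{1}{4} \thebracket{ (\pi - 2x) \sin x }_{0}^{\pi}
+
\half \int_{0}^{\pi} \sin x \dmeasure x
=
1,
$$
which is the coefficient stated.
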
\Section{9}{3}{The nature of the coefficients in a Fourier series.}\footnote{TODO The analysis of this section and of \hardsubsectionref{9}{3}{1} is contained in Stokes'
great memoir, Camb. Phil. Trans. VIII. (1849), pp. 538-583 [Math.
Papers, i. pp. 236-313].}

Suppose that (as in the numerical examples which have been discussed)
the interval $(-\pi, \pi)$ can be divided into a finite number of ranges
$(-\pi, k_{1}), (k_{1}, k_{2}), \ldots, (k_{n}, \pi)$
such that throughout each range $f(x)$
and all its differential coefficients are continuous with limited
total fluctuation and that they have limits on the right and on the
left (\hardsectionref{3}{2}) at the end points of these ranges.

Then
$$
\pi a_{m}
=
\int_{-\pi}^{k_{1}} f(t) \cos mt \dmeasure t
+ \int_{k_{1}}^{k_{2}} f(t) \cos mt \dmeasure t
+ \cdots
+ \int_{k_{n}}^{\pi} f(t) \cos mt \dmeasure t.
$$

Integrating by parts we get
\begin{align*}
  &
  \pi a_{m}
  =
  \thebracket{
    m^{-1} f(t) \sin mt
  }_{-\pi}^{k_{1}}
  +
  \thebracket{
    m^{-1} f(t) \sin mt
  }_{k_{1}}^{k_{2}}
  +
  \cdots
  +
  \thebracket{
    m^{-1} f(t) \sin mt
  }_{k_{n}}^{\pi}
  \\
  & \hfill
  - m^{-1} \int_{-\pi}^{k_{1}} f'(t) \sin mt \dmeasure t
  - m^{-1} \int_{k_{1}}^{k_{2}} f'(t) \sin mt \dmeasure t
  - \cdots
  - m^{-1} \int_{k_{n}}^{\pi} f'(t) \sin mt \dmeasure t,
\end{align*}
so that
$$
a_{m} = \frac{A_{m}}{m} - \frac{b_{m}'}{m},
$$
%
% 168
%
where
$$
\pi A_{m}
=
\sum_{r=1}^{n}
\sin m k_{r}
\thebrace{
  f(k_{r} - 0) - f(k_{r} + 0)
}
$$
and $b_{m}'$ is a Fourier constant of $f'(x)$.

Similarly
$$
b_{m} = \frac{B_{m}}{m} + \frac{a_{m}'}{m},
$$
where
$$
\pi B_{m}
=
-
\sum_{r=1}^{n}
\cos m k_{r}
\thebrace{
  f(k_{r} - 0)
  -
  f(k_{r} + 0)
}
-
\cos m \pi
\thebrace{
  f(\pi - 0)
  -
  f(-\pi + 0)
}
,
$$
and $a_{m}'$ is a Fourier constant of $f'(x)$.

Similarly, we get
$$
a_{m}' = \frac{A_{m}'}{m} - \frac{b_{m}''}{m},
\quad
b_{m}' = \frac{B_{m}'}{m} + \frac{a_{m}''}{m},
$$
where $a_{m}'', b_{m}''$ are the Fourier constants of
$f''(x)$ and
\begin{align*}
  \pi A_{m}'
  =&
  \sum_{r=1}^{n} \sin m k_{r} \thebrace{
    f'(k_{r}-0) - f'(k_{r}+0)
  },
  \\
  \pi B_{m}'
  =&
  - \sum_{r=1}^{n} \cos m k_{r} \thebrace{
    f'(k_{r}-0) - f'(k_{r}+0)
  }
  \\
  \hfill
  - \cos m \pi \thebrace{
    f'(\pi - 0) - f'(-\pi + 0)
  }.
\end{align*}

Therefore
$$
a_{m} =
\frac{A_{m}}{m}
- \frac{B_{m}'}{m^{2}}
- \frac{a_{m}''}{m^{2}},
\quad
b_{m} =
\frac{B_{m}}{m}
+ \frac{A_{m}'}{m^{2}}
- \frac{b_{m}''}{m^{2}}.
$$

Now as $m \rightarrow \infty$, we see that
$$
A_{m}' = \bigo(1),
\quad
B_{m}' = \bigo(1),
$$
and, since the integrands involved in $a_{m}''$ and $b_{m}''$
are bounded, it is evident that
$$
a_{m}'' = \bigo(1),
\quad
b_{m}'' = \bigo(1).
$$

Hence if $A_{m}=0, B_{m}=0$, the Fourier series for $f(x)$ converges
absolutely and uniformly, by \hardsubsectionref{3}{3}{4}.

The necessary and sufficient conditions that
$A_{m} = B_{m} = 0$ for all values of $m$ are that
$$
f(k_{r} - 0) = f(k_{r} + 0),
\quad
f(\pi - 0) = f(-\pi + 0),
$$
that is to say that\footnote{Of course $f(x)$ is also subject to the conditions stated at the
beginning of the section.} $f(x)$ should be continuous for \emph{all} values of $x$.

\Subsection{9}{3}{1}{Differentiation of Fourier series.}
The result of differentiating
$$
\half a_{0}
+ \sum_{m=1}^{\infty} (a_{m} \cos mx + b_{m} \sin mx)
$$
term by term is
$$
\sum_{m=1}^{\infty} \thebrace{
  m b_{m} \cos mx
  -
  m a_{m} \sin mx
}.
$$
%
% 169
%

With the notation of \hardsectionref{9}{3}, this is the 
same as\n$$\n\\half a_{0}'\n+\n\\sum_{m=1}^{\\infty} ( a_{m}' \\cos mx + b_{m}' \\sin mx),\n$$\nprovided that $A_{m} = B_{m} = 0$ and\n$\\int_{-\\pi}^{\\pi} f'(x) \\dmeasure x = 0$;\nthese conditions are satisfied if $f(x)$ is continuous for all values of\n$x$.\n\nConsequently sufficient conditions for the legitimacy of\ndifferentiating a Fourier series term by term are that $f(x)$ should be\ncontinuous for \\emph{all} values of $x$ and $f'(x)$ should have only a finite\nnumber of points of discontinuity in the range $(-\\pi, \\pi)$, both\nfunctions having limited total fluctuation throughout the range.\n\n\\Subsection{9}{3}{2}{Determination of points of discontinuity.}\n\nThe expressions for $a_{m}$ and $b_{m}$ which have been found in\n\\hardsectionref{9}{3} can\nfrequently be applied in practical examples to determine the points\nat which the sum of a given Fourier series may be discontinuous. Thus,\nlet it be required to determine the places at which the sum of the\nseries\n$$\n\\sin x\n+ \\frac{1}{3} \\sin 3x\n+ \\frac{1}{5} \\sin 5x\n+ \\cdots\n$$\nis discontinuous.\n\n\\emph{Assuming} that the series is a Fourier series and not \\emph{any}\ntrigonometrical series and observing that\n$a_{m} = 0, b_{m} = (2m)^{-1}(1 - \\cos m \\pi)$, we get on considering the\nformula found in \\hardsectionref{9}{3},\n$$\nA_{m} = 0,\n\\quad\nB_{m} = \\half - \\half \\cos m \\pi,\n\\quad\na_{m}' = b_{m}' = 0.\n$$\n\nHence if $k_{1}, k_{2},\\ldots$ are the places at which the analytic\ncharacter of the sum is broken, we have\n$$\n0\n=\n\\pi A_{m}\n=\n\\thebracket{\n  \\sin m k_{1} \\thebrace{\n    f(k_{1} - 0) - f(k_{1} + 0)\n  }\n  +\n  \\sin m k_{2} \\thebrace{\n    f(k_{2} - 0) - f(k_{2} + 0)\n  }\n  +\n  \\cdots\n}.\n$$\nSince this is true for all values of $m$, the numbers\n$k_{1}, k_{2}, \\ldots$ must\nbe multiples of $\\pi$; but\nthere is only one even multiple of $\\pi$ in the range\n$-\\pi < x \\leq \\pi$, namely zero.\nSo $k_{1} = 0$,\nand $k_{2}, k_{3}, \\ldots$ do not exist.\nSubstituting $k_{1} = 0$ in the equation\n$B_{m} = \\half - \\half \\cos m \\pi$, we have\n$$\n\\pi (\\half - \\half \\cos m \\pi)\n=\n- \\thebracket{\n  \\cos m \\pi \\thebrace{\n    f(\\pi - 0) - f(-\\pi + 0)\n  }\n  + f(-0)\n  - f(+0)\n}.\n$$\n\nSince this is true for all values of $m$, we have\n$$\n\\half \\pi = f(+0) - f(-0),\n\\quad\n\\half \\pi = f(\\pi - 0) - f(-\\pi + 0).\n$$\n\nThis shews that, if the\nseries is a Fourier series, $f(x)$ has discontinuities at the points\n$n \\pi$ ($n$ any integer), and since $a_{m}' = b_{m}' = 0$, we should\nexpect\\footnote{In point of fact\n  $$\n  f(x)\n  =\n  \\begin{cases}\n    -\\frac{1}{4} \\pi & -\\pi < x < 0;\\\\\n    \\frac{1}{4} \\pi & 0 < x < \\pi.\n  \\end{cases}\n  $$\n} $f(x)$\nto be constant in the open range $(-\\pi, 0)$ and to be another constant\nin the open range $(0, \\pi)$.\n\n\\Section{9}{4}{\\Fejer's theorem.}\n\nWe now begin the discussion of the theory of Fourier series by proving\nthe following theorem, due to \\Fejer,\\footnote{TODO:Math. Ann. lviii. (1904), pp. 
51-69.}
concerning the summability of
the Fourier series associated with an arbitrary function, $f(t)$:

\emph{Let $f(t)$ be a function of the real variable $t$, defined arbitrarily
  when $-\pi \leq t < \pi$, and defined by the equation
  $$
  f(t + 2\pi) = f(t)
  $$
%
% 170
%
  for all other real values of $t$; and let
  $\int_{-\pi}^{\pi} f(t) \dmeasure t$
  exist and (if it is an improper integral)
  let it be absolutely convergent.
}

\emph{Then the Fourier series associated with the function
  $f(t)$ is summable\footnote{See \hardsubsectionref{8}{4}{3}.} ($C1$)
  at all points $x$ at which the two limits $f(x \pm 0)$ exist.}

And its sum ($C1$) is
$$
\half \thebrace{
  f(x + 0) + f(x-0)
}.
$$

Let $a_{n}, b_{n}, (n=0,1,2,\ldots)$ denote the Fourier constants
(\hardsectionref{9}{2}) of
$f(t)$ and let
$$
\half a_{0} = A_{0},
\hfill
a_{n} \cos n x + b_{n} \sin n x = A_{n}(x),
\hfill
\sum_{n=0}^{m} A_{n}(x) = S_{m}(x).
$$

Then we have to prove that
$$
\lim_{m \rightarrow \infty}
\frac{1}{m} \thebrace{
  A_{0}
  + S_{1}(x) + S_{2}(x) + \cdots + S_{m-1}(x)
}
=
\half \thebrace{
  f(x+0) + f(x-0)
},
$$
provided that the limits on the right exist.

If we substitute for the Fourier constants their values in the form of
integrals (\hardsectionref{9}{2}), it is easy to verify
that\footnote{It is obvious that, if we write $\lambda$ for $e^{i(x-t)}$ in the second line,
  then
  \begin{align*}
    m + &
    (m-1) (\lambda + \lambda^{-1})
    + (m-2) (\lambda^{2} + \lambda^{-2})
    + \cdots
    + (\lambda^{m-1} + \lambda^{1 - m})
    \\
    =&
    (1 - \lambda)^{-1} \thebrace{
      \lambda^{1-m}
      + \lambda^{2-m}
      + \cdots
      + \lambda^{-1}
      + 1
      - \lambda
      - \lambda^{2}
      - \cdots
      - \lambda^{m}
    } \\
    =&
    (1 - \lambda)^{-2} \thebrace{
      \lambda^{1 - m}
      - 2 \lambda
      + \lambda^{m+1}
    }
    =
    (\lambda^{\half m} - \lambda^{-\half m})^{2}
    /
    (\lambda^{\half} - \lambda^{-\half})^{2}.
  \end{align*}
}
\begin{align*}
  A_{0} + \sum_{n=1}^{m-1} S_{n}(x)
  =&
  m A_{0}
  + (m - 1) A_{1}(x)
  + (m - 2) A_{2}(x)
  + \cdots
  + A_{m-1}(x)
  \\
  =&
  \frac{1}{\pi}
  \int_{-\pi}^{\pi} \thebrace{
    \half m
    + (m-1) \cos (x-t)
    + (m-2) \cos 2(x-t)
    + \cdots
    + \cos (m-1)(x-t)
  }
  f(t) \dmeasure t
  \\
  =&
  \frac{1}{2\pi}
  \int_{-\pi}^{\pi}
  \frac{\sin^{2} \half m (x-t)}{\sin^{2} \half (x-t)}
  f(t) \dmeasure t
  \\
  =&
  \frac{1}{2\pi}
  \int_{-\pi+x}^{\pi+x}
  \frac{\sin^{2} \half m (x-t)}{\sin^{2} \half (x-t)}
  f(t) \dmeasure t,
\end{align*}
the last step following from the periodicity
of the integrand.

If now we bisect the path of integration and write $x \mp 2\theta$ in place of
$t$ in the two parts of the path, we get
$$
A_{0}
+ \sum_{n=1}^{m-1} S_{n}(x)
=
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
f(x + 2\theta) \dmeasure \theta
+
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
f(x - 2\theta) \dmeasure \theta.
$$

\emph{Consequently it is sufficient to prove that, as
  $m \rightarrow \infty$,}
$$
\frac{1}{m}
\int_{0}^{\half\pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
f(x + 2\theta)
\dmeasure \theta
\rightarrow
\half \pi f(x + 0),
\quad
\frac{1}{m}
\int_{0}^{\half\pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
f(x - 2\theta)
\dmeasure \theta
\rightarrow
\half \pi f(x - 0).
$$
%
% 171
%

Now, if we integrate the equation
$$
\half
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
=
\half m + (m-1) \cos 2\theta + \cdots + \cos 2(m-1)\theta,
$$
we find that
$$
\int_{0}^{\half \pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
\dmeasure \theta
=
\half \pi m,
$$
and so we have to prove that
$$
\frac{1}{m}
\int_{0}^{\half \pi}
\frac{\sin^{2} m \theta}{\sin^{2} \theta}
\phi(\theta)
\dmeasure \theta
\rightarrow
0
\quad
\textrm{as} %TODO: clean up; do more properly
\quad
m \rightarrow \infty,
$$
where $\phi(\theta)$ stands in turn for each of the two functions
$$
f(x + 2\theta) - f(x + 0),
\quad
f(x - 2\theta) - f(x - 0).
$$
Now, given an arbitrary positive number $\eps$, we can choose $\delta$ so
that\footnote{On the assumption that $f(x \pm 0)$ exist.}
$$
\absval{\phi(\theta)} < \eps
$$
whenever $0 < \theta \leq \half \delta$.
This choice of $\\delta$ is obviously independent of $m$.\n\nThen\n\\begin{align*}\n  \\absval{\n    \\frac{1}{m}\n    \\int_{0}^{\\half \\pi}\n    \\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n    \\phi(\\theta)\n    \\dmeasure \\theta\n  }\n  &\n  \\leq\n  \\frac{1}{m}\n  \\int_{0}^{\\half \\delta}\n  \\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n  \\absval{ \\phi(\\theta) }\n  \\dmeasure \\theta\n  +\n  \\frac{1}{m}\n  \\int_{\\half\\delta}^{\\half\\pi}\n  \\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n  \\absval{ \\phi(\\theta) }\n  \\dmeasure \\theta\n  \\\\\n  &\n  <\n  \\frac{\\eps}{m}\n  \\int_{0}^{\\half \\delta}\n  \\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n  \\dmeasure \\theta\n  +\n  \\frac{1}{m \\sin^{2} \\half \\delta}\n  \\int_{\\half \\delta}^{\\half \\pi}\n  \\absval{\\phi(\\theta)}\n  \\dmeasure \\theta\n  \\\\\n  &\n  \\leq\n  \\frac{\\eps}{m}\n  \\int_{0}^{\\half \\pi}\n  \\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n  \\dmeasure \\theta\n  +\n  \\frac{1}{m \\sin^{2} \\half\\delta}\n  \\int_{0}^{\\half \\pi}\n  \\absval{\\phi(\\theta)}\n  \\dmeasure \\theta\n  \\\\\n  &\n  =\n  \\half \\pi \\eps\n  +\n  \\frac{1}{m \\sin^{2} \\half\\delta}\n  \\int_{0}^{\\half \\pi}\n  \\absval{\\phi(\\theta)}\n  \\dmeasure \\theta\n\\end{align*}\n\nNow the convergence of\n$\\int_{-\\pi}^{\\pi} \\absval{f(t)} \\dmeasure t$\nentails the convergence of\n$$\n\\int_{0}^{\\half \\pi}\n\\absval{\\phi(\\theta)}\n\\dmeasure \\theta,\n$$\nand so, given $\\eps$ (and therefore $\\delta$), we can make\n$$\n\\half \\pi \\eps\n\\sin^{2} \\half \\delta\n>\n\\int_{0}^{\\half\\pi}\n\\absval{ \\phi(\\theta) }\n\\dmeasure \\theta,\n$$\nby taking $m$ sufficiently large.\n\nHence, by taking $m$ sufficiently large, we can make\n$$\n\\absval{\n  \\frac{1}{m}\n  \\int_{0}^{\\half \\pi}\n  \\frac{\\sin^{2} m \\theta}{\\sin^{2} \\theta}\n  \\phi(\\theta)\n  \\dmeasure \\theta\n}\n<\n\\pi \\eps,\n$$\nwhere $\\eps$ is an arbitrary positive number; that is to say, from the\ndefinition of a limit,\n$$\n\\lim_{m \\rightarrow \\infty}\n\\frac{1}{m}\n\\int_{0}^{\\half\\pi}\n\\frac{\\sin^{2} m\\theta}{\\sin^{2} \\theta}\n\\phi(\\theta)\n\\dmeasure \\theta\n=\n0,\n$$\nand so \\Fejer's theorem is established.\n%\n% 172\n%\n\n%TODO:Corollary\nCorollary 1. 
%TODO:Corollary
Corollary 1.
Let $U$ and $L$ be the upper and lower bounds of $f(t)$ in any
interval $(a, b)$ whose length does not exceed $2\pi$, and let
$$
\int_{-\pi}^{\pi} \absval{f(t)} \dmeasure t = \pi A.
$$
Then, if $a + \eta \leq x \leq b - \eta$, where $\eta$ is any positive number, we have
\begin{align*}
  U
  -
  \frac{1}{m}
  \thebrace{
    A_{0}
    +
    \sum_{n=1}^{m-1} S_{n}(x)
  }
  &
  =
  \frac{1}{2 m \pi}
  \thebrace{
    \int_{-\pi + x}^{x - \eta}
    + \int_{x - \eta}^{x + \eta}
    + \int_{x + \eta}^{\pi + x}
  }
  \frac{\sin^{2} \half m (x-t)}{\sin^{2} \half (x-t)}
  \thebrace{
    U - f(t)
  }
  \dmeasure t
  \\
  &
  \geq
  \frac{1}{2 m \pi}
  \thebrace{
    \int_{-\pi + x}^{x - \eta}
    + \int_{x + \eta}^{\pi + x}
  }
  \frac{\sin^{2} \half m (x-t)}{\sin^{2} \half (x-t)}
  \thebrace{
    U - f(t)
  }
  \dmeasure t
  \\
  &
  \geq
  -\frac{1}{2 m \pi}
  \thebrace{
    \int_{-\pi + x}^{x - \eta}
    + \int_{x + \eta}^{\pi + x}
  }
  \frac{\absval{U} + \absval{f(t)}}{\sin^{2} \half \eta}
  \dmeasure t,
\end{align*}
so that
$$
\frac{1}{m}
\thebrace{
  A_{0}
  + \sum_{n=1}^{m-1} S_{n}(x)
}
\leq
U +
\thebrace{
  \absval{U}
  + \half A
}
/
\thebrace{
  m \sin^{2} \half \eta
}.
$$
Similarly
$$
\frac{1}{m}
\thebrace{
  A_{0}
  + \sum_{n=1}^{m-1} S_{n}(x)
}
\geq
L -
\thebrace{
  \absval{L}
  + \half A
}
/
\thebrace{
  m \sin^{2} \half \eta
}.
$$
Corollary 2. Let $f(t)$ be continuous in the interval $a \leq t \leq b$. Since
continuity implies uniformity of continuity (\hardsubsectionref{3}{6}{1}), the choice of
$\delta$ corresponding to any value of $x$ in $(a, b)$ is independent of $x$, and the
upper bound of $\absval{f(x \pm 0)}$, i.e. of $\absval{f(x)}$, is also independent of $x$,
so that
\begin{align*}
  \int_{0}^{\half \pi} \absval{\phi(\theta)} \dmeasure \theta
  =&
  \int_{0}^{\half \pi}
  \absval{
    f(x \pm 2\theta) - f(x \pm 0)
  }
  \dmeasure \theta
  \\
  \leq
  &
  \half \int_{-\pi}^{\pi} \absval{f(t)} \dmeasure t
  + \half \pi \absval{f(x \pm 0)},
\end{align*}
and the upper bound of the last expression is independent of $x$.

Hence the choice of $m$, which makes
$$
\absval{
  \frac{1}{m}
  \int_{0}^{\half \pi}
  \frac{\sin^{2} m \theta}{\sin^{2} \theta}
  \phi(\theta)
  \dmeasure \theta
}
<
\pi \eps,
$$
is independent of $x$, \emph{and consequently
  $\frac{1}{m}\thebrace{
    A_{0} + \sum_{n=1}^{m-1} S_{n}(x)
  }$ tends to the limit $f(x)$, as $m \rightarrow \infty$,
  uniformly throughout the interval $a \leq x \leq b$}.
\Subsection{9}{4}{1}{The Riemann-Lebesgue lemmas.}
In order to be able to apply Hardy's theorem (\hardsectionref{8}{5}) to deduce the
convergence of Fourier series from \Fejer's theorem, we need the two
following lemmas:

%TODO
(I) Let $\int_{a}^{b} \psi(\theta) \dmeasure \theta$ exist and (if it is an improper integral) let it be
absolutely convergent.
Then, as $\\lambda \\rightarrow \\infty$,\n$$\n\\int_{a}^{b} \\psi(\\theta) \\sin (\\lamba \\theta) \\dmeasure \\theta\n\\quad\n\\textrm{is}\n\\quad\n\\littleo(1).\n$$\n%TODO\n(II) If, further, $\\psi(\\theta)$ has limited total fluctuation in the range\n$(a,b)$ then, as $\\lambda \\rightarrow \\infty$,\n$$\n\\int_{a}^{b} \\psi(\\theta) \\sin (\\lambda\\theta) \\dmeasure \\theta\n\\quad\n\\textrm{is}\n\\quad\n\\bigo(1 / \\lambda).\n$$\n\n%\n% 173\n%\n\nOf these results (I) %TODO:fixref\nwas stated by W. R. Hamilton\\footnote{TODOTrans. Dublin Acad. xix. (1843), p. 267.} and by\nRiemann\\footnote{TODO:Ges. Math. IVerke, p. 241. For Lebesgue's investigation see his\nSeries trigonometriques (1906), Ch. III.} in\nthe case of bounded fiuictions.\nThe truth of (II) %TODO:fixref\nseems to have been\nwell known before its importance was realised; it is a generalisation\nof a result established by Dirksen\\footnote{TODO:Journal fUr Math. iv. (1829), p. 172.}\nand Stokes (see \\hardsectionref{9}{3}) in the\ncase of functions with a continuous differential coefficient.\n\nThe reader should observe that the analysis of this section remains\nvalid when the sines are replaced throughout by cosines.\n\n%TODO:fixref\n(I) It is convenient\\footnote{For this proof we are indebted to Mr Hardy;\n  it seems to be neater than the proofs given by other writers,\n  %TODO\n  e.g. de la Vallee Poussin,\n  Cours cV Analyse Infinitesiniale, ii. (1912), pp. 140-141.}\nto establish this lemma first in the case in\nwhich $\\psi(\\theta)$ is bounded in the range $(a, b)$. In this case, let $K$ be\nthe upper bound of $\\absval{\\psi(\\theta)}$, and let $\\eps$ be an arbitrary positive\nnumber. Divide the range $(a, b)$ into $n$ parts by the points\n$x_{1}, x_{2}, \\ldots x_{n-1}$, and form the sums $S_{n}, s_{n}$ associated with the function\n$\\psi(\\theta)$ after the manner of \\hardsectionref{4}{1}. 
Of these results (I) %TODO:fixref
was stated by W. R. Hamilton\footnote{TODOTrans. Dublin Acad. xix. (1843), p. 267.} and by
Riemann\footnote{TODO:Ges. Math. Werke, p. 241. For Lebesgue's investigation see his
Series trigonometriques (1906), Ch. III.} in
the case of bounded functions.
The truth of (II) %TODO:fixref
seems to have been
well known before its importance was realised; it is a generalisation
of a result established by Dirksen\footnote{TODO:Journal für Math. iv. (1829), p. 172.}
and Stokes (see \hardsectionref{9}{3}) in the
case of functions with a continuous differential coefficient.

The reader should observe that the analysis of this section remains
valid when the sines are replaced throughout by cosines.

%TODO:fixref
(I) It is convenient\footnote{For this proof we are indebted to Mr Hardy;
  it seems to be neater than the proofs given by other writers,
  %TODO
  e.g. de la Vallee Poussin,
  Cours d'Analyse Infinitesimale, ii. (1912), pp. 140-141.}
to establish this lemma first in the case in
which $\psi(\theta)$ is bounded in the range $(a, b)$. In this case, let $K$ be
the upper bound of $\absval{\psi(\theta)}$, and let $\eps$ be an arbitrary positive
number. Divide the range $(a, b)$ into $n$ parts by the points
$x_{1}, x_{2}, \ldots x_{n-1}$, and form the sums $S_{n}, s_{n}$ associated with the function
$\psi(\theta)$ after the manner of \hardsectionref{4}{1}. Take $n$ so large that
$S_{n} - s_{n} < \eps$; this is
possible since $\psi(\theta)$ is integrable.

In the interval $(x_{r-1}, x_{r})$ write
$$
\psi(\theta) = \psi_{r}(x_{r-1}) + \omega_{r}(\theta),
$$
so that
$$
\absval{ \omega_{r}(\theta) } \leq U_{r} - L_{r},
$$
where $U_{r}$ and $L_{r}$ are the upper and lower bounds of
$\psi(\theta)$ in the interval
$(x_{r-1}, x_{r})$.

It is then clear that
\begin{align*}
  &
  \absval{
    \int_{a}^{b}
    \psi(\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  \\
  & \quad
  =
  \absval{
    \sum_{r=1}^{n}
    \psi_{r}(x_{r-1})
    \int_{x_{r-1}}^{x_{r}}
    \sin (\lambda \theta) \dmeasure \theta
    +
    \sum_{r=1}^{n}
    \int_{x_{r-1}}^{x_{r}}
    \omega_{r}(\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  \\
  & \quad
  \leq
  \sum_{r=1}^{n}
  \absval{
    \psi_{r}(x_{r-1})
  }
  \cdot
  \absval{
    \int_{x_{r-1}}^{x_{r}}
    \sin (\lambda \theta) \dmeasure \theta
  }
  +
  \sum_{r=1}^{n}
  \int_{x_{r-1}}^{x_{r}}
  \absval{ \omega_{r} (\theta)} \dmeasure \theta
  \\
  & \quad
  \leq n K \cdot (2 / \lambda)
  +
  (S_{n} - s_{n})
  \\
  & \quad
  < (2nK / \lambda)
  +
  \eps.
\end{align*}

By taking $\lambda$ sufficiently large ($n$ remaining fixed after $\eps$ has been
chosen), the last expression may be made less than $2\eps$, so that
$$
\lim_{\lambda \rightarrow \infty}
\int_{a}^{b} \psi(\theta) \sin(\lambda \theta) \dmeasure \theta = 0,
$$
and this is the result stated.

When $\psi(\theta)$ is unbounded, if it has an absolutely convergent integral,
by \hardsectionref{4}{5}, we may enclose the points at which it is unbounded in a
finite\footnote{The \emph{finiteness} of the number of intervals is assumed in the
definition of an improper integral, \hardsectionref{4}{5}.} number
%
% 174
%
of intervals $\delta_{1}, \delta_{2}, \ldots, \delta_{p}$ such that
$$
\sum_{r=1}^{p}
\int_{\delta_{r}}
\absval{ \psi(\theta) } \dmeasure \theta
<
\eps.
$$
If $K$ denote the upper bound of $\absval{\psi(\theta)}$ for values of
$\theta$ outside these intervals, and if
$\gamma_{1}, \gamma_{2}, \ldots, \gamma_{p+1}$ denote the portions of the interval
$(a, b)$ which do not belong to $\delta_{1}, \delta_{2}, \ldots, \delta_{p}$,
we may prove as before that
\begin{align*}
  \absval{\int_{a}^{b} \psi(\theta) \sin (\lambda \theta) \dmeasure \theta}
  & =
  \absval{
    \sum_{r=1}^{p+1}
    \int_{\gamma_{r}} \psi(\theta) \sin(\lambda \theta) \dmeasure \theta
    +
    \sum_{r=1}^{p}
    \int_{\delta_{r}} \psi(\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  \\
  & \leq
  \absval{
    \sum_{r=1}^{p+1}
    \int_{\gamma_{r}} \psi(\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  +
  \sum_{r=1}^{p}
  \int_{\delta_{r}}
  \absval{\psi(\theta) \sin(\lambda \theta)}
  \dmeasure \theta
  \\
  & <
  (2nK / \lambda) + 2 \eps.
\end{align*}

Now the choice of $\eps$ fixes $n$ and $K$, so that the last expression may be
made less than $3\eps$ by taking $\lambda$ sufficiently large.
That is to say
that, even if $\psi(\theta)$ be unbounded,
$$
\lim_{\lambda \rightarrow \infty}
\int_{a}^{b} \psi(\theta) \sin(\lambda \theta) \dmeasure \theta
=
0,
$$
provided that $\psi(\theta)$ has an (improper) integral which is absolutely
convergent.

The first lemma is therefore completely proved.

%TODO
(II) When $\psi(\theta)$ has limited total fluctuation in the range $(a, b)$,
by %TODO: un-hardcode example number
\hardsubsectionref{3}{6}{4} example 2, we may write
$$
\psi(\theta) = \chi_{1}(\theta) - \chi_{2}(\theta),
$$
where $\chi_{1}(\theta), \chi_{2}(\theta)$ are positive increasing bounded functions.

Then, by the second mean-value theorem (\hardsubsectionref{4}{1}{4}) a number
$\xi$ exists such that $a \leq \xi \leq b$ and
\begin{align*}
  \absval{
    \int_{a}^{b} \chi_{1}(\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  &=
  \absval{
    \chi_{1}(b) \int_{\xi}^{b} \sin(\lambda \theta) \dmeasure \theta
  }
  \\
  & \leq
  2 \chi_{1}(b) / \lambda.
\end{align*}

If we treat $\chi_{2}(\theta)$ in a similar manner, it follows that
\begin{align*}
  \absval{
    \int_{a}^{b} \psi (\theta) \sin(\lambda \theta) \dmeasure \theta
  }
  &
  \leq
  \absval{
    \int_{a}^{b} \chi_{1}(\theta) \sin(\lambda\theta) \dmeasure \theta
  }
  +
  \absval{
    \int_{a}^{b} \chi_{2}(\theta) \sin(\lambda\theta) \dmeasure \theta
  }
  \\
  &
  \leq
  2 \thebrace{ \chi_{1}(b) + \chi_{2}(b) } / \lambda
  \\
  = &
  \bigo(1 / \lambda),
\end{align*}
and the second lemma is established.

Corollary. If $f(t)$ be such that $\int_{-\pi}^{\pi} f(t) \dmeasure t$
exists and is an absolutely convergent integral,
the Fourier constants $a_{n}, b_{n}$ of $f(t)$ are $\littleo(1)$ as
$n \rightarrow \infty$;
and if, further, $f(t)$ has limited total fluctuation in the range
$(-\pi, \pi)$, the Fourier constants are $\bigo(1/n)$.

[Of course these results are not sufficient to ensure the convergence
of the Fourier series associated with $f(t)$; for a series, in which the
terms are of the order of magnitude of the terms in the harmonic
series (\hardsectionref{2}{3}), is not necessarily convergent.]

%
% 175
%

\Subsection{9}{4}{2}{The proof of Fourier's theorem.}

We shall now prove the theorem enunciated in \hardsectionref{9}{2}, namely:

%TODO:emphasize paragraphs
Let $f(t)$ be a function defined arbitrarily when $-\pi \leq t < \pi$, and defined by the
equation $f(t + 2\pi) = f(t)$ for all other real values of $t$; and let
$\int_{-\pi}^{\pi} f(t) \dmeasure t$
exist and (if it is an improper integral) let it be absolutely
convergent.
Let $a_{n}, b_{n}$ be defined by the equations
$$
\pi a_{n} = \int_{-\pi}^{\pi} f(t) \cos nt \dmeasure t,
\quad
\pi b_{n} = \int_{-\pi}^{\pi} f(t) \sin nt \dmeasure t.
$$
Then, if $x$ be an interior point of any interval $(a, b)$ within which
$f(t)$ has limited total fluctuation, the series
$$
\half a_{0}
+
\sum_{n=1}^{\infty} (
a_{n} \cos nx + b_{n} \sin nx
)
$$
is convergent and its sum is $\half \thebrace{f(x+0) + f(x-0)}$.

It is convenient to give two proofs, one applicable to functions for
which it is permissible to take the interval $(a, b)$ to be the interval
$(-\pi+x, \pi + x)$, the other applicable to functions for which it is
not permissible.

%TODO:autonumbering
(I) When the interval $(a, b)$ may be taken to be $(-\pi + x, \pi + x)$,
it follows from \hardsubsectionref{9}{4}{1} (II) %TODO:ref
that $a_{n} \cos nx + b_{n} \sin nx$ is $\bigo(1/n)$ as
$n \rightarrow \infty$. Now by \Fejer's theorem (\hardsectionref{9}{4})
the series under consideration
is summable ($C1$)
and its sum ($C1$)
is\footnote{The limits $f(x \pm 0)$ exist, by \hardsubsectionref{3}{6}{4} example 3.%TODO
}
$\half \thebrace{f(x+0) + f(x-0)}$. Therefore,
by Hardy's convergence theorem (\hardsectionref{8}{5}), the series under consideration
is \emph{convergent}
and its sum (by \hardsubsectionref{8}{4}{3}) is
$\half \thebrace{f(x+0) + f(x-0)}$.

(II) %TODO
Even if it is not permissible to take the interval $(a, b)$ to be
the whole interval $(-\pi + x, \pi + x)$, it is possible, by
hypothesis, to choose a positive number $\delta$, less than $\pi$,
such that $f(t)$
has limited total fluctuation in the interval $(x-\delta, x+\delta)$.
We now define an auxiliary function $g(t)$, which is equal to $f(t)$ when
$x - \delta \leq t \leq x + \delta$,
and which is equal to zero throughout the rest of the interval
$(-\pi + x, \pi + x)$; and $g(t + 2\pi)$ is to be equal to $g(t)$ for all real
values of $t$.

Then $g(t)$ satisfies the conditions postulated for the functions under
consideration in (I),%TODO:ref
namely that it has an integral which is
absolutely convergent and it has limited total fluctuation in the
interval $(-\pi + x, \pi + x)$; and so, if
$a_{n}^{(1)}, b_{n}^{(1)}$ denote the Fourier
constants of $g(t)$, the arguments used in (I) %TODO:addref
prove that the Fourier
series associated with $g(t)$, namely
$$
\half a_{0}^{(1)}
+
\sum_{n=1}^{\infty}
\theparen{
  a_{n}^{(1)} \cos nx
  +
  b_{n}^{(1)} \sin nx
},
$$
is convergent and has the sum
$\half \thebrace{g(x+0) + g(x-0)}$, and this is
equal to
$$
\half \thebrace{
  f(x+0) + f(x-0)
}.
$$
%
% 176
%

Now let $S_{m}(x)$ and $S_{m}^{(1)} (x)$ denote the sums of the first $m + 1$
terms of the Fourier series associated with $f(t)$ and $g(t)$ respectively.
Then
it is easily seen that
\begin{align*}
  S_{m}(x)
  =&
  \frac{1}{\pi}
  \int_{-\pi}^{\pi} \thebrace{
    \half
    + \cos (x-t)
    + \cos 2(x-t)
    + \cdots
    + \cos m(x-t)
  }
  f(t) \dmeasure t
  \\
  =&
  \frac{1}{2\pi}
  \int_{-\pi}^{\pi}
  \frac{\sin (m+\half) (x-t) }{\sin \half (x-t)}
  f(t) \dmeasure t
  \\
  =&
  \frac{1}{2\pi}
  \int_{-\pi+x}^{\pi+x}
  \frac{\sin (m+\half) (x-t) }{\sin \half (x-t)}
  f(t) \dmeasure t
  \\
  =&
  \frac{1}{\pi}
  \int_{0}^{\half \pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  f(x + 2\theta) \dmeasure \theta
  +
  \frac{1}{\pi}
  \int_{0}^{\half \pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  f(x - 2\theta) \dmeasure \theta,
\end{align*}
by steps analogous to those given in \hardsectionref{9}{4}.

In like manner
$$
S_{m}^{(1)}(x)
=
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
g(x + 2\theta) \dmeasure \theta
+
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
g(x - 2\theta) \dmeasure \theta,
$$
and so, using the definition of $g(t)$, we have
\begin{align*}
  S_{m}(x) - S_{m}^{(1)}(x)
  =&
  \frac{1}{\pi}
  \int_{\half\delta}^{\half \pi} \sin (2m+1)\theta
  \frac{f(x+2\theta)}{\sin \theta}
  \dmeasure \theta
  \\
  &\quad
  +
  \frac{1}{\pi}
  \int_{\half\delta}^{\half \pi} \sin (2m+1)\theta
  \frac{f(x-2\theta)}{\sin \theta}
  \dmeasure \theta.
\end{align*}

Since $\cosec \theta$ is a continuous function in the range $(\half \delta, \half \pi)$, it follows
that $f(x \pm 2\theta) \cosec \theta$ are integrable functions with absolutely
convergent integrals; and so, by the Riemann-Lebesgue lemma of
\hardsubsectionref{9}{4}{1} (I), %TODO:add reference to lemma (the `(I)' here)
\emph{both the integrals on the right in the last equation tend to zero
as $m \rightarrow \infty$}.

That is to say
$$
\lim_{m \rightarrow \infty}
\thebrace{
  S_{m}(x) - S_{m}^{(1)}(x)
}
= 0.
$$

Hence, since
$$
\lim_{m \rightarrow \infty}
S_{m}^{(1)}(x)
=
\half \thebrace{
  f(x+0) + f(x-0)
},
$$
it follows also that
$$
\lim_{m \rightarrow \infty} S_{m}(x)
=
\half \thebrace{
  f(x+0) + f(x-0)
}.
$$

\emph{We have therefore proved that the Fourier series associated with
  $f(t)$, namely
  $
  \half a_{0}
  + \sum \theparen{
    a_{n} \cos nx
    +
    b_{n} \sin nx
  }
  $,
  is convergent and its sum is}
$$
\half \thebrace{
  f(x+0) + f(x-0)
}.
$$
\Subsection{9}{4}{3}{The Dirichlet-Bonnet proof of Fourier's theorem.}
It is of some interest to prove directly the theorem of \hardsubsectionref{9}{4}{2},
without making use of the theory of summability; accordingly we now
give a proof which is on the same general lines as the proofs due to
Dirichlet and Bonnet.
%
% 177
%

As usual we denote the sum of the first $m + 1$ terms of the Fourier
series by $S_{m}(x)$, and then, by the analysis of \hardsubsectionref{9}{4}{2}, we have
$$
S_{m}(x)
=
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
f(x + 2\theta)
\dmeasure \theta
+
\frac{1}{\pi}
\int_{0}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
f(x - 2\theta)
\dmeasure \theta.
$$

Again, on integrating the equation
$$
\frac{\sin (2m+1)\theta}{\sin \theta}
=
1
+ 2 \cos 2\theta
+ 2 \cos 4\theta
+ \cdots
+ 2 \cos 
2m\\theta,\n$$\nwe have\n$$\n\\int_{0}^{\\half\\pi}\n\\frac{\\sin (2m+1)\\theta}{\\sin \\theta}\n\\dmeasure \\theta\n=\n\\half\\pi,\n$$\nso that\n\\begin{align*}\n  TODO\n\\end{align*}\n\nIn order to prove that\n$$\n\\lim_{m \\rightarrow \\infty}\nS_{m}(x)\n=\n\\half \\thebrace{\n  f(x+0) + f(x-0)\n},\n$$\nit is therefore sufficient to prove that\n$$\n\\lim_{m \\rightarrow \\infty}\n\\int_{0}^{\\half\\pi}\n\\frac{\\sin (2m+1)\\theta}{\\sin \\theta}\n\\phi(\\theta) \\dmeasure \\theta\n=\n0,\n$$\nwhere $\\phi(\\theta)$ stands in turn for each of the functions\n$$\nf(x+2\\theta) - f(x+0),\n\\quad\nf(x-2\\theta) - f(x-0).\n$$\nNow, by \\hardsubsectionref{3}{6}{4} example 4, %TODO:replace ref\n$\\theta \\phi(\\theta) \\cosec \\theta$ is a function with limited\ntotal fluctuation in an interval of which $\\theta=0$ is an\nend-point;\\footnote{The other end-point is $\\theta = \\half (b-x)$\n  or $\\theta = \\half (x-a)$, according as $\\phi(\\theta)$\n  represents one or other of the two functions.} and so\nwe may write\n$$\n\\theta\n\\phi (\\theta)\n\\cosec \\theta\n=\n\\chi_{1}(\\theta)\n-\n\\chi_{2}(\\theta),\n$$\nwhere $\\chi_{1}(\\theta), \\chi_{2}(\\theta)$ are bounded positive\nincreasing functions of $\\theta$ such that\n$$\n\\chi_{1}(+0) = \\chi_{2}(+0) = 0.\n$$\n\nHence, given an arbitrary positive number $\\eps$, we can choose a positive\nnumber $\\delta$ such that\n$$\n0 \\leq \\chi_{1}(\\theta) < \\eps,\n\\quad\n0 \\leq \\chi_{2}(\\theta) < \\eps\n$$\nwhenever $0 \\leq \\theta \\leq \\half \\delta$.\n\nWe now obtain inequalities satisfied by the three integrals on the\nright of the obvious equation\n\\begin{align*}\n  &\n \\int_{0}^{\\half\\pi}\n \\frac{\\sin (2m+1)\\theta}{\\sin \\theta}\n \\phi(\\theta) \\dmeasure \\theta\n =\n \\int_{\\half \\delta}^{\\half \\pi}\n \\sin (2m+1)\\theta\n \\frac{\\phi(\\theta)}{\\sin \\theta}\n \\dmeasure \\theta\n \\\\\n &\n \\quad\n \\int_{0}^{\\half \\delta}\n \\frac{\\sin (2m+1)\\theta}{\\theta}\n \\chi_{1}(\\theta) \\dmeasure \\theta\n -\n \\int_{0}^{\\half \\delta}\n \\frac{\\sin (2m+1)\\theta}{\\theta}\n \\chi_{2}(\\theta) \\dmeasure \\theta\n\\end{align*}\n%\n% 178\n%\n\nThe modulus of the first integral can be made less than $\\eps$ by taking $m$\nsufficiently large; this follows from\n\\hardsubsectionref{9}{4}{1} (i) %TODO:subref\nsince $\\phi(\\theta) \\cosec \\theta$\nhas an integral which converges absolutely in the interval\n$(\\half \\delta, \\half \\pi)$.\n\nNext, from the second mean-value theorem, it follows that there is a\nnumber $\\xi$ between $0$ and $\\delta$ such that\n\\begin{align*}\nTODO\n\\end{align*}\n\nSince\n$\\int^{\\infty} \\frac{\\sin t}{t} \\dmeasure t$\nis convergent, it follows that\n$\\absval{\\int_{\\beta}^{\\infty} \\frac{\\sin u}{u} \\dmeasure u}$\nhas an upper bound\\footnote{The reader will find it interesting to prove that\n  $\\int_{0}^{\\infty} \\frac{\\sin u}{u} \\dmeasure u = \\half\\pi$.}\n$B$ which is independent of $\\beta$, and it is then clear that\n$$\n\\absval{\n  \\int_{0}^{\\half\\delta}\n  \\frac{\\sin (2m+1)\\theta}{\\theta}\n  \\chi_{1}(\\theta)\n  \\dmeasure \\theta\n}\n\\leq\n2 B \\chi_{1}(\\half\\delta)\n<\n2 B \\eps.\n$$\n\nOn treating the third integral in a similar manner, we see that we can\nmake\n$$\n\\absval{\n  \\int_{0}^{\\half\\delta}\n  \\frac{\\sin (2m+1)\\theta}{\\sin \\theta}\n  \\phi(\\theta)\n  \\dmeasure \\theta\n}\n<\n(4B+1) \\eps\n$$\nby taking $m$ sufficiently large; \\emph{and so we have proved that}\n$$\n\\lim_{m \\rightarrow 
\infty}
\int_{0}^{\half\pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
\phi(\theta)
\dmeasure \theta
=
0.
$$
But it has been seen that this is a sufficient condition for the limit
of $S_{m}(x)$ to be $\half \thebrace{f(x+0) + f(x-0)}$; and we have therefore
established the convergence of a Fourier series in the circumstances
enunciated in \hardsubsectionref{9}{4}{2}.

%\begin{smallfontnote}%TODO
Note. The reader should observe that in either proof of the
convergence of a Fourier \emph{series} the second mean-value theorem is
required; but to prove the summability of the series, the \emph{first}
mean-value theorem is adequate. It should also be observed that, while
restrictions are laid upon $f(t)$ throughout the range $(-\pi, \pi)$ in
establishing the \emph{summability} at any point $x$, the only additional
restriction necessary to ensure \emph{convergence} is a restriction on the
behaviour of the function in the \emph{immediate neighbourhood} of the point
$x$. The fact that the convergence depends only on the behaviour of the
function in the immediate neighbourhood of $x$ (provided that the
function has an integral which is absolutely convergent) was noticed
by Riemann and has been emphasised by Lebesgue,
%TODO:ref
Series trigonometriques, p. 60.
%\end{smallfontnote}%TODO

It is obvious that the condition\footnote{Due to Jordan, Comptes Rendus, xcii. (1881), p. 228.}
that $x$ should be an interior point
of an interval in which $f(t)$ has limited total fluctuation is merely a
\emph{sufficient} condition for the convergence of the Fourier series; and
it may be replaced by any condition which makes
$$
\lim_{m \rightarrow \infty}
\int_{0}^{\half\pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
\phi(\theta)
\dmeasure \theta
=
0.
$$
%
% 179
%

Jordan's condition is, however, a natural modification of the
Dirichlet condition that the function $f(t)$ should have only a finite
number of maxima and minima, and it does not increase the difficulty
of the proof.

Another condition with the same effect is due to Dini,
% TODO:ref
Sopra le Serie di Fourier (Pisa, 1880),
namely that, if
$$
\Phi(\theta)
=
\thebrace{
  f(x+2\theta) + f(x-2\theta) - f(x+0) - f(x-0)
}
/
\theta,
$$
then
$\int_{0}^{a} \Phi(\theta) \dmeasure \theta$ should converge absolutely for some positive value of
$a$.

[If the condition is satisfied, given $\eps$ we can find $\delta$ so that
$$
\int_{0}^{\half\delta}
\absval{
  \Phi(\theta)
}
\dmeasure \theta
<
\eps,
$$
and then
$$
\absval{
  \int_{0}^{\half\delta}
  \sin (2m+1)\theta
  \frac{\theta}{\sin \theta}
  \Phi(\theta)
  \dmeasure \theta
}
<
\half \pi \eps;
$$
the proof that
$
\absval{
  \int_{\half\delta}^{\half \pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  \phi(\theta)
  \dmeasure \theta
}
<
\eps
$
for sufficiently large values of $m$
follows from the Riemann-Lebesgue lemma.]

A more stringent condition than Dini's is due to Lipschitz,
Journal für Math. lxiii. (1864), p. 296, %TODO:ref
namely $\absval{\phi(\theta)} < C \theta^{k}$, where $C$ and
$k$ are positive and independent of $\theta$.

For other conditions due to Lebesgue and to
de la Vallee Poussin, %TODO:proper accents
see the latter's
Cours d'Analyse Infinitesimale, ii. (1912), pp. 149-150.
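
[That the Lipschitz condition is in fact more stringent than Dini's may be
verified at once: if $\absval{\phi(\theta)} < C \theta^{k}$ for each of the
two functions $\phi(\theta)$, then
$$
\absval{\Phi(\theta)} < 2 C \theta^{k-1},
\quad
\int_{0}^{a} \theta^{k-1} \dmeasure \theta = \frac{a^{k}}{k},
$$
and so, $k$ being positive, $\int_{0}^{a} \Phi(\theta) \dmeasure \theta$
converges absolutely; that is to say, Dini's condition is satisfied.]
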
%TODO:ref
It should be noticed that Jordan's condition differs in character from
Dini's condition; the latter is a condition that the series may
converge \emph{at a point}, the former that the series may converge
\emph{throughout an interval}.

\Subsection{9}{4}{4}{The uniformity of the convergence of Fourier series.}
Let $f(t)$ satisfy the conditions enunciated in \hardsubsectionref{9}{4}{2},
and further let it be continuous
(in addition to having limited total fluctuation) in
an interval $(a, b)$. \emph{Then the Fourier series associated with $f(t)$
  converges uniformly to the sum $f(x)$ at all points $x$ for which
  $a + \delta \leq x \leq b - \delta$, where $\delta$ is any positive number.}

Let $h(t)$ be an auxiliary function defined to be equal to $f(t)$ when
$a \leq t \leq b$ and equal to zero for other values of $t$ in the range
$(-\pi, \pi)$, and
let $\alpha_{n}, \beta_{n}$ denote the Fourier constants of $h(t)$.
Also let $S_{m}^{(2)}(x)$ denote the sum of the first $m + 1$ terms of the
Fourier series associated with $h(t)$.

Then, by \hardsectionref{9}{4} corollary 2,%TODO:ref
it follows that
$\half \alpha_{0} + \sum_{n=1}^{\infty} \theparen{\alpha_{n} \cos nx + \beta_{n} \sin nx}$
is \emph{uniformly} summable throughout the interval $(a+\delta, b-\delta)$;
and since
$$
\absval{
  \alpha_{n} \cos nx + \beta_{n} \sin nx
}
\leq
\sqrt{
  \alpha_{n}^{2} + \beta_{n}^{2}
},
$$
which is independent of $x$ and which, by \hardsubsectionref{9}{4}{1} (ii), %TODO:ref
is $\bigo(1/n)$, it
follows from \hardsectionref{8}{5} corollary that
$$
\half \alpha_{0} + \sum_{n=1}^{\infty} \theparen{\alpha_{n} \cos nx + \beta_{n} \sin nx}
$$
converges uniformly to the sum $h(x)$, which is equal to $f(x)$.

Now, as in \hardsubsectionref{9}{4}{2},
$$
S_{m}(x)
-
S_{m}^{(2)}(x)
=
\frac{1}{\pi}
\int_{\half (b-x)}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
f(x + 2\theta)
\dmeasure \theta
+
\frac{1}{\pi}
\int_{\half (x-a)}^{\half \pi}
\frac{\sin (2m+1)\theta}{\sin \theta}
f(x - 2\theta)
\dmeasure \theta.
$$
%
% 180
%

As in \hardsubsectionref{9}{4}{1} we choose an arbitrary positive number
$\eps$ and then enclose the points at which $f(t)$ is unbounded in a set of intervals
$\delta_{1}, \delta_{2}, \ldots, \delta_{p}$, such
that
$\sum_{r=1}^{p} \int_{\delta_{r}} \absval{f(t)} \dmeasure t < \eps$.

If $K$ be the upper bound of $\absval{f(t)}$ outside these intervals, we then
have, as in \hardsubsectionref{9}{4}{1},
$$
\absval{
  S_{m}(x) - S_{m}^{(2)}(x)
}
<
\theparen{
  \frac{2nK}{2m + 1} + 2\eps
}
\cosec \delta,
$$
where the choice of $n$ depends only on $a$ and $b$ and the form of the
function $f(t)$. Hence, by a choice of $m$ \emph{independent} of $x$ we can make
$$
\absval{
  S_{m}(x) - S_{m}^{(2)}(x)
}
$$
arbitrarily small; so that $\absval{S_{m}(x) - S_{m}^{(2)}(x)}$
tends uniformly to zero. Since
$S_{m}^{(2)}(x) \rightarrow f(x)$
uniformly, it is then obvious that
$S_{m}(x) \rightarrow f(x)$
uniformly; and this is the result to be proved.

%\begin{notefont}???
NOTE. It must be observed that no general statement can be made about
uniformity or absoluteness of convergence of Fourier series.
Thus the
series of \hardsubsectionref{9}{1}{1} example 1 %TODO:ref
converges uniformly except near $x = (2n+1)\pi$
but converges absolutely only when $x = n\pi$, whereas the series of
\hardsubsectionref{9}{1}{1} example 2 %TODO:ref
converges uniformly and absolutely for all real values
of $x$.
%\begin{notefont}???
\begin{wandwexample}
 If $\phi(\theta)$ satisfies suitable conditions in the range $(0, \pi)$,
shew that
\begin{align*}
  \lim_{m \rightarrow \infty}
  \int_{0}^{\pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  \phi(\theta) \dmeasure \theta
  =&
  \lim_{m \rightarrow \infty}
  \int_{0}^{\half \pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  \phi(\theta) \dmeasure \theta
  \\
  & \quad
  +
  \lim_{m \rightarrow \infty}
  \int_{0}^{\half \pi}
  \frac{\sin (2m+1)\theta}{\sin \theta}
  \phi(\pi - \theta) \dmeasure \theta
  \\
  &
  = \half \pi \thebrace{
    \phi(+0) + \phi(\pi - 0)
  }.
\end{align*}
\end{wandwexample}
\begin{wandwexample}
 Prove that, if $a > 0$,
 $$
 \lim_{n \rightarrow \infty}
 \int_{0}^{\infty}
 \frac{\sin (2n+1)\theta}{\sin \theta}
 e^{-a\theta}
 \dmeasure \theta
 =
 \half \pi \coth \half a \pi.
 $$
 \addexamplecitation{Math. Trip. 1894.}
 [Shew that
 \begin{align*}
   \int_{0}^{\infty}
   \frac{\sin (2n+1)\theta}{\sin \theta}
   e^{-a\theta} \dmeasure \theta
   =&
   \lim_{m \rightarrow \infty}
   \int_{0}^{m \pi}
   \frac{\sin (2n+1)\theta}{\sin \theta}
   e^{-a\theta} \dmeasure \theta
   \\
   =&
   \lim_{m \rightarrow \infty}
   \int_{0}^{\pi}
   \frac{\sin (2n+1)\theta}{\sin \theta}
   \thebrace{
     e^{-a \theta}
     + e^{-a(\theta + \pi)}
     + \cdots
     + e^{-a(\theta + m \pi)}
   } \dmeasure \theta
   \\
   =&
   \int_{0}^{\pi}
   \frac{\sin (2n+1)\theta}{\sin \theta}
   \frac{e^{-a \theta} \dmeasure \theta}{1 - e^{-a\pi}},
 \end{align*}
 and use example 1.] %TODO:ref
\end{wandwexample}
\begin{wandwexample}
  Discuss the uniformity of the convergence of Fourier series
  by means of the Dirichlet-Bonnet integrals, without making use of the
  theory of summability.
\end{wandwexample}
%
\Section{9}{5}{The Hurwitz-Liapounoff* theorem concerning Fourier constants.}
\emph{Let $f(x)$ be bounded in the interval $(-\pi, \pi)$ and let
$\int_{-\pi}^{\pi} f(x) \dmeasure x$ exist, so
%TODO:move into section header
* Math. Ann. lvii. (1903), p. 429. Liapounoff discovered the theorem
in 1896 and published it in the Proceedings of the Math. Soc. of the
Univ. of Kharkov. See Comptes Rendus, cxxvi. (1898), p. 1024.
%
% 181
%
that the Fourier constants $a_{n}, b_{n}$ of $f(x)$ exist. Then the series
$$
\half a_{0}^{2}
+
\sum_{n=1}^{\infty} \theparen{
  a_{n}^{2} + b_{n}^{2}
}
$$
is convergent and its sum is\footnote{This integral exists by
  \hardsubsectionref{4}{1}{2} example 1. %TODO:ref
  A proof of the theorem has been given by de la Vallee Poussin, %TODO:accent(s)
  in which the sole restrictions on $f(x)$ are that the (improper) integrals of
  $f(x)$ and $\thebrace{f(x)}^{2}$ exist in the interval $(-\pi, \pi)$. See his
  Cours d'Analyse Infinitesimale, ii. (1912), pp. 
165-166.}
$$
\frac{1}{\pi}
\int_{-\pi}^{\pi}
\thebrace{
  f(x)
}^{2}
\dmeasure x.
$$}.

It will first be shewn that, with the notation of \hardsectionref{9}{4},
$$
\lim_{m \rightarrow \infty}
\int_{-\pi}^{\pi}
\thebrace{
  f(x)
  -
  \frac{1}{m}
  \sum_{n=0}^{m-1}
  S_{n}(x)
}^{2}
\dmeasure x
=
0.
$$

Divide the interval $(-\pi, \pi)$ into $4r$ parts, each of length $\delta$;
let the upper and lower bounds of $f(x)$ in the interval
$\thebrace{(2p-1)\delta - \pi, (2p+3)\delta - \pi}$
be $U_{p}, L_{p}$, and let the upper bound of $\absval{f(x)}$ in the
interval $(-\pi, \pi)$ be $K$. Then, by
\hardsectionref{9}{4} corollary 1, %TODO:ref
\begin{align*}
  \absval{
    f(x)
    -
    \frac{1}{m}
    \sum_{n=0}^{m-1} S_{n}(x)
  }
  <&
  U_{p} - L_{p} + 2K / \thebrace{m \sin^{2} \half\delta}
  \\
  <&
  2K \thebracket{
    1 + 1/\thebrace{m \sin^{2} \half\delta}
  }
\end{align*}
when $x$ lies between $2p\delta - \pi$ and $(2p+2)\delta - \pi$.

Consequently, by the first mean-value theorem,
$$
\int_{-\pi}^{\pi}
\thebrace{
  f(x)
  -
  \frac{1}{m}
  \sum_{n=0}^{m-1} S_{n}(x)
}^{2}
\dmeasure x
<
2K
\thebrace{
  1
  +
  \frac{1}{m \sin^{2} \half\delta}
}
\thebrace{
  2\delta
  \sum_{p=0}^{2r-1}
  (U_{p}-L_{p})
  +
  \frac{4Kr}{m \sin^{2} \half \delta}
}.
$$

Since $f(x)$ satisfies the Riemann condition of integrability
(\hardsubsectionref{4}{1}{2}), it follows that both
$4\delta \sum_{p=0}^{r-1} (U_{2p}-L_{2p})$
and
$4\delta \sum_{p=0}^{r-1} (U_{2p+1}-L_{2p+1})$ can be made arbitrarily
small by giving $r$ a sufficiently large value.
When $r$ (and therefore also $\delta$) has been given
such a value, we may choose $m_{1}$ so large that
$r / \thebrace{m_{1} \sin^{2} \half\delta}$ is
arbitrarily small. That is to say, we can make the expression on the
right of the last inequality arbitrarily small by giving $m$ any value
greater than a determinate value $m_{1}$.
Hence the expression on the left
of the inequality tends to zero as $m \rightarrow \infty$.

But evidently
\begin{align*}
  &
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \frac{1}{m}
    \sum_{n=0}^{m-1} S_{n}(x)
  }^{2} \dmeasure x
  \\
  &\quad
  =
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \sum_{n=0}^{m-1}
    \frac{m-n}{m}
    A_{n}(x)
  }^{2} \dmeasure x
  \\
  &\quad
  =
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \sum_{n=0}^{m-1}
    A_{n}(x)
    +
    \sum_{n=0}^{m-1}
    \frac{n}{m}
    A_{n}(x)
  }^{2} \dmeasure x
  \\
  &\quad
  =
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }^{2} \dmeasure x
  +
  \int_{-\pi}^{\pi}
  \thebrace{
    \sum_{n=0}^{m-1}
    \frac{n}{m}
    A_{n}(x)
  }^{2} \dmeasure x
  \\
  &\quad\quad
  +
  2
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }
  \cdot
  \thebrace{
    \sum_{n=0}^{m-1}
    \frac{n}{m}
    A_{n}(x)
  } \dmeasure x
  \\
  &\quad
  =
    \int_{-\pi}^{\pi}
    \thebrace{
      f(x)
      -
      \sum_{n=0}^{m-1}
      A_{n}(x)
    }^{2} \dmeasure x
    +
    \frac{\pi}{m^{2}}
    \sum_{n=0}^{m-1}
    n^{2} (a_{n}^{2} + b_{n}^{2}),
\end{align*}
%
% 182
%
since
$$
\int_{-\pi}^{\pi}
f(x) A_{r}(x) \dmeasure x
=
\int_{-\pi}^{\pi}
\thebrace{
  \sum_{n=0}^{m-1}
  A_{n}(x)
}
A_{r}(x)
\dmeasure x
$$
when $r = 0,1,2,\ldots,m-1$.

Since the original integral tends to zero and since it has been proved
equal to the sum of two positive expressions, it follows that each of
these expressions tends to zero; that is to say
$$
\int_{-\pi}^{\pi}
\thebrace{
  f(x)
  -
  \sum_{n=0}^{m-1}
  A_{n}(x)
}^{2} \dmeasure x
\rightarrow
0.
$$

Now the expression on the left is equal to
\begin{align*}
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
  }^{2} \dmeasure x
  -&
  2
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
    -
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }
  \cdot
  \thebrace{
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }
  \dmeasure x
  \\
  &\quad\quad
  -
  \int_{-\pi}^{\pi}
  \thebrace{
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }^{2}
  \dmeasure x
  \\
  =&
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
  }^{2}
  \dmeasure x
  -
  \int_{-\pi}^{\pi}
  \thebrace{
    \sum_{n=0}^{m-1}
    A_{n}(x)
  }^{2}
  \dmeasure x
  \\
  =&
  \int_{-\pi}^{\pi}
  \thebrace{
    f(x)
  }^{2}
  \dmeasure x
  -
  \pi
  \thebrace{
    \half a_{0}^{2}
    +
    \sum_{n=1}^{m-1}
    (a_{n}^{2} + b_{n}^{2})
  }
  ,
\end{align*}
so that, as $m \rightarrow \infty$,
$$
\int_{-\pi}^{\pi}
\thebrace{
  f(x)
}^{2}
\dmeasure x
-
\pi
\thebrace{
  \half a_{0}^{2}
  +
  \sum_{n=1}^{m-1}
  (a_{n}^{2} + b_{n}^{2})
}
\rightarrow
0.
$$
This is the theorem stated.

% TODO:formatting
Corollary. Parseval's theorem\footnote{Mem. par divers savans, i. (1805), pp. 639-648.%TODO:ref
  Parseval, of course, assumed the permissibility of integrating the trigonometrical series
  term-by-term.}.
If $f(x), F(x)$ both satisfy the\nconditions laid on $f(x)$ at the beginning of this section, and if\n$A_{n}, B_{n}$ be the Fourier constants of $F(x)$, it follows by subtracting the pair\nof equations which may be combined in the one form\n$$\n\\int_{-\\pi}^{\\pi}\n\\thebrace{\n  f(x) \\pm F(x)\n}^{2}\n\\dmeasure x\n=\n\\pi\n\\thebrace{\n  \\half (a_{0} \\pm A_{0})^{2}\n  +\n  \\sum_{n=1}^{\\infty}\n  ((a_{n} \\pm A_{n})^{2}\n  +\n  (b_{n} \\pm B_{n})^{2})\n}\n$$\nthat\n$$\n\\int_{-\\pi}^{\\pi}\nf(x)F(x)\n\\dmeasure x\n=\n\\pi\n\\thebrace{\n  \\half a_{0}A_{0}\n  +\n  \\sum_{n=1}^{\\infty}\n  (a_{n}A_{n} + b_{n}A_{n})\n}\n$$\n\\Section{9}{6}{Riemann's theory of trigonometrical series.}\nThe theory of Dirichlet concerning Fourier series is devoted to series\nwhich represent given functions. Important advances in the theory were\nmade by Riemann, who considered properties of functions defined by a\nseries of the type\\footnote{Throughout \\hardsectionref{9}{6}--\\hardsubsubsectionref{9}{6}{3}{2} %TODO:cite multiple\n  the letters $a_{n}, b_{n}$ do not necessarily denote Fourier constants.}\n$\\half a_{0} + \\sum_{n=1}^{\\infty} (a_{n} \\cos nx + b_{n} \\sin nx)$\nwhere it is assumed that\n$\\lim_{n \\rightarrow \\infty} (a_{n} \\cos nx + b_{n} \\sin nx) = 0$.\nWe shall give the propositions leading\nup to Riemann's theorem\\footnote{The proof given is due to G. Cantor, Journal fiir\n  Math, lxxii. (1870), pp. 130-142.} %TODO:ref\nthat if two trigonometrical series converge and\nare equal\n%\n% 183\n%\nat all points of the range $(-\\pi, \\pi)$ with the possible exception of a\nfinite number of points, corresponding coefficients in the two series\nare equal.\n\n\\Subsection{9}{6}{1}{Riemann's associated function.}\nLet the sum of the series\n$\\half a_{0} + \\sum_{n=1}^{\\infty} (a_{n} \\cos nx + b_{n} \\sin nx)\n=\nA_{0} + \\sum_{n=1}^{\\infty} A_{n}(x)$,\nat any point $x$ where it converges, be denoted by $f(x)$.\n\nLet\n$$\nF(x) = \\half A_{0} x^{2} - \\sum_{n=1}^{\\infty} n^{-2} A_{n}(x).\n$$\n\n\\emph{Then, if the series defining $f(x)$ converges at all points of any\nfinite interval, the series defining $F(x)$ converges for all real\nvalues of $x$.}\n\nTo obtain this result we need the following Lemma due to Cantor :\n\nCantor's lemma\\footnote{Riemann appears to have regarded this result as obvious.\n  The proof here given is a modification of Cantor's proof,\n  Math. Ann. iv. (1871), pp. 139-143, and Journal fiir Math, lxxii. (1870), pp. 130-138.}.%TODO:ref\n\\emph{If $\\lim_{n \\rightarrow \\infty} A_{n}(x) = 0$ for all values of $x$ such that $a \\leq x \\leq b$,\nthen $a_{n} \\rightarrow 0, b_{n} \\rightarrow 0$.\n}\n\nFor take two points $x, x+\\delta$ of the interval. 
Then, given $\\eps$, we can\nfind $n_{0}$ such that\\footnote{The value of $n_{0}$ depends on $x$ and on $\\delta$.},\nwhen $n > n_{0}$\n$$\n\\absval{\n  a_{n} \\cos nx + b_{n} \\sin nx\n}\n<\n\\eps,\n\\quad\n\\absval{\n  a_{n} \\cos n(x+\\delta) + b_{n} \\sin n(x+\\delta)\n}\n<\n\\eps.\n$$\n\nTherefore\n$$\n\\absval{\n  \\cos n\\delta\n  (a_{n} \\cos nx + b_{n} \\sin nx)\n  +\n  \\sin n\\delta\n  (-a_{n} \\sin nx + b_{n} \\cos nx)\n}\n<\n\\eps.\n$$\n\nSince\n$$\n\\absval{\n  \\cos n\\delta\n  (a_{n} \\cos nx + b_{n} \\sin nx)\n}\n<\n\\eps,\n$$\nit follows that\n$$\n\\absval{\n  \\sin n\\delta\n  (-a_{n} \\sin nx + b_{n} \\cos nx)\n}\n<\n2\\eps,\n$$\nand it is obvious that\n$$\n\\absval{\n  \\sin n\\delta\n  (a_{n} \\sin nx + b_{n} \\cos nx)\n}\n<\n2\\eps.\n$$\n\nTherefore, squaring and adding,\n$$\n\\sqrt{a_{n}^{2} + b_{n}^{2}}\n\\absval{ \\sin n\\delta }\n<\n2\\eps\\sqrt{2}\n$$\n\nNow suppose that $a_{n}, b_{n}$ have not the unique limit $0$; it will be shewn\nthat this hypothesis involves a contradiction. For, by this\nhypothesis, \\emph{some} positive number $\\eps_{0}$ exists such that there is an\nunending increasing sequence $n_{1}, n_{2}, \\ldots$ of values of $n$, for which\n$$\n\\sqrt{a_{n}^{2} + b_{n}^{2}} > 4\\eps_{0}.\n$$\n\nNow let the range of values of $\\delta$ be called the interval $I_{1}$ of length\n$L_{1}$ on the real axis.\n\nTake $n_{1}'$ the smallest of the integers $n_{r}$ such that\n$n_{1}' L_{1} > 2\\pi$; then\n$\\sin n_{1}'y$ goes through all its phases in the interval $I$; call\n$I_{2}$\nthat sub-interval\\footnote{If there is more than one such sub-interval,\n  take that which lies on the left.}\nof $I_{1}$ in which $\\sin n_{1}'y > 1/\\sqrt{2}$; its length is\n$\\pi / (2n_{1}') = L_{2}$. Next take $n_{2}'$ the smallest of the integers\n$n_{r} (> n_{1}')$\nsuch that $n_{2}' L_{2} > 2\\pi$, so that\n$\\sin n_{2}'y$ goes through all its phases in\nthe interval $I_{2}$; call $I_{3}$ that sub-interval%TODO:repeat previous footnote\nof $I_{2}$ in which $\\sin n_{2}'y > 1/\\sqrt{2}$;\nits length is $\\pi / (2 n_{2}') = L_{3}$. We thus get a sequence of\ndecreasing intervals $I_{1}, I_{2},\\ldots$ each contained in all the previous\nones. It is obvious from the definition of an irrational number that\nthere is a certain point $a$ which is not outside any of these\nintervals, and $\\sin na \\geq 1/\\sqrt{2}$ when\n$n=n_{1}', n_{2}', \\ldots (n_{r+1}' > n_{r}')$.\nFor these values of $n$,\n$\\sqrt{a_{n}^{2} + b_{n}^{2}} \\sin na > 2\\eps_{0} \\sqrt{2}$.\nBut it has been shewn that\ncorresponding\n%\n% 184\n%\nto given numbers $a$ and $\\eps$ we can find $n_{0}$ such that when\n$n > n_{0}$,\n$\\sqrt{a_{n}^{2} + b_{n}^{2}} \\sin na < 2\\eps_{0} \\sqrt{2}$;\nsince some values of $n_{r}'$ are greater than $n_{0}$, the\nrequired contradiction has been obtained, because we may take\n$\\eps < \\eps_{0}$; therefore\n$a_{n} \\rightarrow 0, b_{n} \\rightarrow 0$.\n\nAssuming that the series defining $f(x)$ converges at all points of a\ncertain interval of the real axis, we have just seen that\n$a_{n} \\rightarrow 0, b_{n} \\rightarrow 0$. 
Then, for all real values of\n$x$,\n$\\absval{a_{n} \\cos nx + b_{n} \\sin nx} \\leq \\sqrt{a_{n}^{2} + b_{n}^{2}} \\rightarrow 0$,\nand so, by \\hardsubsectionref{3}{3}{4}, the\nseries\n$\\half A_{0} x^{2} - \\sum_{n=1}^{\\infty} n^{-2} A_{n}(x) = F(x)$\nconverges absolutely and uniformly for\nall real values of $x$; therefore, (\\hardsubsectionref{3}{3}{2}), $F(x)$ is continuous for all\nreal values of $x$.\n\n\\Subsection{9}{6}{2}{Properties of Riemann's associated function;\n  Riemann's first lemma.}\nIt is now possible to prove Riemann's first lemma \\emph{that if\n  $$\n  G(x, \\alpha)\n  =\n  \\frac{F(x + 2\\alpha) + F(x - 2\\alpha) - 2F(x)}{4 \\alpha^{2}},\n  $$\n  then $\\lim_{\\alpha \\rightarrow 0} G(x,\\alpha) = f(x)$, provided that\n  $\\sum_{n=0}^{\\infty} A_{n}(x)$ converges for the value\n  of $x$ under consideration.}\n\nSince the series defining $F(x), F(x \\pm 2\\alpha)$ converge absolutely, we may\nrearrange them; and, observing that\n\\begin{align*}\n  \\cos n(x + 2\\alpha) + \\cos n(x - 2\\alpha) - 2 \\cos nx =& -4 \\sin^{2} n\\alpha \\cos nx,\\\\\n  \\sin n(x + 2\\alpha) + \\sin n(x - 2\\alpha) - 2 \\sin nx =& -4 \\sin^{2} n\\alpha \\sin nx,\n\\end{align*}\nit is evident that\n$$\nG(x, \\alpha)\n=\nA_{0}\n+\n\\sum_{n=1}^{\\infty}\n\\theparen{\n  \\frac{\\sin n\\alpha}{n\\alpha}\n}^{2}\nA_{n}(x).\n$$\n\nIt will now be shewn that this series converges uniformly with regard\nto $\\alpha$ for all values of $\\alpha$, provided that\n$\\sum_{n=1}^{\\infty} A_{n}(x)$ converges. The result\nrequired is then an immediate consequence of \\hardsubsectionref{3}{3}{2}:\nfor, if\n$f_{n}(\\alpha) = \\theparen{\n  \\frac{\\sin n\\alpha}{n\\alpha}\n}^{2}, (\\alpha \\neq 0)$\nand $f_{n}(0) = 1$, then $f_{n}(\\alpha)$ is continuous for all values of $\\alpha$, and so\n$G(x,\\alpha)$ is a continuous function of $\\alpha$, and therefore, by \\hardsectionref{3}{2},\n$G(x, 0) = \\lim_{\\alpha \\rightarrow \\infty} G (x, \\alpha)$.\n\nTo prove that the series defining $G(x, \\alpha)$ converges uniformly, we\nemploy the test given in \\hardsubsectionref{3}{3}{5} example 2. %TODO:ref\nThe expression corresponding to $\\omega_{n}(x)$ is\n$f_{n}(\\alpha)$, and it is obvious that\n$\\absval{f_{n}(\\alpha)} \\leq 1$; it is therefore sufficient to shew\nthat\n$\\sum_{n=1}^{\\infty} \\absval{f_{n+1}(\\alpha) - f_{n}(\\alpha)} < K$,\nwhere $K$ is independent of $\\alpha$.\n\n%\\begin{smallfont}\nIn fact\\footnote{Since $x^{-1} \\sin x$ decreases as $x$ increases from $0$ to $\\pi$.},\nif $s$ be the integer such that\n$a \\absval{\\alpha} \\leq \\pi < (s+1) \\absval{\\alpha}$,\nwhen $\\alpha \\neq 0$ we have\n$$\n\\sum_{n=1}^{s-1}\n\\absval{\n  f_{n+1}(\\alpha) - f_{n}(\\alpha)\n}\n=\n\\sum_{n=1}^{s-1}\n(\nf_{n}(\\alpha)\n??? 
%TODO\nf_{n+1}(\\alpha)\n)\n=\n\\frac{ \\sin^{2} \\alpha }{\\alpha^{2}}\n-\n\\frac{ \\sin^{2} s\\alpha }{s^{2} \\alpha^{2}}.\n$$\n%\n% 185\n%\n\nAlso\n\\begin{align*}\n  \\sum_{n=s+1}^{\\infty}\n  \\absval{\n    f_{n+1}(\\alpha) - f_{n}(\\alpha)\n  }\n  =&\n  \\sum_{n=s+1}^{\\infty}\n  \\absval{\n    \\thebrace{\n      \\frac{\\sin^{2} n\\alpha}{\\alpha^{2}}\n      \\theparen{\n        \\frac{1}{n^{2}}\n        -\n        \\frac{1}{(n+1)^{2}}\n      }\n    }\n    +\n    \\frac{\\sin^{2} n\\alpha - \\sin^{2} (n+1)\\alpha }{(n+1)^{2} \\alpha^{2}}\n  }\n  \\\\\n  \\leq &\n  \\sum_{n=s+1}^{\\infty}\n  \\frac{1}{\\alpha^{2}}\n  \\theparen{\n    \\frac{1}{n^{2}}\n    -\n    \\frac{1}{ (n+1)^{2} }\n  }\n  +\n  \\sum_{n=s}^{\\infty}\n  \\frac{\\absval{\n      \\sin^{2} n\\alpha - \\sin^{2} (n+1)\\alpha\n    }}{ (n+1)^{2} \\alpha^{2} }\n  \\\\\n  \\leq &\n  \\frac{1}{ (s^{2} + 1)^{2} \\alpha^{2} }\n  +\n  \\sum_{n=s+1}^{\\infty}\n  \\frac{\\absval{\n      \\sin \\alpha \\sin (2n+1)\\alpha\n    }}{ (n^{2} + 1)^{2} \\alpha^{2} }\n  \\\\\n  \\leq &\n  \\frac{1}{ (s^{2} + 1)^{2} \\alpha^{2} }\n  +\n  \\frac{\\absval{\\sin \\alpha}}{\\alpha^{2}}\n  \\sum_{n=s+1}^{\\infty}\n  \\frac{1}{(n+1)^{2}}\n  \\\\\n  \\leq &\n  \\frac{1}{\\pi^{2}}\n  +\n  \\frac{\\absval{\\sin \\alpha}}{\\alpha^{2}}\n  \\int_{s}^{\\infty}\n  \\frac{\\dmeasure x}{(x+1)^{2}}\n  \\\\\n  \\leq &\n  \\frac{1}{\\pi^{2}}\n  +\n  \\frac{1}{(s+1) \\alpha}.\n\\end{align*}\n\nTherefore\n\\begin{align*}\n  \\sum_{n=1}^{\\infty}\n  \\absval{\n    f_{n+1}(\\alpha) - f_{n}(\\alpha)\n  }\n  \\leq &\n  \\frac{\\sin^{2} \\alpha}{\\alpha^{2}}\n  -\n  \\frac{\\sin^{2} s\\alpha}{s^{2}\\alpha^{2}}\n  +\n  \\theparen{\n    \\frac{\\sin^{2} s\\alpha}{s^{2}\\alpha^{2}}\n    +\n    \\frac{\\sin^{2} (s+1)\\alpha}{(s+1)^{2}\\alpha^{2}}\n  }\n  + \\frac{1}{\\pi^{2}} + \\frac{1}{\\pi}.\n  \\\\\n  \\leq &\n  1 + \\frac{1}{\\pi} + \\frac{2}{\\pi^{2}}.\n\\end{align*}\n\nSince this expression is independent of $\\alpha$, the result required has\nbeen obtained\\footnote{This inequality is obviously true when $\\alpha = 0$.}.\n%\\end{smallfont}\n\nHence, if $\\sum_{n=0}^{\\infty} A_{n}(x)$ converges, the series defining $G(x,\\alpha)$\nconverges uniformly with respect to $\\alpha$ for all values of $\\alpha$, and, as stated above,\n$$\n\\lim_{\\alpha \\rightarrow \\infty} G(x, \\alpha)\n= G(x,0)\n= A_{0} + \\sum_{n=1}^{\\infty} A_{n}(x)\n= f(x).\n$$\n\n\\begin{wandwexample}\n  If\n  $H(x, \\alpha, \\beta) = \\frac{F(x+\\alpha+\\beta) ? %TODO\n    F(x+\\alpha-\\beta) ? 
%TODO\n    F(x-\\alpha+\\beta) +\n    F(x-\\alpha-\\beta)\n  }{4 \\alpha \\beta}$\n  shew that $H(x,\\alpha,\\beta) \\rightarrow f(x)$ when $f(x)$ converges if\n  $\\alpha,\\beta \\rightarrow 0$ in such a\n  way that $\\alpha/\\beta$ and $\\beta/\\alpha$ remain finite.\n  \\addexamplecitation{Riemann.}\n\\end{wandwexample}\n\\Subsubsection{9}{6}{2}{1}{Riemann's second lemma.}\n\\emph{With the notation of\n\\hardsectionref{9}{6}--\\hardsubsectionref{9}{6}{2}, %TODO:refmultiple\nif\n$a_{n}, b_{n} \\rightarrow 0$, then\n$\\lim_{\\alpha \\rightarrow 0} \\frac{F(x+2\\alpha)+F(x-2\\alpha)-2 F(x)}{4\\alpha}$ for all values of $x$.\n}\n\nFor\n$\\frac{1}{4} \\alpha^{-1} \\thebrace{ F(x + 2\\alpha) + F(x - 2\\alpha) - 2 F(x)\n  =\n  A_{0} \\alpha + \\sum_{n=1}^{\\infty} \\frac{\\sin^{2} n\\alpha}{n^{2}\\alpha} A_{n}(x)\n}$; but by \\hardsubsectionref{9}{1}{1} example 3, %TODO:example ref\nif $\\alpha > 0$,\n$\\sum_{n=1}^{\\infty} \\frac{\\sin^{2} n\\alpha}{n^{2}\\alpha} = \\half (\\pi - \\alpha)$; and so,\nsince\n\\begin{align*}\n  &\n  A_{0}(x) \\alpha\n  +\n  \\sum_{n=1}^{\\infty}\n  \\frac{\\sin^{2} n\\alpha}{n^{2} \\alpha} A_{n}(x)\n  \\\\ &\n  A_{0}(x) \\alpha\n  +\n  \\half (\\pi - \\alpha) A_{1}(x)\n  +\n  \\sum_{n=1}^{\\infty}\n  \\thebrace{\n    \\half (\\pi - \\alpha)\n    -\n    \\sum_{m=1}^{n}\n    \\frac{\\sin^{2} m\\alpha}{m^{2} \\alpha}\n  } \\thebrace {\n    A_{n+1}(x) - A_{n}(x)\n  },\n\\end{align*}\nit follows from \\hardsubsectionref{3}{3}{5} example 2, %TODO:ref example\nthat this series converges uniformly\nwith regard to $\\alpha$ for all values of $\\alpha$ greater than, or equal to,\nzero\\footnote{If we define $g_{n}(\\alpha)$ by the equations\n  $g_{n}(\\alpha) = \\half (\\pi - \\alpha) - \\sum_{m=1}^{n} \\frac{\\sin^{2} m\\alpha}{m^{2} \\alpha}, (\\alpha \\neq 0)$,\n  and $g_{n}(0) = \\half \\pi$, then $g_{n}(\\alpha)$ is continuous when\n  $\\alpha \\geq 0$, and $g_{n+1}(\\alpha} \\leq g_{n}(\\alpha)$.\n}\n%\n% 186\n%\n\nBut\n\\begin{align*}\n  &\n  \\lim_{\\alpha \\rightarrow +0} \\alpha^{-1}\n  \\thebrace{\n    F(x + 2\\alpha) + F(x - 2\\alpha) - 2 F(x)\n  }\n  \\\\\n  &\n  \\lim_{\\alpha \\rightarrow +0}\n  \\thebracket{\n    A_{0}(x) \\alpha\n    + \\half (\\pi - \\alpha) A_{1}(x)\n    + \\sum_{n=1}^{\\infty} g_{n}(\\alpha)\n    \\thebrace{\n      A_{n+1}(x) - A_{n}(x)\n    }\n  },\n\\end{align*}\nand this limit is the value of the function when $\\alpha = 0$, by\n\\hardsubsectionref{3}{3}{2};\nand this value is zero since $\\lim_{n \\rightarrow \\infty} A_{n}(x)$.\nBy symmetry we see that\n$\\lim_{\\alpha \\rightarrow +0} = \\lim_{\\alpha \\rightarrow -0}$.\n\n\\Subsection{9}{6}{3}{Riemann's theorem\\footnote{\n    The proof we give is due to Gr. Cantor, Journal fit r Math, lxxii.\n    (1870), pp. 139-142. + Quoted by G. Cantor, Journal filr Math, i.xxii.\n    (1870).}\non trigonometrical series.}\n\\emph{Two trigonometrical series which converge and are equal at all points\n  of the range $(-\\pi, \\pi)$, with the possible exception of a finite\n  number of points, must have corresponding coefficients equal.}\n%\\begin{smallfont}\nAn immediate deduction from this theorem is that a function of the\ntype considered in \\hardsubsectionref{9}{4}{2} cannot be represented\nby any trigonometrical series in the range $(-\\pi, \\pi)$ other than\nthe Fourier series. 
This fact was first noticed by Du Bois Reymond.\n\nWe observe that it is certainly possible to have other expansions of\n(say) the form\n$$\n\\alpha_{0}\n+\n\\sum_{m=1}^{\\infty} \\theparen{\n  \\alpha_{m} \\cos \\half mx\n  +\n  \\beta_{m} \\sin \\half mx\n},\n$$\nwhich represent $f(x)$ between $-\\pi$ and $\\pi$; for write\n$x = 2\\xi$, and\nconsider a function $\\phi(\\xi)$, which is such that\n$\\phi(\\xi) = f(2\\xi)$ when $-\\half\\pi < \\xi < \\half\\pi$, and\n$\\phi(\\xi) = g(\\xi)$ when $-\\pi < \\xi < -\\half \\pi$, and when\n$\\half \\pi < \\xi < \\pi$,\nwhere $g(\\xi)$ is any function satisfying the conditions of\n\\hardsubsectionref{9}{4}{3}. Then if we expand\n$\\phi(\\xi)$ in a Fourier series of the form\n$$\n\\alpha_{0}\n+\n\\sum_{m=1}^{\\infty} \\theparen{\n  \\alpha_{m} \\cos m \\xi\n  +\n  \\beta_{m} \\sin m \\xi\n},\n$$\nthis expansion represents $f(x)$ when $-\\pi < x < \\pi$; and clearly by\nchoosing the function $g(\\xi)$ in different ways an unlimited number of\nsuch expansions can be obtained.\n\nThe question now at issue is, whether other series proceeding in\nsines and cosines of \\emph{integral} multiples of $x$ exist, which differ from\nFourier's expansion and yet represent $f(x)$ between $-\\pi$ and $\\pi$.\n%\\end{smallfont}\n\nIf possible, let there be two trigonometrical series satisfying the\ngiven conditions, and let their difference be the trigonometrical\nseries\n$$\nA_{0}\n+\n\\sum_{n=1}^{\\infty} A_{n}(x)\n=\nf(x).\n$$\n\nThen $f(x) = 0$ at all points of the range $(-\\pi, \\pi)$ w th a finite number\nof exceptions; let $\\xi_{1}, \\xi_{2}$ be a consecutive pair of these exceptional\npoints, and let $F(x)$ be Riemann's associated function. We proceed to establish a\nlemma concerning the value of $F(x)$ when $\\xi_{1} < x < \\xi_{2}$.\n\n\\Subsubsection{9}{6}{3}{1}{Schwartz' lemma.}\\footnote{Quoted by TODO} \\emph{In the range $\\xi_{1} < x < \\xi_{2}$,\n  $F(x)$ is a linear function of $x$, if $f(x) = 0$ in this range.}\n\nFor if $\\theta = 1$ or if $\\theta = -1$\n$$\n\\phi(x)\n=\n\\theta \\thebracket{\n  F(x)\n  - F(\\xi_{1})\n  - \\frac{x - \\xi_{1}}{\\xi_{2} - \\xi_{1}} \\thebrace{\n    F(\\xi_{2}) - F(\\xi_{1})\n  }\n}\n-\n\\half h^{2} (x - \\xi_{1})(\\xi_{2} - x)\n$$\nis a continuous function of $x$ in the range\n$\\xi_{1} \\leq x \\leq \\xi_{2}$ and $\\phi(\\xi_{1}) = \\phi(\\xi_{2}) = 0$.\n%\n% 187\n%\n\nIf the first term of $\\phi(x)$ is not zero throughout the\nrange\\footnote{If it is zero throughout the range, $F(x)$\n  is a linear function of $x$.}\nthere will be some point $x=c$ at which it is not zero. Choose the sign of $\\theta$\nso that the first term is positive at $c$, and then choose $h$ so small\nthat $\\phi(c)$ is still positive.\n\nSince $\\phi(x)$ is continuous it attains its upper bound (\\hardsubsectionref{3}{6}{2}), and\nthis upper bound is positive since $\\phi(c) > 0$. Let $\\phi(x)$ attain its\nupper bound at $c_{1}$, so that $c_{1} \\neq \\xi_{1}, c_{1} \\neq \\xi_{2}$.\n\nThen, by Riemann's first\nlemma,\n$$\n\\lim_{\\alpha \\rightarrow 0}\n\\frac{\\phi(c_{1} + \\alpha) + \\phi(c_{1} - \\alpha) - 2 \\phi(c_{1})}{\\alpha^{2}}\n=\nh^{2}.\n$$\n\nBut $\\phi(c_{1} + \\alpha) \\leq \\phi(c_{1}), \\phi(c_{1} - \\alpha) \\leq \\phi(c_{1})$, so this limit must be\nnegative or zero.\n\nHence, by supposing that the first term of $\\phi(x)$ is not everywhere zero\nin the range $(\\xi_{1}, \\xi_{2})$ we have arrived at a contradiction. 
Therefore it\nis zero; and consequently $F(x)$ is a linear function of $x$ in the range\n$\\xi_{1} < x < \\xi_{2}$\nThe lemma is therefore proved.\n\n\\Subsubsection{9}{6}{3}{2}{Proof of Riemann's Theorem.}\nWe see that, in the circumstances under consideration, the curve $y = F(x)$\nrepresents a series of segments of straight lines, the beginning\nand end of each line corresponding to an exceptional point; and as $F(x)$,\nbeing uniformly convergent, is a continuous function of $x$, these\nlines must be connected.\n\nBut, by Riemann's second lemma, even if $\\xi$ be an exceptional point,\n$$\n\\lim_{\\alpha \\rightarrow 0}\n\\frac{F(\\xi + \\alpha) + F(\\xi - \\alpha) - 2F(\\xi)}{\\alpha}\n=\n0.\n$$\n\nNow the fraction involved in this limit is the difference of the\nslopes of the two segments which meet at that point whose abscissa is\n$\\xi$; therefore the two segments are continuous in direction, so the\nequation $y = F(x)$ represents a single line. If then we write\n$F(x) = cx + c'$, it follows that $c$ and $c'$ have the same values for\n\\emph{all} values of $x$. Thus\n$$\n\\half A_{0} x^{2} - cx - c'\n=\n\\sum_{n=1}^{\\infty} n^{-2} A_{n}(x),\n$$\nthe right-hand side of this equation being periodic, with period $2\\pi$.\n\nThe left-hand side of this equation must therefore be periodic, with\nperiod $2\\pi$. Hence\n$$\nA_{0} = 0,\n\\quad\nc = 0,\n$$\nand\n$$\n-c' = \\sum_{n=1}^{\\infty} n^{-2} A_{n}(x).\n$$\n\nNow the right-hand side of this equation converges uniformly, so we\ncan multiply by $\\cos nx$ or by $\\sin nx$ and integrate.\n\nThis process gives\n\\begin{align*}\n  \\pi n^{-2} \\alpha_{n}\n  =&\n  - c' \\int_{-\\pi}^{\\pi} \\cos nx \\dmeasure x = 0,\n  \\\\\n  \\pi n^{-2} b_{n}\n  =&\n  - c' \\int_{-\\pi}^{\\pi} \\sin nx \\dmeasure x = 0.\n\\end{align*}\n%\n% 188\n%\nTherefore all the coefficients vanish, and therefore the two\ntrigonometrical series whose difference is\n$A_{0} + \\sum_{n=1}^{\\infty} A_{n}(x)$ have\ncorresponding coefficients equal.\nThis is the result stated in \\hardsubsectionref{9}{6}{3}.\n\n\\Section{9}{7}{Fourier's representation of a function by an integral.}\\footnote{TODO:La Theorie Analytique de la Chaleur, Cb. 
ix.}\nIt follows from \\hardsubsectionref{9}{4}{3} that, if $f(x)$ be continuous except at a\nfinite number of discontinuities and if it have limited total\nfluctuation in the range $(-\\infty, \\infty)$, then, if $x$ be any\n\\emph{internal} point of the range $(-\\alpha, \\beta)$,\n$$\n\\lim_{m \\rightarrow \\infty}\n\\int_{-\\alpha}^{\\beta}\n\\frac{\\sin (2m + 1)(t - x)}{t - x}\nf(t) \\dmeasure t\n=\n\\lim_{\\theta \\rightarrow 0}\n\\half \\pi \\theta^{-1} \\sin \\theta\n\\thebrace{\n  f(x + 2\\theta) + f(x - 2 \\theta)\n}.\n$$\n\nNow let $\\lambda$ be any real number, and choose the integer $m$ so that\n$\\lambda = 2m + 1 + 2\\eta$ where $0 \\leq \\eta < 1$.\n\nThen\n\\begin{align*}\n  &\n  \\int_{-\\alpha}^{\\beta}\n  \\thebrace{\n    \\sin \\lambda(t - x) - \\sin (2m+1)(t-x)\n  }\n  (t - x)^{-1} f(t) \\dmeasure t\n  \\\\\n  &\\quad\n  \\int_{-\\alpha}^{\\beta}\n  2 \\thebrace{\n    \\cos (2m + 1 + \\eta)(t-x)\n  } \\cdot \\thebrace{\n    \\sin \\eta (t-x)\n  }\n  (t - x)^{-1} f(t) \\dmeasure t\n  \\\\\n  &\\quad\n  \\rightarrow 0,\n\\end{align*}\nas $m \\rightarrow \\infty$ by\n\\hardsubsectionref{9}{4}{1} (ii), %TODO:ref\nsince $(t - x)^{-1} f(t) \\sin \\eta(t - x)$ has\nlimited total fluctuation.\n\nConsequently, from the proof of the Riemann-Lebesgue lemma of \\hardsubsectionref{9}{4}{1},\nit is obvious that if $\\int_{0}^{\\infty} \\absval{f(t)} \\dmeasure t$ and\n$\\int_{-\\infty}^{0} \\absval{f(t)} \\dmeasure t$ converge, then\\footnote{$\\int_{-\\infty}^{\\infty}$ means the double limit\n  $\\lim_{\\phi\\rightarrow\\infty, \\sigma\\rightarrow\\infty} \\int_{-\\rho}^{\\sigma}$. If this limit exists, it is of course\n  equal to $\\lim_{\\rho\\rightarrow\\infty}\\int_{-\\rho}^{\\rho}$.}\n$$\n\\lim_{\\lambda\\rightarrow\\infty}\n\\int_{-\\infty}^{\\infty}\n\\frac{\\sin \\lambda(t-x)}{(t-x)}\nf(t) \\dmeasure t\n=\n\\half\n\\pi\n\\thebrace{\n  f(x+0) + f(x-0)\n},\n$$\nand so\n$$\n\\lim_{\\lambda\\rightarrow\\infty}\n\\int_{-\\infty}^{\\infty}\n\\thebrace{\n  \\int_{0}^{\\lambda}\n  \\cos u(t-x) \\dmeasure u\n}\nf(t) \\dmeasure t\n=\n\\half\n\\pi\n\\thebrace{\n  f(x+0) + f(x-0)\n},\n$$\n\nTo obtain Fourier's result, we must reverse the order of integration\nin this repeated integral.\n\nFor any given value of $\\lambda$ and any arbitrary value of $\\eps$, there exists a\nnumber $\\beta$ such that\n$$\n\\int_{\\beta}^{\\infty}\n\\absval{f(t)}\n\\dmeasure t\n<\n\\half \\eps / \\lambda;\n$$\n%\n% 189\n%\nwriting $\\cos u(t-x) \\cdot f(t) = \\phi(t,u)$, we\nhave\\footnote{The equation\n  $\\int_{0}^{\\beta} \\int_{0}^{\\lambda} = \\int_{0}^{\\lambda} \\int_{0}^{\\beta}$\n  is easily justified by \\hardsectionref{4}{3}, by considering the ranges within\n  which $f(x)$ is continuous.}\n\\begin{align*}\n  \\absval{\n    \\int_{0}^{\\infty} \\thebrace{\n      \\int_{0}^{\\lambda} \\phi(t,u) \\dmeasure u\n    } \\dmeasure t\n    -\n    \\int_{0}^{\\lambda} \\thebrace{\n      \\int_{0}^{\\infty} \\phi(t,u) \\dmeasure t\n    } \\dmeasure u\n  }\n  \\\\\n  \\quad &\n  =\n  \\int_{0}^{\\beta}\n  \\thebrace{\n    \\int_{0}^{\\lambda} \\phi(t,u) \\dmeasure u\n  } \\dmeasure t\n  +\n  \\int_{\\beta}^{\\infty}\n  \\thebrace{\n    \\int_{0}^{\\lambda} \\phi(t,u) \\dmeasure u\n  } \\dmeasure t\n  \\\\\n  \\quad & \\quad\n  - \\int_{0}^{\\lambda} \\thebrace{\n    \\int_{0}^{\\beta} \\phi(t,u) \\dmeasure t\n  } \\dmeasure u\n  - \\int_{0}^{\\lambda} \\thebrace{\n    \\int_{\\beta}^{\\infty} \\phi(t,u) \\dmeasure t\n  } \\dmeasure u\n  \\\\\n  & =\n  \\absval{\n    \\int_{\\beta}^{\\infty} \\thebrace{\n  
    \\int_{0}^{\\lambda} \\phi(t,u) \\dmeasure u\n    } \\dmeasure t\n    -\n     \\int_{0}^{\\lambda} \\thebrace{\n      \\int_{\\beta}^{\\infty} \\phi(t,u) \\dmeasure t\n    } \\dmeasure u\n  }\n  \\\\\n  & <\n  \\int_{\\beta}^{\\infty} \\thebrace{\n    \\int_{0}^{\\lambda} \\absval{\\phi(t,u)} \\dmeasure u\n  } \\dmeasure t\n  -\n  \\int_{0}^{\\lambda} \\thebrace{\n    \\int_{\\beta}^{\\infty} \\absval{\\phi(t,u)} \\dmeasure t\n  } \\dmeasure u\n  \\\\\n  & <\n  2 \\lambda \\int_{\\beta}^{\\infty} \\absval{f(t)} \\dmeasure t\n  <\n  \\eps\n  .\n\\end{align*}\n\nSince this is true for all values of $\\eps$, no matter how small, we infer\nthat\n$\\int_{0}^{\\infty} \\int_{0}^{\\lambda} = \\int_{0}^{\\lambda} \\int_{0}^{\\infty}$;\nsimilarly\n$$\\int_{0}^{-\\infty} \\int_{0}^{\\lambda} = \\int_{0}^{\\lambda} \\int_{0}^{-\\infty}$$\n\nHence\n\\begin{align*}\n  \\half \\pi \\thebrace{\n    f(x+0) + f(x-0)\n  }\n  & =\n  \\lim_{\\lambda \\rightarrow \\infty}\n  \\int_{0}^{\\lambda} \\int_{-\\infty}^{\\infty}\n  \\cos u(t-x) f(t) \\dmeasure t \\dmeasure u\n  \\\\\n  & =\n  \\int_{0}^{\\infty} \\int_{-\\infty}^{\\infty}\n  \\cos u(t-x) f(t) \\dmeasure t \\dmeasure u.\n\\end{align*}\n\nThis result is known as \\emph{Fourier's integral theorem}\\footnote{\n  For a proof of the theorem when $f(x)$ is subject to less stringent\n  restrictions, see\n  Hobson, Functions of a Real Variable, %% 492-493. %TODO:ref\n  The reader should observe that, although\n  $\\lim_{\\lambda\\rightarrow\\infty} \\int_{-\\infty}^{\\infty} \\int_{0}^{\\lambda}$ exists,\n  the repeated integral $\\int_{-\\infty}^{\\infty} \\thebrace{\n    \\int_{0}^{\\infty} \\sin u(t-x) \\dmeasure u\n  } f(t) \\dmeasure t$ does not.\n}.\n\nExample. % TODO: Format example\nVerify Fourier's integral theorem directly\n(i) % TODO: enumerate\nfor the function\n$$\nf(x) = (a^{2} + x^{2})^{-\\half}\n$$\n(ii) % TODO: enumerate\nfor the function defined by the equations\n$$\nf(x) = 1,\n\\quad\n(-1 < x < 1);\n\\quad\nf(x) = 0,\n\\quad (\\absval{x} > 1).\n$$\n\\addexamplecitation{Rayleigh.}\n\n%TODO:references section\nREFERENCES.\nG. F. B. Riemann, Ges. Math. Werke, pp. 213-250.\nE. W. Hobson, Functions of a Real Variable (1907), Ch. vii.\nH. Lebesgde, Lemons sur les Series Trigonometriques. (Paris, 1906.)\nC. J. DE LA VallJ e Poussix, Cours d: Analyse Infinitesimale, ir. (Louvain and\nParis, 1912), Ch. IV.\nH. Burkhardt, Ena/clopddie der Math. WUg. ii. 1 (7). (Leipzig, 1914.)\nG. A. Carse and G. Shearer, A course in Fourier's analysis and\nperiodogram analysis (Edinburgh Math. Tracts, No. 4, 1915).\n\n%\n% 190\n%\n\n%TODO:format examples\nMiscellaneous Examples.\n\n\\begin{wandwexample}\nObtain the expansions\n%TODO: subpart\n\\frac{1 - r \\cos z}{1 - 2r \\cos z + r^{2} = 1 + r \\cos z + r^{2} \\cos 2z + \\cdots,\n%TODO: subpart\n  (6) - log ( 1 - 2r cos z + r ) = - r cos J - '' cos 2 - - r cos 3 - .\n. .,\n\n,,, ?'sinz . 1 . 1 . \\,\n\n(c) arc tan, = rsins + -?' sin 22 + ?: /' sin 3s + ...,\n\n   r cos 2 2 3\n\n,, 2rsin2 . I,  1 c\n\n(a) arc tan 3- = r sin z- -r sin 3 + - r\u00b0 sm 02 + . . .,\n\n1 - r'' 3 5\n\nand shew that, when | r | < 1, thej- are convergent for all values of\nz in certain strips parallel to the real axis in the -plane.\n\\end{wandwexample}\n\\begin{wandwexample}\nExpand x and x in Fourier sine series valid when - \\pi < .r < \\pi;\nand hence find the value of the sum of the series\n\nsin X - sin 2.r + -3 sin 3*' - - sin 4.r + . . ., for all values of\na.-. 
\\addexamplecitation{Jesus, 1902.}\n\\end{wandwexample}\n\\begin{wandwexample}\nShew that the function of x I'epresented by 2 ~i sin sin /2a, is\nconstant\n\nn = \\\n\n(0 < X < 2a) and zero (2a < x < \\pi), and draw a graph of the function.\n\n\\addexamplecitation{Pembroke, 1907.}\n\\end{wandwexample}\n4. Find the cosine series representing /(.r) where\n\n/ (x) = sin X + cos X (0 < . ' \\pi ),\n\n/(.r) = sin.r-cos.r (in- .r < \\pi). \\addexamplecitation{Peterhouse, 1906.}\n\n5. Shew that\n\nsin 3n- sinSjTA' sin Ttt., r -,\n\nsin TTX + - - + - - + - - - + ... = Jtt [x],\n\nwhere [x] denotes 4-1 or - 1 according as the integer next inferior to\nx is even or uneven, and is zero if x is an integer. \\addexamplecitation{Trinity, 1895.}\n\n6. Shew that the expansions\n\nlog and\n\n1\n\n2 cos - X 2\n\nlog 2 .sin - X\n\n= cos X-- cos 2x + - cos 3x\n\n- COS X - - COS 2x - - cos 3x\n\nare valid for all real A'alues of x, except multiples of \\pi.\n\n7. Obtain the expansion\n\n\"(-)\"* cos m.r, o M / 1 \\ 1 /  T  \\\n\n2 ~ - -yT-. = (cos X + cos 2x) log ( 2 cos -x\\ +-,:?; (sin zx + sin x)\n- cos x,\n\nand find the range of values of . ' for which it is applicable.\n\\addexamplecitation{Trinity, 1898.}\n\n8. Prove that, if < a- < 27r, then\n\nsin X 2 sin 2x Z sin 3.:*; \\ \\pi sinh a (\\pi - x) a' + Y' \" a +2- \"a T +\n' ' \" \" 2 ' sinh aTr '\n\n\\addexamplecitation{Trinity, 1895.}\n\nFOURIER SERIES\n\n%\n% 191\n%\n\n9. Shew that between the values -\\pi and +n o( x the following\nexpansions hold\n\n2 . / sin 2 sin 2x 3 sin Zx\n\nsin in.v = - sin ran -. r - -, - + -= - - ...\n\nIT \\ V-m'' -l -m 32-7 2\n\n2 . ( \\ m cos X m cos 2.r m cos 3 ' cos mx = - sin ??l7r;;- + - - -r\n+ - 5 -\n\nTT \\ 2m P - 7rt- 22-77l2 32-J/l2\n\ne-mx g-mx 2 / 1 ??IC0S.\u00a5 ?/l COS 2.r 7?2C0S3.1- \\\n\n10. Let X be a real variable V)etween and 1, and let n be an odd\nnumber 3. Shew that\n\n(-1)* = - + - 2 -tan - cos2OTn-:r, if is not a multiple of -, where s\nis the gi-eatest integer contained in nx; but\n\n 1 2 1 WITT\n\n0= - + - 2 - tan - cos 2m7rx, n 7r, =i i n\n\nif jp is an integer multiple of 1/n. \\addexamplecitation{Berger.}\n\n11. Shew that the sum of the series\n\nX\n\n 5 + 47r ~ 2 m~ sin mn cos 2m7rx\n\nm = 1\n\nis 1 when 0<x<l, and when fj<A'<l, and is -1 when l<.r<|.\n\\addexamplecitation{Trinity, 1901.}\n\n12. If\n\nshew that, when - 1 <x <\\,\n\na\" V (x)\n\nae\"'\n\ne\" - 1 =o n\n\ncos 2irx+\n\ns47r.r cosB\\pij: \\ \\ -i 2\" \"\"*' \" i-\n\n22\"\n\n32\n\n2/i !\n\nsin 47ra; sin 6nx \"2-\" + 1 \"*\" \"32\n\n. -, 8IU 'iTTX Sin OTTX-, .,, -,. .,, .\n\nsm27ra;+-j; +-; +. .. = (-)\"+! \\ \\,, V,,, x).\n\n22\" 2 + l\n\n2 + 1 :\n\n\\addexamplecitation{Math. Trip. 1896.}\n\n13. If 7?i is an integer, shew that, for all real values of .v,\n\ncos2' a;=2\n\n1.3.5...(2m-\n\n2.4.6 ...2m\n\n1) fl, r,i (2 i + l\n\nm m- )\n\ncos 2x + - - ~ cos 4x\n\n(ot4-1)(7 + 2)\n\nm m-l) m - 2) * (m + l)(?\\pi + 2)(w+3)\n\ncos 6.r + ...V,\n\n  \\ ., 4 2.4.6...(27ft-2) (1 2w-l . f2m-l)(27n-3),,\n\nr.ns2m-l\\;p|\\ \\ - \\ - \\ \\ \\ \\ J -!;; + ., COS 2x+-,,,, -;\nC0S4a-+.\n\n1.3.5 ...(2 i-l) (2 2/u + l\n\n(2to + 1)(2 h-3)\n\n/\n\n14. A point moves in a straight line with a velocity which is\ninitially u, and which receives constant increments, each equal to u,\nat equal intervals r. Prove that the velocity at any time t after the\nbeginning of the motion is\n\nu ut V- 1 . 
2miit 77 + - + - 2 - sin,\n\nand that the di-stance traversed is\n\nlit\n\n( + ) + T7>-o32 2 -2 cos\n\n2mTTt\n\n\\addexamplecitation{Trinity, 1894.}\n\n%\n% 192\n%\n\n15. If\n\nX 2 = 1\n\nf x)= 2 sin (6?i - 3) j; - 2 2 -sm 2n-l)x\n\nji=i '2n - 1 71=1 zn - i\n\n3 v'3 f . sin 5.r sin 7.r sin ll r ]\n\nshew that / ( + 0) =/ (\\pi - 0) = - \\pi,\n\nand /G +0)-/(i7r-0)=-i7r, /(;:|7r+0)-/(37r-0)=*7r.\n\nObserving that the last series is\n\n6 sin (2ft- 1 ) TTsin 2n-l)x\n\ndraw the graph of /(* ) \\addexamplecitation{Math. Trip. 1893.}\n\n16. Shew that, when <.r < n,\n\n / N 2 /3/ 1 .1 1 Ti, \\\n\nf(,v) = '- I cos .r-- cos 0. '+- cos /.<--- cos 11. r,'-f... ' ' 3 \\\n/ 11 /\n\n= sin 2x' + - sin 4.f + - sin 8.v + r sin lO.r + . . . 2 4 5\n\nwhere / ) = i (0< < 7r),\n\nf x) = (l7r<.r<g7r),\n\nf x)=-\\ v %7T<X<1t).\n\nFind the sum of each series when .r = 0, \\ ir, qtt, n, and for all\nother values of x.\n\n\\addexamplecitation{Trinity, 1908.}\n\n17. Prove that the locus represented by\n\n2 2 - '' \" '*'' ny =\n\nn=l \"\n\nis two systems of lines at right angles, dividing the coordinate plane\ninto squares of area \\pi'\\ \\addexamplecitation{Math. Trip. 1895.}\n\n18. Shew that the equation\n\n\" ( - )\" ~ sin ny cos nx \\\n\nn=i n ~\n\nrepresents the lines i/= \u00b1mir, (wi = 0, 1, 2, ...) together with a set\nof arcs of ellipses whose\n\nsemi-axes are \\pi and nlslZ, the arcs being placed in squares of area\n27r\" . Draw a diagram\n\nof the locus. \\addexamplecitation{Trinity, 1903.}\n\n19. Shew that, if tlie point x,y, z) lies inside the octahedron\nbounded by the planes\n\n\u00b1x\u00b1y\u00b1z = Tr, then\n\n\",,, sin 7ia;sin 7jwsin wj 1\n\n j-Y- - =2-y-\n\n\\addexamplecitation{Math. Trip. 1904.}\n\n20. Circles of radius a are diawn having their centres at the\nalternate angular jjoints of a regular hexagon of side a. Sliew that\nthe equation of the trefoil formed by the outer arcs of the circles\ncan be put in the form\n\n-nr- = -; + - cos 3(9- --= cos 6 + -- - cos9 - ..., 6s'3a 22.4 5.7 8.\n10\n\nthe initial line being taken to pass through the centre of one of the\ncircles.\n\n\\addexamplecitation{Pembroke, 1902.}\n\n2m . = 1 H sm\n\n%\n% 193\n%\n\n21. Draw the graph represented by\n\nrr J 1 ( - )\" COS nm6\\ m [2 a=i 1 - nm)- J ' where m is an integer.\n\\addexamplecitation{Jesus, 1908.}\n\n22. With each vertex of a regular hexagon of side 2a as centre the arc\nof a circle of radius 2a lying within the hexagon is drawn. Shew that\nthe equation of the figure formed by the six arcs is\n\nf-=6-3,.i + 2Si< -)-' j- f co.6 <>,\n\n4a a=i 6n-l) bn+l)\n\nthe prime vector bisecting a petal. \\addexamplecitation{Trinity, 1905.}\n\n23. Shew that, if c> 0,\n\nlim f\n\ne ' cot A\" sin 2ii + l)x . dx=- n tanh -c\\pi.\n\n\\addexamplecitation{Trinity, 1894.}\n\n24. Shew that\n\nsin(2 + l) dx 1\n\nlim r\n\nIT O\n\noth 1.\n\n\\addexamplecitation{King's, 1901.}\n\nsin X l+x' 2\n\n25. Shew that, when - 1 < .r < 1 and a is real,\n\n/ '\u00b0sin(2n + l) sin(H-.r)(9, 1 sinha.r\n\nhm I -.;; 5 -r, d6= --TT .-, .\n\n  .xjo sm a + 6- 2 smh a\n\n\\addexamplecitation{Math. Trip. 1905.}\n\n26. Assuming the possibility of expanding f(x) in a uniformly\nconvergent series of\n\nthe form A -ainkx, where is a root of the equation /-cosa/t-Hft sin a\n- = and the\n\n  . . . *\n\nsummation is extended to all positive roots of this equation,\ndetermine the constants Jj..\n\n\\addexamplecitation{Math. Trip. 1898.}\n\n1 \"\"\n\n27. 
If f(x) = -a + 2 a cosnx + b,t>im nx)\n\n2,(=1\n\nis a Fourier series, shew that, iif x) satisfies certain general\nconditions,\n\na = - P i f (t) cos nt ta.n - t --, 6 = - / f t)smnttn,n-t - .\n\n7T J ' 'It \"\"yO 'It\n\n\\addexamplecitation{Beau.}\n\nW. M. A. 13\n", "meta": {"hexsha": "0554c916f781513533644ef70ddb4fa0a1ad8736", "size": 100542, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/wandw-ch09.tex", "max_stars_repo_name": "CdLbB/Whittaker-and-Watson", "max_stars_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/wandw-ch09.tex", "max_issues_repo_name": "CdLbB/Whittaker-and-Watson", "max_issues_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/wandw-ch09.tex", "max_forks_repo_name": "CdLbB/Whittaker-and-Watson", "max_forks_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.7274963109, "max_line_length": 169, "alphanum_fraction": 0.6162499254, "num_tokens": 38692, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7905303087996143, "lm_q1q2_score": 0.5874968263447944}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{tfrspwv}\n\\section*{\\hspace*{-1.6cm} tfrspwv}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nSmoothed pseudo Wigner-Ville time-frequency distribution.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[tfr,t,f] = tfrspwv(x)\n[tfr,t,f] = tfrspwv(x,t)\n[tfr,t,f] = tfrspwv(x,t,N)\n[tfr,t,f] = tfrspwv(x,t,N,g)\n[tfr,t,f] = tfrspwv(x,t,N,g,h)\n[tfr,t,f] = tfrspwv(x,t,N,g,h,trace)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty tfrspwv} computes the smoothed pseudo Wigner-Ville\n        distribution of a discrete-time signal {\\ty x}, or the cross\n        smoothed pseudo Wigner-Ville distribution between two signals. Its\n        expression writes\n\\[SPW_x(t,\\nu)=\\int_{-\\infty}^{+\\infty} h(\\tau)\\ \\int_{-\\infty}^{+\\infty}\ng(s-t)\\ x(s+\\tau/2)\\ x^*(s-\\tau/2)\\ ds\\ e^{-j2\\pi \\nu \\tau}\\ d\\tau.\\]\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty x}     & signal if auto-SPWV, or {\\ty [x1,x2]} if cross-SPWV\n\t\t\t({\\ty Nx=length(x)}) \\\\ \n        {\\ty t}     & time instant(s)          & {\\ty (1:Nx)}\\\\\n        {\\ty N}     & number of frequency bins & {\\ty Nx}\\\\\n        {\\ty g}     & time smoothing window, {\\ty G(0)} being forced to {\\ty 1}, where {\\ty G(f)} is the Fourier transform of {\\ty g(t)}\n                                         & {\\ty window(odd(N/10))}\\\\ \n        {\\ty h}     & frequency smoothing window in the time-domain, \n                {\\ty h(0)} being forced to {\\ty 1}   & {\\ty window(odd(N/4))}\\\\ \n        {\\ty trace} & if nonzero, the progression of the algorithm is shown\n                                         & {\\ty 0}\\\\\n     \\hline {\\ty tfr}   & time-frequency representation \\\\\n        {\\ty f}     & vector of normalized frequencies\\\\\n \n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\nWhen called without output arguments, {\\ty tfrspwv} runs {\\ty tfrqview}.\n\\end{minipage}\n\n\\newpage\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         sig=fmlin(128,0.05,0.15)+fmlin(128,0.3,0.4);   \n         g=window(15,'Kaiser'); h=window(63,'Kaiser'); \n         tfrspwv(sig,1:128,64,g,h,1);\n\\end{verbatim}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nall the {\\ty tfr*} functions.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf References}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] P. Flandrin ``Some Features of Time-Frequency Representations of\nMulti-Component Signals'' IEEE Int. Conf. on Acoust. Speech and Signal\nProc., pp. 41.B.4.1-41.B.4.4, San Diego (CA), 1984.\\\\\n\n[2] T. Claasen, W. Mecklenbrauker ``The Wigner Distribution - A Tool for\nTime-Frequency Signal Analysis'' {\\it 3 parts} Philips\nJ. Res., Vol. 35, No. 3, 4/5, 6, pp. 
217-250, 276-300, 372-389, 1980.\n\\end{minipage}\n\n\n\n", "meta": {"hexsha": "f52f376a48ca36ad9eb4d3cc3ed43e3fabe3e2b9", "size": 3123, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/tfrspwv.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/tfrspwv.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/tfrspwv.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 31.8673469388, "max_line_length": 136, "alphanum_fraction": 0.6029458854, "num_tokens": 1151, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.5874920751804921}}
{"text": "% Dynamic Programming\n%\n% A cartoon-like introduction to dynamic programming in sequence alignment.\n% The aim is to illustrate the method, with some easy to parse examples, \n% emphasising that the method is general and mathematical, and that any \n% biology resides only in (i) the scoring scheme (substitution matrix) and (ii)\n% the scientists' heads when interpreting the biology\n\n\\subsection{Dynamic Programming}\n  \\begin{frame}\n  \\frametitle{Our Goal}\n  \\begin{itemize}\n    \\item<1-> We aim to align the words\n    \\begin{itemize}\n      \\item<1-> \\texttt{COELACANTH}\n      \\item<1-> \\texttt{PELICAN}\n    \\end{itemize}\n    \\item<2-> Each identical letter (match) scores +1\n    \\item<2-> Each different letter (mismatch) scores -1\n    \\item<2-> Each gap scores -1\n    \\item<3-> \\emph{All sequence alignment is maximisation of an alignment score} - a mathematical operation.\n  \\end{itemize}\n\\end{frame}   \n   \n\\begin{frame}\n  \\frametitle{Initialise the matrix}\n  \\begin{center}\n    \\includegraphics[width=0.65\\textwidth]{images/initialise}\n    \\end{center}\n\\end{frame}   \n   \n\\begin{frame}\n  \\frametitle{Fill the cells}\n  \\begin{center}\n    \\includegraphics[width=0.65\\textwidth]{images/fill_start} \\\\\n    \\includegraphics[width=0.3\\textwidth]{images/fill_start_letters}\n  \\end{center}\n\\end{frame}     \n\n\\begin{frame}\n  \\frametitle{Fill the matrix -- represents all possible alignments \\& scores}\n  \\begin{center}\n    \\includegraphics[width=0.65\\textwidth]{images/full_matrix}\n  \\end{center}\n\\end{frame}  \n   \n\\begin{frame}\n  \\frametitle{Traceback}\n  \\begin{center}\n    \\includegraphics[width=0.65\\textwidth]{images/traceback} \\\\\n    \\includegraphics[width=0.3\\textwidth]{images/traceback_sequence}         \n  \\end{center}\n\\end{frame}     \n\n\\begin{frame}\n  \\frametitle{Algorithms}\n  \\begin{itemize}\n    \\item<1-> Global: Needleman-Wunsch (as in example)\n    \\item<1-> Local: Smith-Waterman (differs from example)\n    \\item<2-> Biological information encapsulated \\emph{only} in the scoring scheme (matches, mismatches, gaps)\n    \\item<3-> NW/SW are \\emph{guaranteed} to find the optimal match \\emph{with respect to the scoring system being used}\n    \\item<3-> BUT the optimal alignment is a biological approximation: no scoring scheme encapsulates biological ``truth''\n    \\item<3-> Any pair of sequences can be aligned: finding meaning is up to you\n  \\end{itemize}\n\\end{frame}   ", "meta": {"hexsha": "c16fddf2f094199f593591507ecebbe45470f656", "size": 2379, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "presentation/sections/subsection_dynamicprogramming.tex", "max_stars_repo_name": "peterjc/Teaching-Intro-to-Bioinf", "max_stars_repo_head_hexsha": "d642804aa73e80546e2326d2c2537c5727ac3ee8", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2015-05-28T18:29:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-20T13:38:44.000Z", "max_issues_repo_path": "presentation/sections/subsection_dynamicprogramming.tex", "max_issues_repo_name": "peterjc/Teaching-Intro-to-Bioinf", "max_issues_repo_head_hexsha": "d642804aa73e80546e2326d2c2537c5727ac3ee8", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2016-11-25T11:55:43.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-05T10:53:49.000Z", "max_forks_repo_path": 
"presentation/sections/subsection_dynamicprogramming.tex", "max_forks_repo_name": "peterjc/Teaching-Intro-to-Bioinf", "max_forks_repo_head_hexsha": "d642804aa73e80546e2326d2c2537c5727ac3ee8", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-02-05T20:54:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-08T18:24:04.000Z", "avg_line_length": 36.6, "max_line_length": 122, "alphanum_fraction": 0.7129045818, "num_tokens": 688, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.5874920607290601}}
{"text": "% Created 2018-09-04 mar 12:51\n\\documentclass[a4paper]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage{khpreamble}\n\\author{Kjartan Halvorsen}\n\\date{Due 2018-08-24 at midnight}\n\\title{Computerized control - homework 2}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs 24.5.1 (Org mode 8.2.10)}}\n\\begin{document}\n\n\\maketitle\n\n\\section*{Exercises}\n\\label{sec-1}\n\n\\subsection*{1 Sample the continuous-time harmonic oscillator}\n\\label{sec-1-1}\nConsider the harmonic oscillator with transfer function \n\\begin{equation}\nG(s) = \\frac{\\omega_0^2 s}{s^2 + \\omega_0^2}.\n\\label{eq:contsys}\n\\end{equation}\n\n\\begin{enumerate}\n\\item Determine the poles of \\(G(s)\\).\n\\item Compute the pulse-transfer function by zero-order-hold sampling (step-invariant sampling) of the transfer function \\(G(s)\\).\n\\item Determine the poles of the pulse-transfer function. What is the relationship between these poles and the corresponding poles of \\(G(s)\\)?\n\\item Let $\\omega_0=1$. Describe in words how the poles depend on the sampling period $h$. In particular, discuss what happens when $\\omega_0h > \\pi$.\n\\end{enumerate}\n\n\\subsection*{2 Simulate the continuous- and the discrete-time harmonic oscillator}\n\\label{sec-1-2}\nUse matlab's control toolbox to simulate the system and verify your calculations. \n\n\\subsubsection*{Define systems}\n\\label{sec-1-2-1}\nFirst, define the continuous-time system in \\eqref{eq:contsys}\n\\begin{verbatim}\nomega0 = ?; % Use the average of the last digit of your two phone numbers.\nsys_c = tf(?,?)\n\\end{verbatim}\nSample the system using the function \\texttt{c2d}\n\\begin{verbatim}\nh = 0.2/omega0; % This gives about 30 samples per period \nsys_c2d = c2d(?,?)\n\\end{verbatim}\nDefine the discrete-time system you calculated in the first part of the homework\n\\begin{verbatim}\nden = [1 a1 a2];\nnum = [b1 b2];\nsys_d = tf(num, den, h)\n\\end{verbatim}\n\\textbf{Verify that the two discrete-time systems} \\texttt{sys\\_c2d} \\textbf{and} \\texttt{sys\\_d} \\textbf{are identical.}\n\n\\subsubsection*{Simulate step responses}\n\\label{sec-1-2-2}\nSimulate for 4 complete periods \n\\begin{verbatim}\nTc = linspace(0, 4*(2*pi/omega0), 800);\n[yc,tc] = step(sys_c, Tc);\n\nTd = h*(0:120);\n[yd,td] = step(sys_d, Td);\n\nfigure()\nclf\nplot(tc,yc)\nhold on\nplot(td,yd, 'r*')\n\\end{verbatim}\n\n\\textbf{Verify that the step response of the discrete-time system is exactly equal to that of the continuous-time system at the sampling instants. Explain why this is so!}\n\n\\subsubsection*{Compute the discrete step response yourself}\n\\label{sec-1-2-3}\n Write some lines of code that solves the difference equation\n \\[ y(k+2) = -a_1y(k+1) - a_2y(k) + b_1u(k+1) + b_2u(k) \\]\n for the harmonic oscillator. 
\nUse the initial state \\(y(-1)=y(0)=0\\) and compute the response to a step sequence \n \\[ u(k) = \\begin{cases} 1, & k \\ge 0\\\\ 0, & \\text{otherwise} \\end{cases}.\\]\n Verify that your solution is the same as when using the \\texttt{step} function in the previous exercise in this homework.\n% Emacs 24.5.1 (Org mode 8.2.10)\n\\end{document}", "meta": {"hexsha": "cf1d94bd6d68ca1e8a48f92d177571ac589abad2", "size": 3335, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "homework/historical/hw1-sample-oscillator-fall18.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "homework/historical/hw1-sample-oscillator-fall18.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "homework/historical/hw1-sample-oscillator-fall18.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 33.0198019802, "max_line_length": 171, "alphanum_fraction": 0.7364317841, "num_tokens": 1054, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.72487026428967, "lm_q2_score": 0.8104789063814617, "lm_q1q2_score": 0.5874920590699328}}
{"text": "\\documentclass[main.tex]{subfiles}\r\n\\begin{document}\r\n\\chapter{Countability}\r\n\\label{chapter:countability}\r\n\r\n\\epigraph{\\(\\aleph_0\\) is my favorite character!}{Justin Goodman}\r\n\r\n\\minitoc\r\n\r\n\\section{Introduction}\r\n\r\nOur motivation here is that we want to describe the sizes of incredibly large sets. What is the cardinality of \\(\\N\\)? So far, we have not learned any means to count the natural numbers. That is what this chapter is for!\r\n\r\nAdditional reading: \\href{https://youtu.be/s86-Z-CbaHA}{The Banach-Tarski Paradox -- Vsauce} (relevant content starts at 2:06)\r\n\r\n\\section{Definitions}\r\n\r\n\\begin{defn}[Enumeration of a Set\\index{Enumeration}]\r\n\tA specified ordering of a set. Enumerations should be described in such a way that given a partial listing, you can always find the next element.\r\n\\end{defn}\r\n\r\n\\begin{example}\r\n\tWe could enumerate the set of natural numbers as follows: \\[0,1,2,3,4,5,6,7,\\cdots\\]\r\n\tWe call this the \\textit{natural ordering} of \\(\\N\\). Some sets might not have a natural ordering, so we must enumerate it to provide an ordering.\r\n\\end{example}\r\n\r\n\\begin{example}\r\n\tWe could enumerate the set of integers as follows: \\[0,1,-1,2,-2,3,-3,4,-4,\\cdots\\]\r\n\\end{example}\r\n\r\n\\begin{example}\r\n\tThe following is \\textit{not} a valid enumeration of the integers: \\[0,1,2,3,4,5,\\cdots,-1,-2,-3,-4,-5,\\cdots\\]\r\n\tThis is not valid because we will never know when to switch from positive to negative integers.\r\n\\end{example}\r\n\r\nThe point of enumeration is such that we can create \\textit{indices} for each element in a set. As we will see later, it will be helpful if you know that, for example, the index \\(i=4\\) into the set of integers yields \\(-2\\).\r\n\r\n\\begin{defn}[Finite\\index{Finite}]\r\n\tA set is finite if there exists a bound on the number of elements. If you start counting the elements in a finite set, there should be a precise stopping point.\r\n\\end{defn}\r\n\r\n\\begin{example}\r\n\t\\(\\{1,2,3\\}\\) and \\(\\N^{< 10,000,000,000}\\) are finite sets.\r\n\\end{example}\r\n\r\n\\begin{defn}[Countably Infinite\\index{Countably Infinite}]\r\n\tA set is countably infinite if you can find a one-to-one correspondence (bijection) between the set and \\(\\N\\). 
Alternatively, a set is countably infinite if you can enumerate the set such that every element appears at a finite position.\r\n\t\r\n\tMore simply, a set is countably infinite if you can index each element.\r\n\\end{defn}\r\n\r\n\\begin{rem}\r\n\tWe denote \\(|S| = \\aleph_0\\) for any (every) countably infinite set \\(S\\).\r\n\\end{rem}\r\n\r\nThere exist other definitions for countably infinite, however they all share the same idea that \\textit{countably infinite} sets can be \\textit{counted} \\textit{in a finite amount of time}.\r\n\r\n\\begin{defn}[Countable\\index{Countable}]\r\n\tA set is countable if it is finite or countably infinite.\r\n\\end{defn}\r\n\r\n\\exproof{\r\n\t\\(S=\\{1,2,3,4,5,6,7,8,9,10\\}\\) is countable.\r\n}{\r\n\t\\(S\\) is finite, with 10 elements, so it is countable.\r\n}\r\n\r\n\\exproof{\r\n\t\\(\\N\\) is countable.\r\n}{\r\n\tThe function \\(f(x) = x\\) where \\(f : \\N \\mapsto \\N\\) is a bijection, so \\(\\N\\) is countably infinite.\r\n}\r\n\r\n\\exproof{\r\n\t\\(\\Z\\) is countable.\r\n}{\r\n\tEnumerate \\(\\Z\\) as follows: \\(\\{0,-1,1,-2,2,-3,3,\\cdots\\}\\)\r\n\t\r\n\tThen \\[f(n) = \\begin{cases} \\frac{n}{2} & n \\equiv 0 \\Mod{2} \\\\ -\\frac{n+1}{2} & n \\equiv 1 \\Mod{2} \\end{cases}\\] where \\(f : \\N \\mapsto \\Z\\) is a bijection (\\textit{why?}).\r\n}\r\n\r\n\\exproof{\r\n\t\\(\\Q^+\\) is countable.\r\n}{\r\n\tWe use what is called a \\textit{snaking argument}. The idea is to build a grid that describes all positive rational numbers, then provide a bijection between that grid and the natural numbers.\r\n\t\r\n\tGrid:\r\n\t\r\n\t\\begin{center}\r\n\t\t\\begin{tabular}{c|cccccc}\r\n\t\t\t& 1 & 2 & 3 & 4 & 5 & \\(\\cdots\\) \\\\\r\n\t\t\t\\hline\r\n\t\t\t1 & \\(1/1\\)\\tikzmark{1} & \\tikzmark{2}\\(2/1\\) & \\tikzmark{6}\\(3/1\\)\\tikzmark{6b} & \\tikzmark{7}\\(4/1\\) & \\(5/1\\) & \\(\\cdots\\) \\\\\r\n\t\t\t2 & \\(1/2\\)\\tikzmark{3} & \\tikzmark{5}\\(2/2\\)\\tikzmark{5b} & \\tikzmark{8b}\\(3/2\\)\\tikzmark{8} & \\(4/2\\) & \\(5/2\\) & \\(\\cdots\\) \\\\\r\n\t\t\t3 & \\(1/3\\)\\tikzmark{4} & \\tikzmark{9b}\\(2/3\\)\\tikzmark{9} & \\(3/3\\) & \\(4/3\\) & \\(5/3\\) & \\(\\cdots\\) \\\\\r\n\t\t\t4 & \\(1/4\\)\\tikzmark{10} & \\tikzmark{12}\\(2/4\\) & \\(3/4\\) & \\(4/4\\) & \\(5/4\\) & \\(\\cdots\\) \\\\\r\n\t\t\t5 & \\(1/5\\)\\tikzmark{11} & \\(2/5\\) & \\(3/5\\) & \\(4/5\\) & \\(5/5\\) & \\(\\cdots\\) \\\\\r\n\t\t\t\\(\\vdots\\) & \\(\\vdots\\) & \\(\\vdots\\) & \\(\\vdots\\) & \\(\\vdots\\) & \\(\\vdots\\) & \\(\\ddots\\) \\\\\r\n\t\t\\end{tabular}\r\n\t\\end{center}\r\n\t\r\n\t\\begin{tikzpicture}\r\n\t[\r\n\tremember picture,\r\n\toverlay,\r\n\t-latex,\r\n\t%color=blue!75!green,\r\n\tcolor=red,\r\n\tyshift=1ex,\r\n\tshorten >=1pt,\r\n\tshorten <=1pt,\r\n\t]\r\n\t\\draw ([yshift=1mm]{pic cs:1}) -- ([yshift=1mm]{pic cs:2});\r\n\t\\draw ({pic cs:2}) -- ([yshift=1mm]{pic cs:3});\r\n\t\\draw ([yshift=1mm]{pic cs:3}) -- ([yshift=1mm]{pic cs:4});\r\n\t\\draw ({pic cs:4}) -- ({pic cs:5});\r\n\t\\draw ([yshift=1mm]{pic cs:5b}) -- ({pic cs:6});\r\n\t\\draw ([yshift=1mm]{pic cs:6b}) -- ([yshift=1mm]{pic cs:7});\r\n\t\\draw ({pic cs:7}) -- ([yshift=1mm]{pic cs:8});\r\n\t\\draw ({pic cs:8b}) -- ([yshift=1mm]{pic cs:9});\r\n\t\\draw ({pic cs:9b}) -- ([yshift=1mm]{pic cs:10});\r\n\t\\draw ([yshift=1mm]{pic cs:10}) -- ([yshift=1mm]{pic cs:11});\r\n\t\\draw ({pic cs:11}) -- ({pic cs:12});\r\n\t\\end{tikzpicture}\r\n\t\r\n\tLet \\(f(n)\\) yield the rational number at position \\(n\\) along the snake in the table (e.g.\\ \\(f(0) = 1/1\\), \\(f(1) = 
2/1\\), \\(f(2) = 1/2\\)). Then \\(f : \\N \\mapsto \\Q^+\\) is a bijection (\\textit{why?}).\r\n}\r\n\r\n\\begin{rem}\r\n\tThis proof does not account for the fact that we can reduce fractions (like how \\(4/2 = 2/1\\)).\r\n\tA more rigorous proof uses the Schr\\\"{o}der-Bernstein Theorem, which is out of the scope of this course.\r\n\\end{rem}\r\n\r\n\\begin{defn}[Uncountable\\index{Uncountable}]\r\n\tA set that is not countable.\r\n\\end{defn}\r\n\r\n\\begin{example}\r\n\t\\(\\R\\) is uncountable.\r\n\\end{example}\r\n\r\nBut why? We will come back to this. First, we need a new proof tool to help us, plus some theorems.\r\n\r\n\\section{Cantor's Diagonal Argument}\r\n\r\nWe begin our study of this famous proof by diving into the actual proof.\r\n\r\n\\exproof{\r\n\tProve \\((0,1)\\) is uncountable.\r\n}{\r\n\tBy contradiction. Assume \\((0,1)\\) is countable. Then, we can enumerate the entire set. One possible enumeration is as follows:\r\n\t\r\n\t\\begin{center}\r\n\t\t\\begin{tabular}{rlllllllll}\r\n\t\t\t\\(r_0 = 0.\\) & 0 & 1 & 2 & 6 & 5 & 9 & 8 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_1 = 0.\\) & 1 & 8 & 4 & 3 & 1 & 3 & 0 & 8 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_2 = 0.\\) & 1 & 9 & 9 & 5 & 9 & 9 & 6 & 6 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_3 = 0.\\) & 1 & 9 & 0 & 3 & 2 & 5 & 2 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_4 = 0.\\) & 1 & 9 & 0 & 4 & 2 & 0 & 4 & 1 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_5 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 3 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_6 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_7 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_8 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 3 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(\\vdots\\) \\\\\r\n\t\t\\end{tabular}\r\n\t\\end{center}\r\n\t\r\n\tNow, let us construct an element \\(x \\in (0,1)\\): \\[x = 0.x_1 x_2 x_3 x_4 \\cdots\\]\r\n\twhere \\[x_i \\equiv r_{i,i} + 1 \\Mod{10}\\]\r\n\tand \\(r_{i,i}\\) is the \\(i\\)\\textsuperscript{th} digit after the decimal point in \\(r_i\\).\r\n\tHere, we are using the mod function to give us the least-non-negative number in the congruence class \\(r_{i,i} + 1 \\Mod{10}\\) -- otherwise known as the \\textit{remainder}.\r\n\t\r\n\tSo we grab the \\textit{diagonal} digits,\r\n\t\r\n\t\\begin{center}\r\n\t\t\\begin{tabular}{rlllllllll}\r\n\t\t\t\\(r_0 = 0.\\) &\\cellcolor{Melon}0 & 1 & 2 & 6 & 5 & 9 & 8 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_1 = 0.\\) & 1 &\\cellcolor{Melon}8 & 4 & 3 & 1 & 3 & 0 & 8 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_2 = 0.\\) & 1 & 9 &\\cellcolor{Melon}9 & 5 & 9 & 9 & 6 & 6 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_3 = 0.\\) & 1 & 9 & 0 &\\cellcolor{Melon}3 & 2 & 5 & 2 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_4 = 0.\\) & 1 & 9 & 0 & 4 &\\cellcolor{Melon}2 & 0 & 4 & 1 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_5 = 0.\\) & 1 & 9 & 0 & 4 & 3 &\\cellcolor{Melon}3 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_6 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 &\\cellcolor{Melon}8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_7 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 &\\cellcolor{Melon}2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_8 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 3 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(\\vdots\\) \\\\\r\n\t\t\\end{tabular}\r\n\t\\end{center}\r\n\t\r\n\tand to each of them we add 1 \\(\\Mod{10}\\) to construct \\[x = 0.19043493\\cdots\\]\r\n\t\r\n\tWe claim that this new \\(x\\) is not in the enumeration.\r\n\tYour keen eye may have noticed that \\(x = r_8\\), so our claim is bogus.\r\n\tThis is a rightful note, \\textit{however} our claim is still 
\r\n\\begin{defn}[Uncountable\\index{Uncountable}]\r\n\tA set that is not countable.\r\n\\end{defn}\r\n\r\n\\begin{example}\r\n\t\\(\\R\\) is uncountable.\r\n\\end{example}\r\n\r\nBut why? We will come back to this. First, we need a new proof tool to help us, plus some theorems.\r\n\r\n\\section{Cantor's Diagonal Argument}\r\n\r\nWe begin our study of this famous proof by diving into the actual proof.\r\n\r\n\\exproof{\r\n\tProve \\((0,1)\\) is uncountable.\r\n}{\r\n\tBy contradiction. Assume \\((0,1)\\) is countable. Then, we can enumerate the entire set. One possible enumeration is as follows:\r\n\t\r\n\t\\begin{center}\r\n\t\t\\begin{tabular}{rlllllllll}\r\n\t\t\t\\(r_0 = 0.\\) & 0 & 1 & 2 & 6 & 5 & 9 & 8 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_1 = 0.\\) & 1 & 8 & 4 & 3 & 1 & 3 & 0 & 8 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_2 = 0.\\) & 1 & 9 & 9 & 5 & 9 & 9 & 6 & 6 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_3 = 0.\\) & 1 & 9 & 0 & 3 & 2 & 5 & 2 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_4 = 0.\\) & 1 & 9 & 0 & 4 & 2 & 0 & 4 & 1 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_5 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 3 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_6 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_7 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_8 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 3 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(\\vdots\\) \\\\\r\n\t\t\\end{tabular}\r\n\t\\end{center}\r\n\t\r\n\tNow, let us construct an element \\(x \\in (0,1)\\): \\[x = 0.x_0 x_1 x_2 x_3 \\cdots\\]\r\n\twhere \\[x_i \\equiv r_{i,i} + 1 \\Mod{10}\\]\r\n\tand \\(r_{i,i}\\) is the digit at position \\(i\\) after the decimal point in \\(r_i\\), where we count positions from 0 (so position 0 is the first digit after the decimal point).\r\n\tHere, we are using the mod function to give us the least non-negative number in the congruence class \\(r_{i,i} + 1 \\Mod{10}\\) -- otherwise known as the \\textit{remainder}.\r\n\t\r\n\tSo we grab the \\textit{diagonal} digits,\r\n\t\r\n\t\\begin{center}\r\n\t\t\\begin{tabular}{rlllllllll}\r\n\t\t\t\\(r_0 = 0.\\) &\\cellcolor{Melon}0 & 1 & 2 & 6 & 5 & 9 & 8 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_1 = 0.\\) & 1 &\\cellcolor{Melon}8 & 4 & 3 & 1 & 3 & 0 & 8 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_2 = 0.\\) & 1 & 9 &\\cellcolor{Melon}9 & 5 & 9 & 9 & 6 & 6 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_3 = 0.\\) & 1 & 9 & 0 &\\cellcolor{Melon}3 & 2 & 5 & 2 & 7 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_4 = 0.\\) & 1 & 9 & 0 & 4 &\\cellcolor{Melon}2 & 0 & 4 & 1 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_5 = 0.\\) & 1 & 9 & 0 & 4 & 3 &\\cellcolor{Melon}3 & 8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_6 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 &\\cellcolor{Melon}8 & 2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_7 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 &\\cellcolor{Melon}2 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(r_8 = 0.\\) & 1 & 9 & 0 & 4 & 3 & 4 & 9 & 3 & \\(\\cdots\\) \\\\\r\n\t\t\t\\(\\vdots\\) \\\\\r\n\t\t\\end{tabular}\r\n\t\\end{center}\r\n\t\r\n\tand to each of them we add 1 \\(\\Mod{10}\\) to construct \\[x = 0.19043493\\cdots\\]\r\n\t\r\n\tWe claim that this new \\(x\\) is not in the enumeration.\r\n\tYour keen eye may have noticed that \\(x = r_8\\), so our claim is bogus.\r\n\tThis is a fair observation; \\textit{however}, our claim is still valid.\r\n\tRemember that the numbers in the enumeration have an infinite decimal expansion.\r\n\tSo \\(r_8 = 0.19043493 d_8 d_9 d_{10} d_{11} \\cdots\\), as does \\(x = 0.19043493 x_8 x_9 x_{10} x_{11} \\cdots\\).\r\n\t\r\n\tFor decimal numbers to be equal, all of their corresponding digits must be equal.\r\n\tWhen we construct the digit \\(x_8\\) in \\(x\\), do we have that \\(x_8 = d_8\\), the corresponding digit in \\(r_8\\)?\r\n\tWell, by construction, \\(x_8 \\equiv r_{8,8} + 1 \\equiv d_8 + 1 \\Mod{10}\\).\r\n\tIt should be clear that \\(d_8 \\not\\equiv d_8 + 1 \\Mod{10}\\), so we conclude that \\(x_8 \\neq d_8\\). So \\(x\\) and \\(r_8\\) are not equal!\r\n\t\r\n\tNow, for \\(r_9\\), the same argument tells us that \\(x_9\\) differs from the digit of \\(r_9\\) at position 9, so \\(x \\neq r_9\\).\r\n\tAnd we can repeat this for every single \\(r_i\\) in the enumeration.\r\n\tThe enumeration is infinite, but every \\(r_i\\) appears at some finite position \\(i\\), so the argument reaches each of them.\r\n\tBy repeatedly applying the previous argument, we then have that \\textit{every} number \\(r_i\\) in the enumeration satisfies \\(r_i \\neq x\\).\r\n\tThis is a contradiction, though, because \\(x \\in (0,1)\\) (this should be obvious).\r\n\t\r\n\tThus, \\((0,1)\\) is uncountable.\r\n}\r\n\r\n\\begin{rem}\r\n\tCantor's diagonal argument can be abstracted to show that other sets are uncountable. The general method is:\r\n\t\\begin{enumerate}\r\n\t\t\\item Assume the set is countable\r\n\t\t\\item Enumerate the set, because it is countable\r\n\t\t\\item Find an element in the set that is \\textit{not} in the enumeration\r\n\t\\end{enumerate}\r\n\tThe last step is the ``diagonal'' part of the argument -- the easiest way to find a new element is by constructing an element that differs from every element in the enumeration in at least one position.\r\n\\end{rem}\r\n\r\n\\begin{rem}\r\n\tWhen you apply Cantor's diagonal argument in your proofs, you can omit the lengthy explanations. All you need is the three parts above -- in the third step, though, you should offer justification as to why your \\textit{diagonalizer} (the constructor function that gives you the new element) actually gives you a new element.\r\n\\end{rem}\r\n
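\r\nThe diagonalizer is easy to run mechanically. This illustrative Python sketch (ours, not the text's) takes the first eight digits of \\(r_0\\) through \\(r_7\\) from the table above and builds the start of \\(x\\):\r\n\r\n\\begin{verbatim}\r\n# digits at positions 0-7 of r_0 through r_7, read off the table\r\nrows = ['01265987', '18431308', '19959966', '19032527',\r\n        '19042041', '19043382', '19043482', '19043492']\r\n\r\n# diagonalizer: bump the digit at position i of r_i by 1 (mod 10)\r\nx_digits = ''.join(str((int(rows[i][i]) + 1) % 10)\r\n                   for i in range(len(rows)))\r\nprint('x = 0.' + x_digits + '...')  # x = 0.19043493...\r\n\r\n# x differs from every r_i at position i, so x is not in the list\r\nassert all(x_digits[i] != rows[i][i] for i in range(len(rows)))\r\n\\end{verbatim}\r\n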
\r\nNow, we just showed that \\((0,1)\\) is uncountable. But we also know that \\((0,1) \\subseteq \\R\\). This subset relation intuitively suggests that \\(|(0,1)| \\leq |\\R|\\). So it is tempting to conclude from this that \\(\\R\\) is indeed uncountable. Luckily, this is exactly what we can conclude, but we need some theorems that will let us do that.\r\n\r\n\\section{Useful Theorems}\r\n\r\nWe give some useful theorems about countable and uncountable sets. These will help you in determining whether a set is countable or uncountable.\r\n\r\n\\begin{defn}[Cardinality]\r\n\tTwo countable sets have the same cardinality if there exists a bijection between the two sets.\r\n\\end{defn}\r\n\r\n\\begin{rem}\r\n\tWe had previously defined cardinality as the number of elements in a set. This new definition is more abstract and generalizable. In fact, though, these two definitions are equivalent on finite sets. Indeed, if two finite sets have \\(n\\) elements, then they are countable, and any enumeration of the two sets yields a bijection. On the other hand, if there is a bijection \\(f\\) between the two, then we can associate each input with an index \\(\\leq n\\), which yields the same index on the second set -- so they have the same number of elements.\r\n\\end{rem}\r\n\r\n\\begin{rem}\r\n\tFor countably infinite sets we will take this as a definition, since this is the first time we are seeing \\textit{number of elements} for countably infinite sets.\r\n\\end{rem}\r\n\r\n\\begin{rem}\r\n\t\\(|\\{1,2,3,4,5,6,7,8,9,10\\}| < |\\N|\\) even though the two sets are countable. The first is finite, but the second is countably infinite.\r\n\\end{rem}\r\n\r\nThis tells us that the sets we have already proved countably infinite, \\(\\N, \\Z, \\Q^{+}\\), all have the same cardinality as \\(\\N\\), which is \\(\\aleph_0\\), since there is a bijection between \\(\\N\\) and each of those sets. This also implies that they all have the same cardinality as each other. We can show this as a corollary to the following theorem.\r\n\r\n\\subsection{Countable Set Theorems}\r\n\r\n\\begin{thm}\r\n\tThere exists a bijection between any two countably infinite sets.\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tSuppose \\(A,B\\) are countably infinite. Then there exist bijections \\(f,g\\) such that \\(f : \\N \\rightarrow A\\) and \\(g : \\N \\rightarrow B\\). Define the function \\(h : A \\rightarrow B\\) given by \\(h : a \\mapsto g(f^{-1}(a))\\). This is visualized as \\[A \\stackrel{f^{-1}}{\\rightarrow} \\N \\stackrel{g}{\\rightarrow} B\\]\r\n\t\r\n\tThis is a composition of bijective functions, which is a bijection.\r\n\\end{proof}\r\n\r\n\\begin{thm}\r\n\tEvery countably infinite set has cardinality \\(\\aleph_0\\).\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tThis follows from the previous theorem, and that a bijection between two sets implies their cardinalities are equal.\r\n\\end{proof}\r\n\r\n\\begin{rem}\r\n\tIt seems natural to also define \\(|A| \\leq |B| \\Leftrightarrow\\) there exists an injection \\(f : A \\mapsto B\\).\r\n\\end{rem}\r\n\r\n\\begin{thm}\r\n\t\\(A\\) is countable \\(\\Rightarrow\\) \\(|A| \\leq |\\N|\\).\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tIf \\(A\\) is finite, indexing its elements gives an injection from \\(A\\) into \\(\\N\\). Otherwise \\(A\\) is countably infinite, so there is a bijection from \\(\\N\\) to \\(A\\); its inverse is also a bijection, which is in particular an injection. Either way, \\(|A| \\leq |\\N|\\).\r\n\\end{proof}\r\n\r\n\\begin{thm}\r\n\tIf a set \\(B\\) is countable, then so is any subset \\(A \\subseteq B\\).\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tDefine a function \\(f : A \\rightarrow B\\) given by \\(f(a) = a\\). Then this is an injective function (why?), so \\(|A| \\leq |B|\\). Then \\(B\\) is countable, so \\(|B| \\leq |\\N|\\) by the previous theorem. Then \\(|A| \\leq |B| \\leq |\\N|\\), and the next theorem tells us that \\(A\\) is countable.\r\n\\end{proof}\r\n\r\n\\begin{rem}\r\n\tWe can conclude from here that \\(A \\subseteq B \\Rightarrow |A| \\leq |B|\\).\r\n\tNote later that this holds even if \\(A\\) and \\(B\\) are uncountable.\r\n\tWe cannot go the \\(\\Leftarrow\\) direction though -- why?\r\n\\end{rem}\r\n\r\n\\begin{thm}\r\n\t\\(A\\) is countable \\(\\Leftarrow\\) \\(|A| \\leq |\\N|\\).\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tIf \\(|A| \\leq |\\N|\\), then there is an injective function \\(f : A \\rightarrow \\N\\). Then let \\(B \\subseteq \\N\\) be the image of \\(f\\). \\(B\\) is countable: it is either finite, or infinite and enumerable by listing its elements in increasing order. 
Since \\(f\\) is a bijection from \\(A\\) onto \\(B\\), and \\(B\\) is countable, \\(A\\) is countable as well.\r\n\\end{proof}\r\n\r\n\\begin{rem}\r\n\tWe conclude that \\(A\\) is countable \\(\\Leftrightarrow\\) \\(|A| \\leq |\\N|\\).\r\n\\end{rem}\r\n\r\n\\subsection{Uncountable Set Theorems}\r\n\r\n\\begin{thm}\r\n\tIf any set \\(A\\) is uncountable, then so is any superset \\(B \\supseteq A\\).\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tBy contradiction, let \\(B \\supseteq A\\) and assume \\(B\\) is countable.\r\n\tThen, since \\(A \\subseteq B\\), by the previous theorem \\(A\\) is countable.\r\n\tThis contradicts that \\(A\\) is uncountable, thus our assumption was wrong and \\(B\\) is uncountable.\r\n\t\r\n\t\\bigskip\r\n\tAnother argument, by contraposition: if \\(B\\) were countable, then since \\(A \\subseteq B\\), the subset theorem above would make \\(A\\) countable. So whenever \\(A\\) is uncountable, \\(B\\) must be uncountable as well.\r\n\\end{proof}\r\n\r\nNow we come back to \\(\\R\\) being uncountable. Since \\(\\R \\supseteq (0,1)\\) and \\((0,1)\\) is uncountable, our theorems allow us to conclude that \\(\\R\\) is uncountable. Interestingly, we could also have given the bijection \\(f : (0,1) \\rightarrow \\R\\) given by \\(f : x \\mapsto \\tan(\\pi(x+\\frac{1}{2}))\\). It looks like this:\r\n\\begin{center}\r\n\t\\begin{tikzpicture}\r\n\t\\begin{scope}\r\n\t\t\\draw[<->] (-2,0) -- (2,0) node[right, below] {\\(x\\)};\r\n\t\t\\draw[<->] (0,-2) -- (0,2) node[above] {\\(f(x)\\)};\r\n\t\t\\draw[shift={(1,0)},color=black] (0pt,3pt) -- (0pt,-3pt);\r\n\t\t\\draw[shift={(1,0)},color=black] (0pt,0pt) -- (0pt,-3pt) node[below] {1};\r\n\t\t\\draw[scale=1,domain=-1.1:1.1,smooth,variable=\\x,blue] plot ({\\x},{tan(deg(\\x))}) node[right] {\\(\\tan(x)\\)};\r\n\t\\end{scope}\r\n\t\\draw (0:2.75) node {\\(\\rightarrow\\)};\r\n\t\\begin{scope}[xshift=5cm]\r\n\t\t\\draw[<->] (-1.5,0) -- (2.5,0) node[right, below] {\\(x\\)};\r\n\t\t\\draw[<->] (0,-2) -- (0,2) node[above] {\\(f(x)\\)};\r\n\t\t\\draw[shift={(1,0)},color=black] (0pt,3pt) -- (0pt,-3pt);\r\n\t\t\\draw[shift={(1,0)},color=black] (0pt,0pt) -- (0pt,-3pt) node[below] {1};\r\n\t\t\\draw[scale=1,domain=0.15:0.85,smooth,variable=\\x,red] plot ({\\x},{tan(deg(pi*(\\x+(1/2))))}) node[right] {\\(\\tan(\\pi(x+\\frac{1}{2}))\\)};\r\n\t\\end{scope}\r\n\t\\end{tikzpicture}\r\n\\end{center}\r\nGiven this bijection, we conclude both \\(|(0,1)| \\leq |\\R|\\) and \\(|(0,1)| \\geq |\\R|\\), i.e.\\ \\(|(0,1)| = |\\R|\\).\r\n
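\r\nAs an illustrative numeric check (ours), this Python sketch samples the map across \\((0,1)\\) and confirms it is strictly increasing on the sample, sweeping from large negative to large positive values.\r\n\r\n\\begin{verbatim}\r\nimport math\r\n\r\ndef f(x):\r\n    # the bijection from (0,1) to R given in the text\r\n    return math.tan(math.pi * (x + 0.5))\r\n\r\nxs = [0.001, 0.1, 0.25, 0.5, 0.75, 0.9, 0.999]\r\nys = [f(x) for x in xs]\r\nprint([round(y, 3) for y in ys])\r\n# strictly increasing on the sample grid\r\nassert all(a < b for a, b in zip(ys, ys[1:]))\r\n\\end{verbatim}\r\n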
\r\n\\section{The Continuum Hypothesis}\r\n\r\nWe are motivated by the power set.\r\nRemember that \\(\\mathcal{P}(A) = \\{ S \\subseteq A \\}\\) -- the set of all possible subsets.\r\nHow does the cardinality of the power set relate to the input set, then?\r\nWell, we turn back to Georg Cantor with the following theorem.\r\n\r\n\\begin{thm}[Cantor's Theorem]\r\n\t\\[|A| < |\\mathcal{P}(A)|\\]\r\n\\end{thm}\r\n\r\n\\(<\\) here means that their cardinalities are strictly different: \\(|A| \\leq |\\mathcal{P}(A)|\\), but the two are never equal.\r\nWhile there are other types of cardinalities (this gets into the idea of ordinals), for now we can interpret this to mean that if \\(A\\) is countable then its power set will be uncountable.\r\nThe proof of this statement is left as an exercise.\r\n\r\n\\begin{thm}\r\n\tThere is no set of all sets.\r\n\\end{thm}\r\n\r\n\\begin{proof}\r\n\tSuppose \\(U = \\) a set of all sets.\r\n\tThen \\(U \\subseteq \\mathcal{P}(U)\\) by definition, so \\(|U| \\leq |\\mathcal{P}(U)|\\).\r\n\tBut, \\(\\mathcal{P}(U) \\subseteq U\\) since \\(\\mathcal{P}(U)\\) is a set of sets and \\(U\\) is a set of all sets.\r\n\tThe Schr\\\"{o}der-Bernstein Theorem (out of scope) tells us (intuitively) that \\(|A| \\leq |B|\\) and \\(|B| \\leq |A|\\) implies \\(|A| = |B|\\).\r\n\tThus, \\(|U| = |\\mathcal{P}(U)|\\).\r\n\tBut this contradicts the previous theorem.\r\n\\end{proof}\r\n\r\nNaturally, this leads us to ask whether there \\textit{is} some set of cardinality strictly between \\(|\\N|\\) and \\(|\\mathcal{P}(\\N)|\\).\r\nThis is precisely the continuum hypothesis:\r\n\\begin{center}\r\n\tThere is no set whose cardinality is strictly between that of the integers and the real numbers.\r\n\\end{center}\r\nThis question cannot be settled within our usual system of mathematics: mathematicians have shown that we can prove this statement neither \\textit{true} \\textbf{nor} \\textit{false} from the standard axioms.\r\nIf it interests you, consider taking a course in real analysis or set theory.\r\n\r\n\\section{Summary}\r\n\r\n\\begin{itemize}\r\n\t\\item A countable set is either finite, or can be enumerated by the natural numbers\r\n\t\\item An uncountable set has no possible enumeration\r\n\t\\item Is there any distinct cardinality between \\(\\Z\\) and \\(\\R\\)?\r\n\\end{itemize}\r\n\r\n\\section{Practice}\r\n\r\n\\begin{enumerate}\r\n\t\\item\r\n\tExplain why the following proof fails:\r\n\t\\begin{proof}\r\n\t\t\\(\\R^{+} \\times \\R^{+}\\) is countably infinite because we can set up a grid as we did in the \\(\\Q^{+}\\) argument and apply the same snaking bijection. \\mbox{}\r\n\t\\end{proof}\r\n\t\\item\r\n\tShow that the even integers are countable by providing an explicit bijection from \\(\\N\\) to \\(\\Z^{\\text{even}}\\).\r\n\tThen, show that the odd integers are countable using a bijection.\r\n\t\\item\r\n\tShow that the set of perfect squares is countable using a bijection.\r\n\t\\item\r\n\tShow that, given two countable sets \\(A,B\\), the following are countable:\r\n\t\\begin{enumerate}\r\n\t\t\\item \\(A \\cup B\\) \\textit{Hint: how did we prove the integers were countable?}\r\n\t\t\\item \\(A \\cap B\\)\r\n\t\t\\item \\(A \\setminus B\\)\r\n\t\t\\item \\(A \\times B\\) \\textit{Hint: how did we prove the positive rationals were countable?}\r\n\t\\end{enumerate}\r\n\t\\item\r\n\tProve by induction that \\(\\N^k\\) is countable.\r\n\t\\item Show that \\(\\Q\\) is countable.\r\n\t\\item Show that \\(\\{ x \\in \\R \\mid x > 0 \\land x^2 \\in \\Q \\}\\) is countable.\r\n\t\\item Explain why the following proof fails:\r\n\t\\begin{proof}\r\n\t\tThe set of all polynomials with positive integer coefficients is uncountable. Assume it is countable, and enumerate each polynomial as we did in Cantor's diagonalization argument -- the rows are the polynomials, and the columns are the \\(i\\)\\textsuperscript{th} coefficient in the polynomial (e.g.\\ in \\(ax^3\\), \\(a\\) is the fourth coefficient). 
Then the polynomial \\(f(x) = (p_{1,1}+1) + (p_{2,2}+1)x + (p_{3,3}+1)x^2 + \\cdots\\), where \\(p_{i,i}\\) is the \\(i\\)\\textsuperscript{th} coefficient of polynomial \\(p_i\\), is not in our enumeration.\r\n\t\\end{proof}\r\n\t\\item Show that the set of functions from natural numbers to natural numbers \\(\\{ f \\mid f : \\N \\rightarrow \\N \\}\\) is uncountable.\r\n\t\\item\r\n\tShow that the power set of any countably infinite set is uncountable.\r\n\tDo this by showing that \\(\\mathcal{P}(\\N)\\) is uncountable using a diagonalization argument.\r\n\t\\item Show that the power set of any infinite set is uncountable.\r\n\t\\item Show that, given an uncountable set \\(A\\) and a set \\(B\\),\r\n\t\\begin{enumerate}\r\n\t\t\\item \\(A \\cup B\\) is uncountable\r\n\t\t\\item \\(A \\cap B\\) is countable if \\(B\\) is countable\r\n\t\t\\item \\(A \\setminus B\\) is uncountable if \\(B\\) is countable\r\n\t\t\\item \\(A \\times B\\) is uncountable if \\(B\\) is nonempty\r\n\t\\end{enumerate}\r\n\t\\item Show that the set of irrational numbers \\(\\{ x \\in \\R \\mid x \\not\\in \\Q \\}\\) is uncountable.\r\n\t\\item Find another bijection from \\((0,1) \\rightarrow \\R\\) that does not use a trigonometric function.\r\n\\end{enumerate}\r\n\\end{document}\r\n", "meta": {"hexsha": "2dcb470d58ad20057e3703a39f0a6dd4ac62d5e4", "size": 21444, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch-countability.tex", "max_stars_repo_name": "jugoodma/250-textbook", "max_stars_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-04-22T03:33:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-27T14:39:11.000Z", "max_issues_repo_path": "ch-countability.tex", "max_issues_repo_name": "jugoodma/250-textbook", "max_issues_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch-countability.tex", "max_forks_repo_name": "jugoodma/250-textbook", "max_forks_repo_head_hexsha": "ebfcd8e9d15079fe8924bf562a194ed057aed302", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-19T22:24:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-19T22:24:49.000Z", "avg_line_length": 49.1834862385, "max_line_length": 546, "alphanum_fraction": 0.6395728409, "num_tokens": 7416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.847967764140929, "lm_q1q2_score": 0.5873380737889012}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[a4paper]{geometry}\n\\usepackage{parskip}\n\n\\title{\nIntroduction to Functional Programming 2014\\\\\nTerm Project - List Filtering\n}\n\\author{\n  Benjamin N\u00f8rgaard \\#201209884\\\\\n}\n\n\\begin{document}\n\\maketitle\n\\clearpage\n\\tableofcontents\n\\clearpage\n\\section{Introduction}\nThe purpose of this report is to explain what list filtering is, we will be doing this using the Coq Proof Assistant. We will define two functions \\texttt{filter\\_in} and \\texttt{filter\\_out}, and prove some properties about these. This should in turn give us a better understanding of what list filtering is and how it works. In the end we will find a connection between these two functions, which in turn allows us to make proofs for one of them, and then reuse the same proofs for the other function.\n\nThe report is meant to be read alongside the source code file (\\texttt{Noergaard\\_Benjamin.v}). In this way I will be able to explain the steps we take without copying too much source code into this document. I will be using \\texttt{mono space font} when I refer to how things are named in the source code. Finally i use indentation to visualise the number of remaining subgoals.\n\n\\section{Filter In}\nThe \\texttt{filter\\_in} function takes a predicate and a list of natural numbers and returns a new list. The predicate is a function which takes a natural number and returns true or false. An example of a predicate could be a function which returns true if the number is even and false otherwise.\n\nWhen we give the \\texttt{filter\\_in} function a predicate and a list, we get a new list where the only elements in it are those that satisfy the predicate.\n\n\\subsection{Unit Test for Filter In}\nWe will start by creating a unit test which takes a candidate \\texttt{filter\\_in} function and runs a few tests on it that we see fit. We have to do assertions on how lists look after we have filtered them with a predicate, for that we first need a few helper functions. We will not prove anything about them because we solely use them in our tests.\n\nEarlier in the course for which this report was written, I came up with a generic way of comparing lists in Coq and defined this as a function \\texttt{beq\\_list}. This function takes a type, two lists of that type and a function, which compares two elements in the lists for equality and returns a boolean. Using this function we can very easily define a function which compares our lists of natural numbers. We defined this as \\texttt{beq\\_nat\\_list}. Now we have a function which can compare two lists for equality. In addition we define a special notation for this function which looks like this: \\texttt{list1 =l= list2}. Last thing before the actual unit tests are two examples of predicates which we will use in the unit tests.\n\nThe actual unit tests all work on the list \\texttt{1 :: 2 :: 3 :: nil}, we simply give the candidate \\texttt{filter\\_in} function different predicates, and then we assert what the resulting lists will look like.\n\n\\subsection{Filter In is Well-Defined}\nWe were given the \\texttt{specification\\_of\\_filter\\_in} and we will now look at a proof which shows that it is well-defined. We do this by structural induction on the list which is given to \\texttt{filter\\_in}. 
\n\nThe proof is pretty straightforward, but in the induction case we have to show that it holds no matter whether the predicate evaluates to true or false; we must therefore do a case proof on the predicate of the head of the list. In the beginning of each case I rename the assumption \\texttt{H\\_p}, which reflects the destruction of the predicate, to \\texttt{H\\_p\\_true} or \\texttt{H\\_p\\_false}. This is not really necessary, but it works as a headline for the case and it also makes it easier to read the proof when we use the assumption. I will be doing this in all case proofs for the rest of the source file.\n\n\\subsection{Implementation of Filter In}\nThe implementation of \\texttt{filter\\_in} consists of two functions: a recursive auxiliary function and a simple wrapper function that calls the auxiliary function. The auxiliary function consists of two cases, the list base case and the list inductive case: either a list is \\texttt{nil}, or else it is \\texttt{x :: xs'}, which is an element concatenated with a list. In the base case we return \\texttt{nil}, and in the inductive case we match on the predicate of \\texttt{x}: if true we concatenate \\texttt{x} with a recursive call to the auxiliary function; if false we throw away \\texttt{x} and simply continue calling recursively on the tail of the list.\n\nWe compute the unit tests and see that they pass, so our implementation seems to be right. Let's prove that now.\n
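\nSince the report deliberately omits the Coq source, here is a small Python analogue (ours, purely illustrative) of the recursive structure just described: return the empty list in the base case, and in the inductive case keep the head exactly when the predicate holds.\n\n\\begin{verbatim}\ndef filter_in(pred, xs):\n    # base case: the empty list filters to the empty list\n    if not xs:\n        return []\n    head, tail = xs[0], xs[1:]\n    # inductive case: keep the head exactly when the predicate holds\n    if pred(head):\n        return [head] + filter_in(pred, tail)\n    return filter_in(pred, tail)\n\nprint(filter_in(lambda n: n % 2 == 0, [1, 2, 3]))  # [2]\n\\end{verbatim}\n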
\n\\subsection{Implementation Fits the Specification}\nIn order to prove that our implementation fits the specification, we will need unfold lemmas for it, one for the base case and one for the inductive case. Using these we simply show for each of the branches in the specification that our implemented \\texttt{filter\\_in} fits them.\n\nWe also define a small helper lemma which says that any \\texttt{filter\\_in} function that satisfies the specification can be rewritten to our implementation of \\texttt{filter\\_in}. This will be convenient because we will be able to use the unfold lemmas for our implementation in proofs about any arbitrary \\texttt{filter\\_in} function. By rewriting to our implementation, we will also be able to reuse our proofs for other implementations. This lemma simply combines the well-definedness theorem with the theorem that says that our implementation fits the specification.\n\nWe will use this lemma in the following theorems, as they will be stating properties that any \\texttt{filter\\_in} function should have.\n\n\\subsection{About lemmas}\nWe will now prove some about lemmas, where each of them will state some property of our function. \n\n\\subsubsection{Filtering in all/none of the elements}\nThe first two are quite alike, and the point of both of them lies in the predicate. When the predicate is a function that always evaluates to true, we are to return the filtered list unmodified. Likewise, when the predicate always evaluates to false, we have to return an empty list, as everything gets filtered out.\n\nWe do both of these proofs by first rewriting to our implementation, and then we do structural induction on the list. Because we have rewritten to our implementation we can now use our unfold lemmas to prove both of these properties.\n\n\\subsubsection{Filtering In Incrementally}\nThe last about lemma that we will look at says that given any \\texttt{filter\\_in} function that satisfies the specification, when that \\texttt{filter\\_in} function is used twice with two arbitrary predicates on some list, it yields the same result as using it once with a predicate combined with an and of the two predicates.\n\nThe proof is a bit long but really not that complicated. Again we can use the fact that we can rewrite to our implementation of \\texttt{filter\\_in}. We then do structural induction on the list. The base case is very simple. In the inductive case we have to prove that the property holds for any combination of the predicates. Therefore we do a nested case proof on each of the predicates called on the head of the list. One should also notice that we used some of the library lemmas for working with the boolean binary operators.\n\n\\section{Filter Out}\nThe function \\texttt{filter\\_out} does list filtering like \\texttt{filter\\_in}; it takes the same arguments, that is a predicate and a list of natural numbers. The difference lies in how we interpret the predicate of some element: where \\texttt{filter\\_in} includes an element in the resulting list if the predicate of the element is true, \\texttt{filter\\_out} excludes an element from the resulting list if the predicate of the element is true.\n\nAs I started making a naive implementation of \\texttt{filter\\_out}, I soon realised that I could reuse most of the proofs that I had written for \\texttt{filter\\_in} with only slight modifications to match the \\texttt{filter\\_out} behaviour. This meant that most of the proofs were mere copy-paste. In the following sections I will skip the parts that are almost identical to \\texttt{filter\\_in}, and instead focus on the differences.\n\n\\subsection{Differences Between Filter In and Filter Out}\nThe first difference lies in the unit test; here we have to modify the resulting lists to match the behaviour of \\texttt{filter\\_out}.\n\nAs mentioned in the explanation of \\texttt{filter\\_out}, the difference in the implementation lies in the inductive case, where we react differently to the result of the predicate. We simply swap the right-hand sides of the match clauses so that we throw away elements in the true case and keep them in the false case.\n\nThe last difference that I would like to highlight is in the last about lemma. Here we should notice that \\texttt{filter\\_out} used twice with two arbitrary predicates on some list yields the same result as \\texttt{filter\\_out} used once with a predicate created by an or of the two predicates. This is because we filter out an element if just one of the two predicates evaluates to true.\n\nThis also makes the proof differ somewhat from that of \\texttt{filter\\_in}; we have a few more cases and we now use the \\texttt{orb} lemmas of the Bool library.\n\n\\subsection{Thoughts on Similarities}\nAs a programmer, similarities in code often mean that something can be generalised. One example that I used early in the code was the generic list comparison function. When we look at mathematics the same thing applies; here similarities often hint that there must be some connection. 
In the following section we will see that there is indeed a connection between these two functions.\n\n\\section{Filter Out from Filter In and Vice Versa}\nThe following two propositions (\\texttt{filter\\_out\\_from\\_filter\\_in} and \\texttt{filter\\_in\\_from\\_filter\\_out}) will show us that there is a connection between \\texttt{filter\\_in} and \\texttt{filter\\_out}. The first one says that given a \\texttt{filter\\_in} that satisfies the specification of \\texttt{filter\\_in}, we have a function that satisfies the specification of \\texttt{filter\\_out}, and vice versa.\n\nBecause the function that satisfies the opposing specification uses the function \\texttt{negb} from the Bool library, we need unfold lemmas for \\texttt{negb} to prove that the function satisfies the opposing specification.\n\nUsing split, rewriting to our implementations of each of the functions, and the unfold lemmas for negb, these two proofs are straightforward.\n\n\\subsection{Consequences of the Propositions}\nThe fact that we can define functions that satisfy the opposing specification, given one of the functions that satisfies its own specification, means that we should be able to make a new implementation of each of the functions that uses the opposing function which we have already defined and shown to satisfy its specification.\n\n\\subsubsection{New Filter Out Function}\nWe can now define a new function that calls our first implementation of \\texttt{filter\\_in}, but with a predicate created by negating the result of the original predicate (sketched below in the same illustrative style as before).\n\nIf we compute the unit test we will see that this new implementation indeed seems reasonable. We prove it by unfolding our new function so that we see what it is made up from, and using the unfold lemmas for \\texttt{negb} we can prove that the function actually satisfies the specification of \\texttt{filter\\_out}.\n\nFor convenience we will here also make a lemma that lets us rewrite any implementation of \\texttt{filter\\_out} to this new implementation. This will mean that we will be able to take proofs about \\texttt{filter\\_out} and show them using \\texttt{filter\\_in}.\n\n\\subsubsection{New Filter In Function}\nLikewise, we will be able to do the same thing the other way around, which means that we can prove properties about \\texttt{filter\\_in} using proofs for \\texttt{filter\\_out}.\n
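\nThe connection is easy to state outside Coq as well. A hypothetical Python sketch of the idea (self-contained, with a list-comprehension model of \\texttt{filter\\_in}):\n\n\\begin{verbatim}\ndef filter_in(pred, xs):\n    # same behaviour as the earlier recursive sketch\n    return [x for x in xs if pred(x)]\n\ndef filter_out(pred, xs):\n    # filter_out keeps exactly the elements filter_in discards\n    return filter_in(lambda x: not pred(x), xs)\n\nis_even = lambda n: n % 2 == 0\nxs = [1, 2, 3]\nprint(filter_in(is_even, xs), filter_out(is_even, xs))  # [2] [1, 3]\n# together the two results partition the original list\nassert sorted(filter_in(is_even, xs) + filter_out(is_even, xs)) == xs\n\\end{verbatim}\n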
\n\\subsection{Final Properties Shown Using Connections}\nWe will now, for \\texttt{filter\\_in} and \\texttt{filter\\_out}, look at two more properties, and we will be using the connections to prove the opposing proofs.\n\n\\subsubsection{Filtering and Concatenation of Lists}\nWhen we use \\texttt{filter\\_in} or \\texttt{filter\\_out} on a predicate and two lists appended together, the result should be the same as first using \\texttt{filter\\_in} or \\texttt{filter\\_out} on the predicate and the first list, and then appending the result of using the same type of filtering on the predicate and the second list.\n\nIn order to show this we need unfold lemmas for append. We haven't yet shown this property for either of \\texttt{filter\\_in} or \\texttt{filter\\_out}, so we will go ahead and show it for \\texttt{filter\\_in} first. With that done we could do an almost identical proof for \\texttt{filter\\_out}, or we could rewrite to our second implementation of \\texttt{filter\\_out}. Because of its connection to \\texttt{filter\\_in} we can then simply reuse the proof we did for \\texttt{filter\\_in}.\n\n\\subsubsection{Filtering and Reverse Lists}\nWe take the same approach as with the previous property, but first we need unfold lemmas for the reverse list function. With them we make the proof for \\texttt{filter\\_in}. Here the ability to reuse proofs really shines, because of the length and complexity of the proof. We then simply reuse the proof for \\texttt{filter\\_in} to prove the same property for \\texttt{filter\\_out}.\n\n\\section{Conclusion}\nWe have looked at how to do induction proofs on lists and case proofs on predicates. We used this to explain what list filtering is and to get a better understanding of how to work on lists. We defined two functions: one that filters elements into a list and one that, conversely, filters elements out of a list. When doing our proofs about the second we noticed some great similarities, which in turn led us to find a connection between the two. This enabled us to prove a property of one of the functions and then reuse the exact same proof when proving the same property for the other function.\n\nIf we were to look more into list filtering we could try doing it in a more generic fashion, where instead of limiting ourselves to lists of natural numbers, we could work on lists of an arbitrary type.\n\\end{document}\n", "meta": {"hexsha": "7cfb01258729c4132c1c69d91ffd252cf8173524", "size": 14450, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "term/report.tex", "max_stars_repo_name": "blacksails/dIFP", "max_stars_repo_head_hexsha": "9d3e5f2838674f4fae670668c8a249f11eba0fac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "term/report.tex", "max_issues_repo_name": "blacksails/dIFP", "max_issues_repo_head_hexsha": "9d3e5f2838674f4fae670668c8a249f11eba0fac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "term/report.tex", "max_forks_repo_name": "blacksails/dIFP", "max_forks_repo_head_hexsha": "9d3e5f2838674f4fae670668c8a249f11eba0fac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 103.9568345324, "max_line_length": 733, "alphanum_fraction": 0.7941868512, "num_tokens": 3254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.847967764140929, "lm_q1q2_score": 0.5873380737889012}}
{"text": "\\section{Results and Discussion}\n\\label{sec:Results_and_Discussion}\nThis section contains lists of all important results and summarizes the most important things.\n\n\\subsection{Comparability}\n\\label{subsec:Comparability}\nTo be able to compare the fitted results agains a calculated value, equation \\ref{eq:cylindrical_coil_b0} is used to compute the vacuum permeability $\\mu_0$. In order to obtain the desired result, the formula must be rearranged like this:\n\\begin{equation}\n\\mu_0=\\frac{B_0l}{NI}\\cdot\\sqrt{1+(\\,^{2R}\\!/_{l})^2}\n\\label{eq:cylindrical_coil_mu_0}\n\\end{equation}\n\n\\subsection{Results Short Coil (Central Value)}\n\\label{subsec:Results_Short_Coil_Central}\nThe calculated value has been computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.25235\\pm0.01922)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.25396\\pm0.00019)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Short Coil (Central Value)}\n\t\\label{tab:Results_Short_Coil_Central}\n\\end{table}\nThe calculated and fitted values are pretty close to each other. The big difference is the uncertainty which is way bigger for the calculated value. This is plausible because it contains not only statistical but also systematic uncertainty. This is shown in the following figure \\ref{fig:Graphical_Comparison_Short_Center}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Short_Center}\n\t\\caption{Graphical Comparison Short Coil (Center Value)}\n\t\\label{fig:Graphical_Comparison_Short_Center}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. The red line marks the uncertainty. Obviously the real value of $\\mu_0$ has no uncertainty and is thus not shown. An interesting fact is, that the real value of $\\mu_0$ is not in the range of the uncertainty of the fitted value. This is due to the fact, that the systematic uncertainty is not contained.\n\\newpage\n\\subsection{Results Short Coil (Field Pattern)}\n\\label{subsec:Results_Short_Coil_Field}\nThe calculated value has again been computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.25314\\pm0.01924)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.25017\\pm0.00063)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Short Coil (Field Pattern)}\n\t\\label{tab:Results_Short_Coil_Field}\n\\end{table}\nThis time the calculated and the fitted value are further apart (not that much tough). This is shown in the following figure \\ref{fig:Graphical_Comparison_Short_Field}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Short_Field}\n\t\\caption{Graphical Comparison Short Coil (Field Pattern)}\n\t\\label{fig:Graphical_Comparison_Short_Field}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. 
\n\\subsection{Results Short Coil (Central Value)}\n\\label{subsec:Results_Short_Coil_Central}\nThe calculated value has been computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.25235\\pm0.01922)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.25396\\pm0.00019)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Short Coil (Central Value)}\n\t\\label{tab:Results_Short_Coil_Central}\n\\end{table}\nThe calculated and fitted values are very close to each other. The main difference is the uncertainty, which is much larger for the calculated value. This is plausible because it contains not only statistical but also systematic uncertainty. This is shown in the following figure \\ref{fig:Graphical_Comparison_Short_Center}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Short_Center}\n\t\\caption{Graphical Comparison Short Coil (Center Value)}\n\t\\label{fig:Graphical_Comparison_Short_Center}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. The red line marks the uncertainty. Obviously the real value of $\\mu_0$ has no uncertainty, so none is shown. An interesting fact is that the real value of $\\mu_0$ is not in the range of the uncertainty of the fitted value. This is due to the fact that the systematic uncertainty is not included.\n\\newpage\n\\subsection{Results Short Coil (Field Pattern)}\n\\label{subsec:Results_Short_Coil_Field}\nThe calculated value has again been computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.25314\\pm0.01924)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.25017\\pm0.00063)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Short Coil (Field Pattern)}\n\t\\label{tab:Results_Short_Coil_Field}\n\\end{table}\nThis time the calculated and the fitted value are further apart (though not by much). This is shown in the following figure \\ref{fig:Graphical_Comparison_Short_Field}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Short_Field}\n\t\\caption{Graphical Comparison Short Coil (Field Pattern)}\n\t\\label{fig:Graphical_Comparison_Short_Field}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. The real value of $\\mu_0$ is again not in the range of the uncertainty of the fitted value.\n\n\\subsection{Results Long Coil (Central Value)}\n\\label{subsec:Results_Long_Coil_Central}\nThe calculated value was once again computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.24875\\pm0.01482)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.24853\\pm0.00016)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Long Coil (Central Value)}\n\t\\label{tab:Results_Long_Coil_Central}\n\\end{table}\nThe calculated and the fitted values lie very close to each other. The calculated value is almost within the range of uncertainty of the fitted one. This is shown in figure \\ref{fig:Graphical_Comparison_Long_Center} on the next page:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Long_Center}\n\t\\caption{Graphical Comparison Long Coil (Center Value)}\n\t\\label{fig:Graphical_Comparison_Long_Center}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. What is interesting in this figure is that this time both the calculated and the fitted values are close to each other but further from the real value of $\\mu_0$.\n\\subsection{Results Long Coil (Field Pattern)}\n\\label{subsec:Results_Long_Coil_Field}\nThe calculated value has likewise been computed with equation \\ref{eq:cylindrical_coil_mu_0}.\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r l}\n\t\t\\hline\n\t\t\\textbf{Calculated} $\\mu_0$ & $(1.24661\\pm0.01480)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\\n\t\t\\textbf{Fitted} $\\mu_0$ & $(1.24867\\pm0.00036)\\cdot10^{-6}\\ \\,^\\text{Vs}\\!/_\\text{Am}$ \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Results Long Coil (Field Pattern)}\n\t\\label{tab:Results_Long_Coil_Field}\n\\end{table}\nThe calculated and the fitted values are again close to each other but further from the true value of $\\mu_0$. This is shown in the following figure \\ref{fig:Graphical_Comparison_Long_Field}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Graphical_Comparison_Long_Field}\n\t\\caption{Graphical Comparison Long Coil (Field Pattern)}\n\t\\label{fig:Graphical_Comparison_Long_Field}\n\\end{figure}\nThis figure shows a graphical comparison between the calculated (top), the fitted (middle) and the real value (bottom) of $\\mu_0$. 
Generally, the measurements with the long cylindrical coil lie further from the real value of $\\mu_0$.\n", "meta": {"hexsha": "bc7928f2b65ab4658b6c243dcec2e179f289de1c", "size": 5849, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "glaL3_E_6_Magnetic_Fields/sections/results_discussion.tex", "max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "glaL3_E_6_Magnetic_Fields/sections/results_discussion.tex", "max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glaL3_E_6_Magnetic_Fields/sections/results_discussion.tex", "max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.786407767, "max_line_length": 434, "alphanum_fraction": 0.7599589673, "num_tokens": 1756, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8479677622198946, "lm_q1q2_score": 0.5873380724583122}}
{"text": "\\chapter{Summary of Functions} \\label{appx:function}\n\n\\begin{center} \\begin{tabular}{||l|l||}\n\\hline\n\\multicolumn{2}{||c||}{} \\\\\n\\multicolumn{2}{||c||}{Standard \\caps{FORTRAN} Functions} \\\\\n\\multicolumn{2}{||c||}{} \\\\\n\\hline\n\\param{r} = \\cmd{AINT} ($x$)\n      & truncation: $|x|$ \\\\\n\\param{r} = \\cmd{ANINT} ($x$)\n      & nearest integer: [$x + .5*$sign($x$)] \\\\\n\\param{r} = \\cmd{ABS} ($x$)\n      & absolute value: $|x|$ \\\\\n\\param{r} = \\cmd{MOD} ($x$, $y$)\n      & remainder: $x - y * [x/y]$ \\\\\n\\param{r} = \\cmd{SIGN} ($x$, $y$)\n      & transfer of sign: $|x|$ sign $y$ \\\\\n\\param{r} = \\cmd{DIM} ($x$, $y$)\n      & positive difference: $x - $min($x$,$y$) \\\\\n\\param{r} = \\cmd{MAX} ($x$, $y$, \\ldots)\n      & maximum of $x$, $y$, \\ldots\\ \\\\\n\\param{r} = \\cmd{MIN} ($x$, $y$, \\ldots)\n      & minimum of $x$, $y$, \\ldots\\ \\\\\n\\param{r} = \\cmd{SQRT} ($x$)\n      & square root: $\\sqrt{x}$ \\\\\n\\param{r} = \\cmd{EXP} ($x$)\n      & exponentiation: e$^{x}$ \\\\\n\\param{r} = \\cmd{LOG} ($x$)\n      & natural logarithm: log$_{e}x$ \\\\\n\\param{r} = \\cmd{LOG10} ($x$)\n      & common logarithm: log$_{10}x$ \\\\\n\\param{r} = \\cmd{SIN} ($x$)\n      & sine $x$ \\\\\n\\param{r} = \\cmd{COS} ($x$)\n      & cosine $x$ \\\\\n\\param{r} = \\cmd{TAN} ($x$)\n      & tangent $x$ \\\\\n\\param{r} = \\cmd{ASIN} ($x$)\n      & arc sine $x$ \\\\\n\\param{r} = \\cmd{ACOS} ($x$)\n      & arc cosine $x$ \\\\\n\\param{r} = \\cmd{ATAN} ($x$)\n      & arc tangent $x$ \\\\\n\\param{r} = \\cmd{ATAN2} ($x$, $y$)\n      & arc tangent $x/y$ \\\\\n\\param{r} = \\cmd{SINH} ($x$)\n      & hyperbolic sine $x$ \\\\\n\\param{r} = \\cmd{COSH} ($x$)\n      & hyperbolic cosine $x$ \\\\\n\\param{r} = \\cmd{TANH} ($x$)\n      & hyperbolic tangent $x$ \\\\\n\\hline\n\\end{tabular} \\end{center}\n\n\\medskip\n\\begin{center} \\begin{tabular}{||l|l||}\n\\hline\n\\multicolumn{2}{||c||}{} \\\\\n\\multicolumn{2}{||c||}{Tensor Principal Values and Magnitude Functions}\n\\\\\n\\multicolumn{2}{||c||}{} \\\\\n\\hline\n\\param{r} = \\cmd{PMAX}\n   ($T_{11}$, $T_{22}$, $T_{33}$, $T_{12}$, $T_{23}$, $T_{31}$)\n      & maximum principal values \\\\\n\\param{r} = \\cmd{PMIN}\n   ($T_{11}$, $T_{22}$, $T_{33}$, $T_{12}$, $T_{23}$, $T_{31}$)\n      & minimum principal values \\\\\n\\param{r} = \\cmd{PMAX2} ($T_{11}$, $T_{22}$, $T_{12}$)\n      & maximum principal values (2D) \\\\\n\\param{r} = \\cmd{PMIN2} ($T_{11}$, $T_{22}$, $T_{12}$)\n      & minimum principal values (2D) \\\\\n\\param{r} = \\cmd{TMAG}\n   ($T_{11}$, $T_{22}$, $T_{33}$, $T_{12}$, $T_{23}$, $T_{31}$)\n      & magnitude of the deviatoric part \\\\\n\\hline\n\\end{tabular} \\end{center}\n\n\\medskip\n\\begin{center} \\begin{tabular}{||l|l||}\n\\hline\n\\multicolumn{2}{||c||}{} \\\\\n\\multicolumn{2}{||c||}{IF Functions} \\\\\n\\multicolumn{2}{||c||}{} \\\\\n\\hline\n\\param{r} = \\cmd{IFLZ} (\\param{cond}, \\param{rtrue}, \\param{rfalse})\n      & if \\param{cond} $<$ 0.0,\n         \\param{rtrue} else \\param{rfalse} \\\\\n\\param{r} = \\cmd{IFEZ} (\\param{cond}, \\param{rtrue}, \\param{rfalse})\n      & if \\param{cond} $=$ 0.0,\n         \\param{rtrue} else \\param{rfalse} \\\\\n\\param{r} = \\cmd{IFGZ} (\\param{cond}, \\param{rtrue}, \\param{rfalse})\n      & if \\param{cond} $>$ 0.0,\n         \\param{rtrue} else \\param{rfalse} \\\\\n\\hline\n\\end{tabular} \\end{center}\n\n\\medskip\n\\begin{center} \\begin{tabular}{||l|l||}\n\\hline\n\\multicolumn{2}{||c||}{} \\\\\n\\multicolumn{2}{||c||}{Array 
$\\Rightarrow$ Global Variable Functions} \\\\\n\\multicolumn{2}{||c||}{} \\\\\n\\hline\n\\param{r} = \\cmd{SUM} (\\param{x})\n      & sum of \\param{x} over all nodes or elements \\\\\n\\param{r} = \\cmd{SMAX} (\\param{x})\n      & maximum of \\param{x} over all nodes or elements \\\\\n\\param{r} = \\cmd{SMIN} (\\param{x})\n      & minimum of \\param{x} over all nodes or elements \\\\\n\\hline\n\\end{tabular} \\end{center}\n\n\\medskip\n\\begin{center} \\begin{tabular}{||l|l||}\n\\hline\n\\multicolumn{2}{||c||}{} \\\\\n\\multicolumn{2}{||c||}{Envelope Functions} \\\\\n\\multicolumn{2}{||c||}{} \\\\\n\\hline\n\\param{r} = \\cmd{ENVMAX} (\\param{x})\n      & maximum of \\param{x} over all previous time steps \\\\\n\\param{r} = \\cmd{ENVMIN} (\\param{x})\n      & minimum of \\param{x} over all previous time steps \\\\\n\\hline\n\\end{tabular} \\end{center}\n", "meta": {"hexsha": "c1ee934cac58bdd6e888147abfd5ede96d5a8873", "size": 4017, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/seacas/doc-source/algebra/algfnctsum.tex", "max_stars_repo_name": "jschueller/seacas", "max_stars_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_stars_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2016-02-04T18:38:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T03:01:49.000Z", "max_issues_repo_path": "packages/seacas/doc-source/algebra/algfnctsum.tex", "max_issues_repo_name": "jschueller/seacas", "max_issues_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_issues_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_issues_count": 206, "max_issues_repo_issues_event_min_datetime": "2015-11-20T01:57:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:12:04.000Z", "max_forks_repo_path": "packages/seacas/doc-source/algebra/algfnctsum.tex", "max_forks_repo_name": "jschueller/seacas", "max_forks_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_forks_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_forks_count": 68, "max_forks_repo_forks_event_min_datetime": "2016-01-13T22:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T06:25:05.000Z", "avg_line_length": 31.3828125, "max_line_length": 72, "alphanum_fraction": 0.5334826985, "num_tokens": 1613, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677506936878, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5873380644747774}}
{"text": "%% -*- coding:utf-8 -*-\n\\chapter{Base definitions of probability theory}\nI am going to provide several definitions. I will give the both formal\nand informal definitions and show how they are related each other.\n\n\\section{Example and motivation}\nWe will start with the simplest example. \n\\begin{example}\n\\label{ex:initial}\nIn the example we have (see \\cref{fig:simpleprobability}) $N=5$ balls.\nThere are $N_G = 2$ green balls and $N_R$ red balls. I.e. $N = N_G +\nN_R$.  \n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element, color=green] at (0,1) {};\n    \\node[element, color=red] at (1,1) {};\n    \\node[element, color=green] at (2,1) {};\n    \\node[element, color=red] at (0,0) {};\n    \\node[element, color=red] at (1,0) {};\n\n\n  \\end{tikzpicture}\n  \\caption{Probability example}\n  \\label{fig:simpleprobability}\n\\end{figure}\n\nWe can define the probability to get green ball as\n\\[\nP_G = \\frac{N_G}{N} = \\frac{2}{5}\n\\]\nand the probability to get red ball as\n\\[\nP_R = \\frac{N_R}{N} = \\frac{3}{5}.\n\\]\nWe can get only a red or a green ball and \n\\[\nP_G + P_R = 1.\n\\]\n\\end{example}\n\nWe will formalize the probability calculations for such cases via the\ncounting principle and define the probability as a ratio of number\ndesired outcomes $\\left|A\\right|$ to the number of all possible\noutcomes $\\left|\\Omega\\right|$:\n\\[\nP = \\frac{\\left|A\\right|}{\\left|\\Omega\\right|}.\n\\]\n\nThere is a simple example on counting principle below.\n\\begin{example}[Fish in a pond]\n\\label{ex:fishinpond}\nThe example is from a Russian Biological Olympiad. Consider a pond with\nfishes. 15 of them were marked. After sometime we took 15 fishes and 5\nof them were marked. How many fishes are in the pond.\n\nThe accepted answer was 45 with the following explanation: \n\\[\n\\frac{15}{5} = \\frac{n}{15}\n\\]\ntherefore $n=45$.\n\nLets try to solve the task with probability theory and convert the\nquestion to the following one: How many fishes are in the pond if in\nevery 15 fishes with high probability we get 5 marked ones?\n\nLet $n$ the number of fishes in the pond. Then the size of the sample\nspace $\\left|\\Omega\\right|$ is the following\n\\[\n\\left|\\Omega\\right| = \\binom{n}{15}\n\\]\ni.e. 
\nThere is a simple example of the counting principle below.\n\\begin{example}[Fish in a pond]\n\\label{ex:fishinpond}\nThe example is from a Russian Biological Olympiad. Consider a pond with\nfish; 15 of them were marked. After some time we caught 15 fish and 5\nof them were marked. How many fish are in the pond?\n\nThe accepted answer was 45 with the following explanation: \n\\[\n\\frac{15}{5} = \\frac{n}{15}\n\\]\ntherefore $n=45$.\n\nLet's try to solve the task with probability theory and convert the\nquestion to the following one: how many fish are in the pond if in\nevery 15 fish with high probability we get 5 marked ones?\n\nLet $n$ be the number of fish in the pond. Then the size of the sample\nspace $\\left|\\Omega\\right|$ is the following:\n\\[\n\\left|\\Omega\\right| = \\binom{n}{15}\n\\]\ni.e. the number of ways to choose 15 fish from $n$.\n\nThe desired outcome has size $\\left|A\\right|$, combined from 2\nparts: choosing 5 of the 15 marked fish and getting the rest (10) from the\nnon-marked fish, of which there are $n - 15$:\n\\[\n\\left|A\\right| = \\binom{15}{5} \\cdot \\binom{n - 15}{10}.\n\\]\nTherefore the result can be calculated as follows\n\\footnote{\nAn alternative way to calculate it can be found in \\mynameref{ex:fishinpond_cond}.\n}\n\\begin{equation}\n\\label{eq:fishinpond}\nP_n = \\frac{\\left|A\\right|}{\\left|\\Omega\\right|} = \n\\frac{\\binom{15}{5} \\cdot \\binom{n - 15}{10}}{\\binom{n}{15}}.\n\\end{equation}\n\nQuick calculations \n\\cite{github:mathexperiments_ivanmurashko}\nshow that $n=45$ is very close to the real answer: \n\\begin{verbatim}\n$ stack repl\n> map fish_in_pond [55, 50, 45, 40, 35, 30]\n[0.21391501072376837,0.24492699593153727,0.26162279575176545,\n 0.2440273978093778,0.17082265318953427,5.813662441225208e-2]\n\\end{verbatim}\n\n\\end{example}\n\n\n\\begin{example}[Birthday paradox]\nConsider $n$ people and give a prediction that there is at least one\npair of people who have a birthday on the same day. We will call the\nevent $A$ and therefore are required to find $P(A)$. \nThe straightforward calculations are not easy, so we will try to find\nthe probability of another event $A^c$ that is the complement of event\n$A$. I.e. event $A^c$ states that there is no such pair and all\nbirthdays are different. We can calculate the desired probability via \n\\[\nP_n = P(A) = 1 - P(A^c).\n\\]\n\nWe will use standard counting here. There are in total $365^n$\npossible options for different birthdays in the group of $n$ people.\nThe number of outcomes that satisfy $A^c$ is \n\\[\n365 \\cdot 364 \\cdot \\dots \\cdot (365 -n +1) = \n\\prod_{i = 1}^n (365 - i + 1).\n\\]\nTherefore\n\\[\nP(A^c) = \\frac{\\prod_{i = 1}^n (365 - i + 1)}{365^n}\n\\]\nand\n\\[\nP_n = 1 - \\frac{\\prod_{i = 1}^n (365 - i + 1)}{365^n}.\n\\]\nCalculations give us\n$P_{10} = 0.1169, P_{30} = 0.7063, P_{60} = 0.9941$. I.e. we can say\nwith high probability\nthat in a group of 60 people there will be at least 2 people with\nthe same birthday.\n\\end{example}\n\n\\section{Definitions}\nNow we are ready to give several formal definitions. \n\\subsection{$\\sigma$-algebra}\n\\begin{definition}[Power set]\n\\label{def:powerset}\nLet $\\Omega$ be a set; then the set of all possible subsets of $\\Omega$\nis called the \\textit{power set} and denoted as\n$\\mathcal{P}\\left(\\Omega\\right)$. \n\\end{definition}\n\n\\begin{definition}[$\\sigma$ algebra]\nLet $\\Omega$ be a set; then a subset $\\mathcal{F}$ of the\n\\mynameref{def:powerset} $\\mathcal{P}\\left(\\Omega\\right)$ ($\\mathcal{F}\\subseteq\n\\mathcal{P}\\left(\\Omega\\right)$) is called a $\\sigma$ algebra if the\nfollowing conditions are\nsatisfied:\n\\begin{enumerate}\n\\item $\\mathcal{F}$ contains $\\Omega$: $\\Omega \\in \\mathcal{F}$\n\\item $\\mathcal{F}$ is closed under complement: if $A \\in \\mathcal{F}$ then $\\Omega \\setminus A \\in \\mathcal{F}$\n\\item $\\mathcal{F}$ is closed under countable unions: if $A_1, A_2, \\dots \\in \\mathcal{F}$ then $\\bigcup_i A_i \\in \\mathcal{F}$\n\\end{enumerate} \n\\end{definition}\n\nIn our \\cref{ex:initial}, a $\\sigma$ algebra is given, for example, by the\ncollection of all subsets of the set of balls, i.e. the power set; the\nsnippet below checks the three conditions on a small example. 
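\n\nAs an illustrative sketch (ours, not from the notes), the following Python snippet builds the power set of a small $\\Omega$ and verifies the three conditions above; for a finite family, checking pairwise unions suffices for the union condition.\n\n\\begin{verbatim}\nfrom itertools import chain, combinations\n\nomega = frozenset({1, 2, 3})\n\ndef power_set(s):\n    # all subsets of s, as frozensets\n    return {frozenset(c) for c in chain.from_iterable(\n        combinations(s, r) for r in range(len(s) + 1))}\n\nF = power_set(omega)\nassert omega in F                             # contains Omega\nassert all(omega - a in F for a in F)         # closed under complement\nassert all(a | b in F for a in F for b in F)  # closed under unions\nprint(len(F))  # 8 subsets\n\\end{verbatim}\n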
\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner\n      sep=0pt]\n\n    \\node[element,label=left:$a$] (a) at (0,2) {};    \n    \\node[element,label=left:$b$] (b) at (0,1) {};    \n    \\node[element,label=right:$c$] (c) at (2,2) {};\n    \\node[element,label=right:$d$] (d) at (2,1) {};\n    \n  \\end{tikzpicture}\n  \\caption{Probability space. It consists of elementary events $a$,\n    $b$, $c$ and $d$, each\n    of which has equal probability $p_{a,b,c,d} = \\frac{1}{4}$}\n  \\label{fig:probabilityspace}\n\\end{figure}\n\n\\section{Conditional probability}\n\n\\begin{definition}[Conditional probability]\n\\label{def:conditionalprobability}\nThe \\textit{conditional probability} of event $A$ given event $B$ is\ndefined as follows:\n\\[\nP(A|B) = \\frac{P\\left(A \\cap B\\right)}{P(B)}\n\\]\n\\end{definition}\n\n\\begin{example}[Conditional probability]\n\\label{ex:conditionalprobability}\nLet's consider 6 balls, each of which has two colors (see\n\\cref{fig:excondprobability}). \n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element,top color=green, bottom color=red] at (0,1) {};\n    \\node[element,top color=green, bottom color=red] at (1,1) {};\n    \\node[element,top color=green, bottom color=red] at (2,1) {};\n    \\node[element,top color=red, bottom color=blue] at (0,0) {};\n    \\node[element,top color=red, bottom color=blue] at (1,0) {};\n    \\node[element,top color=green, bottom color=blue] at (2,0) {};\n\n\n  \\end{tikzpicture}\n  \\caption{Conditional probability. Original probability space. $P(A = red) =\n    \\frac{5}{6}$, $P(A = blue) = \\frac{3}{6}$, $P(A = green) = \\frac{4}{6}$\n  }\n  \\label{fig:excondprobability}\n\\end{figure}\nYou can see that the probability $P(A)$ to get a red ball is $P(A = red) =\n\\frac{5}{6}$, a blue one is $P(A = blue) = \\frac{3}{6}$, and a green one is\n$P(A = green) =\n\\frac{4}{6}$. \n\nNow assume that event $A$ is to get a green ball while event $B$ is to\nget a red ball; how can we define $P(A|B)$ in this case? \n\n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element,top color=green, bottom color=red] at (0,1) {};\n    \\node[element,top color=green, bottom color=red] at (1,1) {};\n    \\node[element,top color=green, bottom color=red] at (2,1) {};\n    \\node[element,top color=red, bottom color=blue] at (0,0) {};\n    \\node[element,top color=red, bottom color=blue] at (1,0) {};\n\n\n  \\end{tikzpicture}\n  \\caption{Conditional probability. $P(A = green|B = red) =\n    \\frac{3}{5}$, $P(A = blue| B = red) = \\frac{2}{5}$}\n  \\label{fig:excondprobability_red}\n\\end{figure}\nThe situation is displayed in \\cref{fig:excondprobability_red}. We have only 5\npossibilities to choose a ball now instead of 6 in the original case.\nThis is because we just got additional information -- ``one of the\ncolors should be red''. Only 3 of the 5 balls are green. 
Therefore\n$P(A|B) = P(A=green|B=red) = \\frac{3}{5}$. \n\nThis result agrees with the formal definition of\n\\mynameref{def:conditionalprobability}:\n\\begin{eqnarray}\nP(A|B) = \\frac{P\\left(A \\cap B\\right)}{P(B)} = \n\\nonumber \\\\\n\\frac{P\\left(A = green \\cap B = red\\right)}{P(B = red)} = \n\\frac{3/6}{5/6} = \\frac{3}{5}.\n\\nonumber\n\\end{eqnarray}\n\nThe \\cref{fig:excondprobability_blue} gives us the view if event\n$B=blue$ occurs. \n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element,top color=red, bottom color=blue] at (0,0) {};\n    \\node[element,top color=red, bottom color=blue] at (1,0) {};\n    \\node[element,top color=green, bottom color=blue] at (2,0) {};\n\n\n  \\end{tikzpicture} \n  \\caption{Conditional probability. $P(A = red|B=blue) = \\frac{2}{3}$,\n    $P(A= green|B = blue) = \\frac{1}{3}$} \n  \\label{fig:excondprobability_blue}\n\\end{figure}\nIn this case we have the following conditional probabilities:\n$P(A = red|B=blue) = \\frac{2}{3}$, $P(A= green|B = blue) =\n    \\frac{1}{3}$.\n\nFinally, the \\cref{fig:excondprobability_green} gives us the view if event\n$B=green$ occurs. \n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element,top color=green, bottom color=red] at (0,1) {};\n    \\node[element,top color=green, bottom color=red] at (1,1) {};\n    \\node[element,top color=green, bottom color=red] at (2,1) {};\n    \\node[element,top color=green, bottom color=blue] at (2,0) {};\n\n\n  \\end{tikzpicture}\n  \\caption{Conditional probability. $P(A = blue|B = green) =\n    \\frac{1}{4}$, $P(A = red|B = green) =\n    \\frac{3}{4}$} \n  \\label{fig:excondprobability_green}\n\\end{figure}\n\\end{example}\n\n\\begin{example}[The King's sibling]\nSuppose that we have a king from a family of 2 children. What is the\nprobability that his sibling is a girl? The important assumption\n\\footnote{\nFor instance, if the king's family kept having children until the\nfirst boy (the king) was born, then the sibling would be a girl with\nprobability $1$.\n}\nthat\nhas to be made is the following: there is no family planning in\nthe king's family, and the probability to get a boy $P_b$ and the probability\nto get a girl $P_g$ are equal:\n\\[\nP_b = P_g = \\frac{1}{2}.\n\\] \nWe have 4 cases: $bb, bg, gb, gg$, and the condition that the king is a\nboy picks out only 3 options for us: $bb, bg, gb$. All of them are\nequally likely and 2 of them have a girl as the sibling, i.e.\n\\[\nP(sibling = girl|king) = \\frac{2}{3}.\n\\]\nThe enumeration below makes this mechanical.\n\\end{example}\n\n
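A quick illustrative Python enumeration (ours) of the king's sibling puzzle:\n\n\\begin{verbatim}\nfrom fractions import Fraction\n\n# families with two children, each combination equally likely\nfamilies = ['bb', 'bg', 'gb', 'gg']\n\n# condition on the family containing a boy (the king)\nwith_boy = [f for f in families if 'b' in f]\n\n# among those, count families whose other child is a girl\ngirl_sibling = [f for f in with_boy if 'g' in f]\nprint(Fraction(len(girl_sibling), len(with_boy)))  # 2/3\n\\end{verbatim}\n\n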
\n\\begin{example}[Total probability]\nLet's assume, as in \\mynameref{ex:conditionalprobability}, that we are\ninterested in the event $A$ that the ball is green. The other color\nwill then be either blue or red, i.e. $B_1 = blue$, $B_2 = red$.\n\\begin{eqnarray}\nP(A = green) = P(A = green|B = blue) P(B = blue) +\n\\nonumber \\\\\n+\nP(A = green|B = red) P(B = red) =\n\\nonumber \\\\\n= \\frac{1}{3} \\cdot \\frac{1}{2} + \\frac{3}{5}\\cdot \\frac{5}{6} = \\frac{4}{6}.\n\\nonumber\n\\end{eqnarray}\nI.e. the formula works.\n\\end{example}\n\nConsider another, not so simple, example.\n\\begin{example}[Total probability paradox]\nSuppose we have 6 balls, each of a single color: red or green (see\n\\cref{fig:excondprobability_add}).\n\\begin{figure}[H]\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n    \\tikzstyle{element}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]\n\n    \\node[element,color=red] at (0,1) {};\n    \\node[element,color=red] at (1,1) {};\n    \\node[element,color=red] at (2,1) {};\n    \\node[element,color=green] at (0,0) {};\n    \\node[element,color=green] at (1,0) {};\n    \\node[element,color=green] at (2,0) {};\n\n\n  \\end{tikzpicture}\n  \\caption{Total probability example}\n  \\label{fig:excondprobability_add}\n\\end{figure}\nLet $A$ be the event of drawing one particular ball: $P(A) = \\frac{1}{6}$. The\nevent $B_1$ is drawing a green ball: $P(B_1) = \\frac{1}{2}$. The\nsame holds for drawing a red ball: $P(B_2) = \\frac{1}{2}$.\nThe conditional probabilities seem to be\n\\begin{equation}\nP(A|B_1) = P(A|B_2) = \\frac{1}{3}.\n\\label{eq:condprob_wrong}\n\\end{equation}\nAs a result the total probability is\n\\[\nP(A) = \\frac{1}{2}\\cdot\\frac{1}{3} + \\frac{1}{2}\\cdot\\frac{1}{3} =\n\\frac{2}{6} \\ne \\frac{1}{6}.\n\\]\nThe error is in \\eqref{eq:condprob_wrong}. When we consider a\nconcrete ball, it is either green or red, and as a result one of the\nconditional probabilities $P(A|B_1)$ or $P(A|B_2)$ is zero. In that\ncase we get the correct answer $P(A) = \\frac{1}{6}$.\n\\end{example}\n
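\nSpelled out for a concrete green ball (this display is an added\nclarification): only the $B_1$ term survives, and\n\\[\nP(A) = P(A|B_1) P(B_1) + P(A|B_2) P(B_2)\n     = \\frac{1}{3}\\cdot\\frac{1}{2} + 0\\cdot\\frac{1}{2} = \\frac{1}{6}.\n\\]\n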
\nLet's calculate the required probability from \\mynameref{ex:fishinpond}\nby means of conditional probability tools. This will show us that the\nsame task can be solved in different ways.\n\\begin{example}[Fish in a pond (conditional)]\n\\label{ex:fishinpond_cond}\nFirst of all, calculate the probability of the following collection\n\\[\nA = MMMMMNNNNNNNNNN\n\\]\nwhere $M$ means a marked fish and $N$ a non-marked one, i.e. the first 5\nfish caught are marked and the last 10 are non-marked.\nThe probability of catching a marked fish first is\n\\[\nP_1 = \\frac{15}{n}\n\\]\nand once a marked fish has been caught, the (conditional) probability\nof catching another marked one is\n\\[\nP_2 = \\frac{14}{n-1},\n\\]\ni.e. the probability of catching the $i$-th marked fish is\n\\[\nP_i = \\frac{15-i+1}{n - i + 1}.\n\\]\nThe probability of catching the first non-marked fish is\n\\[\nQ_1 = \\frac{n - 15}{n - 5},\n\\]\nand for the $i$-th one\n\\[\nQ_i = \\frac{n -15 -i + 1}{n - 5 - i + 1}.\n\\]\nThe final probability for event $A$ is\n\\[\nP(A) = \\frac{\\prod_{k=11}^{15}k}{\\prod_{i =\n    1}^{15}{\\left(n -i + 1\\right)}}\\prod_{i=1}^{10}{\\left(n - 15 - i + 1\\right)}.\n\\]\nThe desired collection is any collection with 5 $M$s and 10 $N$s. All\nof them have the same probability, i.e. the final probability is\n\\begin{eqnarray}\nP = \\binom{15}{5} P(A) = \\binom{15}{5}\n\\frac{ \\prod_{k=11}^{15}k}{\\prod_{i = 1}^{15}{\\left(n - i +\n    1\\right)}}\\prod_{i=1}^{10}{\\left(n - 15 - i + 1\\right)} = \n\\nonumber \\\\\n= \\binom{15}{5}\n\\frac{\\frac{15!}{10!}}{\\frac{n!}{(n-15)!}}\\frac{(n-15)!}{(n-25)!} = \n\\nonumber \\\\\n= \\binom{15}{5}\n\\frac{15!(n-15)!}{n!}\n\\frac{(n-15)!}{10! (n-25)!} =\n\\frac{\\binom{15}{5} \\cdot \\binom{n - 15}{10}}{\\binom{n}{15}}.\n\\nonumber\n\\end{eqnarray}\nThat is the same as the equation obtained in \\eqref{eq:fishinpond}.\n\\end{example}\n\n\n\\begin{definition}[Independence]\n\\label{def:independence}\nTwo events $A$ and $B$ are independent if \n\\[\nP\\left(A \\cap B\\right) = \nP\\left(A\\right) P\\left(B\\right). \n\\]\n\\end{definition}\n\n\\begin{example}[Non-independent events]\nConsider the situation shown in \\cref{fig:excondprobability_add}. Let\nevent $A$ be that the ball is green and event $B$ that the ball is red. We\nhave\n\\[\nA \\cap B = \\emptyset, \n\\]\ni.e. $P\\left(A \\cap B\\right) = 0$. This means that the events cannot\nbe considered independent according to \\cref{def:independence}, since $P\\left(A\\right) = \nP\\left(B\\right) = \\frac{1}{2}$.\n\nIndeed, the events are dependent, since $A$ cannot occur if $B$ occurs\nand vice versa.\n\\end{example}\n\nTBD \\cite{bib:kolmogorov74basic}\n", "meta": {"hexsha": "05520e71db828a13b870b303aa52b77b6ad1863f", "size": 16152, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "probability/basedefinitions.tex", "max_stars_repo_name": "ivanmurashko/articles", "max_stars_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-27T08:59:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-27T08:59:55.000Z", "max_issues_repo_path": "probability/basedefinitions.tex", "max_issues_repo_name": "ivanmurashko/articles", "max_issues_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "probability/basedefinitions.tex", "max_forks_repo_name": "ivanmurashko/articles", "max_forks_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1663244353, "max_line_length": 81, "alphanum_fraction": 0.6796062407, "num_tokens": 5450, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.8479677506936878, "lm_q1q2_score": 0.5873380537147643}}
{"text": "\\documentclass{article}\n\\usepackage{tikz}\n\\usepackage{graphicx}\n\\usepackage{amsmath,amssymb}\n\\usepackage{hyperref}\n\\hypersetup{ colorlinks=true, linkcolor=blue }\n\\title{Moment of Inertia}\n\n\\begin{document}\n\\maketitle\n\\tableofcontents\n\n\\section{Intro}\n\\subsection{Definition}\nMoment of Inertia is denoted by the letter 'I' and is the product of the mass times the square of the radius of gyration.\n$$ I = M R^2$$\n\\subsection{Parallel Axis Theorem}\nMoment of inertia of an object about an axis parallel to another axis is related by the parallel axis theorem.\\\\\nLet I be the moment of inertia about an axis,\\\\\nI' be the moment of Inertia about an axis parallel to it,\\\\\n$x$ be the distance between those axis,\\\\\n$M$ be the mass of object. Then\n$$I' = I + M x^2$$\n\n\\subsection{Perpendicular Axis Theorem}\nFor a planar object, the moment of inertia about an axis perpendicular to the plane is the sum of the moments of inertia of two perpendicular axes through the same point in the plane of the object.\n$$ I_z = I_x + I_y$$\nwhere $I_x$,$I_y$ are the moment of inertia about axis in the plane of the object and $I_z$ is the moment of inertia about an axis perpendicular to the plane of the object.\n\n\\section{Uniform Rod}\n\\subsection{Axis passing through its midpoint}\n\\begin{tikzpicture}\n  \\draw (0,0) rectangle (6,0.5);\n  \\draw[dashed] (3,1) -- (3,-0.5);\n  \\draw (3,-1) node {$x$=0};\n\\end{tikzpicture}\n\\\\\nConsider a uniform rod of length L and mass M rotating about an axis passing through the centre of gravity and perpendicular to the rod.\nLet us consider a small element with mass $dm$ and length $dx$ at a distance $x$ from the axis. Then,\n$$I = \\int dm*x^2$$\n$$dm = \\frac{M}{L} dx$$\n$$\\therefore I = \\int_{-\\frac{L}{2}}^{\\frac{L}{2}} \\frac{M}{L} x^2 dx$$\n$$ I = \\frac{M}{L} \\left[ \\frac{x^3}{3} \\right]_{\\frac{-L}{2}}^{\\frac{L}{2}}$$\n$$ I = \\frac{M}{3L} \\left[ \\frac{L^3}{8} - \\left(-\\frac{L^3}{8}\\right) \\right]$$\n$$ I = \\frac{M}{3L} \\left[ \\frac{L^3}{8} + \\frac{L^3}{8} \\right]$$\n$$ \\boxed{I = \\frac{ML^2}{12}} $$\n\n\\subsection{Axis passing through its end}\n\\begin{tikzpicture}\n  \\draw (0,0) rectangle (6,0.5);\n  \\draw[dashed] (0,1) -- (0,-0.5);\n  \\draw (0,-1) node {$x$=0};\n\\end{tikzpicture}\n\\\\\nConsider a uniform rod of length L and mass M rotating about an axis passing through one end of the rod.\nLet us consider a small element with mass $dm$ and length $dx$ at a distance $x$ from the axis. 
Then,\n$$I = \\int x^2 \\, dm$$\n$$dm = \\frac{M}{L} dx$$\n$$\\therefore I = \\int_{0}^{L} \\frac{M}{L} x^2 dx$$\n$$ I = \\frac{M}{L} \\left[ \\frac{x^3}{3} \\right]_{0}^{L}$$\n$$ I = \\frac{M}{3L} \\left[ L^3 - 0 \\right]$$\n$$ I = \\frac{M}{3L} \\left[ L^3 \\right]$$\n$$ \\boxed{I = \\frac{ML^2}{3}} $$\n
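\nAs a quick cross-check (an added remark, not part of the original notes), the same result follows from the parallel axis theorem applied to the midpoint result, with $x = L/2$:\n$$ I' = \\frac{ML^2}{12} + M\\left(\\frac{L}{2}\\right)^2 = \\frac{ML^2}{12} + \\frac{ML^2}{4} = \\frac{ML^2}{3}$$\n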
\n\\section{Ring}\n\\subsection{Axis through centre and perpendicular to the plane of the ring}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{ring1.png}\n\\end{figure}\nConsider a uniform ring of radius R and mass M rotating about an axis passing through the centre of gravity and perpendicular to the plane of the ring.\nLet us consider a small element with mass $dm$ and length $Rd\\theta$\n$$dm = \\frac{M}{2\\pi R} Rd\\theta$$\n$$dI = dm \\, R^2$$\n$$I = \\int_{0}^{2\\pi} \\frac{M}{2\\pi} R^2 d\\theta$$\n$$\\boxed{I = MR^2}$$\n\n\\subsection{About the diameter}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{ring2.png}\n\\end{figure}\nThe moment of inertia about the axis through the centre and perpendicular to the plane of the ring is $MR^2$. Let the moment of inertia about a diameter in the plane of the ring be I.\\\\\n$\\therefore$ Using the perpendicular axis theorem,\n$$MR^2 = 2I$$\n$$\\boxed{I = \\frac{MR^2}{2}}$$\n\n\\section{Disk}\n\\subsection{Axis through the centre and perpendicular to the plane of the disk}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{disk1.png}\n\\end{figure}\nConsider a uniform disk of radius R and mass M rotating about an axis passing through the centre of gravity and perpendicular to the plane of the disk.\nLet us consider a ring with mass $dm$, radius $x$ and thickness $dx$\n$$dm = \\frac{M}{\\pi R^2} 2\\pi x dx$$\n$$dI = dm x^2$$\n$$I = \\int_{0}^{R} \\frac{M}{R^2} 2 x^3 dx$$\n$$I = \\frac{2M}{R^2} \\left[ \\frac{x^4}{4} \\right]_{0}^{R}$$\n$$\\boxed{I = \\frac{MR^2}{2}}$$\n\n\\subsection{About the diameter}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{disk2.png}\n\\end{figure}\nThe moment of inertia about the axis through the centre and perpendicular to the plane of the disk is $\\frac{MR^2}{2}$. Let the moment of inertia about a diameter in the plane of the disk be I.\\\\\n$\\therefore$ Using the perpendicular axis theorem,\n$$\\frac{MR^2}{2} = 2I$$\n$$\\boxed{I = \\frac{MR^2}{4}}$$\n\n\\section{Hollow Cylinder}\n\\subsection{Axis through its centre and perpendicular to the circular faces}\nConsider a hollow cylinder of radius R, height h and mass M rotating about an axis passing through its centre and perpendicular to the circular faces. Let the hollow cylinder be made up of many rings.\nLet us consider a ring of mass $dm$ and thickness $dx$\n$$dm = \\frac{M}{2\\pi Rh}2\\pi R dx$$\n$$dI = dm R^2$$\n$$dI = \\frac{M}{h} R^2 dx$$\n$$I = \\int_0^h \\frac{MR^2}{h} dx$$\n$$\\boxed{I = MR^2}$$\n\n\\subsection{Axis through its centre and perpendicular to the lateral side}\nConsider a hollow cylinder of radius R, height h and mass M rotating about an axis passing through its centre and perpendicular to the lateral side. Let us consider a ring of mass $dm$ and thickness $dx$ at a distance $x$ from the centre\n$$dm = \\frac{M}{2\\pi Rh} 2\\pi Rdx$$\n$$dI = \\frac{dmR^2}{2} + dmx^2$$\n$$dI = \\frac{MR^2}{2h} dx + \\frac{Mx^2}{h} dx$$\n$$I = \\int_{\\frac{-h}{2}}^{\\frac{h}{2}} \\frac{MR^2}{2h} + \\frac{Mx^2}{h} dx$$\n$$I = \\left[ \\frac{MR^2}{2h}x + \\frac{Mx^3}{3h} \\right]_{\\frac{-h}{2}}^{\\frac{h}{2}}$$\n$$\\boxed{I = \\frac{MR^2}{2} + \\frac{Mh^2}{12}}$$\n\n\\section{Solid Cylinder}\n\\subsection{Axis through its centre and perpendicular to the circular faces}\n\\begin{figure}[h!]\n  \\includegraphics[scale=0.5]{cylinder1.png}\n  \\centering\n\\end{figure}\nConsider a solid cylinder of radius R, height h and mass M rotating about an axis passing through its centre and perpendicular to the circular faces. Let the solid cylinder be made up of many hollow cylinders.\nLet us consider a hollow cylinder of mass $dm$, thickness $dx$, radius $x$\n$$dm = \\frac{M}{\\pi R^2h}2\\pi xdx h$$\n$$dI = dmx^2$$\n$$dI = \\frac{2Mx^3}{R^2}dx$$\n$$I = \\int_0^R \\frac{2Mx^3}{R^2} dx$$\n$$I = \\left[ \\frac{Mx^4}{2R^2} \\right]_0^R$$\n$$\\boxed{I = \\frac{MR^2}{2}}$$\n\n\\subsection{Axis through its centre and perpendicular to lateral face}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.5]{cylinder2.png}\n\\end{figure}\nConsider a solid cylinder of radius R, height h and mass M rotating about an axis passing through its centre and perpendicular to the lateral face. Let the solid cylinder be made up of many hollow cylinders.\nLet us consider a hollow cylinder of mass $dm$, height $h$, thickness $dx$ and radius $x$.\n$$dm = \\frac{M}{\\pi R^2 h}2\\pi xdxh$$\n$$dI = \\frac{dmx^2}{2} + \\frac{dmh^2}{12}$$\n$$dI = \\frac{Mx^3}{R^2} dx + \\frac{2Mxh^2}{12R^2} dx$$\n$$I = \\int_0^R \\frac{Mx^3}{R^2} + \\frac{Mxh^2}{6R^2} dx$$\n$$I = \\left[ \\frac{Mx^4}{4R^2} + \\frac{Mx^2h^2}{12R^2} \\right]_0^R$$\n$$\\boxed{I = \\frac{MR^2}{4} + \\frac{Mh^2}{12}}$$\n
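\nThe integrals above are straightforward to machine-check. Below is a small sketch using Python with the sympy library (an addition to these notes), verifying both solid-cylinder results:\n\\begin{verbatim}\nimport sympy as sp\n\nM, R, h, x = sp.symbols('M R h x', positive=True)\n\n# axis through the centre, perpendicular to the circular faces\nI1 = sp.integrate(2*M*x**3/R**2, (x, 0, R))\nassert sp.simplify(I1 - M*R**2/2) == 0\n\n# axis through the centre, perpendicular to the lateral face\nI2 = sp.integrate(M*x**3/R**2 + M*x*h**2/(6*R**2), (x, 0, R))\nassert sp.simplify(I2 - (M*R**2/4 + M*h**2/12)) == 0\n\\end{verbatim}\n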
\n\\section{Spherical Shell}\n\\subsection{About its centre}\n\\begin{figure}[h!]\n  \\includegraphics[scale=0.3]{sphere.png}\n  \\centering\n\\end{figure}\nConsider a spherical shell of radius R and mass M rotating about an axis through its centre.\nLet us consider it to be made up of many rings of radius $y$, width $Rd\\theta$ and mass $dm$. Also, $$x=R\\cos{\\theta}$$ $$y=R\\sin{\\theta}$$\n$$dm = \\frac{M}{4\\pi R^2} 2\\pi y Rd\\theta$$\n$$dI = dmy^2$$\n$$dI = \\frac{My^3}{2R} d\\theta$$\n$$I = \\int_0^{\\pi} \\frac{MR^2\\sin^3{\\theta}}{2} d\\theta$$\n$$I = \\int_0^{\\pi} \\frac{MR^2\\left(3\\sin{\\theta}-\\sin{3\\theta}\\right)}{8} d\\theta$$\n$$I = \\frac{MR^2}{8} \\left[-3\\cos(\\theta) + \\frac{\\cos{3\\theta}}{3}\\right]_0^{\\pi}$$\n$$I = \\frac{MR^2}{8}\\left[3-\\frac{1}{3}+3-\\frac{1}{3}\\right]$$\n$$I = \\frac{MR^2}{8}\\frac{16}{3}$$\n$$\\boxed{I = \\frac{2MR^2}{3}}$$\n\n\\section{Solid Sphere}\n\\subsection{About its centre}\n\\begin{figure}[h!]\n  \\includegraphics[scale=0.3]{sphere.png}\n  \\centering\n\\end{figure}\nConsider a solid sphere of radius R and mass M rotating about an axis through its centre.\nLet us consider it to be made up of many hollow spheres with radius $x$, thickness $dx$ and mass $dm$.\n$$dm = \\frac{3M}{4\\pi R^3} 4\\pi x^2 dx$$\n$$dI = \\frac{2dmx^2}{3}$$\n$$dI = \\frac{2Mx^4}{R^3} dx$$\n$$I = \\int_0^R \\frac{2Mx^4}{R^3} dx$$\n$$\\boxed{I = \\frac{2MR^2}{5}}$$\n\n\\section{Solid Cone}\n\\subsection{Axis through the centre and perpendicular to the base}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{cone1.png}\n\\end{figure}\nConsider a uniform cone of height h, base radius R and mass M rotating about an axis passing through the centre of the base and perpendicular to the base.\nLet us consider it to be made up of many disks of radius $r$, thickness $dx$ and mass $dm$. Also, we have from similarity of triangles\n$$\\frac{r}{R} = \\frac{x}{h}$$\n$$dm = \\frac{3M}{\\pi R^2h} \\pi r^2 dx$$\n$$dI = \\frac{dmr^2}{2}$$\n$$dI = \\frac{3Mr^4}{2R^2h} dx$$\n$$dI = \\frac{3MR^4x^4}{2R^2h^5} dx$$\n$$I = \\int_0^h \\frac{3MR^2x^4}{2h^5} dx$$\n$$\\boxed{I = \\frac{3MR^2}{10}}$$\n\n\\subsection{Rotating about its tip}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{cone2.png}\n\\end{figure}\nConsider a uniform solid cone of height h, base radius R and mass M rotating about an axis passing through its tip. Let us consider it to be made up of disks of radius $r$, thickness $dx$ and mass $dm$. Also, we have from similarity of triangles\n$$\\frac{r}{R} = \\frac{x}{h}$$\n$$\\implies r = \\frac{R}{h}x$$\n$$dm = \\frac{3M}{\\pi R^2h}\\pi r^2 dx = \\frac{3Mr^2dx}{R^2h}$$\nUsing the parallel axis theorem,\\\\\nmoment of inertia of the small disk about the required axis = moment of inertia of the disk about its diameter + (mass of the element) $\\cdot$ (distance from the axis)$^2$\n$$dI = \\frac{dmr^2}{4} + dmx^2$$\n$$dI = \\frac{3Mr^4}{4R^2h}dx + \\frac{3Mr^2x^2}{R^2h}dx$$\n$$dI = \\frac{3MR^2x^4}{4h^5}dx + \\frac{3Mx^4}{h^3}dx$$\n$$I = \\int_0^h \\frac{3MR^2x^4}{4h^5} + \\frac{3Mx^4}{h^3} dx$$\n$$I = \\left[ \\frac{3MR^2x^5}{20h^5} + \\frac{3Mx^5}{5h^3} \\right]_0^h$$\n$$\\boxed{I = \\frac{3MR^2}{20} + \\frac{3Mh^2}{5}}$$\n
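\nAs a consistency check (an added remark): the tip axis and the base-diameter axis treated in the next subsection are parallel axes, lying at distances $3h/4$ and $h/4$ respectively from the centre of mass of the cone, so the two results must differ by\n$$M\\left(\\frac{3h}{4}\\right)^2 - M\\left(\\frac{h}{4}\\right)^2 = \\frac{Mh^2}{2},$$\nwhich indeed matches $\\frac{3Mh^2}{5} - \\frac{Mh^2}{10} = \\frac{Mh^2}{2}$ from the boxed results.\n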
\n\\subsection{Rotating about its base}\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{cone3.png}\n\\end{figure}\nConsider a uniform solid cone of height h, base radius R and mass M rotating about an axis along the base diameter. Let us consider it to be made up of many small disks of radius $r$, thickness $dx$ and mass $dm$. Also, we have from similarity of triangles\n$$\\frac{r}{R} = \\frac{h-x}{h}$$\n$$\\implies r = \\frac{R}{h}(h-x)$$\n$$dm = \\frac{3M}{\\pi R^2h}\\pi r^2dx = \\frac{3Mr^2dx}{R^2h}$$\nUsing the parallel axis theorem,\\\\\nmoment of inertia of the small disk about the required axis = moment of inertia of the disk about its diameter + (mass of the element) $\\cdot$ (distance from the axis)$^2$\n$$dI = \\frac{dmr^2}{4} + dmx^2$$\n$$dI = \\frac{3Mr^4}{4R^2h} dx + \\frac{3Mr^2x^2}{R^2h} dx$$\n$$dI = \\frac{3MR^2(h-x)^4}{4h^5} dx + \\frac{3M(h-x)^2x^2}{h^3} dx$$\n$$I = \\int_0^h \\frac{3MR^2(h-x)^4}{4h^5} + \\frac{3M(h-x)^2x^2}{h^3} dx$$\n$$\\boxed{I = \\frac{3MR^2}{20} + \\frac{Mh^2}{10}}$$\n\\end{document}\n", "meta": {"hexsha": "423ba2f05cc9319b1ee9eaa27e93cb7ba64e3ca0", "size": 11126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "moment_of_inertia/moment_of_inertia.tex", "max_stars_repo_name": "evans-0/Latex_documents", "max_stars_repo_head_hexsha": "d82b900c5573282daca41609dce3884086a42e20", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "moment_of_inertia/moment_of_inertia.tex", "max_issues_repo_name": "evans-0/Latex_documents", "max_issues_repo_head_hexsha": "d82b900c5573282daca41609dce3884086a42e20", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "moment_of_inertia/moment_of_inertia.tex", "max_forks_repo_name": "evans-0/Latex_documents", "max_forks_repo_head_hexsha": "d82b900c5573282daca41609dce3884086a42e20", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.7860082305, "max_line_length": 256, "alphanum_fraction": 0.6643897178, "num_tokens": 4157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458153, "lm_q2_score": 0.8438951025545426, "lm_q1q2_score": 0.5873158327213043}}
{"text": "\\subsection{Estimating optimal focal length of exciting beam lens}\n\\label{subsec:focus_optimization}\n\nFor the spectrometer design, it is necessary to optimize the focal\nlength of a laser focusing lens to deliver the most signal through the\nscattered-light-imaging system onto the detector.\nSuppose that the focused beam is cylindrically symmetric about the $z$ axis.\nLet consider the focusing angle of the beam $\\vartheta$ as the full angle\nbetween the $1/e^2$ intensity points in the far-field limit.\nIn practice, the small-angle approximation can be used\n\\begin{equation}\n\t\\vartheta = \\frac{d}{f},\n\t\\label{\\eqnlabel{focus_optimization:theta}}\n\\end{equation}\nwhere $d$ is the diameter (between the $1/e^2$ intensity points) of a laser\nbeam at the focusing lens and $f$ is the focal length of this lens.\n\nAccording to theory~\\parencite{Boyd1961,Boyd1962}, the radius of the minimum\nspot $\\omega_0$ may be expressed in terms of $\\vartheta$ and focused laser\nbeam wavelength $\\lambda$ as\n\\begin{equation*}\n\t\\omega_0 = \\frac{2\\lambda}{\\text{\\g{p}}\\vartheta},\n\t\\label{\\eqnlabel{focus_optimization:min_spot_diameter}}\n\\end{equation*}\nand the confocal parameter $b$ (the distance between perpendicular sections\nof the beam with diameter $\\sqrt{2}\\omega_0$, see\n\\figref{GaussianBeamWaist_wiki}) may be written as\n\\begin{equation*}\n\tb = \\frac{2\\pi\\omega_0^2}{\\lambda} =\n\t\t\\frac{8\\lambda}{\\text{\\g{p}}\\vartheta^2}.\n\t\\label{\\eqnlabel{focus_optimization:confocal_parameter}}\n\\end{equation*}\n\n\\begin{figure}\n\t\\centering\n\t\\input{results_and_discussion/assets/GaussianBeamWaist_wiki}\n\t\\caption[%\n\t\tThe geometry of a focal region of a laser beam.%\n\t]{%\n\t\t\\captiontitle{%\n\t\t\tThe geometry of a focal region of a laser beam.%\n\t\t}\n\t\tThe beam is considered cylindrically symmetric about the $z$ axis;\n\t\tthe effective volume of the Raman sample was taken to be the volume of a\n\t\tcylinder of diameter $2\\omega_0$ and length $2b$.\n\t\tAdopted from\n\t\t\\textcite{GaussianBeamWaist}.\n\t}\n\t\\label{\\figlabel{GaussianBeamWaist_wiki}}\n\\end{figure}\n\nThe beam radius $\\omega(z)$ can be described as the function of $z$\ncoordinate by equation\n\\begin{equation}\n\t\\omega(z) = \\omega_0\\sqrt{1+\\left(\\frac{2z}{b}\\right)^2}.\n\t\\label{\\eqnlabel{focus_optimization:beam_radius}}\n\\end{equation}\n\nAs a first estimation, consider non-resonance Raman scattering produced from\nhomogenous infinite medium and arising only from the region with the highest\nillumination intensity.\nFurthermore, assume that the loss of the illumination beam intensity due to the\nRaman scattered light is negligible and that the Raman medium is fully\ntransparent to excitation and scattered light, which means that there is\nnegligible absorption and stimulated emission.\nThe amount of Raman scattered light from the thin transverse slice of the\nexcitation beam is independent of the distribution of the energy in the slice\nand therefore is proportional to the total illuminating laser-beam power under\nthese assumptions.\nThen, view the Raman emission from the particular direction in the plane of the\nslice.\nThe emitted flux would be the integrated flux from all elements on the line\nintersecting the slice, so there would be a linear increase of the number of\nelements with the increasing diameter, but each element illumination power\ndecreases with the square of the diameter of the illuminating 
\nConsidering\n\\eqnref{focus_optimization:beam_radius},\nwe derive the approximate dependence on the $z$ coordinate of the irradiance\nproduced at that point by the Raman-scattered light\n\\begin{equation*}\n\tI \\propto \\frac{1}{\\omega(z)} \\propto \\frac{1}{\\sqrt{1 + (2z/b)^2}}.\n\t\\label{\\eqnlabel{focus_optimization:slice_irradiance}}\n\\end{equation*}\n\nSo, the irradiance at $z = b$ will be reduced from that at $z = 0$ by a factor of\n$1/\\sqrt{5} \\doteq 0.447 \\doteq 1/2$.\nFor the following calculations, the brightest region of the excitation beam was\napproximated by the \u201csource cylinder\u201d of length $2b$ and diameter $2\\omega_0$,\nwhich includes only the brightest region of Raman emission and\nneglects the emission from less-bright regions.\nThe source parameters are then \\parencite{Barrett1968}:\n\\begin{align}\n\t\\text{length:}&\n\t\t& L_\\text{E}& = 16\\lambda/(\\text{\\g{p}}\\vartheta^2)\n\t\\label{\\eqnlabel{focus_optimization:L_E}}\\\\\n\t\\text{diameter (width of scattering area):}&\n\t\t& W_\\text{E}& = 4\\lambda/(\\text{\\g{p}}\\vartheta)\n\t\\label{\\eqnlabel{focus_optimization:W_E}}\\\\\n\t\\text{volume:}&\n\t\t& V_\\text{E}& = 64 \\lambda^3/(\\text{\\g{p}}^2\\vartheta^4)\n\t\\label{\\eqnlabel{focus_optimization:V_E}}\\\\\n\t\\text{length-to-diameter ratio:}&\n\t\t& L_\\text{E}/W_\\text{E}& = 4/\\vartheta.\n\t\\label{\\eqnlabel{focus_optimization:LW_E}}\n\\end{align}\n\nThese approximations apply to diffraction-limited Gaussian beams (\nTEM\\textsubscript{00}); for low-order transverse modes or a combination of such\nmodes, the calculations based on these equations can be considered accurate\nwithin an order of magnitude.\n
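\nTo get a feeling for the magnitudes, the source parameters are easy to evaluate numerically. A small Python sketch (an addition to the text; the wavelength anticipates the value $\\lambda = 189.1$\\,nm used below, and $\\vartheta = 0.035$ is an illustrative focusing angle):\n\\begin{verbatim}\nfrom math import pi\n\nlam = 189.1e-9   # wavelength in the medium, m\ntheta = 0.035    # focusing angle, rad (illustrative)\n\nL_E = 16*lam/(pi*theta**2)        # source cylinder length, ~7.9e-4 m\nW_E = 4*lam/(pi*theta)            # source cylinder diameter, ~6.9e-6 m\nV_E = 64*lam**3/(pi**2*theta**4)  # source cylinder volume\nprint(L_E, W_E, V_E, L_E/W_E)     # length-to-diameter ratio 4/theta ~ 114\n\\end{verbatim}\n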
\nMany simple arguments\n\\parencite{%\n\tAtwood1963,%\n\tBarrett1968%\n}\nhave proposed that a Raman spectrometer using right-angle scattering\ngeometry will gain a higher signal if the laser beam is focused more tightly.\nHowever, none of these arguments established an optimum degree of focusing\nless than the largest practical value.\n\\textcite{Barrett1968} showed that such an optimum exists theoretically; they\ncalculated its approximate value, and they compared the Raman power that can be\nusefully collected at that optimum degree of focusing with the total emitted\nRaman power.\n\nIt is known that light is scattered in all directions by the Raman effect,\nand within the assumptions stated above, the Raman intensity is linearly\nproportional to the excitation laser beam power.\nTherefore, if the amount of Raman scattered light gathered by the spectrometer\nis to be maximized, the light needs to be collected from the largest solid\nangle and the largest possible illuminated volume.\nA typical Raman spectrometer light-collecting path has two optical parts, the\nspectrograph and the light-collecting optics.\nLet us consider their parameters as given, even though considerations similar\nto those below can also be applied to the light-collecting optics optimization.\nLet us further consider that the spectrograph consists of an entrance slit,\nimaging optics, a dispersing element, and a CCD camera.\n\nFor simplicity, approximate the source as a flat ribbon with the same length\nand width as specified above for the \u201csource cylinder\u201d, radiating uniformly\ninto all collected angles from all parts of its surface.\nThen, consider that the light-collecting optics image the entrance slit of\nthe spectrograph onto the center of the focal region of the illuminating beam.\nThe width of the slit $W_\\mathrm{S}$ is determined by the spectrograph\nsettings, particularly by the desired spectral resolution, and its length\n$L_\\mathrm{S}$ could be estimated as the image of the CCD chip through the\nspectrograph.\nMoreover, suppose that the aperture of the light-collecting optics is large\nenough not to further constrain the Raman light collected through the\nspectrograph aperture.\n\nThen there are two competing processes.\nThe assumption above that the amount of Raman scattered light from every\nthin transverse slice of the excitation beam is the same implies that the\ntotal amount of Raman scattered light from the source ribbon depends only on\nits length.\nAs the focal length of the excitation beam focusing lens decreases, the source\nribbon gets narrower, which results in more intensive emission of the scattered\nlight per unit area.\nSo if we start with a source ribbon width larger than the image of the\nspectrograph entrance slit width, the amount of collected light increases as\nthe source ribbon narrows.\nConversely, the length (and therefore the total Raman scattered light\nintensity) of the source ribbon decreases with the decreasing focal length of\nthe excitation beam focusing lens, and so when the source ribbon gets shorter\nthan the image of the spectrograph entrance slit length, the amount of\ncollected light decreases as the source ribbon shortens.\n\nSo, let us split the further analysis of the dependence of the Raman flux\ntransmitted through the spectrometer on the focusing angle of the excitation\nbeam $\\vartheta$ (\\eqnref{focus_optimization:theta}) into regions divided by\ntwo focusing angles $\\vartheta_\\mathrm{W}$ and $\\vartheta_\\mathrm{L}$.\nThe angle $\\vartheta_\\mathrm{W}$ is the focusing angle at which the width\n$W_\\text{E}$ of the source ribbon equals the width $W$ of the image of the\nspectrograph entrance slit (of width $W_\\text{S}$), and $\\vartheta_\\mathrm{L}$\nis the angle at which the source ribbon length $L_\\text{E}$ equals the length\n$L$ of the spectrograph entrance slit image (of length $L_\\text{S}$) on the\nsource.\nFurther, define the magnification of the scattered light collecting optics as $M$.\nThen\n\\begin{align}\n\tL& = \\frac{L_\\mathrm{S}}{M}\n\t\\label{\\eqnlabel{focus_optimization:slit_length_magnification}}\\\\\n\tW& = \\frac{W_\\mathrm{S}}{M}\n\t\\label{\\eqnlabel{focus_optimization:slit_width_magnification}}\n\\end{align}\nand using\n\\cref{%\n\t\\eqnlabel{focus_optimization:L_E},%\n\t\\eqnlabel{focus_optimization:W_E}%\n}\nand putting $L = L_\\mathrm{E}$\nand $W = W_\\mathrm{E}$ respectively we get\n\\begin{align}\n\t\\vartheta_\\mathrm{W} = \\frac{4\\lambda}{\\text{\\g{p}}W}\n\t\t= \\frac{4\\lambda M}{\\text{\\g{p}}W_\\mathrm{S}}\n\t\\label{\\eqnlabel{focus_optimization:theta_W}}\\\\\n\t\\vartheta_\\mathrm{L} = \\sqrt{\\frac{16\\lambda}{\\text{\\g{p}}L}}\n\t\t= \\sqrt{\\frac{16\\lambda M}{\\text{\\g{p}}L_\\mathrm{S}}}.\n\t\\label{\\eqnlabel{focus_optimization:theta_L}}\n\\end{align}\n\nFor our spectrometer, we have $W_\\mathrm{S} = 50$\\,\\g{m}m and beam diameter\n$d = 0.9$\\,mm.\nThe wavelength can be calculated from the wavelength in vacuum\n$\\lambda_0 = 257.2$\\,nm divided by the refractive index of water\n$n_{257} = 1.3598$\n\\parencite{Hale1973},\n$\\lambda = 189.1$\\,nm.\nThe length $L_\\mathrm{S}$ can be calculated from the height of the CCD chip\n$L_\\text{CCD} = 6.9$\\,mm and the magnification of the spectrograph\n$M_\\text{spc} = 1.1$ as\n$L_\\text{S} = L_\\text{CCD}/M_\\text{spc} \\doteq 6.3$\\,mm, and 
the magnification $M$ may be computed from the focal length of the objective\n$f_\\text{o} = 13$\\,mm\nand of the focusing lens before the monochromator $f_\\text{c} = 101.6$\\,mm as\n$M = f_\\text{c}/f_\\text{o} \\doteq 7.82$.\nThen, from the above equations, we calculate\n\\begin{align}\n\t\\vartheta_\\mathrm{W} \\doteq 0.0376\n\t\\label{\\eqnlabel{focus_optimization:theta_W_calc}}\\\\\n\t\\vartheta_\\mathrm{L} \\doteq 0.0346,\n\t\\label{\\eqnlabel{focus_optimization:theta_L_calc}}\n\\end{align}\nwhich correspond, via \\eqnref{focus_optimization:theta}, to\n\\begin{align}\n\tf_\\mathrm{W} \\doteq 24\\,\\text{mm}\n\t\\label{\\eqnlabel{focus_optimization:f_W_calc}}\\\\\n\tf_\\mathrm{L} \\doteq 26\\,\\text{mm},\n\t\\label{\\eqnlabel{focus_optimization:f_L_calc}}\n\\end{align}\nrespectively.\n\nWe can note that $\\vartheta_\\mathrm{W}$ is greater than\n$\\vartheta_\\mathrm{L}$, which is, according to\n\\textcite{Barrett1968},\ncommon for a tabletop spectrometer with moderate resolution and a fairly\nfast focal ratio.\nSo we can divide the further analysis into three regions according to the\nexcitation beam focusing angle\n$\\vartheta$:\n$0 \\leq \\vartheta < \\vartheta_\\text{L}$;\n$\\vartheta_\\text{L} \\leq \\vartheta < \\vartheta_\\text{W}$;\n$\\vartheta_\\text{W} \\leq \\vartheta$.\n\nUnder the approximations above, we can now estimate the fraction $F$ of the\ntotal Raman flux emitted by the source ribbon into all angles (i.e., the full\nsolid angle of $4\\text{\\g{p}}$) that is collected by the spectrometer.\nFor simplicity, polarization effects of input and output are further ignored,\nand the Raman emission is assumed isotropic over all collection angles.\nDefine the solid angle subtended by the spectrometer pupil as\n$\\Omega_\\mathrm{S}$ and the corresponding scattered light collecting solid angle\nat the source ribbon as $\\Omega$.\nThen\n\\begin{equation}\n\t\\Omega = \\Omega_\\mathrm{S}M^2.\n\t\\label{\\eqnlabel{focus_optimization:omega_magnification}}\n\\end{equation}\nFor our spectrometer, the solid angle of acceptance of the monochromator may be\ncalculated from the monochromator $f/\\# = f/10.5$\n(see \\cref{first_layout})\nas\n$\\Omega_\\text{S} =\n\t2\\text{\\g{p}}\\left[1 - \\cos\\left(\\arctan\\frac{1}{10.5}\\right)\\right]\n\t\\doteq 0.028$\\,sr.\n\nThe Raman flux fraction $F$ is then calculated from the solid angle fraction\n$\\Omega / 4\\text{\\g{p}}$, from the fraction of the light which is \u201cmasked out\u201d\nby the width of the slit image, and from the fraction of the ribbon length\ncompared to the height of the slit image, using \\cref{%\n\\eqnlabel{focus_optimization:L_E},%\n\\eqnlabel{focus_optimization:W_E},%\n\\eqnlabel{focus_optimization:slit_length_magnification},%\n\\eqnlabel{focus_optimization:slit_width_magnification},%\n\\eqnlabel{focus_optimization:omega_magnification}}.\n\nFor the first region, $0 \\leq \\vartheta < \\vartheta_\\text{L}$, the whole\nslit-image width and height are illuminated, but the focused beam is wider\nthan the slit width, and therefore the radiant flux density in the sample is\nsmaller (recall that we assumed that radiant losses in the sample are\nnegligible)\n\\begin{equation}\n\tF_\\text{I}(\\vartheta)\n\t\t= \\frac{W}{W_\\text{E}}\\cdot\\frac{\\Omega}{4\\text{\\g{p}}}\n\t\t= \\frac{\\frac{W_\\text{S}}{M}}{\\frac{4\\lambda}{\\text{\\g{p}}\\vartheta}}\n\t\t\t\\cdot\\frac{\\Omega_\\text{S}M^2}{4\\text{\\g{p}}}\n\t\t=\t\\frac{W_\\text{S}\\Omega_\\text{S}M}{16\\lambda}\\vartheta.\n\t\\label{\\eqnlabel{focus_optimization:F_I}}\n\\end{equation}\n
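\nThe numerical values above are easy to re-derive. A small Python sketch (an added check; the $\\arctan(1/10.5)$ argument assumes the acceptance half-cone of the $f/10.5$ monochromator, consistent with the quoted $0.028$\\,sr):\n\\begin{verbatim}\nfrom math import pi, atan, cos, sqrt\n\nlam = 189.1e-9        # wavelength in water, m\nW_S = 50e-6           # slit width, m\nL_S = 6.9e-3 / 1.1    # CCD height / spectrograph magnification, m\nM = 101.6 / 13        # collection optics magnification, ~7.82\nd = 0.9e-3            # beam diameter at the lens, m\n\ntheta_W = 4*lam*M/(pi*W_S)              # ~0.0376\ntheta_L = sqrt(16*lam*M/(pi*L_S))       # ~0.0346\nf_W, f_L = d/theta_W, d/theta_L         # ~24 mm, ~26 mm\nOmega_S = 2*pi*(1 - cos(atan(1/10.5)))  # ~0.028 sr\n\\end{verbatim}\n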
\nIn the second region,\n$\\vartheta_\\text{L} \\leq \\vartheta < \\vartheta_\\text{W}$,\nonly the whole slit-image width is illuminated, while the exciting beam is\nfocused on an area of smaller length than the slit length, and so the\nradiant flux density is diminished by the ratio between the scattering area\nlength and the slit-image length\n\\begin{equation}\n\tF_\\text{II}(\\vartheta)\n\t\t= \\frac{W}{W_\\text{E}}\\cdot\\frac{\\Omega}{4\\text{\\g{p}}}\n\t\t\t\\cdot\\frac{L_\\text{E}}{L}\n\t\t=\t\\frac{\\frac{W_\\text{S}}{M}}{\\frac{4\\lambda}{\\text{\\g{p}}\\vartheta}}\n\t\t\t\\cdot\\frac{\\Omega_\\text{S}M^2}{4\\text{\\g{p}}}\n\t\t\t\\cdot\\frac{\\frac{16\\lambda}{\\text{\\g{p}}\\vartheta^2}}\n\t\t\t\t{\\frac{L_\\text{S}}{M}}\n\t\t= \\frac{W_\\text{S}\\Omega_\\text{S}M^2}{\\text{\\g{p}}L_\\text{S}}\n\t\t\t\\cdot\\frac{1}{\\vartheta}.\n\t\\label{\\eqnlabel{focus_optimization:F_II}}\n\\end{equation}\n\nThe Raman scattering area in the third region, $\\vartheta_\\text{W} \\leq \\vartheta$,\nis so small that the Raman scattered light is collected from the whole beam,\nand therefore the whole radiant flux is used\n\\begin{equation}\n\tF_\\text{III}(\\vartheta)\n\t\t= \\frac{\\Omega}{4\\text{\\g{p}}}\\cdot\\frac{L_\\text{E}}{L}\n\t\t= \\frac{\\Omega_\\text{S}M^2}{4\\text{\\g{p}}}\n\t\t\t\\cdot\\frac{\\frac{16\\lambda}{\\text{\\g{p}}\\vartheta^2}}\n\t\t\t\t{\\frac{L_\\text{S}}{M}}\n\t\t= \\frac{4\\lambda\\Omega_\\text{S}M^3}{\\text{\\g{p}}^2L_\\text{S}}\n\t\t\t\\cdot\\frac{1}{\\vartheta^2}.\n\t\\label{\\eqnlabel{focus_optimization:F_III}}\n\\end{equation}\n\n\\begin{figure}\n\t\\centering\n\t\\input{results_and_discussion/assets/F}\n\t\\caption[%\n\t\tA plot of the fractional Raman flux $F$.%\n\t]{%\n\t\t\\captiontitle{%\n\t\t\tA plot of the fractional Raman flux $F$ as calculated\n\t\t\tfor our experimental conditions using \\cref{%\n\t\t\t\t\\eqnlabel{focus_optimization:F_I},%\n\t\t\t\t\\eqnlabel{focus_optimization:F_II},%\n\t\t\t\t\\eqnlabel{focus_optimization:F_III}%\n\t\t\t}\n\t\t\tversus the focusing angle $\\vartheta$ of the\n\t\t\tilluminating laser beam.%\n\t\t}\n\t}\n\t\\label{\\figlabel{focus_optimization:F}}\n\\end{figure}\n\nThe results of \\cref{%\n\t\\eqnlabel{focus_optimization:F_I},%\n\t\\eqnlabel{focus_optimization:F_II},%\n\t\\eqnlabel{focus_optimization:F_III}%\n}\nfor our experimental setup can be seen in\n\\figref{focus_optimization:F}.\nThe optimum value of $\\vartheta$ is $\\vartheta_\\text{L}$.\n\\textcite{Barrett1968}\nproposed that the optimum value lies above $\\vartheta_\\text{L}$, because the\nsignal from outside the cylinder along the excitation beam axis is also\ntransmitted through the spectrometer in the second and third regions, but they\ndid not expect the optimum to be as high as $\\vartheta_\\text{W}$.\nThe light collecting optics magnification can be optimized according to the\nequations above too, but each equation must then be multiplied by the length of\nthe slit image $L$.\nThe Raman signal then does not decrease with the magnification, which means\nthat the magnification should be made as high as practical.\n\nFrom these considerations follows the conclusion for our experimental setup\nthat the optimal focal length of the laser beam focusing lens for\noff-resonance Raman spectroscopy should be somewhere between 18 and 22\\,mm,\nwhich is impractically small.\n\nHowever, all the above calculations neglected the loss of the excitation beam\npower as it goes through the sample, but we want to use the instrument for\nresonance Raman spectroscopy, where significant absorption inside the sample\noccurs.\n
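\nFor reference (an added step), the half-intensity path quoted in the next paragraph follows from the Beer--Lambert law: with a decadic absorbance $A$ per 1\\,cm of path, $10^{-Ax} = \\tfrac{1}{2}$ gives\n\\[\nx = \\frac{\\log_{10} 2}{A}\\,\\text{cm} \\doteq 0.030\\,\\text{cm} = 300\\,\\text{\\g{m}m}\n\\quad\\text{for } A = 10.\n\\]\n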
It can significantly decrease the source cylinder length $L_\\text{E}$ and can push\n$\\vartheta_\\text{L}$ to much lower values.\nFor example, we typically use samples with absorbance between 5 and 50 at\n257\\,nm in a 1\\,cm cuvette.\nSo, for example, the intensity of incident light is halved in a 300\\,\\g{m}m path\nthrough a sample with absorbance 10, whereas $L_\\text{E} = 802$\\,\\g{m}m at\n$\\vartheta_\\text{L}$.\nIf we correct\n\\eqnref{focus_optimization:L_E}\nto, for example, one fourth, then the\ncorrected $L_{\\text{E,corr}}$ would be approximately 170\\,\\g{m}m, and the corresponding\nfocal length of the laser beam focusing lens would be about 52\\,mm.\n\nThe absorbing medium brings additional complications.\nFor example, the anomalous refractive index near the absorption band can\nsignificantly influence the focused beam waist, and laser beam absorption can\ncause local heating, which also influences the refractive index and, moreover,\nproduces currents in the sample.\nThere is also a dependence of the optimal focal length on the excitation\nwavelength, but exchanging the focusing lenses depending on the excitation\nwavelength would be impractical.\n\nIn conclusion, however, we have confirmed that the focusing lens should have\nas small a focal length as the experimental geometric constraints allow.\n", "meta": {"hexsha": "f80fde2bba3c5a9f196c2fe2e69f606661687a3b", "size": 17894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/results_and_discussion/focusing_lens_estimation.tex", "max_stars_repo_name": "lumik/phd_thesis", "max_stars_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/results_and_discussion/focusing_lens_estimation.tex", "max_issues_repo_name": "lumik/phd_thesis", "max_issues_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 41, "max_issues_repo_issues_event_min_datetime": "2019-08-13T12:27:09.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T03:00:58.000Z", "max_forks_repo_path": "src/results_and_discussion/focusing_lens_estimation.tex", "max_forks_repo_name": "lumik/phd_thesis", "max_forks_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.959798995, "max_line_length": 81, "alphanum_fraction": 0.7664580306, "num_tokens": 5123, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950986284991, "lm_q2_score": 0.6959583376458153, "lm_q1q2_score": 0.5873158299889416}}
{"text": "\n\\subsection{Estimating HMMs with the forward algorithm}\n\nGiven we have observed outputs, what is the chance of being in a certain state at a certain time?\n\n", "meta": {"hexsha": "3936cdf574e6d53606ed5c6fa1d10fa7371e8912", "size": 157, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-03-forward.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-03-forward.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/markovHMMEstimation/01-03-forward.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1666666667, "max_line_length": 97, "alphanum_fraction": 0.7898089172, "num_tokens": 35, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8519527944504227, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5872558410130878}}
{"text": "\\documentclass{article}\n\\usepackage{siunitx}\n\\usepackage{amsmath}\n\\title{Cardano's Formula for Cubic Equation}\n\\author{Ariyibi Joseph Iseoluwa}\n\\begin{document}\n\t\\maketitle\n\t\\begin{center}\n\t\t\\textbf{ABSTRACT}\n\t\\end{center}\nGerolamo Cardano was born in Pavia in 1501 as the illegitimate child of a jurist.He attended the University of Padua and become a physician in the town of Sacco, after being rejected by his home town of Milan. He became one of the most famous doctors in all of Europe, having treated the Pope. He was also an astrologerr and an avid gambler, to which he wrote the Book on Games of Chance, which was the first serious treatise on the mathematics of probability.\\cite{ Ariyibi2021}\n\\section{Introdction To Cardano's Formula}\nCardano's formula for solution of cubic equations for an equation like: $x^3 + a_1x^2 + a_2x + a^3 = 0$\nthe parameters Q, R, S and T can be computed thus,\\\\\n\\raggedright\n\t\\begin{align}\n\t\t\\textbf{Q}= 3a_2 - \\frac{a_1^2}{a} &                       \\textbf{R}= \\frac{9a_1a_2-27a_3-2a_1^3}{54} \\\\\n\t\t\\textbf{S} =\\sqrt[3]{R + \\sqrt{-Q^3 + R^2}} & \n\t\t \\textbf{T} = \\sqrt{R - \\sqrt{Q^3 + R^2}} \\\\\n\t\\end{align}\nTo give the roots\n$x_1= S + T-\\frac{1}{3}a_1$\n$x_2= \\frac{-(S+T)}{2} - \\frac{a_1}{3} + i\\frac{\\sqrt{3}(s-T)}{2}$ \\\\\n$x_2= \\frac{-(S+T)}{2} - \\frac{a_1}{3} - i\\frac{\\sqrt{3}(s-T)}{2}$ \\\\\n\n\nNote : $x^3$ must not have a co-efficient\n\\subsection{Some Examples}\n\\begin{itemize}\n\t\\item  $x^3 - 3x^2 + 4 = 0$\n\t\\item $2x^3 + 6x^2 + 1 = 0$\n\\end{itemize}\n\\end{document}", "meta": {"hexsha": "78f12d1c881b084b59f240a2ca19b45ed042e919", "size": 1507, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "last commit for texstudio/cardano.tex", "max_stars_repo_name": "ise2005best/iseoluwaCSC102", "max_stars_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "last commit for texstudio/cardano.tex", "max_issues_repo_name": "ise2005best/iseoluwaCSC102", "max_issues_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "last commit for texstudio/cardano.tex", "max_forks_repo_name": "ise2005best/iseoluwaCSC102", "max_forks_repo_head_hexsha": "4c2ac00d92146f0f84e4260160ff68700e13fd71", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.6666666667, "max_line_length": 479, "alphanum_fraction": 0.6761778368, "num_tokens": 573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619393159452, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.58720833289686}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\t\\DeclareGraphicsExtensions{.png, .jpeg, .pgm}\n\\usepackage{caption}\n% \\usepackage{subcaption}\n\\usepackage{csvsimple}\n\\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}\n\n\\title{PA03: Mandelbrot Image\\\\CS791v: Parallel Computing}\n\\author{Terence Henriod}\n\\date{\\today}\n\n\\begin{document}\n\n\\clearpage\n\\maketitle\n\\thispagestyle{empty} % removes the page number from the title page\n\n\\begin{abstract}\nThe Mandelbrot set is a well known fractal among math enthusiasts. Computation of Mandelbrot sets/images is known to programmers to be an ``embarrassingly parallel\" task. The task is somewhat compute heavy, and the only communication required (if any) is to collect the results of the computaion. In this assignment, we explore computation of Mandelbrot images on the GPU. \n\\end{abstract}\n\n\\newpage\n\\section{Introduction}\nComputation of Mandelbrot images is ``embarassingly parallel\" due to the fact that pixel values can be generated without communication, they can be computed independent of one another, and the only communication required through the whole process is to collect the computed pixel values in one location once the computation completes.\n\n\\section{Theory}\n\\subsection{Computing Pixel Values}\nComputing a pixel value follows the flowing equation: $z_{k} = z_{k - 1}^{2} + c$ where $z$ is the pixel value, and $c$ is the coordinate of the pixel in the complex plane. This formula is computed iteratively until either a maximum number of iterations has been reached (of note is the number $1024$ for this exercise) or until $z$ begins to converge to zero or diverge to $\\infty$ (which is know to happen if $z$ ever reaches $2$).\n\n\\subsection{Sequential Algorithm}\nThis algorithm is straightforward. The simple use of a for loop (or pair of nested for loops) to visit the storage location for each pixel's value is used, and at each location the pixel's value is located. The value of the pixel is dependent on its coordinates in the image, but these coordinates can be computed easily from the overall array index and the width of the image. That is, the coordinates $x, y$ can be computed from the main data array index $i$ and the image width $w$ by:\n$$x = i / w$$\n$$y = i \\% w$$\n\n\\subsection{Parallel GPU Algorithm}\nThe GPU algorithm is also straightforward. Each GPU thread uses the same algorithm with one small difference: instead of incrementing the location of the next pixel to compute by one, each GPU thread strides across the data array in steps of the number of total threads.\n\nI was curious about doing a version where a sort of ``work queue\" would be implemented using an integer in the shared block memory indicating the next pixel to be computed's coordinate(s) and atomic operations to increment the shared value. The premise of this idea was that different pixels take different amounts of time to compute, so we could better balance the load on the threads using the work queue. Unfortunately, I ran out of time/motivation to experiment with this idea. 
\nI was curious about doing a version where a sort of ``work queue\" would be implemented using an integer in shared block memory to indicate the coordinates of the next pixel to be computed, with atomic operations used to increment the shared value. The premise of this idea was that different pixels take different amounts of time to compute, so we could better balance the load on the threads using the work queue. Unfortunately, I ran out of time/motivation to experiment with this idea. It would indeed require more effort to implement, but I think that for very large images it might have its merits.\n\n\\newpage\n\\section{Results}\nThe performance results of the Mandelbrot implementation are listed here.\n\n\\subsection{Information on the GPU device used}\n\\csvautotabular{gpu_properties.csv}\n\n\\newpage\n\\subsection{Performance Graphs}\n  \\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{runtime}\n    \\caption{A comparison of the runtimes for computing a 2000x2000 Mandelbrot image using 1024 iterations at most for each pixel.}\n    \\label{fig:runtime}\n  \\end{figure}\n\n  \\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{throughput}\n    \\caption{A comparison of the throughput in integer pixel values produced per second.}\n    \\label{fig:throughput}\n  \\end{figure}\n\n  \\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{compute_speedup}\n    \\caption{The speedup achieved. Note the peak in the surface at 64 threads for all block sizes.}\n    \\label{fig:compute_speedup}\n  \\end{figure}\n\n  \\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{total_speedup}\n    \\caption{The speedup achieved when also considering the time to copy the computed data from the GPU to the Host.}\n    \\label{fig:total_speedup}\n  \\end{figure}\n\n  \\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{testimage}\n    \\caption{A Mandelbrot image for verification.}\n    \\label{fig:testimage}\n  \\end{figure}\n\n\\newpage\n\\section{Discussion}\n\\subsection{Low Speedups}\nThe speedups were not as high as I had thought they would be. I believe that this is likely due to my use of a weak GPU - specifically the one that is in my laptop. I have little reason to believe that the lack of speedup was due to a mismatch of algorithms, because the sequential and the parallel algorithms are so similar; literally the only difference is the increment factor in the for loop. It is not impressive, but I was able to write CUDA code and do the explorations for the assignment.\n\n\\subsection{The Right Number of Threads/Blocks (or The Right ``Stride\")}\nThe highest speedup (for the 2000 x 2000 image) was observed using $255$ blocks and $64$ threads per block, producing a speedup of approximately $6.64$. The speedup surface graphs do seem to indicate that the number of threads per block was more important than the total number of blocks used. I believe that this is likely because Mandelbrot computation has a high computation to memory access ratio. Since the threads only need to make $1$ global memory access at the end of their computation, it is likely better to keep the number of threads low so that they can spend their time performing the computations.\n\n\\subsection{Lack of Transfer Time}\nUnlike with many other processes that are parallelized on the GPU, memory transfer time was almost negligible for this task. Only one transfer needs to occur, since GPU threads can figure everything about a pixel out based on which index they are processing.\n\n\\section{Issues}\n\\subsection{Weak Hardware Setup}\nThe GPU used was a medium caliber laptop card. This resulted in limited performance, memory sizes, thread/block counts, etc. In addition, the Windows operating system was used, meaning that all GPU operations needed to occur within $2$ seconds. 
This limited my data collection to configurations using at least 32 threads per block in order to have consistent success (16 threads worked often, but not often enough). However, I believe enough data was gathered to be able to comprehensively address the topic.\n  \n\\end{document}\n", "meta": {"hexsha": "bd6e10334561212c1255744a61dd47fe87d66ced", "size": 6717, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CS791v_Spring2015/PA03/Report/PA03_mandelbrot_thenriod.tex", "max_stars_repo_name": "T-R0D/Past-Courses", "max_stars_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-03-13T17:32:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-27T16:51:22.000Z", "max_issues_repo_path": "CS791v_Spring2015/PA03/Report/PA03_mandelbrot_thenriod.tex", "max_issues_repo_name": "T-R0D/Past-Courses", "max_issues_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-29T19:54:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-29T19:54:52.000Z", "max_forks_repo_path": "CS791v_Spring2015/PA03/Report/PA03_mandelbrot_thenriod.tex", "max_forks_repo_name": "T-R0D/Past-Courses", "max_forks_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2016-10-18T03:31:44.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-29T13:23:10.000Z", "avg_line_length": 65.8529411765, "max_line_length": 637, "alphanum_fraction": 0.7787702844, "num_tokens": 1588, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.8244619177503206, "lm_q1q2_score": 0.587208322573114}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{mathrsfs}\n\\usepackage{amsfonts}\n\\usepackage{graphicx}\n\\usepackage{multirow}\n\\usepackage{alltt}\n\\usepackage[numbers]{natbib}\n\\usepackage[total={5.8in, 9in},top=1in,left=1.3in]{geometry}\n\\usepackage{fancyhdr}\n\\usepackage{hyperref}\n\\usepackage{enumerate}\n\n\n\\fancypagestyle{plain}{%\n  \\renewcommand{\\headrulewidth}{0.001pt}%\n  \\fancyhf{}%\n  \\fancyfoot[C]{\\footnotesize Page \\thepage\\ of \\pageref{LastPage}}%\n}\n\n\\lhead{\\small Carl-Johan Thore}\n\\rhead{\\small fminsdp - Optimization with matrix inequality constraints}\n\\pagestyle{fancyplain}\n\n\n\\newcommand{\\bm}[1]{\\mbox{\\boldmath $#1$}}\n\\newcommand{\\T}{\\textsf{T}}\n\\newcommand{\\svec}{\\mbox{\\textsf{svec}}}\n\n\\renewcommand{\\headrulewidth}{0.002pt}\n\n\\hypersetup{bookmarksopen=true}\n\n\\title{FMINSDP -- a code for solving optimization problems with matrix inequality constraints \\vskip 2mm\n\\footnotesize{\\url{https://se.mathworks.com/matlabcentral/fileexchange/43643-fminsdp}} \\normalsize}\n\\date{\\today}\n\\author{Carl-Johan Thore}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%s\n\n\\begin{document}\n\n\\maketitle\n\n\\thispagestyle{empty}\n\n\\noindent This document is a theoretical and practical introduction to the Matlab-code \\texttt{fminsdp}, designed to find local solutions to non-linear, non-convex optimization problems (NLPs) with both scalar constraints and (small-size) matrix inequality constraints. \n\\vskip 2mm\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\noindent \\textbf{Notation.} A matrix $\\bm{A} \\in \\mathbb{R}^{m \\times m}$ is said to be \\textit{positive semi-definite} if \n$\\bm{y}^{\\T}\\bm{A}\\bm{y} \\geq 0$ for all $\\bm{y} \\in \\mathbb{R}^{m}$. It is convenient to introduce\nthe notation \"$\\bm{A} \\succeq \\bm{0}$\" to indicate that $\\bm{A}$ is positive semi-definite. 
A \\textit{matrix inequality} is here\ndefined as an expression of the form\n\\begin{equation}\\label{eq:matrix_ineq}\n\\bm{\\mathcal{A}}(\\bm{x}) \\succeq \\bm{0},\n\\end{equation}\nwhere $\\bm{\\mathcal{A}}$ is a map from $\\Omega \\subset \\mathbb{R}^{n}$ to the space $\\mathbb{S}^{m}$ of symmetric, real-valued matrices of size $m \\times m$.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{The optimization problem}\n\n\\texttt{fminsdp} attempts to find a local solution to non-linear, non-convex optimization problems of the form\n\\begin{equation}\\label{eq:problem}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{n}}{\\mbox{minimize}} \\;\\; f(\\bm{x})  \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\t& \\bm{A}_{eq}\\bm{x} = \\bm{b}_{eq} & \\mbox{linear equality constraints}      \\\\\n\t\t\t& \\bm{A}\\bm{x} \\leq \\bm{b}\t      & \\mbox{linear inequality constraints} \\\\\n\t\t\t& \\bm{c}_{eq}(\\bm{x}) = \\bm{0}\t \t& \\mbox{nonlinear equality constraints}\t\t\t\\\\\t\t\n\t\t  & \\bm{c}(\\bm{x}) \\leq \\bm{0}\t\t\t& \\mbox{nonlinear inequality constraints}  \\\\\t\t\t\n\t\t\t& \\bm{l} \\leq \\bm{x} \\leq \\bm{u}\t& \\mbox{box constraints}\\\\\n\t\t\t& \\bm{\\mathcal{A}}_{i}(\\bm{x}) \\succeq \\bm{0}, \\quad i = 1,...,q & \\mbox{matrix inequality constraints},\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\n\\vskip 2mm\n\\noindent where $\\bm{A}_{eq}$, $\\bm{A}$, $\\bm{b}_{eq}$, $\\bm{b}$, $\\bm{l}$ and $\\bm{u}$ are constant matrices and vectors, respectively. The functions $f$, $\\bm{c}$,  $\\bm{c}_{eq}$ and $\\bm{\\mathcal{A}}_{i}:\\Omega_{i} \\rightarrow \\mathbb{S}^{m_{i}}$, $i = 1,...,q$, can be non-linear and are (preferably) at least \ntwice continuously differentiable. Users familiar with \\texttt{fmincon} from the Optimization Toolbox in Matlab \\cite{fmincon:60} should recognize the \nform of problem \\eqref{eq:problem} --- the novelty here is the addition of $q$ matrix inequality constraints. \n\n\\texttt{fminsdp} offers three different methods to treat problem \\eqref{eq:problem}: the ``cholesky-method'', the ``ldl-method'' \\cite{Bogani:2009} and ``penlab'' \\cite{Fiala:2013}. The first two methods work by reformulating the problem into a standard NLP which can be solved by any of the NLP-solvers interfaced by \\texttt{fminsdp}. The third method relies on the external solver PENLAB\\footnote{Downloadable from \\url{http://web.mat.bham.ac.uk/kocvara/penlab/}.}, which treats the problem directly using an augmented Lagrangian-type algorithm \\cite{Fiala:2013}.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Theoretical background}\n\n\\subsection{The cholesky-method}\n\nIn the cholesky-method, \\texttt{fminsdp} reformulates the matrix inequality constraints into scalar equality constraints using the fact that a matrix is positive semi-definite if and only if it admits a Cholesky decomposition. In other words, \n\\vskip 2mm\n\\noindent \\textbf{Theorem 1.} A symmetric matrix $\\bm{A} \\succeq \\bm{0}$ if and only if\n\\begin{equation}\\nonumber\n\\bm{A} = \\bm{L}\\bm{L}^{\\T}\n\\end{equation}\nfor some lower triangular matrix $\\bm{L}$. 
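\n\\vskip 2mm\n\\noindent Theorem 1 is also convenient as a numerical test for positive semi-definiteness. The following small sketch uses Python/NumPy rather than Matlab, purely for illustration (note that \\texttt{numpy.linalg.cholesky} requires strict positive definiteness, so a tiny diagonal shift is added to cover the semi-definite boundary):\n\\begin{verbatim}\nimport numpy as np\n\ndef is_psd_via_cholesky(A, shift=1e-12):\n    # try to factor A (+ small shift); success certifies that\n    # A + shift*I = L L^T with L lower triangular\n    try:\n        L = np.linalg.cholesky(A + shift * np.eye(A.shape[0]))\n        return True, L\n    except np.linalg.LinAlgError:\n        return False, None\n\nA = np.array([[2.0, -1.0], [-1.0, 2.0]])  # positive definite example\nok, L = is_psd_via_cholesky(A)\nassert ok and np.allclose(L @ L.T, A, atol=1e-6)\n\\end{verbatim}\n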
\n\\vskip 2mm\n\\noindent By default in \\texttt{fminsdp}, $\\bm{L}$ is restricted to the space of lower triangular $m \\times m$-matrices with non-negative diagonal elements, referred to as $\\mathcal{L}^{m}$. Non-negativity of the diagonal elements is not required by the theory, but enforcement of this condition is often important for computational efficiency.\n\n\\texttt{fminsdp} makes use of the function $\\svec : \\mathbb{R}^{m\\times m} \\rightarrow \\mathbb{R}^{p}$, where $p$ denotes the number of non-zero elements in the Cholesky factor $\\bm{L}$ of a given matrix; i.e., the number of elements in the sparsity pattern of $\\bm{L}$, obtained by a \\textit{symbolic} Cholesky factorization of the given matrix. $\\svec$ takes, column-wise, the elements of the input matrix corresponding to potential non-zeros of $\\bm{L}$ and stacks them on top of each other to form a vector of length $p$. If the lower triangular part of $\\bm{L}\\in\\mathcal{L}^{m}$ is full we have $p=m(m+1)/2$ and\n\\begin{equation}\\nonumber\n\\svec(\\bm{A}) = (A_{11},A_{21},\\ldots,A_{m1},A_{22},\\ldots,A_{m2},\\ldots,A_{mm})^{\\T};\n\\end{equation}\ni.e., the vector $\\svec(\\bm{A})$ contains the elements of the lower triangular part of $\\bm{A}$. \n\nNow, based on Theorem 1, problem \\eqref{eq:problem} is reformulated into the following problem:\n\\begin{equation}\\label{eq:problem2}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{n},\\; \\bm{\\ell}_{1}\\in\\mathbb{R}^{p_{1}},\\;\\ldots ,\\;\\bm{\\ell}_{q}\\in\\mathbb{R}^{p_{q}}}{\\mbox{minimize}} \\;\\; f(\\bm{x})  \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\t& \\bm{A}_{eq}\\bm{x} = \\bm{b}_{eq} &\\\\\n\t\t\t& \\bm{A}\\bm{x} \\leq \\bm{b}\t      &\\\\\n\t\t\t& \\bm{c}_{eq}(\\bm{x}) = \\bm{0}\t \t&\\\\\n\t\t  & \\bm{c}(\\bm{x}) \\leq \\bm{0}\t\t\t&\\\\\n\t\t\t& \\bm{l} \\leq \\bm{x} \\leq \\bm{u}\t&\\\\\n\t\t\t& \\tilde{\\bm{l}} \\leq (\\bm{\\ell}_{1},\\ldots,\\bm{\\ell}_{q}) \\leq \\tilde{\\bm{u}} & \\\\\n\t\t\t& \\svec\\left(\\bm{\\mathcal{A}}_{i}(\\bm{x}) - \\bm{L}_{i}(\\bm{\\ell}_{i})\\bm{L}_{i}(\\bm{\\ell}_{i})^{\\T}\\right) = \\bm{0}, & \\quad i = 1,...,q \\\\\n\t\t\t& \\mbox{\\textsf{diag}}\\{\\bm{L}_{i}(\\bm{\\ell}_{i})\\} \\geq \\bm{0}, & \\quad i = 1,...,q,\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\nwhere the variables $\\bm{\\ell}_{i}$ defining the non-zero elements of the Cholesky factors are referred to as \\textit{auxiliary variables}, $\\tilde{\\bm{l}}$ and $\\tilde{\\bm{u}}$ are constant vectors, and $\\mbox{\\textsf{diag}}$ returns a vector containing the diagonal elements of a matrix. This problem is in a form amenable to direct treatment by an NLP-solver, and this is the problem to which \\texttt{fminsdp} attempts to find a local solution.\n\n\\vskip 2mm\n\\noindent \\textbf{Note.} Any local minimum of problem \\eqref{eq:problem2} is also a local minimum for \\eqref{eq:problem}, and vice versa (assuming that bounds on the auxiliary variables do not prevent this). However, problem \\eqref{eq:problem2} may have additional stationary points not present in \\eqref{eq:problem}. 
The author has not experienced any difficulties obviously attributed to this fact, but it cannot be ruled out that it might cause trouble on some problems.\n\\vskip 2mm\n\nA key motivation behind \\texttt{fminsdp} is to abstract away the exact treatment of the matrix inequality constraints so that the user only ``sees'' a problem of the form \\eqref{eq:problem}; that is, the user should not have to deal with the Cholesky factors and the auxiliary variables. Thus, when using \\texttt{fminsdp} one only works with the primary variables $\\bm{x}$, and user-supplied derivatives are only with respect to $\\bm{x}$. The examples in Section \\ref{sec:tutorial} and the folder \\texttt{examples} show how this is done in practice.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{The ldl-method}\n\nIn the ldl-method \\cite{Vanderbei:2000,Bogani:2009}, \\texttt{fminsdp} reformulates the matrix inequality constraints into scalar inequality constraints using the fact that a matrix is positive semi-definite if and only if the diagonal elements in an LDL-factorization of the matrix are non-negative. In other words, \n\\vskip 2mm\n\\noindent \\textbf{Theorem 2.} A symmetric $m\\times m$-matrix $\\bm{A} \\succeq \\bm{0}$ if and only if\n\\begin{equation}\nd_{i}(\\bm{A}) \\geq 0, \\quad i=1,\\ldots,m,\n\\end{equation}\nwhere $d_{i}(\\bm{A})$ are the diagonal elements in an LDL-factorization of $\\bm{A}$.\n\\vskip 2mm\nThe original problem \\eqref{eq:problem} is now replaced by\n\\begin{equation}\\label{eq:problemldl}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{n}}{\\mbox{minimize}} \\;\\; f(\\bm{x})  \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\t& \\bm{A}_{eq}\\bm{x} = \\bm{b}_{eq} &\\\\\n\t\t\t& \\bm{A}\\bm{x} \\leq \\bm{b}\t      &\\\\\n\t\t\t& \\bm{c}_{eq}(\\bm{x}) = \\bm{0}\t \t&\\\\\n\t\t  & \\bm{c}(\\bm{x}) \\leq \\bm{0}\t\t\t&\\\\\n\t\t\t& \\bm{l} \\leq \\bm{x} \\leq \\bm{u}\t&\\\\\n\t\t\t& d_{ij}(\\bm{\\mathcal{A}}_{i}(\\bm{x})) \\geq 0, \\quad j=1,\\ldots,m_{i},\\; i=1,\\ldots,q ,  &\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\n\nThe functions $d_{ij} : \\mathbb{S}^{m} \\rightarrow \\mathbb{R}$ are smooth and concave on the set of positive\ndefinite matrices \\cite{Vanderbei:2000}, so provided $\\bm{\\mathcal{A}}_{i}(\\bm{x})$ are always positive definite,\n\\eqref{eq:problemldl} is a smooth NLP. In practice, even if $\\bm{\\mathcal{A}}_{i}(\\bm{x})$ are all positive definite at the initial point (see section \\ref{sec:infeasible}), this property cannot be ensured throughout the optimization process without taking\nspecial measures. When using \\texttt{fmincon} the line search step-length can be reduced until, due to continuity, $\\bm{\\mathcal{A}}_{i}(\\bm{x})$ is positive definite. When using the NLP-solver \\texttt{gcmma} the subproblems are made more conservative to achieve the same effect. \n\nCompared to the cholesky-method, the ldl-method introduces no additional variables, but requires, e.g., shortening of the search step in line-search procedures, and derivatives of the matrix constraints can be more expensive to compute. 
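\n\nAs a small illustration of Theorem 2:\n\\begin{equation}\\nonumber\n\\left(\n\\begin{array}{cc}\n4 & 2 \\\\\n2 & 2\n\\end{array}\n\\right) = \\left(\n\\begin{array}{cc}\n1 & 0 \\\\\n1/2 & 1\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\n4 & 0 \\\\\n0 & 1\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\n1 & 1/2 \\\\\n0 & 1\n\\end{array}\n\\right),\n\\end{equation}\nso $d_{1}=4\\geq0$ and $d_{2}=1\\geq0$, confirming that this matrix is positive semi-definite.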
\n\nWhen using the NLP-solver \\texttt{gcmma}, the computational cost can sometimes be reduced significantly by using an active-set approach \\cite{Bogani:2009} where the gcmma subproblem only takes into account those constraints which satisfy\n\\begin{equation}\\nonumber\nd_{ij}(\\bm{\\mathcal{A}}_{i}(\\bm{x})) \\leq \\eta,\n\\end{equation}\nwhere $\\eta$ is a positive constant. This approach can be efficient for problems with low-rank solutions, and is activated by setting the option \\texttt{eta} (see Section \\ref{sec:options}) to some suitable value.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{Penlab}\n\nThe reader is referred to \\cite{Fiala:2013} and references therein for a description of the algorithm implemented in PENLAB.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Infeasible initial points}\n\\label{sec:infeasible}\n\nIt is recommended that the user supplies an initial point which is feasible with respect to the constraints of problem \\eqref{eq:problem}, in particular the matrix inequality constraints. If the latter is not possible, an option can be set (see section \\ref{sec:options}) so that \\texttt{fminsdp} attempts to solve the following problem in place of \\eqref{eq:problem}:\n\\begin{equation}\\label{eq:problem3}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{n},\\; \\mbox{$s$}\\in\\mathbb{R}}{\\mbox{minimize}} \\;\\; f(\\bm{x}) + cs \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\t& \\bm{A}_{eq}\\bm{x} = \\bm{b}_{eq} &\\\\\n\t\t\t& \\bm{A}\\bm{x} \\leq \\bm{b}\t      &\\\\\n\t\t\t& \\bm{c}_{eq}(\\bm{x}) = \\bm{0}\t \t&\\\\\n\t\t  & \\bm{c}(\\bm{x}) \\leq \\bm{0}\t\t\t&\\\\\n\t\t\t& \\bm{l} \\leq \\bm{x} \\leq \\bm{u}\t&\\\\\n\t\t\t& \\bm{\\mathcal{A}}_{i}(\\bm{x}) + s\\bm{I}^{m_{i}} \\succeq \\bm{0}, & \\quad i = 1,...,q \\\\\n\t\t\t& \\underline{s} \\leq s \\leq \\overline{s}. & \n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\nHere $s$ is an auxiliary variable, $\\bm{I}^{m_{i}},\\, i = 1,...,q$, are identity matrices of size $m_{i} \\times m_{i}$, and\n$\\underline{s}$ and $\\overline{s}$ are constants. The constant $c$ appearing in the objective should be set to some large positive number (finding a suitable value might require experimenting a bit) such that $s$ becomes close to zero at a solution. \n\nUnless the user specifies something else, given $\\bm{x}_{0}$, \\texttt{fminsdp} will generate an initial value for the auxiliary variable according to\n\\begin{equation}\\nonumber\ns_{0} = \\max_{i=1,\\ldots,q}\\Big(-\\min \\,\\{\\lambda_{1}(\\mathcal{A}_{i}(\\bm{x}_{0})),-10^{-12}\\}\\Big),\n\\end{equation}\nwhere $\\lambda_{1}(\\cdot)$ returns the smallest eigenvalue of a matrix. This guarantees that $\\bm{\\mathcal{A}}_{i}(\\bm{x}_{0}) + s_{0}\\bm{I}^{m_{i}} \\succeq \\bm{0}$ for all $i$.\n\n\\vskip 2mm\n\\noindent \\textbf{Note.} The ldl-method requires a feasible initial point, so if you're unable to supply such a point you must set $c > 0$. 
The cholesky- and penlab-methods do not require a feasible initial point.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Problem size and some limitations}\n\nAssuming the constraint matrices are of small size or very sparse, \\texttt{fminsdp} might be able to solve large-scale problems within reasonable time. The ldl-method might also work for problems involving large constraint matrices. For the cholesky-method, one can expect a fairly large number of auxiliary variables and thus many non-zero elements in the constraint Jacobian (and Hessian of the Lagrangian), resulting in much memory and CPU time being devoted to the solution of linear systems by the NLP solver used to solve \\eqref{eq:problem2}. Penlab, finally, relies on exact second-derivative information, which can make problems costly to solve.\n\nHere are some additional things to note:\n\\begin{enumerate}\n\\item NLP-solvers ipopt, knitro, snopt, penlab, mma and gcmma must be downloaded and installed separately. \n\\item Penlab requires a user-supplied function for evaluating the Hessian of the Lagrangian.\n\\item The ldl-method currently cannot make use of a user-supplied Hessian of the Lagrangian and has\n      only been tested with NLP-solvers fmincon and gcmma.     \n\\item Although it is possible to improve performance by implementing parts of the code as MEX-functions, \\texttt{fminsdp} is a pure Matlab-code. The reason is to make installation, set-up and maintenance as easy as possible.\nThe interested user could try out the Matlab-function \\texttt{profile} to identify bottlenecks in the code and\nimplement them as MEX-functions instead. \n\\end{enumerate}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{Alternatives}\n\nThere are many important special cases of problem \\eqref{eq:problem} for which specialized solvers \nare available. Although \\texttt{fminsdp} can be applied to such special cases, it is not intended to replace these solvers, which, \nwhen applicable, can be a lot faster.\n\nThe following is an incomplete list of currently available solvers:\n\\begin{itemize}\n\\item Problems with linear matrix inequality (LMI) constraints:\\\\\n\\hskip2mm SeDuMi, SDPT3, SDPA, LMI Lab, PENSDP, BMISolver, PENBMI, PENNON\n\\item Problems with bilinear matrix inequality (BMI) constraints:\\\\\n\\hskip2mm BMISolver, PENBMI, PENNON, PENLAB\n\\item General problems of type \\eqref{eq:problem}:\\\\\n\\hskip2mm PENNON, PENLAB\n\\end{itemize}\nNote that except for PENNON, and its open-source version PENLAB, there are additional constraints on the structure of the problems treated by the listed solvers.\n\nThe Matlab code YALMIP \\cite{Yalmip:2004} provides a convenient unified interface to the solvers listed above (except BMISolver).\n\nOne could think of other criteria for positive semi-definiteness besides the ones used here, not without drawbacks of course, that might be used to obtain a standard NLP formulation of \\eqref{eq:problem}. Non-negativity of the smallest eigenvalue or of all principal minors of a matrix are both necessary and sufficient criteria, although the smallest eigenvalue is in general a non-smooth function of the matrix entries. 
If one is satisfied with sufficiency, diagonal dominance could also be considered.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\\section{Using \\texttt{fminsdp}}\n\\label{sec:using}\n\nA user of \\texttt{fminsdp} is expected to provide two functions: one for evaluating the objective function (i) and one for evaluating the non-linear constraints (ii):\n\\begin{enumerate}[i.]\n\\item \\texttt{[fval,\\textit{grad}] = objfun(x)}\n\\item \\texttt{[cineq,ceq,\\textit{cineqgrad},\\textit{ceqgrad}] = nonlcon(x)}\n\\end{enumerate}\n(Here and in the following it is assumed that the reader is familiar with the Matlab syntax.) The return arguments written in italics are optional and only needed if the user wishes to provide gradients for the objective and constraints. The matrix inequality constraints are defined in the non-linear constraints function (even linear matrix inequalities). The values of the constraint matrices, vectorized using \\svec, are returned as the last elements in the output vector \\texttt{ceq}; see Section \\ref{sec:tutorial} for an example.\n\nIn addition to the objective and non-linear constraints functions, the user may optionally provide a function for evaluating\nthe Hessian of the Lagrangian:\n\\begin{itemize}\n\\item \\texttt{H = hessian(x,lambda)},\n\\end{itemize}\nwhere \\texttt{lambda} is a struct with two fields \\texttt{ineqnonlin} and \\texttt{eqnonlin} containing values of the Lagrange\nmultipliers associated with the non-linear inequality and equality constraints, respectively, and one field \\texttt{sigma} \nwhich is only used when running with the NLP-solver Ipopt.\n\nThe calling syntax of \\texttt{fminsdp} is similar to that of \\texttt{fmincon}:\n\\begin{verbatim}\n>> [x,fval,exitflag,output,lambda,grad,hessian] = ...\n\t\t\t\t\t\t\t\t\t\t                        fminsdp(objfun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)\n\\end{verbatim}\nUnlike fmincon, at least 9 input arguments are required. If your problem has no linear constraints and no simple\nbounds, the corresponding input arguments can be set to empty matrices. Note that \\texttt{fminsdp} does not\nsupport passing additional arguments to the objective or constraint functions through an 11th input argument. \nTo pass additional arguments to the user-supplied functions one should use anonymous or nested functions.\n\n\\subsection{Options}\n\\label{sec:options}\nA number of options are available when calling \\texttt{fminsdp}. These are passed to the code in the form of a struct containing\nparameter-value pairs. The most convenient way to specify options is to use the function \\texttt{sdpoptionset}:\n\\begin{verbatim}\n>> options = sdpoptionset('MatrixInequalities',true);\n\\end{verbatim}\n\nIn addition to the options accepted by the function \\texttt{optimset} from the Matlab Optimization Toolbox, the\nfollowing options are available when calling \\texttt{fminsdp}:\n\\vskip 2mm\n\\noindent \\texttt{MatrixInequalities}  \\hskip 1cm logical scalar\\\\\nIndicates whether or not the problem has any matrix inequality constraints. If not, \\texttt{fminsdp} will simply call the \nNLP solver specified using options.NLPsolver.\\\\\n\\noindent  Default: true \n\\vskip 2mm\n\\noindent \\texttt{Aind}  \\hskip 3.5cm scalar or numeric array\\\\\nMarks the beginning of each matrix constraint in the vector \\texttt{ceq} returned from the non-linear constraints function. 
If\nthe option \\texttt{sp\\_pattern}, described below, is used, it is only necessary to mark the beginning of the \\textit{first} matrix \nconstraint.\\\\\n\\noindent  Default: 1\n\\vskip 2mm\n\\noindent \\texttt{method} \\hskip 3.0cm \\{'cholesky', 'ldl', 'penlab'\\} \\\\\nSelect treatment of matrix inequality constraints. \\\\\nDefault: 'cholesky'\n\\vskip 2mm\n\\noindent \\texttt{NLPsolver} \\hskip 2.5cm  \\{'fmincon', 'ipopt', 'snopt', 'knitro', 'mma', 'gcmma'\\}               \\\\\nSelect NLP-solver (not applicable if \\texttt{method}='penlab'). Interfaces are provided to fmincon, Ipopt \\cite{Wachter:2006}, SNOPT \\cite{Gill:2002}, KNITRO \\cite{Byrd:2006}, MMA/GCMMA \\cite{Svanberg:2007}, and PENLAB.\nNOTE: If you run Ipopt older than 3.11.0, make sure to modify the file ipopt\\_main.m appropriately. \\\\\nDefault: 'fmincon'\n\\vskip 2mm\n\\noindent \\texttt{max\\_cpu\\_time}  \\hskip 2.1cm positive scalar \\\\\nMaximum CPU time. Applicable to NLP-solvers fmincon, ipopt, mma and gcmma.\\\\\nDefault: inf       \n\\vskip 2mm\n\\noindent \\texttt{sp\\_pattern}  \\hskip 2.3cm  (sparse) matrix or cell array of matrices\\\\\nSparsity patterns of the matrix constraints. If this option is used, the user must provide one matrix for each matrix constraint. \n\\\\\n\\noindent  Default: []\n\\vskip 2mm\n\\noindent \\texttt{L0} \\hskip 3.2cm  (sparse) matrix  or cell array of matrices \\\\\nInitial guess for the Cholesky factors of the matrix constraints. If the user does not provide an initial\nguess, \\texttt{fminsdp} will generate one by attempting a Cholesky factorization of the constraint matrices \nat the initial point. If this fails, \\texttt{fminsdp} will add a multiple of the identity matrix to\nthe constraint matrices until all of them are positive definite and use the Cholesky factorizations of these\nmatrices as an initial guess. \\\\\nDefault: []\n\\vskip 2mm\n\\noindent \\texttt{Ldiag\\_low} \\hskip 2cm  scalar or numeric array\t\\\\\nLower bound(s) on the diagonal elements of the Cholesky factors. \\\\\nDefault: 0\n\\vskip 2mm\n\\noindent \\texttt{L\\_low} \\hskip 2.75cm   scalar or array of doubles \\\\\nLower bound(s) on the off-diagonal elements of the Cholesky factors. \\\\\nDefault: -inf\n\\vskip 2mm\n\\noindent \n\\texttt{L\\_upp} \\hskip 2.75cm  scalar or array of doubles \\\\\nUpper bound(s) on the elements of the Cholesky factors (including the diagonal elements).\\\\\nDefault: inf\n\\vskip 2mm\n\\noindent \n\\texttt{eta} \\hskip 3.1cm  positive scalar  \\\\\nTolerance for determining the active set when using the ldl-method with NLP-solver gcmma.\\\\\nDefault: inf\n\\vskip 2mm\n\\noindent \n\\texttt{c} \\hskip 3.47cm  non-negative scalar    \\\\\nIf $c>0$, \\texttt{fminsdp} attempts to solve problem \\eqref{eq:problem3} instead of \n\\eqref{eq:problem}. The user may in this case also let the objective function be empty; i.e., the first input argument to \\texttt{fminsdp} can be set to '[]'. This is useful when one wants to check feasibility of one or more matrix inequalities. 
\\\\\nDefault: 0\n\\vskip 2mm\n\\noindent\n\\texttt{s\\_low}   \\hskip 2.8cm        scalar      \\\\\nLower bound on the auxiliary variable $s$ in problem \\eqref{eq:problem3}.\\\\\nDefault: 0\n\\vskip 2mm\n\\noindent\n\\texttt{s\\_upp}   \\hskip 2.8cm        scalar      \\\\\nUpper bound on the auxiliary variable $s$ in problem \\eqref{eq:problem3}.\\\\\nDefault: inf\n\\vskip 2mm\n\\noindent\n\\texttt{HessianCheck} \\hskip 1.4cm   \\{'on', 'off'\\} \\\\      \nSimple check of Hessian of the Lagrangian against finite differences at the initial point. \nAssumes you have the code DERIVEST, which must be obtained separately, on your Matlab path. \nThis check can be very time consuming and should preferably be carried out on small instances of \na problem.\\\\\nDefault: 'off'\n\\vskip 2mm\n\\noindent\n\\texttt{HessMult} \\hskip 2cm   \\{function\\_handle, 'on'\\} \\\\\nIf using fmincon with \\texttt{options.SubProblemAlgorithm = 'cg'}, you can work with Hessian \ntimes vector products directly, thereby avoiding the formation of the full Hessian of\nthe Lagrangian. Set to a function handle or simply to 'on' if the Hessian of\nthe Lagrangian with respect to the primary variables is zero. Only applicable when using the cholesky-method. \\\\\nDefault: []\n\\vskip 2mm\n\\noindent\n\\texttt{ipopt} \\hskip 2.7cm     struct   \\\\\nOptions to be passed on to NLP solver Ipopt. Please refer to the Ipopt documentation for a\nlist of available options. \\\\\nDefault: []\n\\vskip 2mm\n\\noindent\n\\texttt{eigs\\_opts} \\hskip 2.1cm       struct    \\\\\nOptions passed to the Matlab function \\texttt{eigs} used for eigenvalue computations. \\\\\nDefault: struct('isreal',true,'issym',true)\n\\vskip 2mm\n\\noindent\n\\texttt{KnitroOptionsFile} \\hskip 0.5cm  character array \\\\\nName of an options file to be read by NLP solver KNITRO. Please refer to the KNITRO\ndocumentation for a list of available options.\\\\\nDefault: []\n\\vskip 2mm\n\\noindent\n\\texttt{SnoptOptionsFile} \\hskip 0.6cm   character array \\\\\nName of an options file to be read by NLP solver SNOPT. Please refer to the SNOPT\ndocumentation for a list of available options.\\\\\nDefault: []\n\\vskip 2mm\n\\noindent\n\\texttt{GradPattern} \\hskip 1.5cm        numeric array   \\\\\nSparsity pattern for the gradient of the objective function. Only used by NLP solver SNOPT and \nonly effective if you also set \\texttt{options.JacobPattern} (see the fmincon documentation for\ndetails on the latter). \\\\\nDefault: []\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\subsection{Output}\n\nThe available output arguments from \\texttt{fminsdp} are the same as those of fmincon:\n\\begin{verbatim}\n>> [x,fval,exitflag,output,lambda,grad,hessian] = fminsdp(...)\n\\end{verbatim}\nThe only difference is that the struct \\texttt{output}, the fourth output argument, contains\nsome additional fields (some are only available when running the cholesky-method):\n\\vskip 3mm\n\\noindent\n\\texttt{A} \\hskip 4.2cm        cell array of matrices  \\\\\nConstraint matrices evaluated at the solution \\texttt{x}. \n\\vskip 2mm\n\\noindent\n\\texttt{L} \\hskip 4.2cm        cell array of matrices  \\\\\nCholesky factors of the constraint matrices evaluated at the solution \\texttt{x}. 
These are computed\nby assembling each $\\bm{L}_{i}$ from the variable vector $\\bm{\\ell}_{i}$ and not by a Cholesky factorization\nof the corresponding constraint matrix.\n\\vskip 2mm\n\\noindent\n\\texttt{L0} \\hskip 4.0cm        cell array of matrices  \\\\\nCholesky factors of the constraint matrices evaluated at the initial point \\texttt{x0}.\n\\vskip 2mm\n\\noindent \\texttt{nxvars}\t\t\\hskip 3.2cm \t\tnumeric scalar \\\\\nNumber of primary variables.\n\\vskip 2mm\n\\noindent \\texttt{nLvars}\t\t\\hskip 3.2cm     numeric scalar \\\\\nNumber of auxiliary variables.\n\\vskip 2mm\n\\noindent \\texttt{A\\_size}   \\hskip 3.3cm     numeric array \\\\\nSize of the constraint matrices; i.e., $m_{i}$ in \\eqref{eq:problem2}.\n\\vskip 2mm\n\\noindent \\texttt{nMatrixConstraints}   \\hskip 1cm     numeric scalar \\\\\nNumber of matrix constraints.\n\\vskip 2mm\n\\noindent \\texttt{NLPsolver}   \\hskip 2.7cm     string \\\\\nSelected NLP solver.\n\\vskip 3mm\nIf the user has set \\texttt{options.c} to some positive number in order to solve problem \\eqref{eq:problem3},\nthen two additional fields are available:\n\\vskip 3mm\n\\noindent \\texttt{s0}   \\hskip 2.7cm     double scalar \\\\\nInitial value for the auxiliary variable $s$.\n\\vskip 2mm\n\\noindent \\texttt{s}   \\hskip 2.9cm     double scalar \\\\\nValue of $s$ at the solution.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{A tutorial example}\n\\label{sec:tutorial}\n\nTo aid the user, a very simple tutorial example is provided here. For more details, please refer to the examples \nfound in the \\texttt{examples}-folder. \n\nConsider the following (nonsense) problem:\n\\begin{equation}\\label{eq:tutorial}\n\\quad\n\t\\begin{aligned}\n    & \\underset{\\bm{x}\\in\\mathbb{R}}{\\mbox{minimize}} \\;\\;  x^{2}  \\\\\n\t& \\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t\t& x^{3} = 0 \t\\\\\n\t\t    & x^{2} \\leq 0\t\t\t\\\\\n\t\t\t& \\left(\n\t\t\t\\begin{array}{cc}\n\t\t\t\t\tx   & x^2 \\\\\n\t\t\t        x^2 & 0\n\t\t\t      \\end{array}\n\t\t\t\\right) \\succeq \\bm{0}\t\t\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\nTo solve this problem using \\texttt{fminsdp} we need to implement at least two functions:\n\n\\begin{enumerate} \n\\item The objective function:\n\\begin{verbatim}\nfunction [fval,grad] = objfun(x)\nfval = x^2;\nif nargout>1 \n   grad = 2*x;\nend\n\\end{verbatim}\n\n\\item The non-linear constraints function:\n\\begin{verbatim}\nfunction [cineq,ceq,cineqgrad,ceqgrad] = nonlcon(x)\ncineq = x^2;\nceq = [x^3; svec([x x^2; x^2 0])];\nif nargout>2\n   cineqgrad = 2*x;\n   ceqgrad = [3*x^2 svec([1 2*x; 2*x 0])'];\nend\n\\end{verbatim}\n\n\\end{enumerate}\n\n\\noindent A function for evaluating the Hessian of the Lagrangian is optional, but recommended. 
One\nsuch function is given here:\n\\begin{verbatim}\nfunction H = hessian(x, lambda)\n% The leading 2 is the Hessian of the objective x^2\nH = 2 + lambda.ineqnonlin*2 + lambda.eqnonlin(1)*6*x + ...\n    lambda.eqnonlin(2:end,1)'*svec([0 2; 2 0]);\n\\end{verbatim}\n\nUsing the functions specified above, a simple script to solve problem \\eqref{eq:tutorial} can now be written:\n\n\\begin{verbatim}\n\n% Mark the beginning of the matrix inequality constraints in the vector ceq \n% returned from nonlcon\noptions.Aind = 2;\n\n% Specify that analytical gradients should be used\noptions.GradObj = 'on';\noptions.GradConstr = 'on';\n\n% Specify that the function \"hessian\" should be used for the\n% Hessian of the Lagrangian\noptions.Hessian = 'user-supplied';\noptions.HessFcn = @(x,lambda) hessian(x,lambda);\n\n% Specify initial point\nx0 = 1;\n\n% Call fminsdp\n[x,fval] = fminsdp(@objfun,x0,[],[],[],[],[],[],@nonlcon,options);\n\t\t\t\t   \n\\end{verbatim}\t\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\\section{Practicalities}\n\\label{sec:practicalities}\n\nFor a given problem there are usually a number of things that can be done to improve numerical performance. One can, for instance, try out various scalings or introduce new variables and constraints to reduce the degree of non-linearity. Here are some additional tips: \n\n\\vskip 5mm\n\\noindent\\textbf{Exploit sparsity}\n\\vskip 2mm\n\\noindent In many cases, the functions $\\bm{\\mathcal{A}}_{i}$, $i = 1,...,q$, are sparse in the sense that the matrix $\\bm{\\mathcal{A}}_{i}(\\bm{x})$\nis sparse for every $\\bm{x}$. This fact can be exploited to reduce the number of auxiliary variables and nonlinear constraints for the cholesky-method.\n\nTo take advantage of sparsity, the user should compute the sparsity pattern of each matrix constraint (or\na (pessimistic) estimate thereof) and supply this to \\texttt{fminsdp} via the option \\texttt{sp\\_pattern}. For the\ndummy problem in Section \\ref{sec:tutorial} we simply add two lines to the driver script:\n\\begin{verbatim}\nsp_A = [1 1; 1 0];\t\t\t\t\t  \t\t        % Sparsity pattern of the constraint matrix\noptions.sp_pattern = sp_A;\n\n...\n\n[x,fval] = fminsdp(@(x) objfun(x),x0,[],[],[],...\n\t\t\t\t   [],[],[],@(x) nonlcon(x),options);\n\n\\end{verbatim}\n\n\n\\vskip 2mm\n\\noindent\\textbf{Add artificial upper and lower bounds}\n\\vskip 2mm\n\\noindent Even though some of the variables in your problem have no ``natural'' bounds on them, it is usually wise\nto add upper and lower bounds. These bounds should of course be loose enough not to exclude any solution of \ninterest, but even when they don't exclude solutions, bounds that are too tight might have a negative impact on the solution process.\nTherefore, a bit of experimenting is recommended.\n\nBounds on the auxiliary variables in the cholesky-method can be specified by setting \\texttt{options.L\\_low} and \\texttt{options.L\\_upp}. Sometimes it is easy to derive\nsuitable bounds \\textit{a priori} \\cite{Thore:2015}.\n\n\n\\vskip 5mm\n\\noindent\\textbf{Experiment with various solvers and algorithms}\n\\vskip 2mm\n\\noindent \\texttt{fminsdp} can use fmincon, SNOPT, KNITRO, Ipopt, mma, gcmma and PENLAB to solve problem \\eqref{eq:problem2}. In addition, fmincon\nand KNITRO let the user choose between three different algorithms (two interior-point and one sqp). 
It is unlikely that a single solver and/or algorithm is best suited for all types of problems, so it is recommended that the user experiment with various \nalternatives. The author's experience is that the interior-point method of fmincon (selected by setting \\texttt{options.Algorithm='interior-point'} and \\texttt{options.SubProblemAlgorithm = 'ldl-factorization'}) works well in\nterms of the number of function evaluations, but that more efficient handling of the linear algebra makes the other solvers \nbetter suited for larger problems.\n\n\\vskip 5mm\n\\noindent\\textbf{Provide gradients and Hessian of the Lagrangian}\n\\vskip 2mm\n\\noindent Providing code that computes gradients and the Hessian of the Lagrangian is usually a good idea (if you use Ipopt you \\textit{must} provide code for evaluating gradients; SNOPT does not make use of code for evaluating the Hessian of the Lagrangian). Due to the linearity of $\\svec$ one has simply\n\\begin{equation}\\nonumber\n\\frac{\\partial \\svec(\\bm{\\mathcal{A}})}{\\partial x_{i}} = \\svec\\left(\\frac{\\partial \\bm{\\mathcal{A}}}{\\partial x_{i}}\\right),\n\\end{equation}\nand for a function $L(\\bm{x}) = \\bm{\\lambda}^{\\T}\\svec(\\bm{\\mathcal{A}}(\\bm{x}))$,\n\\begin{equation}\\nonumber\n\\frac{\\partial^2 L}{\\partial x_{i}\\partial x_{j}} =  \\bm{\\lambda}^{\\T}\\svec\\left(\\frac{\\partial^2 \\bm{\\mathcal{A}}}{\\partial x_{i}\\partial x_{j}}\\right).\n\\end{equation}\n\nThe user is strongly recommended to check the correctness of the derivatives by comparing against finite-difference \napproximations. This can be done by setting \\texttt{options.DerivativeCheck='on'} and \\texttt{options.HessianCheck='on'}. \nIf necessary, the accuracy of the finite-difference approximations in \\texttt{fmincon} can be increased by setting\n\\texttt{options.FinDiffType='central'}.\n\nIf you use NLP solvers SNOPT, Ipopt or KNITRO, you can also provide sparsity patterns for the constraint Jacobian, and Ipopt\nand KNITRO can also exploit a sparsity pattern for the Hessian of the Lagrangian. Sparsity patterns are passed to the solvers\nby setting \\texttt{options.JacobPattern} and \\texttt{options.HessPattern} to sparse matrices.\n\nAs an alternative to deriving and implementing derivatives one might consider using automatic differentiation, which provides \nderivatives with accuracy to machine precision, and, at least in principle, does so completely automatically.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Examples}\n\nThis section comprises a brief description of three problems concerned with structural optimization of (plane) trusses. Implementations of the example problems can be found in the folder \\texttt{examples}.\\footnote{Note that there are often many different ways to formulate such problems \\cite{Cristensen:2009,Bendsoe:1994}, and formulations other than those given here may certainly be better suited to one's needs. Here, however, we are specifically interested in solving problems with matrix inequality constraints.}\n\nLet $n_{d} = 2$ be the number of spatial dimensions, $N$ the number of nodes in the truss, and $n_{fixed}$ the number of prescribed (to zero) displacement components. 
The number of displacement degrees of freedom is $n=n_{d}N-n_{fixed}$ and there are $m$ potential bars in the truss.\nThe $m$ bar volumes are collected in a vector $\\bm{x}\\geq\\bm{0}$, which is used to parametrize the design of the truss. Assuming (infinitesimally) small deformations and a quasi-static situation, the nodal displacement vector $\\bm{u}\\in\\mathbb{R}^{n}$ satisfies the equilibrium equation\n\\begin{equation}\\label{eq:equlibrium}\n\\bm{K}(\\bm{x})\\bm{u} = \\bm{f},\n\\end{equation}\nwhere $\\bm{K}(\\bm{x})$ is known as the stiffness matrix and $\\bm{f}\\in\\mathbb{R}^{n}$ contains forces applied to the nodes. The stiffness matrix is given by\n\\begin{equation}\\nonumber\n\\bm{K}(\\bm{x}) = \\sum_{i=1}^{m}x_{i}E_{i}\\bm{b}_{i}\\bm{b}_{i}^{\\T},\n\\end{equation}\nwhere $x_{i}$ is the volume of the $i$:th bar, $E_{i}$ the Young's modulus, and the vectors $\\bm{b}_{i}$ depend on the geometry of the undeformed truss. Clearly, $\\bm{K}(\\bm{x})$ is positive semi-definite and symmetric. Assuming that rigid body motions are prevented by appropriate support conditions and that $\\bm{x}>\\bm{0}$, it is even positive definite.\n\\vskip2mm\n\\noindent\\textbf{1.} The first problem in the \\texttt{examples}-folder is to minimize the volume of a truss subject to the equilibrium condition \\eqref{eq:equlibrium} and an upper bound $c$ on the so-called compliance $\\bm{f}^{\\T}\\bm{u}$. This can be formulated as a problem involving a single LMI: \n\\begin{equation}\\label{eq:prob1}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{m}}{\\mbox{minimize}} \\;\\; \\sum_{i=1}^{m}x_{i} \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t  & \\left(\n\\begin{array}{cc}\nc &\\bm{f}^{\\T}         \\\\\n\\bm{f} & \\bm{K}(\\bm{x})\n\\end{array}\n\\right) \\succeq \\bm{0} \\\\\n\t\t\t& x_{i} \\geq 0 , \\quad i=1,\\ldots,m.\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\n\n\\vskip2mm\n\\noindent\\textbf{2.} \nA drawback of problem \\eqref{eq:prob1} is that optimized structures may be unstable in the sense that they are prone to global buckling. One way to avoid many such designs is to impose an additional constraint \\cite{Kocvara:2002} requiring that\n\\begin{equation}\\label{eq:stability}\n\\bm{K}(\\bm{x}) + \\bm{G}(\\bm{u}(\\bm{x}),\\bm{x}) \\succeq \\bm{0},\n\\end{equation}\nwhere the geometric stiffness matrix is given by\n\\begin{equation}\\nonumber\n\\bm{G}(\\bm{u}(\\bm{x}),\\bm{x}) = \\sum_{i=1}^{m}x_{i}E_{i}\\bm{b}_{i}^{\\T}\\bm{u}(\\bm{x})\\bm{\\gamma}_{i}\\bm{\\gamma}_{i}^{\\T},\n\\end{equation}\nin which $\\bm{u}(\\bm{x})$ denotes a solution to \\eqref{eq:equlibrium} and $\\bm{\\gamma}_{i}$ depend on the geometry of the undeformed truss. 
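\n\nFor concreteness, a sketch of how $\\bm{K}(\\bm{x})$ and $\\bm{G}(\\bm{u},\\bm{x})$ might be assembled in Matlab is given below (a sketch only; the variable names \\texttt{E}, \\texttt{b} and \\texttt{gamma} are hypothetical and not taken from the \\texttt{examples}-folder):\n\\begin{verbatim}\n% Sketch: assemble K(x) and G(u,x) for a truss with m bars.\n% Column b(:,i) holds b_i, column gamma(:,i) holds gamma_i,\n% E(i) is the Young's modulus and u the displacement vector.\nK = zeros(n); G = zeros(n);\nfor i = 1:m\n    K = K + x(i)*E(i)*(b(:,i)*b(:,i)');\n    G = G + x(i)*E(i)*(b(:,i)'*u)*(gamma(:,i)*gamma(:,i)');\nend\n\\end{verbatim}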
\n\nAdding \\eqref{eq:stability} to \\eqref{eq:prob1} leads to the following problem involving both a linear and a non-linear matrix \ninequality\\footnote{In practice it is perhaps better to replace the LMI in \\eqref{eq:prob2} by the non-linear constraint $\\bm{f}^{\\T}\\bm{u}(\\bm{x})\\leq c$ but we keep it to illustrate solution of a problem with more than one matrix inequality.}:\n\\begin{equation}\\label{eq:prob2}\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{m}}{\\mbox{minimize}} \\;\\; \\sum_{i=1}^{m}x_{i} \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t  & \\left(\n\\begin{array}{cc}\nc &\\bm{f}^{\\T}         \\\\\n\\bm{f} & \\bm{K}(\\bm{x})\n\\end{array}\n\\right) \\succeq \\bm{0} \\\\\n\t\t\t& \\bm{K}(\\bm{x}) + \\bm{G}(\\bm{u}(\\bm{x}),\\bm{x}) \\succeq \\bm{0} \\\\\n\t\t\t& x_{i} \\geq \\epsilon , \\quad i=1,\\ldots,m,\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\nwhere $\\epsilon$ is a small positive number introduced to avoid singularity of the stiffness matrix. \\citet{Kocvara:2002} showed that a solution to this problem corresponds to a truss that will not exhibit global, linear buckling for loads\nof the form $\\tau\\bm{f}$, $\\tau\\in[0,1)$.\n\n\\vskip2mm\n\\noindent\\textbf{3.} \nThe third example problem is an alternative formulation of problem \\eqref{eq:prob2} obtained by treating the displacements as explicit variables in the optimization problem:\n\\begin{equation}\\nonumber\n\\quad\n\t\\begin{aligned}\n\t&\t\\underset{\\bm{x}\\in\\mathbb{R}^{m},\\bm{u}\\in\\mathbb{R}^{n}}{\\mbox{minimize}} \\;\\; \\sum_{i=1}^{m}x_{i} \\\\\n\t&\t\\mbox{subject to} \\;\n\t\\left\\{\n\t\t\\begin{aligned}\n\t\t  & \\bm{f}^{\\T}\\bm{u} \\leq c \\\\\n\t\t  & \\bm{K}(\\bm{x})\\bm{u} = \\bm{f} \\\\\n\t\t\t& \\bm{K}(\\bm{x}) + \\bm{G}(\\bm{u},\\bm{x}) \\succeq \\bm{0} \\\\\n\t\t\t& x_{i} \\geq \\epsilon , \\quad i=1,\\ldots,m.\n\t\t\\end{aligned}\n\t\t\\right.\n\t\\end{aligned}\n\\end{equation}\nIn this formulation, the upper bound on the compliance is a linear constraint, the equilibrium equation a set of bilinear equality constraints,\nand the stability constraint is a BMI.\n\n\\vskip 5mm\n\\noindent To see how these problems can be solved with \\texttt{fminsdp}, and what optimized designs might look like, please check out the Matlab codes found in the \\texttt{examples}-folder.\n\n\n\\newpage\n\\bibliographystyle{../../../spbasic}\n\\bibliography{references}\n\n\\end{document}\n", "meta": {"hexsha": "18c7a75b31053a70ef57f9c6f30355a65476cb2f", "size": 41250, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "util/fminsdp/docs/doc.tex", "max_stars_repo_name": "jrmout/lpv-em", "max_stars_repo_head_hexsha": "13445927e534e33884727f063aae313f001bd143", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2017-11-30T12:51:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-11T20:21:17.000Z", "max_issues_repo_path": "util/fminsdp/docs/doc.tex", "max_issues_repo_name": "jrmout/lpv-em", "max_issues_repo_head_hexsha": "13445927e534e33884727f063aae313f001bd143", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "util/fminsdp/docs/doc.tex", "max_forks_repo_name": "jrmout/lpv-em", "max_forks_repo_head_hexsha": "13445927e534e33884727f063aae313f001bd143", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, 
"max_forks_repo_forks_event_min_datetime": "2018-10-22T14:06:11.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-27T05:48:40.000Z", "avg_line_length": 53.1572164948, "max_line_length": 648, "alphanum_fraction": 0.6844363636, "num_tokens": 12078, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5872083136090486}}
{"text": "\n\\section{Connections on Riemann manifolds}\n", "meta": {"hexsha": "7485bbb16cc7cda324afd5f200a86c696526b651", "size": 44, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsRiemann/02-00-Connections_on_Riemann_manifolds.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/manifoldsRiemann/02-00-Connections_on_Riemann_manifolds.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsRiemann/02-00-Connections_on_Riemann_manifolds.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.6666666667, "max_line_length": 42, "alphanum_fraction": 0.8181818182, "num_tokens": 13, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.5872083085730623}}
{"text": "\n\\chapter{\\texorpdfstring{Typed sets and category $\\mathbf{Rel}$}%\n{Typed sets and category Rel}}\n\n\n\\section{Relational structures}\n\\begin{defn}\n\\index{relational structure}A \\emph{relational structure} is a pair\nconsisting of a set and a tuple of relations on this set.\n\\end{defn}\nA poset $(\\mathfrak{A},\\sqsubseteq)$ can be considered as a relational\nstructure: $(\\mathfrak{A},\\llbracket\\sqsubseteq\\rrbracket).$\n\nA set can $X$ be considered as a relational structure with zero relations:\n$(X,\\llbracket\\rrbracket).$\n\nThis book is not about relational structures. So I will not introduce\nmore examples.\n\nThink about relational structures as a common place for sets or posets,\nas far as they are considered in this book.\n\nWe will denote $x\\in(\\mathfrak{A},R)$ iff $x\\in\\mathfrak{A}$ for\na relational structure $(\\mathfrak{A},R)$.\n\n\n\\section{Typed elements and typed sets}\n\nWe sometimes want to differentiate between the same element of two\ndifferent sets. For example, we may want to consider different the\nnatural number~$3$ and the rational number~$3$. In order to describe\nthis in a formal way we consider elements of sets together with sets\nthemselves. For example, we can consider the pairs $(\\mathbb{N},3)$\nand $(\\mathbb{Q},3)$.\n\\begin{defn}\n\\index{typed element}\\index{element!typed}A \\emph{typed element}\nis a pair $(\\mathfrak{A},a)$ where $\\mathfrak{A}$ is a relational\nstructure and $a\\in\\mathfrak{A}$.\n\nI denote $\\type(\\mathfrak{A},a)=\\mathfrak{A}$ and $\\GR(\\mathfrak{A},a)=a$.\n\\end{defn}\n\n\\begin{defn}\nI will denote typed element $(\\mathfrak{A},a)$ as $@^{\\mathfrak{A}} a$ or just\n$@a$ when $\\mathfrak{A}$ is clear from context.\n\\end{defn}\n\n\\begin{defn}\n\\index{typed set}\\index{set!typed}A \\emph{typed set} is a typed\nelement equal to $(\\subsets U,A)$ where $U$ is a set and $A$ is\nits subset.\\end{defn}\n\\begin{rem}\n\\emph{Typed sets} is an awkward formalization of type theory sets\nin ZFC ($U$ is meant to express the \\emph{type} of the set). This\nbook could be better written using type theory instead of ZFC, but\nI want my book to be understandable for everyone knowing ZFC. $(\\subsets U,A)$\nshould be understood as a set $A$ of type $U$. 
As an example, consider\n$(\\subsets\\mathbb{R},[0;10])$; it is the closed interval $[0;10]$\nwhose elements are considered as real numbers.\\end{rem}\n\\begin{defn}\n$\\mathfrak{T}\\mathfrak{A}=\\setcond{(\\mathfrak{A},a)}{a\\in\\mathfrak{A}}=\\{\\mathfrak{A}\\}\\times\\mathfrak{A}$\nfor every relational structure~$\\mathfrak{A}$.\\end{defn}\n\\begin{rem}\n$\\mathfrak{T}\\mathfrak{A}$ is the set of typed elements of~$\\mathfrak{A}$.\n\\end{rem}\n\n\\begin{defn}\nIf $\\mathfrak{A}$ is a poset, we introduce order on its typed elements\nisomorphic to the order of the original poset: $(\\mathfrak{A},a)\\sqsubseteq(\\mathfrak{A},b)\\Leftrightarrow a\\sqsubseteq b$.\n\\end{defn}\n\n\\begin{defn}\nI denote $\\GR(\\mathfrak{A},a)=a$ for a typed element~$(\\mathfrak{A},a)$.\n\\end{defn}\n\n\\begin{defn}\nI will denote \\emph{typed subsets} of a typed poset $(\\subsets U,A)$\nas $\\subsets(\\subsets U,A)=\\setcond{(\\subsets U,X)}{X\\in\\subsets A}=\\{\\subsets U\\}\\times\\subsets A$.\\end{defn}\n\\begin{obvious}\n$\\subsets(\\subsets U,A)$ is also a set of typed sets.\n\\end{obvious}\n\n\\begin{defn}\nI will denote $\\mathscr{T}U=\\mathfrak{T}\\subsets U$.\\end{defn}\n\\begin{rem}\nThis means that $\\mathscr{T}U$ is the set of typed subsets of a set~$U$.\\end{rem}\n\\begin{obvious}\n$\\mathscr{T}U=\\setcond{(\\subsets U,X)}{X\\in\\subsets U}=\\{\\subsets U\\}\\times\\subsets U=\\subsets(\\subsets U,U)$.\n\\end{obvious}\n\n\\begin{obvious}\n$\\mathscr{T}U$ is a complete atomistic boolean lattice. Particularly:\n\\begin{enumerate}\n\\item $\\bot^{\\mathscr{T}U}=(\\subsets U,\\emptyset)$;\n\\item $\\top^{\\mathscr{T}U}=(\\subsets U,U)$;\n\\item $(\\subsets U,A)\\sqcup(\\subsets U,B)=(\\subsets U,A\\cup B)$;\n\\item $(\\subsets U,A)\\sqcap(\\subsets U,B)=(\\subsets U,A\\cap B)$;\n\\item $\\bigsqcup_{A\\in S}(\\subsets U,A)=(\\subsets U,\\bigcup_{A\\in S}A)$;\n\\item $\\bigsqcap_{A\\in S}(\\subsets U,A)=\\left(\\subsets U,\\begin{cases}\\bigcap_{A\\in S}A&\\text{ if }S\\ne\\emptyset\\\\U&\\text{ if }S=\\emptyset\\end{cases}\\right)$;\n\\item $\\overline{(\\subsets U,A)}=(\\subsets U,U\\setminus A)$;\n\\item atomic elements are $(\\subsets U,\\{x\\})$ where $x\\in U$.\n\\end{enumerate}\n\\end{obvious}\nTyped sets are ``better'' than regular sets in that (for example) for\na set~$U$ and a typed set~$X$ the following are defined by standard\norder theory:\n\\begin{itemize}\n\\item $\\atoms X$;\n\\item $\\overline{X}$;\n\\item $\\bigsqcap^{\\mathscr{T}U}\\emptyset$.\n\\end{itemize}\nFor regular (``non-typed'') sets these are not defined (except for\n$\\atoms X$, which however needs a special definition instead of using\nthe standard order-theory definition of atoms).\n\nTyped sets are convenient to use together with filters on sets\n(see below), because both typed sets and filters have a set~$\\subsets U$\nas their type.\n\nAnother advantage of typed sets is that their binary product (as defined\nbelow) is a $\\mathbf{Rel}$-morphism. 
This is especially convenient\nbecause the products of filters defined below are also morphisms of related\ncategories.\n\nWell, typed sets are also quite awkward, but the proper way of doing\nmodern mathematics is \\emph{type theory}, not ZFC, which is however\noutside the topic of this book.\n\n\n\\section{\\texorpdfstring{Category $\\mathbf{Rel}$}{Category Rel}}\n\nI remind the reader that $\\mathbf{Rel}$ is the category of (small) binary relations\nbetween sets, and $\\mathbf{Set}$ is its subcategory where only monovalued\nentirely defined morphisms (functions) are considered.\n\\begin{defn}\nOrder on $\\mathbf{Rel}(A,B)$ is defined by the formula $f\\sqsubseteq g\\Leftrightarrow\\GR f\\subseteq\\GR g$.\\end{defn}\n\\begin{obvious}\nThis order is isomorphic to the natural order of subsets of the set\n$A\\times B$.\\end{obvious}\n\\begin{defn}\n$X\\rsuprel fY\\Leftrightarrow\\GR X\\rsuprel{\\GR f}\\GR Y$ and $\\rsupfun fX=(\\subsets\\Dst f,\\rsupfun{\\GR f}\\GR X)$\nfor a $\\mathbf{Rel}$-morphism~$f$ and typed sets $X\\in\\mathscr{T}\\Src f$,\n$Y\\in\\mathscr{T}\\Dst f$.\n\\end{defn}\n\n\\begin{defn}\nFor the category $\\mathbf{Rel}$ the reverse morphism is defined by: $(A,B,F)^{-1}=(B,A,F^{-1})$.\\end{defn}\n\\begin{obvious}\n$(f^{-1})^{-1}=f$ for every $\\mathbf{Rel}$-morphism~$f$.\n\\end{obvious}\n\n\\begin{obvious}\n$\\rsuprel{f^{-1}}=\\rsuprel f^{-1}$ for every $\\mathbf{Rel}$-morphism~$f$.\n\\end{obvious}\n\n\\begin{obvious}\n$(g\\circ f)^{-1}=f^{-1}\\circ g^{-1}$ for all composable $\\mathbf{Rel}$-morphisms\n$f$ and~$g$.\\end{obvious}\n\\begin{prop}\n$\\rsupfun{g\\circ f}=\\rsupfun g\\circ\\rsupfun f$ for all composable\n$\\mathbf{Rel}$-morphisms $f$ and~$g$.\\end{prop}\n\\begin{proof}\nExercise.\\end{proof}\n\\begin{prop}\nThe above definitions of monovalued morphisms of $\\mathbf{Rel}$ and\nof injective morphisms of $\\mathbf{Set}$ coincide with how mathematicians\nusually define monovalued functions (that is morphisms of $\\mathbf{Set}$)\nand injective functions.\\end{prop}\n\\begin{proof}\nLet $f$ be a $\\mathbf{Rel}$-morphism $A\\rightarrow B$.\n\nThe following are equivalent:\n\\begin{itemize}\n\\item $f$ is a monovalued relation;\n\\item $\\forall x\\in A,y_{0},y_{1}\\in B:(x\\mathrel fy_{0}\\land x\\mathrel fy_{1}\\Rightarrow y_{0}=y_{1})$;\n\\item $\\forall x\\in A,y_{0},y_{1}\\in B:(y_{0}\\ne y_{1}\\Rightarrow\\lnot(x\\mathrel fy_{0})\\lor\\lnot(x\\mathrel fy_{1}))$;\n\\item $\\forall y_{0},y_{1}\\in B\\forall x\\in A:(y_{0}\\ne y_{1}\\Rightarrow\\lnot(x\\mathrel fy_{0})\\lor\\lnot(x\\mathrel fy_{1}))$;\n\\item $\\forall y_{0},y_{1}\\in B:(y_{0}\\ne y_{1}\\Rightarrow\\forall x\\in A:(\\lnot(x\\mathrel fy_{0})\\lor\\lnot(x\\mathrel fy_{1})))$;\n\\item $\\forall y_{0},y_{1}\\in B:(\\exists x\\in A:(x\\mathrel fy_{0}\\land x\\mathrel fy_{1})\\Rightarrow y_{0}=y_{1})$;\n\\item $\\forall y_{0},y_{1}\\in B:y_{0}\\mathrel{(f\\circ f^{-1})}y_{1}\\Rightarrow y_{0}=y_{1}$;\n\\item $f\\circ f^{-1}\\sqsubseteq1_{B}$.\n\\end{itemize}\nLet now $f$ be a $\\mathbf{Set}$-morphism $A\\rightarrow B$.\n\nThe following are equivalent:\n\\begin{itemize}\n\\item $f$ is an injective function;\n\\item $\\forall y\\in B,a,b\\in A:(a\\mathrel fy\\land b\\mathrel fy\\Rightarrow a=b)$;\n\\item $\\forall y\\in B,a,b\\in A:(a\\ne b\\Rightarrow\\lnot(a\\mathrel fy)\\lor\\lnot(b\\mathrel fy))$;\n\\item $\\forall a,b\\in A:(a\\ne b\\Rightarrow\\forall y\\in B:(\\lnot(a\\mathrel fy)\\lor\\lnot(b\\mathrel fy)))$;\n\\item $\\forall a,b\\in A:(\\exists y\\in B:(a\\mathrel fy\\land b\\mathrel fy)\\Rightarrow a=b)$;\n\\item 
$f^{-1}\\circ f\\sqsubseteq1_{A}$.\n\\end{itemize}\n\\end{proof}\n\\begin{prop}\nFor a binary relation $f$ we have:\n\\begin{enumerate}\n\\item \\label{br-j}$\\rsupfun f\\bigcup S=\\bigcup\\rsupfun{\\rsupfun f}S$ for\na set of sets $S$;\n\\item \\label{br-j1}$\\bigcup S\\rsuprel fY\\Leftrightarrow\\exists X\\in S:X\\rsuprel fY$\nfor a set of sets $S$;\n\\item \\label{br-j2}$X\\rsuprel f\\bigcup T\\Leftrightarrow\\exists Y\\in T:X\\rsuprel fY$\nfor a set of sets $T$;\n\\item \\label{br-j12}$\\bigcup S\\rsuprel f\\bigcup T\\Leftrightarrow\\exists X\\in S,Y\\in T:X\\rsuprel fY$\nfor sets of sets $S$ and $T$;\n\\item \\label{br-at-r}$X\\rsuprel fY\\Leftrightarrow\\exists\\alpha\\in X,\\beta\\in Y:\\{\\alpha\\}\\rsuprel f\\{\\beta\\}$\nfor sets $X$ and $Y$;\n\\item \\label{br-at-f}$\\rsupfun fX=\\bigcup\\rsupfun{\\rsupfun f}\\atoms X$ for a\nset $X$ (where $\\atoms X=\\setcond{\\{x\\}}{x\\in X}$).\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n~\n\\begin{widedisorder}\n\\item [{\\ref{br-j}}] ~\n\n\\begin{multline*}\ny\\in\\rsupfun f\\bigcup S\\Leftrightarrow\\exists x\\in\\bigcup S:x\\mathrel fy\\Leftrightarrow\\exists P\\in S,x\\in P:x\\mathrel fy\\Leftrightarrow\\\\\n\\exists P\\in S:y\\in\\rsupfun fP\\Leftrightarrow\\exists Q\\in\\rsupfun{\\rsupfun f}S:y\\in Q\\Leftrightarrow y\\in\\bigcup\\rsupfun{\\rsupfun f}S.\n\\end{multline*}\n\n\\item [{\\ref{br-j1}}] \n\\begin{multline*}\n\\bigcup S\\rsuprel fY\\Leftrightarrow\\exists x\\in\\bigcup S,y\\in Y:x\\mathrel fy\\Leftrightarrow\\\\\n\\exists X\\in S,x\\in X,y\\in Y:x\\mathrel fy\\Leftrightarrow\\exists X\\in S:X\\rsuprel fY.\n\\end{multline*}\n\n\\item [{\\ref{br-j2}}] By symmetry.\n\\item [{\\ref{br-j12}}] From two previous formulas.\n\\item [{\\ref{br-at-r}}] $X\\rsuprel fY\\Leftrightarrow\\exists\\alpha\\in X,\\beta\\in Y:\\alpha\\mathrel f\\beta\\Leftrightarrow\\exists\\alpha\\in X,\\beta\\in Y:\\{\\alpha\\}\\rsuprel f\\{\\beta\\}$.\n\\item [\\ref{br-at-f}] Obvious.\n\\end{widedisorder}\n\\end{proof}\n\\begin{cor}\nFor a $\\mathbf{Rel}$-morphism $f$ we have:\n\\begin{enumerate}\n\\item $\\rsupfun f\\bigsqcup S=\\bigsqcup\\rsupfun{\\rsupfun f}S$ for $S\\in\\subsets\\mathscr{T}\\Src f$;\n\\item $\\bigsqcup S\\rsuprel fY\\Leftrightarrow\\exists X\\in S:X\\rsuprel fY$\nfor $S\\in\\subsets\\mathscr{T}\\Src f$;\n\\item $X\\rsuprel f\\bigsqcup T\\Leftrightarrow\\exists Y\\in T:X\\rsuprel fY$\nfor $T\\in\\subsets\\mathscr{T}\\Dst f$;\n\\item $\\bigsqcup S\\rsuprel f\\bigsqcup T\\Leftrightarrow\\exists X\\in S,Y\\in T:X\\rsuprel fY$\nfor $S\\in\\subsets\\mathscr{T}\\Src f$, $T\\in\\subsets\\mathscr{T}\\Dst f$;\n\\item $X\\rsuprel fY\\Leftrightarrow\\exists x\\in\\atoms X,y\\in\\atoms Y:x\\rsuprel fy$\nfor $X\\in\\mathscr{T}\\Src f$, $Y\\in\\mathscr{T}\\Dst f$;\n\\item $\\rsupfun fX=\\bigsqcup\\rsupfun{\\rsupfun f}\\atoms X$ for $X\\in\\mathscr{T}\\Src f$.\n\\end{enumerate}\n\\end{cor}\n\n\\begin{cor}\nA $\\mathbf{Rel}$-morphism $f$ can be restored knowing either $\\rsupfun fx$\nfor atoms $x\\in\\mathscr{T}\\Src f$ or $x\\rsuprel fy$ for atoms $x\\in\\mathscr{T}\\Src f$,\n$y\\in\\mathscr{T}\\Dst f$.\\end{cor}\n\\begin{prop}\nLet $A$, $B$ be sets, $R$ be a set of binary relations.\n\\begin{enumerate}\n\\item \\textbf{\\label{bsr-jf}$\\rsupfun{\\bigcup R}X=\\bigcup_{f\\in R}\\rsupfun fX$}\nfor every set $X$;\n\\item \\label{bsr-mf}$\\rsupfun{\\bigcap R}\\{\\alpha\\}=\\bigcap_{f\\in R}\\rsupfun f\\{\\alpha\\}$\nfor every $\\alpha$, if $R$ is nonempty;\n\\item \\label{bsr-jr}$X\\rsuprel{\\bigcup R}Y\\Leftrightarrow\\exists f\\in R:X\\rsuprel fY$\nfor 
all sets $X$, $Y$;\n\\item \\label{bsr-mr}$\\alpha\\mathrel{\\left(\\bigcap R\\right)}\\beta\\Leftrightarrow\\forall f\\in R:\\alpha\\mathrel f\\beta$\nfor every $\\alpha$ and $\\beta$, if $R$ is nonempty.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n~\n\\begin{widedisorder}\n\\item [{\\ref{bsr-jf}}] ~\n\\begin{multline*}\ny\\in\\rsupfun{\\bigcup R}X\\Leftrightarrow\\exists x\\in X:x\\mathrel{\\left(\\bigcup R\\right)}y\\Leftrightarrow\\exists x\\in X,f\\in R:x\\mathrel fy\\Leftrightarrow\\\\\n\\exists f\\in R:y\\in\\rsupfun fX\\Leftrightarrow y\\in\\bigcup_{f\\in R}\\rsupfun fX.\n\\end{multline*}\n\n\\item [{\\ref{bsr-mf}}] ~ \n\\[\ny\\in\\rsupfun{\\bigcap R}\\{\\alpha\\}\\Leftrightarrow\\forall f\\in R:\\alpha\\mathrel fy\\Leftrightarrow\\forall f\\in R:y\\in\\rsupfun f\\{\\alpha\\}\\Leftrightarrow y\\in\\bigcap_{f\\in R}\\rsupfun f\\{\\alpha\\}.\n\\]\n\n\\item [{\\ref{bsr-jr}}] ~\n\\begin{multline*}\nX\\rsuprel{\\bigcup R}Y\\Leftrightarrow\\exists x\\in X,y\\in Y:x\\mathrel{\\left(\\bigcup R\\right)}y\\Leftrightarrow\\\\\n\\exists x\\in X,y\\in Y,f\\in R:x\\mathrel fy\\Leftrightarrow\\exists f\\in R:X\\rsuprel fY.\n\\end{multline*}\n\n\\item [{\\ref{bsr-mr}}] Obvious.\n\\end{widedisorder}\n\\end{proof}\n\\begin{cor}\nLet $A$, $B$ be sets, $R\\in\\subsets\\mathbf{Rel}(A,B)$.\n\\begin{enumerate}\n\\item \\textbf{$\\rsupfun{\\bigsqcup R}X=\\bigsqcup_{f\\in R}\\rsupfun fX$} for\n$X\\in\\mathscr{T}A$;\n\\item $\\rsupfun{\\bigsqcap R}x=\\bigsqcap_{f\\in R}\\rsupfun fx$ for atomic\n$x\\in\\mathscr{T}A$;\n\\item $X\\rsuprel{\\bigsqcup R}Y\\Leftrightarrow\\exists f\\in R:X\\rsuprel fY$\nfor $X\\in\\mathscr{T}A$, $Y\\in\\mathscr{T}B$;\n\\item $x\\rsuprel{\\bigsqcap R}y\\Leftrightarrow\\forall f\\in R:x\\rsuprel fy$\nfor every atomic $x\\in\\mathscr{T}A$, $y\\in\\mathscr{T}B$.\n\\end{enumerate}\n\\end{cor}\n\\begin{prop}\n$X\\rsuprel{g\\circ f}Z\\Leftrightarrow\\exists\\beta:(X\\rsuprel f\\{\\beta\\}\\land\\{\\beta\\}\\rsuprel gZ)$\nfor all binary relations~$f$, $g$ and sets $X$ and $Z$.\\end{prop}\n\\begin{proof}\n~\n\\begin{multline*}\nX\\rsuprel{g\\circ f}Z\\Leftrightarrow\\exists x\\in X,z\\in Z:x\\mathrel{(g\\circ f)}z\\Leftrightarrow\\\\\n\\exists x\\in X,z\\in Z,\\beta:(x\\mathrel f\\beta\\land\\beta\\mathrel gz)\\Leftrightarrow\\\\\n\\exists\\beta:(\\exists x\\in X:x\\mathrel f\\beta\\land\\exists z\\in Z:\\beta\\mathrel gz)\\Leftrightarrow\\exists\\beta:(X\\rsuprel f\\{\\beta\\}\\land\\{\\beta\\}\\rsuprel gZ).\n\\end{multline*}\n\\end{proof}\n\\begin{cor}\n$X\\rsuprel{g\\circ f}Z\\Leftrightarrow\\exists y\\in\\atoms^{\\mathscr{T}B}:(X\\rsuprel fy\\land y\\rsuprel gZ)$\nfor $f\\in\\mathbf{Rel}(A,B)$, $g\\in\\mathbf{Rel}(B,C)$ (for sets $A$,\n$B$, $C$).\\end{cor}\n\\begin{prop}\n$f\\circ\\bigcup G=\\bigcup_{g\\in G}(f\\circ g)$ and $\\bigcup G\\circ f=\\bigcup_{g\\in G}(g\\circ f)$\nfor every binary relation~$f$ and set~$G$ of binary relations.\\end{prop}\n\\begin{proof}\nWe will prove only $\\bigcup G\\circ f=\\bigcup_{g\\in G}(g\\circ f)$\nas the other formula follows from duality. 
Really,\n\n\\begin{multline*}\n(x,z)\\in\\bigcup G\\circ f\\Leftrightarrow\\exists y:((x,y)\\in f\\land(y,z)\\in\\bigcup G)\\Leftrightarrow\\\\\n\\exists y,g\\in G:((x,y)\\in f\\land(y,z)\\in g)\\Leftrightarrow\\exists g\\in G:(x,z)\\in g\\circ f\\Leftrightarrow(x,z)\\in\\bigcup_{g\\in G}(g\\circ f).\n\\end{multline*}\n\\end{proof}\n\\begin{cor}\nEvery $\\mathbf{Rel}$-morphism is metacomplete and co-metacomplete.\\end{cor}\n\\begin{prop}\n\\label{rel-mono}The following are equivalent for a $\\mathbf{Rel}$-morphism~$f$:\n\\begin{enumerate}\n\\item \\label{rel-mono-mono}$f$ is monovalued.\n\\item \\label{rel-mmv}$f$ is metamonovalued.\n\\item \\label{rel-wmmv}$f$ is weakly metamonovalued.\n\\item \\label{rel-mono-atom}$\\rsupfun fa$ is either atomic or least whenever\n$a\\in\\atoms^{\\mathscr{T}\\Src f}$.\n\\item \\label{rel-mono-bin}$\\rsupfun{f^{-1}}(I\\sqcap J)=\\rsupfun{f^{-1}}I\\sqcap\\rsupfun{f^{-1}}J$\nfor every $I,J\\in\\mathscr{T}\\Src f$.\n\\item \\label{rel-mono-meet}$\\rsupfun{f^{-1}}\\bigsqcap S=\\bigsqcap_{Y\\in S}\\rsupfun{f^{-1}}Y$\nfor every $S\\in\\subsets\\mathscr{T}\\Src f$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n~\n\\begin{description}\n\\item [{\\ref{rel-mmv}$\\Rightarrow$\\ref{rel-wmmv}}] Obvious.\n\\item [{\\ref{rel-mono-mono}$\\Rightarrow$\\ref{rel-mmv}}] Take $x\\in\\atoms^{\\mathscr{T}\\Src f}$;\nthen $\\rsupfun fx\\in\\atoms^{\\mathscr{T}\\Dst f}\\cup\\{\\bot^{\\mathscr{T}\\Dst f}\\}$ and thus \n\\begin{multline*}\n\\rsupfun{\\left(\\bigsqcap G\\right)\\circ f}x=\\rsupfun{\\bigsqcap G}\\rsupfun fx=\\bigsqcap_{g\\in G}\\rsupfun g\\rsupfun fx=\\\\\n\\bigsqcap_{g\\in G}\\rsupfun{g\\circ f}x=\\rsupfun{\\bigsqcap_{g\\in G}(g\\circ f)}x;\n\\end{multline*}\nso $\\left(\\bigsqcap G\\right)\\circ f=\\bigsqcap_{g\\in G}(g\\circ f)$.\n\\item [{\\ref{rel-wmmv}$\\Rightarrow$\\ref{rel-mono-mono}}] Take $g=\\{(a,y)\\}$\nand $h=\\{(b,y)\\}$ for arbitrary $a\\ne b$ and arbitrary~$y$. We\nhave $g\\cap h=\\emptyset$; thus $(g\\circ f)\\cap(h\\circ f)=(g\\cap h)\\circ f=\\bot$\nand thus $x\\mathrel fa\\land x\\mathrel fb$ is impossible, as otherwise\n$(x,y)\\in(g\\circ f)\\cap(h\\circ f)$. Thus $f$ is monovalued.\n\\item [{\\ref{rel-mono-atom}$\\Rightarrow$\\ref{rel-mono-meet}}] Let $a\\in\\atoms^{\\mathscr{T}\\Src f}$,\n$\\rsupfun fa=b$. 
Then because $b\in\atoms^{\mathscr{T}\Dst f}\cup\{\bot^{\mathscr{T}\Dst f}\}$\n\begin{gather*}\n\bigsqcap S\sqcap b\ne\bot\Leftrightarrow\forall Y\in S:Y\sqcap b\ne\bot;\\\na\rsuprel f\bigsqcap S\Leftrightarrow\forall Y\in S:a\rsuprel fY;\\\n\bigsqcap S\rsuprel{f^{-1}}a\Leftrightarrow\forall Y\in S:Y\rsuprel{f^{-1}}a;\\\na\nasymp\rsupfun{f^{-1}}\bigsqcap S\Leftrightarrow\forall Y\in S:a\nasymp\rsupfun{f^{-1}}Y;\\\na\nasymp\rsupfun{f^{-1}}\bigsqcap S\Leftrightarrow a\nasymp\bigsqcap_{Y\in S}\rsupfun{f^{-1}}Y;\\\n\rsupfun{f^{-1}}\bigsqcap S=\bigsqcap_{Y\in S}\rsupfun{f^{-1}}Y.\n\end{gather*}\n\n\item [{\ref{rel-mono-meet}$\Rightarrow$\ref{rel-mono-bin}}] Obvious.\n\item [{\ref{rel-mono-bin}$\Rightarrow$\ref{rel-mono-mono}}]\n$\rsupfun{f^{-1}}a\sqcap\rsupfun{f^{-1}}b=\rsupfun{f^{-1}}(a\sqcap b)=\rsupfun{f^{-1}}\bot=\bot$\nfor every two distinct atoms $a=\{\alpha\},b=\{\beta\}\in\mathscr{T}\Dst f$.\nFrom this \n\begin{multline*}\n\alpha\mathrel{(f\circ f^{-1})}\beta\Leftrightarrow\exists y\in\Dst f:(\alpha\mathrel{f^{-1}}y\land y\mathrel{f}\beta)\Leftrightarrow\\\n\exists y\in\Dst f:(y\in\rsupfun{f^{-1}}a\land y\in\rsupfun{f^{-1}}b)\n\end{multline*}\n is impossible. Thus $f\circ f^{-1}\sqsubseteq1_{\Dst f}^{\mathbf{Rel}}$.\n\item [{$\lnot$\ref{rel-mono-atom}$\Rightarrow\lnot$\ref{rel-mono-mono}}] Suppose\n$\rsupfun fa\notin\atoms^{\mathscr{T}\Dst f}\cup\{\bot^{\mathscr{T}\Dst f}\}$\nfor some $a\in\atoms^{\mathscr{T}\Src f}$. Then there exist distinct\npoints $p$, $q$ such that $p,q\in\rsupfun fa$. Thus $p\mathrel{(f\circ f^{-1})}q$\nand so $f\circ f^{-1}\nsqsubseteq1_{\Dst f}^{\mathbf{Rel}}$.\n\end{description}\n\end{proof}\n\n\section{Product of typed sets}\n\begin{defn}\nProduct of typed sets is defined by the formula\n\[\n(\subsets U,A)\times(\subsets W,B)=(U,W,A\times B).\n\]\n\end{defn}\n\begin{prop}\nProduct of typed sets is a $\mathbf{Rel}$-morphism.\end{prop}\n\begin{proof}\nWe need to prove $A\times B\subseteq U\times W$, but this is obvious.\end{proof}\n\begin{obvious}\nAtoms of $\mathbf{Rel}(A,B)$ are exactly products $a\times b$ where\n$a$ and $b$ are atoms of $\mathscr{T}A$ and $\mathscr{T}B$ respectively.\n$\mathbf{Rel}(A,B)$ is an atomistic poset.\end{obvious}\n\begin{prop}\n$f\nasymp A\times B\Leftrightarrow A\rsuprel fB$ for every $\mathbf{Rel}$-morphism~$f$\nand $A\in\mathscr{T}\Src f$, $B\in\mathscr{T}\Dst f$.\end{prop}\n\begin{proof}\n~\n\begin{multline*}\nA\rsuprel fB\Leftrightarrow\exists x\in\atoms A,y\in\atoms B:x\rsuprel fy\Leftrightarrow\\\n\exists x\in\atoms^{\mathscr{T}\Src f},y\in\atoms^{\mathscr{T}\Dst f}:(x\times y\sqsubseteq f\land x\times y\sqsubseteq A\times B)\Leftrightarrow f\nasymp A\times B.\n\end{multline*}\n\end{proof}\n\begin{defn}\n\emph{Image} and \emph{domain} of a $\mathbf{Rel}$-morphism~$f$\nare typed sets defined by the formulas \n\[\n\dom(U,W,f)=(\subsets U,\dom f)\quad\text{and}\quad\im(U,W,f)=(\subsets W,\im f).\n\]\n\end{defn}\n\begin{obvious}\nImage and domain of a $\mathbf{Rel}$-morphism are indeed typed sets.\end{obvious}\n\begin{defn}\n\emph{Restriction} of a $\mathbf{Rel}$-morphism to a typed set is\ndefined by the formula $(U,W,f)|_{(\subsets U,X)}=(U,W,f|_{X})$.\end{defn}\n\begin{obvious}\nRestriction of a $\mathbf{Rel}$-morphism is a 
$\mathbf{Rel}$-morphism.\n\end{obvious}\n\n\begin{obvious}\n$f|_{A}=f\sqcap(A\times\top^{\mathscr{T}\Dst f})$ for every $\mathbf{Rel}$-morphism~$f$\nand $A\in\mathscr{T}\Src f$.\n\end{obvious}\n\n\begin{obvious}\n$\rsupfun fX=\rsupfun f(X\sqcap\dom f)=\im(f|_{X})$ for every $\mathbf{Rel}$-morphism~$f$\nand $X\in\mathscr{T}\Src f$.\n\end{obvious}\n\n\begin{obvious}\n$f\sqsubseteq A\times B\Leftrightarrow\dom f\sqsubseteq A\land\im f\sqsubseteq B$\nfor every $\mathbf{Rel}$-morphism~$f$ and $A\in\mathscr{T}\Src f$,\n$B\in\mathscr{T}\Dst f$.\end{obvious}\n\begin{thm}\nLet $A$, $B$ be sets. If $S\in\subsets(\mathscr{T}A\times\mathscr{T}B)$\nthen\n\[\n\bigsqcap_{(X,Y)\in S}(X\times Y)=\bigsqcap\dom S\times\bigsqcap\im S.\n\]\n\end{thm}\n\begin{proof}\nFor every atomic $x\in\mathscr{T}A$, $y\in\mathscr{T}B$ we have\n\begin{multline*}\nx\times y\sqsubseteq\bigsqcap_{(X,Y)\in S}(X\times Y)\Leftrightarrow\forall(X,Y)\in S:x\times y\sqsubseteq X\times Y\Leftrightarrow\\\n\forall(X,Y)\in S:(x\sqsubseteq X\land y\sqsubseteq Y)\Leftrightarrow\forall X\in\dom S:x\sqsubseteq X\land\forall Y\in\im S:y\sqsubseteq Y\Leftrightarrow\\\nx\sqsubseteq\bigsqcap\dom S\land y\sqsubseteq\bigsqcap\im S\Leftrightarrow x\times y\sqsubseteq\bigsqcap\dom S\times\bigsqcap\im S.\n\end{multline*}\n\end{proof}\n\begin{obvious}\nIf $U$, $W$ are sets and $A\in\mathscr{T}(U)$ then $A\times$ (that is,\nthe map $B\mapsto A\times B$) is a complete homomorphism from the lattice\n$\mathscr{T}(W)$ to the lattice $\mathbf{Rel}(U,W)$; if also $A\ne\bot$,\nthen it is an order embedding.\end{obvious}\n\n", "meta": {"hexsha": "65380615bff194710df250230f1c46c06eeff098", "size": 20891, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-rel.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-rel.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-rel.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.830472103, "max_line_length": 191, "alphanum_fraction": 0.6979560576, "num_tokens": 8176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5872059597036067}}
{"text": "\\chapter{Substitution Commands}\\index{Substitution}\nAn important class of commands in {\\REDUCE} define\nsubstitutions for variables and expressions to be made during the\nevaluation of expressions.  Such substitutions use the prefix operator\n{\\tt SUB}, various forms of the command {\\tt LET}, and rule sets.\n\n\\section{SUB Operator}\\ttindex{SUB}\n\nSyntax:\n\\begin{verbatim}\n        SUB(<substitution_list>,EXPRN1:algebraic):algebraic\n\\end{verbatim}\nwhere {\\tt <substitution\\_list>} is a list of one or more equations of the\nform\n\\begin{verbatim}\n        VAR:kernel=EXPRN:algebraic\n\\end{verbatim}\nor a kernel that evaluates to such a list.\n\nThe {\\tt SUB} operator gives the algebraic result of replacing every\noccurrence of the variable {\\tt VAR} in the expression {\\tt EXPRN1} by the\nexpression {\\tt EXPRN}.  Specifically, {\\tt EXPRN1} is first evaluated\nusing all available rules.  Next the substitutions are made, and finally\nthe substituted expression is reevaluated.  When more than one variable\noccurs in the substitution list, the substitution is performed by\nrecursively walking down the tree representing {\\tt EXPRN1}, and replacing\nevery {\\tt VAR} found by the appropriate {\\tt EXPRN}.  The {\\tt EXPRN} are\nnot themselves searched for any occurrences of the various {\\tt VAR}s.\nThe trivial case {\\tt SUB(EXPRN1)} returns the algebraic value of\n{\\tt EXPRN1}.\n\n{\\it Examples:}\n\\begin{verbatim}\n\t\t\t\t    2              2\n     sub({x=a+y,y=y+1},x^2+y^2) -> A  + 2*A*Y + 2*Y  + 2*Y + 1\n\\end{verbatim}\nand with {\\tt s := \\{x=a+y,y=y+1\\}},\n\\begin{verbatim}\n\t\t\t\t    2              2\n     sub(s,x^2+y^2)             -> A  + 2*A*Y + 2*Y  + 2*Y + 1\n\\end{verbatim}\n\nNote that the global assignments {\\tt x:=a+y}, etc., do not take place.\n\n{\\tt EXPRN1} can be any valid algebraic expression whose type is such that\na substitution process is defined for it (e.g., scalar expressions, lists\nand matrices).  An error will occur if an expression of an invalid type\nfor substitution occurs either in {\\tt EXPRN} or {\\tt EXPRN1}.\n\nThe braces around the substitution list may also be omitted, as in:\n\n\\begin{verbatim}\n\t\t\t\t    2              2\n     sub(x=a+y,y=y+1,x^2+y^2)   -> A  + 2*A*Y + 2*Y  + 2*Y + 1\n\\end{verbatim}\n\n\\section{LET Rules}\\ttindex{LET}\nUnlike substitutions introduced via {\\tt SUB}, {\\tt LET}\nrules are global in scope and stay in effect until replaced or {\\tt CLEAR}ed.\n\nThe simplest use of the {\\tt LET} statement is in the form\n\\begin{verbatim}\n\tLET <substitution list>\n\\end{verbatim}\nwhere {\\tt <substitution list>} is a list of rules separated by commas, each\nof the form:\n\\begin{verbatim}\n\t<variable> = <expression>\n\\end{verbatim}\nor\n\\begin{verbatim}\n    <prefix operator>(<argument>,...,<argument>) = <expression>\n\\end{verbatim}\nor\n\\begin{verbatim}\n    <argument> <infix operator>,..., <argument> = <expression>\n\\end{verbatim}\nFor example,\n\\begin{verbatim}\n\tlet {x => y^2,\n\t     h(u,v) => u - v,\n\t     cos(pi/3) => 1/2,\n\t     a*b => c,\n\t     l+m => n,\n\t     w^3 => 2*z - 3,\n\t     z^10 => 0}\n\\end{verbatim}\nThe list brackets can be left out if preferred.  The above rules could\nalso have been entered as seven separate {\\tt LET} statements.\n\nAfter such {\\tt LET} rules have been input, {\\tt X} will always be\nevaluated as the square of {\\tt Y}, and so on.  
This is so even if at the\ntime the {\\tt LET} rule was input, the variable {\\tt Y} had a value other\nthan {\\tt Y}. (In contrast, the assignment {\\tt x:=y\\verb|^|2} will set {\\tt X}\nequal to the square of the current value of {\\tt Y}, which could be quite\ndifferent.)\n\nThe rule {\\tt let a*b=c} means that whenever {\\tt A} and {\\tt B} are both\nfactors in an expression their product will be replaced by {\\tt C}.  For\nexample, {\\tt a\\verb|^|5*b\\verb|^|7*w} would be replaced by\n{\\tt c\\verb|^|5*b\\verb|^|2*w}.\n\nThe rule for {\\tt l+m} will not only replace all occurrences of {\\tt l+m}\nby {\\tt N}, but will also normally replace {\\tt L} by {\\tt n-m}, but not\n{\\tt M} by {\\tt n-l}.  A more complete description of this case is given\nin Section~\\ref{sec-gensubs}.\n\nThe rule pertaining to {\\tt w\\verb|^|3} will apply to any power of {\\tt W}\ngreater than or equal to the third.\n\nNote especially the last example, {\\tt let z\\verb|^|10=0}.  This declaration\nmeans, in effect: ignore the tenth or any higher power of {\\tt Z}.  Such\ndeclarations, when appropriate, often speed up a computation to a\nconsiderable degree. (See\\index{Asymptotic command}\nSection~\\ref{sec-asymp} for more details.)\n\nAny new operators occurring in such {\\tt LET} rules will be automatically\ndeclared {\\tt OPERATOR} by the system, if the rules are being read from a\nfile.  If they are being entered interactively, the system will ask\n{\\tt DECLARE} ... {\\tt OPERATOR?} .  Answer {\\tt Y} or {\\tt N} and hit\n\\key{Return}.\n\nIn each of these examples, substitutions are only made for the explicit\nexpressions given; i.e., none of the variables may be considered arbitrary\nin any sense. For example, the command\n\\begin{verbatim}\n        let h(u,v) = u - v;\n\\end{verbatim}\nwill cause {\\tt h(u,v)} to evaluate to {\\tt U - V}, but will not affect\n{\\tt h(u,z)} or {\\tt H} with any arguments other than precisely the\nsymbols {\\tt U,V}.\n\nThese simple {\\tt LET} rules are on the same logical level as assignments\nmade with the := operator.  An assignment {\\tt x := p+q} cancels a rule\n{\\tt let x = y\\verb|^|2} made earlier, and vice versa.\n\n{\\it CAUTION:} A recursive rule such as\n\\begin{verbatim}\n        let x = x + 1;\n\\end{verbatim}\nis erroneous, since any subsequent evaluation of {\\tt X} would lead to a\nnon-terminating chain of substitutions:\n\\begin{verbatim}\n      x -> x + 1 -> (x + 1) + 1 -> ((x + 1) + 1) + 1 -> ...\n\\end{verbatim}\nSimilarly, coupled substitutions such as\n\\begin{verbatim}\n        let l = m + n, n = l + r;\n\\end{verbatim}\nwould lead to the same error. As a result, if you try to evaluate an {\\tt X},\n{\\tt L} or {\\tt N} defined as above, you will get an error such as\n\\begin{verbatim}\n        X improperly defined in terms of itself\n\\end{verbatim}\n\nArray and matrix elements can appear on the left-hand side of a {\\tt LET}\nstatement. However, because of their {\\em instant evaluation\\/}\n\\index{Instant evaluation} property, it is the value of the element that\nis substituted for, rather than the element itself.  
E.g.,\n\begin{verbatim}\n        array a(5);\n        a(2) := b;\n        let a(2) = c;\n\end{verbatim}\nresults in {\tt B} being substituted by {\tt C}; the assignment for\n{\tt a(2)} does not change.\n\nFinally, if an error occurs in any equation in a {\tt LET} statement\n(including generalized statements involving {\tt FOR ALL} and {\tt SUCH\nTHAT}), the remaining rules are not evaluated.\n\n\subsection{FOR ALL \ldots LET}\ttindex{FOR ALL}\nIf a substitution for all possible values of a given argument of an\noperator is required, the declaration {\tt FOR ALL} may be used. The\nsyntax of such a command is\n\begin{verbatim}\n        FOR ALL <variable>,...,<variable>\n                <LET statement> <terminator>\n\end{verbatim}\ne.g.,\n\begin{verbatim}\n        for all x,y let h(x,y) = x-y;\n        for all x let k(x,y) = x^y;\n\end{verbatim}\nThe first of these declarations would cause {\tt h(a,b)} to be evaluated\nas {\tt A-B}, {\tt h(u+v,u+w)} to be {\tt V-W}, etc.  If the operator\nsymbol {\tt H} is used with more or fewer than two arguments, the\n{\tt LET} would have no effect, and no error would result.\n\nThe second declaration would cause {\tt k(a,y)} to be evaluated as\n{\tt a\verb|^|y}, but would have no effect on {\tt k(a,z)} since the rule\ndidn't say {\tt FOR ALL Y} ... .\n\nWhere we used {\tt X} and {\tt Y} in the examples, any variables could\nhave been used.  This use of a variable doesn't affect the value it may\nhave outside the {\tt LET} statement.  However, you should remember what\nvariables you actually used.  If you want to delete the rule subsequently,\nyou must use the same variables in the {\tt CLEAR} command.\n\nIt is possible to use more complicated expressions as a template for a\n{\tt LET} statement, as explained in the section on substitutions for\ngeneral expressions.  In nearly all cases, the rule will be accepted, and\na consistent application made by the system.  However, if there is a sole\nconstant or a sole free variable on the left-hand side of a rule (e.g.,\n{\tt let 2=3} or {\tt for all x let x=2}), then the system is unable to\nhandle the rule, and the error message\n\begin{verbatim}\n        Substitution for ... not allowed\n\end{verbatim}\nwill be issued.  Any variable listed in the {\tt FOR ALL} part will have\nits symbol preceded by an equal sign: {\tt X} in the above example will\nappear as {\tt =X}.  An error will also occur if a variable in the\n{\tt FOR ALL} part is not properly matched on both sides of the {\tt LET}\nequation.\n\n\subsection{FOR ALL \ldots SUCH THAT \ldots LET}\n\ttindex{FOR ALL}\ttindex{SUCH THAT}\n\nIf a substitution is desired for more than a single value of a variable in\nan operator or other expression, but not all values, a conditional form of\nthe {\tt FOR ALL \ldots LET} declaration can be used.\n\n{\it Example:}\n\begin{verbatim}\n        for all x such that numberp x and x<0 let h(x)=0;\n\end{verbatim}\nwill cause {\tt h(-5)} to be evaluated as 0, but {\tt H} of a positive\ninteger, or of an argument that is not an integer at all, would not be\naffected.  
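For instance, with the rule above in effect, an interactive session might show (an illustrative sketch; calls that do not satisfy the condition are simply left unevaluated):\n\begin{verbatim}\n        h(-5);   ->  0\n        h(5);    ->  H(5)\n        h(a);    ->  H(A)\n\end{verbatim}\n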
Any boolean expression can follow the {\tt SUCH THAT} keywords.\n\n\subsection{Removing Assignments and Substitution Rules}\ttindex{CLEAR}\n\nThe user may remove all assignments and substitution rules from any\nexpression by the command {\tt CLEAR}, in the form\n\begin{verbatim}\n        CLEAR <expression>,...,<expression><terminator>\n\end{verbatim}\ne.g.,\n\begin{verbatim}\n        clear x, h(x,y);\n\end{verbatim}\nBecause of their {\em instant evaluation\/} property, array and matrix elements\ncannot be cleared with {\tt CLEAR}.  For example, if {\tt A} is an array,\nyou must say\n\begin{verbatim}\n        a(3) := 0;\n\end{verbatim}\nrather than\n\begin{verbatim}\n        clear a(3);\n\end{verbatim}\nto ``clear'' element {\tt a(3)}.\n\nOn the other hand, a whole array (or matrix) {\tt A} can be cleared by the\ncommand {\tt clear a};  This means much more than resetting to 0 all the\nelements of {\tt A}.  The fact that {\tt A} is an array, and what its\ndimensions are, are forgotten, so {\tt A} can be redefined as another type\nof object, for example an operator.\n\nThe more general types of {\tt LET} declarations can also be deleted by\nusing {\tt CLEAR}.  Simply repeat the {\tt LET} rule to be deleted, using\n{\tt CLEAR} in place of {\tt LET}, and omitting the equal sign and\nright-hand part.  The same dummy variables must be used in the {\tt FOR\nALL} part, and the boolean expression in the {\tt SUCH THAT} part must be\nwritten the same way. (The placing of blanks doesn't have to be\nidentical.)\n\n{\it Example:} The {\tt LET} rule\n\begin{verbatim}\n        for all x such that numberp x and x<0 let h(x)=0;\n\end{verbatim}\ncan be erased by the command\n\begin{verbatim}\n        for all x such that numberp x and x<0 clear h(x);\n\end{verbatim}\n\n\subsection{Overlapping LET Rules}\n{\tt CLEAR} is not the only way to delete a {\tt LET} rule.  A new {\tt\nLET} rule identical to the first, but with a different expression after\nthe equal sign, replaces the first.  Replacements are also made in other\ncases where the existing rule would be in conflict with the new rule.  For\nexample, a rule for {\tt x\verb|^|4} would replace a rule for {\tt x\verb|^|5}.\nThe user should however be cautioned against having several {\tt LET}\nrules in effect that relate to the same expression.  No guarantee can be\ngiven as to which rules will be applied by {\REDUCE} or in what order.  It\nis best to {\tt CLEAR} an old rule before entering a new related {\tt LET}\nrule.\n\n\subsection{Substitutions for General Expressions}\n\label{sec-gensubs}\nThe examples of substitutions discussed in other sections have involved\nvery simple rules. However, the substitution mechanism used in {\REDUCE} is\nvery general, and can handle arbitrarily complicated rules without\ndifficulty.\n\nThe general substitution mechanism used in {\REDUCE} is discussed in Hearn, A.\nC., ``{\REDUCE}, A User-Oriented Interactive System for Algebraic\nSimplification,'' Interactive Systems for Experimental Applied Mathematics,\n(edited by M. Klerer and J. Reinfelds), Academic Press, New York (1968),\n79-90, and Hearn, A. C., ``The Problem of Substitution,'' Proc. 1968 Summer\nInstitute on Symbolic Mathematical Computation, IBM Programming Laboratory\nReport FSC 69-0312 (1969). For the reasons given in these\nreferences, {\REDUCE} does not attempt to implement a general pattern\nmatching algorithm.  
However, the present system uses far more sophisticated\ntechniques than those discussed in the above papers.  It is now possible for\nthe rules appearing in arguments of {\\tt LET} to have the form\n\\begin{verbatim}\n        <substitution expression> = <expression>\n\\end{verbatim}\nwhere any rule to which a sensible meaning can be assigned is permitted.\nHowever, this meaning can vary according to the form of {\\tt <substitution\nexpression>}. The semantic rules associated with the application of the\nsubstitution are completely consistent, but somewhat complicated by the\npragmatic need to perform such substitutions as efficiently as possible.\nThe following rules explain how the majority of the cases are handled.\n\nTo begin with, the {\\tt <substitution expression>} is first partly\nsimplified by collecting like terms and putting identifiers (and kernels)\nin the system order.  However, no substitutions are performed on any part\nof the expression with the exception of expressions with the {\\em instant\nevaluation\\/} property, such as array and matrix elements, whose actual\nvalues are used.  It should also be noted that the system order used is\nnot changeable by the user, even with the {\\tt KORDER} command.  Specific\ncases are then handled as follows:\n\\begin{enumerate}\n\\item If the resulting simplified rule has a left-hand side that is an\nidentifier, an expression with a top-level algebraic operator or a power,\nthen the rule is added without further change to the appropriate table.\n\n\\item If the operator * appears at the top level of the simplified left-hand\nside, then any constant arguments in that expression are moved to the\nright-hand side of the rule.  The remaining left-hand side is then added\nto the appropriate table.  For example,\n\\begin{verbatim}\n        let 2*x*y=3\n\\end{verbatim}\nbecomes\n\\begin{verbatim}\n        let x*y=3/2\n\\end{verbatim}\nso that {\\tt x*y} is added to the product substitution table, and when\nthis rule is applied, the expression {\\tt x*y} becomes 3/2, but {\\tt X} or\n{\\tt Y} by themselves are not replaced.\n\n\\item If the operators {\\tt +}, {\\tt -} or {\\tt /} appear at the top level\nof the simplified left-hand side, all but the first term is moved to the\nright-hand side of the rule.  Thus the rules\n\\begin{verbatim}\n        let l+m=n, x/2=y, a-b=c\n\\end{verbatim}\nbecome\n\\begin{verbatim}\n        let l=n-m, x=2*y, a=c+b.\n\\end{verbatim}\n\\end{enumerate}\nOne problem that can occur in this case is that if a quantified expression\nis moved to the right-hand side, a given free variable might no longer\nappear on the left-hand side, resulting in an error because of the\nunmatched free variable. E.g.,\n\\begin{verbatim}\n        for all x,y let f(x)+f(y)=x*y\n\\end{verbatim}\nwould become\n\\begin{verbatim}\n        for all x,y let f(x)=x*y-f(y)\n\\end{verbatim}\nwhich no longer has {\\tt Y} on both sides.\n\nThe fact that array and matrix elements are evaluated in the left-hand side\nof rules can lead to confusion at times. Consider for example the\nstatements\n\\begin{verbatim}\n        array a(5); let x+a(2)=3; let a(3)=4;\n\\end{verbatim}\nThe left-hand side of the first rule will become {\\tt X}, and the second\n0.  Thus the first rule will be instantiated as a substitution for\n{\\tt X}, and the second will result in an error.\n\nThe order in which a list of rules is applied is not easily understandable\nwithout a detailed knowledge of the system simplification protocol. 
It is\nalso possible for this order to change from release to release, as improved\nsubstitution techniques are implemented. Users should therefore assume\nthat the order of application of rules is arbitrary, and program\naccordingly.\n\nAfter a substitution has been made, the expression being evaluated is\nreexamined in case a new allowed substitution has been generated. This\nprocess is continued until no more substitutions can be made.\n\nAs mentioned elsewhere, when a substitution expression appears in a\nproduct, the substitution is made if that expression divides the product.\nFor example, the rule\n\\begin{verbatim}\n\tlet a^2*c = 3*z;\n\\end{verbatim}\nwould cause {\\tt a\\verb|^|2*c*x} to be replaced by {\\tt 3*Z*X} and\n{\\tt a\\verb|^|2*c\\verb|^|2} by {\\tt 3*Z*C}.  If the substitution is desired only\nwhen the substitution expression appears in a product with the explicit\npowers supplied in the rule, the command {\\tt MATCH} should be used\ninstead.\\ttindex{MATCH}\n\nFor example,\n\\begin{verbatim}\n        match a^2*c = 3*z;\n\\end{verbatim}\nwould cause {\\tt a\\verb|^|2*c*x} to be replaced by {\\tt 3*Z*X}, but\n{\\tt a\\verb|^|2*c\\verb|^|2} would not be replaced. {\\tt MATCH} can also be used\nwith the {\\tt FOR ALL} constructions described above.\n\nTo remove substitution rules of the type discussed in this section, the\n{\\tt CLEAR}\\ttindex{CLEAR} command can be used, combined, if necessary,\nwith the same {\\tt FOR ALL} clause with which the rule was defined, for\nexample:\n\\begin{verbatim}\n        for all x clear log(e^x),e^log(x),cos(w*t+theta(x));\n\\end{verbatim}\nNote, however, that the arbitrary variable names in this case {\\em must\\/}\nbe the same as those used in defining the substitution.\n\n\\section{Rule Lists} \\index{Rule lists}\n\nRule lists offer an alternative approach to defining substitutions that is\ndifferent from either {\\tt SUB} or {\\tt LET}.  In fact, they provide the\nbest features of both, since they have all the capabilities of {\\tt LET},\nbut the rules can also be applied locally as is possible with {\\tt SUB}.\nIn time, they will be used more and more in {\\REDUCE}.  However, since they\nare relatively new, much of the {\\REDUCE} code you see uses the older\nconstructs.\n\nA rule list is a list of {\\em rules\\/} that have the syntax\n\\begin{verbatim}\n     <expression> => <expression> (WHEN <boolean expression>)\n\\end{verbatim}\nFor example,\n\\begin{verbatim}\n\t{cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,\n\t cos(~n*pi)      => (-1)^n when remainder(n,2)=0}\n\\end{verbatim}\n\nThe tilde preceding a variable marks that variable as {\\em free\\/} for that\nrule, much as a variable in a {\\tt FOR ALL} clause in a {\\tt LET}\nstatement.  The first occurrence of that variable in each relevant rule\nmust be so marked on input, otherwise inconsistent results can occur.\nFor example, the rule list\n\\begin{verbatim}\n        {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,\n\t cos(x)^2        => (1+cos(2x))/2}\n\\end{verbatim}\ndesigned to replace products of cosines, would not be correct, since the\nsecond rule would only apply to the explicit argument {\\tt X}.  Later\noccurrences in the same rule may also be marked, but this is optional\n(internally, all such rules are stored with each relevant variable\nexplicitly marked).  
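A corrected version of the list above marks the variable in both rules:\n\begin{verbatim}\n        {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,\n\t cos(~x)^2       => (1+cos(2x))/2}\n\end{verbatim}\n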
The optional {\\tt WHEN}\\ttindex{WHEN} clause allows\nconstraints to be placed on the application of the rule, much as the {\\tt\nSUCH THAT} clause in a {\\tt LET} statement.\n\nA rule list may be named, for example\n\\begin{verbatim}\n        trig1 := {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,\n                  cos(~x)*sin(~y) => (sin(x+y)-sin(x-y))/2,\n                  sin(~x)*sin(~y) => (cos(x-y)-cos(x+y))/2,\n                  cos(~x)^2       => (1+cos(2*x))/2,\n                  sin(~x)^2       => (1-cos(2*x))/2};\n\\end{verbatim}\n\nSuch named rule lists may be inspected as needed. E.g., the command\n{\\tt trig1;} would cause the above list to be printed.\n\nRule lists may be used in two ways.  They can be globally instantiated by\nmeans of the command {\\tt LET}.\\ttindex{LET} For example,\n\\begin{verbatim}\n        let trig1;\n\\end{verbatim}\nwould cause the above list of rules to be globally active from then on until\ncancelled by the command {\\tt CLEARRULES},\\ttindex{CLEARRULES} as in\n\\begin{verbatim}\n        clearrules trig1;\n\\end{verbatim}\n{\\tt CLEARRULES} has the syntax\n\\begin{verbatim}\n        CLEARRULES <rule list>|<name of rule list>(,...) .\n\\end{verbatim}\n\nThe second way to use rule lists is to invoke them locally by means of a\n{\\tt WHERE}\\ttindex{WHERE} clause.  For example\n\\begin{verbatim}\n        cos(a)*cos(b+c)\n\t   where {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2};\n\\end{verbatim}\nor\n\\begin{verbatim}\n        cos(a)*sin(b) where trigrules;\n\\end{verbatim}\n\nThe syntax of an expression with a {\\tt WHERE} clause is:\n\\begin{verbatim}\n        <expression>\n            WHERE <rule>|<rule list>(,<rule>|<rule list> ...)\n\\end{verbatim}\nso the first example above could also be written\n\\begin{verbatim}\n        cos(a)*cos(b+c)\n           where cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2;\n\\end{verbatim}\n\nThe effect of this construct is that the rule list(s) in the {\\tt WHERE}\nclause only apply to the expression on the left of {\\tt WHERE}.  They have\nno effect outside the expression.  In particular, they do not affect\npreviously defined {\\tt WHERE} clauses or {\\tt LET} statements.  For\nexample, the sequence\n\\begin{verbatim}\n     let a=2;\n     a where a=>4;\n     a;\n\\end{verbatim}\nwould result in the output\n\\begin{verbatim}\n     4\n\n     2\n\\end{verbatim}\n\nAlthough {\\tt WHERE} has a precedence less than any other infix operator,\nit still binds higher than keywords such as {\\tt ELSE}, {\\tt THEN},\n{\\tt DO}, {\\tt REPEAT} and so on.  Thus the expression\n\\begin{verbatim}\n        if a=2 then 3 else a+2 where a=3\n\\end{verbatim}\nwill parse as\n\\begin{verbatim}\n        if a=2 then 3 else (a+2 where a=3)\n\\end{verbatim}\n\n{\\tt WHERE} may be used to introduce auxiliary variables in symbolic mode\nexpressions, as described in Section~\\ref{sec-lambda}.  However, the\nsymbolic mode use has different semantics, so expressions do not carry\nfrom one mode to the other.\n\n\\COMPATNOTE In order to provide compatibility with older versions of rule\nlists released through the Network Library, it is currently possible to use\nan equal sign interchangeably with the replacement sign {\\tt =>} in rules\nand {\\tt LET} statements.  
However, since this will change in future\nversions, the replacement sign is preferable in rules and the equal sign\nin non-rule-based {\tt LET} statements.\n\n\subsection*{Advanced Use of Rule Lists}\n\nSome advanced features of the rule list mechanism make it possible to\nwrite more complicated rules than those discussed so far, and in many\ncases to write more compact rule lists.  These features are:\n\n\begin{itemize}\n\item Free operators\n\item Double slash operator\n\item Double tilde variables.\n\end{itemize}\nA {\bf free operator} in the left-hand side of a pattern will match any\noperator with the same number of arguments.  The free operator is written\nin the same style as a variable.  For example, the implementation of the\nproduct rule of differentiation can be written as:\n\begin{verbatim}\noperator diff, !~f, !~g;\n\nprule := {diff(~f(~x) * ~g(~x),x) =>\n\t     diff(f(x),x) * g(x) + diff(g(x),x) * f(x)};\n\nlet prule;\n\ndiff(sin(z)*cos(z),z);\n\n         cos(z)*diff(sin(z),z) + diff(cos(z),z)*sin(z)\n\end{verbatim}\n\nThe {\bf double slash operator} may be used as an alternative to a single\nslash (quotient) in order to match quotients properly.  E.g., for rules\nthat simplify quotients of Gamma functions, one can use:\n\begin{verbatim}\ngammarule :=\n   {gamma(~z)//(~c*gamma(~zz))  => gamma(z)/(c*gamma(zz-1)*zz)\n\t\t  when fixp(zz -z) and (zz -z) >0,\n    gamma(~z)//gamma(~zz) => gamma(z)/(gamma(zz-1)*zz)\n\t\t  when fixp(zz -z) and (zz -z) >0};\n\nlet gammarule;\n\ngamma(z)/gamma(z+3);\n\n          1\n----------------------\n  3      2\n z  + 6*z  + 11*z + 6\n\end{verbatim}\nThe above example suffers from the fact that two rules had to be\nwritten in order to perform the required operation. This can be simplified\nby the use of {\bf double tilde variables}. E.g., the rule list\n\begin{verbatim}\n GGrule :=  {\n    gamma(~z)//(~~c*gamma(~zz))  => gamma(z)/(c*gamma(zz-1)*zz)\n     when fixp(zz -z) and (zz -z) >0};\n\end{verbatim}\nwill implement the same operation in a much more compact way.\nIn general, double tilde variables are bound to the neutral element\nwith respect to the operation in which they are used.\n\n\begin{tabular}{lll}\n\nPattern given & Argument used & Binding  \\\n\\\n\symbol{126}z + \symbol{126}\symbol{126}y  &   x   &  z=x; y=0  \\   \n\symbol{126}z + \symbol{126}\symbol{126}y  &   x+3 &  z=x; y=3  or  z=3; y=x \\ \n\\\n\symbol{126}z * \symbol{126}\symbol{126}y  &   x   &  z=x; y=1\\\n\symbol{126}z * \symbol{126}\symbol{126}y  &   x*3 &  z=x; y=3  or  z=3; y=x\\\n\\\n\symbol{126}z / \symbol{126}\symbol{126}y  &    x   &  z=x; y=1\\\n\symbol{126}z / \symbol{126}\symbol{126}y  &    x/3 &  z=x; y=3  \\\n\\\n\end{tabular}\n\nRemarks: A double tilde variable as the numerator of a pattern is not allowed.\nAlso, using double tilde variables may lead to recursion errors when the\nzero case is not handled properly.\n\begin{verbatim}\nlet f(~~a * ~x,x)  => a * f(x,x) when freeof (a,x);\n\nf(z,z);\n\n***** f(z,z) improperly defined in terms of itself\n\n% BUT:\n\nlet ff(~~a * ~x,x)\n       => a * ff(x,x) when freeof (a,x) and a neq 1;\n\nff(z,z);\n                 ff(z,z)\n\nff(3*z,z);\n                 3*ff(z,z)\n\end{verbatim}\n\n\subsection*{Displaying Rules Associated with an Operator}\n\nThe operator {\tt SHOWRULES}\ttindex{SHOWRULES} takes a single identifier\nas argument, and returns in rule-list form the operator rules associated\nwith that argument.  
For example:\n\begin{verbatim}\nshowrules log;\n\n{LOG(E) => 1,\n\n LOG(1) => 0,\n\n      ~X\n LOG(E  ) => ~X,\n\n\t\t    1\n DF(LOG(~X),~X) => ----}\n\t\t    ~X\n\end{verbatim}\n\nSuch rules can then be manipulated further as with any list.  For example\n{\tt rhs first ws;} has the value {\tt 1}.  Note that an operator may\nhave other properties that cannot be displayed in such a form, such as the\nfact that it is an odd function, or that its definition is given by a\nprocedure.\n\n\subsection*{Order of Application of Rules}\n\nIf rules have overlapping domains, their order of application is\nimportant.  In general, it is very difficult to specify this order\nprecisely, so that it is best to assume that the order is arbitrary.\nHowever, if only one operator is involved, the order of application of the\nrules for this operator can be determined from the following:\n\n\begin{enumerate}\n\item Rules containing at least one free variable apply before all rules\nwithout free variables.\n\item Rules activated in the most recent {\tt LET}\ncommand are applied first.\n\item {\tt LET} with several entries generates\nthe same order of application as a corresponding sequence of commands with\none rule or rule set each.\n\item Within a rule set, the rules containing at least\none free variable are applied in their given order.\nIn other words, the first member of the list is applied first.\n\item Consistent with the first item, any rule in a rule list that\ncontains no free variables is applied after all rules containing free\nvariables.\n\end{enumerate}\n{\it Example:} The following rule set enables the computation of exact\nvalues of the Gamma function:\n\begin{verbatim}\n        operator gamma,gamma_error;\n        gamma_rules :=\n        {gamma(~x)=>sqrt(pi)/2 when x=1/2,\n         gamma(~n)=>factorial(n-1) when fixp n and n>0,\n         gamma(~n)=>gamma_error(n) when fixp n,\n         gamma(~x)=>(x-1)*gamma(x-1) when fixp(2*x) and x>1,\n         gamma(~x)=>gamma(x+1)/x when fixp(2*x)};\n\end{verbatim}\nHere, rule by rule, cases of known or definitely uncomputable values\nare sorted out; e.g., the rule leading to the error expression\nwill be applied for negative integers only, since the positive\nintegers are caught by the preceding rule, and the\nlast rule will apply for negative odd multiples of $1/2$ only.\nAlternatively, the first rule could have been written as\n\begin{verbatim}\n        gamma(1/2) => sqrt(pi)/2,\n\end{verbatim}\nbut then the case $x=1/2$ should be excluded in the {\tt WHEN} part of the\nlast rule explicitly because a rule without free variables cannot take\nprecedence over the other rules.\n\n\section{Asymptotic Commands} \index{Asymptotic command}\n\label{sec-asymp}\nIn expansions of polynomials involving variables that are known to be\nsmall, it is often desirable to throw away all powers of these variables\nbeyond a certain point to avoid unnecessary computation.  The command {\tt\nLET} may be used to do this.  
For example, if only powers of {\\tt X} up to\n{\\tt x\\verb|^|7} are needed, the command\n\\begin{verbatim}\n        let x^8 = 0;\n\\end{verbatim}\nwill cause the system to delete all powers of {\\tt X} higher than 7.\n\n{\\it CAUTION:}  This particular simplification works differently from most\nsubstitution mechanisms in {\\REDUCE} in that it is applied during\npolynomial manipulation rather than to the whole evaluated expression.\nThus, with the above rule in effect, {\\tt x\\verb|^|10/x\\verb|^|5} would give the\nresult zero, since the numerator would simplify to zero.  Similarly\n{\\tt x\\verb|^|20/x\\verb|^|10} would give a {\\tt Zero divisor} error message,\nsince both numerator and denominator would first simplify to zero.\n\nThe method just described is not adequate when expressions involve several\nvariables having different degrees of smallness. In this case, it is\nnecessary to supply an asymptotic weight to each variable and count up the\ntotal weight of each product in an expanded expression before deciding\nwhether to keep the term or not. There are two associated commands in the\nsystem to permit this type of asymptotic constraint. The command {\\tt WEIGHT}\n\\ttindex{WEIGHT}\ntakes a list of equations of the form\n\\begin{verbatim}\n        <kernel form> = <number>\n\\end{verbatim}\nwhere {\\tt <number>} must be a positive integer (not just evaluate to a\npositive integer).  This command assigns the weight {\\tt <number>} to the\nrelevant kernel form.  A check is then made in all algebraic evaluations\nto see if the total weight of the term is greater than the weight level\nassigned to the calculation.  If it is, the term is deleted.  To compute\nthe total weight of a product, the individual weights of each kernel form\nare multiplied by their corresponding powers and then added.\n\nThe weight level of the system is initially set to 1. The user may change\nthis setting by the command\\ttindex{WTLEVEL}\n\\begin{verbatim}\n        wtlevel <number>;\n\\end{verbatim}\nwhich sets {\\tt <number>} as the new weight level of the system.\n{\\tt <number>} must evaluate to a positive integer.  
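For example (an illustrative sketch of the mechanism just described):\n\begin{verbatim}\n        weight eps = 1;\n        wtlevel 2;\n        (1+eps)^4;\n\n                  2\n             6*EPS  + 4*EPS + 1\n\end{verbatim}\nsince the terms in the third and fourth powers of {\tt EPS} have total\nweight greater than 2 and are therefore deleted.\n\n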
{\tt WTLEVEL} will also\nallow {\tt NIL} as an argument, in which case the current weight level is returned.\n", "meta": {"hexsha": "9050dac4f16ca875eb69f9098071fd57890267df", "size": 30731, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "atomic_Decomp/Redlog/reduce.doc/subst.tex", "max_stars_repo_name": "Korosensei42/AtomicDecomposition", "max_stars_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "atomic_Decomp/Redlog/reduce.doc/subst.tex", "max_issues_repo_name": "Korosensei42/AtomicDecomposition", "max_issues_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "atomic_Decomp/Redlog/reduce.doc/subst.tex", "max_forks_repo_name": "Korosensei42/AtomicDecomposition", "max_forks_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.38239159, "max_line_length": 80, "alphanum_fraction": 0.7132211773, "num_tokens": 8283, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115012, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5872059392541825}}
{"text": "\\documentclass[]{report}   % list options between brackets        \n\\usepackage{graphicx, listings, color}      % list packages between braces\n\\graphicspath{ {Images/} }\n\n% type user-defined commands here\n\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.5,0.5,0.5}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.95,0.95,0.92}\n \n\\lstdefinestyle{mystyle}{\n    backgroundcolor=\\color{backcolour},   \n    commentstyle=\\color{codegreen},\n    keywordstyle=\\color{magenta},\n    numberstyle=\\tiny\\color{codegray},\n    stringstyle=\\color{codepurple},\n    basicstyle=\\footnotesize,\n    breakatwhitespace=false,         \n    breaklines=true,                 \n    captionpos=b,                    \n    keepspaces=true,                 \n    numbers=left,                    \n    numbersep=5pt,                  \n    showspaces=false,                \n    showstringspaces=false,\n    showtabs=false,                  \n    tabsize=2\n}\n \n\\lstset{style=mystyle}\n\n\\begin{document}\n\n\\title{CAB420 Assignment 1}   % type title between braces\n\\author{Jarrod Williams (n9722068) and Madeline Miller (n9342401)}         % type author(s) between braces\n\\maketitle\n\n\\chapter{Theory}\n\nGiven the following equation,\n\n$$L(w)=-\\sum_{i=1}^{N}log(\\frac{1}{1+e^{y_i(w^Tx_i+b)}})+\\lambda||w||_2^2$$\n\n\\section{Finding the Partial Derivative}\nFinding the partial derivative $$\\frac{\\partial L}{\\partial w_j}$$\n\n$$\\frac{\\partial}{\\partial w_j}(-\\sum_{i=1}^{N}log(\\frac{1}{1+e^{y_i(w^Tx_i+b)}})+\\lambda||w||_2^2)$$\n\nAs the regulariser at the end of the function is a squared L2 norm, it can be differentiated trivially.\n\n$$\\frac{\\partial}{\\partial w_j}(-\\sum_{i=1}^{N}log(\\frac{1}{1+e^{y_i(w^Tx_i+b)}}))+2\\lambda w_j$$\n\nSimplify, using $log(\\frac{1}{u})=-log(u)$,\n\n$$\\frac{\\partial}{\\partial w_j}(\\sum_{i=1}^{N}log(1+e^{y_i(w^Tx_i+b)}))+2\\lambda w_j$$\n\nUsing the chain rule,\n\n$$\\sum_{i=1}^{N}\\frac{\\frac{\\partial}{\\partial w_j}(1+e^{y_i(w^Tx_i+b)})}{1+e^{y_i(w^Tx_i+b)}}+2\\lambda w_j$$\n\n$$\\sum_{i=1}^{N}\\frac{\\frac{\\partial}{\\partial w_j}(1)+\\frac{\\partial}{\\partial w_j}(e^{y_i(w^Tx_i+b)})}{1+e^{y_i(w^Tx_i+b)}}+2\\lambda w_j$$\n\n$$\\sum_{i=1}^{N}\\frac{\\frac{\\partial}{\\partial w_j}(e^{y_i(w^Tx_i+b)})}{1+e^{y_i(w^Tx_i+b)}}+2\\lambda w_j$$\n\nUsing the chain rule again, with $\\frac{\\partial}{\\partial w_j}(y_i(w^Tx_i+b))=y_ix_{i,j}$,\n\n$$\\frac{\\partial L}{\\partial w_j}=\\sum_{i=1}^{N}\\frac{y_ix_{i,j}e^{y_i(w^Tx_i+b)}}{1+e^{y_i(w^Tx_i+b)}}+2\\lambda w_j$$\n\n\\section{Finding the Second Partial Derivative}\nFinding the second partial derivative $$\\frac{\\partial^2 L}{\\partial w_j\\partial w_k}$$\n\n$$\\frac{\\partial^2 L}{\\partial w_j\\partial w_k}=\\frac{\\partial}{\\partial w_k}(\\sum_{i=1}^{N}\\frac{y_ix_{i,j}e^{y_i(w^Tx_i+b)}}{1+e^{y_i(w^Tx_i+b)}}+2\\lambda w_j)$$\n\nThe regularisation term contributes $2\\lambda\\delta_{jk}$, where $\\delta_{jk}=1$ if $j=k$ and $0$ otherwise. Applying the quotient rule to each summand, with $\\frac{\\partial}{\\partial w_k}(e^{y_i(w^Tx_i+b)})=y_ix_{i,k}e^{y_i(w^Tx_i+b)}$,\n\n$$\\sum_{i=1}^{N}y_ix_{i,j}\\frac{y_ix_{i,k}e^{y_i(w^Tx_i+b)}(1+e^{y_i(w^Tx_i+b)})-e^{y_i(w^Tx_i+b)}y_ix_{i,k}e^{y_i(w^Tx_i+b)}}{(1+e^{y_i(w^Tx_i+b)})^2}+2\\lambda\\delta_{jk}$$\n\n$$\\frac{\\partial^2 L}{\\partial w_j\\partial w_k}=\\sum_{i=1}^{N}\\frac{y_i^2x_{i,j}x_{i,k}e^{y_i(w^Tx_i+b)}}{(1+e^{y_i(w^Tx_i+b)})^2}+2\\lambda\\delta_{jk}$$\n\n\n\\section{Proving it's a convex function}\n\nA twice-differentiable function is convex if its Hessian matrix is positive semi-definite everywhere. 
That is, for every vector $a$,\n\n$$a^THa\\equiv\\sum_{j,k}a_ja_kH_{j,k}\\geq0$$\n\nwhere\n\n$$H_{j,k}=\\frac{\\partial^2L}{\\partial w_j\\partial w_k}$$\n\nSubstituting the second partial derivative found above and using $\\sum_{j,k}a_ja_kx_{i,j}x_{i,k}=(a^Tx_i)^2$ gives\n\n$$a^THa=\\sum_{i=1}^{N}\\frac{y_i^2(a^Tx_i)^2e^{y_i(w^Tx_i+b)}}{(1+e^{y_i(w^Tx_i+b)})^2}+2\\lambda||a||_2^2\\geq0$$\n\nsince every term is non-negative; hence the Hessian is positive semi-definite and $L(w)$ is convex.\n\n\\chapter{Practice}\n\n\n\\section{Features, Classes, and Linear Regression}\n%%%%%%%%%%%%%%%%%%%%\n{\\bf (a) Plot the training data in a scatter plot.}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_1_Figure_1.png}\\\\\n\t{Figure 2.1.1: Training data in a scatter plot.}\n\\end{center} \n~\\\\\n{\\bf (b) Create a linear predictor (assuming quadratic features). Plot it on the same plot as the training data.}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_1_Figure_2.png}\\\\\n\t{Figure 2.1.2: Quadratic linear predictor with training data scatter plot.}\n\\end{center}\n~\\\\ \n{\\bf (c) Create another plot with the data and a fifth-degree polynomial.}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_1_Figure_3.png}\\\\\n\t{Figure 2.1.3: Quintic linear predictor with training data scatter plot.}\n\\end{center}\n~\\\\\n{\\bf (d) Calculate the mean squared error associated with each of your learned models on the training data.}\n\\begin{lstlisting}[caption=Matlab output for MSE calculations on training data.]\nThe MSE for the quadratic linear predictor on training data was: 2178.55\nThe MSE for the quintic linear predictor on training data was: 1254.61\n\\end{lstlisting}\n~\\\\\n{\\bf (e) Calculate the MSE for each model on the test data (in mcycleTest.txt).}\n\\begin{lstlisting}[caption=Matlab output for MSE calculations on test data.]\nThe MSE for the quadratic linear predictor on test data was: 1573.27\nThe MSE for the quintic linear predictor on test data was: 999.65\n\\end{lstlisting}\n\n\n\n\n\n\\section{kNN Regression}\n%%%%%%%%%%%%%%%%%%%%\n{\\bf (a) Using the knnRegress class, implement (add code to) the predict function to make it functional.}\n\\\\~\\\\\n\\begin{lstlisting}[language=Matlab, caption=predict() Implementation]\n% Test function: predict on Xtest\nfunction Yte = predict(obj,Xte)\n  % get size of training, test data\n  [Ntr,Mtr] = size(obj.Xtrain);\n  [Nte,Mte] = size(Xte);\n  % figure out how many classes & their labels\n  classes = unique(obj.Ytrain);        \n  % make Ytest the same data type as Ytrain  \n  Yte = repmat(obj.Ytrain(1), [Nte,1]);  \n  % can't have more than Ntrain neighbors\n  K = min(obj.K, Ntr);                  \n  % For each test example:\n  for i=1:Nte,                \n    % compute sum of squared differences          \n    dist = sum( bsxfun( @minus, obj.Xtrain, Xte(i,:) ).^2 , 2);  \n    % sort the distances to find the nearest neighbours\n    [tmp,idx] = sort(dist);                                                \n    % Set y to the mean of the K nearest neighbours\n    Yte(i) = mean(obj.Ytrain(idx(1:K)));   \n  end;\nend\n\\end{lstlisting}\n{\\bf (b) Using the same technique as in Problem 1a, plot the predicted function for several values of k: 1, 2, 3, 5, 10, 50. (You can just use a for-loop to do this.) How does the choice of k relate to the \u201ccomplexity\u201d of the regression function?}\n\\\\~\\\\\n\\begin{center}\n\t\\includegraphics[width=35em]{2_2_Figure_1.png}\n\t{Figure 2.2.1: kNN Regression model at various k values.}\n\\end{center} \nThe closer k is to 1, the more complex the fitted function appears, since each prediction is then dominated by a small number of nearby training points. When k is higher, individual points have less of an impact, as each prediction is an average of the k nearest neighbours, which produces a smoother curve.\n
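The plots in Figure 2.2.1 were produced with a simple for-loop; a minimal sketch follows (the knnRegress constructor is assumed to mirror the knnClassify(K, X, Y) form used in Section 2.4, and Xtr, Ytr are placeholder names for the training data):\n\\begin{lstlisting}[language=Matlab, caption=Sketch of the plotting loop over several k values.]\nkValues = [1 2 3 5 10 50];\nxs = (min(Xtr):0.1:max(Xtr))'; % dense grid of x values to plot over\nfor index = 1:length(kValues)\n    learner = knnRegress(kValues(index), Xtr, Ytr); % train a kNN regressor\n    ys = predict(learner, xs);                      % predict over the grid\n    subplot(2, 3, index);\n    plot(Xtr, Ytr, 'b.', xs, ys, 'r-');\n    title(strcat(int2str(kValues(index)), '-NN regression'));\nend\n\\end{lstlisting}\n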
\\~\\\n{\\bf (c) We discussed in class that the k-nearest-neighbor classifier\u2019s decision boundary can be shown to be piecewise linear. What kind of functions can be output by a nearest neighbor regression function? Briefly justify your conclusion. (You do not need to discuss the general case \u2013 just the 1-dimensional regression picture such as your plots.)}\n\\\\~\\\\\nIn the 1-dimensional case shown in the plots, the output is a piecewise constant function. It is a function because there is a single y value for each x value, and it is piecewise constant because every x sharing the same set of k nearest neighbours receives the same averaged prediction, with jumps where that neighbour set changes. This is why the plotted curves are step-like rather than smooth.\n\\section{Hold-out and Cross-validation}\n{\\bf (a) Similarly to Problem 1 and 2, compute the MSE of the test data on a model trained on only the first 20 training data examples for k = 1, 2, 3, . . . , 100. Plot the MSE versus k on a log-log scale (see help loglog).}\n\\\\~\\\\\nSee part c below.\n\\\\~\\\\\n{\\bf (b) Repeat, but use all the training data. What happened? Contrast with your results from problem 1}\n\\\\~\\\\\nFor plot, see part c below.\n\\\\~\\\\\nWith all of the training data, k nearest neighbours works better. Due to the small number of points in the 20-example training set, the lower k values performed better than the higher ones. \n\\\\~\\\\\nWhen k nearest neighbours has more data points available, higher k values can provide a more accurate model. This is seen at around $k=10^1$ in the plot.\n\\\\~\\\\\n{\\bf (c) Repeat, but use cross-validation on the training data rather than a single hold-out split. What does cross-validation add?}\n\\\\~\\\\\nCross-validation gives a more reliable estimate of accuracy by averaging the error of several models trained and validated on different splits. It also allows each k value to be validated without requiring more data, since every example is reused for both training and validation across the folds.\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_3_Figure_1.png}\\\\\n\t{Figure 2.3.1: kNN Regression model Mean Squared Error and Cross Validation.}\n\\end{center} \n\n\n\\section{Nearest Neighbor Classifiers}\n%%%%%%%%%%%%%%%%%%%%\n{\\bf (a) Plot the data by their feature values, using the class value to select the color.}\n\\begin{center}\n\t\\includegraphics[width=35em]{2_4_Figure_1.png}\n\t{Figure 2.4.1: Data plotted by feature value with class value determining colour.}\n\\end{center} \n~\\\\\n{\\bf (b) Use the provided $knnClassify$ class to learn a 1-nearest-neighbour predictor. Use the function $class2DPlot(learner,X,Y)$ to plot the decision regions and training data together.}\n\\begin{lstlisting}[language=Matlab, caption=Teaching the 1-nearest-neighbour predictor.]\nlearner = knnClassify(1, X, Y);\nclass2DPlot(learner,X,Y);\ntitle('1-nearest-neighbour predictor');\n\\end{lstlisting}\n\\begin{center}\n\t\\includegraphics[width=35em]{2_4_Figure_2.png}\n\t{Figure 2.4.2: Plot of 1-nearest-neighbour predictor. 
Generated with the code in Listing 2.4.}\n\\end{center} \n~\\\\\n{\\bf (c) Do the same thing for several values of k (say, [1, 3, 10, 30]) and comment on their appearance.}\n\\begin{lstlisting}[language=Matlab, caption=Teaching k-nearest-neighbour predictor for multiple values of k. {\\bf Note:} figure for 1-nearest-neighbour predictor is above in part (b).]\nkValues = [1,3,10,30]; % k=1 is already plotted above in (B).\nfor index = 1:length(kValues)\n    learner = knnClassify(kValues(index), X, Y);\n    class2DPlot(learner,X,Y);\n    title(strcat(int2str(kValues(index)), '-nearest-neighbour predictor'));\nend\n\\end{lstlisting}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_4_Figure_3.png}\\\\\n\t{Figure 2.4.3: Plot of 3-nearest-neighbour predictor. Generated with the code in Listing 2.4.}\n\\end{center} \n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_4_Figure_4.png}\\\\\n\t{Figure 2.4.4: Plot of 10-nearest-neighbour predictor. Generated with the code in Listing 2.4.}\n\\end{center} \n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_4_Figure_5.png}\\\\\n\t{Figure 2.4.5: Plot of 30-nearest-neighbour predictor. Generated with the code in Listing 2.4.}\n\\end{center} \n{From visual inspection of Figures 2.4.2--2.4.5, as the value of $k$ increases the decision regions become smoother and less finely detailed. For small values of $k$ (such as 1) the boundary closely follows individual training points, so clear over-fitting can be observed; for large values (such as 30) the regions become overly coarse and under-fitting occurs.}\n\\\\~\\\\\n{\\bf (d) Now split the data into an 80/20 training/validation split. For k = [1, 2, 5, 10, 50, 100, 200], learn a model on the 80$\\%$ and calculate its performance ($\\#$ of data classified incorrectly) on the validation data.}\n\\begin{lstlisting}[language=Matlab, caption=Training models on 80\\% training data and calculating their performance on 20\\% validation data.]\nXtrain = X(1:118, 1:end);\nXvalid = X(119:end, 1:end);\nYtrain = Y(1:118);\nYvalid = Y(119:end);\nkValues = [1, 2, 5, 10, 50, 100, 200];\nerrors = [];\nfor index = 1:length(kValues)\n    learner = knnClassify(kValues(index), Xtrain, Ytrain); % train model on Xtrain/Ytrain\n    Yhat = predict(learner, Xvalid); % predict on the validation data Xvalid\n    errors = [errors, numel(find(Yhat~=Yvalid))]; % count the number of incorrect predictions   \n    hold on;\nend\n\\end{lstlisting}\n\\begin{center}\n\t\\includegraphics[width=35em]{2_4_Figure_6.png}\n\t{Figure 2.4.6: Plot of incorrectly classified validation data points against k-nearest-neighbour predictors. Data generated with the code in Listing 2.5.}\n\\end{center} \n{\\bf What value of k appears to generalize best given your training data? Comment on the performance at the two endpoints, in terms of over- or under-fitting.} \\\\~\\\\\n{For small values of $k$ (1, 2, 5), over-fitting is present, while larger values (200) produce under-fitting models. As a result, $k$ appears to generalise best between these two extremes. For Figure 2.4.6 specifically, $k = 10$ appears to generalise best.}\n\\\\~\\\\\n\n\n\n\n\\section{Perceptrons and Logistic Regression}\n{\\bf (a) Show the two classes in a scatter plot and verify that one is linearly separable while the other is not.}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_5_Figure_1.png}\\\\\n\t{Figure 2.5.1: Class A. Data {\\bf is} linearly separable.}\n\\end{center} \n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_5_Figure_2.png}\\\\\n\t{Figure 2.5.2: Class B. 
Data {\\bf is not} linearly separable.}\n\\end{center} \n~\\\\\n{\\bf (b) Write the function @logisticClassify2/plot2DLinear.m so that it plots the two\nclasses of data in different colors, along with the decision boundary. Include the listing of your code in your report.}\n\n\\begin{lstlisting}[language=Matlab, caption=plot2DLinear() Implementation]\nfunction plot2DLinear(obj, X, Y)\n% plot2DLinear(obj, X,Y)\n%   plot a linear classifier (data and decision boundary) when features X are 2-dim\n%   wts are 1x3,  wts(1)+wts(2)*X(1)+wts(3)*X(2)\n%\n[n,d] = size(X);\nif (d~=2) error('Sorry -- plot2DLogistic only works on 2D data...'); end;\n\nfigure('Name','Linear Plot');\nhold on;\n\n% Plot class data.\nclasses = unique(Y);\n\nclassAIndicies = find(Y==classes(1));\nxPointsClassA = X(classAIndicies, 1:end);\nscatter(xPointsClassA(:, 1), xPointsClassA(:, 2));\n\nclassBIndicies = find(Y==classes(2));\nxPointsClassB = X(classBIndicies, 1:end);\nscatter(xPointsClassB(:, 1), xPointsClassB(:, 2));\n\n\n% Plot decision boundary.\nwts = getWeights(obj);\nf = @(x1, x2) wts(1) + wts(2)*x1 + wts(3)*x2 ;\nezplot(f,[-3.5,3.5])\n\nlegend('Class 0','Class 1', 'Decision Boundary');\nhold off;\n\\end{lstlisting}\n{\\bf To demo your function plot the decision boundary corresponding to the classifier: $sign( .5 + 1x_{1} \u2212 .25x_{2} )$ along with the A data, and again with the B data.}\n\\begin{lstlisting}[language=Matlab, caption=Demoing plot2DLinear()]\n%% (B) Write the function @logisticClassify2/plot2DLinear.m such that it \n%      can plot the two classes of data in different colors, along with the \n%      decision boundary (a line). To demo your function plot the decision \n%       boundary corresponding to the classifier sign( .5 + 1x1 \u2212 .25x2 )\n%       along with the A data, and again with the B data.\n\nlearner=logisticClassify2(); % create \"blank\" learner\nlearner=setClasses(learner, unique(YA)); % define class labels using YA or YB\nwts = [0.5 1 -0.25]; \nlearner=setWeights(learner, wts); % set the learner's parameters\nplot2DLinear(learner, XA, YA);\ntitle('Class A with Decision Boundary');\nplot2DLinear(learner, XB, YB);\ntitle('Class B with Decision Boundary');\n\\end{lstlisting}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_5_Figure_3.png}\\\\\n\t{Figure 2.5.3: Class A with decision boundary plotted.}\n\\end{center} \n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_5_Figure_4.png}\\\\\n\t{Figure 2.5.4: Class B with decision boundary plotted.}\n\\end{center} \n~\\\\\n{\\bf (c) Complete the predict.m function to make predictions for your linear classifier.}\n\\begin{lstlisting}[language=Matlab, caption=predict() Implementation]\nfunction Yte = predict(obj,Xte)\n% Yhat = predict(obj, X)  : make predictions on test data X\n\n% (1) make predictions based on the sign of wts(1) + wts(2)*x(:,1) + ...\n% (2) convert predictions to saved classes: Yte = obj.classes( [1 or 2] );\nwts = getWeights(obj);\nyhat = zeros(size(Xte,1),1);\nf = @(x1, x2) wts(1) + wts(2)*x1 + wts(3)*x2;\n\nfor i=1:size(Xte,1)\n    x = sign(f(Xte(i,1),Xte(i,2)));\n    yhat(i) = obj.classes(ceil((x+3)/2));\nend;\n\nYte = yhat;\n\\end{lstlisting}\n{\\bf Again, verify that your function works by computing \\& reporting the error rate of the classifier in the previous part on both data sets A and B. 
(The error rate on data set A should be $\approx$ 0.0505.)}\n\begin{lstlisting}[language=Matlab, caption=Verifying predict() works]\n% Data set A\nyte = predict(learner,XA);\nclassError = 0;\nfor i=1:size(YA,1);\n    if(YA(i) ~= yte(i));\n        classError = classError + 1;\n    end;\nend;\nfinalClassError = classError/size(YA,1); % = 0.0505\ndisp(strcat({'The error rate for class A is:'},{' '},{num2str(finalClassError,' %.4f')}));\n% Data set B\nyte = predict(learner,XB);\nclassError = 0;\nfor i=1:size(YB,1);\n    if(YB(i) ~= yte(i));\n        classError = classError + 1;\n    end;\nend;\nfinalClassError = classError/size(YB,1); % = 0.5455\ndisp(strcat({'The error rate for class B is:'},{' '},{num2str(finalClassError,' %.4f')}));\n\end{lstlisting}\n\begin{lstlisting}[caption=Matlab output for error rate of Class A and B.]\n'The error rate for class A is: 0.0505'\n'The error rate for class B is: 0.5455'\n\end{lstlisting}\n~\\\n{\bf (d) In my provided code, I first transform the classes in the data Y into \"class 0\" (negative) and \"class 1\" (positive). In our notation, let $z = \theta x^{(j)T}$ be the linear response of the perceptron, and $\sigma$ be the standard logistic function.\n$$\sigma (z) = (1 + \exp(-z))^{-1}$$\nThe (regularized) logistic negative log likelihood loss for a single data point $j$ is then\n$$ J_{j}(\theta) = -y^{(j)} \log \sigma (\theta x^{(j)T}) - (1 - y^{(j)})\log(1-\sigma (\theta x^{(j)T})) + \alpha \sum_{i} \theta^{2}_{i} $$\nwhere $y^{(j)}$ is either 0 or 1. Derive the gradient of the regularized negative log likelihood $J_{j}$\nfor logistic regression, and give it in your report.}\n\\\\~\\\\\n{The gradient can be found by computing $\frac{\partial J_{j}(\theta)}{\partial \theta_{i}}$ of the surrogate loss for point $j$, as given by $x^{(j)}$, $y^{(j)}$. Writing $z = \theta x^{(j)T}$ and using $\sigma'(z) = \sigma(z)(1-\sigma(z))$, the partial derivative is as follows:\n$$\frac{\partial J_{j}(\theta)}{\partial \theta_{i}} = x^{(j)}_{i}\left(\sigma(\theta x^{(j)T}) - y^{(j)}\right) + 2\alpha\theta_{i} $$}\n\\\\~\\\\\n{\bf (e) Complete your train.m function to perform stochastic gradient descent on the logistic loss function. 
This will require that you fill in:}\n\\\\~\\\\\n{\bf (1) computing the surrogate loss function at each iteration ($J = 1/m\sum J_{j}$)}\n\begin{lstlisting}[language=Matlab, caption=Calculating surrogate loss at each iteration.]\niter=1; Jsur=zeros(1,stopIter); J01=zeros(1,stopIter); done=0; \nwhile (~done) \n  step = stepsize/iter;               % update step-size and evaluate current loss values\n  Jsur(iter) = inf;   % placeholder from the skeleton; overwritten below\n  wtsTrans = (obj.wts * obj.wts')';\n  Jsur(iter) = mean(-Y .* log(logistic(obj, X)) - (1 - Y) .* log(1 - logistic(obj, X)) + reg * sum(wtsTrans));\n  J01(iter) = err(obj, X, Yin);\n\end{lstlisting}\n~\\\n{\bf (2) computing the prediction and gradient associated with each data point $x^{(j)}, y^{(j)}$;}\n\begin{lstlisting}[language=Matlab, caption=Compute linear responses and activation for data point j.]\n    % Compute linear responses and activation for data point j\n    y = logistic(obj,X(j,:));\n\end{lstlisting}\n~\\\n{\bf (3) a gradient step on the parameters $\theta$;}\n\begin{lstlisting}[language=Matlab, caption=Compute gradient.]\n    % Compute gradient:\n    gradient = X1(j,:) * (y - Y(j)) + 2 * reg * obj.wts;\n\n    obj.wts = obj.wts - step * gradient;      % take a step down the gradient\n\end{lstlisting}\n~\\\n{\bf (4) a stopping criterion (usually either stopIter iterations or that J has not changed by more than stopTol since the last iteration through all the data).}\n\begin{lstlisting}[language=Matlab, caption=Implement stopping criterion.]\n  %   done = false;\n  deltaJ = mean(-Y .* log(logistic(obj, X)) - (1 - Y) .* log(1 - logistic(obj, X)) + reg * obj.wts * obj.wts');  \n  if (iter == stopIter || abs(deltaJ - Jsur(iter)) < stopTol)\n    done = true;\n  end;\n\end{lstlisting}\n~\\\n{\bf (f) Run your logistic regression classifier on both data sets (A and B); for this problem, use no regularization ($\alpha = 0$). Describe your parameter choices (stepsize, etc.) and show a plot of both the convergence of the surrogate loss and error rate, and a plot of the final converged classifier with the data (using e.g. plotClassify2D). In your report, please also include the functions that you wrote (at minimum, train.m, but possibly a few small helper functions as well).}\n\n\begin{lstlisting}[language=Matlab, caption=Full train.m implementation.]\nfunction obj = train(obj, X, Y, varargin)\n% obj = train(obj, Xtrain, Ytrain [, option,val, ...])  : train logistic classifier\n%     Xtrain = [n x d] training data features (constant feature not included)\n%     Ytrain = [n x 1] training data classes \n%     'stepsize', val  => step size for gradient descent [default 1]\n%     'stopTol',  val  => tolerance for stopping criterion [0.0]\n%     'stopIter', val  => maximum number of iterations through data before stopping [1000]\n%     'reg', val       => L2 regularization value [0.0]\n%     'init', method   => 0: init to all zeros;  1: init to random weights;  \n% Output:\n%   obj.wts = [1 x d+1] vector of weights; wts(1) + wts(2)*X(:,1) + wts(3)*X(:,2) + ...\n\n\n  [n,d] = size(X);            % d = dimension of data; n = number of training data\n\n  % default options:\n  plotFlag = true; \n  init     = []; \n  stopIter = 100; % Lowered to improve performance. 
Initially 1000.\n  stopTol  = -1;\n  reg      = 0.0;\n  stepsize = 1;\n\n  i=1;                                       % parse through various options\n  while (i<=length(varargin)),\n    switch(lower(varargin{i}))\n    case 'plot',      plotFlag = varargin{i+1}; i=i+1;   % plots on (true/false)\n    case 'init',      init     = varargin{i+1}; i=i+1;   % init method\n    case 'stopiter',  stopIter = varargin{i+1}; i=i+1;   % max # of iterations\n    case 'stoptol',   stopTol  = varargin{i+1}; i=i+1;   % stopping tolerance on surrogate loss\n    case 'reg',       reg      = varargin{i+1}; i=i+1;   % L2 regularization\n    case 'stepsize',  stepsize = varargin{i+1}; i=i+1;   % initial stepsize\n    end;\n    i=i+1;\n  end;\n\n  X1    = [ones(n,1), X];     % make a version of training data with the constant feature\n\n  Yin = Y;                              % save original Y in case needed later\n  obj.classes = unique(Yin);\n  if (length(obj.classes) ~= 2) error('This logistic classifier requires a binary classification problem.'); end;\n  Y(Yin==obj.classes(1)) = 0;\n  Y(Yin==obj.classes(2)) = 1;           % convert to classic binary labels (0/1)\n\n  if (~isempty(init) || isempty(obj.wts))   % initialize weights and check for correct size\n    obj.wts = randn(1,d+1);\n  end;\n  if (any( size(obj.wts) ~= [1 d+1]) ) error('Weights are not sized correctly for these data'); end;\n  wtsold = 0*obj.wts+inf;\n\n% Training loop (SGD):\niter=1; Jsur=zeros(1,stopIter); J01=zeros(1,stopIter); done=0; \nwhile (~done) \n  step = stepsize/iter;               % update step-size and evaluate current loss values\n  Jsur(iter) = inf;   \n  wtsTrans = (obj.wts * obj.wts')';\n  Jsur(iter) = mean(-Y .* log(logistic(obj, X)) - (1 - Y) .* log(1 - logistic(obj, X)) + reg * sum(wtsTrans));\n  J01(iter) = err(obj, X, Yin);\n\n  if (plotFlag), switch d,            % Plots to help with visualization\n    case 1, fig(2); plot1DLinear(obj,X,Yin);  %  for 1D data we can display the data and the function\n    case 2, fig(2); plot2DLinear(obj,X,Yin);  %  for 2D data, just the data and decision boundary\n    otherwise, % no plot for higher dimensions... %  higher dimensions visualization is hard\n  end; end;\n  fig(1); semilogx(1:iter, Jsur(1:iter),'b-',1:iter,J01(1:iter),'g-'); drawnow;\n\n  for j=1:n,\n    % Compute linear responses and activation for data point j\n    y = logistic(obj,X(j,:));\n\n    % Compute gradient:\n    gradient = X1(j,:) * (y - Y(j)) + 2 * reg * obj.wts;\n\n    obj.wts = obj.wts - step * gradient;      % take a step down the gradient\n  end;\n\n\n  %   done = false;\n  deltaJ = mean(-Y .* log(logistic(obj, X)) - (1 - Y) .* log(1 - logistic(obj, X)) + reg * obj.wts * obj.wts');  \n  if (iter == stopIter || abs(deltaJ - Jsur(iter)) < stopTol)\n    done = true;\n  end;\n\n  wtsold = obj.wts;\n  iter = iter + 1;\nend;\n\\end{lstlisting}\n{ As $\\alpha$ was set to 0, $stepsize$ was set to 1 and number of iterations was set to 100. Beyond 100 iterations, computation time became too high for minimal improvements to the error  rate and surrogate loss. 
The logistic regression classifier was then run on both classes:}\n\begin{lstlisting}[language=Matlab, caption=Code to run logistic regression classifier on both data sets.]\n% Train Class A\nlearner = train(learner, XA, YA); % assign the returned learner so the trained weights are kept\nlegend('Error Rate', 'Surrogate Loss');\n% Plot final converged classifier decision boundaries.\nfigure();\nplotClassify2D(learner, XA, YA);\n\n% Train Class B\nlearner = train(learner, XB, YB); % assign the returned learner so the trained weights are kept\nlegend('Error Rate', 'Surrogate Loss');\n% Plot final converged classifier decision boundaries.\nfigure();\nplotClassify2D(learner, XB, YB);\n\end{lstlisting}\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_1.png}\\\\\n\t{Figure 2.6.1: Plot of convergence of error rate and surrogate loss for Class A.}\n\\end{center} \n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_2.png}\\\\\n\t{Figure 2.6.2: Plot of final converged classifier for Class A.}\n\\end{center} \n\n\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_3.png}\\\\\n\t{Figure 2.6.3: Plot of final converged classifier decision boundaries for Class A.}\n\\end{center} \n\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_4.png}\\\\\n\t{Figure 2.6.4: Plot of convergence of error rate and surrogate loss for Class B.}\n\\end{center} \n\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_5.png}\\\\\n\t{Figure 2.6.5: Plot of final converged classifier for Class B.}\n\\end{center} \n\n\n\\begin{center}\n\t\\includegraphics[width=30em,keepaspectratio]{2_6_Figure_6.png}\\\\\n\t{Figure 2.6.6: Plot of final converged classifier decision boundaries for Class B.}\n\\end{center} \n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "84533e1b2eda81be4bf08c84be674384fadc4206", "size": 26278, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignment 1 - CAB420.tex", "max_stars_repo_name": "me4502/CAB420Assignment1", "max_stars_repo_head_hexsha": "38837dcc843e4fb316fdbd559403c13f10208f30", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-08-29T01:59:58.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-15T15:22:44.000Z", "max_issues_repo_path": "Assignment 1 - CAB420.tex", "max_issues_repo_name": "me4502/CAB420Assignment1", "max_issues_repo_head_hexsha": "38837dcc843e4fb316fdbd559403c13f10208f30", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-08-29T02:00:22.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T02:00:22.000Z", "max_forks_repo_path": "Assignment 1 - CAB420.tex", "max_forks_repo_name": "me4502/CAB420Assignment1", "max_forks_repo_head_hexsha": "38837dcc843e4fb316fdbd559403c13f10208f30", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.8603839442, "max_line_length": 488, "alphanum_fraction": 0.6886368826, "num_tokens": 8102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417500561683, "lm_q2_score": 0.8918110440002044, "lm_q1q2_score": 0.5871839985639197}}
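As a cross-check on the gradient derived in part (d) of the report above, the following minimal, self-contained sketch performs a single stochastic gradient step on the regularized logistic loss. The weight vector, data point and label below are hypothetical illustrations (not values from the assignment data sets), and the snippet is independent of the logisticClassify2 class:

\begin{lstlisting}[language=Matlab, caption=Minimal sketch of one SGD step on the regularized logistic loss (illustrative values).]
% Hypothetical example values -- not taken from the assignment data.
wts   = [0.5, 1, -0.25];          % 1x3 weights, bias term first
x1    = [1, 0.3, -1.2];           % one data point with constant feature prepended
y     = 1;                        % binary label in {0,1}
alpha = 0; step = 1;              % regularization weight and step size
sigma = @(z) 1 ./ (1 + exp(-z));  % standard logistic function
z     = wts * x1';                % linear response theta*x^T
grad  = x1 * (sigma(z) - y) + 2 * alpha * wts; % gradient of J_j from part (d)
wts   = wts - step * grad;        % take a step down the gradient
\end{lstlisting}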
{"text": "\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\n\\begin{document}\n\\logo\n\\rulename{Regression of Technical Signal} %Argument is name of rule\n\\tblofcontents\n\n\\ruledescription{Technical indicators are metrics used by traders to determine opportune times to enter and exit trades. To use a technical indicator for trading you both need to know how to calculate the indicator and the interpretation of how the resultant signal should be used. For a given technical indicator, different traders may agree on how to calculate the signal and the indicator's definition, but disagree on how to use the signal for trading. As a security's price history is widely available to all market participants, popular technical strategies can become crowded. \\\\ \\\\ This strategy is a regression based approach to forecasting price change in assets using a signal generated from a technical indicator. This particular strategy, Regression of Technical Signal, uses a simple mathematical regression (120 step length) to size trade positioning based on the strength of the recent relationship between the technical indicator and future price changes. The first stage is to calculate the technical indicator: for this InferTrade uses the versions of technical indicators defined by the \\href{https://pypi.org/project/ta/}{TA} open source library. For the second stage - interpretation of when to trade - InferTrade regresses the history of returns in the price series against the 1-step lagged level of the technical indicator. The trade size taken then uses a simplified version of the Kelly Criterion, calculating investment size as a ratio of the expected return for the next time step to a fixed assumption of volatility. The resultant trading rule is therefore sensitive to the type of market conditions evaluated by the technical signal, but uses a statistical approach for implementation and sizing, rather than more conventional interpretations. As a single input signal is used, this also means technical indicators with multiple signal time series - such as Bollinger Bands, with lower and upper bands - must have one of the signals selected for this strategy. As the window length used for the regression is fixed (120 steps) and the Kthe strategy will have the same number of free parameters as the signal creation function itself. \\\\ \\\\ This form of regression strategy provides a way to evaluate the raw signals different technical indicators for information content in a consistent way, using them as potential predictive features rather than following their usual implementation recipe. InferTrade also supports the usual implementations for some technical indicators but these would be separate strategies. The rationale for also offering these regression-based approach is from two benefits: 1) regression sized technical rules give continuous size recommendations, rather than discrete entry/exit points. This makes them easier to optimize, as small changes in the parameters result in small change in positions and thus performance. Some technical indicator interpretations take discontinuous positions, whereby a small change results in a large position taken, which can increase overfitting during optimisation. 
2) regression sized technical rules embed automatic adjustment for regimes where the strategy is not performing - if the technical indicator stops providing a significant prediction of price changes then the sized positions will be minimal.}\n\n\\howtotrade\n{Any of the technical indicators supported by InferTrade can be used as a signal to forecast price changes in an asset. \\\\ \\\\ As an example we can consider the Money Flow Index (MFI). The MFI is a momentum indicator which varies between 0 and 100 and is used to define overbought and oversold conditions. Usually an MFI value above 80 is considered overbought and a sell signal is generated, while a value below 20 is considered oversold and a buy signal is generated.\\\\ \\\\\nUsing this trading strategy, we compute the MFI of the underlying asset from its historical High, Low, Close \\& Volume. Instead of using the MFI value to generate a buy/sell signal as explained above, a forecast is made for the expected change in price using the MFI as a signal, governed by Equation \\ref{eqn:SignalRegression}. The $\\beta$ coefficient determines the relationship and impact of the signal $S$ on the change in price of the asset $\\Delta \\price$.\\\\ \\\\\nThe forecast generated as described above is then used to compute the fractional portfolio investment using the formulation in Equation \\ref{eqn:portfolioallocation}.\n}\n\n\\ruleparameters %You can include however many arguments (in groups of 4) as you want!\n{Regression Period}{120}{Previous data points used to fit a regression line.}{\\lookbacklength}\n{Kelly fraction}{1.0}{Amplitude weighting. $1.0$ is maximum growth if regression is exact.\n$<1.0$ scales down positions taken.}{$\\kellyfraction$}\n{Volatility}{0.1}{Volatility used to compute the Kelly recommended optimum.}{$\\sigma$}\n\\stoptable %must be included or Tex engine runs infinitely\n\n\\section{Equation}\nBelow are the equations which govern how this specific trading rule calculates a trading position. Equation 1 is the regression equation used to determine $\\beta$ and $c$. 
Equation 2 is the subsequent calculation of position sizing.\n\n\\begin{equation}\n\\label{eqn:SignalRegression}\n\\Delta \\price_{\\currenttime} = \\beta \\, S_{\\currenttime-1} + c + \\varepsilon_{\\currenttime}\n\\end{equation}\n\\begin{equation}\n\\label{eqn:portfolioallocation}\n    \\position_{\\currenttime} = \\kellyfraction \\, \\frac{E[\\Delta \\price_{\\currenttime}]}{\\sigma^{2}} = \\kellyfraction \\, \\frac{\\beta \\, S_{\\currenttime-1} + c}{\\sigma^{2}}\n\\end{equation}\n\\\\\nwith:\n\n$\\Delta \\price_{\\currenttime}$: is the change in asset price at time $\\currenttime$.\n\n$E[\\Delta \\price_{\\currenttime}]$: is the expected change in price at time $\\currenttime$.\n\n$S_{\\currenttime-1}$: is the signal value at time $\\currenttime-1$.\n\n$\\beta$: is the relationship coefficient between $S_{\\currenttime-1}$ and $\\Delta \\price_{\\currenttime}$.\n\n$c$: is a constant bias.\n\n$\\varepsilon_{\\currenttime}$: is the error term.\n\n$\\kellyfraction$: is the Kelly fraction.\n\n$\\position_{\\currenttime}$: is the resultant fractional portfolio investment at time \\currenttime.\n\n\n\\keyterms\n\\furtherlinks %The footer\n\\end{document}\n", "meta": {"hexsha": "0554b7cc4a5ce88d39955eb40c634c066591554b", "size": 6360, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/SignalRegression.tex", "max_stars_repo_name": "parthgajjar4/infertrade", "max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_issues_repo_path": "docs/strategies/tex/SignalRegression.tex", "max_issues_repo_name": "parthgajjar4/infertrade", "max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/SignalRegression.tex", "max_forks_repo_name": "parthgajjar4/infertrade", "max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 106.0, "max_line_length": 3296, "alphanum_fraction": 0.798427673, "num_tokens": 1360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.587180278012864}}
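As an illustrative numerical example for the strategy above (all values hypothetical rather than taken from market data): suppose the fitted regression gives $\beta = 0.0002$ and $c = -0.001$, the lagged signal value is $S_{\currenttime-1} = 20$, the Kelly fraction is $\kellyfraction = 1.0$ and the volatility assumption is $\sigma = 0.1$. Equations \ref{eqn:SignalRegression} and \ref{eqn:portfolioallocation} then give
\[
E[\Delta \price_{\currenttime}] = 0.0002 \times 20 - 0.001 = 0.003, \qquad \position_{\currenttime} = 1.0 \times \frac{0.003}{0.1^{2}} = 0.3,
\]
i.e. the rule recommends investing 30\% of the portfolio in the asset for the next time step.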
{"text": "\\documentclass{beamer}\n\n\\usepackage{beamerthemevictor,comment,verbatim,graphicx,amssymb}\n\\usepackage[noeepic]{qtree}\n\n\\input{tutmacs}\n\\input{slidemacs}\n\\input idxmacs\n\n\\begin{document}\n\n\\title{Hashing}\n\\author{Victor Eijkhout}\n\\date{Notes for CS 594 -- Fall 2004}\n\n\\frame{\\titlepage}\n\n\\section{Introduction}\n\n\\frame{\n  \\frametitle{The basic problem}\nStoring names and information about them:\\\\\nassociative storage\n}\n\n\\frame{\n\\frametitle{Issues}\n\\begin{itemize}\n\\item<2-> Insertion\n\\item<3-> Retrieval\n\\item<4-> Deletion\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{Simple strategies}\n\\begin{itemize}\n\\item List in order of creation\n\\item<2-> $\\Rightarrow$ Cheap to create, linear search time, linear deletion\n\\item<3-> Sorted list\n\\item<4-> $\\Rightarrow$ Creation in linear time, search logarithmic,\n  deletion linear\n\\item<5-> Linear list\n\\item<6-> $\\Rightarrow$ all linear time\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{one more strategy}\n\\begin{quote}\n\\Tree [.$\\bullet$ [.B ART [.E GIN LL ] ] [.E LSE ND ] ]\n\\end{quote}\n\\begin{itemize}\n\\item<2-> $\\Rightarrow$ all linear in length of string\n\\end{itemize}\n}\n\n\\sectionframe{Hash functions}\n\n\\frame[containsverbatim]{\n  \\frametitle{}\n\\begin{itemize}\n\\item Mapping from space of words to space of indices\n\\item Source: unbounded; in practice not extremely large\n\\item Target: array (static/dynamic)\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{Requirements}\n\\begin{itemize}\n\\item<2-> Function determined only by input data\n\\item<3-> Determined by as much of the data as possible\\\\\n\\n{key1}, \\n{key2},\\dots\n\\item<4-> Uniform distribution (clustering bad, collisions really bad)\n\\item<5-> Similar data, mapped far apart\n\\end{itemize}\n}\n\n\\subsection{Modulo operations}\n\n\\frame[containsverbatim]{\n  \\frametitle{Good idea: prime numbers}\nWith $M$ size of the hash table:\n\\begin{equation}\n    h(K) = K\\mod M, \\end{equation}\nor:\n\\begin{equation}\n    h(K) = aK\\mod M,\\end{equation}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Bad examples:}\n\\begin{itemize}\n\\item $M$~is even, say~$M=2M'$,\\\\ $r=K\\mod M$ say~$K=nM+r$\nthen\n\\begin{eqnarray*}\nK=2K'&\\Rightarrow& r=2(nM'-K')\\\\\nK=2K'+1&\\Rightarrow& r=2(nM'-K')+1\n\\end{eqnarray*}\nso key even iff number $\\Rightarrow$~dependence on last digit\n\\item $M$ multiple of three: anagrams map to same key (sum of digits)\n\\item $\\Rightarrow$ $M$ prime, far away from powers of 2\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Multiplication instead of division}\n\\begin{itemize}\n\\item $r=K\\mod M = M\\big((K/M)\\mod 1\\big)$ \n\\item $A\\approx w/M$, where $w$ maxint\n\\item Then $1/M=A/w$, ($A$ with decimal point to its left).\n\\item from %K\\mod M=M(K/M\\mod 1)$:\n\\[ h(K)=\\lfloor M\\left(\\left({A\\over w}K\\right)\\mod\n1\\right)\\rfloor. 
\\]\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Example: Bible}\n\\begin{itemize}\n\\item 42,829 unique words,\n\\item into a hash table with 30,241 elements (prime): 76.6\\% used\n\\item table of size: 30,240 (divisible by 2--9): 60.7\\% used\n\\item (collisions discussed later)\n\\end{itemize}\n}\n\n\\subsection{Character hashing}\n\n\\frame[containsverbatim]{\n  \\frametitle{Two-step hashing}\n\\begin{itemize}\n\\item Mix up characters of the key\n\\item then modulo with table size\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Character based hashing}\n\\begin{verbatim}\n  h = <some value>\n  for (i=0; i<len(var); i++)\n    h = h + <byte i of string>;\n\\end{verbatim}\nprevent anagram problem:\n\\begin{verbatim}\n  h = <some value>\n  for (i=0; i<len(var); i++)\n    h = Rand( h + <byte i of string> );\n\\end{verbatim}\nwith table of random numbers; also function possible\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{ELF hash}\n\\begin{verbatim}\n/* UNIX ELF hash\n * Published hash algorithm used in the UNIX ELF format\n * for object files\n */\nunsigned long hash(char *name)\n{\n    unsigned long h = 0, g;\n\n    while ( *name ) {\n        h = ( h << 4 ) + *name++;\n        if ( g = h & 0xF0000000 )\n          h ^= g >> 24;\n        h &= ~g;\n    }\n    return h;\n}\n\\end{verbatim}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Another hash function}\n\\begin{verbatim}\n/* djb2\n * This algorithm was first reported by Dan Bernstein\n * many years ago in comp.lang.c\n */\nunsigned long hash(unsigned char *str)\n{\n    unsigned long hash = 5381;\n    int c; \n    while (c = *str++) hash = ((hash << 5) + hash) + c;\n    return hash;\n}\n\\end{verbatim}\n}\n\n\\sectionframe{Hash tables: collisions}\n\n\\frame[containsverbatim]{\n  \\frametitle{So far so good}\n\\pgfimage[height=2in]{hash-direct}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Collisions}\n\\begin{itemize}\n\\item $k_1\\not=k_2$, $h(k_1)=h(k_2)$\n\\item several strategies; all analysis statistical in nature\n\\item open hash table: solve conflict outside the table\n\\item closed hash table: solve by moving around in the table\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Separate chaining}\n\\pgfimage[height=2in]{hash-separate}\n}\n\n\\subsection{Open hash table}\n\n\\frame[containsverbatim]{\n  \\frametitle{}\n\\begin{itemize}\n\\item Pro: no need for searching through hash table\n\\item Con: dynamic storage\n\\item Also: $M$ large to prevent collisions $\\Rightarrow$~wasted space\n\\end{itemize}\n}\n\n\\subsection{Closed hash table}\n\n\\frame[containsverbatim]{\n  \\frametitle{Linear probing}\n\\pgfimage[height=2in]{hash-linear}\n\nLocation occupied: search linearly from first hash\n}\n\n\\frame[containsverbatim]{\n\\footnotesize\n\\begin{verbatim}\naddr = Hash(K);\nif (IsEmpty(addr)) Insert(K,addr);\nelse {\n    /* see if already stored */\n  test:\n    if (Table[addr].key == K) return;\n    else {\n      addr = Table[addr].link; goto test;}\n    /* find free cell */\n    Free = addr;\n    do { Free--; if (Free<0) Free=M-1; }\n    while (!IsEmpty(Free) && Free!=addr);\n    if (!IsEmpty(Free)) abort;\n    else {\n      Insert(K,Free); Table[addr].link = Free;}\n}\n\\end{verbatim}\n}\n\n\\frame{\n  \\frametitle{Merging blocks in linear probing}\n\\pgfimage[height=1in]{probing1}\n\\pgfimage[height=1in]{probing2}\n\\pgfimage[height=1in]{probing3}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Linear probing analysis}\n\\begin{itemize}\n\\item  Clusters forming\n\\item 
Particularly bad: merging clusters\n\\item Ratio occupied/total: $\\alpha=N/M$\\\\\nexpected search time\n\\[ T\\approx\\begin{cases}\n    {1\\over 2}\\left(1+\\left({1\\over 1-\\alpha}\\right)^2\\right)&\n        unsuccessful\\\\\n    {1\\over 2}\\left(1+{1\\over 1-\\alpha}\\right)&\n        successful\n    \\end{cases}\n\\]\n\\item $\\Rightarrow$ increasing as table fills up\n\\end{itemize}\n}\n\n\\subsection{Chaining}\n\n\\frame[containsverbatim]{\n\\frametitle{Chaining}\n\\pgfimage[height=1in]{hash-chain0}\n\\pgfimage[height=1in]{hash-chain}\n\\pgfimage[height=1in]{hash-hash2}\n\nIf location occupied, search from top of table\n}\n\n\\frame[containsverbatim]{\n\\footnotesize\n\\begin{verbatim}\naddr = Hash(K); Free = M-1;\nif (IsEmpty(addr)) Insert(K,addr);\nelse {\n    /* see if already stored */\n  test:\n    if (Table[addr].key == K) return;\n    else {\n      addr = Table[addr].link; goto test;}\n    /* find free cell */\n    do { Free--; }\n    while (!IsEmpty(Free));\n    if (Free<0) abort;\n    else {\n      Insert(K,Free); Table[addr].link = Free;}\n}\n\\end{verbatim}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Chaining analysis}\n\\begin{itemize}\n\\item No clusters merging\n\\item Coalescing lists\n\\item Search time ($\\alpha$~occupied fraction)\n\\[ T\\approx\\begin{cases}\n        1+(e^{2\\alpha}-1-2\\alpha)/4&unsuccessful\\\\\n        1+(e^{2\\alpha}-1-2\\alpha)/(8\\alpha)+\\alpha/4&successful\n           \\end{cases}\n\\]\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Nonlinear rehashing}\n\\begin{itemize}\n\\item `Random probing': Try $(h(m)+p_i)\\mod s$, where $p_i$~is a\n  sequence of random numbers (stored)\\\\\n  prevent secondary collisions\n\\item `Add the hash': Try $(i\\times h(m))\\mod s$. ($s$~prime)\n\\item Pro: scattered hash keys\n\\item Con: more calculations, worse memory locality\n\\end{itemize}\n}\n\n\\section{Other}\n\\subsection{Deletion}\n\n\\frame[containsverbatim]{\n  \\frametitle{Deleting keys}\n\\begin{itemize}\n\\item Simple in direct chaining\n\\item Very hard in closed hash table methods: can only mark `unused'\n\\end{itemize}\n}\n\n\\subsection{Examples}\n\n\\frame[containsverbatim]{\n  \\frametitle{Search in chess programs}\n\\begin{itemize}\n\\item Problem: evaluating board positions\n\\item if a position is arrived at in two ways, evaluate it only once\n\\item Solution: hash the board, use as key in table of evaluations\n\\item Collisions?\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{String searching}\n\\begin{itemize}\n\\item Problem: does string (length~$M$) occur in document (length~$N$)\n\\item naive: $M$ comparisons at each of $N$ positions, giving $O(MN)$ complexity\n\\item solution: hash the strings, compare hash values\n\\item (hash function does not distinguish between anagrams)\n\\[ h(k)=\\left\\{\\sum_ik[i]\\right\\}\\mathop{\\mathrm{mod}} K\\]\n\\item<2-> cheap updating of the document hash key:\n\\[ h(t[2\\ldots n+1]) = h(t[1\\ldots n])+t[n+1]-t[1] \\]\n(with addition/subtraction modulo~$K$)\n\\item string comparison in $O(1)$, $\\Rightarrow$~total cost~$O(M+N)$\n\\end{itemize}\n}\n\n\\subsectionframe{Discussion}\n\n\\frame{\n  \\frametitle{Hash table vs trees}\n\\begin{itemize}\n\\item<2-> Best case search time can be equal: harder to implement in\n  trees\n\\item<3-> Trees can become unbalanced: considerable time and effort to\n  balance\n\\item<4-> Trees have dynamic storage: harder to code optimally; worse\n  memory locality\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{Open vs closed hash tables}\n\\begin{itemize}\n\\item<2-> 
Approximately equal performance until the table fills up\n\\item<3-> Open: much simpler storage management, especially deletion\n\\end{itemize}\n}\n\n\\end{document}\n", "meta": {"hexsha": "b9f0bef56380a65d2f91df6ba10efcc0f804bd27", "size": 9467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/hashing.tex", "max_stars_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_stars_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-07T08:21:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-07T08:21:41.000Z", "max_issues_repo_path": "slides/hashing.tex", "max_issues_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_issues_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/hashing.tex", "max_forks_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_forks_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.608478803, "max_line_length": 76, "alphanum_fraction": 0.6973698109, "num_tokens": 2958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7634837635542925, "lm_q1q2_score": 0.587180273873987}}
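To make the slides' probing-versus-chaining estimates concrete, here is a worked evaluation of the successful-search formulas quoted above at two fill ratios (an illustrative example added here, not from the original deck; values rounded):
\[
\alpha=0.5:\quad T_{\mathrm{linear}}\approx\tfrac{1}{2}\left(1+\tfrac{1}{1-0.5}\right)=1.5,\qquad
T_{\mathrm{chain}}\approx 1+\tfrac{e^{1}-1-1}{8\cdot 0.5}+\tfrac{0.5}{4}\approx 1.30
\]
\[
\alpha=0.9:\quad T_{\mathrm{linear}}\approx\tfrac{1}{2}\left(1+\tfrac{1}{1-0.9}\right)=5.5,\qquad
T_{\mathrm{chain}}\approx 1+\tfrac{e^{1.8}-1-1.8}{8\cdot 0.9}+\tfrac{0.9}{4}\approx 1.68
\]
so chaining degrades far more gracefully than linear probing as the table fills up.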
{"text": "\\section{Miscellaneous}\n\\subsection{VS Code Config}\n\\lstinputlisting{./source/01_Miscellaneous/00_VS Code Config.txt}\n\\subsection{Day of Date}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/01_Day of Date.cpp}\n\\subsection{Number of Days since 1-1-1}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/02_Number of Days since 1-1-1.cpp}\n\\subsection{Enumerate Subsets of a Bitmask}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/03_Enumerate Subsets of a Bitmask.cpp}\n\\subsection{Fast IO}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/04_Fast IO.cpp}\n\\subsection{Int to Roman}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/05_Int to Roman.cpp}\n\\subsection{Josephus Problem}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/06_Josephus Problem.cpp}\n\\subsection{Random Primes}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/07_Random Primes.cpp}\n\\subsection{RNG}\n\\lstinputlisting[language=c++]{./source/01_Miscellaneous/08_RNG.cpp}\n\\section{Data Structures}\n\\subsection{2D Segment Tree}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/01_2D Segment Tree.cpp}\n\\subsection{Fenwick RU-RQ}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/02_Fenwick RU-RQ.cpp}\n\\subsection{Heavy-Light Decomposition}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/03_Heavy-Light Decomposition.cpp}\n\\subsection{Li-Chao Tree}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/04_Li-Chao Tree.cpp}\n\\subsection{Persistent Segment Tree}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/05_Persistent Segment Tree.cpp}\n\\subsection{STL PBDS}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/06_STL PBDS.cpp}\n\\subsection{Treap}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/07_Treap.cpp}\n\\subsection{Unordered Map Custom Hash}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/08_Unordered Map Custom Hash.cpp}\n\\subsection{Mo's on Tree}\n\\input{./source/02_Data Structures/09_Mo's on Tree.tex}\n\\subsection{Link-Cut Tree}\n\\lstinputlisting[language=c++]{./source/02_Data Structures/10_Link-Cut Tree.cpp}\n\\section{Dynamic Programming}\n\\subsection{DP Convex Hull}\n\\lstinputlisting[language=c++]{./source/03_Dynamic Programming/01_DP Convex Hull.cpp}\n\\subsection{DP DNC}\n\\lstinputlisting[language=c++]{./source/03_Dynamic Programming/02_DP DNC.cpp}\n\\subsection{DP Knuth-Yao}\n\\lstinputlisting[language=c++]{./source/03_Dynamic Programming/03_DP Knuth-Yao.cpp}\n\\section{Geometry}\n\\subsection{Geometry Template}\n\\lstinputlisting[language=c++]{./source/04_Geometry/01_Geometry Template.cpp}\n\\subsection{Convex Hull}\n\\lstinputlisting[language=c++]{./source/04_Geometry/02_Convex Hull.cpp}\n\\subsection{Closest Pair of Points}\n\\lstinputlisting[language=c++]{./source/04_Geometry/03_Closest Pair of Points.cpp}\n\\subsection{Smallest Enclosing Circle}\n\\lstinputlisting[language=c++]{./source/04_Geometry/04_Smallest Enclosing Circle.cpp}\n\\subsection{Sutherland-Hodgman Algorithm}\n\\lstinputlisting[language=c++]{./source/04_Geometry/05_Sutherland-Hodgman Algorithm.cpp}\n\\subsection{Centroid of Polygon}\n\\input{./source/04_Geometry/06_Centroid of Polygon.tex}\n\\subsection{Pick Theorem}\n\\input{./source/04_Geometry/07_Pick Theorem.tex}\n\\section{Graphs}\n\\subsection{Articulation Point and Bridge}\n\\lstinputlisting[language=c++]{./source/05_Graphs/01_Articulation Point and Bridge.cpp}\n\\subsection{SCC and Strong 
Orientation}\n\\lstinputlisting[language=c++]{./source/05_Graphs/02_SCC and Strong Orientation.cpp}\n\\subsection{Centroid Decomposition}\n\\lstinputlisting[language=c++]{./source/05_Graphs/03_Centroid Decomposition.cpp}\n\\subsection{Dinic's Maximum Flow}\n\\lstinputlisting[language=c++]{./source/05_Graphs/04_Dinic's Maximum Flow.cpp}\n\\subsection{Minimum Cost Maximum Flow}\n\\lstinputlisting[language=c++]{./source/05_Graphs/05_Minimum Cost Maximum Flow.cpp}\n\\subsection{Flows with Demands}\n\\input{./source/05_Graphs/06_Flows with Demands.tex}\n\\subsection{Hungarian}\n\\lstinputlisting[language=c++]{./source/05_Graphs/07_Hungarian.cpp}\n\\subsection{Edmonds' Blossom}\n\\lstinputlisting[language=c++]{./source/05_Graphs/08_Edmonds' Blossom.cpp}\n\\subsection{Eulerian Path or Cycle}\n\\lstinputlisting[language=c++]{./source/05_Graphs/09_Eulerian Path or Cycle.cpp}\n\\subsection{Hierholzer's Algorithm}\n\\lstinputlisting[language=c++]{./source/05_Graphs/10_Hierholzer's Algorithm.cpp}\n\\subsection{2-SAT}\n\\lstinputlisting[language=c++]{./source/05_Graphs/11_2-SAT.cpp}\n\\section{Math}\n\\subsection{Extended Euclidean GCD}\n\\lstinputlisting[language=c++]{./source/06_Math/01_Extended Euclidean GCD.cpp}\n\\subsection{Generalized CRT}\n\\lstinputlisting[language=c++]{./source/06_Math/02_Generalized CRT.cpp}\n\\subsection{Generalized Lucas Theorem}\n\\lstinputlisting[language=c++]{./source/06_Math/03_Generalized Lucas Theorem.cpp}\n\\subsection{Linear Diophantine}\n\\lstinputlisting[language=c++]{./source/06_Math/04_Linear Diophantine.cpp}\n\\subsection{Modular Linear Equation}\n\\lstinputlisting[language=c++]{./source/06_Math/05_Modular Linear Equation.cpp}\n\\subsection{Miller-Rabin and Pollard's Rho}\n\\lstinputlisting[language=c++]{./source/06_Math/06_Miller-Rabin and Pollard's Rho.cpp}\n\\subsection{Berlekamp-Massey}\n\\lstinputlisting[language=c++]{./source/06_Math/07_Berlekamp-Massey.cpp}\n\\subsection{Fast Fourier Transform}\n\\lstinputlisting[language=c++]{./source/06_Math/08_Fast Fourier Transform.cpp}\n\\subsection{Number Theoretic Transform}\n\\lstinputlisting[language=c++]{./source/06_Math/09_Number Theoretic Transform.cpp}\n\\subsection{Gauss-Jordan}\n\\lstinputlisting[language=c++]{./source/06_Math/10_Gauss-Jordan.cpp}\n\\subsection{Derangement}\n\\lstinputlisting[language=c++]{./source/06_Math/12_Derangement.cpp}\n\\subsection{Bernoulli Number}\n\\input{./source/06_Math/13_Bernoulli Number.tex}\n\\subsection{Forbenius Number}\n\\input{./source/06_Math/14_Forbenius Number.tex}\n\\subsection{Stars and Bars with Upper Bound}\n\\input{./source/06_Math/15_Stars and Bars with Upper Bound.tex}\n\\subsection{Arithmetic Sequences}\n\\input{./source/06_Math/16_Arithmetic Sequences.tex}\n\\subsection{FWHT}\n\\lstinputlisting[language=c++]{./source/06_Math/17_FWHT.cpp}\n\\section{Strings}\n\\subsection{Aho-Corasick}\n\\lstinputlisting[language=c++]{./source/07_Strings/01_Aho-Corasick.cpp}\n\\subsection{Eertree}\n\\lstinputlisting[language=c++]{./source/07_Strings/02_Eertree.cpp}\n\\subsection{Manacher's Algorithm}\n\\lstinputlisting[language=c++]{./source/07_Strings/03_Manacher's Algorithm.cpp}\n\\subsection{Suffix Array}\n\\lstinputlisting[language=c++]{./source/07_Strings/04_Suffix Array.cpp}\n\\subsection{Suffix Automaton}\n\\lstinputlisting[language=c++]{./source/07_Strings/05_Suffix Automaton.cpp}\n\\section{OEIS}\n\\subsection{A000108 (Catalan)}\n\\lstinputlisting{./source/08_OEIS/01_A000108 
(Catalan).txt}\n\\subsection{A000127}\n\\lstinputlisting{./source/08_OEIS/02_A000127.txt}\n\\subsection{A001434}\n\\lstinputlisting{./source/08_OEIS/04_A001434.txt}\n\\subsection{A018819}\n\\lstinputlisting{./source/08_OEIS/05_A018819.txt}\n\\subsection{A092098}\n\\lstinputlisting{./source/08_OEIS/06_A092098.txt}\n", "meta": {"hexsha": "af340f303daa4eae19521b2de1c10fbfeb4c772a", "size": 7039, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "contents.tex", "max_stars_repo_name": "KerakTelor86/AlgoCopypasta", "max_stars_repo_head_hexsha": "900d989c0651b51b55c551d336c2e92faead0fc8", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-25T05:46:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-25T05:46:11.000Z", "max_issues_repo_path": "contents.tex", "max_issues_repo_name": "KerakTelor86/AlgoCopypasta", "max_issues_repo_head_hexsha": "900d989c0651b51b55c551d336c2e92faead0fc8", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "contents.tex", "max_forks_repo_name": "KerakTelor86/AlgoCopypasta", "max_forks_repo_head_hexsha": "900d989c0651b51b55c551d336c2e92faead0fc8", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9219858156, "max_line_length": 95, "alphanum_fraction": 0.8023867027, "num_tokens": 2115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7634837635542925, "lm_q1q2_score": 0.587180273873987}}
{"text": "\\documentclass[]{article}\n\n%opening\n\\usepackage[a4paper, total={7.5in, 8in}]{geometry}\n\\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\\usepackage{geometry}\n\\usepackage{float}\n\\usepackage{braket}\n\\usepackage{upgreek}\n\\usepackage{bbold}\n\\usepackage{graphicx}\n\\usepackage{wrapfig}\n\\usepackage{cancel}\n\\graphicspath{{./figures/}}\n\\makeatletter\n\\newcommand{\\distas}[1]{\\mathbin{\\overset{#1}{\\kern\\z@\\sim}}}%\n\\newsavebox{\\mybox}\\newsavebox{\\mysim}\n\\newcommand{\\distras}[1]{%\n\t\\savebox{\\mybox}{\\hbox{\\kern3pt$\\scriptstyle#1$\\kern3pt}}%\n\t\\savebox{\\mysim}{\\hbox{$\\sim$}}%\n\t\\mathbin{\\overset{#1}{\\kern\\z@\\resizebox{\\wd\\mybox}{\\ht\\mysim}{$\\sim$}}}%\n}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Ps}{\\mathbb{P}}\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\newcommand{\\indep}{\\perp \\!\\!\\! \\perp}\n\\newcommand\\numberthis{\\addtocounter{equation}{1}\\tag{\\theequation}}\n\\makeatother\n\n\\usepackage{hyperref}\n\\hypersetup{\n\tcolorlinks=true,\n\tlinkcolor=blue,\n\tfilecolor=magenta,      \n\turlcolor=cyan,\n}\n\n\\newtheorem*{mydef}{Definition}\n\n\\title{Easy Expectation Maximisation}\n\\author{Victor Btesh}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction and motivation}\n\nThis small statistical computing project has the aim of providing a general purpose architecture to perform clustering on a set of data points and features. I am sure there exist better more complex packages to handle such problems but I need practice and having such a tool suits my current research needs. Finally, it's fun. Be advised, it will probably (for sure) be buggy, lack rigour and have unclear notation.\n\n\\section{Models}\n\nThis section details the type of model this package will be able to handle. The basic assumptions are that all data points are i.i.d, which is standard, but also that all features are independent given the hidden clustering variable. This allows for much simpler maximisation steps which can be standardised for each distribution. I aim to provide support for all standard distributions from the exponential family, starting with Poisson and Gaussian with known variance and unknown mean. Derivations will be below. I also want it to handle discrete time discrete space Markov chains, as it is a classic use case for this algorithm.\n\n\\subsection{Joint distributions}\n\nGiven two sets of variables, where $h$ is unique, hidden and the cluster multinomial distribution with $m$ possible outcomes and $v = \\{v_1, v_2, ..., v_k\\}$ are visible, and a parameter set $\\theta$, we have must have the following joint probability distribution:\n\\[\n\tp(h, v|\\theta) = p(h|\\theta) \\prod_{j=1}^{k} p(v_j|h, \\theta)\n\\]\nYielding the following if we observed $n$ i.i.d samples, where $i$ are indices, not powers:\n\\[\n\tp(\\vec{h}, \\vec{v}|\\theta) = \\prod_{i=1}^{n} p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)\n\\]\nThis is important as we want the log likelihood to have the following form:\n\\begin{align*}\n\t\\log p(h, v|\\theta) &= \\log \\left[ p(h|\\theta) \\prod_{j=1}^{k} p(v_i|h, \\theta) \\right] \\\\\n\t&= \\log p(h|\\theta) + \\sum_{j=1}^{k} \\log p(v_j|h, \\theta)\n\\end{align*}\nAgain, after observing $n$ i.i.d. 
samples:\n\\[\n\t\\log p(\\vec{h}, \\vec{v}|\\theta) = \\sum_{i=1}^{n}\\left[ \\log p(h^i|\\theta) +  \\sum_{j=1}^{k} \\log p(v_j^i|h^i, \\theta) \\right]\n\\]\n\n\\subsection{Maximisation steps}\nThis form of log likelihood is useful as the energy term, which we want to optimise in the maximisation step, will look as follows:\n\\begin{align*}\n\t\\mathbb{E}(\\theta) &= \\sum_{i=1}^{n} \\left\\langle \\log p(h^i|\\theta) +  \\sum_{j=1}^{k} \\log p(v_j^i|h^i, \\theta) \\right\\rangle_{q^i(h)} \\\\\n\t&= \\sum_{i=1}^{n} \\left[ \\left\\langle \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)}  +  \\sum_{j=1}^{k} \\left\\langle \\log p(v_j^i|h^i, \\theta) \\right\\rangle_{q^i(h)}  \\right] \\\\\n\t&= \\sum_{i=1}^{n}  \\left\\langle \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)}   + \\sum_{i=1}^{n}  \\sum_{j=1}^{k} \\left\\langle \\log p(v_j^i|h^i, \\theta) \\right\\rangle_{q^i(h)}  \\\\\n\t&= \\sum_{i=1}^{n} \\left\\langle \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)}  + \\sum_{j=1}^{k}  \\sum_{i=1}^{n}  \\left\\langle \\log p(v_j^i|h^i, \\theta) \\right\\rangle_{q^i(h)} \\\\\n\\end{align*}\n\nAs a result, when we optimise w.r.t.\\ each parameter in $\\theta$, for instance $\\theta^h$, the parameter of the hidden variable, all other terms can be ignored. \n\n\\subparagraph{On $q^i(h)$, the variational distribution} This is the variational distribution over hidden states $h$ w.r.t.\\ which we are performing the Expectation step. Two things are worth noting here. First, $q(h)$ actually represents $p(h|v, \\theta^{old})$, which is the probability of being in a given cluster given observed variables and previous values of $\\theta$. This is because the expectation step is simply setting $q(h) = p(h|v, \\theta^{old})$. The second thing is that it does not depend on the $\\theta$ we are currently optimising but on $\\theta^{old}$, i.e. the $\\theta$ optimised in the previous iteration of the algorithm. Rigorously, $q^i(h)$ should be written as $p^i(h|v, \\theta^{old})$. Finally, the $i$ index represents the fact that we compute the $p^i(h|v, \\theta^{old})$ for each observation, therefore leading to a different $q(h)$ for each sample. \n\n\\subparagraph{On $\\theta$, the parameter matrix} Note $\\theta$ is a matrix where each row corresponds to parameters for each variable associated with a specific cluster. If we have only one parameter per variable, as in a set of means for Gaussians, then $\\theta$ will be a $c$ by $k+1$ matrix where $c$ is the number of clusters and $k$ is the number of observed variables. If visible variables have multiple parameters, such as multinomial distributions, $\\theta$ will be a $c$ by $k+1$ by $o$ 3d matrix, where $o$ is the number of parameters, i.e. outcomes, to learn for each cluster for a given observed variable. As observed variables can have different numbers of parameters, e.g. if we observe a multinomial and a Poisson random variable, we will treat each column in $\\theta$ separately, guaranteeing that each $\\theta^k$ is at most a 2d matrix. \\\\\n\nOptimisation should happen w.r.t.\\ each entry of $\\theta$. We are actually computing the gradient w.r.t.\\ $\\theta$ for all variables. As discussed, we split $\\theta$ to obtain one $\\theta^k$ for each variable. 
We would then have for the prior probability of being in a given cluster:\n\n\\begin{align*}\n\t\\nabla_{\\theta^h} \\mathbb{E}(\\theta) &= \\nabla_{\\theta^h} \\left[ \\sum_{i=1}^{n} \\left\\langle \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)} \\right] \\\\\n\t&= \\sum_{i=1}^{n} \\nabla_{\\theta^h} \\left\\langle  \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)} \\\\\n\t&= \\sum_{i=1}^{n} \\left\\langle \\nabla_{\\theta^h} \\log p(h^i|\\theta) \\right\\rangle_{q^i(h)} \\\\\n\\end{align*}\nWhat we get here is a generic expression that can be applied to any variable in the model. The gradient can be pushed inside the expectation as $q(h|v)$ does not depend on $\\theta$. As such, we can find generic expressions for the gradient of each standard distribution w.r.t.\\ a set of parameters, where each parameter in the set corresponds to one of the values of $h$, i.e. our clusters. \n\n\\subsubsection{Prior distribution over hidden states}\n\\paragraph{Log likelihood}\n\\[\n\t\\log p(h^i|\\theta^h) = \\log \\theta^h\n\\]\nWhere $\\theta^h$ is the vector of probabilities for each cluster.\n\\paragraph{Optimal parameter value}\n\\[\n\t\\argmax_{\\theta^h} \\mathbb{E}(\\theta) = \\frac{1}{n} \\sum_{i=1}^{n} q^i(h)\n\\]\nWhere $\\theta^h$ is the vector of probabilities for each cluster and $q^i(h) = p(h^i|v^i, \\theta^{old})$.\n\\paragraph{Derivation}\n\nLet us continue here to derive the simplest case of the distribution over hidden states, which is the prior $p(h|\\theta)$:\n\\begin{align*}\n\t&= \\sum_{i=1}^{n} \\nabla_{\\theta^h}  \\sum_{j=1}^{m} q^i(h_j) \\log \\theta^h_j  \\\\\n\t&= \\sum_{i=1}^{n}  \\frac{q^i(h)}{\\theta^h} \\\\\t\n\\end{align*}\n\nWe add a Lagrange multiplier, as $p(h|\\theta)$, being a probability distribution, must satisfy $\\sum_{j=1}^{m} p(h_j|\\theta) = 1$, which can be equivalently written as $\\sum_{j=1}^{m} \\theta^h_j = 1$, such that\n\\begin{align*}\n\tf(\\theta^h) &= \\mathbb{E}(\\theta) \\\\\n\tg(\\theta^h) &= \\sum_{j=1}^{m} \\theta^h_j - 1\n\\end{align*}\nand\n\\[\n\tL(\\theta^h, \\lambda) = f(\\theta^h) + \\lambda g(\\theta^h)\n\\]\nwe can compute the gradient and maximise given the constraints:\n\\begin{align*}\n\t\\nabla L(\\theta^h, \\lambda) &= \\nabla f(\\theta^h) + \\lambda \\nabla g(\\theta^h) \\\\\n\t\t\t\t\t\t\t\t&= \\sum_{i=1}^{n} \\frac{q^i(h)}{\\theta^h} + \\lambda\n\\end{align*}\nOptimising by setting it equal to 0, we get the following:\n\\[\n\t\\theta^h = \\frac{\\sum_{i=1}^{n} q^i(h)}{-\\lambda} = \\frac{1}{n} \\sum_{i=1}^{n} q^i(h)\n\\]\nWhere it can be shown that $\\lambda$ acts as a normalisation constant, here $-n$ with $n$ the number of observations. The expression is therefore the average of all probabilities in the variational distribution over the whole sample.\n\n\\subsubsection{Multinomial distribution (categorical random variables)}\n\nThis distribution should be used to describe categorical variables.\n\nWe call $\\theta^{m}$ a generic $c \\times o$ parameter matrix where $c$ is the number of hidden clusters, $o$ the number of categories/outcomes in the observed multinomial distribution and each entry $\\theta^{m}_{co}$ is the probability of observing outcome $o$ given cluster $c$. In addition, let $p(v^i|h^i, \\theta^m)$ refer to this generic multinomial distribution. Note that normalisation of $\\theta^{m}$ is done row-wise.\n\n\\paragraph{Log likelihood}\n\\[\n\t\\log p(v^i|h^i,\\theta^m) = \\log \\theta^{m}\n\\]\nThus the log likelihood is simply the element wise log of $\\theta^{m}$. 
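Written out entry-wise (a clarifying restatement of the statement above, using the same notation), the contribution of this variable to the energy is
\[
	\sum_{i=1}^{n} \left\langle \log p(v^i|h^i, \theta^{m}) \right\rangle_{q^i(h)} = \sum_{i=1}^{n} \sum_{c} q^i(h=c) \log \theta^{m}_{c\,v^i},
\]
i.e. each observation contributes the log of the row-$c$ entry for its observed outcome $v^i$, weighted by the variational probability of cluster $c$.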
\n\\paragraph{Optimal parameter value}\n\\[\n\t\\argmax_{\\theta^{m}} \\mathbb{E}(\\theta) \\propto \\sum_{i=1}^{n} q^i(h) \\mathbb{1}{[v^i = v]}\n\\]\nIntuitively, what this expression means is that to find the values of the parameters $\\theta^{m}$ that maximise $\\mathbb{E}(\\theta)$, we must count all cases where a given visible outcome $v$ has been observed, weighted by the variational distribution.\n\n\\paragraph{Derivation}\n\nThe derivation here is similar to the case of the hidden state, as the distributions are of the same family. Recall that $\\theta^{m}$ is a generic $c \\times o$ parameter matrix where $c$ is the number of hidden clusters and $o$ the number of categories/outcomes in the observed multinomial distribution.\n\n\n\\subsubsection{Poisson random variables}\n\nThis distribution should be used to model random variables that represent counts or the occurrence of events.\n\n\\paragraph{Log likelihood}\n\\[\n\\log p(v^i=v|h^i, \\Lambda) \\propto v \\log \\Lambda - \\Lambda\n\\]\n\\paragraph{Optimal parameter value}\n\\[\n\\argmax_{\\Lambda \\in \\theta} \\mathbb{E}(\\theta) =  \\frac{\\sum_{i=1}^{n} q^i(h) v^i}{\\sum_{i=1}^{n} q^i(h)}\n\\]\n\n\\paragraph{Derivations}\n\n\\subparagraph{Log likelihood}\n\nA Poisson random variable has only a rate parameter, generally labelled $\\lambda$. Let us define $\\Lambda \\in \\theta$ to be a vector of length $c$ of rate parameters, where each entry is the rate parameter corresponding to one value of the cluster variable $h$. Let us remind ourselves then of the expression for the likelihood of observing a certain count given the clusters:\n\\[\np(v^i=v|h^i, \\Lambda) = \\frac{\\Lambda^v e^{-\\Lambda}}{v!} \\propto \\Lambda^v e^{-\\Lambda}\n\\]\nThe log likelihood is then:\n\\begin{align*}\n\t\\log p(v^i=v|h^i, \\Lambda) &= \\log \\left[ \\frac{\\Lambda^v e^{-\\Lambda}}{v!} \\right] \\\\\n\t&= \\log \\Lambda^v + \\log e^{-\\Lambda} - \\log v!\\\\\n\t&= v \\log \\Lambda - \\Lambda - \\log v!\n\\end{align*}\n\n\\subparagraph{Optimisation}\n\nLet us now find the expression for the values of $\\Lambda \\in \\theta$ that maximise the energy $\\mathbb{E}(\\theta)$. We can start from the generic expression of the gradient of the energy w.r.t.\\ the parameters $\\Lambda$ of a visible variable $v$.\n\n\\begin{align*}\n\t\\nabla_{\\Lambda} \\mathbb{E}(\\theta) &= \\nabla_{\\Lambda}\\left[ \\sum_{i=1}^{n}  \\left\\langle \\log p(v^i|h^i, \\theta) \\right\\rangle_{q^i(h)} \\right] \\\\\n\t&= \\nabla_{\\Lambda} \\left[ \\sum_{i=1}^{n}  \\left\\langle v^i \\log \\Lambda - \\Lambda - \\log v^i! \\right\\rangle_{q^i(h)} \\right] \\\\\n\t&= \\sum_{i=1}^{n} \\nabla_{\\Lambda} \\sum_{h} q^i(h) (v^i \\log \\Lambda - \\Lambda - \\log v^i!) \\\\\n\t&= \\sum_{i=1}^{n} q^i(h) \\nabla_{\\Lambda} (v^i \\log \\Lambda - \\Lambda - \\log v^i!) \\\\\n\t&= \\sum_{i=1}^{n} q^i(h) \\left(\\frac{v^i}{\\Lambda} - 1 \\right)\n\\end{align*}\n\nWe set this to 0 to optimise and get the following expression.\n\n\\begin{align*}\n\t\\sum_{i=1}^{n} q^i(h) \\left( \\frac{v^i}{\\Lambda} - 1 \\right) &= 0 \\\\\n\t\\sum_{i=1}^{n} q^i(h) \\frac{v^i}{\\Lambda} &= \\sum_{i=1}^{n} q^i(h) \\\\\n\t\\Lambda &= \\frac{\\sum_{i=1}^{n} q^i(h)v^i}{\\sum_{i=1}^{n} q^i(h) }\n\\end{align*}\n\nPoisson random variables are subject to the constraint $\\Lambda > 0$. Here we can see that the output from the optimisation equation cannot be negative. Indeed, as $v^i$ is a count, it will always be a non-negative integer and $q^i(h)$ is a multinomial probability distribution, whose entries will always be positive. 
We therefore do not need to use an optimisation constraint here; the maximum will always satisfy $\\Lambda > 0$.\n\n\n\\subsubsection{Normal distribution (Gaussian), known variance unknown mean}\n\nThis distribution should be used to model the behaviour of continuous variables that can take any value in $\\R$. It can also be used for positive or negative continuous quantities that do not describe event occurrence, e.g. height, scores on a scale, etc.\n\n\\paragraph{Log likelihood}\n\\[\n\t\\log p(v^i=v|h^i, \\theta^{\\mu}) \\propto -\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2\n\\]\n\\paragraph{Optimal parameter value}\n\\[\n\t\\argmax_{\\theta^{\\mu}} \\mathbb{E}(\\theta) = \\frac{\\sum_{i=1}^{n} q^i(h) v^i}{\\sum_{i=1}^{n} q^i(h)}\n\\]\n\n\\paragraph{Derivations}\n\n\\subparagraph{Log likelihood}\nWe will deal with the case when we assume knowledge of the variance, e.g. approximated by the sample variance, but expect means to differ between clusters. Therefore, we only have one set of parameters to learn here, which we will call $\\theta^{\\mu}$; it is a 1d vector of length $c$, where each entry is the mean for one cluster. Let us remind ourselves of the form of the normal likelihood of observing $v$, with a sample variance $\\sigma^2$ and possible means $\\theta^{\\mu}$ depending on the cluster:\n\\[\np(v^i=v|h^i, \\theta^{\\mu}) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{-\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2} \\propto e^{-\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2}\n\\]\nThe log likelihood is then\n\\begin{align*}\n\t\\log p(v^i=v|h^i, \\theta^{\\mu}) &= \\log \\left[ \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{-\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2} \\right] \\\\\n\t&= \\log \\frac{1}{\\sqrt{2\\pi\\sigma^2}} + \\log e^{-\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2}  \\\\\n\t&=\\log \\frac{1}{\\sqrt{2\\pi\\sigma^2}} -\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2 \\\\\n\t&= - \\frac{1}{2} \\log(2\\pi\\sigma^2)  -\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2 \\\\\n\t&\\propto -\\frac{1}{2\\sigma^2}(v - \\theta^{\\mu})^2\n\\end{align*}\n\n\\subparagraph{Optimisation}\n\nStarting from the generic form of the gradient of the energy, $\\mathbb{E}(\\theta)$, w.r.t.\\ the parameters, $\\theta^{\\mu}$, of a visible variable, $v$, all other terms disappear and we have the following:\n\n\\begin{align*}\n\t\\nabla_{\\theta^{\\mu}} \\mathbb{E}(\\theta) &= \\nabla_{\\theta^{\\mu}}\\left[ \\sum_{i=1}^{n}  \\left\\langle \\log p(v^i|h^i, \\theta) \\right\\rangle_{q^i(h)} \\right]\n\\end{align*}\nreplacing the log likelihood by the expression found above\n\\begin{align*}\n\t&= \\nabla_{\\theta^{\\mu}}\\left[ \\sum_{i=1}^{n}  \\left\\langle \\log p(v^i|h^i, \\theta) \\right\\rangle_{q^i(h)} \\right] \\\\\n\t&= \\nabla_{\\theta^{\\mu}}\\left[ \\sum_{i=1}^{n}  \\left\\langle - \\frac{1}{2} \\log(2\\pi\\sigma^2)  -\\frac{1}{2\\sigma^2}(v^i - \\theta^{\\mu})^2 \\right\\rangle_{q^i(h)} \\right] \\\\\n\t&= - \\sum_{i=1}^{n} \\nabla_{\\theta^{\\mu}} \\left\\langle \\frac{1}{2} \\log(2\\pi\\sigma^2)  + \\frac{1}{2\\sigma^2}(v^i - \\theta^{\\mu})^2 \\right\\rangle_{q^i(h)} \\\\\n\t&= - \\sum_{i=1}^{n} \\nabla_{\\theta^{\\mu}} \\sum_{h} q^i(h) \\left[ \\frac{1}{2} \\log(2\\pi\\sigma^2)  + \\frac{1}{2\\sigma^2}(v^i - \\theta^{\\mu})^2 \\right] \\\\\n\t&= - \\sum_{i=1}^{n} q^i(h) \\nabla_{\\theta^{\\mu}} \\left[ \\frac{1}{2} \\log(2\\pi\\sigma^2)  + \\frac{1}{2\\sigma^2}(v^i - \\theta^{\\mu})^2 \\right] \\\\\n\t&= \\sum_{i=1}^{n} q^i(h) \\frac{1}{\\sigma^2} (v^i - \\theta^{\\mu})  \\\\\n\\end{align*}\n\nNotation is unclear 
here: $\\theta^{\\mu}$ has one entry per cluster, and only the term of the sum over $h$ that matches the entry we are differentiating w.r.t. depends on that entry, so every other term vanishes when taking the gradient. \n\nWe can now optimise by setting this to 0. Note that we do not use a Lagrange multiplier as the mean of the normal distribution can take any value on the real line.\n\n\\begin{align*}\n\t\\sum_{i=1}^{n} q^i(h) \\frac{1}{\\sigma^2} (v^i - \\theta^{\\mu}) &= 0 \\\\\n\t\\sum_{i=1}^{n} q^i(h) \\frac{1}{\\sigma^2}\\theta^{\\mu} &= \\sum_{i=1}^{n} q^i(h) \\frac{1}{\\sigma^2}v^i \\\\\n\t\\frac{1}{\\sigma^2}\\theta^{\\mu} \\sum_{i=1}^{n} q^i(h) &= \\sum_{i=1}^{n} q^i(h) \\frac{1}{\\sigma^2}v^i \\\\\t\n\t\\theta^{\\mu} \\sum_{i=1}^{n} q^i(h) &= \\sum_{i=1}^{n} q^i(h) v^i \\\\\t\n\t\\theta^{\\mu} &= \\frac{\\sum_{i=1}^{n} q^i(h) v^i}{\\sum_{i=1}^{n} q^i(h)} \\\\\n\\end{align*}\n
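In code, this update is just a responsibility-weighted average. A minimal sketch in Python (NumPy assumed; the names \\texttt{q} and \\texttt{v} are illustrative, with \\texttt{q} an $n \\times c$ array of responsibilities $q^i(h)$ and \\texttt{v} the $n$ observations):\n\\begin{verbatim}\nimport numpy as np\n\ndef update_means(q, v):\n    # Numerator: sum_i q^i(h) v^i, one value per cluster.\n    # Denominator: sum_i q^i(h).\n    return (q * v[:, None]).sum(axis=0) / q.sum(axis=0)\n\\end{verbatim}\n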
\n\n\\subsection{Expectation steps}\n\nThe equation below gives the evidence lower bound for the marginal log likelihood, written as the marginal log likelihood minus the KL divergence between the variational distribution and the distribution of the hidden states given the observed variables.\n\\[\n\t\\log p(v|\\theta) \\geq - \\text{KL}[p(h|v,\\theta)||q(h)] + \\log p(v|\\theta)\n\\]\nAs the KL divergence is minimal, i.e. equal to 0, when both distributions are the same, we get the following when the variational distribution, $q(h)$, is set to $p(h|v,\\theta)$:\n\\begin{align*}\n\t\\log p(v|\\theta) &\\geq - \\text{KL}[p(h|v,\\theta)||p(h|v,\\theta)] + \\log p(v|\\theta) \\\\\n\t\\log p(v|\\theta) &\\geq - 0 + \\log p(v|\\theta) \\\\\n\t\\log p(v|\\theta) &= \\log p(v|\\theta)\t\n\\end{align*}\nThus we can see that the bound is tight when $q(h) = p(h|v,\\theta)$. Therefore the expectation step is simply to set the variational distribution to be equal to the posterior. We do not use the normalisation constraint, as explained in the appendix.\n\nLet us derive a generic expression for this. From Bayes' rule and our model's constraints, we have:\n\\begin{align*}\n\tp(h^i|v^i,\\theta) &= \\frac{p(v^i,h^i|\\theta)}{p(v^i|\\theta)} \\\\\n\t&= \\frac{p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)}{\\sum_{h}p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)} \\\\\n\t&\\propto p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)\n\\end{align*}\n\nTaking the log on both sides yields:\n\n\\begin{align*}\n\t\\log p(h^i|v^i,\\theta) &\\propto \\log \\left[ p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta) \\right] \\\\\n\t&= \\log p(h^i|\\theta) + \\sum_{j=1}^{k} \\log p(v_j^i|h^i, \\theta)  \n\\end{align*}\n\nThis expression is easy to deal with, as each log probability sits in its own additive term. We can use the log of the distribution in each case, which is well known for distributions in the exponential family. Also, we can work with the log likelihoods up to proportionality because we normalise $p(h^i|v^i, \\theta)$ afterwards.\n\n\\section{Appendix}\n\n\\subsection{Numerical stability}\n\n\\paragraph{Normalisation of small log likelihoods}\n\nAs the expectation step requires us to compute log likelihoods for the joint distribution of each observation without an easy way to compute the normalisation constant, we need a solution for very negative log values so that they do not go to 0 when exponentiated. Let us remind ourselves of the issue. We have, for the expectation step, the following quantity to compute:\n\n\\begin{align*}\n\tp(h^i|v^i,\\theta) &= \\frac{p(v^i,h^i|\\theta)}{p(v^i|\\theta)} \\\\\n\t&= \\frac{p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)}{\\sum_{h}p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)}\n\\end{align*}\n\nTaking the log on both sides yields:\n\n\\begin{align*}\n\t\\log p(h^i|v^i,\\theta) &= \\log \\left[ \\frac{p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)}{\\sum_{h}p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta)} \\right] \\\\\n\t&= \\log \\left[ p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta) \\right] - \\log \\left[ \\sum_{h}p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta) \\right]  \\\\\n\t&= \\log p(h^i|\\theta) + \\sum_{j=1}^{k} \\log p(v_j^i|h^i, \\theta) - \\log \\left[ \\sum_{h}p(h^i|\\theta) \\prod_{j=1}^{k} p(v_j^i|h^i, \\theta) \\right]  \n\\end{align*}\nThe first two terms form the joint log likelihood and are easily computed by pushing the log inside and adding all inner terms. However, as we are marginalising $h$ out, the normalisation constant is a sum and is not easily computable. This would not be a problem for cases with few features, but we still need a method that can deal with cases in which the probabilities get too small to be numerically represented. \n\nThe solution is to find a way to avoid computing the normalising constant while still ``normalising'' the log likelihood. A thread on StackExchange provides an elegant solution. The thread can be found here: \\url{https://stats.stackexchange.com/questions/66616/converting-normalizing-very-small-likelihood-values-to-probability}.\n\nThe idea is to subtract the maximum logarithm from every log value in the likelihood array. This does not change the relative weights of the entries and renders exponentiation much easier. The maximum entry becomes 0, which yields 1 when exponentiated, while entries with lower log values map to numbers between 0 and 1. 
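\nA minimal sketch of this trick in Python (NumPy assumed; \\texttt{log\\_lik} is an illustrative array holding the unnormalised log joint of one observation for each value of $h$):\n\\begin{verbatim}\nimport numpy as np\n\ndef normalise_log_lik(log_lik):\n    # Subtracting the maximum leaves relative weights unchanged.\n    shifted = log_lik - np.max(log_lik)\n    # The largest entry exponentiates to exp(0) = 1; very negative\n    # entries harmlessly underflow towards 0.\n    weights = np.exp(shifted)\n    # Normalise to recover the posterior without ever evaluating\n    # the normalisation constant on the raw scale.\n    return weights / weights.sum()\n\\end{verbatim}\n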
\n\nThe next step would be to define an underflow threshold when using a programming language that does not automatically flush underflowing values to 0; Python does, so no threshold is needed there.\n\nTherefore, if we have a log likelihood array, $\\vec{L}$, with $m$ entries $\\lambda_1, \\lambda_2, \\dots, \\lambda_m$, we perform the following operation:\n\\[\n\t\\vec{L}_{new} = \\vec{L} - \\max \\vec{L}\n\\]\nWe can then exponentiate $\\vec{L}_{new}$ element-wise and normalise.\n\\end{document}\n", "meta": {"hexsha": "2e5d383e5ffa272c5931b248ba33024d924608af", "size": 21691, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/easy_em.tex", "max_stars_repo_name": "Vbtesh/easy_EM", "max_stars_repo_head_hexsha": "5b8e2dc07f7c63c74e4e3bf641a92ef0814ae622", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/easy_em.tex", "max_issues_repo_name": "Vbtesh/easy_EM", "max_issues_repo_head_hexsha": "5b8e2dc07f7c63c74e4e3bf641a92ef0814ae622", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/easy_em.tex", "max_forks_repo_name": "Vbtesh/easy_EM", "max_forks_repo_head_hexsha": "5b8e2dc07f7c63c74e4e3bf641a92ef0814ae622", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.174556213, "max_line_length": 882, "alphanum_fraction": 0.6865059241, "num_tokens": 7307, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5871802697351098}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 5.3 Deleting a term using patterns}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices(position=independent).\n\n   expr := A_{a b} B^{a b} + A_{a b} A_{c d} B^{a b} B^{c d} - C_{a b} B^{a b}.  # cdb (ex-0503.100,expr)\n\n   zoom       (expr, $A_{a b} A_{c d} Q??$)                                      # cdb (ex-0503.101,expr)\n   substitute (expr, $A_{a b} -> 0$)                                             # cdb (ex-0503.102,expr)\n   unzoom     (expr)                                                             # cdb (ex-0503.103,expr)\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\Dmath*{ \\cdb*{ex-0503.100} }\n   \\Dmath*{ \\cdb*{ex-0503.101} }\n   \\Dmath*{ \\cdb*{ex-0503.102} }\n   \\Dmath*{ \\cdb*{ex-0503.103} }\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "fdc7a5d9f8c44f859e444be9f84033e79f285dd9", "size": 981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0503.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0503.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0503.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 33.8275862069, "max_line_length": 105, "alphanum_fraction": 0.4525993884, "num_tokens": 334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5871802655962328}}
{"text": "%!TEX root = ../CombinatoricsNotes.tex\n\n\\section{Intersecting hypergraphs}\n\nA family $\\F\\subset \\P([n])$ is \\defn{intersecting}[intersecting set system] if every two sets $A,B\\in \\F$ have $A\\cap B\\neq \\emptyset$. If $\\F$ is intersecting, then $|\\F| \\leq 2^{n-1}$ because $\\F$ can contain at most one set in each pair $\\{A,A^c\\}$ for every $A\\subset [n]$. On the other hand, $2^{n-1}$ may be acheived by taking $\\F = \\{A\\subset [n]: x\\in A\\}$ for some $x\\in [n]$.\nWhat if $\\F\\subset [n]^{(r)}$ for some $r$? If $r> n/2$, then $\\F = [n]^{(r)}$ is intersecting.\n\n\\begin{theorem}[\\cite{erdos-ko-rado-1961}] \\label{thm:erdos-ko-rado} \nLet $r \\leq n/2$ and $\\F\\subset[n]^{(r)}$ be intersecting. Then\n\\[\n|\\F| \\leq {n-1\\choose r-1}\n\\]\nwhich can be acheived by $\\F = \\{A\\in [n]^{(r)}: x\\in A\\}$ for some $x\\in [n]$.\n\\end{theorem}\n\\begin{proof}\t\nLet us consider a particular circular ordering of $[n]$ and upper bound the number of sets in $\\F$ which are intervals of size $r$ in this order. \n\\begin{figure}[ht]\n\\usetikzlibrary{fit}\n\\begin{center}\n\\begin{tikzpicture}\n \\begin{scope}\n\\node[left] at (0,0) {$\\circlearrowleft$};\n\n\\foreach \\x in {1,...,8}\n{\n \\pgfmathsetmacro\\myangle{\\x*45}\n\\filldraw[black] (\\myangle:1cm) circle (0.4pt) node[left]{\\x};\n}\n \\end{scope}\n \\node[xshift=45pt] at (0,0) {$=$};\n% \\end{tikzpicture}\n% $=$\n% \\begin{tikzpicture}\n\\begin{scope}[xshift=100pt]\n\\node[left] at (0,0) {$\\circlearrowleft$};\n\\foreach \\x in {1,...,8}\n{\n \\pgfmathsetmacro\\myangle{\\x*45+45}\n\\filldraw[black] (\\myangle:1cm) circle (0.4pt) node[left]{\\x};\n}\n\\end{scope}\n \\node[xshift=145pt] at (0,0) {$\\neq$};\n\n% \\end{tikzpicture}\n% $\\neq$\n% \\begin{tikzpicture}\n\\begin{scope}[xshift=200pt]\n\\node[left] at (0,0) {$\\circlearrowleft$};\n\\foreach \\x in {1,...,8}\n{\n \\pgfmathsetmacro\\myangle{-1*\\x*45+90}\n\\filldraw[black] (\\myangle:1cm) circle (0.4pt) node[left]{\\x};\n}\n\\end{scope}\n\\end{tikzpicture}\n\\end{center}\n\\caption{An illustration of circular orders on $[8]$. We define our order counter-clockwise, and so the order is invariant under rotations, as the equality between the left and center orders demonstrates. However, the right order was obtained by reversing the order of the left, and thus is a new order.}\\label{fig:circ_order}\n\\end{figure}\nLet us prove there are at most $r$ intervals.\nLet us fix an interval $I= (a_1,a_2,\\dotsc,a_r)$, and count the number of intervals which intersect it. For each pair of consecutive points $(a_{i-1},a_i)$\\sidenote{Corresponding to gaps between points} in this interval $I$, there is an interval $I_1$ with first element $a_{i+1}$ and an interval $I_2$ with last element $a_i$, yielding $2(r-1)$ intervals intersecting it. But $\\F$ can contain at most one interval in this pair $(I_1,I_2)$, because $I_1\\cap I_2=\\emptyset$. So we are left with $r-1$ intervals intersecting $I$, along with $I$ itself. Thus, we've found $r$ intersecting intervals in this circular order.\n\nHow many circular orders are there on $[n]$? 
Every circular order corresponds to $n$ permutations (all rotations of each other), and there are $n!$ total permutations, yielding $(n-1)!$ circular orders.\n\n\\begin{marginfigure}\n\\begin{center}\n\\begin{tikzpicture}\n\\begin{scope}\n\\node[left] at (0,0) {$\\circlearrowleft$};\n\n\\foreach \\x in {1,...,3}\n{\n \\pgfmathsetmacro\\myangle{\\x*45}\n\\filldraw[black] (\\myangle:1cm) circle (0.4pt) node[left]{$x_{\\x}$};\n}\n\\foreach \\x in {1,...,5}\n{\n \\pgfmathsetmacro\\myangle{(\\x+3)*45}\n  % \\pgfmathsetmacro\\y{\\x-3}\n\\filldraw[black] (\\myangle:1cm) circle (0.4pt) node[left]{$y_{\\x}$};\n}\n \\end{scope}\n\\end{tikzpicture}\n\\end{center}\n\\caption{An order in which $X = \\{x_1,x_2,x_3\\} \\in [8]^{(3)}$ is an interval, where we've enumerated $[8]\\setminus X = \\{y_1,\\dotsc,y_5\\}$.} \\label{fig:X_interval_circ_order}\n\\end{marginfigure}\nIn how many circular orders is a given $X\\in [n]^{(r)}$ an interval? To obtain an interval, we order the elements of $X$ in $r!$ ways, and then order the rest of the set in $(n-r)!$ ways, yielding $r!(n-r)!$ orders in which $X$ is an interval. See \\cref{fig:X_interval_circ_order} for an illustration. \n\nWe have at most $r$ sets of $\\F$ as intervals per circular order, and $(n-1)!$ circular orders, so at most $r(n-1)!$ sets in $\\F$ over all circular orders.\n\nOn the other hand, each set of $\\F$ has only $r!(n-r)!$ orders in which it is an interval. Thus, we have $|\\F| r!(n-r)!$ intervals corresponding to sets in $\\F$, counted over all orders. Hence,\n\\[\n|\\F| r! (n-r)! \\leq r(n-1)! \n\\]\nwhich completes the proof.\n\\end{proof}\n\n\\begin{remark}\nFor $r < n/2$, if $\\F\\subset [n]^{(r)}$ is intersecting and has maximal size, i.e. $|\\F| = {n-1\\choose r-1}$, then for some $x\\in [n]$, we have\n\\[\n\\F = \\{A\\in [n]^{(r)}: x\\in A \\}.\n\\]\nThe proof is left as an exercise. In particular, there are only $n$ possible extremal families.\n\nConsider $r=n/2$. For each set $A \\in [n]^{(r)}$, there is only one forbidden set $A^c$. So $\\F$ may be formed by choosing one set $A$ from each pair $\\{A,A^c\\}$ arbitrarily. This yields a doubly-exponential number of extremal families.\n\\end{remark}\n\n\\newthought{We will now consider} a generalization of intersecting.\n% \\begin{definition}\nWe say $\\F\\subset [n]^{(r)}$ is \\defn{$t$-intersecting}[t-intersecting] if $|A\\cap B|\\geq t$ for all $A,B\\in \\F$.\n% \\end{definition}\nWe wish to find $m(n,r,t)$, the maximum size of a $t$-intersecting family $\\F$ which is a subset of $[n]^{(r)}$.\nA natural guess of an optimal family would be \n\\[\n \\F_0 := \\F_0(n,r,t) = \\{ A \\in [n]^{(r)}: L\\subset A\\}\n\\] for some $L\\subset [n]$ with $|L| = t$.\nNote $|\\F_0| = {n-t \\choose r-t}$.\nOn the other hand, for which $n,r,t$ is $m(n,r,t) = {n\\choose r}$? I.e., when is the family $[n]^{(r)}$ itself $t$-intersecting?\nFor the worst-case pair, with $A\\cup B = [n]$, we have\n\\[\n|A\\cap B| = |A| + |B| -  |A\\cup B| = 2r - n,\n\\]\nso every pair satisfies $|A\\cap B|\\geq t$ exactly when $2r-t \\geq n$. So, as with the $1$-intersecting case, for large enough $r$ (compared to $t$ and $n$), $[n]^{(r)}$ itself is $t$-intersecting.\n\nWhat about $n=2r-t+1$? Then $|\\F_0| = {2r-2t+1 \\choose r-t}$. 
But we could still take all $r$-element subsets of $[n-1]$ to obtain\n\\[\nm(n,r,t) \\geq {2r -t \\choose r}.\n\\]\nBut, for $t>1$,\n\\[\n{2r -t \\choose r} = {2r-t \\choose r-t} > {2r -2t+1 \\choose r-t},\n\\]\nsince $2r-t > 2r-2t+1$.\nSo $\\F_0$ does not achieve the maximum in this case either.\n\n\\begin{exercise}\nProve that $\\F_0$ is optimal when $n$ is large compared to $r$ and~$t$.\\marginnote{This is the content of \\cref{thm:F0max}.}\n\\end{exercise}\n\n\\lect{1}{26}\n% \\marginnote{Lecture 6: Monday, January 26, 2016.}\n\n\nNow, consider the family\n\\[\n\\F_{r-t} := \\{ A\\in [n]^{(r)}: |L\\cap A|\\geq r\\}\n\\]\nfor some $L$ with $|L| = 2r-t$.\n % maybe: \n % When $n \\leq 2r-t$, we may choose all of $[n]^{(r)}$, and this family is optimal.\nDefine an interpolation between $\\F_0$ and $\\F_{r-t}$ by \n\\[\n\\F_k := \\{A\\in [n]^{(r)}: |A\\cap L|\\geq k+t\\}\n\\]\nfor $|L| = 2k+t$.\nThen for $A,B\\in \\F_k$ we have\n\\[\n|A\\cap B\\cap L| \\geq |A\\cap L| + |B\\cap L| - |L| \\geq 2(k+t) - (2k+t) = t\n\\]\nso for every $0\\leq k \\leq r-t$, the family $\\F_k$ is $t$-intersecting. \n~\\begin{margintable}[4cm]\n\\begin{center}\n\\begin{tabular}{lccc}\\toprule\n & \\multicolumn{3}{c}{$k$} \\\\ \\cmidrule(r){2-4} \n $n$& $0$ &$1$&$2$\\\\\\midrule \n % & & 0 & 1 & 2\\\\ \\midrule\n $8$ &10 & 16 & \\boxed{21}\\\\\n $11$ & 28 & \\boxed{31} & 21\\\\\n $13$ & \\boxed{45}& 41& 21 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{The size of $\\F_k(n)$, given $r=5$ and $t=3$, tabulated over several choices of $k$ and $n$. We see that the maximal $\\F_k(n)$ changes based on both $k$ and $n$.}\\label{tab:Fks}\n\\end{margintable}\n\\begin{example}\nConsider $r=5$, $t=3$. Then\n\\begin{gather*}\t\n|\\F_0(n) | = {n-3 \\choose 2}, \\\\\n|\\F_1(n)| = {5\\choose 4}{n-5 \\choose 1} + 1 = 5(n-5)+1,\\\\\n|\\F_2(n)| = {7\\choose 5} = 21.\n\\end{gather*}\nBy choosing different values of $n$ (see \\cref{tab:Fks}), we see that the maximum size family occurs at different $k$'s: it is a more complicated situation than the $1$-intersecting case.\n\\end{example}\nIn general, for $n\\geq 2k+t$,\n\\[\n| \\F_k(n)| = \\sum_{s=k+t}^{2k+t} {2k+t \\choose s} {n-2k-t \\choose r-s}\n\\]\nwhere we are summing over possible sizes $s$ of $|A\\cap L|$. The first combination comes from choosing within $L$, and the second from choosing outside of $L$.\n\nWe may think of $|\\F_k(n)|$ as a function of $n$. Then it is a polynomial of degree $r-k-t$ in which the coefficient of the monomial of largest degree is positive. Among the $\\F_k(n)$'s, the family $\\F_0(n)$ is eventually of the largest size, because it is the polynomial of highest degree. In fact, $\\F_0(n)$ is eventually the largest family overall, as the following result shows.\n\\begin{theorem} \\label{thm:F0max}\nFor all $r,t$ there exists $n_0$ such that for $n\\geq n_0$,\n\\[\nm(n,r,t) = |\\F_0| = {n-t \\choose r-t}.\n\\]\n\\end{theorem}\n\\begin{proof}\t\nLet $\\F\\subset [n]^{(r)}$ be $t$-intersecting such that $|\\F| = m(n,r,t)$. We claim that, for $n\\geq 2r-t$, there exist $A,B\\in \\F$ such that\n\\[\n|A\\cap B| = t.\n\\]\nAssume not, so that $\\min_{A,B\\in \\F} |A\\cap B| = t+\\ell$ for some $\\ell\\geq 1$, attained by a pair $A,B$. Pick $x\\in A\\cap B$ and $y\\notin A\\cup B$ (possible since $|A\\cup B| = 2r-t-\\ell < n$), and consider the set $A'=(A\\setminus \\{x\\})\\cup \\{y\\}\\in [n]^{(r)}$. Since $|A'\\cap B|=t+\\ell-1 < t+\\ell$, we have $A'\\not \\in \\F$. But for any $C\\in \\F$, we have\n\\[\nt+\\ell \\leq |A\\cap C| \\leq |A'\\cap C| + 1,\n\\]\nso $|A'\\cap C|\\geq t$. 
Thus, $\\F\\cup \\{A'\\}$ is $t$-intersecting and $|\\F\\cup \\{A'\\}| > |\\F|$, which contradicts the maximality of $\\F$.\n\nNow, let $Z=A\\cap B$, where $|A\\cap B|=t$. If $Z\\subset C$ for all $C\\in \\F$, then $|\\F| \\leq {n-t \\choose r-t}$. So we may assume there exists $C\\in \\F$ such that $Z\\not \\subset C$. We will show \n\\begin{equation}\t\\label{eq:XcapAcupBcupC_big} \\tag{$\\star$}\n |X \\cap (A\\cup B\\cup C)|\\geq t+1\n \\end{equation} for all $X\\in \\F$. Since each set is of size $r$, the quantity $L:=|A\\cup B\\cup C|$ satisfies $L \\leq 3r$. This is enough as it implies\n\\[\n|\\F| \\leq \\sum_{s=t+1}^r {L \\choose s} {n-L \\choose r-s}\n\\]\nwhich is a polynomial of degree at most $r-t-1$, and so is eventually less than ${n-t\\choose r-t}$, which is a polynomial of degree $r-t$. \n\nLet us show \\eqref{eq:XcapAcupBcupC_big}. We know\n\\[\nX\\cap (A\\cup B\\cup C) = (X\\cap A)\\cup (X\\cap B)\\cup (X\\cap C).\n\\]\nEach of these sets has size at least $t$; if $|X\\cap (A\\cup B\\cup C)|\\leq t$, then $|X\\cap (A\\cup B\\cup C)| = t$, and in particular,  $Y:=X\\cap A= X\\cap B = X\\cap C$. Then $Y\\subset Z$, but since $|Y| = |Z| = t$, we have $Y=Z$. Then we have $Z=Y = X\\cap C \\subset C$, which is a contradiction.\n\\end{proof}\n\nThe following theorem, presented here without proof, resolves our question.\n\\begin{theorem}[\\cite{Ahlswede-Khachatrian-1997}]\nFor all $n,r,t$,\n\\[\nm(n,r,t) = \\max_k |\\F_k|.\n\\]\n\\end{theorem}\n\n\\newthought{Let us consider} a consequence of \\erdos-Ko-Rado\\sidenote{\\Cref{thm:erdos-ko-rado}}. Let $Z_1,\\dotsc,Z_n$ be independent Bernoulli random variables  each  with expectation value $p > \\tfrac{1}{2}$. Then $\\Pr[Z_i=1] = p$, $\\Pr[Z_i = 0] = 1-p$.\nSuppose the $Z_i$'s are stocks, and the price of each is $\\tfrac{1}{2}$. If we invest \\$0.50, then our expected return is \\$$p$, no matter how we invest.\n\nSuppose we are very conservative and our goal is to have at least $\\tfrac{1}{2}$ in the end. A good strategy is to diversify and invest uniformly in each stock; then the law of large numbers says that as the number of stocks goes to infinity, we almost surely recover our $1/2$.\nWhat's the worst possible strategy? It should be to invest in only 1 stock; in that case, our probability of success is $p$.\n\\begin{theorem}[\\cite{LIGGETT197715}]\nLet $Z_1,\\dotsc,Z_n$ be independent Bernoulli random variables with expectation value $p$. Let $c_1,\\dotsc,c_n\\geq 0$ with $\\sum c_i =1$. Then\n\\[\n\\prob \\Big[ \\sum_{i=1}^nc_i Z_i \\geq \\tfrac{1}{2} \\Big] \\geq p.\n\\]\n\\end{theorem}\n\\begin{proof}\t\nAssume $c_i >0$ for all $i$, and $n$ odd for simplicity. Let $\\F = \\{A\\subset [n]: \\sum_{i\\in A} c_i \\geq \\tfrac{1}{2} \\}$. That is, $\\F$ is the collection of all sets of r.v. such that if exactly those random variables obtain return 1, we did not lose.\nLet $\\F_k = \\F \\cap [n]^{(k)}$ and $f_k = |\\F_k|$. Then\n\\[\n\\prob \\Big[ \\sum_{i=1}^n c_i Z_i \\geq \\tfrac{1}{2}\\Big] = \\sum_{A\\in \\F} \\prob [ \\text{exactly }Z_i \\text{ with indices }i\\in A \\text{ have value 1}].\n\\]\nIf we fix some $A$ of size $k$, what is the probability that exactly these $k$ trials succeed? $p^k(1-p)^{n-k}$. 
Thus,\n\\[\n\\prob \\Big[ \\sum_{i=1}^n c_i Z_i \\geq \\tfrac{1}{2}\\Big] = \\sum_{k=0}^n f_k p^k (1-p)^{n-k}.\n\\]\nOur goal is to show that the LHS is at least $p$.\n\\begin{enumerate}[{Fact }1.]\n\t\\item  $\\F_k$ is intersecting for each $k < n/2$.\n\n\t\\begin{subproof}\n\tSuppose not: then there exist $A,B\\in \\F_k$ such that $A\\cap B = \\emptyset$. Then $\\sum_{i\\in A} c_i \\geq \\frac{1}{2}$, and $\\sum_{i\\in B} c_i \\geq \\frac{1}{2}$. But since each $c_i>0$, we must have $A\\cup B = [n]$, but we know $|A\\cup B|< 2\\cdot n/2=n$.\n\t\\end{subproof}\n\n\t\\item[Corollary.] For $k<n/2$, we have $f_k \\leq {n-1\\choose k-1}$ by \\erdos-Ko-Rado. \\marginnote{This corollary is the essential idea of the proof; the algebraic computation later is long but trivial.}\n\t\\item $f_k + f_{n-k} \\geq {n\\choose k}$ for all $k$.\n\n\t\\begin{subproof}\n\tFor every $A$, either $A\\in \\F$ or $[n]\\setminus A \\in \\F$: if the sum of some collection of $c_i$ is less than one half, then the sum of the rest must be at least one half. Thus, for $A\\in [n]^{(k)}$, either $A\\in \\F_k$, or $[n]\\setminus A\\in \\F_{n-k}$, and $f_k + f_{n-k} \\geq |[n]^{(k)}| = {n\\choose k}$.\n\t\\end{subproof}\n\\end{enumerate}\nNow, \n\\begin{align*}\t\n\\sum_{k=0}^n f_k p^k (1-p)^{n-k}&= \\sum_{k<n/2} (f_k + f_{n-k}) p^{n-k}(1-p)^k \\\\\n&\\qquad+ \\sum_{k< n/2}  f_k \\left( p^k (1-p)^{n-k} - p^{n-k}(1-p)^k\\right).\\\\\n\\intertext{By fact 2,}\n&\\geq \\sum_{k<n/2}{n\\choose k} p^{n-k}(1-p)^k \\\\\n&\\qquad+ \\sum_{k< n/2}  f_k \\left( p^k (1-p)^{n-k} - p^{n-k}(1-p)^k\\right).\\\\\n\\intertext{Since $ p^k (1-p)^{n-k} - p^{n-k}(1-p)^k < 0$,  the corollary to fact 1 yields}\n&\\geq \\sum_{k<n/2} {n\\choose k}p^{n-k}(1-p)^k \\\\\n&\\qquad+ \\sum_{k< n/2}  {n-1\\choose k-1}\\left( p^k (1-p)^{n-k} - p^{n-k}(1-p)^k\\right). \\\\\n\\intertext{Now we may group powers of $p$ to obtain}\n&= \\sum_{k<n/2} p^{n-k} \\left[ {n\\choose k} - {n-1 \\choose k-1}  \\right](1-p)^k \\\\\n&\\qquad\\qquad +p^k \\left[ {n-1\\choose k-1}(1-p)^{n-k} \\right].\\\\\n\\intertext{Using ${n\\choose k}- {n-1 \\choose k-1}={n-1\\choose k}$, this becomes}\n&= \\sum_{k<n/2} p^{n-k} \\left[ {n-1\\choose k}   \\right](1-p)^k + p^k \\left[ {n-1\\choose k-1} \\right](1-p)^{n-k}.\\\\\n\\intertext{Now, in the first term we may replace $k$ by $n-k$, so that the sum runs over $k>n/2$, and use ${n-1\\choose n-k}={n-1\\choose k-1}$ to get}\n&= \\sum_{k>n/2} p^{k} \\left[ {n-1\\choose k-1}   \\right](1-p)^{n-k} \\\\\n&\\qquad+\\sum_{k<n/2} p^k \\left[ {n-1\\choose k-1} \\right](1-p)^{n-k}\\\\\n&=\\sum_{k=1}^n {n-1 \\choose k-1} p^k (1-p)^{n-k}\\\\\n&= p \\sum_{\\ell=0}^{n-1} {n-1 \\choose \\ell} p^\\ell (1-p)^{n-1-\\ell} \\\\\n&= p( p + (1-p))^{n-1} =p. 
\\qedhere \n\\end{align*}\n\\end{proof}", "meta": {"hexsha": "9f67363834f240581fab1b7e4308013a529c5f3f", "size": 15010, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/ch3_intersecting_hypergraphs.tex", "max_stars_repo_name": "ericphanson/CombinatoricsNotes", "max_stars_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-24T06:43:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-20T04:27:41.000Z", "max_issues_repo_path": "chapters/ch3_intersecting_hypergraphs.tex", "max_issues_repo_name": "ericphanson/CombinatoricsNotes", "max_issues_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/ch3_intersecting_hypergraphs.tex", "max_forks_repo_name": "ericphanson/CombinatoricsNotes", "max_forks_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-04T19:38:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-04T19:38:24.000Z", "avg_line_length": 51.937716263, "max_line_length": 619, "alphanum_fraction": 0.6313790806, "num_tokens": 5958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8824278710924295, "lm_q1q2_score": 0.5871768344674921}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture XX Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n% Mathematical Operations:\n\n% Sum: $$\\sum_{n=a}^{b} f(x) $$\n% Integral: $$\\int_{lower}^{upper} f(x) dx$$\n% Limit: $$\\lim_{x\\to\\infty} f(x)$$\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Line Integrals (Continued) $-$ 16.2}\n\nA line integral defined by the function $f(x,y,z)$ may be integrated by using the formula:\n\n$$\\int_C f(x,y,z)\\,ds=\\int_C f(x(t),y(t),z(t))\\sqrt{\\left(\\frac{dx}{dt}\\right)^2+\\left(\\frac{dy}{dt}\\right)^2+\\left(\\frac{dz}{dt}\\right)^2}\\,dt$$\n\nGiven a vector function, $\\overrightarrow{r}(t)$, the above formula may be simplified to yield:\n\n$$\\int_C f(x,y,z)\\,ds=\\int_C f(\\overrightarrow{r}(t))|\\overrightarrow{r}'(t)|\\,dt$$\n\nLine integrals may also appear with respect to $dx$ and $dy$. 
Line integrals may also appear with respect to $dx$ and $dy$. This may be simplified by finding $dx$ and $dy$ in terms of $dt$:\n\n$$\\int_C f(x,y)\\,dx = \\int_a^b f(x(t),y(t))x'(t)\\,dt$$\n$$\\int_C f(x,y)\\,dy = \\int_a^b f(x(t),y(t))y'(t)\\,dt$$\n\nWhen line integrals appear in such a form, it is common to abbreviate them:\n\n$$\\int_C P(x,y)\\,dx+\\int_C Q(x,y)\\,dy=\\int_C P(x,y)\\,dx+Q(x,y)\\,dy$$\n\nIn physics, the formula for work was defined as $W=Fd\\cos(\\theta)$. For a force that varies along a curve, the work may be found by integrating:\n\n$$W=\\int_a^b \\overrightarrow{F}(\\overrightarrow{r}(t))\\cdot\\frac{\\overrightarrow{r}'(t)}{|\\overrightarrow{r}'(t)|}\\,ds$$\n$$W=\\int_a^b \\overrightarrow{F}(\\overrightarrow{r}(t))\\cdot\\frac{\\overrightarrow{r}'(t)}{|\\overrightarrow{r}'(t)|}|\\overrightarrow{r}'(t)|\\,dt$$\n$$W=\\int_a^b \\overrightarrow{F}(\\overrightarrow{r}(t))\\cdot\\overrightarrow{r}'(t)\\,dt$$\n$$W=\\int_C \\overrightarrow{F}\\cdot d\\overrightarrow{r}$$\n\nwhere $d\\overrightarrow{r}=\\overrightarrow{r}'(t)\\,dt$. This yields the work done on a particle by a field.\n\n\\end{document}\n\n", "meta": {"hexsha": "b21810a3abd36c97d9c1c3c7129a437912c765c4", "size": 2942, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture20.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture20.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture20.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4318181818, "max_line_length": 188, "alphanum_fraction": 0.5968728756, "num_tokens": 946, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.8031737869342623, "lm_q1q2_score": 0.5871670964814212}}
{"text": "\\section{Decision Tree Surrogate Model}\n\n\\begin{frame}[c]\n\\Huge{\\centerline{Decision Tree Surrogate Model}}\n\\end{frame}\n\n%----------------------------------------------------------------------------------------\n\n\\begin{frame}\\frametitle{Decision Tree Surrogate Model}\n\t\\begin{itemize}\n\t\t\\item Given our learned function $g$ and set of predictions, $g(\\mathbf{X}) = \\hat{\\mathbf{Y}}$, we can train a decision tree surrogate model:\n\t\t\t\\begin{equation}\n\t\t\t\\begin{aligned}\n\t\t\t\th_{\\text{tree}}: \\mathbf{X},\\hat{\\mathbf{Y}} \\xrightarrow{\\mathcal{A}_{\\text{surrogate}}} h_{\\text{tree}}\n\t\t\t\\end{aligned}\n\t\t\t\\end{equation}\n\n\t\t\\item The decision tree surrogate model displays an approximate flow chart of $g$'s decision making process to increase model transparency.\n\t\t\\item It also shows likely important features and the most important interactions in $g$. \n\t\t\n\t\\end{itemize}\n\\end{frame}\n\n%----------------------------------------------------------------------------------------\n", "meta": {"hexsha": "48087d3ce13db00feff8a0b2a49225833dfb758f", "size": 971, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/sections/decision-tree.tex", "max_stars_repo_name": "sparsh999gupta/interpretable-ml", "max_stars_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2018-08-21T09:37:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-04T17:34:07.000Z", "max_issues_repo_path": "tex/sections/decision-tree.tex", "max_issues_repo_name": "sparsh999gupta/interpretable-ml", "max_issues_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2018-09-03T19:27:19.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-06T21:27:47.000Z", "max_forks_repo_path": "tex/sections/decision-tree.tex", "max_forks_repo_name": "sparsh999gupta/interpretable-ml", "max_forks_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-10-22T16:22:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-24T17:24:41.000Z", "avg_line_length": 38.84, "max_line_length": 144, "alphanum_fraction": 0.5901132853, "num_tokens": 240, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8031737940012418, "lm_q1q2_score": 0.5871670922354162}}
{"text": "\n\\section{Additions}\n\nCalculer les additions suivantes :\n\\begin{questions}\n\t\n\t\\question[1] \u00a0$(-15) + (+ 5)$\n\t\\fillwithdottedlines{1cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\\question[1] \u00a0$(+20) + (+ 40)$\n\t\\fillwithdottedlines{1cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\n\t\n\t\\question[1] \u00a0$(-305) + (-27)$\n\t\\fillwithdottedlines{1cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\\question[1] \u00a0$(-35) + (+27)  + (-20)$\n\t\\fillwithdottedlines{1.5cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\\question[2] \u00a0$(-35) + (-27)  + (+20)$\n\t\\fillwithdottedlines{1.5cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\n\t\n\t\\question[2] \u00a0$(-75) + (-25) + (+37)$\n\t\\fillwithdottedlines{1.5cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\\question[2] \u00a0$(+85) + (-17) + (+15) + (-23)$ \n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\t\n\t\n\t\\question[2] \u00a0$(-43) + (-25) + (+57) + (+25)$ \n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\\end{questions}\n\n\\section{Soustractions}\n\nTransformer les soustractions en additions avant de les calculer\n\\begin{questions}\n\t\n\t\n\t\n\t\n\t\\question[2] \u00a0$(+78) - (+24)$\n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\n\t\n\t\\question[2] \u00a0$(-34) - (+57)$\n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\t\n\n\t\\question[2] \u00a0$(+24) - (-32)$\n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\t\n\t\t\n\t\\question[2] \u00a0$(-34) + (+57) - (-23) $\n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\n\t\\question[2] \u00a0$(-55) - (-35) - (+27) $\n\t\\fillwithdottedlines{2cm}\n\t\\begin{solution}\n\t\t\n\t\\end{solution}\n\n\t\n\t\n\t\n\\end{questions}", "meta": {"hexsha": "be952d5ee671dafb4684bddcd9b6200892405692", "size": 1597, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "college/5e/5_Relatifs/interros/interro8/v2.tex", "max_stars_repo_name": "malhys/maths_projects", "max_stars_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "college/5e/5_Relatifs/interros/interro8/v2.tex", "max_issues_repo_name": "malhys/maths_projects", "max_issues_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "college/5e/5_Relatifs/interros/interro8/v2.tex", "max_forks_repo_name": "malhys/maths_projects", "max_forks_repo_head_hexsha": "540337598037f7925fbf2f0e7232c4e18813c25b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.787037037, "max_line_length": 64, "alphanum_fraction": 0.6017532874, "num_tokens": 644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7310585669110202, "lm_q1q2_score": 0.5871670914336609}}
{"text": "\\section{Normal Coordinate Calculation}\\index{Normal coordinates!calculation of|(}\r\n\\subsection{Calculation of Vibrational Frequencies}\r\nFor a simple harmonic oscillator the period $r$ is given by:\r\n$$\r\nr = 2 \\pi \\sqrt{\\frac{\\mu}{k}}\r\n$$\r\nwhere $k$ is the force constant.  A molecule can absorb a photon that vibrates\r\nat the same frequency as one of its normal vibrational modes.  That is, if a\r\nmolecule, initially in its ground vibrational state, could be excited so that\r\nit vibrated at   a given frequency, then that molecule could absorb a photon\r\nthat vibrates at the same frequency.  Although vibrational frequencies are\r\nusually expressed as kilohertz or megahertz, in chemistry vibrational\r\nfrequencies are normally  expressed in terms of the number of vibrations that\r\nwould occur in the time that light travels one centimeter, i.e., $\\bar{\\nu} =\r\n1/cr$ Using this equation for simple harmonic motion, the vibrational frequency\r\ncan be written as:\r\n$$\r\n\\bar{\\nu} = \\frac{1}{2\\pi c}\\sqrt{\\frac{k}{\\mu}}.\r\n$$\r\nIn order for $\\bar{\\nu}$ to be in cm$^{-1}$, $c$, the speed of light must be in\r\ncm.sec$^{-1}$, $k$, the force constant in erg/cm$^2$, and $\\mu$ the reduced\r\nmass in grams.\r\n\r\nFor a molecule, the force constants are obtained by diagonalization of the\r\nmass-weighted Hessian matrix.  Most of the work in calculating vibrational\r\nfrequencies is spent in constructing the Hessian.\r\n\r\n\\subsubsection*{Calculation of the Hessian matrix}\r\nThe elements of the Hessian are defined as:\r\n$$\r\nH_{i,j} = \\frac{\\delta^2E}{\\delta x_i\\delta x_j}\r\n$$\r\nand  are generated by use of finite displacements, that is, for each  atomic\r\ncoordinate $x_i$, the coordinate is first incremented by a small amount, the\r\ngradients  calculated, then the coordinate is decremented and the gradients\r\nre-calculated. The second derivative is then obtained from the difference of\r\nthe two derivatives and the step size:\r\n$$\r\nH_{i,j} = \\frac{(\\frac{\\delta E}{\\delta x_i})_{_{+0.5\\Delta x_j}}-\r\n                (\\frac{\\delta E}{\\delta x_i})_{_{-0.5\\Delta x_j}}}\r\n          {\\Delta x_j}.\r\n$$\r\nThis is done for all $3N$ Cartesian coordinates.  Because the Hessian is\r\nsymmetric, that is $H_{i,j}=H_{j,i}$, the random errors that occur in the\r\ngradient calculation can be reduced (by a factor of $\\sqrt{2}$) by re-defining\r\nthe Hessian as:\r\n$$\r\nH_{i,j} = \\frac{1}{2}\\left(\\frac{(\\frac{\\delta E}{\\delta x_i})_{_{+0.5\\Delta x_j}}-\r\n                (\\frac{\\delta E}{\\delta x_i})_{_{-0.5\\Delta x_j}}}\r\n          {\\Delta x_j}+\r\n \\frac{(\\frac{\\delta E}{\\delta x_j})_{_{+0.5\\Delta x_i}}-\r\n                (\\frac{\\delta E}{\\delta x_j})_{_{-0.5\\Delta x_i}}}\r\n          {\\Delta x_i}\\right).\r\n$$\r\nA call to the energy - gradient function \\comp{COMPFG} will generate the\r\ngradients in kcal/mol/\\AA ngstrom at a given geometry.  
A call to the energy--gradient function \\comp{COMPFG} will generate the\r\ngradients in kcal/mol/\\AA ngstrom at a given geometry.  These can then be\r\nconverted into millidynes/\\AA ngstrom (or 10$^5$ dynes/cm) as follows:\r\n$$\r\nH_{i,j} (\\mathrm{millidynes/\\AA}) = 10^5\\frac{({\\mathrm{Kcal\\ to\\\r\nergs}})}{(\\mathrm{\\AA\\ to\\ cm})^2(\\mathrm{Mole\\ to\\ molecule})}H_{i,j}(\\mathrm{kcal/mol/\\AA ^2 })\r\n$$\r\nor\r\n$$\r\nH_{i,j} (\\mathrm{millidynes/\\AA}) = 10^5\\frac{4.184*10^3*10^7}{(10^{-8})^{2}\r\n(6.022*10^{23})}H_{i,j}(\\mathrm{kcal/mol/\\AA ^2}).\r\n$$\r\nDiagonalization of this matrix yields the force constants of the system.\r\n\r\nIn order to calculate the vibrational frequencies, the Hessian matrix is first\r\nmass-weighted:\r\n$$\r\nH^m_{i,j} = \\frac{H_{i,j}}{\\sqrt{M_i*M_j}}.\r\n$$\r\n\r\nThen the Hessian is converted from millidynes per \\AA ngstrom to dynes per\r\ncentimeter by multiplying by 10$^5$.\r\n\r\nDiagonalization of this matrix yields eigenvalues, $\\epsilon$, which represent\r\nthe quantities $k/\\mu$, from which the vibrational frequencies can be\r\ncalculated:\r\n$$\r\n\\bar{\\nu}_i = \\frac{1}{2\\pi c}\\sqrt{\\epsilon_i}.\r\n$$\r\n\r\n\\subsection{Mechanism of the frame in FORCE calculation}\r\n\\index{Frame!description of}\r\nThe FORCE calculation uses Cartesian coordinates, and all $3N$ modes are\r\ncalculated, where $N$ is the number of atoms in the system.  Clearly, there will\r\nbe 5 or 6 ``trivial'' vibrations, which represent the three translations\r\nand two or three rotations.  If the molecule is exactly at a stationary point,\r\nthen these ``vibrations'' will have a force constant and frequency of\r\nprecisely zero.  If the force calculation was done correctly, and the\r\nmolecule was not exactly at a stationary point, then the three translations\r\nshould be exactly zero, but the rotations would be non-zero.  The extent to\r\nwhich the rotations are non-zero is a measure of the error in the\r\ngeometry.\r\n\r\nIf the distortions are non-zero, the trivial vibrations can interact\r\nwith the low-lying genuine vibrations or rotations, and with the transition\r\nvibration if present.\r\n\r\nTo prevent this the analytic form of the translations and rotations is\r\ncalculated, and arbitrary eigenvalues assigned; these are 500, 600, 700, 800,\r\n900, and 1000 millidynes/\\AA ngstrom for Tx, Ty, Tz, Rx, Ry and Rz (if\r\npresent), respectively.  The rotations are about the principal axes of inertia\r\nfor the system, taking into account isotopic masses. The ``force matrix''\r\nfor these trivial vibrations is determined, and added on to the calculated\r\nforce matrix.  After diagonalization the arbitrary eigenvalues are subtracted\r\noff the trivial vibrations, and the resulting numbers are the ``true'' values.\r\nInterference with genuine vibrations is thus avoided.\r\n\r\n\\subsection{Vibrational Analysis}\\index{Vibrational analysis}\r\nAnalyzing normal coordinates is very tedious.  Users are normally familiar\r\nwith the internal coordinates of the system they are studying, but not familiar\r\nwith the Cartesian coordinates.  To help characterize the normal\r\ncoordinates, a very simple analysis is done automatically, and users are\r\nstrongly encouraged to use this analysis first, and then to look at the\r\nnormal coordinate eigenvectors.\r\n\r\nIn the analysis, each pair of bonded atoms is examined to see if there is\r\na large relative motion between them.  By bonded is meant \\index{Van der\r\nWaals|ff} within the van der Waals' distance.  
If there is such a motion,\r\nthe indices of the atoms, the relative distance in \\AA ngstroms, and the\r\npercentage radial motion are printed.  Radial plus tangential motion adds\r\nto 100\\%, but as there are two orthogonal tangential motions and only one\r\nradial, the radial component is printed.\r\n\r\nVibrations in the range +50 to {\\em i}50 cm$^{-1}$ cannot be described\r\naccurately in the vibrational analysis, due to numerical difficulties. However,\r\nthe normal coordinates and frequencies are printed in the section above\r\n``DESCRIPTION OF VIBRATIONS''.\r\n\r\n\\subsection{Reduced masses}\\index{Reduced mass|ff}\\index{Simple harmonic motion}\r\n\\index{Mass!reduced}\r\nA molecular vibration normally involves all the atoms moving simultaneously.\r\nThis is clearly very different from the simple harmonic motion of a mass\r\nattached to a spring that is attached to an immovable object.  Nevertheless, it\r\nis convenient to visualize a molecular vibration as consisting of a single\r\nmass, $M$, on the end of a spring of force constant $k$. For such a system,\r\nthe period of vibration, $T$, is given by:\r\n$$\r\nT=2\\pi\\sqrt{\\frac{M}{k}}.\r\n$$\r\n\r\nHow, then, do we relate the complicated motion of a molecular vibration to the\r\nmass and spring model?\r\n\r\nDuring a molecular vibration, each atom follows a simple harmonic motion. So\r\nthe problem is, to what extent does each atom contribute to the mass, and to\r\nwhat extent does each atom contribute to the spring?\r\n\r\nIn order to answer this, first consider some simple systems.  In the system\r\nH--X, where X has a very large mass compared to that of the H, the effective\r\nmass is obviously that of H.  In H$_2$, the effective mass is half that of a\r\nsingle H.  Why is this so?  In H--X, particle X is stationary, and particle H\r\ncontributes 100\\% of the energy to the vibration.  In H$_2$, each particle\r\nobviously contributes 50\\%, but now the center of mass is half way between the\r\ntwo particles.  If the force constants are the same in H--X and in H--H, then\r\nthe period of vibration of H--X will be $\\sqrt{2}$ times that of H--H.  This is\r\nthe same period as for a system of two particles, each of which has a mass\r\ntwice that of a H particle.  For a system of two particles, $A$ and $B$, having\r\nmasses $M_A$ and $M_B$, the vibrational wavefunction, $\\psi_v$, is:\r\n$$\r\n\\psi_v=\\sqrt{\\frac{M_B}{M_A+M_B}}\\psi_A-\\sqrt{\\frac{M_A}{M_A+M_B}}\\psi_B.\r\n$$\r\nThis can be interpreted as: particle $A$ moves a distance proportional to $\\frac{M_B}{M_A+M_B}$\r\nin the time particle $B$ moves $\\frac{M_A}{M_A+M_B}$.  The center of\r\nmass, $\\rho$, stays constant:\r\n$$\r\n\\rho=\\sum_iM_i\\delta x_i = M_A\\frac{M_B}{M_A+M_B}- M_B\\frac{M_A}{M_A+M_B} = 0.\r\n$$\r\nThe squares of the coefficients of the wavefunction represent the contributions\r\nto the motion.  The effective mass, $\\mu$, for this system is:\r\n$$\r\n\\mu = \\frac{M_A\\times M_B}{M_A+M_B}.\r\n$$\r\nWhat fraction is due to $A$ and what fraction is due to $B$?  From the\r\nwavefunction, the intensity of $A$ is $\\frac{M_B}{M_A+M_B}$, and the relative\r\nrate of motion is also $\\frac{M_B}{M_A+M_B}$, so the contribution to the\r\neffective mass due to $A$ is:\r\n$$\r\n\\left(\\frac{M_B}{M_A+M_B}\\right)^2M_A;\r\n$$\r\nlikewise, for $B$:\r\n$$\r\n\\left(\\frac{M_A}{M_A+M_B}\\right)^2M_B.\r\n$$\r\nConsider two particles, $A$ and $B$, of mass 1 and 4, respectively.  
The\r\nwavefunction for the vibration is:\r\n$$\r\n\\psi_v = \\sqrt{\\frac{4}{5}}\\psi_A-\\sqrt{\\frac{1}{5}}\\psi_B,\r\n$$\r\nwhere $A$ contributes\r\n$$\r\n\\frac{16}{25}\\times 1 = 0.64\r\n$$\r\nand $B$ contributes\r\n$$\r\n\\frac{1}{25}\\times 4 = 0.16\r\n$$\r\nto the effective mass of $\\frac{4}{5}$.\r\n\r\nIn other words, the contribution to the effective mass is equal to the\r\nintensity of the wavefunction on each atom, times the mass of the atom, times\r\nthe intensity of the wavefunction.  This is intuitively correct:  the total\r\nvibration is composed of contributions from each particle, and the amount each\r\nparticle contributes is proportional to its intensity in the wavefunction.  The\r\nmass of each particle is also proportional to its intensity in the\r\nwavefunction.\r\n\r\nExtension to polyatomic molecules is now trivial.  The effective mass is given\r\nby:\r\n$$\r\n\\mu = \\sum_A <\\!\\psi_AM_A\\psi_A\\!>\\times <\\!\\psi_A|\\psi_A\\!>.\r\n$$\r\nWhen written in this way, the quantum nature of the expression is obvious.\r\nHowever, for computational convenience, the effective mass is rewritten as:\r\n$$\r\n\\mu = \\sum_A(\\psi_A^2)^2\\times M_A\r\n$$\r\nor\r\n$$\r\n\\mu = \\sum_A(\\sum_{i=1}^3c_{A_i}^2)^2M_A.\r\n$$\r\nThis expression is suitable for most systems.  However, it is not a\r\nwell-defined quantity.  Under certain circumstances involving degenerate\r\nvibrations, the quantity $\\mu$ can become ill-defined.  This phenomenon can be\r\nattributed to the  fact that the reduced mass is not an observable.\r\n\r\n\\subsection{Effective masses}\\index{Mass!effective}\\index{Effective mass}\r\nAnother way of regarding the effective mass of a mode can be derived from\r\nconsideration of the simple harmonic oscillator:\r\n$$\r\n   E =  \\sqrt{\\frac{k}{\\mu}}.\r\n$$\r\nDiagonalization of the mass-weighted Hessian yields the energies, and from the\r\nnormal coordinates the force-constants can readily be derived.  From these two\r\nquantities, the effective mass can readily be calculated:\r\n$$\r\n\\mu = \\frac{k}{E^2}.\r\n$$\r\nFor a homonuclear diatomic, the effective mass calculated this way is equal\r\nto the mass of one atom.\r\n\r\n\\subsection{Travel}\\index{Travel}\r\nTo continue the idea of representing a normal mode as a simple harmonic\r\noscillator, the distance the atoms move through can be represented as the\r\ndistance the idealized mass moves through.  This can be calculated knowing the\r\nenergy of the mode and the force constant:\r\n$$\r\n     E = \\frac{1}{2}kx^2.\r\n$$\r\nHere $k$ is the force-constant for the mode, and is given by\r\n$$\r\nk = <\\psi|Hessian|\\psi>;\r\n$$\r\n$E$ is the energy of the mode.\r\n\r\nFrom this, the distance, $x$, which the system moves through, can be calculated\r\nfrom\r\n$$\r\nx =\\sqrt{\\frac{2\\times 1.196266\\times 10^8 \\times 1000 \\times 10^8 \\nu}{N k}},\r\n$$\r\nwhere $1.196266\\times 10^8$ is the conversion factor from cm$^{-1}$ to ergs,\r\n1000 converts from millidynes to dynes, $10^8$ converts from cm to \\AA , and\r\n$N$ converts from moles to molecules.\r\n\r\nNote that $x$, which in the output is called TRAVEL, is in mass weighted\r\nspace, not simple space.  This quantity can also be calculated using the DRC,\r\nby depositing one quantum of energy into a vibrational mode.  For a system at a\r\nstationary point, the relevant keywords would be \\comp{IRC=1 DRC t=1m}.  For\r\nlarger systems, the time may need to be increased.  At least one coordinate\r\nmust have an optimization flag set to 1.  
This is required in order to instruct\r\nthe DRC to print the turning points. \\index{Normal coordinates!calculation\r\nof|)}\r\n", "meta": {"hexsha": "1fc95ea49f45c4947f44f63b51fa749d9c33d626", "size": 12839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuals/MOPAC2000_manual/t_force.tex", "max_stars_repo_name": "openmopac/MOPAC-archive", "max_stars_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-12-16T20:53:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T20:54:11.000Z", "max_issues_repo_path": "manuals/MOPAC2000_manual/t_force.tex", "max_issues_repo_name": "openmopac/MOPAC-archive", "max_issues_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuals/MOPAC2000_manual/t_force.tex", "max_forks_repo_name": "openmopac/MOPAC-archive", "max_forks_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.518115942, "max_line_length": 98, "alphanum_fraction": 0.7211620843, "num_tokens": 3574, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267830311354, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5871223900630426}}
{"text": "\\section{Background}\nIn this section we will discuss the various tools and break down how we will\nformulate the project. In the next section we will discuss our proposition of\nhow to achieve these goals.\n\n\\subsection{Q-Learning}\nFor this project we will be using a Q-Learning algorithm to play these\nvideogames. The Q-Learning algorithm is a simple reinforcement learner that\nfollows Bellman's Equation. A Q matrix can be defined where each row represents\na different state and each column represents an action for that state. These\ncan be used to solve different Markov Decision Processes (MDPs). An example\nMDP is shown in Figure~\\ref{fig:MDP}.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.6\\textwidth]{MDP.pdf}\n\\caption{Sample Markov Decision Process}\n    \\label{fig:MDP}\n\\end{figure}\nEach node, labeled $s_i$, represents a state and the connections between nodes\nrepresent actions. Each action has an associated probability with it. There \nare certain ``rewards\" that an actor gets for taking certain actions, $a$, which\nleaves them in a new state, $s'$. We then want to find a set of actions, also\nknown as a policy ($\\pi$) that gives us our maximal value ($V$). Knowing this\nwe want to create a learner that will learn the best policy, set of actions,\nthat will maximize its reward. The Bellman Equation, Equation~\\ref{eq:Bell}, \ndoes just this. In this equation $\\alpha$ represents the learning rate, \n$\\gamma$ is the discount rate, and $Q(s',a')$ is the value in our $Q$ matrix\ngiven the next state and action.\n\\begin{equation}\n    Q(s,a) = Q(s,a) + \\alpha(r + \\gamma \\max_{a'}Q(s',a') - Q(s,a))\n    \\label{eq:Bell}\n\\end{equation}\nEssentially what we are doing is looking at the reward we get for taking an \naction and all future rewards given our best policy. A discount rate is \napplied so that rewards that are achieved sooner are weighted more heavily\nthan rewards further away. A simple example of why we might want to do this\nis because if we give a mouse a reward for completing a maze we want the mouse\nto finish the maze as fast as possible and not take an infinite time to finish.\nThe learning rate applies a weight to to our new reward and future rewards. \nAdditionally all actions are probabilistic. If we have the actions forward,\nleft, and right, if we select forward there is still a probability that the\nagent goes left or right. This is commonly known as a Monte-Carlo walk (or\nDrunken Walk). \n\nFor our videogames we discussed that there are potentially an unknown number\nof states. There are also limitations in computational hardware. To account\nfor this we can simply limit the depth of our lookahead values. This will\ncreate an approximate Q-Learner and allow us to solve problems with unknown\nor infinite number of states. \n\n\\subsection{Deep Q-Learning}\nAn extension of Q-Learning is deep Q-Learning (DQN) is by using a neural network \nto compute the above solution. We can accomplish this because a neural network \nis a universal function approximater, meaning it can approximate any function. 
\n\\subsection{Deep Q-Learning}\nDeep Q-Learning (DQN) extends Q-Learning by using a neural network\nto compute the above solution. We can accomplish this because a neural network\nis a universal function approximator, meaning it can approximate any function.\nFollowing DeepMind's work~\\cite{DeepMind}, if we rewrite the Bellman Equation as\n\\begin{equation}\n    Q^*(s,a) = \\mathbb{E}_{s'\\sim\\mathcal{E}}\\left[r + \\gamma\\max_{a'}Q^*(s',a')\\right]\n\\end{equation}\nthen we can define a loss function $L_i(\\theta_i)$,\n\\begin{equation}\n    L_i(\\theta_i) = \\mathbb{E}_{s,a\\sim\\rho(\\cdot)}\\left[(y_i - Q(s,a;\\theta_i))^2\\right]\n\\end{equation}\nwhere $y_i = \\mathbb{E}_{s'\\sim\\mathcal{E}}\\left[r + \\gamma\\max_{a'}Q(s',a';\\theta_{i-1})\\right]$. Recognizing that this is the same format as the Mean Square\nError (MSE), we can derive the following gradient (Equation~\\ref{eq:DQNgrad}).\n\n\\begin{equation}\n    \\nabla_{\\theta_i}L_i(\\theta_i) = \n\\mathbb{E}_{s,a\\sim\\rho(\\cdot);s'\\sim\\mathcal{E}}\n\\left[\\left(r + \\gamma\\max_{a'}Q(s',a';\\theta_{i-1}) - Q(s,a;\\theta_i)\\right) \n\\nabla_{\\theta_i}Q(s,a;\\theta_i)\\right]\n\\label{eq:DQNgrad}\n\\end{equation}\n\nHere we have everything we need to create a DQN. The universal-approximation\nproperty handles the lookahead depth for us, because the network learns an\napproximation to the full value function rather than expanding it explicitly.\nGiven that our games are NP- or PSPACE-Hard, such an approximation is reasonable.\n
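A minimal sketch of one gradient step implementing Equation~\\ref{eq:DQNgrad} (illustrative PyTorch code with names of our own choosing; \\texttt{q\\_net} plays the role of $\\theta_i$ and \\texttt{q\\_target} is a frozen copy playing the role of $\\theta_{i-1}$):\n\\begin{lstlisting}[caption=One DQN gradient step (illustrative sketch), frame=tlrb]{Name}\nimport torch\nimport torch.nn.functional as F\n\ndef dqn_step(q_net, q_target, optimizer, s, a, r, s_next, gamma=0.99):\n    # Target y_i uses the frozen parameters theta_{i-1}.\n    with torch.no_grad():\n        y = r + gamma * q_target(s_next).max(dim=1).values\n    # Q(s, a; theta_i) for the actions actually taken.\n    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)\n    loss = F.mse_loss(q_sa, y)\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\\end{lstlisting}\n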
An example of the values is shown\nbelow.\n\\\\\n\\begin{figure}[h!]\n\\begin{minipage}{0.47\\textwidth}\n    \\begin{lstlisting}[caption=data.json, frame=tlrb]{Name}\n{\n\"info\": {\n    \"lap\": {\n        \"address\": 8261825,\n        \"type\": \"|u1\"\n    },\n    \"lapsize\": {\n        \"address\": 8257864,\n        \"type\": \"|u1\"\n    },\n    \"coins\": {\n        \"address\": 8261120,\n        \"type\": \"|u1\"\n    },\n    \"minute\": {\n        \"address\": 8257796,\n        \"type\": \"|u1\"\n    },\n    \"second\": {\n        \"address\": 8257794,\n        \"type\": \"|u1\"\n    \\end{lstlisting}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.47\\textwidth}\n    \\begin{lstlisting}[caption=scenario.json, frame=tlrb]{Name}\n{\n\"done\": {\n    \"variables\": {\n        \"finish\": {\n            \"reference\": \"coins\",\n            \"op\": \"eq\"\n        }\n    }\n},\n\"reward\": {\n    \"variables\": {\n        \"rank\": {\n            \"reward\": 1.0\n        }\n    }\n},\n\"actions\": [\n    [[], [\"UP\"], [\"DOWN\"]],\n    [[], [\"LEFT\"], [\"RIGHT\"]],\n    [[], [\"A\"], [\"B\"], ...\n}\n    \\end{lstlisting}\n\\end{minipage}\n    \\caption{Snippets from data.json and scenario.json for SNES Super Mario Kart}\n\\end{figure}\nThe data.json file contains memory addresses that correspond to the RAM locations the\ngame uses to determine certain aspects of the game. We also need to add these\nto a rambase value that is given by Retro (warning: this is not noted within\nthe documentation but is required).\nAdditionally, scenario.json can contain information about the action set. We can\neither restrict our action set to a specific set of buttons and button \ncombinations, or we can expose all possible actions to the agent. Initially\nwe will be limiting our agent, since certain actions like [\"DOWN\",\"UP\"] are\nnot likely to be useful, excluding glitches. Super Mario Kart has the following\nset of actions: Steering with the D-Pad, B for Accelerate, A to use an item, \nY for Brakes, X for the rear-view, L or R for Hop/Power-slide (same action), \nStart to pause the game, and Select for Rear-View. We will limit the agent\nso that it may not look behind itself, cannot pause the game, and we will select\nR for hop/power-slide (because that is what I use when playing). So our new full\naction set for the steering will be \\{\\{UP\\},\\{DOWN\\},\\{LEFT\\},\\{RIGHT\\},\n\\{UP,LEFT\\}, \\{UP, RIGHT\\}, \\{DOWN,RIGHT\\}, \\{DOWN,LEFT\\}\\}. \nThis represents the left side of the controller. For the right side of the \ncontroller we will assume that the agent can play like a human does \n(this will also limit glitches that the agent might be able to exploit). \nThe control action set will be defined as \\{\\{B\\},\\{A\\},\\{Y\\},\n\\{R\\},\\{A,B\\},\\{B,R\\},\\{A,R\\}\\}. The full action set is the combination of \nactions from the steering set and the control set. \n\nRAM addresses can be found by using emulator tools, such as BizHawk~\\cite{BH}, \nthat allow for hex searching and manipulation. Some values are easy to find \nsince their values in RAM identically match the values displayed on the screen.\nOther values may be more difficult to find given that they may be manipulated\nby the program. An example of finding a memory address in BizHawk is shown in\nFigure~\\ref{fig:hex} in the Appendix. 
We were unable to find all the required\naddresses on our own, but a prominent Speed-Runner\\footnote{People who compete\nto finish a game as quickly as possible} named SethBling~\\cite{SethBling} \nkindly provided me with Lua scripts to help me find the missing addresses.\nSection~\\ref{sec:RAM} provides a more detailed discussion on finding RAM values.\n\nTo summarize the integration: we need to fully define how our agent will receive\nrewards, when an episode ends, and our action set.\n\n\\subsection{Rewards}\nDefining rewards is always a challenging problem in any machine learning task. \nThere are many fantastic examples where even a well-thought-out reward function\nleads to outcomes not expected by the machine learning engineer. Rewards can be \nsimple or extremely complex. For example, a simple reward function for Super\nMario Bros might be to use the x position on the screen and reward the agent\nfor making forward progression through the level. \n\nA more detailed discussion of rewards for Super Mario Kart is given in\nAppendix~\\ref{sec:SMKRewards}. Additional sections are provided for \ndetermining the end state (Section~\\ref{sec:endstate}) and visualizing the\nenvironment (Section~\\ref{sec:env}), specifically in the SMK world.\n\n", "meta": {"hexsha": "32c4deb99541a7dcf2085dbd8bdb546839aaef54", "size": 10155, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project/Background.tex", "max_stars_repo_name": "stevenwalton/510-Multi-Agent_Systems", "max_stars_repo_head_hexsha": "ba14c5adfc5cf7ad334ceedf08a7e0285ee25632", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project/Background.tex", "max_issues_repo_name": "stevenwalton/510-Multi-Agent_Systems", "max_issues_repo_head_hexsha": "ba14c5adfc5cf7ad334ceedf08a7e0285ee25632", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project/Background.tex", "max_forks_repo_name": "stevenwalton/510-Multi-Agent_Systems", "max_forks_repo_head_hexsha": "ba14c5adfc5cf7ad334ceedf08a7e0285ee25632", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.676056338, "max_line_length": 158, "alphanum_fraction": 0.7363860167, "num_tokens": 2674, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267626522814, "lm_q2_score": 0.6757645879592641, "lm_q1q2_score": 0.5871223592717003}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts, amssymb, hyperref,graphicx}\n\\usepackage[letterpaper, total={7.5in, 9in}]{geometry}\n\\title{Intermediate Analysis}\n\\author{Grant Smith }\n\\date{Spring 2022}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Sequence Convergence}\n\n\\begin{enumerate}\n    \\item Prove the convergence of $$\\lim_{n\\rightarrow \\infty} (\\sqrt{n^{2}+n}-n)$$\n    \\begin{enumerate}\n        \\item First, notice that this equation approaches 0.5 as n increases.  Thus, we will be proving:\n        $$\\forall \\epsilon > 0,  \\exists N s.t. \\forall n > N, \\left| \\sqrt{n^{2}+n}-n - \\frac{1}{2}\\right| < \\epsilon$$\n        \\item which is the same as proving both of the following:\n        $$ \\sqrt{n^{2}+n}-n - \\frac{1}{2} < \\epsilon$$\n        $$ \\sqrt{n^{2}+n}-n - \\frac{1}{2} > -\\epsilon$$\n        \\item then add $n + 1/2$ to both sides\n        $$ \\sqrt{n^{2}+n} < \\epsilon + n + \\frac{1}{2}$$\n        $$ \\sqrt{n^{2}+n} > -\\epsilon + n + \\frac{1}{2}$$\n        \\item square both sides to get:\n        $$ n^{2}+n < \\epsilon^2 + 2\\epsilon n + 2 \\epsilon .5 + n^2 + 2 n .5 + .25$$\n        $$ n^{2}+n > \\epsilon^2 - 2\\epsilon n - 2 \\epsilon .5 + n^2 + 2 n .5 + .25 $$\n        \\item cancel the $n^{2}+n$ from both sides to get\n        $$ 0 < \\epsilon^2 + 2\\epsilon n + 2 \\epsilon .5 + 0 .25$$\n        $$ 0 > \\epsilon^2 - 2\\epsilon n - 2 \\epsilon .5 + 0 .25 $$\n        \\item Bring the $2\\epsilon n$ to the other side\n        $$ -2\\epsilon n < \\epsilon^2  + 2 \\epsilon .5 + 0 .25$$\n        $$ 2 \\epsilon n > \\epsilon^2  - 2 \\epsilon .5 + 0 .25 $$   \n        \\item divide by $2\\epsilon$ on the second, and divide by $-2\\epsilon$ on the first, which flips the direction of the $<$ to $>$.\n        $$ n > \\frac{\\epsilon^2  + 2 \\epsilon .5 + 0 .25}{-2\\epsilon} $$\n        $$ n > \\frac {\\epsilon^2  - 2 \\epsilon .5 + 0 .25}{2 \\epsilon} $$\n        \\item Given that both of these need to be true, and the second is bigger than the first because the second is positive and the first is negative, we can just use the second requirement. Also, notice that this is a decreasing function in $\\epsilon$, as desired.\n    \\end{enumerate}\n    \\newpage\n    \\item Prove the convergence of $$\\lim_{n\\rightarrow \\infty} \\left(1+\\frac{1}{n}\\right)^{3}$$\n    \\begin{enumerate}\n        \\item First, note that this sequence approaches 1. Thus, we need to prove that:\n            $$\\forall \\epsilon > 0,  \\exists N s.t. 
\\forall n > N, \\left| \\left(1+\\frac{1}{n}\\right)^{3} - 1\\right| < \\epsilon$$\n        \\item Noting that \n        $$\\left(1+\\frac{1}{n}\\right)^{3} > 1$$\n        We can drop the absolute value and get\n        $$\\left(1+\\frac{1}{n}\\right)^{3} - 1 < \\epsilon$$\n        \\item Move the 1 to the other side to get:\n        $$\\left(1+\\frac{1}{n}\\right)^{3} < 1+ \\epsilon$$\n        \\item Take the cube root of both sides\n        $$1+\\frac{1}{n} < \\left(1+ \\epsilon\\right) ^{\\frac{1}{3}}$$\n        \\item subtract 1 from both sides\n        $$\\frac{1}{n} < \\left(1+ \\epsilon\\right) ^{\\frac{1}{3}} -1$$\n        \\item take reciprocals of both sides (both are positive), which flips the inequality\n        $$\\frac{1}{\\left(1+ \\epsilon\\right) ^{\\frac{1}{3}} -1} < n$$\n        \\item Switch sides:\n        $$n > \\frac{1}{\\left(1+ \\epsilon\\right) ^{\\frac{1}{3}} -1}$$\n        which increases as $\\epsilon$ decreases\n        \n    \\end{enumerate}\n    \\newpage\n    \\item Prove the convergence of: \n    $$\\lim_{n\\rightarrow \\infty} \\frac{\\sin n}{n^{2}}$$\n        \\begin{enumerate}\n        \\item First, note that this sequence approaches 0. Thus, we need to prove that:\n            $$\\forall \\epsilon > 0,  \\exists N s.t. \\forall n > N, \\left|\\frac{\\sin n}{n^{2}}\\right| < \\epsilon$$\n            \\item Noting that $n^2 > 0$, we can rewrite as: \n            $$ \\frac{\\left|\\sin n\\right|}{n^{2}} < \\epsilon$$\n            \\item Noting that \n            $$\\left|\\sin n\\right| \\leq 1 \\rightarrow \\frac{\\left|\\sin n\\right|}{n^{2}} \\leq \\frac{1}{n^2} \\leq \\frac{1}{n} $$\n            \\item Which means that if we can prove convergence of $1/n$, then we have proven the convergence of the sequence in question. Thus, we want:\n            $$\\frac{1}{n} < \\epsilon$$\n            or\n            $$\\frac{1}{\\epsilon} < n$$\n            or\n            $$n > \\frac{1}{\\epsilon}$$\n\n        \n    \\end{enumerate}\n                \\newpage\n            \\item Prove that if $\\{b_n\\}$ is a sequence of positive terms and $b_n \\rightarrow b > 0$, then there is a positive lower bound $m > 0$ such that $b_n \\geq m$ for all $n$.\n\\begin{itemize}\n    \\item If $b_n \\rightarrow b > 0$, then for any $\\epsilon$ there exists an $N$ such that for all $n > N$, $|b_n - b| < \\epsilon$. In particular, if we let $\\epsilon = b/2$, then for all $n > N$, we can ensure that $b_n > b/2$, and since $b > 0$, we know that $b/2 > 0$. \n    \\item Next, since there are only a finite number of $b_n$ values before $N$, we can take the minimum of those values. Call this $m'$.  We know that each term is positive, so $m'$ is positive.\n    \\item Next, we take the minimum of $b/2$ and $m'$, and we can take half of that value as well, and call it $m$: $\\min  \\left\\{b/2,m'\\right\\}/2 = m$. This $m$ is one of infinitely many possible lower bounds.\n\\end{itemize}\n\\end{enumerate}\n\n\n\n\\section{Measure Theory Limit}\n\nI really like single-sided epsilon balls, and the related idea of single-sided limits.  I especially like the deleted versions in which the point we're considering is deleted, but I'll explain more about that soon.  Single-sided limits are great because on either side of a single domain point, a function can behave totally differently. Which is sort of the whole idea. Every point of the continuum is a hinge point between what's on its left and on its right, and the behavior of the two sides can be completely different. 
So I think single-sided limits/neighborhoods are nice because they allow us to treat these two parts of functions in their own right.\n\nDeleted neighborhoods and limits which exclude the individual point are great because they ignore what happens at the single point. And what's also interesting is that they ignore every other point. For example, if you pick a given point, you can make your delta ball small enough to exclude that point. The only situation in which you can include something would be an infinite set in which the point in question is a limit point of that set. So no finite set or individual point is ever considered in a single-sided, deleted neighborhood. And in fact, no point is ever considered in a deleted neighborhood, regardless of whether it's left or right. And this applies to every single domain point, so under this thinking, no finite set of points or single point ever matters. \n\nI also really like the idea of considering properties that hold almost everywhere, and ignoring properties that hold almost nowhere. The notion of almost everywhere is awesome because it ignores the behavior of sets with zero measure. This means that any behaviors that are oddities can be ignored. \n\nAnd what's interesting about combining these ideas is that if we consider properties in the \"almost everywhere\" sense, there is no difference between deleted neighborhoods and non-deleted neighborhoods. The singularity of the neighborhood point is ignored under \"almost everywhere\".  So this idea of almost everywhere is a souped-up version of the deleted neighborhood.\n\n\\subsection{Things that do matter}\nLimits that diverge, such as $\\sin \\frac{1}{x}$ at $x=0$, and limits that go to infinity, such as $1/x$. \n\nSo what we really want to know is how often one of these two things happens. When does the limit diverge, and when does it go to infinity? And we want to know if these happen at all in a function, and if so, how frequently. \n\nAs a small point, I'd like to include $\\infty$ in the domain in the sense that $\\displaystyle \\lim_{x \\to \\infty} x = \\infty$, which I'd include in this section of problems, and I'd also say that $\\displaystyle \\lim_{x \\to \\infty} \\sin x$ does not exist.  \n\n\\subsection{Definition}\n\n$$\\forall I, $$\n\n$$\\exists B_{\\epsilon} , \\mu(B_{\\delta} \\setminus f^{-1}(B_\\epsilon)) = 0$$\n\n\\subsection{New Theory}\n\nHow often can a function break essential continuity?\n\nWorking in the context of real functions:\n\nLet's say a function is essentially continuous at a domain point $c$ iff\n\n$$\\exists L, \\forall B(L), \\exists \\delta, B_\\delta (c) \\subseteq \\textrm{Cl}(f^{-1}(B(L)))$$\n\nWhich means it breaks essential continuity if: \n\n$$\\forall L, \\exists B(L), \\forall \\delta, B_\\delta (c) \\nsubseteq  \\textrm{Cl}(f^{-1}(B(L)))$$\n\nI'm wondering about the size of this set of discontinuities. I'm pretty sure it can at least be countable. 
But I'm wondering if it can be uncountable or have Lebesgue measure $> 0$.\n\n\\subsection{New Theory From Discontinuity Classification}\n\nI'm going to define an essential discontinuity as the following:\n\nPick a ball in the domain, and call it $B$.\n\nThere exists a nonzero-measure set of codomain points, call the set $R$, each of whose elements, $r$, has a ball around it whose inverse image is within $B$.\n\nSome examples of this are $\\sin(\\frac{1}{x})$ and $\\frac{1}{x}$ because in both cases, there is a nonzero-measure set of codomain points whose surrounding ball has an inverse image inside a ball around the domain point in question.\n\nI think I also need to look into oscillation. The Wikipedia page is \\url{https://en.wikipedia.org/wiki/Oscillation_(mathematics)/}\n\nWe know from \\url{https://en.wikipedia.org/wiki/Classification_of_discontinuities/} that the set $D$ of discontinuities is always $F_\\sigma$.  I'm pretty sure this can be bigger than countable.  I'm also pretty sure that it can have measure above zero because later in the article, Lebesgue's Theorem is discussed, which says that a function is Riemann integrable iff $D$ has measure zero, which makes me think that $D$ can have nonzero measure in general, but it wouldn't be Riemann integrable.  But I am pretty sure this includes discontinuities of the first kind such as removable discontinuities or jumps.  So what I'm wondering is: we know a function's set of discontinuities can have nonzero measure, but what about if we only discuss discontinuities of the second kind... can the measure of that set be more than zero?\n\n\\subsection{Essential Continuity}\n\nIn my theory of almost continuity, we can't include jump discontinuities because that would break our properties such as the essential mean value theorem.\n\n\\subsection{well-behavedness}\n\nJump discontinuities are pretty well behaved. They do, though, break almost continuity and therefore break the mean value theorem, etc. \n\nBut they aren't essential discontinuities, so I'd still say they're pretty well-behaved.\n\n\\subsection{sets and partitions of the reals}\n\nI was wondering if you could partition the reals in such a way that around a given point, you could have a nonzero measure amount of each partition (the original way I thought about it was just to have two partitions and to determine if on either side of the point, you could have some of both partitions).  It turns out the answer is yes, and there are a couple good examples.\n\nOne example is $f(x) = \\sin(1/x)$ at zero. To partition the set, we could have $A$ be the set of $x$ such that $f(x)>0$ and the other partition, $B$, be the set of $x$ such that $f(x)\\leq 0$.  Then no matter how small you draw your ball around zero, you'll have some nonzero amount of both $A$ and $B$.\n\nI also wondered if you could do this with infinite partitions. In particular, countable, uncountable, or measure $>0$.  It turns out that you can do it with countable partitions, but I'm not sure about uncountable or greater.  
Here is an example of the countable case:\n\n\\begin{center}\n\\includegraphics[width=7in]{set-partitions.jpg}\n\\end{center}\n\n\\end{document}\n", "meta": {"hexsha": "f5dd4248d7c99a0496024c5cf510e9eac67291b9", "size": 11725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "GSmithApps/Real-Analysis", "max_stars_repo_head_hexsha": "6d54f5c1d3beddd129876a5bb417d024e314ba51", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.tex", "max_issues_repo_name": "GSmithApps/Real-Analysis", "max_issues_repo_head_hexsha": "6d54f5c1d3beddd129876a5bb417d024e314ba51", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "GSmithApps/Real-Analysis", "max_forks_repo_head_hexsha": "6d54f5c1d3beddd129876a5bb417d024e314ba51", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.7916666667, "max_line_length": 814, "alphanum_fraction": 0.6829850746, "num_tokens": 3302, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5871032645839185}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\n\\title{A Simple Solver for Matrix Equations}\n\\author{Andrew Winkler, Ph.~D.}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Abstract}\n\nIn the companion paper to this post, we derive a solver for linear systems \\begin{math}{}Ax=b\\end{math}, which works for real, complex, rational,\nand certain other scalars. It is related to the Cholesky decomposition, but sidesteps creating a factorization\nto directly construct the solution.\n\nHere we exhibit a C implementation of the most important case, that where\nreal numbers are represented as double precision numbers and the matrix \\begin{math}A\\end{math} is symmetric and positive semi-definite,\nthat is \\begin{math}{}x^tAx >=0 \\end{math} for all \\begin{math}{}x\\end{math}.\n\n\\section{Implementation Notes}\nThe algorithm proceeds recursively. It is not tail-recursive; but at each step\nthe only information which is retained for post-processing is the current column\nof \\begin{math}{}A\\end{math}, and the current element of \n\\begin{math}{}b\\end{math}.\n\nThis allows us to overwrite the not-yet-processed elements of \n\\begin{math}{}A\\end{math} and \\begin{math}{}b\\end{math}, with the transforms\nwhich the recursion employs. In this way, the memory required for the matrix\nnever grows. On the other hand, both \n\\begin{math}{}A\\end{math} and \\begin{math}{}b\\end{math} are destroyed in the process, so must be copied if any further use of them is desired.\n\nBecause the matrix \\begin{math}A\\end{math} is symmetric, we need only an upper or lower triangle; accordingly we allocate a vector of size \\begin{math}n-i\\end{math} to represent column number \\begin{math}i\\end{math}, starting with the diagonal element and descending the column.\n\n\\begin{verbatim}\n$ cat positive_solver.h \nextern\ndouble * positive_solver(double ** A, double * b, int n);\n\n$ cat positive_solver.c\n/* \n * Andrew Winkler\n\nThis code solves the equation Ax=b, where A is a positive\nsemi-definite matrix, so that x^t A x >= 0 for all x,\nprovided a solution exists. If A is positive definite,\nit will be an isomorphism, but in general conditions\nmust be imposed on b, for it to lie in the range of A.\n\nIt has the virtue of dramatic simplicity - there's no need\nto explicitly construct the Cholesky decomposition, no need\nto do the explicit back-substitutions.  Yet it's essentially\nequivalent to that more labored approach, so its\nperformance/stability/memory, etc. should be at least as good.\n\nThere are two kinds of checks, both of which are disabled.\nThis means that a solution will be generated; it is up to\nthe caller to verify that the equation Ax=b is satisfied to\nthe desired precision. If it does not, then no solution exists.\n\nThe first kind of check is that Avec is 0 whenever A[0][0]\nis 0, which will always be true if A is positive semi-definite,\nbut which could become false because of any small perturbation\ncaused by roundoff error upstream in the computation of A.\nThe second kind of check is that b[0] is 0 whenever A[0][0] is 0,\nwhich is necessary for b to lie in the range of A. 
The checks are\ndisabled because the code does not deal with real numbers,\nbut rather floating point numbers; it's up to the user to decide\nwhat is \"close enough\" to zero.\n\n*/\n\n#include <stdlib.h>\n#include <stdio.h>\n#include \"positive_solver.h\"\n\nvoid _consistency_checker(double * v, int n) {\n    for (int i=0; i<n; i++) {\n        if ( v[i] != 0.0 ) exit(-1);\n    }\n}\n\nvoid _positive_solver(\n    double ** A, \n    double * b, \n    double * x, \n    int n\n    ) \n{\n  if (n < 1) exit(-1);\n\n  if (n == 1) {\n    if (A[0][0] == 0.0) {\n        /* _consistency_checker(b, 1); */\n        x[0] = 0.0;\n        return;\n    }\n    x[0] = b[0] / A[0][0];\n    return;\n  }\n\n  double * bvec = b + 1;\n  double * Avec = A[0] + 1;\n  double ** Asub = A + 1;\n  double * xvec = x + 1;\n\n  int m = n -1;\n\n  if (A[0][0] == 0.0) {\n      /*\n      _consistency_checker(b, 1);\n      _consistency_checker(Avec, m);\n      */\n  } else {\n      for(int j=0; j < m; j++){\n        bvec[j] -= Avec[j] * b[0] / A[0][0];\n        for(int i=0; i < m - j; i++)\n          Asub[i][j] -= Avec[i] * Avec[i+j] / A[0][0];\n      }\n  }\n\n  _positive_solver(Asub, bvec, xvec, m);\n\n  if (A[0][0] == 0.0) {\n      x[0] = 0.0;\n      return;\n  }\n\n  double p = 0; for(int k=0; k<m; k++) p += Avec[k] * xvec[k];\n\n  x[0] = (b[0] - p) / A[0][0];\n\n  return;\n}\n\n#include <malloc.h>\ndouble * positive_solver(\n    double ** A, \n    double * b, \n    int n\n    ) \n{\n  double * x = (double *) malloc(n * sizeof(double));\n  _positive_solver(A, b, x, n);\n  return x;\n}\n\n\\end{verbatim}\n\\end{document}\n", "meta": {"hexsha": "721f88e3f04469c01fc3156c4e24014fd368320f", "size": 4569, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "OPERATORS/AX_EQUALS_B/doc/abstract_orig.tex", "max_stars_repo_name": "subramon/qlu", "max_stars_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "OPERATORS/AX_EQUALS_B/doc/abstract_orig.tex", "max_issues_repo_name": "subramon/qlu", "max_issues_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-07-29T16:48:25.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-26T23:47:22.000Z", "max_forks_repo_path": "OPERATORS/AX_EQUALS_B/doc/abstract_orig.tex", "max_forks_repo_name": "subramon/qlu", "max_forks_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-14T22:34:13.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-14T22:34:13.000Z", "avg_line_length": 30.46, "max_line_length": 278, "alphanum_fraction": 0.6756401838, "num_tokens": 1350, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7606506418255927, "lm_q1q2_score": 0.5871032520202026}}
{"text": "%!TEX root = ../thesis.tex\n\n\\chapter{Isogeometric enhanced SBFEM in 3D}\n\\input{octree/introduction.tex}\n\n\\section{Projection}\n\\input{octree/surface_division.tex}\n\\input{octree/projection.tex}\n\\input{octree/convex_hull.tex}\n\n\\section{Intersection}\n\\paragraph{}\nInstead of projecting all points located on the boundary surfaces to the NURBS surfaces, the exact geometry can be retained by calculating the intersection points directly.\nBy utilizing the work from Sec.~\\ref{oct_sc:surface_division}, the number of the surfaces which may contain the intersection can be limited to a few by determine the relationship between their convex hulls and the line.\nOnly surfaces whose convex hull has intersection with the line will be selected as candidates.\nThe number of the candidates increase when the mesh becomes coarser as a longer line segment tends to intersect more convex hulls.\nOn the other hand, when the mesh is refined and the line segment becomes significantly shorter, the number of the candidates decreases at the same time.\nA considerable improvement in computational cost can be expected as the time complexity is only meaningful when numerous intersections are calculated.\nOnly a few intersections need to be determined when the mesh is coarse hence having quite a few candidates at this circumstance may not significantly increase the computational time.\n\\input{octree/mrep3d.tex}\n\n\n\n\\section{Introduction of SBFEM in 3D}\n\\input{octree/sbfem3d.tex}\n\n\\section{Numerical examples}\n\\input{octree/ex_sphere_hole3d.tex}\n\\input{octree/ex_capsule3d.tex}\n\\input{octree/ex_lamp.tex}\n\\input{octree/ex_other_meshes.tex}\n\n\\section{Conclusions}\n\\paragraph{}\nIn this chapter, a projection is conducted after the octree mesh is generated from the STL file.\nGeometric exact is retained by projecting the intersection on the triangular surface back to original NURBS surface or by replacing the distance calculation with finding that between the edge of the element and the NURBS surface directly. 
\nBoth the projection and the intersection calculation are accelerated by utilizing the strong convex hull property of NURBS.\nThe proposed method is able to generate the mesh from an arbitrary geometric input and shows higher accuracy compared to the conventional method.", "meta": {"hexsha": "bb11d74ab921387c36daa69af6afd3368bd07c45", "size": 2265, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "octree/index.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "octree/index.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "octree/index.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.6052631579, "max_line_length": 239, "alphanum_fraction": 0.8189845475, "num_tokens": 484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5871032482239807}}
{"text": "\\documentclass[12pt, onside]{article}\n\n\\usepackage{fontspec,unicode-math} % Required for using utf8 characters in math mode\n\\usepackage{parskip}% Extra interlining\n\\usepackage[margin=0.8in]{geometry}    % Margins\n\\usepackage[spanish]{babel}\n\\usepackage{hyperref}\n\\usepackage{xcolor}\n\\usepackage{graphicx}\n\\usepackage[binary-units=true]{siunitx}\n\\usepackage{multirow}\n\\usepackage{mathtools}\n\\usepackage{array}\n\\usepackage{physics}\n\\usepackage{cleveref}\n\\usepackage{float}\n\\usepackage[toc,page]{appendix}\n\n\n% Header and Footer\n\\usepackage{fancyhdr}\n\\setlength\\headheight{30pt}\n\\pagestyle{fancy}\n\\renewcommand{\\headrulewidth}{0.4pt}\n\\fancyhead{}\n\\fancyhead[L]{\n    \\textbf{Numerical implementation of the two-body problem}\n}\n\\fancyhead[R]{\n    Emilio Dom\u00ednguez S\u00e1nchez\n}\n\\fancyfoot{}\n\\fancyfoot[C]{\\thepage}\n\n\\input{docs/arenstorf/listings.tex}\n\n\n\\newcommand{\\R}{{\\mathbb{R}}}\n% \\renewcommand{\\dv}{\\prime}\n\\newcommand{\\ddv}{\\dprime}\n\\renewcommand{\\dv}{\\prime}\n\\newcommand{\\inner}[2]{\\left\\langle #1, \\; #2 \\right\\rangle}\n\\newcommand{\\vr}{{\\vb{r}}}\n\\newcommand{\\vx}{{\\vb{x}}}\n\\newcommand{\\CM}{{\\operatorname{CM}}}\n\n\\begin{document}\n\\input{docs/arenstorf/title_page.tex}\n\\newpage\n\\tableofcontents\n\\newpage\n\n\\section{Introduction}\n\n    Richard Arenstorf discovered a stable periodic orbit between the Earth and the Moon\nwhich was later used as the basis for Apollo missions.\n\n    In this paper, we work with the initial value problem corresponding to Arenstorf's orbit\nand compute the solution with different iterative methods.\n\n\\section{Problem description}\n\n    This an instace of the three body problem.\nTwo of the objects, the Earth and the Moon, are orbiting in a plane.\nThe third one, our satellite, of negligible mass,\nis only affected by the attraction of this two large bodies.\n\n\\subsection{Initial Value Problem}\n\n    The system of equations that govern the system are given by\n%\n\\begin{gather*}\n    \\begin{alignedat}{2}\n        x\\ddv &= x + 2y\\dv - M\\frac{x+m}{D_1}& - m&\\frac{x-M}{D_2} \\\\\n        y\\ddv &= y + 2x\\dv - M\\frac{y}{D_1}& - m&\\frac{y}{D_2} \\\\\n    \\end{alignedat} \\\\\n    \\begin{alignedat}{1}\n        D_1 = &((x+m)^2 + y^2)^{\\frac{3}{2}} \\\\\n        D_2 = &((x-m)^2 + y^2)^{\\frac{3}{2}} \\\\\n    \\end{alignedat} \\\\\n    \\begin{alignedat}{1}\n        m &= \\num{0.012277471} \\\\\n        M &= 1-m \\\\\n    \\end{alignedat}\n\\end{gather*}\n%\nwhere $M$ and $m$ are the mass of the Earth and the Moon respectively.\n\n    The initial conditions are taken normalized to the problem\nto increase the numerical accuracy.\nFor example, the distance between the Earth and the Moon is $1$ and\nthe sum of masses of the Earth and the Moon is also $1$.\n%\n\\begin{alignat*}{2}\n    x(0) &= (\\num{0.994} & 0) \\\\\n    y(0) &= (0 & \\num{-2.001585106}) \\\\\n\\end{alignat*}\n\n\\section{Objective}\n\n    The objective is tracing the orbit for two cycles\nusing three fixed step methods:\nEuler's method, the modified version of Euler's method and the Runge-Kutta $4$ Method;\nas well as two adaptative steps methods: Felhberg's method and\nRichardson's extrapolation with variation of the step of the Runge-Kutta $4$ method.\nWe are interested in obtaining the best possible approximation with each method\nand the number of evaluations and the running time of each method.\n\n\\section{Implementation}\n\n\\subsection{General considerations}\n\n    To 
determine when the methods should stop,\nI decided to compare the points with the starting point.\nHowever, we need to stop at the second turn.\nSome initial tries with a good method (like RK4) show that\nthe time taken in a complete turn is approximately $17$ time units.\nHence, the implementations use a lambda function (usually inlined in Julia) that checks\nthis, using a tolerance called \\lstinline!closing_err!.\nThis value differs between methods,\nbecause some were able to obtain better results than the rest.\n\n\\begin{lstlisting}[\n    gobble=4,\n    caption=Stop condition for the methods,\n        basicstyle=\\scriptsize\n]\n    (t, x) ->\n        t > 20 && norm(Arenstorf.p0 .- last(x)[1:2]) < closing_err || t > stop_time,\n\\end{lstlisting}\n\n    It was also necessary to have a means of measuring the error of each solution.\nI measured the distance from the method's final point to the initial point after two turns,\nand that same distance when we interpolate the solution based on the last $5$ points\nto obtain what would be the exact $x$-coordinate when the $y$-coordinate is $0$,\nas in the beginning.\nI reused the code for Hermite's interpolation from previous assignments.\n\n\\begin{lstlisting}[\n    gobble=4,\n    caption=Code corresponding to Hermite's interpolation in the tests,\n    basicstyle=\\scriptsize\n]\n    n = length(x)\n    last5x = [p[1] for p in x[n-4:n]]\n    last5y = [p[2] for p in x[n-4:n]]\n    closingx = Interpolation.hermite(last5y, last5x, 0)\n    print(\"\\tDistance from interpolated closing point to start point: \",\n          \"$(abs(closingx - Arenstorf.p0[1]))\\n\")\n\\end{lstlisting}\n\n\\subsection{Plotting and building gifs from the solutions}\n\n    In order to build gifs (or plots) for the solutions, we must use the solutions computed.\nHowever, we have usually computed a lot of points.\nThis number of points makes it possible to use the solution in other contexts\nor draw it smoothly,\nbut processing a frame for each point would require a lot of computation time.\nFurthermore, it would be wrong for adaptive step methods,\nbecause each frame would not correspond to the same fraction of time.\nHence, plotting or drawing a frame every $k$ points is also wrong,\nand if we do so,\nwe may be able to see some inconsistencies even in our plots.\nSpecifically, we could see some polygonal artifacts in the more complex parts of our drawing.\nInstead, we must iterate over the time (with a fixed step) and plot/frame\nall the points seen until that moment.\n\n\\begin{lstlisting}[\n    gobble=4,\n    caption=Creation of gifs in Julia,\n    basicstyle=\\scriptsize\n]\n    len = length(t)\n    anim = Animation()\n    i = 2\n    for j = 0:0.05:t[len]\n        while i < len && t[i] < j\n            i += 1\n        end\n        # NaNs are used to avoid plotting more points for the Earth\n        push!(p1, [(x[i][1], x[i][2]), (cos(2*\u03c0*t[i]/t[len]), sin(2*\u03c0*t[i]/t[len])), (NaN, NaN)])\n        push!(p2, t[i], steps[i-1])\n        plot(p1, p2)\n        frame(anim)\n    end\n    gif(anim, \"media/arenstorf/richardson.gif\", fps = 15)\n\\end{lstlisting}\n\n\\subsection{Euler's Method}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{media/arenstorf/euler.png}\n    \\caption{Graphical results for Euler's method.}\n\\end{figure}\n\n    Euler's method is the worst of those tested.\nWe could say it behaved properly for one turn,\nbut the accumulated error was too big for the second turn in this problem.\nI tried reducing the step to 
$\\num{1e-6}$ with just one turn,\nbut the results were not favorable either.\nIn this case, there is no point in checking the distance given by interpolation,\nbecause it is very clear that the orbit is deformed with respect to the original one.\n\n\\lstinputlisting{outputs/arenstorf/euler.txt}\n\n\\subsection{Modified Euler's Method}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.7\\textwidth]{media/arenstorf/mod_euler.png}\n    \\caption{Graphical results for the modified version of Euler's method.}\n    \\label{fig:mod_euler}\n\\end{figure}\n\n    The modified version of Euler's method improved significantly with respect\nto the original one.\nIn this case, I also needed to reduce the step size significantly to obtain better results.\nI used the smallest step among all methods because I could obtain\na better solution with that step size,\nwhich I was not able to do with Euler's method.\nI did not reduce it more because the running time was already over $\\SI{30}{\\s}$.\nThis made me think that I needed to consider the performance of the iterative methods.\nThe conclusions will be presented later (\\cref{sec:performance}).\n\n    In this case, we can see in the plot (\\cref{fig:mod_euler}) that the second cycle\nis slightly different from the first one towards the end.\nIn the following methods, the differences will not be visible.\nThe method gave a closing error for the orbit of $\\num{4e-3}$.\nBecause the initial point was almost at distance $1$, this error is also the relative error.\nThis means that even for an unstable problem like this one,\nreducing the step size enough can provide reasonable results.\nBut then again,\nthe next methods achieve more in less time.\n\n\\lstinputlisting{outputs/arenstorf/mod_euler.txt}\n\n\\subsection{Runge Kutta $4$ Method}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{media/arenstorf/rk4.png}\n    \\caption{Graphical results for the RK4 method.}\n\\end{figure}\n\n    The RK4 is the first method for which we obtain a plot that\nvisually seems like a single cycle,\nmeaning that it approximates very well,\nand a closing error of $\\num{2.8e-8}$ (after interpolation).\nHowever, we still need to use a very small step ($\\num{1e-5}$) to obtain good results,\nwhile with the adaptive methods,\nthe number of iterations, and therefore the running time,\ndecrease significantly.\nThe conclusion is that an approximation of order $4$ with steps of $\\num{1e-5}$\ngives good results for this problem,\nbut there are parts of the orbit where the system is more stable and we can use bigger steps.\n\n\\lstinputlisting{outputs/arenstorf/rk4.txt}\n\n\\subsection{Fehlberg's method}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.95\\textwidth]{media/arenstorf/fehlberg.png}\n    \\caption{Graphical results for Fehlberg's method.}\n\\end{figure}\n\n    I tweaked the values for Fehlberg's method until I saw I could no longer decrease the\napproximation error.\nI was not able to reduce the closing error below $\\num{1e-5}$,\neven when I set the tolerance to $\\num{1e-16}$,\nsomething that also happened with Richardson's extrapolation.\nFehlberg's was the best method for this problem,\nthe fastest (a little over one second on my hardware)\nand the one that performed the fewest evaluations of the derivative function.\n\n\\lstinputlisting{outputs/arenstorf/fehlberg.txt}\n\n\\subsection{Richardson's extrapolation of the RK4 method}\n\n\\begin{figure}[H]\n    \\centering\n    
\\includegraphics[width=0.95\\textwidth]{media/arenstorf/richardson.png}\n    \\caption{Graphical results for Richardson's extrapolation of the RK4 method.}\n\\end{figure}\n\n    The last method performed similarly to the previous one.\nIt also falls in the category of adaptive step methods,\nand as such it was much faster than the fixed step methods,\nbut it was slower and required more evaluations than Fehlberg's method.\nA slight improvement with respect to Fehlberg's method\nis that the maximum step was slightly bigger, but so was the tolerance of the method.\n\n\\lstinputlisting{outputs/arenstorf/richardson.txt}\n\n\\section{Access to the code}\n\n    The code at the moment of writing can be checked in my Git repository.\n{\\small\n\\url{https://github.com/useredsa/numeric-differential-equations/commit/c0c34ee66c30e19cf622e8280653d83207a0ce89}\n}\n\n    I recommend looking at the files\n\\href{https://github.com/useredsa/numeric-differential-equations/blob/c0c34ee66c30e19cf622e8280653d83207a0ce89/media/arenstorf/fehlberg.gif}{\\lstinline{fehlberg.gif}}\nand\n\\href{https://github.com/useredsa/numeric-differential-equations/blob/c0c34ee66c30e19cf622e8280653d83207a0ce89/media/arenstorf/richardson.gif}{\\lstinline!richardson.gif!}\nwhich show animations of the satellite, the moon and the step size\nfor both adaptive methods.\n\n\\end{document}\n", "meta": {"hexsha": "a02745006a052d2f4880318adb9d319f1363eea6", "size": 11488, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/arenstorf/header.tex", "max_stars_repo_name": "useredsa/numeric-differential-equations", "max_stars_repo_head_hexsha": "7647a20120ce0a05ee5223b0ce81ad63d8aee115", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/arenstorf/header.tex", "max_issues_repo_name": "useredsa/numeric-differential-equations", "max_issues_repo_head_hexsha": "7647a20120ce0a05ee5223b0ce81ad63d8aee115", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/arenstorf/header.tex", "max_forks_repo_name": "useredsa/numeric-differential-equations", "max_forks_repo_head_hexsha": "7647a20120ce0a05ee5223b0ce81ad63d8aee115", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.9389067524, "max_line_length": 170, "alphanum_fraction": 0.739902507, "num_tokens": 3126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494678483918, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.5870706362975892}}
{"text": "\n\\documentclass[12pt]{article}\n\\usepackage{amsfonts}\n\n\\newcommand{\\real}{{\\rm Re}}\n\\newcommand{\\imag}{{\\rm Im}}\n\n\\begin{document}\n\n\\section{Mean value of position angle change \\\\ Mark II}\n\nThis memo documents an updated model for estimating the change in\nposition angle, $\\Delta\\Psi$, and its uncertainty. Let\n\\begin{equation}\nP_k = Q_k + i U_k = L_k \\exp i2\\Psi_k\n\\end{equation}\nrepresent the linear polarization of the $k$th phase bin in one profile and\n\\begin{equation}\nP^\\prime_k = Q^\\prime_k + i U^\\prime_k = L^\\prime_k \\exp i2\\Psi^\\prime_k\n\\end{equation}\nthe same in the other profile.  \n\nConsider the {\\bf weighted} cross-correlation,\n$\\bar{Z}=\\bar{C}+i\\bar{S}$, where $\\bar C$ and $\\bar S$ are the\nweighted mean values of the real and imaginary components of\n\\begin{equation}\nz_k = c_k + i s_k \n    = P^*_k P^\\prime_k = L_k L^\\prime_k \\exp i2(\\Psi^\\prime_k - \\Psi_k).\n\\end{equation}\nThat is, where\n\\begin{equation}\nc_k = Q_kQ^\\prime_k + U_kU^\\prime_k,\n\\hspace{1cm}\ns_k = Q_kU^\\prime_k - U_kQ^\\prime_k,\n\\end{equation}\nand\n\\begin{eqnarray}\n\\sigma_{c_k}^2 & = &\n Q^{\\prime2}_k\\sigma_{Q_k}^2 + U^{\\prime2}_k\\sigma_{U_k}^2 +\n Q^2_k\\sigma_{Q^\\prime_k}^2 + U^2_k\\sigma_{U^\\prime_k}^2 \\\\\n\\sigma_{s_k}^2 & = &\n U^{\\prime2}_k\\sigma_{Q_k}^2 + Q^{\\prime2}_k\\sigma_{U_k}^2 +\n U^2_k\\sigma_{Q^\\prime_k}^2 + Q^2_k\\sigma_{U^\\prime_k}^2 \\\\\n\\sigma_{s_k c_k} & = &\n Q^\\prime_k U^\\prime_k (\\sigma_{Q_k}^2 - \\sigma_{U_k}^2) +\n Q_kU_k (\\sigma_{U^\\prime_k}^2 -\\sigma_{Q^\\prime_k}^2)\n\\end{eqnarray}\nthe weighted means and their variances and covariance are given by\n\\begin{equation}\n\\bar C = \\sigma_{\\bar C}^2 \\sum_{k=1}^N { c_k \\over \\sigma_{c_k}^2 }\n\\hspace{1cm}\n\\bar S = \\sigma_{\\bar S}^2 \\sum_{k=1}^N { s_k \\over \\sigma_{s_k}^2 }\n\\end{equation}\nand\n\\begin{equation}\n\\sigma_{\\bar C}^2 = \\left[ \\sum_{k=1}^N{1\\over\\sigma_{c_k}^2} \\right]^{-1}\n\\hspace{5mm}\n\\sigma_{\\bar S}^2 = \\left[ \\sum_{k=1}^N{1\\over\\sigma_{s_k}^2} \\right]^{-1}\n\\hspace{5mm}\n\\sigma_{\\bar S \\bar C} = \\sum_{k=1}^N\n\\left[\\sigma_{\\bar S}\\sigma_{\\bar C}\\over\\sigma_{s_k}\\sigma_{c_k}\\right]^2\n                         \\sigma_{s_k c_k}\n\\end{equation}\n\n\\newpage\n\n\\noindent\nThe expectation value of $\\Delta\\Psi=\\Psi^\\prime_k - \\Psi_k$ is given by\n\\begin{equation}\n\\langle\\Delta\\Psi\\rangle\n={1\\over2}\\tan^{-1}\\left({\\bar S\\over\\bar C}\\right).\n\\end{equation}\nand the variance of the expectation value\n\\begin{equation}\n{\\mathrm{var}}(\\langle\\Delta\\Psi\\rangle) = \n{ \\bar C^2\\sigma_{\\bar S}^2 + \\bar S^2\\sigma_{\\bar C}^2\n  + 2 \\bar S\\bar C \\sigma_{\\bar S \\bar C} \\over \n  \\left(\\bar C^2 + \\bar S^2\\right)^2 }\n\\end{equation}\n\n\\end{document}\n", "meta": {"hexsha": "5a345bccaa3bb66ce919cf2fc3d6a54130839b6e", "size": 2581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "More/Polarimetry/DeltaPA.tex", "max_stars_repo_name": "xuanyuanstar/psrchive_CDFT", "max_stars_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35", "max_stars_repo_licenses": ["AFL-2.1"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "More/Polarimetry/DeltaPA.tex", "max_issues_repo_name": "xuanyuanstar/psrchive_CDFT", "max_issues_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35", "max_issues_repo_licenses": ["AFL-2.1"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "More/Polarimetry/DeltaPA.tex", "max_forks_repo_name": "xuanyuanstar/psrchive_CDFT", "max_forks_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35", "max_forks_repo_licenses": ["AFL-2.1"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.4756097561, "max_line_length": 75, "alphanum_fraction": 0.6640836885, "num_tokens": 1034, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511469672595, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5870706333249779}}
{"text": "% Copyright (C) 1992, Digital Equipment Corporation\n% All rights reserved.\n% See the file COPYRIGHT for a full description.\n% Last modified on Tue Jan 25 16:45:16 PST 1994 by mhb \n\n\\documentstyle{article}\n\\begin{document}\n\n%\\title{Notes on 3d Viewing and the Cube Assignment}\n%\\lecturedate{}\n%\\raggedbottom\n%\\begin{document}\n%\\maketitle\n\n\\noindent This note describes the ``simple 3D viewing'' system that \nI described at the end of class on Tuesday. I'm interested in corrections and\nimprovements; perhaps we can lobby FvDFH to incorporate parts of it in Chapter\n7, as a way to introduce the concepts and to give readers a better intuition\nfor what's going on. This material was derived fromWatt's {\\it Fundamentals of\nThree-Dimensional Computer Graphics}.\n\n\n\\section{ Viewing ($M_{view}$)}\n\nTo a first approximation, there are only two variables that can be set in our\nsimple viewing system: the position of the viewer's eye and the distance the\nviewing plane is away from the eye. The viewer is always looking at the origin\nof the ``world space,'' and the viewing plane is always normal to the line of\nsight. More about this later.  We say that the eye is at position $P$ and the\norigin of world space is the point $O$.\n\nThe position of the eye defines the origin of a new coordinate system called\nthe ``eye space''. It is a left-handed coordinate system whose $z$ axis sits on\nthe line $PO$. As we look along the eye's $z$ axis, the values are getting\nlarger as things are farther away. The $x$ axis of the eye-space is parallel to\nthe $x$-$y$ plane of world space.\n\nThe gist of things is that we need to find a matrix that will convert a point\nin world space into eye space. This matrix is completely specified by $P$'s\nlocation in world-space. As it turns out, the matrix is easy to derive (and to\nwrite down) if we specify $P$ in spherical coordinates rather than cartesian\ncoordinates. So $P$ is given by a triplet of numbers, $(\\mu, \\theta, \\phi)$,\nwith the following interpretation: \\begin{description} \\item $\\mu$ is the\ndistance from the viewer to the origin. \\item $\\theta$ is the angle made\nbetween the line of sight, $PO$, and the plane containing the $x$-$z$ axes.\n\\item $\\phi$ is the angle between the line of sight and the plane containing\nthe $y$-$z$ axes. \\end{description} To check your understanding, consider the\npoint $P$ with $\\mu=10$, $\\theta=-30,$ and $\\phi=45$. The eye is somewhere in\nthe octant with positive values for all three coordinates, and the viewer would\nbe looking down and to his left to see the origin.\n\nWith some high school trigonometry, we find that \n\\begin{eqnarray*}\n    P_x &=&   \\mu cos \\theta sin \\phi   \\\\    \n    P_y &=& - \\mu sin \\theta            \\\\\n    P_z &=&   \\mu cos \\theta cos \\phi   \\\\\n\\end{eqnarray*}\nNote that negative sign for $P$'s $y$ coordinate. The negative sign is\nneeded because \n$\\theta$ is negative in order to ``raise'' a line above the \n$xy$ plane. (An easy sanity check that the\ncoordinates are reasonable is to look at the square of the distance \nfrom $P$ to the origin using the cartesian coordinates. It should\nequal $\\mu^2$.\nThis doesn't convince you the sign is right, or that we haven't swapped\na cos with a sin, but it's nice to know\nthat there's nothing to prove the coordinates are wrong.) 
\n\nTo make a long story short, the matrix that converts a point from world space\ninto eye space is  \n\\[\n    M_{view} = \n      S(1, 1, -1) \\cdot Rx(-\\theta) \\cdot Ry(-\\phi) \\cdot T(-P_x, -P_y, -P_z) \n\\]\nwhere the matrices for scale, rotation, and translation are found \nin FvDFH, pages 214--5.\n\n\n\\section{ Projecting ($M_{proj}$) }\n\nAs mentioned, in our simple model the viewing plane is located normal to the\neye space's $z$ axis (the line from $P$ to $O$), at a distance $d$ away from\n$P$. That is, in the eye space, the plane is at $z=d$, parallel to the eye\nspace's $xy$ plane.\n\nTo perform a perspective projection, transform points in eye space\nby\n\\[\n    M_{per} =  \n       \\left[\n       \\begin{array}{cccc}\n       1 & 0 & 0 & 0 \\\\\n       0 & 1 & 0 & 0 \\\\\n       0 & 0 & 1 & 0 \\\\\n       0 & 0 & 1/d & 0\n       \\end{array}\n       \\right]\n\\]\nIf you ignore clipping and hidden surface elimination, you can now \nhomogenize the resulting points. That is, you need to divide the \n$x$ and $y$ coordinates by $z/d$.\n\nTo perform an orthographic projection, you don't need to do anything.\nIf, for the sake of symmetry, you'd like to do a ``projection'' type\ntransformation, simply transform your points in eye space\nby the identity matrix:\n\\[\n    M_{orth} =  \n       \\left[\n       \\begin{array}{cccc}\n       1 & 0 & 0 & 0 \\\\\n       0 & 1 & 0 & 0 \\\\\n       0 & 0 & 1 & 0 \\\\\n       0 & 0 & 0 & 1\n       \\end{array}\n       \\right]\n\\]\n\nTo summarize, the projection matrix is $M_{proj}$. It is either \n$M_{per}$ or $M_{orth}$, depending on the desired type of projection. \nIt projects objects from eye space into ``screen space.'' \n\n\n\\section{ Imaging ($M_{image}$)}\n\nThe final thing you need to do is to get a part of the viewing plane onto\n``image space'' on an SRGP canvas (perhaps via a SUIT widget).  How do you\nspecify such a window on the view plane?\n\nIn our simple model, we define a screen space coordinate system in the view\nplane whose origin is the spot where the line $PO$ hits the plane. The vertical\nand horizontal axes, usually labeled $u$ and $v$, are parallel to the $x$ and\n$y$ axes of our eye space.\n\nSince we are always looking down at the origin, there's a good chance \nthat we've set up the objects to be viewed such that they are around \nthe origin. So, for the sake of keeping things simple, we'll specify \na size $w$ of the viewing window centered about the origin. That \nis, the viewing window goes from $(-w,-w)$ in the lower left to $(w,w)$ \nin the upper right. With a regular window nicely centered about the \norigin, the transformation to a viewport in an SRGP canvas is simple:\n\\[\n    M_{image} = \n        T(lx, by, 0) \\cdot \n        S\\left( {{rx-lx} \\over {2w}}, {{ty-by} \\over {2w}}, 1\\right) \\cdot \n        T(w, w, 0)\n\\]\nwhere the southwest corner of the canvas is $(lx,by)$ and\nthe northeast corner is $(rx,ty)$. This is just a special case of\nthe window-to-viewport transformation we've seen a few times in class already.\nYou are now ready to draw the object by just ignoring the $z$ coordinate.\n\n\\section{Putting it all together}\n\nIf you are not clipping or performing hidden surface elimination, \nyou can combine the three matrices together into a single matrix, \ntransform each point from the world space into image space, and then \nhomogenize the transformed point. 
\nThe cumulative transformation matrix is as follows:\n\\[\n    M_{ctm} = M_{image} \\cdot M_{proj} \\cdot M_{view}\n\\]\nDon't forget to homogenize the transformed point! \n\n\n\\section{Rotating the cube}\n\nMost of you have had no problems computing the matrices for\nrotating the cube about its main diagonal. Just in case there's any\nconfusion, here's the scoop.\n\nAssume the cube is centered about the\norigin in world space, going from $(-1,-1,-1)$ to $(1,1,1)$. The matrix to spin\nthe cube by $\\alpha$ degrees forward is\n\\begin{eqnarray*}\n    M_{spin} =  & T(-1, -1, -1) \\cdot Ry(-45) \\cdot Rz(\\arcsin(1/\\sqrt{3}))\n      \\cdot  \\\\\n      & ~~~ Rx(\\alpha) \\cdot \\\\\n      & ~~~ Rz(-\\arcsin(1/\\sqrt{3})) \\cdot Ry(45) \\cdot\n       T(1, 1, 1)\n\\end{eqnarray*}\n\nIt is tempting to combine this matrix with the three above. Your life will be\neasier if you don't.  Because $M_{spin}$ rotates the cube from its current\nposition by $\\alpha$ degrees, you need to keep the transformed points around\neach time you spin the cube a bit.\n\nIn other words, the high-level structure for your program is as follows:\n\\begin{verbatim}\n    cube = initial position of cube\n    Mctm = Mimage * Mproj * Mview;\n    loop\n        cube = Mspin * cube\n        cubeP = Mctm * cube\n        cubeP = homogenize cubeP\n        display cubeP\n    end\n\\end{verbatim}\n\nIf you aren't doing any clipping or hidden surface elimination, and you combine\n$M_{image},$ $M_{proj},$ and $M_{view}$ into a single cumulative\ntransformation matrix, $M_{ctm}$, you should still keep the individual matrices\naround so the user can change some of the parameters interactively.\n\nFor example, in my program, I allow the user to interactively set\nthe following items:\n\\begin{itemize}\n\\item \nviewing parameters $\\mu$, $\\theta,$ and $\\phi$. These affect\n$M_{view}$ only.   \n\\item \nprojection parameters $d$,  and a perspective vs. orthographic flag.\nThese affect $M_{proj}$ only.\n\\item \ncanvas parameters $w$, and the $lx, by, rx, ty$ which define a\nlocation of the SUIT widget. These affect $M_{image}$ only. \n\\end{itemize}\nOf course, after changing one of the parameters, the cumulative matrix\nmust be recomputed by matrix multiplying the three matrices that\ncomprise it.  
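\n\nIf you want to experiment with the numbers, here is a minimal sketch of the\npipeline in Python with NumPy (my own illustration, not SRGP/SUIT code; it\nomits $M_{image}$, clipping, and hidden surface elimination):\n\\begin{verbatim}\nimport numpy as np\n\ndef T(tx, ty, tz):                      # translation\n    M = np.eye(4); M[:3, 3] = [tx, ty, tz]; return M\n\ndef S(sx, sy, sz):                      # scale\n    return np.diag([sx, sy, sz, 1.0])\n\ndef Rx(deg):                            # rotation about x, in degrees\n    a = np.radians(deg); c, s = np.cos(a), np.sin(a)\n    return np.array([[1, 0, 0, 0], [0, c, -s, 0],\n                     [0, s, c, 0], [0, 0, 0, 1.0]])\n\ndef Ry(deg):                            # rotation about y, in degrees\n    a = np.radians(deg); c, s = np.cos(a), np.sin(a)\n    return np.array([[c, 0, s, 0], [0, 1, 0, 0],\n                     [-s, 0, c, 0], [0, 0, 0, 1.0]])\n\ndef Mview(mu, theta, phi):              # world space -> eye space\n    t, p = np.radians(theta), np.radians(phi)\n    px = mu * np.cos(t) * np.sin(p)\n    py = -mu * np.sin(t)\n    pz = mu * np.cos(t) * np.cos(p)\n    return S(1, 1, -1) @ Rx(-theta) @ Ry(-phi) @ T(-px, -py, -pz)\n\ndef Mper(d):                            # perspective projection\n    M = np.eye(4); M[3, 2] = 1.0 / d; M[3, 3] = 0.0; return M\n\n# Recompute the cumulative matrix whenever a parameter changes.\nMctm = Mper(5.0) @ Mview(10.0, -30.0, 45.0)\nv = Mctm @ np.array([1.0, 1.0, 1.0, 1.0])  # a cube corner\nv = v / v[3]                               # homogenize; then drop z to draw\n\\end{verbatim}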
\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "2dc93ab2d1622f3a8acae7495146d7d7a4c711a9", "size": 8790, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "m3-demo/cube/src/cube-solution.tex", "max_stars_repo_name": "jaykrell/cm3", "max_stars_repo_head_hexsha": "2aae7d9342b8e26680f6419f9296450fae8cbd4b", "max_stars_repo_licenses": ["BSD-4-Clause-UC", "BSD-4-Clause", "BSD-3-Clause"], "max_stars_count": 105, "max_stars_repo_stars_event_min_datetime": "2015-03-02T16:58:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T07:17:49.000Z", "max_issues_repo_path": "m3-demo/cube/src/cube-solution.tex", "max_issues_repo_name": "jaykrell/cm3", "max_issues_repo_head_hexsha": "2aae7d9342b8e26680f6419f9296450fae8cbd4b", "max_issues_repo_licenses": ["BSD-4-Clause-UC", "BSD-4-Clause", "BSD-3-Clause"], "max_issues_count": 145, "max_issues_repo_issues_event_min_datetime": "2015-03-18T10:08:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T01:27:08.000Z", "max_forks_repo_path": "m3-demo/cube/src/cube-solution.tex", "max_forks_repo_name": "jaykrell/cm3", "max_forks_repo_head_hexsha": "2aae7d9342b8e26680f6419f9296450fae8cbd4b", "max_forks_repo_licenses": ["BSD-4-Clause-UC", "BSD-4-Clause", "BSD-3-Clause"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2015-10-10T09:37:44.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-23T02:02:05.000Z", "avg_line_length": 38.8938053097, "max_line_length": 79, "alphanum_fraction": 0.7052332196, "num_tokens": 2418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059462938814, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5870366969187524}}
{"text": "\\chapter{General Markov Chains}\nIn this chapter we will study Markov chains on a general state space, i.e., state spaces that are not necessarily finite or countable.\n\n\\section{Markov Chain Monte Carlo}\n\nThis Section is under \\work.\n\nThe methods described so far are generally unsuitable for complex multivariate or multi-dimensional distributions. One way to generate from such distributions is based on the simple idea that if a Markov chain is designed with a desired distribution as its stationary distribution, the states of the stationary chain will provide the required sample values. This approach is known as {\\it Markov Chain Monte Carlo (MCMC)}.\n\n\nA distribution with density/mass function $f$ is a {\\it stationary distribution} of a Markov chain if $X^{(t)}\\sim f$ implies $X^{(t+1)}\\sim f$.  \n\nTo generate from a desired distribution using MCMC, the Markov chain must have the following properties:\n\n\\begin{asparaenum}[(a)]\n\\item The state space of the Markov chain must coincide with the support of the desired distribution.\n\n\\item {\\it Irreducible}: The Markov chain must be free to move over the entire state space.\n\n\\item {\\it Harris recurrent}: The Markov chain must not get stuck in any subset of the state space.\n\n\\item {\\it Positive}: The Markov chain must converge to a unique stationary distribution regardless of the starting state.\n\n\\item {\\it Aperiodic}: The Markov chain must not exhibit any deterministic pattern of movement.\n\\end{asparaenum}\n\n\\begin{definition}[Ergodic Markov Chain]\nAn {\\it ergodic} Markov chain is one that is irreducible, positive, Harris recurrent and {\\it aperiodic}.\n\\end{definition}\n\n%\\begin{flushright}   $\\boxbox$ \\end{flushright}\nMCMC is based on the observation that the state of an {\\it ergodic} Markov chain will eventually converge to a stationary distribution, no matter which state the chain starts in. Thus, to obtain a sample from a desired distribution $f$, an ergodic Markov chain with $f$ as its stationary distribution can be constructed and then run till it is stationary. The required sample values are given by the states of the stationary chain. Let $X^{(0)},X^{(1)},\\ldots$ represent a sequence of states for an ergodic Markov chain and suppose that it reaches its stationary distribution $f$ after transition $T$. Then a sample with distribution $f$ is given by $\\{X^{(t)}:t>T\\}$. Note that unlike sample points given by the previous methods, which are independent, the sample points given by MCMC are {\\it dependent} because they are states from a Markov chain.\n\nGeneric algorithms that allow an ergodic Markov chain with a specified stationary distribution to be constructed easily are available. 
Many of these algorithms can be regarded as variants of the {\\it Metropolis-Hastings algorithm}.\n\n\\begin{algorithm}%WORK rewrite\n\\caption{Metropolis-Hastings Sampler}\n\\label{A:MHSampler}\n\\begin{algorithmic}[1]\n\\STATE {\n{\\it input:} \n\\begin{itemize}\n\\item[(1)] shape of a target density $\\tilde{f}(x) = \\left({\\int \\tilde{f}(x)dx}\\right) f(x)$,\n\\item[(2)] a {\\it proposal distribution}, $g(\\cdot|x)$.\n\\end{itemize}\n}\n\\STATE {\\it output:} a sequence of samples $x_0,\\ldots$ from the Markov chain $\\{X^{(t)}\\}_{t \\in \\Zz_+}$ with stationary distribution $f$\n\n\\STATE Choose initial state $X^{(0)}$.\n\\REPEAT\n\\STATE At iteration $t$,\n\\STATE Generate $\\tilde{X}\\sim g(x|X^{(t-1)})$ and $U\\sim U(0,1)$,\n\\STATE Compute the {\\it acceptance probability}\n\\begin{equation}\n\\alpha=\\min\\left\\{1,\\frac{f(\\tilde{X})\\,g(X^{(t-1)}|\\tilde{X})}{f(X^{(t-1)})\\,g(\\tilde{X}|X^{(t-1)})} \\right\\},\n\\end{equation}\n\\STATE\n{\\bf If} $U \\leq \\alpha$\n{\\bf then} $X^{(t)} \\gets \\tilde{X}$, %(accept $\\tilde{X}$ with probability $\\alpha$)\n{\\bf else} $X^{(t)} \\gets X^{(t-1)}$\n\\UNTIL desired number of samples are obtained from $\\{X^{(t)}\\}_{t \\in \\Zz_+}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{definition}[Transition Kernel]\nThe transitions of a Markov chain are governed by a conditional density/mass function known as the {\\it transition kernel}, $q(y|x)$. For a discrete state space:\n\\begin{equation}\nq(y|x)=P(X^{(t+1)}=y|X^{(t)}=x),\n\\end{equation}\nwhile for a continuous state space:\n\\begin{equation}\n\\int_Aq(y|x)dy=P(X^{(t+1)}\\in A|X^{(t)}=x),\n\\end{equation}\nfor any subset $A$ of the state space.\n\\end{definition}\n\n%\\begin{flushright}   $\\boxbox$ \\end{flushright}\n\\begin{prop}If a Markov chain with transition kernel $q(y|x)$ satisfies the {\\it detailed balance condition}:\n\\begin{equation}\nq(x|y)f(y)=q(y|x)f(x),\n\\end{equation}\nwhere $f$ is a density/mass function, then $f$ is a stationary distribution of the chain.\n\n\\begin{proof}\nLet $S$ be the state space of the Markov chain and let $A\\subset S$. Suppose that $X^{(t)}\\sim f$. Then:\n\n\\begin{displaymath}\n\\begin{split}\nP(X^{(t+1)}\\in A)&=\\int_S P(X^{(t+1)}\\in A, X^{(t)}=y)\\,dy\\\\\n&=\\int_S P(X^{(t+1)}\\in A| X^{(t)}=y)\\,f(y)\\,dy\\\\\n&=\\int_S\\int_A q(x|y)f(y)\\,dx\\,dy\\\\\n&=\\int_S\\int_A q(y|x)f(x)\\,dx\\,dy\\\\\n&=\\int_Af(x)dx,\n\\end{split}\n\\end{displaymath} \n\nusing the detailed balance condition in the fourth equality and $\\int_S q(y|x)dy=1$ in the last. Therefore, $X^{(t+1)}\\sim f$ and so $f$ is a stationary distribution.\n\\end{proof}\n\\end{prop}\n\n\\begin{prop}\nIn the Metropolis-Hastings algorithm, if the support of the proposal distribution is at least as large as the support of $f$, then the algorithm produces a Markov chain that has a stationary distribution $f$.\n\n\\begin{proof}\nLet\n$$\\alpha(x,y)=\\min\\left\\{ 1,\\frac{f(y)g(x|y)}{f(x)g(y|x)}\\right\\}.$$\n\nThe transition kernel of the Metropolis-Hastings chain is:\n\\begin{equation}\nq(y|x)=\\alpha(x,y)g(y|x)+[1-\\beta(x)]\\delta_x(y),\n\\end{equation}\nwhere:\n\\begin{equation}\n\\beta(x)=\\int_S\\alpha(x,y)g(y|x)dy,\n\\end{equation}\n\nand $\\delta_x(\\cdotp)$ is the Dirac delta function at $x$.\n\nIt is enough to show that the Metropolis-Hastings chain satisfies the detailed balance condition with $f$, i.e. that $q(y|x)f(x)=q(x|y)f(y)$. 
This follows from:\n\n$$\\alpha(x,y)g(y|x)f(x)=\\min\\{f(x)g(y|x),f(y)g(x|y)\\}=\\alpha(y,x)g(x|y)f(y),$$\n\nand:\n$$[1-\\beta(x)]\\delta_x(y)f(x)=[1-\\beta(y)]\\delta_y(x)f(y).$$\n\\end{proof}\n\\end{prop}\n\nSince $f$ is used only to compute the acceptance probability, and appears in both the numerator and the denominator, the algorithm is applicable even if $f$ is not known completely but only up to a multiplicative constant. This is frequently the case in practice, where $f$ is available as an un-normalised distribution. Some common variants of the Metropolis-Hastings algorithm include the {\\it Metropolis sampler, independent Metropolis-Hastings sampler, single-component Metropolis-Hastings sampler} and {\\it Gibbs sampler}.\n\nThe {\\it Metropolis sampler} is obtained when a symmetric proposal distribution is used, i.e. $g(\\tilde{X}|X^{(t-1)})=g(X^{(t-1)}|\\tilde{X})$. In this case, the acceptance probability simplifies to:\n\n\\begin{equation}\n\\alpha=\\min\\left\\{1,\\frac{f(\\tilde{X})}{f(X^{(t-1)})}\\right\\}.\n\\end{equation}\n\nA special case where $g(\\tilde{X}|X^{(t-1)})=g(|\\tilde{X}-X^{(t-1)}|)$ is known as the {\\it random walk Metropolis-Hastings (RWMH) sampler}.\n\n\\begin{example}[{\\tt rwmh\\_vonMises\\_uniform}]\nThe von Mises density with location parameter $a\\in[-\\pi,\\pi]$ and scale parameter $b > 0$ is given by:\n\\begin{equation}\nf(x)=\\frac{e^{b\\cos(x-a)}}{2\\pi I_0(b)},\n\\end{equation}\n\nfor $x\\in[-\\pi ,\\pi]$, and where $I_0$ is the modified Bessel function of the first kind and order zero. Implement the RWMH sampler for generating from the von Mises density with $a = 0$ and $b = 3$ by using the $U(-1, 1)$ density to generate steps in the random walk, i.e. $g(\\cdot|x)=U(x-1,x+1)$.\n\nMatlab code: For $m = 1000$ iterations of the RWMH sampler:\n\\begin{VrbM}\nb = 3;\nm = 1000;\nx = ones(1,m); % allocate storage and initialise to 1\nfor k = 2:m\n\ty = x(k-1) + unifrnd(-1,1); % unifrnd(a,b) is the Matlab function for generating\n\t\t% U(a,b) random variables\n   \talpha = min(1,exp(b * (cos(y) - cos(x(k-1))))); % f(y)/f(x) needs no normalising constant\n\tif rand < alpha\n\t\tx(k) = y;\n\telse\n\t\tx(k) = x(k-1);\n\tend % if\nend % for\n\\end{VrbM}\n\\end{example}
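\n\nBefore using the generated chain it is worth inspecting it. The following fragment (our addition, not part of the original example; the plotting choices are arbitrary) shows a typical check of the trace for burn-in and of the histogram against the target shape:\n\\begin{VrbM}\nsubplot(2,1,1);\nplot(x);              % trace plot: the chain should look stable after burn-in\nxlabel('iteration'); ylabel('x');\nsubplot(2,1,2);\nhist(x(201:end), 30); % histogram after discarding 200 burn-in values\nxlabel('x');\n\\end{VrbM}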
\n\nWhen the proposal distribution is independent of $X^{(t-1)}$, the {\\it independent Metropolis-Hastings (IMH) sampler} is obtained, with the acceptance probability given by:\n\\begin{equation}\n\\alpha=\\min\\left\\{ 1,\\frac{f(\\tilde{X})\\,g(X^{(t-1)})}{f(X^{(t-1)})\\,g(\\tilde{X})}\\right\\}.\n\\end{equation}\nThis algorithm usually works well if $g$ is close to $f$ and has heavier tails than $f$.\n\n\\begin{example}\nConsider the log-normal distribution whose density is:\n\\begin{equation}\nf(x)=\\frac{1}{x\\sqrt{2\\pi}}\\exp\\left\\{-\\frac{(\\log x)^2}{2} \\right\\},\n\\end{equation}\nfor $x\\geq 0$. Use the IMH sampler with a gamma distribution proposal to generate from the log-normal distribution.\n\nThe gamma density with shape parameter $a > 0$ and scale parameter $b > 0$ is given by:\n\\begin{equation}\ng(x)=\\frac{1}{b^a\\Gamma(a)}x^{a-1}e^{-x/b},\n\\end{equation}\n\nfor $x \\geq 0$. Note that the IMH acceptance probability can be written as:\n$$\\alpha=\\min\\left\\{1, \\frac{f(\\tilde{X})/g(\\tilde{X})}{f(X^{(t-1)})/g(X^{(t-1)})}\\right\\},$$\n\nwhich involves the ratio $f/g$ in both the numerator and the denominator, so multiplicative constants in $f$ and $g$ are irrelevant and can be discarded. In other words, it is enough to use:\n\n$$\\frac{\\tilde{f}(x)}{\\tilde{g}(x)}=\\frac{\\exp[-(\\log x)^2/2]}{x^ae^{-x/b}}=\\frac{1}{x^a}\\exp\\left[\\frac{x}{b}-\\frac{(\\log x)^2}{2}\\right]\n$$\nto compute $\\alpha$.\n\nMatlab code: For $m = 1000$ iterations of the IMH sampler using gamma$(1.5, 2.5)$ as the proposal distribution:\n\\begin{VrbM}\nm = 1000;\nx = 0.5 * ones(1,m); % allocate storage and initialise to 0.5\nfor k = 2:m\n\ty = gamrnd(1.5,2.5); % gamrnd is the Matlab function for generating gamma\n\t\t % random variables\n  \talpha = (x(k-1) / y)^1.5 * exp((y - x(k-1)) /2.5 + (log(x(k-1))^2 - log(y)^2) /2);\n  \talpha = min(1,alpha);\n  \tif rand < alpha\n\t\tx(k) = y;\n\telse\n\t\tx(k) = x(k-1);\n\tend % if\nend % for\n\\end{VrbM}\n\\end{example}\n\nIn general, $X$ may be multivariate. The idea behind the {\\it single-component Metropolis-Hastings sampler} is to update $X$ using a series of steps at each iteration, rather than a single step. To do this, $X$ is partitioned into $d$ parts: $X=(X_{[1]},X_{[2]},\\ldots,X_{[d]})$. Let $X_{[-j]}$ denote $X$ with the $j^{\\textrm{th}}$ part omitted, and suppose that the conditional distributions, $f(x_{[j]}|x_{[-j]})$, are known.\n\n% WORK make into algorithm\n\\subsection{\\alg -- {\\it Single-component Metropolis-Hastings sampler.}}\nChoose initial state $X^{(0)}$ and proposal distribution $g$.\\\\\n\\begin{tabbing}\n\\=At ite\\=ration $t$,\\\\\n\t\t\\>For $j = 1, 2,\\ldots, d$,\\\\\n\t\t\t\t\\>\t\t\\>Generate $\\tilde{X}_{[j]}\\sim g(x_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j]},\\ldots,X^{(t-1)}_{[d]})$ and $U_j\\sim U(0,1)$,\\\\\n\t\t\t\t\\>\\>Compute\\\\\n\\end{tabbing}\n\n\\begin{equation}\n\\begin{split}\n\\alpha_j=\\min\\{1,&\\frac{f(\\tilde{X}_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})}{f(X^{(t-1)}_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})}\\cdot\\\\\n&\\frac{g(X^{(t-1)}_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},\\tilde{X}_{[j]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})}{g(\\tilde{X}_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})}\\}\n\\end{split} \n\\end{equation}\n\n\\begin{tabbing}\n\\=If $U_j$\\=$\\leq\\alpha_j$\\\\\n\\>\\>Set $X^{(t)}_{[j]}=\\tilde{X}_{[j]}.$ (accept $\\tilde{X}_{[j]}$ with probability $\\alpha_j$)\\\\\n\\>Else\\>\\\\\n\\>\\>Set $X^{(t)}_{[j]}=X^{(t-1)}_{[j]}.$\\\\\n\\end{tabbing}\n%\\begin{flushright}   $\\boxbox$ \\end{flushright}\n\nIf it is possible to generate from the conditional distributions, $f(x_{[j]}|x_{[-j]})$, then by choosing them as the proposal distributions in the single-component Metropolis-Hastings sampler, the acceptance probabilities will always be one and the proposals will always be accepted, as the check below shows. The resulting algorithm is known as the {\\it Gibbs sampler}, which effectively generates from the conditional distributions.\n\\subsection{\\alg --{\\it Gibbs sampler.}}\nChoose initial state $X^{(0)}$.\n\\begin{tabbing}\n\n\\=At it\\=eration \\=$t$,\\\\\n\t\t\t\t\\>\\>For $j$\\>$ = 1, 2, \\ldots, d$,\\\\\n\t\t\t\t\t\t\\>\\>\\>Generate $X^{(t)}_{[j]}\\sim f(x_{[j]}|X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})$.\\\\\n\\end{tabbing}
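\n\nTo see why the Gibbs proposals are always accepted, here is a short check (our addition, not part of the original text). Write $x_{[-j]}$ for the conditioning values $(X^{(t)}_{[1]},\\ldots,X^{(t)}_{[j-1]},X^{(t-1)}_{[j+1]},\\ldots,X^{(t-1)}_{[d]})$ and take the full conditional $f(\\cdot|x_{[-j]})$ as the proposal $g$. The single-component acceptance ratio then collapses:\n\\begin{displaymath}\n\\alpha_j=\\min\\left\\{1,\\frac{f(\\tilde{X}_{[j]}|x_{[-j]})}{f(X^{(t-1)}_{[j]}|x_{[-j]})}\\cdot\\frac{f(X^{(t-1)}_{[j]}|x_{[-j]})}{f(\\tilde{X}_{[j]}|x_{[-j]})}\\right\\}=\\min\\{1,1\\}=1,\n\\end{displaymath}\nso every proposal is accepted.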
\n\n\\begin{example}[{\\tt gibbs\\_example.m}]\nConsider the joint density:\n$$f(x,y,z)\\propto x^4y^3z^2(1-x-y-z),$$\n \nwhere $x, y, z > 0$ and $x + y + z < 1$. Let $B(a, b)$ represent a beta distribution with parameters $a$ and $b$. The conditional distributions for $x$, $y$ and $z$ are given by:\n\\begin{displaymath}\n\\begin{array}{l}\n x|y,z\\sim (1-y-z)q, \\quad q\\sim B(5,2),\\\\\n y|x,z\\sim (1-x-z)r, \\quad r\\sim B(4,2),\\\\\n z|x,y\\sim (1-x-y)s, \\quad s\\sim B(3,2).\\\\\n\\end{array}\n\\end{displaymath}\n\nIn other words, the conditional distribution of $x$, given $y$ and $z$, is the same as the distribution of $(1-y-z)q$ where $q$ has a $B(5, 2)$ distribution, and so on. Implement a Gibbs sampler to generate samples from the joint density.\n\n\\Matlab code: For $m = 1000$ iterations of the Gibbs sampler:\n\\begin{VrbM}\nm = 1000;\nx = 0.3 * ones(1,m); % allocate storage and initialise to 0.3\ny = x;\nz = y;\nfor k = 2:m\n\tx(k) = (1 - y(k-1) - z(k-1)) * betarnd(5,2); % betarnd is the Matlab function for\n\t\t  % generating beta random variables\n\ty(k) = (1 - x(k) - z(k-1)) * betarnd(4,2);\n\tz(k) = (1 - x(k) - y(k)) * betarnd(3,2);\nend\n\\end{VrbM}\n\\end{example}\n\nHybrid combinations of single-component Metropolis-Hastings and Gibbs sampling are possible, with some parts of $X$ updated using Gibbs updates, and others (which cannot be generated from their conditional distributions) using Metropolis-Hastings updates.\n\nIn practice, an MCMC sampler is used to generate a long sequence of Markov chain states. After an initial {\\it burn-in period}, the states are assumed to have the required stationary distribution, at least approximately. The difficulty in using MCMC is deciding how long the burn-in period should be.\n\n\n\n\\section{Exercises}\n\n\\begin{exercise}\nImplement the IMH sampler in Example 2.4.9. Perform 10,000 iterations and plot the outputs sequentially. Comment on the appearance of the plot with regard to convergence to the target density. Plot the density histogram for the last 5000 iterations, and superimpose the target and proposal densities onto it.\n\\end{exercise}\n\n\\begin{exercise}\nImplement the Gibbs sampler in Example 2.4.12. Perform 10,000 Gibbs iterations, and plot the sequential outputs for $x$, $y$ and $z$. Comment on the appearance of the plots with regard to convergence to the target density. Obtain a three-dimensional scatter plot of the last 5000 sample points (use the {\\tt plot3} function).\n\\end{exercise}\n\n\\begin{exercise}\n\\begin{asparaenum}[(a)]\n\\item Generate a sample of size 20 from the $N(0.06, 1)$ distribution.\n\n\\item\tTreat the sample from Part (a) as observations from an $N(\\theta, 1)$ distribution. Pretend that you do not know $\\theta$ and wish to infer its value using the Bayesian approach. Denoting the sample by $z=\\{z_1,\\ldots,z_{20}\\}$, the posterior density of $\\theta$, given $z$, is given by Bayes' theorem as:\n$$f(\\theta|z)\\propto f(z|\\theta)f(\\theta),$$\nwhere $f(z|\\theta)$ is the likelihood function, i.e.:\n$$f(z|\\theta)\\propto \\exp\\left[ -\\frac{1}{2} \\sum^n_{i=1} (z_i-\\theta)^2 \\right],$$\n\nand $f(\\theta)$ is a prior density for $\\theta$. Choosing the Cauchy$(0,1)$ density as the prior density, the posterior density is therefore:\n$$f(\\theta|z)\\propto \\exp\\left[-\\frac{1}{2}\\sum^n_{i=1}(z_i-\\theta)^2\\right] \\frac{1}{1+\\theta^2}.$$\n\nImplement the IMH sampler for generating from the posterior density, using the Cauchy$(0,1)$ as the proposal density.\n\n\\item\tUse your IMH sampler to generate 1000 values from the posterior distribution. 
Use the generated values to estimate the mean of the posterior distribution and to obtain an approximate 95\\% probability interval for $\\theta$.\n\\end{asparaenum}\n\\end{exercise}\n\n\\begin{exercise}\nSuppose $x$ and $y$ have conditional distributions that are exponential distributions restricted to the interval $(0, 5)$. Then:\n$$f(x|y)\\propto ye^{-xy}\\textrm{ and } f(y|x)\\propto xe^{-xy}.$$\n\n\\begin{asparaenum}[(a)]\n\\item Implement the Gibbs sampler for generating sample points from the joint distribution $f(x,y)$.\n\n\\item\tUse your Gibbs sampler to generate 5000 sample points from $f(x,y)$. Use appropriate plots of the Markov chain outputs to assess convergence to the target distribution.\n\n\\item\tObtain a two-dimensional scatter plot of your generated sample points.\n\\end{asparaenum}\n\\end{exercise}\n\n", "meta": {"hexsha": "1d04d6d452130b782288d80b2cadd871cadbcb48", "size": 16340, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "matlab/csebook/MCMC.tex", "max_stars_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_stars_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-19T07:54:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T13:55:18.000Z", "max_issues_repo_path": "matlab/csebook/MCMC.tex", "max_issues_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_issues_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "matlab/csebook/MCMC.tex", "max_forks_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_forks_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-07-18T07:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-19T11:28:24.000Z", "avg_line_length": 49.9694189602, "max_line_length": 850, "alphanum_fraction": 0.688127295, "num_tokens": 5351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5870366878274241}}
{"text": "\\subsubsection*{Example}\n\t\\begin{figure}[H]\n\t\t\\centering\\includegraphics[width=2in]{source/qwd/qc.eps}\n\t\\end{figure}\n\tThe picture shows an graph without a feasible orientation. If we ignore the node in the right-bottom corner, the subgraph will has 4 nodes, 5 edges and every node has a $c(i)=1$. According to the pigeonhole principle, there will be a node has an in-dgree greater than $1$, thus the feasible orientation doesn't exsist.\n\t\\subsubsection*{Witness}\n\tIf a graph doesn't has a feasible orientation, then there exists a subgraph(witness), $e\\subseteq E,v=\\{u|\\{u,x\\}\\in e$ or $\\{x,u\\}\\in e, x,u\\in V\\}$, where $|E|>\\sum_{u\\in v}c(u)$.\n\t\\subsubsection*{Prove}\n\tWe have already reduced the orientation problem into a maximum mathcing in Ex.6. So we can translate the miximum mathcing witness into feasible orientation one: some set $X\\subseteq U$ is an edge set $e\\subseteq E$, $\\tau (X)$ is the sum of $c(i)$, where node $i$ is involved in $e$: $\\{i, x\\}\\in e$ or $\\{x, i\\} \\in e$. So the witness we described above is also the witness of maximum matching witness in the reduced problem.\n", "meta": {"hexsha": "0e96f8328d83f4e810faa0ea9e1f077106c08a7e", "size": 1098, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Week 4/source/qwd/ex7.tex", "max_stars_repo_name": "PrayStarJirachi/Algorithm-Homework", "max_stars_repo_head_hexsha": "22ec83e6d4a202d994a177e3dcbfd9225736666f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Week 4/source/qwd/ex7.tex", "max_issues_repo_name": "PrayStarJirachi/Algorithm-Homework", "max_issues_repo_head_hexsha": "22ec83e6d4a202d994a177e3dcbfd9225736666f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week 4/source/qwd/ex7.tex", "max_forks_repo_name": "PrayStarJirachi/Algorithm-Homework", "max_forks_repo_head_hexsha": "22ec83e6d4a202d994a177e3dcbfd9225736666f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 109.8, "max_line_length": 427, "alphanum_fraction": 0.7240437158, "num_tokens": 324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7401743505760727, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5870366877851529}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{IB}\n\n\\def\\ntitle{Geometry}\n\\def\\nlecturer{A.\\ G.\\ Kovalev}\n\n\\def\\nterm{Lent}\n\\def\\nyear{2018}\n\n\\input{header}\n\n\\theoremstyle{definition}\n\\newtheorem*{fact}{Fact}\n\n\\DeclareMathOperator{\\Isom}{Isom}\n\\DeclareMathOperator{\\mesh}{mesh}\n\\newcommand*{\\inner}{\\innerproduct}\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Eucldiean Geometry}\n\n\\subsection{Isometries}\n\nLet \\((\\cdot, \\cdot)\\) be the standard inner product, a.k.a.\\ dot product on the Euclidean space \\(\\R^n\\) where for \\(x, y \\in \\R^n\\),\n\\[\n  (x, y) = x \\cdot y = \\sum_{i = 1}^{n}x_iy_i.\n\\]\nThis induces the Euclidean norm\n\\[\n  \\norm x = \\sqrt{(x, x)}.\n\\]\nAlso define the Euclidean distance function\n\\[\n  d(x, y) = \\norm{x - y}\n\\]\nwhich makes \\(\\R^n\\) a metric space.\n\n\\begin{definition}[Isometry]\\index{isometry}\n  A map \\(f: \\R^n \\to \\R^n\\) is an \\emph{isometry} of \\(\\R^n\\) if\n  \\[\n    \\forall P, Q \\in \\R^n,\\, d(f(P), f(Q)) = d(P, Q).\n  \\]\n\\end{definition}\n\nRecall that an \\(n \\times n\\) matrix \\(A\\) is \\emph{orthogonal} if\n\\[\n  A^TA = AA^T = I.\n\\]\nFor any square matrix \\(A\\) we have\n\\[\n  (Ax, Ay) = (Ax)^T(Ay) = x^TA^TAy = (x, A^TAy)\n\\]\nso we find that \\(A\\) is orthogonal if and only if \\((Ax, Ay) = (x, y)\\) for all \\(x, y \\in \\R^n\\).\n\nAnother point of view:\n\\[\n  (x, y) = \\frac{1}{2}(\\norm{x + y}^2 -  \\norm x^2 - \\norm y^2)\n\\]\nso \\(A\\) is orthogonal if and only if \\(\\norm{Ax} = \\norm x\\) for all \\(x \\in \\R^n\\).\n\nAn example of isometry: let \\(f(x) = Ax + b\\) where \\(b \\in \\R^n\\), then\n\\[\n  d(f(x), f(y)) = \\norm{A(x - y)}.\n\\]\nSo \\(f\\) is an isometry if and only if \\(A\\) is orthogonal.\n\nSurprisingly, it turns out all isometries have this form:\n\n\\begin{theorem}\n  Every isometry \\(f: \\R^n \\to \\R^n\\) is of the form \\(f(x) = Ax + b\\) for some orthogonal matrix \\(A\\) and some vector \\(b \\in \\R^n\\).\n\\end{theorem}\n\n\\begin{proof}\n  Let \\(e_1, \\dots, e_n \\in \\R^n\\) be the standard basis of \\(\\R^n\\). Let \\(b = f(0)\\), \\(a_i = f(e_i) - b\\) for \\(i = 1, \\dots, n\\). We want to show that \\(a_i\\)'s form an orthonormal basis. Firstly\n  \\[\n    \\norm{a_i} = \\norm{f(e_i) - f(0)} = d(f(e_i), f(0)) = d(e_i, 0) = \\norm{e_i} = 1\n  \\]\n  so they have unit length. For \\(i \\neq j\\),\n  \\begin{align*}\n    (a_i, a_j) &= -\\frac{1}{2}(\\norm{a_i - a_j}^2 - \\norm{a_i}^2 - \\norm{a_j}^2) \\\\\n               &= -\\frac{1}{2}(\\norm{f(e_i) - f(e_j)}^2 - 2) \\\\\n               &= -\\frac{1}{2}(\\norm{e_i - e_j}^2 - 2) \\\\\n               &= 0\n  \\end{align*}\n  Thus \\(a_i\\)'s form an orthonormal basis of \\(\\R^n\\) and it follows that the matrix \\(A\\) with columns \\(a_i, \\dots, a_n\\) is orthogonal.\n\n  Let \\(g(x) = Ax + b\\) which is an isometry. We have \\(g(x) = f(x)\\) for \\(x = 0, e_1, \\dots, e_n\\). In addition,\n  \\[\n    g^{-1}(x) = A^{-1}(x - b) = A^T(x - b)\n  \\]\n  is an isometry so the composition \\(h = g^{-1} \\compose f\\) is an isometry fixing \\(0, e_1, \\dots, e_n\\). It then suffices to show \\(h = \\id\\). Consider \\(x = \\sum_{i = 1}^n x_ie_i \\in \\R^n\\). Let \\(y = h(x) = \\sum_{i = 1}^n y_ie_i\\). 
Then\n\\begin{align*}\n  d(x, e_i)^2 &= \\norm x^2 + 1 - 2x_i \\\\\n  d(x, 0)^2 &= \\norm x^2 \\\\\n  d(y, e_i)^2 &= \\norm y^2 + 1 - 2y_i \\\\\n  d(y, 0)^2 &= \\norm y^2\n\\end{align*}\nSince \\(h\\) is an isometry, \\(h(0) = 0\\), \\(h(e_i) = e_i\\) and \\(h(x) = y\\), we have \\(\\norm x = \\norm y\\) and \\(d(x, e_i) = d(y, e_i)\\), so \\(x_i = y_i\\) for all \\(i\\). Thus \\(h(x) = x\\) for all \\(x \\in \\R^n\\).\n\\end{proof}\n\n\\begin{remark}\n  \\[\n    \\Isom(\\R^n) = \\{\\text{all isometries of } \\R^n\\}\n  \\]\n  is a group by composition. This is also known as the group of rigid motions of \\(\\R^n\\).\n\\end{remark}\n\n\\begin{eg}[Reflections in an affine hyperplane \\(H \\subset \\R^n\\)]\n  Let\n  \\[\n    H = \\{x \\in \\R^n: u \\cdot x = c\\}\n  \\]\n  where \\(\\norm u = 1\\) and \\(c \\in \\R\\). Observe that \\(u\\) is perpendicular to \\(H\\) and so is a normal vector. The reflection in \\(H\\) is defined to be\n  \\[\n    R_H: x \\mapsto x - 2(x \\cdot u - c)u.\n  \\]\n  It is an exercise on the example sheet to show that this is an isometry. Observe that if \\(x \\in H\\) then \\(R_H(x) = x\\). If \\(a \\in H, t \\in \\R\\) then\n  \\[\n    R_H(a + tu) = (a + tu) - 2tu = a - tu.\n  \\]\n  Thus \\(R_H\\) fixes exactly the points in \\(H\\).\n\n  Conversely, suppose \\(S \\in \\Isom(\\R^n)\\) and \\(S\\) fixes every point in \\(H\\). Let \\(a \\in H\\) and define the translation by \\(a\\) as\n  \\[\n    T_a(x) = x + a,\n  \\]\n  which is clearly an isometry. Conjugating \\(S\\) by \\(T_a\\), we get\n  \\[\n    R = T_{-a}ST_a \\in \\Isom(\\R^n)\n  \\]\n  and \\(R\\) fixes \\(H' = T_{-a}(H)\\). We choose to work with \\(H'\\) since \\(0 \\in H'\\), making it a subspace of \\(\\R^n\\). Explicitly, if \\(H = \\{x: x\\cdot u = c\\}\\) then \\(H' = \\{x: x \\cdot u = 0\\}\\). Then for all \\(x \\in H'\\),\n  \\[\n    (Ru, x) = (Ru, Rx) = (u, x) = 0.\n  \\]\n  Thus \\(Ru\\) is orthogonal to \\(H'\\), i.e.\\ lies in the orthogonal complement of \\(H'\\) in \\(\\R^n\\), so \\(Ru = \\lambda u\\) for some \\(\\lambda \\in \\R\\); since \\(\\norm{Ru} = \\norm u = 1\\), \\(\\lambda^2 = 1\\), so \\(\\lambda = \\pm 1\\).\n\n  Since \\(R\\) fixes \\(0 \\in \\R^n\\), \\(R\\) is linear by the previous theorem and either \\(R = \\id\\) or \\(R\\) is given, in an orthonormal basis whose first vector is \\(u\\), by the matrix\n  \\[\n    \\begin{pmatrix}\n      -1 & & \\\\\n      & 1 & \\\\\n      & & \\ddots \\\\\n      & & & 1\n    \\end{pmatrix}\n  \\]\n  i.e.\\ \\(R_{H'}\\). If \\(R = \\id\\) then \\(S = \\id\\). If \\(R = R_{H'}\\) then \\(S = T_aR_{H'}T_{-a}\\). Check that\n  \\[\n    S: x \\mapsto x - a \\mapsto (x - a) - 2(x \\cdot u - a \\cdot u) u \\mapsto x - 2(x \\cdot u - c)u\n  \\]\n  is a reflection. Thus if \\(S \\in \\Isom(\\R^n)\\) fixes \\(H\\) and \\(S \\neq \\id\\), then \\(S\\) is the reflection in \\(H\\).\n\\end{eg}\n\n\\begin{remark}\n  One can show that every isometry of \\(\\R^n\\) is a composition of at most \\(n + 1\\) reflections (see example sheet 1).\n\\end{remark}\n\nFrom the previous theorem, the subgroup\n\\[\n  \\{f \\in \\Isom(\\R^n): f(0) = 0\\} = \\{f(x) = Ax: AA^T = I\\}\n\\]\nis naturally isomorphic to \\(O(n)\\), the \\emph{orthogonal group}\\index{orthogonal group}.\n\nAs \\((\\det A)^2 = 1\\) for every \\(A \\in O(n)\\), we must have \\(\\det A = \\pm 1\\). 
We call \\(\\{A \\in O(n): \\det A = 1\\}\\) the \\emph{special orthogonal group}\\index{orthogonal group!special}, denoted \\(SO(n)\\).\n\n\\begin{eg}[\\(O(2)\\)]\n  \\[\n    A =\n    \\begin{pmatrix}\n      a & c \\\\\n      b & d\n    \\end{pmatrix}\n    \\in O(2) \\Leftrightarrow a^2 + c^2 = 1, b^2 + d^2 = 1, ab + cd = 0\n  \\]\n  Set \\(a = \\cos \\theta, c = \\sin \\theta\\) and \\(b = - \\sin \\varphi, d = \\cos \\varphi\\) for some \\(0 \\leq \\theta, \\varphi \\leq 2\\pi\\). Then we deduce\n  \\[\n    \\tan \\theta = \\tan \\varphi \\in \\R \\cup \\{\\infty\\}\n  \\]\n  so\n  \\[\n    \\theta = \\varphi \\text{ or } \\theta = \\varphi \\pm \\pi.\n  \\]\n  The first case corresponds to\n  \\[\n    A =\n    \\begin{pmatrix}\n      \\cos \\theta & -\\sin \\theta \\\\\n      \\sin \\theta & \\cos \\theta\n    \\end{pmatrix}\n  \\]\n  which is the rotation through \\(\\theta\\) about \\(0\\). As \\(\\det A = 1\\), \\(A \\in SO(2)\\). The second case gives\n  \\[\n    A =\n    \\begin{pmatrix}\n      \\cos \\theta & \\sin \\theta \\\\\n      \\sin \\theta & - \\cos \\theta\n    \\end{pmatrix}\n  \\]\n  which we claim is a reflection: it fixes the line \\(\\ell\\) which passes through the origin and forms an angle \\(\\theta/2\\) with the positive \\(x\\)-axis. \\(\\det A = -1\\) so \\(A \\notin SO(2)\\).\n\\end{eg}\n\n\\begin{remark}[Orientation]\\index{orientation}\n  For a finite-dimensional vector space, its \\emph{orientation} is an equivalence class of bases --- let \\(v_1, \\dots, v_n\\) and \\(v_1', \\dots, v_n'\\) be two bases, and \\(A = (A_{ij})\\) be the change-of-basis matrix from \\(\\{v_i\\}\\) to \\(\\{v_i'\\}\\). These bases are equivalent, i.e.\\ give the \\emph{same orientation}, if \\(\\det A > 0\\).\n\\end{remark}\n\n\\begin{definition}\n  An isometry \\(f(x) = Ax + b\\) is said to be \\emph{orientation-preserving} if \\(\\det A = 1\\), and \\emph{orientation-reversing} if \\(\\det A = -1\\).\n\\end{definition}\n\n\\begin{eg}[\\(O(3)\\)]\nLet's study \\(O(3)\\) in detail. Consider first the case \\(\\det A = 1\\), then\n\\[\n  \\det(A - I) = \\det(A^T -I) = \\det(A(A^T - I)) = \\det(I - A).\n\\]\nSince \\(A\\) is a \\(3 \\times 3\\) matrix, \\(\\det(I - A) = (-1)^3 \\det(A - I)\\), so we must have \\(\\det (A - I) = 0\\) and \\(+1\\) is an eigenvalue. Thus there exists \\(v_1 \\in \\R^3, \\norm{v_1} = 1\\) such that \\(Av_1 = v_1\\). Set \\(W = \\generation{v_1}^\\perp\\), a plane. Then for \\(w \\in W\\),\n\\[\n  (Aw, v_1) = (Aw, Av_1) = (w, v_1) = 0\n\\]\nso \\(W\\) is \\(A\\)-stable. Thus \\(A|_W\\) is a \\emph{rotation} of the \\(2\\)-dimensional space \\(W\\). Choose \\(v_2, v_3\\) to be an orthonormal basis of \\(W\\); then \\(A\\) has matrix representation with respect to \\(v_1, v_2, v_3\\)\n\\[\n  \\begin{pmatrix}\n    1 & 0 & 0 \\\\\n    0 & \\cos \\theta & -\\sin \\theta \\\\\n    0 & \\sin \\theta & \\cos \\theta\n  \\end{pmatrix}\n\\]\n\nNow suppose \\(\\det A = -1\\). Then \\(-A\\) in some basis is of the above matrix form. Thus \\(A\\) is of the form\n\\[\n  \\begin{pmatrix}\n    -1 & 0 & 0 \\\\\n    0 & \\cos \\varphi & -\\sin \\varphi \\\\\n    0 & \\sin \\varphi & \\cos \\varphi\n  \\end{pmatrix}\n\\]\nwhere \\(\\varphi = \\theta + \\pi\\). 
This is a \\emph{rotated reflection} (in particular a pure reflection when \\(\\varphi = 0\\)).\n\\end{eg}\n\n\\subsection{Curves in \\texorpdfstring{\\(\\R^n\\)}{\u211d\\^{}n}}\n\n\\begin{definition}[Curve]\\index{curve}\n  A \\emph{curve} \\(\\Gamma\\) in \\(\\R^n\\) is a continuous map \\(\\Gamma: [a, b] \\to \\R^n\\).\n\\end{definition}\n\nA \\emph{dissection} \\(\\mathcal D\\) of \\([a, b]\\) is a sequence\n\\[\n  a = t_0 < t_1 < \\dots < t_N = b.\n\\]\nSet \\(P_i = \\Gamma(t_i)\\) and let\n\\[\n  s_{\\mathcal D} = \\sum_i \\norm{P_iP_{i + 1}}.\n\\]\n\n\\begin{definition}[Length]\\index{length}\n  The \\emph{length} of a curve \\(\\Gamma\\) is\n  \\[\n    \\ell = \\sup_{\\mathcal D} s_{\\mathcal D}\n  \\]\n  if this supremum exists (i.e.\\ is finite).\n\\end{definition}\n\nIf \\(\\mathcal D'\\) is a refinement of \\(\\mathcal D\\) (has extra points added), then \\(s_{\\mathcal D} \\leq s_{\\mathcal D'}\\) by the triangle inequality. Let\n\\[\n  \\mesh \\mathcal D = \\max_i(t_i - t_{i - 1}).\n\\]\nThen if \\(\\ell\\) exists, we have\n\\[\n  \\ell = \\lim_{\\mesh \\mathcal D \\to 0} s_{\\mathcal D}.\n\\]\n\n\\begin{note}\n  In fact we have\n  \\[\n    \\ell = \\min\\{\\tilde \\ell: \\tilde \\ell \\geq s_{\\mathcal D} \\text{ for all } \\mathcal D\\}.\n  \\]\n\\end{note}\n\n\\begin{proposition}\n  If \\(\\Gamma\\) is continuously differentiable, then\n  \\[\n    \\ell(\\Gamma) = \\int_a^b \\norm{\\Gamma'(t)} dt.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Assume \\(n = 3\\) to avoid excessive notation. Let\n  \\[\n    \\Gamma(t) = (f_1(t), f_2(t), f_3(t)).\n  \\]\n  If \\(s \\neq t \\in [a, b]\\), applying the Mean Value Theorem to each \\(f_i\\) gives us\n  \\[\n    \\frac{f_i(t) - f_i(s)}{t - s} = f'_i(\\xi_i)\n  \\]\n  for some \\(s < \\xi_i < t\\). The \\(f_i'\\) are uniformly continuous on \\([a, b]\\) so for all \\(\\varepsilon > 0\\) there exists \\(\\delta > 0\\) such that\n  \\[\n    |t - s| < \\delta \\implies |f_i'(\\xi_i) - f_i'(\\xi)| < \\frac{\\varepsilon}{3} \\, \\forall \\xi \\in (s, t).\n  \\]\n  So for all \\(\\xi \\in (s, t)\\),\n  \\begin{align*}\n    \\norm*{\\frac{\\Gamma(s) - \\Gamma(t)}{s - t} - \\Gamma'(\\xi)} &= \\norm{(f_1'(\\xi_1), f_2'(\\xi_2), f_3'(\\xi_3)) - (f_1'(\\xi), f_2'(\\xi), f_3'(\\xi))} \\\\\n                                                               &< \\frac{\\varepsilon}{3} + \\frac{\\varepsilon}{3} +\\frac{\\varepsilon}{3} \\\\\n                                                               &= \\varepsilon\n  \\end{align*}\n  i.e.\n  \\[\n    \\norm{(\\Gamma(t) - \\Gamma(s)) - (t - s)\\Gamma'(\\xi)} < \\varepsilon(t - s)\n  \\]\n  for all \\(\\xi \\in (s, t)\\). Specialising to \\(t = t_i, s= t_{i - 1}, \\xi = \\frac{t_i + t_{i - 1}}{2}\\) and summing over \\(i\\), we get\n  \\[\n    \\sum_i \\norm*{(\\Gamma(t_i) - \\Gamma(t_{i - 1})) - (t_i - t_{i - 1}) \\Gamma'\\left(\\frac{t_i + t_{i - 1}}{2}\\right)} < \\sum_i \\varepsilon(t_i - t_{i - 1}) = \\varepsilon(b - a).\n  \\]\n  Now if \\(\\mesh \\mathcal D < \\delta\\) then the reverse triangle inequality gives\n  \\[\n    \\left|s_{\\mathcal D}  - \\sum_i (t_i - t_{i - 1}) \\norm*{\\Gamma'\\left(\\frac{t_i + t_{i - 1}}{2}\\right)}\\right| < \\varepsilon (b - a).\n  \\]\n  Finally, as \\(\\norm{\\Gamma'(t)}\\) is a continuous function of \\(t\\), it is integrable, so the summation converges to \\(\\int_a^b \\norm{\\Gamma'(t)} dt\\) as \\(\\mesh \\mathcal D \\to 0\\). Thus\n  \\[\n    \\ell(\\Gamma) = \\lim_{\\mesh \\mathcal D \\to 0} s_{\\mathcal D} = \\int_a^b \\norm{\\Gamma'(t)} dt\n  \\]\n  as required.\n\\end{proof}
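\n\nAs a quick illustration (our example, not from the lectures), take the unit circle traversed once:\n\\[\n  \\Gamma(t) = (\\cos t, \\sin t), \\quad t \\in [0, 2\\pi], \\qquad \\ell(\\Gamma) = \\int_0^{2\\pi} \\norm{(-\\sin t, \\cos t)} \\, dt = \\int_0^{2\\pi} 1 \\, dt = 2\\pi,\n\\]\nthe expected circumference.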
\n\n\\section{Spherical Geometry}\n\n\\begin{notation}\n  Let \\(S^2 \\subseteq \\R^3\\) be the unit sphere with centre \\(0\\). A \\emph{great circle}\\index{great circle}, sometimes also called a \\emph{(spherical) line}, is the intersection of \\(S^2\\) with a plane through \\(0\\). For every non-antipodal pair of points \\(P, Q \\in S^2\\), there is a unique line in \\(S^2\\) passing through \\(P\\) and \\(Q\\). It is given by the intersection of \\(S^2\\) and the plane through \\(P, Q, 0\\).\n\\end{notation}\n\n\\begin{definition}[Distance on sphere]\n  For \\(P, Q \\in S^2\\), the \\emph{distance} \\(d(P, Q)\\) is the length of the shorter of the two line segments \\(PQ\\) along the great circle through \\(P, Q\\). \\(d(P, Q) = \\pi\\) if \\(P\\) and \\(Q\\) are antipodal.\n\\end{definition}\n\n\\begin{note}\n  \\begin{align*}\n    d(P, Q) &= \\text{angle between } \\V P = OP \\text{ and } \\V Q = OQ \\\\\n            &= \\cos^{-1} (\\V P \\cdot \\V Q)\n  \\end{align*}\n\\end{note}\n\n\\begin{definition}[Spherical triangle]\\index{spherical triangle}\n  A \\emph{spherical triangle}, \\(ABC\\) say, is defined like a Euclidean triangle but with \\(AB, AC, BC\\) line segments on \\(S^2\\) with lengths \\(< \\pi\\).\n\\end{definition}\n\n\\begin{notation}\n  \\(\\V A = OA\\) etc.\n\\end{notation}\n\nSet\n\\begin{align*}\n  n_1 &= \\frac{\\V C \\times \\V B}{\\sin a} \\\\\n  n_2 &= \\frac{\\V A \\times \\V C}{\\sin b} \\\\\n  n_3 &= \\frac{\\V B \\times \\V A}{\\sin c}\n\\end{align*}\nwhich are the unit normals to the planes \\(OBC\\), \\(OAC\\), \\(OAB\\) pointing out of the solid \\(OABC\\). Here \\(\\alpha, \\beta, \\gamma\\) are the angles between the planes defining the sides of \\(ABC\\).\n\n\\begin{note}\n  The angle between \\(n_2\\) and \\(n_3\\) is \\(\\pi + \\alpha\\) so \\(n_2 \\cdot n_3 = -\\cos \\alpha\\). Similarly for the other two terms.\n\\end{note}\n\n\\begin{theorem}[Spherical cosine rule]\n  \\[\n    \\sin a \\sin b \\cos \\gamma = \\cos c - \\cos a \\cos b.\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  We use\n  \\[\n    (\\V C \\times \\V B) \\cdot (\\V A \\times \\V C) = (\\V A \\cdot \\V C)(\\V B \\cdot \\V C) - (\\V C \\cdot \\V C)(\\V B \\cdot \\V A),\n  \\]\n  which we derived in IA Vector Calculus. Note that \\(|\\V C| = 1\\) and\n  \\begin{align*}\n    -\\cos \\gamma &= n_1 \\cdot n_2 \\\\\n                 &= \\frac{\\V C \\times \\V B}{\\sin a} \\cdot \\frac{\\V A \\times \\V C}{\\sin b} \\\\\n                 &= \\frac{(\\V A \\cdot \\V C)(\\V B \\cdot \\V C) - (\\V B \\cdot \\V A)}{\\sin a \\sin b} \\\\\n                 &= \\frac{\\cos b \\cos a - \\cos c}{\\sin a \\sin b}\n  \\end{align*}\n\\end{proof}\n\n\\begin{corollary}[Spherical Pythagoras Theorem]\n  If \\(\\gamma = \\frac{\\pi}{2}\\) then\n  \\[\n    \\cos c = \\cos a \\cos b.\n  \\]\n\\end{corollary}
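\n\nFor instance (an added check, not from the lectures), consider the octant triangle cut out by three mutually orthogonal planes through \\(0\\), which has \\(a = b = \\frac{\\pi}{2}\\) and \\(\\gamma = \\frac{\\pi}{2}\\). Spherical Pythagoras gives\n\\[\n  \\cos c = \\cos \\frac{\\pi}{2} \\cos \\frac{\\pi}{2} = 0,\n\\]\nso \\(c = \\frac{\\pi}{2}\\): by symmetry all three sides, and likewise all three angles, of this triangle equal \\(\\frac{\\pi}{2}\\).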
\n\n\\begin{theorem}[Spherical sine rule]\n  \\[\n    \\frac{\\sin a}{\\sin \\alpha} = \\frac{\\sin b}{\\sin \\beta} = \\frac{\\sin c}{\\sin \\gamma}.\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  Use the identity\n  \\[\n    (\\V A \\times \\V C) \\times (\\V C \\times \\V B) = (\\V C \\cdot (\\V B \\times \\V A)) \\V C.\n  \\]\n\n  Note that the LHS equals\n  \\[\n    -(n_1 \\times n_2) \\sin a \\sin b\n  \\]\n  and that the angle between \\(n_1\\) and \\(n_2\\) is \\(\\pi + \\gamma\\).\n\n  Consider the plane through \\(0\\) that is orthogonal to \\(\\V C\\). We find\n  \\[\n    n_1 \\times n_2 = \\V C \\sin \\gamma.\n  \\]\n  Thus the coefficient of \\(\\V C\\) is\n  \\[\n    \\V C \\cdot (\\V B \\times \\V A) = -\\sin a \\sin b \\sin \\gamma.\n  \\]\n  By the cyclic symmetry of the triple product, it also equals\n  \\[\n    \\V A \\cdot (\\V C \\times \\V B) = -\\sin b \\sin c \\sin \\alpha.\n  \\]\n  Equating them we get\n  \\[\n    \\frac{\\sin c}{\\sin \\gamma} = \\frac{\\sin a}{\\sin \\alpha}.\n  \\]\n\\end{proof}\n\n\\begin{remark}\n  Observe that for small \\(a, b, c\\), a piece of \\(S^2\\) is approximated better and better by a piece of \\(\\R^2\\). Formally,\n  \\begin{align*}\n    \\sin a &= a + O(a^3) \\\\\n    \\cos a &= 1 - \\frac{a^2}{2} + O(a^4)\n  \\end{align*}\n  Thus we can obtain the Euclidean versions of the cosine and sine rules by taking \\(a, b, c\\) small:\n  \\begin{align*}\n    ab\\cos \\gamma &= (1 - \\frac{c^2}{2}) - (1 - \\frac{a^2}{2})(1 - \\frac{b^2}{2}) + O(\\norm{(a, b, c)}^3) \\\\\n    c^2 + 2ab \\cos \\gamma &= a^2 + b^2 + O(\\norm{(a, b, c)}^3)\n  \\end{align*}\n\\end{remark}\n\nNow we discuss the metric properties of spherical geometry. If \\(\\gamma = \\pi\\) then \\(\\V C\\) lies in the line segment \\(AB\\), so \\(c = a + b\\). Otherwise, from the spherical cosine rule,\n\\[\n  \\cos c > \\cos a \\cos b - \\sin a \\sin b = \\cos(a + b).\n\\]\nAs \\(\\cos\\) is decreasing on \\([0, \\pi]\\) and \\(0 < c < \\pi\\), \\(0 < a + b < 2\\pi\\), we have\n\\[\n  c < a + b.\n\\]\nThis gives us\n\n\\begin{corollary}[Spherical triangle inequality]\n  \\[\n    d(P, Q) + d(Q, R) \\geq d(P, R)\n  \\]\n  with equality if and only if \\(Q\\) is on the shorter line segment \\(PR\\).\n\\end{corollary}\nThus we have shown that \\((S^2, d)\\) is a metric space.\n\n\\begin{proposition}\n  Given a curve \\(\\Gamma\\) on \\(S^2 \\subseteq \\R^3\\) from \\(P\\) to \\(Q\\) with length \\(\\ell\\), we have\n  \\[\n    \\ell \\geq d(P, Q).\n  \\]\n  Moreover, if \\(\\ell = d(P, Q)\\), then the image of \\(\\Gamma\\) is a spherical line segment.\n\\end{proposition}\n\n\\begin{proof}\n  Let \\(\\Gamma: [0, 1] \\to S^2\\) have length \\(\\ell\\). For any dissection \\(\\mathcal D\\) of \\([0, 1]\\) with\n  \\[\n    0 = t_0 < t_1 < \\dots < t_N = 1\n  \\]\n  and \\(P_i = \\Gamma(t_i)\\), define two sums\n  \\[\n    \\tilde s_{\\mathcal D} = \\sum_{i = 1}^N d(P_{i -1}, P_i) \\geq s_{\\mathcal D} = \\sum_{i = 1}^N |P_{i - 1}P_i|,\n  \\]\n  because an arc is at least as long as its chord, i.e.\\ \\(2\\theta \\geq 2 \\sin \\theta\\).\n\n  Suppose for contradiction that \\(\\ell < d(P, Q)\\). Then there exists \\(\\varepsilon > 0\\) such that \\((1 + \\varepsilon)\\ell < d(P, Q)\\). As \\(\\lim_{\\theta \\to 0} \\frac{\\sin \\theta}{\\theta} = 1\\),\n  \\[\n    2\\theta \\leq (1 + \\varepsilon) 2\\sin \\theta\n  \\]\n  for all sufficiently small \\(\\theta > 0\\).\n\n  \\(\\Gamma\\) is uniformly continuous on \\([0, 1]\\) so we can choose a refined \\(\\mathcal D\\) such that\n  \\[\n    d(P_{i - 1}, P_i) \\leq (1 + \\varepsilon)|P_{i - 1}P_i|\n  \\]\n  for all \\(i\\). Therefore\n  \\[\n    \\tilde s_{\\mathcal D} \\leq (1 + \\varepsilon)s_{\\mathcal D} \\leq (1 + \\varepsilon)\\sup_{\\mathcal D} s_{\\mathcal D} = (1 + \\varepsilon) \\ell < d(P, Q).\n  \\]\n  But \\(\\tilde s_{\\mathcal D} \\geq d(P, Q)\\) by repeated use of the spherical triangle inequality. This is absurd.\n\n  Suppose instead \\(\\ell = d(P, Q)\\) for some \\(\\Gamma: [0, 1] \\to S^2\\). 
Then for all \\(t \\in [0, 1]\\),\n  \\begin{align*}\n    d(P, Q) &= \\ell \\\\\n            &= \\operatorname{length} \\Gamma|_{[0, t]} + \\operatorname{length} \\Gamma|_{[t, 1]} \\\\\n            &\\geq d(P, \\Gamma(t)) + d(\\Gamma(t), Q) \\\\\n            &\\geq d(P, Q) \n  \\end{align*}\n  Thus we have equality throughout, and by the equality case of the spherical triangle inequality every point \\(\\Gamma(t)\\) lies on the shorter line segment from \\(P\\) to \\(Q\\); hence the image of \\(\\Gamma\\) is that segment.\n\\end{proof}\n\n\\begin{remark}\n  If \\(\\Gamma\\) is the shortest path between \\(P\\) and \\(Q\\) then \\(\\Gamma\\) is a spherical line segment. Furthermore, from the argument of the above proposition,\n  \\[\n    \\operatorname{length} \\Gamma|_{[0, t]} = d(P, \\Gamma(t))\n  \\]\n  so the parameterisation of \\(\\Gamma\\) is \\emph{monotonic}. In further courses in geometry such as IID Differential Geometry, such \\(\\Gamma\\)'s are sometimes called \\emph{minimising geodesics}. See example sheet 1 for a similar but easier discussion of geodesics in Euclidean space.\n\\end{remark}\n\n\\subsection{Area of spherical triangles}\n\n\\begin{proposition}[Gauss-Bonnet for \\(S^2\\)]\n  If \\(\\Delta\\) is a spherical triangle with angles \\(\\alpha, \\beta, \\gamma\\), then\n  \\[\n    \\operatorname{area} \\Delta = (\\alpha + \\beta + \\gamma) - \\pi.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  We assume the area of \\(S^2\\) is \\(4\\pi\\) and the additivity of areas.\n\n  A \\emph{double line} with angle \\(0 < \\alpha < \\pi\\) is two antipodal regions on \\(S^2\\) cut out by planes through antipodal \\(A, A' \\in S^2\\), where \\(\\alpha\\) is the angle between the planes. The area of such a double line is \\(4\\alpha\\). A triangle \\(\\Delta = ABC\\) is the intersection of three single lines. Both \\(\\Delta\\) and its antipodal image \\(\\Delta'\\) lie in each of the three double lines with angles \\(\\alpha, \\beta, \\gamma\\), while any other point \\(P \\in S^2 \\setminus (\\Delta \\cup \\Delta')\\) is in exactly one double line. Thus\n  \\[\n    4(\\alpha + \\beta + \\gamma) = 4\\pi + 2 \\cdot (\\operatorname{area}\\Delta + \\operatorname{area}\\Delta').\n  \\]\n  Since \\(\\operatorname{area}\\Delta = \\operatorname{area}\\Delta'\\), the result follows.\n\\end{proof}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item Spherical triangles have \\(\\alpha + \\beta + \\gamma > \\pi\\). As the area of a triangle tends to \\(0\\), \\(\\alpha + \\beta + \\gamma \\to \\pi\\); thus Euclidean geometry is the small-scale approximation of spherical geometry.\n  \\item Let \\(M\\) be a convex \\(n\\)-gon in \\(S^2\\) where \\(n \\geq 3\\), i.e.\\ for all \\(P, Q \\in M\\), the shorter line segment \\(PQ\\) is in \\(M\\), and let the interior angles be \\(\\alpha_1, \\dots, \\alpha_n\\). Then\n    \\[\n      \\operatorname{area} M = \\sum_{i = 1}^n \\alpha_i - (n - 2)\\pi,\n    \\]\n    by cutting \\(M\\) into triangles.\n  \\end{enumerate}\n\\end{remark}
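\n\nAs a sanity check (our example), the octant triangle considered earlier has three right angles, so\n\\[\n  \\operatorname{area} \\Delta = \\left(\\frac{\\pi}{2} + \\frac{\\pi}{2} + \\frac{\\pi}{2}\\right) - \\pi = \\frac{\\pi}{2},\n\\]\nwhich is indeed one eighth of the total area \\(4\\pi\\) of the sphere.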
\n\n\\subsection{M\u00f6bius geometry}\n\nConsider \\(\\C_\\infty = \\C \\cup \\{\\infty\\}\\).\n\n\\begin{definition}[Stereographic projection]\\index{stereographic projection}\n  The \\emph{stereographic projection} is the map \\(\\pi: S^2 \\to \\C_\\infty\\) which sends the north pole to \\(\\infty\\) and any other point \\(P\\) to the intersection of the line through \\(P\\) and the north pole with the complex plane.\n\\end{definition}\n\nWe will use \\(\\zeta = x + iy\\) to denote a point in \\(\\C_\\infty\\). From similar triangles we get\n\\[\n  \\pi(x, y, z) = \\frac{x + iy}{1 - z}.\n\\]\n\n\\begin{lemma}\n  If \\(\\pi': S^2 \\to \\C_\\infty\\) is the stereographic projection from the south pole \\((0, 0, -1)\\) then\n  \\[\n    \\pi'(P) = \\frac{1}{\\conj{\\pi(P)}}\n  \\]\n  for all \\(P \\in S^2\\).\n\\end{lemma}\n\n\\begin{proof}\n  Easy once we write down the coordinates. Suppose \\(P = (x, y, z)\\). Then\n  \\begin{align*}\n    \\pi(P) &= \\frac{x + iy}{1 - z} \\\\\n    \\pi'(P) &= \\frac{x + iy}{1 + z}\n  \\end{align*}\n  and so\n  \\[\n    \\conj{\\pi(P)} \\cdot \\pi'(P) = \\frac{x^2 + y^2}{1 - z^2} = 1\n  \\]\n  as \\(S^2 = \\{x^2 + y^2 + z^2 = 1\\}\\).\n\\end{proof}\n\n\\begin{note}\n  \\(\\pi' \\compose \\pi^{-1}: \\C_\\infty \\to \\C_\\infty\\) takes \\(\\zeta \\to \\frac{1}{\\conj \\zeta}\\), the \\emph{inversion} in the circle \\(|\\zeta| = 1\\).\n\\end{note}\n\n\\subsubsection{Antipodal points}\n\nIf \\(P = (x, y, z) \\in S^2\\), then \\(\\pi(P) = \\frac{x + iy}{1 - z} \\in \\C_\\infty\\) and \\(\\pi(-P) = \\frac{-x - iy}{1 + z}\\), so\n\\[\n  \\pi(P) \\cdot \\conj{\\pi(-P)} = -\\frac{x^2 + y^2}{1 - z^2} = -1,\n\\]\nand hence \\(\\pi(-P) = -\\frac{1}{\\conj \\zeta}\\) where \\(\\zeta = \\pi(P) = \\frac{x + iy}{1 - z}\\).\n\n\\subsubsection{M\u00f6bius Transformations}\n\nM\u00f6bius transformations act on \\(\\C_\\infty\\) and form a group \\(G\\). Given any matrix with complex entries \\(A = \\begin{psmallmatrix} a & b \\\\ c & d \\end{psmallmatrix}\\), define\n\\begin{align*}\n  \\C_\\infty &\\to \\C_\\infty \\\\\n  \\zeta &\\mapsto \\frac{a\\zeta + b}{c\\zeta + d}\n\\end{align*}\nFor all \\(\\lambda \\in \\C^* = \\C \\setminus \\{0\\}\\), \\(\\lambda A\\) defines the same M\u00f6bius transformation. The converse is also true: if \\(A_1, A_2\\) define the same M\u00f6bius transformation then there exists \\(\\lambda\\) such that \\(A_1 = \\lambda A_2\\). Therefore from group theory we know\n\\[\n  G \\cong \\PGL(2, \\C) = \\GL(2, \\C)/\\C^*.\n\\]\nThus it suffices to assume \\(\\det A = 1\\). But this does not eliminate all ambiguities --- if \\(1 = \\det(\\lambda \\tilde A) = \\lambda^2 \\det \\tilde A = \\lambda^2\\) then \\(\\lambda = \\pm 1\\). Thus\n\\[\n  G \\cong \\PSL(2, \\C) = \\SL(2, \\C)/\\{\\pm 1\\}.\n\\]\n\nOn \\(S^2\\) we have the rotations \\(\\SO(3)\\) acting as isometries (see example sheet). Which M\u00f6bius transformations do they correspond to?\n\n\\begin{theorem}\n  Via the stereographic projection, every rotation of \\(S^2\\) induces a M\u00f6bius transformation defined by a matrix in \\(\\SU(2) \\leq \\SL(2, \\C)\\).\n\\end{theorem}\n\n\\begin{proof}\n  Denote by \\(r(z, \\theta)\\) the rotation about the \\(z\\)-axis through \\(\\theta\\). It corresponds to the M\u00f6bius map \\(\\zeta \\mapsto e^{i\\theta} \\zeta\\) with matrix\n  \\[\n    \\begin{pmatrix}\n      e^{i\\theta/2} & 0 \\\\\n      0 & e^{-i\\theta/2}\n    \\end{pmatrix}\n    \\in \\SU(2).\n  \\]\n\n  Next, the rotation \\(r(y, \\frac{\\pi}{2})\\) has matrix\n  \\[\n    \\begin{pmatrix}\n      0 & 0 & 1 \\\\\n      0 & 1 & 0 \\\\\n      -1 & 0 & 0\n    \\end{pmatrix}\n  \\]\n  and corresponds to \\(\\zeta = \\frac{x + iy}{1 - z} \\mapsto \\zeta' = \\frac{z + iy}{1 + x}\\), since by drawing a diagram we know\n  \\begin{align*}\n    -1 &\\mapsto \\infty \\\\\n    1 &\\mapsto 0 \\\\\n    i &\\mapsto i\n  \\end{align*}\n  The corresponding unique M\u00f6bius map is then \\(\\zeta' = \\frac{\\zeta - 1}{\\zeta + 1}\\). 
We check that\n  \\begin{align*}\n    \\frac{\\zeta - 1}{\\zeta + 1}\n    &= \\frac{x + iy - 1 + z}{x + iy +1 - z} \\\\\n    &= \\frac{x - 1 + z + iy}{x + 1 - (z - iy)} \\\\\n    &= \\frac{(z + iy)(x - 1 + z + iy)}{(x + 1)(z + iy) + (x^2 - 1)} \\\\\n    &= \\frac{(z + iy)(x - 1 + z + iy)}{(x + 1)(z + iy + x -1)} \\\\\n    &= \\zeta'\n  \\end{align*}\n  does give the rotation we want.\n\n  We claim that \\(\\SO(3)\\) is generated by \\(r(y, \\frac{\\pi}{2})\\) and \\(r(z, \\theta)\\). Observe that\n  \\[\n    r(x, \\varphi) = r(y, \\frac{\\pi}{2}) r(z, \\varphi) r(y, -\\frac{\\pi}{2}),\n  \\]\n  which is a conjugation. To check this, note that \\((1, 0, 0)\\) is an eigenvector and the map is orientation-preserving (and some more geometric arguments). Also, for every \\(v \\in S^2 \\subseteq \\R^3\\) there exist \\(\\varphi, \\psi\\) such that \\(g = r(z, \\psi) r(x, \\varphi)\\) maps \\(v\\) to \\((1, 0, 0)\\): choose \\(r(x, \\varphi)\\) rotating \\(v\\) into the \\((x, y)\\)-plane. Then \\(r(v, \\theta) = g^{-1} r(x, \\theta) g\\) and the claim follows.\n\n  Thus via the projection, any rotation of \\(S^2\\) corresponds to a finite product of M\u00f6bius transformations with matrices in \\(\\SU(2)\\).\n\\end{proof}\n\nThe theorem gives a group homomorphism\n\\[\n  \\SO(3) \\to \\PSU(2) \\cong \\SU(2)/\\{\\pm I\\}\n\\]\nwhich is in fact a bijection, hence an isomorphism.\n\n\\begin{theorem}\n  The group \\(\\SO(3)\\) of rotations of \\(S^2\\) corresponds precisely to the subgroup \\(\\PSU(2) \\cong \\SU(2)/\\{\\pm I\\}\\) of M\u00f6bius transformations acting on \\(\\C_\\infty\\).\n\\end{theorem}\n\n\\begin{proof}\n  Let \\(g \\in \\PSU(2) \\leq G\\), \\(g(z) = \\frac{az - b}{\\conj b z + \\conj a}\\). Suppose first \\(g(0) = 0\\); then \\(b = 0\\) and \\(a \\conj a = 1\\), so let \\(a = e^{i\\theta/2}\\) where \\(\\theta \\in \\R\\). Then \\(g\\) corresponds to \\(r(z, \\theta)\\). In general, \\(g(0) = w \\in \\C_\\infty\\). Let \\(Q \\in S^2\\) with \\(\\pi(Q) = w\\). Choose \\(A \\in \\SO(3)\\) with \\(A(Q) = (0, 0, -1)\\), and let \\(\\alpha \\in \\PSU(2)\\) be the corresponding M\u00f6bius map. Then \\(\\alpha(w) = 0\\), so \\(\\alpha \\compose g(0) = 0\\) and, by the first case, \\(\\alpha \\compose g\\) corresponds to a rotation \\(B \\in \\SO(3)\\). Thus \\(g\\) corresponds to \\(A^{-1}B\\).\n\\end{proof}\n\n\\begin{remark}\n  The quotient map\n  \\[\n    \\SU(2) \\to \\SU(2)/\\{\\pm I\\} \\cong \\SO(3)\n  \\]\n  is a double cover.\n\\end{remark}\n\n\\section{Triangulations and the Euler number}\n\nFirst we introduce one more ``geometry'': the locally Euclidean torus.\n\n\\begin{definition}[Torus]\\index{torus}\n  The \\emph{torus} \\(T\\) is the set \\(\\R^2/\\Z^2\\) of equivalence classes of \\((x, y) \\in \\R^2\\) under the equivalence relation\n  \\[\n    (x_1, y_1) \\sim (x_2, y_2) \\Leftrightarrow x_1 - x_2,\\ y_1 - y_2 \\in \\Z.\n  \\]\n\\end{definition}\n\nThus a point in \\(T\\) is a coset \\((x, y) + \\Z^2\\) of the subgroup \\(\\Z^2\\) of \\((\\R^2, +)\\).\n\nFor a closed square \\(Q \\subseteq \\R^2\\) with side length \\(1\\), \\(T\\) is obtained by identifying the opposite sides.\n\nDefine the distance \\(d\\), for \\(P_1, P_2 \\in T\\),\n\\[\n  d(P_1, P_2) = \\min \\{|v_1 - v_2|: v_1, v_2 \\in \\R^2, v_i + \\Z^2 = P_i\\}.\n\\]\nIt is easy to check \\((T, d)\\) is a metric space.
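\n\nFor example (our illustration), take \\(P_1 = (0.9, 0) + \\Z^2\\) and \\(P_2 = (0.1, 0) + \\Z^2\\). Using the representative \\((1.1, 0)\\) of \\(P_2\\),\n\\[\n  d(P_1, P_2) = |(0.9, 0) - (1.1, 0)| = 0.2,\n\\]\nwhereas the ``direct'' representatives give \\(|(0.9, 0) - (0.1, 0)| = 0.8\\): the minimum is attained by going around the torus.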
\n\nLet \\(Q^0\\) be the interior of \\(Q\\). The natural map\n\\begin{align*}\n  f: Q^0 &\\to T \\\\\n  v &\\mapsto v + \\Z^2\n\\end{align*}\nis a bijection onto an open set \\(U = f(Q^0) \\subseteq T\\). Moreover \\(f: Q^0 \\to U\\) is a homeomorphism. Let \\(P \\in Q^0\\). Then \\(f\\) restricted to a small open disc about \\(P\\) is an \\emph{isometry} onto its image. Such a \\(d\\) (on \\(T\\)) is said to correspond to a \\emph{locally Euclidean metric} on \\(T\\).\n\n\\(T\\) may be embedded in \\(\\R^3\\), but the distance function we get by considering curves on \\(T \\subseteq \\R^3\\) is \\emph{not} the same as the locally Euclidean distance.\n\n\\begin{definition}[Topological triangle]\\index{topological triangle}\n  A \\emph{topological triangle} on \\(X = S^2\\) or \\(T\\) (or in general, any metric space \\(X\\)) is the image \\(R \\subseteq X\\) of a closed Euclidean triangle \\(\\triangle \\subseteq \\R^2\\) under a homeomorphism \\(\\triangle \\to R\\).\n\\end{definition}\n\nFor example, any spherical triangle is a topological triangle, by using radial projection from \\(0\\) to an affine plane in \\(\\R^3\\).\n\n\\begin{definition}[Topological triangulation]\\index{topological triangulation}\n  A \\emph{(topological) triangulation} \\(\\tau\\) of \\(X\\) is a finite collection of topological triangles which cover \\(X\\) and satisfy\n  \\begin{enumerate}\n  \\item every two topological triangles of \\(\\tau\\) are either disjoint, or meet in exactly one edge, or meet in exactly one vertex.\n  \\item each edge belongs to exactly two triangles.\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{definition}[Euler number]\\index{Euler number}\n  The \\emph{Euler number} \\(e = e(X, \\tau)\\) is\n  \\[\n    e = F - E + V\n  \\]\n  where \\(F = \\# \\text{faces in } \\tau\\), \\(E = \\#\\text{edges in } \\tau\\) and \\(V = \\# \\text{vertices in } \\tau\\).\n\\end{definition}\n\n\\begin{fact}[From algebraic topology]\n  \\(e\\) is independent of the choice of \\(\\tau\\), i.e.\\ \\(e = e(X)\\).\n\\end{fact}\n\n\\begin{eg}\n  (See pictures.)\n\n  In both examples we use \\emph{geodesic triangles}, i.e.\\ spherical triangles on \\(S^2\\) and Euclidean triangles on \\(T\\).\n\\end{eg}\n\n\\begin{proposition}\n  Each geodesic triangulation of \\(S^2\\), respectively \\(T\\), has \\(e = 2\\), respectively \\(0\\).\n\\end{proposition}\n\n\\begin{remark}\n  The results also hold without the geodesic assumption but we will not prove it in this course.\n\\end{remark}\n\n\\begin{proof}\n  Denote the faces \\(\\triangle_1, \\dots, \\triangle_F\\) and let \\(\\tau_i = \\alpha_i + \\beta_i + \\gamma_i\\), \\(i = 1, \\dots, F\\), be the sums of the interior angles. Then \\(\\sum_{i = 1}^F \\tau_i = 2\\pi V\\). Also \\(3F = 2E\\), so \\(F = 2E - 2F\\).\n  \\begin{itemize}\n  \\item \\(S^2\\): Gauss-Bonnet says \\(\\operatorname{area} \\triangle_i = \\tau_i - \\pi\\) so\n    \\begin{align*}\n      4\\pi\n      &= \\sum_{i = 1}^F \\operatorname{area} \\triangle_i \\\\\n      &= \\sum_{i = 1}^F (\\tau_i - \\pi) \\\\\n      &= 2\\pi V - \\pi F \\\\\n      &= 2\\pi V - 2\\pi E + 2\\pi F \\\\\n      &= 2\\pi (V - E + F)\n    \\end{align*}\n    so \\(e = 2\\).\n  \\item \\(T\\): \\(\\tau_i = \\pi\\) for all \\(i\\) so\n    \\begin{align*}\n      2\\pi V &= \\sum_{i = 1}^F \\tau_i = \\pi F \\\\\n      2V &= F = 2 E - 2F\n    \\end{align*}\n    thus \\(V - E + F = 0\\), i.e.\\ \\(e = 0\\).\n  \\end{itemize}\n\\end{proof}\n\n\\begin{remark}\n  We can use a topological polygonal decomposition of \\(X\\) and the proposition above still holds. Euler's formula for \\(S^2\\) is\n  \\[\n    V - E + F = 2.\n  \\]\n\\end{remark}
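\n\nAs a concrete count (our example), radially projecting the faces of a regular tetrahedron inscribed in \\(S^2\\) gives a geodesic triangulation with\n\\[\n  V = 4, \\quad E = 6, \\quad F = 4, \\qquad e = F - E + V = 4 - 6 + 4 = 2,\n\\]\nconsistent with the proposition.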
\n\n\\section{Hyperbolic geometry}\n\nThere are three ``classical'' types of ``geometries'' and we have seen two of them: Euclidean and spherical. The third type is hyperbolic geometry. We shall require the concept of a \\emph{Riemannian metric} on an open set \\(U \\subseteq \\R^2\\).\n\n\\subsection{Revision of differentiability}\n\nIt would be convenient to recall facts about derivatives from IA Analysis I and IB Analysis II, and to make precise sense of the \\emph{differentials}.\n\nLet \\(U \\subseteq \\R^n\\) be an open set and \\(f = (f_1, \\dots, f_m): U \\to \\R^m\\). \\(f\\) is smooth, i.e.\\ \\(C^\\infty\\), if each \\(f_i\\) has continuous partial derivatives of every order. In particular, \\(C^\\infty\\) implies the existence of continuous derivatives of first order, so \\(f\\) is differentiable.\n\nThe derivative of \\(f\\) at \\(a \\in U\\) is a linear map \\(df_a: \\R^n \\to \\R^m\\) such that\n\\[\n  \\lim_{h \\to 0} \\frac{\\norm{f(a + h) - f(a) - df_a \\cdot h}}{\\norm h} = 0.\n\\]\nIf \\(m = 1\\), \\(df_a\\) is determined by\n\\[\n  \\left( \\frac{\\p f}{\\p x_1}(a), \\dots, \\frac{\\p f}{\\p x_n}(a) \\right)\n\\]\nvia\n\\[\n  df_a: (h_1, \\dots, h_n) \\mapsto \\sum_{i = 1}^n \\frac{\\p f}{\\p x_i}(a) h_i.\n\\]\n\nFor general \\(m\\), we may use the \\emph{Jacobian matrix}\n\\[\n  J(f)_a = \\left( \\frac{\\p f_i}{\\p x_j}(a) \\right)\n\\]\nand matrix multiplication\n\\[\n  h \\mapsto J(f)_ah.\n\\]\n\n\\begin{eg}\n  Consider a holomorphic, i.e.\\ analytic, function of a complex variable \\(f: U \\to \\C\\), where \\(U \\subseteq \\C\\) is open, with derivative \\(f'(z)\\) such that\n  \\[\n    \\lim_{w \\to 0} \\frac{|f(z + w) - f(z) - f'(z)w|}{|w|} = 0.\n  \\]\n  Suppose \\(f'(z) = a + ib\\) and \\(w = h_1 + ih_2\\); then\n  \\[\n    f'(z)w = (ah_1 - bh_2) + i(ah_2 + bh_1).\n  \\]\n  Now identifying \\(\\C \\cong \\R^2\\), \\(f: U \\to \\R^2\\) has derivative \\(df_z: \\R^2 \\to \\R^2\\) given by\n  \\[\n    \\begin{pmatrix}\n      a & -b \\\\\n      b & a\n    \\end{pmatrix}\n  \\]\n\\end{eg}\n\nChain rule: let \\(U \\subseteq \\R^n, V \\subseteq \\R^p\\) both be open and \\(f: U \\to \\R^m, g: V \\to U\\) both be smooth. Then \\(f \\compose g: V \\to \\R^m\\) satisfies, for all \\(p \\in V\\),\n\\[\n  d(f \\compose g)_p = (df)_{g(p)} \\compose (dg)_p,\n\\]\nor in terms of Jacobians,\n\\[\n  J(f \\compose g)_p = J(f)_{g(p)} \\cdot J(g)_p,\n\\]\nwhere \\(\\cdot\\) denotes matrix multiplication.\n\n\\subsection{Riemannian metrics on open sets in \\texorpdfstring{\\(\\R^2\\)}{\u211d\\^{}2}}\n\nUse the coordinates \\((u, v) \\in \\R^2\\). Let \\(V \\subseteq \\R^2\\) be open.\n\n\\begin{definition}[Riemannian metric]\\index{Riemannian metric}\n  A \\emph{Riemannian metric} is defined by giving \\(C^\\infty\\) functions \\(E, F, G: V \\to \\R\\) such that\n  \\[\n    \\begin{pmatrix}\n      E(p) & F(p) \\\\\n      F(p) & G(p)\n    \\end{pmatrix}\n  \\]\n  is a positive-definite matrix for all \\(p \\in V\\).\n\\end{definition}\n\nHence a Riemannian metric defines an inner product \\(\\inner{\\cdot, \\cdot}_p\\) on \\(\\R^2\\) for each \\(p \\in V\\).\n\n\\begin{remark}\n  There are two ways to view \\(\\R^2\\): one is as the standard vector space \\(\\R^2\\), which has a distinguished zero vector. The other is as the affine space \\(\\A^2\\) (a space of points) with the operation\n  \\[\n    \\text{point } + \\text{ vector } = \\text{ point}.\n  \\]\n  So the inner product can be thought of as an operation on the affine space, with \\(p\\) identified as the zero.\n\\end{remark}
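\n\nFor a \\(2 \\times 2\\) symmetric matrix, positive-definiteness can be checked concretely (a standard fact, added here for reference): by Sylvester's criterion,\n\\[\n  \\begin{pmatrix}\n    E & F \\\\\n    F & G\n  \\end{pmatrix}\n  \\text{ is positive definite} \\iff E > 0 \\text{ and } EG - F^2 > 0,\n\\]\nso in particular the Gram determinant \\(EG - F^2\\) appearing in the area formula below is automatically positive.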
Then given the inner product defined above,\n\\begin{align*}\n  \\inner{e_1, e_1}_p &= E(p) \\\\\n  \\inner{e_1, e_2}_p &= F(p) \\\\\n  \\inner{e_2, e_2}_p &= G(p)\n\\end{align*}\n\n\\begin{notation}\n  Use \\(E du^2 + 2F dudv + G dv^2\\) to denote a family of inner products.\n\n  For pedagogical purpose it might be helpful to explain the origin of the notation \\(du, dv\\). Let \\(u, v: V \\to \\R\\) be the components of the coordinates, which are \\(C^\\infty\\). Since they are linear maps, their derivatives are just themselves, i.e.\\ for all \\(p \\in V\\)\n  \\begin{align*}\n    (du)_p: \\R^2 &\\to \\R \\\\\n    (h_1, h_2) &\\mapsto h_1 \\\\\n    (dv)_p: \\R^2 &\\to \\R \\\\\n    (h_1, h_2) &\\mapsto h_2\n  \\end{align*}\n  Since the derivatives do not depend on \\(p\\), we write \\(du, dv\\) for brevity, which are elements of the dual space \\((\\R^2)^*\\). Furthermore, \\(du, dv\\) form the \\emph{dual} basis to the standard basis \\(e_1, e_2\\) of \\(\\R^2\\). Then \\(du^2, dudv, dv^2\\) are symmetric bilinear forms on \\(\\R^2\\), with\n  \\begin{align*}\n    du^2(h, k) &= du(h) du(k) \\\\\n    dudv(h, k) &= \\frac{1}{2}(du(h)dv(k) + du(k)dv(h)) \\\\\n    dv^2(h, k) &= dv(h) dv(k)\n  \\end{align*}\n  with matrices\n  \\[\n    \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix}, \\quad\n    \\begin{pmatrix} 0 & \\frac{1}{2} \\\\ \\frac{1}{2} & 0 \\end{pmatrix}, \\quad\n    \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix}\n  \\]\n  respectively, and so \\(Edu^2 + 2Fdudv + Gdv^2\\) is indeed the bilinear form \\(\\begin{psmallmatrix} E & F \\\\ F & G \\end{psmallmatrix}\\).\n\\end{notation}\n\n\\begin{definition}[Length]\\index{length}\n  The \\emph{length with respect to a Riemannian metric} of a smooth curve \\(\\gamma = (\\gamma_1, \\gamma_2): [0, 1] \\to V \\subseteq \\R^2\\) is\n  \\[\n    \\int_0^1 \\inner{\\dot \\gamma, \\dot \\gamma}_{\\gamma(t)}^{1/2}dt = \\int_0^1 (E \\dot \\gamma_1^2 + 2F \\dot \\gamma_1 \\dot \\gamma_2 + G \\dot \\gamma_2^2)^{1/2} dt.\n  \\]\n\\end{definition}\n\nNote that this is compatible with the definition of length in the Euclidean metric, with \\(E = G = 1, F = 0\\).\n\n\\begin{definition}\n  The \\emph{area with respect to a Riemannian metric} of a region \\(W \\subseteq V\\) is defined as\n  \\[\n    \\int_W (EG - F^2)^{1/2} dudv\n  \\]\n  whenever the integral exists.\n\\end{definition}\n\nNote that \\(EG - F^2 = \\det \\begin{psmallmatrix} E & F \\\\ F & G \\end{psmallmatrix}\\), the Gram determinant.\n\n\\begin{eg}\n  Let \\(V = \\R^2\\) with Riemannian metric\n  \\[\n    \\frac{4 (du^2 + dv^2)}{(1 + u^2 + v^2)^2}.\n  \\]\n  Recall the stereographic projection\n  \\begin{align*}\n    \\pi: S^2 \\setminus \\{N\\} &\\to \\R^2 \\\\\n    (x, y, z) &\\mapsto (u, v)\n  \\end{align*}\n  For all \\(P \\in S^2 \\setminus \\{N\\}\\) we have \\(\\pi(P) \\in \\R^2\\), and the Riemannian metric above gives \\(\\inner{\\cdot, \\cdot}_{\\pi(P)}\\).\n\n  The \\emph{tangent plane} to \\(S^2\\) at \\(P\\) is\n  \\[\n    \\{x \\in \\R^3: x \\cdot OP = 0\\}.\n  \\]\n  Then \\((d\\pi^{-1})_{\\pi(P)}\\) identifies \\(\\inner{\\cdot, \\cdot}_{\\pi(P)}\\) with the restriction of the standard inner product on \\(\\R^3\\) to the tangent plane.\n\\end{eg}\n\nMissed 2 lectures\n\nLet\n\\[\n  G_D = \\{\\text{M\u00f6bius transformations sending \\(D\\) onto \\(D\\)}\\} \\cong \\PSL(2, \\R).\n\\]\n\n\\begin{enumerate}\n\\item \\(G_D\\) acts transitively on the hyperbolic lines in \\(D\\) (and on pairs \\(\\ell, P \\in \\ell\\)).\n\\item The length-minimizing curves on \\(D\\) are segments of hyperbolic lines parameterised monotonically.\n\\end{enumerate}\n\nRecall the notation \\(\\rho(\\cdot, \\cdot)\\) for the hyperbolic distance, which makes sense both on \\(H\\) and on \\(D\\).\n\n\\begin{lemma}\\leavevmode\n  \\begin{enumerate}\n  \\item 
Rotations \\(z \\mapsto e^{i\\theta}z\\), \\(\\theta \\in \\R\\) are in \\(G_D\\).\n  \\item If \\(a \\in D\\), then\n    \\[\n      g(z) = \\frac{z - a}{1 - \\conj a z}\n    \\]\n    is in \\(G_D\\).\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{enumerate}\n  \\item The map is clearly linear and preserves both \\(|z|\\) and \\(|dz|\\), and thus the metric\n    \\[\n      \\frac{4|dz|^2}{(1 - |z|^2)^2}.\n    \\]\n  \\item \\(g\\) sends \\(\\{|\\zeta| = 1\\}\\) to itself: if \\(|\\zeta| = 1\\) then\n    \\[\n      |1 - \\conj a \\zeta| = |\\conj \\zeta (1 - \\conj a \\zeta)| = |\\conj \\zeta - \\conj a| = |\\zeta - a| \\neq 0\n    \\]\n    since \\(|a| < 1\\) (if \\(a = 0\\) then \\(g(z) = z\\) trivially). Thus \\(|g(\\zeta)| = 1\\). Also \\(g(a) = 0\\) so \\(g \\in G_D\\).\n  \\end{enumerate}\n\\end{proof}\n\nThis does not exhaust all elements of \\(G_D\\) yet. But on the example sheet we will show every \\(g \\in G_D\\) is of the form\n\\[\n  g(z) = e^{i\\theta} \\frac{z - a}{1 - \\conj a z}\n\\]\nfor some \\(a \\in D, \\theta \\in \\R\\), and \\(G_D \\subset \\Isom(D)\\) is a subgroup of index \\(2\\).\n\n\\begin{proposition}\n  If \\(0 \\leq r < 1\\) then\n  \\[\n    \\rho(0, r) = \\rho(0, e^{i\\theta}r) = 2 \\tanh^{-1} r.\n  \\]\n\n  In general, for \\(z_1, z_2 \\in D\\),\n  \\[\n    \\rho(z_1, z_2) = 2 \\tanh^{-1} \\left| \\frac{z_1 - z_2}{1 - \\conj z_1 z_2} \\right|.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  The first equality is clear from the above lemma. Now let \\(\\gamma(t) = t, 0 \\leq t \\leq r\\), a hyperbolic line segment through \\(0\\) and \\(r\\). Then from the definitions the length is\n  \\[\n    \\rho(0, r) = \\int_0^r \\frac{2}{1 - t^2} dt = 2 \\tanh^{-1} r.\n  \\]\n\n  For the general case, let \\(\\ell\\) be the hyperbolic line through \\(z_1, z_2\\). Apply\n  \\[\n    g(z) = \\frac{z - z_1}{1 - \\conj z_1 z}\n  \\]\n  which is an isometry by the above lemma. Then \\(g(\\ell)\\) is a diameter. Further rotate about \\(0\\) to achieve \\(g(z_2) = r \\in \\R_+\\). Then\n  \\[\n    r = |g(z_2)| = \\left| \\frac{z_2 - z_1}{1 - \\conj z_1 z_2} \\right|\n  \\]\n  and\n  \\[\n    \\rho(z_1, z_2) = \\rho(0, r) = 2 \\tanh^{-1} r.\n  \\]\n\\end{proof}\n\n\\begin{remark}\n  When there is a ``distinguished'' point, it is sometimes convenient to send it to \\(0 \\in D\\) (and then use the disc model).\n\\end{remark}\n\nIn Euclidean geometry, we know that there is a unique line through a given point perpendicular to a given line. Similarly, we will show that\n\n\\begin{proposition}\n  For all points \\(P\\) and hyperbolic lines \\(\\ell\\) such that \\(P \\notin \\ell\\), there exists a unique \\(\\ell', P \\in \\ell'\\) such that \\(\\ell'\\) meets \\(\\ell\\) orthogonally, in \\(Q\\) say, and\n  \\[\n    \\rho(P, Q) \\leq \\rho(P, \\tilde Q)\n  \\]\n  for all \\(\\tilde Q \\in \\ell\\), with equality if and only if \\(\\tilde Q = Q\\).\n\\end{proposition}\n\n\\begin{proof}\n  Example sheet.\n  %Wlog \\(P = 0 \\in D\\).\n\\end{proof}\n\nThe next lemma concerns ``reflections'' in the hyperbolic plane:\n\n\\begin{lemma}\n  Suppose \\(g\\) is an isometry of \\(H\\) and \\(g\\) fixes every point in \\(L^+ \\subseteq H\\). Then either \\(g = \\id_H\\) or \\(g(z) = - \\conj z\\), i.e.\\ reflection in the \\(y\\)-axis.\n\\end{lemma}\n\n\\begin{proof}\n  Let \\(P \\in H, P \\notin L^+\\). Then there exists a unique hyperbolic line \\(\\ell'\\) with \\(P \\in \\ell'\\) and \\(\\ell' \\perp L^+\\). Thus \\(\\ell'\\) must be a semi-circle. Let \\(Q = \\ell' \\cap L^+\\). As \\(g(Q) = Q\\), \\(\\rho(P, Q) = \\rho(g(P), Q)\\). 
Then \\(g(P) \\in \\ell'\\) by the properties of \\(\\ell'\\) and the uniqueness of \\(\\ell'\\). Thus we must have either \\(g(P) = P\\) or \\(g(P) = P'\\), where \\(P'\\) is the mirror image of \\(P\\) in \\(L^+\\).\n\n  It suffices to show that if \\(g(P) = P\\) then \\(g = \\id_H\\) (for if \\(g(P) = P'\\) then compose with the isometry \\(z \\mapsto -\\conj z\\)). So assume \\(g(P) = P\\). Wlog \\(P \\in H^+ = \\{z \\in H: \\Re z > 0\\}\\) and consider any \\(A \\in H^+\\). If \\(g(A) = A'\\), the mirror image of \\(A\\), then \\(\\rho(A', P) = \\rho(A, P)\\). But, letting \\(B\\) be the point where the geodesic segment from \\(A'\\) to \\(P\\) crosses \\(L^+\\),\n  \\[\n    \\rho(A', P) = \\rho(A', B) + \\rho(B, P) = \\rho(A, B) + \\rho(B, P) > \\rho(A, P),\n  \\]\n  where the last inequality is the triangle inequality, strict because \\(B\\) does not lie on the segment from \\(A\\) to \\(P\\) (which stays in \\(H^+\\)). This contradicts \\(\\rho(A', P) = \\rho(A, P)\\), so \\(g(A) = A\\) for every \\(A\\), i.e.\\ \\(g = \\id_H\\).\n\\end{proof}\n\nWe call \\(R: H \\to H, z \\mapsto -\\conj z\\) the (hyperbolic) reflection in \\(L^+\\). Now we can extend the definition to every hyperbolic line: for each hyperbolic line \\(\\ell \\subseteq H\\) with \\(T \\in \\PSL(2, \\R)\\) chosen such that \\(T(\\ell) = L^+\\), we call\n\\[\n  R_\\ell = T^{-1}RT\n\\]\nthe reflection in \\(\\ell\\). By the lemma above, \\(R_\\ell\\) is the unique non-identity isometry of \\(H\\) fixing \\(\\ell\\).\n\n\\begin{ex}\n  It is instructive to write out the reflections in \\(\\ell \\subseteq D\\) using an isometry \\(D \\to H\\) and Q4 on example sheet 2.\n\\end{ex}\n\n\\subsection{Hyperbolic triangle}\n\n\\begin{definition}[Hyperbolic triangle]\\index{hyperbolic triangle}\n  A \\emph{hyperbolic triangle} \\(\\triangle\\) is the region bounded by 3 hyperbolic line segments, including the extreme cases of vertices ``at infinity'' (i.e.\\ in \\(\\R \\cup \\{\\infty\\}\\) for \\(H\\), in \\(\\{|\\zeta| = 1\\}\\) for \\(D\\)) (in this case the angle is \\(0\\)).\n\\end{definition}\n\n\\begin{theorem}[Gauss-Bonnet for hyperbolic triangle]\\index{Gauss-Bonnet}\n  If \\(T = \\triangle ABC\\) has angles \\(\\alpha, \\beta, \\gamma \\geq 0\\), then\n  \\[\n    \\operatorname{area} T = \\pi - \\alpha - \\beta - \\gamma.\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  Use the \\(H\\) model. Recall that for a region \\(R \\subseteq H\\), the area is computed as\n  \\[\n    \\int\\int_R \\frac{dxdy}{y^2}.\n  \\]\n  First do the case when \\(\\gamma = 0\\), so \\(C\\) is ``at infinity'' (in \\(\\R\\) or \\(= \\infty\\)). Wlog \\(C = \\infty\\) (apply \\(g \\in \\PSL(2, \\R)\\)). Then \\(AC\\) and \\(BC\\) are vertical half-lines and \\(AB\\) is an arc of a semi-circle. Use \\(z \\mapsto z + a, a \\in \\R\\) to ensure the semi-circle is centred at \\(0\\), and \\(z \\mapsto bz, b > 0\\) to achieve radius \\(1\\). Thus wlog\n  \\[\n    AB \\subseteq (\\{x^2 + y^2 = 1\\} \\cap \\{y > 0\\}).\n  \\]\n  Then\n  \\begin{align*}\n    \\operatorname{area} T\n    &= \\int_{\\cos(\\pi - \\alpha)}^{\\cos \\beta} \\int_{(1 - x^2)^{1/2}}^\\infty \\frac{dydx}{y^2} \\\\\n    &= \\int_{\\cos (\\pi - \\alpha)}^{\\cos \\beta} \\frac{1}{(1 - x^2)^{1/2}}dx \\\\\n    &= \\left[ -\\arccos x \\right]_{\\cos (\\pi - \\alpha)}^{\\cos \\beta} \\\\\n    &= \\pi - \\alpha - \\beta\n  \\end{align*}\n  as required.\n\n  In general, using \\(H\\) again, we can apply \\(g \\in \\PSL(2, \\R)\\) to move \\(AC\\) into a vertical half-line. Then as before, move \\(AB\\) into \\(\\{x^2 + y^2 = 1\\}\\) (and \\(AC\\) remains vertical). Consider \\(\\triangle_1 = AB\\infty, \\triangle_2 = CB\\infty\\). 
Apply the previous result to get\n  \\begin{align*}\n    \\operatorname{area} \\triangle_1 &= \\pi - \\alpha - (\\beta + \\delta) \\\\\n    \\operatorname{area} \\triangle_2 &= \\pi - \\delta - (\\pi - \\gamma)\n  \\end{align*}\n  so\n  \\[\n    \\operatorname{area} T = \\operatorname{area} \\triangle_1 - \\operatorname{area} \\triangle_2 = \\pi - \\alpha - \\beta - \\gamma.\n  \\]\n\\end{proof}\n\nThere are cosine and sine rules for hyperbolic triangles. See example sheet. Here is another observation: any two lines in \\(S^2\\) (great circles) always meet (in 2 points), while any two lines in \\(\\R^2\\) meet (in 1 point) if and only if they are not parallel.\n\n\\begin{definition}[Parallel, ultraparallel]\n  Using the \\(D\\) model, two hyperbolic lines \\(\\ell_1, \\ell_2 \\subseteq D\\) are \\emph{parallel} if they meet only at \\(\\{|\\zeta| = 1\\}\\).\n\n  They are \\emph{ultraparallel} if they do not meet anywhere in \\(\\{|\\zeta| \\leq 1\\}\\).\n\\end{definition}\n\n\\begin{axiom}[Euclid's parallel axiom (5th axiom)]\n  Given a line \\(\\ell\\) and \\(P \\notin \\ell\\), there exists a unique line \\(\\ell' \\ni P\\) with \\(\\ell \\cap \\ell' = \\emptyset\\).\n\\end{axiom}\n\nIt fails in both \\(S^2\\) and the hyperbolic plane --- but for very different reasons!\n\n\\subsection{The hyperboloid model}\n\nConsider the \\emph{Lorentzian} inner product \\(\\inner{\\cdot, \\cdot}\\) on \\(\\R^3\\) with matrix\n\\[\n  \\begin{pmatrix}\n    1 & 0 & 0 \\\\\n    0 & 1 & 0 \\\\\n    0 & 0 & -1\n  \\end{pmatrix}\n\\]\nwith signature \\(p = 2, q = 1\\).\n\nSet\n\\[\n  q(\\V x) = \\inner{\\V x, \\V x} = x^2 + y^2 - z^2\n\\]\nand\n\\begin{align*}\n  S &= \\{\\V x \\in \\R^3: q(\\V x) = -1\\} \\\\\n  S^+ &= S \\cap \\{z > 0\\}\n\\end{align*}\n\nDefine a projection from the upper sheet of the hyperboloid to the unit disc in the complex plane\n\\begin{align*}\n  \\pi: S^+ &\\to D \\subseteq \\C \\\\\n  (x, y, z) &\\mapsto \\frac{x + iy}{1 + z} = u + iv\n\\end{align*}\n\nPut \\(r^2 = u^2 + v^2\\) and by straightforward calculation,\n\\begin{align*}\n  \\sigma = \\pi^{-1}: D &\\to S^+ \\\\\n  (u, v) &\\mapsto \\frac{1}{1 - r^2} (2u, 2v, 1 + r^2) \n\\end{align*}\n\nNow we can check the inner product on the tangent plane to \\(S^+\\) at \\(\\sigma(u, v)\\) spanned by\n\\begin{align*}\n  \\sigma_u &= d\\sigma(e_1) = \\frac{2}{(1 - r^2)^2} (1 + u^2 - v^2, 2uv, 2u) \\\\\n  \\sigma_v &=  d\\sigma(e_2) = \\frac{2}{(1 - r^2)^2} (2uv, 1 + v^2 - u^2, 2v)\n\\end{align*}\nThe restriction of the Lorentzian \\(\\inner{\\cdot, \\cdot}\\) to the span of \\(\\sigma_u, \\sigma_v\\) assigns to each \\((u, v) \\in D\\) a symmetric bilinear form on \\(\\R^2\\) with\n\\begin{align*}\n  E &= \\inner{\\sigma_u, \\sigma_u} = \\frac{4}{(1 - r^2)^2} \\\\\n  F &= 0 \\\\\n  G &= E\n\\end{align*}\nwhich is exactly the Poincare disc model of the hyperbolic plane.\n\n\\section{Smooth embedded surfaces}\n\n\\begin{definition}[Smooth embedded surface]\\index{Smooth embedded surface}\n  \\(S \\subseteq \\R^3\\) is a (parameterised) \\emph{smooth embedded surface} if for all \\(P \\in S\\), there is an open neighbourhood \\(U = W \\cap S\\) and a map \\(\\sigma: V \\to U\\) from \\(V \\subseteq \\R^2\\) such that\n  \\begin{enumerate}\n  \\item \\(\\sigma\\) is a homeomorphism of \\(V\\) onto \\(U\\),\n  \\item \\(\\sigma(u, v)\\) is \\(C^\\infty\\),\n  \\item at each \\(Q = \\sigma(P)\\), the vectors \\(\\frac{\\p \\sigma}{\\p u}(P), \\frac{\\p \\sigma}{\\p v}(P)\\) are linearly independent.\n  \\end{enumerate}\n\\end{definition}\n\n\\((u, v) \\in V \\subseteq 
\\R^2\\) are \\emph{smooth coordinates} on \\(U \\subseteq S\\). The subspace of \\(\\R^3\\) spanned by \\(\\frac{\\p \\sigma}{\\p u}(P), \\frac{\\p \\sigma}{\\p v}(P)\\) is the \\emph{tangent space} to \\(S\\) at \\(\\sigma(P)\\). \\(\\sigma\\) is a \\emph{smooth parameterisation} of \\(U \\subseteq S\\).\n\n\\begin{proposition}\n  Suppose \\(\\sigma: V \\to U, \\tilde \\sigma: \\tilde V \\to U\\) are smooth parameterisations, then\n  \\[\n    \\varphi = \\sigma^{-1} \\compose \\tilde \\sigma: \\tilde V \\to V\n  \\]\n  is a diffeomorphism.\n\\end{proposition}\n\n\\begin{proof}\n  It suffices to consider \\(\\varphi\\) on some neighbourhood of each point \\(P \\in \\tilde V\\); let \\((u_0, v_0) = \\varphi(P) \\in V\\). The Jacobian matrix of \\(\\sigma\\), \\(\\begin{psmallmatrix} x_u & x_v \\\\ y_u & y_v \\\\ z_u & z_v \\end{psmallmatrix}\\), has rank 2 at each \\((u, v) \\in V\\). Wlog\n  \\[\n    \\det\n    \\begin{pmatrix}\n      x_u & x_v \\\\\n      y_u & y_v\n    \\end{pmatrix}\n    \\neq 0\n  \\]\n  at \\((u_0, v_0)\\). Let \\(F(u, v) = \\begin{psmallmatrix} x(u, v) \\\\ y(u, v) \\end{psmallmatrix}\\), then by the inverse function theorem, \\(F\\) maps some open \\(N \\subseteq \\R^2, (u_0, v_0) \\in N\\), diffeomorphically onto an open \\(N' \\subseteq \\R^2\\).\n  \\[\n    \\begin{tikzcd}\n      & \\sigma(N) \\ar[d, \"\\pi\"] \\\\\n      N \\ar[ur, \"\\sigma\"] \\ar[r, \"F\"] & N' & \\tilde N \\ar[ul, \"\\tilde \\sigma\"'] \\ar[l, \"\\tilde F\"']\n    \\end{tikzcd}\n  \\]\n  Here \\(\\sigma(N)\\) is open in \\(S\\) as \\(\\sigma\\) is a homeomorphism. \\(\\pi = F \\compose \\sigma^{-1}\\) is bijective as \\(\\sigma, F\\) are so. \\(\\tilde N = \\tilde \\sigma^{-1}(\\sigma(N)) \\subseteq \\tilde V\\) is open and \\(\\tilde F = \\pi \\compose \\tilde \\sigma\\). \\(\\pi^{-1}: N' \\to \\sigma(N)\\) is well-defined. Furthermore, \\(\\pi(x, y, z) = (x, y)\\) is linear, so smooth. Now\n  \\[\n    \\varphi = \\sigma^{-1} \\compose \\tilde \\sigma = (\\sigma^{-1} \\compose \\pi^{-1}) \\compose (\\pi \\compose \\tilde \\sigma) = F^{-1} \\compose \\tilde F\n  \\]\n  on \\(\\tilde N\\) is smooth as \\(F^{-1}\\) and \\(\\tilde F\\) are.\n\n  Similarly (by symmetry of notation) \\(\\varphi^{-1}\\) is smooth.\n\\end{proof}\n\nRecall that if \\(P \\in V, Q = \\sigma(P)\\), then the tangent space \\(T_QS\\) is the span of \\(\\sigma_u(P), \\sigma_v(P)\\).\n\n\\begin{corollary}\n  The tangent plane \\(T_QS\\) is independent of \\(\\sigma\\).\n\\end{corollary}\n\n\\begin{proof}\n  Use the same notation as the previous proposition. Then\n  \\[\n    \\tilde \\sigma(\\tilde u, \\tilde v) = \\sigma(\\varphi_1(\\tilde u, \\tilde v), \\varphi_2(\\tilde u, \\tilde v))\n  \\]\n  where \\(\\varphi = (\\varphi_1, \\varphi_2)\\) by definition. By the chain rule,\n  \\begin{align*}\n    \\tilde \\sigma_{\\tilde u} &= (\\varphi_1)_{\\tilde u} \\sigma_u + (\\varphi_2)_{\\tilde u} \\sigma_v \\\\\n    \\tilde \\sigma_{\\tilde v} &= (\\varphi_1)_{\\tilde v} \\sigma_u + (\\varphi_2)_{\\tilde v} \\sigma_v\n  \\end{align*}\n  The Jacobian of \\(\\varphi\\) is just the matrix of these coefficients, and it is invertible since \\(\\varphi\\) is a diffeomorphism. 
Thus \\(\\sigma_u, \\sigma_v\\) and \\(\\tilde \\sigma_{\\tilde u}, \\tilde \\sigma_{\\tilde v}\\) have the same span.\n\\end{proof}\n\n\\begin{remark}\n  We can compute\n  \\[\n    \\tilde \\sigma_{\\tilde u} \\times \\tilde \\sigma_{\\tilde v} = \\det(J(\\varphi)) \\sigma_u \\times \\sigma_v.\n  \\]\n\\end{remark}\n\n\\begin{definition}[Unit normal]\\index{unit normal}\n  The \\emph{unit normal} to \\(S\\) at \\(Q\\) is\n  \\[\n    N = N_Q = \\frac{\\sigma_u \\times \\sigma_v}{\\norm{\\sigma_u \\times \\sigma_v}}(P).\n  \\]\n\\end{definition}\n\nBy the above remark, \\(N\\) is well-defined up to a sign.\n\n\\begin{definition}[Chart]\\index{chart}\n  \\(\\phi = \\sigma^{-1}: U \\to V\\) where \\(U \\subseteq S, V \\subseteq \\R^2\\) is a \\emph{chart}.\n\\end{definition}\n\n\\begin{eg}\n  For \\(S^2\\), the two stereographic projections from \\(\\begin{psmallmatrix} 0 \\\\ 0 \\\\ \\pm 1 \\end{psmallmatrix}\\) are charts. Their domains cover \\(S^2\\).\n\\end{eg}\n\n\\begin{definition}[First fundamental form]\\index{first fundamental form}\n  If \\(S \\subseteq \\R^3\\) is an embedded surface, then \\(T_QS\\) at each \\(Q \\in S\\) has an induced inner product from \\(\\R^3\\) --- the resulting family of inner products on tangent planes is called the \\emph{first fundamental form} of \\(S\\).\n\\end{definition}\n\nGiven a parameterisation \\(\\sigma: V \\to U \\subseteq S, P \\in V, a, b \\in \\R^2\\), set\n\\[\n  \\inner{a, b}_P = \\inner{d\\sigma_P(a), d\\sigma_P(b)}_{\\R^3}.\n\\]\nWith respect to the standard basis \\(e_1, e_2\\) of \\(\\R^2\\), RHS becomes\n\\begin{equation}\n  \\label{eqn:Riemannian metric}\n  Edu^2 + 2Fdudv + Gdv^2\n  \\tag{\\(\\ast\\)}\n\\end{equation}\nwhere\n\\begin{align*}\n  E &= \\inner{\\sigma_u, \\sigma_u} \\\\\n  F &= \\inner{\\sigma_u, \\sigma_v} \\\\\n  G &= \\inner{\\sigma_v, \\sigma_v}\n\\end{align*}\nare \\(C^\\infty\\) functions of \\((u, v) \\in V\\). Thus we have recovered our previous definition of a Riemannian metric, and the Riemannian metric \\eqref{eqn:Riemannian metric} on \\(V\\) is also called the first fundamental form corresponding to a parameterisation \\(\\sigma\\).\n\n(The first fundamental form is abstract but invariant under reparameterisation, while the Riemannian metric is easier to do computations with.)\n\n\\begin{definition}[Length, energy]\\index{length}\\index{energy}\n  Given a smooth curve \\(\\Gamma: [a, b] \\to S \\subseteq \\R^3\\), define the \\emph{length} of \\(\\Gamma\\) to be\n  \\[\n    \\int_a^b \\norm{\\Gamma'(t)} dt\n  \\]\n  and the \\emph{energy} to be\n  \\[\n    \\int_a^b \\norm{\\Gamma'(t)}^2 dt.\n  \\]\n\\end{definition}\n\nIf \\(\\Gamma([a, b]) \\subseteq U = \\sigma(V)\\) for some parameterisation \\(\\sigma\\), then there exists a unique \\(\\gamma: [a, b] \\to V\\) such that \\(\\Gamma = \\sigma \\compose \\gamma\\). Thus it suffices to consider \\(\\gamma\\), a curve ``in \\(\\R^2\\)''. 
Write \\(\\gamma = (\\gamma_1, \\gamma_2)\\), then\n\\[\n  \\Gamma'(t) = (d\\sigma)_{\\gamma(t)} (\\dot \\gamma_1(t)e_1 + \\dot \\gamma_2(t)e_2).\n\\]\nThus\n\\begin{align*}\n  \\Gamma'(t) &= \\dot \\gamma_1(t) \\sigma_u|_{\\gamma(t)} + \\dot \\gamma_2(t) \\sigma_v|_{\\gamma(t)} \\\\\n  \\norm{\\Gamma'(t)} &= \\inner{\\dot \\gamma, \\dot \\gamma}^{1/2}_{\\gamma(t)} = (E \\dot \\gamma_1^2 + 2F \\dot \\gamma_1 \\dot \\gamma_2 + G \\dot \\gamma_2^2)^{1/2}.\n\\end{align*}\nWith these expressions, we can write\n\\begin{align*}\n  \\operatorname{length}(\\gamma) &= \\int_a^b (E \\dot \\gamma_1^2 + 2F \\dot \\gamma_1 \\dot \\gamma_2 + G \\dot \\gamma_2^2)^{1/2} dt \\\\\n  \\operatorname{energy}(\\gamma) &= \\int_a^b (E \\dot \\gamma_1^2 + 2F \\dot \\gamma_1 \\dot \\gamma_2 + G \\dot \\gamma_2^2) dt\n\\end{align*}\n\n\\begin{definition}[Area]\\index{area}\n  Given a parameterisation \\(\\sigma: V \\to U \\subseteq S \\subseteq \\R^3\\) of \\(S\\) and a region \\(T \\subseteq U\\), the \\emph{area} of \\(T\\) is\n  \\[\n    \\int_{\\theta(T)} \\sqrt{EG - F^2} dudv\n  \\]\n  where \\(\\theta = \\sigma^{-1}\\) is a chart, whenever RHS is defined.\n\\end{definition}\n\nWe state without proof a proposition:\n\n\\begin{proposition}\n  The area defined as above is independent of the parameterisation.\n\\end{proposition}\nIn particular, this shows that if the area does not exist in one parameterisation, then it does not exist in any other parameterisation either.\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item In examples we encounter in this course, often \\(\\sigma(V) = U\\) is dense in \\(S\\). Then the area of \\(S\\) is just the integral over \\(V\\).\n  \\item The area and lengths on \\(S\\) are invariant under isometries.\n  \\end{enumerate}\n\\end{remark}\n\n\\section{Geodesics}\n\nLet \\(V \\subseteq \\R^2_{u, v}\\) be open, \\(Edu^2 + 2Fdudv + Gdv^2\\) a Riemannian metric on \\(V\\), \\(\\gamma = (\\gamma_1, \\gamma_2): [a, b] \\to V\\) a smooth curve.\n\n\\begin{definition}[Geodesic]\\index{geodesic}\n  \\(\\gamma\\) is a \\emph{geodesic} if\n  \\begin{align*}\n    \\frac{d}{dt} (E \\dot \\gamma_1 + F \\dot \\gamma_2) &= \\frac{1}{2} (E_u \\dot \\gamma_1^2 + 2F_u \\dot \\gamma_1 \\dot \\gamma_2 + G_u \\dot \\gamma_2^2) \\\\\n    \\frac{d}{dt} (F \\dot \\gamma_1 + G \\dot \\gamma_2) &= \\frac{1}{2} (E_v \\dot \\gamma_1^2 + 2F_v \\dot \\gamma_1 \\dot \\gamma_2 + G_v \\dot \\gamma_2^2)\n  \\end{align*}\n  holds for all \\(t \\in [a, b]\\).\n\\end{definition}\n\nTo recognise the importance of this definition, we need some knowledge from the calculus of variations.\n\n\\begin{definition}[Proper variation]\\index{proper variation}\n  Let \\(\\gamma(a) = p, \\gamma(b) = q\\). A \\emph{proper variation} of \\(\\gamma\\) is a \\(C^\\infty\\) map \\(h: [a, b] \\times (-\\varepsilon, \\varepsilon) \\to V\\) such that\n  \\begin{align*}\n    h(t, 0) &= \\gamma(t) \\quad \\forall t \\in [a, b] \\\\\n    h(a, \\tau) &= p, h(b, \\tau) = q \\quad \\forall \\tau \\in (-\\varepsilon, \\varepsilon)\n  \\end{align*}\n  and put \\(\\gamma_\\tau(t) = h(t, \\tau)\\), so that \\(\\gamma_\\tau: [a, b] \\to V\\) is a \\(C^\\infty\\) curve.\n\\end{definition}\n\n\\begin{proposition}\n  \\(\\gamma\\) satisfies the geodesic ODE's if and only if \\(\\gamma\\) is a stationary point for the energy for all proper variations (in the sense of Euler-Lagrange).\n\\end{proposition}\n\n\\begin{proof}\n  Relabel \\(\\gamma(t) = (u(t), v(t))\\). 
Recall that energy can be written as\n  \\[\n    \\int_a^b (E(u, v) \\dot u^2 + 2F(u, v) \\dot u \\dot v + G(u, v) \\dot v^2) dt\n    = \\int_a^b I(u, v, \\dot u, \\dot v) dt\n  \\]\n  for some formal expression \\(I\\) taking 4 variables. The Euler-Lagrange equations assert that \\(\\gamma\\) is stationary if and only if\n  \\begin{align*}\n    \\frac{d}{dt} \\frac{\\p I}{\\p \\dot u} &= \\frac{\\p I}{\\p u} \\\\\n    \\frac{d}{dt} \\frac{\\p I}{\\p \\dot v} &= \\frac{\\p I}{\\p v}\n  \\end{align*}\n  which in our case of \\(\\gamma\\) is\n  \\begin{align*}\n    2 \\frac{d}{dt} (E \\dot u + F \\dot v) &= E_u \\dot u^2 + 2F_u \\dot u \\dot v + G_u \\dot v^2 \\\\ \n    2 \\frac{d}{dt} (F \\dot u + G \\dot v) &= E_v \\dot u^2 + 2F_v \\dot u \\dot v + G_v \\dot v^2\n  \\end{align*}\n  and the result follows.\n\\end{proof}\n\nLet \\(S \\subseteq \\R^3\\) be an embedded surface and \\(\\sigma: V \\to U\\) be a parameterisation and \\(\\theta = \\sigma^{-1}\\) be a chart. Let \\(\\Gamma: [a, b] \\to U\\) be a smooth curve in \\(S\\). Then \\(\\gamma = \\theta \\compose \\Gamma\\) is a smooth curve in \\(V\\).\n\nNow we can define geodesics in an equivalent but more synthetic way:\n\n\\begin{definition}[Geodesic]\\index{geodesic}\n  \\(\\Gamma\\) is a \\emph{geodesic curve} on \\(S\\) if and only if \\(\\gamma\\) is a geodesic in \\(V\\). Thus \\(\\Gamma\\) is a stationary point for the energy \\(\\int_a^b \\norm{\\Gamma'(t)}^2 dt\\).\n\\end{definition}\n\nNote that this is independent of the choice of chart \\(\\theta\\).\n\n\\begin{corollary}\n  If the curve \\(\\Gamma\\) minimises the energy for curves joining \\(P = \\Gamma(a)\\) and \\(Q = \\Gamma(b)\\) in \\(S\\), then \\(\\Gamma\\) is a geodesic.\n\\end{corollary}\n\n\\begin{proof}\n  For all \\(a \\leq a_1 < b_1 \\leq b\\), \\(\\Gamma_1 = \\Gamma|_{[a_1, b_1]}\\) minimises energy for all curves from \\(\\Gamma(a_1)\\) to \\(\\Gamma(b_1)\\). If \\(a_1, b_1\\) are such that \\(\\Gamma([a_1, b_1]) \\subseteq U\\) for some chart \\(\\theta: U \\to V\\) then \\(\\Gamma_1\\) is a geodesic by the previous proposition. Now just vary \\(a_1, b_1\\) to get a cover of \\([a, b]\\).\n\\end{proof}\n\nSo far we have characterised geodesics as energy-minimising curves, and there are good reasons why this is convenient. To get to the more intuitive understanding of geodesics as length-minimising curves, we need a technical lemma:\n\n\\begin{lemma}\n  Let \\(V \\subseteq \\R^2\\) be open and endowed with a Riemannian metric. Let \\(P, Q \\in V\\) and consider \\(C^\\infty\\) curves \\(\\gamma: [0, 1] \\to V\\) with \\(\\gamma(0) = P, \\gamma(1) = Q\\). Then such a curve \\(\\gamma_0\\) minimises energy if and only if \\(\\gamma_0\\) minimises the length and has constant speed.\n\\end{lemma}\n\n\\begin{proof}\n  Recall Cauchy-Schwarz for \\(f, g \\in C[a, b]\\):\n  \\[\n    \\left( \\int_a^b fg \\right)^2 \\leq \\int_a^b f^2 \\cdot \\int_a^b g^2\n  \\]\n  with equality if and only if \\(g = \\lambda f\\) for some \\(\\lambda \\in \\R\\) or \\(f = 0\\).\n\n  Put \\(f = 1, g = \\norm{\\dot \\gamma}\\) (with respect to the Riemannian metric), \\(a = 0, b = 1\\). Then\n  \\[\n    (\\operatorname{length} \\gamma)^2 \\leq \\operatorname{energy} \\gamma\n  \\]\n  with equality if and only if \\(\\norm{\\dot \\gamma}\\) is constant. If the length is \\(\\ell\\) then the minimal energy is \\(\\ell^2\\), which occurs precisely when \\(\\norm{\\dot \\gamma}\\) is constant.\n\\end{proof}\n\nThis gives a sufficient condition for a curve to be a geodesic, and one naturally wonders if it is also necessary. 
It turns out that it is a pretty close guess but one has to be more careful:\n\n\\begin{fact}\n  \\(\\Gamma\\) is a geodesic if and only if it locally minimises the energy, and also if and only if it locally minimises the length and has constant speed. In this case, ``locally minimises'' means that for all \\(t_0\\), there exists \\(\\varepsilon > 0\\) such that \\(\\Gamma|_{[t_0 - \\varepsilon, t_0 + \\varepsilon]}\\) minimises the desired quantity. This is not a caprice of our own but comes from ODE theory.\n\\end{fact}\n\n\\begin{fact}\n  The geodesic ODE's imply that \\(\\norm{\\Gamma'(t)}\\) is constant, see example sheet 3.\n\\end{fact}\n\n\\begin{remark}\n  Given that \\(\\begin{psmallmatrix} E & F \\\\ F & G \\end{psmallmatrix}\\) is invertible, the geodesic ODE's are equivalent to a system of the form\n  \\[\n    (\\ddot u, \\ddot v) = F(u, v, \\dot u, \\dot v).\n  \\]\n  It then follows from the standard theory of ODE's that for all \\(P = (u_0, v_0) \\in V\\) and for all \\(\\V a = (a_0, b_0) \\in \\R^2\\), there exists a unique geodesic \\(\\gamma(t), |t| < \\varepsilon\\) with \\(\\gamma(0) = P, \\dot \\gamma(0) = \\V a\\).\n\\end{remark}\n\n\\begin{eg}\n  Consider the surface \\(S^2\\). For all \\(P \\in S^2\\) and every tangent direction at \\(P\\), there exists a unique great circle through \\(P\\) in that direction, and arcs of great circles of length \\(< \\pi\\) are length-minimising. Great circles are all the geodesics on \\(S^2\\).\n\\end{eg}\n\nRecall that\n\\begin{enumerate}\n\\item We defined the geodesic curves on a surface \\(S \\subseteq \\R^3\\) as solutions of a certain 2nd order ODE.\n\\item Equivalently, we can determine a geodesic \\(\\Gamma(t)\\) uniquely by the initial conditions\n\\begin{align*}\n  \\Gamma(0) &= P \\in S \\\\\n  \\dot \\Gamma(0) &= \\V a \\in T_PS\n\\end{align*}\nwhere \\(\\Gamma: [0, T] \\to S, \\gamma = \\sigma^{-1} \\compose \\Gamma\\).\n\\item If \\(\\Gamma(t)\\) minimises the length between its end points and \\(\\norm{\\dot \\Gamma(t)}\\) is constant, then \\(\\Gamma(t)\\) is a geodesic. This is a sufficient but not necessary condition.\n\\end{enumerate}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item Arcs of great circles on \\(S^2\\) are precisely the geodesics.\n  \\item Similarly on a hyperbolic plane, the geodesics are precisely the hyperbolic line segments. They can also be computed directly. See example sheet.\n  \\end{enumerate}\n\\end{eg}\n\nAs solutions of the geodesic ODEs depend smoothly on the initial conditions, we may use this to construct around \\(P \\in S\\) \\emph{geodesic polar coordinates}\\index{geodesic polar coordinates} (i.e.\\ a particular chart). We will do an informal construction here while the more technical details will be left to IID Differential Geometry.\n\n\\begin{proof}[Sketch of construction]\n  Let \\(\\psi: U \\to V\\) be a chart. Let \\(P \\in U\\). We may assume wlog \\(\\psi(P) = 0 \\in V\\). Denote by \\(\\theta\\) the polar angle coordinate on \\(\\R^2 \\setminus \\{0\\}\\). For each value of \\(\\theta\\), there exists a unique geodesic \\(\\gamma^\\theta: (-\\varepsilon, \\varepsilon) \\to V\\) with\n  \\begin{align*}\n    \\gamma^\\theta (0) &= 0 \\\\\n    \\dot \\gamma^\\theta(0) &= \\cos \\theta e_1 + \\sin \\theta e_2\n  \\end{align*}\n  Set \\(\\sigma(r, \\theta) = \\gamma^\\theta(r)\\). Then we can show\n  \\begin{enumerate}\n  \\item \\(\\sigma\\) is smooth on\n    \\[\n      W = \\{(r, \\theta): 0 < r < \\varepsilon_0, \\theta_0 < \\theta < \\theta_0 + 2\\pi\\}.\n    \\]\n  \\item For all \\(\\theta_0\\), \\(\\psi^{-1} \\compose \\sigma: W \\to S\\) is a valid parameterisation. 
Correspondingly, \\(\\sigma^{-1} \\compose \\psi\\) is a valid chart on \\(S\\).\n  \\end{enumerate}\n  The values \\((r, \\theta)\\) of the chart are geodesic polar coordinates on some open \\(U \\subseteq S\\). Note that \\(P \\notin U\\).\n\\end{proof}\n\n\\begin{theorem}[Gauss' lemma]\\index{Gauss' lemma}\n  Geodesic circles \\(\\{r = r_0\\} \\subseteq W\\) are perpendicular to their radii, i.e.\\ to the curves \\(\\gamma^\\theta(t)\\), and the induced Riemannian metric on \\(W\\) (i.e.\\ the first fundamental form with respect to \\(\\sigma(r, \\theta)\\)) is\n  \\[\n    dr^2 + G(r, \\theta) d\\theta^2.\n  \\]\n\\end{theorem}\n\n\\begin{definition}[Atlas]\\index{atlas}\n  An \\emph{atlas} on \\(S\\) is a collection of charts covering \\(S\\).\n\\end{definition}\n\nFor example, the family of geodesic polar charts is an atlas. There are two more examples of interesting atlases on the example sheet.\n\n\\subsection{Surfaces of revolution}\n\nIn this section, \\(S\\) will be obtained by rotating a plane curve around a straight line \\(\\ell\\). Wlog \\(\\ell\\) is the \\(z\\)-axis in \\(\\R^3\\) and the curve is in the \\((x, z)\\)-plane. Let\n\\begin{align*}\n  \\eta: (a, b) &\\to \\R^3 \\\\\n  u &\\mapsto (f(u), 0, g(u))\n\\end{align*}\nwhere \\(a\\) and \\(b\\) may be infinite, such that\n\\begin{enumerate}\n\\item \\(\\norm{\\eta'(u)} = 1\\) (see example sheet 3),\n\\item \\(f(u) > 0\\) for all \\(u\\),\n\\item \\(\\eta\\) is a homeomorphism onto its image (this is to rule out, for example, a figure of 8 or an oscillating curve).\n\\end{enumerate}\n\nDefine \\(S\\) as the image of\n\\[\n  \\sigma(u, v) = (f(u) \\cos v, f(u) \\sin v, g(u)), a < u < b, 0 \\leq v \\leq 2\\pi\n\\]\nand for all \\(\\alpha \\in \\R\\), the restriction \\(\\sigma^\\alpha\\) of \\(\\sigma\\) to \\((a, b) \\times (\\alpha, \\alpha + 2\\pi)\\) is a homeomorphism onto its image.\n\nThen\n\\begin{align*}\n  \\sigma_u &= (f' \\cos v, f' \\sin v, g') \\\\\n  \\sigma_v &= (-f \\sin v, f \\cos v, 0) \\\\\n  \\sigma_u \\times \\sigma_v &= (-fg' \\cos v, -fg' \\sin v, ff') \\\\\n  \\norm{\\sigma_u \\times \\sigma_v}^2 &= f^2(f'^2 + g'^2) = f^2 \\neq 0\n\\end{align*}\nThus \\(\\sigma^\\alpha\\) is a valid parameterisation and \\(S\\) is a valid embedded surface.\n\n\\begin{definition}[Parallel \\& meridian]\\index{parallel}\\index{meridian}\n  Curves on \\(S\\) of the form \\(\\gamma(t) = \\sigma(u_0, t)\\) are \\emph{parallels} and \\(\\gamma(t) = \\sigma(t, v_0)\\) are \\emph{meridians}.\n\\end{definition}\n\nThe first fundamental form with respect to \\(\\sigma\\) is\n\\begin{align*}\n  E &= \\norm{\\sigma_u}^2 = f'^2 + g'^2 = 1 \\\\\n  F &= \\sigma_u \\cdot \\sigma_v = 0 \\\\\n  G &= \\norm{\\sigma_v}^2 = f^2\n\\end{align*}\ni.e.\\ \\(du^2 + f^2(u) dv^2\\). The geodesic equations on \\(S\\) are\n\\begin{align*}\n  \\ddot u &= f \\cdot \\frac{df}{du} \\cdot \\dot v^2 \\\\\n  \\frac{d}{dt} (f^2 \\dot v) &= 0\n\\end{align*}\n\nWhich parallels and meridians are geodesics?\n\n\\begin{proposition}\n  Assume \\(\\norm{\\dot \\gamma}_V = 1\\) where \\(\\gamma: I \\to V \\subseteq \\R^2\\), i.e.\\ if \\(\\gamma = (u(t), v(t))\\) then \\(\\dot u^2 + f^2(u)\\dot v^2 = 1\\). Then\n  \\begin{enumerate}\n  \\item every meridian \\(\\gamma\\) is a geodesic.\n  \\item a parallel \\(\\gamma\\) is a geodesic if and only if\n    \\[\n      \\frac{df}{du}(u_0) = 0,\n    \\]\n    i.e.\\ \\(u_0\\) is a stationary/critical point of \\(f\\).\n  \\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(v = v_0 \\) is constant so the second equation holds. 
As \\(\\dot v = 0\\), \\(|\\dot u(t)| = 1\\) so \\(\\ddot u = 0\\) and the first equation also holds.\n  \\item Suppose \\(u = u_0\\). Then \\(f^2 \\dot v^2 = 1\\) whence \\(\\dot v = \\pm \\frac{1}{f(u_0)} \\neq 0\\) is a constant so the second equation holds. Then the first equation holds if and only if \\(\\frac{df}{du} (u_0) = 0\\).\n  \\end{enumerate}\n\\end{proof}\n\n\\section{Gaussian curvature}\n\nLet's begin by considering a \\(1\\)-dimensional example. Let \\(\\eta: [0, \\ell] \\to \\R^2\\) be a smooth curve with \\(\\norm{\\eta'} = 1\\). Then\n\\[\n  \\eta' \\cdot \\eta'' = \\frac{1}{2} (\\eta' \\cdot \\eta')' = 0.\n\\]\nRecall the \\emph{curvature} \\(\\kappa\\) at \\(\\eta(s)\\) is determined by\n\\[\n  \\eta'' = \\kappa n\n\\]\nwhere \\(n\\), with \\(\\norm n = 1, n \\cdot \\eta' = 0\\), is the unit normal, with sign chosen such that \\(\\kappa \\geq 0\\).\n\nWhen \\(f: [c, d] \\to [0, \\ell]\\) is smooth with \\(f'(t) > 0\\), we may reparameterise \\(\\gamma(t) = \\eta(f(t))\\). Then \\(\\dot \\gamma = \\dot f(t)\\eta'(f(t))\\) and \\(\\norm{\\dot \\gamma}^2 = \\dot f^2\\). Also \\(\\eta''(f(t)) = \\kappa n\\) where \\(\\kappa\\) is the curvature at \\(f(t)\\). Then by Taylor's theorem,\n\\[\n  \\gamma(t + \\Delta t) - \\gamma(t) = \\dot f \\eta'(f(t)) \\Delta t + \\frac{1}{2} (\\ddot f \\eta'(f(t)) + \\dot f^2 \\eta''(f(t))) \\Delta t^2 + \\text{ higher order terms}\n\\]\nso\n\\[\n  (\\gamma(t + \\Delta t) - \\gamma(t)) \\cdot n = \\frac{1}{2} \\kappa \\norm{\\dot \\gamma}^2 \\Delta t^2 + \\text{ higher order terms}.\n\\]\nOn the other hand,\n\\[\n  \\norm{\\gamma(t + \\Delta t) - \\gamma(t)}^2 = \\norm{\\dot \\gamma}^2 \\Delta t^2 + \\text{ higher order terms}.\n\\]\nComparing these two equations, \\(\\kappa\\) is (up to a factor of \\(2\\)) the ratio of the leading (quadratic) coefficients, and is independent of the parameterisation of the curve.\n\nWe want to generalise this to two dimensions.\n\nLet \\(\\sigma: V \\to U\\) be a parameterisation of a surface \\(S \\subseteq \\R^3\\). Applying Taylor's theorem,\n\\begin{align*}\n  &\\sigma(u + \\Delta u, v + \\Delta v) - \\sigma(u, v) \\\\\n  =& \\sigma_u \\Delta u + \\sigma_v \\Delta v \\\\\n  +& \\frac{1}{2} (\\sigma_{uu} \\Delta u^2 + 2 \\sigma_{uv} \\Delta u \\Delta v + \\sigma_{vv} \\Delta v^2) + \\dots\n\\end{align*}\nThe ``deviation from the tangent plane'' is\n\\[\n  (\\sigma(u + \\Delta u, v + \\Delta v) - \\sigma(u, v)) \\cdot \\V N = \\frac{1}{2} (L \\Delta u^2 + 2M \\Delta u \\Delta v + N \\Delta v^2) + \\dots\n\\]\nwhere\n\\begin{align*}\n  L &= \\sigma_{uu} \\cdot \\V N \\\\\n  M &= \\sigma_{uv} \\cdot \\V N \\\\\n  N &= \\sigma_{vv} \\cdot \\V N\n\\end{align*}\nRecall that\n\\[\n  \\norm{\\sigma(u + \\Delta u, v + \\Delta v) - \\sigma(u, v)}^2 = E \\Delta u^2 + 2F \\Delta u \\Delta v + G \\Delta v^2 + \\dots\n\\]\nWe want to take similarly the ``quotient'' of the leading coefficients as an invariant. To do so we define\n\n\\begin{definition}[Second fundamental form]\\index{second fundamental form}\n  The \\emph{second fundamental form} for \\(S\\) (on \\(V\\), with respect to \\(\\sigma\\)) is\n  \\[\n    L du^2 + 2M dudv + N dv^2\n  \\]\n  where \\(L, M, N\\) are smooth maps as defined above.\n\\end{definition}\n\n\\begin{definition}[Gaussian curvature]\\index{Gaussian curvature}\n  The \\emph{Gaussian curvature} \\(K\\) of \\(S\\) at \\(P \\in S\\) is\n  \\[\n    K = \\frac{LN - M^2}{EG - F^2}.\n  \\]\n\\end{definition}\n\nIf \\(K > 0\\), the second fundamental form is positive or negative definite. If \\(K < 0\\) then it is indefinite. 
If \\(K = 0\\) then it is semi-definite.\n\n\\begin{eg}[Informal]\n  The unit sphere \\(S^2\\) has \\(K > 0\\). A Pringle crisp has \\(K < 0\\).\n\\end{eg}\n\n\\begin{remark}\n  It can be checked, similar to the case of curves, that \\(K\\) is independent of the parameterisation \\(\\sigma\\).\n\\end{remark}\n\n\\begin{proposition}\n  Let \\(\\V N = \\frac{\\sigma_u \\times \\sigma_v}{\\norm{\\sigma_u \\times \\sigma_v}}\\), the unit normal for a local parameterisation \\(\\sigma\\). Then at each point,\n  \\begin{align*}\n    \\V N_u &= a \\sigma_u + b \\sigma_v \\\\\n    \\V N_v &= c \\sigma_u + d \\sigma_v\n  \\end{align*}\n  where\n  \\[\n    -\n    \\begin{pmatrix}\n      L & M \\\\\n      M & N\n    \\end{pmatrix}\n    =\n    \\begin{pmatrix}\n      a & b \\\\\n      c & d\n    \\end{pmatrix}\n    \\begin{pmatrix}\n      E & F \\\\\n      F & G\n    \\end{pmatrix}.\n  \\]\n  In particular, \\(K = ad - bc\\).\n\\end{proposition}\n\n\\begin{proof}\n  \\(\\norm{\\V N} = 1\\) so \\(\\V N \\cdot \\V N_u = \\V N \\cdot \\V N_v = 0\\), so the relations must hold for some \\(a, b, c, d\\). As \\(\\V N \\cdot \\sigma_u = 0\\), \\(\\V N_u \\cdot \\sigma_u + \\V N \\cdot \\sigma_{uu} = 0\\) so \\(\\V N_u \\cdot \\sigma_u = -L\\). Similarly \\(\\V N_u \\cdot \\sigma_v = -M = \\V N_v \\cdot \\sigma_u\\), \\(\\V N_v \\cdot \\sigma_v = -N\\). Dotting the relations for \\(\\V N_u, \\V N_v\\) with \\(\\sigma_u, \\sigma_v\\),\n  \\begin{align*}\n    -L &= aE + bF \\\\\n    -M &= cE + dF \\\\\n    -M &= aF + bG \\\\\n    -N &= cF + dG\n  \\end{align*}\n  which is the claimed matrix identity. Taking determinants, we obtain \\(LN - M^2 = (ad - bc)(EG - F^2)\\), i.e.\\ \\(K = ad - bc\\).\n\\end{proof}\n\n\\begin{theorem}\n  Suppose for a parameterisation \\(\\sigma: V \\to U\\) the first fundamental form is \\(du^2 + G(u, v) dv^2\\). Then\n  \\[\n    K = -\\frac{(\\sqrt G)_{uu}}{\\sqrt G}.\n  \\]\n\\end{theorem}\nWe notice that this formula for the Gaussian curvature depends on the first fundamental form only. In fact, the general case, found in example sheet 3, is not much more difficult.\n\n\\begin{proof}\n  Set \\(e = \\sigma_u\\) and \\(f = \\frac{\\sigma_v}{\\sqrt G}\\); then \\(e, f, \\V N\\) is an orthonormal basis of \\(\\R^3\\) dependent on \\((u, v)\\). Taking dot products, \\(e \\cdot e = 1\\) so \\(e \\cdot e_u = 0 = e \\cdot e_v\\). Thus we may write\n  \\begin{align*}\n    e_u &= \\alpha f + \\lambda_1 \\V N \\\\\n    e_v &= \\beta f + \\lambda_2 \\V N \\\\\n    f_u &= - \\tilde \\alpha e + \\mu_1 \\V N \\\\\n    f_v &= - \\tilde \\beta e + \\mu_2 \\V N\n  \\end{align*}\n  (call these equations \\((\\dagger)\\)). Since \\(e \\cdot f = 0\\), we have\n  \\begin{align*}\n    e_u \\cdot f + e \\cdot f_u &= 0 \\\\\n    e_v \\cdot f + e \\cdot f_v &= 0\n  \\end{align*}\n  so \\(\\alpha = \\tilde \\alpha, \\beta = \\tilde \\beta\\). 
But\n  \\[\n    \\alpha = e_u \\cdot f = \\sigma_{uu} \\cdot \\frac{\\sigma_v}{\\sqrt G} = \\frac{1}{\\sqrt G} \\Big( \\underbrace{(\\sigma_u \\cdot \\sigma_v)_u}_{= F_u = 0} - \\frac{1}{2}\\underbrace{(\\sigma_u \\cdot \\sigma_u)_v}_{= E_v = 0} \\Big) = 0.\n  \\]\n  Similarly\n  \\[\n    \\beta = e_v \\cdot f = \\sigma_{uv} \\cdot \\frac{\\sigma_v}{\\sqrt G} = \\frac{G_u}{2\\sqrt G} = (\\sqrt G)_u.\n  \\]\n  Using \\((\\dagger)\\) again,\n  \\begin{align*}\n    & \\lambda_1 \\mu_2 - \\lambda_2 \\mu_1 \\\\\n    &= e_u \\cdot f_v - e_v \\cdot f_u \\\\\n    &= (e \\cdot f_v)_u - \\underbrace{(e \\cdot f_u)_v}_{= 0 \\text{ since } e \\cdot f_u = -\\tilde \\alpha = 0} \\\\\n    &= -\\beta_u \\\\\n    &= - (\\sqrt G)_{uu}\n  \\end{align*}\n  using \\(e \\cdot f_v = -\\tilde \\beta = -\\beta\\). Now from the relations in the previous proposition,\n  \\begin{align*}\n    & \\V N_u \\times \\V N_v \\\\\n    &= (a \\sigma_u + b \\sigma_v) \\times (c \\sigma_u + d \\sigma_v) \\\\\n    &= (ad - bc) \\sigma_u \\times \\sigma_v \\\\\n    &= K \\sigma_u \\times \\sigma_v \\\\\n    &= K \\sqrt G e \\times f \\\\\n    &= K \\sqrt G \\V N\n  \\end{align*}\n  so, using \\((a \\times b) \\cdot (c \\times d) = (a \\cdot c)(b \\cdot d) - (a \\cdot d)(b \\cdot c)\\) and \\(\\V N_u \\cdot e = - \\V N \\cdot e_u\\) etc.,\n  \\begin{align*}\n    K \\sqrt G\n    &= (\\V N_u \\times \\V N_v) \\cdot \\V N \\\\\n    &= (\\V N_u \\times \\V N_v) \\cdot (e \\times f) \\\\\n    &= (\\V N \\cdot e_u) (\\V N \\cdot f_v) - (\\V N \\cdot f_u) (\\V N \\cdot e_v) \\\\\n    &= \\lambda_1 \\mu_2 - \\lambda_2 \\mu_1 \\\\\n    &= - (\\sqrt G)_{uu}\n  \\end{align*}\n  whence \\(K = -\\frac{(\\sqrt G)_{uu}}{\\sqrt G}\\).\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\printindex\n\n\\iffalse\nOther courses that might be useful: topology, part of analysis II (differentiability in R^n and inverse function theorem)\n\nLeads to: IID Differential Geometry\n\nReading List\n\nP.\\ Wilson, Curved Spaces, CUP 2008\nFrom classical geometries to elementary differential geometry\n\\fi\n\n\n\\end{document}\n", "meta": {"hexsha": "0c28763ffa807b6174851e938388d058f2e64441", "size": 71857, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "IB/geometry.tex", "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_issues_repo_path": "IB/geometry.tex", "max_issues_repo_name": "b-mehta/tripos", "max_issues_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_forks_repo_path": "IB/geometry.tex", "max_forks_repo_name": "b-mehta/tripos", "max_forks_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "avg_line_length": 41.4878752887, "max_line_length": 598, "alphanum_fraction": 0.5941940242, "num_tokens": 26836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7931059438487663, "lm_q1q2_score": 0.5870366860176125}}
{"text": "%------------------------------------------------------------\n\\section{Do's and Dont's}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Universal quantification}\n\\begin{block}{\\textbf{Goal:} identify objects such that ALL properties from a ``list'' hold}\n\\begin{enumerate}\n\\item<2-> \\alt<1-3>{check all properties explicitly}{\\sout{check all properties explicitly}}\n%  \\begin{itemize}\n%  \\item<3->[\\KO]\n  \\onslide<3->{\\hfill \\dots{} \\alert<3>{obsolete if properties change}}\\onslide<10>{\\,\\textcolor{red}{\\KO}}\n%  \\end{itemize}\n\\item<4-> use variable-sized conjunction (via `\\lstinline{:}')\n%  \\begin{itemize}\n%  \\item<6->[\\OK]\n   \\onslide<6->{\\hfill \\dots{} \\alert<6>{adapts to changing facts}}\\onslide<10>{\\,\\textcolor{green}{\\OK}}\n%  \\end{itemize}\n\\item<7-> use negation of complement \\onslide<9->{\\hfill \\dots{} \\alert<9>{adapts to changing facts}}\\onslide<10>{\\,\\textcolor{green}{\\OK}}\n\\end{enumerate}\n\\end{block}\n\\begin{block}{\\textbf{Example:} vegetables to buy}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\nveg(asparagus).       veg(cucumber).\npro(asparagus,cheap). pro(cucumber,cheap). \\alt<1-4>{}{\\alert<5>{pre(cheap).}}\npro(asparagus,fresh). pro(cucumber,fresh). \\alt<1-4>{}{\\alert<5>{pre(fresh).}}\npro(asparagus,tasty). pro(cucumber,tasty). \\alt<1-4>{}{\\alert<5>{pre(tasty).}}\n\\alt<1-2>{}{\\alert<3>{pro(asparagus,clean).}}\\alt<1-5>{}{                      \\alt<6>{\\alert<6>{pre(clean).}}{\\alt<9->{\\alert<9>{pre(clean).}}{}}}\n\n\\alt<1>{}{\\alt<2-3>{\\alert<2>{buy(X) :- veg(X), pro(X,cheap), pro(X,fresh), pro(X,tasty)}\\alt<3>{, \\alert{pro(X,clean)}}{}\\alert<2>{.}}}{\\alt<5-6>{\\alert<5-6>{buy(X) :- veg(X), pro(X,P) : pre(P).}}{\\alt<8->{\\alert<8-9>{buy(X) :- veg(X), not bye(X).     bye(X) :- veg(X), pre(P), not pro(X,P).}}{}}}\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}{Latin square}\n\\vspace*{-4mm}\n\\begin{minipage}[t]{0.42\\linewidth}%\n\\begin{block}{\\textbf{Given:} an $N{\\times}N$ board}\n\\vspace*{2mm}\n\\begin{tabular}{c@{~}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|}\n\\cline{2-7}\n1 & & & & & &\n\\\\\\cline{2-7}\n2 & & & & & &\n\\\\\\cline{2-7}\n3 & & & & & &\n\\\\\\cline{2-7}\n4 & & & & & &\n\\\\\\cline{2-7}\n5 & & & & & &\n\\\\\\cline{2-7}\n6 & & & & & &\n\\\\\\cline{2-7}\n\\multicolumn{1}{@{}c@{}}{} & \\multicolumn{1}{@{}c@{}}{1} & \\multicolumn{1}{@{}c@{}}{2} & \\multicolumn{1}{@{}c@{}}{3} & \\multicolumn{1}{@{}c@{}}{4} & \\multicolumn{1}{@{}c@{}}{5} & \\multicolumn{1}{@{}c@{}}{6}\n\\vspace*{1mm}\n\\end{tabular}\n\nrepresented by facts:\n\\vspace*{-2mm}\\footnotesize\n\\begin{semiverbatim}\nsquare(1,1). \\dots\\ square(1,6).\nsquare(2,1). \\dots\\ square(2,6).\nsquare(3,1). \\dots\\ square(3,6).\nsquare(4,1). \\dots\\ square(4,6).\nsquare(5,1). \\dots\\ square(5,6).\nsquare(6,1). 
\\dots\\ square(6,6).\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\end{minipage}%\n\\hfill\\pause%\n\\begin{minipage}[t]{0.52\\linewidth}%\n\\begin{block}{\\textbf{Wanted:} assignment of $1,\\dots,N$}\n\\vspace*{2mm}\n\\begin{tabular}{c@{~}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|@{}p{5mm}@{}|}\n\\cline{2-7}\n1 & ~$1$ & ~$2$ & ~$3$ & ~$4$ & ~$5$ & ~$6$\n\\\\\\cline{2-7}\n2 & ~$2$ & ~$3$ & ~$4$ & ~$5$ & ~$6$ & ~$1$\n\\\\\\cline{2-7}\n3 & ~$3$ & ~$4$ & ~$5$ & ~$6$ & ~$1$ & ~$2$\n\\\\\\cline{2-7}\n4 & ~$4$ & ~$5$ & ~$6$ & ~$1$ & ~$2$ & ~$3$\n\\\\\\cline{2-7}\n5 & ~$5$ & ~$6$ & ~$1$ & ~$2$ & ~$3$ & ~$4$\n\\\\\\cline{2-7}\n6 & ~$6$ & ~$1$ & ~$2$ & ~$3$ & ~$4$ & ~$5$\n\\\\\\cline{2-7}\n\\multicolumn{1}{@{}c@{}}{} & \\multicolumn{1}{@{}c@{}}{1} & \\multicolumn{1}{@{}c@{}}{2} & \\multicolumn{1}{@{}c@{}}{3} & \\multicolumn{1}{@{}c@{}}{4} & \\multicolumn{1}{@{}c@{}}{5} & \\multicolumn{1}{@{}c@{}}{6}\n\\vspace*{1mm}\n\\end{tabular}\n\nrepresented by atoms:\n\\vspace*{-2mm}\\footnotesize\n\\begin{semiverbatim}\nnum(1,1,1) num(1,2,2) \\dots\\ num(1,6,6)\nnum(2,1,2) num(2,2,3) \\dots\\ num(2,6,1)\nnum(3,1,3) num(3,2,4) \\dots\\ num(3,6,2)\nnum(4,1,4) num(4,2,5) \\dots\\ num(4,6,3)\nnum(5,1,5) num(5,2,6) \\dots\\ num(5,6,4)\nnum(6,1,6) num(6,2,1) \\dots\\ num(6,6,5)\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\end{minipage}%\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Projecting Irrelevant Details Out}\n\\begin{block}{A Latin square encoding}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\n% DOMAIN\n#const n=32. square(1..n,1..n).\n\\alt<1-3>{}{\\alert<4>{squareX(X) :- square(X,Y).      squareY(Y) :- square(X,Y).}}\n\n% GENERATE\n1 \\{ num(X,Y,N) : N = 1..n \\} 1 :- square(X,Y).\n\n% TEST\n:- \\alt<1-3>{square(X,\\alert<2>{Y})}{\\alert<4>{squareX(X)}}, N = 1..n, not num(X,Y',N) : square(X,Y').\n:- \\alt<1-3>{square(\\alert<2>{X},Y)}{\\alert<4>{squareY(Y)}}, N = 1..n, not num(X',Y,N) : square(X',Y).\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\vspace*{-2mm}\n\\begin{itemize}\n\\item<2-3>\\structure{Note} \\alert<2>{unreused ``singleton variables''}\n\\end{itemize}\n%\\begin{block}<3-5>\n\\vspace*{-4mm}\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<3-5>{\\lstinline{gringo} \\attach{Examples/latin_0.lp}{\\lstinline{latin_0.lp}} \\lstinline{| wc}}\n\\lstinline{105480} \\alert<5>{\\lstinline{2558984}} \\lstinline{14005258}\n\\end{block}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<5->{\\lstinline{gringo} \\attach{Examples/latin_1.lp}{\\lstinline{latin_1.lp}} \\lstinline{| wc}}\n\\lstinline{42056} \\alert<5>{\\lstinline{273672}} \\lstinline{1690522}\n\\end{block}\n\\end{minipage}\n% }}{} }\n% \\alt<1-4>{\\onslide<3>{}~\\lstinline{2558984}~\\lstinline{13972426}}}}{\\alert<5>{\\lstinline{42056\\ }~\\lstinline{273672\\ }~\\lstinline{1688522}}}\n%\\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Unraveling Symmetric Inequalities}\n\\begin{block}{Another Latin square encoding}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\n% DOMAIN\n#const n=32. 
square(1..n,1..n).\n\n% GENERATE\n1 \\{ num(X,Y,N) : N = 1..n \\} 1 :- square(X,Y).\n\n% TEST\n:- num(X,\\alert<2>{Y},N), num(X,\\alert<2>{Y'},N), \\alt<1-3>{\\alert<2>{Y != Y'}}{\\alert<4>{Y <  Y'}}.\n:- num(\\alert<2>{X},Y,N), num(\\alert<2>{X'},Y,N), \\alt<1-3>{\\alert<2>{X != X'}}{\\alert<4>{X <  X'}}.\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\begin{itemize}\n\\item<2-3>\\structure{Note} \\alert<2>{duplicate ground rules\n\\\\\\small (swapping \\lstinline{Y}/\\lstinline{Y'} and \\lstinline{X}/\\lstinline{X'}\n gives the ``same'')}\n\\end{itemize}\n\\vspace*{-4mm}\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<3-5>{\\lstinline{gringo} \\attach{Examples/latin_2.lp}{\\lstinline{latin_2.lp}} \\lstinline{| wc}}\n\\lstinline{2071560} \\alert<5>{\\lstinline{12389384}} \\lstinline{40906946}\n\\end{block}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<5->{\\lstinline{gringo} \\attach{Examples/latin_3.lp}{\\lstinline{latin_3.lp}} \\lstinline{| wc}}\n\\lstinline{1055752} \\alert<5>{\\lstinline{6294536}} \\lstinline{21099558}\n\\end{block}\n\\end{minipage}\n% \\begin{block}<3-5>{\\lstinline{gringo} \\alt<1-4>{\\onslide<3>{\\attach{Examples/latin_2.lp}{\\lstinline{latin_2.lp}}}}{\\attach{Examples/latin_3.lp}{\\lstinline{latin_3.lp}}} \\lstinline{| wc}}\n% \\alt<1-4>{\\onslide<3>{\\alert<3>{\\lstinline{2071624}~\\lstinline{12389640}~\\lstinline{40915090}}}}{\\alert<5>{\\lstinline{1055816}~\\lstinline{6294792\\ }~\\lstinline{21104106}}}\n% \\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Linearizing Existence Tests}\n\\begin{block}{Still another Latin square encoding}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\n% DOMAIN\n#const n=32. square(1..n,1..n).\n\n% GENERATE\n1 \\{ num(X,Y,N) : N = 1..n \\} 1 :- square(X,Y).\n\n% \\alt<1-3>{TEST}{DEFINE \\onslide<5->{+ TEST}}\n\\alt<4->{\\alert<4>{gtX(X-1,Y,N) :- num(X,Y,N), 1 < X.     gtY(X,Y-1,N) :- num(X,Y,N), 1 < Y.}\n\\alert<4>{gtX(X-1,Y,N) :- gtX(X,Y,N), 1 < X.     gtY(X,Y-1,N) :- gtY(X,Y,N), 1 < Y.}\n\\onslide<5->{\\alert<5>{ :- num(X,Y,N), gtX(X,Y,N).             
:- num(X,Y,N), gtY(X,Y,N).}}}{:- num(X,\\alert<2>{Y},N), num(X,\\alert<2>{Y'},N), \\alert<2>{Y < Y'}.\n:- num(\\alert<2>{X},Y,N), num(\\alert<2>{X'},Y,N), \\alert<2>{X < X'}.\n}\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\begin{itemize}\n\\item<2-3>\\structure{Note} \\alert<2>{uniqueness of \\lstinline{N} in a row/column checked by enumerating pairs!}\n\\end{itemize}\n\\vspace*{-4mm}\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<3-6>{\\lstinline{gringo} \\attach{Examples/latin_3.lp}{\\lstinline{latin_3.lp}} \\lstinline{| wc}}\n\\lstinline{1055752} \\alert<6>{\\lstinline{6294536}} \\lstinline{21099558}\n\\end{block}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<6->{\\lstinline{gringo} \\attach{Examples/latin_4.lp}{\\lstinline{latin_4.lp}} \\lstinline{| wc}}\n\\lstinline{228360} \\alert<6>{\\lstinline{1205256}} \\lstinline{4780744}\n\\end{block}\n\\end{minipage}\n% \\begin{block}<3-6>{\\lstinline{gringo} \\alt<1-5>{\\onslide<3>{\\attach{Examples/latin_3.lp}{\\lstinline{latin_3.lp}}}}{\\attach{Examples/latin_4.lp}{\\lstinline{latin_4.lp}}} \\lstinline{| wc}}\n% \\alt<1-5>{\\onslide<3>{\\alert<3>{\\lstinline{1055816}~\\lstinline{6294792}~\\lstinline{21104106}}}}{\\alert<6>{\\lstinline{228424\\ }~\\lstinline{1205512}~\\lstinline{4782138}}}\n% \\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Assigning Aggregate Values}\n\\begin{block}{Yet another Latin square encoding}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\n\\alert<2-3>{% DOMAIN}\n#const n=32. square(1..n,1..n).\\only<1-6>{\n\\alert<2>{sigma(S) :- \\alert<3>{S = #sum \\{ X:square(X,n) \\}}.}\\alt<3>{\\hfill\\textcolor{green}{\\OK}}{}}\n\n% GENERATE\n1 \\{ num(X,Y,N) : N = 1..n \\} 1 :- square(X,Y).\n\n\\alt<1-8>{\\alert<4-6>{% DEFINE + TEST}\n\\alert<4>{occX(X,N,C) :- X = 1..n, N = 1..n, \\alt<5>{\\alert{C \\{ num(X,Y,N) \\} C, C = 0..n}.}{\\alert<6>{C = \\{ num(X,Y,N) \\}}.}\\alt<6>{\\hfill\\KO}{}}\n\\alert<4>{occY(Y,N,C) :- Y = 1..n, N = 1..n, \\alt<5>{\\alert{C \\{ num(X,Y,N) \\} C, C = 0..n}.}{\\alert<6>{C = \\{ num(X,Y,N) \\}}.}\\alt<6>{\\hfill\\KO}{}}\n\\alert<4>{:- occX(X,N,C), C != 1.  :- occY(Y,N,C), C != 1.}}{\\alert<9>{% TEST\n:- X = 1..n, N = 1..n, not 1 \\{ num(X,Y,N) \\} 1.\n:- Y = 1..n, N = 1..n, not 1 \\{ num(X,Y,N) \\} 1.\n}}\n\n% DISPLAY\n#show num/3. \\only<1-6>{\\alert<2>{#show sigma/1.}}\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\alt<5>{%\n\\begin{itemize}\n\\item\\structure{Note} \\alert{internal transformation by \\lstinline{gringo}}\n\\end{itemize}\\vspace*{1mm}}{%\n\\vspace*{-7mm}\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<7->{\\lstinline{gringo} \\attach{Examples/latin_5.lp}{\\lstinline{latin_5.lp}} \\lstinline{| wc}}\n\\alt<7>{}{\\lstinline{304136} \\alert<10>{\\lstinline{5778440}} \\lstinline{30252505}}\n\\end{block}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{0.47\\linewidth}%\n\\begin{block}<9->{\\lstinline{gringo} \\attach{Examples/latin_6.lp}{\\lstinline{latin_6.lp}} \\lstinline{| wc}}\n\\alt<9>{}{\\lstinline{48136} \\alert<10>{\\lstinline{373768}} \\lstinline{2185042}}\n\\end{block}\n\\end{minipage}}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}[fragile]{Breaking Symmetries}\n\\begin{block}{The ultimate Latin square encoding?}\n\\vspace*{-4mm}\\footnotesize\n\\begin{semiverbatim}\n% DOMAIN\n#const n=32. 
square(1..n,1..n).\n\n% GENERATE\n1 \\{ num(X,Y,N) : N = 1..n \\} 1 :- square(X,Y).\n\n% TEST\n:- X = 1..n, N = 1..n, not 1 \\{ num(X,Y,N) \\} 1.\n:- Y = 1..n, N = 1..n, not 1 \\{ num(X,Y,N) \\} 1.\n\\alt<1-3,6-7>{}{\\alert<4,8>{:- square(1,Y), not num(1,Y,Y). % Symmetry Breaking}}\n\n% DISPLAY\n#show num/3.\n\\end{semiverbatim}\n\\vspace*{-2mm}\n\\end{block}\n\\alt<6->{\\vspace*{-5mm}%\n\\begin{block}{\\alert<6,8>{\\lstinline{gringo -c}} \\alert{\\lstinline{n=5}} \\alt<8-9>{\\attach{Examples/latin_7.lp}{\\lstinline{latin_7.lp}}}{\\attach{Examples/latin_6.lp}{\\lstinline{latin_6.lp}}} \\alert<6,8>{\\lstinline{| clasp -q 0}}}\n\\alt<7-8>{\\onslide<7>{%\n\\alert{\\lstinline{Models      : 161280}}\\qquad\n\\alert{\\lstinline{Time        : 2.078s}}}}{\\alt<6>{\n\\lstinline{\\ }}{%\n\\alert{\\lstinline{Models      : 1344\\ }}\\qquad\n\\alert{\\lstinline{\\ Time        : 0.024s}}}}\n\\end{block}\\vspace*{-1mm}}{\\alt<1>{}{%\n\\begin{itemize}\n\\item\\structure{Note} \\alt<5>{Let's compare \\alert{enumeration} speed!}{\\alt<2>{\\alert{many symmetric solutions}\n\\\\\\small (mirroring, rotation, value permutation)}{\\alert<3>{easy and safe to fix a full row/column!}}}\n\\end{itemize}}}\n\\end{frame}\n%------------------------------------------------------------\n\\section{Hints}\n%------------------------------------------------------------\n\\begin{frame}{Encode With Care!}\n  \\begin{enumerate}\n  \\item Create a \\alert<1>{working} encoding\n    \\begin{itemize}\n    \\item<2->[Q1:] Do you need to modify the encoding if the facts change?\n    \\item<2->[Q2:] Are all variables significant (or statically functionally dependent)?\n    \\item<2->[Q3:] Can there be (many) identical ground rules?\n    \\item<2->[Q4:] Do you enumerate pairs of values (to test uniqueness)?\n    \\item<2->[Q5:] Do you assign dynamic aggregate values (to check a fixed bound)?\n    \\item<2->[Q6:] Do you admit (obvious) symmetric solutions?\n    \\item<3->[Q7:] Do you have additional domain knowledge simplifying the problem?\n    \\item<4->[Q8:] Are you aware of anything else that, if encoded,\n                   would reduce grounding and/or solving efforts?\n    \\end{itemize}\n  \\item<1-> \\alert<5>{Revise until no ``Yes'' answer!}\n    \\begin{itemize}\n    \\item<6->\\structure{Note} \\alert<6>{If the format of facts makes encoding painful\n                        (for instance,\n                        abusing grounding for ``scientific calculations''),\n                        revise the fact format as well.}\n    \\end{itemize}\n  \\end{enumerate}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}{Some Hints on (Preventing) Debugging}\n\\begin{block}{Kinds of errors}\n\\vspace*{-2mm}\n  \\begin{itemize}\n  \\item \\alert<2>{syntactic} \\hfill\\onslide<2->{\\dots{} follow error messages by the grounder}\n  \\item \\alert<3->{semantic} \\hfill\\onslide<3->{\\dots{} (most likely) encoding/intention mismatch}\n  \\end{itemize}\n\\vspace*{-2mm}\n\\end{block}\n\\begin{block}<4->{Ways to identify semantic errors (early)}\n  \\begin{enumerate}\n  \\item<4-> \\alert{develop and test incrementally}\n    \\begin{itemize}\n    \\item prepare toy instances with ``interesting features''\n    \\item build the encoding bottom-up and verify additions (eg.\\ new predicates)\n    \\end{itemize}\n  \\item<5-> \\alert{compare the encoded to the intended meaning}\n    \\begin{itemize}\n    \\item check whether the grounding fits (use \\alert{\\lstinline{gringo --text}})\n    \\item if stable 
models are unintended, investigate conditions that fail to hold\n    \\item if stable models are missing, examine integrity constraints (add heads)\n    \\end{itemize}\n  \\item<6-> ask tools (eg.\\ {\\footnotesize\\url{http://www.kr.tuwien.ac.at/research/projects/mmdasp/}})\n  \\end{enumerate}\n\\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\\begin{frame}{Overcoming Performance Bottlenecks}\n\\begin{block}{Grounding}\n\\vspace*{-2mm}\n\\begin{itemize}\n\\item monitor \\alert{time} spent by and output \\alert{size} of \\lstinline{gringo}\n  \\begin{enumerate}\n  \\item system tools   \\ (eg.\\ \\lstinline{time(gringo [}\\dots{}\\lstinline{] | wc)})\n  \\item grounding info \\ (eg.\\ \\lstinline{gringo --text})\n  \\end{enumerate}\n  \\item<2->\\structure{Note} once identified, \\alert{reformulate} ``critical'' logic program parts\n\\end{itemize}\n\\end{block}\n\\begin{block}<3->{Solving}\n\\vspace*{-2mm}\n\\begin{itemize}\n\\item check solving statistics (use \\alert{\\lstinline{clasp --stats}})\n\\item<4->\\structure{Note} if great search efforts (\\alert{\\lstinline{Conflicts}}/\\lstinline{Choices}/\\lstinline{Restarts}), then\n  \\begin{enumerate}\n  \\item<5-> try prefabricated settings (using \\lstinline{clasp} option `\\lstinline{--configuration}')\n  \\item<5-> try auto-configuration (offered by \\lstinline{claspfolio} or \\lstinline{accclingo})\n  \\item<5-> try manual fine-tuning (requires expert knowledge!)\n  \\item<6-> if possible, reformulate the problem or add domain knowledge\n            (``redundant'' constraints) to help the solver\n  \\end{enumerate}\n\\end{itemize}\n\\end{block}\n\\end{frame}\n%------------------------------------------------------------\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-PDF-mode: t\n%%% TeX-master: \"../asp\"\n%%% End:\n", "meta": {"hexsha": "8d6a0b6c2c22850bbedc98bce97570c8ce11ce18", "size": 16163, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "methodology.tex", "max_stars_repo_name": "potassco-asp-course/encoding", "max_stars_repo_head_hexsha": "bb3f0ff1db718ab79a45e7f0c8bd7391159da120", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "methodology.tex", "max_issues_repo_name": "potassco-asp-course/encoding", "max_issues_repo_head_hexsha": "bb3f0ff1db718ab79a45e7f0c8bd7391159da120", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "methodology.tex", "max_forks_repo_name": "potassco-asp-course/encoding", "max_forks_repo_head_hexsha": "bb3f0ff1db718ab79a45e7f0c8bd7391159da120", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-07-09T11:40:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-09T11:40:51.000Z", "avg_line_length": 42.0911458333, "max_line_length": 298, "alphanum_fraction": 0.5985275011, "num_tokens": 6149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5869749364895644}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath,amssymb}\n\\usepackage{hyperref}\n\\usepackage{graphicx}\n\n\\title{\\bf{Axisymmetric Drag Polar Analysis}}\n\\author{Nicholas Malaya \\\\ Institute for Computational Engineering and Sciences \\\\ University of Texas at Austin} \\date{}\n\n\\begin{document}\n\\maketitle\n\n\\newpage\n\nThe power extracted by the turbine is, \n\\begin{equation}\n P = \\Omega Q\n\\end{equation}\nwhere the torque, Q, is, \n\\begin{equation}\n Q = B \\int_{r_{\\text{min}}}^{r_{\\text{max}}} F'_{\\tau}\\, r\\, dr.\n\\end{equation}\nHere, B is the number of blades $r_{\\text{max}}$ and $r_{\\text{min}}$\nare the turbine radii, and $F'_{\\tau}$ is the force per unit\nlength on the turbine, which is, \n\\begin{equation}\n F'_{\\tau} = \\frac{1}{2}\\rho U^2 \\, c \\, C_{\\tau}.\n\\end{equation}\n$U$ is magnitude of velocity. $c$ is the blade chord\nlength, which is assumed to be constant (not a function of the radius,\nfor instance). Finally, $C_{\\tau}$ is the tangential force coefficient,\nwhich depends on the local lift and drap coefficients, as well as the\nflow angle, $\\phi$, \n\\begin{equation}\n C_{\\tau} = C_L \\,\\text{sin}(\\phi) + C_D \\,\\text{cos}(\\phi)\n\\end{equation}\nCombining the equations above results in an expression for the power\nthat explicitly depends on the lift and drag coefficients, \n\\begin{equation*}\n P = \\frac{\\rho\\, c\\, \\Omega B}{2}\n  \\int_{r_{\\text{min}}}^{r_{\\text{max}}} U(r)^2 \\left(C_L\n\t\t\t\t\t\t     \\,\\text{sin}(\\phi)\n\t\t\t\t\t\t     + C_D\n\t\t\t\t\t\t     \\,\\text{cos}(\\phi)\n\t\t\t\t\t\t    \\right) r\\,dr. \n\\end{equation*}\nThis equation is separable, \n\\begin{align}\n P_L = \\frac{\\rho\\, c\\, \\Omega B}{2}\n  \\int_{r_{\\text{min}}}^{r_{\\text{max}}} U(r)^2 \\, C_L(\\phi,r)\n \\,\\text{sin}(\\phi)\\, r\\,dr,  \\label{lift} \\\\\n P_D = \\frac{\\rho\\, c\\, \\Omega B}{2}\n  \\int_{r_{\\text{min}}}^{r_{\\text{max}}} U(r)^2 \\, C_D(\\phi,r) \\,\\text{cos}(\\phi)\\, r\\,dr. \\label{drag}\n\\end{align}\nNote that we have assumed $C_D = C_D(\\phi,r)$ and $C_L = C_L(\\phi,r)$,\nnamely, that the coefficients vary with the flow direction and may vary\nradially, due to twisting the blade angle. Furthermore, the flow\ndirection, $\\phi$, varies with radial location, in that $\\phi=\\phi(r)$. \nOur objective is now to discover what these unknown functions of lift\nand drag are. To do this, we specify an optimization problem such that, \n\\begin{equation*} \n \\text{Max } P(C_L,C_D) \\quad \\text{ subject to: }\n  \\begin{cases}\n   |C_L| < C_L^{\\text{Max}}, \\\\\n   0 < C_D < C_D^{\\text{Max}}. \\\\\n  \\end{cases}\n\\end{equation*}\n\nIn words, the drag must be specified to be greater than zero, but\nthe lift can be negative. For these conditions, we are interested in\nlargest attainable values. For the drag coefficient, $C_D^{\\text{Max}}$\nis two. % refmunson \nThis corresponds to a flat plate perpendicular to the flow.\nThe lift coefficient peak is about 1.75. This design is not necessarily\nphysically realizable, but represents an absolute maximum that one\nappears conceivable. 
\n\nThe integral shown in Equation \\ref{lift} above is bounded by \nSchwarz's inequality,  \n\\begin{align}\n  \\left[\\int_{r_{\\text{min}}}^{r_{\\text{max}}} C_L(\\phi,r)\\, U(r)^2\n \\,\\text{sin}(\\phi)\\, r\\,dr \\right]^2 \\le \n \\int_{r_{\\text{min}}}^{r_{\\text{max}}} C_L^2(\\phi,r) dr \\,\n \\int_{r_{\\text{min}}}^{r_{\\text{max}}} U(r)^4 \n \\,\\text{sin}^2(\\phi)\\, r^2\\,dr.\n\\end{align}\nIn this way the quantity\n\\begin{equation}\n \\int_{r_{\\text{min}}}^{r_{\\text{max}}} C_L^2(\\phi,r) dr \\,, \n\\end{equation}\nis clearly maximized when $C_L(\\phi,r) = C_L^{\\text{Max}}$. \nThe result for Equation \\ref{drag} is identical. Thus, the drag polars\nhave the form, \n\\begin{align*} \n &C_L(\\phi) = \n  \\begin{cases}\n    1.75& \\quad 0^{\\circ} \\le \\phi \\le 180^{\\circ}, \\\\\n   -1.75& \\quad 180^{\\circ} \\le \\phi \\le 360^{\\circ}.  \\\\\n  \\end{cases}\\\\\n &C_D(\\phi) = \n  \\begin{cases}\n    2.0& \\quad -90^{\\circ} \\le \\phi \\le 90^{\\circ}, \\\\\n      0& \\quad 90^{\\circ} \\le \\phi \\le 270^{\\circ}.  \\\\\n  \\end{cases}\\\\\n\\end{align*}\nThe plots of these drag polars are shown in Figure \\ref{drags}. \n\n\\begin{figure}[!htb]\n  \\begin{center}\n    \\includegraphics[width = 12 cm]{figs/drags}\n    \\caption{The idealized drag polars.} \n    \\label{drags}\n  \\end{center}\n\\end{figure}\n\n\n\\newpage\n\\section{Questions}\n\n\\begin{itemize}\n \\item Does this need regularization to ensure well-posedness?\n \\item Boundary conditions are periodic\n \\item What about supporting twist? (e.g. $\\beta = \\beta(r)$)\n \\item Can we constrain $C_L, C_D$?\n \\item Is this just linear programming?\n\\end{itemize}\n\n\\end{document}\n", "meta": {"hexsha": "59587a8ea12ca63f32fbaafbeea6d45a887e5f91", "size": 4439, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "disputatio/causa/drag_polar_axi.tex", "max_stars_repo_name": "nicholasmalaya/paleologos", "max_stars_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-04T17:49:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-04T17:49:42.000Z", "max_issues_repo_path": "disputatio/causa/drag_polar_axi.tex", "max_issues_repo_name": "nicholasmalaya/paleologos", "max_issues_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "disputatio/causa/drag_polar_axi.tex", "max_forks_repo_name": "nicholasmalaya/paleologos", "max_forks_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-01-04T16:08:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-16T19:34:24.000Z", "avg_line_length": 34.6796875, "max_line_length": 121, "alphanum_fraction": 0.6616355035, "num_tokens": 1521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5869749322526089}}
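As a quick numerical companion to Equations \ref{lift} and \ref{drag}, the sketch below evaluates both integrals by quadrature using the idealized polars above. The rotor parameters and the profiles $U(r)$ and $\phi(r)$ are illustrative assumptions, not values from the text.

\begin{verbatim}
import numpy as np

# Illustrative (assumed) parameters and profiles -- not from the text.
rho, c, Omega, B = 1.225, 0.1, 10.0, 3      # density, chord, rotation rate, blades
r = np.linspace(0.2, 1.0, 400)              # assumed r_min = 0.2, r_max = 1.0
U_axial = 5.0                               # assumed axial inflow speed
phi = np.arctan2(U_axial, Omega * r)        # local flow angle
U = np.hypot(U_axial, Omega * r)            # velocity magnitude

# Idealized drag polars derived in the text.
C_L = np.where(np.sin(phi) >= 0, 1.75, -1.75)
C_D = np.where(np.cos(phi) >= 0, 2.0, 0.0)

pref = rho * c * Omega * B / 2
P_L = pref * np.trapz(U**2 * C_L * np.sin(phi) * r, r)   # Equation (lift)
P_D = pref * np.trapz(U**2 * C_D * np.cos(phi) * r, r)   # Equation (drag)
print(P_L, P_D)
\end{verbatim}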
{"text": "\\label{ch:three}\n\n\\section{The abstraction through random variables}\n    We have mentioned many times the idea of \\textit{factoring out} the enemy in the previous chapter, but we never actually expressed formally what we meant with that for CKA.\n    In order to make reasoning like this it is needed to model the concepts of \\textit{message} and \\textit{information} in a computation-able way.\n    Shannon studied this problem and published his results in \\cite{Shannon49} which is now the foundation of modern information theory and cryptography.\n    \n    Let us suppose that over a certain alphabet there exist messages $M_1,M_2,\\ldots M_n$ and each message has a probability $P(M_i)$ to be chosen (i.e. transmitted).\n    Each message $M_i$ is encrypted into its counterpart $E_i$.\n    An enemy intercepts $E_i$ and can therefore calculate the probability of message $M_i$ corresponding to the received encrypted version; namely the conditional probability $P(M_i|E_i)$.\n    Shannon states that to obtain perfect secrecy of the message $P(M|E)$ must equal $P(M)$ for all $E$ and all $M$.\n    From Bayes' formula\n    \\begin{equation}\n    \tP(M|E) = \\frac{P(M)P(E|M)}{P(E)}\n    \\end{equation}\n    it follows that $P(E|M)=P(E)$ is an equivalent condition for perfect secrecy.\n    That is, the probability of the cipher-text $E$ must be independent of knowing the message $M$.\n    This translates into the case where intercepting the encrypted message gives the enemy no information. \n    \n    Now imagine the message is transmitted by a satellite to Alice and Bob.\n    Eve is also listening. This is an example of a public channel.\n    We end up with three versions of the message\\footnotemark : Alice's version $X$, Bob's $Y$ and Eve's version $Z$.\n    To express the whole space of combinations of possible messages, we need the joint probability $P_{XYZ}$.\n    Then the idea of \"factoring out Eve\" takes the meaning of obtaining \n    \\begin{equation}\n    \tP_{XYZ}(x,y,z) = P_{XY}(x,y)\\cdot P_Z(z) \\; \\forall x,y,z\n    \\end{equation}\n    so that the marginal of $P_{XYZ}$ over $Z$ --- i.e. the part of the distribution owned by Eve --- is now product with variables $X$ and $Y$. 
$Z$ is independent of $X$ and $Y$.\n    \n    \\footnotetext{Ideally the three messages are identical, but just consider that the message is sent through a noisy channel: each receiver will have slight variations of the message, hence the distinction.}\n    \n    Information theory builds on probability theory, which provides us with useful measures and rules to operate on those probabilities.\n    The most important for us are the marginal $P_X(x)$ of joint probabilities $P_{XY}(x,y)$, the entropy $\\Ent (X)$, and the correlation and mutual information $\\I(X;Y)$ of random variables.\n    \n    \n    \n%    In classical information theory a message is defined as a repetition of an experiment on random variables with a (joint) probability distribution $P_{X_1X_2\\ldots X_n}$.\n%    The range of the random variable $X$ is the size of the alphabet with which we are communicating.\n%    For example if the message is written in English the alphabet is composed of $26$ letters, or $62$ if we include numbers and distinguish upper and lower case letters.\n%    The probability of a word in a message can be given, for example, by the frequency of that word in the English language.\n%    Moreover, the probability of a word (or letter) to follow another can be dependent on the previous realizations of $P$.\n%    Probability theory provides rules and operations to manipulate and observe these probability distributions.\n%    The most important for us are the entropy $\\Ent (X)$, the correlation and mutual information $\\I(X;Y)$ of random variables and the marginal $P_X$ of joint probabilities $P_{XY}$.\n    \n\\section{Local operations and public communication (LOPC)}\n    By local operations and public communication we mean operations carried out on bit strings sampled from $P_{XYZ}(x,y,z)$; such operations can then be modeled as channels.\n    We can mix different distributions together or trace out the marginal.\n    Communication over an actual realization of a channel is noisy. 
\n    That noise is also a form of operations on the probability distribution.\n    Operations can also be carried out directly by the parties.\n    This is of more interest because we have control over those operations.\n    \n    Take for example the Diffie-Hellman method illustrated in section \\ref{Diffie}.\n    Steps $1$ and $4$ are \\emph{public communication} and steps $2$, $3$ and $5$ are \\emph{local operations}.\n    Local operations are conducted privately, meaning that everything that happens is only accessible to the party conducting the operation.\n    Other parties cannot know what a local operation involved.\n    Public communication is everything that is communicated in the clear by the parties, or that an eavesdropper can intercept.\n    The totality of what an enemy knows from a protocol --- apart from the functioning of the protocol itself --- comes from public communication and the partial trace.\n    Through public communication Alice and Bob can also send instructions on what to do in their local operations, like in BB84.\n    \n    \n\\section{Secret-key rate} \\label{seckeyrate}\n    The secret-key rate $\\keyrate{X}{Y}{Z}$ is roughly a quantification of the maximal number of correlated bits --- bits not known to Eve --- that Alice and Bob can extract from an arbitrarily large number of independent realizations of a distribution $P_{XYZ}$.\n    It was introduced by Maurer in \\cite{Maur93} to prove lower bounds on the achievable size of a key shared by Alice and Bob in secrecy.\n    It can be seen as a classical analog of the \\textit{distillable entanglement} in \\cite{BDS96}.\n    Formally, the secret-key rate is defined as follows.\n    \\begin{definition}\\cite{Maur93, RW03}\n    Let $P_{XYZ}$ be a joint probability distribution. The secret-key rate $\\keyrate{X}{Y}{Z}$ of $X$ and $Y$ with respect to $Z$ is the largest $R\\in \\mathbb{R}$ such that for all $\\epsilon > 0$ there exists a protocol, involving a sufficiently large number $N$ of realizations $X^N$ of $X$ and $Y^N$ of $Y$, that satisfies the following:\n    Alice and Bob compute, at the end of the protocol, random variables $S_A$ and $S_B$, respectively, with range $\\mathcal{S}$ such that there exists a random variable $S$ with the same range and\n    \\begin{equation*}\n    \t\\Ent(S) = \\log |\\mathcal{S}| \\geq RN\\; ,\n    \\end{equation*}\n    \\begin{equation*}\n    \tP[S_A=S_B=S]>1-\\epsilon\\; ,\n    \\end{equation*}\n    \\begin{equation*}\n    \t\\I(S;CZ^N) < \\epsilon\n    \\end{equation*}\n\tHere $C$ is the totality of the protocol's public communication; $\\I(S;CZ^N)$ is the mutual information between the secret key and what the eavesdropper Eve knows (ref. \\ref{mutInfo}).\n    \\end{definition}\n    \n    The secret-key rate is a useful measure of the amount of extractable secrecy --- hence, key bits --- from a protocol with LOPC.\n    It would be ideal to be able to express it as a function $\\keyrate{X}{Y}{Z} = S(P_{XYZ})$.\n    Instead, it can be bounded by two functions\\cite{Maur93}\n    \\begin{equation}\\label{eq:skrbound1}\n    \t\\keyrate{X}{Y}{Z} \\leq \\min[\\I(X;Y),\\, \\I(X;Y|Z)]\n    \\end{equation}\n    \\begin{equation}\\label{eq:skrbound2}\n    \t\\keyrate{X}{Y}{Z} \\geq \\max[\\I(Y;X) - \\I(Z;X),\\, \\I(X;Y) - \\I(Z;Y)]\n    \\end{equation}\n    These bounds are not tight. In fact, in Eq. 
\\ref{eq:skrbound2} the secret-key rate can be positive even when the right-hand side is negative.\n    \n    \n    \n", "meta": {"hexsha": "1574cf8dedcd35c6ae46dbb472faa3a3fd6e8872", "size": 7493, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writings/chapters/chapter3.tex", "max_stars_repo_name": "CrashingBrain/BSc_Project", "max_stars_repo_head_hexsha": "44b91601341ff3a59acbad7abbf28389aa99f89d", "max_stars_repo_licenses": ["FSFAP"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writings/chapters/chapter3.tex", "max_issues_repo_name": "CrashingBrain/BSc_Project", "max_issues_repo_head_hexsha": "44b91601341ff3a59acbad7abbf28389aa99f89d", "max_issues_repo_licenses": ["FSFAP"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writings/chapters/chapter3.tex", "max_forks_repo_name": "CrashingBrain/BSc_Project", "max_forks_repo_head_hexsha": "44b91601341ff3a59acbad7abbf28389aa99f89d", "max_forks_repo_licenses": ["FSFAP"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.8736842105, "max_line_length": 332, "alphanum_fraction": 0.7256105699, "num_tokens": 1900, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835452961427, "lm_q2_score": 0.7025300636233415, "lm_q1q2_score": 0.586952308233154}}
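For a finite joint distribution, both bounds are straightforward to evaluate numerically. The following is a minimal NumPy sketch (the helper names are ours, not from the text) that computes the two sides of Eq. \ref{eq:skrbound1} and Eq. \ref{eq:skrbound2} from a joint table $P_{XYZ}$.

\begin{verbatim}
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits, from a joint table pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

def skr_bounds(pxyz):
    """Maurer's lower and upper bounds on the secret-key rate S(X;Y||Z)."""
    i_xy = mutual_information(pxyz.sum(axis=2))
    i_xz = mutual_information(pxyz.sum(axis=1))
    i_yz = mutual_information(pxyz.sum(axis=0))
    pz = pxyz.sum(axis=(0, 1))
    # I(X;Y|Z) = sum over z of P(z) * I(X;Y | Z=z).
    i_xy_z = sum(pz[z] * mutual_information(pxyz[:, :, z] / pz[z])
                 for z in range(len(pz)) if pz[z] > 0)
    return max(i_xy - i_xz, i_xy - i_yz), min(i_xy, i_xy_z)

# X = Y is a fair bit and Z is an independent fair bit:
# both bounds evaluate to one secret bit, as expected.
pxyz = np.zeros((2, 2, 2))
pxyz[0, 0, :] = pxyz[1, 1, :] = 0.25
print(skr_bounds(pxyz))    # (1.0, 1.0)
\end{verbatim}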
{"text": "\\section{Forced Vibrations}\r\n\\noindent\r\nNow that we have some powerful methods for solving higher order equations, we can think about forced vibrations and understand ideas like beats and resonance.\\\\\r\n\r\n\\noindent\r\nThe equation we're trying to solve is\r\n\\begin{equation*}\r\n\tmy'' + by' + ky = F_{\\text{ext}}(t)\r\n\\end{equation*}\r\nWe'll assume that $F_{\\text{ext}}(t) = F_0\\cos{(\\gamma t)}$ so our equation becomes.\r\n\\begin{equation*}\r\n\tmy'' + by' + ky = F_0\\cos{\\gamma}\r\n\\end{equation*}\r\n\r\n\\noindent\r\nWe'll look at the undamped and damped cases separately.\r\n\r\n\\input{./higherOrder/forcedVibrs/undampedForcedVibrs.tex}\r\n\\input{./higherOrder/forcedVibrs/dampedForcedVibrs.tex}", "meta": {"hexsha": "38af580b043be6a4f60ac4768fae7d7f60ba87c9", "size": 672, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/higherOrder/forcedVibrs/forcedVibrs.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "diffEq/higherOrder/forcedVibrs/forcedVibrs.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffEq/higherOrder/forcedVibrs/forcedVibrs.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3684210526, "max_line_length": 161, "alphanum_fraction": 0.724702381, "num_tokens": 206, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835371034368, "lm_q2_score": 0.702530051167069, "lm_q1q2_score": 0.5869522920705212}}
{"text": "\\lab{The PageRank Algorithm}{The PageRank Algorithm}\n\\label{lab:PageRank}\n\\objective{Many real-world systems---the internet, transportation grids, social media, and so on---can be represented as graphs (networks).\nThe PageRank algorithm is one way of ranking the nodes in a graph by importance.\nThough it is a relatively simple algorithm, the idea gave birth to the Google search engine in 1998 and has shaped much of the information age since then.\nIn this lab, we implement the PageRank algorithm with a few different approaches, then use it to rank the nodes of a few different networks.\n% The results may be of interest to fans of NCAA March Madness brackets.\n}\n\n\\begin{comment} % Old introduction.\nWhen you enter keywords into Google's search engine, Google finds every page containing your keywords and lists the pages in order of their \\emph{rank}.\nThe rank of a page reflects many factors, including how often the page is visited and how connected it is to other pages.\n\nAs of 2013, the PageRank algorithm is one of over 200 algorithms that Google uses to determine the \\emph{rank}, or relative importance, of a webpage.\nNamed for Larry Page, cofounder of Google, this algorithm ranks pages based on how many other pages link to them.\n\nThe PageRank algorithm is also used in applications other than internet search engines.\nFor example, it has been used to rank graduate institutions and the impact factor of journals, and it has been used in some biological applications.\n\\end{comment}\n\n\\section*{The PageRank Model} % ===============================================\n\nThe internet is a collection of webpages, each of which may have a hyperlink to any other page.\nOne possible model for a set of $n$ webpages is a directed graph, where each node represents a page and node $j$ points to node $i$ if page $j$ links to page $i$.\nThe corresponding \\emph{adjacency matrix} $A$ satisfies $A_{ij} = 1$ if node $j$ links to node $i$ and $A_{ij} = 0$ otherwise.\n\n\\begin{comment} % OLD TIKZPICTURE.\n\\begin{tikzpicture}[node distance=1.75cm, thick ]\n\n\\node[draw=none](2)[]{2};\n\\node[draw=none](3)[right of=2]{3};\n\\node[draw=none](4)[right of=3]{4};\n\\node[draw=none](5)[right of=4]{5};\n\\node[draw=none](6)[right of=5]{6};\n\\node[draw=none](1)[above of=3]{1};\n\\node[draw=none, node distance=2.5cm](0)[right of=1]{0};\n\\node[draw=none](dummy)[above right of=0]{};\n\\node[draw=none, node distance=.5cm](7)[below\n  of=dummy]{7};\n\n\\foreach \\x/\\y in {3/2, 4/5, 5/6, 1/0, 3/0, 4/0, 5/0} \\draw[->,\n  >=stealth'](\\x)--(\\y);\n\\draw[->, >=stealth'](6)--(0);\n\\draw[->, >=stealth', shorten >= .1cm](7)edge[bend left=20](0);\n\\draw[->, >=stealth', shorten <= .1cm](0)edge[bend left=20](7);\n\\draw[->, >=stealth', shorten <= .1cm](3)edge[bend right=40](6.9,-.25);\n\\draw[->, >=stealth', shorten <= .1cm](4)edge[bend right](6);\n\\end{tikzpicture}\n\\end{comment}\n\n\\begin{figure}[H] % directed network and adjacency matrix.\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{tikzpicture}[normalcircle/.style={draw,circle,minimum size=.75cm,fill=none,thick,node distance=2.5cm}]\n% Nodes\n\\node[normalcircle] (A) {a};\n\\node[normalcircle] (B) [above of=A] {b};\n\\node[normalcircle] (C) [right of=B] {c};\n\\node[normalcircle] (D) [below of=C] {d};\n% Edges\n\\draw[bend right=60,thick,->,>=stealth'] (D) edge (C);\n\\foreach \\a/\\b in {A/B,A/C,A/D,C/B,C/D} \\draw[thick,->,>=stealth'] (\\a) edge 
(\\b);\n\\end{tikzpicture}\n\\end{subfigure}\n%\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{align*}\nA = \\begin{blockarray}{ccccccc}\n& & \\small\\text{\\textcolor{gray}{a}} & \\small\\text{\\textcolor{gray}{b}} & \\small\\text{\\textcolor{gray}{c}} & \\small\\text{\\textcolor{gray}{d}} & \\\\\n\\begin{block}{c[cccccc]}\n\\small\\text{\\textcolor{gray}{a}} & & 0 & 0 & 0 & 0 & \\topstrut\\\\\n\\small\\text{\\textcolor{gray}{b}} & & 1 & 0 & 1 & 0 & \\\\\n\\small\\text{\\textcolor{gray}{c}} & & 1 & 0 & 0 & 1 & \\\\\n\\small\\text{\\textcolor{gray}{d}} & & 1 & 0 & 1 & 0 & \\botstrut\\\\\n\\end{block}\\end{blockarray}\n\\end{align*}\n\\end{subfigure}\n\\caption{A directed unweighted graph with four nodes, together with its adjacency matrix.\nNote that the column for node b is all zeros, indicating that b is a \\emph{sink}---a node that doesn't point to any other node.}\n\\label{fig:pagerank-basic}\n\\end{figure}\n\nIf $n$ users start on random pages in the network and click on a link every 5 minutes, which page in the network will have the most views after an hour?\nWhich will have the fewest?\nThe goal of the PageRank algorithm is to solve this problem in general, therefore determining how ``important'' each webpage is.\n\n% A link from a more important page counts more than one from a less important page; if a node has many edges pointing to it, its outgoing edges gain importance.\n% For example, in Figure \\ref{fig:pagerank-basic} we would expect node 0 to have a very high rank because every other node links to it.\n% Consequently, we would expect node 7 to have a fairly high rank because node 0 links to it, even though node 0 is the only node to do so.\n\nBefore diving into the mathematics, there is a potential problem with the model.\nWhat happens if a webpage doesn't have any outgoing links, like node b in Figure \\ref{fig:pagerank-basic}?\nEventually, all of the users will end up on page b and be stuck there forever.\nTo obtain a more realistic model, modify each sink in the graph by adding edges from the sink to every node in the graph.\nThis means users on a page with no links can start over by selecting a random webpage.\n\n\\begin{figure}[H]\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{tikzpicture}[normalcircle/.style={draw,circle,minimum size=.75cm,fill=none,thick,node distance=2.5cm}]\n% Nodes\n\\node[normalcircle] (A) {a};\n\\node[normalcircle] (B) [above of=A, fill=blue!10] {b};\n\\node[normalcircle] (C) [right of=B] {c};\n\\node[normalcircle] (D) [below of=C] {d};\n% Edges\n\\draw[bend right=60,thick,->,>=stealth'] (D) edge (C);\n\\draw[bend left=60,thick,->,>=stealth',blue] (B) edge (C);\n\\draw[bend right=60,thick,->,>=stealth',blue] (B) edge (A);\n\\foreach \\a/\\b in {A/B,A/C,A/D,C/B,C/D} \\draw[thick,->,>=stealth'] (\\a) edge (\\b);\n\\draw[thick,->,>=stealth',blue] (B) edge (D);\n\\draw[thick,->,>=stealth',shorten >=1pt,blue] (B) to [out=110,in=170,loop,looseness=4.5] (B);\n\\end{tikzpicture}\n\\end{subfigure}\n%\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{align*}\n\\widetilde{A} = \\begin{blockarray}{ccccccc}\n& & \\small\\text{\\textcolor{gray}{a}} & \\small\\text{b} & \\small\\text{\\textcolor{gray}{c}} & \\small\\text{\\textcolor{gray}{d}} & \\\\\n\\begin{block}{c[cccccc]}\n\\small\\text{\\textcolor{gray}{a}} & & 0 & \\textcolor{blue}{1} & 0 & 0 & \\topstrut\\\\\n\\small\\text{\\textcolor{gray}{b}} & & 1 & \\textcolor{blue}{1} & 1 & 0 & 
\\\\\n\\small\\text{\\textcolor{gray}{c}} & & 1 & \\textcolor{blue}{1} & 0 & 1 & \\\\\n\\small\\text{\\textcolor{gray}{d}} & & 1 & \\textcolor{blue}{1} & 1 & 0 & \\botstrut\\\\\n\\end{block}\\end{blockarray}\n\\end{align*}\n\\end{subfigure}\n\\caption{Here the graph in Figure \\ref{fig:pagerank-basic} has been modified to guarantee that node b is no longer a sink (the added links are blue).\nWe denote the modified adjacency matrix by $\\widetilde{A}$.}\n\\label{fig:pagerank-nosinks}\n\\end{figure}\n\nNow let $p_k(t)$ be the likelihood that a particular internet user is surfing webpage $k$ at time $t$.\nSuppose at time $t+1$, the user clicks on a link to page $i$.\nThen $p_i(t+1)$ can be computed by counting the number of links pointing to page $i$, weighted by the total number of outgoing links for each node.\n\nAs an example, consider the graph in Figure \\ref{fig:pagerank-nosinks}.\nTo get to page a at time $t + 1$, the user had to be on page b at time $t$.\nSince there are three links from page b to other pages, assuming links are chosen with equal likelihood,\n\\begin{align*}\np_\\text{a}(t+1) = \\frac{1}{3}p_\\text{b}(t).\n\\end{align*}\nSimilarly, to get to page b at time $t+1$, the user had to have been on page a or c at time $t$.\nSince a has $3$ outgoing edges and c has $2$ outgoing edges,\n\\begin{align*}\np_\\text{b}(t+1)\n= \\frac{1}{3}p_\\text{a}(t)\n+ \\frac{1}{2}p_\\text{c}(t).\n\\end{align*}\nThe previous equations can be written in a way that hints at a more general linear form:\n\\begin{align*}\np_\\text{a}(t+1)\n    &= 0p_\\text{a}(t)\n    + \\frac{1}{3}p_\\text{b}(t)\n    + 0p_\\text{c}(t)\n    + 0p_\\text{d}(t),\n\\\\\np_\\text{b}(t+1)\n    &= \\frac{1}{3}p_\\text{a}(t)\n    + 0p_\\text{b}(t)\n    + \\frac{1}{2}p_\\text{c}(t)\n    + 0p_\\text{d}(t).\n\\end{align*}\nThe coefficients of the terms on the right hand side are precisely the entries of the $i$th row of the modified adjacency matrix $\\widetilde{A}$, divided by the $j$th column sum.\nIn general, $p_i(t+1)$ satisfies\n\\begin{equation}\\label{eq:pagerank-single-initial}\np_i(t+1) = \\sum_{j=1}^{n} \\widetilde{A}_{ij} \\frac{p_j(t)}{\\sum_{k=1}^n \\widetilde{A}_{kj}}.\n\\end{equation}\nNote that the column sum $\\sum_{k=1}^n \\widetilde{A}_{kj}$ in the denominator can never be zero since, after the fix in Figure \\ref{fig:pagerank-nosinks}, none of the nodes in the graph are sinks.\n\n\\subsection*{Accounting for Boredom} % ----------------------------------------\n\nThe model in \\eqref{eq:pagerank-single-initial} assumes that the user can only click on links from their current page.\nIt is more realistic to assume that the user sometimes gets bored and randomly picks a new starting page.\nLet $0\\le\\epsilon\\le 1$, called the \\emph{damping factor}, be the probability that a user stays interested at step $t$.\nThen the probability that the user gets bored at any time (and then chooses a new random page) is $1-\\epsilon$, and \\eqref{eq:pagerank-single-initial} becomes\n\\begin{equation}\n\\label{eq:pagerank-single}\np_i(t+1) =\n\\underbrace{\\epsilon\\sum_{j=1}^{n}\\left(\\widetilde{A}_{ij}\\frac{p_j(t)}{\\sum_{k=1}^n \\widetilde{A}_{kj}}\\right)}_{\\substack{\\text{User stayed interested and}\\\\\\text{clicked a link on the current page}}}\n+ \\underbrace{\\vphantom{\\sum_{j=1}^{n}\\left(\\frac{p_j(t)}{\\sum_{k=1}^n \\widetilde{A}_{kj}}\\right)} \\frac{1-\\epsilon}{n}.}_{\\substack{\\text{User got bored and}\\\\\\text{chose a random page}}}\n\\end{equation}\nNote that \\eqref{eq:pagerank-single} can be 
rewritten as the matrix equation\n\\begin{align}\n\\mathbf{p}(t+1) = \\epsilon \\widehat{A}\\mathbf{p}(t) + \\frac{1-\\epsilon}{n}\\mathbf{1},\n\\label{eq:pagerank-matrix}\n\\end{align}\nwhere $\\mathbf{p}(t)=[p_1(t), p_2(t), \\ldots, p_n(t)]\\trp$,\n$\\mathbf{1}$ is a vector of $n$ ones, and $\\widehat{A}$ is the $n\\times n$ matrix with entries\n\\begin{align}\n\\widehat{A}_{ij} = \\frac{\\widetilde{A}_{ij}}{\\sum_{k=1}^{n}\\widetilde{A}_{kj}}.\n\\label{eq:pagerank-Ahat-entries}\n\\end{align}\nIn other words, $\\widehat{A}$ is $\\widetilde{A}$ normalized so that the columns each sum to $1$.\nFor the graph in Figure \\ref{fig:pagerank-nosinks}, the matrix $\\widehat{A}$ is given by\n\\begin{align}\n\\widehat{A} = \\begin{blockarray}{ccccccc}\n& & \\small\\text{\\textcolor{gray}{a}} & \\small\\text{\\textcolor{gray}{b}} & \\small\\text{\\textcolor{gray}{c}} & \\small\\text{\\textcolor{gray}{d}} & \\\\\n\\begin{block}{c[cccccc]}\n\\small\\text{\\textcolor{gray}{a}} & & 0 & \\textcolor{blue}{1/4} & 0 & 0 & \\topstrut\\\\\n\\small\\text{\\textcolor{gray}{b}} & & 1/3 & \\textcolor{blue}{1/4} & 1/2 & 0 & \\\\\n\\small\\text{\\textcolor{gray}{c}} & & 1/3 & \\textcolor{blue}{1/4} & 0 & 1 & \\\\\n\\small\\text{\\textcolor{gray}{d}} & & 1/3 & \\textcolor{blue}{1/4} & 1/2 & 0 & \\botstrut\\\\\n\\end{block}\\end{blockarray}.\n\\label{eq:pagerank-example-widehat}\n\\end{align}\n\n\\begin{problem} % Problem 1: DiGraph.__init__().\nWrite a class for representing directed graphs via their adjacency matrices.\nThe constructor should accept an $n\\times n$ adjacency matrix $A$ and a list of node labels (such as $[$a, b, c, d$]$) defaulting to \\li{None}.\nModify $A$ as in Figure \\ref{fig:pagerank-nosinks} so that there are no sinks in the corresponding graph, then calculate $\\widehat{A}$ from \\eqref{eq:pagerank-Ahat-entries}.\nSave $\\widehat{A}$ and the list of labels as attributes.\nUse $[0,1,\\ldots,n-1]$ as the labels if none are provided.\nFinally, raise a \\li{ValueError} if the number of labels is not equal to the number of nodes in the graph.\n\\\\(Hint: use array broadcasting to compute $\\widehat{A}$ efficiently.)\n\nFor the graph in Figure \\ref{fig:pagerank-basic}, check that your $\\widehat{A}$ matches \\eqref{eq:pagerank-example-widehat}.\n\\label{prob:pagerank-init}\n\\end{problem}\n\n\\subsection*{Computing the Rankings} % ----------------------------------------\n\nIn the model \\eqref{eq:pagerank-single}, define the \\emph{rank} of node $i$ as the limit\n\\[\np_i = \\lim_{t\\to \\infty} p_i(t).\n\\]\n\\begin{comment} % TODO: a note on Markov Chains, but not here.\nFor those familiar with Markov Chains, Equation \\ref{eq:pagerank-matrix} defines a Markov chain.\nPage ranks are simply the steady state of this Markov chain.\n\\end{comment}\nThere are several ways to solve for $\\mathbf{p} = \\lim_{t\\rightarrow\\infty} \\mathbf{p}(t)$.\n\n\\subsubsection*{Linear System} % - - - - - - - - - - - - - - - - - - - - - - -\n\nIf $\\mathbf{p}$ exists, then taking the limit as $t\\rightarrow\\infty$ on both sides of \\eqref{eq:pagerank-matrix} gives the following.\n\\begin{align}\n\\nonumber\n\\lim_{t\\rightarrow\\infty}\\mathbf{p}(t+1) &= \\lim_{t\\rightarrow\\infty}\\left[\\epsilon \\widehat{A}\\mathbf{p}(t) + \\frac{1-\\epsilon}{n}\\mathbf{1}\\right] \\\\\n\\nonumber\n\\mathbf{p} &= \\epsilon \\widehat{A}\\mathbf{p} + \\frac{1-\\epsilon}{n}\\mathbf{1} \\\\\n\\label{eq:pagerank-algebraic}\n\\left(I - \\epsilon \\widehat{A}\\right)\\mathbf{p} &= 
\\frac{1-\\epsilon}{n}\\mathbf{1}\n\\end{align}\nThis linear system is easy to solve as long as the number of nodes in the graph isn't too large.\n\n\\subsubsection*{Eigenvalue Problem} % - - - - - - - - - - - - - - - - - - - - -\n\nLet $E$ be an $n \\times n$ matrix of ones.\nThen $E\\mathbf{p}(t) = \\mathbf{1}$ since $\\sum_{i=1}^{n}p_i(t) = 1$.\nSubstituting into \\eqref{eq:pagerank-matrix},\n\\begin{align}\n\\mathbf{p}(t+1)\n= \\epsilon \\widehat{A}\\mathbf{p}(t) + \\frac{1-\\epsilon}{n}E\\mathbf{p}(t)\n= \\left(\\epsilon \\widehat{A} + \\frac{1-\\epsilon}{n}E\\right)\\mathbf{p}(t)\n= B\\mathbf{p}(t),\n\\label{eq:pagerank-eigenproblem}\n\\end{align}\nwhere $B = \\epsilon \\widehat{A} + \\frac{1-\\epsilon}{n}E$.\nNow taking the limit as $t\\rightarrow\\infty$ of both sides of \\eqref{eq:pagerank-eigenproblem},\n\\begin{align*}\nB\\mathbf{p} = \\mathbf{p}.\n\\end{align*}\nThat is, $\\mathbf{p}$ is an eigenvector of $B$ corresponding to the eigenvalue $\\lambda = 1$.\nIn fact, since the columns of $B$ sum to $1$, and because the entries of $B$ are strictly positive (because the entries of $E$ are all positive), Perron's theorem guarantees that $\\lambda = 1$ is the unique eigenvalue of $B$ of largest magnitude, and that the corresponding eigenvector $\\mathbf{p}$ is unique up to scaling.\nFurthermore, $\\mathbf{p}$ can be scaled so that each of its entries is positive, meaning $\\mathbf{p}/\\|\\mathbf{p}\\|_1$ is the desired PageRank vector.\n\n\\begin{info} % Side note: B defines a Markov chain.\nA \\emph{Markov chain} is a weighted directed graph where each node represents a \\emph{state} of a discrete system.\nThe weight of the edge from node $j$ to node $i$ is the probability of transitioning from state $j$ to state $i$, and the adjacency matrix of a Markov chain is called a \\emph{transition matrix}.\n\nSince $B$ from \\eqref{eq:pagerank-eigenproblem} contains nonnegative entries and its columns all sum to $1$, it can be viewed as the transition matrix of a Markov chain.\nIn that context, the limit vector $\\mathbf{p}$ is called the \\emph{steady state} of the Markov chain.\n\\end{info}\n\n\\subsubsection*{Iterative Method} % - - - - - - - - - - - - - - - - - - - - - -\n\nSolving \\eqref{eq:pagerank-algebraic} or \\eqref{eq:pagerank-eigenproblem} is feasible for small networks, but neither is an efficient strategy for very large systems.\nThe remaining option is to use an iterative technique.\nStarting with an initial guess $\\mathbf{p}(0)$, use  \\eqref{eq:pagerank-matrix} to compute $\\mathbf{p}(1),\\mathbf{p}(2),\\ldots$ until $\\norm{\\mathbf{p}(t)-\\mathbf{p}(t-1)}$ is sufficiently small.\nFrom \\eqref{eq:pagerank-eigenproblem}, we can see that this is just the power method\\footnote{See the Least Squares and Computing Eigenvalues lab for details on the power method.} for finding the eigenvector corresponding to the dominant eigenvalue of $B$.\n\n\\begin{problem} % Write methods for computing the PageRank vector.\nAdd the following methods to your class from Problem \\ref{prob:pagerank-init}.\nEach should accept a damping factor $\\epsilon$ (defaulting to 0.85), compute the PageRank vector $\\mathbf{p}$, and return a dictionary mapping label $i$ to its PageRank value $p_i$.\n\n\\begin{enumerate}\n\\item \\li{linsolve()}: solve for $\\mathbf{p}$ in \\eqref{eq:pagerank-algebraic}.\n\\item \\li{eigensolve()}: solve for $\\mathbf{p}$ using \\eqref{eq:pagerank-eigenproblem}.\nNormalize the resulting eigenvector so its entries sum to $1$.\n\\item \\li{itersolve()}: in addition to 
$\\epsilon$, accept an integer \\li{maxiter} and a float \\li{tol}.\nIterate on \\eqref{eq:pagerank-matrix} until $\\|\\mathbf{p}(t) - \\mathbf{p}(t-1)\\|_1 < $ \\li{tol} or $t > $ \\li{maxiter}.\nUse $\\mathbf{p}(0)=[\\frac{1}{n},\\frac{1}{n},\\ldots,\\frac{1}{n}]\\trp$ as the initial vector (any positive vector that sums to $1$ will do, but this assumes equal starting probabilities).\n\\end{enumerate}\nCheck that each method yields the same results.\nFor the graph in Figure \\ref{fig:pagerank-basic} with $\\epsilon=0.85$, you should get the following dictionary mapping labels to PageRank values.\n\\begin{lstlisting}\n<<{'a': 0.095758635, 'b': 0.274158285, 'c': 0.355924792, 'd': 0.274158285}>>\n\\end{lstlisting}\n\\label{prob:pagerank-computation}\n\\end{problem}\n\n\\begin{problem}\nWrite a function that accepts a dictionary mapping labels to PageRank values, like the outputs in Problem \\ref{prob:pagerank-computation}.\nReturn a list of labels sorted \\textbf{from highest to lowest} rank.\n\\\\(Hint: if \\li{d} is a dictionary, use \\li{list(d.keys())} and \\li{list(d.values())} to get the list of keys and values in the dictionary, respectively.)\n\nFor the graph in Figure \\ref{fig:pagerank-basic} with $\\epsilon=0.85$, this is the list $[$c, b, d, a$]$ (or $[$c, d, b, a$]$, since b and d have the same PageRank value).\n\\label{prob:pagerank-ranking}\n\\end{problem}\n\n\\begin{comment}\nLooking at Figure \\ref{fig:pagerank-nosinks}, it is easy to see why a has the lowest PageRank value: the only other node that points to it is b.\nIt also makes sense that c has the highest ranking, since c and d both have edges from the other three nodes pointing to them, but d only has one edge (pointing to c), while c points to both b and d.\nIn other words, at each step d distributes all of its importance to c, while c splits its importance between b and d.\n\nOf course, constructing rankings is much more difficult to do by hand when there are more than just a few nodes in the graph.\n\\end{comment}\n\n\\begin{problem} % Rank webpages, subset of Stanford dataset.\nThe file \\texttt{web\\_stanford.txt} contains information on Stanford University webpages\\footnote{\\url{http://www.stanford.edu/}} and the hyperlinks between them, gathered in 2002.\\footnote{See \\url{http://snap.stanford.edu/data/web-Stanford.html} for the original (larger) dataset.}\nEach line of the file is formatted as \\li{a/b/c/d/e/f...}, meaning the webpage with ID \\li{a} has hyperlinks to webpages with IDs \\li{b}, \\li{c}, \\li{d}, and so on.\n\nWrite a function that accepts a damping factor $\\epsilon$ defaulting to 0.85.\nRead the data and get a list of the $n$ unique page IDs in the file (the labels).\nConstruct the $n\\times n$ adjacency matrix of the graph where node $j$ points to node $i$ if webpage $j$ has a hyperlink to webpage $i$.\nUse your class from Problem \\ref{prob:pagerank-init} and its \\li{itersolve()} method from Problem \\ref{prob:pagerank-computation} to compute the PageRank values of the webpages, then rank them with your function from Problem \\ref{prob:pagerank-ranking}.\nReturn the ranked list of webpage IDs.\n\\\\(Hint: after constructing the list of webpage IDs, make a dictionary that maps a webpage ID to its index in the list.\nFor Figure \\ref{fig:pagerank-basic}, this would be \\li{\\{'a': 0, 'b': 1, 'c': 2, 'd': 3\\}}.\nThe values are the row/column indices in the adjacency matrix for each label.)\n\nWith $\\epsilon=0.85$, the top three ranked webpage IDs are $98595$, $32791$, and 
$28392$.\n\\end{problem}\n\n\\section*{PageRank on Weighted Graphs} % ======================================\n\nNothing in the formulation of the PageRank model \\eqref{eq:pagerank-matrix} requires that the edges of the graph are unweighted.\nIf $A_{ij}$ is the weight of the edge from node $j$ to node $i$ (weight $0$ meaning there is no edge from $j$ to $i$), then the columns of $\\widehat{A}$ still sum to $1$.\nThus $B = \\epsilon \\widehat{A} + \\frac{1-\\epsilon}{n}E$ still has strictly positive entries and columns summing to $1$, so Perron's theorem again guarantees that a unique PageRank vector $\\mathbf{p}$ exists.\n\nAdding weights to the edges can improve the fidelity of the model and produce a slightly more realistic PageRank ordering.\nOn a given webpage, for example, if hyperlinks to page $a$ are clicked on more frequently than hyperlinks to page $b$, the edge to node $a$ should be given more weight than the edge to node $b$.\n% The actual weight doesn't matter as much as how it relates to the weights of the other edges proceeding from the same node.\n\n\\begin{figure}[H] % directed network and adjacency matrix.\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{tikzpicture}[normalcircle/.style={draw,circle,minimum size=.75cm,fill=none,thick,node distance=2.5cm}]\n% Nodes\n\\node[normalcircle] (A) {a};\n\\node[normalcircle] (B) [above of=A,fill=blue!10] {b};\n\\node[normalcircle] (C) [right of=B] {c};\n\\node[normalcircle] (D) [below of=C] {d};\n% Weighted Edges\n\\foreach \\i/\\j/\\w in {A/B/2,A/D/1,C/D/1}\n\\draw[->,>=stealth',thick]  (\\i) edge node [font=\\normalsize,pos=.5,auto] {\\w} (\\j);\n\\draw[->,>=stealth',thick] (A) edge node [font=\\normalsize,pos=.25,left] {1} (C);\n\\draw[->,>=stealth',thick] (C) edge node [font=\\normalsize,pos=.5,above] {1} (B);\n\\draw[bend right=60,->,>=stealth',thick] (D) edge node [font=\\normalsize,pos=.5,right] {2} (C);\n\\draw[bend right=60,->,>=stealth',thick,blue] (B) edge node [font=\\normalsize,pos=.5,left] {2} (A);\n\\draw[->,>=stealth',thick,blue] (B) edge node [font=\\normalsize,pos=.25,left] {2} (D);\n\\draw[bend left=60,->,>=stealth',thick,blue] (B) edge node [font=\\normalsize,pos=.5,above] {1} (C);\n\\draw[thick,->,>=stealth',shorten >=1pt,blue] (B) to [out=110,in=170,loop,looseness=4.5] (B) node[xshift=-.3cm,yshift=.6cm] {1};\n\\end{tikzpicture}\n\\end{subfigure}\n%\n\\begin{subfigure}{.45\\textwidth}\n\\centering\n\\begin{align*}\n&A = \\begin{blockarray}{ccccccc}\n& & \\small\\text{\\textcolor{gray}{a}} & \\small\\text{\\textcolor{gray}{b}} & \\small\\text{\\textcolor{gray}{c}} & \\small\\text{\\textcolor{gray}{d}} & \\\\\n\\begin{block}{c[cccccc]}\n\\small\\text{\\textcolor{gray}{a}} & & 0 & 0 & 0 & 0 & \\topstrut\\\\\n\\small\\text{\\textcolor{gray}{b}} & & 2 & 0 & 1 & 0 & \\\\\n\\small\\text{\\textcolor{gray}{c}} & & 1 & 0 & 0 & 2 & \\\\\n\\small\\text{\\textcolor{gray}{d}} & & 1 & 0 & 2 & 0 & \\botstrut\\\\\n\\end{block}\\end{blockarray}\n\\\\\n&\\widehat{A} = \\begin{blockarray}{ccccccc}\n& & \\small\\text{\\textcolor{gray}{a}} & \\small\\text{b} & \\small\\text{\\textcolor{gray}{c}} & \\small\\text{\\textcolor{gray}{d}} & \\\\\n\\begin{block}{c[cccccc]}\n\\small\\text{\\textcolor{gray}{a}} & & 0 & \\textcolor{blue}{1/4} & 0 & 0 & \\topstrut\\\\\n\\small\\text{\\textcolor{gray}{b}} & & 1/2 & \\textcolor{blue}{1/4} & 1/3 & 0 & \\\\\n\\small\\text{\\textcolor{gray}{c}} & & 1/4 & \\textcolor{blue}{1/4} & 0 & 1 & \\\\\n\\small\\text{\\textcolor{gray}{d}} & & 1/4 & \\textcolor{blue}{1/4} & 2/3 & 0 & 
\\botstrut\\\\\n\\end{block}\\end{blockarray}\n\\end{align*}\n\\end{subfigure}\n\\caption{A directed weighted graph with four nodes, together with its adjacency matrix and the corresponding PageRank transition matrix.\nEdges that are added to fix sinks have weight $1$, so the computations of $\\widetilde{A}$ and $\\widehat{A}$ are exactly the same as in Figure \\ref{fig:pagerank-nosinks} and \\eqref{eq:pagerank-Ahat-entries}, respectively.\n}\n\\label{fig:pagerank-weighted}\n\\end{figure}\n\n\\begin{problem} % NCAA March Madness ranking!\nThe files \\texttt{ncaa2010.csv}, \\texttt{ncaa2011.csv}, $\\ldots$, \\texttt{ncaa2017.csv} each contain data for men's college basketball for a given school year.\\footnote{\\texttt{ncaa2010.csv} has data for the 2010--2011 season, \\texttt{ncaa2011.csv} for the 2011--2012 season, and so on.}\nEach line (except the very first line, which is a header) represents a different basketball game, formatted \\li{winning_team,losing_team}.%\\footnote{There are no ties in basketball.}\n\nWrite a function that accepts a filename and a damping factor $\\epsilon$ defaulting to 0.85.\nRead the specified file (skipping the first line) and get a list of the $n$ unique teams in the file.\nConstruct the $n\\times n$ adjacency matrix of the graph where node $j$ points to node $i$ with weight $w$ if team $j$ was defeated by team $i$ in $w$ games.\nThat is, \\textbf{edges point from losers to winners}.\nFor instance, the graph in Figure \\ref{fig:pagerank-weighted} would indicate that team c lost to team b once and to team d twice, team b was undefeated, and team a never won a game.\nUse your class from Problem \\ref{prob:pagerank-init} and its \\li{itersolve()} method from Problem \\ref{prob:pagerank-computation} to compute the PageRank values of the teams, then rank them with your function from Problem \\ref{prob:pagerank-ranking}.\nReturn the ranked list of team names.\n\nUsing \\texttt{ncaa2010.csv} with $\\epsilon=0.85$, the top three ranked teams (of the $607$ total teams) should be UConn, Kentucky, and Louisville, in that order.\nThat season, UConn won the championship, Kentucky was a semifinalist, and Louisville lost in the first tournament round (a surprising upset).\n\\label{prob:pagerank-ncaa}\n\\end{problem}\n\n\\begin{info}\nIn Problem \\ref{prob:pagerank-ncaa}, the damping factor $\\epsilon$ acts as an ``upset'' factor: a larger $\\epsilon$ puts more emphasis on win history; a smaller $\\epsilon$ allows more randomness in the system, giving underdog teams a higher probability of defeating a team with a better record.\n\nIt is also worth noting that the sink-fixing procedure is still reasonable for this model because it gives every other team \\textbf{equal} likelihood of beating an undefeated team.\nThat is, the additional edges don't provide an extra advantage to any one team.\n% \\footnote{The likelihood of a team being completely undefeated all season is extremely low; it's only happened $7$ times in history, and that was before the tournament was expanded to $64$ teams (cf. 
\\url{https://www.thoughtco.com/basketballs-undefeated-teams-326279}).}\n\\end{info}\n\n\\subsection*{PageRank with NetworkX} % ----------------------------------------\n\n\\href{https://networkx.github.io/documentation/stable/}{NetworkX}, usually imported as \\li{nx}, is a third-party package for working with networks.\nIt represents graphs internally with dictionaries, thus taking full advantage of the sparsity in a graph.\nThe base class for directed graphs is called \\li{nx.DiGraph}.\nNodes and edges are usually added or removed incrementally with the following methods.\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{r|l}\n    Method & Description\\\\\n    \\hline\n    \\li{add_node()} & Add a single node.\\\\\n    \\li{add_nodes_from()} & Add a list of nodes.\\\\\n    \\li{add_edge()} & Add an edge between two nodes, adding the nodes if needed.\\\\\n    \\li{add_edges_from()} & Add multiple edges (and corresponding nodes as needed).\\\\\n    \\hline\n    \\li{remove_edge()} & Remove a single edge (no nodes are removed). \\\\\n    \\li{remove_edges_from()} & Remove multiple edges (no nodes are removed).\\\\\n    \\li{remove_node()} & Remove a single node and all adjacent edges.\\\\\n    \\li{remove_nodes_from()} & Remove multiple nodes and all adjacent edges. \\\\\n\\end{tabular}\n\\caption{Methods of the \\li{nx.DiGraph} class for inserting or removing nodes and edges.}\n\\end{table}\n\nFor example, the weighted graph in Figure \\ref{fig:pagerank-weighted} can be constructed with the following code.\n\n\\begin{lstlisting}\n>>> import networkx as nx\n\n# Initialize an empty directed graph.\n>>> DG = nx.DiGraph()\n\n# Add the directed edges (nodes are added automatically).\n>>> DG.add_edge('a', 'b', weight=2)     # a --> b (adds nodes a and b)\n>>> DG.add_edge('a', 'c', weight=1)     # a --> c (adds node c)\n>>> DG.add_edge('a', 'd', weight=1)     # a --> d (adds node d)\n>>> DG.add_edge('c', 'b', weight=1)     # c --> b\n>>> DG.add_edge('c', 'd', weight=2)     # c --> d\n>>> DG.add_edge('d', 'c', weight=2)     # d --> c\n\\end{lstlisting}\n\nOnce constructed, an \\li{nx.DiGraph} object can be queried for information about the nodes and edges.\nIt also supports dictionary-like indexing to access node and edge attributes, such as the weight of an edge.\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{r|l}\n    Method & Description\\\\\n    \\hline\n    \\li{has_node(A)} & Return \\li{True} if \\li{A} is a node in the graph.\\\\\n    \\li{has_edge(A,B)} & Return \\li{True} if there is an edge from \\li{A} to \\li{B}.\\\\\n    \\li{edges()} & Iterate through the edges. \\\\\n    \\li{nodes()} & Iterate through the nodes. 
\\\\\n    \\li{number_of_nodes()} & Return the number of nodes.\\\\\n    \\li{number_of_edges()} & Return the number of edges.\\\\\n\\end{tabular}\n\\caption{Methods of the \\li{nx.DiGraph} class for accessing nodes and edges.}\n\\end{table}\n\n\\begin{lstlisting}\n# Check the nodes and edges.\n>>> DG.has_node('a')\n<<True>>\n>>> DG.has_edge('b', 'a')\n<<False>>\n>>> list(DG.nodes())\n<<['a', 'b', 'c', 'd']>>\n>>> list(DG.edges())\n<<[('a', 'b'), ('a', 'c'), ('a', 'd'), ('c', 'b'), ('c', 'd'), ('d', 'c')]>>\n\n# Change the weight of the edge (a, b) to 3.\n>>> DG['a']['b'][\"weight\"] += 1\n>>> DG['a']['b'][\"weight\"]\n3\n\\end{lstlisting}\n\nNetworkX efficiently implements several graph algorithms.\nThe function \\li{nx.pagerank()} computes the PageRank values of each node iteratively with sparse matrix operations.\nThis function returns a dictionary mapping nodes to PageRank values, like the methods in Problem \\ref{prob:pagerank-computation}.\n\n\\begin{lstlisting}\n# Calculate the PageRank values of the graph.\n>>> nx.pagerank(DG, alpha=0.85)     # alpha is the damping factor (epsilon).\n<<{'a': 0.08767781186947843,\n 'b': 0.23613138394239835,\n 'c': 0.3661321209576019,\n 'd': 0.31005868323052127}>>\n\\end{lstlisting}\n\n\\begin{warn} % PageRanks sucks on directed networks.\nNetworkX also has a class, \\li{nx.Graph}, for \\emph{undirected graphs}.\nThe edges in an undirected graph are bidirectional, so the corresponding adjacency matrix is symmetric.\n\nThe PageRank algorithm is not very useful for undirected graphs.\nIn fact, the PageRank value for a node is close to its \\emph{degree}---the number of edges it connects to---divided by the total number of edges.\nIn Problem \\ref{prob:pagerank-ncaa}, that would mean the team that simply played the most games would be ranked the highest.\nAlways use \\li{nx.DiGraph}, not \\li{nx.Graph}, for PageRank and other algorithms that rely on directed edges.\n\\end{warn}\n\n\\begin{problem} % Rank actors in the 250 most popular movies.\nThe file \\texttt{top250movies.txt} contains data from the 250 top-rated movies according to IMDb.\\footnote{\\url{https://www.imdb.com/search/title?groups=top_250&sort=user_rating}}\nEach line in the file lists a movie title and its cast as\n\\li{title/actor1/actor2/...},\nwith the actors listed mostly in billing order (stars first), though some casts are listed alphabetically or in order of appearance.\n\nCreate an \\li{nx.DiGraph} object with a node for each actor in the file.\nThe weight from actor $a$ to actor $b$ should be the number of times that actors $a$ and $b$ were in a movie together in which actor $b$ was listed first.\nThat is, \\textbf{edges point to higher-billed actors} (see Figure \\ref{fig:pagerank-actor-network}).\nCompute the PageRank values of the actors and use your function from Problem \\ref{prob:pagerank-ranking} to rank them.\nReturn the list of ranked actors.\n\\\\(Hint: Consider using \\li{itertools.combinations()} while constructing the graph.\nAlso, use \\li{encoding=\"utf-8\"} as an argument to \\li{open()} to read the file, since several actors and actresses have nonstandard characters in their names such as {\\o} and {\\ae}.)\n\n% Since this dataset only includes popular movies, and since edges point toward the stars of the movie, the PageRank vector can be seen as a way to determine who the most visible actors in Hollywood are.\nWith $\\epsilon = 0.7$, the top three actors should be Leonardo DiCaprio, Robert De Niro, and Tom Hanks, in that 
order.\n\\label{prob:pagerank-actors}\n\\end{problem}\n\n\\begin{figure}[H]\n\\centering\n\\begin{tikzpicture}[thick]\n% Nodes\n\\foreach [count=\\i] \\a/\\t/\\d in {0/Hugh\\\\Jackman/5, 1/Anne\\\\Hathaway/5, 2/Scarlett\\\\Johansson/5, 3/Christian\\\\Bale/5, 4/Michael\\\\Caine/5}\n\\draw (\\a*360/5+18: \\d cm) node[circle,draw=black,font=\\sffamily\\small,align=center,minimum size=1.75cm] (v\\i)  {\\t};\n% Edges\n\\foreach \\i/\\j/\\w in {3/1/1,4/1/1,3/5/1,5/2/2,5/4/4,2/4/1,3/4/1,5/1/1}\n\\draw[->,>=stealth',thick]  (v\\i) edge node [font=\\normalsize,pos=.5,auto] {\\w} (v\\j);\n\\end{tikzpicture}\n\\caption{A portion of the graph from Problem \\ref{prob:pagerank-actors}.\nMichael Caine was in four movies with Christian Bale where Christian Bale was listed first in the cast.}\n\\label{fig:pagerank-actor-network}\n\\end{figure}\n\n\\newpage\n\n\\section*{Additional Material} % ==============================================\n\n\\subsection*{Sparsity} % ------------------------------------------------------\n\nOn very large networks, the PageRank algorithm becomes computationally difficult because of the size of the adjacency matrix $A$.\nFortunately, most adjacency matrices are highly sparse, meaning the number of edges is much lower than the number of entries in the matrix.\nConsider adding functionality to your class from Problem \\ref{prob:pagerank-init} so that it stores $\\widehat{A}$ as a sparse matrix and performs sparse linear algebra operations in the methods from Problem \\ref{prob:pagerank-computation} (use \\li{scipy.sparse.linalg}).\n\n\n\\subsection*{PageRank as a Predictive Model} % --------------------------------\n\nThe data files in Problem \\ref{prob:pagerank-ncaa} include tournament games for their respective seasons, so the resulting rankings naturally align with the outcome of the championship.\nHowever, it is also useful to use PageRank as a predictive model: given data for all regular season games, can the outcomes of the tournament games be predicted?\nOver $40$ million Americans fill out $60$--$100$ million March Madness brackets each year and bet over $\\$9$ billion on the tournament, so being able to predict the outcomes of the games is a big deal.\nSee \\url{http://games.espn.com/tournament-challenge-bracket} for more details.\n\nGiven regular season data, PageRank can be used to predict tournament results as in Problem \\ref{prob:pagerank-ncaa}.\nThere are some pitfalls though; for example, how should $\\epsilon$ be chosen?\nUsing $\\epsilon = .5$ with \\texttt{ncaa2010.csv} minus tournament data (all but the last 63 games in the file) puts UConn---the actual winner that year---in seventh place, while $\\epsilon = .9$ puts UConn in fourth.\nBoth values for $\\epsilon$ also rank BYU as number one, but BYU lost in the Sweet Sixteen that year.\nIn practice, Google uses $.85$ as the damping factor, but there is no rigorous reasoning behind that particular choice.\n\n\\subsection*{Other Centrality Measures} % -------------------------------------\n\nIn network theory, the \\emph{centrality} of a node refers to its importance.\nSince there are lots of ways to measure importance, there are several different centrality measures.\n\n\\begin{itemize}\n\\item \\emph{Degree centrality} uses the \\emph{degree} of a node, meaning the number of edges adjacent to it (independent of edge direction), for ranking.\nAn academic paper that has been cited many times has a high degree and is considered more important than a paper that has only been cited once.\n\\item 
\\emph{Eigenvector centrality} is an extension of degree centrality.\nInstead of each neighbor contributing equally to the centrality, nodes that are important are given a higher weight.\nThus a node connected to lots of unimportant nodes can have the same measure as a node connected to a few important nodes.\nEigenvector centrality is measured by the eigenvector associated with the largest eigenvalue of the adjacency matrix of the network.\n% One example of this is a social network where one person may be connected to a few famous people.\n\\item \\emph{Katz centrality} is a modification of eigenvector centrality for directed networks.\nOutgoing nodes contribute centrality to their neighbors, so an important node makes its neighbors more important.\n\\item PageRank adapts Katz centrality by averaging out the centrality that a node can pass to its neighbors.\nFor example, if Google---a website that should have high centrality---points to a million websites, then it shouldn't pass on that high centrality to each of its million neighbors, so each neighbor gets one millionth of Google's centrality.\n\\end{itemize}\n\nFor more information on these centralities, as well as other ways to measure node importance, see \\cite{newman2010networks}.\n", "meta": {"hexsha": "2437a3e31b1a49743831d09b4c7d2e78943641d1", "size": 37240, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acme-material/Labs/Volume1/PageRank/PageRank.tex", "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_issues_repo_path": "acme-material/Labs/Volume1/PageRank/PageRank.tex", "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_forks_repo_path": "acme-material/Labs/Volume1/PageRank/PageRank.tex", "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.4521452145, "max_line_length": 323, "alphanum_fraction": 0.7103383459, "num_tokens": 11295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8354835289107307, "lm_q1q2_score": 0.5869522915184043}}
{"text": "\\documentclass[../specific-algorithms.tex]{subfiles}\n\\begin{document}\nIn this chapter, we mainly discuss about the array based questions. We first categorize these problems into different type, and then each type can usually be solved and optimized with nearly the best efficiency. \n\nHere array means one dimension list. For array problems, math will play an important role here. The rules are as follows:\n\\begin{itemize}\n    \\item Subarray: using dynamic programming based algorithm to make brute force $O(n^3)$ to $O(n)$. Two pointers for the increasing subarray. Prefix sum, or kadane's algorithm plus sometimes with the hashmap, or two pointers (three pointers) for the maximum subarray. \n    \\item Subsequence: using dynamic programming based algorithm to make brute force $O(2^n)$ to $O(n^2)$, which corresponds to the seqence type of dynamic programming. \n    \\item Duplicates: 217, 26, 27, 219, 287, 442;\n    \\item Intersections of Two Arrays: \n\\end{itemize}\n\nBefore we get into solving each type of problems, we first introduce the algorithms we will needed in this Chapter, including two pointers (three pointers or sliding window), prefix sum, kadane's algorithm. Kadane's algorithm can be explained with sequence type of dynamic programming. \n\n    % Easy problems: Duplicates:  Intersection: 349. Intersection of Two Arrays; Consecutive: 485. Max Consecutive Ones\n    % Maximum/Minimum subarray: 718, 53. Maximum Subarray, 325. Maximum Size Subarray Sum Equals k. 209. Minimum Size Subarray Sum Solutions: divide and conquer, special sum and hashtable, two pointers (sliding window) for minimum\n    % Sum of K numbers of elements: Target, return either the index or the elements(might need to avoid repetition). (2/3/4 sums)\n    % Partition a list into K equal part: DP\n    \nAfter this chapter, we need to learn the step to solve these problems:\n\\begin{enumerate}\n    \\item Analyze the problem and categorize it.  To know the naive solution's time complexity can help us identify it.\n    \\item If we can not find what type it is, let us see if we can \\textit{convert}. If not, we can try to identify a simple version of this problem, and then upgrade the simple solution to the more complex one. \n    \\item Solve the problem with the algorithms we taught in this chapter.\n    \\item Try to see if there is any more solutions. \n    \n\n    % \\textit{Note: If the problem is complex, trying to see the simple version, and then upgrade the simple version to a complex one. e.g. (487. Max Consecutive Ones II, 485. Max Consecutive Ones)}\n    \\item Check the special case. (Usually very important for this type of problems)\n\\end{enumerate}\n% Including two pointers both from the start, or two pointers one is from the beginning and the other is from the end. Also, the sliding window, and the flexible sliding windows, also find the cycle algorithm. \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Algorithms\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Algorithms}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Two Pointers\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Pointers and Sliding Window Algorithm}\nT1: If you see in the problem that you can do comparison and it is always one type of satisfactory element is in ahead of the other, this could be resolved by two pointers (slower and faster). 
\nTwo pointers or three pointers are the most common. \\textit{Two pointers or three pointers are a superset of the sliding window algorithm, and of prefix sum too.} They can lower the complexity by one power of $n$. \n\n\\subsubsection{Two Pointers Sliding Window for Array}\n674. Longest Continuous Increasing Subsequence\n\\begin{lstlisting}\nGiven an unsorted array of integers, find the length of longest continuous increasing subsequence (subarray).\n\nExample 1:\nInput: [1,3,5,4,7]\nOutput: 3\nExplanation: The longest continuous increasing subsequence is [1,3,5], its length is 3. \nEven though [1,3,5,7] is also an increasing subsequence, it's not a continuous one where 5 and 7 are separated by 4.\n\nExample 2:\nInput: [2,2,2,2,2]\nOutput: 1\nExplanation: The longest continuous increasing subsequence is [2], its length is 1.\n\\textit{Note: Length of the array will not exceed 10,000.}\n\\end{lstlisting}\nSolution: The description of this problem should say ``subarray'' instead of ``subsequence''. The brute force solution is, as for any subarray problem, $O(n^3)$: two embedded for loops enumerate the subarrays, and another $O(n)$ pass checks whether each one is strictly increasing. Using two pointers, we get $O(n)$ time complexity. We keep two pointers: $i$ marks the start of the current increasing subarray and $j$ slides forward. We restrict the subarray from $i$ to $j$ to be increasing; whenever this is violated, we reset the starting point of the subarray to the violating position.  \n\\begin{lstlisting}[language = Python]\nclass Solution:\n    def findLengthOfLCIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        if len(nums)==1:\n            return 1\n        i,j = 0,0\n        max_length = 0\n        while j < len(nums):\n            j += 1 #slide the window\n            max_length = max(max_length, j-i)\n            # when the increasing condition is violated, reset the window\n            if j<len(nums) and nums[j-1]>=nums[j]:\n                i = j\n                         \n        return max_length\n\\end{lstlisting}\n\n\\subsubsection{Three Pointers Sliding  Window for Array}\nSometimes manipulating two pointers is not enough to get the final solution. \n\n930. Binary Subarrays With Sum\n    \\begin{lstlisting}\n    In an array A of 0s and 1s, how many non-empty subarrays have sum S?\nExample 1:\n\nInput: A = [1,0,1,0,1], S = 2\nOutput: 4\nExplanation: \nThe 4 subarrays are bolded below:\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\nNote:\n\n    A.length <= 30000\n    0 <= S <= A.length\n    A[i] is either 0 or 1.\n\\end{lstlisting}\nIn this problem, if we try to use plain two pointers, we find that we miss cases: in the example $[1, 0, 1, 0, 1]$, when $j = 5$ and $i = 1$, the sum is $2$, but the algorithm would miss the case $i = 2$, which has the same sum value.\n \nTo solve this, we keep another index $i_{hi}$: in addition to the moving rule of $i_{lo}$, it also moves while the sum condition is satisfied and the element at it is $0$. This is actually a three-pointer algorithm, a mutant of the sliding window algorithm. 
\n\\begin{lstlisting}[language=Python]\nclass Solution:\n    def numSubarraysWithSum(self, A, S):\n        i_lo, i_hi, j = 0, 0, 0 #i_lo <= j\n        sum_window = 0\n        ans = 0\n        while j < len(A):\n\n            sum_window += A[j]\n                                     \n            while i_lo < j and sum_window > S:\n                sum_window -= A[i_lo]\n                i_lo += 1\n            # up till here, it is the standard sliding window\n            \n            # now set the extra pointer at the same location as i_lo\n            i_hi = i_lo\n            while i_hi < j and sum_window == S and not A[i_hi]:\n                i_hi += 1\n            if sum_window == S:\n                ans += i_hi - i_lo + 1\n                            \n            j += 1 #increase the pointer last so that we do not need to check j<len again\n\n        return ans\n\\end{lstlisting}\n\n\\subsubsection{Sliding Window for String: Anagram or Exact Matching}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Prefix Sum\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Prefix Sum and Kadane's Algorithm}\n\\label{part4_prefix_sum}\nIn this section, we put prefix sum and Kadane's algorithm together because they are highly correlated: they represent different perspectives on a similar problem. The best example is the maximum subarray problem. \n\\subsubsection{Introduction to Prefix Sum}\nIn computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers $x_0, x_1, x_2, \\ldots$ is a second sequence of numbers $y_0, y_1, y_2, \\ldots$, the sums of prefixes (running totals) of the input sequence:\n\\begin{lstlisting}\n    y0 = x0\n    y1 = x0 + x1\n    y2 = x0 + x1 + x2\n    ...\n\\end{lstlisting}\n\nFor instance, the prefix sums of the natural numbers are the triangular numbers:\n\\begin{lstlisting}\n    input numbers \t 1 \t 2 \t 3 \t 4 \t 5 \t 6 \t...\n    prefix sums \t 1 \t 3 \t 6 \t10 \t15 \t21 \t... \n\\end{lstlisting}\n\nPrefix sums are trivial to compute in sequential models of computation, by using the formula $y_i = y_{i-1} + x_i$ to compute each output value in sequence order. The sum of a subarray needed in this section can then be computed with the formula $S_{(i,j)} = y_j - y_i$. Despite their ease of computation, prefix sums are a useful primitive in certain algorithms such as counting sort,[1] as introduced in Section 2.3 of [2], and they form the basis of the scan higher-order function in functional programming languages.\n\n\\subsubsection{Prefix Sum Solution for Maximum Subarray Problem}\nFor the maximum subarray problem, the answer is $\\max(y_j - y_i)$ for $j > i$, $j \\in [0, n-1]$, which is equivalent to $\\max_j(y_j - \\min_{i<j}(y_i))$, $j \\in [0, n-1]$. We can therefore solve the maximum subarray problem using prefix sum in linear $O(n)$ time, whereas brute force is $O(n^3)$ and divide and conquer is $O(n \\lg n)$. For example, take the array $[-2, -3, 4, -1, -2, 1, 5, -3]$. 
We have the following results:\n\\begin{table}[!ht]\n\\centering\n\\noindent\\captionof{table}{ Process of using prefix sum for the maximum subarray}\n\\label{tab: prefix_sum}\n \\noindent \\begin{tabular}{c rrrrrrrr}\n  \\hline\\hline\nArray  & $-2$& $-3$ & $4$& $-1$& $-2$& $1$& $5$& $-3$\\\\\nprefix sum   & $-2$& $-5$ & $-1$& $-2$& $-4$& $-3$& $2$& $-1$ \\\\\nUpdated prefix sum &$-2$& $-3$& $4$& $3$& $1$& $2$ &$7$&$ 4$ \\\\\ncurrent max sum & $-2$& $-2$ & $4$& $4$& $4$& $4$& $7$& $7$\\\\ \nmin prefix sum & $-2$& $-5$ & $-5$& $-5$& $-5$& $-5$& $-5$& $-5$\\\\ \\hline\n\\end{tabular}\n\\end{table}\nThe code:\n\\begin{lstlisting}[language = Python]\nimport sys # or we can use import math and math.inf\n\n# Function to compute the maximum \n# subarray sum in linear time. \ndef maximumSumSubarray(nums): \n    if not nums:\n        return 0\n    prefixSum = 0\n    globalA = -sys.maxsize\n    minSub = 0 # minimum prefix sum seen so far\n    for i in range(len(nums)):\n        prefixSum += nums[i]\n        globalA = max(globalA, prefixSum-minSub)\n        minSub = min(minSub, prefixSum)\n    return globalA\n\n# Driver program \n\n# Test case 1 \narr1 = [ -2, -3, 4, -1, -2, 1, 5, -3 ] \nprint(maximumSumSubarray(arr1)) \n\n# Test case 2 \narr2 = [ 4, -8, 9, -4, 1, -8, -1, 6 ] \nprint(maximumSumSubarray(arr2)) \n\\end{lstlisting}\nAs we can see, we do not need extra space to save the prefix sums, because at each step we only use the prefix sum at the current index. \n\n\\subsubsection{Kadane Algorithm Solution to Maximum Subarray Problem}\n\\textbf{Mutant of Prefix Sum:} Another, easier perspective is dynamic programming, suggested by the keyword ``maximum'' in the question, a pattern identified in the dynamic programming chapter. \n\\begin{lstlisting}\ndp: the maximum subarray sum up to index i that includes the current element nums[i]. We need n+1 space because we use i-1.\nInit: all 0\nstate transfer function: dp[i] = max(dp[i-1]+nums[i], nums[i]); for each element, we either continue the previous subarray or start a new subarray. \nResult: max(dp)\n\\end{lstlisting}\nSince $dp[i]$ only depends on $dp[i-1]$, we will later optimize the space to track just the current value. For the last example, the newly updated prefix sums are $-2, -3, 4, 3, 1, 2, 7, 4$. The comparison can be seen in Table~\\ref{tab: prefix_sum}. \n\\begin{lstlisting}[language=Python]\ndef maximumSumSubarray(arr): \n    if not arr:\n        return 0\n    dp = [0]*(len(arr)+1)\n    for i in range(len(arr)):\n        dp[i+1] = max(dp[i]+arr[i], arr[i])\n    return max(dp[1:]) # skip dp[0], which is a placeholder, not a subarray sum\n\\end{lstlisting}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Kadane Algorithm\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nBecause we can still optimize the space of the above solution, we use one variable to replace the dp array, and we track the maximum dp inside the for loop instead of obtaining the maximum value at the end. 
Also, if we rename dp to max\\_ending\\_here and max(dp) to max\\_so\\_far, the code is as follows:\n\\begin{lstlisting}[language=Python]\nimport sys\n\ndef maximumSumSubarray(arr): \n    if not arr:\n        return 0\n    max_ending_here = 0\n    max_so_far = -sys.maxsize\n    for i in range(len(arr)):\n        max_ending_here = max(max_ending_here+arr[i], arr[i])\n        max_so_far = max(max_so_far, max_ending_here)\n    return max_so_far\n\\end{lstlisting}\nThis space-optimized dynamic programming solution to the maximum subarray problem is exactly Kadane's algorithm. Kadane's algorithm begins with a simple inductive question: if we know the maximum subarray sum ending at position $i$, what is the maximum subarray sum ending at position $i+1$? The answer turns out to be relatively straightforward: either the maximum subarray sum ending at position $i+1$ includes the maximum subarray sum ending at position $i$ as a prefix, or it doesn't. Thus, we can compute the maximum subarray sum ending at position $i$ for all positions $i$ by iterating once over the array; as we go, we simply keep track of the maximum sum we've ever seen. The problem can be solved with the following code, expressed here in Python:\n\\begin{lstlisting}[language = Python]\ndef max_subarray(A):\n    max_ending_here = max_so_far = A[0]\n    for x in A[1:]:\n        max_ending_here = max(x, max_ending_here + x)\n        max_so_far = max(max_so_far, max_ending_here)\n    return max_so_far\n\\end{lstlisting}\n\nThe algorithm can also be easily modified to keep track of the starting and ending indices of the maximum subarray (when max\\_so\\_far changes), as well as to allow zero-length subarrays (with implicit sum 0) if all elements are negative.
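\nFor example, the following sketch of ours (names are illustrative) records the boundaries; it returns the best sum together with the inclusive start and end indices:\n\\begin{lstlisting}[language = Python]\ndef max_subarray_with_indices(A):\n    # assumes a non-empty array A\n    best_sum = cur_sum = A[0]\n    best_start = best_end = cur_start = 0\n    for i in range(1, len(A)):\n        if A[i] > cur_sum + A[i]:  # better to restart the subarray at i\n            cur_sum = A[i]\n            cur_start = i\n        else:\n            cur_sum += A[i]\n        if cur_sum > best_sum:  # max_so_far changes: record the window\n            best_sum = cur_sum\n            best_start, best_end = cur_start, i\n    return best_sum, best_start, best_end\n\\end{lstlisting}\nOn $[-2, -3, 4, -1, -2, 1, 5, -3]$ this returns $(7, 2, 6)$, i.e. the subarray $[4, -1, -2, 1, 5]$.\n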
\nNow, let us see how we do the maximum subarray with the product operation instead of the sum. \n\n152. Maximum Product Subarray\n\\begin{lstlisting}\nGiven an integer array nums, find the contiguous subarray within an array (containing at least one number) which has the largest product.\n\nExample 1:\n\nInput: [2,3,-2,4]\nOutput: 6\nExplanation: [2,3] has the largest product 6.\n\nExample 2:\n\nInput: [-2,0,-1]\nOutput: 0\nExplanation: The result cannot be 2, because [-2,-1] is not a subarray.\n\\end{lstlisting}\n\nAnswer: For the product, the difference compared with the sum is that max\\_ending\\_here is not necessarily extended from the previous value with the current element: if the element is negative, the previous maximum might even become the new minimum. So we need to track another variable, min\\_ending\\_here. Let us see the Python code, which is a straightforward implementation of the product-modified Kadane's algorithm. \n\\begin{lstlisting}\nclass Solution(object):\n    def maxProduct(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        n = len(nums)\n        max_so_far = nums[0]\n        min_local, max_local = nums[0], nums[0]\n        for i in range(1, n):\n            a = min_local*nums[i]\n            b = max_local*nums[i]\n            max_local = max(nums[i], a, b)\n            min_local = min(nums[i], a, b)\n            max_so_far = max(max_so_far, max_local)\n        return max_so_far\n\\end{lstlisting}\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% rabbit and turtle to find circle or repeat number\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Floyd\u2019s Cycle-Finding Algorithm}\n\nWithout Floyd's algorithm, we can detect a cycle with a visited set:\n\\begin{lstlisting}[language = Python]\ndef detectCycle(self, A):\n        visited = set()\n        point = A\n        while point:\n            if point in visited: # compare nodes, not values, since values may repeat\n                return point\n            visited.add(point)\n            point = point.next\n        return None\n\\end{lstlisting}\n\nWith Floyd's algorithm, we traverse the linked list using two pointers, moving one pointer by one step and the other by two. If the pointers meet at some node, there is a loop; if they never meet, the linked list doesn't have a loop. Once a cycle is detected, we find its starting point by resetting one pointer to the head and advancing both one step at a time.\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width = 0.98\\columnwidth]{fig/floyd.png}\n    \\caption{Example of Floyd\u2019s cycle finding}\n    \\label{fig:floyd}\n\\end{figure}\n\n\\begin{lstlisting}[language = Python]\ndef detectCycle(self, A):\n        # find the \"intersection\" of the fast and slow pointers\n        p_f = p_s = A\n        while (p_f and p_s and p_f.next):\n            p_f = p_f.next.next\n            p_s = p_s.next\n            if p_f == p_s:\n                break\n        # find the \"entrance\" to the cycle\n        ptr1 = A\n        ptr2 = p_s\n        while ptr1 and ptr2:\n            if ptr1 != ptr2:\n                ptr1 = ptr1.next\n                ptr2 = ptr2.next\n            else:\n                return ptr1\n        return None\n\\end{lstlisting}\n\n\\subsection{Longest Common Substring}\nLongest common substring is actually a sub-algorithm of the double sequence type of dynamic programming. \n\nSuppose we have strings $A$ and $B$ with $m$ and $n$ characters respectively. If we use brute force, then $A$ has $O(m^2)$ substrings, and locating each substring in $B$ costs $O(n)$, which makes the total complexity $O(n m^2)$. \\textit{Note: it is the same if they are two arrays.} But if we use the LCS method, the time and space complexity will be $O(nm)$. 
LCS is a DP method that we can run bottom-up with memoization.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width = 0.98\\columnwidth]{fig/lcs.png}\n    \\caption{Longest common substring}\n    \\label{fig:lcs}\n\\end{figure}\n\\begin{lstlisting}[language = Python]\n# Pseudo-code of LCS (longest common substring)\n# init an (M+1) x (N+1) matrix filled with 0\na = [[0 for _ in range(N+1)] for _ in range(M+1)]\nresult = 0\nfor i in [0,M):\n    for j in [0,N):\n        if A[i]==B[j]:\n            a[i+1][j+1] = a[i][j]+1\n            result = max(result, a[i+1][j+1])\n        else:\n            a[i+1][j+1] = 0\n\\end{lstlisting}\nAlthough the algorithm is named for ``substrings'', it applies equally to the maximum common ``subarray''.\n\n718. Maximum Length of Repeated Subarray (Medium)\n\\begin{lstlisting}\nGiven two integer arrays A and B, return the maximum length of a subarray that appears in both arrays.\n\nExample 1:\n\nInput:\nA: [1,2,3,2,1]\nB: [3,2,1,4,7]\nOutput: 3\nExplanation: \nThe repeated subarray with maximum length is [3, 2, 1].\n\nNote:\n\n    1 <= len(A), len(B) <= 1000\n    0 <= A[i], B[i] < 100\n\\end{lstlisting}\nUsing LCS, the time complexity is $O(nm)$.\n\\begin{lstlisting}[language = Python]\nclass Solution:\n    def findLength(self, A, B):\n        \"\"\"\n        :type A: List[int]\n        :type B: List[int]\n        :rtype: int\n        \"\"\"\n        if not A or not B:\n            return 0\n        result = 0\n        a =[[0 for _ in range(len(B)+1)] for _ in range(len(A)+1)]\n        for i in range(len(A)):\n            for j in range(len(B)):\n                if A[i]==B[j]:\n                    a[i+1][j+1] = a[i][j]+1\n                    result = max(result, a[i+1][j+1])\n                else:\n                    a[i+1][j+1] = 0\n        return result\n\\end{lstlisting}\n% \\subsection{Examples-Intersections}\n% \\begin{enumerate}\n%     \\item \n\n% \\end{enumerate}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Examples\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Different Types of Questions}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subarray (Easy) }\n\\textit{Note: For subarrays, the most important feature is contiguity. Here, we definitely will not use sorting.}\n\n\n\nGiven a single array, we are normally asked to return either the maximum/minimum value, the maximum/minimum length, or the number of subarrays whose sum/product \\textit{satisfies a certain condition}. The condition decides the difficulty of these problems.\n\nThe questions are classified into two categories: \\textit{absolute-conditioned subarray}, where $sum/product = K$, and \\textit{vague-conditioned subarray}, where the condition involves inequalities. \n\n\n% \\begin{enumerate}\n%     \\item Maximum/minimum subarray with Sum or Product or a pattern; we use \\textbf{math and prefix\\_sum} or sometimes together with hashmap method to tackle. Also, sliding window can be used.\n%     \\item Minimum Subarray with Sum or Product or a pattern; \\textbf{sliding window} can be used to get the minimum length of subarray. \n%     \\item Find subarray that is increasing or decreasing ; \\textbf{Two pointers or sliding window} can be used. 
\n% \\end{enumerate}\nWith the proposed algorithms, the time complexity of subarray problems can be decreased from the brute force $O(n^3)$ to $O(n)$. The brute force is universal: two embedded for loops mark the start and end of the subarray to enumerate all the possible subarrays, and another $O(n)$ is spent to compute the result needed (the sum or product, or a check of a pattern like increasing or decreasing). \n\nWe usually use prefix sum, Kadane's algorithm, or sometimes a hashmap; if we still have trouble solving the problem, a panacea is the sliding window algorithm, with either two or three pointers. \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Maximum Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Absolute-conditioned Subarray}\nFor this type of subarray problem, you are asked to return one of: \n\\begin{enumerate}\n\\item the maximum sum or product; \\textit{solved using prefix sum or Kadane's algorithm}\n\\item the maximum length of a subarray with sum or product equal to $K$; \\textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its first index}\n\\item the number of subarrays with sum or product equal to $K$; \\textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its count}\n\\end{enumerate}\n\n\\paragraph{Maximum/Minimum sum or product}\n\n53. Maximum Subarray\n\n\\begin{lstlisting}\nFind the contiguous subarray within an array (containing at least one number) which has the largest sum.\n\nFor example, given the array [-2,1,-3,4,-1,2,1,-5,4],\n the contiguous subarray [4,-1,2,1] has the largest sum = 6.\n\\end{lstlisting}\nSolution: Brute force is to use two for loops, the first for the start and the second for the end of the subarray, and take the maximum value; this is $O(n^3)$ (two embedded for loops plus $O(n)$ for computing each sum). To optimize, we can use divide and conquer at $O(n \\lg n)$; that method was shown in its own chapter. A more efficient algorithm uses prefix sum. Please check Section~\\ref{part4_prefix_sum} for the answer. \n\nNow, what is the sliding window solution? The key step in a sliding window is deciding when to move the first pointer of the window (shrinking the window). The window must include the current element $j$. For the maximum subarray, to increase the sum of the window, we need to abandon all previous elements whenever they have a negative sum.\n\\begin{lstlisting}[language = Python]\nfrom sys import maxsize\nclass Solution:\n    def maxSubArray(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        i, j = 0, 0 #i<=j\n        maxValue = -maxsize\n        window_sum = 0\n        while j < len(nums):\n            window_sum += nums[j]\n            j += 1\n            maxValue = max(maxValue, window_sum)\n            while i<j and window_sum < 0:\n                window_sum -= nums[i]\n                i += 1                           \n        return maxValue\n\\end{lstlisting}\n\n\\paragraph{Maximum/Minimum length of subarray with sum or product S}\nFor this type of problem, we need to track the length as well. \n\n% \\begin{enumerate}\n325. Maximum Size Subarray Sum Equals k\n\nGiven an array nums and a target value k, find the maximum length of a subarray that sums to k. 
If there isn\u2019t one, return 0 instead.\n\nNote:\n The sum of the entire nums array is guaranteed to fit within the 32-bit signed integer range.\n\n\\begin{lstlisting}\nExample 1:\nGiven nums = [1, -1, 5, -2, 3], k = 3,\n return 4. (because the subarray [1, -1, 5, -2] sums to 3 and is the longest)\n\nExample 2:\nGiven nums = [-2, -1, 2, 1], k = 1,\n return 2. (because the subarray [-1, 2] sums to 1 and is the longest)\n\nFollow Up:\n Can you do it in O(n) time?\n \\end{lstlisting}\nAnswer: the brute force solution of this problem is the same as for the maximum subarray. The similarity is that we track the prefix sum $S_{(i,j)} = y_j - y_i$; here we only need one particular value of $S_{(i,j)}$, namely $k$, so $y_i = y_j - k$: the current prefix sum minus $k$. We use a hashmap to save each prefix sum together with the first index at which it appears, i.e. we save $(y_i, first\\_index)$, so that $max\\_len = \\max(idx - dict[y_j - k])$.\n% Solution: using prefix\\_sum and hashmap, to just need to reformulate dict[sum\\_i]. For this question, we need to get the maximum size of subarray, so dict[i] =min(idx), the earliest that the value appear. which means every time we just set the $dict[i]=current_index$, only if the key does'nt exist. dict[0] = -1. Here we have sum\\_i, to check if there is a j in front of i that makes sum(j,i)=k, sum(j,i)=sum\\_i-A Value in Dict = k, so the value need to be sum\\_i-k, so that we need to check if it is in the dictionary.\n\\begin{lstlisting}[language = Python]\ndef maxSubArrayLen(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        prefix_sum = 0\n        dict = {0:-1} #this means that at index -1, the sum is 0\n        max_len = 0\n        for idx,n in enumerate(nums):\n            prefix_sum += n\n            # save each prefix sum together with the first index at which it appears \n            if prefix_sum not in dict: \n                dict[prefix_sum] = idx\n            # track the maximum length so far\n            if prefix_sum-k in dict:\n                max_len = max(max_len, idx-dict[prefix_sum-k])\n        return max_len\n\\end{lstlisting}\n\nAnother example that asks for a pattern but can be converted to, and is equivalent to, the last problem:\n\n525. Contiguous Array\n\nGiven a binary array, find the maximum length of a contiguous subarray with equal number of 0 and 1.\n\nExample 1:\n\\begin{lstlisting}\nInput: [0,1]\nOutput: 2\nExplanation: [0, 1] is the longest contiguous subarray with equal number of 0 and 1.\n\nExample 2:\n\nInput: [0,1,0]\nOutput: 2\nExplanation: [0, 1] (or [1, 0]) is a longest contiguous subarray with equal number of 0 and 1.\n\n\\textit{Note: The length of the given binary array will not exceed 50,000.}\n\\end{lstlisting}\n\nSolution: the problem is equivalent to finding the maximum length of a subarray with sum equal to 0 after mapping $0 \\to -1$ and keeping $1 \\to 1$; 
here our $k = 0$.\n\\begin{lstlisting}[language = Python]\ndef findMaxLength(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        nums=[nums[i] if nums[i]==1 else -1 for i in range(len(nums))]\n        \n        max_len=0\n        cur_sum=0\n        mapp={0:-1}\n        \n        for idx,v in enumerate(nums):\n            cur_sum+=v\n            if cur_sum in mapp:\n                max_len=max(max_len,idx-mapp[cur_sum])\n            else:\n                mapp[cur_sum]=idx    \n        \n        return max_len\n\\end{lstlisting}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Maximum Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph{The number of subarrays with sum or product S}\n\n560. Subarray Sum Equals K\n\\begin{lstlisting}\nGiven an array of integers and an integer k, you need to find the total number of continuous subarrays whose sum equals to k.\n\nExample 1:\nInput:nums = [1,1,1], k = 2\nOutput: 2\n\\end{lstlisting}\n\nAnswer: The naive solution is to enumerate all possible subarrays, which is $O(n^2)$, and compute and check each sum, which is $O(n)$; the total time complexity is $O(n^3)$. We can decrease it to $O(n^2)$ if we compute the sums in a different way: we first compute the prefix sum at each position, using the equation $sum(i,j) = sum(0,j) - sum(0,i)$. However, the OJ still gives us a TLE error. \n\\begin{lstlisting}[language = Python]\ndef subarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        ''' return the number of subarrays that equal to k'''\n        count = 0\n        sums = [0]*(len(nums)+1) # sum till current index\n        for idx, v in enumerate(nums):\n            sums[idx+1] = sums[idx]+v\n        for i in range(len(nums)):\n            for j in range(i, len(nums)):\n                value = sums[j+1]-sums[i]\n                count = count+1 if value==k else count\n        return count\n\\end{lstlisting}\n\nSolution 3: use prefix sum and a hashmap; we just need to reformulate what the dictionary stores. For this question, we need the total number of subarrays, so dict[prefix\\_sum] = count, which means every time we see a prefix sum we set dict[prefix\\_sum] += 1, with dict[0] = 1.\n\\begin{lstlisting}[language = Python]\nimport collections\nclass Solution(object):\n    def subarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        ''' return the number of subarrays that equal to k'''\n        dict = collections.defaultdict(int) #the value is the number of times the sum occurs\n        dict[0] = 1\n        prefix_sum, count = 0, 0\n        for v in nums:\n            prefix_sum += v\n            count += dict[prefix_sum-k] # number of earlier prefix sums equal to prefix_sum-k, default 0\n            dict[prefix_sum] += 1 # update the count of this prefix sum; the default value is 0\n        return count\n\\end{lstlisting}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Vague-conditioned subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Vague-conditioned subarray}\nIn this section, we are asked the same types of questions as in the last section. The only difference is the condition. 
For example, the following question asks about subarrays with $sum \\geq s$. \n\nBecause of the vagueness of the condition, a hashmap plus prefix sum solution no longer gives us $O(n)$ linear time. The best we can do with prefix sum, if the array is all positive numbers, is $O(n \\lg n)$ by combining it with binary search. However, a carefully designed sliding window can still achieve linear time. \n\n\\paragraph{All Positive Array}\n\nIf it is an all positive array, the problem can still be easily solved with a sliding window. For example: \n\n209. Minimum Size Subarray Sum\n\\begin{lstlisting}\nGiven an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum >= s. If there isn't one, return 0 instead.\nExample: \n\nInput: s = 7, nums = [2,3,1,2,4,3]\nOutput: 2\nExplanation: the subarray [4,3] has the minimal length under the problem constraint.\n\nFollow up:\nIf you have figured out the O(n) solution, try coding another solution of which the time complexity is O(n log n). \n\\end{lstlisting}\nFor this problem, if we use prefix sum, we need to save the last index of each prefix sum, in contrast with the maximum length problem. However, here the condition is $sum \\geq s$: with a hashmap, we need to search through all keys with $key \\leq prefix\\_sum - s$. The time complexity rises to $O(n^2)$, and we receive a TLE error.\n\\begin{lstlisting}[language = Python]\n    # assumes: import collections, sys\n    def minSubArrayLen(self, s, nums):\n        \"\"\"\n        :type s: int\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        dict = collections.defaultdict(int)\n        dict[0] = -1 # prefix sum 0 with index -1\n        prefixSum = 0\n        minLen = sys.maxsize\n        for idx, n in enumerate(nums):\n            prefixSum += n\n            for key, value in dict.items():\n                if key <= prefixSum - s: \n                    minLen = min(minLen, idx-value)\n            dict[prefixSum] = idx #save the last index\n        return minLen if 1<=minLen<=len(nums) else 0\n\\end{lstlisting}\nThis is because prefix sum with brute force enumeration of the subarrays gives $O(n^2)$. 
In this problem, because all the numbers are positive, the prefix sum array is increasing, which means we can use binary search to find the largest value that is smaller than or equal to $prefix\\_sum - s$.\n\\begin{lstlisting}[language = Python]\n    def minSubArrayLen(self, s, nums):\n        \"\"\"\n        :type s: int\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        def bSearch(nums, i, j, target):\n            while i < j:\n                mid = (i+j) // 2\n                if nums[mid] == target:\n                    return mid\n                elif nums[mid] < target:\n                    i = mid + 1\n                else:\n                    j = mid - 1\n            return i   \n        \n        if not nums:\n            return 0\n        rec = [0] * len(nums)\n        rec[0] = nums[0]\n        if rec[0] >= s:\n            return 1\n        minlen = len(nums)+1\n        for i in range(1, len(nums)):\n            rec[i] = rec[i-1] + nums[i]\n            if rec[i] >= s:\n                index = bSearch(rec, 0, i, rec[i] - s)\n                if rec[index] > rec[i] - s:\n                    index -= 1\n                minlen = min(minlen, i - index)\n        return minlen if minlen != len(nums)+1 else 0\n\\end{lstlisting}\nMeanwhile, using the sliding window, we are still capable of getting $O(n)$ complexity. \n\\begin{lstlisting}[language = Python]\n    def minSubArrayLen(self, s, nums):\n            \"\"\"\n            :type s: int\n            :type nums: List[int]\n            :rtype: int\n            \"\"\"\n            i,j = 0,0\n            preSum =0\n            min_length = len(nums)+1\n            while j < len(nums):          \n                preSum += nums[j]\n                j+=1\n                #shrink the sliding window size\n                while i < j and preSum >= s:\n                    min_length = min(min_length, j-i)\n                    preSum -= nums[i] #shrink\n                    i += 1\n            return min_length if min_length< len(nums)+1 else 0\n\\end{lstlisting}\n\n713. Subarray Product Less Than K\n\n\\begin{lstlisting}\nYou are given an array of positive integers nums.\nCount and print the number of (contiguous) subarrays where the product of all the elements in the subarray is less than k.\n\nExample 1:\nInput: nums = [10, 5, 2, 6], k = 100\nOutput: 8\nExplanation: The 8 subarrays that have product less than 100 are: [10], [5], [2], [6], [10, 5], [5, 2], [2, 6], [5, 2, 6].\n\nNote that [10, 5, 2] is not included as the product of 100 is not strictly less than k.\nNote:\n0 < nums.length <= 50000.\n0 < nums[i] < 1000.\n0 <= k < 10^6.\n\\end{lstlisting}\n\nAnswer: Because we need the product to be less than $k$, it is difficult to use prefix sums. 
If we use a sliding window, the trace looks like this:\n\\begin{lstlisting}\ni=0, j=0, product=10  < 100, ans += j-i+1 (ans=1) -> [10]\ni=0, j=1, product=50  < 100, ans += j-i+1 (ans=3) -> [10,5],[5]\ni=0, j=2, product=100: shrink window, i=1, product=10, ans += 2 (ans=5) -> [5,2],[2]\ni=1, j=3, product=60  < 100, ans += 3 (ans=8) -> [5,2,6],[2,6],[6]\n\\end{lstlisting}\nThe Python code:\n\\begin{lstlisting}[language = Python]\nclass Solution:\n    def numSubarrayProductLessThanK(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        i, j = 0, 0\n        window_product = 1\n        ans = 0\n        while j < len(nums):\n            window_product *= nums[j]\n            \n            while i<j and window_product >= k:\n                window_product //= nums[i] # exact: nums[i] divides the window product               \n                i+=1\n            if window_product < k:\n                ans += j-i+1            \n            j += 1\n        return ans\n\\end{lstlisting}\n\n\\paragraph{Array with Negative Element}\n\n862. Shortest Subarray with Sum at Least K\n\\begin{lstlisting}\nReturn the length of the shortest, non-empty, contiguous subarray of A with sum at least K.\n\nIf there is no non-empty subarray with sum at least K, return -1.\n\nExample 1:\nInput: A = [1], K = 1\nOutput: 1\n\nExample 2:\nInput: A = [1,2], K = 4\nOutput: -1\n\nExample 3:\nInput: A = [2,-1,2], K = 3\nOutput: 3\n\nNote:\n    1 <= A.length <= 50000\n    -10 ^ 5 <= A[i] <= 10 ^ 5\n    1 <= K <= 10 ^ 9\n\\end{lstlisting}\nThe only difference of this problem compared with the last ones is the presence of negative values. Because of the negatives, the shrinking method no longer works: for instance, with $A = [84, -37, 32, 40, 95]$ and $K = 167$, the right answer is $[32, 40, 95]$, but the plain sliding window ends with $i=0$, $j=4$. So how do we handle the negative values?\n
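\nOne standard answer (a sketch of the well-known prefix sum plus monotonic deque approach; the variable names are ours) is to compute the prefix sums and keep a deque of indices whose prefix values are increasing. The front of the deque supplies candidate window starts, and any index whose prefix sum is greater than or equal to the current one can never start a shorter valid subarray later, so it is popped from the back.\n\\begin{lstlisting}[language = Python]\nfrom collections import deque\n\ndef shortestSubarray(A, K):\n    # prefix[j] = sum of A[:j]; we want the smallest j - i with prefix[j] - prefix[i] >= K\n    prefix = [0]\n    for x in A:\n        prefix.append(prefix[-1] + x)\n    ans = len(A) + 1\n    dq = deque()  # indices of prefix, kept with increasing prefix values\n    for j, y in enumerate(prefix):\n        # a larger-or-equal prefix behind us is never a better start\n        while dq and y <= prefix[dq[-1]]:\n            dq.pop()\n        # consume starts from the front while the sum condition holds\n        while dq and y - prefix[dq[0]] >= K:\n            ans = min(ans, j - dq.popleft())\n        dq.append(j)\n    return ans if ans <= len(A) else -1\n\\end{lstlisting}\nOn the instance above, $A = [84, -37, 32, 40, 95]$ with $K = 167$, this returns $3$, matching $[32, 40, 95]$.\n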
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% sub sequence\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subsequence (Medium or Hard)}\nThe difference between subsequence questions and subarray questions is that we do not need the elements to be consecutive. Because of this relaxation, the brute force solution of this type of question is exponential, $O(2^n)$: for each element, we have two options, chosen or not chosen. This type of question is usually a follow-up to a subarray question due to the extra difficulty of non-consecutiveness. These problems are typical dynamic programming. Here we show a list of all related subsequence problems on LeetCode in Fig.~\\ref{fig:subsequence_problems}.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.8\\columnwidth]{fig/subsequence_1.png}\n    \\includegraphics[width=0.8\\columnwidth]{fig/subsequence_2.png}\n    \\caption{Subsequence Problems Listed on LeetCode}\n    \\label{fig:subsequence_problems}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Sum\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Sum}\nIn this section, to reach a target sum we can use a hashmap to save the original list, so that for the last element we only need to check the hashmap; this lowers the complexity by one power of $n$. However, a better solution is to use two pointers or three pointers: with three pointers, the first one fixes the starting point and the remaining two run the sorted two-pointer scan shown in the sketch below. We can also think about divide and conquer.\n\\begin{lstlisting}\n[-4,-1,-1,0,1,2]\ni, l-> ``````<-r\n\\end{lstlisting}\n
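\nThe core primitive is the two-pointer 2Sum on a sorted array; here is a minimal sketch of ours (the function name is illustrative):\n\\begin{lstlisting}[language = Python]\ndef two_sum_sorted(nums, target):\n    # nums must be sorted ascending; returns one pair summing to target, or None\n    l, r = 0, len(nums) - 1\n    while l < r:\n        s = nums[l] + nums[r]\n        if s == target:\n            return nums[l], nums[r]\n        if s < target:\n            l += 1  # need a larger sum\n        else:\n            r -= 1  # need a smaller sum\n    return None\n\\end{lstlisting}\n3Sum and 4Sum below reuse exactly this scan once the leading elements are fixed.\n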
\n\\begin{enumerate}\n    \\item 15. 3Sum\n\nGiven an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero.\n\nNote: The solution set must not contain duplicate triplets.\n\nFor example, given array S = [-1, 0, 1, 2, -1, -4],\n\\begin{lstlisting}\nA solution set is:\n[\n  [-1, 0, 1],\n  [-1, -1, 2]\n]\n\\end{lstlisting}\n\nSolution: We should use three pointers and no extra space. $i$ is the start point, ranging over $[0, len-2]$, and $l, r$ are the other two pointers, with $l = i+1$, $r = len-1$ at the beginning. The saving in time complexity comes entirely from sorting the array first, which enables the two-pointer scan.\n\\begin{lstlisting}\n[-4,-1,-1,0,1,2]\ni, l-> ``````<-r\n\\end{lstlisting}\nHow do we remove duplicate triplets?\n\\begin{lstlisting}[language = Python]\ndef threeSum(self, nums):\n    res = []\n    nums.sort()\n    for i in range(len(nums)-2):\n        if i > 0 and nums[i] == nums[i-1]: #make sure the first pointer does not repeat a value\n            continue\n        l, r = i+1, len(nums)-1\n        while l < r:\n            s = nums[i] + nums[l] + nums[r]\n            if s < 0:\n                l +=1 \n            elif s > 0:\n                r -= 1\n            else:\n                res.append((nums[i], nums[l], nums[r]))\n                l+=1\n                r-=1\n\n                #after recording a triplet, skip duplicates\n                while l < r and nums[l] == nums[l-1]:\n                    l += 1\n                while l < r and nums[r] == nums[r+1]:\n                    r -= 1\n    return res\n\\end{lstlisting}\nUsing a hashmap:\n\\begin{lstlisting}[language = Python]\ndef threeSum(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: List[List[int]]\n        \"\"\"\n        res =[]\n        nums=sorted(nums)\n        if not nums:\n            return []\n        if nums[-1]<0 or nums[0]>0:\n            return []\n        end_position = len(nums)-2\n        dic_nums={}\n        for i in range(1,len(nums)):\n            dic_nums[nums[i]]=i # save the last index of each value\n        \n        for i in range(end_position):\n            target = 0-nums[i]\n            if i>0 and nums[i] == nums[i-1]: #this is to avoid repeats \n                continue\n            if target<nums[i]: #if the target is smaller than this, we cannot find the others on the right side\n                break\n            for j in range(i+1,len(nums)): #this is to avoid repeats \n                if j>i+1 and nums[j]==nums[j-1]:\n                    continue\n                complement = target - nums[j]\n                if complement<nums[j]: #if the remaining numbers are bigger than the complement, no need to keep searching\n                    break\n                if complement in dic_nums and dic_nums[complement]>j: #make sure the complement is to the right of nums[j]\n                    res.append([nums[i],nums[j],complement])\n        return res\n\\end{lstlisting}\nThe following code takes more time:\n\\begin{lstlisting}[language = Python]\nfor i in range(len(nums)-2):\n            if i > 0 and nums[i] == nums[i-1]:\n                continue\n            l, r = i+1, len(nums)-1\n            while l < r:\n                if l-1>=i+1 and nums[l] == nums[l-1]: #check the front\n                    l += 1\n                    continue\n                if r+1<len(nums) and nums[r] == nums[r+1]:\n                    r -= 1\n                    continue\n                s = nums[i] + nums[l] + nums[r]\n                if s < 0:\n                    l +=1 \n                elif s > 0:\n                    r -= 1\n                else:\n                    res.append((nums[i], nums[l], nums[r]))\n                    l += 1; r -= 1\n        return res\n\\end{lstlisting}\n\n\\item 18. 
4Sum\n\\begin{lstlisting}[language = Python]\ndef fourSum(self, nums, target):\n        def findNsum(nums, target, N, result, results):\n            if len(nums) < N or N < 2 or target < nums[0]*N or target > nums[-1]*N:  # early termination\n                return\n            if N == 2: # two pointers solve the sorted 2-sum problem\n                l,r = 0,len(nums)-1\n                while l < r:\n                    s = nums[l] + nums[r]\n                    if s == target:\n                        results.append(result + [nums[l], nums[r]])\n                        l += 1\n                        r -= 1\n                        while l < r and nums[l] == nums[l-1]:\n                            l += 1\n                        while l < r and nums[r] == nums[r+1]:\n                            r -= 1\n                    elif s < target:\n                        l += 1\n                    else:\n                        r -= 1\n            else: # recursively reduce N\n                for i in range(len(nums)-N+1):\n                    if i == 0 or (i > 0 and nums[i-1] != nums[i]):\n                        findNsum(nums[i+1:], target-nums[i], N-1, result+[nums[i]], results) #reduce nums size, reduce target, save result\n\n        results = []\n        findNsum(sorted(nums), target, 4, [], results)\n        return results\n\\end{lstlisting}\n\n\\item 454. 4Sum II\n\nGiven four lists A, B, C, D of integer values, compute how many tuples (i, j, k, l) there are such that A[i] + B[j] + C[k] + D[l] is zero.\n\nTo make the problem a bit easier, all A, B, C, D have the same length $N$ where $0 \\leq N \\leq 500$. All integers are in the range of $-2^{28}$ to $2^{28} - 1$ and the result is guaranteed to be at most $2^{31} - 1$.\n\nExample:\n\\begin{lstlisting}\nInput:\nA = [ 1, 2]\nB = [-2,-1]\nC = [-1, 2]\nD = [ 0, 2]\n\nOutput:\n2\n\\end{lstlisting}\n\nExplanation:\n\n\\begin{lstlisting}\nThe two tuples are:\n1. (0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0\n2. (1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0\n\\end{lstlisting}\nSolution: brute force with four for loops is $O(N^4)$. If we use divide and conquer instead, summing the first half and saving the counts in a dictionary (a Counter), the time complexity is $O(N^2)$. With a 6Sum we can similarly reduce to $O(N^3)$, and likewise for an 8Sum.\n\n\\begin{lstlisting}[language = Python]\nimport collections\n\ndef fourSumCount(self, A, B, C, D):\n    AB = collections.Counter(a+b for a in A for b in B)\n    return sum(AB[-c-d] for c in C for d in D)\n\\end{lstlisting}\n\\end{enumerate}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Others\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Others}\nFor example, the following question is often used as a follow-up to the question \\textit{Longest Continuous Increasing Subsequence}.\n\n300. Longest Increasing Subsequence\n\\begin{lstlisting}\nGiven an unsorted array of integers, find the length of longest increasing subsequence.\n\nFor example,\n\n Given [10, 9, 2, 5, 3, 7, 101, 18],\n The longest increasing subsequence is [2, 3, 7, 101], therefore the length is 4. Note that there may be more than one LIS combination, it is only necessary for you to return the length.\n\n\nYour algorithm should run in $O(n^2)$ complexity.\n\nFollow up: Could you improve it to $O(n\\log n)$ time complexity?\n\\end{lstlisting}\n\nSolution: Compared with the last question, this one loosens the restriction that the elements be continuous. 
For this problem, we need to understand that it will not work with two pointers, and it is not a brute-force $O(n^2)$ problem: it is a typical combination problem for recursive functions. So first, recall the standard combination (subset enumeration) code:\n\\begin{lstlisting}[language = Python]\ndef dfs(temp, idx):\n    rslt.append(temp[:]) #append a shallow copy so later changes to temp do not change rslt\n    for i in range(idx, len(nums)):\n        temp.append(nums[i])\n        #backtrack\n        dfs(temp, i+1)\n        temp.pop()\n\ndfs([], 0)\nreturn rslt\n\\end{lstlisting}\n\nSo, we use backtracking-combination to enumerate all possible subsequences. The difference here is that we do not unconditionally use nums[i] in our result, only if nums[i] > tail, and the final length is the maximum over all of them: $T(n) = \\max(T(n-1)+1, T(n-k)+1, \\ldots)$. The time complexity is $O(2^n)$; this passes only part of the test cases before a TLE. In this process, we transfer from the combination problem to dynamic programming.\n\\begin{lstlisting}[language = Python]\nfrom sys import maxsize\n\ndef lengthOfLIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        def backtrackingDFS(idx, tail):\n            if idx==len(nums):\n                return 0\n            length = 0\n            for i in range(idx,len(nums)):\n                if nums[i]>tail:\n                    length = max(1+backtrackingDFS(i+1, nums[i]), length)\n            return length\n        \n        return backtrackingDFS(0,-maxsize)\n\\end{lstlisting}\n\nNow, since we are doing dynamic programming: if we already know the answer from a given index onward, we do not need to compute it again. With memoization there are only $n$ subproblems, solved top-down recursively. Note that for the memo to be valid it must not depend on tail, so we memoize the length of the LIS that starts exactly at index idx:\n\\begin{lstlisting}[language = Python]\ndef lengthOfLIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        memo = [None for _ in range(len(nums))]\n        def lisStartingAt(idx):\n            # length of the longest increasing subsequence whose first element is nums[idx]\n            if memo[idx] is None:\n                length = 1\n                for i in range(idx+1, len(nums)):\n                    if nums[i] > nums[idx]:\n                        length = max(1 + lisStartingAt(i), length)\n                memo[idx] = length\n            return memo[idx]\n        \n        return max(lisStartingAt(i) for i in range(len(nums)))\n\\end{lstlisting}\n\nNow we use bottom-up iterative dynamic programming, whose general solution pattern can be found in Section~\\ref{part2_sequence_dp}. For $[10,9,2,5,3]$, the length array is $[1,1,1,2,2]$; for $[4,10,4,3,8,9]$, we have $[1, 2, 1, 1, 2, 3]$. The rule: $f[0]=1$, and $f[idx] = 1 + \\max(f[i])$ over $0\\leq i<idx$ with $nums[idx]>nums[i]$. 
Now the time complexity is $O(n^2)$.\n\nstate: f[i] records the maximum length of an increasing subsequence ending at index i.\n\nfunction: f[i]: for each earlier element, choose to extend or not\n\ninitialize: f[0]=1\n\\begin{lstlisting}[language = Python]\ndef lengthOfLIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        dp=[0 for _ in range(len(nums))]\n        dp[0]=1\n        maxans =1\n        for idx in range(1,len(nums)): #try appending nums[idx] to each earlier subsequence\n            pre_max=0\n            for i in range(0,idx):\n                if nums[idx]>nums[i]:\n                    pre_max=max(pre_max, dp[i])\n            dp[idx]=pre_max+1\n            maxans=max(maxans,dp[idx])\n        return maxans\n\\end{lstlisting}\n\nWe can speed this up further by using binary search: the second loop becomes $O(\\log n)$, giving $O(n\\log n)$ overall. Here the dp array stores, for each length, the smallest possible tail of an increasing subsequence of that length. Each time we use binary search to find the insertion point of the current number; if it is at the end, the length grows, otherwise we replace the element at that position. For $[4,10,4,3,8,9]$: $[4] \\to [4,10] \\to [4,10] \\to [3,10] \\to [3,8] \\to [3,8,9]$.\n\\begin{lstlisting}[language = Python]\ndef lengthOfLIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        def binarySearch(arr,l,r,num):\n            while l<r:\n                mid = l+(r-l)//2\n                if num>arr[mid]:\n                    l=mid+1\n                elif num<arr[mid]:\n                    r=mid\n                else:\n                    return mid\n            return l\n        if not nums:\n            return 0\n        dp =[0 for _ in range(len(nums))] #dp[L] = smallest tail of an increasing subsequence of length L+1\n        length=0\n        for idx in range(0,len(nums)):\n            pos = binarySearch(dp,0,length,nums[idx]) #find the insertion point\n            dp[pos]= nums[idx] #if it is not at the end, replace the element there with the current number\n            if pos==length:\n                length+=1\n        return length\n\\end{lstlisting}\n\n673. Number of Longest Increasing Subsequence\n\nGiven an unsorted array of integers, find the number of longest increasing subsequence.\n\\begin{lstlisting}\nExample 1:\n\nInput: [1,3,5,4,7]\nOutput: 2\nExplanation: The two longest increasing subsequence are [1, 3, 4, 7] and [1, 3, 5, 7].\n\nExample 2:\nInput: [2,2,2,2,2]\nOutput: 5\nExplanation: The length of longest continuous increasing subsequence is 1, and there are 5 subsequences' length is 1, so output 5.\n\\textit{Note: Length of the given array will be not exceed 2000 and the answer is guaranteed to be fit in 32-bit signed int.}\n\\end{lstlisting}\n\nSolution: This is a different problem again: we count the number of longest increasing subsequences. 
A brute-force pass first enumerates every increasing subsequence and counts those of maximal length (exponential time, for illustration only):\n\\begin{lstlisting}[language = Python]\nfrom sys import maxsize\nclass Solution:\n    def findNumberOfLIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        rlst = []\n        def recursive(idx, tail, res):\n            if idx == len(nums):\n                rlst.append(res) # one complete choice of indices\n                return 0\n            if nums[idx] > tail:\n                addLen = 1 + recursive(idx+1, nums[idx], res+[nums[idx]])\n                notAddLen = recursive(idx+1, tail, res)\n                return max(addLen, notAddLen)\n            return recursive(idx+1, tail, res)\n        ans = recursive(0, -maxsize, [])\n        count = 0\n        for lst in rlst:\n            if len(lst) == ans:\n                count += 1\n        return count\n\\end{lstlisting}\n\nUsing dynamic programming, the state is again f[i], the length of the longest increasing subsequence ending in nums[i]; the difference from plain LIS is that we add a count array:\n\\begin{lstlisting}[language = Python]\nclass Solution:\n    def findNumberOfLIS(self, nums):\n        N = len(nums)\n        if N <= 1: return N\n        lengths = [0] * N # lengths[i] = longest ending in nums[i] (offset by one)\n        counts = [1] * N # counts[i] = number of longest ending in nums[i]\n        for idx in range(N):\n            for i in range(idx):\n                if nums[i] < nums[idx]: # nums[idx] can extend a subsequence ending at i\n                    if lengths[i] >= lengths[idx]:\n                        lengths[idx] = 1 + lengths[i] # a new best length\n                        counts[idx] = counts[i] # reset the count\n                    elif lengths[i] + 1 == lengths[idx]: # a tie\n                        counts[idx] += counts[i] # add counts[i] to the current count\n        longest = max(lengths)\n        return sum(c for i, c in enumerate(counts) if lengths[i] == longest)\n\\end{lstlisting}\n\n128. Longest Consecutive Sequence\n\\begin{lstlisting}\nGiven an unsorted array of integers, find the length of the longest consecutive elements sequence.\n\nFor example,\n Given [100, 4, 200, 1, 3, 2],\n The longest consecutive elements sequence is [1, 2, 3, 4]. Return its length: 4.\n \n Your algorithm should run in O(n) complexity.\n \\end{lstlisting}\n\nSolution: Ignoring the $O(n)$ requirement first, we can sort to get [1,2,3,4,100,200] and then scan once with two pointers to find the longest run of consecutive values, here [1,2,3,4]; a sketch follows. 
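The sorting version below is a minimal sketch under our own naming (it is not code from the original solutions); it runs in $O(n \\log n)$:\n\\begin{lstlisting}[language = Python]\ndef longestConsecutiveSorted(nums):\n    if not nums:\n        return 0\n    vals = sorted(set(nums)) # drop duplicates: they never extend a run\n    best = cur = 1\n    for prev, x in zip(vals, vals[1:]):\n        cur = cur + 1 if x == prev + 1 else 1 # extend the run or restart it\n        best = max(best, cur)\n    return best\n\\end{lstlisting}\n\nHow about $O(n)$? 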
We can pop a number out of the set, say 4; then we repeatedly test first-1 to collect every smaller neighbour (here 3, 2, 1) and last+1 to collect every larger neighbour, removing each number we find so it is never visited again. Every element is inserted and removed once, so this is $O(n)$.\n\\begin{lstlisting}[language =Python]\ndef longestConsecutive(self, nums):\n        nums = set(nums)\n        maxlen = 0\n        while nums:\n            first = last = nums.pop()\n            while first - 1 in nums: #keep finding the smaller one\n                first -= 1\n                nums.remove(first)\n            while last + 1 in nums: #keep finding the larger one\n                last += 1\n                nums.remove(last)\n            maxlen = max(maxlen, last - first + 1)\n        return maxlen\n\\end{lstlisting}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Merge List\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Merge and Partition}\n\\subsubsection{Merge Lists}\nWe can use divide and conquer (see the merge sort) or a priority queue; a heap-based sketch follows.
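As an illustration of the priority-queue idea, here is a small self-contained sketch (our own function name, merging plain sorted Python lists rather than linked lists):\n\\begin{lstlisting}[language = Python]\nimport heapq\n\ndef merge_k_sorted(lists):\n    # one (value, list id, index) entry per non-empty list in a min-heap\n    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]\n    heapq.heapify(heap)\n    merged = []\n    while heap:\n        val, i, j = heapq.heappop(heap) # smallest remaining value overall\n        merged.append(val)\n        if j + 1 < len(lists[i]): # push the successor from the same list\n            heapq.heappush(heap, (lists[i][j+1], i, j+1))\n    return merged\n\\end{lstlisting}\nEach element enters the heap exactly once, so merging $N$ total elements from $k$ lists costs $O(N \\log k)$.\n\\subsubsection{Partition Lists}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Intersection\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Intersection}\nFor problems that ask for the intersection of lists, we can use a hashmap, which takes $O(m+n)$ time. Alternatively, we can sort both arrays first and then use two pointers, one starting at the beginning of each array. Examples are shown below:\n\\begin{enumerate}\n    \\item  349. Intersection of Two Arrays (Easy)\n    \n     Given two arrays, write a function to compute their intersection.\n\nExample:\n\\begin{lstlisting}\nGiven nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2].\n\\end{lstlisting}\n\nNote:\n\\begin{itemize}\n    \\item Each element in the result must be unique.\n    \\item The result can be in any order.\n\\end{itemize}\nSolution 1: Use a hashmap; here we convert with set, which takes 43 ms.\n\\begin{lstlisting}[language = Python]\ndef intersection(self, nums1, nums2):\n    \"\"\"\n    :type nums1: List[int]\n    :type nums2: List[int]\n    :rtype: List[int]\n    \"\"\"\n    if not nums1 or not nums2:\n        return []\n    if len(nums1) > len(nums2):\n        nums1, nums2 = nums2, nums1\n    ans = set()\n    nums1 = set(nums1)\n    for e in nums2:\n        if e in nums1:\n            ans.add(e)\n    return list(ans)\n\\end{lstlisting}\nSolution 2: Sort first, then use two pointers; takes 46 ms.\n\\begin{lstlisting}[language = Python]\ndef intersection(self, nums1, nums2):\n    \"\"\"\n    :type nums1: List[int]\n    :type nums2: List[int]\n    :rtype: List[int]\n    \"\"\"\n    nums1.sort()\n    nums2.sort()\n    r = set()\n    i, j = 0, 0\n    while i < len(nums1) and j < len(nums2):\n        if nums1[i] < nums2[j]:\n            i += 1\n        elif nums1[i] > nums2[j]:\n            j += 1\n        else:\n            r.add(nums1[i])\n            i += 1\n            j += 1\n    return list(r)\n\\end{lstlisting}\n\\item 350. 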
Intersection of Two Arrays II (Easy)\n\n Given two arrays, write a function to compute their intersection.\n\nExample:\n\\begin{lstlisting}\nGiven nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2, 2].\n\\end{lstlisting}\n\nNote:\n\\begin{itemize}\n    \\item Each element in the result should appear as many times as it shows in both arrays.\n    \\item The result can be in any order.\n\\end{itemize}\n\nFollow up:\n\\begin{enumerate}\n    \\item  What if the given array is already sorted? How would you optimize your algorithm?\n    \\item What if nums1's size is small compared to nums2's size? Which algorithm is better?\n    \\item What if elements of nums2 are stored on disk, and the memory is limited such that you cannot load all elements into the memory at once?\n\\end{enumerate}\n\nA counter-based solution is sketched below.
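This sketch uses \\texttt{collections.Counter} (our own function, not code from the original text); each element is kept min(count in nums1, count in nums2) times:\n\\begin{lstlisting}[language = Python]\nimport collections\n\ndef intersect(nums1, nums2):\n    counts = collections.Counter(nums1) # multiset of the first array\n    ans = []\n    for x in nums2:\n        if counts[x] > 0: # still available in nums1\n            ans.append(x)\n            counts[x] -= 1\n    return ans\n\\end{lstlisting}\nIf both arrays were already sorted, the two-pointer walk from problem 349 (keeping duplicates instead of using a set) would avoid the hashmap entirely.\n\n\\end{enumerate}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Exercises\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Exercises}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subarray}\n\\subsubsection{Absolute-conditioned Subarray}\n\\begin{enumerate}\n    \\item 930. Binary Subarrays With Sum\n    \\begin{lstlisting}\n    In an array A of 0s and 1s, how many non-empty subarrays have sum S?\nExample 1:\n\nInput: A = [1,0,1,0,1], S = 2\nOutput: 4\nExplanation: \nThe 4 subarrays are bolded below:\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\nNote:\n\n    A.length <= 30000\n    0 <= S <= A.length\n    A[i] is either 0 or 1.\n\\end{lstlisting}\nAnswer: this belongs to the same family as the maximum-subarray variants: counting subarrays with a given sum. We solve it using a prefix sum and a hashmap that stores how often each prefix value has occurred.\n\\begin{lstlisting}[language=Python]\nimport collections\nclass Solution:\n    def numSubarraysWithSum(self, A, S):\n        \"\"\"\n        :type A: List[int]\n        :type S: int\n        :rtype: int\n        \"\"\"\n        counter = collections.defaultdict(int) # counter[v] = how many prefixes sum to v\n        counter[0] = 1 # the empty prefix sums to 0\n        prefix_sum, count = 0, 0\n        for v in A:\n            prefix_sum += v\n            count += counter[prefix_sum-S] # subarrays ending here with sum S\n            counter[prefix_sum] += 1 # record the current prefix sum\n        return count\n\\end{lstlisting}\nWe can also write it as:\n\\begin{lstlisting}[language=Python]\n    def numSubarraysWithSum(self, A, S):\n        \"\"\"\n        :type A: List[int]\n        :type S: int\n        :rtype: int\n        \"\"\"\n        P = [0]\n        for x in A: P.append(P[-1] + x)\n        count = collections.Counter()\n\n        ans = 0\n        for x in P:\n            ans += count[x]\n            count[x + S] += 1\n\n        return ans\n\\end{lstlisting}\nAlso, it can be solved using a modified sliding window. In a sliding window we keep two indices $i, j$ starting from 0 that delimit the window, and $j$ moves one position per iteration. In a normal sliding window we shrink only while the sum is larger than the target. However, in this case, as in the example $1, 0, 1, 0, 1$: when $j$ is at the last 1 and $i = 1$, the sum is $2$, but the algorithm would miss the window starting at $i = 2$, which has the same sum. 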
To solve this, we keep a second left index $i_{hi}$: besides the normal shrinking rule, it also advances while the sum already equals $S$ and the element it would drop is a $0$. The number of valid windows ending at $j$ is then $i_{hi} - i_{lo} + 1$; this is effectively a three-pointer algorithm. \n\\begin{lstlisting}[language=Python]\n    def numSubarraysWithSum(self, A, S):\n        i_lo, i_hi, j = 0, 0, 0 #i_lo <= j\n        sum_lo = sum_hi = 0\n        ans = 0\n        while j < len(A):\n            # Maintain i_lo, sum_lo:\n            # While the sum is too big, i_lo += 1\n            sum_lo += A[j]\n            while i_lo < j and sum_lo > S:\n                sum_lo -= A[i_lo]\n                i_lo += 1\n\n            # Maintain i_hi, sum_hi:\n            # While the sum is too big, or equal and we can move, i_hi += 1\n            sum_hi += A[j]\n            while i_hi < j and (\n                    sum_hi > S or sum_hi == S and not A[i_hi]):\n                sum_hi -= A[i_hi]\n                i_hi += 1\n\n            if sum_lo == S:\n                ans += i_hi - i_lo + 1\n            j += 1\n\n        return ans\n\\end{lstlisting}\n\\item 523. Continuous Subarray Sum\n\\begin{lstlisting}\nGiven a list of non-negative numbers and a target integer k, write a function to check if the array has a continuous subarray of size at least 2 that sums up to a multiple of k, that is, sums up to n*k where n is also an integer.\n\nExample 1:\nInput: [23, 2, 4, 6, 7],  k=6\nOutput: True\nExplanation: Because [2, 4] is a continuous subarray of size 2 and sums up to 6.\n\nExample 2:\nInput: [23, 2, 6, 4, 7],  k=6\nOutput: True\nExplanation: Because [23, 2, 6, 4, 7] is a continuous subarray of size 5 and sums up to 42.\n\nNote:\nThe length of the array won't exceed 10,000.\nYou may assume the sum of all the numbers is in the range of a signed 32-bit integer.\n\\end{lstlisting}\nAnswer: This is a variant of the subarray-with-sum-k problem. The difference: we store each prefix sum by its remainder modulo $k$. If two prefix sums $a$ and $b$ (with $a$ earlier) satisfy $a \\equiv b \\pmod{k}$, then the subarray between them sums to $b - a$, which is a multiple of $k$.\n\\begin{lstlisting}[language=Python]\nclass Solution:\n    def checkSubarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: bool\n        \"\"\"\n        if not nums:\n            return False\n        k = abs(k)\n        prefixSum = 0\n        seen = {0: -1} # remainder -> first index at which it appears\n        for i, v in enumerate(nums):\n            prefixSum += v\n            if k != 0:\n                prefixSum %= k\n            if prefixSum in seen and (i - seen[prefixSum]) >= 2: # subarray of size at least 2\n                return True\n            if prefixSum not in seen:\n                seen[prefixSum] = i\n        return False\n\\end{lstlisting}\n\\end{enumerate}\n\\subsubsection{Vague-conditioned Subarray}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subsequence\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subsequence}\n\\begin{enumerate}\n    \\item 594. Longest Harmonious Subsequence\n\nWe define a harmonious array as an array where the difference between its maximum value and its minimum value is exactly 1.\n\nNow, given an integer array, you need to find the length of its longest harmonious subsequence among all its possible subsequences.\n\nExample 1:\n\\begin{lstlisting}\nInput: [1,3,2,2,5,2,3,7]\nOutput: 5\nExplanation: The longest harmonious subsequence is [3,2,2,2,3].\n\\end{lstlisting}\n\n\\textit{Note: The length of the input array will not exceed 20,000.}\n\nSolution: first, build a Counter of all the numbers. 
Then iterate over the counter: for each key, check key+1; when both counts are non-zero, key and key+1 together give a harmonious subsequence of length count[key]+count[key+1]. Checking key+1 alone is enough, because the pair (key-1, key) is covered when we visit key-1.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter\nclass Solution:\n    def findLHS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums or len(nums) < 2:\n            return 0\n        count = Counter(nums)\n        maxLen = 0\n        for key, item in count.items(): # visit each key: count pair in the counter\n            if count[key+1]: # key+1 alone suffices; (key-1, key) is handled when visiting key-1\n                maxLen = max(maxLen, item + count[key+1])\n        return maxLen\n\\end{lstlisting}\n\n\\item 521. Longest Uncommon Subsequence I\n\nGiven a group of two strings, you need to find the longest uncommon subsequence of this group of two strings. The longest uncommon subsequence is defined as the longest subsequence of one of these strings such that it is not a subsequence of the other string.\n\nA subsequence is a sequence that can be derived from one sequence by deleting some characters without changing the order of the remaining elements. Trivially, any string is a subsequence of itself and an empty string is a subsequence of any string.\n\nThe input will be two strings, and the output needs to be the length of the longest uncommon subsequence. If the longest uncommon subsequence doesn't exist, return -1.\n\nExample 1:\n\\begin{lstlisting}\nInput: \"aba\", \"cdc\"\nOutput: 3\nExplanation: The longest uncommon subsequence is \"aba\" (or \"cdc\"), \nbecause \"aba\" is a subsequence of \"aba\", \nbut not a subsequence of any other strings in the group of two strings.\n\\end{lstlisting}\n\n\\textit{Note:}\n\n    \\textit{Both strings' lengths will not exceed 100.}\n    \n    \\textit{Only letters from a ~ z will appear in input strings.}\n\nSolution: trying a few more examples reveals the rule: if the lengths differ, the longer string itself is the answer; if the strings are equal (e.g. ``aba'', ``aba''), return -1; otherwise they have the same length but differ, and either string works.\n\\begin{lstlisting}[language = Python]\ndef findLUSlength(self, a, b):\n        \"\"\"\n        :type a: str\n        :type b: str\n        :rtype: int\n        \"\"\"\n        if len(b) != len(a):\n            return max(len(a), len(b))\n        # the lengths are the same\n        return len(a) if a != b else -1\n\\end{lstlisting}\n\\item 424. Longest Repeating Character Replacement\n\nGiven a string that consists of only uppercase English letters, you can replace any letter in the string with another letter at most k times. Find the length of the longest substring containing all repeating letters you can get after performing the above operations.\n\n\\textit{Note:}\n\n \\textit{Both the string's length and k will not exceed $10^4$.}\n\nExample 1:\n\\begin{lstlisting}\nInput:\ns = \"ABAB\", k = 2\n\nOutput:\n4\n\\end{lstlisting}\n\nExplanation:\nReplace the two 'A's with two 'B's or vice versa.\n\nExample 2:\n\\begin{lstlisting}\nInput:\ns = \"AABABBA\", k = 1\n\nOutput:\n4\n\\end{lstlisting}\n\nExplanation:\nReplace the one 'A' in the middle with 'B' and form \"AABBBBA\".\nThe substring \"BBBB\" has the longest repeating letters, which is 4.\n\nSolution: the brute-force recursive solution tries, at every position whose character differs from the target letter, either replacing it (spending one of the k replacements) or leaving it. 
This gets TLE:\n\\begin{lstlisting}[language = Python]\n#brute force, written as methods of the Solution class\ndef characterReplacement(self, s, k):\n    maxLen = 0\n    def getLen(news): # length of the longest run of a single repeated letter\n        best = run = 1\n        for a, b in zip(news, news[1:]):\n            run = run + 1 if a == b else 1\n            best = max(best, run)\n        return best\n    def replace(news, idx, re_char, k):\n        nonlocal maxLen\n        if k == 0 or idx == len(s):\n            maxLen = max(maxLen, getLen(news))\n            return\n        if s[idx] != re_char: # replace this character with re_char\n            news_copy = news[:idx] + re_char + news[idx+1:]\n            replace(news_copy, idx+1, re_char, k-1)\n        replace(news[:], idx+1, re_char, k) # or keep it unchanged\n    for char1 in set(s): # try each candidate target letter\n        replace(s[:], 0, char1, k)\n    return maxLen\n\\end{lstlisting}\nTo get the BCR, think about the sliding window. For any window $s[i..j]$, the number of replacements needed to turn it into one repeating letter is the window length minus the count of its most frequent letter, so the window is valid as long as $j-i+1-maxCharCount \\leq k$. We slide $j$, maintain the per-letter counts and the best single-letter count, and shrink the window from $i$ whenever the constraint is violated. For example, with s = ``BBCABBBAB'' and k=2: at $i=0$, $j=7$ the window length is 8 and B occurs 5 times, $8-5=3>2$, so we shrink; $i=1$ gives $7-4=3$, $i=2$ gives $6-3=3$, and $i=3$ gives $5-3=2$, valid again, with current window length 5.\n\\begin{lstlisting}[language = Python]\ndef characterReplacement(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        i,j = 0,0 #sliding window\n        counter=[0]*26\n        ans = 0\n        maxCharCount = 0\n        while j<len(s):\n            counter[ord(s[j])-ord('A')]+=1\n            maxCharCount = max(maxCharCount, counter[ord(s[j])-ord('A')])\n            while j-i+1-maxCharCount>k: #now shrink the window\n                counter[ord(s[i])-ord('A')]-=1\n                i+=1\n                #update the max count after dropping s[i]\n                maxCharCount=max(counter)\n            ans=max(ans, j-i+1)\n            j+=1\n                \n        return ans\n\\end{lstlisting}\n\n\\item 395. 
Longest Substring with At Least K Repeating Characters\n\nFind the length of the longest substring T of a given string (consisting of lowercase letters only) such that every character in T appears no less than k times.\n\nExample 1:\n\\begin{lstlisting}\nInput:\ns = \"aaabb\", k = 3\n\nOutput:\n3\n\\end{lstlisting}\n\nThe longest substring is \"aaa\", as 'a' is repeated 3 times.\n\nExample 2:\n\\begin{lstlisting}\nInput:\ns = \"ababbc\", k = 2\n\nOutput:\n5\n\\end{lstlisting}\n\nThe longest substring is \"ababb\", as 'a' is repeated 2 times and 'b' is repeated 3 times.\n\nSolution: dynamic programming over substrings with a memo works, but it takes $O(n^2)$ space and still gets TLE:\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter\nclass Solution:\n    def longestSubstring(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        if not s or len(s) < k:\n            return 0\n        count = Counter(s)\n        memo = [[None for col in range(len(s))] for row in range(len(s))]\n        def cut(start, end, count):\n            if start > end:\n                return 0\n            if memo[start][end] is None:\n                if any(0 < item < k for key, item in count.items()):\n                    newCounterF = count.copy() # counts without the first character\n                    newCounterF[s[start]] -= 1\n                    newCounterB = count.copy() # counts without the last character\n                    newCounterB[s[end]] -= 1\n                    memo[start][end] = max(cut(start+1, end, newCounterF), cut(start, end-1, newCounterB))\n                else:\n                    memo[start][end] = end - start + 1\n            return memo[start][end]\n        return cut(0, len(s)-1, count)\n\\end{lstlisting}\n\nA better approach is divide and conquer with a split pointer mid that starts from 0. If the whole string satisfies the condition, return len(s). Otherwise, two while loops separate the string into three substrings: left (satisfies the condition), middle (characters whose total count is below k, which can never be part of an answer), and right (unknown); we recurse on left and right.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter\nclass Solution:\n    def longestSubstring(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        if not s or len(s) < k:\n            return 0\n        count = Counter(s)\n        mid = 0 # on the left side, from 0 to mid, characters that satisfy the condition\n        while mid < len(s) and count[s[mid]] >= k:\n            mid += 1\n        if mid == len(s): return len(s)\n        left = self.longestSubstring(s[:mid], k) # e.g. the \"ababb\" part\n        # from mid on, skip the characters that cannot satisfy the condition\n        while mid < len(s) and count[s[mid]] < k:\n            mid += 1\n        # now keep doing the same on the right side\n        right = self.longestSubstring(s[mid:], k)\n        return max(left, right)\n\\end{lstlisting}\n\n\\subsection{Intersection}\n\\item 160. 
Intersection of Two Linked Lists (Easy)\n\nWrite a program to find the node at which the intersection of two singly linked lists begins.\n\nFor example, the following two linked lists:\n\\begin{lstlisting}\n\n        \nA:          a1 -> a2\n                  \\\n                     c1 -> c2 -> c3\n                  /            \nB:     b1 -> b2 -> b3\n\\end{lstlisting}\n\nbegin to intersect at node c1.\n\nNotes:\n\\begin{itemize}\n    \\item If the two linked lists have no intersection at all, return null.\n    \\item The linked lists must retain their original structure after the function returns.\n    \\item You may assume there are no cycles anywhere in the entire linked structure.\n    \\item Your code should preferably run in O(n) time and use only O(1) memory.\n\\end{itemize}\n\nA sketch of the classic two-pointer solution is given below.
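This sketch uses the well-known switching-heads trick (our illustration, assuming the usual singly linked list node with a next field; it is not code from the text). Each pointer walks its own list and then the other one's, so both travel $a+c+b$ nodes and meet at the intersection node, or both reach None together when there is none, in O(n) time with O(1) memory:\n\\begin{lstlisting}[language = Python]\ndef getIntersectionNode(headA, headB):\n    pA, pB = headA, headB\n    while pA is not pB:\n        # on reaching an end, continue from the head of the other list\n        pA = pA.next if pA else headB\n        pB = pB.next if pB else headA\n    return pA # the first shared node, or None\n\\end{lstlisting}\n\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "d06468b694da3c1cac00355138eafd2a36ce0533", "size": 74719, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/mastering/array_string.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/mastering/array_string.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/mastering/array_string.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7854803493, "max_line_length": 773, "alphanum_fraction": 0.6076098449, "num_tokens": 19540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.8354835309589074, "lm_q1q2_score": 0.5869522825502993}}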
{"text": "% This document is part of the Oddity project.\n% Copyright 2015 the Authors.\n\n% ## to-do\n% - write\n\n\\documentclass[11pt]{article}\n\\usepackage{amsmath}\n\\newcommand{\\given}{\\,|\\,}\n\n\\title{notes on mixtures}\n\\author{marcus frean}\n\\date{}\n\\begin{document}\\sloppy\\sloppypar\n\\maketitle\n\nConsider a square (Oddity bounding box) in an image, containing $N$\npixels whose bin indices are collected in a vector $\\mathbf{n}$.\n\n\\section{mixtures of multinomials}\n\n\\subsection{single multinomial}\nSuppose we choose\\footnote{with replacement} from an urn containing\ncoloured balls (``colours'' correspond to bin indices). The densities\nof different colours are given by a known categorical distribution\n$\\mathbf{P}$, and after $N$ such draws we observe the numbers of each\ncolour drawn.\n\nThe likelihood would be given by the multinomial distribution:\n\\begin{align}\n\\mathcal{L}(\\mathbf{n} \\given \\mathbf{P}) &= {N \\choose \\mathbf{n}} \\prod_i P_i^{n_i}\n\\end{align}\nwhere ${N \\choose \\mathbf{n}} = \\frac{N!}{\\prod_i n_i !}$ is called the multinomial coefficient.\n\n\n\\subsection{two multinomials}\nNow suppose instead that there are two urns, $s$ and $b$, and that we\ndraw from $\\mathbf{P}_s$ with probability $\\lambda$ and from\n$\\mathbf{P}_b$ with probability $1-\\lambda$. The distribution\nspecified by this model is\n \n\\begin{align}\n\\mathcal{L}(\\mathbf{n} \\given \\mathbf{P}_s, \\mathbf{P}_b, \\lambda) \n&= {N \\choose \\mathbf{n}} \\prod_i (\\lambda P_{s,i} + (1-\\lambda) P_{b,i})^{n_i}\n\\end{align}\n\nwhich is also a multinomial distribution, but with parameter vector $\\lambda \\mathbf{P}_{s} + (1-\\lambda) \\mathbf{P}_{b}$ in place  of $\\mathbf{P}$.\n\nSee: Jason Rennie, Mixtures of Multinomials, 2005, {\\tt ``mixture.Multinomials.pdf'' }\n\n\n\\section{mixtures of Dirichlet-multinomials}\n\n\\subsection{single Dirichlet-multinomial}\n\nNow consider having ignorance as to the categorical distribution used, but suppose it comes from a Dirichlet itself.\n\nThat is: \n\\begin{itemize}\n\\item\n{\\tt one time only}, we draw a categorical distribution $\\mathbf{P}$\nfrom a Dirichlet having parameters $\\boldsymbol\\alpha$.\n\n\\item \nThen, {\\tt $N$ times}, we draw coloured balls from $\\mathbf{P}$.\n\\end{itemize}\n\nTheory tells us that, after integrating out the unknown $\\mathbf{P}$,\nthis generative model produces final counts $\\mathbf{n}$ with  a probability\nthat is given by the ``Dirichlet-multinomial distribution'',\n\\begin{align}\n\\mathcal{L}(\\mathbf{n} \\given \\boldsymbol\\alpha, N) &= \\frac{\\Gamma(A)}{\\Gamma(N+A)} \\prod_i \\frac{\\Gamma(n_i + \\alpha_i)}{\\Gamma(\\alpha_i)}\n\\end{align}\nwhere $A = \\sum_i \\alpha_i$.\n\nThis is the basis for our current scores for a region (both the score\nthat is a Bayes Factor, and an alternative score that compares a\nlikelihood with/without the region being ``sourcy'').\n\n\\subsection{two Dirichlet-multinomials}\n\nNow try this generative model for size, which uses two Dirichlets, having parameters $\\boldsymbol\\alpha_s$ and $\\boldsymbol\\alpha_b$ respectively.\n\n\\begin{itemize}\n\\item\n{\\tt one time only}, we draw a categorical distribution $\\mathbf{P}_s$\nfrom a Dirichlet having parameters $\\boldsymbol\\alpha_s$ AND similarly\none $\\mathbf{P}_b$ from a Dirichlet having parameters\n$\\boldsymbol\\alpha_b$.\n\n\\item\nThen, {\\tt $N$ times}, \nwe draw from $\\mathbf{P}_s$ with probability $\\lambda$ and from $\\mathbf{P}_b$ with probability $1-\\lambda$. 
\n\\end{itemize}\n\nTheory tells us that, after integrating out the unknown $\\mathbf{P}$,........... wot?\n\n\n\\end{document}\n", "meta": {"hexsha": "fc236a0ed22fd74047b74b6002e13bf931208a09", "size": 3449, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "words/mixtures.tex", "max_stars_repo_name": "eggplantbren/Oddity", "max_stars_repo_head_hexsha": "fd41e036ed4390e28c0a325a0232a8a2ffd53079", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-01-06T06:28:21.000Z", "max_stars_repo_stars_event_max_datetime": "2015-01-06T06:28:21.000Z", "max_issues_repo_path": "words/mixtures.tex", "max_issues_repo_name": "eggplantbren/Oddity", "max_issues_repo_head_hexsha": "fd41e036ed4390e28c0a325a0232a8a2ffd53079", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "words/mixtures.tex", "max_forks_repo_name": "eggplantbren/Oddity", "max_forks_repo_head_hexsha": "fd41e036ed4390e28c0a325a0232a8a2ffd53079", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.49, "max_line_length": 148, "alphanum_fraction": 0.7335459553, "num_tokens": 1015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835289107309, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5869522811113937}}
{"text": "\\chapter{Monte Carlo Methods}\n\n\\section{Summary}\n\nMonte Carlo methods require no model of the system; they work with samples, so they are based on \\textbf{experience}.\n\n\\subsection{Monte Carlo Prediction}\nMonte Carlo prediction generates trajectories/samples of the system, starting from a certain state and following a certain policy $\\pi$, and averages the returns (equation~\\ref{eq:monte carlo prediction}) to estimate the value function $v_{\\pi}$. The standard error of the estimate drops as $\\frac{1}{\\sqrt{n}}$, with $n$ the number of averaged returns; the full first-visit procedure is summarized below.\n\n\\begin{equation}\n    G_t := \\gamma G_{t+1} + R_{t+1}\n    \\label{eq:monte carlo prediction}\n\\end{equation}
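For concreteness, algorithm~\\ref{alg:first visit mc prediction} is our own summary of the standard first-visit procedure in the notation above (it is not reproduced from the text).\n\n\\begin{algorithm}\n\\begin{algorithmic}\n\t\\State Initialize $V(s)$ arbitrarily and $Returns(s)$ as an empty list, for all $s$\n\t\\For{each generated episode $S_0, A_0, R_1, \\ldots, S_{T-1}, A_{T-1}, R_T$}\n\t\t\\State $G \\gets 0$\n\t\t\\For{$t \\gets T-1$ to $0$}\n\t\t\t\\State $G \\gets \\gamma G + R_{t+1}$\n\t\t\t\\If{$S_t$ does not appear in $S_0, \\ldots, S_{t-1}$}\n\t\t\t\t\\State append $G$ to $Returns(S_t)$\n\t\t\t\t\\State $V(S_t) \\gets \\text{average}(Returns(S_t))$\n\t\t\t\\EndIf\n\t\t\\EndFor\n\t\\EndFor\n\\end{algorithmic}\n\\caption{First-visit Monte Carlo prediction}\n\\label{alg:first visit mc prediction}\n\\end{algorithm}\n\n\\subsection{Monte Carlo Estimation of action values}\nIf no model is available, $v_{\\pi}(s)$ is not sufficient, as it is not clear which actions can be taken. We need to estimate the value function of the state/action pair, $q(s,a)$.\n\nMonte Carlo methods can suffer from the \\textbf{problem of maintaining exploration}, as not all state/action combinations might be visited. One solution to this problem is using \\textbf{exploring starts}: every state/action combination has an equal probability of being used as the start state/action.\n\n\\subsection{Monte Carlo Control with exploring starts}\n\nJust as with \\textbf{dynamic programming}, the principle of \\textbf{generalized policy iteration} can be used. Monte Carlo with exploring starts is illustrated in Figure~\\ref{fig:monte carlo exploring starts}; it uses a random start pair to address the exploration problem.\n\n\\begin{figure}[H]\n\t\\begin{enumerate}\n\t\t\\item Take random $S_0$ and $A_0$\n\t\t\\item Generate an entire episode\n\t\t\\item Average the returns\n\t\t\\item Create an improved policy $\\pi(s_t) = \\argmax_a q(s_t, a)$ and repeat\n\t\\end{enumerate}\n\\label{fig:monte carlo exploring starts}\n\\caption{Monte Carlo Exploring starts}\n\\end{figure}\n\n\\subsection{Monte Carlo without exploring starts}\nOn-policy methods generally use \\textbf{soft policies}. Soft policies have a non-zero probability for every action, $\\pi(a | s)>0$.\n\n\\subsubsection{On-policy method}\n\n$\\epsilon$-greedy is a commonly used on-policy strategy, illustrated in equation~\\ref{eq:epsilon greedy on-policy}. Generalized policy iteration only requires that the policy moves towards the greedy policy, which is still the case here, just a bit more slowly.\n\n\\begin{equation}\n\\begin{split}\np_{non-greedy} & = \\frac{\\epsilon}{A(s)} \\\\\np_{greedy}& = 1 - \\epsilon + \\frac{\\epsilon}{A(s)}\n\\end{split}\n\\label{eq:epsilon greedy on-policy}\n\\end{equation}\n\n\\subsubsection{Off-policy method}\n\n\\begin{itemize}\n\t\\item $\\pi(a|s)$: target policy\n\t\\item $b(a|s)$: behavior policy\n\t\\item Assumption of coverage: $b(a|s)>0$ wherever $\\pi(a|s)>0$\n\t\\item $G_t$: return after time $t$\n\t\\item $\\tau(s)$: set of all time steps when $s$ was visited\n\t\\item $T(t)$: first time of termination after $t$\n\\end{itemize}\n\n\\textbf{Importance sampling} is used in off-policy methods to translate the returns from the behavior policy to the target policy. Given a starting state $S_t$, the probability of a certain trajectory is defined by equation~\\ref{eq:probablity certain trajector off-policy method}. The ratio between the likelihood of a trajectory under the target policy and under the behavior policy is called the \\textbf{importance-sampling ratio}, defined by equation~\\ref{eq:definition importance sampling}. 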
The value function of the target policy, estimated from episodes generated under the behavior policy, is given by equation~\\ref{eq:value function target policy under behavior policy}.\n\n\\begin{equation}\n\\begin{split}\n& p(A_t, S_{t+1}, A_{t+1}... S_T| S_t, A_{t:T-1} \\sim \\pi)\\\\\n& = \\prod_{k=t}^{T-1} \\pi(A_k | S_k)p(S_{k+1}|S_k, A_k)\n\\end{split}\n\\label{eq:probablity certain trajector off-policy method}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\n\\rho_{t:T-1}\n& = \\frac{\\prod_{k=t}^{T-1} \\pi(A_k | S_k)p(S_{k+1}|S_k, A_k)}{\\prod_{k=t}^{T-1} b(A_k | S_k)p(S_{k+1}|S_k, A_k)}\\\\\n& = \\frac{\\prod_{k=t}^{T-1} \\pi(A_k | S_k)}{\\prod_{k=t}^{T-1} b(A_k | S_k)}\n\\label{eq:definition importance sampling}\n\\end{split}\n\\end{equation}\n\n\\begin{equation}\n\\EX[\\rho_{t:T-1} G_t | s_t = s] =  v_{\\pi}(s)\n\\label{eq:value function target policy under behavior policy}\n\\end{equation}\n\nThere are 2 variants of importance sampling that can be used: \\textbf{ordinary} (equation~\\ref{eq:ordinary importance sampling}) or \\textbf{weighted} (equation~\\ref{eq:weighted importance sampling}). Ordinary importance sampling seems at first sight the most sensible, as it has no bias; it does, however, have unbounded variance, which weighted importance sampling does not. In practice, weighted importance sampling tends to perform better. \n\n\\begin{equation}\nV_{\\text{ordinary}}(s) = \\frac{\\sum_{t\\in \\tau(s)} \\rho_{t:T(t)-1}G_t}{|\\tau(s)|}\n\\label{eq:ordinary importance sampling}\n\\end{equation}\n\n\\begin{equation}\nV_{\\text{weighted}}(s) = \\frac{\\sum_{t\\in \\tau(s)} \\rho_{t:T(t)-1}G_t}{\\sum_{t\\in \\tau(s)} \\rho_{t:T(t)-1}}\n\\label{eq:weighted importance sampling}\n\\end{equation}\n\n\n\\begin{center}\n\\begin{tabular}{ c | c}\nordinary importance sampling & weighted importance sampling \\\\\n\\hline\nunbiased & biased, but the bias asymptotically converges to zero \\\\\nunbounded variance & bounded variance\n\\end{tabular}\t\n\\end{center}\n\nImportance sampling can use recursion to implement the value function incrementally (equation~\\ref{eq:recursive importance sampling}), with $W_n = \\rho_{t_n:T(t)-1}$.\n\n\\begin{equation}\n\\begin{split}\n&V_{n+1} = V_n + \\frac{W_n}{C_n}[G_n - V_n] \\\\\n&C_{n+1} = C_n + W_{n+1} \\\\\n&W_{n+1} = \\frac{\\pi(A_t | S_t)}{b(A_t | S_t)}W_n\n\\end{split}\n\\label{eq:recursive importance sampling}\n\\end{equation}\n\n\\subsection{Off-Policy Monte Carlo Control}\n\n\\begin{algorithm}\n\\begin{algorithmic}\n\t\\State $G \\gets 0$\n\t\\State $W \\gets 1$\n\t\\For{$t \\gets T-1$  to $0$}\n\t\t\\State $G \\gets \\gamma G + R_{t+1}$\n\t\t\\State $C(S_t, A_t) \\gets C(S_t, A_t) + W$\n\t\t\\State $Q(S_t, A_t) \\gets Q(S_t, A_t) + \\frac{W}{C(S_t, A_t)}[G-Q(S_t, A_t)]$\n\t\t\\State $\\pi(S_t) \\gets \\argmax_a(Q(S_t,a))$\n\t\t\\If{$A_t \\neq \\pi(S_t)$}\n\t\t\t\\State break\n\t\t\\EndIf\n\t\t\\State $W \\gets \\frac{W}{b(A_t|S_t)}$\n\t\\EndFor\n\\end{algorithmic}\n\\caption{Off policy monte carlo control}\n\\label{alg:off policy monte carlo control}\n\\end{algorithm}\n\nIt is important to note that algorithm~\\ref{alg:off policy monte carlo control} can only learn from the tails of trajectories. This can make the algorithm rather slow; if that is a problem, it can be addressed by using temporal-difference learning.\n\n\\subsection{Discount-aware Importance Sampling}\n\nImportance sampling computes $\\rho$ using all the factors, even though, when $\\gamma$ is close to zero, the rewards after the first few timesteps barely contribute to the return. 
They do, however, still influence the importance-sampling ratio, and so still increase the variance. Discount-aware importance sampling takes this into account.\n\nThe return can be written as a sum of flat partial returns (equation~\\ref{eq:flat partial returns}), as demonstrated by equation~\\ref{eq:return G_t as sum of partial returns}.\n\n\\begin{equation}\n\\begin{split}\n& \\bar{G}_{t:h} = R_{t+1} + R_{t+2} + ... + R_h   \\\\ \n& 0 \\leq t < h \\leq T\n\\end{split}\n\\label{eq:flat partial returns}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nG_t & = R_{t+1} + \\gamma R_{t+2} + ... + \\gamma^{T-t-1}R_T \\\\\n& = (1-\\gamma)R_{t+1} \\\\\n& + (1-\\gamma)\\gamma (R_{t+1} + R_{t+2}) \\\\\n& + (1-\\gamma)\\gamma^2(R_{t+1} + R_{t+2} + R_{t+3}) \\\\\n& \\quad ... \\\\\n& + \\gamma^{T-t-1}(R_{t+1} + R_{t+2} + ... + R_{T}) \\\\\n& = (1-\\gamma)\\sum_{h=t+1}^{T-1}\\gamma^{h-t-1}\\bar{G}_{t:h} + \\gamma^{T-t-1}\\bar{G}_{t:T}\n\\end{split}\n\\label{eq:return G_t as sum of partial returns}\n\\end{equation}\n\nUsing equation~\\ref{eq:return G_t as sum of partial returns} we can define ordinary and weighted discount-aware importance sampling:\n\n\\begin{equation}\nV_{\\text{ordinary}}(S) = \\frac{\\sum_{t\\in\\tau(s)}(1-\\gamma)\\sum_{h=t+1}^{T-1}\\gamma^{h-t-1}\\bar{G}_{t:h}\\rho_{t:h-1} + \\gamma^{T-t-1}\\bar{G}_{t:T}\\rho_{t:T(t)}}{|\\tau(S)|}\n\\label{eq:discount-away ordinary importance sampling}\n\\end{equation}\n\n\\begin{equation}\nV_{\\text{weighted}}(S) = \\frac{\\sum_{t\\in\\tau(s)}(1-\\gamma)\\sum_{h=t+1}^{T-1}\\gamma^{h-t-1}\\bar{G}_{t:h}\\rho_{t:h-1} + \\gamma^{T-t-1}\\bar{G}_{t:T}\\rho_{t:T(t)}}{\\sum_{t\\in\\tau(s)}(1-\\gamma)\\sum_{h=t+1}^{T-1}\\gamma^{h-t-1}\\bar{G}_{t:h} + \\gamma^{T-t-1}\\bar{G}_{t:T}}\n\\label{eq:discount-aware weighted importance sampling}\n\\end{equation}\n\n\\subsection{Per Decision Importance Sampling}\n\nThe off-policy estimator can be written as a sum of rewards, as demonstrated in equation~\\ref{eq:off policy estimator as sum of rewards}. Each of the terms carries a reward \\textbf{and the same importance-sampling ratio}. This ratio can be written out as in equation~\\ref{eq:importance sampling ratio in subterms}. The factors after the reward \\textbf{average out to 1} (equation~\\ref{eq:average out importance sampling factor}), which shortens the ratio attached to each reward in expectation, as demonstrated by equation~\\ref{eq:reduced sampling ratio term}. Finally, the \\textbf{per-decision importance-sampling value function} becomes equation~\\ref{eq:per-decision importance sampling value function}.\n\n\\begin{equation}\n\\begin{split}\n\\rho_{t:T-1}G_t & = \\rho_{t:T-1} (R_{t+1} + \\gamma R_{t+2} + ... + \\gamma^{T-t-1}R_T) \\\\\n& = \\rho_{t:T-1}R_{t+1}  + \\gamma \\rho_{t:T-1} R_{t+2} + ... 
+  \\gamma^{T-t-1}\\rho_{t:T-1}R_T\n\\end{split}\n\\label{eq:off policy estimator as sum of rewards}\n\\end{equation}\n\n\\begin{equation}\n\\rho_{t:T-1}R_{t+1} = \\frac{\\pi(A_t|S_t)\\pi(A_{t+1}|S_{t+1})\\cdots\\pi(A_{T-1}|S_{T-1})}{b(A_t|S_t)b(A_{t+1}|S_{t+1})\\cdots b(A_{T-1}|S_{T-1})} R_{t+1}\n\\label{eq:importance sampling ratio in subterms}\n\\end{equation}\n\n\\begin{equation}\n\\EX \\left[\\frac{\\pi(A_k|S_k)}{b(A_k|S_k)}\\right] = \\sum_a b(a|S_k)\\frac{\\pi(a|S_k)}{b(a|S_k)} = \\sum_a \\pi(a|S_k) = 1\n\\label{eq:average out importance sampling factor}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\n& \\EX[\\rho_{t:T-1}R_{t+1}] = \\EX[\\rho_{t:t}R_{t+1}] \\\\\n& \\EX[\\rho_{t:T-1}R_{t+k}] = \\EX[\\rho_{t:t+k-1}R_{t+k}] \\\\\n\\end{split}\n\\label{eq:reduced sampling ratio term}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\n& \\EX[\\rho_{t:T-1}G_t] = \\EX[\\widetilde{G_t}] \\\\\n& \\widetilde{G_t} = \\rho_{t:t}R_{t+1} + \\gamma \\rho_{t:t+1}R_{t+2} + \\gamma^2 \\rho_{t:t+2}R_{t+3} + ... \\\\\n& + \\gamma^{T-t-1}\\rho_{t:T-1}R_T \\\\\n& V(S) = \\frac{\\sum_{t\\in\\tau(s)}\\widetilde{G_t}}{|\\tau(s)|}\\\\\n\\label{eq:per-decision importance sampling value function}\n\\end{split}\n\\end{equation}\n\n\n\n\n\\section{Exercises}\n\n\\subsection{Exercise 5.1 page 94}\nThe last two rows at the rear mean you hold 21 or 20, so the odds that you will win are very good (hence the high value function).\n\nThe last row on the left means the dealer shows an ace, which puts the dealer at an advantage.\n\nThe front rows are higher in the upper diagram because there is a usable ace: if a bad hit puts you over 21, the ace can count as 1 instead.\n\n\\subsection{Exercise 5.2 page 94}\nBecause this is a Markov process (the cards are drawn with replacement, so they are not exhausted), the odds of winning the second time you are in a given state are just as good as the first time.\n\n\\subsection{Exercise 5.4 page 99}\nThe ``Append G to Returns($S_{t}, A_{t}$)'' step would be replaced by keeping a count per pair and updating a running average in a table.\n\n\\subsection{Exercise 5.5 page 105}\n\\textbf{question: Consider an MDP with a single non-terminal state and a single action that transitions back to the non-terminal state with probability $p$ and transitions to the terminal state with probability $1-p$. Let the reward be $+1$ on all transitions, and let $\\gamma = 1$. Suppose you observe one episode that lasts 10 steps, with a return of 10. What are the first-visit and every-visit estimators of the value of the non-terminal state?}\n\n10 steps means 9 transitions back to the non-terminal state and one into the terminal state. The rewards are always $+1$, so the total return is 10.\n\nIf $\\gamma = 1$, then $G \\gets G + R_{k+1}$ at every backup, so the returns observed from the successive visits are $10, 9, 8, \\ldots, 1$.\n\nIn the every-visit case all 10 visits count, the 10th being the one from which we leave the non-terminal state for good: 
$(1+2+3+4+5+6+7+8+9+10)/10 = 55/10 = 5.5$, so the every-visit estimate of the value is 5.5.\n\nIn the first-visit case we only count the first visit, whose return is the full episode return of 10, so the first-visit estimate is 10.\n\n\\subsection{Exercise 5.6 page 108}\n\\textbf{question: What is the equation analogous to (5.6) for action values $Q(s,a)$ instead of state values $V(s)$, again given returns generated using b?}\n\n$Q(s, a)$ is similar to $V(s)$; it conditions on the first action $a$ being taken, so the importance-sampling ratio starts one step later:\n\n\\begin{equation}\nQ(s, a) = \\frac{\\sum_{t \\in J(s,a)} \\rho_{t+1:T(t)-1} G_t }{\\sum_{t \\in J(s,a)} \\rho_{t+1:T(t)-1}}\n\\end{equation}\n\n\\subsection{Exercise 5.7 page 108}\n\\textbf{question: In learning curves such as those shown in Figure 5.3 error generally decreases with training, as indeed happened for the ordinary importance-sampling method. But for the weighted importance-sampling method error first increased and then decreased. Why do you think this happened?}\n\nWith only a few samples, the bias of the weighted estimator is the dominating error, and it grows at first as more weighted samples are added; only once there are many samples does the bias fade away and the error decrease.\n\n\\subsection{Exercise 5.8 page 108}\n\\textbf{question: The results with Example 5.5 and shown in Figure 5.4 used a first-visit MC method. Suppose that instead an every-visit MC method was used on the same problem. Would the variance of the estimator still be infinite? Why or why not?}\nA first-visit MC estimator has fewer terms than an every-visit one, and all terms are positive, so the every-visit variance would also go to infinity.\n\n\\subsection{Exercise 5.11 page 111}\nIf the target policy is a greedy deterministic policy and the loop is broken off when $\\pi (S_t) \\neq A_t$, then whenever the update is reached $\\pi(A_t|S_t)=1$ by definition.  \n", "meta": {"hexsha": "ffa3a6079556329e14d9bb916d4fc2438c46c1b2", "size": 13732, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RL/notes/TeX_files/chapter05.tex", "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RL/notes/TeX_files/chapter05.tex", "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RL/notes/TeX_files/chapter05.tex", "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.1167883212, "max_line_length": 770, "alphanum_fraction": 0.7138071657, "num_tokens": 4392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5867953187769144}}
{"text": "\\documentclass[12pt, a4paper]{article}\n\\usepackage[a4paper, bindingoffset=0.2in, %\nleft=0.5in,right=0.5in,top=0.5in,bottom=0.5in,%\nfootskip=.25in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\\usepackage{physics}\n\n\\title{PSet9 Report}\n\\author{Ali Abolhassanzadeh Mahani}\n\n\n\\begin{document}\n\t\\maketitle\n\t\\section{Theory}\n\tThe problem is to simulate 100 argon atoms in a space. The potential for the interaction of \n\tthe atoms is the leonard-jones potential as below:\n\t\\begin{equation}\n\t\tU_{ij} = 4\\epsilon \\left(\\left(\\frac{\\sigma}{r_{ij}}\\right)^{12} - \n\t\t\\left(\\frac{\\sigma}{r_{ij}}\\right)^6\\right), \\; \n\t\tr_{ij} \\equiv \\norm{\\vec{r}_i - \\vec{r}_j}\n\t\t\\label{eq:leonard_jones}\n\t\\end{equation}\n\n\tFirst, there are some matters we need to clarify before jumping into the code.\n\tTrying to simulate the Argon atoms using the SI units is impossible. It takes a lot of time and\n\tspace that is unnecessary. In order to solve this problem, we come up with characteristic \n\tunits for the system. Our parameters are as follows:\n\t\\begin{equation}\n\t\tm_{arg} \\sim 10^{-26} (kg), \\; \\sigma = 3.47 \\times 10^{-10} (m), \\; \\epsilon \\simeq \n\t\\end{equation}\n\tIf we make the following changes:\n\t\\begin{equation}\n\t\tm \\rightarrow \\frac{m}{m_{arg}}, \\; U \\rightarrow \\frac{U}{\\epsilon}, \\; r \\rightarrow \n\t\t\\frac{r}{\\sigma}\n\t\\end{equation}\n\tthen Eq.~\\ref{eq:leonard_jones} becomes:\n\t\\begin{equation}\n\t\tu_{ij} = 4 \\left(\\frac{1}{r_{ij}^{12}} - \\frac{1}{r_{ij}^6}\\right)\n\t\\end{equation}\n\t\n\tUsing this transformation, the equations for kinetic and potential energy of the system \n\tare:\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\tK &= \\frac{1}{2}\\sum_{i=1}^{n} v_i^2,\\\\\n\t\t\tU &= \\sum_{i < j} u_{ij} = \\frac{1}{2} \\sum_{i, j = 1}^{n} u_{ij}\n\t\t\\end{aligned}\n\t\\end{equation}\n\n\t\\section{The MDSystem Class}\n\tI start by writing a class called \\texttt{MDSystem} that takes in the size of the box, the\n\tnumber of atoms, and the initial position of the atoms in the box. Then, it assigns a speed \n\taccording to the temperature of the system and speads the velocities in random directions.\n\tThen, we make sure that the velocity of the center of mass (CoM)  of the system is zero,\n\tby calling \\texttt{stabilize\\_system()}. This is because the notions of temperature is defined \n\tin a stationary system in the LAB frame. This is all the work of \\texttt{\\_\\_init\\_\\_()}\n\t\n\tThe algorithm for the evolution of the system is the \"verlet\" method, since it intrinsicly conserves the energy of the system as discussed in the previous homeworks. \n\t\n\tThen, we have the periodic boundary conditions that need to be applied both when a \n\tparticle is passing through one side of the box, and when we are calculating the interaction \n\tof the particles. For the interactions, since the Leonard-Jones interaction is short-range, we\n\tset a \"cutoff radius\" from each particle and assume that only the particles within that range \n\tcan interact with the subjected atom. The value chosen for the cutoff radius is $2.5$ in the \n\tcharacteristic units.\n\t\n\t\\section{The Simulation}\n\t\\subsection{Analysis}\n\t\\subsubsection{Conservation of Energy and number of atoms}\n\tFirst, we check the conservation of energy and the number of particles in the left side of the \n\tbox. 
Since our initial setup was to place the atoms in a grid in the left half of the box, we \n\tshould expect the value to start from total number of atom (in our case 100) and move towards half of that value (50 for our case), and then to oscillate about that value. As for\n\tenergy, we expect it to be conserved since our system is micro-canonical (NVE) as seen in\n\tFig~\\ref{fig:energy_and_left}\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=.45\\linewidth]{../results/particles_on_left100_50000.jpg}\n\t\t\\includegraphics[width=.45\\linewidth]{../results/energy_conservation100_50000.jpg}\n\t\t\\caption{As expected, on the left, we can see that the number of particles on the \n\t\tleft side of the box reaches, and oscillates about half of the atoms. And on the right, we\n\t\tcan clearly witness the conservation of energy}\n\t\t\\label{fig:energy_and_left}\n\t\\end{figure}\n\n\t\\subsubsection{Temperature, Pressure, and equilibrium}\n\tNow, we turn our heads to a thermodynamic observable, temperature. The thermodynamic \n\tequation for temperature is:\n\t\\begin{equation}\n\t\t\\frac{1}{2} \\sum_{i=1}^{N}\\sum_{e=1}^{d} v_{ie}^2 = Nd \\frac{1}{2}k_B T\n\t\\end{equation}\n\twhere $N$ is the number of particles, $d$ is the dimension of the system (in our case 2) and \n\tT, the temperature. In the system that $k_B$ is $1$, we can rewrite the equation as follows:\n\t\\begin{equation}\n\t\t\\sum_{i=1}^{N} \\sum_{e=1}^{d} v_{ie}^2 = N d T\n\t\\end{equation}\n\tone should take note that in our case, we used a convention, namely that the system is \n\tstabilized (i.e., the CoM velocity is zero). In Thermodynamics this doesn't matter due to the \n\tsize of our ensemble, but in MD it matters. So we have:\n\t\\begin{equation*}\n\t\tN d T \\rightarrow (N-1) d T\n\t\\end{equation*}\n\t\n\tNow we use the \"Virial Theorem\" to find and equation for the pressure of the system.\n\t\\begin{equation}\n\t\tP V = N k_B T - \\frac{1}{2 N d} \\sum_{i,j = 1}^{N} \\vec{F}_{ij}.\\vec{r}_{ij}\n\t\\end{equation}\n\t Now we have an equation to calculate pressure in-time. \n\t \n\t The values for temp. and pressure are available in Fig~\\ref{fig:temp_pressure}\n\t \n\t \\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=.45\\linewidth]{../results/temp100_5000.jpg}\n\t\t\\includegraphics[width=.45\\linewidth]{../results/pressure100_5000.jpg}\n\t\t\\caption{On the left, you can see the plot for temperature over time. the unit for temp. is Kelvin. On the right, you can see the values for \n\t\treduced pressure. The unit for time on both plots is $10^{-2} \\tau$ where $\\tau$ is time in reduced units.\n\t\tthe inital value for velocity is $1.5$}\n\t\t\\label{fig:temp_pressure}\n\t \\end{figure}\n \n \tWe find the temperature and pressure of the system in equilibrium, by taking the mean value over time. 
The values are found below:\n \t\\begin{equation*}\n \t\tT_{eq} = 87 \\pm 5 K, \\; P_{eq} = 0.080 \\pm 0.005 \\text{(reduced unit)}\n \t\\end{equation*}\n\t\n\t\\subsubsection{Velocity Correlation and Relaxation Time}\n\tI chose the initial conditions to be the same as in the previous section.\n\tI ran the simulation for 2000 time steps (waiting for the system to reach equilibrium), and then I saved the\n\tdata for the velocity of particles for every 2 time steps in a file in the \\texttt{data/} directory.\n\t\n\tThen using the file \\texttt{velocity\\_correlation.py}, I found the correlation and plotted in \n\tFig~\\ref{fig:velocity_correlation}\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{../results/velocity_correlation.jpg}\n\t\t\\caption{Velocity correlation for different values of seperation $\\tau$. The initial velocity in the simulation was \n\t\t\t$0.5$.}\n\t\t\\label{fig:velocity_correlation}\n\t\\end{figure}\n\t\n\tAnd so by crossing the auto-correlation curve with the $y = e^{-1}$ line, we find the relation time of the \n\tsystem, $T$, to be:\n\t\\begin{equation}\n\t\tT = 2564 \\times 10^{-3} \\tau = 2.564 \\tau, \\; \\tau \\simeq 10^{-12} s \\Rightarrow T \\simeq 2.564 \\times \n\t\t10^{-12} s\n\t\\end{equation}\n\t\n\t\\subsubsection{State Equation check with the Leonard-Jones gas}\n\tIn order to check this system with the Leonard-Jones gas, I have shown the P-T plot of the system, available in Fig~\\ref{fig:P-T}\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{../results/pressure_temp.jpg}\n\t\t\\caption{The P-T plot of the system to check with the Leonard-Jones system.}\n\t\t\\label{fig:P-T}\n\t\\end{figure}\n\t\n\t\\subsection{Phase transition of the system for different Temperatures}\n\tIn order to see the phase transition of the system we plot the energy E of the system vs. the temperature. And by the change in \n\t$C_v = \\derivative{E}{T}$ we can see the phase transitions. I varied the energy and temperature of the system by changing the initial velocity\n\tof the particles. For each initial value, I ran the simulation 2000 time steps for the system to reach equilibrium, then, I ran 5000 times and took\n\tthe temperature of the system and returned the mean and the standard deviation as values and error. The plot is available in \n\tFig~\\ref{fig:phase_transition}\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{../results/phase_transition.jpg}\n\t\t\\caption{Phase Transition of the MD system. As one can see, from about 80 to 90 Kelvin, the system is in the liquid phase, gas for over 90,\n\t\tand crystal for lower than 80.}\n\t\t\\label{fig:phase_transition}\n\t\\end{figure}\n\t\n\t\\section{Animating the system}\n\tAgain, I wait 2000 time steps for the system to reach equilibrium and then, I start animating the system.\n\tThe initial velocity for the crystal phase is $0.5$, for liquid, $1.5$, and for gas $3$. 
The animations are accessible in the \\texttt{animations/}\n\tsubdirectory.\n\\end{document}", "meta": {"hexsha": "494820eaaaac9dc5f77c9ee9b4230da39e98493a", "size": 8860, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PSet9/report/report.tex", "max_stars_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_stars_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-10T14:33:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T14:33:35.000Z", "max_issues_repo_path": "PSet9/report/report.tex", "max_issues_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_issues_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PSet9/report/report.tex", "max_forks_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_forks_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.9502762431, "max_line_length": 179, "alphanum_fraction": 0.7317155756, "num_tokens": 2650, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.5867953152738281}}
{"text": "\\documentclass{article}\n% This section of the document is the preamble\n\n\\usepackage{hyperref,lipsum,amsmath,amssymb,graphicx,youngtab,ytableau,tikz,subcaption}\n\n% TikZ uses its own libraries of commands to make drawing easier. We add TikZ libraries just like including new packages.\n\\usetikzlibrary{arrows,automata,positioning}\n\t\t\n% \\graphicspath{{../Images/}} % the folder in which images are stored for this project\n\n\\hypersetup{colorlinks=true} % make sure our hyperlinks are coloured for visibility\n% We define an Author, Title and Date\n\\author{\\emph{Your Name Here}}\n\\title{A Short \\LaTeX{} Worksheet}\n\\date{March 15\\textsuperscript{th}, 2019}\n\n\\begin{document}\n% Create a title from our Author, Title and Date\n\\maketitle\n% We put our content into section environments\n\\section{Introduction}\n\\label{sec:introduction}\n% I didn't have time to discuss labels today, but adding labels to your document allows you to refer back to the relevant place later in your document using the \\ref{label} command. For example, I could refer to the introduction later in my document by saying `see section \\ref{sec:introduction}'. This would produce the output `see section 1'. The advantage os using labels, is that if I add a reference to section 2, and then later add a new section before the old section 2, the reference will automatically be updated to say section 3. Try adding some references to the various sections and subsections of this document.\nThis worksheet has been created using the \\LaTeX{} typesetting system. By reproducing this document you should be able to develop your skills with \\LaTeX{}. Start by copying the basic example from the talk into a new tex file and then \\emph{typesetting} your file to produce a PDF. To make a document with a title and some \\emph{sections}, see the second example from the talk. You should \\emph{typeset} your file regularly to see whether it is looking correct. Try it now if you haven't already. If everything has worked properly you should now have a PDF file of the worksheet up to this point. \n\n\\section{Content}\n\\label{sec:content}\n\n\\emph{Sections} allow us to add structure our documents. For a short document such as this, we will use \\emph{sections} and \\emph{subsections}. A longer document such as a thesis would also use \\emph{chapters} (and the \\emph{report} document class). The first thing we will discuss is how to typeset mathematics. To start with, ignore the `verbatim' code and hyperlinks in the following subsection, these will be explained later. A good reference for some of the symbols and commands in the following subsections is \\href{https://wch.github.io/latexsheet/latexsheet.pdf}{the \\LaTeXe{} cheat sheet}.\n\n\\subsection{Typesetting Mathematics}\n\\label{sub:typesetting_mathematics}\n\nOne of the reasons to use \\LaTeX{} is that it can produce very good looking mathematical formulae. We have already seen some examples in the talk. \\LaTeX{} only recognises mathematical commands while in `maths mode'. There are a few ways to enter maths mode -- which one we use depends on how we want the output to be formatted. If I want the maths to be `inline', part of the current line, then I place it between dollar symbols. Therefore \\$ x = 4 \\$ produces $x=4$. 
If I want to produce an equation which is placed on its own line then I can use the \\emph{\\textbf{equation} environment}.\n\\begin{verbatim}\n\t\\begin{equation}\n\t    x^3 + 2 \\ge 6.\n\t\\end{equation}\n\\end{verbatim}\nThis produces\n\\begin{equation}\n\tx^3 + 2 \\ge 6.\n\\end{equation}\nSee \\href{https://www.overleaf.com/learn/latex/Mathematical_expressions}{this webpage} for more information on mathematics modes. Try to reproduce the rest of this subsection as closely as you can.\n\nA \\emph{bijection} is a map between two sets which is both \\emph{injective} and \\emph{surjective}. Let us now define these terms. Given two sets $A$ and $B$, a map $\\phi:A \\to B$ is said to be injective if\n\\begin{equation}\\label{eq:injective} % I can also add references to my equations using labels.\n\t\\phi(a_1) = \\phi(a_2) \\iff a_1 = a_2,\\ \\forall\\ a_1,a_2 \\in A.\n\\end{equation}\nFurthermore, the map is surjective if\n\\begin{equation}\\label{eq:surjective}\n\t\\forall\\ b \\in B,\\ \\exists\\ a \\in A\\ |\\ \\phi(a)=b.\n\\end{equation}\n% Try uncommenting the following line.\n% If both equation \\ref{eq:injective} and equation \\ref{eq:surjective} are satisfied, then the map is said to be \\emph{bijective}.\n\nThe talk also covered the important result of the \\emph{Gaussian integral}\n\\begin{equation}\n\t\\int_{-\\infty}^\\infty e^{-\\frac{1}{2}\\left( \\frac{x - \\mu}{\\sigma} \\right)^2} dx = \\sigma\\sqrt{2 \\pi}.\n\\end{equation}\n\nIn the partitions project, we saw how the geometric progression formula\n\\begin{equation}\n\t\\frac{1}{1-x} = 1 + x + x^2 + \\ldots + x^n + \\ldots = \\sum_{n=0}^\\infty x^n,\n\\end{equation}\ncould be used to deduce the generating function for the partition number $p(n)$,\n\\begin{equation}\n\t\\sum_{n=0}^\\infty p(n) x^n = \\prod_{n=1}^\\infty \\frac{1}{1-x^n}.\n\\end{equation}\n\n\\subsection{Verbatim} % (fold)\n\\label{sub:verbatim}\n\nIn order to explain some of the commands that you will need to add to your file, we will need to show the names of the commands on the worksheet. In order for \\LaTeX{} to display the command rather than process it we use an \\emph{environment} called \\textbf{verbatim}.\n\\begin{verbatim}\n\t\\begin{verbatim}\n\t    \\LaTeX{} is now printed rather than processed.\n\t\\end{verbatim }\n\\end{verbatim}\n% Note the space before the closing brace in the inner \\end{verbatim }. This is so \\LaTeX{} understands the difference between the one I want to print verbatim and the actual end of the verbatim environment. \nWe can also produce verbatim output in the middle of a line using \\verb!\\verb|\\LaTeX{}|!.\n% We can use either the exclamation mark or the pipe mark to indicate what content is to be inline verbatim. As previously, I have to use one of each to ensure that \\LaTeX{} understands which one ends the actual verb environment, and which is to be printed verbatim.\n\\subsection{Packages}\n\\label{sub:packages}\n\nOne of the advantages of \\LaTeX{} (or \\TeX{}) is that it can be extended using \\emph{packages}. A package is \\emph{loaded} by adding the line\n\\begin{verbatim}\n\t\\usepackage{package_name}\n\\end{verbatim}\ninto the \\emph{preamble} of your tex file. The preamble is the part of the file before the line \\verb|\\begin{document}|. Everything between \\verb|\\begin{document}| and \\verb|\\end{document}| is called the \\emph{body} of the document. We can use a package called \\href{https://ctan.org/pkg/lipsum?lang=en}{lipsum} to produce dummy text. 
This hyperlink was in turn created with a package called \\href{https://ctan.org/pkg/hyperref}{hyperref} and the command used was\n\\begin{verbatim}\n\t\\href{https://ctan.org/pkg/lipsum?lang=en}{lipsum}.\n\\end{verbatim}\nThe following paragraph is produced using the lipsum package; you can read about how to use this package in the package documentation on the previously linked webpage for lipsum -- the following is lipsum paragraph 66.\n\n\\lipsum[66]\n\nTwo packages that we will often want to use are the \\href{https://michael-prokop.at/latextug/amsldoc.pdf}{amsmath and amssymb} packages, which are documented at the previously linked page. They add more mathematical commands that you can then use in your documents, such as matrices, unnumbered environments and mathematical fonts. Once I have loaded amsmath and amssymb, I can produce the output\n\\begin{equation*}\n\tA = \\begin{pmatrix}\n\t\t4 - 2i & i \\\\\n\t\t-i & a\n\t\\end{pmatrix} \\in M_2(\\mathbb{C}).\n\\end{equation*}\nThese packages also allow us to align equations using the \\textbf{align} environment. We can use this to define two block matrices $B$ and $C$. We can then give the multiplication of these matrices in a new equation environment. If\n\\begin{align}\n\tB &= \\left(\\begin{array}{c | c} % The c here is to centre align the array columns.\n\t\tP & Q \\\\\n\t\t\\hline\n\t\tR & S\n\t\\end{array}\\right),\\nonumber \\\\\n\tC &= \\left(\\begin{array}{c | c}\n\t\tW & X \\\\\n\t\t\\hline\n\t\tY & Z\n\t\\end{array}\\right),\n\\end{align}\nthen\n\\begin{equation}\n\tBC = \\left(\\begin{array}{c | c}\n\t\tPW + QY & PX + QZ \\\\\n\t\t\\hline\n\t\tRW + SY & RX + SZ\n\t\\end{array}\\right).\n\\end{equation}\n% There's quite a lot going on in the previous few lines, this was supposed to be the hardest part to reproduce for anyone who already had some experience with latex. The \\left( and \\right) commands are used to produce parentheses which scale automatically based on what is inside them. The array environment lets me format content in columns and specify a separator. I use this to produce the vertical line of the block matrix, the horizontal line is produced with the \\hline command. The ampersand character & is used by \\LaTeX{} to align elements. It is used between the B and the equals and between the C and the equals to align these two lines. The double slash \\\\ after the first right) is used to end the first line and start a new line. The ampersand is also used inside the array environment to tell \\LaTeX{} how to align the elements of the columns. Finally, the \\nonumber command suppresses the equation number for the first line of the align environment.\n\n\\section{Images}\n\\label{sec:images}\nYou may want to include images in a \\LaTeX{} document. If you already have the image file saved on your computer, you can do this using the command\n\\begin{verbatim}\n\t\\includegraphics[scale=1]{imagename}\n\\end{verbatim}\n\nFor this to work, the image must be in the same folder as your tex file and we must include the \\emph{graphicx} package. This adds an image like so, \\includegraphics[scale=0.1]{DULogo}.\n%In fact, we can put our pictures in a separate location to our tex file by setting a 'graphics path'. We do this by putting the command '\\graphicspath{{location}}' in our document preamble. I've done this in the preamble to this document. Note that the location can either be given as an absolute folder path such as 'C:/...' on Windows, or '/...' 
on Unix based systems (Mac, Linux, etc.), or as a relative folder path such as '../folder'.\n\nThe easiest way to add an image to a \\LaTeX{} document is therefore to create an image file (using a drawing tool, or by hand and then taking a picture of your drawing) and then to include the image using the \\verb|\\includegraphics| command.\n\nImages often look better when included in a \\emph{figure} environment as shown in figure \\ref{fig:DULogo}. This also allows us to label the image and add a caption. See the documentation at \\href{https://www.overleaf.com/learn/latex/Inserting_Images}{Overleaf} for more details.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\t\\includegraphics[scale=0.1]{DU-logo}\n\t\\caption{The Durham University Logo}\n\t\\label{fig:DULogo}\n\\end{figure}\n% The options to the figure environment specify the places where LaTeX can place the figure. In this example, [htbp] means 'here', 'top', 'bottom', 'page'. This means that LaTeX will first try to place the figure in the location that it appears in the source tex file. If it can't place the figure there (what `can't' means here is complicated; see https://tex.stackexchange.com/questions/39017/how-to-influence-the-position-of-float-environments-like-figure-and-table-in-lat/39020#39020 for a detailed explanation), then it next tries to place it at the top of either the current page, or the next page. If it can't do this, it tries to place it at the bottom of the current or next page. Finally, if none of the previous options work, it creates a new page for the figure.\n\nThere also exist many packages for creating images. Those on the partitions project may want to know about \\href{http://www.ctex.org/documents/packages/math/youngtab.pdf}{youngtab} or \\href{http://anorien.csc.warwick.ac.uk/mirrors/CTAN/macros/latex/contrib/ytableau/ytableau.pdf}{ytableau}. The image below was created with the command \\verb|\\ydiagram{6,6,1,1}| from the ytableau package. \n\n\\begin{equation*}\n\t\\ydiagram{6,6,1,1}\n\\end{equation*}\n\nAnother more advanced package is called \\href{https://www.overleaf.com/learn/latex/TikZ_package}{Ti\\emph{k}Z}. Using Ti\\emph{k}Z we can create much more complicated diagrams such as the following diagrams for finite state automata and graphs (or trees, networks etc.). These commands are very complicated; have a look at the source tex file to see how they were produced.\n\n\\begin{figure}[htbp]\n\t\\begin{subfigure}[b]{.5\\linewidth}\n\t\t\\centering\n\t\t\t\\begin{tikzpicture}\n\t\t\t\t% We can specify options for the tikz picture using the tikzset command\n\t\t\t\t\\tikzset{\n\t\t\t\t        ->,  % makes the edges directed\n\t\t\t\t        >=stealth', % makes the arrow heads bold\n\t\t\t\t        node distance=3cm, % specifies the minimum distance between two nodes. Change if needed\n\t\t\t\t        every state/.style={thick, fill=gray!10}, % sets the properties for each state\n\t\t\t\t        initial text=start, % sets the text that appears on the start arrow\n\t\t\t\t}\n\t\t\t\t% We define the states to be initial or accepting as required. They are positioned relative to the other nodes.\n\t\t\t    \\node[state, initial] (0p) {$0$p};\n\t\t\t\t\\node[state, below right of=0p] (20p) {$20$p};\n\t\t\t\t\\node[state, above right of=0p] (10p) {$10$p};\n\t\t\t    \\node[state, accepting, below right of=10p] (30p) {$\\ge 30$p};\n\t\t    \t% The sloped command tells tikz to turn the label sideways. 
I used this to help all the labels fit into a relatively compact space.\n\t\t\t    \\draw   (0p) edge[above, sloped] node{$10$p} (10p)\n\t\t\t\t\t\t(0p) edge[below, sloped] node{$20$p} (20p)\n\t\t\t            (10p) edge[left] node{$10$p} (20p)\n\t\t\t\t\t\t(10p) edge[bend right, above, sloped] node{$20$p} (30p)\n\t\t\t\t\t\t(30p) edge[bend right, above, sloped] node{$10$p} (10p)\n\t\t\t\t\t\t(20p) edge[above, sloped] node{$20$p} (30p)\n\t\t\t\t\t\t(20p) edge[bend left, above, sloped] node{$10$p} (30p)\n\t\t\t\t\t\t(30p) edge[bend left, above, sloped] node{$20$p} (20p);\n\t\t\t\\end{tikzpicture}\n\t\t\\caption{The transition diagram for a vending machine}\n\t\t\\label{fig:fsa}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{.5\\linewidth}\n\t\t \\centering\n\t\t \t\\begin{tikzpicture}\n\t\t\t\t% A graph probably wants different options to a FSA\n\t\t\t\t\\tikzset{node distance=2cm, % specifies the minimum distance between two nodes. Change if needed\n\t\t\t\t}\n\t\t\t\t\\node[draw,circle] (a) {$a$};\n\t\t\t\t% For the map/graph colouring examples, you may want to colour your nodes as shown here.\n\t\t\t\t\\node[draw,circle,right of=a,fill=red] (b) {};\n\t\t\t\t% The exclamation mark after orange sets some transparency, try changing it.\n\t\t\t\t\\node[draw,circle,above right of=a,fill=orange!40] (c) {};\n\t\t\t\t\\node[draw,circle,below of=b,fill=none] (d) {};\n\t\t\t\t% This node shows how you can label the nodes in your graphs.\n\t\t\t\t\\node[draw,circle,above of=a,fill=green] (e) {e};\n\t\t\t\t% The next node is never going to appear in the picture, it's used to make one of the curving paths in the diagram.\n\t\t\t\t\\node[above right of=c,draw=none,minimum size=0cm,inner sep=0] (f) {};\n\t\t\t\t% This edge has a label, so could be used to represent capacity or edge weight.\n\t\t\t\t\\draw (a) edge[above,->] node{10} (b);\n\t\t\t\t% I can change the line ends to either include arrows or not as the next few edges show.\n\t\t\t\t\\draw (a) edge[->] (c);\n\t\t\t\t\\draw (b) edge[-] (c);\n\t\t\t\t\\draw (b) edge[<->,right] node{4} (d);\n\t\t\t\t\\draw (a) edge[-] (e);\n\t\t\t\t\\draw (a) edge[bend right] (d);\n\t\t\t\t\\draw (b) edge[bend right] (f);\n\t\t\t\t\\draw (f) edge[bend right,->] (e);\n\t\t\t\t% Can you tell what the .north part of the next line does? Try changing it to '.south west'. The looseness command has been added to stop the line from cutting through node c.\n\t\t\t\t\\draw (b.north) edge[bend right,looseness=2.1] (e.east);\n\t\t\t\\end{tikzpicture}\n\t\t\\caption{A simple graph}\n\t\t\\label{fig:graph}\t\n\t\\end{subfigure}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nRegardless of how much of this sheet you have been able to reproduce, I hope you have learnt something about \\LaTeX{}. In my opinion, it is best to learn \\LaTeX{} simply by starting to use it, and answering any questions you have as you go by using the many excellent resources online. The \\href{https://www.overleaf.com/learn}{Overleaf} website contains a lot of useful help, as does \\href{https://tobi.oetiker.ch/lshort/lshort.pdf}{The Not So Short Introduction To \\LaTeXe{}}. The tex file used to produce this talk will be available shortly so that you can compare it with your tex file. 
I have also added some comments to my file to explain some additional things, so I encourage you to read through if you are interested.\n\n\\end{document}\n", "meta": {"hexsha": "b617beb1471b86e2bc7886fea04d5a7e95bf04ef", "size": 16316, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/postfiles/20190420/LaTeXWorksheet.tex", "max_stars_repo_name": "samfearn/al-folio", "max_stars_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/postfiles/20190420/LaTeXWorksheet.tex", "max_issues_repo_name": "samfearn/al-folio", "max_issues_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/postfiles/20190420/LaTeXWorksheet.tex", "max_forks_repo_name": "samfearn/al-folio", "max_forks_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-06T00:05:16.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-11T14:11:08.000Z", "avg_line_length": 71.8766519824, "max_line_length": 965, "alphanum_fraction": 0.7357195391, "num_tokens": 4517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5867953137079737}}
{"text": "<source-file filename=\"assignment_methods.tex\" project=\"Assignment_methods\">\n\n\\documentclass[12pt]{article}\n\\begin{document} \n\\title{PBGG Notes 2011: The use of assignment methods in population genetics.}\n\\author{Graham Coop$^{1}$ \\\\\n\\small $^1$ Department of Evolution and Ecology \\& Center for Population Biology,\\\\\n}\n\\date{}\n\\maketitle\n\nHere I describe a simple probabilistic assignment to find the probability that an individual of unknown population comes from one of $K$ predefined populations. I then briefly explain how to extend this to cluster individuals into $K$ initially unknown populations. This method is a simplified version of what Bayesian population genetics clustering algorithms such as STRUCTURE (Pritchard et al. Genetics 2000). \n\n\\paragraph{A simple assignment method}\n\nWe have genotype data from unlinked S bi-allelic loci for $K$ populations. The allele frequency of allele $1$ at locus $l$ in population $k$ is denoted by $p_{k,l}$, so that the allele frequencies in population 1 are $p_{1,1},\\cdots p_{1,L}$ and population 2 are $p_{2,1},\\cdots p_{2,L}$ and so on. \n\nYou type a new individual from an unknown population at these $L$ loci. This individual's genotype at locus $l$ is $g_l$, where $g_l$ denotes the number of copies of allele 1 this individual carries at this locus $g_l=0,1,2$). \n\nThe probability of this individual's genotype at locus $l$ conditional on coming from population $k$ (i.e. their alleles being a random HW draw from population $k$) is \n\\begin{equation}\nP(g_l | \\textrm{pop k}) = I(g_l=0) (1-p_{k,l})^2 +  I(g_l=1) 2 p_{k,l} (1-p_{k,l}) + I(g_l=2) p_{k,l}^2\n\\end{equation}\nwhere $I(g_l=0)$ is an indicator function which is $1$ if $g_l=0$ and zero otherwise, and likewise for the other indicator functions. \n\nTherefore, assuming that the loci are independent, the probability of individual's genotypes conditional on them coming from population $k$ is \n\\begin{equation}\nP(\\textrm{new ind.} | \\textrm{pop k})  = \\prod_{l=1}^S P(g_l | \\textrm{pop k}) \\label{eqn_assignment}\n\\end{equation}\n\nWe wish to know the probability that this new individual comes from population $k$, i.e. $P(\\textrm{pop k} | \\textrm{new ind.})$. We can obtain this through Bayes rule \n\\begin{equation}\n P(\\textrm{pop k} | \\textrm{new ind.})  = \\frac{P(\\textrm{new ind.} | \\textrm{pop k}) P(\\textrm{pop k})}{P(\\textrm{new ind.})}\n\\end{equation}\nwhere \n\\begin{equation}\nP(\\textrm{new ind.}) = \\sum_{k=1}^K  \\frac{P(\\textrm{new ind.} | \\textrm{pop k}) P(\\textrm{pop k})}{P(\\textrm{new ind.})}\n\\end{equation}\nis the normalizing constant. We interpret $P(\\textrm{pop k})$ as the prior probability of the individual coming from population $k$, unless we have some prior knowledge we will assume that the new individual has a equal probability of coming from each population $P(\\textrm{pop k})=1/K$.  \n\nWe intepret \n\\begin{equation}\n P(\\textrm{pop k} | \\textrm{new ind.})\n\\end{equation}\nas the posterior probability that our new individual comes from each of our $1,\\cdots, K$ populations.\n\nMore sophisticated versions of this are now used to allow for hybrids, e.g, we can have a proportion $q_k$ of our individual's genome come from population $k$ and estimate the set of $q_k$'s.\n\\paragraph{clustering based on assignment methods}\nWe wish to cluster our individuals into $K$ unknown populations. We begin by assigning our individuals at random to these $K$ populations. 
\n\\begin{enumerate}\n\\item Given these assignments we estimate the allele frequencies at all of our loci in each population. \n\\item Given these allele frequencies we choose to reassign each individual to a population $k$ with probability proportional to eqn. (\\ref{eqn_assignment}).\n\\end{enumerate}\nWe iterate steps 1 and 2 for many iterations. If the data is sufficiently informative, the assignments and allele frequencies will quickly converge. \n\nTechnically, this procedure samples from the joint posterior of our allele frequencies and assignments. \n\n\\end{document}\n\n\n</source-file>\n", "meta": {"hexsha": "014c644ec2f47bdcfc01803f9249552dae08fa33", "size": 3951, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pop_structure/assignment_methods.tex", "max_stars_repo_name": "emjosephs/popgen-notes", "max_stars_repo_head_hexsha": "30b596262543aca87d761365d4e0bf73480559c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 471, "max_stars_repo_stars_event_min_datetime": "2015-02-04T23:51:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-20T15:40:37.000Z", "max_issues_repo_path": "pop_structure/assignment_methods.tex", "max_issues_repo_name": "emjosephs/popgen-notes", "max_issues_repo_head_hexsha": "30b596262543aca87d761365d4e0bf73480559c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2015-12-03T23:14:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-03T18:10:54.000Z", "max_forks_repo_path": "pop_structure/assignment_methods.tex", "max_forks_repo_name": "emjosephs/popgen-notes", "max_forks_repo_head_hexsha": "30b596262543aca87d761365d4e0bf73480559c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 103, "max_forks_repo_forks_event_min_datetime": "2015-02-05T01:36:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T15:08:35.000Z", "avg_line_length": 63.7258064516, "max_line_length": 413, "alphanum_fraction": 0.7468995191, "num_tokens": 1115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5867953105963517}}
{"text": "\\subsection{Geometric proof (using Compactness)}\n\nAfter the exhaustive proof of the previous section, we wanted to\nrefresh the reader with a totally different type of proof for SVD; one\nthat brings a new perspective for the way in which the special basis\nfor \\R{n} is chosen. \\\\\n\nThinking about modular theorem proving, that is, factorizing common\nresults for the sake of clarity; one could consider that the \n\\cref{thm:SVD1} and \\cref{thm:SVD2} are some kind  of common step\nfor many possible SVD proofs. They deal with the task of proving that,\ngiven certain condition over the bases in \\R{n} and \\R{m} ($A\\vec{v_i}\n= \\sigma_i\\vec{u_i}, \\ds{\\forall} i=1\\dots\\func{rank}(A)$); the SVD\nfactorization holds. Those auxiliary \ntheorems then, reduce the task of proving the SVD \n\\cref{thm:SVD} to the much more specific (but hard) subproblem of\nfinding the bases. Per discussion in previous section, we know\nthat such problem can be reduced even further, to the one of finding\nthe orthonormal basis for \\R{n} $\\{\\vec{v_1},\\vec{v_2},\\dots,\\vec{v_n}\\}$, such\nthat its orthogonality is preserved through $A$\n(\\cite{kalman96}). This last remark can be considered the true essence of a\nwhole family of proofs for SVD Theorem, where each one brings a\nparticular way of finding a basis with such an special\nproperty. \\\\\n\nIntuitively, this property of preserving orthogonality could be\nthought as a generalization of the eigenvectors behavior, which\nare the vectors not ``moved'' by transformation $A$ but just scaled; this in\nparticular implies that if the eigenvectors formed an orthogonal basis\nof the space prior application, their images under $A$ will still form a\nbasis (eigenvectors are defined when $A$ has signature $\\R{n} \\fromto\n\\R{n}$, which means $A$ is an square matrix). Something similar occurs\nfor the \\vec{v}\\apos{s} in SVD, but extended to a couple of spaces\ninstead of just one: we can not expect these vectors are not moved by\n$A$, as they migrate of space ($\\R{n} \\fromto \\R{m}$); but we request\nthat whatever landing they do on \\R{m}, they still form an orthogonal\nbasis there. \\\\ \n\nThe proof we are about to present now is thanks to Blank et al\n\\cite{blank89}; which presents a quite interesting approach: using\npure geometric arguments \\footnote{With an implicit use of\n  compactness} he finds the basis whose orthogonality under $A$ gets\npreserved. \\\\\n\nThe whole reasoning occurs on the unit sphere and its image over an\nsquare matrix $A$; and here comes a great connection with our previous\ncomments about the true essence of an arbitrary matrix $A$ of $m \\times\nn$ with rank $r$: the real information is on the mapping from the row\nspace to the column space ($\\C{\\trans{A}} \\fromto \\C{A}$), and\nrestricted to those subspaces $A$ is a bijection $\\R{r} \\fromto\n\\R{r}$; working with a \nbijective linear transformation means that $\\inv{A}$ does\nexist. Without loosing generality then, we will assume that matrix $A$ is\nsquare and non-singular (invertible); because if it was not, we can do\na zoom and focus on its embedded bijection $\\R{r} \\fromto \\R{r}$, and\ncalculate the basis there (and later extend to whole basis of host\nspaces \\R{n} and \\R{m}). \\\\\n\nThe unit sphere is picked as the source of the \\vec{v}\\apos{s}, simply\nbecause we want them to be an orthonormal basis (which in particular\nrequires them to be unitary). 
To start forming this basis, we\ncould pick an arbitrary unit vector; picking a second vector\non the sphere that is perpendicular to the former is no issue\neither; but how do we pick the second one such that the\nproperty $\\vec{v_1} \\perp \\vec{v_2}$ is preserved through $A$? Every\nvector $\\vec{v}$ in \\R{r} defines a hyperplane $P$,\nwhich actually happens to be a subspace on its own. But such a hyperplane\nis actually the orthogonal complement of the subspace generated by\nthe vector alone; which implies that $P \\perp \\vec{v}$. So what? $P$\nmerely becomes an infinite source of vectors orthogonal to the first\nchoice \\vec{v}; but the question of how to pick one such that the\northogonality property gets preserved is still unanswered. \\\\\n\nThe main point of the proof in \\cite{blank89} is that if we choose\nthe first vector \\vec{v} properly,\nmeaning if we choose the one which gets the maximum expansion through\n$A$, then the orthogonality relation of \\vec{v} with the hyperplane it\ndefines gets preserved. Here comes a very powerful and beautiful\nidea: having found a whole subspace that is\northogonal to the first choice vector allows one to forget about the\noriginal host space \\R{r} and focus on that subspace only; the new\nhyperplane would essentially be \\R{r-1} embedded in \\R{r}, and the\nintersection with the sphere in \\R{r} would be the unit sphere in\n\\R{r-1}. Thus, we can apply the same procedure recursively in that\nsubspace, to find the next unit vector such that the\northogonality of its hyperplane gets preserved through $A$! \\\\ \n\nTherefore, the theoretical recursive algorithm would be to find one vector\n\\vec{v_i} at a time, by working only on a subspace of dimension\n$r-i+1$ (\\vec{v_1} is found in the whole \\R{r}, \\vec{v_2} is found in an\nembedded \\R{r-1}, \\vec{v_3} in the nested embedded \\R{r-2}, etc). If\nwe wanted to think of a proof rather than a constructive algorithm, we\ncould use induction and claim that we know how to find the first $r-1$\nvectors in \\R{r-1}, and proceed to find the remaining vector in \\R{r}. \\\\\n\nWe have thus reduced the problem of finding the right basis of\n\\vec{v}\\apos{s} to the following theorem: \\\\\n\n\\begin{theorem}\n\n\\label{thm:svdgeo}\nLet $A$ be a non-singular matrix of $r \\times r$; let the vectors\n$\\vec{v},\\vec{w} \\in \\R{r}$ be\n$\\suchthat A\\vec{v} = \\vec{w} \\ds{\\land}\n\\norm{\\vec{w}}_2 = \\max\\left\\{\\norm{\\vec{x}}_2 \\suchthat \\norm{\\inv{A}\\vec{x}}_2= 1 \\right\\}$,\nand let $S$, $T$ be hyperplanes in\n\\R{r} $\\suchthat$ $S$ is the orthogonal hyperplane of \\vec{v}, and $T$\nis the image of $S$ under $A$. Then $T \\perp \\vec{w}$ ($T$ is also the\northogonal hyperplane of \\vec{w}).\n\\end{theorem}\n\nTo visualize the objects mentioned in the theorem, let us pay\nattention to the following picture, which shows the unit\nsphere in $\\R{r}$, mapped to an ellipsoid in $\\R{r}$ \\footnote{A\n  formal argument is actually required to prove that the image of the\n  unit sphere is an ellipsoid, and even showing that it is the surface of\na quadratic form requires a little development. But we will omit those\ndetails, aiming to keep the spirit of this short proof.}. In order to\nprove that $T \\perp \\vec{w}$, we will use the auxiliary hyperplanes\n$S_1$ and $T_1$ (where the second is the image under $A$ of the\nformer). 
Of course the picture aims to represent \\R{3}, and\nthe hyperplanes would be embeddings of \\R{2}; but let us just consider\nthem a visual representation of arbitrary-dimensional objects (the only\nrepresentation we can imagine, indeed). \\\\\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=13cm]{svd-geo-proof}\n  \\caption{Geometrical proof of SVD Theorem: Transformation $A$ preserves the\n    relation $S \\perp \\vec{v}$}\n  \\label{fig:svdgeop}\n\\end{figure}\n\\hfill\n\n\\begin{proof}\nThe geometric proof goes like this: \\\\\n\n\\begin{enumerate}\n\\item The hyperplane $S_1$ touches the unit sphere only at the point\n  \\vec{v}. It is a geometrical result that such a hyperplane is unique\n  and that $S_1 \\perp \\vec{v}$. \\\\\n\n\\item By definition, the hyperplane $T_1$ is the image of $S_1$ under\n  $A$; also, the whole unit sphere is mapped by $A$ into an ellipsoid. Since\n  $A$ is a bijective function, $T_1$ must touch the ellipsoid only at the\n  point \\vec{w} (just like $S_1$ touches the unit sphere only at the point\n  \\vec{v}). Actually, $T_1$ must be the only hyperplane with such a\n  property (otherwise, we could apply \\inv{A} to that other hyperplane,\n  and it would produce a different hyperplane that also touches\n  the unit sphere at \\vec{v}; contradicting the previous point about the\n  uniqueness of $S_1$). \\\\\n\n\\item Now take another sphere, big enough to cover the deformed image of\n  the unit sphere under $A$ (the ellipsoid). Start to shrink\n  this sphere until it touches the ellipsoid for the first time; by\n  definition, \\vec{w} must be part of those points of first\n  contact. \\\\\n\n\\item Now consider the hyperplane $T_2$ that touches this shrunk sphere,\n  precisely at \\vec{w}. Using the same geometrical theorem as in the first\n  step about $S_1$, we can tell such a hyperplane is unique and is\n  orthogonal to \\vec{w}. \\\\\n\n\\item Since this adjusted sphere entirely covers the ellipsoid (by\n  definition of \\vec{w}), $T_2$ also touches the ellipsoid at\n  the point \\vec{w}. But we argued that $T_1$ was the only hyperplane\n  touching the ellipsoid at \\vec{w} $\\implies$ $T_1 = T_2 \\ds{\\land} T_1\n  \\perp \\vec{w}$. \\\\\n\n\\item Both hyperplanes $S$ and $S_1$ are orthogonal to \\vec{v}; by\n  geometrical arguments they must then be parallel. \\\\\n\n\\item Linear transformations, in particular $A$, preserve parallelism;\n  since $S \\parallel S_1$, their respective images under $A$\n  must be parallel as well. \\\\\n\n\\item The image of $S_1$ under $A$ is $T_1$; then, whatever\n  the image of $S$ under $A$ becomes, it must be parallel to $T_1$. Let us\n  call this image $T$. \\\\\n\n\\item $\\therefore$ $T \\parallel T_1 \\ds{\\land} T_1 \\perp \\vec{w}$\n  $\\implies$ $T \\perp \\vec{w}$.  \n\\end{enumerate}\n\\end{proof}\n\\hfill\n\nThe key choice in the proof was \\vec{w}: lying on the biggest axis of\nthe ellipsoid makes the ellipsoid touch the sphere of radius\n$\\norm{\\vec{w}}_2$ precisely there; and that in turn allows us to transfer the properties\nof the hyperplane that touches the sphere at \\vec{w} to the one that\ntouches the ellipsoid at the same point (as they become the same hyperplane\nindeed). It is no coincidence then, that the spectral norm $\\norm{A}_2$\nis actually defined as $\\norm{\\vec{w}}_2$; that is, it is defined\nas the maximum expansion $\\norm{A\\vec{x}}_2$ that the transformation $A$ causes\non the vectors belonging to the unit sphere. 
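Before moving on, let us run a quick sanity check of \\cref{thm:svdgeo} on a\ntoy example of our own (not taken from \\cite{blank89}): let $A$ be the\ndiagonal matrix on \\R{2} with entries $3$ and $1$. The unit circle is mapped\nto an ellipse with semi-axes $3$ and $1$; the maximal expansion is attained at\n$\\vec{v} = (1,0)$, whose image is $\\vec{w} = A\\vec{v} = (3,0)$ with\n$\\norm{\\vec{w}}_2 = 3 = \\norm{A}_2$. The line $S$ orthogonal to \\vec{v} is the\nspan of $(0,1)$, and its image $T$ under $A$ is again the span of $(0,1)$,\nwhich is indeed orthogonal to \\vec{w}. Had we started instead from the\nnon-maximal direction $\\vec{v} = \\frac{1}{\\sqrt{2}}(1,1)$, its image would be\n$\\frac{1}{\\sqrt{2}}(3,1)$, while the orthogonal direction\n$\\frac{1}{\\sqrt{2}}(1,-1)$ would map to $\\frac{1}{\\sqrt{2}}(3,-1)$; the inner\nproduct of the two images is $4 \\neq 0$, so orthogonality would be lost. The\nchoice of maximal expansion is thus essential. \\\\\n\n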
The spectral norm is actually used in Golub's algebraic proof of the SVD\nTheorem in \\cite{golub13}. \\\\ \n\nThe last question the reader may have now is: where was the\ncompactness property used? It may not be explicitly stated, but it\nlies behind the definition of \\vec{w}: $\\norm{\\vec{w}}_2 =\n\\max\\left\\{\\norm{\\vec{x}}_2 \\suchthat \\norm{\\inv{A}\\vec{x}}_2 = 1\n\\right\\}$. The reason why \\vec{w} exists\nin the first place is because the ellipsoid is a compact set (it\ninherits that property from its pre-image, the unit sphere, thanks to\nthe continuity of the function $A$\\footnote{This is the standard result that\n  continuous functions map compact sets to compact sets.}). Since the norm function $\\norm{.}_2$\nis also continuous, then by a generalization of the Extreme Value\nTheorem from Calculus, it must reach its maximum at a point of the\nellipsoid (we named that particular point \\vec{w} in the proof). \\\\\n\n", "meta": {"hexsha": "8ea57bf4e65c05149e3de3a2e777c469518caa3e", "size": 10885, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "svd-proof-geo.tex", "max_stars_repo_name": "rzavalet/svd-lsi-project-master", "max_stars_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "svd-proof-geo.tex", "max_issues_repo_name": "rzavalet/svd-lsi-project-master", "max_issues_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "svd-proof-geo.tex", "max_forks_repo_name": "rzavalet/svd-lsi-project-master", "max_forks_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.103286385, "max_line_length": 94, "alphanum_fraction": 0.7427652733, "num_tokens": 3102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7772998560157663, "lm_q1q2_score": 0.5867953105963516}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\n\\begin{document}\n\\chapter{Combinatorial Search}\nWhat is the most straightforward way to solve problems? We form it as a search problem in \\textit{search space}--a simple example is to enumerate all possibilities--and search among them what we need. We introduce the general searching strategies and learn some math--combinatorics that we strongly need. \n\\section{Introduction}\nCombinatorial search problems consists of $n$ items and a requirement to find a solution, i.e., a set of $L < N$ items that satisfy specified conditions or constraints. For example, a sudoku problem where a $9\\times 9$ grid is partially filled with number between 1 and 9, fill the empty spots with numbers that satisfy the following conditions:\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width= 0.35\\columnwidth]{fig/250px-Sudoku-by-L2G-20050714.png}\n    \\includegraphics[width= 0.35\\columnwidth]{fig/250px-Sudoku-by-L2G-20050714_solution.png}\n    \\caption{Example sudoku puzzle and its solution}\n    \\label{fig:backtrack_puzzle_1}\n\\end{figure}\n\\begin{enumerate}\n    \\item Each row has all numbers form 1 to 9.\n    \\item Each column has all numbers form 1 to 9.\n    \\item Each sub-grid ($3 \\times3$) has all numbers form 1 to 9.\n\\end{enumerate}\nThe sudo problem together with one possible solution is shown in Fig.~\\ref{fig:backtrack_puzzle_1}. In this case, we have $81$ items, and we are required to fill 51 empty items with the above three constraints. Combinatorial search in computer science mainly studies algorithms that solve exponential or even NP-hard problems, such as:\n\\begin{enumerate}\n    \\item Constraint Satisfication Problems (CSP) such as sudoku, N-queen, and so on.\n    \\item Optimization problems such as Travel Salesman Problems (TSP) and knapsack problems. \n\\end{enumerate}\n\n\\paragraph{Techniques} From Chapter Discreet Programming, we have learned the basics of enumerative combinatorics, including counting principles and knowledge on permutations, combinations, partitions, subsets, and subsequences. Combinatorial search builds on top of this subject, and together through the technique in computer science which is called ``backtracking'', it is able to enumerate the search space   and find the solution effectively and efficiently with necessary speedup methods. \n\n\n\n\\paragraph{Search Problem} A search problem is defined as a problem that there is an algorithmic way to verify its answer. Finding a certain integer on an array of integers is a simplest example. \n\nFor the searching on input instance with a peculiar data structure, the search space itself is all items in this data structure, searching in this space is a simply applying searching strategies specified by data structure and situation, which we have already covered in Chapter. Searching on Data Structures. However, for problems that are not explicitly a searching on a data structure, to form it as a search problem, we need to define \\textit{state}, \\textit{state space}, and \\textit{goal test}. \n\n\\subsubsection{State, State Space, and Goal Test} What is a state? A state can be imagined as a container that holds all information of it. A state Space is a set of all possible states in a problem domain. For a discrete problem, this set will be finite, which is good news. We use $S$ to make the state, and $V_s$ to mark the state space. 
Let's look at an example about subarrays.\n\\paragraph{Example: Subarray Problem (L560)} Given an array of integers and an integer $t$, find the total number of continuous subarrays whose sum equals $t$.\n\\begin{lstlisting}[numbers=none]\na=[1,1,1], t=2\nReturn 2\n\\end{lstlisting}\nIn this question, we care about subarrays and their sums; a state here can be described as ``a subarray starting at index $i$ and ending at index $j$, with sum $value$''. We use $a_{ij}, i\\leq j, j\\in[0,n-1]$ to denote a subarray, and $s_{ij}=v$ a state. Simple math tells us we have a total of $n+(n-1)+(n-2)+...+1=\\frac{n\\times(n+1)}{2}$ subarrays, which gives $|V_s|$. A goal test can be set as ``checking if a subarray has a sum equal to $t$''. Besides, there should always be an \\textit{initial state}; in this case it is an empty array with value $0$, which we mark as $\\emptyset$.\n\nSo far, we can use the following Python code to solve this problem by generating all states and doing the goal testing:\n\\begin{lstlisting}[language=Python]\n# Generate all subarrays\ndef naive_subarray_sum(a, t):\n  if not a:\n    return 0\n  n = len(a)\n  ans = 0\n  # simple enumeration\n  for i in range(n):\n    for j in range(i, n):\n      # define the state and compute its value\n      s_ij = 0 \n      for k in range(i,j+1):\n        s_ij += a[k]\n      # goal test\n      if s_ij == t:\n        ans += 1\n  return ans  \n\\end{lstlisting}\n\n\\subsubsection{State Transfer Model} The problem with the above implementation is that we just blindly compute each state and never think about the connections between states. An easy one that comes to mind is $s_{ij}=s_{ij-1}+a_j$. From $s_{ij-1}$ to $s_{ij}$ we need only one addition, whereas in our previous way, we treated the states independently and spent $j-i+1$ additions to compute each state. This is called a \\textit{state transfer model}. With this prior knowledge, we can completely cut off the innermost \\texttt{for} loop. We can draw the state transfer model as a graph, where there will be an arc $a\\xrightarrow{}b$ if state $a$ can be converted to state $b$:\n\\begin{lstlisting}[numbers=none]\n[]->a_00->a_01->a_02\n[]->a_11->a_12\n[]->a_22\n\\end{lstlisting}\nThe code is shown below:\n\\begin{lstlisting}[language=Python]\n# State transfer model\ndef state_transfer_subarray_sum(a, t):\n  if not a:\n    return 0\n  n = len(a)\n  ans = 0\n  # simple enumeration\n  for i in range(n):\n    s_ij = 0\n    for j in range(i, n):\n      # a state only depends on its previous state\n      s_ij = s_ij + a[j] \n      if s_ij == t:\n        ans += 1\n  return ans \n\\end{lstlisting}\nThe state transfer actually uses the \\textit{reduce and conquer} principle: each state is considered a subproblem of the original problem, and a larger problem can be reduced to a set of smaller subproblems. The state transfer model is equivalent to using this method. \n\\subsubsection{Reduce the State Space Further} There is actually an even better way to do this. We can define a state as ``the number of subarrays with sum equal to $t$ in the array $a_{0,j}$, together with the sum $sum$ of the whole array''; we shorten it to $s(j,sum,count)$. We need to use two data structures to track this state: $dp[j]=count$ and $sum[j]$. In this definition, there is only one free variable $j$, so the total number of states is linear. Now, what is the recurrence relation between $dp[j]$ and $dp[j-1]$? There are two options for the item $a_j$. 
First, $a_j$ can be the last item of a qualified subarray, which means there is a previous state, say at index $k$, with $sum[k]=sum[j]-t$; the qualified subarray is then $a_{k+1},...,a_{j}$. If we know how many such previous states exist, say $c$, these contribute $c$ to $dp[j]$. Second, for the qualified subarrays that do not end at $a_j$, we simply have $dp[j]=dp[j-1]$. Summing up both cases: $dp[j]=c+dp[j-1]$. The problem here becomes how to find $c$ efficiently, possibly with just constant-time operations. We can save the $sum$ information of each state in a hashing table (dictionary), which uses $sum$ as the key and the count of states with that sum as the value. The Python code is:\n\\begin{lstlisting}[language=Python]\nfrom collections import defaultdict\ndef subarraySum(a, t):\n    \"\"\"\n    :type a: List[int]\n    :type t: int\n    :rtype: int\n    \"\"\"\n    sum_i = 0\n    counts = defaultdict(int) # key: prefix sum, value: count\n    counts[0] = 1\n    dp = [0]*(len(a)+1)\n    for idx, v in enumerate(a):\n        sum_i += v\n        if sum_i - t in counts:\n            dp[idx+1] = counts[sum_i-t] + dp[idx]\n        else:\n            dp[idx+1] = dp[idx]\n        counts[sum_i] += 1\n\n    return dp[-1]\n\\end{lstlisting}\n\\subsubsection{Problem Solving Guideline***} Heretofore, we learned the five key components of formulating a search problem: state, initial state, state space, state transfer, and goal test. \n\n\\paragraph{Exhaustive Search} To solve a search problem, there are some directions:\n\\begin{itemize}\n\\item if it is an explicit data structure, applying a particular searching strategy will solve the problem easily!\n\\item if it is an implicit data structure, we generate and save our states, as in our example where we enumerate the states and compute their values in \\texttt{s\\_{ij}}. This also depends on the data structure that the state space forms. This chapter focuses on solving problems in this stage via enumeration and goal testing. To be able to enumerate and count the size of the state space, we need combinatorics and its implementation with the depth-first search based \\textbf{``backtracking''} technique.   \n\\end{itemize}\n\\paragraph{Optimization via Searching} To optimize a search problem within a search strategy, we need to prune as many unqualified or unnecessary branches as possible. We introduce two ways:\n\\begin{itemize}\n    \\item Branch and Prune: This method prunes unqualified branches using the constraints of the problem. This is usually applied to solve constraint satisfaction problems (CSPs), and backtracking is a top technique for doing it.  \n    \\item Branch and Bound: This method prunes unnecessary branches by comparing an estimate at a node with the best solution found so far; if the estimate can never lead us to a better solution, we cut off this branch. Either backtracking or best-first search can be applied. This technique can be applied to solve general optimization problems, such as Traveling Salesman Problems (TSP), knapsack problems, and so on.\n\\end{itemize}\nTherefore, these searching and pruning techniques are widely applied to solve combinatorial problems, constraint satisfaction problems (CSPs), and optimization problems.\n\n\\paragraph{Other Optimization Techniques}\nThere are three directions for improving efficiency, which are trailers for the remaining chapters sitting in this part. So, do not worry if you do not understand it all; read it first and come back to check it out later after you have learned these chapters. \n\\begin{itemize}\n\\item Reducing the cost of computing each state by finding connections between states. 
Such connections appear as arcs in the state transfer graph and can usually be obtained by the \\textbf{Reduce and Conquer} method, which reduces problems to subproblems and uses a recurrence relation to denote the conversion. We will detail this method in Chapter~\\ref{chapter_divide_conquer}. The recurrence relation guarantees a lower cost to get a state from a previous state. We showed the example of computing \\texttt{s\\_{ij}} in the State Transfer Model section, where the time complexity is reduced from $O(n^3)$ to $O(n^2)$.\n\\item Reducing the state space, which might require us to define the state in another way. As we have seen in the above section, this further reduced the cost to $O(n)$.\n\\item If the recurrence relation shows that some states appear in the search tree multiple times, or there exist vertices in the graph with indegree larger than 1, we can use \\textbf{dynamic programming} to cut off the redundancy. Or else, we can make a greedy choice via \\textbf{Greedy Algorithms}.\n\\end{itemize}\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%Backtracking%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Backtracking}\n\nWe know how to count the possibilities and how to enumerate them, manually. Time to learn how to program them efficiently. If we were thinking about iterative programming, we would encounter at least $n$ levels of \\texttt{for} loops, which is not manageable and a pain to write. This is where recursion is great! The recursive method we apply here is \\textit{backtracking}. \n\n\\paragraph{Introduction to Backtracking} Backtracking is a general problem-solving technique for finding all (or some) solutions to computational problems; it \\textit{incrementally} builds candidates to the solutions. Backtracking is all about choices and consequences. It shows the following two properties:\n \\begin{enumerate}\n     \\item \\textbf{No Repetition and Completion:} It is a systematic generating method that enumerates each possible state at most once: it will not miss any possible right solution but avoids repetitions. If a ``correct'' solution exists, it is guaranteed to be found. This property makes it ideal for solving combinatorial problems such as combination and permutation, which require us to enumerate all possible solutions. We focus on demonstrating this property in this section. \n    \\item \\textbf{Search Pruning:} Along the way of working with partial solutions, in some cases, it is possible for us to decide if they will lead to a valid \\textit{complete solution}. As soon as it can confidently judge that a configuration is invalid or will not be optimal, it abandons this \\textit{partial candidate}, ``backtracks'' (returns to the upper level), and resets to the upper level's state so that the search process can continue to explore the next branch for more efficiency. This is called \\textit{search pruning} among searching algorithms. With search pruning,\nwe end up, amortized, visiting each vertex less than once, which is more\nefficient compared with an exhaustive graph search such as DFS and BFS.  
This property makes backtracking the most promising way to solve \\textit{constraint satisfaction problems (CSPs)}\\footnote{CSPs are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations; visit \\url{https://en.wikipedia.org/wiki/Constraint_satisfaction_problem} for more information}, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequations. Examples include the Eight Queens puzzle, the Map Coloring problem, Sudoku, Crosswords, and many other logic puzzles. We show examples in Section~\\ref{chapter_combinatorics_backtracking_csp}.\n \\end{enumerate}\n\n\\paragraph{Model Backtracking}\nWe can model the combinatorial search solution as a vector $s = (s_0, s_1, ..., s_{n-1})$, where each $s_i$ is selected from a finite ordered set $A$. Such a vector might represent an arrangement where $s_i$ contains the $i$-th item of the permutation, or, in the combination problem, a boolean denoting whether the $i$-th item is already selected. Or it can represent a path in a graph or a sequence of moves in a game. At each step in the backtracking algorithm, we try to extend the last partial solution $s = (s_0, s_1, ..., s_{k})$ by adding another item at the end. We then test our partial solution against the desired solution to decide whether to (1) collect this partial solution; (2) add $s_{k+1}$ to the state; or (3) backtrack, reset to the previous state, and go to the next branch. The relation between a partial state and its next state can be easily viewed as a recurrence relation. \n\n\n\n\n\n\\subsection{Permutation}\nIn the case of permuting [1,2,3], at first we have three options: 1, 2, or 3, and we get three possible partial results: [1],[2],[3]. Second, we expand the options at the second position: for [1], we have options 2 and 3, and we get [1,2], [1,3]; the same for [2]->[2,1],[2,3], and for [3]->[3,1],[3,2]. At last, each partial result has only one option left, and we get all permutations: [1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]. We shall use a tree structure to better visualize this process. It is shown in Fig.~\\ref{fig:backtrack_permutation}. \n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width= 0.8\\columnwidth]{fig/permutation.png}\n    \\caption{The search tree of permutation}\n    \\label{fig:backtrack_permutation}\n\\end{figure}\n\nWe start from an empty list; at this point, we have three possible options (moves), each edge represents a move, and we can go from state [] to states [1],[2],[3] with different moves. Now, how would you program this? Here is one naive way: we implement it recursively as in DFS. We use \\texttt{state} to track the current partial result; \\texttt{k} is used to mark the level of the recursion and will be the same as the length of \\texttt{state}; \\texttt{ans} is just used to collect the answers.\n\\begin{lstlisting}[language=Python]\ndef naive_recursion(a, state, k, ans):\n  if k == len(a):\n    ans.append(state[::])\n    return\n  for i in range(len(a)):\n    if a[i] not in state:\n      naive_recursion(a, state + [a[i]], k+1, ans)\n\\end{lstlisting}\nThe problem with the above implementation is that line $6$ takes $O(n)$ to check whether an item can still be put into the state. Another thing is that each time we call \\texttt{naive\\_recursion}, we make a copy of \\texttt{state}. This can be further avoided.\n\n\\texttt{state} is a list; when it is passed to a function, only a reference is passed. 
However, in the case of \\texttt{state + [a[i]]}, a new list is generated and passed to the recursive call. We can avoid this by appending \\texttt{a[i]} at the end of \\texttt{state}; after the recursive call, we need to set the state back to its original form, so that the enclosing \\texttt{for} loop can continue and generate the next option. For example, if we are at [1] and the recursive call returns from [1,2], if we do not set the state back to [1], it cannot be built into [1,3]. To avoid the linear scan of \\texttt{state}, we can use a list of booleans \\texttt{bUsed} to track which items are used. The same rule applies here: after the recursive call, we need to set the value back to False. For generality, we modify the code to generate $P(n,k)$ instead of $P(n,n)$. A better version is:\n\\begin{lstlisting}[language=Python]\ndef P_n_k(a, bUsed, state, d, k, ans):\n  '''\n  state: starts from []\n  d: the level of the traversal, starts from 0\n  bUsed: marks if the corresponding item in a is used or not\n  '''\n  # reached the last level\n  if d == k:\n    ans.append(state[::])\n    return\n  # move the state\n  for i in range(len(a)):\n    if not bUsed[i]:\n      state.append(a[i])\n      print(state)\n      bUsed[i] = True\n      P_n_k(a, bUsed, state, d+1, k, ans)\n      bUsed[i] = False\n      state.pop()\n      print('backtrack: ', state)\n\\end{lstlisting}\nPart of the printout shows the process of backtracking:\n\\begin{lstlisting}[language=Python]\n[]\n[1]\n[1]\n[1, 2]\n[1, 2]\n[1, 2, 3]\nbacktrack:  [1, 2]\nbacktrack:  [1]\n[1, 3]\n[1, 3]\n[1, 3, 2]\nbacktrack:  [1, 3]\nbacktrack:  [1]\nbacktrack:  []\n\\end{lstlisting}\n\n\\paragraph{Discussion} In this case, we cannot prune any branch, because for permutation we need a full enumeration. \n\n\\paragraph{Two Passes} Therefore, we can say backtracking visits these implicit vertices\nin two passes: first, the forward pass to build the solution \\textbf{incrementally};\nsecond, the backward pass to \\textbf{backtrack} to the previous state. We can see that within\nthese two passes, the \\texttt{state} list takes on the role of every vertex in the search tree, and\nit starts with [] and ends with []. This is the core characteristic of backtracking.\n\n\\paragraph{Time Complexity of Permutation}\nIn the example of permutation, we can see that backtracking only visits each state once. The complexity is similar to that of a graph traversal, $O(|V|+|E|)$, where $|V| = \\sum_{k=0}^{n}{A_{n}^{k}}$; because it is a tree structure, $|E| = |V|-1$. This factorial growth is what makes the permutation problem intractable. \n\n\\paragraph{Space Complexity} The implementation is depth-first search, which has $O(bd)$ space complexity, where $b$ is the branching factor and $d$ is the depth of the tree. However, backtracking saves even more space, because only one state is generated at a time rather than saving all of the states belonging to the same predecessor at once; once it returns to the predecessor, the \\texttt{for} loop there can always generate the next state. Further, in our second solution, we reuse the state vector (\\texttt{state} in our case) for enumerating all states, by making a modification and undoing it once we go back to the predecessor to generate the next successor. The slight difference can be critical for problems with a large state description.  
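To see \\texttt{P\\_n\\_k} in action, a minimal driver could look like this (here enumerating $P(3,2)$, the ordered selections of 2 items out of 3):\n\\begin{lstlisting}[language=Python]\na = [1, 2, 3]\nans = []\n# enumerate P(3, 2): pass d=0 as the starting level and k=2 as the target depth\nP_n_k(a, [False] * len(a), [], 0, 2, ans)\nprint(ans)\n# [[1, 2], [1, 3], [2, 1], [2, 3], [3, 1], [3, 2]]\n\\end{lstlisting}\n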
\n\\subsection{Combination} \n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width= 0.7\\columnwidth]{fig/combination.png}\n    \\caption{The search tree of combination; each vertex is a set instead of a list}\n    \\label{fig:backtrack_combination}\n\\end{figure}\nSimilarly, we try to build the combinations of [1,2,3] incrementally, with the initial state $[]$ at the first level. Then three options give us the three possible combinations of $C(3,1)$: [1], [2], and [3]. For [1], we have 2 and 3, to get [1,2], [1,3]. One option is to use exactly the same code as in permutation, but with a \\texttt{set} for \\texttt{state}: the second time a set appears--say [2,1], which is the same as [1,2] in this case--we check if it already exists. But this step can be avoided.\n\nIt makes better sense that in the search tree we visit each state exactly once. At [2] we need to avoid checking any item ahead of it; if you really want to use the permutation code, we can achieve the ``no repetition'' property by pruning branches: if an item is smaller than the items in \\texttt{state}, we prune that branch. From this pruning rule, you would say: wait, why don't we just add items that come after the position of the very last item we added? For each recursive call, we pass a \\texttt{start} variable, which starts at $0$, to point the recursive function to items from the right location. Because the combination of 3 items out of 3 gives just 1 option, we generate all combinations instead--a superset. This process is illustrated in Fig.~\\ref{fig:backtrack_combination}. The code is as follows: \n\\begin{lstlisting}[language=Python]\ndef powerset(a, s, k, state, ans):\n  # Save the state\n  ans.append(state[::])\n  # reach to the last level\n  if k == len(a):   \n    return\n  for i in range(s, len(a)):\n      state.append(a[i])\n      powerset(a, i+1, k+1, state, ans)\n      state.pop()\n\\end{lstlisting}\nOne thing I want to mention: algorithms are mostly obsessed with order. The right ordering makes things more organized, easier for finding a solution, and potentially more efficient. \n\n\\paragraph{Time Complexity of Combination}\nBacktracking ensures efficiency by visiting each state no more than once. For the combination (subset) problem, the total number of nodes of the implicit search graph/tree is $\\sum_{k=0}^{n}C_{n}^{k} =2^n$. We can look at it another way: there are in total $n$ objects, and for each object we can make two decisions--inside the subset or not--which makes $2^n$. \n\n%%%%%%%%%%%%%%%%%%%%%Others%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Other Combinatorics}\n%include all paths, subsequence, and so. \n\\subsubsection{All Paths In Graph}\n\\label{subsec_all_paths}\n% Try an example, we compute the $C_{4}^{2}$ from $[1, 2, 3, 4]$. We can get the following result. \n% \\begin{figure}[h]\n%     \\centering\n%     \\includegraphics[width = 0.8\\columnwidth]{fig/combination_rslt.png}\n% \\end{figure}\n% Actually, the above code has redundancy, each time we do not need to set the range from $s$ to $n$, we can set it to $n-k+1$ (needs further modification). \n\n% If we want all the results from $k=0$ to $k=n$, we can accumulate set $k=n$, and accumulate results all the time. 
\n%%%%%%%%%%%%%%%%%%%%%Others%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Other Combinatorics}\n%include all paths, subsequence, and so on. \n\\subsubsection{All Paths In Graph}\n\\label{subsec_all_paths}\n\\begin{figure}[ht!]\n    \\centering\n    \\includegraphics[width=0.4\\columnwidth]{fig/all_paths.png}\n    \\caption{All paths from 0, including 0->1, 0->1->2, 0->1->2->4, 0->1->2->4->3, 0->1->2->4->5, 0->1->2->4->5->6 }\n    \\label{fig:my_label}\n\\end{figure} \nThe backtracking technique can be naturally applied to graph path traversal. One example is finding all possible paths from a source to a target. A simpler occasion is when the graph has no cycles; the backtracking technique can then enumerate every path in the graph, each exactly once. \n\nThe implementation is as follows: we still use DFS, and because there are no cycles, there is no need to track the visiting state of each node. We generate the possible answers with the backtracking technique, using the \\texttt{path} variable to track each state. \n\\begin{lstlisting}[language=Python]\ndef all_paths(g, s, path, ans):\n  '''generate all paths with backtracking'''\n  ans.append(path[::])\n  for v in g[s]:\n    path.append(v)\n    print(path)\n    all_paths(g, v, path, ans)\n    path.pop()\n    print(path, 'backtrack')\n\\end{lstlisting}\nFeed in the above network and run the following code:\n\\begin{lstlisting}[language=Python]\nal = [[1], [2], [4], [], [3, 5], [6], []]\nans = []\npath = [0]\nall_paths(al, 0, path, ans)\n\\end{lstlisting}\nThe printout lets us watch the whole process: \\texttt{path} changes exactly as the description of backtracking suggests. \\begin{lstlisting}[numbers=none]\n[0, 1]\n[0, 1, 2]\n[0, 1, 2, 4]\n[0, 1, 2, 4, 3]\n[0, 1, 2, 4] backtrack\n[0, 1, 2, 4, 5]\n[0, 1, 2, 4, 5, 6]\n[0, 1, 2, 4, 5] backtrack\n[0, 1, 2, 4] backtrack\n[0, 1, 2] backtrack\n[0, 1] backtrack\n[0] backtrack\n\\end{lstlisting}\nWe can see that every state has a matching backtrack step. \n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{What to do if there is a cycle?} \n\\end{bclogo}\n\\subsubsection{Subsequence}\nIf we observe carefully, the process of enumerating subsequences produces exactly the same search tree as generating the powerset (the better version). The only difference is that in this case the ordering actually matters. Therefore, our code is exactly the same as the code used to enumerate the powerset. \n\nWe know the cost is $2^n$; this holds for the powerset too: while scanning the list of items, there are only two options for each item--it either ends up in the subset or it does not. \n\n\\section{Prune Search Space in Backtracking}\n\\label{chapter_combinatorics_backtracking_csp}\nSo far, curious and meticulous readers might ask: ``Seriously, Li, I don't see the difference between the backtracking you showed me and DFS.'' It is a good question, and it is worth clarifying before we move on.\n\\paragraph{Backtracking VS DFS} In all the above examples, we used DFS to implement our backtracking algorithm. Think of backtracking as a problem-solving approach with one hard requirement: for efficiency, we force ourselves to visit each state or configuration at most once (exactly once for enumeration, and less than once if branch pruning is applied). This implies that our search happens on a tree--a free tree, if you want to be more specific--but we know where to start: the initial state. How to set a rule that defines the free tree? That is what we engineers need to do. How to search on a free tree? That is what DFS does. Why not BFS? Space is the main issue that blocks us from BFS. 
BFS cannot backtrack, so we would need to save a copy of every state; with DFS and backtracking, by dynamically appending to and popping from a single \\texttt{state} list, we are able to experience all states.  For the case of the permutation, DFS is preferred because the recursion stack uses only $O(n)$ space, while with BFS the number of vertices saved in the queue can be close to $n!$. \n\n\n\\paragraph{Search Space Pruning}  In this section, we demonstrate how backtracking can be optimized by pruning the search space--pruning branches in the search tree--to solve CSPs or optimization problems. Suppose we are at level 2 with state $s=(s_0, s_1)$, and we know that this state can never lead to a valid or optimal solution; then we do not need to traverse this branch, but instead backtrack to the previous state at level one, pruning away the whole subtree rooted at $s$. We show the pruning techniques through examples and summarize them at the end.  We demonstrate search space pruning via backtracking through two examples: TSP for branch and bound with estimates, and sudoku for branch and prune with constraints.   \n\n\n\n\n\\subsection{Sudoku}\nSudoku is perhaps the most popular brain teaser seen in magazines, newspapers, and even books of its own. I bet it can take a lot of effort to crack the game. In this section, we learn how to apply backtracking and search pruning to solve this problem.\n\\paragraph{Sudoku Problem (LeetCode 37)}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width= 0.35\\columnwidth]{fig/250px-Sudoku-by-L2G-20050714.png}\n    \\includegraphics[width= 0.35\\columnwidth]{fig/250px-Sudoku-by-L2G-20050714_solution.png}\n    \\includegraphics[width= 0.25\\columnwidth]{fig/sudoku_grid.png}\n    \\caption{Example sudoku puzzle and its solution.}\n    \\label{fig:backtrack_puzzle}\n\\end{figure}\nGiven a partially filled grid of size $n\\times n$, completely fill the grid with numbers between 1 and $n$.  The constraints are defined as:\n\\begin{enumerate}\n    \\item Each row has all numbers from 1 to $n$.\n    \\item Each column has all numbers from 1 to $n$.\n    \\item Each sub-grid ($\\sqrt{n}\\times\\sqrt{n}$) has all numbers from 1 to $n$.\n\\end{enumerate}\nAn example of a $9\\times9$ sudoku problem is shown in Fig.~\\ref{fig:backtrack_puzzle}.\n\n\\subsubsection{Analysis and Solution} \n\\paragraph{State Space} We start by examining the state space. A state here can be defined as an $n\\times n$ grid filled with numbers in the range $[1,n]$. The brute force is to try 1 to $n$ in each cell of the table, giving a state space of size $n^{n^2}$--beyond what our current machines can handle. We apply the rules: \n\\begin{enumerate}\n    \\item Each row is essentially a permutation of the integers in range $[1,n]$, which gives $n!$ per row; with $n$ rows, the state space shrinks to $(n!)^n$. \n    \\item Further, each column needs to be a permutation of the integers too. If we are at row 1, col 0, we have only $n-1$ options; at position (1,1), this drops to $n-2$ options. This means the total possibility decreases further, to at most roughly $n!\\times(n-1)!\\times\\dots\\times 2!\\times 1!$. \n    \\item Moreover, with the restriction on each subgrid, it becomes hard to get the exact possibility\\footnote{This site \\url{http://pi.math.cornell.edu/~mec/Summer2009/Mahmood/Count.html} offers some insights}. The number of valid grids for $n=9$ is actually $6670903752021072936960$, which is approximately $6.671\\times 10^{21}$. 
It is a hard problem to solve, for sure. \n\\end{enumerate}\n\n The good news is that we are almost always given some prior integers in some cells, which narrows down the possibilities. Applying backtracking, we first find all empty spots and then fill these spots recursively. An initial cost estimate can be offered: assume we have $m$ spots, each with $c_i$ candidates; the upper bound on the cost will be $c_0\\times c_1\\times...\\times c_{m-1}$. Does the filling order of these empty spots matter?  For completeness, not so much; for efficiency, SURE!  Consider two approaches: in one we visit the spots arbitrarily, and in the other we always choose the spot that has the fewest possible candidates to fill in (setting the implementation aside for now). In the second approach, starting with fewer candidates helps us quickly pinpoint the right answer and cut invalid branches. Because this branch sits high up in the search tree while all the others have more candidates, the pruning makes sure the branch we cut off at this stage is rewarding. Put another way, this is essentially a greedy approach: whenever we multiply a $c_i$ into the total cost, we add the least expensive one, with a larger probability of guessing right; as the savings accumulate, we can outrun the arbitrary ordering by a factor of hundreds.\n \n\\paragraph{Implement Sudoku with Backtracking}\n\nIn the implementation, we need to track, for each row, column, and block, which numbers it already contains. We set aside three data structures, \\texttt{row\\_state}, \\texttt{col\\_state}, and \\texttt{block\\_state}, for this purpose, so that we can validate a candidate. \n\n\\textbf{Step 1: Initialization} We scan the whole grid shown in Fig.~\\ref{fig:backtrack_puzzle} and find all empty spots waiting to be filled in.  \nWe use (i,j) to denote the position of a cell. It corresponds to position $i$ in \\texttt{row\\_state[i]}, to $j$ in \\texttt{col\\_state[j]}, and to \\texttt{block\\_state[i//3][j//3]} for the corresponding sub-grid.  
In this stage, we iterate through the \n\\texttt{board} to record these states.\n\\begin{lstlisting}[language=Python]\nfrom copy import deepcopy\nclass Sudoku():\n  def __init__(self, board):\n    self.org_board = deepcopy(board)\n    self.board = deepcopy(board)\n    \n  def init(self):\n    self.A = set([i for i in range(1,10)])\n    self.row_state = [set() for i in range(9)]\n    self.col_state = [set() for i in range(9)]\n    self.block_state = [[set() for i in range(3)] for i in range(3)]\n    self.unfilled = []\n\n    for i in range(9):\n      for j in range(9):\n          c = self.org_board[i][j]\n          if c == 0:\n              self.unfilled.append((i, j))\n          else:\n              self.row_state[i].add(c)\n              self.col_state[j].add(c)\n              self.block_state[i//3][j//3].add(c)\n  \n  def set_state(self, i, j, c):\n    self.board[i][j] = c\n    self.row_state[i].add(c)\n    self.col_state[j].add(c)\n    self.block_state[i//3][j//3].add(c)\n    \n  def reset_state(self, i, j, c):\n    self.board[i][j] = 0\n    self.row_state[i].remove(c)\n    self.col_state[j].remove(c)\n    self.block_state[i//3][j//3].remove(c)\n\\end{lstlisting}
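\nTo make the bookkeeping concrete, here is a small sketch of the setup (the nearly empty board below is only an illustration, not the puzzle from Fig.~\\ref{fig:backtrack_puzzle}):\n\\begin{lstlisting}[language=Python]\n# 0 denotes an empty cell\nboard = [[0]*9 for _ in range(9)]\nboard[0][0] = 5   # row 0, col 0, block (0, 0)\nboard[4][4] = 7   # row 4, col 4, block (1, 1)\n\ns = Sudoku(board)\ns.init()\nprint(s.row_state[0])       # {5}\nprint(s.col_state[4])       # {7}\nprint(s.block_state[1][1])  # {7}\nprint(len(s.unfilled))      # 79 empty spots\n\\end{lstlisting}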
\n\\textbf{Step 2: Backtracking and Search Space Pruning} \n\\begin{lstlisting}[language=Python]\n  def solve(self):\n    '''solver with restricted spot selection and look ahead'''\n    if len(self.unfilled) == 0:\n      return True\n    i, j = min(self.unfilled, key = self._ret_len)\n    option = self.A - (self.row_state[i] | self.col_state[j] | self.block_state[i//3 ][j//3])\n    if len(option) == 0:\n      return False\n    self.unfilled.remove((i, j))\n    for c in option:\n      self.set_state(i, j, c)\n      if self.solve():\n        return True\n      else:\n        self.reset_state(i, j, c)\n    # no candidate is valid, backtrack\n    self.unfilled.append((i, j))\n    return False\n\\end{lstlisting}\nIn the backtracking, at each recursive call we first choose the spot that has the fewest candidates. This requires computing the candidates in real time, which we do through one set union and one set difference: \\texttt{A-(row\\_state[i]|col\\_state[j]|block\\_state[i//3][j//3])}. The time cost of this is $O(9)$, so picking the best spot costs $O(9n)$ each time, where $n$ is the number of remaining empty spots, and $O(n^2)$ in total. Compared with the search cost of $c^n$, where $c$ is the average number of candidates per spot, the time spent here is trivial.  Then we remove the spot from the \\texttt{unfilled} list and try each candidate with a  \\texttt{for} loop. 
We record the chosen option in the state and make a recursive call: if it returns with a valid configuration in which all spots are filled and valid, we end the program by returning True; otherwise, we reset the state and try the next option. Reaching the end of the \\texttt{for} loop means no candidate at this spot can lead to a valid solution, so we can only keep searching by returning to the parent branch, leaving this spot unfilled by putting it back into the \\texttt{unfilled} list--that is, by resetting its state. In short: we iterate through the empty spots, and for each spot we iterate through its candidates, filling in one at a time; before the recursive call that fills the next spot we record the state, and if the sub-call returns True we simply return, otherwise we recover the state and backtrack to the previous state. \n\n\\begin{lstlisting}[language=Python]  \n  def naive_solve(self):\n    '''naive solver without restricted spot selection or look ahead'''\n    if len(self.unfilled) == 0:\n      return True\n    i, j = self.unfilled.pop()\n    option = self.A - (self.row_state[i] | self.col_state[j] | self.block_state[i//3 ][j//3])\n    for c in option:\n      self.set_state(i, j, c)\n      if self.naive_solve():\n        return True\n      else:\n        self.reset_state(i, j, c)\n    # no candidate is valid, backtrack\n    self.unfilled.append((i, j))\n    return False\n  def _ret_len(self, args):\n    i, j = args\n    option = self.A - (self.row_state[i] | self.col_state[j] | self.block_state[i//3 ][j//3])\n    return len(option)\n\\end{lstlisting}\n\n\\paragraph{Time Complexity} Assume we have $n$ empty spots, and that the numbers of possible values for the spots are $[a_0, a_1, ..., a_{n-1}]$. To fill the spots, we search the possibility tree. The search tree has height $n$; the first level has $a_0$ nodes, the second $a_0 \\times a_1$, and in general level $k$ has $\\prod_{i=0}^{k-1} a_i$ nodes. This results in a worst-case time complexity of $O(\\prod_{i=0}^{n-1} a_i)$. \n\n\\paragraph{Experiment: Different Ordering of Empty Spots} Let us do an experiment: with the same input board, we time the solver using the sorted (fewest-candidates-first) and the unsorted empty spots and compare. The code is provided in Colab.  The time is 0.025 seconds for unsorted and 0.0005 seconds for sorted. 
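\nThe exact notebook is not reproduced here, but the experiment is easy to sketch: time \\texttt{solve} (greedy spot selection) against \\texttt{naive\\_solve} on the same board. Here \\texttt{puzzle} is assumed to be any 9x9 list of lists with 0 marking empty cells:\n\\begin{lstlisting}[language=Python]\nimport time\n\ndef time_solver(board, use_naive=False):\n    s = Sudoku(board)\n    s.init()\n    start = time.perf_counter()\n    solved = s.naive_solve() if use_naive else s.solve()\n    return solved, time.perf_counter() - start\n\n# puzzle: a 9 x 9 board with 0 marking empty cells (assumed given)\nok, t_greedy = time_solver(puzzle)\nok, t_naive = time_solver(puzzle, use_naive=True)\nprint(t_greedy, t_naive)\n\\end{lstlisting}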
\n\n\\subsection{Traveling Salesman Problem (TSP)}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width= 0.5\\columnwidth]{fig/Euler12-300x225.png}\n    \\caption{Example of a fully connected graph.}\n    \\label{fig:tsp_graph}\n\\end{figure}\n\\paragraph{Traveling Salesman Problem Definition} Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.\n\\begin{lstlisting}[numbers=none]\nAssume our graph is a two dimensional list:\ng = [[(1, 10), (2, 15), (3, 20)], \n   [(0, 10), (2, 35),(3,25)],\n   [(0, 15),(1,35),(3,30)],\n   [(0,20),(1,25),(2,30)]]\ng[0][0]=(1,10), means the edge between 0 and 1 with cost 10.\n\\end{lstlisting}\nWe show the above example in Fig.~\\ref{fig:tsp_graph}.\n\\subsubsection{Analysis and Solution}\n\\paragraph{State Space} In TSP, the state $s=(p, c)$ consists of a list of vertices (\\texttt{path}) and the total cost (\\texttt{cost}) of all the edges along it; $p, c$ are short for path and cost, respectively. A complete solution path has $n+1$ entries: it starts and ends with the same vertex, with the other $n-1$ vertices in between. Now, let us put the constraints together.\n\\begin{itemize}\n    \\item ``Visits every city exactly once'' means the first $n$ vertices form a permutation of all cities, giving $n!$ possibilities (the last vertex is then fixed).\n    \\item Among these $n!$ possible states we can further spot redundant ones. For a cycle, it does not matter where it starts: it is always the same cycle. For convenience, we choose vertex $0$ as the start of the path, leaving only $n-1$ vertices to permute and reducing the size of the state space to $(n-1)!$. \n    \\item We only care about the minimum cost, so any partial result whose cost is already larger than the minimum cost of all known complete solutions can be pruned. This is the \\textit{branch and bound} method, which extends backtracking to optimization problems.\n\\end{itemize}\n\\paragraph{Implementation} The implementation is a combination of \\textbf{all paths} and \\textbf{permutation}. We set the start vertex to $0$. \\texttt{bused} is a list of booleans tracking whether each element is in the path, so that the first $n$ vertices form a permutation. \\texttt{cost} records the current total cost. \\texttt{mincost} tracks the minimum known cost over complete paths. The Python code is as follows:\n\\begin{lstlisting}[language=Python]\ndef tsp(g, cv, path, mincost, cost, bused, ans):\n  '''\n  cv represents the current node\n  path: a list to track all vertices, we start from vertex 0, or we can use ordered set.\n  '''\n  if len(path) == len(g): # we can only choose 0\n    cost += g[cv][0][1]\n    if cost < mincost[0]:\n      mincost[0] = cost\n      ans[0] = path[::]\n    return\n  for v, c in g[cv]:\n    # Bound the search by an estimation    \n    if (not bused[v]) and (cost + c < mincost[0]):\n        bused[v] = True\n        path.append(v)\n        cost += c\n        tsp(g, v, path, mincost, cost, bused, ans)\n        bused[v] = False\n        path.pop()\n        cost -= c\n  return\n\\end{lstlisting}\nIn this example, we need both the permutation constraint and the branch and bound, which appears in line 14. The end condition is reached once we have the first $n$ unique vertices; the last vertex must then be the start vertex. At this step, we track the \\texttt{mincost} and save the path in \\texttt{ans}. 
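\nA minimal driver for the example graph (a sketch; \\texttt{mincost} and \\texttt{ans} are single-element lists so the recursion can mutate them):\n\\begin{lstlisting}[language=Python]\nimport math\n\ng = [[(1, 10), (2, 15), (3, 20)],\n     [(0, 10), (2, 35), (3, 25)],\n     [(0, 15), (1, 35), (3, 30)],\n     [(0, 20), (1, 25), (2, 30)]]\n\nbused = [False] * len(g)\nbused[0] = True            # start from vertex 0\nmincost, ans = [math.inf], [None]\ntsp(g, 0, [0], mincost, 0, bused, ans)\nprint(mincost[0], ans[0])  # 80 [0, 1, 3, 2]\n\\end{lstlisting}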
\n\n%%%%%%%%%%%%%%%%%%%%\n\\section{Knapsack Problem}\nIn this section we showcase more search strategies applied to optimization problems: comparing backtracking with a chance to use the best-first search strategy. \n\n\\paragraph{Define Knapsack Problem} We are given $n$ items with a weight vector $w$ and a value vector $v$, and a knapsack with capacity $c$. Our goal is to choose a subset of items of maximum total value whose total weight is bounded by $c$.  Each item can be used at most once.\n\\begin{lstlisting}[numbers=none]\nFor example,\nc = 10\nw = [5, 8, 3]\nv = [45, 48, 35]\n\nThe best would be choosing item 1 and 3, with total weight of 8, and value of 80.\n\\end{lstlisting}\n\\subsubsection{Analysis} This is essentially a combination problem: we have to search for a leaf node that is both feasible--bounded by the capacity--and optimal--of the largest value. \n\nTo bound the search, we have to develop a heuristic function estimating the maximum total value a branch can still lead to. The simplest estimate is the total value of all items; at first, in our case, it is $45+48+35=128$.\n\nA tighter heuristic function comes with \\textbf{constraint relaxation}: what if we were allowed to take part of an item, so that we can fill the knapsack as full as possible? We sort the items by their unit value and take items in order of decreasing unit value. Another boolean vector is used to indicate whether a certain item can still be used. At first, all items are allowed, and we get an estimate of 92 in this case. For branches that decide not to take an item, that item is excluded using the boolean vector.  We compare the estimate with the best value found so far; if the estimated value can never be better, the branch is pruned. \n\\subsubsection{Branch and Bound with Backtracking}\nIn this process, we search in depth-first order and bound the search by the estimate. 
\n\\begin{lstlisting}[language=Python]\nclass dfsBound:\n  def __init__(self, c, v, w):\n    self.best = 0 \n    self.c = c\n    self.v = v\n    self.w = w\n    self.n = len(v)\n    # sort items by decreasing unit value v/w (negated for ascending sort)\n    self.items = [(-vi/wi, wi, i) for i, (vi, wi) in enumerate(zip(v, w))]\n    self.items.sort(key=lambda x: x[0])\n    self.dfs(0, self.estimate([True]*self.n), 0, 0, [True]*self.n)\n\n  def estimate(self, blist):\n    est = 0\n    # use the v/w ratio to estimate; fractional items allowed\n    left = self.c\n    j = 0\n    n = len(blist)\n    while left > 0 and j < n:\n      ratio, wi, i = self.items[j]\n      j += 1\n      if not blist[i]:\n        continue\n      if left - wi >= 0: # use all\n        est += -ratio * wi\n        left -= wi\n      else: # use part\n        est += -ratio * left\n        left = 0 \n    return est\n  \n  def dfs(self, idx, est, val, cap, blist):\n      if idx == self.n:\n        self.best = max(self.best, val)\n        return\n      if cap + self.w[idx] <= self.c: # prune by constraint\n        # bound by estimate\n        if est > self.best:\n          self.dfs(idx+1, est, val+self.v[idx], cap + self.w[idx], blist)\n\n      # bound by estimate\n      if est > self.best:\n        blist[idx] = False\n        nest = self.estimate(blist)\n        self.dfs(idx+1, nest, val, cap, blist) \n        blist[idx] = True\n      return\n\\end{lstlisting}\n\\subsection{Branch and Bound with Best-First Search}\nWe can decide to expand the branch that has the most optimistic estimate first, instead of blindly expanding in depth-first order, hoping to find a good global value quickly so that a higher bar serves as a good start for the bounding; a higher bar helps us prune more of the worse branches faster. Supposedly, best-first search will guide us to the optimal solution faster than backtracking, but we know it comes with a higher cost in space usage.\n\\begin{lstlisting}[language=Python]\nimport heapq\ndef bfs(c, v, w):\n  # track est, val, cap; idx is which item to consider next\n  q = [(-sum(v), 0, 0, 0)] # first simply use the sum of values as estimation\n  n = len(v)\n  best = 0\n  while q:\n    est, val, cap, idx = heapq.heappop(q)\n    if idx == n:\n      best = max(best, val)\n      continue\n    est = -est\n    nest = est - v[idx]  # estimate if item idx is skipped\n\n    if est > best: # bound: prune when the node cannot beat the found best\n      if cap + w[idx] <= c: # prune by constraint\n        # taking item idx realizes v[idx], so the estimate stays est\n        heapq.heappush(q, (-est, val+v[idx], cap + w[idx], idx+1))\n      heapq.heappush(q, (-nest, val, cap, idx+1))\n  return best\n\\end{lstlisting}
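\nRunning both strategies on the example data (a sketch; \\texttt{dfsBound} stores its result in \\texttt{self.best}):\n\\begin{lstlisting}[language=Python]\nc, w, v = 10, [5, 8, 3], [45, 48, 35]\nprint(dfsBound(c, v, w).best)  # 80\nprint(bfs(c, v, w))            # 80\n\\end{lstlisting}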
\n\n\n%%%%%%%%%summary%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Summary}\n\n\\paragraph{Backtracking Implementation} At first, backtracking might sound scary, and not many books out there do a great job of easing it down. If we were to write down a standard template for backtracking, what would the elements be? First, imagine that we are building and traversing a tree.\n \\begin{enumerate}\n     \\item \\textbf{State Vector $s$}: the state vector is what we use to track the solution. In permutation and combination it is \\texttt{state}, and in the all-paths problem it is \\texttt{path}. For the sudoku solver, it could be implemented through \\texttt{unfilled} with the value and position; however, it is easier to track it directly on the \\texttt{board}. The state vector tells us the height of the tree, and the candidates for each level are determined by the constraints so far. \n     \\item \\textbf{State Map and Candidates}: This is a good example of trading space for efficiency. The constraint on which candidates the current node has depends on the previous state. We can recover the previous state by looking at the current result, as in \\texttt{state} for permutation or the \\texttt{board} in sudoku; however, each such lookup takes $O(n)$ time. A smarter choice is to set aside another boolean or dictionary-like data structure to track the state along with the result data structure, such as \\texttt{bUsed} helping the permutation and \\texttt{row\\_state}, \\texttt{col\\_state}, and \\texttt{block\\_state} helping to track the constraints in sudoku. Looking up whether a candidate is possible then takes only $O(1)$. \n \\end{enumerate}\n \n \\paragraph{Time and Space Complexity} The time complexity analysis is straightforward and the same as for DFS, $O(b^d)$. For space, we already analyzed this in the permutation example, but it is important enough to summarize here for emphasis. The backtracking technique improves space efficiency over plain DFS, which uses $O(bd)$, in two possible ways:\n \\begin{enumerate}\n     \\item The backtracking mechanism itself generates only one state at a time on the fly, without worrying about the sibling states of the same predecessor, which brings the space complexity down to $O(d)$. \n     \\item In our practice, we have seen another trick: we do not pass a new state to the recursive function each time. Instead, we make a modification that takes $O(1)$ time, rather than $O(d)$ to copy the state, and undo the modification once we return from the recursive call. This reduces the memory requirement to just one state vector, and the trick saves both time and space.\n \\end{enumerate}\n\n\\paragraph{Search Space Pruning Methods} We summarize the following four methods that can potentially be applied in a CSP or an optimization problem.  \n\\begin{enumerate}\n     \\item \\textbf{Make Global Decisions:} Backtracking works correctly even if we never update each slot's candidates after initialization and only make decisions based on the current node's validity. However, it is wiser, each time we try an option, to check how this decision changes the remaining slots' candidates; if any remaining slot ends up with zero candidates, we are better off abandoning this attempt and simply going to the next option. \n    \\item \\textbf{Be Greedy about Ordering:} Each time, choose the spot with the fewest candidates among the remaining spots to extend the state. This can be carried out with or without recomputing the candidates of all remaining spots at each step. In our Sudoku example we did update all slots, but in some cases the cost might be too high, or you may simply not have enough time to write the code. \n    \\item \\textbf{*Symmetry:} Exploiting symmetry is another avenue for reducing combinatorial search; pruning away partial solutions identical to those previously considered requires recognizing the underlying symmetries in the search space. \n    \\item \\textbf{Branch and Bound:} Branch and bound is the idea of backtracking extended to optimization problems. 
We are minimizing a function with this useful property: a partial solution is pruned if its cost is $\\geq$ the cost of the best known complete solution.\n\\end{enumerate}\n\n \n There is an interesting question. \n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{As we explained, DFS itself is an incomplete search technique, so why would backtracking search be complete?} \n\\end{bclogo} \n \n\n\\section{Exercises}\n\\subsection{Knowledge Check}\n\n\\subsection{Coding Practice}\n\\paragraph{Cycle Detection}\n\\begin{enumerate}\n    \\item 207. Course Schedule (medium)\n\\end{enumerate}\n\n\\paragraph{Topological Sort}\n\n\\paragraph{Connected Component}\n\\begin{enumerate}\n    \\item 323. Number of Connected Components in an Undirected Graph (medium).\n    \\item 130. Surrounded Regions (medium)\n\\end{enumerate}\n\n\\paragraph{Mix}\n\\begin{enumerate}\n    \\item 210. Course Schedule II (medium, cycle detection and topological sort). \n\\end{enumerate}\n\n\\paragraph{Backtracking}\n\\begin{enumerate}\n\\item 77. Combinations\n\\begin{lstlisting}\nGiven two integers n and k, return all possible combinations of k numbers out of 1 ... n.\n\nExample:\n\nInput: n = 4, k = 2\nOutput:\n[\n  [2,4],\n  [3,4],\n  [2,3],\n  [1,2],\n  [1,3],\n  [1,4],\n]\n\\end{lstlisting}\n\n\\item 17. Letter Combinations of a Phone Number\n\\begin{lstlisting}\nGiven a digit string, return all possible letter combinations that the number could represent.\n\nA mapping of digit to letters (just like on the telephone buttons) is given below.\n\nInput: Digit string \"23\"\nOutput: [\"ad\", \"ae\", \"af\", \"bd\", \"be\", \"bf\", \"cd\", \"ce\", \"cf\"].\n\nNote:\nAlthough the above answer is in lexicographical order, your answer could be in any order you want.\n\\end{lstlisting}\n\n\\item 797. All Paths From Source to Target (medium).\n\n\\item \\textbf{37. 
Sudoku Solver (hard).}\nWrite a program to solve a Sudoku puzzle by filling the empty cells.\n\nA sudoku solution must satisfy all of the following rules:\n\\begin{enumerate}\n    \\item Each of the digits 1-9 must occur exactly once in each row.\n    \\item Each of the digits 1-9 must occur exactly once in each column.\n    \\item Each of the digits 1-9 must occur exactly once in each of the nine 3x3 sub-boxes of the grid.\n\\end{enumerate}\nEmpty cells are indicated by the character '.'.\n\n\\item %%%%%%%%%%%%%%%%%%%%%%Eight Queen%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Eight Queen}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/8-queens.png}\n    \\caption{An exemplary solution to the eight-queen problem.}\n    \\label{fig:solution_eight_queen}\n\\end{figure}\n\\paragraph{Eight Queen Problem  Definition}\nGiven a chessboard of size $8\\times 8$, in how many distinct ways can we position eight queens on the chessboard such that no two queens threaten each other? According to the chess rules, a queen can move any number of steps horizontally, vertically, or diagonally, which is somewhat similar to the rules of sudoku. Another variant of the question asks us to return all distinct solutions. The constraints are:\n\\begin{enumerate}\n    \\item Each row can only have one queen.\n    \\item Each column can only have one queen.\n    \\item Each diagonal can only have one queen.\n\\end{enumerate}\nOne exemplary solution is shown in Fig.~\\ref{fig:solution_eight_queen}. The problem can be extended to the $n$-queen problem, which asks, on a chessboard of size $n\\times n$, in how many ways we can place $n$ mutually non-attacking queens.  \n\n\\subsubsection{Analysis and Solution}\n\\paragraph{State Space} For the $n$ queens, the ordering does not matter; as in the example, if we switch the positions of two queens, we do not get another solution. Therefore, it is a combination problem instead of a permutation problem. For a combination we just care about the positions, and each position only differs by whether there is a queen on it or not (two choices), rather than nine (no queen, plus any one of the eight queens). We have different ways to bound the number of arrangements:\n\\begin{enumerate}\n    \\item No constraint: (1) with no constraint even on the number of queens, each of the 64 positions has two states, which gives $N=2^{64}$; (2) with the constraint of exactly 8 queens, counting the possible combinations of 8 queens on an $8\\times 8$ chessboard gives $N = C_{64}^{8} = 4,426,165,368$.\n    \\item Add constraint one: for the first row, we have 8 different states--one and only one column holds a queen--and the same for every row that follows. We end up with $N=8^8$.\n    \\item Add constraint two: if each column can only have one queen, then the first row has 8 possible states while the second has only 7, making $N=8!$, which is less than $10^6$ and feasible for a program to run. \n\\end{enumerate}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/n_queen_four.png}\n    \\caption{Solutions on the $4 \\times 4$ chessboard}\n    \\label{fig:n_queen_four}\n\\end{figure}\nThe above analysis reveals that our state vector should be of the size of the total number of rows, $S=[None]*8$, with the index representing the row of the chessboard and the value an integer from 0 to $n-1$ tracking the column that holds a queen. It is similar to a permutation problem. 
For the $4\\times 4$ board, the two solutions in Fig.~\\ref{fig:n_queen_four}, represented with $S$, are $S=[1, 3, 0, 2]$ and $S=[2, 0, 3, 1]$.  For the eight-queen problem, our search tree will be of height 8. For the edges, which represent possible candidates, we have two ways to generate candidates: (1) the easy one, where we iterate through all $n$ columns for each row and just validate each candidate through the assisting state trackers; (2) generating candidates directly from the previous state vector $s$. \n\n\\paragraph{Implementation} \n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/n_queen_diag.png}\n    \\caption{The two diagonal directions on the chessboard: cells on the same left diagonal share $r+c$; cells on the same right diagonal share $r-c$.}\n    \\label{fig:my_n_queen_diag}\n\\end{figure}\n\nWe use \\texttt{n\\_queen}, a list whose index indicates the row of the chessboard and whose value represents the column in which we put the queen. It starts as an empty list with a possible maximum length of $n$. Because in the backtracking each level represents a row, we do not need to track the row state. We need \\texttt{col\\_state} to track whether a column already has a queen. As indicated in Fig.~\\ref{fig:my_n_queen_diag}, for each possible position we also need to check both diagonals for a previously placed queen. Here, we use two lists, \\texttt{left\\_diag} and \\texttt{right\\_diag}, to track them. For a position $(r, c)$, its neighbors on the left diagonal are $(r-1, c+1)$ and $(r+1, c-1)$; the rule is a bit hidden: $r+c = (r-1)+(c+1) = (r+1)+(c-1)$, so cells on the same left diagonal share the value $r+c$. For the right diagonal, the neighbors are $(r-1, c-1)$ and $(r+1, c+1)$, so cells on the same right diagonal share $r-c$, which we shift to the non-negative index $r+(n-1-c)$. The implementation is as follows:\n\\begin{lstlisting}[language=Python]\ndef solveNQueens(self, n):\n    \"\"\"\n    :type n: int\n    :rtype: List[List[str]]\n    \"\"\"\n    # queen can move: vertically, horizontally, diagonally \n    col_state = [False]*n\n    left_diag = [False]* (2*n-1) # r+c -> index\n    right_diag = [False]* (2*n-1) # r+(n-1-c) -> index\n    n_queen = [] # to track the positions\n    ans = []\n    def collect_solution():\n        board = [['.' for i in range(n)] for j in range(n)] \n        for i, j in enumerate(n_queen):\n            board[i][j] = 'Q'\n            \n        for i in range(n):\n            board[i] = ''.join(board[i])\n        return board\n    \n    def is_valid(r, c):\n        return not (col_state[c] or left_diag[r+c] or right_diag[r+(n-1-c)])\n      \n    def set_state(r, c, val):\n        col_state[c] = val\n        left_diag[r+c] = val\n        right_diag[r+(n-1-c)] = val\n        \n    def backtrack(n_queen, k):\n        if k == n: # a valid result\n            ans.append(collect_solution())\n            return\n        # generate candidates for kth queen\n        for col in range(n):\n            if is_valid(k, col):\n                set_state(k, col, True)\n                n_queen.append(col)\n                backtrack(n_queen, k+1)\n                set_state(k, col, False)\n                n_queen.pop()\n            \n    backtrack(n_queen, 0)\n    return ans\n\\end{lstlisting}\n
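\nFor instance, on the $4\\times 4$ board the routine returns the two boards corresponding to $S=[1,3,0,2]$ and $S=[2,0,3,1]$. Assuming the method lives in a LeetCode-style class \\texttt{Solution}, a quick check looks like:\n\\begin{lstlisting}[language=Python]\nfor b in Solution().solveNQueens(4):\n    print(b)\n# ['.Q..', '...Q', 'Q...', '..Q.']\n# ['..Q.', 'Q...', '...Q', '.Q..']\n\\end{lstlisting}\n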
There is another way to generate candidates, based on \\texttt{n\\_queen}. At the first row, we have candidates [0, 1, 2, 3]. Assume we choose 1 here; then at row 1 we generate candidates based on the previous rows, removing the columns already taken and the cells on their diagonals. The code is implemented as:\n\\begin{lstlisting}[language=Python]\ndef solveNQueens2(self, n):\n  \"\"\"\n  :type n: int\n  :rtype: List[List[str]]\n  \"\"\"\n  n_queen = [] # to track the positions\n  ans = []\n  def collect_solution():\n      board = [['.' for i in range(n)] for j in range(n)] \n      for i, j in enumerate(n_queen):\n          board[i][j] = 'Q'\n\n      for i in range(n):\n          board[i] = ''.join(board[i])\n      return board\n      \n  def generate_candidate(n_queen, k, n):\n    if k == 0: # the first row: the candidates are all columns\n      return set([i for i in range(n)])\n    # generate candidates at the kth level based on previous levels\n    candidates = set([i for i in range(n)])\n    for r, c in enumerate(n_queen):\n      if c in candidates:\n        candidates.remove(c)\n      c1 = c-(k-r)\n      if c1 >= 0 and c1 in candidates:\n        candidates.remove(c1)\n      c2 = c+(k-r)\n      if c2 < n and c2 in candidates:\n        candidates.remove(c2)\n    return candidates\n\n  def backtrack(n_queen, k):\n      if k == n: # a valid result\n          ans.append(collect_solution())\n          return\n      # generate candidates for kth queen\n      candidates = generate_candidate(n_queen, k, n)\n      for c in candidates:\n          n_queen.append(c)\n          backtrack(n_queen, k+1)\n          n_queen.pop()\n\n  backtrack(n_queen, 0)\n  return ans\n\\end{lstlisting}\n\\paragraph{Symmetry}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/n_queen_symmetry.png}\n    \\caption{Mirroring can cut the search space in half.}\n    \\label{fig:n_queen_symmetry}\n\\end{figure}\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/n_queen_oddPicture1.png}\n    \\caption{Handling the middle column of the first row when $n$ is odd.}\n    \\label{fig:n_queen_odd}\n\\end{figure}\nSee \\url{https://www.aaai.org/Papers/AAAI/2006/AAAI06-257.pdf} for more on exploiting symmetry.  Starting from an easy case, we can observe that we obtain the second solution of the $4\\times 4$ chessboard by flipping the first one around the red axis. Assume our first solution is $S=[1, 3, 0, 2]$, which we write as $S=[a_0, a_1, a_2, a_3]$. The mirroring relation can then be represented as $m_i+a_i=n-1$, thus $m_i=n-1-a_i$. If $n$ is even, then at the first level of backtracking we can eliminate the second half of the candidates, which cuts the search cost in half; if we just need to count the total number of distinct solutions, we can simply double the number we find. The process is shown in Fig.~\\ref{fig:n_queen_symmetry}. For odd $n$, the middle spot of the first row needs to be distinguished: if we followed the same rule as in the even case, the solutions passing through it would be counted twice. If we place a queen in the middle spot of the first row, then for the following $n-1$ rows no queen can be in the middle column any more, and for the second row we can eliminate the right half of the candidates, as shown in Fig.~\\ref{fig:n_queen_odd}. 
Our code becomes:\n\\begin{lstlisting}[language=Python]\ndef solveNQueensSymmetry(n):\n  \"\"\"\n  :type n: int\n  :rtype: int\n  \"\"\"\n  n_queen = [] # to track the positions\n      \n  def generate_candidate(n_queen, s, k, n):\n    if k == s: # apply symmetry: only the left half\n      candidates = set([i for i in range(n//2)])\n    else:\n      candidates = set([i for i in range(n)])\n\n    for r, c in enumerate(n_queen):\n      if c in candidates:\n        candidates.remove(c)\n      c1 = c-(k-r)\n      if c1 >= 0 and c1 in candidates:\n        candidates.remove(c1)\n      c2 = c+(k-r)\n      if c2 < n and c2 in candidates:\n        candidates.remove(c2)\n    return candidates\n\n  def backtrack(n_queen, s, k, ans):\n      '''add s to track the start depth'''\n      if k == n: # a valid result\n          ans += 1\n          return ans\n      # generate candidates for kth queen\n      candidates = generate_candidate(n_queen, s, k, n)\n      for c in candidates:\n          n_queen.append(c)\n          ans = backtrack(n_queen, s, k+1, ans)\n          n_queen.pop()\n      return ans\n    \n  # deal with the left half of the first row\n  ans = 0\n\n  ans += backtrack(n_queen, 0, 0, 0)*2\n  \n  # odd n: queen in the middle of the first row, left half of the second row\n  if n%2 == 1:\n    n_queen = [n//2]\n    ans += backtrack(n_queen, 1, 1, 0)*2\n  return ans\n\\end{lstlisting}\n\\end{enumerate}\n\\end{document}\n\n", "meta": {"hexsha": "23961621b593725153d9ca8aaa1cedf4a96eee60", "size": 68125, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/chapter_combinatorial_search_old.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/chapter_combinatorial_search_old.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/chapter_combinatorial_search_old.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.0154162384, "max_line_length": 1536, "alphanum_fraction": 0.711706422, "num_tokens": 18400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.586795306701802}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\begin{document}\n\n\\title{Notes on Kronecker Products from Quantum Hamiltonians}\n\\author{Matthew Otten}\n\n\\maketitle\n\n\\begin{abstract}\n  Basic notes on utilizing the structure of Kronecker products for quantum Hamiltonians.\n\\end{abstract}\n\n\\section{Introduction}\nWhen calculating a coupled quantum system, the total Hilbert space is a Kronecker product\nof the Hilbert spaces of the individual systems. This is at the heart of the difficulty in\ncalculating a large number of coupled quantum systems; the total Hilbert space size\ngrows exponentially. For tractable Hilbert spaces, we can utilize the structure\nof Kronecker products to accelerate solutions of coupled quantum Hamiltonians.\n\n\\section{Kronecker Product Basics}\\label{basicprops}\nMost of the following is taken from Wikipedia.\nThe Kronecker product between two matrices $\\mathbf{A}, n \\times m$ and $\\mathbf{B}, p \\times q$\nis defined as\n\\begin{equation}\n  \\mathbf{M} = \\mathbf{A}\\otimes\\mathbf{B} =\n    \\begin{bmatrix}\n      a_{00} \\mathbf{B} & \\dotsm & a_{0n} \\mathbf{B} \\\\\n      \\vdots & \\ddots & \\vdots \\\\\n      a_{m0} \\mathbf{B} & \\dotsm & a_{mn} \\mathbf{B}.\n    \\end{bmatrix}\n\\end{equation}\nThe resulting matrix is size $np \\times mq$. Within this paper, we will only concern ourselves\nwith square matrices, $m=n, q=p$. Since we will be interested in generating these Kronecker products\nwithin code, we outline the algorithm for generating $\\mathbf{M}$ here.\\\\\n\\begin{minipage}{\\linewidth}\n\\begin{verbatim}\ndo i_a=0,n\n  do j_a=0,n\n    do i_b=0,p\n      do j_b=0,p\n        M(p*i_a + i_b,p*j_a+j_b) = A(i_a,j_a)*B(i_b,j_b)\n      end\n    end\n  end\nend\n\\end{verbatim}\n\\end{minipage}\\\\\n\nHere, we list several useful Kronecker product identities.\nThe Kronecker product is bilenear and associative:\n\\begin{equation}\n  \\mathbf{A} \\otimes (\\mathbf{B} + \\mathbf{C}) = \\mathbf{A} \\otimes \\mathbf{B} +\n  \\mathbf{A} \\otimes \\mathbf{C}\n\\end{equation}\n\\begin{equation}\n  (\\mathbf{A} + \\mathbf{B}) \\otimes \\mathbf{C} = \\mathbf{A} \\otimes \\mathbf{C} + \\mathbf{B} \\otimes\n  \\mathbf{C}\n\\end{equation}\n\\begin{equation}\n  (k\\mathbf{A}) \\otimes \\mathbf{B} = \\mathbf{A} \\otimes (k\\mathbf{B}) = k(\\mathbf{A}\n  \\otimes \\mathbf{B})\n\\end{equation}\n\\begin{equation}\n  (\\mathbf{A} \\otimes \\mathbf{B}) \\otimes \\mathbf{C} = \\mathbf{A} \\otimes\n  (\\mathbf{B} \\otimes \\mathbf{C})\n\\end{equation}\n\nGenerally, the Kronecker product is non-commutative. 
The {\\em mixed-product} property states\n\\begin{equation}\\label{mixed-product}\n  (\\mathbf{A}\\otimes\\mathbf{B})(\\mathbf{C}\\otimes\\mathbf{D}) = (\\mathbf{A}\\mathbf{C})\n  \\otimes (\\mathbf{B}\\mathbf{D}).\n\\end{equation}\n\nFurthermore, if $\\mathbf{A}$ and $\\mathbf{B}$ are invertible, then\n\\begin{equation}\n(\\mathbf{A} \\otimes \\mathbf{B})^{-1} = \\mathbf{A}^{-1} \\otimes \\mathbf{B}^{-1}.\n\\end{equation}\n\nWe can also represent the multiplication of three matrices using the Kronecker\nproduct:\n\\begin{equation}\\label{matmult}\n  vec(\\mathbf{A}\\mathbf{B}\\mathbf{C}) = (\\mathbf{C}^T\\otimes\\mathbf{A})vec(\\mathbf{B}),\n\\end{equation}\nwhere $vec(\\mathbf{B})$ takes the columns of $\\mathbf{B}$ and stacks them, creating a\nsingle column vector.\n\nCourtesy of Paul Fischer, there are other identities concerning the\ninversion of $\\mathbf{C} = (\\mathbf{A} \\otimes I + I\\otimes\\mathbf{B})$, which\nwe will add later.\n\n\\section{Construction of Coupled Quantum Hamiltonians}\nTo construct coupled quantum Hamiltonians, we must first construct the operators for each\nsubsystem. Each subsystem has its own creation and annihilation operators; those are sufficient\nto define most (all?) Hamiltonians involving that subsystem. Since the creation and annihilation\noperators are related by the adjoint operation, we only truly need to describe the annihilation\noperator. For example, given a subsystem with $n_a$ levels, let's define the annihilation\noperator,\n\\begin{equation}\n  a = \\begin{bmatrix}\n    0 & 1 & 0 & 0 & \\dotsm & 0 & 0\\\\\n    0 & 0 & \\sqrt{2} & 0 & \\dotsm & 0 & 0\\\\\n    0 & 0 & 0 & \\sqrt{3} & \\dotsm & 0 & 0\\\\\n    \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n    0 & 0 & 0 & 0 & \\dotsm & 0 & \\sqrt{n_a-1} \\\\\n    0 & 0 & 0 & 0 & \\dotsm & 0 & 0\n    \\end{bmatrix}.\n\\end{equation}\n$a$ is a matrix of size $n_a \\times n_a$. We can create a generating function which can\ncreate this matrix using only the number of levels, since we know, {\\em a priori},\nthat the only nonzeros will be on the super-diagonal, and the value is always\n$\\sqrt{i+1}$, where $i$ is the row index (counting from 0), with corresponding column index\n$j=i+1$. The creation operator\ncan also be defined by transposing the $i$ and $j$ of the annihilation operator,\nand the number operator $a^\\dagger a$ has a similarly simple definition, where\nthe value is simply $i$, with corresponding column index $j=i$. This allows us, regardless of\nthe number of levels of the subsystem, to store only a single integer, the number of levels,\nwhich uniquely defines the annihilation operator.\n\nA coupled Hamiltonian has multiple subsystems, possibly with different numbers of levels.\nConstructing the full Hamiltonian involves taking Kronecker products of our base operators\nto obtain their representation in the full Hilbert space. For example, let us take three operators,\n$a, b,$ and $c$, with levels $n_a, n_b,$ and $n_c$, respectively. \nThe full Hilbert space size\nis then $N = n_a n_b n_c$. 
We will always use the convention that $a, b,$ and $c$ represent\nthe operators within their own (small) Hilbert space, and $\\tilde{a}, \\tilde{b},$ and\n$\\tilde{c}$ represent the operators in the total Hilbert space, where\n\\begin{align}\\label{fullop}\n  \\begin{split}\n    \\tilde{a} &= a \\otimes I_b \\otimes I_c,\\\\\n    \\tilde{b} &= I_a \\otimes b \\otimes I_c,\\\\\n    \\tilde{c} &= I_a \\otimes I_b \\otimes c,\n  \\end{split}\n\\end{align}\nand $I_x$ is the identity matrix of size $n_x$.\nAn ever-present term in many Hamiltonians is the\nnumber operator, $\\tilde{a}^\\dagger \\tilde{a}$. We note that since $a$ and $a^\\dagger$ both\nreside in the same subspace, the combination of the two operators can be done within\nthat subspace, leading to\n\\begin{equation}\n  \\tilde{a}^\\dagger \\tilde{a} = a^\\dagger a \\otimes I_{bc}.\n\\end{equation}\nWe have used the fact that $I_b \\otimes I_c = I_{bc}$, the identity matrix of size\n$n_b n_c$. Combining terms from different subspaces, such as that needed in\na coupling term like $\\tilde{a} \\tilde{b}^\\dagger$, is also quite simple:\n\\begin{align}\n  \\begin{split}\n    \\tilde{a} \\tilde{b}^\\dagger &= (a \\otimes I_{bc})(I_a \\otimes b^\\dagger \\otimes I_c)\\\\\n    &= a I_a \\otimes I_{bc} (b^\\dagger\\otimes I_c) \\\\\n    &= a \\otimes b^\\dagger \\otimes I_c.\n  \\end{split}\n\\end{align}\nThis follows from the mixed-product property, eq.~\\ref{mixed-product}. Interestingly,\n\\begin{align}\\label{multiop}\n  \\begin{split}\n    \\tilde{b}^\\dagger \\tilde{a} &= (I_a \\otimes b^\\dagger \\otimes I_c)(a \\otimes I_{bc})\\\\\n    &= I_a a \\otimes (b^\\dagger\\otimes I_c) I_{bc}  \\\\\n    &= a \\otimes b^\\dagger \\otimes I_c,\n  \\end{split}\n\\end{align}\nwhich simply shows that operators from different subspaces commute.\n\n\\section{Efficiently Generating Hamiltonians in the Full Hilbert Space}\\label{generation}\nSo far, we have described generating operators within their subspace and given forms for the\ncombination of operators in the full Hilbert space. We also want to generate operators in the\nfull Hilbert space in both a space- and time-efficient manner, since the full Hilbert space\nis much, much larger than the subspaces.\n\n\\subsection{Operators from a Single Subspace}\nFirst, let us define the algorithm for generating\n$\\tilde{a} = a \\otimes I_{bc}$.\nThere are several simplifications from the general algorithm described in section~\\ref{basicprops},\nthe foremost being that we are taking a Kronecker product with the identity matrix, which is\ndiagonal and has entries of value one. Furthermore, $a$ is superdiagonal, with a simple\ngenerating function. This leads to the following algorithm for generating $\\tilde{a}$:\n\n\\begin{verbatim}\ndo k1=0,n_b*n_c\n  do i=0,n_a-1\n    a_tilde(n_b*n_c*i+k1,n_b*n_c*(i+1)+k1) = sqrt(i+1)\n  end\nend\n\\end{verbatim}\n\nHere, we have made use of several facts previously identified in this paper. First, we\nused the generating function for $a$, where, given row index $i$, we have column index $j=i+1$ and\nvalue $\\sqrt{i+1}$. The loop is only of size $n_a-1$ because the superdiagonal only has\n$n_a-1$ elements. We have also used the diagonal structure of $I_{bc}$ to reduce the\ntwo outer loops of the general algorithm into one loop. With these two simplifications,\nour generating function only touches the nonzero elements, doing the minimum number\nof calculations needed. Furthermore, we are able to generate the much larger $\\tilde{a}$ from\nonly three integer values: $n_a, n_b,$ and $n_c$, the number of levels of each subsystem.\n
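\nIn Python, a sketch of this generator (with hypothetical helper names \\texttt{lower} and \\texttt{a\\_tilde}), checked against the dense Kronecker product:\n\\begin{verbatim}\nimport numpy as np\n\ndef lower(n):\n    # annihilation operator within its own subspace: superdiagonal sqrt(i+1)\n    return np.diag(np.sqrt(np.arange(1, n)), k=1)\n\ndef a_tilde(n_a, n_b, n_c):\n    N = n_a * n_b * n_c\n    M = np.zeros((N, N))\n    for k1 in range(n_b * n_c):      # diagonal of I_bc\n        for i in range(n_a - 1):     # superdiagonal of a\n            M[n_b*n_c*i + k1, n_b*n_c*(i+1) + k1] = np.sqrt(i + 1)\n    return M\n\nn_a, n_b, n_c = 2, 3, 4\nassert np.allclose(a_tilde(n_a, n_b, n_c),\n                   np.kron(lower(n_a), np.eye(n_b * n_c)))\n\\end{verbatim}\n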
\nNow, let us define a similar algorithm for $\\tilde{c}$.\n\\begin{verbatim}\ndo k2=0,n_a*n_b\n  do i=0,n_c-1\n    c_tilde(n_c*k2+i,n_c*k2+(i+1)) = sqrt(i+1) \n  end\nend\n\\end{verbatim}\n\nWe have used similar tricks here; the only difference is that the position of the identity\nmatrix has switched from being after the operator to being before the operator. Nonetheless,\njudicious application of the general algorithm with the diagonal and superdiagonal structure\nleads to a similarly simple, minimal algorithm.\n\nFinally, we now describe $\\tilde{b}$. This is slightly more complicated than\n$\\tilde{a}$ and $\\tilde{c}$ because we now have identity matrices both before and after\nthe operator, $b$. Rather than define the general algorithm for\n$\\mathbf{A}\\otimes\\mathbf{B}\\otimes\\mathbf{C}$, which would certainly lead to the answer,\nwe will combine the above algorithms for $\\tilde{a}$ and $\\tilde{c}$.\nThis leads to the following algorithm for generating\n$\\tilde{b} = I_a \\otimes b \\otimes I_c$:\n\n\\begin{verbatim}\ndo k1=0,n_c\n  do k2=0,n_a\n    do i=0,n_b-1\n      b_tilde(i*n_c+k1+k2*n_b*n_c,(i+1)*n_c+k1+k2*n_b*n_c) = sqrt(i+1)\n    end\n  end\nend\n\\end{verbatim}\n\nWe note that the algorithm for $\\tilde{b}$ is general in the sense that it can arbitrarily\ndefine any size of identity matrix before and after an operator, and, because\n$I_b \\otimes I_c = I_{bc}$, it is sufficient to generate the full Hilbert space representation\nof any single subspace operator, regardless of how many total operators there are. For example,\ntake some operator \n\\begin{align}\n  \\begin{split}\n    \\tilde{d} &= I_a \\otimes I_b \\otimes I_c \\otimes d \\otimes I_e \\otimes I_f \\\\\n              &= I_{abc} \\otimes d \\otimes I_{ef}.\n  \\end{split}\n\\end{align}\nWe can simply use the algorithm for $\\tilde{b}$, but now with a larger $n_{before}$ and\n$n_{after}$.\n\n\\subsection{Operators from Multiple Subspaces}\nIf we were only concerned with operators from a single subspace, as described in the\nprevious section, there would be no reason to use a combined Hilbert space--all subsystems\nwould be independent, and the full dynamics could be described by the dynamics of the\nsmaller, subspace Hamiltonians only. Combining operators from different subspaces allows\ninteresting physics to develop from the coupling. We now describe how to efficiently\ngenerate coupling terms, such as $\\tilde{a} \\tilde{b}^\\dagger$.\n\nAs we saw in eq.~\\ref{multiop}, $\\tilde{a} \\tilde{b}^\\dagger = a \\otimes b^\\dagger \\otimes I_c$.\nFirst, let us focus just on $a\\otimes b^\\dagger$; the Kronecker product with $I_c$ could then\nbe done using the algorithms in the previous section. Both $a$ and $b$ are superdiagonal\nwithin their subspaces, with associated generating functions ($b^\\dagger$ is subdiagonal,\nwith the row and column indices of $b$ transposed).\nUsing this, we define the following algorithm to generate $a \\otimes b^\\dagger$,\n\n\\begin{verbatim}\ndo i_a = 0,n_a-1\n  do i_b = 0,n_b-1\n    a_bdag_tilde(n_b*i_a+(i_b+1),n_b*(i_a+1)+i_b) = sqrt(i_a+1)*sqrt(i_b+1)\n  end\nend\n\\end{verbatim}\n\nIn many cases, the operators will have some identity matrix between them, such as\n$\\tilde{a} \\tilde{c}^\\dagger = a \\otimes I_b \\otimes c^\\dagger$. 
In this case, we can\nfirst do $I_b \\otimes c^\\dagger$, followed by $a \\otimes (I_b \\otimes c^\\dagger)$.\n\n\\begin{verbatim}\ndo i_a = 0,n_a-1\n  do k3 = 0,n_b\n    do i_c = 0,n_c-1\n      a_cdag_tilde(n_c*n_b*i_a+n_c*k3+(i_c+1),n_c*n_b*(i_a+1)+n_c*k3+i_c) \n                   = sqrt(i_a+1)*sqrt(i_c+1)\n    end\n  end\nend\n\\end{verbatim}\n\nThough the indices are starting to get a little complex, we are still generating the full\nspace representations from just a few integers. Combining the above algorithm with the final\nalgorithm from the previous section describing $I_a \\otimes b \\otimes I_c$ is sufficient\nto fully generate all one- and two-operator Hamiltonian terms. We do not explicitly\nshow that here both because it is cumbersome, and because our implementation of these\nalgorithms utilizes a subroutine that specifically calculates\n$I_{before} \\otimes x \\otimes I_{after}$, given a value from $x$, that value's $i,j$ pair,\nand the number of levels of $x$. This allows for efficient reuse of this subroutine\nboth in one- and two-operator generations, as well as allows for less cumbersome\nindex calculations. One- and two-operator terms make up\nmany of the most interesting Hamiltonians, such as the Hubbard model, Jaynes-Cummings, etc.,\nand, as such, we do not extend this analysis to three or more operator terms, though\nit is rather trivial (albeit tiresome) to do.\n\n\\subsubsection{Structure of Combined Operators}\nTODO: Describe combinations of sub and superdiagonal becoming another sub or superdiagonal.\nOffsetnew = level2*nbetween*offset1 + offset2\n\\bigskip\n\nThese algorithms are extremely space efficient. To describe the Hamiltonian in the full Hilbert\nspace of $m$ operators requires only $m$ integers to be stored, regardless of the full Hilbert\nspace size. For example, if we wanted to combine 10 operators, each with 10 levels, the\nfull Hilbert space size would be $10^{10}$, or 10 billion. We can effectively build that\nHamiltonian from only 10 integers. This points to an incredible memory savings, even\nover storing the Hamiltonian sparsely. These algorithms can be used to develop a\nmatrix-free algorithm for time-stepping or steady-state solutions. If the matrix\nis desired, this is likely the most efficient way to generate it.\n\n\\section{Lindblad Operators and Superoperator Space}\nWe have thus far described how to efficiently generate Hamiltonians. These are sufficient for\nsolving pure state dynamics within the Schr\\\"odinger equation. In many cases, simulating\nopen quantum systems is necessary. One way of doing this is the Liouville matrix master equation,\n\\begin{equation}\\label{dmmaster}\n\\dot{\\rho} = -i [H,\\rho] + L(C)\\rho= -i (H\\rho - \\rho H) + L(C)\\rho,\n\\end{equation}\nwhere $H$ is the Hamiltonian and $L(C)\\rho$ is a Lindblad superoperator,\n\\begin{equation}\\label{lindblad}\nL(C)\\rho = C\\rho C^\\dagger - \\frac{1}{2}(C^\\dagger C \\rho + \\rho C^\\dagger C).\n\\end{equation}\nGenerally, there could be many Lindblad superoperators, one for each dissipative channel\nin the system, but we used only one here for brevity's sake. The previous section described\nhow to generate $H$ efficiently, but $L(C)$ cannot similarly be represented as a matrix with\nthe same dimensions as $H$, say $N\\times N$, as it is a superoperator\n(an operator which acts on the space\nof operators). 
This can also be seen from the first term in $L(C)$, $C\\rho C^\\dagger$, which\ncertainly cannot be represented by a simple matrix of the same dimensions as $C$.\n\nIf we work in superoperator space, however, we can represent $L(C)$ as a matrix of dimension\n$N^2 \\times N^2$. Additionally, we can represent $[H,\\rho]$ as a matrix. To construct these\nsuperoperator matrices, we make use of eq.~\\ref{matmult}, which describes how to transform\nmatrix products into a (long) vector multiplied by a Kronecker product. For example,\n\\begin{equation}\n  H\\rho I_N - I_N \\rho H = (I_N \\otimes H - H \\otimes I_N)vec(\\rho),\n\\end{equation}\nwhere we inserted an identity matrix to be able to make use of eq.~\\ref{matmult}. For the\nLindblad, we have\n\\begin{equation}\n  L(\\tilde{C})\\rho = \\Big(\\tilde{C}\\otimes \\tilde{C} - \\frac{1}{2}\n  (I_N \\otimes \\tilde{C}^\\dagger \\tilde{C} + \\tilde{C}^\\dagger \\tilde{C} \\otimes I_N)\\Big)vec(\\rho).\n\\end{equation}\nThis allows us to fully transform the master equation, eq.~\\ref{dmmaster}, into\n\\begin{align}\\label{superdm}\n  \\begin{split}\n    \\dot{\\tilde{\\rho}} &= A\\tilde{\\rho} \\\\\n    &=\\Big(-i (I_N \\otimes H - H \\otimes I_N) + \\tilde{C}\\otimes \\tilde{C} - \\frac{1}{2}\n    (I_N\\otimes \\tilde{C}^\\dagger \\tilde{C} +\n    \\tilde{C}^\\dagger \\tilde{C} \\otimes I_N)\\Big) \\tilde{\\rho},\n  \\end{split}\n\\end{align}\nwhere we have defined $\\tilde{\\rho} = vec(\\rho)$.\n\nThe superoperator matrix, $A$, may seem unwieldy because of its greatly increased size.\nWhile it may be much bigger, we can utilize the algorithms of section~\\ref{generation}\nto efficiently generate all of the terms. Terms of the form $I_N \\otimes X$ or\n$X \\otimes I_N$ simply\nincrease the sizes of the identity matrices involved, but do not change the algorithm in any way.\nThe only other term, $\\tilde{C} \\otimes \\tilde{C}$, is also quite simple,\n\\begin{equation}\n  \\tilde{C}\\otimes \\tilde{C} = I_{bef} \\otimes C \\otimes I_{af} \\otimes\n  I_{bef} \\otimes C \\otimes I_{af},\n\\end{equation}\nwhere $I_{bef}$ is the identity on the Hilbert space before operator $C$ and $I_{af}$ is the\nidentity on the Hilbert space after it. This term can be simply generated with the algorithm describing\n$a \\otimes I_b \\otimes c^\\dagger$ (with the addition of $I_{bef}$ and $I_{af}$, of course). Efficiently\nconstructing $A$ in this way can, similar to the previous section, allow for space-efficient\nmatrix-free methods, or can allow for the construction of the (very, very sparse) $A$ to be input\ninto some linear solver which utilizes a preconditioner based on the matrix $A$. A small numerical\nsanity check of this superoperator construction is sketched below. 
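\n\nThe following NumPy sketch (our addition; $N$, $H$, and $C$ are arbitrary small examples)\nassembles $A$ densely and compares $A\\,vec(\\rho)$ against the right-hand side of\neq.~\\ref{dmmaster}. With a column-major $vec$, $vec(X \\rho Y) = (Y^T \\otimes X)vec(\\rho)$\n(cf.\\ eq.~\\ref{matmult}), so transposes and conjugates appear below; they reduce to the\nforms written above when $H$ and $C$ are real.\n\n\\begin{verbatim}\nimport numpy as np\n\nN = 4\nrng = np.random.default_rng(0)\nH = rng.standard_normal((N, N)); H = H + H.T   # real symmetric Hamiltonian\nC = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator as collapse op\n\nI = np.eye(N)\nCdC = C.conj().T @ C\nA = (-1j * (np.kron(I, H) - np.kron(H.T, I))\n     + np.kron(C.conj(), C)\n     - 0.5 * (np.kron(I, CdC) + np.kron(CdC.T, I)))\n\n# Compare against -i[H,rho] + L(C)rho applied to a random rho.\nrho = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))\nrhs = (-1j * (H @ rho - rho @ H) + C @ rho @ C.conj().T\n       - 0.5 * (CdC @ rho + rho @ CdC))\nassert np.allclose(A @ rho.flatten(order='F'), rhs.flatten(order='F'))\n\\end{verbatim}\n\n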
Since\nthe best preconditioner for $A$ is, of course, $A^{-1}$, and we have an incredible amount of\nstructure in our matrix $A$, it is natural to ask if we can efficiently construct $A^{-1}$, or\nat least a very good approximation.\n\n\\section{Efficient Generation of the Inverse of A}\n\n\\end{document}\n", "meta": {"hexsha": "e2ec8416e65bb85a3f36ea6f493cff6e35afe6f2", "size": 17985, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/kron.tex", "max_stars_repo_name": "sgulania/QuaC", "max_stars_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2017-06-18T02:11:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-28T10:27:57.000Z", "max_issues_repo_path": "doc/kron.tex", "max_issues_repo_name": "sgulania/QuaC", "max_issues_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-17T15:16:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-03T14:21:56.000Z", "max_forks_repo_path": "doc/kron.tex", "max_forks_repo_name": "sgulania/QuaC", "max_forks_repo_head_hexsha": "2b47b378c6b5b823a094e9af79f7cb8eb39dd337", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2017-03-13T15:03:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-24T20:07:22.000Z", "avg_line_length": 48.3467741935, "max_line_length": 100, "alphanum_fraction": 0.7326105088, "num_tokens": 5410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5867953024157889}}
{"text": "\\documentclass[submission,copyright,creativecommons]{eptcs}\n\\providecommand{\\event}{ACL2 2020} % Name of the event you are submitting to\n\\usepackage{setspace}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n% \\usepackage{breakurl}              % Not needed if you use pdflatex only.\n\\usepackage{underscore}            % Only needed if you use pdflatex.\n\\newcommand{\\Mod}[1]{\\ (\\mathrm{mod}\\ #1)}\n\\newcommand{\\minus}{\\scalebox{0.75}[1.0]{$-$}}\n\\DeclareMathSymbol{\\sneg}{\\mathbin}{AMSa}{\"39}\n\n%% \\setlength{\\abovedisplayskip}{1pt}\n%% \\setlength{\\belowdisplayskip}{1pt}\n%% \\setlength{\\abovedisplayshortskip}{1pt}\n%% \\setlength{\\belowdisplayshortskip}{1pt}\n\n%%\\setlength{\\abovedisplayskip}{3pt}\n%%\\setlength{\\belowdisplayskip}{3pt}\n\n\\title{Minimal Fractional Representations of Integers mod $M$ }\n\\author{David Greve\n\\email{david@thegreves.com}\n}\n\\def\\titlerunning{Minimal Fractions}\n\\def\\authorrunning{D. Greve}\n\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\n\nWe say that the fraction $\\frac{N}{D}$ \\emph{represents} $x \\in\n\\mathbf{Z}/M\\mathbf{Z}$ when $x*D \\equiv N \\Mod{M}$.  Our definition\nadmits many possible fractional representations.  We say that\n$\\frac{N}{D}$ is a \\emph{minimal} representation of $x$ if no\ndenominator smaller than $D$ results in a numerator with a magnitude less than\nthe magnitude of $N$.  We introduce a function for computing such fractional\nrepresentations and prove that it generates minimal fractions.  We\nalso prove that every $x \\in \\mathbf{Z}/M\\mathbf{Z}$ has a minimal\nfractional representation in which the magnitudes of $N$ and $D$\nare less than or equal to $\\sqrt{M}$.\n\n\\end{abstract}\n\n\\section{Introduction}\n\nWe say that the fraction $\\frac{N}{D}$ \\emph{represents} $x \\in\n\\mathbf{Z}/M\\mathbf{Z}$ when $x*D \\equiv N \\Mod{M}$.  We denote this\nrelationship\\footnote{ We might say ``congruent to'' when $D \\perp M$\n  but we don't require this condition.}  as $x \\cong \\frac{N}{D} \\Mod{M}$.\nThis definition admits many possible fractional representations of\n$x$, some possibly not reduced.  For example, $7 \\Mod{17}$ has the\nfollowing representations:\n\\begin{spacing}{1.0}\n{\\small\n\\[\n\\{\n\\,7/1,\\, 14/2,\\, 4/3,\\, 11/4,\\, 1/5,\\, 8/6,\\, 15/7,\\, 5/8,\\, 12/9 ,\\, 2/10,\\, 9/11,\\, 16/12 ,\\,6/13,\\, 13/14,\\, 3/15,\\, 10/16,\\, 0/17 \\,\n\\}\n\\]\n}\n\\end{spacing}\n\\begin{spacing}{1.0}\nIn this work we consider both positive and negative residues in the\nnumerator.  A positive residue is defined as $(x*D \\bmod M)$ for $0 <\nD \\leq M$ while a negative residue is defined as $(x*D \\bmod M) - M$\nfor $0 \\leq D < M$.  Using the negative residue the fractional\nrepresentations of $7 \\Mod{17}$ are:\n\\end{spacing}\n\\begin{spacing}{1.0}\n{\\footnotesize\n\\[\n\\{\n\\sneg17/0,\\,\n\\sneg10/1,\\,\n\\sneg3/2,\\,\n\\sneg13/3,\\,\n\\sneg6/4,\\,\n\\sneg16/5,\\,\n\\sneg9/6,\\,\n\\sneg2/7,\\,\n\\sneg12/8,\\,\n\\sneg5/9,\\,\n\\sneg15/10,\\,\n\\sneg8/11,\\,\n\\sneg1/12,\\,\n\\sneg11/13,\\,\n\\sneg4/14,\\,\n\\sneg14/15,\\,\n\\sneg7/16\\,\n\\}\n\\]\n}\n\\end{spacing}\nWithin the same residue class (positive or negative), we say that\n$\\frac{N}{D}$ is a \\emph{minimal} representation of $x$ if no\ndenominator smaller than $D$ results in a numerator with a magnitude\nless than the magnitude of $N$.  ($7/1$) is minimal for $7 \\Mod{17}$ simply because we\ndon't consider positive residues with denominators less than 1.  
The\nfraction ($\\sneg3/2$) is also minimal because $\\lvert \\sneg3 \\rvert$ is\nless than the magnitude of the numerators of both negative fractions\nwith denominators less than $2$, ($\\sneg17/0$) and ($\\sneg10/1$).\n($\\sneg6/4$), however, is not minimal because ($\\sneg3/2$) has both a\nsmaller magnitude numerator and a smaller denominator.\n\nOur proof of correctness actually requires a stronger, more general\nnotion of minimality that is invariant over our algorithm for computing\nminimal fractions.  This more general invariant is expressed with\nrespect to a pair of fractions: one negative residual and one positive\nresidual.  The pair is considered minimal if, for all possible\ndenominators ($d$), if the magnitude of either the positive or\nnegative residual of $d*x$ is less than the \\emph{sum} of the\nmagnitudes of the numerators of the pair of fractions, then $d$ must\nbe greater than or equal to the denominator of the fraction with the\nsame residual sign.  Under this generalization the pair of fractions\n$(\\sneg3/2,\\,4/3)$ is considered minimal because no denominator less\nthan $3$ has a positive residual and no denominator less than $2$ has\na negative residual less than $\\lvert \\sneg3 \\rvert + \\lvert 4 \\rvert\n= 7$.\n\nOur computation of minimal fractions relies on the following property\nof the mediant computation\\footnote{As used in the generation of Farey\n  sequences~\\cite{Farey} except that, in our case, $N_1*D_2 - N_2*D_1$\n  is equal to $M$ rather than $1$}:\n\n\\begin{equation*}\nx \\cong \\frac{N_1}{D_1} \\; \\land \\;\nx \\cong \\frac{N_2}{D_2} \\; \\implies \\;\nx \\cong \\frac{N_1 + N_2}{D_1 + D_2} \\; \\; \\Mod{M}\n\\end{equation*}\n\nOur algorithm takes as input two minimal fractions with differing\nsigns, initially $\\sneg M/0$ and $x/1$ (which are trivially minimal).  It\nthen recursively replaces the fraction with the larger magnitude\nnumerator with the mediant of the two current fractions until one of\nthe numerators is zero.  The key to the termination of our algorithm\nis the observation that the mediant of two fractions whose numerators\ndiffer in sign is a fraction whose numerator is smaller in magnitude\nthan the larger of the magnitudes of the two original numerators.  We\nprove that this algorithm preserves our notion of minimal fractional pairs.\nThe minimal fractional pairs generated by our algorithm\nfor $7 \\Mod{17}$ are listed below.\n\n{\\small\n\\[\n(\\sneg17/0,\\,7/1),\\,\n(\\sneg10/1,\\,7/1),\\,(\\sneg3/2,\\,7/1),\\,(\\sneg3/2,\\,4/3),\\,(\\sneg3/2,\\,1/5),\\,(\\sneg2/7,\\,1/5),\\,(\\sneg1/12,\\,1/5),\\,\n(\\sneg1/12,\\,0/17)\\,\n\\]\n}\n\n\\begin{spacing}{1.05}\nIn addition to our minimality invariant we prove that every $x$ has a\nfractional representation in which both $\\lvert N \\rvert$ and $D$ are\nless than or equal to $\\sqrt{M}$.  This result follows from the fact\nthat $N_1*D_2 - N_2*D_1 = M$ where $N_1,D_1,D_2 \\geq 0 \\;\\land \\;N_2 < 0$ is\nalso an invariant of our algorithm.  Consider the case\nwhere $D_1 < \\sqrt{M}$ and $D_2 < \\sqrt{M}$ but $D_1 + D_2 \\geq \\sqrt{M}$.\nIf both $N_1 > \\sqrt{M}$ and $\\sneg N_2 > \\sqrt{M}$,\nthen in the following step, $\\lvert (D_1 + D_2)*N_i \\rvert$ will be\ngreater than $M$, violating our invariant.  Thus, at\nleast one $\\lvert N_i \\rvert \\leq \\sqrt{M}$.\n\\end{spacing}\n\n
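As a concrete illustration of the mediant iteration just described (a Python sketch of our\nown, not the verified ACL2 definition; the name \\texttt{minimal\\_pairs} is ours), the\nfollowing reproduces the pairs listed above for $7 \\Mod{17}$:\n\n\\begin{verbatim}\ndef minimal_pairs(x, M):\n    neg, pos = (-M, 0), (x % M, 1)   # trivially minimal starting pair\n    pairs = [(neg, pos)]\n    while neg[0] != 0 and pos[0] != 0:\n        med = (neg[0] + pos[0], neg[1] + pos[1])   # mediant of the pair\n        if abs(neg[0]) > abs(pos[0]):   # replace the larger-magnitude numerator\n            neg = med\n        else:\n            pos = med\n        pairs.append((neg, pos))\n    return pairs\n\n# minimal_pairs(7, 17) yields (-17/0, 7/1), (-10/1, 7/1), (-3/2, 7/1),\n# (-3/2, 4/3), (-3/2, 1/5), (-2/7, 1/5), (-1/12, 1/5), (-1/12, 0/17).\n\\end{verbatim}\n\n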
It is possible for a number to have more than one representation whose\ncoefficients are less than $\\sqrt{M}$.  For example, $12 \\Mod{17}$ is\nrepresented by both ($\\sneg3/4$) and ($2/3$), both of whose\ncoefficient magnitudes are less than $\\sqrt{17}$.  Deciding which is\n\\emph{the minimum} is a judgment call.  We say that the minimum\nfraction is the one with the smallest maximum coefficient, resolving\nties with the smaller denominator.  Under those conditions the\nminimum fractional representation of $12 \\Mod{17}$ is ($2/3$).  Using\nthis minimality criterion the minimum fractional representations for\neach of the numbers $1\\dots16 \\Mod{17}$ are:\n\n\\[\n\\{\n\\,1,\\,2,\\,3,\\,4,\\,\\sneg2/3,\\,1/3,\\,\\sneg3/2,\\,\\sneg1/2,\\,1/2,\\,3/2,\\,\\sneg1/3,\\,2/3,\\,\\sneg4,\\,\\sneg3,\\,\\sneg2,\\,\\sneg1\\,\n\\}\n\\]\n\n\\section{Conclusion}\n\nWe have verified an algorithm for computing minimal fractional\nrepresentations $\\frac{N}{D}$ of numbers $x \\in\n\\mathbf{Z}/M\\mathbf{Z}$.  We also proved that all such numbers have a\nrepresentation in which the magnitude of both $N$ and $D$ are\nless than or equal to $\\sqrt{M}$.  In the cryptographic community\nthere is interest in finding smooth numbers that result from specific\ncomputations.  The quadratic sieve algorithm~\\cite{Sieve}, for\nexample, attempts to find small numbers (numbers on the order of\n$\\sqrt{M}$) in hopes of factoring them into a smooth factor base.  We\nshow that any residue relatively prime to $M$ can be represented as a\nquotient of two numbers less than or equal to $\\sqrt{M}$.\n\n\\bibliography{generic}{}\n\\bibliographystyle{eptcs}\n\\end{document}\n", "meta": {"hexsha": "4d1d7d8588d96456e6b3ad02f58ac8ae5ad54b51", "size": 8012, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/paper.tex", "max_stars_repo_name": "TheBeaNerd/fraction", "max_stars_repo_head_hexsha": "daffd24ba2d8b4c2663ac5893e5d3dba63156e0a", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/paper.tex", "max_issues_repo_name": "TheBeaNerd/fraction", "max_issues_repo_head_hexsha": "daffd24ba2d8b4c2663ac5893e5d3dba63156e0a", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/paper.tex", "max_forks_repo_name": "TheBeaNerd/fraction", "max_forks_repo_head_hexsha": "daffd24ba2d8b4c2663ac5893e5d3dba63156e0a", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.2613065327, "max_line_length": 136, "alphanum_fraction": 0.7153020469, "num_tokens": 2614, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737473266735, "lm_q2_score": 0.8596637469145053, "lm_q1q2_score": 0.5867839051723229}}
{"text": "\n\\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size\n\\usepackage{physics}\n\\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs\n\\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default\n\\usepackage[english]{babel} % English language/hyphenation\n\\usepackage{amsmath,amsfonts,amsthm} % Math packages\n\\usepackage{braket}\n\\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template\n\\usepackage{tikz}\n\\usepackage{amsmath}\n\\usepackage{sectsty} % Allows customizing section commands\n\\allsectionsfont{\\centering \\normalfont\\scshape} % Make all sections centered, the default font and small caps\n\n\\usepackage{fancyhdr} % Custom headers and footers\n\\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers\n\\fancyhead{} % No page header - if you want one, create it in the same way as the footers below\n\\fancyfoot[L]{} % Empty left footer\n\\fancyfoot[C]{} % Empty center footer\n\\fancyfoot[R]{\\thepage} % Page numbering for right footer\n\\renewcommand{\\headrulewidth}{0pt} % Remove header underlines\n\\renewcommand{\\footrulewidth}{0pt} % Remove footer underlines\n\\setlength{\\headheight}{13.6pt} % Customize the height of the header\n\\usepackage{float}\n\\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\n\\setlength\\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text\n\n%----------------------------------------------------------------------------------------\n%\tTITLE SECTION\n%----------------------------------------------------------------------------------------\n\n\\newcommand{\\horrule}[1]{\\rule{\\linewidth}{#1}} % Create horizontal rule command with 1 argument of height\n\n\\title{\t\n\\normalfont \\normalsize \n\\textsc{California State University San Marcos \\\\ Dr. Dominguez, Physics 490} \\\\ [25pt] % Your university, school and/or department name(s)\n\\horrule{0.5pt} \\\\[0.4cm] % Thin top horizontal rule\n\\huge Astrophysics H.W. 2 \\\\ % The assignment title\n\\horrule{2pt} \\\\[0.5cm] % Thick bottom horizontal rule\n}\n\n\\author{Josh Lucas} % Your name\n\n\\date{\\normalsize\\today} % Today's date or a custom date\n\n\\begin{document}\n\n\\maketitle % Print the title\n\n%----------------------------------------------------------------------------------------\n%\tPROBLEM 1\n%----------------------------------------------------------------------------------------\n\n\\section{Earth's Orbit and Virial Theorem}\n\n\\textbf{(a) Consider the orbit of planet Earth, which is established by the interaction between the Sun and Earth masses. Please evaluate what is the ratio of kinetic energy (KE) to gravitational potential energy (PE). You may assume that the Earth has a circular orbit for simplicity.}\\\\\n\\\\\nWe know that the kinetic energy of an object is equal to one half the mass multiplied by the velocity squared, $T =\\frac{1}{2}mv^2$. We need to relate the velocity to the force from gravity, which we can do using the equations of uniform circular motion. 
If we take two derivatives of the location of the Earth, as it orbits the origin, with respect to time we can find the acceleration,\n\\begin{align*}\n\\vec{r} & = r \\cos(\\omega t) \\hat{i} + r \\sin(\\omega t) \\hat{j} \\\\\n\\frac{d \\vec{r}}{dt} & = \\frac{d}{dt} \\big[ r\\cos(\\omega t) \\hat{i} + r\\sin(\\omega t) \\hat{j} \\big]\\\\\n\\dot{\\vec{r}} & = -\\omega r\\sin(\\omega t)\\hat{i} + \\omega r \\cos(\\omega t)\\hat{j} \\\\\n\\frac{d \\dot{\\vec{r}}}{dt} & = \\frac{d}{dt} \\big[ - \\omega r\\sin(\\omega t) \\hat{i} + \\omega r\\cos(\\omega t) \\hat{j} \\big]\\\\\n\\vec{a} = \\ddot{\\vec{r}} & = - \\omega^2 \\vec{r}\\quad \\text{Substituting into Newton's second law,}\\\\ \nm(-\\omega^2 \\vec{r}) & = -G\\frac{Mm}{r^2} \\hat{r} \\quad \\text{We need to relate omega to velocity}\n\\end{align*}\nThe velocity is the arc length $S$, the distance around the Sun, divided by the period of orbit $T$,\n\\begin{equation*}\nS  = \\int_0^{2\\pi} r d\\theta = 2\\pi r,\\quad  v = \\frac{S}{T} \\rightarrow \\frac{2\\pi r}{T}, \\quad \\omega  = \\frac{2\\pi}{T}, \\quad v = \\omega r \\rightarrow \\omega = \\frac{v}{r}\\quad \n\\end{equation*}\nsubstituting in for omega and solving for velocity (equating magnitudes)\n\\begin{align*}\nm \\bigg(\\frac{v}{r} \\bigg)^2 r & = G \\frac{Mm}{r^2} \\\\\nv^2 & = \\frac{GM}{r} \\quad \\text{We can now substitute into kinetic energy}\\\\\n\\frac{1}{2}mv^2 & = \\frac{1}{2}m\\bigg( \\frac{GM}{r} \\bigg)\\\\\nT & = \\frac{GMm}{2r}\n\\end{align*}\nWe can derive the gravitational potential energy of the Earth from the force of gravity by integrating the negative of the work of the force in from infinity to some distance r,\n\\begin{align*}\nU & = -\\int_\\infty^r \\vec{F}\\cdot d\\vec{r} \\\\\n& = -\\int_\\infty^r -G\\frac{Mm}{r^2}\\hat{r} \\cdot d\\vec{r} \\quad \\text{The vectors are parallel} \\\\\n& = GMm\\int_\\infty^r r^{-2} dr \\\\\nU(r) & = - G\\frac{Mm}{r}\n\\end{align*}\nThe total gravitational energy of the earth is then the sum of kinetic energy and potential,\n\\begin{align*}\nE  &= T + U\\\\\n& = \\frac{1}{2}mv^2  - \\frac{GMm}{r}\\\\\n& = \\frac{1}{2}m (\\frac{GM}{r}) - \\frac{GMm}{r} \\\\\nE & = - \\frac{GMm}{2r} = \\frac{1}{2}U \\quad \\text{or,} \\\\\nT & = -\\frac{1}{2}U\n\\end{align*}\nFor gravitational energy we find the ratio of kinetic energy to potential energy to be negative one half.\\\\\n\\\\\n\\textbf{(b) Compare your answer in part (a) to the prediction made by Virial Theorem.}\\\\\n\\\\\nThe Virial theorem states that for a system in equilibrium the time average kinetic energy is equal to negative one half the sum of the time average of the total force dotted with the distance,\n\\begin{align*}\n\\langle T\\rangle & = -\\frac{1}{2}\\sum_1^{n} \\langle \\vec{F_k} \\cdot \\vec{r_k} \\rangle \\quad \\text{We know that potential is related to work,} \\\\\nU & = - \\int \\vec{F} \\cdot d\\vec{r} \\quad \\text{and for an inverse-square force } \\vec{F}\\cdot\\vec{r} = U \\text{; substituting in for the sum,} \\\\\n\\langle T\\rangle & = -\\frac{1}{2} U \\quad \\text{which is the result we obtained.}\n\\end{align*}\n \n\\section{Gravitational Self-Energy of Celestial Bodies}\n\\textbf{(a) Consider a spherical celestial body with radius R and total mass M. 
Using unit\nanalysis of the gravitational potential energy expression, please write down an expression\nusing G, R, and M that may describe the gravitational self-potential energy of this body.}\\\\\n\\\\\nWe know that our expression needs to give us units of energy,\n\\begin{align*}\n\\frac{GMm}{r} & = \\frac{m^3}{kg\\ s^2}\\frac{  (kg)(kg)}{(m)} \\\\\n\\frac{GMm}{r} & = \\frac{m^2\\ kg}{s^2} = \\mathrm{joules} \\\\\n& = \\frac{GM^2}{r}\\quad \\text{We can multiply by some unitless scalar C} \\\\\nE & = C\\frac{GM^2}{r}\n\\end{align*}\n\\textbf{(b) Please derive the gravitational self energy of this body by assuming that the mass\ndensity as a function of r, the radial distance, is uniform. Compare to your answer from\npart (a)}\\\\\n\\\\\nThe potential from gravity is,\n\\begin{equation*}\nU = - \\frac{GMm}{r}\n\\end{equation*}\nThe radius is the parameter that will control the strength of potential due to the growing size of the mass. We need to describe our mass in terms of radius which can be done if we consider density.\nUniform density for a sphere is,\n\\begin{equation*}\n\\rho = \\frac{mass}{volume} = \\frac{M_{\\odot}}{\\tfrac{4}{3} \\pi R_{\\odot}^3} \\rightarrow\\  mass\\ = \\bigg ( \\frac{4}{3} \\pi r^3 \\bigg)\\frac{M_{\\odot}}{\\tfrac{4}{3} \\pi R_{\\odot}^3} = M_{\\odot}\\frac{r^3}{R_{\\odot}^3}\n\\end{equation*}\nIf the mass starts as an infinitesimal speck that grows by adding spherical shells of uniform density, then the mass of each shell is,\n\\begin{align*}\ndm &= \\int_0^{2\\pi} \\int_0^\\pi  \\rho r^2 \\sin(\\theta) d\\theta d\\phi\\, dr\\\\\ndm &= 4\\pi \\rho r^2 dr \n\\end{align*}\nSubstituting $dm$ and the enclosed mass $m(r) = M_{\\odot}\\tfrac{r^3}{R_{\\odot}^3}$ into our formula for gravitational potential and integrating over the shells, we have,\n\\begin{align*}\ndU & = - \\frac{G\\, m(r)\\, dm}{r} \\\\\n& = - \\frac{G}{r} \\bigg ( M_{\\odot}\\tfrac{r^3}{R_{\\odot}^3} \\bigg) \\bigg( 4\\pi \\rho r^2 dr \\bigg)\\\\\nU & = - \\frac{4\\pi GM_{\\odot} \\rho}{R_{\\odot}^3} \\int_0^{R_{\\odot}} r^4 dr \\\\\n& =  - \\frac{4\\pi GM_{\\odot} }{R_{\\odot}^3} \\bigg( \\frac{3M_{\\odot}}{4\\pi R_{\\odot}^3} \\bigg ) \\frac{R_{\\odot}^5}{5}\\\\ \nU & = - \\frac{3}{5} \\frac{G M_{\\odot}^2}{R_{\\odot}}\n\\end{align*} \nThis is the expression for the gravitational self-potential energy, matching the form from part (a) with $C = -\\tfrac{3}{5}$.\n\\section{Thermal Energy and Temperature of the Sun}\n\\textbf{(a) Assuming that the kinetic energy of the Sun is dominated by random thermal energy,\nplease use the Virial theorem to estimate the total thermal energy found within the Sun.\nPlease express in terms of G, M, and R.}\\\\\n\\\\\nWe know that the kinetic energy of a celestial body estimated by the Virial theorem is,\n\\begin{align*}\n\\langle KE \\rangle & = -\\frac{1}{2} \\langle PE\\rangle \\quad \\text{We can use our potential from earlier} \\\\ \nKE & = -\\frac{1}{2}PE = -\\frac{1}{2} \\Big ( - \\frac{3}{5} \\frac{G M_{\\odot}^2}{R_{\\odot}} \\Big)\\\\\nKE& = \\frac{3GM^2_{\\odot}}{10R_{\\odot}}\n\\end{align*} \n\\textbf{(b) Assuming that the mass of the Sun is dominated by hydrogen atoms (m=mp), use the\nresults of part (a) to estimate the average thermal velocity (vT ) for hydrogen atoms found\nin the Sun.}\n\\begin{align*}\nKE & = \\frac{3}{2} K_BT \\text{ per molecule} \\\\\n\\frac{3GM^2_{\\odot}}{10R_{\\odot}} & = \\frac{3}{2} K_BT\\big ( \\tfrac{M_{\\odot}}{m_p} \\big) \\\\\n\\frac{3GM^2_{\\odot}}{10R_{\\odot}} \\bigg( \\frac{2 m_p}{3 K_B M_{\\odot}} \\bigg) & = T\\\\\n\\frac{GM_{\\odot} m_p}{5K_BR_{\\odot}} & = T\n\\end{align*}\n\\begin{align*}\nv_{avg} & = \\sqrt{\\tfrac{3K_BT}{m}} \\\\\n&= \\sqrt{\\tfrac{3K_B}{m_p} \\bigg 
(\\tfrac{GM_{\\odot}m_p}{5K_BR_{\\odot} }\\bigg )} \\\\\n& = \\sqrt{\\tfrac{3GM_{\\odot}}{5R_{\\odot}}} \\\\\nv_{rms} & = \\sqrt{\\tfrac{3\\ (6.67\\times 10^{-11}m^3 kg^{-1} s^{-2})\\ (2\\times 10^{30} kg)}{5\\ (7\\times 10^8 m)}} \\approx 3.38 \\times 10^5 \\tfrac{m}{s}\n\\end{align*}\nWe can obtain the same result from the relation,\n\\begin{align*}\nKE = \\frac{1}{2}mv^2 & = \\frac{3GM_{\\odot}^2}{10 R_{\\odot}}\\\\\nv^2 & = \\frac{3GM_{\\odot}^2}{5R_{\\odot} M_{\\odot}} \\quad \\text{where m is equal to $M_{\\odot}$} \\\\\nv_{rms} & = \\sqrt{\\frac{3GM_{\\odot}}{5R_{\\odot}}}\n\\end{align*}\n\\textbf{(c) Use the ideal gas law or kinetic theory and your results from part (b) to estimate the\naverage temperature inside the Sun.}\\\\\n\\begin{align*}\nKE & = \\frac{3}{2} NK_BT \\quad \\text{where $N$ is the number of particles } \\big ( \\tfrac{M_{\\odot}}{m_p} \\big) \\\\\n\\frac{3GM^2_{\\odot}}{10R_{\\odot}} & = \\frac{3}{2} NK_BT \\\\\nT & = \\frac{3GM^2_{\\odot}}{10R_{\\odot}} \\bigg( \\frac{2 }{3 NK_B } \\bigg) \\\\\n & = \\frac{GM_{\\odot}^2}{5NK_BR_{\\odot}}  \\\\\n& = \\frac{ (6.67\\times 10^{-11}m^3 kg^{-1} s^{-2})\\ (2\\times 10^{30} kg)^2\\ }{5\\ (1.19\\times 10^{57})(1.38\\times 10^{-23}) (7\\times 10^8 m)} \\\\\nT & \\approx 4.61\\times 10^6\\ \\mathrm{K} \\approx \\text{5 million kelvin}\n\\end{align*}\n\n\\end{document}", "meta": {"hexsha": "9a73e27a51f0f6fcfb3045cbd463c74660c05a32", "size": 10716, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "H.W.2/H.W.2.tex", "max_stars_repo_name": "Epikarsios/astroHW1", "max_stars_repo_head_hexsha": "6816534f13ef01c85484d3b6c513f6232952fcd9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "H.W.2/H.W.2.tex", "max_issues_repo_name": "Epikarsios/astroHW1", "max_issues_repo_head_hexsha": "6816534f13ef01c85484d3b6c513f6232952fcd9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "H.W.2/H.W.2.tex", "max_forks_repo_name": "Epikarsios/astroHW1", "max_forks_repo_head_hexsha": "6816534f13ef01c85484d3b6c513f6232952fcd9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.6984126984, "max_line_length": 394, "alphanum_fraction": 0.6556550952, "num_tokens": 3764, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.8128673110375457, "lm_q1q2_score": 0.58668146909309}}
{"text": "\n\\documentclass{beamer}\n\\usetheme{CambridgeUS}\n\\beamertemplatenavigationsymbolsempty\n\n\\input{math.tex}\n\n\\title{math.tex}\n\\subtitle{a short guide}\n\\author{Tudor Berariu}\n\\date{version 0.1}\n\n\\begin{document}\n\n\\frame[plain]{\\titlepage}\n\n\\section{Functions}\n\n\\begin{frame}[fragile]{Functions}\n        \\begin{itemize}\n            \\item Use \\verb!\\fn! to write \\alert{function applications} and not to worry about parentheses.\n            \\begin{itemize}\n                \\item \\verb!\\fn[\\max]{x,0}! produces $$\\fn[\\max]{x, 0}$$\n                \\item \\verb!\\fn{\\fn[g]{\\fn[h]{x^2 + 1}^2}^{-1} - 1}! produces\n                  $$\\fn[f]{\\fn[g]{\\fn[h]{x^2 + 1}^2}^{-1} - 1}$$\n                \\end{itemize}\n            \\item Define new functions on top of \\verb!\\fn! for nicer equations.\n        \\begin{verbatim}\n    \\newcommand{\\foo}[1]{\\fn[foo]{#1}}\n\n    \\foo{x + 1}\n            \\end{verbatim}\n        \\end{itemize}\n    \\end{frame}\n    \n    \\begin{frame}[fragile]{Partial derivatives}\n        \\begin{itemize}\n            \\item Use \\verb!\\fstpd! and \\verb!\\fstpdfn! to write \\alert{first order partial derivatives}.\n                \\begin{itemize}\n                    \\item \\verb!\\fstpd{f}{x}! produces $$\\fstpd{f}{x}$$\n                    \\item \\verb!\\fstpdfn{\\fn[\\sin]{2x + 1}^2 +3}{x}! produces\n                     $$\\fstpdfn{\\fn[\\sin]{2x + 1}^2 +3}{x}$$\n                \\end{itemize}\n            \\item Similarly, use \\verb!\\sndpd! and \\verb!\\sndpdfn! for\n             \\alert{second order partial derivatives}.\n                $$\\sndpd{\\fn{x, y}}{x}{y} = \\sndpdfn{x^2 + 2y}{x}{y}$$\n        \\end{itemize}        \n    \\end{frame}\n\n    \\begin{frame}[fragile]{Some useful operators}\n        \\begin{itemize}\n            \\item Use \\verb!\\argmax{var}! for the {\\tt argmax} operator. \\verb!\\argmin! exists as well.\n                $$\\argmax{\\lambda} \\fn{\\lambda}$$\n            \\item Both \\verb!\\argmin! and \\verb!\\argmax! take an optional argument intended to insert the needed space after the operator. The default is \\verb!\\;!, but you can provide whatever you feel appropriate.\n            \\begin{itemize}\n                \\item \\verb!\\argmin[]{\\alpha} \\fn{\\alpha}! produces\n                    $$ \\argmin[]{\\alpha} \\fn{\\alpha} $$\n                \\item \\verb!\\argmin[\\quad]{\\alpha} \\fn{\\alpha}! produces\n                    $$ \\argmin[\\quad]{\\alpha} \\fn{\\alpha} $$\n            \\end{itemize}\n        \\end{itemize}\n    \\end{frame}\n\n    \\begin{frame}[fragile]{Expected values}\n        \\begin{itemize}\n            \\item Use \\verb!\\expval! for the usual way of writing expected values.\n                \\begin{itemize}\n                    \\item Write \\verb!\\expval[{x \\sim \\fn[p]{x}}]{\\fn[g]{x}}! to produce\n                    \\footnote{Yes, this is the correct way to provide optional arguments\n                    in \\LaTeX: {\\tt [\\string{...\\string}]} }:\n                    $$\\expval[{x \\sim \\fn[p]{x}}]{ \\fn[g]{x} }$$\n                    \\item or, simply \\verb!\\expval{\\fn{x}}! to produce:\n                    $$\\expval{\\fn{x}}$$\n                \\end{itemize}\n        \\end{itemize}\n    \\end{frame}\n\n\\section{Tensors}\n\\label{sec:tensors}\n\n    \\begin{frame}[fragile]{Matrix operations}\n        \\begin{itemize}\n            \\item Use \\verb!\\tr! to \\alert{transpose} matrices (it uses the \\verb!\\intercal! 
symbol).\n                \\begin{itemize}\n                    \\item \\verb!\\tr{A}! produces $\\tr{A}$.\n                \\end{itemize}\n            \\item Use \\verb!\\inv! to refer to the \\alert{inverse} of a matrix.\n            \\begin{itemize}\n                \\item \\verb!\\inv{A}! produces $\\inv{A}$.\n            \\end{itemize}\n        \\end{itemize}\n    \\end{frame}\n\n\\section{Organizing your formulas}\n\\label{sec:org}\n\n    \\begin{frame}[fragile]{Parentheses and brackets}\n        \\begin{itemize}\n            \\item Use \\verb!\\rp! for \\alert{round parentheses} around some expression.\n            Do that if you prefer this to writing \\verb!\\left( ... \\right)! yourself.\n        \\end{itemize}\n    \\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "3ee7d68225fc4c77d111f82584578ec51a725c2d", "size": 4001, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "guide.tex", "max_stars_repo_name": "tudor-berariu/math.tex", "max_stars_repo_head_hexsha": "e91d27ac2f010c844c5bea7872ef6e566ff2cb8a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-12-07T15:30:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-07T18:15:43.000Z", "max_issues_repo_path": "guide.tex", "max_issues_repo_name": "tudor-berariu/math.tex", "max_issues_repo_head_hexsha": "e91d27ac2f010c844c5bea7872ef6e566ff2cb8a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "guide.tex", "max_forks_repo_name": "tudor-berariu/math.tex", "max_forks_repo_head_hexsha": "e91d27ac2f010c844c5bea7872ef6e566ff2cb8a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4711538462, "max_line_length": 214, "alphanum_fraction": 0.5346163459, "num_tokens": 1184, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271998, "lm_q2_score": 0.8128673110375457, "lm_q1q2_score": 0.5866814496323998}}
{"text": "%% JPEG Preimage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{JPEG Preimage}\n\\label{sec:jpeg}\n\n%\\newcommand{\\Int}[0]{\\mathrm{Z}}\n\nTo validate our work, we have applied BLT to the problem of\ncomputing preimages from JPEG decompression.\n\nTo simplify exposition, we will restrict our attention to\n\\emph{monochrome} JPEG images that consist of a single\n$8\\times{}8$ block of pixels.\n%\nIn experiments, we have also applied BLT to color images; color\nproblems involve more variables and constraints, but are otherwise similar\nto the monochrome case.\n%\nOur restriction to images that are $8$ by $8$ does not affect\nscalability either; JPEG compresses each $8\\times{}8$ block of\npixels within an image independently, so computing the preimage\nfor a larger image is the same as finding preimages for multiple\nindependent $8$ by $8$ blocks.\n\nWhen compressing an image, each pixel ranges from\n$0$ to $255$ where $0$ corresponds to black and $255$ corresponds to white.\nTo compress an image, JPEG performs the following steps~\\cite{jpeg}:\n\n\\newcommand{\\dct}{\\mathrm{dct}_2}\n\\newcommand{\\idct}{\\mathrm{idct}_2}\n\n\\begin{enumerate}\n\n\\item Each pixel value is shifted by $-128$ so that the pixel\n   values range between $-128$ and $127$.\n\n\\item A 2d discrete cosine transform (DCT) is applied to each block\n    that transforms the coordinate space from the image pixel values to\n    the frequency domain.  This has the effect of separating out the\n    image components by frequency, so that coarse-grained qualities\n    such as overall brightness are represented distinctly from more\n    fine-grained fluctuations.  For a given input block\n    $I$, we denote the frequency representation by $F = \\dct(I)$.\n    A 2d DCT is obtained by first applying a 1d DCT to each\n    column in the image, and then applying a 1d DCT to each row\n    in the image.\n\n\\item A quantization step is performed in which each coordinate in\n    the frequency representation $F$ is quantized to the nearest multiple of an\n    associated value in a \\emph{quantization matrix}\n    $\\mat{Q}_{lvl} \\in \\ZZ^{8\\times{}8}$. The quantization matrix is\n    constructed so that high-frequency components are rounded to more\n    coarse-grained values than low-frequency components.\n\n%  The human eye\n%    is much less perceptive of high-frequency changes than low-frequency\n%    changes in the image, and thus one can compress an image while\n%    minimizing the perceived loss of information by more coarsely representing\n%    fine grained information.\n\n    JPEG allows users some control over the tradeoff between the\n    compression ratio and image quality by providing a parameter $lvl$,\n    called the ``quality level'', which ranges from 1 to 100. The\n    coefficients in $\\mat{Q}_{lvl}$ grow as the quality level\n    $lvl$ decreases.\n\n    Given the output of the frequency transform $\\dct(I)$, the output\n    of the quantization step consists of the rounded quotient\n    $C = \\mathrm{round}(\\dct(I) ./ \\mat{Q}_{lvl})$.  
The division is a\n    pointwise (Hadamard) division, rather than an inverse linear\n    transform.\n\n\\item Finally, a variant of Huffman compression is performed that\n    compresses the quantized coefficients into a string of bits.\n    The compression is lossless, and designed to represent the\n    quantized coefficients in a small number of bits.\n\n\\end{enumerate}\n\n\\newcommand{\\gbox}[0]{\\textcolor{black!50!white!50}{\\rule{10pt}{10pt}}}\n\\newcommand{\\mtt}[1]{\\mathtt{#1}}\n\nJPEG decompression just runs these steps in reverse order starting\nwith Huffman decompression.  As Huffman compression is lossless and\ncan be directly inverted, for our constraint satisfaction problem we\nbegin with the rounded quantized coefficients $C$, and the resulting\nimage $I$ can be obtained by computing:\n%\n\\[I = \\mathrm{round}(\\idct(\\mat{Q}_{lvl} .* C) + 128)\\]\n%\nAs an example preimage problem, suppose that we are looking for any\nimage containing ``Hello World!'' encoded as ASCII text within a block.\n%\n\\begin{equation}\n\\begin{array}{c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c}\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\mtt{H} & \\mtt{e} & \\mtt{l} & \\mtt{l} & \\mtt{o} & \\mtt{\\_} & \\gbox\\\\[-2pt]\n\\gbox & \\mtt{W} & \\mtt{o} & \\mtt{r} & \\mtt{l} & \\mtt{d} & \\mtt{!} & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\end{array}\n\\hspace{1in}\n\\begin{array}{c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c@{\\hspace{1.5pt}}c}\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\gbox & \\mtt{48} & \\mtt{65} & \\mtt{6c} & \\mtt{6c} & \\mtt{6f} & \\mtt{20} & \\gbox\\\\[-2pt]\n\\gbox & \\mtt{57} & \\mtt{6f} & \\mtt{72} & \\mtt{6c} & \\mtt{64} & \\mtt{21} & \\gbox\\\\[-2pt]\n\\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox & \\gbox\\\\[-2pt]\n\\end{array}\n\\end{equation}\n%\nTo construct a bounded ILP problem, we take the constraints above and\ngenerate the lower and upper bounds needed so that the final rounding\nfunction will return an image satisfying the constraints.  
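\n\nConcretely, these bounds can be tabulated with a few lines of NumPy (a sketch of our own,\nnot part of the BLT tooling; the row and column positions follow the block layout shown\nabove):\n\n\\begin{verbatim}\nimport numpy as np\n\n# Unconstrained pixels may round to any value in 0..255.\nL = np.full((8, 8), -0.5)\nU = np.full((8, 8), 255.5)\nfor r, text in ((5, b"Hello "), (6, b"World!")):\n    for c, byte in enumerate(text, start=1):  # columns 1..6 hold the ASCII bytes\n        L[r, c] = byte - 0.5   # round(v) == byte  iff  byte-0.5 <= v <= byte+0.5\n        U[r, c] = byte + 0.5\n\\end{verbatim}\n\n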
This gives us\nthe two matrices $\\mat{L}$ and $\\mat{U}$ below:\n%\n{\\small\n\\begin{equation}\n\\left[\n\\begin{array}{r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r}\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n-0.5 & 71.5 & 100.5 & 107.5 & 107.5 & 110.5 & 31.5 & -0.5\\\\[-1pt]\n-0.5 & 86.5 & 110.5 & 113.5 & 107.5 &  99.5 & 32.5 & -0.5\\\\[-1pt]\n-0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5 & -0.5\\\\[-1pt]\n\\end{array}\n\\right]\n\\left[\n\\begin{array}{r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r@{\\hspace{3pt}}r}\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n255.5 & 72.5 & 101.5 & 108.5 & 108.5 & 111.5 & 32.5 & 255.5\\\\[-1pt]\n255.5 & 87.5 & 111.5 & 114.5 & 108.5 & 100.5 & 33.5 & 255.5\\\\[-1pt]\n255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5 & 255.5\\\\[-1pt]\n\\end{array}\n\\right]\n\\end{equation}\n}\n%\nWith these steps, the problem then reduces to finding coefficients\n\\(\\mat{C} \\in \\ZZ^{8\\times8}\\) such that:\n%\n\\[\\mat{L} \\leq \\mathrm{idct}_2(\\mat{Q}_{lvl} .* \\mat{C}) + 128 \\leq \\mat{U}.\\]\n%\nBoth the Hadamard product and inverse DCT are linear transformations,\nand we evaluate the inverse DCT coefficients\nto IEEE double floating point precision.  This allows us to construct\na bounded ILP problem from the equation above.\n\nFor this problem, we can compute an estimate of the number of solutions by\ndividing the size of the space bounded by $\\mat{L}$ and $\\mat{U}$ by the density\nof the lattice $\\mat{A}_{lvl}$ generated by the quantization step and idct\nfunction.  In Figure \\ref{fig:solution_count}, we plot the number of solutions\non a logarithmic scale. This figure illustrates how dramatically the estimated\nnumber of solutions changes with respect to the quality level. At quality\nlevels \\(98\\) and higher, the number of expected solutions exceeds\n\\(10^{100}\\); while less than \\(1\\) solution is expected at quality level\n\\(25\\) for the same constraint. In the extreme case at quality level \\(1\\),\none would only expect to find solutions in roughly \\(3\\) out of \\(10^{90}\\)\nproblems with similarly sized bounds.\n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[width=0.8\\textwidth]{figures/solcount2.pdf}\\\\[0em]\n  \\caption{Number of expected solutions}\n  \\label{fig:solution_count}\n\\end{figure}\n\nWe first tried applying CVC4, Yices, and Z3 to the problem by encoding the\nproblem in a format supported by the solver. We used SMT-LIB for CVC4 and Z3,\nand Yices' own native format for Yices. Unfortunately, none of these tools were able to\nsolve any of the problems at quality levels from $1$ to $100$ within a 1 hour\ncutoff for each problem.  
We also ran all three solvers without success\nfor over two months on problems at quality levels \\(99\\) and \\(1\\).\n\nWe have had much better success when BLT is applied to these problems.  In our\ntesting, BLT has been able to find solutions to all problems at quality level\n\\(27\\) and higher.  BLT found that the problems at quality levels\n\\(1\\) through \\(18\\) were \\emph{unsatisfiable}. We include a chart of BLT's\nruntime in Figure \\ref{fig:blt-perf}. In the plot, solid points denote\nproblems for which BLT returns SAT, whereas $\\times$ points denote problems\nwhere BLT returns UNSAT. The large number of problems with roughly constant\nruntime (mostly on the right side of the plot) have the property that the\nBabai point (mentioned in section \\ref{ssec:blt-optimizations}) is already a\nsolution and thus almost no search is needed. In the filled region between levels\n19 and 26, BLT failed to terminate in the $1$ hour cutoff.\n\nWe should note that BLT is performing floating point arithmetic, and so when BLT\nreturns unsat there is a risk that floating point rounding error led to BLT\ndetecting a branch was infeasible when it was in fact feasible.  This may also\naccount for some of the performance gap between BLT and the above solvers, but\nwe suspect it is unlikely that precision alone can account for the more than 6\norders of magnitude runtime difference we have observed above.\n\n\\begin{figure}[htb]\n  \\centering\n    \\includegraphics[width=0.8\\textwidth]{figures/blt_benchmark.pdf}\n    \\caption{BLT Runtime vs. Problem Level}\n    \\label{fig:blt-perf}\n\\end{figure}\n\n\\FloatBarrier\n\n%We have run the problem on other problems with varying constraints.\n%These experiments have showna correlation between the number of expected\n%solutions, and whether the solver will succeed at finding solutions.\n%However, the correlation is not strict even on closely related problems.\n%To illustrate this, we considered a second problem \\(Q\\) for which we\n%added an additional row of constraints:\n%\n%\\begin{verbatim}\n%Q = { 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx'\n%      'xx' '48' '65' '6c' '6c' '6f' '20' 'xx' % \"Hello \"\n%      'xx' '57' '6f' '72' '6c' '64' '21' 'xx' % \"World!\"\n%      'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx'\n%      'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx'\n%      'xx' '48' '65' '6c' '6c' '6f' '20' 'xx' % \"Hello \"\n%      'xx' '57' '6f' '72' '6c' '64' '21' 'xx' % \"World!\"\n%      'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' 'xx' };\n%\\end{verbatim}\n%\n%Due to the additional rows of constraints, this problem has fewer\n%solutions at a given quality level. In this case, BLT was able to find\n%solutions for problems down to quality level \\(68\\), which was expected\n%to have \\(601\\) solutions. In contrast, BLT was unable to solve the\n%original problem \\(P\\) at quality level \\(30\\) even though that problem\n%was expected to have just over \\(10^{5}\\) solutions. BLT was also able to\n%show unsatisfiability for quality levels \\(1\\) through \\(9\\), showing\n%that BLT is better able to show unsatisfiability on these problems.\n\n%We plan to continue experiments to better understand the performance of\n%BLT on different problems. Given that BLT relies on search and ILP is an\n%NP-hard problem, we do not expect to be able to solve every problem, or\n%even be able to accurately predict whether a particular problem can\n%solved. 
However, research on random Boolean \\(k\\)-SAT problems has shown\n%a clear relationships between problem satisfiability and the clause to\n%variable ratio {[}3{]}. One may hope to establish similar relationships\n%for bounded ILP problems.\n", "meta": {"hexsha": "5b5a3d97256881567bb586507463a616d3ff60ac", "size": 12511, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "publications/2015_smt/jpeg.tex", "max_stars_repo_name": "benjaminfjones/blt", "max_stars_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 63, "max_stars_repo_stars_event_min_datetime": "2016-11-15T22:09:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T02:47:27.000Z", "max_issues_repo_path": "publications/2015_smt/jpeg.tex", "max_issues_repo_name": "benjaminfjones/blt", "max_issues_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-03-24T18:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-28T03:03:27.000Z", "max_forks_repo_path": "publications/2015_smt/jpeg.tex", "max_forks_repo_name": "benjaminfjones/blt", "max_forks_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-11-15T23:16:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T08:32:52.000Z", "avg_line_length": 50.044, "max_line_length": 142, "alphanum_fraction": 0.6814003677, "num_tokens": 4218, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430436757312, "lm_q2_score": 0.7057850402140659, "lm_q1q2_score": 0.586608326504317}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsfonts}\n\\usepackage{amsthm}\n\\usepackage{mathrsfs}\n\\usepackage{colonequals}\n\\usepackage{subfig}\n\\usepackage[shortlabels]{enumitem}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{caption}\n\\usepackage{color}\n\\usepackage{listings}\n\\usepackage{bm}\n\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}\n\n\\theoremstyle{definition}\n\\newtheorem{prob}{Problem}\n\n\\usepackage{color}\n\\usepackage{graphicx}\n%\\graphicspath{./figs/}\n\n\\newcommand{\\TODO}{{\\color{red}TODO}}\n\\newcommand{\\CC}{\\mathbb{C}} % complex numbers\n\\newcommand{\\RR}{\\mathbb{R}} % real numbers\n\\DeclareMathOperator{\\rank}{rank}\n\\newcommand{\\norm}[1]{\\left\\lVert #1 \\right\\rVert}\n\\newcommand{\\ip}[2]{\\langle #1, #2 \\rangle}\n\\newcommand{\\ones}{\\mathbf{1}}\n\\newcommand{\\nth}[1]{#1^{\\mathrm{th}}}\n\\newcommand{\\TR}{\\mathrm{TR}}\n\\newcommand{\\TE}{\\mathrm{TE}}\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\\newcommand{\\veca}[1]{\\boldsymbol{#1}}\n\n\\title{Optimal Experiment Design}\n\\author{Jon Tamir}\n\\begin{document}\n\\maketitle\n\n\n\\section{Signal Equation}\nThe signal equation is\n\\begin{align}\n  S_{ij}\\left(M_0, T_1, T_2\\right) &= M_0 f_i(T_1, T_2) \\frac{1 - E_j(T_1)}{1 - E_j(T_1)f_T(T_1,T_2)},\n  \\label{eqn:signaleq}\n\\end{align}\nwhere $M_0$ is the initial longitudinal magnetization and $T_1$ and $T_2$ are relaxation parameters.\nThe EPG function is\n\\begin{align}\n  f_i(T_1, T_2) &= \\mathrm{EPG}\\left( \\theta_1^i, T_1, T_2, T_s \\right),\n  \\label{eqn:epgfun}\n\\end{align}\nwhere $\\theta_1^i$ indicates the first $i$ refocusing pulses and $T_s$ is the echo spacing. The recovery function is\n\\begin{align}\n  E_j(T_1) &= e^{-\\frac{(\\TR_j - (T+1)\\times T_s)}{T_1}},\n  \\label{eqn:recoveryfun}\n\\end{align}\nwhere $\\TR_j$ is the $\\nth{j}$ repetition time.\n\nThe Fisher Information Matrix (FIM) consists of the following six terms:\n\n\\begin{align}\n  \\vec{I} = \\begin{bmatrix}\n  \\frac{2}{\\sigma^2} \\sum_{ij}\\left( \\frac{\\partial S_{ij}}{\\partial T_1} \\frac{\\partial S_{ij}}{\\partial T_1} \\right) & \n  \\frac{2}{\\sigma^2} \\sum_{ij}\\left( \\frac{\\partial S_{ij}}{\\partial T_1} \\frac{\\partial S_{ij}}{\\partial T_2} \\right) &\n  \\frac{2}{\\sigma^2} \\sum_{ij}\\left(\\frac{\\partial S_{ij}}{\\partial T_1} \\frac{\\partial S_{ij}}{\\partial M_0} \\right) \\\\\n  \\cdot & \\frac{2}{\\sigma^2} \\sum_{ij}\\left( \\frac{\\partial S_{ij}}{\\partial T_2} \\frac{\\partial S_{ij}}{\\partial T_2} \\right) &\n  \\frac{2}{\\sigma^2} \\sum_{ij}\\left( \\frac{\\partial S_{ij}}{\\partial T_2} \\frac{\\partial S_{ij}}{\\partial M_0} \\right) \\\\\n  \\cdot & \\cdot &\n  \\frac{2}{\\sigma^2}\\sum_{ij}\\left(\\frac{\\partial S_{ij}}{\\partial M_0} \\frac{\\partial S_{ij}}{\\partial M_0} \\right)\n\\end{bmatrix}.\n  \\label{eqn:FIM}\n\\end{align}\n\nFor ease of notation, a function $f(x,y,z)$ will be referred to as $f(x)$ when it is understood that the other parameters\nare constants in the expression. 
Also, the partial derivative of a function $f(x,y,z)$ w.r.t.\\ parameter $x$ will be denoted by $f'(x)$.\nWe have\n\\begin{align}\n  S_{ij}'(M_0) &= f_i \\frac{1 - E_j}{1 - E_jf_T},\n  \\label{eqn:dM0} \\\\\n  \\begin{split}\n  S_{ij}'(T_1) &= M_0\\frac{\\left(f_i'(T_1)\\left(1 - E_j(T_1) \\right) - f_i(T_1)E_j'(T_1)\\right)\\left(1 - f_T(T_1)E_j(T_1)\\right)}\n  {\\left(1 - f_TE_j\\right)^2} \\\\\n  &+ M_0\\frac{f_i(T_1)\\left(1 - E_j(T_1)\\right)\\left(f_T'(T_1)E_j(T_1) + f_T(T_1)E_j'(T_1)\\right)}\n  {\\left(1 - f_TE_j\\right)^2},\n  \\label{eqn:dT1}\n  \\end{split} \\\\\n  S_{ij}'(T_2) &= M_0\\left( 1 - E_j \\right)\\frac{\\left(f_i'(T_2)\\left(1 - f_T(T_2)E_j \\right) + f_i(T_2)f_T'(T_2)E_j\\right)}\n  {\\left(1 - f_TE_j\\right)^2}.\n  \\label{eqn:dT2}\n\\end{align}\n\nThe inverse of the FIM is a lower bound on the covariance matrix of an unbiased estimator of the parameters $\\veca{\\theta}=(T_1, T_2, M_0)$.\nThus it is reasonable to minimize some metric of a sub-matrix of $\\vec I^{-1}$.\nGroup the FIM into\n\\begin{align}\n  \\vec I &= \\begin{bmatrix}\n    \\vec I_{11} & \\vec I_{12} \\\\ \\vec I_{21} & \\vec I_{22}\n  \\end{bmatrix},\n  \\label{eqn:FIMd}\n\\end{align}\nwhere $\\vec I_{11}$ represents the $2\\times 2$ sub-matrix. Then the Schur complement of $\\vec I$ w.r.t.\\ $\\vec I_{22}$ is\n$\\vec I_{11} - \\vec I_{12} \\vec I_{22}^{-1} \\vec I_{21}$, whose inverse is the corresponding $2\\times 2$ block of $\\vec I^{-1}$.\n\n\\section{EPG Function}\nThe EPG function is a recursive function of the signal magnetization and refocusing flip angles. For $T$ refocusing pulses, the EPG\nfunction $\\vec f : \\RR_+^2 \\rightarrow \\RR_+^T$ is\n\\begin{align}\n  \\vec f(T_1, T_2, \\theta_1^T) = \\begin{bmatrix} f_1 & \\cdots & f_T \\end{bmatrix}^T.\n  \\label{eqn:epgvec}\n\\end{align}\n\nLet the magnetization state after the $\\nth{i}$ refocusing pulse be $\\vec s_i \\in \\CC^{3(T+1)}$. The first entry of $\\vec s_i$ represents the\ncoherent transverse magnetization, $M_{xy}$, and the third entry represents the coherent longitudinal magnetization, $M_z$. 
The observed\nsignal after the $\\nth{i}$ pulse is then\n\\begin{align}\n  f_i(T_1, T_2, \\theta_1^i) &= \\begin{bmatrix}1 & 0 & 0 & \\cdots & 0 \\end{bmatrix} \\vec s_i(T_1, T_2, \\theta_1^i).\n  \\label{eqn:epgrec}\n\\end{align}\nThe initial magnetization state (after the 90 degree excitation pulse) is given by \n\\begin{align}\n  \\vec s_0 &= \\mathrm{vec}\\left(\\begin{bmatrix}\n    1 & 0 & 0 & \\cdots & 0 \\\\\n    1 & 0 & 0 & \\cdots & 0 \\\\\n    0 & 0 & 0 & \\cdots & 0 \n\\end{bmatrix}\\right).\n  \\label{eqn:s0}\n\\end{align}\nOne progression of the EPG states is given by\n\\begin{align}\n  \\vec s_i & = \\vec E \\left(\\vec G \\vec R_i\\vec{GE}\\vec s_{i-1} + \\vec E_0 \\right) + \\vec E_0,\n  \\label{eqn:skrec}\n\\end{align}\nwhere $\\vec E$, $\\vec E_0$, $\\vec G$, and $\\vec R_i$ are defined in the conventional way for EPG.\nThus,\n\\begin{align}\n  \\begin{split}\n    \\nabla\\vec s_i(T_1) &= \\nabla\\vec E(T_1)\\left(\\vec{GR}_i \\vec{GE}(T_1)\\vec s_{i-1}(T_1) + \\vec E_0(T_1)\\right)  \\\\\n    &+ \\vec E(T_1)\\left(\\vec{GR}_i \\vec{G}\\left(\\nabla\\vec E(T_1)\\vec s_{i-1}(T_1) + \\vec E(T_1)\\nabla\\vec s_{i-1}(T_1)\\right) + \\nabla\\vec E_0(T_1)\\right)\n    + \\nabla\\vec E_0(T_1),\n  \\end{split}\n  \\label{eqn:dsdT1} \\\\\n  \\begin{split}\n  \\nabla\\vec s_i(T_2) &= \\nabla\\vec E(T_2)\\left(\\vec{GR}_i \\vec{GE}(T_2)\\vec s_{i-1}(T_2) + \\vec E_0\\right) \\\\\n  &+ \\vec E(T_2)\\left(\\vec{GR}_i \\vec{G}\\left(\\nabla\\vec E(T_2)\\vec s_{i-1}(T_2) + \\vec E(T_2)\\nabla\\vec s_{i-1}(T_2)\\right)\\right).\n  \\end{split}\n  \\label{eqn:dsdT2}\n\\end{align}\n\n\n\\end{document}\n\n", "meta": {"hexsha": "42b1c654f45215d4b2f2bc40782958d316f809e0", "size": 6166, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/crlb.tex", "max_stars_repo_name": "somnathrakshit/mri-sim-py", "max_stars_repo_head_hexsha": "2034357d1c5a89ee48f5dc38484a7ea33cc04db7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-09-06T18:51:56.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-26T10:17:29.000Z", "max_issues_repo_path": "doc/crlb.tex", "max_issues_repo_name": "somnathrakshit/mri-sim-py", "max_issues_repo_head_hexsha": "2034357d1c5a89ee48f5dc38484a7ea33cc04db7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/crlb.tex", "max_forks_repo_name": "somnathrakshit/mri-sim-py", "max_forks_repo_head_hexsha": "2034357d1c5a89ee48f5dc38484a7ea33cc04db7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2017-02-19T14:28:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-10T07:42:54.000Z", "avg_line_length": 39.2738853503, "max_line_length": 155, "alphanum_fraction": 0.6639636717, "num_tokens": 2466, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430394931457, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5866083081216564}}
{"text": "\\chapter{IMO Shortlist Geometry Problems from year 2000-2017}\n\n\t\n\t\\section{G1}\n\t\n\t\t\\prob{}{}{}{In the plane we are given two circles intersecting at $ X$ and $ Y$. Prove that there exist four points with the following property:\n\t\t\t\n\t\t\t(P) For every circle touching the two given circles at $ A$ and $ B$, and meeting the line $ XY$ at $ C$ and $ D$, each of the lines $ AC$, $ AD$, $ BC$, $ BD$ passes through one of these points.}\n\t\t\n\t\t\\prob{}{}{}{Let $A_1$ be the center of the square inscribed in acute triangle $ABC$ with two vertices of the square on side $BC$. Thus one of the two remaining vertices of the square is on side $AB$ and the other is on $AC$. Points $B_1,\\ C_1$ are defined in a similar way for inscribed squares with two vertices on sides $AC$ and $AB$, respectively. Prove that lines $AA_1,\\ BB_1,\\ CC_1$ are concurrent.}\n\t\t\n\t\t\\prob{}{}{}{Let $B$ be a point on a circle $S_1$, and let $A$ be a point distinct from $B$ on the tangent at $B$ to $S_1$. Let $C$ be a point not on $S_1$ such that the line segment $AC$ meets $S_1$ at two distinct points. Let $S_2$ be the circle touching $AC$ at $C$ and touching $S_1$ at a point $D$ on the opposite side of $AC$ from $B$. Prove that the circumcentre of triangle $BCD$ lies on the circumcircle of triangle $ABC$.%\\figdf{.8}{ISL_GEO/1_1_3}{}\n\t\t}\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a cyclic quadrilateral. Let $P$, $Q$, $R$ be the feet of the perpendiculars from $D$ to the lines $BC$, $CA$, $AB$, respectively. Show that $PQ=QR$ if and only if the bisectors of $\\angle ABC$ and $\\angle ADC$ are concurrent with $AC$.%\\figdf{.8}{ISL_GEO/1_1_4}{}\n\t\t}\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be an acute-angled triangle with $AB\\neq AC$. The circle with diameter $BC$ intersects the sides $AB$ and $AC$ at $M$ and $N$ respectively. Denote by $O$ the midpoint of the side $BC$. The bisectors of the angles $\\angle BAC$ and $\\angle MON$ intersect at $R$. Prove that the circumcircles of the triangles $BMR$ and $CNR$ have a common point lying on the side $BC$.%\\figdf{.8}{ISL_GEO/1_1_5}{}\n\t\t}\n\t\t\n\t\t\\prob{}{}{}{Given a triangle $ABC$ satisfying $AC+BC=3\\cdot AB$. The incircle of triangle $ABC$ has center $I$ and touches the sides $BC$ and $CA$ at the points $D$ and $E$, respectively. Let $K$ and $L$ be the reflections of the points $D$ and $E$ with respect to $I$. Prove that the points $A$, $B$, $K$, $L$ lie on one circle.%\\figdf{.8}{ISL_GEO/1_1_6}{}\n\t\t}\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be triangle with incenter $I$. A point $P$ in the interior of the triangle satisfies \\[\\angle PBA+\\angle PCA = \\angle PBC+\\angle PCB.\\] Show that $AP \\geq AI$, and that equality holds if and only if $P=I$.}\n\t\t\n\t\t\\prob{}{}{}{In triangle $ ABC$ the bisector of angle $ BCA$ intersects the circumcircle again at $ R$, the perpendicular bisector of $ BC$ at $ P$, and the perpendicular bisector of $ AC$ at $ Q$. The midpoint of $ BC$ is $ K$ and the midpoint of $ AC$ is $ L$. Prove that the triangles $ RPK$ and $ RQL$ have the same area.}\n\t\t\n\t\t\\prob{}{}{}{Let $ H$ be the orthocenter of an acute-angled triangle $ ABC$. The circle $ \\Gamma_{A}$ centered at the midpoint of $ BC$ and passing through $ H$ intersects the sideline $ BC$ at points $ A_{1}$ and $ A_{2}$. 
Similarly, define the points $ B_{1}$, $ B_{2}$, $ C_{1}$ and $ C_{2}$.

			Prove that the six points $ A_{1}$, $ A_{2}$, $ B_{1}$, $ B_{2}$, $ C_{1}$ and $ C_{2}$ are concyclic.}

		\prob{}{}{}{Let $ ABC$ be a triangle with $ AB = AC$. The angle bisectors of $ \angle CAB$ and $ \angle ABC$ meet the sides $ BC$ and $ CA$ at $ D$ and $ E$, respectively. Let $ K$ be the incentre of triangle $ ADC$. Suppose that $ \angle BEK = 45^\circ$. Find all possible values of $ \angle CAB$.}

		\prob{}{}{}{Let $ABC$ be an acute triangle with $D, E, F$ the feet of the altitudes lying on $BC, CA, AB$ respectively. One of the intersection points of the line $EF$ and the circumcircle is $P.$ The lines $BP$ and $DF$ meet at point $Q.$ Prove that $AP = AQ.$}

		\prob{}{}{}{Let $ABC$ be an acute triangle. Let $\omega$ be a circle whose centre $L$ lies on the side $BC$. Suppose that $\omega$ is tangent to $AB$ at $B'$ and $AC$ at $C'$. Suppose also that the circumcentre $O$ of triangle $ABC$ lies on the shorter arc $B'C'$ of $\omega$. Prove that the circumcircle of $ABC$ and $\omega$ meet at two points.}

		\prob{}{}{}{Given triangle $ABC$, the point $J$ is the centre of the excircle opposite the vertex $A.$ This excircle is tangent to the side $BC$ at $M$, and to the lines $AB$ and $AC$ at $K$ and $L$, respectively. The lines $LM$ and $BJ$ meet at $F$, and the lines $KM$ and $CJ$ meet at $G.$ Let $S$ be the point of intersection of the lines $AF$ and $BC$, and let $T$ be the point of intersection of the lines $AG$ and $BC.$ Prove that $M$ is the midpoint of $ST.$

			(The excircle of $ABC$ opposite the vertex $A$ is the circle that is tangent to the line segment $BC$, to the ray $AB$ beyond $B$, and to the ray $AC$ beyond $C$.)
		}

		\prob{}{}{}{Let $ABC$ be an acute triangle with orthocenter $H$, and let $W$ be a point on the side $BC$, lying strictly between $B$ and $C$. The points $M$ and $N$ are the feet of the altitudes from $B$ and $C$, respectively. Denote by $\omega_1$ the circumcircle of $BWN$, and let $X$ be the point on $\omega_1$ such that $WX$ is a diameter of $\omega_1$. Analogously, denote by $\omega_2$ the circumcircle of triangle $CWM$, and let $Y$ be the point such that $WY$ is a diameter of $\omega_2$. Prove that $X,Y$ and $H$ are collinear.}

		\prob{}{}{}{Let $P$ and $Q$ be on segment $BC$ of an acute triangle $ABC$ such that $\angle PAB=\angle BCA$ and $\angle CAQ=\angle ABC$. Let $M$ and $N$ be the points on $AP$ and $AQ$, respectively, such that $P$ is the midpoint of $AM$ and $Q$ is the midpoint of $AN$. Prove that the intersection of $BM$ and $CN$ is on the circumcircle of triangle $ABC$.}

		\prob{}{}{}{Let $ABC$ be an acute triangle with orthocenter $H$. Let $G$ be the point such that the quadrilateral $ABGH$ is a parallelogram. Let $I$ be the point on the line $GH$ such that $AC$ bisects $HI$. Suppose that the line $AC$ intersects the circumcircle of the triangle $GCI$ at $C$ and $J$. Prove that $IJ = AH$.}

		\prob{}{}{}{Triangle $BCF$ has a right angle at $B$. Let $A$ be the point on line $CF$ such that $FA=FB$ and $F$ lies between $A$ and $C$. Point $D$ is chosen so that $DA=DC$ and $AC$ is the bisector of $\angle{DAB}$. Point $E$ is chosen so that $EA=ED$ and $AD$ is the bisector of $\angle{EAC}$. Let $M$ be the midpoint of $CF$. Let $X$ be the point such that $AMXE$ is a parallelogram.
Prove that $BD,FX$ and $ME$ are concurrent.}

		\prob{}{}{}{Let $ABCDE$ be a convex pentagon such that $AB=BC=CD$, $\angle{EAB}=\angle{BCD}$, and $\angle{EDC}=\angle{CBA}$. Prove that the perpendicular line from $E$ to $BC$ and the line segments $AC$ and $BD$ are concurrent.}

	\newpage\section{G2}

		\prob{}{}{}{Two circles $ G_1$ and $ G_2$ intersect at two points $ M$ and $ N$. Let $ AB$ be the line tangent to these circles at $ A$ and $ B$, respectively, so that $ M$ lies closer to $ AB$ than $ N$. Let $ CD$ be the line parallel to $ AB$ and passing through the point $ M$, with $ C$ on $ G_1$ and $ D$ on $ G_2$. Lines $ AC$ and $ BD$ meet at $ E$; lines $ AN$ and $ CD$ meet at $ P$; lines $ BN$ and $ CD$ meet at $ Q$. Show that $ EP = EQ$.}

		\prob{}{}{}{Consider an acute-angled triangle $ABC$. Let $P$ be the foot of the altitude of triangle $ABC$ issuing from the vertex $A$, and let $O$ be the circumcenter of triangle $ABC$. Assume that $\angle C \geq \angle B+30^{\circ}$. Prove that $\angle A+\angle COP < 90^{\circ}$.}

		\prob{}{}{}{Let $ABC$ be a triangle for which there exists an interior point $F$ such that $\angle AFB=\angle BFC=\angle CFA$. Let the lines $BF$ and $CF$ meet the sides $AC$ and $AB$ at $D$ and $E$ respectively. Prove that \[ AB+AC\geq4DE. \]}

		\prob{}{}{}{Three distinct points $A$, $B$, and $C$ are fixed on a line in this order. Let $\Gamma$ be a circle passing through $A$ and $C$ whose center does not lie on the line $AC$. Denote by $P$ the intersection of the tangents to $\Gamma$ at $A$ and $C$. Suppose $\Gamma$ meets the segment $PB$ at $Q$. Prove that the intersection of the bisector of $\angle AQC$ and the line $AC$ does not depend on the choice of $\Gamma$.}

		\prob{}{}{}{Let $\Gamma$ be a circle and let $d$ be a line such that $\Gamma$ and $d$ have no common points. Further, let $AB$ be a diameter of the circle $\Gamma$; assume that this diameter $AB$ is perpendicular to the line $d$, and the point $B$ is nearer to the line $d$ than the point $A$. Let $C$ be an arbitrary point on the circle $\Gamma$, different from the points $A$ and $B$. Let $D$ be the point of intersection of the lines $AC$ and $d$. One of the two tangents from the point $D$ to the circle $\Gamma$ touches this circle $\Gamma$ at a point $E$; hereby, we assume that the points $B$ and $E$ lie in the same halfplane with respect to the line $AC$. Denote by $F$ the point of intersection of the lines $BE$ and $d$. Let the line $AF$ intersect the circle $\Gamma$ at a point $G$, different from $A$.

			Prove that the reflection of the point $G$ in the line $AB$ lies on the line $CF$.}

		\prob{}{}{}{Six points are chosen on the sides of an equilateral triangle $ABC$: $A_1$, $A_2$ on $BC$, $B_1$, $B_2$ on $CA$ and $C_1$, $C_2$ on $AB$, such that they are the vertices of a convex hexagon $A_1A_2B_1B_2C_1C_2$ with equal side lengths.

			Prove that the lines $A_1B_2$, $B_1C_2$ and $C_1A_2$ are concurrent.}

		\prob{}{}{}{Let $ ABCD$ be a trapezoid with parallel sides $ AB > CD$. Points $ K$ and $ L$ lie on the line segments $ AB$ and $ CD$, respectively, so that $AK/KB=DL/LC$.
Suppose that there are points $ P$ and $ Q$ on the line segment $ KL$ satisfying \[\angle{APB} = \angle{BCD}\qquad\text{and}\qquad \angle{CQD} = \angle{ABC}.\] Prove that the points $ P$, $ Q$, $ B$ and $ C$ are concyclic.
		}

		\prob{}{}{}{Denote by $ M$ the midpoint of side $ BC$ in an isosceles triangle $ \triangle ABC$ with $ AC = AB$. Take a point $ X$ on the smaller arc $ MA$ of the circumcircle of $ \triangle ABM$. Denote by $ T$ the point inside angle $ BMA$ such that $ \angle TMX = 90^{\circ}$ and $ TX = BX$.

			Prove that $ \angle MTB - \angle CTM$ does not depend on the choice of $ X$.}

		\prob{}{}{}{Given trapezoid $ ABCD$ with parallel sides $ AB$ and $ CD$, assume that there exist points $ E$ on line $ BC$ outside segment $ BC$, and $ F$ inside segment $ AD$ such that $ \angle DAE = \angle CBF$. Denote by $ I$ the point of intersection of $ CD$ and $ EF$, and by $ J$ the point of intersection of $ AB$ and $ EF$. Let $ K$ be the midpoint of segment $ EF$, assume it does not lie on line $ AB$. Prove that $ I$ belongs to the circumcircle of $ ABK$ if and only if $ K$ belongs to the circumcircle of $ CDJ$.}

		\prob{}{}{}{Let $ ABC$ be a triangle with circumcentre $ O$. The points $ P$ and $ Q$ are interior points of the sides $ CA$ and $ AB$ respectively. Let $ K,L$ and $ M$ be the midpoints of the segments $ BP,CQ$ and $ PQ$, respectively, and let $ \Gamma$ be the circle passing through $ K,L$ and $ M$. Suppose that the line $ PQ$ is tangent to the circle $ \Gamma$. Prove that $ OP = OQ.$}

		\prob{}{}{}{Let $P$ be a point interior to triangle $ABC$ (with $CA \neq CB$). The lines $AP$, $BP$ and $CP$ meet its circumcircle $\Gamma$ again at $K$, $L$ and $M$, respectively. The tangent line at $C$ to $\Gamma$ meets the line $AB$ at $S$. Show that if $SC = SP$, then $MK = ML$.}

		\prob{}{}{}{Let $A_1A_2A_3A_4$ be a non-cyclic quadrilateral. Let $O_1$ and $r_1$ be the circumcentre and the circumradius of the triangle $A_2A_3A_4$. Define $O_2,O_3,O_4$ and $r_2,r_3,r_4$ in a similar way. Prove that
			\[\frac{1}{O_1A_1^2-r_1^2}+\frac{1}{O_2A_2^2-r_2^2}+\frac{1}{O_3A_3^2-r_3^2}+\frac{1}{O_4A_4^2-r_4^2}=0.\]

			Proposed by Alexey Gladkich, Israel}

		\prob{}{}{}{Let $ABCD$ be a cyclic quadrilateral whose diagonals $AC$ and $BD$ meet at $E$. The extensions of the sides $AD$ and $BC$ beyond $A$ and $B$ meet at $F$. Let $G$ be the point such that $ECGD$ is a parallelogram, and let $H$ be the image of $E$ under reflection in $AD$. Prove that $D,H,F,G$ are concyclic.}

		\prob{}{}{}{Let $\omega$ be the circumcircle of a triangle $ABC$. Denote by $M$ and $N$ the midpoints of the sides $AB$ and $AC$, respectively, and denote by $T$ the midpoint of the arc $BC$ of $\omega$ not containing $A$. The circumcircles of the triangles $AMT$ and $ANT$ intersect the perpendicular bisectors of $AC$ and $AB$ at points $X$ and $Y$, respectively; assume that $X$ and $Y$ lie inside the triangle $ABC$. The lines $MN$ and $XY$ intersect at $K$. Prove that $KA=KT$.}

		\prob{}{}{}{Let $ABC$ be a triangle. The points $K, L,$ and $M$ lie on the segments $BC, CA,$ and $AB,$ respectively, such that the lines $AK, BL,$ and $CM$ intersect in a common point. Prove that it is possible to choose two of the triangles $ALM, BMK,$ and $CKL$ whose inradii sum up to at least the inradius of the triangle $ABC$.}

		\prob{}{}{}{Triangle $ABC$ has circumcircle $\Omega$ and circumcenter $O$.
A circle $\\Gamma$ with center $A$ intersects the segment $BC$ at points $D$ and $E$, such that $B$, $D$, $E$, and $C$ are all different and lie on line $BC$ in this order. Let $F$ and $G$ be the points of intersection of $\\Gamma$ and $\\Omega$, such that $A$, $F$, $B$, $C$, and $G$ lie on $\\Omega$ in this order. Let $K$ be the second point of intersection of the circumcircle of triangle $BDF$ and the segment $AB$. Let $L$ be the second point of intersection of the circumcircle of triangle $CGE$ and the segment $CA$.\n\t\t\t\n\t\t\tSuppose that the lines $FK$ and $GL$ are different and intersect at the point $X$. Prove that $X$ lies on the line $AO$.}\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with circumcircle $\\Gamma$ and incenter $I$ and let $M$ be the midpoint of $\\overline{BC}$. The points $D$, $E$, $F$ are selected on sides $\\overline{BC}$, $\\overline{CA}$, $\\overline{AB}$ such that $\\overline{ID} \\perp \\overline{BC}$, $\\overline{IE}\\perp \\overline{AI}$, and $\\overline{IF}\\perp \\overline{AI}$. Suppose that the circumcircle of $\\triangle AEF$ intersects $\\Gamma$ at a point $X$ other than $A$. Prove that lines $XD$ and $AM$ meet on $\\Gamma$.}\n\t\t\t\n\t\t\\prob{}{}{}{Let $R$ and $S$ be different points on a circle $\\Omega$ such that $RS$ is not a diameter. Let $\\ell$ be the tangent line to $\\Omega$ at $R$. Point $T$ is such that $S$ is the midpoint of the line segment $RT$. Point $J$ is chosen on the shorter arc $RS$ of $\\Omega$ so that the circumcircle $\\Gamma$ of triangle $JST$ intersects $\\ell$ at two distinct points. Let $A$ be the common point of $\\Gamma$ and $\\ell$ that is closer to $R$. Line $AJ$ meets $\\Omega$ again at $K$. Prove that the line $KT$ is tangent to $\\Gamma$.}\n\t\t\n\t\t\n\t\t\n\t\\newpage\\section{G3}\n\t\t\n\t\t\\prob{}{}{}{Let $O$ be the circumcenter and $H$ the orthocenter of an acute triangle $ABC$. Show that there exist points $D$, $E$, and $F$ on sides $BC$, $CA$, and $AB$ respectively such that \\[ OD + DH = OE + EH = OF + FH\\] and the lines $AD$, $BE$, and $CF$ are concurrent.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with centroid $G$. Determine, with proof, the position of the point $P$ in the plane of $ABC$ such that $AP{\\cdot}AG + BP{\\cdot}BG + CP{\\cdot}CG$ is a minimum, and express this minimum value in terms of the side lengths of $ABC$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{The circle $S$ has centre $O$, and $BC$ is a diameter of $S$. Let $A$ be a point of $S$ such that $\\angle AOB<120{{}^\\circ}$. Let $D$ be the midpoint of the arc $AB$ which does not contain $C$. The line through $O$ parallel to $DA$ meets the line $AC$ at $I$. The perpendicular bisector of $OA$ meets $S$ at $E$ and at $F$. Prove that $I$ is the incentre of the triangle $CEF.$}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle and let $P$ be a point in its interior. Denote by $D$, $E$, $F$ the feet of the perpendiculars from $P$ to the lines $BC$, $CA$, $AB$, respectively. Suppose that \\[AP^2 + PD^2 = BP^2 + PE^2 = CP^2 + PF^2.\\]Denote by $I_A$, $I_B$, $I_C$ the excenters of the triangle $ABC$. Prove that $P$ is the circumcenter of the triangle $I_AI_BI_C$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $O$ be the circumcenter of an acute-angled triangle $ABC$ with ${\\angle B<\\angle C}$. The line $AO$ meets the side $BC$ at $D$. The circumcenters of the triangles $ABD$ and $ACD$ are $E$ and $F$, respectively. 
Extend the sides $BA$ and $CA$ beyond $A$, and choose on the respective extensions points $G$ and $H$ such that ${AG=AC}$ and ${AH=AB}$. Prove that the quadrilateral $EFGH$ is a rectangle if and only if ${\\angle ACB-\\angle ABC=60^{\\circ }}$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a parallelogram. A variable line $g$ through the vertex $A$ intersects the rays $BC$ and $DC$ at the points $X$ and $Y$, respectively. Let $K$ and $L$ be the $A$-excenters of the triangles $ABX$ and $ADY$. Show that the angle $\\measuredangle KCL$ is independent of the line $g$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ ABCDE$ be a convex pentagon such that\n\t\t\t\\[ \\angle BAC = \\angle CAD = \\angle DAE\\qquad \\text{and}\\qquad \\angle ABC = \\angle ACD = \\angle ADE. \\]The diagonals $BD$ and $CE$ meet at $P$. Prove that the line $AP$ bisects the side $CD$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{The diagonals of a trapezoid $ ABCD$ intersect at point $ P$. Point $ Q$ lies between the parallel lines $ BC$ and $ AD$ such that $ \\angle AQD = \\angle CQB$, and line $ CD$ separates points $ P$ and $ Q$. Prove that $ \\angle BQP = \\angle DAQ$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ ABCD$ be a convex quadrilateral and let $ P$ and $ Q$ be points in $ ABCD$ such that $ PQDA$ and $ QPBC$ are cyclic quadrilaterals. Suppose that there exists a point $ E$ on the line segment $ PQ$ such that $ \\angle PAE = \\angle QDE$ and $ \\angle PBE = \\angle QCE$. Show that the quadrilateral $ ABCD$ is cyclic.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle. The incircle of $ABC$ touches the sides $AB$ and $AC$ at the points $Z$ and $Y$, respectively. Let $G$ be the point where the lines $BY$ and $CZ$ meet, and let $R$ and $S$ be points such that the two quadrilaterals $BCYR$ and $BCSZ$ are parallelogram.\n\t\t\tProve that $GR=GS$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $A_1A_2 \\ldots A_n$ be a convex polygon. Point $P$ inside this polygon is chosen so that its projections $P_1, \\ldots , P_n$ onto lines $A_1A_2, \\ldots , A_nA_1$ respectively lie on the sides of the polygon. Prove that for arbitrary points $X_1, \\ldots , X_n$ on sides $A_1A_2, \\ldots , A_nA_1$ respectively,\n\t\t\t\\[\\max \\left\\{ \\frac{X_1X_2}{P_1P_2}, \\ldots, \\frac{X_nX_1}{P_nP_1} \\right\\} \\geq 1.\\]}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a convex quadrilateral whose sides $AD$ and $BC$ are not parallel. Suppose that the circles with diameters $AB$ and $CD$ meet at points $E$ and $F$ inside the quadrilateral. Let $\\omega_E$ be the circle through the feet of the perpendiculars from $E$ to the lines $AB,BC$ and $CD$. Let $\\omega_F$ be the circle through the feet of the perpendiculars from $F$ to the lines $CD,DA$ and $AB$. Prove that the midpoint of the segment $EF$ lies on the line through the two intersections of $\\omega_E$ and $\\omega_F$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{In an acute triangle $ABC$ the points $D,E$ and $F$ are the feet of the altitudes through $A,B$ and $C$ respectively. The incenters of the triangles $AEF$ and $BDF$ are $I_1$ and $I_2$ respectively; the circumcenters of the triangles $ACI_1$ and $BCI_2$ are $O_1$ and $O_2$ respectively. Prove that $I_1I_2$ and $O_1O_2$ are parallel.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{In a triangle $ABC$, let $D$ and $E$ be the feet of the angle bisectors of angles $A$ and $B$, respectively. A rhombus is inscribed into the quadrilateral $AEDB$ (all vertices of the rhombus lie on different sides of $AEDB$). 
Let $\\varphi$ be the non-obtuse angle of the rhombus. Prove that $\\varphi \\le \\max \\{ \\angle BAC, \\angle ABC \\}$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $\\Omega$ and $O$ be the circumcircle and the circumcentre of an acute-angled triangle $ABC$ with $AB > BC$. The angle bisector of $\\angle ABC$ intersects $\\Omega$ at $M \\ne B$. Let $\\Gamma$ be the circle with diameter $BM$. The angle bisectors of $\\angle AOB$ and $\\angle BOC$ intersect $\\Gamma$ at points $P$ and $Q,$ respectively. The point $R$ is chosen on the line $P Q$ so that $BR = MR$. Prove that $BR\\parallel AC$.\n\t\t\t(Here we always assume that an angle bisector is a ray.)}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $\\angle{C} = 90^{\\circ}$, and let $H$ be the foot of the altitude from $C$. A point $D$ is chosen inside the triangle $CBH$ so that $CH$ bisects $AD$. Let $P$ be the intersection point of the lines $BD$ and $CH$. Let $\\omega$ be the semicircle with diameter $BD$ that meets the segment $CB$ at an interior point. A line through $P$ is tangent to $\\omega$ at $Q$. Prove that the lines $CQ$ and $AD$ meet on $\\omega$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $B = (-1, 0)$ and $C = (1, 0)$ be fixed points on the coordinate plane. A nonempty, bounded subset $S$ of the plane is said to be nice if\n\t\t\t\n\t\t\t$\\text{(i)}$ there is a point $T$ in $S$ such that for every point $Q$ in $S$, the segment $TQ$ lies entirely in $S$; and\n\t\t\t\n\t\t\t$\\text{(ii)}$ for any triangle $P_1P_2P_3$, there exists a unique point $A$ in $S$ and a permutation $\\sigma$ of the indices $\\{1, 2, 3\\}$ for which triangles $ABC$ and $P_{\\sigma(1)}P_{\\sigma(2)}P_{\\sigma(3)}$ are similar.\n\t\t\t\n\t\t\tProve that there exist two distinct nice subsets $S$ and $S'$ of the set $\\{(x, y) : x \\geq 0, y \\geq 0\\}$ such that if $A \\in S$ and $A' \\in S'$ are the unique choices of points in $\\text{(ii)}$, then the product $BA \\cdot BA'$ is a constant independent of the triangle $P_1P_2P_3$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $O$ be the circumcenter of an acute triangle $ABC$. Line $OA$ intersects the altitudes of $ABC$ through $B$ and $C$ at $P$ and $Q$, respectively. The altitudes meet at $H$. Prove that the circumcenter of triangle $PQH$ lies on a median of triangle $ABC$.}\n\t\t\n\t\t\n\t\\newpage\\section{G4}\n\t\n\t\t\\prob{}{}{}{Let $ A_1A_2 \\ldots A_n$ be a convex polygon, $ n \\geq 4.$ Prove that $ A_1A_2 \\ldots A_n$ is cyclic if and only if to each vertex $ A_j$ one can assign a pair $ (b_j, c_j)$ of real numbers, $ j = 1, 2, \\ldots, n,$ so that $ A_iA_j = b_jc_i - b_ic_j$ for all $ i, j$ with $ 1 \\leq i < j \\leq n.$}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $M$ be a point in the interior of triangle $ABC$. Let $A'$ lie on $BC$ with $MA'$ perpendicular to $BC$. Define $B'$ on $CA$ and $C'$ on $AB$ similarly. Define\n\t\t\t\n\t\t\t\\[ p(M) = \\frac{MA' \\cdot MB' \\cdot MC'}{MA \\cdot MB \\cdot MC}. \\]\n\t\t\t\n\t\t\tDetermine, with proof, the location of $M$ such that $p(M)$ is maximal. Let $\\mu(ABC)$ denote this maximum value. For which triangles $ABC$ is the value of $\\mu(ABC)$ maximal?}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Circles $S_1$ and $S_2$ intersect at points $P$ and $Q$. Distinct points $A_1$ and $B_1$ (not at $P$ or $Q$) are selected on $S_1$. The lines $A_1P$ and $B_1P$ meet $S_2$ again at $A_2$ and $B_2$ respectively, and the lines $A_1B_1$ and $A_2B_2$ meet at $C$. 
Prove that, as $A_1$ and $B_1$ vary, the circumcentres of triangles $A_1A_2C$ all lie on one fixed circle.}

		\prob{}{}{}{Let $\Gamma_1$, $\Gamma_2$, $\Gamma_3$, $\Gamma_4$ be distinct circles such that $\Gamma_1$, $\Gamma_3$ are externally tangent at $P$, and $\Gamma_2$, $\Gamma_4$ are externally tangent at the same point $P$. Suppose that $\Gamma_1$ and $\Gamma_2$; $\Gamma_2$ and $\Gamma_3$; $\Gamma_3$ and $\Gamma_4$; $\Gamma_4$ and $\Gamma_1$ meet at $A$, $B$, $C$, $D$, respectively, and that all these points are different from $P$. Prove that

			\[ \frac{AB\cdot BC}{AD\cdot DC}=\frac{PB^2}{PD^2}. \]}

		\prob{}{}{}{In a convex quadrilateral $ABCD$, the diagonal $BD$ bisects neither the angle $ABC$ nor the angle $CDA$. The point $P$ lies inside $ABCD$ and satisfies \[\angle PBC=\angle DBA\quad\text{and}\quad \angle PDC=\angle BDA.\] Prove that $ABCD$ is a cyclic quadrilateral if and only if $AP=CP$.}

		\prob{}{}{}{Let $ABCD$ be a fixed convex quadrilateral with $BC=DA$ and $BC$ not parallel with $DA$. Let two variable points $E$ and $F$ lie on the sides $BC$ and $DA$, respectively, and satisfy $BE=DF$. The lines $AC$ and $BD$ meet at $P$, the lines $BD$ and $EF$ meet at $Q$, the lines $EF$ and $AC$ meet at $R$.

			Prove that the circumcircles of the triangles $PQR$, as $E$ and $F$ vary, have a common point other than $P$.}

		\prob{}{}{}{A point $D$ is chosen on the side $AC$ of a triangle $ABC$ with $\angle C < \angle A < 90^\circ$ in such a way that $BD=BA$. The incircle of $ABC$ is tangent to $AB$ and $AC$ at points $K$ and $L$, respectively. Let $J$ be the incenter of triangle $BCD$. Prove that the line $KL$ intersects the line segment $AJ$ at its midpoint.}

		\prob{}{}{}{Consider five points $ A$, $ B$, $ C$, $ D$ and $ E$ such that $ ABCD$ is a parallelogram and $ BCED$ is a cyclic quadrilateral. Let $ \ell$ be a line passing through $ A$. Suppose that $ \ell$ intersects the interior of the segment $ DC$ at $ F$ and intersects line $ BC$ at $ G$. Suppose also that $ EF = EG = EC$. Prove that $ \ell$ is the bisector of angle $ DAB$.}

		\prob{}{}{}{In an acute triangle $ ABC$, segments $ BE$ and $ CF$ are altitudes. Two circles pass through the points $ A$ and $ F$ and are tangent to the line $ BC$ at the points $ P$ and $ Q$, so that $ B$ lies between $ C$ and $ Q$. Prove that lines $ PE$ and $ QF$ intersect on the circumcircle of triangle $ AEF$.}

		\prob{}{}{}{Given a cyclic quadrilateral $ABCD$, let the diagonals $AC$ and $BD$ meet at $E$ and the lines $AD$ and $BC$ meet at $F$. The midpoints of $AB$ and $CD$ are $G$ and $H$, respectively. Show that $EF$ is tangent at $E$ to the circle through the points $E$, $G$ and $H$.}

		\prob{}{}{}{Given a triangle $ABC$, with $I$ as its incenter and $\Gamma$ as its circumcircle, $AI$ intersects $\Gamma$ again at $D$. Let $E$ be a point on the arc $BDC$, and $F$ a point on the segment $BC$, such that $\angle BAF=\angle CAE < \dfrac12\angle BAC$. If $G$ is the midpoint of $IF$, prove that the meeting point of the lines $EI$ and $DG$ lies on $\Gamma$.}

		\prob{}{}{}{Let $ABC$ be an acute triangle with circumcircle $\Omega$. Let $B_0$ be the midpoint of $AC$ and let $C_0$ be the midpoint of $AB$.
Let $D$ be the foot of the altitude from $A$ and let $G$ be the centroid of the triangle $ABC$. Let $\\omega$ be a circle through $B_0$ and $C_0$ that is tangent to the circle $\\Omega$ at a point $X\\not= A$. Prove that the points $D,G$ and $X$ are collinear.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $AB \\neq AC$ and circumcenter $O$. The bisector of $\\angle BAC$ intersects $BC$ at $D$. Let $E$ be the reflection of $D$ with respect to the midpoint of $BC$. The lines through $D$ and $E$ perpendicular to $BC$ intersect the lines $AO$ and $AD$ at $X$ and $Y$ respectively. Prove that the quadrilateral $BXCY$ is cyclic.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $\\angle B > \\angle C$. Let $P$ and $Q$ be two different points on line $AC$ such that $\\angle PBA = \\angle QBA = \\angle ACB $ and $A$ is located between $P$ and $C$. Suppose that there exists an interior point $D$ of segment $BQ$ for which $PD=PB$. Let the ray $AD$ intersect the circle $ABC$ at $R \\neq A$. Prove that $QB = QR$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Consider a fixed circle $\\Gamma$ with three fixed points $A, B,$ and $C$ on it. Also, let us fix a real number $\\lambda \\in(0,1)$. For a variable point $P \\not\\in\\{A, B, C\\}$ on $\\Gamma$, let $M$ be the point on the segment $CP$ such that $CM =\\lambda\\cdot CP$ . Let $Q$ be the second point of intersection of the circumcircles of the triangles $AMP$ and $BMC$. Prove that as $P$ varies, the point $Q$ lies on a fixed circle.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be an acute triangle and let $M$ be the midpoint of $AC$. A circle $\\omega$ passing through $B$ and $M$ meets the sides $AB$ and $BC$ at points $P$ and $Q$ respectively. Let $T$ be the point such that $BPTQ$ is a parallelogram. Suppose that $T$ lies on the circumcircle of $ABC$. Determine all possible values of $\\frac{BT}{BM}$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $AB = AC \\neq BC$ and let $I$ be its incentre. The line $BI$ meets $AC$ at $D$, and the line through $D$ perpendicular to $AC$ meets $AI$ at $E$. Prove that the reflection of $I$ in $AC$ lies on the circumcircle of triangle $BDE$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{In triangle $ABC$, let $\\omega$ be the excircle opposite to $A$. Let $D, E$ and $F$ be the points where $\\omega$ is tangent to $BC, CA$, and $AB$, respectively. The circle $AEF$ intersects line $BC$ at $P$ and $Q$. Let $M$ be the midpoint of $AD$. Prove that the circle $MPQ$ is tangent to $\\omega$.}\n\n\n\t\n\t\\newpage\\section{G5}\n\t\n\t\t\\prob{}{}{}{The tangents at $B$ and $A$ to the circumcircle of an acute angled triangle $ABC$ meet the tangent at $C$ at $T$ and $U$ respectively. $AT$ meets $BC$ at $P$, and $Q$ is the midpoint of $AP$; $BU$ meets $CA$ at $R$, and $S$ is the midpoint of $BR$. Prove that $\\angle ABQ=\\angle BAS$. Determine, in terms of ratios of side lengths, the triangles for which this angle is a maximum.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be an acute triangle. Let $DAC,EAB$, and $FBC$ be isosceles triangles exterior to $ABC$, with $DA=DC, EA=EB$, and $FB=FC$, such that\n\t\t\t\n\t\t\t\\[ \\angle ADC = 2\\angle BAC, \\quad \\angle BEA= 2 \\angle ABC, \\quad \\angle CFB = 2 \\angle ACB. \\]\n\t\t\t\n\t\t\tLet $D'$ be the intersection of lines $DB$ and $EF$, let $E'$ be the intersection of $EC$ and $DF$, and let $F'$ be the intersection of $FA$ and $DE$. 
Find, with proof, the value of the sum

			\[ \frac{DB}{DD'}+\frac{EC}{EE'}+\frac{FA}{FF'}. \]}

		\prob{}{}{}{For any set $S$ of five points in the plane, no three of which are collinear, let $M(S)$ and $m(S)$ denote the greatest and smallest areas, respectively, of triangles determined by three points from $S$. What is the minimum possible value of $M(S)/m(S)$?}

		\prob{}{}{}{Let $ABC$ be an isosceles triangle with $AC=BC$, whose incentre is $I$. Let $P$ be a point on the circumcircle of the triangle $AIB$ lying inside the triangle $ABC$. The lines through $P$ parallel to $CA$ and $CB$ meet $AB$ at $D$ and $E$, respectively. The line through $P$ parallel to $AB$ meets $CA$ and $CB$ at $F$ and $G$, respectively. Prove that the lines $DF$ and $EG$ intersect on the circumcircle of the triangle $ABC$.}

		\prob{}{}{}{Let $A_1A_2A_3\ldots A_n$ be a regular $n$-gon. Let $B_1$ and $B_n$ be the midpoints of its sides $A_1A_2$ and $A_{n-1}A_n$. Also, for every $i\in\left\{2,3,4,\ldots ,n-1\right\}$, let $S$ be the point of intersection of the lines $A_1A_{i+1}$ and $A_nA_i$, and let $B_i$ be the point of intersection of the angle bisector of the angle $\measuredangle A_iSA_{i+1}$ with the segment $A_iA_{i+1}$.

			Prove that $\sum_{i=1}^{n-1} \measuredangle A_1B_iA_n=180^{\circ}$.}

		\prob{}{}{}{Let $\triangle ABC$ be an acute-angled triangle with $AB \not= AC$. Let $H$ be the orthocenter of triangle $ABC$, and let $M$ be the midpoint of the side $BC$. Let $D$ be a point on the side $AB$ and $E$ a point on the side $AC$ such that $AE=AD$ and the points $D$, $H$, $E$ are on the same line. Prove that the line $HM$ is perpendicular to the common chord of the circumscribed circles of triangle $\triangle ABC$ and triangle $\triangle ADE$.}

		\prob{}{}{}{In triangle $ABC$, let $J$ be the center of the excircle tangent to side $BC$ at $A_{1}$ and to the extensions of the sides $AC$ and $AB$ at $B_{1}$ and $C_{1}$ respectively. Suppose that the lines $A_{1}B_{1}$ and $AB$ are perpendicular and intersect at $D$. Let $E$ be the foot of the perpendicular from $C_{1}$ to line $DJ$. Determine the angles $\angle{BEA_{1}}$ and $\angle{AEB_{1}}$.}

		\prob{}{}{}{Let $ ABC$ be a fixed triangle, and let $ A_1$, $ B_1$, $ C_1$ be the midpoints of sides $ BC$, $ CA$, $ AB$, respectively. Let $ P$ be a variable point on the circumcircle. Let lines $ PA_1$, $ PB_1$, $ PC_1$ meet the circumcircle again at $ A'$, $ B'$, $ C'$, respectively. Assume that the points $ A$, $ B$, $ C$, $ A'$, $ B'$, $ C'$ are distinct, and lines $ AA'$, $ BB'$, $ CC'$ form a triangle. Prove that the area of this triangle does not depend on $ P$.}

		\prob{}{}{}{Let $ k$ and $ n$ be integers with $ 0\le k\le n - 2$. Consider a set $ L$ of $ n$ lines in the plane such that no two of them are parallel and no three have a common point. Denote by $ I$ the set of intersections of lines in $ L$. Let $ O$ be a point in the plane not lying on any line of $ L$. A point $ X\in I$ is colored red if the open line segment $ OX$ intersects at most $ k$ lines in $ L$. Prove that $ I$ contains at least $ \dfrac{1}{2}(k + 1)(k + 2)$ red points.}

		\prob{}{}{}{Let $P$ be a polygon that is convex and symmetric to some point $O$.
Prove that for some parallelogram $R$ satisfying $P\\subset R$ we have \\[\\frac{|R|}{|P|}\\leq \\sqrt 2\\]\n\t\t\twhere $|R|$ and $|P|$ denote the area of the sets $R$ and $P$, respectively.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCDE$ be a convex pentagon such that $BC \\parallel AE,$ $AB = BC + AE,$ and $\\angle ABC = \\angle CDE.$ Let $M$ be the midpoint of $CE,$ and let $O$ be the circumcenter of triangle $BCD.$ Given that $\\angle DMO = 90^{\\circ},$ prove that $2 \\angle BDA = \\angle CDE.$}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{ISL 2011 G5}{}{Let $ABC$ be a triangle with incentre $I$ and circumcircle $\\omega$. Let $D$ and $E$ be the second intersection points of $\\omega$ with $AI$ and $BI$, respectively. The chord $DE$ meets $AC$ at a point $F$, and $BC$ at a point $G$. Let $P$ be the intersection point of the line through $F$ parallel to $AD$ and the line through $G$ parallel to $BE$. Suppose that the tangents to $\\omega$ at $A$ and $B$ meet at a point $K$. Prove that the three lines $AE,BD$ and $KP$ are either parallel or concurrent.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $\\angle BCA=90^{\\circ}$, and let $D$ be the foot of the altitude from $C$. Let $X$ be a point in the interior of the segment $CD$. Let $K$ be the point on the segment $AX$ such that $BK=BC$. Similarly, let $L$ be the point on the segment $BX$ such that $AL=AC$. Let $M$ be the point of intersection of $AL$ and $BK$.\n\t\t\t\n\t\t\tShow that $MK=ML$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCDEF$ be a convex hexagon with $AB=DE$, $BC=EF$, $CD=FA$, and $\\angle A-\\angle D = \\angle C -\\angle F = \\angle E -\\angle B$. Prove that the diagonals $AD$, $BE$, and $CF$ are concurrent.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Convex quadrilateral $ABCD$ has $\\angle ABC = \\angle CDA = 90^{\\circ}$. Point $H$ is the foot of the perpendicular from $A$ to $BD$. Points $S$ and $T$ lie on sides $AB$ and $AD$, respectively, such that $H$ lies inside triangle $SCT$ and \\[ \\angle CHS - \\angle CSB = 90^{\\circ}, \\quad \\angle THC - \\angle DTC = 90^{\\circ}. \\] Prove that line $BD$ is tangent to the circumcircle of triangle $TSH$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $CA \\neq CB$. Let $D$, $F$, and $G$ be the midpoints of the sides $AB$, $AC$, and $BC$ respectively. A circle $\\Gamma$ passing through $C$ and tangent to $AB$ at $D$ meets the segments $AF$ and $BG$ at $H$ and $I$, respectively. The points $H'$ and $I'$ are symmetric to $H$ and $I$ about $F$ and $G$, respectively. The line $H'I'$ meets $CD$ and $FG$ at $Q$ and $M$, respectively. The line $CM$ meets $\\Gamma$ again at $P$. Prove that $CQ = QP$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $D$ be the foot of perpendicular from $A$ to the Euler line (the line passing through the circumcentre and the orthocentre) of an acute scalene triangle $ABC$. A circle $\\omega$ with centre $S$ passes through $A$ and $D$, and it intersects sides $AB$ and $AC$ at $X$ and $Y$ respectively. Let $P$ be the foot of altitude from $A$ to $BC$, and let $M$ be the midpoint of $BC$. Prove that the circumcentre of triangle $XSY$ is equidistant from $P$ and $M$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCC_1B_1A_1$ be a convex hexagon such that $AB=BC$, and suppose that the line segments $AA_1, BB_1$, and $CC_1$ have the same perpendicular bisector. Let the diagonals $AC_1$ and $A_1C$ meet at $D$, and denote by $\\omega$ the circle $ABC$. 
Let $\\omega$ intersect the circle $A_1BC_1$ again at $E \\neq B$. Prove that the lines $BB_1$ and $DE$ intersect on $\\omega$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\\newpage\\section{G6}\n\t\n\t\n\t\t\\prob{}{}{}{Let $ ABCD$ be a convex quadrilateral. The perpendicular bisectors of its sides $ AB$ and $ CD$ meet at $ Y$. Denote by $ X$ a point inside the quadrilateral $ ABCD$ such that $ \\measuredangle ADX = \\measuredangle BCX < 90^{\\circ}$ and $ \\measuredangle DAX = \\measuredangle CBX < 90^{\\circ}$. Show that $ \\measuredangle AYB = 2\\cdot\\measuredangle ADX$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle and $P$ an exterior point in the plane of the triangle. Suppose the lines $AP$, $BP$, $CP$ meet the sides $BC$, $CA$, $AB$ (or extensions thereof) in $D$, $E$, $F$, respectively. Suppose further that the areas of triangles $PBD$, $PCE$, $PAF$ are all equal. Prove that each of these areas is equal to the area of triangle $ABC$ itself.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $n\\geq3$ be a positive integer. Let $C_1,C_2,C_3,\\ldots,C_n$ be unit circles in the plane, with centres $O_1,O_2,O_3,\\ldots,O_n$ respectively. If no line meets more than two of the circles, prove that \\[ \\sum\\limits^{}_{1\\leq i<j\\leq n}{1\\over O_iO_j}\\leq{(n-1)\\pi\\over 4}. \\]}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Each pair of opposite sides of a convex hexagon has the following property: the distance between their midpoints is equal to $\\dfrac{\\sqrt{3}}{2}$ times the sum of their lengths. Prove that all the angles of the hexagon are equal.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $P$ be a convex polygon. Prove that there exists a convex hexagon that is contained in $P$ and whose area is at least $\\frac34$ of the area of the polygon $P$.\n\t\t\t\n\t\t\tAlternative version. Let $P$ be a convex polygon with $n\\geq 6$ vertices. Prove that there exists a convex hexagon with\n\t\t\t\n\t\t\ta) vertices on the sides of the polygon (or)\n\t\t\tb) vertices among the vertices of the polygon\n\t\t\t\n\t\t\tsuch that the area of the hexagon is at least $\\frac{3}{4}$ of the area of the polygon.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle, and $M$ the midpoint of its side $BC$. Let $\\gamma$ be the incircle of triangle $ABC$. The median $AM$ of triangle $ABC$ intersects the incircle $\\gamma$ at two points $K$ and $L$. Let the lines passing through $K$ and $L$, parallel to $BC$, intersect the incircle $\\gamma$ again in two points $X$ and $Y$. Let the lines $AX$ and $AY$ intersect $BC$ again at the points $P$ and $Q$. Prove that $BP = CQ$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Circles $ w_{1}$ and $ w_{2}$ with centres $ O_{1}$ and $ O_{2}$ are externally tangent at point $ D$ and internally tangent to a circle $ w$ at points $ E$ and $ F$ respectively. Line $ t$ is the common tangent of $ w_{1}$ and $ w_{2}$ at $ D$. Let $ AB$ be the diameter of $ w$ perpendicular to $ t$, so that $ A, E, O_{1}$ are on the same side of $ t$. Prove that lines $ AO_{1}$, $ BO_{2}$, $ EF$ and $ t$ are concurrent.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Determine the smallest positive real number $ k$ with the following property. Let $ ABCD$ be a convex quadrilateral, and let points $ A_1$, $ B_1$, $ C_1$, and $ D_1$ lie on sides $ AB$, $ BC$, $ CD$, and $ DA$, respectively. Consider the areas of triangles $ AA_1D_1$, $ BB_1A_1$, $ CC_1B_1$ and $ DD_1C_1$; let $ S$ be the sum of the two smallest ones, and let $ S_1$ be the area of quadrilateral $ A_1B_1C_1D_1$. 
Then we always have $ kS_1\ge S$.}

		\prob{}{}{}{There is given a convex quadrilateral $ ABCD$. Prove that there exists a point $ P$ inside the quadrilateral such that

			$\quad \angle PAB + \angle PDC = \angle PBC + \angle PAD = \angle PCD + \angle PBA = \angle PDA + \angle PCB =$ $ 90^{\circ}$

			if and only if the diagonals $ AC$ and $ BD$ are perpendicular.}

		\prob{}{}{}{Let the sides $AD$ and $BC$ of the quadrilateral $ABCD$ (such that $AB$ is not parallel to $CD$) intersect at point $P$. Points $O_1$ and $O_2$ are the circumcenters, and points $H_1$ and $H_2$ the orthocenters, of triangles $ABP$ and $CDP$, respectively. Denote the midpoints of segments $O_1H_1$ and $O_2H_2$ by $E_1$ and $E_2$, respectively. Prove that the perpendicular from $E_1$ to $CD$, the perpendicular from $E_2$ to $AB$ and the line $H_1H_2$ are concurrent.}

		\prob{}{}{}{The vertices $X, Y , Z$ of an equilateral triangle $XYZ$ lie respectively on the sides $BC, CA, AB$ of an acute-angled triangle $ABC.$ Prove that the incenter of triangle $ABC$ lies inside triangle $XYZ.$}

		\prob{}{}{}{Let $ABC$ be a triangle with $AB=AC$ and let $D$ be the midpoint of $AC$. The angle bisector of $\angle BAC$ intersects the circle through $D,B$ and $C$ at the point $E$ inside the triangle $ABC$. The line $BD$ intersects the circle through $A,E$ and $B$ at two points $B$ and $F$. The lines $AF$ and $BE$ meet at a point $I$, and the lines $CI$ and $BD$ meet at a point $K$. Show that $I$ is the incentre of triangle $KAB$.
			%\fig{.4}{ISL_GEO/5_6_12}{}
		}

			\solu{Draw the circle centered at $ D $ with radius $ DA=DC $}

		\prob{}{}{}{Let $ABC$ be a triangle with circumcenter $O$ and incenter $I$. The points $D,E$ and $F$ on the sides $BC,CA$ and $AB$ respectively are such that $BD+BF=CA$ and $CD+CE=AB$. The circumcircles of the triangles $BFD$ and $CDE$ intersect at $P \neq D$. Prove that $OP=OI$.}

		\prob{}{}{}{Let the excircle of triangle $ABC$ opposite the vertex $A$ be tangent to the side $BC$ at the point $A_1$. Define the points $B_1$ on $CA$ and $C_1$ on $AB$ analogously, using the excircles opposite $B$ and $C$, respectively. Suppose that the circumcentre of triangle $A_1B_1C_1$ lies on the circumcircle of triangle $ABC$. Prove that triangle $ABC$ is right-angled.}

		\prob{}{}{}{Let $ABC$ be a fixed acute-angled triangle. Consider some points $E$ and $F$ lying on the sides $AC$ and $AB$, respectively, and let $M$ be the midpoint of $EF$. Let the perpendicular bisector of $EF$ intersect the line $BC$ at $K$, and let the perpendicular bisector of $MK$ intersect the lines $AC$ and $AB$ at $S$ and $T$, respectively. We call the pair $(E, F)$ $\textit{interesting}$, if the quadrilateral $KSAT$ is cyclic.
			Suppose that the pairs $(E_1, F_1)$ and $(E_2, F_2)$ are interesting. Prove that $\displaystyle\frac{E_1 E_2}{AB}=\frac{F_1 F_2}{AC}$.}

		\prob{}{}{}{Let $ABC$ be an acute triangle with $AB > AC$. Let $\Gamma$ be its circumcircle, $H$ its orthocenter, and $F$ the foot of the altitude from $A$. Let $M$ be the midpoint of $BC$. Let $Q$ be the point on $\Gamma$ such that $\angle HQA = 90^{\circ}$ and let $K$ be the point on $\Gamma$ such that $\angle HKQ = 90^{\circ}$.
Assume that the points $A$, $B$, $C$, $K$ and $Q$ are all different and lie on $\Gamma$ in this order.

			Prove that the circumcircles of triangles $KQH$ and $FKM$ are tangent to each other.}

		\prob{}{}{}{Let $ABCD$ be a convex quadrilateral with $\angle ABC = \angle ADC < 90^{\circ}$. The internal angle bisectors of $\angle ABC$ and $\angle ADC$ meet $AC$ at $E$ and $F$ respectively, and meet each other at point $P$. Let $M$ be the midpoint of $AC$ and let $\omega$ be the circumcircle of triangle $BPD$. Segments $BM$ and $DM$ intersect $\omega$ again at $X$ and $Y$ respectively. Denote by $Q$ the intersection point of lines $XE$ and $YF$. Prove that $PQ \perp AC$.}

		\prob{}{}{}{Let $n\ge3$ be an integer. Two regular $n$-gons $\mathcal{A}$ and $\mathcal{B}$ are given in the plane. Prove that the vertices of $\mathcal{A}$ that lie inside $\mathcal{B}$ or on its boundary are consecutive.

			(That is, prove that there exists a line separating those vertices of $\mathcal{A}$ that lie inside $\mathcal{B}$ or on its boundary from the other vertices of $\mathcal{A}$.)}

	\newpage\section{G7}

		\prob{}{}{}{Ten gangsters are standing on a flat surface, and the distances between them are all distinct. At twelve o'clock, when the church bells start chiming, each of them fatally shoots the one among the other nine gangsters who is the nearest. At least how many gangsters will be killed?}

		\prob{}{}{}{Let $O$ be an interior point of acute triangle $ABC$. Let $A_1$ lie on $BC$ with $OA_1$ perpendicular to $BC$. Define $B_1$ on $CA$ and $C_1$ on $AB$ similarly. Prove that $O$ is the circumcenter of $ABC$ if and only if the perimeter of $A_1B_1C_1$ is not less than any one of the perimeters of $AB_1C_1, BC_1A_1$, and $CA_1B_1$.}

		\prob{}{}{}{The incircle $ \Omega$ of the acute-angled triangle $ ABC$ is tangent to its side $ BC$ at a point $ K$. Let $ AD$ be an altitude of triangle $ ABC$, and let $ M$ be the midpoint of the segment $ AD$. If $ N$ is the common point of the circle $ \Omega$ and the line $ KM$ (distinct from $ K$), then prove that the incircle $ \Omega$ and the circumcircle of triangle $ BCN$ are tangent to each other at the point $ N$.}

		\prob{}{}{}{Let $ABC$ be a triangle with semiperimeter $s$ and inradius $r$. The semicircles with diameters $BC$, $CA$, $AB$ are drawn on the outside of the triangle $ABC$. The circle tangent to all of these three semicircles has radius $t$. Prove that
			\[\frac{s}{2}<t\le\frac{s}{2}+\left(1-\frac{\sqrt{3}}{2}\right)r. \]
			Alternative formulation. In a triangle $ABC$, construct circles with diameters $BC$, $CA$, and $AB$, respectively. Construct a circle $w$ externally tangent to these three circles.
Let the radius of this circle $w$ be $t$.
			Prove: $\frac{s}{2}<t\le\frac{s}{2}+\frac12\left(2-\sqrt3\right)r$, where $r$ is the inradius and $s$ is the semiperimeter of triangle $ABC$.}

		\prob{}{}{}{For a given triangle $ ABC$, let $ X$ be a variable point on the line $ BC$ such that $ C$ lies between $ B$ and $ X$ and the incircles of the triangles $ ABX$ and $ ACX$ intersect at two distinct points $ P$ and $ Q.$ Prove that the line $ PQ$ passes through a point independent of $ X$.}

		\prob{}{}{}{In an acute triangle $ABC$, let $D$, $E$, $F$ be the feet of the perpendiculars from the points $A$, $B$, $C$ to the lines $BC$, $CA$, $AB$, respectively, and let $P$, $Q$, $R$ be the feet of the perpendiculars from the points $A$, $B$, $C$ to the lines $EF$, $FD$, $DE$, respectively.

			Prove that $p\left(ABC\right)p\left(PQR\right) \ge \left(p\left(DEF\right)\right)^{2}$, where $p\left(T\right)$ denotes the perimeter of triangle $T$.}

		\prob{}{}{}{In a triangle $ ABC$, let $ M_{a}$, $ M_{b}$, $ M_{c}$ be the midpoints of the sides $ BC$, $ CA$, $ AB$, respectively, and $ T_{a}$, $ T_{b}$, $ T_{c}$ be the midpoints of the arcs $ BC$, $ CA$, $ AB$ of the circumcircle of $ ABC$, not containing the vertices $ A$, $ B$, $ C$, respectively. For $ i \in \left\{a, b, c\right\}$, let $ w_{i}$ be the circle with $ M_{i}T_{i}$ as diameter. Let $ p_{i}$ be the common external tangent to the circles $ w_{j}$ and $ w_{k}$ (for all $ \left\{i, j, k\right\}= \left\{a, b, c\right\}$) such that $ w_{i}$ lies on the opposite side of $ p_{i}$ than $ w_{j}$ and $ w_{k}$ do.
			Prove that the lines $ p_{a}$, $ p_{b}$, $ p_{c}$ form a triangle similar to $ ABC$ and find the ratio of similitude.}

		\prob{}{}{}{Given an acute triangle $ ABC$ with $ \angle B > \angle C$. Point $ I$ is the incenter, and $ R$ the circumradius. Point $ D$ is the foot of the altitude from vertex $ A$. Point $ K$ lies on line $ AD$ such that $ AK = 2R$, and $ D$ separates $ A$ and $ K$. Lines $ DI$ and $ KI$ meet sides $ AC$ and $ BC$ at $ E,F$ respectively. Suppose that $ IE = IF$.

			Prove that $ \angle B\leq 3\angle C$.}

		\prob{}{}{}{Let $ ABCD$ be a convex quadrilateral with $ BA\neq BC$. Denote the incircles of triangles $ ABC$ and $ ADC$ by $ \omega_{1}$ and $ \omega_{2}$ respectively. Suppose that there exists a circle $ \omega$ tangent to ray $ BA$ beyond $ A$ and to the ray $ BC$ beyond $ C$, which is also tangent to the lines $ AD$ and $ CD$. Prove that the common external tangents to $ \omega_{1}$ and $\omega_{2}$ intersect on $ \omega$.}

		\prob{}{}{}{Let $ABC$ be a triangle with incenter $I$ and let $X$, $Y$ and $Z$ be the incenters of the triangles $BIC$, $CIA$ and $AIB$, respectively. Let the triangle $XYZ$ be equilateral.
Prove that $ABC$ is equilateral too.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Three circular arcs $\\gamma_1, \\gamma_2,$ and $\\gamma_3$ connect the points $A$ and $C.$ These arcs lie in the same half-plane defined by line $AC$ in such a way that arc $\\gamma_2$ lies between the arcs $\\gamma_1$ and $\\gamma_3.$ Point $B$ lies on the segment $AC.$ Let $h_1, h_2$, and $h_3$ be three rays starting at $B,$ lying in the same half-plane, $h_2$ being between $h_1$ and $h_3.$ For $i, j = 1, 2, 3,$ denote by $V_{ij}$ the point of intersection of $h_i$ and $\\gamma_j$ (see the Figure below). Denote by $\\widehat{V_{ij}V_{kj}}\\widehat{V_{kl}V_{il}}$ the curved quadrilateral, whose sides are the segments $V_{ij}V_{il},$ $V_{kj}V_{kl}$ and arcs $V_{ij}V_{kj}$ and $V_{il}V_{kl}.$ We say that this quadrilateral is $circumscribed$ if there exists a circle touching these two segments and two arcs. Prove that if the curved quadrilaterals $\\widehat{V_{11}V_{21}}\\widehat{V_{22}V_{12}}, \\widehat{V_{12}V_{22}}\\widehat{V_{23}V_{13}},\\widehat{V_{21}V_{31}}\\widehat{V_{32}V_{22}}$ are circumscribed, then the curved quadrilateral $\\widehat{V_{22}V_{32}}\\widehat{V_{33}V_{23}}$ is circumscribed, too.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCDEF$ be a convex hexagon all of whose sides are tangent to a circle $\\omega$ with centre $O$. Suppose that the circumcircle of triangle $ACE$ is concentric with $\\omega$. Let $J$ be the foot of the perpendicular from $B$ to $CD$. Suppose that the perpendicular from $B$ to $DF$ intersects the line $EO$ at a point $K$. Let $L$ be the foot of the perpendicular from $K$ to $DE$. Prove that $DJ=DL$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a convex quadrilateral with non-parallel sides $BC$ and $AD$. Assume that there is a point $E$ on the side $BC$ such that the quadrilaterals $ABED$ and $AECD$ are circumscribed. Prove that there is a point $F$ on the side $AD$ such that the quadrilaterals $ABCF$ and $BCDF$ are circumscribed if and only if $AB$ is parallel to $CD$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with circumcircle $\\Omega$ and incentre $I$. Let the line passing through $I$ and perpendicular to $CI$ intersect the segment $BC$ and the arc $BC$ (not containing $A$) of $\\Omega$ at points $U$ and $V$ , respectively. Let the line passing through $U$ and parallel to $AI$ intersect $AV$ at $X$, and let the line passing through $V$ and parallel to $AI$ intersect $AB$ at $Y$ . Let $W$ and $Z$ be the midpoints of $AX$ and $BC$, respectively. Prove that if the points $I, X,$ and $Y$ are collinear, then the points $I, W ,$ and $Z$ are also collinear.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a convex quadrilateral, and let $P$, $Q$, $R$, and $S$ be points on the sides $AB$, $BC$, $CD$, and $DA$, respectively. Let the line segment $PR$ and $QS$ meet at $O$. Suppose that each of the quadrilaterals $APOS$, $BQOP$, $CROQ$, and $DSOR$ has an incircle. Prove that the lines $AC$, $PQ$, and $RS$ are either concurrent or parallel to each other.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $I$ be the incentre of a non-equilateral triangle $ABC$, $I_A$ be the $A$-excentre, $I'_A$ be the reflection of $I_A$ in $BC$, and $l_A$ be the reflection of line $AI'_A$ in $AI$. Define points $I_B$, $I'_B$ and line $l_B$ analogously. 
Let $P$ be the intersection point of $l_A$ and $l_B$.\n\t\t\t\n\t\t\tProve that $P$ lies on line $OI$ where $O$ is the circumcentre of triangle $ABC$.\n\t\t\tLet one of the tangents from $P$ to the incircle of triangle $ABC$ meet the circumcircle at points $X$ and $Y$. Show that $\\angle XIY = 120^{\\circ}$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{A convex quadrilateral $ABCD$ has an inscribed circle with center $I$. Let $I_a, I_b, I_c$ and $I_d$ be the incenters of the triangles $DAB, ABC, BCD$ and $CDA$, respectively. Suppose that the common external tangents of the circles $AI_bI_d$ and $CI_bI_d$ meet at $X$, and the common external tangents of the circles $BI_aI_c$ and $DI_aI_c$ meet at $Y$. Prove that $\\angle{XIY}=90^{\\circ}$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\\newpage\\section{G8}\n\t\n\t\t\\prob{}{}{}{Let $ AH_1, BH_2, CH_3$ be the altitudes of an acute angled triangle $ ABC$. Its incircle touches the sides $ BC, AC$ and $ AB$ at $ T_1, T_2$ and $ T_3$ respectively. Consider the symmetric images of the lines $ H_1H_2, H_2H_3$ and $ H_3H_1$ with respect to the lines $ T_1T_2, T_2T_3$ and $ T_3T_1$. Prove that these images form a triangle whose vertices lie on the incircle of $ ABC$.}\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with $\\angle BAC = 60^{\\circ}$. Let $AP$ bisect $\\angle BAC$ and let $BQ$ bisect $\\angle ABC$, with $P$ on $BC$ and $Q$ on $AC$. If $AB + BP = AQ + QB$, what are the angles of the triangle?}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let two circles $S_{1}$ and $S_{2}$ meet at the points $A$ and $B$. A line through $A$ meets $S_{1}$ again at $C$ and $S_{2}$ again at $D$. Let $M$, $N$, $K$ be three points on the line segments $CD$, $BC$, $BD$ respectively, with $MN$ parallel to $BD$ and $MK$ parallel to $BC$. Let $E$ and $F$ be points on those arcs $BC$ of $S_{1}$ and $BD$ of $S_{2}$ respectively that do not contain $A$. Given that $EN$ is perpendicular to $BC$ and $FK$ is perpendicular to $BD$ prove that $\\angle EMF=90^{\\circ}$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Given a cyclic quadrilateral $ABCD$, let $M$ be the midpoint of the side $CD$, and let $N$ be a point on the circumcircle of triangle $ABM$. Assume that the point $N$ is different from the point $M$ and satisfies $\\frac{AN}{BN}=\\frac{AM}{BM}$. Prove that the points $E$, $F$, $N$ are collinear, where $E=AC\\cap BD$ and $F=BC\\cap DA$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Assign to each side $b$ of a convex polygon $P$ the maximum area of a triangle that has $b$ as a side and is contained in $P$. Show that the sum of the areas assigned to the sides of $P$ is at least twice the area of $P$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Points $ A_{1}$, $ B_{1}$, $ C_{1}$ are chosen on the sides $ BC$, $ CA$, $ AB$ of a triangle $ ABC$ respectively. The circumcircles of triangles $ AB_{1}C_{1}$, $ BC_{1}A_{1}$, $ CA_{1}B_{1}$ intersect the circumcircle of triangle $ ABC$ again at points $ A_{2}$, $ B_{2}$, $ C_{2}$ respectively ($ A_{2}\\neq A, B_{2}\\neq B, C_{2}\\neq C$). Points $ A_{3}$, $ B_{3}$, $ C_{3}$ are symmetric to $ A_{1}$, $ B_{1}$, $ C_{1}$ with respect to the midpoints of the sides $ BC$, $ CA$, $ AB$ respectively. Prove that the triangles $ A_{2}B_{2}C_{2}$ and $ A_{3}B_{3}C_{3}$ are similar.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a convex quadrilateral. 
A circle passing through the points $A$ and $D$ and a circle passing through the points $B$ and $C$ are externally tangent at a point $P$ inside the quadrilateral. Suppose that \\[\\angle{PAB}+\\angle{PDC}\\leq 90^\\circ\\qquad\\text{and}\\qquad\\angle{PBA}+\\angle{PCD}\\leq 90^\\circ.\\] Prove that $AB+CD \\geq BC+AD$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Point $ P$ lies on side $ AB$ of a convex quadrilateral $ ABCD$. Let $ \\omega$ be the incircle of triangle $ CPD$, and let $ I$ be its incenter. Suppose that $ \\omega$ is tangent to the incircles of triangles $ APD$ and $ BPC$ at points $ K$ and $ L$, respectively. Let lines $ AC$ and $ BD$ meet at $ E$, and let lines $ AK$ and $ BL$ meet at $ F$. Prove that points $ E$, $ I$, and $ F$ are collinear.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABCD$ be a circumscribed quadrilateral. Let $g$ be a line through $A$ which meets the segment $BC$ in $M$ and the line $CD$ in $N$. Denote by $I_1$, $I_2$ and $I_3$ the incenters of $\\triangle ABM$, $\\triangle MNC$ and $\\triangle NDA$, respectively. Prove that the orthocenter of $\\triangle I_1I_2I_3$ lies on $g$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be an acute triangle with circumcircle $\\Gamma$. Let $\\ell$ be a tangent line to $\\Gamma$, and let $\\ell_a, \\ell_b$ and $\\ell_c$ be the lines obtained by reflecting $\\ell$ in the lines $BC$, $CA$ and $AB$, respectively. Show that the circumcircle of the triangle determined by the lines $\\ell_a, \\ell_b$ and $\\ell_c$ is tangent to the circle $\\Gamma$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $ABC$ be a triangle with circumcircle $\\omega$ and $\\ell$ a line without common points with $\\omega$. Denote by $P$ the foot of the perpendicular from the center of $\\omega$ to $\\ell$. The side-lines $BC,CA,AB$ intersect $\\ell$ at the points $X,Y,Z$ different from $P$. Prove that the circumcircles of the triangles $AXP$, $BYP$ and $CZP$ have a common point different from $P$ or are mutually tangent at $P$.}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{A triangulation of a convex polygon $\\Pi$ is a partitioning of $\\Pi$ into triangles by diagonals having no common points other than the vertices of the polygon. We say that a triangulation is a Thaiangulation if all triangles in it have the same area.\n\t\t\t\n\t\t\tProve that any two different Thaiangulations of a convex polygon $\\Pi$ differ by exactly two triangles. (In other words, prove that it is possible to replace one pair of triangles in the first Thaiangulation with a different pair of triangles so as to obtain the second Thaiangulation.)}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{Let $A_1, B_1$ and $C_1$ be points on sides $BC$, $CA$ and $AB$ of an acute triangle $ABC$ respectively, such that $AA_1$, $BB_1$ and $CC_1$ are the internal angle bisectors of triangle $ABC$. Let $I$ be the incentre of triangle $ABC$, and $H$ be the orthocentre of triangle $A_1B_1C_1$. Show that $$AH + BH + CH \\geq AI + BI + CI.$$}\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\prob{}{}{}{There are $2017$ mutually external circles drawn on a blackboard, such that no two are tangent and no three share a common tangent. A tangent segment is a line segment that is a common tangent to two circles, starting at one tangent point and ending at the other one. Luciano is drawing tangent segments on the blackboard, one at a time, so that no tangent segment intersects any other circles or previously drawn tangent segments. 
Luciano keeps drawing tangent segments until no more can be drawn.}
{"text": "\\section{Equational Reasoning}\n\\label{sec:equationalreasoning}\n\nVerification is the process of checking if software does what its specification demands. To verify a program, a specification is required. In the case of functions that implement an instance of the type classes of section \\ref{sec:typeclasses}, the specification is defined in form of the type class laws.\nThis section will compare different verification techniques and describe the method equational reasoning by example using type class laws as specification. In section \\ref{sec:example} equational reasoning will be applied to proof the property of listing \\ref{lst:monoidinstance1} in section \\ref{sec:typeclasses}.\n\nThere are several ways to check the behavior of a program. \nWe will describe the difference with a simple example. Given the following property:\n\n\\begin{equation}\n  \\label{eq:reverse_prop}\n\\text{reverse} (\\text{reverse } (xs)) = xs  \n\\end{equation}\nEquation \\ref{eq:reverse_prop} shows that if we apply \\verb|reverse| twice on the same list \\verb|xs| we get back the original list \\verb|xs|. \\verb|reverse| is the inverse of \\verb|reverse| (other functions, like \\verb|id|, have this property too). Verification techniques allow us to check if equation \\ref{eq:reverse_prop} holds. We describe three techniques. The first two, testing and property-based testing, are very common and the third one is the topic of this article.\n\n\\begin{description}\n\\item[Testing] Run the program with a selected input and check if it behaves as expected. In order to check the behavior, a function evaluates both sides of equation \\ref{eq:reverse_prop} and compares the values. The following listing shows an example test.\n\n\\begin{lstlisting}[caption={Function definition for testing},label={lst:testing}]\ninput = [1,2,3]\ntest_reverse :: Bool\ntest_reverse = reverse (reverse input) == input\n\\end{lstlisting}\n\nThe selected input is \\verb|[1,2,3]|. \\verb|test_reverse| evaluates to a Boolean expression to indicate if the property holds for the given input. It's necessary to run the program to evaluate \\verb|test_reverse|.\nAn advantage of this method is, that the programmer doesn't have to define general properties. The specification is expressed with a concrete input value and a concrete output value. It's easier to think about a concrete input and the corresponding output value than to find general properties that a function must obey.\n\\item[Property-based testing] The input for the test program is generated randomly. The tests are executed by a tool (e.g. QuickCheck).\n\\item[Proof] A Proof can show that a property holds in all circumstances. To prove a property we use the technique equational reasoning. This technique requires knowledge of the function definition.\n\\end{description}\n\nFigure \\ref{fig:property_validation} compares the input coverage of the described methods. Testing checks if the program behaves correctly with one chosen point of the input space. Property-based testing checks the behavior at hundreds of randomly-generated points. Proving a property covers all possible cases of the possible input. It is the most reliable verification.\n\n\\begin{figure}\n  \\centering\n     \\includegraphics[width=0.9\\textwidth]{testing}\n  \\caption{Comparison of testing, property-based-testing and proof}\n  \\label{fig:property_validation}\n\\end{figure}\n\n\\subsection{Reasoning about algebraic properties}\n\nEquational reasoning is a method originally used in algebra. 
Figure \ref{fig:property_validation} compares the input coverage of the described methods. Testing checks if the program behaves correctly at one chosen point of the input space. Property-based testing checks the behavior at hundreds of randomly generated points. Proving a property covers all possible inputs. It is the most reliable form of verification.

\begin{figure}
  \centering
     \includegraphics[width=0.9\textwidth]{testing}
  \caption{Comparison of testing, property-based-testing and proof}
  \label{fig:property_validation}
\end{figure}

\subsection{Reasoning about algebraic properties}

Equational reasoning is a method originally used in algebra. It's the process of proving a given property by substituting equal expressions.
For example, it's possible to show that the following property holds:
\begin{equation}
  \label{eq:sum}
  (x+a)(x+b) = x^2 + (a+b)x+ab
\end{equation}
To show that the equality holds, we have to transform the expression on the left-hand side $(x+a)(x+b)$ into an equal expression with the help of the basic algebraic properties of numbers (\gls{distributive_law}, \gls{commutative_law}) until we get $x^2 + (a+b)x+ab$ \cite{hutton}.

In the first step we use distributivity to expand the term on the left-hand side.
\begin{equation}
   (x+a)(x+b) = x^2 + ax + xb + ab
\end{equation}
In the second step we use commutativity to substitute $xb$ with $bx$.
\begin{equation}
x^2 + ax + xb + ab = x^2 + ax + bx + ab \text{     (use commutativity)}
\end{equation}
In the last step we use distributivity to factor out $x$, and we get $x^2 + (a+b)x+ab$.
\begin{equation}
x^2 + ax + bx + ab = x^2 + (a + b)x + ab \text{     (use distributivity)}
\end{equation}
All we did was substitute expressions according to the algebraic properties.

\subsection{Reasoning about Haskell programs}

A function definition in Haskell means that we can substitute the left-hand side with the right-hand side and vice versa. This is possible because Haskell is a purely functional language. Hence, we can use the same approach to prove that a property of a program written in Haskell holds as we used to reason about mathematical expressions.
For example, it's possible to show that the length of a list with one element is actually 1. This general property can be formulated as a Haskell expression.
\begin{verbatim}
length [x] = 1
\end{verbatim}
The property holds no matter what \verb|x| is. To show that, we use the \gls{function-definition} of \verb|length| as a general description of its behavior. Listing \ref{lst:lengthdefinition} shows the definition of \verb|length| \cite{hutton}.
\begin{lstlisting}[caption={Function definition of {\ttfamily length}},label={lst:lengthdefinition}]
length [] = 0
length (x:xs) = 1 + length xs
\end{lstlisting}

In order to conclude that \verb|length [x] == 1| is always true, we substitute \verb|length [x]| until we get \verb|1|. Listing \ref{lst:lengthproof} shows the step-by-step substitution. It's a stepwise transformation of the expression \verb|length [x]| into \verb|1|.
\begin{lstlisting}[caption={Step by step substitution of {\ttfamily length}},label={lst:lengthproof}]
length [x]         -- [x] is the same as x:[]
= length (x:[])    -- apply definition
= 1 + length []    -- apply definition
= 1 + 0            -- 1 + 0 = 1
= 1
\end{lstlisting}
Function definitions are general descriptions, and we can use them to deduce other general properties by substituting equal expressions.

\subsection{Proof by structural induction}
\label{sec:induction}

If we apply simple substitution to a recursive function, we run into problems.
Consider the \gls{function-definition} of \verb|length| in listing \ref{lst:lengthdefinition}.
If we substitute \verb|length (x:xs)| with the definition \verb|1 + length xs| for a list of unknown length, we end up substituting forever. A way to verify recursive programs is to use proof by structural induction.
Proof by structural induction can be used for lists or algebraic data types with a recursive constructor (e.g.\ a tree type) \cite{Thompson}.
The principle of induction states that it is sufficient to prove a property $p$ for the base case and to show that $p$ is preserved by the inductive case \cite{Doets}. In order to prove $p$, two steps are required:
\begin{description}
\item[Base case] Prove $p(0)$ is true.
\item[Induction step] Prove $p(n+1)$ if $p(n)$ (induction hypothesis) is true.
\end{description}

Proof by induction is similar to writing a recursive function. Recursive functions use a base case (e.g. \verb|[]|, 0).
If we use structural induction, we prove the base case, i.e.\ we show that the property holds for a concrete input value (e.g. \verb|[]|, 0).

In a recursive function definition we define \verb|f (x:xs)| and use \verb|f xs| on the right-hand side. In the proof we show $p(n+1)$ under the assumption that $p(n)$ holds.

We explain proof by structural induction with another example from \cite{Thompson}. We verify that the overall length of two concatenated lists $xs$ and $ys$ is the same as the sum of the length of $xs$ and the length of $ys$. The ++-operator concatenates two lists.
\begin{equation}
  \label{eq:lengthprop}
  \text{length (xs ++ ys)} = \text{length(xs) + length(ys)}
\end{equation}
In order to verify property \ref{eq:lengthprop}, we need the function definitions of \verb|length| and \verb|(++)|.
Listings \ref{lst:lengthdefinition} and \ref{lst:concatdefinition} show the function definitions of \verb|length| and \verb|(++)|.

\begin{lstlisting}[caption={Haskell function definition of the concatenation operator},label={lst:concatdefinition}]
[] ++ xs = xs
(x:xs) ++ ys = x:(xs++ys)
\end{lstlisting}

\begin{description}
\item[Base case]
We have to show that property \ref{eq:lengthprop} holds for the base case. The base case, in this example, is the argument \verb|[]| for \verb|xs| together with an arbitrary list \verb|ys|. It is not necessary to replace \verb|ys| because the definition of \verb|++| uses recursion over \verb|xs|; \verb|ys| will always be the same list.
In order to check if property \ref{eq:lengthprop} holds for the base case, we replace \verb|xs| with \verb|[]|, leading to the following Haskell expression.

\begin{verbatim}
length ([] ++ ys) = length [] + length ys
\end{verbatim}

We will evaluate the expression stepwise, on the left-hand side and the right-hand side separately.
The left-hand side evaluates to

\begin{verbatim}
length ([] ++ ys) -- apply ++
= length ys
\end{verbatim}

The right-hand side evaluates to

\begin{verbatim}
length [] + length ys     -- apply length []
= 0 + length ys
= length ys
\end{verbatim}

When we evaluate each side of equation \ref{eq:lengthprop} with the value \verb|[]| for \verb|xs|, the result is \verb|length ys| on both sides. Hence, property \ref{eq:lengthprop} holds for the base case.

\item[Induction step]
We have to show that if equation \ref{eq:lengthprop} holds for any list \verb|xs|, then it also holds for \verb|x:xs| (\verb|x:xs| is the list \verb|xs| with an element \verb|x| attached to its head).
Therefore we have to show that the two expressions on both sides of the \verb|=| sign of the expression in listing \ref{lst:inductionstep} are equal under the assumption that the property of equation \ref{eq:lengthprop} holds (induction hypothesis).
\begin{lstlisting}[caption={Equality expression for induction step},label={lst:inductionstep}]
length ((x:xs) ++ ys) = length (x:xs) + (length ys)
\end{lstlisting}
Again, we evaluate the left-hand side of the equation in listing \ref{lst:inductionstep} step by step.
\begin{verbatim}
length ((x:xs) ++ ys)        -- apply definition of ++
= length (x:(xs ++ ys))      -- apply definition of length
= 1 + length (xs ++ ys)      -- use induction hypothesis
= 1 + length xs + length ys
\end{verbatim}
If we evaluate the right-hand side of the equation in listing \ref{lst:inductionstep}, we get:
\begin{verbatim}
length (x:xs) + length ys    -- apply definition of length
= 1 + length xs + length ys
\end{verbatim}

The last listing shows that the equality in listing \ref{lst:inductionstep} follows from the induction hypothesis in equation \ref{eq:lengthprop}. This completes the induction step and therefore the proof itself.
\end{description}

The previous example used the \glspl{function-definition} of \verb|length| and \verb|(++)|. In order to apply equational reasoning we have to know the function definitions of the involved functions, or we can rely on already proven properties.
For example, all types of the standard library that are an instance of a type class satisfy the type class laws (see \ref{sec:typeclasses}) \cite{yorgey}. Some libraries exhibit properties in their documentation (e.g. the pipes library \cite{gonzales13}).
{"text": "\\chapter{Equivalence of Guesses}\n\n\\section{Overview}\n\nJust as a human would be easy to identify equivalent guesses, we use that in computer as well.\n\n[Isomorphism of guesses]\n\n * Precise definition of state equivalence should be on the\n * resulting partition. However, if computing that equivalence\n * is too expensive, we could instead compute the equivalence\n * of guesses.\n\n\n\\section{Prerequisites}\n\nA \\emph{constraint} is an associated pair of guess and response. A set of constraints is denoted $\\vC$. When applied to a set of potential secrets, a constraint set selects a subset of the secrets that are consistent with those constraints. We call this subset  \\emph{generated} by the constraint set. In particular, an empty constraint set generates all codewords.\n\nA constraint set is called \\emph{contradictory} if its generated secret set is empty, i.e.\\ if no secret could satisfy all the constraints.\n\nA constraint in a constraint set is called \\emph{redundant} if the secrets generated from this constraint set is the same with or without that constraint. For example, the same constraint appearing twice is clearly redundant. Also, consider the example:\n\\[\n\\begin{array}{c}\n\\cw{1234} : \\fb{0}{0} \\\\\n\\cw{3456} : \\fb{0}{0} \\\\\n\\cw{1256} : \\fb{0}{0} \n\\end{array}\n\\]\nAny one of the constraints is redundant.\n\nA constraint set without redundant constraints is called \\emph{irreducible}. In the following we are only interested in irreducible constraint sets.\n\nTwo constraint sets are \\emph{equal} in the traditional set-equal sense, i.e.\\ if they contain the same constraints, regardless of order. Note that this is \\emph{not} the same as set-equal on the generated secrets. For example, the constraint sets\n\\[\n\\vC_1 = (1234:0A0B, 3456:0A0B)\n\\]\nand\n\\[\n\\vC_2 = (1234:0A0B, 1256:0A0B)\n\\]\nare not equal by definition, although they generate the same secrets.\n\nLet $\\vS$ denote the set of all codewords under a given set of rules. A few properties of constraint sets are shown below.\n\n1. Not all subsets of $\\vS$ can be generated from constraints. For example, in Bulls and cows, after the first guess, the number of remaining secrets will not be more than 1440; this means any proper subset with more than 1440 elements cannot be generated from constraints. \n\nAlthough there are as many as $2^5040$ subsets, only a fraction can be generated by constraints. If we consider equivalence, the number is even smaller and probably tractable.\n\n2. If two constraint sets are equal, then the subsets generated from them are equal; if two subsets are different, and their generating constraints cannot be equal.\n\n3. If two constraint sets are different, their generated secrets may or may not be equal. See the example above. \n%For a non-trivial example, consider the following simulated guessing process by different strategies:\n%\n%\t Secret: 7450\n%\t # simple    minmax    entropy   maxparts\n%\t-------------------------------------------\n%\t 1 0123:0A1B 0123:0A1B 0123:0A1B 0123:0A1B\n%\t 2 1456:2A0B 1456:2A0B 1456:2A0B 1456:2A0B\n%\t 3 1478:1A1B 2435:1A1B 7418:2A0B 2475:1A2B\n%\t 4 1759:1A1B 0276:0A2B 0752:1A2B 0482:1A1B\n%\t 5 1896:0A0B 7450:4A0B 7450:4A0B 7450:4A0B\n%\t 6 2457:2A1B\n%\t 7 7450:4A0B\n%Each column represents a filter that generates the single-element subset\n%<code>{7450}</code>. 
\section{Color equivalence}

As a starting point, let's work with a simple but useful way to detect guess equivalence.

\subsection{Definitions}

\begin{definition}
(Color mask) A \emph{color mask}, denoted $\tau$, is a subset of the available colors. Since the number of colors in a game is small (6 for Mastermind and 10 for Bulls and cows), it is convenient to implement this subset as a bit-mask with present colors set to one. That's why we call it a mask here.
\end{definition}

Given a codeword $g$, define $\tau(g)$ to be the set of colors present in $g$. For example, $\tau(\cw{1442}) = \{1, 2, 4\}$.

Given a codeword set $\mathcal{S}$, define $\tau(\mathcal{S})$ to be the set of colors present in any of the codewords in $\mathcal{S}$. That is, $\tau(\mathcal{S}) = \bigcup_{g \in \mathcal{S}} \tau(g)$. For example, $\tau(\{\cw{1442},\cw{2315} \}) = \{ 1, 2, 3, 4, 5\}$.

A few of the important color masks are as follows.

$\tau_{\text{all}}$ is the set of all colors.

$\tau_{\text{free}}$ is the set of colors that have never been guessed. $\tau_{\text{used}}$ is the set of colors that have been guessed. We have $\tau_{\text{free}} \cup \tau_{\text{used}} = \tau_\text{all}$.

$\tau_{\text{excl}}$ is the set of colors that are excluded from the potential secrets.

\subsection{Algorithm}

\newcommand{\cmall}{\tau_\text{all}}
\newcommand{\cmfree}{\tau_\text{free}}
\newcommand{\cmexcl}{\tau_\text{excl}}

Denote by $\mathcal{F}$ a color equivalence filter. It is characterized by a pair of color masks: the free (i.e.\ never guessed) colors and the excluded colors. That is,
\[
\mathcal{F} = (\tau_\text{free}, \tau_\text{excl}) .
\]

Three operations are defined:

\emph{Initialize}. At the beginning of the game, there are no constraints. A color equivalence filter is initialized by
\[
\cmfree = \cmall, \quad \cmexcl = \emptyset .
\]

\emph{Filter}. For each candidate $g \in \mathcal{G}$, check whether $g$ is the lexical minimum of its equivalence class. We check this by replacing all excluded colors with the same symbol (e.g.\ *) and renaming the unguessed colors with a suitable order-preserving mapping; a Python sketch follows at the end of this section.

\emph{Restrict}. Given a constraint $(g,r)$ and the set of remaining possibilities $\mathcal{S}$, we construct a new filter $\mathcal{F}' = (\cmfree', \cmexcl')$, where
\[
\cmfree' = \cmfree \setminus \tau(g), \quad \cmexcl' = \cmall \setminus \tau(\mathcal{S}) .
\]

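The following Python sketch (our own; the function names are hypothetical) implements the three operations. Instead of testing lexical minimality directly, it computes a canonical key per guess; two guesses are color-equivalent exactly when their keys agree, so scanning the candidates in lexicographic order and keeping the first guess per key retains the lexical minimum of each class.

\begin{verbatim}
ALL_COLORS = set('0123456789')

def color_key(guess, free, excluded):
    """Canonical key: excluded colors all become '*', never-guessed (free)
    colors are renamed in order of first appearance, other colors stay."""
    rename, out = {}, []
    fresh = iter('abcdefghij')          # placeholder names for free colors
    for c in guess:
        if c in excluded:
            out.append('*')
        elif c in free:
            out.append(rename.setdefault(c, next(fresh)))
        else:
            out.append(c)
    return ''.join(out)

def initialize():
    return ALL_COLORS, set()            # (free, excluded)

def filter_candidates(candidates, free, excluded):
    """Keep one representative per color-equivalence class; `candidates`
    is assumed to be in lexicographic order."""
    seen, kept = set(), []
    for g in candidates:
        k = color_key(g, free, excluded)
        if k not in seen:
            seen.add(k)
            kept.append(g)
    return kept

def restrict(free, excluded, guess, remaining):
    """Apply a new constraint: the guessed colors are no longer free, and
    colors absent from every remaining secret become excluded."""
    present = set().union(*(set(s) for s in remaining)) if remaining else set()
    return free - set(guess), ALL_COLORS - present
\end{verbatim}
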
\section{Constraint equivalence}

Here the definition of \emph{constraint equivalence} is rather restrictive. Ideally, constraint equivalence would coincide with partition equivalence, but this is not the case here.

Introduced by \cite{neuwirth81,koyama93}.

[Constraint/filter equivalence]

It is easy to see that solving a game only depends on which colors have appeared before and on the relative peg positions of the guesses. The particular colors or pegs guessed are not important. From this knowledge, we define the following.

\subsection{Definitions}

\begin{definition}
(Peg permutation) A \emph{peg permutation}, denoted $\pi_p$, permutes the pegs in a codeword. For example, the peg permutation
\[
\begin{pmatrix}
1 & 2 & 3 & 4 \\
3 & 1 & 2 & 4
\end{pmatrix}
\]
moves peg 1 to the 3rd place, peg 2 to the 1st place, peg 3 to the 2nd place, and leaves peg 4 in its original place. The reordered sequence of the pegs is given by $\pi_p^{-1}$, which in this example is equal to
\[
\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 1 & 4
\end{pmatrix} .
\]
\end{definition}

\begin{definition}
(Color permutation) A \emph{color permutation}, denoted $\pi_c$, permutes the colors in a codeword. For example, the color permutation
\[
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
5 & 2 & 1 & 6 & 3 & 4
\end{pmatrix}
\]
replaces color 1 by color 5, leaves color 2 as is, replaces color 3 by color 1, etc.
\end{definition}

\begin{definition}
(Partial color permutation) A \emph{partial color permutation} permutes a subset of the colors. The colors not involved in the partial color permutation are called \emph{free colors} and denoted by $\tau$.
For example, the partial color permutation
\[
\begin{pmatrix}
\underline{1} & \underline{2} & 3 & \underline{4} & 5 & \underline{6} \\
\underline{1} & \underline{2} & 5 & \underline{4} & 3 & \underline{6}
\end{pmatrix}
\]
permutes (and \emph{restricts}) colors \cw{3} and \cw{5}, and leaves the free colors (marked with an underline) unchanged.
\end{definition}

A class of (fully-restricted) color permutations can be generated from a partial color permutation $\pi_c$ and its associated set of free colors $\tau$ by composing the partial permutation with every possible permutation of the free colors. We denote such a class of color permutations by $\Pi_c = \pi_c \circ S_\tau$.

\begin{definition}
(Codeword permutation) A \emph{codeword permutation}, denoted $\pi$, is the combination of a peg permutation $\pi_p$ and a (partial) color permutation $\pi_c$. We write it as $\pi = \pi_p \circ \pi_c$. Note that the order in which the peg permutation and the color permutation are applied does not matter.
\end{definition}

Given a codeword $g$, we denote the permuted codeword under $\pi$ as $\pi(g)$ or $g^\pi$.
To compute the image $h = (h_i)$ of a given codeword $g = (g_i)$ under a given permutation $\pi = \pi_p \circ \pi_c$, use the formula
\[
h_i = \pi_c\left(g_{\pi_p^{-1}(i)}\right) .
\]

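As a quick check of this formula, here is a small Python sketch (our own illustration); it uses 0-based peg positions while colors keep their 1-based names, and it anticipates the worked example below, which maps \cw{1314} to \cw{5336}.

\begin{verbatim}
def apply_perm(g, peg_perm, color_map):
    """Image h of g under pi = pi_p o pi_c: h[i] = color_map[g[inv(i)]],
    where inv is the inverse of the peg permutation (0-based pegs)."""
    n = len(g)
    inv = [0] * n
    for j in range(n):
        inv[peg_perm[j]] = j          # invert the peg permutation
    return tuple(color_map[g[inv[i]]] for i in range(n))

# Pegs 1 and 2 swapped; colors 1->3, 2->2, 3->5, 4->6, 5->1, 6->4.
peg_perm = (1, 0, 2, 3)
color_map = {1: 3, 2: 2, 3: 5, 4: 6, 5: 1, 6: 4}
assert apply_perm((1, 3, 1, 4), peg_perm, color_map) == (5, 3, 3, 6)
\end{verbatim}
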
%[Properties.] Some usual properties of permutations apply to codeword permutations as well.
%1) A codeword permutation is invertible, i.e.\ for any $\pi = (\pi_p, \pi_c)$, its inverse exists and is equal to $\pi^{-1} = (\pi_p^{-1}, \pi_c^{-1})$.

\begin{definition}
(Codeword equivalence) Two codewords $g$ and $h$ are \emph{equivalent} if and only if there exists a codeword permutation $\pi$ such that $g^\pi = h$.
\end{definition}

For example, \cw{1314} and \cw{5336} are equivalent because we can use the following (non-unique) permutation of pegs and colors to map \cw{1314} to \cw{5336}:
\[
\pi_p =
\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & 1 & 3 & 4
\end{pmatrix}
, \quad \pi_c =
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
3 & 2 & 5 & 6 & 1 & 4
\end{pmatrix} .
\]

The equivalence relation defined above partitions a set of codewords into equivalence classes, where each equivalence class can be represented by an arbitrary element in that class, called a \emph{representative}.

\begin{definition}
(Constraint equivalence) [cite a few references] Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two ordered sets of constraints of the same size $k$. Let the guesses in $\mathcal{C}_1$ be $(g_1,g_2,\cdots,g_k)$ and the guesses in $\mathcal{C}_2$ be $(h_1,h_2,\cdots,h_k)$. Then the constraint sets $\mathcal{C}_1$ and $\mathcal{C}_2$ are \emph{equivalent} if there exists a codeword permutation $\pi$ such that $g_i^\pi = h_i$ for all $1 \le i \le k$.
\end{definition}

The intuition behind equivalent constraints is rather simple: if we rearrange the peg positions and relabel the colors, the two sets of constraints become exactly the same. Hence, for the purpose of devising a strategy, we only need to work with one of them, and the other follows automatically.

Note, however, that there is a strong restriction in the above definition: the responses in the constraints are ignored. This has two implications. On the one hand, constraints such as
\[
\mathcal{C}_1 = \{ (1234:0A0B), (3456:2A0B) \}
\]
and
\[
\mathcal{C}_2 = \{ (1234:1A0B), (3456:0A2B) \}
\]
are considered equivalent, although they lead to drastically different potential secrets. On the other hand, constraints such as
\[
\mathcal{C}_1 = \{ (1234:0A0B), (1234:0A0B) \}
\]
and
\[
\mathcal{C}_2 = \{ (1234:0A0B), (4321:0A0B) \}
\]
are obviously not equivalent by definition, but from the first feedback \fb{0}{0} we could already exclude the colors \cw{1} -- \cw{4} from the potential secret, so any permutation of \cw{1234} should ideally be considered equivalent.

Despite these less-than-ideal properties, this equivalence relation is still commonly used \cite{neuwirth81,koyama93,francis10} because of its clarity, relative simplicity, and effectiveness for the first few guesses.
A more powerful (but also more complex) equivalence definition will be introduced in the next section.

\subsection{Illustration}

To actually apply a constraint equivalence filter in practice, we implement an \emph{incremental filter}, as illustrated below.\footnote{The incremental filter described in this section is effectively the same technique used by \cite{francis10} to detect equivalence in the Bulls and cows game.}
An alternative, more generic method, which relies on \emph{graph automorphism}, will be introduced in the next section.

Given a set $\mathcal{C}$ of constraints and a set $\mathcal{G}$ of candidate codewords to filter for the next guess, we want to find all canonical codewords $g \in \mathcal{G}$ to be used as the next guess. There are a couple of ways to do this,\footnote{For example, one could \emph{generate} a list of canonical guesses from scratch, or \emph{filter} a supplied list of codewords for canonical guesses. Here we employ the second approach because of its simplicity, flexibility, and good performance in an implementation.}
and we employ a simple algorithm: traverse the set and keep the codewords that are lexically minimal in their equivalence classes. That is, for each $g \in \mathcal{G}$, we find the lexically minimal codeword $g_0$ that is equivalent to $g$, and keep $g$ if and only if $g = g_0$.
%We call $g_0$ the \emph{canonical representative} of $g$.

Note that in order for the algorithm to work correctly, the lexically minimal codeword of each equivalence class must be present in the candidate set $\mathcal{G}$.\footnote{
This requirement is satisfied in most practical applications. The algorithm could be slightly modified to remove this requirement: instead of removing a non-canonical element, it could employ a disjoint-set union algorithm to label the equivalence class of each element. After all candidates are processed, it can scan the labels to find the canonical elements.
}

To find the lexically minimal codeword equivalent to $g$, we could permute $g$ in all possible ways and check whether any of the permuted images is smaller than $g$. For a Mastermind game, there are 4 pegs and 6 colors. The total number of permutations is therefore $4! \times 6! = 24 \times 720 = 17280$. This number is much larger in the Bulls and cows game with 4 pegs and 10 colors, where it equals $4! \times 10! = 87{,}091{,}200$.

Therefore we need to iterate over the permutations in a more efficient way. We first classify the codeword permutations by their peg permutation, i.e.\ those with the same peg permutation are put into the same class. Then, as will turn out shortly, the permutations in each class can be written as
\[
\pi_p \circ \pi_c \circ S_\tau ,
\]
where $\pi_p$ is the peg permutation that represents the class, $\pi_c$ is a partial color permutation restricted by the supplied constraints under this peg permutation, and $S_\tau$ is the group of all permutations of the set of free colors.

%We first find the canonical representative given each possible peg permutation of $g$. We then take the minimum of these minima as the global minimum. [TBC]

Let's illustrate the algorithm with 4 pegs and 6 colors. We first list all $4! = 24$ permutations of the pegs, $\pi_p^1$ to $\pi_p^{24}$, where $\pi_p^1$ is the identity permutation.
Next, for each peg permutation $\pi_p^i$, we associate with it a class of \emph{eligible} color permutations, $\Pi_c^i = \pi_c^i \circ S_\tau$, where $\pi_c^i$ is the partial color permutation restricted by the supplied constraints under this peg permutation, and $\tau$ is the set of free colors.

At the beginning of the game, there are no constraints; all colors are free, and every partial color permutation is unrestricted. We can conveniently write $\pi_c \circ S_\tau$ as
\[
\begin{pmatrix}
\underline{1} & \underline{2} & \underline{3} & \underline{4} & \underline{5} & \underline{6} \\
\underline{1} & \underline{2} & \underline{3} & \underline{4} & \underline{5} & \underline{6}
\end{pmatrix} ,
\]
where the underlined colors are controlled by $S_\tau$ and hence can be mapped freely.

To filter a set of codewords $\mathcal{G}$ for canonical guesses, we check each codeword $g \in \mathcal{G}$ in order.\footnote{The particular order of traversal is not important, as long as the lexical minimum belongs to the set.}
Take for example $g = \cw{1223}$. We iterate through each peg permutation $\pi_p^i$ to permute $g$. Apply, for example, the following peg permutation,
\[
\pi_p =
\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & 1 & 3 & 4
\end{pmatrix} ,
\]
followed by the (eligible) identity color permutation.
The permuted codeword is $g' = \cw{2123}$. Since $g' \succ g$, we cannot yet conclude that $g$ is not lexically minimal in its equivalence class. We proceed as follows.

We are free to map the free colors in $g'$ to any free color. Given that our objective is to find the lexical minimum, we start from the leftmost peg of $g'$, which contains the color \cw{2}. This is a free color, and it should be mapped to the smallest available free color, \cw{1}, to achieve the lexical minimum. This also means we have to map the color \cw{1} to something else, because we cannot have two colors map to the same value. For convenience we map it, provisionally, to \cw{2}. Thus we apply the following color permutation to $g' = \cw{2123}$,
\[
\pi_c =
\begin{pmatrix}
\underline{1} & 2 & \underline{3} & \underline{4} & \underline{5} & \underline{6} \\
\underline{2} & 1 & \underline{3} & \underline{4} & \underline{5} & \underline{6}
\end{pmatrix} ,
\]
which yields $\pi_c(g') = \cw{1213}$.

Note that \cw{1213} is already lexically smaller than \cw{1223}, so we can conclude that \cw{1223} is not the lexical minimum of its equivalence class and can thus be removed from the candidates. However, for illustration purposes we show a few more steps.

Now proceed to the color on the second peg of $g'$, \cw{1}. Its image was only chosen provisionally above, so we are still free to pick it.
And for the same reason of achieving the minimal lexicographical value, we map it to the smallest available free color, which in this case is \cw{2}. The resulting color permutation is
\[
\pi_c =
\begin{pmatrix}
1 & 2 & \underline{3} & \underline{4} & \underline{5} & \underline{6} \\
2 & 1 & \underline{3} & \underline{4} & \underline{5} & \underline{6}
\end{pmatrix} ,
\]
and the permuted codeword is $\pi_c(g') = \cw{1213}$. Repeat this until all pegs in $g'$ are processed, and we find the lexical minimum to be \cw{1213}, which is smaller than $g$. We then remove $g$ from the candidate set.

Repeat the above steps for all peg permutations. If $g$ is lexically minimal in all these cases, then it is the lexical minimum of its equivalence class, and we can add it to the filtered set.

The above is an example where all colors are free. Now suppose we already have some constraints, so some colors are fixed. Without loss of generality, suppose that the constraint consists of just one guess $g_1 = \cw{3445}$. To find all canonical candidates for the second guess, we first need to \emph{restrict} the partial color permutations associated with each peg permutation.

Take for example
\[
\pi_p =
\begin{pmatrix}
1 & 2 & 3 & 4 \\
1 & 3 & 2 & 4
\end{pmatrix} , \quad
\Pi_c =
\begin{pmatrix}
\underline{1} & \underline{2} & \underline{3} & \underline{4} & \underline{5} & \underline{6} \\
\underline{1} & \underline{2} & \underline{3} & \underline{4} & \underline{5} & \underline{6}
\end{pmatrix} .
\]
The permuted first guess is $\pi(g_1) = \cw{3445}$. In order for the permuted constraint to be equivalent, the permuted guess must be equal to $g_1$. This means we must map \cw{3445} to \cw{3445} in the color permutation. Thus we must restrict the associated color permutation to
\[
\Pi_c' =
\begin{pmatrix}
\underline{1} & \underline{2} & 3 & 4 & 5 & \underline{6} \\
\underline{1} & \underline{2} & 3 & 4 & 5 & \underline{6}
\end{pmatrix} .
\]

After restricting the partial permutation, we can apply the previous steps again.
[The following example needs to be changed to continue from the previous example.] Suppose we come again to \cw{1123}. If we apply peg permutation $\pi_p^3$, the permuted representative becomes \cw{1213}. Now, to find out which color permutations are eligible, note that the color 3 is already fixed in the partial permutation; only colors 1 and 2 are free to be mapped. We could therefore map them freely among the unrestricted colors, i.e.\ $\{1, 2, 6\}$, and mark all resulting codewords as equivalent to \cw{1123}.

Note that given a set of constraints, not all peg permutations can yield an eligible color permutation. For example, consider the peg permutation
\[
\pi_p =
\begin{pmatrix}
1 & 2 & 3 & 4 \\
1 & 2 & 4 & 3
\end{pmatrix} .
\]
When applied to $g_1 = \cw{3445}$, we get $\pi_p(g_1) = \cw{3454}$. However, there is no way whatsoever to map \cw{3445} to \cw{3454}, because \cw{4} would have to be mapped to both \cw{4} and \cw{5}, which is impossible.
In this case, we conclude that this peg permutation is \emph{ineligible} and remove it from further consideration.

\subsection{Algorithm}

A constraint equivalence filter is characterized by the set of eligible codeword permutations that map the existing constraints to themselves. These permutations can be classified into $m$ classes according to their peg permutation. The permutations in the $i$th class can be written as the product of the peg permutation $\pi_p^i$, the associated partial color permutation $\pi_c^i$, and the symmetric group of the free colors $S_\tau$ (which is the same across all $i$).

Thus, we can write a constraint equivalence filter as
\[
\mathcal{F}_k = \bigcup_{i=1}^m \pi_p^i \circ \pi_c^i \circ S_\tau ,
\]
where $m$ is the number of eligible peg permutations.

Three operations are defined for a constraint equivalence filter:

\emph{Initialize}. At the beginning of the game, there are no constraints. Hence the initial filter, $\mathcal{F}_0$, is set to
\[
m = p! \text{ ($p$ the number of pegs)}, \quad \pi_c^i = (), \quad \tau = \{1, 2, \ldots, c \}.
\]

\emph{Filter}. To filter a set $\mathcal{G}$ of candidate codewords for canonical guesses, we check each codeword $g \in \mathcal{G}$ in order.\footnote{The particular order of traversal is not important, as long as the lexical minimum belongs to the set.}
Given $g$, we check each eligible peg permutation in $\mathcal{F}$. For peg permutation $i$, we first permute $g$ to get $g' = \pi_p^i \circ \pi_c^i (g)$. If $g' \prec g$, then $g$ is not minimal. Otherwise, let $g' = (c_1, c_2, c_3, c_4)$. For each $j = 1$ to $4$, if $c_j \in \tau$, we map $c_j$ to the smallest still-available free color, and so on; a sketch follows at the end of this subsection.

\emph{Restrict}. Given a filter $\mathcal{F}_k$ that satisfies all existing constraints $\mathcal{C}_k$, when a constraint $(g, r)$ is added, we construct a new filter $\mathcal{F}_{k+1} \subseteq \mathcal{F}_k$ by selecting the permutations in $\mathcal{F}_k$ that map $g$ to itself. This effectively restricts the partial color permutation associated with each peg permutation.

[TBC] It can be seen from the above requirement that any color that is mapped from in a partial color permutation is also mapped to, and any color that is not mapped in one direction is also not mapped in the other direction. In fact, the unmapped colors are those \emph{unused} in any of the prior constraints. This means this equivalence filter fully considers the ``unguessed color equivalence'' described in the previous section.
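The following Python sketch (our own; the names are hypothetical, and it covers only the greedy minimization step of \emph{Filter}, not the eligibility bookkeeping of \emph{Restrict}) makes the check concrete. A permutation class is a pair of a peg permutation and a partial color map for the used colors; each free color is greedily assigned, on first appearance, the smallest free color still available.

\begin{verbatim}
def min_image(gp, fixed_map, free_colors):
    """Lexicographically smallest image of gp = (peg-permuted g) over all
    completions of the partial color map: each free color is assigned,
    on first appearance, the smallest free color not yet used."""
    cmap = dict(fixed_map)
    available = sorted(free_colors)
    out = []
    for c in gp:
        if c not in cmap:
            cmap[c] = available.pop(0)
        out.append(cmap[c])
    return tuple(out)

def is_canonical(g, classes, free_colors):
    """Keep g iff no eligible permutation maps it to a smaller codeword.
    `classes` is a list of (peg_perm, partial_color_map) pairs; peg_perm[j]
    is the (0-based) new place of peg j."""
    n = len(g)
    for peg_perm, fixed_map in classes:
        gp = tuple(g[peg_perm.index(i)] for i in range(n))  # apply peg perm
        if min_image(gp, fixed_map, free_colors) < g:
            return False
    return True
\end{verbatim}

On the opening example (no constraints, so every color is free), checking $g = \cw{1223}$ against the peg permutation that swaps the first two pegs reproduces the image \cw{1213} found above, so \cw{1223} is correctly rejected.
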
As we proceed in the game, we are supplied with more constraints. The partial color permutation associated with each peg permutation gets more restrictive with each added constraint, and more peg permutations become ineligible and get removed. Finally, we are left with only the identity codeword permutation, where every codeword is the representative of its own equivalence class.
%Hence we call it ``incremental equivalence detection''.

\subsection{Results}

Applying the constraint equivalence filter described in this section, we find 5 canonical guesses for the first round of a standard Mastermind game (p4c6r), listed in Table \ref{tab:canonical-mastermind}. Listed alongside is the number of canonical guesses in the second round given each initial guess.
\begin{table}[h]
\begin{center}
\begin{tabular}{c c}
\hline
\hline
$g_1$ & $\#\{g_2\}$ \\
\hline
\cw{0000} & 12 \\
\cw{0001} & 53 \\
\cw{0011} & 39 \\
\cw{0012} & 130 \\
\cw{0123} & 57 \\
\hline
\hline
\end{tabular}
\caption{Canonical 1st and 2nd guesses in Mastermind}
\label{tab:canonical-mastermind}
\end{center}
\end{table}

Applying the same filter to the Bulls and cows game (p4c10n) yields only one canonical initial guess, \cw{0123}, and 20 canonical guesses for the second round. They are listed in Table \ref{tab:canonical-bulls} along with the number of canonical guesses for the third round.\footnote{The table on page 7 of \cite{francis10} contains the same information; however, their number for \cw{0456} is 373 and their number for \cw{4567} is 218.}
\begin{table}[h]
\begin{center}
\begin{tabular}{c c | c c | c c | c c}
\hline
\hline
$g_2$ & $\#\{g_3\}$ & $g_2$ & $\#\{g_3\}$ & $g_2$ & $\#\{g_3\}$ & $g_2$ & $\#\{g_3\}$ \\
\hline
\cw{0123} & 20  & \cw{0214} & 270  & \cw{1032} & 39  & \cw{1234} & 501  \\
\cw{0124} & 107 & \cw{0231} & 75   & \cw{1034} & 270 & \cw{1245} & 1045 \\
\cw{0132} & 67  & \cw{0234} & 501  & \cw{1045} & 295 & \cw{1435} & 541  \\
\cw{0134} & 270 & \cw{0245} & 1045 & \cw{1204} & 175 & \cw{1456} & 1012 \\
\cw{0145} & 295 & \cw{0456} & 363  & \cw{1230} & 59  & \cw{4567} & 180  \\
\hline
\hline
\end{tabular}
\caption{Canonical 2nd and 3rd guesses in Bulls and cows}
\label{tab:canonical-bulls}
\end{center}
\end{table}

[consistent with all prior guesses]

\section{Partition equivalence}

Partition (or state) equivalence is the finest-grained definition of equivalence, but it is complex to implement.

\begin{definition}
(Partition) Let $(b_1, b_2, \cdots, b_r)$ be the ordered set of distinct feedbacks in a game. Then the \emph{partition} of a codeword set $\mathcal{S}$ by a codeword $g$ is a partition of $\mathcal{S}$ into ordered cells $(V_1, V_2, \cdots, V_r)$ such that comparing the codewords in $V_i$ against $g$ yields $b_i$.
\end{definition}

\begin{definition}
(Partition equivalence) Let $P_1 = (V_1, V_2, \cdots, V_r)$ and $P_2 = (V'_1, V'_2, \cdots, V'_r)$ be two partitions of a codeword set $\mathcal{S}$. $P_1$ and $P_2$ are \emph{equivalent} if there exists a codeword permutation $\pi$ such that $V_i^\pi = V'_i$ for all $1 \le i \le r$. That is, $\pi$ maps the cells in $P_1$ to the same cells in $P_2$. [color preserving permutation]
\end{definition}
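The cells these definitions refer to are cheap to compute; a minimal Python sketch (our own, reusing the \texttt{feedback} helper from the earlier sketch) is:

\begin{verbatim}
from collections import defaultdict

def partition(secrets, guess):
    """Group the remaining secrets into cells keyed by their feedback
    against `guess`; the cells form the partition defined above."""
    cells = defaultdict(list)
    for s in secrets:
        cells[feedback(s, guess)].append(s)
    return cells
\end{verbatim}

Testing two guesses for partition equivalence then amounts to finding a single codeword permutation that maps every cell of one partition onto the corresponding cell of the other.
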
\begin{definition}
(Guess equivalence) Given a set of potential secrets $\mathcal{S}$, two guesses $g_1$ and $g_2$ are \emph{equivalent} if the partitions given by $g_1$ and $g_2$ on $\mathcal{S}$ are equivalent.
\end{definition}

[Can we show that partition equivalence is the \emph{finest} equivalence relation that preserves the \emph{structure} of a strategy tree? Of course we need to define the motivation and what a ``structure'' is first.]

\section{Spatial Equivalence}

The codewords can be thought of as points in a space. The comparison feedback between two codewords can be thought of as the distance between the points. For a formal treatment of distance, see \cite{stuckman06}.

Can we define guess equivalence based on the guess's relative position to all the remaining secrets, or to the whole secret space?

For example, for Bulls and cows, at the beginning of the game all guesses are equivalent as the first guess. This is because the entire codeword set is completely symmetric.

However, for the Mastermind game, there are a few distinct initial guesses. It seems plausible that these guesses occupy different relative spatial positions within the codeword space.

Consider again the case when there is one secret left. Obviously [?] two guesses are equivalent if they yield the same feedback against this potential secret (if no repetition of digits is allowed). How about Mastermind, which allows repetition?

We also need to consider the case when two secrets are left, etc.

[And is such equivalence correct? Is it equivalent to partition equivalence? Is it weaker than partition equivalence? How would it be implemented?]

\section{Chaining Multiple Filters}

[Note that constraint equivalence is strictly weaker (stronger?) than partition equivalence. For example, consider the first constraint 1234:0A0B. Then, for the next guess, 1156 and 1256 are obviously the same (they both contain excluded colors 1, 2). However, 1156 and 1256 are clearly not equivalent under constraint equivalence. This is because constraint equivalence doesn't use the information in the feedback.

One way to deal with this is to use both. If two guesses are equivalent under either measure, then they are equivalent. This approach is discussed in this section.

Alternatively, we could combine the two. When there are many possibilities, we compute constraint equivalence. When there are few possibilities, we compute partition equivalence. This approach is discussed in a later section.]

The equivalence filters described above vary in complexity and performance. More complex filters tend to produce fewer canonical guesses, but at the cost of higher computational overhead. If the computational overhead is too high, the benefit of filtering could be offset.

It is therefore useful in practice to chain multiple equivalence filters together by feeding the output of one filter as input into the next filter.
This has two benefits: i) it potentially reduces the number of canonical guesses by testing different equivalence relations, and ii) it speeds up the computation of the latter filter by supplying potentially fewer candidates as input.\footnote{This relies on the assumption that the computational complexity of a filter grows with the number of candidates, which is satisfied for the filters described in the previous sections.}
Below we discuss some issues in implementing such a filter chain.

\begin{definition}
(Composite equivalence) Let $R_1$ and $R_2$ be two equivalence relations defined on the codeword set $\vS_0$. Then the \emph{composite equivalence relation} of $R_1$ and $R_2$, denoted $R = R_1 \vee R_2$, is the smallest equivalence relation such that $g_1 R g_2$ whenever $g_1 R_1 g_2$ or $g_1 R_2 g_2$, i.e.\ the transitive closure of the union of $R_1$ and $R_2$. The $\vee$ operator is commonly known as the \emph{join} of the partitions induced by the two equivalence relations.\footnote{See \url{http://en.wikipedia.org/wiki/Lattice\_(order)}.}
\end{definition}

\begin{definition}
(Equivalence refinement) For two equivalence relations $R_1$ and $R_2$, $R_1$ is called \emph{finer} than $R_2$ (and $R_2$ \emph{coarser} than $R_1$) if any pair of elements that are equivalent under $R_1$ are also equivalent under $R_2$. It is easy to see that if $R_1$ is finer than $R_2$, then $R_1 \vee R_2 = R_2$.
\end{definition}

\begin{definition}
(Compatible relations) Let $R_1$ and $R_2$ be two equivalence relations defined on the set $\vS$. Then $R_1$ and $R_2$ are called \emph{compatible} if for any $g \in \vS$, $[g]_{R_1} \subseteq [g]_{R_2}$ or $[g]_{R_2} \subseteq [g]_{R_1}$. Intuitively, this means $R_1$ is a refinement of $R_2$ on part of the set, and $R_2$ is a refinement of $R_1$ on the rest of the set. In particular, any equivalence relation is compatible with its refinements.
\end{definition}

\begin{definition}
(Discrete equivalence) A \emph{discrete} equivalence relation is an equivalence relation where every element is equivalent only to itself. A discrete equivalence relation is finer than any other equivalence relation defined on the same set.
\end{definition}

\begin{definition}
(Unit equivalence) A \emph{unit} equivalence relation is an equivalence relation where all elements are equivalent. A unit equivalence relation is coarser than any other equivalence relation defined on the same set.
\end{definition}

\begin{definition}
(Trivial equivalence) Discrete and unit equivalence relations are collectively called \emph{trivial} equivalence relations.
\end{definition}

\begin{definition}
(Simple filter) Let $R$ be an equivalence relation defined on a set $\vS$. A \emph{simple filter} of $\vS$ with respect to $R$ is a function that takes as input a subset $\vG \subseteq \vS$ and produces as output a set containing one representative for each class of equivalent elements in $\vG$.\footnote{Such an output is known as a set of \emph{class representatives} of $\vG$; see \url{http://mathworld.wolfram.com/ClassRepresentative.html}. }
\end{definition}

One way to implement a composite equivalence filter is using \emph{disjoint sets}.\footnote{See \url{http://en.wikipedia.org/wiki/Disjoint-set\_data\_structure}.} However, this approach does not reduce the computational complexity of the second filter, and is not useful when one equivalence is finer than the other.
In addition, the data structure is more complex to implement.

Therefore, we prefer to implement a composite filter by \emph{chaining} the individual filters. That is, given a candidate set $\vG$, we first apply the first filter to get $\vG_1 = \vF_{R_1}(\vG)$, and then apply the second filter to get $\vG_2 = \vF_{R_2}(\vG_1)$. If we can ensure $\vG_2 = \vF_{R_1 \vee R_2}(\vG)$, then this method is a simple and efficient way to compute the composite.

However, arbitrary equivalence filters for arbitrary equivalence relations \emph{cannot} be chained in the above manner to produce the correct result. Let's look at a counter-example. Take a set of three elements, $X = \{1, 2, 3\}$, and take the following equivalence relations (written as partitions):
\begin{align}
R_1 &= \{ \{1, 2\}, \{3\}  \} , \notag \\
R_2 &= \{ \{1\}, \{2,3\}  \} . \notag
\end{align}
It follows that the composite equivalence relation is $R_1 \vee R_2 = \{ \{ 1, 2, 3 \} \}$.

Now suppose we use a ``natural'' filter $\vF$ that keeps the elements in the input sequence that are not equivalent to any of the prior elements. Then we have $\vF_{R_1}(X) = \{ 1, 3 \}$. If we in turn feed this output to the next filter, we get $\vF_{R_2}(\{1,3\}) = \{1,3 \}$. However, this result is not the desired one, because $\vF_{R_1 \vee R_2}(X) = \{1\}$.

This counter-example can be generalized to show that for arbitrary choices of equivalence relations and simple filters, we are not guaranteed to get the desired result by simply chaining individual filters. This is formalized by the following theorem.

\begin{theorem}
Let $R_1$ be a non-trivial equivalence relation defined on a set $\vS$, and let $\vF_{R_1}$ be a simple filter. Then there exist an equivalence relation $R_2$ and a set $\vG \subseteq \vS$ such that for any simple filters $\vF_{R_1 \vee R_2}$ and $\vF_{R_2}$, $\vF_{R_1 \vee R_2}(\vG) \ne \vF_{R_2}(\vF_{R_1}(\vG))$.
\end{theorem}

\begin{proof}
Since $R_1$ is non-trivial, there must exist a set of three distinct elements $\vG = \{ g_1, g_2, g_3 \} \subseteq \vS$ such that $R_1$ partitions $\vG$ as
\[
\vG / R_1 = \{ \{ g_1, g_2 \}, \{ g_3 \} \} .
\]
It follows that $\vF_{R_1} (\vG) = \{ g, g_3 \}$, where $g$ is one of $g_1$ and $g_2$.
Now define the equivalence relation $R_2$ on $\vS$ as
\[
\vS / R_2 = \{ \{g\}, \vS \setminus \{g\} \} .
\]
It is easy to see that $R_1 \vee R_2$ is the unit equivalence $\{ \vS \}$, so $\vF_{R_1 \vee R_2}(\vG)$ contains only one element. However, $\vF_{R_2}(\vF_{R_1}(\vG)) = \vF_{R_2}(\{ g, g_3 \}) = \{ g, g_3 \}$.
\end{proof}

The above theorem implies that for a non-trivial equivalence relation $R_1$, it is impossible to implement a ``universal'' simple filter that can be chained with an arbitrary equivalence relation $R_2$. However, for specific choices of $R_2$, in particular those compatible with $R_1$, this actually is possible. This is shown by the following theorem.

\begin{theorem}
Let $R_1$ and $R_2$ be two compatible equivalence relations defined on a set $\vS$, and let $\vF_{R_1}$, $\vF_{R_2}$, and $\vF_{R_1 \vee R_2}$ be simple filters. Then $\vF_{R_1 \vee R_2}(\vG) = \vF_{R_2}(\vF_{R_1}(\vG))$ for all $\vG \subseteq \vS$.
\end{theorem}

This is quite obvious, so we are not going to elaborate the proof here.

\begin{theorem}
Let $R_1$ and $R_2$ be two non-compatible equivalence relations defined on a set $\vS$, and let $\vF_R$ denote a simple filter for $R$.
\\begin{theorem}\nLet $R_1$ and $R_2$ be two non-compatible equivalence relations defined on a set $\\vS$, and let $\\vF_R$ denote a simple filter for the equivalence relation $R$. Then for any $\\vF_{R_2}$, there exist $\\vF_{R_1}$ and $\\vG \\subseteq \\vS$ such that $\\vF_{R_2}(\\vF_{R_1}(\\vG)) \\ne \\vF_{R_1 \\vee R_2}(\\vG)$ for every simple filter $\\vF_{R_1 \\vee R_2}$.\n\\end{theorem}\n\n\\begin{proof}\nSince $R_1$ and $R_2$ are not compatible, there must exist a set of three distinct elements $\\vG = \\{ g_1, g_2, g_3 \\} \\subseteq \\vS$ such that $R_1$ and $R_2$ partition $\\vG$ as\n\\[\n\\begin{array}{ r l }\nR_1: & \\{ g_1, g_2 \\}, \\{ g_3 \\} , \\\\\nR_2: & \\{ g_1 \\}, \\{ g_2, g_3 \\} .\n\\end{array}\n\\]\nIt is easy to see that $g_1, g_2, g_3$ are all equivalent under $R_1 \\vee R_2$, so $\\vG / (R_1 \\vee R_2)$ contains only one class and $\\vF_{R_1 \\vee R_2}(\\vG)$ contains only one element.\nNow define $\\vF_{R_1} (\\vG) = \\{ g_1, g_3 \\}$. It then follows that\n\\[\n\\vF_{R_2}(\\vF_{R_1}(\\vG)) = \\vF_{R_2}(\\{ g_1, g_3 \\}) = \\{ g_1, g_3 \\} \\ne \\vF_{R_1 \\vee R_2}(\\vG) .\n\\]\n\\end{proof}\n\n%The above theorem shows that natural filters cannot be chained for arbitrary equivalence relations and arbitrary input. However, for specific pairs of equivalence relations, they may still be chained. In particular, if one equivalence is finer than the other, then it is easy to see that they can be chained.\\footnote{This is why we require $R_1$ to be non-trivial in the above theorem, since a discrete relation is finer than any relation, and any relation is finer than a unit relation.}\n\nThe counter-example and the theorems above mean that, in order to chain filters for arbitrary relations, we must impose an additional restriction on how a filter chooses its representative elements. For practical usefulness, we define a class of \\emph{canonical} filters that satisfies this restriction.\n\n\\begin{definition}\n(Canonical filter) A \\emph{canonical filter} is an equivalence filter that only returns representatives that are minimum in their respective equivalence classes. Formally, let $\\prec$ be a total order on the set $\\vS_0$ of all codewords.\\footnote{A convenient example of such a total order is the lexicographical order.}\nLet $R$ be an equivalence relation that partitions $\\vS_0$ into $r$ cells, where the minimum element (with respect to $\\prec$) of the $i$th cell is denoted by $m_i$. Then a \\emph{canonical filter} $\\vF$ is a function mapping subsets of $\\vS_0$ to subsets of $\\vS_0$, defined by\n\\begin{equation}\n\\vF(\\vG) = \\vG \\cap \\left\\{ m_i \\given 1 \\le i \\le r \\right\\} \\text{ for any } \\vG \\subseteq \\vS_0 . \\label{eq:canonical-filter-1}\n\\end{equation}\nIn particular, if some $m_i$ is not in $\\vG$, then its equivalence class is excluded from the output.\n\\end{definition}\n\n
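A canonical filter can be implemented by precomputing the set of class minima and intersecting, exactly as in equation \\eqref{eq:canonical-filter-1}. The short Python sketch below (ours, not from the accompanying code) does this and checks that, on the earlier counter-example, the chained canonical filters agree with the canonical filter of the join.\n\\begin{verbatim}\ndef class_minima(partition):\n    # minimum representative of each cell, w.r.t. the natural order\n    return {min(block) for block in partition}\n\ndef canonical_filter(candidates, partition):\n    return sorted(set(candidates) & class_minima(partition))\n\nR1 = [[1, 2], [3]]\nR2 = [[1], [2, 3]]\njoin = [[1, 2, 3]]   # R1 v R2, computed as before\nX = [1, 2, 3]\n\nchained = canonical_filter(canonical_filter(X, R1), R2)\nprint(chained, canonical_filter(X, join))  # [1] [1] -- they agree\n\\end{verbatim}\n\n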
Next we show that we can safely chain canonical filters together to produce the desired output efficiently.\n\n\\begin{theorem}\nLet $\\vF_1$ and $\\vF_2$ be the canonical filters for equivalence relations $R_1$ and $R_2$. Let $\\vF$ be the canonical filter for the equivalence relation $R = R_1 \\vee R_2$. Then $\\vF(\\vG) = \\vF_2(\\vF_1(\\vG))$ for any $\\vG \\subseteq \\vS_0$.\n\\end{theorem}\n\n\\begin{proof}\nBefore going into details, we illustrate the idea with an example of 7 codewords. A $\\times$ sign indicates a representative of one of the canonical filters, and a $\\otimes$ sign indicates a representative of both filters.\n\\[\n\\begin{array}{r c c c c c c c }\n      & g_1 & g_2 & g_3 & g_4 & g_5 & g_6 & g_7 \\\\\n\\vF_1: & \\otimes &        & \\times & \\otimes & & \\times & \\\\\n\\vF_2: & \\otimes & \\times &        & \\otimes & &        & \\times\n\\end{array}\n\\]\n\nFormally, let $\\vu = (u_1, \\cdots, u_r)$ and $\\vv = (v_1, \\cdots, v_s)$ be the ordered sets of minimum representatives of $\\vS_0 / R_1$ and $\\vS_0 / R_2$ respectively. According to equation \\eqref{eq:canonical-filter-1}, we have\n\\[\n\\vF_1(\\vG) = \\vG \\cap \\vu, \\qquad \\vF_2(\\vG) = \\vG \\cap \\vv.\n\\]\nLet $\\vw = \\vu \\cap \\vv = (w_1, \\cdots, w_t)$. We now show that $\\vw$ is the set of minimum representatives of $\\vS_0 / (R_1 \\vee R_2)$.\n\nFirst, any two $w_{k_1}, w_{k_2} \\in \\vw$ are not equivalent under $R_1$ or under $R_2$. Otherwise, suppose $w_{k_1} \\sim w_{k_2}$ under $R_1$; then the corresponding elements $u_{i_1}, u_{i_2} \\in \\vu$ are equivalent under $R_1$, which contradicts the definition of $\\vu$.\n\nNext, we show that any given $w \\in \\vS_0$ is equivalent to some $w_k \\le w$ under $R = R_1 \\vee R_2$. Given $w$, there exist $u_i \\le w$ and $v_j \\le w$ which are equivalent to $w$ under $R_1$ and $R_2$ respectively. By definition of $R$, $w$ is equivalent to $u_i$ and $v_j$ under $R$ as well. If $u_i = v_j$, then this common element belongs to $\\vw$ and the statement is true. Otherwise, suppose without loss of generality that $u_i < v_j$. Since $w \\sim u_i$ under $R$, we just need to show that $u_i$ is equivalent to some $w_k \\le u_i$. Repeat this process until some $u_i = v_j$, or we come to $u_1 = v_1 = \\min \\vS_0$, where the statement trivially holds; the process terminates because the element under consideration strictly decreases at each step.\n\nThe above two paragraphs show that $\\vw$ is the set of minimum representatives of $R$. [It remains to rule out that two distinct elements of $\\vw$ are equivalent under $R$ itself, i.e.\\ through a chain alternating between $R_1$ and $R_2$; the first paragraph only excludes direct equivalence under $R_1$ or $R_2$.] It then follows that for any $\\vG \\subseteq \\vS_0$,\n\\[\n\\vF(\\vG) = \\vG \\cap \\vw = \\vG \\cap (\\vu \\cap \\vv ) = (\\vG \\cap \\vu) \\cap \\vv\n= \\vF_2(\\vF_1(\\vG)) .\n\\]\n\n\\end{proof}\n\n
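One consequence of the intersection form $\\vF_2(\\vF_1(\\vG)) = \\vG \\cap \\vu \\cap \\vv$ is worth noting: chained canonical filters commute, because set intersection does. Reusing the helpers from the earlier sketch (ours):\n\\begin{verbatim}\nlhs = canonical_filter(canonical_filter(X, R1), R2)\nrhs = canonical_filter(canonical_filter(X, R2), R1)\nassert lhs == rhs == [1]   # the order of chaining is irrelevant\n\\end{verbatim}\n\n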
%However, if each filter is implemented in a way such that it keeps a specific element as the representative, such as the lexical minimum of its equivalence class of each particular equivalence, then chaining them together may incorrectly drop out canonical guesses. This section discusses the conditions under which chaining them together will still produce correct results.\n\n", "meta": {"hexsha": "587b63c3bb136b8461fe0d0ea02c77787c9b1d63", "size": 43988, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/equiv.tex", "max_stars_repo_name": "bijaykoirala/mastermind-strategy", "max_stars_repo_head_hexsha": "22763672c413c6a73e2c036a9f522b65a0025b68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/equiv.tex", "max_issues_repo_name": "bijaykoirala/mastermind-strategy", "max_issues_repo_head_hexsha": "22763672c413c6a73e2c036a9f522b65a0025b68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/equiv.tex", "max_forks_repo_name": "bijaykoirala/mastermind-strategy", "max_forks_repo_head_hexsha": "22763672c413c6a73e2c036a9f522b65a0025b68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.9876352396, "max_line_length": 640, "alphanum_fraction": 0.7239929072, "num_tokens": 13141, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7520125848754472, "lm_q1q2_score": 0.5865644771045877}}
{"text": "\n\\subsection{MNIST dataset}\n\nWe tested RSOM on the standard MNIST dataset \\citep{Lecun:1998} that contains $60,000$ training images and $10,000$ testing images. The dimension of each image is 28$\\times$28 pixels and they are encoded using grayscale levels as the result of the normalization of the original black and white NIST database. The standard performance of most algorithms on the MNIST dataset is an error rate below 1\\% (with or without preprocessing), while the regular SOM reaches a recognition rate of around 90\\%, depending on the initial size, learning rate, and neighborhood function. Our goal here is not to find the best set of hyper-parameters but rather to explore if SOM and RSOM are comparable for a given set of hyper-parameters. Consequently, we considered a size of 32$\\times$32 neurons, used the entire training set (60,000 examples) for learning, and measured performance on the entire testing set. We did not use any preprocessing stage on the images and we fed each image of the training set, with the associated label, directly to the model. Labels (from 0 to 9) have been transformed to a binary vector of size 10 using one-hot encoding (e.g. label 3 has been transformed to 0000001000). These binary labels can then be learned using the same procedure as for the actual sample. To decode the label associated with a code word, we simply consider the argmax of these binary vectors. Figure \\ref{fig:MNIST:results} shows the final self-organisation of the RSOM where the class for each cell has been colorized using random colors. We can observe a number of large clusters of cells representing the same class (0, 1, 2, 3, 6) while the other classes (4,5,7,8,9) are split in two or three clusters. Interestingly enough, the codewords at the borders between two clusters are very similar. In terms of recognition, this specific RSOM has an error rate just below 10\\% (recognition rate $0.903 \\pm 0.296$), which is quite equivalent to the regular SOM (recognition rate $0.906 \\pm 0.292$). The performances of the RSOM and SOM are actually not significantly different, suggesting that the regular grid hypothesis can be weakened.\n\nIn a similar way we measured the similarity of the neural spaces generated by both the regular SOM and the RSOM using the persistence diagram and barcodes. The only significant difference from the previous analysis was the projection of the input and neural spaces to a lower-dimensional space via UMAP \\citep{Mcinnes:2018}. Projections of high-dimensional spaces to lower-dimensional ones have been used before in the analysis of latent spaces of autoencoders \\citep{Detorakis:2019}. Here, we use UMAP since it is an efficient and robust method for applying a dimensionality reduction on input and neural spaces. More precisely, we project the MNIST digits as well as the code words (dimension $784$) to a space of dimension $7$. Once we get the projections, we proceed to the topological analysis using the persistence diagram and barcodes as we have already described in previous paragraphs.\n\n
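For readers who wish to reproduce this kind of analysis, the following Python sketch outlines the pipeline described above. It is ours, not the paper's actual script: it assumes the third-party packages umap-learn, ripser, and persim, and only the $7$-dimensional projection and the $H0$--$H2$ homology orders are taken from the text.\n\\begin{verbatim}\nimport umap                      # umap-learn\nfrom ripser import ripser        # persistent homology\nfrom persim import bottleneck    # distance between diagrams\n\ndef persistence_diagrams(points, dim=7, maxdim=2):\n    # project to `dim` dimensions with UMAP, then compute H0..H2 diagrams\n    embedded = umap.UMAP(n_components=dim).fit_transform(points)\n    return ripser(embedded, maxdim=maxdim)['dgms']\n\n# inputs, som_codewords, rsom_codewords: arrays of shape (n, 784)\n# dgm_in  = persistence_diagrams(inputs)\n# dgm_som = persistence_diagrams(som_codewords)\n# print(bottleneck(dgm_in[1], dgm_som[1]))  # H1 bottleneck distance\n\\end{verbatim}\n\n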
Figure~\\ref{fig:MNIST:analysis} shows the results regarding the persistent barcodes and diagrams. The persistent barcodes in figures~\\ref{fig:MNIST:analysis}A, B, and C indicate that RSOM captures more persistent features (panel C; orange and green lines reflect the $H1$- and $H2$-homological features, respectively) than the regular SOM (panel B). The persistence diagrams of input, SOM, and RSOM are shown in figures~\\ref{fig:MNIST:analysis}D, E, and F, respectively. These figures indicate that the RSOM has more persistent features (orange and green dots away from the diagonal line) than the regular SOM, consistent with the two previous experiments ($2$D uniform distribution with holes and $3$D uniform distribution). The bottleneck distances between the persistence diagram of the input space and those of SOM and RSOM are $1.0$ (SOM) and $1.12$ (RSOM) for $H0$, $0.19$ (SOM) and $0.22$ (RSOM) for $H1$, and $0.05$ for both SOM and RSOM for $H2$. Again we observe that the regular SOM has a persistence diagram that is closer to the one of the input space than RSOM does; however, RSOM seems to approach the input-space topology slightly better, since it has more (birth, death) pairs away from the diagonal (black line) in figures~\\ref{fig:MNIST:analysis}D, E, and F. Moreover, the persistent barcode of RSOM (figure~\\ref{fig:MNIST:analysis}C) indicates that it has more persistent features for radii $\\alpha$ between $0$ and $1.512$ than the regular SOM.\n\n\\begin{figure}\n  \\includegraphics[width=\\columnwidth]{experiment-MNIST.pdf}\n  \\vspace{2mm}\n  \\centering\n  \\includegraphics[width=.975\\columnwidth]{figures/colormap.pdf}\n  %\n  \\caption{%\n  %\n  {\\bfseries \\sffamily MNIST dataset (results)}\n  %\n  Randomized SOM made of $1024$ neurons with a $3$-nearest-neighbors induced topology. The model has been trained for $25,000$ epochs on the MNIST dataset. \\textbf{A} Map topology in neural space. \\textbf{B} Map topology in data space. \\textbf{C to H} Normalized distance map for six samples. Normalization has been performed for each sample in order to enhance contrast, but this prevents comparison between maps.\n  %\n  }\n  \\label{fig:MNIST:results}\n\\end{figure}\n\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\textwidth]{experiment-MNIST-analysis.pdf}\n  \\caption{{\\bfseries \\sffamily MNIST dataset (analysis)}\n  Persistent barcodes of \\textbf{A} input space, \\textbf{B} SOM, and \\textbf{C} RSOM.\n  The blue, orange, and green line segments represent the $H0$-, $H1$-, and $H2$-homology,\n  respectively. This means\n  that blue represents connected segments within the space, orange reflects the holes\n  within the space, and green the voids. The longer the line segment, the more important the \n  corresponding topological feature. \\textbf{D} illustrates the persistence diagram for the input space.\n  \\textbf{E} and \\textbf{F} depict the persistence diagrams for SOM and RSOM, respectively. 
Again, blue dots\n  indicate $H0$-homology features, orange dots represent $H1$-homological features, and green dots the \n  $H2$-homological features.}\n  \\label{fig:MNIST:analysis}\n\\end{figure}", "meta": {"hexsha": "8f578252af46db698ea5107ed9635565bc696cb7", "size": 6139, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article-overleaf/03-results-C.tex", "max_stars_repo_name": "rougier/VSOM", "max_stars_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-11-20T06:27:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:20:28.000Z", "max_issues_repo_path": "article-overleaf/03-results-C.tex", "max_issues_repo_name": "rougier/VSOM", "max_issues_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article-overleaf/03-results-C.tex", "max_forks_repo_name": "rougier/VSOM", "max_forks_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-03T04:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T04:41:57.000Z", "avg_line_length": 115.8301886792, "max_line_length": 2097, "alphanum_fraction": 0.779605799, "num_tokens": 1538, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825007, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5865644761263406}}
{"text": "%%% \\footnote{Anecdotally, with proof-to-code ratios of 6:1\n%%% instead of 50:1 or 100:1 with classical proof assistants~\\citep{Ironfleet}},\n\n%%% In presence of lambda expressions in\n%%% the specifications, our translation\n%%% can (optionally) insert extensionality\n%%% axioms that express $\\alpha$- and\n%%% $\\beta$-equivalence in the logic,\n%%% but do not preserve undecidability\n%%% and thus unpredictability of type checking.\n\nStructure\n* Introduction\n* Overview\n* Refinement Reflection\n* Algorithmic Checking\n* Proof Combinator Library\n* Evaluation\n* Related Work\n--------------------------------------------------------------------------------\nDependently typed languages demonstrate the\nimmense potential of writing proofs \\emph{of}\nprograms, \\emph{by} programs.\n%\nThey enable developers to \\emph{specify} correctness\nproperties and to \\emph{verify} those properties, by\nwriting both the propositions and the proofs as\nordinary terms in the language, thereby uniting\nprogramming and proving into complementary parts\nof a whole.\n%\nUnfortunately, so far, these benefits require one\nto work in one of these specially designed languages.\n%\nWouldn't it be great to program proofs and write\nproofs of programs in \\emph{your} favorite language\nand libraries, and have them be built and executed\nusing \\emph{your} favorite compilers and run-times?\n\n\n\nWe used \\libname as an assistant\nto state and prove theorems\nof Haskell functions\non Integers and inductive data types (Lists and Maybe).\n%\nIn this section, we present by examples how \\libname\ncan be used to prove properties on linear arithmetic\n(\\eg @fib@ is increasing), higher order properties on\nintegers and inductive data types (\\eg generalization\nof the increasing property and map fusion), and interpretation\nof $\\lambda$-terms (\\eg associativity of monadic bind).\n\n\\spara{Proving Strategies.}\n%\nThe proofs rely on four main proving strategies.\nWe use color notations to make the proofs more readable.\n%\n\\begin{itemize}\n\\item\\textbf{Theorem application.}\nWe prove\n@f op! g ?  thm e1 .. en@\nwhen the theorem @thm x1 ... xn@\nis of the form $(\\ttf\\ \\op ! \\ \\ttg) \\subst{\\overline{x}}{\\overline{e}}$.\n\\item \\textbf{Definition folding.}\nFolding the definition of @f e1 ... en == g@ once\nproves that\n@rf e1 ... en ==! g@\n(in which case we color @f@ with red).\n%\n\\item \\textbf{Definition unfolding.}\nUnfolding the definition of @f e1 ... en == g@ once\nproves that\n@g ==! gf e1 ... en@\n(in which case we color @f@ with green).\n\\item \\textbf{SMT knowledge.}\nWe rely on SMT interpretations to prove\nlinear arithmetic properties (\\eg @1 < 2@)\nor $\\eta$-reduction (\\eg @e = (\\x -> e) x@).\n\\end{itemize}\n\n\\subsection{Arithmetic Properties}\n%\nWe start by proving arithmetic properties\nof Haskell functions.  @fib_incr@ states\nand proves that the @fib@ function\nfrom \\S~\\ref{sec:intro} is increasing.\n%\n\\begin{code}\n  fib_incr :: n:Nat -> {fib n <= fib (n+1)}\n  fib_incr n\n    | n == 0\n    =   rfib 0\n    <!  gfib 1\n    *** QED\n    | n == 1\n    =   fib 1\n    <=! fib 1 + fib 0\n    <=! gfib 2\n    *** QED\n    | otherwise\n    =   rfib n\n    ==! fib (n-1) + fib (n-2)\n    <=! fib n + fib (n-2) ? fib_incr (n-1)\n    <=! fib n + fib (n-1) ? fib_incr (n-2)\n    <=! gfib (n+1)\n    *** QED\n\\end{code} %$\nProofs are total and terminating Haskell functions.\n@fib_incr@ proves that @fib n@ is increasing using three cases.\n1. 
If @n==0@,\n  then by unfolding @fib@ we get @fib 0 == 0@.\n  SMT proves that $0 < 1$, which in turn is folded to @1 == fib 1@.\n2. If @n==1@, since @fib@ always returns @Nat@, SMT proves that @fib 1 <= fib 1 + fib 0@\n which is folded to @fib 2@.\n3. Otherwise,\n  @fib n@ is unfolded to @fib n = fib (n-1) + fib (n-2)@.\n  We recursively apply the theorem to the (provably) smaller arguments @n-1@ and @n-2@,\n  and fold the result to get @fib n + fib (n-1) = fib (n+1)@.\n\n\\subsection{Higher Order Theorems}\\label{subsec:higherorder}\nNext, we state and prove a generalization of the increasing property.\nNamely, for every function @f@,\nif there is a proof that @f@ is increasing, that is,\nan expression with type @z:Nat -> {f z <= f (z+1)}@,\nthen, for every @x:Nat@ and @y@ greater than @x@,\nwe prove that @f x <= f y@.\n%\n\\begin{code}\n  type Greater N = {v:Int | N < v}\n\n  gen_incr :: f:(Nat -> Int)\n           -> (z:Nat -> {f z <= f (z+1)})\n           -> x:Nat\n           -> y:Greater x\n           -> {f x <= f y}\n           / [y]\n  gen_incr f thm x y\n    | x + 1 == y\n    =   f x\n    <=! f (x+1) ? thm x\n    <=! f y\n    *** QED\n    | x + 1 < y\n    =   f x\n    <=! f (y-1) ? gen_incr f thm x (y-1)\n    <=! f y     ? thm (y-1)\n    *** QED\n\\end{code}\n%\nWe prove the theorem by induction on @y@,\nwhich is stated by the termination metric~\\citep{Vazou15} @[y]@.\n%\nIf @x+1 == y@, then we apply the @thm@ argument.\nOtherwise, @x+1<y@ and the theorem holds by induction on @y@.\n\nWe use the above theorem to directly prove\nthat @fib@ is increasing with the generalized notion:\n%\n\\begin{code}\n  fib_incr_gen :: n:Nat\n               -> m:Greater n\n               -> {fib n <= fib m}\n  fib_incr_gen = gen_incr fib fib_incr\n\\end{code}\n\n\n\\RJ{FIXME, moved from examples}\n\\subsection{Example: Reflecting Lists} \\label{subsec:list}\n\n%\nIn the rest of this section, we use \\libname to prove\nproperties about a user-defined inductive List data type\n\\begin{code}\n  data L [length] a = N | C a (L a)\n\\end{code}\n%\nThe annotation @[length]@ on the List definition\nstates that \\liquidHaskell will use the @length@ of lists\nas a default termination metric on functions inductive on lists.\n%\nHere @length@ is defined, as expected, to map lists to natural numbers\n\\begin{code}\n  measure length\n  length :: L a -> Nat\n  length N        = 0\n  length (C _ xs) = 1 + length xs\n\\end{code}\n%\nThe @measure@ definition (as defined in~\\citep{Vazou14})\ntranslates the @length@ function to refinements in the list's\ndata constructors.\n%\nSpecifically, the above definition strengthens the list's data constructors\nas\n\\begin{code}\n N :: {v:L a | length v == 0 }\n C :: x:a -> xs:L a\n   -> {v:L a | length v == 1 + length xs }\n\\end{code}\n\nMoreover, the \\liquidHaskell flag @exact-data-constructors@\nautomatically generates the\nlogical record selectors and checkers for lists.\n%\nSpecifically, the following measures will get generated:\n\\begin{code}\n  is_N N          = True\n  is_N (C _ _)    = False\n\n  is_C (C _ _)    = True\n  is_C N          = False\n\n  sel_C_1 (C x _) = x\n  sel_C_2 (C _ x) = x\n\\end{code}\n\n\\subsection{Append Associativity}\\label{subsec:append}\nNext we recursively define and axiomatize list append\n\\begin{code}\n  axiomatize (++)\n  (++) :: L a -> L a -> L a\n  N        ++ ys = ys\n  (C x xs) ++ ys = C x (xs ++ ys)\n\\end{code}\n%\nThe above axiomatization will automatically produce a type for append\nthat exactly captures its 
implementation.\n%\n\\begin{code}\n  (++) :: xs:L a -> ys:L a ->\n    {v:L a |\n       v == if (is_N xs) then ys\n            else C (sel_C_1 xs)\n                   (sel_C_2 xs ++ ys) }\n\\end{code}\n%\nThe translation of arbitrary terminating functions\nis formalized in \\S~\\ref{sec:algorithmic}.\n", "meta": {"hexsha": "14b1af4b5bc077bdf1eb413e1b456b6a38229874", "size": 7047, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/refinementreflection/scratch.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/refinementreflection/scratch.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/refinementreflection/scratch.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 29.3625, "max_line_length": 88, "alphanum_fraction": 0.6554562225, "num_tokens": 2144, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257127, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5865644607425787}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{tfrpage}\n\\section*{\\hspace*{-1.6cm} tfrpage}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nPage time-frequency distribution.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[tfr,t,f] = tfrpage(x)\n[tfr,t,f] = tfrpage(x,t)\n[tfr,t,f] = tfrpage(x,t,N)\n[tfr,t,f] = tfrpage(x,t,N,trace)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty tfrpage} computes the Page distribution of a discrete-time\n        signal {\\ty x}, or the cross Page representation between two\n        signals. The expression of the Page distribution is\n\\begin{eqnarray*}\nP_x(t,\\nu) &=& \\dfrac{d}{dt}\\left[\\left|\\int_{-\\infty}^t x(u)\\ e^{-j2\\pi \\nu u}\\ du\\right|^2\\right]\\\\ \n &=& 2\\ \\Re{\\left\\{x(t)\\ \\left(\\int_{-\\infty}^t x(u)\\ e^{-j2\\pi \\nu u}\\ du\\right)^* \\ e^{-j2\\pi \\nu t}\\right\\}}.\n\\end{eqnarray*}\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty x}     & signal if auto-Page, or {\\ty [x1,x2]} if cross-Page\n\t\t\t({\\ty Nx=length(x)})\\\\ \n        {\\ty t}     & time instant(s) & {\\ty (1:Nx)}\\\\\n        {\\ty N}     & number of frequency bins  & {\\ty Nx}\\\\\n        {\\ty trace} & if nonzero, the progression of the algorithm is shown\n                                          & {\\ty 0}\\\\\n     \\hline {\\ty tfr}   & time-frequency representation\\\\\n        {\\ty f}     & vector of normalized frequencies\\\\\n \n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\nWhen called without output arguments, {\\ty tfrpage} runs {\\ty tfrqview}.\n\\end{minipage}\n\\vspace*{1cm}\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         sig=fmlin(128,0.1,0.4); \n         tfrpage(sig); \n\\end{verbatim}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nall the {\\ty tfr*} functions.\n\\end{minipage}\n\\vspace*{.2cm}\n\n\n{\\bf \\large \\sf References}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] C. Page ``Instantaneous Power Spectra'' J. Appl. Phys., Vol. 23,\npp. 103-106, 1952.  \\\\\n\n[2] O. Grace ``Instantaneous Power Spectra'' J. Acoust. Soc. Am., Vol. 69,\npp. 
191-198, 1981.\n\\end{minipage}\n", "meta": {"hexsha": "c31f790cfe0ac899240331168749c9ea8a7da74a", "size": 2421, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/tfrpage.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/tfrpage.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/tfrpage.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 27.202247191, "max_line_length": 75, "alphanum_fraction": 0.6088393226, "num_tokens": 916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7956581049086031, "lm_q1q2_score": 0.586525866695046}}
{"text": "%&LaTeX\n\n\\section{Hello, Digital!}\n\nIn this lab, you will investigate how capturing an analog signal for\ncomputer use --- sampling and quantization --- modifies the signal,\nseeing how the choices you make in the parameters for sampling and\nquantization affect the quality of the digitized, computer signal.\n\n\\subsection{Sampling}\n\nThe first step in digitization is \\emph{sample and hold}, in which the\ncontinuous analog signal is converted to a discrete-time analog signal\n(an analog signal that only changes its value at particular points in\ntime). You will use the \\texttt{samplehold} method to do this:\n\\begin{lstlisting}[style=Matlab-editor,basicstyle=\\mlttfamily\\small]\n% samplehold Perform a sample and hold function on an AnalogSignal\n% Usage:\n%  x = obj.samplehold(h)\n% where  obj = AnalogSignal\n%        h = hold time in sec (sampling interval)\n%        x = resultant sampled AnalogSignal\n\\end{lstlisting}\n\n\\paragraph{Step 1.1} Create an analog sine waveform ranging from -5 to\n5V with a frequency of 200Hz and a duration of 2 seconds. Produce a\nplot with X-axis limits set to make the waveform visible (i.e., don't\njust make a 2s plot that tries (and fails) to show 400 cycles of the\nsinusoid).\n\n\\paragraph{Step 1.2} Use the \\texttt{samplehold} method to produce\nsampled versions of this signal at 300Hz, 500Hz, 1000Hz, and\n2000Hz. Use the Matlab \\texttt{subplot} command, and the\n\\texttt{discreteplot} function provided with this class's Matlab code,\nto plot the original and all four sampled signals together. One of the\nthings that you will note is that the X axes of these\nplots do not have units of continuous time; their units are \\emph{not}\nseconds. Instead, the X axis values are sample count, starting with\nsample 1. You will want these plots to show their signals over the\nsame time duration, and so you'll need to choose that duration,\nthen figure out, for each sampled signal, how many samples correspond\nto that duration, and then set the X axis limits for each figure so\nthat the plots show equal durations.\n\nClearly, the results are not the same, and none look identical to the\noriginal sine wave. What are the two essential pieces of information\nabout a sine wave that need to be preserved when sampling it? Does it\nappear that all sampled versions are equally useful in achieving this?\nWhy or why not (in other words, your answer to this question should\nnot be just ``yes'' or ``no'')?\n\n\\paragraph{Step 1.3} Let's look at aliasing in a little more detail\nand with a lot more numerical precision. You'll recall from the text\nthat, once we sample a signal, we have limited the range of\nfrequencies that we can represent in our discrete signal to the range\n$0 \\leq \\hat{\\omega} \\leq \\pi$, corresponding to a range of apparent\nfrequencies in the physical world of $0 \\leq \\omega' \\leq \\omega_s/2$\n(or $0 \\leq f' \\leq f_s/2$). Any frequency in the original signal\nabove $f_s/2$ will be \\emph{aliased} into the range of possible\napparent frequencies. To keep things simple, we'll stick with a\nsinusoid; this time, make it 10Hz with an amplitude of -1 to +1 and a\nduration of 1s. This will make it easy to count cycles when\nplotted. Set up a figure that can hold three plots and plot this\nanalog signal in the top plot.\n\n\\paragraph{Step 1.4} Sample this signal at 25Hz and use the\n\\texttt{discreteplot} function to plot the sampled signal in the\nmiddle. 
Sample it at 15Hz and similarly plot that sampled signal at\nthe bottom.\n\n\\paragraph{Step 1.5} Before examining the plots in detail, answer the\nfollowing questions: For each of the two sampling frequencies, what is\nthe range of apparent frequencies that can be represented? For each,\nwill a sinusoid with $f = 10$Hz be aliased? If so, what will be the\ndigital frequency and the apparent frequency of a 10Hz sinusoid?\n\n\\paragraph{Step 1.6} Examine the plots and count the number of\nup-and-down cycles in each. Don't worry that each cycle doesn't look\nthe same, or that every other cycle seems different; just count\neach. You should see 10 cycles in the top graph; how many do you see\nin the middle and bottom? How does this compare to the theory you\ndiscussed in the previous step? If there is any discrepancy, explain it.\n\n\n\\subsection{Analog to Digital Conversion}\n\nThe last step of digitization is called ``analog to digital\nconversion,'' or \\emph{quantization}. In this step, the sampled analog\nsignal is converted to a discrete signal, with values represented by\n$b$ bit integers. We will use the \\texttt{quant} function provided\nwith this class's Matlab code to perform this conversion:\n\\begin{lstlisting}[style=Matlab-editor,basicstyle=\\mlttfamily\\small]\n% QUANT Quantize a sampled (discrete) signal using a prescribed\n%       number of bits per point.\n% Usage:\n%  y = quant(x,nb,out)\n% where y = digital signal quantized to 2^(nb) bits resolution\n%       x = vertical points of sampled signal\n%       nb = number of bits to use per point\n%      out = 'raw' means output binary values: 0,...,2^(nb-1)\n%      otherwise, set output value range = input value range\n\\end{lstlisting}\n\n\\paragraph{Step 2.1} Write a Matlab function that compares two signals\nby computing the \\emph{signal to noise ratio} (SNR) that results from\nchanging one into the other (by quantization). Your function should do\nthis by first computing \\emph{root mean squared} (RMS) error between\nthe two. In this case, you will be comparing a sampled signal with a\nquantized version of it.  To do this, you should subtract the\nquantized signal from the sampled signal to produce a vector of\ndifferences (errors), then square each difference value using the\nMATLAB \\verb|.^| operator (to get squared errors), then take the mean\nof these squared values using the MATLAB \\verb|mean| function (mean\nsquared error --- a scalar value), and finally take the square root of\nthat scalar result (root mean squared error). We can compute the RMS\nsignal in a similar way, by squaring the signal values, getting the\nmean of those squared values, and then taking the square root. The\nfinal result should be a single value --- the signal-to-noise ratio\n(SNR) in \\emph{decibels}. What is in the numerator and what is in the\ndenominator of this ratio? Note that this is different from what we\ndid in the textbook, because we are now doing the computation for a\n\\emph{specific} signal, not just figuring SNR for a possible\n\\emph{range} of signal values.\n\n
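As a cross-language sanity check (the lab itself expects a Matlab implementation, and the function name here is ours), a minimal NumPy sketch of the RMS/SNR computation just described looks like this:\n\\begin{lstlisting}[basicstyle=\\ttfamily\\small]\nimport numpy as np\n\ndef snr_db(clean, quantized):\n    # SNR in dB: RMS of the signal over RMS of the quantization error\n    error = clean - quantized\n    rms_signal = np.sqrt(np.mean(clean ** 2))\n    rms_error = np.sqrt(np.mean(error ** 2))\n    # 20*log10 because this is a ratio of amplitudes, not powers\n    return 20.0 * np.log10(rms_signal / rms_error)\n\\end{lstlisting}\n\n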
  \\paragraph{Step 2.2} Use your code to compute the SNR for a\n  quantized sinusoid. Generate an analog signal with a range of 0\n  to 5, frequency of 10Hz, and duration 2sec. Sample it at 25Hz. Use\n  2, 4, 8, 12, and 16 bits of quantization, and plot SNR on the Y-axis\n  versus number of quantization bits on the X-axis.\n\n  \\paragraph{Step 2.3} Repeat Step 2.2 using a square waveform with\n  the same parameters.\n\n  \\paragraph{Step 2.4} Repeat Step 2.2 using a triangle waveform with\n  the same parameters.\n\n  \\paragraph{Step 2.5} As you double the number of bits used in\n  quantization, how does the SNR change? How does this compare to what\n  you learned from the textbook? Refer to specific features of your\n  plots from Steps~2.2--2.4 to justify your answer.\n", "meta": {"hexsha": "5f283461cc1e5a2867c2553929ea7f59618f018a", "size": 7163, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Matlab Labs/lab3/lab3.tex", "max_stars_repo_name": "stiber/Signal-Computing", "max_stars_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-09-10T16:54:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T15:48:26.000Z", "max_issues_repo_path": "Matlab Labs/lab3/lab3.tex", "max_issues_repo_name": "stiber/Signal-Computing", "max_issues_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2015-08-18T18:16:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-29T17:19:16.000Z", "max_forks_repo_path": "Matlab Labs/lab3/lab3.tex", "max_forks_repo_name": "stiber/Signal-Computing", "max_forks_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.0909090909, "max_line_length": 72, "alphanum_fraction": 0.7667178556, "num_tokens": 1843, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5865258631231839}}
{"text": "%---------------------------Shape-----------------------------\n\\section{Shape\\label{s:tri-shape}}\n\nLet $C$ be the condition number as defined in \\S\\ref{s:tri-condition}.\nThen the shape metric is simply\n\\[\n  q = \\frac{1}{C}\n\\]\n\n\\trimetrictable{shape}%\n{$1$}%                                                Dimension\n{$[0.25,1]$}%                                         Acceptable range\n{$[0,1]$}%                                            Normal range\n{$[0,1]$}%                                            Full range\n{$1$}%                                                Unit equilateral triangle value\n{\\cite{knu:03}}%                                      Reference(s)                   \n{v\\_tri\\_shape}%                            Verdict function name\n\n", "meta": {"hexsha": "f49f05c02eaf85ec3413ebb3620890b5755857fa", "size": 773, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TriShape.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TriShape.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/TriShape.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 40.6842105263, "max_line_length": 85, "alphanum_fraction": 0.362225097, "num_tokens": 150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7956581000631541, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5865258631231838}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[a4paper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}\n\\geometry{a4paper} \n\\usepackage{amsmath}\n\\usepackage{physics}\n\\usepackage{color,soul}\n\\DeclareMathOperator{\\di}{d\\!}\n\\newcommand*\\Eval[3]{\\left.#1\\right\\rvert_{#2}^{#3}}\n\\newcommand{\\inlist}[1]{\\texttt{#1}}\n\n\\author{J L Kaplan}\n\n\\begin{document}\n\n\\section*{SICP Exercise 1.19}\n\n\\subsection*{Derivation of $p'$ and $q'$}\nFirst transformation:\n\\begin{equation}\n  a_{n+1} \\leftarrow q \\left( a_n + b_n \\right) + p a_n, \\qquad b_{n+1} \\leftarrow p b_n + q a_n,\n\\end{equation}\nsecond transformation:\n\\begin{equation}\n  \\label{eq:t2}\n  a_{n+2} \\leftarrow q \\left( a_{n+1} + b_{n+1} \\right) + p a_{n+1}, \\qquad b_{n+2} \\leftarrow p b_{n+1} + q a_{n+1},\n\\end{equation}\nthen substituting the values from the first transformation into the $b_{n+2}$ transformation in equation~\\ref{eq:t2}, we can see that:\n\\begin{equation}\n  b_{n+2} \\leftarrow b_n \\left( p^2 + q^2 \\right) + a_n \\left( 2pq + q^2 \\right),\n\\end{equation}\nand comparing with transform 1 it can be seen that\n\\begin{equation}\n  p' = p^2 + q^2,\n\\end{equation}\nand\n\\begin{equation}\n  q' = 2pq + q^2.\n\\end{equation}\n\n\n\\end{document}\n", "meta": {"hexsha": "9ffe32c3d4335d8fe6c45f30aee10f72b937313f", "size": 1225, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter01/ex_1-19.tex", "max_stars_repo_name": "moustachio-belvedere/sicp", "max_stars_repo_head_hexsha": "53f178beb39d7563ebec0d9e8204700e53945292", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter01/ex_1-19.tex", "max_issues_repo_name": "moustachio-belvedere/sicp", "max_issues_repo_head_hexsha": "53f178beb39d7563ebec0d9e8204700e53945292", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-19T16:01:01.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-19T16:01:01.000Z", "max_forks_repo_path": "chapter01/ex_1-19.tex", "max_forks_repo_name": "moustachio-belvedere/sicp", "max_forks_repo_head_hexsha": "53f178beb39d7563ebec0d9e8204700e53945292", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.488372093, "max_line_length": 134, "alphanum_fraction": 0.6914285714, "num_tokens": 476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.5865258595513215}}
{"text": "\\documentclass{memoir}\n\\usepackage{linalg}\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n\\section{Matrices}\n\\label{cha:matrices}\n\\subsection{Representing a Linear Map by Matrices}\n\\begin{defn}[Matrix]\n\tLet $m$ and $n$ denote positive integers. An $m$-by-$n$ \\textbf{matrix} $A$ is a rectangular array of elements of $F$ with $m$ rows and $n$ columns:\n\\begin{align*}\n\tA = \\begin{bmatrix} A_{1,1} & \\dots & A_{1,n} \\\\\n\t\t\\vdots & \\ddots & \\vdots \\\\\n\t\tA_{m,1} & \\dots & A_{m,n}\n\t\\end{bmatrix} \n\\end{align*}\nwhere $A_{j,k}$ denotes the entry in row $j$ and column $k$.\n\\end{defn}\n\\begin{defn}[Matrix of a linear map]\n\nSuppose $T \\in \\mathcal{L}(V,W)$, $v_1,\\ldots,v_n$ a basis of $V$, and $w_1,\\ldots,w_m$ a basis of $W$. Then, the \\textbf{matrix of $T$} with respect to these bases is the $m$-by-$n$ matrix $\\mathcal{M}(T)$ whose entries $A_{j,k}$ are defined by\n\\begin{align*}\n\tTv_k = A_{1,k}w_1 + \\ldots + A_{m,k}w_m .\n\\end{align*}\nIf the bases are not clear from context, the notation $\\mathcal{M}(T,(v_1,\\ldots,v_n),(w_1,\\ldots,w_m))$ is used.\n\n\\end{defn}\n\\color{gray}\n\\begin{exmp}[Examples of matrices of linear maps]\n\tConsider $T:\\R^2\\to \\R^3$ defined by $(x,y) \\mapsto (x+3y,2x+5y,7x+9y)$. By applying this to the standard basis for $\\R^2$ and $\\R^3$, we can determine the matrix:\n\t\\begin{align*}\n\t\tT(v_1) = T(1,0) = (1,2,7) \\\\\n\t\tT(v_2) = T(0,1) = (3,5,9) \\\\\n\t\t\\text{ so then } \\mathcal{M}(T) = \\begin{bmatrix} \n\t\t\t1 & 3\\\\\n\t\t\t2 & 5\\\\\n\t\t\t7 & 9\n\t\t\\end{bmatrix} \n\t\\end{align*}\n\n\tAnother important example is the differentiation map $D: P_3(\\R) \\to P_2(\\R)$ defined by $p \\mapsto p'$.\\\\\n\n\tChoose bases $P_3(\\R) = \\left\\{1,x,x^2,x^3 \\right\\}$ and $P_2(\\R) = \\left\\{1,x,x^2 \\right\\} $. Then\n\t\\begin{align*}\n\t\tD(v_1) = D(1) = 0 = 0\\cdot w_1 + 0\\cdot w_2 + 0\\cdot w_3 \\\\\n\t\tD(v_2) = D(x) = 1 = 1\\cdot w_1 + 0 \\cdot w_2 + 0 \\cdot w_3 \\\\\n\t\tD(v_3) = D(x^2) = 2x = 0\\cdot w_1 + 2 \\cdot w_2 + 0 \\cdot w_3 \\\\\n\t\tD(v_4) = D(x^3) = 3x^2 = 0 \\cdot w_1 + 0\\cdot w_2 + 3\\cdot w_3 \\\\\n\t\t\\text{ So therefore (reading the coefficients of } D(v_k) \\text{ down the } k\\text{th column) } \\mathcal{M}(D) = \\begin{bmatrix} \n\t\t\t0 & 1 & 0 & 0\\\\\n\t\t\t0 & 0 & 2 & 0\\\\\n\t\t\t0 & 0 & 0 & 3 \n\t\t\\end{bmatrix} \n\t\\end{align*}\nNote that $\\mathcal{M}(D)$ is $3$-by-$4$, since $D$ maps the $4$-dimensional space $P_3(\\R)$ to the $3$-dimensional space $P_2(\\R)$.\n\\end{exmp}\n\\color{black}\n
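\nA quick numerical check of the differentiation example (a sketch of ours, using NumPy; polynomials are stored as coefficient vectors $(c_0, c_1, c_2, c_3)$ for $c_0 + c_1x + c_2x^2 + c_3x^3$):\n\\begin{verbatim}\nimport numpy as np\n\ndef diff_coeffs(c):\n    # differentiate c0 + c1*x + c2*x^2 + c3*x^3 -> 3 coefficients\n    return np.array([c[1], 2 * c[2], 3 * c[3]])\n\n# Columns of M(D) are D applied to the basis 1, x, x^2, x^3.\nbasis = np.eye(4)\nM = np.column_stack([diff_coeffs(v) for v in basis])\nprint(M)\n# [[0. 1. 0. 0.]\n#  [0. 0. 2. 0.]\n#  [0. 0. 0. 3.]]\n\\end{verbatim}\n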
\n\\section{Addition and Scalar Multiplication of Matrices of Linear Maps}\n\\label{sec:addition_and_scalar_multiplication_of_matrices_of_linear_maps}\n\\begin{cor}[Linearity of Matrices]\n\tSuppose $S,T \\in \\mathcal{L}(V,W)$. Then $\\mathcal{M}(S+T) = \\mathcal{M}(S) + \\mathcal{M}(T)$.\\\\\n\n\tSuppose $\\lambda \\in F$ and $T \\in \\mathcal{L}(V,W)$. Then $\\mathcal{M}(\\lambda T) = \\lambda \\mathcal{M}(T)$.\n\\end{cor}\nNotation: The set of all $m$-by-$n$ matrices with entries in $F$ is denoted by $F^{m,n}$.\n\\begin{cor}\n\tIf $m,n$ are positive integers, then $F^{m,n}$ is a vector space with dimension $mn$.\n\\end{cor}\n\\end{document}\n", "meta": {"hexsha": "fb286f750763e548fc8ea9018904e6f4f46c0100", "size": 2900, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Algebra/Notes/source/09-30-19-Matrices.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Linear Algebra/Notes/source/09-30-19-Matrices.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Linear Algebra/Notes/source/09-30-19-Matrices.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6623376623, "max_line_length": 245, "alphanum_fraction": 0.6237931034, "num_tokens": 1224, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.5865258503624586}}
{"text": "\n\\chapter{Random Variables}\n\n\\section{Important/Useful Theorems}\n\n\\subsection{Chebyshev's Inequality}\n\n\\begin{equation}\n\tP \\{ |\\xi| > \\epsilon \\} \\leq \\frac{1}{\\epsilon^2} \\textbf{E}\\xi^2\n\\end{equation}\n\n\n\\section{Answers to Problems}\n\n\\subsection{}\n%problem 4.1\nThinking of this problem as 4 buckets each with 2 possibilities paves a clear way to the solution of the problem.  There are $2^4 = 16$ possibilities.  There is only one way for them to all be green ($\\xi = 4$), and again only one way for 3 greens followed by one red ($\\xi = 3$).  Once you get to $\\xi = 2$, the last light can be either green or red, giving two possibilities; at $\\xi = 1$ you have two lights that can be either red or green, giving 4 possibilities.  Clearly, then, there are 8 for the case of $\\xi = 0$.  \n\n\\begin{equation}\n\tP(\\xi) = \\left\\{ \\begin{array}{rl}\n\t\\frac{1}{2}, & \\xi=0  \\\\\n\t\\frac{1}{4}, & \\xi=1  \\\\\n\t\\frac{1}{8}, & \\xi=2  \\\\\n\t\\frac{1}{16}, & \\xi=3  \\\\\n\t\\frac{1}{16}, & \\xi=4\n       \\end{array} \\right.\n\\label{answer4.1}\n\\end{equation}\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.2\nTo summarize what we want here:\n\\begin{eqnarray}\n\t\\xi_1 \\neq \\xi_2 \\\\\n\t\\Phi_{\\xi_1}(x) = \\Phi_{\\xi_2}(x) \\\\\n\t\\int_{-\\infty}^x p_{\\xi_1}(x')dx' = \\int_{-\\infty}^x p_{\\xi_2}(x')dx'\n\\end{eqnarray}\nSince the question does not rule out the possibility that the probability distributions are the same, we can just say that $\\xi_1$ is the outcome of a coin-flip experiment and that $\\xi_2$ is the outcome of the first spin measurement of an EPR experiment. \n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.3\nIf \n\\begin{equation}\n\tp(x)dx = \\frac{dx}{b-a}\n\\end{equation}\nthen\n\\begin{equation}\n\t\\Phi(x) = \\int_{a}^x p(x')dx' = \\int_{a}^x \\frac{dx'}{b-a}\n\\end{equation}\n\\begin{equation}\n\t\\Phi(x) = \\frac{x - a}{b-a}\n\\label{answer4.3}\n\\end{equation}\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.4\nThe distribution is clearly not normalized, so the first step will be to normalize it.\n\n\\begin{equation}\n\t\\int^{\\infty}_{-\\infty} \\frac{a}{x^2+1}dx = 1 = a \\pi\n\\end{equation}\n\n\\begin{equation}\n\ta = \\frac{1}{\\pi}\n\\label{answer4.4a}\n\\end{equation}\n\nJust by definition:\n\\begin{equation}\n\t\\Phi(x) = \\int^{x}_{-\\infty} \\frac{1}{\\pi(x'^2+1)}dx' = \\left. \\frac{\\arctan x'}{\\pi} \\right] ^x_{-\\infty} = \\frac{\\arctan x}{\\pi}+\\frac{1}{2}\n\\label{answer4.4b}\n\\end{equation}\nand last but not least:\n\\begin{equation}\n\tP \\{ -1 \\leq x \\leq 1 \\} = \\int^{1}_{-1} \\frac{1}{\\pi(x'^2+1)}dx' = \\frac{1}{2}\n\\label{answer4.4c}\n\\end{equation}\n\n\\textbf{Answers verified}\n\n\\subsection{}\n%problem 4.5\nOnce again we start by normalizing\n\\begin{equation}\n\t1= \\int^{\\infty}_{0} a x^2 e^{-k x} dx = -\\left. \\frac{e^{-k x} \\left(2+2 k x+k^2 x^2\\right)}{k^3} \\right]^{\\infty}_{0}= \\frac{2 a}{k^3}\n\t\\end{equation}\n\\begin{equation}\n\ta = \\frac{k^3}{2}\n\\label{answer4.5a}\n\\end{equation}\n\\begin{equation}\n\t\\Phi(x) = \\int^{x}_{0} \\frac{k^3}{2}x'^2 e^{-k x'}dx' = 1 - \\frac{e^{-k x} \\left(2+2 k x+k^2 x^2\\right)}{2}\n\\label{answer4.5b}\n\\end{equation}\n\\begin{equation}\n\tP \\{ 0 \\leq x \\leq \\frac{1}{k} \\} = \\int_{0}^{\\frac{1}{k}} \\frac{k^3}{2}x^2 e^{-k x}dx = \\frac{2e - 5}{2 e}\n\\label{answer4.5c}\n\\end{equation}\n\\textbf{Answer not verified}\n\n
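These normalization and probability computations are easy to sanity-check numerically; the short sketch below (ours) uses scipy.integrate.quad on the densities of the last two problems.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\n# p(x) = 1/(pi*(1+x^2)): P{-1 <= x <= 1} should be 1/2.\np_cauchy = lambda x: 1.0 / (np.pi * (1.0 + x ** 2))\nprint(quad(p_cauchy, -1, 1)[0])          # ~0.5\n\n# p(x) = (k^3/2) x^2 e^{-kx}: P{0 <= x <= 1/k} = (2e-5)/(2e).\nk = 2.0\np_gamma = lambda x: 0.5 * k ** 3 * x ** 2 * np.exp(-k * x)\nprint(quad(p_gamma, 0, 1 / k)[0])        # ~0.0803\nprint((2 * np.e - 5) / (2 * np.e))       # ~0.0803, matching\n\\end{verbatim}\n\n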
\n\n\\subsection{}\n%problem 4.6\n\\begin{equation}\n\t\\Phi(\\infty) = 1 \\Rightarrow 1 = a + \\frac{b \\pi}{2}\n\\end{equation}\n\\begin{equation}\n\t\\Phi(-\\infty) = 0 \\Rightarrow 0 = a - \\frac{b \\pi}{2}\n\\end{equation}\nSolving these gives\n\\begin{eqnarray}\n\tb = \\frac{1}{\\pi} \\\\\n\ta = \\frac{1}{2}\n\\end{eqnarray}\nSince $\\Phi$ is the integral of $p$, we can take the derivative of it and then make sure it's normalized.\n\\begin{equation}\n\tp(x) = \\frac{d \\Phi}{dx} = \\frac{1}{2 \\pi \\left(1+\\frac{x^2}{4}\\right)}\n\\end{equation}\nAnd the normalization is indeed still correct!\n\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.7\nThe area of the table is simply $\\pi R^2$ and the area of each of the smaller circles is $\\pi r^2$.  The ratio of the sum of the area of the two circles to the total table area will be the chance that one of the circles gets hit: $p = \\frac{2r^2}{R^2}$.\n\\textbf{Answer verified}\n\n\n\\subsection{}\n%problem 4.8\nJust like in example 2 in the book, this problem will go by very well if we draw a picture indicating the given criteria:\n\n\\begin{figure}[hbtp]\n\\includegraphics[width=\\textwidth]{overlap.png}\n\\caption{The Desired Region}\n\\end{figure}\nWhere does this come from?\n\\begin{eqnarray}\n\tx_1 + x_2 \\leq 1 \\\\ \n\tx_2 \\leq 1 - x_1\n\\end{eqnarray}\nwhich is the straight line.\n\\begin{eqnarray}\n\tx_1x_2 \\leq \\frac{2}{9} \\\\ \n\tx_2 \\leq \\frac{2}{9 x_1}\n\\end{eqnarray}\nA little bit of algebra shows that these two lines intersect at $\\frac{1}{3}$ and $\\frac{2}{3}$ so the area underneath the straight line but not above the curved line is\n\\begin{eqnarray}\n\tA =\\int_0^{\\frac{1}{3}}(1-x)dx + \\int_{\\frac{1}{3}}^{\\frac{2}{3}}\\frac{2}{9x}dx + \\int_{\\frac{2}{3}}^{1}(1-x)dx \\\\ \n\t= \\frac{5}{18} + \\int_{\\frac{1}{3}}^{\\frac{2}{3}}\\frac{2}{9x}dx + \\frac{1}{18} \\\\\n\t= \\frac{1}{3} + \\int_{\\frac{1}{3}}^{\\frac{2}{3}}\\frac{2}{9x}dx \\\\ \n\t= \\frac{1}{3} + \\frac{2 \\ln 2}{9} \\approx 0.487366\n\\end{eqnarray}\nAnd the answer is properly normalized since the initial probability distributions were unity (the box length is only 1).\n\\textbf{Answer verified}\n\n\n\\subsection{}\n%problem 4.9\nAs we showed in example number 4, $p_{\\eta}$ is the convolution of $p_{\\xi_1}$ and $p_{\\xi_2}$.\n\\begin{eqnarray}\n\tp_{\\eta}(y) = \\int_{-\\infty}^{\\infty} p_{\\xi_1}(y-x)p_{\\xi_2}(x)dx\n\\end{eqnarray}\nWith $p_{\\xi_1}(x) = \\frac{1}{3}e^{-x/3}$ and $p_{\\xi_2}(x) = \\frac{1}{2}e^{-x/2}$ for $x > 0$ (these rates are consistent with the prefactor $\\frac{1}{6}$ and the result below), the integration will look more logical if we stick in Heaviside step-functions.\n\\begin{eqnarray}\n\tp_{\\eta}(y) = \\frac{1}{6} \\int_{-\\infty}^{\\infty}e^{-\\frac{y-x}{3}}e^{-\\frac{x}{2}}H(y-x)H(x)dx\n\\end{eqnarray}\nClearly $x$ has to be greater than zero and $y$ must be greater than $x$, leading to the following limits of integration.\n\\begin{eqnarray}\n\tp_{\\eta}(y) = \\frac{1}{6} \\int_{0}^{y}e^{-\\frac{y-x}{3}}e^{-\\frac{x}{2}}dx \\\\\n\t= e^{-\\frac{y}{3}}\\left(1- e^{-\\frac{y}{6}}  \\right)\n\\end{eqnarray}\nOnly when $y$ is greater than zero!\n\\textbf{Answer verified}\n\n
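The convolution is easy to check numerically; the sketch below (ours, under the assumption of means $3$ and $2$ just stated) compares a discretized convolution of the two exponential densities against the closed form.\n\\begin{verbatim}\nimport numpy as np\n\ndy = 0.001\ny = np.arange(0, 30, dy)\np1 = (1 / 3) * np.exp(-y / 3)          # density of xi_1\np2 = (1 / 2) * np.exp(-y / 2)          # density of xi_2\nconv = np.convolve(p1, p2)[: len(y)] * dy\nclosed = np.exp(-y / 3) * (1 - np.exp(-y / 6))\nprint(np.max(np.abs(conv - closed)))   # small: the curves agree\n\\end{verbatim}\n\n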
\\subsection{}\n%problem 4.10\nDue to the magic of addition, finding the probability distribution of $\\xi_1 + \\xi_2 + \\xi_3$ is no different from finding the probability distribution of $(\\xi_1 + \\xi_2) + \\xi_3$, but we already know what the probability distribution of the parenthesized quantity is:\n\\begin{equation}\n\tp_{\\xi_1 + \\xi_2}(y) = \\int_{-\\infty}^{\\infty} p_{\\xi_1}(y-x)p_{\\xi_2}(x)dx\n\\end{equation}\nTherefore the total combination is\n\\begin{equation}\n\tp_{\\xi_1 + \\xi_2+ \\xi_3}(z) = \\int_{-\\infty}^{\\infty} p_{\\xi_1 + \\xi_2}(z-y)p_{\\xi_3}(y)dy\n\\end{equation}\n\\begin{equation}\n\tp_{\\xi_1 + \\xi_2+ \\xi_3}(z) = \\int_{-\\infty}^{\\infty} \\left[ \\int_{-\\infty}^{\\infty} p_{\\xi_1}(z-y-x)p_{\\xi_2}(x)dx \\right]p_{\\xi_3}(y)dy\n\\end{equation}\n\\begin{equation}\n\tp_{\\xi_1 + \\xi_2+ \\xi_3}(z) = \\int_{-\\infty}^{\\infty}  \\int_{-\\infty}^{\\infty} p_{\\xi_1}(z-y-x)p_{\\xi_2}(x)p_{\\xi_3}(y)dx dy\n\\end{equation}\nwhich is just a triple convolution.\n\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.11\n\\begin{equation}\n\tp_{\\xi}(n) = \\frac{1}{3^n}\n\\end{equation}\ntherefore\n\\begin{equation}\n\t\\textbf{E}\\xi = \\sum_{n=1}^{\\infty}\\frac{n}{3^n} =  \\frac{3}{4}\n\\label{answer4.11}\n\\end{equation}\n(Note that as written $\\sum_{n=1}^{\\infty} 3^{-n} = \\frac{1}{2}$, so this $p_{\\xi}$ is not normalized.)\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.12\nSince the draws from different urns have nothing to do with each other, the events are clearly independent, so we can multiply probabilities.  The probability of finding a white ball at any given urn is\n\\begin{equation}\n\tp_w = \\frac{w}{w+b}\n\\end{equation}\nIf you find a white ball on the $n^{th}$ try then that means you found $n-1$ black balls before you got to the white ball.  \n\\begin{equation}\n\tp_w(n) = \\frac{b^{n-1} w}{(w+b)^n}\n\\end{equation}\n\\begin{equation}\n\t\\textbf{E}n = \\sum_{i=1}^{\\infty}n p_w(n) = \\sum_{i=1}^{\\infty}n \\frac{b^{n-1} w}{(w+b)^n} = \\frac{b+w}{w}\n\\end{equation}\nThis is the total number of balls drawn; subtract one to get the average number of black balls drawn:\n\\begin{equation}\n\tm=\\frac{b}{w}\n\\end{equation}\nNow for the variance. To start with, we need the average of the square of the random variable.\n\\begin{equation}\n\t\\textbf{E}n^2 = \\sum_{i=1}^{\\infty}n^2 p_w(n) = \\sum_{i=1}^{\\infty}n^2 \\frac{b^{n-1} w}{(w+b)^n} = \\frac{(b+w) (2 b+w)}{w^2}\n\\end{equation}\n\\begin{equation}\n\t\\textbf{D}n = \\textbf{E}n^2 - (\\textbf{E}n)^2 =  \\frac{(b+w) (2 b+w)}{w^2} - \\left( \\frac{b+w}{w} \\right)^2 = \\frac{b^2+wb}{w^2}\n\\end{equation}\nA note that we don't need to subtract anything for the variance since shifting a distribution over does not affect its variance: just its average.\n\\textbf{Answer verified}\n\n\n\n\\subsection{}\n%problem 4.13\n\\begin{equation}\n\t\\textbf{E}\\xi = \\int_{-\\infty}^{\\infty} x\\frac{1}{2}e^{-|x|}\\,dx = 0\n\\end{equation}\nbecause the density is even about $x=0$, so the integrand $x \\cdot \\frac{1}{2}e^{-|x|}$ is odd.\n\\begin{equation}\n\t\\textbf{E}\\xi^2 = \\int_{-\\infty}^{\\infty} x^2\\frac{1}{2}e^{-|x|}\\,dx = 2\n\\end{equation}\nTherefore:\n\\begin{equation}\n\t\\textbf{D}\\xi = \\textbf{E}\\xi^2 - (\\textbf{E}\\xi)^2 = 2\n\\end{equation}\n\\textbf{Answer verified}\n\n\n\n\n\\subsection{}\n%problem 4.14\n\\begin{equation}\n\t\\textbf{E}x = \\int_{a-b}^{a+b} \\frac{xdx}{2b} = a\n\\end{equation}\n\\begin{equation}\n\t\\textbf{E}x^2 = \\int_{a-b}^{a+b} \\frac{x^2dx}{2b} = a^2+\\frac{b^2}{3}\n\\end{equation}\nTherefore:\n\\begin{equation}\n\t\\textbf{D}x = \\frac{b^2}{3}\n\\end{equation}\n\\textbf{Answer verified}\n\n
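The urn result is easy to sanity-check by simulation; the sketch below (ours) draws with replacement until a white ball appears and compares the mean number of preceding black balls with $b/w$.\n\\begin{verbatim}\nimport random\n\ndef black_before_white(b, w):\n    # number of black balls drawn before the first white one\n    n = 0\n    while random.random() >= w / (b + w):   # drew a black ball\n        n += 1\n    return n\n\nb, w = 7, 3\ntrials = [black_before_white(b, w) for _ in range(200_000)]\nprint(sum(trials) / len(trials), b / w)     # both ~2.33\n\\end{verbatim}\n\n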
\n\\subsection{}\n%problem 4.13\n\\begin{equation}\n\t\\textbf{E}\\xi = \\int_{-\\infty}^{\\infty} x\\frac{1}{2}e^{-|x|}dx = 0\n\\end{equation}\nbecause the integrand is odd (the density is even about $x=0$).\n\\begin{equation}\n\t\\textbf{E}\\xi^2 = \\int_{-\\infty}^{\\infty} x^2\\frac{1}{2}e^{-|x|}dx = 2\n\\end{equation}\nTherefore:\n\\begin{equation}\n\t\\textbf{D}\\xi = \\textbf{E}\\xi^2 - (\\textbf{E}\\xi)^2 = 2\n\\end{equation}\n\\textbf{Answer verified}\n\n\\subsection{}\n%problem 4.14\n\\begin{equation}\n\t\\textbf{E}x = \\int_{a-b}^{a+b} \\frac{xdx}{2b} = a\n\\end{equation}\n\\begin{equation}\n\t\\textbf{E}x^2 = \\int_{a-b}^{a+b} \\frac{x^2dx}{2b} = a^2+\\frac{b^2}{3}\n\\end{equation}\nTherefore:\n\\begin{equation}\n\t\\textbf{D}x = \\frac{b^2}{3}\n\\end{equation}\n\\textbf{Answer verified}\n\n\\subsection{}\n%problem 4.15\nIf the distribution function is \n\\begin{equation}\n\t\\Phi_{\\xi}(x) = a + b \\arcsin x, \\quad |x| \\leq 1\n\\end{equation}\nthen it must fulfill the proper boundary conditions as specified by both the problem and the definition of a distribution function.\n\\begin{equation}\n\t\\Phi_{\\xi}(-1)= 0 = a - b \\frac{\\pi}{2}\n\\end{equation}\n\\begin{equation}\n\t\\Phi_{\\xi}(1) = 1 = a + b \\frac{\\pi}{2}\n\\end{equation}\nSome easy algebra gets you\n\\begin{eqnarray}\n\t\\Phi_{\\xi}(x) = \\frac{1}{2} + \\frac{1}{\\pi} \\arcsin x\n\\end{eqnarray}\nTherefore:\n\\begin{equation}\n\tp_{\\xi}(x) = \\frac{1}{\\pi\\sqrt{1-x^2}}\n\\end{equation}\n\\begin{eqnarray}\n\t\\textbf{E}x = \\int_{-1}^{1} \\frac{x dx}{\\pi\\sqrt{1-x^2}} = 0 \\\\\n\t\\textbf{D}x = \\int_{-1}^{1} \\frac{x^2 dx}{\\pi\\sqrt{1-x^2}} = \\frac{1}{2}\n\\end{eqnarray}\n(here $\\textbf{D}x = \\textbf{E}x^2$ because $\\textbf{E}x = 0$).\n\\textbf{Answer verified}\n\n\\subsection{}\n%problem 4.16\nEach side of the die has the same probability, $\\frac{1}{6}$:\n\\begin{equation}\n\t\\textbf{E}x = \\sum_{i=1}^6 \\frac{i}{6} = \\frac{7}{2}\n\\end{equation}\n\\begin{equation}\n\t\\textbf{E}x^2 = \\sum_{i=1}^6 \\frac{i^2}{6} = \\frac{91}{6}\n\\end{equation}\nTherefore:\n\\begin{equation}\n\t\\textbf{D}x = \\frac{91}{6} - \\left( \\frac{7}{2} \\right)^2 = \\frac{91}{6} - \\frac{49}{4} = \\frac{35}{12}\n\\end{equation}\n\\textbf{Answer verified}\n\n\\subsection{}\n%problem 4.17\nThis problem may seem difficult until you realize that being $\\pm \\frac{5}{2}$ away from the mean means you're at either 1 or 6, so no outcome is ever \\emph{more} than $\\frac{5}{2}$ from the mean: the probability of exceeding that distance is zero.  Chebyshev's inequality, however, only guarantees that \n\\begin{equation}\n\tP \\{ |x - \\textbf{E}x| > \\frac{5}{2}  \\} \\leq \\frac{\\textbf{D}x}{(5/2)^2} = \\frac{4}{25} \\cdot \\frac{35}{12} = \\frac{7}{15}\n\\end{equation}\nwhich is far off from the actual answer. (Carelessly using $\\textbf{E}x^2 = \\frac{91}{6}$ in place of $\\textbf{D}x$ here would even give $\\frac{182}{75} \\approx 2.43$, an unphysical ``probability'' greater than 1!)\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.18\nWe want to consider the probability distribution of $\\xi$ by way of $\\eta$\n\\begin{equation}\n\t\\eta = e^{\\frac{a\\xi}{2}} \n\\end{equation}\nWe know from Chebyshev's inequality:\n\n\\begin{equation}\n\tP\\{ \\eta > \\epsilon(\\eta) \\} \\leq \\frac{\\textbf{E}\\eta^2}{\\epsilon(\\eta)^2} \n\\end{equation}\nLet $\\epsilon$ be the error in $\\xi$ we're looking for; since $\\eta$ is an increasing function of $\\xi$ (for $a>0$), the event $\\xi > \\epsilon$ is the same as $\\eta > e^{\\frac{a\\epsilon}{2}}$.\n\\begin{equation}\n\tP\\{ \\xi > \\epsilon \\} \\leq \\frac{\\textbf{E}(e^{\\frac{a\\xi}{2}})^2}{(e^{\\frac{a\\epsilon}{2}})^2} \n\\end{equation}\n\\begin{equation}\n\tP\\{ \\xi > \\epsilon \\} \\leq \\frac{\\textbf{E}e^{a\\xi}}{e^{a\\epsilon}} \n\\end{equation}\n\\textbf{Answer not verified}\n
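\nA brief aside (our addition): since $a>0$ is a free parameter, one may minimize the right-hand side over it, which is exactly the exponential Chebyshev (Chernoff) bound\n\\begin{equation}\n\tP\\{ \\xi > \\epsilon \\} \\leq \\inf_{a>0} e^{-a\\epsilon}\\,\\textbf{E}e^{a\\xi}\n\\end{equation}\n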
\n\\subsection{}\n%problem 4.19\nFirst some initial calculations:\n\\begin{eqnarray}\n\t\\textbf{E}\\xi = \\frac{1}{4} \\left( -2-1+1+2  \\right) = 0 \\\\\n\t\\textbf{E}\\xi^2 = \\frac{1}{4} \\left( (-2)^2+(-1)^2+1^2+2^2  \\right) = \\frac{10}{4} = \\frac{5}{2} = \\textbf{E}\\eta \\\\\n\t\\textbf{E}\\xi^4 = \\frac{1}{4} \\left( (-2)^4+(-1)^4+1^4+2^4  \\right) = \\frac{34}{4} = \\frac{17}{2} = \\textbf{E}\\eta^2\n\\end{eqnarray}\nNow, we know that\n\\begin{eqnarray}\n\tr = \\frac{\\textbf{E} \\left[ (\\xi - \\textbf{E}\\xi)(\\eta - \\textbf{E}\\eta)  \\right]}{\\sqrt{(\\textbf{E}\\xi^2 - (\\textbf{E}\\xi)^2)(\\textbf{E}\\eta^2 - (\\textbf{E}\\eta)^2)}}\n\\end{eqnarray}\nThe product $D$ of the two variances under the square root is the easiest:\n\\begin{equation}\n\tD = \\left( \\frac{5}{2} - 0 \\right)\\left( \\frac{17}{2} - \\frac{5^2}{2^2} \\right) = \\frac{45}{8}\n\\end{equation}\nNow for the numerator:\n\\begin{eqnarray}\n\t\\textbf{E} \\left[ (\\xi - \\textbf{E}\\xi)(\\eta - \\textbf{E}\\eta)  \\right] \\\\\n\t= \\frac{1}{16} \\left[  \\sum_{\\xi, \\eta} (\\xi - \\textbf{E}\\xi)(\\eta - \\textbf{E}\\eta)      \\right]\n\\end{eqnarray}\nIf we look at the sets we'll be summing over, we have $\\xi - \\textbf{E}\\xi = -2, -1, 1, 2$ and $\\eta - \\textbf{E}\\eta = \\frac{3}{2}, -\\frac{3}{2}, -\\frac{3}{2}, \\frac{3}{2}$; since we sum over all possible products and both sets of deviations are symmetric about 0, the products cancel in pairs and the sum goes to zero.\n\\begin{equation}\n\tr=0\n\\end{equation}\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.20\nFirst some initial calculations:\n\\begin{eqnarray}\n\t\\textbf{E}x_1 = \\int_0^{\\frac{\\pi}{2}}\\int_0^{\\frac{\\pi}{2}} x_1 \\sin x_1 \\sin x_2 dx_1dx_2 = 1 \\\\\n\t\\textbf{E}x_2 = 1 \\\\\n\t\\textbf{E}x_1^2 = \\int_0^{\\frac{\\pi}{2}}\\int_0^{\\frac{\\pi}{2}} x_1^2 \\sin x_1 \\sin x_2 dx_1dx_2 = (\\pi - 2) \\\\\n\t\\textbf{E}x_2^2 = (\\pi - 2)\n\\end{eqnarray}\n\\begin{eqnarray}\n\t\\textbf{E} \\left[ (x_1 - \\textbf{E}x_1)(x_2 - \\textbf{E}x_2)  \\right] \\\\\n\t= \\int_0^{\\frac{\\pi}{2}} \\int_0^{\\frac{\\pi}{2}} \\left[(x_1 - 1)(x_2 - 1)\\right]\\sin x_1 \\sin x_2 dx_1dx_2 = 0 \\\\\n\t\\sigma_1 \\sigma_2 =  \\sqrt{(\\pi - 2)- 1}\\sqrt{(\\pi - 2) - 1} = (\\pi - 3) \\\\\n\tr= \\frac{0}{(\\pi - 3)} = 0\n\\end{eqnarray}\n\\textbf{Answer not verified}\n\n\\subsection{}\n%problem 4.21\nFirst some initial calculations:\n\\begin{eqnarray}\n\t\\textbf{E}x_1 = \\frac{1}{2}\\int_0^{\\frac{\\pi}{2}}\\int_0^{\\frac{\\pi}{2}} x_1 \\sin (x_1 + x_2) dx_1dx_2 = \\frac{\\pi}{4} \\\\\n\t\\textbf{E}x_2 = \\frac{\\pi}{4} \\\\\n\t\\textbf{E}x_1^2 = \\frac{1}{2}\\int_0^{\\frac{\\pi}{2}}\\int_0^{\\frac{\\pi}{2}} x_1^2 \\sin (x_1 + x_2) dx_1dx_2 = -2+\\frac{\\pi}{2} +\\frac{\\pi ^2}{8} \\\\\n\t\\textbf{E}x_2^2 = -2+\\frac{\\pi}{2} +\\frac{\\pi ^2}{8}\n\\end{eqnarray}\n\\begin{eqnarray}\n\t\\textbf{E} \\left[ (x_1 - \\textbf{E}x_1)(x_2 - \\textbf{E}x_2)  \\right] \\\\\n\t= \\frac{1}{2}\\int_0^{\\frac{\\pi}{2}} \\int_0^{\\frac{\\pi}{2}} \\sin (x_1 + x_2) \\left[(x_1 - \\frac{\\pi}{4})(x_2 - \\frac{\\pi}{4})\\right]dx_1dx_2  = -\\frac{1}{16} (\\pi -4)^2 \\\\\n\t\\sigma_1\\sigma_2 = \\left(-2+\\frac{\\pi}{2} +\\frac{\\pi^2}{8}\\right) - \\frac{\\pi^2}{16} = -2+\\frac{\\pi}{2} +\\frac{\\pi^2}{16} \\\\\n\tr = \\frac{-\\frac{1}{16} (\\pi -4)^2}{-2+\\frac{\\pi}{2} +\\frac{\\pi^2}{16}}\n\\end{eqnarray}\n\\textbf{Answer verified}\n\n\\subsection{}\n%problem 4.22\nIll-defined problem, don't feel like translating: \\textbf{SKIPPED}\n\n\\subsection{}\n%problem 4.23\nBased off 4.22: \\textbf{SKIPPED}\n\n\\subsection{}\n%problem 4.24\n\\begin{eqnarray}\n\t\\textbf{E}\\xi = \\int_{-\\infty}^{\\infty} \\frac{x}{\\pi(1+x^2)}dx = \\left. \\frac{\\log \\left(x^2+1\\right)}{2 \\pi } \\right]_{-\\infty}^{\\infty} \\rightarrow \\infty \\\\\n\t\\textbf{E}\\xi^2 = \\int_{-\\infty}^{\\infty} \\frac{x^2}{\\pi(1+x^2)}dx = \\left. \\frac{x}{\\pi }-\\frac{\\tan ^{-1}(x)}{\\pi } \\right]_{-\\infty}^{\\infty} \\rightarrow \\infty\n\\end{eqnarray}\nNeither integral actually converges, so we cannot define averages or dispersions for this (Cauchy) distribution.\n\\textbf{Answer verified}\n%%answer template\n%\\subsection{}\n%%problem n.n\n%\n%\n%\\begin{equation}\n%\t\n%\\label{answern.n}\n%\\end{equation}\n%\\textbf{Answer [not] verified}\n\n\n\n\n\n", "meta": {"hexsha": "3e85e1f529ef41596f609fdfb8f28f531f65d448", "size": 15509, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter4.tex", "max_stars_repo_name": "stefk/Rozanov_ptcc_solutions", "max_stars_repo_head_hexsha": "8af26b1cea3966df11a8ecfc5b2b1b95a784d2c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter4.tex", "max_issues_repo_name": "stefk/Rozanov_ptcc_solutions", "max_issues_repo_head_hexsha": "8af26b1cea3966df11a8ecfc5b2b1b95a784d2c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "chapter4.tex", "max_forks_repo_name": "stefk/Rozanov_ptcc_solutions", "max_forks_repo_head_hexsha": "8af26b1cea3966df11a8ecfc5b2b1b95a784d2c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2477272727, "max_line_length": 536, "alphanum_fraction": 0.6433683668, "num_tokens": 6371, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5864804673331394}}
{"text": "\n\n\\section{Sequences, sequences and sequences}\n\nEvery Riordan array $\\mathcal{R}$, as we've seen in \n\\autoref{sec:back:to:the:basics:sequences}, has a particular sequence \n$\\lbrace a_i\\rbrace_{i\\in\\mathbb{N}}$,\ncalled $A$-sequence, such that uniquely characterizes it (to be precise another sequence\n$\\lbrace z_i\\rbrace_{i\\in\\mathbb{N}}$, called $Z$-sequence, is needed \nto fulfill the very first column, together with root element $d_{00}$), \ncapturing the way every element $d_{n+1,k+1}$ can be written as a linear combination\nof elements lying on the previous row, formally:\n\\begin{displaymath}\n    d_{n+1,k+1} = a_{0}d_{n,k} +a_{1}d_{n,k+1} +a_{2}d_{n,k+2} + \n        \\ldots + a_{j}d_{n,k+j}\n\\end{displaymath}\nwhere $k+j = n$. In this section we would like to offer a generalization\nof this \\emph{combinatorial device}, providing a machinery to build sequences \nthat combine coefficients, possibly lying on arbitrary rows, as desired.\nFinally, a connection with $A$-matrix concept is explored, leaving an open \nquestion.\n\nWhat follows doesn't use the $h$-characterization concept directly, \nwe put it here because a \\emph{column oriented}\napproach, as used for derivation of $h$-characterizations, is used as well. \nFor the sake of simplicity,\nhowever, we start from an array $\\mathcal{R}(d(t),h(t))$ and its \n$h$-characterization $\\mathcal{R}_{h(t)}(g(h(t)), h(t))$, for some function $g$.\n\n\\subsection{Localizing $A$-sequences}\n\nConsider the following question: there exists a sequence \n$\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ of coefficients \nsuch that the generic element $d_{n+1,k+1}\\in\\mathcal{R}$ can be written\nas a linear combination of \\emph{all} other elements $d_{n,j}$ \nlying on the previous row, namely for all $j\\in\\lbrace0,1,2,\\ldots,n\\rbrace$? \nLater we ask a more general question, for now tackle the current one.\n\nThe previous statement can be written in a compact way, or \\emph{column-wise}, as:\n\\begin{displaymath}\n    g(h(t))h(t)^k = \\sum_{i\\geq0}{\\gamma_i\\,t\\,g(h(t)) h(t)^i}\\\\\n\\end{displaymath}\nTo see why, recall a generic column \n$k$ has generating function $g(h(t))h(t)^k$ and imagine that series written vertically, \nwith increasing degree of $t$ from top to bottom.\nThe required constraint on coefficient of $t^n$, namely $d_{nk}$, \nto be a linear combination of elements lying on the previous row \ncan be satisfied if we \\emph{shift downward} every column,\nnamely multiplying each one by $t$, and combining them using coefficients\n$\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$. 
\nWe have learnt this trick from \\citeauthor{shapiro:1991}, in \\cite{shapiro:1991}.\n\nSimplification of $g(h(t))$ shows that using the factored representation or\nthe natural one doesn't make any difference, therefore:\n\\begin{displaymath}\n    h(t)^k = t \\sum_{i\\geq 0}{\\gamma_i\\,h(t)^i} = t \\Gamma(h(t))\n\\end{displaymath}\nwhere function $\\Gamma$ is a \\ac{fps} over sequence \n$\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$, which we're searching for.\nDoing a change of variable to abstract over $h(t)$, it is possible to\n\\marginpar{$\\hat{h}(y)^{\\alpha}$ acts as a ruler which,\n    if an element $d_{nk}$ is combined,\n    scrolls to row $n-\\alpha$}\nstructure the basis for a generic schema (or \\emph{device} if you please):\n\\begin{equation}\n    \\left.\\left[y^{k} = \\hat{h}(y)\\,\\Gamma(y) \\right| y = h(t) \\right]\n    \\label{eq:localizing:A:sequence}\n\\end{equation}\n\nBefore going on, let the device consume some known arrays. Take \narray $\\mathcal{C}$, so $\\hat{h}_{\\mathcal{C}}(y) = y-y^2$, hence:\n\\begin{displaymath}\n    \\left.\\left[\\frac{y^{k-1}}{1-y} =  \\Gamma(y) \\right| y = h(t) \\right]\n\\end{displaymath}\nso sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ satisfies:\n\\begin{displaymath}\n    \\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}} = \n        \\left(\\underbrace{\\gamma_{0}=0,\\ldots,\\gamma_{k-2}=0}_{k-1 \\text{ zeros}},\n            \\underbrace{\\gamma_{k-1}=1, \\ldots}_{\\text{infinitely many ones}} \\right)\n\\end{displaymath}\nso $d_{nk}\\in\\mathcal{C}$ is the combination of \\emph{all} coefficients\nlying on the previous row $n-1$ according to $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$,\ntherefore $d_{nk}=\\sum_{i=k-1}^{n-1}{d_{n-1,i}}$.\n\\marginpar{exactly as $A_{\\mathcal{C}}(t)=\\frac{1}{1-t}$ requires} \n\nMotzkin's turn, \n$\\hat{h}_{\\mathcal{M}}(y) = \\frac{y}{1+y+y^2}$, hence:\n\\begin{displaymath}\n        \\left.\\left[y^{k-1}+y^{k}+y^{k+1}=\\Gamma(y)\\right| y = h(t) \\right]\n\\end{displaymath}\nso sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ satisfies:\n\\begin{displaymath}\n    \\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}} = \n        \\left(\\underbrace{\\gamma_{0}=0,\\ldots,\\gamma_{k-2}=0}_{k-1 \\text{ zeros}},\n            1,1,1,\n            \\underbrace{\\gamma_{k+2}=0, \\ldots}_{\\text{infinitely many zeros}} \\right)\n\\end{displaymath}\n\\marginpar{exactly as $A_{\\mathcal{M}}(t)=1+t+t^{2}$ requires} \nso $d_{nk}\\in\\mathcal{M}$ is the combination of \\emph{all} coefficients\nlying on the previous row $n-1$ according to $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$,\ntherefore $d_{nk}=d_{n-1,k-1}+d_{n-1,k}+d_{n-1,k+1}$.\n\\\\\\\\\n\\marginpar{getting $A$-sequence back}\nWhat we've done is nothing more nor less than writing $A$-sequences \nin a more generic format, putting in evidence the \\emph{local} meaning \nof the combination: the dependency on the column $k$ an element belongs to \nis written explicitly. On the other hand, natural $A$-sequences \nare stated with a fixed start index in mind, namely $k-1$ if combining \nfor elements in a column $k$.\n
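\nA quick numerical check of ours: for $\\mathcal{M}$ with $k=2$, the localized sequence prescribes\n\\begin{displaymath}\n    d_{5,2} = d_{4,1} + d_{4,2} + d_{4,3} = 12 + 9 + 4 = 25\n\\end{displaymath}\nwhich indeed holds in the Motzkin triangle.\n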
\n\\marginpar{$y^{\\beta}$ acts as a second ruler too, which scrolls\n    to column $\\beta$ directly. Rulers $y^{k-1}$ and $\\hat{h}(y)$ \n    scroll to column $k-1$ and to row $n-1$ respectively, after \n    which combination starts as $\\Gamma$ says\\ldots}\nIt is possible to get the natural $A$-sequence back by requiring that \nthe combination of elements (denoted by coefficients of \\ac{fps} $\\Gamma$)  \nlying on the previous row (denoted by $\\hat{h}(y)$)\nstarts at index $k-1$ (denoted by $y^{k-1}$), formally:\n\\begin{displaymath}\n    \\left[y^{k} = y^{k-1}\\,\\hat{h}(y)\\,\\Gamma(y) \\big| y = h(t) \\right] = \n    \\left[y = \\hat{h}(y)\\,\\Gamma(y) \\big| y = h(t) \\right]\n\\end{displaymath}\nSince $h(t) = t\\,A(h(t))$, the \\ac{fps}\n$\\Gamma$ and $A$ are defined over the same sequence, the $A$-sequence, as desired.\n\\\\\\\\\n\\marginpar{The following can be skipped on a first reading}\nHere are a couple of applications of the concepts described in previous paragraphs. \n\nWhat if we ask: find a sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ \nsuch that an element $d_{nk}\\in\\mathcal{C}$ combines elements lying on the\nrow $3$ lines above, starting from column index $2$, \nnamely $d_{n-3,j}$ for all $j\\in\\lbrace 2,\\ldots,n-3\\rbrace$. \nSet the device:\n\\begin{displaymath}\n    \\left[y^{k} = y^{2}\\,\\hat{h}(y)^3\\,\\Gamma(y) \\big| y = h(t) \\right] =\n        \\left.\\left[\\frac{y^{k-5}}{(1-y)^3} = \\Gamma(y) \\right| y = h(t) \\right]\n\\end{displaymath}\n\nChecking what we've obtained, assume you want to find coefficient $d_{7,5}$, \n so expand function $\\Gamma$ with $k=5$:\n\\begin{displaymath}\n    \\left.\\left[\\Gamma(y)=1 + 3y + 6y^2 + 10y^3 + 15y^4 + \\mathcal{O}(y^5) \n        \\big| y = h(t) \\right]\\right|_{k=5}\n\\end{displaymath}\ntherefore $d_{7,5}=d_{4,2} + 3\\,d_{4,3} + 6\\,d_{4,4}$, \ninstantiating $27 = 9 + 3\\cdot4 + 6\\cdot1$, which holds.\nJust another element, say $d_{9,7}$, so expand function $\\Gamma$ with $k=7$:\n\\marginpar{varying $k$ shifts $\\lbrace \\gamma_{i}\\rbrace_{i\\in\\mathbb{N}}$ only.\n    In \\autoref{par:generalized:A:sequence:Delannoy:example}\n    we will get a new sequence for each $k$}\n\\begin{displaymath}\n    \\left.\\left[\\Gamma(y)=y^2 + 3y^3 + 6y^4 + 10y^5 +  \\mathcal{O}(y^6) \n        \\big| y = h(t) \\right]\\right|_{k=7}\n\\end{displaymath}\ntherefore $d_{9,7}=d_{6,4} + 3\\,d_{6,5} + 6\\,d_{6,6}$, \ninstantiating $44 = 20 + 3\\cdot6 + 6\\cdot1$, which holds.\nIt's interesting to observe the following fact: while a natural $A$-sequence \nalways starts combining from the previous column index, 
with\nthis generalization we get the same combining coefficients as the $A$-sequence\nstates, augmented with the column index from which the combination should start.\n\nThe previous two checks show exactly this aspect: for column index $k=5$\nthe combination starts at column index $2$, while for column index $k=7$ the combination\nstarts at column index $4$.\n\\\\\\\\\nOne last application, before the interlude: \nfind a sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ such that \nan element $d_{nk}\\in\\mathcal{M}$ combines \\emph{all} elements lying on \nthe next row, namely $d_{n+1,j}$ for all $j\\in\\lbrace0,\\ldots,n+1\\rbrace$.\nThe first step is always setting the device:\n\\begin{displaymath}\n    \\left[y^{k} = \\hat{h}(y)^{-1}\\,\\Gamma(y) \\big| y = h(t) \\right]=\n        \\left[ \\frac{y^{k + 1}}{y^2 + y + 1} = \\Gamma(y) \\big| y = h(t) \\right]\n\\end{displaymath}\nin order to find $d_{6,3}$, expand function $\\Gamma$ with $k=3$:\n\\begin{displaymath}\n    \\left.\\left[\\Gamma(y)=y^4 -y^5 + y^7 -y^8 +y^{10} + \\mathcal{O}(y^{11}) \n        \\big| y = h(t) \\right]\\right|_{k=3}\n\\end{displaymath}\ntherefore $d_{6,3}=d_{7,4} - d_{7,5} + d_{7,7}$, instantiating $44 = 70 -27 +1$, \nwhich holds. Just another element, say $d_{8,1}$, so expand function \n$\\Gamma$ with $k=1$:\n\\begin{displaymath}\n    \\left.\\left[\\Gamma(y)=y^2 -y^3 + y^5 -y^6 + y^8 -y^9 + y^{11} + \n        \\mathcal{O}(y^{12}) \\big| y = h(t) \\right]\\right|_{k=1}\n\\end{displaymath}\n\n\\marginpar{what about $d_{n0}\\in\\mathcal{M}$? \n    It should provide a combinatorial identity for the $n$th\n    Motzkin number\\ldots}\ntherefore $d_{8,1}=d_{9,2} - d_{9,3} + d_{9,5}- d_{9,6}+ d_{9,8}- d_{9,9}$, \ninstantiating $512 = 1422 -1140 +369 -147 +9 -1$, which holds.\n\\iffalse\n\\\\\\\\\nAs a final remark, under the insights of two solved exercises, is that a\nsequence found using this approach, knows which elements to combine and which\nones to discard, we're required to supply the set of available elements only.\n\\fi\n\n\\subsection{Interlude: let's generalize}\n\\label{subsec:sequences:interluce:generalization}\n\nPrevious applications show a pattern that can be pointed out by looking at \nthe generic device for a Riordan array $\\mathcal{R}$, where \nfunction $h$ is its second component and function $\\hat{h}$ \nis its compositional inverse:\n\\begin{displaymath}\n    \\left[y^{k} = \\hat{h}(y) \\Gamma(y) \\big| y = h(t) \\right]\n\\end{displaymath}\nthere's \\emph{an equation} on the left-hand side of the variable substitution block, \nwhere function $\\Gamma$ is the unknown. In this format, \na constraint is stated: let $d_{nk}\\in\\mathcal{R}$ be a generic element; \nthen a \\emph{localized} sequence \n$\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ with respect to column $k$ exists, which\ncombines all elements lying on the previous row $n-1$.\n\nSince we're dealing with an equation, we can augment it as desired in order to\nimpose additional constraints. For instance, consider the following question:\nfind a sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ such that \nan element $d_{nk}\\in\\mathcal{R}$ combines \\emph{all} elements lying on \n\\emph{some} row $n-\\alpha$, with the additional property of adding the combination of \nelements, defined by a \\emph{given} sequence $\\lbrace \\theta_{i} \\rbrace_{i\\in\\mathbb{N}}$, \nlying on \\emph{some different} row $n-\\beta$. 
Formally, with $\\alpha,\\beta\\in\\mathbb{Z}$ \nand $\\alpha \\not=\\beta$, we would like to express $d_{nk}$ as:\n\\begin{displaymath}\n    d_{nk} = \\sum_{i=0}^{n-\\alpha}{\\gamma_{i}\\,d_{n-\\alpha,i}} + \n        \\sum_{i=0}^{n-\\beta}{\\theta_{i}\\,d_{n-\\beta,i}}\n\\end{displaymath}\nso set the device as usual:\n\\begin{displaymath}\n    \\left[y^{k} = \\hat{h}(y)^{\\alpha} \\Gamma(y) + \\hat{h}(y)^{\\beta} \\Theta(y) \\big| y = h(t) \\right]\n\\end{displaymath}\nNotwithstanding its generality, it is not the best one: to be truly general,\nwe should introduce two new parameters $c_\\mu$ and $c_\\nu$, which allow us\nto fix the column indices from which the combinations denoted by functions\n$\\Gamma$ and $\\Theta$ start, respectively. Here is the most general device:\n\\marginpar{most general device}\n\\begin{displaymath}\n    \\left[y^{k} = y^{c_\\mu}\\hat{h}(y)^{\\alpha} \\Gamma(y) + \n        y^{c_\\nu}\\hat{h}(y)^{\\beta} \\Theta(y) \\big| y = h(t) \\right]\n\\end{displaymath}\n\\\\\n\\paragraph{An example about $\\mathcal{D}$}{\n    \\label{par:generalized:A:sequence:Delannoy:example}\n    \\marginpar{an example about $\\mathcal{D}$}\n    Let us put the generalization to work:\n    find a sequence $\\lbrace \\gamma_{i} \\rbrace_{i\\in\\mathbb{N}}$ such that \n    an element $d_{nk}\\in\\mathcal{D}$, in the Delannoy array, \n    combines \\emph{all} elements lying on \n    the next row, namely $n+1$, in addition to the\n    combination of elements lying two rows above, namely $n-2$, \n    simply defined as their sum. \n\n    In order to set the device, we need $\\hat{h}_{\\mathcal{D}}$:\n    \\begin{displaymath} \n        \\hat{h}_{\\mathcal{D}}(y) = \\frac{\\sqrt{1+6y+y^2}-y-1}{2}\n    \\end{displaymath} \n    and build function $\\Theta$ from the additional requirement:\n    \\begin{displaymath} \n        \\Theta(y) = \\frac{1}{1-y}\n    \\end{displaymath} \n    now we're ready:\n    \\begin{lenghtydisplaymath}\n    \\begin{split}\n        &\\left.\\left[y^{k} = \\hat{h}_{\\mathcal{D}}(y)^{-1} \\Gamma(y) + \n            \\hat{h}_{\\mathcal{D}}(y)^{2}\\frac{1}{1-y} \\right| y = h(t) \\right]\\\\\n        &=\\left.\\left[\\frac{y^{3} + {\\left(y^{2} - 1\\right)} y^{k} + 6 \\, y^{2} - {\\left({\\left(y - 1\\right)} y^{k} + y^{2} + 3 \\, y + 1\\right)} \\sqrt{y^{2} + 6 \\, y + 1} + 6 \\, y + 1}{2 \\, {\\left(1-y\\right)}}=\\Gamma(y) \\right| y = h(t) \\right]\\\\\n    \\end{split}\n    \\end{lenghtydisplaymath}\n    in order to find $d_{8,1}$, expand function $\\Gamma$ with $k=1$:\n    \\begin{lenghtydisplaymath}\n        \\left.\\left[\\Gamma(y)=y^2 -3y^3 + 11y^4  -47y^5 + 211y^6 -987y^7 + 4747y^8 \n            -23335y^9 + \\mathcal{O}(y^{10}) \\big| y = h(t) \\right]\\right|_{k=1}\n    \\end{lenghtydisplaymath}\n    therefore $d_{8,1}=d_{9,2} -3\\,d_{9,3} +11\\,d_{9,4}-47\\,d_{9,5} \n        +211\\,d_{9,6} -987\\,d_{9,7} +4747\\,d_{9,8}-23335\\,d_{9,9}+\\epsilon$,\n        where $\\epsilon = d_{6,0}+d_{6,1}+d_{6,2}+d_{6,3}+d_{6,4}+d_{6,5}+d_{6,6} = \n                2(d_{6,0}+d_{6,1}+d_{6,2})+d_{6,3} = 2(1 + 11 + 41) + 63 = 169$: \n        instantiating $15 = 113 -3\\cdot377 +11\\cdot681 -47\\cdot681 +211\\cdot377\n            -987\\cdot113 +4747\\cdot17 -23335 + \\epsilon = -154 + 169$, which holds.\n\n    It's interesting to observe that function $\\Gamma$ above, an algebraic one,\n    produces sequences that use different coefficients for different values of $k$,\n    and we observe a curious pattern for increasing values of \n    $k\\in\\lbrace 0,\\ldots,10 \\rbrace$:\n    
\\begin{lenghtydisplaymath}\n        \\begin{split}\n            &\\left.\\left[\\Gamma(y)=\n            y -2 y^{2} + 5 y^{3} -17 y^{4} + 65 y^{5} -273 y^{6} + 1213 y^{7} -5617 y^{8} + \\mathcal{O}\\left(y^{9}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=0}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            y^{2} -3 y^{3} + 11 y^{4} -47 y^{5} + 211 y^{6} -987 y^{7} + 4747 y^{8} + \\mathcal{O}\\left(y^{9}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=1}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            3 y^{4} -19 y^{5} + 99 y^{6} -503 y^{7} + 2547 y^{8} -12971 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=2}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 6 y^{4} -27 y^{5} + 127 y^{6} -615 y^{7} + 3031 y^{8} -15171 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=3}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -24 y^{5} + 119 y^{6} -587 y^{7} + 2919 y^{8} -14687 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=4}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 122 y^{6} -595 y^{7} + 2947 y^{8} -14799 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=5}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 121 y^{6} -592 y^{7} + 2939 y^{8} -14771 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=6}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 121 y^{6} -593 y^{7} + 2942 y^{8} -14779 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=7}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 121 y^{6} -593 y^{7} + 2941 y^{8} -14776 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=8}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 121 y^{6} -593 y^{7} + 2941 y^{8} -14777 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=9}\\\\\n            &\\left.\\left[\\Gamma(y)=\n            - y^{3} + 5 y^{4} -25 y^{5} + 121 y^{6} -593 y^{7} + 2941 y^{8} -14777 y^{9} + \\mathcal{O}\\left(y^{10}\\right)\n                \\big| y = h(t) \\right]\\right|_{k=10}\\\\\n        \\end{split}\n    \\end{lenghtydisplaymath}\n\n    Starting from index $k=3$, it seems that one more coefficient stabilizes\n    in the expansion, one increment of $k$ at a time, pretty nice.\n    To finish this section involving the Delannoy triangle, we report its natural\n    $A$-sequence, computed with $\\left.\\left[y^{k} = y^{k-1}\n    \\hat{h}_{\\mathcal{D}}(y)\\,\\Gamma(y) \\right| y = h(t) \\right]$:\n\n    \\marginpar{$\\mathcal{D}$'s $A$-sequence which, of course, doesn't\n        depend on $k$: it's unique\\ldots}\n    \\begin{lenghtydisplaymath}\n        \\left[\\Gamma(y)=\n        1 + 2 y -2\\,y^{2} + 6 y^{3} -22\\,y^{4} + 90 y^{5} -394\\,y^{6} + 1806 y^{7}  + \\mathcal{O}\\left(y^{8}\\right)\n            \\big| y = h(t) \\right]\\\\\n    \\end{lenghtydisplaymath}\n}
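\n\nAs a cross-check (our computation, not in the original text): from $h(t) = t\\,A(h(t))$ \nwe have $A(y) = y/\\hat{h}_{\\mathcal{D}}(y)$, and rationalizing gives the closed form\n\\begin{displaymath}\n    A_{\\mathcal{D}}(y) = \\frac{2y}{\\sqrt{1+6y+y^2}-y-1} = \\frac{1+y+\\sqrt{1+6y+y^2}}{2}\n        = 1 + 2y -2y^{2} + 6y^{3} - \\ldots\n\\end{displaymath}\nin agreement with the expansion above.\n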
\n\n\\subsection{$A$-matrix connection}\n\nIs the previous device the most general one? Is it really? \nWe lied. In this section we enhance the last version to \nfind an interesting connection with \nthe $A$-matrix concept of a Riordan array $\\mathcal{R}$, \nintroduced in \\autoref{sec:back:to:the:basics:sequences}.\n\\\\\\\\\nLet $\\lbrace\\Omega_{i}\\rbrace_{i\\in\\mathbb{N}}$ be a collection of formal\npower series and assume we would like to \\emph{not} localize them (\\`a la the natural \n$A$-sequence) in order to combine elements lying on the previous row, on the previous\nbut one and so on \\ldots hence, with respect to an element $d_{nk}\\in\\mathcal{R}$,\nset the following device:\n\n\\marginpar{since columns are infinite, so is this device}\n\\begin{displaymath}\n    \\left.\\left[\n            \\begin{split}\n                y^{k} &= y^{k-1}\\hat{h}(y) \\Omega_{0}(y) \\\\\n                &+ y^{k-1}\\hat{h}(y)^{2} \\Omega_{1}(y) \\\\\n                &+ y^{k-1}\\hat{h}(y)^{3} \\Omega_{2}(y) \\\\\n                &+ \\ldots\\\\ \n                &+ y^{k-1}\\hat{h}(y)^{i+1} \\Omega_{i}(y)\\\\\n                &+ \\ldots\n            \\end{split}\n        \\right| y = h(t) \\right]\n\\end{displaymath}\na simplification yields:\n\\begin{displaymath}\n    \\left.\\left[\n        y = \\hat{h}(y) \\Omega_{0}(y) + \n        \\hat{h}(y)^{2} \\Omega_{1}(y) + \\hat{h}(y)^{3} \\Omega_{2}(y) +\n        \\ldots +\n        \\hat{h}(y)^{i+1} \\Omega_{i}(y) + \\ldots\n        \\right| y = h(t) \\right]\n\\end{displaymath}\nHere are our thoughts, each in a dedicated subsection.\n\n\\subsubsection{Increasing generality}\nIt can be seen as a further generalization of the device introduced in\n\\autoref{subsec:sequences:interluce:generalization},\nwhere some new functions are thrown in: it allows us to augment the\ncombination to include sets of coefficients lying on more rows, \neach of them combined according to a function $\\Omega_{j}$. 
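\nFor a concrete sketch (ours, anticipating the point made precise in the next paragraph): \nin $\\mathcal{P}$, prescribe $\\Omega_{1}(y)=1$ and drop all higher $\\Omega_{i}$; the device then forces\n\\begin{displaymath}\n    \\left.\\left[y = \\hat{h}_{\\mathcal{P}}(y)\\,\\Omega_{0}(y) + \\hat{h}_{\\mathcal{P}}(y)^{2} \\right| y = h(t) \\right]\n    \\implies \\Omega_{0}(y) = (1+y) - \\frac{y}{1+y} = \\frac{1+y+y^{2}}{1+y}\n\\end{displaymath}\n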
\n\n\\marginpar{one equation in one unknown}\nIf the combination is augmented using a fixed number $r$ \nof functions $\\Omega_{0},\\ldots,\\Omega_{r-1}$\nand $r-1$ of them $\\Omega_{i_{0}},\\ldots,\\Omega_{i_{r-2}}$\nare \\emph{explicitly defined}, then it is possible to find \nthe remaining one $\\Omega_{i_{r-1}}$, because it amounts to solving a system\nof one equation in one unknown.\n\n% the following should be an attempt to find two rows in the\n% `A-matrix' but it merely fails\n%\\subsubsection{Two characterizing functions}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%Assume we allow that combinations starts from column index $0$,\n%%so the very general device in this case has the following form:\n%\\begin{displaymath}\n    %\\left.\\left[\n        %y^{k} = \\hat{h}(y) \\Omega_{0}(y) + \\hat{h}(y)^{2} \\Omega_{1}(y) \n            %\\right| y = h(t) \\right]\n%\\end{displaymath}\n%Even if we don't define one of functions $\\Omega_{0}$ or $\\Omega_{1}$\n%to find the other, it's possible to find \\emph{both of them} by solving the system \n%obtained letting $k\\in\\lbrace0,1\\rbrace$,\n%formally functions $\\Omega_{0}$ and $\\Omega_{1}$ satisfy both:\n%\\begin{displaymath}\n    %\\left.\\left[\n        %1 = \\hat{h}(y) \\Omega_{0}(y) + \\hat{h}(y)^{2} \\Omega_{1}(y) \n            %\\right| y = h(t) \\right]\n%\\end{displaymath}\n%and:\n%\\begin{displaymath}\n    %\\left.\\left[\n        %y = \\hat{h}(y) \\Omega_{0}(y) + \\hat{h}(y)^{2} \\Omega_{1}(y) \n            %\\right| y = h(t) \\right]\n%\\end{displaymath}\n\n\\subsubsection{$A$-sequence $\\hat{h}$-representation}\n\nIt provides a ``representation'' of \\ac{fps} $A$, over a corresponding\n$A$-sequence $\\lbrace a_{i}\\rbrace_{i\\in\\mathbb{N}}$, ``in base'' $\\hat{h}$, \nthe compositional inverse of function $h$.  Recognizing $h(t)=tA(h(t))$ yields:\n\\begin{displaymath}\n    \\left.\\left[\n        A(y) =  \\Omega_{0}(y) + \n        \\hat{h}(y)\\,\\Omega_{1}(y) + \\hat{h}(y)^{2}\\,\\Omega_{2}(y) + \\ldots +\n        \\hat{h}(y)^{i}\\,\\Omega_{i}(y) + \\ldots\n        \\right| y = h(t) \\right]\n\\end{displaymath}\n\nThis is quite interesting from the theoretical point of \nview but it can also have a practical application\nwhen functions $A$ and $\\hat{h}$ are polynomials. If this is the case,\nwe can apply the \\emph{division theorem} for polynomials and \nproceed to factor polynomial $A$, dividing it by polynomial $\\hat{h}$\nrepeatedly. For the sake of clarity, consider the following cases:\n\n\\begin{itemize}\n\n    % Pascal's $A$-sequence\n    \\item let $\\mathcal{P}$ be the Pascal array and recall \n        the following facts:\n        \n        \\marginpar{apply it to $A_{\\mathcal{P}}$}\n        \\begin{displaymath} \n            \\hat{h}_{\\mathcal{P}}(y)=\\frac{y}{1+y} \\quad\\quad A_{\\mathcal{P}}(y)=1+y\n        \\end{displaymath} \n        therefore, by the \\emph{division theorem}, \n        there exist functions $\\Omega_{0}$ and $\\Delta_{0}$ such that:\n        \\begin{displaymath}\n            \\left.\\left[\n                (1+y)^2 =  (1+y)\\Omega_{0}(y) + y\\,\\Delta_{0}(y) \\right| y = h_{\\mathcal{P}}(t) \\right]\n        \\end{displaymath}\n        dividing $(1+y)^2$ by $y$ yields $\\Delta_{0}(y)=2+y$ and $(1+y)\\Omega_{0}(y)=1$, \n        which defines function $\\Omega_{0}$. 
We need to \n        keep applying the \\emph{division theorem}, so there exist the following\n        sequences of functions such that:\n\n        %\\vskip+20pt plus.5fill\n        \\marginpar{$\\triangleq$ is the \\emph{unify} operator}\n        \\begin{lenghtydisplaymath}\n            \\begin{split} \n                & \\left.\\left[\n                    \\frac{\\Delta_{0}(y)}{\\hat{h}_{\\mathcal{P}}(y)} = \n                        \\left(y+3, 2\\right)\\triangleq\n                        \\left(\\Delta_{1}(y), (1+y)\\Omega_{1}(y) \\right)\n                     \\right| y = h_{\\mathcal{P}}(t) \\right]\\\\\n                & \\left.\\left[\n                    \\frac{\\Delta_{1}(y)}{\\hat{h}_{\\mathcal{P}}(y)} = \n                        \\left(y+4, 3\\right)\\triangleq\n                        \\left(\\Delta_{2}(y), (1+y)\\Omega_{2}(y) \\right)\n                     \\right| y = h_{\\mathcal{P}}(t) \\right]\\\\\n                & \\left.\\left[\n                    \\frac{\\Delta_{2}(y)}{\\hat{h}_{\\mathcal{P}}(y)} = \n                        \\left(y+5, 4\\right)\\triangleq\n                        \\left(\\Delta_{3}(y), (1+y)\\Omega_{3}(y) \\right)\n                     \\right| y = h_{\\mathcal{P}}(t) \\right]\\\\\n                & \\left.\\left[\n                    \\frac{\\Delta_{3}(y)}{\\hat{h}_{\\mathcal{P}}(y)} = \n                        \\left(y+6, 5 \\right)\\triangleq\n                        \\left(\\Delta_{4}(y), (1+y)\\Omega_{4}(y) \\right)\n                     \\right| y = h_{\\mathcal{P}}(t) \\right]\\\\\n                &\\vdots\n            \\end{split} \n        \\end{lenghtydisplaymath}\n        A pattern seems to emerge and can be captured with the \n        following rule in order to define function $\\Omega_{i}$:\n        \\begin{displaymath} \n                \\left.\\left[\n                    \\Delta_{i-1}(y) \\triangleq \n                        q_{i-1}(y)\\,\\hat{h}_{\\mathcal{P}}(y) + r_{i-1}\n                        \\rightarrow (1+y)\\,\\Omega_{i}(y)=r_{i-1} \n                    \\right| y = h_{\\mathcal{P}}(t) \\right]\n        \\end{displaymath} \n        where each polynomial $q_{j}$ satisfies $q_{j}(0)\\not=0$ and\n        each $r_{j}\\in\\mathbb{N}$ is a remainder, with initial \n        condition $\\Delta_{-1}(y)=0\\,\\hat{h}_{\\mathcal{P}}(y)+1$. 
\n        Therefore it is possible to state\n        a closed formula for function $\\Omega_{i}$: \n        \\begin{displaymath} \n            \\Omega_{i}(y)=\\frac{1+i}{1+y}\n        \\end{displaymath} \n\n        Putting it all together, the factorization of polynomial \n        $A_{\\mathcal{P}}$ with respect to polynomial $\\hat{h}_{\\mathcal{P}}$ is:\n        \\begin{displaymath}\n                \\left.\\left[\n                    A_{\\mathcal{P}}(y) = \\sum_{i \\geq0}{\\left(\\frac{1+i}{1+y}\\right)\n                        \\hat{h}_{\\mathcal{P}}(y)^{i}} \\right| y = h_{\\mathcal{P}}(t) \\right]\n        \\end{displaymath}\n\n    % Catalan's $A$-sequence\n    \\item for the Catalan array $\\mathcal{C}$ things are quite interesting;\n        first of all, recall the following facts:\n\n        \\marginpar{apply it to $A_{\\mathcal{C}}$}\n        \\begin{displaymath} \n            \\hat{h}_{\\mathcal{C}}(y)=y-y^2 \\quad\\quad \n                A_{\\mathcal{C}}(y)=\\frac{1}{1-y}\n        \\end{displaymath} \n        therefore, by the \\emph{division theorem}, \n        there exist functions $\\Omega_{0}$ and $\\Delta_{0}$ such that:\n        \\begin{displaymath}\n            \\left.\\left[\n                1 = (1-y)\\Omega_{0}(y) + y(1-y)^{2}\\,\\Delta_{0}(y) \n                    \\right| y = h_{\\mathcal{C}}(t) \\right]\n        \\end{displaymath}\n        \\marginpar{conjecture: is the Catalan array's $A$-matrix \\emph{unique}? \n            Moreover: $\\Omega_{k}=0$ for $k>0$?}\n        dividing $1$ by $y(1-y)^2$ yields $\\Delta_{0}(y)=0$ \n        and $(1-y)\\Omega_{0}(y)=1$, which defines function $\\Omega_{0}$. \n        This means that ``polynomial'' $A_{\\mathcal{C}}$ is already\n        the factorization of itself, which is the same as saying that\n        there exists a \\emph{unique} $A$-matrix for $\\mathcal{C}$, pretty curious;\n\n        \\item for the Motzkin array $\\mathcal{M}$ things are interesting as well;\n            first of all, recall the following facts:\n            \\marginpar{apply it to $A_{\\mathcal{M}}$}\n            \\begin{displaymath} \n                \\hat{h}_{\\mathcal{M}}(y)=\\frac{y}{1+y+y^2} \\quad\\quad \n                    A_{\\mathcal{M}}(y)=1+y+y^2\n            \\end{displaymath} \n            therefore, by the \\emph{division theorem}, \n            there exist functions $\\Omega_{0}$ and $\\Delta_{0}$ such that:\n            \\begin{displaymath}\n                \\left.\\left[\n                    (1+y+y^2)^2 = (1+y+y^2)\\Omega_{0}(y) + y\\,\\Delta_{0}(y) \n                        \\right| y = h_{\\mathcal{M}}(t) \\right]\n            \\end{displaymath}\n            dividing $(1+y+y^2)^2$ by $y$ yields $\\Delta_{0}(y)=2+3y+2y^2+y^3$ \n            and $(1+y+y^2)\\Omega_{0}(y)=1$, which defines function $\\Omega_{0}$,\n            which expanded as a \\ac{fps} equals: \n            \\begin{displaymath}\n                \\left.\\left[\n                    \\Omega_{0}(y) = 1 -y +y^{3} -y^{4} + y^{6} \n                        -y^{7} + y^{9} -y^{10} \n                        + y^{12} + \\mathcal{O}\\left(y^{13}\\right)\n                        \\right| y = h_{\\mathcal{M}}(t) \\right]\n            \\end{displaymath}\n            We need to keep applying the \\emph{division theorem}, so there exist \n            the following sequences of functions such that:\n            \\begin{lenghtydisplaymath}\n                \\begin{split} \n                    & \\left.\\left[\n                        \\frac{\\Delta_{0}(y)}{\\hat{h}_{\\mathcal{M}}(y)} = \n                            \\left(y^4 + 
3y^3 + 6y^2 + 7y + 5, 2\\right)\\triangleq\n                            \\left(\\Delta_{1}(y), (1+y+y^2)\\Omega_{1}(y) \\right)\n                         \\right| y = h_{\\mathcal{M}}(t) \\right]\\\\\n                    & \\left.\\left[\n                        \\frac{\\Delta_{1}(y)}{\\hat{h}_{\\mathcal{M}}(y)} = \n                            \\left(y^5 + 4y^4+10y^3+16y^2 + 18y + 12, 5\\right)\\triangleq\n                            \\left(\\Delta_{2}(y), (1+y+y^2)\\Omega_{2}(y) \\right)\n                         \\right| y = h_{\\mathcal{M}}(t) \\right]\\\\\n                    & \\left.\\left[\n                        \\frac{\\Delta_{2}(y)}{\\hat{h}_{\\mathcal{M}}(y)} = \n                            \\left(y^6 + 5y^5 + 15y^4 + 30y^3 + 44y^2+46y+30, \n                                12\\right)\\triangleq\n                            \\left(\\Delta_{3}(y), (1+y+y^2)\\Omega_{3}(y) \\right)\n                         \\right| y = h_{\\mathcal{M}}(t) \\right]\\\\\n                    & \\left.\\left[\n                        \\frac{\\Delta_{3}(y)}{\\hat{h}_{\\mathcal{M}}(y)} = \n                            \\left(y^7+6y^6+21y^5+50y^4+89y^3+120y^2+120y+76, 30\n                                \\right)\\triangleq\n                            \\left(\\Delta_{4}(y), (1+y+y^2)\\Omega_{4}(y) \\right)\n                         \\right| y = h_{\\mathcal{M}}(t) \\right]\\\\\n                    &\\vdots\n                \\end{split} \n            \\end{lenghtydisplaymath}\n            A pattern seems to emerge and can be captured with the \n            following rule in order to define function $\\Omega_{i}$:\n            \\begin{displaymath} \n                    \\left.\\left[\n                        \\Delta_{i-1}(y) \\triangleq q_{i-1}(y)\\,\\hat{h}_{\\mathcal{M}}(y)\n                        + r_{i-1} \\rightarrow (1+y+y^2)\\,\\Omega_{i}(y)=r_{i-1}\n                         \\right| y = h_{\\mathcal{M}}(t) \\right]\n            \\end{displaymath} \n            where each polynomial $q_{j}$ satisfies $q_{j}(0)\\not=0$ and\n            each $r_{j}\\in\\mathbb{N}$ is a remainder, with initial \n            condition $\\Delta_{-1}(y)=1$.\n\n            \\marginpar{something similar already occurs within $\\mathcal{P}$}\n            It is quite interesting that, as in the case of array $\\mathcal{P}$,\n            the sequence $\\lbrace r_{j} \\rbrace_{j\\in\\mathbb{N}}$ of remainders\n            is exactly the sequence of coefficients of the function defining\n            the second column of array $\\mathcal{M}$ shifted by \\emph{one}\n            position, namely function $d_{\\mathcal{M}}$ times function \n            $h_{\\mathcal{M}}$. 
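\n\n            Indeed, a quick check of ours: the remainders computed above read \n            $1, 2, 5, 12, 30, \\ldots$, matching the second-column entries \n            $d_{1,1}=1$, $d_{2,1}=2$, $d_{3,1}=5$, $d_{4,1}=12$, $d_{5,1}=30$ \n            of the Motzkin triangle.\n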
\n            \n            Putting it all together, the factorization of polynomial \n            $A_{\\mathcal{M}}$ with respect to polynomial $\\hat{h}_{\\mathcal{M}}$ is:\n            \\begin{displaymath}\n                    \\left.\\left[\n                        A_{\\mathcal{M}}(y) = \\sum_{i \\geq0}{\n                            \\left(\\frac{[t^{1+i}]d_{\\mathcal{M}}(t)h_{\\mathcal{M}}(t)}\n                                {1+y+y^2}\\right)\n                            \\hat{h}_{\\mathcal{M}}(y)^{i}} \n                            \\right| y = h_{\\mathcal{M}}(t) \\right]\n            \\end{displaymath}\n\n        \\item if terms are not in a polynomial ring, we have difficulty showing such a\n        factorization; for example, the Delannoy array $\\mathcal{D}$ is affected by this problem.\n\n    \\end{itemize}\n\n\\subsubsection{$A$-matrix from another point of view}\n\nWe observe that functions in the collection\n$\\lbrace\\Omega_{i}\\rbrace_{i\\in\\mathbb{N}}$ are exactly the same as those\nfunctions in the collection $\\lbrace P_{i}\\rbrace_{i\\in\\mathbb{N}}$ introduced\nby Merlini et al.; therefore we have expressed the $A$-matrix concept from a\ndifferent point of view. \n\n\\marginpar{a system difficult to solve}\nA problem is still open: what if we would like to find all such functions? A reply\nto this question seems interesting, but we have no idea how to provide one.\nThe major difficulty arises if the device is seen as a system:\nthere's \\emph{one} equation in possibly $k$ unknowns, with $k$ as desired, \nso we have no idea about the shape of the remaining $k-1$ equations.\n\nLet us run a little check on the Pascal array $\\mathcal{P}$ again:\nlet $d_{nk}\\in\\mathcal{P}$; from the point above we have, firstly,\n$\\Omega_{0}(y)=\\frac{1}{1+y}$, which says to sum coefficients\nlying on row $n-1$ with alternating signs; then add the doubled sum of\ncoefficients lying on row $n-2$ with alternating signs; \nthen add the tripled sum of elements lying on row $n-3$ \nwith alternating signs; and so on \\ldots, all with respect to column index $k$. 
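\nAs a consistency check (our addition), the closed form found for $\\mathcal{P}$ does sum back to its $A$-sequence:\n\\begin{displaymath}\n    \\sum_{i\\geq0}{\\frac{1+i}{1+y}\\left(\\frac{y}{1+y}\\right)^{i}} = \n        \\frac{1}{1+y}\\cdot\\frac{1}{\\left(1-\\frac{y}{1+y}\\right)^{2}} = 1+y = A_{\\mathcal{P}}(y)\n\\end{displaymath}\n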
\n\n\\marginpar{combining\\\\$d_{7,4}\\in\\mathcal{P}$}\nElement $d_{7,4}$ is the combination \n    $(d_{6,3}-d_{6,4}+d_{6,5}-d_{6,6})+\n    2(d_{5,3}-d_{5,4}+d_{5,5}) + 3(d_{4,3}-d_{4,4}) + 4\\,d_{3,3}$, \n    instantiating $35 = (20-15+6-1)+2(10-5+1)+3(4-1)+4\\cdot1$, \n    which holds.\n%\\deleted[remark=boring and tedious the combination for $d_{8,1}$]{\n    %Element $d_{8,1}$,  is the combination\n    %$d_{7,0}-d_{7,1}+d_{7,2}-d_{7,3}+d_{7,4}-d_{7,5}+d_{7,6}-d_{7,7}\n        %+2d_{7,0}+d_{7,1}$, instantiating \n        %$8 = 1-7+21-35+35-21+7-1+2\\cdot1+6$ which holds.}\n\n\\marginpar{combining $d_{6,2}\\in\\mathcal{M}$}\nOn the other hand, element $d_{6,2}\\in\\mathcal{M}$ is the\ncombination $(d_{5,1}-d_{5,2}+d_{5,4}-d_{5,5})\n    +2(d_{4,1}-d_{4,2}+d_{4,4})\n    +5(d_{3,1}-d_{3,2})\n    +12(d_{2,1} -d_{2,2})\n    +30\\,d_{1,1}$, instantiating: \n    $69=(30-25+5-1)\n    +2(12-9+1)\n    +5(5-3)\n    +12(2 -1)\n    +30\\cdot1$, which holds.\n\n", "meta": {"hexsha": "e2f8f574fd43aa7d7e9078a2390b5da61c1a0b1b", "size": 33735, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/h-characterization/sequences-connection.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/h-characterization/sequences-connection.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/h-characterization/sequences-connection.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9777777778, "max_line_length": 246, "alphanum_fraction": 0.5839336001, "num_tokens": 11475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146848, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5864804564666665}}
{"text": "\\chapter{Non-parametric statistic models(NParmStat)}", "meta": {"hexsha": "ade8b37bfcd206a53320767968ca4836f7f5393e", "size": 52, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/NParmStat.tex", "max_stars_repo_name": "kilasuelika/SciStaLib", "max_stars_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Documentation/NParmStat.tex", "max_issues_repo_name": "kilasuelika/SciStaLib", "max_issues_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documentation/NParmStat.tex", "max_forks_repo_name": "kilasuelika/SciStaLib", "max_forks_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.0, "max_line_length": 52, "alphanum_fraction": 0.8461538462, "num_tokens": 12, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146849, "lm_q2_score": 0.727975443004307, "lm_q1q2_score": 0.5864804564666665}}
{"text": "\\documentclass[a4paper]{article}\n\n\\input{temp}\n\n\\begin{document}\n\n\\title{Quantum Mechanics}\n\n\\maketitle\n\n\\newpage\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Wave Functions and Operators}\nWe introduce some of the mathematical structure of quantum mechanics (QM) by considering a particle in one dimension.\n\n\\subsection{Wave Function and States}\nA classical point particle in one dimension has a position $x$ at each time. In QM a particle has a \\emph{state} at each time given by a complex \\emph{wave function} $\\psi\\left(x\\right)$.\n\n\\begin{post}\nA measurement of position gives a result $x$ with probability density $|\\psi\\left(x\\right)|^2$, i.e.\n\\begin{equation*}\n\\begin{aligned}\n|\\psi\\left(x\\right)|^2 \\delta x\n\\end{aligned}\n\\end{equation*}\nwill be the probability that particle is found between $x$ and $x+\\delta x$, or\n\\begin{equation*}\n\\begin{aligned}\n\\int_a^b |\\psi\\left(x\\right)|^2 dx\n\\end{aligned}\n\\end{equation*}\nis the probability that the particle is found in the interval $a\\leq x \\leq b$.\nThis obviously requires\n\\begin{equation*}\n\\begin{aligned}\n\\int_{-\\infty}^\\infty |\\psi\\left(x\\right)|^2 dx = 1\n\\end{aligned}\n\\end{equation*}\nWe say $\\psi$ is \\emph{normalised} if it satisfies this condition.\n\\end{post}\n\n\\begin{eg} (Gaussian wave function) Let \n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x\\right) = C e^{-\\frac{\\left(x-x_0\\right)^2}{2\\alpha}}\n\\end{aligned}\n\\end{equation*}\nFor some real $\\alpha > 0$.\\\\\nIf $\\alpha$ is small, $|\\psi|^2$ will be sharply peaked around $x=x_0$.\\\\\nIf $\\alpha$ is large, $|\\psi|^2$ is more spread out.\n\nSince $\\psi$ needs to be normalised,\n\\begin{equation*}\n\\begin{aligned}\n\\int_{-\\infty}^\\infty |\\psi\\left(x\\right)|^2 dx &= |C|^2 \\int_{-\\infty}^\\infty e^{-\\frac{\\left(x-x_0\\right)^2}{\\alpha}}dx\\\\\n&= |C|^2 \\left(2\\pi\\right)^{1/2}\\\\\n&= 1\n\\end{aligned}\n\\end{equation*}\nSo $\\psi$ is normalised if \n\\begin{equation*}\n\\begin{aligned}\nC=\\left(\\frac{1}{2\\pi}\\right)^{1/4}\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n\nIt is convenient to deal more generally with \\emph{normalisable} wave functions that are \\emph{not} identically zero, and satisfy\n\\begin{equation*}\n\\begin{aligned}\n\\int_{-\\infty}^\\infty |\\psi\\left(x\\right)|^2 dx < \\infty\n\\end{aligned}\n\\end{equation*}\nIn fact, $\\psi\\left(x\\right)$ and $\\phi\\left(x\\right) = \\lambda \\psi\\left(x\\right)$ contain the same physical information for any complex $\\lambda \\neq 0$.\n\nIf $\\psi\\left(x\\right)$ is normalisable, we can choose $\\lambda$ to ensure $\\psi\\left(x\\right)$ is normalised. Note also $\\psi\\left(x\\right)$ and $e^{i\\alpha} \\psi\\left(x\\right)$ are physically equivalent for any real $\\alpha$, and when $\\psi$ is normalised,\n\\begin{equation*}\n\\begin{aligned}\n|\\psi\\left(x\\right)|^2 = |e^{i\\alpha} \\psi\\left(x\\right)|^2\n\\end{aligned}\n\\end{equation*}\ni.e. they have the same probability distribution.\n\nIn general we will consider normalisable wave functions $\\psi,\\phi,\\chi,...$ that are \\emph{smooth} (differentiable any number of times) except at isolated points (see examples below). Also, $\\psi,\\psi',... 
\\to 0$ as $|x| \\to \\infty$.\n\nGiven states $\\psi_1\\left(x\\right)$ and $\\psi_2\\left(x\\right)$, we can form the new state\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x\\right) = \\psi_1\\left(x\\right) + \\psi_2\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nwhich is called a superposition.\n\n\\begin{eg}\nTake\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x\\right) = B\\left(e^{-\\frac{x^2}{2\\alpha}} + e^{-\\frac{\\left(x-x_0\\right)^2}{2\\alpha}}\\right)\n\\end{aligned}\n\\end{equation*}\ni.e. the superposition of a Gaussian wave function and a copy of itself centred at a different position.\n\nBy sketching $|\\psi\\left(x\\right)|^2$ we can read off the probability distribution for a single particle, with an appropriate choice of $B$.\n\\end{eg}\n\n\\subsection{Operators and Observables}\nA quantum state contains information about other physical quantities (\\emph{observables}), for example momentum and energy, in addition to position. In QM, each observable is represented by an \\emph{operator} acting on wave functions:\\\\\nPosition:\n\\begin{equation*}\n\\begin{aligned}\n\\hat{x} = x\\\\\n\\left(\\hat{x}\\psi\\right)\\left(x\\right) = x\\psi\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nMomentum:\n\\begin{equation*}\n\\begin{aligned}\n\\hat{p} = -i\\hbar \\frac{d}{dx}\\\\\n\\left(\\hat{p}\\psi\\right)\\left(x\\right) = -i\\hbar \\frac{d\\psi}{dx} = -i\\hbar \\psi'\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nEnergy, or \\emph{Hamiltonian}, for a particle of mass $m$ moving in a potential $V\\left(x\\right)$:\n\\begin{equation*}\n\\begin{aligned}\nH &= \\frac{\\hat{p}^2}{2m} + V\\left(\\hat{x}\\right)\\\\\n&= -\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2} + V\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\n\nIf an observable $Q$ is measured when the system is in a state $\\psi$, we would like to know:\\\\\n(i) what results are possible;\\\\\n(ii) what is the probability for each result.\n\n\\subsubsection{Expectation values}\nFor any normalisable $\\psi\\left(x\\right)$ and $\\phi\\left(x\\right)$, define\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\psi,\\phi\\right) = \\int_{-\\infty}^\\infty \\psi\\left(x\\right)^* \\phi\\left(x\\right) dx\n\\end{aligned}\n\\end{equation*}\n\nFor normalised $\\psi\\left(x\\right)$, define the \\emph{expectation value} of $Q$ (some operator) in this state to be\n\\begin{equation*}\n\\begin{aligned}\n\\left<Q\\right>_\\psi = \\left(\\psi,Q\\psi\\right) = \\int_{-\\infty}^\\infty \\psi^* Q\\psi dx\n\\end{aligned}\n\\end{equation*}\n\nNote for $Q=\\hat{x}$,\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{x}\\right>_\\psi = \\left(\\psi,\\hat{x}\\psi\\right)=\\int_{-\\infty}^\\infty x|\\psi\\left(x\\right)|^2 dx\n\\end{aligned}\n\\end{equation*}\nwhich is the standard expression for the mean or expectation of $x$; and for $Q=\\hat{p}$,\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{p}\\right>_\\psi = \\left(\\psi,\\hat{p}\\psi\\right) = \\int_{-\\infty}^\\infty -i\\hbar \\psi^* \\psi' dx\n\\end{aligned}\n\\end{equation*}\n\n\\begin{post}\nFor any observable, $\\left<Q\\right>_\\psi$ is the mean result of measuring $Q$ when the system is in state $\\psi$.\n\\end{post}\n\nNow consider $\\phi$ and $\\psi$ (normalised) with\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x\\right) = \\psi\\left(x\\right) e^{ikx}\n\\end{aligned}\n\\end{equation*}\nfor real $k$. Then $|\\phi\\left(x\\right)|^2 = |\\psi\\left(x\\right)|^2$. 
As a result, $\\left<\\hat{x}\\right>_\\phi = \\left<\\hat{x}\\right>_\\psi$, but\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{p}\\right>_\\phi &= \\int_{-\\infty}^\\infty -i\\hbar \\phi^*\\phi' dx\\\\\n&= \\int_{-\\infty}^\\infty -i\\hbar \\psi^* \\psi' dx + \\int_{-\\infty}^\\infty \\hbar k \\psi^* \\psi dx\\\\\n&= \\left<\\hat{p}\\right>_\\psi + \\hbar k\n\\end{aligned}\n\\end{equation*}\nSo an additional factor of $e^{ikx}$ shifts the momentum by $\\hbar k$.\n\n\\begin{eg}\nLet\n\\begin{equation*}\n\\begin{aligned}\n\\psi = C e^{-\\frac{x^2}{2\\alpha}}\n\\end{aligned}\n\\end{equation*}\nas in the last subsection but with $x_0=0$.\\\\\nThen $\\left<\\hat{p}\\right>_\\psi = 0$, and\n\\begin{equation*}\n\\begin{aligned}\n\\phi = C e^{-\\frac{x^2}{2\\alpha}} e^{ikx}\n\\end{aligned}\n\\end{equation*}\nwith $\\left<\\hat{p}\\right>_\\phi = \\hbar k$.\n\\end{eg}\n\n\\subsubsection{Eigenvalues and Eigenstates}\nConsider an operator corresponding to an observable, $Q$, with\n\\begin{equation*}\n\\begin{aligned}\nQ \\psi = q\\psi\n\\end{aligned}\n\\end{equation*}\nfor some number $q$. Then $\\psi\\left(x\\right)$ is called an \\emph{eigenfunction}, or \\emph{eigenstate} of $Q$ with eigenvalue $q$.\n\n\\begin{post}\nIf $Q$ is measured when the system is in an eigenstate $\\psi$, then the result is the eigenvalue $q$ with probability 1.\n\\end{post}\n\n\\begin{eg}\n$Q = \\hat{x}$. This has no normalisable eigenfunctions, since if\n\\begin{equation*}\n\\begin{aligned}\n\\hat{x} \\psi\\left(x\\right) = x\\psi\\left(x\\right) = q\\psi\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nfor some $q$, then $\\psi\\left(x\\right) = 0$ for all $x\\neq q$.\n\\end{eg}\n\n\\begin{eg}\nSet $Q = \\hat{p} = -i\\hbar\\frac{d}{dx}$, and $q=p$. Then $\\psi = C e^{ikx}$ is an eigenfunction with $p = \\hbar k$. However, this is not normalisable on the real line (we'll return to this later).\n\\end{eg}\n\n\\begin{eg}\nSet $Q=H$ with $V\\left(x\\right) = \\frac{1}{2} kx^2$, $k>0$ (a harmonic oscillator), and $q = E$. Then the eigenvalue equation\n\\begin{equation*}\n\\begin{aligned}\nH \\psi &= -\\frac{\\hbar^2}{2m} \\psi'' + \\frac{1}{2} kx^2 \\psi = E\\psi\n\\end{aligned}\n\\end{equation*}\nis satisfied by\n\\begin{equation*}\n\\begin{aligned}\n\\psi = Ce^{-x^2/2\\alpha}\n\\end{aligned}\n\\end{equation*}\n(where $C$ is chosen to normalise $\\psi$) for $\\alpha^2 = \\hbar^2/km$ and $E = \\frac{\\hbar}{2} \\sqrt{\\frac{k}{m}}$.\n\\end{eg}\n\nIn general, the energy eigenvalue equation\n\\begin{equation*}\n\\begin{aligned}\nH\\psi = E\\psi\n\\end{aligned}\n\\end{equation*}\nor\n\\begin{equation*}\n\\begin{aligned}\n-\\frac{\\hbar^2}{2m} \\frac{d^2 \\psi}{dx^2} +V\\left(x\\right) \\psi = E\\psi\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nfor a particle in potential $V\\left(x\\right)$ is called the \\emph{time-independent} \\emph{Schr\\\"{o}dinger Equation} (SE). Solving this determines all states of definite energy.\n\n\\subsubsection{Additional comments}\n\\begin{rem}\nIf $\\psi$ is any normalised state, then $\\left<\\hat{p}\\right>_\\psi$ and $\\left<H\\right>_\\psi$ are real. 
This follows from definitions using integration by parts, e.g.\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{p}\\right>_\\psi^* &= \\left(\\int_{-\\infty}^\\infty -i\\hbar \\psi^* \\psi' dx\\right)^* \\\\\n&= \\left(\\int_{-\\infty}^\\infty i\\hbar \\psi\\left(\\psi^*\\right)' dx\\right)\\\\\n&= i\\hbar \\left[\\psi \\psi^*\\right]_{-\\infty}^{\\infty} (=0) - i\\hbar \\int_{-\\infty}^\\infty \\psi'\\psi^* dx\\\\\n&= \\left<\\hat{p}\\right>_\\psi\n\\end{aligned}\n\\end{equation*}\nSimilarly we can check\n\\begin{equation*}\n\\begin{aligned}\n\\left<H\\right>_\\psi = \\int_{-\\infty}^\\infty \\left(-\\frac{\\hbar^2}{2m} \\psi^* \\psi'' + \\psi^* V \\psi\\right) dx\n\\end{aligned}\n\\end{equation*}\nis real: integrate the first term by parts twice after taking the complex conjugate (assuming $V$ real).\n\\end{rem}\n\n\\begin{rem}\nPostulate 3 is consistent with Postulate 2 since\n\\begin{equation*}\n\\begin{aligned}\n&H\\psi = E\\psi\\\\\n\\implies & \\left<H\\right>_\\psi = \\int_{-\\infty}^\\infty \\psi^* H \\psi dx = \\int_{-\\infty}^\\infty E \\psi^* \\psi dx = E\n\\end{aligned}\n\\end{equation*}\nfor $\\psi$ normalised.\n\\end{rem}\n\n\\begin{rem}\nFrom the previous two remarks, the energy eigenvalue for a normalised eigenstate $\\psi$ is always real.\n\\end{rem}\n\n\\subsection{Infinite well or particle in a box}\n\nLet\n\\begin{equation*}\n\\begin{aligned}\nV\\left(x\\right) = \\left\\{\n\\begin{array}{ll}\n0 & |x| \\leq a\\\\\n\\infty & |x| > a\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nWe assume $\\psi\\left(\\pm a\\right) = 0$ (justified at the end) and $\\psi\\left(x\\right) = 0$ for $|x|>a$.\n\nConsider the SE for $-a\\leq x \\leq a$\n\\begin{equation*}\n\\begin{aligned}\n-\\frac{\\hbar^2}{2m} \\psi'' = E\\psi\n\\end{aligned}\n\\end{equation*}\nFor $E>0$, set $E = \\frac{\\hbar^2 k^2}{2m}$ where $k>0$ so that the SE becomes\n\\begin{equation*}\n\\begin{aligned}\n\\psi'' + k^2 \\psi = 0\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\psi = A \\cos kx + B \\sin kx\n\\end{aligned}\n\\end{equation*}\nBut $A\\cos ka \\pm B \\sin ka = 0$ from the boundary conditions. This implies either\n\\begin{equation*}\n\\begin{aligned}\nB=0,\\ ka = \\frac{n\\pi}{2} \\ n=1,3,...\n\\end{aligned}\n\\end{equation*}\nor\n\\begin{equation*}\n\\begin{aligned}\nA=0,\\ ka = \\frac{n\\pi}{2} \\ n=2,4,...\n\\end{aligned}\n\\end{equation*}\n\nSolutions:\n\\begin{equation*}\n\\begin{aligned}\n\\psi_n\\left(x\\right) = \\left(\\frac{1}{a}\\right)^{1/2} \\left\\{\n\\begin{array}{ll}\n\\cos \\\\\n\\sin\n\\end{array}\n\\right\\} \\frac{n\\pi}{2a}x \\text{ for } n>0 \\left\\{\n\\begin{array}{ll}\n\\text{odd} \\\\\n\\text{even}\n\\end{array}\n\\right\\}\n\\end{aligned}\n\\end{equation*}\nwhich are the energy eigenfunctions, with the discrete set of energy eigenvalues\n\\begin{equation*}\n\\begin{aligned}\nE_n = \\frac{\\hbar^2 \\pi^2 n^2}{8ma^2}\n\\end{aligned}\n\\end{equation*}\n\nFor $E<0$, set\n\\begin{equation*}\n\\begin{aligned}\nE= -\\frac{\\hbar^2 k^2}{2m}\n\\end{aligned}\n\\end{equation*}\nwith $k>0$ so that the SE becomes\\begin{equation*}\n\\begin{aligned}\n\\psi'' - k^2 \\psi = 0\n\\end{aligned}\n\\end{equation*}\nSolutions are\n\\begin{equation*}\n\\begin{aligned}\n\\psi = A e^{kx} + B e^{-kx}\n\\end{aligned}\n\\end{equation*}\nand these cannot satisfy the boundary conditions, except with $A=B=0$.\n
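\nFor completeness (our addition): the borderline case $E=0$ gives $\\psi'' = 0$, so $\\psi = Ax + B$, and the conditions $\\psi\\left(\\pm a\\right) = 0$ again force $A = B = 0$. Hence all energy eigenvalues are the positive $E_n$ above.\n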
To justify the boundary conditions, consider a potential with $V\\left(x\\right) = U \\gg E$ for $|x|>a$. Setting\n\\begin{equation*}\n\\begin{aligned}\nU-E = \\frac{\\hbar^2 k^2}{2m}\n\\end{aligned}\n\\end{equation*}\nthe SE is\n\\begin{equation*}\n\\begin{aligned}\n\\psi'' - k^2 \\psi = 0\n\\end{aligned}\n\\end{equation*}\nfor $|x|>a$. So for normalisable solutions, we need\n\\begin{equation*}\n\\begin{aligned}\n\\psi = \\left\\{\n\\begin{array}{ll}\nAe^{-kx} & x>a\\\\\nBe^{+kx} & x<-a\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nTaking $U \\to \\infty$ with $E$ fixed, $k \\to \\infty$, and $\\psi \\to 0$ for $|x| > a$.\n\n\\newpage\n\n\\section{The Schr\\\"{o}dinger Equation}\nTo continue our development of QM (in one dimension), we need to consider how things evolve in time.\n\nClassical dynamics of a particle can be specified by the potential $V\\left(x\\right)$ (Force $f\\left(x\\right) = -V'\\left(x\\right)$). Quantum dynamics is likewise specified by the Hamiltonian\n\\begin{equation*}\n\\begin{aligned}\nH = \\frac{\\hat{p}^2}{2m} + V\\left(\\hat{x}\\right)\n\\end{aligned}\n\\end{equation*}\nwhich is also determined by the potential.\n\nEvolution of a quantum state in time is described by a $t$-dependent wave function $\\Psi\\left(x,t\\right)$ which satisfies\n\\begin{equation}\\label{1}\n\\begin{aligned}\ni\\hbar \\frac{\\partial}{\\partial t} \\Psi = H \\Psi\n\\end{aligned}\n\\end{equation}\nthe \\emph{time-dependent Schr\\\"{o}dinger Equation}.\n\nThe operators $\\hat{x}$ and $\\hat{p}$ do not change in time, and \\eqref{1} is\n\\begin{equation*}\n\\begin{aligned}\ni\\hbar \\frac{\\partial \\Psi}{\\partial t} = -\\frac{\\hbar^2}{2m} \\frac{\\partial^2 \\Psi}{\\partial x^2} + V\\left(x\\right) \\Psi\n\\end{aligned}\n\\end{equation*}\na PDE linear in $\\Psi$ and first order in $t$; so specifying $\\Psi\\left(x,0\\right)$ determines $\\Psi\\left(x,t\\right)$ uniquely.\n\n\\subsection{Stationary states}\nConsider a wave function of definite frequency:\n\\begin{equation*}\n\\begin{aligned}\n\\Psi\\left(x,t\\right) = \\psi\\left(x\\right) e^{-i\\omega t}\n\\end{aligned}\n\\end{equation*}\nSubstituting in \\eqref{1} gives\n\\begin{equation*}\n\\begin{aligned}\n\\psi \\hbar \\omega e^{-i\\omega t} = \\left(H\\psi\\right) e^{-i\\omega t}\n\\end{aligned}\n\\end{equation*}\nThis holds if and only if\n\\begin{equation*}\n\\begin{aligned}\nH \\psi = E\\psi\n\\end{aligned}\n\\end{equation*}\nwith $E = \\hbar \\omega$.\n\nAlternatively, look for a separable solution\n\\begin{equation*}\n\\begin{aligned}\n\\Psi\\left(x,t\\right) = f\\left(t\\right)\\psi\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nand find\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{\\psi} H \\psi = \\frac{i\\hbar}{f}\\dot{f} = E\n\\end{aligned}\n\\end{equation*}\nwhich is a separation constant. This implies $H\\psi = E\\psi$ and $f\\left(t\\right) = f\\left(0\\right) e^{-iEt/\\hbar}$.\n\nA solution of this special form is called a \\emph{stationary state}. Special properties of stationary states:\\\\\n(i)\n\\begin{equation*}\n\\begin{aligned}\n\\left|\\Psi \\left(x,t\\right)\\right|^2 = \\left|\\psi\\left(x\\right)\\right|^2\n\\end{aligned}\n\\end{equation*}\nSo probability density does not change with time;\\\\\n(ii)\n\\begin{equation*}\n\\begin{aligned}\n\\Psi\\left(x,t\\right) = \\psi\\left(x\\right) e^{-iEt/\\hbar}\n\\end{aligned}\n\\end{equation*}\nis the unique solution with $\\Psi\\left(x,0\\right) = \\psi\\left(x\\right)$ and $H\\psi = E\\psi$. Then $H\\Psi = E\\Psi$ implies that a measurement of the energy gives result $E$ with certainty (probability 1) for all $t$.\n\n
\\begin{eg}\nConsider the particle in a box of chapter 1.3: we found energy eigenstates $\\psi_n\\left(x\\right)$ $\\left(\\begin{array}{ll}\\sin \\\\ \\cos\\end{array}\\right)$ with\n\\begin{equation*}\n\\begin{aligned}\nE_n = \\frac{\\hbar^2 \\pi^2}{8ma^2} n^2\n\\end{aligned}\n\\end{equation*}\nfor $n=1,2,...$.\\\\\nStationary state solutions of the time dependent SE:\n\\begin{equation*}\n\\begin{aligned}\n\\Psi_n\\left(x,t\\right) = \\psi_n\\left(x\\right) e^{-iE_n t/\\hbar}\n\\end{aligned}\n\\end{equation*}\nNote however $\\psi_1+\\psi_2$ is \\emph{not} an energy eigenstate:\n\\begin{equation*}\n\\begin{aligned}\nH\\left(\\psi_1+\\psi_2\\right) = E_1\\psi_1 + E_2 \\psi_2 \\not\\propto \\psi_1+\\psi_2\n\\end{aligned}\n\\end{equation*}\n\\end{eg}
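\n\nA minimal numerical illustration (added; not in the original notes) that such a superposition is not stationary: its probability density oscillates at the Bohr frequency $\\left(E_2-E_1\\right)/\\hbar$. In units $\\hbar = m = a = 1$:\n\\begin{verbatim}\nimport numpy as np\n\n# box eigenstates on [-1, 1]: psi_1 ~ cos(pi x/2), psi_2 ~ sin(pi x)\nx = np.linspace(-1.0, 1.0, 5)\nE1, E2 = np.pi**2 / 8, np.pi**2 / 2   # E_n = pi^2 n^2 / 8\npsi1 = np.cos(np.pi * x / 2)\npsi2 = np.sin(np.pi * x)\nfor t in (0.0, 1.0, 2.0):\n    Psi = psi1 * np.exp(-1j * E1 * t) + psi2 * np.exp(-1j * E2 * t)\n    print(t, np.round(np.abs(Psi)**2, 3))  # density changes with t\n\\end{verbatim}\n\n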
\\subsection{Conservation of Probability}\nThe probability density\n\\begin{equation*}\n\\begin{aligned}\nP\\left(x,t\\right) = \\left|\\Psi\\left(x,t\\right)\\right|^2\n\\end{aligned}\n\\end{equation*}\nobeys a conservation equation\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial P}{\\partial t} = -\\frac{\\partial J}{\\partial x}\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\nJ\\left(x,t\\right) = -\\frac{i\\hbar}{2m} \\left(\\Psi^* \\Psi' - \\left.\\Psi'\\right.^* \\Psi\\right)\n\\end{aligned}\n\\end{equation*}\nwhich is real (here $' = \\frac{\\partial}{\\partial x}$), and is called the \\emph{probability current}.\n\nThis follows from the time dependent SE\n\\begin{equation*}\n\\begin{aligned}\ni\\hbar \\frac{\\partial\\Psi}{\\partial t} = -\\frac{\\hbar^2}{2m} \\Psi'' + V\\Psi\n\\end{aligned}\n\\end{equation*}\nand its conjugate\n\\begin{equation*}\n\\begin{aligned}\n-i\\hbar \\frac{\\partial\\Psi^*}{\\partial t} = -\\frac{\\hbar^2}{2m} \\left.\\Psi^*\\right.'' + V\\Psi^*\n\\end{aligned}\n\\end{equation*}\nSo, since the potential terms ($V$) cancel each other,\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial P}{\\partial t} &= \\Psi^* \\frac{\\partial \\Psi}{\\partial t} + \\frac{\\partial \\Psi^*}{\\partial t} \\Psi \\\\\n&= \\Psi^* \\frac{i\\hbar}{2m} \\Psi'' - \\frac{i\\hbar}{2m} \\left.\\Psi^*\\right.'' \\Psi\n\\end{aligned}\n\\end{equation*}\nwhich is equal to\n\\begin{equation*}\n\\begin{aligned}\n-\\frac{\\partial J}{\\partial x}\n\\end{aligned}\n\\end{equation*}\nas claimed.\n\nThe conservation equation implies\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\int_a^b P\\left(x,t\\right)dx &= \\int_a^b \\frac{\\partial P}{\\partial t}\\left(x,t\\right)dx\\\\\n&=\\int_a^b -\\frac{\\partial J}{\\partial x} \\left(x,t\\right) dx\\\\\n&= -J\\left(b,t\\right) + J\\left(a,t\\right)\n\\end{aligned}\n\\end{equation*}\n\nThen the boundary conditions $\\Psi, J \\to 0$ as $x \\to \\pm \\infty$ (for fixed $t$) give that\n\\begin{equation*}\n\\begin{aligned}\n\\int_{-\\infty}^\\infty \\left|\\Psi\\left(x,t\\right)\\right|^2 dx\n\\end{aligned}\n\\end{equation*}\nis independent of time.\n\nHence $\\Psi\\left(x,0\\right)$ normalised $\\implies$ $\\Psi\\left(x,t\\right)$ normalised for all $t\\geq 0$.\n\n\\subsection{Wave packets and particles}\nAny wave function that represents a particle localised in space (about some point, on some scale) is called a \\emph{wave packet}.\n\nFor example, consider the Gaussian\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x\\right) = A \\frac{1}{\\alpha^{1/2}} e^{-x^2/2\\alpha}\n\\end{aligned}\n\\end{equation*}\nwhere $A = \\left(\\alpha/\\pi\\right)^{1/4}$.\\\\\nThis wave packet is localised around $x=0$ on length scale $\\sqrt{\\alpha}$.\n\nConsider the solution of the time dependent SE with $V=0$ (free particle) and $\\Psi\\left(x,0\\right) = \\psi\\left(x\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\Psi\\left(x,t\\right) = A\\frac{1}{\\gamma\\left(t\\right)^{1/2}} e^{-x^2/2\\gamma\\left(t\\right)}\n\\end{aligned}\n\\end{equation*}\nwith\n\\begin{equation*}\n\\begin{aligned}\n\\gamma\\left(t\\right) = \\alpha + \\frac{i\\hbar t}{m}\n\\end{aligned}\n\\end{equation*}\n(see example sheet 1).\n\nThen the probability density is\n\\begin{equation*}\n\\begin{aligned}\nP_\\Psi\\left(x,t\\right) &= \\left|\\Psi\\left(x,t\\right)\\right|^2 \\\\\n&= \\frac{|A|^2}{|\\gamma\\left(t\\right)|} e^{-\\alpha x^2/|\\gamma\\left(t\\right)|^2}\n\\end{aligned}\n\\end{equation*}\nstill localised around $x=0$, but the length scale $|\\gamma\\left(t\\right)|/\\sqrt{\\alpha}$ spreads with $t$.\n\nHowever it's easy to check\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{x}\\right>_\\Psi = \\left<\\hat{p}\\right>_\\Psi = 0\n\\end{aligned}\n\\end{equation*}\nfor all $t$.\n\nPreviously we noted that $\\phi\\left(x\\right) = \\psi\\left(x\\right) e^{ikx}$ has expectation value $\\hbar k$ for momentum. The solution to the SE with $\\Phi\\left(x,0\\right) = \\phi \\left(x\\right)$ is\n\\begin{equation*}\n\\begin{aligned}\n\\Phi\\left(x,t\\right) = \\Psi\\left(x-ut,t\\right)e^{ikx}e^{-i\\left(\\hbar k^2/2m\\right)t}\n\\end{aligned}\n\\end{equation*}\nwith $mu = \\hbar k$ (can be checked directly). We can also check\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{x}\\right>_\\Phi = ut\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{p}\\right>_\\Phi = \\hbar k\n\\end{aligned}\n\\end{equation*}\n\nMoreover,\n\\begin{equation*}\n\\begin{aligned}\nP_\\Phi = \\left|\\Phi\\left(x,t\\right)\\right|^2  = \\left|\\Psi\\left(x-ut,t\\right)\\right|^2 = P_\\Psi\\left(x-ut,t\\right)\n\\end{aligned}\n\\end{equation*}\n\nSo $\\Phi$ corresponds to a particle moving with momentum $mu = \\hbar k$.\n\nThe Gaussian wave function spreads out on a time-scale $\\tau \\sim \\frac{m\\alpha}{\\hbar}$.
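\n\nAs a quick numerical aside (added), evaluating $\\tau \\sim m\\alpha/\\hbar$ for the two cases considered next:\n\\begin{verbatim}\nhbar = 1.055e-34\nfor m, sqrt_alpha in ((9.11e-31, 1e-12), (1e-6, 1e-6)):\n    tau = m * sqrt_alpha**2 / hbar\n    print(tau)  # ~1e-20 s (electron), ~1e16 s (macroscopic)\n\\end{verbatim}\n\n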
For example, consider an electron, $m=m_e$ and $\\sqrt{\\alpha} = 10^{-12}$ meter. Then $ \\tau \\sim 10^{-20}$ sec.\n\nNow take $m=10^{-6}$ kg and $\\sqrt{\\alpha} = 10^{-6}$ meter. Then $\\tau \\sim 10^{16}$ sec.\n\n\\newpage\n\n\\section{Bound States in One Dimension}\n\nA bound state for a particle of mass $m$ in a potential $V\\left(x\\right)$ is a normalisable energy eigenstate (or stationary state)\n\\begin{equation*}\n\\begin{aligned}\nH\\psi = -\\frac{\\hbar^2}{2m} \\psi'' + V\\left(x\\right) \\psi = E\\psi\n\\end{aligned}\n\\end{equation*}\n(time-independent SE). This corresponds to a bounded classical orbit.\n\nIf $V\\left(x\\right) \\to 0$ as $|x| \\to \\infty$, we need $E < 0$ (see section 3.2 below).\n\n\\subsection{Potential Well}\nConsider a potential well\n\\begin{equation*}\n\\begin{aligned}\nV\\left(x\\right) = \\left\\{ \\begin{array}{ll}\n-U & |x|<a \\\\\n0 & |x| \\geq a\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nSeek solutions of the time-independent SE with $-U<E<0$:\n\\begin{equation*}\n\\begin{aligned}\n-\\frac{\\hbar^2}{2m}\\psi'' = \\left(E+U\\right)\\psi & &|x|<a\\\\\n-\\frac{\\hbar^2}{2m}\\psi'' = E\\psi & &|x|>a\n\\end{aligned}\n\\end{equation*}\n\nSet $U+E = \\frac{\\hbar^2 k^2}{2m}$ and $E=-\\frac{\\hbar^2\\kappa^2}{2m}$ for $k,\\kappa \\in \\R^+$. Then\n\\begin{equation*}\n\\begin{aligned}\nk^2 + \\kappa^2 = \\frac{2mU}{\\hbar^2}\n\\end{aligned}\n\\end{equation*}\nThen the SE becomes\n\\begin{equation*}\n\\begin{aligned}\n\\psi'' + k^2 \\psi = 0 & & |x|<a\\\\\n\\psi'' - \\kappa^2 \\psi = 0 & & |x|>a\n\\end{aligned}\n\\end{equation*}\nWe need $\\psi,\\psi'$ continuous, but $\\psi''$ is discontinuous at $x = \\pm a$ (where $V$ jumps).\n\nConsider \\emph{even parity} solutions with $\\psi\\left(-x\\right) = \\psi\\left(x\\right)$:\\begin{equation*}\n\\begin{aligned}\n\\psi = \\left\\{\\begin{array}{ll}\nA\\cos kx & |x|<a\\\\\nBe^{-\\kappa x} & |x|>a\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nMatching $\\psi$ and $\\psi'$ at $x=a$ ($x=-a$ automatic for $\\psi$ even):\n\\begin{equation*}\n\\begin{aligned}\nA\\cos ka = Be^{-\\kappa a}\\\\\n-Ak\\sin ka = -B \\kappa e^{-\\kappa a}\n\\end{aligned}\n\\end{equation*}\n\nThese give the same result for $A/B$ if and only if\n\\begin{equation*}\n\\begin{aligned}\nk\\tan ka = \\kappa\n\\end{aligned}\n\\end{equation*}\nTo see when solutions exist, it is convenient to set\n\\begin{equation*}\n\\begin{aligned}\n\\xi = ak, \\eta = a\\kappa\n\\end{aligned}\n\\end{equation*}\nwhich are dimensionless (and positive), and consider\n\\begin{equation*}\n\\begin{aligned}\n\\eta = \\xi \\tan \\xi,\\\\\n\\xi^2 + \\eta^2 = \\frac{2ma^2}{\\hbar^2} U\n\\end{aligned}\n\\end{equation*}\n\nFor each point of intersection, we get one solution for $\\xi,\\eta$ (equivalently $k,\\kappa$) and a corresponding value of $E$.\\\\\nHence there is exactly one solution for $\\frac{2ma^2}{\\hbar^2}U < \\pi^2$.\n\nIn general, there are $n$ solutions for $\\left(n-1\\right)^2 \\pi^2 < \\frac{2ma^2U}{\\hbar^2} < n^2\\pi^2$.\n\nThere are a finite number of allowed energy eigenstates.
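\n\nThe intersection count is easy to reproduce numerically (an added sketch, not in the original notes), solving $\\xi\\tan\\xi = \\sqrt{R^2-\\xi^2}$ branch by branch, where $R^2 = 2ma^2U/\\hbar^2$ is an illustrative well strength:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\nR2 = 20.0  # 2 m a^2 U / hbar^2 (illustrative value)\nR = np.sqrt(R2)\n\ndef f(xi):\n    return xi * np.tan(xi) - np.sqrt(R2 - xi**2)\n\n# even-parity branches: xi in [n*pi, n*pi + pi/2)\nroots, n = [], 0\nwhile n * np.pi < R:\n    lo = n * np.pi + 1e-9\n    hi = min(n * np.pi + np.pi / 2 - 1e-9, R - 1e-9)\n    if lo < hi and f(lo) * f(hi) < 0:\n        roots.append(brentq(f, lo, hi))\n    n += 1\nprint(roots)  # one even bound state per intersection\n\\end{verbatim}\n\n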
\\begin{tikzpicture}\n\\draw (0,0) -- (1,0);\n\\draw (1,0) -- (0,1);\n\\draw (0.5,1) -- (0.25,-0.25);\n\\draw (0,0) .. controls (1,0) and (0.5,1) .. (1,1);\n\\draw (2,2) .. controls (1.65,-0.95) and (-0.3,3.4) .. (1,0);\n\\end{tikzpicture}\n\nNote that now we have a non-zero probability density $|\\psi\\left(x\\right)|^2$ of measuring the particle \\emph{outside} the classically allowed region $|x|<a$ (for $E<0$).\n\nWe can consider \\emph{odd parity} solutions $\\psi\\left(-x\\right) = -\\psi\\left(x\\right)$ similarly (see example sheet 1).\n\n\\subsection{General Properties}\n\n\\subsubsection{Bound state energies}\nConsider the time-independent SE with $V\\left(x\\right) \\to 0$ as $x \\to \\pm \\infty$. For a 2nd order ODE, there are 2 complex constants in the general solution.\\\\\nBut the equation is linear in $\\psi$, so one complex constant corresponds to the overall scale $\\psi \\to \\lambda \\psi$.\n\nNow\n\\begin{equation*}\n\\begin{aligned}\n-\\frac{\\hbar^2}{2m}\\psi'' \\sim E\\psi\n\\end{aligned}\n\\end{equation*}\nas $x \\to \\pm \\infty$. So\n\\begin{equation*}\n\\begin{aligned}\n\\psi \\sim A_\\pm e^{ikx} + B_\\pm e^{-ikx} & & E=\\frac{\\hbar^2 k^2}{2m} > 0\\\\\n\\psi \\sim A_\\pm e^{\\kappa x} + B_\\pm e^{-\\kappa x} & & E=-\\frac{\\hbar^2\\kappa^2}{2m} < 0\n\\end{aligned}\n\\end{equation*}\nFor $E>0$ there is no normalisable solution.\n\nFor $E<0$ we have a normalisable solution if\n\\begin{equation*}\n\\begin{aligned}\n\\psi \\sim \\left\\{ \\begin{array}{ll}\nB_+ e^{-\\kappa x} & x\\to +\\infty \\  \\left(A_+ = 0\\right)\\\\\nA_- e^{\\kappa x} & x \\to -\\infty \\  \\left(B_- = 0\\right)\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\n\nOnly one complex constant is left to choose, so specifying the behaviour at both boundaries gives an over-determined system $\\implies$ solutions exist only for \\emph{particular} values of $E$ $\\implies$ the bound state energies are quantised.\n\nWe may have several bound states:\n\n\\begin{tikzpicture}\n\\draw (-2,0) -- (2,0);\n\\draw (0,-2) -- (0,2);\n\\draw (-2,-0.1) .. controls (-1.0,-0.15) and (-0.5,-1.0) .. (0,-1);\n\\draw (2,-0.1) .. controls (1.0,-0.15) and (0.5,-1.0) .. (0,-1);\n\\end{tikzpicture}\n\nor none:\n\n\\begin{tikzpicture}\n\\draw (-2,0) -- (2,0);\n\\draw (0,-2) -- (0,2);\n\\draw (-2,0.1) .. controls (-1.0,0.15) and (-0.5,1.0) .. (0,1);\n\\draw (2,0.1) .. controls (1.0,0.15) and (0.5,1.0) .. (0,1);\n\\end{tikzpicture}\n\nFurthermore, if $V\\left(x\\right) \\geq V_0$ (constant), then for $\\psi$ normalisable,\n\\begin{equation*}\n\\begin{aligned}\nH\\psi = E\\psi \\implies E &= \\left<H\\right>_\\psi\\\\\n&=\\int_{-\\infty}^\\infty \\left(-\\frac{\\hbar^2}{2m} \\psi^* \\psi'' + V\\left(x\\right) |\\psi\\left(x\\right)|^2\\right) dx\\\\\n&= \\int_{-\\infty}^\\infty \\left(\\frac{\\hbar^2}{2m}|\\psi'|^2 + V\\left(x\\right) |\\psi|^2\\right) dx\\\\\n&\\geq 0+V_0\n\\end{aligned}\n\\end{equation*}\n(integration by parts). So $V_0 \\leq E < 0$ for any bound state.
\n\n\\newpage\n\n\\section{Expectation and Uncertainty}\n\\subsection{Hermitian Operators}\nRecall the earlier definition\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\phi,\\psi\\right) = \\int_{-\\infty}^\\infty \\phi\\left(x\\right)^* \\psi\\left(x\\right) dx\n\\end{aligned}\n\\end{equation*}\nwith properties\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\phi,\\alpha\\psi\\right)=\\alpha\\left(\\phi,\\psi\\right) = \\left(\\alpha^*\\phi,\\psi\\right)\n\\end{aligned}\n\\end{equation*}\nand similarly\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\phi,\\psi\\right)^* = \\left(\\psi,\\phi\\right)\n\\end{aligned}\n\\end{equation*}\n\nRegarding this as an inner product on wave functions, define the \\emph{norm} of $\\psi$, denoted $||\\psi||$, by\n\\begin{equation*}\n\\begin{aligned}\n||\\psi||^2 = \\left(\\psi,\\psi\\right) = \\int_{-\\infty}^\\infty |\\psi\\left(x\\right)|^2 dx\n\\end{aligned}\n\\end{equation*}\nwhich is real and positive, and $||\\psi|| = 1$ if $\\psi$ is normalised.\n\nAn operator $Q$ is \\emph{hermitian} if\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\phi,Q\\psi\\right) = \\left(Q\\phi,\\psi\\right)\n\\end{aligned}\n\\end{equation*}\nfor all normalisable $\\psi,\\phi$. This implies\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\psi,Q\\psi\\right) = \\left(Q\\psi,\\psi\\right) = \\left(\\psi,Q\\psi\\right)^*\\\\\n\\implies \\left<Q\\right>_\\psi = \\left<Q\\right>_\\psi^*\n\\end{aligned}\n\\end{equation*}\nso expectation values of hermitian operators are real.\n\nThe operators $\\hat{x},\\hat{p}$, and $H=\\frac{\\hat{p}^2}{2m} + V\\left(\\hat{x}\\right)$ are hermitian (for $V$ real).\n\nCheck:\n\\begin{equation*}\n\\begin{aligned}\n&\\left(\\phi,\\hat{x}\\psi\\right) = \\left(\\hat{x}\\phi,\\psi\\right)\\\\\n\\iff & \\int_{-\\infty}^\\infty \\phi\\left(x\\right)^* \\left(x\\psi\\left(x\\right)\\right)dx = \\int_{-\\infty}^\\infty \\left(x\\phi\\left(x\\right)\\right)^* \\psi\\left(x\\right) dx\n\\end{aligned}\n\\end{equation*}\nwhich is true ($x$ is real).\n\n\\begin{equation*}\n\\begin{aligned}\n&\\left(\\phi,\\hat{p}\\psi\\right) = \\left(\\hat{p}\\phi,\\psi\\right)\\\\\n&\\iff \\int_{-\\infty}^\\infty \\phi^* \\left(-i\\hbar \\psi'\\right) dx = \\int_{-\\infty}^\\infty \\left(-i\\hbar\\phi'\\right)^* \\psi dx\n\\end{aligned}\n\\end{equation*}\nwhich follows by integration by parts, using $\\left[\\phi^*\\psi\\right]_{-\\infty}^\\infty = 0$.\n\nTo show $\\left(\\phi,H\\psi\\right) = \\left(H\\phi,\\psi\\right)$, check the KE and PE terms separately:\\\\\nKE: $\\left(\\phi,\\psi''\\right) = -\\left(\\phi',\\psi'\\right) = \\left(\\phi'',\\psi\\right)$;\\\\\nPE: $\\left(\\phi,V\\left(x\\right)\\psi\\right) = \\left(V\\left(x\\right)\\phi,\\psi\\right)$ for $V$ real.\n\n(Later in chapter 6 we'll prove other general properties of hermitian operators, e.g. eigenvalues are real, eigenstates with distinct eigenvalues are orthogonal with respect to the inner product.)
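\n\nHermiticity of $\\hat{p}$ can also be seen numerically (an added sketch): a central-difference discretisation of $-i\\hbar\\,d/dx$ (with $\\hbar=1$) is a hermitian matrix, so $\\left(\\phi,\\hat{p}\\psi\\right) = \\left(\\hat{p}\\phi,\\psi\\right)$ up to rounding for states vanishing at the grid ends.\n\\begin{verbatim}\nimport numpy as np\n\nN, L = 400, 20.0\nx = np.linspace(-L/2, L/2, N)\nh = x[1] - x[0]\n\n# P = -i D with D the antisymmetric central-difference matrix\nD = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*h)\nP = -1j * D\n\nphi = np.exp(-(x - 1)**2)\npsi = np.exp(-(x + 1)**2) * np.exp(1j * x)\n\nlhs = np.vdot(phi, P @ psi) * h  # (phi, P psi)\nrhs = np.vdot(P @ phi, psi) * h  # (P phi, psi)\nprint(abs(lhs - rhs))            # ~ 0\n\\end{verbatim}\n\n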
\\subsection{Ehrenfest's Theorem}\n\nConsider normalised $\\Psi\\left(x,t\\right)$ satisfying the SE\n\\begin{equation*}\n\\begin{aligned}\ni\\hbar\\dot{\\Psi} = H\\Psi = \\left(\\frac{\\hat{p}^2}{2m} + V\\left(\\hat{x}\\right)\\right)\\Psi = -\\frac{\\hbar^2}{2m}\\Psi'' + V\\left(x\\right) \\Psi\n\\end{aligned}\n\\end{equation*}\nThe expectation values\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{x}\\right>_\\Psi = \\left(\\Psi,\\hat{x}\\Psi\\right)\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{p}\\right>_\\Psi = \\left(\\Psi,\\hat{p}\\Psi\\right)\n\\end{aligned}\n\\end{equation*}\ndepend on $t$ through $\\Psi$. Ehrenfest's Theorem states\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\left<\\hat{x}\\right>_\\Psi = \\frac{1}{m} \\left<\\hat{p}\\right>_\\Psi\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\left<\\hat{p}\\right>_\\Psi = -\\left<V'\\left(\\hat{x}\\right)\\right>_\\Psi\n\\end{aligned}\n\\end{equation*}\nwhich are the quantum counterparts of the classical equations of motion (in first order form).\n\\begin{proof}\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\left<\\hat{x}\\right>_\\Psi &= \\left(\\dot{\\Psi},\\hat{x}\\Psi\\right) + \\left(\\Psi,\\hat{x}\\dot{\\Psi}\\right)\\\\\n&= \\left(\\frac{1}{i\\hbar} H \\Psi, \\hat{x}\\Psi\\right) + \\left(\\Psi,\\hat{x}\\frac{1}{i\\hbar}H\\Psi\\right)\n\\end{aligned}\n\\end{equation*}\n\nSince $H$ is hermitian, this is\n\\begin{equation*}\n\\begin{aligned}\n&-\\frac{1}{i\\hbar}\\left(H\\Psi,\\hat{x}\\Psi\\right) + \\frac{1}{i\\hbar}\\left(\\Psi,\\hat{x}H\\Psi\\right)\\\\\n&= -\\frac{1}{i\\hbar}\\left(\\Psi,H\\hat{x}\\Psi\\right)+\\frac{1}{i\\hbar}\\left(\\Psi,\\hat{x}H\\Psi\\right)\\\\\n&= \\frac{1}{i\\hbar}\\left(\\Psi,\\left(\\hat{x}H-H\\hat{x}\\right)\\Psi\\right)\n\\end{aligned}\n\\end{equation*}\nBut\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\hat{x}H-H\\hat{x}\\right)\\Psi &= \\frac{-\\hbar^2}{2m}\\left(x\\Psi'' - \\left(x\\Psi\\right)''\\right) + \\left(xV-Vx\\right)\\Psi\\\\\n&= +\\frac{\\hbar^2}{2m}2\\Psi'\\\\\n&= \\frac{i\\hbar}{m}\\hat{p}\\Psi\n\\end{aligned}\n\\end{equation*}\nas required.\n\nSimilarly,\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\left<\\hat{p}\\right>_\\Psi &= \\left(\\dot{\\Psi},\\hat{p}\\Psi\\right)+\\left(\\Psi,\\hat{p}\\dot{\\Psi}\\right)\\\\\n&= \\left(\\frac{1}{i\\hbar}H\\Psi,\\hat{p}\\Psi\\right) + \\left(\\Psi,\\hat{p}\\frac{1}{i\\hbar}H\\Psi\\right)\\\\\n&= \\frac{1}{i\\hbar}\\left(\\Psi,\\left(\\hat{p}H-H\\hat{p}\\right)\\Psi\\right)\n\\end{aligned}\n\\end{equation*}\nBut\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\hat{p}H-H\\hat{p}\\right)\\Psi &= -i\\hbar \\left(-\\frac{\\hbar^2}{2m}\\right)\\left(\\left(\\Psi''\\right)'-\\left(\\Psi'\\right)''\\right) (=0) - i\\hbar\\left(\\left(V\\Psi\\right)'-V\\Psi'\\right)\\\\\n&= -i\\hbar V'\\left(x\\right)\\Psi\n\\end{aligned}\n\\end{equation*}\nas required.\n\\end{proof}\n\n\\subsection{The Uncertainty Principle}\nIf $\\psi$ is any normalised state (at fixed time), define the \\emph{uncertainty} in position $\\left(\\Delta x\\right)_\\psi$ and momentum $\\left(\\Delta p\\right)_\\psi$ by\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\Delta x\\right)_\\psi^2 = \\left<\\left(\\hat{x}-\\left<\\hat{x}\\right>_\\psi\\right)^2\\right>_\\psi = \\left<\\hat{x}^2\\right>_\\psi - \\left<\\hat{x}\\right>_\\psi^2\n\\end{aligned}\n\\end{equation*}\nand the same formula for $\\left(\\Delta p\\right)_\\psi$.\n\nThese quantify the 'spread' of possible results of measurements.\n\nHeisenberg's Uncertainty Principle states\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\Delta x\\right)_\\psi \\left(\\Delta p\\right)_\\psi \\geq \\frac{\\hbar}{2}\n\\end{aligned}\n\\end{equation*}\nInterpretation: we can never reduce the combined uncertainty in measurements of position and momentum below this threshold.
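\n\nA numerical check (added, not in the original notes) that the Gaussian of the example below saturates the bound, computing $\\left(\\Delta x\\right)_\\psi \\left(\\Delta p\\right)_\\psi$ on a grid with $\\hbar = 1$ and an illustrative $\\alpha$:\n\\begin{verbatim}\nimport numpy as np\n\nN, L, alpha = 2001, 40.0, 2.0\nx = np.linspace(-L/2, L/2, N)\nh = x[1] - x[0]\npsi = (1 / (alpha * np.pi))**0.25 * np.exp(-x**2 / (2 * alpha))\n\nxm = np.sum(x * psi**2) * h       # <x>\nx2 = np.sum(x**2 * psi**2) * h    # <x^2>\ndpsi = np.gradient(psi, h)\npm = 0.0                          # <p> = 0 for real psi\np2 = np.sum(np.abs(dpsi)**2) * h  # <p^2> = int |psi'|^2 (by parts)\n\ndx = np.sqrt(x2 - xm**2)\ndp = np.sqrt(p2 - pm**2)\nprint(dx * dp)  # ~ 0.5 = hbar/2\n\\end{verbatim}\n\n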
Note: $X=\\hat{x}-\\alpha$ and $P=\\hat{p}-\\beta$ are both hermitian for any real $\\alpha,\\beta$.\n\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\psi,X^2\\psi\\right) = \\left(X\\psi,X\\psi\\right) = ||X\\psi||^2 \\geq 0,\\\\\n\\left(\\psi,P^2\\psi\\right) = \\left(P\\psi,P\\psi\\right) = ||P\\psi||^2 \\geq 0\n\\end{aligned}\n\\end{equation*}\nChoosing $\\alpha = \\left<\\hat{x}\\right>_\\psi$ and $\\beta = \\left<\\hat{p}\\right>_\\psi$, we deduce that $\\left(\\Delta x\\right)_\\psi^2$ and $\\left(\\Delta p\\right)_\\psi^2$ are indeed real and non-negative, as required in our definition.\n\n\\begin{eg}\nFor the Gaussian\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x\\right) = \\left(\\frac{1}{\\alpha\\pi}\\right)^\\frac{1}{4} e^{-x^2/2\\alpha}\n\\end{aligned}\n\\end{equation*}\nwe find\n\\begin{equation*}\n\\begin{aligned}\n\\left<\\hat{x}\\right>_\\psi = \\left<\\hat{p}\\right>_\\psi = 0\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\Delta x\\right)_\\psi^2 = \\alpha/2,\\left(\\Delta p\\right)_\\psi^2 = \\hbar^2 / 2\\alpha\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\left(\\Delta x\\right)_\\psi \\left(\\Delta p\\right)_\\psi = \\frac{\\hbar}{2}\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n\n\\end{document}", "meta": {"hexsha": "8953dce16bfb307c8060a1d1defb6f40e177de43", "size": 32896, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/Quantum Mechanics.tex", "max_stars_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_stars_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T17:34:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T17:34:25.000Z", "max_issues_repo_path": "Notes/Quantum Mechanics.tex", "max_issues_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_issues_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/Quantum Mechanics.tex", "max_forks_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_forks_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.0623781676, "max_line_length": 258, "alphanum_fraction": 0.6718142023, "num_tokens": 12350, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812554, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5864804496721592}}
{"text": "\\chapter{Transmission Derivation}\r\n\r\nThis is an expanded version of the analysis performed for Fig.~\\ref{fig:energydistrib}\r\n\r\n\\begin{equation}\r\nT = T(\\omega, x_o) = \\frac{(\\exp(-\\frac{L}{\\xi}))^2}{ \r\n(\\omega-\\omega _o)^2 + (\\exp(-\\frac{L}{\\xi})\\frac{1}{2}\\exp(\\frac{\\mid L-2 x_o \\mid}{\\xi}))^2 }\r\n\\label{fig:appendixtransmission}\r\n\\end{equation}\r\nWhere we have appoximated cosh() as $\\frac{1}{2}\\exp()$. \r\n\r\nFrom the behavior of transmission in media with defects and centers of localizaiton, we see two cases when on a resonant frequency:  either the defect is in the first half of the sample.%, (\\ref{fig:cononicaldefectpositions}, plot 1).\r\n\r\n\\begin{equation}\r\n{\\cal E}(x) = \\left\\{ \r\n\\begin{array}{l l}\r\n  A \\exp\\left(\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad 0 < x < x_o \\\\\r\n  B \\exp\\left(-\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad x_o < x < L\\\\\r\n\\end{array} \\right.\r\n\\label{fig:left}\r\n\\end{equation}\r\n\r\nor it is in the second half of the sample.%, (\\ref{fig:cononicaldefectpositions}, plot 2).\r\n\\begin{equation}\r\n{\\cal E}(x) = \\left\\{ \r\n\\begin{array}{l l}\r\n A \\exp\\left(-\\frac{2 x}{\\xi}\\right) & \\quad 0 < x < L-2 x_o \\\\\r\n B \\exp\\left(\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad L-2 x_o < x < x_o \\\\\r\n C \\exp\\left(-\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad x_o < x < L \\\\\r\n\\end{array} \\right.\r\n\\label{fig:right}\r\n\\end{equation}\r\n\r\nWith either case, we see three distinct regions when the frequency is not on resonance but prior to the pure exponential decay regimes. %(\\ref{fig:cononicaldefectpositions}, plot 3):\r\n\\begin{equation}\r\n{\\cal E}(x) = \\left\\{ \r\n\\begin{array}{l l}\r\ny_1 = A \\exp\\left(-\\frac{2 x}{\\xi}\\right)   & \\quad 0 < x < x_1  \\\\\r\ny_2 = B \\exp\\left(\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad x_1 < x < x_o  \\\\\r\ny_3 = C \\exp\\left(-\\frac{2 (x-x_o)}{\\xi}\\right) & \\quad  x_o < x < L \\\\\r\n\\end{array} \\right.\r\n\\label{fig:startingequations}\r\n\\end{equation}\r\nWhere $x_1$ is the turning point.\r\n\r\n\\begin{comment}\r\n%% pictures were removed, causes a problem during compile (?)\r\n\\begin{figure}\r\n\\vskip -0.5cm\r\n\\centerline{\r\n\\scalebox{0.5}{\\includegraphics{pictures/transmission_derivation_14_LR}}\r\n\\scalebox{0.5}{\\includegraphics{pictures/transmission_derivation_34_LR}}\r\n\\scalebox{0.5}{\\includegraphics{pictures/transmission_derivation_off_res_LR}}\r\n}\r\n\\vskip -0.5cm\r\n\\caption{Log of energy versus position sketches for a center of localization in the first half (left), Eq.~\\ref{fig:left}; and second half (center), Eq.~\\ref{fig:right}. The turning point in the center plot is $L-2 x_o$. The right sketch is off-resonance but prior to pure exponential decay, which applies to any center of localization position, as described by Eq.~\\ref{fig:startingequations}. The turning point in the right plot from the initial exponential decay to growth varies depending how far off resonance one is. We call this $x_1$}\r\n\\label{fig:cononicaldefectpositions}\r\n\\end{figure}\r\n\\end{comment}\r\n\r\nFrom the off-resonance Eq.~\\ref{fig:startingequations}, we apply boundry conditions to determine the coefficients in passive systems. We take gain into account and make corrections later.\r\n\r\nAt $x=0$ then $y_1=4$, giving $A=4$. Thus\r\n\\begin{equation}\r\n\\boxed{y_1 = 4 \\exp\\left(-\\frac{2 x}{\\xi}\\right)}   \\quad \\quad \\quad 0 < x < L-2 x_o \r\n\\end{equation}\r\n\r\nAt $x=L$, $y_3=T$. 
Solving for $C$,\r\n\\begin{equation}\r\nC=T \\exp\\left(\\frac{2(L-x_o)}{\\xi}\\right)\r\n\\end{equation}\r\nand plugging back in,\r\n\\begin{equation}\r\ny_3 = T \\exp\\left(\\frac{2(L-x_o)}{\\xi}\\right) \\exp\\left(-\\frac{2 (x-x_o)}{\\xi}\\right)  \\quad \\quad \\quad  x_o < x < L\r\n\\end{equation}\r\nwhich simplifies to\r\n\\begin{equation}\r\n\\boxed{y_3 = T \\exp\\left(-\\frac{2(x-L)}{\\xi}\\right)}  \\quad \\quad \\quad  x_o < x < L\r\n\\end{equation}\r\nAt $x_o$, $y_2 = y_3$:\r\n\\begin{equation}\r\nB \\exp\\left(\\frac{2 (x_o-x_o)}{\\xi}\\right) = B = T \\exp\\left(-\\frac{2(x_o-L)}{\\xi}\\right)\r\n\\end{equation}\r\nPlugging into $y_2$,\r\n\\begin{equation}\r\ny_2 = T \\exp\\left(-\\frac{2(x_o-L)}{\\xi}\\right) \\exp\\left(\\frac{2 (x-x_o)}{\\xi}\\right) \\quad \\quad \\quad L-2 x_o < x < x_o \r\n\\end{equation}\r\nwhich simplifies to\r\n\\begin{equation}\r\n\\boxed{y_2 = T \\exp\\left(\\frac{2(x+L-2 x_o)}{\\xi}\\right)} \\quad \\quad \\quad L-2 x_o < x < x_o \r\n\\end{equation}\r\n\r\nThe turning point $x_1$ varies as a function of frequency. Boundary conditions on $x_1$ are that it should remain less than $x_o$ and that it be non-negative. To solve for $x_1$, set $y_1=y_2$:\r\n\\begin{equation}\r\n4 \\exp\\left(-\\frac{2 x_1}{\\xi}\\right) = T \\exp\\left(\\frac{2(x_1+L-2 x_o)}{\\xi}\\right)\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\boxed{x_1(\\omega) = -\\frac{\\xi}{4} \\log(\\frac{1}{4} T) - \\frac{1}{2}L + x_o}\r\n\\end{equation}
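\r\n\r\nA quick numerical sanity check (added; parameter values are illustrative only) that the boxed $x_1$ indeed satisfies $y_1(x_1)=y_2(x_1)$:\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\nxi, L, xo, T = 1.0, 10.0, 7.0, 1e-3   # illustrative parameters\r\nx1 = -xi/4 * np.log(T/4) - L/2 + xo\r\n\r\ny1 = 4 * np.exp(-2*x1/xi)\r\ny2 = T * np.exp(2*(x1 + L - 2*xo)/xi)\r\nprint(x1, y1, y2)   # y1 == y2 at the turning point\r\n\\end{verbatim}\r\n\r\n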
Knowing $y_1,y_2$, and $y_3$ we can integrate to find the energy ${\\cal E}$:\r\n\\begin{equation}\r\n\\begin{gathered}\r\n{\\cal E}(x,\\omega) = \\int _0 ^{x_1} 4 \\exp\\left(-\\frac{2 x}{\\xi}\\right) dx +\r\n    \\int _{x_1} ^{x_o} T \\exp\\left(\\frac{2(x+L-2 x_o)}{\\xi}\\right) dx + \\\\\r\n    \\int _{x_o} ^{L} T \\exp\\left(-\\frac{2(x-L)}{\\xi}\\right) dx\r\n\\label{fig:Eintegral}\r\n\\end{gathered}\r\n\\end{equation}\r\n\r\nSolving and reducing gives the energy as a function of frequency and transmission:\r\n\\begin{equation}\r\n\\begin{gathered}\r\n{\\cal E}(x,\\omega) = \\frac{\\xi}{2} \\left( T \\left( \\exp\\left(\\frac{2(L-x_o)}{\\xi}\\right) - \\exp\\left(\\frac{2(L-2 x_o+x_1)}{\\xi}\\right)+\\right. \\right.\\\\\r\n\\left.\\left.\\exp\\left(\\frac{2(L-x_o)}{\\xi}\\right) -1 \\right) + 4 \\left(1-\\exp\\left(-\\frac{2 x_1}{\\xi}\\right)\\right) \\right)\r\n\\label{fig:appendixenergy}\r\n\\end{gathered}\r\n\\end{equation}\r\n\r\n", "meta": {"hexsha": "1bf49ed21a2f38486388896c7687ed5bf86120bb", "size": 5384, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_transmission_derivation.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_transmission_derivation.tex", "max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_transmission_derivation.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.4958677686, "max_line_length": 543, "alphanum_fraction": 0.6556463596, "num_tokens": 1970, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478255, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.586480447632227}}
{"text": "\\documentclass{article}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[utf8]{inputenc}\n\\usepackage{mathtools}\n\\usepackage{enumitem}\n\\usepackage{fancyhdr}\n\\usepackage{chngcntr}\n\\usepackage[table,xcrdaw]{xcolor}\n\\lhead{Evan Kohilas - z5114986}\n\\rhead{COMP3821 - Assignment 2}\n\\pagestyle{fancy}\n\\title{A17S1N1}\n\\counterwithin*{equation}{section}\n\\begin{document}\n\\begin{center}\n    \\begin{LARGE}\n        COMP3121\\\\\n        Assignment 2\\\\\n        A17S1N2\\\\\n        \\hrulefill\\\\\n        Evangelos Kohilas\\\\\n        z5114986\\\\\n        \\hrulefill\n    \\end{LARGE}\n\n    \\begin{large}\n        By submitting this document you are confirming that all the answers are your work and are not take from any other sources unless clearly mentioned.\n    \\end{large}\n\n\\end{center}\n\n\\section*{Question 1}\n\\begin{enumerate}[label=\\alph*)]\n    \\item\n        \\begin{gather}\n            \\text{let } A^n\n            = \\begin{pmatrix} F(n+1) & F(n) \\\\ F(n) & F(n-1)  \\end{pmatrix}\n        \\end{gather}\n        When $n = 1$\n        \\begin{gather}\n            A^1 = \\begin{pmatrix} 1 & 1 \\\\ 1 & 0 \\end{pmatrix}\n        \\end{gather}\n        Assume $n = k$\n        \\begin{gather}\n            A^k = \\begin{pmatrix} F(k+1) & F(k) \\\\ F(k) & F(k-1)  \\end{pmatrix}\n        \\end{gather}\n        let $n = k + 1$\n        \\begin{align}\n            A^{k+1} &= AA^k \\\\\n            &= \\begin{pmatrix} 1 & 1 \\\\ 1 & 0 \\end{pmatrix}\\begin{pmatrix} F(k+1) & F(k) \\\\ F(k) & F(k-1)\\end{pmatrix} \\\\\n            &= \\begin{pmatrix} F(k+1) + F(k) & F(k) + F(k-1) \\\\ F(k+1) & F(k)  \\end{pmatrix} \\\\\n            &= \\begin{pmatrix} F(k+2) & F(k+1) \\\\ F(k+1) & F(k)  \\end{pmatrix}\n        \\end{align}\n        Therefore the formula is true by induction for all $n > 0$.\n    \\item\n        $F(n)$ can be found in $\\log n$ matrix multiplications using a recursive algorithm\n        \\begin{verbatim}\nmatrix = ((1, 1), (1,0))\nfunc(n):\n    if n == 1:\n        return matrix\n    if n is even:\n        return func(n/2)^2\n    if n is odd:\n        return func(n-1) * matrix\n        \\end{verbatim}\n\\end{enumerate}\n\n\\section*{Question 2}\n\\begin{enumerate}[label=\\alph*)]\n    \\item\n        First we calculate the Karastuba trick.\n        \\setcounter{equation}{0}\n        \\begin{gather}\n            (a + b)(c + d) = ac + ad + bd + bc \\\\\n            ad + bd = (a + b)(c + d) - ac - bc\n        \\end{gather}\n        Then we substitute $(2)$ into $(4)$\n        \\begin{align}\n            (a + ib)(c + id) &= ac + adi + bdi + bc \\\\\n            &= ac + i(ad + bd) + bc \\\\\n            &= ac + i((a + b)(c + d) - ac - bc) + bc\n        \\end{align}\n        Thus only requiring 3 real number multiplications.\n    \\item\n        \\setcounter{equation}{0}\n        First we calculate\n        \\begin{gather}\n            a^2 - b^2 = (a + b)(a - b)\n        \\end{gather}\n        Then we substitute $(1)$ into $(2)$\n        \\begin{align}\n            (a + ib)^2 &= a^2 - b^2 + 2abi\\\\\n            &= (a + b)(a - b) + 2abi\n        \\end{align}\n        Thus only requiring 2 real number multiplications\n    \\item\n        \\setcounter{equation}{0}\n        By re-arranging by the laws of exponents:\n        \\begin{gather*}\n            (a+ib)^2(c+id)^2 = ((a+ib)(c+id))^2\n        \\end{gather*}\n        Thus from above, we then calculate the middle multiplication using\n        3 real number multiplications, and then we find the 
\\section*{Question 2}\n\\begin{enumerate}[label=\\alph*)]\n    \\item\n        First we calculate the Karatsuba trick.\n        \\setcounter{equation}{0}\n        \\begin{gather}\n            (a + b)(c + d) = ac + ad + bc + bd \\\\\n            ad + bc = (a + b)(c + d) - ac - bd\n        \\end{gather}\n        Then we substitute $(2)$ into the expansion of the product\n        \\begin{align}\n            (a + ib)(c + id) &= ac - bd + i(ad + bc) \\\\\n            &= ac - bd + i((a + b)(c + d) - ac - bd)\n        \\end{align}\n        Thus only the 3 real number multiplications $ac$, $bd$ and $(a+b)(c+d)$ are required.\n    \\item\n        \\setcounter{equation}{0}\n        First we calculate\n        \\begin{gather}\n            a^2 - b^2 = (a + b)(a - b)\n        \\end{gather}\n        Then we substitute $(1)$ into the expansion of the square\n        \\begin{align}\n            (a + ib)^2 &= a^2 - b^2 + 2abi\\\\\n            &= (a + b)(a - b) + 2abi\n        \\end{align}\n        Thus only 2 real number multiplications are required ($2ab$ costs one multiplication, since doubling is an addition).\n    \\item\n        \\setcounter{equation}{0}\n        By re-arranging by the laws of exponents:\n        \\begin{gather*}\n            (a+ib)^2(c+id)^2 = ((a+ib)(c+id))^2\n        \\end{gather*}\n        Thus from above, we can calculate the middle multiplication using 3 real number multiplications, and then find the square as above using 2 more real number multiplications.\n\\end{enumerate}\n\n\\section*{Question 3}\nExpand $P(x)$ and $Q(x)$ as follows.\n\\begin{align*}\n    & P(x) = a_0 + x^{17}(a_{17} + a_{19}x^{2} + a_{21}x^4 + a_{23}x^6)\\\\\n    & Q(x) = b_0 + x^{17}(b_{17} + b_{19}x^{2} + b_{21}x^4 + b_{23}x^6)\n\\end{align*}\nLet $y = x^2$ so that\n\\begin{align*}\n    & R_a(y) = a_{17} + a_{19}y + a_{21}y^2 + a_{23}y^3\\\\\n    & R_b(y) = b_{17} + b_{19}y + b_{21}y^2 + b_{23}y^3\n\\end{align*}\nthen\n\\begin{align*}\n    & P(x)Q(x) = a_0b_0 + x^{17}(a_0R_b(x^2) + b_0R_a(x^2)) + x^{34}R_a(x^2)R_b(x^2)\n\\end{align*}\nBy brute force, we calculate\n\\begin{align*}\n    & a_0b_0 \\text{ to require 1 multiplication}\\\\\n    & a_0R_b(x^2) \\text{ and } b_0R_a(x^2) \\text{ to require 4 multiplications each}\n\\end{align*}\nthen to multiply $R_a(x^2)R_b(x^2)$ (of degree 3), we require $2(3)+1 = 7$ multiplications using the generalised Karatsuba method.\\\\\nSo we get $1 + 4 + 4 + 7 = 16$ multiplications of large numbers in total.\n\n\\section*{Question 4}\nAs $P(x)$ has all $15$ of the $15^{th}$ roots of unity as roots, and $x^{15} - 1$ and $P(x)$ are of the same degree and are both monic, it follows that\n\\begin{gather*}\nP(x) = x^{15} -1\n\\end{gather*}\n\n\\section*{Question 5}\nFor any input $(a_0, a_1, a_2, ..., a_{2^n-1})$, we can describe $a_i$'s new position by writing $i$ in $n$ binary digits and reversing the digit sequence.\ne.g. $6 \\rightarrow 110 \\rightarrow 011 \\rightarrow 3$\n\n\\section*{Question 6}\nLet\n\\begin{align}\n    f_m &= \\sum_{i+j=m} (j+1)q_jq_i \\\\\n    p_j &= (j+1)q_j\n\\end{align}\nthen substitute $(2)$ into $(1)$\n\\begin{align}\n    & \\sum_{i+j=m} p_jq_i = \\vec{p} \\ast \\vec{q}\n\\end{align}\nAs $(3)$ is a linear convolution, $f_m$ can be computed in $O(n\\log n )$ time by transforming $\\vec{p}$ and $\\vec{q}$ to a point value representation using FFT, multiplying pointwise, and then transforming back to coefficient form using inverse FFT (a runnable sketch follows).
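\n\nA short runnable sketch of the Question 6 computation (added; NumPy used for illustration):\n\\begin{verbatim}\nimport numpy as np\n\nq = np.array([1.0, 2.0, 0.5, 3.0])       # example coefficients\np = (np.arange(len(q)) + 1) * q          # p_j = (j+1) q_j\n\nn = 2 * len(q) - 1                       # linear convolution length\nf = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)\n\n# brute-force check of f_m = sum_{i+j=m} p_j q_i\nfb = [sum(p[j] * q[i] for j in range(len(q)) for i in range(len(q))\n          if i + j == m) for m in range(n)]\nprint(np.allclose(f, fb))  # True\n\\end{verbatim}\n\n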
\\pagebreak\n\\section*{Question 7}\n\\begin{enumerate}[label=\\alph*)]\n    \\item Starting at $(0,0)$, any spiral arrangement such as the one below would require $n^2$ queries to find the middle element as the local minimum.\n        \\begin{center}\n\\texttt{%\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\rowcolor[HTML]{E6E6E6}\n16                        & 15                        & 14                        & 13 & 12                         \\\\ \\hline\n17                        & 18                        & 19                        & 20 & \\cellcolor[HTML]{E6E6E6}11 \\\\ \\hline\n\\cellcolor[HTML]{E6E6E6}2 & \\cellcolor[HTML]{E6E6E6}1 & \\cellcolor[HTML]{E6E6E6}0 & 21 & \\cellcolor[HTML]{E6E6E6}10 \\\\ \\hline\n\\cellcolor[HTML]{E6E6E6}3 & 24                        & 23                        & 22 & \\cellcolor[HTML]{E6E6E6}9  \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n4                         & 5                         & 6                         & 7  & 8                          \\\\ \\hline\n\\end{tabular}\n}\n        \\end{center}\n    \\item Using the fact that a surface attains its minimal height either along its edges or in the interior, we can develop a divide-and-conquer algorithm by first finding the minimal height along a grid's boundary rows \\& columns, and center rows \\& columns, as follows for an odd and even $n$.\n\\begin{center}\n\\texttt{%\n\\begin{tabular}{|\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |}\n\\hline\n\\rowcolor[HTML]{E6E6E6}\n25 & 75 & 63 & 34 & 9 & 45 & 49 & 57 & 77 \\\\ \\hline\n3  & 58 & 6 & 51 & 4 & 27 & 39 & 18 & 76 \\\\ \\hline\n62 & 33 & 46 & 59 & 36                        & 11 & 50 & 5  & 42 \\\\ \\hline\n10 & 0  & 15 & 60 & 55                        & 35 & 74 & 28 & 14 \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n30 & 68 & 78 & 21 & 31                        & 29 & 54 & 73 & 65 \\\\ \\hline\n40 & 48 & 80 & 43 & 44                        & 38 & 37 & 23 & 72 \\\\ \\hline\n19 & 7  & 22 & 32 & \\cellcolor[HTML]{CCCCCC}2 & 1  & 67 & 70 & 71 \\\\ \\hline\n64 & 61 & 56 & 8  & 79                        & 16 & 52 & 17 & 69 \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n13 & 47 & 12 & 20 & 66                        & 41 & 26 & 24 & 53 \\\\ \\hline\n\\end{tabular}\n\\quad\n\\begin{tabular}{|\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |}\n\\hline\n\\rowcolor[HTML]{E6E6E6}\n16 & 8  & 25 & 24 & 22 & 38 & 37 & 53 \\\\ \\hline\n54 & 3  & 27 & 17 & 57 & 10 & 35 & 62 \\\\ \\hline\n4  & 28 & 5  & 52 & 34 & 56 & 42 & 18 \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n20 & 12 & 6  & 11 & 21 & 49 & 26 & 61 \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n46 & 33 & 2  & 60 & 40 & 41 & 58 & 14 \\\\ \\hline\n48 & 47 & 51 & 9  & 43 & 13 & 23 & 19 \\\\ \\hline\n45 & 7  & 30 & 63 & 36 & 50 & 0  & 29 \\\\ \\hline\n\\rowcolor[HTML]{E6E6E6}\n39 & 44 & 15 & 59 & 55 & 31 & \\cellcolor[HTML]{CCCCCC}1  & 32 \\\\ \\hline\n\\end{tabular}\n}\n\\end{center}\nWe then check to see if the minimal height is a local minimum.\\\\\nIf it is, we return it; otherwise we recurse into the quadrant of its smallest neighbour (including our original boundary), since there must be a minimum in that direction, as we would be following the slope of the surface downhill. 
We do this as follows.\n\\begin{center}\n\\texttt{%\n\\begin{tabular}{|\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|c|\n>{\\columncolor[HTML]{CCCCCC}}c |c|c|c|\n>{\\columncolor[HTML]{CCCCCC}}c |}\n\\hline\n25 & \\cellcolor[HTML]{E6E6E6}75 & \\cellcolor[HTML]{E6E6E6}63 & \\cellcolor[HTML]{E6E6E6}34 & \\cellcolor[HTML]{E6E6E6}9  & \\cellcolor[HTML]{E6E6E6}45 & \\cellcolor[HTML]{E6E6E6}49 & \\cellcolor[HTML]{E6E6E6}57 & \\cellcolor[HTML]{E6E6E6}77 \\\\ \\hline\n3  & 58                         & 6                          & 51                         & \\cellcolor[HTML]{E6E6E6}4  & 27                         & 39                         & 18                         & \\cellcolor[HTML]{E6E6E6}76 \\\\ \\hline\n62 & 33                         & 46                         & 59                         & \\cellcolor[HTML]{E6E6E6}36 & 11                         & 50                         & 5                          & \\cellcolor[HTML]{E6E6E6}42 \\\\ \\hline\n10 & 0                          & 15                         & 60                         & \\cellcolor[HTML]{E6E6E6}55 & 35                         & 74                         & 28                         & \\cellcolor[HTML]{E6E6E6}14 \\\\ \\hline\n30 & \\cellcolor[HTML]{E6E6E6}68 & \\cellcolor[HTML]{E6E6E6}78 & \\cellcolor[HTML]{E6E6E6}21 & 31                         & \\cellcolor[HTML]{CCCCCC}29 & \\cellcolor[HTML]{CCCCCC}54 & \\cellcolor[HTML]{CCCCCC}73 & 65                         \\\\ \\hline\n40 & 48                         & 80                         & 43                         & 44                         & 38                         & \\cellcolor[HTML]{CCCCCC}37 & 23                         & 72                         \\\\ \\hline\n19 & 7                          & 22                         & 32                         & 2                          & \\cellcolor[HTML]{B3B3B3}1  & \\cellcolor[HTML]{CCCCCC}67 & \\cellcolor[HTML]{CCCCCC}70 & 71                         \\\\ \\hline\n64 & 61                         & 56                         & 8                          & 79                         & 16                         & \\cellcolor[HTML]{CCCCCC}52 & 17                         & 69                         \\\\ \\hline\n13 & \\cellcolor[HTML]{E6E6E6}47 & \\cellcolor[HTML]{E6E6E6}12 & \\cellcolor[HTML]{E6E6E6}20 & 66                         & \\cellcolor[HTML]{CCCCCC}41 & \\cellcolor[HTML]{CCCCCC}26 & \\cellcolor[HTML]{CCCCCC}24 & 53                         \\\\ \\hline\n\\end{tabular}\n\\quad\n\\begin{tabular}{|\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |\n>{\\columncolor[HTML]{E6E6E6}}c |c|c|\n>{\\columncolor[HTML]{E6E6E6}}c |}\n\\hline\n16 & \\cellcolor[HTML]{E6E6E6}8  & \\cellcolor[HTML]{E6E6E6}25 & 24 & 22                         & \\cellcolor[HTML]{E6E6E6}38 & \\cellcolor[HTML]{E6E6E6}37 & 53                         \\\\ \\hline\n54 & 3                          & 27                         & 17 & 57                         & 10                         & 35                         & 62                         \\\\ \\hline\n4  & 28                         & 5                          & 52 & 34                         & 56                         & 42                         & 18                         \\\\ \\hline\n20 & \\cellcolor[HTML]{E6E6E6}12 & \\cellcolor[HTML]{E6E6E6}6  & 11 & 21                         & \\cellcolor[HTML]{E6E6E6}49 & \\cellcolor[HTML]{E6E6E6}26 & 61                         \\\\ \\hline\n46 & \\cellcolor[HTML]{E6E6E6}33 & \\cellcolor[HTML]{E6E6E6}2  & 60 & 
\\cellcolor[HTML]{CCCCCC}40 & \\cellcolor[HTML]{CCCCCC}41 & \\cellcolor[HTML]{CCCCCC}58 & \\cellcolor[HTML]{CCCCCC}14 \\\\ \\hline\n48 & 47                         & 51                         & 9  & \\cellcolor[HTML]{CCCCCC}43 & \\cellcolor[HTML]{CCCCCC}13 & \\cellcolor[HTML]{CCCCCC}23 & \\cellcolor[HTML]{CCCCCC}19 \\\\ \\hline\n45 & 7                          & 30                         & 63 & \\cellcolor[HTML]{CCCCCC}36 & \\cellcolor[HTML]{CCCCCC}50 & \\cellcolor[HTML]{B3B3B3}0  & \\cellcolor[HTML]{CCCCCC}29 \\\\ \\hline\n39 & \\cellcolor[HTML]{E6E6E6}44 & \\cellcolor[HTML]{E6E6E6}15 & 59 & \\cellcolor[HTML]{CCCCCC}55 & \\cellcolor[HTML]{CCCCCC}31 & \\cellcolor[HTML]{CCCCCC}1  & \\cellcolor[HTML]{CCCCCC}32 \\\\ \\hline\n\\end{tabular}\n}\n\\end{center}\nEach step reduces the $n \\times n$ matrix to a $\\sim\\frac{n}{2} \\times \\frac{n}{2}$ matrix using $O(n)$ queries, giving a recurrence we can expand directly.\n\\begin{align*}\n    T(n) &= T(n/2) + cn\\\\\n    T(n) &= T(n/4) + c\\frac{n}{2} + cn\\\\\n    T(n) &= T(n/8) + c\\frac{n}{4} + c\\frac{n}{2} + cn\\\\\n    &\\quad\\quad\\quad\\quad\\vdots\\\\\n    T(n) &= T(1) + cn(1 + \\frac{1}{2} + \\frac{1}{4} + \\ldots)\\\\\n    T(n) &= T(1) + 2cn\n\\end{align*}\nThus, our algorithm is of $\\Theta(n)$ time.\n\\end{enumerate}\n\n\\section*{Question 8}\n\\begin{align*}\n    \\text{let } &A(k) = k^{th} \\text{smallest element of } A\\\\\n    \\text{let } &B(k) = k^{th} \\text{smallest element of } B\\\\\n    \\text{let } &i \\text{ be the current iteration (starting from 1)}\\\\\n    \\text{let } &s_f(i) = \\lfloor\\frac{n}{2^i}\\rfloor, s_c(i) = \\lceil\\frac{n}{2^i}\\rceil\\\\\n    \\text{let } &a = s_f(1), b = s_c(1)\\\\\n    \\text{let } &m_a = A(a + 1), m_b = B(b + 1)\\\\\n    \\text{let } &N \\text{ be the set of } n \\text{ elements such that } \\forall x \\in N \\cap A, x < m_a \\text{ and } \\forall x \\in N \\cap B, x < m_b\n\\end{align*}\nWhen all elements in $N$ are smaller than $m_a$ and $m_b$, the median will be the smaller of $m_a$ and $m_b$.\\\\\n\n\\noindent\nSo we iterate.\n\n\\noindent\nIf $B(b) > m_a$ then there exist elements in $N$ that are not smaller than $m_a$:\\\\\n    \\indent Therefore, we change $m_a$ and $m_b$ and increment $i$.\\\\\n    \\indent We increase $a$ by $s_c(i)$ to consider elements that may be smaller than the median.\\\\\n    \\indent We decrease $b$ by $s_f(i)$ to ignore elements that may be larger than the median.\\\\\nelse if $A(a) > m_b$ then there exist elements in $N$ that are not smaller than $m_b$:\\\\\n    \\indent Therefore, we change $m_a$ and $m_b$ and increment $i$.\\\\\n    \\indent We decrease $a$ by $s_f(i)$ to ignore elements that may be larger than the median.\\\\\n    \\indent We increase $b$ by $s_c(i)$ to consider elements that may be smaller than the median.\\\\\nelse:\\\\\n    \\indent We stop iterating, as all elements of $N$ are smaller than both $m_a$ and $m_b$, and we take the median to be the smaller of the two.\\\\\n\n\\noindent This algorithm will take at most $O(\\log n)$ queries, as the jump size halves with every iteration, similar to a binary search.\\\\\n\n\\noindent\nNote: if $a + 1 > n$ assume $m_b$ to be the median.
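\n\nA runnable sketch of the same idea (added; a standard bisection variant, not the submission's exact indexing):\n\\begin{verbatim}\ndef median_two(A, B):\n    # lower median of sorted A, B of equal length n in O(log n) probes\n    n = len(A)\n    lo, hi = 0, n\n    while lo <= hi:\n        k = (lo + hi) // 2        # take k elements of A into the lower half\n        if k < n and B[n-k-1] > A[k]:\n            lo = k + 1            # k too small\n        elif k > 0 and A[k-1] > B[n-k]:\n            hi = k - 1            # k too large\n        else:\n            left_a = A[k-1] if k > 0 else float('-inf')\n            left_b = B[n-k-1] if k < n else float('-inf')\n            return max(left_a, left_b)\n\nprint(median_two([1, 3, 8, 9], [2, 4, 5, 7]))  # 4\n\\end{verbatim}\n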
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ass_2/z5114986.tex", "max_issues_repo_name": "ekohilas/comp3821", "max_issues_repo_head_hexsha": "135d142dae4f33fe1b8b7add673386caa15f9fa3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ass_2/z5114986.tex", "max_forks_repo_name": "ekohilas/comp3821", "max_forks_repo_head_hexsha": "135d142dae4f33fe1b8b7add673386caa15f9fa3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.1858108108, "max_line_length": 295, "alphanum_fraction": 0.5256142713, "num_tokens": 5290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.8056321913146127, "lm_q1q2_score": 0.5864804466162102}}
{"text": "\\documentclass{amsart}\n\n\\title{LocalGraph Abstract Data Type}\n\\author{Todd D. Vance}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle{}\n\n\\section{Local Graph}\n\nA local graph (modeling a directed graph, loops and multiple edges allowed, from which only a node and its immediate neighborhood are visible at any one time) is actually a system of ADTs.  It consists of a single Cursor, and a finite number of Nodes and outgoing Edges indexed by Directions from a node.  In addition, there is a concept of Items. An Item belongs to something, either a Node, another Item, the Cursor, or it belongs to nothing (the Nil object). The motivation for Items is Zork-like text adventure games, in which a Node is a room, the Cursor is the player, and Directions are connections to other rooms.  Items can then be in a room, on the player, in or on another item, or nowhere (for a period of time).\n\nThe axioms are as folows:\n\n\\begin{enumerate}\n\\item There exists a unique Cursor object $c$.\n\\item There exists a unique Nil object $\\epsilon$.\n\\item There exists a unique Node object $n$ such that $c\\in{n}$.\n\\item For each Direction $d$ and Node object $n$, either $n.d=\\epsilon$ or there exists a unique Node object $m$ such that $n.d=m$.\n\\item For each Item object $i$, exactly one of the following hold:\n\\begin{enumerate}\n\\item $i\\in\\epsilon$.\n\\item $i\\in{c}$.\n\\item There exists a unique Node object $n$ such that $i\\in{n}$\n\\item There exists a unique Item object $j$ such that $i\\in{j}$\n\\end{enumerate}\n\\item There are no circular inclusions among items; that is for no sequence of Item objects $i_1, i_2, \\dots, i_n$ does it happen that $i_1\\in i_2 \\in \\cdots\\in i_n\\in i_1$.\n\\end{enumerate}\n\nFor practical purposes, some query operations are permitted:\n\n\\begin{enumerate}\n\\item If $n$ is a Node object, then $\\mathrm{dir}(n)$ is a set containing all directions  $d$ such that $n.d\\ne\\epsilon$.\n\\item $\\mathrm{loc}(c)$ is the unique Node object $n$ such that $n\\in{c}$.\n\\item $\\mathrm{loc}(i)=o$ is equal to the object $o$ which must be exactly one of the following:\n\\begin{enumerate}\n\\item $o=\\epsilon$ if $i\\in\\epsilon$.\n\\item $o=n$ if $n$ is a Node object and $i\\in{n}$.\n\\item $o=j$ if  $j$ is an Item object and $i\\in{j}$.\n\\item $o=c$ if $i\\in{c}$.\n\\end{enumerate}\n\\end{enumerate}\n\nOptional operations provide more functionality:\n\\begin{enumerate}\n\\item If $n$ is a Node object, then $itm(n)$ is the set of all Item objects $i$ such that $i\\in{n}$.\n\\item If $i$ is an Item object, then $itm(i)$ is the set of all Item objects $j$ such that $j\\in{i}$\n\\item $itm(c)$ is the set of Item objects $i$ such that $i\\in{c}$.\n\\item $itm(\\epsilon)$ is the set of Item objects $i$ such that $i\\in\\epsilon$.\n\\item $itm()$ is the set of all Item objects.\n\\item $nod()$ is the set of all Node objects.\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "4b2d7395b4f60cb7f731f81d1c1053c6dd8b50b0", "size": 2809, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/local_graph_adt.tex", "max_stars_repo_name": "tdvance/LocalGraph", "max_stars_repo_head_hexsha": "c927947391c04e9e6870e0edcfef6e2ffe2a4f7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/local_graph_adt.tex", "max_issues_repo_name": "tdvance/LocalGraph", "max_issues_repo_head_hexsha": 
"c927947391c04e9e6870e0edcfef6e2ffe2a4f7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/local_graph_adt.tex", "max_forks_repo_name": "tdvance/LocalGraph", "max_forks_repo_head_hexsha": "c927947391c04e9e6870e0edcfef6e2ffe2a4f7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0727272727, "max_line_length": 724, "alphanum_fraction": 0.7173371307, "num_tokens": 830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812554, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5864804449175834}}
{"text": "\\section{Observation model}\n\\label{sec:observationModel}\nFigure \\ref{fig:observModel} gives an overview of the Observation Model as implemented in the current version of PharmML, which covers only continuous data models. A future release will cover discrete data models, such as categorical, count and time-to-event (greyed out in the figure).\n\\begin{figure}[h!]\n\\centering\n \\includegraphics[height=75mm]{observationalModel}\n\\caption{Observation Model}\n\\label{fig:observModel}\n\\end{figure}\nAn essential component of the Observation Model is the Residual Error Model, which applies only to continuous data models.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Residual error model}\n\\label{sec:residualErrorModel}\n\\label{maths:error_model}\n\\label{maths:combined-err-model}\nIn this section we consider different forms of the residual error, i.e. this section is about $g$ in the term\n\\begin{align*}\ng(x_{ij}, \\psi_{i}, \\xi) \\epsilon_{ij}\n\\end{align*}\nof eq.\\ref{eq:nlmeModel} with $\\epsilon_{ij} \\sim N(0, 1)$, i.e. a standard normally distributed random variable. We distinguish between\n\\begin{itemize}\\addtolength{\\itemsep}{-.95\\baselineskip}\n\\item\nmodels for \\textbf{untransformed} data\n\\begin{align*}\n \\underbrace{ y_{ij}}_{\\text{\\parbox{2cm}{\\centering Experimental \\\\[-4pt]  data}}} =\n \\underbrace{ f(x_{ij}, \\psi_{i})}_{\\text{\\parbox{2.5cm}{\\centering Model \\\\[-4pt]  prediction}}} +\n \\underbrace{ g(x_{ij}, \\psi_{i}, \\xi_i) \\; \\epsilon_{ij}}_{\\text{\\parbox{3cm}{\\centering Residual \\\\[-4pt] error}}}\n \\end{align*}\n \\item\n\\textbf{transform-both-sides} models\n\\begin{eqnarray}\n \\underbrace{ u(y_{ij})}_{\\text{\\parbox{2cm}{\\centering Transformed \\\\[-4pt] experimental \\\\[-4pt]  data}}} =\n \\underbrace{ u\\big(f(x_{ij}, \\psi_{i})\\big)}_{\\text{\\parbox{2.5cm}{\\centering Transformed \\\\[-4pt]  model \\\\[-4pt]  prediction}}} +\n \\underbrace{ g(x_{ij}, \\psi_{i}, \\xi_i) \\; \\epsilon_{ij}}_{\\text{\\parbox{3cm}{\\centering Residual \\\\[-4pt] error}}} \\nonumber\n \\end{eqnarray}\n \\item\nand \\textbf{implicit} models\n\\begin{eqnarray}\n \\underbrace{ u(y_{ij})}_{\\text{\\parbox{2.5cm}{\\centering Transformed \\\\[-4pt] experimental  data}}} =\n \\underbrace{ U\\big(f(x_{ij}, \\psi_{i}),\\xi_i, \\epsilon_{1,ij}, \\epsilon_{2,ij}, \\dots\\big)}_{\\text{\\parbox{2.5cm}{\\centering Transformed \\\\[-4pt]  model prediction}}}\n \\end{eqnarray}\n\\end{itemize}\nThe \\textit{untransformed} form is a special case of the \\textit{transform-both-sides} form with $u \\equiv Id$, i.e. the identity transformation.\nThen for models of both types with $\\epsilon_{ij}$ being normally distributed with mean 0 and variance 1, $u(y_{ij})$ is also normally distributed\nwith mean $u(f(x_{ij}, \\psi_{i}))$ and the standard deviation $g(x_{ij}, \\psi_{i}, \\xi_i)$. \\\\\nPossible extensions to the basic models are\n\\begin{itemize}\n\\item\nwhen more than one random variable is applied, i.e. 
multiple $\\epsilon$'s,\n\\item\nwhen more than one type of measurement or observation is defined, or\n\\item\nwhen variability, as discussed in section \\ref{sec:variabilityModel}, is applied to parameters of the residual error model (see section \\ref{subsec:varModelResidualError} for details).\n\\end{itemize}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Incorporating variability on the residual error model parameters}\n\\label{subsec:varModelResidualError}\nIn analogy to the nested hierarchical structure for the variability on the individual parameters,\nvariability on residual error model parameters can be defined using the same structure.\nIn this way, no new structure is necessary to account for any inter-individual and/or inter-occasion variability of the residual error model parameters.\n\nThis allows \\pharmml to cover the so-called 'ETA-on-EPS' approach -- i.e. IIV on the residual error model parameters, in other words a residual\nerror magnitude that varies between individuals; see Figure \\ref{fig:IOV0_residualError}.\n\\begin{figure}[htb!]\n\\centering\n  \\includegraphics[width=125mm]{pics/IOV0_residualError}\n \\caption{Inter-individual variability of the residual error parameter $a$. The nested hierarchical structure is identical to that of structural model parameters.}\n \\label{fig:IOV0_residualError}\n\\end{figure}\nFor example, if an additive residual error model and a log-normal distribution for $a$ are assumed, then the parameter model reads\n\\begin{align*}\n\t& \\log(a_i) = \\log(a_{pop}) + \\eta_a, \\quad  \\eta_a \\sim \\mathcal{N}(0,\\omega_a^2)\n\\end{align*}\nand the observation model reads\n\\begin{align*}\n\t& y_{ij} \\sim \\mathcal{N}(f_{ij},a_i^2): \\quad y_{ij} = f_{ij} + a_i \\epsilon_{ij}, \\quad \\epsilon_{ij} \\sim \\mathcal{N}(0,1).\n\\end{align*}\n%See also section \\ref{modelKK_RM1} for three IIV and IOV examples with NMTRAN and MLXTRAN code.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Residual error model examples}\n\\label{subsec:modelExamples}\nCurrently, there is no library of residual error models, but this might change in the future. 
All of the following residual error model examples and their different versions can be implemented in the present version of PharmML:\n\\begin{itemize}\n\\item\nConstant/additive:\n\\begin{align*}\n& y_{ij} = f_{ij} + a \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)  \\\\\n\\text{or} \\quad & y_{ij} = f_{ij} + \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,\\sigma^2)\n\\end{align*}\n\\item\nProportional or constant CV (CCV):\n\\begin{align*}\n&y_{ij} =  f_{ij} + bf_{ij} \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)  \\\\\n\\text{or} \\quad & y_{ij} =  f_{ij}(1+\\epsilon_{ij}); \\quad \\epsilon_{ij} \\sim N(0,\\sigma^2)\n\\end{align*}\n\\item\nCombined additive and proportional 1:\n\\begin{align*}\n& y_{ij} =  f_{ij} + (a + bf_{ij}) \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)\n\\end{align*}\n\\item\nCombined additive and proportional 2:\n\\begin{align*}\n& y_{ij} =  f_{ij} + \\sqrt{a^2 + b^2f_{ij}^2} \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)  \\\\\n\\text{or}  \\quad & y_{ij} =  f_{ij} +  a\\, \\epsilon_{1,ij} + b f_{ij}\\, \\epsilon_{2,ij}; \\quad \\epsilon_{1,ij} \\sim N(0,1); \\quad \\epsilon_{2,ij} \\sim N(0,1);   \\\\\n\\text{or}  \\quad & y_{ij} =  f_{ij} (1 + \\epsilon_{1,ij}) + \\epsilon_{2,ij}; \\quad \\epsilon_{1,ij} \\sim N(0,\\sigma_1^2); \\quad \\epsilon_{2,ij} \\sim N(0,\\sigma_2^2);\n\\end{align*}\n\\item\nPower error model:\n\\begin{align*}\n& y_{ij} = f_{ij} + b\\,f_{ij}^c \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)\n\\end{align*}\n\\item\nCombined additive and power error model 1:\n\\begin{align*}\n& y_{ij} =  f_{ij} + (a + b f_{ij}^c) \\; \\epsilon_{ij}; \\quad \\epsilon_{ij} \\sim N(0,1)\n\\end{align*}\n\\item\nCombined additive and power error model 2:\n\\begin{align*}\n& y_{ij} = f_{ij} + a\\epsilon_{1,ij} + b f_{ij}^c \\epsilon_{2,ij}; \\quad \\epsilon_{1,ij} \\sim N(0,1); \\quad \\epsilon_{2,ij} \\sim N(0,1)\n\\end{align*}\n\\item\nTwo (or more) types of measurements error model:\n\\begin{align*}\n& y_{ij} = f_{ij} + \\text{ASY}_j\\epsilon_{1,ij} + (1-\\text{ASY}_j) \\epsilon_{2,ij}; \\quad \\epsilon_{1,ij} \\sim N(0,\\sigma_1^2); \\quad \\epsilon_{2,ij} \\sim N(0,\\sigma_2^2)\n\\end{align*}\n\\item\nTwo (or more) types of observations error model:\n\\begin{align*}\n& y_{ij} = \\text{TYP}_{ij} f_{1,ij} + (1-\\text{TYP}_{ij}) f_{2,ij} + \\text{TYP}_{ij}\\epsilon_{1,ij} + (1-\\text{TYP}_{ij}) \\epsilon_{2,ij};  \\\\\n&  \\epsilon_{1,ij} \\sim N(0,\\sigma_1^2); \\quad \\epsilon_{2,ij} \\sim N(0,\\sigma_2^2)\n\\end{align*}\n%\\item\n%Extended error model:\n%\n%y_{ij} = \\left\\{ \\begin{array}{rcl}  f_{ij} + \\epsilon_{1,ij}  & \\mbox{for} & \\text{TIME == 1 \\&\\& ID == 1} \\\\\n%f_{ij} + \\epsilon_{2,ij}  & \\mbox{for} & \\text{TIME == 2 \\&\\& ID == 1} \\\\\n%\\cdots \\end{array}\\right\\} \\quad \\text{with} \\quad\n%\\epsilon_{1,ij} \\sim N(0,\\sigma_1^2),\\, \\epsilon_{2,ij} \\sim N(0,\\sigma_2^2), \\cdots\n%\n\\end{itemize}\nMain sources: \\cite{NONMEM:2006aa} and \\cite{POPIX:2013}.\n\n\\subparagraph{Note 1}\nIn the list above models are pulled together which have the same variance function.\n\\subparagraph{Note 2}\nModels listed above are the most popular ones in use but the present PharmML structure allows for implementation of virtually any user-defined model. 
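\nOutside of the PharmML representation itself, such models are easy to express in code. The following minimal C sketch gives the standard deviation function $g$ for several of the models above (the function names and signatures are ours, purely illustrative):\n\\begin{verbatim}\n#include <math.h>\n\n/* standard deviation g of the residual term, as a function of the\n   structural prediction f = f_ij and error-model parameters a, b, c */\ndouble g_additive(double f, double a)            { return a; }\ndouble g_proportional(double f, double b)        { return b * f; }\ndouble g_combined1(double f, double a, double b) { return a + b * f; }\ndouble g_combined2(double f, double a, double b) { return sqrt(a*a + b*b*f*f); }\ndouble g_power(double f, double b, double c)     { return b * pow(f, c); }\n\\end{verbatim}\nAn observation is then simulated as $y_{ij} = f_{ij} + g(f_{ij})\\,\\epsilon_{ij}$ with $\\epsilon_{ij} \\sim N(0,1)$; equivalent forms of a model (cf. Note 1 above) share the same variance $g(f_{ij})^2$.\n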
See section ref:XYZ for more examples and PharmML implementation.\n\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\subsection{PharmML implementation}\n%Some of the above listed residual error model types have two or three equivalent forms, by which we mean they have the same variance, although they use one or more residual errors, $\\epsilon_{ij}$. Other types contain two or more predictions from the structural model, \\var{f_{ij}}. From a\n%computational point of view it makes a lot of sense to reflect such differences in the language structure.\n%This was the motivation to allow for the implementation of two types of observation models\n%\\begin{itemize}\n%\\item\n%\\xelem{Standard} -- any observation model of the form\n%\\begin{align*}\n%\tu(y_{ij}) = u(f_{ij}) + g\\times\\epsilon_{ij}\n%\\end{align*}\n%which can be defined using exactly one of the following items\n%\\begin{itemize}\n%\\item\n%a transformation, $u$, e.g. \\var{log} or \\var{logit}\n%\\item\n%one structural model prediction, \\var{f_{ij}}\n%\\item\n%one standard deviation function, \\var{g}\n%\\item\n%one random variable, $\\epsilon_{ij}$\n%\\end{itemize}\n%\\item\n%\\xelem{General} -- using any number of the items listed above and arbitrary functional relationship between them.\n%\\end{itemize}\n%Chapter \\ref{chap:worked-egs} contains a number of examples illustrating these constructs.\n\n%\\begin{itemize}\n%\\item\n%standard -- defined in the spec as \\xelem{Standard} with following child elements\n%\\begin{itemize}\n%\\item\n%\\item\n%\\item\n%\\end{itemize}\n%\\item\n%general -- defined in the spec as \\xelem{General}.\n%\\end{itemize}\n\n\n\n\n%The residual errors are part of the \\textit{Observations Model}, see section \\ref{sec:eg1-obs-model} for detailed discussion.\n%The following table contains some of the models which can be implemented in \\pharmml:\n%\n%\\begin{table}[htdp]\n%\\begin{center}\n%\\begin{tabular}{l c c}\n%Model name & $g$ & $\\xi$ \\\\\n%\\hline \\hline\n%Constant error model & $a$ & $a$ \\\\\n%Proportional error model & $bf$ & $b$ \\\\\n%Combined error model & $a + bf$ & $a,b$ \\\\\n%Alternative combined error model 1& $\\sqrt{a^2 + b^2f^2}$ & $a,b$ \\\\\n%Alternative combined error model 2 & $a + bf^c$ & $a, b, c$\n%\\end{tabular}\n%\\end{center}\n%\\caption{Examples of residual error models which can be implemented in \\pharmml.}\n%\\label{tab:residualModels}\n%\\end{table}%\n\n\n\n\n", "meta": {"hexsha": "a5bd7d111a53fe2d915b63fbeae6594e4955d953", "size": 10359, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "input/observationModel_specSection.tex", "max_stars_repo_name": "pharmml/pharmml-spec", "max_stars_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-01-26T13:17:54.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-26T13:17:54.000Z", "max_issues_repo_path": "input/observationModel_specSection.tex", "max_issues_repo_name": "pharmml/pharmml-spec", "max_issues_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "input/observationModel_specSection.tex", "max_forks_repo_name": "pharmml/pharmml-spec", "max_forks_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_forks_repo_licenses": ["Apache-2.0"], 
"max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.0863636364, "max_line_length": 290, "alphanum_fraction": 0.6897383917, "num_tokens": 3299, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5862824639365067}}
{"text": "\\documentclass{homework}\n\\course{Math 5522H}\n\\author{Jim Fowler}\n\\input{preamble}\n\\DeclareMathOperator{\\Res}{Res}\n\n\\usepackage[symbol]{footmisc}\n\\renewcommand{\\thefootnote}{\\fnsymbol{footnote}}\n\n\\begin{document}\n\\maketitle\n\n\\begin{inspiration}\n  Wenn ich nur erst die S\\\"atze habe! Die Beweise werde ich schon\n  finden.\\footnote{``If only I first had the theorems, then I'd find\n    the proofs somehow.''  Riemann knew all too well the challenge of\n    PODASIPs.} \\byline{Bernhard Riemann}\n\\end{inspiration}\n\n\\section{Terminology}\n\n\\begin{problem}\n  When $\\Real s > 0$, what is the Gamma function $\\Gamma(s)$?\n\\end{problem}\n\n\\begin{problem}\n  When $\\Real s > 1$, what is the Riemann zeta function $\\zeta(s)$?\n\\end{problem}\n\n\\begin{problem}\n  What is the von Mangoldt function $\\Lambda(n)$?\n\\end{problem}\n\n\\section{Numericals}\n\n\\begin{problem}\n  Compute $\\Res(\\Gamma,s)$ for $s \\in \\{ 0, -1, -2, \\ldots \\}$.\n\\end{problem}\n\n\\begin{problem}Evaluate $\\Res(f,s)$ for the\n  function $f(s) = \\Gamma(s) \\Gamma(1-s)$.\n\\end{problem}\n\n\\begin{problem} % be careful to get the sign right!\n  Use \\ref{euler-reflection-formula} to compute $\\Gamma(1/2)$.  \n\\end{problem}\n\n\\section{Exploration}\n\n\\begin{problem}\n  In previous courses, you have seen that\n  $\\Gamma(s+1) = s \\cdot \\Gamma(s)$.  Use this fact to explain how to\n  define a meromorphic function on $\\C$ agreeing with $\\Gamma(s)$ for\n  $\\Real s > 0$.  This is \\textbf{analytic continuation} which succeeds\n  here by using a \\textbf{functional equation}.\n\\end{problem}\n\n\\begin{problem}\\label{start-of-reflection-formula}For $\\lambda \\in (0,1)$, evaluate\n  \\[\n    \\int_{0}^\\infty \\frac{u^{\\lambda - 1}}{1 + u} \\, du.\n  \\]\n  \\textit{Hint:} make the substitution $u = e^x$ and invoke \\ref{integral-for-euler-reflection}.\n\\end{problem}\n\n\\begin{problem}\\label{integrate-gamma-one-minus-s}For $s \\in (0,1)$ and positive $t \\in \\R$, show that\n  \\[\n    \\Gamma(1 - s) = t \\int_0^\\infty (xt)^{-s} e^{-xt}\\, dx.\n  \\]\n\\end{problem}\n\n\\begin{problem}\\label{euler-reflection-formula}Combine \\ref{start-of-reflection-formula} and \\ref{integrate-gamma-one-minus-s} to conclude that\n  \\[\n    \\Gamma(1-s) \\, \\Gamma(s) = \\frac{\\pi}{\\sin \\left( \\pi s \\right)}.\n  \\]\n  This is \\textbf{Euler's reflection formula}.\n\\end{problem}\n\n\\begin{problem}\n  Show that the series\n  \\[\\displaystyle\\sum_{n=1}^\\infty \\displaystyle\\frac {1}{n^{s}}\\]\n  converges to a holomorphic function when $\\Real s > 1$.\n\\end{problem}\n\n\\begin{problem}\\label{euler-product-formula}Recall that you defined\n  \\(\\displaystyle\\prod_{n=1}^\\infty \\left( 1 + a_n \\right)\\) in\n  \\ref{terminology-infinite-product}.  
Show that when $\\Real s > 1$, we\nhave\n  \\[\n    \\frac{1}{\\zeta(s)} = \\displaystyle\\prod_{n=1}^\\infty \\left( 1 - {p_n}^{-s} \\right),\n  \\]\n  where $p_n$ is the $n$th prime number.\n\\end{problem}\n\n\\begin{problem}\n  Use \\ref{euler-product-formula} and \\ref{nonzero-infinite-product}\n  to show that if $\\Real s > 1$ then $\\zeta(s) \\neq 0$.\n\\end{problem}\n\n\\begin{problem}\n  Use \\ref{euler-product-formula} to show that\n  \\[\n    \\Real \\left( \\frac{ \\zeta'(a+bi) }{ \\zeta(a+bi) } \\right) = - \\sum_{n=1}^\\infty \\frac{\\Lambda(n) \\, \\cos \\left( b \\log n \\right)}{n^{a}}.\n  \\]\n\\end{problem}\n\n\\begin{problem}\n  Use the trigonometric fact $3 + 4 \\cos \\theta + \\cos (2\\theta) \\geq 0$ to show that\n  \\[\n    \\Real \\left( 3 \\frac{ \\zeta'(a) }{ \\zeta(a) } + 4 \\frac{ \\zeta'(a+bi) }{ \\zeta(a+bi) } +  \\frac{ \\zeta'(a+2bi) }{ \\zeta(a+2bi) } \\right) \\leq 0.\n  \\]\n  Knowing $\\zeta$ has a simple pole at $s = 1$ and assuming\n  $\\zeta(1+bi) = 0$ for $b \\in \\R$, define \n  \\[\n    f(a) = \\zeta(a)^3 \\cdot \\zeta(a+bi)^4 \\cdot \\zeta(a + 2bi)\n  \\]\n  and study $f$ near $a = 1$ to uncover a contradiction.  You will\n  have shown that $\\zeta(s) \\neq 0$ when $\\Real s = 1$.\n\\end{problem}\n\n\\section{Prove or Disprove and Salvage if Possible}\n\n\\begin{problem}\\label{nonzero-infinite-product}For a sequence $a_n \\neq 0$ of complex numbers, suppose\n  $\\displaystyle\\sum_{n=1}^\\infty |a_n|$ converges.  Then\n  $\\displaystyle\\prod_{n=1}^\\infty \\left( 1 + a_n \\right)$ converges\n  to a nonzero quantity.\n\\end{problem}\n\n\\begin{problem} % missing pole at s = 1\n  Define the \\textbf{Dirichlet eta function} via\n  \\[\n    \\eta(s) := \\sum_{n=1}^\\infty \\frac{(-1)^n}{n^s}.\n  \\]\n  This series converges to a holomorphic function when $\\Real s > 0$.\n\\end{problem}\n\n\\begin{problem}\n  The Riemann zeta and Dirichlet eta functions are related via\n  \\[\n    \\left( 1 - 2^{1-s} \\right) \\zeta(s) = \\eta(s).\n  \\]\n\\end{problem}\n\n\\end{document}\n", "meta": {"hexsha": "f88a7603a7f693592e0d3bf29dc305389c649dbf", "size": 4484, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem-sets/set11.tex", "max_stars_repo_name": "kisonecat/math5522h", "max_stars_repo_head_hexsha": "c9fc5eb915c6d29d91a864dfe066878b75305c42", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-13T03:38:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-13T03:38:29.000Z", "max_issues_repo_path": "problem-sets/set11.tex", "max_issues_repo_name": "kisonecat/math5522h", "max_issues_repo_head_hexsha": "c9fc5eb915c6d29d91a864dfe066878b75305c42", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-sets/set11.tex", "max_forks_repo_name": "kisonecat/math5522h", "max_forks_repo_head_hexsha": "c9fc5eb915c6d29d91a864dfe066878b75305c42", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-01-11T18:43:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-11T18:43:51.000Z", "avg_line_length": 31.1388888889, "max_line_length": 148, "alphanum_fraction": 0.6534344335, "num_tokens": 1631, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.586282458965767}}
{"text": "\\chapter{STATISTICAL TESTS}\n\nThis chapter describes the different statistical tests available \nin TestU01 and how they can be applied.\nThese tests are organized in different modules, \nsometimes according to their similarity and\nsometimes according to the author of the book/article from which they\nwere taken.   \nEach test looks, in its own way, for empirical evidence against \nthe null hypothesis $\\cH_0$ defined in the introduction.\nIt computes a test statistic $Y$ whose distribution under $\\cH_0$\nis known (or for which a good approximation is available).\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Single-level tests.} \\\n\nA {\\em first-order\\/} (or {\\em single-level\\/}) test \nobserves the value of $Y$, say $y$,\\index{single-level tests}\nand rejects $\\cH_0$ if the {\\em $p$-value\\/} \n(or {\\em significance level\\/})\n $$ p = P[Y \\ge y \\mid \\cH_0] $$\nis much too close to either 0 or 1.\nIf the distribution of $Y$ is [approximately] continuous,\n$p$ is [approximately]\na $U(0,1)$ random variable under $\\cH_0$.\nSometimes, this $p$ can be viewed as a {\\em measure of uniformity},\nin the sense that it will be close to 1 if the generator produces\nits values with excessive uniformity, and close to 0 in the opposite\nsituation (see, e.g., the module {\\tt smultin}).\n\nIn the case where $Y$ has a\n {\\em discrete distribution\\/}\\index{discrete distribution}\nunder $\\cH_0$, we distinguish the {\\em right $p$-value\\/}\n$p_R =  P[Y \\ge y \\mid \\cH_0]$ and the {\\em left $p$-value\\/}\n$p_L =  P[Y \\le y \\mid \\cH_0]$.  We then define the $p$-value as\n\\begin{eqnarray*}\n   p & = & \\left\\{ \n \\begin{array}{l@{\\qquad}l}\n      p_R, & \\mbox{if } p_R <  p_L \\\\[6pt]\n  1 - p_L, & \\mbox{if } p_R \\ge p_L \\mbox{ and } p_L < 0.5 \\\\[6pt]\n      0.5  &         \\mbox{otherwise.}\n \\end{array}\\right.\n\\end{eqnarray*} \nWhy such a definition?\nConsider for example a Poisson random variable $Y$ with mean 1\nunder $\\cH_0$.  If $Y$ takes the value 0, the right $p$-value is\n$p_R =  P[Y \\ge 0 \\mid \\cH_0] = 1$.  In the uniform case, this would\nobviously lead to rejecting $\\cH_0$ on the basis that the \n$p$-value is too close to 1.   \nHowever, $P[Y = 0 \\mid \\cH_0] = 1/e \\approx 0.368$, so it does not \nreally make sense to reject $\\cH_0$ in this case.\nIn fact, the left $p$-value here is $p_L = 0.368$, and the $p$-value\ncomputed with the above definition is $p = 1 - p_L \\approx 0.632$.\nNote that if $p_L$ is very small, with this definition, $p$ becomes\nclose to 1.\nIf the left $p$-value was defined as \n$p_L = 1 - p_R = P[Y < y \\mid \\cH_0]$, this would also lead to problems;\nin the example, one would have $p_L = 0$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Two-level tests.} \\\n\nIn a {\\em second-order\\/} (or {\\em two-level\\/}) test, one generates\n$N$ ``independent'' copies of $Y$, say $Y_1,\\dots,Y_N$, by replicating\nthe first-order test $N$ times.\\index{two-levels tests}\nLet $F$ be the theoretical distribution function of $Y$ under $\\cH_0$.\nIn the case where $F$ is {\\em continuous}, the transformed observations\n$U_1 = F(Y_1),\\dots, U_N = F(Y_N)$ should behave as i.i.d.\\ uniform\nrandom variables. 
\nOne way of performing the two-level test is to compare the empirical \ndistribution of these $U_j$'s to the uniform\\index{goodness-of-fit tests}\ndistribution, via a {\\em goodness-of-fit\\/} (GOF) test such as \nthose of Kolmogorov-Smirnov, Anderson-Darling, Cram\\'er-von Mises, etc.\nThese GOF test statistics are defined in module {\\tt gofs} and their \n$p$-values are computed by the functions of module {\\tt gofw}\n(these two modules are in library ProbDist).\nFor example, if $d_N^+$ is the sample value taken by the Kolmogorov-Smirnov\nstatistic $D_N^+$ (defined in module {\\tt gofs}), the corresponding \n$p$-value at the second level is $\\delta^+ = P[D_N^+ > d_N^+ | \\cH_0]$.\nUnder $\\cH_0$, $\\delta^+$ has the $U(0,1)$ distribution.\n\nIn TestU01, several of these GOF tests can actually be applied \nsimultaneously, and all their $p$-values are reported in the results.\nThose that are too close to 0 or 1 are marked by special indicators\nin the printouts.  %% The symbol {\\tt eps} in the printout stands for a\n%%  value smaller than $10^{-15}$.\nThe GOF tests that are applied are those that belong to the set\n{\\tt gofw\\_ActiveTests}.\nThis kind of flexibility is sometimes convenient for comparing the\npower of these GOF tests to detect the weaknesses of specific classes\nof generators.\n\nThis type of two-level testing procedure has been widely applied for \ntesting RNGs \\cite{sFIS96a,rKNU98a,rLEC92a,rLEE97a,rMAR85a}.\nThe arguments supporting it are that \n(i) it sometimes permits one to apply the test with a larger total \nsample size to increase its power \n(for example, if the memory size of the computer \nlimits the sample size of a single-level test), and\n(ii) it tests the RNG sequence at the local level, \nnot only at the global level (i.e., there could be very bad behavior \nover short subsequences, which cancels out when averaging \nover larger subsequences).\nAs an example of this, consider the extreme case of a generator\nwhose output values are $i/2^{31}$, for $i=1,2,\\dots,2^{31}-1$,\nin this order. \nA simple test of uniformity over the entire sequence would give\na perfect fit, whereas the same test applied repeatedly over \n(disjoint) shorter sub-sequences would easily detect the anomaly.\n\nAnother way of performing the test at the second level is to simply add\nthe $N$ observations of the first level \nand reject $\\cH_0$ if the sum is too large or too small.\nFor the great majority of the tests in this library, the distribution\nof $Y$ is either chi-square, normal, or Poisson.\nIn these three cases, the sum $\\tilde Y = Y_1 + \\cdots + Y_N$ has the\nsame type of distribution.  
That is, if $Y$ is chi-square with $k$ degrees\nof freedom [resp., normal with mean $\\mu$ and variance $\\sigma^2$,\nPoisson with mean $\\lambda$], $\\tilde Y$ is chi-square with $Nk$ degrees\nof freedom [resp., normal with mean $N\\mu$ and variance $N^2\\sigma^2$,\nPoisson with mean $N\\lambda$].\nTestU01 usually \n\\hrichard {Not yet fully implemented everywhere, but will be done very soon.}\nreports the results of the test based on $\\tilde Y$ in\nthese situations, in addition to the second-order GOF tests specified\nby {\\tt gofs\\_ActiveTests} (for the Poisson case, where the \nsecond-order GOF tests are not valid unless $\\lambda$ is large enough\nfor the Poisson distribution to be well approximated by a normal,\nonly the results of the tests base on $\\tilde Y$ are reported).\n\nOur empirical investigations indicate that for a fixed total\nsample size $Nn$, when testing RNGs, a test with $N=1$ is often\nmore efficient than the corresponding test with $N > 1$.\n\\hrichard{Il y a des tests pour lesquels $N > 1$ est plus sensible \n  que $N=1$: au moins\n  {\\tt sstring\\_HammingWeight, sstring\\_AutoCor,\n  sstring\\_LongestHeadRun, sknuth\\_MaxOft}}.\nThis means that for typical RNGs, the type of structure found in\none (reasonably long) subsequence is usually found in (practically)\nall subsequences of the same length.\nIn other words, when a RNG started from a\ngiven seed fails spectacularly a certain test, it usually fails\nthat test for {\\em most\\/} admissible seeds, though\nthere are some exceptions.\nIn the case where $N > 1$, the test based on $\\tilde Y$ is usually\nmore powerful than the second-order GOF tests comparing the empirical\ndistribution of $F(Y_1),\\dots,F(Y_N)$ to the uniform,\naccording to our experience.\n\n\n%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Rejecting $\\cH_0$.} \\\n\nIn statistical studies where a limited amount of data is available,\npeople sometimes fix the significance level $\\alpha$ in advance \nto arbitrary values such as 0.05 or 0.01,\nand reject $H_0$ if and only if the $p$-value is below $\\alpha$.\nHowever, statisticians often recommend to just report the $p$-value,\nbecause this provides more information than reporting a ``reject'' or\n``do not reject'' verdict based on a fixed $\\alpha$.\n\nWhen a $p$-value is extremely close to 0 or to 1 (for example,\nif it is less than $10^{-10}$), one can obviously conclude that\nthe generator {\\em fails\\/} the test.\nIf the $p$-value is suspicious but failure is not clear enough,\n($p=0.0005$, for example), then the test can be replicated independently \nuntil either failure becomes obvious or suspicion disappears \n(i.e., one finds that the suspect $p$-value was obtained only by chance).\nThis approach is possible because there is no limit (other than CPU time)\non the amount of data that can be produced by a RNG to increase the\nsample size and the power of the test.\n\n%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Common parameters and tools.} \\\n\nThree parameters, called $N$, $n$, and $r$, are common to all the\nfunctions that apply a test in the {\\tt s} modules.\nThe parameter $N$ gives the number of independent replications of the \nbase test, i.e. the number of distinct subsequences on which it is applied,\nand $n$ is the sample size for each replication.\nThe parameter $r$ gives the number of bits that are discarded\nfrom each generated random number.  
\nThat is, each real-valued random number is multiplied by $2^r$ modulo 1,\nto drop its $r$ most significant bits.\nThese three parameters are not re-explained in each test description.\nIt is implicit that the first $r$ bits of each uniform are always\ndiscarded, that the test explained in the function description \nis always replicated $N$ times, and that a two-level test is applied\nwhenever $N > 1$.\n% In many cases, it is best to choose $N=1$ and $n$ as large as possible.\n\nFor the tests based on bit strings, another parameter that usually \nappears is $s$.  It represents the number\nof bits of each uniform that are effectively used by the test.\nThat is, when $s$ appears, the test drops the $r$ most significant bits\nand takes the $s$ bits that follow.\nIn this case, it is important to make sure that $r+s$ does not exceed\nthe number of bits of precision provided by the RNG.\nFor example, if the RNG's output is always a multiple of $1/2^{31}$,\n$r+s$ should not exceed 31.\n% There are also other situations (e.g., the tests in {\\tt snpair})\n% where the test would detect the discretization error when the sample\n% size is large and the RNG does not return enough bits.\n\n\\ifdetailed  %%%\nModules {\\tt swrite} and  {\\tt sres}\nprovide basic tools used inside the other\n{\\tt s} modules, which implement the tests.\nTypical users do not need to use them directly.\nIn the majority of cases, it suffices to create the generator\nto be tested and pass it as a parameter to\nthe function that applies the desired test.\nThe test results are then printed automatically\nto the standard output, with a level of detail that can be modified\nby the user (see module {\\tt swrite}).\n\\fi  %%%%%\n\n%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Reports.} \\\n\nBy default, each test prints a report, on standard output,\ngiving the name of the test, the name of the tested generator,\nthe test parameters, the values of the statistics,\nthe significance level of those statistics,\nand the CPU time used by the test.\nThis report may also contain information specific to a given test.\n\nIt is possible to print more or less detailed statistical reports\nby setting one or more of the {\\tt lebool} flags defined in module {\\tt swrite}. \nOne may wish to see, for example, the value of the test \nstatistic $Y$ for each of the $N$ replications,\n% (usually the values in the statistical collectors),\nthe values of the counters, the groupings of the classes, their\nexpected and observed numbers for the chi-square test, etc.\nFor some of the tests, printing the counters would generate huge reports\nand is not practically useful. For other tests (for example those\nbased on a chi-square test), seeing the counters and the classes may be\nenlightening as to why a given generator fails a test.\nIt is even possible to have no output at all from any of the {\\tt s} \nmodules of TestU01 by setting all the {\\tt lebool} flags in module {\\tt swrite} \n to {\\tt FALSE}.\n\nThe test functions automatically print the state of the generator\nat the beginning of an experiment and at the end of each test.\nIf more than one test is called in a program, the initial state\nof the generator at the beginning of a test will be the final state\nof the generator at the end of the preceding test. 
This permits one to\nkeep track of which segment of the stream of random numbers has been\nused by each test.\n\nA more flexible way of examining detailed information\nabout what has happened in the tests, to have a closer look at \nspecific details or perhaps for post-processing the results of the tests,\nis via the {\\tt ...\\_Res} structures.\nThese data structures are specific to each type of test and\nare described explicitly in the detailed version of this guide\n(see also module {\\tt sres}).\nEach function implementing a test has a parameter {\\tt \\ldots\\_Res *}\npointing to a structure that keeps the results.  \n\nPerhaps in the majority of situations, the automatic printout made\nby the testing function suffices and there is no need to \nexamine the {\\tt ...\\_Res} structure(s) after the test(s).\nIn this case, it suffices to pass \na {\\tt NULL} pointer for the {\\tt ...\\_Res *} parameter.\nThe structure will then be created internally and destroyed automatically\nafter the results are printed.\n% , inside the function that applies the test.\n\n\n\\ifdetailed %%%%\n\nTo examine or process the results of a test after its completion,\none must create a \n{\\tt \\ldots\\_Res} structure by calling the appropriate {\\tt Create...} \nfunction, and pass it as a parameter to the test. Failing to call the\nappropriate {\\tt Create...} function before using a {\\tt \\ldots\\_Res} \nstructure will result in a memory fault because each such \nfunction creates inner sub-structures used in the tests.\nThe same structure can be used for many successive tests of the same type.\nWhen the structure is no longer needed, it should be destroyed by\ncalling the appropriate {\\tt Delete} function.\nThese structures are explained in the {\\em detailed\\/} \nversion of this user's guide.\n\n\\fi  %%%%\n\n%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{Scatter plots.} \\\n \nThere is a  module {\\tt scatter} that permits one to plot points produced\nby a  generator in the $t$-dimensional hypercube $[0,1)^t$.\nA rectangular box is defined in this hypercube, and the points lying \nin this box are projected on a selected two-dimensional subspace\nand placed on a 2-dimensional scatter plot.\nThe plot is put in a file ready to be processed by\n\\LaTeX\\ or {\\it Gnuplot}.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph*{An example: The birthday spacings tests applied to an LCG.} \\\n\nFigure~\\ref{fig:progstat} shows how to apply a test to a generator.\nThe call to {\\tt ulcg\\_CreateLCG} creates and initializes the generator\n{\\tt gen} to the LCG with modulus $m$ = 2147483647, \nmultiplier $a$ = 397204094, additive constant $c=0$, and initial\nstate $x_0 = 12345$.  
This LCG is used in the SAS statistical\n software \\cite{iSAS90a}.\nThen the birthday spacings test is applied twice to this generator,\nwith $N=1$, $r=0$, in $t=2$ dimensions.\nThe sample sizes are $n=10^3$ and $n=10^4$, and the number $d$ of \ndivisions along each coordinate is chosen so that the expected number\nof collisions $\\lambda = n^3/(4d^t)$ is 2.5 in the first case and 0.25\nin the second case (the values of $d$ are $10^4$ and $10^6$, respectively).\nUnder $\\cH_0$, the number of collisions is approximately a Poisson \nrandom variable with mean $\\lambda$.\n\n\n\\setbox0=\\vbox {\\hsize = 6.0in\n\\smallc\n\\verbatiminput{../examples/birth1.c}\n}\n\n\\begin{figure}[htb] \\centering \\myboxit{\\box0}\n\\caption{Applying two birthday spacings tests to a LCG.}\n   \\label{fig:progstat}\n\\end{figure}\n\n\nThe results are in Figure~\\ref{fig:resstat}. \nThese results are printed to the standard output, which may be redirected\nto a file if desired.\n%% Here, the symbol {\\tt eps} means a $p$-value smaller than $10^{-15}$.\nAt sample size $n=10^3$, there are 6 collisions and the $p$-value is 0.04,\nwhich is not extreme enough to reject $\\cH_0$.\nAt sample size $n=10^4$, there are 44 collisions and the $p$-value is \nclose to $10^{-81}$ (i.e., if $Y$ is Poisson with mean 0.25, \n$P[Y\\ge 44] < 10^{-81}$).\nThe generator fails miserably in this case, with a sample size as\nsmall as ten thousands.  This test took approximately 0.02 second to run.\n\n\n\\ifdetailed  %%%\n\nOne may also want to examine or post-process the results of the tests in\nsome way. In this case, one would create a {\\tt res} structure as shown \nin the code segment of Figure~\\ref{fig:progstat2}, and delete the\nstructure when it is no longer needed. By setting the flag\n {\\tt swrite\\_Basic} to {\\tt FALSE}, no output will be automatically \nprinted by the testing function.\n\n\n\n\\setbox0=\\vbox {\\hsize = 6.0in\n\\smallc\n\\verbatiminput{../examples/birth2.c}\n}\n\n\\begin{figure}[htb] \\centering \\myboxit{\\box0}\n\\caption{Two birthday spacings tests and post-processing the results.}\n  \\label{fig:progstat2}\n\\end{figure}\n\n\\fi  %%%\n\n\\setbox1=\\vbox {\\hsize = 6.0in\n\\smallc\n\\verbatiminput{../examples/birth1.res}\n}\n\n\\begin{figure}[ht] \\centering \\myboxit{\\box1}\n\\caption{Results of the two birthday spacings tests.\n \\label{fig:resstat} }\n\\end{figure}\n", "meta": {"hexsha": "755cf5f0149924dac1c08ba5d8d089a49def67b0", "size": 17094, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "testu01/sintro.tex", "max_stars_repo_name": "bkmgit/TestU01-2009", "max_stars_repo_head_hexsha": "7519ac6506d97548e819a9b6e8f96b6be93a58d3", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2020-11-29T05:34:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T08:32:21.000Z", "max_issues_repo_path": "testu01/sintro.tex", "max_issues_repo_name": "bkmgit/TestU01-2009", "max_issues_repo_head_hexsha": "7519ac6506d97548e819a9b6e8f96b6be93a58d3", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2020-11-06T01:06:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-14T12:09:23.000Z", "max_forks_repo_path": "testu01/sintro.tex", "max_forks_repo_name": "bkmgit/TestU01-2009", "max_forks_repo_head_hexsha": "7519ac6506d97548e819a9b6e8f96b6be93a58d3", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": 
"2020-11-08T06:28:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T22:29:20.000Z", "avg_line_length": 45.4627659574, "max_line_length": 81, "alphanum_fraction": 0.7324207324, "num_tokens": 4638, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5862824507842266}}
{"text": "\\chapter{Notation Convention, Structured Rotations, and Kronecker Products}\n\\label{chap:K_and_R}\n\nWe begin with some notation conventions before reviewing\nstandard orthogonal matrices and\nlooking at the Kronecker product and how it affects fast matrix-vector\nmultiplication. One standard reference for matrix-related topics is~\\cite{gvl4}.\nThese topics will then be used to look at\nmatrix factorizations of Chebyshev-Vandermonde (C\\=/V) matrices\nin Chapters~\\ref{chap:CV_mat_1D_I} and \\ref{chap:CV_mat_HD_I}.\n\n", "meta": {"hexsha": "00add5605b210e3f31a386c586c599c0ff58117d", "size": 510, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/kronecker_and_rot.tex", "max_stars_repo_name": "chgorman/UCSB-Dissertation-Template", "max_stars_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/kronecker_and_rot.tex", "max_issues_repo_name": "chgorman/UCSB-Dissertation-Template", "max_issues_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/kronecker_and_rot.tex", "max_forks_repo_name": "chgorman/UCSB-Dissertation-Template", "max_forks_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5, "max_line_length": 80, "alphanum_fraction": 0.8176470588, "num_tokens": 133, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7826624688140726, "lm_q1q2_score": 0.586282447573426}}
{"text": "\\section{Complex numbers with round-off error}\n\n\n{\\it Remark} FIXME:7.1.\nThe theoretical method for proving Theorem FIXME(0.2)\nhas been implemented via the computer program {\\it verify}, which is available, together with the relevant data sets, at the {\\it Annals} web site.  To make this computer-aided proof rigorous, we needed to deal with round-off error in calculations.  \n\nOne approach to round-off error would be to use interval arithmetic packages to carry out all calculations with floating-point machine numbers, or to generate our own version of these packages.  \nHowever, it appears that this approach\nwould be much too slow given the size of our collection of sub-boxes and conditions/killerwords.  \n\nTo solve this problem of speed, we implement round-off error at a higher level of programing.  That is, we incorporate round-off error directly\n into AffApproxes,  which makes our error calculations more accurate, thereby avoiding much subdivision of\nsub-boxes. This necessitates that we incorporate round-off error directly into complex numbers as well. \nIn this section we show how to do standard operations on complex numbers while keeping track of round-off error.\nIn the next section we work with  AffApproxes.\n\n\\demo{Definition {\\rm FIXME(7.2)}}\nThere are two types of complex numbers to consider: \n\n\\begin{itemize}\n\\item[1)]  An {\\it XComplex} $x = (x.{\\rm re}, x.{\\rm im})$ corresponds to a complex number that is represented exactly; it\nsimply consists of a real part and an imaginary part.\n\n\\item[2)]  An {\\it AComplex} $x = (x.{\\rm re}, x.{\\rm im}; x.e)$ corresponds to an ``interval'' that contains the complex number in\nquestion.  Thus, it consists of an XComplex and a floating-point number representing the error.  In particular, the AComplex\n$x$ represents the set $S(x)$ of complex numbers \n$\\{w\\ :\\ |w - (x.{\\rm re} + i (x.{\\rm im}))| \\le x.e\\}$.  Note that $S(x)$ is also defined\nfor an XComplex if we conceptualize an XComplex as an AComplex with\n$x.e = 0.$\n\\end{itemize}\n\n\\enddemo\n\n{\\it Remark} 7.3.\nIn general,  our operations  act on XComplexes and produce AComplexes, or they act on AComplexes and produce\n AComplexes.  In one case, the unary minus, an XComplex goes to an XComplex.  \nIn the calculations that follow the effect on the error is the whole point.\n\n\\demo{Conventions and standards {\\rm FIXME:7.4}}\nWe begin, by writing down our basic rules, which follow easily from the IEEE-754 standard for machine arithmetic (see [IEEE]).\n(Actually,  the ``hypot\" function $h(a,b)$, which computes by elaborate chicanery $\\sqrt{a^2 + b^2},$ is not part of the IEEE-754 standard, but  satisfies the appropriate standard according to the documentation provided (see [K1]).)  The operations here are on\ndouble-precision floating-point real numbers (``doubles\") and we denote a true operation by the usual symbol and the associated machine operation by the same symbol in a circle, with two exceptions: a machine square root $\\sqrt a$ is denoted $\\root o \\of a$ and the machine version of the hypot function is denoted $h_\\circ$.  Perhaps a third exception is our occasional notation of true multiplication by the absence of a symbol.  \n\nThere is a finite set of numbers (sometimes called ``machine numbers\") which are representable on the computer.  
With\ntechnicalities ignored,    a nonzero floating-point number is represented by a fixed number of bits of which\nthe first determines the sign of the number, the next $m$ represent the exponent, and the remaining $n$ represent the\nmantissa of the number.  Because our nonzero numbers start with a 1, that means the $n$ mantissa bits actually represent\nthe next\n$n$ binary digits after the 1.  That is, the mantissa is actually $1.b_1b_2b_3...b_{n}.$   The IEEE-754 standard calls for\n64-bit doubles with $m = 11$ and $n = 52.$  We define $EPS$ to be $2^{-n},$ in which case $EPS/2$ is $2^{-(n + 1)}.$  \n\nThe IEEE-754 standard states that the result of an operation\n is always the closest representable number to the true solution (as long as we are in the bounds of representable\nnumbers).  For example, for machine numbers $a$ and~$b$, we have $a \\oplus b = m(a+b)$ where $m$ is the function which\ntakes the machine value of its argument (when it lies in the range of representable numbers).  Thus, properties of the type\n$$|(a + b) - (a \\oplus b)| \\leq (EPS/2) |a + b|$$\nfollow immediately from the IEEE-754 standard, as long as we do not {\\it underflow} or {\\it overflow} outside of the range of representable numbers. \nSpecifically, underflow occurs when the result of an operation is smaller in absolute value than $2^{-1022}$,\n and overflow occurs when the result of an operation is larger in absolute value than roughly $2^{1024}$\n (see [IEEE, \\S 7]).\n\nWe further note that the formula \n$$|(a + b) - (a \\oplus b)| \\leq (EPS/2) |a \\oplus b|$$\nfollows because the true answer has ``exponent\" which is less than or equal to the exponent of the machine answer.  We reiterate, that in both cases, $a$ and $b$ are assumed to be machine numbers.\n\nOf course, a machine operation such as $\\oplus$ must act on doubles, while a ``true\" operation such as $+$ can act on reals (which includes doubles).\nIn this section, long strings of inequalities will be used to prove the various propositions, and care was taken to ensure that machine operations act on machine numbers.  In particular, the various variables appearing in the propositions are assumed to be doubles. The IEEE-754 standard provides for conversions from decimal to binary (within the appropriate range, conversion is to the nearest representable number) and from binary to decimal.  However, these are rarely used in this paper, although a trivial class of exceptions is provided by the decimal numbers in the conditions of\nSection 5.\n\nWhen calculations underflow or overflow outside of the range of representable numbers, we require that the computer inform us if either exception has occurred.  \n\nAs in Section 6, we now break with the usual numbering convention.  Note that the above comments provide a proof of the\nfollowing properties.\n\\vglue12pt\n\n {\\bf Basic Properties FIXME:7.0 (assuming no underflow and no overflow)}.\n\\vglue6pt\n\nIn the formulas that follow, $a$,$b$, and $A$ are machine numbers and\n$1 + k \\times EPS = 1 \\oplus (k \\otimes EPS)$ when $k$ is an integer which is not huge in absolute value (that is, \nsmaller than roughly $2^{50}$).  Thus, within the appropriate range,  $1 + k \\times EPS$ is a machine number.  
Similarly,\n$2^k\n\\times A = 2^k \\otimes A$ when $k$ is an integer and  $2^k \\otimes A$ neither underflows nor overflows.\n\\begin{eqnarray*}\n|(a + b) - (a \\oplus b)| &\\leq& (EPS/2) |a + b|,\\\\\n|(a + b) - (a \\oplus b)|&\\leq& (EPS/2) |a \\oplus b|,\\\\\n|(a - b) - (a \\ominus b)| &\\leq& (EPS/2) |a - b|,\\\\\n|(a - b) - (a \\ominus b)|&\\leq& (EPS/2) |a \\ominus b|,\\\\\n|(a \\times b) - (a \\otimes b)| &\\leq& (EPS/2) |a \\times b|,\\\\\n|(a \\times b) - (a \\otimes b)| &\\leq&(EPS/2) |a \\otimes b|,\\\\\n|(a / b) - (a \\oslash b)| &\\leq& (EPS/2) |a / b|,\\\\\n|(a / b) - (a \\oslash b)| &\\leq& (EPS/2) |a \\oslash b|,\\\\\n|\\sqrt a - \\root o \\of a| &\\leq& (EPS/2) |\\sqrt a|,\\\\\n|\\sqrt a - \\root o \\of a|  &\\leq&  (EPS/2) |\\root o \\of a|,\\\\\n|h(a,b) - h_\\circ(a,b)| &\\leq& (EPS) |h(a,b)|,\\\\\n|h(a,b) - h_\\circ(a,b)| &\\leq&   (EPS) |h_\\circ(a,b)|.\n\\end{eqnarray*}\n\n \nFrom these formulas, we immediately compute the following.\n\\begin{eqnarray*}\n(1 - EPS/2) |a + b| &\\leq& |a \\oplus b| \n\\leq (1 + EPS/2) |a+b|, \\\\ \n  (1 - EPS/2) |a \\oplus b|& \\leq& |a + b| \n\\leq (1 + EPS/2) |a \\oplus b|,\\\\\n\\noalign{\\vskip4pt}\n\\noalign{\\hfil .\\hfil }  \\noalign{\\vskip4pt}\n\\noalign{\\hfil .\\hfil } \\noalign{\\vskip4pt}\n\\noalign{\\hfil .\\hfil } \\noalign{\\vskip4pt}\n(1 - EPS/2) |\\sqrt a| &\\leq&  |\\root o \\of a| \n\\leq (1 + EPS/2) |\\sqrt a|, \\\\ (1 - EPS/2) |\\root o \\of a|&\\leq&  |\\sqrt a| \n\\leq (1 + EPS/2) |\\root o \\of a|, \\\\  (1 - EPS) |h(a,b)| &\\leq&  |h_\\circ (a,b)| \n\\leq (1 + EPS) |h(a,b)|, \\\\  (1 - EPS) |h_\\circ (a,b)| &\\leq& |h(a,b)| \n\\leq (1 + EPS) |h_\\circ (a,b)|.\n\\end{eqnarray*}\nOf course, we can also get the following type of formula, which is sometimes convenient,\n for example, in the proof of Lemma FIXME(7.2):\n$$\\left({1 \\over {1 + {EPS \\over 2}}}\\right) |a \\oplus b| \\leq |a + b| \n\\leq \\left({1 \\over {1 - {EPS \\over 2}}}\\right) |a \\oplus b|.$$\n\nBefore stating our propositions, we prove two lemmas.\n\n\\nonumproclaim{Lemma FIXME(7.0) {\\rm (assuming no underflow and no overflow)}} \nFor machine numbers $a$ and $b${\\rm ,}\n$$(1 - EPS) \\otimes |a \\oplus b| \\le |a + b| \\le (1 + EPS) \\otimes |a \\oplus b|.$$\n Analogous formulas hold for $\\ -,\\ *,\\ /,\\ \\sqrt{}$.\n\\endproclaim\n\n\\demo{Proof}\nAssume $a+b > 0$.  \nIf $(1+EPS) \\otimes (a \\oplus b) < (a+b)$ then the machine number \n$(1+EPS) \\otimes (a \\oplus b)$ is a better approximation to $a+b$ than $a \\oplus b$, because $(a \\oplus b) \n< (1+EPS) \\otimes (a \\oplus b)$.  This contradicts the IEEE standard.  The case $a+b < 0$ can be handled similarly, and the\ncase $a+b = 0$ is trivial, similarly for the left-hand inequality.\n\\enddemo\n\n\\nonumproclaim{Lemma FIXME:7.1} $$(1 + EPS/2)^a A \\le (1 + k EPS) \\otimes A$$ \n where $A$ is a nonnegative machine number{\\rm ,} and $a$ is a {\\rm (}\\/not huge\\/{\\rm )} integer{\\rm ,} such that\nfor $a$ even{\\rm ,} $k = {a \\over 2} + 1$ and  for $a$ odd, $k = {a+1 \\over 2} + 1$. \n\\endproclaim\n\n\\demo{Proof} \n$$(1 + EPS/2)^a A\\le (1 - EPS/2)(1 + k EPS) A \\le (1 + k EPS) \\otimes A.$$ \nThe first inequality holds if $a$ and $k$ are as in the lemma, and the second inequality is a consequence of one of the\nformulas preceding Lemma FIXME:7.0 ($A \\ge 0$). \\enddemo\n \nWe now begin our construction of complex arithmetic. 
We will give proofs for most of the operations; the others should be straightforward to derive,\nor can be found in the {\\it Annals} web site.\n\\vglue6pt\n{\\it Remarks} FIXME:7.5.\ni) We remind the reader that all machine operations are on machine numbers, and that the various variables appearing in the propositions are assumed to be doubles.\n\\vglue4pt\nii) The propositions that follow include in their statements the definitions of the various operations (see Remark FIXME:6.6iii).\n\n\\nonumproclaim{Proposition FIXME:7.1 {\\rm $(-X)$}} \nIf $x$ is an {\\rm XComplex,} then \n$$-x \\equiv (-x.{\\rm re},-x.{\\rm im}).$$ \n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME:7.2 {\\rm $(X + D)$}} \nIf $x$ is an {\\rm XComplex} and $d$ is a double{\\rm ,} then \n$S(x + d) \\supseteq S(x) + S(d)${\\rm ,} where \n $$x + d \\equiv (x.{\\rm re} \\oplus d, x.{\\rm im};\n(EPS/2)\\otimes |x.{\\rm re} \\oplus d|).$$\n\\endproclaim\n\n\\demo{Proof}\nThe  error is bounded by  \n\\vglue12pt\n\\hfill ${\\displaystyle |(x.{\\rm re} + d) - (x.{\\rm re} \\oplus d)|\n\\le (EPS/2) | x.{\\rm re} \\oplus d|\n  =  (EPS/2) \\otimes | x.{\\rm re} \\oplus d|.}$ \\enddemo\n\n\\nonumproclaim{Proposition FIXME:7.3 {\\rm $(X - D)$}} \nIf $x$ is an {\\rm XComplex} and $d$ is a double{\\rm ,} then \n$S(x - d) \\supseteq S(x) - S(d)${\\rm ,} where \n $$x - d \\equiv (x.{\\rm re} \\ominus d, x.{\\rm im};\n(EPS/2)\\otimes |x.{\\rm re} \\ominus d|).$$ \\endproclaim\n\n\\nonumproclaim{Proposition FIXME:7.4 {\\rm $(X + X)$}} \nIf $x$ and $y$ are {\\rm XComplexes,} then \n$S(x + y) \\supseteq S(x) + S(y)${\\rm ,} where\n \\begin{eqnarray*}\nx + y &\\equiv& (x.{\\rm re} \\oplus y.{\\rm re}, x.{\\rm im} \\oplus y.{\\rm im};\n(EPS/2)\\\\\n&&  \\otimes\\ ((1 + EPS)  \\otimes (|x.{\\rm re} \\oplus y.{\\rm re}| \\oplus |x.{\\rm im} \\oplus y.{\\rm im}|))).\n\\end{eqnarray*}\n\\endproclaim\n  \n\n\\demo{Proof}\nThe error is bounded by\n\\begin{eqnarray*}\n&&|(x.{\\rm re} + y.{\\rm re}) - (x.{\\rm re} \\oplus y.{\\rm re})| + \n  |(x.{\\rm im} + y.{\\rm im}) - (x.{\\rm im} \\oplus y.{\\rm im})|\\\\[4pt]\n&&\\qquad \\le (EPS/2)(| x.{\\rm re} \\oplus y.{\\rm re}| + \n   |x.{\\rm im} \\oplus y.{\\rm im}|)\\\\[4pt]\n&&\\qquad \\le (EPS/2)((1 + EPS) \\otimes (| x.{\\rm re} \\oplus y.{\\rm re}| \\oplus \n   |x.{\\rm im} \\oplus y.{\\rm im}|))\\\\[4pt]\n&&\\qquad = (EPS/2) \\otimes ((1 + EPS) \\otimes (| x.{\\rm re} \\oplus y.{\\rm re}| \\oplus \n   |x.{\\rm im} \\oplus y.{\\rm im}|)).\n\\end{eqnarray*}\nTo go from line 2 to line 3 we used Lemma FIXME:7.0. \\enddemo\n\n\\nonumproclaim{Proposition FIXME:7.5 $(X - X)$}\nIf $x$ and $y$ are {\\rm XComplexes,} then \n$S(x - y) \\supseteq S(x) - S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx - y &\\equiv& (x.{\\rm re} \\ominus y.{\\rm re}, x.{\\rm im} \\ominus y.{\\rm im};\\\\[4pt]\n&&\\quad \n(EPS/2)\\otimes ((1 + EPS) \\otimes (|x.{\\rm re} \\ominus y.{\\rm re}| \\oplus |x.{\\rm im} \\ominus y.{\\rm im}|))).\n\\end{eqnarray*} \\endproclaim\n\n\n\\nonumproclaim{Proposition FIXME:7.6 $(A + A)$} \nIf $x$ and $y$ are {\\rm AComplexes,} then \n$S(x + y) \\supseteq S(x) + S(y)${\\rm ,} where\n\\begin{eqnarray*}\n x+y &\\equiv& ({\\rm re},{\\rm im};e) \\hbox{ with }\\\\[4pt]\n{\\rm re}& =& x.{\\rm re} \\oplus y.{\\rm re}\\\\[4pt]\n{\\rm im}& =& x.{\\rm im} \\oplus y.{\\rm im}\\\\[4pt]\ne& =& (1 + 2 EPS) \\otimes ( ((EPS/2) \\otimes (|{\\rm re}| \\oplus |{\\rm im}|))\n \\oplus (x.e \\oplus y.e)). 
\\end{eqnarray*}\n\\endproclaim\n\n\\demo{Proof}\nThe error is bounded by the sum of the contributions from the real part, the imaginary part, and the two individual errors:\n\\begin{eqnarray*}\n & &\\hskip-20pt |(x.{\\rm re} \\oplus y.{\\rm re}) -(x.{\\rm re} + y.{\\rm re})| + |(x.{\\rm im} \\oplus y.{\\rm im}) -\n(x.{\\rm im} + y.{\\rm im})| + (x.e + y.e).\\\\[4pt]\n&&\\hskip-12pt \\le (EPS/2) |x.{\\rm re} \\oplus y.{\\rm re}| + (EPS/2) |x.{\\rm im} \\oplus y.{\\rm im}| + \n(1 + EPS/2) (x.e \\oplus y.e)\\\\[4pt]\n&&\\hskip-12pt \\le  (1 + EPS/2) (EPS/2)(|x.{\\rm re} \\oplus y.{\\rm re}| \\oplus \n |x.{\\rm im} \\oplus y.{\\rm im}| ) \\\\[4pt]\n&&\\hskip-12pt \\qquad + (1 + EPS/2) (x.e \\oplus y.e)\\\\[4pt]\n&&\\hskip-12pt = (1 + EPS/2) ((EPS/2)(|x.{\\rm re} \\oplus y.{\\rm re}| \\oplus \n |x.{\\rm im} \\oplus y.{\\rm im}| ) + (x.e \\oplus y.e))\\\\[4pt]\n&&\\hskip-12pt \\le (1 + EPS/2)^2 (((EPS/2) (|x.{\\rm re} \\oplus y.{\\rm re}| \\oplus \n |x.{\\rm im} \\oplus y.{\\rm im}| )) \\oplus  (x.e \\oplus y.e))\\\\[4pt]\n&&\\hskip-12pt \\le   (1 + 2 EPS) \\otimes (((EPS/2)\\otimes (|x.{\\rm re} \\oplus\ny.{\\rm re}| \\oplus |x.{\\rm im} \\oplus y.{\\rm im}| )) \\oplus (x.e \\oplus y.e)).\\\\[4pt]\n\\noalign{\\vskip-24pt}\n\\end{eqnarray*}\n\\enddemo\n\nThe precedence for machine operations is the same as that for true operations, so some  parentheses are unnecessary\nand will often be omitted in what\n follows.\n\n\\nonumproclaim{Proposition FIXME:7.7 $(A - A)$}\nIf $x$ and $y$ are {\\rm AComplexes,} then \n$S(x - y) \\supseteq S(x) - S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx-y &\\equiv& ({\\rm re},{\\rm im};e) \\hbox{ with }\\\\\n{\\rm re}& =& x.{\\rm re} \\ominus y.{\\rm re}\\\\\n{\\rm im}& =& x.{\\rm im} \\ominus y.{\\rm im}\\\\\ne &=& (1 + 2 EPS) \\otimes ( ((EPS/2) \\otimes (|{\\rm re}| \\oplus |{\\rm im}|))\n \\oplus (x.e \\oplus y.e)).\n\\end{eqnarray*}\n\\endproclaim\n\n\n\\nonumproclaim{Proposition FIXME:7.8 $(X  \\times  D)$} \nIf $x$ is an {\\rm XComplex} and $d$ is a double{\\rm ,} then \n$S(x \\times d) \\supseteq S(x) \\times S(d)${\\rm ,} where\n\\begin{eqnarray*}\nx \\times d &\\equiv& ({\\rm re},{\\rm im};e) \\hbox{ with}\\\\[6pt] {\\rm re} &= &x.{\\rm re} \\otimes d\\\\[6pt] {\\rm im} &= &x.{\\rm im} \\otimes d\\\\[6pt] e &=& (EPS/2) \\otimes ((1 + EPS) \\otimes (|{\\rm re}| \\oplus |{\\rm im}|) ). 
\\end{eqnarray*}\n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME:7.9 $(X / D)$} \nIf $x$ is an {\\rm XComplex} and $d$ is a double{\\rm ,} then \n$S(x / d) \\supseteq S(x) / S(d)${\\rm ,} where\n\\begin{eqnarray*}\nx / d& \\equiv& ({\\rm re},{\\rm im};e) \\hbox{ with }\\\\[6pt] {\\rm re}& =& x.{\\rm re} \\oslash d\\\\[6pt] {\\rm im} &= &x.{\\rm im} \\oslash d\\\\[6pt] e& = &(EPS/2) \\otimes ((1 + EPS) \\otimes (|{\\rm re}| \\oplus |{\\rm im}|) ).\n\\end{eqnarray*}\n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME:7.10 $(X  \\times  X)$}\nIf $x$ and $y$ are {\\rm XComplexes,} then \n$S(x \\times y) \\supseteq S(x) \\times S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx \\times y &\\equiv& ({\\rm re},{\\rm im};e)\\hbox{ with}\\\\[6pt] {\\rm re}& =&{\\rm re}1 \\ominus {\\rm re}2, \\hbox{ with ${\\rm re}1 = x.{\\rm re} \\otimes y.{\\rm re}$ and ${\\rm re}2 =\nx.{\\rm im} \\otimes y.{\\rm im}$} \\\\[6pt] {\\rm im}& =& {\\rm im}1 \\oplus {\\rm im}2, \\hbox{ with ${\\rm im}1 = x.{\\rm re} \\otimes y.{\\rm im}$ and ${\\rm im}2 = x.{\\rm im} \\otimes\ny.{\\rm re}$}\\\\[6pt] e& =& EPS \\otimes ((1 + 2EPS) \\otimes  ((|{\\rm re}1| \\oplus |{\\rm re}2|) \\oplus  (|{\\rm im}1| \\oplus |{\\rm im}2|))).\n\\end{eqnarray*}\n\\endproclaim\n\n\\demo{Proof}\nThe error is bounded by the sum of the contributions from the real part and the imaginary part:\n\\begin{eqnarray*}\n&&|(x.{\\rm re} \\times  y.{\\rm re} - x.{\\rm im} \\times y.{\\rm im}) - ((x.{\\rm re} \\otimes y.{\\rm re}) \\ominus\n(x.{\\rm im}\n\\otimes y.{\\rm im}))| \\\\[6pt]\n&&\\quad + |(x.{\\rm re} \\times  y.{\\rm im} + x.{\\rm im} \\times y.{\\rm re}) - ((x.{\\rm re} \\otimes y.{\\rm im}) \\oplus (x.{\\rm im}\n\\otimes y.{\\rm re}))|.\n\\end{eqnarray*}\n   We want to bound this by a machine formula.  Let us begin by bounding \n$$|(x.{\\rm re} \\times  y.{\\rm re} - x.{\\rm im} \\times y.{\\rm im}) - ((x.{\\rm re} \\otimes y.{\\rm re}) \\ominus (x.{\\rm im}\n \\otimes y.{\\rm im}))| $$ \nby a machine formula:\\eject\n\n\\centerline{\n${\\displaystyle |(x.{\\rm re} \\times  y.{\\rm re} - x.{\\rm im} \\times y.{\\rm im}) - ((x.{\\rm re} \\otimes y.{\\rm re}) \\ominus\n(x.{\\rm im}\n\\otimes y.{\\rm im}))| }$}\n\\begin{eqnarray*}\n\\noalign{\\vskip-16pt}\n&&\\le |((x.{\\rm re} \\times  y.{\\rm re}) - (x.{\\rm im} \\times y.{\\rm im})) - ((x.{\\rm re} \\otimes y.{\\rm re}) -\n(x.{\\rm im}\n\\otimes y.{\\rm im}))|\\\\\n&&\\quad + |((x.{\\rm re} \\otimes y.{\\rm re}) - (x.{\\rm im} \\otimes y.{\\rm im}))-((x.{\\rm re} \\otimes y.{\\rm re})\n\\ominus (x.{\\rm im} \\otimes y.{\\rm im}))| \\\\\n&&\\le |(x.{\\rm re} \\times  y.{\\rm re}) - (x.{\\rm re} \\otimes y.{\\rm re})| + |(x.{\\rm\nim} \\times y.{\\rm im})- (x.{\\rm im} \\otimes y.{\\rm im})| \\\\\n&&\\quad + (EPS/2)|(x.{\\rm re} \\otimes y.{\\rm re}) - (x.{\\rm im} \\otimes\ny.{\\rm im})|\\\\\n&&\\le (EPS/2)|(x.{\\rm re} \\otimes y.{\\rm re})| + (EPS/2)|(x.{\\rm im} \\otimes y.{\\rm im})| \\\\\n&&\\quad + (EPS/2)(|x.{\\rm re} \\otimes\ny.{\\rm re}| + |x.{\\rm im} \\otimes y.{\\rm im}|)\\\\\n&&= (EPS/2)(2) (|x.{\\rm re} \\otimes y.{\\rm re}| + |x.{\\rm im} \\otimes y.{\\rm\nim}|)\\\\\n&&\\le EPS (1 + EPS/2) (|x.{\\rm re} \\otimes y.{\\rm re}| \\oplus |x.{\\rm im} \\otimes y.{\\rm im}|) .\n\\end{eqnarray*}\n\n\nAlmost the exact same calculation produces the analogous formula for the imaginary contribution, and we now combine the\ntwo to get a bound on the total error.  
\n\\begin{eqnarray*}\n&&\\le EPS (1 + EPS/2) (|x.{\\rm re} \\otimes y.{\\rm re}| \\oplus |x.{\\rm im} \\otimes y.{\\rm im}|) \\\\\n&&\\quad +\nEPS (1 + EPS/2) (|x.{\\rm re} \\otimes y.{\\rm im}| \\oplus |x.{\\rm im} \\otimes y.{\\rm re}| )\\\\\n&&\\le EPS \\otimes ((1 +\n2EPS)\\otimes((|x.{\\rm re} \\otimes y.{\\rm re}| \\oplus |x.{\\rm im} \\otimes y.{\\rm im}| ) \\\\\n&&\\quad \\oplus\n(|x.{\\rm re} \\otimes y.{\\rm im}| \\oplus |x.{\\rm im} \\otimes y.{\\rm re}| ))).\n\\\\\n\\noalign{\\vskip-36pt}\n\\end{eqnarray*}\n\\enddemo\n\n\\nonumproclaim{Proposition FIXME:7.11 $(D / X)$}\nIf $x$ is a double and $y$ is an {\\rm XComplex,} then\n$S(x / y) \\supseteq S(x) / S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx  / y & \\equiv& ({\\rm re},{\\rm im};e)\\hbox{ with }\\\\\nre& =& (x  \\otimes y.{\\rm re})\\oslash nrm \\hbox{ where $nrm = y.{\\rm re} \\otimes y.{\\rm re} \\oplus y.{\\rm im} \\otimes\ny.{\\rm im}$}\n\\\\\n{\\rm im} &=& -(x  \\otimes y.{\\rm im}) \\oslash nrm\\\\\ne& =& (2EPS) \\otimes ((1 +  2EPS)\\otimes(|{\\rm re}| \\oplus |{\\rm im}|)).\n\\end{eqnarray*}\n\\endproclaim\n\n{\\it Proof}.\nThe true version of $x/y$ is equal to  \n$$(x \\times y.{\\rm re} + i (-x \\times y.{\\rm im}))/ ((y.{\\rm re})^2 + (y.{\\rm im})^2)$$ and we need to compare this with the\nmachine version to find the error.  Further, this error is less than or equal to the sum of the real error and the imaginary\nerror. Thus, we start with the real calculation (as in the statement of the proposition, we use $nrm$ to represent the\nmachine version of $(y.{\\rm re})^2 + (y.{\\rm im})^2$).\n\\begin{eqnarray*}\n&&\\hskip-36pt \\left|{x \\times y.{\\rm re}  \\over (y.{\\rm re})^2 + (y.{\\rm im})^2} - ((x \\otimes y.{\\rm re})\\oslash\nnrm)\\right|\\\\\n&\n\\le &\\left|(x\n\\otimes y.{\\rm re}) \\oslash nrm - {x \\otimes y.{\\rm re} \\over nrm}\\right|\\\\\n&& +\\ \\left|{x \\otimes y.{\\rm re} \\over nrm} - {x\n\\times y.{\\rm re}\n\\over nrm}\\right| + \\left|{x \\times y.{\\rm re} \\over nrm} - \n{x \\times y.{\\rm re}  \\over (y.{\\rm re})^2 + (y.{\\rm im})^2}\\right|.\n\\end{eqnarray*}\n \nBefore continuing, let us compare ${1 \\over nrm}$ and ${1 \\over (y.{\\rm re})^2 + (y.{\\rm im})^2}$ by developing a formula for\ncomparing ${1 \\over a^2 + b^2}$ and its associated ${1 \\over nrm}$:\n\\nonumproclaim{Lemma FIXME:7.2} $$\\left|{1 \\over nrm} - {1 \\over a^2 + b^2}\\right| \\le   (EPS + (EPS/2)^2) {1 \\over nrm}$$\n{\\it where} $nrm = a\\otimes a \\oplus b \\otimes b.$\n\\endproclaim\n\n\\demo{Proof}\nWe compute that \n$$\\left({1 \\over 1 + EPS/2}\\right)^2 \\times nrm\n\\le a^2 + b^2 \n\\le \\left({1 \\over 1 - EPS/2}\\right)^2 \\times nrm;$$ hence \n$${1 \\over nrm} (1 - EPS/2)^2 \n\\le {1 \\over a^2 + b^2}\n\\le {1 \\over nrm} (1 + EPS/2)^2.$$  It then follows that \n\\begin{eqnarray*}\n\\left|{1 \\over nrm} - {1 \\over a^2 + b^2}\\right|& \\le &\n    {1 \\over nrm} (1 + EPS/2)^2 - {1 \\over nrm}\\\\\n&=& {1 \\over nrm} ((1 + EPS/2)^2 - 1) =\n  (EPS + (EPS/2)^2){1 \\over nrm} .\\\\\n\\noalign{\\vskip-24pt}\n\\end{eqnarray*}\n\\enddemo\n\n\\phantom{someone}\nGetting back to our main calculation (with $nrm = y.{\\rm re} \\otimes y.{\\rm re} \\oplus y.{\\rm im} \\otimes y.{\\rm im}$), we\nhave\n\\begin{eqnarray*}\n&&\\hskip-16pt\\left|(x \\otimes y.{\\rm re}) \\oslash nrm - {x \\otimes y.{\\rm re} \\over nrm}\\right|\\\\\n&&\\quad +\n\\left|{x \\otimes y.{\\rm re} \\over nrm} - {x \\times y.{\\rm re} \\over nrm}\\right| +\n\\left|{x \\times y.{\\rm re} \\over nrm} - \n{x \\times y.{\\rm re}  \\over (y.{\\rm 
re})^2 + (y.{\\rm im})^2}\\right|\\\\\n&&  \\le(EPS/2){|x \\otimes y.{\\rm re}| \\over nrm} +\n        (EPS/2){|x \\otimes   y.{\\rm re}| \\over nrm} +\n(EPS + (EPS/2)^2) {|x \\times   y.{\\rm re}| \\over nrm}\\\\\n&&  =(EPS/2)\\left({1 \\over nrm}\\right)(2 |x \\otimes y.{\\rm re}| +\n(2 + EPS/2) \\times |x \\times y.{\\rm re}| )\\\\\n&&  \\le (EPS/2)\\left({1 \\over nrm}\\right)(2 |x \\otimes y.{\\rm re}| +\n(2 + EPS/2)(1 + EPS/2) \\times |x \\otimes y.{\\rm re}| )\\\\\n&&\\quad = (EPS/2)\\left({1 \\over nrm}\\right)(|x \\otimes y.{\\rm re}|)\n(2 + (2 + EPS/2)(1+EPS/2))\\\\\n&&  \\le (EPS/2)(4 + 3EPS/2 + (EPS/2)^2)(|x \\otimes y.{\\rm re}|)\\left({1 \\over nrm}\\right)\\\\\n&& \\le (EPS/2)(4 + 3EPS/2 +\n(EPS/2)^2)(1+EPS/2)(|x \\otimes y.{\\rm re}| \\oslash nrm)\\\\\n&&  \\le (2EPS)(1 + 3EPS/8 + (EPS/4)^2)(1+EPS/2)(|(x \\otimes y.{\\rm\nre} \\oslash nrm)|).\n\\end{eqnarray*}\n We also get the analogous formula for the imaginary contribution to the error, so our total error is\nbounded by\n\\begin{eqnarray*}\n&&\\hskip-24pt (2EPS)(1 + 3EPS/8 + (EPS/4)^2)(1+EPS/2)((|(x \\otimes y.{\\rm re})\\oslash nrm|)\\\\\n&&\\quad+(|(x \\otimes y.{\\rm\nim})\\oslash nrm|))\\\\\n&& \\le (2EPS)(1 + 3EPS/8 + (EPS/4)^2)(1+EPS/2)^2\\\\\n&&\\quad\\cdot\\ ((|(x \\otimes y.{\\rm re})\\oslash nrm|)  \\oplus (|(x \\otimes\ny.{\\rm im})\\oslash nrm|))\\\\\n&&  \\le (2EPS)(1 - EPS/2)(1+ 2EPS)\\\\\n&&\\quad\\cdot\\ ((|(x \\otimes y.{\\rm re})\\oslash nrm|) \\oplus (|(x \\otimes y.{\\rm\nim})\\oslash nrm|))\\\\\n&&  \\le (2EPS) \\otimes ((1+ 2EPS) \\otimes ((|(x \\otimes y.{\\rm re})\\oslash nrm|)\\\\\n&&\\quad  \\oplus (|(x \\otimes y.{\\rm\nim})\\oslash nrm|))).\n\\end{eqnarray*}\n\nHere we used the fact that \n\\vglue6pt\n\\hfill $(1 + 3EPS/8 + (EPS/4)^2)(1+EPS/2)^2 \\le (1 - EPS/2)(1+ 2EPS).$ \\hfill\\qed\n\\vglue9pt\n \nThis should give the flavor of division proofs.  
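Bounds of this kind are also easy to sanity-check numerically.  The following Julia sketch (ours, not part of the original proofs; it assumes IEEE doubles with round-to-nearest, takes $EPS = 2^{-52}$, and uses BigFloat as a stand-in for the true values) tests the $X \\times X$ bound of Proposition FIXME:7.10 on random inputs:\n\\begin{verbatim}\n   # Sanity check of the X*X error bound (illustration only).\n   # EPS = eps(Float64) = 2^-52; BigFloat plays the role of truth.\n   function check_xx(xre, xim, yre, yim)\n      EPS = eps(Float64)\n      re1 = xre * yre;  re2 = xim * yim    # machine products\n      im1 = xre * yim;  im2 = xim * yre\n      e   = EPS * ((1 + 2*EPS) * ((abs(re1) + abs(re2)) +\n                                  (abs(im1) + abs(im2))))\n      err = abs((big(xre)*big(yre) - big(xim)*big(yim)) - (re1 - re2)) +\n            abs((big(xre)*big(yim) + big(xim)*big(yre)) - (im1 + im2))\n      return err <= e\n   end\n   all(check_xx(randn(), randn(), randn(), randn()) for _ in 1:10^6)\n\\end{verbatim}\nSuch experiments prove nothing, of course; the propositions and their proofs are what justify the bounds.  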
As such, we will skip the proofs of $X/X$ and $A/A$ and simply refer to the\n {\\it Annals} web site.\n\\nonumproclaim{Proposition FIXME:7.12 $(X / X)$}\nIf $x$ and $y$ are {\\rm XComplexes,} then\n$S(x / y) \\supseteq S(x) / S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx  / y  &\\equiv& ({\\rm re},{\\rm im};e)\\hbox{ with}\\\\\n{\\rm re}& =& (x.{\\rm re} \\otimes y.{\\rm re} \\oplus x.{\\rm im} \\otimes y.{\\rm im}) \\oslash nrm\\\\\n&&\\hbox{ where $nrm = y.{\\rm\nre}\n\\otimes y.{\\rm re} \\oplus y.{\\rm im} \\otimes y.{\\rm im}$}\\\\\n{\\rm im}& = &(x.{\\rm im} \\otimes y.{\\rm re} \\ominus x.{\\rm re} \\otimes y.{\\rm im}) \\oslash nrm\\\\\ne& =& (5EPS/2) \\otimes ((1\n+  3EPS) \\otimes A) \\hbox{  where}\\\\\n A& =& \n((|x.{\\rm re} \\otimes y.{\\rm re}| \\oplus |x.{\\rm im} \\otimes y.{\\rm im}|)\n \\oplus \n (|x.{\\rm im} \\otimes y.{\\rm re}| \\oplus |x.{\\rm re} \\otimes y.{\\rm im}|))\n\\oslash nrm\n.\\end{eqnarray*}\n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME:7.13 $(A / A)$}\nIf $x$ and $y$ are {\\rm AComplexes} with \n$y.e < 100 EPS \\otimes |y|${\\rm ,} or{\\rm ,} more accurately{\\rm , }\n$$(y.e)^2< ((10000 EPS) \\otimes EPS)\\otimes nrm$$ then\n$S(x / y) \\supseteq S(x) / S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx  / y & \\equiv& ({\\rm re},{\\rm im};e)\\hbox{ with}\\\\\n{\\rm re}& =& (x.{\\rm re} \\otimes y.{\\rm re} \\oplus x.{\\rm im} \\otimes y.{\\rm im}) \\oslash nrm\\\\\n&&\\hbox{ where $nrm = y.{\\rm\nre}\n\\otimes y.{\\rm re} \\oplus y.{\\rm im} \\otimes y.{\\rm im}$}\\\\\n{\\rm im}& =& (x.{\\rm im} \\otimes y.{\\rm re} \\ominus x.{\\rm re} \\otimes y.{\\rm im}) \\oslash nrm\\\\\ne& =& (1 + 4EPS)\\otimes \n(\n((5 EPS/2) \\otimes A \\oplus (1+103EPS) \\otimes B)\n\\oslash nrm)\\hbox{ where}\\\\\n A& =& (|x.{\\rm re} \\otimes y.{\\rm re}| \\oplus |x.{\\rm im}\\otimes y.{\\rm im}|) \n \\oplus\n(|x.{\\rm im} \\otimes y.{\\rm re}| \\oplus | x.{\\rm re} \\otimes y.{\\rm im}|)\\\\\nB& =& x.e\\otimes(|y.{\\rm re}|\\oplus|y.{\\rm im}|)\n\\oplus\n   (|x.{\\rm re}|\\oplus|x.{\\rm im}|)\\otimes y.e.\n\\end{eqnarray*}\n\\endproclaim\n\nIn our last proposition we will construct the square-root function.  As a warm-up, ignoring round-off error, our\nconstruction is as follows.  If $x = x.{\\rm re} + i x.{\\rm im}$ then $\\sqrt{x} = s + i\\,d$ where $s=\\sqrt{(|x.{\\rm\nre}| + h(x.{\\rm re},x.{\\rm im}))/2}$ and $d = x.{\\rm im}/(2s)$ when $x.{\\rm re} > 0.0$, and $\\sqrt{x} = d + i s$ otherwise.  \nThus, we take our (no-round-off) square roots to be in the first and fourth quadrants.\n\n\\nonumproclaim{Proposition FIXME:7.14 ($\\sqrt{X}$)}\nIf $x$ is an {\\rm XComplex,} then $S(\\sqrt{x}) \\supseteq \\sqrt{S(x)}$\nwhere we\nlet $s_o = \\root o \\of {(|x.{\\rm re}| \\oplus h_o (x.{\\rm re},x.{\\rm im})) \\otimes 0.5}$\n and $d_o = (x.{\\rm im} \\oslash s_o) \\otimes 0.5${\\rm ,} and define\n\\begin{eqnarray*}\n \\sqrt{x} & \\equiv &({\\rm re},{\\rm im};e) \\hbox{ where }\\\\\n {\\rm re} &=& s_o \\hbox{ if $x.{\\rm re} > 0.0$ and ${\\rm re} = d_o$ otherwise},\\\\\n{\\rm im}& =& d_o \\hbox{ if $x.{\\rm re} > 0.0$ and ${\\rm im} = s_o$ otherwise},\n\\\\\ne& =& EPS \\otimes ((1 + 4 EPS) \\otimes (1.25 \\otimes s_o \\oplus 1.75 \\otimes |d_o|)).\n\\end{eqnarray*}\n\\endproclaim\n\n\\demo{Proof}\nThis will be a little nasty.  
Let us begin by analyzing $e_s$, which is the difference between the true calculation of $s$ and\nthe machine calculation of~$s$, that is $e_s = |s - s_o|.$   First, we bound $s.$\n\\begin{eqnarray*}\ns &=& \\sqrt{(|x.{\\rm re}| + h(x.{\\rm re},x.{\\rm im})) * 0.5}\\\\\n&\\le& (1+EPS)^{1/2} \\sqrt{(|x.{\\rm re}| + h_o(x.{\\rm re},x.{\\rm im}))\n* 0.5}\\\\\n&\\le& (1+EPS)^{1/2} (1+EPS/2)^{1/2}\n\\sqrt{(|x.{\\rm re}| \\oplus h_o(x.{\\rm re},x.{\\rm im})) * 0.5}\\\\\n&\\le& (1+EPS)^{1/2} (1+EPS/2)^{1/2}\\\\\n&&\\cdot\\ (1+EPS/2)\n\\root o \\of {(|x.{\\rm re}| \\oplus h_o(x.{\\rm re},x.{\\rm im})) * 0.5}\\\\\n&=& (1+EPS)^{1/2} (1+EPS/2)^{3/2} s_o.\n\\end{eqnarray*}\nBy a power series expansion, we see that\n\\begin{eqnarray*}\n(1+EPS)^{1/2} (1+EPS/2)^{3/2}&=&\\left(1 + {1 \\over 2} EPS - {1 \\over 8} EPS^2 + \\cdots\\right)\\\\\n&& \\cdot\\\n     \\left(1 + {3 \\over 2} EPS/2 + {3 \\over 8} (EPS/2)^2 +\\cdots\\right)\\\\\n&=&\\left(1 + {5 \\over 4} EPS + {11 \\over 32} EPS^2 +\\cdots\\right),\n\\end{eqnarray*}\nso that\n$$s \\le \\left(1 + {5 \\over 4} EPS + {11 \\over 32} EPS^2 + \\cdots\\right) s_o.$$\nSimilarly, \n$$s \\ge \\left(1 - {5 \\over 4} EPS\\right) s_o.$$\n\nThus, we can bound the $s$ error, \n\\begin{eqnarray*}\ne_s = |s - s_o|&\\le&\\left(\\left(1 + {5 \\over 4} EPS + {11 \\over 32} EPS^2 +\\cdots\\right) - 1\\right) s_o\\\\\n&=& \\left({5 \\over 4} EPS +\n{11 \\over 32} EPS^2 +\\cdots\\right) s_o.\n\\end{eqnarray*}\n\nNext, we analyze $e_d$, which is the absolute value of the difference between the true calculation of $d$ and the machine calculation of $d$.  That is, $e_d = |d - d_o|.$\n\\begin{eqnarray*}\ne_d &=& |x.{\\rm im}/(2s) - x.{\\rm im} \\oslash (2 s_o)|\\\\\n&\\le& |x.{\\rm im} \\oslash (2 s_o) - x.{\\rm im} / (2 s_o)| + \n|x.{\\rm im}/(2s_o) - x.{\\rm im}/(2s) |\\\\\n&\\le& (EPS/2) |x.{\\rm im} / (2 s_o)| + \\left|{x.{\\rm im} \\over 2} {s - s_o \\over s s_o} \\right|\\\\\n&\\le& (EPS/2) |x.{\\rm im}/ (2\ns_o)| +\\left|{x.{\\rm im} \\over 2} {1 \\over s s_o} ((5/4)EPS + (11/32)EPS^2+\\cdots) s_o\\right|\\\\\n&\\le& (EPS/2) |x.{\\rm im} / (2\ns_o)| \\\\\n&&+ \\left|{x.{\\rm im} \\over 2} {1 \\over s_o (1 - (5/4)EPS)}((5/4)EPS + (11/32)EPS^2+\\cdots)\\right|\\\\\n&= &(EPS/2) |x.{\\rm\nim}/ (2 s_o)| \\left(1 + {(5/2) + (11/16)EPS+\\cdots \\over (1 - (5/4)EPS)}\\right)\\\\\n&= &(EPS/2) {(7/2) + (-9/16) EPS +\\cdots\\over\n(1 - (5/4)EPS)} |x.{\\rm im}/ (2 s_o)|\\\\\n&\\le& (EPS/2) (1+EPS/2)\n {7/2 \\over (1 - (5/4)EPS)}\n|x.{\\rm im} \\oslash (2 s_o)|\\\\\n&= &(EPS/2) (1+EPS/2)\n {7/2 \\over (1 - (5/4)EPS)}\n|d_o|.\n\\end{eqnarray*}\n \nFinally, we can bound the overall error $e = e_s + e_d$.\n\\begin{eqnarray*}\ne_s + e_d&\\le& \\left({5 \\over 4} EPS + {11 \\over 32} EPS^2 +\\cdots\\right) s_o\\\\\n&& +\\\n (EPS/2) (1+EPS/2)\n {7/2 \\over (1 - (5/4)EPS)}|d_o|\\\\\n&\\le& \\left( EPS + {11 \\over 40} EPS^2 +\\cdots\\right) \\left({5 \\over 4} s_o\\right) \\\\\n&&+\\\n EPS (1+EPS/2)\n {1 \\over (1 - (5/4)EPS)}\\left|{7 \\over 4} d_o\\right|\\\\\n&\\le& EPS (1+EPS/2)\n {1 \\over (1 - (5/4)EPS)} \\left({5 \\over 4} s_o\\right) \\\\\n&&+\n\\  EPS (1+EPS/2)\n {1 \\over (1 - (5/4)EPS)}\\left|{7 \\over 4} d_o\\right|\\\\\n&\\le& EPS (1+EPS/2)\n {1 \\over (1 - (5/4)EPS)}\n\\left({5 \\over 4} s_o +\\left|{7 \\over 4} d_o\\right|\\right)\\\\\n&\\le& EPS (1+EPS/2)^3\n {1 \\over (1 - (5/4)EPS)}\n\\left({5 \\over 4} \\otimes s_o \\oplus \\left|{7 \\over 4} \\otimes d_o\\right|\\right)\\\\\n&\\le& EPS (1 - (EPS/2))(1 + 4 EPS) \n\\left({5 \\over 4} \\otimes s_o \\oplus \\left|{7 
\\over 4} \\otimes d_o\\right|\\right)\\\\\n&\\le& EPS \\otimes \\left((1 + 4 EPS) \\otimes\n\\left({5 \\over 4} \\otimes s_o \\oplus \\left|{7 \\over 4} \\otimes d_o\\right|\\right)\\right).\\\\\n\\noalign{\\vskip-36pt}\n\\end{eqnarray*}\n\\enddemo\n \nNow, we develop two formulas for the absolute value of an XComplex.\n \n\n\\demo{Formula {\\rm FIXME:7.0 (${\\rm absUB}(X)$)}}\nIf $x$ is an XComplex, then\nthere is an upper bound on the absolute value of $x$ as follows:\n\\begin{eqnarray*}\n|x| &=& h(x.{\\rm re},x.{\\rm im}) \\le (1 + EPS) h_{\\circ} (x.{\\rm re},x.{\\rm im})\\\\\n&\\le& (1 - EPS/2) (1 + 2EPS)h_{\\circ} (x.{\\rm re},x.{\\rm im}) \\\\\n&\\le& (1 + 2EPS) \\otimes h_{\\circ} (x.{\\rm re},x.{\\rm im}).\n\\end{eqnarray*}\nThus, we define \n$${\\rm absUB}(x) = (1 + 2EPS) \\otimes h_{\\circ} (x.{\\rm re},x.{\\rm im}).$$ \n\\enddemo\n\n\\demo{Formula {\\rm FIXME:7.1 (${\\rm absLB}(X)$)}}\nIf $x$ is an XComplex, then\nwe get a lower bound on the absolute value of $x$ as follows.\n\\begin{eqnarray*}\n|x|& =& h(x.{\\rm re},x.{\\rm im}) \\ge (1 - EPS) h_{\\circ} (x.{\\rm re},x.{\\rm im})\\\\\n&\\ge& (1 + EPS/2) (1 - 2EPS) h_{\\circ} (x.{\\rm\nre},x.{\\rm im})\\\\\n&\\ge& (1 - 2EPS) \\otimes h_{\\circ} (x.{\\rm re},x.{\\rm im}).\n\\end{eqnarray*}\nThus, we define \n$${\\rm absLB}(x) = (1 - 2EPS) \\otimes h_{\\circ} (x.{\\rm re},x.{\\rm im}).$$ \n\\enddemo\n\\vglue12pt\n\nFinally, in several places in the {\\it verify} program we perform a standard operation on a pair of doubles and must take into account round-off error.  This is easy if we use Lemma FIXME:7.0.\n\nFor example, in {\\it inequalityHolds} we want to show that $$wh \\times wh  > {\\rm absUB}{\\rm (along),}$$ where $wh =\n{\\rm absLB}{\\rm (whirle)}.$   \nBy Lemma FIXME:7.0, we know that $$(1 - EPS) \\otimes (wh \\otimes wh) \\le wh \\times wh$$ and we simply test that \n$$(1 - EPS) \\otimes  (wh \\otimes wh) > {\\rm absUB}{\\rm (along)}.$$\n\nA slightly more complicated version of this occurs in the computer calculation of ${\\rm pos}[i]$ and ${\\rm size}[i],$ that is, the center and size of a sub-box.  Prior to multiplication by ${\\rm scale}[i] = 2 ^{(5 - i)/6},$  the calculations of ${\\rm pos}$ and ${\\rm size}$ are exact.  However, multiplication by ${\\rm scale}$ introduces round-off error.  For the center of the sub-box we will have the computer use ${\\rm pos}[i] \\otimes {\\rm scale}[i]$ with the realization that this is not necessarily ${\\rm pos}[i] \\times {\\rm scale}[i]$.   Thus, we have to choose appropriate sizes to ensure that the machine sub-box contains the true sub-box.  \n\nNotationally, this is annoying, because we typically use a computer command like ${\\rm pos}[i] = {\\rm pos}[i] \\otimes {\\rm scale}[i],$ while in an exposition, we need to avoid that.  We will denote the true center of the sub-box by $p[i]$ and the machine center of the sub-box by $p_0[i],$ and the true and machine sizes will be denoted $s[i]$ and $s_0[i].$  We will let ${\\rm pos}[i]$ and ${\\rm size}[i]$ be the position and size (true and machine are the same) before multiplication by ${\\rm scale}[i].$\n\nLet $p[i] = {\\rm pos}[i] \\times {\\rm scale}[i],\\ p_0[i] = {\\rm pos}[i] \\otimes {\\rm scale}[i],$ and \n$s[i]= {\\rm size}[i] \\times {\\rm scale}[i].$  We must select $s_0[i]$ so that \n$p_0[i] + s_0[i] \\ge p[i] + s[i].$  (Here, taking $+$ on the left-hand side is correct, because the need for machine calculation there is incorporated at other points in the programs.)  So, we must find $s_0[i]$ such that $s_0[i] \\ge (p[i] - p_0[i]) + s[i].$\n\\begin{eqnarray*}\n(p[i] - p_0[i]) + s[i] 
&\\le& (EPS/2) |p_0[i]| + {\\rm size}[i] \\times {\\rm scale}[i]\\\\[4pt]\n&\\le& (EPS/2) |p_0[i]| + (1 + EPS/2) ({\\rm size}[i] \\otimes\n{\\rm scale}[i])\\\\[4pt]\n&\\le& (1 + EPS/2) ((EPS/2) |p_0[i]| +  ({\\rm size}[i] \\otimes {\\rm scale}[i]))\\\\[4pt]\n&\\le& (1 + EPS/2)^2 ((EPS/2) |p_0[i]| \\oplus  ({\\rm size}[i]\n\\otimes {\\rm scale}[i]))\\\\[4pt]\n&\\le& (1 + 2EPS) \\otimes ((EPS/2) |p_0[i]| \\oplus  ({\\rm size}[i] \\otimes {\\rm scale}[i])).\n\\end{eqnarray*}\nThus we take \n$$s_0 [i]= (1 + 2EPS) \\otimes ((EPS/2) |p_0[i]| \\oplus  ({\\rm size}[i] \\otimes {\\rm scale}[i])).$$\nThis also works to give $p_0[i] - s_0[i] \\le p[i] - s[i].$\\hfill\\qed\n\n\n \n\n\n\n\n", "meta": {"hexsha": "e74958099bef1ff16401aa04d61cd8e9637a945b", "size": 33309, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/TeX_and_Figures_Files/Chapter_5.tex", "max_stars_repo_name": "njt99/findingkillerwords", "max_stars_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/TeX_and_Figures_Files/Chapter_5.tex", "max_issues_repo_name": "njt99/findingkillerwords", "max_issues_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/TeX_and_Figures_Files/Chapter_5.tex", "max_forks_repo_name": "njt99/findingkillerwords", "max_forks_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0091883614, "max_line_length": 631, "alphanum_fraction": 0.601128824, "num_tokens": 13550, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7826624738835052, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5862824469867797}}
{"text": "\\begin{enumerate}[(a)]\r\n\t\\item This is not a degree sequence of a graph, as there would be an odd\r\n\t\tnumber of vertices with odd degrees.\r\n\t\\item Suppose this was the degree sequence of a graph. Then, by Havel-Hakimi\r\n\t\ttheorem, so is $3,2,1,0,0,0$, and $1,0,0,0,0,-1$. But the last is clearly\r\n\t\tnot a degree sequence, so the initial sequence is not either.\r\n\t\\item This is the degree sequence of the graph \\\\\r\n\t\t\\begin{tikzpicture}\r\n\t\t\t\\draw\r\n\t\t\t(0:1) node {} -- (60:2) node {} -- (0:3) node {}\r\n\t\t\t(180:1) node {} -- (120:2) node {} -- (180:3) node {}\r\n\t\t\t(120:2) -- (60:2)\r\n\t\t\t(240:2) node {} -- (300:2) node {}\r\n\t\t\t;\r\n\t\t\\end{tikzpicture}\r\n\t\\item Iterations of Havel-Hakimi algorithm go as follows:\r\n\\subparagraph{0} 7,4,3,3,2,2,2,1,1,1\r\n\\subparagraph{1}   3,2,2,1,1,1,0,1,1\r\n\\subparagraph{2}     1,1,0,1,1,0,1,1 which is, of course (with rearrangement), the degree sequence of the graph of $3K_2$ with 2 isolated vertices added.\r\nA graph with the original degree sequence is then \\\\\r\n\\begin{tikzpicture}\r\n\t\\draw\r\n\t(0:2) node {} -- (0:0) node {} -- (45:2) node {} -- (90:2) node {} -- (0:0)\r\n\t(0:0) -- (135:2) node {} -- (180:2) node {} -- (270:2) node {} -- (0:0)\r\n\t(0:0) -- (225:1) node {} -- (180:2) -- (0:0)\r\n\t(225:1) -- (270:2)\r\n\t(300:2) node {} -- (330:2) node {}\r\n\t;\r\n\\end{tikzpicture}\r\n\\end{enumerate}\r\n", "meta": {"hexsha": "1195fbb3c8ecf20d3928f82867e0c3ae061ac387", "size": 1315, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tut1/19.tex", "max_stars_repo_name": "h4tguy/gt-hons", "max_stars_repo_head_hexsha": "a9b4a271a9bdc31c68571507f6bff16b7bedd12b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tut1/19.tex", "max_issues_repo_name": "h4tguy/gt-hons", "max_issues_repo_head_hexsha": "a9b4a271a9bdc31c68571507f6bff16b7bedd12b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tut1/19.tex", "max_forks_repo_name": "h4tguy/gt-hons", "max_forks_repo_head_hexsha": "a9b4a271a9bdc31c68571507f6bff16b7bedd12b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4193548387, "max_line_length": 154, "alphanum_fraction": 0.591634981, "num_tokens": 520, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.586282446400133}}
{"text": "\\chapter{May}\n\n\\section{Build sparse matrix with Eigen efficiently} \\index{Build sparse matrix with Eigen efficiently}\n\nWhen I build a sparse matrix with Eigen, I come across a problem that it is quite slow. The problem is that\nwhen you \\textit{insert} elements one by one, each time Eigen needs to reallocate space for the matrix and copy\nthe data, which is reason that it can be slow. \n\nThe solution is to first record the row, column index as well as value of non-zero entries with \\textit{triplets},\nand then build the sparse matrix with the function \\textit{setFromTriplets}.\n\n\\chapter{June}\n\\section{Transform matrix and projective matrix} \\index{Transform matrix and projective matrix}\nTo transform a vertex from world space to camera space, we need to construct the rigid transform matrix. Given a right\nhand system, we need \\textbf{eye} $\\in \\mathbb{R}^3$ position, \\textbf{lookAt} $\\in \\mathbb{R}^3$ position as well as \\textbf{up} $\\in \\mathbb{R}^3$ direction to determine the matrix.\nThe matrix is in the following format:\n \\myequ{\nV & =       \\begin{pmatrix}\n           s.x & u.x & -f.x & 0.0 \\\\\n           s.y & u.y & -f.y & 0.0 \\\\\n           s.z & u.z & -f.z & 0.0 \\\\\n           - (s \\cdot eye) & -(u \\cdot eye) & -(f \\cdot eye) & 1.0  \n       \\end{pmatrix} \\\\\n  where\\  & f = normalize(lookAt - eye) \\\\\n  & s = normalize(f \\times up) \\\\\n  & u = s \\times f \n}\n\n\\begin{remark}\n    Note this matrix is constructed for right-hand coordinate system.  \n\\end{remark}\n\nIn addition, to transform a point from camera space to image space, we need to construct the perspective projection matrix. \nIn this case, we need field of view in $x$ and $y$ axis, namely \\textbf{fovX} and \\textbf{fovY}, \\textbf{nearClip} and \\textbf{farClip}. The matrix is build as:\n \n \\myequ{\n     P & =       \\begin{pmatrix}\n        \\frac{1}{\\text{tan}(\\frac{fovX}{2})} & 0 & 0 & 0 \\\\\n        0 & \\frac{1}{\\text{tan}(\\frac{fovY}{2})} & 0 & 0 \\\\\n        0 & 0 & -\\frac{far + near}{far - near} \n        &  -\\frac{2 * far * near}{far - near} \\\\\n        0 & 0 & -1 & 0 \n \\end{pmatrix} \n}\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "334b95817abe9202ef14222f50593b2c1ea338d7", "size": 2076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/may.tex", "max_stars_repo_name": "soundsilence/DailyNotes", "max_stars_repo_head_hexsha": "561ad833b3d7824699847bc3e933e7da05889463", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-05-11T08:56:57.000Z", "max_stars_repo_stars_event_max_datetime": "2016-05-11T08:56:57.000Z", "max_issues_repo_path": "tex/may.tex", "max_issues_repo_name": "soundsilence/DailyNotes", "max_issues_repo_head_hexsha": "561ad833b3d7824699847bc3e933e7da05889463", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/may.tex", "max_forks_repo_name": "soundsilence/DailyNotes", "max_forks_repo_head_hexsha": "561ad833b3d7824699847bc3e933e7da05889463", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7454545455, "max_line_length": 183, "alphanum_fraction": 0.6459537572, "num_tokens": 635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.586282446400133}}
{"text": "\\section{Materials and Methods}\n\\label{sec:method}\nConsider a panmictic diploid population with fixed size of $N$\nindividuals.  Let $\\bm{ \\nu} =\\{\\nu_t\\}_{t\\in\\Tc}$ be frequencies of\nthe derived allele at generations $t\\in\\Tc$ for a given variant, where\nat generations $\\Tc= \\{\\tau_i: 0\\le \\tau_0<\\tau_1\\ldots< \\tau_T\\}$\nsamples of $n$ individuals are chosen for pooled sequencing. The\nexperiment is replicated $R$ times. We denote allele frequencies of\nthe $R$ replicates by the set $\\{\\bm{\\nu}\\}_R$.  To identify the genes\nand variants that are responding to selection pressure, we use the\nfollowing procedure:\n\\begin{enumerate}\n\\item {\\bf Estimating population size.} The procedure starts by\n  estimating the effective population size, $\\hN$, under the\n  assumption that much of the genome is evolving neutrally.\n\\item {\\bf Estimating selection parameters.} For each polymorphic\n  site, selection and dominance parameters $s,h$ are estimated so\n  as to maximize the likelihood of the time series data, given $\\hN$.\n\\item {\\bf Computing likelihood statistics.} For each variant, a\n  log-odds ratio of the likelihood of selection model ($s>0$) to the\n  likelihood of neutral evolution/drift model is computed. Likelihood\n  ratios in a genomic region are combined to compute the \\comale\\\n  statistic for the region.\n\\item {\\bf Hypothesis testing.} An empirical null distribution of the\n  \\comale\\ statistic is calculated using genome-wide drift\n  simulations, and used to compute $p$-values and thresholds for a\n  specified FDR. We perform single locus hypothesis testing within\n  selected regions to identify significant variants and report genes\n  that intersect with the selected variants.\n\\end{enumerate}\nThese steps are described in detail below.\n\\subsection{Estimating Population Size}\nMethods for estimating population sizes from temporal neutral\nevolution data have been\ndeveloped~\\cite{williamson1999using,anderson2000monte,\n  bollback2008estimation, Terhorst2015Multi,jonas2016estimating}.\nHere, we aim to extend these models to explicitly model the sampling\nnoise that arise in pool-seq data. Specifically, we model the\nvariation in sequence coverage over different locations, and the noise\ndue to sequencing only a subset of the individuals in the population.\nIn addition, many existing\nmethods~\\cite{bollback2008estimation,feder2014Identifying,topa2015gaussian,Terhorst2015Multi}\nare designed for large populations, and model frequency as a\ncontinuous quantity. We observed that using Brownian motion to model\nfrequency drift may be inadequate for small populations, low starting\nfrequencies and sparse sampling (in time), factors that are common in\nexperimental evolution (see Results, \\ref{fig:power}A-C, and\n\\ref{fig:markov}). To this end, we model the Wright-Fisher Markov\nprocess for generating pool-seq data~(\\ref{proc:arya}) via a\n{discrete} HMM (~\\ref{fig:1}-B). We start by computing a likelihood\nfunction for the population size given neutral pool-seq data.\n\n\n\\paragraph{Likelihood for Neutral Model.}\nWe model the allele frequency counts $2N\\nu_t$ as being sampled from a\nBinomial distribution. Specifically,\n\\begin{eqnarray*} \n  \\nu_0 &\\sim& \\pi,\\\\\n  2N\\nu_t|\\nu_{t-1} &\\sim& \\bino(2N,\\nu_{t-1}) \n\\end{eqnarray*}\nwhere $\\pi$ is the global distribution of allele frequencies in the\nbase population. 
\nNote that  $\\pi$  \ndepends on the\ndemographic history of the founder lines and can be estimated from the site \nfrequency spectrum (see~\\ref{fig:sfs}) of the initial population. For notational \nconvenience, henceforth we omit the dependence of the likelihoods on the \nparameter $\\pi$.\n\nTo estimate the frequency after $\\tau$ transitions, it is enough to\nspecify the $2N\\times2N$ transition matrix $P^{(\\tau)}$, where\n$P^{(\\tau)}[i,j]$ denotes the probability of change in allele frequency\nfrom ${i}/{2N}$ to ${j}/{2N}$ in $\\tau$ generations:\n\\beq\n  P^{(1)}[i,j] &= \\pr\\left(\\nu_{t+1}=\\frac{j}{2N} \\left| \\right. \n  \\nu_{t}=\\frac{i}{2N}\\right)\\\\\n  &={2N \\choose j} \\nu_{t}^j\n  (1-\\nu_{t})^{2N-j}, \\label{eq:P1}\n\\eeq\n\\beq\n  P^{(\\tau)} =   P^{(\\tau-1)}P^{(1)} \\label{eq:Pt}\n\\eeq\nFurthermore, in an E\\&R experiment, $n\\le N$ individuals are randomly\nselected for sequencing. The {\\em sampled allele frequencies},\n$\\{y_{t}\\}_{t\\in\\Tc}$, are also Binomially distributed \\beq 2ny_{t}\n\\sim \\text{Binomial}(2n,\\nu_t) \\eeq We introduce the $2N\\times2n$\nsampling matrix $Y$, where $Y[i,j]$ stores the probability that the\nsample allele frequency is ${j}/{2n}$ given that the true allele\nfrequency is ${i}/{2N}$.\n\n\nWe denote the pool-seq data for that variant as $\\{x_t = \\langle\nc_t,d_t \\rangle\\}_{t\\in\\Tc}$ where $d_t, c_t$ represent the coverage,\nand the read count of the derived allele, respectively. Let\n$\\{\\lambda_t\\}_{t\\in\\Tc}$ be the sequencing coverage at different\ngenerations. Then, the observed data are sampled according to\n\\begin{equation} d_t \\sim \\poiss(\\lambda_t), \\hspace{1in} c_t \\sim\n\\bino(d_t,y_t) \n\\end{equation}\nThe emission probability for an observed tuple $x_t=\\langle c_t,\nd_t\\rangle $ is \n\\begin{equation} {\\bf e}_{i}(x_t) = {d_t \\choose c_t}\n\\left(\\frac{i}{2n} \\right)^{c_t}\\left (1- \\frac{i}{2n}\n\\right)^{d_t-c_t} .  \n\\end{equation}\nFor $1\\le t\\le T, 1\\le j\\le 2N$, let $\\alpha_{t,j}$ denote the\nprobability of emitting $x_1,x_2,\\ldots,x_t$ and reaching state $j$ at\n$\\tau_t$. Then, $\\alpha_{t}$ can be computed using the\nforward-procedure~\\cite{durbin1998biological}:\n\\begin{align}\n\t\\alpha_t^T&=\\alpha_{t-1}^T P^{(\\delta_t)}\\text{diag}(Y\\bfe(x_t))\n\t\\label{eq:hmm}\n\\end{align}\nwhere $\\delta_t=\\tau_t-\\tau_{t-1}$. The joint likelihood of the\nobserved data from $R$ independent observations is given by\n\\beq\n\\Lc(N|\\{\\bm{x}\\}_R, \nn)&=\\prod_{r=1}^R\\Lc(N|\\bm{x}^{(r)},n)=\\pr(\\{\\bm{x}\\}_R|N,n)\\\\& =\t\n\t\\prod_{r=1}^R \\sum_i\\alpha_{T,i}^{(r)}\n\t\\label{eq:hmmlik}\n\\eeq\nwhere $\\bm{x}=\\{x_t\\}_{t\\in \\Tc}$. The graphical model and the generative \nprocess by which the data are generated are depicted in~\\ref{fig:1}-B \nand~\\ref{proc:arya}, respectively.\n\nFinally, the last step is to compute an estimate $\\hN$ that maximizes\nthe likelihood of all $M$ variants in the whole genome. Let\n$\\bm{x}_i^{(r)}$ denote the time-series data of the $i$-th variant in\nreplicate $r$. Then,\n\\begin{equation}\n \\hN =\n\\underset{N}{\\arg \\max} \\prod_{i=1}^M  \\prod_{r=1}^R\\Lc(N|\\bm{x}_i^{(r)})\n\\label{eq:mlen}\n\\end{equation}\n\n\\subsection{Estimating Selection Parameters}\n\n\\paragraph{Likelihood for Selection Model.}\nAssume that the site is evolving under selection constraints $s\\in\n\\Rbb$, $h\\in \\Rbb_+$, where $s$ and $h$ denote selection strength and \ndominance parameters,\nrespectively. 
By definition, the relative fitness values of genotypes\n0$|$0, 0$|$1 and 1$|$1 are given by $w_{00}=1$, $w_{01}=1+hs$ and\n$w_{11}=1+s$.  Then, $\\nusp$, the frequency at time\n$\\tau_{t}+1$ (one generation ahead), can be estimated using: \n\\beq \n\\hat{\\nu}_{t^+} =\n\\mathbb{E}[\\nusp|s,h,\\nu_t]&=\\frac{w_{11}\\nu_t^2 +\n  w_{01}\\nu_t(1-\\nu_t)}{w_{11}\\nu^2_t + 2w_{01}\\nu_t(1-\\nu_t) +\n  w_{00}(1-\\nu_t)^2}\\\\\n&=\\nu_t+\\frac{s(h+(1-2h)\\nu_t)\\nu_t(1-\\nu_t)}{1+s\\nu_t(2h+(1-2h)\\nu_t)}.\n  \\label{eq:transition}\n\\eeq\nThe machinery for computing the likelihood of the selection parameters is \nidentical to that for the population size, except for the transition matrices. Hence, here \nwe only describe the definition of the transition matrix $Q_{s,h}$ of the selection \nmodel.\nLet $Q^{(\\tau)}_{s,h}[i,j]$ denote the\nprobability of transition from ${i}/{2N}$ to ${j}/{2N}$ in\n$\\tau$ generations, then (see~\\cite{Ewens2012Mathematical}, Pg.~24, \nEqn.~$1.58$-$1.59$):\n\\beq\n  Q^{(1)}_{s,h}[i,j] &= \\pr\\left(\\nusp=\\frac{j}{2N} \\left\\lvert\n      \\nu_{t}=\\frac{i}{2N};s,h,N \\right .\\right)\\\\\n  &    ={2N \\choose j}\n  \\hat{\\nu}_{t^+}^{j} (1-\\hat{\\nu}_{t^+})^{2N-j}\\label{eq:Q1}\n  \\eeq\n  \\beq\n  Q^{(\\tau)}_{s,h} &= Q^{(\\tau-1)}_{s,h}Q^{(1)}_{s,h}\\label{eq:Qt}  \n\\eeq\nThe maximum likelihood estimates are given by\n\\beq\n\\hs,\\hh = \\underset{s,h}{\\arg \\max} \\prod_{r=1}^R \\Lc(s,h|\\bm{x}^{(r)},\\hN) \n\\label{eq:mlesh}\n\\eeq\n\nUsing grid search, we first estimate $N$ (Eq.~\\ref{eq:mlen}), and\nsubsequently, we estimate parameters $s,h$ \n(Eq.~\\ref{eq:mlesh},~\\ref{fig:slikes}). By\nbroadcasting and vectorizing the grid search operations across all\nvariants, the genome scan on millions of polymorphisms can be done in\nsignificantly less time than by iterating a numerical optimization\nroutine for each variant (see Results and \\ref{fig:runTime}).\n\\subsection{Empirical Likelihood Ratio Statistics}\nThe  likelihood\nratio statistic for testing directional selection, to be computed for each variant, \nis given by\n\\beq\n\tH &= -2 \\log \n\t\\left(\\frac{\\Lc(0,0.5|\\{\\bm{x}\\}_R,\\hN)}{\\Lc(\\bar{s},0.5|\\{\\bm{x}\\}_R,\\hN)}\\right),\\\\\n\t\\label{eq:ELRS}\n\\eeq\nwhere $\\bar{s} = \\underset{s}{\\arg \\max} \\prod_{r=1}^R \n \\Lc(s,0.5|\\bm{x}^{(r)},\\hN)$. Similarly, we can define a test statistic for testing \n whether selection is dominant by\n\\beq\n D &= -2 \\log \n \\left(\\frac{\\Lc(\\bar{s},0.5|\\{\\bm{x}\\}_R,\\hN)}{\\Lc(\\hs,\\hh|\\{\\bm{x}\\}_R,\\hN)}\\right).\n \\eeq\n\n\n\n While extending the single-locus WF model to multiple linked loci\n can improve the power of the model~\\cite{Terhorst2015Multi}, it is\n computationally and statistically expensive to compute the exact\n likelihood. In addition, computing the linked-loci joint likelihood requires  \n haplotype-resolved data, which pool-seq\n does not provide. Here, similar to Nielsen~\\emph{et\n   al}~\\cite{nielsen2005genomic}, we calculate a \\emph{composite likelihood\n ratio} score for a genomic region.\n\\begin{equation}\n\\Hc = \\frac{1}{|L|}\\sum_{\\ell \\in L} H_\\ell.\n\\label{eq:pihmm}\n\\end{equation}\nwhere $L$ is a collection of segregating sites and $H_\\ell$ is the\nlikelihood ratio score for each variant $\\ell$ in $L$.  
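Given per-variant scores, computing $\\Hc$ over sliding windows is a one-liner; a minimal Julia sketch (ours; it treats $L$ as a window of {\\tt winlen} consecutive variants whose scores are stored in a vector {\\tt H}) is\n\\begin{verbatim}\n   # Composite score over sliding windows of winlen consecutive\n   # variants, given per-variant likelihood-ratio scores H.\n   composite_scores(H, winlen) =\n      [sum(H[i:i+winlen-1]) / winlen for i in 1:length(H)-winlen+1]\n\\end{verbatim}\n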
The\noptimal value of the hyper-parameter $L$ depends upon a number of\nfactors, including the initial frequency of the favored allele,\nrecombination rates, linkage of the favored allele to neighboring\nvariants, population size, coverage, and time since the onset of\nselection (duration of the experiment). In~\\ref{sec:winSize}, we\nprovide a heuristic to compute a reasonable value of $L$, based on\nexperimental data. \n\nWe work with a normalized value of $\\Hc$, given by\n\\begin{equation} \\Hc_i^*=\\frac{\\Hc_i-\\mu_\\Cc}{\\sigma_\\Cc},\n\\hspace{0.5in} \\forall i \\in \\Cc,\n\\end{equation} \nwhere $\\mu_\\Cc$ and $\\sigma_\\Cc$ are the mean and standard deviation\nof $\\Hc$ values in a large region $\\Cc$. We found different\nchromosomes to have different distributions of $\\Hc_i$ values, and\ntherefore decided to use single chromosomes as $\\Cc$.\n\\subsection{Hypothesis Testing}\n\\paragraph{Single-Locus tests.}\nUnder neutrality, log-likelihood ratios can be approximated by a $\\Xc^2$\ndistribution~\\cite{williams2001weighing}, and $p$-values can be\ncomputed directly. However, Feder \\emph{et\n  al.}~\\cite{feder2014Identifying} showed that when the number of\nindependent samples (replicates) is small, $\\Xc^2$ is a crude\napproximation to the true null distribution and results in more false\npositives.  Following their suggestion, we first compute the empirical\nnull distribution using simulations with the estimated population size\n(see~\\ref{proc:arya}). The empirical null distribution of the statistic\n$H$ is used to compute $p$-values as the fraction of null values that\nexceed the test score.  Finally, we use Storey and Tibshirani's\nmethod~\\cite{storey2003statistical} to control the False Discovery\nRate in multiple testing.\n\n\n\\paragraph{Composite likelihood tests.}\n\nSimilar to single-locus tests, we compute the null distribution of the\n$\\Hc^*$ statistic using whole-genome simulations with the estimated\npopulation size, and subsequently compute the FDR. The simulations for\ngenerating the null distribution of $\\Hc^*$ are described next. \n\n\\subsection{Simulations}\\label{sec:sims}\nWe use the same simulation procedure for two purposes. First, we use\nit to test the power of \\comale\\ against other methods in small\ngenomic windows. Second, we use the simulations to generate the\ndistribution of null values for the statistic to compute empirical\n$p$-values. We mainly chose parameters that are relevant to \\dmel\nexperimental evolution~\\cite{kofler2013guide}. See also \\ref{fig:1}-A\nfor illustration. \n\\begin{enumerate}\n\\item {\\bf Creating initial founder line haplotypes.} Using\n  \\texttt{msms}~\\cite{ewing2010msms}, we created neutral populations\n  for $F$ founding haplotypes with command \\texttt{\\$./msms <F> 1 -t\n    <2$\\mu WN_o$> -r <$2rWN_o$> <W>}, where $F=200$ is the number of founder\n  lines, $N_o=10^6$ is the effective founder population size,\n  $r=2\\times10^{-8}$ is the recombination rate, and $\\mu=2\\times 10^{-9}$ is the\n  mutation rate. The window size $W$ is used to compute $\\theta=2\\mu\n  N_oW$ and $\\rho=2N_orW$. We chose $W=50$Kbp for simulating\n  individual windows for performance evaluations, and $W=20$Mbp for\n  simulating \\dmel chromosomes for $p$-value computations.\n  \n\\item{\\bf Creating initial diploid population.} An initial set of\n  $F=200$ haplotypes was created from step I, and duplicated to create\n  $F$ homozygous diploid individuals to simulate generation of inbred lines. 
$N$ diploid individuals were generated by sampling with\n  replacement from the $F$ individuals.\n\n\\item{\\bf Forward Simulation.} We used forward simulations for\n  evolving populations under selection. We also consider selection\n  regimes in which the favored allele is chosen from standing variation\n  (not \\emph{de novo} mutations). Given the initial diploid population,\n  the position of the site under selection, the selection strength $s$, the number\n  of replicates $R=3$, the recombination rate $r=2\\times10^{-8}$ and\n  the sampling times $\\Tc=\\{0,10,20,30,40,50\\}$,\n  \\texttt{simuPop}~\\cite{peng2005simupop} was used to perform forward\n  simulation and compute allele frequencies for all of the $R$\n  replicates.  For hard sweep (respectively, soft sweep) simulations\n  we randomly chose a site with initial frequency of $\\nu_0=0.005$\n  (respectively, $\\nu_0=0.1$) to be the favored allele. For generating\n  the null distribution with drift for $p$-value computations, we used\n  this procedure with $s=0$.\n\n\\item{\\bf Sequencing Simulation.} Given allele frequency trajectories,\n  we sampled the depth of each site in each replicate identically and\n  independently from Poisson($\\lambda$), where $\\lambda \\in\n  \\{30,100,300\\}$ is the coverage for the experiment. Once the depth $d$\n  is drawn for a site with frequency $\\nu$, the number of reads $c$\n  carrying the derived allele is sampled according to\n  Binomial$(d,\\nu)$. For experiments with finite depth the tuple\n  $\\langle c,d\\rangle$ is the input data for each site.\n\\end{enumerate}\n", "meta": {"hexsha": "2b3c25fffafbe41f38c04b2e94270dcddfbb9117", "size": 14533, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/methods.tex", "max_stars_repo_name": "airanmehr/timeseries_paper", "max_stars_repo_head_hexsha": "9efc1c849883219fcf0236f64357092159c53140", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manuscript/methods.tex", "max_issues_repo_name": "airanmehr/timeseries_paper", "max_issues_repo_head_hexsha": "9efc1c849883219fcf0236f64357092159c53140", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuscript/methods.tex", "max_forks_repo_name": "airanmehr/timeseries_paper", "max_forks_repo_head_hexsha": "9efc1c849883219fcf0236f64357092159c53140", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.1850649351, "max_line_length": 93, "alphanum_fraction": 0.7327461639, "num_tokens": 4448, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8902942173896132, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5861852993455268}}
{"text": "\n    \\documentclass{article}\n    \\usepackage{amsfonts}\n    \\usepackage{amsmath,multicol,eso-pic}\n    \\begin{document}\n    \\title{Algebra 101 exam 1} \n \\date{\\vspace{-5ex}} \n \\maketitle\n\n    \\section{Linear equations}\n    Solve the following equations for the specified variable.\n    \\begin{multicols}{2}\n    \\begin{enumerate}\n    \\item Solve for $e$ : $J + e n = N e -11$\n\\item Solve for $V$ : $- 3 V + d = 6 V + z$\n\\item Solve for $f$ : $R f + R = 17 f -9$\n\\item Solve for $S$ : $- 14 S + 20 = - 17 S + k$\n\\item Solve for $h$ : $5 h + n = 25 h + r$\n\\item Solve for $C$ : $C M + v = - 24 C + K$\n\\item Solve for $r$ : $10 r + 17 = - 11 r + 2$\n\\item Solve for $U$ : $3 U -24 = - 12 U + 4$\n\\item Solve for $W$ : $W Y -18 = 10 W -24$\n\\item Solve for $v$ : $J + 6 v = - 2 v + 14$\n\\item Solve for $J$ : $J Q -20 = J p + k$\n\\item Solve for $p$ : $R p + 13 = p z -17$\n\\item Solve for $B$ : $B c + x = 2 B + 12$\n\\item Solve for $a$ : $R + 10 a = N a + h$\n\\item Solve for $g$ : $K - 8 g = d + g t$\n\\item Solve for $A$ : $- 2 A + 8 = - 7 A + 19$\n\\item Solve for $d$ : $8 d + 16 = L + 22 d$\n\\item Solve for $w$ : $- 10 w + 7 = 10 w + 6$\n\\item Solve for $m$ : $- 12 m + 19 = V m -25$\n\\item Solve for $Z$ : $V - 20 Z = Z d + g$\n    \\end{enumerate}\n    \\end{multicols}\n    \n\n    \\section{Quadratic equations}\n    Solve the following quadratic equations.\n    \\begin{multicols}{2}\n    \\begin{enumerate}\n    \\item $17 x^{2} = - 14 x^{2} + 12$\n\\item $20 x^{2} - 10 x = 5 x^{2} + 5 x$\n\\item $3 x^{2} + 22 x = 19 x -2$\n\\item $2 x^{2} - 26 x -19 = 0$\n\\item $- 17 x^{2} + 13 x = 0$\n\\item $18 x^{2} + 3 x = - 18 x^{2} + 24 x -12$\n\\item $2 x^{2} + 15 = 8 x$\n\\item $- 24 x^{2} = - 17 x + 16$\n\\item $6 x^{2} - 7 x + 15 = - 12 x^{2}$\n\\item $- 16 x^{2} = 0$\n\\item $- 12 x^{2} + 15 x + 14 = - 21 x$\n\\item $- 15 x^{2} - 9 x = 0$\n\\item $- 8 x^{2} - 25 x -6 = - 23 x$\n\\item $7 x^{2} - 24 x -11 = - 7 x$\n\\item $- 26 x^{2} + 7 x = 20$\n\\item $- 2 x^{2} + 4 x -20 = 4 x^{2} + 3$\n\\item $- 26 x^{2} - 24 x = -7$\n\\item $7 x^{2} + 20 x + 10 = - 23 x^{2}$\n\\item $14 x^{2} + 20 x + 10 = -18$\n\\item $24 x^{2} + 23 x = 9$\n    \\end{enumerate}\n    \\end{multicols}\n    \n\n    \\end{document}\n    ", "meta": {"hexsha": "bfc5281d7b7aa8c22d8a8e480859c8b63f8e013d", "size": 2146, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algebra1.tex", "max_stars_repo_name": "jcstr/examgen", "max_stars_repo_head_hexsha": "8e72e72ee7aca3647efeaa2569b81226dd0271c4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2016-05-16T08:52:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-23T11:51:57.000Z", "max_issues_repo_path": "algebra1.tex", "max_issues_repo_name": "silky/examgen", "max_issues_repo_head_hexsha": "cefdc7114f3705481e77f0a46a2bef8aac4379b5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-11-15T08:47:58.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-30T15:19:06.000Z", "max_forks_repo_path": "algebra1.tex", "max_forks_repo_name": "silky/examgen", "max_forks_repo_head_hexsha": "cefdc7114f3705481e77f0a46a2bef8aac4379b5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2015-02-12T06:02:27.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-06T04:15:50.000Z", "avg_line_length": 32.0298507463, "max_line_length": 61, 
"alphanum_fraction": 0.491612302, "num_tokens": 998, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.7090191276365463, "lm_q1q2_score": 0.586154475038833}}
{"text": "% !TEX root = apxthy.tex\n\n\n\\section{Algebraic Polynomials}\n%\n\\label{sec:poly}\n% \nOur second major topic concerns approximation of functions defined on an\ninterval $f : [-1, 1] \\to \\R$, without loss of generality. But contrary to\n\\S~\\ref{sec:trig} we no longer assume periodicity. Instead we will approximate\n$f$ by algebraic polynomials,\n\\[\n      f(x) \\approx p_N \\in \\Poly_N\n\\]\nwhere $\\Poly_N$ denotes the space of degree $N$ polynomials,\n\\[\n   \\Poly_N := \\bg\\{ \\sum_{n = 0}^N c_n x^n \\bsep c_n \\in \\R \\bg\\}.\n\\]\nNote in particular that in the terms of \"simplicity\" these are indeed the \nsimplest functions to evaluate numerically in that they only require addition \nand multiplication operations. \n\nIn terms of a basic convergence result we have the following initial \nproposition, which we will not prove now, but it will follow from our \nlater work.\n\n\\begin{proposition}[Weierstrass Approximation Theorem] \\label{th:poly:Weierstrass}\n   $\\bigcup_{N \\in \\N} \\Poly_N$ is dense in $C([-1,1])$ and by extension also \n   in $L^p(-1,1)$ for all $p \\in [1, \\infty)$.\n\\end{proposition}\n\n\\bigskip \n\nIndeed, as we have argued before, convergence in itself of {\\em some} sequence\nof approximations  is rarely useful, but we require (i) rates and (ii) explicit\nconstructions. Much of this chapter is therefore devoted to interpolation.\n\nIt is a standard fact (and easy to prove) that for any $N+1$ distinct points\n$x_0, \\dots, x_N \\in \\R$ and values $f_0, \\dots, f_N$ there exists exactly one\npolynomial $p_N \\in \\Poly_N$ interpolating those values, i.e., \n\\[\n   p_N(x_j) = f_j, \\qquad j = 0, \\dots, N.\n\\]\n(Indeed, the same is even true for $x_j \\in \\C$.) These equations form \na linear system for the coefficients $c_n$, which can be solved to obtain \nthe interpolation polynomial, which in turn can be easily readily \nnumerically. \n\nA key question is how to choose the interpolation points $x_j$? It may seem\nintuitive to take equispaced nodes, $x_j = -1 + 2j/N$.  We start this section by\nexploring precisely this approach to approximate some smooth functions on\n$[-1,1]$; see \\nbpoly for some motivating examples. In this Julia notebook we\nclearly observe that this yields a divergent sequence of polynomials, but by\nexploring also other kinds of fits we also see that this does not preclude the\npossibility of computing a (very) good approximation. We therefore focus\ninitially by deriving a ``good'' set of interpolation nodes. The same idea will\nalso naturally lead to the Chebyshev polynomials.\n\n\n\\subsection{Chebyshev Points, Chebyshev Polynomials and Chebyshev Series}\n%\nWe can motivate the idea of the Chebyshev points by mapping the polynomial\napproximation problem to the trigonometric approximation problem:\n\nLet $f\\in C([-1,1])$, then let $g \\in C(\\TT)$ be defined by\n\\[\n   g(\\theta) = f(\\cos\\theta).\n\\]\nNote that $g$ ``traverses'' $f$ twice!\n\nWe will later see that $g$ inherits the regularity of $f$ even across domain\nboundaries; for now let us understand the consequence of this observation. 
We\nknow from \\S~\\ref{sec:trig} that equispaced interpolation of $g$ yields an\nexcellent trigonometric interpolant, i.e., we choose $\\theta_j = -\\pi + 2\\pi\nj/N$ and we choose coefficients $\\hat{g}_k$ such that\n\\[\n   t_N(\\theta_j) = \\sum_{-N}^N \\hat{g}_k e^{ik \\theta_j} = g(\\theta_j)\n\\]\n%\nWe may ask to interpolate $f$ at the analogous points, $x_j = \\cos(\\theta_j)$\nbut since $g$ contains ``two copies'' we only take half of the nodes.\nThis gives the Chebyshev nodes \n%\n\\begin{equation} \\label{eq:poly:chebnodes}\n   x_j := \\cos\\b( \\pi j / N \\b) \\qquad j = 0, \\dots, N.\n\\end{equation}\n\nWe can readily test our hypothesis that these yield much better approximations;\nsee again \\nbpoly. Thus, for future reference we define the Chebyshev\ninterpolant $I_N f$ to be the unique function $I_N f \\in \\Poly_N$ such that\n\\[\n   I_N f(x_j) = f(x_j) \\qquad \\for j = 0, \\dots, N,\n\\]\nwhere $x_j$ are the Chebyshev nodes \\eqref{eq:poly:chebnodes}.\n\n\nNext, we ask what the analogue of the Fourier series is. We write\n\\[\n   g(\\theta) = \\sum_{k \\in \\Z} \\hat{g}_k e^{ik\\theta},\n\\]\nthen using that $g$ is real and $g(-\\theta)=g(\\theta)$,\n\\[\n   g(\\theta) = \\hat{g}_0 + 2 \\sum_{k = 1}^\\infty \\hat{g}_k \\cos(k\\theta)\n\\]\nIt is therefore natural to define the {\\em Chebyshev polynomials}\n%\n\\begin{equation} \\label{eq:poly:defn_Tk}\n   T_k(\\cos\\theta) = \\cos(k\\theta), \\qquad k \\in \\N := \\{0,1,2,\\dots\\}.\n\\end{equation}\n%\nA wide-ranging consequence of this definition is that\n\\[\n      |T_k(x)| \\leq 1 \\qquad \\forall k.\n\\]\n\n\\begin{lemma} \\label{th:poly:chebpolys}\n   The functions $T_k : [-1,1] \\to \\R$ are indeed polynomials and\n   satisfy the recursion\n   \\begin{equation} \\label{eq:poly:chebrecursion}\n      T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x),\n   \\end{equation}\n   with initial conditions $T_0(x) = 1, T_1(x) = x$.\n\\end{lemma}\n\\begin{proof}\n   The identities $T_0(x) = 1, T_1(x) = x$ follow immediately from\n   \\eqref{eq:poly:defn_Tk}. If we can prove the recursion, then\n   the fact that $T_k$ are polynomials follows as well.\n\n   To that end, we introduce another representation,\n   \\[\n      T_k\\B( \\smfrac{z + z^{-1}}{2} \\B)\n      = T_k(\\Re z)\n      = \\Re z^k = \\frac{z^k + z^{-k}}{2},\n   \\]\n   where $|z| = 1$. Then,\n   \\begin{align*}\n      & \\hspace{-1cm} T_{k+1}(\\Re z) - 2 \\Re z T_k(\\Re z) + T_{k-1}(\\Re z) \\\\\n      &= \\smfrac12 \\B(\n         z^{k+1} + z^{-k-1}  - (z+z^{-1}) (z^k+z^{-k})\n         + z^{k-1} + z^{-k+1} \\B) \\\\\n      &= \\smfrac12 \\B( z^{k+1} + z^{-k-1}\n               - z^{k+1} - z^{k-1} - z^{1-k} - z^{-1-k}\n               + z^{k-1} + z^{-k+1} \\B) \\\\\n      &=0. 
\\qedhere\n   \\end{align*}\n\\end{proof}\n\nFor future reference we define the Joukowsky map\n\\[\n   \\phi(z) = \\frac{z+z^{-1}}{2}\n\\]\nand note that it is analytic in $\\C \\setminus \\{0\\}$.\n\nWe now know that $T_k(x)$ are indeed polynomials of degree $k$ and in light of\nthe foregoing motivating discussion, we have the following result.\n\n\\begin{lemma}\n   Let $f \\in C([-1,1])$ be uniformly continuous; then there exist {\\em\n   Chebyshev coefficients} $\\tilde{f}_k \\in \\R$ such that the {\\em Chebyshev\n   series}\n   \\begin{equation} \\label{eq:poly:chebseries}\n      f(x) = \\sum_{k = 0}^\\infty \\tilde{f}_k T_k(x)\n   \\end{equation}\n   is absolutely and uniformly convergent.\n\n   The Chebyshev coefficients are given by the following equivalent formulas,\n   \\begin{align*}\n      \\tilde{f}_k\n      &=  \\frac{2}{\\pi} \\int_{-1}^1 \\frac{f(x) T_k(x)}{\\sqrt{1-x^2}} \\,dx \\\\\n      &=  \\frac{1}{2\\pi i} \\oint_{\\SS} \\,\\,\\b(z^{-1+k} + z^{-1-k}\\b) f(\\phi(z))\n                  \\, dz \\\\\n      &= \\frac{1}{\\pi i} \\oint_{\\SS} \\,\\, z^{-1+k} f(\\phi(z)) \\, dz \\\\\n      &= \\frac{1}{\\pi i} \\oint_{\\SS} \\,\\, z^{-1-k} f(\\phi(z)) \\, dz.\n   \\end{align*}\n   For $k = 0$ a factor $1/2$ must be applied.\n\\end{lemma}\n\\begin{proof}\n   If $f \\in C([-1,1])$ with modulus of continuity $\\omega$, then $g \\in C(\\TT)$\n   also has a modulus of continuity and hence the Fourier series converges\n   uniformly and equivalently, the Chebyshev series does as well.\n\n   The expressions for $\\tilde{f}_k$ are simply transplanting the Fourier\n   coefficients $\\hat{g}_k$ to Chebyshev coefficients $\\tilde{f}_k$.\n\\end{proof}\n\nIn analogy with the truncation of the Fourier series $\\Pi_N g$ (which\nis the $L^2(\\TT)$-projection or best-approximation) we define\nthe Chebyshev projection\n\\[\n   \\PCheb_N f(x) := \\sum_{k = 0}^N \\tilde{f}_k T_k(x).\n\\]\n\n\n\\subsection{Convergence rates}\n%\n\\label{sec:poly:rates}\n%\nAs we have learned in \\S~\\ref{sec:trig}, the real power of polynomials is in\nthe approximation of analytic functions, hence we begin again with this\nsetting.\n\nIntuitively, the idea is that analyticity of $f$ on $[-1,1]$ translates into\nanalyticity of the corresponding periodic function $g(\\theta) = f(\\cos\\theta)$.\nExponential decay of the Fourier coefficients $\\hat{g}_k$ then translates into\nexponential decay of the Chebyshev coefficients  $\\tilde{f}_k$. But we can prove\nthis exponential decay directly with a relatively straightforward variation of\nthe argument we used in \\S~\\ref{sec:trip:pw}, which nicely highlights the\nanalogies.\n\n\nWe begin by defining\n\\[\n   F(z) := f(\\Re z) = f\\b( \\smfrac12(z+z^{-1}) \\b)\n         = f(\\phi(z)) \\qquad \\for z \\in \\SS := \\{|z|=1\\}.\n\\]\n%\nwhere $\\phi(z) = \\smfrac12 (z+z^{-1})$ is the Joukowsky map from above. $\\phi$ is\nclearly analytic in $\\C \\setminus \\{0\\}$. Thus, if $f$ is analytic on $[-1,1]$\nthen $F$ must be analytic on $\\SS$. Next, we note that analyticity of $g(\\theta)$\non the strip $\\Omega_\\alpha$ is equivalent to analyticity of $F$ on the annulus\n%\n\\[\n   \\SS_\\rho := \\{ z \\in \\C \\sep \\rho^{-1} \\leq |z| \\leq \\rho \\},\n\\]\n%\nwith $\\rho = 1+\\alpha$. 
Let the corresponding {\\em Bernstein ellipse} be the\nimage of $\\SS_\\rho$ under the Joukowsky map,\n%\n\\[\n   E_\\rho := \\phi(\\SS_\\rho),\n\\]\n%\nthen analyticity of $f$ in $E_\\rho$ implies analyticity of $F$ in $\\SS_\\rho$.\n\nFinally, we recall from the derivation of the Chebyshev polynomials $T_k(x)$\nthat they can also be written as\n\\[\n   \\smfrac12 (z^k + z^{-k}) = T_k(\\phi(z)).\n\\]\n\nAfter these preparations, we can prove the following result.\n\n\n\\begin{theorem}[Decay of Chebyshev coefficients]\n   Let $\\rho > 1$ and $f \\in A(E_\\rho)$ with  $M := \\|f\\|_{L^\\infty(E_\\rho)} <\n   \\infty$, then the Chebyshev coefficients of $f$ satisfy\n   \\[\n      |\\tilde{f}_k| \\leq 2 M \\rho^{-k}, \\qquad k \\geq 1.\n   \\]\n\\end{theorem}\n\\begin{proof}\n   We start with\n   \\[\n      \\tilde{f}_k = \\frac{1}{\\pi i} \\oint_{\\SS} z^{-1-k} F(z) \\, dz\n   \\]\n   Since $F$ is analytic on $\\SS_\\rho$ (and hence in the neighbourhood of $\\SS_\\rho$)\n   we can expand the contour to {\\it (Exercise: explain why this can be done using\n   Cauchy's integral formula and a suitable sketch!)}\n   \\[\n      \\tilde{f}_k = \\frac{1}{\\pi i} \\oint_{|z|=\\rho} z^{-1-k} F(z) \\, dz\n   \\]\n   and hence we immediately obtain\n   \\[\n      |\\tilde{f}_k| \\leq \\frac{2\\pi \\rho \\rho^{-1-k} M}{\\pi} = 2 M \\rho^{-k}.\n      \\qedhere\n   \\]\n\\end{proof}\n\nDecay of Chebyshev coefficients gives the following approximation error\nestimates.\n\n\\begin{theorem}[Chebyshev Projection and Interpolation Error]\n   %\n   \\label{th:poly:err_analytic}\n   %\n   Let $\\rho > 1$ and $f \\in A(E_\\rho)$ with $M := \\|f\\|_{L^\\infty(E_\\rho)} <\n   \\infty$, then\n   \\begin{align}\n      \\label{eq:poly:projerror}\n      \\| f - \\PCheb_N f \\|_{L^\\infty(-1,1)} &\\leq \\frac{2M \\rho^{-N}}{\\rho-1}, \\\\\n      \\label{eq:poly:interperror}\n      \\| f - I_N f \\|_{L^\\infty(-1,1)} &\\leq C M \\log N \\rho^{-N},\n   \\end{align}\n   where $C$ is a generic constant.\n\\end{theorem}\n\\begin{proof}\n   For the proof of \\eqref{eq:poly:projerror} we use the fact that\n   $\\|T_k\\|_\\infty \\leq 1$ to estimate\n   \\begin{align*}\n      \\| f - \\PCheb_N f \\|_\\infty\n      &\\leq\n      \\sum_{k = N+1}^\\infty |\\tilde{f}_k|  \\\\\n      &\\leq\n      2M \\sum_{k = N+1}^\\infty \\rho^{-k} \\\\\n      &=\n      \\frac{2M \\rho^{-N}}{\\rho-1}.\n   \\end{align*}\n   The estimate \\eqref{eq:poly:interperror} follows from the bound on the\n   Lebesgue constant\n   \\[\n      \\| I_N \\|_{L(L^\\infty)} \\leq C \\log N,\n   \\]\n   which follows from the analogous bound for trigonometric interpolation\n   given in Theorem~\\ref{th:trig:lebesgue}.\n\n   (For a sharp bound, it is in fact known that $\\Lambda_N \\leq \\frac{2}{\\pi}\n   \\log(N+1) + 1$.)\n\\end{proof}\n\n\\begin{remark}\n   One can in fact prove that\n   \\[\n      \\| f - I_N f \\|_{L^\\infty(-1,1)} \\leq \\frac{4M \\rho^{-N}}{\\rho-1},\n   \\]\n   using an aliasing argument; see \\cite[Thm. 
8.2]{Trefethen2013-rg},\n   somewhat similar to the argument we used for our convergence estimate of\n   the trapezoidal rule in Exercise~\\ref{exr:trig:trapezoidal rule}.\n\\end{remark}\n\n\\begin{example}[Fermi-Dirac Function]\n   %\n   \\label{exa:poly:fermi-dirac}\n   %\n   Consider the Fermi-Dirac function\n   \\begin{equation}\n     f_\\beta(x) = \\frac{1}{1 + e^{\\beta x}},\n   \\end{equation}\n   where $\\beta > 0$.\n \n   {\\it REMARK: The Fermi--Dirac function describes the distribution of particles\n   over energy states in systems consisting of many identical particles that obey\n   the Pauli exclusion principle, e.g., electrons. A broad range of important\n   algorithms in computational physics are fundamentally about approximating the\n   Fermi--Dirac function. The parameter $\\beta$ is inversely proportional to\n   temperature (that is, the Fermi temperature).}\n \n   Extending $f_\\beta$ to the complex plane simply involves replacing $x$ with\n   $z$, i.e.,\n   \\[\n     f_\\beta(z) = \\frac{1}{1 + e^{\\beta z}},\n   \\]\n   which is well-defined {\\em except at the poles}\n   \\[\n       z_j = i \\, \\frac{(2j+1)\\pi}{\\beta}, \\qquad j \\in \\Z,\n   \\]\n   the nearest of which are $\\pm i \\frac{\\pi}{\\beta}$.\n\n   In Exercise~\\ref{exr:poly:ellipse} we show that the semi-minor axis of the \n   Bernstein ellipse $E_\\rho$ is $\\frac12 (\\rho-\\rho^{-1})$, hence the largest \n   $\\rho$ for which ${\\rm int}E_\\rho$ does not intersect any singularity is \n   given by \n   \\[ \\frac12 (\\rho-\\rho^{-1}) = \\frac{\\pi}{\\beta}. \\]\n   Solving this quadratic equation for $\\rho$ yields one positive root \n   \\[ \\rho = \\smfrac{\\pi}{\\beta}+\\sqrt{1 + \\smfrac{\\pi^2}{\\beta^2}}\n   \\]\n   Of particular interest is the low temperature regime $\\beta \\to \\infty$ \n   (recall that $\\beta \\propto$ inverse temperature), for which we obtain \n   \\[\n      \\rho \\sim 1 + \\smfrac{\\pi}{\\beta}.\n   \\]\n   \n   In this regime we therefore expect an approximation rate close to \n   \\[\n      \\| f_\\beta - I_N f_\\beta \\|_{\\infty} \n      \\lesssim \\beta \\b(1 + \\smfrac{\\pi}{\\beta}\\b)^{-N}\n      \\sim \\beta \\exp\\b( - \\pi \\beta^{-1} N).\n   \\] \n   (Why is this not a rigorous and in fact likely false bound? You can get \n   a rigorous reformulation from the foregoing theorems.)\n \\end{example}\n\n\nFor convergence rates for $C^{j,\\sigma}([-1,1])$ and similar functions, we\nwant to adapt the Jackson theorems. We could again ``transplant'' the argument\nfrom the Fourier to the Chebyshev setting, but it will be more convenient\nthis time to simply apply the Fourier results directly. The details\nare carried out in Exercise~\\ref{exr:poly:convergence}. We obtain\nthe following result.\n\n\\begin{theorem}[Jackson's Theorem(s)]\n   \\label{th:poly:jackson}\n   %\n   Let $f \\in C^{(j)}([-1,1])$, $j \\geq 0$, where $f^{(j)}$ has modulus of\n   continuity $\\omega$, then\n   \\begin{equation}\n      \\label{eq:poly:jackson1}\n      \\inf_{p_N \\in \\Poly_N} \\| f - p_N \\|_{L^\\infty} \\leq\n      C N^{-j} \\omega\\b(N^{-1}\\b).\n   \\end{equation}\n\\end{theorem}\n\\begin{proof}\n   See Exercise~\\ref{exr:poly:convergence}.\n\\end{proof}\n\nWe cannot yet test these predictions numerically, since we don't yet have \na numerically stable way to evaluate the Chebyshev interpolants (or projections). \nWe will remedy this in the next two sections. \n \n\n\\subsection{Chebyshev transform}\n%\nWe have seen in \\nbpoly that naive evaluation of the Chebyshev interpolant leads
The emphasis here is on the term\n``naive''. Indeed, there exist at least two natural and numerically stable ways\nto evaluate the Chebyshev interpolant.\n\nThe first approach we consider is the Discrete Chebyshev transform (DCT), an\nimmediate analogue of the Discrete Fourier transform (DFT). As in the Fourier case, \nonce we have transformed the polynomial to the Chebyshev basis, we can \nevaluate it in $O(N)$ operations. But in the Chebyshev case, this is even more \nefficient due to the recursion formula \\eqref{eq:poly:chebrecursion}. Moreover, \nthe polynomial derivatives are straightforward to compute in this case as well.\n\n\nLet $F = (F_j) \\in \\R^{N+1}$ (we imagine that $F_j = f(x_j)$ are nodal values\nof some $f \\in C([-1,1])$ at the Chebyshev nodes), then there exists a unique\npolynomial $p_N \\in \\Poly_N$ such that $p_N(x_j) = F_j$. We write $p_N(x) =\n\\sum_{k = 0}^N \\tilde{F}_k T_k(x)$ and define\n\\begin{equation}\n   \\label{eq:poly:chebtransform}\n   \\tilde{F} := {\\rm DCT}[F] := \\b( \\tilde{F}_k \\b)_{k = 0}^N.\n\\end{equation}\nSince polynomial interpolation is linear and unique, the DCT is\nan invertible linear mapping, with inverse (obviously) given by\n\\begin{equation}\n   \\b({\\rm IDCT}[\\tilde{F}]\\b)_j = \\sum_{k = 0}^N \\tilde{F}_k T_k(x_j).\n\\end{equation}\n\n\\begin{lemma} \\label{th:poly:dct_explicit}\n   Let $\\tilde{F} = {\\rm DCT}[F]$, then\n   \\[\n      \\tilde{F}_k = \\frac{p_k}{N}\\bg\\{\n            \\smfrac12 \\b( F_0 + (-1)^k F_N \\b)\n            + \\sum_{j = 1}^{N-1} F_j T_k(x_j)\n         \\bg\\},\n   \\]\n   where $p_0 = p_N = 1$ and $p_k = 2$ for $k = 1, \\dots, N-1$.\n\\end{lemma}\n\nWe won't prove \\Cref{th:poly:dct_explicit} since we won't need this expression. \nIt is only stated here for the sake of completeness. The interested reader \nwill be able to check it by a direct computation; it is also implicitly \ncontained in the following discussion. \n\n\n{\\it A priori} the cost of evaluating the DCT and IDCT is $O(N^2)$, but the \nconnection between the Fourier and Chebyshev settings gives us an $O(N\\log N)$\nalgorithm which we now derive. Let $F = {\\rm IDCT}[\\tilde{F}]$, then writing \n\\[\n   T_k(x_j) = T_k(\\cos(j\\pi/N)) = \\cos(kj\\pi/N) \n\\]\nwe obtain \n\\begin{align}\n   \\label{eq:poly:costtransform}\n   F_j \n   &= \n   \\sum_{k = 0}^N \\tilde{F}_k \\cos(kj\\pi/N) \n   \\\\ \\notag &= \n   \\sum_{k = 0}^N \\tilde{F}_k \\smfrac12 \\b( e^{i2\\pi kj/ (2N)} + e^{-i2\\pi kj/(2N)}),\n\\end{align}\nwhich looks {\\em almost} like an IDFT on the grid $\\{-N, \\dots, N\\}$. 
We \ncan rewrite this a little more, \n\\begin{align*}\n   F_j\n   &= \n   \\tilde{F}_0 + \\sum_{k = 1}^{N-1} \\b[\\smfrac12 \\tilde{F}_k\\b] e^{i2\\pi kj/ (2N)}\n   + \\tilde{F}_N \\smfrac12 \\b( e^{i2\\pi N j/ (2N)} + e^{-i2\\pi N j/ (2N)} \\b) \n   \\\\  & \\qquad \n   + \\sum_{k = -N+1}^{-1} \\b[\\smfrac12 \\tilde{F}_{-k}\\b] e^{i2\\pi kj/ (2N)}\n   \\\\ &= \n   \\tilde{F}_0 + \\sum_{k = 1}^{N-1} \\b[\\smfrac12 \\tilde{F}_k\\b] e^{i2\\pi kj/ (2N)}\n   + \\tilde{F}_N e^{i2\\pi N j/ (2N)}\n   + \\sum_{k = N+1}^{2N-1} \\b[\\smfrac12 \\tilde{F}_{2N-k}\\b] e^{i2\\pi kj/ (2N)}\n   \\\\ &=: \n   \\sum_{k = 0}^{2N-1} \\hat{G}_k e^{i2\\pi kj/ (2N)},\n\\end{align*}\nwhere we have defined \n\\[\n   \\hat{G}_k := \\cases{\n      \\tilde{F}_k, & k = 0, \\\\ \n      \\smfrac12 \\tilde{F}_k, & k = 1, \\dots, N-1, \\\\ \n      \\tilde{F}_k, & k = N, \\\\ \n      \\smfrac12 \\tilde{F}_{2N-k}, & k = N+1, \\dots, 2N-1.\n   }\n\\] \nLet $\\hat{G}[\\tilde{F}]$ be defined by this expression, then we have shown \nthat \n\\[\n   F_j = \\b({\\rm IDCT}[\\tilde{F}]\\b)_j = \n   \\b( {\\rm IDFT}[\\hat{G}[\\tilde{F}]] \\b)_j, \\qquad j = 0, \\dots, N.\n\\]\nAfter determining $F_j$ for $j = N+1, \\dots, 2N-1$ we can then evaluate the \nDCT via the DFT.  From the expression \\eqref{eq:poly:costtransform} we \nimmediately see that \n\\begin{align*}\n   F_{j}\n   &= \n   \\sum_{k = 0}^N \\tilde{F}_k \\cos(kj\\pi/N - 2\\pi k) \n   \\\\ &= \n   \\sum_{k = 0}^N \\tilde{F}_k \\cos(k2\\pi(j-2N)/2N) \n   \\\\ &= \n   \\sum_{k = 0}^N \\tilde{F}_k \\cos(k2\\pi(2N-j)/2N)\n   \\\\ &= \n   F_{2N-j}.\n\\end{align*}\nThat is, if we define \n\\[\n   G_j := \\cases{\n      F_j, & j = 0, \\dots, N, \\\\ \n      F_{2N-j}, & j = N+1, \\dots, 2N-1\n   }\n\\]\nthen we obtain \n\\[\n   {\\rm DFT}[G] = \\hat{G},   \n\\]\nfrom which we can readily obtain $\\tilde{F}$. \n\nIn {\\tt Julia} code an $O(N\\log N)$ scaling Chebyshev transform might \nlook as follows (we normalise so that {\\tt fct} returns exactly the \ncoefficients $\\tilde{F}$ in the convention above): \n\n\\begin{verbatim}\n   using FFTW   # provides fft, ifft\n\n   \"fast Chebyshev transform\"\n   function fct(F)\n      N = length(F)-1\n      G = [ F; F[N:-1:2] ]              # reflected extension of F\n      Ghat = real.(fft(G)) / (2*N)      # Ghat = DFT[G] in our convention\n      return [Ghat[1]; 2 * Ghat[2:N]; Ghat[N+1]]\n   end \n\n   \"fast inverse Chebyshev transform\"\n   function ifct(Ftil)\n      N = length(Ftil)-1\n      Ghat = [Ftil[1]; 0.5 * Ftil[2:N]; Ftil[N+1]; 0.5*Ftil[N:-1:2]]\n      G = real.(ifft(Ghat)) * (2*N)     # undo the 1/(2N) factor in ifft\n      return G[1:N+1]\n   end\n\\end{verbatim}\n\n\n\\begin{remark}\n   The expression \\eqref{eq:poly:costtransform} is in fact another \n   well-known transform, the {\\em Discrete Cosine Transform} (one of several \n   variants). A practical implementation of the fast Chebyshev transform \n   should therefore use an efficient implementation of the fast cosine transform \n   rather than the FFT.\n   For the sake of simplicity (to avoid studying yet another transformation) \n   we did not study this transform in detail, but there is plenty of literature \n   and software available on this topic. \n\\end{remark}\n\n\n\n\\subsection{Barycentric interpolation formula}\n%\n\\label{sec:poly:bary}\n%\nThe second method we discuss is the {\\em barycentric interpolation formula}.\nAfter precomputing some ``weights'', it gives another $O(N)$ method to evaluate\nthe Chebyshev interpolant (or indeed {\\em any} polynomial interpolant) in a\nnumerically stable manner. This method entirely avoids the transformation to the\nChebyshev basis. (This section is taken almost verbatim from\n\\cite{Trefethen2013-rg}; see also \\cite[Ch. 
5]{Trefethen2013-rg} for a more\ndetailed, including historical, discussion).\n\nWe begin with the usual Lagrange formula for the nodal interpolant. \nLet $p(x_j) = f_j, j = 0, \\dots, N$ where $p \\in \\Poly_N$, then  \n\\[\n   p(x) = \\sum_{j = 0}^N f_j \\ell_j(x), \n   \\qquad \\text{where} \\quad \n   \\ell_j(x) = \\frac{ \\prod_{n \\neq j} (x - x_n)}{\\prod_{n \\neq j} (x_j-x_n)}.\n\\]\nThis formula has the downside that it costs $O(N^2)$ to evaluate $p$ at a \nsingle point $x$. \n\nBut we observe that the $\\ell_j(x)$ have many factors in common. This can be \nexploited by defining the {\\em node polynomial}\n\\[\n   \\ell(x) := \\prod_{n = 0}^N (x-x_n),\n\\]\nthen we obtain \n\\begin{equation} \\label{eq:poly:bary_weights}\n   \\ell_j(x) = \\ell(x) \\frac{\\lambda_j}{x - x_j} \n   \\qquad \\text{where}  \\qquad \n   \\lambda_j = \\frac{1}{\\prod_{n \\neq j} (x_j - x_n)}.\n\\end{equation}\nThe ``weights'' $\\lambda_j$ still cost $O(N^2)$, but they are independent of $x$\nand can therefore be precomputed (moreover, for various important sets of nodes\nthere exist fast algorithms; for Chebyshev nodes there is an explicit\nexpression, see below). Since the common factor $\\ell(x)$ does not depend on\n$j$ we can now evaluate all $\\ell_j(x), j = 0, \\dots, N$ at $O(N)$ cost and thus\nobtain the {\\em first form of the barycentric interpolation formula}, \n\\begin{equation} \\label{eq:poly:bary1}\n   p(x) = \\ell(x) \\sum_{j = 0}^N \\frac{\\lambda_j}{x - x_j} f_j.\n\\end{equation}\nOnce the weights $\\lambda_j$ have been precomputed, the cost of evaluating \n$p(x)$ becomes $O(N)$. However, \\eqref{eq:poly:bary1} has a different \nshortcoming: in floating point arithmetic it is prone to overflow or underflow.\nSpecifically, suppose that $x = -1$ and we compute $\\ell(x)$ with $x_j$ \nordered decreasingly as defined in \\eqref{eq:poly:chebnodes}; then, since \n$x_n = \\cos(n\\pi/N) \\geq \\cos(\\pi/4)$ for the first $M \\approx N/4$ nodes, \nafter approximately the first $M$ terms we have evaluated \n\\[\n   \\bg|\\prod_{n = 0}^M (x - x_n) \\bg|\n   \\geq \\b( \\smfrac32 \\b)^{M+1} \n\\]\nwhich quickly becomes very large. The issue is also reflected in the\ncoefficients $\\lambda_j$, which for Chebyshev points are $O(2^N)$ (cf.\nExercise~\\ref{exr:poly:bary}). In practice, one typically gets overflow \nbeyond 100 or so grid points. \n\nThis can be avoided with the second form of the barycentric formula: observing\nthat $\\sum_{j = 0}^N \\ell_j \\equiv 1$ we obtain \n\\[\n   1 = \\ell(x) \\sum_{j = 0}^N \\frac{\\lambda_j}{x-x_j}, \n\\]\nand hence arrive at the second form of the barycentric interpolation formula:\n\n\\begin{theorem}[Barycentric interpolation formula]\n   \\label{th:poly:bary}\n   Let $p \\in \\Poly_N$, with $p(x_j) = f_j$ at $N+1$ distinct points \n   $\\{x_j\\}$, then \n   \\[\n      p(x) = \\frac{ \n         \\sum_{j = 0}^N \\frac{\\lambda_j f_j}{x-x_j} \n      }{\n         \\sum_{j = 0}^N \\frac{\\lambda_j}{x-x_j}\n      }, \n      \\qquad \\text{where} \\qquad \n      \\lambda_j = \\frac{1}{ \\prod_{n \\neq j} (x_j-x_n)},\n   \\] \n   which at the nodes is understood as the special case $p(x_j) = f_j$.\n\\end{theorem}\n\n\\begin{theorem}[Barycentric interpolation formula in Chebyshev Points]\n   \\label{th:poly:barycheb}\n   Let $\\{x_j\\}$ be the Chebyshev points \\eqref{eq:poly:chebnodes}, then \n   the barycentric weights $\\lambda_j$ from \\Cref{th:poly:bary} \n   may be chosen as \n   \\[\n       \\lambda_j = \\cases{ \n          (-1)^j, & j = 1, \\dots, N-1, \\\\ \n          \\frac12 (-1)^j, & j = 0, N. 
}\n   \\]\n\\end{theorem}\n\\begin{proof}\n   See Exercise~\\ref{exr:poly:bary}. \n\\end{proof}\n\n\\subsubsection{Numerical stability of barycentric interpolation}\n%\n\\label{sec:poly:barystab}\n%\nSince the DFT is matrix multiplication with a unitary matrix (up to scaling), and the FFT \nan algorithm that even reduces the number of operations, it is natural to \nexpect that these algorithms are numerically stable. By contrast, this is \nnot at all obvious {\\it a priori} for the barycentric formula. We will therefore \nspend a little time discussing this. \n%\nTo simplify this discussion we will only analyse the numerical stability \nof the {\\em first} barycentric formula \\eqref{eq:poly:bary1}. Understanding \nstability of the second barycentric formula is slightly more involved; \nsee \\cite{Higham2004-fn} for the details. \n\nWe begin by explaining the standard model of floating point arithmetic. \nLet $\\otimes \\in \\{ +, -, *,  / \\}$ be one of the standard four floating point \noperations, then applying the operation $a \\otimes b$ to two floating point\nnumbers will give an error, which we express as \n\\[\n   {\\rm fl}\\b( a \\otimes b \\b) = (a\\otimes b)(1+\\delta),\n\\]\nwhere $|\\delta| \\leq \\eps$ and $\\eps$ denotes machine precision (approximately\n$10^{-16}$ in double precision). That is, floating point arithmetic controls the {\\em relative\nerror}. For more on this topic, in particular additional subtleties that we are\nsweeping under the carpet here, see \\cite{Higham2002-nk}.\n\n\nFor example, consider the evaluation of an inner product of two vectors \n${\\bf a}, {\\bf b} \\in \\R^2$, \n\\begin{align*}\n   \\fl({\\bf a} \\cdot {\\bf b})\n   &= \\fl\\b( \\fl(a_1 b_1) + \\fl(a_2b_2)\\b) \\\\ \n   &= \\fl\\b( a_1 b_1 (1+\\delta_1) + a_2b_2 (1+\\delta_2)) \\\\ \n   &= \\b( a_1 b_1 (1+\\delta_1) + a_2b_2 (1+\\delta_2))(1+\\delta_3) \\\\ \n   &= a_1 b_1 (1+\\delta_1)(1+\\delta_3)\n      + a_2b_2 (1+\\delta_2)(1+\\delta_3).\n\\end{align*}\nUpon setting \n\\[\n   \\tilde{a}_1 = a_1 (1+\\delta_1), \\quad \n   \\tilde{b}_1 = b_1 (1+\\delta_3), \\quad \n   \\tilde{a}_2 = a_2 (1+\\delta_2), \\quad \n   \\tilde{b}_2 = b_2 (1+\\delta_3),\n\\]\nwe obtain \n\\[\n   \\fl({\\bf a} \\cdot {\\bf b}) = \\tilde{\\bf a} \\cdot \\tilde{\\bf b},\n\\]\nwhere $\\|{\\bf a} - \\tilde{\\bf a}\\| = O(\\eps)$ and $\\|{\\bf b} - \\tilde{\\bf b}\\| =\nO(\\eps)$. This is called {\\em backward stability}: the numerically evaluated\nquantity is the exact quantity for an exact computation with perturbed data. \n\n% As a second example we can consider \n% \\begin{align*}\n%    \\fl\\bg( \\frac{f(x+h) - f(x)}{h} \\bg)\n%    &= \\frac{(f(x+h)(1+\\delta_1) - f(x))(1+\\delta_2)}{h} (1+\\delta_3)  \\\\ \n%    &= \\frac{f(x+h) - f(x)}{h}(1+\\delta_3) + \n% \\end{align*}\n\nWe can now turn to the first barycentric formula. First we consider the \nevaluation of the node polynomial $\\ell(x)$, \n\\begin{align}\n   \\fl\\B( \\prod_{n=0}^N (x-x_n) \\B) \n   &= \n   \\fl\\bg( \\fl\\bg( \\prod_{n=0}^{N-1} (x-x_n)\\bg) * \\fl(x-x_N) \\bg)  \\\\ \n   &= \n   \\fl\\bg( \\prod_{n=0}^{N-1} (x-x_n)\\bg) * (x-x_N) (1+\\delta_{1}) (1+\\delta_{2}),\n\\end{align}\nand by induction \n\\begin{align}\n   \\fl\\bg( \\prod_{n=0}^N (x-x_n) \\bg) = \n   \\ell(x) \\, \\prod_{m = 1}^{2N+1} (1+\\delta_m). 
\n\\end{align}\nThe argument for $\\lambda_j$ is of course analogous, hence we obtain\nwith a little extra work:\n\n\\begin{proposition} \\label{th:poly:barystab}\n   Let \n   \\[\n      \\tilde{p}_N(x) := \\fl\\bg( \\ell(x) \\sum_{j = 0}^N \\frac{\\lambda_j f_j}{x - x_j} \\bg)\n   \\]\n   be the numerically evaluated polynomial in the standard model of \n   floating point arithmetic, then \n   \\[\n      \\tilde{p}_N(x) = \n      \\ell(x) \\sum_{j = 0}^N \\frac{\\lambda_j f_j}{x - x_j}\n      \\prod_{m = 1}^{5N+5} (1+\\delta_{jm}).\n   \\]\n\\end{proposition}\n\\begin{proof}\n   This is a straightforward continuation of the calculations above. \n\\end{proof}\n\nThe key point of \\Cref{th:poly:barystab} is that this is a {\\em backward\nstability} result, i.e., let $\\tilde{f}_j = f_j\\prod_{m = 1}^{5N+5}\n(1+\\delta_{jm})$, then $\\tilde{p}_N$ interpolates the values $\\tilde{f}_j$. \nIn particular, the error in the floating point polynomial $\\tilde{p}_N(x)$ \nis no larger than if we had small errors in the nodal values $f_j$, which \nwe will normally have anyhow. \n\nFinally, for the second barycentric formula, the numerical stability result is\nweaker, but one can still show that for interpolation nodes with moderate\nLebesgue constant, and reasonable functions $f$ that we are interpolating, the\nnumerical stability is of no concern; see \\cite{Higham2004-fn} for more details.\n\n\n\n\\subsection{Applications}\n\nThe following applications of the theory in this chapter will be \ncovered in \\nbpoly.\n\n\\begin{itemize}\n   \\item Evaluating special functions\n   \\item Approximating a matrix function\n   \\item Spectral methods for BVPs; see also \\cite[Sec. 21]{Trefethen2013-rg}\n\\end{itemize}\n\n\\noindent Further applications that could be explored in self-directed reading: \n\\begin{itemize}\n   \\item Chebyshev filtering \n   \\item Conjugate gradients and other Krylov methods \n   \\item Quadrature: \\cite{Trefethen2013-rg} \n   \\item Richardson extrapolation: \\cite{Trefethen2013-rg}, p. 258\n   \\item \\dots \n\\end{itemize}\n\n\\subsection{Exercises}\n%\n% IDEAS FOR COMPUTATIONAL EXERCISES: \n% - test instability of Vandermonde interpolation \n% - super-exponential convergence for entire functions ... \n%   e^x should be easy to analyse too! Also e^{-x^2}!\n%   http://www.chebfun.org/examples/approx/EntireBound.html\n\n\n% \\begin{exercise}\n%    \\label{exr:poly:vandermonde}\n%    The seemingly ``canonical'' approach to constructing a polynomial interpolant\n%    in the monomial basis is via the linear system \n%    \\[\n%       c_0 + c_1 x_j + c_2 x_j^2 + \\cdots + c_N x_j^N = f_j.\n%    \\]\n%    The system matrix $V = (x_j^n)_{j,n=0}^N$ is called a {\\em Vandermonde matrix}. \n   \n%    1. Suppose we take $x_j$ to be equispaced points on the complex unit circle, \n%    i.e., $x_j = e^{i2\\pi j/N}$, $j = 0, \\dots, N-1$ (i.e., $N \\to N-1$!, then \n%    show that $V$ is the discrete Fourier transform operator, in particular it \n%    is unitary (up to rescaling) and thus has condition number $\\kappa(V) = 1$.\n\n%    2. \n\n%    Show that \n\n   \n%    \\alert{condition number of the Vandermonde matrix}.\n% \\end{exercise}\n\n\\begin{exercise}[Interpolation: Existence and Uniqueness] \n   \\label{exr:poly:interpunique}\n   Prove that for any collection of nodes $z_0, \\dots, z_N \\subset \\C$ with $z_i\n   \\neq z_j$ for $i \\neq j$, and nodal values $f_j$, there exists a unique\n   interpolant $p \\in \\Poly_N$ such that $p(z_j) = f_j$. 
\n\\end{exercise}\n\n\\begin{exercise}[Runge Phenomenon]\n   \\label{exr:poly:Runge Phenomenon}\n   %\n   For a partial explanation of the Runge phenomenon (cf.\\ \\nbpoly) consider \n   the following steps: \n   \\begin{enumerate} \\ilist \n      \\item Suppose $f \\in C^{N+1}([-1,1])$. Prove that there exists \n      $\\xi \\in (-1,1)$ such that \n      \\[\n         f(x) - I_N f(x) =  \\frac{f^{(N+1)}(\\xi)}{(N+1)!} \n            \\ell_N(x),\n      \\]\n      where $\\ell_N(x)$ is the node polynomial for the interpolation points. \n\n      {\\it Hint: Let $e(t) = f(t)-I_N f(t)$ and show that $y(t) = e(t) - e(x)\n      \\ell_N(t) / \\ell_N(x)$ has $N+2$ roots. What does this imply about the roots\n      of $y^{(N+1)}$?}\n\n      \\item Prove that for equispaced nodes, $\\|\\ell_N\\|_\\infty \\geq \\frac14\n      (N/2)^{-N-1} (N-1)!$.\n\n      \\item For $f(x) = 1 / (1+25 x^2)$ (The Witch of Agnesi), prove that $\\|\n       f^{(N+1)} \\|_\\infty \\| \\ell_N \\|_\\infty  / (N+1)! \\to \\infty$ as $N \\to\n       \\infty$. {\\it [HINT: $(1+c^2x^2)^{-1} = \\smfrac12 \\b( (1+cix)^{-1}+(1-cix)^{-1} \\b)$]}\n\n       {\\it (Note that this does not prove divergence, but it provides a strong \n       hint as to why divergence occurs.)}\n   \\end{enumerate}\n\\end{exercise}\n\n\n\\begin{exercise}[Clenshaw's Algorithm]\n   %\n   \\label{exr:poly:clenshaw}\n   %\n   Let $p \\in \\Poly_N$, $N \\geq 1$, be given in the Chebyshev basis, with\n   coefficients $(\\tilde{f}_k)_{k = 0}^N$ and let $x \\in [-1,1]$. Show that\n   $p(x)$ can be evaluated by Clenshaw's algorithm:\n   \\begin{align*}\n      & u_{N+1} = 0, \\qquad u_N = \\tilde{f}_N; \\\\ \n      & u_n = 2 x u_{n+1} - u_{n+2} + \\tilde{f}_n, \\quad n = N-1, N-2, \\dots, 0; \\\\ \n      & p(x) = \\smfrac12 \\b( \\tilde{f}_0 + u_0 - u_2).   \n   \\end{align*}\n   What is the purpose of the Clenshaw algorithm, i.e., the potential \n   advantage over simply summing over the Chebyshev basis?\n\\end{exercise}\n\n\\begin{exercise}[Orthogonality of $T_k$]\n   Consider the weighted space \n   \\begin{align*}\n      L^2_C &:= \\b\\{ f : (-1,1) \\to \\R, \\text{ measurable, } \n            \\|f\\|_{L^2_C} < \\infty \\b\\}, \\qquad \n            \\text{where} \\\\  \n      \\|f\\|_{L^2_C}^2 &:= \\int_{-1}^1 \\frac{|f|^2}{\\sqrt{1 - x^2}} \\,dx.\n   \\end{align*}\n   Prove that $L^2_C$ is a Hilbert space and show that the Chebyshev \n   polynomials are (up to scaling) an orthonormal basis of this space. \n\n   Thus, conclude that the Chebyshev projection $\\tilde\\Pi_N$ is in fact the \n   best approximation with respect to the $\\|\\cdot\\|_{L^2_C}$-norm. 
\n\\end{exercise}\n\n\\begin{exercise}[Bernstein Ellipse] \n   \\label{exr:poly:ellipse}\n   %\n   Prove that the Bernstein Ellipse $E_\\rho$, $\\rho > 1$, is indeed an ellipse\n   with centre $z = 0$, foci $\\pm 1$, semi-major axis $\\frac12 (\\rho+\\rho^{-1})$\n   and semi-minor axis $\\frac12 (\\rho-\\rho^{-1})$.\n\\end{exercise}\n\n\n\n\\begin{exercise}[Convergence Bounds]\n   \\label{exr:poly:convergence}\n   \\begin{enumerate} \\ilist\n   \\item Complete the proof of \\eqref{eq:poly:interperror} by proving\n      \\[\n         \\| I_N \\|_{L(L^\\infty)} \\leq C \\log N,\n      \\]\n      where $I_N$ is the Chebyshev nodal interpolation operator.\n\n   \\item In preparation for the proofs of the best approximation error estimates\n      for differentiable (non-analytic) functions, prove that, if $f \\in\n      C([-1,1])$ with modulus of continuity $\\omega$, then $g(\\theta) := f(\\cos\\theta)$\n      satisfies $g \\in C(\\TT)$ and has the same modulus of continuity.\n\n   \\item Prove Theorem~\\ref{th:poly:jackson}, case $j = 0$.\n\n   \\item Let $E_N(f) := \\inf_{p \\in \\Poly_N} \\|f - p\\|_\\infty$. Prove that \n      \\[\n         E_N(f)  \\leq C N^{-1} E_{N-1}(f'),\n      \\]\n      where $C$ is independent of $N$; try also to quantify $C$.\n\n   \\item Complete the proof of Theorem~\\ref{th:poly:jackson} (general $j$).\n   Indeed, you should obtain a more precise formula.\n   \\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise} \n   \\label{exr:poly:examplefunctions}\n   %\n   \\begin{enumerate} \\ilist \n   \\item For the following functions give bounds on the rate of polynomial best\n   approximation in the max-norm, as sharp as you can manage: \n   \\begin{enumerate} \\ilist\n      \\item $f(x) = |\\sin(5 x)|$ \n      \\item $f(x) = \\sqrt{|x|}$\n      \\item $f(x) = x (1 + 1000 (x - 1/2)^2)^{-1/2}$\n      \\item $f(x) = e^{- \\cos(3x)}$\n      \\item $f(x) = x^{100}$\n      \\item $f(x) = e^{-x^2}$ \n      \\item $f(x) = {\\rm sign}(x)$\n   \\end{enumerate}\n   \\item and for the following two functions also in the $L^2$-norm:\n   \\begin{itemize}\n      \\item $f(x) = {\\rm sign}(x)$\n      \\item $f(x) = \\sqrt{|x|}$ \n   \\end{itemize}\n   \\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}[Barycentric Chebyshev Interpolation]\n   %\n   \\label{exr:poly:bary}\n   %\n   Let $x_j$ be the Chebyshev points on $[-1,1]$.\n   \\begin{enumerate} \\ilist\n   \\item In general (not only for Chebyshev points), demonstrate that the\n   barycentric weights satisfy $\\lambda_j = 1 / \\ell'(x_j)$.\n   \\item Prove that the node polynomial satisfies \n   \\[\n      \\ell(x) = 2^{-N} \\b(T_{N+1}(x) - T_{N-1}(x)\\b).\n   \\]\n   \\item Show that \n   \\[\n      T_{N+1}'(x_j) - T_{N-1}'(x_j) = \n      \\cases{ \n         2N(-1)^j, & 1 \\leq j \\leq N-1, \\\\\n         4N(-1)^j, & j = 0, N.\n      }\n   \\]\n   \\item Deduce that, if $\\lambda_j$ is given by \\eqref{eq:poly:bary_weights}, then \n   \\[\n      \\lambda_j = \\frac{2^{N-1}}{N} (-1)^j, \\qquad j = 1, \\dots, N-1,\n   \\]\n   and suitably adjusted for $j = 0, N$. Explain why we can rescale the weights\n   $\\lambda_j$ without changing the validity of the barycentric formula, and\n   hence complete the proof of Theorem~\\ref{th:poly:barycheb}.\n   \\end{enumerate}\n\n   {\\it WARNING: it turns out, this exercise needs more material than I\n   realised, namely aliasing of Chebyshev coefficients. It is still very\n   interesting so I will leave it here for now. An interested reader should\n   refer to \\cite[Sec. 
5]{Trefethen2013-rg} to derive this formula.}\n\\end{exercise}\n\n\n{\\it NOTE: The last exercise is a bit tedious; it needs to be redesigned \na bit. Maybe best leave it for now.}\n\n\\begin{exercise}[Coordinate Transformations]\n   \\label{exr:poly:coordinates}\n   %\n   The purpose of this exercise is to investigate how the choice of \n   coordinate systems can expand the range of approximable functions, as \n   well as affect the rate of convergence.\n\n   The basic idea is to consider functions $F : [a, b] \\to \\R$ and via a\n   coordinate transformation $f(x) = F(\\xi(x))$ transform them to functions $f :\n   [-1,1] \\to \\R$. This can have multiple consequences, including: (1) we can\n   represent functions on an arbitrary interval (including $\\R$); (2) we can\n   transform functions in such a way as to increase the region of analyticity and\n   thus accelerate convergence.\n   %\n   \\begin{enumerate} \\ilist\n      \\item Consider the Morse potential $F(y) = e^{-2\\alpha y} - 2 e^{-\\alpha\n      y}$, then $F(y) = f(e^{-\\alpha y})$ where $f(x) = x^2 - 2x$ is a quadratic\n      polynomial. Suppose though that this ``optimal'' coordinate transform $x =\n      e^{-\\alpha y}$ is not known. \n\n      Instead, consider the Morse coordinate transformation $\\xi^{-1}(y) = 2\n      e^{-y} - 1$ (or, alternatively, the algebraic coordinate transformation \n      $x = 2/(1+y) - 1 = \\xi^{-1}(y)$); in either case $\\xi^{-1}(0) = 1$, \n      $\\xi^{-1}(\\infty) = -1$, and we let $f(x) = F(\\xi(x))$. \n      \\begin{enumerate} \\alist \n         \\item Establish an upper bound (as sharp as you can manage) for\n          approximation by Chebyshev projection and interpolation of $f$ on\n          $[-1,1]$ in the max-norm. \n         \\item Convert this bound to an approximation result for $F(y)$ on $[0,\n          \\infty)$. \n         \\item Can you give a simpler / more direct characterisation of the\n         effective approximation space for functions on $[0, \\infty)$ that you\n         used here?\n      \\end{enumerate}\n      \n      \\item Now consider the function $F(y) = (\\eps^2 + y^2)^{-1/2}$ on \n      $[-1, 1]$. Recall the rate of convergence of Chebyshev projection and \n      Chebyshev interpolation. \n\n      Now consider a coordinate transformation \n      \\[\n         \\xi^{-1}(y) = \\frac{\\arctan(y/\\eta)}{\\arctan(1/\\eta)},\n      \\]\n      and explicitly compute its inverse. Show that $\\xi, \\xi^{-1} : [-1,1] \\to\n      [-1,1]$ are bijective. 
\n\n      \\begin{enumerate} \\alist \n         \\item For any $\\eta > 0$ establish an upper bound (as sharp as you can\n         manage) for approximation of $f(x) = F(\\xi(x))$ by Chebyshev projection\n         and Chebyshev interpolation.\n         \n         \\item Discuss which choices of $\\eta$ appear to be particularly good.\n         Visualise the effect of $\\xi$ on the function $f$ as well as on \n         the singularities in the complex plane.\n      \\end{enumerate}\n   \\end{enumerate}\n\\end{exercise}\n\n", "meta": {"hexsha": "f5921795597b95998bb96067668e1c426caf85c4", "size": 38699, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/polys.tex", "max_stars_repo_name": "cortner/MA3J8ApxThyApp", "max_stars_repo_head_hexsha": "9400c557187dbd82468df2dbd0a7da99d7f08f8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-05-22T05:11:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-03T02:47:25.000Z", "max_issues_repo_path": "tex/polys.tex", "max_issues_repo_name": "cortner/MA3J8ApxThyApp", "max_issues_repo_head_hexsha": "9400c557187dbd82468df2dbd0a7da99d7f08f8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-03T22:23:55.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T01:58:58.000Z", "max_forks_repo_path": "tex/polys.tex", "max_forks_repo_name": "cortner/ApxThyApp", "max_forks_repo_head_hexsha": "0b28c5c4370eb4d9c5a9063c2c5c1b938aa54a3d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-02T02:44:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-02T02:44:56.000Z", "avg_line_length": 37.4264990329, "max_line_length": 85, "alphanum_fraction": 0.640300783, "num_tokens": 13260, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8807970904940926, "lm_q1q2_score": 0.5860916841282422}}
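As a concrete companion to \Cref{th:poly:bary} and \Cref{th:poly:barycheb}, the following is a minimal {\tt Julia} sketch of the second barycentric formula in Chebyshev points. (The function names are our own and not part of \nbpoly; the node ordering follows \eqref{eq:poly:chebnodes}.)

\begin{verbatim}
   "Chebyshev points x_j = cos(j*pi/N), j = 0, ..., N"
   chebnodes(N) = [ cos(j*pi/N) for j = 0:N ]

   "second barycentric formula with the Chebyshev weights"
   function baryeval(x, X, F)
      N = length(F)-1
      num = 0.0; den = 0.0
      for j = 0:N
         x == X[j+1] && return F[j+1]   # special case p(x_j) = f_j
         lam = (j == 0 || j == N ? 0.5 : 1.0) * (-1)^j
         num += lam * F[j+1] / (x - X[j+1])
         den += lam / (x - X[j+1])
      end
      return num / den
   end
\end{verbatim}

For example, {\tt baryeval(0.3, chebnodes(10), f.(chebnodes(10)))} evaluates the degree-$10$ Chebyshev interpolant of {\tt f} at $x = 0.3$; after the $O(N)$ setup, each point evaluation costs only $O(N)$ operations.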
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{parskip}\n\\usepackage{svg}\n\\usepackage[utf8]{inputenc}\n\\usepackage{helvet}\n\\renewcommand{\\familydefault}{\\sfdefault}\n\\usepackage{geometry}\n\\usepackage[document]{ragged2e}\n\\geometry{letterpaper, portrait, top=1in, bottom=1in, left=1.5in, right=1.5in}\n\\usepackage{comment}\n\n\\title{MATH 153}\n\n\\begin{document}\n\\section{Sequences}\n\nA sequence is a list of numbers written in a defined order\n\n\\begin{gather*}\n    a_1, a_2, a_3, \\cdots\n\\end{gather*}\n\nSequences can also be denoted\n\\begin{gather*}\n    \\{a_n\\} \\\\\n    \\{a_n\\}^\\infty_{n=1}\n\\end{gather*}\n\nConverging sequences settle on a certain number while diverging sequences does not settle on a number.\n\nRecursive sequence: A sequence that depends on previous values in the sequence.\n\n\\section{Limit of a sequence}\n\nL'Hospital's rule: For differentiable functions $f$ and $g$, if $\\lim_{x \\to c} f(x) = \\lim_{x \\to c} g(x) = 0$ or $\\pm \\infty$, and $g'(x) \\neq 0$ for all $x$, and $\\lim_{x \\to c} \\frac{f'(x)}{g'(x)}$ exists, then\n\n\\begin{gather*}\n    \\lim_{x \\to c} \\frac{f(x)}{g(x)} = \\lim_{x \\to c} \\frac{f'(x)}{g'(x)}\n\\end{gather*}\n\nLimit laws:\n\nLet $\\{a_n\\}$ and $\\{b_n\\}$ be convergent sequences and $c$ be a constant.\n\n\\begin{gather*}\n\\lim_{n \\to \\infty} (a_n + b_n) = \\lim_{n \\to \\infty} a_n + \\lim_{n \\to \\infty} b_n \\\\\n\\lim_{n \\to \\infty} (a_n - b_n) = \\lim_{n \\to \\infty} a_n - \\lim_{n \\to \\infty} b_n \\\\\n\\lim_{n \\to \\infty} (a_n b_n) = \\lim_{n \\to \\infty} a_n \\cdot \\lim_{n \\to \\infty} b_n \\\\\n\\lim_{n \\to \\infty} ca_n = c \\lim_{n \\to \\infty} a_n \\\\\n\\lim_{n \\to \\infty} \\frac{a_n}{b_n} = \\frac{\\lim_{n \\to \\infty} a_n}{\\lim_{n \\to \\infty} b_n} \\text{ if } \\lim_{n \\to \\infty} b_n \\neq 0 \\\\\n\\lim_{n \\to \\infty} a^p_n = \\left( \\lim_{n \\to \\infty} a_n \\right)^p \\text{ if } p > 0, a_n > 0\n\\end{gather*}\n\nIf $\\lim_{n \\to \\infty} |a_n| = 0$, $\\lim_{n \\to \\infty} a_n = 0$.\n\nIf $\\lim_{n \\to \\infty} a_n = L$ and function $f$ is continuous at $L$, $\\lim_{n \\to \\infty} f(a_n) = f(L)$.\n\nThe sequence $\\{r^n\\}$ is convergent if $-1 < r \\leq 1$ and divergent for all other values of $r$.\n\n\\begin{gather*}\n    \\lim_{n \\to \\infty} r^n = \\begin{cases}\n        0 \\text{ if } -1 < r < 1 \\\\\n        1 \\text{ if } r = 1\n    \\end{cases}\n\\end{gather*}\n\nSqueeze theorem: If $a_n \\leq b_n \\leq c_n$ and $\\lim_{n \\to \\infty} a_n = \\lim_{n \\to \\infty} c_n = L$, $\\lim_{n \\to \\infty} b_n = L$.\n\nMonotonic sequences: A sequence ${a_n}$ is increasing if $a_n < a_{n+1}$ for all $n \\geq 1$ and is decreasing if $a_n > a_{n+1}$ for all $n \\geq 1$. Monotonic sequences are either increasing or decreasing.\n\nBounded sequences: A sequence ${a_n}$ is bounded above if there is a number $M$ such that $a \\leq M$ for all $n \\geq 1$. ${a_n}$ is bounded below if there is a number $m$ such that $m \\leq a_n$ for all $n \\geq 1$. 
If $\{a_n\}$ is bounded above and below, then $\{a_n\}$ is a bounded sequence.\n\nIf a sequence is bounded and monotonic, that sequence is convergent.\n\n\\section{Ways to determine the limit}\n\n\\begin{itemize}\n    \\item Draw a graph or construct a table\n    \\item Break it up into smaller limits using the limit laws\n    \\item L'Hospital's Rule\n    \\item Determine how each term will change as $n$ approaches infinity\n    \\item Use the squeeze theorem if the sequence is squeezed between two sequences with the same limit\n    \\item Substitute a variable if a common pattern appears more than once; remember to find the limit of the substituted expression when replacing the limit variable\n    \\item If the expression is raised to the $n$th power, take the natural log of both sides\n\\end{itemize}\n\n\\section{Series}\n\nSeries: The sum of all terms of a sequence.\n\nInfinite sum:\n\\begin{gather*}\n    \\sum_{n=1}^{\\infty} a_n = S\n\\end{gather*}\n\nPartial sum:\n\\begin{gather*}\n    \\sum_{K=1}^{n} a_K = S_n\n\\end{gather*}\n\nPartial sums can be used to determine convergence of an infinite sum.\n\n\\begin{gather*}\n    \\sum_{n=1}^{\\infty} a_n = \\lim_{n \\to \\infty} \\sum_{K=1}^{n} a_K\n\\end{gather*}\n\nIf $\\{S_n\\}$ converges, then $\\lim_{n \\to \\infty} S_n = S$.\n\n\\section{Geometric series}\n\n\\begin{gather*}\n    \\sum_{n=1}^{\\infty} ar^{n-1} = a + ar + ar^2 + ar^3 + \\cdots\n\\end{gather*}\n\n\\begin{itemize}\n    \\item If $r \\leq -1$ or $r \\geq 1$, the series will diverge  \\\\\n    \\item If $-1 < r < 1$, the series will converge and its sum is \\\\\n    \\begin{gather*}\n        S = \\frac{a}{1-r}\n    \\end{gather*}\n\\end{itemize}\n\nTheorems for convergent series\n\\begin{align*}\n    \\sum_{n=1}^{\\infty} ca_n &= c \\sum_{n=1}^{\\infty} a_n \\\\\n    \\sum_{n=1}^{\\infty} (a_n + b_n) &= \\sum_{n=1}^{\\infty} a_n + \\sum_{n=1}^{\\infty} b_n \\\\\n    \\sum_{n=1}^{\\infty} (a_n - b_n) &= \\sum_{n=1}^{\\infty} a_n - \\sum_{n=1}^{\\infty} b_n\n\\end{align*}\n\nTelescoping series: Each pair of consecutive terms have parts that cancel out, which makes it easy to find the closed form.\n\nTest for divergence: If $\\lim_{n \\to \\infty} a_n$ does not exist or if $\\lim_{n \\to \\infty} a_n \\neq 0$, then the series $\\sum_{n=1}^{\\infty} a_n$ is divergent.\n\n\\section{Integral test}\n\nIntegral Test: Let $f$ be a continuous, positive, decreasing function on $[1, \\infty)$ and let $a_n = f(n)$. Then the series $\\sum_{n=1}^\\infty a_n$ is convergent if and only if the improper integral $\\int_1^\\infty f(x) dx$ is convergent.\n\nP-series test: The $p$-series $\\sum_{n=1}^\\infty \\frac{1}{n^p}$ is convergent if $p > 1$ and divergent if $p \\leq 1$.\n\n\\section{Comparison test}\n\nComparison test: Let $\\sum a_n$ and $\\sum b_n$ be series with positive terms.\n\\begin{itemize}\n    \\item If $\\sum b_n$ is convergent and $a_n \\leq b_n$ for all $n$, then $\\sum a_n$ is convergent.\n    \\item If $\\sum b_n$ is divergent and $a_n \\geq b_n$ for all $n$, then $\\sum a_n$ is divergent.\n\\end{itemize}\n\n\\section{Limit comparison test}\n\nLimit comparison test: Let $\\sum a_n$ and $\\sum b_n$ be series with positive terms. 
If\n\\begin{gather*}\n    \\lim_{n \\to \\infty} \\frac{a_n}{b_n} = c\n\\end{gather*}\n\nwhere $c$ is a finite number and $c > 0$, then either both series converge or both diverge.\n\n\\section{Alternating series}\n\nAlternating series: A series whose terms are alternately positive and negative.\n\\begin{gather*}\n    \\sum_{n = 1}^\\infty (-1)^{n-1} b_n = b_1 - b_2 + b_3 - b_4 + \\cdots\n\\end{gather*}\n\nAlternating series test: Given an alternating series $\\sum (-1)^{n-1} b_n$, if $b_n$ is decreasing and $\\lim_{n \\to \\infty} b_n = 0$, then $\\sum (-1)^{n-1} b_n$ is convergent.\n\nAlternating series estimation theorem: If $S = \\sum (-1)^{n-1}b_n$ where $b_n > 0$ is the sum of a converging alternating series, then\n\\begin{gather*}\n    |R_n| = |S - S_n| \\leq b_{n + 1}\n\\end{gather*}\n\n\\section{Ratio test}\n\nAbsolute convergence: Given a series $\\sum a_n$, if $\\sum |a_n|$ converges, then $\\sum a_n$ is absolutely convergent.\n\nConditional convergence: A series $\\sum a_n$ that is convergent but not absolutely convergent.\n\nRatio test: Given the series $\\sum_{n=1}^\\infty a_n$,\n\\begin{gather*}\n    \\text{if } \\lim_{n \\to \\infty} \\left| \\frac{a_{n+1}}{a_n} \\right|\n    \\begin{cases}\n        < 1 \\text{, then the series is absolutely convergent} \\\\\n        > 1 \\text{ or } \\infty \\text{, then the series is divergent} \\\\\n        = 1 \\text{, the Ratio Test is inconclusive}\n    \\end{cases}\n\\end{gather*}\n\n\\section{Root test}\n\nRoot test: Given the series $\\sum_{n=1}^\\infty a_n$,\n\\begin{gather*}\n    \\text{if } \\lim_{n \\to \\infty} |a_n|^{\\frac{1}{n}}\n    \\begin{cases}\n        < 1 \\text{, then the series is absolutely convergent} \\\\\n        > 1 \\text{ or } \\infty \\text{, then the series is divergent} \\\\\n        = 1 \\text{, the Root Test is inconclusive}\n    \\end{cases}\n\\end{gather*}\n\n\\section{Tests for convergence and divergence}\n\n\\begin{itemize}\n    \\item Geometric series\n    \\item Telescoping series\n    \\item Test for divergence\n    \\item Integral test\n    \\item P-series test\n    \\item Comparison test\n    \\item Limit comparison test\n    \\item Alternating series test\n    \\item Ratio test\n    \\item Root test\n\\end{itemize}\n\n\\section{Power series}\n\nThe power series:\n\\begin{gather*}\n    \\sum_{n=0}^\\infty c_n (x - a)^n\n\\end{gather*}\n\nThis power series is centered around $a$.\n\nFor a given power series $\\sum_{n=0}^\\infty c_n (x - a)^n$, there are only three possibilities:\n\\begin{itemize}\n    \\item The series converges only when $x=a$\n    \\item The series converges for all $x$\n    \\item There is a positive number $R$ such that the series converges if $|x - a| < R$ and diverges if $|x - a| > R$\n\\end{itemize}\n\nThe interval of convergence is the interval for $x$ such that the series converges.\n\nThe radius of convergence is half the length of the interval of convergence.\n\nWhen the radius is found using the ratio or root test, the endpoints must be checked for convergence separately.\n\n\\section{Representations of functions as power series}\n\n\\begin{gather*}\n    \\frac{1}{1-x} = \\sum_{n=0}^\\infty x^n \\\\\n    \\text{where } |x| < 1\n\\end{gather*}\n\nTerm-by-term differentiation and integration:\n\nIf the power series $\\sum_{n=0}^\\infty c_n (x - a)^n$ has radius of convergence $R > 0$, then the function $f$ defined by\n\n\\begin{gather*}\n    f(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + \\cdots = \\sum_{n=0}^\\infty c_n (x - a)^n\n\\end{gather*}\n\nis differentiable on the interval $(a - R, 
a + R)$ and\n\n\\begin{gather*}\n    f'(x) = c_1 + 2 c_2 (x - a) + 3 c_3 (x - a)^2 + \\cdots = \\sum_{n=1}^\\infty n c_n (x - a)^{n-1} \\\\\n    \\int f(x) dx = C + c_0 (x - a) + c_1 \\frac{(x-a)^2}{2} + c_2 \\frac{(x-a)^3}{3} + \\cdots = C + \\sum_{n=0}^\\infty c_n \\frac{(x - a)^{n+1}}{n + 1}\n\\end{gather*}\n\nThe radii of convergence of both of these series are $R$.\n\n\\section{Taylor and Maclaurin Series}\n\nIf the power series representation at $a$ is\n\\begin{gather*}\n    f(x) = \\sum_{n=0}^\\infty c_n (x - a)^n\n\\end{gather*}\n\nthen its Taylor series is \n\n\\begin{gather*}\n    f(x) = \\sum_{n=0}^\\infty \\frac{f^{(n)}(a)}{n!} (x - a)^n\n\\end{gather*}\n\nMaclaurin series: The Taylor series but with $a = 0$\n\nImportant Maclaurin series\n\\begin{align*}\n    \\frac{1}{1-x} &= \\sum_{n=0}^\\infty x^n, R = 1 \\\\\n    e^x &= \\sum_{n=0}^\\infty \\frac{x^n}{n!}, R = \\infty \\\\\n    \\sin(x) &= \\sum_{n=0}^\\infty (-1)^n \\frac{x^{2n+1}}{(2n+1)!}, R = \\infty \\\\\n    \\cos(x) &= \\sum_{n=0}^\\infty (-1)^n \\frac{x^{2n}}{(2n)!}, R = \\infty \\\\\n    \\tan^{-1}(x) &= \\sum_{n=0}^\\infty (-1)^n \\frac{x^{2n+1}}{2n+1}, R = 1 \\\\\n    \\ln(1 + x) &= \\sum_{n=1}^\\infty (-1)^{n-1} \\frac{x^n}{n}, R = 1 \\\\\n    (1 + x)^k &= 1 + kx + \\frac{k(k-1)}{2!}x^2 + \\frac{k(k-1)(k-2)}{3!}x^3 + \\cdots = \\sum_{n=0}^\\infty {k \\choose n} x^n, R = 1\n\\end{align*}\n\n\\section{Parametric equations}\nUnlike rectangular equations, parametric equations involve a parameter $t$.\nThe curve has an initial point and a terminal point corresponding to the minimum and maximum $t$ values, respectively.\n\nFirst derivative (Slope):\n\\begin{gather*}\n    \\frac{dy}{dx} = \\frac{\\frac{dy}{dt}}{\\frac{dx}{dt}} \\\\\n    \\text{if }\\frac{dx}{dt} \\neq 0\n\\end{gather*}\n\nSecond derivative (Concavity):\n\\begin{gather*}\n    \\frac{d^2 y}{dx^2} = \\frac{\\frac{d}{dt}\\left( \\frac{dy}{dx} \\right)}{\\frac{dx}{dt}}\n\\end{gather*}\n\nIf a curve is defined by $x = f(t), y = g(t), \\alpha \\leq t \\leq \\beta$, the area under the curve is\n\\begin{gather*}\n    \\int_\\alpha^\\beta g(t) f'(t) dt\n\\end{gather*}\n\nThe arc length is\n\\begin{gather*}\n    \\int_\\alpha^\\beta \\sqrt{\\left(\\frac{dx}{dt}\\right)^2 + \\left(\\frac{dy}{dt}\\right)^2} dt\n\\end{gather*}\n\n\\section{Polar coordinates}\n\nConverting from polar coordinates to Cartesian coordinates\n\\begin{gather*}\n    x = r \\cos(\\theta) \\\\\n    y = r \\sin(\\theta)\n\\end{gather*}\n\nConverting from Cartesian to polar coordinates\n\\begin{gather*}\n    r^2 = x^2 + y^2 \\\\\n    \\tan(\\theta) = \\frac{y}{x}\n\\end{gather*}\n\nDistance formula\n\\begin{gather*}\n    d = \\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}\n\\end{gather*}\n\n\\begin{itemize}\n    \\item If $f(\\theta) = f(-\\theta)$, the curve is symmetric about the polar axis. \\\\\n    \\item If $f(\\theta) = f(\\theta + \\pi)$, the curve is symmetric about the pole. 
\\\\\n    \\item If $f(\\theta) = f(\\pi - \\theta)$, the curve is symmetric about $\\theta = \\pi / 2$.\n\\end{itemize}\n\nArea of the region $R$ enclosed by $r = f(\\theta)$, $a \\leq \\theta \\leq b$:\n\n\\begin{gather*}\n    \\int_a^b \\frac{1}{2}(f(\\theta))^2 d\\theta\n\\end{gather*}\n\nTrig identities\n\n\\begin{align*}\n    \\sin^2(\\theta) &= \\frac{1-\\cos(2\\theta)}{2} \\\\\n    \\cos^2(\\theta) &= \\frac{1+\\cos(2\\theta)}{2}\n\\end{align*}\n\n\\section{Vectors}\n\nA vector:\n\\begin{itemize}\n    \\item Has a magnitude and direction.\n    \\item Has an initial point and terminal point\n    \\item A vector from $A$ to $B$ is written $\\overrightarrow{AB}$\n\\end{itemize}\n\nScalar multiplication of vectors: If $c$ is a scalar and $v$ is a vector, the scalar multiple $cv$ is the vector whose length is $|c|$ times the length of $v$ and whose direction is the same as $v$ if $c > 0$ and is opposite to $v$ if $c<0$. If $c=0$, then $cv=0$.\n\nVector addition: The sum of vectors $u$ and $v$, $u+v$, is the vector from the initial point of $u$ to the terminal point of $v$, when $v$ is placed at the terminal point of $u$. $u + v = v + u$.\n\nPosition vector: A vector that represents a coordinate relative to the origin.\n\nGiven the points $A(x_1, y_1, z_1)$ and $B(x_2, y_2, z_2)$, the vector $\\overrightarrow{AB} = \\left\\langle x_2 - x_1, y_2 - y_1, z_2 - z_1 \\right\\rangle$\n\nUnit vector: A vector that has a magnitude of $1$.\n\n\\begin{align*}\n    \\hat{i} &= \\text{unit vector in $x$ direction} \\\\\n    \\hat{j} &= \\text{unit vector in $y$ direction} \\\\\n    \\hat{k} &= \\text{unit vector in $z$ direction}\n\\end{align*}\n\nMagnitude of a vector (in two dimensions; in three dimensions, add $a_3^2$ under the root):\n\\begin{gather*}\n    |a| = \\sqrt{a_1^2 + a_2^2}\n\\end{gather*}\n\nUnit vector in the direction of $\\overrightarrow{a}$:\n\\begin{gather*}\n    \\hat{u} = \\frac{\\overrightarrow{a}}{|\\overrightarrow{a}|}\n\\end{gather*}\n\nEquation of a sphere:\n\\begin{gather*}\n    (x-h)^2 + (y-k)^2 + (z-l)^2 = r^2\n\\end{gather*}\n\n\\end{document}\n", "meta": {"hexsha": "0ce2d1054ad715aff937c1dbd6888eba15e8f860", "size": 13555, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math153.tex", "max_stars_repo_name": "MakotoE/math153-notes", "max_stars_repo_head_hexsha": "bbede260182a67e25845bd9fab427adaefba41f4", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-28T16:47:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-28T16:47:26.000Z", "max_issues_repo_path": "math153.tex", "max_issues_repo_name": "MakotoE/math153-notes", "max_issues_repo_head_hexsha": "bbede260182a67e25845bd9fab427adaefba41f4", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math153.tex", "max_forks_repo_name": "MakotoE/math153-notes", "max_forks_repo_head_hexsha": "bbede260182a67e25845bd9fab427adaefba41f4", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6675191816, "max_line_length": 288, "alphanum_fraction": 0.6388048691, "num_tokens": 4935, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764747, "lm_q2_score": 0.8807970701552504, "lm_q1q2_score": 0.5860916589060859}}
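Worked example (added for illustration of the power series material above): find the interval of convergence of
\begin{gather*}
    \sum_{n=1}^{\infty} \frac{(x - 2)^n}{n 3^n}
\end{gather*}

By the ratio test,
\begin{gather*}
    \lim_{n \to \infty} \left| \frac{(x-2)^{n+1}}{(n+1) 3^{n+1}} \cdot \frac{n 3^n}{(x-2)^n} \right| = \frac{|x - 2|}{3}
\end{gather*}

so the series converges absolutely when $|x - 2| < 3$, giving $R = 3$. At the endpoint $x = 5$ the series becomes $\sum \frac{1}{n}$, a divergent $p$-series; at $x = -1$ it becomes $\sum \frac{(-1)^n}{n}$, which converges by the alternating series test. The interval of convergence is $[-1, 5)$.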
{"text": "% Prelim, Appendix A\n% by Rachel Slaybaugh\n\\separatorpage{}\n\\chapter{Diffusion Equation}\n\\label{sec:AppendixA}\nThe diffusion approximation is a widely used simplification that reduces the computational complexity of the transport equation. The approximation is that the angular dependence of the flux is unimportant, so the direction component of the transport equation can be discarded. Physically this means that neutrons move against their concentration gradient like just heat diffuses through a conductor. The information in this section is derived from Duderstadt and Hamilton's \\emph{Nuclear Reactor Analysis} \\cite{Duderstadt1976} and neglects fission for simplicity. \n\nThe first step in applying this approximation is to integrate the angular dependence out of Equation \\eqref{eq:neutron transport}, resulting in the neutron continuity equation:\n%\n\\begin{equation}\n  \\nabla \\cdot J(\\vec{r},E) + \\Macro(\\vec{r},E)\\phi(\\vec{r},E) = \\int \\:dE' \\:\\Macro_{s}(\\vec{r}, E' \\to E)\\phi(\\vec{r},E') + Q_{ex}(\\vec{r},E) \\:,\n  \\label{eq:continuity} \n\\end{equation}\n%\nwhere the following definitions have been used:\\\\\n\\indent $J(\\vec{r},E) = \\int d\\vOmega \\:\\vOmega \\psi(\\vec{r}, \\vOmega, E)$ is the neutron current \\\\\n\\indent  $\\phi(\\vec{r},E) = \\int d\\vOmega \\:\\psi(\\vec{r}, \\vOmega, E)$ is the scalar flux, and \\\\\n\\indent $Q_{ex} (\\vec{r},E)= \\int d\\vOmega \\:q_{ex}(\\vec{r}, \\vOmega, E)$ is the external source.\n\nUnfortunately, this simplifying approximation added another unknown, $J$, which leaves one equation with two unknowns. In an attempt to eliminate one of these unknowns, Equation \\eqref{eq:continuity} is multiplied by $\\hat{\\Omega}$ and integrated over angle again to obtain the first angular moment:\n%\n\\begin{align}\n  \\nabla \\cdot \\int  d\\vOmega \\:\\vOmega \\vOmega \\psi(\\vec{r}, \\vOmega, E) &+ \\Macro(\\vec{r},E) J(\\vec{r},E)= \\nonumber \\\\\n  &\\int dE' \\:\\Macro_{s1}(\\vec{r}, E' \\to E)J(\\vec{r},E') + \\int d\\vOmega \\int d\\vOmega \\:\\vOmega q_{ex} \\:,\n  \\label{eq:current1}\n\\end{align}\n%\nwhere $\\Macro_{s1}  = \\int d\\vOmega \\:\\vOmega \\Macro_{s}$. The first angular moment form of the equation cannot be solved either because the streaming (first) term is still unknown. \n\nTo make Equation \\eqref{eq:current1} solvable, the original assumption is modified to assert that the angular flux is weakly, in fact linearly, dependent on angle rather than independent of angle. To implement this assumption the angular flux is expanded in angle and only the first two terms are retained:  \n%\n\\begin{equation}\n  \\psi(\\vec{r}, \\vOmega, E) \\cong \\frac{1}{4 \\pi} \\phi(\\vec{r}, E) + \\frac{3}{4 \\pi}J(\\vec{r}, E) \\cdot \\vOmega \\:.\n  \\label{eq:angExpand} \n\\end{equation}\nThe truncated angular flux is then inserted into the streaming term in Equation \\eqref{eq:current1}, giving \n%\n\\begin{equation}\n  \\nabla \\cdot \\int d \\vOmega \\:\\vOmega \\vOmega \\psi(\\vec{r}, \\vOmega, E)  \\cong \\frac{1}{3} \\nabla \\phi(\\vec{r}, E) \\:. \n  \\label{eq:firstTerm}\n\\end{equation}\n\nNext, the scattering source term is simplified in angle and energy. To address the angular dependence define $\\bar{\\mu}_{0}$ as the average cosine of the scattering angle, which, temporarily suppressing energy, gives $\\Macro_{s1} = \\bar{\\mu}_{0}\\Macro_{s}$. 
For elastic scattering from stationary nuclei, when scattering is s-wave (isotropic) in the center-of-mass frame, $\\bar{\\mu}_{0} = \\frac{2}{3A}$ where $A$ is atomic mass number. A common procedure to simplify the energy dependence is to neglect the anisotropic contribution to energy transfer in a scattering collision. Mathematically this means $\\Macro_{s1}(E' \\to E) = \\Macro_{s1}(E) \\delta(E' - E)$, giving $\\int dE' \\:\\Macro_{s1}(\\vec{r}, E' \\to E)J(\\vec{r},E') = \\bar{\\mu}_{0}\\Macro_{s}(\\vec{r},E)J(\\vec{r},E)$. Finally, it is assumed that the external source is isotropic, $\\int d\\vOmega \\:\\vOmega q_{ex} = 0$.\n\nIf these approximations are all included and Equation \\eqref{eq:current1} is solved for $J$, the result is Fick's Law:\n%\n\\begin{equation}\n  J(\\vec{r},E) \\cong -\\frac{1}{3(\\Macro(\\vec{r},E) - \\bar{\\mu}_{0}\\Macro_{s}(\\vec{r},E))} \\nabla \\phi (\\vec{r},E) = -D(\\vec{r},E) \\nabla \\phi (\\vec{r},E) \\:.\n\\end{equation}\n%\nFick's Law can be introduced back into Equation \\eqref{eq:continuity} to obtain the diffusion equation:\n%\n\\begin{equation}\n  -\\nabla \\cdot D(\\vec{r},E) \\nabla \\phi(\\vec{r},E) + \\Macro(\\vec{r},E)\\phi(\\vec{r},E) = \\int dE' \\:\\Macro_{s}(\\vec{r}, E' \\to E)\\phi(\\vec{r},E') + Q_{ex}(\\vec{r},E) \\:.\n  \\label{eq:diffusion}\n\\end{equation}\n\nThis equation now includes several assumptions which are valid when the solution is not near a void, boundary, source, or strong absorber. While these requirements can be quite restrictive, the diffusion equation has been used frequently for analysis of nuclear systems throughout the history of the nuclear industry.\n\nSome terms from this section that are used when describing DSA in Chapter \\ref{sec:Chp4} are:\n\\begin{align}\n  \\Macro_{tr} &= \\Macro(\\vec{r},E) - \\bar{\\mu}_{0}\\Macro_{s}(\\vec{r},E) \\qquad \\text{is the transport cross section, and}\\\\\n  \\ve{C} &= -\\nabla \\cdot \\frac{1}{3\\Macro_{tr}}\\nabla+ \\Macro(\\vec{r},E) \\qquad \\text{is the diffusion operator.}\n\\end{align}\n\n", "meta": {"hexsha": "cf6eebb5bc5bca1fd9bf43b653ef1759c6d88c33", "size": 5161, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffusion.tex", "max_stars_repo_name": "rachelslaybaugh/RNS_Thesis", "max_stars_repo_head_hexsha": "d931afe50367e1d91b952a9d570c286e0b7f6d42", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-01-07T09:06:04.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-16T17:13:56.000Z", "max_issues_repo_path": "diffusion.tex", "max_issues_repo_name": "rachelslaybaugh/RNS_Thesis", "max_issues_repo_head_hexsha": "d931afe50367e1d91b952a9d570c286e0b7f6d42", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffusion.tex", "max_forks_repo_name": "rachelslaybaugh/RNS_Thesis", "max_forks_repo_head_hexsha": "d931afe50367e1d91b952a9d570c286e0b7f6d42", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-24T17:15:21.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-24T17:15:21.000Z", "avg_line_length": 78.196969697, "max_line_length": 884, "alphanum_fraction": 0.7076148033, "num_tokens": 1632, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.798186787341014, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5859645113438604}}
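As a quick illustration of Equation \eqref{eq:diffusion} (added here; this is the standard one-speed reduction), consider a one-speed problem in a homogeneous medium. The in-scattering integral then collapses to $\Macro_{s}\phi(\vec{r})$ and the diffusion equation reduces to
%
\begin{equation}
  -D \nabla^2 \phi(\vec{r}) + \Macro_{a}\phi(\vec{r}) = Q_{ex}(\vec{r}) \:,
\end{equation}
%
where $\Macro_{a} := \Macro - \Macro_{s}$ is the absorption cross section. Solutions of the source-free problem decay over the characteristic distance $L = \sqrt{D/\Macro_{a}}$, known as the diffusion length.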
{"text": "\\section{Policy Gradient Algorithms}\n\n\n\\begin{frame}{Policy Gradient Algorithms}\n\t\n\t\\begin{block}{Key Idea}\n\t\\begin{enumerate}\n\t\t\\item<+-> The optimal policy $\\pi_*$ is approximated with a parametric policy $\\pi_{\\theta^*}$\n\t\t\\item<+-> The parameters $\\theta^*$ of the policy solve the optimization problem \n\t\t\\begin{equation*}\n\t\t\t\\max_{\\theta \\in \\Theta} J(\\theta) = V_{\\pi_\\theta}(s)\n\t\t\\end{equation*}\n\t\t\\item<+-> Which is solved numerically via gradient descent\n\t\t\\begin{equation*}\n\t\t\\theta_{k+1} = \\theta_k + \\alpha_k \n       \t\t\\tikz[baseline]{\n           \t\t\\node[anchor=base] (gradient) {$\\nabla_\\theta J\\left(\\theta_k\\right)$};\n       \t   }\n\t\t\\end{equation*}\n\t\\end{enumerate}\n\t\\end{block}\n\t\n\t\\onslide<+-|handout:0>{\n\t\t\\begin{tikzpicture}[overlay]\n\t\t\t\t\\node[draw=SteelBlue, circle, line width=3pt, minimum size=1.8cm] at (8.67, 1.65) (g) {};\n\t\t\t\t\\node[SteelBlue] at (11,-0.2) (q) {\\Huge \\textbf{?}};\n\t\t        \\draw [->, line width=3pt, SteelBlue] (q) to [bend left=35] (g);\n\t\t\\end{tikzpicture}\n\t}\n\\end{frame}\n\n\\begin{frame}{The Keystone of Policy Gradient Algorithms}\n\t\n\t\\onslide<1->\n\t\\begin{alertblock}{Policy Gradient Theorem}\n\t\t\\begin{equation*}\n\t\t\t\\nabla_\\theta J(\\theta) =\n\t\t\t\\E[\\substack{S \\sim d^\\theta\\\\A \\sim \\pi_\\theta}]{\\nabla_\\theta\\log\n\t\t\t\\pi_\\theta(S,A) Q_{\\theta}(S, A)}\n\t\t\\end{equation*}\n\t\\end{alertblock}\n\t\n\t\\onslide<2->{\n\tFor an episodic environment, the policy gradient can be approximated via Monte-Carlo\n\t\\begin{equation*}\n\t\t\\nabla_\\theta J(\\theta_k) \\approx \\frac{1}{M} \\sum_{m=0}^M\n\t\t \\sum_{u=0}^{T^{(m)}-1} \\nabla_\\theta\\log \\pi_{\\theta_k} \\left(s_u^{(m)}, a_u^{(m)}\\right) \\sum_{v \\geq u}^{T^{(m)}-1} \\gamma^{v-u} r_{v+1}^{(m)}   \n\t\\end{equation*}\n\t}\n\t\n\t\\onslide<3->{\n\t\tHowever, this estimate is characterized by a large variance. 
Possible improvements:\n\t\t\\begin{enumerate}\n\t\t\t\\item Optimal baseline\n\t\t\t\\item Actor-critic methods\n\t\t\t\\item Natural gradient\n\t\t\\end{enumerate}\n\t}\n\\end{frame}\n\n\\begin{frame}{Policy Gradient with Parameter-Based Exploration (PGPE)}\n\t\\begin{block}{Key Idea}\n\t\t\\begin{enumerate}\n\t\t\t\\item<+-> Actions are selected using a deterministic parametric controller $F_\\theta$\n\t\t\t\\item<+-> The controller parameters are drawn from a probability distribution $p_\\xi$\n\t\t\t\\item<+-> The search for an optimum is performed in the space of the hyperparameters $\\xi$\n\t\t\\end{enumerate}\n\t\\end{block}\n\t\n\t\\onslide<+->{\n\t\tMore formally, the update scheme becomes\n\t\t\t\\begin{equation*}\n\t\t\t\t\\xi_{k+1} = \\xi_k + \\alpha_k \\nabla_\\xi J(\\xi_k)\n\t\t\t\\end{equation*}\n\t\twhere the policy gradient is given by\n\t\t\\begin{alertblock}{Parameter-Based Policy Gradient Theorem}\n\t\t\t\\begin{equation*}\n\t\t\t\t\\nabla_\\xi J(\\xi) = \\E[\\substack{S \\sim d^\\xi\\\\\\theta \\sim p_\\xi}]{\\nabla_\\xi \\log p_\\xi(\\theta) Q_{\\xi}\\left(S, F_\\theta(S)\\right)}\n\t\t\t\\end{equation*}\t\t\n\t\t\\end{alertblock}\n\t}\n\\end{frame}\n\n\n", "meta": {"hexsha": "6be74636eae34cb2575faf3e213f988840fb43f2", "size": 2810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Presentation/Sections/2_policy_gradient_algorithms.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Presentation/Sections/2_policy_gradient_algorithms.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Presentation/Sections/2_policy_gradient_algorithms.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 33.0588235294, "max_line_length": 150, "alphanum_fraction": 0.6580071174, "num_tokens": 1019, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.58596449724672}}
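% Added slide: a sketch of the standard argument for why a baseline does not
% bias the policy gradient (supporting improvement 1 above).
\begin{frame}{Why a Baseline Does Not Bias the Gradient}

	A sketch of the standard argument: since $\sum_{a} \pi_\theta(s,a) = 1$ for every state $s$,
	\begin{equation*}
		\E[\substack{S \sim d^\theta\\A \sim \pi_\theta}]{\nabla_\theta \log \pi_\theta(S,A) \, b(S)}
		= \E[S \sim d^\theta]{b(S) \, \nabla_\theta \sum_{a} \pi_\theta(S,a)} = 0
	\end{equation*}

	Hence any state-dependent baseline $b(S)$ can be subtracted from $Q_{\theta}(S,A)$ in the policy gradient theorem without changing $\nabla_\theta J(\theta)$, while potentially reducing the variance of the Monte-Carlo estimate.
\end{frame}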
{"text": "\n\nThis chapter gives routines and tools for computing and using scalar and vector wave functions in free-space spherical coordinates. The wave functions represent the frequency-domain multipoles of the field, which together give the multipole expansion of a field. While the wave functions themselves are not always used, the coefficients of the expansions are quite useful. The expansion coefficients can be quickly manipulated by rotation and translation addition theorems to transform the fields in different reference frames (see Chapters \\ref{chap:rotation} and \\ref{chap:translation}). This chapter provides codes for spherical wave functions and serves as a quick reference for certain properties and derivations. \n\n\\section{Indexing}\n\nSpherical harmonics and spherical wave functions are defined by the degree and order of the underlying associated Legendre polynomials, $(l,m)$. It is convenient to store and access harmonics using a linear index. For vector wave functions, where the harmonics range from $l = 1,...,L$ and $m = \\pm l$, the linear index for harmonic $(l,m)$ is given by $n = l^2 + l + m$. When the set of harmonics includes all $m$ at each $l$, the total number of harmonics is $N = L^2 + 2L$. The following sequence converts a linear index $n$ to $(l,m)$ for vector wave functions:\n\\begin{eqnarray}\nl &=& \\lfloor \\sqrt{n} \\rfloor \\\\\nm &=& n - l^2 - l\n\\end{eqnarray}\n\nTable \\ref{linearindexvec} illustrates the mapping between linear index and harmonic for vector waves. A similar table appears in \\cite{tsang2000scattering}.  Scalar wave functions include degree $l=0$, which is the monopole. The linear index is then given by $n = l^2 + l + m + 1$.  When the set of harmonics includes all $m$ at each $l$, the total number of harmonics is $N = L^2 + 2L + 1$.  The following sequence converts a linear index $n$ to $(l,m)$ for scalar waves:\n\\begin{eqnarray}\nl &=& \\lfloor \\sqrt{n-1} \\rfloor \\\\\nm &=& n - l^2 - l -1\n\\end{eqnarray}\n\nTable \\ref{linearindexmono} illustrates the mapping for scalar wave functions including the monopole.  \n\n\nThe function \\texttt{lm2ind} returns the linear index for pair $(l,m)$, while \\texttt{ind2lm} outputs the $(l,m)$ pair given the index.   Use the string switch \\texttt{'mono'} to include the monopole for scalar waves. \\texttt{lmtable} produces a table of $(l,m)$ pairs. These routines are useful for general purpose manipulation of spherical harmonic indexing or when developing routines. However, the error checking slows them down and should be replaced by inline expressions once routines are debugged.  
\begin{table}[H]
\parbox{.45\linewidth}{
\caption{Vector Wave Function Harmonic Indexing}
\centering
\begin{tabular}{|c|c|c|}
\hline
Linear index & $l$ & $m$ \\
\hline
1 &      1  &  -1 \\
2 &      1  &   0\\
3 &      1  &   1\\
\hline
4 &      2  &  -2\\
5 &      2   & -1\\
6 &      2   &  0\\
7 &      2   &  1\\
8 &      2   &  2\\
\hline
9 &      3   & -3\\
10 &      3  &  -2\\
11 &      3  &  -1\\
12 &      3  &   0\\
13 &      3   &  1\\
14 &      3   &  2\\
15 &      3  &   3 \\
\hline
\end{tabular}
\label{linearindexvec}
}
\hfill
\parbox{.45\linewidth}{
\centering
\caption{Scalar Wave Function Harmonic Indexing}
\centering
\begin{tabular}{|c|c|c|}
\hline
Linear index & $l$ & $m$ \\
\hline
1 & 	0 & 0 \\
\hline
2 &      1  &  -1 \\
3 &      1  &   0\\
4 &      1  &   1\\
\hline
5 &      2  &  -2\\
6 &      2   & -1\\
7 &      2   &  0\\
8 &      2   &  1\\
9 &      2   &  2\\
\hline
10 &      3   & -3\\
11 &      3  &  -2\\
12 &      3  &  -1\\
13 &      3  &   0\\
14 &      3   &  1\\
15 &      3   &  2\\
16 &      3  &   3 \\
\hline
\end{tabular}
\label{linearindexmono}
}
\end{table}

{\footnotesize
\VerbatimInput{\code/Wavefunctions/lm2ind.m}
}
{\footnotesize
\VerbatimInput{\code/Wavefunctions/ind2lm.m}
}

{\footnotesize
\VerbatimInput{\code/Wavefunctions/lmtable.m}
}

\clearpage
\section{Spherical Harmonics}

\subsection{$Y_{lm}(\theta,\phi)$}
The fully normalized spherical harmonics are given by
\begin{equation}
Y_{lm}(\theta,\phi) = \sqrt{\dfrac{2l+1}{4\pi}\dfrac{(l-m)!}{(l+m)!}}P_l^m(\cos\theta)e^{im\phi}
\end{equation}

\noindent where $P_l^m(\cos\theta)$ are the associated Legendre polynomials. The Condon-Shortley phase, $(-1)^m$, is included in the definition of $P_l^m(\cos\theta)$.
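As a sanity check (an added sketch, not one of the library routines), the definition above can be evaluated directly for a single harmonic with MATLAB's built-in \texttt{legendre}, which includes the Condon-Shortley phase. The naive factorial normalization used here is only safe for small $l$, as discussed below.

{\footnotesize
\begin{verbatim}
% Direct evaluation of Y_lm from the definition above (small l only;
% the factorial normalization loses accuracy beyond l ~ 21).
l = 3; m = 2; theta = pi/3; phi = pi/5;
P   = legendre(l, cos(theta));  % P(m+1) is P_l^m, Condon-Shortley included
Nlm = sqrt((2*l+1)/(4*pi) * factorial(l-m)/factorial(l+m));
Ylm = Nlm * P(m+1) * exp(1i*m*phi);
\end{verbatim}
}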
The spherical harmonics can be written in terms of the fully normalized associated Legendre polynomials, $\widetilde P_l^m(\cos\theta)$, as
\begin{equation}
Y_{lm}(\theta,\phi) = \dfrac{1}{\sqrt{2\pi}} \widetilde P_l^m(\cos\theta) e^{im\phi}
\end{equation}

Any scalar spherical function can be expanded as a sum of spherical harmonics
\eq{f(\theta,\phi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} a_{lm} Y_{lm}(\theta,\phi)}

Being fully normalized, the orthogonality relation for the spherical harmonics is simply
\begin{equation}
\int_0^{2\pi} \int_0^{\pi}Y_{lm}(\theta,\phi) Y^*_{l'm'}(\theta,\phi) \sin\theta d\theta d\phi = \delta_{ll'}\delta_{mm'}
\end{equation}

\noindent where one can show that
\begin{equation}
Y^*_{lm}(\theta,\phi) = (-1)^mY_{l,-m}(\theta,\phi) 
\label{eq1}
\end{equation}

It is useful to note that 
\eq{\int_0^{2\pi} \int_0^{\pi}Y_{lm}(\theta,\phi) \sin\theta d\theta d\phi =  \sqrt{4 \pi} \delta_{l0}\delta_{m0} }

and 
\eq{Y_{l,0}(0,0) = \sqrt{\dfrac{2l+1}{4\pi}} \label{ylzerozz} }

Fully normalized spherical harmonics are given by the routine \texttt{sphericalY}. The routine returns the values at points $(\theta,\phi)$ up to degree $L$ for all $\pm m$, linearly indexed. All the harmonics are returned because it is fastest to compute Legendre polynomials recursively and because field expansions often need all the harmonics. The output is a 2D array whose first dimension indexes the points $(\theta,\phi)$ and whose second dimension indexes the harmonics, with size $L^2 + 2L$. This layout facilitates matrix-vector multiplication of the harmonics with a column vector of expansion coefficients. After such multiplication, the result has to be reshaped to the size of the input arrays. Use the string switch \texttt{'mono'} to include the monopole term, in which case the second dimension will be size $L^2 + 2L + 1$. A naive computation of $Y_{lm}$ would compute and divide the factorials directly, which is only accurate to about $L=21$. It is best to use the fully normalized Legendre polynomials, in which case only a factor of $1/\sqrt{2\pi}$ is needed to complete the normalization. Finally, this routine does not take advantage of the separability of $\theta$ and $\phi$ in the computation if, for example, the arrays contain redundant points on evenly spaced grids over the sphere.

{\footnotesize
\VerbatimInput{\code/Wavefunctions/sphericalY.m}
}

\subsection{Angular Momentum Operators}

The angular momentum operators, $\mathcal{L}$, for spherical harmonics are quite useful and will be used later to express vector spherical harmonics in Cartesian components. Several properties are repeated here from \cite{angularmomop}, which can also be found in textbooks (the operators are usually denoted with $L$, rather than $\mathcal{L}$). 
\eq{\mathcal{L}^2 Y_{lm} = l(l+1)Y_{lm}}
\eq{\mathcal{L}^2 = \mathcal{L}_x^2 + \mathcal{L}_y^2 + \mathcal{L}_z^2}

\noindent where 
\ea{\bb{\mathcal{L}} &=& -i \bb{r} \times \nabla \\
\ &=& \mathcal{L}_x \hat{x} + \mathcal{L}_y \hat{y} + \mathcal{L}_z \hat{z}}

\noindent where $\mathcal{L}_x$, $\mathcal{L}_y$, and $\mathcal{L}_z$ are differential operators that act on Cartesian vector fields.
The components are combined to create raising and lowering operators:
\ea{\mathcal{L}_{+} &=& \mathcal{L}_x + i \mathcal{L}_y = e^{i\phi}\left(\dd{}{\theta} + i \cot\theta \dd{}{\phi}\right) \\
\mathcal{L}_{-} &=& \mathcal{L}_x - i \mathcal{L}_y = e^{-i\phi}\left(-\dd{}{\theta} + i \cot\theta \dd{}{\phi}\right) \\
\mathcal{L}_{z}  &=& -i\dd{}{\phi} 
}

Adding and subtracting the raising and lowering operators gives 
\ea{\mathcal{L}_x &=& \dfrac{1}{2}\left(\mathcal{L}_{+} + \mathcal{L}_{-}\right) \\
\mathcal{L}_y &=& \dfrac{1}{2 i}  \left(\mathcal{L}_{+} - \mathcal{L}_{-}\right) }

The operators have the property that 
\ea{\mathcal{L}_{+} Y_{lm} &=& \sqrt{(l-m)(l+m+1)} Y_{l,m+1} \\
\mathcal{L}_{-} Y_{lm} &=& \sqrt{(l+m)(l-m+1)}  Y_{l,m-1} \\
\mathcal{L}_{z} Y_{lm} &=& m Y_{l,m} }

\section{Scalar Spherical Wave Functions}

The spherical scalar wave functions are solutions to the Helmholtz wave equation in free space in spherical coordinates. They are a product of spherical Bessel functions and spherical harmonics. The form of the Bessel function determines whether the wave is incoming or outgoing, specifically whether the field is regular at the origin (a standing wave) or radiating:
\begin{eqnarray}
\textrm{Incoming regular wave:} \quad \quad\textit{Rg} \psi_{lm}(k,\bb{r}) &=& j_l(kr) Y_{lm}(\theta,\phi) \\
\textrm{Outgoing radiating wave:} \quad  \quad \quad \psi_{lm}(k,\bb{r}) &=& h_l^{(1)}(kr) Y_{lm}(\theta,\phi) 
\end{eqnarray}

\noindent where $j_l(kr)$ and $h_l^{(1)}(kr)$ are spherical Bessel and Hankel functions, respectively, $k$ is the background wavenumber, and \textit{Rg} denotes regularity (non-singular behavior) at the origin, obtained with $j_l(kr)$. The wave functions form a complete set, so any scalar field can be expanded as an infinite sum: 
\begin{eqnarray}
\textrm{Incoming regular field:} \quad \quad \phi(\bb{r}) &=& \sum_{l=0}^{\infty} \sum_{m=-l}^l a_{lm} \textit{Rg} \psi_{lm}(k,\bb{r})  \\
\textrm{Outgoing radiating field:} \quad \quad \phi(\bb{r}) &=& \sum_{l=0}^{\infty} \sum_{m=-l}^l b_{lm} \psi_{lm}(k,\bb{r}) 
\end{eqnarray}

The far-field scalar wave functions for radiating waves are found by taking the large argument limit of the Hankel function
\eq{\lim_{kr \rightarrow \infty}  \psi_{lm}(k,\bb{r}) = \dfrac{e^{ikr}}{kr} i^{-l-1} Y_{lm}(\theta,\phi)  }

The addition theorem for the scalar Green's function is given in terms of regular and radiating waves as, \cite{chew1995waves},
\eq{g(\bb{r},\bb{r}') = i k\sum_{l=0}^{\infty} \sum_{m=-l}^l \textit{Rg} \psi_{lm}(k,\bb{r}_{<}) \psi_{lm}(k,\bb{r}_{>}) \label{scalargreens}}

\noindent where $\bb{r}_{<}$ denotes the smaller of $\vert \bb{r} \vert$ and $\vert \bb{r}'\vert$, and $\bb{r}_{>}$ the greater of the two.

The routine \texttt{psilm} returns $\psi_{lm}(k,\br)$ at points $(r,\theta,\phi)$ up to maximum degree $L$, for all $\pm m$, linearly indexed. $k$ may be real or complex. Like \texttt{sphericalY}, the first dimension has the evaluation points and the second dimension is size $L^2 + 2L+1$. Use the string switch \texttt{'rg'} for $\textit{Rg} \psi_{lm}(k,\bb{r})$.
Evaluation of the sum over harmonics for all points can be accomplished with matrix-vector multiplication, then reshaping the result to match the dimensions of the input coordinates.

{\footnotesize
\VerbatimInput{\code/Wavefunctions/psilm.m}
}

\section{Scalar Plane Wave Expansion}

The expansion of scalar plane waves in terms of spherical waves at the origin is, \cite{tsang2000scattering},
\eq{e^{i \bb{k} \cdot \bb{r}} = 4\pi \sum_{l = 0}^{\infty} \sum_{m = -l}^l i^l  j_l(kr) Y_{lm}(\theta,\phi) Y_{lm}^*(\theta_k,\phi_k) \label{scalarplanewaveexp} }

\noindent where the spherical harmonics are fully-normalized with Condon-Shortley phase and 
\ea{\bb{r} &=& x \hat{x} + y \hat{y} + z \hat{z} \\
\bb{k} &=& k \hat{k} \\
\hat{k} &=& \sin\theta_k \cos\phi_k \hat{x} +  \sin\theta_k\sin\phi_k \hat{y} + \cos\theta_k \hat{z} }

The plane wave expansion coefficients are found by separating the regular scalar wave functions from \eqref{scalarplanewaveexp} and collecting the remaining terms into the expansion coefficients:
\ea{e^{i \bb{k} \cdot \bb{r}} &=& \sum_{l = 0}^{\infty} \sum_{m = -l}^l a_{lm} \textit{Rg} \psi_{lm}(k,\bb{r}) \\
a_{lm} &=&  4\pi i^l Y_{lm}^*(\theta_k,\phi_k)}

The routine \texttt{scalarPlaneWaveCoef} takes as input the maximum harmonic degree $L$ and the spherical coordinates of the plane wave propagation direction, $(\theta_k, \phi_k)$, and returns the $a_{lm}$ scalar plane wave expansion coefficients for $l = 0,...,L$, for all $\pm m$, linearly indexed.

{\footnotesize
\VerbatimInput{\code/Wavefunctions/scalarPlaneWaveCoef.m}
}

\clearpage

\section{Vector Spherical Harmonics}
\label{sec:vecsphharm}
Vector spherical harmonics capture the angular variation of a vector field in spherical coordinates and are used for far-field radiation patterns. Their close cousin is the radially independent vector wave function usually notated $\bb{X}_{lm}$, \cite{jackson1999classical}. The three fully normalized vector spherical harmonics, which form an orthonormal basis, are given by, \cite{chew1995waves, tsang2000scattering}, 
\begin{eqnarray}
\bb{P}_{lm}(\theta,\phi) &=& \hat{r}Y_{lm}(\theta,\phi) \\
\  & \ & \nonumber \\
\bb{B}_{lm}(\theta,\phi) &=&\dfrac{1}{\sqrt{ l(l+1)}} r\nabla Y_{lm}(\theta,\phi)  \\
\ &=& N_{lm} \left[\hat\theta \dfrac{d}{d\theta} P_l^m(\cos\theta) + \hat\phi \dfrac{im}{\sin\theta}P_l^m(\cos\theta) \right] e^{im\phi}   \\
\ &=& \dfrac{1}{\sqrt{2\pi l(l+1)}} \left[\hat\theta \dfrac{d}{d\theta} \widetilde{P}_l^m(\cos\theta) + \hat\phi \dfrac{im}{\sin\theta}\widetilde{P}_l^m(\cos\theta) \right] e^{im\phi}  \\
\bb{C}_{lm}(\theta,\phi) &=& \dfrac{1}{\sqrt{ l(l+1)}} \nabla \times \left[ \br Y_{lm}(\theta,\phi) \right] \\
\ &=& N_{lm} \left[\hat\theta \dfrac{im}{\sin\theta}P_l^m(\cos\theta) - \hat\phi \dfrac{d}{d\theta} P_l^m(\cos\theta)  \right] e^{im\phi}  \\
\ &=& \dfrac{1}{\sqrt{2\pi l(l+1)}}  \left[\hat\theta \dfrac{im}{\sin\theta}\widetilde{P}_l^m(\cos\theta) - \hat\phi \dfrac{d}{d\theta} \widetilde{P}_l^m(\cos\theta)  \right] e^{im\phi} 
\end{eqnarray}

\noindent where
\begin{equation}
N_{lm} = \dfrac{1}{\sqrt{l(l+1)}}\sqrt{\dfrac{2l+1}{4\pi}\dfrac{(l-m)!}{(l+m)!}}
\end{equation}

The Condon-Shortley phase, $(-1)^m$, is included in the definition of the associated Legendre polynomials.
The following relation exists between $\bb{B}_{lm}$ and $\bb{C}_{lm}$:
\begin{eqnarray}
\bb{B}_{lm}(\theta,\phi) &=& \hat{r} \times \bb{C}_{lm}(\theta,\phi) \\
\bb{C}_{lm}(\theta,\phi) &=& -\hat{r} \times \bb{B}_{lm}(\theta,\phi) 
\end{eqnarray}

In other words, $\bb{B}_{lm}\cdot \hat{\theta} = -\bb{C}_{lm} \cdot\hat{\phi} $ and  $\bb{B}_{lm}\cdot \hat{\phi} = \bb{C}_{lm} \cdot\hat{\theta} $.  The fully normalized vector spherical harmonics satisfy the orthonormality relations
\begin{equation}
\int_0^{2\pi} \int_0^{\pi}
\left\{
\begin{array}{c}
\bb{P}_{lm}(\theta,\phi) \cdot \bb{P}^*_{l'm'}(\theta,\phi) \\
\bb{B}_{lm}(\theta,\phi) \cdot \bb{B}^*_{l'm'}(\theta,\phi) \\
\bb{C}_{lm}(\theta,\phi) \cdot \bb{C}^*_{l'm'}(\theta,\phi) 
\end{array}
\right\}
\sin\theta d\theta d\phi = \delta_{ll'}\delta_{mm'}
\end{equation}

and
\eq{\int_0^{2\pi} \int_0^{\pi}\bb{B}_{lm}(\theta,\phi) \cdot \bb{C}^*_{l'm'}(\theta,\phi) \sin\theta d\theta d\phi= 0}

Any spherical vector field (i.e., polarized spherical far-field pattern) can be expanded as
\begin{equation}
\bb{F}(\theta,\phi) = \sum_{l=1}^{\infty}\sum_{m=-l}^l b_{lm} \bb{B}_{lm}(\theta,\phi) + c_{lm}\bb{C}_{lm}(\theta,\phi) \label{FinBC}
\end{equation}

The vector spherical harmonics can be expressed in Cartesian components using the angular momentum operators.  $\bb{C}_{lm}$ written with the angular momentum operator is 
\eq{\bb{C}_{lm} = \dfrac{\nabla \times\left[ \br Y_{lm}\right]}{\sqrt{l(l+1)}}  = \dfrac{-\br \times \nabla Y_{lm}}{\sqrt{l(l+1)}}  = \dfrac{-i \bb{\mathcal{L}} Y_{lm}}{\sqrt{l(l+1)}} }

which leads to 
\eq{\bb{C}_{lm} =  \dfrac{-i}{\sqrt{l(l+1)}} \left[\dfrac{1}{2}\left(\hat{x} - i \hat{y}\right)e_{lm}Y_{l,m+1} + \dfrac{1}{2}\left(\hat{x} + i \hat{y}\right) f_{lm}Y_{l,m-1} + mY_{l,m} \hat{z}\right] \label{Clmxyz}}

\noindent where
\ea{e_{lm} &=& \sqrt{(l-m)(l+m+1)} \\
f_{lm} &=& \sqrt{(l+m)(l-m+1)}  }

For the special case $(\theta,\phi) = (0,0)$, only the $m = \pm 1$ harmonics survive, which leads to
\ea{\bb{C}_{l,\pm1}(0,0) &=& \dfrac{-i}{2}\left(\hat{x} \mp i \hat{y}\right)\sqrt{\dfrac{2l+1}{4\pi}} \label{clmzplus} \\
\bb{B}_{l,\pm1}(0,0)  &=& \dfrac{-i}{2}\left(\hat{y} \pm i \hat{x}\right)\sqrt{\dfrac{2l+1}{4\pi}} \label{blmzplus}}

The second special case $(\theta,\phi) = (\pi,0)$ has again only $m = \pm 1$ harmonics.  Using $Y_{lm}(\pi,0) = (-1)^l Y_{lm}(0,0)$ we get
\ea{\bb{C}_{l,\pm1}(\pi,0) &=& (-1)^l \bb{C}_{l,\pm1}(0,0) \label{clmzminus} \\
\bb{B}_{l,\pm1}(\pi,0) &=& (-1)^{l+1} \bb{B}_{l,\pm1}(0,0) \label{blmzminus}}

The routine \texttt{BC} returns the components of $\bb{B}_{lm}$ and $\bb{C}_{lm}$, for all harmonics up to degree $L$, for all $\pm m$, linearly indexed along the second dimension with spherical points along the first dimension.
There is duplication in returning each vector component, but none in the computation. Use the string switch \texttt{'norm'} for fully normalized ($1/\sqrt{l(l+1)}$).

The helper function \texttt{BCmult} takes the outputs from \texttt{BC} and expansion coefficients $b_{lm}$ and $c_{lm}$ and returns the field components $F_{\theta}$ and $F_{\phi}$ of \eqref{FinBC}. These arrays are columnized and require reshaping to make them the same size as the \texttt{BC} input $(\theta,\phi)$ arrays.

{\footnotesize
\VerbatimInput{\code/Wavefunctions/BC.m}
}

{\footnotesize
\VerbatimInput{\code/Wavefunctions/BCmult.m}
}

\clearpage
\newpage

\section{Vector Spherical Wave Functions}
\label{sec:vecsphwave}
The vector spherical wave functions are solutions to the free-space vector wave equation in spherical coordinates for electric and magnetic fields, \cite{chew1995waves, tsang2000scattering}.  Analogous to scalar wave functions, they are composed of products of spherical Bessel functions and vector spherical harmonics and come in regular and radiating forms. 
\begin{eqnarray}
\textit{Rg}\M{k,\br} &=& \dfrac{1}{\sqrt{l(l+1)}}\nabla \times \left[ \bb{r} \textit{Rg}\psi_{lm}(k,\br)\right]  \\
\ &= & j_l(kr) \bb{C}_{lm}(\theta,\phi) \\ 
\ & \ & \  \nonumber  \\
\M{k,\br} &=& \dfrac{1}{\sqrt{l(l+1)}}\nabla \times \left[ \bb{r} \psi_{lm}(k,\br)\right]  \\
\ &= & h_l^{(1)}(kr) \bb{C}_{lm}(\theta,\phi) \\
\ & \ & \  \nonumber \\
\textit{Rg}\N{k,\br} &=&  \dfrac{1}{k} \nabla \times \textit{Rg}\M{k,\br}  \\
\ &=&  \sqrt{l(l+1)}\dfrac{j_l(kr)}{kr} \bb{P}_{lm} (\theta,\phi) + \dfrac{\left[kr j_l(kr)\right]'}{kr} \bb{B}_{lm}(\theta,\phi) \\
\ & \ & \  \nonumber \\
\N{k,\br} &=&  \dfrac{1}{k} \nabla \times \M{k,\br}  \\
\ &=&  \sqrt{l(l+1)}\dfrac{h_l^{(1)}(kr)}{kr} \bb{P}_{lm} (\theta,\phi) + \dfrac{\left[kr h_l^{(1)}(kr)\right]'}{kr} \bb{B}_{lm}(\theta,\phi)  
\end{eqnarray}

Note that $\M{k,\br}$ contains only $\hat\theta$ and $\hat\phi$ components, while $\N{k,\br}$ contains the same plus an $\hat r$ component.  $\N{k,\br}$ holds the near-field radial part of a field, which disappears in the far-field.  Here, $\bb{P}_{lm}$, $\bb{B}_{lm}$ and $\bb{C}_{lm}$ are fully normalized. If partially normalized wave functions are desired, then both sides of the equations are multiplied by $\sqrt{l(l+1)}$.

The two wave functions are related by 
\ea{\M{k,\br} &=& \dfrac{1}{k} \curl \N{k,\br} \label{vswfcurlM}\\
\N{k,\br} &=& \dfrac{1}{k} \curl \M{k,\br} \label{vswfcurlN}}

An electric or magnetic field can be written as a sum of vector wave functions. This is the multipole expansion of the vector field.  $\M{k,\br}$ and $\N{k,\br}$ are linearly independent, so any field is a linear combination of both:
\begin{equation}
\bb{E}(\br) = \sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \M{k,\br} + b_{lm} \N{k,\br}
\end{equation}

\begin{equation}
\bb{H}(\br) = \dfrac{k}{i\omega\mu}\sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \N{k,\br} + b_{lm} \M{k,\br} \label{Halmblm}
\end{equation}

\noindent where $a_{lm}$ and $b_{lm}$ are complex expansion coefficients. These equations are for radiating fields. Expressions for incoming fields will use the $\textit{Rg}$ counterparts.
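As an added consistency check (not in the original text), \eqref{Halmblm} follows from Faraday's law, $\nabla \times \bb{E} = i\omega\mu \bb{H}$ (assuming the $e^{-i\omega t}$ time convention), together with the curl relations \eqref{vswfcurlM} and \eqref{vswfcurlN}:
\ea{\bb{H}(\br) &=& \dfrac{1}{i\omega\mu} \nabla \times \bb{E}(\br) = \dfrac{1}{i\omega\mu} \sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \nabla \times \M{k,\br} + b_{lm} \nabla \times \N{k,\br} \\
\ &=& \dfrac{k}{i\omega\mu}\sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \N{k,\br} + b_{lm} \M{k,\br}}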
The addition theorem for the dyadic Green's function can be written as an outer product of normalized vector wave functions, \cite{chew1995waves},
\eq{\G{} = ik\sum_{l=1}^{\infty}\sum_{m=-l}^l \textit{Rg}\M{k,\br}\Mhat{k,\br'} + \textit{Rg}\N{k,\br}\Nhat{k,\br'} \label{dyadicgreensaddition}}

\noindent where $\hat{}$ means conjugation of the angular function only. This equation applies under the condition $\vert \br \vert < \vert \br' \vert$. When $\vert \br \vert > \vert \br' \vert$ the equation changes so that the \textit{Rg} is instead applied to $\Mhat{k,\br'}$ and $\Nhat{k,\br'}$. The far-field vector wave functions are found by taking the large argument limit of the Bessel functions in $\M{k,\br}$ and $\N{k,\br}$
\begin{eqnarray}
\lim_{kr \rightarrow \infty} \M{k,\br} &=& \dfrac{e^{ikr}}{kr} i^{-l-1} \bb{C}_{lm}(\theta,\phi) \label{farfieldM} \\
\lim_{kr \rightarrow \infty} \N{k,\br} &=& \dfrac{e^{ikr}}{kr} i^{-l} \bb{B}_{lm}(\theta,\phi) \label{farfieldN}
\end{eqnarray}

Vector wave functions are orthogonal over the unit sphere as
\begin{eqnarray}
\int \bb{M}_{lm}(k,\br) \cdot \hat{\bb{M}}_{l'm'}(k,\br) d\Omega &=& \left(h_l^{(1)}(kr)\right)^2 \delta_{ll'}\delta_{mm'}\label{MMint} \\
\int \bb{N}_{lm}(k,\br) \cdot \hat{\bb{N}}_{l'm'}(k,\br) d\Omega &=& \dfrac{1}{(kr)^2}\left[l(l+1)\left(h_l^{(1)}(kr)\right)^2 + \left(\left[kr h_l^{(1)}(kr)\right]'\right)^2\right] \delta_{ll'}\delta_{mm'} \\
\int \bb{M}_{lm}(k,\br) \cdot \hat{\bb{N}}_{l'm'}(k,\br) d\Omega &=& 0
\end{eqnarray}

The cross products dotted with the radial vector and integrated over the sphere are
\begin{eqnarray}
\int \left(\bb{M}_{lm}(k,\br) \times \hat{\bb{M}}_{l'm'}(k,\br)\right)\cdot \hat{\br}  d\Omega &=& 0 \label{MNrorth1} \\
\int \left(\bb{N}_{lm}(k,\br) \times \hat{\bb{N}}_{l'm'}(k,\br) \right)\cdot \hat{\br} d\Omega &=& 0 \\
\int \left( \bb{M}_{lm}(k,\br) \times \hat{\bb{N}}_{l'm'}(k,\br)\right)\cdot \hat{\br} d\Omega &=& h_l^{(1)}(kr) \dfrac{\left[kr h_l^{(1)}\right]'}{kr} \delta_{ll'}\delta_{mm'}  \label{MNrorth3}
\end{eqnarray}

These hold for any combination of regular or radiating wave functions; simply substitute the corresponding Bessel functions.

The routine \texttt{MN} returns the components of $\M{k,\br}$ and $\N{k,\br}$, for all harmonics up to degree $L$, for all $\pm m$, linearly indexed. This routine calls \texttt{BC}.
Like \texttt{BC}, the outputs are 2D arrays with $L^2 + 2L$ harmonics along the second dimension and the evaluation points at $(r,\theta,\phi)$ along the first.  Use the string switch \texttt{'rg'} for regular waves, \texttt{'hat'} for angular conjugation, and \texttt{'norm'} to use fully normalized vector spherical harmonics. It defaults to partial normalization. For regular waves, the spherical Bessel function routines from Chapter \ref{sphericalbess} take care of $kr$ near the origin.  Use the following input constructions for \texttt{MN}:

\renewcommand{\arraystretch}{1.5}
\begin{table}[H]
\caption{\texttt{MN} Input Options}
\begin{center}
\begin{tabular}{|l|l|}
\hline
$\M{k,\br}$, $\N{k,\br}$ & \texttt{MN(L,k,r,theta,phi)} \\
\hline
$\textit{Rg}\M{k,\br}$, $\textit{Rg}\N{k,\br}$ & \texttt{MN(L,k,r,theta,phi,'rg')} \\
\hline
$\Mhat{k,\br}$, $\Nhat{k,\br}$ & \texttt{MN(L,k,r,theta,phi,[],'hat') } \\
\hline
$\textit{Rg}\Mhat{k,\br}$, $\textit{Rg}\Nhat{k,\br}$ & \texttt{MN(L,k,r,theta,phi,'rg','hat')}  \\ 
\hline
Fully normalized $\bb{B}_{lm}$ and $\bb{C}_{lm}$& \texttt{MN(L,k,r,theta,phi,...,...,'norm')} \\
\hline 
\end{tabular}
\end{center}
\label{MNinputs}
\end{table}

The helper function \texttt{MNmult} takes the outputs from \texttt{MN} and expansion coefficients $a_{lm}$ and $b_{lm}$ and returns the electric field components $E_r, E_{\theta}, E_{\phi}$. These arrays are columnized and require reshaping to make them the same size as the \texttt{MN} input $(r, \theta,\phi)$ arrays. Magnetic field components are obtained by swapping $a_{lm}$ and $b_{lm}$ and multiplying by $k/i\omega\mu$ as in \eqref{Halmblm}. 

{\footnotesize
\VerbatimInput{\code/Wavefunctions/MN.m}
}

{\footnotesize
\VerbatimInput{\code/Wavefunctions/MNmult.m}
}

\section{Vector Plane Wave Expansion}

Here we derive the vector wave function expansion coefficients for vector plane waves with arbitrary propagation direction.
These are needed when combining plane wave excitations and vector wave functions, for instance, in T-matrix scattering problems.

\subsection{Plane Wave Representation}

Let the electric and magnetic fields of the vector plane wave be written
\ea{\bb{E}(\br) &=& \bb{E} e^{i \bb{k} \cdot \bb{r}} \label{Eplane} \\
\bb{H}(\br) &=& \bb{H}  e^{i \bb{k} \cdot \bb{r}} }

\noindent where
\ea{\bb{r} &=& x \hat{x} + y \hat{y} + z \hat{z} \\
\bb{k} &=& k \hat{k} \\
\hat{k} &=& \sin\theta_k \cos\phi_k \hat{x} +  \sin\theta_k\sin\phi_k \hat{y} + \cos\theta_k \hat{z} }

and 
\ea{\bb{E} &=& E_x \hat{x} + E_y \hat{y} + E_z \hat{z} \label{Explane}  \\
\bb{H} &=& \dfrac{1}{\eta} \hat{k} \times \bb{E} \label{Hcross} \\
\ & =& H_x \hat{x} + H_y \hat{y} + H_z \hat{z}   }

\noindent where $\eta$ is the characteristic impedance of free-space.

\subsection{Vector Plane Wave Coefficients}

To solve for the expansion coefficients about the origin, write the electric and magnetic fields as an expansion of regular waves at the origin
\ea{
\bb{E}(\br) &=& \sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \textit{Rg}\M{k,\br} + b_{lm} \textit{Rg}\N{k,\br} \\
\bb{H}(\br) &=& \dfrac{k}{i\omega\mu}\sum_{l=1}^{\infty}\sum_{m=-l}^l a_{lm} \textit{Rg}\N{k,\br} + b_{lm} \textit{Rg}\M{k,\br} 
}

Multiplying both fields by $\textit{Rg}\Mhat{k,\br}$, integrating over the sphere, applying orthogonality for fully normalized wave functions, \eqref{MMint}, and canceling one factor of the Bessel functions, we get 
\ea{a_{lm}j_l(kr) &=& \int \bb{C}^*_{lm}(\theta,\phi) \cdot \bb{E}(\br) d\Omega  \label{cstarte} \\
b_{lm}j_l(kr)\dfrac{k}{i\omega\mu} &=& \int \bb{C}^*_{lm}(\theta,\phi) \cdot \bb{H}(\br) d\Omega \label{cstarth}
}

Substituting \eqref{Eplane} into \eqref{cstarte}, then substituting the scalar plane wave expansion, \eqref{scalarplanewaveexp}, which cancels the remaining Bessel function, and exchanging the sum and integration yields
\eq{a_{lm} = 4\pi \bb{E}  \cdot \sum_{l' = 0}^{\infty} \sum_{m' = -l'}^{l'} i^{l'}   \int \bb{C}^*_{lm}(\theta,\phi) Y_{l'm'}(\theta,\phi) Y_{l'm'}^*(\theta_k,\phi_k) d\Omega  }

Using \eqref{Clmxyz} it can be shown that the integral and sum reduce for both coefficients to  
\ea{a_{lm} &=& 4\pi i^l  \bb{C}^*_{lm}(\theta_k,\phi_k)  \cdot  \bb{E} \label{almplaneE}\\
b_{lm} &=& 4\pi i^{l+1}  \bb{C}^*_{lm}(\theta_k,\phi_k) \cdot  (\eta  \bb{H}) \label{blmH}\\
\ &=& -4\pi i^{l+1}  \bb{B}^*_{lm}(\theta_k,\phi_k) \cdot  \bb{E} \label{blmplaneE}}

\noindent where the last equation comes from taking $\hat{k} = \hat{r}$ in \eqref{Hcross}.
Taking the dot product between \eqref{Clmxyz} and \eqref{Explane}, the coefficients are written out as 
\ea{a_{lm} &=& \dfrac{4\pi i^{l+1}}{\sqrt{l(l+1)}} \left[ \dfrac{1}{2}\left(E_x + i E_y\right)\sqrt{(l-m)(l+m+1)} Y^*_{l,m+1}(\theta_k,\phi_k) \right. \nonumber \\
\ & \ &  \left. + \dfrac{1}{2}\left(E_x - i E_y\right)\sqrt{(l+m)(l-m+1)}Y^*_{l,m-1}(\theta_k,\phi_k)  + E_z m Y^*_{lm}(\theta_k,\phi_k)  \right] \\
b_{lm} &=& i a_{lm}(\bb{E} \rightarrow \eta \bb{H} =  \hat{k} \times \bb{E}) }

The plane wave expansion coefficients have left and right circularly polarized amplitudes embedded in them.  

The routine \texttt{vectorPlaneWaveCoef} takes as input the maximum harmonic degree $L$, the electric field components of the plane wave, $E_x$, $E_y$, and $E_z$, and the spherical coordinates of the plane wave propagation direction, $(\theta_k, \phi_k)$, and returns the $a_{lm}$ and $b_{lm}$ vector plane wave expansion coefficients for $l = 1,...,L$, for all $\pm m$, linearly indexed. 

{\footnotesize
\VerbatimInput{\code/Wavefunctions/vectorPlaneWaveCoef.m}
}

\subsection{$\hat{z}$-Propagating Plane Wave}

Here we derive the special case of a vector plane wave propagating in the $\hat{z}$ direction, $\theta_k = \phi_k = 0$, and $E_z = 0$.  This is useful for radar backscatter computations. Only the $m=\pm 1$ harmonics are needed, and the expansion coefficients become
\ea{a_{l,\pm1} &=& 2\pi i^{l+1} \left(E_x \mp i E_y\right)Y_{l,0}(0,0)  \\
b_{l,\pm1} &=& -2\pi i^{l} \eta \left(H_x \mp i H_y\right)Y_{l,0}(0,0) }

Using \eqref{ylzerozz} and the fact that $H_x = -E_y/\eta$ and $H_y = E_x/\eta$ for a $\hat{z}$-propagating plane wave, one can show
\ea{a_{l,\pm1} &=& \sqrt{\pi (2l +1)} i^{l+1} \left(E_x \mp i E_y\right) \\
b_{l,\pm1} &=& \pm a_{l,\pm1} }

The routine \texttt{vectorPlaneWaveCoefZ} works like \texttt{vectorPlaneWaveCoef}.
It takes as input the maximum harmonic degree $L$ and the electric field components of the $\hat{z}$-propagating plane wave, $E_x$, $E_y$, and returns the $a_{lm}$ and $b_{lm}$ vector plane wave expansion coefficients for $l = 1,...,L$, for all $\pm m$, linearly indexed.

{\footnotesize
\VerbatimInput{\code/Wavefunctions/vectorPlaneWaveCoefZ.m}
}

\section{Band-limited Fields and Required Number of Spherical Harmonics}

The number of spherical harmonics, or spherical spectral components, that are needed to fully capture the information contained in a far-field radiation pattern depends only on the size of the object and the background wavenumber. This is true regardless of the makeup of the object and applies to both scalar and vector fields. For instance, the radiation pattern could be the scattered field from a natural object or the field radiated by an antenna array. 

It is commonly stated that $O(kd)$ spherical harmonics are required to represent a spherical field pattern, where $k$ is the background wavenumber and $d$ is the diameter of the smallest sphere that encloses the object. $O(kd)$ actually gives the maximum degree harmonic $L$, where all $m$ need to be included in the expansion. Therefore, a total of $N = L^2 + 2L + 1$ harmonics are needed in the case of scalar waves. This is also equal to the number of angular points needed to Nyquist-sample the spatial pattern of a field over the sphere. This criterion has been studied in detail, \cite{yaghjian1996sampling}, and a more precise value for the maximum degree harmonic, written in terms of the radius of the smallest enclosing sphere, $a$, is 
\eq{L \approx \left\lceil 1.1 ka \left( 1 + \dfrac{1}{ka}\right) \right\rceil \label{LLmax} }

This is a remarkable fact about fields. No matter the makeup of a scattering object or antenna, so long as it is contained within a radius $a$ relative to the origin, the angular variation of the far-field can only oscillate as fast as the maximum harmonic $(l,m) = (L,\pm L)$.  This is what is meant by the phrase band-limited field: the spherical harmonic spectrum of any radiated field is finite. 

A quick derivation and visual of this idea follows. Start with the volume integral equation for the scalar scattered field
\eq{\phi_{sca}(\br) = \int g(\br,\br') s(\br') dV' \label{view}}

\noindent where $s(\br)$ is a volume source.
Substitute the addition theorem for the scalar Green's function, \eqref{scalargreens}, so that \eqref{view} is expanded 
\ea{\phi_{sca}(\br) &=& \sum_{lm} a_{lm} \psi_{lm}(\br)  \\
a_{lm} &=&  i k \int \textit{Rg}\psi_{lm}(\br') s(\br')  dV' \label{almpoint}}

Choose a point source in spherical coordinates that has unit amplitude at radius $a$ along the $z$ axis:
\eq{s(\br) = \dfrac{1}{r^2\sin\theta}\delta(r-a)\delta(\theta)\delta(\phi)}

Evaluating \eqref{almpoint} with this source over spherical coordinates gives the expansion coefficients (nonzero only for $m=0$, since $Y_{lm}(0,0) = 0$ otherwise)
\ea{a_{lm} &=&  i k  j_l(ka) Y_{lm}(0,0) \\
\ & = &  i k \sqrt{\dfrac{2l+1}{4\pi}} j_l(ka)  }

The power in each harmonic is given by the magnitude squared of the coefficients
\ea{\vert a_{lm}\vert^2  &=&  k^2  \dfrac{(2l+1)}{4\pi} \vert j_l(ka) \vert^2  }

Finally, we can normalize this to capture only the dependence on $l$
\ea{  \vert \widetilde{a}_{lm}\vert^2  &=& (2l+1) \vert j_l(ka) \vert^2  }

Figure \ref{sphharmcont} left shows the magnitude of the normalized coefficients as a function of $ka$ and $l$. Above the line $l = ka$, the harmonic content of the expansion coefficients falls off rapidly.  Figure \ref{sphharmcont} right shows the normalized harmonics for $ka = 25$ as well as the cutoff predicted by \eqref{LLmax}. 

\begin{figure}[H] 
   \centering
      \begin{tabular}{cc}
     \subfigure{\includegraphics[width=3in]{WaveFunctions/Figures/sphharmcont}}
     \subfigure{\includegraphics[width=3in]{WaveFunctions/Figures/sphharmcontex}}
  \end{tabular}
\caption{Spherical harmonic content of a scalar point source at $z = a$. Left: coefficients vs $ka$ with the line $l = ka$ plotted. Right: example of $ka = 25$ with \eqref{LLmax} plotted.}
\label{sphharmcont}
\end{figure}
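The right panel is easy to reproduce with the following added MATLAB sketch (not one of the library routines; the plotting details are arbitrary), which builds the spherical Bessel function from the half-integer-order \texttt{besselj}:

{\footnotesize
\begin{verbatim}
% Normalized harmonic power of a point source at radius a, ka = 25
ka = 25; l = 0:60;
jl = sqrt(pi/(2*ka)) * besselj(l + 0.5, ka);  % spherical Bessel j_l(ka)
p  = (2*l + 1) .* abs(jl).^2;                 % normalized |a_lm|^2
semilogy(l, p/max(p)); xlabel('l'); ylabel('normalized harmonic power');
\end{verbatim}
}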
{"text": " \\section{Integration Variable Remapping}\n\n\\subsection{Introduction}\nThis section will layout the jacobian needed for the 10 $\\rar$ 10 remapping of\nvariables for the parton level cross section. The base variables used are shown below.\n\n\\begin{myitemize}\n\\item $p_{3}$: Absolute momentum of the lepton\n\\item $p_{5}$: Absolute momentum of the first quark\n\\item $p_{6}$: Absolute momentum of the second quark\n\\item $p_{tot}^{z}$: Total $p_{z}$ of the system\n\\item $cos(\\theta_{3})$: Cosine($\\theta$) of the lepton\n\\item $\\phi_{3}$: $\\phi$ of the lepton\n\\item $cos(\\theta_{5})$: Cosine($\\theta$) of the first quark\n\\item $\\phi_{5}$: $\\phi$ of the first quark\n\\item $cos(\\theta_{6})$: Cosine($\\theta$) of the second quark\n\\item $\\phi_{6}$: $\\phi$ of the second quark\n\\end{myitemize}\n\nOther variables that are useful for the integration are\n\\begin{myitemize}\n\\item $m_{34}$: Mass of the lepton and neutrino (W mass)\n\\item $m_{345}$: Mass of the lepton, neutrino, and first quark (top mass)\n\\item $m_{56}$: Mass of the first and second quark ($b\\bar{b}$ mass)\n\\end{myitemize}\n\nSince some of these variables are sharp peaks (W and top masses), it is much\nbetter to sample from the expected distribution rather than make requirements of\nthe invariant masses. The W and top masses are expected to follow a Breit-Wigner\ndistribution shown below.\n\n\\begin{equation}\n\\sigma(M_{34}) = \\frac{1}{\\pi} \\left[ \\frac{\\gamma}{(M_{34} - M_{W})^{2} +\n\\gamma^{2}} \\right]\n\\end{equation}\n\nwhere $M_{34}$ is the mass of the lepton and neutrino. Similarly, the top mass\nhas the following expected distribution.\n\n\\begin{equation}\n\\sigma(M_{345}) = \\frac{1}{\\pi} \\left[ \\frac{\\gamma}{(M_{345} - M_{top})^{2} +\n\\Gamma^{2}} \\right]\n\\end{equation}\n\nwhere  $M_{345}$ is the mass of the lepton, neutrino, and first quark.\n\n\\subsection{Sampling from a Breit-Wigner mass distribution}\n\nSampling from a Breit-Wigner distribution is done by selecting a random point\nbetween 0 and 1 from the cumulative distribution function of the BW\nfunction. The cumulative distribution function, of cdf, for the Breit-Wigner\ndistribution is shown below.\n\n\\begin{equation}\n\\label{sample}\n\\int \\sigma(m;m_{0};\\Gamma) = F(m;m_{0},\\Gamma) = \\frac{1}{\\pi} \\tan^{-1} \\left[\n\\frac{m-m_{0}}{\\Gamma} \\right] + \\frac{1}{2}\n\\end{equation}\n\nThe value of F is taken as a random number between 0 and 1. After selecting a\nvalue of F, the next step is to solve for m. As a function of F, defined as u for\nthe following, the mass is \n\n\n\\begin{equation}\nm = m_{0} + \\Gamma \\tan \\left[ \\pi(u - \\frac{1}{2}) \\right]\n\\end{equation}\n\n\\subsection{Sampling from a Breit-Wigner $S_{cm}$ distribution}\n\nIn the previous example, a distribution was sampled using a random number\nuniformly distributed from 0 to 1. 
\subsection{Sampling from a Breit-Wigner $S_{cm}$ distribution}

In the previous example, a distribution was sampled using a random number uniformly distributed from 0 to 1. In this example, a new random number is used that is also uniformly sampled from 0 to 1, but the maximum and minimum values of the variable are taken into account in the Jacobian.

The distribution of the variable $S_{cm}$ is the following

\begin{equation}
\label{defines}
s = m_{0}^{2} + m_{0}\Gamma \tan \left[ m_{0} \Gamma r \right]
\end{equation}

where $r$ is defined in terms of the random variable, $u$, that is uniformly distributed between 0 and 1.

\begin{equation}
\label{rtou}
r = (r_{max} - r_{min}) \times u + r_{min}
\end{equation}

where $r_{max}$ and $r_{min}$ are defined in terms of the variable $s_{cm}$.

\begin{eqnarray}
\label{definer}
r = \frac{1}{m_{0}\Gamma} \tan^{-1} \left[ \frac{s - m_{0}^{2}}{m_{0}\Gamma} \right] \\
r_{min} = \frac{1}{m_{0}\Gamma} \tan^{-1} \left[ \frac{s_{min} -
m_{0}^{2}}{m_{0}\Gamma} \right] \\
r_{max} = \frac{1}{m_{0}\Gamma} \tan^{-1} \left[ \frac{s_{max} - m_{0}^{2}}{m_{0}\Gamma} \right]
\end{eqnarray}

\subsection{Jacobian for random sampling of a Breit-Wigner distribution around
the W mass squared, $s_{34}$}

The first case to consider is the Breit-Wigner sampling of the W mass squared, $s_{34}$, replacing the integration variable $p_{3}$, the lepton momentum.

\begin{equation}
\label{jacobian1}
|J(p_{3}, u)| = \left| \pderiv{p_{3}}{u} \right| 
\end{equation}

Because $r$ is defined in terms of $u$, we can rewrite equation~\ref{jacobian1} in terms of $r$ instead of $u$.

\begin{equation}
\label{jacobian2}
|J(p_{3}, u)| = \left| \pderiv{p_{3}}{u} \right| = \left| \pderiv{p_{3}}{r}
\times \pderiv{r}{u} \right|
\end{equation}

And since the variable $r$ is sampling the $s_{34}$ distribution, it makes sense to define the Jacobian in terms of this variable instead of $p_{3}$.

\begin{equation}
\label{jacobian3}
|J(p_{3}, u)| = \left| \pderiv{p_{3}}{r} \times \pderiv{r}{u} \right| = \left|
\pderiv{p_{3}}{s_{34}} \times \pderiv{s_{34}}{r} \times \pderiv{r}{u} \right| =
\left | \frac{\pderiv{s_{34}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{34}}{p_{3}}} \right |
\end{equation}

Equation~\ref{jacobian3} has three components: $\pderiv{s_{34}}{r}$, $\pderiv{r}{u}$, and $\pderiv{s_{34}}{p_{3}}$.
From equation~\ref{rtou}, the partial derivative of $r$ with respect to $u$ is 

\begin{equation}
\label{drdu}
\pderiv{r}{u} = r_{max} - r_{min} = \Delta R
\end{equation}

Next, the partial derivative of $s_{34}$ with respect to $r$ can be determined from equation~\ref{defines}.

\begin{equation}
\label{ds34dr}
\pderiv{s_{34}}{r} = (m_{W} \Gamma_{W})^{2} \sec^{2} \left[m_{W} \Gamma_{W} r \right]
\end{equation}

Inserting the value of $r(s)$ as defined in equation~\ref{definer}, equation~\ref{ds34dr} can be rewritten as

\begin{equation}
\label{ds34dr_2}
\pderiv{s_{34}}{r} = (m_{W} \Gamma_{W})^{2} \sec^{2} \left[m_{W} \Gamma_{W} r \right] =
(m_{W} \Gamma_{W})^{2} \sec^{2} \left[\arctan \left[ \frac{s_{34} - m_{W}^{2}}{m_{W}
\Gamma_{W}}  \right] \right]
\end{equation}

Equation~\ref{ds34dr_2} is simplified by defining a right triangle where

\begin{eqnarray}
\nonumber
\tan(\theta) = \frac{s_{34}-m_{W}^{2}}{m_{W}\Gamma_{W}} \\
\cos(\theta) = \frac{1}{\sqrt{1+\left[ \frac{s_{34}-m_{W}^{2}}{m_{W}\Gamma_{W}}
\right]^{2}}}
\end{eqnarray}

Using these definitions, equation~\ref{ds34dr_2} is finally defined as

\begin{equation}
\label{ds34dr_3}
\pderiv{s_{34}}{r} = (m_{W} \Gamma_{W})^{2} \sec^{2} \left[\arctan \left[ \frac{s_{34} - m_{W}^{2}}{m_{W}
\Gamma_{W}}  \right] \right] = (m_{W}\Gamma_{W})^{2} + (s_{34} - m_{W}^{2})^{2}
\end{equation}

Finally, we need the partial derivative of $s_{34}$ with respect to $p_{3}$. First, we define $s_{34}$

\begin{equation}
\label{defines34}
s_{34} = m_{3}^{2} + m_{4}^{2} + 2E_{3}E_{4} - 2p_{3}^{x}p_{4}^{x} - 2p_{3}^{y}p_{4}^{y} - 2p_{3}^{z}p_{4}^{z}
\end{equation}

Since the neutrino four-vector is defined in terms of all the other particles in the event, we need to rewrite equation~\ref{defines34} to expose all the dependences on $p_{3}$. For the following, it is assumed that the lepton and neutrino are massless, meaning $E_{3} = p_{3}$. 

\begin{eqnarray}
\label{defines34_2}
\nonumber
s_{34} = 2p_{3}\sqrt{(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x})^{2} + (-p_{3}^{y} -p_{5}^{y} -p_{6}^{y})^{2} + (p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z})^{2}} \\
- 2p_{3}^{x}(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x}) -
2p_{3}^{y}(-p_{3}^{y} -p_{5}^{y} -p_{6}^{y}) - 2p_{3}^{z}(p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z})
\end{eqnarray}

After combining like terms, we can evaluate the partial derivative of $s_{34}$ with respect to $p_{3}$, which yields the relatively simple formula

\begin{equation}
\label{ds34dp3}
\pderiv{s_{34}}{p_{3}} = 2(p_{3} + p_{4})(1 - \hat{p_{3}} \cdot \hat{p_{4}})
\end{equation}

Finally, we can rewrite the Jacobian defined in equation~\ref{jacobian3} as

\begin{equation}
\label{jacobian4}
|J(p_{3}, u)| = \left | \frac{\pderiv{s_{34}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{34}}{p_{3}}} \right | = \frac{\Delta R \times \left[
(m_{W}\Gamma_{W})^{2} + (s_{34} - m_{W}^{2})^{2} \right]}{2(p_{3} + p_{4})(1 - \hat{p_{3}} \cdot \hat{p_{4}})}
\end{equation}

In some cases, it is also common to replace the first quark momentum integration with the Breit-Wigner sampling variable. In that case, we need to evaluate

\begin{equation}
\label{jacobian_5}
|J(p_{5}, u)| = \left | \frac{\pderiv{s_{34}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{34}}{p_{5}}} \right |
\end{equation}

For this substitution, we only need to evaluate the partial derivative of $s_{34}$ with respect to $p_{5}$.
Assuming a massless quark, the result is

\begin{equation}
\label{ds34dp5}
\pderiv{s_{34}}{p_{5}} = 2p_{3}(\hat{p_{3}} \cdot \hat{p_{5}} - \hat{p_{4}} \cdot \hat{p_{5}})
\end{equation}

Combining equation~\ref{ds34dp5} with equation~\ref{jacobian_5} yields

\begin{equation}
\label{jacobian_6}
|J(p_{5}, u)| = \left | \frac{\pderiv{s_{34}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{34}}{p_{5}}} \right | = \frac{\Delta R \times \left[
(m_{W}\Gamma_{W})^{2} + (s_{34} - m_{W}^{2})^{2} \right]}{2p_{3}(\hat{p_{3}} \cdot \hat{p_{5}} - \hat{p_{4}} \cdot \hat{p_{5}})}
\end{equation}

%
%-----------------------------------------------------------------------
%

\subsection{Jacobian for random sampling of a Breit-Wigner distribution around
the top mass squared, $s_{345}$}

The next case to consider is the Breit-Wigner sampling of the top mass squared, $s_{345}$, again replacing the integration variable $p_{3}$, the lepton momentum. As before, we need to calculate the following

\begin{equation}
\label{jacobian_1}
|J(p_{3}, u)| = \left| \pderiv{p_{3}}{r} \times \pderiv{r}{u} \right| = \left|
\pderiv{p_{3}}{s_{345}} \times \pderiv{s_{345}}{r} \times \pderiv{r}{u} \right| =
\left | \frac{\pderiv{s_{345}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{345}}{p_{3}}} \right |
\end{equation}

Equation~\ref{jacobian_1} has three components: $\pderiv{s_{345}}{r}$, $\pderiv{r}{u}$, and $\pderiv{s_{345}}{p_{3}}$. We know $\pderiv{r}{u}$ from equation~\ref{drdu}, where $r_{min}$ and $r_{max}$ are defined by the $s_{345}$ system instead of the $s_{34}$ system. We also know $\pderiv{s_{345}}{r}$ from equation~\ref{ds34dr_3}, where we replace $s_{34}$ with $s_{345}$.

\begin{equation}
\label{ds345dr}
\pderiv{s_{345}}{r} = (m_{t}\Gamma_{t})^{2} + (s_{345} - m_{t}^{2})^{2}
\end{equation}

We still need the partial derivative of $s_{345}$ with respect to $p_{3}$. First, we define $s_{345}$

\begin{eqnarray}
\label{defines345}
\nonumber
s_{345} = m_{3}^{2} + m_{4}^{2} + m_{5}^{2} + 2E_{3}E_{4} + 2E_{3}E_{5} +
2E_{4}E_{5} - \\
2p_{3}^{x}p_{4}^{x} - 2p_{3}^{x}p_{5}^{x} - 2p_{4}^{x}p_{5}^{x} -
2p_{3}^{y}p_{4}^{y} - 2p_{3}^{y}p_{5}^{y} - 2p_{4}^{y}p_{5}^{y} -
2p_{3}^{z}p_{4}^{z} - 2p_{3}^{z}p_{5}^{z} - 2p_{4}^{z}p_{5}^{z}
\end{eqnarray}

As before, the neutrino four-vector is defined in terms of all the other particles in the event, so we need to rewrite equation~\ref{defines345} to expose all the dependences on $p_{3}$.
For the following, it is assumed that the lepton, neutrino, and quark are massless, meaning $E_{3} = p_{3}$.

\begin{eqnarray}
\label{defines345_2}
\nonumber
s_{345} = 2p_{3}\sqrt{(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x})^{2} + (-p_{3}^{y}
-p_{5}^{y} -p_{6}^{y})^{2} + (p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z})^{2}}
+ \\
\nonumber
2p_{3}p_{5} + 2\sqrt{(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x})^{2} + (-p_{3}^{y} -p_{5}^{y}
-p_{6}^{y})^{2} + (p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z})^{2}}p_{5} - \\
\nonumber
2p_{3}^{x}(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x}) - 2p_{3}^{x}p_{5}^{x} -
2(-p_{3}^{x} -p_{5}^{x} -p_{6}^{x})p_{5}^{x} - \\
\nonumber
2p_{3}^{y}(-p_{3}^{y} -p_{5}^{y} -p_{6}^{y}) - 2p_{3}^{y}p_{5}^{y} -
2(-p_{3}^{y} -p_{5}^{y} -p_{6}^{y})p_{5}^{y} - \\
2p_{3}^{z}(p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z}) - 2p_{3}^{z}p_{5}^{z} - 2(p_{tot}^{z} -p_{3}^{z} -p_{5}^{z} -p_{6}^{z})p_{5}^{z}
\end{eqnarray}

After combining like terms, we can evaluate the partial derivative of $s_{345}$ with respect to $p_{3}$ as

\begin{equation}
\label{ds345dp3}
\pderiv{s_{345}}{p_{3}} = 2(p_{3} + p_{4} + p_{5})(1 - \hat{p_{3}} \cdot \hat{p_{4}})
\end{equation}

Finally, we can rewrite the Jacobian defined in equation~\ref{jacobian_1} as

\begin{equation}
\label{jacobian_2}
|J(p_{3}, u)| = \left | \frac{\pderiv{s_{345}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{345}}{p_{3}}} \right | = \frac{\Delta R \times \left[
(m_{t}\Gamma_{t})^{2} + (s_{345} - m_{t}^{2})^{2} \right]}{2(p_{3} + p_{4} + p_{5})(1 - \hat{p_{3}} \cdot \hat{p_{4}})}
\end{equation}

Instead of replacing the lepton momentum integration variable, it is also common to replace the first quark momentum integration variable, $p_{5}$. In that case, we need to evaluate the following Jacobian.

\begin{equation}
\label{jacobian_3}
|J(p_{5}, u)| = \left| \pderiv{p_{5}}{r} \times \pderiv{r}{u} \right| = \left|
\pderiv{p_{5}}{s_{345}} \times \pderiv{s_{345}}{r} \times \pderiv{r}{u} \right| =
\left | \frac{\pderiv{s_{345}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{345}}{p_{5}}} \right |
\end{equation}

The only difference is that the partial derivative of $s_{345}$ is now taken with respect to $p_{5}$ instead of $p_{3}$. Since $s_{345}$ is symmetric under an interchange of particles 3 and 5, the partial derivative with respect to $p_{5}$ takes the same form as equation~\ref{ds345dp3} with the labels 3 and 5 swapped. Thus, 

\begin{equation}
\label{jacobian_4}
|J(p_{5}, u)| = \left | \frac{\pderiv{s_{345}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{345}}{p_{5}}} \right | = \frac{\Delta R \times \left[
(m_{t}\Gamma_{t})^{2} + (s_{345} - m_{t}^{2})^{2} \right]}{2(p_{3} + p_{4} + p_{5})(1 - \hat{p_{4}} \cdot \hat{p_{5}})}
\end{equation}
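Before moving on, a small sketch (added here; the mass, width, and bounds are illustrative values) shows the bounded $s$-space sampling machinery used above, for the W case:

\begin{verbatim}
% Bounded Breit-Wigner sampling in s (illustrative values)
m0 = 80.4; G = 2.1;                 % assumed mass and width in GeV
smin = 10^2; smax = 200^2;          % allowed range of s
rmin = atan((smin - m0^2)/(m0*G)) / (m0*G);
rmax = atan((smax - m0^2)/(m0*G)) / (m0*G);
u = rand;                           % uniform on (0,1)
r = (rmax - rmin)*u + rmin;
s = m0^2 + m0*G*tan(m0*G*r);        % lies in [smin, smax] by construction
\end{verbatim}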
%
%-----------------------------------------------------------------------
%

\subsection{Jacobian for random sampling of two Breit-Wigner distributions around
the top mass squared, $s_{345}$, and W mass squared, $s_{34}$}

The next situation is to sample from a Breit-Wigner around the top mass squared and the W mass squared, or $s_{345}$ and $s_{34}$. It is common to replace the lepton momentum and first quark momentum integration variables with the two new variables. Since we are replacing two variables, we need to evaluate the following Jacobian

\begin{equation}
|J(p_{3}, p_{5} ; u_{1}, u_{2})| = \left| \begin{array}{cc}
\pderiv{p_{3}}{u_{1}}	& \pderiv{p_{3}}{u_{2}} \\
\pderiv{p_{5}}{u_{1}}	& \pderiv{p_{5}}{u_{2}} \\
\end{array} \right|
\end{equation}

where $u_{1}$ and $u_{2}$ are the sampling variables around the top mass squared and W mass squared, respectively.

We have already computed the partial derivatives for each of these cases in the previous two sections, so the result is

\begin{eqnarray}
\nonumber
|J(p_{3}, p_{5} ; u_{1}, u_{2})| = \left| \begin{array}{cc}
\pderiv{p_{3}}{u_{1}}	& \pderiv{p_{3}}{u_{2}} \\
\pderiv{p_{5}}{u_{1}}	& \pderiv{p_{5}}{u_{2}} \\
\end{array} \right| = 
\nonumber
\left| \begin{array}{cc}
\frac{\pderiv{s_{345}}{r} \times \pderiv{r}{u}}{\pderiv{s_{345}}{p_{3}}}	& \frac{\pderiv{s_{345}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{345}}{p_{5}}} \\
\frac{\pderiv{s_{34}}{r} \times \pderiv{r}{u}}{\pderiv{s_{34}}{p_{3}}}	& \frac{\pderiv{s_{34}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{34}}{p_{5}}}
\end{array} \right| = \\
\left| \begin{array}{cc}
\frac{\Delta R_{345} \times \left[
(m_{t}\Gamma_{t})^{2} + (s_{345} - m_{t}^{2})^{2} \right]}{2(p_{3} + p_{4} + p_{5})(1 - \hat{p_{3}} \cdot \hat{p_{4}})}	& \frac{\Delta R_{34} \times \left[
(m_{W}\Gamma_{W})^{2} + (s_{34} - m_{W}^{2})^{2} \right]}{2(p_{3} + p_{4})(1 -
\hat{p_{3}} \cdot \hat{p_{4}})} \\
\frac{\Delta R_{345} \times \left[
(m_{t}\Gamma_{t})^{2} + (s_{345} - m_{t}^{2})^{2} \right]}{2(p_{3} + p_{4} + p_{5})(1 - \hat{p_{4}} \cdot \hat{p_{5}})}	& \frac{\Delta R_{34} \times \left[
(m_{W}\Gamma_{W})^{2} + (s_{34} - m_{W}^{2})^{2} \right]}{2p_{3}(\hat{p_{3}} \cdot \hat{p_{5}} - \hat{p_{4}} \cdot \hat{p_{5}})}
\end{array}
\right|
\end{eqnarray}

\subsection{Sampling from a Polynomial $S_{cm}$ distribution}

*** This is where I am taking a function from Aurelio and I can't seem to derive
it on my own ***

The distribution of the variable $S_{cm}$ according to a polynomial power distribution is

\begin{equation}
\label{defines_power}
s = m_{0}^{2} + \left[ (1-\alpha)r \right]^{\frac{-1}{\alpha - 1}}
\end{equation}

where $r$ is defined in terms of the random variable, $u$, that is uniformly distributed between 0 and 1.

\begin{equation}
\label{rtou_power}
r = (r_{max} - r_{min}) \times u + r_{min}
\end{equation}

where $r_{max}$ and $r_{min}$ are defined in terms of the variable $s_{cm}$.

\begin{eqnarray}
\label{definer_power}
r = \frac{1}{1-\alpha} \times \left[ s - m_{0}^{2} \right]^{1-\alpha} \\
r_{min} = \frac{1}{1-\alpha} \times \left[ s_{min} - m_{0}^{2}
\right]^{1-\alpha} \\
r_{max} = \frac{1}{1-\alpha} \times \left[ s_{max} - m_{0}^{2}
\right]^{1-\alpha}
\end{eqnarray}

where $\alpha$ cannot equal 1.

\subsection{Jacobian for random sampling of a polynomial distribution
starting at $m_{pole}$}

The first case to consider is sampling around a falling polynomial distribution for the mass squared of the two quarks in the event, $s_{56}$. We need to define the Jacobian with respect to $p_{5}$ or $p_{6}$. Since $s_{56}$ is invariant under an interchange of particles 5 and 6, the Jacobian will have the same form for each momentum integration.
The following assumes $p_{5}$ will be replaced with the variable $u$, which is sampled from a polynomial distribution.

\begin{equation}
\label{jacobian_7}
|J(p_{5}, u)| = \left| \pderiv{p_{5}}{u} \right| 
\end{equation}

Because $r$ is defined in terms of $u$, we can rewrite equation~\ref{jacobian_7} in terms of $r$ instead of $u$.

\begin{equation}
\label{jacobian_8}
|J(p_{5}, u)| = \left| \pderiv{p_{5}}{u} \right| = \left| \pderiv{p_{5}}{r}
\times \pderiv{r}{u} \right|
\end{equation}

And since the variable $r$ is sampling the $s_{56}$ distribution, it makes sense to define the Jacobian in terms of this variable instead of $p_{5}$.

\begin{equation}
\label{jacobian_9}
|J(p_{5}, u)| = \left| \pderiv{p_{5}}{r} \times \pderiv{r}{u} \right| = \left|
\pderiv{p_{5}}{s_{56}} \times \pderiv{s_{56}}{r} \times \pderiv{r}{u} \right| =
\left | \frac{\pderiv{s_{56}}{r} \times
\pderiv{r}{u}}{\pderiv{s_{56}}{p_{5}}} \right |
\end{equation}

Equation~\ref{jacobian_9} has three components: $\pderiv{s_{56}}{r}$, $\pderiv{r}{u}$, and $\pderiv{s_{56}}{p_{5}}$. From equation~\ref{rtou_power}, the partial derivative of $r$ with respect to $u$ is 

\begin{equation}
\label{drdu_power}
\pderiv{r}{u} = r_{max} - r_{min} = \Delta R_{56}
\end{equation}

Next, the partial derivative of $s_{56}$ with respect to $r$ can be determined from equation~\ref{defines_power}.

\begin{equation}
\label{ds56dr_power}
\pderiv{s_{56}}{r} = \left[ r(1-\alpha) \right]^{\frac{\alpha}{1-\alpha}}
\end{equation}

Inserting the value of $r(s)$ as defined in equation~\ref{definer_power}, equation~\ref{ds56dr_power} can be rewritten as

\begin{equation}
\label{ds56dr_power2}
\pderiv{s_{56}}{r} = \left[ s_{56} - m_{0}^{2} \right]^{\alpha}
\end{equation}

Finally, we need the partial derivative of $s_{56}$ with respect to $p_{5}$.
First, we define $s_{56}$:\n\n\begin{equation}\n\label{defines56_power}\ns_{56} = m_{5}^{2} + m_{6}^{2} + 2E_{5}E_{6} - 2p_{5}^{x}p_{6}^{x} - 2p_{5}^{y}p_{6}^{y} - 2p_{5}^{z}p_{6}^{z}\n\end{equation}\n\nFrom this definition, treating the quarks as massless so that $E_{i} = p_{i}$, we can evaluate the partial derivative of $s_{56}$ with respect to $p_{5}$.\n\n\begin{equation}\n\label{ds56dp5_power}\n\pderiv{s_{56}}{p_{5}} = 2p_{6}(1 - \hat{p_{5}} \cdot \hat{p_{6}})\n\end{equation}\n\nFinally, we can rewrite the Jacobian defined in equation~\ref{jacobian_9} as\n\n\begin{equation}\n\label{jacobian_10}\n|J(p_{5}, u)| = \left | \frac{\pderiv{s_{56}}{r} \times\n\pderiv{r}{u}}{\pderiv{s_{56}}{p_{5}}} \right | = \frac{\Delta R_{56} \times \left[ s_{56}-m_{0}^{2} \right]^{\alpha}}{2p_{6}(1 - \hat{p_{5}} \cdot \hat{p_{6}})}\n\end{equation}\n\nand \n\n\begin{equation}\n\label{jacobian_11}\n|J(p_{6}, u)| = \left | \frac{\pderiv{s_{56}}{r} \times\n\pderiv{r}{u}}{\pderiv{s_{56}}{p_{6}}} \right | = \frac{\Delta R_{56} \times \left[ s_{56}-m_{0}^{2} \right]^{\alpha}}{2p_{5}(1 - \hat{p_{5}} \cdot \hat{p_{6}})}\n\end{equation}\n", "meta": {"hexsha": "2cc212d408528e9f9dc67bc7feaa65a3831d5681", "size": 19638, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MEnote/Integration.tex", "max_stars_repo_name": "tgadf/thesis", "max_stars_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MEnote/Integration.tex", "max_issues_repo_name": "tgadf/thesis", "max_issues_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MEnote/Integration.tex", "max_forks_repo_name": "tgadf/thesis", "max_forks_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6380597015, "max_line_length": 161, "alphanum_fraction": 0.6392708015, "num_tokens": 7699, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7341195152660688, "lm_q1q2_score": 0.5859644903659961}}
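As a quick sanity check on equation~\ref{ds56dp5_power} above, the partial derivative can be compared against a finite difference in a few lines of Python; the momentum magnitudes and the opening angle below are made-up numbers, and the massless approximation $E_{i} = p_{i}$ is assumed.

\begin{verbatim}
import numpy as np

def s56(p5, p6, cos56):
    # s56 = 2 E5 E6 - 2 p5.p6 for massless particles 5 and 6
    return 2.0 * p5 * p6 * (1.0 - cos56)

p5, p6, cos56 = 40.0, 25.0, 0.3   # illustrative values only
eps = 1e-6
numeric = (s56(p5 + eps, p6, cos56) - s56(p5 - eps, p6, cos56)) / (2 * eps)
analytic = 2.0 * p6 * (1.0 - cos56)
print(numeric, analytic)          # the two should agree to ~1e-9
\end{verbatim}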
{"text": "The previous chapter introduced the concepts of \\textit{open-loop} and \\textit{closed-loop} control laws, and then dove into techniques for designing open-loop control laws for robots based on optimal control and differential flatness. These techniques are useful for determining control inputs that accomplish different objectives, such as ``move from point A to point B in a minimal amount of time while satisfying some constraints''. Additionally, computing open-loop control laws is often computationally less challenging that computing closed-loop control laws.\nHowever in practice open-loop control is not very robust since observations are not leveraged to update the control input. One solution to this robustness problem is to convert the open-loop control law into a closed-loop control law, typically referred to as \\textit{trajectory tracking controllers}. Another solution is to not use any open-loop techniques but rather to directly synthesize a closed-loop control law, for example by performing a \\textit{Lyapunov stability analysis}.\nThis chapter will introduce techniques for synthesizing closed-loop controllers in both of these ways.\n\n\\notessection{Closed-loop Motion Planning \\& Control}\nRecall from the previous chapter that open-loop control laws are defined as a function of time for a given initial condition. In contrast, closed-loop control laws are a function of the \\textit{current} state, and therefore are reactive.\n\n\\begin{definition}[Closed-loop Control]\nIf the control law is a function of the state and time, i.e., \n\\begin{equation}\n\\bm{u}(t) = \\pi(\\x(t), t )    \n\\end{equation}\nthen the control is said to be in closed-loop form.\n\\end{definition}\n\nClosed-loop controllers (also sometimes referred to as \\textit{feedback controllers} or \\textit{policies}), are much more robust than open-loop controllers. For example, suppose a controller needs to be designed to make a wheeled robot move from point to point. If the model used for open-loop controller design wasn't perfect, if the initial state was not perfectly known, or if external disturbances affected the system (e.g. wheel slipping), then the robot would not exactly reach its desired destination. Alternatively, a closed-loop control law can continuously correct for these errors since it is always taking in new information.\n\n\\subsection{Trajectory Tracking}\nOne common approach to closed-loop control is to simply extend the open-loop control techniques from the previous chapter to include a feedback component. Such an approach consists of two steps:\n\\begin{enumerate}\n    \\item Use open-loop control techniques to design a desired trajectory $\\x_d(t)$ and corresponding control $\\bm{u}_d(t)$.\n    \\item Design a closed-loop control law that is designed to make sure the system stays close to the desired trajectory.\n\\end{enumerate}\nThese controllers are referred to as \\textit{trajectory tracking} controllers, and their control law is defined as\n\\begin{equation}\n\\bm{u}(t) =  \\bm{u}_d(t)+\\pi(\\x(t)-\\x_d(t),t).\n\\end{equation}\nThis type of control law is also said to be a ``feedforward plus feedback'' controller. This is because the term $\\bm{u}_d(t)$ is an open-loop ``feedforward'' term that attempts to generally make the system follow the desired trajectory, while the term $\\pi(\\x(t)-\\x_d(t),t)$ is a ``feedback'' term that attempts to correct for any errors. 
\n\nThe previous chapter discussed techniques for solving open-loop control problems to define the desired trajectory, and additionally there are several approaches for designing the feedback component $\pi(\x(t)-\x_d(t),t)$:\n\begin{itemize}\n  \item \textit{Geometric} approaches generally leverage some sort of insight about the system and are therefore hard to discuss in general settings. They are also typically difficult to derive theoretical guarantees for. \n  \item \textit{Linearization}-based approaches typically linearize nonlinear dynamics models about points along the desired trajectory. These linearized models are then used to design linear controllers (e.g. linear quadratic regulators). For some nonlinear systems, instead of linearizing about specific points it is possible to \textit{feedback linearize} the system. This essentially means that the non-linearities can be exactly ``canceled'' out such that the system can be considered linear. Linear control theory can then be applied to design a feedback control scheme.\n  \n  \item \textit{Non-linear control} techniques also exist which do not rely on linearization. These approaches are also heavily system dependent, but one common tool for non-linear control is based on \textit{Lyapunov} theory.\n\n  \item \textit{Optimization-based} feedback control laws can also be designed. These approaches often leverage optimal control theory, some of which was presented in the previous chapter. One common optimization-based approach for closed-loop control is known as \textit{model predictive control} (MPC).\n\end{itemize}\n\n\subsubsection{Trajectory Tracking for Differentially Flat Systems}\nFor differentially flat systems, linearization-based approaches to designing trajectory tracking controllers are particularly useful~\cite{Levine2009}. In fact, every flat system can be linearized via dynamic feedback and a coordinate change to yield a dynamical system of the form\n\begin{equation} \label{eq:diff_flat_linearized}\n    \z^{(q+1)} = \bm{w}, \n\end{equation}\nwhere $\z^{(q+1)}$ is the $(q+1)$-th order derivative of the flat outputs $\z$ and $q$ is the degree of the flat output space (i.e. the highest order of derivatives of the flat output that are needed to describe system dynamics), and $\bm{w}$ is a modified ``virtual'' input term.\n\nThe set of ODEs \eqref{eq:diff_flat_linearized} is \textit{linear}, which means that techniques from linear control theory can be applied to design a control law for $\bm{w}$. In particular, for trajectory tracking problems suppose a reference flat output trajectory $\z_d(t)$ has been defined which corresponds to the virtual input $\bm{w}_d(t)$. Let the error between the actual flat output and desired flat output be defined as $\bm{e}(t) = \z(t) - \z_d(t)$ and consider a closed-loop control law of the form\n\begin{equation}\nw_i(t) = w_{i,d}(t) - \sum_{j=0}^q k_{i,j} e_i^{(j)}(t),\n\end{equation}\nwhere $(\cdot)_i$ denotes the $i$-th component of the vector, $\bm{e}^{(j)} = \z^{(j)} - \z_d^{(j)}$ is the $j$-th order derivative of the error, and $k_{i,j}$ are called controller \textit{gains}.\nThe application of this control law to the system \eqref{eq:diff_flat_linearized} will result in \textit{closed-loop dynamics} of the form\n\begin{equation*}\n\begin{split}\n\z^{(q+1)} &= \bm{w}_d - \sum_{j=0}^q K_j \bm{e}^{(j)}, \\\n\end{split}\n\end{equation*}\nwhere $K_j$ is a diagonal matrix with $i$-th diagonal element $k_{i,j}$. 
Since $\\z_d^{(q+1)} = \\bm{w}_d(t)$ this can be simplified to give the closed-loop error dynamics:\n\\begin{equation}\n\\begin{split}\n\\bm{e}^{(q+1)} + \\sum_{j=0}^q K_j \\bm{e}^{(j)} = 0. \\\\\n\\end{split}\n\\end{equation}\nThis set of linear ODEs describes the dynamics of the error, and many classical techniques from linear control theory can be used to choose the gains $k_{i,j}$ that will guarantee this system is \\textit{stable}. Having stable error dynamics means that the error will decay to zero, which in this case means the system will track the desired trajectory.\n\n\\begin{example}[Extended Unicycle Trajectory Tracking] \\label{ex:trajtrack}\n\\theoremstyle{definition}\nConsider the dynamically extended unicycle model\n\\begin{equation}\n\\begin{split}\n\\dot{x}(t) &= v \\cos(\\theta(t)), \\\\\n\\dot{y}(t) &= v \\sin(\\theta(t)), \\\\\n\\dot{v}(t) &= a(t), \\\\\n\\dot{\\theta}(t) &= \\omega(t),\n\\end{split}\n\\label{robot_eq_dyn}\n\\end{equation}\nwhere the two inputs are the acceleration $a(t)$ and the rotation rate $\\omega(t)$. This system is differentially flat with flat outputs $x$ and $y$ and order $q=1$. It can therefore be expressed as:\n\\begin{equation*}\n\\ddot{\\z} = \\begin{bmatrix} \\ddot{x} \\\\ \\ddot{y} \\end{bmatrix} = \\underbrace{\\begin{bmatrix} \\cos(\\theta) & -V\\sin(\\theta) \\\\ \\sin(\\theta) & V\\cos(\\theta) \\end{bmatrix}}_{:=J} \\begin{bmatrix} a \\\\ \\omega \\end{bmatrix} := \\begin{bmatrix} w_1 \\\\ w_2 \\end{bmatrix},\n\\end{equation*}\nand a trajectory tracking controller can be defined as\n\\begin{equation*}\n\\begin{split}\n    w_1 &= \\ddot{x}_d - k_{px}(x - x_d)- k_{dx}(\\dot{x} - \\dot{x}_d),\\\\\n    w_2 &= \\ddot{y}_d - k_{py}(y-y_d)- k_{dy}(\\dot{y}-\\dot{y}_d),\\\\\n\\end{split}\n\\end{equation*}\nwhere $(\\cdot)_d$ represents a term associated with the desired trajectory. The control inputs $a(t)$ and $\\omega(t)$ can then be computed by solving the linear system\n\\begin{equation*}\nJ\\begin{bmatrix} a \\\\ \\omega \\end{bmatrix} = \\begin{bmatrix} w_1 \\\\ w_2 \\end{bmatrix},\n\\end{equation*}\nassuming that $J$ is full rank.\n\\end{example}\n\n\n\\subsection{Closed-loop Control}\nTrajectory tracking is just one example of closed-loop control, which assumes the existence of a desired trajectory for which to track. As previously discussed, one way of computing the desired trajectory is by solving an open-loop optimal control problem. However, in the context of optimal control, modifying an open-loop optimal control with feedback is not always the most desirable option. Instead, it may be preferred to just directly solve a closed-loop optimal control problem to obtain an optimal policy $\\bm{u}^* = \\pi(\\x(t),t)$. Techniques for solving closed-loop optimal control problems typically are based on either the Hamilton-Jacobi-Bellman equation or dynamic programming.\n\nAnother common closed-loop control problem is to drive to or stabilize the system about a particular state (often called \\textit{regulation}). For systems with linear dynamics models the most controller for regulation problems is called the \\textit{linear quadratic regulator}. However, for nonlinear systems, stabilizing closed-loop controllers are commonly designed through \\textit{Lyapunov analysis}\\cite{SlotineLi1991}.\n\n\\subsubsection{Lyapunov-based Control}\nA Lyapunov stability analysis is a common tool for analyzing the stability of nonlinear systems. 
This analysis is based on the definition of a Lyapunov function, which can be thought of as a measure of the ``energy'' of the system. Similar to mechanical systems, if the energy does not increase in time then the system is considered stable\footnote{Note there are more technical definitions of stability, but for simplicity these will not be discussed here}.\n\nThe most challenging part of a Lyapunov stability analysis is finding a suitable Lyapunov function, and for many complex systems this may be extremely difficult. However, one of the advantages of the method is that it provides nice theoretical guarantees regarding the stability of the system, and is applicable to any system of interest.\n\n\begin{example}[Pose Stabilization] \label{ex:pose}\n\theoremstyle{definition}\n\cite{AicardiCasalinoEtAl1995} Consider a robot that is modeled by the unicycle robot model (differential drive robot model) represented graphically in Figure \ref{fig:posecartesian}:\n\begin{equation} \label{eq:posecartesian}\n\begin{split}\n\dot{x}(t) &= v(t) \cos\theta(t), \\\n\dot{y}(t) &= v(t) \sin\theta(t), \\\n\dot{\theta}(t) &= \omega(t),\n\end{split}\n\end{equation}\nwhere the control inputs are the robot speed $v$ and the rotational rate $\omega$. The objective is to design a closed-loop controller that will drive the robot to the origin (i.e. $x=0$, $y=0$, $\theta = 0$).\n\n\begin{marginfigure}\n\centering\n\includegraphics[width=0.9\textwidth]{tex/figs/ch03_figs/unicycle_cartesian.png}\n\caption{Pose stabilization of a unicycle robot in Cartesian coordinates.}\n\label{fig:posecartesian}\n\end{marginfigure}\n\nTo make the controller design easier, the dynamics will be alternatively expressed in polar coordinates. This can be accomplished by defining\n\begin{equation} \label{eq:polarcoord}\n\begin{split}\n\rho &= \sqrt{x^2+y^2}, \\\n\alpha &= \mathrm{atan2}(y, x) - \theta + \pi, \\\n\delta &= \alpha + \theta,\n\end{split}\n\end{equation}\nwhere $\rho$ is the Euclidean distance to the origin, $\alpha$ is the heading angle with respect to the line from the robot to the origin, and $\delta$ is the angle between the $x$-axis and the line from the robot to the origin. These coordinates are graphically shown in Figure \ref{fig:polarcoord}.\n\begin{marginfigure}\n\centering\n\includegraphics[width=0.95\textwidth]{tex/figs/ch03_figs/unicycle_polar.png}\n\caption{Pose stabilization of a unicycle robot using polar coordinates.}\n\label{fig:polarcoord}\n\end{marginfigure}\nUsing the newly defined polar coordinates, the dynamics equations \eqref{eq:posecartesian} can be equivalently expressed as\n\begin{equation} \label{eq:posepolar}\n\begin{split}\n\dot{\rho}(t) &= -v(t) \cos\alpha(t), \\\n\dot{\alpha}(t) &= \frac{v(t) \sin\alpha(t)}{\rho(t)} - \omega(t), \\\n\dot{\delta}(t) &= \frac{v(t) \sin \alpha(t)}{\rho(t)}. \\\n\end{split}\n\end{equation}\n\nBy expressing the dynamics in polar form, a Lyapunov stability analysis can now be easily performed. 
Consider the following \textit{candidate} Lyapunov function:\n\begin{equation}\nV(\rho, \alpha, \delta) = \frac{1}{2} \rho^2 + \frac{1}{2} (\alpha^2 + k_3 \delta^2),\n\end{equation}\nand consider the following closed-loop control law:\n\begin{equation} \label{eq:poselaw}\n\begin{split}\nv &= k_1 \rho \cos \alpha,\n\\\n\omega &= k_2 \alpha + k_1 \frac{\sin \alpha \cos \alpha}{\alpha}(\alpha + k_3 \delta),\n\end{split}\n\end{equation}\nwhere $k_1, k_2, k_3>0$.\n\nThe candidate Lyapunov function is quadratic and therefore is positive everywhere, $V \geq 0$, and is equal to zero only at the origin with $\rho = 0$, $\alpha = 0$, $\delta=0$. Therefore, if it is possible to show that along \textit{all} closed-loop system trajectories the Lyapunov function is \textit{decreasing} ($\dot{V} < 0$), then it can be guaranteed that the system will converge to the origin! To show that the Lyapunov function decreases along trajectories of the system, begin by taking the derivative of $V$:\n\begin{equation*}\n\dot{V} = \rho \dot{\rho} + \alpha \dot{\alpha} + k_3 \delta \dot{\delta}.\n\end{equation*}\nThis quantity can now be shown to decrease along \textit{all} closed-loop trajectories by substituting in the dynamics equations \eqref{eq:posepolar} with the closed-loop control law as defined by \eqref{eq:poselaw}:\n\begin{equation*}\n\begin{split}\n\dot{V} &= \rho \dot{\rho} + \alpha \dot{\alpha} + k_3 \delta \dot{\delta}, \\\n&= - \rho v \cos\alpha + \alpha \big(\frac{v \sin\alpha}{\rho} - \omega \big) +  \frac{k_3 \delta v \sin \alpha}{\rho}, \\\n&= -k_1 \rho^2 \cos^2 \alpha  - k_2 \alpha^2, \\\n\end{split}\n\end{equation*}\nwhere in the last line the control laws were substituted in for $v$ and $\omega$ and algebraically simplified. Note that since $k_1$ and $k_2$ have been chosen to be strictly positive, this function is nonpositive, and strictly negative whenever $\rho$ or $\alpha$ is nonzero! 
Therefore this Lyapunov stability analysis has theoretically proven that the system under the closed-loop control law \\eqref{eq:poselaw} will converge to the origin.\n\\end{example}\n\n\\subsection{Exercises}\nBoth exercises for this chapter can be found in the online repository:\n\n\\vspace{\\baselineskip}\n\n\\url{https://github.com/PrinciplesofRobotAutonomy/AA274A_HW1}.\n\n\\subsubsection{Pose Stabilization}\nComplete \\textit{Problem 2: Pose Stabilization}, where you will implement the Lyapunov-based pose controller for the unicycle robot described in Example \\ref{ex:pose}.\n\n\\subsubsection{Trajectory Tracking}\nComplete \\textit{Problem 3: Trajectory Tracking}, where you will implement the differential flatness-based trajectory tracking controller for the extended unicycle robot described in Example \\ref{ex:trajtrack}.", "meta": {"hexsha": "1e826174635e82a8539c2680da3c98520eebf7d7", "size": 15704, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/source/ch03.tex", "max_stars_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_stars_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-03-23T16:03:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T14:15:38.000Z", "max_issues_repo_path": "tex/source/ch03.tex", "max_issues_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_issues_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/source/ch03.tex", "max_forks_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_forks_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.6526315789, "max_line_length": 690, "alphanum_fraction": 0.7585965359, "num_tokens": 4118, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5859493581950863}}
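As a complement to Example \ref{ex:pose}, the following rough Python sketch forward-integrates the polar-coordinate dynamics under the control law \eqref{eq:poselaw}; the gains, initial pose, step size, and simple Euler integration are all arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

k1, k2, k3 = 1.0, 1.5, 1.0          # made-up positive gains
rho, alpha, delta = 2.0, 0.5, -0.3  # made-up initial pose (polar)
dt = 1e-3

for _ in range(20000):
    # np.sinc(x) = sin(pi x)/(pi x), so sin(alpha)/alpha = np.sinc(alpha/pi)
    omega = k2 * alpha + k1 * np.sinc(alpha / np.pi) * np.cos(alpha) \
            * (alpha + k3 * delta)
    # v sin(alpha)/rho simplifies to k1 sin(alpha) cos(alpha): no 1/rho needed
    drho   = -k1 * rho * np.cos(alpha) ** 2
    dalpha = k1 * np.sin(alpha) * np.cos(alpha) - omega
    ddelta = k1 * np.sin(alpha) * np.cos(alpha)
    rho, alpha, delta = rho + dt * drho, alpha + dt * dalpha, delta + dt * ddelta

print(rho, alpha, delta)  # all three should approach zero
\end{verbatim}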
{"text": "\\subsection{Type Classes}\\label{sec:type-classes}\n\nSo far we have only discussed \\toolname in the context of \nconcrete Haskell functions, but many Haskell programs make\nuse of ad-hoc polymorphism via type-classes.\n%\nWhile the implementation of each type-class instance is \ndifferent, there is often a common interface that we \nexpect all instances to satisfy.\n\n\\mypara{Class Measures}\nFor example, consider the classes\n%\n\\begin{code}\n  class Indexable f where\n    size :: f a -> Int\n    at   :: f a -> Int -> a\n\\end{code}\n%\nFor safe access, we might require that @at@'s second \nparameter be bounded by the @size@ of the container.\nTo this end, we can define a \\emph{type-indexed} \nmeasure, using the @class measure@ keyword\n%\n\\begin{code}\n  class measure sz :: a -> Nat\n\\end{code}\n%\nNow, we can specify the safe-access preconditions  \nindependant of particular instances of @Indexable@ \nas:\n%\n\\begin{code}\n  class Indexable f where\n    size :: xs:f a -> {v:Nat | v = (sz xs)}\n    at   :: xs:f a -> {v:Nat | v < (sz xs)} -> a\n\\end{code}\n\n\\mypara{Instance Measures}\nFor each concrete type for which a class is instantiated, we require \na corresponding definition for the measure. For example, we may want\nto define lists to be an instance of @Indexable@. To do so, we would\ndefine the @sz@ instance for lists as:\n%\n\\begin{code}\n  instance measure sz :: [a] -> Nat\n  sz []     = 0\n  sz (x:xs) = 1 + (sz xs)\n\\end{code}\n%\n%% Similarly, for a binary tree, we might define:\n%% %\n%% \\begin{code}\n%%   instance measure sz :: Tree a -> Nat\n%%   size Leaf         = 0\n%%   size (Node x l r) = 1 + (sz l) + (sz r)\n%% \\end{code}\n%\nClass measures work just like regular measures. \nThe above definitions are used to refine the types \nof the data constructors, \\eg\n%\n\\begin{code}\n  []  :: {v:[a] | (sz v) = 0}\n  (:) :: a -> xs:[a] \n      -> {v:[a] | (sz v) = 1 + (sz xs)}\n\\end{code}\n\nOnce we have defined the measure, we can define the actual\ninstance as:\n%\n\\begin{code}\n  instance Indexable [] where\n    size []        = 0\n    size (x:xs)    = 1 + size xs\n\n    (x:xs) `at` 0  = x\n    (x:xs) `at` i  = index xs (i-1)\n\\end{code}\n%\nNow, \\toolname will use the definition of the @sz@ measure for lists\nto check that @size@ and @at@ satisfy the refined class specifications,\nand hence, that the above instance creates a valid instance dictionary.\n\n\\mypara{Client Verification}\nOn the client side of a type-class we can use the refined \ntypes of class methods just as we would any regular function. \nFor example, consider a @sum@ function that works for any \n@Indexable@.\n%\n\\begin{code}\n  sum :: (Indexable f) => f Int -> Int\n  sum xs = go 0\n  where\n    go i\n      | i < size xs = (xs `at` i) + go (i+1)\n      | otherwise   = 0\n\\end{code}\n%\n\\toolname proves that each call to @at@ is safe, by using the refined\nclass specifications of @Indexable@. 
\nSpecifically, each call to @at@ is guarded by a check @i < size xs@,\nwhich, combined with the fact that @i@ is monotonically increasing \nfrom 0, allows \\toolname to prove that @xs `at` i@ will always be safe.\n\n%%%%%%%%   %% \n%%%%%%%%   %% We now have the task of recognizing each instance declaration \n%%%%%%%%   %% and matching it up with the correct class specification, a \n%%%%%%%%   %% non-trivial task in GHC's Core representation.\n%%%%%%%%   %% \n%%%%%%%%   %% Luckily for us, Haskell type-classes are implemented by passing around\n%%%%%%%%   %% dictionaries of functions~\\ES{CITE}. A typical instance for @Sized@ \n%%%%%%%%   %\n%%%%%%%%   %% then \\toolname will check that the generated class dictionary's @size@ field\n%%%%%%%%   %% satisfies the above refinement type, \\ie returns an\n%%%%%%%%   %% will look something like the following in Core\n%%%%%%%%   %\n%%%%%%%%   \\begin{code}\n%%%%%%%%     $csize    = \\xs -> case xs of\n%%%%%%%%       []      -> 0\n%%%%%%%%       (x:xs') -> 1 + $csize xs'\n%%%%%%%%     $fSized[] = D:Sized $csize\n%%%%%%%%   \\end{code}\n%%%%%%%%   %\n%%%%%%%%   where @D:Sized@ is the dictionary data constructor. Instead of\n%%%%%%%%   assigning a type directly to @$csize@ %$\n%%%%%%%%   we assign a refined type to the data constructor @D:Sized@, specifically\n%%%%%%%%   %\n%%%%%%%%   % When \\toolname sees a refined type-class definition it does two\n%%%%%%%%   % things:\n%%%%%%%%   % \\begin{enumerate}\n%%%%%%%%   % \\item it \\emph{assumes} the types of the class methods\n%%%%%%%%   % \\item it \\emph{strengthens} the type of the dictionary data constructor to\n%%%%%%%%   %   enforce the method types\n%%%%%%%%   % \\end{enumerate}\n%%%%%%%%   % For our @Sized@ class this results in the type\n%%%%%%%%   %\n%%%%%%%%   \\begin{code}\n%%%%%%%%     D:Sized :: (xs:f a -> {v:Nat | v = size xs}) \n%%%%%%%%             -> Sized f\n%%%%%%%%   \\end{code}\n%%%%%%%%   %\n%%%%%%%%   from which \\toolname creates the subtyping query\n%%%%%%%%   %\n%%%%%%%%   \\begin{code}\n%%%%%%%%     $csize <: (xs:f a -> {v:Nat | v = size xs})\n%%%%%%%%   \\end{code}\n%%%%%%%%   %$\n%%%%%%%%   which is valid, thus the instance declaration is safe.\n%%%%%%%%   %\n%%%%%%%%   Similarly for @Indexable@ we might find the instance\n%%%%%%%%   %\n%%%%%%%%   \\begin{code}\n%%%%%%%%     instance Indexable [] where\n%%%%%%%%       index []     i = error \"impossible\"\n%%%%%%%%       index (x:xs) 0 = x\n%%%%%%%%       index (x:xs) i = index xs (i-1)\n%%%%%%%%   \\end{code}\n%%%%%%%%   %\n%%%%%%%%   translated into\n%%%%%%%%   %\n%%%%%%%%   \\ES{FIXME: why does this next code block break latex???}\n%%%%%%%%   % \\begin{code}\n%%%%%%%%   %   $cindex = \\xs i -> case xs of\n%%%%%%%%   %     []      -> error \"impossible\"\n%%%%%%%%   %     (x:xs') -> case i of\n%%%%%%%%   %       0 -> x\n%%%%%%%%   %       _ -> $cindex xs' (i-1)\n%%%%%%%%   %   $fIndexable[] = D:Indexable $fSized[] $cindex\n%%%%%%%%   % \\end{code}\n%%%%%%%%   %$\n%%%%%%%%   which will result in a similar subtyping query as above. 
The actual class\n%%%%%%%%   methods merely select the correct function from the dictionary, \\eg\n%%%%%%%%   %\n%%%%%%%%   \\begin{code}\n%%%%%%%%     index (D:Indexable sz idx) = idx\n%%%%%%%%   \\end{code}\n%%%%%%%%   %\n%%%%%%%%   so verification is trivial, as \\toolname gets to assume the type of\n%%%%%%%%   @D:Indexable@ when pattern-matching against it.\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End: \n", "meta": {"hexsha": "fbc7a4cef26f74ec78290e71e3e8b336a27d7dcc", "size": 6205, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/realworldhaskell/type-classes.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/realworldhaskell/type-classes.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/realworldhaskell/type-classes.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 33.1818181818, "max_line_length": 90, "alphanum_fraction": 0.5801772764, "num_tokens": 1733, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5859493581950862}}
{"text": "\\section{Test Problems}\n\\label{sec:53testProblems}\n\nIt is impossible to assess the capability of optimization methods\nfor every possible optimization problem.\nThe most widespread approach in literature\nis the selection of a subset of specific problems\nwith different characteristics \\term{(test problems)} and\nthe application of the methods to only these problems,\nin the hope that the methods perform similarly in\nactual application settings.\n\n\\paragraph{Trivial test functions}\n\nWhen testing methods that involve sparse grid interpolation,\none has to consider that the function to be interpolated\ndoes not satisfy a specific \\term{trivial property.}\nA test function $\\objfun\\colon \\clint{\\*0, \\*1} \\to \\real$ is trivial if\n$\\objfun$ is a sum of tensor products of which at\nleast one factor is a linear polynomial, i.e., if\n$\\objfun$ is of the form\n{%\n  \\setlength{\\abovedisplayskip}{9pt}%\n  \\setlength{\\belowdisplayskip}{9pt}%\n  \\begin{equation}\n    \\objfun(\\*x) \\equiv \\sum_{q=1}^m \\prod_{t=1}^d \\objfun_{q,t}(x_t),\\quad\n    m \\in \\natz,\\;\\;\n    \\objfun_{q,t}\\colon \\clint{0, 1} \\to \\real,\\;\\;\n    \\faex{q = 1, \\dotsc, m}{t \\in \\{1, \\dotsc, d\\}}{\n      \\objfun_{q,t} \\in \\polyspace{1},\n    }\n  \\end{equation}%\n}%\nwhere $\\polyspace{1}$ is the space of univariate polynomials\nup to linear degree.\nThis is already fulfilled if the summands of $\\objfun(\\*x)$\ndo not depend on all coordinates $x_t$ of $\\*x$.\nOne can show that for hat functions on sparse grids,\nthe hierarchical surpluses $\\surplus{\\*l,\\*i}$ for trivial functions\nvanish if $\\*l \\ge \\*1$.\nThis means that trivial functions can be well-approximated by hat functions\non sparse grids just with boundary points, without placing any points\nin the interior.\nAs this would distort our results,\nwe avoid trivial test functions in the following,\nwhich include popular functions such as the\nBranin01, Rosenbrock, and Schwefel26 functions.\n\n\\paragraph{Selection of test problems}\n\nIn the following, we select six unconstrained test problems\nand two constrained test problems, which are listed in\n\\cref{tbl:optimizationProblem} and plotted in\n\\cref{fig:unconstrainedOptimizationProblem,fig:constrainedOptimizationProblem}.\nThe definitions of the problems are given in \\cref{chap:a20testProblems}.\nFor the unconstrained case and the standard hierarchical\nB-spline basis, a more exhaustive list of test functions has been\nstudied previously \\cite{Valentin14Hierarchische}.\nGavana \\cite{Gavana13Global} and Runarsson/Yao \\cite{Runarsson00Stochastic}\nprovide a good overview of unconstrained and constrained test problems,\nrespectively.\n\n\\begin{table}\n  \\setnumberoftableheaderrows{1}%\n  \\begin{tabular}{%\n    >{\\kern\\tabcolsep}=l<{\\kern5mm}+l<{\\kern5mm}*{5}{+c}%\n    <{\\kern5mm}+l<{\\kern\\tabcolsep}%\n  }\n    \\toprulec\n    \\headerrow\n    Name&           Abbr.& $d$& $m_{\\ineqconfun}$& C&    CD&   MM&   Reference\\\\\n    \\midrulec\n    Branin02&       Bra02& 2&   0&                 \\yes& \\yes& \\yes& \\cite{Munteanu98Global}\\\\\n    GoldsteinPrice& GoP&   2&   0&                 \\yes& \\yes& \\yes& \\cite{Goldstein71Descent}\\\\\n    Schwefel06&     Sch06& 2&   0&                 \\yes& \\no&  \\no&  \\cite{Schwefel77Numerische}\\\\\n    Ackley&         Ack&   $d$& 0&                 \\yes& \\yes& \\yes& \\cite{Ackley87Connectionist}\\\\\n    Alpine02&       Alp02& $d$& 0&                 \\yes& \\yes& \\yes& 
\cite{Clerc99Swarm}\\\n    Schwefel22&     Sch22& $d$& 0&                 \yes& \no&  \no&  \cite{Schwefel77Numerische}\\\n    \midrulec\n    G08&            G08&   2&   2&                 \yes& \yes& \yes& \cite{Schoenauer93Constrained}\\\n    G04Squared&     G04Sq& 5&   6&                 \yes& \yes& \no&  \cite{Colville68Comparative}\\\n    \bottomrulec\n  \end{tabular}\n  \caption[Selection of test problems in optimization]{%\n    Unconstrained \emph{(top)} and constrained \emph{(bottom)} test problems.\n    The columns state the full and abbreviated names,\n    the dimensionality $d$ of the objective function $\objfun$,\n    the number $m_{\ineqconfun}$ of constraints,\n    whether $\objfun$ is continuous in the domain\n    $\clint{\*0, \*1}$ (C),\n    whether $\objfun$ is continuously differentiable in the domain\n    $\clint{\*0, \*1}$ (CD),\n    whether $\objfun$ is multi-modal (MM, i.e.,\n    whether there are multiple local minima), and\n    a reference to the original literature that defines the problem.%\n  }%\n  \label{tbl:optimizationProblem}%\n\end{table}\n\n\begin{figure}\n  \subcaptionbox{%\n    Bra02%\n  }[71mm]{%\n    \includegraphics{optimizationProblem_1}%\n  }%\n  \hfill%\n  \subcaptionbox{%\n    GoP%\n  }[76mm]{%\n    \includegraphics{optimizationProblem_2}%\n  }\\[2.5mm]%\n  \subcaptionbox{%\n    Sch06%\n  }[71mm]{%\n    \includegraphics{optimizationProblem_3}%\n  }%\n  \hfill%\n  \subcaptionbox{%\n    Ack for $d = 2$%\n  }[76mm]{%\n    \includegraphics{optimizationProblem_4}%\n  }\\[2.5mm]%\n  \subcaptionbox{%\n    Alp02 for $d = 2$%\n  }[71mm]{%\n    \includegraphics{optimizationProblem_5}%\n  }%\n  \hfill%\n  \subcaptionbox{%\n    Sch22 for $d = 2$%\n  }[76mm]{%\n    \includegraphics{optimizationProblem_6}%\n  }%\n  \caption[%\n    Unconstrained test problems%\n  ]{%\n    Bivariate test functions $\objfunscaled$ in unconstrained optimization.\n    The \textcolor{C1}{red dot} indicates the location of the\n    global minimum.%\n  }%\n  \label{fig:unconstrainedOptimizationProblem}%\n\end{figure}\n\n\begin{figure}\n  \subcaptionbox{%\n    G08%\n  }[72mm]{%\n    \includegraphics{optimizationProblem_7}%\n  }%\n  \hfill%\n  \subcaptionbox{%\n    G04Sq (bivariate projection over $\xscaledentry{3}$ and $\xscaledentry{5}$\n    onto $\xscaledentry{t} = \xoptscaledentry{t}$ for $t = 1, 2, 4$)%\n  }[72mm]{%\n    \includegraphics{optimizationProblem_8}%\n  }%\n  \caption[%\n    Constrained test problems%\n  ]{%\n    Test problems in constrained optimization.\n    The \textcolor{C0}{blue areas} denote the inequality constraints and\n    the \textcolor{C1}{red dot} indicates the location of the\n    global minimum.%\n  }%\n  \label{fig:constrainedOptimizationProblem}%\n\end{figure}\n\nFor each test problem, we state unscaled versions of objective functions\n$\objfunscaled\colon \clint{\*a, \*b} \to \real$,\n$\xscaled \mapsto \objfunscaled(\xscaled)$\n(and the unscaled constraint function $\ineqconfunscaled$, if present).\nThe actual objective function $\objfun\colon \clint{\*0, \*1} \to \real$\ncan be obtained by $\objfun(\*x) \ceq \objfunscaled(\xscaled)$\nwith the affine parameter transformation\n$x_t = \tfrac{\xscaledentry{t} - a_t}{b_t - a_t}$, $t = 1, \dotsc, d$\n(similarly for the constraint function).\n\n\vspace*{1em}\n\nThe parameter domains of some test problems have been slightly translated\ncompared to the literature\nto prevent the minima from being located 
exactly at or close to\nthe center of the domain.\nIn these cases, sparse grids would be at an advantage as\nthey tend to place more points near the center of the domain\n(especially for high dimensionalities).\n", "meta": {"hexsha": "a87edefa2f8c6f57b509707e34e95144fbe799ad", "size": 6877, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/document/53testProblems.tex", "max_stars_repo_name": "valentjn/thesis-arxiv", "max_stars_repo_head_hexsha": "ae30179e67cd6a7813385e140b609546fd65b897", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-10-12T09:28:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T21:07:17.000Z", "max_issues_repo_path": "tex/document/53testProblems.tex", "max_issues_repo_name": "valentjn/thesis", "max_issues_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/document/53testProblems.tex", "max_forks_repo_name": "valentjn/thesis", "max_forks_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.7754010695, "max_line_length": 101, "alphanum_fraction": 0.6888177985, "num_tokens": 2130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.5859493575110813}}
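To illustrate the affine parameter transformation described above, here is a small Python sketch; the quadratic toy objective and the domain $[-5, 5]^2$ are stand-ins, not one of the cited test problems.

\begin{verbatim}
import numpy as np

def scale_to_unit_cube(f_unscaled, a, b):
    """Wrap a function on [a, b]^d so it can be evaluated with
    coordinates x in [0, 1]^d via x_scaled = a + (b - a) * x."""
    return lambda x: f_unscaled(a + (b - a) * np.asarray(x))

f_unscaled = lambda xs: float(np.sum((xs - 1.0) ** 2))  # toy objective
a = np.array([-5.0, -5.0])
b = np.array([5.0, 5.0])
f = scale_to_unit_cube(f_unscaled, a, b)
print(f([0.6, 0.6]))  # evaluates the toy objective at (1, 1), giving 0.0
\end{verbatim}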
{"text": "\\documentclass[notitlepage]{problem-solving}\n\n\\author{Matt McCarthy}\n\\title{Trigonometric Sum and Difference Formulae in $\\CC$}\n\\date{June 2016}\n\n\\addbibresource{complex-trig-sum.bib}\n\\nocite{*}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{problem*}\n\tShow that for any $z,w\\in\\CC$,\n\t\\[\n\t\t\\sin(z+w)=\\cos z\\sin w+\\sin z\\cos w \\text{ and } \\cos(z+w)=\\cos z\\cos w - \\sin z\\sin w.\n\t\\]\n\\end{problem*}\n\n\\section{Background}\n\nIn the world of $\\CC$, differentiability is a stronger condition than in $\\RR$.\nTo be precise, once a function is differentiable in $\\CC$, it is infinitely differentiable.\nThis happens because differentiability has more stringent requirements in $\\CC$.\nIn order for a $\\CC$-valued function to be differentiable on a domain, it must be \\textit{analytic} on that domain.\n\\begin{definition}\n\tLet $f:\\CC\\rightarrow\\CC$, where $f(x+iy)=u(x,y)+iv(x,y)$, be a function such that $\\partial u/\\partial x$, $\\partial u/\\partial y$, $\\partial v/\\partial x$, and $\\partial v/\\partial y$ exist and are continuous on some disk $D\\subseteq\\CC$ with a nonzero radius.\n\tIf $f$ satisfies the \\textit{Cauchy-Riemann equations}\n\t\\[\n\t\t\\frac{\\partial u}{\\partial x} = \\frac{\\partial v}{\\partial y} \\text{ and } \\frac{\\partial u}{\\partial y} = -\\frac{\\partial v}{\\partial x}\n\t\\]\n\tthen $f$ is said to be \\textit{analytic on $D$}.\n\tFurthermore, the largest subset of $\\CC$ on which $f$ is analytic is called $f$'s \\textit{domain of analyticity}.\n\tIf the domain of analyticity is $\\CC$, then $f$ is said to be \\textit{entire}.\n\\end{definition}\nSince any analytic function is infinitely differentiable, its Taylor expansion exists as well.\nFurthermore, the Taylor series converges on any disk in the domain of analyticity.\n\\begin{thm}\n\tLet $f$ be analytic on a disk $D(z_0,r)$, then $f$'s Taylor series about $z_0$ converges for all $z\\in D$.\n\\end{thm}\n\n\\section{Solution}\n\n\\begin{problem*}\n\tShow that for any $z,w\\in\\CC$,\n\t\\[\n\t\t\\sin(z+w)=\\cos z\\sin w+\\sin z\\cos w \\text{ and } \\cos(z+w)=\\cos z\\cos w - \\sin z\\sin w.\n\t\\]\n\\end{problem*}\n\nTo prove these statements, we will find $\\sin z$ and $\\cos z$ in terms of $e^z$.\nFrom there, we will show the sum formulae.\nHowever, before we can get to either of those steps, we need to show that $e^z$ is entire.\n\n\\subsection{Analyticity of $e^{z}$}\n\n\\begin{proposition}\n\t$e^z$ is entire.\n\\end{proposition}\n\\begin{proof}\n\tLet $z=x+iy$.\n\tWe want to find $\\RR$-valued functions $u,v$ such that $e^{x+iy}=u(x,y)+iv(x,y)$.\n\t\\begin{align*}\n\t\te^z &= e^{x+iy}\\\\\n\t\t&= e^x e^{iy}\\\\\n\t\t&= e^x \\cos y + ie^x\\sin y\\\\\n\t\t&= u(x,y) +iv(x,y)\n\t\\end{align*}\n\tTaking partial derivatives yields the following.\n\t\\[\n\t\t\\begin{array}{rlrl}\n\t\t\t\\partial u/\\partial x =& e^x \\cos y  \\hspace{1cm}&  \\partial v/\\partial y =& e^x \\cos y\\\\\n\t\t\t\\partial u/\\partial y =& -e^x \\sin y \\hspace{1cm}& -\\partial v/\\partial x =& -e^x \\sin y\n\t\t\\end{array}\n\t\\]\n\tTherefore $e^z$ satisfies the Cauchy-Riemann equations.\n\tFurthermore, since the equations hold for all $z\\in\\CC$, $e^z$ is entire.\n\\end{proof}\n\n\\subsection{Complex Trigonometric Functions in Terms of the Complex Exponential}\n\nOur next step is to use the Taylor expansion of $e^z$ to get the Taylor expansion of $\\cos z$ and $\\sin z$.\n\n\\begin{proposition}\n\tLet $z\\in\\CC$.\n\tThen\n\t\\[\n\t\t\\cos z = \\frac{ e^{iz} + e^{-iz} }{2} 
\\text{ and } \\sin z = \\frac{ e^{iz} - e^{-iz} }{2i}.\n\t\\]\n\\end{proposition}\n\\begin{proof}\n\tConsider $\\paren{e^{iz}+e^{-iz}}/2$.\n\tSince $e^z$ is entire, its Taylor series converges everywhere in $\\CC$.\n\t\\begin{align*}\n\t\t\\frac{1}{2}\\paren{e^{iz}+e^{-iz}}\n\t\t&= \\frac{1}{2}\\paren{\\sum_{n=0}^\\infty \\frac{1}{n!} i^n z^n + \\frac{(-1)^n}{n!} i^n z^n}\\\\\n\t\t&= \\frac{1}{2}\\paren{\\sum_{n=0}^\\infty \\frac{1+(-1)^n}{n!} i^n z^n}\\\\\n\t\t&= \\frac{1}{2}\\paren{\\sum_{n=0}^\\infty \\frac{1+(-1)^{2n}}{(2n)!}i^{2n}z^{2n} + \\frac{1+(-1)^{2n+1}}{(2n+1)!}i^{2n+1}z^{2n+1}}\\\\\n\t\t&= \\frac{1}{2}\\paren{\\sum_{n=0}^\\infty \\frac{2}{(2n)!}(i^2)^nz^{2n}}\\\\\n\t\t&= \\sum_{n=0}^\\infty \\frac{(-1)^n}{(2n!)} z^{2n}\\\\\n\t\t&= \\cos z\n\t\\end{align*}\n\tConsider $\\paren{e^{iz}-e^{-iz}}/(2i)$.\n\t\\begin{align*}\n\t\t\\frac{1}{2i}\\paren{e^{iz}-e^{-iz}}\n\t\t&= \\frac{1}{2i}\\paren{\\sum_{n=0}^\\infty \\frac{1}{n!} i^n z^n - \\frac{(-1)^n}{n!} i^n z^n}\\\\\n\t\t&= \\frac{1}{2i}\\paren{\\sum_{n=0}^\\infty \\frac{1-(-1)^n}{n!} i^n z^n}\\\\\n\t\t&= \\frac{1}{2i}\\paren{\\sum_{n=0}^\\infty \\frac{1-(-1)^{2n}}{(2n)!}i^{2n}z^{2n} + \\frac{1-(-1)^{2n+1}}{(2n+1)!}i^{2n+1}z^{2n+1}}\\\\\n\t\t&= \\frac{1}{2i}\\paren{\\sum_{n=0}^\\infty \\frac{2i}{(2n+1)!}(i^2)^nz^{2n+1}}\\\\\n\t\t&= \\sum_{n=0}^\\infty \\frac{(-1)^n}{(2n+1)!} z^{2n+1}\\\\\n\t\t&= \\sin z\n\t\\end{align*}\n\\end{proof}\n\n\\subsection{Trigonometric Sum Identities}\n\nWe will now use our new identities to show the sum formulae.\nAfterwards, we will get the difference formulae as corollaries.\n\n\\begin{thm}\n\tLet $z,w\\in\\CC$.\n\tThen $\\sin(z+w)=\\cos z\\sin w+\\sin z\\cos w$.\n\\end{thm}\n\\begin{proof}\n\tConsider $\\cos z\\sin w+\\sin z\\cos w$.\n\t\\begin{align*}\n\t\t\\cos z\\sin w+\\sin z\\cos w &=\n\t\t\\paren{\\frac{e^{iz}+e^{-iz}}{2}}\\paren{\\frac{e^{iw}-e^{-iw}}{2i}}+\\paren{\\frac{e^{iz}-e^{-iz}}{2i}}\\paren{\\frac{e^{iw}+e^{-iw}}{2}}\\\\\n\t\t&= \\frac{e^{i(z+w)} +e^{i(w-z)} - e^{i(z-w)} - e^{-i(z+w)}}{4i} + \\frac{e^{i(z+w)} -e^{i(w-z)} + e^{i(z-w)} - e^{-i(z+w)}}{4i}\\\\\n\t\t&= \\frac{2e^{i(z+w)}-2e^{-i(z+w)}}{4i}\\\\\n\t\t&= \\frac{e^{i(z+w)}-e^{-i(z+w)}}{2i}\\\\\n\t\t&= \\sin(z+w)\n\t\\end{align*}\n\\end{proof}\n\n\\begin{thm}\n\tLet $z,w\\in\\CC$.\n\tThen $\\cos(z+w)=\\cos z\\cos w - \\sin z\\sin w$.\n\\end{thm}\n\\begin{proof}\n\tConsider $\\cos z\\cos w - \\sin z\\sin w$.\n\t\\begin{align*}\n\t\t\\cos z\\cos w - \\sin z\\sin w &=\n\t\t\\paren{\\frac{e^{iz}+e^{-iz}}{2}}\\paren{\\frac{e^{iw}+e^{-iw}}{2}} - \\paren{\\frac{e^{iz}-e^{-iz}}{2i}}\\paren{\\frac{e^{iw}-e^{-iw}}{2i}}\\\\\n\t\t&= \\frac{e^{i(z+w)} + e^{i(w-z)} + e^{i(z-w)} + e^{-i(z+w)}}{4} + \\frac{e^{i(z+w)} -e^{i(w-z)} - e^{i(z-w)} + e^{-i(z+w)}}{4}\\\\\n\t\t&= \\frac{e^{i(z+w)}+e^{-i(z+w)}}{2}\\\\\n\t\t&= \\cos(z+w)\n\t\\end{align*}\n\\end{proof}\n\n\\begin{corollary}\n\tLet $z,w\\in\\CC$.\n\tThen\n\t\\[\n\t\t\\sin(z-w)=\\sin z\\cos w - \\cos z\\sin w \\text{ and }\\cos(z-w)=\\cos z\\cos w + \\sin z\\sin w.\n\t\\]\n\\end{corollary}\n\\begin{proof}\n\tSince $\\sin w$ is an odd function, $\\sin(-w)=-\\sin w$.\n\tSince $\\cos w$ is an even function, $\\cos (-w)=\\cos w$.\n\tThese facts combined with the sum formulae for $\\cos(z+(-w))$ and $\\sin(z+(-w))$ yield the difference formulae.\n\\end{proof}\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "ffbefef5b14ad3910c9b747e68e47c4bc89cca8b", "size": 6262, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016-summer/complex-trig-sum/complex-trig-sum.tex", 
"max_stars_repo_name": "matt-mccarthy/problem-solving", "max_stars_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2016-summer/complex-trig-sum/complex-trig-sum.tex", "max_issues_repo_name": "matt-mccarthy/problem-solving", "max_issues_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016-summer/complex-trig-sum/complex-trig-sum.tex", "max_forks_repo_name": "matt-mccarthy/problem-solving", "max_forks_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2738095238, "max_line_length": 263, "alphanum_fraction": 0.6017246886, "num_tokens": 2617, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5859493493289136}}
{"text": "% Preamble.\n\\documentclass[12pt]{article}\n\\usepackage[margin=1.25in]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n%% Title macros.\n\\newcommand{\\HOMEWORKNUM}{14}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-06-15}\n\n\\title{\\vspace{-2\\baselineskip}MATH 225 - Homework \\#\\HOMEWORKNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%% Formatting options.\n%\\pagenumbering{gobble}  % Include for single-page document.\n\n% Macros.\n%% Norm.\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\n\n% Document.\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{For a generic orthographic projection $A$ in $\\mathbb{R}^3$, show that\n$\\text{det}(A) = 0$.} \\\\[\\baselineskip]\nLet $\\vec{n}$ be the normal vector so that the orthographic projection $A$ of\na vector $\\vec{v}$ is equivalent to the orthographic projection onto the plane\n\\begin{equation*}\n\t\\vec{n} \\cdot \\vec{v}\n\t=\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t\\cdot\n\t\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix}\n\t=\n\t0\n\t.\n\\end{equation*}\nSuch a projection is also equivalent to the vector rejection of $\\vec{v}$ from\n$\\vec{n}$. Thus, \n\\begin{equation*}\n\tA\\vec{v}\n\t=\n\t\\vec{v} - \\text{proj}_{\\vec{n}}\\vec{v}\n\t=\n\t\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix}\n\t-\n\t\\frac{\n\t\t\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix}\n\t\t\\cdot\n\t\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t}{\n\t\t\\norm{\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}}^2\n\t}\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t=\n\t\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix}\n\t-\n\t\\frac{ax + by + cz}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t.\n\\end{equation*}\nThe columns of $A$ can be found by computing the vector rejections of the\nstandard basis unit vectors from $\\vec{n}$:\n\\begin{gather*}\n\tA \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\t=\n\t\\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\t-\n\t\\frac{a}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t=\n\t\\frac{1}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix}\n\t\tb^2 + c^2 \\\\\n\t\t-ab \\\\\n\t\t-ac\n\t\\end{pmatrix}\n\t\\\\\n\tA \\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\end{pmatrix}\n\t=\n\t\\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\end{pmatrix}\n\t-\n\t\\frac{b}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t=\n\t\\frac{1}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix}\n\t\t-ab \\\\\n\t\ta^2 + c^2 \\\\\n\t\t-bc\n\t\\end{pmatrix}\n\t\\\\\n\tA \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\end{pmatrix}\n\t=\n\t\\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\end{pmatrix}\n\t-\n\t\\frac{c}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix} a \\\\ b \\\\ c \\end{pmatrix}\n\t=\n\t\\frac{1}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix}\n\t\t-ac \\\\\n\t\t-bc \\\\\n\t\ta^2 + b^2\n\t\\end{pmatrix}\n\t.\n\\end{gather*}\nThus,\n\\begin{equation*}\n\tA\n\t=\n\t\\frac{1}{a^2 + b^2 + c^2}\n\t\\begin{pmatrix}\n\t\tb^2 + c^2 & -ab & -ac \\\\\n\t\t-ab & a^2 + c^2 & -bc \\\\\n\t\t-ac & -bc & a^2 + b^2\n\t\\end{pmatrix}\n\t.\n\\end{equation*}\nFor the sake of convenience, let\n\\begin{equation*}\n\tB\n\t=\n\t\\begin{pmatrix}\n\t\tb^2 + c^2 & -ab & -ac \\\\\n\t\t-ab & a^2 + c^2 & -bc \\\\\n\t\t-ac & -bc & a^2 + b^2\n\t\\end{pmatrix}\n\\end{equation*}\nso that\n\\begin{equation*}\n\tA = \\frac{B}{a^2 + b^2 + 
c^2},\n\end{equation*}\nwhich allows computing the determinant of $A$ as\n\begin{equation*}\n\t\text{det}(A)\n\t=\n\t\left( \frac{1}{a^2 + b^2 + c^2} \right)^3\n\t\text{det}(B)\n\t=\n\ts \cdot \text{det}(B)\n\t,\n\end{equation*}\nwhere $s$ abbreviates the positive scalar $\left( a^2 + b^2 + c^2 \right)^{-3}$.\nThe row reduction of $B$ (assuming $c \neq 0$, which holds for a generic projection) results in\n\begin{equation*}\n\t\text{rref}(B)\n\t=\n\t\begin{pmatrix}\n\t\t1 & 0 & -\frac{a}{c} \\\n\t\t0 & 1 & -\frac{b}{c} \\\n\t\t0 & 0 & 0\n\t\end{pmatrix}\n\t.\n\end{equation*}\nBecause $\text{rref}(B)$ contains a row of zeros, the determinant of $B$ must\nbe $0$. \\\nThus,\n\begin{equation*}\n\t\text{det}(A)\n\t=\n\ts \cdot \text{det}(B)\n\t=\n\ts \cdot 0\n\t=\n\t\boxed{0}\n\t.\n\end{equation*}\n\t\n\end{document}", "meta": {"hexsha": "6c211b9fb34df6deb48db560ebbb37c0f8ae1ddb", "size": 3674, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/hw14/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.5251396648, "max_line_length": 78, "alphanum_fraction": 0.5955362003, "num_tokens": 1567, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.845942439250491, "lm_q1q2_score": 0.5859352487402961}}
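The result above is easy to confirm numerically; below is a small numpy sketch that builds the orthographic projection matrix for an arbitrary normal vector (the choice of n is illustrative only).

\begin{verbatim}
import numpy as np

n = np.array([1.0, 2.0, 3.0])              # arbitrary nonzero normal
A = np.eye(3) - np.outer(n, n) / n.dot(n)  # projection onto n . v = 0

print(np.linalg.det(A))  # ~0, up to floating-point error
print(A @ n)             # the normal itself is projected to ~0
\end{verbatim}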
{"text": "\\section{Jachimowsky Combustion Model}\n\nThe calculation of a chemical source term is explained in detail in Anderson. \\cite{gen:anderson2}.\nA brief summary of that exaplanation follows here.\nEach species conservation equation has a source term for each species $k$ given by:\n\n\\begin{displaymath}\n\\dot{W}_k = {\\cal M}_k \\sum_{l=1}^{N_e} \\left( \\nu''_{l,k} - \\nu'_{l,k} \\right)\n\\left[ k_{f,l} \\prod_{m=1}^{N_l} \\left[ \\chi_m \\right]^{\\nu'_{l,m}}\n- k_{b,l} \\prod_{m=1}^{N_l} \\left[ \\chi_m \\right]^{\\nu''_{l,m}} \\right]\n\\end{displaymath}\n\n$N_e$ is the number of elementary reactions involving species $k$.\n$N_l$ is the number of species involved in the elementary reaction.\n${\\cal M}_k$ is the molecular weight of species $k$.\n$\\chi_m$ is the mole fraction of species $m$.\n$\\nu'_{l,k}$ is the stoichiometric coefficient for the reactants of reaction $l$.\n$\\nu''_{l,k}$ is the stoichiometric coefficient for the products of reaction $l$.\n$k_b$ is the backward reaction rate constant and\n$k_f$ is the forward reaction rate constant, given by the\nmodified Arrhenius equation:\n\\begin{displaymath}\nk_f = AT^n \\exp(-E/{\\cal R}T)\n\\end{displaymath}\nwhere the coefficients $A$, $n$ and $E$ are germane to a particular combustion\nmodel. In this study, the Jachimowsky hydrogen-air combustion model is used \n\\cite{chem:jachimowsky} and the coefficients are given in table \\ref{model} \nbelow.\n\nThe backward and the forward reaction rate constants are related by the\nequilibrium constant, $K_c$\n\\begin{displaymath}\n\\frac{k_f}{k_b} = K_c\n\\end{displaymath}\nThe equilibrium constant, $K_c$ is given by:\n\\begin{displaymath}\nK_c = ({\\cal R} T)^{-\\Delta \\nu}\n       \\exp \\left( \\frac{-\\Delta G^0}{{\\cal R} T} \\right)\n\\end{displaymath}\n\nwhere\n\\begin{displaymath}\n\\Delta \\nu = \\sum_{i=1}^{N_l} (\\nu''_i - \\nu'_i)\n\\end{displaymath}\nThe Gibbs free energy for species $i$\nis defined as $G_i = H_i - T S_i$, where $H_i$ is the\nenthalpy, $T$ is the temperature and $S_i$ is the entropy.\nThe difference in Gibbs free energy $\\Delta G^0$ of products and reactants is\n\\begin{displaymath}\n\\Delta G^0 = \\sum_{i=1}^{N_l} (\\nu''_i - \\nu'_i) \\left( h^0_i - T s^0_i \\right)\n\\end{displaymath}\nwhere $h^0_i$ is the molar enthalpy (including the heat of formation at a\nreference temperature of 298 K)\nof the species $i$, and $s^0_i$ is its the molar entropy,\neach calculated at a reference pressure (one atmosphere).\\\\\n\\\\\n\n\\begin{table}\n\\begin{threeparttable}\n\\caption{Jachimowsky Hydrogen-Air Combustion Model with Nitrogen Inert}\n\\begin{tabular}{|cc|c|c|c|} \\hline\n\\multicolumn{2}{|c|}{Reaction} & A & n & E  \\\\ \\hline \\hline\n(1) & H$_{2}$ + O$_{2} \\rightarrow$ OH + OH & 1.70 $\\times$ 10$^{13}$ & 0 & 48 000 \\\\\n(2) & H + O$_{2} \\rightarrow$ OH + O & 2.60 $\\times$ 10$^{14}$ & 0 & 16 800 \\\\\n(3) & O + H$_{2} \\rightarrow$ OH + H & 1.80 $\\times$ 10$^{10}$ & 1.00 & 8 900 \\\\\n(4) & OH + H$_{2} \\rightarrow$ H$_{2}$O + H & 2.20 $\\times$ 10$^{13}$ & 0 & 5 150 \\\\\n(5) & OH + OH $\\rightarrow$ H$_{2}$O + O & 6.30 $\\times$ 10$^{12}$ & 0 & 1 090 \\\\\n(6) & H + OH + M $\\rightarrow$ H$_{2}$O + M & 2.20 $\\times$ 10$^{22}$ & -2.00 & 0 \\\\\n(7) & H + H + M $\\rightarrow$ H$_{2}$ + M & 6.40 $\\times$ 10$^{17}$ & -1.00 & 0 \\\\\n(8) & H + O + M $\\rightarrow$ OH + M & 6.00 $\\times$ 10$^{16}$ & -0.60 & 0 \\\\\n(9) & H + O$_{2}$ + M $\\rightarrow$ HO$_{2}$ + M & 2.10 $\\times$ 10$^{15}$ & 0 & -1 000 \\\\\n(10) & HO$_{2}$ + H 
$\rightarrow$ H$_{2}$ + O$_{2}$ & 1.30 $\times$ 10$^{13}$ & 0 & 0 \\\n(11) & HO$_{2}$ + H $\rightarrow$ OH + OH & 1.40 $\times$ 10$^{14}$ & 0 & 1 080 \\\n(12) & HO$_{2}$ + H $\rightarrow$ H$_{2}$O + O & 1.00 $\times$ 10$^{13}$ & 0 & 1 080 \\\n(13) & HO$_{2}$ + O $\rightarrow$ O$_{2}$ + OH & 1.50 $\times$ 10$^{13}$ & 0 &  950 \\\n(14) & HO$_{2}$ + OH $\rightarrow$ H$_{2}$O + O$_{2}$ & 8.00 $\times$ 10$^{12}$ & 0 & 0 \\\n(15) & HO$_{2}$ + HO$_{2} \rightarrow$ H$_{2}$O$_{2}$ + O$_{2}$ & 2.00 $\times$ 10$^{12}$ & 0 & 0 \\\n(16) & H + H$_{2}$O$_{2} \rightarrow$ H$_{2}$ + HO$_{2}$ & 1.40 $\times$ 10$^{12}$ & 0 & 3 600 \\\n(17) & O + H$_{2}$O$_{2} \rightarrow$ OH + HO$_{2}$ & 1.40 $\times$ 10$^{13}$ & 0 & 6 400 \\\n(18) & OH + H$_{2}$O$_{2} \rightarrow$ H$_{2}$O + HO$_{2}$ & 6.10 $\times$ 10$^{12}$ & 0 & 1 430 \\\n(19) & M + H$_{2}$O$_{2} \rightarrow$ OH + OH + M & 1.20 $\times$ 10$^{17}$ & 0 & 45 500 \\\n(20) & O + O + M $\rightarrow$ O$_{2}$ + M & 6.00 $\times$ 10$^{17}$ & 0 & -1 800 \\\n\hline\n\end{tabular}\n\label{model}\n\end{threeparttable}\n\end{table}\n\nThe units for $A$ are in $[\frac{cm^{3b}}{(mole-s)^b}]$ where $b=1$ for two-body reactions and \n$b=2$ for three-body reactions. $E$ is in $[\frac{cal}{mole}]$.\\\n\\\n\nThe symbol $M$ denotes a third-body collision partner, a species acting as a catalyst only.\nThe concentration of $M$ is simply determined from the equation:\\\n\n\begin{displaymath}\nX_M = \sum_{k=1}^{n_s} \eta_k X_k\n\end{displaymath}\n\clearpage\n\nwhere $\eta_k$ is the third-body efficiency. $\eta_k$ is unity\nfor most species and reactions except those listed in\ntable~\ref{thirds}.\n\n\vspace*{-3cm}\n\begin{table}\n\begin{threeparttable}\n\caption{Third Body Efficiencies for Jachimowsky Model}\n\begin{tabular}{|cc|cccc|} \hline\n\multicolumn{2}{|c|}{Reaction} & \multicolumn{4}{|c|}{third body efficiency} \\ \hline \hline\n(6) & H + OH + M $\rightarrow$ H$_{2}$O + M & H$_{2}$ & 1.0 & H$_{2}$O & 6.0 \\\n(7) & H + H + M $\rightarrow$ H$_{2}$ + M & H$_{2}$ & 2.0 & H$_{2}$O & 6.0 \\\n(8) & H + O + M $\rightarrow$ OH + M & H$_{2}$ & 1.0 & H$_{2}$O & 5.0 \\\n(9) & H + O$_{2}$ + M $\rightarrow$ HO$_{2}$ + M & H$_{2}$ & 2.0 & H$_{2}$O & 16.0 \\\n(19) & M + H$_{2}$O$_{2} \rightarrow$ OH + OH + M & H$_{2}$ & 1.0 & H$_{2}$O & 15.0 \\\n\hline\n\end{tabular}\n\label{thirds}\n\end{threeparttable}\n\end{table}\n\n\n\n", "meta": {"hexsha": "6ce1917eba9e18041034bae6c016a2954eb950c1", "size": 5684, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "model/chem/old/_air8s15r/doc/chem.tex", "max_stars_repo_name": "zhanghuanqian/CFDWARP", "max_stars_repo_head_hexsha": "9340a8526bb263d910f79d79e84dcac7aec211b6", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 29, "max_stars_repo_stars_event_min_datetime": "2018-09-13T13:58:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T21:44:13.000Z", "max_issues_repo_path": "model/chem/old/_air8s15r/doc/chem.tex", "max_issues_repo_name": "zhanghuanqian/CFDWARP", "max_issues_repo_head_hexsha": "9340a8526bb263d910f79d79e84dcac7aec211b6", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-11-10T11:28:30.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-23T09:21:28.000Z", "max_forks_repo_path": "model/chem/old/_air8s15r/doc/chem.tex", "max_forks_repo_name": "zhanghuanqian/CFDWARP", "max_forks_repo_head_hexsha": 
"9340a8526bb263d910f79d79e84dcac7aec211b6", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2018-07-26T08:17:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T08:41:55.000Z", "avg_line_length": 45.8387096774, "max_line_length": 100, "alphanum_fraction": 0.6055594652, "num_tokens": 2291, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424450764199, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5859352474084223}}
{"text": "\\section{Equivalence of DFAs and NFAs}\n\nNFAs are easier to build than DFAs because one does not have to worry,\nfor any state, of having out-going edges carrying a unique label. The\nsurprising fact is that NFAs and DFAs actually have the same\nexpressiveness, that is, all that can be defined by means of a NFA can\nalso be defined with a DFA (the converse is trivial since a DFA is\nalready a NFA). More precisely, there is a procedure, called \\emph{the\n  subset construction}, which converts any NFA to a DFA.\n\nConsider that, in a NFA, from a state~\\(q\\) with several out-going\nedges with the same label~\\(a\\), the transition function~\\(\\delta_N\\)\nleads, in general, to several states. The idea of the \\emph{subset\n  construction} is to create a new automaton where these edges are\nmerged. So we create a state~\\(p\\) which corresponds to the set of\nstates \\(\\delta_N (q,a)\\) in the NFA. Accordingly, we create a\nstate~\\(r\\) which corresponds to the set~\\(\\{q\\}\\) in the NFA. We\ncreate an edge labelled~\\(a\\) between~\\(r\\) and~\\(p\\). The important\npoint is that \\emph{this edge is unique}. This is the first step to\ncreate a DFA from a NFA.\n\nGraphically, instead of the non-determinism of\n\\fig~\\vref{fig:nfa_edges},\n\\begin{figure}\n\\centering\n\\subfloat[Non-determinism\\label{fig:nfa_edges}]{\n\\includegraphics[bb=62 660 222 730]{nfa_edges}\n}\n\\qquad\n\\subfloat[Determinism\\label{fig:nfa_subset}]{\n\\includegraphics[bb=60 712 144 730]{nfa_subset}\n}\n\\caption{From non-determinism to determinism}\n\\end{figure}\nwhere we have \\(\\delta_N (q, a) = \\{p_0, p_1, \\dots, p_n\\}\\), we get\nthe determinism of \\fig~\\vref{fig:nfa_subset}.\n\nLet us present the complete algorithm for the subset construction.\n\nLet us start from a NFA \\(\\mathcal{N} = (Q_N, \\Sigma, \\delta_N, q_0,\nF_N)\\). The goal is to construct a DFA \\(\\mathcal{D} = (Q_D, \\Sigma,\n\\delta_D, \\{q_0\\}, F_D)\\) such that \\(L(\\mathcal{D}) =\nL(\\mathcal{N})\\). Notice that the input alphabet of the two automata\nare the same and the initial state of~\\(\\mathcal{D}\\) if the set\ncontaining only the initial state of~\\(\\mathcal{N}\\).\n\nThe other components of~\\(\\mathcal{D}\\) are constructed as\nfollows. First, \\(Q_D\\)~is the set of subsets of~\\(Q_N\\); that is,\n\\(Q_D\\)~is the \\emph{power set} of~\\(Q_N\\). Thus, if~\\(Q_D\\) has~\\(n\\)\nstates, \\(Q_D\\)~has~\\(2^n\\) states. Fortunately, often not all these\nstates are accessible from the initial state of~\\(Q_D\\), so these\ninaccessible states can be discarded.\n\n\\label{state_explosion}\nWhy is~\\(2^n\\) the number of subsets of a finite set of\ncardinal~\\(n\\)?\n\nLet us order the~\\(n\\) elements and represent each subset by an\n\\(n\\)-bit string where bit \\(i\\) corresponds to the \\(i\\)th element:\nit is~\\(1\\) if the \\(i\\)th element is present in the subset and~\\(0\\)\nif not. This way, we counted all the subsets and not more (a bit\ncannot always be \\(0\\) since all elements are used to form subsets and\ncannot always be \\(1\\) if there is more than one element). There\nare~\\(2\\) possibilities, \\(0\\)~or~\\(1\\), for the first bit;\n\\(2\\)~possibilities for the second bit etc. 
Since the choices are\nindependent, we multiply all of them: \\(\\underbrace{2 \\times 2 \\times\n  \\dots \\times 2}_{n \\; \\text{times}} = 2^n\\), yielding the number of\nsubsets of an \\(n\\)-element set.\n\nResuming the definition of DFA~\\(\\mathcal{D}\\), the remaining\ncomponents are defined as follows.\n\\begin{itemize}\n\n  \\item \\(F_D\\)~is the set of subsets~\\(S\\) of~\\(Q_N\\) such that \\(S\n    \\cap F_N \\neq \\varnothing\\). That is, \\(F_D\\)~is all sets of\n    \\(\\mathcal{N}\\)'s states that include at least one final state\n    of~\\(\\mathcal{N}\\).\n\n  \\item for each set \\(S \\subseteq Q_N\\) and for each input \\(a \\in\n    \\Sigma\\),\n  \\begin{equation*}\n    \\delta_D(S, a) = \\bigcup_{q \\in S}{\\delta_N (q, a)}.\n  \\end{equation*}\n  In other words, to compute~\\(\\delta_D (S, a)\\), we look at all the\n  states~\\(q\\) in~\\(S\\), see what states of~\\(\\mathcal{N}\\) are\n  reached from~\\(q\\) on input~\\(a\\) and take the union of all those\n  states to make the next state of~\\(\\mathcal{D}\\).\n\n\\end{itemize}\nLet us reconsider the NFA given by its transition table in\n\\fig~\\vref{fig:initial_nfa_table}\n\\begin{figure}\n\\centering\n\\(\\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{NFA} \\; \\mathcal{N}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & q_0 & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n                & q_1 & \\varnothing  & \\{q_2\\}\\\\\n    \\#          & q_2 & \\varnothing  & \\varnothing\n\\end{array}\\)\n\\caption{Initial NFA table\\label{fig:initial_nfa_table}}\n\\end{figure}\nand let us create an equivalent DFA. Firstly, we form all the subsets\nof the set of states of the NFA and put them in the first column in\n\\fig~\\vref{fig:first_stage}.\n\\begin{figure}\n\\centering\n\\subfloat[First stage\\label{fig:first_stage}]{\n\\(\\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    & \\varnothing & &\\\\\n    & \\{q_0\\}           & &\\\\\n    & \\{q_1\\}           & &\\\\\n    & \\{q_2\\}           & &\\\\\n    & \\{q_0, q_1\\}      & &\\\\\n    & \\{q_0, q_2\\}      & &\\\\\n    & \\{q_1, q_2\\}      & &\\\\\n    & \\{q_0, q_1, q_2\\} & &\n\\end{array}\\)\n}\n\\qquad\n\\subfloat[Second stage\\label{fig:second_stage}]{\n\\(\\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    & \\varnothing & &\\\\\n    \\rightarrow & \\{q_0\\}  & &\\\\\n       & \\{q_1\\}           & &\\\\\n    \\# & \\{q_2\\}           & &\\\\\n       & \\{q_0, q_1\\}      & &\\\\\n    \\# & \\{q_0, q_2\\}      & &\\\\\n    \\# & \\{q_1, q_2\\}      & &\\\\\n    \\# & \\{q_0, q_1, q_2\\} & &\n\\end{array}\\)\n}\n\\caption{First two stages of the subset construction}\n\\end{figure}\n\nSecondly, we annotate in this first column the states\nwith~\\(\\rightarrow\\) if and only if they contain the initial state of\nthe NFA, here~\\(q_0\\), and we add a~\\(\\#\\) if and only if the states\ncontain at least one final state of the NFA, here~\\(q_2\\). 
See\n\\fig~\\vref{fig:second_stage}.\n\nThirdly, we form the subsets as follows:\n\\begin{equation*}\n\\small\n\\begin{array}{@{}r@{}l@{\\,}||@{\\,}c@{\\,}|@{\\,}c@{}}\n\\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n\\hhline{==::==}\n            & \\varnothing & \\varnothing & \\varnothing\\\\\n\\rightarrow & \\{q_0\\} & \\delta_N(q_0, 0) & \\delta_N(q_0, 1)\\\\\n            & \\{q_1\\} & \\delta_N(q_1, 0) & \\delta_N(q_1, 1)\\\\\n         \\# & \\{q_2\\} & \\delta_N(q_2, 0) & \\delta_N(q_2, 1)\\\\\n            & \\{q_0, q_1\\} & \\delta_N(q_0,0) \\cup \\delta_N(q_1,0) \n                           & \\delta_N(q_0,1) \\cup \\delta_N(q_1,1)\\\\\n         \\# & \\{q_0, q_2\\} & \\delta_N(q_0,0) \\cup \\delta_N(q_2,0) \n                           & \\delta_N(q_0,1) \\cup \\delta_N(q_2,1)\\\\\n         \\# & \\{q_1, q_2\\} & \\delta_N(q_1,0) \\cup \\delta_N(q_2,0) \n                           & \\delta_N(q_1,1) \\cup \\delta_N(q_2,1)\\\\\n         \\# & \\{q_0, q_1, q_2\\} \n            & \\delta_N(q_0,0) \\cup \\delta_N(q_1,0) \\cup \\delta_N(q_2,0)\n            & \\delta_N(q_0,1) \\cup \\delta_N(q_1,1) \\cup \\delta_N(q_2,1)\n\\end{array}\n\\end{equation*}\nFinally, we compute those subsets and obtain the table in\n\\fig~\\ref{fig:first_dfa_table}.\n\\begin{figure}[t]\n\\centering\n\\(\\begin{array}{r@{}l||c|c}\n\\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n\\hhline{==::==}\n            & \\varnothing & \\varnothing & \\varnothing\\\\\n\\rightarrow & \\{q_0\\} & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n            & \\{q_1\\} & \\varnothing & \\{q_2\\}\\\\\n         \\# & \\{q_2\\} & \\varnothing & \\varnothing\\\\\n            & \\{q_0, q_1\\} & \\{q_0,q_1\\} & \\{q_0,q_2\\}\\\\\n         \\# & \\{q_0, q_2\\} & \\{q_0,q_1\\} & \\{q_0\\}\\\\\n         \\# & \\{q_1, q_2\\} & \\varnothing & \\{q_2\\}\\\\\n         \\# & \\{q_0, q_1, q_2\\} & \\{q_0, q_1\\} & \\{q_0,q_2\\}\n\\end{array}\\)\n\\caption{First DFA obtained\\label{fig:first_dfa_table}}\n\\end{figure}\nThe transition diagram of the DFA~\\(\\mathcal{D}\\) is shown in\n\\fig~\\vref{fig:dfa_from_nfa}\n\\begin{figure}[b]\n\\centering\n\\includegraphics[bb=42 628 291 758]{dfa_from_nfa}\n\\caption{Transition diagram of \\fig~\\vref{fig:first_dfa_table}\n\\label{fig:dfa_from_nfa}}\n\\end{figure}\nwhere states with out-going edges which have no end are final\nstates. If we look carefully at the transition diagram, we see that\nthe DFA is actually made of two disconnected sub\\hyp{}automata. In\nparticular, since we have only one initial state, this means that one\npart is not accessible; therefore its states are never used to\nrecognise or reject an input word, and we can remove this part, as\nshown in \\fig~\\ref{fig:dfa_from_nfa_simple1}.\n\\begin{figure}\n\\centering\n\\subfloat[Simplification of \\fig~\\ref{fig:dfa_from_nfa}\n\\label{fig:dfa_from_nfa_simple1}]{\n\\includegraphics[bb=43 628 162 758]{dfa_from_nfa_simple1}\n}\n\\qquad\n\\subfloat[State renamings of \\fig~\\ref{fig:dfa_from_nfa_simple1}\n\\label{fig:dfa_from_nfa_simple}]{\n\\includegraphics[bb=48 625 150 755]{dfa_from_nfa_simple}\n}\n\\caption{Simplifications}\n\\end{figure}\nIt is important to understand that the states of the DFA are subsets\nof the NFA states. This is due to the construction and, when finished,\nit is possible to hide this by renaming the states. For example, we\ncan rename the states of the previous DFA in the following manner:\n\\(\\{q_0\\}\\) into~\\(A\\), \\(\\{q_0, q_1\\}\\) into~\\(B\\) and \\(\\{q_0, q_2\\}\\)\ninto~\\(C\\). 
So the transition table changes:\n\\begin{equation*}\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & \\{q_0\\}     & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n                & \\{q_0,q_1\\} & \\{q_0,q_1\\}  & \\{q_0,q_2\\}\\\\\n             \\# & \\{q_0,q_2\\} & \\{q_0,q_1\\}  & \\{q_0\\}\n  \\end{array}\n\\qquad\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & A & B & A\\\\\n                & B & B & C\\\\\n             \\# & C & B & A\n  \\end{array}\n\\end{equation*}\nSo, finally, the DFA is simply as in \\fig~\\vref{fig:dfa_from_nfa_simple}.\n\n\\subsection*{Optimisation}\n\nEven if, in the worst case, the resulting DFA has a number of states\nexponential in that of the corresponding NFA, it is in practice often possible\nto avoid the construction of inaccessible states.\n\\begin{itemize*}\n\n  \\item The singleton containing the initial state (in our example,\n  \\(\\{q_0\\}\\))~is accessible;\n\n  \\item let us assume we have a set~\\(S\\) of accessible states; then\n    for each input symbol~\\(a\\), we compute \\(\\delta_D(S,a)\\): this\n    new set is also accessible;\n\n  \\item let us repeat the last step, starting with~\\(\\{q_0\\}\\), until\n    no new (accessible) sets are found.\n\n\\end{itemize*}\nLet us reconsider the NFA given by the transition table\n\\begin{equation*}\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{NFA} \\; \\mathcal{N}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & q_0 & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n                & q_1 & \\varnothing  & \\{q_2\\}\\\\\n    \\#          & q_2 & \\varnothing  & \\varnothing\n  \\end{array}\n\\end{equation*}\nInitially, the sole subset of accessible states is \\(\\{q_0\\}\\):\n\\begin{equation*}\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & \\{q_0\\} & \\delta_N(q_0,0) & \\delta_N(q_0,1)\n  \\end{array}\n\\quad\\text{that is}\\quad\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & \\{q_0\\} & \\{q_0,q_1\\} & \\{q_0\\}\n  \\end{array}\n\\end{equation*}\nTherefore \\(\\{q_0,q_1\\}\\) and~\\(\\{q_0\\}\\) are accessible sets,\nbut~\\(\\{q_0\\}\\) is not a new set, so we only add the entry\n\\(\\{q_0, q_1\\}\\) to the table and compute the transitions from it:\n\\begin{equation*}\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & \\{q_0\\}      & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n                & \\{q_0, q_1\\} & \\{q_0, q_1\\} & \\{q_0, q_2\\}\n  \\end{array}\n\\end{equation*}\nThis step uncovered a new set of accessible states, \\(\\{q_0, q_2\\}\\),\nwhich we add to the table, marked as final since \\(q_2 \\in \\{q_0,\nq_2\\}\\), and we repeat the procedure:\n\\begin{equation*}\n  \\begin{array}{r@{}l||c|c}\n    \\multicolumn{2}{c||}{\\text{DFA} \\; \\mathcal{D}} & 0 & 1\\\\\n    \\hhline{==::==}\n    \\rightarrow & \\{q_0\\}      & \\{q_0, q_1\\} & \\{q_0\\}\\\\\n                & \\{q_0, q_1\\} & \\{q_0, q_1\\} & \\{q_0, q_2\\}\\\\\n             \\# & \\{q_0, q_2\\} & \\{q_0, q_1\\} & \\{q_0\\}\n  \\end{array}\n\\end{equation*}\nWe are done since there are no more new accessible sets.\n\n
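This optimisation is essentially a work-list algorithm. The following Python sketch (an illustration under simplified encodings, with the transition function as a dictionary and DFA states as frozen sets; it is not drawn from this book) computes exactly the accessible part of the DFA:\n\n\\begin{verbatim}\ndef subset_construction(delta_n, q0, finals, alphabet):\n    # delta_n maps (state, symbol) to a set of NFA states\n    start = frozenset([q0])\n    table, todo = {}, [start]\n    while todo:                 # repeat until no new accessible set\n        s = todo.pop()\n        if s in table:\n            continue\n        table[s] = {}\n        for a in alphabet:\n            # delta_D(S, a) is the union of delta_N(q, a) for q in S\n            t = frozenset(p for q in s\n                            for p in delta_n.get((q, a), set()))\n            table[s][a] = t\n            todo.append(t)\n    dfa_finals = {s for s in table if s & set(finals)}\n    return table, start, dfa_finals\n\n# the running example:\ndelta_n = {('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'},\n           ('q1', '1'): {'q2'}}\ntable, start, finals = subset_construction(delta_n, 'q0', ['q2'], '01')\n# table holds exactly the three accessible states of the final DFA\n\\end{verbatim}\n\n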
\\subsection*{Tries}\n\nLexical analysis tries to recognise a prefix of the input character\nstream (in other words, the first lexeme of the given\nprogram). Consider the C~keywords \\term{const} and \\term{continue}:\n\\begin{center}\n\\includegraphics[bb=48 675 353 730]{nfa_kwd}\n\\end{center}\nThis example shows that a NFA is much more convenient than a DFA for\nspecifying tokens for lexical analysis: we design separately the\nautomata for each token and then merge their initial states into one,\nleading to one, possibly large NFA. It is possible to apply the subset\nconstruction to this NFA.\n\nAfter forming the corresponding NFA as in the previous example, it is\nactually easy to construct an equivalent DFA by sharing their\nprefixes, hence obtaining a tree-like automaton called \\emph{trie}\n(pronounced as the word `try'):\n\\begin{center}\n\\includegraphics[bb=48 675 353 730]{trie_kwd}\n\\label{trie_kwd}\n\\end{center}\nNote that this construction only works for a list of constant words,\nlike keywords.\n\nThis technique can easily be generalised for searching constant\nstrings (like keywords) in a text, that is, not only as a prefix of a\ntext, but \\emph{at any position}. It suffices to add a loop on the\ninitial state for each possible input symbol. If we write~\\(\\Sigma\\)\nfor the language alphabet, we get\n\\begin{center}\n\\includegraphics[bb=48 675 355 738]{trie_kwd_search}\n\\end{center}\nIt is possible to apply the subset construction to this NFA or to use\nit directly for searching the two keywords at any position in a\ntext. In case of direct use, the difference between this NFA and the\ntrie on page~\\pageref{trie_kwd} is that there is no need here to\n`restart' by hand the recognition process once a keyword has been\nrecognised: we just continue. This works because of the loop on the\ninitial state, which always allows a new start. (The reader may try\nfor instance the input \\texttt{constantcontinue}.)\n\nThe subset construction can lead, in the worst case, to a number of\nstates which is the total number of state subsets of the NFA. In other\nwords, if the NFA has~\\(n\\) states, the equivalent DFA by subset\nconstruction can have~\\(2^n\\) states (see\npage~\\pageref{state_explosion} for the count of all the subsets of a\nfinite set). For instance, consider the following NFA, which\nrecognises all binary strings which have~\\(1\\) at the \\(n\\)th\nposition from the end:\n\\begin{center}\n\\includegraphics[bb=48 711 305 758]{subset_bad_case}\n\\end{center}\nThe language recognised by this NFA is \\(\\Sigma^{*} 1 \\Sigma^{n-1}\\),\nwhere \\(\\Sigma=\\{0,1\\}\\), that is: all words of length greater than or\nequal to~\\(n\\) are accepted as long as the \\(n\\)th bit from the\n\\emph{end} is~\\(1\\). 
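Before the formal argument, the blow-up can be observed experimentally by reusing the \\texttt{subset\\_construction} sketch above (again a hypothetical illustration): the determinised automaton has exactly \\(2^n\\) accessible states.\n\\begin{verbatim}\ndef nth_from_end_nfa(n):\n    # state 0 loops on both symbols and guesses the marked 1;\n    # states 1..n then read the n-1 remaining symbols\n    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}\n    for i in range(1, n):\n        delta[(i, '0')] = {i + 1}\n        delta[(i, '1')] = {i + 1}\n    return delta\n\nfor n in range(1, 9):\n    table, _, _ = subset_construction(nth_from_end_nfa(n), 0, [n], '01')\n    assert len(table) == 2 ** n   # state explosion\n\\end{verbatim}\n\n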
Therefore, in any equivalent DFA, all the\nprefixes of length~\\(n\\) must not lead to a stuck state, because the\nautomaton must wait until the \\emph{end} of the word to accept or\nreject it. If the states reached by these prefixes are all different,\nthen there are at least \\(2^n\\)~states in the DFA. Equivalently (by\ncontraposition), if there are less than \\(2^n\\)~states, then some\nstates can be reached by several strings of length~\\(n\\):\n\\begin{center}\n\\includegraphics[bb=48 643 235 730]{dfa_bad_case}\n\\end{center}\nwhere words \\(x1w\\) and \\(x'0w\\) have length~\\(n\\). Let us define the\nDFA as follows: \\(\\mathcal{D} = (Q_D, \\Sigma, \\delta_D, q_D, F_D)\\),\nwhere \\(q_D=\\{q_0\\}\\). The extended transition function is noted\n\\(\\hat{\\delta}_D\\) as usual. The situation of the previous picture can\nbe formally expressed as\n\\begin{gather}\n\\hat{\\delta}_D (q_D, x1) = \\hat{\\delta}_D (q_D, x'0) = q, \\label{ext:1}\\\\\n\\lvert{x1w}\\rvert = \\lvert{x'0w}\\rvert = n,\\notag\n\\end{gather}\nwhere \\(\\lvert{u}\\rvert\\) is the length of~\\(u\\). Let~\\(y\\) be any\nstring of~0 and~1 such that \\(\\lvert{wy}\\rvert = n - 1\\). Then\n\\(\\hat{\\delta}_D(q_D, x1wy) \\in F_D\\) since there is a~1 at the\n\\(n\\)th position from the end:\n\\begin{center}\n\\includegraphics[bb=48 643 287 730]{dfa_bad_case_complete}\n\\end{center}\nAlso, \\(\\hat{\\delta}_D(q_D,x'0wy) \\not\\in F_D\\) because there is a 0\nat the \\(n\\)th position from the end. On the other hand,\nequation~\\eqref{ext:1} implies\n\\begin{equation*}\n\\hat{\\delta}_D (q_D, x1wy) = \\hat{\\delta}_D (q_D, x'0wy) = p.\n\\end{equation*}\nSo we stumble upon a contradiction because a state (\\(p\\)) must be\neither final or not final; it cannot be both. As a consequence, we\nmust reject our initial assumption, so there are at least \\(2^n\\)~states\nin the equivalent DFA. This is a very bad case, even if it is not the\nworst case (\\(2^{n+1}\\) states).\n", "meta": {"hexsha": "b1be48981b9f5e6ca54032ada6b2dc5843fa5474", "size": 16759, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "subset.tex", "max_stars_repo_name": "rinderknecht/Book", "max_stars_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "subset.tex", "max_issues_repo_name": "rinderknecht/Book", "max_issues_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "subset.tex", "max_forks_repo_name": "rinderknecht/Book", "max_forks_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.108040201, "max_line_length": 73, "alphanum_fraction": 0.6463392804, "num_tokens": 6008, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8459424353665382, "lm_q1q2_score": 0.5859352460501074}}
{"text": "\\section{On-policy Control with Approximation}\nWe consider attempts to solve the control problem using parametrised function approximation to estimate action-values. We consider only the on-policy case for now.\n\n\\subsection{Episodic Semi-gradient Control}\nExtension of the semi-gradient update rules to action-values is straightforward\n\\begin{equation}\n    \\vec{w}_{t+1} = \\vec{w}_t + \\alpha \\left[ U_t - \\hat{q}(S_t, A_t, \\vec{w}_t) \\right] \\grad_{\\vec{w}_t} \\hat{q}(S_t, A_t, \\vec{w}_t)\n\\end{equation}\nwhere $U_t$ is the update target at time $t$. For example, one-step Sarsa has the update target is\n\\[\n    U_t = R_{t+1} + \\gamma \\hat{q}(S_{t+1}, A_{t+1}, \\vec{w}_t).\n\\]\nWe call this method \\emph{episodic semi-gradient one-step Sarsa}. For a constant policy, this method converges in the same with as TD(0), with a similar kind of error bound.\\\\\n\nIn order to form control methods, we must couple the prediction ideas developed in the previous chapter with methods for policy improvement. Policy improvement methods for continuous actions or actions from large discrete spaces are an active area of research, with no clear resolution. For actions drawn from smaller discrete sets, we can use the same idea as we have before, which is to compute action values and then take an $\\varepsilon$-greedy action selection. Episodic semi-gradient sarsa can be used to estimate the optimal action-values as in the box below.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/episodic_semi_gradient_sarsa.png}\\\\\n\n\\subsection{Semi-gradient $n$-step Sarsa}\nWe can use an $n$-step version of the episodic Sarsa that we defined above by incorporating the bootstrapped $n$-step return \n\\begin{equation}\n    G_{t:t+n} \\doteq \\sum_{i=1}^{n-1} \\gamma^i R_{i+1} + \\gamma^n \\hat{q}(S_{t+n}, A_{t+n}, W_{t+n-1})\n\\end{equation}\nwhere $G_{t:t+n} = G_{t}$ if $t + n \\geq T$, as usual. This update target is used in the pseudocode in the box below. As we have seen before, performance is generally best with amn intermediate value of $n$. \\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/n_step_episodic_semi_gradient_sarsa.png}\\\\\n\n\n\\subsection{Average Reward: A New Problem Setting for Continuing Tasks}\nWe introduce a third classical setting for formulating the goal in Markov decision problems (MDPs) (to go along with episodic and continuing). This new setting is called the \\emph{average reward setting}. This setting applies to continuing problems with no start or end state, but also no discounting. 
(Later we will see that the lack of a start state introduces a symmetry that makes discounting with function approximation pointless.)\\\\\n\nIn the average reward setting, the ordering of policies is (most often) defined with respect to the \\emph{average reward}  while following the policy\n\n\\begin{align}\n    r(\\pi) &\\doteq \\lim_{h\\to\\infty} \\frac1h \\sum_{t=1}^{h} \\E{}[R_{t} \\vert{} S_0, A_{0:t-1} \\sim \\pi]\\\\\n             &= \\lim_{t\\to\\infty} \\E{}[R_{t} \\vert{} S_0, A_{0:t-1} \\sim \\pi] \\\\\n             &= \\sum_s \\mu_\\pi(s) \\sum_a \\pi(a \\vert{} s) \\sum_{s', r} p(s', r \\vert{} s, a) r.\n\\end{align}\n\nWe will consider policies that attain the maximal value of $r(\\pi)$ to be optimal (though there are apparently some subtle distinctions here that are not gone into).\\\\\n\nThe distribution $\\mu_\\pi(s)$ is the steady-state distribution defined by\n\\begin{equation}\n    \\mu_\\pi(s) \\doteq \\lim_{t\\to \\infty} \\P{} (S_t = s \\vert{} A_{0:t-1} \\sim \\pi)\n\\end{equation}\nwhich we assume to exist for any $\\pi$ and to be independent of the starting state $S_0$. This assumption is known as \\emph{ergodicity}, and it means that the long run expectation of being in a state depends only on the policy and MDP transition probabilities -- not on the start state. The steady-state distribution has the property that it is invariant under actions taken by $\\pi$, in the sense that the following holds\n\\[\n    \\sum_s \\mu_\\pi(s) \\sum_a \\pi(a \\vert{} s) p(s' \\vert{} s, a) = \\mu_\\pi(s').\n\\]\\\\\n\n
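For a small MDP given explicitly, both $\\mu_\\pi$ and $r(\\pi)$ can be computed directly from these definitions. A hypothetical numpy sketch (the array shapes are assumptions: \\texttt{P[s,a,sp]} holds transition probabilities, \\texttt{R[s,a]} expected rewards and \\texttt{pi[s,a]} the policy):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef average_reward(P, R, pi):\n    # P_pi[s, sp] = sum_a pi(a|s) p(sp|s, a)\n    P_pi = np.einsum('sa,sap->sp', pi, P)\n    n = P_pi.shape[0]\n    # steady state: mu P_pi = mu together with sum(mu) = 1\n    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])\n    b = np.zeros(n + 1)\n    b[-1] = 1.0\n    mu = np.linalg.lstsq(A, b, rcond=None)[0]\n    # r(pi) = sum_s mu(s) sum_a pi(a|s) R(s, a)\n    return mu, float(np.einsum('s,sa,sa->', mu, pi, R))\n\\end{verbatim}\n\n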
In the average-reward setting we define returns in terms of the difference between the reward and the expected reward for the policy\n\\begin{equation}\n    G_t \\doteq \\sum_{i \\geq t} \\left(R_{i+1} - r(\\pi)\\right)\n\\end{equation}\nwe call this quantity the \\emph{differential return} and the corresponding value functions (defined in the same way, just with this return instead) \\emph{differential value functions}. These new value functions also have Bellman equations:\n\n\\begin{align}\n    v_\\pi(s) &= \\sum_a \\pi(a \\vert{} s) \\sum_{s', r} p(s', r \\vert{} s, a) \\left[ r - r(\\pi) + v_\\pi(s')\\right] \\\\\n    q_\\pi(s, a) &= \\sum_{s', r} p(s', r \\vert{} s, a) \\left[ r - r(\\pi) + \\sum_{a'} \\pi(a' \\vert{} s') q_\\pi(s', a')\\right] \\\\\n    v_*(s) &= \\max_a \\sum_{s', r} p(s', r \\vert{} s, a) \\left[ r - r(\\pi) + v_*(s')\\right] \\\\\n    q_*(s, a) &= \\sum_{s', r} p(s', r \\vert{} s, a) \\left[ r - r(\\pi) + \\max_{a'} q_*(s', a')\\right].\n\\end{align}\nWe also have differential forms of the TD errors, where $\\bar{R}_t$ is the estimate of $r(\\pi)$ at $t$,\n\\begin{align}\n    \\delta_t &\\doteq R_{t+1} - \\bar{R}_{t+1} + \\hat{v}(S_{t+1}, \\vec{w}_t) - \\hat{v}(S_t, \\vec{w}_t)\\\\\n    \\delta_t &\\doteq R_{t+1} - \\bar{R}_{t+1} + \\hat{q}(S_{t+1}, A_{t+1}, \\vec{w}_t) - \\hat{q}(S_t, A_t, \\vec{w}_t).\n\\end{align}\\\\\n\nMany of the previous algorithms and theoretical results carry over to this new setting without change. For instance, the update for the semi-gradient Sarsa is defined in the same way just with the new TD error, corresponding pseudocode given in the box below.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/differential_semi_gradient_sarsa.png}\\\\\n\n\\subsection{Deprecating the Discounted Setting}\nSuppose we want to optimise the discounted value function $v_\\pi^\\gamma(s)$ over the on-policy distribution; then we would choose an objective $J(\\pi)$ with\n\\begin{align}\n    J(\\pi) &\\doteq \\sum_s \\mu_\\pi(s) v_\\pi^\\gamma(s) \\\\\n           &= \\sum_s \\mu_\\pi(s) \\sum_a \\pi(a \\vert{} s) \\sum_{s', r} p(s', r \\vert{} s, a) \\left[ r + \\gamma v_\\pi^\\gamma(s')\\right] \\\\\n           &= r(\\pi) + \\sum_s \\mu_\\pi(s) \\sum_a \\pi(a \\vert{} s) \\sum_{s', r} p(s' , r \\vert{}  s, a) \\gamma v_\\pi^\\gamma(s') \\\\\n           &= r(\\pi) + \\gamma \\sum_{s'} v_\\pi^\\gamma(s') \\sum_{s} \\mu_\\pi(s) \\sum_a \\pi(a \\vert{} s)p(s' \\vert{}  s, a) \\\\\n           &= r(\\pi) + \\gamma \\sum_{s'} v_\\pi^\\gamma(s') \\mu_\\pi(s') \\\\\n           &= r(\\pi) + \\gamma J(\\pi) \\\\\n           &\\vdotswithin{=} \\\\\n           &= \\frac{1}{1 - \\gamma} r(\\pi)\n\\end{align}\nso we may as well have optimised for the \\emph{undiscounted} average reward.\\\\\n\nThe root cause (note: why \\emph{root} cause?) of the difficulties with the discounted control setting is that when we introduce function approximation we lose the policy improvement theorem. This is because when we change the discounted value of one state, we are not guaranteed to have improved the policy in any useful sense (e.g. generalisation could ruin the policy elsewhere). This is an area of open research.\n\n\\subsection{Differential Semi-gradient $n$-step Sarsa}\nWe generalise $n$-step bootstrapping by introducing an $n$-step version of the TD error  in this new setting. In order to do that, we first introduce the differential $n$-step return using function approximation\n\\begin{equation}\n    G_{t:t+n} \\doteq \\sum_{i=t}^{t+n-1} \\left( R_{i+1} - \\bar{R}_{i+1} \\right) + \\hat{q}(S_{t+n}, A_{t+n}, \\vec{w}_{t+n-1})\n\\end{equation}\nwith $G_{t:t+n} = G_t$ if $t+n \\geq T$ as usual and where $\\bar{R}_i$ are the estimates of $\\bar{R}$. The $n$-step TD error is then defined as before just with the new $n$-step return\n\\[\n    \\delta_t \\doteq G_{t:t+n} - \\hat{q}(S_t, A_t, \\vec{w}_t).\n\\]\nPseudocode for the use of this return in the Sarsa framework is given in the box below. 
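The core of that update is compact enough to sketch directly. Assuming a linear approximator $\\hat{q}(s,a,\\vec{w}) = \\vec{w}^\\top \\vec{x}(s,a)$ with a user-supplied feature function (all names here are hypothetical, and this is not the book's pseudocode):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef differential_nstep_sarsa_update(w, r_bar, tau, S, A, Rs, x,\n                                    alpha, beta, n):\n    q = lambda s, a: w @ x(s, a)\n    # differential n-step return (no discounting):\n    G = sum(Rs[i + 1] - r_bar for i in range(tau, tau + n))\n    G += q(S[tau + n], A[tau + n])\n    delta = G - q(S[tau], A[tau])     # n-step TD error\n    r_bar += beta * delta             # R-bar learns from the TD error\n    w = w + alpha * delta * x(S[tau], A[tau])\n    return w, r_bar\n\\end{verbatim}\n\n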
Note that $\\bar{R}$ is updated using the TD error rather than the latest reward (see Exercise 10.9).\\\\\n\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/differential_semi_gradient_n_step_sarsa.png}\\\\\n", "meta": {"hexsha": "8a37f23c2cf2ac3d71f97b69007a335038fa4dbd", "size": 7908, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/chapters/chapter10/chapter10_content.tex", "max_stars_repo_name": "ElliotMunro200/reinforcement_learning_an_introduction", "max_stars_repo_head_hexsha": "c4fccb46a4bb00955549be3505144ec49f0132e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 234, "max_stars_repo_stars_event_min_datetime": "2018-09-01T00:26:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:55:50.000Z", "max_issues_repo_path": "notes/chapters/chapter10/chapter10_content.tex", "max_issues_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_issues_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-11-29T21:04:36.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T17:11:50.000Z", "max_forks_repo_path": "notes/chapters/chapter10/chapter10_content.tex", "max_forks_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_forks_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 63, "max_forks_repo_forks_event_min_datetime": "2018-07-31T04:53:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T04:03:43.000Z", "avg_line_length": 77.5294117647, "max_line_length": 568, "alphanum_fraction": 0.690060698, "num_tokens": 2491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711794579722, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5858899189561251}}
{"text": "\\section{Analytical Model}\n\\label{sec:analytical-model}\nIn this Section we define the analytical model of the system.\n%\nIn particular, we will show the queue model, Markov Chain and performance metrics formulas for the system both running the Off-Loading Policy 1 and the Off-Loading Policy 2.\n%\nAt the end, we will explain how we solved the analytical model to obtain theoretical results for the target case.\n\n\\paragraph{Queue Model}\nAs the system is characterized by Poisson arrivals and Exponential services, we could consider treating the queue model as a Jackson Network. \nNevertheless, this would not be correct without a very strong assumption on the routing probabilities, In fact, a Jackson Network requires the routing probabilities  to be static, i.e. independent of the system state, and this is not the case.\n\nSo, \\textit{we assume the routing probabilities to be static in order to unlock the potential of the Jackson Network in our analytical dissertation}.\n\nIn Figures~\\ref{fig:analytical-model-queue-1} and \\ref{fig:analytical-model-queue-2}, we show the queue models for the system with Off-Loading Policy 1 and Off-Loading Policy 2, respectively.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{fig/analytical-model-queue-1}\n\t\\caption{Queue model for the system with OP1.}\n\t\\label{fig:analytical-model-queue-1}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{fig/analytical-model-queue-2}\n\t\\caption{Queue model for the system with OP2.}\n\t\\label{fig:analytical-model-queue-2}\n\\end{figure}\n\nThe definition of the routing probabilities relies on the following subsets of Cloudlet states $S_{clt}$, whose definition strictly depends on the adopted off-loading policy:\n\n\\begin{itemize}\n\t\\item $A_{clt,1}$:  subset of states where a task belonging to the $1^{st}$ class is accepted in the Cloudlet.\n\t\n\tIn OP1, a $1^{st}$ class task is accepted in the Cloudlet as long as not all the $N$ resources are occupied.\n\tIn OP2, a $1^{st}$ class task is accepted in the Cloudlet as long as not all the $N$ resources are occupied or there is at least one $2^{nd}$ class task to be interrupted.\n\n\t\\begin{equation}\n\t\tA_{clt,1} :=\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\{(n_{clt,1},n_{clt,2})\\in S_{clt} : \\\\\n\t\t\t\t& n_{clt,1}+n_{clt,2}<N\\}\n\t\t\t\\end{aligned} & \\mbox{if } OP1 \\\\\n\t\t\t\\\\\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\{(n_{clt,1},n_{clt,2})\\in S_{clt} : \\\\\n\t\t\t\t& n_{clt,1}+n_{clt,2}<N \\vee n_{clt,2}>0\\}\n\t\t\t\\end{aligned} & \\mbox{if } OP2\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\n\t\n\t\\item $A_{clt,2}$: subset of states where  a task belonging to the $2^{nd}$ class is accepted in the Cloudlet.\n\t\n\tIn OP1, a $2^{nd}$ class task is accepted in the Cloudlet as long as not all the $N$ resources are occupied.\n\tIn OP2, a $2^{nd}$ class task is accepted in the Cloudlet as long as not all the $N$ resources are occupied and the number of resources occupied by $2^{nd}$ class tasks satisfies the condition $(n_{clt,1}+n_{clt,2}<S)$.\n\t\n\t\\begin{equation}\n\t\tA_{clt,2} :=\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\{(n_{clt,1},n_{clt,2})\\in S_{clt} : \\\\\n\t\t\t\t& n_{clt,1}+n_{clt,2}<N\\}\n\t\t\t\\end{aligned} & \\mbox{if } OP1 \\\\\n\t\t\t\\\\\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\{(n_{clt,1},n_{clt,2})\\in S_{clt} : \\\\\n\t\t\t\t& n_{clt,1}+n_{clt,2}<N \\wedge n_{clt,1} + n_{clt,2}<S\\}\n\t\t\t\\end{aligned} & 
\\mbox{if } OP2\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\n\t\n\t\\item $R_{clt,2}$: subset of states where a task belonging to the $2^{nd}$ class is interrupted in the Cloudlet and it is restarted in the Cloud.\n\t\n\tIn OP1, the set is empty because task interruption is not provided by the policy.\n\tIn OP2, a $2^{nd}$ class task is interrupted in the Cloudlet and restarted in the Cloud as long as the former does not have free resources and there is at least one $2^{nd}$ class task to interrupt.\n\t\n\t\\begin{equation}\n\t\tR_{clt,2} :=\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\emptyset\n\t\t\t\\end{aligned} & \\mbox{if } OP1 \\\\\n\t\t\t\\\\\n\t\t\t\\begin{aligned}\n\t\t\t\t& \\{(n_{clt,1},n_{clt,2})\\in S_{clt} : \\\\\n\t\t\t\t& n_{clt,1}+n_{clt,2}\\geq S \\wedge n_{clt,2}>0\\}\n\t\t\t\\end{aligned} & \\mbox{if } OP2\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\n\\end{itemize}\n\nGiven such sets, the routing probabilities are accordingly defined:\n\n\\begin{itemize}\n\t\n\t\\item $a_{clt,1}$: the probability for a demanding $1^{st}$ class task to be accepted in the Cloudlet.\n\t\n\t\\begin{equation} \n\t\ta_{clt,1} := \\sum_{s\\in A_{clt,1}} \\pi_{s}\n\t\\end{equation}\n\t\n\t\\item $a_{clt,2}$: the probability for a demanding $2^{nd}$ class task to be accepted in the Cloudlet.\n\t\n\t\\begin{equation} \n\t\ta_{clt,2} := \\sum_{s\\in A_{clt,2}} \\pi_{s}\n\t\\end{equation}\n\t\t\n\t\\item $r_{clt,2}$: the probability for a running $2^{nd}$ class task to be interrupted in the Cloudlet and restarted in the Cloud.\n\t\n\t\\begin{equation} \n\t\t\\begin{split}\n\t\t\tr_{clt,2} & = \\sum_{s\\in R_{clt,2}} \\pi_{s} \\Big(\\frac{\\lambda_{1}}{\\lambda_{1}+\\lambda_{2}}\\Big) \\\\\n\t\t\\end{split}\n\t\\end{equation}\n\\end{itemize}\n\n\\paragraph{Markov Chain}\nAs the system is characterized by Poisson arrivals\\footnote{same as Exponential inter-arrivals.} and Exponential services, the Markovian condition holds true and we can then determine the Markov Chain\\footnote{If the Markovian condition is not satisfied, the Markov Chain solution must be considered only as an approximation.} whose resolution allows us to compute the routing probabilities.\n\nFor the sake of simplicity, we consider here the simple case with $N=2$ in order to (i) give an idea of the system of equations to be solved and (ii) graphically recognize the critical states. \nIn fact, the representation of the Markov Chain and the associated equations would be impractical for the target case $N=20$, due to the combinatorial explosion of the state space.\n\nIn Figure~\\ref{fig:analytical-model-markov-1} we show the Markov Chain for the system with Off-Loading Policy 1 and $N=2$. 
\nIn this chain, red states represent those where both the $1^{st}$ and $2^{nd}$ class traffic is blocked by the Cloudlet and forwarded to the Cloud.\nThe associated flow balance equations are listed in Equation~\\ref{eqn:analytical-model-markov-1}.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{fig/analytical-model-markov-1}\n\t\\caption{Markov Chain for the system with OP1 (N=2).}\n\t\\label{fig:analytical-model-markov-1}\n\\end{figure}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\pi_{0,0}(\\lambda_{1}+\\lambda_{2})& = \\pi_{1,0}\\mu_{clt,1}+\\pi_{0,1}\\mu_{clt,2} \\\\\n\t\t\\pi_{0,1}(\\lambda_{1}+\\lambda_{2}+\\mu_{clt,2}) & = \\pi_{0,0}\\lambda_{2}+\\pi_{1,1}\\mu_{clt,1}+\\pi_{0,2}2\\mu_{clt,2} \\\\\n\t\t\\pi_{1,0}(\\lambda_{1}+\\lambda_{2}+\\mu_{clt,1}) & = \\pi_{0,0}\\lambda_{1}+\\pi_{1,1}\\mu_{clt,2}+\\pi_{2,0}2\\mu_{clt,1} \\\\\n\t\t\\pi_{1,1}(\\mu_{clt,1}+\\mu_{clt,2}) & = \\pi_{0,1}\\lambda_{1}+\\pi_{1,0}\\lambda_{2} \\\\\n\t\t\\pi_{0,2}(2\\mu_{clt,2}) & = \\pi_{0,1}\\lambda_{2} \\\\\n\t\t\\pi_{2,0}(2\\mu_{clt,1}) & = \\pi_{1,0}\\lambda_{1} \\\\\n\t\t1 & = \\pi_{0,0}+\\pi_{0,1}+\\pi_{1,0}+\\pi_{1,1}+\\pi_{0,2}+\\pi_{2,0}\\\\\n\t\\end{split}\n\t\\label{eqn:analytical-model-markov-1}\n\\end{equation}\n\nIn Figure~\\ref{fig:analytical-model-markov-2} we show the Markov Chain for the system with Off-Loading Policy 2 and $N=S=2$. \nIn this chain, the red state represents the one where both the $1^{st}$ and $2^{nd}$ class traffic is blocked; whereas the orange states represent those where (i) a $1^{st}$ class arrival is accepted in the Cloudlet, causing the restart in the Cloud of a random running $2^{nd}$ class task, and (ii) a $2^{nd}$ class arrival is blocked by the Cloudlet and forwarded to the Cloud.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{fig/analytical-model-markov-2}\n\t\\caption{Markov Chain for the system with OP2 (N=S=2).}\n\t\\label{fig:analytical-model-markov-2}\n\\end{figure}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\pi_{0,0}(\\lambda_{1}+\\lambda_{2})& = \\pi_{1,0}\\mu_{clt,1}+\\pi_{0,1}\\mu_{clt,2} \\\\\n\t\t\\pi_{0,1}(\\lambda_{1}+\\lambda_{2}+\\mu_{clt,2}) & = \\pi_{0,0}\\lambda_{2}+\\pi_{1,1}\\mu_{clt,1}+\\pi_{0,2}2\\mu_{clt,2} \\\\\n\t\t\\pi_{1,0}(\\lambda_{1}+\\lambda_{2}+\\mu_{clt,1}) & = \\pi_{0,0}\\lambda_{1}+\\pi_{1,1}\\mu_{clt,2}+\\pi_{2,0}2\\mu_{clt,1} \\\\\n\t\t\\pi_{1,1}(\\lambda_{1}+\\mu_{clt,1}+\\mu_{clt,2}) & = \\pi_{0,1}\\lambda_{1}+\\pi_{1,0}\\lambda_{2}+\\pi_{0,2}\\lambda_{1} \\\\\n\t\t\\pi_{0,2}(\\lambda_{1}+2\\mu_{clt,2}) & = \\pi_{0,1}\\lambda_{2} \\\\\n\t\t\\pi_{2,0}2\\mu_{clt,1} & = \\pi_{1,0}\\lambda_{1}+\\pi_{1,1}\\lambda_{1} \\\\\n\t\t1 & = \\pi_{0,0}+\\pi_{0,1}+\\pi_{1,0}+\\pi_{1,1}+\\pi_{0,2}+\\pi_{2,0}\\\\\n\t\\end{split}\n\t\\label{eqn:analytical-model-markov-2}\n\\end{equation}\n\n
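Balance systems such as Equation~\\ref{eqn:analytical-model-markov-1} are linear in the limiting probabilities, so they can be generated and solved mechanically. The following numpy sketch (illustrative only; the rate values are hypothetical) builds and solves the OP1 chain with $N=2$:\n\n\\begin{verbatim}\nimport numpy as np\n\nS = [(0,0), (0,1), (1,0), (1,1), (0,2), (2,0)]\nl1, l2, m1, m2 = 1.2, 0.9, 2.0, 1.5   # hypothetical rates\nidx = {s: i for i, s in enumerate(S)}\n\nQ = np.zeros((6, 6))                  # generator matrix\nfor (n1, n2) in S:\n    i = idx[(n1, n2)]\n    if (n1 + 1, n2) in idx: Q[i, idx[(n1 + 1, n2)]] = l1  # class-1 arrival\n    if (n1, n2 + 1) in idx: Q[i, idx[(n1, n2 + 1)]] = l2  # class-2 arrival\n    if n1 > 0: Q[i, idx[(n1 - 1, n2)]] = n1 * m1          # class-1 service\n    if n2 > 0: Q[i, idx[(n1, n2 - 1)]] = n2 * m2          # class-2 service\n    Q[i, i] = -Q[i].sum()\n\n# pi Q = 0 together with the normalisation sum(pi) = 1\nA = np.vstack([Q.T, np.ones(6)])\nb = np.zeros(7)\nb[-1] = 1.0\npi = np.linalg.lstsq(A, b, rcond=None)[0]\n\\end{verbatim}\n\n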
In Figure~\\ref{fig:analytical-model-markov-2-1} we show the Markov Chain for the system with Off-Loading Policy 2 and $N=2,S=1$. \nIn this chain, the red state represents the one where both the $1^{st}$ and $2^{nd}$ class traffic is blocked; the orange state represents the one where (i) a $1^{st}$ class arrival is accepted in the Cloudlet, causing the restart in the Cloud of a random running $2^{nd}$ class task, and (ii) a $2^{nd}$ class arrival is blocked by the Cloudlet and forwarded to the Cloud; whereas the grey state represents the one where a $1^{st}$ class arrival is accepted in the Cloudlet and a $2^{nd}$ class arrival is blocked by the Cloudlet and forwarded to the Cloud.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{fig/analytical-model-markov-2-1}\n\t\\caption{Markov Chain for the system with OP2 (N=2, S=1).}\n\t\\label{fig:analytical-model-markov-2-1}\n\\end{figure}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\pi_{0,0}(\\lambda_{1}+\\lambda_{2})& = \\pi_{1,0}\\mu_{clt,1}+\\pi_{0,1}\\mu_{clt,2} \\\\\n\t\t\\pi_{0,1}(\\lambda_{1}+\\mu_{clt,2}) & = \\pi_{0,0}\\lambda_{2} \\\\\n\t\t\\pi_{1,0}(\\lambda_{1}+\\mu_{clt,1}) & = \\pi_{0,0}\\lambda_{1}+\\pi_{2,0}2\\mu_{clt,1} \\\\\n\t\t\\pi_{2,0}2\\mu_{clt,1} & = \\pi_{1,0}\\lambda_{1} \\\\\n\t\t1 & = \\pi_{0,0}+\\pi_{0,1}+\\pi_{1,0}+\\pi_{2,0}\\\\\n\t\\end{split}\n\t\\label{eqn:analytical-model-markov-2-1}\n\\end{equation}\n\n\\paragraph{Accepted Workload}\nGiven the routing probabilities, we can determine the following accepted workloads:\n\n\\begin{itemize}\n\t\\item \\textit{Cloudlet}: arrival rate of tasks belonging to $j$-th class accepted in the Cloudlet, valid both for OP1 and OP2:\n\t\\begin{equation}\n\t\\lambda_{clt,j} = a_{clt,j}\\lambda_{j}\n\t\\end{equation}\n\t\n\t\\item \\textit{Cloud}: arrival rate of tasks belonging to $j$-th class forwarded to the Cloud, valid both for OP1 and OP2:\n\t\\begin{equation}\n\t\\lambda_{cld,j} = (1-a_{clt,j})\\lambda_{j}\n\t\\end{equation}\n\t\n\t\\item \\textit{Restarts}: rate of tasks belonging to $2^{nd}$ class interrupted in the Cloudlet and restarted in the Cloud, valid only for OP2:\n\t\\begin{equation}\n\t\\lambda_{r} = r_{clt,2}(\\lambda_{1}+\\lambda_{2})\n\t\\end{equation}\n\\end{itemize}\n\n\\paragraph{Performance metrics}\nGiven the accepted workloads we can determine the following performance metrics for classed tasks in each subsystem\\footnote{In every formula, we omitted the symbol $E[\\cdot]$ of the expected value in order to lighten the notation.}.\n\n\\begin{itemize}\n\t\n\t\\item $1^{st}$ class in Cloudlet:\n\t\\begin{equation} \n\t\t\\begin{split}\n\t\t\tN_{clt,1} &= \\sum_{s: (n_{clt,1},n_{clt,2}) \\in S_{clt}} \\pi_{s} n_{clt,1}  \\\\\n\t\t\tX_{clt,1} &= \\lambda_{clt,1} \\\\\n\t\t\tT_{clt,1} &= \\frac{N_{clt,1}}{X_{clt,1}} \\\\\n\t\t\\end{split}\n\t\\end{equation}\n\n\t\\item $2^{nd}$ class in Cloudlet:\n%\\footnote{We assume here to know the expected time lost in Cloudlet by $2^{nd}$ class tasks before their interruption, i.e. 
%$E[T_{clt,2,lost}]$. In particular, as we are not able to determine it from the Markov Chain in a simple way, we will assume the experimental value computed by the simulator.}:\n\t\\begin{equation} \n\t\t\\begin{split}\n\t\t\tN_{clt,2} &= \\sum_{s: (n_{clt,1},n_{clt,2}) \\in S_{clt}} \\pi_{s} n_{clt,2}  \\\\\n\t\t\t%E[N_{clt,2}] &= \\lambda_{clt,2}E[T_{clt,2}]-\\lambda_{r} E[T_{clt,2,lost}] \\\\\n\t\t\t%E[N_{clt,2}] &= \\lambda_{clt,2}E[T_{clt,2}]-E[N_{cld,2}]^{[R]} \\\\\n\t\t\tX_{clt,2} &= \\lambda_{clt,2} - \\lambda_{r} \\\\\n\t\t\tT_{clt,2} &= \\frac{N_{clt,2}}{X_{clt,2}} \\\\\n\t\t\\end{split}\n\t\\end{equation}\n\nIt is important to notice here that we assumed that the Cloudlet throughput for $2^{nd}$ class tasks is only given by the portion of traffic that is served to completion by the Cloudlet, thus excluding the portion of interrupted traffic.\n\n\t\\item $1^{st}$ class in Cloud:\n\t\\begin{equation} \n\t\t\\begin{split}\n\t\t\tT_{cld,1} &= \\frac{1}{\\mu_{cld,1}} \\\\\n\t\t\tN_{cld,1} &= \\lambda_{cld,1}T_{cld,1} \\\\\n\t\t\tX_{cld,1} &= \\lambda_{cld,1} \\\\\n\t\t\\end{split}\n\t\\end{equation}\n\t\n\t\\item $2^{nd}$ class in Cloud (NR: not restarted\\footnote{$2^{nd}$ class tasks that are served by the Cloud because they have not been accepted in the Cloudlet.}):\n\t\\begin{equation} \n\t\\begin{split}\n\t\tT_{cld,2}^{[NR]} &= \\frac{1}{\\mu_{cld,2}} \\\\\n\t\tN_{cld,2}^{[NR]} &= \\lambda_{cld,2}T_{cld,2}^{[NR]} \\\\\n\t\tX_{cld,2}^{[NR]} &= \\lambda_{cld,2} \\\\\n\t\\end{split}\n\t\\end{equation}\n\t\n\t\\item $2^{nd}$ class in Cloud (R: restarted\\footnote{$2^{nd}$ class tasks that are served by the Cloud because they have been interrupted in the Cloudlet.}):\n\t\\begin{equation} \n\t\\begin{split}\n\t\tT_{cld,2}^{[R]} &= T_{setup}+ T_{cld,2}^{[NR]} \\\\\n\t\tN_{cld,2}^{[R]} &= \\lambda_{r} T_{cld,2}^{[R]} \\\\\n\t\tX_{cld,2}^{[R]} &= \\lambda_{r} \\\\\n\t\\end{split}\n\t\\end{equation}\n\n\tIt is important to notice here that we decided to assign the entire setup time to the Cloud.\n\t\n\t\\item $2^{nd}$ class in Cloud (both restarted and not restarted):\n\t\\begin{equation} \n\t\\begin{split}\n\t\tT_{cld,2} &= \\sum_{m=NR,R} \\frac{N_{cld,2}^{[m]}}{N_{cld,2}}T_{cld,2}^{[m]} \\\\\n\t\tN_{cld,2} &= \\sum_{m=NR,R} N_{cld,2}^{[m]} \\\\\n\t\tX_{cld,2} &= \\sum_{m=NR,R} X_{cld,2}^{[m]} \\\\\n\t\\end{split}\n\t\\end{equation}\n\\end{itemize}\n\nThen we can determine the following performance metrics for each subsystem:\n\n\\begin{itemize}\n\t\\item Cloudlet:\n\t\\begin{equation} \n\t\\begin{split}\n\t\tT_{clt} &= \\sum_{j=1,2} \\frac{N_{clt,j}} {N_{clt}} T_{clt,j} \\\\\n\t\tN_{clt} &= \\sum_{j=1,2} N_{clt,j} \\\\\n\t\tX_{clt} &= \\sum_{j=1,2} X_{clt,j} \\\\\n\t\\end{split}\n\t\\end{equation}\n\t\n\t\\item Cloud:\n\t\\begin{equation} \n\t\\begin{split}\n\t\tT_{cld} &= \\sum_{j=1,2} \\frac{N_{cld,j}} {N_{cld}} T_{cld,j} \\\\\n\t\tN_{cld} &= \\sum_{j=1,2} N_{cld,j} \\\\\n\t\tX_{cld} &= \\sum_{j=1,2} X_{cld,j} \\\\\n\t\\end{split}\n\t\\end{equation}\n\n\tIt is important to notice here that we assumed that the Cloud throughput is given by all the traffic that is served to completion by the Cloud, i.e. 
both the portion of traffic directly assigned to the Cloud and the one that is restarted in the Cloud.\n\t\\end{itemize}\n\nFinally, we can determine the following performance metrics for the whole system:\n\n\\begin{equation} \n\\begin{split}\n\tT &= \\sum_{i=cld,clt} \\frac{N_{i}} {N} T_{i} \\\\\n\tN &= \\sum_{i=cld,clt} N_{i} \\\\\n\tX &= \\sum_{i=cld,clt} X_{i} \\\\\n\\end{split}\n\\end{equation}\n\n\n\\paragraph{Resolution}\nWe solved the analytical models with a custom Python script executing the following steps:\n\n\\begin{itemize}\n\\item receives as input the system configuration\n\\item  generates the Markov Chain representing the Cloudlet\n\\item  generates the system of equations from the Markov Chain\n\\item  computes limiting probabilities by solving the system\n\\item  computes routing probabilities\n\\item  computes performance metrics and\n\\item  displays a report of results.\n\\end{itemize} \n\n\nAnalytical results are presented in Tables~\\ref{tbl:evaluation-performance-metrics-1}, \\ref{tbl:evaluation-performance-metrics-2-5} and \\ref{tbl:evaluation-performance-metrics-2-20}, alongside their experimental counterparts.\n%\nWe preferred to present analytical and experimental results in a single common view, in order to give the reader an idea of how closely the simulation results match the analytical ones.", "meta": {"hexsha": "dcdd984fe1eaa8babd81983f711ed22f7738f38b", "size": 15362, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pydes/sec/analytical-model.tex", "max_stars_repo_name": "gmarciani/research", "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_issues_repo_path": "pydes/sec/analytical-model.tex", "max_issues_repo_name": "gmarciani/research", "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pydes/sec/analytical-model.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 46.2710843373, "max_line_length": 545, "alphanum_fraction": 0.6834396563, "num_tokens": 5510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711718571775, "lm_q2_score": 0.6893056231680121, "lm_q1q2_score": 0.5858899082918573}}
{"text": "\\section{Affine approximations}\n\n \n{\\it Remark} 6.1.\nTo show that a sub-box of the parameter box ${\\cal W}$ is killed by one of the interesting conditions (plus associated killerword) we need to show that at each point in the sub-box, the killerword evaluated at that point satisfies the given condition (see\nSection 5).  That is, we are simply analyzing a certain function from the sub-box to ${\\bf C}$.  \n\nAs described in Remark FIXME(6.5), this analysis can be pulled back from the sub-box in question to the closed polydisc \n$A = \\{(z_0,z_1,z_2) \\in {\\bf C}^3 : |z_k| \\le 1$  for $k \\in \\{0,1,2\\}\\}.$  \nLoosely,  we will analyze such a function on $A$ by using  Taylor approximations consisting of an affine approximating\nfunction together with a bound on the ``error\" in the approximation (this could also be described as a ``remainder bound\").  This ``error\" \nis separate from round-off error,  which will be analyzed in Sections 7 and 8.  \n\n\\demo{Problems {\\rm FIXME(6.2)}}\nThere are two immediate problems likely to arise from this Taylor approximation approach.\nThe first problem is the appearance of unpleasant functions such as \nArccosh.  We have already taken care of this problem by ``exponentiating'' our preliminary parameter space ${\\cal P}$.  This resulted in all functions under consideration being built up from the co-ordinate functions $ L^{\\prime}, D^{\\prime},$ and $ R^{\\prime}$ on ${\\cal W}$ by means of the elementary operations $+,\\ -,\\ \\times,\\ /,\\ \\sqrt.$ \n\nSecond,  for a given ``built-up function'' the computer needs to be able to compute the Taylor approximation, and the error term.  This will be handled \nby developing combination formulas for elementary operations (see the propositions below).  Specifically, given two Taylor\napproximations with error terms representing functions $g$ and $h$ and an elementary operation on $g$ and $h$, we will show how to\nget the Taylor approximation with error term for the resultant function from the two original Taylor approximations. \n\nA similar approach was developed independently by Stolfi and Figuereido (see [FS]).\n\\enddemo\n\n{\\it Remark} FIXME(6.3).\nWe set up the Taylor approximation approach rigorously\nas follows in Definition FIXME(6.4).\nThe notation will be a bit unusual, but we are motivated by a desire to stay close to the notation used in the checker computer programs, {\\it verify}.  However, it should be pointed out that the formulas in this\nsection will be superseded by the ones in Section 8, which incorporate a round-off error analysis.  It is the Section 8 formulas that are\nused in {\\it verify}.\n\n\\vglue8pt {\\it Definition {\\rm FIXME(6.4)}}.  An {\\it AffApprox}  $x$ is a five-tuple\n$(x.f;\\ x.f_0,\\ x.f_1,\\ x.f_2;\\ x.e),$\nconsisting of four complex numbers $x.f,\\ x.f_0,\\ x.f_1,\\ x.f_2$ and one real number $x.e,$  which represents all functions \n$g: A \\rightarrow {\\bf C}$ such that\n$$|g(z_0,z_1,z_2) - (x.f + x.f_0 z_0 + x.f_1 z_1 + x.f _2 z_2)| \\le x.e$$\nfor all $(z_0,z_1,z_2) \\in A.$  That is, $x$ represents all functions from $A$ to ${\\bf C}$ that are $x.e$-well-approximated by the affine function $x.f + x.f_0 z_0 + x.f_1 z_1 + x.f _2 z_2$.  
We will denote this set of functions associated with $x$ by $S(x)$.\n\\vglue6pt\n\n{\\it Remark} 6.5.\nAs mentioned in Remark FIXME(6.1), given a sub-box to analyze, instead of working with functions defined on the sub-box, we will work with corresponding functions defined on $A.$  Specifically, rather than build up a function by elementary operations performed on the co-ordinate functions \n$L^{\\prime}, D^{\\prime}, R^{\\prime}$ \nrestricted to the given sub-box, we will perform the elementary operations on the following functions defined on $A$, \n$$(p_0 + i p_3; s_0 + i s_3,0,0; 0)\\  \\ (p_1 + i p_4; 0, s_1 + i s_4,0;0) \\   \\ (p_2 + i p_5; 0,0, s_2 + i s_5; 0)$$\nwhere $(p_0 + i p_3, p_1 + i p_4, p_2 + i p_5)$ is the center of the sub-box in question, and the $s_i$ describe the six dimensions of the box. In the computer programs, these three functions are called {\\it along}, {\\it ortho}, and {\\it whirle,} respectively, and $p_i$ and $s_i$\n are denoted {\\it pos}[$i$] and {\\it size}[$i$], respectively.\n\\vglue6pt\nAfter the following remarks, we state and prove the combination formulas. \n\n\\vglue6pt {\\it Remarks} 6.6.\ni) In order to co-ordinate numbering with Section 8, we will break with the convention used previously in this paper and start the\nnumbering of the propositions with FIXME(6.1). However, we will end this section with Example FIXME(6.7).\n\\vglue2pt\nii)  The negation of a set of functions is the set consisting of the negatives of the original functions, and similarly for other operations.\n\\vglue2pt\niii)  The propositions that follow include in their statements the definitions of the various operations on AffApproxes.  What needs to\nbe proved is that the $S$ functions behave as expected.  For example, we need to show that under the definition given for addition, the\nset of functions $S(x+y)$ contains all functions obtained by adding a function from $S(x)$ to a function from $S(y).$\n\n\\nonumproclaim{Proposition FIXME(6.1) {\\rm (unary minus)}} If $x$ is an {\\rm AffApprox,} then \n$S(-x) = -(S(x))$ where \n$$-x \\equiv (-x.f;\\ -x.f_0,\\ -x.f_1,\\ -x.f_2;\\ x.e).$$ \n\\endproclaim\n\n{\\it Proof}.\n$$|g(z_0,z_1,z_2) - (x.f + x.f_0 z_0 + x.f_1 z_1 + x.f_2 z_2)| \\le x.e$$\nif and only if \n\\vglue6pt\n\\hfill $|-g(z_0,z_1,z_2) - (-x.f - x.f_0 z_0 - x.f_1 z_1 - x.f_2 z_2)| \\le x.e.$ \\hfill\\qed\n\n\n\\nonumproclaim{Proposition FIXME(6.2) {\\rm (addition)}} \\hskip-8pt If $x$ and $y$ are {\\rm AffApproxes,} then $S(x + y) \\supseteq S(x)\n+ S(y)${\\rm ,} where\n$$x + y \\equiv (x.f + y.f; x.f_0 + y.f_0, x.f_1 + y.f_1, x.f_2 + y.f_2; x.e + y.e).$$\n\\endproclaim  \n\n\\demo{Proof}  If $g\\in S(x)$ and $h \\in S(y)$ then we must show that $g + h \\in S(x + y).$\n\\begin{eqnarray*}\n&&|(g + h)(z_0,z_1,z_2) \\\\\n&&\\quad- ((x.f + y.f) + (x.f_0 + y.f_0) z_0 + (x.f_1 + y.f_1) z_1 + (x.f_2 + y.f_2) z_2)| \\\\\n&&\\qquad \n\\le |g(z_0,z_1,z_2) - \n(x.f + (x.f_0) z_0 + (x.f_1) z_1 + (x.f_2) z_2)| \\\\\n&&\\quad\\qquad + \n|h(z_0,z_1,z_2) - (y.f + (y.f_0) z_0 + (y.f_1) z_1 + (y.f_2) z_2)| \n                                        \\\\\n&&\\qquad \\le x.e + y.e.\\\\\n\\noalign{\\vskip-36pt}\n\\end{eqnarray*}\n\\enddemo\n\n\\nonumproclaim{Proposition FIXME(6.3) {\\rm (subtraction)}} \\hskip-8pt If $x$ and $y$ are {\\rm AffApproxes,} then $S(x - y) \\supseteq\nS(x) - S(y)${\\rm ,}\n where\n$$x - y \\equiv (x.f - y.f; x.f_0 - y.f_0, x.f_1 - y.f_1, x.f_2 - y.f_2; x.e + y.e).$$ \n\\endproclaim\n\n
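Such combination formulas translate almost verbatim into code. The following Python sketch (a simplified illustration that ignores the round-off issues of Sections 7 and 8, and is unrelated to the actual sources of {\\it verify}) implements the operations proved so far:\n\n\\begin{verbatim}\nclass AffApprox:\n    # f, f0, f1, f2 are complex; e is a real error bound\n    def __init__(self, f, f0, f1, f2, e):\n        self.f, self.f0, self.f1, self.f2, self.e = f, f0, f1, f2, e\n\n    def size(self):\n        return abs(self.f0) + abs(self.f1) + abs(self.f2)\n\n    def __neg__(self):                       # Proposition 6.1\n        return AffApprox(-self.f, -self.f0, -self.f1, -self.f2, self.e)\n\n    def __add__(self, y):                    # Proposition 6.2\n        return AffApprox(self.f + y.f, self.f0 + y.f0,\n                         self.f1 + y.f1, self.f2 + y.f2,\n                         self.e + y.e)\n\n    def __sub__(self, y):                    # Proposition 6.3\n        return self + (-y)\n\\end{verbatim}\n\n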
To co-ordinate numbering with Section 8 (where we incorporate round-off error into these formulas) we include the following special\ncases of Propositions FIXME(6.2) and FIXME(6.3).  Similarly for Propositions FIXME(6.7, 6.9, and 6.10). \n\nIn what follows,  ``double\" refers to a real number, and has an associated AffApprox, with the last four entries zero.  When we do\nmachine arithmetic in Sections FIXME(7 and 8), doubles will be  machine numbers.\n\n\\nonumproclaim{Proposition FIXME(6.4) {\\rm (addition of an AffApprox and a double)}} If $x$ is an {\\rm AffApprox}  and $y$ is a double{\\rm ,}\n then $S(x +\ny)\n\\supseteq S(x) + S(y)${\\rm ,} where\n$$x + y \\equiv (x.f + y; x.f_0, x.f_1 , x.f_2 ; x.e).$$ \n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME(6.5) {\\rm (subtraction of a double from an AffApprox)}}  If $x$ is an {\\rm AffApprox} and $y$ is a double{\\rm ,}\n then\n$S(x - y)\n\\supseteq S(x) - S(y)${\\rm ,} where\n$$x - y \\equiv (x.f - y; x.f_0, x.f_1 , x.f_2 ; x.e).$$ \n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME(6.6) {\\rm (multiplication)}}  If $x$ and $y$ are {\\rm AffApproxes,}\n then $S(x \\times y) \\supseteq S(x) \\times\nS(y)${\\rm ,} where\n\\begin{eqnarray*}\nx \\times y &\\equiv& (x.f \\times y.f; x.f \\times y.f_0 + x.f_0 \\times y.f, \\\\\n&& \\quad\nx.f \\times y.f_1 + x.f_1 \\times y.f, x.f \\times y.f_2 + x.f_2 \\times y.f; \n\\\\\n&& \\quad\n({\\rm size}(x) + x.e) \\times ({\\rm size}(y) + y.e) + (|x.f| \\times y.e + x.e \\times |y.f|))\n                                            \\end{eqnarray*}\n with ${\\rm size}(x) = |x.f_0| + |x.f_1| + |x.f_2|$ and ${\\rm size}(y) = |y.f_0| + |y.f_1| + |y.f_2|.$\n\\endproclaim\n\n\\demo{Proof}  If $g \\in S(x)$ and $h \\in S(y)$ then we must show that $g\\times h \\in S(x \\times y).$  That is, we need to show\n\\begin{eqnarray*}\n&& |(g \\times h)(z_0,z_1,z_2)   - \n((x.f \\times y.f) + \n(x.f \\times y.f_0 + x.f_0 \\times y.f) z_0 \\\\\n&+& (x.f \\times y.f_1 + x.f_1 \\times y.f) z_1 + (x.f \\times y.f_2 + x.f_2 \\times y.f) z_2)| \n\\\\\n&\\le& ({\\rm size}(x) + x.e) \\times ({\\rm size}(y) + y.e) + (|x.f| \\times y.e + x.e \\times |y.f|).\n\\end{eqnarray*}\nNote that for any point $(z_0, z_1, z_2) \\in A$ and any functions $g \\in S(x)$ and $h \\in S(y)$ we can find complex numbers $u, v$ with\n$|u| \\le 1$ and $|v| \\le 1$, such that\n$$ g(z_0, z_1, z_2) = x.f + (x.f_0 z_0 + x.f_1 z_1 + x.f_2 z_2) + (x.e) u$$ and \n$$ h(z_0, z_1, z_2) = y.f + (y.f_0 z_0 + y.f_1 z_1 + y.f_2 z_2) + (y.e) v.$$\nMultiplying out, we see that \n\\begin{eqnarray*}\n&&(g \\times h) (z_0, z_1, z_2)\\\\\n& = &\n(x.f \\times y.f) + \n(x.f \\times y.f_0 + x.f_0 \\times y.f) z_0\\\\\n&& +\\ (x.f \\times y.f_1 + x.f_1 \\times y.f) z_1 +\n(x.f \\times y.f_2 + x.f_2 \\times y.f) z_2\\\\&& + \\\n(x.f \\times y.e) v + (x.e \\times y.f) u + \n((x.f_0 z_0 + x.f_1 z_1 + x.f_2 z_2) + (x.e) u)\\\\\n&& \\times\\\n((y.f_0 z_0 + y.f_1 z_1 + y.f_2 z_2) + (y.e) v).\n                                              \\end{eqnarray*}\nHence,\n\\begin{eqnarray*}\n&& |(g \\times h) (z_0, z_1, z_2) - ((x.f \\times y.f) \\\\\n& & + \\\n((x.f \\times y.f_0 + x.f_0 \\times y.f) z_0\\\\\n&& +\\ (x.f \\times y.f_1 + x.f_1 \\times y.f) z_1 +\n(x.f \\times y.f_2 + x.f_2 \\times y.f) z_2))|\n \\\\\n&\\le& (|x.f| y.e + x.e |y.f|) + \n({\\rm size}(x) + x.e) \\times ({\\rm size}(y) + y.e).\\\\\n\\noalign{\\vskip-36pt}\n\\end{eqnarray*}\n\\enddemo\n\n
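Continuing the hypothetical sketch above, the multiplication rule reads:\n\n\\begin{verbatim}\ndef aff_mul(x, y):\n    # Proposition 6.6: affine part from the cross terms with x.f, y.f;\n    # the quadratic terms and input errors feed the new error bound\n    e = ((x.size() + x.e) * (y.size() + y.e)\n         + abs(x.f) * y.e + x.e * abs(y.f))\n    return AffApprox(x.f * y.f,\n                     x.f * y.f0 + x.f0 * y.f,\n                     x.f * y.f1 + x.f1 * y.f,\n                     x.f * y.f2 + x.f2 * y.f,\n                     e)\n\\end{verbatim}\n\n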
$x$ is an {\\rm AffApprox}  and $y$ is a double{\\rm ,} then $S(x \\times y) \\supseteq S(x) \\times S(y)${\\rm ,} where\n$$ x \\times y \\equiv (x.f \\times y;  x.f_0 \\times y, \nx.f_1 \\times y,  x.f_2 \\times y; \nx.e \\times |y|). $$\n\\endproclaim\n\n\\nonumproclaim{Proposition FIXME(6.8) {\\rm (division)}} If $x$ and $y$ are {\\rm AffApproxes}  with $|y.f| > {\\rm size}(y) + y.e${\\rm ,}\n then $S(x / y) \\supseteq\nS(x) / S(y)${\\rm ,} where\n\\begin{eqnarray*}\nx / y &\\equiv& (x.f / y.f; (-x.f \\times y.f_0 + x.f_0 \\times y.f)/((y.f)^2),\\\\&&\\qquad  \n(-x.f \\times y.f_1 + x.f_1 \\times y.f)/((y.f)^2), \n\\\\\n&&\\qquad \n(-x.f \\times y.f_2 + x.f_2 \\times y.f)/((y.f)^2); \n\\\\\n&&\\qquad (|x.f| + {\\rm size}(x) + x.e) /(|y.f| - ({\\rm size}(y) + y.e))\\\\\n&&  - \\\n((|x.f|/|y.f| + {\\rm size}(x)/|y.f|) + |x.f| {\\rm size}(y)/(|y.f||y.f|))).\n                                           \\end{eqnarray*}\n\\endproclaim\n\n\\demo{Proof} \nFor notational convenience, denote $(x.f_0 z_0 + x.f_1 z_1 + x.f_2 z_2)$ by $x.f_k z_k$ and similarly for $y.f_k z_k$ and so on.\nAs above, note that for any point $(z_0, z_1, z_2) \\in A$ and any functions $g \\in S(x)$ and $h \\in S(y)$ we can find complex numbers $u, v$ with $|u| \\le 1$ and $|v| \\le 1$, such that\n$$ g(z_0, z_1, z_2) = x.f + (x.f_k z_k) + (x.e) u$$ and \n$$ h(z_0, z_1, z_2) = y.f + (y.f_k z_k) + (y.e) v.$$ \n\nWe compare $(g/h)(z_0, z_1, z_2)$ with its putative affine approximation.  That is, we analyze\n\\begin{eqnarray*}\n&&\\big|(x.f + (x.f_k z_k) + (x.e) u)/(y.f + (y.f_k z_k) + (y.e) v)\\\\\n&&\\qquad\\quad - \\\n((x.f / y.f) + {(x.f_k) y.f - x.f (y.f_k) \\over (y.f)^2} z_k)\\big|\n          . \\end{eqnarray*}\nPutting this over a common denominator  of \n$|((y.f)^2)(y.f + (y.f_k z_k) + (y.e) v)|$\nand cancelling equal terms (in the numerator) we are left with a quotient whose numerator is \n\\begin{eqnarray*} &&\n|x.e ((y.f)^2) u - (x.f_k) y.f (y.f_k) z_k - x.f ((y.f_k)^2) z_k\\\\\n&&\\qquad\\quad  + \\\n(x.f) y.f (y.e) v + x.f_k (y.f) y.e (v) z_k - x.f (y.f_k) y.e (v) z_k|.              \n\\end{eqnarray*}\nWe must show this (first) quotient is bounded by\n\\begin{eqnarray*}\n&&\n(|x.f| + {\\rm size}(x) + x.e) /(|y.f| - ({\\rm size}(y) + y.e)) \\\\\n&&\\qquad\\quad - \\\n((|x.f|/|y.f| + {\\rm size}(x)/|y.f|) + |x.f| {\\rm size}(y)/(|y.f||y.f|)).        \n\\end{eqnarray*}\nPutting this over a common denominator of \n$|(y.f)|^2 (|y.f| - ({\\rm size}(y) + y.e))$ \nand cancelling equal terms (in the numerator) we are left with a second quotient, whose numerator is \n$$x.e |y.f|^2 - (-|x.f| |y.f| y.e - {\\rm size}(x) |y.f| ({\\rm size}(y) + y.e) -\n|x.f| {\\rm size}(y) ({\\rm size}(y) + y.e))$$\nand we see that all terms in this numerator are positive.\nFurther, the terms in the numerators of the first and second quotients correspond in a natural way, and each term in the numerator of the second quotient  is greater than or equal to the absolute value of its corresponding term in the numerator of the first quotient.  \n\nFinally,\n because the denominator in the second quotient is less than or equal to the absolute value of the denominator in the first quotient, we\nsee that the absolute value of the first quotient is less than or equal to the second quotient, as desired. 
\\enddemo\n\n\\nonumproclaim{Proposition 6.9 {\rm (division of a double by an AffApprox)}} If $x$ is a double and $y$ is an {\rm AffApprox }\n with $|y.f| > {\rm size}(y) + y.e${\rm ,} then $S(x / y) \\supseteq S(x) / S(y)${\rm ,} where\n\\begin{eqnarray*}\nx / y &\\hskip-8pt\\equiv\\hskip-8pt& (x / y.f; -x \\times y.f_0 /((y.f)^2), \n-x \\times y.f_1 /((y.f)^2), -x \\times y.f_2 /((y.f)^2); \n\\\\\n&\\hskip-8pt\\hskip-8pt&\\qquad (|x|  /(|y.f| - ({\rm size}(y) + y.e)) - \n(|x|/|y.f|  + |x| {\rm size}(y)/(|y.f||y.f|))).\n                                              \\end{eqnarray*}\n\\endproclaim\n\n\n\\nonumproclaim{Proposition 6.10 {\rm (division of an AffApprox by a double)}} If $x$ is an {\rm AffApprox}  and $y$ is a double\nwith $|y| > 0${\rm ,} then $S(x / y) \\supseteq S(x) / S(y)${\rm ,} where\n$$\nx / y \\equiv (x.f / y; x.f_0 / y, \nx.f_1 / y, x.f_2 / y; \nx.e/ |y| ).\n$$\n\\endproclaim\n\nFinally, we do the square root.\n\n\\nonumproclaim{Proposition 6.11 {\rm (square root)}} If $x$ is an {\rm AffApprox}  with $|x.f| > {\rm size}(x) + x.e${\rm ,}\n then $S(\\sqrt x)\n\\supseteq\n\\sqrt {S(x)}${\rm ,} where\n\\begin{eqnarray*}\n\\sqrt x& \\equiv &\\left(\\sqrt {x.f}; \n {x.f_0 \\over 2 \\sqrt {x.f}}, \n {x.f_1 \\over 2 \\sqrt {x.f}}, \n {x.f_2 \\over 2 \\sqrt {x.f}};\n\\right.\\\\\n&& \\left. \\sqrt {|x.f|} - \\left({{\rm size}(x) \\over 2 \\sqrt {|x.f|}} + \\sqrt {|x.f| - ({\rm size}(x) + x.e)}\\right)\n                                              \\right).\n\\end{eqnarray*}\n\\endproclaim \nIf $|x.f| \\le {\rm size}(x) + x.e$ then we use the crude estimate $$\\left(0;0,0,0;\\sqrt{|x.f| + {\rm size}(x) + x.e}\\right).$$\n \nThe branch of the square root of a complex number is determined by the construction of the square root of a complex number in Proposition 7.14.  In fact, the square root is in the first or fourth quadrant.\n\n\\demo{Proof} \nAs above, note that for any point $(z_0, z_1, z_2) \\in A$ and any function $g \\in S(x)$ we can find a complex number $u$ with \n$|u|   \\le 1$, such that\n$$ g(z_0, z_1, z_2) = x.f + (x.f_k z_k) + (x.e) u.$$\nAlso, because $|x.f| > {\rm size}(x) + x.e$, we see that the argument of\n$x.f + (x.f_k z_k) + (x.e) u$ is within $\\pi/2$ of the argument of \n$x.f$, and therefore, we can require that $\\sqrt{g(z_0,z_1,z_2)}$ have argument within $\\pi/4$ of the argument of $\\sqrt{x.f}.$\n\nWe need to show that \n\\begin{eqnarray*}\n&&\\hskip-35pt \\left|\\sqrt {x.f + x.f_k z_k + (x.e) u} - (\\sqrt{x.f} + {x.f_k z_k\\over 2 \\sqrt {x.f}})\\right|\\\\[4pt]\n&&\\qquad \\le \\sqrt{|x.f|} - \\left({{\rm size}(x) \\over 2 \\sqrt {|x.f|}} + \\sqrt {|x.f| - ({\rm size}(x) + x.e)}\ \\right).\n\\end{eqnarray*}\nOr, after we multiply both sides by $\\sqrt {|x.f|}$,  \n\\begin{eqnarray*}\n&&\n\\hskip-35pt\\left| \\sqrt {x.f (x.f + x.f_k z_k + (x.e) u)} - (x.f + (x.f_k) z_k /2) \\right|\\\\\n&&\\quad \\le \\left( |x.f| - {\rm size}(x) /2\\right) - \\sqrt {|x.f|(|x.f| - ({\rm size}(x) +\nx.e))}.\n\\end{eqnarray*}\n The two  sides of the inequality are of the form $ A - B$ and $C - D$, and we\n``simplify\" by multiplying by  \n${ A+ B \\over A + B}$ and ${ C+ D \\over C + D}$.   We now show that the (absolute value of the) left-hand numerator is less than or equal to the right-hand numerator.  Later, we will show that the (absolute value of the) left-hand denominator is larger than or equal to the right-hand denominator. 
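\nIn other words, the two sides being compared are of the form $|A - B|$ (with $A, B$ complex) and $C - D$ (with $C, D$ real); since\n$$|A - B| = {|A^2 - B^2| \\over |A + B|} \\quad {\rm and} \\quad C - D = {C^2 - D^2 \\over C + D},$$\nthe desired inequality $|A - B| \\le C - D$ follows from the two comparisons $|A^2 - B^2| \\le C^2 - D^2$ and $|A + B| \\ge C + D$, where $C + D > 0$ by the hypothesis $|x.f| > {\rm size}(x) + x.e$.\n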
\nThe   left-hand numerator is \n\\begin{eqnarray*}\n&&\\hskip-35pt |x.f (x.f + x.f_k z_k + (x.e) u) - (x.f + (x.f_k) z_k /2)^2|\\\\[4pt]\n&&\\qquad \n = |(x.f)^ 2 + x.f (x.f_k) z_k + x.f (x.e) u - (x.f)^2\\\\[4pt]\n&&\\qquad\\quad - x.f (x.f_k) z_k - ((x.f_k)^2) (z_k)^2/4|\\\\[4pt]\n&&\\qquad = |x.f (x.e) u - ((x.f_k)^2) (z_k)^2/4|.\n\\end{eqnarray*}\nThe right-hand numerator is \n\\begin{eqnarray*}\n&&(|x.f| - {\rm size}(x)/2)^2 - |x.f| (|x.f| - ({\rm size}(x) + x.e))\\\\[4pt]\n&&\\qquad = |x.f|^2 - |x.f| {\rm size}(x) + {\rm size}(x)^2/4 - |x.f|^2 + |x.f| {\rm\nsize}(x) + |x.f| x.e\\\\[4pt]\n&&\\qquad = |x.f| x.e + {\rm size}(x)^2/4.\n\\end{eqnarray*}\nSo the left-hand numerator is indeed less than or equal to the right-hand numerator.\n\\vglue4pt\nWe now compare the denominators, but only after dividing each by\n$\\sqrt {|x.f|}$. \nThe left-hand denominator is\n$$\\left|\\sqrt {x.f + x.f_k z_k + (x.e) u} + \n\\left(\\sqrt {x.f} + {x.f_k z_k\\over 2 \\sqrt{x.f}}\\right)\\right|$$\nwhile the right-hand denominator is \n$$\\sqrt{|x.f|} - {{\rm size}(x) \\over 2 \\sqrt {|x.f|}} + \\sqrt {|x.f| - ({\rm size}(x) + x.e)}.$$\nThe claim that the left-hand denominator is greater than or equal to the right-hand denominator is a bit complicated.  First, compare the $\\sqrt{x.f}$ term and the $\\sqrt{|x.f|}$ terms.  They are the same distance from the origin.  Next, note that as $z_k$ and $u$ take on all\nrelevant values, $x.f + x.f_k z_k + (x.e) u$ describes a disk centered at $x.f$\nwith radius less than $|x.f|$.  Hence, $\\sqrt {x.f + x.f_k z_k + (x.e) u}$ describes a convex set containing $\\sqrt{x.f}$.  This set\nis symmetric about the line joining the origin and $\\sqrt{x.f}$. Further, $\\sqrt{x.f} + \\sqrt {x.f + x.f_k z_k + (x.e) u}$ describes a convex\nset containing $2 \\sqrt{x.f}$.   
This set is also symmetric about the line joining the origin and  $\\sqrt{x.f}$.\nIt is easy enough to see that no points on this convex symmetric set get closer to the origin than $\\sqrt{|x.f|}  + \\sqrt {|x.f| - ({\\rm size}(x) + x.e)}$.\n\nFinally, because $|{x.f_k z_k \\over 2 \\sqrt{x.f}}| \\le {{\\rm size}(x) \\over 2 \\sqrt {|x.f|}},$ no points of $$\\sqrt {x.f} + \\sqrt {x.f + x.f_k z_k + (x.e) u} + \n{x.f_k z_k \\over 2 \\sqrt{x.f}}$$\ncan get closer to the origin than\n\\vglue12pt\n\\hfill ${\\displaystyle \\sqrt{|x.f|} + \\sqrt {|x.f| - ({\\rm size}(x) + x.e)} - \n{{\\rm size}(x) \\over 2 \\sqrt {|x.f|}}.} $ \\enddemo\n\n\\vglue12pt\n{\\it Example} FIXME(6.7) (Continuation of Example FIXME(5.4)).\nWe can now complete the analysis begun in Example FIXME(5.4),\nbecause we can describe $f$ and $w$ as 2-by-2 matrices of AffApproxes.\nWe note the minor quibble that the full definition of AffApprox is\ngiven in Section 8, where round-off error is incorporated into\nthe remainder/error-bound term.\n\nFor convenience,\nwe repeat the description of the sub-box under investigation.\nThe sub-box $Z(s01011)$ with  \n\\begin{small}\n$$s = 001000110001110111001111000101111111101111100111001111000001111011110111$$ \n\\end{small}%\n\n\\noindent is the region where\n$$\\left(\\begin{array}{rll}\n-1.381589027741\\ldots &\\hskip-6pt \\le  {\\rm Re}(L')&  \\hskip-6pt\\le  -1.379848991182\\ldots\\\\[7pt]\n-1.378124546093\\ldots &\\hskip-6pt \\le  {\\rm Re}(D')&\\hskip-6pt  \\le  -1.376574349753\\ldots\\\\[7pt]\n0.999893182771\\ldots  &\\hskip-6pt\\le {\\rm Re}(R')& \\hskip-6pt  \\le  1.001274250703\\ldots\\\\[7pt]\n-2.535837191243\\ldots &\\hskip-6pt \\le  {\\rm Im}(L')&\\hskip-6pt  \\le  -2.534606799593\\ldots\\\\[7pt]\n2.535404997792\\ldots & \\hskip-6pt\\le  {\\rm Im}(D') &\\hskip-6pt \\le  -2.534308843448\\ldots\\\\[7pt]\n-0.001953125000\\ldots &\\hskip-6pt \\le {\\rm  Im}(R')&\\hskip-6pt  \\le  0.000000000000\\ldots\n\\end{array}\\right).$$ \n\nFor this sub-box, we get (printing only 10 decimal places,\nfor visual convenience):\n\\begin{small}\n$$  \\hskip-12pt\nf = \\left[\\begin{array}{cc}\n  \\left(\\begin{array}{c}\n    -0.8677851121   + i 1.4607429651;\\cr\n   \\phantom{-} 0.0000248810   - i 0.0003125810,\\\\[4pt]\n   \\phantom{-} 0.0000000000   + i 0.0000000000,\\\\[4pt]\n   \\phantom{-} 0.0000000000   + i 0.0000000000;\\\\[4pt]\n   \\phantom{-} 0.0000000289\n \\end{array}\\right)\n &\n  \\left(\\begin{array}{c}\n    0.0000000000 + i 0.0000000000;\\\\[4pt]\n    0.0000000000 + i 0.0000000000,\\\\[4pt]\n    0.0000000000 + i 0.0000000000,\\\\[4pt]\n    0.0000000000 + i 0.0000000000;\\\\[4pt]\n    0.0000000000\n  \\end{array}\\right)\n \\\\[12pt] \n  \\left(\\begin{array}{c}\n    0.0000000000 + i 0.0000000000;\\\\[4pt]\n    0.0000000000 + i 0.0000000000,\\\\[4pt]\n    0.0000000000 + i 0.0000000000,\\\\[4pt]\n    0.0000000000 + i 0.0000000000;\\\\[4pt]\n    0.0000000000\n  \\end{array} \\right)\n &\n  \\left(\\begin{array}{c}\n    -0.3006023265 - i 0.5060039953;\\\\[4pt]\n    -0.0000909686 - i 0.0000593570,\\\\[4pt]\n   \\phantom{-} 0.0000000000 + i 0.0000000000,\\\\[4pt]\n   \\phantom{-} 0.0000000000 + i 0.0000000000;\\\\[4pt]\n   \\phantom{-} 0.0000000301\n  \\end{array}\\right)\n\\end{array}\\right]\n$$\\end{small} \nand\n\\begin{small}\n$$ \n\\hskip-16pt\nw = \\left[\\begin{array}{cc}\n  \\left(\\begin{array}{c}\n    -0.5845111829 + i 0.4773282853;\\\\[4pt]\n   \\phantom{-} 0.0000000000 + i 0.0000000000,\\\\[4pt]\n   -0.0000296707 - i 0.0001657332,\\\\[4pt]\n   -0.0004345111 - i 
0.0001209539;\\\\[4pt]\n    0.0000002590\n  \\end{array}\\right)\n &\n  \\left(\\begin{array}{c}\n    -0.2840228472 + i 0.9825063583;\\\\[4pt]\n  \\phantom{-}  0.0000000000 + i 0.0000000000,\\\\[4pt]\n  \\phantom{-}  0.0000516606 - i 0.0001128245,\\\\[4pt]\n   \\phantom{-} 0.0005776611 - i 0.0001998632;\\\\[4pt]\n    0.0000006462\n  \\end{array}\\right)\n \\\\[12pt]\n  \\left(\\begin{array}{c}\n    -0.2832291572 + i 0.9833572297;\\\\[4pt]\n   \\phantom{-} 0.0000000000 + i 0.0000000000,\\\\[4pt]\n   \\phantom{-} 0.0000515806 - i 0.0001129408,\\\\[4pt]\n    -0.0005778031 + i 0.0002005440;\\\\[4pt]\n    0.0000002806\n  \\end{array}\\right)\n &\n  \\left(\\begin{array}{c}\n    -0.5846352333 + i 0.4764792236;\\\\[4pt]\n   \\phantom{-} 0.0000000000 + i 0.0000000000,\\\\[4pt]\n    -0.0000294917 - i 0.0001656653,\\\\[4pt]\n   \\phantom{-} 0.0004341392 + i 0.0001213070;\\\\[4pt]\n    0.0000005286\n  \\end{array}\\right)\n\\end{array}\\right].$$\n\\end{small}\nCalculation of\n$g = f^{-1}wf^{-1}w^{-1}f^{-1}w^{-1}fw^{-1}f^{-1}w^{-1}f^{-1}wf^{-1}wfww$ gives\n\\begin{small}\n$$ \\hskip-18pt\ng = \\left[\\begin{array}{cc}\n  \\left(\\begin{array}{c}\n    -0.5764337542 + i 0.4752708071;\\\\[4pt]\n    -0.0031657223 - i 0.0001436786,\\\\[4pt]\n    -0.0017723577 + i 0.0000352928,\\\\[4pt]\n    -0.0011623491 + i 0.0017516088;\\\\[4pt]\n    0.0008229225\n  \\end{array}\\right)\n &\n  \\left(\\begin{array}{c}\n    -0.2704033973 + i 0.9822741250;\\\\[4pt]\n    -0.0045902952 - i 0.0019135041,\\\\[4pt]\n    -0.0026219461 - i 0.0007506230,\\\\[4pt]\n    -0.0002823450 + i 0.0033805602;\\\\[4pt]\n    0.0008037640\n  \\end{array}\\right)\n \\\\[9pt]\n  \\left(\\begin{array}{c}\n    -0.2861207992 + i 0.9766064999;\\\\[4pt]\n    -0.0002777968 + i 0.0020330488,\\\\[4pt]\n   \\phantom{-} 0.0000837571 + i 0.0010241875,\\\\[4pt]\n   \\phantom{-} 0.0028322367 - i 0.0005972336;\\\\[4pt]\n   \\phantom{-} 0.0018172437\n  \\end{array}\\right)\n &\n  \\left(\\begin{array}{c}\n    -0.5861133046 + i 0.4624368851;\\\\[4pt]\n    -0.0021932627 + i 0.0040523411,\\\\[4pt]\n    -0.0008612361 + i 0.0022394639,\\\\[4pt]\n   \\phantom{-} 0.0061581377 - i 0.0005862070;\\\\[4pt]\n    0.0017738513\n  \\end{array}\\right)\n\\end{array}\\right].\n$$\n\\end{small}\nWe then get\n$$\n{\\rm length}(g) = \\left(\\begin{array}{c}\n    -1.3588762105 - i 2.4897230182;\\\\[4pt]\n  \\phantom{-}  0.0030210500 - i 0.0182284729,\\\\[4pt]\n   \\phantom{-} 0.0007938572 - i 0.0096614614,\\\\[4pt]\n    -0.0122034521 + i 0.0074353043;\\\\[4pt]\n    0.0080071969\n  \\end{array}\\right)\n$$\nand\n$$\n{{\\rm length}(g) \\over L'} =\n  \\left(\\begin{array}{c}\n   \\phantom{-} 0.9825397896 - i 0.0008933519;\\\\[4pt]\n   \\phantom{-} 0.0053701602 + i 0.0037789019,\\\\[4pt]\n   \\phantom{-} 0.0028076072 + i 0.0018421952,\\\\[4pt]\n    -0.0002400615 - i 0.0049443045;\\\\[4pt]\n   \\phantom{-} 0.0027802966\n  \\end{array}\\right).\n$$\n\n\nThis is not quite good enough to kill the sub-box, since\n$|{\\rm length}(g)/L'|$ can be as high as $1.0001951323$.\n\nWhen we subdivide $Z(s01011)$, we have to analyze two sub-boxes,\n$Z(s010110)$ and $Z(s010111)$.\nFor $Z(s010110)$, the same calculation on the region\n$$\\begin{array}{rll}\n-1.381589027741073400 &\\hskip-6pt \\leq {\\rm Re}(L') &\\hskip-6pt\\leq -1.379848991182205200\\\\[4pt]\n-1.378124546093485700 &\\hskip-6pt\\leq {\\rm Re}(D')\n&\\hskip-6pt\\leq -1.376574349753672900\\\\[4pt]\n0.999893182771602220 &\\hskip-6pt\\leq {\\rm Re}(R')&\\hskip-6pt \\leq 1.001274250703607400\\\\[4pt]\n-2.535837191243490300&\\hskip-6pt 
\\leq {\\rm Im}(L')&\\hskip-6pt \\leq\n-2.534606799593201600\\\\[4pt]\n-2.535404997792558600 &\\hskip-6pt\\leq {\\rm Im}(D') &\\hskip-6pt\\leq -2.534308843448505900\\\\[4pt]\n-0.001953125000000000&\\hskip-6pt \\leq {\\rm Im}(R')\n&\\hskip-6pt\\leq -0.000976562500000000\n\\end{array} \n$$\ngives \n$$\n{{\\rm length}(g) \\over L'} = \n  \\left(\\begin{array}{c}\n \\phantom{-}   0.9814518667 + i 0.0008103446;\\\\[4pt]\n\\phantom{-}      0.0053616729 + i 0.0037834001,\\\\[4pt]\n  \\phantom{-}    0.0028027236 + i 0.0018435245,\\\\[4pt]\n    -0.0013175066 - i 0.0032448794;\\\\[4pt]\n    0.0019033926\n  \\end{array}\\right),\n$$\nand we can then bound $\\left|{{\\rm length}(g) \\over L'}\\right| \\leq 0.9967745579$,\nwhich kills $Z(s010110)$.  \n\nOn $Z(s010111)$, the calculation gives\n$$\n{{\\rm length}(g) \\over L'} =\n  \\left(\\begin{array}{c}\n\\phantom{-}      0.9836225919 - i 0.0025990177;\\\\[4pt]\n  \\phantom{-}    0.0053786346 + i 0.0037743930,\\\\[4pt]\n \\phantom{-}     0.0028124892 + i 0.0018408583,\\\\[4pt]\n    -0.0013333182 - i 0.0032343347;\\\\[4pt]\n    0.0019044429\n  \\end{array}\\right)\n$$\nand $\\left|{{\\rm length}(g) \\over L'}\\right| \\leq 0.9989610507$,\nwhich kills $Z(s010111)$.\n\n \n", "meta": {"hexsha": "5a3fb5fa3d0e7a4ad7aaca768e194be44bcd3b49", "size": 25796, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/TeX_and_Figures_Files/Chapter_4.tex", "max_stars_repo_name": "njt99/findingkillerwords", "max_stars_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/TeX_and_Figures_Files/Chapter_4.tex", "max_issues_repo_name": "njt99/findingkillerwords", "max_issues_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/TeX_and_Figures_Files/Chapter_4.tex", "max_forks_repo_name": "njt99/findingkillerwords", "max_forks_repo_head_hexsha": "71271dca14a9986d631608929544bcd6d68813f0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9872495446, "max_line_length": 344, "alphanum_fraction": 0.6224220809, "num_tokens": 10454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8499711604559848, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5858899058579482}}
{"text": "Train networks to be provably robust (instead of experimentally result as in PGD training). \n\n\\paragraph{Opt.:} \n$\\displaystyle \\argmin_\\theta \\E_{(x, y) \\sim D} \\left[ \\max_{z \\in \\gamma(\\Sharp{\\operatorname{NN}}(S(x)))} \\loss(\\theta; z, y)\\right]$\n\n\\paragraph{Loss} $\\displaystyle \\loss(z, y) := \\max_{q \\neq y} (z_q - z_y) = \\max_{q \\neq y} (\\operatorname{box}(z_q - z_y))$\n\n\\paragraph{CE loss} $\\loss(z, y) = \\operatorname{CE}(z', y)$, with $z'_y := l_y$, $z'_q := u_q$ for $q \\neq y$ \n\n\\paragraph{Universal Approximation} For any neural network, there exists a network with the same properties that can be analyzed exactly with Box. \n\n\\paragraph{Complexity} Using complex relaxations generally leads to worse results in provability than with Box (more complex optimization problem)\n\n\\paragraph{COLT} For each layer, find $x_l \\in S_l$ that maximizes loss in final layer, and use it. \n\n\\paragraph{COLT projection} Write zonotope as $Z = A \\cdot [-1, 1]^d$, compute $e = A^{-1}\\cdot x$, clip $e$ to $[-1, 1]$, projection is $A\\cdot e_\\text{clip}$. \nThis projection is \\emph{sound} (result inside zonotope), but \\emph{not optimal}. ", "meta": {"hexsha": "a546bf18d587161572351e7fe08aefae697dff0f", "size": 1134, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "certified-defenses.tex", "max_stars_repo_name": "cknabs/RIAI-summary-HS2020", "max_stars_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-20T21:27:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-24T20:28:56.000Z", "max_issues_repo_path": "certified-defenses.tex", "max_issues_repo_name": "cknabs/RIAI-summary-HS2020", "max_issues_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-25T09:29:16.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-25T10:50:09.000Z", "max_forks_repo_path": "certified-defenses.tex", "max_forks_repo_name": "cknabs/RIAI-summary-HS2020", "max_forks_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.7058823529, "max_line_length": 161, "alphanum_fraction": 0.6948853616, "num_tokens": 364, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.849971175657575, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5858898946365007}}
{"text": "\n\n    \\filetitle{lognormal}{Create function proportional to log of log-normal distribution}{logdist/lognormal}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nF = logdist.lognormal(Mean,Std)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{Mean} {[} numeric {]} - Mean of the log-normal distribution.\n\\item\n  \\texttt{Std} {[} numeric {]} - Std dev of the log-normal distribution.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{F} {[} function\\_handle {]} - Function handle returning a\n  value proportional to the log of the log-normal density.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nSee \\href{logdist/Contents}{help on the logdisk package} for details on\nusing the function handle \\texttt{F}.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "7969cb465712b401e47bc5e4d12c6e3e79f1520c", "size": 894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/logdist/lognormal.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/logdist/lognormal.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/logdist/lognormal.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 24.1621621622, "max_line_length": 108, "alphanum_fraction": 0.7494407159, "num_tokens": 250, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8418256551882382, "lm_q2_score": 0.695958331339634, "lm_q1q2_score": 0.5858755782637004}}
{"text": "\\section{Sequential colimits}\n\n\\emph{Note: This chapter currently contains only the statements of the definitions and theorems, but no proofs. I hope to make a complete version available soon.}\n\n\\subsection{The universal property of sequential colimits}\n\nType sequences are diagrams of the following form.\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\nTheir formal specification is as follows.\n\n\\begin{defn}\nAn \\define{(increasing) type sequence} $\\mathcal{A}$ consists of\n\\begin{align*}\nA & : \\N\\to\\UU \\\\\nf & : \\prd{n:\\N} A_n\\to A_{n+1}. \n\\end{align*}\n\\end{defn}\n\nIn this section we will introduce the sequential colimit of a type sequence.\nThe sequential colimit includes each of the types $A_n$, but we also identify each $x:A_n$ with its value $f_n(x):A_{n+1}$. \nImagine that the type sequence $A_0\\to A_1\\to A_2\\to\\cdots$ defines a big telescope, with $A_0$ sliding into $A_1$, which slides into $A_2$, and so forth.\n\nAs usual, the sequential colimit is characterized by its universal property.\n\n\\begin{defn}\n\\begin{enumerate}\n\\item A \\define{(sequential) cocone} on a type sequence $\\mathcal{A}$ with vertex $B$ consists of\n\\begin{align*}\nh & : \\prd{n:\\N} A_n\\to B \\\\\nH & : \\prd{n:\\N} f_n\\htpy f_{n+1}\\circ H_n.\n\\end{align*}\nWe write $\\mathsf{cocone}(B)$ for the type of cones with vertex $X$.\n\\item Given a cone $(h,H)$ with vertex $B$ on a type sequence $\\mathcal{A}$ we define the map\n\\begin{equation*}\n\\mathsf{cocone\\usc{}map}(h,H) : (B\\to C)\\to \\mathsf{cocone}(B)\n\\end{equation*}\ngiven by $f\\mapsto (f\\circ h,\\lam{n}{x}\\mathsf{ap}_f(H_n(x)))$. \n\\item We say that a cone $(h,H)$ with vertex $B$ is \\define{colimiting} if $\\mathsf{cocone\\usc{}map}(h,H)$ is an equivalence for any type $C$. \n\\end{enumerate}\n\\end{defn}\n\n\\begin{thm}\\label{thm:sequential_up}\nConsider a cocone $(h,H)$ with vertex $B$ for a type sequence $\\mathcal{A}$. The following are equivalent:\n\\begin{enumerate}\n\\item The cocone $(h,H)$ is colimiting.\n\\item The cocone $(h,H)$ is inductive in the sense that for every type family $P:B\\to \\UU$, the map\n\\begin{align*}\n\\Big(\\prd{b:B}P(b)\\Big)\\to {}& \\sm{h:\\prd{n:\\N}{x:A_n}P(h_n(x))}\\\\ \n& \\qquad \\prd{n:\\N}{x:A_n} \\mathsf{tr}_P(H_n(x),h_n(x))={h_{n+1}(f_n(x))}\n\\end{align*}\ngiven by\n\\begin{equation*}\ns\\mapsto (\\lam{n}s\\circ h_n,\\lam{n}{x} \\mathsf{apd}_{s}(H_n(x)))\n\\end{equation*}\nhas a section.\n\\item The map in (ii) is an equivalence.\n\\end{enumerate}\n\\end{thm}\n\n\\subsection{The construction of sequential colimits}\n\nWe construct sequential colimits using pushouts.\n\n\\begin{defn}\nLet $\\mathcal{A}\\jdeq (A,f)$ be a type sequence. 
We define the type $A_\\infty$ as a pushout\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\n\\tilde{A}+\\tilde{A} \\arrow[r,\"{[\\idfunc,\\sigma_{\\mathcal{A}}]}\"] \\arrow[d,swap,\"{[\\idfunc,\\idfunc]}\"] & \\tilde{A} \\arrow[d,\"\\inr\"] \\\\\n\\tilde{A} \\arrow[r,swap,\"\\inl\"] & A_\\infty.\n\\end{tikzcd}\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nThe type $A_\\infty$ comes equipped with a cocone structure consisting of\n\\begin{align*}\n\\mathsf{seq\\usc{}in} & : \\prd{n:\\N} A_n\\to A_\\infty \\\\\n\\mathsf{seq\\usc{}glue} & : \\prd{n:\\N}{x:A_n} \\mathsf{in}_n(x)=\\mathsf{in}_{n+1}(f_n(x)).\n\\end{align*}\n\\end{defn}\n\n\\begin{constr}\nWe define\n\\begin{align*}\n\\mathsf{seq\\usc{}in}(n,x)\\defeq \\inr(n,x) \\\\\n\\mathsf{seq\\usc{}glue}(n,x)\\defeq \\ct{\\glue(\\inl(n,x))^{-1}}{\\glue(\\inr(n,x))}.\n\\end{align*}\n\\end{constr}\n\n\\begin{thm}\nConsider a type sequence $\\mathcal{A}$, and write $\\tilde{A}\\defeq\\sm{n:\\N}A_n$. Moreover, consider the map\n\\begin{equation*}\n\\sigma_{\\mathcal{A}}:\\tilde{A}\\to\\tilde{A}\n\\end{equation*}\ndefined by $\\sigma_{\\mathcal{A}}(n,a)\\defeq (n+1,f_n(a))$. Furthermore, consider a cocone $(h,H)$ with vertex $B$.\nThe following are equivalent:\n\\begin{enumerate}\n\\item The cocone $(h,H)$ with vertex $B$ is colimiting.\n\\item The defining square\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\n\\tilde{A}+\\tilde{A} \\arrow[r,\"{[\\idfunc,\\sigma_{\\mathcal{A}}]}\"] \\arrow[d,swap,\"{[\\idfunc,\\idfunc]}\"] & \\tilde{A} \\arrow[d,\"{\\lam{(n,x)}h_n(x)}\"] \\\\\n\\tilde{A} \\arrow[r,swap,\"{\\lam{(n,x)}h_n(x)}\"] & A_\\infty,\n\\end{tikzcd}\n\\end{equation*}\nof $A_\\infty$ is a pushout square.\n\\end{enumerate}\n\\end{thm}\n\n\\subsection{Descent for sequential colimits}\n\n\\begin{defn}\nThe type of \\define{descent data} on a type sequence $\\mathcal{A}\\jdeq (A,f)$ is defined to be\n\\begin{equation*}\n\\mathsf{Desc}(\\mathcal{A}) \\defeq \\sm{B:\\prd{n:\\N}A_n\\to\\UU}\\prd{n:\\N}{x:A_n}\\eqv{B_n(x)}{B_{n+1}(f_n(x))}.\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nWe define a map\n\\begin{equation*}\n\\mathsf{desc\\usc{}fam} : (A_\\infty\\to\\UU)\\to\\mathsf{Desc}(\\mathcal{A})\n\\end{equation*}\nby $B\\mapsto (\\lam{n}{x}B(\\mathsf{seq\\usc{}in}(n,x)),\\lam{n}{x}\\mathsf{tr}_B(\\mathsf{seq\\usc{}glue}(n,x)))$.\n\\end{defn}\n\n\\begin{thm}\nThe map \n\\begin{equation*}\n\\mathsf{desc\\usc{}fam} : (A_\\infty\\to\\UU)\\to\\mathsf{Desc}(\\mathcal{A})\n\\end{equation*}\nis an equivalence.\n\\end{thm}\n\n\\begin{defn}\nA \\define{cartesian transformation} of type sequences from $\\mathcal{A}$ to $\\mathcal{B}$ is a pair $(h,H)$ consisting of\n\\begin{align*}\nh & : \\prd{n:\\N} A_n\\to B_n \\\\\nH & : \\prd{n:\\N} g_n\\circ h_n \\htpy h_{n+1}\\circ f_n,\n\\end{align*}\nsuch that each of the squares in the diagram\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[d,swap,\"h_0\"] \\arrow[r,\"f_0\"] & A_1 \\arrow[d,swap,\"h_1\"] \\arrow[r,\"f_1\"] & A_2 \\arrow[d,swap,\"h_2\"] \\arrow[r,\"f_2\"] & \\cdots \\\\\nB_0 \\arrow[r,swap,\"g_0\"] & B_1 \\arrow[r,swap,\"g_1\"] & B_2 \\arrow[r,swap,\"g_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nis a pullback square. 
We define\n\\begin{align*}\n\\mathsf{cart}(\\mathcal{A},\\mathcal{B}) & \\defeq\\sm{h:\\prd{n:\\N}A_n\\to B_n} \\\\\n& \\qquad\\qquad \\sm{H:\\prd{n:\\N}g_n\\circ h_n\\htpy h_{n+1}\\circ f_n}\\prd{n:\\N}\\mathsf{is\\usc{}pullback}(h_n,f_n,H_n),\n\\end{align*}\nand we write\n\\begin{equation*}\n\\mathsf{Cart}(\\mathcal{B}) \\defeq \\sm{\\mathcal{A}:\\mathsf{Seq}}\\mathsf{cart}(\\mathcal{A},\\mathcal{B}).\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nWe define a map\n\\begin{equation*}\n\\mathsf{cart\\usc{}map}(\\mathcal{B}) : \\Big(\\sm{X':\\UU}X'\\to X\\Big)\\to\\mathsf{Cart}(\\mathcal{B}).\n\\end{equation*}\nwhich associates to any morphism $h:X'\\to X$ a cartesian transformation of type sequences into $\\mathcal{B}$.\n\\end{defn}\n\n\\begin{thm}\nThe operation $\\mathsf{cart\\usc{}map}(\\mathcal{B})$ is an equivalence.\n\\end{thm}\n\n\\subsection{The flattening lemma for sequential colimits}\n\nThe flattening lemma for sequential colimits essentially states that sequential colimits commute with $\\Sigma$. \n\n\\begin{lem}\nConsider\n\\begin{align*}\nB & : \\prd{n:\\N}A_n\\to\\UU \\\\\ng & : \\prd{n:\\N}{x:A_n}\\eqv{B_n(x)}{B_{n+1}(f_n(x))}.\n\\end{align*}\nand suppose $P:A_\\infty\\to\\UU$ is the unique family equipped with\n\\begin{align*}\ne & : \\prd{n:\\N}\\eqv{B_n(x)}{P(\\mathsf{seq\\usc{}in}(n,x))}\n\\end{align*}\nand homotopies $H_n(x)$ witnessing that the square\n\\begin{equation*}\n\\begin{tikzcd}[column sep=7em]\nB_n(x) \\arrow[r,\"g_n(x)\"] \\arrow[d,swap,\"e_n(x)\"] & B_{n+1}(f_n(x)) \\arrow[d,\"e_{n+1}(f_n(x))\"] \\\\\nP(\\mathsf{seq\\usc{}in}(n,x)) \\arrow[r,swap,\"{\\mathsf{tr}_P(\\mathsf{seq\\usc{}glue}(n,x))}\"] & P(\\mathsf{seq\\usc{}in}(n+1,f_n(x)))\n\\end{tikzcd}\n\\end{equation*}\ncommutes. Then $\\sm{t:A_\\infty}P(t)$ satisfies the universal property of the sequential colimit of the type sequence\n\\begin{equation*}\n\\begin{tikzcd}\n\\sm{x:A_0}B_0(x) \\arrow[r,\"{\\tot[f_0]{g_0}}\"] & \\sm{x:A_1}B_1(x) \\arrow[r,\"{\\tot[f_1]{g_1}}\"] & \\sm{x:A_2}B_2(x) \\arrow[r,\"{\\tot[f_2]{g_2}}\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\n\\end{lem}\n\nIn the following theorem we rephrase the flattening lemma in using cartesian transformations of type sequences.\n\n\\begin{thm}\nConsider a commuting diagram of the form\n\\begin{equation*}\n\\begin{tikzcd}[column sep=small,row sep=small]\nA_0 \\arrow[rr] \\arrow[dd] & & A_1 \\arrow[rr] \\arrow[dr] \\arrow[dd] &[-.9em] &[-.9em] A_2 \\arrow[dl] \\arrow[dd] & & \\cdots \\\\\n& & & X \\arrow[from=ulll,crossing over] \\arrow[from=urrr,crossing over] \\arrow[from=ur,to=urrr] \\\\\nB_0 \\arrow[rr] \\arrow[drrr] & & B_1 \\arrow[rr] \\arrow[dr] & & B_2 \\arrow[rr] \\arrow[dl] & & \\cdots \\arrow[dlll] \\\\\n& & & Y \\arrow[from=uu,crossing over] \n\\end{tikzcd}\n\\end{equation*}\nIf each of the vertical squares is a pullback square, and $Y$ is the sequential colimit of the type sequence $B_n$, then $X$ is the sequential colimit of the type sequence $A_n$. 
\n\\end{thm}\n\n\\begin{cor}\nConsider a commuting diagram of the form\n\\begin{equation*}\n\\begin{tikzcd}[column sep=small,row sep=small]\nA_0 \\arrow[rr] \\arrow[dd] & & A_1 \\arrow[rr] \\arrow[dr] \\arrow[dd] &[-.9em] &[-.9em] A_2 \\arrow[dl] \\arrow[dd] & & \\cdots \\\\\n& & & X \\arrow[from=ulll,crossing over] \\arrow[from=urrr,crossing over] \\arrow[from=ur,to=urrr] \\\\\nB_0 \\arrow[rr] \\arrow[drrr] & & B_1 \\arrow[rr] \\arrow[dr] & & B_2 \\arrow[rr] \\arrow[dl] & & \\cdots \\arrow[dlll] \\\\\n& & & Y \\arrow[from=uu,crossing over] \n\\end{tikzcd}\n\\end{equation*}\nIf each of the vertical squares is a pullback square, then the square\n\\begin{equation*}\n\\begin{tikzcd}\nA_\\infty \\arrow[r] \\arrow[d] & X \\arrow[d] \\\\\nB_\\infty \\arrow[r] & Y\n\\end{tikzcd}\n\\end{equation*} \nis a pullback square.\n\\end{cor}\n\n\\subsection{Constructing the propositional truncation}\\label{sec:propositional-truncation-constr}\nThe propositional truncation can be used to construct the image of a map, so we construct that first. We construct the propositional truncation of $A$ via a construction called the \\define{join construction}, as the colimit of the sequence of join-powers of $A$\n\\begin{equation*}\n  \\begin{tikzcd}\n    A \\arrow[r] & \\join{A}{A} \\arrow[r] & \\join{A}{(\\join{A}{A})} \\arrow[r] & \\cdots\n  \\end{tikzcd}\n\\end{equation*}\nThe join-powers of $A$ are defined recursively on $n$, by taking\\footnote{In this definition, the case $A^{\\ast1}\\defeq A$ is slightly redundant because we have an equivalence\n\\begin{equation*}\n  \\join{A}{\\emptyt}\\simeq A.\n\\end{equation*}\nNevertheless, it is nice to have that $A^{\\ast 1}\\jdeq A$.}\n\\begin{align*}\n  A^{\\ast0} & \\defeq \\emptyt \\\\\n  A^{\\ast 1} & \\defeq A \\\\\n  A^{\\ast(n+2)} & \\defeq \\join{A}{A^{\\ast (n+1)}}.\n\\end{align*}\nWe will write $A^{\\ast\\infty}$ for the colimit of the sequence\n\\begin{equation*}\n  \\begin{tikzcd}\n    A \\arrow[r,\"\\inr\"] & \\join{A}{A} \\arrow[r,\"\\inr\"] & \\join{A}{(\\join{A}{A})} \\arrow[r,\"\\inr\"] & \\cdots.\n  \\end{tikzcd}\n\\end{equation*}\nThe sequential colimit $A^{\\ast\\infty}$ comes equipped with maps $\\inseq_n:A^{\\ast (n+1)}\\to A^{\\ast\\infty}$, and we will write\n\\begin{equation*}\n  \\eta\\defeq\\inseq_0:A\\to A^{\\ast\\infty}.\n\\end{equation*}\nOur goal is to show $A^{\\ast\\infty}$ is a proposition, and that $\\eta:A\\to A^{\\ast\\infty}$ satisfies the universal property of the propositional truncation of $A$. Before showing that $A^{\\ast\\infty}$ is indeed a proposition, let us show in two steps that for any proposition $P$, the map\n\\begin{equation*}\n  (A^{\\ast\\infty}\\to P)\\to (A\\to P)\n\\end{equation*}\nis indeed an equivalence. \n\n\\begin{lem}\\label{lem:extend_join_prop}\nSuppose $f:A\\to P$, where $A$ is any type and $P$ is a proposition.\nThen the precomposition function\n\\begin{equation*}\n\\blank\\circ\\inr:(\\join{A}{B}\\to P)\\to (B\\to P)\n\\end{equation*}\nis an equivalence, for any type $B$.\n\\end{lem}\n\n\\begin{proof}\n  Since the precomposition function\n  \\begin{equation*}\n    \\blank\\circ\\inr:(\\join{A}{B}\\to P)\\to (B\\to P)\n  \\end{equation*}\n  is a map between propositions, it suffices to construct a map\n  \\begin{equation*}\n    (B\\to P)\\to (\\join{A}{B}\\to P).\n  \\end{equation*}\n  Let $g:B\\to P$. 
Then the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A\\times B \\arrow[r,\"\\proj 2\"] \\arrow[d,swap,\"\\proj 1\"] & B \\arrow[d,\"g\"] \\\\\n      A \\arrow[r,swap,\"f\"] & P\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes since $P$ is a proposition. Therefore we obtain a map $\\join{A}{B}\\to P$ by the universal property of the join.\n\\end{proof}\n\n\\begin{prp}\\label{prp:universal-property-brck}\nLet $A$ be a type, and let $P$ be a proposition. Then the function\n\\begin{equation*}\n\\blank\\circ \\eta : (A^{\\ast\\infty}\\to P)\\to (A\\to P)\n\\end{equation*}\nis an equivalence. \n\\end{prp}\n\n\\begin{proof}\n  Since the map\n  \\begin{equation*}\n    \\blank\\circ \\eta : (A^{\\ast\\infty}\\to P)\\to (A\\to P)\n  \\end{equation*}\n  is a map between propositions, it suffices to construct a map in the converse direction.\n\n  Let $f:A\\to P$. First, we show by recursion that there are maps\n  \\begin{equation*}\n    f_n:A^{\\ast(n+1)}\\to P.\n  \\end{equation*}\n  The map $f_0$ is of course just defined to be $f$. Given $f_n:A^{\\ast(n+1)}\\to P$ we obtain $f_{n+1}:\\join{A}{A^{\\ast(n+1)}}\\to P$ by \\cref{lem:extend_join_prop}. Because $P$ is assumed to be a proposition it is immediate that the maps $f_n$ form a cocone with vertex $P$ on the sequence\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A \\arrow[r,\"\\inr\"] & \\join{A}{A} \\arrow[r,\"\\inr\"] & \\join{A}{(\\join{A}{A})} \\arrow[r,\"\\inr\"] & \\cdots.\n    \\end{tikzcd}\n  \\end{equation*}\n  From this cocone we obtain the desired map $(A^{\\ast\\infty}\\to P)$.\n\\end{proof}\n\n\\begin{prp}\\label{prp:isprop-infjp}\nThe type $A^{\\ast\\infty}$ is a proposition for any type $A$.\n\\end{prp}\n\n\\begin{proof}\n  By \\cref{lem:isprop_eq} it suffices to show that\n  \\begin{equation*}\n    A^{\\ast\\infty}\\to \\iscontr(A^{\\ast\\infty}).\n  \\end{equation*}\n  Since the type $\\iscontr(A^{\\ast\\infty})$ is already known to be a proposition by \\cref{ex:isprop_istrunc}, it follows from \\cref{prp:universal-property-brck} that it suffices to show that\n\\begin{equation*}\nA\\to \\iscontr(A^{\\ast\\infty}).\n\\end{equation*}\n\nLet $x:A$. To see that $A^{\\ast\\infty}$ is contractible it suffices by \\cref{ex:seqcolim_contr} to show that $\\inr:A^{\\ast n}\\to A^{\\ast(n+1)}$ is homotopic to the constant function $\\const_{\\inl(x)}$. Indeed, we get a homotopy $\\const_{\\inl(x)}\\htpy \\inr$ immediately from the path constructor $\\glue$.  \n\\end{proof}\n\nAll the definitions are now in place to define the propositional truncation of a type.\n\n\\begin{defn}\n  For any type $A$ we define the type\n  \\begin{equation*}\n    \\trunc{-1}{A}\\defeq A^{\\ast\\infty},\n  \\end{equation*}\n  and we define $\\eta:A\\to\\trunc{-1}{A}$ to be the constructor $\\seqin_0$ of the sequential colimit $A^{\\ast\\infty}$. 
Often we simply write $\\brck{A}$ for $\\trunc{-1}{A}$.\n\\end{defn}\n\nThe type $\\trunc{-1}{A}$ is a proposition by \\cref{prp:isprop-infjp}, and\n\\begin{equation*}\n  \\eta:A\\to\\trunc{-1}{A}\n\\end{equation*}\nsatisfies the universal property of propositional truncation by \\cref{prp:universal-property-brck}.\n\n\\begin{prp}\n  The propositional truncation operation is functorial in the sense that for any map $f:A\\to B$ there is a unique map $\\brck{f}:\\brck{A}\\to\\brck{B}$ such that the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A \\arrow[r,\"f\"] \\arrow[d,swap,\"\\eta\"] & B \\arrow[d,\"\\eta\"] \\\\\n      \\brck{A} \\arrow[r,swap,\"\\brck{f}\"] & \\brck{B}\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes. Moreover, there are homotopies\n  \\begin{align*}\n    \\brck{\\idfunc[A]} & \\htpy \\idfunc[\\brck{A}] \\\\\n    \\brck{g\\circ f} & \\htpy \\brck{g}\\circ\\brck{f}.\n  \\end{align*}\n\\end{prp}\n\n\\begin{proof}\n  The functorial action of propositional truncation is immediate by the universal property of propositional truncation. To see that the functorial action preserves the identity, note that the type of maps $\\brck{A}\\to\\brck{A}$ for which the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A \\arrow[r,\"\\idfunc\"] \\arrow[d,swap,\"\\eta\"] & A \\arrow[d,\"\\eta\"] \\\\\n      \\brck{A} \\arrow[r,densely dotted] & \\brck{A}\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes is contractible. Since this square commutes for both $\\brck{\\idfunc}$ and for $\\idfunc$, it must be that they are homotopic. The proof that the functorial action of propositional truncation preserves composition is similar.\n\\end{proof}\n\n\\subsection{Proving type theoretical replacement}\n\nOur goal is now to show that the image of a map $f:A\\to B$ from an essentially small type $A$ into a locally small type $B$ is again essentially small. This property is called the type theoretic replacement property. To prove this property, we have to give another construction of the image of a map. For this construction, we define a join operation on maps.\n\n\\begin{defn}\n  Consider two maps $f:A\\to X$ and $g:B\\to X$ with a common codomain $X$.\n  \\begin{enumerate}\n  \\item We define the type $\\join[X]{A}{B}$ as the pushout\n    \\begin{equation*}\n      \\begin{tikzcd}\n        A\\times_X B \\arrow[r,\"\\pi_2\"] \\arrow[d,swap,\"\\pi_1\"] & B \\arrow[d,\"\\inr\"] \\\\\n        A \\arrow[r,swap,\"\\inl\"] & \\join[X]{A}{B}.\n      \\end{tikzcd}\n    \\end{equation*}\n  \\item We define the \\define{join} $\\join{f}{g}:\\join[X]{A}{B}\\to X$ to be the unique map for which the diagram\n        \\begin{equation*}\n      \\begin{tikzcd}\n        A\\times_X B \\arrow[r,\"\\pi_2\"] \\arrow[d,swap,\"\\pi_1\"] & B \\arrow[d,\"\\inr\"] \\arrow[ddr,bend left=15,\"g\"] \\\\\n        A \\arrow[r,swap,\"\\inl\"] \\arrow[drr,bend right=15,swap,\"f\"]  & \\join[X]{A}{B} \\arrow[dr,densely dotted,swap,\"\\join{f}{g}\"] \\\\\n        & & X\n      \\end{tikzcd}\n    \\end{equation*}\n    commutes.\n  \\end{enumerate}\n\\end{defn}\n\nThe reason to call the map $\\join{f}{g}$ the join of $f$ and $g$ is that the fiber of $\\join{f}{g}$ at any $x:X$ is equivalent to the join of the fibers of $f$ and $g$ at $x$.\n\n\\begin{lem}\n  Consider two maps $f:A\\to X$ and $g:B\\to X$. 
Then there is an equivalence\n  \\begin{equation*}\n    \\fib{\\join{f}{g}}{x}\\simeq\\join{\\fib{f}{x}}{\\fib{g}{x}}\n  \\end{equation*}\n  for any $x:X$.\n\\end{lem}\n\n\\begin{proof}\n  Consider the commuting cube\n  \\begin{equation*}\n    \\begin{tikzcd}\n      & \\fib{f}{x}\\times\\fib{g}{x} \\arrow[dl] \\arrow[dr] \\arrow[d] \\\\\n      \\fib{f}{x} \\arrow[d] & A\\times_X B \\arrow[dl] \\arrow[dr] & \\fib{g}{x} \\arrow[d] \\arrow[dl,crossing over] \\\\\n      A \\arrow[dr] & \\unit \\arrow[from=ul,crossing over] \\arrow[d] & B \\arrow[dl] \\\\\n      & X\n    \\end{tikzcd}\n  \\end{equation*}\n  In this cube, the bottom square is a canonical pullback square. The two squares in the front are pullbacks by \\cref{lem:fib_pb}, and the top square is a pullback square by \\cref{lem:prod_pb}. Therefore it follows by \\cref{rmk:strongly-cartesian} that all the faces of this cube are pullback squares, and hence by \\cref{thm:effectiveness-pullback} we obtain that the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      \\join{\\fib{f}{x}}{\\fib{g}{x}} \\arrow[d,densely dotted] \\arrow[r] & \\unit \\arrow[d] \\\\\n      \\join[X]{A}{B} \\arrow[r,swap,\"\\join{f}{g}\"] & X\n    \\end{tikzcd}\n  \\end{equation*}\n  is a pullback square. Now the claim follows by the uniqueness of pullbacks, which was shown in \\cref{cor:uniquely-unique-pullback}.\n\\end{proof}\n\n\\begin{lem}\nConsider a map $f:A\\to X$, an embedding $m:U\\to X$, and $h:\\mathrm{hom}_X(f,m)$. Then the precomposition map\n\\begin{equation*}\n\\blank\\circ\\inr:\\mathrm{hom}_X(\\join{f}{g},m)\\to \\mathrm{hom}_X(g,m)\n\\end{equation*}\nis an equivalence for any $g:B\\to X$.\n\\end{lem}\n\n\\begin{proof}\nNote that both types are propositions, so any equivalence can be used to prove the claim. Thus, we simply calculate\n\\begin{align*}\n\\mathrm{hom}_X(\\join{f}{g},m) & \\eqvsym \\prd{x:X}\\fib{\\join{f}{g}}{x}\\to \\fib{m}{x} \\\\\n& \\eqvsym \\prd{x:X}\\join{\\fib{f}{x}}{\\fib{g}{x}}\\to\\fib{m}{x} \\\\\n& \\eqvsym \\prd{x:X}\\fib{g}{x}\\to\\fib{m}{x} \\\\\n& \\eqvsym \\mathrm{hom}_X(g,m).\n\\end{align*}\nThe first equivalence holds by \\cref{ex:triangle_fib}; the second equivalence holds by \\cref{ex:fib_join}, also using \\cref{ex:equiv_precomp,lem:postcomp_equiv} where we established that pre- and postcomposing by an equivalence is an equivalence; the third equivalence holds by \\cref{lem:extend_join_prop,lem:postcomp_equiv}; the last equivalence again holds by \\cref{ex:triangle_fib}.\n\\end{proof}\n\nFor the construction of the image of $f:A\\to X$ we observe that if we are given an embedding $m:U\\to X$ and a map $(i,I):\\mathrm{hom}_X(f,m)$, then $(i,I)$ extends uniquely along $\\inr:A\\to \\join[X]{A}{A}$ to a map $\\mathrm{hom}_X(\\join{f}{f},m)$. This extension again extends uniquely along $\\inr:\\join[X]{A}{A}\\to \\join[X]{A}{(\\join[X]{A}{A})}$ to a map $\\mathrm{hom}_X(\\join{f}{(\\join{f}{f})},m)$ and so on, resulting in a diagram of the form\n\\begin{equation*}\n\\begin{tikzcd}\nA \\arrow[dr] \\arrow[r,\"\\inr\"] & \\join[X]{A}{A} \\arrow[d,densely dotted] \\arrow[r,\"\\inr\"] & \\join[X]{A}{(\\join[X]{A}{A})} \\arrow[dl,densely dotted] \\arrow[r,\"\\inr\"] & \\cdots \\arrow[dll,densely dotted,bend left=10] \\\\\n& U\n\\end{tikzcd}\n\\end{equation*}\n\n\\begin{defn}\nSuppose $f:A\\to X$ is a map. 
Then we define the \\define{fiberwise join powers} \n\\begin{equation*}\nf^{\\ast n}:A_X^{\\ast n} X.\n\\end{equation*}\n\\end{defn}\n\n\\begin{constr}\nNote that the operation $(B,g)\\mapsto (\\join[X]{A}{B},\\join{f}{g})$ defines an endomorphism on the type\n\\begin{equation*}\n\\sm{B:\\UU}B\\to X.\n\\end{equation*}\nWe also have $(\\emptyt,\\ind{\\emptyt})$ and $(A,f)$ of this type. For $n\\geq 1$ we define\n\\begin{align*}\nA_X^{\\ast (n+1)} & \\defeq \\join[X]{A}{A_X^{\\ast n}} \\\\\nf^{\\ast (n+1)} & \\defeq \\join{f}{f^{\\ast n}}.\\qedhere\n\\end{align*}\n\\end{constr}\n\n\\begin{defn}\nWe define $A_X^{\\ast\\infty}$ to be the sequential colimit of the type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_X^{\\ast 0} \\arrow[r] & A_X^{\\ast 1} \\arrow[r,\"\\inr\"] & A_X^{\\ast 2} \\arrow[r,\"\\inr\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\nSince we have a cocone\n\\begin{equation*}\n\\begin{tikzcd}\nA_X^{\\ast 0} \\arrow[r] \\arrow[dr,swap,\"f^{\\ast 0}\" near start] & A_X^{\\ast 1} \\arrow[r,\"\\inr\"] \\arrow[d,swap,\"f^{\\ast 1}\" near start] & A_X^{\\ast 2} \\arrow[r,\"\\inr\"] \\arrow[dl,swap,\"f^{\\ast 2}\" xshift=1ex] & \\cdots \\arrow[dll,bend left=10] \\\\\n& X\n\\end{tikzcd}\n\\end{equation*}\nwe also obtain a map $f^{\\ast\\infty}:A_X^{\\ast\\infty}\\to X$ by the universal property of $A_X^{\\ast\\infty}$. \n\\end{defn}\n\n\\begin{lem}\\label{lem:finfjp_up}\nLet $f:A\\to X$ be a map, and let $m:U\\to X$ be an embedding. Then the function\n\\begin{equation*}\n\\blank\\circ \\seqin_0: \\mathrm{hom}_X(f^{\\ast\\infty},m)\\to \\mathrm{hom}_X(f,m)\n\\end{equation*}\nis an equivalence. \n\\end{lem}\n\n\\begin{thm}\\label{lem:isprop_infjp}\nFor any map $f:A\\to X$, the map $f^{\\ast\\infty}:A_X^{\\ast\\infty}\\to X$ is an embedding that satisfies the universal property of the image inclusion of $f$.\n\\end{thm}\n\n\\begin{lem}\nConsider a commuting square\n\\begin{equation*}\n\\begin{tikzcd}\nA \\arrow[r] \\arrow[d] & B \\arrow[d] \\\\\nC \\arrow[r] & D.\n\\end{tikzcd}\n\\end{equation*}\n\\begin{enumerate}\n\\item If the square is cartesian, $B$ and $C$ are essentially small, and $D$ is locally small, then $A$ is essentially small.\n\\item If the square is cocartesian, and $A$, $B$, and $C$ are essentially small, then $D$ is essentially small. \n\\end{enumerate}\n\\end{lem}\n\n\\begin{cor}\nSuppose $f:A\\to X$ and $g:B\\to X$ are maps from essentially small types $A$ and $B$, respectively, to a locally small type $X$. Then $A\\times_X B$ is again essentially small. \n\\end{cor}\n\n\\begin{lem}\nConsider a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nwhere each $A_n$ is essentially small. Then its sequential colimit is again essentially small. \n\\end{lem}\n\n\\begin{thm}\\label{thm:replacement}\n  For any map $f:A\\to B$ from an essentially small type $A$ into a locally small type $B$, the image of $f$ is again essentially small.\n\\end{thm}\n\n\\begin{cor}\n  Consider a $\\UU$-small type $A$, and an equivalence relation $R$ over $A$ valued in the $\\UU$-small propositions. 
Then the set quotient $A/R$ is essentially small.\n\\end{cor}\n\n\\begin{exercises}\n\\exercise \\label{ex:seqcolim_shift}\nShow that the sequential colimit of a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nis equivalent to the sequential colimit of its shifted type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & A_3 \\arrow[r,\"f_3\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\n  \\exercise Let\n  \\begin{tikzcd}\n    P_0 \\arrow[r] & P_1 \\arrow[r] & P_2 \\arrow[r] & \\cdots\n  \\end{tikzcd}\n  be a sequence of propositions. Show that\n  \\begin{equation*}\n    \\eqv{\\colim_n(P_n)}{\\exists_{(n:\\N)} P_n}.\n  \\end{equation*}\n\\exercise \\label{ex:seqcolim_contr}Consider a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nand suppose that $f_n\\htpy \\mathsf{const}_{a_{n+1}}$ for some $a:\\prd{n:\\N}A_n$. Show that the sequential colimit is contractible.\n\\exercise Define the $\\infty$-sphere $\\sphere{\\infty}$ as the sequential colimit of\n\\begin{equation*}\n\\begin{tikzcd}\n\\sphere{0} \\arrow[r,\"f_0\"] & \\sphere{1} \\arrow[r,\"f_1\"] & \\sphere{2} \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nwhere $f_0:\\sphere{0}\\to\\sphere{1}$ is defined by $f_0(\\bfalse)\\jdeq \\inl(\\ttt)$ and $f_0(\\btrue)\\jdeq \\inr(\\ttt)$, and $f_{n+1}:\\sphere{n+1}\\to\\sphere{n+2}$ is defined as $\\susp(f_n)$. Use \\cref{ex:seqcolim_contr} to show that $\\sphere{\\infty}$ is contractible.\n\\exercise Consider a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nin which $f_n:A_n\\to A_{n+1}$ is weakly constant in the sense that\n\\begin{equation*}\n\\prd{x,y:A_n} f_n(x)=f_n(y).\n\\end{equation*}\nShow that $A_\\infty$ is a mere proposition.\n\\exercise Show that $\\N$ is the sequential colimit of\n\\begin{equation*}\n  \\begin{tikzcd}\n    \\Fin(0) \\arrow[r,\"\\inl\"] & \\Fin(1) \\arrow[r,\"\\inl\"] & \\Fin(2) \\arrow[r,\"\\inl\"] & \\cdots.\n  \\end{tikzcd}\n\\end{equation*}\n\\end{exercises}\n", "meta": {"hexsha": "7403fd0c398c87c9c6779990dd0be814e6deb4a6", "size": 25322, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/sequences.tex", "max_stars_repo_name": "hemangandhi/HoTT-Intro", "max_stars_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/sequences.tex", "max_issues_repo_name": "hemangandhi/HoTT-Intro", "max_issues_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/sequences.tex", "max_forks_repo_name": "hemangandhi/HoTT-Intro", "max_forks_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7166392092, "max_line_length": 445, "alphanum_fraction": 
0.6693784061, "num_tokens": 9545, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152325073083132, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.585821518219265}}
{"text": "\\chapter{Hypothesis testing} % Casella berger, S373\n\\section{Introduction}\nThis chapter explains hypothesis testing and P-values. A hypothesis is defined as below.\n\\begin{defn}\nA hypothesis is a statement about a population parameter.\n\\end{defn}\nFurthermore one tests two hypotheses against each other. A definiton of this is as follows.\n\\begin{defn}\nThe two complementary hypotheses in a hypothesis testing problem are called the null hypothesis and the alternative hypothesis. They are denoted by $H_0$ and $H_1$, respectivly.\n\\end{defn}\nTo do a hypothesis test one need to calculate a test statistic from data. For a one sided test one can assume that the data is outside the null model for large values of the test statistic. In the case of a two sided test the data can be outside the null hypothesis for small values of the test statistic too. This leads to calculation of p-values\n\\section{P-values} % s397\nA p-value can give the result of a hypothesis test. The p-value is defined in definiton ??.\n\\begin{theorem}\nLet $W(\\boldsymbol{X})$ be a test statistic such that large values of $W$ give evidence that $H_1$ is true. For each sample point $\\boldsymbol{x}$, define\n\\begin{equation}\np(\\boldsymbol{x}) = \\sup_{\\theta \\in \\Theta_0} P_\\theta (W(\\boldsymbol{X}) \\geq W(\\boldsymbol{x})).\n\\end{equation}\nThen, $p(\\boldsymbol{X})$ is a valid p-value.\n\\end{theorem}\nThe $\\Theta_0$ is the subset of the parameter space for the null model.\n\\\\\n\\\\\nP-values can also be defined by using sufficient statistics. A p-value is then defined as\n\\begin{equation}\np(\\boldsymbol{x}) = P(W(\\boldsymbol{X}) \\geq W(\\boldsymbol{x}) | S = S(\\boldsymbol{x})),\n\\label{eq:pvalue}\n\\end{equation}\nwhere $S(\\boldsymbol{x})$ is a sufficient statistic. A valid p-value is defined by\n\\begin{defn}\nA p-value $p(\\boldsymbol{X})$ is a test statistic satisfying $0 \\leq p(\\boldsymbol{x}) \\leq 1$ for every sample point $\\boldsymbol{x}$. Small values of $p(\\boldsymbol{X})$ give evidence that $H_1$ is true. A p-value is valid if, for every $\\theta \\in \\Theta_0$ and every $0 \\leq \\alpha \\leq 1$,\n\\begin{equation}\nP_\\theta (p(\\boldsymbol{X}\\leq \\alpha)) \\leq \\alpha.\n\\end{equation}\n\\label{defn:validpvalue}\n\\end{defn}\nBy this definition the p-value given a sufficient statistic is valid as shown below\n\\begin{equation}\nP_\\theta(p(\\boldsymbol{X}) \\leq \\alpha ) = \\sum_x P(p(\\boldsymbol{X}) \\leq \\alpha | s =s) P_\\theta (S=s) \\leq \\sum_s \\alpha P_\\theta (S=) \\leq \\alpha.\n\\end{equation}\n\\\\\n\\\\ % http://www.math.ntnu.no/~bo/Fordypning/2015/Marius/NHPP-Iran-paper.pdf\n\\section{Test statistics}\n\\label{sec:teststat}\nAs mentioned earlier to calculate the p-value one need a test statistic. A test statistic is often a function which takes a sample as input and gives out a meassure of an certain attribute.For our data we will use several types of test statistics. In the test statistics we are using tranformed times of our events. This is has also been done in \\cite{lindqvist2011monte}. The transformed times have an uniform distribution between 0 and 1. This is because if the times $t_1,...,t_n$ is from a NHPP then $\\Lambda(t_1),...,\\Lambda(t_1)$ is a homogenous Poisson process with intensity 1. 
In general, the transformed times are defined as\n\\begin{equation}\nV_j = \\frac{\\Lambda(t_j)}{\\Lambda(\\tau)},\n\\label{eq:transtimes}\n\\end{equation}\nwhere $\\tau$ denotes the end of the observation period. The different statistics are introduced below.\n\\subsection{Greenwood statistic}\nThe Greenwood statistic is two sided: the null hypothesis is rejected for both small and large values.\n\\begin{equation}\nG = \\sum_{j=1}^{n+1} (\\hat{V}_j - \\hat{V}_{j-1})^2,\n\\end{equation}\nwhere $\\hat{V}_0 = 0$ and $\\hat{V}_{n+1} = 1$.\n\\subsection{Laplace statistic}\nThis is also a two sided statistic, rejected for both small and large values.\n\\begin{equation}\nL = \\sqrt{\\frac{12}{n}} \\sum_{j=1}^{n} \\left( \\hat{V}_j - \\frac{1}{2} \\right)\n\\end{equation}\n\\subsection{Modified Cramer von Mises statistic}\n\\begin{equation}\nW^2 = \\sum_{j=1}^{n} \\left[ \\hat{V}_j - \\frac{(2j-1)}{2n} \\right]^2 + \\frac{1}{12n}\n\\end{equation}\n\\subsection{Modified Kolmogorov-Smirnov statistic}\n\\begin{equation}\nD = \\max[D^+ , D^-]\n\\end{equation}\n\\begin{equation}\nD^+ = \\max_{1 \\leq j \\leq n} \\left( \\frac{j}{n} - \\hat{V}_j \\right)\n\\end{equation}\n\\begin{equation}\nD^- = \\max_{1 \\leq j \\leq n} \\left( \\hat{V}_j - \\frac{(j-1)}{n} \\right)\n\\end{equation}\n\\\\\n\\\\\nBoth the modified Cramer von Mises statistic and the modified Kolmogorov-Smirnov statistic reject the null hypothesis for large values of the statistic.\n\n\n\\section{Test statistics and p-values in NHPP}\nTo be able to use the test statistics in section \\ref{sec:teststat} we need to find the transformed times as defined in equation \\ref{eq:transtimes}. To do this, the parameters of the rate function must be estimated. These parameters can be found by using maximum likelihood estimators as defined in chapter \\ref{chap:like}. From equation \\ref{eq:loglike} we see that maximizing the likelihood might not be trivial because of the $\\Lambda$ term. However, the maximum likelihood estimates can be found numerically in the R programming language by using the built-in function \\texttt{optim}. Furthermore, the integral in equation \\ref{eq:largelambda} can also be solved numerically. Hence the transformed times and test statistics can be calculated.\n\\\\\n\\\\\nTo calculate the p-value in equation \\ref{eq:pvalue} we refer to \\cite{iranNHPP}. 
From this paper we have that the p-value can be estimated by\n\\begin{equation}\n\\hat{p} = \\#\\{W^* \\geq W_{obs}\\}/M,\n\\end{equation}\nwhere $W^*$ denotes the test statistic of a generated sample, $W_{obs}$ is the test statistic of the observed data, and $M$ is the number of generated samples.\n", "meta": {"hexsha": "9b7065d5e357e1776f44e0506bd138b320e23d14", "size": 5513, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/Thesis/chapters/hypotesetesting.tex", "max_stars_repo_name": "mariufa/ProsjektOppgave", "max_stars_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Thesis/Thesis/chapters/hypotesetesting.tex", "max_issues_repo_name": "mariufa/ProsjektOppgave", "max_issues_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/Thesis/chapters/hypotesetesting.tex", "max_forks_repo_name": "mariufa/ProsjektOppgave", "max_forks_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.9438202247, "max_line_length": 726, "alphanum_fraction": 0.7422456013, "num_tokens": 1627, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.8152324960856177, "lm_q1q2_score": 0.5858215003286673}}
{"text": "\\section{Generic Programming} \\label{sec:generic_programming}\n\nEver since the humble beginnings of \\Cpp\\ there's been extensive support for different kinds of \\emph{polymorphism}, which can be split into two ``main'' categories: \\emph{run-time} polymorphism and \\emph{compile-time} polymorphism. The former is a key component in class-based OOP, and takes the form of inheritance, while the latter one enables \\emph{generic programming}, and is synonymous with \\emph{templates}.\n\nTemplates are a bit special in \\Cpp\\ since they are \\emph{unconstrained}. Meaning, when we \\emph{instantiate} e.g. function templates, the compiler will first generate the function (that's the instantiation part) by replacing \\texttt{T} with whatever type you've passed to the function, and only \\underline{after}, check if the syntax is correct! This leads to some very unfortunate side-effects, as you'll see soon enough :)\n\nBut before digging into the gritty details on that, let's start with a simple example of ``regular programming'' (i.e. with no templates), which we'll then use to iteratively build a scenario where generic programming will be needed. This will help us understand the though process that goes into designing a generic function, and how \\emph{requirements} are gathered by the function's author.\n\nBelow is a function that calculates the arithmetic mean by taking in two pointers to a C-style array of \\texttt{doubles}, one for the first, and last elements. Notice, that even if we would mangle the names, we would still recognize this as the \\texttt{mean}, since the operations (summing elements and dividing by size), would still be familiar to us, since we know what \\texttt{\\{\\}}, \\texttt{+=}, \\texttt{/} do to \\texttt{doubles}.\n\n\\lstinputlisting[linerange={3-10}]{examples/mean.cc}\n\n\\noindent By using \\emph{operator overloading}, we can make our user-defined types behave just like built-in types. This is very powerful, because it allows us to transfer our knowledge about e.g. \\texttt{doubles}, and apply it for our own types as well, by using the \\underline{\\emph{same syntax}} as before. This last part is particularly important!\n\n\\lstinputlisting[linerange={6-16}]{examples/point2.hh}\n\n\\lstinputlisting[linerange={5-12}]{examples/centroid.cc}\n\n\\noindent As can be seen above, providing all those overloads to \\texttt{point2} proved fruitful, since we can now represent the algorithm to find the centroid of a cluster of points in a very natural way, quite similarly to how we did the \\texttt{mean} function. In fact, this function is \\emph{exactly} the same as \\texttt{mean}, with the only difference being that the type \\texttt{double} $\\rightarrow$ \\texttt{point2} and that now \\texttt{mean} $\\rightarrow$ \\texttt{centroid}. Surely there must be a way to generalize this type of ``coincidence'' in \\Cpp? If you've done any GP at all before, you probably know where this is going!\n\n\\vspace{1em}\\noindent\\textbf{Note:} these are \\underline{not} good examples on how you should write these functions, for instance, what happens when \\texttt{begin >= end}? And why use pointers? Many examples in this primer are like this too, so don't use it in production!\n\n\\subsection{Unconstrained Templates} \\label{sec:unconstrained_templates}\n\n    Obviously, both \\texttt{mean} and \\texttt{centroid} are trying to find an \\texttt{average}, which we can express as a higher-level idea with \\emph{function templates}, as shown below. 
We've essentially just swapped \\texttt{double}/\\texttt{point2} for \\texttt{T}, a \\emph{template parameter}. These are defined in \\emph{template parameter lists}, which have either \\emph{type} or \\emph{non-type template parameters}. By default, template parameters are \\emph{unconstrained}, which means they may take \\emph{any} type or value (using non-type parameters).\n\n    Now, assume you're a user of this function, and wish to use it with your own user-defined type. What are the \\emph{requirements} our type needs to \\emph{satisfy}? As authors of this function, we know \\texttt{T} needs to be \\texttt{DefaultConstructible} in line 4, \\texttt{Summable} with \\texttt{T} in line 7, and \\texttt{Scalable} by \\texttt{double} in line 8. However, the user doesn't know that, and there's no way to tell by looking at the function signature... Do we have to go and look at the implementation?\n\n    While that might work for this short toy example, it certainly won't scale. A possible solution (adopted by the STL) is to document everything on paper.\n\n    \\lstinputlisting[linerange={7-7,11-18}]{examples/average.cc}\n\n    While writing documentation for the requirements on \\texttt{T} certainly works, there is no guarantee the user will read it. And even if the user \\emph{does} read it, there is still the possibility he/she will implement the requirements incorrectly. \\textbf{Note:} for a ``good'' example of this, see \\emph{Type Requirements} in \\href{https://en.cppreference.com/w/cpp/algorithm/sort}{\\texttt{std::sort}}.\n\n    Another reasonable alternative, and probably the most common approach, is to implement the requirements iteratively by \\emph{trial-and-error}. We just let the compiler try to instantiate the template, and the hope is that the compiler will tell us what requirements are missing, so we can implement them. Even this approach is problematic for the user. For instance, consider this snippet:\n\n    \\begin{lstlisting}\nstd::list l { 5, 1, 2, 4, 3 };\nstd::sort(l.begin(), l.end()); \\end{lstlisting}\n\n\\noindent This spits out around 50 lines of template instantiation errors in GCC 8.1. While this is actually pretty tame, and not that hard to figure out what's wrong, it will still scare away a lot of people. The compiler is somewhat helpful, and colors e.g. the \\texttt{std::\\_\\_lg(\\_\\_last - \\_\\_first) * 2}, telling us that it can't compile because \\texttt{std::\\_List\\_iterator<int>} doesn't have \\texttt{operator-}. However, the most helpful hint is hidden away between those lines, and the true cause is that \\texttt{std::list} uses \\texttt{BidirectionalIterators}, and \\texttt{std::sort} expects a pair of \\texttt{RandomAccessIterators}. Both of these are concepts, and their requirements are defined in the standard library specification. The programmer needs to read the documentation to locate the problem anyway!\n\nTo rub a little bit more salt in the wound, consider this simple example:\n\n    \\begin{lstlisting}\nstd::set<Widget> w; // defined as struct Widget {  }; elsewhere\nw.insert(Widget{}); \\end{lstlisting}\n\n\\noindent This gives around 412 lines of template instantiation errors. This is still pretty tame in comparison to what some templated libraries output when you make a small mistake, and some of the longer errors even crash terminal emulators! Why can't we just get the same quality of errors as with non-templated code?\n\nThere is a reason why compilers can't be more helpful in these situations. 
As I've mentioned before, unconstrained templates only validate syntax \\emph{after} the template has been instantiated, which means it's very hard to track down the source of the problem, especially if the instantiation error happens far from the ``call site'', resulting in a long \\emph{template instantiation stack}. It's so hard in fact, that compilers aren't even required to provide any diagnostics for these problems (according to the standard). Luckily, compilers still try, but the chances for these error messages getting vastly better in future compiler versions are slim, since these problems are equivalent to the \\emph{halting problem}.\n\nIt seems we've hit a dead-end with unconstrained templates. Maybe we'll just have to endure the pain of using templated code, or consider it a rite of passage for every \\Cpp\\ programmer that wishes to venture forth. We'll just have to live with fragile template interfaces and unintuitive error messages...\n\n    \\subsubsection*{Concepts to the Rescue!}\n\n    Don't fear, concepts is here! Below we have the \\texttt{average} from before, with the exception of lines 2-4, which have been squeezed in-between the template parameter list and the function signature. These apply \\emph{type constraints} on \\texttt{T}, which are just a set of \\emph{predicates} that must be \\emph{fulfilled} \\underline{before} instantiation.\n\n    Without getting bogged down in syntax (we'll have plenty of that later), lines 2-4 say that: we require \\texttt{T} to be \\texttt{DefaultConstructible}, \\texttt{T} must also be \\texttt{SummableWith} another \\texttt{T}, and \\texttt{T} has to be \\texttt{ScalableWith} a \\texttt{double}. All of these concepts together constrain \\texttt{T} to a set of types we want to allow.\n\n    \\lstinputlisting[linerange={7-18}]{examples/average.cc}\n\n    \\noindent Even though that seems pretty neat, what have we gained from doing this? Apart from the extra verboseness, function authors now need to spend time thinking about the constraints on \\texttt{T}, which might seem like ``a waste of time''. After all, it's not like there aren't implicit ``constraints'' already defined in the body of the function: giving a ``bad'' \\texttt{T} won't compile, even without concepts. There needs to be a good reason \\emph{why} this extra verboseness is needed at all!\n\n    Luckily, this extra verboseness solves a lot of the problems we've described, and easily outweighs the cost of having to write a couple of extra lines of code. Because the constraints on \\texttt{T} are checked \\emph{before} instantiation occurs, a more reasonable error message can be displayed instead of the ``junk'' we get today. Also, because constraints are part of the function signature, the interface is no longer fragile, and the user can see what concepts they need to implement.\n\n    This completely eliminates the need to document template requirements, because the constraints are explicitly stated in the function signature. Also, the ``trial-and-error'' approach to implementing requirements is no longer unreasonable, because the compiler can now give (more) useful information! Calling \\texttt{std::sort} with a \\texttt{std::list} will now give something similar to: \\emph{``constraints not satisfied: \\texttt{std::list::iterator} is not \\texttt{RandomAccess}''}, followed by a concise summary of all the requirements you need to implement.\n\n    Also, concepts allow us to overload function templates using constraints. 
However, we've been able to do that for a while now, by using \\emph{type traits} and SFINAE, or even with \\emph{tag dispatching}. In fact, both of these are ``competitors'' to concepts, and fill similar niches. Let's have a look at both of these as well!\n\n\\subsection{Type Traits and SFINAE} \\label{sec:type_traits_and_sfinae}\n\n    Even though this primer is mainly focused on concepts, it would be foolish not to compare it with existing methods that already limit template parameters. One of these is \\emph{type traits + SFINAE (Substitution Failure Is Not An Error)}. Despite the scary-looking acronym, the idea behind SFINAE is quite simple: ``if replacing \\texttt{T} gives invalid syntax, don't emit an error, find alternatives first'', which means we can ``swap'' between overloads depending on these ``failures''.\n\n    And this is where type traits come in handy, since they allow us to ``trigger'' these failures by using them together with \\texttt{enable\\_if}, a compile-time switch, which evaluates to a non-failing type if the type trait is satisfied. The type traits themselves are just clever constructs that evaluate to \\texttt{true} when \\texttt{T} has certain properties, e.g. \\texttt{is\\_integral\\_v<int>} gives us \\texttt{true}. One of the problems with type traits is that they are not obvious to specify.\n\n    \\noindent \\textbf{Note:} for a deeper dive see \\emph{Bendersky's} \\href{https://eli.thegreenplace.net/2014/sfinae-and-enable_if/}{SFINAE and \\texttt{enable\\_if}} blog post.\n\n    For instance, \\texttt{EqualityComparable} from the \\Cpp11 standard library:\n\n    \\begin{table}[h]\n    \\begin{tabular}{ccc}\n        \\toprule\n        \\bf{Expression} & \\bf{Return Type} & \\bf{Requirement Specification} \\\\\n        \\midrule\n        \\texttt{x == y} & \\textbf{\\texttt{bool}} convertible & \\makecell[l]{\\texttt{==}\\, is an equivalence relation, that is,\\\\\n                                                                it has the following properties:\\\\\n                                                                $\\rightarrow$ for all \\texttt{x}, \\texttt{x == x}\\\\\n                                                                $\\rightarrow$ if \\texttt{x == y}, then \\texttt{y == x}\\\\\n                                                                $\\rightarrow$ if \\texttt{x == y}, \\texttt{y == z}, then \\texttt{x == z}} \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\end{table}\n\n    \\noindent could be defined as the type trait below, which gives \\texttt{true} iff \\texttt{T} and \\texttt{U} have \\texttt{operator==(T,U)}. While this wasn't all that complicated in this case, it's still annoying that 87.5\\% of this code is just pure boilerplate. It's line 7 that is actually doing the heavy lifting; everything else is just ``unnecessary'' code. Also, if you're very observant, you'll notice that this type trait doesn't even satisfy all of the requirements above, since it doesn't support commutativity.\n\n    Even line 7 isn't obvious, since we need to use \\texttt{std::declval} to acquire references to \\texttt{T} and \\texttt{U} that don't need to go through constructors, so that they can be evaluated with \\texttt{decltype}, and finally do what we came to do: check if \\texttt{operator==(T,U)} is available. I hope that you will agree that this seems like an awfully roundabout way of doing this. 
And this is one of the easier type traits to specify; expect extra machinery for anything less trivial.\n\n    \\lstinputlisting[linerange={6-13}]{examples/sfinae.hh}\n\n    \\subsubsection*{Concepts to the Rescue!}\n\n    Don't fear, concepts is here! Why do we have to go through all that trouble? In the end, the problem we're trying to solve is simple: for any \\texttt{T} and \\texttt{U}, check if \\texttt{x == y} and \\texttt{y == x} are expressions that compile (where \\texttt{x} $\\in$ \\texttt{T}, \\texttt{y} $\\in$ \\texttt{U}). Below we define \\texttt{EqualityComparableWith} by using concepts. Prettier, right?\n\n    Again, without delving too much into the syntax, line 2 defines a concept called \\texttt{EqualityComparableWith}, that requires, for any \\texttt{x} $\\in$ \\texttt{T} and \\texttt{y} $\\in$ \\texttt{U}, that \\texttt{x == y}, \\texttt{x != y}, \\texttt{y != x}, \\texttt{y == x} be valid expressions evaluable to \\texttt{bool}. E.g. \\texttt{EqualityComparableWith<const char*, std::string>} gives \\texttt{true}, because \\texttt{std::string} has all operators for comparing C-style strings. Note that this is way more powerful than \\texttt{is\\_equality\\_comparable} since it also checks commutativity properties. Something that might be surprising is that it also requires non-equality comparison (\\texttt{!=}); this is a trend you'll see with ``good'' concept definitions, since they \\emph{prefer complete} rather than \\emph{minimal} concepts.\n\n    Back in the \\texttt{average} example, where we apply constraints via concepts, you might have been wondering how to define e.g. \\texttt{DefaultConstructible}. And you'll be pleased to know all of them are really simple to define as well, just like the one below! If you want to take a peek: \\texttt{examples/concepts.h}.\n\n    \\lstinputlisting[linerange={57-63}]{examples/concepts.h}\n\n    \\noindent And now for something completely different, but still related to type traits, and more specifically, the many issues with SFINAE. Consider this example: a factory which creates different numbers depending on the constructor that was called. If \\texttt{NumberFactory} is called with an integer type, \\texttt{create\\_number} returns \\texttt{42}, but if it's a floating point type, we get e.g. \\texttt{0xDEADBEEF} instead.\n\n    \\lstinputlisting[linerange={8-22}]{examples/factory.hh}\n\n    Sounds simple, right? And one would expect the \\texttt{NumberFactory} on the previous page to already work, since we're eliminating the ``bad'' constructors with SFINAE, via the \\texttt{is\\_integral} \\& \\texttt{is\\_floating\\_point} type traits. However, that doesn't quite work, because the call will be ambiguous. SFINAE isn't part of overload resolution. As far as the compiler knows, both constructors are exactly the same. How to fix this? We make them different:\n\n    \\lstinputlisting[linerange={28-44}]{examples/factory.hh}\n\n    \\noindent Essentially, by introducing a \\texttt{dummy} argument, we have changed the signature of the function, which means the compiler no longer considers this ambiguous, and SFINAE can finally take over as usual. While this works, I'd like to argue that this isn't nice, and causes unnecessary headaches for something so simple. Wouldn't it be awesome to not have to think about these edge-cases anymore?\n\n    \\subsubsection*{Concepts to the Rescue!}\n\n    Don't fear, concepts is here! Since constraints are part of overload resolution, the compiler won't consider these constructors ambiguous anymore. 
In fact, constraints are part of the function's signature! No more edge-cases with this!\n\n    \\lstinputlisting[linerange={50-62}]{examples/factory.hh}\n\n\\subsection{Tag Dispatching} \\label{sec:tag_dispatching}\n\n    Another generic programming technique, heavily used in the \\Cpp\\ standard library, is \\emph{tag dispatching} together with \\emph{trait classes}. It's a way to choose the function to call/dispatch depending on the properties of a type by using \\emph{tags}. A tag is just an empty class (d\u00e9j\u00e0 vu?) used to control the function dispatch.\n\n    This is better explained with an example, which I've shamelessly borrowed from \\emph{Boost's} excellent \\href{https://www.boost.org/community/generic_programming.html#tag_dispatching}{Generic Programming Techniques} survey. If you've played around with \\texttt{std::advance} before, you'll know that it's an algorithm which increments an iterator $n$ times, and gives you that ``advanced'' iterator. What you might \\emph{not} know is that there are two versions of it: one which runs in $\\mathcal{O}(n)$ time, and another in $\\mathcal{O}(1)$ time, depending on the properties of the type that was passed. Which makes sense when you think about it, since a \\texttt{ForwardIterator} can only call \\texttt{++i} while a \\texttt{RandomAccessIterator} can just directly call \\texttt{i += n}. It does this by using tag dispatching, like this:\n\n    \\lstinputlisting[linerange={13-31}]{examples/advance.cc}\n\n    \\noindent Notice that \\texttt{tagged\\_advance} needs an extra argument at the end: the tag, which is used to dispatch the right \\texttt{tagged\\_advance} from within \\texttt{advance}. It does this selection by using \\texttt{iterator\\_traits<T>::category}, which is a pre-defined type alias for finding out what type of iterator \\texttt{T} is. This is quite an elegant approach, but requires us to specify new ``empty'' classes for new iterator types. Worst of all though, we need to provide a specialization inside the \\texttt{iterator\\_traits} class! Isn't there a better way to solve this?\n\n    \\lstinputlisting[linerange={36-40}]{examples/advance.cc}\n\n    \\subsubsection*{Concepts to the Rescue!}\n\n    Don't fear, concepts is here! Instead of having to control dispatch with a tag, how about choosing the correct function overload right from the start? With concepts, this is possible, since we can specify the properties on \\texttt{T} explicitly!\n\n    \\lstinputlisting[linerange={47-50,52-59,61-62}]{examples/advance.cc}\n\n    \\noindent At this point, you should be somewhat convinced that concepts are neat, and that existing \\Cpp\\ language features aren't ``good'' enough for some scenarios. It's finally time to actually start talking about Concepts! 
I hope you're hyped!\n", "meta": {"hexsha": "d2efa0bfde2aec706c27bbb8b8b610596757b4f8", "size": 19825, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "primer/sections/generic_programming.tex", "max_stars_repo_name": "CaffeineViking/concepts-primer", "max_stars_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2019-05-01T21:49:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T06:50:47.000Z", "max_issues_repo_path": "primer/sections/generic_programming.tex", "max_issues_repo_name": "CaffeineViking/concepts-primer", "max_issues_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "primer/sections/generic_programming.tex", "max_forks_repo_name": "CaffeineViking/concepts-primer", "max_forks_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 135.7876712329, "max_line_length": 840, "alphanum_fraction": 0.7533921816, "num_tokens": 4857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.585821498715754}}
{"text": "%!TEX root = ../06DL.tex\n\n\\chapter{Stochastic Gradient Descent Methods and Its Variants}\n\n\\section{Fully Gradient Descent Method}\n\\begin{problem}Find ${x}^{*} \\in \\mathcal{X}$  s.t:\n\\begin{equation}\\label{equ:sum-opt}\n{x}^* = \\mathop{\\arg\\min}_{{x} \\in \\mathcal{X} \\subseteq \\mathbb{R}^{n}} f(x).\n\\end{equation}\nHere $\\mathcal{X}$ is a convex compact subset. (This condition is used to prove the gradient is bounded, i.e. $\\|\\nabla f(x)\\| \\le M$ for SGD method. But in fully gradient descent method, $\\mathcal{X}$ can be equal to $\\mathbb{R}^n$.)\n\\end{problem}\n\n\\begin{assumption}\\label{assum:GDconvergence}\nHere $f$ satisfies:\n\\begin{itemize}\n\\item $\\lambda$-strongly convex, i.e.\n\\begin{equation}\\label{equ:lambdastronglyconvex}\nf(x) - f(y) \\ge \\nabla f(y)^T(x-y) + \\frac{\\lambda}{2}\\|x-y\\|^2, \\quad \\forall x, y.\n\\end{equation}\n\\begin{itemize}\n\\item We can even assume that: \n\\begin{equation}\\label{eq:sufficient_lambdastronglyconvex}\ny^T\\nabla^2 f(x)y \\ge \\lambda \\|y\\|^2, \\quad \\forall x, y,\n\\end{equation}\nand \\eqref{eq:sufficient_lambdastronglyconvex} can prove $\\lambda$-strongly convex \\eqref{equ:lambdastronglyconvex}.\n\\end{itemize}\n\\item $\\nabla f$ Lipschitz continuous with constant $L$ i.e\n\\begin{equation}\\label{eq:lipschtizcont}\n\\| \\nabla f(x) - \\nabla f(y)\\| \\le L \\|x- y\\|\n\\end{equation}\n%Thus we will have:\n%\\begin{equation}\n%f(x) - f(y) \\le \\nabla f(y)^T(x-y) + \\frac{H}{2}\\|x-y\\|^2, \\quad \\forall x, y.\n%\\end{equation}\n\\end{itemize}\n\\end{assumption}\n\nHere from the Lipschitz continuity of $\\nabla f$ \\eqref{eq:lipschtizcont}, we have:\n\\begin{lemma}\nIf $f \\in C^1$ and $\\| \\nabla f(x) - \\nabla f(y)\\| \\le L \\|x- y\\|$, then we have\n\\begin{equation}\nf(x) - f(y) - \\nabla f(y)^T(x-y) \\le |f(x) - f(y) - \\nabla f(y)^T(x-y)| \\le \\frac{L}{2}\\|x- y\\|^2.\n\\end{equation} \n\\end{lemma}\n\\begin{proof}\nTake $g(t) = f(tx + (1-t)y)$, so we heve $g^\\prime(t) = \\nabla f(tx + (1-t)y)^\\top (x- y)$ and \n\\begin{equation*}\nf(x) - f(y) = \\int_0^1 g^{\\prime}(t) {\\rm d}t .\n\\end{equation*}\nHence,\n\\begin{align*}\n&|f(x) - f(y) - \\nabla f(y)^T(x-y)| \\\\\n= & \\left |\\int_0^1 \\nabla f(t + (1-t)y)^T(x- y) - \\nabla f(y)^T(x-y)\\rm{dt}\\right | \\\\\n\\le & \\int_0^1 \\left | \\nabla (f(t + (1-t)y)^T- \\nabla f(y)^T)(x-y)\\right | \\rm{dt} \\\\\n\\le & L\\|x - y\\|^2 \\int_0^1 t \\rm{dt} = \\frac{L}{2}\\|x-y\\|^2.\n\\end{align*}\n\\end{proof}\n\n\nSo the algorithm is:\n\\begin{algorithm}\n\\caption{ FGD} \n\\label{alg:FGD}\n\\begin{equation}\\label{equ:fgd-iteration}\nx_{t+1} = \\Pi_{\\mathcal{X}}(x_{t} - \\eta_t \\nabla f(x_t)), \\quad t = 0:T,\n\\end{equation}\nwhere $\\eta_t$ is the stepsize / learning rate.\n\\end{algorithm}\n\nHere $\\Pi_{\\mathcal{X}}$ is the nonlinear projection operator of convex compact set $\\mathcal{X} \\subseteq \\mathbb{R}^n$:\n\\begin{equation}\n\\Pi_{\\mathcal{X}} w = \\mathop{\\arg\\min}_{x \\in \\mathcal{X}} \\|w - x\\|.\n\\end{equation}\n\n\\subsection{Convergence of GD}\n\\begin{theorem}\nIf $\\eta_t  = c $ is a small enough constant, then there exists $ 0< \\alpha < 1 $ such that\n\\begin{equation}\n\\|x_t - x^*\\|^2 \\le  \\alpha^t \\|x_0 - x^*\\|^2\n\\end{equation}\nfor Algorithm \\ref{alg:FGD} .\n\\end{theorem}\n\n\\begin{proof}\n%Then we have \nIf we minus any $x \\in \\mathcal{X}$, we can only get:\n\\begin{equation*}\nx_{t+1} - x = \\Pi_{\\mathcal{X}}(x_{t} - \\eta_t \\nabla f(x_t)) - x.\n\\end{equation*}\nIf we take 
the $L^2$ norm on both sides, we get:\n\\begin{equation*}\n\\|x_{t+1} - x \\|^2 = \\| \\Pi_{\\mathcal{X}}(x_{t} - \\eta_t \\nabla f(x_t)) - x \\|^2.\n\\end{equation*}\nSo we cannot get $(x_t - x)$ from the right side directly. But thanks to the definition of $\\Pi_{\\mathcal{X}}$ and $x \\in \\mathcal{X}$, we have:\n\\begin{equation*}\n\\| \\Pi_{\\mathcal{X}}(x_{t} - \\eta_t \\nabla f(x_t)) - x \\|^2 \\le \\|x_{t} - \\eta_t \\nabla f(x_t) - x\\|^2,\n\\end{equation*}\nso we have the following inequality if we take $x = x^*$:\n\\begin{align*}\n \\|x_{t+1} - x^* \\|^2 &\\le  \\| x_{t} - \\eta_t \\nabla f(x_t) - x^* \\|^2 \\\\\n &= \\|x_t-x^*\\|^2 - 2\\eta_t \\nabla f(x_t)^\\top (x_t - x^*) + \\eta_t^2 \\|\\nabla f(x_t) - \\nabla f(x^*)\\|^2 \\\\\n &\\le \\|x_t - x^*\\|^2 - 2\\eta_t \\lambda \\|x_t - x^*\\|^2 + \\eta_t ^2 L^2 \\|x_t - x^*\\|^2  \\\\\n &= (1 - 2\\eta_t \\lambda + \\eta_t^2 L^2) \\|x_t - x^*\\|^2.\n\\end{align*}\nHere we used $\\nabla f(x^*) = 0$ (which holds in the unconstrained case $\\mathcal{X} = \\mathbb{R}^n$), the Lipschitz continuity \\eqref{eq:lipschtizcont}, and the bound $\\nabla f(x_t)^\\top (x_t - x^*) \\ge \\lambda \\|x_t - x^*\\|^2$, which follows by adding \\eqref{equ:lambdastronglyconvex} to itself with the roles of $x$ and $y$ interchanged.\nSo, if $\\eta_t < 2\\lambda/L^2$, then $\\alpha = (1 - 2\\eta_t \\lambda + \\eta_t^2 L^2) < 1$, so the method has a linear convergence rate.\n\\end{proof}\n\n\n\\section{Stochastic Gradient Descent Methods}\nWhen we consider deep learning problems, they almost always take the following form:\n\\begin{problem}Find ${x}^{*} \\in \\mathcal{X}$  s.t.:\n\\begin{equation}\\label{equ:sum-opt}\n{x}^* = \\mathop{\\arg\\min}_{{x} \\in \\mathcal{X} \\subseteq \\mathbb{R}^{n}} \\frac{1}{m}\\sum_{i=1}^m f_i(x).\n\\end{equation}\nHere $\\mathcal{X}$ is a convex compact subset. (This condition is used to prove the gradient is bounded, i.e. $\\|\\nabla f_i(x)\\| \\le M$.)\n\\end{problem}\nThe SGD algorithm is:\n\\begin{algorithm}\\caption{SGD}\n\\label{alg:SGD}\n\\begin{equation}\\label{equ:sgd-iteration}\nx_{t+1} = \\Pi_{\\mathcal{X}}(x_{t} - \\eta_t \\nabla f_{i_t}(x_t)), \\quad t = 0:T,\n\\end{equation}\n\\begin{equation}\n\\mathbb{P}(i_t = s) = \\frac{1}{m}, \\quad s = 1:m,\n\\end{equation}\n\\begin{equation}\ni_1, \\cdots, i_T \\quad \\text{are independent}.\n\\end{equation}\n\\end{algorithm}\n\nUnder the following assumptions, we can prove the convergence properties of SGD:\n\\begin{assumption}\\label{assum:SGDconvergence}\nHere $f_i$ satisfies:\n\\begin{itemize}\n\\item $\\lambda$-strongly convex, i.e.\n\\begin{equation}\\label{eq:sgdlambdastrcvx}\nf_i(x) - f_i(y) \\ge \\nabla f_i(y)^\\top (x-y) + \\frac{\\lambda}{2}\\|x-y\\|^2, \\quad \\forall x, y, i.\n\\end{equation}\n\\begin{itemize}\n\\item We can even assume that: \n\\begin{equation}\\label{eq:sgdsufficientlambdastrcvx}\ny^\\top \\nabla^2 f_i(x)y \\ge \\lambda \\|y\\|^2, \\quad \\forall x, y, i,\n\\end{equation}\nand \\eqref{eq:sgdsufficientlambdastrcvx} implies $\\lambda$-strong convexity \\eqref{eq:sgdlambdastrcvx}.\n\\end{itemize}\n\\item Gradient is bounded, i.e. $\\|\\nabla f_i(x)\\| \\le M$. 
\n\\begin{itemize}\n\\item This condition can be guaranteed by the projection $\\Pi_{\\mathcal{X}}$ onto the compact set $\\mathcal{X}$; otherwise it is generally not true for a convex function on $\\mathbb{R}^n$.\n\\end{itemize}\n\\end{itemize}\n\\end{assumption}\n\n\n\\subsection{Convergence of SGD}\n\\begin{theorem}Let the problem satisfy Assumptions \\ref{assum:SGDconvergence}, and denote $e_t = \\|x_t - x^*\\|$,\nthen %we can prove by induction that \n\\begin{equation}\n\\mathbb{E}e_{t}^2 \\le \\frac{4M^2}{\\lambda^2 t},\n\\end{equation}\nif we take $\\eta_t = \\frac{1}{\\lambda t}$.\n\\end{theorem}\n\\begin{proof}\nThe $L^2$ error of SGD can be bounded as\n\\begin{equation}\n      \\label{equ:L2SGD}\n      \\begin{split}\n            \\mathbb{E} \\|x_{t+1} - x^*\\|^2 &\\le \\mathbb{E}\\| x_{t} - \\eta_t \\nabla f_{i_t}(x_t) - x^* \\|^2 \\\\\n            &= \\mathbb{E} \\|x_t - x^*\\|^2 \n            - 2 \\eta_t \\mathbb{E} (\\nabla f_{i_t}(x_t) \\cdot (x_t - x^*)) \n            + \\eta_t^2 \\mathbb{E} \\|\\nabla f_{i_t}(x_t)\\|^2 \\\\\n            & \\le \\mathbb{E} \\|x_t - x^*\\|^2 - 2 \\eta_t \\mathbb{E} (\\nabla f (x_t) \\cdot (x_t - x^*))\n            + \\eta_t^2 M^2 \\\\\n            & \\le \\mathbb{E} \\|x_t - x^*\\|^2 - 2 \\eta_t \\lambda \\mathbb{E} \\|x_t - x^*\\|^2 + \\eta_t^2 M^2 \\\\\n            & = (1 - 2\\eta_t\\lambda) \\mathbb{E} \\|x_t - x^*\\|^2 + \\eta_t^2 M^2\n      \\end{split}\n\\end{equation}\nHere we can take the expectation of $i_t$ independently from $\\{x_i, i = 1,\\ldots,t\\}$\nto obtain the third line of (\\ref{equ:L2SGD}) since $i_t$ is independent from \n$\\{x_i, i = 1,\\ldots,t\\}$ which is completely determined by $\\{i_j, j = 1,\\ldots,t - 1\\}$.\nThe fourth line of (\\ref{equ:L2SGD}) is obtained from \\eqref{eq:sgdlambdastrcvx}: adding this inequality to itself with the roles of $x$ and $y$ interchanged gives $(\\nabla f(x) - \\nabla f(y))^\\top (x-y) \\ge \\lambda \\|x-y\\|^2$, which we apply with $y = x^*$ and $\\nabla f(x^*) = 0$.\n\nNote that from Assumption \\ref{assum:SGDconvergence}, by adding \\eqref{eq:sgdlambdastrcvx} for the pairs $(x_1, x^*)$ and $(x^*, x_1)$ and using the gradient bound, %inequality (\\ref{equ:lambdastronglyconvex}) \n\\begin{equation}\n\\frac{\\lambda}{2} \\|x_{1} - x^*\\| \\le M,\n\\end{equation}\nwhich implies $ \\mathbb{E}e_{1}^2 \\le \\frac{4M^2}{\\lambda^2}$.\n\nIn the case of SGD, by the inductive hypothesis, \n\\begin{equation}\n      \\begin{split}\n            \\mathbb{E}e_{t+1}^2 & \\le 4 (1 - \\frac{2}{t}) \\frac{M^2}{\\lambda^2 t} + \\frac{M^2}{\\lambda^2 t^2} \\\\\n            & \\le \\frac{M^2}{\\lambda^2} \\frac{1}{t}(4 - \\frac{8}{t} + \\frac{1}{t}) \\\\\n            & \\le \\frac{M^2}{\\lambda^2} \\frac{4}{t+1}.\n      \\end{split}\n\\end{equation}\n\\end{proof}\n\n\n\n\\section{Incremental Gradient Descent and Practical Training Method in\n\tLarge-Scale Machine Learning Problems}\n\n\\subsection{Basic Gradient Descent Type Algorithms}\nNow we suppose we have a machine learning model\n\\begin{equation}\nf(x; w),\n\\end{equation}\nwhere $f$ can be any machine learning model, e.g. linear regression, SVM or a deep learning model. 
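For instance, for least-squares linear regression one may take $f(x; w) = w^{\\top} x$, while for a deep learning model $f(x; w)$ is the network output with weights $w$. 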
\n\nWe have data as \n$$\nz_i=\\{x_i, y_i\\}\\quad i = 1:N,\n$$\nand the loss between the label and prediction is:\n$$\nf_i(w) = l(f(x_i, w), y_i),\n$$\none example of $l$ is:\n$$\nl(f(x_i, w), y_i) = \\|f(x_i, w)- y_i\\|^2.\n$$\nWhat we want to solve is:\n\\begin{equation}\n\\mathop{\\min}_{{w} \\in \\mathbb{R}^{n}} \\frac{1}{N}\\sum_{i=1}^{N} f_i(w),\n\\end{equation}\nwith $F = \\frac{1}{N}\\sum_{i=1}^{N} f_i(w)$.\n\n\\subsubsection{Mini-Batch Method}\nNow we introduce the Mini-Batch algorithm, which is the most commonly used basic algorithm for the above problem:\n\\begin{algorithm}[H]\n\\caption{Mini-Batch}\n\\label{alg:mini-batch}\n{\\bf Input}: learning rate $\\eta_t$, batch size $m$, parameter initialization $w_0$, number of epochs $K$. \\\\\nFor each epoch $k = 1:K$, shuffle the data and split it into mini-batches $B_1, \\cdots, B_{\\frac{N}{m}}$. \\\\\nFor iteration $t = 1: K\\frac{N}{m}$, choose the mini-batch $B_{i_t}$ with\n$$\ni_t \\equiv t \\mod(\\frac{N}{m}),\n$$\nCompute the gradient on $B_{i_t}$:\n$$\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t})\n$$\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t - \\eta_t g_t.\n\\end{equation}\n\\end{algorithm}\n\nThis method is a little different from a pure SGD type method, as we still have\n$$\n\\mathbb{E} g_t = \\nabla F,\n$$\nbut the choices of consecutive mini-batch gradients within one epoch are dependent.\nStill, this method works very well with a suitable batch size. Recently, much research has shown that a small batch size may lead to a flat minimizer with better generalization error. Some research also suggests that one can use a small batch size at the beginning and increase the batch size slowly over the iterations.\n\n\\subsubsection{Momentum Mini-Batch}\nNext we introduce the momentum method to accelerate the gradient method.\n{\\bf Unless stated otherwise, the general setup for the following algorithms is the same as for Mini-Batch.}\n\n\\begin{algorithm}[H]\n\\caption{Momentum Mini-Batch}\n\\label{alg:mom}\n{\\bf Input}: momentum coefficient $\\alpha$ (often chosen as 0.5, 0.9 or 0.999) \\\\\nCompute the gradient on $B_{i_t}$:\n$$\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t}).\n$$\nCompute the momentum:\n\\begin{equation}\nv_t = \\alpha v_{t-1} - \\eta_t g_t \\quad (v_0 = 0).\n\\end{equation}\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + v_t.\n\\end{equation}\n\\end{algorithm}\n\n\\subsubsection{Nesterov Momentum Mini-Batch Method}\nWe may also use the Nesterov acceleration technique to get the Nesterov momentum Mini-Batch method.\n\\begin{algorithm}[H]\n\\caption{Nesterov Momentum Mini-Batch}\n\\label{alg:Nesterov}\n{\\bf Input}: momentum coefficient $\\alpha$\\\\\nCompute the gradient on $B_{i_t}$:\n$$\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t} + \\alpha v_{t-1}) \\quad (v_0 = 0).\n$$\nCompute the momentum:\n\\begin{equation}\nv_t = \\alpha v_{t-1} - \\eta_t g_t.\n\\end{equation}\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + v_t.\n\\end{equation}\n\\end{algorithm}\n\n\n\\subsection{Adaptive Learning Rate Mini-batch Based Method}\nThe delta-bar-delta algorithm (Jacobs, 1988) is an early heuristic approach to adapting individual learning rates for model parameters during training. The approach is based on a simple idea: if the partial derivative of the loss, with respect to a given model parameter, keeps the same sign, then the learning rate should increase. If the partial derivative with respect to that parameter changes sign, then the learning rate should decrease. 
Of course, this kind of rule can only be applied to full batch optimization. More recently, a number of incremental (or mini-batch-based) methods have been introduced that adapt the learning rates of model parameters.\n\n\\subsubsection{AdaGrad}\nThe AdaGrad algorithm individually adapts the learning rates of all model parameters by scaling them inversely proportional to the square root of the sum of all of their historical squared values (Duchi et al., 2011).\n\\begin{algorithm}[H]\n\\caption{AdaGrad}\n\\label{alg:AdaGrad}\n{\\bf Input}: small constant $\\delta$ (perhaps $10^{-7}$, for numerical stability)\\\\\nCompute the gradient on $B_{i_t}$:\n$$\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t}).\n$$\nAccumulate squared gradient:\n\\begin{equation}\nr_t = r_{t-1} + g_t \\otimes g_t \\quad (r_0 = 0),\n\\end{equation}\nwhere $\\otimes$ denotes element-wise multiplication. \\\\\nCompute update:\n\\begin{equation}\n\\Delta w_t = -\\frac{\\eta_t}{\\delta + \\sqrt{r_t}} \\otimes g_t,\n\\end{equation}\nwith division and square root applied element-wise. \\\\\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + \\Delta w_t.\n\\end{equation}\n\\end{algorithm}\n\n\\subsubsection{RMSProp}\nThe RMSProp algorithm (Hinton, 2012) modifies AdaGrad to perform better in the non-convex setting by changing the gradient accumulation into an exponentially weighted moving average. \n\n\\begin{algorithm}[H]\n\\caption{RMSProp}\n\\label{alg:RMSProp}\n{\\bf Input}: decay rate $\\rho$, small constant $\\delta$ (usually $10^{-6}$)\\\\\nCompute the gradient on $B_{i_t}$:\n\\begin{equation}\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t}).\n\\end{equation}\nAccumulate squared gradient:\n\\begin{equation}\nr_t = \\rho r_{t-1} + (1-\\rho)g_t \\otimes g_t \\quad (r_0 = 0),\n\\end{equation}\nCompute update:\n\\begin{equation}\n\\Delta w_t = -\\frac{\\eta_t}{\\sqrt{\\delta + {r_t}}} \\otimes g_t\n\\end{equation}\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + \\Delta w_t.\n\\end{equation}\n\\end{algorithm}\n\nAnd if we combine RMSProp with Nesterov momentum, we can get:\n\n\\begin{algorithm}[H]\n\\caption{RMSProp with Nesterov momentum}\n\\label{alg:RMSPropNesterov}\n{\\bf Input}: decay rate $\\rho$, momentum coefficient $\\alpha$\\\\\nCompute the gradient on $B_{i_t}$:\n\\begin{equation}\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t} + \\alpha v_{t-1}), \\quad (v_0 = 0).\n\\end{equation}\nAccumulate squared gradient:\n\\begin{equation}\nr_t = \\rho r_{t-1} + (1-\\rho)g_t \\otimes g_t 
\\quad (r_0 = 0),\n\\end{equation}\nCompute velocity update:\n\\begin{equation}\nv_t = \\alpha v_{t-1} -\\frac{\\eta_t}{\\sqrt{{r_t}}} \\otimes g_t\n\\end{equation}\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + v_t.\n\\end{equation}\n\\end{algorithm}\n\n\\subsubsection{Adam}\nAdam (Kingma and Ba, 2014) is yet another adaptive learning rate optimization algorithm; its name ``Adam'' derives from the phrase \\emph{Adaptive Moment Estimation}, and it adapts both the learning rate $\\eta$ and the subgradient $g$.\nIt uses moving averages of both $g$ and $g^2$ (here $g^2$ indicates the element-wise square $g \\otimes g$) to compute the first order moment vector\n\\begin{equation}\nm_t  =\\beta_1 m_{t-1} + (1-\\beta_1) g_t=(1-\\beta_1)\\sum_{i=1}^{t}\\beta_1^{t-i} g_i ,\n\\end{equation}\nand the second order moment vector\n\\begin{equation}\nv_t  =\\beta_2 v_{t-1} + (1-\\beta_2) g_t^2=(1-\\beta_2)\\sum_{i=1}^{t}\\beta_2^{t-i} g^2_i.\n\\end{equation}\nHere typically $\\beta_1 = 0.9$, $\\beta_2 = 0.999$.\n\nNote that\n\\begin{align}\n\\mathbb{E} \\left[ m_t \\right] & =\\mathbb{E} \\left[ g_t \\right] \\cdot (1-\\beta_1^t) +\\zeta\\\\\n\\mathbb{E} \\left[ v_t \\right] & =\\mathbb{E} \\left[ g^2_t \\right] \\cdot (1-\\beta_2^t) +\\zeta.\n\\end{align}\nHere $\\zeta=0$ if the true moment is stationary.\nSo, we use a bias-corrected version here\n\\begin{align}\n\\tilde{m}_t=\\frac{m_t}{1-\\beta_1^t} \\\\\n\\tilde{v}_t=\\frac{v_t}{1-\\beta_2^t}.\n\\end{align}\n\nThe iterate can be written as \n\\begin{equation}\\label{Adam}\nw_{t+1}=w_t-\\frac{\\eta}{\\sqrt{\\tilde{v}_t}+\\delta} \\tilde{m}_t.\n\\end{equation}\n\n\n\\begin{algorithm}[H]\n\\caption{Adam}\n\\label{alg:Adam}\n{\\bf Input}: small constant $\\delta$ (can be $10^{-8}$), decay rate for moment estimates $\\rho_1$ and $\\rho_2$ (suggested defaults: 0.9 and 0.999 respectively)\\\\\nCompute the gradient on $B_{i_t}$:\n\\begin{equation}\ng_t = \\nabla_{w} \\frac{1}{m} \\sum_{i \\in B_{i_t}} f_i(w_{t}).\n\\end{equation}\nUpdate biased first moment estimate:\n\\begin{equation}\nm_t = \\rho_1 m_{t-1} + (1-\\rho_1)g_t, \\quad (m_0 = 0).\n\\end{equation}\nUpdate biased second moment estimate:\n\\begin{equation}\nv_t = \\rho_2 v_{t-1} + (1-\\rho_2)g_t \\otimes g_t, \\quad (v_0 = 0).\n\\end{equation}\nCorrect bias in first moment:\n\\begin{equation}\n\\tilde m_t = \\frac{m_t}{1 - \\rho_1^{t}}.\n\\end{equation}\nCorrect bias in second moment:\n\\begin{equation}\n\\tilde v_t = \\frac{v_t}{1 - \\rho_2^{t}}.\n\\end{equation}\nCompute update:\n\\begin{equation}\n\\Delta w_t =  -\\frac{\\eta_t}{\\sqrt{\\tilde v_t} + \\delta} \\otimes \\tilde m_t\n\\end{equation}\nUpdate $w$:\n\\begin{equation}\nw_{t+1} = w_t + \\Delta w_t.\n\\end{equation}\n\\end{algorithm}\n\nNow we apply Adam to the following SPD linear system\n\\begin{equation}\n\tA u-b=0,\n\\end{equation}\nwhere $A \\in \\mathbb{R}^{n \\times n}$ is a \\emph{symmetric positive definite} matrix and $b \\in \\mathbb{R}^n$.\nThis is equivalent to solving the following minimization problem\n\\begin{equation}\n\t\\min_u\\ \\left\\{f(u):= \\frac{1}{2} u^T Au - b^T u\\right\\}.\n\\end{equation}\n\nAssume that we do not take any stochastic sampling into consideration; then\n\\begin{equation}\ng_k=A u_k-b.\n\\end{equation}\nCompute the first order moment vector\n\\begin{align}\nm_k & = \\beta_1 m_{k-1} +(1-\\beta_1) g_k\\\\\n&= \\beta_1 m_{k-1} + (1-\\beta_1) (A u_k - b)\n\\end{align}\nThe update can be written as\n\\begin{equation}\nu_{k+1}=u_k-\\eta P^{-1}_k R_k\n\\end{equation}\nwhere\n\\begin{align}\nR_k & =\\frac{1-\\beta_1}{1-\\beta_1^k} \\sum_{i=1}^{k} 
\\beta_1^{k-i} (A u_i -b)\\\\\n& = \\frac{(1-\\beta_1) \\beta_1^{(k-1)}}{1-\\beta_1^k} (Au_1 -b) + \\frac{(1-\\beta_1) \\beta_1^{(k-2)}}{1-\\beta_1^k} (Au_2 -b) +\\cdots \\\\\n& +\\frac{(1-\\beta_1) \\beta_1^{0}}{1-\\beta_1^k} (Au_k -b) \\\\\n& =c_1 (Au_1-b) + c_2(Au_2-b) + \\cdots + c_k (Au_k-b).\n\\end{align}\nHere\n\\begin{equation}\nc_i = \\frac{(1-\\beta_1) \\beta_1^{(k-i)}}{1-\\beta_1^k},\n\\end{equation}\nand the $c_i$ form an increasing geometric sequence in $i$. \nAnd we can easily draw the conclusion that\n\\begin{equation}\n\\sum_{i=1}^{k} c_i =1.\n\\end{equation}\nSo $R_k$ is a weighted average of the residuals $\\left\\{Au_1-b, Au_2-b, \\dots, Au_k-b\\right\\}$.\n\n$P_k$ is a diagonal matrix\n\\begin{align}\nP_k & =\\text{diag} \\left\\{\n\\sqrt{ \\frac{1-\\beta_2}{1-\\beta_2^k} \\sum_{i=1}^{k} \\beta_2^{k-i} (A u_i -b)^2 }+\\epsilon \n\\right\\}\\\\\n& = \\text{diag} \\left\\{ p_1, p_2, \\cdots, p_n \\right\\},\n\\end{align}\nwhere $(Au_i -b)^2$ denotes the element-wise square.\nEach $p_i^2$ is (up to the $\\epsilon$ shift) a weighted average of the corresponding entries of $(A u_i-b)^2$.\n\nTo simplify our analysis, we will use $A u_k - b$ instead of $R_k$, as well as\n\\begin{equation}\n\tP_k=\\text{diag} \\left\\{ |A u_k-b| \\right\\}.\n\\end{equation}\n\n\nAssume $u^*$ is the solution of $Au-b=0$.\nThen\n\\begin{equation}\nu^* = u^* - \\eta P_k^{-1} (A u^* -b),\n\\end{equation}\nand subtracting the update $u_{k+1}=u_k-\\eta P^{-1}_k (A u_k - b)$ gives\n\\begin{equation}\nu^* -u_{k+1} = (I- \\eta P_k^{-1} A) (u^* - u_k).\n\\end{equation}\nWe need to estimate an upper bound on $\\|I-\\eta P^{-1}_k A\\|$.\n\nPut another way, we want to find the relation between Adam and SDA.\n\n\\subsection{Choosing the Right Optimization Algorithm}\nSchaul et al. (2014) presented a valuable comparison of a large number of optimization algorithms across a wide range of learning tasks. While the results suggest that the family of algorithms with adaptive learning rates (represented by RMSProp and Adam) performed fairly robustly, no single best algorithm has emerged.\n\nCurrently, the most popular optimization algorithms actively in use include SGD, SGD with momentum, RMSProp, RMSProp with momentum, AdaDelta and Adam. The choice of which algorithm to use, at this point, seems to depend largely on the user's familiarity with the algorithm (for ease of hyperparameter tuning).\n", "meta": {"hexsha": "fbb97e7b926e27eaa6d74078e9a42f2804869ca3", "size": 19551, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/SGD.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/SGD.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/SGD.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8895348837, "max_line_length": 663, "alphanum_fraction": 0.6662574804, "num_tokens": 7336, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7185943925708562, "lm_q1q2_score": 0.5858214954899275}}
{"text": "\\subsection{Definitions}\n\\label{subsec:formalisations:groove_formalisation:definitions}\nThis section discusses some definitions specific to the GROOVE formalisation. The definitions need to be in place before the formalisations of the different GROOVE graphs are given.\n\nGROOVE internally uses a set of labels $Lab$ for each defined grammar. These labels are used by multiple graph types of GROOVE, including (but not limited to) type graphs and instance graphs.\n\n\\begin{defin}[Labels]\n\\label{defin:formalisations:groove_formalisation:definitions:labels}\n$Lab$ is the set of labels used by GROOVE graphs. It can be subdivided into three sets:\n\\begin{itemize}\n    \\item The set of type labels $Lab_t \\subseteq Lab$\n    \\item The set of edge labels $Lab_e \\subseteq Lab$\n    \\item The set of flag labels $Lab_f \\subseteq Lab$\n\\end{itemize}\n\nThe intersection of each of the sets $Lab_t$, $Lab_f$ and $Lab_e$ has to be empty:\n\\begin{equation*}\n    Lab_t \\cap Lab_f = \\emptyset \\land Lab_t \\cap Lab_e = \\emptyset \\land Lab_f \\cap Lab_e = \\emptyset\n\\end{equation*}\n\n\\isabellelref{Lab}{GROOVE.Type_Graph}\n\\end{defin}\n\nThe set $Lab_t$ will be used to denote the types of nodes in the graph, while the $Lab_e$ set will be used to distinguish between different edges. The $Lab_f$ set are labels for particular kind of edges which always have an identical source and target node. These are used as flags on nodes to indicate that a specific property holds for a node.\n\nAlthough the intersection of each of the sets $Lab_t$, $Lab_f$ and $Lab_e$ has to be empty, the sets do not necessarily form a partition of $Lab$, as one or more of the subsets can be empty. Furthermore, $Lab$ itself can be empty if the GROOVE grammar does not define any types, flags or edge labels.\n\nBesides $Lab$ that belongs to a grammar, GROOVE uses a set of reserved primitive type labels $Lab_{prim}$ that can never be part of the label set of a grammar.\n\n\\begin{defin}[Primitive type labels]\n\\label{defin:formalisations:groove_formalisation:definitions:primitive_type_labels}\nGROOVE has a set of reserved primitive type labels $Lab_{prim}$:\n\\begin{equation*}\n    Lab_{prim} = \\{ \\type{bool}, \\type{int}, \\type{real}, \\type{string} \\}\n\\end{equation*}\n\nIt should hold that $Lab \\cap Lab_{prim} = \\emptyset$ since the primitive type labels are reserved.\n\\end{defin}\n\nThe primitive type labels allow the use of primitive types as attributes and values. 
The label $\\type{bool}$ represents the type for boolean values $\\mathbb{B}$, $\\type{int}$ represents the type for integer values $\\mathbb{Z}$, $\\type{real}$ represents the type for real numbers $\\mathbb{R}$ and finally $\\type{string}$ represents the type for string values $\\mathbb{S}$.", "meta": {"hexsha": "e2537125ca59cbc584b8daa2e2f7f1e21d26f03f", "size": 2701, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation/01_definitions.tex", "max_stars_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_stars_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_stars_repo_licenses": ["AFL-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation/01_definitions.tex", "max_issues_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_issues_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_issues_repo_licenses": ["AFL-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/tex/03_formalisations/03_groove_formalisation/01_definitions.tex", "max_forks_repo_name": "RemcodM/thesis-ecore-groove-formalisation", "max_forks_repo_head_hexsha": "a0e860c4b60deb2f3798ae2ffc09f18a98cf42ca", "max_forks_repo_licenses": ["AFL-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.525, "max_line_length": 371, "alphanum_fraction": 0.7641614217, "num_tokens": 710, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324713956856, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.5858214924126517}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{examples}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% ============================================================================================\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   import cdblib\n   checkpoint_file = 'tests/semantic/output/example-09.json'\n   cdblib.create (checkpoint_file)\n   checkpoint = []\n\\end{cadabra}\n\\egroup\n\n\\clearpage\n\n% ============================================================================================\n\\section*{Example 9 The Gauss equation}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices(position=independent).\n\n   \\nabla{#}::Derivative.\n\n   K_{a b}::Symmetric.\n   g^{a}_{b}::KroneckerDelta.\n\n   # define the projection operator\n\n   hab:=h^{a}_{b} -> g^{a}_{b} - n^{a} n_{b}.\n\n   # 3-covariant derivative obtained by projection on 4-covariant derivative\n\n   vpq:=v_{p q} -> h^{a}_{p} h^{b}_{q} \\nabla_{b}{v_{a}}.\n\n   # compute 3-curvature by commutation of covariant derivatives\n\n   vpqr:= h^{a}_{p} h^{b}_{q} h^{c}_{r} ( \\nabla_{c}{v_{a b}} - \\nabla_{b}{v_{a c}} ).\n\n   substitute (vpq,hab)\n   substitute (vpqr,vpq)\n\n   distribute   (vpqr)\n   product_rule (vpqr)\n   distribute   (vpqr)\n   eliminate_kronecker (vpqr)\n\n   # standard substitutions\n\n   substitute (vpqr,$h^{a}_{b} n^{b} -> 0$)\n   substitute (vpqr,$h^{a}_{b} n_{a} -> 0$)\n   substitute (vpqr,$\\nabla_{a}{g^{b}_{c}} -> 0$)\n   substitute (vpqr,$n^{a} \\nabla_{b}{v_{a}} -> -v_{a} \\nabla_{b}{n^{a}}$)\n   substitute (vpqr,$v_{a} \\nabla_{b}{n^{a}} -> v_{p} h^{p}_{a}\\nabla_{b}{n^{a}}$)\n   substitute (vpqr,$h^{p}_{a} h^{q}_{b} \\nabla_{p}{n_{q}} -> K_{a b}$)\n   substitute (vpqr,$h^{p}_{a} h^{q}_{b} \\nabla_{p}{n^{b}} -> K_{a}^{q}$)  # cdb(ex-09.095,vpqr)\n\n   # tidy up\n\n   {v_{a},h^{a}_{b},K_{a}^{b},K_{a b},R^{a}_{b c d},\\nabla_{a}{v_{b}}}::SortOrder.\n\n   sort_product   (vpqr)                                     # cdb(ex-09.096,vpqr)\n   rename_dummies (vpqr)                                     # cdb(ex-09.097,vpqr)\n   canonicalise   (vpqr)                                     # cdb(ex-09.098,vpqr)\n   factor_out     (vpqr,$h^{a?}_{b?}$)                       # cdb(ex-09.099,vpqr)\n   factor_out     (vpqr,$v_{a?}$)                            # cdb(ex-09.101,vpqr)\n\n   checkpoint.append (vpqr)\n\\end{cadabra}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{( D_{r}D_{q} - D_{q}D_{r} ) v_{p} = \\Cdb*[\\hskip 1.0cm\\hfill]{ex-09.095}\n                                             = \\Cdb*[\\hskip 1.0cm\\hfill]{ex-09.096}\n                                             = \\Cdb*[\\hskip 1.0cm\\hfill]{ex-09.097}\n                                             = \\Cdb*{ex-09.098}\n                                             = \\Cdb*{ex-09.099}\n                                             = \\Cdb*{ex-09.101}}\n\\end{dgroup*}\n\n\\clearpage\n\n\\begin{cadabra}\n   R{#}::LaTeXForm(\"{{\\strut}^g R}\").\n\n   gRabcd := \\nabla_{c}{\\nabla_{b}{v_{a}}}\n            -\\nabla_{b}{\\nabla_{c}{v_{a}}} -> R^{d}_{a b c} v_{d}.\n\n   substitute     (vpqr,gRabcd)                              # cdb(ex-09.102,vpqr)\n   distribute     (vpqr)                                     # cdb(ex-09.103,vpqr)\n   substitute     (vpqr,$v_{a} -> h^{b}_{a} v_{b}$)          # cdb(ex-09.104,vpqr)\n   substitute     (vpqr,$h^{b}_{a} K_{c}^{a} -> K_{c}^{b}$)  # cdb(ex-09.105,vpqr)\n   sort_product   (vpqr)                                     # cdb(ex-09.106,vpqr)\n  
 rename_dummies (vpqr)                                     # cdb(ex-09.107,vpqr)\n   canonicalise   (vpqr)                                     # cdb(ex-09.108,vpqr)\n   factor_out     (vpqr,$v_{a?}$)                            # cdb(ex-09.109,vpqr)\n   substitute     (vpqr,$v_{a}->1$)                          # cdb(ex-09.110,vpqr)\n   sort_product   (vpqr)                                     # cdb(ex-09.111,vpqr)\n\n   checkpoint.append (vpqr)\n\\end{cadabra}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{( D_{r}D_{q} - D_{q}D_{r} ) v_{p} = \\Cdb*{ex-09.101}\n                                             = \\Cdb*{ex-09.102}\n                                             = \\Cdb*{ex-09.103}\n                                             = \\Cdb*{ex-09.104}\n                                             = \\Cdb*{ex-09.105}\n                                             = \\Cdb*{ex-09.106}\n                                             = \\Cdb*{ex-09.107}\n                                             = \\Cdb*{ex-09.108}\n                                             = \\Cdb*{ex-09.109}}\n\\end{dgroup*}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{{\\strut}^hR^{a}{}_{pqr} = \\Cdb*{ex-09.110}}\n   \\Dmath*{{\\strut}^hR^{a}{}_{pqr} = \\Cdb*{ex-09.111}}\n\\end{dgroup*}\n\n\\clearpage\n\n% ============================================================================================\n% export to json format\n\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   for i in range( len(checkpoint) ):\n      cdblib.put ('check{:03d}'.format(i),checkpoint[i],checkpoint_file)\n\\end{cadabra}\n\\egroup\n\n\\end{document}\n", "meta": {"hexsha": "49641b7d689f86388a3f0621cc1958c1a2072a75", "size": 4961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/example-09.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/example-09.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/example-09.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 36.2116788321, "max_line_length": 96, "alphanum_fraction": 0.4370086676, "num_tokens": 1652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.7185943805178138, "lm_q1q2_score": 0.5858214792122424}}
{"text": "% !TEX root = hott_intro.tex\n\n\\section{Equivalences}\n\n\\subsection{Homotopies}\nIn homotopy type theory, a homotopy is just a pointwise equality between two functions $f$ and $g$.\n\n\\begin{defn}\nLet $f,g:\\prd{x:A}P(x)$ be two dependent functions. The type of \\define{homotopies}\\index{homotopy|textbf} from $f$ to $g$ is defined as\n\\begin{equation*}\nf\\htpy g \\defeq \\prd{x:A} f(x)=g(x).\n\\end{equation*}\n\\end{defn}\n\nSince we formulated homotopies using dependent functions, we may also consider homotopies \\emph{between}\\index{homotopy!iterated} homotopies, and further homotopies between those higher homotopies. \nExplicitly, if $H,K:f\\htpy g$, then the type $H\\htpy K$ of homotopies is just the type\n\\begin{equation*}\n\\prd{x:A} H(x)=K(x).\n\\end{equation*}\n\nIn the following definition we define the groupoid-like structure of homotopies. Note that we implement the groupoid-laws as \\emph{homotopies} rather than as identifications.\n\n\\begin{defn}\\label{defn:htpy_groupoid}\\index{groupoid laws!of homotopies|textbf}\nFor any dependent type $B:A\\to\\type$ there are operations\n\\begin{align*}\n& \\mathsf{htpy\\usc{}refl} & & : \\prd{f:\\prd{x:A}B(x)}f\\htpy f \\\\\n& \\mathsf{htpy\\usc{}inv} & & : \\prd*{f,g:\\prd{x:A}B(x)} (f\\htpy g)\\to(g\\htpy f) \\\\\n& \\mathsf{htpy\\usc{}concat} & & : \\prd*{f,g,h:\\prd{x:A}B(x)} (f\\htpy g)\\to ((g\\htpy h)\\to (f\\htpy h)).\n\\end{align*}\nWe will write $H^{-1}$ for $\\mathsf{htpy\\usc{}inv}(H)$, and $\\ct{H}{K}$ for $\\mathsf{htpy\\usc{}concat}(H,K)$. \n\nFurthermore, we define\n\\begin{align*}\n& \\mathsf{htpy\\usc{}assoc}(H,K,L) & & : \\ct{(\\ct{H}{K})}{L}\\htpy\\ct{H}{(\\ct{K}{L})} \\\\\n& \\mathsf{htpy\\usc{}left\\usc{}unit}(H) & & : \\ct{\\mathsf{htpy\\usc{}refl}_f}{H}\\htpy H \\\\\n& \\mathsf{htpy\\usc{}right\\usc{}unit}(H) & & : \\ct{H}{\\mathsf{htpy\\usc{}refl}_g}\\htpy H \\\\\n& \\mathsf{htpy\\usc{}left\\usc{}inv}(H) & & : \\ct{H^{-1}}{H} \\htpy \\mathsf{htpy\\usc{}refl}_g \\\\\n& \\mathsf{htpy\\usc{}right\\usc{}inv}(H) & & : \\ct{H}{H^{-1}} \\htpy \\mathsf{htpy\\usc{}refl}_f\n\\end{align*}\nfor any $H:f\\htpy g$, $K:g\\htpy h$ and $L:h\\htpy i$, where $f,g,h,i:\\prd{x:A}B(x)$.\n\\end{defn}\n\n\\begin{constr}\nWe define\n\\begin{align*}\n\\mathsf{htpy\\usc{}refl}(f) & \\defeq \\lam{x} \\refl{f(x)} \\\\\n\\mathsf{htpy\\usc{}inv}(H) & \\defeq \\lam{x} H(x)^{-1} \\\\\n\\mathsf{htpy\\usc{}concat}(H,K) & \\defeq \\lam{x}\\ct{H(x)}{K(x)},\n\\end{align*}\nwhere $H:f\\htpy g$ and $K:g\\htpy h$ are homotopies. Furthermore, we define\n\\begin{align*}\n\\mathsf{htpy\\usc{}assoc}(H,K,L) & \\defeq \\lam{x}\\mathsf{assoc}(H(x),K(x),L(x)) \\\\\n\\mathsf{htpy\\usc{}left\\usc{}unit}(H) & \\defeq \\lam{x}\\mathsf{left\\usc{}unit}(H(x)) \\\\\n\\mathsf{htpy\\usc{}right\\usc{}unit}(H) & \\defeq \\lam{x}\\mathsf{right\\usc{}unit}(H(x)) \\\\\n\\mathsf{htpy\\usc{}left\\usc{}inv}(H) & \\defeq \\lam{x}\\mathsf{left\\usc{}inv}(H(x)) \\\\\n\\mathsf{htpy\\usc{}right\\usc{}inv}(H) & \\defeq \\lam{x}\\mathsf{right\\usc{}inv}(H(x)).\\qedhere\n\\end{align*}\n\\end{constr}\n\n\nApart from the groupoid operations and their laws, we will occasionally need \\emph{whiskering} operations.\n\n\\begin{defn}\nWe define the following \\define{whiskering}\\index{homotopy!whiskering operations|textbf}\\index{whiskering operations!of homotopies|textbf} operations on homotopies:\n\\begin{enumerate}\n\\item Suppose $H:f\\htpy g$ for two functions $f,g:A\\to B$, and let $h:B\\to C$. 
We define\n\\begin{equation*}\nh\\cdot H\\defeq \\lam{x}\\ap{h}{H(x)}:h\\circ f\\htpy h\\circ g.\n\\end{equation*}\n\\item Suppose $f:A\\to B$ and $H:g\\htpy h$ for two functions $g,h:B\\to C$. We define\n\\begin{equation*}\nH\\cdot f\\defeq\\lam{x}H(f(x)):g\\circ f\\htpy h\\circ f.\n\\end{equation*}\n\\end{enumerate}\n\\end{defn}\n\nWe also use homotopies to express the commutativity of diagrams. For example, we say that a triangle\n\\begin{equation*}\n  \\begin{tikzcd}[column sep=tiny]\n    A \\arrow[rr,\"h\"] \\arrow[dr,swap,\"f\"] & & B \\arrow[dl,\"g\"] \\\\\n    & X\n  \\end{tikzcd}\n\\end{equation*}\ncommutes if it comes equipped with a homotopy $H:f\\htpy g\\circ h$, and we say that a square\n\\begin{equation*}\n  \\begin{tikzcd}\n    A \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & A' \\arrow[d,\"{f'}\"] \\\\\n    B \\arrow[r,swap,\"h\"] & B'\n  \\end{tikzcd}\n\\end{equation*}\ncommutes if it comes equipped with a homotopy $H:h\\circ f\\htpy f'\\circ g$.\n\n\\subsection{Bi-invertible maps}\n\\begin{defn}\nLet $f:A\\to B$ be a function. We say that $f$ has a \\define{section}\\index{section!of a map|textbf} if there is a term of type\\index{sec(f)@{$\\mathsf{sec}(f)$}|textbf}\n\\begin{equation*}\n\\mathsf{sec}(f) \\defeq \\sm{g:B\\to A} f\\circ g\\htpy \\idfunc[B].\n\\end{equation*}\nDually, we say that $f$ has a \\define{retraction}\\index{retraction} if there is a term of type\\index{retr(f)@{$\\mathsf{retr}(f)$}|textbf}\n\\begin{equation*}\n\\mathsf{retr}(f) \\defeq \\sm{h:B\\to A} h\\circ f\\htpy \\idfunc[A].\n\\end{equation*}\nIf a map $f:A \\to B$ has a retraction, we also say that $A$ is a \\define{retract}\\index{retract!of a type} of $B$.\nWe say that a function $f:A\\to B$ is an \\define{equivalence}\\index{equivalence|textbf}\\index{bi-invertible map|see {equivalence}} if it has both a section and a retraction, i.e., if it comes equipped with a term of type\\index{is_equiv@{$\\isequiv$}|textbf}\n\\begin{equation*}\n\\isequiv(f)\\defeq\\mathsf{sec}(f)\\times\\mathsf{retr}(f).\n\\end{equation*}\nWe will write $\\eqv{A}{B}$\\index{equiv@{$\\eqv{A}{B}$}|textbf} for the type $\\sm{f:A\\to B}\\isequiv(f)$.\n\\end{defn}\n\n\\begin{rmk}\nAn equivalence, as we defined it here, can be thought of as a \\emph{bi-invertible} map, since it comes equipped with a separate left and right inverse. Explicitly, if $f$ is an equivalence, then there are\n\\begin{align*}\ng & : B\\to A & h & : B\\to A \\\\\nG & : f\\circ g \\htpy \\idfunc[B] & H & : h\\circ f \\htpy \\idfunc[A].\n\\end{align*}\nClearly, if $f$ is \\define{invertible}\\index{invertible map} in the sense that it comes equipped with a function $g:B\\to A$ such that $f\\circ g\\htpy\\idfunc[B]$ and $g\\circ f\\htpy\\idfunc[A]$, then $f$ is an equivalence. We write\\index{is_invertible@{$\\mathsf{is\\usc{}invertible}$}|textbf}\n\\begin{equation*}\n\\mathsf{has\\usc{}inverse}(f)\\defeq\\sm{g:B\\to A} (f\\circ g\\htpy \\idfunc[B])\\times (g\\circ f\\htpy\\idfunc[A]).\n\\end{equation*}\n\\end{rmk}\n\n\\begin{defn}\\label{defn:inv_equiv}\nAny equivalence can be given the structure of an invertible map.\\index{equivalence!invertibility of}\n\\end{defn}\n\n\\begin{constr}\nFirst we construct for any equivalence $f$ with right inverse $g$ and left inverse $h$ a homotopy $K:g\\htpy h$. 
For any $y:B$, we have \n\\begin{equation*}\n\\begin{tikzcd}[column sep=huge]\ng(y) \\arrow[r,equals,\"H(g(y))^{-1}\"] & hfg(y) \\arrow[r,equals,\"\\ap{h}{G(y)}\"] & h(y).\n\\end{tikzcd}\n\\end{equation*} \nTherefore we define a homotopy $K:g\\htpy h$ by $K\\defeq \\ct{(H\\cdot g)^{-1}}{h\\cdot G}$.\nUsing the homotopy $K$ we are able to show that $g$ is also a left inverse of $f$. For $x:A$ we have the identification\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\ngf(x) \\arrow[r,equals,\"K(f(x))\"] & hf(x) \\arrow[r,equals,\"H(x)\"] & x.\n\\end{tikzcd}\\qedhere\n\\end{equation*}\n\\end{constr}\n\n\\begin{cor}\nThe inverse of an equivalence is again an equivalence.\n\\end{cor}\n\n\\begin{proof}\nLet $f:A\\to B$ be an equivalence. By \\cref{defn:inv_equiv} it follows that the section of $f$ is also a retraction. Therefore it follows that the section is itself an invertible map, with inverse $f$. Hence it is an equivalence.\n\\end{proof}\n\n\\begin{thm}\\label{thm:id_equiv}\nFor any type $A$, the identity function $\\idfunc[A]$ is an equivalence.\\index{identity function!is an equivalence|textit}\n\\end{thm}\n\n\\begin{proof}\nThe identity function is trivially its own section and its own retraction.\n\\end{proof}\n\n\\begin{eg}\n  For any type $C(x,y)$ indexed by $x:A$ and $y:B$, the function\n\\begin{equation*}\n\\sigma:\\Big(\\prd{x:A}{y:B}C(x,y)\\Big)\\to\\Big(\\prd{y:B}{x:A}C(x,y)\\Big)\n\\end{equation*}\nthat swaps the order of the arguments $x$ and $y$ is an equivalence by \\cref{ex:swap}.\\index{swap function!is an equivalence|textit}\n\\end{eg}\n\n\\subsection{The identity type of a \\texorpdfstring{$\\Sigma$-}{dependent pair }type}\n\nIn this section we characterize the identity type of a $\\Sigma$-type as a $\\Sigma$-type of identity types. In this course we will be characterizing the identity types of many types, so we will follow the general outline of how such a characterization goes:\n\\begin{enumerate}\n\\item First we define a binary relation $R:A\\to A\\to \\UU$ on the type $A$ that we are interested in. This binary relation is intended to be equivalent to its identity type.\n\\item Then we will show that this binary relation is reflexive, by constructing a term of type\n  \\begin{equation*}\n    \\prd{x:A}R(x,x)\n  \\end{equation*}\n\\item Using the reflexivity we will show that there is a canonical map\n  \\begin{equation*}\n    (x=y)\\to R(x,y)\n  \\end{equation*}\n  for every $x,y:A$. This map is just constructed by path induction, using the reflexivity of $R$.\n\\item Finally, it has to be shown that the map\n  \\begin{equation*}\n    (x=y)\\to R(x,y)\n  \\end{equation*}\n  is an equivalence for each $x,y:A$. \n\\end{enumerate}\nThe last step is usually the most difficult, and we will refine our methods for this step in \\cref{chap:fundamental}, where we establish the Fundamental Theorem of Identity Types.\n\nIn this section we consider a type family $B$ over $A$. Given two pairs\n\\begin{equation*}\n  (x,y),(x',y'):\\sm{x:A}B(x),\n\\end{equation*}\nif we have a path $\\alpha:x=x'$ then we can compare $y:B(x)$ to $y':B(x')$ by first transporting $y$ along $\\alpha$, i.e., we consider the identity type\n\\begin{equation*}\n  \\mathsf{tr}_B(\\alpha,y)=y'.\n\\end{equation*}\nThus it makes sense to think of $(x,y)$ to be identical to $(x',y')$ if there is an identification $\\alpha:x=x'$ and an identification $\\beta:\\mathsf{tr}_B(\\alpha,y)=y'$. 
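For instance, when $\\alpha\\jdeq\\refl{x}$ the transport computes away judgmentally, i.e.\n\\begin{equation*}\n  \\mathsf{tr}_B(\\refl{x},y)\\jdeq y,\n\\end{equation*}\nso in this case comparing the second components is just an ordinary identification $\\beta:y=y'$ in $B(x)$. 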
In the following definition we turn this idea into a binary relation on the $\\Sigma$-type.\n\n\\begin{defn}\n  We will define a relation\n  \\begin{equation*}\n    \\mathsf{Eq}_{\\Sigma} : \\Big(\\sm{x:A}B(x)\\Big)\\to\\Big(\\sm{x:A}B(x)\\Big)\\to\\UU\n  \\end{equation*}\n  by defining\n  \\begin{equation*}\n    \\mathsf{Eq}_{\\Sigma}(s,t)\\defeq\\sm{\\alpha:\\proj 1(s)=\\proj 1(t)}\\mathsf{tr}_B(\\alpha,\\proj 2(s))=\\proj 2 (t).\n  \\end{equation*}\n\\end{defn}\n\n\\begin{lem}\n  The relation $\\mathsf{Eq}_{\\Sigma}$ is reflexive, i.e., there is a term\n  \\begin{equation*}\n    \\mathsf{reflexive\\usc{}Eq}_{\\Sigma}:\\prd{s:\\sm{x:A}B(x)}\\mathsf{Eq}_{\\Sigma}(s,s).\n  \\end{equation*}\n\\end{lem}\n\n\\begin{constr}\n  This term is constructed by $\\Sigma$-induction on $s:\\sm{x:A}B(x)$. Thus, it suffices to construct a term of type\n  \\begin{equation*}\n    \\prd{x:A}{y:B(x)}\\sm{\\alpha:x=x}\\mathsf{tr}_B(\\alpha,y)=y.\n  \\end{equation*}\n  Here we take $\\lam{x}{y}(\\refl{x},\\refl{y})$.\n\\end{constr}\n\n\\begin{defn}\n  Consider a type family $B$ over $A$. Then for any $s,t:\\sm{x:A}B(x)$ we define a map\\index{pair_eq@{$\\mathsf{pair\\usc{}eq}$}|textbf}\n  \\begin{equation*}\n    \\mathsf{pair\\usc{}eq}: (s=t)\\to \\mathsf{Eq}_\\Sigma(s,t)\n  \\end{equation*}\n  by path induction, taking $\\mathsf{pair\\usc{}eq}(\\refl{s})\\defeq\\mathsf{reflexive\\usc{}Eq}_\\Sigma(s)$.\n\\end{defn}\n\n\\begin{thm}\\label{thm:eq_sigma}\n  Let $B$ be a type family over $A$. Then the map\n  \\begin{equation*}\n    \\mathsf{pair\\usc{}eq}: (s=t)\\to \\mathsf{Eq}_\\Sigma(s,t)\n  \\end{equation*}\n  is an equivalence for every $s,t:\\sm{x:A}B(x)$.\\index{Sigma type@{$\\Sigma$-type}!identity types of|textit}\\index{identity type!of a Sigma-type@{of a $\\Sigma$-type}|textit}\n\\end{thm}\n\n\\begin{proof}\nThe maps in the converse direction\\index{eq_pair@{$\\mathsf{eq\\usc{}pair}$}}\n\\begin{equation*}\n\\mathsf{eq\\usc{}pair} : \\mathsf{Eq}_\\Sigma(s,t)\\to(\\id{s}{t})\n\\end{equation*}\nare defined by repeated $\\Sigma$-induction. By $\\Sigma$-induction on $s$ and $t$  we see that it suffices to define a map\n\\begin{equation*}\n\\mathsf{eq\\usc{}pair} : \\Big(\\sm{p:x=x'}\\id{\\mathsf{tr}_B(p,y)}{y'}\\Big)\\to(\\id{(x,y)}{(x',y')}).\n\\end{equation*}\nA map of this type is again defined by $\\Sigma$-induction. Thus it suffices to define a dependent function of type\n\\begin{equation*}\n\\prd{p:x=x'} (\\id{\\mathsf{tr}_B(p,y)}{y'}) \\to (\\id{(x,y)}{(x',y')}).\n\\end{equation*}\nSuch a dependent function is defined by double path induction by sending $\\pairr{\\refl{x},\\refl{y}}$ to $\\refl{\\pairr{x,y}}$. This completes the definition of the function $\\mathsf{eq\\usc{}pair}$.\n\nNext, we must show that $\\mathsf{eq\\usc{}pair}$ is a section of $\\mathsf{pair\\usc{}eq}$. In other words, we must construct an identification\n\\begin{equation*}\n\\mathsf{pair\\usc{}eq}(\\mathsf{eq\\usc{}pair}(\\alpha,\\beta))=\\pairr{\\alpha,\\beta}\n\\end{equation*}\nfor each $\\pairr{\\alpha,\\beta}:\\sm{\\alpha:x=x'}\\id{\\mathsf{tr}_B(\\alpha,y)}{y'}$. We proceed by path induction on $\\alpha$, followed by path induction on $\\beta$. 
Then our goal becomes to construct a term of type\n\\begin{equation*}\n\\mathsf{pair\\usc{}eq}(\\mathsf{eq\\usc{}pair}\\pairr{\\refl{x},\\refl{y}})=\\pairr{\\refl{x},\\refl{y}}\n\\end{equation*}\nBy the definition of $\\mathsf{eq\\usc{}pair}$ we have $\\mathsf{eq\\usc{}pair}\\pairr{\\refl{x},\\refl{y}}\\jdeq \\refl{\\pairr{x,y}}$, and by the definition of $\\mathsf{pair\\usc{}eq}$ we have $\\mathsf{pair\\usc{}eq}(\\refl{\\pairr{x,y}})\\jdeq\\pairr{\\refl{x},\\refl{y}}$. Thus we may take $\\refl{\\pairr{\\refl{x},\\refl{y}}}$ to complete the construction of the homotopy $\\mathsf{pair\\usc{}eq}\\circ\\mathsf{eq\\usc{}pair}\\htpy\\idfunc$.\n\nTo complete the proof, we must show that $\\mathsf{eq\\usc{}pair}$ is a retraction of $\\mathsf{pair\\usc{}eq}$. In other words, we must construct an identification\n\\begin{equation*}\n\\mathsf{eq\\usc{}pair}(\\mathsf{pair\\usc{}eq}(p))=p\n\\end{equation*}\nfor each $p:s=t$. We proceed by path induction on $p:s=t$, so it suffices to construct an identification \n\\begin{equation*}\n\\mathsf{eq\\usc{}pair}\\pairr{\\refl{\\proj 1(s)},\\refl{\\proj 2(s)}}=\\refl{s}.\n\\end{equation*}\nNow we proceed by $\\Sigma$-induction on $s:\\sm{x:A}B(x)$, so it suffices to construct an identification\n\\begin{equation*}\n\\mathsf{eq\\usc{}pair}\\pairr{\\refl{x},\\refl{y}}=\\refl{(x,y)}.\n\\end{equation*}\nSince $\\mathsf{eq\\usc{}pair}\\pairr{\\refl{x},\\refl{y}}$ computes to $\\refl{(x,y)}$, we may simply take $\\refl{\\refl{(x,y)}}$.\n\\end{proof}\n\n\\begin{exercises}\n%  \\item Show that for any term $a:A$ the functions\n%    \\begin{align*}\n%      \\ind{\\unit}(a) & : \\unit \\to A \\\\\n%      \\mathsf{const}_a & : \\unit \\to A\n%    \\end{align*}\n%    are homotopic.\n%  \\item Let $A$ and $B$ be types, and consider the constant map $\\mathsf{const}_b:A\\to B$ for some $b:B$. Construct a homotopy\n%    \\begin{equation*}\n%      \\mathsf{ap}_{\\mathsf{const}_b}(x,y)\\htpy \\mathsf{const}_{\\refl{b}}\n%    \\end{equation*}\n%    for any $x,y:A$.\n\\item \\label{ex:equiv_grpd_ops}Show that the functions\n  \\begin{align*}\n    \\mathsf{inv} & :(\\id{x}{y})\\to(\\id{y}{x}) \\\\\n    \\mathsf{concat}(p) & : (\\id{y}{z})\\to(\\id{x}{z}) \\\\\n    \\mathsf{concat'}(q) & : (\\id{x}{y}) \\to (\\id{x}{z}) \\\\\n    \\mathsf{tr}_B(p) & :B(x)\\to B(y)\n  \\end{align*}\n  are equivalences, where $\\mathsf{concat'}(q,p)\\defeq \\ct{p}{q}$\\index{concat'@{$\\mathsf{concat'}$}}. Give their inverses explicitly.\n\\item Show that the maps\n  \\begin{align*}\n    \\inl & : X \\to X+\\emptyt &     \\proj 1 & : \\emptyt \\times X \\to \\emptyt \\\\\n    \\inr & : X \\to \\emptyt+X &    \\proj 2 & : X \\times \\emptyt \\to \\emptyt\n  \\end{align*}\n  are equivalences.\n\\item\n  \\begin{subexenum}\n  \\item \\label{ex:htpy_equiv}\\index{equivalence!homotopic maps} Consider two functions $f,g:A\\to B$ and a homotopy $H:f\\htpy g$. 
Then\n    \\begin{equation*}\n      \\isequiv(f)\\leftrightarrow\\isequiv(g).\n    \\end{equation*}\n  \\item Show that for any two homotopic equivalences $e,e':\\eqv{A}{B}$, their inverses are also homotopic.\n  \\end{subexenum}\n\\item \\label{ex:3_for_2}\\index{equivalence!three@{3-for-2 property}}\\index{3-for-2 property!of equivalences}\n  Consider a commuting triangle\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=tiny]\n      A \\arrow[rr,\"h\"] \\arrow[dr,swap,\"f\"] & & B \\arrow[dl,\"g\"] \\\\\n      & X.\n    \\end{tikzcd}\n  \\end{equation*}\n  with $H:f\\htpy g\\circ h$.\n  \\begin{subexenum}\n  \\item Suppose that the map $h$ has a section $s:B \\to A$. Show that the triangle\n    \\begin{equation*}\n      \\begin{tikzcd}[column sep=tiny]\n        B \\arrow[rr,\"s\"] \\arrow[dr,swap,\"g\"] & & A \\arrow[dl,\"f\"] \\\\\n        & X.\n      \\end{tikzcd}\n    \\end{equation*}\n    commutes, and that $f$ has a section if and only if $g$ has a section.\n  \\item Suppose that the map $g$ has a retraction $r:X\\to B$. Show that the triangle\n    \\begin{equation*}\n      \\begin{tikzcd}[column sep=tiny]\n        A \\arrow[rr,\"f\"] \\arrow[dr,swap,\"h\"] & & X \\arrow[dl,\"r\"] \\\\\n        & B.\n      \\end{tikzcd}\n    \\end{equation*}\n    commutes, and that $f$ has a retraction if and only if $h$ has a retraction.\n  \\item (The \\define{3-for-2 property} for equivalences.) Show that if any two of the functions\n    \\begin{equation*}\n      f,\\qquad g,\\qquad h\n    \\end{equation*}\n    are equivalences, then so is the third.\n  \\end{subexenum}\n\\item \\label{ex:neg_equiv} \n  \\begin{subexenum}\n  \\item Define the negation function on the booleans, and show that it is an equivalence.\\index{negation function!is an equivalence}\n  \\item Use the observational equality on the booleans, defined in \\cref{ex:obs_bool}, to show that $\\btrue\\neq\\bfalse$.\n  \\item Show that for any $b:\\bool$, the constant function $\\mathsf{const}_b$ is not an equivalence.\n  \\end{subexenum}\n\\item \\label{ex:succ_equiv} Show that the successor function on the integers is an equivalence.\\index{successor function!on Z@{on $\\Z$}!is an equivalence}\n\\item \\label{ex:comm_prod}Construct a equivalences $\\eqv{A+B}{B+A}$ and $\\eqv{A\\times B}{B\\times A}$.\\index{coproduct!is symmetric}\n\\item \\label{ex:retr_id} Consider a section-retraction pair\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A \\arrow[r,\"i\"] & B \\arrow[r,\"r\"] & A,\n    \\end{tikzcd}\n  \\end{equation*}\n  with $H:r\\circ i\\htpy \\idfunc$. Show that $\\id{x}{y}$ is a retract of $\\id{i(x)}{i(y)}$.\\index{retract!identity types of}\n\\item \\label{ex:sigma_assoc}\\index{Sigma type@{$\\Sigma$-type}!associativity of}Let $B$ be a family of types over $A$, and let $C$ be a family of types indexed by $x:A,y:B(x)$. Construct an equivalence\n  \\begin{equation*}\n    \\Sigma\\mathsf{\\usc{}assoc}:\\eqv{\\Big(\\sm{p:\\sm{x:A}B(x)}C(\\proj 1(p),\\proj 2(p))\\Big)}{\\Big(\\sm{x:A}\\sm{y:B(x)}C(x,y)\\Big)}.\n  \\end{equation*}\n\\item \\label{ex:sigma_swap}Let $A$ and $B$ be types, and let $C$ be a family over $x:A,y:B$. Construct an equivalence\n  \\begin{equation*}\n    \\Sigma\\mathsf{\\usc{}swap}:\\eqv{\\Big(\\sm{x:A}{y:B}C(x,y)\\Big)}{\\Big(\\sm{y:B}{x:A}C(x,y)\\Big)}.\n  \\end{equation*}\n  %\\item \\label{ex:sigma_base_equiv}Consider an equivalence $e:A'\\eqv A$ and a type family $B$ over $A$. 
Show that the map\n  %\\begin{equation*}\n  %\\lam{(x',y)}(e(x'),y) : \\Big(\\sm{x':A'}B(e(x'))\\Big)\\to\\Big(\\sm{x:A}B(x)\\Big)\n  %\\end{equation*}\n  %is an equivalence.\n\\item \\label{ex:int_group_laws}\\index{Z@{$\\Z$}!group laws} In this exercise we will show that the laws for abelian groups hold for addition on the integers. Note: these are obvious facts, but the proof terms that show \\emph{how} the group laws hold are nevertheless fairly involved. This exercise is perfect for a formalization project. \n  \\begin{subexenum}\n  \\item Show that addition satisfies the left and right unit laws, i.e., construct terms\n    \\begin{align*}\n      \\mathsf{left\\usc{}unit\\usc{}law\\usc{}add\\usc{}}\\Z  & : \\prd{x:\\Z} 0 + x = x \\\\\n      \\mathsf{right\\usc{}unit\\usc{}law\\usc{}add\\usc{}}\\Z  & : \\prd{x : \\Z} x + 0 = x.\n    \\end{align*}\n  \\item Show that addition respects predecessors and successor on both sides, i.e., construct terms\n    \\begin{align*}\n      \\mathsf{left\\usc{}predecessor\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x,y:\\Z} \\mathsf{pred}_{\\mathbb{Z}}(x)+y = \\mathsf{pred}_{\\mathbb{Z}}(x+y) \\\\\n      \\mathsf{right\\usc{}predecessor\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x,y:\\Z} x+\\mathsf{pred}_{\\mathbb{Z}}(y) = \\mathsf{pred}_{\\mathbb{Z}}(x+y) \\\\\n      \\mathsf{left\\usc{}successor\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x,y:\\Z} \\mathsf{succ}_{\\mathbb{Z}}(x)+y = \\mathsf{succ}_{\\mathbb{Z}}(x+y) \\\\\n      \\mathsf{right\\usc{}successor\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x,y:\\Z} x+\\mathsf{succ}_{\\mathbb{Z}}(y) = \\mathsf{succ}_{\\mathbb{Z}}(x+y).\n    \\end{align*}\n    Hint: to avoid an excessive number of cases, use induction on $x$ but not on $y$. You may need to use the homotopies $\\mathsf{succ}_{\\mathbb{Z}}\\circ \\mathsf{pred}_{\\mathbb{Z}}\\htpy \\idfunc$ and $\\mathsf{pred}_{\\mathbb{Z}}\\circ\\mathsf{succ}_{\\mathbb{Z}}\\htpy \\idfunc$ constructed in \\cref{ex:succ_equiv}.\n  \\item Use part (b) to show that addition on the integers is associative and commutative, i.e., construct terms\n    \\begin{align*}\n      \\mathsf{assoc\\usc{}add\\usc{}}\\Z & : \\prd{x,y,z:\\Z} (x+y)+z = x + (y+z) \\\\\n      \\mathsf{comm\\usc{}add\\usc{}}\\Z & : \\prd{x,y:\\Z} x+y = y+x.\n    \\end{align*}\n    Hint: Especially in the construction of the associator there is a risk of running into an unwieldy number of cases if you use $\\Z$-induction on all arguments. 
Avoid induction on $y$ and $z$.\n  \\item Show that addition satisfies the left and right inverse laws:\n    \\begin{align*}\n      \\mathsf{left\\usc{}inverse\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x : \\Z} (-x)+x=0 \\\\\n      \\mathsf{right\\usc{}inverse\\usc{}law\\usc{}add\\usc{}}\\Z & : \\prd{x : \\Z} x + (-x)=0.\n    \\end{align*}\n    Conclude that the functions $y \\mapsto x + y$ and $x\\mapsto x + y$ are equivalences for any $x:\\Z$ and $y:\\Z$, respectively.\n  \\end{subexenum}\n\\item \\label{ex:coproduct_functor}In this exercise we will construct the \\define{functorial action} of coproducts.\n  \\begin{subexenum}\n  \\item Construct for any two maps $f:A \\to A'$ and $g:B \\to B'$, a map\n    \\begin{equation*}\n      f+g:A+B \\to A'+B'.\n    \\end{equation*}\n  \\item Show that if $H:f\\htpy f'$ and $K:g\\htpy g'$, then there is a homotopy\n    \\begin{equation*}\n      H+K:(f+g)\\htpy (f'+g').\n    \\end{equation*}\n  \\item Show that $\\idfunc[A]+\\idfunc[B]\\htpy \\idfunc[A+B]$.\n  \\item Show that for any $f:A\\to A'$, $f':A'\\to A''$, $g:B\\to B'$, and $g':B'\\to B''$ there is a homotopy\n    \\begin{equation*}\n      (f'\\circ f)+(g'\\circ g) \\htpy (f'+g')\\circ (f+g).\n    \\end{equation*}\n  \\item \\label{ex:coproduct_functor_equivalence}Show that if $f$ and $g$ are equivalences, then so is $f+g$. (The converse of this statement also holds, see \\cref{ex:is-equiv-is-equiv-functor-coprod}.)\n  \\end{subexenum}\n\\item Construct equivalences\n  \\begin{align*}\n    \\mathsf{Fin}(m+n) & \\simeq \\mathsf{Fin}(m)+\\mathsf{Fin}(n) \\\\\n    \\mathsf{Fin}(mn) & \\simeq \\mathsf{Fin}(m)\\times\\mathsf{Fin}(n).\n  \\end{align*}\n\\end{exercises}\n", "meta": {"hexsha": "b4a20abd5a07e50e2f5d060008c73cd2227e4328", "size": 22303, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/equivalences.tex", "max_stars_repo_name": "tadejpetric/HoTT-Intro", "max_stars_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/equivalences.tex", "max_issues_repo_name": "tadejpetric/HoTT-Intro", "max_issues_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/equivalences.tex", "max_forks_repo_name": "tadejpetric/HoTT-Intro", "max_forks_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.3544600939, "max_line_length": 418, "alphanum_fraction": 0.6630498139, "num_tokens": 8213, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5857439138630485}}
{"text": "\\section{A Silly Idea}\n    \n    \\frame{\\sectionpage}\n    \n    \\begin{frame}{Ordinary Differential Equations}\n        \\uncover<+->{\\begin{equation*}\n            \\dv{x} y(x) + \\frac{1}{CR} y(x) = 0\n        \\end{equation*}}\n        \n        \\uncover<+->{\\begin{equation*}\n            \\dv[2]{x} y(x) + \\gamma \\dv{x} y(x) + \\omega_0^2 y(x) = f(x)\n        \\end{equation*}}\n    \\end{frame}\n    \n    \\begin{frame}{Livin' La Vida Loca}\n        \\uncover<+>{\\begin{equation*}\n            \\dv[2]{x} y(x) + \\gamma \\dv{x} y(x) + \\omega_0^2 y(x) = f(x)\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            \\colch{\\dv[2]{x} + \\gamma \\dv{x} + \\omega_0^2} y(x) = f(x)\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            y(x) = \\frac{f(x)}{\\dv[2]{x} + \\gamma \\dv{x} + \\omega_0^2}\n        \\end{equation*}}\n    \\end{frame}\n    \n    \\begin{frame}{Livin' La Vida Loca}\n        \\centering\n        \\includegraphics[width = 0.8\\textwidth]{images/coke.jpg}\n    \\end{frame}\n    \n    \\begin{frame}{Pandora's Box}\n        \\centering\n        \\includegraphics[width = 0.9\\textwidth]{images/fourier-1.pdf}\n    \\end{frame}\n    \n    \\begin{frame}{Pandora's Box}\n        \\centering\n        \\includegraphics[width = 0.9\\textwidth]{images/fourier-2.pdf}\n    \\end{frame}\n    \n    \\begin{frame}{Pandora's Box}\n        \\centering\n        \\includegraphics[width = 0.9\\textwidth]{images/fourier-3.pdf}\n    \\end{frame}\n    \n    \\begin{frame}{Pandora's Box}\n        \\centering\n        \\includegraphics[width = 0.9\\textwidth]{images/fourier-4.pdf}\n    \\end{frame}\n    \n    \\begin{frame}{Pandora's Box}\n        \\centering\n        \\includegraphics[height = 0.7\\textheight]{images/fourier-5.pdf}\n    \\end{frame}\n    \n    \\begin{frame}{Box Proposal}\n        \\uncover<+>{\\begin{equation*}\n            \\mathcal{F}[f](\\xi) = \\hat{f}(\\xi) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} f(x) e^{-i x \\xi} \\dd{x}\n        \\end{equation*}}\n        \\uncover<+>{\\begin{equation*}\n            \\mathcal{F}^{-1}[\\hat{f}](x) = f(x) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} \\hat{f}(\\xi) e^{i x \\xi} \\dd{\\xi}\n        \\end{equation*}}\n    \\end{frame}\n    \n    \\begin{frame}{Quality Control}\n        \\uncover<+>{\\begin{equation*}\n            \\widehat{(f + \\alpha g)}(\\xi) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} \\prnt{f(x) + \\alpha g(x)} e^{-i x \\xi} \\dd{x}\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            \\widehat{(f + \\alpha g)}(\\xi) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} f(x) e^{-i x \\xi} \\dd{x} + \\frac{\\alpha}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} g(x) e^{-i x \\xi} \\dd{x}\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            \\widehat{(f + \\alpha g)}(\\xi) = \\hat{f}(\\xi) + \\alpha \\hat{g}(\\xi)\n        \\end{equation*}}\n    \\end{frame}\n    \n    \\begin{frame}{Quality Control}\n        \\uncover<+>{\\begin{equation*}\n            \\widehat{f'}(\\xi) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} f'(x) e^{-i x \\xi} \\dd{x}\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            \\widehat{f'}(\\xi) = \\eval{\\frac{f(x) e^{-ix\\xi}}{\\sqrt{2\\pi}}}^{+\\infty}_{-\\infty} 
+ i\\xi \\cdot \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} f(x) e^{-i x \\xi} \\dd{x}\n        \\end{equation*}}\n        \\uncover<+>{\\[ \\Downarrow \\]\n        \\begin{equation*}\n            \\widehat{f'}(\\xi) = i\\xi \\widehat{f}(\\xi)\n        \\end{equation*}}\n    \\end{frame}\n    \n    \\begin{frame}{Quality Control}\n        \\centering\n        \\huge{The inverse does work}\n        \n        \\normalsize{for appropriate functions}\n        \n        \\tiny{and, sometimes, the Fourier Transform of a function is not in the same set as the original function, but let's forget about this since we do not know a decent theory of integration}\n    \\end{frame}", "meta": {"hexsha": "4624dc8abe6bdddc438171d9a8614236a60b50b5", "size": 3901, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "200+ beamer \u6a21\u677f\u5408\u96c6/DeadPhysicistsSocietyPresentationTemplate(DPS \u7814\u8ba8\u4f1a)/chapters/a-silly-idea.tex", "max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z", "max_issues_repo_path": "200+ beamer \u6a21\u677f\u5408\u96c6/DeadPhysicistsSocietyPresentationTemplate(DPS \u7814\u8ba8\u4f1a)/chapters/a-silly-idea.tex", "max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "200+ beamer \u6a21\u677f\u5408\u96c6/DeadPhysicistsSocietyPresentationTemplate(DPS \u7814\u8ba8\u4f1a)/chapters/a-silly-idea.tex", "max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z", "avg_line_length": 37.8737864078, "max_line_length": 195, "alphanum_fraction": 0.5142271213, "num_tokens": 1434, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5857439138630485}}
{"text": "\nDue to the fact that  two of the  studied classes of systems that are studied in this paper are affine functions in terms of $f$ and $g$, we propose to solve the \"one--step nonsmooth problem'' (\\ref{eq:toto1}) by performing an external Newton linearization.\n\n \\paragraph{Newton's linearization of the first line of~(\\ref{eq:toto1})} The first line of the  problem~(\\ref{eq:toto1}) can be written under the form of a residue $\\mathcal R$ depending only on $x_{k+1}$ and $r_{k+1}$ such that \n\\begin{equation}\n  \\label{eq:NL3}\n  \\mathcal R (x_{k+1},r _{k+1}) =0\n\\end{equation}\nwith \n\\begin{equation}\n\\mathcal R(x,r) = M(x - x_{k}) -h\\theta f( x , t_{k+1}) - h(1-\\theta)f(x_k,t_k) - h\\gamma r\n- h(1-\\gamma)r_k.\n\\end{equation}\nThe solution of this system of nonlinear equations is sought as a limit of the sequence $\\{ x^{\\alpha}_{k+1},r^{\\alpha}_{k+1} \\}_{\\alpha \\in \\NN}$ such that\n \\begin{equation}\n   \\label{eq:NL7}\n   \\begin{cases}\n     x^{0}_{k+1} = x_k \\\\ \\\\\n     r^{0}_{k+1} = r_k \\\\ \\\\\n     \\mathcal R_L( x^{\\alpha+1}_{k+1},r^{\\alpha+1}_{k+1}) = \\mathcal\n     R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+1})  + \\left[ \\nabla_{x} \\mathcal\n     R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+1})\\right] (x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1} ) +\n     \\left[ \\nabla_{r} \\mathcal R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+1})\\right] (r^{\\alpha+1}_{k+1} - r^{\\alpha}_{k+1} ) =0\n \\end{cases}\n\\end{equation}\n\\begin{ndrva}\n  What about $r^0_{k+1}$ ?\n\\end{ndrva}\n\nThe residu \\free $\\mathcal R _{\\free}$ is also defined (useful for implementation only):\n\\[\\mathcal R _{\\free}(x) \\stackrel{\\Delta}{=}  M(x - x_{k}) -h\\theta f( x , t_{k+1}) - h(1-\\theta)f(x_k,t_k),\\]\nwhich yields\n\\[\\mathcal R (x,r) = \\mathcal R _{\\free}(x)   - h\\gamma r - h(1-\\gamma)r_k.\\]\n\n\\begin{equation}\n  \\mathcal R (x^{\\alpha}_{k+1},r^{\\alpha}_{k+1}) = \\fbox{$\\mathcal R^{\\alpha}_{k+1} \\stackrel{\\Delta}{=}  \\mathcal R\n_{\\free}(x^{\\alpha}_{k+1})  - h\\gamma r^{\\alpha}_{k+1} - h(1-\\gamma)r_k$}\\label{eq:rfree-1}\n\\end{equation}\n\n\\[  \\mathcal R\n_{\\free}(x^{\\alpha}_{k+1},r^{\\alpha}_{k+1} )=\\fbox{$ \\mathcal R _{\\free, k+1} ^{\\alpha} \\stackrel{\\Delta}{=}  M(x^{\\alpha}_{k+1} - x_{k}) -h\\theta f( x^{\\alpha}_{k+1} , t_{k+1}) - h(1-\\theta)f(x_k,t_k)$}\\]\n \n% The computation of the Jacobian of $\\mathcal R$ with respect to $x$, denoted by $   W^{\\alpha}_{k+1}$ leads to \n% \\begin{equation}\n%    \\label{eq:NL9}\n%    \\begin{array}{l}\n%     W^{\\alpha}_{k+1} \\stackrel{\\Delta}{=} \\nabla_{x} \\mathcal R (x^{\\alpha}_{k+1},r^{\\alpha}_{k+1})= M - h  \\theta \\nabla_{x} f(  x^{\\alpha}_{k+1}, t_{k+1} ).\\\\\n%  \\end{array}\n% \\end{equation}\nAt each time--step, we have to solve the following linearized problem,\n\\begin{equation}\n   \\label{eq:NL10}\n    \\mathcal R^{\\alpha}_{k+1} + (M-h\\theta A ^{\\alpha}_{k+1}) (x^{\\alpha+1}_{k+1} -\n    x^{\\alpha}_{k+1}) - h \\gamma (r^{\\alpha+1}_{k+1} - r^{\\alpha}_{k+1} )  =0 ,\n\\end{equation}\nwith \n\\begin{equation}\n     \\begin{array}{l}\n       A^{\\alpha}_{k+1} = \\nabla_x f(t_{k+1}, x^{\\alpha}_{k+1}) \n \\end{array}\n\\end{equation}\n\nBy using (\\ref{eq:rfree-1}), we get\n\\begin{equation}\n  \\label{eq:rfree-2}\n  \\mathcal R\n_{\\free}(x^{\\alpha}_{k+1},r^{\\alpha}_{k+1} )  - h\\gamma r^{\\alpha+1}_{k+1} - h(1-\\gamma)r_k  + (M-h\\theta A^{\\alpha}_{k+1}) (x^{\\alpha+1}_{k+1} -\n    x^{\\alpha}_{k+1})  =0 \n\\end{equation}\n\n% %\\fbox\n% {\n%   \\begin{equation}\n%     
\\label{eq:rfree-11}\n%     \\boxed{ x^{\\alpha+1}_{k+1} = h\\gamma (W^{\\alpha}_{k+1})^{-1}r^{\\alpha+1}_{k+1} +x^\\alpha_{\\free}}\n%   \\end{equation}\n% }\n% with :\n% \\begin{equation}\n%   \\label{eq:rfree-12}\n%   \\boxed{x^\\alpha_{\\free}\\stackrel{\\Delta}{=}x^{\\alpha}_{k+1}-(W^{\\alpha}_{k+1})^{-1}\\mathcal (R_{\\free,k+1}^{\\alpha} \\textcolor{red}{- h(1-\\gamma) r_k})}\n% \\end{equation}\n\nThe matrix $W$ is clearly nonsingular for small $h$.\n\n\n\n\n% that is\n\n% \\begin{equation}\n%    \\begin{array}{l}\n%  h \\gamma  r^{\\alpha+1}_{k+1} = r_c + W^{\\alpha}_{k+1} x^{\\alpha+1}_{k+1}\n%  .\\label{eq:NL11} \n%  \\end{array}\n% \\end{equation}\n% with \n% \\begin{equation}\n%    \\begin{array}{l}\n% r_c \\stackrel{\\Delta}{=} h \\gamma r^{\\alpha}_{k+1} - W^{\\alpha}_{k+1} x^{\\alpha}_{k+1} + \\mathcal R\n% ^{\\alpha}_{k+1}=- W^{\\alpha}_{k+1} x^{\\alpha}_{k+1} + \\mathcal R_{\\free k+1} ^{\\alpha} - h(1-\\gamma)r_k\\\\ \\\\\n% \\end{array}\n% \\end{equation}\n% \\begin{equation}\n%    \\begin{array}{l}\n% \\mathcal R ^{\\alpha}_{k+1}=M( x^{\\alpha}_{k+1} - x_k) -h \\theta f(x^{\\alpha}_{k+1})-h(1-\\theta)f(x_k)\n% - h \\gamma r^{\\alpha}_{k+1} -h(1- \\gamma)r_k\n%  \\end{array}\n%    \\end{equation}\n% \\[x^{\\alpha+1}_{k+1} = h(W^{\\alpha}_{k+1})^{-1}r^{\\alpha+1}_{k+1} +(W^{\\alpha}_{k+1})^{-1}(\\mathcal\n% R_{\\free k+1} ^{\\alpha})+x^{\\alpha}_{k+1}\\]\n\n \\paragraph{Newton's linearization of the second  line of~(\\ref{eq:toto1})}\nThe same operation is performed with the second equation of (\\ref{eq:toto1})\n\\begin{equation}\n  \\begin{array}{l}\n    \\mathcal R_y(x,y,\\lambda)=y-h(t_{k+1},x,\\lambda) =0\\\\ \\\\\n  \\end{array}\n\\end{equation}\nwhich is linearized as\n\\begin{equation}\n  \\label{eq:NL9y}\n  \\begin{array}{l}\n    \\mathcal R_{Ly}(x^{\\alpha+1}_{k+1},y^{\\alpha+1}_{k+1},\\lambda^{\\alpha+1}_{k+1}) = \\mathcal\n    R_{y}(x^{\\alpha}_{k+1},y^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1}) +\n    (y^{\\alpha+1}_{k+1}-y^{\\alpha}_{k+1})- \\\\[2mm] \\qquad  \\qquad \\qquad \\qquad  \\qquad \\qquad\n    C^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1}) - D^{\\alpha}_{k+1}(\\lambda^{\\alpha+1}_{k+1}-\\lambda^{\\alpha}_{k+1})=0\n  \\end{array}\n\\end{equation}\n\nThis leads to the following linear equation\n\\begin{equation}\n  \\boxed{y^{\\alpha+1}_{k+1} =  y^{\\alpha}_{k+1}\n  -\\mathcal R^{\\alpha}_{yk+1}+\n  C^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1}) +\n  D^{\\alpha}_{k+1}(\\lambda^{\\alpha+1}_{k+1}-\\lambda^{\\alpha}_{k+1})}. 
\\label{eq:NL11y}\n\\end{equation}\nwith\n\\begin{equation}\n     \\begin{array}{l}\n  C^{\\alpha}_{k+1} = \\nabla_xh(t_{k+1}, x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1} ) \\\\ \\\\\n  D^{\\alpha}_{k+1} = \\nabla_{\\lambda}h(t_{k+1}, x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1})\n \\end{array}\n\\end{equation}\nand\n\\begin{equation}\\fbox{$\n\\mathcal R^{\\alpha}_{yk+1} \\stackrel{\\Delta}{=} y^{\\alpha}_{k+1} - h(t_{k+1},x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1})$}\n \\end{equation}\n \\paragraph{Newton's linearization of the third  line of~(\\ref{eq:toto1})}\nThe same operation is performed with the third equation of (\\ref{eq:toto1})\n\\begin{equation}\n  \\begin{array}{l}\n    \\mathcal R_r(r,x,\\lambda)=r-g(t_{k+1},x,\\lambda) =0\\\\ \\\\  \\end{array}\n\\end{equation}\nwhich is linearized as\n\\begin{equation}\n  \\label{eq:NL9r}\n  \\begin{array}{l}\n      \\mathcal R_{Lr}(r^{\\alpha+1}_{k+1},x^{\\alpha+1}_{k+1},\\lambda^{\\alpha+1}_{k+1}) = \\mathcal\n      R_{rk+1}^{\\alpha} + (r^{\\alpha+1}_{k+1} - r^{\\alpha}_{k+1}) -\n      K^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1} - x^{\\alpha}_{k+1})- B^{\\alpha}_{k+1}(\\lambda^{\\alpha+1}_{k+1} -\n      \\lambda^{\\alpha}_{k+1})=0\n    \\end{array}\n  \\end{equation}\n\\begin{equation}\n  \\label{eq:rrL}\n  \\begin{array}{l}\n    \\boxed{r^{\\alpha+1}_{k+1} = g(t_{k+1},x ^{\\alpha}_{k+1},\\lambda ^{\\alpha}_{k+1}) +\n      K^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1} - x^{\\alpha}_{k+1})\n      + B^{\\alpha}_{k+1}(\\lambda^{\\alpha+1}_{k+1} - \\lambda^{\\alpha}_{k+1})\n    }       \n  \\end{array}\n\\end{equation}\nwith\n\\begin{equation}\n     \\begin{array}{l}\n  K^{\\alpha}_{k+1} = \\nabla_xg(t_{k+1},x^{\\alpha}_{k+1},\\lambda ^{\\alpha}_{k+1})  \\\\ \\\\\n  B^{\\alpha}_{k+1} = \\nabla_{\\lambda}g(t_{k+1},x^{\\alpha}_{k+1},\\lambda ^{\\alpha}_{k+1})\n \\end{array}\n\\end{equation}\nand the residue for $r$:\n\\begin{equation}\n\\boxed{\\mathcal\n      R_{rk+1}^{\\alpha} = r^{\\alpha}_{k+1} - g(t_{k+1},x^{\\alpha}_{k+1},\\lambda ^{\\alpha}_{k+1})}\n  \\end{equation}\n\n\n\\paragraph{Reduction to a linear relation between  $x^{\\alpha+1}_{k+1}$ and $\\lambda^{\\alpha+1}_{k+1}$}\n\nInserting (\\ref{eq:rrL}) into~(\\ref{eq:rfree-2}), we get the following linear relation between $x^{\\alpha+1}_{k+1}$ and\n$\\lambda^{\\alpha+1}_{k+1}$, \n\\begin{equation}\n  \\label{eq:rfree-3}\n  \\begin{array}{l}\n  \\mathcal R^{\\alpha}_{\\free, k+1}  - h\\gamma\\left[  g(t_{k+1},x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1}) +\n    B^{\\alpha}_{k+1} (\\lambda^{\\alpha+1}_{k+1} - \\lambda^{\\alpha}_{k+1})+K^{\\alpha}_{k+1}\n    (x^{\\alpha+1}_{k+1} - x^{\\alpha}_{k+1})  \\right] \\\\\n  \\quad\\quad - h(1-\\gamma)r_k  + (M-h\\theta A^{\\alpha}_{k+1}) (x^{\\alpha+1}_{k+1} -\n    x^{\\alpha}_{k+1})  =0\n  \\end{array}\n\\end{equation}\nthat is\n\\begin{equation}\n  \\label{eq:rfree-4}\n  \\begin{array}[l]{lcl}\n  (M-h\\theta A^{\\alpha}_{k+1}-h\\gamma K^{\\alpha}_{k+1}) (x^{\\alpha+1}_{k+1}  -  x^{\\alpha}_{k+1}) &=& \n  -  \\mathcal R^{\\alpha}_{\\free, k+1} +h(1-\\gamma) r_k \\\\ & & + h\\gamma \\left[  g(t_{k+1},x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1}) +\n    B^{\\alpha}_{k+1} (\\lambda^{\\alpha+1}_{k+1} - \\lambda^{\\alpha}_{k+1})  \\right]  \n\\end{array}\n\\end{equation}\n\nLet us introduce some intermediate notation:\n\\begin{equation}\n   \\label{eq:NL9W}\n   \\begin{array}{l}\n     W^{\\alpha}_{k+1} \\stackrel{\\Delta}{=} M-h\\theta A^{\\alpha}_{k+1}-h\\gamma K^{\\alpha}_{k+1}\\\\\n   \\end{array}\n \\end{equation}\n \\begin{equation}\n   \\label{eq:rfree-12}\n   
\\boxed{x^\\alpha_{\\free}\\stackrel{\\Delta}{=}x^{\\alpha}_{k+1}-(W^{\\alpha}_{k+1})^{-1}\\mathcal (R_{\\free,k+1}^{\\alpha} \\textcolor{red}{- h(1-\\gamma) r_k})}\n \\end{equation}\nand \n\\begin{equation}\n  \\boxed{x^\\alpha_p \\stackrel{\\Delta}{=}  h\\gamma(W^{\\alpha}_{k+1} )^{-1}\\left[g(t_{k+1},x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1}) \n    -B^{\\alpha}_{k+1} (\\lambda^{\\alpha}_{k+1}) \\right ] +x^\\alpha_{\\free}}.\n\\end{equation}\n\nThe relation (\\ref{eq:rfree-4}) can be written as\n\\begin{equation}\n  \\label{eq:rfree-13}\n  \\begin{array}{l}\n    \\boxed{   x^{\\alpha+1}_{k+1}\\stackrel{\\Delta}{=}  x^\\alpha_p +  \\left[ h \\gamma (W^{\\alpha}_{k+1})^{-1}    B^{\\alpha}_{k+1} \\lambda^{\\alpha+1}_{k+1}\\right]}\n   \\end{array}\n\\end{equation}\n\n\n\n\\paragraph{Reduction to a linear relation between  $y^{\\alpha+1}_{k+1}$ and\n$\\lambda^{\\alpha+1}_{k+1}$.}\n\nInserting (\\ref{eq:rfree-13}) into (\\ref{eq:NL11y}), we get the following linear relation between $y^{\\alpha+1}_{k+1}$ and $\\lambda^{\\alpha+1}_{k+1}$, \n\\begin{equation}\n   \\begin{array}{l}\n     y^{\\alpha+1}_{k+1} = y_p + \\left[ h \\gamma C^{\\alpha}_{k+1} ( W^{\\alpha}_{k+1})^{-1}  B^{\\alpha}_{k+1} + D^{\\alpha}_{k+1} \\right]\\lambda^{\\alpha+1}_{k+1}\n   \\end{array}\n\\end{equation}\nwith \n\\begin{equation}\\boxed{\n    y_p = y^{\\alpha}_{k+1} -\\mathcal R^{\\alpha}_{yk+1} + C^{\\alpha}_{k+1}(x_q) -   D^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1} }\n\\end{equation}\n\\textcolor{red}{\n  \\begin{equation}\n   \\boxed{ x_q=x^\\alpha_p -x^{\\alpha}_{k+1}\\label{eq:xqq}}\n  \\end{equation}\n}\n\n\n\n\n\n\n\n% \\paragraph{With $\\gamma =1$:}\n% \\[(W^{\\alpha}_{k+1} )x^{\\alpha+1}_{k+1}= hr^{\\alpha+1}_{k+1}- \\mathcal R_{\\free, k+1} ^{\\alpha}+W^{\\alpha}_{k+1}x^{\\alpha}_{k+1}\\]\n% \\[x^{\\alpha+1}_{k+1}= h( W^{\\alpha}_{k+1})^{-1}r^{\\alpha+1}_{k+1}-\n% ( W^{\\alpha}_{k+1})^{-1} \\mathcal R_{\\free k+1} ^{\\alpha}+x^{\\alpha}_{k+1}\\]\n% \\[x^{\\alpha+1}_{k+1}= h( W^{\\alpha}_{k+1})^{-1}r^{\\alpha+1}_{k+1}+x_{\\free}\\]\n% with, using \\ref{}\n% \\begin{equation}\n% x_p-x^{\\alpha}_{k+1}=h(\n% W^{\\alpha}_{k+1})^{-1}(g(x^{\\alpha}_{k+1},\\lambda^{\\alpha}_{k+1},t_{k+1})-B^{\\alpha}_{k+1}\n% \\lambda^{\\alpha}_{k+1}-K^{\\alpha}_{k+1} x^{\\alpha}_{k}))+\\tilde x_{\\free}\n% \\end{equation}\n% \\[    \\tilde x_{\\free}= -( W^{\\alpha}_{k+1})^{-1} \\mathcal R _{\\free k+1} ^{\\alpha} \\]\n%       \\[x_{\\free} = \\tilde x_{\\free} + x^{\\alpha}_{k+1}=\\fbox{$- W^{-1}R_{\\free k+1} ^{\\alpha} + x^{\\alpha}_{k+1}$}\\]\n% \\[ \\fbox{$x_p  = x_{\\free} + h ( W^{\\alpha}_{k+1})^{-1}( g(x ^{\\alpha}_{k+1},\\lambda ^{\\alpha}_{k+1},t_{k+1}) -\n%       B^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1}-K^{\\alpha}_{k+1} x^{\\alpha}_{k+1} )$} \\]\n\n\n\n\n\\paragraph{Mixed linear complementarity problem (MLCP)}To summarize, the problem to be solved in each Newton iteration is:\\\\{\n  \\begin{minipage}[l]{1.0\\linewidth}\n    \\begin{equation}\n      \\begin{cases}\n      \\begin{array}[l]{l}\n        y^{\\alpha+1}_{k+1} =   W_{mlcpk+1}^{\\alpha}  \\lambda^{\\alpha+1}_{k+1} + b^{\\alpha}_{k+1}\n        \\\\ \\\\\n        -y^{\\alpha+1}_{k+1} \\in N_{[l,u]}(\\lambda^{\\alpha+1}_{k+1} ). 
\n      \\end{array}\n      \\label{eq:NL14}\n      \\end{cases}\n    \\end{equation}\n  \\end{minipage}\n}\nwith $W_{mlcpk+1}\\in \\RR^{m\\times m}$ and $b\\in\\RR^{m}$ defined by\n\\begin{equation}\n  \\label{eq:NL15}\n \\begin{array}[l]{l}\n   W_{mlcpk+1}^{\\alpha} = h \\gamma C^{\\alpha}_{k+1}  (W^{\\alpha}_{k+1})^{-1}  B^{\\alpha}_{k+1} + D^{\\alpha}_{k+1} \\\\\n   b^{\\alpha}_{k+1} = y_p\n\\end{array}\n\\end{equation}\n\nThe problem~(\\ref{eq:NL14}) is equivalent to a Mixed Linear Complementarity Problem (MLCP) which can be solved under suitable assumptions by many linear complementarity solvers such as pivoting techniques, interior point techniques and splitting/projection strategies. The  reformulation into a standard MLCP follows the same line as for the MCP in the previous section. One obtains,\n    \\begin{equation}\n      \\begin{array}[l]{l}\n        y^{\\alpha+1}_{k+1} =   - W^{\\alpha}_{k+1}  \\lambda^{\\alpha+1}_{k+1} + b^{\\alpha}_{k+1}\n        \\\\ \\\\\n        (y^{\\alpha+1}_{k+1})_i  = 0 \\qquad \\textrm{ for } i \\in \\{ 1..n\\}\\\\[2mm]\n        0 \\leq  (\\lambda^{\\alpha+1}_{k+1})_i\\perp (y^{\\alpha+1}_{k+1})_i \\geq 0 \\qquad \\textrm{ for } i \\in \\{ n..n+m\\}\\\\\n      \\end{array}\n      \\label{eq:MLCP1} \n    \\end{equation}\n\n\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"DevNotes\"\n%%% End:", "meta": {"hexsha": "31ebc452e2e0cea109432efa6e9335a727d58882", "size": 13120, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/sphinx/devel_guide/notes/MCP_linearized.tex", "max_stars_repo_name": "BuildJet/siconos", "max_stars_repo_head_hexsha": "5e9c95806f0a01d62ab564ffb1d9d50c2dc32ef0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 137, "max_stars_repo_stars_event_min_datetime": "2015-06-16T15:55:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T06:01:59.000Z", "max_issues_repo_path": "docs/sphinx/devel_guide/notes/MCP_linearized.tex", "max_issues_repo_name": "BuildJet/siconos", "max_issues_repo_head_hexsha": "5e9c95806f0a01d62ab564ffb1d9d50c2dc32ef0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 381, "max_issues_repo_issues_event_min_datetime": "2015-09-22T15:31:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-14T09:05:23.000Z", "max_forks_repo_path": "docs/sphinx/devel_guide/notes/MCP_linearized.tex", "max_forks_repo_name": "BuildJet/siconos", "max_forks_repo_head_hexsha": "5e9c95806f0a01d62ab564ffb1d9d50c2dc32ef0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2015-08-06T22:57:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T20:30:20.000Z", "avg_line_length": 40.6191950464, "max_line_length": 383, "alphanum_fraction": 0.5756859756, "num_tokens": 5669, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672089305841, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5857439105125681}}
{"text": "% Define document class\n\\documentclass[modern]{aastex631}\n\n% Filler text\n\\usepackage{acro}\n\\usepackage{blindtext}\n\n\\DeclareAcronym{KDE}{\n    short = {KDE},\n    long = {kernel density estimate}\n}\n\n\\DeclareMathOperator{\\var}{Var}\n\n% Begin!\n\\begin{document}\n\n% Title\n\\title{A Note About Marginalizing Over Gaussian Populations With KDE Likelihood Representations}\n\n% Author list\n\\author[0000-0003-1540-8562]{Will M. Farr}\n\n% Abstract with filler text\n\\begin{abstract}\n    I work through the marginalization over parameters whose population is\n    Gaussian using a \\ac{KDE} approximation to the likelihood function.  The\n    result is an analytic \\ac{KDE} representation of the marginal likelihood for\n    the Gaussian population parameters. \n\\end{abstract}\n\n% Main body with filler text\n\\section{Marginalization}\n\nSuppose we are conducting a population analysis with a Gaussian population\n\\citep[e.g.][]{Isi2019,Miller2020}.  The hierarchical posterior for the unknown\nparameter $x_i$ with the Gaussian population from each of the $i = 1, \\ldots, N$\nevents and the population parameters $\\mu$ and $\\sigma$ is \n\\begin{equation}\n    \\label{eq:full-posterior}\n    p\\left( \\left\\{ x_i \\right\\}, \\mu, \\sigma \\mid \\left\\{ d_i \\right\\} \\right) \\propto p\\left( \\mu, \\sigma \\right) \\prod_{i=1}^N p\\left( d_i \\mid x_i \\right) p\\left( x_i \\mid \\mu, \\sigma \\right)\n\\end{equation}\nwhere $p\\left( \\mu, \\sigma \\right)$ is the prior we impose on the population\nparameters, and $d_i$ is the data for each observation.  The population is\nassumed to be Gaussian with mean $\\mu$ and standard deviation $\\sigma$:\n\\begin{equation}\n    p\\left( x \\mid \\mu, \\sigma \\right) = N\\left( x \\mid \\mu, \\sigma \\right).\n\\end{equation}\n\nIn problems of interest, the likelihood function is often not simple to write\ndown, and does not take a straightforward analytic form.  In such situations, we\nusually draw \\emph{samples} from the likelihood (or a posterior which is the\nlikelihood times some simple prior) for each event.  Let there be $j = 1,\n\\ldots, M_i$ samples $x_{ij}$ from (a density proportional to) the likelihood\n(i.e.\\ from a posterior with a flat prior):\n\\begin{equation}\n    x_{ij} \\sim p\\left( d \\mid x_i \\right).\n\\end{equation}\nA useful representation of the likelihood function (up to a proportionality\nconstant that we usually ignore---unless we are computing the Bayesian evidence)\nthen is a \\ac{KDE} with a Gaussian kernel over the samples from the likelihood\n\\begin{equation}\n    p\\left( d \\mid x_i \\right) \\simeq \\frac{1}{M_i} \\sum_{j} N\\left( x_i \\mid x_{ij}, \\sigma_i \\right)\n\\end{equation}\nwhere $\\sigma_i$ is a reasonable \\emph{bandwidth} for the \\ac{KDE}\\footnote{When\nin doubt, I usually follow \\citet{Scott1992}, with \n\\begin{equation}\n    \\sigma_i^2 = \\frac{\\var x_{ij}}{M_i^{2/5}}.\n\\end{equation}\n}.\n\nBecause we can do Gaussian integrals (though the discussion here is for a\none-dimensional parameter $x$, the same trick works is multiple dimensions;\n\\citet{Hogg2020} is helpful here), we can analytically integrate out the true\nparameter values $x_i$ from the posterior in Eq.\\ \\eqref{eq:full-posterior}\nusing the \\ac{KDE} approximation to the likelihood. 
The result is \n\\begin{equation}\n    p\\left( \\mu, \\sigma \\mid \\left\\{ d_i \\right\\} \\right) \\propto p\\left( \\mu, \\sigma \\right) \\prod_{i=1}^N \\frac{1}{M_i} \\sum_{j=1}^{M_i} N\\left( x_{ij} \\mid \\mu, \\sqrt{\\sigma_i^2 + \\sigma^2} \\right)\n\\end{equation}\nIn this special case of Gaussian populations and Gaussian \\ac{KDE}\napproximations to the likelihood function, the above expression is considerably\nmore robust to differences in scale between the population and measurement\n(i.e.\\ $\\sigma_i \\ll \\sigma$ or $\\sigma \\ll \\sigma_i$) than the usual\nMonte-Carlo approximation to the integral of the likelihood\n\\citep[e.g.][]{Miller2020}.\n\n\\bibliography{bib}\n\n\\end{document}\n", "meta": {"hexsha": "e01c3d69ed0275bedc6e3e97dc35873035be0b6a", "size": 3815, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/ms.tex", "max_stars_repo_name": "farr/KDEMarginalization", "max_stars_repo_head_hexsha": "4a881dd3eb2e40242334257887244b5ab475ce0d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/ms.tex", "max_issues_repo_name": "farr/KDEMarginalization", "max_issues_repo_head_hexsha": "4a881dd3eb2e40242334257887244b5ab475ce0d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ms.tex", "max_forks_repo_name": "farr/KDEMarginalization", "max_forks_repo_head_hexsha": "4a881dd3eb2e40242334257887244b5ab475ce0d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.3888888889, "max_line_length": 200, "alphanum_fraction": 0.7339449541, "num_tokens": 1126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.5857439042574287}}
{"text": "% file: Mat_out.tex\n%\n% github        : ernestyalumni\n% gmail         : ernestyalumni \n% linkedin      : ernestyalumni \n% wordpress.com : ernestyalumni\n%\n% This code is open-source, governed by the Creative Common license.  Use of this code is governed by the Caltech Honor Code: ``No member of the Caltech community shall take unfair advantage of any other member of the Caltech community.'' \n\n\\documentclass[10pt]{amsart}\n\\pdfoutput=1\n\\usepackage{mathtools,amssymb,caption}\n\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage[utf8]{inputenc}\n\\usepackage{listings}\n\\usepackage[table]{xcolor}\n\\usepackage{pdfpages}\n\\usepackage{tikz}\n\\usetikzlibrary{matrix,arrows}\n\n\\usepackage{breqn} % for dmath\n\n\n%\\usepackage{cancel} % for Feynman slash notation\n\n\\hypersetup{colorlinks=true,citecolor=[rgb]{0,0.4,0}}\n\n\n%\\oddsidemargin=15pt\n%\\evensidemargin=5pt\n%\\hoffset-45pt\n%\\voffset-55pt\n%\\topmargin=-4pt\n%\\headsep=5pt\n%\\textwidth=1120pt\n%\\textheight=595pt\n%\\paperwidth=1200pt\n%\\paperheight=700pt\n%\\footskip=40pt\n\n\n\n\n\n\n\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{corollary}{Corollary}\n%\\newtheorem*{main}{Main Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{proposition}{Proposition}\n\n\\newtheorem{definition}{Definition}\n\\newtheorem{remark}{Remark}\n\n\\newenvironment{claim}[1]{\\par\\noindent\\underline{Claim:}\\space#1}{}\n\\newenvironment{claimproof}[1]{\\par\\noindent\\underline{Proof:}\\space#1}{\\hfill $\\blacksquare$}\n\n%This defines a new command \\questionhead which takes one argument and\n%prints out Question #. with some space.\n\\newcommand{\\questionhead}[1]\n  {\\bigskip\\bigskip\n   \\noindent{\\small\\bf Question #1.}\n   \\bigskip}\n\n\\newcommand{\\problemhead}[1]\n  {\n   \\noindent{\\small\\bf Problem #1.}\n   }\n\n\\newcommand{\\exercisehead}[1]\n  { \\smallskip\n   \\noindent{\\small\\bf Exercise #1.}\n  }\n\n\\newcommand{\\solutionhead}[1]\n  {\n   \\noindent{\\small\\bf Solution #1.}\n   }\n\n\n  \\title[Matrices as a Tensor Product and the \\emph{Mat} C++ class template]{Matrices as a Tensor Product and the \\emph{Mat} C++ class template}\n\n\\author{Ernest Yeung \\href{mailto:ernestyalumni@gmail.com}{ernestyalumni@gmail.com}}\n\\date{27 juillet 2017}\n\\keywords{C++, C++11, Matrices, Tensor Products, dual basis, basis, numerical computation}\n\n\\begin{document}\n\n\\definecolor{darkgreen}{rgb}{0,0.4,0}\n\\lstset{language=Python,\n frame=bottomline,\n basicstyle=\\scriptsize,\n identifierstyle=\\color{blue},\n keywordstyle=\\bfseries,\n commentstyle=\\color{darkgreen},\n stringstyle=\\color{red},\n }\n%\\lstlistoflistings\n\n\\maketitle\n\n\\tableofcontents\n\n%\\begin{multicols*}{2}\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\\begin{abstract}\n\\section{Executive Summary}\n\nI implement matrices, with a focus on implementing matrix multiplication and the taking the transpose for the matrices, in C++11.  In designing the class templates for the matrices, I begin with the mathematical formulation of the space of all matrices of matrix size dimensions $M\\times N$, $\\textbf{Mat}_{\\mathbb{K}}(M,N)$, with $\\mathbb{K}$ being the underlying field (e.g. $\\mathbb{K}=\\mathbb{R},\\mathbb{Z}$), as a \\emph{tensor product} of vector spaces,  $\\bigotimes_{j=1}^P \\mathbb{K}^M$ and equivalently as a tensor product of \\emph{dual} vector spaces $\\bigotimes_{i=1}^M \\mathbb{K}^P$.  
By doing so, we can implement matrix multiplication and the transpose using new C++11 features in the standard library headers \\verb|<algorithm>| and \\verb|<numeric>|, and efficiently read contiguous memory addresses along each of the \\verb|std::vector|s representing rows or columns of a matrix.  \n\\end{abstract}\n\n\\section{Mathematical formulation: matrices as tensor products of vectors (or \\emph{dual} vectors)}\n\nDenote the set of all matrices of \\emph{matrix size dimensions} (i.e. number of rows $\\times $ number of columns) $(M,P)$, i.e. $M\\times P$, with the underlying field $\\mathbb{K}$, to be $\\text{Mat}_{\\mathbb{K}}(M,P)$.  \n\nConsider two matrices $A \\in \\text{Mat}_{\\mathbb{K}}(M,P)$, $B \\in \\text{Mat}_{\\mathbb{K}}(P,N)$.  We can multiply them together since they share equal \"inner size dimensions.\" \n\nLet's consider the entries of matrix $A$ (even if only to set notation), $A_{ij}$, \\, $\\forall \\, i = 1,2, \\dots M$, $j=1,2,\\dots P$.  Consider the \"rows\" of the matrix, and the \"columns\" of the matrix $A$.  For the $i$th row of $A$, denote it as $A_{i*}$, so that \n\\begin{equation}\\label{Eq:RowForm}\n\tA_{i*} = (A_{i1},A_{i2}, \\dots A_{iP}) \\qquad \\, \\forall \\, i = 1,2, \\dots M\t\n\\end{equation}\nFor the $j$th column of $A$, denote it as $A_{*j}$, so that \n\\begin{equation}\\label{Eq:ColumnForm}\n\tA_{*j} = (A_{1j},A_{2j}, \\dots A_{Mj}) \\qquad \\, \\forall \\, j = 1,2, \\dots P \n\\end{equation}\n\nLet's treat the \"row vector\", $A_{i*}$ as being an element of the dual vector space to $\\mathbb{K}^P$, $(\\mathbb{K}^P)^*$.  Notice that matrix $A$ has $M$ of these \"rows.\"  Then $\\forall \\, A \\in \\text{Mat}_{\\mathbb{K}}(M,P)$, $A \\in \\bigotimes_{i=1}^M (\\mathbb{K}^P)^*$.  In fact, we already described the isomorphism between these two spaces.  So \n\\begin{equation}\n\\bigotimes_{i=1}^M (\\mathbb{K}^P)^* \\cong \\text{Mat}_{\\mathbb{K}}(M,P)\n\\end{equation}\n\nLet's treat the \"column vector\", $A_{*j}$ as being an element of the vector space $\\mathbb{K}^M$.  Notice that matrix $A$ has $P$ of these \"columns.\"  Then $\\forall \\, A \\in \\text{Mat}_{\\mathbb{K}}(M,P)$, $A \\in \\bigotimes_{j=1}^P \\mathbb{K}^M$.  In fact, we already described the isomorphism between these two spaces.  So \n\\begin{equation}\n\\bigotimes_{j=1}^P (\\mathbb{K}^M) \\cong \\text{Mat}_{\\mathbb{K}}(M,P)\n\\end{equation}\n\n\\subsection{Matrix Multiplication}\n\nAgain, given matrices $A \\in \\text{Mat}_{\\mathbb{K}}(M,P)$, $B \\in \\text{Mat}_{\\mathbb{K}}(P,N)$, we multiply them together to obtain $C \\in \\text{Mat}_{\\mathbb{K}}(M,N)$: \n\\begin{equation}\n\\begin{gathered}\nC = AB  \\\\\nC_{ij} = \\sum_{k=1}^P A_{ik}B_{kj} \\equiv A_{ik}B_{kj} \\quad \\, \\forall \\, i=1,2,\\dots M, \\, \\forall \\, j = 1,2, \\dots N \n\\end{gathered}\n\\end{equation}\nwhere at the end, we use Einstein's summation notation of implicit summation of any repeated indices.  \n\nKeeping in mind the \"row vector\" and \"column vector\" (dual vector and vector) forms from Eq. \\ref{Eq:RowForm}, Eq. 
\\ref{Eq:ColumnForm}, respectively, let's try to rewrite the matrix multiplication $C=AB$: \n\\begin{equation}\\label{Eq:MatrixMultiplybyRows}\n\\begin{gathered}\nC_{ij} = A_{ik}B_{kj} = A_{ik} (B^T)_{jk} = A_{i*} \\cdot (B^T)_{j*} \\qquad \\, \\forall \\, i = 1, \\dots M, \\, \\forall \\, j = 1, \\dots N\n\\end{gathered}\n\\end{equation}\nThus, if we can get all the rows of $A$, and get all the columns of $B$, we can simply take the inner product of each row with each column (all possible row, column pairs), and we'll have obtained $C$!  \n\n\\subsubsection{The case for thinking of Matrix Multiplication as the inner product of a dual vector with a vector from a computational optimization point of view}  \n\nConsider a \"naive\" implementation of matrix multiplication, which, regardless of your choice of programming language, mathematically is essentially the following:  \n\\begin{equation}\n\\begin{gathered}\n\\forall \\, i = 1,2, \\dots M, \\\\\n\\phantom{\\qquad \\, }\\forall \\, j = 1,2, \\dots N , \\\\ \n\\phantom{\\qquad \\qquad \\, } \\forall \\, k = 1,2,\\dots P , \\\\ \n\\text{sum} += A_{ik}B_{kj} , \\quad \\, i.e. \\quad \\sum_{k=1}^P A_{ik}B_{kj}\n\\end{gathered}\n\\end{equation}\nA (very) cursory look at the read requirements in this matrix multiplication operation would tell us that the memory-read requirements are of order $O(M \\cdot P \\cdot N)$ (imagine multiplying a row from $A$ with a \"fixed\" column from $B$.  That same \"fixed\" column from $B$ will have to be multiplied by the other $M-1$ rows, so we will have read this particular column from $B$ $M-1$ times redundantly).  Nevertheless, it is polynomial in time.  \n\nAlso, notice that if we assume \\emph{row-major ordering} for how the values of the matrix entries are inserted or stored, contiguously, in memory addresses, then in accessing the values for the columns of matrix $B$, we are not efficiently accessing those values since we're \"jumping\" over $N$ memory addresses to grab the next adjacent entry in a column.  \n\nHowever, Eq. \\ref{Eq:MatrixMultiplybyRows} gives us a prescription:  \n\\begin{equation}\\label{Eq:MatrixMultiplybyRows2}\nC_{ij} = A_{i*} \\cdot (B^T)_{j*} \\quad \\, \\forall \\, i =1 \\dots M, \\, \\forall \\, j = 1\\dots N\n\\end{equation}\nIf we can store the \"columns\" of $B$ as rows, $(B^T)_{j*}$, then we can access columns in a way that their respective memory addresses are contiguous.  Also, if we have an optimized routine for taking \\emph{inner products} or, i.e., \\emph{dot products}, we can employ that as well, according to Eq. \\ref{Eq:MatrixMultiplybyRows2}.  Indeed, we have this in the C++11 standard library \\verb|<numeric>|, with \\verb|std::inner_product|.  At this point, take a look at the code in \\verb|Mat/Mat.h|, for the implementation of matrix multiplication given in the operator overloading of the \\verb|*| operator for class \\verb|Mat|, \n\\[\n\\verb|Mat<Type> operator*(const Mat<Type>& rhs)|\n\\].  Each row from matrix $A$, the $i$th row \\verb|A_i|, and each column from $B \\equiv $ \\verb|rhs|, \\verb|col|, has its inner product computed by \\verb|std::inner_product|.  
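\n\nTo make this prescription concrete, here is a minimal, self-contained sketch (a toy illustration under simplifying assumptions, \\emph{not} the actual code in \\verb|Mat/Mat.h|) of this row-times-transposed-row multiplication, using only \\verb|std::vector| and \\verb|std::inner_product|:\n\\begin{lstlisting}[language=C++]\n// Toy sketch: C = A * B, where Bt holds the rows of B^T (i.e. the\n// columns of B), so every inner product reads contiguous memory.\n#include <cassert>\n#include <iostream>\n#include <numeric>   // std::inner_product\n#include <vector>\n\nusing Rows = std::vector<std::vector<double>>;\n\nRows multiply(const Rows& A, const Rows& Bt) {\n    Rows C(A.size(), std::vector<double>(Bt.size(), 0.0));\n    for (std::size_t i = 0; i < A.size(); ++i) {\n        assert(A[i].size() == Bt[0].size()); // shared inner dimension P\n        for (std::size_t j = 0; j < Bt.size(); ++j) {\n            // C_ij = A_{i*} . (B^T)_{j*}\n            C[i][j] = std::inner_product(A[i].begin(), A[i].end(),\n                                         Bt[j].begin(), 0.0);\n        }\n    }\n    return C;\n}\n\nint main() {\n    Rows A  = {{1., 2.}, {3., 4.}};  // 2x2\n    Rows Bt = {{5., 7.}, {6., 8.}};  // B^T for B = {{5,6},{7,8}}\n    for (const auto& row : multiply(A, Bt)) {  // expect 19 22 / 43 50\n        for (double v : row) std::cout << v << ' ';\n        std::cout << '\\n';\n    }\n}\n\\end{lstlisting}\nStoring $B$'s columns up front as the rows of $B^T$ is the design choice that lets each \\verb|std::inner_product| call scan two contiguous \\verb|std::vector|s.  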
\n\n\\subsection{Transpose operation on matrices, as tensor products of vector spaces and dual vector spaces}\n\nConsider the mathematical formulation of the transpose operation:  \n\\begin{equation}\\label{Eq:Transpose}\n\\begin{gathered}\n\\text{Mat}_{\\mathbb{K}}(M,P) \\xrightarrow{ T } \\text{Mat}_{\\mathbb{K}}(P,M)  \\\\\n\\bigotimes_{j=1}^P(\\mathbb{K}^M) \\xrightarrow{ T } \\bigotimes_{j=1}^P (\\mathbb{K}^M)^*  \\\\\nA = (A_{*1}, A_{*2}, \\dots A_{*P}) \\xmapsto{T} A^T = (A^T_{1*}, A^T_{2*}, \\dots A^T_{P*})\n\\end{gathered}\n\\end{equation}\nThus, it's clear from Eq. \\ref{Eq:Transpose} that if we simply store the \"columns\" or \"column\" vectors of matrix $A$ as \"rows\" or dual vectors, we can read off these \"rows\" or dual vectors as the new rows of a new matrix $A^T$, the transpose of $A$.  \n\nThis is exactly what was implemented in \\verb|Mat.h| in the class template method \\verb|Mat<Type> T()|.  All we do is take the \"columns\" of a matrix, stored in the private member variable \\verb|Columns_|, assign them to the rows of the new matrix for the transpose, \\verb|new_Rows|, and then initialize a new matrix of the class template \\verb|Mat|, \\verb|Atranspose|.    \n\n\\section{Further isomorphisms between math and the code}\n\nTake a look at the \\verb|private| member variables of the class template \\verb|Mat| in \\verb|Mat/Mat.h|.  They include \n\\begin{itemize}\n\t\\item \\verb|std::array<unsigned int,2> Size_Dims_;|\n\t\\item \\verb|std::vector<std::vector<Type>> Rows_| \n\t\\item \\verb|std::vector<std::vector<Type>> Columns_| \n\t\\item \\verb|std::vector<Type> Entries_| \t\t\n\\end{itemize}\nThese are isomorphic, respectively, to the following: \n\\begin{itemize}\n\t\\item $(M,P)$ in $\\text{Mat}_{\\mathbb{K}}(M,P) \\ni A$\n\t\\item $(A_{1*}, A_{2*}, \\dots A_{i*} \\dots A_{M*}) \\in \\bigotimes_{i=1}^M (\\mathbb{K}^P)^*$\n\t\\item $(A^T_{1*}, A^T_{2*}, \\dots A^T_{j*} \\dots A^T_{P*}) \\in \\bigotimes_{j=1}^P (\\mathbb{K}^M)^*$ \n\t\\item $(A_{ij})=A \\in \\mathbb{K}^{M\\cdot P}$ (there is an obvious isomorphism between matrices and a vector space, i.e. $\\text{Mat}_{\\mathbb{K}}(M,P) \\cong \\mathbb{K}^{M\\cdot P}$).  \t\t\n\\end{itemize}  \n\nWith an understanding of these relations and the availability of these private member variables, one can implement further useful methods, including getting the values of the entries, getting a particular row (which is laid out contiguously on adjacent memory addresses), setting a specific row, and going from \\verb|Rows_| to a new \\verb|Entries_|.  Also, with this understanding, addition for matrices (matrices of a fixed size form an abelian group under addition, and square matrices a non-commutative ring) and scalar multiplication could be implemented in a memory-efficient way (i.e. saving on the number of reads into memory that are needed, and utilizing optimized algorithms in the standard library of C++11).  \n\n\\section{Concluding Remarks}\n\nI explicitly show an isomorphism between the mathematical formulation of matrices, and of the operations of matrix multiplication and transposition, as a tensor product of dual vectors (and equivalently a tensor product of vectors), and the C++11 implementation of the class template \\verb|Mat|.  I point out that by doing so, we've discovered a more read-memory efficient and more optimized way of doing matrix multiplication, taking the transpose, and accessing the values of the entries of the matrix, than a \"naive\" implementation.  
I was also able to utilize and showcase a number of new features from C++11: \\verb|std::vector| as a container, and the \\verb|<numeric>| header.  \n\nThe developer using this code or the framework for it should clearly be able to add new methods and features to the class template, as long as he or she understands the underlying mathematics.  In fact, we should agree upon the mathematics first and then try as much as we can to have a 1-to-1 correspondence between the math and the code, in object-oriented programming (OOP) design.  I argue this because it should make the code maintainable and clear, since we can all agree on the same underlying mathematics.  \n\nAlso, with this more efficient memory access, we have made matrix multiplication and the taking of the transpose for matrices not only faster, but more scalable.  \n\n\n\n%\\begin{thebibliography}{9}\n\n  \n%\\end{thebibliography}\n\n\n\n\\end{document}\n", "meta": {"hexsha": "2b5adb3975d26ed9f433d87fa6ad9e071df9e7bf", "size": 13304, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Cpp/Cpp14/MatrixclassMat/Mat_out.tex", "max_stars_repo_name": "ernestyalumni/CompPhys", "max_stars_repo_head_hexsha": "1f5d7559146a14a21182653b77fd35e6d6829855", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 70, "max_stars_repo_stars_event_min_datetime": "2017-07-24T04:09:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-24T16:00:41.000Z", "max_issues_repo_path": "Cpp/Cpp14/MatrixclassMat/Mat_out.tex", "max_issues_repo_name": "ernestyalumni/CompPhys", "max_issues_repo_head_hexsha": "1f5d7559146a14a21182653b77fd35e6d6829855", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-01-16T22:34:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-29T22:37:10.000Z", "max_forks_repo_path": "Cpp/Cpp14/MatrixclassMat/Mat_out.tex", "max_forks_repo_name": "ernestyalumni/CompPhys", "max_forks_repo_head_hexsha": "1f5d7559146a14a21182653b77fd35e6d6829855", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 40, "max_forks_repo_forks_event_min_datetime": "2017-01-24T19:18:42.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-01T07:13:35.000Z", "avg_line_length": 53.6451612903, "max_line_length": 857, "alphanum_fraction": 0.7176037282, "num_tokens": 4121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5857439025821886}}
{"text": "\n\\section{Projecting onto more than one vector}\n\n\nIn \\cref{sec:GEVD_overview,sec:GEVD_SNR}, we projected the multi-channel\ninput signal $\\z_t$ onto one optimal weight vector $\\what$, to yield a\none-dimensional output signal $y_t$ (see e.g. \\cref{eq:linear} or\n\\cref{fig:GEVD_principle}).\n\nWe may also project $\\z_t$ onto multiple weight vectors $\\w_i$ to obtain\nmultiple output channels, $y_{t,i}$. When the $k$ weight vectors are gathered\nas columns of a matrix $\\What \\in \\reals^{C \\cross k}$, and the different\noutputs are combined into one multichannel output signal $\\y_t \\in \\reals^k$,\n\\cref{eq:linear} becomes:\n\\[\n\\y_t = \\What \\z_t\n\\]\nThese multiple output signals can then be used as input to a further signal\nprocessing step. Alternatively, sharp wave-ripple events may be detected in\neach output signal separately, as described in \\cref{sec:GEVD_overview}.\nThese detections can then be combined using a voting scheme, to yield a final\ndetection only when enough output channels yield a detection within close\ntemporal proximity of each other.\n\nThe question is then how to find these different weight vectors $\\What$. As\nin \\cref{sec:GEVD_SNR}, we divide our input data into two data matrices $\\Sm\n\\in \\reals^{C \\cross N_s}$ and $\\Nm \\in \\reals^{C \\cross N_n}$, with $C$ the\nnumber of channels, and $N_s$ and $N_n$ the number of signal and noise\ntraining samples, respectively. The output data matrices are then:\n\\begin{align*}\n\\Y_s &= \\What^T \\Sm \\\\\n\\Y_n &= \\What^T \\Nm.\n\\end{align*}\n\nAs before, for each pair of output data vectors $\\y_{s,i}$ and $\\y_{n,i}$\n(i.e. rows $i$ of $\\Y_s$ and $\\Y_n$) we want to maximise the ratio of the\nvariance of the signal $\\y_{s,i}$ versus the variance of the noise\n$\\y_{n,i}$, i.e. we want to find (see \\cref{sec:GEVD_SNR}):\n%\n\\begin{equation}\n\\label{eq:maxwi}\n\\what_i = \\argmax_{\\w_i} \\frac{\\w_i^T \\Rss \\w_i}\n                              {\\w_i^T \\Rnn \\w_i},\n\\end{equation}\n% \nwith $\\Rss = \\frac{1}{N_s} \\Sm \\Sm^T$ and $\\Rnn = \\frac{1}{N_s} \\Nm \\Nm^T$\nthe signal and noise covariance matrices, as before.\n\nThe scale of $\\w_i$ does not matter; only the variance ratio in\n\\cref{eq:maxwi} does. \\Cref{eq:maxwi} is therefore equivalent to\n%\n\\begin{gather*}\n\\what_i = \\argmax_{\\w_i} \\w_i^T \\Rss \\w_i \\\\\n\\st \\w_i^T \\Rnn \\w_i = 1\n\\end{gather*}\n\nTo combine the different $\\w_i$ into one optimisation problem, we will ask to\noptimise the sum of the signal-to-noise variance ratios. In matrix form, this\nis written as:\n\\begin{gather*}\n\\What = \\argmax_{\\W} \\Tr(\\W^T \\Rss \\W) \\\\\n\\st \\diag{\\W^T \\Rnn \\W} = \\diag{\\I_k},\n\\end{gather*}\n%\nwhere $\\Tr(\\cdot)$ (for trace) denotes the sum of diagonal elements,\n$\\diag{\\cdot}$ denotes the vector of diagonal elements, and $\\I_k$ is the $k\n\\cross k$ identity matrix.\n\nThis maximisation problem yields identical $\\w_1 = \\w_2 = \\tdots = \\w_k$. If\nwe want different output channels, we need to introduce an additional\nconstraint. When we require the noise output channels to be uncorrelated\n(i.e. $\\Cov(\\y_{n,i}, \\y_{n,j}) = \\y_{n,i}^T \\y_{n,j} = 0$ for $i \\neq j$),\nour final optimisation problem becomes:\n%\n\\begin{gather}\n\\What = \\argmax_{\\W} \\Tr(\\W^T \\Rss \\W)  \\label{eq:opti_What} \\\\\n\\st \\W^T \\Rnn \\W = \\I_k.  
\nonumber\n\\end{gather}\n%\nIt can be shown that the solution $\\What$ to this problem can be found as the\nfirst $k$ generalised eigenvectors of $(\\Rss,\\Rnn)$, corresponding to the $k$\nlargest eigenvalues \\cite{Kokiopoulou2011}.\n\n\n\n\n\n\\section{Context}\n\nThe method described in this chapter finds linear combinations of input\nchannels that maximise the signal-to-noise ratios in the output channels. For\nbrevity, we will call this method \\textbf{LSM}, for Linear Signal-to-noise\nMaximisation.\\footnotemark{} This section briefly explores the wider context\nof this algorithm.\n\n\\footnotetext{Algorithmically, there is nothing more to LSM than calculating\na generalised eigenvalue decomposition, just as algorithmically, there is\nnothing more to principal component analysis (PCA) than calculating an\nordinary eigenvalue decomposition. Why invent new names then? LSM and PCA are\nlittle `wrapper' frameworks around the eigenvalue decompositions to usefully\ninterpret and reason about these operations when applied to actual data\nproblems (see the exposition in \\cref{sec:GEVD_overview,sec:GEVD_SNR}).}\n\n\n\n\\subsection{Similarity to PCA}\n\nThroughout this chapter, you may have noticed the strong similarities between\nLSM and principal component analysis (PCA). Both are linear dimensionality\nreduction methods (the number of output channels $k$ is smaller than the\nnumber of input channels $C$). PCA is based on the ordinary eigenvalue\ndecomposition (EVD) of the data covariance matrix; LSM is based on the\ngeneralised eigenvalue decomposition (GEVD) of the signal and noise data\ncovariance matrices. Both methods maximise a sum of output variances under an\northogonality constraint. And just as PCA can also be computed using a\nsingular value decomposition (SVD) of the data matrix (without making a\ndetour via the data covariance matrix), LSM can also be calculated using a\ngeneralised singular value decomposition (GSVD) of the signal and noise data\nmatrices \\cite{Howland2004}.\n\nThe difference between LSM (GEVD) and PCA (EVD) is that the latter is an\nunsupervised algorithm, while the former is a supervised algorithm (i.e. you\nneed to divide your training data into `signal' and `noise' samples). For a\nuseful overview of different supervised, unsupervised and semi-supervised\ndimensionality reduction methods, see \\cite{Kokiopoulou2011}.\\footnotemark{}\nOf particular relevance to this chapter is their section on linear\ndiscriminant analysis (LDA), which leads to the exact same optimisation\nproblem as \\cref{eq:argmax_R} and \\cref{eq:opti_What} (but for matrices\n$\\Rnn$ and $\\Rss$ that have a slightly different interpretation).\n\n\\footnotetext{This paper ties together many linear and quasi-linear\ndimensionality reduction methods. It shows how these methods relate to each\nother in a unifying framework that is based around trace optimisation\nproblems and (generalised) eigenvalue decompositions. 
Some of the methods\ndiscussed are PCA, linear discriminant analysis (LDA), locally linear\nembedding (LLE), Laplacian eigenmaps, multidimensional scaling (MDS),\nspectral clustering, and kernel methods.}\n\n\n\n\\subsection{Other applications}\n\nThe generalised eigenvalue decomposition (GEVD) has found many applications.\nExamples include blind source separation and beamforming\n\\cite{Tome2006,Warsitz2007}, speech enhancement \\cite{Doclo2002}, linear\nclassification (via LDA, as described above), vibration analysis\n\\cite{Zhou2007} and analysis of magneto-encephalography (MEG)\n\\cite{Sekihara1999}, electro-encephalography (EEG) \\cite{Blankertz2008} and\nmagnetic resonance imaging (MRI) \\cite{Zhang2015} data.\n\n\n\n\\subsection{Distributed computation}\n\nModern neural probes support both large electrode counts and high sampling\nfrequencies \\cite{Jun2017}, and may be used in real-time applications. The\nsignal processing algorithm used needs to process this large data stream fast\nenough to meet the real-time requirements.\n\nAn advantage of using a signal processing algorithm based on an (adaptive)\nordinary or generalised eigenvalue decomposition is that these\ndecompositions can be estimated in a distributed way\n\\cite{Bertrand2014,Bertrand2015}. This allows a network of distributed,\nembedded electronic circuits to process the neural probe data chunk-wise,\nwhich may enable faster total processing, and thus larger electrode counts\nand sampling frequencies, than possible with an algorithm that only works on\ncentralized data from all channels.\n\n\n\n\n\\section{Summary}\n\nWe have established that we can find the weight vector (the linear combination\nof input channels) that yields an output signal with a maximal\nsignal-to-noise variance ratio as the first generalised eigenvector $\\w_1$\nof the ordered covariance matrix pair $(\\Rss,\\Rnn)$, with $\\Rss$ and $\\Rnn$\nthe covariance matrices of signal and noise training data.\n\nAs a generalisation, we can also calculate a set of $k$ weight vectors that\nmaximise the sum of their respective output channel signal-to-noise variance\nratios (where the noise segments of each output channel are uncorrelated).\nAnalogously to the one output channel case, these weight vectors are found as\nthe first $k$ generalised eigenvectors of $(\\Rss, \\Rnn)$. 
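As a concrete\nnumerical sketch (assuming the Eigen C++ library purely for illustration;\nthe chapter itself is tool-agnostic), such weights can be read off directly\nfrom a generalised eigensolver:\n\\begin{verbatim}\n// Minimal sketch: LSM weights as the top-k generalised\n// eigenvectors of the pair (Rss, Rnn).\n#include <Eigen/Dense>\n#include <iostream>\n\nint main() {\n  const int C = 4;  // input channels\n  const int k = 2;  // output channels\n  // Placeholder covariances; in practice, estimate them from\n  // signal and noise training segments.\n  Eigen::MatrixXd M   = Eigen::MatrixXd::Random(C, C);\n  Eigen::MatrixXd Rss = M * M.transpose();  // symmetric PSD\n  Eigen::MatrixXd Rnn = Eigen::MatrixXd::Identity(C, C);\n  // Solves Rss v = lambda Rnn v; eigenvalues come out in\n  // increasing order, eigenvectors satisfy V^T Rnn V = I.\n  Eigen::GeneralizedSelfAdjointEigenSolver<Eigen::MatrixXd>\n      ges(Rss, Rnn);\n  // Largest SNR ratios: the last k columns.\n  Eigen::MatrixXd What = ges.eigenvectors().rightCols(k);\n  std::cout << What << std::endl;\n}\n\\end{verbatim}\nNote that the $\\Rnn$-orthonormality of the returned eigenvectors is exactly\nthe constraint $\\What^T \\Rnn \\What = \\I_k$ of \\cref{eq:opti_What}.\n\n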
We have given some\ninsight into how these generalised eigenvectors are calculated, and into how\nthis signal-to-noise maximisation method fits into the literature.\n", "meta": {"hexsha": "1aec9ac22f4ca0d100b699768c5abdda8fbfa198", "size": 8503, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modules/Scraps/GEVD/Coda.tex", "max_stars_repo_name": "tfiers/master-thesis", "max_stars_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-23T01:39:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-23T01:39:24.000Z", "max_issues_repo_path": "modules/Scraps/GEVD/Coda.tex", "max_issues_repo_name": "tfiers/master-thesis", "max_issues_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 46, "max_issues_repo_issues_event_min_datetime": "2018-09-18T16:38:12.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-10T22:37:35.000Z", "max_forks_repo_path": "modules/Scraps/GEVD/Coda.tex", "max_forks_repo_name": "tfiers/master-thesis", "max_forks_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.7150537634, "max_line_length": 77, "alphanum_fraction": 0.7673762202, "num_tokens": 2278, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.8080672066194946, "lm_q1q2_score": 0.5857438992317084}}
{"text": "%!TEX root = ceres.tex\n\\chapter{Theory}\n\\label{chapter:theory}\nEffective use of Ceres requires some familiarity with the underlying theory. In this chapter we will provide a brief exposition to Ceres's approach to solving non-linear least squares optimization. Much of the material in this section comes from~\\cite{Agarwal10bal,wu2011multicore,kushal2012}.\n\n\\section{The Levenberg-Marquardt Algorithm}\n\nThe Levenberg-Marquardt algorithm~\\cite{levenberg1944method, marquardt1963algorithm} is the most popular algorithm for solving non-linear least squares problems. Ceres implements an exact step~\\cite{madsen2004methods} and an inexact step variant of the Levenberg-Marquardt algorithm~\\cite{wright1985inexact,nash1990assessing}.  We begin by taking a brief look at how the Levenberg-Marquardt algorithm works.\n\n\nLet $x \\in \\mathbb{R}^{n}$ be an $n$-dimensional vector of variables, and\n$ F(x) = \\left[f_1(x),   \\hdots,  f_{m}(x) \\right]^{\\top}$ be a $m$-dimensional function of $x$.  We are interested in solving the following optimization problem,\n\\begin{equation}\n        \\arg \\min_x \\frac{1}{2}\\|F(x)\\|^2\\ .\n        \\label{eq:nonlinsq}\n\\end{equation}\nThe Jacobian $J(x)$ of $F(x)$ is an $m\\times n$ matrix, where $J_{ij}(x) = \\partial_j f_i(x)$  and the gradient vector $g(x) = \\nabla  \\frac{1}{2}\\|F(x)\\|^2 = J(x)^\\top F(x)$. Since the efficient global optimization of~\\eqref{eq:nonlinsq} for general $F(x)$ is an intractable problem, we will have to settle for finding a local minimum.\n\nThe general strategy when solving non-linear optimization problems is to solve a sequence of approximations to the original problem~\\cite{nocedal2000numerical}. At each iteration, the approximation is solved to determine a correction $\\Delta x$ to the vector $x$. For non-linear least squares, an approximation can be constructed by using the linearization $F(x+\\Delta x) \\approx F(x) + J(x)\\Delta x$, which leads to the following linear least squares  problem:\n\\begin{equation}\n         \\min_{\\Delta x} \\frac{1}{2}\\|J(x)\\Delta x + F(x)\\|^2\n        \\label{eq:linearapprox}\n\\end{equation}\nUnfortunately, na\\\"ively solving a sequence of these problems and updating $x \\leftarrow x+ \\Delta x$ leads to an algorithm that may not converge.  To get a convergent algorithm, we need to control the size of the step $\\Delta x$.  One way to do this is to introduce a regularization term:\n\\begin{equation}\n         \\min_{\\Delta x} \\frac{1}{2}\\|J(x)\\Delta x + F(x)\\|^2 + \\mu \\|D(x)\\Delta x\\|^2\\ .\n        \\label{eq:lsqr}\n\\end{equation}\nHere, $D(x)$ is a non-negative diagonal matrix, typically the square root of the diagonal of the matrix $J(x)^\\top J(x)$ and $\\mu$ is a non-negative parameter that controls the strength of regularization. It is straightforward to show that the step size $\\|\\Delta x\\|$ is inversely related to $\\mu$. Levenberg-Marquardt updates the value of $\\mu$ at each step based on how well the Jacobian $J(x)$ approximates $F(x)$. 
The quality of this fit is measured by the ratio of  the actual decrease in the objective function to the decrease in the value of the linearized model $L(\\Delta x) = \\frac{1}{2}\\|J(x)\\Delta x + F(x)\\|^2$.\n\\begin{equation}\n\\rho = \\frac{\\|F(x + \\Delta x)\\|^2 - \\|F(x)\\|^2}{\\|J(x)\\Delta x + F(x)\\|^2 - \\|F(x)\\|^2}\n\\end{equation}\n\nIf $\\rho$ is large, that means the linear model is a good approximation to the non-linear model and it is worth trusting it more in the computation of $\\Delta x$, so we decrease $\\mu$. If $\\rho$ is small, the linear model is a poor approximation and $\\mu$ is increased. This kind of reasoning is the basis of Trust-region methods, of which Levenberg-Marquardt is an early example~\\cite{nocedal2000numerical}.\n\nBefore going further, let us make some notational simplifications. We will assume that the matrix $\\sqrt{\\mu} D$ has been concatenated at the bottom of the matrix $J$, and similarly a vector of zeros has been added to the bottom of the vector $f$, and the rest of our discussion will be in terms of $J$ and $f$, \\ie the linear least squares problem.\n\\begin{align}\n \\min_{\\Delta x} \\frac{1}{2} \\|J(x)\\Delta x + f(x)\\|^2 .\n \\label{eq:simple}\n\\end{align}\nFurther, let $H(x)= J(x)^\\top J(x)$ and $g(x) = -J(x)^\\top  f(x)$. For notational convenience let us also drop the dependence on $x$. Then it is easy to see that solving~\\eqref{eq:simple} is equivalent to solving the {\\em normal equations}\n\\begin{align}\nH \\Delta x  &= g \\label{eq:normal}\n\\end{align}\n\nFor all but the smallest problems the solution of~\\eqref{eq:normal} in each iteration of the Levenberg-Marquardt algorithm is the dominant computational cost in Ceres. Ceres provides a number of different options for solving~\\eqref{eq:normal}.\n\n\n\\section{\\texttt{DENSE\\_QR}}\nFor small problems (a couple of hundred parameters and a few thousand residuals) with relatively dense Jacobians, \\texttt{DENSE\\_QR} is the method of choice~\\cite{bjorck1996numerical}. Let $J = QR$ be the QR-decomposition of $J$, where $Q$ is an orthonormal matrix and $R$ is an upper triangular matrix~\\cite{trefethen1997numerical}. Then it can be shown that the solution to~\\eqref{eq:normal} is given by\n\\begin{align}\n\t\\Delta x^* = -R^{-1}Q^\\top f\n\\end{align}\nCeres uses \\texttt{Eigen}'s dense QR decomposition routines.\n\n\n\\section{\\texttt{SPARSE\\_NORMAL\\_CHOLESKY}}\nLarge non-linear least squares problems are usually sparse. In such cases, using a dense QR factorization is inefficient. Let $H = R^\\top R$ be the Cholesky factorization of the normal equations, where $R$ is an upper triangular matrix, then the  solution to ~\\eqref{eq:normal} is given by\n\\begin{equation}\n\t\\Delta x^* = R^{-1} R^{-\\top} g.\n\\end{equation}\nThe observant reader will note that the $R$ in the Cholesky factorization of $H$ is the same upper triangular matrix $R$ in the QR factorization of $J$. Since $Q$ is an orthonormal matrix, $J=QR$ implies that $J^\\top J = R^\\top Q^\\top Q R = R^\\top R$.\n\n\nThere are two variants of Cholesky factorization -- sparse and dense. \\texttt{SPARSE\\_NORMAL\\_CHOLESKY}, as the name implies, performs a sparse Cholesky factorization of the normal equations. This leads to substantial savings in time and memory for large sparse problems. 
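The following toy\nsketch illustrates such an exact step using Eigen's sparse Cholesky module as\na stand-in (it is an illustrative sketch, \\emph{not} Ceres internals; the\nJacobian and residual are placeholders):\n\\begin{verbatim}\n// Sketch: one exact step, dx = R^{-1} R^{-T} g, via sparse Cholesky.\n#include <Eigen/Sparse>\n#include <Eigen/SparseCholesky>\n#include <iostream>\n\nint main() {\n  // Placeholder Jacobian J and residual f; a real solver assembles\n  // these from the cost functions.\n  Eigen::MatrixXd Jd(3, 2);\n  Jd << 1, 0,\n        0, 1,\n        1, 1;\n  Eigen::VectorXd f(3);\n  f << 1, 2, 3;\n  Eigen::SparseMatrix<double> J = Jd.sparseView();\n\n  Eigen::SparseMatrix<double> H = J.transpose() * J;  // H = J^T J\n  Eigen::VectorXd g = -(J.transpose() * f);           // g = -J^T f\n\n  Eigen::SimplicialLLT<Eigen::SparseMatrix<double> > llt(H);\n  Eigen::VectorXd dx = llt.solve(g);\n  std::cout << dx.transpose() << std::endl;           // expect -1 -2\n}\n\\end{verbatim}\n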
We use Professor Tim Davis's \\texttt{CHOLMOD} library (part of the \\texttt{SuiteSparse} package) to perform the sparse Cholesky factorization~\\cite{chen2006acs}.\n\n\\section{\\texttt{DENSE\\_SCHUR} \\& \\texttt{SPARSE\\_SCHUR}}\nWhile it is possible to use \\texttt{SPARSE\\_NORMAL\\_CHOLESKY} to solve bundle adjustment problems, bundle adjustment problems have a special structure, and a more efficient scheme for solving~\\eqref{eq:normal} can be constructed.\n\nSuppose that the SfM problem consists of $p$ cameras and $q$ points and the variable vector $x$ has the  block structure $x = [y_{1},\\hdots,y_{p},z_{1},\\hdots,z_{q}]$, where $y$ and $z$ correspond to camera and point parameters, respectively.  Further, let the camera blocks be of size $c$ and the point blocks be of size $s$ (for most problems $c$ =  $6$--$9$ and $s = 3$). Ceres does not impose any constancy requirement on these block sizes, but choosing them to be constant simplifies the exposition.\n\nA key characteristic of the bundle adjustment problem is that there is no term $f_{i}$ that includes two or more point blocks.  This in turn implies that the matrix $H$ is of the form\n\\begin{equation}\n        H =  \\left[\n                \\begin{matrix} B & E\\\\ E^\\top & C\n                \\end{matrix}\n                \\right]\\ ,\n\\label{eq:hblock}\n\\end{equation}\nwhere, $B \\in \\reals^{pc\\times pc}$ is a block sparse matrix with $p$ blocks of size $c\\times c$ and  $C \\in \\reals^{qs\\times qs}$ is a block diagonal matrix with $q$ blocks of size $s\\times s$. $E \\in \\reals^{pc\\times qs}$ is a general block sparse matrix, with a block of size $c\\times s$ for each observation. Let us now block partition $\\Delta x = [\\Delta y,\\Delta z]$ and $g=[v,w]$ to restate~\\eqref{eq:normal} as the block structured linear system\n\\begin{equation}\n        \\left[\n                \\begin{matrix} B & E\\\\ E^\\top & C\n                \\end{matrix}\n                \\right]\\left[\n                        \\begin{matrix} \\Delta y \\\\ \\Delta z\n                        \\end{matrix}\n                        \\right]\n                        =\n                        \\left[\n                                \\begin{matrix} v\\\\ w\n                                \\end{matrix}\n                                \\right]\\ ,\n\\label{eq:linear2}\n\\end{equation}\nand apply Gaussian elimination to it. As we noted above, $C$ is a block diagonal matrix, with small diagonal blocks of size $s\\times s$.\nThus, calculating the inverse of $C$ by inverting each of these blocks is  cheap. This allows us to  eliminate $\\Delta z$ by observing that $\\Delta z = C^{-1}(w - E^\\top \\Delta y)$, giving us\n\\begin{equation}\n        \\left[B - EC^{-1}E^\\top\\right] \\Delta y = v - EC^{-1}w\\ .  \\label{eq:schur}\n\\end{equation}\nThe matrix\n\\begin{equation}\nS = B - EC^{-1}E^\\top\\ ,\n\\end{equation}\nis the Schur complement of $C$ in $H$. It is also known as the {\\em reduced camera matrix}, because the only variables participating in~\\eqref{eq:schur} are the ones corresponding to the cameras. $S \\in \\reals^{pc\\times pc}$ is a block structured symmetric positive definite matrix, with blocks of size $c\\times c$. 
The block $S_{ij}$ corresponding to the pair of images $i$ and $j$ is non-zero if and only if the two images observe at least one common point.\n\nNow, \\eqref{eq:linear2}~can  be solved by first forming $S$, solving for $\\Delta y$, and then back-substituting $\\Delta y$ to obtain the value of $\\Delta z$.\nThus, the solution of what was an $n\\times n$, $n=pc+qs$ linear system is reduced to the inversion of the block diagonal matrix $C$, a few matrix-matrix and matrix-vector multiplies, and the solution of block sparse $pc\\times pc$ linear system~\\eqref{eq:schur}.  For almost all  problems, the number of cameras is much smaller than the number of points, $p \\ll q$, thus solving~\\eqref{eq:schur} is significantly cheaper than solving~\\eqref{eq:linear2}. This is the {\\em Schur complement trick}~\\cite{brown-58}.\n\nThis still leaves open the question of solving~\\eqref{eq:schur}. The\nmethod of choice for solving symmetric positive definite systems\nexactly is via the Cholesky\nfactorization~\\cite{trefethen1997numerical} and depending upon the\nstructure of the matrix, there are, in general, two options. The first\nis direct factorization, where we store and factor $S$ as a dense\nmatrix~\\cite{trefethen1997numerical}. This method has $O(p^2)$ space complexity and $O(p^3)$ time\ncomplexity and is only practical for problems with up to a few hundred\ncameras. Ceres implements this strategy as the \\texttt{DENSE\\_SCHUR} solver.\n\n\n But, $S$ is typically a fairly sparse matrix, as most images\nonly see a small fraction of the scene. This leads us to the second\noption: sparse direct methods. These methods store $S$ as a sparse\nmatrix, use row and column re-ordering algorithms to maximize the\nsparsity of the Cholesky decomposition, and focus their compute effort\non the non-zero part of the factorization~\\cite{chen2006acs}.\nSparse direct methods, depending on the exact sparsity structure of the Schur complement,\nallow bundle adjustment algorithms to significantly scale up over those based on dense\nfactorization. Ceres implements this strategy as the \\texttt{SPARSE\\_SCHUR} solver.\n\n\n\\section{\\texttt{ITERATIVE\\_SCHUR}}\n\nThe factorization methods described above are based on computing an exact solution of~\\eqref{eq:lsqr}. But it is not clear if an exact solution of~\\eqref{eq:lsqr} is necessary at each step of the LM algorithm to solve~\\eqref{eq:nonlinsq}. In fact, we have already seen evidence that this may not be the case, as~\\eqref{eq:lsqr} is itself a regularized version of~\\eqref{eq:linearapprox}. Indeed, it is possible to construct non-linear optimization algorithms in which the linearized problem is solved approximately. These algorithms are known as inexact Newton or truncated Newton methods~\\cite{nocedal2000numerical}.\n\nAn inexact Newton method requires two ingredients. First, a cheap method for approximately solving systems of linear equations. Typically an iterative linear solver like the Conjugate Gradients method is used for this purpose~\\cite{nocedal2000numerical}. Second, a termination rule for the iterative solver. A typical termination rule is of the form\n\\begin{equation}\n        \\|H(x) \\Delta x + g(x)\\| \\leq \\eta_k \\|g(x)\\|. \\label{eq:inexact}\n\\end{equation}\nHere, $k$ indicates the Levenberg-Marquardt iteration number and $0 < \\eta_k <1$ is known as the forcing sequence.  
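In code, such a rule simply amounts to truncating a Conjugate Gradients loop early; the following dense sketch (illustrative only, written against the Eigen API, and \\emph{not} Ceres code) makes this concrete:\n\\begin{verbatim}\n// Sketch: CG on H dx = g, truncated once the residual norm\n// ||g - H dx|| drops below eta * ||g|| (cf. the rule above).\n#include <Eigen/Dense>\n#include <iostream>\n\nEigen::VectorXd truncated_cg(const Eigen::MatrixXd& H,\n                             const Eigen::VectorXd& g, double eta) {\n  Eigen::VectorXd dx = Eigen::VectorXd::Zero(g.size());\n  Eigen::VectorXd r = g;   // residual for dx = 0\n  Eigen::VectorXd p = r;   // search direction\n  double rTr = r.squaredNorm();\n  const double tol = eta * g.norm();\n  for (int i = 0; i < g.size() && r.norm() > tol; ++i) {\n    Eigen::VectorXd Hp = H * p;\n    const double alpha = rTr / p.dot(Hp);\n    dx += alpha * p;\n    r  -= alpha * Hp;\n    const double rTr_new = r.squaredNorm();\n    p = r + (rTr_new / rTr) * p;\n    rTr = rTr_new;\n  }\n  return dx;\n}\n\nint main() {\n  Eigen::MatrixXd H(2, 2);\n  H << 2, 1,\n       1, 2;               // symmetric positive definite\n  Eigen::VectorXd g(2);\n  g << -4, -5;\n  std::cout << truncated_cg(H, g, 0.1).transpose() << std::endl;\n}\n\\end{verbatim}\n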
Wright \\& Holt \\cite{wright1985inexact} prove that a truncated Levenberg-Marquardt algorithm that uses an inexact Newton step based on~\\eqref{eq:inexact} converges for any sequence $\\eta_k \\leq \\eta_0 < 1$ and the rate of convergence depends on the choice of the forcing sequence $\\eta_k$.\n\n\nThe convergence rate of Conjugate Gradients  for solving~\\eqref{eq:normal} depends on the distribution of eigenvalues of $H$~\\cite{saad2003iterative}. A useful upper bound is $\\sqrt{\\kappa(H)}$, where $\\kappa(H)$ is the condition number of the matrix $H$. For most bundle adjustment problems, $\\kappa(H)$ is high and a direct application of Conjugate Gradients to~\\eqref{eq:normal} results in extremely poor performance.\n\nThe solution to this problem is to replace~\\eqref{eq:normal} with a {\\em preconditioned} system.  Given a linear system $Ax = b$ and a preconditioner $M$, the preconditioned system is given by $M^{-1}Ax = M^{-1}b$. The resulting algorithm is known as the Preconditioned Conjugate Gradients algorithm (PCG) and its  worst case complexity now depends on the condition number of the {\\em preconditioned} matrix $\\kappa(M^{-1}A)$.\n\n\\subsection{Preconditioning}\n\nThe computational cost of using a preconditioner $M$ is the cost of computing $M$ and evaluating the product $M^{-1}y$ for arbitrary vectors $y$. Thus, there are two competing factors to consider: how much of $H$'s structure is captured by $M$ so that the condition number $\\kappa(M^{-1}H)$ is low, and the computational cost of constructing and using $M$.  The ideal preconditioner would be one for which $\\kappa(M^{-1}A) =1$. $M=A$ achieves this, but it is not a practical choice, as applying this preconditioner would require solving a linear system equivalent to the unpreconditioned problem.  It is usually the case that the more information $M$ has about $H$, the more expensive it is to use. For example, Incomplete Cholesky factorization based preconditioners  have much better convergence behavior than the Jacobi preconditioner, but are also much more expensive.\n\nThe simplest of all preconditioners is the diagonal or Jacobi preconditioner, \\ie,  $M=\\operatorname{diag}(A)$, which for block structured matrices like $H$ can be generalized to the block Jacobi preconditioner. Another option is to apply PCG to the reduced camera matrix $S$ instead of $H$. One reason to do this is that $S$ is a much smaller matrix than $H$, but more importantly, it can be shown that $\\kappa(S)\\leq \\kappa(H)$.  Ceres implements PCG on $S$ as the \\texttt{ITERATIVE\\_SCHUR} solver. When the user chooses \\texttt{ITERATIVE\\_SCHUR} as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.\n\n\nThere are two obvious choices for block diagonal preconditioners for $S$: the block diagonal of the matrix $B$~\\cite{mandel1990block}, and the block diagonal of $S$, \\ie the block Jacobi preconditioner for $S$. Ceres implements both of these preconditioners and refers to them as  \\texttt{JACOBI} and \\texttt{SCHUR\\_JACOBI} respectively.\n\nAs discussed earlier, the cost of forming and storing the Schur complement $S$ can be prohibitive for large problems. Indeed, for an inexact Newton solver that computes $S$ and runs PCG on it, almost all of its time is spent in constructing $S$; the time spent inside the PCG algorithm is negligible in comparison. 
Because  PCG only needs access to $S$ via its product with a vector, one way to evaluate $Sx$ is to observe that\n\\begin{align}\n  x_1 &= E^\\top x \\notag \\\\\n  x_2 &= C^{-1} x_1 \\notag\\\\\n  x_3 &= Ex_2 \\notag\\\\\n  x_4 &= Bx \\notag\\\\\n  Sx &= x_4 - x_3\\ .\\label{eq:schurtrick1}\n\\end{align}\nThus, we can run PCG on $S$ with the same computational effort per iteration as PCG on $H$, while reaping the benefits of a more powerful preconditioner. In fact, we do not even need to compute $H$; \\eqref{eq:schurtrick1} can be implemented using just the columns of $J$.\n\nEquation~\\eqref{eq:schurtrick1} is closely related to {\\em Domain Decomposition methods} for solving large linear systems that arise in structural engineering and partial differential equations. In the language of Domain Decomposition, each point in a bundle adjustment problem is a domain, and the cameras form the interface between these domains. The iterative solution of the Schur complement then falls within the sub-category of techniques known as Iterative Sub-structuring~\\cite{saad2003iterative,mathew2008domain}.\n\nFor bundle adjustment problems, particularly those arising in reconstruction from community photo collections, more effective preconditioners can be constructed by analyzing and exploiting the camera-point visibility structure of the scene~\\cite{kushal2012}. Ceres implements the two visibility based preconditioners described by Kushal \\& Agarwal as \\texttt{CLUSTER\\_JACOBI} and \\texttt{CLUSTER\\_TRIDIAGONAL}. These are fairly new preconditioners and Ceres's implementation of them is in its early stages and is not as mature as the other preconditioners described above.\n\n\n\\section{\\texttt{CGNR}}\nFor general sparse problems, if the problem is too large for \\texttt{CHOLMOD}, or \\texttt{SuiteSparse} is not linked into Ceres, another option is the \\texttt{CGNR} solver. This solver uses the Conjugate Gradients solver on the {\\em normal equations}, but without forming the normal equations explicitly. It exploits the relation\n\\begin{align}\n\tH x = J^\\top J x = J^\\top(J x)\n\\end{align}\nCurrently only the \\texttt{JACOBI} preconditioner is available for use with this solver. It uses the block diagonal of $H$ as a preconditioner. \n\n\\section{Ordering}\nAll three of the Schur based solvers depend on the user indicating to the solver which of the parameter blocks correspond to the points and which correspond to the cameras. Ceres refers to them as \\texttt{e\\_block}s and \\texttt{f\\_block}s. The only constraint on \\texttt{e\\_block}s is that there should be no term in the objective function with two or more \\texttt{e\\_block}s.\n\nAs we saw in Section~\\ref{sec:tutorial:bundleadjustment}, there are two ways to indicate \\texttt{e\\_block}s to Ceres. The first is to explicitly create an ordering vector \\texttt{Solver::Options::ordering} containing the parameter blocks such that all the \\texttt{e\\_block}s/points occur before the \\texttt{f\\_block}s, and setting \\texttt{Solver::Options::num\\_eliminate\\_blocks} to the number of \\texttt{e\\_block}s.\n\nFor some problems this is an easy thing to do and we recommend its use. In some problems though, this is onerous and it would be better if Ceres could automatically determine \\texttt{e\\_block}s. 
Setting \\texttt{Solver::Options::ordering\\_type} to \\texttt{SCHUR} achieves this.\n\nThe \\texttt{SCHUR} ordering algorithm is based on the observation that\nthe constraint that no two \\texttt{e\\_block}s co-occur in a residual\nblock means that if we were to treat the sparsity structure of the\nblock matrix $H$ as a graph, then the set of \\texttt{e\\_block}s is an\nindependent set in this graph. The larger the number of\n\\texttt{e\\_block}s, the smaller the size of the Schur complement $S$. Indeed, the reason Schur based solvers are so efficient at solving bundle adjustment problems is that the number of points in a bundle adjustment problem is usually an order of magnitude or two larger than the number of cameras.\n\nThus, the aim of the \\texttt{SCHUR} ordering algorithm is to identify the largest independent set in the graph of $H$. Unfortunately, this is an NP-hard problem. But there is a  greedy approximation algorithm that performs well~\\cite{li2007miqr} and we use it to identify \\texttt{e\\_block}s in Ceres.\n\n\\section{Automatic Differentiation}\nTBD\n\\section{Loss Function}\nTBD\n\\section{Local Parameterizations}\nTBD", "meta": {"hexsha": "35a1bbf3f866d7221467a3ae367d855cd72d7e76", "size": 20256, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/theory.tex", "max_stars_repo_name": "cvfish/ceres-solver", "max_stars_repo_head_hexsha": "087462a90dd1c23ac443501f3314d0fcedaea5f7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2015-11-25T13:51:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T06:26:58.000Z", "max_issues_repo_path": "docs/theory.tex", "max_issues_repo_name": "kashif/ceres-solver", "max_issues_repo_head_hexsha": "087462a90dd1c23ac443501f3314d0fcedaea5f7", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/theory.tex", "max_forks_repo_name": "kashif/ceres-solver", "max_forks_repo_head_hexsha": "087462a90dd1c23ac443501f3314d0fcedaea5f7", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2016-03-31T15:34:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T19:29:27.000Z", "avg_line_length": 101.7889447236, "max_line_length": 869, "alphanum_fraction": 0.7450138231, "num_tokens": 5443, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.72487026428967, "lm_q1q2_score": 0.5857438946518088}}
{"text": "% Created 2021-08-24 Tue 19:02\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\usepackage{amssymb}\n\\usepgfplotslibrary{groupplots}\n\\newcommand*{\\shift}{\\operatorname{q}}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Process Automation Laboratory - Modeling first-order systems}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Process Automation Laboratory - Modeling first-order systems},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Fitting first-order model}\n\\label{sec:org2a0f0ec}\n\\begin{frame}[label={sec:org01d4e0f}]{First-order system example: Level control of a tank}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/tank-with-hole-no-variables}\n\\end{center}\n\nWhat is the \\alert{input signal} and \\alert{output signal} to the system?\n\\end{frame}\n\n\n\n\\begin{frame}[label={sec:orga76c6fa}]{First-order system example: Level control of a tank}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/tank-with-hole-simple}\n\\end{center}\n\n\\begin{align*}\n\\frac{d}{dt} (Ah) &=  z(t) - x(t) = z(t) - a \\sqrt{2gh}\\quad \\Rightarrow\\\\\n\\frac{d}{dt} h(t) &= - \\frac{a\\sqrt{2g}}{A} \\sqrt{h(t)} + \\frac{1}{A} z(t)\n\\end{align*}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgeaac400}]{Intuition}\n\\includegraphics[width=0.2\\linewidth]{../../figures/tank-with-hole-no-variables}\n\n\\alert{Individual activity} A constant inflow has been present since forever, but at time \\(t_1\\) the flow in is suddenly shut off. 
Which of the responses of the water level \\(h(t)\\) below is correct?\n\n\\begin{tikzpicture}\n\\small\n\n\\begin{axis}[\nwidth=7cm,\nheight=2.5cm,\nxlabel={$t$},\nylabel={$h(t)$},\nxmin=-3.5,\nxmax=10.5,\nytick = {0},\nxtick = {0},\nxticklabels = {$t_1$},\n]\n\\addplot+[black, no marks, domain=-4:10, samples=400,variable=k] { (k < 0) + (k>0)*(1+exp(-4))/(1+exp(4*(0.5*k-1)))};\n\n\\node[black!40!red] at (axis cs: 5, 0.5) {\\huge 1};\n\\end{axis}\n\n\\begin{axis}[\nxshift=7cm,\nwidth=7cm,\nheight=2.5cm,\nxlabel={$t$},\nylabel={$h(t)$},\nxmin=-3.5,\nxmax=10.5,\nytick = {0},\nxtick = {0},\nxticklabels = {$t_1$},\n]\n\\addplot+[black, no marks, domain=-4:10, samples=400,variable=k] { (k<0) + ((k>=0) - (k>4))*(1/4*(4-k)) };\n\\node[black!40!red] at (axis cs: 5, 0.5) {\\huge 2};\n\\end{axis}\n\n\\begin{axis}[\nxshift=0cm,\nyshift=-2.5cm,\nwidth=7cm,\nheight=2.5cm,\nxlabel={$t$},\nylabel={$h(t)$},\nxmin=-3.5,\nxmax=10.5,\nytick = {0},\nxtick = {0},\nxticklabels = {$t_1$},\n]\n\\addplot+[black, no marks, domain=-4:10, samples=400,variable=k] { (k<0) + (k>0)*exp(-0.9*k)};\n\\node[black!40!red] at (axis cs: 5, 0.5) {\\huge 3};\n\\end{axis}\n\n\\begin{axis}[\nxshift=7cm,\nyshift=-2.5cm,\nwidth=7cm,\nheight=2.5cm,\nxlabel={$t$},\nylabel={$h(t)$},\nxmin=-3.5,\nxmax=10.5,\nytick = {0},\nxtick = {0},\nxticklabels = {$t_1$},\n]\n\\addplot+[black, no marks, domain=-4:10, samples=400,variable=k] { (k<0) + ((k>=0) - (k>4))*(1-1/16*pow(-k,2)) };\n\\node[black!40!red] at (axis cs: 5, 0.5) {\\huge 4};\n\\end{axis}\n\n\n\\end{tikzpicture}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgc81a87f}]{Deviation variables}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/tank-with-hole}\n\\end{center}\n\nFlow in: \\(z(t) = z_0 + w(t)\\). Level of water: \\(h(t) = h_0 + y(t)\\). The constants \\(h_0\\) and \\(z_0\\) define an \\emph{operating point}.\n\n\\begin{align*}\n\\frac{d}{dt} h(t) &= - \\frac{a\\sqrt{2g}}{A} \\sqrt{h(t)} + \\frac{1}{A} z(t)\n\\end{align*}\n\n\n\\alert{Individual activity} Given \\(h_0\\) determine the operating point for the inflow, \\(z_0\\), such that the system is in equilibrium at the operating point.\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org5f0b24a}]{Intuition}\nWhich change \\(y(t)\\) in the water level corresponds to a step change \\(w(t)\\) in the inflow? 
\n\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/dc-response-exercise}\n\\end{center}\n\\end{frame}\n\n\n\n\\begin{frame}[label={sec:orga6c9c55}]{Fitting a first-order model}\nAssuming a plant model of first-order with time-constant \\(T\\)\n\\[  \\quad \\textcolor{green!50!black}{Y(s)} = \\frac{K}{sT + 1}\\textcolor{blue!80!black}{U(s)} \\quad \\overset{U(s) = \\frac{u_f}{s}}{\\Longrightarrow} \\quad \\textcolor{green!50!black}{y(t)} = u_f K\\big( 1 - \\mathrm{e}^{-\\frac{t}{T}}\\big)u_H(t)\\]\n\\def\\Tcnst{3}\n\\def\\tdelay{0.0}\n\\def\\ggain{2}\n\\def\\uampl{0.8}\n\\pgfmathsetmacro{\\yfinal}{\\uampl*\\ggain}\n\\pgfmathsetmacro{\\yone}{0.283*\\yfinal}\n\\pgfmathsetmacro{\\ytwo}{0.632*\\yfinal}\n\\pgfmathsetmacro{\\tone}{\\tdelay + \\Tcnst/3}\n\\pgfmathsetmacro{\\two}{\\tdelay + \\Tcnst}\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\begin{axis}[\n    width=14cm,\n    height=4.5cm,\n    grid = both,\n    xtick = {0,  \\two},\n    xticklabels = {0, $T$},\n    ytick = {0, \\ytwo, \\uampl, \\yfinal},\n    yticklabels = {0,  $ $, $u_f$, $y_f$},\n    xmin = -0.2,\n    %minor y tick num=9,\n    %minor x tick num=9,\n    %every major grid/.style={red, opacity=0.5},\n    xlabel = {$t$},\n    ]\n      \\addplot [thick, green!50!black, no marks, domain=0:10, samples=100] {\\uampl*\\ggain*(x>\\tdelay)*(1 - exp(-(x-\\tdelay)/\\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};\n      \\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\\uampl) (10,\\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};\n    \\end{axis}\n  \\end{tikzpicture}\n\\end{center}\n\n\\alert{Individual activity} Evaluate the response \\(y(t)\\) at the time instant \\(t=T\\) and for \\(t\\to\\infty\\)!\n\\end{frame}\n\n\\begin{frame}[label={sec:org41c055a}]{Fitting a first-order model}\nAssuming a plant model of first-order with time-constant \\(T\\)\n\\[  \\quad \\textcolor{green!50!black}{Y(s)} = \\frac{K}{sT + 1}\\textcolor{blue!80!black}{U(s)} \\quad \\overset{U(s) = \\frac{u_f}{s}}{\\Longrightarrow} \\quad \\textcolor{green!50!black}{y(t)} = u_f K\\big( 1 - \\mathrm{e}^{-\\frac{t}{T}}\\big)u_H(t)\\]\n\\def\\Tcnst{3}\n\\def\\tdelay{0.0}\n\\def\\ggain{2}\n\\def\\uampl{0.8}\n\\pgfmathsetmacro{\\yfinal}{\\uampl*\\ggain}\n\\pgfmathsetmacro{\\yone}{0.283*\\yfinal}\n\\pgfmathsetmacro{\\ytwo}{0.632*\\yfinal}\n\\pgfmathsetmacro{\\tone}{\\tdelay + \\Tcnst/3}\n\\pgfmathsetmacro{\\two}{\\tdelay + \\Tcnst}\n\n\\begin{center}\n  \\small\n  \\begin{tikzpicture}\n    \\begin{axis}[\n    width=14cm,\n    height=3.5cm,\n    grid = both,\n    xtick = {0,  \\two},\n    xticklabels = {0, $T$},\n    ytick = {0, \\ytwo, \\uampl, \\yfinal},\n    yticklabels = {0,  $0.632y_f$, $u_f$, $y_f$},\n    xmin = -0.2,\n    %minor y tick num=9,\n    %minor x tick num=9,\n    %every major grid/.style={red, opacity=0.5},\n    xlabel = {$t$},\n    ]\n      \\addplot [thick, green!50!black, no marks, domain=0:10, samples=100] {\\uampl*\\ggain*(x>\\tdelay)*(1 - exp(-(x-\\tdelay)/\\Tcnst))} node [coordinate, pos=0.9, pin=-90:{$y(t)$}] {};\n      \\addplot [const plot, thick, blue!80!black, no marks, domain=-1:10, samples=100] coordinates {(-1,0) (0,0) (0,\\uampl) (10,\\uampl)} node [coordinate, pos=0.9, pin=-90:{$u(t)$}] {};\n    \\end{axis}\n  \\end{tikzpicture}\n\\end{center}\n\n\\alert{Time-constant:} Find the time \\(t=T\\) at which the response has reached 63.2\\% of its final value\n\n\\alert{Gain:} \\(y_f = \\lim_{t\\to\\infty}y(t) = Ku_f \\quad \\Rightarrow 
\\quad K = \\frac{y_f}{u_f}\\)\n\\end{frame}\n\n\\section{The lab activity}\n\\label{sec:org6acf095}\n\n\\begin{frame}[label={sec:orged6ff34}]{About the tolerance of components and propagation of errors}\n\\setlength{\\tabcolsep}{1cm}\n\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=3cm]{../../figures/resistor-color-code-4-band.png} & \\includegraphics[width=3cm]{../../figures/capacitor.jpg}\\\\\n\\(R = R_0 \\pm \\Delta R\\) & \\(C = C_0 \\pm \\Delta C\\)\\\\\n\\end{tabular}\n\\end{center}\n\n\\[\\tau = RC = \\tau_0 + \\Delta\\tau\\]\n\n\\pause\nAssume the tolerance for the resistor is \\(\\frac{\\Delta R}{R_0}=\\)5\\% and for the capacitor \\(\\frac{\\Delta C}{C_0}=\\)20\\%. What will the tolerance for the time-constant \\(\\frac{\\Delta\\tau}{\\tau_0}\\) be?\n\\setlength{\\tabcolsep}{1cm}\n\\begin{center}\n\\begin{tabular}{cccc}\n1. 5\\% & 2. 20\\% & 3. 25\\% & 4. 100\\%\\\\\n\\end{tabular}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org6db2167}]{About the tolerance of components and propagation of errors}\n\\small\n\\begin{columns}\n\\begin{column}{0.3\\columnwidth}\n\\begin{center}\n \\includegraphics[width=.7\\linewidth]{../../figures/resistor-color-code-4-band.png}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}{ll}\nColor & Meaning\\\\\n\\hline\nBrown & First digit 1\\\\\nBlack & Second digit 0\\\\\nOrange & Multiply with \\(10^3\\)\\\\\nGold & Tolerance 5\\%\\\\\n\\end{tabular}\n\\end{center}\n\n\\begin{align*}\nR &= R_0 \\pm \\Delta R\\\\\n&=10\\, \\text{k}\\Omega \\pm 5\\% = (10 \\pm 0.5)\\, \\text{k}\\Omega\n\\end{align*}\n\\end{column}\n\n\\begin{column}{0.7\\columnwidth}\n\\[ \\tau = RC = R_0C_0 \\pm \\Delta\\tau = \\tau_0 \\pm \\Delta \\tau \\]\n\n\\pause\n\nTwo ways to calculate \\(\\Delta\\tau\\):\n\\footnotesize\n\\begin{enumerate}\n\\item Direct calculation \\begin{align*} \\tau &= RC = (R_0 + \\Delta R)(C_0 + \\Delta C)\\\\\n   &= R_0C_0 + R_0\\Delta C  + C_0\\Delta R + \\Delta R \\Delta C\\\\\n   &\\approx \\tau_0 + \\underbrace{R_0\\Delta C  + C_0\\Delta R}_{\\Delta\\tau}\n   \\end{align*}\n\\item Total derivative \\begin{align*} \\Delta\\tau &= \\frac{\\partial \\tau}{\\partial R} \\Big|_{R_0, C_0} \\Delta R + \\frac{\\partial \\tau}{\\partial C} \\Big|_{R_0, C_0} \\Delta C \\\\\n&=C_0\\Delta R + R_0\\Delta C.\n\\end{align*}\n\\end{enumerate}\n\n\\small\n\\pause\n\\[\\frac{\\Delta\\tau}{\\tau_0} = \\frac{C_0\\Delta R + R_0\\Delta C}{R_0C_0} = \\frac{\\Delta R}{R_0} + \\frac{\\Delta C}{C_0}\\]\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgd445274}]{About the tolerance of components and propagation of errors}\n\\setlength{\\tabcolsep}{1cm}\n\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[width=3cm]{../../figures/resistor-color-code-4-band.png} & \\includegraphics[width=3cm]{../../figures/capacitor.jpg}\\\\\n\\(R = R_0 \\pm \\Delta R\\) & \\(C = C_0 \\pm \\Delta C\\)\\\\\n\\end{tabular}\n\\end{center}\n\n\\[\\tau = RC = \\tau_0 + \\Delta\\tau\\]\n\n\\pause\nAssume the tolerance for the resistor is \\(\\frac{\\Delta R}{R_0}=\\)5\\% and for the capacitor \\(\\frac{\\Delta C}{C_0}=\\)20\\%. What will the tolerance for the time-constant \\(\\frac{\\Delta\\tau}{\\tau_0}\\) be?\n\\setlength{\\tabcolsep}{1cm}\n\\begin{center}\n\\begin{tabular}{cccc}\n1. 5\\% & 2. 20\\% & 3. 25\\% & 4. 
100\\%\\\\\n\\end{tabular}\n\\end{center}\n\\end{frame}\n\\end{document}", "meta": {"hexsha": "427af188472897709709d3d2c940b1d7cf6b14de", "size": 10461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "first-order-models/slides/lecture-modeling-first-order-short.tex", "max_stars_repo_name": "kjartan-at-tec/mr2015", "max_stars_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "first-order-models/slides/lecture-modeling-first-order-short.tex", "max_issues_repo_name": "kjartan-at-tec/mr2015", "max_issues_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "first-order-models/slides/lecture-modeling-first-order-short.tex", "max_forks_repo_name": "kjartan-at-tec/mr2015", "max_forks_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7676470588, "max_line_length": 241, "alphanum_fraction": 0.6532836249, "num_tokens": 4044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.72487026428967, "lm_q2_score": 0.8080672135527632, "lm_q1q2_score": 0.5857438946518088}}
{"text": "\n%-------------------------------------------------------------------------------\n% Uni-level                                                                     \n%-------------------------------------------------------------------------------\n\n\\subsection{Uni-level representation}\n\nNote that the rules for types and kinds are very similar.\nPfenning's experience with implementing Twelf is that if they\nare separate much of the code has to be duplicated for\nthe two levels.  These concerns lead to the final \ngrammar for Spine-Form LF, a single class of expressions for\nterms, types, and kinds, classified by level.\n\n$$\n\\begin{array}{llll}\n\\mathbf{Levels} & L & ::= & \\Type \\Spb \\Kind \\\\\n\\mathbf{Expressions} & U & ::= & L \\Spb \\Pi(U_1,U_2) \\Spb \\lambda U \\Spb H\\cdot S \\\\\n\\mathbf{Heads} & H & ::= & c \\Spb i\\\\\n\\mathbf{Spines} & S & ::= & \\Nil \\Spb U;S\\\\\n\\end{array} \n$$\n", "meta": {"hexsha": "0567e63a132d05f0c32dc122dbb2ae46bc511fc8", "size": 872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/inverse/tex/old/uni.tex", "max_stars_repo_name": "kryptine/twelf", "max_stars_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2015-01-24T18:10:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T12:41:05.000Z", "max_issues_repo_path": "src/inverse/tex/old/uni.tex", "max_issues_repo_name": "kryptine/twelf", "max_issues_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-27T22:17:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-27T22:17:51.000Z", "max_forks_repo_path": "src/inverse/tex/old/uni.tex", "max_forks_repo_name": "kryptine/twelf", "max_forks_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-05-06T01:32:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T19:33:29.000Z", "avg_line_length": 37.9130434783, "max_line_length": 84, "alphanum_fraction": 0.502293578, "num_tokens": 211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8539127603871312, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5857409816476836}}
{"text": "\\documentclass{beamer}\n\n\\usetheme{uhh}\n\\showtotalframenumber\n\\showuhhlogoeachframe\n\\showsections\n\n\\usepackage{amsmath}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\n\\usepackage{listings}\n\\lstset{\nlanguage=python\n}\n\n\\title{Tensorflow -- Regression Models}\n\\subtitle{Based on slides from Fabian Barteld, Benjamin Milde}\n\\author{ Fabian Barteld, Benjamin Milde, Prof. Dr. Chris Biemann}\n\\date[18.10.2018]{Oct 18, 2018}\n\n\\AtBeginSection[]\n{\n  %%%%% section title\n  % This is how it would look like in Beamer:\n  % \\begin{frame}\n  %     \\frametitle{Overview}\n  %     \\tableofcontents[sections={2-3},currentsection,sectionstyle=show/hide,subsectionstyle=hide]\n  % \\end{frame}\n\\begin{frame}[plain]\n\\begin{tikzpicture}[overlay]\n  \\relax%\n  \\fill[blueuhh,opacity=1] (-10,-10)\n  rectangle(\\the\\paperwidth,\\the\\paperheight);\n\\end{tikzpicture}\n  \\begin{tikzpicture}[overlay]\n  \\relax%\n  \\fill[white,opacity=1] (-5,-1.2)\n  rectangle(\\the\\paperwidth,0.5) node[pos=0.5,black]{\\LARGE\\insertsectionhead};\n\\end{tikzpicture}\n\\end{frame}\n\n%%%% add subsection to show navigation dots\n\\subsection{}\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Linear Regression}\n\n% https://medium.com/@saxenarohan97/intro-to-tensorflow-solving-a-simple-regression-problem-e87b42fd4845\n% https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/2_BasicModels/linear_regression.py\n\\begin{frame}\n\\frametitle{Linear Regression}\n\n\\begin{itemize}\n\\item Given: $(x_1, y_1)$, \\ldots, $(x_n,y_n)$\n\\item Goal: find $w$ and $b$ such that\n  \\begin{displaymath}\n    \\hat{y}_i = wx_i + b\n  \\end{displaymath}\n  fits the data, i.e.\n  \\begin{displaymath}\n    \\argmin_{w, b} \\frac{\\sum^n_{i=1} (\\hat{y}_i - y_i)^2}{n}\n  \\end{displaymath}\n\\end{itemize}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Define model parameters}\nModel: $\\hat{y}_i = wx_i + b$\\\\\nParameters: $w$, $b$, tensors of rank $0$\n\n\\begin{lstlisting}\nw = tf.Variable(tf.ones([]),\n  name=\"weight\")\nb = tf.Variable(tf.zeros([]),\n  name=\"bias\")\n\\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Define the model}\n\n\\begin{displaymath}\n  \\begin{pmatrix} \\hat{y}_1\\\\\\vdots\\\\\\hat{y}_n\\end{pmatrix} =\n  \\begin{pmatrix} x_1\\\\\\vdots\\\\x_n\\end{pmatrix} \\odot\n  \\begin{pmatrix} w\\\\\\vdots\\\\w\\end{pmatrix} +\n  \\begin{pmatrix} b\\\\\\vdots\\\\b\\end{pmatrix}\n\\end{displaymath}\n\n\\begin{lstlisting}\nyhat = tf.add(tf.multiply(X, w), b)\n\\end{lstlisting}\n\n{\\footnotesize The scalars $w$ and $b$ are converted into vectors of the same\nlength as X (broadcast); \\url{https://www.tensorflow.org/performance/xla/broadcasting}}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Define the loss}\n\n\\begin{displaymath}\n  \\frac{\\sum^n_{i=1} (\\hat{y}_i - y_i)^2}{n}\n\\end{displaymath}\n\n\\begin{lstlisting}\nloss = tf.reduce_mean(tf.square(yhat - Y))\n\\end{lstlisting}\n\n\\end{frame}\n\n\n\\begin{frame}[fragile]\n\\frametitle{Optimization}\n\n\\begin{lstlisting}\n## Optimizer\noptimizer = tf.train.GradientDescentOptimizer(\n  0.01 # learning rate\n  ).minimize(loss)\n\nwith tf.Session() as sess:\n  ## initalize parameters\n  sess.run(tf.global_variables_initializer())\n\n  for i in range(20):\n      ## run one epoch\n      sess.run(optimizer)\n\\end{lstlisting}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Hands on}\n\nDo a linear regression to learn $y = 2x + 1$\n\n\\begin{lstlisting}\nX_data = 
np.array([1., 2., 3., 4., 5., 6.],\n      dtype=np.float32).reshape(6, 1)\nY_data = 2*X_data + 1\n\\end{lstlisting}\n\\end{frame}\n\n\n\\section{Multiple Linear Regression}\n\n\\begin{frame}[fragile]\n\\frametitle{Defining the input}\n\nTensorFlow graphs use placeholders for input values\n\n\\begin{lstlisting}\ninput_dim = 13\n\nX = tf.placeholder(tf.float32, [None, input_dim])\nY = tf.placeholder(tf.float32, [None, 1])\n\\end{lstlisting}\n\nDefines placeholders for two tensors of rank 2,\\\\\nthe shape is [Number of examples, Dimension]\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Adapting the model}\n\n\\begin{displaymath}\n  \\begin{pmatrix} \\hat{y}_1\\\\\\vdots\\\\\\hat{y}_n\\end{pmatrix} =\n  \\begin{pmatrix} x_{1,1} & \\hdots & x_{1,input\\_dim}\\\\\\vdots && \\vdots\\\\x_{n,1} & \\hdots & x_{n,input\\_dim}\\end{pmatrix} \\times\n  \\begin{pmatrix} w_1 \\\\\\vdots\\\\w_{input\\_dim}\\end{pmatrix} +\n  \\begin{pmatrix} b\\\\\\vdots\\\\b\\end{pmatrix}\n\\end{displaymath}\n\n\\begin{lstlisting}\nw = tf.Variable(tf.ones([input_dim, 1]))\nyhat = tf.add(tf.matmul(X, w), b)\n\\end{lstlisting}\n\n{\\footnotesize tf.matmul requires both arguments to be of rank 2, so w is a column vector of shape [input\\_dim, 1].}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Getting data into the model}\n\n\\begin{lstlisting}\n## Optimizer\noptimizer = tf.train.GradientDescentOptimizer(\n  0.01 # learning rate\n  ).minimize(loss)\n\nwith tf.Session() as sess:\n  ## initialize parameters\n  sess.run(tf.global_variables_initializer())\n\n  for i in range(20):\n      ## run one epoch\n      sess.run(optimizer, {X: data_X, Y: data_Y})\n\\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Hands on}\n\nDo a multiple linear regression with Boston housing prices\n\n\\begin{lstlisting}\nfrom sklearn.datasets import load_boston\nfrom sklearn.preprocessing import scale\n\ndata_X, data_Y = load_boston(True)\ndata_X = scale(data_X)\ndata_Y = data_Y.reshape(len(data_Y), 1)\n\\end{lstlisting}\n\\end{frame}\n\n\\section{Logistic regression}\n\n% https://www.tensorflow.org/tutorials/wide\n% https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/2_BasicModels/logistic_regression.py\n\\begin{frame}[fragile]\n\\frametitle{Multiple Logistic regression}\n\n$p_i = \\sigma(WX_i + b)$, with $\\sigma$ the sigmoid function\n\nLoss (Binary cross-entropy):\n\n\\[-\\frac{1}{N}\\sum_{i=1}^N (y_i\\log{p_i} + (1-y_i)\\log(1-p_i))\\]\n\nIn TensorFlow:\\\\\n\\footnotesize{\\textcolor{reduhh}{Don't use -- numerical problems!}}\n\n\\begin{lstlisting}\np = tf.sigmoid(yhat)\nloss = -tf.reduce_mean(y*tf.log(p) + (1-y)*tf.log(1-p))\n\\end{lstlisting}\n\n\\pause\n\nOptimized version (\\footnotesize{\\textcolor{green}{Use this instead!}})\n\\begin{lstlisting}\nloss = tf.reduce_mean(\n  tf.nn.sigmoid_cross_entropy_with_logits(\n      labels=y, logits=yhat))\n\\end{lstlisting}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Scaling the input data}\n\n\\begin{lstlisting}\nfrom sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nscaler.fit(x_train)\nx_train = scaler.transform(x_train)\nx_test = scaler.transform(x_test)\n\\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Hands on: Binary classification}\n\nDataset: \\url{http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html}\n\n\\begin{lstlisting}\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\n## load the data\nbc = load_breast_cancer()\nx_data = bc['data']       # shape: (569,30)\ny_data = bc['target'].reshape(\n  len(bc['target']), 1) # shape: (569, 1)\n\nx_train, 
x_test, y_train, y_test =\n  train_test_split(x_data, y_data)\n\n\\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{One-hot encoding of nominal features}\nNames dataset\n\\url{http://www.nltk.org/book/ch06.html}\n\n\\begin{lstlisting}\ndef gender_features(word):\n  return {'last_letter': word[-1]}\n\ndef gender_features(word):\n  return {'suffix1': word[-1:],\n          'suffix2': word[-2:]}\n\\end{lstlisting}\n\\pause\n\\vspace{-1.5ex}\n\n\\begin{lstlisting}\nfrom sklearn.feature_extraction import DictVectorizer\nfeat_vectorizer = DictVectorizer(\n  dtype=numpy.int32, sparse=False)\ntrain_X = feat_vectorizer.fit_transform(\n  train_feats)\ntest_X = feat_vectorizer.transform(test_feats)\n\\end{lstlisting}\n\\end{frame}\n\n\\begin{frame}[fragile]\n  \\frametitle{Stochastic gradient descent}\n\n\\begin{lstlisting}\nwith tf.Session() as sess:\n    ## initialize parameters\n    sess.run(tf.global_variables_initializer())\n\n    for i in range(20):\n        ## run one epoch\n        ## update for each training example\n        ## (reshaped to match the rank-2 placeholders)\n        for x, y in zip(x_data, y_data):\n            sess.run(optimizer, {X: x.reshape(1, -1),\n                                 Y: y.reshape(1, 1)})\n\\end{lstlisting}\n\n\\pause\nUsually the data is shuffled and\\\\ passed in small batches to the optimizer.\n\n\\end{frame}\n\n\n\\begin{frame}[fragile]\n\\frametitle{Hands on}\n\nFit a logistic regression model to the names dataset\n\n\\url{http://www.nltk.org/book/ch06.html}\n\\begin{lstlisting}\nimport nltk\n## names must be installed by running\n## nltk.download('names')\nfrom nltk.corpus import names\nimport random\n\nlabeled_names = ( \n  [(n, 0) for n in names.words('male.txt')] +\n  [(n, 1) for n in names.words('female.txt')])\nrandom.shuffle(labeled_names)\n\n\\end{lstlisting}\n\\end{frame}\n\n\n\\end{document}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-engine: luatex\n%%% End:\n", "meta": {"hexsha": "d3eed6e8af3fd5564450df524a1755c21bf691f3", "size": 8329, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dump/02_optimization.tex", "max_stars_repo_name": "uhh-lt/dl-seminar", "max_stars_repo_head_hexsha": "b146db2f63462a7d795c43b484dc9e8ca38fb4d6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dump/02_optimization.tex", "max_issues_repo_name": "uhh-lt/dl-seminar", "max_issues_repo_head_hexsha": "b146db2f63462a7d795c43b484dc9e8ca38fb4d6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dump/02_optimization.tex", "max_forks_repo_name": "uhh-lt/dl-seminar", "max_forks_repo_head_hexsha": "b146db2f63462a7d795c43b484dc9e8ca38fb4d6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.1361111111, "max_line_length": 128, "alphanum_fraction": 0.7168927842, "num_tokens": 2599, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.837619947119304, "lm_q2_score": 0.6992544210587586, "lm_q1q2_score": 0.5857094511901769}}
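Pulling the slide fragments together, the following is a complete, runnable version of the first hands-on exercise (learning $y = 2x + 1$). It assumes the TensorFlow~1.x API used throughout these slides (tf.placeholder, tf.Session); the number of training steps is chosen here only so the fit becomes visible.
\begin{lstlisting}
import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

# training data for y = 2x + 1
X_data = np.array([1., 2., 3., 4., 5., 6.],
                  dtype=np.float32).reshape(6, 1)
Y_data = 2 * X_data + 1

X = tf.placeholder(tf.float32, [None, 1])
Y = tf.placeholder(tf.float32, [None, 1])

w = tf.Variable(tf.ones([]), name="weight")
b = tf.Variable(tf.zeros([]), name="bias")

yhat = tf.add(tf.multiply(X, w), b)        # scalar w, b broadcast
loss = tf.reduce_mean(tf.square(yhat - Y))

optimizer = tf.train.GradientDescentOptimizer(
    0.01  # learning rate
    ).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        sess.run(optimizer, {X: X_data, Y: Y_data})
    print(sess.run([w, b]))  # approaches [2.0, 1.0]
\end{lstlisting}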
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 14, 2014}\n\\maketitle\n\\section*{5.7 monotone functions}\na function $f:\\mathbb{R}\\to\\mathbb{R}$ is monotonic (strictly) increasing (decreasing) if $\\forall x<y, f(x)\\le f(y)$ changing $f(x)$ and $f(y)$ relation as necessary\n\nnote $\\mathbb{R}^n$ has no natural order, so this is only defined in $\\mathbb{R}$ \n\n\\subsection*{proposition 5.7.2}\nif $f:(a,b)\\to \\mathbb{R}$ is increasing then $\\alpha=\\lim_{x\\to a^+}f(x)$ and $\\lim_{x\\to b^-}f(x)$ exists and $\\forall x\\in (a,b)$ we have $\\alpha\\le f(x)\\le \\beta$. and every element in $(a,b)$ has a left and right limit\n\\subsubsection*{proof}\nlet $c\\in(a,b)$. let $F:\\{f(x):x\\in(a,c)\\}$. because $f$ is increasing and $x<c$, then $f(x)\\le f(c)\\forall x\\in(a,c)$. therefore $f(c)$ is an upper bound  for $F$. so $F$ has a supremum $L$. also $L\\le f(c)$ because $f(c)$ is an upper bound and $L$ is the least upper bound. since $L-\\varepsilon$ is not an upper bound for $F$ then $\\exists y\\in(a,c)$ such that $L-\\varepsilon\\le f(y)\\le L$. take $\\varepsilon=\\frac{1}{n}$, for each $\\varepsilon=\\frac{1}{n}$, a corresponding $y_n\\in(a,c)$. $L-\\frac{1}{n}<f(y_n)\\le L\\to |f(y_n)-L|<\\frac{1}{n}$ and so the limit exists.\n\\subsection*{corollary}\nmonotonic functions have only jump disccontinuities. and the number of these continuities is countable\n\\subsubsection*{proof}\n by 5.7.2 if $f$ has a discontinuity at a point $c\\in(a,b)$ since $\\lim_{x\\to c^+}f(x)$ and $\\lim_{x\\to c^-}f(x)$ exists, they must be different. thus the discontinuity is a jump discontinuity. (if the limits were equal, then it would be continuous at that point)\n\nlet $c$ be a point where $f$ has a discontinuity. then wlog $f(x)$ is increasing. $\\gamma_1\\lim_{x\\to c^-}f(x)<\\lim_{x\\to c^+}=\\gamma_2$. to the discontinuity point $c$ we can associate the interval $(\\gamma_1,\\gamma_2)$ which  is not in the image of $f$. if $d\\ne c$ is another point of discontinuity, it's corresponding interval $(\\sigma_1,\\sigma_2)$ can not intersect $(\\gamma_1,\\gamma_2)$. if wlog $c\\le d$ then $\\gamma_2=\\lim_{x\\to c^+}\\le\\lim_{x\\to d^-}f(x)=\\sigma_1$ and so $\\gamma_2<\\sigma_1$ and they don't intersect.\n\n\nlet $F:\\{\\text{discontinuities}\\}\\to\\mathbb{Q}$. then $c\\to c'\\in(\\gamma_1,\\gamma_2)$. $F$ is injective because the intervals are disjoint, $|F|\\le|\\mathbb{Q}|$\n\n\\section*{example 5.7.8 cantor function}\n\nin general the limit of a sequence of continous functions is not continuous. 
$f_n(x)=\\begin{cases}0&x\\le0\\\\x^n&x\\in[0,1]\\\\1&x\\ge 1\\end{cases}$\n\nin the limit, if $x\\in[0,1)$ then $f_n(x)=x^n\\rightarrow0$ but $f_n(1)=1$, so the pointwise limit is discontinuous at $x=1$.\n\\end{document}\n", "meta": {"hexsha": "20609edb335dca4f10c468fb03992bde371855da", "size": 2807, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-11-14.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-11-14.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-11-14.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.4634146341, "max_line_length": 570, "alphanum_fraction": 0.6872105451, "num_tokens": 1015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8221891370573386, "lm_q1q2_score": 0.5855895649438831}}
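A quick numerical check makes the failure of continuity in the limit visible; a hypothetical Python snippet, not part of the original notes:
\begin{lstlisting}
# pointwise limit of f_n(x) = x^n on [0, 1]:
# 0 for every x < 1, yet f_n(1) = 1 for all n
for n in [1, 10, 100, 1000]:
    print(n, 0.99 ** n, 1.0 ** n)
# 0.99**n -> 0 while 1.0**n stays 1, so the limit
# function jumps at x = 1 and is not continuous
\end{lstlisting}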
{"text": "\\chapter{Optimizing network-utilization by balancing workload}\n\\label{chap:balancing}\n\nThe last chapter brings an overview of different notations, algorithms and techniques.\nIn this chapter, all these instruments are combined to a chain for distributing routes more evenly.\nThe goal is to augment a given graph, such that diverse alternative paths can be found and used to spread too much load while limiting the detours of individual \\glspl{stpair}.\nAfter the theory is explained, implementation-details are discussed.\nWhen talking about performance, it is referred to the used testing-machine, which is an ordinary, but good home-computer.\nFurther details about the testing-machine and experimental results can be found in \\vref{chap:experiments}.\n\nFor a given \\gls{stpair} and a street-network given as a graph $G = (V, E)$ with vertices $V$, edges $E$ and a cost-function $c: E \\to \\mathbb{R}_+^d$, that returns a cost-vector for each edge $e \\in E$, a path should be found through an algorithm, that implicitly tends to distribute found paths over the network, while keeping the paths' costs sufficiently near every cost-dimension's optimum.\nNote, that $\\mathbb{R}_+^d$ doesn't contain zero, which would, depending on the metric, result in bad alternative routes otherwise.\n\nTo achieve this, a computation-phase called balancing analyzes the graph $G$ and computes a new metric, called workload-metric.\nWith this workload-metric, routing-algorithms like \\gls{repr} and Dijkstra (with personalized routing) can compute paths as usual.\nFor comparison, multiple scenarios using different combinations of these two routing-algorithms will be used.\nPlease note, that Dijkstra doesn't guarantee a tolerance when used with personalized routing and hence, some combinations don't provide an upper bound for path-costs.\nThis will also be pointed out in the respective experiments in \\cref{chap:experiments}.\n\nThis thesis focusses only on distance and travel-time, where travel-time is the only metric with a provided tolerance.\n\nIt should be noted, that Dijkstra (bidirectional) instead of A* is used for every occuring routing-query, may it be in \\gls{repr} or anywhere else.\nThe reason is, that A* works with heuristics, which are difficult to generalize over custom, artificial metrics.\nThe only benefit from using A* is the reduction of the search-space, which can be reduced much more using bidirectional Dijkstra on graphs contracted via contraction-hierarchies.\nBesides that, the graph has multidimensional metrics, for which reason Dijkstra is used with personalized routing.\n\n\\section{Balancing to create a new metric}\n\n    This balancing is depicted in \\vref{fig:balancing:algorithm} and explained briefly in the caption or detailled in the following.\n    It basically runs routing-queries, counts the resulting workload and adjusts a third metric (in addition to travel-distance and travel-time) to influence future query-executions.\n    After the new workload-metric will have been settled, routing-algorithms for multidimensional metrics can run queries on the balanced graph $G$ as before, now considering the new workload-metric.\n    Therefore, queries will be run in the experiments (\\cref{chap:experiments}) to evaluate the new spread of the street-network.\n    The set of \\glspl{stpair} will be chosen differently than the set used in balancing, to reduce the dependence on it.\n\n    \\begin{figure}\n        \\centering\n        \\input{resources/graphics/balancing/algorithm}\n   
     \\caption[Overview of balancing a graph]{%\n            This flow shows the balancing, that analyzes a given graph $G$.\n            Here, an evaluation-phase is included afterwards.\n            Shapes for actions are rectangular and blue, shapes for data are elliptical and red.\n            A new, artificial metric is created for $G$ and gets updated iteratively.\n            In the end, this new metric allows a routing-algorithm to distribute upcoming workload (like during rush-hour) more evenly over the underlying street-network in $G$.\n            This more spreaded workload can be seen in the evaluation-step, where paths for a set of \\glspl{stpair} are searched, once using and once ignoring the new metric.\n\n            Note, that the action doing the path-searches ignores the new metric in the first balancing-iteration.\n            \\label{fig:balancing:algorithm}\n        }\n    \\end{figure}\n\n    \\subsection{Selection of route-pairs}\n\n        Let a street-network be given as a graph $G = (V, E)$ as defined in \\cref{chap:preliminaries:graphs}.\n        Before such a graph can be balanced, a set of \\glspl{stpair} has to be chosen.\n        Here, different strategies can be applied.\n        For simplicity, the set of \\glspl{stpair}, which are used for balancing $G$, might consist of $s$ and $t$ chosen \\gls{uar}\\ from $V$.\n        This has the disadvantage, that a sufficiently large set of \\glspl{stpair} is needed to overload $G$.\n        This clearly becomes a bottleneck for performance on larger graphs (beginning with small German counties like Saarland with almost $|V| \\approx \\num{580000}$ and $|E| \\approx \\num{1160000}$ after parsing), especially when using \\gls{repr}, that computes multiple Dijkstra-queries per run.\n        To alleviate the computational burden of a large number of \\glspl{stpair}, knowledge of the typical workloads in this graph could be used.\n        Instead of picking \\gls{uar}\\ from $|V|$, other distributions could be chosen by weighting $v \\in V$ differently or by using some \\glspl{stpair} multiple times (on purpose, not only by random).\n        For example, real, daily, critical \\glspl{stpair} are not distributed \\gls{uar}\\ over $V$.\n        The rush-hour in the morning, when everybody travels from their home to work, is a good counterexample here.\n        The set of chosen $t$ might be much smaller than the set of chosen $s$, and less people live near industry or motorways.\n        Heuristics as described in~\\cite{bakillah:population_from_osm} can return \\glspl{stpair} based on approximating population-data from street-networks.\n        So considering more specific sets of \\glspl{stpair} might be interesting and helpful to boost the balancing up, but is not covered in this thesis.\n        The reason for keeping the choice \\gls{uar}\\ is the easy implementation, while results are sufficiently good enough, as long as $G$ shows differently (over-) loaded streets.\n\n    \\subsection{First steps of the balancing-process}\n    \\label{chap:balancing:balancing}\n\n        At first, the graph $G$, that should be balanced, has to be read in.\n        Before continuing, every metric is divided by the respective metric-mean resulting in $G = (V, E)$ of normalized metrics.\n        Non-normalized metrics of different scales would cause \\gls{repr} to find $\\alpha$-values of different scale, making some metrics more important than others just based on their different scales.\n        With metrics of different 
importance, the chosen paths tend to the respective metrics and distort the spread.\n        So normalization not only allows comparing different graphs (of different metrics), but also improves the spread in \\gls{repr}.\n\n        This graph $G$ is contracted via contraction-hierarchies to $G_{CH}$.\n        The step of contracting is not needed for correctness, but reduces the needed runtime (even if $G$ is not being fully contracted).\n        For this reason, $G$ instead of $G_{CH}$ is used in the following, to emphasize that contraction is not necessary from a theoretical point of view.\n\n        Given the \\glspl{stpair} and a graph $G$, a path in $G$ is searched for every \\gls{stpair}.\n        The search may use either Dijkstra or \\gls{repr}.\n        Multiple paths are found by \\gls{repr}, so one of them is chosen \\gls{uar}\\ after filtering all found paths by a given tolerance.\n        Note that Dijkstra with personalized routing doesn't consider any tolerance.\n        But since \\gls{repr} calls Dijkstra multiple times, plain Dijkstra is faster.\n        On the other hand, when using \\gls{repr} and choosing a path \\gls{uar}, the resulting paths are spread more widely over the network.\n        This results in a better coverage of $E$ when updating the workload-metric, which allows \\gls{repr} to find more Pareto-optimal personalized routes in the convex hull.\n        Both methods are compared in the experiments in \\vref{chap:experiments}.\n\n        When using \\gls{repr}, the quality of the balanced graph highly depends on the choice of initial paths for \\gls{repr}.\n        The basic approach in~\\cite{barth:alternative_multicriteria_routes} suggests using $(d+1)$ initial alphas to define one initial convex hull for a graph of $d$-dimensional metrics.\n        \\begin{equation}\n        \\label{eq:init_alphas}\n        \\begin{aligned}\n            \\left( \\alpha^{(1)}, \\alpha^{(2)}, \\dots, \\alpha^{(d)} \\right) &= \\mathbb{I}^d\\\\\n            \\alpha^{(d+1)} &= \\left( \\frac{1}{d}, \\dots, \\frac{1}{d} \\right)^T\n        \\end{aligned}\n        \\end{equation}\n        This suggestion is nice to get a tolerance, because every metric's optimal path is computed.\n        However, in the balancing, this suggestion needs some adjustment.\n        When the new workload-metric is added (hence let the graph's metrics' dimension be $d+1$), $\\alpha^{(d+1)}$ becomes $\\alpha^{(d+2)}$, which points in a different direction in the $(d+1)$-cost-space than the previous $\\alpha^{(d+1)}$ does.\n        For example (considering only $\\alpha$'s direction, not its length), let $d=2$ become $d=3$, hence $\\alpha^{(3)}=(1, 1) \\in \\mathbb{R}_+^2$ becomes $\\alpha^{(4)}=(1, 1, 1) \\in \\mathbb{R}_+^3$, and respectively $(1, 1, 0) \\in \\mathbb{R}_+^3$ from the previous case $d=2$ is missing now.\n        In the case of Saarland in the experiments, this led to the issue that the initially found paths (as suggested in~\\cite{barth:alternative_multicriteria_routes}) weren't enough to get an initial convex hull.\n        Because of this, fewer paths were found on Saarland in the experiments.\n        Therefore, the balancing extends the original suggestion by~\\cite{barth:alternative_multicriteria_routes} and takes the initial $\\alpha^{(j)}$ from $\\left\\{ 0, 1 \\right\\}^{(d+1)}$ excluding $(0, \\dots, 0)^T$.\n        Only the direction of $\\alpha$ is relevant, so the previous notation in \\cref{eq:init_alphas} can be simplified.\n        
\\begin{equation}\n            \\label{eq:new_init_alphas}\n            \\alpha \\in \\left\\{ 0, 1 \\right\\}^{(d+1)} \\setminus \\left\\{ \\left( 0, \\dots, 0 \\right)^T \\right\\} \\Rightarrow \\alpha \\text{ is an initial } \\alpha^{(j)}\n        \\end{equation}\n\n        After finding a set of paths for the given \\glspl{stpair}, for every edge $e \\in E$, the number of found paths containing this edge $e$ is counted.\n        This workload of balancing-iteration $i$ is normalized by all workloads' mean and the result is used to update the graph's workload-metric.\n        \\begin{equation}\n        \\label{eq:workloads}\n        \\begin{aligned}\n            &\\mathit{workloads}_{e,i} = \\left| \\left\\{ (s, t) \\in \\text{\\glspl{stpair}} : \\text{found path from $s$ to $t$ uses } e \\in E \\right\\} \\right|\\\\\n            \\Rightarrow\\ &\\mathit{new}_{e,i} = \\frac{\\mathit{workloads}_{e,i}}{\\mathit{mean}(\\mathit{workloads}_i)}\n        \\end{aligned}\n        \\end{equation}\n\n    \\subsection{Updating the new metric with the collected workload}\n    \\label{chap:balancing:update}\n\n        Updating the old workload-metric is a very small step in the balancing with a large impact on the ongoing balancing's performance and the quality of the balanced graph.\n        Two approaches have been tested and are described in the following, after discussing the interpretation of the new artificial workload-metric.\n\n        The workload-metric can be associated with popularity, because it refers to high usage.\n        A path of high popularity should be chosen less often than without the workload-metric.\n        So, hopefully, updating the workload-metric dependent on the current workload-metric eventually reveals the final, balanced state of the network.\n        One could suggest that a street's capacity should be considered as well, because large streets like motorways can support more vehicles than a street in a city and thus should tolerate a higher popularity than smaller streets.\n        A street-capacity could be approximated using a street's number of lanes or the respective street-type (\\eg\\ motorways) and its length.\n        This way, the workload-metric, which penalizes popularity, would be corrected in favor of larger streets with (usually) higher speed-limit.\n        That is exactly counterproductive, because the travel-time already accounts for the capacity of streets via the speed-limit.\n        In other words, the popularity-penalization should compensate for the popularity of larger streets, which have a better travel-time.\n        This desired effect would be disturbed by the consideration of capacity.\n\n        The first approach is inspired by a simple numerical method for solving initial-value-problems, the explicit Euler method.\n        Let $\\mathit{old}_i$ be the workload-metric (already normalized after $G$ has been initialized) already stored in the graph and $\\mathit{new}_i$ be the new workload-metric (normalized by their mean, see \\vref{eq:workloads}).\n        \\begin{equation}\n        \\label{eq:euler}\n            \\mathit{old}_{e,i+1} = \\mathit{old}_{e,i} + (\\mathit{new}_{e,i} - \\mathit{old}_{e,i}) \\cdot \\mathit{correction}\n        \\end{equation}\n        In each balancing-iteration, after the workload-metric is updated (as in \\cref{eq:euler}), every value of the workload-metric smaller than a predefined minimum value $\\epsilon$ (\\eg\\ \\num{0.1}) is set to $\\epsilon$.\n        This removes the metrics of value \\num{0.0}, which improves 
the set of computed shortcuts when applying contraction-hierarchies and thus performance.\n        In addition, the result isn't normalized by its mean in general, for which reason it is finally normalized.\n        \\begin{equation}\n        \\label{eq:metric_cleanup}\n        \\begin{aligned}\n            \\mathit{old}_{e,i+1} &\\leftarrow \\mathit{max} \\left( \\epsilon \\text{,\\ } \\mathit{old}_{e,i+1} \\right)\\\\\n            \\mathit{old}_{e,i+1} &\\leftarrow \\frac{\\mathit{old}_{e,i+1}}{\\mathit{mean}(\\mathit{old}_{i+1})}\n        \\end{aligned}\n        \\end{equation}\n\n        In fact, this approach works reasonably well in smaller, overloaded networks like Isle~of~Man ($|V| \\approx \\num{50000}$ and $|E| \\approx \\num{100000}$ after parsing), but converges poorly.\n        In the first update-iteration, popular paths are ignored as much as possible, before they are highly preferred in the second iteration, ignored in the third iteration and so on.\n        This behaviour could be reduced by adding the $\\mathit{correction}$, but larger networks (like the small German county Saarland, $|V| \\approx \\num{580000}$ and $|E| \\approx \\num{1160000}$ after parsing) worsen the convergence.\n        Further, adding $\\mathit{correction}$ creates the need to tune it, for which reason only the second approach is shown in the experiments in \\vref{chap:experiments}.\n\n        Therefore, another approach, replacing the explicit Euler from \\vref{eq:euler}, is presented in \\vref{eq:averaging}; it spreads the workload well within only two iterations.\n        \\begin{equation}\n        \\label{eq:averaging}\n            \\mathit{old}_{e,i+1} = \\frac{i \\cdot \\mathit{old}_{e,i} + \\mathit{new}_{e,i}}{i+1}\n        \\end{equation}\n        This result is treated as before, meaning Equations~\\ref{eq:metric_cleanup} are also applied.\n        From \\vref{eq:averaging}, both $\\mathit{old}_{e,1}=\\mathit{new}_{e,0}$ and $\\mathit{old}_{e,2}=\\frac{\\mathit{old}_{e,1} + \\mathit{new}_{e,1}}{2}$ hold before normalization.\n        The interpretation of these is quite intuitive.\n        The first iteration favors popular routes.\n        After the first update, the workload-metric is just the normalized workload from the original, unbalanced graph.\n        Hence the following iteration has an aversion to popular routes and tends to avoid them.\n        This second iteration favors alternative routes and the resulting workload completes the previously generated workload-metric.\n        A third iteration already has a workload-metric with a balance between popular and alternative routes, so further iterations appear disruptive.\n\n\\section{Details on implementation}\n\\label{chap:balancing:implementation}\n\n    The implementation aims to be efficient, easy to use and maintainable.\n    For this reason, some important decisions were made and the most important ones are explained (from a theoretical point of view) in the following.\n\n    \\subsection{Configuration-files}\n\n        The main concept of the implementation is the use of configuration-files.\n        The whole parser, simulation and computation can be adjusted with these files.\n        One big advantage is the parser-related settings defining the graph's nodes and edges.\n        Both meta-data and metric-data are defined in the configuration-file.\n        This allows the graph to be static, meaning its allocated memory doesn't change during runtime after the graph is built and shrunk to fit its needs.\n      
  However, its values can be changed, which is necessary, since the balancing updates a metric.\n\n    \\subsection{Storing and accessing graphs}\n    \\label{chap:balancing:implementation:graphs}\n\n        The underlying graph-structure is implemented as adjacency-array with several extensions.\n        The adjacency-array is explained in \\cref{chap:preliminaries:graphs}.\n        This data-structure fits well, because it reduces redundancy by optimizing the graph's memory-consumption.\n        At the same time, accessing the graph can be done in constant or logarithmic time (depending on access via indices or via ids) through indexing-logic.\n        The indexing-logic from adjacency-arrays has been extended by a mapping to support forward-, backward- and shortcut-edges in the graph, which is necessary for the bidirectional-Dijkstra.\n\n        First, the graph from preliminaries is extended by metrics.\n        For every edge in $E$, a vector of floats is stored.\n        To formalize this, the cost-function is slightly redefined (and duplicated edges are being removed) to have edge-costs being associated as metrics.\n        Let $C \\subset E \\times \\mathbb{R}_+^d$ be the set of metrics and $c$ be the bijective function $c: E \\to C$.\n        With this definition, $C[i_E]$ can be associated as the cost-vector of the edge $E$ at edge-index $i_E$, where $C$ is stored as two-dimensional array of floats.\n\n        The vertices $V$ are an array of vertex-ids sorted in ascending order.\n        This is required by the adjacency-array, but also allows the search for nodes by their ids in logarithmic time.\n        However, this is only needed for users outside the graph, because inside, the graph uses only vertex-indices.\n\n        A backward-graph is a graph, whose edges are reversed according to the original graph.\n        So a backward-graph maps edges $(s, d)$ to the reversed edge $(d, s)$.\n        Since this graph is used for bidirectional routing-queries, it has to support the access to both forward-edges and backward-edges.\n        The backward-edges could be stored accordingly to the forward-edges, meaning storing the source-indices $S$ in addition to the destination-indices $D$.\n        The occuring issue with this is the sort-order of vertices $V$.\n        All vertices are sorted by their source-id to match the offset-array $O_D$ for the forward-edges and for $D$.\n        Respectively, all vertices should be sorted by their destination-id to match the offset-array $O_S$ for the backward-edges and for $S$.\n        This would need a copy of all vertices and an edge-index $i_E$ wouldn't point to the same edge in $S$ and $D$, leading to two access-implementations doing basically the same offset-mapping.\n        The metric-array $C$ could be accessed via $i_E$ only by Dijkstra's forward-search, not the backward-search, unless a mapping from backward-edge-indices to forward-edge-indices is built (or $C$ is also stored twice).\n        Therefore, this issue is solved by using this very mapping, such that an edge-index $i_E$ belongs to $S[i_E]$, $D[i_E]$, $C[i_E]$ independent of the edge's orientation and the graph can be stored without data-redundancy.\n        The decision towards no data-redundancy implies consistency, which is important for the metric-update.\n        Further, it improves maintainability and usage, because accessing the forward-edges and backward-edges can be done with the same accessor simply using different arrays.\n        The indices-mappings are 
illustrated in \\cref{table:balancing:indices_mapping} and explained afterwards with two example-queries.\n        The graph from \\cref{chap:preliminaries:graphs} contains only bidirectional edges, thus the edge $(s, d)$ with $(V[s], V[d])=(1, 2)$ is removed to make the graph containing unidirectional edges.\n\n        \\begin{table}[htbp]\n            \\centering\n            \\begin{tabular}{ l || c | c c | c c c | c c c | c c | c }\n                $i_E = i_\\mathit{fwd}$ & \\multicolumn{12}{c}{Storing data with respect to forward-edges} \\\\\n                \\hline\n                \\hline\n                $V$ & $\\mathit{id}_0$ & \\multicolumn{2}{c |}{$\\mathit{id}_1$} & \\multicolumn{3}{c |}{$\\mathit{id}_2$} & \\multicolumn{3}{c |}{$\\mathit{id}_3$} & \\multicolumn{2}{c |}{\\cellcolor[gray]{0.8} $\\mathit{id}_4$} & - \\\\\n                \\hline\n                $S$ & \\cellcolor[gray]{0.8} 0 & 1 & 1 & \\cellcolor[gray]{0.8} 2 & 2 & 2 & \\cellcolor[gray]{0.8} 3 & 3 & 3 & 4 & 4 & - \\\\\n                $D$ & 1 & 0 & 3 & 1 & 3 & 4 & 1 & 2 & 4 & \\cellcolor[gray]{0.8} 2 & \\cellcolor[gray]{0.8} 3 & - \\\\\n                $C$ & a & b & c & d & e & f & g & h & i & j & k & - \\\\\n                & & & & & & & & & & & & \\\\\n                $i_\\mathit{fwd}$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & \\cellcolor[gray]{0.8} 9 & \\cellcolor[gray]{0.8} 10 & - \\\\\n                $\\mathit{map}_\\mathit{bwd}: i_\\mathit{fwd} \\mapsto i_\\mathit{bwd}$ & 1 & 0 & 6 & 2 & 7 & 9 & 3 & 4 & 10 & 5 & 8 & - \\\\\n                & & & & & & & & & & & & \\\\\n                $O_D: i_V \\mapsto i_\\mathit{fwd}$ & 0 & \\multicolumn{2}{c |}{1} & \\multicolumn{3}{c |}{3} & \\multicolumn{3}{c |}{6} & \\multicolumn{2}{c |}{\\cellcolor[gray]{0.8} 9} & \\cellcolor[gray]{0.8} 11 \\\\\n                \\multicolumn{13}{c}{} \\\\\n            \\end{tabular}\n            \\begin{tabular}{ l || c | c c c | c c | c c c | c c | c }\n                $i_E = i_\\mathit{bwd}$ & \\multicolumn{12}{c}{Storing data with respect to backward-edges} \\\\\n                \\hline\n                \\hline\n                $V$ & $\\mathit{id}_0$ & \\multicolumn{3}{c |}{\\cellcolor[gray]{0.8} $\\mathit{id}_1$} & \\multicolumn{2}{c |}{$\\mathit{id}_2$} & \\multicolumn{3}{c |}{$\\mathit{id}_3$} & \\multicolumn{2}{c |}{$\\mathit{id}_4$} & - \\\\\n                \\hline\n                $D_\\mathit{bwd}$ & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 3 & 3 & 4 & 4 & - \\\\\n                $S_\\mathit{bwd}$ & 1 & 0 & 2 & 3 & 3 & 4 & 1 & 2 & 4 & 2 & 3 & - \\\\\n                $C_\\mathit{bwd}$ & b & a & d & g & h & j & c & e & k & f & i & - \\\\\n                & & & & & & & & & & & & \\\\\n                $\\mathit{map}_\\mathit{fwd}: i_\\mathit{bwd} \\mapsto i_\\mathit{fwd}$ & 1 & \\cellcolor[gray]{0.8} 0 & \\cellcolor[gray]{0.8} 3 & \\cellcolor[gray]{0.8} 6 & 7 & 9 & 2 & 4 & 10 & 5 & 8 & - \\\\\n                $i_\\mathit{bwd}$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & - \\\\\n                & & & & & & & & & & & & \\\\\n                $O_S: i_V \\mapsto i_\\mathit{bwd}$ & 0 & \\multicolumn{3}{c |}{\\cellcolor[gray]{0.8} 1} & \\multicolumn{2}{c |}{\\cellcolor[gray]{0.8} 4} & \\multicolumn{3}{c |}{6} & \\multicolumn{2}{c |}{9} & 11 \\\\\n            \\end{tabular}\n            \\caption[Indices-mapping for accessing graph-data via offset-arrays]{%\n                This table shows one graph stored with respect to forward-edges above and with respect to backward-edges below.\n                Note that a backward-graph of a graph has just 
the edges being reversed, so $(s, d) \\mapsto (d, s)$.\n                The more intuitive way is storing the graph and necessary indices forwardly.\n                That's why the forward graph's notation is used as default (\\eg\\ $S$ is used instead of $S_\\mathit{fwd}$ according to $S_\\mathit{bwd}$).\n                However, both offset-arrays $O_D$ and $O_S$ and $\\mathit{map}_\\mathit{fwd}$ are needed and hence each table is needed.\n\n                The tables highlight important values of the two example-queries from the text.\n                \\label{table:balancing:indices_mapping}\n            }\n        \\end{table}\n\n        Two example-queries are explained in the following, assuming the graph being stored with respect to forward edges.\n        In general, leaving edges of vertex $V[s]$ or incoming edges of a vertex $V[d]$ can be determined by following mappings (referring to \\cref{table:balancing:indices_mapping}):\n        \\begin{equation}\n        \\label{eq:balancing:indices_mapping}\n        \\begin{aligned}\n            \\mathit{offset} &\\in \\left\\{ O_D \\left[ s \\right], O_D \\left[ s \\right] + 1, \\dots, O_D \\left[ s+1 \\right] - 1 \\right\\} \\\\\n            d_\\mathit{offset} &= D \\left[ \\mathit{identity}_\\mathit{fwd} \\left[ \\mathit{offset} \\right] \\right] \\\\\n            s &\\mapsto \\left\\{ \\left( s, d_\\mathit{offset} \\right) \\right\\} \\\\\n            \\\\\n            \\mathit{offset} &\\in \\left\\{ O_S \\left[ d \\right], O_S \\left[ d \\right] + 1, \\dots, O_S \\left[ d+1 \\right] - 1 \\right\\} \\\\\n            s_\\mathit{offset} &= S \\left[ \\mathit{map}_\\mathit{fwd} \\left[ \\mathit{offset} \\right] \\right] \\\\\n            d &\\mapsto \\left\\{ \\left( s_\\mathit{offset}, d \\right) \\right\\} \\\\\n        \\end{aligned}\n        \\end{equation}\n        The mapping $\\mathit{map}_\\mathit{fwd}$ can be computed by sorting all edges according to the forward-graph (first source-id, than destination-id) before sorting all edges again according to the backward-graph (first destination-id, than source-id).\n        In between, the edge-positions from the first sorted edges are linked to the edges, such that they get reordered by sorting a second time.\n        After sorting the second time, the remembered positions become the mapping $\\mathit{map}_\\mathit{fwd}$.\n\n        In the examples, one query is asking for all leaving edges starting in $V[s=4]=\\mathit{id}_4$ and one is asking for all incoming edges ending in $V[d=1]=\\mathit{id}_1$.\n        The choice of $d=1$ is handy, because $(2, 1) \\in E \\wedge (1, 2) \\notin E$ holds, which might help with understanding the examples.\n        Important values in these examples are highlighted in the tables.\n\n        To get all leaving edges starting in $V[s=4]=\\mathit{id}_4$, the offset-array $O_D$ is used.\n        All forward-indices within $\\left\\{ O_D[s], O_D[s] + 1, \\dots, O_D[s+1] - 1 \\right\\}$ refer to the leaving edges.\n        Since $O_D[4]=9$ and $O_D[5]=11$, all leaving-edges are at positions $\\left\\{ 9, 10 \\right\\}$ with respect to forward-edges.\n        Because the graph is stored with respect to forward-edges, these resulting positions are already the edge-indices $i_E$ of the leaving-edges.\n        The \\cref{table:balancing:indices_mapping} still marks the respective values \\num{9} and \\num{10} in the forward-table.\n        Thus, all leaving-edges are $\\left\\{ (4, D[9]=2), (4, D[10]=3) \\right\\} = \\left\\{ (4,2), (4,3) \\right\\}$.\n\n   
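Before walking through the second query, both lookups can be made concrete with a small Python sketch over the arrays from \\cref{table:balancing:indices_mapping}; the function-names are illustrative and not taken from the actual implementation.\n        \\begin{lstlisting}\nS   = [0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4]   # source-indices\nD   = [1, 0, 3, 1, 3, 4, 1, 2, 4, 2, 3]   # destination-indices\nO_D = [0, 1, 3, 6, 9, 11]                 # offsets per vertex (fwd)\nO_S = [0, 1, 4, 6, 9, 11]                 # offsets per vertex (bwd)\nmap_fwd = [1, 0, 3, 6, 7, 9, 2, 4, 10, 5, 8]  # i_bwd -> i_E\n\ndef leaving_edges(s):\n    # forward offsets point directly at edge-indices i_E\n    return [(s, D[i]) for i in range(O_D[s], O_D[s + 1])]\n\ndef incoming_edges(d):\n    # backward positions are mapped to edge-indices i_E first\n    return [(S[map_fwd[i]], d)\n            for i in range(O_S[d], O_S[d + 1])]\n\nprint(leaving_edges(4))   # [(4, 2), (4, 3)]\nprint(incoming_edges(1))  # [(0, 1), (2, 1), (3, 1)]\n\\end{lstlisting}\n\n     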
     Getting all incoming edges ending in $V[d=1]=\\mathit{id}_1$ works similar and uses the offset-array $O_S$ respectively.\n        All backward-indices within $\\left\\{ O_S[d], O_S[d] + 1, \\dots, O_S[d+1] - 1 \\right\\}$ refer to the incoming edges.\n        The difference here is the outcome from $O_S$.\n        Due to $O_S[1]=1$ and $O_S[2]=4$, all incoming-edges are at positions $\\left\\{ 1, 2, 3 \\right\\}$ with respect to backward-edges, but the graph is stored with respect to forward-edges.\n        Therefore, the indices are not matching and the resulting positions $\\left\\{ 1, 2, 3 \\right\\}$ need to be mapped to the respective (forward-) edge-indices $i_E$.\n        For this, $\\mathit{map}_\\mathit{fwd}$ is used.\n        \\begin{equation}\n        \\begin{aligned}\n            \\mathit{map}_\\mathit{fwd}[1] &= 0 \\Rightarrow S[0] = 0 \\\\\n            \\mathit{map}_\\mathit{fwd}[2] &= 3 \\Rightarrow S[3] = 2 \\\\\n            \\mathit{map}_\\mathit{fwd}[3] &= 6 \\Rightarrow S[6] = 3 \\\\\n        \\end{aligned}\n        \\end{equation}\n        Thus, all incoming-edges are $\\left\\{ (0, 1), (2, 1), (3, 1) \\right\\}$.\n\n    \\subsection{Storing and accessing shortcuts in graphs}\n\n        Shortcuts have metrics and source-/destination-vertices as well, so they are full edges and can be added to the current data-structure.\n        However, due to construction of a contracted graph, a shortcut replaces two edges, which may be shortcuts.\n        This functionality is missing in the graph's data-structure yet.\n        A first intuition may be extending edges by a tuple of two edge-indices.\n        Implementing this causes much redundancy, because normal edges would need a placeholder for their tuple.\n        The street-networks from the experiments in \\cref{chap:experiments} have less than $|E|$ shortcuts added.\n        So after adding all shortcuts, more than half of all edges will have unused allocated memory of two indices.\n        For example, Saarland from the experiments contains over $\\num{1000000}$~edges.\n        If these placeholders would be used, each of these million edges would have two unused indices.\n        Assuming one index needs $\\si{\\num{64} \\bit}$, this sums up to $\\si{\\num{16} \\mega\\byte}$ of unused memory.\n        This number scales with the graph's size, so Germany with over $\\num{100000000}$~edges would have already $\\si{\\num{1.6} \\giga\\byte}$ of completely unused, but allocated memory.\n\n        Therefore, since edge-indices $i_E$ can be used for all edge-data ($S$, $D$, $C$), this whole indexing-procedure from \\cref{chap:balancing:implementation:graphs} can be and is used in this thesis for storing and accessing shortcuts without such placeholders.\n        Like the offsets $O_D$ and $O_S$, another offset-array is created.\n        Let this new offset-array be called $O_L$ ($L$ for link, since $S$ is already used).\n        The significant difference between $O_D$ and $O_S$ on one side and $O_L$ on the other side is, that $O_L$ contains an offset-value per edge, whereas $O_D$ and $O_S$ contain offset-values per vertex.\n        Lining all shortcut-tuples of edge-indices up according to the forward-edges' order in the graph, this results in the data-array $L$.\n        This array $L$ behaves exactly similar as the array $S$ or $D$.\n        So, despite the indexing-mapping, the shortcut-indices of an edge-index $i_E$ can be found in $L$ between $O_L \\left[ i_E \\right]$ (including) and $O_L \\left[ i_E + 1 \\right]$ 
(excluding).\n        A shortcut can be determined by comparing neighbouring values in $O_L$.\n        If $O_L \\left[ i_E \\right] = O_L \\left[ i_E + 1 \\right]$ holds, the edge at $i_E$ is no shortcut.\n\n        When analyzing the memory-usage, this method has improved the other approach of storing tuples in the experiments.\n        Let $m$ be the number of edges without shortcuts and $m_L$ the number of shortcuts in the graph.\n        Previously, the number of needed indices was $2 \\cdot (m + m_L)$.\n        The offset-procedure stores $2 \\cdot m_L$ replaced edge-indices and $(m + m_L)$ offset-indices, resulting in $m + m_L + 2 \\cdot m_L$.\n        As long as $m_L < m$ holds, the offset-procedure saves memory, which was the case in all experimental graphs.\n\n    \\subsection{Multithreading}\n\n        The implementation of searching for paths in parallel has brought a huge performance-boost.\n        Optimal paths of different \\glspl{stpair} may have different costs and thus need a different amount of runtime.\n        This holds especially for \\gls{repr}, where longer paths lead to more alternative paths.\n        Therefore, the number of \\glspl{stpair} is not distributed evenly over all threads.\n        In fact, there is one single master-thread pulling some \\glspl{stpair} from the user-provided set of \\glspl{stpair} and sends this work-package to one of several worker-threads.\n        Every worker-thread has its own instance of the routing-algorithm, searches for the optimum paths, and returns a simple list of all occurred edge-indices $i_E$ to the master-thread.\n        The master-thread counts the workloads for each edge and pulls the next work-package until all \\glspl{stpair} are processed.\n\n        With this procedure, the workload and not the number of \\glspl{stpair} is distributed evenly.\n        The communication-overhead when using smaller work-package-sizes is negligible compared to idle worker-threads, that wait for the last few worker-threads to finish their larger work-package.", "meta": {"hexsha": "fc85ed377bda69d6eddf054be5bd34a438d22bd6", "size": 33096, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "core/src/content/balancing.tex", "max_stars_repo_name": "dominicparga/master-thesis", "max_stars_repo_head_hexsha": "0215902fc26180df102deaed03fbf3a8b2d03801", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-01-04T23:53:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-11T16:28:31.000Z", "max_issues_repo_path": "core/src/content/balancing.tex", "max_issues_repo_name": "dominicparga/master-thesis", "max_issues_repo_head_hexsha": "0215902fc26180df102deaed03fbf3a8b2d03801", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "core/src/content/balancing.tex", "max_forks_repo_name": "dominicparga/master-thesis", "max_forks_repo_head_hexsha": "0215902fc26180df102deaed03fbf3a8b2d03801", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.7563739377, "max_line_length": 395, "alphanum_fraction": 0.6874848924, "num_tokens": 8688, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891479496523, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.5855895626575323}}
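As a concrete reference for the update-step of the balancing, here is a minimal Python sketch of Equations \ref{eq:averaging} and \ref{eq:metric_cleanup}; the array names and the $\epsilon$-value are illustrative, not taken from the thesis' implementation.
\begin{lstlisting}
import numpy as np

EPS = 0.1  # predefined minimum value epsilon, e.g. 0.1

def update_workload_metric(old, counts, i):
    # normalize the counted workload by its mean (eq:workloads)
    new = counts / counts.mean()
    # averaging update (eq:averaging)
    old = (i * old + new) / (i + 1)
    # cleanup (eq:metric_cleanup): clamp zeros, re-normalize
    old = np.maximum(EPS, old)
    return old / old.mean()

# iteration 0: the metric becomes the normalized workload itself
old = update_workload_metric(np.ones(5),
                             np.array([4., 0., 1., 0., 5.]), 0)
\end{lstlisting}
In iteration $i=0$ the previous metric is weighted with zero, matching $\mathit{old}_{e,1}=\mathit{new}_{e,0}$ (before normalization); later iterations weight the accumulated metric with $i$ and the fresh workload with $1$.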
{"text": "\\section{Algebraic Structures}\n\n\\subsection{OFE}\n\nThe model of Iris lives in the category of \\emph{Ordered Families of Equivalences} (OFEs).\nThis definition varies slightly from the original one in~\\cite{catlogic}.\n\n\\begin{defn}\n  An \\emph{ordered family of equivalences} (OFE) is a tuple $(\\ofe, ({\\nequiv{n}} \\subseteq \\ofe \\times \\ofe)_{n \\in \\nat})$ satisfying\n  \\begin{align*}\n    \\All n. (\\nequiv{n}) ~& \\text{is an equivalence relation} \\tagH{ofe-equiv} \\\\\n    \\All n, m.& n \\geq m \\Ra (\\nequiv{n}) \\subseteq (\\nequiv{m}) \\tagH{ofe-mono} \\\\\n    \\All x, y.& x = y \\Lra (\\All n. x \\nequiv{n} y) \\tagH{ofe-limit}\n  \\end{align*}\n\\end{defn}\n\nThe key intuition behind OFEs is that elements $x$ and $y$ are $n$-equivalent, notation $x \\nequiv{n} y$, if they are \\emph{equivalent for $n$ steps of computation}, \\ie if they cannot be distinguished by a program running for no more than $n$ steps.\nIn other words, as $n$ increases, $\\nequiv{n}$ becomes more and more refined (\\ruleref{ofe-mono})---and in the limit, it agrees with plain equality (\\ruleref{ofe-limit}).\n\nNotice that OFEs are just a different presentation of bisected 1-bounded ultrametric spaces, where the family of equivalence relations gives rise to the distance function (two elements that are equal for $n$ steps are no more than $2^{-n}$ apart).\n\n\\begin{defn}\n  An element $x \\in \\ofe$ of an OFE is called \\emph{discrete} if\n  \\[ \\All y \\in \\ofe. x \\nequiv{0} y \\Ra x = y\\]\n  An OFE $A$ is called \\emph{discrete} if all its elements are discrete.\n  For a set $X$, we write $\\Delta X$ for the discrete OFE with $x \\nequiv{n} x' \\eqdef x = x'$\n\\end{defn}\n\n\\begin{defn}\n  A function $f : \\ofe \\to \\ofeB$ between two OFEs is \\emph{non-expansive} (written $f : \\ofe \\nfn \\ofeB$) if\n  \\[\\All n, x \\in \\ofe, y \\in \\ofe. x \\nequiv{n} y \\Ra f(x) \\nequiv{n} f(y) \\]\n  It is \\emph{contractive} if\n  \\[ \\All n, x \\in \\ofe, y \\in \\ofe. (\\All m < n. x \\nequiv{m} y) \\Ra f(x) \\nequiv{n} f(y) \\]\n\\end{defn}\nIntuitively, applying a non-expansive function to some data will not suddenly introduce differences between seemingly equal data.\nElements that cannot be distinguished by programs within $n$ steps remain indistinguishable after applying $f$.\n\n\\begin{defn}\n  The category $\\OFEs$ consists of OFEs as objects, and non-expansive functions as arrows.\n\\end{defn}\n\nNote that $\\OFEs$ is bicartesian closed, \\ie it has all sums, products and exponentials as well as an initial and a terminal object.\nIn particular:\n\\begin{defn}\n  Given two OFEs $\\ofe$ and $\\ofeB$, the set of non-expansive functions $\\set{f : \\ofe \\nfn \\ofeB}$ is itself an OFE with\n  \\begin{align*}\n    f \\nequiv{n} g \\eqdef{}& \\All x \\in \\ofe. 
f(x) \\nequiv{n} g(x)\n  \\end{align*}\n\\end{defn}\n\n\\begin{defn}\n  A (bi)functor $F : \\OFEs \\to \\OFEs$ is called \\emph{locally non-expansive} if its action $F_1$ on arrows is itself a non-expansive map.\n  Similarly, $F$ is called \\emph{locally contractive} if $F_1$ is a contractive map.\n\\end{defn}\nThe function space $(-) \\nfn (-)$ is a locally non-expansive bifunctor.\nNote that the composition of non-expansive (bi)functors is non-expansive, and the composition of a non-expansive and a contractive (bi)functor is contractive.\n\nOne very important OFE is the OFE of \\emph{step-indexed propositions}:\nFor every step-index, such a proposition either holds or does not hold.\nMoreover, if a propositions holds for some $n$, it also has to hold for all smaller step-indices.\n\\begin{align*}\n  \\SProp \\eqdef{}& \\psetdown{\\nat} \\\\\n    \\eqdef{}& \\setComp{X \\in \\pset{\\nat}}{ \\All n, m. n \\geq m \\Ra n \\in X \\Ra m \\in X } \\\\\n  X \\nequiv{n} Y \\eqdef{}& \\All m \\leq n. m \\in X \\Lra m \\in Y \\\\\n  X \\nincl{n} Y \\eqdef{}& \\All m \\leq n. m \\in X \\Ra m \\in Y\n\\end{align*}\n\n\\subsection{COFE}\n\nCOFEs are \\emph{complete OFEs}, which means that we can take limits of arbitrary chains.\n\n\\begin{defn}[Chain]\n  Given some set $\\cofe$ and an indexed family $({\\nequiv{n}} \\subseteq \\cofe \\times \\cofe)_{n \\in \\nat}$ of equivalence relations, a \\emph{chain} $c \\in \\Chains(\\cofe)$ is a function $c : \\nat \\to \\cofe$ such that $\\All n, m. n \\leq m \\Ra c (m) \\nequiv{n} c (n)$.\n\\end{defn}\n\n\\begin{defn}\n  A \\emph{complete ordered family of equivalences} (COFE) is a tuple $(\\cofe : \\OFEs,  \\lim : \\Chains(\\cofe) \\to \\cofe)$ satisfying\n  \\begin{align*}\n    \\All n, c.& \\lim(c) \\nequiv{n} c(n) \\tagH{cofe-compl}\n  \\end{align*}\n\\end{defn}\n\n\\begin{defn}\n  The category $\\COFEs$ consists of COFEs as objects, and non-expansive functions as arrows.\n\\end{defn}\n\nThe function space $\\ofe \\nfn \\cofeB$ is a COFE if $\\cofeB$ is a COFE (\\ie the domain $\\ofe$ can actually be just an OFE).\n$\\SProp$ as defined above is complete, \\ie it is a COFE.\n\nCompleteness is necessary to take fixed-points.\n\n\\begin{thm}[Banach's fixed-point]\n\\label{thm:banach}\nGiven an inhabited COFE $\\ofe$ and a contractive function $f : \\ofe \\to \\ofe$, there exists a unique fixed-point $\\fixp_T f$ such that $f(\\fixp_T f) = \\fixp_T f$.\nMoreover, this theorem also holds if $f$ is just non-expansive, and $f^k$ is contractive for an arbitrary $k$.\n\\end{thm}\n\n\\begin{thm}[America and Rutten~\\cite{America-Rutten:JCSS89,birkedal:metric-space}]\n\\label{thm:america_rutten}\nLet $1$ be the discrete COFE on the unit type: $1 \\eqdef \\Delta \\{ () \\}$.\nGiven a locally contractive bifunctor $G : \\COFEs^{\\textrm{op}} \\times \\COFEs \\to \\COFEs$, and provided that \\(G(1, 1)\\) is inhabited,\nthen there exists a unique\\footnote{Uniqueness is not proven in Coq.} COFE $\\ofe$ such that $G(\\ofe^{\\textrm{op}}, \\ofe) \\cong \\ofe$ (\\ie the two are isomorphic in $\\COFEs$).\n\\end{thm}\n\n\\subsection{RA}\n\n\\begin{defn}\n  A \\emph{resource algebra} (RA) is a tuple \\\\\n  $(\\monoid, \\mvalFull :  \\monoid \\to \\mProp, \\mcore{{-}}:\n  \\monoid \\to \\maybe\\monoid, (\\mtimes) : \\monoid \\times \\monoid \\to \\monoid)$ satisfying:\n  \\begin{align*}\n    \\All \\melt, \\meltB, \\meltC.& (\\melt \\mtimes \\meltB) \\mtimes \\meltC = \\melt \\mtimes (\\meltB \\mtimes \\meltC) \\tagH{ra-assoc} \\\\\n    \\All \\melt, \\meltB.& \\melt \\mtimes 
\\meltB = \\meltB \\mtimes \\melt \\tagH{ra-comm} \\\\\n    \\All \\melt.& \\mcore\\melt \\in \\monoid \\Ra \\mcore\\melt \\mtimes \\melt = \\melt \\tagH{ra-core-id} \\\\\n    \\All \\melt.& \\mcore\\melt \\in \\monoid \\Ra \\mcore{\\mcore\\melt} = \\mcore\\melt \\tagH{ra-core-idem} \\\\\n    \\All \\melt, \\meltB.& \\mcore\\melt \\in \\monoid \\land \\melt \\mincl \\meltB \\Ra \\mcore\\meltB \\in \\monoid \\land \\mcore\\melt \\mincl \\mcore\\meltB \\tagH{ra-core-mono} \\\\\n    \\All \\melt, \\meltB.& \\mvalFull(\\melt \\mtimes \\meltB)  \\Ra \\mvalFull(\\melt)  \\tagH{ra-valid-op} \\\\\n    \\text{where}\\qquad %\\qquad\\\\\n    \\maybe\\monoid \\eqdef{}& \\monoid \\uplus \\set{\\mnocore} \\qquad\\qquad\\qquad \\melt^? \\mtimes \\mnocore \\eqdef \\mnocore \\mtimes \\melt^? \\eqdef \\melt^? \\\\\n    \\melt \\mincl \\meltB \\eqdef{}& \\Exists \\meltC \\in \\monoid. \\meltB = \\melt \\mtimes \\meltC \\tagH{ra-incl}\n  \\end{align*}\n\\end{defn}\nHere, $\\mProp$ is the set of (meta-level) propositions.\nThink of \\texttt{Prop} in Coq or $\\mathbb{B}$ in classical mathematics.\n\nRAs are closely related to \\emph{Partial Commutative Monoids} (PCMs), with two key differences:\n\\begin{enumerate}\n\\item The composition operation on RAs is total (as opposed to the partial composition operation of a PCM), but there is a specific subset of \\emph{valid} elements that is compatible with the composition operation (\\ruleref{ra-valid-op}).\nThese valid elements are identified by the \\emph{validity predicate} $\\mvalFull$.\n\nThis take on partiality is necessary when defining the structure of \\emph{higher-order} ghost state, \\emph{cameras}, in the next subsection.\n\n\\item Instead of a single unit that is an identity to every element, we allow\nfor an arbitrary number of units, via a function $\\mcore{{-}}$ assigning to an element $\\melt$ its \\emph{(duplicable) core} $\\mcore\\melt$, as demanded by \\ruleref{ra-core-id}.\n  We further demand that $\\mcore{{-}}$ is idempotent (\\ruleref{ra-core-idem}) and monotone (\\ruleref{ra-core-mono}) with respect to the \\emph{extension order}, defined similarly to that for PCMs (\\ruleref{ra-incl}).\n\n  Notice that the domain of the core is $\\maybe\\monoid$, a set that adds a dummy element $\\mnocore$ to $\\monoid$.\n%  (This corresponds to the option type.)\n  Thus, the core can be \\emph{partial}: not all elements need to have a unit.\n  We use the metavariable $\\maybe\\melt$ to indicate elements of  $\\maybe\\monoid$.\n  We also lift the composition $(\\mtimes)$ to $\\maybe\\monoid$.\n  Partial cores help us to build interesting composite RAs from smaller primitives.\n\nNotice also that the core of an RA is a strict generalization of the unit that any PCM must provide, since $\\mcore{{-}}$ can always be picked as a constant function.\n\\end{enumerate}\n\n\n\\begin{defn}\n  It is possible to do a \\emph{frame-preserving update} from $\\melt \\in \\monoid$ to $\\meltsB \\subseteq \\monoid$, written $\\melt \\mupd \\meltsB$, if\n  \\[ \\All \\maybe{\\melt_\\f} \\in \\maybe\\monoid. \\mvalFull(\\melt \\mtimes \\maybe{\\melt_\\f}) \\Ra \\Exists \\meltB \\in \\meltsB. 
\\mvalFull(\\meltB \\mtimes \\maybe{\\melt_\\f}) \\]\n\n  We further define $\\melt \\mupd \\meltB \\eqdef \\melt \\mupd \\set\\meltB$.\n\\end{defn}\nThe proposition $\\melt \\mupd \\meltsB$ says that every element $\\maybe{\\melt_\\f}$ compatible with $\\melt$ (we also call such elements \\emph{frames}), must also be compatible with some $\\meltB \\in \\meltsB$.\nNotice that $\\maybe{\\melt_\\f}$ could be $\\mnocore$, so the frame-preserving update can also be applied to elements that have \\emph{no} frame.\nIntuitively, this means that whatever assumptions the rest of the program is making about the state of $\\gname$, if these assumptions are compatible with $\\melt$, then updating to $\\meltB$ will not invalidate any of these assumptions.\nSince Iris ensures that the global ghost state is valid, this means that we can soundly update the ghost state from $\\melt$ to a non-deterministically picked $\\meltB \\in \\meltsB$.\n\n\\subsection{Cameras}\n\n\\begin{defn}\n  A \\emph{camera} is a tuple $(\\monoid : \\OFEs, \\mval : \\monoid \\nfn \\SProp, \\mcore{{-}}: \\monoid \\nfn \\maybe\\monoid,\\\\ (\\mtimes) : \\monoid \\times \\monoid \\nfn \\monoid)$ satisfying:\n  \\begin{align*}\n    \\All \\melt, \\meltB, \\meltC.& (\\melt \\mtimes \\meltB) \\mtimes \\meltC = \\melt \\mtimes (\\meltB \\mtimes \\meltC) \\tagH{camera-assoc} \\\\\n    \\All \\melt, \\meltB.& \\melt \\mtimes \\meltB = \\meltB \\mtimes \\melt \\tagH{camera-comm} \\\\\n    \\All \\melt.& \\mcore\\melt \\in \\monoid \\Ra \\mcore\\melt \\mtimes \\melt = \\melt \\tagH{camera-core-id} \\\\\n    \\All \\melt.& \\mcore\\melt \\in \\monoid \\Ra \\mcore{\\mcore\\melt} = \\mcore\\melt \\tagH{camera-core-idem} \\\\\n    \\All \\melt, \\meltB.& \\mcore\\melt \\in \\monoid \\land \\melt \\mincl \\meltB \\Ra \\mcore\\meltB \\in \\monoid \\land \\mcore\\melt \\mincl \\mcore\\meltB \\tagH{camera-core-mono} \\\\\n    \\All \\melt, \\meltB.& \\mval(\\melt \\mtimes \\meltB) \\subseteq \\mval(\\melt)  \\tagH{camera-valid-op} \\\\\n    \\All n, \\melt, \\meltB_1, \\meltB_2.& \\omit\\rlap{$n \\in \\mval(\\melt) \\land \\melt \\nequiv{n} \\meltB_1 \\mtimes \\meltB_2 \\Ra {}$} \\\\\n    &\\Exists \\meltC_1, \\meltC_2. \\melt = \\meltC_1 \\mtimes \\meltC_2 \\land \\meltC_1 \\nequiv{n} \\meltB_1 \\land \\meltC_2 \\nequiv{n} \\meltB_2 \\tagH{camera-extend} \\\\\n    \\text{where}\\qquad\\qquad\\\\\n    \\melt \\mincl \\meltB \\eqdef{}& \\Exists \\meltC. \\meltB = \\melt \\mtimes \\meltC \\tagH{camera-incl} \\\\\n    \\melt \\mincl[n] \\meltB \\eqdef{}& \\Exists \\meltC. 
\\meltB \\nequiv{n} \\melt \\mtimes \\meltC \\tagH{camera-inclN}\n  \\end{align*}\n\\end{defn}\n\nThis is a natural generalization of RAs over OFEs\\footnote{The reader may wonder why on earth we call them ``cameras''.\nThe reason, which may not be entirely convincing, is that ``camera'' was originally just used as a comfortable pronunciation of ``CMRA'', the name used in earlier Iris papers.\nCMRA was originally supposed to be an acronym for ``complete metric resource algebras'' (or something like that), but we were never very satisfied with it and thus ended up never spelling it out.\nTo make matters worse, the ``complete'' part of CMRA is now downright misleading, for whereas previously the carrier of a CMRA was required to be a COFE (complete OFE), we have relaxed that restriction and permit it to be an (incomplete) OFE.\nFor these reasons, we have decided to stick with the name ``camera'', for purposes of continuity, but to drop any pretense that it stands for something.}.\nAll operations have to be non-expansive, and the validity predicate $\\mval$ can now also depend on the step-index.\nWe define the plain $\\mvalFull$ as the ``limit'' of the step-indexed approximation:\n\\[ \\mvalFull(\\melt) \\eqdef \\All n. n \\in \\mval(\\melt) \\]\n\n\\paragraph{The extension axiom (\\ruleref{camera-extend}).}\nNotice that the existential quantification in this axiom is \\emph{constructive}, \\ie it is a sigma type in Coq.\nThe purpose of this axiom is to compute $\\melt_1$, $\\melt_2$ completing the following square:\n\n% RJ FIXME: Needs some magic to fix the baseline of the $\\nequiv{n}$, or so\n\\begin{center}\n\\begin{tikzpicture}[every edge/.style={draw=none}]\n  \\node (a) at (0, 0) {$\\melt$};\n  \\node (b) at (1.7, 0) {$\\meltB$};\n  \\node (b12) at (1.7, -1) {$\\meltB_1 \\mtimes \\meltB_2$};\n  \\node (a12) at (0, -1) {$\\melt_1 \\mtimes \\melt_2$};\n\n  \\path (a) edge node {$\\nequiv{n}$} (b);\n  \\path (a12) edge node {$\\nequiv{n}$} (b12);\n  \\path (a) edge node [rotate=90] {$=$} (a12);\n  \\path (b) edge node [rotate=90] {$=$} (b12);\n\\end{tikzpicture}\\end{center}\nwhere the $n$-equivalence at the bottom is meant to apply to the pairs of elements, \\ie we demand $\\melt_1 \\nequiv{n} \\meltB_1$ and $\\melt_2 \\nequiv{n} \\meltB_2$.\nIn other words, extension carries the decomposition of $\\meltB$ into $\\meltB_1$ and $\\meltB_2$ over the $n$-equivalence of $\\melt$ and $\\meltB$, and yields a corresponding decomposition of $\\melt$ into $\\melt_1$ and $\\melt_2$.\nThis operation is needed to prove that $\\later$ commutes with separating conjunction:\n\\begin{mathpar}\n  \\axiom{\\later (\\prop * \\propB) \\Lra \\later\\prop * \\later\\propB}\n\\end{mathpar}\n\n\\begin{defn}\n  An element $\\munit$ of a camera $\\monoid$ is called the \\emph{unit} of $\\monoid$ if it satisfies the following conditions:\n  \\begin{enumerate}[itemsep=0pt]\n  \\item $\\munit$ is valid: \\\\ $\\All n. n \\in \\mval(\\munit)$\n  \\item $\\munit$ is a left-identity of the operation: \\\\\n    $\\All \\melt \\in M. \\munit \\mtimes \\melt = \\melt$\n  \\item $\\munit$ is its own core: \\\\ $\\mcore\\munit = \\munit$\n  \\end{enumerate}\n\\end{defn}\n\n\\begin{lem}\\label{lem:camera-unit-total-core}\n  If $\\monoid$ has a unit $\\munit$, then the core $\\mcore{{-}}$ is total, \\ie $\\All\\melt. 
\\mcore\\melt \\in \\monoid$.\n\\end{lem}\n\n\\begin{defn}\n  It is possible to do a \\emph{frame-preserving update} from $\\melt \\in \\monoid$ to $\\meltsB \\subseteq \\monoid$, written $\\melt \\mupd \\meltsB$, if\n  \\[ \\All n, \\maybe{\\melt_\\f}. n \\in \\mval(\\melt \\mtimes \\maybe{\\melt_\\f}) \\Ra \\Exists \\meltB \\in \\meltsB. n \\in\\mval(\\meltB \\mtimes \\maybe{\\melt_\\f}) \\]\n\n  We further define $\\melt \\mupd \\meltB \\eqdef \\melt \\mupd \\set\\meltB$.\n\\end{defn}\nNote that for RAs, this and the RA-based definition of a frame-preserving update coincide.\n\n\\begin{defn}\n  A camera $\\monoid$ is \\emph{discrete} if it satisfies the following conditions:\n  \\begin{enumerate}[itemsep=0pt]\n  \\item $\\monoid$ is a discrete COFE\n  \\item $\\mval$ ignores the step-index: \\\\\n    $\\All \\melt \\in \\monoid. 0 \\in \\mval(\\melt) \\Ra \\All n. n \\in \\mval(\\melt)$\n  \\end{enumerate}\n\\end{defn}\nNote that every RA is a discrete camera, by picking the discrete COFE for the equivalence relation.\nFurthermore, discrete cameras can be turned into RAs by ignoring their COFE structure, as well as the step-index of $\\mval$.\n\n\\begin{defn}[Camera homomorphism]\n  A function $f : \\monoid_1 \\to \\monoid_2$ between two cameras is \\emph{a camera homomorphism} if it satisfies the following conditions:\n  \\begin{enumerate}[itemsep=0pt]\n  \\item $f$ is non-expansive\n  \\item $f$ commutes with composition:\\\\\n    $\\All \\melt_1 \\in \\monoid_1, \\melt_2 \\in \\monoid_1. f(\\melt_1) \\mtimes f(\\melt_2) = f(\\melt_1 \\mtimes \\melt_2)$\n  \\item $f$ commutes with the core:\\\\\n    $\\All \\melt \\in \\monoid_1. \\mcore{f(\\melt)} = f(\\mcore{\\melt})$\n  \\item $f$ preserves validity: \\\\\n    $\\All n, \\melt \\in \\monoid_1. 
n \\in \\mval(\\melt) \\Ra n \\in \\mval(f(\\melt))$\n  \\end{enumerate}\n\\end{defn}\n\n\\begin{defn}\n  The category $\\CMRAs$ consists of cameras as objects, and camera homomorphisms as arrows.\n\\end{defn}\nNote that every object/arrow in $\\CMRAs$ is also an object/arrow of $\\OFEs$.\nThe notion of a locally non-expansive (or contractive) bifunctor naturally generalizes to bifunctors between these categories.\n%TODO: Discuss how we probably have a commuting square of functors between Set, RA, CMRA, COFE.\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"iris\"\n%%% End: \n", "meta": {"hexsha": "f8a97c690a61d5934f5694547ab056897fcecd9b", "size": 16605, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/algebra.tex", "max_stars_repo_name": "SkySkimmer/iris", "max_stars_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-11T23:20:43.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-11T23:20:43.000Z", "max_issues_repo_path": "tex/algebra.tex", "max_issues_repo_name": "gares/iris", "max_issues_repo_head_hexsha": "7b4a04ce0d396cb27eeef22e883a9f3b738e83f4", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-03T16:15:44.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-03T16:15:44.000Z", "max_forks_repo_path": "tex/algebra.tex", "max_forks_repo_name": "gares/iris", "max_forks_repo_head_hexsha": "7b4a04ce0d396cb27eeef22e883a9f3b738e83f4", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-02T19:17:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-02T19:17:43.000Z", "avg_line_length": 61.9589552239, "max_line_length": 264, "alphanum_fraction": 0.6922011442, "num_tokens": 5789, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891305219504, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.5855895602891693}}
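To make the interplay between validity, composition, and frame-preserving updates concrete, here is a minimal Python sketch (an illustration only, entirely separate from the Coq development; all names are ad hoc). It models the \\emph{exclusive} RA over a finite carrier -- any two tokens compose to a designated invalid element -- and brute-forces the RA-based definition of $\\melt \\mupd \\meltsB$ by enumerating all frames.\n\\begin{verbatim}\n# Exclusive RA over a finite carrier S: elements ('ex', s) plus 'bot'.\n# Composing any two elements yields 'bot'; only non-'bot' is valid.\nS = ['s1', 's2', 's3']\nM = [('ex', s) for s in S] + ['bot']\n\ndef op(a, b):\n    if a is None: return b   # None plays the role of the absent frame\n    if b is None: return a\n    return 'bot'             # two exclusive tokens never compose\n\ndef valid(a):\n    return a != 'bot'\n\ndef fpu(a, B):\n    # a ~~> B: every frame (possibly absent) compatible with a\n    # must also be compatible with some b in B.\n    return all(any(valid(op(b, f)) for b in B)\n               for f in M + [None] if valid(op(a, f)))\n\n# The only frame compatible with an exclusive token is the absent one,\n# so the token can be updated to any other token:\nassert fpu(('ex', 's1'), [('ex', 's2')])\n\\end{verbatim}\nIn an actual RA the update is of course established by a proof rather than by enumeration; the sketch only mirrors the quantifier structure of the definition.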
{"text": "% When using TeXShop on the Mac, let it know the root document.\n% The following must be one of the first 20 lines.\n% !TEX root = ../design.tex\n\n\\chapter{Low-rank Matrix Factorization}\n\n\\begin{moduleinfo}\n\\item[Author] \\href{mailto:xfeng@cs.wisc.edu}{Xixuan (Aaron) Feng} (version 0.5 only)\n\\item[History]\n\t\\begin{modulehistory}\n\t\t\\item[v0.5] Initial revision of design document; implementation based on incremental gradient descent\n\t\t\\item[v0.1] Initial revision (somewhat misleadingly called SVD matrix factorization at that time)\n\t\\end{modulehistory}\n\\end{moduleinfo}\n\n% Abstract. What is the problem we want to solve?\nThis module implements a ``factor model'' for representing an incomplete matrix using a low-rank approximation \\cite{DBLP:conf/icml/SrebroJ03}.\nMathematically, this model seeks to find matrices $U$ and $V$ (also referred to as factors) that, for any given incomplete matrix $A$, minimize:\n\\[ \\|\\boldsymbol A - \\boldsymbol UV^{T} \\|_2 \\]\nsubject to $\\mathrm{rank}(\\boldsymbol UV^{T}) \\leq r$, where $\\|\\cdot\\|_2$ denotes the Frobenius norm.\nLet $A$ be an $m \\times n$ matrix; then $U$ is $m \\times r$ and $V$ is $n \\times r$ in dimension, with $1 \\leq r \\ll \\min(m, n)$.\nThis model is not intended to do the full decomposition, or to be used as part of an inverse procedure.\nThis model has been widely used in recommendation systems (e.g., Netflix \\cite{:TheNetflixPrize07}) and feature selection (e.g., image processing \\cite{DBLP:conf/nips/WrightGRPM09}).\n\n\\section{Incremental Gradient Descent}\n\n% Background. Why can we solve the problem with incremental gradient?\n\\subsection{Solving as a Convex Program}\nRecent work \\cite{DBLP:journals/cacm/CandesR12, DBLP:journals/siamrev/RechtFP10} has demonstrated that low-rank matrix factorization can be solved as a convex programming problem.\nThis body of work enables us to solve the problem by using gradient-based line search algorithms.\nAmong these algorithms, the incremental gradient descent algorithm is a popular choice, especially for very large input matrices \\cite{DBLP:conf/sigmod/FengKRR12, DBLP:conf/kdd/GemullaNHS11}.\n\n\\subsection{Formal Description}\n\\begin{algorithm}[lmf-igd$(r, A, \\alpha)$] \\label{alg:lmf-igd}\n\\alginput{Sparse matrix $A$,\n\\\\step size $\\alpha$,\n\\\\low-rank constraint $r$, \n\\\\convergence criterion $\\mathit{Convergence}$,\n\\\\random factor generator $\\mathit{GenerateFactor}$}\n\\algoutput{Factors $U$ ($m \\times r$) and $V$ ($n \\times r$)}\n\\algprecond{$\\mathit{iteration} = 0$}\n\\begin{algorithmic}[1]\n\t\\State $U \\set \\mathit{GenerateFactor}(m, r)$\n\t\\State $V \\set \\mathit{GenerateFactor}(n, r)$\n\t\\Repeat\n\t\t\\State $\\mathit{iteration} \\set \\mathit{iteration} + 1$\n\t\t\\State $U_\\text{old} \\set U$\n\t\t\\State $V_\\text{old} \\set V$\n\t\t\\For{$(i, j, y) \\in A$} \\Comment{Single entry in sparse matrix A}\n\t\t\t\\State $e \\set U_i \\cdot V_j - y$\n\t\t\t\\State $\\mathit{temp} \\set U_i - \\alpha e V_j$\n\t\t\t\\State $V_j \\set V_j - \\alpha e U_i$ \\Comment{In-place update of V}\n\t\t\t\\State $U_i \\set \\mathit{temp}$ \\Comment{In-place update of U}\n\t\t\\EndFor\n\t\\Until{$\\mathit{Convergence}(U_\\text{old}, V_\\text{old}, U, V, \\mathit{iteration})$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{description}\n\t\\item[Runtime] $O(N_{A} (m + n) r + m n r)$ for one iteration,\n        where $N_{A}$ is the number of nonzero elements in $A$.\n\n\t\\item[Space] Store the $\\mathit{temp}$, an $r$-dimensional floating-point 
vector.\n\n\t\\item[Parallelism] The outer loop is inherently sequential.\n        The inner loop is data-parallel, and model averaging \\cite{DBLP:conf/nips/DuchiAW10} is used.\n\n    \\item[Factor initialization] The author of this document is not aware of significant differences caused by initializing the random factors from different distributions.\n        However, zero values should be avoided, and the entries of a factor should not all be initialized to the same value; otherwise, the factors will always be rank 1.\n\n    \\item[Convergence criterion] Usually, the following conditions are combined using AND, OR, or NOT.\n        \\begin{enumerate}\n            \\item The change in the objective (e.g., the RMSE) drops below a given threshold.\n            \\item The value of the objective matches some pre-computed value.\n            \\item The maximum number of iterations has been reached.\n            \\item Further criteria are possible.\n        \\end{enumerate}\n\\end{description}\n", "meta": {"hexsha": "10ad5d931f8b56cd07cfea25ec1a9c31518e2ab4", "size": 4293, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/design/modules/low-rank-matrix-decomposition.tex", "max_stars_repo_name": "fmcquillan99/apache-madlib", "max_stars_repo_head_hexsha": "e2dea62d1eadc7f662f2d926c71f42332f414ca0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-09-18T07:44:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T19:45:18.000Z", "max_issues_repo_path": "doc/design/modules/low-rank-matrix-decomposition.tex", "max_issues_repo_name": "fmcquillan99/apache-madlib", "max_issues_repo_head_hexsha": "e2dea62d1eadc7f662f2d926c71f42332f414ca0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-09-06T05:50:17.000Z", "max_issues_repo_issues_event_max_datetime": "2018-09-06T05:50:17.000Z", "max_forks_repo_path": "doc/design/modules/low-rank-matrix-decomposition.tex", "max_forks_repo_name": "fmcquillan99/apache-madlib", "max_forks_repo_head_hexsha": "e2dea62d1eadc7f662f2d926c71f42332f414ca0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-03T20:50:13.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-03T20:50:13.000Z", "avg_line_length": 53.6625, "max_line_length": 197, "alphanum_fraction": 0.7237363149, "num_tokens": 1231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891348788759, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.5855895533481048}}
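The inner loop of Algorithm~\\ref{alg:lmf-igd} translates directly into code. The following Python/NumPy sketch is illustrative only -- it is not the production implementation, and a fixed iteration count stands in for the general $\\mathit{Convergence}$ predicate:\n\\begin{verbatim}\nimport numpy as np\n\ndef lmf_igd(entries, m, n, r, alpha=0.01, iterations=10, seed=0):\n    # entries: list of (i, j, y) for the known cells of A.\n    rng = np.random.default_rng(seed)\n    # Avoid zeros and identical entries; see 'Factor initialization'.\n    U = rng.uniform(0.1, 1.0, (m, r))\n    V = rng.uniform(0.1, 1.0, (n, r))\n    for _ in range(iterations):\n        for i, j, y in entries:\n            e = U[i] @ V[j] - y\n            temp = U[i] - alpha * e * V[j]\n            V[j] = V[j] - alpha * e * U[i]   # in-place update of V\n            U[i] = temp                      # in-place update of U\n    return U, V\n\n# Toy usage: a 3x3 matrix with five known entries, rank-2 factors.\nentries = [(0, 0, 5.0), (0, 2, 1.0), (1, 1, 3.0), (2, 0, 4.0), (2, 2, 2.0)]\nU, V = lmf_igd(entries, m=3, n=3, r=2, alpha=0.05, iterations=200)\n\\end{verbatim}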
{"text": "%!TEX root = ../thesis.tex\n% ******************************************\n% \t\t\tThesis Appendix B\n% ******************************************\n\n\\chapter{Computational methods} \n\n\\section{Addressing the mean confounding effect \\\\ for differential variability testing}\n\\markboth{Computational methods}{B.1 Addressing the mean confounding effect for differential variability testing}\n\\label{appB.1}\n\n\\subsection{Prior specifications of the extended BASiCS model}\n\n\\begin{align*}\n\\mu_i &\\ind \\mbox{log-Normal}\\left(0, s^2_{\\mu}\\right) \\\\\n\\delta_i| \\mu_i,\\beta,\\sigma^2, \\lambda_i, \\eta &\\ind \\text{log-Normal}\\left( \\text{f}(\\mu_i),\\frac{\\sigma^2}{\\lambda_i} \\right)\\\\\n\\lambda_i|\\eta &\\ind \\text{Gamma}\\left(\\frac{\\eta}{2},\\frac{\\eta}{2}\\right)\\\\\n\\beta|\\sigma^2 & \\sim \\textnormal{Normal}(m_\\beta,\\sigma^2V_\\beta),\\\\\n\\sigma^2 & \\sim  \\textnormal{Inv-Gamma}(a_{\\sigma^2},b_{\\sigma^2}),\\\\\ns_j & \\iid  \\textnormal{Gamma}(a_s,b_s) \\\\\n(\\phi_1, \\ldots, \\phi_n)' & \\sim  n \\times \\textnormal{Dirichlet}(a_\\phi),\\\\\n\\theta & \\sim  \\textnormal{Gamma}(a_\\theta,b_\\theta)\n\\end{align*}\n\n\\newpage\n\n\\subsection{Starting values for hyper-parameters}\n\\label{appB.1.hyper}\n\n\\begin{align*}\nm_{\\beta} & = \\mathbf{0}_L \\hspace{0.2cm} \\text{(an $L$-dimensional vector of zeroes)}\\\\\nV_{\\beta} & = \\mathbf{I}_L \\hspace{0.2cm} \\text{(an $L$-dimensional identity matrix)}\\\\\na_{\\sigma^2} & = 2\\\\\nb_{\\sigma^2} & = 2\\\\\ns^2_{\\mu}& =  0.5\\\\\na_s& = 1\\\\\nb_s& = 1\\\\\na_\\phi& =  \\mathbf{1}_n\\\\\na_\\theta& = 1\\\\\nb_\\theta& = 1\n\\end{align*}\n\n\\subsection{Likelihood of the extended BASiCS model}\n\nThe likelihood function of the extended BASiCS model takes the form:\n\n\\begin{align} \n\\Lagr = & \\left[\\prod_{i=1}^{q_0}\\prod_{j=1}^n\\frac{\\Gamma(x_{ij}+\\frac{1}{\\delta_i})}{\\Gamma(\\frac{1}{\\delta_i})x_{ij}!}\\left(\\frac{\\frac{1}{\\delta_i}}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^\\frac{1}{\\delta_i}\\left(\\frac{\\phi_j\\nu_j\\mu_i}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^{x_{ij}}\\right] \\nonumber\\\\ \n&\\times\\left[\\prod_{i=q_0+1}^{q}\\prod_{j=1}^n\\frac{(\\nu_j\\mu_i)^{x_{ij}}}{x_{ij}!}\\exp\\lbrace-\\nu_j\\mu_i\\rbrace\\right]\\times{}\\left[\\prod_{j=1}^n\\frac{(s_j\\theta)^{-\\frac{1}{\\theta}}}{\\Gamma(\\frac{1}{\\theta})}\\nu_j^{\\frac{1}{\\theta}-1}\\exp\\left\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\right\\rbrace\\right].\n\\end{align} \n\n\\subsection{Derivation of full conditionals for the extended BASiCS model}\n\\label{appB.1.derivation}\n\nTo calculate the full conditionals ($\\pi^*(\\cdot)$) for Gibbs sampling, the likelihood ($\\Lagr_j$ for cell-specific likelihood, $\\Lagr_i$ for gene-specific likelihood) is multiplied by the relevant prior specifications ($\\pi(\\cdot)$). \n$q_0$ indicates the number of biological genes while $q$ is the number of biological and spike-in genes. \n$L$ is the number of Gaussian Radial Basis Functions. 
$\\Lambda$ is a diagonal matrix with elements $(\\lambda_1, \\ldots, \\lambda_{q_0})$ and $Y = (\\log(\\delta_1), \\ldots, \\log(\\delta_{q_0}))'$.\n\n\\newpage \n\n\\subsubsection{Helper functions}\n\nFor simplicity, the distributions of the joint prior specification, the product of this distribution across all biological genes $q_0$, and the multivariate Normal distribution for the prior on $\\beta$ take the form:\n\n\\begin{align*}\n\\text{log-Normal}(\\text{f}(\\mu_i),\\frac{\\sigma^2}{\\lambda_i})&\\propto{}(\\frac{\\lambda_i}{\\sigma^2})^{\\frac{1}{2}}\\exp\\lbrace-\\frac{\\lambda_i}{2\\sigma^2}(\\log(\\delta_i)-x_{i,\\ast}^T\\beta)^2\\rbrace\\\\\n\\prod_{i=1}^{q_0}\\text{log-Normal}(\\text{f}(\\mu_i),\\frac{\\sigma^2}{\\lambda_i})&\\propto{}(\\frac{1}{\\sigma^2})^{\\frac{q_0}{2}}(\\prod_{i=1}^{q_0}\\lambda_i)^{\\frac{1}{2}}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[(Y-X\\beta)^T\\Lambda(Y-X\\beta)]\\rbrace\\\\\n\\textnormal{Normal}(m_\\beta,\\sigma^2V_\\beta)&\\propto{}(\\frac{1}{\\sigma^2})^{\\frac{L+2}{2}}\\exp\\lbrace-\\frac{1}{2\\sigma^2}(\\beta-m_\\beta)^TV_\\beta^{-1}(\\beta-m_\\beta)\\rbrace\\\\\n\\end{align*}\n\nHere, $x_{i,\\ast}^T$ is the transposed vector of the $i$th row in the model matrix $X$.\n\n\\subsubsection{Full conditionals}\n\nFull conditional for $\\mu_i$ across all cells:\n\\begin{fleqn}\n\\begin{align*}\n&\\pi^*(\\mu_i|\\cdot)&&\\propto{} \\Lagr_i\\times\\pi(\\mu_i)\\times\\pi(\\delta_i|\\mu_i,\\beta,\\sigma^2,\\eta)\\\\\n& &&\\propto{}\\left[\\prod_{j=1}^n\\left(\\frac{1}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^\\frac{1}{\\delta_i}\\left(\\frac{\\mu_i}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^{x_{ij}}\\right]\\times\\exp\\left(-\\frac{(\\log(\\mu_i)-0)^2}{2a_\\mu^2}\\right)\\frac{1}{\\mu_i}\\\\\n& &&\\times\\exp\\left\\lbrace-\\frac{\\lambda_i}{2\\sigma^2}(\\log(\\delta_i)-f(\\mu_i))^2\\right\\rbrace\\\\\n& &&\\propto\\left[\\prod_{j=1}^n{}\\frac{(\\mu_i)^{x_{ij}}}{(\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}+x_{ij}}}\\right]\\times{}\\exp\\left(-\\frac{(\\log(\\mu_i))^2}{2a_\\mu^2}-\\frac{\\lambda_i(\\log(\\delta_i)-f(\\mu_i))^2}{2\\sigma^2}\\right)\\frac{1}{\\mu_i}\\\\\n& &&\\propto{}\\frac{\\mu_i^{\\sum_{j=1}^n{}x_{ij}}}{\\prod_{j=1}^n{}(\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}+x_{ij}}}\\times{}\\exp\\left(-\\frac{(\\log(\\mu_i))^2}{2a_\\mu^2}-\\frac{\\lambda_i(\\log(\\delta_i)-f(\\mu_i))^2}{2\\sigma^2}\\right)\\frac{1}{\\mu_i}\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\delta_i$ across all cells:\n\\begin{fleqn}\n\\begin{align*}\n&\\pi^*(\\delta_i|\\cdot)&&\\propto{} \\Lagr_i\\times\\pi(\\delta_i|\\mu_i,\\beta,\\sigma^2,\\eta)\\\\\n& &&\\propto{}\\left[\\prod_{j=1}^n\\frac{\\Gamma(x_{ij}+\\frac{1}{\\delta_i})}{\\Gamma(\\frac{1}{\\delta_i})}\\left(\\frac{\\frac{1}{\\delta_i}}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^\\frac{1}{\\delta_i}\\left(\\frac{1}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^{x_{ij}}\\right]\\\\\n& &&\\times\\exp\\left\\lbrace-\\frac{\\lambda_i(\\log(\\delta_i)-f(\\mu_i))^2}{2\\sigma^2}\\right\\rbrace\\frac{1}{\\delta_i}\\\\\n& &&\\propto\\left[\\prod_{j=1}^n\\frac{\\Gamma(x_{ij}+\\frac{1}{\\delta_i})}{\\Gamma(\\frac{1}{\\delta_i})}\\frac{(\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}}}{(\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}+x_{ij}}}\\right]\\times{}\\exp\\left\\lbrace-\\frac{\\lambda_i(\\log(\\delta_i)-f(\\mu_i))^2}{2\\sigma^2}\\right\\rbrace\\frac{1}{\\delta_i}\\\\\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\beta$ across all 
cells and genes:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\beta|\\cdot)&\\propto{} \\Lagr\\times\\pi(\\delta|\\mu,\\beta,\\sigma^2,\\eta)\\times\\pi(\\beta)\\\\\n&\\propto{}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[(Y-X\\beta)'\\Lambda(Y-X\\beta)]\\rbrace\\times\\\\\n&\\exp\\lbrace-\\frac{1}{2\\sigma^2}(\\beta-m_\\beta)'V_\\beta^{-1}(\\beta-m_\\beta)\\rbrace\\\\\n&\\propto{}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[(Y-X\\beta)'\\Lambda(Y-X\\beta)+(\\beta-m_\\beta)'V_\\beta^{-1}(\\beta-m_\\beta)]\\rbrace\\\\\n&\\propto{}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[Y'\\Lambda{}Y-2(X\\beta)'\\Lambda{}Y+(X\\beta)'\\Lambda{}X\\beta\\\\\n&+\\beta'V_\\beta^{-1}\\beta-2m_\\beta'V_\\beta^{-1}\\beta+m_\\beta'V_\\beta^{-1}m_\\beta{}]\\rbrace\\\\\n&\\propto{}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[\\beta'X'\\Lambda{}X\\beta+\\beta'V_\\beta^{-1}\\beta-2X'\\Lambda{}Y\\beta-2V_\\beta^{-1}m_\\beta\\beta]\\rbrace\\\\\n&\\propto{}\\exp\\lbrace-\\frac{1}{2\\sigma^2}[\\beta'(X'\\Lambda{}X+V_\\beta^{-1})\\beta-2(X'\\Lambda{}Y+V_\\beta^{-1}m_\\beta)\\beta]\\rbrace\\\\\n&\\propto{}N(m^*_\\beta,\\sigma^2V^*_\\beta)\n\\end{align*}\n\\end{fleqn} With\n\\begin{fleqn}\n\\begin{align*}\nV^*_\\beta&=(X'\\Lambda{}X+V_\\beta^{-1})^{-1}\\\\\nm^*_\\beta&=(X'\\Lambda{}X+V_\\beta^{-1})^{-1}(X'\\Lambda{}Y+V_\\beta^{-1}m_\\beta{})\n\\end{align*}\n\nFull conditional for $\\lambda_i$ across all cells:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\lambda_i|\\cdot)&\\propto{}\\Lagr_i\\times\\pi(\\delta_i|\\mu,\\beta,\\sigma^2,\\eta)\\times\\pi(\\lambda_i)\\\\\n&\\propto{}\\lambda_i^{1/2}\\exp\\lbrace-\\frac{\\lambda_i}{2\\sigma^2}(\\log(\\delta_i)-f(\\mu_i))^2\\rbrace\\cdot\\lambda_i{}^{\\frac{\\eta}{2}-1}\\exp(-\\lambda_i{}\\frac{\\eta}{2})\\\\\n&\\propto{}\\lambda_i^{\\frac{\\eta+1}{2}-1}\\exp\\lbrace-\\frac{\\lambda_i}{2}(\\eta+\\frac{1}{\\sigma^2}(\\log(\\delta_i)-f(\\mu_i))^2)\\rbrace\\\\\n&\\propto{}\\textnormal{Gamma}(a^*_\\lambda,b^*_\\lambda)\n\\end{align*}\n\\end{fleqn}\nWith\n\\begin{align*}\na^*_\\lambda&=\\frac{\\eta+1}{2}\\\\\nb^*_\\lambda&=\\frac{1}{2}\\left[\\frac{1}{\\sigma^2}(\\log(\\delta_i)-f(\\mu_i))^2+\\eta\\right]\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\sigma^2$ across all cells and genes:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\sigma^2|\\cdot)&\\propto{}\\Lagr\\times\\pi(\\delta|\\mu,\\beta,\\sigma^2,\\eta)\\times\\pi(\\sigma^2)\\\\\n&\\propto{}\\left(\\frac{1}{\\sigma^2}\\right)^{\\frac{q_0}{2}}\\exp\\lbrace-\\frac{1}{2\\sigma^2}(Y-X\\beta)'\\Lambda(Y-X\\beta)\\rbrace\\\\\n&\\cdot\\left(\\frac{1}{\\sigma^2}\\right)^{\\frac{L+2}{2}}\\exp\\lbrace-\\frac{1}{2\\sigma^2}(\\beta-m_\\beta{})'V_\\beta^{-1}(\\beta-m_\\beta{})\\rbrace\\\\\n&\\cdot\\left(\\frac{1}{\\sigma^2}\\right)^{a_{\\sigma^2}+1}\\exp\\lbrace-\\frac{b_{\\sigma^2}}{\\sigma^2}\\rbrace\\\\\n&\\propto{}\\left(\\frac{1}{\\sigma^2}\\right)^{\\frac{q_0+L+2}{2}+a_{\\sigma^2}+1}\\exp\\lbrace-\\frac{1}{\\sigma^2}[b_{\\sigma^2}+\\frac{1}{2}[(Y-X\\beta)'\\Lambda(Y-X\\beta) \\\\\n&+ (\\beta-m_\\beta{})'V_\\beta^{-1}(\\beta-m_\\beta{})]]\\rbrace\n\\end{align*}\n\\end{fleqn} \nAfter completing the 
squares\n\\begin{fleqn}\n\\begin{align*}\n&\\propto{}\\left(\\frac{1}{\\sigma^2}\\right)^{\\frac{q_0+L+2}{2}+a_{\\sigma^2}+1}\\exp\\lbrace-\\frac{1}{\\sigma^2}[b_{\\sigma^2}+\\frac{1}{2}(Y'\\Lambda{}Y+m_\\beta{}'V_\\beta{}^{-1}m_\\beta{}\\\\\n&+(\\beta-m^*_{\\beta})'(V^*_{\\beta})^{-1}(\\beta-m^*_{\\beta})-(m^*_{\\beta})'(V^*_{\\beta})^{-1}m^*_{\\beta})]\\rbrace\\\\\n&\\propto{}(\\frac{1}{\\sigma^2})^{a_{n,\\sigma^2}+1}\\exp(-\\frac{b^*_{\\sigma^2}}{\\sigma^2})\\\\\n&\\propto{}\\textnormal{Inv-Gamma}(a^*_{\\sigma^2},b^*_{\\sigma^2})\n\\end{align*}\n\\end{fleqn}\nWith\n\\begin{fleqn}\n\\begin{align*}\na^*_{\\sigma^2}&=\\frac{q_0+L+2}{2}+a_{\\sigma^2}\\\\\nb^*_{\\sigma^2}&=b_{\\sigma^2}+\\frac{1}{2}(Y'\\Lambda{}Y+m_\\beta'V_\\beta^{-1}m_\\beta+(\\beta-m^*_{\\beta})'(V^*_{\\beta})^{-1}(\\beta-m^*_{\\beta})-(m^*_{\\beta})'(V^*_{\\beta})^{-1}m^*_{\\beta})\\\\\n&\\equiv{}b_{\\sigma^2}+\\frac{1}{2}(Y'\\Lambda Y+ m'_{\\beta} V_\\beta^{-1} m_\\beta + \\beta' (V^*_{\\beta})^{-1} \\beta - 2 \\beta' (V^*_{\\beta})^{-1} m^*_{\\beta} )\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $s_j$ across all genes:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(s_j|\\cdot)&\\propto{}\\Lagr_j\\times\\pi(s_j)\\\\\n&\\propto{}s_j{}^{a_s-1}\\exp\\lbrace-b_ss_j\\rbrace{}s_j{}^{-\\frac{1}{\\theta}}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\rbrace\\\\\n&\\propto{}s_j{}^{a_s-\\frac{1}{\\theta}-1}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}-b_ss_j\\rbrace\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\phi$ across all genes and cells:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\phi_j|\\cdot)&\\propto{}\\Lagr_j\\times\\pi(\\phi_j)\\\\\n&\\propto{}\\prod_{i=1}^{q_0}\\prod_{j=1}^{n}\\left(\\frac{1}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^\\frac{1}{\\delta_i}\\left(\\frac{\\phi_j}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^{x_{ij}}\\times{}\\pi(\\phi_j)\\\\\n&\\propto{}\\frac{\\prod_{i=1}^{q_0}\\prod_{j=1}^{n}\\phi_j{}^{x_{ij}}}{\\prod_{i=1}^{q_0}\\prod_{j=1}^{n}(\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}+x_{ij}}}\\times{}\\pi(\\phi_j)\\\\\n&\\propto{}\\frac{\\prod_{i=1}^{q_0}\\phi_j{}^{\\sum_{j=1}^nx_{ij}}}{\\prod_{i=1}^{q_0}\\prod_{j=1}^{n}(\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i})^{\\frac{1}{\\delta_i}+x_{ij}}}\\times{}\\pi(\\phi_j)\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\nu_j$ across all genes:\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\nu_j|\\cdot)&\\propto{}\\Lagr_j\\times\\pi(\\nu_j)\\\\\n&\\propto{}\\left[\\prod_{i=1}^{q_0}\\left(\\frac{1}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^\\frac{1}{\\delta_i}\\left(\\frac{\\nu_j}{\\phi_j\\nu_j\\mu_i+\\frac{1}{\\delta_i}}\\right)^{x_{ij}}\\right]\\left[\\prod_{i=q_0+1}^{q}\\nu_j{}^{x_{ij}}\\exp\\lbrace-\\nu_j\\mu_i\\rbrace\\right]\\\\\n&\\times{}\\nu_j^{\\frac{1}{\\theta}-1}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\rbrace\n\\end{align*}\n\\end{fleqn}\n\nFull conditional for $\\theta$ across all genes and 
cells:\n\n\\begin{fleqn}\n\\begin{align*}\n\\pi^*(\\theta|\\cdot)&\\propto{}\\Lagr\\times\\pi(\\theta)\\\\\n&\\propto{}\\left[\\prod_{j=1}^{n}\\frac{(s_j\\theta)^{-\\frac{1}{\\theta}}}{\\Gamma(\\frac{1}{\\theta})}\\nu_j^{\\frac{1}{\\theta}-1}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\rbrace\\right]\\times{}\\theta^{a_\\theta-1}\\exp\\lbrace-b_\\theta\\theta\\rbrace\\\\\n&\\propto{}\\left[\\prod_{j=1}^{n}\\frac{(s_j\\theta)^{-\\frac{1}{\\theta}}}{\\Gamma(\\frac{1}{\\theta})}\\frac{1}{\\nu_j}^{-\\frac{1}{\\theta}}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\rbrace\\right]\\times\\theta^{a_\\theta-1}\\exp\\lbrace-b_\\theta\\theta\\rbrace\\\\\n&\\propto{}\\left[\\prod_{j=1}^{n}\\frac{\\frac{s_j}{\\nu_j}^{-\\frac{1}{\\theta}}}{\\Gamma(\\frac{1}{\\theta})}\\theta^{-\\frac{1}{\\theta}}\\exp\\lbrace-\\frac{\\nu_j}{s_j\\theta}\\rbrace\\right]\\times{}\\theta^{a_\\theta-1}\\exp\\lbrace-b_\\theta\\theta\\rbrace\\\\\n&\\propto{}\\frac{\\left(\\prod_{j=1}^{n}\\frac{s_j}{\\nu_j}\\right)^{-\\frac{1}{\\theta}}}{\\Gamma{}^n(\\frac{1}{\\theta})}\\theta^{-\\frac{n}{\\theta}}\\exp\\lbrace-\\frac{1}{\\theta}\\sum_{j=1}^n\\frac{\\nu_j}{s_j}\\rbrace\\theta^{a_\\theta-1}\\exp\\lbrace-b_\\theta\\theta\\rbrace\\\\\n&\\propto{}\\frac{\\left(\\prod_{j=1}^{n}\\frac{s_j}{\\nu_j}\\right)^{-\\frac{1}{\\theta}}}{\\Gamma{}^n(\\frac{1}{\\theta})}\\theta^{a_\\theta-\\frac{n}{\\theta}-1}\\exp\\lbrace-\\frac{1}{\\theta}\\sum_{j=1}^n\\frac{\\nu_j}{s_j}-b_\\theta\\theta\\rbrace\n\\end{align*}\n\\end{fleqn}", "meta": {"hexsha": "3446f019803acdd3347956ba1a3427b566b45c83", "size": 12596, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendix/appendixB.tex", "max_stars_repo_name": "nilseling/Thesis", "max_stars_repo_head_hexsha": "20bf4e22748cd4649bedcf91a6fb39caf07d1053", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-03-15T19:34:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T09:18:54.000Z", "max_issues_repo_path": "Appendix/appendixB.tex", "max_issues_repo_name": "nilseling/Thesis", "max_issues_repo_head_hexsha": "20bf4e22748cd4649bedcf91a6fb39caf07d1053", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendix/appendixB.tex", "max_forks_repo_name": "nilseling/Thesis", "max_forks_repo_head_hexsha": "20bf4e22748cd4649bedcf91a6fb39caf07d1053", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-04-22T16:28:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-07T18:32:52.000Z", "avg_line_length": 62.98, "max_line_length": 332, "alphanum_fraction": 0.639329946, "num_tokens": 5346, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.585589544038677}}
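To illustrate how the conjugate conditionals above are used inside a Gibbs sampler, here is a minimal Python/NumPy sketch of the two closed-form steps (names are ad hoc; a full sampler would embed these in the complete update sweep and use Metropolis steps for the non-conjugate conditionals such as those of $\\mu_i$, $\\delta_i$ and $\\theta$):\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_lambda_i(log_delta_i, f_mu_i, sigma2, eta, rng):\n    # lambda_i | . ~ Gamma(a*, b*) with a* = (eta + 1)/2 and\n    # b* = [ (log(delta_i) - f(mu_i))^2 / sigma^2 + eta ] / 2\n    a_star = (eta + 1.0) / 2.0\n    b_star = 0.5 * ((log_delta_i - f_mu_i) ** 2 / sigma2 + eta)\n    # NumPy parameterizes the Gamma by shape and scale = 1/rate.\n    return rng.gamma(shape=a_star, scale=1.0 / b_star)\n\ndef sample_beta(X, Y, Lam, m_beta, V_beta_inv, sigma2, rng):\n    # beta | . ~ N(m*, sigma^2 V*) with V* = (X' Lam X + V_beta^{-1})^{-1}\n    # and m* = V* (X' Lam Y + V_beta^{-1} m_beta).\n    V_star = np.linalg.inv(X.T @ Lam @ X + V_beta_inv)\n    m_star = V_star @ (X.T @ Lam @ Y + V_beta_inv @ m_beta)\n    return rng.multivariate_normal(m_star, sigma2 * V_star)\n\\end{verbatim}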
{"text": "\\chapter{Suffix structures}\n\n\\section{Suffix array}\n\nThere are several algorithms for Suffix Array construction. The most prominent is DC3 (Difference Cover, size 3), which is an $O(n)$ divide-and-conquer algorithm.\n\nFor $T=abaabababbabbb$ calculate Suffix Array (\\textit{SA}) and Longest Common Prefix (\\textit{LCP}).\n\n\\begin{table}[h]\n  \\footnotesize\n\t\\begin{tabular}{c|cccccccccccccccc}\n\t\t        $i$ \t\t& -1 & 0   & 1   & 2   & 3   & 4   & 5   & 6   & 7   & 8   & 9   & 10  & 11  & 12  & 13  & 14 \\\\ \n\t\t\\hline $T[i]$ \t& \\# & $a$ & $b$ & $a$ & $a$ & $b$ & $a$ & $b$ & $a$ & $b$ & $b$ & $a$ & $b$ & $b$ & $b$ & \\$ \\\\ \n\t\t\\hline $SA[i]$ \t& 14 & 2   & 0   & 3   & 5   & 7   & 10  & 13  & 1   & 4   & 6   & 9   & 12  & 8   & 11  & (-1) \\\\ \n\t\t\\hline $LCP[i]$ & 0  & 0   & 1   & 3   & 4   & 2   & 3   & 0   & 1   & 2   & 3   & 4   & 1   & 2   & 2   & 0 \\\\ \n\t\\end{tabular} \n  \\caption{Suffix Array for $T=abaabababbabbb$.}\n\\end{table}\n\nThe Suffix Array can be constructed through simple sorting by adding padding characters \\# and \\$, where \\$ $\\leq a \\leq$ \\# for every $a \\in \\Sigma$:\\\\\n$T_{padded}=$\\#$abaabababbabbb\\$$.\n\nSuffixes can then be easily sorted; see Table~\\ref{sortingsuffixes}.\n\n\\begin{table}[h]\n  \\footnotesize\n\t\\begin{tabular}{c|l}\n\t\ti & unsorted \\\\ \\hline\n\t\t-1 & \\#abaabababbabbb\\$ \\\\\n\t\t0 & abaabababbabbb\\$ \\\\\n\t\t1 & baabababbabbb\\$ \\\\\n\t\t2 & aabababbabbb\\$ \\\\\n\t\t3 & abababbabbb\\$ \\\\\n\t\t4 & bababbabbb\\$ \\\\\n\t\t5 & ababbabbb\\$ \\\\\n\t\t6 & babbabbb\\$ \\\\\n\t\t7 & abbabbb\\$ \\\\\n\t\t8 & bbabbb\\$ \\\\\n\t\t9 & babbb\\$ \\\\\n\t\t10 & abbb\\$ \\\\\n\t\t11 & bbb\\$ \\\\\n\t\t12 & bb\\$ \\\\\n\t\t13 & b\\$ \\\\\n\t\t14 & \\$ \\\\\n\t\\end{tabular} \n  \\hspace*{1cm}\n\t\\begin{tabular}{c|l}\n\t\tSA[i] & sorted \\\\ \\hline\n\t\t14 & \\$ \\\\\n\t\t2 & aabababbabbb\\$ \\\\\n\t\t0 & abaabababbabbb\\$ \\\\\n\t\t3 & abababbabbb\\$ \\\\\n\t\t5 & ababbabbb\\$ \\\\\n\t\t7 & abbabbb\\$ \\\\\n\t\t10 & abbb\\$ \\\\\n\t\t13 & b\\$ \\\\\n\t\t1 & baabababbabbb\\$ \\\\\n\t\t4 & bababbabbb\\$ \\\\\n\t\t6 & babbabbb\\$ \\\\\n\t\t9 & babbb\\$ \\\\\n\t\t12 & bb\\$ \\\\\n\t\t8 & bbabbb\\$ \\\\\n\t\t11 & bbb\\$ \\\\\n\t\t-1 & \\#abaabababbabbb\\$ \\\\\n\t\\end{tabular} \n\t\\caption{Sorting suffixes of $T_{padded}=$\\#$abaabababbabbb\\$$.}\n\t\\label{sortingsuffixes}\n\\end{table}\n\nFor $P=abaa$ find all occurrences using \\textit{SA Simple Search}.\n\n\\begin{enumerate}\n\t\\item \\begin{itemize}\n\t\t\t\\item $d = -1$, $f = n = 14$\n\t\t\\end{itemize}\n\t\\item \\begin{itemize}\n\t\t\\item $i \\leftarrow (-1 + 14)/2 = 6$\n\t\t\\item $L_6 = T[SA[6] .. 13] = T[13 .. 13] = b$\n\t\t\\item $l \\leftarrow lcp(x, L_6) = 0$\n\t\t\\item $L_6[0] > x[0] \\Rightarrow b > a \\Rightarrow f \\leftarrow i = 6$\n\t\\end{itemize}\n\t\\item \\begin{itemize}\n\t\t\\item $i \\leftarrow (-1 + 6)/2 = 2$\n\t\t\\item $L_2 = T[SA[2] .. 13] = T[3 .. 13] = abababbabbb$\n\t\t\\item $l \\leftarrow lcp(x, L_2) = 3$\n\t\t\\item $L_2[3] > x[3] \\Rightarrow b > a \\Rightarrow f \\leftarrow i = 2$\n\t\\end{itemize}\n\\item \\begin{itemize}\n\t\t\\item $i \\leftarrow (-1 + 2)/2 = 0$\n\t\t\\item $L_0 = T[SA[0] .. 13] = T[2 .. 13] = aabababbabbb$\n\t\t\\item $l \\leftarrow lcp(x, L_0) = 1$\n\t\t\\item $L_0[1] < x[1] \\Rightarrow a < b \\Rightarrow d \\leftarrow i = 0$\n\t\\end{itemize}\n\\item \\begin{itemize}\n\t\t\\item $i \\leftarrow (0 + 2)/2 = 1$\n\t\t\\item $L_1 = T[SA[1] .. 13] = T[0 .. 
13] = abaabababbabbb$\n\t\t\\item $l \\leftarrow lcp(x, L_1) = 4$\n\t\t\\item $l = m \\wedge l \\neq |L_1| \\Rightarrow f \\leftarrow i = 1$\n\t\\end{itemize}\n\\item \\begin{itemize}\n\t\t\\item $d+1 \\not< f \\Rightarrow (d, f) = (0, 1)$\n\t\\end{itemize}\n\\end{enumerate}\n\n\\section{Suffix trie}\n\nFor $T=ababbab$ construct Suffix Trie.\n\n\\tikzstyle{a} = [above]\n\\tikzstyle{c} = [circle, draw, minimum width=0.4cm]\n\\tikzstyle{e} = [circle, draw, minimum width=0.4cm, accepting]\n\\tikzstyle{level 1} = [level distance=1cm, sibling distance=2cm]\n\\tikzstyle{level 2} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 3} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 4} = [level distance=1cm, sibling distance=1cm]\n\\begin{tikzpicture}[grow=right]\n\t\\node[c] {}\n\tchild {\n\t\tnode[c] {}\n\t\tchild {\n\t\t\tnode[e] {}\n\t\t\tchild {\n\t\t\t\tnode[c] {}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[c] {}\n\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\tnode[c] {}\n\t\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\t\tnode[e] {}\n\t\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$a$}\n\t\t\t}\n\t\t\tchild {\n\t\t\t\tnode[c] {}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[e] {}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$b$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$b$}\n\t\t}\n\t\tedge from parent\n\t\tnode[a] {$a$}\n\t}\n\tchild {\n\t\tnode[e] {}\n\t\tchild {\n\t\t\tnode[c] {}\n\t\t\tchild {\n\t\t\t\tnode[e] {}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[c] {}\n\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\tnode[e] {}\n\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$b$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$a$}\n\t\t}\n\t\tchild {\n\t\t\tnode[c] {}\n\t\t\tchild {\n\t\t\t\tnode[c] {}\n\t\t\t\tchild {\n\t\t\t\t\tnode[e] {}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$a$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$b$}\n\t\t}\n\t\tedge from parent\n\t\tnode[a] {$b$}\n\t};\n\\end{tikzpicture}\n\n\n\\section{Suffix tree}\n\nFor $T=ababbab$ construct Suffix Tree.\n\n\\tikzstyle{a} = [above]\n\\tikzstyle{c} = [circle, draw, minimum width=0.4cm]\n\\tikzstyle{e} = [circle, draw, minimum width=0.4cm, accepting]\n\\tikzstyle{level 1} = [level distance=1cm, sibling distance=2cm]\n\\tikzstyle{level 2} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 3} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 4} = [level distance=1cm, sibling distance=1cm]\n\\begin{tikzpicture}[grow=right]\n\t\\node[c] {}\n\tchild {\n\t\tnode[e] {}\n\t\tchild {\n\t\t\tnode[e] {}\n\t\t\tedge from parent\n\t\t\tnode[above right] {$abbab$}\n\t\t}\n\t\tchild {\n\t\t\tnode[e] {}\n\t\t\tedge from parent\n\t\t\tnode[a] {$bab$}\n\t\t}\n\t\tedge from parent\n\t\tnode[above right] {$ab$}\n\t}\n\tchild {\n\t\tnode[e] 
{}\n\t\tchild {\n\t\t\tnode[e] {}\n\t\t\tchild {\n\t\t\t\tnode[e] {}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$bab$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[above right] {$ab$}\n\t\t}\n\t\tchild {\n\t\t\tnode[e] {}\n\t\t\tedge from parent\n\t\t\tnode[a] {$bab$}\n\t\t}\n\t\tedge from parent\n\t\tnode[a] {$b$}\n\t};\n\\end{tikzpicture}\n\n\\section{Suffix automaton}\n\nFor $T=ababbab$ construct Suffix Automaton.\n\n\\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto]\n\t\\tikzstyle{every state} = [circle,draw,minimum size=0.4cm]\n\t\\node[initial,state]\t(a)\t\t\t\t\t{};\n\t\\node[state]\t\t\t(b) [right of=a]\t{};\n\t\\node[state,accepting]\t(c) [right of=b]\t{};\n\t\\node[state]\t\t\t(d) [right of=c]\t{};\n\t\\node[state]\t\t\t(e) [right of=d]\t{};\n\t\\node[state]\t\t\t(f) [right of=e]\t{};\n\t\\node[state]\t\t\t(g) [right of=f]\t{};\n\t\\node[state,accepting]\t(h) [right of=g]\t{};\n\t\\node[state,accepting]\t\t\t(i) [below of=c]\t{};\n\t\\node[state]\t\t\t(j) [below of=d]\t{};\n\t\\node[state,accepting]\t(k) [below of=e]\t{};\n\t\\path\t(a) edge node {$a$} (b)\n\t\t\t\tedge [below] node {$b$} (i)\n\t\t\t(b) edge node {$b$} (c)\n\t\t\t(c) edge node {$a$} (d)\n\t\t\t\tedge [bend left] node {$b$} (f)\n\t\t\t(d) edge node {$b$} (e)\n\t\t\t(e) edge node {$b$} (f)\n\t\t\t(f) edge node {$a$} (g)\n\t\t\t(g) edge node {$b$} (h)\n\t\t\t(i) edge [below] node {$b$} (f)\n\t\t\t\tedge [below] node {$a$} (j)\n\t\t\t(j) edge [below] node {$b$} (k)\n\t\t\t(k) edge [below] node {$b$} (h);\n\\end{tikzpicture}\n\n\\section{Compressed Suffix automaton}\n\nFor $T=ababbab$ construct Compressed Suffix Automaton.\n\n\\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto]\n\t\\tikzstyle{every state} = [circle,draw,minimum size=0.4cm]\n\t\\node[initial,state]\t(a)\t\t\t\t\t{};\n\t\\node[state,accepting]\t(c) [right of=a]\t{};\n\t\\node[state]\t\t\t(f) [right of=c]\t{};\n\t\\node[state,accepting]\t(h) [right of=f]\t{};\n\t\\node[state,accepting]\t\t\t(i) [below of=c]\t{};\n\t\\node[state,accepting]\t(k) [right of=i]\t{};\n\t\\path\t(a) edge node {$ab$} (c)\n\t\t\t\tedge node {$b$} (i)\n\t\t\t(c) edge node {$abb$} (f)\n\t\t\t\tedge [bend left] node {$b$} (f)\n\t\t\t(f) edge node {$ab$} (h)\n\t\t\t(i) edge node {$b$} (f)\n\t\t\t\tedge node {$ab$} (k)\n\t\t\t(k) edge node {$b$} (h);\n\\end{tikzpicture}\n\n\\clearpage\n\\section{Suffix Automaton Construction from Suffix Trie}\n\nFor $T=ababbab$ construct Suffix Automaton from its Suffix Trie.\n\nStates of the Suffix Trie are labeled as if it were an automaton.\n\n\\tikzstyle{a} = [above]\n\\tikzstyle{c} = [circle, draw, minimum width=0.4cm]\n\\tikzstyle{e} = [circle, draw, minimum width=0.4cm, accepting]\n\\tikzstyle{level 1} = [level distance=1cm, sibling distance=2cm]\n\\tikzstyle{level 2} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 3} = [level distance=1cm, sibling distance=1cm]\n\\tikzstyle{level 4} = [level distance=1cm, sibling distance=1cm]\n\\begin{tikzpicture}[grow=right]\n\t\\node[c] {$0$}\n\tchild {\n\t\tnode[c] {$10$}\n\t\tchild {\n\t\t\tnode[e] {$11$}\n\t\t\tchild {\n\t\t\t\tnode[c] {$15$}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {$16$}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[c] {$17$}\n\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\tnode[c] {$18$}\n\t\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\t\tnode[e] {$19$}\n\t\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from 
parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$a$}\n\t\t\t}\n\t\t\tchild {\n\t\t\t\tnode[c] {$12$}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {$13$}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[e] {$14$}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$b$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$b$}\n\t\t}\n\t\tedge from parent\n\t\tnode[a] {$a$}\n\t}\n\tchild {\n\t\tnode[e] {$1$}\n\t\tchild {\n\t\t\tnode[c] {$5$}\n\t\t\tchild {\n\t\t\t\tnode[e] {$6$}\n\t\t\t\tchild {\n\t\t\t\t\tnode[c] {$7$}\n\t\t\t\t\tchild {\n\t\t\t\t\t\tnode[c] {$8$}\n\t\t\t\t\t\tchild {\n\t\t\t\t\t\t\tnode[e] {$9$}\n\t\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tedge from parent\n\t\t\t\t\t\tnode[a] {$a$}\n\t\t\t\t\t}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$b$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$a$}\n\t\t}\n\t\tchild {\n\t\t\tnode[c] {$2$}\n\t\t\tchild {\n\t\t\t\tnode[c] {$3$}\n\t\t\t\tchild {\n\t\t\t\t\tnode[e] {$4$}\n\t\t\t\t\tedge from parent\n\t\t\t\t\tnode[a] {$b$}\n\t\t\t\t}\n\t\t\t\tedge from parent\n\t\t\t\tnode[a] {$a$}\n\t\t\t}\n\t\t\tedge from parent\n\t\t\tnode[a] {$b$}\n\t\t}\n\t\tedge from parent\n\t\tnode[a] {$b$}\n\t};\n\\end{tikzpicture}\n\nThe transition function is analyzed iteratively. In each iteration the states are put into groups based on their similarity. In the initial iteration $g_0$, states are split into two groups: final states and non-final states. The groups are labeled by the first state in the group for convenience. In the following iterations $g_i$, $i \\geq 1$, states are split into groups based on the previous group and the groups to which transitions for all symbols of the alphabet lead.\n\n\\begin{table}[h]\n  \\footnotesize\n\t\\begin{tabular}{|c||c|c||c||c|c||c||c|c||c||c|c||c||c|c||c||}\n\t\t\\hline\n\t\t$\\delta$ & a & b & $g_0$ & a & b & $g_1$ & a & b & $g_2$ & a & b & $g_3$ & a & b & $g_4$\\\\ \\hline\n\t\t0  & 10 & 1  & 0 & 0 & 1 & 0 & 0 & 1 & 0  & 0  & 1  & 0 & 0  & 1  & 0 \\\\ \n\t\t1  & 5  & 2  & 1 & 0 & 0 & 1 & 0 & 2 & 1  & 0  & 2  & 1 & 0  & 2  & 1 \\\\\n\t\t2  & 3  & -  & 0 & 0 & 0 & 2 & 0 & 2 & 2  & 0  & -  & 2 & 0  & -  & 2 \\\\\n\t\t3  & -  & 4  & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 4  & 3 & -  & 4  & 3 \\\\\n\t\t4  & -  & -  & 1 & 0 & 0 & 1 & 2 & 2 & 4  & -  & -  & 4 & -  & -  & 4 \\\\\n\t\t5  & -  & 6  & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 1  & 5 & -  & 1  & 5 \\\\\n\t\t6  & -  & 7  & 1 & 0 & 0 & 1 & 2 & 2 & 1  & -  & 2  & 6 & -  & 2  & 6 \\\\\n\t\t7  & 8  & -  & 0 & 0 & 0 & 2 & 0 & 2 & 2  & 0  & -  & 2 & 0  & -  & 2 \\\\\n\t\t8  & -  & 9  & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 4  & 3 & -  & 4  & 3 \\\\\n\t\t9  & -  & -  & 1 & 0 & 0 & 1 & 2 & 2 & 4  & -  & -  & 4 & -  & -  & 4 \\\\\n\t\t10 & -  & 11 & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 11 & 10& -  & 11 & 10 \\\\\n\t\t11 & 15 & 12 & 1 & 0 & 0 & 1 & 2 & 2 & 11 & 15 & 2  & 11& 15 & 2  & 11\\\\\n\t\t12 & 13 & -  & 0 & 0 & 0 & 2 & 0 & 2 & 2  & 0  & -  & 2 & 3  & -  & 2 \\\\\n\t\t13 & -  & 14 & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 4  & 3 & -  & 4  & 3 \\\\\n\t\t14 & -  & -  & 1 & 0 & 0 & 1 & 2 & 2 & 4  & -  & -  & 4 & -  & -  & 4 \\\\\n\t\t15 & -  & 16 & 0 & 0 & 0 & 2 & 2 & 2 & 15 & -  & 2  & 15& -  & 2  & 15\\\\\n\t\t16 & -  & 17 & 0 & 0 & 0 & 2 & 2 & 2 & 15 & -  & 15 & 16& -  & 15 & 16\\\\\n\t\t17 & 18 & -  & 0 & 0 & 0 & 2 & 0 & 2 & 2  & 0  & -  & 2 & 3  & 
-  & 2 \\\\ \n\t\t18 & -  & 19 & 0 & 0 & 1 & 0 & 2 & 1 & 0  & -  & 4  & 3 & -  & 4  & 3 \\\\ \n\t\t19 & -  & -  & 1 & 0 & 0 & 1 & 2 & 2 & 4  & -  & -  & 4 & -  & -  & 4 \\\\ \n\t\t-  & -  & -  & 0 & 0 & 0 & 2 & 2 & 2 & -  & -  & -  & - & -  & -  & - \\\\ \n\t\t\\hline\n\t\\end{tabular} \n\t\\caption{Minimization of Suffix Trie for string $T=ababbab$.}\n\\end{table}\n\nThe algorithm ends either when each state is in its own group -- the automaton is already minimal -- or when two consecutive iterations $g_i$ and $g_{i+1}$ split the states the same way.\n\nStates which end up in the same group are equivalent:\n\\begin{itemize}\n\t\\item $2 \\Leftrightarrow 7 \\Leftrightarrow 12 \\Leftrightarrow 17$,\n\t\\item $3 \\Leftrightarrow 8 \\Leftrightarrow 13 \\Leftrightarrow 18$,\n\t\\item $4 \\Leftrightarrow 9 \\Leftrightarrow 14 \\Leftrightarrow 19$.\n\\end{itemize}\n\nThe equivalent states can be merged together and the minimal automaton can be constructed.\n\n\\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto]\n\t\\tikzstyle{every state} = [circle,draw,minimum size=0.4cm]\n\t\\node[initial,state]\t(a)\t\t\t\t\t{$0$};\n\t\\node[state]\t\t\t(b) [right of=a]\t{$10$};\n\t\\node[state,accepting]\t(c) [right of=b]\t{$11$};\n\t\\node[state]\t\t\t(d) [right of=c]\t{$15$};\n\t\\node[state]\t\t\t(e) [right of=d]\t{$16$};\n\t\\node[state]\t\t\t(f) [right of=e]\t{$2$};\n\t\\node[state]\t\t\t(g) [right of=f]\t{$3$};\n\t\\node[state,accepting]\t(h) [right of=g]\t{$4$};\n\t\\node[state,accepting]\t\t\t(i) [below of=c]\t{$1$};\n\t\\node[state]\t\t\t(j) [below of=d]\t{$5$};\n\t\\node[state,accepting]\t(k) [below of=e]\t{$6$};\n\t\\path\t(a) edge node {$a$} (b)\n\t\t\t\tedge [below] node {$b$} (i)\n\t\t\t(b) edge node {$b$} (c)\n\t\t\t(c) edge node {$a$} (d)\n\t\t\t\tedge [bend left] node {$b$} (f)\n\t\t\t(d) edge node {$b$} (e)\n\t\t\t(e) edge node {$b$} (f)\n\t\t\t(f) edge node {$a$} (g)\n\t\t\t(g) edge node {$b$} (h)\n\t\t\t(i) edge [below] node {$b$} (f)\n\t\t\t\tedge [below] node {$a$} (j)\n\t\t\t(j) edge [below] node {$b$} (k)\n\t\t\t(k) edge [below] node {$b$} (h);\n\\end{tikzpicture}\n", "meta": {"hexsha": "57af5154b196c38a3a967f33de1f2288dd15259f", "size": 14064, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "evy/ch2.tex", "max_stars_repo_name": "exander77/handouts", "max_stars_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "evy/ch2.tex", "max_issues_repo_name": "exander77/handouts", "max_issues_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "evy/ch2.tex", "max_forks_repo_name": "exander77/handouts", "max_forks_repo_head_hexsha": "c30e8f1128bc71ea69c7a83daaf8915b6a8eff36", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9047619048, "max_line_length": 460, "alphanum_fraction": 0.5149317406, "num_tokens": 6029, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5855652628527802}}
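The sorting construction of the Suffix Array and the search over it are easy to reproduce in code. The following Python sketch (illustrative only; it sorts suffixes directly rather than using DC3, so construction is $O(n^2 \\log n)$ in the worst case) rebuilds the tables for $T=abaabababbabbb$ and finds the occurrences of $P=abaa$:\n\\begin{verbatim}\nimport bisect\n\ndef suffix_array(t):\n    # Plain sorting; Python's 'shorter prefix sorts first' rule matches\n    # appending a minimal sentinel $.\n    return sorted(range(len(t)), key=lambda i: t[i:])\n\ndef lcp_array(t, sa):\n    # Kasai's algorithm: lcp[k] of suffixes sa[k-1] and sa[k], in O(n).\n    n, h = len(t), 0\n    rank, lcp = [0] * n, [0] * n\n    for k, i in enumerate(sa):\n        rank[i] = k\n    for i in range(n):\n        if rank[i] == 0:\n            h = 0\n            continue\n        j = sa[rank[i] - 1]\n        while i + h < n and j + h < n and t[i + h] == t[j + h]:\n            h += 1\n        lcp[rank[i]] = h\n        h = max(h - 1, 0)\n    return lcp\n\ndef occurrences(t, sa, p):\n    # Binary search for the block of suffixes that start with p\n    # (assumes every character of t is below chr(0x7f)).\n    suf = [t[i:] for i in sa]   # materialized for clarity only\n    lo = bisect.bisect_left(suf, p)\n    hi = bisect.bisect_right(suf, p + '\\x7f')\n    return sorted(sa[lo:hi])\n\nt = 'abaabababbabbb'\nsa = suffix_array(t)   # [2, 0, 3, 5, 7, 10, 13, 1, 4, 6, 9, 12, 8, 11]\nassert occurrences(t, sa, 'abaa') == [0]\n\\end{verbatim}\nThe resulting order matches the sorted table above once the sentinel rows for \\# and \\$ are removed.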
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath, amssymb}\n\\usepackage{tikz}\n\\usepackage{wrapfig}\n\\usepackage[margin=15mm]{geometry}\n\n\\newcommand{\\U}{\\mathcal{U}}\n\\renewcommand{\\l}{\\lambda}\n\\newcommand{\\rec}{\\text{rec}}\n\n\\begin{document}\n\\begin{center}\n\\subsection*{Notes on W-types}\n\\end{center}\n\nHeman Gandhi\n\\hfill\nApril 2020\\\\\n\n\\subsubsection*{Lists}\n\nFrom 5.1, we say lists are:\n\\begin{itemize}\n\\item $nil: List(A)$\n\\item $cons: A \\to List(A) \\to List(A)$\n\\end{itemize}\n\nWith W-types, this is simplified to $List(A) :\\equiv W_{x: 1 + A}\\ \\rec_{1 + A}(\\U, 0, \\l a. 1, x)$.\nThis means that lists are either a null-ary empty thing or something that takes one (1)\nargument. This seems at odds with ``cons'' above, but is reconciled by ``sup'':\n$cons(a, l) = sup(inr(a), \\l \\star. l)$.\n\nIf we have a list $<a, b, c>: List(A)$, we can write:\n\\[\nsup(inr(a), \\l \\star. sup(inr(b), \\l \\star. sup(inr(c), \\l \\star. sup(inl(\\star), \\l x. empty))))\n\\]\nwhich is suitably ugly and where $empty$ is the 0-induction to get us the nil at the end of the list.\n\n\\subsubsection*{Binary Trees}\n\nWith bullet-points, we'd probably write:\n\\begin{itemize}\n\\item $nil: BinTree(A)$\n\\item $node: A \\to BinTree(A) \\to BinTree(A) \\to BinTree(A)$\n\\end{itemize}\n\nWith W-types, we'd get something very similar to lists:\n\\[\nBinTree(A) :\\equiv W_{a: 1 + A}\\ \\rec_{1 + A}(\\U, 0, 2, a)\n\\]\n\nWhere the sort of ``cons''-ing operator would combine two trees under a root with:\n\\begin{equation}\n\\begin{split}\ncons&'(a, l, r) = sup(inr(a), C)\\\\\n&where\\ C\\ 0_2 = l;\\ C\\ 1_2 = r\n\\end{split}\n\\end{equation}\n\n(Writing out an actual tree is hecking ugly.)\n\n\\subsubsection*{Proofs over W-types}\n\n(I don't think this bit actually matters -- will just grab from the book.)\n\n\\subsubsection*{All Proofs of the Same Thing Over a W-type are the Same}\n\n``Proofs of the same thing'' means that $g, h: \\prod_{w: W_{x: A} B(x)} E(w)$\nwhere, for $t = g, h$, we have $G_t: \\prod_{a, f} t(sup(a, f)) = e(a, f, \\l b. t(f(b)))$.\n\nIn order to prove this, we consider our own dependent type over our W-type. We put\n$D: \\prod_{w: W_{x: A} B(x)} g(w) = h(w)$ which would prove $g = h$ by function extensionality.\nIn order to prove this, by induction on W-types, we need only inhabit\n\\[\nd: \\prod_{a: A} \\prod_{f: B(a) \\to W_{x: A} B(x)} \\prod_{g': \\prod_{b: B(a)} D(f(b))} D(sup(a, f))\n\\]\nTo do so, consider $e$ above. $g'$ gives us that $g(f(b)) = h(f(b))$ and then function\nextensionality makes $p: \\l b. g(f(b)) = \\l b. h(f(b))$. $ap_{e(a, f)}(p)$ gives us that\n$e(a, f, \\l b. g(f(b))) = e(a, f, \\l b. 
h(f(b)))$ and by the given $G_t$, we get\n$g(sup(a, f)) = h(sup(a, f))$, which gives us $d$ as desired.\n\n\\end{document}\n", "meta": {"hexsha": "7dc75f55774b911e671ab6f246e2a8049ada0bd8", "size": 2679, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "misc-tex/5.3-notes.tex", "max_stars_repo_name": "hemangandhi/HoTT-Intro", "max_stars_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "misc-tex/5.3-notes.tex", "max_issues_repo_name": "hemangandhi/HoTT-Intro", "max_issues_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "misc-tex/5.3-notes.tex", "max_forks_repo_name": "hemangandhi/HoTT-Intro", "max_forks_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8928571429, "max_line_length": 101, "alphanum_fraction": 0.6427771557, "num_tokens": 985, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7879312006227324, "lm_q1q2_score": 0.5855652612904374}}
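As a small sanity check on the binary-tree encoding above (our example, not from the book): for the one-node tree, both children are $nil = sup(inl(\\star), \\l x.\\, empty)$, so the branching function $C$ can be taken constant:\n\\[\nnode(a, nil, nil) = sup(inr(a),\\ \\l x.\\, sup(inl(\\star), \\l y.\\, empty))\n\\]\nwhere, as before, $empty$ is obtained by $0$-induction.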
{"text": "\\section{\\module{cmath} ---\n         Mathematical functions for complex numbers}\n\n\\declaremodule{builtin}{cmath}\n\\modulesynopsis{Mathematical functions for complex numbers.}\n\nThis module is always available.  It provides access to mathematical\nfunctions for complex numbers.  The functions are:\n\n\\begin{funcdesc}{acos}{x}\nReturn the arc cosine of \\var{x}.\nThere are two branch cuts:\nOne extends right from 1 along the real axis to \\infinity, continuous\nfrom below.\nThe other extends left from -1 along the real axis to -\\infinity,\ncontinuous from above.\n\\end{funcdesc}\n\n\\begin{funcdesc}{acosh}{x}\nReturn the hyperbolic arc cosine of \\var{x}.\nThere is one branch cut, extending left from 1 along the real axis\nto -\\infinity, continuous from above.\n\\end{funcdesc}\n\n\\begin{funcdesc}{asin}{x}\nReturn the arc sine of \\var{x}.\nThis has the same branch cuts as \\function{acos()}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{asinh}{x}\nReturn the hyperbolic arc sine of \\var{x}.\nThere are two branch cuts, extending left from \\plusminus\\code{1j} to\n\\plusminus-\\infinity\\code{j}, both continuous from above.\nThese branch cuts should be considered a bug to be corrected in a\nfuture release.\nThe correct branch cuts should extend along the imaginary axis,\none from \\code{1j} up to \\infinity\\code{j} and continuous from the\nright, and one from -\\code{1j} down to -\\infinity\\code{j} and\ncontinuous from the left.\n\\end{funcdesc}\n\n\\begin{funcdesc}{atan}{x}\nReturn the arc tangent of \\var{x}.\nThere are two branch cuts:\nOne extends from \\code{1j} along the imaginary axis to\n\\infinity\\code{j}, continuous from the left.\nThe other extends from -\\code{1j} along the imaginary axis to\n-\\infinity\\code{j}, continuous from the left.\n(This should probably be changed so the upper cut becomes continuous\nfrom the other side.)\n\\end{funcdesc}\n\n\\begin{funcdesc}{atanh}{x}\nReturn the hyperbolic arc tangent of \\var{x}.\nThere are two branch cuts:\nOne extends from 1 along the real axis to \\infinity, continuous\nfrom above.\nThe other extends from -1 along the real axis to -\\infinity,\ncontinuous from above.\n(This should probably be changed so the right cut becomes continuous from\nthe other side.)\n\\end{funcdesc}\n\n\\begin{funcdesc}{cos}{x}\nReturn the cosine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{cosh}{x}\nReturn the hyperbolic cosine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{exp}{x}\nReturn the exponential value \\code{e**\\var{x}}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{log}{x}\nReturn the natural logarithm of \\var{x}.\nThere is one branch cut, from 0 along the negative real axis to\n-\\infinity, continuous from above.\n\\end{funcdesc}\n\n\\begin{funcdesc}{log10}{x}\nReturn the base-10 logarithm of \\var{x}.\nThis has the same branch cut as \\function{log()}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sin}{x}\nReturn the sine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sinh}{x}\nReturn the hyperbolic sine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sqrt}{x}\nReturn the square root of \\var{x}.\nThis has the same branch cut as \\function{log()}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{tan}{x}\nReturn the tangent of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{tanh}{x}\nReturn the hyperbolic tangent of \\var{x}.\n\\end{funcdesc}\n\nThe module also defines two mathematical constants:\n\n\\begin{datadesc}{pi}\nThe mathematical constant \\emph{pi}, as a real.\n\\end{datadesc}\n\n\\begin{datadesc}{e}\nThe mathematical 
constant \\emph{e}, as a real.\n\\end{datadesc}\n\nNote that the selection of functions is similar, but not identical, to\nthat in module \\refmodule{math}\\refbimodindex{math}.  The reason for having\ntwo modules is that some users aren't interested in complex numbers,\nand perhaps don't even know what they are.  They would rather have\n\\code{math.sqrt(-1)} raise an exception than return a complex number.\nAlso note that the functions defined in \\module{cmath} always return a\ncomplex number, even if the answer can be expressed as a real number\n(in which case the complex number has an imaginary part of zero).\n\nA note on branch cuts: They are curves along which the given function\nfails to be continuous.  They are a necessary feature of many complex\nfunctions.  It is assumed that if you need to compute with complex\nfunctions, you will understand about branch cuts.  Consult almost any\n(not too elementary) book on complex variables for enlightenment.  For\ninformation on the proper choice of branch cuts for numerical\npurposes, a good reference is the following:\n\n\\begin{seealso}\n  \\seetext{Kahan, W.:  Branch cuts for complex elementary functions;\n           or, Much ado about nothing's sign bit.  In Iserles, A.,\n           and Powell, M. (eds.), \\citetitle{The state of the art in\n           numerical analysis}. Clarendon Press (1987) pp.~165--211.}\n\\end{seealso}\n", "meta": {"hexsha": "77496bf4e0559abdacb0b2db1f5aad13b087e351", "size": 4711, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/lib/libcmath.tex", "max_stars_repo_name": "deadsnakes/python2.3", "max_stars_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_stars_repo_licenses": ["PSF-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Doc/lib/libcmath.tex", "max_issues_repo_name": "deadsnakes/python2.3", "max_issues_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_issues_repo_licenses": ["PSF-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/lib/libcmath.tex", "max_forks_repo_name": "deadsnakes/python2.3", "max_forks_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_forks_repo_licenses": ["PSF-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4113475177, "max_line_length": 75, "alphanum_fraction": 0.7527064318, "num_tokens": 1244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5855652605092656}}
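For readers who want to see the \module{math}/\module{cmath} contrast described above in practice, a brief interactive session might look like the following (a sketch: the printed float digits and exact traceback text vary by platform and Python version):

\begin{verbatim}
>>> import math, cmath
>>> cmath.sqrt(-1)            # cmath always returns a complex number
1j
>>> math.sqrt(-1)             # the real-valued module raises instead
Traceback (most recent call last):
  ...
ValueError: math domain error
>>> cmath.exp(1j * cmath.pi)  # e**(i*pi); the imaginary part is rounding error
(-1+1.2246...e-16j)
\end{verbatim}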
{"text": "% !TeX root = P2-Comonoids.tex\n\\documentclass[Book-Poly]{subfiles}\n\n\\begin{document}\n%\n\n\\setcounter{chapter}{4}%Just finished 4.\n\n\\part{A different category of categories}\\label{part.comon}\n\n\\Opensolutionfile{solutions}[solution-file5]\n\n%------------ Chapter ------------%\n\\chapter{The composition product}\\label{ch.comon.comp}\n\n\n% In this chapter we will discuss a monoidal structure on $\\poly$ that is quite easy from the mathematical point of view---it is simply composition---but which is again remarkable both in terms of its semantics and the phenomena that emerge mathematically, as we will see in later chapters. \n\n% In particular, we will see that the comonoids for the composition monoidal structure on $\\poly$ are precisely categories. However, the morphisms are different---they are often called \\emph{cofunctors}---and so we get a second category $\\smcat^\\sharp$ of categories and cofunctors. But the core groupoids of each---the groupoid of small categories and all functor isomorphisms between them, as well as the groupoid of small categories and all cofunctor isomorphisms between them---are isomorphic as groupoids. In other words, the following slogan is justified:\n% \\slogan{Polynomial comonads are precisely categories.}\n\n% Cofunctors are not too familiar, but we will explain how to think of them in a variety of ways. We will see that whereas a functor $\\cat{C}\\to\\cat{D}$ gives a kind of ``picture'' of $\\cat{C}$ inside $\\cat{D}$, a cofunctor $\\cat{C}\\cof\\cat{D}$ gives a kind of $\\cat{D}$-shaped ``crystallization'' of $\\cat{C}$, one that is intuitively more geometric, more like creating neighborhoods. We will see in \\cref{chapter.bimod} that there is another kind of morphism between comonoids, namely the bimodules, that are perhaps more familiar: they are the so-called \\emph{parametric right adjoints}, or in perhaps more friendly terms, \\emph{data migration functors} between copresheaf categories.\n\n% The plan for this part is to first introduce what is perhaps the most interesting monoidal structure on $\\poly$, namely the composition product; we do so in this chapter. We'll give a bunch of examples and ways to think about it in terms that relate to dynamical systems and our work so far. Then in \\cref{sec.comonoids_in_poly} we'll discuss comonoids in $\\poly$ and explain why they are categories in down-to-earth, set-theoretic terms. We will also discuss the morphisms between them.\n\n% Finally in \\cref{sec.cofree} we will discuss the cofree comonoid construction that takes any polynomial and returns a category. We will show how it relates to decision trees, as one may see in combinatorial game theory.\n\n%%%%% ** old preamble above\n\n\n\n\nWe have seen that the category $\\poly$ of polynomial functors has quite a bit of well-interoperating mathematical structure. Further, it is an expressive way to talk about dynamical systems that can change their interfaces and wiring patterns based on their internal states.\n\nBut we touched upon one thing---what in some sense is the most interesting part of the story---only briefly. That thing is quite simple to state, and yet has profound consequences. Namely, polynomials can be composed:\n\\[\n\\yon^\\2\\circ(\\yon+\\1)=(\\yon+\\1)^\\2\\iso\\yon^\\2+\\2\\yon+\\1.\n\\]\nWhat could be simpler?\n\nIt turns out that this operation, which we'll see soon is a monoidal product, has a lot to do with time. 
There is a strong sense---made precise in \\cref{prop.poly_closed_comp}---in which the polynomial $p\\circ q$ represents ``starting at a position $i$ in $p$, choosing a direction in $p[i]$, landing at a position $j$ in $q$, choosing a direction in $q[j]$, and then landing... somewhere.''\n\nThe composition product has many surprises up its sleeve, as we'll see in the following chapters. We've given you a glimpse of many of them already in \\cref{sec.poly.intro.math_theory}. We won't amass them all here; instead, we'll take you through the story step by step. But as a preview, $\\circ$ will get us into decision trees, databases, and more dynamics, as well as the interactions between these.\n\n%-------- Section --------%\n\\section{Defining the composition product}\\label{sec.comon.comp.def}\nWe begin with the definition of the composition product in terms of polynomials as functors.\n\n\\subsection{Composite functors}\\label{subsec.comon.comp.def.functor}\n\n\\begin{definition} \\label{def.comp}\nGiven polynomial functors $p, q$, we let $p \\circ q$ denote their composite as functors.\nThat is, $p \\circ q \\colon \\smset \\to \\smset$ sends each set $X$ to the set $p(q(X))$.\n\\end{definition}\n\nFunctor composition gives a monoidal structure on the category $\\smset^\\smset$ of functors $\\smset\\to\\smset$, but to check that the full subcategory $\\poly$ of $\\smset^\\smset$ inherits this monoidal structure, we need to verify that the composite of two functors in $\\poly$ is still a functor in $\\poly$.\n\n\\begin{proposition}\\label{prop.poly_closed_comp}\nSuppose $p,q\\in\\poly$ are polynomial functors $p,q\\colon\\smset\\to\\smset$. Then their composite $p\\circ q$ is again a polynomial functor, and we have the following isomorphism:\n\\begin{equation} \\label{eqn.composite_formula_circ}\np\\circ q\\iso\\sum_{i\\in p(\\1)}\\prod_{d\\in p[i]}\\sum_{j\\in q(\\1)}\\prod_{e\\in q[j]}\\yon.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWe can rewrite $p$ and $q$ as\n\\[\np\\iso\\sum_{i\\in p(\\1)}\\yon^{p[i]}\\iso\\sum_{i\\in p(\\1)}\\prod_{d\\in p[i]}\\yon\n\\qqand\nq\\iso\\sum_{j\\in q(\\1)}\\yon^{q[j]}\\iso\\sum_{j\\in q(\\1)}\\prod_{e\\in q[j]}\\yon.\n\\]\nFor any set $X$ we have $(p\\circ q)(X)=p(q(X))=p(\\sum_j\\prod_e X)=\\sum_i\\prod_d\\sum_j\\prod_eX$, so \\eqref{eqn.composite_formula_circ} is indeed the formula for the composite $p \\circ q$.\nTo see this is a polynomial, we use \\eqref{eqn.push_prod_sum_set_indep}, which says we can rewrite the $\\prod\\sum$ in \\eqref{eqn.composite_formula_circ} as a $\\sum\\prod$ to obtain\n\\begin{align}\\label{eqn.composite_formula_sums_first_circ}\n  p\\circ q\\iso\n  \\scalebox{1.3}{$\\displaystyle\n  \\sum_{i\\in p(\\1)} \\; \\sum_{\\bar{j_i}\\colon p[i]\\to q(\\1)}\\yon^{\\sum_{d\\in p[i]}q[\\bar{j_i}(d)]}$}\n\\end{align}\n(written slightly bigger for clarity), which is clearly a polynomial.\n\\end{proof}\n\n\\begin{corollary} \\label{cor.comp_monoidal}\nThe category $\\poly$ has a monoidal structure $(\\yon,\\circ)$, where $\\yon$ is the identity functor and $\\circ$ is given by composition.\n\\end{corollary}\n\nBecause we may wish to use $\\circ$ to denote composition in arbitrary categories, we use a special symbol for polynomial composition, namely\n\\[\np\\tri q\\coloneqq p\\circ q.\n\\]\nThe symbol $\\tri$ looks a bit like the composition symbol in that it is an open shape, and when writing quickly by hand, it's okay if it morphs into a $\\circ$.\nBut $\\tri$ highlights the asymmetry of composition, in contrast with the 
other monoidal structures on $\\poly$ we've encountered.\nMoreover, we'll soon see that $\\tri$ is quite evocative in terms of trees.\nFor each $n\\in\\nn$, we'll also use $p\\tripow{n}$ to denote the $n$-fold composite of $p$, i.e.\\ $n$ copies of $p$ all composed with each other.\nIn particular, $p\\tripow{0}=\\yon$ and $p\\tripow{1}=p$.\n\nWe repeat the important formulas from \\cref{prop.poly_closed_comp} and its proof in the new notation:\n\\begin{equation}\\label{eqn.composite_formula}\np\\tri q\\iso\\sum_{i\\in p(\\1)}\\prod_{d\\in p[i]}\\sum_{j\\in q(\\1)}\\prod_{e\\in q[j]}\\yon.\n\\end{equation}\n\n\\begin{align}\\label{eqn.composite_formula_sums_first}\n  p\\tri q\\iso\n  \\scalebox{1.3}{$\\displaystyle\n  \\sum_{i\\in p(\\1)} \\; \\sum_{\\bar{j_i}\\colon p[i]\\to q(\\1)}\\yon^{\\sum_{d\\in p[i]}q[\\bar{j_i}(d)]}$}\n\\end{align}\n\n% \\[\n% \\begin{tikzpicture}[polybox, baseline=(helper)]\n% \t\\node[poly] (p) {$d:p[i]$\\at$i:p(\\1)$};\n% \t\\node[poly, above=of p] (q) {$e:q[j]$\\at$j:q(\\1)$};\n% \t\\coordinate (helper) at ($(p.north)!.5!(q.south)$);\n% \\end{tikzpicture}\n% \\quad\\cong\\quad\n% \\begin{tikzpicture}[polybox, baseline=(p.east)]\n% \t\\node[poly] (p) {$(d:p[i], e:q[j(d)]$)\\at$(i:p(1), j: p[i]\\to q(\\1))$};\n% \\end{tikzpicture}\n% \\]\n\n\\begin{exercise}\nLet's consider \\eqref{eqn.composite_formula_sums_first} piece by piece, with concrete polynomials $p\\coloneqq\\yon^\\2+\\yon^\\1$ and $q\\coloneqq \\yon^\\3+\\1$.\n\\begin{enumerate}\n\t\\item What is $\\yon^\\2\\tri q$? \n\t\\item What is $\\yon^\\1\\tri q$?\n\t\\item What is $(\\yon^\\2+\\yon^\\1)\\tri q$? This is what $p\\tri q$ ``should be.''\n\t\\item How many functions $\\bar{j_1}\\colon p[1]\\to q(\\1)$ are there?\n\t\\item For each function $\\bar{j_1}$ as above, what is $\\sum_{d\\in p[1]} q[\\bar{j_1}(d)]$?\n\t\\item How many functions $\\bar{j_2}\\colon p[2]\\to q(\\1)$ are there?\n\t\\item For each function $\\bar{j_2}$ as above, what is $\\sum_{d\\in p[2]} q[\\bar{j_2}(d)]$?\n\t\\item Write out \\[\\sum_{i\\in p(\\1)}\\;\\sum_{\\bar{j_i}\\colon p[i]\\to q(\\1)}\\yon^{\\sum_{d\\in p[i]}q[\\bar{j_i}(d)]}.\\]\n\tDoes the result agree with what $p\\tri q$ should be?\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\nWe are given $p\\coloneqq\\yon^\\2+\\yon^\\1$ and $q\\coloneqq \\yon^\\3+\\1$.\n\\begin{enumerate}\n    \\item By standard polynomial multiplication, we have that $\\yon^\\2 \\tri q \\iso q \\times q \\iso \\yon^\\6 + \\2\\yon^\\3 + \\1$.\n    \\item We have that $\\yon^\\1 \\tri q \\iso q \\iso \\yon^\\3 + \\1$.\n    \\item Combining the previous parts, we have that $(\\yon^\\2 + \\yon^\\1) \\tri q \\iso q \\times q + q \\iso \\yon^\\6 + \\3\\yon^\\3 + \\2$.\n    \\item Since $p[1] \\iso \\2$ and $q(\\1) \\iso \\2$, there are $2^2 = 4$ functions $p[1] \\to q(\\1)$.\n    \\item When $\\bar{j_1} \\colon p[1] \\to q(\\1)$ is one of the two possible bijections, we have that\n    \\[\n        \\sum_{d \\in p[1]} q[\\bar{j_1}(d)] \\iso q[1] + q[2] \\iso \\3 + \\0 \\iso \\3.\n    \\]\n    When $\\bar{j_1} \\colon p[1] \\to q(\\1)$ sends everything to $1 \\in q(\\1)$, we have that\n    \\[\n        \\sum_{d \\in p[1]} q[\\bar{j_1}(d)] \\iso q[1] + q[1] \\iso \\3 + \\3 \\iso \\6.\n    \\]\n    Finally, when $\\bar{j_1} \\colon p[1] \\to q(\\1)$ sends everything to $2 \\in q(\\1)$, we have that\n    \\[\n        \\sum_{d \\in p[1]} q[\\bar{j_1}(d)] \\iso q[2] + q[2] \\iso \\0 + \\0 \\iso \\0.\n    \\]\n    \\item Since $p[2] \\iso \\1$ and $q(\\1) \\iso \\2$, there are $2^1 = 2$ functions $p[2] 
\\to q(\\1)$.\n    \\item When $\\bar{j_2} \\colon p[2] \\to q(\\1)$ maps to $1 \\in q(\\1)$, we have that $\\sum_{d \\in p[2]} q[\\bar{j_2}(d)] \\iso q[1] \\iso \\3$, and when $\\bar{j_2} \\colon p[2] \\to q(\\1)$ maps to $2 \\in q(\\1)$, we have that $\\sum_{d \\in p[2]} q[\\bar{j_2}(d)] \\iso q[2] \\iso \\0$.\n    \\item From the previous parts, we have that\n    \\[\n        \\sum_{i\\in p(\\1)}\\;\\sum_{\\bar{j_i}\\colon p[i]\\to q(\\1)}\\yon^{\\sum_{d\\in p[i]}q[\\bar{j_i}(d)]} \\iso (\\2\\yon^\\3 + \\yon^\\6 + \\yon^\\0) + (\\yon^\\3 + \\yon^\\0) \\iso \\yon^\\6 + \\3\\yon^\\3 + \\2,\n    \\]\n    which agrees with what $p \\tri q$ should be.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n\\begin{exercise}\\label{exc.composites_of_specials}\n\\begin{enumerate}\n\t\\item If $p$ and $q$ are representable, show that $p\\tri q$ is too. Give a formula for it.\n\t\\item If $p$ and $q$ are linear, show that $p\\tri q$ is too. Give a formula for it.\n\t\\item If $p$ and $q$ are constant, show that $p\\tri q$ is too. Give a formula for it.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n\t\\item Given representable polynomials $p \\coloneqq \\yon^A$ and $q \\coloneqq \\yon^B$, we have that $p \\tri q \\iso \\left(\\yon^B\\right)^A \\iso \\yon^{AB}$, which is also representable.\n\t\\item Given linear polynomials $p \\coloneqq A\\yon$ and $q \\coloneqq B\\yon$, we have that $p \\tri q \\iso A(B\\yon) \\iso AB\\yon$, which is also linear.\n\t\\item Given constant polynomials $p \\coloneqq A$ and $q \\coloneqq B$, we have that $p \\tri q \\iso A$, which is also constant (see also \\cref{exc.composing_with_constants}).\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\nWe have seen how $\\tri$ acts on the objects in $\\poly$, but what does it do to the morphisms between them?\nFor any pair of natural transformations $f\\colon p\\to p'$ and $g\\colon q\\to q'$ between polynomial functors, their composition product $f\\tri g\\colon p\\tri q\\to p'\\tri q'$ is given by \\emph{horizontal composition}.\n\n\\begin{definition}[Horizontal composition of natural transformations]\\label{def.horiz_comp_nat_trans}\nLet $f\\colon p\\to p'$ and $g\\colon q\\to q'$ be two natural transformations between (polynomial) functors $p,p',q,q'\\colon\\smset\\to\\smset$.\nThen the \\emph{horizontal composite} of $f$ and $g$, denoted $f\\tri g$, is the natural transformation $p\\tri q\\to p'\\tri q'$ whose $X$-component for each $X\\in\\smset$ is the function\n\\begin{equation} \\label{eqn.horiz_comp_nat_trans_comp}\n    p(q(X)) \\To{f_{q(X)}} p'(q(X)) \\To{p'(g_X)} p'(q'(X))\n\\end{equation}\nobtained by composing the $q(X)$-component of $f$ with the functor $p'$ applied to the $X$-component of $g$.\n\\end{definition}\n\n\\begin{exercise}\nShow that we could have replaced the composite function \\eqref{eqn.horiz_comp_nat_trans_comp} in \\cref{def.horiz_comp_nat_trans} with the function\n\\begin{equation} \\label{eqn.horiz_comp_nat_trans_comp2}\n    p(q(X)) \\To{p(g_X)} p(q'(X)) \\To{f_{q'(X)}} p'(q'(X))\n\\end{equation}\nobtained by composing $p$ applied to the $X$-component of $g$ with the $q'(X)$-component of $f$, without altering the definition.\n\\begin{solution}\nWe wish to show that \\eqref{eqn.horiz_comp_nat_trans_comp2} could replace \\eqref{eqn.horiz_comp_nat_trans_comp} in \\cref{def.horiz_comp_nat_trans}.\nWe claim that \\eqref{eqn.horiz_comp_nat_trans_comp} and \\eqref{eqn.horiz_comp_nat_trans_comp2} are in fact the same function; that is, that the following square 
commutes:\n\\[\n\\begin{tikzcd}\n    p(q(X)) \\ar[r, \"f_{q(X)}\"]\\ar[d, \"p(g_X)\"'] & p'(q(X)) \\ar[d, \"p'(g_X)\"] \\\\\n    p(q'(X)) \\ar[r, \"f_{q'(X)}\"'] & p'(q'(X))\n\\end{tikzcd}\n\\]\nIndeed it does, by the naturality of $f$.\n\\end{solution}\n\\end{exercise}\n\nThe composition product of polynomials and lenses will be extremely important in the story that follows.\nHowever, we only sometimes think of it as the composition of functors and the horizontal composition of natural transformations; more often we think of it as certain operations on arenas or corolla forests.\n\n\\subsection{Composite arenas}\\label{subsec.comon.comp.def.arena}\n\nLet us interpret our formula \\eqref{eqn.composite_formula_sums_first} for the composite of two polynomials in terms of what it says about the positions and directions of the corresponding arenas.\nThe position-set of $p \\tri q$ is\n\\begin{equation} \\label{eqn.comp_pos}\n    (p \\tri q)(\\1) \\iso \\sum_{i \\in p(\\1)} \\; \\sum_{\\bar{j_i} \\colon p[i] \\to q(\\1)} \\1 \\iso \\sum_{i \\in p(\\1)} \\smset(p[i], q(\\1)).\n\\end{equation}\nIn other words, specifying a position of $p \\tri q$ amounts to first specifying a $p$-position $i$, then specifying a function $\\bar{j_i} \\colon p[i] \\to q(\\1)$, i.e.\\ a $q$-position $\\bar{j_i}(d)$ for each $p[i]$-direction $d$.\n\nGiven such a position $(i, \\bar{j_i})$ of $p \\tri q$, the direction-set of $p \\tri q$ at $(i, \\bar{j_i})$ is\n\\begin{equation} \\label{eqn.comp_dir}\n    (p \\tri q)[(i, \\bar{j_i})] \\iso \\sum_{d \\in p[i]} q[\\bar{j_i}(d)].\n\\end{equation}\nSo a direction of $p \\tri q$ at $(i, \\bar{j_i})$ consists of a $p[i]$-direction $d$ and a $q[\\bar{j_i}(d)]$-direction.\n\nWhile this description completely characterizes $p \\tri q$ as an arena, it may be a bit tricky to wrap your head around.\nHere is an alternative perspective that can help us get a better intuition for what's going on with composite arenas.\n\nBack in \\cref{subsec.poly.func_nat.repr_sum.dep_sum_prod_set}, we saw how to write the instructions for choosing an element of a dependent sum or product of sets.\nFor instance, given a polynomial $p$ and a set $X$, the instructions for choosing an element of\n\\[\n    p\\tri X=p(X)\\iso\\sum_{i\\in p(\\1)}\\prod_{d\\in p[i]}X\n\\]\nwould be written as follows.\n\\begin{quote}\nTo choose an element of $p(X)$: \n\\begin{enumerate}\n    \\item choose an element $i\\in p(\\1)$;\n    \\item for each element $d\\in p[i]$:\n    \\begin{enumerate}[label*=\\arabic*.]\n        \\item choose an element of $X$.\n    \\end{enumerate}\n\\end{enumerate}\n\\end{quote}\nBut say we hadn't picked a set $X$ yet; in fact, say we might replace $X$ with a general polynomial instead.\nWe'll replace ``an element of $X$'' with a placeholder---the words ``a future''---that indicates that we don't yet know what will go there.\nFurthermore, to highlight that these instructions are associated with some polynomial $p$, we will use our familiar arena terminology of positions and directions.\n\\begin{quote}\nThe instructions associated with a polynomial $p$ are:\n\\begin{enumerate}\n    \\item choose a $p$-position $i$;\n    \\item for each $p[i]$-direction $d$:\n    \\begin{enumerate}[label*=\\arabic*.]\n        \\item choose a future.\n    \\end{enumerate}\n\\end{enumerate}\n\\end{quote}\n\nIf we think of polynomials in terms of their instructions, then \\eqref{eqn.composite_formula} tells us that the composition product simply nests one set of instructions within another, as 
follows.\n\\begin{quote}\nThe instructions associated with a polynomial $p\\tri q$ are:\n\\begin{enumerate}\n    \\item choose a $p$-position $i$;\n    \\item for each $p[i]$-direction $d$:\n    \\begin{enumerate}[label*=\\arabic*.]\n        \\item choose a $q$-position $j$;\n        \\item for each $q[j]$-direction $e$:\n        \\begin{enumerate}[label*=\\arabic*.]\n            \\item choose a future.\n        \\end{enumerate}\n    \\end{enumerate}\n\\end{enumerate}\n\\end{quote}\nSimilarly, we could write down the instructions associated with any $n$-fold composite by nesting even further.\nWe might think of such instructions as specifying some sort of length-$n$ \\emph{strategy}, in the sense of game theory, for picking positions given any directions---except that the opponent is somehow abstract, having no positions of its own.\n\nWhen we rewrite \\eqref{eqn.composite_formula} as \\eqref{eqn.composite_formula_sums_first}, we are collapsing the instructions down into the following, highlighting the positions and directions of $p\\tri q$.\n\\begin{quote}\nThe instructions associated with a polynomial $p\\tri q$ are:\n\\begin{enumerate}\n    \\item choose a $p$-position $i$ and, for each $p[i]$-direction $d$, a $q$-position $\\bar{j}_i(d)$;\n    \\item for each $p[i]$-direction $d$ and each $q[\\bar{j}_i(d)]$-direction $e$:\n    \\begin{enumerate}[label*=\\arabic*.]\n        \\item choose a future.\n    \\end{enumerate}\n\\end{enumerate}\n\\end{quote}\nWe will see in \\cref{subsec.comon.comp.def.corolla} that these instructions have a very natural interpretation when we translate from arenas to corolla forests.\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Let $p$ be an arbitrary polynomial. Write out the (uncollapsed) instructions associated with $p\\tripow{3}=p\\tri p\\tri p$.\n\t\\item Write out the (uncollapsed) instructions for choosing an element of $p\\tri p\\tri\\1$, but where you would normally write ``choose an element of $\\1$,'' just write ``done.'' \\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{longenum}\n    \\item The instructions associated with a polynomial $p\\tri p\\tri p$ are:\n    \\begin{enumerate}\n        \\item choose a $p$-position $i$;\n        \\item for each $p[i]$-direction $d$:\n        \\begin{enumerate}[label*=\\arabic*.]\n            \\item choose a $p$-position $i'$;\n            \\item for each $p[i']$-direction $d'$:\n            \\begin{enumerate}[label*=\\arabic*.]\n                \\item choose a $p$-position $i''$;\n                \\item for each $p[i'']$-direction $d''$:\n                \\begin{enumerate}[label*=\\arabic*.]\n                    \\item choose a future.\n                \\end{enumerate}\n            \\end{enumerate}\n        \\end{enumerate}\n    \\end{enumerate}\n    \\item To choose an element of $p\\tri p\\tri\\1$:\n    \\begin{enumerate}\n        \\item choose a $p$-position $i$;\n        \\item for each $p[i]$-direction $d$:\n        \\begin{enumerate}[label*=\\arabic*.]\n            \\item choose a $p$-position $i'$;\n            \\item for each $p[i']$-direction $d'$:\n            \\begin{enumerate}[label*=\\arabic*.]\n                \\item done.\n            \\end{enumerate}\n        \\end{enumerate}\n    \\end{enumerate}\n\\end{longenum}\n\n\\end{solution}\n\\end{exercise}\n\nBut how does the composition product act on lenses between arenas?\nGiven lenses $f\\colon p\\to p'$ and $g\\colon q\\to q'$, we can translate them to natural transformations, take their horizontal composite, then translate this back to 
a lens.\nThe following exercise guides us through this process.\n\n\\begin{exercise}[The composition product of lenses] \\label{exc.comp_prod_lens}\nFix lenses $f\\colon p\\to p'$ and $g\\colon q\\to q'$.\nWe seek to characterize their composition product $f\\tri g\\colon p\\tri q\\to p'\\tri q'$.\n\\begin{enumerate}\n    \\item\\label{exc.comp_prod_lens.1} Use \\cref{prop.morph_arena_to_func} to compute the $q(X)$-component of $f$ as a natural transformation.\n    \\item\\label{exc.comp_prod_lens.2} Use \\cref{prop.poly_on_functions,prop.morph_arena_to_func} to compute $p'$ applied to the $X$-component of $g$ as a natural transformation.\n    \\item\\label{exc.comp_prod_lens.3} Combine \\cref{exc.comp_prod_lens.1} and \\cref{exc.comp_prod_lens.2} using \\cref{def.horiz_comp_nat_trans} to compute the horizontal composite $f\\tri g$ of $f$ and $g$ as natural transformations.\n    \\item Use \\cref{cor.morph_func_to_arena} to translate the natural transformation $f\\tri g$ obtained in \\cref{exc.comp_prod_lens.3} to a lens between arenas $p\\tri q\\to p'\\tri q'$.\n    Verify that for each $(i,\\bar{j}_i)$ in $(p\\tri q)(\\1)$ (see \\eqref{eqn.comp_pos}), its on-positions function sends\n    \\begin{equation} \\label{eqn.comp_lens_pos}\n        (i,\\bar{j}_i)\\Mapsto{(f\\:\\tri\\:g)_\\1}(f_\\1(i), f^\\sharp_i\\then\\bar{j}_i\\then g_\\1);\n    \\end{equation}\n    while for each $(d',e')$ in $(p'\\tri q')[(f_\\1(i), f^\\sharp_i\\then\\bar{j}_i\\then g_\\1)]$ (see \\eqref{eqn.comp_dir}), its on-directions function sends\n    \\begin{equation} \\label{eqn.comp_lens_dir}\n        (d',e')\\Mapsto{(f\\:\\tri\\:g)^\\sharp_{(i,\\bar{j}_i)}}\\left(f^\\sharp_i(d'), g^\\sharp_{\\bar{j}_i(f^\\sharp_i(d'))}(e')\\right).\n    \\end{equation}\n    \\qedhere\n\\end{enumerate}\n\\begin{solution}\nWe have lenses $f\\colon p\\to p'$ and $g\\colon q\\to q'$.\n\\begin{enumerate}\n    \\item By \\cref{prop.morph_arena_to_func}, the $q(X)$-component of $f$ is a function $f_{q(X)}\\colon p(q(X))\\to p'(q(X))$ that sends every $(i,h)$ with $i\\in p(\\1)$ and $h\\colon p[i]\\to q(X)$ to $(f_\\1(i),f^\\sharp_i\\then h)$.\n    We can think of the function $h\\colon p[i]\\to q(X)$ equivalently as a function $\\bar{j}_i\\colon p[i]\\to q(\\1)$ and, for each $d\\in p[i]$, a function $h_d\\colon q[\\bar{j}_i(d)]\\to X$.\n    So $f_{q(X)}\\colon (p\\tri q)(X)\\to (p'\\tri q)(X)$ sends \\[(i,\\bar{j}_i,(h_d)_{d\\in p[i]})\\mapsto\\left(f_\\1(i),f^\\sharp_i\\then\\bar{j}_i,\\left(h_{f^\\sharp_i(d')}\\right)_{d'\\in p'[f_\\1(i)]}\\right).\\]\n    \n    \\item By \\cref{prop.morph_arena_to_func}, the $X$-component of $g$ is a function $g_X\\colon q(X)\\to q'(X)$ that sends every $(j,k)$ with $j\\in q(\\1)$ and $k\\colon q[j]\\to X$ to $(g_\\1(j),g^\\sharp_j\\then k)$ in $q'(X)$.\n    Then by \\cref{prop.poly_on_functions}, applying $p'$ to this $X$-component yields a function $p'(q(X))\\to p'(q'(X))$ that sends every $(i',\\bar{j'}_{i'},(h'_{d'})_{d'\\in p'[i']})$ with $i'\\in p'(\\1)$ as well as $\\bar{j'}_{i'}\\colon p'[i']\\to q(\\1)$ and $h'_{d'}\\colon q[\\bar{j'}_{i'}(d')]\\to X$ to \\[\\left(i',\\bar{j'}_{i'}\\then g_\\1,\\left(g^\\sharp_{\\bar{j'}_{i'}(d')}\\then h'_{d'}\\right)_{d'\\in p'[i']}\\right).\\]\n    \n    \\item By \\cref{def.horiz_comp_nat_trans}, the horizontal composite of $f$ and $g$ is the natural transformation $f\\tri g\\colon p\\tri q\\to p'\\tri q'$ whose $X$-component is the composite of the answers to \\cref{exc.comp_prod_lens.1} and \\cref{exc.comp_prod_lens.2}, sending\n    \\begin{align*}\n
    (i,\\bar{j}_i,(h_d)_{d\\in p[i]})&\\mapsto\\left(f_\\1(i),f^\\sharp_i\\then\\bar{j}_i,\\left(h_{f^\\sharp_i(d')}\\right)_{d'\\in p'[f_\\1(i)]}\\right)\\\\\n        &\\mapsto\\left(f_\\1(i),f^\\sharp_i\\then\\bar{j}_i\\then g_\\1, \\left(g^\\sharp_{\\bar{j}_{i}(f^\\sharp_i(d'))}\\then h_{f^\\sharp_i(d')}\\right)_{d'\\in p'[f_\\1(i)]}\\right).\n    \\end{align*}\n    \n    \\item We use \\cref{cor.morph_func_to_arena} to translate the answer to \\cref{exc.comp_prod_lens.3} into a lens $f\\tri g\\colon p\\tri q\\to p'\\tri q'$, as follows.\n    Its on-positions function is the $\\1$-component $(f\\tri g)_\\1$, which sends every $(i,\\bar{j}_i)$ with $i\\in p(\\1)$ and $\\bar{j}_i\\colon p[i]\\to q(\\1)$ to\n    \\[\n        (f_\\1(i),f^\\sharp_i\\then\\bar{j}_i\\then g_\\1).\n    \\]\n    Then for each such $(i,\\bar{j}_i)$, if we apply the $(p\\tri q)[(i,\\bar{j}_i)]$-component of $f\\tri g$ to the element $(i,\\bar{j}_i,(\\iota_d)_{d\\in p[i]})$, where $\\iota_d\\colon q[\\bar{j}_i(d)]\\to(p\\tri q)[(i,\\bar{j}_i)]\\iso\\sum_{d\\in p[i]}q[\\bar{j}_i(d)]$ is the canonical inclusion, then take the last coordinate of the result, we obtain for each $d'\\in p'[f_\\1(i)]$ the function\n    \\[\n        q'[g_\\1(\\bar{j}_i(f^\\sharp_i(d')))] \\To{g^\\sharp_{\\bar{j}_{i}(f^\\sharp_i(d'))}} q[\\bar{j}_i(f^\\sharp_i(d'))] \\To{\\iota_{f^\\sharp_i(d')}} \\sum_{d\\in p[i]}q[\\bar{j}_i(d)] \\iso (p\\tri q)[(i,\\bar{j}_i)].\n    \\]\n    These can equivalently be thought of as a single function from\n    \\[\n        \\sum_{d'\\in p'[f_\\1(i)]} q'[g_\\1(\\bar{j}_i(f^\\sharp_i(d')))] \\iso (p'\\tri q')[(f\\tri g)_\\1(i,\\bar{j}_i)]\n    \\]\n    to $(p\\tri q)[(i,\\bar{j}_i)]$, which \\cref{cor.morph_func_to_arena} tells us is the on-directions function of $f\\tri g$ at $(i,\\bar{j}_i)$; it sends every $(d',e')$ with $d'\\in p'[f_\\1(i)]$ and $e'\\in q'[g_\\1(\\bar{j}_i(f^\\sharp_i(d')))]$ to\n    \\[\n        \\left(f^\\sharp_i(d'), g^\\sharp_{\\bar{j}_i(f^\\sharp_i(d'))}(e')\\right).\n    \\]\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\nSo what does \\cref{exc.comp_prod_lens} tell us about the behavior of $f\\tri g\\colon p\\tri q\\to p'\\tri q'$?\nBy \\eqref{eqn.comp_lens_pos}, on positions, $f\\tri g$ takes a $p$-position $i$ and sends it to the $p'$-position $f_\\1(i)$; then for each direction $d'$ at this position, the associated $q'$-position is obtained by sending $d'$ back to a $p[i]$-direction via $f^\\sharp_i$, checking what $q$-position is associated to that $p[i]$-direction via the given $\\bar{j}_i$, then sending that $q$-position forward again to a $q'$-position via $g_\\1$.\n\nThen by \\eqref{eqn.comp_lens_dir}, on directions, $f\\tri g$ sends a direction of $p'$ back to a direction of $p$ via an on-directions function of $f$, then sends a direction of $q'$ back to a direction of $q$ via an on-directions function of $g$.\nWe'll get a better sense of what's happening when we see this drawn out as corolla forests in \\cref{ex.comp_prod_trees}.\n\n\\subsection{Composite corolla forests} \\label{subsec.comon.comp.def.corolla}\n\nIt turns out that the forest of $p\\tri q$ is given by gluing corollas from the forest of $q$ onto the leaves of corollas from the forest of $p$ in every possible way.\nWe will demonstrate this using an example.\n\nLet's say $p\\coloneqq\\yon^\\2+\\yon$ and $q\\coloneqq\\yon^\\3+\\1$, whose corolla forests we draw as follows:\n\\begin{equation}\\label{eqn.pq_misc39}\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, blue!50!black, \"\\color{blue!50!black} $p$\" above] 
{\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$} \n      child {};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (p2) [draw, red!75!black, right=2 of p1, \"\\color{red!75!black} $q$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (4) {$\\bullet$}\n    ;\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\end{equation}\nBy \\eqref{eqn.comp_pos}, choosing a position of $p \\tri q$ amounts to first choosing a root $i$ from the forest of $p$, then choosing a root from the forest of $q$ for every leaf emanating from $i$.\nSo we may depict $(p \\tri q)(\\1)$ by gluing roots from $q$ to leaves in $p$ in every possible way, as follows:\n\\begin{equation}\\label{eqn.comp_pos_forest}\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"``$(${\\color{blue!50!black} $p$}$\\:\\tri\\:${\\color{red!75!black}$q$}$)(\\1)$''\" above] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  blue!50!black]\n    \\node[blue!50!black, \"\\tiny 1\" below] (1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 1\" above] {$\\bullet$}}\n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 1\" above] {$\\bullet$}};\n%\n    \\node[blue!50!black, right=1.5 of 1, \"\\tiny 1\" below] (2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 1\" above] {$\\bullet$}}\n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 2\" above] {$\\bullet$}};\n%\n    \\node[blue!50!black, right=1.5 of 2, \"\\tiny 1\" below] (3) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 2\" above] {$\\bullet$}}\n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 1\" above] {$\\bullet$}};\n%\n    \\node[blue!50!black, right=1.5 of 3, \"\\tiny 1\" below] (4) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 2\" above] {$\\bullet$}}\n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 2\" above] {$\\bullet$}};\n%\n    \\node[blue!50!black, right=1.2 of 4, \"\\tiny 2\" below] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 1\" above] {$\\bullet$}};\n%\n    \\node[blue!50!black, right=1 of 5, \"\\tiny 2\" below] (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black, \"\\color{red!75!black} \\tiny 2\" above] {$\\bullet$}};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\end{equation}\nNow fix one of the positions of $p \\tri q$ drawn above: a root $i$ from $p$ and a root from $q$ glued to every leaf emanating from $i$.\nBy \\eqref{eqn.comp_dir}, a direction of $p \\tri q$ at that position consists of a leaf $d$ emanating from the root $i$ from $p$ and a second leaf emanating from the root from $q$ that has been glued to $d$.\nIn other words, in the following picture, where we have glued not just roots but entire corollas from $q$ to leaves in $p$, the directions of $p \\tri q$ at the position corresponding to each tree are the rooted paths\\footnote{A \\emph{rooted path} of a rooted tree is a path up the tree that starts from the root.} of that tree of length $2$ (we omit the 
labels):\n\\begin{equation}\\label{eqn.prefered_composite}\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"``{\\color{blue!50!black} $p$}$\\:\\tri\\:${\\color{red!75!black}$q$}''\" above] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=2.5mm},\n\t  blue!50!black]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.7 of 1] (2) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 2] (3) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 3] (4) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}\n\t\t\t}\n      child {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1.2 of 4] (5) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 5] (6) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\end{equation}\nEquivalently, we can think of the directions in the picture above as the leaves at the second level of each tree.\nSo $p \\tri q$ has six positions; the first has six directions, the second, third, and fifth have three directions, and the fourth and sixth have no directions.\nIn total, we can read off that $p\\tri q$ is isomorphic to $\\yon^\\6+\\3\\yon^\\3+\\2$.\n\nWe put the $p\\tri q$ in scare quotes above \\eqref{eqn.prefered_composite} because, to be pedantic, the corolla forest of $p \\tri q$ has the two levels smashed together as follows:\n\\begin{equation}\\label{eqn.actual_composite}\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$p\\tri q$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (1) {$\\bullet$} \n      child {}\n      child {}\n      child {}\n      child {}\n      child {}\n      child {};\n    \\node[right=1 of 1] (2) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=1 of 2] (3) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=1 of 3] (4) {$\\bullet$};\n    \\node[right=1 of 4] (5) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=1 of 5] (6) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\end{equation}\nUsually, we will prefer the style of \\eqref{eqn.prefered_composite} rather than the more pedantic style of \\eqref{eqn.actual_composite}.\n\nWe have now seen how to draw a single polynomial as a corolla forest, with level-$1$ leaves as directions; as well as how to draw a two-fold composite of polynomials as a forest of trees, with level-$2$ leaves as directions.\nNote that drawing a corolla of $p$ or a tree of $p\\tri q$ is just a graphical way of following the 
instructions associated with the polynomial $p$ or $p\\tri q$ that we saw in \\cref{subsec.comon.comp.def.arena}, where the arrows---the top-level leaves---are where the ``futures'' would go.\nSimilarly, we could depict any $n$-fold composite as a forest with level-$n$ leaves as directions.\nYou'll have an opportunity to try this in the following exercise.\n\n\\begin{exercise}\nUse $p,q$ as in \\eqref{eqn.pq_misc39} and $r\\coloneqq \\2\\yon+\\1$ in the following.\n\\begin{enumerate}\n\t\\item Draw $q\\tri p$.\n\t\\item Draw $p\\tri p$.\n\t\\item Draw $p\\tri p\\tri \\1$.\n\t\\item Draw $r\\tri r$.\n\t\\item Draw $r\\tri r\\tri r$.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\nWe have $p \\coloneqq \\yon^\\2 + \\yon$ and $q \\coloneqq \\yon^\\3 + \\1$ as in \\eqref{eqn.pq_misc39}.\n\\begin{enumerate}\n    \\item Here is a picture of $q\\tri p$, where each tree is obtained by taking a corolla from $q$ and gluing corollas from $p$ to every leaf:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=4mm},\n\t  level 2/.style={sibling distance=2.5mm},\n\t  red!75!black]\n    \\node (1) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t};\n%\n    \\node[right=1.2 of 1] (2) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t};\n%\n    \\node[right=1.2 of 2] (3) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t};\n%\n    \\node[right=1.2 of 3] (4) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t};\n\n    \\node[right=1.2 of 4] (5) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t};\n\n    \\node[right=1.2 of 5] (6) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t};\n\n    \\node[right=1.2 of 6] (7) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      
\tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t\tchild[blue!50!black]\n\t\t\t};\n%\n    \\node[right=1.2 of 7] (8) {$\\bullet$} \n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t}\n      child {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black]\n\t\t\t};\n%\n    \\node[right=1 of 8] (9) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n    \n    \\item Here is a picture of $p\\tri p$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=3mm},\n\t  blue!50!black]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild\n\t\t\t\tchild\n\t\t\t}\n      child {node {$\\bullet$} \n      \tchild\n\t\t\t\tchild\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 1] (2) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild\n\t\t\t\tchild\n\t\t\t}\n      child {node {$\\bullet$} \n        child\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 2] (3) {$\\bullet$} \n      child {node {$\\bullet$} \n        child\n\t\t\t}\n      child {node {$\\bullet$} \n      \tchild\n\t\t\t\tchild\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 3] (4) {$\\bullet$} \n      child {node {$\\bullet$}\n        child\n\t\t\t}\n      child {node {$\\bullet$}\n        child\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.2 of 4] (5) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild\n\t\t\t\tchild\n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 5] (6) {$\\bullet$} \n      child {node {$\\bullet$} \n        child\n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\n    \\item To obtain a picture of $p\\tri p\\tri\\1$, we take our picture of $p\\tri p$ and glue the single, leafless root from $\\1$ to every (level-$2$) leaf:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=3mm},\n\t  blue!50!black]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t\tchild {node[black] {$\\bullet$}}\n\t\t\t}\n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t\tchild {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 1] (2) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t\tchild {node[black] {$\\bullet$}}\n\t\t\t}\n      child {node {$\\bullet$} \n        child {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 2] (3) {$\\bullet$} \n      child {node {$\\bullet$} \n        child {node[black] {$\\bullet$}}\n\t\t\t}\n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t\tchild {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 3] (4) {$\\bullet$} \n      child {node {$\\bullet$}\n        child {node[black] {$\\bullet$}}\n\t\t\t}\n      child {node {$\\bullet$}\n        child {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.2 of 4] (5) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t\tchild {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[blue!50!black, 
right=1 of 5] (6) {$\\bullet$} \n      child {node {$\\bullet$} \n        child {node[black] {$\\bullet$}}\n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\\end{enumerate}\n\nNow $r\\coloneqq \\2\\yon+\\1$. Before we draw the composites, here's a picture of $r$ itself, with different colors to distinguish the different positions:\n\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child[blue!50!black];\n%  \n    \\node[right=.5 of 1, red!75!black] (2) {$\\bullet$} \n      child[red!75!black];\n%\n    \\node[right=.5 of 2] (3) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\n\\begin{enumerate}[resume]\n    \\item Here is a picture of $r\\tri r$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=2.5mm},\n\t  blue!50!black]\n    \\node (1) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild\n\t\t\t};\n%\n    \\node[right=.5 of 1] (2) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[right=.5 of 2] (3) {$\\bullet$} \n      child {node[black] {$\\bullet$}};\n%\n    \\node[right=.5 of 3, red!75!black] (4) {$\\bullet$} \n      child[red!75!black] {node {$\\bullet$}\n        child};\n%\n    \\node[right=.5 of 4, red!75!black] (5) {$\\bullet$} \n      child[red!75!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[right=.5 of 5, red!75!black] (6) {$\\bullet$} \n      child[red!75!black] {node[black] {$\\bullet$}};\n%\n    \\node[right=.5 of 6, black] (7) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n    \n    \\item Here is a picture of $r\\tri r\\tri r$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=2.5mm},\n\t  blue!50!black]\n    \\node (1) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node {$\\bullet$}\n      \t  child\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 1] (2) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node[red!75!black] {$\\bullet$}\n      \t  child[red!75!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 2] (3) {$\\bullet$} \n      child {node {$\\bullet$} \n      \tchild {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[right=.5 of 3] (4) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[blue!50!black] {$\\bullet$}\n      \t  child[blue!50!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 4] (5) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[red!75!black] {$\\bullet$}\n      \t  child[red!75!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 5] (6) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[right=.5 of 6] (7) {$\\bullet$} \n      child {node[black] {$\\bullet$}};\n%\n    \\node[right=.5 of 7, red!75!black] (8) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black] {node[blue!50!black] {$\\bullet$}\n      \t  child[blue!50!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 8, red!75!black] (9) 
{$\\bullet$} \n      child[red!75!black] {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black] {node[red!75!black] {$\\bullet$}\n      \t  child[red!75!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 9, red!75!black] (10) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] {$\\bullet$} \n      \tchild[blue!50!black] {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[right=.5 of 10, red!75!black] (11) {$\\bullet$} \n      child[red!75!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[blue!50!black] {$\\bullet$}\n      \t  child[blue!50!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 11, red!75!black] (12) {$\\bullet$} \n      child[red!75!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[red!75!black] {$\\bullet$}\n      \t  child[red!75!black]\n      \t      }\n\t\t\t};\n%\n    \\node[right=.5 of 12, red!75!black] (13) {$\\bullet$} \n      child[red!75!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black] {node[black] {$\\bullet$}}\n\t\t\t};\n%\n    \\node[right=.5 of 13, red!75!black] (14) {$\\bullet$} \n      child[red!75!black] {node[black] {$\\bullet$}};\n%\n    \\node[right=.5 of 14, black] (15) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n\n% ** Find a home for this\n% Here is a curious fact relating composition and the closure operation adjoint to $\\otimes$. Recall that for any two polynomials $p,q$, there is a polynomial $[p,q]\\in\\poly$.\n \n% \\begin{proposition}\n% For any set $A\\in\\smset$ and any polynomial $p\\in\\poly$, there is an isomorphism\n% \\[\n% \\yon^A\\tri p\\cong[A\\yon,p].\n% \\]\n% \\end{proposition}\n% \\begin{proof}\n% \\begin{align*}\n% \t\\yon^A\\tri p&\\cong\n% \t\\prod_{a\\in A}\\sum_{i\\in p(\\1)}\\yon^{p[i]}\\\\&\\cong\n% \t\\sum_{i\\colon A\\to p(\\1)}\\prod_{a\\in A}\\yon^{p[i(a)]}\\\\&\\cong\n% \t\\sum_{i\\colon A\\to p(\\1)}\\yon^{\\sum_{a\\in A}p[i(a)]}\\\\&\\cong\n% \t\\sum_{\\varphi\\colon A\\yon\\to p}\\yon^{\\sum_{a\\in A\\yon(\\1)}\n% \tp[\\varphi_\\1(a)]}\\\\&\\cong\n% \t[A\\yon,p]\n% \\end{align*}\n% \\end{proof}\n\n\\begin{example}[Composing polynomials with constants] \\label{ex.apply_2}\nFor any set $X$ and polynomial $p$, we can take $p(X)\\in\\smset$; indeed $p\\colon\\smset\\to\\smset$ is a functor! In particular, by this point you've seen us write $p(\\1)$ hundreds of times. But we've also seen that $X$ is itself a polynomial, namely a constant one.\n\nIt's not hard to see that $p(X)\\iso p\\tri X$. 
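Indeed, by \eqref{eqn.composite_formula}, the constant polynomial $X$ contributes position-set $X$ and empty direction-sets, so the inner product over $e\in\varnothing$ is $\1$ and the formula collapses to
\[
p\tri X\iso\sum_{i\in p(\1)}\prod_{d\in p[i]}X\iso p(X);
\]
you will check this carefully in \cref{exc.composing_with_constants}.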
Here's a picture, where $p\\coloneqq\\yon^\\3+\\yon+\\1$ and $X\\coloneqq\\2$.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, blue!50!black, \"\\color{blue!50!black} $p$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$} \n      child {};\n      ;\n    \\node[right=.5 of 2,\"\\tiny 3\" below] (3) {$\\bullet$} \n      ;\n  \\end{tikzpicture}\n  };\n%\n\t\\node (p2) [draw, red!75!black, right=2 of p1, \"\\color{red!75!black}$X$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (4) {$\\diamond$};\n    \\node[above=10pt of 4] {};\n    ;\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nLet's see how $(\\yon^\\3+\\yon+\\1)\\tri\\2$ looks.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"{\\color{blue!50!black} $p$}$\\:\\tri\\:${\\color{red!75!black}$X$}\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm, blue!50!black]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\bullet$}};\n    \\node[blue!50!black, right=of 1] (2) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\diamond$}};\n    \\node[blue!50!black, right=of 2] (3) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\bullet$}};\n    \\node[blue!50!black, right=of 3] (4) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\diamond$}};\n    \\node[blue!50!black, right=of 4] (5) {$\\bullet$} \n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\bullet$}};\n    \\node[blue!50!black, right=of 5] (6) {$\\bullet$} \n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\bullet$}}\n      child {node[red!75!black] {$\\diamond$}};\n    \\node[blue!50!black, right=of 6] (7) {$\\bullet$} \n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\bullet$}};\n    \\node[blue!50!black, right=of 7] (8) {$\\bullet$} \n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\diamond$}}\n      child {node[red!75!black] {$\\diamond$}};\n    \\node[blue!50!black, right=.8 of 8] (9) {$\\bullet$} \n      child {node[red!75!black] {$\\bullet$}};\n    \\node[blue!50!black, right=.6 of 9] (10) {$\\bullet$} \n      child {node[red!75!black] {$\\diamond$}};\n    \\node[blue!50!black, right=.6 of 10] (11) {$\\bullet$};\n\t\\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nIt has $11$ positions and no level-$2$ leaves, which means it's a set (constant polynomial, with no directions), namely $p\\tri X\\iso \\1\\1$.\n\nWe could also draw $X\\tri p$, since both are perfectly valid polynomials. 
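This time \eqref{eqn.composite_formula} collapses the other way: each position of $X$ has an empty direction-set, so the product over $d\in\varnothing$ is $\1$ and
\[
X\tri p\iso\sum_{i\in X}\prod_{d\in\varnothing}\sum_{j\in p(\1)}\prod_{e\in p[j]}\yon\iso\sum_{i\in X}\1\iso X.
\]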
Here it is:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p2) [draw, \"{\\color{red!75!black}$X$}$\\:\\tri\\:${\\color{blue!50!black} $p$}\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm, red!75!black]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (4) {$\\diamond$};\n    \\node[above=10pt of 4] {};\n    ;\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nEach of the leaves in $X$---of which there are none---is given a corolla from $p$.\n\\end{example}\n\n\\begin{exercise}\\label{exc.composing_with_constants}\n\\begin{enumerate}\n\t\\item Choose a polynomial $p$ and draw $p\\tri\\1$ in the style of \\cref{ex.apply_2}.\n\t\\item Show that if $X$ is a set (considered as a constant polynomial) and $p$ is any polynomial, then $X\\tri p\\iso X$.\n\t\\item \\label{exc.composing_with_constants.appl} Show that if $X$ is a set and $p$ is a polynomial, then $p\\tri X\\iso p(X)$, where $p(X)$ is the set given by applying $p$ as a functor to $X$.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item We pick the list polynomial, $p\\coloneqq\\1+\\yon+\\yon^\\2+\\yon^\\3+\\cdots$, drawn as follows:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\\node (p1) [draw] {\n  \\begin{tikzpicture}[trees, sibling distance=3mm]\n    \\node (1) {$\\bullet$};\n    \\node[right=.3 of 1] (2) {$\\bullet$}\n      child {};\n    \\node[right=.4 of 2] (3) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.6 of 3] (4) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=.6 of 4] {$\\cdots$};\n  \\end{tikzpicture}\n};\n\\end{tikzpicture}\n\\]\nThen here is a picture of $p\\tri\\1$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\\node (p1) [draw] {\n  \\begin{tikzpicture}[trees, sibling distance=3mm]\n    \\node (1) {$\\bullet$};\n    \\node[right=.3 of 1] (2) {$\\bullet$}\n      child {node {$\\bullet$}};\n    \\node[right=.4 of 2] (3) {$\\bullet$} \n      child {node {$\\bullet$}}\n      child {node {$\\bullet$}};\n    \\node[right=.6 of 3] (4) {$\\bullet$} \n      child {node {$\\bullet$}}\n      child {node {$\\bullet$}}\n      child {node {$\\bullet$}};\n    \\node[right=.6 of 4] {$\\cdots$};\n  \\end{tikzpicture}\n};\n\\end{tikzpicture}\n\\]\n\\end{enumerate}\nBelow, $X$ is a set and $p$ is a polynomial.\n\\begin{enumerate}[resume]\n    \\item A constant functor composed with any functor is still the same constant functor, so $X \\tri p \\iso X$.\n    We can also verify this using \\eqref{eqn.composite_formula}:\n    \\[\n        X \\tri p \\iso \\sum_{i \\in X} \\prod_{d \\in \\varnothing} \\sum_{j \\in p(\\1)} \\prod_{e \\in p[j]} \\yon \\iso \\sum_{i \\in X} \\1 \\iso X.\n    \\]\n    \\item When viewed as functors, it is easy to see that $p \\tri X \\iso p(X)$.\n    We can also verify this using \\eqref{eqn.composite_formula}:\n    \\[\n        p \\tri X \\iso \\sum_{i \\in p(\\1)} \\prod_{d \\in p[i]} \\sum_{j \\in X} \\prod_{e \\in \\varnothing} \\yon \\iso \\sum_{i \\in p(\\1)} \\prod_{d \\in p[i]} \\sum_{j \\in X} \\1 \\iso \\sum_{i \\in p(\\1)} \\prod_{d \\in p[i]} X \\iso \\sum_{i \\in p(\\1)} X^{p[i]} \\iso p(X).\n    \\]\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n% **Move the proposition & exercise below to part about coclosure\n% \\begin{proposition}\\label{prop.flipping_reps_lins}\n% For all sets $A,B$, we have the following adjunction:\n% \\[\n% \\poly\\left(A\\yon\\tri p\\tri \\yon^B, q\\right)\\cong\\poly\\left(p,\\yon^A\\tri q\\tri By\\right)\n% 
\\]\n% Moreover, this isomorphism is natural in $A\\in\\smset\\op$ and $B\\in\\smset$.\n% \\end{proposition}\n\n% \\[\n% \\begin{tikzpicture}\n% \t\\node (p1) {\n%   \\begin{tikzpicture}[polybox,tos]\n%   \t\\node[poly, \"$p$\" left] (p) {};\n%   \t\\node[poly, linear, below=of p, \"$A\\yon$\" left] (Ay) {};\n%   \t\\node[poly, pure, above=of p, \"$\\yon^B$\" left] (yB) {};\n%   \t\\node[poly, right=2 of p, \"$q$\" right] (q) {};\n%   \t\\node at ($(p.east)!.5!(q.west)$) {$\\leftrightarrows$};\n%   \\end{tikzpicture}\n%   };\n%  \\node[right=4 of p1] (p2) {\n%  \\begin{tikzpicture}[polybox,tos]\n%   \t\\node[poly, \"$p$\" left] (p) {};\n%   \t\\node[poly, right=2 of p, \"$q$\" right] (q) {};\n%   \t\\node[poly, linear, above=of q, \"$B\\yon$\" right] (By) {};\n%   \t\\node[poly, pure, below=of q, \"$\\yon^A$\" right] (yA) {};\n% \t\t\\draw (p_pos) to[first] (yA_pos);\n% \t\t\\draw (yA_dir) to[climb] (q_pos);\n% \t\t\\draw (q_dir) to[climb] (By_pos);\n% \t\t\\draw (By_dir) to[last] (p_dir);\n%  \\end{tikzpicture}\n%  };\n%  \\node[align=center] at ($(p1)!.5!(p2)$) {is the\\\\same as};\n% \\end{tikzpicture} \n% \\]\n% Do you see how polyboxes with a black (one-element) part can flip upside-down to go to the other side?\n% \\begin{proof}\n% We prove this in two pieces: that\n% \\begin{equation} \\label{eqn.flipping1}\n% \\poly\\left(A\\yon\\tri p, q\\right)\\cong\\poly\\left(p,\\yon^A\\tri q\\right)\n% \\end{equation}\n% and that\n% \\begin{equation} \\label{eqn.flipping2}\n% \\poly\\left(p \\tri \\yon^B, q\\right)\\cong\\poly\\left(p, q\\tri B\\yon\\right)\n% \\end{equation}\n\n% For \\eqref{eqn.flipping1}, we have that $A\\yon \\tri p \\cong Ap$, an $A$-fold coproduct of $p$.\n% Similarly, $\\yon^A \\tri q \\cong q^A$, an $A$-fold product of $q$.\n% So this follows from the corresponding universal properties.\n\n% For \\eqref{eqn.flipping2}, we first write out the two sets by hand.\n% To give a map from $p \\tri \\yon^B$ to $q$, we must provide for every $i \\in p(\\1)$ an element $j \\in q(\\1)$ and a function $q[j] \\to B \\times p[i]$.\n% Then to give a map from $p$ to $q \\tri B\\yon$, we must provide for every $i \\in p(\\1)$ an element $j \\in q(\\1)$ and for every $n \\in q[j]$, an element of $B$ and an element of $p[i]$.\n% These are clearly isomorphic.\n% \\end{proof}\n\n% \\begin{exercise}\n% Let $A,B\\in\\smset$ be sets, and let $p\\in\\poly$ be a polynomial. Is it true that the morphisms $A\\yon^B\\to p$ can be identified with the morphisms $A\\to p\\tri B$, i.e.\\ that there is a bijection:\n% \\begin{equation}\\label{eqn.monomials_and_comp}\n% \t\\poly(A\\yon^B,p)\\cong^?\\poly(A,p\\tri B)\n% \\end{equation}\n% If so, why? 
If not, give a counterexample.\n% \\begin{solution}\n% **\n% \\end{solution}\n% \\end{exercise}\n\n\\begin{exercise}\\label{ex.compose_yon}\nFor any $p\\in\\poly$ there are natural isomorphisms $p\\iso p\\tri \\yon$ and $p\\iso\\yon\\tri p$.\n\\begin{enumerate}\n\t\\item Thinking of polynomials as functors $\\smset\\to\\smset$, what functor does $\\yon$ represent?\n\t\\item Why are $p\\tri\\yon$ and $\\yon\\tri p$ isomorphic to $p$?\n\t\\item Let $p\\coloneqq\\yon^\\3+\\yon+\\1$.\n\tIn terms of tree pictures, draw $p\\tri\\yon$ and $\\yon\\tri p$, and explain pictorially how to see the isomorphisms $p\\tri\\yon \\iso p \\iso \\yon\\tri p$.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item The polynomial $\\yon$ is the identity functor on $\\smset$.\n    \\item Composing any functor with the identity functor yields the original functor, so $p\\tri\\yon \\iso p \\iso \\yon\\tri p$.\n    \\item Before we draw $\\yon\\tri p$ and $p\\tri\\yon$, here are pictures of $p$ and $\\yon$ individually as corolla forests:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, blue!50!black, \"\\color{blue!50!black} $p$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (1) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=.5 of 1] (2) {$\\bullet$} \n      child {};\n    \\node[right=.5 of 2] (3) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (p2) [draw, right=2 of p1, \"$\\yon$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (1) {$\\bullet$}\n      child {};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nNow here is a picture of $p\\tri\\yon$, obtained by gluing the one-leaf corolla of $\\yon$ to all the leaves of each of the corollas of $p$ in turn:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, blue!50!black, \"{\\color{blue!50!black}$p$}$\\:\\tri\\:\\yon$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node (1) {$\\bullet$} \n      child {node[black] {$\\bullet$} child[black]}\n      child {node[black] {$\\bullet$} child[black]}\n      child {node[black] {$\\bullet$} child[black]};\n    \\node[right=1 of 1] (2) {$\\bullet$} \n      child {node[black] {$\\bullet$} child[black]};\n    \\node[right=.5 of 2] (3) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThis is just $p$ with every direction extended up one level, so it is still a picture of $p$.\n\nAnd here is a picture of $\\yon\\tri p$, obtained by gluing each of the corollas of $p$ to the single leaf of $\\yon$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p2) [draw, \"$\\yon\\:\\tri\\:${\\color{blue!50!black}$p$}\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (1) {$\\bullet$}\n      child {\n        node[blue!50!black] {$\\bullet$}\n            child[blue!50!black]\n            child[blue!50!black]\n            child[blue!50!black]\n      };\n    \\node[right=.5 of 1] (2) {$\\bullet$} \n      child {\n        node[blue!50!black] {$\\bullet$}\n            child[blue!50!black]\n      };\n    \\node[right=.5 of 2] (3) {$\\bullet$}\n      child {\n        node[blue!50!black] {$\\bullet$}\n      };\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThis is just $p$ with every position propped up one level, so it is also still a picture of $p$.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n
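\nThese counting arguments are easy to play with in code.
Here is a minimal standalone Haskell sketch (our own illustration; the encoding and the names \texttt{Poly}, \texttt{eval}, and \texttt{comp} are not from the text) in which a finitary polynomial is presented by the list of its positions' arities and a composite is computed by gluing a corolla of the right factor onto each leaf of a corolla of the left factor, in every possible way:\n\\begin{verbatim}\nimport Control.Monad (replicateM)\n\n-- A finitary polynomial, as the list of its positions' arities:\n-- y^3 + y + 1 is [3,1,0]; the set 2 is [0,0]; the identity y is [1].\ntype Poly = [Int]\n\n-- p(X): evaluate p as a functor on a finite set with x elements.\neval :: Poly -> Int -> Int\neval p x = sum [x ^ a | a <- p]\n\n-- p \tri q: a position is a corolla of p with a corolla of q glued\n-- to each of its leaves; its arity is the number of level-2 leaves.\ncomp :: Poly -> Poly -> Poly\ncomp p q = [sum qArities | a <- p, qArities <- replicateM a q]\n\nmain :: IO ()\nmain = do\n  let p = [3, 1, 0]  -- y^3 + y + 1\n      x = [0, 0]     -- the set 2, as a constant polynomial\n      i = [1]        -- the identity y\n  print (comp p x)   -- eleven positions, each of arity 0\n  print (length (comp p x) == eval p 2)   -- True: p \tri 2 is the set p(2)\n  print (comp x p == x)                   -- True: X \tri p is X\n  print (comp p i == p && comp i p == p)  -- True: p \tri y = p = y \tri p\n\\end{verbatim}\n\nHow shall we think about taking the composition product of lenses in terms of our tree pictures?\nWe can\n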
interpret the results of \\cref{exc.comp_prod_lens} as follows.\n\n\\begin{example}\\label{ex.comp_prod_trees}\nLet's take $p\\coloneqq \\yon^\\2+\\yon$, $q\\coloneqq\\yon^\\2+\\yon$, $p'\\coloneqq\\yon^\\3+\\yon$, and $q'\\coloneqq\\yon+\\1$.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, blue!50!black, \"$p=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$} \n      child {};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (q) [draw, red!75!black, above=1 of p, \"$q=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$} \n      child {};\n  \\end{tikzpicture}\n  };\n\t\\node (p') [draw, blue, right=3 of p, \"$p'=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {}\n      child {}\n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$}\n      child {};\n  \\end{tikzpicture}\n  };\n\t\\node (q') [draw, red, above=1 of p', \"$q'=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node[\"\\tiny 1\" below] (1) {$\\bullet$} \n      child {};\n    \\node[right=.5 of 1,\"\\tiny 2\" below] (2) {$\\bullet$}\n    ;\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nFor any pair of lenses $p\\to p'$ and $q\\to q'$, we have a lens $p\\tri q\\to p'\\tri q'$. Let's draw $p\\tri q$ and $p'\\tri q'$.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$p\\tri q$\" above] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.7 of 1] (2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 2] (3) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.5 of 3] (4) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.2 of 4] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=.8 of 5] (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$p'\\tri q'$\" above] 
{\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=4mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue] (1) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t\tchild[red]\n\t\t\t};\n%\n    \\node[blue, right=1.4 of 1] (2) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=1.4 of 2] (3) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t\tchild[red]\n\t\t\t};\n%\n    \\node[blue, right=1.4 of 3] (4) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=1.4 of 4] (5) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t\tchild[red]\n\t\t\t};\n%\n    \\node[blue, right=1.4 of 5] (6) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=1.4 of 6] (7) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t};\n%\n    \\node[blue, right=1.4 of 7] (8) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=1 of 8] (9) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n      \tchild[red]\n\t\t\t};\n%\n    \\node[blue, right=.8 of 9] (10) {$\\bullet$} \n      child[blue] {node[red] {$\\bullet$} \n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\nLet's also pick a pair of lenses, $f\\colon p\\to p'$ and $g\\colon q\\to q'$.\n\\[\n\\begin{tikzpicture}\n\t\\node (p1) {\\raisebox{.3cm}{$f\\colon p\\to p'$}\\qquad\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[blue!50!black, \"\\color{blue!50!black} \\tiny 1\" below] (1) {$\\bullet$} \n      child[blue!50!black] {coordinate (11)}\n      child[blue!50!black] {coordinate (12)};\n    \\node[right=1.5 of 1, blue, \"\\color{blue} \\tiny 1\" below] (2) {$\\bullet$} \n      child[blue] {coordinate (21)}\n      child[blue] {coordinate (22)}\n      child[blue] {coordinate (23)};\n    \\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (2);\n    \\begin{scope}[densely dotted, bend right]\n      \\draw[postaction={decorate}] (21) to (12);\n      \\draw[postaction={decorate}] (22) to (12);\n      \\draw[postaction={decorate}] (23) to (11);\n    \\end{scope}\n  \\end{tikzpicture}\t\n\t};\t\n%\n\t\\node (p2) [below right=-1.3 and 1 of p1] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[blue!50!black, \"\\color{blue!50!black} \\tiny 2\" below] (1) {$\\bullet$} \n      child[blue!50!black] {coordinate (11)};\n    \\node[right=of 1, blue, \"\\color{blue} 
\\tiny 2\" below] (2) {$\\bullet$}\n      child[blue] {coordinate (21)};\n    \\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (2);\n    \\begin{scope}[densely dotted, bend right]\n      \\draw[postaction={decorate}] (21) to (11);\n\t\t\\end{scope}\n  \\end{tikzpicture}\t\n\t};\t\n\t\\node [below=.5 of p1] (p3) {\\raisebox{.3cm}{$g\\colon q\\to q'$}\\qquad\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[red!75!black, \"\\color{red!75!black} \\tiny 1\" below] (1) {$\\bullet$} \n      child[red!75!black] {coordinate (11)}\n      child[red!75!black] {coordinate (12)};\n    \\node[right=1.5 of 1, red, \"\\color{red} \\tiny 1\" below] (2) {$\\bullet$} \n      child[red] {coordinate (21)};\n    \\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (2);\n    \\begin{scope}[densely dotted, bend right]\n      \\draw[postaction={decorate}] (21) to (12);\n    \\end{scope}\n  \\end{tikzpicture}\t\n\t};\t\n%\n\t\\node (p4) [below right=-1.05 and 1 of p3] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[red!75!black, \"\\color{red!75!black} \\tiny 2\" below] (1) {$\\bullet$} \n      child[red!75!black] {coordinate (11)};\n    \\node[right=of 1, red, \"\\color{red} \\tiny 2\" below] (2) {$\\bullet$};\n    \\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (2);\n  \\end{tikzpicture}\t\n\t};\t\n\\end{tikzpicture}\n\\]\nThen by \\cref{exc.comp_prod_lens}, we can form the lens $f\\tri g\\colon p\\tri q\\to p'\\tri q'$ as follows.\nOn positions, we follow \\eqref{eqn.comp_lens_pos}: for each tree $t$ in the picture of $p\\tri q$, we begin by using $f_\\1$ to send the corolla $i$ of $p$ that forms the bottom level of $t$ to a corolla $i'$ of $p'$.\nThen for each leaf $d'$ of $i'$, to choose what corolla of $q'$ gets glued to $d'$, we use $f^\\sharp_i$ to send $d'$ back to a leaf $d$ of corolla $i$.\nSince $t$ has corolla $i$ as its bottom level, $d$ is just a level-$1$ vertex of the tree $t$.\nSo we can take the corolla $j$ of $q$ glued to $d$ in $t$, then use $g_\\1$ to send $j$ forward to a corolla $j'$ of $q'$.\nThis is the corolla we glue to $d'$.\nAll this specifies a tree $t'$ in $p'\\tri q'$ that $t$ gets sent to via $(f\\tri g)_\\1$.\n\nOn directions, we follow \\eqref{eqn.comp_lens_dir}: picking a direction of $t'$ consists of picking a level-$1$ vertex $d'$ and a level-$2$ leaf $e'$ emanating from $d'$.\nThe on-directions function $f^\\sharp_i$ sends $d'$ back to a level-$1$ vertex $d$ of $t$, and as we saw, the on-positions function $g_\\1$ sends the corolla $j$ of $q$ glued to $d$ in $t$ forward to the corolla of $q'$ glued to $d'$.\nThen $e'$ is a leaf of that corolla, and $g^\\sharp_j$ sends $e'$ back to a leaf $e$ emanating from $d$.\nSo the on-directions function $(f\\tri g)^\\sharp_t$ sends the level-$2$ leaf $e'$ to the level-$2$ leaf $e$.\n\nWe draw the lens $f\\tri g\\to p\\tri q\\to p'\\tri q'$ below.\nTo avoid clutter, we leave out the arrows for $g_\\1$ that show how the red corollas on the right are selected; we hope the reader can put it together for themselves.\n\\[\n\t\\begin{tikzpicture}[trees]\n\t\\begin{scope}[\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=5mm}]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (11') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (11)}\n\t\t\t\tchild[red!75!black] {coordinate (12)}\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] (12') {$\\bullet$} \n      \tchild[red!75!black] {coordinate 
(13)}\n\t\t\t\tchild[red!75!black] {coordinate (14)}\n\t\t\t};\n%\n    \\node[blue!50!black, right=5 of 1] (2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (21') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (21)}\n\t\t\t\tchild[red!75!black] {coordinate (22)}\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] (22') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (23)}\n\t\t\t};\n%\n    \\node[blue!50!black, below=1.3 of 1] (3) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (31') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (31)}\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] (32') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (32)}\n\t\t\t\tchild[red!75!black] {coordinate (33)}\n\t\t\t};\n%\n    \\node[blue!50!black] at (2|-3) (4) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (41') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (41)}\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] (42') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (42)}\n\t\t\t};\n%\n    \\node[blue!50!black, below=1.3 of 3] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (51') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (51)}\n\t\t\t\tchild[red!75!black] {coordinate (52)}\n\t\t\t};\n%\n    \\node[blue!50!black] at (4|-5) (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] (61') {$\\bullet$} \n      \tchild[red!75!black] {coordinate (61)}\n\t\t\t};\n\t\t\\end{scope}\n%%\n\t\\begin{scope}[\t\t\n\t\tlevel 1/.style={sibling distance=4mm},\n\t  level 2/.style={sibling distance=5mm}]\n\t    \\node[blue, right=2 of 1] (1') {$\\bullet$} \n      child[blue] {node[red] (1'1') {$\\bullet$} \n      \tchild[red] {coordinate (1'1)}\n\t\t\t}\n      child[blue] {node[red] (1'2') {$\\bullet$} \n      \tchild[red] {coordinate (1'2)}\n\t\t\t}\n      child[blue] {node[red] (1'3') {$\\bullet$} \n      \tchild[red] {coordinate (1'3)}\n\t\t\t};\n%\n    \\node[blue, right=2 of 2] (2') {$\\bullet$} \n      child[blue] {node[red] (2'1') {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] (2'2') {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] (2'3') {$\\bullet$} \n      \tchild[red] {coordinate (2'1)}\n\t\t\t};\n%\n    \\node[blue, right=2 of 3] (3') {$\\bullet$} \n      child[blue] {node[red] (3'1') {$\\bullet$} \n      \tchild[red] {coordinate (3'1)}\n\t\t\t}\n      child[blue] {node[red] (3'2') {$\\bullet$} \n      \tchild[red] {coordinate (3'2)}\n\t\t\t}\n      child[blue] {node[red] (3'3') {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=2 of 4] (4') {$\\bullet$} \n      child[blue] {node[red] (4'1') {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] (4'2') {$\\bullet$} \n\t\t\t}\n      child[blue] {node[red] (4'3') {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue, right=2 of 5] (5') {$\\bullet$} \n      child[blue] {node[red] (5'1') {$\\bullet$} \n      \tchild[red] {coordinate (5'1)}\n\t\t\t};\n%\n    \\node[blue, right=2 of 6] (6') {$\\bullet$} \n      child[blue] {node[red] (6'1') {$\\bullet$} \n\t\t\t};\n%\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (1');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (2) -- (2');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (3) -- (3');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (4) -- (4');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (5) -- (5');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (6) -- (6');\n    \\begin{scope}[densely dotted, bend right=15pt]\n      \\draw[postaction={decorate}] (1'1') to (12');\n      
\\draw[postaction={decorate}] (1'2') to (12');\n      \\draw[postaction={decorate}] (1'3') to (11');\n      \\draw[postaction={decorate}] (1'1) to (14);\n      \\draw[postaction={decorate}] (1'2) to (14);\n      \\draw[postaction={decorate}] (1'3) to (12);\n%\n      \\draw[postaction={decorate}] (2'1') to (22');\n      \\draw[postaction={decorate}] (2'2') to (22');\n      \\draw[postaction={decorate}] (2'3') to (21');\n      \\draw[postaction={decorate}] (2'1) to (23);\n%\n      \\draw[postaction={decorate}] (3'1') to (32');\n      \\draw[postaction={decorate}] (3'2') to (32');\n      \\draw[postaction={decorate}] (3'3') to (31');\n      \\draw[postaction={decorate}] (3'1) to (33);\n      \\draw[postaction={decorate}] (3'2) to (33);\n%\n      \\draw[postaction={decorate}] (4'1') to (42');\n      \\draw[postaction={decorate}] (4'2') to (42');\n      \\draw[postaction={decorate}] (4'3') to (41');\n%\n      \\draw[postaction={decorate}] (5'1') to (51');\n      \\draw[postaction={decorate}] (5'1) to (52);\n%\n      \\draw[postaction={decorate}] (6'1') to (61');\n    \\end{scope}\n\n\t\\end{scope}\n  \\end{tikzpicture}\n\\]\n\\end{example}\n\n\\begin{exercise}\nWith $p,q,p',q'$ and $f,g$ as in \\cref{ex.comp_prod_trees}, draw the lens $g\\tri f\\colon q\\tri p\\to q'\\tri p'$ in terms of trees as in the example.\n\\begin{solution}\nUsing the definitions, instructions, and style from \\cref{ex.comp_prod_trees}, we draw $g\\tri f\\colon q\\tri p\\to q'\\tri p'$:\n\\[\n\t\\begin{tikzpicture}[trees]\n\t\\begin{scope}[\n\t\tlevel 1/.style={sibling distance=8mm},\n\t  level 2/.style={sibling distance=5mm}]\n    \\node[red!75!black] (1) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (11') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (11)}\n\t\t\t\tchild[blue!50!black] {coordinate (12)}\n\t\t\t}\n      child[red!75!black] {node[blue!50!black] (12') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (13)}\n\t\t\t\tchild[blue!50!black] {coordinate (14)}\n\t\t\t};\n%\n    \\node[red!75!black, right=5 of 1] (2) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (21') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (21)}\n\t\t\t\tchild[blue!50!black] {coordinate (22)}\n\t\t\t}\n      child[red!75!black] {node[blue!50!black] (22') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (23)}\n\t\t\t};\n%\n    \\node[red!75!black, below=1.3 of 1] (3) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (31') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (31)}\n\t\t\t}\n      child[red!75!black] {node[blue!50!black] (32') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (32)}\n\t\t\t\tchild[blue!50!black] {coordinate (33)}\n\t\t\t};\n%\n    \\node[red!75!black] at (2|-3) (4) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (41') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (41)}\n\t\t\t}\n      child[red!75!black] {node[blue!50!black] (42') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (42)}\n\t\t\t};\n%\n    \\node[red!75!black, below=1.3 of 3] (5) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (51') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (51)}\n\t\t\t\tchild[blue!50!black] {coordinate (52)}\n\t\t\t};\n%\n    \\node[red!75!black] at (4|-5) (6) {$\\bullet$} \n      child[red!75!black] {node[blue!50!black] (61') {$\\bullet$} \n      \tchild[blue!50!black] {coordinate (61)}\n\t\t\t};\n\t\t\\end{scope}\n%%\n\t\\begin{scope}[\t\t\n\t\tlevel 1/.style={sibling 
distance=4mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n\t    \\node[red, right=2 of 1] (1') {$\\bullet$} \n      child[red] {node[blue] (1'1') {$\\bullet$} \n      \tchild[blue] {coordinate (1'1)}\n      \tchild[blue] {coordinate (1'2)}\n      \tchild[blue] {coordinate (1'3)}\n\t\t\t};\n%\n    \\node[red, right=2 of 2] (2') {$\\bullet$} \n      child[red] {node[blue] (2'1') {$\\bullet$} \n        child[blue] {coordinate (2'1)}\n\t\t\t};\n%\n    \\node[red, right=2 of 3] (3') {$\\bullet$} \n      child[red] {node[blue] (3'1') {$\\bullet$} \n      \tchild[blue] {coordinate (3'1)} \n      \tchild[blue] {coordinate (3'2)} \n      \tchild[blue] {coordinate (3'3)}\n\t\t\t};\n%\n    \\node[red, right=2 of 4] (4') {$\\bullet$} \n      child[red] {node[blue] (4'1') {$\\bullet$}  \n      \tchild[blue] {coordinate (4'1)}\n\t\t\t};\n%\n    \\node[red, right=2 of 5] (5') {$\\bullet$};\n%\n    \\node[red, right=2 of 6] (6') {$\\bullet$};\n%\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (1');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (2) -- (2');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (3) -- (3');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (4) -- (4');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (5) -- (5');\n\\draw[|->, shorten <= 3pt, shorten >= 3pt] (6) -- (6');\n    \\begin{scope}[densely dotted, bend right=15pt]\n      \\draw[postaction={decorate}] (1'1') to (12');\n      \\draw[postaction={decorate}] (1'1) to (14);\n      \\draw[postaction={decorate}] (1'2) to (14);\n      \\draw[postaction={decorate}] (1'3) to (13);\n%\n      \\draw[postaction={decorate}] (2'1') to (22');\n      \\draw[postaction={decorate}] (2'1) to (23);\n%\n      \\draw[postaction={decorate}] (3'1') to (32');\n      \\draw[postaction={decorate}] (3'1) to (33);\n      \\draw[postaction={decorate}] (3'2) to (33);\n      \\draw[postaction={decorate}] (3'3) to (32);\n%\n      \\draw[postaction={decorate}] (4'1') to (42');\n      \\draw[postaction={decorate}] (4'1) to (42);\n    \\end{scope}\n\n\t\\end{scope}\n  \\end{tikzpicture}\n\\]\n\\end{solution}\n\\end{exercise}\n\n\\begin{exercise}\nSuppose $p$, $q$, and $r$ are polynomials and you're given arbitrary lenses $f\\colon q\\to p\\tri q$ and $g\\colon q\\to q\\tri r$. 
Does the following diagram necessarily commute?\\footnote{When the name of an object is used in place of a morphism, we refer to the identity morphism on that object.\nSo for instance, $f\\tri r$ is the composition product of $f$ with the identity lens on $r$.}\n\\[\n\\begin{tikzcd}\n\tq\\ar[r, \"g\"]\\ar[d, \"f\"']&\n\tq\\tri r\\ar[d, \"f\\:\\tri\\:r\"]\\\\\n\tp\\tri q\\ar[r, \"p\\:\\tri\\:g\"']&\n\tp\\tri q\\tri r\\ar[ul, phantom, \"?\"]\n\\end{tikzcd}\n\\]\nThat is, do we have $f\\then (p\\tri g)=^?g\\then (f\\tri r)$?\n\\begin{solution}\nGiven arbitrary polynomials $p,q,r$ and lenses $f\\colon q\\to p\\tri q$ and $g\\colon q\\to q\\tri r$, it is \\emph{not} necessarily the case that $f\\then (p\\tri g)=g\\then (f\\tri r)$!\nAfter all, we can let $p\\coloneqq\\yon$ and $q\\coloneqq\\2$ so that $f$ is a lens $\\2\\to\\yon\\tri\\2\\iso\\2$ (see \\cref{ex.compose_yon}) and $g$ is a lens $\\2\\to\\2\\tri r\\iso\\2$ (see \\cref{exc.composing_with_constants}).\nThen by following the instructions for interpreting a composition product of lenses from either \\cref{exc.comp_prod_lens} or \\cref{ex.comp_prod_trees}, we can verify that $p\\tri g=\\yon\\tri g$ is a lens $\\2\\iso\\yon\\tri\\2\\to\\yon\\tri\\2\\tri r\\iso\\2$ equivalent to the lens $g$, while $f\\tri r$ is a lens $\\2\\iso\\2\\tri r\\to\\yon\\tri\\2\\tri r\\iso\\2$ equivalent to the lens $f$.\nIf, say, we let $f\\colon\\2\\to\\2$ be the function sending everything to $1\\in\\2$ and $g\\colon\\2\\to\\2$ be the function sending everything to $2\\in\\2$, then in this case $f\\then (p\\tri g)=f\\then g\\neq g\\then f=g\\then (f\\tri r)$.\n\\end{solution}\n\\end{exercise}\n\n%-------- Section --------%\n\\section{Lenses to composites}\\label{sec.comon.comp.to_comp}\n\nLenses to composites---that is, lenses of the form $f\\colon p\\to q_1\\tri q_2\\tri\\cdots\\tri q_n$ for some $n\\in\\nn$ with composites as their codomains---will be ubiquitous in the remainder of our story.\nFortunately, they have some very nice properties that make them convenient to work with.\nBefore we explore these properties, we'll introduce a new way of visualizing lenses that is well-suited to capturing the behavior of lenses to composites.\n\n\\subsection{Lenses as polyboxes}\nFirst, let us consider what goes into specifying a lens $f\\colon p\\to q$.\nTo visualize this, we will introduce a new form of notation, which we will call \\emph{polyboxes}, that will generalize well to lenses to composites:\n\\begin{equation} \\label{eqn.polybox_lens}\n\\begin{tikzpicture}\n  \\node (f) [\"$f\\colon p\\to q$\" above] {\n    \\begin{tikzpicture}[polybox, tos]\n  \t  \\node[poly, dom, \"$p$\" below] (p) {};\n  \t  \\node[left=0pt of p_pos] {$p(\\1)$};\n  \t  \\node[left=0pt of p_dir] {$p[-]$};\n\n  \t  \\node[poly, cod, right=of p, \"$q$\" below] (q) {};\n  \t  \\node[right=0pt of q_pos] {$q(\\1)$};\n\t  \\node[right=0pt of q_dir] {$q[-]$};\n\t  \n  \t  \\draw (p_pos) -- node[below] {} (q_pos);\n  \t  \\draw (q_dir) -- node[above] {} (p_dir);\n    \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\end{equation}\nIn our polyboxes, each pair of boxes stacked on top of each other represents a single polynomial.\nThe bottom box in a pair can be filled with any position of the polynomial, while the top box must be filled with a direction of the polynomial at the position in the bottom box.\n\nWe think of \\eqref{eqn.polybox_lens} as depicting the lens $f$ as a sort of gadget that acts like an automated spreadsheet: the blue boxes accept user input, while the white boxes are 
computed based on what is entered into the blue boxes according to the spreadsheet's preprogrammed rules.\nThe arrows track the flow of information, starting from the bottom left.\n\nWhen the user fills the bottom left blue box with a $p$-position $i$, the arrow to the right tells us that the gadget should automatically fill the bottom right white box with some $q$-position $j$, based on the value $i$ that has already been entered.\nThis process yields a map $i\\mapsto j$ that corresponds to the on-positions function $f_\\1$ of the lens.\n\nThen when the user fills the top right blue box with a $q[j]$-direction $e$, the arrow to the left tells us that the gadget should automatically fill the top left white box with some $p[i]$-direction $d$, based on the values $i$ and $e$ that have already been entered.\nFixing $i\\in p(\\1)$, this process yields a map $e\\mapsto d$ that corresponds to the on-directions function $f^\\sharp_i$ of the lens.\n\nSo when both the user and the automation have finished filling all the boxes, we'll end up with something that looks like this:\n\\begin{equation}\\label{eqn.polybox_lens_filled}\n\\begin{tikzpicture}[polybox, mapstos]\n    \\node[poly, dom, \"$p$\" left] (p) {$d$\\at$i$};\n    \\node[poly, cod, \"$q$\" right, right=of p] (q) {$e$\\at$j$};\n    \\draw (p_pos) -- node[below] {$f_\\1$} (q_pos);\n    \\draw (q_dir) -- node[above] {$f^\\sharp$} (p_dir);\n\\end{tikzpicture}\n\\end{equation}\nHere, of course, $j\\coloneqq f_\\1(i)$ and $d\\coloneqq f^\\sharp_i(e)$.\nSo a lens is any protocol that will fill in the white boxes once the user fills in the blue boxes, following the directions of the arrows drawn.\n\nIf we have two composable lenses $f\\colon p\\to q$ and $g\\colon q\\to r$, then we can piece their polyboxes together to form polyboxes for their composite, $f\\then g\\colon p\\to r$:\n\\[\n\\begin{tikzpicture}[polybox, tos]\n    \\node[poly, dom, \"$p$\" below] (p) {};\n\n    \\node[poly, right=of p, \"$q$\" below] (q) {};\n\n    \\node[poly, cod, right=of q, \"$r$\" below] (r) {};\n  \n    \\draw (p_pos) -- node[below] {$f_\\1$} (q_pos);\n    \\draw (q_dir) -- node[above] {$f^\\sharp$} (p_dir);\n  \n    \\draw (q_pos) -- node[below] {$g_\\1$} (r_pos);\n    \\draw (r_dir) -- node[above] {$g^\\sharp$} (q_dir);\n\\end{tikzpicture}\n\\]\nThe bottom (position) box for $q$, which would normally be blue as part of the polyboxes for $g\\colon q\\to r$, is instead filled in via $f_\\1$; similarly, the top (direction) box for $q$, which would normally be blue as part of the polyboxes for $f\\colon p\\to q$, is filled in via $g^\\sharp$.\nThis forms a gadget that is equivalent to what the polyboxes would be for $f\\then g$.\n
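\nIn programming terms, a single polybox gadget is just a pair of functions, and piecing gadgets together is ordinary composition in the position row and twisted composition in the direction row.
Here is a standalone Haskell sketch (our own names; for simplicity we elide the dependence of direction-sets on positions, so this is only the simply-typed shadow of a lens):\n\\begin{verbatim}\n-- A lens p -> q: p has positions i and directions d; q has\n-- positions j and directions e.\ndata Lens i d j e = Lens\n  { onPos :: i -> j       -- bottom row: fill q's position box\n  , onDir :: i -> e -> d  -- top row: fill p's direction box\n  }\n\n-- Piecing polyboxes together: q's two boxes are filled internally,\n-- via f's on-positions function and g's on-directions function.\nthenLens :: Lens i d j e -> Lens j e k x -> Lens i d k x\nthenLens f g = Lens\n  { onPos = onPos g . onPos f\n  , onDir = \\i x -> onDir f i (onDir g (onPos f i) x)\n  }\n\\end{verbatim}\nNote how $q$ disappears from the type of \texttt{thenLens f g}: its two boxes are filled internally, just as in the picture above.\n\n\\subsection{Situations as polyboxes}\nWhen a polynomial has only $1$ position (i.e.\\ it is representable), we shade its bottom (position) polybox gray to indicate that there is no choice to be made there, either by the user or by the automation; similarly, if a polynomial has only $1$ direction at every position (i.e.\\ it is linear), we shade its top (direction) polybox gray.\nSo both polyboxes for $\\yon$ are shaded gray.\n\nIn \\cref{subsec.poly.dyn_sys.new.sit_encl}, we discussed situations, which are lenses with codomain $\\yon$.\nA situation $\\gamma\\colon p\\to\\yon$, then, can be depicted as follows, where $!$ is the unique function into $\\1$:\n\\[\n \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom, \"$p$\" left] (p) {};\n  \t\\node[poly, identity, right=of p, \"$\\yon$\" right] (yon) {};\n  \t\\draw (p_pos) -- node[below] {$!$} (yon_pos);\n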
 \t\\draw (yon_dir) -- node[above] {$\\gamma^\\sharp$} (p_dir);\n\t\\end{tikzpicture}\n\\]\nBut this is equivalent to a gadget where, when the user fills in the bottom left blue box with a $p$-position $i$, the top left white box is automatically filled with a $p[i]$-direction $d$.\nSo we can redraw the gadget like so:\n\\begin{equation}\\label{eqn.map_to_0ary_composite}\n\\begin{tikzpicture}[polybox, tos]\n    \\node[poly, dom, \"$p$\" left] (p) {};\n    \\draw (p_pos) to[climb'] node[right] {$\\gamma$} (p_dir);\n\\end{tikzpicture}\n\\end{equation}\nThis lines up with what we already know: that a situation $\\gamma\\colon p\\to\\yon$ is just a dependent function $\\gamma\\colon(i\\in p(\\1))\\to p[i]$.\n\n\\subsection{Lenses to composites as polyboxes}\nA lens $p\\to q_1\\tri q_2$ is an element of the set\n\\begin{align*}\n    \\poly(p, q_1\\tri q_2) &\\iso \\poly\\left(p, \\sum_{j_1\\in q_1(\\1)}\\;\\prod_{e_1\\in q_1[j_1]}\\;\\sum_{j_2\\in q_2(\\1)}\\;\\prod_{e_2\\in q_2[j_2]}\\yon\\right) \\tag*{\\eqref{eqn.composite_formula}} \\\\\n    &\\iso \\prod_{i\\in p(\\1)}\\;\\sum_{j_1\\in q_1(\\1)}\\;\\prod_{e_1\\in q_1[j_1]}\\;\\sum_{j_2\\in q_2(\\1)}\\;\\prod_{e_2\\in q_2[j_2]}p[i]. \\tag*{\\eqref{eqn.main_formula}}\n\\end{align*}\nSo we can write down the instructions for picking a lens $p\\to q_1\\tri q_2$ as follows.\n\\begin{quote}\nTo choose a lens $p\\to q_1\\tri q_2$:\n\\begin{enumerate}\n    \\item for each $p$-position $i$:\n    \\begin{enumerate}[label*=\\arabic*.]\n        \\item choose a $q_1$-position $j_1$;\n        \\item for each $q_1[j_1]$-direction $e_1$:\n        \\begin{enumerate}[label*=\\arabic*.]\n            \\item choose a $q_2$-position $j_2$;\n            \\item for each $q_2[j_2]$-direction $e_2$:\n            \\begin{enumerate}[label*=\\arabic*.]\n                \\item choose a $p[i]$-direction $d$.\n            \\end{enumerate}\n        \\end{enumerate}\n    \\end{enumerate}\n\\end{enumerate}\n\\end{quote}\nWe could try to write out the dependent functions that these instructions correspond to.\nAlternatively, we could simply draw this protocol out using polyboxes, with every ``for each'' step corresponding to a user-maintained blue box and every ``choose'' step corresponding to an automated white box:\n\\begin{equation}\\label{eqn.map_to_2ary_composite}\n\\begin{tikzpicture}[polybox, mapstos]\n\t\\node[poly, dom, \"$p$\" left] (p) {$d$\\at$i$};\n\t\\node[poly, cod, right=1.5cm of p.south, yshift=-1ex, \"$q_1$\" right] (q1) {$e_1$\\at$j_1$};\n\t\\node[poly, cod, above=of q1, \"$q_2$\" right] (q2) {$e_2$\\at$j_2$};\n  \t\\draw (p_pos) to[first] (q1_pos);\n  \t\\draw (q1_dir) to[climb] (q2_pos);\n  \t\\draw (q2_dir) to[last] (p_dir);\n\\end{tikzpicture}\n\\end{equation}\nWhenever we draw two pairs of polyboxes on top of each other, as we do with the polyboxes for $q_1$ and $q_2$ above on the right, we are indicating that the entire column of polyboxes depicts the composite of the polynomials depicted by each individual pair.\nSo the column of polyboxes on the right represents the composite $q_1\\tri q_2$.\nIn particular, the position in the bottom box of the top pair is the position associated with the direction in the top box of the bottom pair, for the depicted position of the composite.\n\nSo a lens $p\\to q_1\\tri q_2$ is any protocol that will fill in the white boxes above as the user fills in the blue boxes in the direction of the arrows.\nWe'll see this in action in \\cref{ex.map_to_comp}.\n\nIn fact, \\eqref{eqn.map_to_0ary_composite}, 
\\eqref{eqn.polybox_lens}, and \\eqref{eqn.map_to_2ary_composite} are the polybox depictions of the $n=0, n=1,$ and $n=2$ examples of lenses $p\\to q_1\\tri\\cdots\\tri q_n$ to $n$-fold composites.\nIn general, for any $n\\in\\nn$, we can apply \n\\begin{align} \\label{eqn.lens_to_comp}\n    \\poly(p, q_1\\tri\\cdots\\tri q_n) &\\iso\\poly\\left(p, \\sum_{j_1\\in q_1(\\1)}\\;\\prod_{e_1\\in q_1[j_1]}\\cdots\\sum_{j_n\\in q_n(\\1)}\\;\\prod_{e_n\\in q_n[j_n]}\\yon\\right) \\tag*{\\eqref{eqn.composite_formula}} \\\\\n    &\\iso \\prod_{i\\in p(\\1)}\\;\\sum_{j_1\\in q_1(\\1)}\\;\\prod_{e_1\\in q_1[j_1]}\\cdots\\sum_{j_n\\in q_n(\\1)}\\;\\prod_{e_n\\in q_n[j_n]}p[i], \\tag*{\\eqref{eqn.main_formula}}\n\\end{align}\nso the polybox depiction of $p\\to q_1\\tri\\cdots\\tri q_n$ generalizes analogously.\nFor example, here are the polyboxes corresponding to a lens to a $4$-fold composite:\n\\[\n\\begin{tikzpicture}[polybox, tos]\n\t\\node[poly, dom, \"$p$\" left] (p) {};\n\t\\foreach \\i in {1,...,4}\n\t{\n  \t\\node[poly, cod, \"$q_\\i$\" right] (q\\i) at (3,1.3*\\i-3.25) {};\n\t};\n\t\\draw (p_pos) to[first] node[below] {} (q1_pos.west);\n\t\\foreach \\i/\\j in {1/2,2/3,3/4}\n\t{\n\t\t\\draw \n\t\t\t(q\\i_dir.west) \n\t\t\tto[climb] \n\t\t\tnode[left] {}\n\t\t\t(q\\j_pos.west);\n\t};\n\t\\draw (q4_dir) to[last] node[above left] {} (p_dir);\n\\end{tikzpicture}\n\\]\n\n% We can use this to generalize our notation in the case $k=1$, i.e.\\ for morphisms $p\\to q$. That is we denoted such a morphism by $\\lens{f^\\sharp}{f_\\1}$, where $f_\\1\\colon p(\\1)\\to q(\\1)$ and $f^\\sharp_i\\colon q[f_\\1(i)]\\to p[i]$. We generalize this to the $k$-ary composite case as\n% \\begin{equation}\\label{eqn.notation_f1f2fk}\n% (f_1,f_2,\\ldots,f_k,f^\\sharp)\\colon p\\too q_1\\tri q_2\\tri\\cdots\\tri q_k,\n% \\end{equation}\n% where\n% \\begin{equation}\\label{eqn.maps_to_comp}\n% \\begin{aligned}\n% f_1&:p(\\1)\\to q_1(\\1),\\\\\n% f_2&:(i\\in p(\\1))\\to (e_1\\in q_1[f_1(i)])\\to q_2(\\1),\\\\\n% f_3&:(i\\in p(\\1))\\to (e_1\\in q_1[f_1(i)])\\to (e_2\\in q_2[f_2(i,e_1)])\\to q_3(\\1),\\\\\n% f_k&:(i\\in p(\\1))\\to (e_1\\in q_1[f_1(i)])\\to  \\cdots\\to(e_{k-1}\\in q_{k-1}[f_{k-1}(i,e_1,\\ldots,e_{k-2})])\\to q_k(\\1),\\\\\n% f^\\sharp&:(i\\in p(\\1))\\to (e_1\\in q_1[f_1(i)])\\to \\cdots\\to(e_{k}\\in q_k[f_{k}(i,e_1,\\ldots,e_{k-1})])\\to p[i]\n% \\end{aligned}\n% \\end{equation}\n\nThese lenses to $n$-fold composites lend themselves to a very natural interpretation in terms of our decision making language.\nEach of $p$'s decisions is passed forward to a decision for $q_1$ to make.\nFor every option that $q_1$ may choose, there is then also a decision for $q_2$ to make.\nThen for every option that $q_2$ may choose, there is a decision for $q_3$ to make, and so on, all the way until $q_n$ has picked an option.\nTogether, all the options that $q_1,\\ldots,q_n$ chose then inform the option that $p$ should make for its original decision.\n\n\\slogan{A lens $p\\to q_1\\tri\\cdots\\tri q_n$ is a multi-step policy for $p$ to make decisions by asking for decisions from $q_1$, then $q_2$, etc., all the way to $q_n$, then interpreting the results.}\n\n\\begin{example}[Lenses $p\\to q\\tri r$]\\label{ex.map_to_comp}\nConsider a lens $f\\colon p\\to q\\tri r$.\nLet's label the three arrows in the lens's polybox depiction:\n\\[\n\\begin{tikzpicture}[polybox, tos]\n\t\\node[poly, dom, \"$p$\" left] (p) {};\n\t\\node[poly, cod, right=1.5cm of p.south, yshift=-1ex, \"$q$\" right] (q) {};\n\t\\node[poly, cod, above=of q, 
\"$r$\" right] (r) {};\n  \t\\draw (p_pos) to[first] node[below] {$f^q$} (q_pos);\n  \t\\draw (q_dir) to[climb] node[right] {$f^r$} (r_pos);\n  \t\\draw (r_dir) to[last] node[above] {$f^\\sharp$} (p_dir);\n\\end{tikzpicture}\n\\]\nSo the on-position function of $f$ can be split into two parts: a function $f^q\\colon p(\\1)\\to q(\\1)$ and, for each $i\\in p(\\1)$, a function $f^r_i\\colon q[f^q(i)]\\to r(\\1)$.\nThen the on-directions function $f^\\sharp_i\\colon (q\\tri r)[f_\\1(i)]\\to p[i]$ takes the direction of $q$ and the direction of $r$ in the two blue boxes on the right and sends them to a direction of $p$ at $i$ to fill the white box on the left.\n\nFor example, let $p\\coloneqq\\{A\\}\\yon^{\\{R,S\\}}+{B}\\yon^{\\{T\\}}$, $q\\coloneqq\\{C\\}\\yon^{\\{U,V,W\\}}+\\{D\\}\\yon^{\\{X\\}}$, and $r\\coloneqq\\{E\\}\\yon^{\\{Y,Z\\}}+\\{F\\}$.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$p=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=4mm]\n    \\node[\"\\tiny $A$\" below] (1) {$\\bullet$} \n      child {node[above, font=\\tiny] {$R$}}\n      child {node[above, font=\\tiny] {$S$}};\n    \\node[right=.5 of 1,\"\\tiny $B$\" below] (2) {$\\bullet$} \n      child {node[above, font=\\tiny] {$T$}};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (q) [draw, blue, right=2 of p, \"$q=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=4mm]\n    \\node[\"\\tiny $C$\" below] (1) {$\\bullet$} \n      child {node[above, font=\\tiny] {$U$}}\n      child {node[above, font=\\tiny] {$V$}}\n      child {node[above, font=\\tiny] {$W$}};\n    \\node[right=.75 of 1,\"\\tiny $D$\" below] (2) {$\\bullet$} \n      child {node[above, font=\\tiny] {$X$}};\n  \\end{tikzpicture}\n  };\n\t\\node (r) [draw, red, right=2 of q, \"$r=$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=4mm]\n    \\node[\"\\tiny $E$\" below] (1) {$\\bullet$} \n      child {node[above, font=\\tiny] {$Y$}}\n      child {node[above, font=\\tiny] {$Z$}};\n    \\node[right=.5 of 1,\"\\tiny $F$\" below] (2) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nHere is a tree picture of a lens $f\\colon p\\to q\\tri r$:\n\\[\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=5mm}]\n    \\node[\"\\tiny $A$\" below] (1) {$\\bullet$} \n      child {coordinate (11')}\n      child {coordinate (12')};\n    \\node[right=2 of 1, blue, \"\\color{blue} \\tiny $C$\" below] (1') {$\\bullet$}\n    \tchild[blue] {node[red] {$\\bullet$}\n\t\t\t\tchild[red] {coordinate (1'1)}\n\t\t\t\tchild[red] {coordinate (1'2)}\n\t\t\t}\n\t\tchild[blue] {node[red] {$\\bullet$}\n\t\t\t}\n\t\tchild[blue] {node[red] {$\\bullet$}\n\t\t\t\tchild[red] {coordinate (1'3)}\n\t\t\t\tchild[red] {coordinate (1'4)}\n\t\t\t}\n\t\t\t;\n%\n    \\node (2) [right=3 of 1',\"\\tiny $B$\" below] {$\\bullet$} \n      child {coordinate (21')};\n    \\node[right=2 of 2, blue,\"\\color{blue} \\tiny $D$\" below] (2') {$\\bullet$}\n\t\t\tchild[blue] {node[red] {$\\bullet$}\n\t\t\t\tchild[red] {coordinate (2'1)}\n\t\t\t\tchild[red] {coordinate (2'2)}\n\t\t\t}\n\t\t\t;\n%\n  \\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (1');\n  \\draw[|->, shorten <= 3pt, shorten >= 3pt] (2) -- (2');\n  \\begin{scope}[densely dotted, bend right=60pt]\n  \t\\draw[postaction={decorate}] (1'1) to (12');\n  \t\\draw[postaction={decorate}] (1'2) to (11');\n  \t\\draw[postaction={decorate}] (1'3) to (11');\n  \t\\draw[postaction={decorate}] (1'4) to (11');\n  \t\\draw[postaction={decorate}] (2'1) to 
(21');\n  \t\\draw[postaction={decorate}] (2'2) to (21');\n  \\end{scope}\n\\end{tikzpicture}\n\\]\nIf we write $f$ as the corresponding triple $(f^q, f^r, f^\\sharp)$, then we have\n\\begin{gather*}\nf^q(A)=C,\\quad f^q(B)=D;\\\\\nf^r_A(U)=E,\\quad f^r_A(V)=F,\\quad f^r_A(W)=E;\\\\\nf^r_B(X)=E;\\\\\nf^\\sharp_A(U,Y)=S,\\quad f^\\sharp_A(U,Z)=R,\\quad f^\\sharp_A(W,Y)=R,\\quad f^\\sharp_A(W,Z)=R;\\\\\nf^\\sharp_B(X,Y)=T,\\quad f^\\sharp_B(X,Z)=T.\n\\end{gather*}\nPolyboxes display the same data in a different format:\n\\[\n\\begin{tikzpicture}[polybox, mapstos, node distance=2ex and 1.4cm]\n  \\node (a) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$S$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$U$\\at$C$};\n  \t\\node[poly, cod, above=of q] (r) {$Y$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n  };\n  \\node[right=.6 of a] (b) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$R$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$U$\\at$C$};\n  \t\\node[poly, cod, above=of q] (r) {$Z$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n  };\n  \\node[right=.6of b] (c) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$V$\\at$C$};\n  \t\\node[poly, constant, above=of q] (r) {\\at$F$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n\t\t\\draw[densely dotted] (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n  };\n  \\node[right=.6 of c] (d) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$R$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$W$\\at$C$};\n  \t\\node[poly, cod, above=of q] (r) {$Y$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n\t};\n  \\node[right=.6 of d] (e) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$R$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$W$\\at$C$};\n  \t\\node[poly, cod, above=of q] (r) {$Z$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n\t};\n  \\node[below=.6 of a] (f) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$T$\\at$B$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$X$\\at$D$};\n  \t\\node[poly, cod, above=of q] (r) {$Y$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n\t};\n  \\node[below=.6 of b] (g) {\n  \\begin{tikzpicture}\n  \t\\node[poly, dom] (p) {$T$\\at$B$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$X$\\at$D$};\n  \t\\node[poly, cod, above=of q] (r) {$Z$\\at$E$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nThe third set of polyboxes, where the left blue box has been filled with an $A$ and the bottom right blue box has been filled with a $V$, is worth highlighting: as $f^r_A(V)=F$, but $r[F]=\\varnothing$, it is impossible to write a direction of $r$ at $F$ to go in the top right box.\nTo indicate this, we color the top right box red and leave the arrow emerging from it 
dotted.\n\\end{example}\n\n% \\begin{example}[Using \\eqref{eqn.notation_f1f2fk} to denote positions and directions in a composite]\\label{ex.pos_in_composite}\n% Given polynomials $p_1,\\ldots,p_n$, recall from \\cref{exc.positions_maps_yon} that the position-set of their composite is isomorphic to the hom-set\n% \\[\n%     \\poly(\\yon, p_1\\tri\\cdots\\tri p_n),\n% \\]\n% which by \\eqref{eqn.lens_to_comp} is in turn isomorphic to\n% \\[\n%     \\sum_{i_1\\in p_1(\\1)}\\;\\prod_{d_1\\in p_1[i_1]}\\cdots\\sum_{i_n\\in p_n(\\1)}\\;\\prod_{d_n\\in p_n[i_n]}\\1.\n% \\]\n\n% We can denote $i$ in the notation \\eqref{eqn.notation_f1f2fk} as $i=(i_1,\\ldots,i_n)$, forgoing the input to $i_1$ because it is always $1\\in\\1$ and also forgoing $f^\\sharp$ because it is always the unique map to $\\1$. Then in this notation \n% \\begin{gather*}\n% i_1\\in p_1(\\1),\\quad\n% i_2\\colon p_1[i_1]\\to p_2(\\1),\\quad\n% i_3\\colon (d_1\\in p_1[i_1])\\to(d_2\\in p_2[i_2(d_1)])\\to p_3(\\1),\\\\\n% i_k\\colon(d_1\\in p_1[i_1])\\to(d_2\\in p_2[i_2(d_1)])\\to\\cdots(d_{k-1}\\in p_{k-1}[i_{k-1}(d_1,\\ldots,d_{k-2})])\\to p_k(\\1)\n% \\end{gather*}\n\n% So for example to give a position in $p\\tri q\\tri r$ we need \n% \\[\n% i\\in p(\\1),\\quad\n% j\\colon p[i]\\to q(\\1),\\quad\n% k:(d\\in p[i])\\to(e\\in q[j(d)])\\to r(\\1).\n% \\]\n\n% The direction-set of $p_1\\tri\\cdots\\tri p_k$ at position $(i_1,\\ldots,i_k)$ is \n% \\[\n% (p_1\\tri\\cdots\\tri p_k)[(i_1,\\ldots,i_k)]\\cong\\sum_{d_1\\in p_1[i_1]}\\sum_{d_2\\in p_2[i_2(d_1)]}\\cdots\\sum_{d_k\\in p_k[i_k(d_1,\\ldots,d_{k-1})]}\\1\n% \\]\n% So for example given a position $(i,j,k)\\in p\\tri q\\tri r$, a direction there consists of a tuple $(d,e,f)$ where $d\\in p[i]$, $e\\in q[j(d)]$ and $f\\in r[k(d,e)]$.\n% \\end{example}\n\n% \\begin{exercise}\n% Suppose $A_1,\\ldots,A_k$ are sets and $p_i\\coloneqq A_i\\yon$ for each $i$.
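\nHere is the lens $f$ from \cref{ex.map_to_comp} once more, transcribed as a standalone Haskell sketch of the triple $(f^q, f^r, f^\sharp)$ (encoding positions and directions as characters is our own shorthand, not notation from the text):\n\\begin{verbatim}\n-- f^q: the first part of the on-positions data.\nfq :: Char -> Char\nfq 'A' = 'C'\nfq 'B' = 'D'\n\n-- f^r: for each p-position, a q-direction picks an r-position.\nfr :: Char -> Char -> Char\nfr 'A' 'U' = 'E'\nfr 'A' 'V' = 'F'\nfr 'A' 'W' = 'E'\nfr 'B' 'X' = 'E'\n\n-- f^#: for each p-position, a pair (q-direction, r-direction)\n-- is sent back to a p-direction.\nfsharp :: Char -> (Char, Char) -> Char\nfsharp 'A' ('U', 'Y') = 'S'\nfsharp 'A' ('U', 'Z') = 'R'\nfsharp 'A' ('W', 'Y') = 'R'\nfsharp 'A' ('W', 'Z') = 'R'\nfsharp 'B' ('X', 'Y') = 'T'\nfsharp 'B' ('X', 'Z') = 'T'\n-- There is deliberately no clause for fsharp 'A' ('V', _): since\n-- fr 'A' 'V' = 'F' and r has no directions at F, that branch can\n-- never be reached. This is the red polybox that cannot be filled.\n\\end{verbatim}\n%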
% Use the notation of \\cref{ex.pos_in_composite} to give the position-set $p\\coloneqq p_1\\tri\\cdots\\tri p_k$.\n% \\end{exercise}\n\n\\subsection{The composition product of lenses as polyboxes}\n\nA special case of a lens whose codomain is a composite is a lens that is itself the composition product of lenses.\nIf we draw such a lens as polyboxes by following the instructions from \\eqref{eqn.comp_lens_pos} and \\eqref{eqn.comp_lens_dir}, we would really just be stacking the lenses on top of each other.\nFor example, given lenses $f\\colon p\\to q$ and $f'\\colon p'\\to q'$, here is $f\\tri f'$ drawn as polyboxes:\n\\begin{equation} \\label{eqn.comp_lens_polybox}\n\\begin{tikzpicture}[polybox, tos]\n\t\\node[poly, dom, \"$p$\" left] (p) {};\n\t\\node[poly, dom, above=.8 of p, \"$p'$\" left] (p') {};\n\t\\node[poly, cod, right=of p, \"$q$\" right] (q) {};\n\t\\node[poly, cod, above=.8 of q, \"$q'$\" right] (q') {};\n\t\\draw (p_pos) -- node[below] {$f_\\1$} (q_pos);\n\t\\draw (q_dir) -- node[above] {$f^\\sharp$} (p_dir);\n\t\\draw (p'_pos) -- node[below] {$f'_\\1$} (q'_pos);\n\t\\draw (q'_dir) -- node[above] {$(f')^\\sharp$} (p'_dir);\t\n\\end{tikzpicture}\n\\end{equation}\nWhat differentiates this from simply writing down the polyboxes for $f$ and the polyboxes for $f'$ is that we are explicitly associating the position that will fill the bottom box of $p'$ with the direction that will fill the top box of $p$, and likewise the position that will fill the bottom box of $q'$ with the direction that will fill the top box of $q$.\nMoreover, we have the user fill out the bottom set of boxes first and work their way up, so that, in particular, they can use the information they obtained from the behavior of $f_\\1$ and $f^\\sharp$ to decide what to put in the bottom box of $p'$.\nSo this really does depict a lens $p\\tri p'\\to q\\tri q'$.\n\nHow does \\eqref{eqn.comp_lens_polybox} relate to our usual polybox depiction of a lens to a composite, as in \\eqref{eqn.map_to_2ary_composite}, but with the domain also replaced with a composite?\nA user who interacts with \\eqref{eqn.comp_lens_polybox} could fill the bottom set of polyboxes for $f$ independently of the top set of polyboxes for $f'$.\nAlternatively, along with what position goes in the bottom box of $p$, they could have already decided beforehand what position to put in the bottom box of $p'$ for every possible direction that could end up in the top box of $p$.\nBy \\eqref{eqn.comp_pos}, such a choice is equivalent to picking a position of the composite $p\\tri p'$.\nThen by \\eqref{eqn.comp_lens_pos}, following just the bottom arrow $f_\\1$ leads to the corresponding position of $q$ given by $f\\tri f'$, while filling in the top box of $q$ and following $f^\\sharp$ then $f'_\\1$ leads to the position of $q'$ that goes in the bottom box of $q'$.\nFinally, once the user fills in the top box of $q'$, following the top arrow $(f')^\\sharp$ completes the specification of a direction of $p\\tri p'$.\nIn this way, \\eqref{eqn.comp_lens_polybox} can be thought of as a special case of \\eqref{eqn.map_to_2ary_composite}.\n\nWe make a big deal out of it, but \\eqref{eqn.comp_lens_polybox} really is just the polyboxes of two separate lenses drawn together.\nWhere such polyboxes truly get interesting is when we compose them with polyboxes like \\eqref{eqn.map_to_2ary_composite}.\nThat is, given a lens $g\\colon r\\to p\\tri p'$, consider the polyboxes for $g\\then(f\\tri f')$:\n\\[\n\\begin{tikzpicture}[polybox, tos]\n\t\\node[poly, dom,\n
\"$r$\" left] (r) {};\n\t\\node[poly, right=1.8 of r.south, yshift=-2.5ex, \"$p$\" below] (p) {};\n\t\\node[poly, above=.8 of p, \"$p'$\" above] (p') {};\n\t\\node[poly, cod, right=of p, \"$q$\" right] (q) {};\n\t\\node[poly, cod, above=.8 of q, \"$q'$\" right] (q') {};\n\t\\draw (p_pos) -- node[below] {$f_\\1$} (q_pos);\n\t\\draw (q_dir) -- node[above] {$f^\\sharp$} (p_dir);\n\t\\draw (p'_pos) -- node[below] {$f'_\\1$} (q'_pos);\n\t\\draw (q'_dir) -- node[above] {$(f')^\\sharp$} (p'_dir);\t\n\t\\draw (r_pos) to[first] node[below] {$g^{p}$} (p_pos);\n\t\\draw (p_dir) to[climb] node[right] {$g^{p'}$} (p'_pos);\n\t\\draw (p'_dir) to[last] node[above] {$g^\\sharp$} (r_dir);\n\\end{tikzpicture}\n\\]\nThere's a lot going on with this lens! To fill out these polyboxes, we start from the bottom box of $r$, go all the way right to the bottom box of $q$, loop back left, up, and right again to the bottom box of $q'$, then travel left all the way back to the top box of $r$.\n\nSay we knew that $g\\then(f\\tri f')$ were equal to some other lens $h\\colon r\\to q\\tri q'$:\n\\[\n\\begin{tikzpicture}\n\t\\node (1) {\n  \\begin{tikzpicture}[polybox, mapstos]\n\t\\node[poly, dom, \"$r$\" left] (r) {\\at$k$};\n\t\\node[poly, right=1.8 of r.south, yshift=-2.5ex, \"$p$\" below] (p) {};\n\t\\node[poly, above=.8 of p, \"$p'$\" above] (p') {};\n\t\\node[poly, cod, right=of p, \"$q$\" right] (q) {$e$};\n\t\\node[poly, cod, above=.8 of q, \"$q'$\" right] (q') {$e'$};\n\t\\draw (p_pos) -- node[below] {$f_\\1$} (q_pos);\n\t\\draw (q_dir) -- node[above] {$f^\\sharp$} (p_dir);\n\t\\draw (p'_pos) -- node[below] {$f'_\\1$} (q'_pos);\n\t\\draw (q'_dir) -- node[above] {$(f')^\\sharp$} (p'_dir);\t\n\t\\draw (r_pos) to[first] node[below] {$g^{p}$} (p_pos);\n\t\\draw (p_dir) to[climb] node[right] {$g^{p'}$} (p'_pos);\n\t\\draw (p'_dir) to[last] node[above] {$g^\\sharp$} (r_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.8 of 1] (2) {\n  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom, \"$r$\" left] (r) {\\at$k$};\n  \t\\node[poly, cod, right=1.8 of r.south, yshift=-1ex, \"$q$\" right] (q) {$e$};\n  \t\\node[poly, cod, above=of q, \"$q'$\" right] (q') {$e'$};\n  \t\\draw (r_pos) to[first] node[below] {$h^q$} (q_pos);\n  \t\\draw (q_dir) to[climb] node[right] {$h^{q'}$} (q'_pos);\n  \t\\draw (q'_dir) to[last] node[above] {$h^\\sharp$} (r_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node at ($(1.east)!.5!(2.west)$) {=};\n\\end{tikzpicture}\n\\]\n\nThen we can read off the picture that\n\\[\n    g^p\\then f_\\1 = h^q,\n\\]\nthat\n\\[\n    f^\\sharp_{g^p(k)}\\then g^{p'}_k\\then f'_\\1 = h^{q'}_k,\n\\]\nand that\n\\[\n    (f')^\\sharp_{g^{p'}(f^\\sharp_{g^p(k)}(e))}\\then g^\\sharp_k = h^\\sharp_k.\n\\]\nWe will read equations off of polyboxes like this repeatedly in what follows. 
%** Be more specific if this only really happens in next chapter\n\n%-------- Section --------%\n\\section{Categorical properties of the composition product} \\label{sec.comon.comp.prop}\n\nWe conclude this chapter by discussing several interesting properties of the composition product, many of which will come in handy in the following chapters.\n\n\\subsection{Interaction with products and coproducts} \\label{subsec.comon.comp.prop.prod}\n\nIt turns out that the composition product behaves well---albeit asymmetrically---with products and coproducts.\n\n\\begin{proposition}[Left distributivity of $\\tri$ over $+$ and $\\times$]\\label{prop.left_dist_prod}\nGiven a polynomial $r$, the functor $(-\\tri r)\\colon\\poly\\to\\poly$ that sends each $p\\in\\poly$ to $p\\tri r$ commutes with coproducts and products (up to natural isomorphism).\nThat is, for any $p,q\\in\\poly$, we have the following natural isomorphisms:\n\\begin{equation}\\label{eqn.comp_left_pres_plus}\n    (p+q)\\tri r\\iso (p\\tri r)+(q\\tri r)\n\\end{equation}\nand\n\\begin{equation}\\label{eqn.comp_left_pres_times}\n    pq\\tri r\\iso (p\\tri r)(q\\tri r).\n\\end{equation}\nMore generally, given a set $A$ and polynomials $(p_a)_{a\\in A}$, we have the following natural isomorphisms:\n\\begin{equation}\\label{eqn.comp_left_pres_sum}\n    \\left(\\sum_{a\\in A}p_a\\right)\\tri r\\iso \\sum_{a\\in A} (p_a\\tri r)\n\\end{equation}\nand\n\\begin{equation}\\label{eqn.comp_left_pres_prod}\n    \\left(\\prod_{a\\in A}p_a\\right)\\tri r\\iso \\prod_{a\\in A} (p_a\\tri r)\n\\end{equation}\n\\end{proposition} \n\\begin{proof}\nFormally, this comes down to the fact that (co)products of functors $\\smset\\to\\smset$ are computed pointwise (\\cref{prop.presheaf_lim_ptwise}) and that (co)products in $\\poly$ coincide with (co)products in $[\\smset,\\smset]$ (\\cref{prop.poly_coprods,prop.poly_prods}).\nOne could instead give an explicit proof using \\eqref{eqn.composite_formula}; this is done in \\cref{exc.left_dist}.\nIn fact, we will see yet another proof of \\eqref{eqn.comp_left_pres_prod} (and thus \\eqref{eqn.comp_left_pres_times}) in \\cref{exc.comp_left_pres_lim} \\cref{exc.comp_left_pres_lim.prod}.\n\\end{proof}\n\n\\begin{exercise}\\label{exc.left_dist}\nProve \\cref{prop.left_dist_prod} using the explicit formula for $\\tri$ given in \\eqref{eqn.composite_formula} by manipulating sums and products.\n\\begin{solution}\nTo prove \\cref{prop.left_dist_prod}, it suffices to verify \\eqref{eqn.comp_left_pres_sum} and \\eqref{eqn.comp_left_pres_prod}, as \\eqref{eqn.comp_left_pres_plus} and \\eqref{eqn.comp_left_pres_times} follow directly when $A\\coloneqq\\2$.\n\nGiven polynomials $(p_a)_{a\\in A}$, recall that the position-set of the sum $\\sum_{a\\in A}p_a$ is $\\sum_{a\\in A}p_a(\\1)$, while the direction-set at each position $(a,i)$ with $a\\in A$ and $i\\in p_a(\\1)$ is $p_a[i]$.\nSo by \\eqref{eqn.composite_formula}, we have that\n\\begin{align*}\n    \\left(\\sum_{a\\in A}p_a\\right)\\tri r &\\iso \\sum_{\\substack{a\\in A, \\\\ i\\in p_a(\\1)}}\\;\\prod_{d \\in p_a[i]}\\;\\sum_{j \\in r(\\1)}\\;\\prod_{e \\in r[j]}\\yon \\\\\n    &\\iso \\sum_{a\\in A}\\;\\sum_{i\\in p_a(\\1)}\\;\\prod_{d \\in p_a[i]}\\;\\sum_{j \\in r(\\1)}\\;\\prod_{e \\in r[j]}\\yon \\\\\n    &\\iso\\sum_{a\\in A}(p_a\\tri r).\n\\end{align*}\nWe can also recall that the position-set of the product $\\prod_{a\\in A}p_a$ is $\\prod_{a\\in A}p_a(\\1)$, while the direction-set at each position $\\bar{i}\\colon(a\\in A)\\to p_a(\\1)$ is $\\sum_{a\\in 
A}p_a[\\bar{i}(a)]$.\nSo by \\eqref{eqn.composite_formula}, we have that\n\\begin{align*}\n    \\left(\\prod_{a\\in A}p_a\\right)\\tri r &\\iso \\sum_{\\bar{i}\\in\\prod_{a\\in A}p_a(\\1)} \\; \\prod_{\\substack{a\\in A, \\\\ d\\in p_a[\\bar{i}(a)]}} \\; \\sum_{j \\in r(\\1)} \\; \\prod_{e \\in r[j]} \\yon \\\\\n    &\\iso \\prod_{a\\in A} \\; \\sum_{i\\in p_a(\\1)} \\; \\prod_{d\\in p_a[i]} \\; \\sum_{j \\in r(\\1)} \\; \\prod_{e \\in r[j]} \\yon \\tag*{\\eqref{eqn.cat_completely_distributive}} \\\\\n    &\\iso\\prod_{a\\in A}(p_a\\tri r).\n\\end{align*}\n\\end{solution}\n\\end{exercise}\n\n\\begin{example}[Picturing the left distributivity of $\\tri$ over $\\times$]\\label{ex.picturing_dist}\nWe want an intuitive understanding of the left distributivity given by \\eqref{eqn.comp_left_pres_times}.\nLet $p\\coloneqq\\yon$, $q\\coloneqq\\yon+\\1$, and $r\\coloneqq\\yon^\\2+\\1$, as shown here:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, blue!50!black, \"$p$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (p1) {$\\bullet$} \n      child {};\n  \\end{tikzpicture}\n  };\n\t\\node (q) [draw, blue!50!black, right=1 of p, \"$q$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (q1) {$\\bullet$} \n      child {};\n    \\node[right=.5 of q1] (q2) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\t\\node (r) [draw, red!75!black, right=1 of q, \"$r$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (r1) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.5 of r1] (r2) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThen $pq\\cong\\yon^\\2+\\yon$ can be drawn as follows, with each corolla comprised of a corolla from $p$ and a corolla from $q$ with their roots glued together:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, blue!50!black, \"$pq\\cong$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n        \\node (q1) {$\\bullet$}\n          child {}\n          child {};\n        \\node[right=.5 of q1] (q2) {$\\bullet$}\n          child {};\n    \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nWe can therefore draw $pq\\tri r$ by gluing corollas of $r$ to leaves of $pq$ in every way, as follows:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$pq\\tri r\\cong$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.3 of 1] (2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1.3 of 2] (3) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.3 of 3] (4) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n   
 \\node[blue!50!black, right=1 of 4] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=.8 of 5] (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$}\n      };\n  \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nSo each tree in $pq\\tri r$ is obtained by gluing together the roots of a corolla from $p$ and a corolla from $q$, then attaching corollas from $r$ to each leaf.\n\nAlternatively, we can compute $p\\tri r$ and $q\\tri r$ separately, gluing corollas of $r$ to leaves of $p$ in every way, then to leaves of $q$ in every way:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$p\\tri r\\cong$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (p1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n    \\node[blue!50!black, right=.5 of p1] (p2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (q) [draw, \"$q\\tri r\\cong$\" left] at (4,0) {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (q1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n    \\node[blue!50!black, right=.5 of q1] (q2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n    \\node[blue!50!black, right=.5 of q2] (q3) {$\\bullet$};\t\t\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nTheir product is then obtained by taking each tree from $p\\tri r$ and pairing it with each tree from $q\\tri r$ by gluing their roots together:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$(p\\tri r)(q\\tri r)\\cong$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.3 of 1] (2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 2] (3) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 3] (4) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=1.3 of 4] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=.8 of 5] (6) {$\\bullet$} \n      
child[blue!50!black] {node[red!75!black] {$\\bullet$}\n      };\t  \t\n\t\\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nSo each tree in $(p\\tri r)(q\\tri r)$ is obtained by attaching corollas from $r$ to each leaf of a corolla from $p$ and a corolla from $q$ before gluing their roots together.\n\nBut it doesn't matter whether we glue corollas from $r$ to leaves first, or if we glue the roots of corollas from $p$ and $q$ together first--the processes are equivalent.\nHence the isomorphism $pq\\tri r\\iso(p\\tri r)(q\\tri r)$ holds.\n\\end{example}\n\n\\begin{exercise}\\label{exc.picturing_dist}\nFollow \\cref{ex.picturing_dist} with coproducts $(+)$ in place of products $(\\times)$: use pictures to give an intuitive understanding of the left distributivity given by \\eqref{eqn.comp_left_pres_plus}.\n\\begin{solution}\nWe want an intuitive understanding of the left distributivity of $\\tri$ over $+$.\nLet $p\\coloneqq\\yon^\\2$, $q\\coloneqq\\yon+\\1$, and $r\\coloneqq\\yon^\\2+\\1$, as shown here:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, green!50!black, \"$p$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (p1) {$\\bullet$} \n      child {}\n      child {};\n  \\end{tikzpicture}\n  };\n\t\\node (q) [draw, blue!50!black, right=1 of p, \"$q$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (q1) {$\\bullet$} \n      child {};\n    \\node[right=.5 of q1] (q2) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\t\\node (r) [draw, red!75!black, right=1 of q, \"$r$\" above] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n    \\node (r1) {$\\bullet$} \n      child {}\n      child {};\n    \\node[right=.5 of r1] (r2) {$\\bullet$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThen $p+q\\cong\\yon^\\2+\\yon+\\1$ can be drawn as follows, consisting of all the corollas from $p$ and $q$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$p+q\\cong$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=2.5mm]\n        \\node[green!50!black] (q1) {$\\bullet$}\n          child[green!50!black] {}\n          child[green!50!black] {};\n        \\node[blue!50!black, right=.5 of q1] (q2) {$\\bullet$}\n          child[blue!50!black] {};\n        \\node[blue!50!black, right=.5 of q2] (q3) {$\\bullet$};\n    \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nWe can therefore draw $(p+q)\\tri r$ by gluing corollas of $r$ to leaves of $p+q$ in every way, as follows:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$(p+q)\\tri r\\cong$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[green!50!black] (1) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 1] (2) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 2] (3) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} 
\n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 3] (4) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 4] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=.8 of 5] (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$}\n      };\n      \n    \\node[blue!50!black, right=.8 of 6] (7) {$\\bullet$};\n  \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nSo each tree in $(p+q)\\tri r$ is obtained by taking either a corolla from $p$ or a corolla from $q$, then attaching corollas from $r$ to each leaf.\n\nAlternatively, we can compute $p\\tri r$ and $q\\tri r$ separately, gluing corollas of $r$ to leaves of $p$ in every way, then to leaves of $q$ in every way:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$p\\tri r\\cong$\" left] at (-2,0) {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[green!50!black] (1) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 1] (2) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 2] (3) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 3] (4) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n  \\end{tikzpicture}\n  };\n%\n\t\\node (q) [draw, \"$q\\tri r\\cong$\" left] at (4,0) {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[blue!50!black] (q1) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n    \\node[blue!50!black, right=.5 of q1] (q2) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n    \\node[blue!50!black, right=.5 of q2] (q3) {$\\bullet$};\t\t\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nTheir coproduct then consists of all the trees from $p\\tri r$ and all the trees from $q\\tri r$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p) [draw, \"$p\\tri r+q\\tri r\\cong$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t\tlevel 1/.style={sibling distance=5mm},\n\t  level 2/.style={sibling distance=2.5mm}]\n    \\node[green!50!black] (1) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n      
\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 1] (2) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 2] (3) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t\tchild[red!75!black]\n\t\t\t\tchild[red!75!black]\n\t\t\t};\n%\n    \\node[green!50!black, right=1.3 of 3] (4) {$\\bullet$} \n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t}\n      child[green!50!black] {node[red!75!black] {$\\bullet$} \n\t\t\t};\n%\n    \\node[blue!50!black, right=1 of 4] (5) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$} \n      \tchild[red!75!black]\n      \tchild[red!75!black]\n\t\t\t};\n%\n    \\node[blue!50!black, right=.8 of 5] (6) {$\\bullet$} \n      child[blue!50!black] {node[red!75!black] {$\\bullet$}\n      };\n      \n    \\node[blue!50!black, right=.8 of 6] (7) {$\\bullet$};\n  \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nSo each tree in $p\\tri r+q\\tri r$ is either a corolla from $p$ with corollas from $r$ attached to each leaf, or a corolla from $q$ with corollas from $r$ attached to each leaf.\n\nBut it doesn't matter whether we glue corollas from $r$ to leaves first, or if we pool together corollas from $p$ and $q$ first--the processes are equivalent.\nHence the isomorphism $(p+q)\\tri r\\iso p\\tri r+q\\tri r$ holds.\n\\end{solution}\n\\end{exercise}\n\n\\begin{exercise}\nShow that for any set $A$ and polynomials $p,q$, we have an isomorphism $A(p\\tri q)\\iso (Ap)\\tri q$.\n\\begin{solution}\nGiven a set $A$ and polynomials $p,q$, the left distributivity of $\\tri$ over products from \\eqref{eqn.comp_left_pres_prod} implies that $(Ap)\\tri q\\iso(A\\tri q)(p\\tri q)$, while \\cref{exc.composing_with_constants} \\cref{exc.composing_with_constants.appl} implies that $A\\tri q\\iso A$.\nSo $(Ap)\\tri q\\iso A(p\\tri q)$.\n\\end{solution}\n\\end{exercise}\n\nIn \\cref{subsec.comon.comp.prop.lim}, we will see how to generalize the left distributivity of $\\tri$ over products to arbitrary limits.\nBut first, we observe that right distributivity does not hold.\n\n\\begin{exercise} \\label{exc.right_not_dist_prod}\nShow that the distributivities of \\cref{prop.left_dist_prod} do not hold on the other side:\n\\begin{enumerate}\n\t\\item \\label{exc.right_not_dist_prod.prod} Find polynomials $p,q,r$ such that $p\\tri (qr)\\not\\iso(p\\tri q)(p\\tri r)$.\n\t\\item Find polynomials $p,q,r$ such that $p\\tri (q+r)\\not\\iso(p\\tri q)+(p\\tri r)$.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item Let $p \\coloneqq \\yon + \\1, q \\coloneqq \\1,$ and $r \\coloneqq \\0$.\n    Then $p \\tri (qr) \\iso (\\yon + \\1) \\tri \\0 \\iso \\1$, while $(p \\tri q)(p \\tri r) \\iso ((\\yon + \\1) \\tri \\1)((\\yon + \\1) \\tri \\0) \\iso \\2 \\times \\1 \\iso \\2$.\n    \\item Again let $p \\coloneqq \\yon + \\1, q \\coloneqq \\1,$ and $r \\coloneqq \\0$.\n    Then $p \\tri (q+r) \\iso (\\yon + \\1) \\tri \\1 \\iso \\2$, while $(p \\tri q)+(p \\tri r) \\iso ((\\yon + \\1) \\tri \\1)+((\\yon + \\1) \\tri \\0) \\iso \\2 + \\1 \\iso \\3$.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\nNevertheless, there is something to be said about the relationship between 
$p\\tri q, p\\tri r,$ and $p\\tri(qr)$.\nWe'll see this in action after we discuss how $\\tri$ preserves limits on the left.\n\n\\subsection{Interaction with limits} \\label{subsec.comon.comp.prop.lim}\n\nWe saw in \\cref{thm.poly_limits} that $\\poly$ has all limits, and we saw in \\cref{exc.refl_limits} that these limits coincide with limits in $[\\smset,\\smset]$.\nHence the argument in the proof of \\cref{prop.left_dist_prod} can be generalized to arbitrary limits.\nIt follows that $\\tri$ preserves all limits on the left.\nBut we will present a proof of this fact from an alternative perspective: by appealing to the left coclosure of $\\tri$.\n\n\\begin{proposition}[Meyers] \\label{prop.comp_left_coclosed}\nThe composition product is left coclosed.\nThat is, there exists a left coclosure operation, which we denote $\\chom{-}{-}\\colon\\poly\\op\\times\\poly\\to\\poly$,\nsuch that there is a natural isomorphism\n\\[\n    \\poly(p, q\\tri r)\\iso\\poly\\left(\\chom{r}{p}, q\\right).\n\\]\nIn particular, the left coclosure operation sends $r,p\\in\\poly$ to\n\\begin{equation} \\label{eqn.chom_def}\n    \\chom{r}{p}\\coloneqq\\sum_{i\\in p(\\1)}\\yon^{r(p[i])}.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWe have that\n\\begin{align*}\n    \\poly(p, q\\tri r)&\\iso\\prod_{i\\in p(\\1)}q(r(p[i])) \\tag*{\\eqref{eqn.main_formula}} \\\\\n    &\\iso\\poly\\left(\\sum_{i\\in p(\\1)}\\yon^{r(p[i])}, q\\right) \\tag*{\\eqref{eqn.main_formula}} \\\\\n    &\\iso\\poly\\left(\\chom{r}{p}, q\\right) \\tag*{\\eqref{eqn.chom_def}}.\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary}[Left preservation of limits] \\label{cor.left_pres_lim}\nFix $r\\in\\poly$.\nThe functor $(-\\tri r)\\colon\\poly\\to\\poly$ that sends each $p\\in\\poly$ to $p\\tri r$ preserves all limits (up to natural isomorphism).\nThat is, for any category $\\cat{J}$ and functor $p_-\\colon\\cat{J}\\to\\poly$, there is a natural isomorphism\n\\begin{equation} \\label{eqn.comp_left_pres_lim}\n    \\left(\\lim_{j\\in\\cat{J}}p_j\\right)\\tri r\\iso\\lim_{j\\in\\cat{J}}(p_j\\tri r).\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nBy \\cref{prop.comp_left_coclosed}, the functor $(-\\tri r)\\colon\\poly\\to\\poly$ is the right adjoint of the functor $\\chom{r}{-}\\colon\\poly\\to\\poly$, and right adjoints preserve limits.\n\\end{proof}\n\n\\begin{exercise} \\label{exc.comp_left_pres_lim}\n\\begin{enumerate}\n    \\item Complete \\cref{exc.composing_with_constants} \\cref{exc.composing_with_constants.appl} using \\eqref{eqn.comp_left_pres_lim} and \\eqref{eqn.comp_left_pres_sum}.\n    \\item \\label{exc.comp_left_pres_lim.prod} Deduce \\eqref{eqn.comp_left_pres_prod} using \\eqref{eqn.comp_left_pres_lim}.\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item We wish to solve \\cref{exc.composing_with_constants} \\cref{exc.composing_with_constants.appl} using \\eqref{eqn.comp_left_pres_lim} and \\eqref{eqn.comp_left_pres_sum}.\n    If we set $\\cat{J}$ in \\eqref{eqn.comp_left_pres_lim} to be the empty category, then the limit of the functor from $\\cat{J}$ is just the terminal object.\n    It follows that $\\1\\tri p\\iso\\1$.\n    In other words, since $\\tri$ preserves limits on the left, and since terminal objects are limits, $\\tri$ preserves terminal objects on the left.\n    \n    Then a set $X$ can be written as a sum $\\sum_{x\\in X}\\1$, so by \\eqref{eqn.comp_left_pres_sum},\n    \\[\n        X\\tri p\\iso\\left(\\sum_{x\\in X}\\1\\right)\\tri p\\iso\\sum_{x\\in X}(\\1\\tri p)\\iso\\sum_{x\\in 
X}\\1\\iso X.\n    \\]\n    \n    \\item If we set $\\cat{J}$ in \\eqref{eqn.comp_left_pres_lim} to be the discrete category on the set $A$, then the limit of a functor from $\\cat{J}$ is just an $A$-fold product, so \\eqref{eqn.comp_left_pres_prod} follows.\n    In other words, since $\\tri$ preserves limits on the left, and since products are limits, $\\tri$ preserves products on the left.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\nSo $\\tri$ preserves limits on the left.\nHow about limits on the right?\nWe saw in \\cref{exc.right_not_dist_prod} that $\\tri$ does not even preserve products on the right, so it certainly does not preserve all limits.\nBut it turns out that there is a special class of limits that $\\tri$ does preserve on the right.\n\n\\begin{definition}\nA \\emph{connected limit} is one whose indexing category $\\cat{J}$ is nonempty and connected. That is, $\\cat{J}$ has at least one object, and any two objects are connected by a finite zigzag of arrows.\n\\end{definition}\n\n\\begin{example}\nThe following categories are connected:\n\\[\n\\fbox{$\\bullet$}\n\\qquad\n\\fbox{$\\bullet\\tto\\bullet$}\n\\qquad\n\\fbox{$\\bullet\\to\\bullet\\from\\bullet$}\n\\qquad\n\\fbox{$\\bullet\\from\\bullet\\from\\bullet\\from\\cdots$}\n\\]\nIn particular, equalizers, pullbacks, and directed limits are examples of connected limits. \n\nThe following categories are \\emph{not} connected:\n\\[\n\\fbox{$\\phantom{\\bullet}$}\n\\qquad\n\\fbox{$\\bullet\\quad\\bullet$}\n\\qquad\n\\fbox{$\\bullet\\quad\\bullet\\to \\bullet$}\n\\]\nIn particular, terminal objects and products are \\emph{not} examples of connected limits.\n\\end{example}\n\n\\begin{theorem}[Preservation of connected limits]\\label{thm.connected_limits}\nThe operation $\\tri$ commutes with connected limits on both sides. That is, if $\\cat{J}$ is a connected category, $p\\colon \\cat{J}\\to\\poly$ is a functor, and $q\\in\\poly$ is a polynomial, then there are natural isomorphisms\n\\[\n\t\\left(\\lim_{j\\in \\cat{J}} p_j\\right)\\tri q\\iso \\lim_{j\\in \\cat{J}}(p_j\\tri q)\n\t\\qqand\n\tq\\tri\\left(\\lim_{j\\in \\cat{J}} p_j\\right)\\iso \\lim_{j\\in \\cat{J}}(q\\tri p_j)\n\\]\n\\end{theorem}\n\\begin{proof}[Sketch of proof]\nThe claim for the left side is just a special case of \\cref{cor.left_pres_lim}.\nThe claim for the right side comes down to the fact that polynomials are sums of representables; representable functors commute with all limits and sums commute with connected limits in $\\smset$. See \\cite[Proposition 1.16]{kock2012polynomial} for details.\n\\end{proof}\n\n%**Right coclosure...kind of. At least an adjunction\n%**Right preservation of connected limits\n\n% Give alt proof as exercise??\n\n\\begin{exercise}\\label{ex.connected_limits_and_tri}\nUse \\cref{thm.connected_limits} in the following.\n\\begin{enumerate}\n\t\\item Let $p$ be a polynomial, thought of as a functor $p\\colon\\smset\\to\\smset$. 
Show that $p$ preserves connected limits (of sets).\n\t\\item Show that for any polynomials $p,q,r$ we have an isomorphism:\n\t\\begin{equation} \\label{eqn.right_prod_pullback}\n\tp\\tri(qr)\\iso (p\\tri q)\\times_{(p\\tri\\1)}(p\\tri r).\n\t\\end{equation}\n\t\\item Take $p,q,r$ from the counterexample you found in \\cref{exc.right_not_dist_prod} \\cref{exc.right_not_dist_prod.prod} and check that \\eqref{eqn.right_prod_pullback} holds.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item Given a polynomial functor $p \\colon \\smset \\to \\smset$, we wish to show that $p$ preserves connected limits of sets; that is, for a connected category $\\cat{J}$ and a functor $X \\colon \\cat{J} \\to \\smset$, we have\n    \\[\n        p\\left(\\lim_{j \\in \\cat{J}} X_j\\right) \\iso \\lim_{j \\in \\cat{J}} p(X_j).\n    \\]\n    But we can identify $\\smset$ with the full subcategory of constant functors in $\\poly$ and instead view $X$ as a functor into $\\poly$.\n    Then by \\cref{exc.composing_with_constants} \\cref{exc.composing_with_constants.appl}, the left-hand side of the isomorphism we seek is isomorphic to $p\\tri\\left(\\lim_{j \\in \\cat{J}} X_j\\right)$, while the right-hand side is isomorphic to $\\lim_{j \\in \\cat{J}} \\left(p \\tri X_j\\right)$.\n    These are isomorphic by \\cref{thm.connected_limits}.\n    \\item Given $p,q,r \\in \\poly$, we wish to show that \\eqref{eqn.right_prod_pullback} holds.\n    As $\\1$ is terminal in $\\poly$, the product $qr$ can also be written as the pullback $q \\times_\\1 r$.\n    While products are not connected limits, pullbacks are, so by \\cref{thm.connected_limits} they are preserved by the functor $p\\tri-$.\n    Hence $p\\tri(qr)\\iso p\\tri(q\\times_\\1 r)\\iso(p\\tri q)\\times_{(p\\tri\\1)}(p\\tri r)$, as desired.\n    \\item We'll show that \\eqref{eqn.right_prod_pullback} holds for $p\\coloneqq\\yon+\\1, q\\coloneqq\\1,$ and $r\\coloneqq\\0$.\n    We have $p\\tri q = p\\tri\\1 \\iso (\\yon+\\1)\\tri\\1 \\iso \\2$ and $p\\tri r \\iso (\\yon+\\1)\\tri\\0 \\iso \\1$, so $(p\\tri q)\\times_{(p\\tri\\1)}(p\\tri r) \\iso \\2 \\times_\\2 \\1$.\n    We saw in \\cref{ex.pullbacks_in_poly} that the position-set of a pullback in $\\poly$ is just the pullback of the position-sets in $\\set$, while the direction-sets are given by a pushout of direction-sets in $\\set$.\n    As our polynomials have empty direction-sets, their pullback must have an empty direction-set as well, so this pullback is just a pullback of sets: $(p\\tri q)\\times_{(p\\tri\\1)}(p\\tri r) \\iso \\2 \\times_\\2 \\1\\iso\\1$.\n    And indeed we have $p\\tri(qr) \\iso (\\yon+\\1)\\tri\\0 \\iso \\1$ as well.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n\\subsection{Interaction with parallel products} \\label{subsec.comon.comp.prop.par}\n\nWhile we're here, it will be helpful to record the following.\n\\begin{proposition}\nFor any polynomial $q\\in\\poly$, tensoring with $q$ (on either side) preserves connected limits. 
That is, if $\\cat{J}$ is connected and $p\\colon \\cat{J}\\to\\poly$ is a functor, then there is a natural isomorphism:\n\\[\n\t\\left(\\lim_{j\\in \\cat{J}} p_j\\right)\\otimes q\\cong\n\t\\lim_{j\\in \\cat{J}} (p_j\\otimes q).\n\\]\n\\end{proposition}\n\n\n\\begin{proposition}\nFor any polynomials $p,p',q,q'$ there are natural maps\n\\begin{align}\\label{eqn.plus_duoidal}\n\t(p\\tri p')+(q\\tri q')&\\to (p+q)\\tri(p'+q')\\\\\\label{eqn.otimes_duoidal}\n\t(p\\tri p')\\otimes(q\\tri q')&\\to(p\\otimes q)\\tri(p'\\otimes q')\\\\\\label{eqn.times_duoidal}\n\t(p\\tri p')\\times (q\\tri q')&\\from(p\\times q)\\tri(p'\\times q')\n\\end{align}\nmaking $(+,\\tri)$ and $(\\otimes,\\tri)$ duoidal structures and $(\\times,\\tri)$ op-duoidal.\n\\end{proposition}\n\\begin{proof}\nFor \\eqref{eqn.plus_duoidal} we have inclusion maps $p\\to p+q$ and $p'\\to p'+q'$, inducing a map $p\\tri p'\\to(p+q)\\tri(p'+q')$. Similarly we obtain a map $q\\tri q'\\to(p+q)\\tri(p'+q')$, so we get the desired map from the universal property of coproducts. It is straightforward to check that this is duoidal. The result for \\eqref{eqn.times_duoidal} is similar, using projections and the universal property of products.\n\nIt remains to give the map \\eqref{eqn.otimes_duoidal}. A position of $(p\\tri p')\\otimes(q\\tri q')$ consists of positions $i\\in p(\\1)$ and $j\\in q(\\1)$ together with functions $i'\\colon p[i]\\to p'(\\1)$ and $j'\\colon q[j]\\to q'(\\1)$. Send it to the position of $(p\\otimes q)\\tri(p'\\otimes q')$ consisting of $(i,j)$ together with the function $(d,e)\\mapsto(i'(d),j'(e))$; on directions, send $((d,e),(d',e'))$ back to $((d,d'),(e,e'))$. It is again straightforward to check that this is duoidal.\n\\end{proof}\n\n%\\begin{exercise}\\label{exc.plus_duoidal}\n%\\begin{enumerate}\n%\t\\item Give maps $0\\to 0+0$, $\\yon\\to\\yon\\tri\\yon$, and $0\\to\\yon$.\n%\t\\item Check that the following diagrams commute:\n%\\[ \n%\\begin{tikzcd}\n%\t(p\\tri p')+0\\ar[r]&\n%\t(p\\tri p)+(0\\tri 0)\\ar[d]\\\\\n%\tp\\tri p'&\n%\t(p+0)\\tri (p'+0)\\ar[l]\n%\\end{tikzcd}\n%\\hspace{1in}\n%\\begin{tikzcd}\n%\t((p\\tri p')+(q\\tri q'))+(r\\tri r')\\ar[r]\\ar[d]&\n%\t(p\\tri p')+((q\\tri q')+(r\\tri r'))\\ar[r]&\n%\t(p\\tri p')+((q+r)\\tri(q'+r'))\\ar[d]\\\\\n%\t((p+ q)\\tri(p'+ q'))+(r\\tri r')\\ar[r]&\n%\t((p+q)+r)\\tri((p'+q')+r')\\ar[r]&\n%\t(p+(q+r))\\tri(p'+(q'+r'))\n%\\end{tikzcd}\n%\\]\n%\\qedhere\n%\\end{enumerate}\n%\\end{exercise}\n\n\\subsection{Interaction with cartesian lenses} \\label{subsec.comon.comp.prop.cart}\n\n\\begin{proposition}[$\\tri$ preserves cartesian maps in both variables]\\label{prop.comp_pres_cart}\nIf $f\\colon p\\to p'$ and $g\\colon q\\to q'$ are cartesian then so is $(f\\tri g)\\colon(p\\tri q)\\to (p'\\tri q')$.\n\\end{proposition}\n\\begin{proof}\nFor any map $h\\colon A\\to B$ of sets, all faces of the cube\n\\[\n\\begin{tikzcd}[sep=small]\n  pqA\\ar[rr]\\ar[dd]\\ar[dr]&&\n  pqB\\ar[dd]\\ar[dr]\\\\&\n  pq'A\\ar[rr, crossing over]&&\n  pq'B\\ar[dd]\\\\\n  p'qA\\ar[rr]\\ar[dr]&&\n  p'qB\\ar[dr]\\\\&\n  p'q'A\\ar[from=uu, crossing over]\\ar[rr]&&\n  p'q'B\n\\end{tikzcd}\n\\]\nare pullbacks by \\cref{prop.cart_as_nt,thm.connected_limits}. Hence the diagonal square is too, by standard properties of pullbacks.\n\\end{proof}\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Show that if $f$ is an isomorphism and $g$ is vertical then $f\\tri g$ is vertical.\n\t\\item Find a polynomial $q$ and a vertical morphism $f\\colon p\\to p'$ such that $(f\\tri\\id_q)\\colon (p\\tri q)\\to (p'\\tri q)$ is not vertical.\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n    \\item Suppose $f$ is an isomorphism and $g$ is vertical, i.e.\\ its on-positions function $g_\\1$ is a bijection. A position of $p\\tri q$ is a pair $(i,\\bar{j})$ with $i\\in p(\\1)$ and $\\bar{j}\\colon p[i]\\to q(\\1)$, and on positions $f\\tri g$ sends it to $\\left(f_\\1(i),\\; d\\mapsto g_\\1(\\bar{j}(f^\\sharp_i(d)))\\right)$. Since $f_\\1$ and each on-directions map $f^\\sharp_i$ are bijections (because $f$ is an isomorphism) and $g_\\1$ is a bijection, this assignment is itself a bijection, so $f\\tri g$ is vertical.\n    \\item Let $p\\coloneqq\\yon$, $p'\\coloneqq\\1$, and $q\\coloneqq\\2$, and let $f\\colon\\yon\\to\\1$ be the unique lens; it is vertical because $\\yon$ and $\\1$ each have exactly one position. But $\\yon\\tri\\2\\iso\\2$ and $\\1\\tri\\2\\iso\\1$, so on positions $f\\tri\\id_q$ is a function $\\2\\to\\1$, which is not a bijection; hence $f\\tri\\id_q$ is not vertical.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n\\section{Exercise solutions}\n\\Closesolutionfile{solutions}\n{\\footnotesize\n\\input{solution-file5}}\n\n\\Opensolutionfile{solutions}[solution-file6]\n\n%------------ Chapter ------------%\n\\chapter{Polynomial comonoids}\\label{sec.comonoids_in_poly}\n\n\\slogan{Imagine a sort of realm, where there are various positions you can be in. 
From every position, there are a number of moves you can make, possibly infinitely many. But whatever move you make, you'll end up in a new position. Well, technically it counts as a move to simply stay where you are, so you might end up in the same position. But wherever you move to, you can move again, and any number of moves from an original place counts as a single move. What sort of realm is this?}\n\nThe most surprising aspects of $\\poly$ really begin with its comonoids. In 2018, researchers Daniel Ahman and Tarmo Uustalu showed that comonoids in $(\\poly,\\yon,\\tri)$ can be identified with categories. Every category in the usual sense is a comonoid in $\\poly$ and every comonoid in $\\poly$ is a category. We find this revelation to be truly shocking, and it suggests some very different ways to think about categories. Let's go through it.\n\n\\begin{definition}[Comonoid]\\label{def.comonoid}\nA \\emph{comonoid} in a monoidal category $(\\cat{C},\\yon,\\lhd)$\nis a tuple $(c,\\epsilon,\\delta)$ where $c\\in\\cat{C}$ is an object, and $\\epsilon\\colon c\\to\\yon$ and $\\delta\\colon c\\to c\\lhd c$ are maps, such that the following diagrams commute:\n\\begin{equation}\\label{eqn.comonoid_diagrams}\n\\begin{tikzcd}[background color=definitioncolor, row sep=large]\n\t\\yon\\lhd c&c\\ar[d, \"\\delta\" description]\\ar[r, equal]\\ar[l, equal]&c\\lhd\\yon\\\\&\n\tc\\lhd c\\ar[ul, \"\\epsilon\\lhd c\"]\\ar[ur, \"c\\lhd\\epsilon\"']\n\\end{tikzcd}\n\\hspace{.6in}\n\\begin{tikzcd}[row sep=large]\n\tc\\vphantom{\\yon}\\ar[r, \"\\delta\"]\\ar[d, \"\\delta\"']&\n\tc\\lhd c\\ar[d, \"c\\lhd\\delta\"]\\\\\n\tc\\lhd c\\ar[r, \"\\delta\\lhd c\"']&\n\tc\\lhd c\\lhd c\n\\end{tikzcd}\n\\end{equation}\nWe refer to a comonoid $P\\coloneqq(p,\\epsilon,\\delta)$ in $(\\poly,\\yon,\\tri)$ as a \\emph{polynomial comonoid}.\n\\end{definition}\n\nHere's a picture of one of the unit laws:\n\\[\n\\begin{tikzpicture}\n\t\\node (1) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, above=of p] (p') {};\n  \t\\node[poly, cod, identity, right=of p] (q) {};\n  \t\\node[poly, cod, above=of q] (q') {};\n  \t\\draw (p_pos) -- (q_pos);\n  \t\\draw (q_dir) -- node[above] {$\\epsilon^\\sharp$} (p_dir);\n  \t\\draw (p'_pos) -- (q'_pos);\n  \t\\draw (q'_dir) -- (p'_dir);\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$} (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\delta_2$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above] {$\\delta^\\sharp$} (yX_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.8 of 1] (2) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, cod, above=of p] (p') {};\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$} (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\delta_2$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above] {$\\delta^\\sharp$} (yX_dir);\n\t\t\\draw (p_pos) to[climb'] node[right] {$\\epsilon^\\sharp$} (p_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.8 of 2] (3) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (c) {};\n  \t\\node[poly, cod, right=of c] (c') {};\n  \t\\draw[double] (c_pos) -- (c'_pos);\n  \t\\draw[double] (c'_dir) -- (c_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node at ($(1.east)!.5!(2.west)$) {=};\n\t\\node at ($(2.east)!.5!(3.west)$) {=};\n\\end{tikzpicture}\n\\]\nWe'll put the associativity and the other unitality picture in the 
following exercise. The meaning of $\\epsilon$ and $\\delta$ will become clear; for those who want a hint, see \\cref{subsec.understanding_comonoids}.\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Draw the other unitality equation.\n\t\\item Draw the associativity equation.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{example}[$\\delta^{n}$ notation]\\label{ex.delta_n_notation}\nLet $(c,\\epsilon,\\delta)$ be a comonoid. From the associativity of $\\delta$, the two ways to get a map $c\\to c\\tri c\\tri c$ have the same result. This is true for any $n\\in\\nn$: we get an induced map $c\\to c\\tripow{n+1}$, which by mild abuse of notation we denote $\\delta^n$:\n\\[\n\tc\\To{\\delta}c\\tri c\\To{c\\tri\\delta}c\\tri c\\tri c\\To{c\\tripow{2}\\tri\\delta}\\cdots\\To{c\\tripow{n}\\tri\\delta}c\\tripow{(n+1)}.\n\\]\nIn particular, we have $\\delta^1=\\delta$ and we may write $\\delta^0\\coloneqq\\id_c$ and $\\delta^{-1}\\coloneqq\\epsilon$.\n\\end{example}\n\nPolynomial comonoids are usually called \\emph{polynomial comonads}. Though polynomials $p$ can be interpreted as polynomial \\emph{functors} $p\\colon\\smset\\to\\smset$, we do not generally emphasize this part of the story; we use it when it comes in handy, but generally we think of polynomials more as dependent arenas, or sets of corollas.\n\n\\begin{example}[The state comonad $S\\yon^S$]\\label{ex.state_comonad_1}\nLet $S$ be a set, and consider the polynomial $p\\coloneqq S\\yon^S$. It has a canonical comonoid structure---often called the \\emph{state} comonad---as we discussed in \\cref{sec.dynam_in_poly}, page~\\pageref{page.poly_comonad}. To say it in the current language, we first need to give maps $\\epsilon\\colon p\\to \\yon$ and $\\delta\\colon p\\to p\\tri p$. By \\eqref{eqn.composite_formula} and \\eqref{eqn.monomials_and_comp}, this is equivalent to giving functions\n\\begin{align}\\nonumber\n\tS&\\To{\\epsilon'} S&\n\tS&\\To{\\delta'}\\sum_{s'\\in S}\\prod_{s_1\\in S}\\sum_{s_1'\\in S}\\prod_{s_2\\in S}S\\\\\n\\intertext{We take $\\epsilon'$ to be the identity and we take $\\delta'$ to be}\n\ts&\\Mapsto{\\epsilon'} s&\n  s&\\Mapsto{\\delta'} (s'\\coloneqq s, s_1\\mapsto (s_1'\\coloneqq s_1, s_2\\mapsto s_2)).\\label{eqn.state_comonoid_eps_del}\n\\end{align}\nIf you know how to read such things, you'll see that each element $s\\in S$ is just being passed in a straightforward way. But we find this notation cumbersome and prefer the poly-box notation.\n\\begin{equation}\\label{eqn.state_comonoid_eps_del2}\n\\begin{tikzpicture}\n\t\\node (id) {\n  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom, \"$S\\yon^S$\" left] (S) {$s$\\at$s$};\n  \t\\node[poly, identity, right=of S, \"$\\yon$\" right] (y) {\\pphantom{s}{s}};\n  \t\\draw (S_pos) to[first] (y_pos);\n  \t\\draw (y_dir) to[last] (S_dir);\n\t\t\\node at ($(S)!.5!(y)$) {$\\epsilon$};\n  \\end{tikzpicture}\n  };\n  \\node[right=of id] (co) {\n  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom] (p) {$s_2$\\at$s$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$s_1$\\at$s$};\n  \t\\node[poly, cod, above=of q] (r) {$s_2$\\at$s_1$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] node[left] {$\\delta$} (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}  \n  };\n\\end{tikzpicture}\n\\end{equation}\n\\end{example}\n\n\\begin{exercise}\nLet $p\\coloneqq S\\yon^S$. 
For any $n\\in\\nn$, write out the morphism of polynomials $\\delta^n\\colon p\\to p\\tripow{(n+1)}$ either set-theoretically or in terms of poly-boxes as in \\cref{ex.state_comonad_1}\n\\end{exercise}\n\n\\begin{example}[Picturing the comonoid $S\\yon^S$]\\label{ex.picturing_SyS}\nLet's see this whole thing in pictures. First of all, let's take $S\\coloneqq\\3\\cong\\{\\bul[red],\\bul[dgreen],\\bul[blue]\\}$ and draw $\\{\\bul[red],\\bul[dgreen],\\bul[blue]\\}\\yon^{\\{\\bul[red],\\bul[dgreen],\\bul[blue]\\}}$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$\\3\\yon^\\3=$\" left] {\n  \\begin{tikzpicture}[trees, sibling distance=4mm]\n  \t\\foreach \\i/\\c in {1/red, 2/dgreen, 3/blue}\n  \t{\n      \\node[\"\\tiny \\i\" below, \\c] at (1.8*\\i,0) {$\\bullet$} \n        child [red]\n        child [dgreen]\n        child [blue]\n      ;\n  \t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThe map $\\epsilon\\colon S\\yon^S\\to \\yon$ can be drawn as follows:\n\\[\n\\begin{tikzpicture}[trees, bend right]\n\t\\foreach \\i/\\c in {1/red, 2/dgreen, 3/blue}\n\t{\n  \t\\node[\"\\tiny \\i\" below, \\c] (\\i) at (3*\\i, 0) {$\\bullet$} \n    \tchild [red] {coordinate (\\i1)}\n      child [dgreen] {coordinate (\\i2)}\n      child [blue] {coordinate (\\i3)}\n     \t;\n  \t\\node[right=of \\i] (y\\i) {$\\bullet$}\n  \t\tchild{coordinate (y\\i')}\n  \t\t;\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (\\i) -- (y\\i);\n\t\\draw[densely dotted, postaction={decorate}] (y\\i') to (\\i\\i);\n\t};\n\\end{tikzpicture}\n\\]\nIt picks out one direction at each position, namely the one of the same color.\n\nThe map $\\delta\\colon S\\yon^S\\to (S\\yon^S)\\tripow{2}$ can be drawn as follows:\n\\begin{equation}\\label{eqn.state_comonoid_trees}\n\\begin{tikzpicture}[trees, \n  level 1/.style={sibling distance=5mm},\n  level 2/.style={sibling distance=1.5mm},\n\tbend right=60]\n\t\\foreach \\i/\\c in {1/red, 2/dgreen, 3/blue}\n\t{\n  \t\\node[\\c] (\\i) at (4*\\i, 0) {$\\bullet$} \n    \tchild [red] {coordinate (\\i1)}\n      child [dgreen] {coordinate (\\i2)}\n      child [blue] {coordinate (\\i3)}\n     \t;\n  \t\\node[right=1.7 of \\i, \\c] (SS\\i) {$\\bullet$}\n  \t\tchild [red] {node (S\\i1) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i11)}\n\t\t\t\tchild [dgreen] {coordinate (\\i12)} \n\t\t\t\tchild [blue] {coordinate (\\i13)}\n\t\t\t\t}\n  \t\tchild [dgreen] {node (S\\i2) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i21)}\n\t\t\t\tchild [dgreen] {coordinate (\\i22)} \n\t\t\t\tchild [blue] {coordinate (\\i23)}\n\t\t\t\t}\n  \t\tchild [blue] {node (S\\i3) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i31)}\n\t\t\t\tchild [dgreen] {coordinate (\\i32)} \n\t\t\t\tchild [blue] {coordinate (\\i33)}\n\t\t\t\t}\n  \t\t;\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (\\i) -- (SS\\i);\n\t\\foreach \\j in {1,2,3}\n\t{\n\t\t\\foreach \\k\\d in {1/red, 2/dgreen, 3/blue}\n\t\t{\n\t\t\t\\draw[densely dotted, postaction={decorate}, \\d] (\\i\\j\\k) to (\\i\\k);\n\t\t};\n\t};\n\t};\n\\end{tikzpicture}\n\\end{equation}\nNote that $(S\\yon^S)\\tripow{2}$ has $SS^S$, or in this case $\\8\\1$ many trees, only three of which are being pointed to by $\\delta$. 
That is, there is in general no rule on trees that says the color of an arrow should agree in any sense with the color of the node it points to: \\eqref{eqn.state_comonoid_trees} shows that the comonoid structure is pointing out the special trees where that does occur.\n\nIt remains to check the comonoid laws, the three commutative diagrams in \\eqref{eqn.comonoid_diagrams}. The first two say that the composites\n\\[\nS\\yon^S\\To{\\delta}(S\\yon^S)\\tripow{2}\\To{\\id\\tri\\epsilon}S\\yon^S\n\\qqand\nS\\yon^S\\To{\\delta}(S\\yon^S)\\tripow{2}\\To{\\epsilon\\tri \\id}S\\yon^S\n\\]\nare the identity. Let's return to the case $S=\\3$. Then the second map in each case involves 81 different assignments, but only three of them will matter.%\n\\footnote{To say technically that we can disregard all but three positions in $(S\\yon^S)\\tripow{2}\\cong SS^S\\yon^{SS}$, one can use \\cref{prop.vert_cart_factorization}.} \nSince all three are strongly similar, we will draw only the red case. We also draw only the relevant passback maps.\n\\[\n\\begin{tikzpicture}[trees, \n  level 1/.style={sibling distance=5mm},\n  level 2/.style={sibling distance=1.5mm},\n\tbend right=60, \n\t]\n\t\\foreach \\i/\\c in {1/red}\n\t{\n  \t\\node[\\c] (\\i) at (4*\\i, 0) {$\\bullet$} \n    \tchild [red] {coordinate (\\i1)}\n      child [dgreen] {coordinate (\\i2)}\n      child [blue] {coordinate (\\i3)}\n     \t;\n  \t\\node[right=1.7 of \\i, \\c] (SS\\i) {$\\bullet$}\n  \t\tchild [red] {node (S\\i1) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i11)}\n\t\t\t\tchild [dgreen] {coordinate (\\i12)} \n\t\t\t\tchild [blue] {coordinate (\\i13)}\n\t\t\t\t}\n  \t\tchild [dgreen] {node (S\\i2) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i21)}\n\t\t\t\tchild [dgreen] {coordinate (\\i22)} \n\t\t\t\tchild [blue] {coordinate (\\i23)}\n\t\t\t\t}\n  \t\tchild [blue] {node (S\\i3) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i31)}\n\t\t\t\tchild [dgreen] {coordinate (\\i32)} \n\t\t\t\tchild [blue] {coordinate (\\i33)}\n\t\t\t\t}\n  \t\t;\n  \t\\node[\\c] (\\i') at (8*\\i, 0) {$\\bullet$} \n      child [red] {node[black] {$\\bullet$}\n      \tchild [black] {coordinate (11'')}\n\t\t\t}\n      child [dgreen] {node[black] {$\\bullet$}\n      \tchild [black] {coordinate (12'')}\n\t\t\t}\n      child [blue] {node[black] {$\\bullet$}\n      \tchild [black] {coordinate (13'')}\n\t\t\t}\n     \t;\n\t\t;\n  \t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (\\i) -- (SS\\i);\n  \t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (SS\\i) -- (\\i');\n\t\t\\foreach \\j in {1,2,3}\n\t\t{\n\t\t\\draw[densely dotted, postaction={decorate}] (1\\j'') to (1\\j\\j);\n\t\t\\draw[densely dotted, postaction={decorate}] (1\\j\\j) to (1\\j);\n\t\t};\n\t};\n\\end{tikzpicture}\n\\]\n\\[\n\\begin{tikzpicture}[trees, \n  level 1/.style={sibling distance=5mm},\n  level 2/.style={sibling distance=1.5mm},\n\tbend right=60, \n\t]\n\t\\foreach \\i/\\c in {1/red}\n\t{\n  \t\\node[\\c] (\\i) at (4*\\i, 0) {$\\bullet$} \n    \tchild [red] {coordinate (\\i1)}\n      child [dgreen] {coordinate (\\i2)}\n      child [blue] {coordinate (\\i3)}\n     \t;\n  \t\\node[right=1.7 of \\i, \\c] (SS\\i) {$\\bullet$}\n  \t\tchild [red] {node (S\\i1) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i11)}\n\t\t\t\tchild [dgreen] {coordinate (\\i12)} \n\t\t\t\tchild [blue] {coordinate (\\i13)}\n\t\t\t\t}\n  \t\tchild [dgreen] {node (S\\i2) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i21)}\n\t\t\t\tchild [dgreen] {coordinate 
(\\i23)}\n\t\t\t\t}\n  \t\tchild [blue] {node (S\\i3) {$\\bullet$} \n\t\t\t\tchild [red] {coordinate (\\i31)}\n\t\t\t\tchild [dgreen] {coordinate (\\i32)} \n\t\t\t\tchild [blue] {coordinate (\\i33)}\n\t\t\t\t}\n  \t\t;\n  \t\\node[black] (\\i') at (8*\\i, 0) {$\\bullet$}\n\t\t\tchild {node[red] {$\\bullet$} \n        child [red] {coordinate (11'')}\n        child [dgreen] {coordinate (12'')}\n        child [blue] {coordinate (13'')}\n      }\n     \t;\n\t\t;\n  \t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (\\i) -- (SS\\i);\n  \t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (SS\\i) -- (\\i');\n\t\t\\foreach \\j in {1,2,3}\n\t\t{\n\t\t\\draw[densely dotted, postaction={decorate}] (1\\j'') to (11\\j);\n\t\t\\draw[densely dotted, postaction={decorate}] (11\\j) to (1\\j);\n\t\t};\n\t};\n\\end{tikzpicture}\n\\]\nWe do not show associativity here, but instead leave it to the reader in \\cref{exc.state_comonoid_assoc}.\n\\end{example}\n\n\\begin{exercise}\\label{exc.state_comonoid_assoc}\nLet $S\\coloneqq\\2$ and $c\\coloneqq \\2\\yon^\\2$.\n\\begin{enumerate}\n\t\\item Draw $c$ using color if possible.\n\t\\item We know $c$ is supposed to be the carrier of a comonoid $(c,\\epsilon,\\delta)$. Which two maps $ c\\to c\\tripow{3}$ are supposed to be equal by associativity?\n\t\\item Draw these two maps in the style of \\cref{ex.picturing_SyS}.\n\t\\item Are they equal?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}[Monoid actions]\nSuppose that $(M,e,*)$ is a monoid, $S$ is a set, and $\\alpha\\colon M\\times S\\to S$ is an $M$-action. \n\\begin{enumerate}\n\t\\item Show that $S\\yon^M$ forms a comonoid. \n\t\\item Show that the projection $S\\yon^M\\to\\yon^M$ is a comonoid homomorphism.\n\t\\item $M$ always acts on itself by multiplication. Is the associated comonoid structure on $M\\yon^M$ the same or different from the one coming from \\cref{ex.state_comonad_1}?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n%---- Section ----%\n\\section{Speeding up dynamical systems}\n\nSuppose we have a dynamical system $f\\colon S\\yon^S\\to p$, and we want to make it go $k$-times faster. That is, in every moment, we want it to process $k$-many inputs, rather than one. \n\nSince $S\\yon^S$ has the structure of a comonoid, we know that for every $k\\in\\nn$ we have a map $\\delta^{k-1}\\colon S\\yon^S\\to (S\\yon^S)\\tripow{k}$ by \\cref{ex.delta_n_notation}. But we also have maps $f\\tripow{k}\\colon (S\\yon^S)\\tripow{k}\\to p\\tripow{k}$ because $\\tri$ is a monoidal product. Thus we can form the composite\n\\begin{equation}\\label{eqn.speedup}\n\\begin{tikzcd}\n\tS\\yon^S\\ar[r, \"\\delta^{k-1}\"]\\ar[rr, bend right, \"{\\text{Spdup}_k(f)}\"']&\n\t(S\\yon^S)\\tripow{k}\\ar[r, \"f\\tripow{k}\"]&\n\tp\\tripow k\n\\end{tikzcd}\n\\end{equation}\nFor every state $s\\in S$, we now have a length-$k$ strategy in $p$, i.e.\\ a tree of height $k$ in $p$, or explicitly a choice of $p$-position and, for every direction there, another $p$-position and so on $k$ times. 
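\nIn code, the same speedup can be sketched for the special case of a dynamical system whose interface is a monomial $B\\yon^A$, i.e.\\ a Moore machine. The following minimal Haskell sketch (the names are ours, for illustration only) processes $k$ inputs in one moment:\n\\begin{verbatim}\n-- A dynamical system S y^S -> B y^A: each state emits an output in b,\n-- and each input in a moves us to a new state.\ndata Moore s a b = Moore { emit :: s -> b, move :: s -> a -> s }\n\n-- Spdup_k(f): consume k inputs at once, returning the k outputs\n-- produced along the way together with the final state.\nspeedup :: Int -> Moore s a b -> s -> [a] -> ([b], s)\nspeedup k f = go k\n  where\n    go 0 s _      = ([], s)\n    go _ s []     = ([], s)\n    go n s (a:as) = let (bs, s') = go (n - 1) (move f s a) as\n                    in  (emit f s : bs, s')\n\\end{verbatim}\nFor instance, a counter with \\texttt{emit = id} and \\texttt{move n \\_ = n + 1}, sped up by $k=3$ and started at state $0$, emits $(0,1,2)$ and ends in state $3$.\n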
Here is a poly-box drawing for the $k=3$ case:\n\\[\n\\begin{tikzpicture}\n\t\\node (given) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue, \"$S\\yon^S$\" left] (S) {};\n\t\t\\node[poly, cod, dgreen, right=of S, \"$p$\" right] (p) {};\n\t\t\\draw (S_pos) to[first] (p_pos);\n\t\t\\draw (p_dir) to[last]  (S_dir);\n\t\t\\node at ($(S.east)!.5!(p.west)$) {$f$};\n\t\\end{tikzpicture}\n\t};\n\t\\node[right=of given] (obtain) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue, \"$S\\yon^S$\" left] (S) {};\n\t\t\\node[poly, blue, right=of S] (S2) {};\n\t\t\\node[poly, blue, below=of S2] (S1) {};\n\t\t\\node[poly, blue, above=of S2] (S3) {};\n\t\t\\node[poly, dgreen, cod, right=of S1] (p1) {};\n\t\t\\node[poly, dgreen, cod, right=of S2] (p2) {};\n\t\t\\node[poly, dgreen, cod, right=of S3] (p3) {};\n%\n\t\t\\draw (S1_pos) to[first] (p1_pos);\n\t\t\\draw (p1_dir) to[last] (S1_dir);\t\t\n\t\t\\draw (S2_pos) to[first] (p2_pos);\n\t\t\\draw (p2_dir) to[last]  (S2_dir);\t\t\n\t\t\\draw (S3_pos) to[first] (p3_pos);\n\t\t\\draw (p3_dir) to[last]  (S3_dir);\n\t\t\\draw[blue] (S_pos) to[first] (S1_pos);\n\t\t\\draw[blue] (S1_dir) to[climb] (S2_pos);\n\t\t\\draw[blue] (S2_dir) to[climb] (S3_pos);\n\t\t\\draw[blue] (S3_dir) to[last] (S_dir);\n\t\t\\node[blue] at ($(S.east)!.4!(S2.west)$) {$\\delta^2$};\n\t\t\\node at ($(S1.east)!.4!(p1.west)$) {$f$};\n\t\t\\node at ($(S2.east)!.4!(p2.west)$) {$f$};\n\t\t\\node at ($(S3.east)!.4!(p3.west)$) {$f$};\t\t\n  \\end{tikzpicture}\t\n\t};\n\t\\node[above] at (obtain.north) (obtain_lab) {Obtain for $k=3$};\n\t\\node at (given|-obtain_lab) {Given $f\\colon S\\yon^S\\to p$};\n\\end{tikzpicture}\n\\]\n\n\\begin{example}\nLet $p\\coloneqq\\rr\\yon^\\1$, let $S\\coloneqq\\nn$, and let $\\lens{f^\\sharp}{f_1}\\colon S\\yon^S\\to p$ be given by $f_1(n)\\coloneqq n$ and $f^\\sharp(n,1)\\coloneqq n+1$. What is this speedup map $\\text{Spdup}_k(f)$? First of all, its type is \n\\[\\text{Spdup}_k(f)\\colon S\\yon^S\\to\\rr^k\\yon,\\]\nmeaning that it has the same set of states as before, but it outputs $k$-many reals in every moment. 
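Indeed, by \\eqref{eqn.composite_formula} we have $\\rr\\yon\\tri q\\iso\\rr q$ for any polynomial $q$: composing the linear polynomial $\\rr\\yon$ with $q$ simply makes $\\rr$-many copies of $q$. By induction, $(\\rr\\yon)\\tripow{k}\\iso\\rr^k\\yon$.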
\n\nSo for example with $k=3$ here is one moment of output:\n\\[\n\\begin{tikzpicture}[polybox, mapstos]\n\t\t\\node[poly, dom, \"$S\\yon^S$\" left] (S) {$n+3$\\vphantom{!}\\at$n$\\vphantom{!}};\n\t\t\\node[poly, right=of S] (S2) {$n+2$\\vphantom{!}\\at$n+1\\vphantom{!}$};\n\t\t\\node[poly, below=of S2] (S1) {$n+1$\\vphantom{!}\\at$n\\vphantom{!}$};\n\t\t\\node[poly, above=of S2] (S3) {$n+3$\\vphantom{!}\\at$n+2\\vphantom{!}$};\n\t\t\\node[poly, linear cod, right=of S1, \"$\\rr\\yon$\" right] (p1) {$!$\\at$\\hphantom{+}n\\hphantom{+}$\\vphantom{!}};\n\t\t\\node[poly, linear cod, right=of S2, \"$\\rr\\yon$\" right] (p2) {$!$\\at$n+1$\\vphantom{!}};\n\t\t\\node[poly, linear cod, right=of S3, \"$\\rr\\yon$\" right] (p3) {$!$\\at$n+2$\\vphantom{!}};\n%\n\t\t\\draw (S1_pos) to[first] (p1_pos);\n\t\t\\draw (p1_dir) to[last] (S1_dir);\t\t\n\t\t\\draw (S2_pos) to[first] (p2_pos);\n\t\t\\draw (p2_dir) to[last]  (S2_dir);\t\t\n\t\t\\draw (S3_pos) to[first] (p3_pos);\n\t\t\\draw (p3_dir) to[last]  (S3_dir);\n\t\t\\draw (S_pos) to[first] (S1_pos);\n\t\t\\draw (S1_dir) to[climb] (S2_pos);\n\t\t\\draw (S2_dir) to[climb] (S3_pos);\n\t\t\\draw (S3_dir) to[last] (S_dir);\n\\end{tikzpicture}\n\\]\nSo for example starting at initial state $n=0$, we get the following output stream, e.g.\\ for 4 seconds:\n\\[(0,1,2),(3,4,5),(6,7,8),(9,10,11).\\]\n\\end{example}\n\n%-------- Section --------%\n\\section{Other comonoids}\n\nOnce you know that these all-important $S\\yon^S$-things are comonoids in $\\poly$, it's interesting to ask ``what are all the comonoids in $\\poly$?'' Let's discuss another one before answering the question in generality.\n\n\\begin{example}[A simple comonoid that's not $S\\yon^S$]\\label{ex.walking_arrow}\nThe polynomial $\\yon^\\2+\\yon$ can be given a comonoid structure. Let's first associate names to its positions and directions. \n\nDefine $w\\coloneqq\\{A\\}\\yon^{\\{i_A,f\\}}+\\{B\\}\\yon^{\\{i_B\\}}$; it is clearly isomorphic to $\\yon^\\2+\\yon$, but its notation is meant to remind the reader of the walking arrow category\n\\[\n\\cat{W}\\coloneqq\\boxCD{examplecolor}{\n$A\\Too{f}B$\n}\n\\]\nWe will use the category $\\cat{W}$ as inspiration for equipping $w$ with a comonoid structure $(w,\\epsilon,\\delta)$. The map $\\epsilon$ will pick out identity arrows and the map $\\delta$ will tell us about codomains and composition (which is rather trivial in the case of $\\cat{W}$). Here's a picture of $w\\cong\\yon^\\2+\\yon$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$w\\coloneqq$\" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node[\"\\tiny $A$\" below, red] (1) {$\\bullet$} \n      child  {coordinate (iA) \\idchild}\n      child {coordinate (f)};\n    \\node[right=.8 of 1,\"\\tiny $B$\" below, blue] (2) {$\\bullet$} \n      child  {coordinate (iB) \\idchild};\n    \\node[below left=0 of iA, font=\\tiny] {$i_A$};\n    \\node[below left=0 of iB, font=\\tiny] {$i_B$};\n    \\node[below right=0 of f, font=\\tiny] {$f$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\n\nWe first need to choose a map of polynomials $\\epsilon\\colon w\\to\\yon$; it can be identified with a dependent function $\\epsilon^\\sharp\\colon (o\\in w(\\1))\\to w[o]$, assigning to each position a direction there. 
Let's take $\\epsilon^\\sharp(A)\\coloneqq i_A$ and $\\epsilon^\\sharp(B)\\coloneqq i_B$:\n\\[\n\\begin{tikzpicture}[trees, bend right=60]\n  \\node[red] (1) {$\\bullet$} \n  \tchild  {coordinate (11) \\idchild}\n    child {coordinate (12)};\n  \\node[right=1.5 of 1] (1y) {$\\bullet$}\n  \tchild {coordinate (1y1)};\n%\n  \\node[right=2 of 1y, blue] (2) {$\\bullet$} \n  \tchild  {coordinate (21) \\idchild};\n  \\node[right=1.5 of 2] (2y) {$\\bullet$}\n  \tchild {coordinate (2y1)};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1) -- (1y);\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (2) -- (2y);\n\t\\draw[densely dotted, postaction={decorate}] (1y1) to (11);\n\t\\draw[densely dotted, postaction={decorate}] (2y1) to (21);\n\\end{tikzpicture}\n\\]\nNow we need a map of polynomials $\\delta\\colon w\\to w\\tri w$. Let's draw out $w\\tri w$ to see what it looks like.\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, \"$w\\tri w=$\" left] {\n\t\\begin{tikzpicture}[trees,\n\t  level 1/.style={sibling distance=5mm},\n  \tlevel 2/.style={sibling distance=2.5mm}]\n    \\node[red] (1) {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t}\n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t};\n    \\node[right=1 of 1, red] (2) {$\\bullet$} \n      child  {\n        node [red]{$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t};\n    \\node[right=1 of 2, red] (3) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\t\\idchild\n\t\t\t}\n      child  {\n        node [red] {$\\bullet$} \n \t\t    child {\\idchild}\n      \tchild {}\n\t\t\t};\n    \\node[right=1 of 3, red] (4) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t};\n    \\node[right=.8 of 4, blue] (5) {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t};\n    \\node[right=.6 of 5, blue] (6) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\\idchild\n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThe map $\\delta$ is going to tell us both about codomains and composition. 
Here it is:\n\\[\n\\begin{tikzpicture}[trees, sibling distance=5mm,\tbend right=60]\n\t\\node (1A) [red] {$\\bullet$} \n  \tchild  {coordinate (1A1) \\idchild}\n    child {coordinate (1A2)};\n  \\node (2A) [right=1.5 of 1A, red] {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {coordinate (2A1) \\idchild}\n      \tchild {coordinate (2A2)}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {coordinate (2A3) \\idchild}\n\t\t\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1A) -- (2A);\n\t\\draw[densely dotted, postaction={decorate}] (2A1) to (1A1);\n\t\\draw[densely dotted, postaction={decorate}] (2A2) to (1A2);\n\t\\draw[densely dotted, postaction={decorate}] (2A3) to (1A2);\n%\n  \\node[right=2 of 2A, blue] (1B) {$\\bullet$} \n  \tchild  {coordinate (1B1) \\idchild};\n  \\node[right=1.5 of 1B, blue] (2B) {$\\bullet$} \n  \tchild {node [blue] {$\\bullet$} \n    child  {coordinate (2B1) \\idchild}\n\t\t\\idchild\n\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1B) -- (2B);\n\t\\draw[densely dotted, postaction={decorate}] (2B1) to (1B1);\n\\end{tikzpicture}\n\\]\nThe on-positions map selects, for each position (either $A$ or $B$) the two-level tree starting at that position and having the correct codomains: the identity arrow on $A$ points to the corolla for $A$; the $f$ map points to the corolla for $B$; and the identity arrow on $B$ points to the corolla for $B$. The on-directions maps assign the correct composites. Here is $\\delta\\colon w\\to w\\tri w$ again, in terms of poly-boxes.\n\\[\n\\begin{tikzpicture}\n\t\\node (p1) {\n\t  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom] (p) {$i_A$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$i_A$\\at$A$};\n  \t\\node[poly, cod, above=of q] (r) {$i_A$\\at$A$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}  \n\t};\n\t\\node[right=1 of p1] (p2) {\n\t  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom] (p) {$f$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$i_A$\\at$A$};\n  \t\\node[poly, cod, above=of q] (r) {$f$\\at$A$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}  \n\t};\n\t\\node[right=1 of p2] (p3) {\n\t  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom] (p) {$f$\\at$A$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$f$\\at$A$};\n  \t\\node[poly, cod, above=of q] (r) {$i_B$\\at$B$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}  \n\t};\n\t\\node[right=1 of p3] (p4) {\n\t  \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom] (p) {$i_B$\\at$B$};\n  \t\\node[poly, cod, right= of p.south, yshift=-1ex] (q) {$i_B$\\at$B$};\n  \t\\node[poly, cod, above=of q] (r) {$i_B$\\at$B$};\n  \t\\draw (p_pos) to[first] (q_pos);\n  \t\\draw (q_dir) to[climb] (r_pos);\n  \t\\draw (r_dir) to[last] (p_dir);\n  \\end{tikzpicture}  \n\t};\n\\end{tikzpicture}\n\\]\n\nIt remains to check that $(w,\\epsilon,\\delta)$ really is a comonoid, i.e.\\ that the diagrams in \\eqref{eqn.comonoid_diagrams} commute. 
We will check unitality only for $A$; it is easier for $B$.\n\\[\n\\begin{tikzpicture}[trees, sibling distance=5mm,\tbend right=60]\n\t\\node (1A) [red] {$\\bullet$} \n  \tchild  {coordinate (1A1) \\idchild}\n    child {coordinate (1A2)};\n  \\node (2A) [right=1.5 of 1A, red] {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {coordinate (2A1) \\idchild}\n      \tchild {coordinate (2A2)}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {coordinate (2A3) \\idchild}\n\t\t\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1A) -- (2A);\n\t\\draw[densely dotted, postaction={decorate}] (2A1) to (1A1);\n\t\\draw[densely dotted, postaction={decorate}] (2A2) to (1A2);\n\t\\draw[densely dotted, postaction={decorate}] (2A3) to (1A2);\n\t\\node (3A) [right=1.5 of 2A, red] {$\\bullet$}\n\t\tchild {\n\t\t\tnode {$\\bullet$}\n\t\t\tchild {coordinate (3A1)}\n\t\t\\idchild\n\t\t}\n\t\tchild {\n\t\t\tnode {$\\bullet$}\n\t\t\tchild {coordinate (3A2)}\n\t\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (2A) -- (3A);\n\t\\draw[densely dotted, postaction={decorate}] (3A1) to (2A1);\n\t\\draw[densely dotted, postaction={decorate}] (3A2) to (2A3);\n\\end{tikzpicture}\n\\hspace{1in}\n\\begin{tikzpicture}[trees, sibling distance=5mm,\tbend right=60]\n\t\\node (1A) [red] {$\\bullet$} \n  \tchild  {coordinate (1A1) \\idchild}\n    child {coordinate (1A2)};\n  \\node (2A) [right=1.5 of 1A, red] {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {coordinate (2A1) \\idchild}\n      \tchild {coordinate (2A2)}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {coordinate (2A3) \\idchild}\n\t\t\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (1A) -- (2A);\n\t\\draw[densely dotted, postaction={decorate}] (2A1) to (1A1);\n\t\\draw[densely dotted, postaction={decorate}] (2A2) to (1A2);\n\t\\draw[densely dotted, postaction={decorate}] (2A3) to (1A2);\n\t\\node (3A) [right=1.5 of 2A] {$\\bullet$}\n\t\tchild {\n        node [red] {$\\bullet$} \n \t\t    child  {coordinate (3A1) \\idchild}\n      \tchild {coordinate (3A2)}\n\t\t};\n\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (2A) -- (3A);\n\t\\draw[densely dotted, postaction={decorate}] (3A1) to (2A1);\n\t\\draw[densely dotted, postaction={decorate}] (3A2) to (2A2);\n\\end{tikzpicture}\n\\]\nIn both pictures, one can see that the composite map is the identity. 
We would check associativity here too, but because the category $\cat{W}$ is so simple, associativity is automatic, and the pictures would be too trivial to be instructive.
\end{example}


\begin{exercise}
Write out the data $(c,\epsilon,\delta)$ for the comonoid corresponding to the category 
\[
\boxCD{exercisecolor}{$B\From{f}A\To{g}C$}
\]
For this exercise, you are not being asked to check the unitality or associativity conditions.
\end{exercise}

\begin{exercise}\label{exc.linear_poly_comon}
Show that if $A$ is a set and $p\coloneqq A\yon$ is the associated linear polynomial, then there exists a unique comonoid structure on $p$.
\end{exercise}

%
%\begin{exercise}\label{ex.fleece}
%\begin{enumerate}
%	\item Use \cref{ex.const_to_const} to show that for any polynomial comonoid $(c,\epsilon,\delta)$, the polynomial $c$ is divisible by $\yon$.
%	\item Show that there exists a polynomial $\fl{c}$ and a vertical morphism $\phi_c\colon c\to \fl{c}$ such that the induced map $(\phi_c,\epsilon)\colon c\to \fl{c}\yon$ is an isomorphism, i.e.\ $\phi_c$ is a product projection.
%	\item Show that $\phi_c$ is an epimorphism using \cref{prop.epis_in_poly}.
%\qedhere
%\end{enumerate}
%\end{exercise}
%
%\begin{definition}\label{def.fleece}
%Suppose $\com{C}=(c,\epsilon,\delta)$ is a comonoid, and let $\phi_c\colon c\to \fl{c}$ be as in \cref{ex.fleece}. We refer to the polynomial $\fl{c}$ as the \emph{fleece} of $c$ and $\phi_c$ as the \emph{fleecing map}.
%
%If there exists a map $\fl{\delta}\colon \fl{c}\to \fl{c}\tri \fl{c}$ making the diagram
%\[
%\begin{tikzcd}
%	c\ar[r, "\delta"]\ar[d, "\phi_c"']&
%	c\tri c\ar[d, "\phi_c\tri\phi_c"]\\
%	\fl{c}\ar[r, "\fl{\delta}"']&
%	\fl{c}\tri \fl{c}
%\end{tikzcd}
%\]
%commute,%
%\footnote{Note that such a morphism $\fl{\delta}$ must be unique since $\phi_c$ is an epimorphism.}
%we say that $(\fl{c},\fl{\delta})$ is the \emph{fleece comagma} of $\com{C}$.
%\end{definition}
%
%By using just the fleece, we can leave off identity arrows, making things just that much easier to draw. (We call it ``fleece'' because to some extent we're cheating the comonoid of its identities, and also because we're taking just its wool, leaving the identity behind.)
%
%\begin{example}[Associativity]\label{ex.associativity_pics}
%Consider the non-category drawn here:
%\[
%\boxCD{examplecolor}{
%\begin{tikzcd}[ampersand replacement=\&]
%	A\ar[r, "f"]\ar[rrr, bend left, "i"]\ar[rrr, bend right, "j"]\&
%	B\ar[r, "g"]\&
%	C\ar[r, "h"]\&
%	D
%\end{tikzcd}
%\leavevmode\\\bigskip
%$(f\then g)\then h=i\qqand f\then(g\then h)=j$
%}
%\]
%We can represent this in a polynomial $p$ with a map $\delta\colon p\to p\tri p$, and see that it is not associative. To show this it suffices to consider the fleece comagma $\fl{\delta}\colon \fl{p}\to \fl{p}\tri \fl{p}$ (see \cref{def.fleece}), so that $\fl{p}\cong \yon^\4+\yon^\2+\yon+\1$. Let's rename $\fl{p}$ to an isomorphic polynomial with more names around:
%\[
%\fl{p}\coloneqq\{A\}\yon^{\{f,f\then g, i, j\}}+\{B\}\yon^{\{g,g\then h\}}+\{C\}\yon^{\{h\}}+\{D\}.
%\]
%Every symbol used there, excluding $\yon$, +, $\{$, and $\}$, but including ``$f$'' and ``$f\then g$'', etc., is just a variable name. 
Below is a picture of $\\fl{p}$, where we use arrow lengths to help us remember which arrow is which: $f$, $g$, and $h$ are small, $f\\then g$ and $g\\then h$ are longer, and $i$ and $j$ are longest:\n%\\[\n%\\begin{tikzpicture}[rounded corners]\n%\\node[draw, \"$\\fl{p}$\" above] {\n%\\begin{tikzpicture}[trees, sibling distance=1cm]\n%\t\\node[\"\\tiny $A$\" below, red] (A) {$\\bullet$}\n%  \tchild[level distance=4mm] {coordinate (1)}\n%\t\tchild[level distance=8mm] {coordinate (2)}\n%\t\tchild[level distance=12mm] {coordinate (3)}\n%\t\tchild[level distance=12mm] {coordinate (4)};\n%%\n%\t\\node[\"\\tiny $B$\" below, blue, right=3 of A] (B) {$\\bullet$}\n%\t\tchild[level distance=4mm] {coordinate (B1)}\n%\t\tchild[level distance=8mm] {coordinate (B2)};\n%%\n%\t\\node[\"\\tiny $C$\" below, dgreen, right=2 of B] (C) {$\\bullet$}\n%\t\tchild[level distance=4mm] {coordinate (C1)};\n%%\n%\t\\node[\"\\tiny $D$\" below, right=1 of C] (D) {$\\bullet$};\n%\t\\begin{scope}[font=\\scriptsize]\n%  \t\\node[below left=1mm of 1] {$f$};\n%  \t\\node[below left=1mm of 2] {$f\\then g$};\n%  \t\\node[below right=1mm of 3] {$i$};\n%  \t\\node[below right=1mm of 4] {$j$};\n%  \t\\node[below left=1mm of B1] {$g$};\n%  \t\\node[below right=1mm of B2] {$g\\then h$};\n%  \t\\node[below left=1mm of C1] {$h$};\n%\t\\end{scope}\n%\\end{tikzpicture}\n%};\n%\\end{tikzpicture}\n%\\]\n%Here's a picture of our map $\\fl{\\delta}\\colon \\fl{p}\\to \\fl{p}\\tri \\fl{p}$.\n%\\begin{equation}\\label{eqn.delta_misc33}\n%\\begin{tikzpicture}[trees, bend right=60,\n%level 1/.style={sibling distance=5mm},\n%level 2/.style={sibling distance=2.5mm}]\n%\t\\node [red] (A1) {$\\bullet$}\n%\t\tchild[level distance =4mm] {coordinate (A11)}\n%\t\tchild[level distance =8mm] {coordinate (A12)}\n%\t\tchild[level distance =12mm] {coordinate (A13)}\n%\t\tchild[level distance =12mm] {coordinate (A14)};\n%\t\\node [right=2.5 of A1, red] (A2) {$\\bullet$}\n%\t\tchild[level distance =4mm] {node [blue]{$\\bullet$}\n%\t\t\tchild[level distance=4mm] {coordinate (A21)}\n%\t\t\tchild[level distance=8mm] {coordinate (A22)}\n%\t\t}\n%\t\tchild[level distance =8mm] {node [dgreen] {$\\bullet$}\n%\t\t\tchild[level distance=4mm] {coordinate (A23)}\n%\t\t}\n%\t\tchild[level distance =12mm] {node {$\\bullet$}}\n%\t\tchild[level distance =12mm] {node {$\\bullet$}};\t\t\t\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (A1) -- (A2);\n%\t\\draw[densely dotted, postaction={decorate}] (A21) to (A12);\n%\t\\draw[densely dotted, postaction={decorate}] (A22) to (A14);\n%\t\\draw[densely dotted, postaction={decorate}] (A23) to (A13);\n%%\n%\t\\node [blue, right=1.5 of A2] (B1) {$\\bullet$}\n%\t\tchild[level distance=4mm] {coordinate (B11)}\n%\t\tchild[level distance=8mm] {coordinate (B12)};\n%\t\\node [blue, right=1.5 of B1] (B2) {$\\bullet$}\n%\t\tchild[level distance=4mm] {node[dgreen] {$\\bullet$}\n%\t\t\tchild {coordinate (B21)}\n%\t\t}\n%\t\tchild[level distance=8mm] {node {$\\bullet$}};\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (B1) -- (B2);\n%\t\\draw[densely dotted, postaction={decorate}] (B21) to (B12);\n%%\n%\t\\node [dgreen, right=1.5 of B2] (C1) {$\\bullet$}\n%\t\tchild[level distance=4mm] {coordinate (C11)};\n%\t\\node[right=1 of C1, dgreen] (C2) {$\\bullet$}\n%\t\tchild[level distance=4mm] {node {$\\bullet$}\n%\t};\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (C1) -- (C2);\n%%\n%\t\\node [right=1 of C2] (D1) {$\\bullet$};\n%\t\\node [right=1 of D1] (D2) {$\\bullet$};\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (D1) -- 
(D2);\t\n%\\end{tikzpicture}\n%\\end{equation}\n%One should check in the red position that $(f\\then g)\\then h$ is going to $i$ and that $f\\then (g\\then h)$ is going to $j$. This should cause a failure in the associativity of $\\delta$ and hence of $\\fl{\\delta}$.%\n%\\footnote{Note that if $\\fl{\\delta}$ is not associative, then $\\delta$ cannot be associative either; see \\cref{ex.fleece_assoc}.}\n%\n%Now we draw $\\delta\\then(\\delta\\tri \\fl{p})$ and X$\\delta\\then(\\fl{p}\\tri\\delta)$ and see that they disagree at the $A$ position. Here is $\\delta\\then(\\delta\\tri \\fl{p})$ whose second map $\\delta\\tri \\fl{p}$ does $\\delta$ as in \\eqref{eqn.delta_misc33} on the bottom layer and copies the top layer; to differentiate the intermediary $\\delta$, which comes before the top layer, we use dashed lines:\n%\\[\n%\\begin{tikzpicture}[trees, bend right=60,\n%level 1/.style={sibling distance=5mm},\n%level 2/.style={sibling distance=2.5mm}]\n%\t\\node [red] (A1) {$\\bullet$}\n%\t\tchild[level distance =4mm] {coordinate (A11)}\n%\t\tchild[level distance =8mm] {coordinate (A12)}\n%\t\tchild[level distance =12mm] {coordinate (A13)}\n%\t\tchild[level distance =12mm] {coordinate (A14)};\n%%\n%\t\\node [right=2.5 of A1, red] (A2) {$\\bullet$}\n%\t\tchild[level distance =4mm] {node [blue]{$\\bullet$}\n%\t\t\tchild[level distance=4mm] {coordinate (A21)}\n%\t\t\tchild[level distance=8mm] {coordinate (A22)}\n%\t\t}\n%\t\tchild[level distance =8mm] {node [dgreen] (A23') {$\\bullet$}\n%\t\t\tchild[level distance=4mm] {coordinate (A23)}\n%\t\t}\n%\t\tchild[level distance =12mm] {node (A24') {$\\bullet$}}\n%\t\tchild[level distance =12mm] {node (A25') {$\\bullet$}};\t\t\t\n%%\n%\t\\node [right=3.5 of A2, red] (A3) {$\\bullet$}\n%\t\tchild[level distance =4mm] {node [blue]{$\\bullet$}\n%\t\t\tchild[level distance=4mm] {node [dgreen] (A31') {$\\bullet$}\n%\t\t\t\tchild [level distance=4mm] {coordinate (A31)}}\n%\t\t\tchild[level distance=8mm] {node (A32') {$\\bullet$}}\n%\t\t}\n%\t\tchild[level distance =8mm] {node [dgreen] {$\\bullet$}\n%\t\t\tchild[level distance=4mm] {node (A33') {$\\bullet$}}\n%\t\t}\n%\t\tchild[level distance =12mm] {node {$\\bullet$}}\n%\t\tchild[level distance =12mm] {node {$\\bullet$}};\t\t\t\n%%\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (A1) -- (A2);\n%\t\\draw[|->, shorten <= 3pt, shorten >= 3pt] (A2) -- (A3);\n%\t\\draw[densely dotted, postaction={decorate}] (A21) to (A12);\n%\t\\draw[densely dotted, postaction={decorate}] (A22) to (A14);\n%\t\\draw[densely dotted, postaction={decorate}] (A23) to (A13);\n%\t\\draw[densely dotted, bend right=20, dashed] (A31') to (A23');\n%\t\\draw[densely dotted, bend right=20, dashed] (A32') to (A25');\n%\t\\draw[densely dotted, bend right=20, dashed] (A33') to (A24');\n%\t\\draw[densely dotted, postaction={decorate}] (A31) to (A23);\n%\\end{tikzpicture}\n%\\]\n%We can see that the unique leaf of $\\fl{p}\\tri \\fl{p}\\tri \\fl{p}$ is mapping to $i$. Now we draw $\\delta\\then(\\fl{p}\\tri\\delta)$. 
Its second map $\fl{p}\tri\delta$ copies the bottom layer and does $\delta$ as in \eqref{eqn.delta_misc33} on the top layer; we have no need for dashed lines.
%\[
%\begin{tikzpicture}[trees, bend right=60,
%level 1/.style={sibling distance=5mm},
%level 2/.style={sibling distance=2.5mm}]
%	\node [red] (A1) {$\bullet$}
%		child[level distance =4mm] {coordinate (A11)}
%		child[level distance =8mm] {coordinate (A12)}
%		child[level distance =12mm] {coordinate (A13)}
%		child[level distance =12mm] {coordinate (A14)};
%%
%	\node [right=2.5 of A1, red] (A2) {$\bullet$}
%		child[level distance =4mm] {node [blue]{$\bullet$}
%			child[level distance=4mm] {coordinate (A21)}
%			child[level distance=8mm] {coordinate (A22)}
%		}
%		child[level distance =8mm] {node [dgreen]  {$\bullet$}
%			child[level distance=4mm] {coordinate (A23)}
%		}
%		child[level distance =12mm] {node {$\bullet$}}
%		child[level distance =12mm] {node {$\bullet$}};
%%
%	\node [right=3.5 of A2, red] (A3) {$\bullet$}
%		child[level distance =4mm] {node [blue]{$\bullet$}
%			child[level distance=4mm] {node [dgreen] {$\bullet$}
%				child [level distance=4mm] {coordinate (A31)}}
%			child[level distance=8mm] {node {$\bullet$}}
%		}
%		child[level distance =8mm] {node [dgreen] {$\bullet$}
%			child[level distance=4mm] {node {$\bullet$}}
%		}
%		child[level distance =12mm] {node {$\bullet$}}
%		child[level distance =12mm] {node {$\bullet$}};
%%
%	\draw[|->, shorten <= 3pt, shorten >= 3pt] (A1) -- (A2);
%	\draw[|->, shorten <= 3pt, shorten >= 3pt] (A2) -- (A3);
%	\draw[densely dotted, postaction={decorate}] (A21) to (A12);
%	\draw[densely dotted, postaction={decorate}] (A22) to (A14);
%	\draw[densely dotted, postaction={decorate}] (A23) to (A13);
%	\draw[densely dotted, postaction={decorate}] (A31) to (A22);
%\end{tikzpicture}
%\]
%We can see that the unique leaf of $\fl{p}\tri \fl{p}\tri \fl{p}$ is mapping to $j$.
%\end{example}
%
%\begin{exercise}\label{ex.fleece_assoc}
%Let $p$, $\delta$, $\fl{p}$, and $\fl{\delta}$ be as in \cref{ex.associativity_pics}.
%\begin{enumerate}
%	\item Redraw \eqref{eqn.delta_misc33} using $\delta\colon p\to p\tri p$ in place of the fleeced version $\fl{\delta}\colon \fl p\to \fl p\tri \fl p$ shown there.
%	\item Use \cref{ex.fleece,def.fleece} to show that if $\delta$ is associative then $\fl{\delta}$ must be too.
%\qedhere
%\end{enumerate}
%\end{exercise}
%
%\begin{exercise}
%In \cref{ex.associativity_pics} we have maps $\delta\then(\delta\tri c),\delta\then(c\tri\delta)\colon c\to c\tri c\tri c$. Which of these corresponds to composing from the left, e.g.\ $(f\then g)\then h$, and which corresponds to composing from the right, e.g.\ $f\then (g\then h)$?
%\end{exercise}
%

\begin{example}[The category of $A$-streams]\label{ex.streams_category}
For any set $A$, the $A$-streams
\[
s=(a_0\leadsto a_1\leadsto a_2\leadsto a_3\leadsto\cdots)
\]
(i.e.\ the elements of $A^\nn$) are the objects of a category, where the set of morphisms emanating from each stream $s\in A^\nn$ is $\nn$. 
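Here the codomain of the morphism $n\in\nn$ emanating from $s$ is the stream obtained by dropping the first $n$ entries,
\[
s\To{n}(a_n\leadsto a_{n+1}\leadsto a_{n+2}\leadsto\cdots),
\]
which is what makes the identity and composition rules below work.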
The identity on $s$ is given by $0$, and the composite of two morphisms is the sum of the corresponding natural numbers.

We will see this category again in \cref{ex.streams_cofree}.
\end{example}

%-------- Section --------%
\section{Comonoids in $\poly$ are categories}

It turns out that comonoids in $\poly$ are precisely categories. Strangely, however, a morphism between comonoids is not a functor but something people call a \emph{cofunctor}.

\begin{definition}[Cofunctor]\label{def.cofunctor}
Let $\cat{C}$ be a category with object set $C_0$, morphism set $C_1$, $\dom,\cod\colon C_1\to C_0$ the domain and codomain,%
\footnote{We privilege the domain function $\dom\colon C_1\to C_0$ in the sense that an unnamed map $C_1\to C_0$ will be assumed to be $\dom$. For example, in the map $F^\sharp$, the pullback $C_0\times_{D_0}D_1$ is of the diagram $C_0\To{F_0}D_0\From{\dom}D_1$.}
 and similarly for $\cat{D}$. A \emph{cofunctor} $F\colon \cat{C}\cof \cat{D}$ consists of
\begin{enumerate}[itemsep=0pt]
  \item a function $F_0\colon C_0\to D_0$ \emph{on objects} and
  \item a function $F^\sharp\colon C_0\times_{D_0}D_1\to C_1$ \emph{backwards on morphisms},
\end{enumerate}
satisfying the following conditions:
\begin{enumerate}[itemsep=0pt, label=\roman*.]
	\item $F^\sharp(c,\id_{F_0c})=\id_c$ for any $c\in C_0$;
	\item $F_0\cod F^\sharp(c,g)=\cod g$ for any $c\in C_0$ and any morphism $g$ emanating from $F_0c$;
	\item $F^\sharp(c,g_1)\then F^\sharp(\cod F^\sharp(c,g_1),g_2)=F^\sharp(c,g_1\then g_2)$ for any $c\in C_0$ and composable arrows $g_1,g_2$ with $g_1$ emanating from $F_0c$.
\end{enumerate}
In other words, $F^\sharp$ preserves identities, codomains, and compositions.

We denote by $\smcat^\sharp$ the category of categories and cofunctors.
\end{definition}

The cofunctor laws can be written in commutative diagram form as follows:
\begin{gather*}
\begin{tikzcd}[column sep=15pt, ampersand replacement=\&]
  C_0\times_{D_0}D_0\ar[r, "\cong"]\ar[d, "\id_D"']\&
  C_0\ar[d, "\id_C"]\\
  C_0\times_{D_0}D_1\ar[r, "F^\sharp"']\&
  C_1\ar[ul, phantom, shift right=2pt, "\text{(i)}"]
\end{tikzcd}
\quad
\begin{tikzcd}[column sep=15pt, ampersand replacement=\&]
	C_0\times_{D_0}D_1\ar[r, "F^\sharp"]\ar[d, "\pi_2"']\&
	C_1\ar[r, "\cod"]\&
	C_0\ar[d, "F_0"]\\
	D_1\ar[rr, "\cod"']\ar[urr, phantom, "\text{(ii)}"]\&\&
	D_0
\end{tikzcd}
\\
\begin{tikzcd}[column sep=15pt, ampersand replacement=\&]
	C_0\times_{D_0}D_1\times_{D_0}D_1\ar[r, "\then_D"]\ar[d, "F^\sharp"']\&[-15pt]
	C_0\times_{D_0}D_1\ar[r, "F^\sharp"]\&
	C_1\\
	C_1\times_{D_0}D_1\ar[r, "\cong"']\&
	C_1\times_{C_0}C_0\times_{D_0}D_1\ar[r, "F^\sharp"']\ar[u, phantom, "\text{(iii)}"]\&
	C_1\times_{C_0}C_1\ar[u, "\then_C"']
\end{tikzcd}
\end{gather*}

\begin{example}[Admissible sections]\label{ex.admissible_section}
Consider the monoid $\cat{N}\coloneqq (\nn,0,+)$, viewed as a category with one object. For any category $\cat{C}$, a cofunctor $\varphi\colon\cat{C}\cof\cat{N}$ is called an \emph{admissible section} \cite{Aguiar-thesis}. We'll have more to say about these in \cref{thm.catsharp_to_mon}, but our goal here is simply to unpack the definition.

To specify $\varphi$, we first say what it does on objects, but this is trivial: there is only one object in $\cat{N}$, so each object of $\cat{C}$ is sent to it. 
So the content of $\varphi$ is all found in $\varphi^\sharp$, which assigns to each object $i\in\cat{C}$ and natural number $n\in\nn$ a morphism $\varphi^\sharp(i,n)$ emanating from $i$. That seems like a lot of data, but we still have two laws to pare it down:
\[
  \varphi^\sharp(i,0)=\id_i
  \qqand
  \varphi^\sharp(i,n+n')=\varphi^\sharp(i,n)\then\varphi^\sharp(\cod\varphi^\sharp(i,n),n').
\]
Every natural number $n$ is a sum of 1's, so if we denote $\varphi^\sharp(i,1)$ by $\phi(i)$, we in fact have
\[
\varphi^\sharp(i,n)=\phi^{\circ n}(i).
\]
That is, $\varphi^\sharp(i,n)$ is the $n$-fold application of the one-step case $\phi$: follow $\phi$ out of $i$, then out of the codomain of the result, and so on, composing the length-$n$ path that results.

Thus an admissible section of $\cat{C}$ is given by choosing, for each object $i\in\cat{C}$, a morphism $\phi(i)\colon i\to i'$ emanating from $i$.
\end{example}

\begin{exercise}
How many admissible sections does the category \fbox{$\bullet\to\bullet$} have?
\begin{solution}
We seek the number of admissible sections of the category \fbox{$\bullet\to\bullet$}.
There are $2$ choices of morphisms emanating from the object on the left, and $1$ choice of morphism emanating from the object on the right, for a total of $2 \cdot 1 = 2$ admissible sections.
\end{solution}
\end{exercise}

\begin{exercise}
Let $\cat{Z}\coloneqq(\zz,0,+)$ denote the monoid of integers and let $\cat{N}$ be that of natural numbers as above.
\begin{enumerate}
	\item For a category $\cat{C}$, describe the data of a cofunctor $\cat{C}\cof\cat{Z}$.
	\item What would you say is the canonical cofunctor $\cat{Z}\cof\cat{N}$?
	\item Thinking of an admissible section $\cat{C}\cof\cat{N}$ as a policy, almost like a \emph{physical law}, suppose it factors through $\cat{Z}$. Would you say that this lets you ``run the law backwards''?
\qedhere
\end{enumerate}
\end{exercise}

\begin{example}[Systems of ODEs]
A system of ordinary differential equations (ODEs) in $n$ variables, e.g.
\begin{align*}
    \dot{x}_1 &= f_1(x_1, \ldots, x_n) \\
    \dot{x}_2 &= f_2(x_1, \ldots, x_n) \\
    & \; \; \; \vdots \\
    \dot{x}_n &= f_n(x_1, \ldots, x_n)
\end{align*}
can be understood as a vector field on $\rr^n$. Usually, we are interested in integrating this vector field to get flow lines, or integral curves. In other words, for each point $x = (x_1, \ldots, x_n)$ and each amount of time $t \in \rr$, we can go forward from $x$ for time $t$ and arrive at a new point $x^{+t}$. These satisfy the equations
\begin{equation} \label{eqn.cofunctor_ode}
    x^{+0} = x \qqand x^{+t_1+t_2} = (x^{+t_1})^{+t_2}. 
\n\\end{equation}\nLet's call such things \\emph{dynamical systems} with time domain $(T, 0, +)$; above, we used $T = \\rr$, but any monoid will do.\n\nDynamical systems in the above sense are cofunctors $F \\colon \\rr^n \\yon^{\\rr^n} \\cof \\yon^T$.\nIn order to say this, we first need to say how both $\\cat{C} := \\rr^n \\yon^{\\rr^n}$ and $\\yon^T$ are being considered as categories.\nThe category $\\cat{C}$ has objects $\\rr^n$, and for each object $x \\in \\rr^n$ and outgoing arrow $v \\in \\rr^n$, the codomain of $v$ is $x + v$; in other words, $v$ is a vector emanating from $x$.\nThe identity is $v = 0$, and composition is given by addition.\nThe category $\\yon^T$ is the monoid $T$ considered as a category with one object, $\\bullet$.\n\nThe cofunctor assigns to every object $x \\in \\rr^n$ the unique object $F(x) = \\bullet$, and to each element $t \\in T$ the morphism $F^\\sharp(x, t) = x^{+t} - x \\in \\rr^n$, which can be interpreted as a vector emanating from $x$.\nIts codomain is $\\cod F^\\sharp(x, t) = x^{+t}$, and we will see that \\eqref{eqn.cofunctor_ode} ensures the cofunctoriality properties.\n\nThe codomain law ii is vacuously true, since $\\yon^T$ only has one object.\nLaw i follows because $F^\\sharp(x, 0) = x^{+0} - x = 0$, and law iii follows as\n\\[\n    F^\\sharp(x^{+t_1}, t_2) + F^\\sharp(x, t_1) = (x^{+t_1})^{+t_2} - x^{+t_1} + x^{+t_1} - x = x^{+t_1 + t_2} - x = F^\\sharp(x, t_1+t_2).\n\\]\n\\end{example}\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Suppose that $M,N$ are monoids (each is a category with one object). Are cofunctors between them related to monoid homomorphisms? If so, how?\n\t\\item Suppose $\\cat{C}$ and $\\cat{D}$ are categories and $F\\colon\\cat{C}\\cof\\cat{D}$ is a cofunctor. Does there necessarily exist a cofunctor $\\cat{C}\\op\\cof\\cat{D}\\op$ that acts the same as $F$ on objects?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{theorem}[Ahman-Uustalu]\\label{thm.ahman_uustalu}\nThere is an equivalence of categories\n\\[\n\\Cat{Comon}(\\poly)\\cong\\smcat^\\sharp.\n\\]\n\\end{theorem}\n\\begin{proof}\nThis will be proved as \\cref{prop.copointing,thm.cofree}.\n\\end{proof}\n\nOur first goal is to understand how one translates between categories $\\cat{C}$ and comonoids $\\com{C}=(\\ema{c},\\epsilon,\\delta)$ in $\\poly$. The idea is pretty simple: the objects of $\\cat{C}$ are the positions of $\\ema{c}$\n\\[\n\\Ob(\\cat{C})\\cong\\ema{c}(\\1)\n\\]\nand for each such object $i$, the morphisms $\\{f\\colon i\\to j\\mid j\\in\\Ob(\\cat{C})\\}$ emanating from $i$ in $\\cat{C}$ are the directions $\\ema{c}[i]$ there.\n\n\\begin{definition}\nLet $\\cat{C}$ be a category. 
The \\emph{emanation polynomial for $\\cat{C}$} is the polynomial\n\\[\n\\ema{c}\\coloneqq\\sum_{i\\in\\Ob(\\cat{C})}\\yon^{\\sum_{j\\in\\Ob(\\cat{C})}\\cat{C}(i,j)}\n\\]\n\\end{definition}\n\n\\begin{exercise} \\label{exc.ema_polys}\nWhat is the emanation polynomial for each of the following categories?\n\\begin{enumerate}\n\t\\item \\boxCD{exercisecolor}{$A\\Too{f}B$}?\n\t\\item \\boxCD{exercisecolor}{$B\\From{f}A\\To{g}C$}?\n\t\\item The empty category?\n\t\\item \\label{exc.ema_polys.nat_monoid} The monoid $(\\nn,0,+)$?\n\t\\item A monoid $(M, e, *)$?\n\t\\item \\label{exc.ema_polys.nat_poset} The poset $(\\nn,\\leq)$?\n\t\\item The poset $(\\nn,\\geq)$?\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\n\\begin{enumerate}\n\t\\item The category \\boxCD{white}{$A\\Too{f}B$} has $2$ morphisms out of $A$ and $1$ morphism out of $B$, so its emanation polynomial is $\\yon^\\2 + \\yon$.\n\t\\item The category \\boxCD{white}{$B\\From{f}A\\To{g}C$} has $3$ morphisms out of $A$ and $1$ morphism out of each of $B$ and $C$, so its emanation polynomial is $\\yon^\\3 + 2\\yon$.\n\t\\item The empty category has no objects, so its emanation polynomial is the empty sum $\\0$.\n\t\\item The monoid $(\\nn,0,+)$ has $1$ object, and its morphisms form the set $\\nn$, so its emanation polynomial is $\\yon^\\nn$.\n\t\\item The monoid $(M, e, *)$ has $1$ object, and its morphisms form the set $M$, so its emanation polynomial is $\\yon^M$.\n\t\\item The poset $(\\nn,\\leq)$ has $\\nn$ as its set of objects, and there is exactly one morphism from every $n \\in \\nn$ to each element of $\\{n' \\in \\nn \\ | \\ n \\leq n'\\} \\iso \\nn$ (and no other morphisms from $n$), so the emanation polynomial of the poset is $\\nn\\yon^\\nn$.\n\t\\item The poset $(\\nn,\\geq)$ has $\\nn$ as its set of objects, and there is exactly one morphism from every $n \\in \\nn$ to each element of $\\{0, 1, \\ldots, n\\} \\iso \\ord{n}+\\1$ (and no other morphisms from $n$), so the emanation polynomial of the poset is $\\sum_{n \\in \\nn} \\yon^{\\ord{n}+\\1} \\iso \\yon^\\1 + \\yon^\\2 + \\yon^\\3 + \\cdots$.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\nA category $\\cat{C}$ is more than its emanation polynomial $\\ema{c}$, and a comonoid $(\\ema{c},\\epsilon,\\delta)$ in $\\poly$ is more than its carrier polynomial $\\ema{c}$. The identities of $\\cat{C}$ are all captured by the counit $\\epsilon\\colon\\ema{c}\\to\\yon$ and the codomain and composition information of $\\cat{C}$ are all captured by the comultiplication map $\\delta\\colon\\ema{c}\\to\\ema{c}\\tri\\ema{c}$. Our goal is to make this clear so that we can justly proclaim:\n\n\\slogan{Comonoids in $\\poly$ are precisely categories!}\n\n\nWe want to understand how the counit $\\epsilon$ and comultiplication $\\delta$ in a comonoid $\\com{C}=(\\ema{c},\\epsilon,\\delta)$ relate to identities, codomains, and composites in a category. We first use our work in \\cref{subsec.working_with_composites} to get a better handle on $\\epsilon$ and $\\delta$. For example, since $\\epsilon\\colon\\ema{c}\\to\\yon$ maps to the empty composite, we know by \\eqref{eqn.map_to_0ary_composite} that it is of the form\n\\[\n \\begin{tikzpicture}[polybox, mapstos]\n  \t\\node[poly, dom, \"$\\ema{c}$\" left] (c) {$\\epsilon^\\sharp(i)$\\at$i$};\n  \t\\draw (c_pos) to[climb'] node[right] {$\\epsilon^\\sharp$} (c_dir);\n\t\\end{tikzpicture}\n\\]\ni.e.\\ for every $i\\in \\ema{c}(\\1)$, a choice of element $\\epsilon^\\sharp(i)\\in \\ema{c}[i]$. 
Rather than call it $\\epsilon^\\sharp$, we will refer to this map as $\\idy$. Similarly, we know by \\eqref{eqn.map_to_2ary_composite} that $\\delta$ is of the form\n\\[\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, cod, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, above=of p] (p') {};\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$}(p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\delta_2$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above left] {$\\delta^\\sharp$} (yX_dir);\n  \\end{tikzpicture}\n \\]\nWe've said that this secretly holds information about the codomains and composites for a category structure on $\\ema{c}$. How does that work? We will soon find that $\\delta_1$ is forced to be an identity, that $\\delta_2$ holds codomain information, and that $\\delta^\\sharp$ holds composite information. So we will use that notation here\n\\[\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, cod, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, cod, above=of p] (p') {};\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$}(p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n  \\end{tikzpicture}\n \\]\nand our goal now is to see that the $\\cod$ map really has something to do with codomains and that the $\\comp$ map really has something to do with composites, as advertised. What makes these true are the unitality and associativity equations required for $(\\ema{c},\\epsilon,\\delta)$ to be a comonoid; see \\cref{def.comonoid}.\n\nTo get started, we consider the first unitality equation from \\eqref{eqn.comonoid_diagrams}:\n\\[\n\\begin{tikzpicture}\n\t\\node (1) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, cod, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, above=of p] (p') {};\n  \t\\node[poly, cod, right=of p] (q) {};\n  \t\\node[poly, cod, identity, above=of q] (q') {};\n  \t\\draw (p_pos) -- (q_pos);\n  \t\\draw (q_dir) -- (p_dir);\n  \t\\draw (p'_pos) -- (q'_pos);\n  \t\\draw (q'_dir) -- node[above] {$\\idy$} (p'_dir);\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$}(p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.6 of 1] (2) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, cod, right=2.5 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, above left=.5 of p] (p') {};\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$}(p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n\t\t\\draw (p'_pos) to[climb'] node[right] {$\\idy$} (p'_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1 of 2] (3) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (c) {};\n  \t\\node[poly, cod, right=of c] (c') {};\n  \t\\draw[double] (c_pos) -- (c'_pos);\n  \t\\draw[double] (c'_dir) -- (c_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node at ($(1.east)!.5!(2.west)$) {=};\n\t\\node at ($(2.east)!.5!(3.west)$) {=};\n\\end{tikzpicture}\n\\]\n\nLet's add some arbitrary fillers $i\\in \\ema{c}(\\1)$ and $d\\in \\ema{c}[i]$ to the open slots, and hence obtain an equation:\n\\[\n\\begin{tikzpicture}\n\t\\node (2) {\n  \\begin{tikzpicture}[polybox, tos]\n  
\t\\node[poly, dom] (yX) {$d\\then\\idy(\\cod(d))$\\at$i$};\n  \t\\node[poly, cod, right=5 of yX.south, yshift=-1ex] (p) {$d$\\at$\\delta_1(i)$};\n  \t\\node[poly, above left=1 and 0 of p] (p') {$\\idy(\\cod(d))$\\at$\\cod(d)$};\n  \t\\draw (yX_pos) to[first] node[below] {$\\delta_1$}(p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n\t\t\\draw (p'_pos) to[climb'] node[right] {$\\idy$} (p'_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=2 of 2] (3) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (c) {$d$\\at$i$};\n  \t\\node[poly, cod, right=of c] (c') {$d$\\at$i$};\n  \t\\draw[double] (c_pos) -- (c'_pos);\n  \t\\draw[double] (c'_dir) -- (c_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node at ($(2.east)!.5!(3.west)$) {=};\n\\end{tikzpicture}\n\\]\nFirst it's saying that $\\delta_1(i)=i$. This is great news; it means we can forget about $\\delta_1$, just as we said earlier. Second it's saying that $d\\then\\idy(\\cod(d))=d$. Unpacking, this means that composing a morphism $d$ with the identity morphism on its codomain returns $d$. It's neat to watch the comonoid laws declaring the standard laws of categories. It's like meeting a like-minded toad; we never knew toads could be like-minded, but the phenomena don't lie.\n\nBefore moving on, we redraw $\\delta\\colon c\\to c\\tri c$ with the information-lacking $\\delta_1$ (which the first unitality equation said was always identity) and replace it with a double arrow:\n\\begin{equation}\\label{eqn.cod_comp}\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, cod, above=of p] (p') {};\n  \t\\draw[double] (yX_pos) to[first] (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n  \\end{tikzpicture}\n\\end{equation}\n\nNow we can write the other unitality equation.\n\\[\n\\begin{tikzpicture}\n\t\\node (1) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, above=of p] (p') {};\n  \t\\node[poly, cod, identity, right=of p] (q) {};\n  \t\\node[poly, cod, above=of q] (q') {};\n  \t\\draw (p_pos) -- (q_pos);\n  \t\\draw (q_dir) -- node[above] {$\\idy$} (p_dir);\n  \t\\draw (p'_pos) -- (q'_pos);\n  \t\\draw (q'_dir) -- (p'_dir);\n  \t\\draw[double] (yX_pos) to[first] (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.6 of 1] (2) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {};\n  \t\\node[poly, right=1.8 of yX.south, yshift=-1ex] (p) {};\n  \t\\node[poly, cod, above=of p] (p') {};\n  \t\\draw[double] (yX_pos) to[first] (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n\t\t\\draw (p_pos) to[climb'] node[right] {$\\idy$} (p_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=1.6 of 2] (3) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (c) {};\n  \t\\node[poly, cod, right=of c] (c') {};\n  \t\\draw[double] (c_pos) -- (c'_pos);\n  \t\\draw[double] (c'_dir) -- (c_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node at ($(1.east)!.5!(2.west)$) {=};\n\t\\node at ($(2.east)!.5!(3.west)$) {=};\n\\end{tikzpicture}\n\\]\nLet's 
add some arbitrary fillers $i\\in c(\\1)$ and $d\\in c[i]$ to get some equations:\n\\[\n\\begin{tikzpicture}\n\t\\node (2) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (yX) {$\\idy(i)\\then d$\\at$i$};\n  \t\\node[poly, right=3 of yX.south, yshift=-1ex] (p) {$\\idy(i)$\\at$i$};\n  \t\\node[poly, cod, above right=1 and 0 of p] (p') {$d$\\at$\\cod(\\idy(i))$};\n  \t\\draw[double] (yX_pos) to[first] (p_pos);\n  \t\\draw (p_dir) to[climb] node[right] {$\\cod$} (p'_pos);\n  \t\\draw (p'_dir) to[last] node[above, sloped] {$\\comp$} (yX_dir);\n\t\t\\draw (p_pos) to[climb'] node[right] {$\\idy$} (p_dir);\n  \\end{tikzpicture}\n\t};\n\t\\node[right=2 of 2] (3) {\n  \\begin{tikzpicture}[polybox, tos]\n  \t\\node[poly, dom] (c) {$d$\\at$i$};\n  \t\\node[poly, cod, right=of c] (c') {$d$\\at$i$};\n  \t\\draw[double] (c_pos) -- (c'_pos);\n  \t\\draw[double] (c'_dir) -- (c_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node at ($(2.east)!.5!(3.west)$) {=};\n\\end{tikzpicture}\n%\\begin{tikzpicture}\n%\\node (2) {\n%  \\begin{tikzpicture}[polybox, tos]\n%  \t\\node[poly, dom] (c) {$\\idy(i)\\then d$\\at$i$};\n%  \t\\node[poly, right=1.2 of c] (c1) {$\\idy(i)$\\at$i$};\n%  \t\\node[poly, cod, above right=.3 and .2 of c1] (c2) {$d$\\at$\\cod(\\idy(i))$};\n%  \t\\draw[double] (c_pos) -- (c1_pos);\n%  \t\\draw (c1_dir) to[climb] node[right] {cod} (c2_pos);\n%  \t\\draw (c2_dir) to[last] node[above] {com} (c_dir);\n%  \t\\draw (c1_pos) to[out=0, in=0, looseness=2] node[right] {idy} (c1_dir);\n%\t\\end{tikzpicture}\n%\t};\n%\t\\node[right=2 of 2] (3) {\n%  \\begin{tikzpicture}[polybox, tos]\n%  \t\\node[poly, dom] (c) {$d$\\at$i$};\n%  \t\\node[poly, cod, right=of c] (c') {$d$\\at$i$};\n%  \t\\draw[double] (c_pos) -- (c'_pos);\n%  \t\\draw[double] (c'_dir) -- (c_dir);\n%\t\\end{tikzpicture}\n%\t};\n%\t\\node at ($(2.east)!.5!(3.west)$) {=};\n%\\end{tikzpicture}\n\\]\nAh, it's saying that it wants $\\cod(\\idy(i))=i$, which makes sense---the codomain of the identity on $i$ should be $i$---and that it wants $\\idy(i)\\then d=d$, i.e.\\ that composing the identity on $i$ with $d$ should return $d$. 
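In summary, the two unitality equations have handed us exactly the unit laws of a category: for every object $i$ and every morphism $d$ emanating from $i$,
\[
\cod(\idy(i))=i
\qqand
\idy(i)\then d=d=d\then\idy(\cod(d)).
\]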
We couldn't have said it better ourselves; thanks like-minded toad!

Finally we draw the associativity equation.
\[
\begin{tikzpicture}
	\node (p1) {
  \begin{tikzpicture}[polybox, tos, font=\tiny]
  	\node[poly, dom, "$c$" left] (m) {};
  	\node[poly, right= of m.south, yshift=-1ex, "$c$" below] (D) {};
  	\node[poly, above=of D, "$c$" above] (mm) {};
  	\node[poly, cod, right= of D.south, yshift=-1ex, "$c$" right] (DD) {};
  	\node[poly, cod, above=of DD, "$c$" right] (mmm) {};
  	\node[poly, cod, above=of mmm, "$c$" right] (C) {};
%
		\draw[double] (m_pos) to[first] (D_pos);
		\draw (D_dir) to[climb] node[right] {$\cod$} (mm_pos);
		\draw (mm_dir) to[last] node[above, sloped] {$\comp$} (m_dir);
		\draw[double] (D_pos) to[first] (DD_pos);
		\draw[double] (DD_dir) to[last] (D_dir);
		\draw[double] (mm_pos) to[first] (mmm_pos);
		\draw (mmm_dir) to[climb] node[right] {$\cod$} (C_pos);
		\draw (C_dir) to[last] node[above, sloped] {$\comp$} (mm_dir);
	\end{tikzpicture}
	};
%
	\node (p2) [right=of p1] {
  \begin{tikzpicture}[polybox, tos, font=\tiny]
  	\node[poly, dom, "$c$" left] (m') {};
  	\node[poly, right= of m'.south, yshift=-1ex, "$c$" below] (mm') {};
  	\node[poly, above=of mm', "$c$" above] (C') {};
  	\node[poly, cod, right= of mm'.south, yshift=-1ex, "$c$" right] (D') {};
  	\node[poly, cod, above=of D', "$c$" right] (mmm') {};
  	\node[poly, cod, above=of mmm', "$c$" right] (CC') {};
%
		\draw[double] (m'_pos) to[first] (mm'_pos);
		\draw (mm'_dir) to[climb] node[right] {$\cod$} (C'_pos);
		\draw (C'_dir) to[last] node[above, sloped] {$\comp$} (m'_dir);
		\draw[double] (mm'_pos) to[first] (D'_pos);
		\draw (D'_dir) to[climb] node[right] {$\cod$} (mmm'_pos);
		\draw (mmm'_dir) to[last] node[above, sloped] {$\comp$} (mm'_dir);
		\draw[double] (C'_pos) to[first] (CC'_pos);
		\draw[double] (CC'_dir) to[last] (C'_dir);
	\end{tikzpicture}
	};	
	\node at ($(p1.south)!.5!(p2.north)$) {$=$};
\end{tikzpicture}
\]
Let's fill it in with $i\in c(\1)$ and a sequence $i\To{m}\bullet\To{m'}\bullet\To{m''}\bullet$ of emanating morphisms:
\[
\begin{tikzpicture}
	\node (p1) {
  \begin{tikzpicture}[polybox, mapstos, font=\tiny]
  	\node[poly, dom] (m) {$m\then(m'\then m'')$\at$i$};
  	\node[poly, right=3 of m.south, yshift=-1ex] (D) {$m$\at$i$};
  	\node[poly, above=of D] (mm) {$m'\then m''$\at$\cod(m)$};
  	\node[poly, cod, right=2.5 of D.south, yshift=-1ex] (DD) {$m$\at$i$};
  	\node[poly, cod, above=of DD] (mmm) {$m'$\at$\cod(m)$};
  	\node[poly, cod, above=of mmm] (C) {$m''$\at$\cod(m')$};
%
		\draw[double] (m_pos) to[first] (D_pos);
		\draw (D_dir) to[climb] (mm_pos);
		\draw (mm_dir) to[last]  (m_dir);
		\draw[double] (D_pos) to[first] (DD_pos);
		\draw[double] (DD_dir) to[last] (D_dir);
		\draw[double] (mm_pos) to[first] (mmm_pos);
		\draw (mmm_dir) to[climb] (C_pos);
		\draw (C_dir) to[last] (mm_dir);
	\end{tikzpicture}
	};
%
	\node (p2) [below=of p1] {
  \begin{tikzpicture}[polybox, mapstos, font=\tiny]
  	\node[poly, dom] (m') {$(m\then m')\then m''$\at$i$};
  	\node[poly, right= 3 of m'.south, yshift=-1ex] (mm') {$m\then m'$\at$i$};
  	\node[poly, above=of mm'] (C') {$m''$\at$\cod(m\then m')$};
  	\node[poly, cod, right= 2.5 of mm'.south, yshift=-1ex] (D') {$m$\at$i$};
  	\node[poly, cod, above=of D'] (mmm') 
{$m'$\at$\cod(m)$};
  	\node[poly, cod, above=of mmm'] (CC') {$m''$\at$\cod(m\then m')$};
%
		\draw[double] (m'_pos) to[first] (mm'_pos);
		\draw (mm'_dir) to[climb] (C'_pos);
		\draw (C'_dir) to[last] (m'_dir);
		\draw[double] (mm'_pos) to[first] (D'_pos);
		\draw (D'_dir) to[climb] (mmm'_pos);
		\draw (mmm'_dir) to[last] (mm'_dir);
		\draw[double] (C'_pos) to[first] (CC'_pos);
		\draw[double] (CC'_dir) to[last] (C'_dir);
	\end{tikzpicture}
	};	
	\node at ($(p1.south)!.5!(p2.north)$) {$=$};
\end{tikzpicture}
\]
Ah, it's saying that it wants $\cod(m')=\cod(m\then m')$; well yeah, that's how codomains should work. And it wants $m\then(m'\then m'')=(m\then m')\then m''$, classic associativity. Amazing; thanks again toad!

We've seen that all of the data and equations of categories are embedded, though in a very non-standard way, in the data and equations of polynomial comonoids. 

\begin{exercise}
Let $\cat{C}$ be a category, $\ema{c}$ its emanation polynomial, and $i\in\Ob(\cat{C})$ an object. This exercise is for people who know the definition of the coslice category $i/\cat{C}$ of objects under $i$. Is it true that there is an isomorphism
\[\Ob(i/\cat{C})\cong^?\ema{c}[i]?\]
If so, describe it; if not, give a counterexample.
\end{exercise}

%-------- Section --------%
\section{Examples showing the correspondence between comonoids and categories}

\begin{example}[Monoids]\label{ex.monoids}
Let $(M,e,*)$ be a monoid. Then we can construct a comonoid structure on the representable $\yon^M$. A morphism $\yon^M\to\yon$ can be identified with an element of $M$; under that identification we take $\epsilon\coloneqq e$. Similarly, $\yon^M\tri\yon^M\cong\yon^{M^2}$ and a morphism $\yon^M\to\yon^{M^2}$ can be identified with a function $M^2\to M$; under that identification we take $\delta\coloneqq *$.
\end{example}

\begin{exercise}
Finish \cref{ex.monoids} by showing that if $(M,e,*)$ satisfies the unitality and associativity requirements of a monoid in $(\smset,\1,\times)$, then $(\yon^M,\epsilon,\delta)$ satisfies the unitality and associativity requirements of a comonoid in $(\poly,\yon,\tri)$.
\end{exercise}


\begin{example}[Monoid action]\label{ex.monoid_action}
Suppose that $(M,e,*)$ is a monoid, $S$ is a set, and $\alpha \colon M \times S \to S$ is an action. There is an associated category $\cat{A}$ with emanation polynomial $S\yon^M$. In other words, it has object set $S$, and for every $s \in S$ there is an outgoing morphism for each $m \in M$, namely $s\To{m}\alpha(m,s)$. 
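For instance, take $M=(\nn,0,+)$ acting on $S=\zz$ by $\alpha(n,s)\coloneqq n+s$; then $\cat{A}$ has object set $\zz$, a morphism $s\To{n}n+s$ out of each $s\in\zz$ for each $n\in\nn$, and emanation polynomial $\zz\yon^\nn$.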
\n\\end{example}\n\n\\begin{exercise}\nWith notation as in \\cref{ex.monoid_action},\n\\begin{enumerate}\n    \\item For a given object $s \\in \\cat{A}$, what is the identity morphism?\n    \\item What is the composite of two morphisms in $\\cat{A}$?\n\\qedhere\n\\end{enumerate}\n\\begin{solution}\nHere $(M,e,*)$ is a monoid, $S$ is a set, $\\alpha \\colon M \\times S \\to S$ is an action, and $\\cat{A}$ is the associated category with emanation polynomial $S\\yon^M$; in particular, for each $s \\in S$ and $m \\in M$, there is a morphism $s \\To{m} \\alpha(m,s)$.\n\\begin{enumerate}\n    \\item We wish to identify the identity morphism for each object $s \\in \\cat{A}$.\n    This should be a morphism whose domain and codomain are $s$.\n    By the laws of monoid actions, $\\alpha(m,s)$ is guaranteed to be $s$ if $m = e$.\n    Hence, it only makes sense for the identity morphism of $s$ to be the morphism $s \\To{e} s$.\n    \\item Given a morphism $s \\To{m} \\alpha(m,s)$, we wish to determine its composite with another morphism $\\alpha(m,s) \\To{n} \\alpha(n,\\alpha(m,s))$.\n    By the laws of monoid actions, we have that $\\alpha(n,\\alpha(m,s)) = \\alpha(n*m,s)$, so it makes sense for the composite $s \\To{m} \\alpha(m,s) \\To{n} \\alpha(n,\\alpha(m,s))$ to be the morphism $s \\To{n*m} \\alpha(n*m,s)$.\n\\end{enumerate}\n\\end{solution}\n\\end{exercise}\n\n\n\n\\begin{example}[Cyclic lists]\nFor any $n\\in\\nn$, consider the monoid (group) $\\zz/n$. As a functor $c_n\\coloneqq\\yon^{\\zz/n}$ sends a set $X$ to the set of length-$n$ tuples in $X$. But the comonoid structure lets us think of these as cyclic lists. Indeed, $\\epsilon\\colon c_n\\to\\yon$ allows us to pick out the ``current'' element via the map $\\epsilon\\tri X\\colon c_n\\tri X\\to X$, and $\\delta$ lets us move around the list.\n\nWe will see later that comonoids are closed under coproducts, so $\\sum_{n\\in\\nn}c_n$ is also a comonoid.\n\\end{example}\n\n\\begin{example}[What category is $S\\yon^S$?] \\label{ex.state_comonad}\nThe first comonoid we introduced, back in \\cref{ex.state_comonad_1} was $\\com{S}=(p,\\epsilon,\\delta)$, where $p=S\\yon^S$ for some set $S$. Now we know that comonoids correspond to categories. So what category $\\cat{S}$ corresponds to $\\com{S}$?\n\nBy the work above, we know that $\\cat{S}$ has object set $S=p(\\1)$, and that for every object $s\\in S$ there are $S$-many emanating morphisms, though we don't yet know their codomains nor the composition formula. \n\nTo calculate the codomains and compositions we examine the map $\\delta\\colon p\\to p\\tri p$, which was given set-theoretically in \\eqref{eqn.state_comonoid_eps_del} and in terms of poly-boxes in \\eqref{eqn.state_comonoid_eps_del2}. 
We repeat it here for your convenience:
\[
\begin{tikzpicture}
	\node (id) {
  \begin{tikzpicture}[polybox, mapstos]
  	\node[poly, dom, "$S\yon^S$" left] (S) {$s$\at$s$};
  	\node[poly, identity, right=of S, "$\yon$" right] (y) {\pphantom{s}{s}};
  	\draw (S_pos) to[first] (y_pos);
  	\draw (y_dir) to[last] (S_dir);
		\node at ($(S)!.5!(y)$) {$\epsilon$};
  \end{tikzpicture}
  };
  \node[right=of id] (co) {
  \begin{tikzpicture}[polybox, mapstos]
  	\node[poly, dom] (p) {$s_2$\at$s$};
  	\node[poly, cod, right=2 of p.south, yshift=-1ex] (q) {$s_1$\at$s$};
  	\node[poly, cod, above=of q] (r) {$s_2$\at$s_1$};
  	\draw[double] (p_pos) to[first] (q_pos);
  	\draw (q_dir) to[climb] node[right] {$\cod$} (r_pos);
  	\draw (r_dir) to[last] node[above, sloped] {$\comp$} (p_dir);
  \end{tikzpicture}  
  };
\end{tikzpicture}
\]
The $\epsilon$ map is saying that the identity on the object $s$ is the emanating morphism $s$. Remember that both the set of objects and the set of morphisms emanating from any given object are $S$. The map $\cod=\delta_2$ is telling us that the codomain of the morphism $s_1$ emanating from $s$ is the object $s_1$, and that the composite of $s_1$ and $s_2$ is $s_1\then s_2=s_2$. 

What this all means is that $\cat{S}$ is the category with $S$-many objects and a unique morphism $s\to s'$ for any $s,s'\in S$. Here are pictures for $S=\3$ and $S=\ord{15}$, with all maps (even identities) drawn:
\[
\begin{tikzpicture}
\def\n{3}% how many nodes
\def\size{2cm}
\node[circle,minimum size=\size] (b) {};
\foreach\x in{1,...,\n}{
  \node[minimum size=0.75cm,draw,circle] (n-\x) at (b.{360/\n*\x}){\x};
}
\foreach\x in{1,...,\n}{
  \foreach\y in{1,...,\n}{
    \ifnum\x=\y\draw[->] (n-\x) to [in=360/\n*\x-15,out=360/\n*\x+15,loop] ();\relax\else
      \draw (n-\x) edge[->,bend right=3] (n-\y);
    \fi
  }
}
\def\n{15}% how many nodes
\def\size{4cm}
\node[circle,minimum size=\size, right=4 of b] (b) {};
\foreach\x in{1,...,\n}{
  \node[minimum size=0.75cm,draw,circle] (n-\x) at (b.{360/\n*\x}){\x};
}
\foreach\x in{1,...,\n}{
  \foreach\y in{1,...,\n}{
    \ifnum\x=\y\draw[->] (n-\x) to [in=360/\n*\x-15,out=360/\n*\x+15,loop] ();\relax\else
      \draw (n-\x) edge[->,bend right=3] (n-\y);
    \fi
  }
}
% Source: Asterix: https://tex.stackexchange.com/questions/390647/understanding-complete-graph-example-in-tikz
\end{tikzpicture}
\]
Some people would call this the contractible groupoid, or the terminal category on $S$-many objects, or the unique category whose underlying graph is complete on $S$ vertices. The name that's \emph{least} good for us is ``terminal category,'' because, as we'll see, we're going to be interested in different sorts of morphisms between categories than the usual ones, namely cofunctors rather than functors, and $\cat{S}$ is not terminal for cofunctors.  

Anyway, to avoid confusion, we'll refer to $\cat{S}$ as the \emph{state category on $S$}, because we will use these to think about states of dynamical systems, and also because the state comonad in functional programming is $S\yon^S$.
\end{example}

\begin{exercise}
We showed in \cref{exc.linear_poly_comon} that for any set $A$, the linear polynomial $p\coloneqq A\yon$ has a unique comonoid structure. 
What category does it correspond to?
\begin{solution}
The linear polynomial $A\yon$ corresponds to a category whose objects form the set $A$ and whose only morphisms are identities: in other words, it is the discrete category on $A$.
\end{solution}
\end{exercise}

\begin{definition}
Let $\cat{C}$ be a category and $c\in\Ob(\cat{C})$ an object. The \emph{degree of $c$}, denoted $\deg(c)$, is the set of arrows in $\cat{C}$ that emanate from $c$.

If $\deg(c)\cong \1$, we say that $c$ is \emph{linear} and if $\deg(c)\cong\ord{n}$ for $n\in\nn$, we say $c$ has \emph{degree $n$}.
\end{definition}

\begin{exercise}
\begin{enumerate}
	\item If every object in $\cat{C}$ is linear, what does it mean about $\cat{C}$?
	\item Is it possible for an object in $\cat{C}$ to have degree $0$?
	\item Find a category that has an object of degree $\nn$.
	\item How many categories are there that have just one linear and one quadratic (degree 2) object?
	\item Is the above the same as asking how many comonoid structures on $\yon^\2+\yon$ there are?
\end{enumerate}
\begin{solution}
\begin{enumerate}
    \item If every object in $\cat{C}$ is linear, then the only morphisms in $\cat{C}$ are the identity morphisms, so $\cat{C}$ must be a discrete category.
    \item It is not possible for an object in $\cat{C}$ to have degree $0$, as every object must have at least an identity morphism emanating from it.
    \item Some possible examples of categories with an object of degree $\nn$ are the monoid $(\nn, 0, +)$ (see \cref{exc.ema_polys} \cref{exc.ema_polys.nat_monoid}), the poset $(\nn, \leq)$ (see \cref{exc.ema_polys} \cref{exc.ema_polys.nat_poset}), and the state category on $\nn$ (see \cref{ex.state_comonad}).
    \item There are $3$ categories with just one linear and one quadratic object. They can be distinguished by the behavior of the single non-identity morphism.
    Either its domain and its codomain are distinct, in which case we have the walking arrow category; or its domain and its codomain are the same, in which case it can be composed with itself to obtain either itself or the identity, yielding two more categories for a total of three.
    \item Yes: since categories correspond to comonoids, there are as many categories with one linear and one quadratic object as there are comonoid structures on $\yon^\2 + \yon$.
\end{enumerate}
\end{solution}
\end{exercise}

\begin{exercise}\label{ex.star_shaped}
\begin{enumerate}
	\item Find a category structure for the polynomial $\yon^{\ord{n}+\1}+\ord{n}\yon$.
	\item Would you call your category ``star-shaped''?
\qedhere
\end{enumerate}
\begin{solution}
\begin{enumerate}
    \item Take the discrete category on $\ord{n}$ and adjoin a unique initial object $A$, so that the only non-identity morphisms are a single morphism from $A$ to each of the other $n$ objects.
    Then this category has the emanation polynomial $\yon^{\ord{n}+\1} + \ord{n}\yon$.
    \item This category could be thought of as ``star-shaped,'' with the initial object at the center and morphisms leading out to the other $n$ objects as spokes.
\end{enumerate}
\end{solution}
\end{exercise}

\begin{exercise}
Let $S$ be a set. 
Is there any comonoid structure on $S\\yon^S$ other than that of the state category?\n\\end{exercise}\n\n\n%-------- Section --------%\n\\section{Morphisms of comonoids are cofunctors}\n\nOur next goal is to understand morphisms $\\com{C}\\to\\com{D}$ between comonoids and what they look like as maps between categories $\\cat{C}\\to\\cat{D}$.\n\n\n\\slogan{Cofunctors $F$ are forward on objects, and backwards on morphisms. It's good to remember: Codomains are objects, so $F$ preserves them going forwards; identities and composites are morphisms, so $F$ preserves them going backwards.}\n \nLet's begin with a definition.\n\n\\begin{definition}[Morphisms of comonoids]\\label{def.morphism_comonoids}\nLet $\\com{C}\\coloneqq(c,\\epsilon,\\delta)$ and $\\com{C}'\\coloneqq(c',\\epsilon',\\delta')$ be polynomial comonoids as in \\cref{def.comonoid}. A \\emph{morphism} $\\com{C}\\to\\com{C}'$ consists of a morphism $f\\colon c\\to c'$ of polynomials that commutes with the structure maps:\n\\begin{equation}\\label{eqn.morphism_comonoids}\n\\begin{tikzcd}\n  c\\ar[r, \"f\"]\\ar[d, \"\\epsilon\"']&\n\tc'\\ar[d, \"\\epsilon'\"]\\\\\n\t\\yon\\ar[r, equal]&\n\t\\yon\n\\end{tikzcd}\n\\hspace{.5in}\n\\begin{tikzcd}\n  c\\ar[r, \"f\"]\\ar[d, \"\\delta\"']&\n\tc'\\ar[d, \"\\delta'\"]\\\\\n\tc\\tri c\\ar[r, \"f\\tri f\"']&\n\tc'\\tri c'\n\\end{tikzcd}\n\\end{equation}\n\\end{definition}\n\nLet's see the two laws of comonoid morphisms using poly-boxes. First the counit law:\n\\begin{equation}\\label{eqn.unit_law_draw}\n\\begin{tikzpicture}\n\t\\node (id1) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue, \"$c$\" above] (p) {};\n\t\t\\node[poly, dgreen, right=1 of p, \"$c'$\" above] (q) {};\n\t\t\\draw (p_pos) to[first] (q_pos);\n\t\t\\draw (q_dir) to[last] (p_dir);\n\t\t\\draw[dgreen] (q_pos) to[climb'] node[right] {$\\idy$} (q_dir);\n\t\t\\node at ($(p.east)!.5!(q.west)$) {$f$};\n\t\\end{tikzpicture}\n\t};\n\t\\node[right=of id1] (id2) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue, \"$c\\vphantom{c'}$\" above] (p) {};\n\t\t\\draw[blue] (p_pos) to[climb'] node[right] {$\\idy$} (p_dir);\n\t\\end{tikzpicture}\t\n\t};\n\t\\node at ($(id1.east)!.5!(id2.west)-(0,6pt)$) {$=$};\n\\end{tikzpicture}\n\\end{equation}\nThen the comultiplication law:\n\\begin{equation}\\label{eqn.comult_law_draw}\n\\begin{tikzpicture}\n\t\\node (sp1) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue] (c) {};\n\t\t\\node[poly, dgreen, right=1 of c] (c') {};\n\t\t\\node[poly, cod, dgreen, right=2 of c'.south, yshift=-1ex] (c'1) {};\n\t\t\\node[poly, cod, dgreen, above=of c'1] (c'2) {};\n\t\t\\node at ($(c.east)!.5!(c'.west)$) {$f$};\n\t%\n\t\t\\draw (c_pos) to[first] (c'_pos);\n\t\t\\draw (c'_dir) to[last] (c_dir);\n\t\t\\draw[dgreen,double] (c'_pos) to[first] (c'1_pos);\n\t\t\\draw[dgreen] (c'1_dir) to[climb] node[right] {$\\cod$} (c'2_pos);\n\t\t\\draw[dgreen] (c'2_dir) to[last] node[above,sloped] {$\\comp$} (c'_dir);\n\t\\end{tikzpicture}\n\t};\n\t\\node[right=of sp1] (sp2) {\n\t\\begin{tikzpicture}[polybox, tos]\n\t\t\\node[poly, dom, blue] (c) {};\n\t\t\\node[poly, blue, right=2 of c.south, yshift=-1ex] (c1) {};\n\t\t\\node[poly, blue, above=of c1] (c2) {};\n\t\t\\node[poly, cod, dgreen, right=1 of c1] (c'1) {};\n\t\t\\node[poly, cod, dgreen, right=1 of c2] (c'2) {};\n\t\t\\node at ($(c1.east)!.3!(c'1.west)$) {$f$};\n\t\t\\node at ($(c2.east)!.3!(c'2.west)$) {$f$};\n\t%\n\t\t\\draw[blue,double] (c_pos) to[first] (c1_pos);\n\t\t\\draw[blue] (c1_dir) to[climb] node[right] 
{$\cod$} (c2_pos);
		\draw[blue] (c2_dir) to[last] node[above,sloped] {$\comp$} (c_dir);
		\draw (c1_pos) to[first] (c'1_pos);
		\draw (c'1_dir) to[last] (c1_dir);
		\draw (c2_pos) to[first] (c'2_pos);
		\draw (c'2_dir) to[last] (c2_dir);
  \end{tikzpicture}	
	};
	\node at ($(sp1.east)!.5!(sp2.west)-(0,4pt)$) {$=$};
\end{tikzpicture}
\end{equation}
If we fill in \eqref{eqn.unit_law_draw} with an object $i\in\ema{c}(\1)$, we obtain the equation
\[f^\sharp(i,\idy(f_1(i)))=\idy(i),\]
which is the first law of \cref{def.cofunctor}. If we fill in \eqref{eqn.comult_law_draw} with $i\in\ema{c}(\1)$ and $m\in\ema{c}'[f(i)]$ and $m'\in\ema{c}'[\cod(m)]$, we obtain the equations
\begin{gather*}
  \cod(m)=f_1\big(\cod\big(f^\sharp(i,m)\big)\big)\\
  f^\sharp(i,\comp(m,m'))=\comp(f^\sharp(i,m),f^\sharp(\cod(f^\sharp(i,m)),m'))
\end{gather*}
and these are the second and third laws of \cref{def.cofunctor}.

\begin{exercise}\label{ex.cofunctors_comon_homs}
Summarize the proof of \cref{thm.ahman_uustalu}, developed above. You may cite anything written in the text so far.
\end{exercise}

\begin{proposition}\label{prop.cofunctors_isos}
Let $F\colon\cat{C}\cof\cat{D}$ be a cofunctor, $c,c'\in\Ob(\cat{C})$ objects, and $g\colon F(c)\to F(c')$ a morphism in $\cat{D}$. Then if $g$ is an isomorphism, so is $F^\sharp_{c}(g)$.
\end{proposition}
\begin{proof}
Let $d\coloneqq F(c)$ and $d'\coloneqq F(c')$, let $g'$ be the inverse of $g$, and let $c''\coloneqq\cod F^\sharp_c(g)$; note that $F(c'')=d'$ by the codomain law. We have
\begin{align*}
	\id_c&=
	F^\sharp_c(\id_d)\\&=
	F^\sharp_c(g\then g')\\&=
	F^\sharp_c(g)\then F^\sharp_{c''}(g')
\end{align*}
In particular $\cod F^\sharp_{c''}(g')=\cod\id_c=c$, so running the same computation starting from $c''$ with $g'$ in place of $g$ gives $\id_{c''}=F^\sharp_{c''}(g')\then F^\sharp_{c}(g)$. Thus $F^\sharp_{c}(g)$ and $F^\sharp_{c''}(g')$ are mutually inverse, completing the proof.
\end{proof}

%---- Subsection ----%
\subsection{Examples of cofunctors}

We saw in \cref{thm.ahman_uustalu}, summarized in \cref{ex.cofunctors_comon_homs}, that cofunctors $\cat{C}\cof\cat{D}$ are the same thing as morphisms of comonoids $\com{C}\to\com{D}$, so we elide the difference. The question we're interested in now is: how do we think about cofunctors? What is a map of polynomial comonoids like?

The rough idea is that a cofunctor $\cat{C}\cof\cat{D}$ is, in particular, a morphism $\ema{c}\to\ema{d}$ in $\poly$ between their emanation polynomials. This map preserves identities, codomains, and composition, which is great, but you still feel like you've got a map of polynomials on your hands: it goes forwards on objects and backwards on morphisms.

\slogan{
If a functor $\cat{C}\to\cat{D}$ is a \emph{picture} of $\cat{C}$ in $\cat{D}$, then a cofunctor $\cat{C}\cof\cat{D}$ is a \emph{$\cat{D}$-shaped crystallization of $\cat{C}$}.
}

Let's look at some examples to see how cofunctors look like crystallizations, or perhaps partitions.

\begin{example}\label{ex.BGEG}
Let $(G,e,*)$ be a group and $(\yon^G,\epsilon,\delta)$ the corresponding comonoid. There is a cofunctor $G\yon^G\to\yon^G$ given by
\[
\begin{tikzpicture}[polybox, mapstos]
	\node[poly, dom] (p) {$g_1*g_2$\at$g_1$};
	\node[poly, pure cod, right=of p] (q) {$g_2$\at\vphantom{$g_1$}};
	\draw (p_pos) to[first] (q_pos);
	\draw (q_dir) to[last] (p_dir);
\end{tikzpicture}
\]
To see this is a cofunctor, we check that identities, codomains, and compositions are preserved. For any $g_1$, the identity $e$ is passed back to $g_1*e=g_1$, and this is the identity on $g_1$ in $G\yon^G$. 

\begin{exercise}\label{exc.BGEG}
Does the idea of \cref{ex.BGEG} work when $G$ is merely a monoid, or does something go subtly wrong somehow?
\end{exercise}

\begin{proposition}\label{prop.monoids_ff}
There is a fully faithful functor $\Cat{Mon}\op\to\smcat^\sharp$, whose image is precisely those categories whose emanation polynomial is representable.
\end{proposition}
\begin{proof}
Given a monoid $(M,e,*)$, we think of it as a category with one object; its emanation polynomial $\yon^M$ is representable. A cofunctor between such categories carries no data in its on-objects part, and codomains are automatically preserved. Cofunctors $\yon^M\to\yon^N$ simply carry elements of $N$ to elements of $M$, preserving identity and composition, which is exactly the description of a monoid homomorphism $N\to M$.
\end{proof}

\begin{proposition}
There is an adjunction between $\smcat^\sharp$ and $\smset$, given by a natural isomorphism
\[
\smcat^\sharp(\cat{C},A\yon)\cong\smset(\Ob(\cat{C}),A)
\]
where $\cat{C}\in\smcat^\sharp$ is a comonoid and $A\in\smset$ is a set.
\end{proposition}

\begin{example}\label{ex.cof_to_rr}
Consider the category $\rr\yon^\rr$, where the codomain of $r$ emanating from $x$ is $x+r$, identities are $0$, and composition is given by addition. What are cofunctors into $\rr\yon^\rr$?

Let $\cat{C}$ be a category and $|\cdot|\colon\cat{C}\cof\rr\yon^\rr$ a cofunctor. It assigns to every object $c$ both a real number $|c|\in\rr$ and a choice of emanating morphism $|c|^\sharp(r)\colon c\to c_r$ such that $|c|+r=|c_r|$. This assignment satisfies some laws. Namely, we have $c_0=c$ and, given reals $r,s\in\rr$, we have $(c_r)_s=c_{r+s}$. 
\end{example}

\begin{exercise}
\begin{enumerate}
	\item Do you think a cofunctor $\cat{C}\cof\rr\yon^\rr$ as in \cref{ex.cof_to_rr} should be called an $(\rr,0,+)$-action on the objects of $\cat{C}$, or a filtration, or a valuation, or something else?
	\item Why?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
\begin{enumerate}
	\item Over two discrete objects $\{A,B\}$, how many cofunctors
    \[
        \yon^\2+\yon\cong\fbox{$A\to B$}\cof\fbox{$A\tto B$}\cong\yon^\3+\yon
    \]
    are there from the walking arrow category to the walking parallel-arrows category?
	\item What is meant more precisely by ``over two discrete objects $\{A,B\}$'' above?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
Let $\com{C}=(\ema{c},\epsilon,\delta)$ be a comonoid in $\poly$. We have a state category $\ema{c}(\1)\yon^{\ema{c}(\1)}$ on the set of objects of $\com{C}$. There is a map of polynomials $\ema{c}(\1)\yon^{\ema{c}(\1)}\to\ema{c}$ given by
\[
\begin{tikzpicture}[polybox, mapstos]
	\node[poly, dom] (s) {$\cod(m)$\at$i$};
	\node[poly, cod, right=of s] (c) {$m\vphantom{d}$\at$i$};
	\draw[double] (s_pos) -- (c_pos);
	\draw (c_dir) -- node[above] {$\cod$} (s_dir); 
\end{tikzpicture}
\]
for an object $i\in\ema{c}(\1)$ and an outgoing morphism $m\in\ema{c}[i]$. Is this map a cofunctor?
\end{exercise}

\begin{example}[Canonical cofunctors from state categories]\label{ex.cof_from_state}
Let $\com{C}=(\ema{c},\epsilon,\delta)$ be a comonoid, where $\delta=(\id,\cod,\comp)$ as in \eqref{eqn.cod_comp}.
For any position $i\in\ema{c}(\1)$, there is a cofunctor
\[
	(\cod,\comp)\colon\ema{c}[i]\yon^{\ema{c}[i]}\cof\ema{c}.
\]
That is, an object $f\in \ema{c}[i]$ is also a morphism in $\cat{C}$ and we send it to its codomain $\cod(f)$. A morphism in $\cat{C}$ emanating from $\cod(f)$ is passed back to its composite with $f$. 
\end{example}

\begin{exercise}
Suppose $\com{C}=(\ema{c},\epsilon,\delta)$ is a comonoid.
\begin{enumerate}
	\item Show that the map $(\cod,\comp)\colon\ema{c}[i]\yon^{\ema{c}[i]}\cof\ema{c}$ from \cref{ex.cof_from_state} satisfies the conditions necessary for being a cofunctor (identities, codomains, and composites).
	\item Find a comonoid structure on the polynomial $p\coloneqq\sum_{i\in\ema{c}(\1)}\ema{c}[i]\yon^{\ema{c}[i]}$ and a cofunctor $p\to\ema{c}$.
	\item Is the polynomial map $p\to\ema{c}$ an epimorphism?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
Suppose $\ema{c},\ema{d},\ema{e}$ are polynomials, each with a comonoid structure, and that $f\colon \ema{c}\to\ema{d}$ and $g\colon\ema{d}\to\ema{e}$ are maps of polynomials.
\begin{enumerate}
	\item If $f$ and $f\then g$ are each cofunctors, is $g$ automatically a cofunctor? If so, sketch a proof; if not, sketch a counterexample.
	\item If $g$ and $f\then g$ are each cofunctors, is $f$ automatically a cofunctor? If so, sketch a proof; if not, sketch a counterexample.
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
\begin{enumerate}
	\item For any category $\cat{C}$ with emanation polynomial $\ema{c}$, find a category with emanation polynomial $\ema{c}\yon$.
	\item Show your construction is functorial; i.e.\ given a cofunctor $\ema{c}\cof\ema{d}$, find one $\ema{c}\yon\cof\ema{d}\yon$, preserving identity and composition.
	\item Is your functor either a monad or a comonad on $\smcat^\sharp$?
	\item What category do you get by repeatedly applying this functor to $\yon$?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
Are cofunctors between posets interesting?
\begin{enumerate}
	\item Consider the chain poset $[n]\cong\sum_{i=1}^n\yon^i$. How many cofunctors $[m]\cof[n]$ are there, for each $m,n\in\{0,1,2,3\}$?
	\item What does a cofunctor from $\yon$ into a poset represent? Is there anything you'd call ``asymmetric'' about it?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
\begin{enumerate}
	\item What is the finite set $\{\com{Q}_1,\ldots,\com{Q}_n\}$ of comonoids (defined up to isomorphism) for which the carrier polynomial is $\yon^\2+\yon$?
	\item For each category $\com{Q}_i$, describe how to imagine a cofunctor $\com{C}\to\com{Q}_i$ from an arbitrary category into it.
	\item What cofunctors exist between the various $\com{Q}_i$?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
Let $S$ be a set. Describe a way to visualize cofunctors from categories into the state category $S\yon^S$. Feel free to focus on the case where $S$ is a small finite set. Hint: use \cref{prop.cofunctors_isos}.
\end{exercise}

\begin{exercise}
\begin{enumerate}
	\item Recall the star-shaped category $\yon^{\ord{n}+\1}+\ord{n}\yon$ from \cref{ex.star_shaped}.
Describe cofunctors into it.
	\item Describe cofunctors into $A\yon$ for a set $A$.
	\item Describe cofunctors into $(\nn,\leq)$.
	\item Describe cofunctors into $(\nn,\geq)$.
	\item Let $\yon^\4+2\yon^\2+\yon$ denote the commutative square category. List the cofunctors from it to the walking arrow category $\yon^\2+\yon$; there should be six or so.
\qedhere
\end{enumerate}
\end{exercise}

\begin{example}[Objects aren't representable in $\smcat^\sharp$]\label{ex.rep_objects}
For categories and ordinary functors, there is a category $\cat{T}$ that \emph{represents objects}, in the sense that functors $\cat{T}\to\cat{C}$ are the same as objects in $\cat{C}$; indeed, take $\cat{T}=\fbox{$\bullet$}$ to be the terminal (one-morphism) category.

This does not work for cofunctors, as we'll see in \cref{exc.rep_objects}. The comonoid corresponding to $\cat{T}$ is $\yon$ with its unique comonoid structure. Cofunctors $\cat{T}\cof\cat{C}$ are somewhat strange beasts: they can be identified with objects $c\in\cat{C}$ for which the codomain of every emanating morphism $c\to c'$ is $c'=c$ itself. The reason is the codomain condition (\cref{def.cofunctor}, condition 2).
\end{example}

\begin{exercise}\label{exc.rep_objects}
We saw in \cref{exc.linear_poly_comon} that $\2\yon$ has a unique comonoid structure.
\begin{enumerate}
	\item Show that for any category $\cat{T}$, there are $2^{\#\Ob(\cat{T})}$-many cofunctors $\cat{T}\cof\2\yon$.
	\item Use the case of $\cat{C}\coloneqq\2\yon$ to show that if a category $\cat{T}$ is going to represent objects as in \cref{ex.rep_objects} then $\cat{T}$ must have one object.
	\item Now use a different $\cat{C}$ to show that if a category $\cat{T}$ is going to represent objects, it must have more than one object.
\qedhere
\end{enumerate}
\end{exercise}

\begin{example}[Policies are co-representable]\label{ex.trajectories_corep}
For a category $\cat{C}$, let's say that a \emph{policy in $\cat{C}$} is a choice, for each object $c\in\cat{C}$, of an emanating morphism $f\colon c\to c'$.
For example, consider the category $(\nn,\leq)\times(\nn,\leq)$:
\[
\begin{tikzpicture}[shorten <=4pt, shorten >=4pt]
	\foreach \i in {0,1,2,3} 
	{
		\foreach \j in {0,1,2}
		{
			\node (\i\j) at (\i,\j) {$\bullet$};
			\draw[->] (\i,\j) -- (\i+1,\j);
			\draw[->] (\i,\j) -- (\i,\j+1);
		};
	};
	\begin{scope}[red, thick]
		\draw[->] (0,0) -- (0,1);
		\draw[->] (0,1) -- (1,3);
		\draw[->] (0,2) -- (1,2);
		\draw[->] (1,0) -- (1,1);
		\draw[->] (1,1) edge[in=30, out=60, loop] (1,1);
		\draw[->] (1,2) -- (2,2);
		\draw[->] (2,0) to[bend right] (2,2);
		\draw[->] (2,1) -- (2,2);
		\draw[->] (2,2) -- (3,2);
		\draw[->] (3,0) -- (3,1);
		\draw[->] (3,1) -- (3,2);
		\draw[->] (3,2) edge[in=30, out=60, loop] (3,2);
	\end{scope}
\end{tikzpicture}
\]
In red we have drawn a policy: every object has been assigned an emanating morphism to some object, possibly itself; there doesn't need to be any rhyme or reason to our choice.

For any category $\cat{C}$, the set of policies in $\cat{C}$ is in bijection with the set of cofunctors
\[
\cat{C}\cof\cat{N}
\]
where $\cat{N}=\yon^\nn$ is the monoid of natural numbers under addition, considered as a category with one object.
\end{example}

\begin{exercise}
At the end of \cref{ex.trajectories_corep} we said that a policy on $\cat{C}$ can be identified with a cofunctor $F\colon\cat{C}\cof\cat{N}$. But at first it appears that $F$ includes more than just a policy: for every object $c\in\Ob(\cat{C})$ and natural number $n\in\nn$, we have a morphism $F^\sharp_c(n)$ emanating from $c$. That's infinitely many emanating morphisms per object, whereas a policy seems to include only one emanating morphism per object.

Explain why looks are deceiving in this case: why is a policy on $\cat{C}$ the same as a cofunctor $\cat{C}\cof\cat{N}$?
\end{exercise}

We will see later in \cref{prop.traj_mon_poly} that the trajectories on a category form a monoid, and that this operation $\smcat^\sharp\to\Cat{Mon}\op$ is functorial and in fact an adjoint.

\begin{exercise}[Continuous policies]
Suppose we say that a continuous policy in $\cat{C}$ is a cofunctor $\cat{C}\cof\cat{R}$, where $\cat{R}$ is the monoid of real numbers under addition, considered as a category with one object. 

Describe continuous policies in $\cat{C}$ in elementary terms, i.e.\ to someone who doesn't know what a cofunctor is and isn't yet ready to learn.
\end{exercise}
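
To make the identification concrete in the discrete case, here is a hedged Python sketch (the toy category and its morphism names are our own, purely for illustration) of how a policy generates the cofunctor data: $n\in\nn$ is passed back to the $n$-fold composite of policy steps starting at $c$.
\begin{verbatim}
# a policy: each object is assigned one emanating morphism
policy = {              # object -> (chosen morphism, its codomain)
    "x": ("f", "y"),
    "y": ("g", "z"),
    "z": ("id_z", "z"), # a policy may well choose an identity
}

def pass_back(c, n):
    """F^sharp at object c: n is sent back to the n-fold composite
    of policy steps; we record the steps and the endpoint."""
    steps = []
    for _ in range(n):
        name, c = policy[c]
        steps.append(name)
    return steps, c

# 0 is passed back to the identity on c (the empty composite):
assert pass_back("x", 0) == ([], "x")
# the composition law holds by construction: m+n steps split as
# m steps followed by n steps from wherever the m steps end up
m, n = 2, 3
whole, end = pass_back("x", m + n)
first, mid = pass_back("x", m)
rest, end2 = pass_back(mid, n)
assert whole == first + rest and end == end2
\end{verbatim}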

\begin{exercise}
Let $\rr/\zz\cong[0,1)$ be the quotient of $\rr$ by the $\zz$-action sending $(r,n)\mapsto r+n$. More down to earth, it's the set of real numbers between $0$ and $1$, including $0$ but not $1$.
\begin{enumerate}
	\item Find a comonoid structure on $(\rr/\zz)\yon^\rr$.
	\item Is it a groupoid?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
\begin{enumerate}
	\item If two categories are isomorphic in $\smcat$, does that imply they are isomorphic in $\smcat^\sharp$?
	\item If so, prove it; if not, give a counterexample.
	\item Is it true that for any two categories $\cat{C},\cat{D}$, there is a bijection between the set of isomorphisms $\cat{C}\To{\cong}\cat{D}$ in $\smcat$ and the set of isomorphisms $\cat{C}\overset{\cong}{\cof}\cat{D}$ between them in $\smcat^\sharp$?
	\item If so, prove it; if not, give a counterexample.
\qedhere
\end{enumerate}
\end{exercise}

%---- Subsection ----%
\subsection{Very well-behaved lenses}

In the functional programming community, there is an important notion of very well-behaved lenses. These turn out to be precisely the cofunctors between state categories. Since state categories $S\yon^S$ play an important role in our theory, we take a bit of time to consider cofunctors between them.

\begin{example}[Very well-behaved lenses]\label{ex.well_behaved}
Recall from \cref{ex.state_comonad_1} that for any set $S$, we have the ``state'' category with emanation polynomial $S\yon^S$. What are the comonoid morphisms---cofunctors---between different state categories?

First, such a comonoid morphism includes a morphism of polynomials $f\colon S\yon^S\to T\yon^T$; we'll use the standard terminology of ``get'' and ``put'':
\[
\begin{tikzpicture}[polybox, tos]
	\node[poly, dom] (s) {};
	\node[poly, cod, right=of s] (t) {};
	\node[left=0pt of s_pos] {$S$};
	\node[left=0pt of s_dir] {$S$};
	\node[right=0pt of t_pos] {$T$};
	\node[right=0pt of t_dir] {$T$};
	%
	\draw (s_pos) to[first] node[below] {$\lensget$} (t_pos);
	\draw (t_dir) to[last] node[above] {$\lensput$} (s_dir);
\end{tikzpicture}
\]
Let's apply the unit-homomorphism property \eqref{eqn.unit_law_draw}:
\[
\begin{tikzpicture}
	\node (id1) {
	\begin{tikzpicture}[polybox, tos]
  	\node[poly, dom] (s) {put$($get$(s))$\at$s\vphantom{(}$};
  	\node[poly, right=of s] (t) {get$(s$)\at get$(s$)};
  	%
  	\draw (s_pos) to[first] node[below] {get} (t_pos);
  	\draw (t_dir) to[last] node[above] {put} (s_dir);
  	\draw (t_pos) to[climb'] (t_dir);
	\end{tikzpicture}
	};
	\node[right=of id1] (id2) {
	\begin{tikzpicture}[polybox, tos]
		\node[poly, dom] (s) {$s$\at$s$};
		\draw (s_pos) to[climb'] (s_dir);
	\end{tikzpicture}	
	};
	\node at ($(id1.east)!.5!(id2.west)-(0,6pt)$) {$=$};
\end{tikzpicture}
\]
It says that $\lensput(s,\lensget(s))=s$.
This is typically called the get-put law.

We leave the comultiplication-homomorphism law to \cref{exc.well_behaved}, where we will see that it specifies that get and put must satisfy two other properties, called the put-put and the put-get laws.
\end{example}

\begin{exercise}\label{exc.well_behaved}
Complete \cref{ex.well_behaved}.
\begin{enumerate}
	\item Write out the comultiplication law from \eqref{eqn.comult_law_draw} in terms of poly-boxes.
	\item What set-theoretic equations are forced by the comultiplication law?
	\item Can you see why they might be called put-put and put-get?
\qedhere
\end{enumerate}
\end{exercise}

\begin{example}[Very well-behaved lenses are kinda boring]\label{ex.well_behaved_boring}
We saw in \cref{exc.well_behaved} that a comonoid homomorphism (cofunctor) $S\yon^S\cof T\yon^T$ between state comonoids can be characterized as a pair of functions $\lensget\colon S\to T$ and $\lensput\colon S\times T\to S$ satisfying get-put, put-get, and put-put. 

In fact, it turns out that this happens if and only if $\lensget$ is a product projection! For example, if the cardinalities $|S|$ and $|T|$ of $S$ and $T$ are finite and $|S|$ is not divisible by $|T|$, then there are no cofunctors $S\yon^S\cof T\yon^T$. A stringent condition, no? We'll explore it in \cref{exc.how_many_vwbls} below.

Let's explain why cofunctors between state categories are just product projections. A product projection $A\times B\to A$ always has another factor ($B$); if every cofunctor between state categories is a product projection, what is the other factor? It turns out that the other factor will be:
\[
F\coloneqq\{f\colon T\to S\mid \forall t,t'\in T,\;\lensget(f(t))=t\text{ and }\lensput(f(t),t')=f(t')\}.
\]
In other words, we will see that if $(\lensget,\lensput)$ is a comonoid homomorphism then there is a bijection $S\cong T\times F$ and that $\lensget\colon S\to T$ is one of the projections. We will see that the converse is true in \cref{exc.well_behaved_boring}.

So assume $(\lensget,\lensput)\colon S\yon^S\cof T\yon^T$ is a comonoid homomorphism, in particular that it satisfies put-get, get-put, and put-put. We obtain a function $\pi\colon S\to T\times F$ given by
\[s\mapsto\big(\lensget(s),t\mapsto\lensput(s,t)\big)\]
and it is well-defined since for all $s\in S$ and $t,t'\in T$ we have $\lensget(\lensput(s,t))=t$ by put-get and $\lensput(\lensput(s,t),t')=\lensput(s,t')$ by put-put. We also obtain a function $\pi'\colon T\times F\to S$ given by
\[
(t,f)\mapsto f(t).
\]
The two functions $\pi,\pi'$ are mutually inverse: the roundtrip on $S$ is identity because $\lensput(s,\lensget(s))=s$ by get-put; the roundtrip on $T\times F$ is identity because $\lensget(f(t))=t$ and $\lensput(f(t),t')=f(t')$ by assumption on $f\in F$.
\end{example}
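
Before making that precise in the exercises, here is a minimal Python sketch (our own, with made-up finite sets) illustrating the correspondence numerically: take $S=T\times F$ on the nose, let $\lensget$ be the projection, and let $\lensput$ overwrite the $T$-coordinate; all three laws then hold.
\begin{verbatim}
from itertools import product

T = ["t0", "t1"]
F = ["f0", "f1", "f2"]
S = list(product(T, F))        # here S is literally T x F

def get(s):                    # S -> T: the product projection
    return s[0]

def put(s, t):                 # S x T -> S: overwrite T-coordinate
    return (t, s[1])

for s in S:
    assert put(s, get(s)) == s                       # get-put
    for t in T:
        assert get(put(s, t)) == t                   # put-get
        for t2 in T:
            assert put(put(s, t), t2) == put(s, t2)  # put-put
\end{verbatim}
Note that $|S|=6$ is divisible by $|T|=2$, as the discussion above says it must be.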

\begin{exercise}\label{exc.well_behaved_boring}
Let $S,T,F$ be sets and suppose given an isomorphism $\alpha\colon S\to T\times F$.
\begin{enumerate}
	\item Show that there exists a very well-behaved lens $\lensget\colon S\to T$ and $\lensput\colon S\times T\to S$.
	\item Show that there exists a cofunctor between the state category on $S$ and the state category on $T$.
	\item Show that there exists a comonoid homomorphism $S\yon^S\to T\yon^T$ between the state comonoids.
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}\label{exc.how_many_vwbls}
\begin{enumerate}
	\item Suppose $|S|=3$. How many cofunctors $S\yon^S\cof S\yon^S$ are there?
	\item Suppose $|S|=4$ and $|T|=2$. How many cofunctors $S\yon^S\cof T\yon^T$ are there?
\qedhere
\end{enumerate}
\end{exercise}

\begin{exercise}
Let $S,T$ be sets and $\lensget\colon S\to T$ and $\lensput\colon S\times T\to S$ the parts of a very well-behaved lens, i.e.\ a cofunctor $S\yon^S\to T\yon^T$ between state categories. Is it possible that $\lensput\colon S\times T\to S$ is itself a product projection, i.e.\ sends $(s,t)\mapsto s$?
\end{exercise}

When we get to cofree comonoids, we'll obtain a whole new class of cofunctors that are interesting to consider. But for now, we move on to more theory.

%-------- Section --------%
\section{Products in $\smcat^\sharp$}

Products in $\smcat^\sharp$ are fascinating. Given categories $\cat{C}$ and $\cat{D}$, the set of objects in their $\smcat$-product is given by the product of their sets of objects, but this is not the case in $\smcat^\sharp$. So what is an object in $\cat{C}\times^\sharp\cat{D}$ (the usual categorical product, but taken in $\smcat^\sharp$)?
\n\nAn object in $\\cat{C}\\times^\\sharp\\cat{D}$ is, roughly speaking, a tree for which each node is a pair of objects: some $i\\in\\cat{C}$ and some $j\\in\\cat{D}$.\nThe edges leading out of node $(i, j)$ are then all the morphisms emanating from $i$ and all the morphisms emanating from $j$.\nThe tree must respect identities, codomains, and composites in $\\cat{C}$ and $\\cat{D}$, as we'll explain.\nBut before we do, we define the following category, which will be crucial to our understanding of products in $\\smcat^\\sharp$.\n\n\\begin{definition}[Free monoidal categories on monoids and comonoids] \\label{def.free_monoid_cat}\nGiven a (small) set $I$, define $\\Delta_I$ to be the free monoidal category generated by $|I|$ distinct monoids.\nDually, $\\Delta_I\\op$ is the free monoidal category generated by $|I|$ distinct comonoids.\n\\end{definition}\n\nEssentially, what this definition says is that $\\Delta_I\\op$ is a monoidal category with $|I|$ distinct comonoids, each with its own counit and comultiplication, as well as all the objects and morphisms that can be obtained from these comonoids via composition and taking the monoidal product.\nThese objects and morphisms are then subject to no relations beyond those implied by the standard comonoid axioms.\n\nIn particular, we can identify the objects of $\\Delta_I\\op$ with the elements of the free monoid $\\List(I)$, where the $|I|$ comonoids are the singleton lists $[i]$ for each $i \\in I$.\nThe monoidal product of $\\Delta_I\\op$ can then be interpreted as list concatenation, which---in a suggestive overloading of notation---we will denote by $\\tri$.\nThe monoidal unit must then be the empty list $[]$.\nWe could give analogous notation for $\\Delta_I$.\n\nThe counits $[i] \\to []$ and comultiplications $[i] \\to [i,i]$ for each $i \\in I$ generate all the morphisms of $\\Delta_I\\op$ via composition and taking the monoidal product, while satisfying associativity and left and right unit laws.\nFor instance, with $I \\coloneqq \\3$, there are two distinct morphisms $[2,3,1,3] \\to [2,2,3]$ in $\\Delta_I\\op$. 
One of them is given by
\[
    ([2] \to [2,2]) \tri \id_{[3]} \tri ([1] \to []) \tri ([3] \to []),
\]
and the other is given by
\[
    ([2] \to [2,2]) \tri ([3] \to []) \tri ([1] \to []) \tri \id_{[3]}.
\]

Another way to state \cref{def.free_monoid_cat} would be to say that $\Delta_I\op$ is initial among monoidal categories equipped with $|I|$ comonoids.
That is, given any monoidal category $(\cat{C}, \yon, \tri)$ with a comonoid $(c_i, \epsilon_i, \delta_i)$ for each $i \in I$, there is a unique monoidal functor $\Delta_I\op \to \cat{C}$ that sends each $[i]$ to $c_i$, each $[i] \to []$ to $\epsilon_i$, and each $[i] \to [i,i]$ to $\delta_i$.
Dually, $\Delta_I$ is initial among monoidal categories equipped with $|I|$ monoids.

\begin{example}[The augmented simplex category]
If we take $I \coloneqq \1$, then $\Delta \coloneqq \Delta_I$ is what is commonly known as the \emph{augmented simplex category} or the \emph{algebraist's simplex category}.
It is the free monoidal category generated by the monoid $[1]$ with unit $[] \to [1]$ and multiplication $[1,1] \to [1]$.

If we identify each $n$-element list $[1,\ldots,1]$ with the finite set $\ord{n}$, interpreted as an ordinal, we can see that $\Delta$ is in fact the category of all finite ordinals (i.e.\ $\0, \1, \2, \ldots$) and the order-preserving maps between them.
In \cite[Chapter~VII, Section~8]{maclane}, Mac Lane verifies that $\Delta$ is initial among monoidal categories equipped with a monoid.
\end{example}

Armed with the free monoidal category $\Delta_I\op$, we are now ready to state how products can be constructed in $\smcat^\sharp$.

\begin{proposition}\label{prop.sharp_products}
The category $\smcat^\sharp$ has all small products.

In particular, for a small set $I$ and a category $\cat{C}_i \in \smcat^\sharp$ corresponding to a comonoid $\ema{c}_i \in \poly$ for each $i \in I$, the product of the $\cat{C}_i$'s is given by the limit of the canonical monoidal functor $C \colon \Delta_I\op \to \poly$ sending each $[i]$ to $\ema{c}_i$.
\end{proposition}

Before we give the proof of the proposition above, let us examine what it says concretely in the case of binary products.

\begin{example}[Binary products in $\smcat^\sharp$] \label{ex.bin_prod}
For each $i \in \2$, let $\cat{C}_i$ be the category corresponding to the comonoid $(\ema{c}_i, \epsilon_i, \delta_i)$ in $\poly$.
Then we can define $C \colon \Delta_\2\op \to \poly$ to be the canonical monoidal functor sending each $[i]$ to $\ema{c}_i$.
\cref{prop.sharp_products} asserts that $\lim C$ is the product of $\cat{C}_1$ and $\cat{C}_2$ in $\smcat^\sharp$.
But what kind of a polynomial is $\\lim C$, and what is its comonoid structure?\n\nWell, for every list $\\ell \\in \\Ob(\\Delta_\\2\\op) = \\List(\\2)$, the polynomial $\\lim C$ should have a $C\\ell$-component, i.e.\\ a projection $\\pi_\\ell \\colon \\lim C \\to C\\ell$.\nFor example, when $\\ell \\coloneqq [1,2,2,1,1,1]$, we have $C\\ell = \\ema{c}_1 \\tri \\ema{c}_2 \\tri \\ema{c}_2 \\tri \\ema{c}_1 \\tri \\ema{c}_1 \\tri \\ema{c}_1$, giving us a projection $\\lim C \\to \\ema{c}_1 \\tri \\ema{c}_2 \\tri \\ema{c}_2 \\tri \\ema{c}_1 \\tri \\ema{c}_1 \\tri \\ema{c}_1$.\nThese projections must commute with the morphisms in the image of $C$, which are precisely the morphisms generated by the $\\epsilon_i$'s and the $\\delta_i$'s via composition and taking the monoidal product.\nFor instance, the diagram\n\\begin{equation} \\label{eq.lim_example}\n\\begin{tikzcd}\n    \\lim C \\ar[r, \"\\pi_{[1,2,1]}\"] \\ar[dr, \"\\pi_{[1,2,2,1,1,1]}\"'] & \\ema{c}_1 \\tri \\ema{c}_2 \\tri \\ema{c}_1 \\ar[d, \"\\ema{c}_1 \\tri \\delta_2 \\tri (\\delta_1 \\then (\\ema{c}_1 \\tri \\delta_1))\"] \\\\\n    & \\ema{c}_1 \\tri \\ema{c}_2 \\tri \\ema{c}_2 \\tri \\ema{c}_1 \\tri \\ema{c}_1 \\tri \\ema{c}_1\n\\end{tikzcd}\n\\end{equation}\ncommutes.\nIn fact, notice that for all $\\ell \\in \\List(\\2)$, there exists a unique alternating list $\\ell'$ of $1$'s and $2$'s with no repetitions (such as $[1,2,1], [2,1,2,1], [],$ or $[2]$) for which there is a unique morphism $d_{\\ell, \\ell'} \\colon \\ell' \\to \\ell$ generated by comultiplications $[i] \\to [i,i]$ (here uniqueness is guaranteed by associativity).\nSo we can generalize \\eqref{eq.lim_example} to say that\n\\begin{equation}\n\\begin{tikzcd}\n    \\lim C \\ar[r, \"\\pi_{\\ell'}\"] \\ar[dr, \"\\pi_\\ell\"'] & C\\ell' \\ar[d, \"Cd_{\\ell,\\ell'}\"] \\\\\n    & C\\ell\n\\end{tikzcd}\n\\end{equation}\ncommutes.\n\nHence the family of projections $\\pi_\\ell$ for all $\\ell \\in \\List(\\2)$ is completely characterized by just the projections $\\pi_{\\ell'}$ for which $\\ell'$ contains no repetitions.\nLet us focus, then, on only those projections.\nTogether, they form the commutative diagram\n\\begin{equation} \\label{eq.lim_projs}\n\\begin{tikzcd}\n    \\yon & \\ema{c}_1 \\ar[l, \"\\epsilon_1\"'] & \\ema{c}_1 \\tri \\ema{c}_2 \\ar[l, \"\\ema{c}_1 \\tri \\epsilon_2\"'] & \\cdots \\ar[l] \\\\\n    \\lim C \\ar[u, \"\\pi_{[]}\"] \\ar[d, \"\\pi_{[]}\"'] \\ar[ur, \"\\pi_{[1]}\"] \\ar[dr, \"\\pi_{[2]}\"'] \\ar[urr, \"\\pi_{[1,2]}\"'] \\ar[drr, \"\\pi_{[2,1]}\"] \\\\\n    \\yon & \\ema{c}_2 \\ar[l, \"\\epsilon_2\"] & \\ema{c}_2 \\tri \\ema{c}_1 \\ar[l, \"\\ema{c}_2 \\tri \\epsilon_1\"] & \\cdots. 
\ar[l]
\end{tikzcd}
\end{equation}
It follows that there are projections from $\lim C$ to both the limit of the top row of \eqref{eq.lim_projs} and the limit of the bottom row of \eqref{eq.lim_projs}.
In particular, the limit of the top row of \eqref{eq.lim_projs} can be thought of intuitively as the ``infinite'' monoidal product $\ema{c}_1 \tri \ema{c}_2 \tri \ema{c}_1 \tri \cdots$, corresponding to the polynomial whose positions are all of the infinite structures that can be constructed by following these instructions:
\begin{quote}
\begin{enumerate}[label=1.\arabic*.]
    \item choose an object $c_1 \in \cat{C}_1$;
    \item for each morphism $f_1$ in $\cat{C}_1$ with domain $c_1$:
    \begin{enumerate}[label=2.\arabic*.]
        \item choose an object $d_2 \in \cat{C}_2$;
        \item for each morphism $g_2$ in $\cat{C}_2$ with domain $d_2$:
        \begin{enumerate}[label=3.\arabic*.]
            \item choose an object $c_3 \in \cat{C}_1$;
            \item for each morphism $f_3$ in $\cat{C}_1$ with domain $c_3$:
            
            $\quad \cdots$,
        \end{enumerate}
    \end{enumerate}
\end{enumerate}
\end{quote}
Then the directions at each such position are the sequences of morphisms formed by starting from the top of the instructions above and taking the morphisms mentioned in steps $1.2, 2.2, \ldots, n.2$, for some finite $n$, to obtain sequences such as $(), (f_1), (f_1, g_2),$ and $(f_1, g_2, f_3)$.
The limit of the bottom row of \eqref{eq.lim_projs} can be characterized in the same way, but with the roles of $\cat{C}_1$ and $\cat{C}_2$ swapped.
From these structures, we can read off the behavior of each projection $\pi_{\ell'}$ from $\lim C$; for instance, the projection $\pi_{[1,2]}$ sends each position of $\lim C$ to the position of $\ema{c}_1 \tri \ema{c}_2$ specified by following just the first three steps of the above instructions.

So every position of $\lim C$ yields a pair of these structures.
We'll call the structure obtained by following the instructions above the \emph{left} structure, and the structure obtained by following the instructions above, but with $\cat{C}_1$ and $\cat{C}_2$ swapped, the \emph{right} structure.
But not every such pair of structures corresponds to a position of $\lim C$; they must satisfy additional conditions, given by morphisms in the image of $C$ that are not depicted in \eqref{eq.lim_projs}.
For instance, the fact that the diagram
\begin{equation}
\begin{tikzcd}
    \lim C \ar[r, "\pi_{[1,2]}"] \ar[dr, "\pi_{[2]}"'] & \ema{c}_1 \tri \ema{c}_2 \ar[d, "\epsilon_1 \tri \ema{c}_2"] \\
    & \ema{c}_2
\end{tikzcd}
\end{equation}
commutes implies that, when following the above instructions, the object $d_2$ chosen for the identity morphism on $c_1$ in the left structure must also be the first object chosen when constructing the right structure.
Similar diagrams render redundant all the objects chosen for identity morphisms when constructing these structures.
So, in the end, we have a one-to-one correspondence between positions of $\lim C$ and pairs of structures that can be constructed by following these instructions:
\begin{quote}
\begin{enumerate}[label=1.\arabic*.]
    \item choose an object $c_1 \in \cat{C}_1$;
    \item for each nonidentity morphism $f_1$ in $\cat{C}_1$ with domain $c_1$:
    \begin{enumerate}[label=2.\arabic*.]
        \item choose an object $d_2 \in \cat{C}_2$;
        \item for each nonidentity morphism $g_2$ in $\cat{C}_2$ with domain $d_2$:
        \begin{enumerate}[label=3.\arabic*.]
            \item choose an object $c_3 \in \cat{C}_1$;
            \item for each nonidentity morphism $f_3$ in $\cat{C}_1$ with domain $c_3$:
            
            $\quad \cdots$
        \end{enumerate}
    \end{enumerate}
\end{enumerate}
\end{quote}
for the left structure, and
\begin{quote}
\begin{enumerate}[label=1.\arabic*.]
    \item choose an object $d_1 \in \cat{C}_2$;
    \item for each nonidentity morphism $g_1$ in $\cat{C}_2$ with domain $d_1$:
    \begin{enumerate}[label=2.\arabic*.]
        \item choose an object $c_2 \in \cat{C}_1$;
        \item for each nonidentity morphism $f_2$ in $\cat{C}_1$ with domain $c_2$:
        \begin{enumerate}[label=3.\arabic*.]
            \item choose an object $d_3 \in \cat{C}_2$;
            \item for each nonidentity morphism $g_3$ in $\cat{C}_2$ with domain $d_3$:
            
            $\quad \cdots$
        \end{enumerate}
    \end{enumerate}
\end{enumerate}
\end{quote}
for the right structure.
These pairs are then the objects of $\cat{C}_1 \times^\sharp \cat{C}_2$.
The morphisms from each object are the sequences of morphisms formed by starting from the top of either set of instructions above and taking the morphisms mentioned in steps $1.2, 2.2, \ldots, n.2$, for some finite $n$, to obtain sequences such as $(), (f_1), (g_1), (f_1, g_2), (g_1, f_2),$ $(f_1, g_2, f_3)$ and $(g_1, f_2, g_3)$.
In particular, the identity morphism is $()$.

We illustrate how to determine the codomain of each nonidentity morphism using an example.
To find the codomain of the morphism $(g_1, f_2)$, obtained from steps 1.2 and 2.2 in the right structure, we need to determine the left and right structures of the codomain.
Its right structure is easy to describe: it is simply the structure obtained by following the remaining instructions, starting from step 3.1, nested under step 2.2 for $f_2$; the recursive nature of these instructions ensures that this is, in fact, a valid right structure.
As for the left structure of the codomain, we can construct it as follows:
\begin{quote}
\begin{enumerate}[label=1.\arabic*.]
    \item choose the object $\cod(f_2) \in \cat{C}_1$;
    \item for each nonidentity morphism $f'_2$ in $\cat{C}_1$ with domain $\cod(f_2)$:
    \begin{enumerate}[label=2.\arabic*.]
        \item if $f_2 \then f'_2$ is not the identity, then follow the steps nested under step 2.2 for $f_2 \then f'_2$ in the instructions above for constructing the original right structure;
        \item otherwise, 
    \end{enumerate}
\end{enumerate}
\end{quote}

%codomains...

% example for (f_1, g_2)

\end{example}

\begin{example}[Products of discrete categories are products of sets]
Given sets $S$ and $T$, consider the corresponding discrete categories $S\yon$ and $T\yon$.
Then by \cref{ex.bin_prod}, the product $S\yon \times^\sharp T\yon$ is the discrete category $(S \times T)\yon$.
\end{example}

\begin{example}[Products of one-object categories are coproducts of monoids]
Given monoids $M$ and $N$, consider the corresponding one-object categories $\yon^M$ and $\yon^N$.
Then by \cref{ex.bin_prod}, the product $\yon^M \times^\sharp \yon^N$ is the one-object category $\yon^{M \ast N}$, where $M \ast N$ is the free product (i.e.\ the coproduct) of the monoids $M$ and $N$.
\end{example}

\begin{example}[Unexpectedly-many objects]\label{ex.unexpectedly_many_objects}
Let $\cat{C}\coloneqq\fbox{$\LMO{A}\To{f}\LMO{B}$}$ be the walking arrow category. By \cref{ex.bin_prod}, the product $\cat{C}\times^\sharp\cat{C}$ has infinitely many objects. Namely, it has an object for every pair of sequences $(s_1, s_2)$, finite or infinite, that one can write in the alphabet $\{A_1, A_2, B_1, B_2\}$ subject to the following conditions:
\begin{itemize}
    \item the sequence $s_1$ starts with either $A_1$ or $B_1$;
    \item the sequence $s_2$ starts with either $A_2$ or $B_2$; and
	\item in either sequence:
	\begin{itemize}
	    \item after $A_1$ comes either $A_2$ or $B_2$;
	    \item after $A_2$ comes either $B_1$ or $A_1$; and
	    \item after $B_1$ or $B_2$, the sequence stops.
	\end{itemize}
\end{itemize}
For example, here is an object in $\fbox{$\LMO{A}\To{f}\LMO{B}$}\times^\sharp\fbox{$\LMO{A}\To{f}\LMO{B}$}$:
\[
((A_1,A_2,A_1,A_2,\ldots), (A_2,A_1,A_2,A_1,B_2))
\]
\end{example}

% \begin{exercise}
% Again let $\cat{C}\coloneqq\fbox{$\LMO{A}\To{f}\LMO{B}$}$.
% \begin{enumerate}
% 	\item Use the official description of products, given in \cref{prop.sharp_products}, to justify the informal description of the objects in $\cat{C}\times^\sharp\cat{C}$, as given in \cref{ex.unexpectedly_many_objects}.
% 	\item Describe the morphisms in $\cat{C}\times^\sharp\cat{C}$.
% \qedhere
% \end{enumerate}
% \end{exercise}

\begin{example}
Let $I$ be a set and $I\yon$ the associated discrete category, and let $(M,e,*)$ be a monoid and $\yon^M$ the associated one-object category.
Then by \cref{ex.bin_prod}, the product $I\yon \times^\sharp \yon^M$ is the category $I^M\yon^M$.
Its objects are functions $i \colon M \to I$, while its morphisms have the form $i \To{m} (m' \mapsto i(m * m'))$ for each $i \colon M \to I$ and $m \in M$.

Given $i \colon M \to I$, its identity morphism is the morphism $i \To{e} i$ corresponding to $e \in M$; given $m, n \in M$, the composite of the morphisms $i \To{m} (m' \mapsto i(m * m')) \To{n} (m' \mapsto i(m * n * m'))$ is the morphism $i \To{m * n} (m' \mapsto i(m * n * m'))$.

% The product $I\yon\times^\sharp\yon^R$ can be thought of as an \emph{event-based system} with states in $I$. Namely, an object in it can be identified as a sequence consisting of an element in $I$, followed by a nonzero amount of time, followed by another element in $I$, followed by another nonzero amount of time, etc.
\end{example}

% \begin{example}[Event-based systems]
% Let $R\coloneqq\{r\in\rr\mid r\geq 0\}$ denote the nonnegative real numbers, and let $\yon^R$ be the associated monoid (under addition). Let $I$ be a set and $I\yon$ the associated discrete category.

% % The product $I\yon\times^\sharp\yon^R$ can be thought of as an \emph{event-based system} with states in $I$.
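
Returning to \cref{ex.unexpectedly_many_objects}, the finite sequences appearing there can be enumerated mechanically. Here is a minimal Python sketch (our own illustration; the function name is made up).
\begin{verbatim}
# valid finite sequences over {A1, A2, B1, B2}: after A1 comes A2
# or B2, after A2 comes A1 or B1, and a sequence stops at B1/B2
def finite_seqs(starts, max_len):
    complete, frontier = [], [[s] for s in starts]
    while frontier:
        seq = frontier.pop()
        if seq[-1] in ("B1", "B2"):
            complete.append(seq)
        elif len(seq) < max_len:
            nxt = ("A2", "B2") if seq[-1] == "A1" else ("A1", "B1")
            frontier += [seq + [x] for x in nxt]
    return complete

firsts = finite_seqs(["A1", "B1"], 8)   # candidate first coordinates
seconds = finite_seqs(["A2", "B2"], 8)  # candidate second coordinates
print(len(firsts), len(seconds))        # 8 8: one for each length
# objects are pairs of such sequences, with infinite ones (such as
# (A1, A2, A1, A2, ...)) allowed as well
\end{verbatim}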

% Namely, an object in it can be identified as a sequence consisting of an element in $I$, followed by a nonzero amount of time, followed by another element in $I$, followed by another nonzero amount of time, etc.
% \end{example}

\begin{proof}[Proof of \cref{prop.sharp_products}]
We first show that $\lim C$ actually is a comonoid in $\poly$ by giving its counit and comultiplication.
The counit $\epsilon \colon \lim C \to \yon$ is given by the projection $\pi_{[]} \colon \lim C \to C[]$, since $C[] \iso \yon$.

To construct the comultiplication, we observe that $\Delta_I\op$ is connected (every object has a map to $[]$ given by a monoidal product of counits), so by \cref{thm.connected_limits}, $\tri$ preserves $\Delta_I\op$-shaped limits.
As $C$ is monoidal, it follows that
\[
    (\lim C) \tri (\lim C) \iso \lim_{\ell \in \Delta_I\op} \lim_{\ell' \in \Delta_I\op} C(\ell \tri \ell').
\]
So specifying a map to $(\lim C) \tri (\lim C)$ amounts to specifying a map to $C(\ell \tri \ell')$ for every $\ell, \ell' \in \Delta_I\op$ such that the appropriate diagrams commute.
In particular, we can construct the comultiplication $\delta \colon \lim C \to (\lim C) \tri (\lim C)$ by specifying the projection $\pi_{\ell \tri \ell'} \colon \lim C \to C(\ell \tri \ell')$ for each $\ell, \ell' \in \Delta_I\op$.
It is routine to verify that the appropriate diagrams commute.

(Verify comonoid laws)

(Give projections; check that these are comonoid morphisms)

(Verify universal property of product)
\end{proof}

%-------- Section --------%
\section{Some math about $\smcat^\sharp$}
We refer to morphisms between polynomial comonoids as cofunctors, again eliding the difference between comonoids in $\poly$ and categories.


\begin{proposition}
The coproduct of polynomial comonoids agrees with the coproduct of categories. In particular, the initial comonoid is $0$.
\end{proposition}
\begin{proof}
We refer the claim about $0$ to \cref{exc.0_initial_com}.

Let $\cat{C}_1$ and $\cat{C}_2$ be categories and $\ema{c}_1,\ema{c}_2$ their emanation polynomials, i.e.\ the carriers of the corresponding comonoids $\com{C}_1$ and $\com{C}_2$. We first notice that $\ema{c}\coloneqq\ema{c}_1+\ema{c}_2$ is the carrier for the comonoid $\com{C}$ corresponding to the sum category $\cat{C}\coloneqq\cat{C}_1+\cat{C}_2$. Indeed, the object-set of the sum is given by the sum of the object-sets
\[
	\Ob(\cat{C}_1+\cat{C}_2)
  \cong
  \Ob(\cat{C}_1)+\Ob(\cat{C}_2),
\]
and a morphism in $\cat{C}_1+\cat{C}_2$ emanating from any such object is just a morphism in whichever of the categories $\cat{C}_1$ or $\cat{C}_2$ it is from. 

It remains to show that $\com{C}$ is the coproduct of $\com{C}_1$ and $\com{C}_2$ in $\smcat^\sharp$. Suppose given a comonoid $\com{D}$ and comonoid homomorphisms (cofunctors) $f_1\colon\com{C}_1\cof\com{D}$ and $f_2\colon\com{C}_2\cof\com{D}$. Then for any object $c$ of $\com{C}$ we have an associated object $f(c)\in\com{D}$, given either by $f(c)\coloneqq f_1(c)$ or by $f(c)\coloneqq f_2(c)$ depending on whether $c\in\com{C}_1$ or $c\in\com{C}_2$. For any morphism $m$ emanating from $f(c)$ we have a morphism $f^\sharp(m)$ emanating from $c$. It is easy to check that the cofunctor laws hold for $f$.
Uniqueness of $f$ given $f_1,f_2$ is also straightforward.\n\\end{proof}\n\n\\begin{exercise}\\label{exc.0_initial_com}\n\\begin{enumerate}\n\t\\item Show that $0$ is a comonoid.\n\t\\item Show that $0$ is initial as a comonoid.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\nIf $\\com{C}=(\\ema{c},\\epsilon,\\delta)$ is a category, show there is an induced category structure on the polynomial $\\2\\ema{c}$.\n\\end{exercise}\n\n\\begin{exercise}\nCheck that the terminal comonoid is $\\yon$.\n\\end{exercise}\n\n\n\\begin{proposition}[Porst]\nThe forgetful functor $\\Cat{Comon}(\\poly) \\to \\poly$ is comonadic.\n\\end{proposition}\n\\begin{proof}\nThe fact that a forgetful functor $\\Cat{Comon}(\\poly) \\to \\poly$ is comonadic if it has a right adjoint follows from Beck's monadicity theorem via a straightforward generalization of an argument given by Par{\\'e} in \\cite[pp.~138-9]{pare1969absolute}, as pointed out by Porst in \\cite[Fact~3.1]{porst2019colimits}.\n\\end{proof}\n\n\\begin{corollary}\nThe category $\\smcat^\\sharp = \\Cat{Comon}(\\poly)$ has all small colimits.\nThey are created by the underlying polynomial functor $\\Cat{Comon}(\\poly) \\to \\poly$.\n\\end{corollary}\n\\begin{proof}\nIt is well known that a comonadic functor creates all colimits that exist in its codomain \\cite{nlab:created-limit}.\nBy \\cref{thm.poly_colimits}, the category $\\poly$ has all small colimits.\n\\end{proof}\n\n\\begin{proposition}\nLet $\\Cat{Comon}(\\poly)_{\\text{rep}}$ be the full subcategory of comonoids $(c,\\epsilon,\\delta)$ in $\\poly$ for which the carrier $c=\\yon^M$ is representable. Then there is an isomorphism of categories\n\\[\n\\Cat{Comon}(\\poly)_{\\text{rep}}\\cong\\Cat{Mon}\\op\n\\]\nwhere $\\Cat{Mon}$ is the category of monoids.\n\\end{proposition}\n\\begin{proof}\nLet $\\cat{C}$ be a category. It has only one object iff its emanation polynomial $\\ema{c}$ has only one position, i.e.\\ $\\ema{c}\\cong\\yon^M$ for some $M\\in\\smset$, namely where $M$ is the set of morphisms in $\\cat{C}$. It remains to show that cofunctors between monoids are dual---opposite---to morphisms between monoids.\n\nA cofunctor $f\\colon\\yon^M\\to\\yon^N$ involves a single function $f^\\sharp\\colon N\\to M$ that must satisfy a law coming from unitality and one coming from composition, as in \\cref{def.morphism_comonoids}. The result can now be checked by hand, or seen formally as follows. Each object in the two diagrams \\eqref{eqn.morphism_comonoids} is representable by \\cref{exc.composites_of_specials}. The Yoneda embedding $\\smset\\op\\to\\poly$ is fully faithful, so these two diagrams are equivalent to the unit and composition diagrams for monoid homomorphisms.\n\\end{proof}\n\n\\begin{exercise}\\label{exc.lin_comon_set}\nLet $\\Cat{Comon}(\\poly)_{\\text{lin}}$ be the full subcategory of comonoids $(c,\\epsilon,\\delta)$ in $\\poly$ for which the carrier $c=M\\yon$ is linear. 
Show that there is an isomorphism of categories
\[
\Cat{Comon}(\poly)_{\text{lin}}\cong\smset.
\qedhere
\]
\end{exercise}

\begin{proposition}
The inclusion of linear comonoids into all comonoids has a left adjoint
\[
\adj{\Cat{Comon}(\poly)}{(\ema{c}\tri\1)\yon}{A\yon}{\Cat{Comon}(\poly)_{\text{lin}}}
\]
where the functors are named by where they send a comonoid $(\ema{c},\epsilon,\delta)$ and a linear comonoid $A\yon$, respectively.
\end{proposition}
\begin{proof}
We need to show that for any comonoid $(\ema{c},\epsilon,\delta)$ and set $A$, we have a natural isomorphism
\[
  \smcat^\sharp(\ema{c},A\yon)
  \cong^?
  \smcat^\sharp((\ema{c}\tri\1)\yon,A\yon).
\]
But every morphism in $A\yon$ is an identity, and every cofunctor must pass identities back to identities, so a cofunctor into $A\yon$ carries no more data than a function from its domain's set of objects to $A$. Since $\ema{c}\tri\1\cong\ema{c}(\1)$, both sides can be identified with the set of functions $\ema{c}(\1)\to A$. 
\end{proof}

A cofunctor (map of polynomial comonoids) is called \emph{cartesian} if the underlying map $f\colon \ema{c}\to \ema{d}$ of polynomials is cartesian (i.e.\ for each position $i\in \ema{c}(\1)$, the map $f^\sharp_i\colon\ema{d}[f_1(i)]\to\ema{c}[i]$ is an isomorphism).

\begin{proposition}\label{prop.factor_cofunctor}
Every cofunctor $f\colon\cat{C}\cof\cat{D}$ factors as a vertical morphism followed by a cartesian morphism
\[
\cat{C}\overset{\text{vert}}{\cof}\cat{C}'\overset{\text{cart}}{\cof}\cat{D}.
\]
\end{proposition}
\begin{proof}
A cofunctor $\cat{C}\cof\cat{D}$ is a map of polynomials $\ema{c}\to\ema{d}$ satisfying some properties, and any map of polynomials $f\colon\ema{c}\to\ema{d}$ can be factored as a vertical morphism followed by a cartesian morphism
\[
	\ema{c}\To{g}\ema{c'}\To{h}\ema{d}.
\]
For simplicity, assume $g_1\colon\ema{c}(\1)\to\ema{c}'(\1)$ is identity (rather than merely isomorphism) on positions and similarly that for each $i\in\ema{c}(\1)$ the map $h^\sharp_i\colon\ema{d}[h_1(i)]\to\ema{c}'[i]$ is identity (rather than merely isomorphism) on directions. 

It suffices to show that the intermediate object $\ema{c'}$ can be endowed with the structure of a category such that $g$ and $h$ are cofunctors. Given an object $i\in\ema{c}'(\1)$, assign its identity to be the identity on $h_1(i)=f(i)$; then both $g$ and $h$ preserve identities because $f$ does. Given an emanating morphism $m\in\ema{c}'[i]=\ema{d}[f(i)]$, assign its codomain to be $\cod(m)\coloneqq\cod(f^\sharp_i(m))$, and given an emanating morphism $m'\in\ema{c}'[\cod(m)]$, assign the composite $m\then m'$ in $\ema{c}'$ to be $m\then m'$ in $\ema{d}$. In \cref{exc.factor_cofunctor} we will check that with these definitions, $\ema{c}'$ is a category and both $g$ and $h$ are cofunctors.
\end{proof}

\begin{exercise}\label{exc.factor_cofunctor}
We will complete the proof of \cref{prop.factor_cofunctor}, using the same notation.
\begin{enumerate}
	\item Show that composition is associative and unital in $\ema{c}'$.
	\item Show that $g$ preserves codomains.
	\item Show that $g$ preserves compositions.
	\item Show that $h$ preserves codomains.
	\item Show that $h$ preserves compositions.
\qedhere
\end{enumerate}
\end{exercise}


\begin{proposition}
The wide subcategory of cartesian maps in $\smcat^\sharp$ is isomorphic to the wide subcategory of discrete opfibrations in $\smcat$.
\end{proposition}
\begin{proof}
Suppose that $\cat{C}$ and $\cat{D}$ are categories.
Both a functor and a cofunctor between them involve a map on objects, say $f\colon\Ob(\cat{C})\to\Ob(\cat{D})$. For any object $c\in\Ob(\cat{C})$, a functor gives a function, say $f_\sharp\colon\cat{C}[c]\to\cat{D}[f(c)]$, whereas a cofunctor gives a function $f^\sharp\colon\cat{D}[f(c)]\to\cat{C}[c]$. The cofunctor is cartesian iff $f^\sharp$ is an iso, and the functor is a discrete opfibration iff $f_\sharp$ is an iso. We thus transform our functor into a cofunctor (or vice versa) by taking the inverse function on morphisms. It is easy to check that this inverse appropriately preserves identities, codomains, and compositions.
\end{proof}

\begin{proposition}\label{prop.com_vert_cat_boo}
The wide subcategory of vertical maps in $\smcat^\sharp$ is isomorphic to the opposite of the wide subcategory of bijective-on-objects maps in $\smcat$:
\[
\smcat^\sharp_{\text{vert}}\cong(\smcat_{\text{boo}})\op.
\]
\end{proposition}
\begin{proof}
Let $\cat{C}$ and $\cat{D}$ be categories. Given a vertical cofunctor $F\colon\cat{C}\cof\cat{D}$, we have a bijection $F_1\colon\Ob(\cat{C})\to\Ob(\cat{D})$; let $G_1$ be its inverse. We define a functor $G\colon\cat{D}\to\cat{C}$ on objects by $G_1$ and, for any $f\colon d\to d'$ in $\cat{D}$ we define $G(f)\coloneqq F^\sharp_{G_1(d)}(f)$. It has the correct codomain: $\cod(G(f))=G_1(F_1(\cod(G(f))))=G_1(\cod(f))=G_1(d')$. And it sends identities and compositions to identities and compositions by the laws of cofunctors.

The construction of a vertical cofunctor from a bijective-on-objects functor is analogous, and it is easy to check that the two constructions are inverses.
\end{proof}

\begin{exercise}
Let $S$ be a set and consider the state category $\cat{S}\coloneqq(S\yon^S,\epsilon,\delta)$. Use \cref{prop.com_vert_cat_boo} to show that categories $\cat{C}$ equipped with a vertical cofunctor $\cat{S}\cof\cat{C}$ can be identified with categories whose set of objects is $S$.
\end{exercise}

\begin{exercise}
Consider the categories $\cat{C}\coloneqq\fbox{$\bullet\tto\bullet$}$ and $\cat{D}\coloneqq\fbox{$\bullet\to\bullet$}$. There is a unique bijective-on-objects (boo) functor $F\colon\cat{C}\to\cat{D}$ and two boo functors $G_1,G_2\colon\cat{D}\to\cat{C}$.
\begin{enumerate}
	\item Write down the morphism $\ema{d}\to\ema{c}$ of emanation polynomials underlying $F$.
	\item Write down the morphism $\ema{c}\to\ema{d}$ of emanation polynomials underlying either $G_1$ or $G_2$.
\qedhere
\end{enumerate}
\end{exercise}

%---- Subsection ----%
\subsection{Dirichlet monoidal product on $\smcat^\sharp$}

The usual product of categories gives a monoidal operation on comonoids too, even though it is not a product in $\smcat^\sharp$. The carrier polynomial of the product is the $\otimes$-product of the carrier polynomials.


\begin{proposition}\label{prop.dirichlet_on_catsharp}
The Dirichlet monoidal product $(\yon,\otimes)$ on $\poly$ extends to a monoidal structure $(\yon,\otimes)$ on $\smcat^\sharp$, such that the functor
$U\colon\smcat^\sharp\to\poly$
is strong monoidal with respect to $\otimes$. The Dirichlet product of two categories is their product in $\smcat$.
\end{proposition}
\begin{proof}
Let $\cat{C},\cat{D}\in\smcat^\sharp$ be categories with emanation polynomials $\ema{c},\ema{d}\in\poly$. The emanation polynomial of $\cat{C}\otimes\cat{D}$ is defined to be $\ema{c}\otimes\ema{d}$.
A position in it is a pair $(c,d)$ of objects, one from $\\cat{C}$ and one from $\\cat{D}$; a direction there is a pair $(f,g)$ of a morphism emanating from $c$ and one emanating from $d$. \n\nWe define $\\epsilon_{\\cat{C}\\otimes\\cat{D}}\\colon\\ema{c}\\otimes\\ema{d}\\to\\yon$ as\n\\[\n\\ema{c}\\otimes\\ema{d}\\To{\\epsilon_{\\ema{C}}\\otimes\\epsilon_{\\ema{D}}}\\yon\\otimes\\yon\\cong\\yon.\n\\]\nThis says that the identity at $(c,d)$ is the pair of identities.\n\nWe define $\\delta_{\\cat{C}\\otimes\\cat{D}}\\colon(\\ema{c}\\otimes\\ema{d})\\to(\\ema{c}\\otimes\\ema{d})\\tri(\\ema{c}\\otimes\\ema{d})$ using the duoidal property:\n\\[\n\\ema{c}\\otimes\\ema{d}\\To{\\delta_{\\ema{c}}\\otimes\\delta_{\\ema{d}}}(\\ema{c}\\tri\\ema{c})\\otimes(\\ema{d}\\tri\\ema{d})\\to(\\ema{c}\\otimes\\ema{d})\\tri(\\ema{c}\\otimes\\ema{d}).\n\\]\nOne can check that this says that codomains and composition are defined coordinate-wise, and that $(\\ema{c}\\otimes\\ema{d},\\epsilon_{\\ema{c}\\otimes\\ema{d}},\\delta_{\\ema{c}\\otimes\\ema{d}})$ forms a comonoid. One can also check that this is functorial in $\\cat{C},\\cat{D}\\in\\smcat^\\sharp$. See \\cref{exc.dirichlet_on_catsharp}.\n\\end{proof}\n\n\\begin{exercise}\\label{exc.dirichlet_on_catsharp}\nWe complete the proof of \\cref{prop.dirichlet_on_catsharp}.\n\\begin{enumerate}\n\t\\item Show that $(\\ema{c}\\otimes\\ema{d},\\epsilon_{\\ema{c}\\otimes\\ema{d}},\\delta_{\\ema{c}\\otimes\\ema{d}})$, as described in \\cref{prop.dirichlet_on_catsharp}, forms a comonoid.\n\t\\item Check that the construction $(\\cat{C},\\cat{D})\\mapsto\\cat{C}\\otimes\\cat{D}$ is functorial in $\\cat{C},\\cat{D}\\in\\smcat^\\sharp$.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n%\n%As everyone reading this is probably aware, strict monoidal structure on a category $\\cat{C}$ is a functor $1\\to\\cat{C}$ and a functor $\\cat{C}\\times\\cat{C}\\to\\cat{C}$ that together satisfy the monoid equations. Here the $\\times$ takes place in $\\smcat$. \n%\n%If we replace $(\\smcat,\\1,\\times)$ with $(\\smcat^\\sharp,\\yon,\\otimes)$, then a monoid is a cofunctor $\\intercal\\colon\\yon\\cof\\cat{C}$, which we call the \\emph{terminus}, and a cofunctor $\\curlyvee\\colon\\cat{C}\\otimes\\cat{C}\\cof\\cat{C}$, which we call the \\emph{team-up product} satisfying the monoid equations. Let's call these $\\otimes$-monoids in $\\smcat^\\sharp$. \n%\n%It appears that $\\otimes$-monoids $(\\cat{C},\\intercal,\\curlyvee)$ in $\\smcat^\\sharp$ could be interesting for thinking about the dynamics of something like ``swarms'': collections of dynamical systems that work very well together as a population. Indeed, $\\curlyvee$ takes a pair of objects $c_1,c_2$ and returns a single object $c_1\\curlyvee c_2$, but it takes a morphism $f\\colon c_1\\curlyvee c_2\\to d$ emanating that object to a pair of morphisms $f_1\\colon c_1\\to d_1$ and $f_2\\colon c_2\\to d_2$ with $d_1\\curlyvee d_2=d$.\n%\n%\n%\\begin{example}[Representable and linear $\\otimes$-monoids]\n%Representable polynomials have at most one commutative $\\otimes$-monoid structure by a theorem of Fox. Category (polynomial comonoid) structures on a representable polynomial $\\yon^M$ are just monoid structures on its underlying set $M$. So a commutative monoid $(M,e,*)$ in $\\smset$ can be identified with a $\\otimes$-monoid in $\\smcat^\\sharp$ that has carrier $\\yon^M$.\n%\n%A $\\otimes$-monoidal structure on a linear $M\\yon$ polynomial, on the other hand, can be identified with a monoid structure $(M,e,*)$ on the set $M$. 
But there is a unique comonoid structure on $M\\yon$. So a monoid in $\\smset$ can be identified with a $\\otimes$-monoid in $\\smcat^\\sharp$ having carrier $M\\yon$.\n%\\end{example}\n%\n%\\begin{example}\\label{ex.monaco_cat}\n%Consider the comonoid $\\com{C}=(\\ema{c},0,+)$ corresponding to the category\n%\\[\n%\\fbox{\n%\\begin{tikzcd}[ampersand replacement=\\&]\n%\t\\cdots\\ar[r, \"-1\"']\\&\n%\t\\LMO{n}\\ar[r, \"-1\"']\\&\n%\t\\cdots\\ar[r, \"-1\"']\\&\n%\t\\LMO{2}\\ar[r, \"-1\"']\\&\n%\t\\LMO{1}\\ar[r, \"-1\"']\\&\n%\t\\LMO{0}\n%\\end{tikzcd}\n%}\n%\\]\n%It is carried by the polynomial $\\ema{c}\\coloneqq\\sum_{n\\in\\nn}\\yon^{\\{0,-1,-2\\ldots,-n\\}}\\cong\\yon+\\yon^\\2+\\yon^\\3+\\cdots$, its codomain map is given by $(n,-i)\\mapsto n+(-i)$ and its counit and comultiplication are given by $0$ and addition.\n%\n%This comonoid has a (non-symmetric) $\\otimes$-monoid structure. Namely its terminus is $0$ and its team-up product $\\curlyvee$ on an object $(n_1,n_2)$ is given by\n%\\[\n%\\curlyvee_1(n_1,n_2)\\coloneqq n_1+n_2,\\]\n%and on a morphism $-n_1-n_2\\leq i\\leq 0$ works as follows:\n%\\[\n%\\curlyvee^\\sharp_{n_1+n_2}(i)\\coloneqq\t\\big(\\min(n_1,n_1+n_2+i),\\max(0,n_2+i)\\big)\n%\\]\n%For example, say $n_1=3$ and $n_2=2$, so $\\curlyvee_1(n_1,n_2)=5$. Let's pick $-5\\leq i\\geq 0$. \n%\\[\n%\\begin{array}{c|c}\n%\ti&\\curlyvee^\\sharp(i)\\\\\\hline\n%\t0&(3,2)\\\\\n%\t1&(3,1)\\\\\n%\t2&(3,0)\\\\\n%\t3&(2,0)\\\\\n%\t4&(1,0)\\\\\n%\t5&(0,0)\n%\\end{array}\n%\\]\n%\\end{example}\n%\n%\\begin{exercise}\n%Let $\\com{C}$ be the comonoid from \\cref{ex.monaco_cat}. We need to check that everything works correctly.\n%\\begin{enumerate}\n%\t\\item Check that the terminus $\\intercal\\colon\\yon\\to\\com{C}$ is a cofunctor.\n%\t\\item Check that $\\curlyvee$ preserves identities.\n%\t\\item Check that $\\curlyvee$ preserves codomains and compositions, and hence is a cofunctor.\n%\\qedhere\n%\\end{enumerate}\n%\\end{exercise}\n%\n%\\begin{exercise}\n%Consider the polynomial\n%\\[\n%  c\\coloneqq\n%  \\sum_{n\\in\\nn}\\sum_{\\ell\\colon\\ord{n}\\to\\nn}\n%  \\yon^{\\{\\ell'\\colon\\ord{n}\\to\\nn\\;\\mid\\; \\ell'\\leq\\ell\\}}\n%\\]\n%A position in $c$ is a list of natural numbers, and a direction is a smaller list. \n%\\begin{enumerate}\n%\t\\item Find a comonoid structure on $c$.\n%\t\\item Find a $\\otimes$-monoid structure $(\\intercal,\\curlyvee)$ on the comonoid you just found. (If you can't, start over.)\n%\\qedhere\n%\\end{enumerate}\n%\\end{exercise}\n%\n%\\begin{exercise}\n%Every $\\otimes$-monoid in $\\smcat^\\sharp$ has an underlying category. 
This forgetful functor has a left adjoint: to every category $\cat{C}\in\smcat^\sharp$ we can assign a \emph{free} $\otimes$-monoid.
%\begin{enumerate}
%	\item Make the above claim precise, but don't prove it yet.
%	\item We propose that if the emanation polynomial of $\cat{C}$ is $\ema{c}$ then the free $\otimes$-monoid on it has emanation polynomial $\ema{L_c}$ given by
%	\[
%	\ema{L_c}\coloneqq\sum_{[\ell_1,\ldots,\ell_n]\in\Set{List}(\ema{c}(\1))}\yon^{\ema{c}[\ell_1]\times\cdots\times\ema{c}[\ell_n]}.
%	\]
%	This is supposed to be a comonoid in $\poly$; what should be the identities, codomains, and composition?
%	\item	Propose a terminus $\intercal\colon\yon\to\ema{L_c}$.
%	\item Propose a team-up product $\curlyvee\colon\ema{L_c}\otimes\ema{L_c}\to\ema{L_c}$.
%	\item Does it look promising that these will satisfy the monoid laws?
%	\item Sketch an argument that this construction is left adjoint to the forgetful functor sending a $\otimes$-monoid in $\smcat^\sharp$ to its underlying category.
%\qedhere
%\end{enumerate}
%\end{exercise}
%

\section{Exercise solutions}
\Closesolutionfile{solutions}
{\footnotesize
\input{solution-file6}}

\Opensolutionfile{solutions}[solution-file7]

%------------ Chapter ------------%
\chapter{Cofree polynomial comonoids}\label{sec.cofree}

%-------- Section --------%
\section{Introduction: Cofree comonoids and discrete dynamical systems}

We can now return to dynamical systems. Recall that if $p\in\poly$ is a polynomial, then a dynamical system with interface $p$ consists of a set $S$ and a map of polynomials $f\colon S\yon^S\to p$. We think of positions in $p$ kind of like outputs---others can observe your position---but also as determining the set of inputs---directions---that could be input next.

The way this map $f$ leads to something that seems to ``go on repeatedly'' is that $\ema{s}\coloneqq S\yon^S$ is a comonoid, so we have maps $\ema{s}\To{\delta}\ema{s}\tripow{n}\To{f\tripow{n}}p\tripow{n}$ for all $n$. This says that given any initial position in $S$, we automatically get a position in $p$, and for every direction there, another position in $p$, and for every direction there, another position in $p$, and so on $n$ times. This is the dynamics.

So the above all works because we have a polynomial map $S\yon^S\to p$, where $S\yon^S$ is the underlying polynomial of a polynomial comonad. Since ``underlying polynomial'' is a functor $U\colon\smcat^\sharp\to\poly$, a seasoned category theorist might be tempted to ask the following: is there an adjunction
\[
\poly(U\com{C},p)\cong\smcat^\sharp(\com{C},\cofree{p}),
\]
for some functor $\cofree{}\colon\poly\to\smcat^\sharp$? In fact, there is; we refer to $\cofree{p}$ as the \emph{cofree comonoid} on $p$, or more descriptively, the \emph{category of $p$-trees.}

Cofree comonoids in $\poly$ are beautiful objects, both in their visualizable structure as a category and in the metaphors we can make about them. They allow us to replace the interface of a dynamical system with a category and get access to a rich theory that exists there.
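
Here is a minimal Python sketch of this unrolling (our own toy interface, with made-up position and direction names, not anything from the text): each state emits a position of $p$, each direction at that position passes back to a next state, and we repeat $n$ times.
\begin{verbatim}
out = {0: "ready", 1: "busy"}    # f on positions: S -> p(1)
dirs = {"ready": ["go"],         # the directions of p
        "busy": ["tick", "done"]}
nxt = {                          # f^sharp: a direction at out(s)
    (0, "go"): 1,                # is passed back to a next state
    (1, "tick"): 1,
    (1, "done"): 0,
}

def run(s, inputs):
    """Emit a position, read a direction there, move on; this
    unrolls delta followed by f applied n times, as above."""
    trace = []
    for d in inputs:
        pos = out[s]
        assert d in dirs[pos], "not a direction at this position"
        trace.append((pos, d))
        s = nxt[(s, d)]
    return trace

print(run(0, ["go", "tick", "done"]))
# [('ready', 'go'), ('busy', 'tick'), ('busy', 'done')]
\end{verbatim}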
They allow us to replace the interface of a dynamical system with a category and get access to a rich theory that exists there.\n\n\\begin{theorem}[Cofree comonoid]\\label{thm.cofree}\nThe forgetful functor $\\Cat{Comon}(\\poly)\\to\\poly$ has a right adjoint\n\\[\n  \\adj{\\smcat^\\sharp}{\\ema{c}}{\\cofree{p}}{\\poly}\n\\]\nwhere the functors have been named by where they send $(\\ema{c},\\epsilon,\\delta)\\in\\smcat^\\sharp$ and $p\\in\\poly$ respectively.\n\\end{theorem}\nThis will be proved as \\cref{??}.\n\n%-------- Section --------%\n\\section{Cofree comonoids as trees}\\label{subsec.cofree_tree}\n\n\\begin{definition}[Cofree comonoid]\\label{def.cofree}\nLet $p\\in\\poly$ be a polynomial. The comonoid $\\cofree{p}=(\\tr_p,\\rt,\\fs)$ as in \\cref{thm.cofree} is called the \\emph{cofree comonoid on $p$} or informally the category of \\emph{(possibly infinite) $p$-trees}.\n\nAn object $t\\in \\tr_p(\\1)$ is called a \\emph{(possibly infinite) tree in $p$.} Given such an object $t$ in the category, an emanating morphism $n\\in\\tr_p[t]$ is called a \\emph{path from the root}.\n\\end{definition}\n\nThe terminology of \\cref{def.cofree} alludes to a specific way we like to imagine the comonoid $\\cofree{p}=(\\tr_p,\\rt,\\fs)$, namely in terms of trees. To every polynomial $p$, we will associate a new polynomial $\\tr_p$ whose positions are (possibly infinite) $p$-trees. To choose such a tree we first choose its root to be some position $i\\in p(\\1)$. Then for every direction $d\\in p[i]$ there, we choose another position, and for every direction from each of those we choose another position, and so on indefinitely.\n\nSo a position in $\\tr_p$ is one of these trees. Such a tree may end, namely if none of the top-level positions has any directions, but often it will not end. Given such a tree, say $t$, a direction $d\\in\\tr_p[t]$ there is simply a path from the root to some node in the tree. \n\nWe'll explain the counit $\\rt$ and the comultiplication $\\fs$ after going through an example.\n\n\\begin{example}\\label{ex.imagining_trees}\nLet $p\\coloneqq\\{\\bul[red],\\bul[blue]\\}\\yon^\\2+\\bul[dgreen]\\yon+\\bul[dyellow]$. 
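So $p$ has four positions: the red and blue positions each have two directions, the green position has a single direction, and the yellow position has none. 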
Here are four trees in $p$:\n\\begin{equation}\\label{eqn.some_trees_misc58}\n\\begin{tikzpicture}[trees,\n  level 1/.style={sibling distance=10mm},\n  level 2/.style={sibling distance=5mm},\n  level 3/.style={sibling distance=2.5mm}]\n\t\\node[red] (a) {$\\bullet$}\n\t\tchild {node[red] {$\\bullet$}\n\t\t\tchild {node[dyellow] {$\\bullet$}}\n\t\t\tchild {node[dyellow] {$\\bullet$}}\n\t\t}\n\t\tchild {node[dgreen] {$\\bullet$}\n\t\t\tchild {node[dgreen] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t}\n\t\t}\n\t\t;\n\t\\node[dgreen, right=2 of a] (b) {$\\bullet$}\n\t\tchild {node[blue] {$\\bullet$}\n\t\t\tchild {node[blue] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t\tchild {node[red] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t};\n\t\\node[dyellow, right=2 of b] (c) {$\\bullet$};\n\t\\node[red, right=2 of c] {$\\bullet$}\n\t\tchild {node[blue] {$\\bullet$}\n\t\t\tchild {node[red] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t\tchild {node[red] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t}\n\t\tchild {node[red] {$\\bullet$}\n\t\t\tchild {node[red] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t\tchild {node[red] {$\\bullet$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t};\n\\end{tikzpicture}\n\\end{equation}\nThey all represent elements of $p\\tripow{3}$, but only the third one---the single yellow dot---would count as an element of $\\tr_p$. \n\nIndeed, in \\cref{def.cofree}, when we speak of a (possibly infinite) tree in $p$, we mean a tree for which each node is a position in $p$ with each of its emanating directions filled by another position in $p$, and so on. Since three of the four trees shown in \\eqref{eqn.some_trees_misc58} have open leaves---arrows emanating from the top---these trees are not elements of $\\tr_p$. However, each of them could be extended to an actual element of $\\tr_p$ by continually filling in each open leaf with another position of $p$.\n\nLet's imagine some actual elements of $\\tr_p$:\n\\begin{itemize}\n\t\\item The binary tree that's ``all red all the time.''\n\t\\item The binary tree where odd layers are red and even layers are blue.\n\t\\item The tree where all the nodes are red, except for the right-most branch, which is always green.%\n\t\\footnote{\\;Note that branches are actually unordered, so it's technically wrong to think of it as a line of green up the \\emph{right side}. Instead, it's just a line of green going up the tree forever.}\n\t\\item Any finite tree, where every branch terminates in a yellow dot.\n\t\\item Completely random: for the root, randomly choose either red, blue, green, or yellow, and at every leaf, loop back to the beginning, i.e.\\ randomly choose either red, blue, green, or yellow, etc.\n\\end{itemize}\nThese are the positions in the polynomial $\\tr_p$ that underlies the cofree comonoid on $p$. There are uncountably many positions in $\\cofree{p}$, at least for this particular $p$---in fact, even $\\cofree{\\2\\yon}$ has $2^\\nn$-many positions---but only countably many can be described in any finite language like English. Thus most of the elements of $\\cofree{p}$ cannot be described.\n\\end{example}\n\n\\begin{exercise}\\label{exc.trees_1}\n\\begin{enumerate}\n\t\\item Interpret each of the five tree examples imagined in \\cref{ex.imagining_trees} by drawing three or four layers (your choice) of each.\n\\end{enumerate}\nFor each of the following polynomials $p$, describe the set of trees (positions) in $\\tr_{p}$.\n\\begin{enumerate}[resume]\n\t\\item $p=\\1$. 
(What is the set $\\tr_p(\\1)$ of $p$-trees?)\n\t\\item $p=\\2$.\n\t\\item $p=\\yon$.\n\t\\item $p=\\yon^\\2$.\n\t\\item $p={\\2\\yon}$.\n\t\\item $p={\\yon+\\1}$.\n\t\\item $p={B\\yon^A}$ for some sets $A,B\\in\\smset$.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\nNow that we've explained the underlying polynomial $\\tr_p$ of the cofree comonoid $\\cofree{p}=(\\tr_p,\\rt,\\fs)$, we just need to explain how identities, codomains, and composition work, i.e.\\ to give the counit map $\\rt\\colon\\tr_p\\to\\yon$ and the comultiplication map $\\fs\\colon\\tr_p\\to\\tr_p\\tri\\tr_p$. \n\nAgain, the objects in the category $\\cofree{p}$ are $p$-trees, and a morphism emanating from such a tree $t$ is a path from its root $r$ to some node. The map $\\rt$, applied to $t$, returns $t$'s root $r$, or more precisely the path from $r$ to itself. The map $\\fs$, applied to $t$, first needs to give a codomain (tree) to every path from the root to some other node $n$. It is just the subtree of $t$ whose root is $n$: the tree of all nodes starting at $n$. Now, given a path from the root of that tree (namely $n$) to another node, say $n'$, we need to give a path from $r$ to $n'$; we take it to be the composite of the path $r\\to n$ and the path $n\\to n'$.\n\n\\begin{exercise}\nLet $p\\coloneqq\\{\\bul[red],\\bul[blue]\\}\\yon^\\2+\\bul[dgreen]\\yon+\\bul[dyellow]$ as in \\cref{ex.imagining_trees}.\n\\begin{enumerate}\n\t\\item Choose an object $t\\in \\tr_p$, i.e.\\ a tree in $p$, and draw a finite approximation of it (say four layers).\n\t\\item What is the identity morphism at $t$?\n\t\\item Choose a nonidentity morphism $f$ emanating from $t$ and draw it.\n\t\\item What is the codomain of $f$? Draw a finite approximation of it.\n\t\\item Choose a morphism emanating from the codomain of $f$ and draw it.\n\t\\item What is the composite of your two morphisms? Draw it on $t$.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{example}\nLet's take $p\\coloneqq\\1$. An element in $\\tr_p(\\1)$ is given by choosing an element $i\\in p(\\1)$ and filling each of its directions in $p[i]$ with another element of $p(\\1)$, and so on. But there is only one element of $p(\\1)$ and it has no directions. So $\\tr_\\1$ has only one position, and the only emanating morphism there is the identity. In other words, $\\tr_\\1=\\yon$.\n\nSince $\\yon$ has a unique comonoid structure, we've described the cofree comonoid $\\cofree{\\1}$. It is a single tree consisting of a single node, and the only outgoing morphism is the identity on that node.\n\\end{example}\n\n\\begin{exercise}\nLet $A$ be a set.\n\\begin{enumerate}\n\t\\item What is $\\tr_A$? \n\t\\item How is it given the structure of a category?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{example}\nLet $p\\coloneqq\\yon$. An element in $\\tr_p(\\1)$ is given by choosing an element $i\\in p(\\1)$ and filling each of its directions in $p[i]$ with another element of $p(\\1)$, and so on. There is only one way to do this, i.e.\\ there is only one such tree, namely $t\\coloneqq\\fbox{$\\bullet\\to\\bullet\\to\\bullet\\to\\cdots$}$.\n\nSo $\\tr_p$ has a single position, namely $t$. That position has an emanating morphism for each path out of the root, so it has $\\nn$-many emanating morphisms: one for every length. Hence $\\tr_\\yon=\\yon^\\nn$.\n\nOf course the codomain of each morphism emanating from $t$ is again $t$: that's the only object. The composite of two paths, one of length $m$ and one of length $n$, is the path of length $m+n$. 
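In terms of the structure maps, $\\rt\\colon\\yon^\\nn\\to\\yon$ picks out the length-$0$ path, while $\\fs\\colon\\yon^\\nn\\to\\yon^\\nn\\tri\\yon^\\nn\\cong\\yon^{\\nn\\times\\nn}$ is the map of representables corresponding to addition $\\nn\\times\\nn\\to\\nn$. 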
Hence we see that the category $\\cofree{\\yon}$ is the monoid $(\\nn,0,+)$ considered as a category with one object.\n\\end{example}\n\n\\begin{exercise}\nLet $A$ be a set.\n\\begin{enumerate}\n\t\\item What is $\\tr_{A\\yon}$? \n\t\\item How is it given the structure of a category?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{example}\nLet $p\\coloneqq\\nn\\yon^\\2$. An element of $\\tr_p$ might start like this:\n\\[\n\\begin{tikzpicture}[trees,\n  level 1/.style={sibling distance=10mm},\n  level 2/.style={sibling distance=5mm},\n  level 3/.style={sibling distance=2.5mm}]\n\t\\node {$17$}\n\t\tchild {node {$3$}\n\t\t\tchild {node {$0$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t\tchild {node {$3$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t}\n\t\tchild {node {$1$}\n\t\t\tchild {node {$92$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t\tchild {node {$6$}\n\t\t\t\tchild\n\t\t\t\tchild\n\t\t\t}\n\t\t};\n\\end{tikzpicture}\n\\]\nAny element of $\\tr_p$ goes on forever: it's an infinite binary tree. At each node there is a choice of some natural number, since $\\nn=p(\\1)$ is the position-set of $p$.\n\nSo such trees are the objects of the category $\\cofree{p}=(\\tr_p,\\rt,\\fs)$. A morphism emanating from a tree $t$ is a path from its root to another node, which is an element of $\\Set{List}(\\2)$, i.e.\\ a finite list of choices in $\\2$, which you can think of as a finite sequence of left/right choices. The codomain is whatever tree this path ends up on. \n\nSo the emanation polynomial of $\\cofree{p}$ is\n\\[\\tr_p\\cong\\nn^{\\Set{List}(\\2)}\\yon^{\\Set{List}(\\2)}\\]\nwith identities given by the empty list. An object $t\\in\\tr_p(\\1)$ is a function $t\\colon\\Set{List}(\\2)\\to\\nn$, a way to put a natural number at every node of the infinite binary tree. An emanating morphism $\\ell\\in\\Set{List}(\\2)$ is just a path from the root to another node, and its codomain is the subtree sitting at that node. Formally it is the function $t'\\colon\\Set{List}(\\2)\\to\\nn$ given by $t'(\\ell')\\coloneqq t(\\ell:\\ell')$, where $\\ell:\\ell'$ is the concatenation of these lists. Composition of morphisms is also given by concatenation of the corresponding lists.\n\\end{example}\n\n\\begin{exercise}\nLet $p\\coloneqq B\\yon^A$ for sets $A,B\\in\\smset$.\n\\begin{enumerate}\n\t\\item Describe the objects of the cofree category $\\cofree{p}$.\n\t\\item For a given such object, describe the emanating morphisms.\n\t\\item Describe how to take the codomain of a morphism.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{example}[$\\cofree{A\\yon}$ for linear polynomials]\\label{ex.streams_cofree}\nLet $A\\in\\smset$ be a set. The cofree comonoid $\\cofree{A\\yon}$ on the associated linear polynomial has as emanation polynomial $\\tr_{A\\yon}\\cong (A\\yon)^\\nn$. Its objects are $A$-streams. For each stream $t\\colon \\nn\\to A$, an emanating morphism is just an element $n\\in\\nn$. 
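(Indeed, $(A\\yon)^\\nn\\cong A^\\nn\\yon^\\nn$: a product of polynomials multiplies the position-sets and takes the disjoint union of the direction-sets, so a position here is a stream $\\nn\\to A$ and a direction is a single natural number.) 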
The identity is $0\\in\\nn$, the codomain of $n$ is the composite function $\\nn\\To{+n}\\nn\\To{t}A$, and if we denote it by $n\\colon t\\to (t+n)$ then the composite of morphisms $n,n'$ is $(n+n')\\colon t\\to (t+n+n')$.\n\nWe first saw this category in \\cref{ex.streams_category}.\n\\end{example}\n\n\\begin{exercise}\nLet $p\\coloneqq \\yon+\\1$.\n\\begin{enumerate}\n\t\\item Describe the objects of the cofree category $\\cofree{p}$.\n\t\\item For a given such object, describe the emanating morphisms.\n\t\\item Describe how to take the codomain of a morphism.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\nLet $p\\coloneqq \\{a,b,c,\\ldots,z,\\text{\\textvisiblespace}\\}\\yon+\\1$.\n\\begin{enumerate}\n\t\\item Describe the objects of the cofree category $\\cofree{p}$, and draw one.\n\t\\item For a given such object, describe the emanating morphisms.\n\t\\item Describe how to take the codomain of a morphism.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\nLet $p$ be a polynomial, let $Q\\coloneqq\\{q\\in\\qq\\mid q\\geq 0\\}$ and consider the monoid $\\yon^Q$ of nonnegative rational numbers under addition. Is it true that any cofunctor $\\varphi\\colon\\cofree{p}\\cof\\yon^Q$ is constant, i.e.\\ that it factors as\n\\[\n\\cofree{p}\\cof\\yon\\cof\\yon^Q?\n\\]\n\\begin{solution}\nThe answer is no. Take $p\\coloneqq\\2\\yon$, and consider the object $x\\in\\cofree{p}(\\1)$ given by the stream\n\\[\nx\\coloneqq(2\\,12\\,112\\,1112\\,11112\\,111112\\ldots)\n\\]\n(with spaces only for readability); note that every morphism emanating from $x$ has a different codomain. We need to give $\\varphi^\\sharp_i(q)$ for every $i\\in\\cofree{p}(\\1)$ and $q\\geq 0$. Define\n\\[\n\t\\varphi^\\sharp_i(q)\\coloneqq\n\t\\begin{cases}\n\t\ti&\\tn{ if }i\\neq x \\tn{ or } q=0\\\\\n\t\tx'&\\tn{ if }i=x\\tn{ and } q>0\n\t\\end{cases}\n\\]\nwhere $x'\\coloneqq(12\\,112\\,1112\\,11112\\,111112\\ldots)$. There are three cofunctor conditions to check, namely identity, codomains, and composition. The codomain condition is vacuous since $\\yon^Q$ has one object, and the identity condition holds by construction, because we always have $\\varphi^\\sharp_i(0)=i$. Now take $q_1,q_2\\in Q$; we need to check that\n\\[\\varphi^\\sharp_{\\cod\\varphi^\\sharp_i(q_1)}(q_2)=^?\\varphi^\\sharp_i(q_1+q_2)\\]\nholds. If $i\\neq x$ or $q_1=q_2=0$, then it holds because both sides equal $i$. If $i=x$ and either $q_1>0$ or $q_2>0$, it is easy to check that both sides equal $x'$, so again it holds.\n\\end{solution}\n\\end{exercise}\n\n%---- Subsection ----%\n\\subsection{Decision trees}\n\nWhen you talk about your future, what exactly might you be talking about? In some sense you can make choices that change what will happen to you, but in another sense it's as though for each such choice there is something beyond your control that makes a new situation for you. You're constantly in the position of needing to make a choice, but its results are beyond your control.\n\nThis is very much how positions $t\\in\\cofree{p}$ look. Such a position is a decision tree: at each stage (node), you have an element $i\\in p(\\1)$, which we've been calling a decision. It has $p[i]$-many options, each of which, say $d\\in p[i]$, results in a new node $\\cod(d)$ of the tree.\n\nSo a position $t$ is like a future: it is a current decision, and for every option there, a new decision tree. It's all the decisions you could possibly make, and for each actual choice, it's a new future. 
A direction at $t$ is just a choice of finite path through the tree: a sequence of choices. \n\n\\begin{exercise}\nIf someone says that they understand a future to be a decision tree $t\\in\\cofree{p}$, explain in your own words how they're thinking about the term ``future.'' How does it agree or disagree with your own intuition about what a ``future'' is?\n\\end{exercise}\n\n\\begin{exercise}\nLet $G$ be a finite directed graph, and let $\\Cat{Fr}(G)$ be the associated free category. \n\\begin{enumerate}\n\t\\item Construct a cofunctor $\\Cat{Fr}(G)\\cof\\cofree{\\1+\\yon+\\yon^\\2+\\yon^\\3+\\cdots}$.\n\t\\item Would you say it associates to each node in $G$ its ``future'' decision tree?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n%-------- Section --------%\n\\section{Formal construction of $\\cofree{p}$}\n\nWe will sketch how one can formally construct $\\cofree{p}$ from $p$. The first step is called copointing, and it's pretty easy: just multiply $p$ by $\\yon$. It adds a kind of ``default'' direction to each position in $p$.\n\n%---- Subsection ----%\n\\subsection{Copointing}\n\n\\begin{definition}[Copointed polynomial]\nA \\emph{copointed polynomial} is a pair $(p,\\epsilon)$, where $p\\in\\poly$ is a polynomial and $\\epsilon\\colon p\\to\\yon$ is a morphism in $\\poly$.\n\nA \\emph{morphism} of copointed polynomials $f\\colon (p,\\epsilon)\\to(p',\\epsilon')$ is a morphism $f\\colon p\\to p'$ such that $\\epsilon=f\\then\\epsilon'$.\n\\end{definition}\n\nComonoids in $\\poly$ are triples $(p,\\epsilon,\\delta)$, with $(p,\\epsilon)$ a copointed polynomial, so there are forgetful functors\n\\[\n\\Cat{Comon}(\\poly)\\to\\Cat{Cpt}(\\poly)\\to\\poly.\n\\]\nWe want to find the right adjoint to the composite---that's the functor $\\cofree{-}\\colon\\poly\\to\\Cat{Comon}(\\poly)$---and we will obtain it in two steps. \n\n\\begin{proposition}\\label{prop.copointing}\nFor any polynomial $p$, the polynomial $p\\yon$ is naturally copointed by the projection to $\\yon$, and the functor sending $p\\mapsto p\\yon$ is right adjoint to the forgetful functor\n\\[\n\\adj{\\Cat{Cpt}(\\poly)}{p\\yon}{q}{\\poly},\n\\]\nwhere the functors are named by where they send $p\\in\\poly$ and $(q,\\epsilon)\\in\\Cat{Cpt}(\\poly)$.\n\\end{proposition}\n\\begin{proof}\nClearly the product $p\\yon\\cong p\\times\\yon$ is copointed by the projection map, call it $\\pi\\colon p\\yon\\to\\yon$, and the map $p\\mapsto p\\yon$ is functorial in $p$. For any copointed polynomial $q\\To{\\epsilon}\\yon$, there is an obvious bijection between morphisms of polynomials $q\\to p$ and commutative triangles\n\\[\n\\begin{tikzcd}[column sep=small]\n\tq\\ar[dr, \"\\epsilon\"']\\ar[rr]&&\n\tp\\yon\\ar[dl, \"\\pi\"]\\\\&\n\t\\yon\n\\end{tikzcd}\n\\]\nnatural in both $q$ and $p$. This completes the proof.\n\\end{proof}\n\n\\begin{exercise}\nShow that the copointing functor is essentially surjective. That is, every polynomial $p$ equipped with a map $\\epsilon\\colon p\\to\\yon$ is isomorphic to one of the form $p'\\yon$ (equipped with the projection $p'\\yon\\to\\yon$).\n\\end{exercise}\n\nThe reader might not remember any sort of copointing showing up in the tree description of $\\cofree{p}=(\\tr_p,\\rt,\\fs)$. Indeed, it was hidden in the fact that we allowed for trivial paths in the tree (e.g.\\ the path from the root to itself). But we'll get to that.\n\nThe copointing $p\\mapsto p\\yon$ just adds an extra direction to each position; we can denote this extra direction with an $=$, as we did in \\cref{ex.walking_arrow}. 
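In formulas: writing $p\\cong\\sum_{i\\in p(\\1)}\\yon^{p[i]}$, we have\n\\[\np\\yon\\cong\\sum_{i\\in p(\\1)}\\yon^{p[i]+\\1},\n\\]\nwhere $p[i]+\\1$ denotes the disjoint union of $p[i]$ with a one-element set: the positions are unchanged, and each one gains a single new direction. 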
So for example if $p=\\yon^\\3+\\yon$, drawn on the left, then $p\\yon\\cong\\yon^\\4+\\yon^\\2$ can be drawn as on the right:\n\\[\n\\begin{tikzpicture}\n\t\\node[draw, "$p$" below] (p1) {\n\t\\begin{tikzpicture}[trees, sibling distance=4mm]\n    \\node (1) {$\\bullet$} \n      child \n      child \n      child;\n    \\node[right=.75 of 1] (2) {$\\bullet$} \n      child;\n  \\end{tikzpicture}\n  };\n\t\\node[draw, "$p\\yon$" below] (p2) [right=of p1] {\n\t\\begin{tikzpicture}[trees, sibling distance=3mm]\n    \\node (1) {$\\bullet$} \n      child {\\idchild}\n      child \n      child \n      child;\n    \\node[right=.75 of 1] (2) {$\\bullet$} \n      child {\\idchild}\n      child;\n  \\end{tikzpicture}\n\t};\n\\end{tikzpicture}\n\\]\nIt just adds a default direction to each position. A copointed map from $(q,\\epsilon)$ to $(p\\yon,\\pi)$ must pass the default direction back to the default direction in $q$, but lets the other directions (those coming from $p$) go wherever they want.\n\n\\begin{example}[Slowing down dynamical systems]\nGiven a dynamical system $f\\colon S\\yon^S\\to p$, we automatically get a map $S\\yon^S\\to p\\yon$ to the cofree copointing. We called this ``adding a pause button'' in \\cref{ex.pause}. Thus we can take any dynamical system and replace it with one whose interface is copointed.\n\nWe can use a copointed interface to slow down a dynamical system, in a kind of inverse to how we sped up dynamical systems in \\eqref{eqn.speedup}. There we took a dynamical system $f$ with interface $p$ and produced one with interface $p\\tripow{k}$. Here we will take one with interface $p\\tripow{k}$ and produce one with interface $p$.\n\nTo do this, we need $p$ to be copointed, i.e.\\ we need to have in hand a map $\\epsilon\\colon p\\to\\yon$, and as we saw above, we can always assume that. Now for any $k\\in\\nn$ we have $k$-many maps $p\\tripow{k}\\to p$. For example, if $k=3$, we have\n\\[\n\\{\\epsilon\\tri\\epsilon\\tri p,\\epsilon\\tri p\\tri\\epsilon,p\\tri\\epsilon\\tri\\epsilon\\}\\ss\\poly(p\\tripow{3},p)\n\\]\nSo given a dynamical system $S\\yon^S\\to p\\tripow{k}$, which outputs as its position a whole $k$-fold strategy at one time and which takes as input sequences of $k$-many inputs, we can feed it one input and $k-1$ pauses. This is what you get when you compose $S\\yon^S\\to p\\tripow{k}\\to p$.\n\nGiven a dynamical system $f\\colon S\\yon^S\\to p$, where $p$ is copointed and $f$ preserves the copoint, we could speed it up as before to get $S\\yon^S\\to p\\tripow{k}$ and then slow it down to get $S\\yon^S\\to p$, and we get back $f$. So slowing down is a retract of speeding up in this sense.\n\\end{example}\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Show that there is a monoidal structure $(\\yon,\\otimes)$ on $\\Cat{Cpt}(\\poly)$ such that the forgetful functor $U\\colon\\Cat{Cpt}(\\poly)\\to\\poly$ is strong monoidal.\n\t\\item Show that this monoidal structure is closed, i.e.\\ that there is an internal hom $[-,-]$ on $\\Cat{Cpt}(\\poly)$. Hint: you should have $U([p\\yon,q\\yon]_{\\Cat{Cpt}})\\cong[p\\yon,q]_{\\poly}\\yon$.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n%---- Subsection ----%\n\\subsection{Constructing the cofree comonoid}\n\nIt remains to show that we can functorially take any copointed polynomial $(q,\\epsilon)$ and return a comonoid, and that this construction is right adjoint to the forgetful functor. From the description in \\cref{subsec.cofree_tree}, we know the cofree comonoid is supposed to have something to do with infinite trees. 
And we know that the height-$n$ $p$-trees are exactly the positions of $p\\tripow{n}$. So we might think we can somehow take a limit of these height-$n$ trees for various $n$. \n\nThe problem is that there are no obvious maps between $p\\tripow{n}$ and $p\\tripow{n+1}$. Luckily, the copointing fixes that problem. Given $\\epsilon\\colon q\\to\\yon$, we have two maps $q\\tri q\\tto q$, namely $q\\tri\\epsilon$ and $\\epsilon\\tri q$.\n\n\\begin{example}\nSuppose $q=\\{A\\}\\yon^{\\{i_A,f\\}}+\\{B\\}\\yon^{\\{i_B\\}}$ with copointing $\\epsilon$ selecting $i_A$ and $i_B$:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, "$q\\coloneqq$" left] {\n\t\\begin{tikzpicture}[trees, sibling distance=5mm]\n    \\node["\\tiny $A$" below, red] (1) {$\\bullet$} \n      child  {coordinate (iA) \\idchild}\n      child {coordinate (f)};\n    \\node[right=.8 of 1,"\\tiny $B$" below, blue] (2) {$\\bullet$} \n      child  {coordinate (iB) \\idchild};\n    \\node[below left=0 of iA, font=\\tiny] {$i_A$};\n    \\node[below left=0 of iB, font=\\tiny] {$i_B$};\n    \\node[below right=0 of f, font=\\tiny] {$f$};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nThen $q\\tri q$ looks as follows:\n\\[\n\\begin{tikzpicture}[rounded corners]\n\t\\node (p1) [draw, "$q\\tri q=$" left] {\n\t\\begin{tikzpicture}[trees,\n\t  level 1/.style={sibling distance=5mm},\n  \tlevel 2/.style={sibling distance=2.5mm}]\n    \\node[red] (1) {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t}\n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t};\n    \\node[right=1 of 1, red] (2) {$\\bullet$} \n      child  {\n        node [red]{$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t};\n    \\node[right=1 of 2, red] (3) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\t\\idchild\n\t\t\t}\n      child  {\n        node [red] {$\\bullet$} \n \t\t    child {\\idchild}\n      \tchild {}\n\t\t\t};\n    \\node[right=1 of 3, red] (4) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\\idchild\n\t\t\t}\n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t};\n    \\node[right=.8 of 4, blue] (5) {$\\bullet$} \n      child  {\n        node [red] {$\\bullet$} \n \t\t    child  {\\idchild}\n      \tchild {}\n\t\t\t\\idchild\n\t\t\t};\n    \\node[right=.6 of 5, blue] (6) {$\\bullet$} \n      child {node [blue] {$\\bullet$} \n      \tchild  {\\idchild}\n\t\t\t\\idchild\n\t\t\t};\n  \\end{tikzpicture}\n  };\n\\end{tikzpicture}\n\\]\nHow can we picture the maps $(q\\tri\\epsilon),(\\epsilon\\tri q)\\colon q\\tri q\\to q$?\n\nThe map $q\\tri\\epsilon$ takes each position of $q\\tri q$ to whatever is on the bottom layer: it takes the first four to $A$ and the last two to $B$. It passes back directions using the defaults ($i_A$ and $i_B$) on the top layer.\n\nThe map $\\epsilon\\tri q$ uses the defaults on the bottom layer instead. Every position in $q\\tri q$ has a default direction on the bottom layer, and $\\epsilon\\tri q$ sends the position to the corolla sitting there in the top layer, acting as the identity on directions. 
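To summarize in formulas: writing a position of $q\\tri q$ as a pair $(i,j)$ with $i\\in q(\\1)$ and $j\\colon q[i]\\to q(\\1)$, on positions we have\n\\[\n(q\\tri\\epsilon)(i,j)=i\n\\qqand\n(\\epsilon\\tri q)(i,j)=j(\\epsilon^\\sharp_i),\n\\]\nsince $\\epsilon^\\sharp_i\\in q[i]$ is exactly the default direction at $i$.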
\n\\end{example}\n\nIndeed, for every $n$, there are $(n+1)$-many morphisms $q\\tripow{n+1}\\to q\\tripow{n}$, so we have a diagram\n\\begin{equation}\\label{eq.simplicial_diag}\n\\begin{tikzcd}\n\t\\yon&\n\tq\\ar[l, "\\epsilon"']&\n\tq\\tri q\\ar[l, shift left, "q\\tri\\epsilon"]\\ar[l, shift right, "\\epsilon\\tri q"']&[10pt]\n\tq\\tripow{3}\\ar[l, shift left=7pt, "q\\tri q\\tri\\epsilon"]\\ar[l, shift right=7pt, "\\epsilon\\tri q\\tri q"']\\ar[l, "q\\tri\\epsilon\\tri q" description]&\n\t\\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift right=6pt]\\ar[l, shift right=2pt]\n\\end{tikzcd}\n\\end{equation}\nThe cofree comonoid is given by the limit of this diagram.\n\nLet's denote the shape of this diagram by $\\Delta_+$: its objects are finite ordered sets---including the empty set---and its morphisms are order-preserving injections. For any copointed polynomial $q\\To{\\epsilon}\\yon$, we get a diagram $Q\\colon\\Delta_+\\to\\poly$ as above, and this is functorial in $q$.\n\n\\begin{theorem}\\label{thm.cofree2}\nFor any copointed polynomial $q\\To{\\epsilon}\\yon$, let $\\bar{q}$ denote the limit of $Q\\colon\\Delta_+\\to\\poly$. It naturally has the structure of a comonoid $(\\bar{q},\\epsilon,\\delta)$, and this construction is right adjoint to the forgetful functor\n\\[\n\\adj{\\Cat{Comon}(\\poly)}{U}{\\bar{q}}{\\Cat{Cpt}(\\poly)}.\n\\]\n\\end{theorem}\n\\begin{proof}[Proof sketch]\nWe first give $\\bar{q}$ the structure of a comonoid. Since $\\bar{q}$ is the limit of \\eqref{eq.simplicial_diag}, the inclusion $\\{0\\}\\to\\Delta_+$ induces a projection map $\\bar{q}\\to\\yon$, which we again call $\\epsilon$. Since $\\tri$ commutes with connected limits in both variables and $\\Delta_+$ is connected, we have that $\\bar{q}\\tri\\bar{q}$ is the limit of the following $\\Delta_+\\times\\Delta_+$-shaped diagram:\n\\[\n\\bar{q}\\tri\\bar{q}\\cong\\lim\\left(\n\\begin{tikzcd}\n  q\\tripow{(0+0)}&\n  q\\tripow{(0+1)}\\ar[l]&\n  q\\tripow{(0+2)}\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]&\n  q\\tripow{(0+3)}\\ar[l]\\ar[l, shift left=4pt]\\ar[l, shift right=4pt]&\n  \\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]\\ar[l, shift right=6pt]\\\\\n  q\\tripow{(1+0)}\\ar[u]&\n  q\\tripow{(1+1)}\\ar[u]\\ar[l]&\n  q\\tripow{(1+2)}\\ar[u]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]&\n  q\\tripow{(1+3)}\\ar[u]\\ar[l]\\ar[l, shift left=4pt]\\ar[l, shift right=4pt]&\n  \\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]\\ar[l, shift right=6pt]\\\\\n  q\\tripow{(2+0)}\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]&\n  q\\tripow{(2+1)}\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[l]&\n  q\\tripow{(2+2)}\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]&\n  q\\tripow{(2+3)}\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[l]\\ar[l, shift left=4pt]\\ar[l, shift right=4pt]&\n  \\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]\\ar[l, shift right=6pt]\\\\\n  q\\tripow{(3+0)}\\ar[u]\\ar[u, shift left=4pt]\\ar[u, shift right=4pt]&\n  q\\tripow{(3+1)}\\ar[u]\\ar[u, shift left=4pt]\\ar[u, shift right=4pt]\\ar[l]&\n  q\\tripow{(3+2)}\\ar[u]\\ar[u, shift left=4pt]\\ar[u, shift right=4pt]\\ar[l, shift left=2pt]\\ar[l, shift right=2pt]&\n  q\\tripow{(3+3)}\\ar[u]\\ar[u, shift left=4pt]\\ar[u, shift right=4pt]\\ar[l]\\ar[l, shift left=4pt]\\ar[l, shift right=4pt]&\n  \\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift 
right=2pt]\\ar[l, shift right=6pt]\\\\\n\t\\vdots\\ar[u, shift left=6pt]\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[u, shift right=6pt]&\n\t\\vdots\\ar[u, shift left=6pt]\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[u, shift right=6pt]&\n\t\\vdots\\ar[u, shift left=6pt]\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[u, shift right=6pt]&\n\t\\vdots\\ar[u, shift left=6pt]\\ar[u, shift left=2pt]\\ar[u, shift right=2pt]\\ar[u, shift right=6pt]&\n\t\\ddots\n\\end{tikzcd}\n\\right)\n\\]\nThere is a commutative diagram in $\\smcat$\n\\begin{equation}\\label{eqn.simplicial_poly_misc384}\n\\begin{tikzcd}[column sep=small]\n\t\\Delta_+\\times\\Delta_+\\ar[rr, "+"]\\ar[dr, bend right, near start, "{(m_1,m_2)\\mapsto q\\tripow{(m_1+m_2)}}"']&&\n\t\\Delta_+\\ar[dl, bend left, near start, "n\\mapsto q\\tripow{n}"]\\\\&\n\t\\poly\n\\end{tikzcd}\n\\end{equation}\nwhich induces a map (in the opposite direction) between their limits $\\delta\\colon\\bar{q}\\to\\bar{q}\\tri\\bar{q}$, which we take to be the comultiplication. Precomposing \\eqref{eqn.simplicial_poly_misc384} with the inclusion $\\{0\\}\\times\\Delta_+\\to\\Delta_+\\times\\Delta_+$, etc., it is easy to see that $(\\bar{q},\\epsilon,\\delta)$ satisfies the axioms of a comonoid.\n\nWe sketch the proof that this construction is right adjoint to the forgetful functor. For any copointed polynomial $(q,\\epsilon)$, there is a counit map $\\bar{q}\\to q$, given by the obvious projection of the limit \\eqref{eq.simplicial_diag}. Given a comonoid $(\\ema{c},\\epsilon,\\delta)$, there is a morphism $\\ema{c}\\to\\bar{\\ema{c}}$ induced by the maps $\\delta^{n-1}\\colon\\ema{c}\\to\\ema{c}\\tripow{n}$ from \\cref{ex.delta_n_notation}. It is easy to check that these commute with the $\\epsilon$'s in the diagram. To see that $\\ema{c}\\to\\bar{\\ema{c}}$ is a morphism of comonoids, it suffices to check that the diagram\n\\[\n\\begin{tikzcd}[column sep=50pt]\n\t\\ema{c}\\ar[r, "\\delta^{m+n-1}"]\\ar[d, "\\delta"']&\n\t\\ema{c}\\tripow{(m+n)}\\ar[d, equal]\\\\\n\t\\ema{c}\\tri\\ema{c}\\ar[r, "\\delta^{m-1}\\tri\\delta^{n-1}"']&\n\t\\ema{c}\\tripow{m}\\tri\\ema{c}\\tripow{n}\n\\end{tikzcd}\n\\]\ncommutes for any $m,n\\in\\nn$. Both triangle equations are straightforward.\n\\end{proof}\n\n\\begin{remark}\nThe construction of the cofree comonoid from a copointed endofunctor in the proof of \\cref{thm.cofree2} is fairly standard; see \\cite{lack2010note}. Nelson Niu has also constructed the cofree comonoid on a polynomial using a different limit diagram\n\\[\n\\begin{tikzcd}\n\t\\yon\\ar[d]&\n\tp\\ar[d]&\n\tp\\tri p\\ar[d]&[10pt]\n\tp\\tri p\\tri p\\ar[d]&\n\t\\cdots\\\\\n\t\\1&\n\tp\\tri \\1\\ar[l, "!"]&\n\tp\\tri p\\tri \\1\\ar[l, "p\\tri\\,!"]&\n\tp\\tri p\\tri p\\tri \\1\\ar[l, "p\\tri p\\tri\\,!"]&\n\t\\cdots\\ar[l]\n\\end{tikzcd}\n\\]\nin terms of the original polynomial $p$, rather than from its copointing $p\\yon$; that is, this construction is right adjoint to $\\Cat{Comon}(\\poly)\\to\\poly$. 
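(Here $p\\tripow{n}\\tri\\1$ is the constant polynomial on the set $p\\tripow{n}(\\1)$ of height-$n$ trees in $p$: composing with $\\1$ keeps the positions but discards all directions.) 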
One could also construct this right adjoint using the following limit, again applied to the original polynomial $p$:\n\\[\n\\begin{tikzcd}[sep=small]\n\t\\yon\\ar[dr]&&\n\tp\\ar[dl]\\ar[dr]&&\n\tp\\tri p\\ar[dl]\\ar[dr]&&\n\tp\\tri p\\tri p\\ar[dl]\\ar[dr]&\n\t\\cdots\\\\&\n\t\\1&&\n\tp\\tri\\1&&\n\tp\\tri p\\tri \\1&&\n\t\\cdots\n\\end{tikzcd}\n\\]\n\\end{remark}\n\nWe record the following proposition here; it will be useful in \\cref{cor.cartesian_cof_extra_adjoint}.\n\n\\begin{proposition}\nIf $f\\colon p\\to q$ is a Cartesian map of polynomials, then $\\tr_f\\colon\\tr_p\\to\\tr_q$ is a Cartesian cofunctor. That is, for each tree $t\\in\\tr_p(\\1)$ the function $\\tr_f^\\sharp\\colon\\tr_q[\\tr_f(t)]\\To{\\cong}\\tr_p[t]$ is a bijection.\n\\end{proposition}\n\\begin{proof}\n**\n\\end{proof}\n\n\\begin{proposition}\nThe cofree comonoid functor is lax monoidal; in particular, we have maps\n\\[\n\\yon\\to\\cofree{\\yon}\n\\qqand\n\\cofree{p}\\otimes\\cofree{q}\\to\\cofree{p\\otimes q}\n\\]\nfor any $p,q\\in\\poly$.\n\\end{proposition}\n\n%-------- Section --------%\n\\section{$\\sys(p)$ is a topos}\n\n\\begin{theorem}\\label{thm.cofree_coalgebras}\nLet $\\cofree{p}$ be the cofree comonoid on $p\\in\\poly$. There is an equivalence of categories\n\\[\n\\sys(p)\\cong\\smcat(\\cofree{p},\\smset)\n\\]\nbetween the category of dynamical systems on $p$ and that of functors $\\cofree{p}\\to\\smset$.\n\\end{theorem}\n\\begin{proof}\n** (In general, coalgebras of comonoids are copresheaves on the corresponding category)\n\\end{proof}\n\nA consequence of this is that $\\sys(p)$ forms a topos, and hence has a ready-made type theory and internal logic. While we don't have space to do this justice, we will briefly discuss the sort of logical statement one can make about dynamical systems.\n\n%-------- Section --------%\n\\section{Morphisms between cofree comonoids}\nGiven a morphism of polynomials $\\varphi \\colon p \\to q$, the cofree functor gives us a map of comonoids $\\cofree{\\varphi} \\colon \\cofree{p} \\to \\cofree{q}$, which works as follows.\n\nAn object $t \\in \\tr_p$ is a tree; the tree $u \\coloneqq \\cofree{\\varphi}(t) \\in \\tr_q$ is constructed recursively as follows. If the root of $t$ is $i \\in p(\\1)$ then the root of $u$ is $j \\coloneqq \\varphi_1(i)$. To each branch $e \\in q[j]$, we need to assign a new tree, and we recursively apply the same construction to the subtree of $t$ situated at $\\varphi^\\sharp_i(e)$.\n\n\\begin{exercise}\nLet $p \\coloneqq \\yon ^\\2 + \\1$ and $q \\coloneqq \\2 \\yon + \\2$.\n\\begin{enumerate}\n    \\item Choose a map $\\varphi \\colon p \\to q$, and write it out.\n    \\item Choose a tree $t \\in \\tr_p$ with height at least $2$.\n    \\item What is $\\cofree{\\varphi}(t)$?\n    \\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\nThis exercise is useful when considering the (topos-theoretic) logic of dynamical systems. Namely, it will allow us to specify legal subtrees of height $n$.\n\\begin{enumerate}\n    \\item Choose a polynomial $q$ and a map $\\epsilon \\colon q \\to \\yon$, i.e. 
a copointed polynomial.\n    \\item Recall from \\cref{thm.cofree2} that the carrier $\\bar{q}$ of the cofree comonoid $\\cofree{q} = (\\bar{q}, \\epsilon, \\delta)$ is constructed as a limit\n    \\[\n      \\bar{q} = \\lim \\left(\n\\begin{tikzcd}\n\t\\yon&\n\tq\\ar[l]&\n\tq\\tri q\\ar[l, shift left]\\ar[l, shift right]&[10pt]\n\tq\\tripow{3}\\ar[l, shift left=7pt]\\ar[l, shift right=7pt]\\ar[l ]&\n\t\\cdots\\ar[l, shift left=6pt]\\ar[l, shift left=2pt]\\ar[l, shift right=6pt]\\ar[l, shift right=2pt]\n\\end{tikzcd}\n      \\right)\n    \\]\n      and in particular there is a structure map $\\bar{q} \\to q \\tripow{n}$, for any $n \\in \\nn$. Where does it send a tree $t \\in \\tr_q$?\n      \\item There is an induced cofunctor $\\cofree{q} \\cof \\cofree{q\\tripow{n}}$. Show that for all $n\\geq 1$, it is an isomorphism.\n      \\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n%-------- Section --------%\n\\section{Consequences of adjointness}\n\n%\\[\n%\\begin{tikzpicture}[polybox, tos]\n%\t\\node[poly, dom, "$q$" left] (q) {};\n%\t\\node[poly, cod, below right=1.6 and 4 of q.south, "$p$" right] (p1) {};\n%\t\\node[poly, cod, above=of p1, "$p$" right] (p2) {};\n%\t\\node[poly, cod, above=of p2, "$p$" right] (p3) {};\n%\t\\node[poly, cod, above=of p3, "$p$" right] (p4) {};\n%\t\\node[poly, cod, above=of p4, "$p$" right] (p5) {};\n%\t\\node[poly, draw=none, above=of p5] (ph) {};\n%%\n%\t\\draw (q_pos) to[first] (p1_pos);\n%\t\\draw (p1_dir) to[climb] node[circle, inner sep=0, fill=white] (or2) {or}(p2_pos);\n%\t\\draw (p2_dir) to[climb] node[circle, inner sep=0, fill=white] (or3) {or} (p3_pos);\n%\t\\draw (p3_dir) to[climb] node[circle, inner sep=0, fill=white] (or4) {or} (p4_pos);\n%\t\\draw (p4_dir) to[climb] node[circle, inner sep=0, fill=white] (or5) {or} (p5_pos);\n%\t\\draw (p5_dir) to[climb] node[circle, inner sep=0, fill=white] (or6) {or} (ph_pos);\n%\t\\draw[shorten <=4pt] (or2.center) to[last] (q_dir);\n%\t\\draw[shorten <=4pt] (or3.center) to[last] (q_dir);\n%\t\\draw[shorten <=4pt] (or4.center) to[last] (q_dir);\n%\t\\draw[shorten <=4pt] (or5.center) to[last] (q_dir);\n%\t\\draw[shorten <=4pt] (or6.center) to[last] (q_dir);\n%\\end{tikzpicture}\n%\\]\n\nRecall from \\cref{def.gen_moore} that a dependent system (or generalized Moore machine) is a map of polynomials $f\\colon S\\yon^S\\to p$. Here $S$ is called the set of states and $p$ is the interface. \n\nBut now we know that $S\\yon^S$ is secretly the underlying polynomial of a comonoid. This means that $f$ has a mate, i.e.\\ a corresponding cofunctor $S\\yon^S\\cof\\cofree{p}$ to the category of $p$-trees. How does that work?\n\n\\begin{example}\\label{ex.cofree_dyn_sys}\nLet $S\\coloneqq\\{\\bul[dgreen],\\bul[dyellow],\\bul[red]\\}$ and $p\\coloneqq\\yon^\\2+\\yon$, and consider the dynamical system $f\\colon S\\yon^S\\to p$ from \\cref{exc.det_fsa_misc_398}, depicted here again for your convenience:\n\\[\n\\begin{tikzcd}[column sep=small]\n\t\\bul[dgreen]\\ar[rr, bend left, orange]\\ar[loop left, dgreen]&&\n\t\\bul[dyellow]\\ar[dl, bend left, orange]\\ar[ll, dgreen, bend left]\\\\&\n\t\\bul[red]\n\\end{tikzcd}\n\\]\nThe polynomial map $f$ is supposed to induce a cofunctor $F\\colon S\\yon^S\\cof\\cofree{p}$ from the state category on $S$ to the category of $p$-trees. Thus to each state $s\\in S$, we need to associate a tree; which should it be? \n\\end{example}\n\n\\begin{exercise}\nConsider the walking arrow category $\\cat{W}=\\fbox{$\\bullet\\to\\bullet$}$. 
Draw the cofunctor $\\cat{W}\\cof\\cofree{\\yon^\\2+\\yon}$.\n\\end{exercise}\n\n%---- Subsection ----%\n\\subsection{Dynamical systems and graph fibrations}\n\nIn \\cref{ex.cofree_dyn_sys}, there's a certain relationship we can see between the graph we associate to the dynamical system $S\\yon^S\\to p$, namely \n\\[\n\\begin{tikzcd}[column sep=small]\n\t\\bul[dgreen]\\ar[rr, bend left, orange]\\ar[loop left, dgreen]&&\n\t\\bul[dyellow]\\ar[dl, bend left, orange]\\ar[ll, dgreen, bend left]\\\\&\n\t\\bul[red]\n\\end{tikzcd}\n\\]\nand the trees (which are also graphs) that its mate $S\\yon^S\\cof\\cofree{p}$ associates to each element of $S$, e.g.\\ for the green dot:\n\\[\n\\treepic\n\\]\nIndeed, there is a map of graphs from the latter to the former, which sends all the green dots in the tree to the green dot in the dynamical system, etc. This map of graphs is a kind of \\emph{fibration}, or maybe we should say op-fibration, in the sense that the set of arrows emanating from every dot in the tree is in bijection with the set of arrows emanating from its image in the dynamical system graph.\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item Draw the other two trees associated to the dynamical system in \\cref{ex.cofree_dyn_sys}.\n\t\\item Do they also have an op-fibration down to the dynamical system graph?\n\t\\item Are these op-fibrations special in any way? That is, are they unique, or do they have any universal property?\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n%---- Subsection ----%\n\\subsection{Replacing $S\\yon^S$ by another comonoid}\n\nFor any interface $p$, we defined a dependent dynamical system---also called a generalized Moore machine---to be a set $S$ and a polynomial map $S\\yon^S\\to p$. But now it seems that what really makes this work is that $S\\yon^S$ underlies a comonoid. This suggests that we could instead have defined a dependent dynamical system to be a comonoid $\\cat{C}=(\\ema{c},\\epsilon,\\delta)$ together with a map $\\ema{c}\\to p$. What are the similarities and differences?\n\nHere are some similarities. We still get a cofunctor $F\\colon\\cat{C}\\cof\\cofree{p}$, so we associate a $p$-tree to each object in $\\cat{C}$ and pass back paths out of its root to morphisms in $\\cat{C}$. In terms of dynamics, we would think of objects in $\\cat{C}$ as internal states. We still have the situation from \\eqref{eqn.comon_delta_k}, meaning that for every state $c\\in\\cat{C}$, we get a tree $F_1(c)$ whose root is a position $i\\in p(\\1)$, and for every direction $d\\in p[i]$ there we get a new state $\\cod(F^\\sharp_c(d))\\in\\cat{C}$. \n\nBut in fact we get a little more from $F$, and this is where the differences come in. Namely, given a direction $d\\in p[i]$, we get the morphism $F^\\sharp_c(d)$ itself. In the state category $S\\yon^S$, there is a unique morphism between any two objects, so this passed-back morphism carries no data beyond what its codomain is. But for a more general comonoid $\\cat{C}$, the morphisms \\emph{do} carry data. \n\nThus we can think of a map $\\ema{c}\\to p$ as a dynamical system that ``records its history.'' That is, given a path in $\\cofree{p}$, i.e.\\ a sequence of inputs to our dynamical system, we get a morphism in $\\cat{C}$. If, unlike in a state category $S\\yon^S$, there are multiple morphisms between objects, we will know which one was actually taken by the system.\n\nThis seems like a nice generalization of dynamical systems---history-recording dynamical systems---and may have some use. 
However, we will see in \\cref{chapter.bimod} that there are strong theoretical reasons to emphasize the ahistorical state categories $S\\yon^S$. For one thing, the category of all such $S\\yon^S$-style dynamical systems on $p$ forms a topos for any $p\\in\\poly$.\n\n\n%-------- Section --------%\n\\section{Some math about cofree comonoids}\n\n\\begin{proposition}\\label{prop.cofree_free_on_graph}\nFor every polynomial $p$, the cofree category $\\cofree{p}$ is free on a graph. That is, there is a graph $G_p$ whose associated free category in the usual sense (the category of vertices and paths in $G_p$) is isomorphic to $\\cofree{p}$.\n\\end{proposition}\n\\begin{proof}\nFor vertices, we let $V_p$ denote the set of $p$-trees,\n\\[V_p\\coloneqq\\tr_p(\\1).\\]\nFor arrows we use the map $\\pi\\colon\\tr_p\\to p$ from \\cref{**} to define\n\\[\nA_p\\coloneqq\\sum_{t\\in\\tr_p(\\1)}p[\\pi_1(t)]\n\\]\nIn other words, $A_p$ is the set of pairs $(t,d)$ where $t$ is a $p$-tree and $d$ is a direction in $p$ emanating from its root corolla. The source of $(t,d)$ is $t$ and the target is $\\cod(\\pi^\\sharp_t(d))$. It is clear that every morphism in $\\cofree{p}$ is the composite of a finite sequence of such morphisms, completing the proof.\n\\end{proof}\n\n\n\\begin{corollary}\nLet $p$ be a polynomial and $\\cofree{p}$ the cofree comonoid. Every morphism in $\\cofree{p}$ is both monic and epic.\n\\end{corollary}\n\\begin{proof}\nThe free category on a graph always has this property, so the result follows from \\cref{prop.cofree_free_on_graph}.\n\\end{proof}\n\n\\begin{proposition}\\label{prop.cofree_lax_monoidal}\nThe cofree functor $p\\mapsto\\cofree{p}=(\\tr_p,\\rt,\\fs)$ is lax monoidal; in particular there is a map of polynomials $\\yon\\to\\tr_\\yon$, and for any $p,q\\in\\poly$ there is a natural map\n\\[\n\t\\tr_p\\otimes\\tr_q\\to\\tr_{p\\otimes q}\n\\]\nsatisfying the usual conditions.\n\\end{proposition}\n\\begin{proof}\nBy \\cref{prop.dirichlet_on_catsharp}, the left adjoint $U\\colon\\smcat^\\sharp\\to\\poly$ is strong monoidal. A consequence of Kelly's doctrinal adjunction theorem \\cite{kelly1974doctrinal} is that the right adjoint of an op-lax monoidal functor is lax monoidal.\n\\end{proof}\n\n\\begin{exercise}\n\\begin{enumerate}\n\t\\item What polynomial is $\\tr_\\yon$?\n\t\\item What is the map $\\yon\\to\\tr_\\yon$ from \\cref{prop.cofree_lax_monoidal}?\n\t\\item Explain in words how to think about the map $\\tr_p\\otimes\\tr_q\\to\\tr_{p\\otimes q}$ from \\cref{prop.cofree_lax_monoidal}, for arbitrary $p,q\\in\\poly$.\n\\qedhere\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{proposition}\\label{prop.ynn_monoid}\nThe additive monoid $\\yon^\\nn$ of natural numbers has a $\\times$-monoid structure in $\\smcat^\\sharp$.\n\\end{proposition}\n\\begin{proof}\nThe right adjoint $p\\mapsto\\cofree{p}$ preserves products, so $\\yon^{\\List(\\ord{n})}\\cong\\cofree{\\yon^{\\ord{n}}}$ is the $n$-fold product of $\\yon^\\nn$ in $\\smcat^\\sharp$. We thus want to find cofunctors $e\\colon \\yon\\to\\yon^\\nn$ and $m\\colon\\yon^{\\List(\\2)}\\to\\yon^\\nn$ that satisfy the axioms of a monoid. \n\nThe unique polynomial map $\\yon\\to\\yon^\\nn$ is a cofunctor (it is the mate of the identity $\\yon\\to\\yon$). We take $m$ to be the mate of the polynomial map $\\yon^{\\List(\\2)}\\to\\yon$ given by the list $[1,2]$. 
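(Recall that a polynomial map $\\yon^A\\to\\yon$ amounts to a choice of element of $A$: there is nothing to specify on positions, and on directions it picks out an element of $A$.) 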
One can check by hand that these definitions make $(\\yon^\\nn,e,m)$ a monoid in $(\\smcat^\\sharp,\\yon,\\times)$.\n\\end{proof}\n\nRecall from \\cref{ex.admissible_section} that an admissible section of a category $\\cat{C}$ is a cofunctor $\\cat{C}\\cof\\yon^\\nn$.\n\n\\begin{corollary}\nFor any category $\\cat{C}$, the set $\\smcat^\\sharp(\\cat{C},\\yon^\\nn)$ of admissible sections has the structure of a monoid. Moreover, this construction is functorial\n\\[\\smcat^\\sharp(-,\\yon^\\nn)\\colon\\smcat^\\sharp\\to\\Cat{Mon}\\op.\\]\n\\end{corollary}\n\\begin{proof}\nWe saw that $\\yon^\\nn$ is a monoid object in \\cref{prop.ynn_monoid}.\n\\end{proof}\n\nA cofunctor $\\cat{C}\\cof\\yon^\\nn$ is a policy in $\\cat{C}$: it assigns an outgoing morphism to each object of $\\cat{C}$. Any two such policies can be multiplied: we simply do one and then the other; this is the monoid operation. The policy assigning the identity to each object is the unit of the monoid.\n\nWe use the notation $\\cat{C}\\mapsto\\vec{\\cat{C}}$ for the monoid of admissible sections.\n\n\\begin{theorem}\\label{thm.catsharp_to_mon}\nThe admissible sections functor\n\\[\\smcat^\\sharp\\to\\Cat{Mon}\\op\\]\nis right adjoint to the inclusion $\\Cat{Mon}\\op\\to\\smcat^\\sharp$ from \\cref{prop.monoids_ff}.\n\\end{theorem}\n\\begin{proof}\nLet $\\cat{C}$ be a category and $(M,e,*)$ a monoid. A cofunctor $F\\colon\\cat{C}\\cof\\yon^M$ has no data on objects; it is just a way to assign to each $c\\in \\cat{C}$ and $m\\in M$ a morphism $F^\\sharp_c(m)\\colon c\\to c'$, where $c'\\coloneqq\\cod(F^\\sharp_c(m))$. This assignment must send identities to identities and composites to composites: given $m'\\in M$ we have $F^\\sharp_c(m*m')=F^\\sharp_c(m)\\then F^\\sharp_{c'}(m')$. This is exactly the data of a monoid morphism $M\\to \\vec{\\cat{C}}$: it assigns to each $m\\in M$ an admissible section of $\\cat{C}$, preserving unit and multiplication.\n\\end{proof}\n\n\\begin{proposition}\\label{prop.traj_mon_poly}\nThere is a commutative square of left adjoints\n\\[\n\\begin{tikzcd}\n\t\\Cat{Mon}\\op\\ar[r, "U"]\\ar[d, "\\yon^-"']&\n\t\\smset\\op\\ar[d, "\\yon^-"]\\\\\n\t\\smcat^\\sharp\\ar[r, "U"']&\n\t\\poly\n\\end{tikzcd}\n\\]\nwhere the functors denoted $U$ are forgetful functors.\n\\end{proposition}\n\\begin{proof}\nUsing the fully faithful functor $\\yon^-\\colon\\Cat{Mon}\\op\\fromto\\smcat^\\sharp$ from \\cref{prop.monoids_ff}, it is easy to check that the above diagram commutes. \n\nThe free-forgetful adjunction $\\smset\\fromto\\Cat{Mon}$ gives an opposite adjunction $\\smset\\op\\fromto\\Cat{Mon}\\op$, where $U$ is now left adjoint. 
We saw that $\\yon^-\\colon\\smset\\op\\to\\poly$ is a left adjoint in \\cref{prop.yoneda_left_adjoint}, that $U\\colon\\smcat^\\sharp\\to\\poly$ is a left adjoint in \\cref{thm.cofree}, and that $\\yon^-\\colon\\Cat{Mon}\\op\\to\\smcat^\\sharp$ is a left adjoint in \\cref{thm.catsharp_to_mon}.\n\\end{proof}\n\n\\section{Exercise solutions}\n\\Closesolutionfile{solutions}\n{\\footnotesize\n\\input{solution-file7}}\n\n\\end{document}", "meta": {"hexsha": "2fd936687598d245b21fc3e68df13547ee229dab", "size": 319777, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "P2-Comonoids.tex", "max_stars_repo_name": "o1lo01ol1o/poly", "max_stars_repo_head_hexsha": "4d120b3c1b3df6530f7bf669b5f6a2e8d8b91a1a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "P2-Comonoids.tex", "max_issues_repo_name": "o1lo01ol1o/poly", "max_issues_repo_head_hexsha": "4d120b3c1b3df6530f7bf669b5f6a2e8d8b91a1a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "P2-Comonoids.tex", "max_forks_repo_name": "o1lo01ol1o/poly", "max_forks_repo_head_hexsha": "4d120b3c1b3df6530f7bf669b5f6a2e8d8b91a1a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.7985655425, "max_line_length": 697, "alphanum_fraction": 0.6569234185, "num_tokens": 113307, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.5855652575895814}}
{"text": "\\section{Deep Neural Net - RegularNet}\nThe regular neural network is composed exclusively of regular and strided convolutional layers. While this architecture works well for relatively shallow networks, it becomes increasingly more difficult to train as the network depth increases.\n\nRegular networks are a combination of the two standard forms of neural networks, the fully connected neural network and the convolution neural network.\n\n\\subsection{Fully Connected Neural Networks (FCNN)}\nA neural network consist of an input layer, some hidden layers and an output layer. The input layer is an image, and it is targeted for feature extraction and classification in the hidden layers and output layer. Every pixel in the image is connected to every neuron in the first hidden layer of the neural network as seen in figure \\ref{fig:RegularFCNN}.    \nEach hidden layer is an array of neurons, with each neuron normally consisting of a weight, some activation function and a regulation function. Each pixel value is weighted, activated in an activation function and regularized. Most often ReLU or Leaky ReLU are used as activation function to zero out negative values. The activation function provides a non-linear relationship within the data points to provide better feature extractions. Each neuron in the second hidden layer is supplied with the output of all the neurons in the previous layer and the procedure is repeated until the last hidden layer. The output layer consist of a loss function, which is often either a Softmax loss function or a support vector machine loss function. The amount of loss functions in the output layer is equal to the amount of classification categories the image can be classified as.\n\\\\\nAn image is forwarded through the neural network, and the loss functions provides the misclassification percentage of the image. This error, or loss in accuracy, is send back through the neural network and a gradient for each neuron is found. This process is repeated for an amount of images, batch size, and for each backward propagation the gradient is saved. After a batch of images all the gradients saved for an individual neuron are average and from this value the neural network response to the back propagation. The weights are updated depending on the averaged gradients and this procedure can be done in different ways. The most modern update method is called ADAM, and it is trying to reduce the loss result by changing the weights regarding to the gradients. ADAM will decide the amount of change the weights must have in relation to the gradients, while the regularization step decides the relationship of the change between the weights. ADAM defines the maximum change to a single weight, while the regularization defines the distribution of change over all weights related to the maximum defined change.\n\n\\myFigure{RegularNet_FCNN.png}{Fully Connected Neural Network Architecture}{fig:RegularFCNN}{0.5}\n\\FloatBarrier\n\n\\subsection{Convolutional Neural Networks (CNN)}\n\nThe main difference between the FCNN and CNN is the FCNN is providing every input pixel to every neuron between each layer, while the CNN is only connecting the neurons to regions of interest of the image, also known as receptive fields. Each receptive field is dotted together with a feature map which provides a single pixel output.\n\nThe input layer is an image, while the next layer consist of weighted feature maps, an activation function and a regulation function. 
The input layer is an image, while the next layer consists of weighted feature maps, an activation function, and a regularization function. Each feature map has three dimensions, for example $3 \\times 3 \\times 3$. Each depth slice of the feature map is convolved with the corresponding channel of the image. The dot product of the image pixels in one receptive field with the feature map provides a single value for that receptive field. Convolving an input image of $M$ rows and $N$ columns ($M \\times N \\times 3$) with $X$ feature maps of size $3 \\times 3 \\times 3$ provides $X$ new images, with dimensions determined by the striding distance of the convolution, as seen in figure \\ref{fig:RegularStrinding}.\n\n\\myFigure{RegularNET_Strinding.png}{Convolutional Neural Network feature map striding example}{fig:RegularStrinding}{0.5}\n\\FloatBarrier\n\nPooling layers are placed between layers at convenient places to minimize the variable count and training time of the neural network. This is typically done by increasing filter strides or by max pooling. Max pooling is done by choosing the highest pixel value in each spatial region of the image and compressing the image to only the chosen max pooling values, as seen in figure \\ref{fig:RegularCVNN}. It will not always be possible to achieve the desired stride unit distance in the image, so an extra layer of zeros is padded onto the image to make sure the image is filtered with the correct feature map size and stride unit distance. One layer of zero padding with a $3 \\times 3$ feature map and a stride of one returns an image with the same size as the input image.\n\n\\myFigure{RegularNet_CVNN.png}{Convolutional Neural Network Architecture}{fig:RegularCVNN}{0.5}\n\\FloatBarrier", "meta": {"hexsha": "8f9e5a3db0a65be481309f5c8ef054224a36ecbe", "size": 4981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter/RegularNet.tex", "max_stars_repo_name": "Rotvig/cs231n", "max_stars_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-11T12:30:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-11T12:30:50.000Z", "max_issues_repo_path": "Report/chapter/RegularNet.tex", "max_issues_repo_name": "Rotvig/cs231n", "max_issues_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter/RegularNet.tex", "max_forks_repo_name": "Rotvig/cs231n", "max_forks_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 184.4814814815, "max_line_length": 1118, "alphanum_fraction": 0.8159004216, "num_tokens": 987, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246035907932, "lm_q2_score": 0.7025300698514777, "lm_q1q2_score": 0.5854355919695949}}
{"text": "\\section{Geometric interpretation}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item View complex numbers as points in the plane.\n  \\item Understand the geometric meaning of addition, subtraction,\n    multiplication, and the complex conjugate.\n  \\item Understand the geometric meaning of the magnitude and argument\n    of a complex number.\n  \\end{enumerate}\n\\end{outcome}\n\nJust as a real number can be considered as a point on the line, a\ncomplex number $z = a + bi$ can be considered as a point $(a,b)$ in\nthe plane whose $x$ coordinate is $a$ and whose $y$ coordinate is\n$b$. For example, in the following picture, the complex number\n$z = 3+2i$ can be represented as the point in the plane with\ncoordinates $(3,2)$%\n\\index{complex number!geometric interpretation}.\n\\begin{equation*}\n  \\begin{tikzpicture}\n    \\draw[help lines, dotted, fill=red!10] (0,0) -- (3,0) -- (3,2) -- cycle;\n    \\draw[help lines, dotted] (0,2) -- (3,2);\n    \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (1.2,0) arc (0:33.69:1.2) -- cycle;\n    \\node at (16.845:9mm){$\\theta$};\n    \\draw[->](-1.5,0) -- (3.5,0);\n    \\draw[->](0,0) -- (0,2.5);\n    \\draw(0,1) -- +(-0.2,0) node[left] {$i$};\n    \\draw(0,2) -- +(-0.2,0) node[left] {$2i$};\n    \\draw(-1,0) -- +(0,-0.2) node[below] {$-1$};\n    \\draw(0,0) -- +(0,-0.2) node[below] {$0$};\n    \\draw(1,0) -- +(0,-0.2) node[below] {$1$};\n    \\draw(2,0) -- +(0,-0.2) node[below] {$2$};\n    \\draw(3,0) -- +(0,-0.2) node[below] {$3$};\n    \\draw[thick, blue] (0,0) -- node[above, black] {$r$} (3,2);\n    \\draw[fill, black] (3,2) circle [radius=1.8pt] node [right] {$z = 3+2i$};\n  \\end{tikzpicture}\n\\end{equation*}\nThe \\textbf{magnitude}%\n\\index{complex number!magnitude}%\n\\index{magnitude!of a complex number} $r=\\abs{z}$ of a complex number\nis its distance from the origin. We also define the \\textbf{argument}\nof $z$ to be the angle $\\theta$ between the $x$-axis and the line from\nthe origin to $z$, counted positively in the counterclockwise\ndirection. The magnitude $r$ and argument $\\theta$ are shown in the\nabove picture.\n\nAddition of complex numbers is like vector addition. The effect of\nmultiplying two complex numbers is to multiply their magnitudes and\nadd their arguments. 
For example, the following picture illustrates\nthe multiplication $(3+2i)(1+i)=1+5i$.\n\\begin{equation*}\n  \\begin{tikzpicture}[scale=0.8]\n    \\begin{scope}\n      \\draw[help lines, dotted, fill=red!10] (0,0) -- (3,0) -- (3,2) -- cycle;\n      \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (0.8,0) arc (0:33.69:0.8) -- cycle;\n      \\draw[->](-1.5,0) -- (3.5,0);\n      \\draw[->](0,0) -- (0,2.5);\n      \\draw(0,1) -- +(-0.2,0) node[left] {$i$};\n      \\draw(0,2) -- +(-0.2,0) node[left] {$2i$};\n      \\draw(-1,0) -- +(0,-0.2) node[below] {$-1$};\n      \\draw(0,0) -- +(0,-0.2) node[below] {$0$};\n      \\draw(1,0) -- +(0,-0.2) node[below] {$1$};\n      \\draw(2,0) -- +(0,-0.2) node[below] {$2$};\n      \\draw(3,0) -- +(0,-0.2) node[below] {$3$};\n      \\draw[thick, blue] (0,0) -- (3,2);\n      \\draw[fill, black] (3,2) circle [radius=2.25pt] node [right] {$3+2i$};\n      \\draw (1,-2) node {The number $3+2i$};\n    \\end{scope}\n    \\begin{scope}[xshift=7cm]\n      \\draw[help lines, dotted, fill=yellow!30] (0,0) -- (1,0) -- (1,1) -- cycle;\n      \\filldraw[fill=blue!10,draw=blue!50!black] (0,0) -- (0.5,0) arc (0:45:0.5) -- cycle;\n      \\draw[->](-1.5,0) -- (3.5,0);\n      \\draw[->](0,0) -- (0,2.5);\n      \\draw(0,1) -- +(-0.2,0) node[left] {$i$};\n      \\draw(0,2) -- +(-0.2,0) node[left] {$2i$};\n      \\draw(-1,0) -- +(0,-0.2) node[below] {$-1$};\n      \\draw(0,0) -- +(0,-0.2) node[below] {$0$};\n      \\draw(1,0) -- +(0,-0.2) node[below] {$1$};\n      \\draw(2,0) -- +(0,-0.2) node[below] {$2$};\n      \\draw(3,0) -- +(0,-0.2) node[below] {$3$};\n      \\draw[thick, blue] (0,0) -- (1,1);\n      \\draw[fill, black] (1,1) circle [radius=2.25pt] node [right] {$1+i$};\n      \\draw (1,-2) node {The number $1+i$};\n    \\end{scope}\n    \\begin{scope}[xshift=14cm]\n      \\draw[help lines, dotted, fill=red!10] (0,0) -- (3,0) -- (3,2) -- cycle;\n      \\draw[help lines, dotted, fill=yellow!30] (0,0) -- (3,2) -- (1,5) -- cycle;\n      \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (0.8,0) arc (0:33.69:0.8) -- cycle;\n      \\filldraw[fill=blue!10,draw=blue!50!black] (0,0) -- (33.69:0.8) arc (33:78.69:0.8) -- cycle;\n      \\draw[->](-1.5,0) -- (3.5,0);\n      \\draw[->](0,0) -- (0,5.5);\n      \\draw(0,1) -- +(-0.2,0) node[left] {$i$};\n      \\draw(0,2) -- +(-0.2,0) node[left] {$2i$};\n      \\draw(0,3) -- +(-0.2,0) node[left] {$3i$};\n      \\draw(0,4) -- +(-0.2,0) node[left] {$4i$};\n      \\draw(0,5) -- +(-0.2,0) node[left] {$5i$};\n      \\draw(-1,0) -- +(0,-0.2) node[below] {$-1$};\n      \\draw(0,0) -- +(0,-0.2) node[below] {$0$};\n      \\draw(1,0) -- +(0,-0.2) node[below] {$1$};\n      \\draw(2,0) -- +(0,-0.2) node[below] {$2$};\n      \\draw(3,0) -- +(0,-0.2) node[below] {$3$};\n      \\draw[thick, blue] (0,0) -- (3,2);\n      \\draw[fill, black] (3,2) circle [radius=2.25pt];\n      \\draw[thick, blue] (0,0) -- (1,5);\n      \\draw[fill, black] (1,5) circle [radius=2.25pt] node [right] {$1+5i$};\n      \\draw (1,-2) node {The product $(3+2i)(1+i)=1+5i$};\n    \\end{scope}\n  \\end{tikzpicture}\n\\end{equation*}\nTo take the multiplicative inverse of a complex number, we take the\nreciprocal of the magnitude and negate the argument.\n\\begin{equation*}\n  \\begin{tikzpicture}\n    \\draw[help lines, dotted] (30:4.5) -- (30:4.5 |- 0,0);\n    \\draw[help lines, dotted] (-30:2) -- (-30:2 |- 0,0);\n    \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (1.2,0) arc (0:30:1.2) -- cycle;\n    \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (1.2,0) 
arc (0:-30:1.2) -- cycle;\n    \\node at (15:9mm){$~\\theta$};\n    \\node at (-15:9mm){$-\\theta~$};\n    \\draw[->](-1.5,0) -- (5.5,0);\n    \\draw[->](0,0) -- (0,3.5);\n    \\draw(0,3) -- +(-0.2,0) node[left] {$i$};\n    \\draw(3,0) -- +(0,-0.2) node[below] {$1$};\n    \\draw[thick, blue] (0,0) -- node[above, black] {$r$} (30:4.5);\n    \\draw[fill, black] (30:4.5) circle [radius=1.8pt] node [right] {$z$};\n    \\draw[thick, blue] (0,0) -- node[below, black] {$\\frac{1}{r}$} (-30:2);\n    \\draw[fill, black] (-30:2) circle [radius=1.8pt] node [right] {$z^{-1}$};\n  \\end{tikzpicture}\n\\end{equation*}\nThe effect of taking the complex conjugate is to reflect the given\ncomplex number about the $x$ axis (or equivalently, keep the magnitude\nunchanged and negate the argument).\n\\begin{equation*}\n  \\begin{tikzpicture}\n    \\draw[help lines, dotted] (30:4.5) -- (30:4.5 |- 0,0);\n    \\draw[help lines, dotted] (-30:4.5) -- (-30:4.5 |- 0,0);\n    \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (1.2,0) arc (0:30:1.2) -- cycle;\n    \\filldraw[fill=green!20,draw=green!50!black] (0,0) -- (1.2,0) arc (0:-30:1.2) -- cycle;\n    \\node at (15:9mm){$~\\theta$};\n    \\node at (-15:9mm){$-\\theta~$};\n    \\draw[->](-1.5,0) -- (5.5,0);\n    \\draw[->](0,0) -- (0,3.5);\n    \\draw(0,3) -- +(-0.2,0) node[left] {$i$};\n    \\draw(3,0) -- +(0,-0.2) node[below] {$1$};\n    \\draw[thick, blue] (0,0) -- node[above, black] {$r$} (30:4.5);\n    \\draw[fill, black] (30:4.5) circle [radius=1.8pt] node [right] {$z$};\n    \\draw[thick, blue] (0,0) -- node[below, black] {$r$} (-30:4.5);\n    \\draw[fill, black] (-30:4.5) circle [radius=1.8pt] node [right] {$\\bar{z}$};\n  \\end{tikzpicture}\n\\end{equation*}\n", "meta": {"hexsha": "8936cec8641cdc6b73092f3aa2ac19227f60b600", "size": 7276, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "baseText/content/ComplexNumbers-Geometric.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "baseText/content/ComplexNumbers-Geometric.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/content/ComplexNumbers-Geometric.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 47.8684210526, "max_line_length": 98, "alphanum_fraction": 0.547691039, "num_tokens": 3082, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.8333246035907933, "lm_q1q2_score": 0.5854355815894767}}
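The geometric rules above (magnitudes multiply and arguments add under multiplication; the inverse has the reciprocal magnitude and negated argument; the conjugate keeps the magnitude and negates the argument) are easy to verify numerically. The following short check uses Python's standard cmath module and the example $(3+2i)(1+i)=1+5i$ from the text; it is a supplementary illustration, not part of the original exposition.
\begin{verbatim}
import cmath

z, w = 3 + 2j, 1 + 1j
p = z * w
print(p)  # (1+5j), as in the picture above

# multiplication: magnitudes multiply, arguments add
rz, tz = cmath.polar(z)
rw, tw = cmath.polar(w)
rp, tp = cmath.polar(p)
print(abs(rz * rw - rp) < 1e-12)            # True
print(abs(tz + tw - tp) < 1e-12)            # True (no wrap-around here)

# inverse: reciprocal of the magnitude, negated argument
inv = 1 / z
print(abs(abs(inv) - 1 / rz) < 1e-12)       # True
print(abs(cmath.phase(inv) + tz) < 1e-12)   # True

# conjugate: same magnitude, negated argument
zbar = z.conjugate()
print(abs(abs(zbar) - rz) < 1e-12)          # True
print(abs(cmath.phase(zbar) + tz) < 1e-12)  # True
\end{verbatim}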
{"text": "\\subsection{Stratified Bivariate Statistics}\n\n\\noindent{\\bf Description}\n\\smallskip\n\nThe {\\tt stratstats.dml} script computes common bivariate statistics, such\nas correlation, slope, and their p-value, in parallel for many pairs of input\nvariables in the presence of a confounding categorical variable.  The values\nof this confounding variable group the records into strata (subpopulations),\nin which all bivariate pairs are assumed free of confounding.  The script\nuses the same data model as in one-way analysis of covariance (ANCOVA), with\nstrata representing population samples.  It also outputs univariate stratified\nand bivariate unstratified statistics.\n\n\\begin{table}[t]\\hfil\n\\begin{tabular}{|l|ll|ll|ll||ll|}\n\\hline\nMonth of the year & \\multicolumn{2}{l|}{October} & \\multicolumn{2}{l|}{November} &\n    \\multicolumn{2}{l||}{December} & \\multicolumn{2}{c|}{Oct$\\,$--$\\,$Dec} \\\\\nCustomers, millions    & 0.6 & 1.4 & 1.4 & 0.6 & 3.0 & 1.0 & 5.0 & 3.0 \\\\\n\\hline\nPromotion (0 or 1)     & 0   & 1   & 0   & 1   & 0   & 1   & 0   & 1   \\\\\nAvg.\\ sales per 1000   & 0.4 & 0.5 & 0.9 & 1.0 & 2.5 & 2.6 & 1.8 & 1.3 \\\\\n\\hline\n\\end{tabular}\\hfil\n\\caption{Stratification example: the effect of the promotion on average sales\nbecomes reversed and amplified (from $+0.1$ to $-0.5$) if we ignore the months.}\n\\label{table:stratexample}\n\\end{table}\n\nTo see how data stratification mitigates confounding, consider an (artificial)\nexample in Table~\\ref{table:stratexample}.  A highly seasonal retail item\nwas marketed with and without a promotion over the final 3~months of the year.\nIn each month the sale was more likely with the promotion than without it.\nBut during the peak holiday season, when shoppers came in greater numbers and\nbought the item more often, the promotion was less frequently used.  As a result,\nif the 4-th quarter data is pooled together, the promotion's effect becomes\nreversed and magnified.  Stratifying by month restores the positive correlation.\n\nThe script computes its statistics in parallel over all possible pairs from two\nspecified sets of covariates.  The 1-st covariate is a column in input matrix~$X$\nand the 2-nd covariate is a column in input matrix~$Y$; matrices $X$ and~$Y$ may\nbe the same or different.  The columns of interest are given by their index numbers\nin special matrices.  The stratum column, specified in its own matrix, is the same\nfor all covariate pairs.\n\nBoth covariates in each pair must be numerical, with the 2-nd covariate normally\ndistributed given the 1-st covariate (see~Details).  Missing covariate values or\nstrata are represented by~``NaN''.  
Records with NaN's are selectively omitted\nwherever their NaN's are material to the output statistic.\n\n\\smallskip\n\\pagebreak[3]\n\n\\noindent{\\bf Usage}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\it%\n{\\tt{}-f }path/\\/{\\tt{}stratstats.dml}\n{\\tt{} -nvargs}\n{\\tt{} X=}path/file\n{\\tt{} Xcid=}path/file\n{\\tt{} Y=}path/file\n{\\tt{} Ycid=}path/file\n{\\tt{} S=}path/file\n{\\tt{} Scid=}int\n{\\tt{} O=}path/file\n{\\tt{} fmt=}format\n\n}\n\n\n\\smallskip\n\\noindent{\\bf Arguments}\n\\begin{Description}\n\\item[{\\tt X}:]\nLocation (on HDFS) to read matrix $X$ whose columns we want to use as\nthe 1-st covariate (i.e.~as the feature variable)\n\\item[{\\tt Xcid}:] (default:\\mbox{ }{\\tt \" \"})\nLocation to read the single-row matrix that lists all index numbers\nof the $X$-columns used as the 1-st covariate; the default value means\n``use all $X$-columns''\n\\item[{\\tt Y}:] (default:\\mbox{ }{\\tt \" \"})\nLocation to read matrix $Y$ whose columns we want to use as the 2-nd\ncovariate (i.e.~as the response variable); the default value means\n``use $X$ in place of~$Y$''\n\\item[{\\tt Ycid}:] (default:\\mbox{ }{\\tt \" \"})\nLocation to read the single-row matrix that lists all index numbers\nof the $Y$-columns used as the 2-nd covariate; the default value means\n``use all $Y$-columns''\n\\item[{\\tt S}:] (default:\\mbox{ }{\\tt \" \"})\nLocation to read matrix $S$ that has the stratum column.\nNote: the stratum column must contain small positive integers; all fractional\nvalues are rounded; stratum IDs of value ${\\leq}\\,0$ or NaN are treated as\nmissing.  The default value for {\\tt S} means ``use $X$ in place of~$S$''\n\\item[{\\tt Scid}:] (default:\\mbox{ }{\\tt 1})\nThe index number of the stratum column in~$S$\n\\item[{\\tt O}:]\nLocation to store the output matrix defined in Table~\\ref{table:stratoutput}\n\\item[{\\tt fmt}:] (default:\\mbox{ }{\\tt \"text\"})\nMatrix file output format, such as {\\tt text}, {\\tt mm}, or {\\tt csv};\nsee read/write functions in SystemML Language Reference for details.\n\\end{Description}\n\n\n\\begin{table}[t]\\small\\hfil\n\\begin{tabular}{|rcl|rcl|}\n\\hline\n& Col.\\# & Meaning & & Col.\\# & Meaning \\\\\n\\hline\n\\multirow{9}{*}{\\begin{sideways}1-st covariate\\end{sideways}}\\hspace{-1em}\n& 01     & $X$-column number                & \n\\multirow{9}{*}{\\begin{sideways}2-nd covariate\\end{sideways}}\\hspace{-1em}\n& 11     & $Y$-column number                \\\\\n& 02     & presence count for $x$           & \n& 12     & presence count for $y$           \\\\\n& 03     & global mean $(x)$                & \n& 13     & global mean $(y)$                \\\\\n& 04     & global std.\\ dev. $(x)$          & \n& 14     & global std.\\ dev. $(y)$          \\\\\n& 05     & stratified std.\\ dev. $(x)$      & \n& 15     & stratified std.\\ dev. 
$(y)$      \\\\\n& 06     & $R^2$ for $x \\sim {}$strata      & \n& 16     & $R^2$ for $y \\sim {}$strata      \\\\\n& 07     & adjusted $R^2$ for $x \\sim {}$strata      & \n& 17     & adjusted $R^2$ for $y \\sim {}$strata      \\\\\n& 08     & p-value, $x \\sim {}$strata       & \n& 18     & p-value, $y \\sim {}$strata       \\\\\n& 09--10 & reserved                         & \n& 19--20 & reserved                         \\\\\n\\hline\n\\multirow{9}{*}{\\begin{sideways}$y\\sim x$, NO strata\\end{sideways}}\\hspace{-1.15em}\n& 21     & presence count $(x, y)$          &\n\\multirow{10}{*}{\\begin{sideways}$y\\sim x$ AND strata$\\!\\!\\!\\!$\\end{sideways}}\\hspace{-1.15em}\n& 31     & presence count $(x, y, s)$       \\\\\n& 22     & regression slope                 &\n& 32     & regression slope                 \\\\\n& 23     & regres.\\ slope std.\\ dev.        &\n& 33     & regres.\\ slope std.\\ dev.        \\\\\n& 24     & correlation${} = \\pm\\sqrt{R^2}$  &\n& 34     & correlation${} = \\pm\\sqrt{R^2}$  \\\\\n& 25     & residual std.\\ dev.              &\n& 35     & residual std.\\ dev.              \\\\\n& 26     & $R^2$ in $y$ due to $x$          &\n& 36     & $R^2$ in $y$ due to $x$          \\\\\n& 27     & adjusted $R^2$ in $y$ due to $x$ &\n& 37     & adjusted $R^2$ in $y$ due to $x$ \\\\\n& 28     & p-value for ``slope = 0''        &\n& 38     & p-value for ``slope = 0''        \\\\\n& 29     & reserved                         &\n& 39     & \\# strata with ${\\geq}\\,2$ count \\\\\n& 30     & reserved                         &\n& 40     & reserved                         \\\\\n\\hline\n\\end{tabular}\\hfil\n\\caption{The {\\tt stratstats.dml} output matrix has one row per each distinct\npair of 1-st and 2-nd covariates, and 40 columns with the statistics described\nhere.}\n\\label{table:stratoutput}\n\\end{table}\n\n\n\n\n\\noindent{\\bf Details}\n\\smallskip\n\nSuppose we have $n$ records of format $(i, x, y)$, where $i\\in\\{1,\\ldots, k\\}$ is\na stratum number and $(x, y)$ are two numerical covariates.  
We want to analyze\nconditional linear relationship between $y$ and $x$ conditioned by~$i$.\nNote that $x$, but not~$y$, may represent a categorical variable if we assign a\nnumerical value to each category, for example 0 and 1 for two categories.\n\nWe assume a linear regression model for~$y$:\n\\begin{equation}\ny_{i,j} \\,=\\, \\alpha_i + \\beta x_{i,j} + \\eps_{i,j}\\,, \\quad\\textrm{where}\\,\\,\\,\\,\n\\eps_{i,j} \\sim \\Normal(0, \\sigma^2)\n\\label{eqn:stratlinmodel}\n\\end{equation}\nHere $i = 1\\ldots k$ is a stratum number and $j = 1\\ldots n_i$ is a record number\nin stratum~$i$; by $n_i$ we denote the number of records available in stratum~$i$.\nThe noise term~$\\eps_{i,j}$ is assumed to have the same variance in all strata.\nWhen $n_i\\,{>}\\,0$, we can estimate the means of $x_{i, j}$ and $y_{i, j}$ in\nstratum~$i$ as\n\\begin{equation*}\n\\bar{x}_i \\,= \\Big(\\sum\\nolimits_{j=1}^{n_i} \\,x_{i, j}\\Big) / n_i\\,;\\quad\n\\bar{y}_i \\,= \\Big(\\sum\\nolimits_{j=1}^{n_i} \\,y_{i, j}\\Big) / n_i\n\\end{equation*}\nIf $\\beta$ is known, the best estimate for $\\alpha_i$ is $\\bar{y}_i - \\beta \\bar{x}_i$,\nwhich gives the prediction error sum-of-squares of\n\\begin{equation}\n\\sum\\nolimits_{i=1}^k \\sum\\nolimits_{j=1}^{n_i} \\big(y_{i,j} - \\beta x_{i,j} - (\\bar{y}_i - \\beta \\bar{x}_i)\\big)^2\n\\,\\,=\\,\\, \\beta^{2\\,}V_x \\,-\\, 2\\beta \\,V_{x,y} \\,+\\, V_y\n\\label{eqn:stratsumsq}\n\\end{equation}\nwhere $V_x$, $V_y$, and $V_{x, y}$ are, correspondingly, the ``stratified'' sample\nestimates of variance $\\Var(x)$ and $\\Var(y)$ and covariance $\\Cov(x,y)$ computed as\n\\begin{align*}\nV_x     \\,&=\\, \\sum\\nolimits_{i=1}^k \\sum\\nolimits_{j=1}^{n_i} \\big(x_{i,j} - \\bar{x}_i\\big)^2; \\quad\nV_y     \\,=\\, \\sum\\nolimits_{i=1}^k \\sum\\nolimits_{j=1}^{n_i} \\big(y_{i,j} - \\bar{y}_i\\big)^2;\\\\\nV_{x,y} \\,&=\\, \\sum\\nolimits_{i=1}^k \\sum\\nolimits_{j=1}^{n_i} \\big(x_{i,j} - \\bar{x}_i\\big)\\big(y_{i,j} - \\bar{y}_i\\big)\n\\end{align*}\nThey are stratified because we compute the sample (co-)variances in each stratum~$i$\nseparately, then combine by summation.  The stratified estimates for $\\Var(X)$ and $\\Var(Y)$\ntend to be smaller than the non-stratified ones (with the global mean instead of $\\bar{x}_i$\nand~$\\bar{y}_i$) since $\\bar{x}_i$ and $\\bar{y}_i$ fit closer to $x_{i,j}$ and $y_{i,j}$\nthan the global means.  The stratified variance estimates the uncertainty in $x_{i,j}$ \nand~$y_{i,j}$ given their stratum~$i$.\n\nMinimizing over~$\\beta$ the error sum-of-squares~(\\ref{eqn:stratsumsq})\ngives us the regression slope estimate \\mbox{$\\hat{\\beta} = V_{x,y} / V_x$},\nwith~(\\ref{eqn:stratsumsq}) becoming the residual sum-of-squares~(RSS):\n\\begin{equation*}\n\\mathrm{RSS} \\,\\,=\\, \\,\n\\sum\\nolimits_{i=1}^k \\sum\\nolimits_{j=1}^{n_i} \\big(y_{i,j} - \n\\hat{\\beta} x_{i,j} - (\\bar{y}_i - \\hat{\\beta} \\bar{x}_i)\\big)^2\n\\,\\,=\\,\\,  V_y \\,\\big(1 \\,-\\, V_{x,y}^2 / (V_x V_y)\\big)\n\\end{equation*}\nThe quantity $\\hat{R}^2 = V_{x,y}^2 / (V_x V_y)$, called \\emph{$R$-squared}, estimates the fraction\nof stratified variance in~$y_{i,j}$ explained by covariate $x_{i, j}$ in the linear \nregression model~(\\ref{eqn:stratlinmodel}).  We define \\emph{stratified correlation} as the\nsquare root of~$\\hat{R}^2$ taken with the sign of~$V_{x,y}$.  
We also use RSS to estimate\nthe residual standard deviation $\\sigma$ in~(\\ref{eqn:stratlinmodel}) that models the prediction error\nof $y_{i,j}$ given $x_{i,j}$ and the stratum:\n\\begin{equation*}\n\\hat{\\beta}\\, =\\, \\frac{V_{x,y}}{V_x}; \\,\\,\\,\\, \\hat{R} \\,=\\, \\frac{V_{x,y}}{\\sqrt{V_x V_y}};\n\\,\\,\\,\\, \\hat{R}^2 \\,=\\, \\frac{V_{x,y}^2}{V_x V_y};\n\\,\\,\\,\\, \\hat{\\sigma} \\,=\\, \\sqrt{\\frac{\\mathrm{RSS}}{n - k - 1}}\\,\\,\\,\\,\n\\Big(n = \\sum_{i=1}^k n_i\\Big)\n\\end{equation*}\n\nThe $t$-test and the $F$-test for the null-hypothesis of ``$\\beta = 0$'' are\nobtained by considering the effect of $\\hat{\\beta}$ on the residual sum-of-squares,\nmeasured by the decrease from $V_y$ to~RSS.\nThe $F$-statistic is the ratio of the ``explained'' sum-of-squares\nto the residual sum-of-squares, divided by their corresponding degrees of freedom.\nThere are $n\\,{-}\\,k$ degrees of freedom for~$V_y$, parameter $\\beta$ reduces that\nto $n\\,{-}\\,k\\,{-}\\,1$ for~RSS, and their difference $V_y - {}$RSS has just 1 degree\nof freedom:\n\\begin{equation*}\nF \\,=\\, \\frac{(V_y - \\mathrm{RSS})/1}{\\mathrm{RSS}/(n\\,{-}\\,k\\,{-}\\,1)}\n\\,=\\, \\frac{\\hat{R}^2\\,(n\\,{-}\\,k\\,{-}\\,1)}{1-\\hat{R}^2}; \\quad\nt \\,=\\, \\hat{R}\\, \\sqrt{\\frac{n\\,{-}\\,k\\,{-}\\,1}{1-\\hat{R}^2}}.\n\\end{equation*}\nThe $t$-statistic is simply the square root of the $F$-statistic with the appropriate\nchoice of sign.  If the null hypothesis and the linear model are both true, the $t$-statistic\nhas Student $t$-distribution with $n\\,{-}\\,k\\,{-}\\,1$ degrees of freedom.  We can\nalso compute it if we divide $\\hat{\\beta}$ by its estimated standard deviation:\n\\begin{equation*}\n\\stdev(\\hat{\\beta})_{\\mathrm{est}} \\,=\\, \\hat{\\sigma}\\,/\\sqrt{V_x} \\quad\\Longrightarrow\\quad\nt \\,=\\, \\hat{R}\\sqrt{V_y} \\,/\\, \\hat{\\sigma} \\,=\\, \\beta \\,/\\, \\stdev(\\hat{\\beta})_{\\mathrm{est}}\n\\end{equation*}\nThe standard deviation estimate for~$\\beta$ is included in {\\tt stratstats.dml} output.\n\n\\smallskip\n\\noindent{\\bf Returns}\n\\smallskip\n\nThe output matrix format is defined in Table~\\ref{table:stratoutput}.\n\n\\smallskip\n\\noindent{\\bf Examples}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\tt\n\\hml -f stratstats.dml -nvargs X=/user/biadmin/X.mtx Xcid=/user/biadmin/Xcid.mtx\n  Y=/user/biadmin/Y.mtx Ycid=/user/biadmin/Ycid.mtx S=/user/biadmin/S.mtx Scid=2\n  O=/user/biadmin/Out.mtx fmt=csv\n\n}\n{\\hangindent=\\parindent\\noindent\\tt\n\\hml -f stratstats.dml -nvargs X=/user/biadmin/Data.mtx Xcid=/user/biadmin/Xcid.mtx\n  Ycid=/user/biadmin/Ycid.mtx Scid=7 O=/user/biadmin/Out.mtx\n\n}\n\n%\\smallskip\n%\\noindent{\\bf See Also}\n%\\smallskip\n%\n%For non-stratified bivariate statistics with a wider variety of input data types\n%and statistical tests, see \\ldots.  For general linear regression, see\n%{\\tt LinearRegDS.dml} and {\\tt LinearRegCG.dml}.  
For logistic regression, appropriate\n%when the response variable is categorical, see {\\tt MultiLogReg.dml}.\n\n", "meta": {"hexsha": "716cf35a7e27a909a70c177e8f545caee877439a", "size": 13210, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "system-ml/docs/Algorithms Reference/DescriptiveStratStats.tex", "max_stars_repo_name": "alcedo/systemml", "max_stars_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-03-17T18:03:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-25T08:17:09.000Z", "max_issues_repo_path": "system-ml/docs/Algorithms Reference/DescriptiveStratStats.tex", "max_issues_repo_name": "alcedo/systemml", "max_issues_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "system-ml/docs/Algorithms Reference/DescriptiveStratStats.tex", "max_forks_repo_name": "alcedo/systemml", "max_forks_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-11-26T00:43:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-02T06:29:30.000Z", "avg_line_length": 46.1888111888, "max_line_length": 121, "alphanum_fraction": 0.6426192279, "num_tokens": 4575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5853527592348374}}
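For readers who want to experiment with the stratified formulas outside of SystemML, the following NumPy sketch computes $V_x$, $V_y$, $V_{x,y}$, the slope estimate, the stratified correlation, the residual standard deviation and the $t$-test for a single covariate pair. It is a simplified illustration of the Details section above, not the {\tt stratstats.dml} implementation: it handles one $(x,y)$ pair, skips the NaN bookkeeping, and omits the full 40-column output; the toy data at the end is invented for the demonstration.
\begin{verbatim}
import numpy as np
from scipy import stats

def stratified_stats(x, y, s):
    # stratified variance/covariance sums V_x, V_y, V_{x,y}
    x, y, s = map(np.asarray, (x, y, s))
    Vx = Vy = Vxy = 0.0
    strata = np.unique(s)
    for stratum in strata:
        m = (s == stratum)
        dx, dy = x[m] - x[m].mean(), y[m] - y[m].mean()
        Vx, Vy, Vxy = Vx + dx @ dx, Vy + dy @ dy, Vxy + dx @ dy
    n, k = len(x), len(strata)
    beta = Vxy / Vx                           # regression slope
    r = Vxy / np.sqrt(Vx * Vy)                # stratified correlation
    rss = Vy * (1.0 - r**2)                   # residual sum-of-squares
    sigma = np.sqrt(rss / (n - k - 1))        # residual std. dev.
    t = r * np.sqrt((n - k - 1) / (1.0 - r**2))
    p = 2.0 * stats.t.sf(abs(t), n - k - 1)   # p-value for "slope = 0"
    return beta, r, sigma, t, p

# toy data: 3 strata, positive within-stratum relationship
rng = np.random.default_rng(1)
s = np.repeat([1, 2, 3], 30)
x = rng.normal(size=90) + s
y = 0.5 * x - 2.0 * s + rng.normal(scale=0.3, size=90)
print(stratified_stats(x, y, s))
\end{verbatim}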
{"text": "\\documentclass{article}\n\\usepackage{amsmath, tikz, enumerate, sfmath, bm, multicol, tcolorbox}\n\\renewcommand{\\familydefault}{\\sfdefault}\n\\usepackage[top = 0.5in, bottom = 0.5in, left = 1in, right = 1in]{geometry}\n\\pagestyle{empty}\n\\raggedright\n\\tikzset{>=stealth}\n\\usetikzlibrary{calc}\n\\begin{document}\n\\section*{Matrix Multiplication}\n\n\\begin{tcolorbox}[colframe=orange!70!white, coltitle=black, title=\\textbf{Summary}]\n\\begin{enumerate}\n    \\item Matrix multiplication creates new basis vectors $\\hat{\\imath}$ and $\\hat{\\jmath}$.\n    \\item An $m \\times n$ matrix can be multiplied by an $n \\times r$ matrix to make an $m \\times r$ matrix.\n    \\item Matrix multiplication transforms the coordinate plane itself as a series of compositions of functions.\n\\end{enumerate}\n\\end{tcolorbox}\n\n\\subsection*{Matrix Times a Vector}\n\nMatrix multiplication does not work the way you might think. For instance, if \n\\[ A = \\begin{bmatrix} 3 & -2 \\\\ 1 & 0 \\end{bmatrix} \\quad \\text{and} \\quad B = \\begin{bmatrix} -1 & 4 \\\\ 5 & 6 \\end{bmatrix}\n\\]\n\\vspace{1in}\n\nThen $AB$ is not $\\begin{bmatrix} -3 & -8 \\\\ 5 & 0 \\end{bmatrix}$. In other words, we don't multiply corresponding elements.\t\\vspace{1in}\n\nIn order to even multiply a matrix by a vector or another matrix, the number of columns in the first matrix \\emph{must} equal the number of rows (for the vector or matrix) in the second.\t\\vspace{1in}\n\n\\newpage\n\nSuppose we have the basis vector $\\hat{\\imath} = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}$. What happened to $\\hat{\\imath}$ when we multiply it by the matrix $\\begin{bmatrix} 2 & 0 \\\\ 0 & 2 \\end{bmatrix}$?\n\\bigskip\n\n\\[\n\\begin{bmatrix}\n2 & 0 \\\\ 0 & 2 \n\\end{bmatrix}\n\\begin{bmatrix}\n1 \\\\ 0\n\\end{bmatrix}\n\\]\n\\vfill\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.75]\n\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[gray!70] (-5,-5) grid (5,5);\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\n\n\\newpage\n\n\n\n{\\color{red}\\textbf{Example 1.}} What are the coordinates of $\\hat{\\jmath} = \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}$ when you multiply it by \n$\n\\begin{bmatrix}\n3 & 0 \\\\\n0 & 3\n\\end{bmatrix}\n$\n\\bigskip\n\n\\[\n\\begin{bmatrix}\n3 & 0 \\\\\n0 & 3\n\\end{bmatrix}\n\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}\n\\]\n\n\\vfill\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.6]\n\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[gray!70] (-5,-5) grid (5,5);\n\\end{tikzpicture}\n\\end{center}\n\n\\vfill\n\n\\newpage\n\nSuppose we have the vector $\\vec{v} = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}$. 
This can be written using the basis vectors $\\hat{\\imath}$ and $\\hat{\\jmath}$ as $\\vec{v} = 2\\hat{\\imath} + 1\\hat{\\jmath}$.\n\\begin{center}\n\\begin{tikzpicture}[scale=0.6]\n\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[gray!70] (-5,-5) grid (5,5);\n\\draw[color=red, very thick, ->] (0,0) -- (2,1) node [above right] {$\\vec{v}$};\n\\end{tikzpicture}\n\\end{center}\n\nWhat happens when we multiply $\\vec{v}$ by the matrix\n$\n\\begin{bmatrix} 2 & 0 \\\\ 0 & 2 \\end{bmatrix}\n$\n\\bigskip\n\n\\[\n\\begin{bmatrix} 2 & 0 \\\\ 0 & 2 \\end{bmatrix}\n\\begin{bmatrix} 2 \\\\ 1  \\end{bmatrix}\n\\]\n\n\\vfill\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.6]\n\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\draw[gray!70] (-5,-5) grid (5,5);\n\\draw[color=red, very thick, ->] (0,0) -- (2,1) node [above right] {$\\vec{v}$};\n\\end{tikzpicture}\n\\end{center}\n\n\\newpage\n\n{\\color{red}\\textbf{Example 2.}} Find the product and graph the result.\n\\[\n\\begin{bmatrix}\n3 & -2 \\\\\n1 & 1\n\\end{bmatrix}\n\\begin{bmatrix}\n2 \\\\ 1\n\\end{bmatrix}\n\\]\n\n\\vfill\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.7]\n\\draw[<->] (-6.5,0) -- (6.5,0) node [right] {$x$};\n\\draw[<->] (0,-6.5) -- (0,6.5) node [above] {$y$};\n\\draw[gray!70] (-6,-6) grid (6,6);\n\\end{tikzpicture}\n\\end{center}\n\n\\vspace{1in}\n\n\\newpage\n\n\n\\subsection*{Matrix Times a Matrix}\n\nWe can understand matrix multiplication by looking at where $\\hat{\\imath}$ and $\\hat{\\jmath}$ end up in the coordinate plane. \\newline\\\\\n\n{\\color{red}\\textbf{Example 3.}} Explain the following problem visually.\n\\[\n\\begin{bmatrix}\n0 & -1 \\\\\n1 & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1\n\\end{bmatrix}\n\\]\n\n\\vfill\n\n\\begin{center}\n\\begin{tikzpicture}[scale=0.8]\n\\draw [gray!70] (-5,-5) grid (5,5);\n\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\\end{tikzpicture}\n\\end{center}\n\n\\vspace{1in}\n\n\\newpage\n\n{\\color{red}\\textbf{Example 4.}} Given $A = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$, determine the effect of each product.\n\n\\begin{enumerate}[(a)]\n\t\\item $\\begin{bmatrix} 0 & -1 \\\\ 1 & 0 \\end{bmatrix}  \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$.\t\\newline\\\\\n\t\\begin{flushright}\n\t\\begin{tikzpicture}\n\t\\draw[gray!70] (-3,-3) grid (3,3);\n\t\\draw[<->] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\t\\draw[<->] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\t\\end{tikzpicture}\n\t\\end{flushright}\n    \\vfill\n    \n\t\\item $\\begin{bmatrix} 1 & 0 \\\\ 1 & 1\n \\end{bmatrix}  \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}$  \\label{ex4b} \\newline\\\\\n \n    \\begin{flushright}\n\t\\begin{tikzpicture}\n\t\\draw[gray!70] (-3,-3) grid (3,3);\n\t\\draw[<->] (-3.5,0) -- (3.5,0) node [right] {$x$};\n\t\\draw[<->] (0,-3.5) -- (0,3.5) node [above] {$y$};\n\t\\end{tikzpicture}\n\t\\end{flushright}\n\\end{enumerate}\n\n\\newpage\n\nIn general, we can think of matrix multiplication as a composition of functions. 
\\newline\\\\\n\nEach multiplication leads to a new transformation of coordinates $\\hat{\\imath}$ and $\\hat{\\jmath}$:\n\n\\begin{itemize}\n    \\item Scaling $\\hat{\\imath}$ and/or $\\hat{\\jmath}$\n    \\item Rotating $\\hat{\\imath}$ and/or $\\hat{\\jmath}$\n    \\item Shearing $\\hat{\\imath}$ and/or $\\hat{\\jmath}$\n    \\item Changing the orientation of $\\hat{\\imath}$ and $\\hat{\\jmath}$\n\\end{itemize}\n\n\\vfill\n\nFor instance, in Example 4\\ref{ex4b}, when we multiplied by the matrix   \\newline\\\\ $\\begin{bmatrix} 1 & 0 \\\\ 1 & 1 \\end{bmatrix}$   \\newline\\\\ it transformed the coordinate plane by shearing $\\hat{\\imath}$:   \\newline\\\\\n\n\\begin{tikzpicture}\n\\draw[<->, thick] (-3.5,0) -- (3.5,0);\n\\draw[<->, thick] (0,-3.5) -- (0,3.5);\n\\draw[->, ultra thick, red] (0,0) -- (1,1) node [below right] {$\\hat{\\imath}'$};\n\\draw[->, ultra thick, blue] (0,0) -- (0,1) node [above left] {$\\hat{\\jmath}'$};\n\\draw [gray!70] (-3, -3) grid (3, 3);\n\\pgftransformcm{1}{1}{0}{1}{\\pgfpoint{0cm}{0cm}}\n\\draw[dashed, violet] (-3,-3) grid (3,3);\n\\end{tikzpicture}\n\n\\newpage\n\n{\\color{red}\\textbf{Example 5.}} Find each product.\t\n \n \\begin{enumerate}[(a)] \n \\item \\quad $\\begin{bmatrix} 2 & 1 \\\\ 1 & 2 \\end{bmatrix} \\begin{bmatrix} 1 &- 1 \\\\ 0 & 3 \\end{bmatrix}$\n \n    \\begin{flushright}\n\t\\begin{tikzpicture}[scale=0.6]\n\t\\draw[gray!70] (-5,-5) grid (5,5);\n\t\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\t\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\t\\end{tikzpicture}\n\t\\end{flushright}\n\\vfill\n \\item \\quad $\\begin{bmatrix} 1 &- 1 \\\\ 0 & 3 \\end{bmatrix} \\begin{bmatrix} 2 & 1 \\\\ 1 & 2 \\end{bmatrix}$\n \\begin{flushright}\n\t\\begin{tikzpicture}[scale=0.6]\n\t\\draw[gray!70] (-5,-5) grid (5,5);\n\t\\draw[<->] (-5.5,0) -- (5.5,0) node [right] {$x$};\n\t\\draw[<->] (0,-5.5) -- (0,5.5) node [above] {$y$};\n\t\\end{tikzpicture}\n\t\\end{flushright}\n\\end{enumerate}\n\n\n\n\\newpage\n\n\\begin{enumerate}[(a)] \\addtocounter{enumi}{2}\n \\item \\quad $\\begin{bmatrix} 2 & 4 \\\\ -2 & 3 \\\\ 5 & 1 \\end{bmatrix}$ $\\begin{bmatrix}\n -3 & 6 \\\\ 0 & -2 \n \\end{bmatrix}$\n \\vfill\n \n \\item \\quad $\\begin{bmatrix}\n -3 & 6 \\\\ 0 & -2 \n \\end{bmatrix}$ $\\begin{bmatrix} 2 & 4 \\\\ -2 & 3 \\\\ 5 & 1 \\end{bmatrix}$\n \\vfill\n\\end{enumerate}\n\n\\newpage\n\n\\begin{enumerate}[(a)] \\addtocounter{enumi}{4} \n\\item \\quad $\\begin{bmatrix} 1 & -2 \\end{bmatrix} \\begin{bmatrix}\n4 & -1 \\\\ 5 & 1 \n\\end{bmatrix}$\n\\vfill\n\n\\item \\quad $\\begin{bmatrix}\n0 & 2 & -1 \\\\ 4 & 1 & 0 \\\\ 0 & -1 & 2\n\\end{bmatrix}$\n$\\begin{bmatrix}\n4 & 3 & 0 \\\\ -1 & 0 & 2 \\\\ 1 & 0 & -2\n\\end{bmatrix}$\n\\end{enumerate}\n\\vfill\n\n \n \n\\end{document}\n", "meta": {"hexsha": "4850e883254b01cfb21e4101d3ab87dfa14c9543", "size": 7938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Matrix_Multiplication.tex", "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Matrix_Multiplication.tex", "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "Matrix_Multiplication.tex", "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "avg_line_length": 26.9084745763, "max_line_length": 220, "alphanum_fraction": 0.6095993953, "num_tokens": 3288, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.58535275523423}}
{"text": "\\section{Hyper-parameter Optimization}%\n\\label{sec:hyper_parameter_optimization}\n\nBefore a machine learning model can be trained, certain decisions regarding the\narchitecture of the network have to be made. This involves parameters such as\nthe number of layers in feedforward networks, kernel sizes in\nconvolutional nets or, as in the case of the ESN, sparsity and spectral radius\nof the reservoir.  These parameters are called \\emph{hyper-parameters} (HP) and\nthey fill a compact space $\\mathbf{X}$.  The goal of HP optimization is to find\na configuration $\\mathbf{x_i}$ that maximizes the performance of the network:\n\\begin{equation}\n  \\arg \\max_{\\mathbf{x_i} \\in \\mathbf{X}} f(\\mathbf{x_i})\n\\end{equation}\nThe function $f$ is not known and very expensive to evaluate as one evaluation\namounts to the creation of a trained NN.  Often this optimization problem is\nsolved by handily tuning each HP until a \\emph{reasonably} good\n$\\mathbf{x_{\\text{opt}}}$ is found, but there are a few methods that can be\napplied to automate this process.  The two simplest methods are \\emph{grid\nsearches} and \\emph{random searches} of the HP space.  First, the space of\nvalid HPs is defined.  For example, a number of units $N = (10, 20, 30,...,\n100)$ and a learning rate $\\eta = (0.1, 0.01, 0.001, ...)$. Grid search then\nperforms an exhaustive search of all parameters, which is very simple to\nimplement but suffers from the curse of dimensionality.  Random search randomly\nsamples $\\mathbf{x_i}$ from the HP space, which does not have the problem of\nneeding to perform an exhaustive search and is widely applied in practice, as\nit is still very simple to implement.\n\nThe next section describes an algorithm called \\emph{Bayesian Optimization},\nwhich also samples the next $\\mathbf{x_i}$ but incorporates the knowledge of\nalready evaluated points in the HP space.  It utilizes Bayes' Theorem which,\nadjusted to this problem, states that the \\emph{posterior} probability\ndistribution of a model $M$ (over the HP space $\\mathbf{X}$) given a number of\nalready evaluated samples $A \\in \\mathbf{X}$ is:\n\\begin{equation}\n  P(M|A) \\propto P(A|M) P(M),\n\\end{equation}\nwhere $P(M)$ is the \\emph{prior} probability distribution over $X$, and\n$P(A|M)$ the \\emph{likelihood} of the samples given $M$.\n\n\n\\subsection{Bayesian Optimization}%\n\nThe following description is largely based on article~[\\cite{brochu2010bayesopt}]\nand will briefly introduce the concept of Gaussian\nProcesses and how they are used in Bayesian Optimization as introduced\nby~[\\cite{williams1996gaussian}].  In summary, the Bayesian Optimization\nalgorithm tries to maximize an objective function $f$ by balancing exploration\n(evaluating areas where the true values of $f$ are very uncertain) and\nexploitation (which tries to evaluate $f$ where it is expected to be high). It\nrests upon the concept of Gaussian Processes (GP), which can be used to define\na distribution over $f$.  From the GP it is possible to calculate an\nacquisition function, which is used to efficiently sample the next\n$\\mathbf{x}_i$ that should be evaluated.  There are different acquisition\nfunctions that emphasize either exploration or exploitation. 
The method of\nBayesian Optimization yields good results with only a few evaluations and is very\nlikely to perform well on objective functions with local maxima.\\\\\n\n\\subsubsection{Gaussian Processes}%\n\\label{ssub:gaussian_processes}\n\nA GP is defined by its mean function $m(\\mathbf{x}_i)$ and its covariance\nfunction $k(\\mathbf{x}_i, \\mathbf{x}_j)$.  This means that to each argument\n$\\mathbf{x}_i$ we assign a random variable $f(\\mathbf{x}_i)$.  In analogy to\nthe Normal distribution, a GP is formally written as:\n\\begin{equation}\n  f \\sim {GP}(m,k)\n\\end{equation}\nwhere $m(\\mathbf{x}_i)$ can be any function but typically is just zero, and a\ncommon covariance matrix is created with a Gaussian kernel:\n\\begin{equation} \\label{eq:cov}\n  \\mathbf{K}_{ij} = k(\\mathbf{x}_i, \\mathbf{x}_j) =\n      \\exp \\bigg(\\frac{- ||\\mathbf{x}_i - \\mathbf{x}_j||^2}{2}\\bigg).\n\\end{equation}\nThis kernel is close to one for values close to each other and approaches zero\nas the values grow further apart, which implies that close values are highly\ncorrelated.\nWith this definition, a GP is equivalent to a multivariate Gaussian\n$\\mathcal{N}(\\mu, \\Sigma)$ with mean vector $\\mu$ and a covariance matrix\n$\\Sigma$:\n\\begin{align}\n  \\mu_i &= m(\\mathbf{x}_i) \\\\\n  \\Sigma_{ij} &= k(\\mathbf{x}_i, \\mathbf{x}_j)\n\\end{align}\n\nGiven mean and covariance we can use the GP as a prior for the function $f$ whose\nmaximum we want to find. By choosing a zero mean, the only actual\nprior information is that $f$ varies smoothly, which is implied by the Gaussian kernel.\nThe interesting part is how to update this prior with information that is\ngained by evaluating $f$ at a certain point to obtain a posterior.  Assuming that\nwe already have $n$ observations $\\{\\mathbf{x}_{1:n} ; f(\\mathbf{x}_{1:n})\\}$\nwe can find the covariance matrix $\\mathbf{K}$ with Eq.~\\ref{eq:cov}.  To find\nthe probability distribution of an unobserved point $\\mathbf{x}_{n+1}$, the\nconditional probability of $f_{n+1}$ given the previous observations (also\ncalled predictive distribution) is needed:\n\\begin{equation}\n  P(f_{n+1} | \\mathbf{x}_{1:n} ; f(\\mathbf{x}_{1:n})) \n    = \\mathcal{N}(\\mu_p(\\mathbf{x}_{n+1}) , \\sigma_p^2(\\mathbf{x}_{n+1})),\n\\end{equation}\n\nwhere $\\mu_p$ and $\\sigma_p$ are the predicted mean and standard deviation at\nan unknown point.  They can be calculated thanks to the properties of the GP,\nwhich state that the observations and the arbitrary point $\\mathbf{x}_{n+1}$\nare jointly Gaussian:\n\\begin{align}\n  &\\begin{bmatrix} f(\\mathbf{x}_{1:n}) \\\\ f(\\mathbf{x}_{n+1}) \\end{bmatrix}\n  \\sim \\mathcal{N} \\bigg(\n    0,  \\begin{bmatrix} \n           \\mathbf{K} & \\mathbf{k}^T \\\\ \n           \\mathbf{k} & k(\\mathbf{x}_{n+1}, \\mathbf{x}_{n+1})\n        \\end{bmatrix}\n  \\bigg) \\\\\n  &\\mathbf{k} = \\begin{bmatrix}\n      k(\\mathbf{x}_1, \\mathbf{x}_{n+1}) &\n      \\dots &\n      k(\\mathbf{x}_n, \\mathbf{x}_{n+1})\n  \\end{bmatrix}\n\\end{align}\n\nFrom this it is possible to derive that:\n\\begin{align}\n  \\mu_p (\\mathbf{x}_{n+1}) &= \\mathbf{k}^T \\mathbf{K}^{-1} f(\\mathbf{x}_{1:n}) \\\\\n  \\sigma_p^2 (\\mathbf{x}_{n+1}) &= k(\\mathbf{x}_{n+1}, \\mathbf{x}_{n+1}) \n                               - \\mathbf{k}^T \\mathbf{K}^{-1} \\mathbf{k}\n\\end{align}\n\nThe resulting predictive distribution is typically very cheap to evaluate as\nthe number of observations is low. 
With this distribution it is possible\nto find the so-called \\emph{acquisition function}, which enables an educated\nguess of the next best point of evaluation of the objective function.\n\n\\label{sub:bayesian_optimization}\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{gpopt_01.pdf}\n  \\includegraphics[width=\\linewidth]{gpopt_02.pdf}\n  \\includegraphics[width=\\linewidth]{gpopt_04.pdf}\n  \\includegraphics[width=\\linewidth]{gpopt_08.pdf}\n  \\caption{A timeline of Bayesian Optimization. The plots show how the GP\n    approximates the objective function better and better after each iteration.\n    The maximum of the green acquisition function indicates where $f$ will be\n    sampled next.  The uncertainty of already observed points is zero in this\n    case, as there is no noise incorporated in the example. However, noisy\n    objective functions can be dealt with by only slight additions to the\n    described algorithm. An in-depth description of Gaussian Processes and\n    how they can be applied to ML can be found in~[\\cite{rasmussen2004gaussian}].\n  }\n  \\label{fig:gpopt_01}\n\\end{figure}\n\n\n\\subsubsection{Acquisition function}%\n\\label{ssub:acquisition_function}\n\nThe acquisition function is obtained from the predictive distribution of the \nGP and is defined such that it is high where the objective function $f$ is\n\\emph{potentially} high.  The probability of large objective values is high\nwhere the uncertainty or the mean (or both) of the GP is large. \nMaximizing the acquisition function amounts to sampling $f$ at\n\\begin{equation}\n  \\mathbf{x}_{n+1} = \\arg \\max_{\\mathbf{x_i} \\in \\mathbf{X}} \n                     \\text{ PI}(\\mathbf{x}_i).\n\\end{equation}\nThe function PI is an example of an acquisition function called the\n\\emph{probability of improvement}:\n\\begin{align}\n  \\text{PI}(\\mathbf{x}_i) &= P(f(\\mathbf{x}_i) > f(\\mathbf{x}^+)) \\\\\n  &= \\Phi\\bigg( \\frac{\\mu_p(\\mathbf{x}_i) - \\mu_p(\\mathbf{x}^+)}\n                     {\\sigma_p(\\mathbf{x}_i)}  \\bigg) \\nonumber\n\\end{align}\nHere $\\Phi$ is the cumulative distribution function of the standard normal\ndistribution. The emphasis of PI clearly\nlies on exploitation, as samples with a mean lower than the current maximal\nmean will never reach PI values over 0.5.  More sophisticated acquisition\nfunctions than PI, which are not purely driven by exploitation, include\n\\emph{expected improvement}. 
A description of various acquisition functions\nand Bayesian Optimization as a whole can be found in~[\\cite{brochu2010bayesopt}].\n", "meta": {"hexsha": "a00f6d19d180a73ec5847ff12bdce30ad426fa6f", "size": 8941, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mainmatter/hyperparameter.tex", "max_stars_repo_name": "nmheim/thesis", "max_stars_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-22T12:17:23.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-22T12:17:23.000Z", "max_issues_repo_path": "mainmatter/hyperparameter.tex", "max_issues_repo_name": "nmheim/thesis", "max_issues_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mainmatter/hyperparameter.tex", "max_forks_repo_name": "nmheim/thesis", "max_forks_repo_head_hexsha": "feafb9f5c7bcf6b6473d3fca844a33dc25dcff0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.8011363636, "max_line_length": 81, "alphanum_fraction": 0.7354882004, "num_tokens": 2475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006919925839875, "lm_q2_score": 0.7310585844894971, "lm_q1q2_score": 0.5853527547105248}}
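To make the preceding formulas concrete, here is a compact NumPy sketch of one step of Bayesian Optimization: the GP predictive mean and standard deviation computed from the jointly Gaussian formulas above, followed by the probability-of-improvement acquisition function. The toy objective, the candidate grid, and the small jitter added to $\mathbf{K}$ for numerical stability are illustrative choices, not prescriptions from the cited articles.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def kernel(a, b):
    # Gaussian kernel k(x_i, x_j) = exp(-||x_i - x_j||^2 / 2), 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

def gp_predict(x_obs, f_obs, x_new):
    # predictive mean and std of a zero-mean GP at the points x_new
    K = kernel(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))  # jitter
    k = kernel(x_obs, x_new)
    K_inv = np.linalg.inv(K)
    mu = k.T @ K_inv @ f_obs
    var = 1.0 - np.einsum('ji,jk,ki->i', k, K_inv, k)  # k(x, x) = 1 here
    return mu, np.sqrt(np.maximum(var, 1e-12))

def probability_of_improvement(mu, sigma, mu_best):
    # PI(x) = Phi((mu_p(x) - mu_p(x+)) / sigma_p(x))
    return norm.cdf((mu - mu_best) / sigma)

f = lambda x: np.sin(3.0 * x) + x          # toy objective
x_obs = np.array([0.2, 1.0, 2.5])          # already evaluated samples
x_new = np.linspace(0.0, 3.0, 301)         # candidate grid
mu, sigma = gp_predict(x_obs, f(x_obs), x_new)
pi = probability_of_improvement(mu, sigma, mu.max())
print("next evaluation at x =", x_new[np.argmax(pi)])
\end{verbatim}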
{"text": "\\section{Physics Background}\n\n\\subsection{Resonant Cavities}\nWe begin our discussion with a description of the physics problem at hand, as well as a derivation for the mathematics used. Implicitly in the title of this project we are looking at \\emph{resonant cavities}, which are volumes used to store standing waves. In the case of Electromagnetic waves that means that the walls of this cavity are perfect conductors. Most physics students will be able to draw parallels to a vibrating string as it is a common wave PDE problem. A one dimensional parallel for electromagnetic waves would be two parallel mirrors separated by a distance L. In this case, the normal modes are electromagnetic waves that bounce between the mirrors such that the total trip is an integer number over wavelengths\\cite{Zangwill}. This would be analogous to the \\emph{standing wave} on the string where the nodes and antinodes are stationary.\n\nThe approach we will use to find the standing waves of a conducting cavity focuses on time-harmonic and divergence-free solutions of the homogeneous wave equation in a volume $V$ with surface $S$\\footnote{At this point I am showing my future aspirations of being a theorist by using natural units $c=\\hbar=1$. This should make little difference to the physics we are studying, and it makes the equations slightly nicer.}.\n\\ea{\n\\qty(\\laplacian + \\epsilon\\mu\\omega^2)\\va{E} &= 0 \\ \\text{ in V} \\label{eq:Helmholtz}\\\\\n\\div \\va{E} &= 0 \\ \\text{ in V}\\\\\n\\value{n} \\cross \\va{E} &= 0 \\ \\text{ on S}\n}\nHere $\\epsilon$ is the electric permutivity of the volume, $\\mu$ is the magnetic permutivity, and $\\va{E}$ is the electric field which has the following spacial and temporal components:\n\\eq{\n\\va{E} = \\va{E}(\\va{r})e^{-i\\omega t}\n}\nSolving the Helmholtz equation \\eqref{eq:Helmholtz} is analytically possible for simple geometries using separation of variables. These geometries include variations on cylinders, rectangular prisms, and spheres. For further reading on the solutions to the Helmholtz equation you are welcome to suffer like every other Physics graduate student and read \\cite{Jackson}.\n\n\\subsection{Derivation for a Spherical Cavity}\nArmed with a deeper understanding of resonant cavities, we can now turn our attention to the problem at hand, spherical cavities. The electromagnetic normal modes of a spherical resonant cavity are time-harmonic, vector spherical waves that satisfy the perfect-conductor boundary condition $\\vu{r}\\cross \\va{E}\\big{|}_s=0$ at the cavity\u2019s walls \\cite{Zangwill}. If we take $u(r,\\theta,\\phi)$ to be the solution to \\eqref{eq:Helmholtz} then our normal modes would take the form of transverse magnetic and transverse electric waves.\n\\ea{\n\\va{E}_E = -i\\omega\\va{r}\\cross\\grad u &\\hlw{.05} \\va{E}_M = \\curl(\\va{r}\\cross\\grad u)\\\\\n\\va{B}_E = -\\curl(\\va{r}\\cross \\grad u) &\\hlw{.05} \\va{B}_M = -i\\omega\\va{r}\\cross\\grad u\n}\nMost readers will find a parallel to other areas of Electrodynamics in that our solution to the Helmholtz equation will be a linear combinations of radial equations multiplying the spherical harmonics. In this case our radial function will be a combination of spherical Bessel functions ($j_l(kr)$) and spherical Neumann functions ($n_l(kr)$.\n\\eq{\nu(\\va{r}) = \\sum_{lm}\\qty(A_l j_l(kr) + B_l n_l(kr))Y_{lm}(\\theta,\\phi)\n}\nHere $k=\\frac{\\omega}{c}$. 
It will be related to the zeros of these Bessel functions, which is how we will obtain our Eigenvalues or Eigenfrequencies. In this case we are only concerned with behavior inside the cavity. Since $n_l$ diverges at the origin we are able to force all $B_l$ to be zero~\\cite{Jackson}. Thus the form of $u$ inside the cavity is the following:\n\\eqb{\nu_{lm}(\\va{r}) = A_l j_l(kr)Y_{lm}(\\theta,\\phi)\n}\nComputing the various transverse waves above will require some vector calculations on this solution. The following identity will prove to be quite useful due to the nature of spherical harmonics.\n\\eq{\n\\curl(\\va{r}\\cross\\grad u) = \\va{r}\\laplacian u - 2\\grad u - r\\pdv{r}\\grad u\n}\nThis allows us to write the form of the transverse electric and magnetic waves~\\cite{Zangwill}.\n\\ea{\nE_E &= i\\omega j_l(kr)\\qty(\\frac{1}{\\sin\\theta}\\pdv{Y_{lm}}{\\phi}\\vu{\\theta}-\\pdv{Y_{lm}}{\\theta}\\vu{\\phi}) \\label{eq:wave1}\\\\\n\\nonumber\\\\\nE_M &= -\\qty(\\frac{l(l+1)}{r^2}\\va{r}+\\qty[\\vu{\\theta}\\pdv{\\theta}+\\vu{\\phi}\\frac{1}{\\sin\\theta}\\pdv{\\phi}]\\frac{1}{r}\\pdv{r})rj_l(kr)Y_{lm}(\\theta,\\phi) \\label{eq:wave2}\n}\n\n\\subsection{Analytic Solutions}\n\nIn order to make sure that we have a complete understanding of the Bessel functions we will do a brief review. Bessel functions are the solutions to Bessel's differential equation:\n\\eq{\nx^2\\dv[2]{y}{x} + x\\dv{y}{x}+(x^2-m^2)y = 0\n}\nMost readers will be familiar with the form of cylindrical Bessel functions as the solution to the Laplace equation in cylindrical coordinates. These functions occur in the case where $m$ is an integer value. The solutions to the Helmholtz equation are the spherical Bessel functions. These functions occur when $m$ is a half integer. In such a case we make the transformation $m^2\\to l(l+1)$ so that $l$ can be used as an integer index for the functions. It is customary to represent spherical Bessel functions with lowercase letters and cylindrical Bessel functions with uppercase~\\cite{handbook}. While these functions can be related to cylindrical Bessel functions, they are unique with respect to the fact that there is a much simpler way of writing them. Specifically we will use Rayleigh's formulas and then push forward.\n\\ea{\nj_l(x) &= (-1)^l x^l \\qty(\\frac{1}{x}\\dv{x})^l\\frac{\\sin(x)}{x}\\\\\nn_l(x) &= (-1)^{l+1} x^l \\qty(\\frac{1}{x}\\dv{x})^l\\frac{\\cos(x)}{x}\n}\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=.8\\linewidth]{Bessel.png}\n    \\caption{Spherical Bessel functions of various order}\n    \\label{fig:bessel}\n\\end{figure}\nWhile we could go down an endless rabbit hole when it comes to Bessel functions, I believe this will be enough for the problem at hand.\n\nWe're most interested in finding the Eigenfrequencies of the spherical cavity. To do so we look back at the boundary condition $\\vu{r}\\cross \\va{E}\\big{|}_s=0$. When applied to equations \\eqref{eq:wave1} and \\eqref{eq:wave2} this means that the angular components of those waves must be zero for all values of $\\theta$ and $\\phi$ when $r=R$ (the outer radius of the cavity). 
The only way to do so is to set either the Bessel function or the derivative of the Bessel function to zero at the walls.\n\\ea{\nj_l(k_{n,l}R) &= 0\\\\\n\\dv{r}\\qty(rj_l(k_{n,l}r))\\big{|}_{r=R} &= 0 \\label{eq:annoying}\n}\n%The magnetic wave equation requires us to make use of the derivative relation\n%\\eq{\\qty(\\frac{1}{z}\\dv{z})^m \\qty(z^{l+1}J_l(z)) = z^{l-m+1}J_{l-m}(z)}\n%Thus simplifying equation \\eqref{eq:annoying} to the following:\n%\\eq{\\dv{r}\\qty(rj_l(kr)) = \\frac{1}{k}\\dv{kr}\\qty(rj_l(kr))}\n\nThis allows us to write down the Eigenfrequencies in terms of the zeros of the Bessel functions ($x_{n,l}$).\n\\eqb{\n\\omega = x_{n,l}\\frac{c}{R}\n}\nThere are multiple Eigenfrequencies, as the zeros of the order-$l$ Bessel function are not the same as the zeros of the order-$(l+1)$ function. Moreover, due to the periodic nature of $\\sin$ and $\\cos$, each order of Bessel function has many zeros.\n\nAt this point it is clear that our goal is to calculate the zeros of the various Bessel functions. Doing so by hand is rather straightforward, but it takes time, especially for higher order Bessel functions. Since these functions have been around for so long, most people consult a table of zeros when they are needed. In our case we will compute them numerically instead.\n\n\\section{Computational Solutions}\n%Be sure to mention what you are using and why\nDuring lecture we looked at multiple methods for numerical root finding. These methods have a direct parallel to a sort of graphical search in that they were easy to animate. Specifically we looked at four methods:\n\\begin{enumerate}\n    \\item Simple Search: We take the midpoint of a specific range and cut the step size in half each iteration\n    \\item Bisection: This method utilizes the intermediate value theorem to find one root in a given interval\n    \\item Secant Line: Draws a line connecting two points and uses its intersection with zero to start the next iteration.\n    \\item Tangent Method: Utilizes the derivative of the function to draw a tangent line to begin the next iteration.\n\\end{enumerate}\nSince our Bessel functions are analytic, the tangent method should be the most efficient. The scipy library holds Bessel functions and derivatives of Bessel functions for integer and half integer values. I used that along with the algorithms mentioned in class to design my implementation.\n\nComparing these methods proved that assumption correct. We were also able to animate the search to more intuitively see how accurate they were. In general, even the simple search eventually hit the desired accuracy of $\\pm 1\\cross 10^{-6}$ within the max iterations.\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{simple.png}\n    \\caption{The final plot of the simple search's attempt to find a zero of a Bessel function of the first kind}\n    \\label{fig:simple}\n\\end{figure}\n\nWhile the animations may look nice, I have a larger issue at hand. There are different algorithms in existence that can calculate the zeros of spherical Bessel functions quite exactly using the relation to cylindrical Bessel functions. In particular I think of the \\href{https://scipy-cookbook.readthedocs.io/items/SphericalBesselZeros.html}{provided link} as my direct competition. This method also makes use of an algorithm that is able to use the previous zero to calculate the next one. Ideally I would like to be able to do something that these methods can't do.\n\nThese methods do have one fatal flaw: they require you to calculate every zero previous to the one that you are looking for. 
Since our method is more \\emph{graphical} than a series, I do not have the same constraint. The hard part I need to overcome is finding a way to estimate a starting point for my tangent search. \n\nIf we ignore the area right around zero (\\emph{I'll never beat the other algorithms at that small of a value of x anyway}), a spherical Bessel function of order 0 behaves like a $\\sin$ function of decreasing amplitude.\n\\eq{\nj_0 = \\frac{\\sin(x)}{x}\n}\nThe zeros of a sine function are integer multiples of $\\pi$, so I already know exactly where these zeros are. Now if we look at a Bessel function of order 1 we essentially just subtract $\\cos$.\n\\eq{\nj_1 = \\frac{1}{x}(\\frac{\\sin(x)}{x} - \\cos(x))\n}\nOnce again it is pretty easy to guess these roots: they occur at approximately $z\\pi-\\frac{\\pi}{2}$ for integer $z$. At the risk of being too general I thought I could generalize this process. For a Bessel function of order $l$ the n-th zero should occur near:\n\\eqb{\n\\text{Estimated Zero } = (n+1)\\pi-\\frac{l\\pi}{2}\n}\nArmed with this knowledge I might actually stand a chance against those other algorithms for higher values of $x$, which is the same as saying higher Eigenfrequencies. Testing this out went beautifully: even at order 16 we were able to calculate the 30th root with only 7 iterations.\\\\\n\\\\\nThus I believe I can make the following conclusion. Whilst it is more accurate to calculate the zeros of spherical Bessel functions analytically or with the series method \\href{https://scipy-cookbook.readthedocs.io/items/SphericalBesselZeros.html}{outlined here}, our iteration excels at higher orders and higher Eigenfrequencies. \n\n\\section{Applications and Current Research}\n\nWhilst I was searching for information on resonant cavities, I came across some interesting research currently happening with regard to this topic. First I found \\cite{research1}, which used radial basis functions to solve the Helmholtz equation in rectangular coordinates as opposed to spherical ones. While I did not completely understand the math, the paper did have a very interesting visualization of the normal modes inside the chamber. 
\n\nThus I believe I can make the following conclusion: whilst it is more accurate to calculate the zeros of spherical Bessel functions analytically, or to use the series method \\href{https://scipy-cookbook.readthedocs.io/items/SphericalBesselZeros.html}{outlined here}, our iteration excels at higher orders and higher Eigenfrequencies. \n\n\\section{Applications and Current Research}\n\nWhilst I was searching for information on resonant cavities I came across some interesting research currently happening with regards to this topic. First I found \\cite{research1}, which used radial basis functions to solve the Helmholtz equation in rectangular coordinates as opposed to spherical ones. While I did not completely understand the math, the paper did have a very interesting visualization of the normal modes inside the chamber. I also found \\cite{research2}, which technically used an overall cylindrical geometry; however, it did make use of perturbation theory, which I found interesting.", "meta": {"hexsha": "764384027c827843a59d891c2b7ae8a580f31e35", "size": 12056, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex/Content.tex", "max_stars_repo_name": "ubsuny/spherical-bessel-final20", "max_stars_repo_head_hexsha": "11dd8d43663c7d62989771ddbfbffe6b9b32f230", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Latex/Content.tex", "max_issues_repo_name": "ubsuny/spherical-bessel-final20", "max_issues_repo_head_hexsha": "11dd8d43663c7d62989771ddbfbffe6b9b32f230", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex/Content.tex", "max_forks_repo_name": "ubsuny/spherical-bessel-final20", "max_forks_repo_head_hexsha": "11dd8d43663c7d62989771ddbfbffe6b9b32f230", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.6363636364, "max_line_length": 859, "alphanum_fraction": 0.7626907764, "num_tokens": 3124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5853527522810328}}
{"text": "\\chapter{Unsupervised Learning}\n\\label{chp:unsuplea}\nIn unsupervised learning there is no response variable Y. There is only a set of features $X_1, X_2,..., X_p$ measured on n observations. \nSupervised learning compared to unsupervised learning comes with some challenges. The tasks tends to be more subjective and there is no simple goal such as prediction of a response as there is in supervised learning. Furthermore there is no accepted method for determining the quality of the result, due to the fact that there is no Y to check the results up against.\nIn this section the focus will be on the  technique called \\emph{clustering} which is used to discover subgroups or clusters in a data set. \nThis is done by dividing the data set into clusters so that the observations in each of the clusters are similar to each other, and the observations belonging to clusters are different compared to the observations in other clusters.\nThe problem is to determine if observations are similar or different to each other.\n \n\\section{K-Means Clustering}\n\\label{chp:clus}\nK-means clustering is a technique to divide observations in a data set into a prespecified k number of clusters. Each observation is assigned to one of the k clusters. In figure \\ref{fig:K-meanex} a data set is assigned to three clusters. This means that \\emph{k} is equal to 3. The data set consists of 50 observations. Each cluster is marked with a color and the cross is the center point of a cluster. This point is also called a \\emph{centroid}.\n\n\\myFigure{K-mean.PNG}{Example of the K-means algorithm where k is 3}{fig:K-meanex}{0.55}\n\nK-means clustering needs to satisfy two properties. \n\\begin{itemize}\n\t\\item Each observation belongs to one of the clusters\n\t\\item Clusters cannot overlap, which means that no observation can belong to more than one cluster\n\\end{itemize} \n\nTo get a good clustering the within-cluster variation needs to be as small as possible. So the algorithm tries to minimize this. The within-cluster variation is commonly calculated with the sum of squared pairwise Euclidean distance between observations and the centroid they belong to, divided by the number of observations belonging to that centroid. \nThe K-means clustering algorithm can then be written as in algorithm \\ref{algo:KMeansClustering}. Other initialization algorithms than the one described in line 1 exists such as K-means++ \\citep{kplusplus}.\n\n\\begin{algorithm}\n\t\\caption{K-Means Clustering}\n\t\\label{algo:KMeansClustering}\n\t\\begin{algorithmic}[1]\n\t\t\\State A random number from 1 to k is chosen for each observation. This number will act as an initial cluster assignment for every given observation.\n\t\t\n\t\t\\State \n\t\tIterate until the observations stop changing between clusters.\n\t\t\\begin{enumerate}[label=(\\alph*)]\n\t\t\t\\item The centroids are calculated for each of the k clusters and these centroids act as the vector for the means for each observation in the k'th cluster.\n\t\t\t\n\t\t\t\\item For each observation the Euclidean distance to each centroid is calculated and the observation is assigned to the closest one.\n\t\t\\end{enumerate}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Lab 10.5.1 - K-Means Clustering}\nFirst, 50 random two dimensional data points with a normal distribution are generated. Then the mean is shifted for 25 of the observations such that two clusters should exists. 
After this, the KMeans function from the sklearn library is used to perform the K-means clustering.\nAfter the algorithm has run, the labels and the centroids are saved. The code is shown in listing \\ref{lst:kmeanscode} with the number of clusters set to k = 2.\n\\begin{lstlisting}[language=Python, label=lst:kmeanscode, caption=The KMeans function]\nk = 2\n# stack the coordinates into an n x 2 array (zip alone does not work in Python 3)\nkmeans = KMeans(n_clusters=k, init=\"k-means++\", n_init=20).fit(np.column_stack((X, Y)))\nLabels = kmeans.labels_\ncentroids = kmeans.cluster_centers_\n\\end{lstlisting}\n\nThe KMeans function also takes two other parameters, namely \\emph{init} and \\emph{n\\_init}. Init states which initialization approach is used; in this case it is \\emph{K-means++}. The n\\_init parameter is how many times the initialization should be performed, and this is set to 20. The algorithm then chooses the best initialization of the 20 that were run.\n\nThe clusters are visualized with the code in listing \\ref{lst:kmeansplot}.\n\\newpage\n\\begin{lstlisting}[language=Python, label=lst:kmeansplot, caption=Plotting the two clusters]\nlines = plt.plot(centroids[:,0], centroids[:,1], 'kx')\nplt.setp(lines, ms=15.0)\nplt.setp(lines, mew=3.0)\n\nfor i in range(k):\n\tdsx = X[np.where(Labels==i)]\n\tdsy = Y[np.where(Labels==i)]\n\tplt.plot(dsx, dsy, 'o')\n\nplt.show()\n\\end{lstlisting}\n\nInitially the two centroids are plotted. This is done in lines 1 to 3. The \\emph{kx} argument is passed so they are marked with a cross. For plotting the observations a for loop is used, which runs through the two possible clusters. The \\emph{o} is added so the observations are plotted as dots. \nThe result can be seen in figure \\ref{fig:kmeansResult}.\n\n\\myFigure{KmeansResult.PNG}{The plotted result of lab 10.5.1}{fig:kmeansResult}{0.6}\n\nThe centroids are marked with crosses and each observation is either blue or green depending on which cluster it is assigned to. The observations are clustered as expected, but only because k was set correctly. If the true number of clusters were not known, then choosing k would have been a more difficult task; this is often the case when clustering is needed.\n\n\n\\section{Hierarchical Clustering}\nHierarchical clustering does not require a prespecified k value as K-means clustering does.\nFurthermore, hierarchical clustering produces a dendrogram, which shows the observations as a tree structure.\nThe most common type is agglomerative clustering, also known as the bottom-up approach.\n\n\\subsection{Dendrograms}\nFigure \\ref{fig:dendrogram} shows a dendrogram constructed from a data set with 50 observations.\nOne leaf is equal to one of the 50 observations. Observations that fuse into branches indicate similarity between the observations. The observations that fuse lowest in the dendrogram are the most similar.\n\nDepending on the data set and the observations, the dendrogram can help determine the number of clusters that fits the data best. Line number one shows where to cut the dendrogram to divide the observations into two clusters. Line number two shows the cut dividing the observations into three clusters. Looking at the dendrogram, two clusters would yield the best result.\n\n\n\\myFigure{Dendrogram.PNG}{Dendrogram with 50 observations}{fig:dendrogram}{1}\n\n\\FloatBarrier\nTo show how this works, a simple data set containing 7 observations is shown in figure \\ref{fig:dendrogramdata} to the right.\nIt can be seen that observations 5 and 7 are close to each other, and observations 1 and 6 are also close to each other.
These are at the bottom of the dendrogram, as can be seen on the left side of figure \\ref{fig:dendrogramdata}. Now the dendrogram is expanded by looking for the next observation that is closest to 5 and 7. Clearly 3 is the closest one, so this is the next leaf in the dendrogram. The same is done for observations 1 and 6, and at the end the dendrogram will look like figure \\ref{fig:dendrogramdata}.\n\n\\myFigure{DendrogramData.PNG}{Illustration of a dendrogram and its data set}{fig:dendrogramdata}{1}\n\n\\FloatBarrier\n\\subsection{The Hierarchical Clustering Algorithm}\nTo produce the dendrogram the hierarchical clustering algorithm is used. The first step is to compute the Euclidean distance between each pair of observations to obtain the dissimilarities.\nThen it runs iteratively through all the observations. It starts at the bottom and finds the two observations, each acting as its own cluster, that are closest to each other, and fuses those two so that there are now $n-1$ clusters. This continues, so the next two clusters that are most similar are fused, making the number of clusters $n-2$. This is done until all observations belong to one big cluster.\n\nThe hierarchical clustering algorithm is shown in algorithm \\ref{algo:HierarchicalClustering}.\n\n\\begin{algorithm}\n\t\\caption{Hierarchical Clustering}\n\t\\label{algo:HierarchicalClustering}\n\t\\begin{algorithmic}[1]\n \t\t\\State Given n observations, measure the pairwise dissimilarities for all $\\binom{n}{2} = n(n-1)/2$ pairs. Treat each observation as its own cluster.\n \t\t\n \t\t\\State for \\emph{i} $= n,n-1,...,2:$\n \t\t\\begin{enumerate}[label=(\\alph*)]\n \t\t\t\\item Examine the pairwise dissimilarities among the i clusters and identify the pair which is most similar and fuse them. The pairwise dissimilarity describes at which height the fusion should be placed in the dendrogram.\n \t\t\t\\item Compute the new pairwise inter-cluster dissimilarities for the remaining $\\emph{i}-1$ clusters.\n \t\t\\end{enumerate}\n \t\\end{algorithmic}\n \\end{algorithm}\n\nOne issue has not been addressed. The algorithm needs to be capable of defining the dissimilarity between two clusters if one or both contain multiple observations. This is done with an extension called linkage, which defines the dissimilarity between two groups of observations. The most common types of linkage are complete, average, single and\ncentroid. These are explained in table \\ref{table:linkage}.\n\n\\begin{table}\n\t\\begin{center}\n\t\t\\begin{tabular}{ | l | p{12cm} |}\n\t\t\t\\hline\n\t\t\tLinkage & Description \\\\ \\hline\n\t\t\tComplete & Maximal intercluster dissimilarity. Record the largest of the pairwise dissimilarities between observations in clusters A and B \\\\ \\hline\n\t\t\tSingle & Minimal intercluster dissimilarity. Record the smallest of the pairwise dissimilarities between observations in clusters A and B \\\\ \\hline\n\t\t\tAverage & Mean intercluster dissimilarity. Record the average of the pairwise dissimilarities between the observations in clusters A and B\\\\\n\t\t\t\\hline\n\t\t\tCentroid & Dissimilarity between the centroids/means of the clusters A and B\n\t\t\t\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Different linkage methods}\n\t\\label{table:linkage}\n\\end{table}\n\n\nWithin statistics, complete and average linkage are generally preferred, as they produce more balanced dendrograms \\citep[pp. 394-395]{ISLR}. \n\n%Sometimes you need to scale your data set to obtain standard deviation.
New problems then arise: which dissimilarity measure should be used, and which linkage? This is a difficult problem, and there is no standard answer.\n\n\\subsection{Lab 10.5.2 - Hierarchical Clustering}\nA random data set is generated the same way as in lab 10.5.1.\nThe hierarchical clustering dendrogram will be produced using complete, single, and average linkage, with Euclidean distance for measuring dissimilarity.\nTo perform the hierarchical clustering the library scipy is imported.\nThe code for creating the three types of linkage is shown in listing \\ref{lst:Linkage}.\n\n\\begin{lstlisting}[language=Python, label=lst:Linkage, caption=Applying the three different linkage styles on the data set]\ncomplete = linkage(X, method='complete', metric='euclidean')\nsingle = linkage(X, method='single', metric='euclidean') \naverage = linkage(X, method='average', metric='euclidean')\n\\end{lstlisting}\n\nThree parameters are passed: the data set, the method describing the chosen linkage algorithm, and the metric. The result of each of the three different dendrograms can be seen in figures \\ref{fig:dendrogramsCompAvg} and \\ref{fig:dendrogramsingle}.\n%To create the dendrograms the function \\emph{dendrogram} from the scipy library is used. The only parameter that is needed to build the dendrograms is the three linkage objects that was made in listing \\ref{lst:Linkage}.\n\n%When plotting the dendrograms the only difference is which linkage object is parsed as a parameter.
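\n\nThe dendrograms themselves are drawn with the \\emph{dendrogram} function from the scipy library; a minimal sketch (the same call works for the other two linkage objects):\n\\begin{lstlisting}[language=Python, label=lst:dendrogramplot, caption=Plotting a dendrogram from a linkage object]\nfrom scipy.cluster.hierarchy import dendrogram\nimport matplotlib.pyplot as plt\n\ndendrogram(complete)\nplt.show()\n\\end{lstlisting}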
\n\n\n\\mySubFigure{DendrogramComplete.PNG}{DendrogramAverage.PNG}{Dendrograms for linkage methods complete and Average}{Complete}{Average}{fig:dendrogramsCompAvg}{fig:denComplete}{fig:denAverage}\n\n\n\\myFigure{DendrogramSingle.PNG}{Dendrogram for the linkage method single}{fig:dendrogramsingle}{0.7}\n\nAs expected, the dendrograms are different from each other.\nLooking at figures \\ref{fig:denComplete} and \\ref{fig:denAverage}, average and complete look more balanced compared to figure \\ref{fig:dendrogramsingle}, where the method single is used. Both average and complete also placed the 25 observations correctly in their respective clusters. Single instead placed one observation in its own cluster and 49 observations in the other. By looking at the dendrogram it can also be seen that the three observations on the left should probably be placed in two different clusters, making a total of four clusters. Using four clusters with the single linkage method also gave a better result: now only four observations were placed in the wrong cluster.", "meta": {"hexsha": "6aee25673f58bda8550a65bb3cc13d358ec083da", "size": 12694, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter/unsupervised_learning.tex", "max_stars_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_stars_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/chapter/unsupervised_learning.tex", "max_issues_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_issues_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter/unsupervised_learning.tex", "max_forks_repo_name": "Rotvig/F17Q4---Decision-Support-Systems", "max_forks_repo_head_hexsha": "62c3bb41a8fb4df75f4f038dea7af521cddafa4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.0119760479, "max_line_length": 673, "alphanum_fraction": 0.7896644084, "num_tokens": 3006, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110202, "lm_q2_score": 0.8006919925839875, "lm_q1q2_score": 0.5853527406355791}}
{"text": "\\section{Numerical experiments} \\slabel{exp}\n\nTo evaluate the simple model of \\sref{model}, we conduct a battery of direct numerical simulations.\nThe novel components of the simple model, i.e. viscous drag and mixing, are most pronounced at late times.\nWe primarily direct our effort at simulating higher aspect ratio domains to allow the bubble to reach a dissipative flow.\n\nThe numerical experiments simulate the incompressible Navier-Stokes equations with the Boussinesq approximation:\n\\begin{align}\n\\frac{\\partial u}{\\partial t} + u \\cdot \\nabla u &= \\nu \\nabla^2 u - \\nabla P + A g \\phi \\\\\n\\frac{\\partial \\phi}{\\partial t} + u \\cdot \\nabla \\phi &= D \\nabla^2 \\phi \n\\end{align}\nwhere $u$ is the velocity,\n$\\nu$ is the kinematic viscosity,\n$P$ is the pressure,\n$\\phi$ the non-dimensional density,\nand $D$ is the diffusivity of $\\phi$.\n\nThe initial conditions are quiescent with a horizontal interface perturbed by product of cosine functions and smeared by an error function:\n\\begin{equation}\n\\begin{split}\n\t\\phi(x,y,z,t=0) = \\\\ \n\t\\text{erf}\\left(\\frac{z + a_0 \\cos(2 \\pi (x/\\lambda)) \\cos(2 \\pi (y/\\lambda))}{\\delta})\\right)\n\\end{split}\n\\end{equation}\nwhere $a_0$ is the initial amplitude and $\\delta$ is the initial interface thickness.\nBoth $a_0$ and $\\delta$ are taken to be small enough to minimize their effects on the solution, $0.01$ and $1/128$, respectively.\nThe governing equations and initial condition have four dimensional parameters: $\\nu$, $D$, $Ag$, $\\lambda$.\nThese are combined into 2 non-dimensional numbers, the Grashof number and the Schmidt number:\n\\begin{equation}\n\\text{Gr} = \\frac{A_0 g \\lambda^3}{\\nu^2} \\quad \\text{Sc} = \\frac{\\nu}{D}\n\\end{equation}\nThe Grashof number serves the role of a Reynolds number for instability problems without a consistent characteristic velocity.\nFor this reason, the root of the Grashof number is sometimes called the \\textit{perturbation Reynolds number}~\\cite{Wei2012}:\n\\begin{equation}\n\\text{Re}_p = \\sqrt{\\frac{A_0 g \\lambda^3}{\\nu^2}}\n\\end{equation}\n\nThe domain is $\\left[0.5, 0.5, 64\\right]$ and rotated 45 degrees in the span-wise plane to model $\\lambda = \\sqrt{2}$, transforming the initial condition to:\n\\begin{equation}\n\\begin{split}\n\t\\phi(x,y,z,t=0) = \\\\\n\t\\text{erf}\\left(\\frac{z + a_0 \\cos(\\pi (x+y)) \\cos(\\pi (x-y))}{\\delta})\\right).\n\\end{split}\n\\end{equation}\nThis is done so the span-wise boundaries at $x=\\{0,0.5\\}$ and $y=\\{0,0.5\\}$ are symmetric.\nThe length of the domain is $64/\\sqrt{2} \\approx 45.2$ wavelengths with no-slip walls at the top and bottom.\nBased on a previous validation of the smRTI with no-slip boundaries, we expect the bubble to be unaffected by the top and bottom walls until it reaches 75\\% of the height, or about $17\\lambda$.\nThis provides significantly more data than the $h < 4 \\lambda$ results of Ramaprabhu \\etal~\\cite{Ramaprabhu2012}.\n\nThe model introduced in \\sref{model} assumes the bubbles and spikes are coherent structures, that is they travel at some velocity and have a well defined interface.\nAs the Grashof number increases and the bubbles and spikes break up, departing from the assumptions of the model.\nOn the other hand, at low Grashof number and finite diffusivity, diffusion moves the $\\phi = 0$ interface, as opposed to simply transporting the scalar across it, which also departs from the model assumptions.\nFor these reasons, we restrict our study to an 
intermediate range of Grashof numbers: those which are large enough to sustain bubble dynamics while not being so large as to break the bubbles apart.\nThis range has been identified empirically to be approximately $6 \\times 10^2 \\le \\text{Gr} \\le 6 \\times 10^5$ for Schmidt numbers greater than 1.\n\nThe number of spatial samples needed to resolve the advection-diffusion equation for the scalar goes with the Peclet number to the third power.\nIt is prohibitively expensive to perform calculations at high Schmidt numbers and high Grashof numbers.\n\nSimulations are performed with the NekBox version~\\cite{NekBox} of the Nek5000 code~\\cite{argonne:nekdoc}, which has been previously validated against single-mode Rayleigh-Taylor experiments~\\cite{Hutchinson2016,Wilkinson2007}.\nThe spectral element method implemented by NekBox has purely dispersive errors and converges exponentially with the spectral order~\\cite{fischer:hom}.\nThe resolution parameters, the number of spectral elements, the order of the spectral elements, and the time step were chosen to achieve an accuracy of $10^{-4}$ in the bubble aspect ratio~\\cite{hutchinson2016efficiency}.\n\n\\subsection{Observables}\n\n\\begin{comment}\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figs/swp_Sc_1.0_nu_8.0-t_max_z-0256.eps}\n\\includegraphics[width=\\columnwidth]{figs/swp_Sc_1.0_nu_8.0-t_proj_z-0256.eps}\n\\caption{\\flabel{profiles}\n  Span-wise max and mean of scalar field.\n}\n\\end{figure}\n\\end{comment}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figs/comp-height-8-8}\n\\caption{ \\flabel{heights}\n  Comparison of height metrics at $\\text{Gr} = 4.8\\times10^4$ and $\\text{Sc} = 1$.\n}\n\\end{figure}\n\n\n\\paragraph{Bubble height}\nFor miscible RTI, the shape of the scalar field is due to a combination of advection in the bubble and diffusion across the interface.\nWe can assume an error function-like profile across the interface at the bubble tip, but diffusion across the bubble sidewalls results in a linear profile in both the span-wise mean and maximum of the scalar.\nTo separate the definition of the bubble tip from sidewall mixing, which is incorporated by the decreasing effective Atwood number, we introduce a new definition: the bubble tip is defined as the inflection point in the span-wise maximum scalar profile.\nFor the symmetric case, this span-wise maximum of the scalar is equivalent to the value along the bubble axis.\nWhile mixing leads to a linear decay behind the bubble tip, the profile remains sharp near the bubble tip, decoupling the position of the inflection point from the sidewall mixing.\n\nThis definition of the bubble height is compared to two more traditional definitions, based on a cutoff in the mean or maximum profiles, in \\fref{heights}.\nAt early times, the definition based on the mean profile grows diffusively.\nAt late times, the definition based on the max profile kinks as the linear part of the profile crosses zero and then stagnates.\nThe definition based on the inflection avoids both breakdowns while agreeing with the two traditional definitions within each of their valid ranges.\n\n\\paragraph{Mixed volume}\n\nThe scalar is normalized such that $\\phi \\in \\left[-1, 1\\right]$ and the average $\\bar\\phi = 0$.\nThe purity of the fluid is therefore $\\left| \\phi \\right|$ and the volume of mixed fluid is given by a simple integral:\n\\begin{equation}\nM(t) = \\int \\left( 1 - \\left| \\phi(x,y,z) \\right|\\right) dV\n\\end{equation}\n
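\nIn discrete form this is a direct sum over grid cells; a sketch for a uniform grid (function and variable names are illustrative, not part of the NekBox tooling):\n\\begin{verbatim}\nimport numpy as np\n\ndef mixed_volume(phi, dV):\n    # M(t): integral of the local impurity 1 - |phi| over the domain\n    return float(np.sum(1.0 - np.abs(phi)) * dV)\n\\end{verbatim}\n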
"a49ada60cb9b711d195ffdee52be999b0a454c70", "size": 6811, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exp.tex", "max_stars_repo_name": "maxhutch/2016_smRTI_model", "max_stars_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-06-08T11:33:32.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-08T11:33:32.000Z", "max_issues_repo_path": "exp.tex", "max_issues_repo_name": "maxhutch/2016_smRTI_model", "max_issues_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exp.tex", "max_forks_repo_name": "maxhutch/2016_smRTI_model", "max_forks_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.1262135922, "max_line_length": 253, "alphanum_fraction": 0.7669945676, "num_tokens": 1806, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972616934406, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.585330130539274}}
{"text": "\\section*{Discussion}\n\nOur method produces coherent movement in objects without explicit regularization, and is able to\nconverge on the same order as compared to prior work. It also does not suffer extremely significant deformations around time extremities as compared to D-NeRF, since we are analytically able to ensure that at time 0 there is no motion.\n\nIn addition, our method demonstrates extremely smooth interpolation, as enforced by the structure of our network.\nWe are thus able to exaggerate movements significantly, while ensuring plausibility between frames. In addition, because of the analytic nature\nof splines, we are likely able to perform interesting motion deformations, such as diverging motion paths in order to construct novel motion.\n\nOne issue with our approach is that because it is forced to learn continuous representations, it has some difficulty when surfaces collide and bounce apart, as it must separate each surface but the boundary between them may be very small and hard to distinguish. We expect that in the limit these issues would vanish, but they slow down training.\n\nAnother component of our model are the computational challenges with spline evaluation. We are currently unaware of a fully vectorized method for evaluating splines at a given point, and developing an algorithm for such would benefit our model.", "meta": {"hexsha": "ef0f3a92e6d24448c839ecc8ef426dd6e83f667b", "size": 1350, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "c0_paper/discussion.tex", "max_stars_repo_name": "princeton-computational-imaging/nerf_atlas", "max_stars_repo_head_hexsha": "f66ba284ea440cd816b303cdb7312288901da97e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2021-05-17T13:17:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-05T00:44:44.000Z", "max_issues_repo_path": "c0_paper/discussion.tex", "max_issues_repo_name": "princeton-computational-imaging/nerf_atlas", "max_issues_repo_head_hexsha": "f66ba284ea440cd816b303cdb7312288901da97e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-07T08:31:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-28T06:55:40.000Z", "max_forks_repo_path": "c0_paper/discussion.tex", "max_forks_repo_name": "princeton-computational-imaging/nerf_atlas", "max_forks_repo_head_hexsha": "f66ba284ea440cd816b303cdb7312288901da97e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-05-16T01:06:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-26T01:48:09.000Z", "avg_line_length": 112.5, "max_line_length": 346, "alphanum_fraction": 0.822962963, "num_tokens": 256, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8615382129861583, "lm_q2_score": 0.6791787121629466, "lm_q1q2_score": 0.5851384139751054}}
{"text": "\\hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1multiply}{}\\section{numpp\\+:\\+:differentiation\\+:\\+:symbolic\\+:\\+:multiply$<$ Left, Right $>$ Class Template Reference}\n\\label{classnumpp_1_1differentiation_1_1symbolic_1_1multiply}\\index{numpp\\+::differentiation\\+::symbolic\\+::multiply$<$ Left, Right $>$@{numpp\\+::differentiation\\+::symbolic\\+::multiply$<$ Left, Right $>$}}\n\\subsection*{Public Types}\n\\begin{DoxyCompactItemize}\n\\item \n\\mbox{\\Hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1multiply_a0d483b631e2e23b2f0f316ff192213aa}\\label{classnumpp_1_1differentiation_1_1symbolic_1_1multiply_a0d483b631e2e23b2f0f316ff192213aa}} \n{\\footnotesize template$<$std\\+::size\\+\\_\\+t Active$>$ }\\\\using {\\bfseries derivative} = simplify\\+\\_\\+addition$<$ simplify\\+\\_\\+multiplication$<$ typename Left\\+::template derivative$<$ Active $>$, Right $>$, simplify\\+\\_\\+multiplication$<$ Left, typename Right\\+::template derivative$<$ Active $>$ $>$ $>$\n\\end{DoxyCompactItemize}\n\\subsection*{Static Public Member Functions}\n\\begin{DoxyCompactItemize}\n\\item \n\\mbox{\\Hypertarget{classnumpp_1_1differentiation_1_1symbolic_1_1multiply_a02603172de5069f4a519d66ca326cd53}\\label{classnumpp_1_1differentiation_1_1symbolic_1_1multiply_a02603172de5069f4a519d66ca326cd53}} \nstatic C\\+O\\+N\\+S\\+T\\+E\\+X\\+PR auto {\\bfseries calculate} (auto \\&\\&values)\n\\end{DoxyCompactItemize}\n\n\nThe documentation for this class was generated from the following file\\+:\\begin{DoxyCompactItemize}\n\\item \ndifferentiation/symbolic/arithmetic.\\+hpp\\end{DoxyCompactItemize}\n", "meta": {"hexsha": "348ccffca5b1c111f09b9ecbc44c415e0ef746d1", "size": 1544, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1multiply.tex", "max_stars_repo_name": "szymonmaszke/numpp", "max_stars_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-06-06T01:51:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-02T15:17:00.000Z", "max_issues_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1multiply.tex", "max_issues_repo_name": "vyzyv/numpp", "max_issues_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-11-28T12:15:46.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T00:03:38.000Z", "max_forks_repo_path": "docs/classnumpp_1_1differentiation_1_1symbolic_1_1multiply.tex", "max_forks_repo_name": "szymonmaszke/numpp", "max_forks_repo_head_hexsha": "9149c9d81f70a6ce833fdd1d2f0f2b584e2ac4d9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-08-06T13:58:27.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-06T06:45:22.000Z", "avg_line_length": 77.2, "max_line_length": 307, "alphanum_fraction": 0.7940414508, "num_tokens": 547, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7905303087996142, "lm_q1q2_score": 0.585130266988261}}
{"text": "\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\\begin{document}\n\\logo\n\\rulename{Triple Exponential Average} %Argument is name of rule\n\\tblofcontents\n\n\\ruledescription{The triple exponential average, known more commonly as the TRIX is a momentum indicator that is meant to filter out insignificant and unimportant price movements. Many consider it to be similar to the Moving Average Convergence/Divergence (MACD) indicator. The only difference is that the TRIX indicator provides outputs that are smoother due to triple smoothing of the exponential moving averages used to create the indicator. It is an indicator that is usually used for identifying overbought and oversold market conditions.}\n\n\\howtotrade\n{The strategy is to identify the oversold and overbought markets and a momentum indicator. \\\\\nBullish Momentum: when the TRIX is above zero. \\\\\nBearish Momentum: when the TRIX is below zero.\n}\n\n\n\\ruleparameters\n{Window size for \\break Exponential moving average of series i, \\break i $\\in \\{1,2,3\\}$}{14}{This is the number of time steps over which exponential contributions are sourced.}{$\\lookbacklength_{i}$}\n{Smoothing Factor for \\break Exponential moving average of series i, \\break i $\\in \\{1,2,3\\}$}{2}{Smoothing factor represents the weighting applied to the most recent period\u2019s value.}{$S_{i}$}\n\\stoptable\n\n\\newpage\n\\section{Equation}\n\\begin{equation}\n    EMA_{\\currenttime}^{1} = \\Big(V_{\\currenttime} * \\Big(\\frac{S_{1}}{1 + \\lookbacklength_{1}}\\Big)\\Big) + EMA_{\\currenttime-1} * \\Big(1-\\frac{S_{1}}{1 + \\lookbacklength_{1}}\\Big)\n\\end{equation}\n\n\\begin{equation}\n    EMA_{\\currenttime}^{2} = \\Big(EMA_{\\currenttime}^{1} * \\Big(\\frac{S_{2}}{1 + \\lookbacklength_{2}}\\Big)\\Big) + EMA_{\\currenttime-1}^{1} * \\Big(1-\\frac{S_{2}}{1 + \\lookbacklength_{2}}\\Big)\n\\end{equation}\n\n\\begin{equation}\n    EMA_{\\currenttime}^{3} = \\Big(EMA_{\\currenttime}^{2} * \\Big(\\frac{S_{3}}{1 + \\lookbacklength_{3}}\\Big)\\Big) + EMA_{\\currenttime-1}^{2} * \\Big(1-\\frac{S_{3}}{1 + \\lookbacklength_{3}}\\Big)\n\\end{equation}\n\n\\begin{equation}\n    TRIX_{\\currenttime} = \\frac{EMA_{\\currenttime}^{3} - EMA_{\\currenttime-1}^{3}}{EMA_{\\currenttime-1}^{3}}\n\\end{equation}\n\\\\\n\nwhere: \\\\\n\n$TRIX$ is Triple exponentially weighted moving average. \\\\\n\n$EMA$ is exponentially weighted moving average. \\\\\n\n$V_{\\currenttime}$ is current stock value. \\\\\n\n$\\lookbacklength$ \\ is look back length. 
\n\\hspace{200mm}\n\\hspace{200mm}\n\\keyterms\n\\furtherlinks\n\\end{document}\n\n", "meta": {"hexsha": "d0ee1736775f4fd5078995028fbbf9ef7abd471e", "size": 2558, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/TripleExponentialAverage.tex", "max_stars_repo_name": "parthgajjar4/infertrade", "max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_issues_repo_path": "docs/strategies/tex/TripleExponentialAverage.tex", "max_issues_repo_name": "parthgajjar4/infertrade", "max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/TripleExponentialAverage.tex", "max_forks_repo_name": "parthgajjar4/infertrade", "max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 41.9344262295, "max_line_length": 544, "alphanum_fraction": 0.7333854574, "num_tokens": 791, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5850527669336474}}
{"text": "\\section{Complexity and performance}\n\n\\subsection{Time complexity of the Merge-SVD algorithm}\nThe overall complexity of the \\cref{alg:svd-dist} can be expressed in\nterms of functions $\\func{Basecase-SVD}$ and $\\func{Merge-SVD}$; but\ngiven the former is seen as a black box over which we have little\ncontrol, and that the main contribution of \\Rehurek (from SVD \nperspective), is the \\cref{alg:merge-svd}, we focus on the complexity\nof that $\\func{Merge-SVD}$ alone. \\\\\n\nLet us review the cost of its main steps of\n\\cref{alg:merge-svd} individually, as a way to arriving to the the overall\ncomplexity (we will not consider the possible parallelization or\nvectorization of the basic kernel operations \\footnote{A ``kernel'' in\nthe context of Numerical Linear Algebra, is a basic routine which is\nheavily used by higher level algorithms; hence, its performance is\ncrucial and they are heavily optimized.}, which is usually\nachieved by using standard libraries like BLAS or LAPACK): \n\n\\begin{itemize}\n  \\item The matrix multiplication that produces $Z$ in line $1$, is\n    done against matrices $\\trans{U_1}$ (of dimensions $k_1 \\times m$)\n    and matrix $U_2$ (of dimensions $m \\times k_2$); hence it has a\n      complexity \\bigO{m k_1 k_2}. \\\\ \n\n  \\item The second step is dominated by the QR calculation; according\n    to Golub \\cite{golub13}, the complexity of a QR factorization\n    based on the Gram-Schmidt process for a matrix $A^{m \\times n}$,\n    is \\bigO{m n^2}. Applying that result to the particular case of\n    line $2$, give us a complexity of \\bigO{m k_2^2}. \\\\\n\n   \\item It seems hard to\n     find reported complexities for the SVD algorithms, in the\n     available literature; \\Rehurek mentions that the complexity of\n     the full SVD calculation from line $3$ is\n     \\bigO{(k_1+k_2)^3}\\footnote{We could find at least one reference\n       that also mentions this complexity, see\n       \\cite{plassman05}.}. Hence, given that the truncation factors  \n     $k_1$ and $k_2$ are usually a few hundreds in the context of LSI;\n     the cost of this step can be neglected. \\\\\n\n   \\item Finally, the complexity of the matrix operations in the last step \n     (focusing on the products only), is \\bigO{mkk_1 + mkk_2}. \n\\end{itemize}\n\\hfill\n\nIn practice the truncation factors do not vary much in LSI\napplications, thus, we can simplify further. Let us assume that $k\n\\approx k_1 \\approx k_2$, then the reported complexities in the list\nabove become: \\bigO{m k^2}, \\bigO{m k^2}, \\bigO{k^3}, \\bigO{m\n  k^2}. Given that the number of terms $m$ will be much bigger than\nthe truncation factor $k$ (hundred of thousands, vs a few hundreds);\nwe can conclude that the overall time complexity is \\bigO{m k^2}. \\\\\n\nDue time constraints we did not enter into detailed memory complexity\nanalysis of the algorithm, but is part of our todo list (for the case\nthat this project evolves into a full thesis). \n\n\\subsection{Performance with a large scale corpus}\n\n\\Rehurek used 3 different corpus to test his distributed SVD\nalgorithm, in the context of LSI. We focused only on the large corpus,\nwhich was the English Wikipedia. By that time, it contained $3.2$\nmillion documents; where 100,000 terms were chosen after removing the\nstop words. That resulted in an sparse matrix of dimensions \n$100,000 \\times 3,199,665$, with $0.5$Gb of non zero entries. 
\nIn practice the truncation factors do not vary much in LSI\napplications; thus, we can simplify further. Let us assume that $k\n\\approx k_1 \\approx k_2$; then the reported complexities in the list\nabove become: \\bigO{m k^2}, \\bigO{m k^2}, \\bigO{k^3}, \\bigO{m\n  k^2}. Given that the number of terms $m$ will be much bigger than\nthe truncation factor $k$ (hundreds of thousands vs a few hundred),\nwe can conclude that the overall time complexity is \\bigO{m k^2}. \\\\\n\nDue to time constraints we did not go into a detailed memory-complexity\nanalysis of the algorithm, but it is part of our to-do list (in case\nthis project evolves into a full thesis). \n\n\\subsection{Performance with a large scale corpus}\n\n\\Rehurek used three different corpora to test his distributed SVD\nalgorithm, in the context of LSI. We focused only on the largest corpus,\nwhich was the English Wikipedia. At that time, it contained $3.2$\nmillion documents, from which 100,000 terms were chosen after removing the\nstop words. That resulted in a sparse matrix of dimensions\n$100,000 \\times 3,199,665$, with $0.5$Gb of non-zero entries. Such a\nmatrix can fit in the memory of a modern personal computer, but as\nexplained earlier, the main objective of using the distributed\nalgorithm is to scale in time. The truncation factor $k$ was set to\n$400$ during this experiment. \\\\\n\nIn his PhD thesis, \\Rehurek reports the following wall times for the\ndistributed \\cref{alg:svd-dist}, running on a single computer and on a\ncluster: \\\\\n\n\\begin{itemize}\n\\item $8.5$ hours on a dual-core 2.53GHz MacBook Pro with 4GB RAM and\nvecLib, a fast BLAS (\\cite{blas})/ LAPACK (\\cite{lapack}) library\nprovided by the vendor. \\\\ \n\n\\item $2$ hours $23$ minutes on a cluster of four dual-core 2GHz Intel\n  Xeons, each with 4GB of RAM, which share the same Ethernet segment\n  and communicate via TCP/IP. These machines did not have any\n  optimized BLAS library installed. \n\\end{itemize}\n\\hfill\n\nThe above numbers suggest what we expected: given that the\nparallelization achieved by the distributed algorithm is almost\nperfect (the only communication needed is in the final merge), the scaling\nin time is basically linear with respect to the number of computing\nnodes. \\\\\n\nThe \\href{https://radimrehurek.com/gensim/dist_lsi.html}{gensim page}\n(see \\cite{gensim}) has a more up-to-date\nexperiment, which reports $5$ hours 25 minutes for a single machine,\nand $1$ hour $41$ minutes for a cluster with 4 nodes (this time\nthe cluster nodes had ATLAS installed, an open-source BLAS/LAPACK\nimplementation; see \\cite{atlas}). \\\\\n\n\\Rehurek makes an additional comparison for the execution on a single\nmachine, by contrasting with a custom implementation of the SVD\nalgorithm published in \\cite{zha98}, named ZMS in his PhD\nthesis. The ZMS algorithm took 109 hours, which contrasts starkly with\nthe 2 hours 23 minutes mentioned above for \\cref{alg:svd-dist}. A\nfairer comparison would probably be against the SVD algorithm\nimplemented by SLEPc (\\cite{hernandez07}); though this open-source\nimplementation does not specifically target the LSI problem, it claims\nto be distributed and highly scalable. Other comparisons with further\nopen-source implementations are possible; these, together with\nreproducing the results published by \\Rehurek with more nodes in the\ncluster, are planned for a later stage of this\nproject.
\n", "meta": {"hexsha": "e241e0cb0a3997419a93d62e0cebc0341fe499a3", "size": 5610, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "svd-dist-compres.tex", "max_stars_repo_name": "rzavalet/svd-lsi-project-master", "max_stars_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "svd-dist-compres.tex", "max_issues_repo_name": "rzavalet/svd-lsi-project-master", "max_issues_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "svd-dist-compres.tex", "max_forks_repo_name": "rzavalet/svd-lsi-project-master", "max_forks_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.2105263158, "max_line_length": 75, "alphanum_fraction": 0.7584670232, "num_tokens": 1497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7634837743174788, "lm_q1q2_score": 0.5850527628587595}}
{"text": "\\section{Dependent type theory}\n\\label{ch:dtt}\n\n\\index{dependent type theory|(}\nDependent type theory is a system of inference rules that can be combined to make \\emph{derivations}. In these derivations, the goal is often to construct a term of a certain type. Such a term can be a function if the type of the constructed term is a function type; a proof of a property if the type of the constructed term is a proposition; an identification if the type of the constructed term is an identity type, and so on. In some respect, a type is just a collection of mathematical objects and constructing terms of a type is the everyday mathematical task or challenge. The system of inference rules that we call type theory offers a principled way of engaging in mathematical activity.\n\n\\subsection{Judgments and contexts in type theory}\\label{sec:judgments}\n\\index{judgment|(}\n\\index{context|(}\nAn \\define{inference rule}\\index{inference rule|see {rule}} is an expression of the form\n\\begin{prooftree}\n  \\AxiomC{$\\mathcal{H}_1$\\quad $\\mathcal{H}_2$ \\quad \\dots \\quad $\\mathcal{H}_n$}\n  \\UnaryInfC{$\\mathcal{C}$}\n\\end{prooftree}\ncontaining above the horizontal line\\index{horizontal line|see {inference rule}} a finite list $\\mathcal{H}_1$, $\\mathcal{H}_2$, \\dots, $\\mathcal{H}_n$ of \\emph{judgments} for the hypotheses\\index{inference rule!hypotheses}, and below the horizontal line a single judgment $\\mathcal{C}$ for the conclusion\\index{inference rule!conclusion}. A very simple example that we will encounter in \\cref{ch:pi} when we introduce function types\\index{function type}, is the inference rule\n\\begin{prooftree}\n  \\AxiomC{$\\Gamma\\vdash a:A$}\n  \\AxiomC{$\\Gamma\\vdash f:A\\to B$}\n  \\BinaryInfC{$\\Gamma\\vdash f(a):B$}\n\\end{prooftree}\nThis rule asserts that in any context $\\Gamma$ we may use a term $a:A$ and a function $f:A\\to B$ to obtain a term $f(a):B$. Each of the expressions\n\\begin{align*}\n  \\Gamma & \\vdash a :A \\\\\n  \\Gamma & \\vdash f : A \\to B \\\\\n  \\Gamma & \\vdash f(a):B\n\\end{align*}\nare examples of judgments. 
There are four kinds of judgments in type theory:\n\\begin{enumerate}\n\\item \\emph{$A$ is a (well-formed) \\define{type} in context $\\Gamma$.}\n  \\index{well-formed type}\\index{type}\n  The symbolic expression for this judgment is\\index{Gamma turnstile A type@{$\\Gamma\\vdash A~\\type$}}\\index{judgment!Gamma turnstile A type@{$\\Gamma\\vdash A~\\type$}}\n  \\begin{equation*}\n    \\Gamma\\vdash A~\\type\n  \\end{equation*}\n\\item \\emph{$A$ and $B$ are \\define{judgmentally equal types} in context $\\Gamma$.}\n  \\index{judgmental equality!of types} The symbolic expression for this judgment is\\index{Gamma turnstile A is B type@{$\\Gamma\\vdash A\\jdeq B~\\type$}}\\index{judgment!Gamma turnstile A is B type@{$\\Gamma\\vdash A\\jdeq B~\\type$}}\n  \\begin{equation*}\n    \\Gamma\\vdash A \\jdeq B~\\type\n  \\end{equation*}\n\\item \\emph{$a$ is a (well-formed) \\define{term}\\index{well-formed term}\\index{term} of type $A$ in context $\\Gamma$.} The symbolic expression for this judgment is\\index{Gamma turnstile a in A@{$\\Gamma\\vdash a:A$}}\\index{judgment!Gamma turnstile a in A@{$\\Gamma\\vdash a:A$}}\n  \\begin{equation*}\n    \\Gamma \\vdash a:A\n  \\end{equation*}\n\\item \\emph{$a$ and $b$ are \\define{judgmentally equal terms} of type $A$ in context $\\Gamma$.}\\index{judgmental equality!of terms} The symbolic expression for this judgment is\\index{Gamma turnstile a is b in A@{$\\Gamma\\vdash a\\jdeq b:A$}}\\index{judgment!Gamma turnstile a is b in A@{$\\Gamma\\vdash a\\jdeq b:A$}}\n  \\begin{equation*}\n    \\Gamma\\vdash a\\jdeq b:A\n  \\end{equation*}\n\\end{enumerate}\nThus we see that any judgment is of the form $\\Gamma\\vdash\\mathcal{J}$, consisting of a context $\\Gamma$ and an expression $\\mathcal{J}$ asserting that $A$ is a type, that $A$ and $B$ are equal types, that $a$ is a term of type $A$, or that $a$ and $b$ are equal terms of type $A$. The role of a context is to declare what hypothetical terms\\index{hypothetical term} are assumed, along with their types. More formally, a \\define{context} is an expression of the form\n\\begin{equation}\\label{eq:context}\nx_1:A_1,~x_2:A_2(x_1),~\\ldots,~x_n:A_n(x_1,\\ldots,x_{n-1})\n\\end{equation}\nsatisfying the condition that for each $1\\leq k\\leq n$ we can derive, using the inference rules of type theory, that\n\\begin{equation}\\label{eq:context-condition}\n  x_1:A_1,~x_2:A_2(x_1),~\\ldots,~x_{k-1}:A_{k-1}(x_1,\\ldots,x_{k-2})\\vdash A_k(x_1,\\ldots,x_{k-1})~\\type.\n\\end{equation}\nIn other words, to check that an expression of the form \\cref{eq:context} is a context, one starts on the left and works their way to the right verifying that each hypothetical term $x_k$ is assigned a well-formed type. Hypothetical terms are commonly called \\define{variables}\\index{variable}, and we say that a context as in \\cref{eq:context} \\define{declares the variables}\\index{variable declaration} $x_1,\\ldots,x_n$. We may use variable names other than $x_1,\\ldots,x_n$, as long as no variable is declared more than once.\n\nThe condition in \\cref{eq:context-condition} that each of the hypothetical terms is assigned a well-formed type, is checked recursively. Note that the context of length $0$ satisfies the requirement in \\cref{eq:context-condition} vacuously. This context is called the \\define{empty context}\\index{context!empty context}\\index{empty context}. An expression of the form $x_1:A_1$ is a context if and only if $A_1$ is a well-formed type in the empty context. 
Such types are called \\define{closed types}\\index{closed type}\\index{type!closed type}. We will soon encounter the type $\\N$ of natural numbers\\index{natural numbers}, which is an example of a closed type\\index{natural numbers!is a closed type}. There is also the notion of \\define{closed term}\\index{closed term}\\index{term!closed term}, which is simply a term in the empty context. The next case is that an expression of the form $x_1:A_1,~x_2:A_2(x_1)$ is a context if and only if $A_1$ is a well-formed type in the empty context, and $A_2(x_1)$ is a well-formed type, given a hypothetical term $x_1:A_1$. This process repeats itself for longer contexts.\n\nIt is a feature of \\emph{dependent} type theory that all judgments are context-dependent, and indeed that even the types of the variables may depend on any previously declared variables. For example, when we introduce the \\emph{identity type}\\index{identity type} in \\cref{chap:identity}, we make full use of the machinery of type dependency, as is clear from how they are introduced:\n\\begin{prooftree}\n  \\AxiomC{$\\Gamma\\vdash A~\\type$}\n  \\UnaryInfC{$\\Gamma,x:A,y:A\\vdash x=y~\\type$}\n\\end{prooftree}\nThis rule asserts that given a type $A$ in context $\\Gamma$, we may form a type $x=y$ in context $\\Gamma,x:A,y:A$. Note that in order to know that the expression $\\Gamma,x:A,y:A$ is indeed a well-formed context, we need to know that $A$ is a well-formed type in context $\\Gamma,x:A$. This is an instance of \\emph{weakening}\\index{weakening}, which we will describe shortly.\n\nIn the situation where we have\n\\begin{equation*}\n  \\Gamma,x:A\\vdash B(x)~\\type,\n\\end{equation*}\nwe say that $B$ is a \\define{family}\\index{family}\\index{type family} of types over $A$ in context $\\Gamma$. Alternatively, we say that $B(x)$ is a type \\define{indexed}\\index{indexed type}\\index{type!indexed} by $x:A$, in context $\\Gamma$. Similarly, in the situation where we have\n\\begin{equation*}\n  \\Gamma,x:A\\vdash b(x):B(x),\n\\end{equation*}\nwe say that $b$ is a \\define{section}\\index{section of a family} of the family $B$ over $A$ in context $\\Gamma$. Alternatively, we say that $b(x)$ is a term of type $B(x)$, \\define{indexed}\\index{indexed term}\\index{term!indexed} by $x:A$ in context $\\Gamma$. Note that in the above situations $A$, $B$, and $b$ also depend on the variables declared in the context $\\Gamma$, even though we have not explicitly mentioned them. It is common practice to not mention every variable in the context $\\Gamma$ in such situations.\n\n\\index{judgment|)}\n\\index{context|)}\n\n\\subsection{Inference rules}\\label{sec:rules}\n\nIn this section we present the basic inference rules of dependent type theory. Those rules are valid to be used in any type theoretic derivation. 
There are only four sets of inference rules:\n\\begin{enumerate}\n\\item Rules for judgmental equality \n\\item Rules for substitution\n\\item Rules for weakening\n\\item The ``variable rule''\n\\end{enumerate}\n\n\\subsubsection*{Judgmental equality}\n\n\\index{rules!for type dependency!rules for judgmental equality|(}\nIn this set of inference rules we ensure that judgmental equality (both on types and on terms) is an equivalence relation, and we make sure that in any context $\\Gamma$, we can change the type of any variable to a judgmentally equal type.\n\n\\begin{samepage}\nThe rules postulating that judgmental equality on types and on terms is an equivalence relation are as follows\\index{judgmental equality!is an equivalence relation}:\n\\begin{center}\n%\\begin{small}\n\\begin{minipage}{.2\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A~\\textrm{type}$}\n\\UnaryInfC{$\\Gamma\\vdash A\\jdeq A~\\textrm{type}$}\n\\end{prooftree}\n\\end{minipage}\n\\begin{minipage}{.25\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A\\jdeq A'~\\textrm{type}$}\n\\UnaryInfC{$\\Gamma\\vdash A'\\jdeq A~\\textrm{type}$}\n\\end{prooftree}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A\\jdeq A'~\\textrm{type}$}\n\\AxiomC{$\\Gamma\\vdash A'\\jdeq A''~\\textrm{type}$}\n\\BinaryInfC{$\\Gamma\\vdash A\\jdeq A''~\\textrm{type}$}\n\\end{prooftree}\n\\end{minipage}\n\\\\*\n\\bigskip\n\\begin{minipage}{.2\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a:A$}\n\\UnaryInfC{$\\Gamma\\vdash a\\jdeq a : A$}\n\\end{prooftree}\n\\end{minipage}\n\\begin{minipage}{.25\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a\\jdeq a':A$}\n\\UnaryInfC{$\\Gamma\\vdash a'\\jdeq a: A$}\n\\end{prooftree}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a\\jdeq a' : A$}\n\\AxiomC{$\\Gamma\\vdash a'\\jdeq a'': A$}\n\\BinaryInfC{$\\Gamma\\vdash a\\jdeq a'': A$}\n\\end{prooftree}\n\\end{minipage}\n%\\end{small}\n\\end{center}\n\\end{samepage}\n\n\\bigskip\nApart from the rules postulating that judgmental equality is an equivalence relation, there are also \\define{variable conversion rules}\\index{judgmental equality!conversion rules}\\index{variable conversion rules}\\index{conversion rule!variable}\\index{rules!for type dependency!variable conversion}.\nInformally, these are rules stating that if $A$ and $A'$ are judgmentally equal types in context $\\Gamma$, then any valid judgment in context $\\Gamma,x:A$ is also a valid judgment in context $\\Gamma,x:A'$. In other words: we can convert the type of a variable to a judgmentally equal type.\n\nThe first variable conversion rule states that\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A\\jdeq A'~\\textrm{type}$}\n\\AxiomC{$\\Gamma,x:A,\\Delta\\vdash B(x)~\\type$}\n\\BinaryInfC{$\\Gamma,x:A',\\Delta\\vdash B(x)~\\type$}\n\\end{prooftree}\nIn this conversion rule, the context of the form $\\Gamma,x:A,\\Delta$ is just any extension of the context $\\Gamma,x:A$, i.e., a context of the form\n\\begin{equation*}\n  x_1:A_1,\\ldots,x_{n-1}:A_{n-1},x:A,x_{n+1}:A_{n+1},\\ldots,x_{n+m}:A_{n+m}.\n\\end{equation*}\n\nSimilarly, there are variable conversion rules for judgmental equality of types, for terms, and for judgmental equality of terms.
To avoid having to state essentially the same rule four times, we state all four variable conversion rules at once using a \\emph{generic judgment} $\\mathcal{J}$, which can be any of the four kinds of judgments.\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A\\jdeq A'~\\textrm{type}$}\n\\AxiomC{$\\Gamma,x:A,\\Delta\\vdash \\mathcal{J}$}\n\\BinaryInfC{$\\Gamma,x:A',\\Delta\\vdash \\mathcal{J}$}\n\\end{prooftree}\nAn analogous \\emph{term conversion rule}, stated in \\cref{ex:term_conversion}, converting the type of a term to a judgmentally equal type, is derivable using the rules for substitution and weakening, and the variable rule.\n\\index{rules!for type dependency!rules for judgmental equality|)}\n\n\\subsubsection*{Substitution}\n\\index{substitution|(}\\index{rules!for type dependency!rules for substitution|(}\nIf we are given a term $a:A$ in context $\\Gamma$, then for any type $B$ in context $\\Gamma,x:A,\\Delta$ we can simultaneously substitute $a$ for all occurrences of the variable $x$ in $\\Delta$ and in $B$, to obtain a type $B[a/x]$ in context $\\Gamma,\\Delta[a/x]$. You are already familiar with simultaneous substitution, e.g., substituting $0$ for $x$ in the polynomial\n\\begin{equation*}\n  1+x+x^2+x^3\n\\end{equation*}\nresults in the number $1+0+0^2+0^3$, which can be computed to the value $1$. \n\nType theoretic substitution is similar. In a bit more detail, suppose we have a well-formed type\n\\begin{equation*}\n  x_1:A_1,\\ldots,x_{n-1}:A_{n-1},x_n:A_n,x_{n+1}:A_{n+1},\\ldots,x_{n+m}:A_{n+m}\\vdash B~\\textrm{type}\n\\end{equation*}\nand a term $a:A_n$ in context $x_1:A_1,\\ldots,x_{n-1}:A_{n-1}$. Then we can form the type\n\\begin{equation*}\n  x_1:A_1,\\ldots,x_{n-1}:A_{n-1},x_{n+1}:A_{n+1}[a/x_n],\\ldots,x_{n+m}:A_{n+m}[a/x_n]\\vdash B[a/x_n]~\\textrm{type}\n\\end{equation*}\nby substituting $a$ for all occurrences of $x_n$. Note that the variables $x_{n+1},\\ldots,x_{n+m}$ are assigned new types after performing the substitution of $a$ for $x_n$.\n\nThis operation of substituting $a$ for $x$ is understood to be defined recursively over the length of $\\Delta$. When $B$ is a family of types over $A$ and $a:A$, we also say that $B[a/x]$ is the \\define{fiber}\\index{family!fiber of}\\index{fiber!of a family} of $B$ at $a$. We will usually write $B(a)$ for $B[a/x]$. Similarly we obtain for any term $b:B$ in context $\\Gamma,x:A,\\Delta$ a term $b[a/x]:B[a/x]$. The term $b[a/x]$ is called the \\define{value} of $b$ at $a$. When we substitute in a judgmental equality, either of types or terms, we simply substitute on both sides of the equation.\n\nWe can now postulate the \\define{substitution rule} as follows:\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a:A$}\n\\AxiomC{$\\Gamma,x:A,\\Delta\\vdash \\mathcal{J}$}\n\\RightLabel{$S$}\n\\BinaryInfC{$\\Gamma,\\Delta[a/x]\\vdash \\mathcal{J}[a/x]$}\n\\end{prooftree}\nIn other words, the substitution rule asserts that substitution preserves well-formedness and judgmental equality of types and terms.
Furthermore, we postulate that substitution by judgmentally equal terms results in judgmentally equal types\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a\\jdeq a':A$}\n\\AxiomC{$\\Gamma,x:A,\\Delta\\vdash B~\\type$}\n\\BinaryInfC{$\\Gamma,\\Delta[a/x]\\vdash B[a/x]\\jdeq B[a'/x]~\\type$}\n\\end{prooftree}\nand it also results in judgmentally equal terms\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash a\\jdeq a':A$}\n\\AxiomC{$\\Gamma,x:A,\\Delta\\vdash b:B$}\n\\BinaryInfC{$\\Gamma,\\Delta[a/x]\\vdash b[a/x]\\jdeq b[a'/x]:B[a/x]$}\n\\end{prooftree}\nTo see that these rules make sense, we observe that both $B[a/x]$ and $B[a'/x]$ are types in context $\\Gamma,\\Delta[a/x]$, provided that $a\\jdeq a'$. This is immediate by recursion on the length of $\\Delta$.\n\\index{substitution|)}\\index{rules!for type dependency!rules for substitution|)}\n\n\\subsubsection*{Weakening}\n\\index{weakening|(}\\index{rules!for type dependency!rules for weakening|(}\nIf we are given a type $A$ in context $\\Gamma$, then any judgment made in a longer context $\\Gamma,\\Delta$ can also be made in the context $\\Gamma,x:A,\\Delta$, for a fresh variable $x$. The \\define{weakening rule}\\index{weakening} asserts that weakening by a type $A$ in context preserves well-formedness and judgmental equality of types and terms.\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A~\\textrm{type}$}\n\\AxiomC{$\\Gamma,\\Delta\\vdash \\mathcal{J}$}\n\\RightLabel{$W$}\n\\BinaryInfC{$\\Gamma,x:A,\\Delta \\vdash \\mathcal{J}$}\n\\end{prooftree}\nThis process of expanding the context by a fresh variable of type $A$ is called \\define{weakening} (by $A$).\n\nIn the simplest situation where weakening applies, we have two types $A$ and $B$ in context $\\Gamma$. Then we can weaken $B$ by $A$ as follows\n\\begin{prooftree}\n  \\AxiomC{$\\Gamma\\vdash A~\\textrm{type}$}\n  \\AxiomC{$\\Gamma\\vdash B~\\textrm{type}$}\n  \\RightLabel{$W$}\n  \\BinaryInfC{$\\Gamma,x:A\\vdash B~\\type$}\n\\end{prooftree}\nin order to form the type $B$ in context $\\Gamma,x:A$. The type $B$ in context $\\Gamma,x:A$ is called the \\define{constant family}\\index{family!constant family}\\index{constant family} $B$, or the \\define{trivial family}\\index{family!trivial family}\\index{trivial family} $B$.\n\\index{weakening|)}\\index{rules!for type dependency!rules for weakening|)}\n\n\\subsubsection*{The variable rule}\nIf we are given a type $A$ in context $\\Gamma$, then we can weaken $A$ by itself to obtain that $A$ is a type in context $\\Gamma,x:A$. The \\define{variable rule}\\index{variable rule}\\index{rules!for type dependency!variable rule} now asserts that any hypothetical term $x:A$ in context $\\Gamma$ is a well-formed term of type $A$ in context $\\Gamma,x:A$.\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A~\\textrm{type}$}\n\\RightLabel{$\\delta$}\n\\UnaryInfC{$\\Gamma,x:A\\vdash x:A$}\n\\end{prooftree}\nOne of the reasons for including the variable rule is that it provides an \\emph{identity function}\\index{identity function} on the type $A$ in context $\\Gamma$.\n\n\\subsection{Derivations}\\label{sec:derivations}\n\n\\index{derivation|(}\nA derivation in type theory is a finite tree in which each node is a valid rule of inference. At the root of the tree we find the conclusion, and in the leaves of the tree we find the hypotheses. 
We give two examples of derivations: a derivation showing that any variable can be changed to a fresh one, and a derivation showing that any two variables that do not mutually depend on one another can be swapped in order.\n\nGiven a derivation with hypotheses $\\mathcal{H}_1,\\ldots,\\mathcal{H}_n$ and conclusion $\\mathcal{C}$, we can form a new inference rule\n\\begin{prooftree}\n  \\AxiomC{$\\mathcal{H}_1$}\n  \\AxiomC{$\\cdots$}\n  \\AxiomC{$\\mathcal{H}_n$}\n  \\TrinaryInfC{$\\mathcal{C}$}\n\\end{prooftree}\nSuch a rule is called \\define{derivable}, because we have a derivation for it. In order to keep proof trees reasonably short and manageable, we use the convention that any derived rules can be used in future derivations.\n\n\\subsubsection*{Changing variables}\n\n\\index{change of variables}\nVariables can always be changed to fresh variables. We show that this is the case by showing that the inference rule\\index{rules!for type dependency!change of variables}\n\\begin{prooftree}\n  \\AxiomC{$\\Gamma,x:A,\\Delta\\vdash \\mathcal{J}$}\n  \\RightLabel{$x'/x$}\n  \\UnaryInfC{$\\Gamma,x':A,\\Delta[x'/x]\\vdash \\mathcal{J}[x'/x]$}\n\\end{prooftree}\nis derivable, where $x'$ is a variable that does not occur in the context $\\Gamma,x:A,\\Delta$. \n\nIndeed, we have the following derivation using substitution, weakening, and the variable rule:\n\\begin{prooftree}\n  \\AxiomC{$\\Gamma\\vdash A~\\type$}\n  \\RightLabel{$\\delta$}\n  \\UnaryInfC{$\\Gamma,x':A\\vdash x':A$}\n  \\AxiomC{$\\Gamma\\vdash A~\\type$}\n  \\AxiomC{$\\Gamma,x:A,\\Delta\\vdash \\mathcal{J}$}\n  \\RightLabel{$W$}\n  \\BinaryInfC{$\\Gamma,x':A,x:A,\\Delta\\vdash \\mathcal{J}$}\n  \\RightLabel{$S$}\n  \\BinaryInfC{$\\Gamma,x':A,\\Delta[x'/x]\\vdash \\mathcal{J}[x'/x]$}\n\\end{prooftree}\nIn this derivation, it is at the application of the weakening rule that we have to check that $x'$ does not occur in the context $\\Gamma,x:A,\\Delta$.\n\n\\subsubsection*{Interchanging variables}\n\nThe \\define{interchange rule}\\index{rules!for type dependency!interchange}\\index{interchange rule} states that if we have two types $A$ and $B$ in context $\\Gamma$, and we make a judgment in context $\\Gamma,x:A,y:B,\\Delta$, then we can make that same judgment in context $\\Gamma,y:B,x:A,\\Delta$ where the order of $x:A$ and $y:B$ is swapped. More formally, the interchange rule is the following inference rule\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash B~\\textrm{type}$}\n\\AxiomC{$\\Gamma,x:A,y:B,\\Delta\\vdash \\mathcal{J}$}\n\\BinaryInfC{$\\Gamma,y:B,x:A,\\Delta\\vdash \\mathcal{J}$}\n\\end{prooftree}\nJust as with the rule for changing variables, we claim that the interchange rule is derivable.\n\nThe idea of the derivation for the interchange rule is as follows: If we have a judgment\n\\begin{equation*}\n  \\Gamma,x:A,y:B,\\Delta\\vdash\\mathcal{J},\n\\end{equation*}\nthen we can change the variable $y$ to a fresh variable $y'$ and weaken the judgment to obtain the judgment\n\\begin{equation*}\n  \\Gamma,y:B,x:A,y':B,\\Delta[y'/y]\\vdash\\mathcal{J}[y'/y].\n\\end{equation*}\nNow we can substitute $y$ for $y'$ to obtain the desired judgment $\\Gamma,y:B,x:A,\\Delta\\vdash\\mathcal{J}$. 
The formal derivation is as follows:\n%\\begin{small}\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash B~\\textrm{type}$}\n\\RightLabel{$\\delta$}\n\\UnaryInfC{$\\Gamma,y:B\\vdash y:B$}\n\\RightLabel{$W$} \n\\UnaryInfC{$\\Gamma,y:B,x:A\\vdash y:B$}\n\\AxiomC{$\\Gamma,x:A,y:B,\\Delta\\vdash \\mathcal{J}$}\n\\RightLabel{$y'/y$}\n\\UnaryInfC{$\\Gamma,x:A,y':B,\\Delta[y'/y]\\vdash \\mathcal{J}[y'/y]$}\n\\RightLabel{$W$}\n\\UnaryInfC{$\\Gamma,y:B,x:A,y':B,\\Delta[y'/y]\\vdash \\mathcal{J}[y'/y]$}\n\\RightLabel{$S$}\n\\BinaryInfC{$\\Gamma,y:B,x:A,\\Delta\\vdash \\mathcal{J}$}\n\\end{prooftree}\n%\\end{small}\n\\index{derivation|)}\n\n\\begin{exercises}\n\\exercise \\label{ex:term_conversion}Give a derivation for the following \\define{term conversion rule}\\index{term conversion rule}\\index{rules!for type dependency!term conversion}\\index{term conversion rule}\\index{conversion rule!term}:\n\\begin{prooftree}\n\\AxiomC{$\\Gamma\\vdash A\\jdeq A'~\\textrm{type}$}\n\\AxiomC{$\\Gamma\\vdash a:A$}\n\\BinaryInfC{$\\Gamma\\vdash a:A'$}\n\\end{prooftree}\n\\exercise Consider a type $A$ in context $\\Gamma$. In this exercise we establish a correspondence between types in context $\\Gamma,x:A$, and uniform choices of types $B_a$, where $a$ ranges over terms of $A$ in a uniform way. A similar connection is made for terms.\n  \\begin{subexenum}\n  \\item We define a \\define{uniform family}\\index{uniform family} over $A$ to consist of a type\n    \\begin{equation*}\n      \\Delta,\\Gamma\\vdash B_a~\\type\n    \\end{equation*}\n    for every context $\\Delta$, and every term $\\Delta,\\Gamma\\vdash a:A$, subject to the condition that one can derive\n    \\begin{prooftree}\n      \\AxiomC{$\\Delta\\vdash d:D$}\n      \\AxiomC{$\\Delta,y:D,\\Gamma\\vdash a:A$}\n      \\BinaryInfC{$\\Delta,\\Gamma\\vdash B_a[d/y]\\jdeq B_{a[d/y]}~\\type$}\n    \\end{prooftree}\n    Define a bijection between the set of types in context $\\Gamma,x:A$ modulo judgmental equality, and the set of uniform families over $A$ modulo judgmental equality. \n  \\item Consider a type $\\Gamma,x:A\\vdash B~\\type$. We define a \\define{uniform term}\\index{uniform term} of $B$ over $A$ to consist of a term\n    \\begin{equation*}\n      \\Delta,\\Gamma\\vdash b_a:B[a/x]\n    \\end{equation*}\n    for every context $\\Delta$, and every term $\\Delta,\\Gamma\\vdash a:A$, subject to the condition that one can derive\n    \\begin{prooftree}\n      \\AxiomC{$\\Delta\\vdash d:D$}\n      \\AxiomC{$\\Delta,y:D,\\Gamma\\vdash a:A$}\n      \\BinaryInfC{$\\Delta,\\Gamma\\vdash b_a[d/y]\\jdeq b_{a[d/y]}:B[a/x][d/y]$}\n    \\end{prooftree}\n    Define a bijection between the set of terms of $B$ in context $\\Gamma,x:A$ modulo judgmental equality, and the set of uniform terms of $B$ over $A$ modulo judgmental equality. 
\n  \\end{subexenum}\n\\end{exercises}\n\\index{dependent type theory|)}\n", "meta": {"hexsha": "8b72c81728f44d014339dbd40b2e99599c58801c", "size": 23038, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/dtt.tex", "max_stars_repo_name": "hemangandhi/HoTT-Intro", "max_stars_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/dtt.tex", "max_issues_repo_name": "hemangandhi/HoTT-Intro", "max_issues_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/dtt.tex", "max_forks_repo_name": "hemangandhi/HoTT-Intro", "max_forks_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.8228571429, "max_line_length": 1113, "alphanum_fraction": 0.7303585381, "num_tokens": 7294, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7634837635542925, "lm_q1q2_score": 0.5850527627607739}}
{"text": "\n\\section{Ground State to an Excited Bound State}\nWe mentioned in Chapter 5 that ideally the total cross section would have both\namplitudes corresponding to ground-continuum transitions and bound-bound\ntransitions. The first step that is required is to calculate a photoabsorption\nmatrix element from the ground state to one of the excited bound\nstates of hydrogen. \n\nThis chapter provides semi analytic results for photoabsorption matrix elements\nfrom the ground state to the first three excited states of hydrogen.\nWe will consider the the relativistic absorption operator $\\mathcal{A} = \\Alpha_x \\Exp{k}{r}\n= \\Alpha_x e^{i\\eta(\\mb{k},\\theta,\\phi)r}$ for a photon polarised in the $x$\ndirection and we define $\\eta$ as:\n\\begin{equation}\n    \\eta(\\mb{k},\\theta,\\phi) = k_x \\sin\\phi \\cos\\phi +\n                               k_y \\sin\\phi \\sin\\theta +\n                               k_z \\cos\\phi\n\\end{equation}\nIn the electric dipole approximation $\\mb{k} = 0$ and hence $\\eta = 0$. \nThe use of $\\eta$ allows us to separate the angular and radial integrals\ntherefore writing the photoabsorption matrix element from the ground\nstate $\\ket{\\psi_0}$ to an excited bound state $\\ket{\\psi_i}$ as:\n\\begin{equation}\n    \\bra{\\psi_i} X \\ket{\\psi_0}\n    =\n    \\int (A_{i1}A_{01} + A_{ii}A_{0i}) R^g_{0i} \\; d\\Omega\n    +\n    \\int (A_{i3}A_{03} + A_{i4}A_{04}) R^h_{0i} \\; d\\Omega\n\\end{equation}\nwhere $A_{ij}(\\theta,\\phi)$ are the angular components of the excited \nstate Dirac spinors as defined in Chapter 2.\n$R^g_{0i}$ and $R^h_{0i}$ are the large and small component radial \nintegrals which are defined in the sections below for each excited state.\nSome of the radial integrals are analytic while others have to be solved\nnumerically. 
The final integration over all angles $ d\\Omega $ has to \nbe performed numerically.\n\nAll the radial integrals are written in terms of the constants defined in\nChapter 2 and in terms of the integrals $\\mathcal{J}$ and $\\mathcal{K}$ \ndefined below.\n\\begin{equation}\n    \\mathcal{J}[a,n] = \\int_0^\\infty e^{-ar} r^n \\; dr\n\\end{equation}\n\\begin{equation}\n    \\mathcal{K}[a,n,A,B,C,D] = \\int_0^\\infty \n                \\left( \\frac{A-Br}{C-Dr} \\right) e^{-ar} r^{n} \\; dr\n\\end{equation}\nNote that for $\\mathrm{Re}(a) > 0$ the integral $\\mathcal{J}$ has the closed form $\\mathcal{J}[a,n] = \\Gamma(n+1)/a^{n+1}$.\n\n\\section{Ground State to First Excited State}\n\\begin{equation}\nR^g_{01} = G_0 G'_1 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta,2\\gamma_1] -\n           G_0 G''_1 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta,2\\gamma_1 + 1]\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nR^h_{01} = H_0 G_0 H_1 G'_1 \\mathcal{K}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta, 2\\gamma_1,H'_1,H''_1,H'''_1,H''''_1]\n            \\\\ -\n           H_0 G_0 H_1 G''_1 \\mathcal{K}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta, 2\\gamma_1 + 1,H'_1,H''_1,H'''_1,H''''_1]\n\\end{split}\n\\end{equation}\n\n\\section{Ground State to Second Excited State}\n\\begin{equation}\nR^g_{02} = G_0 G'_2 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta,2\\gamma_1] -\n           G_0 G''_2 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta,2\\gamma_1 + 1]\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nR^h_{02} = H_0 G_0 H_2 G'_2 \\mathcal{K}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta, 2\\gamma_1,H'_2,H''_2,H'''_2,H''''_2]\n            \\\\ - \n           H_0 G_0 H_2 G''_2 \\mathcal{K}[\\Half(\\sigma_1 + \\sigma_2) - i\\eta, 2\\gamma_1 + 1,H'_2,H''_2,H'''_2,H''''_2]\n\\end{split}\n\\end{equation}\n\n\\section{Ground State to Third Excited State}\n\\begin{equation}\nR^g_{03} = G_0 G_3 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_3) - i\\eta, \\gamma_1 + \\gamma_2]\n\\end{equation}\n\n\\begin{equation}\nR^h_{03} = H_0 G_0 H_3 G_3 \\mathcal{J}[\\Half(\\sigma_1 + \\sigma_3) - i\\eta, \\gamma_1 + \\gamma_2]\n\\end{equation}\n\n", "meta": {"hexsha": "dd27cf9553218c4f6197bffcca5c643eb1507510", "size": 3632, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bound.tex", "max_stars_repo_name": "mikepsn/atomic-form-factors-thesis", "max_stars_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bound.tex", "max_issues_repo_name": "mikepsn/atomic-form-factors-thesis", "max_issues_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bound.tex", "max_forks_repo_name": "mikepsn/atomic-form-factors-thesis", "max_forks_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.2727272727, "max_line_length": 117, "alphanum_fraction": 0.6605176211, "num_tokens": 1313, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388125473629, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5850526028889196}}
{"text": "%!TEX program = xelatex\n\n\\documentclass[12pt,a4paper]{article}\n\\usepackage{xeCJK}\n\\usepackage{amsmath}\n\\setmainfont{Times New Roman}\n\\usepackage{setspace}\n\\usepackage{caption}\n\\usepackage{graphicx, subfig}\n\\usepackage{float}\n\\usepackage{listings}\n\\usepackage{booktabs}\n\\usepackage{setspace}%\u4f7f\u7528\u95f4\u8ddd\u5b8f\u5305\n\\usepackage{mathtools}\n\\usepackage{amsfonts}\n\n\\begin{document} \n\\title{homework4}\n\t\\author{11611118 \u90ed\u601d\u6e90}  \n\n\\section{Problem 1}\n\\textbf{Question: State and verify the Lagrange Interpolation Theorem.}\n\n\\begin{spacing}{1.5}%%\u884c\u95f4\u8ddd\u53d8\u4e3adouble-space\n\\subsection{What is Lagrange Interpolation Theorem?}\n\\noindent In numerical analysis, \\textbf{Lagrangian interpolation} is a polynomial interpolation method named after French 18th century mathematician Joseph Lagrange. Many practical problems use functions to represent some kind of internal connection or law, and many functions can only be understood through experiments and observations. If you observe a physical quantity in practice and get the corresponding observations in several different places, Lagrangian interpolation can find a polynomial that takes the observed value at each observation point. Such a polynomial is called a \\textbf{Lagrangian (interpolation) polynomial}. Mathematically,  Lagrangian interpolation gives a polynomial function that happens to traverse several known points on a two-dimensional plane. The Lagrange interpolation method was first discovered by the British mathematician Edward Waring in 1779, and soon after (1783) was discovered again by Leonhard Euler. In 1795, Lagrange published this interpolation method in his book ``Basic Tutorials for Normal Schools'', and his name has since been associated with this method.\\\\\n\n\\noindent \nFor a given ${n+1}$ points ${(x_0,y_0), (x_1,y_1),\\ldots,(x_n,y_n)}$, there is only one Lagrangian polynomial L corresponding to their number of times not exceeding n. 
If polynomials of higher degree are allowed, there are infinitely many, because every polynomial that differs from $L$ by \n${\\lambda(x-x_0)(x-x_1)\\cdots(x-x_n)}$ also satisfies the interpolation condition.\n\n\\newpage\n\\subsection{Definition}\n\\noindent For a polynomial function, the given ${n+1}$ value points are known:\\\\\n${(x_0,y_0), (x_1,y_1),\\ldots,(x_n,y_n)}$\\\\\nwhere ${x_{j}}$ corresponds to the position of the argument, \\\\\nand ${y_{j}}$ corresponds to the value of the function at this position.\\\\\n\n\\noindent Assuming that the ${x_{j}}$ are pairwise distinct, the \\textbf{Lagrangian interpolation polynomial} obtained by applying the Lagrangian interpolation formula is:\n\n\\begin{equation}\n\tL(x) \\coloneqq \\sum_{j=0}^k y_j \\ell_j(x)\n\\end{equation}\\\\\n\n\\noindent Each of $\\ell _{j}(x)$ is a \\textbf{Lagrangian basic polynomial} (or \\textbf{interpolation basis function}) whose expression is:\n\n\\begin{equation}\n\\begin{aligned}\n\t\\ell_j(x) \\coloneqq &\\prod_{i = 0,\\, i \\neq j}^k \n\t\\frac{x-x_i}{x_j-x_i} \\\\\n\t= &\\frac{x-x_0}{x_j-x_0} \\ldots\n\t\\frac{x-x_{j-1}}{x_j-x_{j-1}}\n\t\\frac{x-x_{j+1}}{x_j-x_{j+1}} \\ldots\n\t\\frac{x-x_k}{x_j-x_k}\n\\end{aligned}\n\\end{equation}\\\\\n\n\\noindent The \\textbf{Lagrangian basic polynomial} $\\ell _{j}(x)$ is characterized by taking the value \\textbf{1} at ${x_{j}}$ and the value \n\\textbf{0} at the other points ${x_{i}}$, ${i\\neq j}$.\n\n\n\\newpage\n\\subsection{Proof of existence}\nFor $k+1$ given points $(x_{0}, y_{0}), \\ldots, (x_{k}, y_{k})$, the idea of Lagrange interpolation is to find, for each node $x_{j}$, a polynomial $\\ell _{j}(x)$ that takes the value \\textbf{1} at $x_{j}$ and the value \\textbf{0} at the other points. Thus, the polynomial $y_{j}\\ell _{j}(x)$ takes the value $y_{j}$ at point $x_{j}$ and \\textbf{0} at other points. \\\\\n\n\\noindent And the polynomial \n\\begin{equation*}\n\tL(x) \\coloneqq \\sum_{j=0}^k y_j \\ell_j(x)\n\\end{equation*}\n\n\\noindent can satisfy :\n\\begin{equation*}\n\tL(x_{j})=\\sum _{{i=0}}^{{k}}y_{i}\\ell _{i}(x_{j})=0+0+\\cdots +y_{j}+\\cdots +0=y_{j}\n\\end{equation*}\n\n\\noindent Polynomials with a value of \\textbf{0} at the other points are easy to find, for example:\\\\\n$(x-x_{0})\\cdots (x-x_{{j-1}})(x-x_{{j+1}})\\cdots (x-x_{{k}})$\\\\\n\n\\noindent It takes the value $(x_{j}-x_{0})\\cdots (x_{j}-x_{{j-1}})(x_{j}-x_{{j+1}})\\cdots (x_{j}-x_{{k}})$ at point $x_{j}$.\\\\\n\n\\noindent Since the $x_{i}$ have been assumed to be pairwise distinct, the above value is not equal to zero. Thus, by dividing the polynomial by this value, we get a polynomial that equals ``\\textbf{1} at $x_{j}$ and \\textbf{0} at the other points'':\n\n\\begin{equation*}\n\\begin{aligned}\n\t\\ell_j(x) \\coloneqq &\\prod_{i = 0,\\, i \\neq j}^k \n\t\\frac{x-x_i}{x_j-x_i} \\\\\n\t= &\\frac{x-x_0}{x_j-x_0} \\ldots\n\t\\frac{x-x_{j-1}}{x_j-x_{j-1}}\n\t\\frac{x-x_{j+1}}{x_j-x_{j+1}} \\ldots\n\t\\frac{x-x_k}{x_j-x_k}\n\\end{aligned}\n\\end{equation*}\n\\newline\n\\noindent This is the Lagrangian basic polynomial.\n\n\\newpage\n\\subsection{Proof of uniqueness}\n\nThere is only one interpolating polynomial of degree not exceeding $k$: for any two such polynomials \n$P_{1}$ and $P_{2}$, the difference $P_{1}-P_{2}$ is \\textbf{0} at all $k+1$ points, so it must be a multiple of the polynomial $(x-x_{0})(x-x_{{1}})\\cdots (x-x_{{k}})$. 
\\\\\nTherefore, if the difference $P_{1}-P_{2}$ is not identically \\textbf{0}, its degree must be at least $k+1$. However, $P_{1}-P_{2}$ is the difference of two polynomials of degree not exceeding $k$, so its degree does not exceed $k$. So $P_{1}-P_{2}=0$, that is to say $P_{1}=P_{2}$. This proves the uniqueness.\n\n\\newpage\n\\subsection{Geometric property}\n\nThe Lagrangian basic polynomials $\\ell_{0}, \\ell_{1}, \\cdots, \\ell_{n}$ used in Lagrangian interpolation (determined by a set of nodes $x_{0} < x_{1}< \\cdots <x_{n}$) can be thought of as a basis of the linear space ${\\mathbb {K}}_{n}[X]$ of polynomials of degree not more than $n$.\\\\\n\n\\noindent First, suppose there is a set of coefficients $\\lambda _{0}, \\lambda _{1}, \\cdots, \\lambda _{n}$\\\\\nwith $P=\\lambda _{0}\\ell _{0}+\\lambda _{1}\\ell _{1}+\\cdots +\\lambda _{n}\\ell _{n}=\\textbf{0}$\\\\\n\n\\noindent Then, on the one hand, $P$ is the Lagrangian interpolation polynomial satisfying: \\\\\n$P(x_{0})=\\lambda _{0}, P(x_{1})=\\lambda _{1}, \\cdots, P(x_{n})=\\lambda _{n}$ \\\\\n\n\\noindent On the other hand, $P$ is the zero polynomial, so its value is always \\textbf{0}. \\\\\nSo:\\\\\n$\\lambda _{0}=\\lambda _{1}=\\cdots =\\lambda _{n}=\\textbf{0}$\\\\\n\n\\noindent This proves that $\\ell _{0}, \\ell _{1}, \\cdots, \\ell _{n}$ are linearly independent. At the same time there are $n+1$ of them in total, which is exactly the dimension of ${\\mathbb {K}}_{n}[X]$. So $\\ell _{0}, \\ell _{1}, \\cdots, \\ell _{n}$ form a basis of ${\\mathbb {K}}_{n}[X]$.\\\\\n\n\\noindent An advantage of the Lagrangian basic polynomials as a basis is that they all have the same degree $n$.\n\n\\newpage\n\\section{Problem 2}\n\\textbf{Question: Give some data}\n\\[\n\t\\begin{array}{r|r}\n\tt(days) & 7 \\quad 14 \\quad\\ \\ 21 \\quad\\ \\ 28 \\quad\\ \\ 35\\quad \\ \\ 42 \\\\\n\t\\hline\n\tP(number\\;of\\;observed\\;flies) & 8\\quad41\\quad133\\quad250\\quad280\\quad297 \\\\\n\t\\end{array}\n\\]\n\n\\noindent(a) Use the one-term model to fit the data via the least-squares method.\\\\\n(b) Use a fifth-order polynomial to interpolate the data.\\\\\n(c) Use low-order polynomials to minimize the $L_1$ norm, $L_2$ norm and $L_\\infty$ norm. 
\n\nTry second, third order polynomials, respectively.\n\n\\subsection{(a)}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.3]{figure/HW4_2_a.png}\n\\end{figure}\n\n\\begin{center}\n$y = (2279*x)/245 - 896/15$\n\\end{center}\n\n\n\\newpage\n\\subsection{(b)}\n\\begin{center}\nThe Lagrangian interpolation is used to interpolate the data:\n\\end{center}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=.3]{figure/HW4_2_b.png}\n\\end{figure}\n\\[ \\begin{split} \ny = (11*x^5)/84035 &- (145*x^4)/9604 + (1283*x^3)/2058 \\\\\n\t\t\t\t\t&- (2181*x^2)/196 + (9712*x)/105 - 274\n\\end{split} \\]\n\n\\newpage\n\\subsection{(c)}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=.3]{figure/HW4_2_c.png}\n\\end{figure}\n\nSecond order polynomials:\n\\begin{center}\n$y = (3714*x)/245 - (41*x^2)/343 - 572/5$\n\\end{center}\n\nThird order polynomials:\n\\begin{center}\n$y = -(58*x^3)/3087 + (1298*x^2)/1029 - (6185*x)/441 + 48$\n\\end{center}\n\n\\newpage\n\\subsection{Matlab code}\n\\begin{lstlisting}[language=matlab]\n% HW4_2_a.m\nx = [7,14,21,28,35,42];\ny = [8,41,133,250,280,297];\np = polyfit(x,y,1);\npoly2sym(p)\nx1 = linspace(0,49);\ny1 = polyval(p,x1);\nfigure\nplot(x,y,'o')\nhold on\nplot(x1,y1)\nhold off\n\n% Hw4_2_b.m\nx = [7 14 21 28 35 42];\ny = [8 41 133 250 280 297];\nx1 = linspace(0,49);\ny1 = la(x,y,x1);\nplot(x,y,'o')\nhold on\nplot(x1,y1)\nhold off\n\n\n\n\n% la.m\nfunction yy=la(x1,y1,xx)\nsyms x\nn=length(x1);\n    for i=1:n\n    t=x1;t(i)=[];\n    L(i)=prod((x-t)./(x1(i)-t));% the vector L stores the interpolation basis functions\n    end\nu=sum(L.*y1);\np=simplify(u) % p is the simplified Lagrange interpolation polynomial (symbolic)\nyy=subs(p,x,xx);    % evaluate p, a function of x, at the points xx\nend\n\n% Hw4_2_c.m\nx = [7 14 21 28 35 42];\ny = [8 41 133 250 280 297];\np2 = polyfit(x,y,2);\np3 = polyfit(x,y,3);\npoly2sym(p2)\npoly2sym(p3)\nx1 = linspace(0,49);\ny2 = polyval(p2,x1);\ny3 = polyval(p3,x1);\nplot(x,y,'o')\nhold on\nplot(x1,y2,'-')\nplot(x1,y3,'--')\n\\end{lstlisting}\n\n\n\n\\end{spacing}\n\n\n\\end{document}", "meta": {"hexsha": "3101e0b8ca1f19d673f46bd0c1a631ec2dc53d9d", "size": 9144, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex/HW4/11611118-4/HW4.tex", "max_stars_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_stars_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-30T11:32:36.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-30T11:32:36.000Z", "max_issues_repo_path": "Latex/HW4/11611118-4/HW4.tex", "max_issues_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_issues_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex/HW4/11611118-4/HW4.tex", "max_forks_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_forks_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4754098361, "max_line_length": 1113, "alphanum_fraction": 0.6967410324, "num_tokens": 3215, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599562, "lm_q2_score": 0.8289388104343892, "lm_q1q2_score": 0.5850525911377931}}
{"text": "%!TEX root = ../Thesis.tex\n\\chapter{Arnoldi's Method for Monte Carlo Particle Transport \\label{ch:ArnoldiMethod}}\nAs described in the introduction, the power method has been the method of choice for calculating eigenvalues and eigenvectors for Monte Carlo criticality applications.  The power method is suited well for this type of calculation because of its simple, matrix-free implementation.\n\nThe power method, however, has a few drawbacks.  The rate of convergence to the fundamental eigenvalue---determined by the dominance ratio of the first higher-order eigenvalue to the fundamental eigenvalue---can be too small.  It is also limited to calculating one eigenvalue at a time.  Other Krylov subspace methods exist that have some of the same positive benefits of the power method, but also address the negative aspects that make the power method slow and limited.  Chief among these is Arnoldi's method.\n\nArnoldi's method \\cite{Arnoldi:1951The-P-0} is just one such method from the class of Krylov subspace methods.  While its implementation is not as straightforward as with the power method, an explicit form of the transport-fission operator remains unnecessary; we only need to know how to apply the operator to a fission source.  In this sense, Arnoldi's method is as equally suited to Monte Carlo criticality applications as the power method, but it has never been studied in this application.  In this dissertation, the first Monte Carlo application of Arnoldi's method for particle transport is demonstrated.\n\nArnoldi's method has advantages over the power method that may prove to make the difficulty in its implementation a small matter.  One benefit Arnoldi's method brings to Monte Carlo particle transport is the ability to calculate multiple eigenvalues and eigenvectors with minimal extra computational expense over what is required to calculate the fundamental mode.  In addition, the fission source converges faster in Arnoldi's method than in the power method, reducing the number of inactive iterations required.  In this chapter the basic Arnoldi method and the Monte Carlo implementation for particle transport is introduced.   The ability of Monte Carlo Arnoldi's method to calculate multiple eigenpairs of the fission-transport operator is also demonstrated.  Arnoldi's superior performance in converging the eigenvalue and the fission source is shown in \\Fref{ch:AdvancedArnoldi}.\n\n\\section{Arnoldi's Method \\label{sec:ArnoldiMethod}}\nArnoldi's method generates a Krylov subspace similar to \\Fref{eq:KrylovSubspaceBasisVectors} except at each iteration the basis vectors $\\left\\{v_i\\right\\}_{i=1}^m$ are orthogonalized against all the previously calculated Arnoldi vectors and normalized.  The basis vectors that form the Krylov subspace are called Arnoldi vectors \\citep[see][pp. 435-438]{Watkins:2002Funda-0}.\n\nAn Arnoldi process begins with a normalized vector\n\\begin{equation}\n    v_1 = \\frac{v}{\\left\\|v\\right\\|_2},\n\\end{equation}\nwhere \n\\begin{equation}\n    \\left\\|v\\right\\|_2 = \\left(\\sum_{i=1}^n\\left|v_i\\right|^2\\right)^{1/2}\n\\end{equation}\nis the Euclidean norm.  
We then apply the linear operator to $v_1$ and orthogonalize the result against $v_1$, yielding\n\\begin{equation}\n    \\tilde{v}_2 = \\A v_1 - \\langle\\A v_1, v_1\\rangle v_1 = \\A v_1 - h_{1,1} v_1,\n\\end{equation}\nwhere\n\\begin{equation}\n    h_{jm} = \\langle \\A v_m, v_j\\rangle\n    \\label{eq:ArnoldiInnerProduct}\n\\end{equation}\nis the inner product between the vectors $v_m$ and $v_j$, \\[\\int v_m(x)v_j(x) \\dd x.\\]  Then we normalize $\\tilde{v}_2$\n\\begin{equation}\n    v_2 = \\frac{\\tilde{v}_2}{\\left\\|\\tilde{v}_2\\right\\|_2} = \\frac{\\tilde{v}_2}{h_{2,1}}.\n\\end{equation}\nThis process continues iteratively; at the $m$-th iteration we have\n\\begin{subequations}\\begin{gather}\n    \\tilde{v}_{m+1} = \\A v_m - \\sum_{j=1}^m h_{jm} v_j \\qquad \\mathrm{(Orthogonalization)} \\label{eq:ArnoldiOrthogonalization} \\\\[2ex]\n    v_{m+1} = \\frac{\\tilde{v}_{m+1}}{h_{m+1,m}} \\qquad \\mathrm{(Normalization)}. \\label{eq:ArnoldiNormalization}\n\\end{gather}\\end{subequations}\n\n\nThe process of orthogonalization and normalization involves the calculation of the values $h_{jm}$.  These values are the elements of an upper-Hessenberg matrix, $H_{m+1,m}$.  (An upper Hessenberg matrix is upper triangular except that the first subdiagonal may be non-zero.)  We can write the $m$-th Arnoldi iteration in matrix form as\n\\begin{equation}\n    \\A V_m = V_{m+1}H_{m+1,m}, \n    \\label{eq:ArnoldiMatrixFormulation}\n\\end{equation}\nwhere the columns of $V_m$ are the Arnoldi vectors and the elements of $H_{m+1,m}$ are the results of the inner products of Arnoldi vectors as described in \\Fref{eq:ArnoldiInnerProduct}.  If we separate the last column of $V_{m+1}$ and the last row of $H_{m+1,m}$, we obtain from \\Fref{eq:ArnoldiMatrixFormulation}\n\\begin{equation}\n    \\A V_m = V_mH_m + v_{m+1}h_{m+1,m}e_m^T\n    \\label{eq:ArnoldiFactorization}\n\\end{equation}\nwhere $e_m$ is the $m$-th standard basis vector, $v_{m+1}$ is the Arnoldi vector calculated during the $m$-th Arnoldi iteration and $h_{m+1,m}$ is the normalization factor for the new Arnoldi vector.  Thus we see that at the $m$-th Arnoldi iteration we add a row and a column to $V_mH_m$.  \n\nEquation \\eqref{eq:ArnoldiFactorization} is called the \\emph{Arnoldi factorization} and is an important equation in further analysis of Arnoldi's method.  \n\n\\subsection{Finding Ritz Pairs from Arnoldi Factorization \\label{sec:RitzPairs}}\nThe Arnoldi process generates an upper-Hessenberg matrix $H_m$ which is the projection of \\A{} onto the Krylov subspace defined by the Arnoldi vectors.  Since $H_m$ is generated after a small number of iterations, its size is small, and we can find its eigenvalues and eigenvectors with relative ease.  The eigenpairs of $H_m$, \\mbox{$\\left(\\mu_i, x_i\\right)$}, can be used to find Ritz pairs---approximate eigenpairs---of \\A.  To see this, multiply the Arnoldi factorization, \\Fref{eq:ArnoldiFactorization} on the right by an eigenvector of $H_m$, $x_i$\n\\begin{equation}\n    \\begin{split}\n        \\A V_mx_i &= V_m\\left( H_mx_i \\right) + v_{m+1}h_{m+1,m}e_m^Tx_i \\\\\n        \\A V_mx_i &= V_m\\left( \\mu_ix_i \\right) + v_{m+1}h_{m+1,m}e_m^Tx_i \\\\\n        \\A y_i &= \\mu_iy_i + v_{m+1}h_{m+1,m}e_m^Tx_i,\n    \\end{split}\n    \\label{eq:ArnoldiFactorizationRitzPair}\n\\end{equation}\nwhere $y_i = V_mx_i$. \\mbox{$\\left(\\mu_i, y_i\\right)$} is a Ritz pair of \\A{} or an approximation to an eigenpair of \\A.  
The Ritz vector $y_i$ is just the product of the Arnoldi vectors and an eigenvector of $H_m$.\n\nThe residual is defined as\n\\begin{equation}\n    r_m \\equiv \\A y - \\mu y.\n\\end{equation}\nWe can see from \\Fref{eq:ArnoldiFactorizationRitzPair} that the magnitude of the residual is just the magnitude of the last element of the eigenvector $x_i$ times $h_{m+1,m}$\n\\begin{equation}\n    \\left|r_m\\right| = \\left\\|\\A y - \\mu y\\right\\| = \\left|h_{m+1,m}\\right|\\left|e_m^Tx_i\\right|.\n    \\label{eq:Residual}\n\\end{equation}\nIt is easy to see that if the residual is zero, then the Ritz pair is an eigenpair of \\A.  The residual is therefore a good indication---but not a guarantee---of the quality of the eigenpair approximation; in fact a small residual guarantees that \\mbox{$\\left(\\mu, x\\right)$} is an exact eigenpair of a matrix close to \\A{} \\citep[see][]{Watkins:2002Funda-0}.\n\n\\subsection{Explicitly Restarted Arnoldi \\label{sec:ERAM}}\n\\begin{comment}\nThe rate at which Arnoldi's method converges to good approximations of eigenpairs depends upon the starting vector.  Take for example starting with the eigenvector associated with the desired eigenvalue, i.e. largest eigenvalue in magnitude.  If we are so fortunate to know this in advance, the Krylov subspace calculated by Arnoldi's method will become invariant after just one iteration and the eigenpair will have been found exactly.  If we desire multiple, say $j$, eigenpairs and the starting vector is a linear combination of the $j$ eigenvectors associated with the desired eigenvalues then Arnoldi's method will be complete (the Krylov subspace will become invariant) in $j$ steps with all the desired eigenpairs known.\n\nUnfortunately we don't know \\emph{a priori} the desired eigenvector(s) and eigenvalue(s); if we did, we wouldn't need to perform the calculation.  We may not know beforehand a good guess to the solution, but after a few iterations of Arnoldi's method we have a better estimate of the desired solution than with what we started.  We could start Arnoldi's method over using the estimate of our desired eigenvectors (Arnoldi's Ritz vectors) as the new initial guess.\n\\end{comment}\nAs Arnoldi's method proceeds, each iteration adds one Arnoldi vector, and the size of the Krylov subspace expands.  The increase in the number of Arnoldi vectors and the size of the Krylov subspace is problematic.  First, the memory requirements increase, and second, it is computationally more expensive  because there are more vectors that the newest Arnoldi vector must be orthogonalized against.\n\nArnoldi's method begins with an estimate of the fundamental eigenvector.  After a few Arnoldi iterations we have a better estimate of the fundamental eigenvector than what we started with.  We can therefore restart Arnoldi's method using the better estimate of the eigenvector as the starting vector for the Arnoldi process.  The idea of starting Arnoldi's method using the results of several previous iterations of Arnoldi's method is known as Restarted Arnoldi's method (RAM) and was first proposed by \\citet{Saad:1980Varia-0}.  \n\nArnoldi's method can be restarted repeatedly, each time starting with an improved starting vector that is a linear combination of the estimates of the desired eigenvectors of $\\A$.  Each sequence of several iterations and a calculation of the Ritz pairs of \\A{} is called a \\emph{restart}.  
Restarting Arnoldi's method saves computational expense by reducing the number of Arnoldi vectors that we must orthogonalize against and reduces memory requirements as fewer Arnoldi vectors must be stored.  Each iteration adds one basis vector to the Krylov subspace so the size of the Krylov subspace is the same as the number of Arnoldi vectors.  The size, $m$, of the Krylov subspace is also the size of the upper-Hessenberg matrix \\mbox{$H_m \\in \\mathbb{R}^{m \\times m}$} and is therefore the number of eigenpairs that can be estimated in one restart.  Although not necessary, the number of iterations in each restart is generally the same.\n\nThe estimated eigenvectors of $\\A$ form a basis that we can use in a linear expansion to represent a vector,\n\\begin{equation}\n    \\hat{v} = c_1y_1 + c_2y_2 + \\cdots + c_ny_n,\n\\end{equation}\nwhere the $y_i$'s are the estimated eigenvectors of the linear operator $\\A$ and the $c_i$'s are some expansion coefficients.  If the $j$ eigenvalues largest in magnitude are desired, then we would like to have our initial vector be\n\\begin{equation}\n    \\hat{v} = c_1y_1 + \\cdots + c_jy_j + 0\\,y_{j+1} + \\cdots + 0\\,y_n;\n\\end{equation} \ni.e., we want to suppress any information from the undesired portion of the spectrum of $\\A$.\n\nThe initial vector will most likely contain significant components of all the eigenvectors.  We can reduce the components of eigenvectors from the undesired region of the spectrum by forcing \\mbox{$c_{j+1} = \\cdots = c_n = 0$}.  The initial vector then becomes\n\\begin{equation}\n    \\hat{v} = c_1y_1 + \\cdots + c_jy_j.\n    \\label{eq:ExplicitRestartVector}\n\\end{equation} \nThe values of the coefficients \\mbox{$c_1, c_2, \\ldots, c_j$} are not important and are conveniently chosen to be 1.  When Arnoldi's method is restarted using $\\hat{v}$ from \\Fref{eq:ExplicitRestartVector} as the initial vector for a new Arnoldi restart, it is an explicit restart and will be referred to as \\emph{explicitly} restarted Arnoldi's method (ERAM).  It is implemented by only summing the vectors \\mbox{$x_1 + \\cdots + x_j$} and ignoring the other eigenvectors.\n\n\\section{Monte Carlo Implementation of Explicitly Restarted Arnoldi's Method \\label{sec:MCERAM}}\nNow that the basic restarted Arnoldi's method has been described, we can proceed to show how it can be implemented in a Monte Carlo particle transport application.  First we note that eigenvalue estimates are made at the end of every restart.  Eigenvalue estimates could have been made at every iteration, but that takes additional computational expense.  A choice was made to calculate and store the eigenvalue estimates at the end of every restart to reduce this cost.  The mean and standard deviation of these stored eigenvalue estimates can be calculated.  In this sense, an Arnoldi restart is similar to a power method iteration.  
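To make the structure of a restart concrete, the following is a minimal sketch of explicitly restarted Arnoldi's method (in Python with NumPy; the operator is represented only by a generic matrix-free function \\texttt{apply\\_A} standing in for the Monte Carlo transport-fission operator, and all names here are illustrative assumptions rather than the code used for the results in this chapter):\n\\begin{verbatim}\nimport numpy as np\n\ndef arnoldi_restart(apply_A, v, m):\n    # One restart: m Arnoldi iterations starting from v.  apply_A(v) -> A v\n    # is the only access to the operator (the matrix-free setting).\n    n = v.size\n    V = np.zeros((n, m + 1))\n    H = np.zeros((m + 1, m))\n    V[:, 0] = v / np.linalg.norm(v)\n    for k in range(m):\n        w = apply_A(V[:, k])\n        for j in range(k + 1):           # orthogonalize against previous vectors\n            H[j, k] = np.dot(w, V[:, j])\n            w = w - H[j, k] * V[:, j]\n        H[k + 1, k] = np.linalg.norm(w)  # normalization factor h_{m+1,m}\n        V[:, k + 1] = w / H[k + 1, k]    # (assumes no invariant subspace is hit)\n    return V, H\n\ndef eram(apply_A, v0, m, restarts, j):\n    # Explicit restarts: keep the j Ritz pairs largest in magnitude and\n    # restart from the sum of their Ritz vectors (c_1 = ... = c_j = 1).\n    v = v0\n    for _ in range(restarts):\n        V, H = arnoldi_restart(apply_A, v, m)\n        mu, X = np.linalg.eig(H[:m, :m])     # eigenpairs of the small H_m\n        order = np.argsort(-np.abs(mu))[:j]  # j eigenvalues largest in magnitude\n        Y = (V[:, :m] @ X[:, order]).real    # Ritz vectors y_i = V_m x_i;\n        v = Y.sum(axis=1)                    # imaginary parts dropped for brevity\n    return mu[order], Y\n\\end{verbatim}\nIn the Monte Carlo setting, \\texttt{apply\\_A} corresponds to sampling neutrons from a fission source, transporting them, and tallying the new fission source, and the estimates $\\mu_i$ would be recorded at the end of each restart as described above.\n\n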
(Eigenvectors are unique only up to a multiplicative constant so both estimates are valid.)  Prior to adding a new eigenvector estimate when calculating the mean, the dot product of the new eigenvector estimate with the previous eigenvector estimate is calculated.  If the dot product is negative, the new eigenvector is multiplied by $-1$.  This preserves the normalization given in \\Fref{eq:NormalizedEigenvectors}.\n\nThe application of the linear operator in \\Fref{eq:ArnoldiOrthogonalization} was described in \\Fref{sec:ParticleTransport}.  In brief, neutrons are sampled from the fission source $v_k$, transported, and the positions of the neutrons at the time they initiate fission are stored, creating a new fission source, $\\tilde{v}_{k+1}$.  \n\n\\subsection{Negative Sources \\label{sec:NegativeSource}}\nWhen using the power method to calculate the fundamental eigenmode for criticality calculations, we are guaranteed that the solution is everywhere positive and the vectors that are sampled from at each iteration are also everywhere positive.  With Arnoldi's method we are not so fortunate.  The process of orthogonalization guarantees that some of the Arnoldi vectors will be partially negative.  This presents two challenges to Monte Carlo particle transport: first, a negative source means a negative flux, which is not physical; and second, to sample from a source it must be assumed to be everywhere positive.  \n\nTo sample from a distribution, it must be everywhere positive and integrate to 1.  If $p(x)$ is a probability distribution function, the quantity $p(x)\\dd x$ is the probability of choosing a point in $\\dd x$ about $x$.  For a fission source $v(x)$, which may have negative regions, it is first normalized such that\n\\begin{subequations}\\begin{align}\n    \\int \\left|v(x)\\right| \\dd x &= q \\\\\n    p(x) &= \\frac{\\left|v(x)\\right|}{q}.\n    \\label{eq:FissionSourceNormalization}\n\\end{align}\\end{subequations}\nWith this normalization, the quantity $p(x) \\dd x$ is the probability of choosing a point in $\\dd x$ about $x$.  A neutron position $x_s$ is sampled from $p(x)$ and is given an initial weight of\n\\begin{equation}\n    \\omega = \\frac{v(x_s)}{\\left|v(x_s)\\right|}, \n    \\label{eq:InitialWeight}\n\\end{equation}\nor, alternatively\n\\begin{equation}\n    \\omega = \\begin{cases}\n        1, & v(x_s) > 0 \\\\\n        -1, & v(x_s) < 0.\n    \\end{cases}\n    \\label{eq:OtherInitialWeight}\n\\end{equation}\nNeutrons sampled where \\mbox{$v(x_s) < 0$} reduce the tally where they score; neutrons sampled where \\mbox{$v(x_s) > 0$} contribute positively to the tally where they score.  A neutron is never sampled where \\mbox{$v(x_s) = 0$} because there the probability of choosing this point is exactly zero.\n\nOnce a particle has been sampled, Monte Carlo transport proceeds as usual, with the neutron scoring $\\omega\\left(\\nu\\Sigma_f/\\Sigma_T\\right)$ in the proper bin at each collision.  If non-analog Monte Carlo is being done, particle weight is reduced at each collision and Russian Roulette is played if the absolute value of the weight becomes small.  
Giving a neutron a signed weight does not interfere with any variance reduction or tallying techniques.\n\nAn alternative approach to applying \\A{} to $v(x)$ in this way is to separate $v(x)$ into its positive and negative parts, \\vp{} and \\vm{} respectively, and apply the transport-fission operator to each part independently\n\\begin{equation}\n    \\A v(x) = \\A\\vp - \\A\\left|\\vm\\right|.\n\\end{equation}\nThe first approach does this directly by assigning the weights as described.  It is also superior because it samples the positive and negative parts proportionately to the magnitude of the positive or negative part; the second approach would use the same number of particles to apply the linear operator for both the positive and negative part, even if one part is significantly larger than the other.  \n\nOnce the sampling of a fission source, the transport of particles, and the creation of a new fission source are completed, the new fission source is normalized per source particle and multiplied by the integral of the absolute value of the previous source\n\\begin{equation}\n    v_{j+1}(x) = \\A v_j(x) \\frac{1}{N} \\int |v_j(x)| \\dd x,\n    \\label{eq:SourceScaling}\n\\end{equation}\nwhere $v_j(x)$ is the source we sample from and $\\A v_j(x)$ is the new source created after sampling from $v_j(x)$ and transporting.  This scaling is similar to the scaling performed in the power method shown in \\Fref{eq:eVectorIterative} and \\Fref{eq:eValueIterative}.  After the vector has been properly scaled, Arnoldi's method continues with orthogonalization and normalization of the newest Arnoldi vector.\n\n\\subsection{Spatial Discretization \\label{sec:SpatialDiscretization}}\nThe orthogonalization and normalization of the basis vectors in Arnoldi's method requires taking the inner product of two vectors\n\\begin{equation}\n    \\langle v_j,v_k\\rangle = \\int v_j(x)v_k(x) \\dd x.\n    \\label{eq:InnerProduct}\n\\end{equation}\nIn Monte Carlo Arnoldi's method we must take the inner product of two fission sources.  To do this, the fission source is represented as a linear combination of piecewise constant functions\n\\begin{equation}\n    \\vP(x) = \\sum_{b=1}^B a_b \\Pi_b(x),\n    \\label{eq:source_function}\n\\end{equation}\nwhere $B$ is the number of spatial bins and \n\\begin{equation}\n    \\Pi_b(x) = \\begin{cases}\n        \\left(\\frac{1}{\\Delta x_b}\\right)^{1/2}, & x_b \\leq x < x_{b+1} \\\\\n        0, & \\mathrm{otherwise},\n    \\end{cases}\n    \\label{eq:PiFunction}\n\\end{equation}\nwhere $\\Delta x_b = \\left(x_{b+1}-x_b\\right)$ is the width of bin $b$.  The term $a_b\\sqrt{\\Delta x_b}$ is the number of fission neutrons generated in the spatial bin $b$, \\mbox{$x \\in \\left[x_b, x_{b+1}\\right)$}.  
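Combining the signed-weight sampling of \\Fref{sec:NegativeSource} with this binned representation, the sampling of starting positions and weights can be sketched as follows (a minimal Python illustration under the piecewise-constant representation; the function and array names and the use of NumPy are assumptions of this sketch, not the code used for this work):\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_signed_source(a, edges, n):\n    # a[b] are the expansion coefficients of the piecewise-constant source;\n    # edges[0] < ... < edges[B] are the bin boundaries.\n    widths = np.diff(edges)\n    neutrons = a * np.sqrt(widths)    # a_b sqrt(dx_b): fission neutrons in bin b\n    q = np.abs(neutrons).sum()        # q = integral of |v(x)|\n    p = np.abs(neutrons) / q          # p(x) = |v(x)|/q, bin by bin\n    bins = np.random.choice(neutrons.size, size=n, p=p)\n    x = edges[bins] + np.random.random(n) * widths[bins]  # uniform within bin\n    w = np.sign(neutrons[bins])       # omega = v(x_s)/|v(x_s)| = +1 or -1\n    return x, w\n\\end{verbatim}\nBins where the source vanishes are never sampled, since their probability is exactly zero.\n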
Note that, to sample from $\\vP(x)$ (as in the sketch above), we first sample a bin and then sample uniformly within the bin to determine the position of the particle.\n\nRepresenting the fission source with a piecewise-constant-in-space approximation, the elements of the Arnoldi vectors are just the expansion coefficients \n\\begin{equation}\n    \\vP = \\left[a_1, a_2, \\ldots, a_B\\right]^T,\n\\end{equation}\nand the inner product between two fission sources is defined as\n\\begin{equation}\n    \\langle \\vP^{(j)},\\vP^{(k)}\\rangle = \\sum_{b=1}^B a_b^{(j)}a_b^{(k)},\n    \\label{eq:InnerProductFissionSources}\n\\end{equation}\nwhere $a_b^{(j)}$ and $a_b^{(k)}$ are the expansion coefficients from the fission sources $\\vP^{(j)}$ and $\\vP^{(k)}$ respectively.\n\nApplying $\\A$ to a vector of coefficients $\\{a_b\\}_{b=1}^B$ simply requires sampling the piecewise constant source function $\\vP(x)$ in \\Fref{eq:source_function}, transporting these neutrons until they cause another fission, and tallying the resulting fission neutrons over the bins \\citep[see][]{Conlin:2008Arnol-0}.  This generates a truncation error.\n\nWith this representation of the fission source, the sampling techniques described in \\Fref{sec:NegativeSource} can be used, and the inner product between two fission sources has been defined.  To estimate the mean eigenvector, we can  simply calculate the mean value of the coefficient in each bin.  \n\n\\section{Numerical Results \\label{sec:ArnoldiResults}}\nEverything necessary for a Monte Carlo application of explicitly restarted Arnoldi's method has been described.  To demonstrate how Monte Carlo Arnoldi's method can calculate multiple eigenvalues and eigenvectors of the transport-fission operator, a few simulations of a homogeneous, semi-infinite slab of multiplying material are shown.  The simulations shown here were chosen to match results published by \\cite{Garis:1991One-s-0,Modak:1995Evalu-0} and \\cite{Dahl:1979Eigen-0}.  The cross sections are: \\mbox{$\\nu\\Sigma_f = 1.0$}, \\mbox{$\\Sigma_a = 0.2$}, \\mbox{$\\Sigma_s = 0.8$} with \\mbox{$\\Sigma_t = 1.0$}, thus the mean free path for this geometry is $1/\\Sigma_t = 1.0$.  I will show the results of slabs with width 0.2, 2.0 or 20 mfp.\n\nFor each slab width I have run two simulations; one simulation using Arnoldi's method and the other simulation using the power method for comparison.  In every simulation $10^5$ particles are tracked in an iteration.  The power method has 250 inactive and 1000 active iterations while Arnoldi's method has 25 inactive and 100 active restarts with 10 iterations in each restart.  Therefore the Krylov subspace size is 10.  The number of inactive and active iterations in each method is the same.  The total number of particles tracked in each simulation is also the same.  
The slab is discretized into 50 spatial bins for the 0.2 mfp problem and 75 spatial bins for the 2.0 and 20 mfp problems.\n\n\\begin{comment}\n\\begin{table}[h] \\centering\n    \\begin{tabular}{ccccccc}\n        \\toprule\n        Width & Spatial & Particles &\\multirow{2}{*}{Method} &  \\multirow{2}{*}{Iterations} & Inactive & Active \\\\\n        (mfp) & Bins & per Iteration &&  & Cycles & Cycles \\\\\n        \\midrule\n        \\multirow{2}{*}{0.2} & \\multirow{2}{*}{50} & \\multirow{2}{*}{$1 \\times 10^5$} & Power & -- & 250 & 1000 \\\\\n         & & & Arnoldi & 10 & 25 & 100 \\\\\n        \\midrule\n        \\multirow{2}{*}{2.0} & \\multirow{2}{*}{75} & \\multirow{2}{*}{$1 \\times 10^5$} & Power & -- & 250 & 1000 \\\\\n         & & & Arnoldi & 10 & 25 & 100 \\\\\n        \\midrule\n        \\multirow{2}{*}{20} & \\multirow{2}{*}{75} & \\multirow{2}{*}{$1 \\times 10^5$} & Power & -- & 250 & 1000 \\\\\n         & & & Arnoldi & 10 & 25 & 100 \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\caption{Problem Parameters.  Power method cycles are power iterations while Arnoldi method cycles are explicit restarts.}\n    \\label{tab:BasicParameters}\n\\end{table}\n\\end{comment}\n\nThe results of these simulations are shown in \\Fref{tab:BasicResults}, along with the dominance ratio (DR) for each of the three slab widths.  The fundamental eigenvalue estimates are shown for both the power method and Arnoldi's as well as the first and second harmonic eigenvalue estimates from Arnoldi's method.  The published results from \\cite{Garis:1991One-s-0,Modak:1995Evalu-0} and \\cite{Dahl:1979Eigen-0} are given as the reference.  Almost all the eigenvalue estimates are within one standard deviation of the reference solution.  The only exceptions are the second harmonic estimates for the 2.0 and 20 mfp thick problems and they are both within two standard deviations of the reference solution.\n\n\\begin{table}[h]\\centering\n    \\begin{tabular}{cccccc}\n        \\toprule\n        Width & \\multirow{2}{*}{Method} & \\multirow{2}{*}{Eigenvalue} & Standard & \\multirow{2}{*}{Reference} & \\multirow{2}{*}{Error} \\\\\n        (mfp) & & & Deviation & \\\\\n        \\midrule\n        \\multirow{2}{*}{0.2} & Power    & 0.329979 & 6.3\\e{-5} & 0.330000 & 2.1\\e{-5} \\\\\n        \\cmidrule{2-6}                    \n        & \\multirow{3}{*}{Arnoldi}      &  0.33008 & 1.8\\e{-4} &  0.33000 & 8.3\\e{-5} \\\\\n        \\multirow{2}{*}{DR = 0.23997} & &  0.07911 & 1.5\\e{-4} &  0.07919 & 7.6\\e{-5} \\\\\n        &                               &  0.04493 & 1.6\\e{-4} &  0.04499 & 5.8\\e{-5} \\\\\n        \\midrule                          \n        \\multirow{2}{*}{2.0} & Power    &  2.09593 & 2.7\\e{-4} &  2.09599 & 6.0\\e{-5} \\\\\n        \\cmidrule{2-6}                    \n        & \\multirow{3}{*}{Arnoldi}      &  2.09652 & 6.9\\e{-4} &  2.09599 & 5.3\\e{-4} \\\\\n        \\multirow{2}{*}{DR = 0.4015} &  &  0.84183 & 5.8\\e{-4} &  0.84150 & 3.3\\e{-4} \\\\\n        &                               &  0.48279 & 4.5\\e{-4} &  0.48230 & 4.9\\e{-4} \\\\\n        \\midrule                          \n        \\multirow{2}{*}{20} & Power     &  4.82734 & 6.3\\e{-4} &  4.82780 & 4.6\\e{-4} \\\\\n        \\cmidrule{2-6}                    \n        & \\multirow{3}{*}{Arnoldi}      &   4.8290 & 1.5\\e{-3} &   4.8278 & 1.2\\e{-3} \\\\\n        \\multirow{2}{*}{DR = 0.9079}&   &   4.3827 & 1.4\\e{-3} &   4.3831 & 4.2\\e{-4} \\\\\n        &                               &   3.8152 & 1.4\\e{-3} &   3.8174 & 2.2\\e{-3} 
\\\\\n         \\bottomrule\n     \\end{tabular}\n     \\caption{Eigenvalue estimates from power method and Arnoldi's method for slab geometries of width 0.2, 2.0 and 20 mfp.  Reference values taken from \\cite{Garis:1991One-s-0}, and \\cite{Dahl:1979Eigen-0}. }\n     \\label{tab:BasicResults}\n\\end{table}\n\nThe figure of merit (FOM) and computational time for each simulation are given in \\Fref{tab:BasicFOM}.  The figure of merit is a measure of the efficiency of a Monte Carlo calculation.  The variance of a Monte Carlo eigenvalue calculation goes as $1/N$, where $N$ is the number of eigenvalue estimates.  The computational expense should be directly proportional to the number of eigenvalue estimates.  The figure of merit is therefore defined to be\n\\begin{equation}\n    \\mathrm{FOM} \\equiv \\frac{1}{\\sigma^2 T}\n    \\label{eq:FOM}\n\\end{equation}\nwhere $\\sigma^2$ is the variance and $T$ is the time required to perform the Monte Carlo calculation.  \n\n\\begin{table}[h] \\centering\n    \\begin{tabular}{cccccc}\n        \\toprule\n        Width & \\multirow{2}{*}{Method} & Fundamental & Standard & \\multirow{2}{*}{FOM} & Time \\\\\n        (mfp) & & Eigenvalue & Deviation & & (sec)\\\\\n        \\midrule\n        \\multirow{2}{*}{0.2}    & Power   & 0.329979 & 6.3\\e{-5} & 1.7\\e{6} &  149.0 \\\\\n                                & Arnoldi &  0.33008 & 1.8\\e{-4} & 3.3\\e{5} &   95.3 \\\\ \n        \\cmidrule{2-6}            \n        \\multirow{2}{*}{2.0}    & Power   &  2.09593 & 2.7\\e{-4} & 5.5\\e{4} &  258.1 \\\\\n                                & Arnoldi &  2.09652 & 6.9\\e{-4} & 9.8\\e{3} &  212.5 \\\\ \n        \\cmidrule{2-6}            \n        \\multirow{2}{*}{20}     & Power   &  4.82734 & 6.3\\e{-4} & 5.4\\e{3} &  463.0 \\\\\n                                & Arnoldi &   4.8290 & 1.5\\e{-3} & 1.1\\e{3} &  378.5 \\\\ \n        \\bottomrule\n    \\end{tabular}\n    \\caption{Eigenvalue estimates and figures of merit for the fundamental eigenvalue from the power method and Arnoldi's method for slab geometries of width 0.2, 2.0, and 20 mfp.}\n    \\label{tab:BasicFOM}\n\\end{table}\n\nWe see that the figure of merit is always larger for the power method than for Arnoldi's method, but that the Arnoldi simulations run faster.  The figure of merit for the power method is larger than the figure of merit for Arnoldi's method by a factor of 5 while the computational time for the power method is 1.6 times larger than Arnoldi's method for the 0.2 mfp simulation and 1.2 times larger than Arnoldi's method for the 2.0 and 20 mfp simulations.  \n\nThe power method has many more eigenvalue estimates (10 times as many for these simulations) because it calculates an estimate after every iteration while Arnoldi's method calculates an estimate after each restart consisting of many iterations.  Thus for the same number of particles tracked (computational expense, $T$) the power method has many more eigenvalue estimates and therefore its variance is smaller, and the FOM is larger.  \n\nWe can calculate the spread of the eigenvalue estimates from each method.  
The spread is the root mean squared difference of the eigenvalue estimates, $x_i$, from $\\overline{x}$, the mean eigenvalue estimate; \\[\\mathscr{s} \\equiv \\sqrt{\\frac{1}{N}\\sum_{i=1}^N \\left(x_i - \\overline{x}\\right)^2}.\\]  This should not be confused with the standard deviation \\emph{of the mean} \\[\\sigma \\equiv \\sqrt{\\frac{1}{N-1}\\left(\\frac{1}{N}\\sum_{i=1}^N \\left(x_i - \\overline{x}\\right)^2\\right)},\\] which is called simply the standard deviation in this dissertation.  The spread of the fundamental eigenvalue estimates from the active iterations/restarts for the three slab widths is shown in \\Fref{tab:BasicSpread}.  We can see that even though the standard deviation of the mean is larger in Arnoldi's method than in the power method, the spread of the eigenvalue estimates is smaller in Arnoldi's method than in the power method.  From this we can conclude that the estimate of the standard deviation (and thus the figure of merit) is dominated by the number of eigenvalue estimates.  \n\nThe figure of merit isn't a complete comparison between these two methods.  It only measures the efficiency of estimating one eigenvalue, but in Arnoldi's method we have estimates of the first three eigenvalues at no additional cost.  Furthermore, because we are electing to compare multiple eigenvalues, we have fewer eigenvalue estimates to average together in Arnoldi's method.\n\\begin{table}[h]\n    \\centering\n    \\begin{tabular}{cccc}\n        \\toprule\n        & 0.2 mfp & 2.0 mfp & 20 mfp \\\\\n        \\midrule\n        Power    & 0.0020 & 0.0084 & 0.0201 \\\\\n        Arnoldi  & 0.0018 & 0.0069 & 0.0153 \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\caption{Spread of active fundamental eigenvalue estimates from the power method and Arnoldi's method for slab geometries of width 0.2, 2.0, and 20 mfp.}\n    \\label{tab:BasicSpread}\n\\end{table}\n\nGraphical results for the 20 mfp thick slab are shown in figures \\ref{fig:BasicValuesW20}--\\ref{fig:BasicFundamentalW20}.  In \\Fref{fig:BasicValuesW20} we see the eigenvalue convergence for the fundamental and first two harmonic eigenvalue estimates.  Both inactive and active iterations are shown; the active iterations are the running average of the active eigenvalue estimates.  We can see that the spread of the estimates of the fundamental eigenvalue from Arnoldi's method is smaller than that of the estimates from the power method.  Black lines are drawn indicating the reference values published in \\cite{Garis:1991One-s-0} and \\cite{Dahl:1979Eigen-0}.  It appears that all three eigenvalue estimates agree with the reference solution.  However, we know from \\Fref{tab:BasicResults} that the estimate of the second harmonic is just outside of one standard deviation.\n\n\\Fref{fig:BasicFundamentalW20} shows the fundamental eigenvector estimate from the power method and Arnoldi's method as well as a reference solution from an S$_\\mathrm{N}$ code.  We see that both the power method and Arnoldi's method accurately estimate the fundamental eigenmode.  \\Fref{fig:BasicVectorsW20} shows the fundamental eigenvector and the first two harmonics all calculated by Arnoldi's method.  The higher-order eigenmodes of a semi-infinite slab are similar to the higher modes of the cosine function as expected \\citep[see][pg. 173]{Duderstadt:1976Nucle-0}.  \n\nThe results of the 0.2 and 2.0 mfp simulations are shown  in Figures \\ref{fig:BasicValuesW02}--\\ref{fig:BasicVectorsW2}.  
The results are similar to what we have seen for the 20 mfp problem.\n\n\\begin{sidewaysfigure}[hp]\\centering\n    \\input{Arnoldi/Data/BasicValues-w20}\n    \\caption{Eigenvalue estimates for the power method and Arnoldi's method for the 20 mfp thick slab.  The heavy lines indicate the reference eigenvalues.}\n    \\label{fig:BasicValuesW20}\n\\end{sidewaysfigure}\n\n\\begin{sidewaysfigure}[hp]\\centering\n    \\input{Arnoldi/Data/BasicFundamental-w20}\n    \\caption{Fundamental eigenvector estimates from the power method and Arnoldi's method for the 20 mfp thick slab.  The heavy line shows the S$_\\mathrm{N}$ solution.}\n    \\label{fig:BasicFundamentalW20}\n\\end{sidewaysfigure}\n\n\\begin{sidewaysfigure}[hp]\\centering\n    \\input{Arnoldi/Data/BasicVectors-w20}\n    \\caption{Fundamental and first and second harmonic eigenvector estimates from Arnoldi's method for the 20 mfp thick slab.}\n    \\label{fig:BasicVectorsW20}\n\\end{sidewaysfigure}\n\n\\begin{figure}\\centering\n    \\subfloat[Eigenvalue estimates]{\\label{fig:BasicValuesW02}\\input{Arnoldi/Data/BasicValues-w0.2}}\n\n    \\subfloat[Eigenvector Estimates]{\\label{fig:BasicVectorsW02}\\input{Arnoldi/Data/BasicVectors-w0.2}}\n    \\caption{Eigenvalue and eigenvector estimates from power method and Arnoldi's method for the 0.2 mfp thick slab.  Heavy lines show reference solution from \\cite{Garis:1991One-s-0} and \\cite{Dahl:1979Eigen-0}.}\n\\end{figure}\n\n\\begin{comment}\n\\begin{sidewaysfigure}[hp]\\centering\n%    \\includegraphics[width=\\textwidth, keepaspectratio]{Arnoldi/Data/BasicFundamental-w02}\n    \\input{Arnoldi/Data/BasicFundamental-w0.2}\n    \\caption{Fundamental eigenvector estimates from the power method and Arnoldi's method for the 0.2 mfp wide slab.  The heavy line shows the S$_\\mathrm{N}$ solution.}\n    \\label{fig:BasicFundamentalW02}\n\\end{sidewaysfigure}\n\\end{comment}\n\n\\begin{figure}\\centering\n    \\subfloat[Eigenvalue estimates]{\\label{fig:BasicValuesW2}\\input{Arnoldi/Data/BasicValues-w2}}\n    \n    \\subfloat[Eigenvector estimates]{\\label{fig:BasicVectorsW2}\\input{Arnoldi/Data/BasicVectors-w2}}\n    \\caption{Eigenvalue and eigenvector estimates from power method and Arnoldi's method for the 2.0 mfp thick slab.  Heavy lines show reference solution from \\cite{Garis:1991One-s-0} and \\cite{Dahl:1979Eigen-0}.}\n    \n\\end{figure}\n\n\\begin{comment}\n\\begin{sidewaysfigure}[hp]\\centering\n%    \\includegraphics[width=\\textwidth, keepaspectratio]{Arnoldi/Data/BasicFundamental-w2}\n    \\input{Arnoldi/Data/BasicFundamental-w2}\n    \\caption{Fundamental eigenvector estimates from the power method and Arnoldi's method for the 2.0 mfp wide slab.  The heavy line shows the S$_\\mathrm{N}$ solution.}\n    \\label{fig:BasicFundamentalW2}\n\\end{sidewaysfigure}\n\\end{comment}\n\n\\clearpage\n\\subsection{Discretization Error \\label{sec:DiscretizationBias}} \n\nOne of the benefits of Monte Carlo particle transport is the ability to use exact geometry without discretization.  This is true for the transport of particles in the power method, but the fission source must be discretized for tallying.  Arnoldi's method, on the other hand, must have a discretized source  in order to orthogonalize the Arnoldi vectors, as described in \\Fref{sec:SpatialDiscretization}.\n\nThe discretization of the fission source can lead to an error in the estimated eigenvalue if an insufficient number of spatial bins are used to represent the fission source.  
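\nBefore presenting those simulations, the tallying step itself can be made concrete with a generic histogram-style binning of fission-site positions onto a uniform mesh; this sketch only illustrates what source discretization means, and the names and normalisation are our own assumptions, not the dissertation's implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef discretize_source(positions, width, n_bins):\n    # Tally fission-site positions (0 <= x <= width) onto a\n    # uniform spatial mesh and normalise the binned source.\n    edges = np.linspace(0.0, width, n_bins + 1)\n    source, _ = np.histogram(positions, bins=edges)\n    return source / source.sum()\n\n# e.g. a 20 mfp slab with 50 bins (0.4 mfp per bin)\nrng = np.random.default_rng(1)\nsites = 20.0 * rng.random(10**5)\nq = discretize_source(sites, 20.0, 50)\n\\end{verbatim}\n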
To illustrate this effect, a series of simulations is presented using the same slab of multiplying material with a varying number of spatial bins.  The slab here is exactly the same as for the 20 mfp problem shown earlier (\\mbox{$\\nu\\Sigma_f = 1.0$}, \\mbox{$\\Sigma_a = 0.2$}, \\mbox{$\\Sigma_s = 0.8$} with \\mbox{$\\Sigma_t = 1.0$}).  This time $10^5$ histories are tracked per iteration with 50 inactive restarts and 500 active restarts.  The increase in the number of histories and restarts is intended to reduce the statistical uncertainty so that the error can be seen outside the noise.  This simulation was performed eleven times, varying the number of spatial discretization bins from 10 to 150.  \n\nThe results of these simulations are shown in \\Fref{tab:Discretization} for the fundamental eigenvalue.  We can see that the uncertainty in the eigenvalue estimate (standard deviation) is relatively independent of the number of spatial bins.  The error in the eigenvalue estimate is the absolute value of the difference between the eigenvalue estimate and the reference solution.  We see that the error in the eigenvalue estimate is larger than the statistical uncertainty for bin widths \\mbox{$\\geq 0.5$} mfp thick.  For bin widths smaller than 0.5 mfp the error is, apart from statistical outliers, comparable to or smaller than the statistical uncertainty. \n\nThe data from \\Fref{tab:Discretization} are shown graphically in \\Fref{fig:BasicBias}.  The errors in the eigenvalue estimates for the first two harmonics are also shown in \\Fref{fig:BasicBias}.  The error in the eigenvalue estimate is denoted $\\mathcal{B}$ for each of the eigenvalue estimates.  Best-fit lines are drawn through the points on the graph.  We see that there is a very good linear fit to these data points.  \n\n\\begin{table}[h] \\centering\n    \\begin{tabular}{cccccc}\n        \\toprule\n        \\# of Bins & Bin Width (mfp) & Eigenvalue & Uncertainty & Error & FOM\\\\\n        \\midrule\n         10 & 2.00 & 4.8003 & 6.6\\e{-4} & 2.7\\e{-2} & 831.2 \\\\\n         25 & 0.80 & 4.8224 & 6.8\\e{-4} & 5.3\\e{-3} & 773.2 \\\\\n         40 & 0.50 & 4.8251 & 6.3\\e{-4} & 2.6\\e{-3} & 872.0 \\\\\n         50 & 0.40 & 4.8273 & 6.5\\e{-4} & 4.2\\e{-4} & 829.0 \\\\\n         60 & 0.33 & 4.8258 & 6.9\\e{-4} & 2.0\\e{-3} & 704.3 \\\\\n         75 & 0.27 & 4.8275 & 6.7\\e{-4} & 2.4\\e{-4} & 753.0 \\\\\n         90 & 0.22 & 4.8277 & 6.7\\e{-4} & 4.2\\e{-5} & 746.1 \\\\\n        105 & 0.19 & 4.8277 & 6.9\\e{-4} & 5.2\\e{-5} & 698.3 \\\\\n        120 & 0.17 & 4.8282 & 6.5\\e{-4} & 4.1\\e{-4} & 767.8 \\\\\n        135 & 0.15 & 4.8274 & 7.0\\e{-4} & 3.1\\e{-4} & 656.9 \\\\\n        150 & 0.13 & 4.8285 & 6.4\\e{-4} & 7.5\\e{-4} & 792.0 \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\caption{Eigenvalue estimates of the fundamental eigenvalue from Arnoldi's method and the error in the estimate.  The error is the difference between the estimate and the reference value ($\\lambda_0 = 4.8278$, see \\cite{Garis:1991One-s-0} and \\cite{Dahl:1979Eigen-0}).}\n    \\label{tab:Discretization}\n\\end{table}\n\n\\begin{sidewaysfigure}[h] \\centering\n    \\input{Arnoldi/Data/BiasHistogram}\n    \\caption{Discretization error for Arnoldi's method.  The error is the difference between the eigenvalue estimate from Arnoldi's method and the reference value given in \\Fref{tab:BasicResults}.}\n    \\label{fig:BasicBias}\n\\end{sidewaysfigure}\n\n\\section{Variance}\nOne of the problems with Monte Carlo particle transport is the underestimation of the variance of the mean eigenvalue estimate.  
This topic has received considerable attention lately \\cite{Brown:2009A-Rev-0}.  In this section I will investigate how this issue manifests itself in Arnoldi's method.\n\nThe process of using the fission source calculated in a previous iteration as the source of neutrons for the current iteration causes the calculated uncertainty in the eigenvalue (or some other tally) to be too small.  The mean $\\overline{X}$ and standard deviation $\\sigma_{\\overline{X}}$ for some tally $X$ are calculated as\n\\begin{subequations}\n    \\begin{gather}\n        \\overline{X} = \\frac{1}{N}\\sum_{n=1}^N X_n, \\label{eq:Mean} \\\\\n        \\sigma_{\\overline{X}} = \\left[\\frac{1}{N-1}\\left(\\frac{1}{N}\\sum_{n=1}^N X_n^2 - \\overline{X}^2\\right)\\right]^{\\sfrac{1}{2}}, \\label{eq:StD}\n    \\end{gather}\n    \\label{eq:MeanStD}\n\\end{subequations}\nwhere $X_n$ is one estimate of the tally and $N$ is the number of estimates.  Equations \\eqref{eq:MeanStD} assume that each estimate is independent of all the others.  Because of the procedure of using previous sources to generate the next source, the sources are correlated.  \\citet{Kiedrowski:1009An-In-0} explain it best, ``If the concentration of fission neutrons at a location within a cycle [iteration] is statistically high, the concentration of fission neutrons in the next cycle is likely to be higher than average as well.  The same applies if the concentration is statistically low.  This implies a positive correlation between the fission source distributions.''\n\nUsing \\Fref{eq:StD} to estimate the standard deviation with correlated estimates causes the calculated standard deviation to be too small \\cite[see][]{Brown:2009A-Rev-0}.  A standard deviation that is too small would give unwarranted confidence in the eigenvalue.  \n\n\\subsection{Numerical Results}\nWhile there is no immediate way to reduce or eliminate the correlation between iterations, we can calculate the true standard deviation by running many independent, identical simulations and computing the mean and standard deviation across all of them.  This can then be compared to the standard deviation of an individual run.  \n\nFor this calculation a 50 mfp thick, homogeneous slab is used with cross sections: \\mbox{$\\nu\\Sigma_f = 1.0$}, \\mbox{$\\Sigma_a = 0.2$}, and \\mbox{$\\Sigma_s = 0.8$}; \\mbox{$\\Sigma_t = 1.0$}.  The fundamental eigenvalue for this geometry is 0.997520.  Both the power method and Arnoldi's method are run so that the results can be compared.  In Arnoldi's method, 100 inactive and 100 active restarts are used with 25 iterations per restart.  For the power method 2500 inactive and 2500 active iterations are used.  Both methods track 500,000 particles per iteration.  Each method is run as 100 independent simulations.\n\nThe results of this study are shown in \\Fref{tab:TrueVariance}.  I show the mean of the eigenvalue estimate from the 100 simulations, the mean of the standard deviations from the simulations, and the true standard deviation.  
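\nIn code form, this comparison can be outlined as follows; the sketch assumes that the active eigenvalue estimates of each independent simulation are available as a NumPy array, and the function name is our own.\n\\begin{verbatim}\nimport numpy as np\n\ndef reported_vs_true(replicas):\n    # replicas: list of 1-D arrays, one per independent run,\n    # holding that run's active eigenvalue estimates.\n    means, sigmas = [], []\n    for x in replicas:\n        N, xbar = x.size, x.mean()\n        means.append(xbar)\n        # reported standard deviation of the mean, as in Eq. (StD)\n        sigmas.append(np.sqrt(np.sum((x - xbar) ** 2) / (N * (N - 1))))\n    true_sigma = np.std(means, ddof=1)  # spread of the replica means\n    return np.mean(sigmas), true_sigma\n\\end{verbatim}\n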
The true standard deviation is the standard deviation of the eigenvalue estimates from all the simulations, while the mean reported standard deviation is the mean of the reported standard deviations from the simulations.\n\\begin{table}[h]\\centering\n    \\begin{tabular}{ccccc}\n        \\toprule\n        \\multirow{2}{*}{Method} & Mean & Mean Reported & True Standard & Percent\\\\\n        & Eigenvalue & Standard Deviation & Deviation & Difference \\\\\n        \\midrule\n        Power   &  0.99752 & 2.4\\e{-5} & 2.7\\e{-5} & -11.1 \\\\\n        Arnoldi &  0.9974  & 1.1\\e{-4} & 9.7\\e{-5} &  13.4 \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\caption{Mean estimate of the fundamental eigenvalue from Arnoldi's method and the power method from 100 independent simulations.  The mean reported standard deviation is the mean of the standard deviations from the 100 independent simulations.  The true standard deviation is the standard deviation of the eigenvalue estimates from the 100 independent simulations.  The difference is (Reported-True)/True.}\n    \\label{tab:TrueVariance}\n\\end{table}\n\nWe see from these results that both Arnoldi's method and the power method report a standard deviation that differs from the true standard deviation by about 10\\%.  The problem is that the power method underpredicts the standard deviation.  For Arnoldi's method we can have confidence that---at least for this problem---the reported standard deviation is larger than the true standard deviation.\n\n\\section{Summary}\nIn this chapter the basic explicitly restarted Arnoldi's method for Monte Carlo particle transport has been described.  It has been shown that Monte Carlo Arnoldi's method estimates the fundamental eigenvalue, as well as the first two higher-order eigenvalues, within the statistical uncertainty of published results; the eigenvectors are similar to cosine functions, as they are expected to be.\n\nArnoldi's method suffers from two problems: the discretization of the fission source causes an error in the estimate of the eigenvalue, and its figure of merit is smaller than that of the power method.  These issues are addressed in \\Fref{ch:SpatialDiscretization} and \\Fref{ch:RelaxedArnoldi}, respectively.  \n", "meta": {"hexsha": "6764900b5c315f2cd3734de46da1cf748d82acfa", "size": 43410, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Dissertation/Arnoldi/Arnoldi.tex", "max_stars_repo_name": "jlconlin/PhDThesis", "max_stars_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Dissertation/Arnoldi/Arnoldi.tex", "max_issues_repo_name": "jlconlin/PhDThesis", "max_issues_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Dissertation/Arnoldi/Arnoldi.tex", "max_forks_repo_name": "jlconlin/PhDThesis", "max_forks_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.1095890411, "max_line_length": 1069, "alphanum_fraction": 0.733701912, "num_tokens": 12213, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8289387956435734, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5850525909584781}}
{"text": "\\chapter{Optimization Problems}\n\\label{sec:optProGenOpt}\n\\section{Classification of Optimization Problems}\n\\label{sec:claOptPro}\nWe will now classify some optimization problems that can be solved with GenOpt's \noptimization algorithms. The classification will be used\nin Section~\\ref{sec:AlgSel} to recommend suitable optimization algorithms.\\\\\n\nWe distinguish between problems whose\ndesign parameters are continuous \nvariables\\footnote{Continuous variables can take on any value on the real line, \npossibly between lower and upper bounds.}, discrete variables\\footnote{Discrete\nvariables can take on only integer values.},\nor both.\nIn addition, we distinguish between\nproblems with and without inequality constraints on the dependent variables.\n\n% ----------------------------------------------------------------\n\\subsection{Problems with Continuous Variables}\nTo denote box-constraints on independent continuous variables,\nwe will use the notation\n\\begin{equation}\n  \\mathbf X \\triangleq \\bigl\\{ x \\in \\Re^n \\ | \\ \n  l^i \\le x^i \\le u^i, \\ i \\in \\{1, \\ldots, n \\} \\bigr\\},\n\\label{eq:setXPc}\n\\end{equation}\nwhere $ - \\infty \\le l^i < u^i \\le \\infty$ for $i \\in \\{1, \\ldots, n \\}$.\\\\\n\nWe will consider optimization problems of the form\n\\begin{eqnarray}\n\\lefteqn{ \\mathbf P_{c}   } \\qquad && \\min_{x \\in \\mathbf X} f(x),\n\\label{sub:Proc}\n\\end{eqnarray}\nwhere $f \\colon \\Re^n \\to \\Re$ is a once continuously differentiable cost function.\n\n\\begin{subequations}\nNow, we add inequality constraints on the dependent variables to~\\eqref{sub:Proc} and obtain\n\\begin{eqnarray}\n\\lefteqn{ \\mathbf P_{cg}   } \\qquad && \\min_{x \\in \\mathbf X} f(x), \\\\\n&& g(x) \\le 0,\n\\end{eqnarray}\nwhere everything is as in \\eqref{sub:Proc} and, in addition,\n$g \\colon \\Re^n \\to \\Re^m$ is a once continuously differentiable constraint function \n(for some $m \\in \\Na$).\nWe will assume that there exists an $x^* \\in \\mathbf X$ that satisfies $g(x^*) < 0$.\\\\\n\\label{sub:Procg}\n\\end{subequations}\n\n% ----------------------------------------------------------------\n\\subsection{Problems with Discrete Variables}\nNext, we will discuss the situation where all design parameters\ncan only take on user-specified discrete values.\\\\\n\nLet $\\mathbf X_d \\subset \\Z^{n_d}$ denote the constraint set\nwith a finite, non-zero number of integers for each variable.\\\\\n\nWe will consider integer programming problems of the form\n\\begin{eqnarray}\n\\lefteqn{ \\mathbf P_{d}   } \\qquad && \\min_{x \\in \\mathbf X_d} f(x).\n\\label{sub:Prod}\n\\end{eqnarray}\n\n\n% ----------------------------------------------------------------\n\\subsection{Problems with Continuous and Discrete Variables}\nNext, we will allow for continuous and discrete independent variables.\\\\\n\n\\begin{subequations}\nWe will use the notation\n\\begin{eqnarray}\n\\mathbf X & \\triangleq & \\mathbf X_c \\times \\mathbf X_d, \\\\\n\\mathbf X_c & \\triangleq & \\bigl\\{ x \\in \\Re^{n_c} \\ | \\ \nl^i \\le x^i \\le u^i, \\ i \\in \\{1, \\ldots, n_c \\} \\bigr\\},\n\\label{eq:feaSetXc}\n\\end{eqnarray}\nwhere the bounds on the continuous independent variables satisfy\n$ - \\infty \\le l^i < u^i \\le \\infty$ for $i \\in \\{1, \\ldots, n_c \\}$,\nand the constraint set $\\mathbf X_d \\subset \\Z^{n_d}$ for the discrete variables\nis a user-specified set with a finite, non-zero number of integers for each variable.\\\\\n\\label{sub:setXd}\n\\end{subequations}\n\nWe 
will consider mixed-integer programming problems of the form\n\\begin{subequations}\n\\begin{eqnarray}\n\\lefteqn{ \\mathbf P_{cd}   } \\qquad && \\min_{x \\in \\mathbf X} f(x),\n\\end{eqnarray}\n\\label{sub:Procd}\n\\end{subequations}\nwhere $x \\triangleq (x_c, x_d) \\in \\Re^{n_c} \\times \\Z^{n_d}$,\n$f \\colon \\Re^{n_c} \\times \\Z^{n_d} \\to \\Re$ and $\\mathbf X$ is as in~\\eqref{sub:setXd}.\n\n\\begin{subequations}\nNow, we add inequality constraints on the dependent variables to~\\eqref{sub:Procd} and obtain\n\\begin{eqnarray}\n\\lefteqn{ \\mathbf P_{cdg}   } \\qquad && \\min_{x \\in \\mathbf X} f(x), \\\\\n&& g(x) \\le 0,\n\\end{eqnarray}\n\\label{sub:Procdg}\n\\end{subequations}\nwhere everything is as in \\eqref{sub:Procd} and in addition\n$g \\colon \\Re^{n_c} \\times \\Z^{n_d} \\to \\Re^m$ (for some $m \\in \\Na$).\nWe will assume that there exists an $x^* \\in \\mathbf X$ that satisfies\n$g(x^*) < 0$.\\\\\n\n% ----------------------------------------------------------------\n\\subsection[Problems that use a Building Simulation Program]{Problems whose Cost Function is Evaluated by a Building Simulation Program}\n\\label{sec:proAppCosFun}\nNext, we will discuss problem $\\mathbf P_{c}$ defined in~\\eqref{sub:Proc} for\nthe situation where the cost function $f \\colon \\Re^n \\to \\Re$\ncannot be evaluated, \nbut can be approximated numerically by approximating cost functions\n$f^* \\colon \\Re_+^q \\times \\Re^n \\to \\Re$,\nwhere the first argument is the vector of precision parameters of the numerical solvers.\nThis is typically the case when the cost is computed \nby a thermal building simulation program,\nsuch as\nEnergyPlus~\\cite{Crawley2001:1}, \nTRNSYS~\\cite{KleinDuffieBeckman1976}, or\nDOE-2~\\cite{WinkelmannBirsdall1993}.\nIn such programs,\ncomputing the cost involves \nsolving a system of partial and\nordinary differential equations that are coupled to algebraic equations.\nIn general, one cannot obtain an exact solution, but\none can obtain an approximate numerical solution.\nHence,\nthe cost function $f(x)$ can only be approximated by an approximating cost function \n$f^*(\\epsilon,x)$,\nwhere $\\epsilon \\in \\Re_+^q$ is a vector that contains precision parameters of \nthe numerical solvers.\nConsequently, the optimization algorithm can only be applied to $f^*(\\epsilon,x)$ \nand not to $f(x)$.\\\\\n\nIn such thermal building simulation programs\nit is common that the termination criteria of the solvers that are used \nto solve the partial differential equations, ordinary differential equations, and\nalgebraic equations depend on the independent variable $x$.\nTherefore, a perturbation of $x$ can cause \na change in the sequence of solver iterations,\nwhich causes the approximating cost functions $f^*(\\epsilon,x)$ \nto be discontinuous in $x$.\nFurthermore, if variable step size integration methods are used,\nthen the integration mesh can change from one simulation to the next.\nTherefore, part of the change in function values between different points is\ncaused by a change of the number of solver iterations, and\nby a change of the integration mesh.\nConsequently, $f^*(\\epsilon,\\cdot)$ is discontinuous, and\na descent direction for $f^*(\\epsilon,\\cdot)$ may not be a descent direction \nfor $f(\\cdot)$.\nTherefore, optimization algorithms\ncan terminate at points that are non-optimal.\\\\\n\nThe best one can do in trying to solve optimization problems where the cost and constraint functions are evaluated by a thermal building simulation program\nthat does not 
allow controlling the approximation error\nis to find points that are close to a local minimizer of $f(\\cdot)$.\nNumerical experiments show that by using tight enough precision and \nstarting the optimization algorithm with coarse initial values,\none often comes close to a minimizer of $f(\\cdot)$.\nFurthermore, by selecting different initial iterates for the optimization,\nor by using different optimization algorithms, one can increase the chance of \nfinding a point that is close to a minimizer of $f(\\cdot)$.\nHowever, even if the optimization terminates at a point \nthat is non-optimal for $f(\\cdot)$,\none may have obtained a better system performance compared to not doing any optimization.\\\\\n\nSee~\\cite{WetterPolak2003:1,WetterWright2003:1} for a further discussion \nof optimization problems in which the cost function value is computed\nby a building simulation program.\n\n% ----------------------------------------------------------------\n\\section{Algorithm Selection}\n\\label{sec:AlgSel}\nIn this section, we will discuss which of GenOpt's algorithms can be selected\nfor the optimization problems that we introduced in Section~\\ref{sec:claOptPro}.\\\\\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{c}$ with $n>1$}\nTo solve $\\mathbf P_{c}$ with $n>1$, \nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg}) or\nthe GPS implementation of the Hooke-Jeeves algorithm\n(Section~\\ref{sec:GPSHooJeeImp}, page~\\pageref{sec:GPSHooJeeImp}) can be used,\npossibly with multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}).\nIf $f(\\cdot)$ is once continuously differentiable\nand has bounded level sets (or if the constraint set $\\mathbf X$\ndefined in~\\eqref{eq:setXPc} is compact)\nthen these algorithms construct for problem~\\eqref{sub:Proc}\na sequence of iterates with stationary accumulation points \n(see Theorem~\\ref{the:feaPoiConv}).\n\nAlternatively, the Discrete Armijo Gradient algorithm\n(Section~\\ref{sec:DAG}, page~\\pageref{sec:DAG}) can be used.\nEvery accumulation point of the\nDiscrete Armijo Gradient algorithm is a feasible stationary point.\\\\\n\nIf $f(\\cdot)$ is not continuously differentiable, or if\n$f(\\cdot)$ must be approximated by an approximating cost function $f^*(\\epsilon,\\cdot)$\nwhere the approximation error cannot be controlled,\nas described in Section~\\ref{sec:proAppCosFun}, then $\\mathbf P_c$ can only be solved\nheuristically.\nWe recommend using\nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg}),\nthe GPS implementation of the Hooke-Jeeves algorithm \n(Section~\\ref{sec:GPSHooJeeImp}, page~\\pageref{sec:GPSHooJeeImp}), \npossibly with multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}),\nor a Particle Swarm Optimization algorithm\n(Section~\\ref{sec:PSOAlg}, page~\\pageref{sec:PSOAlg}).\\\\\n\nWe do not recommend using the Nelder-Mead Simplex algorithm \n(Section~\\ref{sec:simAlgNelMea}, page~\\pageref{sec:simAlgNelMea}) or\nthe Discrete Armijo Gradient algorithm\n(Section~\\ref{sec:DAG}, page~\\pageref{sec:DAG}).\\\\\n\nThe following approach reduces the risk of failing at a point which is \nnon-optimal and far from a minimizer of $f(\\cdot)$:\n\\begin{enumerate}\n\\item \nSelecting large values for the parameter \\texttt{Step} in \nthe optimization command file (see page~\\pageref{ite:parStep}).\n\\item\nSelecting different initial iterates.\n\\item\nUsing the 
hybrid algorithm of Section~\\ref{sec:GPSPSOAlg},\nthe GPS implementation of the Hooke-Jeeves algorithm, \npossibly with multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}),\nand/or\na Particle Swarm Optimization algorithm\nand select the best of the solutions.\n\\item\nDoing a parametric study around the solution that has been obtained by\nany of the above optimization algorithms.\nThe parametric study can be done using the algorithms \\texttt{Parametric} \n(Section~\\ref{sec:algParametric}, page~\\pageref{sec:algParametric})\nand/or\n\\texttt{EquMesh} \n(Section~\\ref{sec:algParRunGen}, page~\\pageref{sec:algParRunGen}).\nIf the parametric study yields a further reduction in cost,\nthen the optimization failed at a non-optimal point.\nIn this situation, one may want to try another optimization algorithm.\n\\end{enumerate}\n\n\nIf $f(\\cdot)$ is continuously differentiable but \nmust be approximated by approximating cost functions $f^*(\\epsilon,\\cdot)$\nwhere the approximation error can be controlled\nas described in Section~\\ref{sec:proAppCosFun}, \nthen $\\mathbf P_c$ can be solved using \nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg}) or\nthe GPS implementation of the Hooke-Jeeves algorithm \n(Section~\\ref{sec:GPSHooJeeImp}, page~\\pageref{sec:GPSHooJeeImp}), \nboth with the error control scheme described in \nthe Model GPS Algorithm~\\ref{al:GPSImp}\n(page~\\pageref{al:GPSImp}).\nThe GPS implementation of the Hooke-Jeeves algorithm can be used\nwith multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}).\nThe error control scheme can be implemented using the value of GenOpt's variable\n\\texttt{stepNumber} (page~\\pageref{sec:ImpWeiFac})\nand GenOpt's pre-processing capabilities\n(Section~\\ref{par:posPro}, page~\\pageref{par:posPro}).\nA more detailed description of how to use the error control scheme can\nbe found in~\\cite{PolakWetter2003:1,WetterPolak2003:1}.\n\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{cg}$ with $n>1$}\nTo solve $\\mathbf P_{cg}$, \nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg})\nor\nthe GPS implementation of the Hooke-Jeeves algorithm \n(Section~\\ref{sec:GPSHooJeeImp}, page~\\pageref{sec:GPSHooJeeImp})\ncan be used,\npossibly with multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}).\nConstraints $g(\\cdot) \\le 0$ can be implemented\nusing barrier and penalty functions\n(Section~\\ref{cha:conGen}, page~\\pageref{cha:conGen}).\\\\\n\nIf $f(\\cdot)$ or $g(\\cdot)$ are not continuously differentiable,\nwe recommend using \nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg})\nor\nthe GPS implementation of the Hooke-Jeeves algorithm\n(Section~\\ref{sec:GPSHooJeeImp}, page~\\pageref{sec:GPSHooJeeImp}),\npossibly with multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}),\nand implement the constraints $g(\\cdot) \\le 0$ \nusing barrier and penalty functions\n(Section~\\ref{cha:conGen}, page~\\pageref{cha:conGen}).\nTo reduce the risk of terminating far from a minimum point of $f(\\cdot)$,\nwe recommend the same measures as for solving $\\mathbf P_c$.\\\\\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{c}$ with $n=1$}\nTo solve $\\mathbf P_c$ with $n=1$, \nany of the interval division algorithms can be 
used\n(Section~\\ref{sec:IntDivAlg}, page~\\pageref{sec:IntDivAlg}).\nSince only a few function evaluations are required for parametric studies\nin one dimension, the algorithm \\texttt{Parametric} can also be used for this problem\n(Section~\\ref{sec:algParametric}, page~\\pageref{sec:algParametric}).\nWe recommend doing a parametric study if $f(\\cdot)$ is expected to have \nseveral local minima.\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{cg}$ with $n=1$}\nTo solve $\\mathbf P_{cg}$ with $n=1$, \nthe same applies as for $\\mathbf P_{c}$ with $n=1$.\nConstraints $g(\\cdot) \\le 0$ can be implemented by setting \nthe penalty weighting factor $\\mu$ in~\\eqref{eq:penFun} to a large value.\nThis may still cause small constraint violations, \nbut it is easy to check whether the violation is acceptable.\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{d}$}\nTo solve $\\mathbf P_{d}$,\na Particle Swarm Optimization algorithm can be used\n(Section~\\ref{sec:PSOAlg}, page~\\pageref{sec:PSOAlg}).\n\n% ----------------------------------------------------------------\n\\subsection{Problem $\\mathbf P_{cd}$ and $\\mathbf P_{cdg}$}\nTo solve $\\mathbf P_{cd}$, or $\\mathbf P_{cdg}$,\nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg})\nor\na Particle Swarm Optimization algorithm can be used\n(Section~\\ref{sec:PSOAlg}, page~\\pageref{sec:PSOAlg}).\n\n\n% ----------------------------------------------------------------\n\\subsection{Functions with Several Local Minima}\nIf the problem has several local minima, we recommend using \nthe GPS implementation of the Hooke-Jeeves algorithm\nwith multiple starting points (Section~\\ref{sec:GPSMulSta}, page~\\pageref{sec:GPSMulSta}),\nthe hybrid algorithm (Section~\\ref{sec:GPSPSOAlg}, page~\\pageref{sec:GPSPSOAlg}), or\na Particle Swarm Optimization algorithm\n(Section~\\ref{sec:PSOAlg}, page~\\pageref{sec:PSOAlg}).\n\n", "meta": {"hexsha": "26ab00f24224c6ac0ffa9713c43a8a41f76587d1", "size": 15506, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/manual/problems.tex", "max_stars_repo_name": "bergsee/GenOpt", "max_stars_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2015-08-30T09:47:56.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-06T15:16:18.000Z", "max_issues_repo_path": "src/manual/problems.tex", "max_issues_repo_name": "bergsee/GenOpt", "max_issues_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2016-01-14T00:01:46.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-21T15:28:52.000Z", "max_forks_repo_path": "src/manual/problems.tex", "max_forks_repo_name": "lbl-srg/GenOpt", "max_forks_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2015-08-30T09:47:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-01T18:07:07.000Z", "avg_line_length": 46.4251497006, "max_line_length": 155, "alphanum_fraction": 0.7223010448, "num_tokens": 4306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8289387914176259, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5850525828459571}}
{"text": "\\subsection{Binary mixture LMN flow}\n\nIn general, the dimensional system of governing equations reads under the LMN approximation as\n\\begin{align}\n&\\frac{\\partial {\\rho}}{\\partial t}+\\frac{\\partial }{\\partial x_i} \\bigg({\\rho} {u}_i\\bigg) = 0, \\label{mass}\\\\\n&\\frac{\\partial {\\rho}{ u}_j }{\\partial t}+ \\frac{\\partial }{\\partial x_i} \\bigg({\\rho} {u}_j {u}_i \\bigg) =-\\frac{\\partial{ P}}{\\partial x_j} + \\frac{\\partial{\\tau}_{ij}}{\\partial x_i}+{\\rho} g_j, \\label{mom}\\\\\n&  \\frac{\\partial{\\rho}{ Y}_1}{\\partial t}+ \\frac{\\partial }{\\partial x_i} \\bigg({\\rho} {u}_i {Y}_1\\bigg)=\\frac{\\partial }{\\partial x_i}\\bigg( {\\rho} \\  D \\frac{\\partial {Y }_1}{\\partial x_i}\\bigg),\\label{spec}\\\\\n&{\\rho}={M}_{mix}\\frac{p}{RT}, \\label{state}\n\\end{align}\nwhere ${\\rho}$ is the mixture's density and ${u}_i$ the mass average component of the velocity vector ${\\textbf{u}}=({u}_1,{u}_2,{u}_3)$ and $g_j=(0,0,-g)$ the gravity vector. ${Y}_1$ and ${Y}_2$  are respectively the mass fractions of species 1 ans species 2 satisfying ${Y}_1+{Y}_2=1$.\n\n$D$ corresponds to the mixture diffusion coefficient of both species,  $R= 8.314$ J.K$^{-1}$.mol$^{-1}$ the specific gas constant and $M_{mix}=\\displaystyle\\left(\\sum_{i=1}^2  \\frac{{Y}_i}{M_i}\\right)^{-1}$ the mixing molar mass where  $M_1=M_{in}$ and $M_2 =M_{am}$, the injection and ambient molar mass respectively.\n\n ${\\tau}_{ij}= 2\\mu{e}_{ij}$ is the viscous stress tensor for Newtonian fluids with\n$${e}_{ij}= \\frac{1}{2}\\left(\\frac{\\partial  u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i}\\right) -\\frac{1}{3}\\delta_{ij}\\frac{\\partial u_k}{\\partial x_k},$$\nand $\\delta_{ij}$ the Kronecker symbol.  $\\mu$ denotes the mixture's dynamic viscosity calculated as a function of the mass fractions and fluids physical properties using the Wilke's formulation as follows\n\\begin{equation}\\label{wilke}\n        \\mu = \\frac{Y_1\\mu_1}{Y_1\\phi_{11}+Y_2\\phi_{12}}+\\frac{Y_2\\mu_2}{Y_1\\phi_{21}+Y_2\\phi_{22}},\n\\end{equation}\n        where $\\phi_{ij}$ is a set of dimensionless constants calculated as\n\\begin{equation}\n        \\phi_{ij}= \\displaystyle\\frac{\\displaystyle\\frac{\\textrm{M}_i}{\\textrm{M}_j}\\left[ 1+\\left(\\displaystyle\\frac{\\mu_i}{\\mu_j} \\right)^{1/2} \\left( \\displaystyle\\frac{\\textrm{M}_j}{\\textrm{M}_i} \\right)^{1/4}\\right]^2}{ \\left[ 8\\left(1+\\displaystyle\\frac{\\textrm{M}_i}{\\textrm{M}_j} \\right) \\right]^{1/2}}\\quad : \\quad i,j=\\{1,2\\}.\n\\end{equation}\n\nIn this study, we consider identical species (inj = amb) with\n\\begin{enumerate}\n  \\item $D = 7.72\\times 10^{-5}$ m$^{2}$.s$^{-1}$,\n  \\item $p = 100000$ Pa,\n  \\item $T= 284.15$ K,\n  \\item $M_1=M_2=0.02897$ kg.mol$^{-1}$,\n  \\item $\\mu_1=\\mu_2=1.792\\times 10^{-5}$ kg.m$^{-1}$.s$^{-1}$.\n\\end{enumerate}\n\n\\subsection{Non-compressible thermo-hydraulic flow}\n\nThe dimensional system of governing equations of a non-compressible thermo-hydraulic flow (without a source term) is expressed as\n\n\\begin{equation}\\label{ns1}\n\\frac{\\partial { u}_j }{\\partial t}+ {u}_j\\frac{\\partial {u}_i }{\\partial x_j}=-\\frac{\\partial{ P^{\\ast}}}{\\partial x_j} + \\frac{\\partial}{\\partial x_i} \\bigg( \\nu \\frac{\\partial  u_i}{\\partial x_j}\\bigg),\n\\end{equation}\n\\begin{equation}\\label{ns2}\n\\frac{\\partial T}{\\partial t}+  {u}_i\\frac{\\partial T}{\\partial x_i}=\\frac{\\partial }{\\partial x_i}\\bigg( {\\rho} \\frac{\\lambda}{\\rho c_p} \\frac{\\partial 
T}{\\partial x_i}\\bigg),\n\\end{equation}\nwhere $\\nu=\\mu/\\rho$ is the kinematic viscosity and $P^{\\ast}$ the reduced pressure, expressed as a function of the pressure $P$, the density $\\rho$, the gravity vector $\\textbf{g}$ and the position vector $\\textbf{x}$ as\n$$P^{\\ast}=P/\\rho - \\textbf{g}\\cdot\\textbf{x}.$$\n\nThe physical properties are considered in this case in accordance with the LMN problem, and are described as follows\n\\begin{enumerate}\n  \\item $\\lambda = 9.466369670132119\\times 10^{-5}$ m$^{2}$.s$^{-1}$,\n  \\item $c_p=1$ J.kg$^{-1}$.K$^{-1}$,\n  \\item $\\rho=1.226213687840948$ kg.m$^{-3}$,\n  \\item $\\mu=1.792\\times 10^{-5}$ kg.m$^{-1}$.s$^{-1}$.\n\\end{enumerate}\n\n\\subsection{Post processing integral quantities}\n\nTo validate the implementation of the binary mixture LMN flow problem, comparisons are made using two integral quantities\n\\begin{enumerate}\n  \\item $I_1 =\\displaystyle\\int_V \\rho Y_1 dV$,\n  \\item $I_2 =\\displaystyle\\int_V \\rho c_p T dV$,\n\\end{enumerate}\nwhich denote, respectively, the total mass of the injected species and the total energy.\n", "meta": {"hexsha": "01335412ce3bd6575d462ebb2534c2db9ce8b6d8", "size": 4329, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Validation/Rapports_automatiques/Verification/Verification_codage/QC_Melange_Binaire/src/ge.tex", "max_stars_repo_name": "cea-trust-platform/trust-code", "max_stars_repo_head_hexsha": "c4f42d8f8602a8cc5e0ead0e29dbf0be8ac52f72", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-06-30T18:50:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T09:03:16.000Z", "max_issues_repo_path": "Validation/Rapports_automatiques/Verification/Verification_codage/QC_Melange_Binaire/src/ge.tex", "max_issues_repo_name": "pledac/trust-code", "max_issues_repo_head_hexsha": "46ab5c5da3f674185f53423090f526a38ecdbad1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Validation/Rapports_automatiques/Verification/Verification_codage/QC_Melange_Binaire/src/ge.tex", "max_forks_repo_name": "pledac/trust-code", "max_forks_repo_head_hexsha": "46ab5c5da3f674185f53423090f526a38ecdbad1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-10-04T09:19:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-15T14:21:04.000Z", "avg_line_length": 67.640625, "max_line_length": 336, "alphanum_fraction": 0.6747516748, "num_tokens": 1556, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.893309411735131, "lm_q2_score": 0.6548947425132314, "lm_q1q2_score": 0.5850236371829248}}
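\nAs a quick numerical check of Wilke's formulation above, the following Python sketch (our own illustration, not part of the validation suite) evaluates the mixture viscosity; for identical species all $\\phi_{ij}=1$, so the result must reduce to $\\mu_1$ for any $Y_1$.\n\\begin{verbatim}\nimport math\n\ndef phi(mu_i, mu_j, M_i, M_j):\n    # Dimensionless coefficient phi_ij as defined in the text.\n    num = (M_i / M_j) * (1.0 + math.sqrt(mu_i / mu_j)\n                         * (M_j / M_i) ** 0.25) ** 2\n    den = math.sqrt(8.0 * (1.0 + M_i / M_j))\n    return num / den\n\ndef wilke_viscosity(Y1, mu1, mu2, M1, M2):\n    # Mixture dynamic viscosity for a binary mixture.\n    Y2 = 1.0 - Y1\n    return (Y1 * mu1 / (Y1 * phi(mu1, mu1, M1, M1) + Y2 * phi(mu1, mu2, M1, M2))\n            + Y2 * mu2 / (Y1 * phi(mu2, mu1, M2, M1) + Y2 * phi(mu2, mu2, M2, M2)))\n\n# Identical species (inj = amb): returns mu1 = 1.792e-5 for any Y1.\nprint(wilke_viscosity(0.3, 1.792e-5, 1.792e-5, 0.02897, 0.02897))\n\\end{verbatim}\n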
{"text": "\\section*{Exercises}\n\n\\begin{ex}\n  In this section, we observed that $\\rho V^2AB$ has\n  the units of force. Describe a systematic way to obtain such\n  combinations of the variables that will yield something that has\n  the units of force.\n\\end{ex}\n\n", "meta": {"hexsha": "4931b9ce3c9b5e33689f6ec2dfdbbc00b1eb6ffc", "size": 248, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "baseText/exercises/SystemsofEquations-Application-Dimensionless.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "baseText/exercises/SystemsofEquations-Application-Dimensionless.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/exercises/SystemsofEquations-Application-Dimensionless.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 24.8, "max_line_length": 66, "alphanum_fraction": 0.7419354839, "num_tokens": 66, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7606506526772884, "lm_q1q2_score": 0.585001384289053}}
{"text": "\\input{../../../utils/header_article.tex}\n\n\\title{Regression Discontinuity Design (RDD)\\thanks{We are indebted to Liudmila Kiseleva for the design of the problem set.}}\n\\subtitle{-- Problem Set 3 --}\n\\date{}\n\n\\begin{document}\\maketitle\\vspace{-2cm}\n\nIn the problem set 3 we are going to practice RDD in \\cite{Lee.2008} framework presented in the lecture 11. We employ the original simplified data set on the individual candidates for the US House of Representatives from 1946 to 1998. If a candidate obtains more votes than his or her competitors, he or she takes the office.  Each elected candidate represents one of 435 congressional districts. The elections are held every two years. We seek the answer to the question whether winning the election has a causal influence on the probability that the candidate will win the next election.\\\\\n\\\\\nThe observations of the data set \\href{https://github.com/HumanCapitalAnalysis/microeconometrics/tree/master/problem-sets/03-regression-discontinuity-design/data}{\\texttt{individ\\_final.dta}} are clustered by district and election year. It consists of the following variables:\n\\begin{enumerate}\n\n\\item \\emph{outcome} is a treatment variable; it is coded as 1 if a candidate won the election in the corresponding year and 0 -- otherwise.\n\\item \\emph{outcomenext} is an outcome variable. It is coded as 1 if a candidate won the next election; as 0 if he or she did not win the next election; and as -1 if he or she did not participate in the next election.\n\\item \\emph{difshare} is an assignment variable; it is the winning candidate's vote share minus the vote share of the highest performing competitor. Therefore, 0 is the cutoff point: a candidate whose vote share is more than 0 is automatically assigned to treatment.\n\n\\end {enumerate}\n\n\\section*{Task A. Theoretical foundation}\n\n\\begin{boenumerate}\n\n  \\item \\emph{What is the main assumption that makes RDD possible?} Define the local randomization condition in the simplified setup presented in the lecture.\n\n\\end{boenumerate}\n\n\n\\section*{Task B. Graphical presentation using local averages}\n\nA major advantage of the RD design over competing methods is its transparency, which can be illustrated using graphical methods. A standard way of graphing the data is to divide the assignment variable  into a number of bins, making sure there are two separate bins on each side of the cutoff point. Then, the average value of the outcome variable can be computed for each bin and graphed against the mid-points of the bins.\n\n\\begin{boenumerate}\n\n\\item Create a new variable that groups the assignment variable values into 400 bins  with a size of 0.005.\n\n\\item Since we are interested in a causal influence on the probability that the candidate will win the next election based on winning the current election, drop the rows that do not have a comparable next election.\n\n\\item Find the mean of the outcome variable for each bin or, in other words, local average. Draw this relationship on the scatterplot.\n\n\\item For better visuality we also add to the graph the fitted values of logistic regression around the cutoff. For this apply logistic regression separately on either side of the threshold (we take the bins with the share values from -0.25 to 0.25 and use the package \\texttt{LogisticRegression} from \\texttt{sklearn.linear\\_model}). Extract probability estimates. Add them to the scatterplot in the proximity of cutoff. 
\\emph{Do you observe a discontinuity at the cutoff point?}\n\\end{boenumerate}\n\n\\section*{Task C. Local linear regression (LLR)}\n\nLLR as a method restricts the estimation to observations close to the cutoff. It is based on the assumption that the regression lines within the bins around the cutoff point are close to linear. That helps to avoid some of the drawbacks of other parametric/non-parametric approaches (\\cite{Lee.2010a}).\n\n\\begin{boenumerate}\n\n\\item Run the LLR with the specification $Y = \\alpha_r + \\tau D + \\beta X + \\gamma X D + \\epsilon$, where $X$ is restricted by a bandwidth $h$: $-h \\leq X \\leq h$. Interpret the result. Experiment with a few bandwidths of your choice. A minimal sketch of one way to set up this regression is given after this list.\n\n\\end{boenumerate}\n
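\nThe sketch below is one possible way to set up the LLR of Task C, assuming the data have been loaded and filtered as in Task B; the helper name and the plain least-squares implementation are our own choices, not part of \\texttt{auxiliary.py}.\n\\begin{verbatim}\nimport numpy as np\nimport pandas as pd\n\ndef llr_tau(df, h):\n    # Fit Y = a + tau*D + b*X + g*X*D + e on |X| <= h by OLS, where\n    # X is 'difshare', D = 1{X > 0} and Y is 'outcomenext'.\n    d = df[df["difshare"].abs() <= h]\n    X = d["difshare"].to_numpy(dtype=float)\n    D = (X > 0).astype(float)\n    A = np.column_stack([np.ones_like(X), D, X, X * D])\n    y = d["outcomenext"].to_numpy(dtype=float)\n    coef, *_ = np.linalg.lstsq(A, y, rcond=None)\n    return coef[1]                    # tau: the jump at the cutoff\n\n# df = pd.read_stata("individ_final.dta")\n# df = df[df["outcomenext"] != -1]    # keep comparable next elections\n# print(llr_tau(df, h=0.25))\n\\end{verbatim}\n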
\\section*{Task D. Cross-validation}\n\nAs you might find, the treatment effect result is sensitive to the bandwidth choice. In general, choosing a bandwidth in estimation involves finding an optimal balance between precision and bias. On the one hand, using a larger bandwidth yields more precise estimates, as more observations are available to estimate the regression. On the other hand, the linear specification is less likely to be accurate (\\cite{Lee.2010a}).\\\\\n\\\\\nWe are going to review one of the approaches for choosing a bandwidth -- the cross-validation \u201cleave one out\u201d procedure. The main idea is to take an observation $i$ in the data, leave it out, run LLR, and use the estimates to predict the value of $Y$ at $X = X_i$.  Proceeding with each observation separately on each side of the cutoff, we obtain predicted values of $Y$ that can be compared to the actual values. The optimal bandwidth is then the value of $h$ that minimizes the mean square of the difference between the predicted and actual values of $Y$. The overall mean square error is simply the average of the squared prediction errors on each side of the cutoff.\n\n\\begin{boenumerate}\n\n\\item If you want to practice your Python skills, we recommend working with the packages \\texttt{LeaveOneOut()} and \\texttt{cross\\_val\\_score} from \\texttt{sklearn.model\\_selection} and writing the code that finds the optimal bandwidth. Otherwise, we created our draft in \\texttt{auxiliary.py}; you can use it to produce your solution. Draw the graph showing the relationship between the bandwidth and the mean square error. \\emph{What is the optimal bandwidth for LLR in our framework?}\n\n\\end{boenumerate}\n\n\\bibliographystyle{apacite}\n\\bibliography{../../../submodules/bibliography/literature}\n\n\\end{document}\n", "meta": {"hexsha": "56b3918dd41f6e3b5d605bb60295bc143eb58993", "size": 5900, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem-sets/03-regression-discontinuity-design/sources/main.tex", "max_stars_repo_name": "milakis/microeconometrics", "max_stars_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem-sets/03-regression-discontinuity-design/sources/main.tex", "max_issues_repo_name": "milakis/microeconometrics", "max_issues_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-sets/03-regression-discontinuity-design/sources/main.tex", "max_forks_repo_name": "milakis/microeconometrics", "max_forks_repo_head_hexsha": "6ede1eceb25e578b3109c03d35f26d34d41777aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.2857142857, "max_line_length": 678, "alphanum_fraction": 0.7862711864, "num_tokens": 1347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7690802370707283, "lm_q1q2_score": 0.585001384289053}}
{"text": "\\documentclass{article}\n    % General document formatting\n    \\usepackage[margin=0.7in]{geometry}\n    \\usepackage[parfill]{parskip}\n    \\usepackage[utf8]{inputenc}\n    \\usepackage{mathrsfs}\n    \\usepackage{amsmath}\n    \\usepackage{amssymb}\n    \\usepackage{tikz}\n    \\usepackage{fancyhdr}\n    \\usepackage{multicol}\n\n    \\usetikzlibrary{positioning}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Edgar Jacob Rivera Rios - A01184125}\n\n\\renewcommand{\\labelenumi}{\\alph{enumi})}\n\n\\begin{document}\n\\section*{6.1.1}\n\\begin{equation*}\n  Z = -3x_{1} + 2x_{2} + 10x_{3} - x_{4} \\rightarrow max\n\\end{equation*}\ns.t\n\\begin{align*}\n-x_{1} + 2x_{2} - 3x_{3} + 4x_{4} &\\leq 7\\\\\n       - 3x_{2} + 2x_{3} - 5x_{4} &\\leq 12\\\\\n2x_{1} + 2x_{2}          + 7x_{4} &\\leq 15\\\\\nx_{j} \\geq 0, j &= 1, 2, 3, 4\n\\end{align*}\n\\begin{enumerate}\n  \\item Rewrite the problem as an augmented LP problem ready to be solved by the simplex method.\n  \n  See file 6.1.1.txt\n  \n  \\item Work through the simplex method step by step to solve the augmented LP problem.\n  \n  See file 6.1.1.txt\n\\end{enumerate}\n\n\\section*{6.1.2}\nWork through the simplex method step by step to solve the following problem\n\nMaximize\n\\begin{equation*}\n  Z = 4x_{1} + 3x_{2} + 6x_{3}\n\\end{equation*}\n\\begin{align*}\n  3x_{1} + x_{2} + 3x_{3} &\\leq 30\\\\\n  2x_{1} + 2x_{2} + 3x_{3} &\\geq 10\\\\\n  x_{1} \\geq 0, x_{2} \\geq 0, x_{3} &\\geq 0,\n\\end{align*}\n\nSee file 6.1.2.txt\n\n\\end{document}", "meta": {"hexsha": "90e27524071914f9a524975cacfcecee1fa03a1f", "size": 1409, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/Hw 6_1/Homework6_1.tex", "max_stars_repo_name": "edjacob25/Applied-Maths", "max_stars_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/Hw 6_1/Homework6_1.tex", "max_issues_repo_name": "edjacob25/Applied-Maths", "max_issues_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/Hw 6_1/Homework6_1.tex", "max_forks_repo_name": "edjacob25/Applied-Maths", "max_forks_repo_head_hexsha": "0a0f8e5b88083a1b0ec85069efbf266b6a12c741", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.2931034483, "max_line_length": 96, "alphanum_fraction": 0.6422995032, "num_tokens": 595, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851918, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5850013720642456}}
{"text": "\\subsection{Clustering}\\label{sec:clustering}\n%Another way of optimising the computation of the link model is to introduce clustering of nodes, used to reduce the amount of links required for the link model computation. \nThe second option for optimising the computation of the link model, is to introduce clustering of nodes. As mentioned in \\autoref{sec:pathloss}, links existing in the same physical environment should experience similar shadow fading \\gls{pathloss}. This means that we are be able to cluster nodes with a minimum loss of precision, provided that we minimise difference of the correlation (the angle) between links, before and after clustering.\n\n%minimise the difference of the angles between links before and after clustering (correlation)\n\n%This means that we should be able to compute clusters of nodes, and use the clusters to faster compute the link model of multiple nodes at the same time, as we would, depending on the number of clusters, have a far smaller link matrix. \\medbreak\n\n%For our purposes, we have chosen to use a density-based clustering algorithm, more specifically the \\gls{optics} algorithm (\\autoref{algo:optics}). The idea behind density-based clustering is that for every node in a cluster, the neighbourhood, within a given radius, has to contain at least a minimum number of other nodes~\\cite[p.~50]{Ankerst:1999:OOP:304182.304187}. A set of nodes in a network, $\\textbf{N} \\subseteq \\mathcal{X}$, along with a distance parameter $\\varepsilon$, denoting the maximum radius of a nodes neighbourhood, and $MinNodes \\geq 2$ denoting the minimum number of nodes required to form a cluster, such that a single node will not be considered a cluster. The goal is to find a set of clusters, $C$, such that we are able to form a new network with the centroids of our set of clusters, as well any outlying node not contained in a cluster, $\\text{NOISE} \\subseteq \\textbf{N}$, while minimising the radius of the clusters.\n\n%\\subsubsection{Metric Space}\n% Minimise the diameter using parameters\n% Maximum diameter as input, give me the smallest number of k-nodes, such that every other node is within the diameter of at least one node.\n% Output: Some number of nodes, which is the smallest possible diameter\n\n%metric space\n\\subsubsection{Metric Space}\nA metric space is a pair $( \\mathcal{X}, d )$ where $ \\mathcal{X} $ is a set and $\\textbf{d}:\\mathcal{X} \\times \\mathcal{X} \\rightarrow [0, \\infty )$ is a metric, satisfying the following axioms:\n\n\\begin{itemize}\n    \\item Positivity: $d(x, y) \\geq 0$\n    \\item Reflexivity: $d(x, y) = 0 \\Longleftrightarrow x = y$\n    \\item Symmetry: $d(x, y) = d(y, x)$\n    \\item Triangle inequality: $d(x, z) \\leq d(x, y) + d(y, z)$\n\\end{itemize}\n\nOur chosen metric space is $\\mathbb{R}^2$ using Euclidean distance.\n\n%$\\mathbb{R}^2$\n\n\\subsubsection{Problem Statement}\n\n%A set $\\textbf{N} \\subseteq \\mathcal{X}$, is provided together with parameters $\\varepsilon$. The goal is to find a subset of clusters, such that the maximum angle between any node to nodes in the cluster is minimised. \nA set $\\textbf{N} \\subseteq \\mathcal{X}$, is provided together with a parameter $k$, $k \\leq |\\textbf{N}|$. Initially, the set $\\textbf{N}$ is a set of clusters, each containing a single node. The goal is to find a set of clusters, such that the computation time required for computing the link model can be reduced, while minimising the loss of precision in the stochastic shadow fading part. 
We minimise the loss of precision by minimising the difference of the correlation ($\\Delta\\theta$) between links to any node in a cluster and links to the centroid of that cluster. The centroid of a cluster $x$ is defined as $cent(x) = \\sum\\limits_{n \\in x} n \\div |x|$. \\smallbreak\n%\nThe problem is defined as follows:\\smallbreak\n\n%This correlation difference is computed using the autocorrelation function from \\autoref{eq:pathlossautocorrelation}.\n\n\n% \\autoref{figure:clusteringgoal} contains an example of this\n%For example, we want to minimise the difference $\\Delta\\theta$ between the links $l_{n,u}$, $l_{n, c}$, in relation to the link $l_{n,m}$ in \\autoref{figure:clusteringgoal}. The difference is computed using the autocorrelation function from \\autoref{eq:pathlossautocorrelation}: $\\Delta\\theta = r(l_{n,m}, l_{n,u}) - r(l_{n,m},l_{n,c})$, where $u \\in c$, $n \\not\\in c$, $m \\not\\in c$.\n\n%The goal is to find a \n%subset of clusters, such that the difference of the correlation between ($\\Delta\\theta$) links to any node in a cluster, to the centroid of the cluster, is minimised, while reducing the number of clusters, thus reducing the computational time required to compute the link model, while minimising the loss of precision.\n\n%s an example, in \\autoref{figure:clusteringgoal}, we want to minimise the difference $\\Delta\\theta$, between the links $l_{n,u}$, $l_{n, C}$, in relation to the link $l_{n,m}$, or $\\Delta\\theta = r(l_{n,m}, l_{n,u}) - r(l_{n,m},l_{n,C})$, where $u \\in C$, $n \\not\\in C$, $m \\not\\in C$, and $r$ is the autocorrelation function from \\autoref{eq:pathlossautocorrelation}. \n\n%such that the difference between an angle any node from outside the cluster to the centroid of the cluster, and the original node in the cluster is minimised.\n%\\autoref{figure:clusteringgoal} demonstrate an example of this, where we see three clusters of nodes.\n%$\\delta\\theta$\n\n%such that the difference in correlation between links is minimised, while maximising the number of clusters. \n\n%Minimise the difference between the original correlation matrix, and the new correlation matrix created using the centroid of the clusters.\n\nFor a metric space $(\\mathcal{X}, d)$,\n\\begin{itemize}\n    \\item Input: A set $\\textbf{N} \\subseteq \\mathcal{X}$ and a parameter $k$, $k \\leq |\\textbf{N}|$.\n    \\item Output: A set of clusters $C \\subseteq 2^{\\textbf{N}}$, such that\n          \\begin{enumerate}\n              \\item $\\bigcup\\limits_{x \\in C}x = \\textbf{N}$,\n              \\item $\\forall x, y \\in C, \\ \\text{if} \\ x \\neq y \\ \\text{then} \\ x \\cap y = \\emptyset$, and\n              \\item $|C| \\leq k$\n          \\end{enumerate}\n    \\item Goal: Minimise the $\\Delta\\theta(C)$ function:\\smallbreak $\\Delta\\theta(C) = \\mathlarger{\\sum}\\limits_{n_1, n_2, n_3 \\ \\in \\ \\textbf{N}} \\left( r(l_{n_1,n_2}, l_{n_1,n_3}) - r(l_{cent(C(n_1)),cent(C(n_2))}, l_{cent(C(n_1)),cent(C(n_3))}) \\right)^2$\n\n          % \\item Input: a set $\\textbf{N} \\subseteq \\mathcal{X}$, and a parameter $\\varepsilon$.\n          %    \\item Output: a set of clusters $C$ where, for each nodes in the cluster, the neighbourhood of a given radius $\\varepsilon$ of the nodes has to contain at least a minimum number of nodes $MinNodes$.\n          % \\item Output: a set of clusters $C$. 
\\end{itemize}\n\n\\autoref{figure:clusteringgoal} illustrates the goal: we want to minimise the correlation difference $\\Delta\\theta$ between the nodes $n$, $m$, and $u$ (as well as the rest of the nodes) and the centroids of their respective clusters $C(n)$, $C(m)$, and $C(u)$, where $C(n)$ is the cluster in $C$ that contains $n$, such that $n \\in C(n)$.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=.5\\textwidth]{figures/clustering/clustering.png}\n    \\caption{Minimising the difference of the correlation ($\\Delta\\theta$) between links to nodes in a cluster and links to the centroid of the cluster.}\n    \\label{figure:clusteringgoal}\n\\end{figure}\n\n\\subsubsection{Approximation using OPTICS}\nWe have chosen a greedy approach using the \\gls{optics} algorithm~\\cite{Ankerst:1999:OOP:304182.304187} to approximate solutions for our clustering problem. 
\\gls{optics} is an algorithm for finding density-based clusters, and the run-time of the algorithm is O($n \\ \\cdot $ run-time of a neighbourhood query)~\\cite[p.~53]{Ankerst:1999:OOP:304182.304187}. We chose the \\gls{optics} algorithm based on a survey on density-based clustering algorithms in \\cite{proceeding:clustering-Survey}.\\smallbreak\n\nThe algorithm requires three parameters: our input set $\\textbf{N}$; $\\varepsilon$, describing the maximum distance between two nodes to consider for clustering; and $MinPts$, describing the minimum number of nodes required to form a new cluster. For our case, the parameter $MinPts = 2$. With this, the \\gls{optics} algorithm returns the set of clusters $C \\subseteq 2^{\\textbf{N}}$, where any node not satisfying the $MinPts$ parameter becomes a singleton cluster. \\autoref{algo:clustering} presents the pseudocode description of our greedy approach, which repeatedly computes a set of clusters $C$ until $|C| \\leq k$, incrementing the $\\varepsilon$ variable by 10 metres in each iteration.\n\n\\begin{algorithm}[H]\n    \\DontPrintSemicolon\n    \\KwResult{A set of clusters $C \\subseteq 2^{\\textbf{N}}$, where $|C| \\leq k$}\n    \\SetKwFunction{FCluster}{Cluster}\n    \\SetKwProg{Fn}{Function}{:}{}\n    \\Fn{\\FCluster{\\textbf{N}, $k$}}{\n        $\\varepsilon \\leftarrow 0$ m\\;\n        $C \\leftarrow \\text{OPTICS}(\\textbf{N}, \\varepsilon, 2)$\\;\n\n        \\Repeat{$|C| \\leq k$}{\n            $\\varepsilon \\leftarrow \\varepsilon + 10$ m\\;\n            $C \\leftarrow \\text{OPTICS}(\\textbf{N}, \\varepsilon, 2)$\\;\n        }\n\n        \\KwRet $C$\\;\n    }\n\n    \\caption{Greedy approach using the \\gls{optics} algorithm.}\n    \\label{algo:clustering}\n\\end{algorithm}\n
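\nTo make the greedy approach concrete, the following Python sketch mirrors \\autoref{algo:clustering}. It is a minimal sketch under stated assumptions: scikit-learn's \\texttt{OPTICS} estimator (with DBSCAN-style cluster extraction at radius $\\varepsilon$) stands in for the \\gls{optics} variant described above, and the names \\texttt{greedy\\_cluster}, \\texttt{num\\_clusters}, and \\texttt{eps\\_step} are illustrative rather than taken from our implementation.\n\n\\begin{verbatim}\n# Minimal sketch of the greedy Cluster function, assuming scikit-learn's\n# OPTICS as a stand-in for the OPTICS variant used in this report.\nimport numpy as np\nfrom sklearn.cluster import OPTICS\n\ndef num_clusters(labels):\n    # Count clusters; every noise point (label -1) is its own singleton.\n    noise = int(np.sum(labels == -1))\n    proper = len(set(labels[labels != -1]))\n    return proper + noise\n\ndef greedy_cluster(nodes, k, eps_step=10.0):\n    # nodes: (n, 2) array of positions in metres; returns one label per node.\n    eps = 0.0\n    labels = np.arange(len(nodes))  # eps = 0: every node is a singleton\n    while num_clusters(labels) > k:\n        eps += eps_step             # increment epsilon by 10 m\n        optics = OPTICS(min_samples=2, max_eps=eps,\n                        cluster_method='dbscan', eps=eps)\n        labels = optics.fit(nodes).labels_  # -1 marks noise\n    return labels\n\\end{verbatim}\n\nAs in \\autoref{algo:clustering}, any node that does not satisfy the $MinPts$ requirement is kept as a singleton cluster, and the loop terminates because a sufficiently large $\\varepsilon$ eventually merges all nodes into a single cluster.\n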
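To quantify the precision loss incurred by a clustering returned by \\autoref{algo:clustering}, the goal function $\\Delta\\theta(C)$ from the problem statement can be evaluated directly from its definition. The sketch below does exactly that; the autocorrelation function $r$ from \\autoref{eq:pathlossautocorrelation} is not reproduced here and is instead passed in as a parameter, and the names are again illustrative. Note the $O(|\\textbf{N}|^3)$ cost of the triple sum, which is one more reason to treat $\\Delta\\theta(C)$ as a quality measure rather than something to optimise directly.\n\n\\begin{verbatim}\n# Sketch of evaluating the goal function Delta-theta(C).\n# r is the autocorrelation function (eq:pathlossautocorrelation),\n# taking two links, each represented as a pair of points.\nfrom itertools import product\nimport numpy as np\n\ndef centroid(cluster):\n    # cent(x): mean position of the nodes in the cluster\n    return np.asarray(cluster).mean(axis=0)\n\ndef delta_theta(nodes, clusters, r):\n    # Map every node to the centroid of the cluster containing it.\n    cent_of = {}\n    for cluster in clusters:\n        c = centroid(cluster)\n        for n in cluster:\n            cent_of[n] = c\n    total = 0.0\n    for n1, n2, n3 in product(nodes, repeat=3):\n        before = r((n1, n2), (n1, n3))\n        after = r((cent_of[n1], cent_of[n2]),\n                  (cent_of[n1], cent_of[n3]))\n        total += (before - after) ** 2\n    return total  # Delta-theta(C)\n\\end{verbatim}\n\nHere nodes are assumed to be hashable coordinate tuples, and the clusters argument is the set of clusters $C$ as returned by the clustering step.\n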
\\subsubsection{Experiment}\\label{sec:clustering:experiments}\nTo measure the performance of our greedy approach, we conducted an experiment where we generated 1000 nodes with random locations in a $10 \\times 10$ kilometre area, and ran the \\texttt{Cluster} algorithm from \\autoref{algo:clustering} with different values for the $k$ parameter. The purpose of this experiment is to measure the computational time used to form a number of clusters $|C| \\leq k$. The results of this experiment can be seen in \\autoref{table:clustering-benchmark}.\n\n\\begin{table}[H]\n    \\begin{tabular}{|c|c|c|c|}\\hline\n        Max clusters ($k$) & Resulting clusters & Links  & Clustering time  \\\\\\hline\n        1000               & 1000               & 499500 & 244 milliseconds \\\\\\hline\n        900                & 884                & 390286 & 1 second         \\\\\\hline\n        800                & 771                & 296835 & 2 seconds        \\\\\\hline\n        700                & 692                & 239086 & 2 seconds        \\\\\\hline\n        600                & 592                & 174936 & 3 seconds        \\\\\\hline\n        500                & 463                & 106953 & 4 seconds        \\\\\\hline\n        450                & 424                & 89676  & 4 seconds        \\\\\\hline\n        400                & 389                & 75466  & 4 seconds        \\\\\\hline\n        350                & 322                & 51681  & 5 seconds        \\\\\\hline\n        300                & 299                & 44551  & 5 seconds        \\\\\\hline\n    \\end{tabular}\n    \\caption{Experiment results 
from benchmarking the time required to cluster.}\n    \\label{table:clustering-benchmark}\n\\end{table}", "meta": {"hexsha": "e08c29332fcc83a02990831ec179d68c5be2f56c", "size": 24043, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/p9/sections/02-link-modelling/02b-clustering.tex", "max_stars_repo_name": "Joklost/masters", "max_stars_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/p9/sections/02-link-modelling/02b-clustering.tex", 
"max_issues_repo_name": "Joklost/masters", "max_issues_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/p9/sections/02-link-modelling/02b-clustering.tex", "max_forks_repo_name": "Joklost/masters", "max_forks_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.5718954248, "max_line_length": 1490, "alphanum_fraction": 0.6727529842, "num_tokens": 6718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7606506418255927, "lm_q1q2_score": 0.5850013678913335}}
{"text": "%\n% STAT 100: Chance and Data Analysis - A Course Overview\n% Section: Analysis of Population Proportions\n%\n% Author: Jeffrey Leung\n%\n\n\\section{Analysis of Population Proportions}\n\t\\label{sec:analysis-of-population-proportions}\n\\subsection{Margin of Error and Confidence Interval}\n\t\\label{subsec:analysis-of-population-proportions:margin-of-error-and-confidence-interval}\n\\begin{easylist}\n\n\t& \\emph{Parameter:} Value which summarizes population data\n\t\t&& Calculation requires collection of data from the entire population (i.e. see \\emph{census}, subsection~\\ref{subsec:data-collection:methods})\n\t\t&& Estimation is often calculated from a sample statistic\n\t\t\t&&& E.g. If 33\\% of a random sample of Canadian adults support the Conservative Party, then the proportion of all Canadian adults who support the Conservative Party is estimated to be 33\\%.\n\t\t&& Denoted by $p$\n\t\t\t&&& Sample proportion/statistic is denoted by $\\hat{p}$\n\t\n\t\\medskip\n\t& \\emph{Margin of error:} Percentage value of the uncertainty of an estimated population proportion\n\t\t&& Calculation (for a 95\\% confidence level):\n\t\t\\begin{math}\n\t\t\tMargin\\ of\\ error = \\frac{1}{\\sqrt{sample\\ size}}\n\t\t\\end{math}\n\t\t\t&&& Unit: Percentage\n\t\t&& Valid only for a random sample\n\t\t&& Dependent on a confidence level\n\t\t\t&&& \\emph{Confidence level:} Degree of certainty of the accuracy of a population proportion estimate\n\t\t\t\t&&&& Unit: Percentage\n\t\t\t\t&&&& Often 95\\% (expressed as a fraction; 19 times out of 20 = $\\frac{19}{20}$)\n\t\t&& Interpretation (with the confidence level): \\smallskip \\\\\n\t\tIf many random samples of <sample size, subject> of the population are taken and the sample proportion of <statistic> is calculated for each sample, <confidence level percentage> of the sample proportions will be within $\\pm$ <margin of error percentage> of the population proportion.\n\t\t\n\t\t&& E.g. ``A probability sample of this design and sample size would carry a margin of error in the range of $\\pm 1.2\\%$, 19 times out of 20.'' \\smallskip \\\\\n\t\tMargin of error: $\\pm 1.2\\%$ \\\\\n\t\tConfidence level: $\\frac{19}{20} = 95\\%$\n\t\t&& E.g. Given a study of Canadian adults who support the Conservatives with a random sample of 6005 Canadian adults and a margin of error of $\\pm$ 1.2\\%, 19 times out of 20: \\smallskip \\\\\n\t\tIf many random samples of 6005 Canadian adults of the population are taken and the sample proportion of Canadian adults who support the Conservatives is calculated for each sample, 95\\% of the sample proportions will be within $\\pm$ 1.2\\% of the population proportion.\n\t\n\t\\medskip\n\t& \\emph{Confidence interval:} Set of values which the population proportion is within\n\t\t&& Calculation:\n\t\t\\begin{math}\n\t\t\tConfidence\\ interval = sample\\ proportion \\pm margin\\ of\\ error\n\t\t\\end{math}\n\t\t\t&&& Unit: Percentage range\n\t\t&& Interpretation: \\smallskip \\\\\n\t\tUsing the sample data, we are <confidence level percentage> confident that the population proportion of <statistic> is between <confidence interval lower bound percentage> and <confidence interval upper bound percentage>.\n\t\t\t&&& Always specify the population proportion\n\t\t&& E.g. 
Given the sample proportion of Canadian adults who will vote for the Conservatives as 33\\% for 6005 subjects, the confidence interval is: \\\\\n\t\t\\begin{math}\n\t\t\tSample\\ proportion \\pm \\frac{1}{\\sqrt{sample\\ size}}\n\t\t\t= 33\\% \\pm \\frac{1}{\\sqrt{6005}}\n\t\t\t= 33\\% \\pm 1.2904...\\%\n\t\t\t\\approx (31.7\\%, 34.3\\%)\n\t\t\\end{math} \\\\\n\t\tUsing the sample data, we are 95\\% confident that the population proportion of Canadian adults who support the Conservative Party is between 31.7\\% and 34.3\\%.\n\t\t\n\t\t&& Analyzing the confidence interval: Given a value, check whether or not all values of the confidence interval satisfy the condition\n\t\t\t&&& E.g. Given the confidence interval for the population proportion of Canadian adults who support the Conservative Party to be (31.7\\%, 34.3\\%), can you conclude that more than 30\\% of all Canadian adults support the Conservative Party? \\smallskip \\\\\n\t\t\tYes; all values in the confidence interval are greater than 30\\%.\n\t\t\t&&& E.g. Given the confidence interval for the population proportion of Canadian adults who support the Conservative Party to be (31.7\\%, 34.3\\%), can you conclude that more than 34\\% of all Canadian adults support the Conservative Party? \\smallskip \\\\\n\t\t\tNo; there exist values in the confidence interval which are less than 34\\%.\n\t\t\t\n\\end{easylist}\n\\subsection{Bias and Variability}\n\t\\label{subsec:analysis-of-population-proportions:bias-and-variability}\n\\begin{easylist}\n\n\t& \\emph{Bias:} Consistent under-estimation or over-estimation of results\n\t\t&& Can be reduced by:\n\t\t\t&&& Avoiding \\hyperref[subsec:data-collection:sampling-errors]{non-sampling errors}\n\t\t\t&&& Ensuring fair representation of the population\n\t\t\t&&& Using random sampling\n\t\t&& Increasing sample size does not reduce bias\n\t\t\t\n\t& \\emph{Sampling variability:} Degree of variability between random samples\n\t\t&& Quantified by the \\hyperref[subsec:analysis-of-population-proportions:margin-of-error-and-confidence-interval]{margin of error}\n\t\t&& Can be reduced by:\n\t\t\t&&& Increasing sample size\n\t\t\t\n\\end{easylist}\n\\subsection{Hypothesis Testing for Population Proportions}  %TODO move this to a different file\n\t\\label{subsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions}\n\\subsubsection{Introduction}\n\t\\label{subsubsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions:introduction}\n\\begin{easylist}\n\n\t& \\emph{Hypothesis test:} Calculation which determines whether a claim/research hypothesis is supported by evidence\n\n\t& \\emph{Null hypothesis (H\\textsubscript{o}):} Statement that a population proportion is equal to a given value (which may be another population proportion)\n\t\t&& Assumed to be possible until contradicting evidence is found\n\t& \\emph{Alternative hypothesis (H\\textsubscript{a}):} Statement that a population proportion is less than, not equal to, or greater than a given value (which may be another population proportion)\n\t& E.g. The sample proportion of Canadian adults who want to legalize marijuana was found to be 59\\%. Test whether or not the population proportion of Canadian adults who want to legalize marijuana is greater than 50\\%. \\\\\n\tH\\textsubscript{o}: The population proportion of Canadian adults who want to legalize marijuana is equal to 50\\%. 
\\\\\n\tH\\textsubscript{a}: The population proportion of Canadian adults who want to legalize marijuana is greater than 50\\%.\n\t\n\t\\medskip\n\t& \\emph{Test statistic:} Standardized value representing a numerical summary of sample data \n\t\t&& Calculated from sample data; analyzed to determine the p-value\n\t\t&& E.g. Z-statistic, t-statistic, chi-square statistic\n\t\t&& \\emph{Z-statistic:} Test statistic which can be calculated to determine whether an unequal relationship between population proportions exists\n\t\t\t&&& Formula for one population proportion compared against a given percentage: \\smallskip \\\\\n\t\t\t\\begin{displaymath}\n\t\t\t\tz = \n\t\t\t\t\\frac\n\t\t\t\t{\n\t\t\t\t\t\\hat{p} - p_{0}\n\t\t\t\t}\n\t\t\t\t{\n\t\t\t\t\t\\sqrt\n\t\t\t\t\t{\n\t\t\t\t\t\t\\frac\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tp_{0} \\cdot (1 - p_{0})\n\t\t\t\t\t\t}\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tn\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\\end{displaymath}\n\t\t\t\\Deactivate\n\t\t\t\\begin{center}\n\t\t\t\t\\begin{tabular}{ l r @{ = } l }\n\t\t\t\t\twhere & $\\hat{p}$ & sample proportion \\\\\n\t\t\t\t\t& $p_{0}$ & population proportion if H\\textsubscript{o} is true \\\\\n\t\t\t\t\t& $n$ & sample size\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\\Activate\n\t\t\t\n\t\t\t\\medskip\n\t\t\t&&& Formula for two population proportions compared against each other: \\medskip \\\\\n\t\t\t\\begin{displaymath}\n\t\t\t\tz =\n\t\t\t\t\\frac\n\t\t\t\t{\n\t\t\t\t\t\\hat{p_{1}} - \\hat{p_{2}}\n\t\t\t\t}\n\t\t\t\t{\n\t\t\t\t\t\\sqrt\n\t\t\t\t\t{\n\t\t\t\t\t\t\\hat{p} \\cdot (1-\\hat{p}) \\cdot\n\t\t\t\t\t\t(\n\t\t\t\t\t\t\t\\frac{1}{n_{1}} + \\frac{1}{n_{2}}\n\t\t\t\t\t\t)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\\end{displaymath}\n\t\t\t\\Deactivate\n\t\t\t\\begin{center}\n\t\t\t\t\\begin{tabular}{ l r @{ = } l }\n\t\t\t\t\twhere & $\\hat{p_{x}}$ & sample proportion of the $x$\\textsuperscript{th} set of data \\\\\n\t\t\t\t\t& $\\hat{p}$ & combined sample proportion \\\\\n\t\t\t\t\t& $n_{x}$ & sample size of the $x$\\textsuperscript{th} set of data\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\\Activate\n\t\t\t\n\t\t\t\t&&&& Combined sample proportion is the sum of the number of subjects satisfying the condition of sample dataset 1 and the number of subjects satisfying the condition of sample dataset 2, divided by the sum of the number of subjects of each dataset\n\t\t\t\n\t\t&& \\emph{Chi-square statistic:} %TODO\n\t\n\t\\medskip\n\t& \\emph{Probability value (p-value):} Probability that a given result is obtained through chance, calculated from sample data\n\t\t&& Unit: Percentage\n\t\t&& Interpretation: If many random samples of the given sample size and population are chosen and the sample proportion in question is calculated, then the p-value represents the percentage of the sample proportions which would support the alternative hypothesis.\n\t\t\t&&& E.g. 
Given that 50\\% of all Canadian adults support the legalization of marijuana, the probability of calculating a sample proportion of 59\\% or higher through random sampling is equal to the p-value (0.62\\%).\n\t\t&& The smaller the p-value, the greater the evidence for the alternative hypothesis\n\t\t&& Calculated from a test statistic; compared against the significance level to determine the amount of evidence for the alternative hypothesis\n\t\t&& For the interpretation of the p-value compared against the significance level, see \\hyperref[subsec:analysis-of-population-proportions:statistical-significance]{statistical significance}\n\t\t&& For the interpretation of the magnitude, see table~\\ref{tab:p-value-magnitude-chart}\n\t\t\n\t\t\\Deactivate\n\t\t\\begin{table}[!htb]\n\t\t\t\\centering\n\t\t\t\\caption{P-Value Magnitude Chart}\n\t\t\t\\label{tab:p-value-magnitude-chart}\n\t\t\t\\begin{tabular}{ r c l l }\n\t\t\t\t\\multicolumn{3}{ c }{P-value} & Strength of Evidence to Support H\\textsubscript{a} \\\\\n\t\t\t\t 10\\% < & p-value & & No evidence \\\\\n\t\t\t\t  5\\% < & p-value & $\\leq$  10\\% & Weak evidence \\\\\n\t\t\t\t  1\\% < & p-value & $\\leq$   5\\% & Some evidence \\\\\n\t\t\t\t0.1\\% < & p-value & $\\leq$   1\\% & Strong evidence \\\\\n\t\t\t\t        & p-value & $\\leq$ 0.1\\% & Very strong evidence\n\t\t\t\\end{tabular}\n\t\t\\end{table}\n\t\t\\Activate\n\t\t\n\t\t&& P-value of a z-statistic: Area under the standard normal distribution, found by looking up the z-statistic in the standard normal table\n\t\t\t&&& If the alternative hypothesis is `less than', the p-value is the area to the left of the z-statistic\n\t\t\t&&& If the alternative hypothesis is `not equal to', the p-value is the area to the left of the negative absolute value of the z-statistic and the area to the right of the positive absolute value of the z-statistic\n\t\t\t&&& If the alternative hypothesis is `greater than', the p-value is the area to the right of the z-statistic\n\t\t&& P-value of a chi-square statistic: %TODO\n\t\t\n\t& \\emph{Significance level:} Threshold, chosen before the test, against which the p-value is compared; commonly 5\\%\n\t\t\t\n\t& General process:\n\t\t&& Find the test statistic using the sample data\n\t\t&& Find the p-value using the test statistic\n\t\t&& Compare the p-value to the significance level\n\t\t\t&&& Conclusion: ``Since the p-value (<p-value>) is <less than/greater than> the significance level (<significance level>), we <do not> reject the null hypothesis. There is <sufficient/insufficient> evidence to conclude that <alternative hypothesis>.''\n\n\\end{easylist}\n\\subsection{One Population Proportion (Z-Statistic)}\n\t\\label{subsec:analysis-of-population-proportions:one-population-proportion-z-statistic}\n\\begin{easylist}\n\t\n\t& Process: state the hypotheses, compute the z-statistic from the sample data, find the p-value, and compare it against the significance level\n\t& E.g. (using the radio station data from the \\hyperref[subsec:analysis-of-population-proportions:errors]{errors} subsection) A random survey of 400 people finds 100 who have heard an ad; test whether more than 20\\% of all listeners have heard the ad. \\smallskip \\\\\n\tH\\textsubscript{o}: 20\\% of the listeners have heard the ad. \\\\\n\tH\\textsubscript{a}: More than 20\\% of the listeners have heard the ad. \\smallskip \\\\\n\t\\begin{math}\n\t\tz = \\frac{\\hat{p} - p_{0}}{\\sqrt{\\frac{p_{0} \\cdot (1 - p_{0})}{n}}}\n\t\t= \\frac{0.25 - 0.20}{\\sqrt{\\frac{0.20 \\cdot 0.80}{400}}}\n\t\t= \\frac{0.05}{0.02}\n\t\t= 2.5\n\t\\end{math} \\\\\n\tThe alternative hypothesis is `greater than', so the p-value is the area to the right of $z = 2.5$, approximately 0.62\\%. Since the p-value (0.62\\%) is less than the significance level (5\\%), we reject the null hypothesis. There is sufficient evidence to conclude that more than 20\\% of the listeners have heard the ad.\n\n\\end{easylist}\n\\subsection{Two Population Proportions (Z-statistic)}\n\t\\label{subsec:analysis-of-population-proportions:two-population-proportions-z-statistic}\n\\begin{easylist}\n\n\t& Process:\n\t\t&& Compute the proportion of subjects in each test group who satisfy the condition\n\t\t&& Compare the proportions using a bar graph\n\t\t&& Conclude which group has a greater/lesser proportion\n\t\t&& To compare population proportions, conduct a hypothesis test (see \\hyperref[subsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions]{hypothesis testing})\n\t\t\t&&& Null hypothesis states that the population proportions are equal; alternative hypothesis states that the population proportions are unequal (less than, not equal to, or greater than)\n\t\t\t&&& Formula for the z-statistic for two population proportions: see \\hyperref[subsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions]{hypothesis testing}\n\t\t\n\t& E.g. 
Are the proportions...\n\t\n\\end{easylist}\n\\subsubsection{Confidence Interval}\n\t\\label{subsubsec:analysis-of-population-proportions:two-population-proportions-z-statistic:confidence-interval}\n\\begin{easylist}\n\n\t& Formula (95\\% confidence interval):\n\t\\begin{math}\n\t\t(\\hat{p_{1}} - \\hat{p_{2}}) \\pm 2 \\cdot\n\t\t\\sqrt\n\t\t{\n\t\t\t\\frac\n\t\t\t{\n\t\t\t\t\\hat{p_{1}} \\cdot (1 - \\hat{p_{1}})\n\t\t\t}{\n\t\t\t\tn_{1}\n\t\t\t}\n\t\t\t+ \\frac\n\t\t\t{\n\t\t\t\t\\hat{p_{2}} \\cdot (1-\\hat{p_{2}})\n\t\t\t}{\n\t\t\t\tn_{2}\n\t\t\t}\n\t\t}\n\t\\end{math}\n\t\n\t& Example: [..] How does the proportion of people who have lung cancer...\n\t\n\t& There may be no difference between the proportion of students who use iPhones in UBC compared to the proportion of students who use iPhones in SFU.\n\t\n\\end{easylist}\n\\subsection{Multiple Population Proportions (Chi-Square Statistic)}\n\t\\label{subsec:analysis-of-population-proportions:multiple-population-proportions-chi-square-statistic}\n\\begin{easylist}\n\n\t& The greater the difference between the observed and expected proportions, the greater the chi-square statistic\n\t& P-value is always the area in the right tail of the chi-square distribution\n\t& Only concludes whether a relationship exists between two variables\n\t& Can compare many population proportions\n\n\\end{easylist}\n\\subsection{Errors}\n\t\\label{subsec:analysis-of-population-proportions:errors}\n\\begin{easylist}\n\n\t& \\emph{Type I error:} Rejection of H\\textsubscript{o} from analysis of the sample data when H\\textsubscript{o} is true\n\t\t&& I.e. False positive/confirmation of the alternative hypothesis; finding evidence where there is none\n\t\t&& May occur when H\\textsubscript{o} is rejected\n\t\t&& E.g. Judging a person for a crime: \\\\\n\t\tH\\textsubscript{o}: The person is not guilty. \\\\\n\t\tH\\textsubscript{a}: The person is guilty. \\\\\n\t\tTruth: The person is not guilty (H\\textsubscript{o} is true). \\\\\n\t\tDecision: The person is guilty (H\\textsubscript{o} is rejected).\n\t\t&& Probability of its occurrence is directly proportional to the \\hyperref[subsubsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions:introduction]{significance level}\n\t\t\t&&& Reducing the significance level (and therefore the probability of a type I error) increases the probability of a type II error\n\t\t\n\t& \\emph{Type II error:} Failure to reject H\\textsubscript{o} from analysis of the sample data when H\\textsubscript{a} is true\n\t\t&& I.e. Failing to find evidence which exists\n\t\t&& May occur when H\\textsubscript{o} is not rejected\n\t\t&& E.g. Judging a person for a crime: \\\\\n\t\tH\\textsubscript{o}: The person is not guilty. \\\\\n\t\tH\\textsubscript{a}: The person is guilty. \\\\\n\t\tTruth: The person is guilty (H\\textsubscript{a} is true). \\\\\n\t\tDecision: The person is not guilty (H\\textsubscript{o} is not rejected).\n\t\t&& Probability of its occurrence is inversely proportional to the \\hyperref[subsubsec:analysis-of-population-proportions:hypothesis-testing-for-population-proportions:introduction]{significance level}\n\t\t\t&&& Increasing the significance level (and thereby decreasing the probability of a type II error) increases the probability of a type I error\n\t\t\t\n\t& E.g. A company will renew a contract with a radio station only if the station can find sufficient evidence to support that more than 20\\% of the listeners have heard their ad. The station conducts a random survey of 400 people, 100 of which have heard the ad. 
\\smallskip \\\\\n\tH\\textsubscript{o}: 20\\% of the listeners have heard the ad. \\\\\n\tH\\textsubscript{a}: More than 20\\% of the listeners have heard the ad. \\smallskip \\\\\n\tA type I error will occur if 20\\% of the listeners have heard the ad, but the sample data provides sufficient evidence to conclude that more than 20\\% of the listeners have heard the ad. H\\textsubscript{o} is true but rejected; H\\textsubscript{a} is false but accepted. \\\\\n\tThe possibility of this error can be reduced by decreasing the significance level. \\smallskip \\\\\n\tA type II error will occur if more than 20\\% of the listeners have heard the ad, but the sample data does not provide sufficient evidence to reject the hypothesis that 20\\% of the listeners have heard the ad. H\\textsubscript{o} is false but not rejected; H\\textsubscript{a} is true but not affirmed. \\\\\n\tThe possibility of this error can be reduced by increasing the significance level.\n\t\n\\end{easylist}\n\\subsection{Statistical Significance}\n\t\\label{subsec:analysis-of-population-proportions:statistical-significance}\n\\begin{easylist}\n\n\t& P-value represents the probability, if the null hypothesis is correct, of finding a given sample proportion, and therefore the probability of obtaining a difference of at least $|$sample proportion $-$ test proportion$|$ from the test proportion\n\t\n\t& \\emph{Statistically significant:} Result which is unlikely to occur by chance\n\t\t&& Statistically significant result: P-value is less than the significance level\n\t\t&& Not statistically significant result: P-value is greater than the significance level\n\t\n\t& If a p-value is less than the significance level, then the difference between the sample proportion and the test proportion is statistically significant at the given significance level\n\t& If a p-value is greater than the significance level, then the difference between the sample proportion and the test proportion is not statistically significant at the given significance level\n\n\n\\end{easylist}\n\\clearpage", "meta": {"hexsha": "5ee49342fee75e917f0096294897a1fd6532ec14", "size": 17409, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "stat-100-chance-and-data-analysis/tex/analysis-of-population-proportions.tex", "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_issues_repo_path": "stat-100-chance-and-data-analysis/tex/analysis-of-population-proportions.tex", "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "stat-100-chance-and-data-analysis/tex/analysis-of-population-proportions.tex", "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "avg_line_length": 54.403125, "max_line_length": 300, "alphanum_fraction": 0.7335286346, "num_tokens": 4583, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8104789178257654, "lm_q1q2_score": 0.5849576527881754}}
{"text": "\\section{Numerical Experiments}\n\t\n\tThis section intends to be the most important in the chapter since it will take advantage of everything that has been previously studied and expand in the best possible way the implementation of the spectral methods studied in the previous section. For this, we will focus on the nonlinear problem defined in (\\ref{IVP_Burgers}) directly, because after understanding these tools you want to have the ability to attack more complex problems such as (\\ref{navierstokes}) which it is nonlinear, and in general, in most problems, it is not always possible to obtain analytical solutions, and then an option is to approximate them with numerical methods. \\\\\n\t\n\tAccording to what was studied in the previous section, we will start solving our problem with each method considering $\\alpha> 0$, and later we will study the case when $\\alpha = 0$. To find solutions we are going to use Euler's method, either explicit or implicit, to solve in the variable $t$, and due to the importance of carrying out numerical studies, we will try to give a detailed methodology about each method implemented to construct an algorithm that can be useful in the realization of computational codes. Finally, with the aim of giving a more detailed analysis, we will present some simulations of the results obtained.\n\t\n\t\\subsection{Numerical Solutions for Burgers' Equation with Viscosity}\n\t\\input{burgers_equation/deterministic/numerical_experiments/viscid/Viscid_Galerkin}\n\t\\input{burgers_equation/deterministic/numerical_experiments/viscid/Viscid_Collocation}\n\t\\input{burgers_equation/deterministic/numerical_experiments/inviscid/Inviscid}", "meta": {"hexsha": "dc607ad64e0dffd2caebf44b414fe15a6db1eff3", "size": 1650, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/burgers_equation/deterministic/numerical_experiments/Numerical_Results.tex", "max_stars_repo_name": "alanmatzumiya/Maestria", "max_stars_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-12-29T10:44:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-12T11:18:45.000Z", "max_issues_repo_path": "docs/burgers_equation/deterministic/numerical_experiments/Numerical_Results.tex", "max_issues_repo_name": "alanmatzumiya/spectral-methods", "max_issues_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/burgers_equation/deterministic/numerical_experiments/Numerical_Results.tex", "max_forks_repo_name": "alanmatzumiya/spectral-methods", "max_forks_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-04T13:29:56.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T13:29:56.000Z", "avg_line_length": 165.0, "max_line_length": 653, "alphanum_fraction": 0.8157575758, "num_tokens": 357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8175744695262777, "lm_q1q2_score": 0.5849124037603571}}
{"text": "\\section{Simple model} \\slabel{model}\n\nWe base our model on the buoyancy-drag models of \\cite{Oron2001}:\n\\begin{equation}\n(\\rho_1 + \\rho_2) \\mathcal{V} \\ddot{h} = (\\rho_2 - \\rho_1) g \\mathcal{V} - C \\dot{h}^2 \\rho \\mathcal{A}\n\\end{equation}\nwhere $\\rho_1$ and $\\rho_2$ are the densities of the light and heavy fluid,\n$\\mathcal{V}$ is the characteristic volume of the bubble\n$g$ is the acceleration,\n$C$ is a drag-like coefficient, and\n$\\mathcal{A}$ is the characteristic cross sectional area of the bubble.\nMaking the Boussinesq approximation, $\\rho_1 \\approx \\rho_2$ yields:\n\\begin{equation}\n\\ddot{h} = A g - \\frac{C}{2} \\dot{h}^2 \\frac{\\mathcal{A}}{\\mathcal{V}}\n\\end{equation}\nIn the self-similar regime there is only one length-scale, so $\\mathcal{A}/\\mathcal{V} \\sim 1 / \\lambda$.\nHowever, in the single-mode regime that is the focus of this study, the bubbles are elongated, producing two length scales: a span-wise scale $\\lambda$ and a stream-wise scale $h$.\nTherefore, for the smRTI $\\mathcal{A}/\\mathcal{V} \\sim \\frac{1}{h}$ and the model of Oron \\etal yields un-bounded velocities:\n\\begin{equation}\n\\ddot{h} = A g - \\frac{C}{2} \\frac{\\dot{h}^2}{h}\n\\end{equation}\nBecause the strength of the form drag relative to buoyancy decreases at high aspect ratio, we must consider other drag terms, such as skin drag, that grow at least linearly with $h$.\n\n\\subsection{Dynamics}\n\nWe begin by listing the external forces the bubble experiences.  The first is the buoyant force:\n\\begin{equation}\nF_b = C_0 A g \\lambda^2 h,\n\\end{equation}\nwhere $C_0$ is an unknown coefficient.\nThe next is the form drag:\n\\begin{equation}\nF_f = C_1 \\lambda^2 \\dot{h}^2,\n\\end{equation}\nwhere $C_1$ is similar to a drag coefficient.\nThe next is the viscous, or skin, drag:\n\\begin{equation}\nF_s = C_2 \\nu h \\dot{h},\n\\end{equation}\nwhere $C_2$ is another unknown coefficient and \n$\\nu$ is the kinematic vicosity.\n\nTo complete the dynamic equation, we must characterize the inertia of the bubble.\nThe bubble is roughly cylindrical with a height $h$, so we expect an inertial term of the form $\\lambda^2 h$.\nHowever, consider the limit of $h \\rightarrow 0$.  
\nHere, streamlines must extend from bubble to spike, with a characteristic separation $\\lambda$, giving an inertial term of the form $\\lambda^3$.\nTherefore, we expect the inertia to be a mix of a term that goes as $\\lambda^2 h$ and one that goes as $\\lambda^3$:\n\\begin{equation} \\elabel{inertia}\nI = C_3 \\lambda^2 h + C_4 \\lambda^3,\n\\end{equation}\nwhere $C_3$ and $C_4$ are two more unknown coefficients.\n\nThe complete dynamic equation is:\n\\begin{equation}\n\\ddot{h} = \\frac{C_0 A g \\lambda^2 h - C_1 \\lambda^2 \\dot{h}^2 - C_2 \\nu h \\dot{h}}{C_3 \\lambda^2 h + C_4 \\lambda^3}\n\\end{equation}\nWithout loss of generality, we can let $C_0 = 1$ and simplify:\n\\begin{equation} \\elabel{dynamics}\n\\ddot{h} = \\frac{A g h - C_1 \\dot{h}^2 - C_2 \\nu (h/\\lambda^2) \\dot{h}}{ C_3 h + C_4 \\lambda }\n\\end{equation}\nWe can non-dimensionalize by defining a dimensionless length and time:\n\\begin{equation}\nz = \\frac{h}{\\lambda} \\qquad \\tau = \\sqrt{\\frac{A g}{\\lambda}} t,\n\\end{equation}\nwhich simplifies to:\n\\begin{equation}\n\\ddot{z} = \\frac{z - C_1 \\dot{z}^2 - C_2 \\text{Gr}^{-1/2} z \\dot{z}}{C_3 z + C_4},\n\\end{equation}\nwhere\nthe derivative is with respect to $\\tau$ and \n$\\text{Gr} = A_0 g \\lambda^3 \\nu^{-2}$ is the Grashof number.\n\n\n\\subsection{Mixing}\n\nAs the bubble height grows, the velocity approaches a terminal value specified by the balance between buoyancy and skin drag.\nAt terminal velocity, the flux of pure fluid into the bubble is bounded.\nHowever, the interfacial mixing continues to grow with the interfacial area, which grows with $h$.\nTherefore, for any finite diffusivity, the bubble will ultimately diffuse away.\nFor this reason, we must include the effects of interfacial mixing, at least to the first order.\n\nThe quantity of mixed fluid, $m$, is computed directly from the time, bubble height, diffusivity, and initial interface thickness.\nThe quantity of mixed fluid is defined as the integral of one minus the absolute value of the scalar:\n\\begin{equation}\n\tm(t) = \\int \\left( 1-\\text{abs}\\left[\\phi(x,y,z,t)\\right] \\right) dV,\n\\end{equation}\nwhere we assume the mean scalar is zero, $\\int \\phi dV = 0$.\n\nWe approximate the volume integral by a 1D integral across the interface multiplied by the surface area:\n\\begin{equation}\n\tm(t) \\approx S \\int \\left( 1- \\text{abs}\\left[\\phi_1(r)\\right] \\right) dr\n\\end{equation}\nwhere $S$ is the surface area and\n$\\phi_1$ is a model 1D scalar profile:\n\\begin{equation}\n\\phi_1(r) = \\frac{1}{2} \\left( \\erf\\left[\\frac{r}{\\delta}\\right] - \\erf\\left[\\frac{r - d}{\\delta}\\right] \\right),\n\\end{equation}\nwhere $\\delta$ is the interface width and\n$d$ is the diameter of the bubble.\n\nThe surface area has contributions from the bubble tip and side walls:\n\\begin{equation} \\elabel{surface_area}\nS = \\left(C_6 \\lambda^2 + C_5 \\lambda h\\right)\n\\end{equation}\nwhere $C_5$ and $C_6$ are unknown coefficients.\n$C_5$ scales the perimeter of span-wise slices of the bubble while $C_6$ rescales the bubble tip.\n\nTo the first order, the diameter is half the wavelength: $d \\approx \\lambda / 2$.\nHowever, the cylindrical bubbles do not always fill the span-wise domain.\nThis can be seen by values of $C_5$ that are below $4$, the value corresponding to space-filling rectangular bubbles.\nTherefore, we adjust the diameter using $C_5$:\n\\begin{equation}\nd = \\frac{\\lambda}{2} \\frac{C_5}{4}\n\\end{equation}\n\nThe interface width is modeled by simple 1D 
diffusion:\n\\begin{equation}\n\\delta(t) = 2 \\sqrt{D (t + t_0)},\n\\end{equation}\nwhere $t_0$ is chosen to match $\\delta(0)$ to the initial condition.\n\nWe perform the integral through the bubble:\n\\begin{equation} \\elabel{profile1d}\n\\begin{split}\n\t\\int_{-d/2}^{d/2} \\left(1- \\left|\\phi_1(r)\\right| \\right) dr &= \\frac{\\delta}{\\sqrt{\\pi}} \\left( 1 - \\exp\\left[-\\frac{d^2}{\\delta^2}\\right]\\right) \\\\\n&+ d \\left(1 - \\erf\\left[\\frac{d}{\\delta}\\right]\\right).\n\\end{split}\n\\end{equation}\n\nThe mixed mass must still be connected to the dynamics equation via the Atwood number:\n\\begin{equation} \\elabel{effective-atwood}\nA = A_0 \\left( 1 - \\frac{m}{V}\\right),\n\\end{equation}\nwhere $A_0$ is the ``pure'' Atwood number and\n$V$ is the volume of the bubble.\nAs in the dynamics equation, we define the volume as a mixture of $\\lambda^3$ and $\\lambda^2 h$:\n\\begin{equation}\nV = \\left(C_8 \\lambda^3 + C_7 \\lambda^2 h\\right),\n\\end{equation}\nwhere $C_7$ and $C_8$ scale the volume analogously to $C_5$ and $C_6$.\n\nThe volume of mixed fluid, $m(t)$, can be measured directly in the simulations.\nThis gives meaning to the value of $m(t)$ independent of the ratio $m(t)/V$.\nTherefore, unlike in the dynamics, where a coefficient could be discarded without loss of generality, all four of $C_5, C_6, C_7$ and $C_8$ are necessary.\nThe overall scale factor cannot be removed if we want to compare to mixed volume measurements.\n\n\\subsection{Coefficient constraints}\nFirst, consider the limit where $D = 0$, $\\nu = 0$, and $h \\rightarrow 0$.\nThe dynamical equation becomes\n\\begin{equation} \\elabel{first_constraint}\n\\ddot{h} = \\frac{A g }{C_4 \\lambda} h,\n\\end{equation}\nwhich matches Rayleigh's original linear stability analysis if \n\\begin{equation} \nC_4 = 1/(2 \\pi).\n\\end{equation}\n\nWhen $\\nu > 0$, the growth rate $\\ddot{h}$ is given by Duff's linear theory:\n\\begin{equation}\n\\ddot{h} = \\left(\\sqrt{A g k + \\nu^2 k^4} - \\nu k^2\\right)^2 h,\n\\end{equation}\nwhere $k = 2\\pi / \\lambda$ is the wavenumber.\nSetting this equal to \\eref{first_constraint} yields:\n\\begin{equation} \\elabel{c4}\nC_4 = \\frac{1 + 2x\\left(\\sqrt{1 + x^2} + x\\right)}{2\\pi},\n\\end{equation}\nwhere:\n\\begin{equation}\nx = \\sqrt{\\frac{8 \\pi^3 \\nu^2}{A g \\lambda^3}} = \\sqrt{\\frac{(2 \\pi)^3}{\\text{Gr}}}\n\\end{equation}\nand Gr is the Grashof number.\n\nNext, consider the initial quantity of mixed mass for small sharp interfaces, $\\delta(0), a_0 \\rightarrow 0$.\nWe assume the initial condition is an error function profile:\n\\begin{equation}\nM(t=0) = \\lambda^2 \\int_{-\\infty}^{\\infty} \\left( 1 - \\text{abs}\\left[\\text{erf}\\left[\\frac{z}{\\delta}\\right]\\right] \\right) dz = \\frac{2\\lambda^2 \\delta}{\\sqrt{\\pi}}.\n\\end{equation}\nEquating this to the product of \\eref{surface_area} and \\eref{profile1d} yields\n\\begin{equation}\n\\frac{2 \\lambda^2 \\delta}{\\sqrt{\\pi}}= \\frac{C_6 \\lambda^2 \\delta}{\\sqrt{\\pi}},\n\\end{equation}\nwhich implies that $C_6 = 2$.\n\nNext, consider the limit when $\\delta \\rightarrow 0 $ and $h \\rightarrow 0$.\nIn the linear theory, the Atwood number is rescaled:\n\\begin{equation}\nA = \\frac{A_0}{1 + \\pi^{-1/2} k \\delta} = A_0 \\left(1 - \\frac{k \\delta}{\\sqrt{\\pi} + k \\delta}\\right)\n\\end{equation}\nWe equate this to \\eref{effective-atwood}:\n\\begin{equation}\n\\frac{2 \\pi \\delta}{\\lambda (\\sqrt{\\pi} + 2 \\pi \\delta / \\lambda)} = \\frac{C_6 }{C_8}\\frac{\\delta}{\\lambda \\sqrt{\\pi} 
},\n\\end{equation}\nor\n\\begin{equation}\n\\frac{C_6}{C_8} = \\frac{2 \\pi \\sqrt{\\pi}}{\\sqrt{\\pi} + 2 \\pi \\delta / \\lambda}.\n\\end{equation}\nThe variable $\\delta$ is associated with the mixing model, not the dynamics, so it would be convenient to have $C_8$ independent of $\\delta$.\nWe've defined $C_6 = 2$ in the limit of $\\delta(0), a_0 \\rightarrow 0$, so we can add a term that goes to zero at $\\delta = 0$:\n\\begin{equation}\nC_6 = \\frac{2}{1 + 2 \\sqrt{\\pi} \\delta / \\lambda},\n\\end{equation}\nwhich constrains:\n\\begin{equation}\nC_8 = \\frac{1}{2\\pi},\n\\end{equation}\nwhich is the same as $C_4$ in the inviscid case.\n\n\\subsection{Coefficient estimation}\n\n\\begin{figure*}\n\\begin{subfigure}[b]{\\columnwidth}\n\\includegraphics[width=\\columnwidth]{figs/slice}\n\\caption{Scalar $\\phi$}\n\\end{subfigure}\n\\begin{subfigure}[b]{\\columnwidth}\n\\includegraphics[width=\\columnwidth]{figs/slice_w}\n\\caption{Vertical component of the velocity, $w$}\n\\end{subfigure}\n\\caption{ \\flabel{bubble_geom}\nSlices of the scalar and vertical component of the velocity at early times and high Grashof number.\nThe arrows indiciate the dependence of the model terms on different span-wise length scales, and are identical in both figures.\n$C_1$ is related to the maximum cross sectional diameter of the bubble in the velocity field.\n$C_2$ is related to the nominal side-wall diameter of the bubble in the velocity field.\n$C_5$ is related to the nominal side-wall diameter of the bubble in the scalar field.\n$\\delta$ is related to the interface thickness in the scalar field.\nThe slice is of the plane $x=y$, which passes through only bubble centers.\n}\n\\end{figure*}\n\nThe parameter $C_1$ scales the form drag and serves as a drag coefficient.  \nBecause we have let $C_0 = 1$, the force balance is really aggreated over two rising bubbles and two falling spikes, each with diameter $\\lambda / 2$.\nTherefore, we multiply the force on a single bubble of diameter $\\lambda/2$ by 4.\nNow, we relate $C_1$ to the drag coefficient $C_d$ in the drag equation:\n\\begin{equation}\n\tC_1 \\lambda^2 \\dot{h}^2 = 2 C_d \\mathcal{A} \\dot{h}^2,\n\\end{equation}\nwhere $\\mathcal{A}$ is the cross sectional area,\nso $C_1$ can be estimated using drag coefficients of similar objects:\n\\begin{equation} \\elabel{prior_c1}\nC_1 = 2 C_d \\frac{\n\t\\mathcal{A}}{\\lambda^2},\n\\end{equation}\nwhere $\\mathcal{A} \\approx (\\lambda/2)^2$.\nInitially, the bubble tip is a flat plate, which has $C_d = 1.28$.\nAt late times, the bubble is closer to an elongated cylinder, which has $C_d = 0.82$, but with a somewhat streamlined tip, which further reduces drag.\nWe expect $C_1 \\approx 0.64$, but possibly much smaller if the bubble takes a streamlined shape.\nHowever, if the bubble spreads to have a diameter greater than $\\lambda / 2$, $C_1$ could be greater than $0.64$.\n\nNext, onsider the limit when $h \\rightarrow \\infty$ and $D = 0$:\nThe dynamical equation becomes\n\\begin{equation}\n\\ddot{h} = \\frac{A_0 g - C_2 \\nu (1/\\lambda) \\dot{h}}{C_3}\n\\end{equation}\nwhich leads to a terminal velocity of:\n\\begin{equation} \\elabel{visc_vel}\n\\dot{h} = \\frac{A_0 g \\lambda^2 }{C_2 \\nu},\n\\end{equation}\nor a non-dimensional velocity, i.e. 
Froude number:\n\\begin{equation}\n\\text{Fr} = \\frac{d z}{d \\tau} = \\frac{\\sqrt{\\text{Gr}}}{C_2}.\n\\end{equation}\nThe case of extended bubbles and spikes affected only by viscous drag is highly analogous to flow through a square duct.\nThe pressure drop, $\\Delta p$, along a duct is given by the Darcy-Weisbach formula:\n\\begin{equation}\n\\Delta p = \\frac{f_D}{2} \\frac{v^2 L}{d},\n\\end{equation}\nwhere $L$ is the length of the duct,\n$v$ is the mean velocity,\n$d$ is the hydraulic diameter,\nand $f_D$ is the Darcy friction factor.\nIn our case, $L = h$, $v = \\dot{h}$, and $\\Delta p = A_0 g h$, so:\n\\begin{equation}\nA g = \\frac{f_D}{2} \\frac{\\dot{h}^2}{d}.\n\\end{equation}\nFor laminar flows in circular pipes, $f_D = \\bar{f}_D = 64 / \\text{Re}$, so:\n\\begin{equation}\nA g = \\frac{f_D}{\\bar{f}_D} 32 \\nu \\frac{\\dot{h}}{d^2}.\n\\end{equation}\nThe hydraulic diameter $d = \\lambda / 2$, so:\n\\begin{equation}\n\\dot{h} = \\frac{\\bar{f}_D}{f_D} \\frac{A_0 g \\lambda^2}{128 \\nu} .\n\\end{equation}\nThis gives an estimate for $C_2$:\n\\begin{equation}\nC_2 \\approx 128 \\frac{f_D}{\\bar{f}_D},\n\\end{equation}\nwhere the ratio $f_D / \\bar{f}_D$ is affected by the geometry and departure from laminar flow.\nFor example, for square ducts $f_D/\\bar{f}_D \\approx 0.889$, so $C_2 \\approx 114$~\\cite{ghiaasiaan2011convective}. \n\nThe product of the coefficient $C_5$ and $\\lambda h$ gives the interfacial area of the side of the bubble.\nTherefore, $C_5$ captures information both about the bubble shape and the bubble diameter.\nIf the bubbles had diameter $\\lambda / 2$ and were smooth and rectangular, then $C_5 \\approx 4$.\nIf the bubble has a lower surface area shape, e.g. cylindrical, or is thinner, then $C_5 < 4$.\n\nThese diameters, along with the relevant span-wise length scales in the preceding coefficients, are sketched in \\fref{bubble_geom}.\nThe diameter associated with $C_1$ is defined as the entrainment width at the bubble tip, in contrast to the width at the bubble center used for $C_2$.\nThe diameter associated with $C_5$ is defined similarly to $C_2$, but with respect to the scalar interface.\nThe interface width $\\delta$ also depends on the scalar representation of the bubble.\n", "meta": {"hexsha": "1408ad16e61b1f05941afe5b81d8df83b69d8b83", "size": 14048, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "model.tex", "max_stars_repo_name": "maxhutch/2016_smRTI_model", "max_stars_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-06-08T11:33:32.000Z", "max_stars_repo_stars_event_max_datetime": "2016-06-08T11:33:32.000Z", "max_issues_repo_path": "model.tex", "max_issues_repo_name": "maxhutch/2016_smRTI_model", "max_issues_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "model.tex", "max_forks_repo_name": "maxhutch/2016_smRTI_model", "max_forks_repo_head_hexsha": "caeb8c95b33ad32b7cffc6a685713b323aba35de", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.983277592, "max_line_length": 182, "alphanum_fraction": 0.7181805239, "num_tokens": 4557, "lm_name": "Qwen/Qwen-72B", "lm_label": 
"1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5849124035696939}}
{"text": "% As a sample LaTeX document, this is an actual assignment \n% written in LaTeX with my template for MATH 417, \n% Honors Real Variables (Measure Theory) at University of Alberta.\n% This source has been released with permission with the instructor,\n% Professor John C. Bowman as the solutions are available at\n% https://www.math.ualberta.ca/~bowman/m417/assign1.pdf\n\n% Of course, don't plagiarize! Just because you have the solutions\n% to whatever problems there is does not mean you should copy it.\n% The only way to learn math is to do the problems yourself!\n\n% Moreover, I do not warrant any correctness of these solutions.\n% These are simply what I have written which may contain mistakes.\n\n% UPDATE (2/1/19): Unfortunately, by the request of my professor\n% John C. Bowman with potential plagiarism in future years,\n% I removed all the solutions and replaced them with [Redacted].\n% However, all the problem texts are kept.\n\n% ------- END DISCLAIMER ------------- \n\n\\documentclass{article}\n\\usepackage{import}\n\n% Change this if you want a different margin.\n% Current margin is smaller than the LaTeX default but larger than\n% Standard word documents.\n\\newcommand{\\SMALLMARGINS}{1.5in}\n\n% Change this directory if you put your template files in a different directory.\n\\newcommand{\\basedir}{../../}\n\n% Comment out this if you want an alternative header (that I no longer support)\n\\newcommand{\\USESCSTYLE}{}\n\n% If you are using Crowdmark or require new page for each problem, uncomment this.\n% \\newcommand{\\CROWDMARK}{}\n\n\n\\import{\\basedir/base/}{mathPreamble}\n\n% Assignment number and Course Name\n\\newcommand{\\assignmentnum}{1}\n\\newcommand{\\coursename}{MATH 417}\n\n% Who are you, obviously.\n\\newcommand{\\studentname}{Jamie van Lindenberg}\n\\newcommand{\\studentid}{1234567}\n\\newcommand{\\studentemail}{rassamee@ualberta.ca}\n\\newcommand{\\coursesec}{Q1}\n\n\\begin{document}\n\n% Comment this out if you do not want the title.\n% Alternatively, use \n% \\import{\\basedir/base/}{hwTitle}\n\n% for a cover page.\n\\integratedtitle\n\n\\problemsec\n\nLet $(S_n)_1^\\infty$ sequences of sets.\nDefine\n\\[\n    \\liminf_{n\\to\\infty} S_n\\defeq \\bigcup_{k=1}^\\infty \\bigcap_{n=k}^\\infty S_n, \\quad \\limsup_{n\\to\\infty} S_n\\defeq \\bigcap_{k=1}^\\infty \\bigcup_{n=k}^\\infty S_n.\n\\]\nShow that\n\\[\n    \\liminf_{n\\to\\infty} S_n\\subset \\limsup_{n\\to\\infty} S_n. 
\n\\]\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\nLet $(S_j)_1^\\infty$ sequence of sets.\nDefine $T_1=S_1$ and\n\\[\n    T_j\\defeq S_j \\setminus \\bigcup_{i=1}^{j-1}S_i, \\quad j\\in 2,3, \\ldots.\n\\]\nProve that $(T_j)_1^\\infty$ is a sequence of pairwise disjoint sets satisfying\n\\[\n    \\bigcup_{j=1}^n T_j=\\bigcup_{j=1}^n S_j, \\quad \\text{all }n\\in\\naturals\n\\]\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\nSuppose that $A, B\\subset \\real^d$ with $m^*(A),m^*(B)<\\infty$.\nShow that\n\\[\n    \\abs{m^*(A)-m^*(B)}\\le m^*(A\\symdiff B)\n\\]\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\nGive examples of\n\\begin{enumerate}\n    \\item a bounded countable union of Jordan measurable sets that is not Jordan measurable;\n    \\item a bounded countable intersection of Jordan measurable sets that is not Jordan measurable.\n\\end{enumerate}\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\nDetermine whether each of the following sets are countable.\nJustify your answers.\n\n\\subsection{Part (a)}\nThe set of all mappings from $1\\ldots,N$ to $\\naturals$ where $N\\in\\naturals$ is fixed.\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\subsection{Part (b) and (c)}\nThe set of all mappings $S$ from $\\naturals$ to $[0,1]$ and the restriction $R$ of $S$ such that each $f\\in R$ is eventually $0$ (i.e. $\\lim_{n\\to\\infty} f(n)=0$ for $f\\in R$)\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\subsection{Part (d)}\nThe set of all finite subset of $\\naturals$.\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\n\nLet $(x_\\alpha)_{\\alpha\\in A}$ a collection of $x_\\alpha \\in \\real_{\\ge 0}$ such that\n\\[\n    \\sum_{\\alpha\\in A} x_\\alpha <\\infty.\n\\]\nShow that $x_\\alpha=0$ for all but at most countably many $\\alpha \\in A$, even if $A$ is itself uncountable.\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\nLet $S$ a set.\nProve that there is no surjective map from $S$ to $\\powerset(S)$.\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\n\\problemsec\n\nLet $S$ a countable set.\nShow that\n\\[\n    \\mathfrak{F}(S)\\defeq \\set{F\\subset S: F \\text{ finite}}\n\\]\n\n\\begin{solution}\n    [Redacted].\n\\end{solution}\n\\end{document}", "meta": {"hexsha": "70c4c6cc71138a90ecff32ed7ceb76cf2c08beee", "size": 4483, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sample/math417hw1/m417h1.tex", "max_stars_repo_name": "supakorn-ras/latex-templates", "max_stars_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sample/math417hw1/m417h1.tex", "max_issues_repo_name": "supakorn-ras/latex-templates", "max_issues_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sample/math417hw1/m417h1.tex", "max_forks_repo_name": "supakorn-ras/latex-templates", "max_forks_repo_head_hexsha": "94ac5cbb3addc4135db8e33395d4e9ce21716cbc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3705882353, 
"max_line_length": 175, "alphanum_fraction": 0.70890029, "num_tokens": 1385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8175744828610096, "lm_q1q2_score": 0.5849124033790303}}
{"text": "\\chapter{Integer domains}\\label{ch4:integers}\nIn this chapter we focus on the domain of integers, and try to determine sufficient conditions for the non existence of under-approximation abstract domains.\n\n\\section{Sum completeness}\nThe concept of complete abstraction (Definition \\ref{ch2:def:complete-abstr}) for a function is very important in over-approximation abstract interpretation because it correspond to analyses without false alarms. In general it's really rare that all the basic transfer functions of a program are complete, with boolean guards almost never being such. However, it's not uncommon that \\textit{some} of them are complete: for instance, in the interval domain sum and multiplication by a constant are complete operations.\n\nIn under-approximation, completeness is much harder to achieve: the reason is the fact that complete abstractions allow to ``distribute\" $\\alpha$ over the operation, so having even a single set that is abstracted to $\\bot$ forces, by strictness of concrete operations, all concrete sets that can be obtained applying the operation to that specific set to be $\\bot$ as well. This idea is more easily seen in the proof of the following proposition.\n\\begin{prop}\\label{ch3:th:sum-complete-trivial}\n\tLet $\\ugi{\\pow(\\setZ)}{\\alpha}{A}$ be an under-approximating Galois insertion. If $A$ is complete for the concrete sum\\footnote{In this proposition we're considering as usual the additive extension of $+$.} then it's trivial.\n\\end{prop}\n\\begin{proof}\n\tLet $+^{\\flat}$ be the complete abstraction of $+$.\n\tSince all sound under-approximations for a singleton set are $\\emptyset$ and the singleton itself, we can distinguish two cases.\n\n\tIf all singletons $\\{ n \\}$ are representable in $A$ then, by union closure (Proposition \\ref{ch2:th:under-gc-union-closure}) we get that any $S \\in \\pow(\\setZ)$ is also in $A$ since it's union of singletons. This in turn entails $A = \\pow(\\setZ)$, that is trivial.\n\n\tOtherwise, there exists an integer $\\bar{n}$ such that $\\alpha(\\{ \\bar{n} \\}) = \\emptyset$. Consider then an arbitrary set of integers $S \\subseteq \\setZ$.\n\t\\begin{align*}\n\t\t\\alpha(S) &= \\alpha(S - \\{ \\bar{n} \\} + \\{ \\bar{n} \\} )      &&[\\text{concrete operation}] \\\\\n\t\t&= \\alpha(S - \\{ \\bar{n} \\}) +^{\\flat} \\alpha(\\{ \\bar{n} \\}) &&[\\text{completeness of }+^{\\flat}] \\\\\n\t\t&= \\alpha(S - \\{ \\bar{n} \\}) +^{\\flat} \\emptyset             &&[\\alpha(\\{ \\bar{n} \\}) = \\emptyset] \\\\\n\t\t&\\subseteq \\alpha(S - \\{ \\bar{n} \\}) + \\emptyset = \\emptyset &&[\\text{correctness of }+^{\\flat}]\n\t\\end{align*}\n\tThis means that any set $S$ is abstracted to $\\emptyset$, so $A = \\{ \\emptyset \\}$.\n\\end{proof}\n\nIn our opinion, the reason this doesn't happen in over-approximation is an instance of the asymmetry born of basic transfer functions. As we introduced in the previous chapter, many operations, defined as the additive extension of basic constructs, increase the output cardinality. For instance, the sum of two sets of integers have a cardinality that is at least the largest of the two inputs. This asymmetry allows to get many subsets of the concrete domain as the result of applying the function to a small set (for instance a singleton) but not to a big one.\nThen in under-approximation small sets can be composed by union to make the abstract domain grow until it's too big to be feasible, while in over-approximation sets are composed by intersection. 
To emulate this behaviour, we would need big sets (e.g. the whole $C$ without a single value) so that their intersections give rise to lots of smaller sets, but this conflicts with the ``direction\" of basic transfer functions.\n\nOf course in general completeness is a very strong property to require, so in the next section we'll introduce a weaker notion and show that it is enough to prevent the existence of under-approximation abstract domains.\n\n\\section{Non emptying-ness}\nCompleteness is a concept tailored to over-approximation: we use it to avoid false alarms while over-approximating the set of possible values. In bug catching, an alternative, weaker definition may be enough.\n\\begin{definition}[Non emptying]\\label{ch3:def:non-emptying}\n\tLet $\\ugc{C}{\\alpha}{\\gamma}{A}$ be an under-approximation Galois connection, $f : C \\rightarrow C$ a monotone function on $C$ and $f^{\\flat} = \\alpha \\circ f \\circ \\gamma$ its best correct approximation in $A$.\n\tWe say that $f$ is \\textit{non emptying} (in $A$) if, for any concrete value $c$, if both $\\alpha(c) \\neq \\bot$ and $\\alpha(f(c)) \\neq \\bot$ then also $f^{\\flat}(\\alpha(c)) \\neq \\bot$.\n\\end{definition}\n\nUnlike completeness, this definition doesn't mean that the analysis will find the best possible result the abstraction can, but just that if it starts from something ($\\alpha(c) \\neq \\bot$) and it can find something ($\\alpha(f(c)) \\neq \\bot$) then it will find at least one of the possible results ($f^{\\flat}(\\alpha(c)) \\neq \\bot$).\nThe rationale behind it is that the analysis shouldn't get to $\\bot$ because, as stated in the previous chapter, recovery from it is hard. The definition prevents exactly this when it's caused only by imprecision of the abstracted function.\n\nClearly completeness implies non emptying-ness since $f^{\\flat}(\\alpha(c)) = \\alpha(f(c)) \\neq \\bot$, but the converse is not true, as shown in the following example.\n\\begin{example}\\label{ch3:ex:ne-not-complete}\n\tConsider the under-approximation domain $\\Int_0$ of intervals containing $0$ (Example \\ref{ch2:ex:intervals-0}). Note that in this example we write intervals to denote subsets of $\\setZ$, following the identification given by the Galois insertion, as well as use both $\\bot$ and $\\emptyset$ to refer to the same object (the empty subset of $\\setZ$).\n\n\tConsider the additive extension of the function $f(x) = x + 1$. This function is not complete in the abstract domain since, for instance on the concrete element $S = \\{ -1 \\}$, we have\n\t\\begin{align*}\n\t\t&\\alpha(f(S)) = \\alpha(\\{ 0 \\}) = [0, 0] \\\\\n\t\t&f^{\\flat}(\\alpha(S)) = f^{\\flat}(\\bot) = \\bot\n\t\\end{align*}\n\n\tHowever this function is non emptying.\n\tFirst we show that any concrete element $S \\in \\pow(\\setZ)$ is such that $\\alpha(S) \\neq \\emptyset$ if and only if $0 \\in S$.\n\tSuppose that $0 \\in S$. Then by monotonicity of $\\alpha$\n\t\\[\n\t\\alpha(S) \\supseteq \\alpha(\\{ 0 \\}) = [0, 0] \\supset \\emptyset\n\t\\]\n\tConversely, assume that $\\alpha(S) \\neq \\emptyset$. Since all elements of $\\Int_0$ but the empty set contain $0$, and $\\alpha \\preceq \\id$, we get\n\t\\[\n\t0 \\in \\alpha(S) \\subseteq S\n\t\\]\n\tthat is exactly $0 \\in S$.\n\n\tConsider now an arbitrary concrete element $S$ such that $\\alpha(S) \\neq \\bot$ and $\\alpha(f(S)) \\neq \\bot$. As shown above, the former condition is equivalent to $0 \\in S$, and the latter to $0 \\in f(S)$, that is in turn equivalent to $-1 \\in S$. 
Using these two hypothesis, we can show\n\t\\begin{align*}\n\t\t&S \\supseteq \\{ -1, 0 \\} \\\\\n\t\t\\implies& \\alpha(S) \\supseteq \\alpha(\\{ -1, 0 \\}) = [-1, 0] \\\\\n\t\t\\implies& f(\\alpha(S)) \\supseteq f([-1, 0]) = [0, 1] \\\\\n\t\t\\implies& f^{\\flat}(\\alpha(S)) = \\alpha(f(\\alpha(S))) \\supseteq \\alpha([0, 1]) = [0, 1] \\supset \\emptyset\n\t\\end{align*}\n\tthat is exactly $f^{\\flat}(\\alpha(S)) \\neq \\bot$, the non emptying condition.\n\\end{example}\nHowever, even this weaker notion allows to prove some results on $A$ under some assumptions, as we show in the remainder of this chapter.\n\nWe assume there is an under-approximation Galois insertion $\\ugi{\\pow(C)}{\\alpha}{A}$. Moreover, we say an element $S \\in \\pow(C)$ is \\textit{representable} if it belongs to $A$, or equivalently if $\\alpha(S) = S$.\n\n\\begin{definition}\\label{ch3:def:repr-with-set}\n\tLet $S \\subseteq C$ be a subset of $C$. We say that $d \\in C$ is \\textit{representable with $S$} if $S \\cup \\{ d \\}$ is representable. We call $R(S)$ the set of elements of $C$ representable with $S$, ie.\n\t\\[\n\tR(S) = \\{ d \\in C \\svert \\alpha(\\{ d \\} \\cup S) = \\{ d \\} \\cup S \\}\n\t\\]\n\\end{definition}\nFor the sake of brevity, we shall write $R$ for $R(\\emptyset)$, the set of representable values of $C$, and $R(c)$ for $R(\\{ c \\})$ where $c \\in C$ is any concrete value.\n\nWe now present a lemma about non emptying functions that makes reasoning easier. This lemma is weaker than the definition, but is nevertheless one of the main tools we use when considering non emptying functions.\n\n\\begin{lemma}\\label{ch3:th:f-non-repr-pair}\n\tLet $f: C \\rightarrow C$ be non emptying, $c \\in R$ and the pair $\\{ c, \\bar{c} \\}$ be not representable, ie. $\\bar{c} \\notin R(c)$. If $f(\\bar{c}) \\in R$ then also $f(c) \\in R$.\n\\end{lemma}\n\\begin{proof}\n\tSince $\\{ c, \\bar{c} \\} \\supseteq \\{ c \\}$ we have\n\t\\[\n\t\\alpha(\\{ c, \\bar{c} \\}) \\supseteq \\alpha(\\{ c \\}) = \\{ c \\}\n\t\\]\n\twhere the equality follows because $c \\in R$ is representable. 
Since by correctness\n\t\\[\n\t\\{ c, \\bar{c} \\} \\supseteq \\alpha(\\{ c, \\bar{c} \\})\n\t\\]\n\tand $\\alpha(\\{ c, \\bar{c} \\})$ can't be the pair $\\{ c, \\bar{c} \\}$ because this is not representable and hence not in the image of $\\alpha$, it should be the case that $\\alpha(\\{ c, \\bar{c} \\}) = \\{ c \\}$.\n\n\tNow\n\t\\[\n\t\\alpha(f(\\{ c, \\bar{c} \\})) = \\alpha(\\{ f(c), f(\\bar{c}) \\}) \\supseteq \\alpha(\\{ f(\\bar{c}) \\}) = \\{ f(\\bar{c}) \\}\n\t\\]\n\twhere the last equality follows by the hypothesis that $f(\\bar{c}) \\in R$.\n\tThis in particular means that $\\alpha(f(\\{ c, \\bar{c} \\})) \\neq \\emptyset$, and together with the fact that $\\alpha(\\{ c, \\bar{c} \\}) = \\{ c \\} \\neq \\emptyset$, since $f$ is non emptying we get that\n\t\\[\n\tf^{\\flat}(\\alpha(\\{ c, \\bar{c} \\})) \\neq \\emptyset\n\t\\]\n\n\tFrom this we find\n\t\\begin{align*}\n\t\t\\emptyset &\\subset f^{\\flat}(\\alpha(\\{ c, \\bar{c} \\})) \\\\\n\t\t&= f^{\\flat}(\\{ c \\}) \\\\\n\t\t&= \\alpha \\circ f(\\{ c \\}) \\\\\n\t\t&= \\alpha(\\{ f(c) \\})\n\t\\end{align*}\n\tAgain by correctness we have $\\alpha(\\{ f(c) \\}) \\subseteq \\{ f(c)\\}$, and since this can't be empty it should be exactly $\\alpha(\\{ f(c) \\}) = \\{ f(c) \\}$, that is $f(c) \\in R$.\n\\end{proof}\n\nThe goal of this lemma is to use non emptying functions to get new representable elements, as this will be our main tool to prove non existence of under-approximation abstract domains.\n\nWe now apply this definition on integer domains in two different situations, on the infinite domain $\\pow(\\setZ)$ and on the finite $\\pow([-N; N])$ of machine integers.\n\n\\subsection{Infinite domain}\nIn this subsection we focus on the infinite concrete domain $\\pow(\\setZ)$.\n\\begin{assumption}\n\tWe assume that an abstract domain $A$, to be feasible for analyses, must be at most countable.\n\\end{assumption}\nWe do this assumption because we want to represent abstract elements with an amount of bits comparable with that of concrete \\textit{values}, to have a complexity comparable with a single concrete execution of the program and not exponentially greater. Thus, we require the size of the abstract domain to be that of $\\setZ$, the set of values handled by the program, and not the concrete domain $\\pow(\\setZ)$.\n\nWe first present a simple cardinality estimate that will be useful in the proof of the following result. The goal of this lemma is to show that some sets must be ``small\" (in some sense, in this case finite), so we can find a contradiction showing that one of these sets is actually ``big\". We follow this line of reasoning in almost all of our proofs of non existence of abstract domain with some properties.\n\n\\begin{lemma}\\label{ch3:th:R-S-bound-integer-inf}\n\tFor any fixed subset $S \\subseteq \\setZ$, $R(S)$ is finite.\n\\end{lemma}\n\\begin{proof}\n\tBy union closure of the abstract domain (Proposition \\ref{ch2:th:under-gc-union-closure}), any set $S \\cup T$ for $T \\subseteq R(S)$ is representable too, since it can be expressed as the union of representable sets:\n\t\\[\n\tS \\cup T = \\bigcup\\limits_{x \\in T} (S \\cup \\{ x \\})\n\t\\]\n\tand $S \\cup \\{ x \\}$ is representable since $x \\in R(s)$.\n\n\tThe number of those sets is the cardinality of $\\pow(R(S))$, and since $A$ is at most countable the set $R(S)$ can't be infinite.\n\\end{proof}\n\nWe now present the first result that shows that requiring some functions to be non emptying makes impossible to create an abstract domain. 
The main idea is to define an infinite sequence of representable elements, that is in contradiction with the previous lemma that says that $R = R(\\emptyset)$ is finite.\nIn order to define such a sequence, we want to use Lemma \\ref{ch3:th:f-non-repr-pair}: we start from an initial representable $n_0$ and from a value $\\bar{n}$ not representable with it, then find a non-emptying $f$ that maps $\\bar{n}$ into $n_0$, so that $f(\\bar{n})$ is representable and we can then apply the lemma to get the new representable element $f(n_0)$. We then iterate this procedure, changing $f$, to build the infinite sequence.\n\nWe believe the assumption that there exists an initial representable value is not very restrictive since initializations like \\code{x = 0} must be abstracted to $\\bot$ if $0$ is not representable.\n\n\\begin{prop}\\label{ch3:th:ne-sum-nonexsistence-inf}\n\tLet $\\ugi{\\pow(\\setZ)}{\\alpha}{A}$ be an under-approximation Galois insertion, and assume that there is an integer $n_0$ that is representable. Then it can't be the case that all the functions of the form $f_n(x) = x + n$ are non emptying in $A$.\n\\end{prop}\n\n\\begin{proof}\n\tAssume by contradiction that all $f_n$ are non emptying in $A$.\n\tBy hypothesis, $n_0 \\in R$, and $R(n_0)$ is at most finite by Lemma \\ref{ch3:th:R-S-bound-integer-inf}. Since $\\setZ$ is infinite, this means there exists an $\\bar{n} \\in \\setZ \\setminus R(n_0)$, that is an element such that the pair $\\{ n_0, \\bar{n} \\}$ is not representable.\n\n\tLet $d = n_0 - \\bar{n}$ and consider $f_d$. We assumed it to be non emptying, so we can apply Lemma \\ref{ch3:th:f-non-repr-pair}: $n_0$ is representable while the pair $\\{ n_0, \\bar{n} \\}$ is not, and\n\t\\[\n\tf_d(\\bar{n}) = \\bar{n} + d = \\bar{n} + n_0 - \\bar{n} = n_0\n\t\\]\n\tso it's representable. Hence also $f_d(n_0) = n_0 + d$ is representable.\n\n\tFollowing this idea, we can prove by induction on $t$ that $n_0 + t d$ is representable for all $t$. The base step $t = 0$ is the hypothesis that $n_0$ is representable.\n\tFor the inductive step, assume $n_0 + (t - 1) d$ is representable, and consider $f_{t d}$. We assumed this non emptying, so we can apply again Lemma \\ref{ch3:th:f-non-repr-pair} to the pair $\\{ n_0, \\bar{n} \\}$:\n\t\\[\n\tf_{t d}(\\bar{n}) = \\bar{n} + t d = \\bar{n} + n_0 - \\bar{n} + (t - 1) d = n_0 + (t - 1) d\n\t\\]\n\tthat is representable by inductive hypothesis. So we get that $f_{t d}(n_0) = n_0 + t d$ is representable too, that is exactly the inductive step.\n\n\tSince $\\bar{n}$ is not representable while $n_0$ is, we have $n_0 \\neq \\bar{n}$, or equivalently $d \\neq 0$, hence $\\{ n_0 + t d \\svert t \\in \\setN \\}$ is infinite.\n\tMoreover $\\{ n_0 + t d \\svert t \\in \\setN \\} \\subseteq R$ by the induction above, but this is impossible since $R$ must be finite by Lemma \\ref{ch3:th:R-S-bound-integer-inf}.\n\\end{proof}\n\n\\subsection{Finite domain}\nNow we move to a slightly different setting: we assume as concrete domain $\\pow([-N, N])$, the power set of a finite, symmetric interval for some large integer value $N$, and we assume all operations are performed ``in machine arithmetic\", that is whenever the result is greater than $N$ it wraps back to $-N$ because of overflows. 
This correspond to work modulo $2N + 1$ taking the unique representative of each congruence class in the interval $[-N, N]$ of interest.\n\\begin{assumption}\n\tWe assume that an abstract domain $A$, to be feasible, must have cardinality polynomial in $N$.\n\\end{assumption}\nThis assumption guarantees that the number of bits required to represent an abstract element is linear in that for concrete elements.\n\nIn this section we'll use asymptotic notation for some quantities. For this to be completely formal we should define a sequence of abstract domain $A_N$, each one for the concrete domain $\\pow([-N, N])$, then define a sequence of values for each quantity we want to estimate, and take the limit of this sequence for $N$ going to infinity. However we do believe all these formal details would clutter both statements and proofs, making hard to get insight. For this reason, we avoid all this, just (ab)using the intuitive meaning associated with the notation.\n\n\\begin{lemma}\\label{ch3:th:R-S-bound-integer-fin}\n\tFor any fixed subset $S \\subseteq \\setZ$, $\\lvert R(S) \\rvert = O(\\log(N))$.\n\\end{lemma}\n\\begin{proof}\n\tAs in the proof of Lemma \\ref{ch3:th:R-S-bound-integer-inf}, by union closure any set $S \\cup T$ for $T \\subseteq R(S)$ is representable. We then have\n\t\\[\n\t\\poly(N) = \\abs{A} \\ge \\abs{\\pow(R(S))} = 2^{\\abs{R(S)}}\n\t\\]\n\tso, taking log at both sides, $\\abs{R(S))} = O(\\log(N))$.\n\\end{proof}\n\nThe following proposition uses the same proof line as Proposition \\ref{ch3:th:ne-sum-nonexsistence-inf} above: we define a sequence of representable elements, and prove that they are too much since, by the previous lemma, $R$ is quite small.\n\n\\begin{prop}\\label{ch3:th:ne-sum-nonexsistence-fin}\n\tLet $\\ugi{\\pow([-N, N])}{\\alpha}{A}$ be an under-approximation Galois insertion, and assume that there is an integer $n_0$ that is representable. Then it can't be the case that all the functions of the form $f_n(x) = x + n$ (modulo $2N + 1$) are non emptying in $A$.\n\\end{prop}\n\\begin{proof}\n\tLet $r = \\abs{R(n_0)}$. By the previous lemma \\ref{ch3:th:R-S-bound-integer-fin} we know that $r = O(\\log(N))$. Fix an element $\\bar{n} \\notin R(n_0)$ not representable with $n_0$ such that\n\t\\[\n\td = n_0 - \\bar{n} \\le r + 1\n\t\\]\n\tThis element should exists because otherwise all elements in the interval $[n_0 - r - 1; n_0 - 1]$ (modulo $2N + 1$) would be representable with $n_0$, that is impossible since they are $r + 1 = \\abs{R(n_0)} + 1$.\n\n\tFollowing the proof of proposition \\ref{ch3:th:ne-sum-nonexsistence-inf}, we can show by induction that for all $t \\ge 1$ the value $f_{td}(\\bar{n}) = n_0 + (t - 1) d$ is representable. However, all these values are different from one another for\n\t\\[\n\t1 \\le t < \\frac{2N + 1}{d}\n\t\\]\n\tand we also know that\n\t\\[\n\t\\frac{2N + 1}{d} > \\frac{2N}{r + 1} = \\frac{N}{O(\\log(N))}\n\t\\]\n\tBut this is a contradiction since all these values are representable while $\\abs{R} = O(\\log(N))$ by lemma \\ref{ch3:th:R-S-bound-integer-fin}, and\n\t\\[\n\t\\frac{N}{\\log(N)} = \\omega( \\log(N) )\n\t\\]\n\\end{proof}\n\n\\subsection{Consequences}\nThe previous two Propositions \\ref{ch3:th:ne-sum-nonexsistence-inf} and \\ref{ch3:th:ne-sum-nonexsistence-fin} show how requiring some functions to be non emptying prevents the existence of under-approximation abstract domains. 
We considered sums, some of the most common basic transfer functions over integers, and proved that no abstract domain small enough makes them all non-emptying, both for the infinite domain of all integers $\\setZ$ and for a finite, machine representable subset $[-N; N]$.\n\nWe presented the notion of non-emptying function motivated by observations of Chapter \\ref{ch3:comparison}. We can define correct under-approximation domains, for instance by taking complements of known over-approximation ones (see Example \\ref{ch2-5:ex:complement-intervals}), but the point is that these are not useful for real analyses. Requiring a domain to be non-emptying for some basic transfer functions is an attempt to force that usefulness, and we showed no domain satisfies this condition for integers.\n\nOf course this is not definitive evidence, since there may exist practical domains that doesn't make all sums non-emptying, or big domains (that is, not satisfying the assumption of having cardinality comparable with the set of values and not its power set) that can be represented efficiently, but we believe this to be nevertheless an interesting clue about the difficulty of designing under-approximation abstract domains.\n", "meta": {"hexsha": "f57055ff3566e5970c7bf097c1368530b1b28da3", "size": 19692, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/chapter3.tex", "max_stars_repo_name": "flavio-a/master-thesis", "max_stars_repo_head_hexsha": "9f23d79c205b82ca22106890dd297e64ce10c878", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/chapter3.tex", "max_issues_repo_name": "flavio-a/master-thesis", "max_issues_repo_head_hexsha": "9f23d79c205b82ca22106890dd297e64ce10c878", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/chapter3.tex", "max_forks_repo_name": "flavio-a/master-thesis", "max_forks_repo_head_hexsha": "9f23d79c205b82ca22106890dd297e64ce10c878", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.9912663755, "max_line_length": 562, "alphanum_fraction": 0.7070383912, "num_tokens": 5780, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8175744739711883, "lm_q1q2_score": 0.5849123970190389}}
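To make the counting argument of the two propositions above concrete, here is a minimal Python sketch (ours, not from the thesis) of the chain construction in the finite-domain case: starting from a representable $n_0$ and a pair $\\{n_0, \\bar{n}\\}$ that is not representable, the non-emptying sums force the whole arithmetic progression $n_0 + t\\,d \\pmod{2N+1}$ to be representable, which quickly outgrows any $O(\\log N)$ budget for $|R|$.
\\begin{verbatim}
# Forced representable chain n0 + t*d (mod 2N+1); every element must lie in R.
from math import log2

def forced_chain(N, n0, d):
    mod = 2 * N + 1
    chain, x = set(), n0
    while True:
        x = (x + d) % mod
        if x in chain or x == n0:
            return chain | {n0}
        chain.add(x)

N = 1000
chain = forced_chain(N, n0=0, d=1)
print(len(chain), log2(N))   # ~2N+1 forced elements vs. an O(log N) bound on |R|
\\end{verbatim}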
{"text": "\\newcommand{\\RN}[1]{%\n  \\textup{\\uppercase\\expandafter{\\romannumeral#1}}%\n}\n\n\\Lecture{Jayalal Sarma}{Oct 19, 2020}{17}{Generating Functions(continued)}{Lalithaditya}{$\\alpha$}{JS}\n\n\\section{Quick Recap of Previous Two Lectures}\n \n\\begin{itemize}\n\t\\item We represented the sequence of non-negative integers in the form of a formal power series.\n\t\\item Operations on power series corresponding to combinatorial meanings.\n\t\\item We used the concept of Generating Functions for the following examples:\n\t\t\\begin{enumerate}\n\t\t\t\\item Distributing 'n' votes to 'k' candidates such that every candidate gets atleast one vote.\n\t\t\t\\item Count the number of non-negative solutions for the equation $a+b+c=n$\n\t\t\t\\item Derving the expression for Catalan numbers.\n\t\t\\end{enumerate}\t\t \n\\end{itemize}\n\n\\section{Recurrence Relations}\n\nThere are three types of recurrence relations,that are being discussed in this lecture. There are Linear Recurrence Relations, Degree Recurrence Relations and Homogenous Recurrence Relations.\\\\ \\ Before getting into examples,lets discuss about these relations.\n\n\\begin{itemize}\n\t\\item \\textbf{Linear Recurrence Relation:}\\\\ \\\\A Linear Recurrence Relation is a equation that defines  $n\\textsuperscript{th}$ in a sequence in terms of the $k$ previous terms in the sequence. The recurrence relation is in the form:\\\\$$a_n = c_1.a_{n-1}~+~c_2.a_{n-2}~+~c_3.a_{n-3}~+~\\dots~+~ c_k.a_{n-k}$$ $$ =\\sum_{i=1}^{k} c_i*a_{n-i}$$ $where~c_i's~are~constants~independent~of~n$,\\\\ $c_1,c_2,c_3,\\dots,c_k \\in \\mathbb{R} $ and $c_k \\neq 0$.\n\t\n\t\\item \\textbf{Degree Recurrence Relation:}\\\\ \\\\A recurrence relation of degree d is said to be Degree Recurrence Relation where $a_n$ depends only on $a_{n-d}$.\n\t\n\t\\item \\textbf{Homogenous Recurrence Relation:}\\\\ \\\\ A recurrence relation where each term of the right hand side of the equation has the same degree.\n\t\t\n\t\\item \\textbf{Some examples on recurrence relations:}\n\t\\begin{enumerate}\n\t\t\\item $a_n = 5.a_{n-1}~+~a_{n-2}.a_{n-3}$ : This is neither linear nor homogenous.\n\t\t\\item $a_n = a_{n-1}.a_{n-2}~+~a_{n-3}.a_{n-4}$ : This is not linear but homogenous of degree 4.\n\t\t\\item $a_n = 5.a_{n-2}~+~10^n$ : This is linear but not homogenous of degree 2.\n\t\\end{enumerate}\n\\end{itemize}\n\n\\section{Using Generating Functions to solve recurrence relations}\n\nIn this section, we will look how to solve recurrence relations using generating functions.\n\\\\ \\\\\n\\textbf{Example 1:}\\\\\nIn the previous lectures,we can calculated the number of binary strings of length n, which have even number of $0$'s. It turned out to be $2^{n-1}$.\\\\\nSimilarly, calculate the number of decimal strings of length n, which contain even number of $0$'s.\\\\ \\\\\n\\textbf{Solution:}\\\\\nLet the $a_n$ be the number of decimal strings,which satisfy the given condition.\\\\\nBy convention,lets take that when $n=0$, the number of such strings is 1.\\\\\nIf $n=1$,then the number of such strings will be 9.\\\\\n$\\implies a_0=1$ and $a_1=9$.\\\\ \\\\\n\\textbf{Forming the recurrence relation:}\nLet's take a n-length decimal string, and let $d_n$ be the last digit in the string. There are two cases for this type of situation i.e. 
\n\\section{Using Generating Functions to solve recurrence relations}\n\nIn this section, we will look at how to solve recurrence relations using generating functions.\n\\\\ \\\\\n\\textbf{Example 1:}\\\\\nIn the previous lectures, we calculated the number of binary strings of length $n$ which have an even number of $0$'s. It turned out to be $2^{n-1}$.\\\\\nSimilarly, calculate the number of decimal strings of length $n$ which contain an even number of $0$'s.\\\\ \\\\\n\\textbf{Solution:}\\\\\nLet $a_n$ be the number of decimal strings which satisfy the given condition.\\\\\nBy convention, let's take that when $n=0$, the number of such strings is 1.\\\\\nIf $n=1$, then the number of such strings will be 9.\\\\\n$\\implies a_0=1$ and $a_1=9$.\\\\ \\\\\n\\textbf{Forming the recurrence relation:}\nLet's take an $n$-length decimal string, and let $d_n$ be the last digit in the string. There are two cases, namely $d_n=0$ and $d_n \\neq 0$.\\\\\n\\underline{\\textbf{Case-\\RN{1}:}} \\\\If the last digit is 0, then the remaining string must have an odd number of zeroes. Then the number of such strings will be $(10^{n-1} - a_{n-1})$.\\\\ \\\\\n\\underline{\\textbf{Case-\\RN{2}:}}\\\\\nIf the last digit is not zero, then the remaining string must have an even number of zeroes, and the number of such strings of length $n-1$ is $a_{n-1}$. The last digit can vary over $1,2,3, \\dots,9$. Therefore, the number of such strings will be $(9a_{n-1})$.\\\\ \\\\\nThe resultant recurrence relation for $a_n$ is,\\\\\n$$\\implies a_n~=~(10^{n-1}-a_{n-1}+9a_{n-1})$$\n$$\\implies a_n~=~(10^{n-1}+8a_{n-1})$$\nThe generating function for this problem will be,\n\\begin{equation}\nG(x) = \\sum_{n \\geq 0} a_n.x^n\n\\end{equation}\n$$ G(x)~=~a_0~+~\\sum_{n \\geq 1}a_n.x^n $$\n$$ G(x)~=~a_0~+~\\sum_{n \\geq 1}(10^{n-1}+8a_{n-1})x^n$$\n$$ G(x)~=~1~+~\\sum_{n \\geq 1} 8.a_{n-1}.x^n~+~\\sum_{n \\geq 1}10^{n-1}.x^n $$\n\n$$ G(x)~=~1~+~8.x ~\\sum_{n \\geq 1} a_{n-1}.x^{n-1}~+~x~ \\sum_{n \\geq 1}10^{n-1}.x^{n-1} $$\n\nLet $n-1~=~h$. Then,\n\n$$ G(x)~=~1~+~8.x ~\\sum_{h \\geq 0} a_{h}.x^{h}~+~x~ \\sum_{h \\geq 0}10^{h}.x^{h} $$\n\nAfter renaming the variable, we have\n\n$$ G(x)~=~1~+~8.x ~\\sum_{n \\geq 0} a_{n}.x^{n}~+~x~ \\sum_{n \\geq 0}10^{n}.x^{n} $$\n\nFrom the equation (17.72), we can see that $G(x) = \\sum_{n \\geq 0} a_n.x^n$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x~ \\sum_{n \\geq 0}10^{n}.x^{n} $$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x~ \\sum_{n \\geq 0}{(10.x)}^{n} $$\n\nFrom the summation of an infinite geometric progression, we have\n\n$$\\sum_{n \\geq 0}{(10.x)}^{n} = \\frac{1}{1-10.x}$$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x.\\left(\\frac{1}{1-10.x}\\right) $$\n\nAfter rearranging the terms, we finally get $G(x)$ as\n\n$$G(x)~=~\\frac{\\left(1-9.x\\right)}{(1-8.x).(1-10.x)}$$\n\nBy using the concept of partial fractions, let's split the above into two fractions,\n\n$$\\frac{\\left(1-9.x\\right)}{(1-8.x).(1-10.x)} ~=~ \\frac{A}{(1-8.x)}~+~\\frac{B}{(1-10.x)}$$\n\n$$=~\\frac{A+B-(10.A.x)-(8.B.x)}{(1-8.x).(1-10.x)}$$\n\n$$\\implies A+B=1~and~10.A+8.B=9$$\n\nAfter solving for A and B, we get $A=\\frac{1}{2}$ and $B=\\frac{1}{2}$\n\n\\begin{equation}\n\tG(x)~=~\\frac{\\frac{1}{2}}{(1-8.x)}~+~\\frac{\\frac{1}{2}}{(1-10.x)}\n\\end{equation}\n\nOur aim was to find the number $a_n$, which is nothing but the coefficient of $x^n$ in $G(x)$.\n\n$$\\implies Coefficient~of~x^n~in~G(x)~=~ \\left(\\frac{1}{2}.8^n\\right) + \\left(\\frac{1}{2}.10^n\\right)$$\n$$\\hspace{1ex}=~\\frac{8^n+10^n}{2}$$\n\n$\\therefore$ The number of decimal strings with an even number of zeroes is $\\left(\\frac{8^n+10^n}{2}\\right)$.\n\\\\ \\\\\n\\textbf{Example 2:}\\\\\nIn this example, we are not using any recurrence relations. We are proving combinatorial identities using generating functions.\n\\\\ \\\\\nFor $n \\geq k$, prove that \n\\begin{equation}\n\t\\sum_{m=k}^{n}{m \\choose k}~=~{n+1 \\choose k+1}\n\\end{equation}\n\\textbf{Solution:}\\\\\n\nFor a fixed $k$, let's assume that $$a_n~=~\\sum_{m=k}^{n}{m \\choose k} $$\n\nThe generating function for this problem will be,\n\n\\begin{equation}\n S(x)~=~ \\sum_{n \\geq k}a_n.x^n\n\\end{equation}\n\nWe can observe that in the above summation, $n$ starts from $k$. It could also start from $n = 0$; this gives the same function, since $a_n = 0$ for $n < k$. \\\\ \\\\\nLet's introduce a new function $\\sigma$,\n\n$$ \\sigma = \\left\\{\n\\begin{array}{ll}\n      1 & if~k\\leq m\\leq n \\\\\n      0 & otherwise \\\\\n\\end{array} \n\\right. 
$$\n\nFrom equation (17.74),\n\n$$ S(x)~=~ \\sum_{n \\geq k}a_n.x^n$$\n\n$$ S(x)~=~ \\sum_{n \\geq k}~\\sum_{m=k}^{n}{m \\choose k}.x^n$$\n\n$$ S(x)~=~ \\sum_{n \\geq k}\\left(~\\sum_{m \\geq k}{m \\choose k}.x^n\\left(\\sigma \\right) \\right)$$\n\nAfter rearranging the summations,\n\n$$ S(x)~=~ \\left(~\\sum_{m \\geq k}\\sum_{n \\geq k}{m \\choose k}.x^n\\left(\\sigma \\right) \\right)$$\n\nSince ${k \\leq m}$ and ${k \\leq n}$ the new function $\\sigma$ becomes $1$.\\\\ \\\\\nAlso ${k \\leq m}$ and ${k \\leq n}$, $\\implies m \\leq n$.\n\n$$\\implies~S(x)~=~ \\left(~\\sum_{m \\geq k}\\sum_{n \\geq m}{m \\choose k}.x^n \\right)$$\n\nSince, ${m \\choose k}$ is independent of $n$,\n\n$$S(x)~=~ \\left(~\\sum_{m \\geq k}{m \\choose k}. \\sum_{n \\geq m}x^n \\right)$$\n\n$$S(x)~=~\\sum_{m \\geq k}{m \\choose k}.\\left(x^m \\sum_{n \\geq m}x^{n-m} \\right) $$\n\nWe can observe that the second summation is the sum of an infinite geometric progression.\n\n$$S(x)~=~\\sum_{m \\geq k}{m \\choose k}.\\left( \\frac{x^m}{1-x} \\right)$$\n\n\\begin{equation}\n\tS(x)~=~\\frac{x^k}{1-x} \\left(\\sum_{m \\geq k} {m \\choose k}.x^{m-k} \\right)\n\\end{equation}\n\nWe know that,\n$$\\frac{1}{1-x} = 1+x+x^2+\\dots + \\dots$$\n\nand also,\n\n$${\\left(\\frac{1}{1-x}\\right)}^{k+1} = {(1+x+x^2+\\dots+\\dots)^{k+1}} $$\n\n$${(1+x+x^2+\\dots+\\dots)^{k+1}} = (1+x+x^2+\\dots).(1+x+x^2+\\dots).\\dots $$\n\nLet $d_1,d_2,d_3,\\dots,d_{k+1}$ be the degree of each x terms in the product.\\\\\nOur aim is to get the coefficient of $x^{m-k}$ in the above product, this is equivalent to the question \\\\ \\\\\n\\emph{In how many ways can we pick $d_1,d_2,\\dots,d_{k+1}$ such that} $$\\sum_{i=1}^{k+1}d_i = (m-k)$$\\\\\nThis is an example of multichoosing. As discussed in the previous lectures, the number of solutions to this question is $${{k+1+m-k-1} \\choose {m-k}} = {m \\choose m-k}$$\n\nand also,\n\n$${m \\choose m-k} = {m \\choose k} $$\n\nFrom the equation (17.76), we can replace ${m \\choose k}.x^{m-k}$ with ${\\left(\\frac{1}{1-x}\\right)}^{k+1}$\n\n$$S(x)~=~\\frac{x^k}{1-x}{\\left(\\frac{1}{1-X} \\right)}^{k+1} $$\n\n\\begin{equation}\n S(x)~=~\\frac{x^k}{{\\left(1-x \\right)}^{k+2}}\n\\end{equation}\n\nOur aim is to find the number $a_n$, which is nothing but the coefficient of $x^n$ in the generating function $S(x)$.\\\\ \\\\\n\nWe can observe that the,\n\n$$Coefficient~of~x^n~in \\frac{x^k}{{\\left(1-x \\right)}^{k+2}}~~=~~Coefficient~of~x^{n-k}~in~{\\left(\\frac{1}{1-x}\\right)}^{k+2}$$\n\nAs we proved earlier in this example, that the coefficient of $x^{n-k}$ in the right hand side, is equivalent to the sum of degrees of x terms by expanding $\\left(\\frac{1}{1-x}\\right)$ equal to $(k+2)$.\n\\\\\nAs proved in earlier lectures, this sum is equal to \n\n$$a_n = {{k+2+n-k-1} \\choose {n-k}}$$\n\n$${{k+2+n-k-1} \\choose {n-k}} = {n+1 \\choose n-k} $$\n\n$${n+1 \\choose n-k} = {n+1 \\choose k+1}$$\n\n$$\\boxed{\\therefore a_n = {n+1 \\choose k+1}}$$\n\n\\Lecture{Jayalal Sarma}{Oct 20, 2020}{18}{Two Variable Generating Functions}{Lalithaditya and Pragnya}{$\\alpha$}{JS}\n\nTill now we had discussed Generating Functions with one variable. In this lecture, we are going to discuss Generating Functions with two variables. 
Such type of Generating functions are known as \\textbf{Bivariate Generating Functions}.\\\\ \\\\\n\nThe general form of the Bivariate Generating functions is,$$G(x,y) = \\sum_{n,k \\geq 0} a_{n,k}.x^n.y^k$$\n\nThese type of generating functions are useful, when dealing combinatorial problems with two variables.\\\\\nLets try out some examples, to get an idea on how to use Generating Functions with two variables.\n\n\\section{Examples based on Bivariate Generating Functions}\n\\textbf{Example 1:}\\\\ Prove the binomial theorem in single variable using the two variable generating functions.\\\\\nBinomial Theorem in single variable:\n$${\\left(1+x\\right)}^{n} = \\sum_{k=0}^{n}{n \\choose k}.x^k$$\n\\textbf{Solution:}\\\\\n\nWe know that the number of ways of choosing a k-sized subset from n-sized set is equal to ${n \\choose k}$.\\\\\n\nLet the number be $b_{n,k}$.\n\\\\\nWhen $n=0$ , $b_{0,k}~=~0$ and when $k=0$, $b_{n,0}~=~1$.\\\\\n\\\\\nAs discussed in previous lectures, we can choose a k-sized subset from n-1 elements or from n elements.\n\\\\\n\\textbf{Recurrence relation:}\n$$b_{n,k} = b_{n-1,k-1} + b_{n-1,k} $$\n\nThe generating function for this problem is,\n\\begin{equation}\nB(x,y) = \\sum_{n,k \\geq 0}b_{n,k}.\\left(x^n.y^k \\right)\n\\end{equation}\n\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0}b_{n,0}x^n + \\sum_{n=0,k \\geq 0}b_{0,k}.y^k + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right) $$\n\nWe know that $b_{0,k} = 0~and~b_{n,0}=1$.\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0}1. \\left( x^n \\right) + \\sum_{n=0,k \\geq 0}0. \\left( y^k \\right) + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right) $$\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0} \\left( x^n \\right) + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right)$$\n\nBy using the recurrence relation,\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0} \\left( x^n \\right) + \\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^n.y^k \\right) + \\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^n.y^k \\right)$$\n\nWe know that,\n$$\\sum_{n \\geq 0}x^n = \\frac{1}{1-x} $$\n\n$$B(x,y) = \\frac{1}{1-x} + \\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^n.y^k \\right) + \\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^n.y^k \\right)$$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^{n-1}.y^{k-1} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^{n-1}.y^{k-1} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\nLet $(n-1)=h~and~(k-1)=p$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{h,p \\geq 1}b_{h,p}.\\left(x^{h}.y^{p} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\nAfter renaming of variables,\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 1}b_{n,k}.\\left(x^{n}.y^{k} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 1}b_{n,k}.\\left(x^{n}.y^{k} \\right) + x.\\left(\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^k \\right) - \\sum_{n \\geq 0,k=0}x^n \\right)$$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^{k} \\right) + x.\\left(\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^k \\right) - \\sum_{n \\geq 0,k=0}x^n \\right)$$\n\nFrom the equation (18.78),\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right).B(x,y) + x.\\left(B(x,y) - \\frac{1}{1-x} \\right)$$\n\nAfter rearranging the terms,\n\n$$B(x,y) = 1+x.\\left(y+1\\right).B(x,y)$$\n\n$$B(x,y) = 
\\frac{1}{1-x.\\left(y+1\\right)}$$\n\n$\\therefore$ The generating function $B(x,y)$ is,\n\n\\begin{equation}\n\t\\boxed{B(x,y) = \\frac{1}{1-x.\\left(y+1\\right)}}\n\\end{equation}\n\nCoefficient of $x^n$ in the Left hand side of the above equation is equal to the coefficient of $x^n$ in the right hand side of the above equation.\n\nCoefficient of $x^n$ in the left hand side $= \\sum_{k \\geq 0}b_{n,k}.y^k~(\\because From~equation~(18.78))$ \n\\\\\n\n\\textbf{Note:} Coefficient of $x^n$ in $\\left(\\frac{1}{1-ax}\\right)$ is $a^n$.\\\\\n\n$\\implies$ Coefficient of $x^n$ in the right hand side $= {\\left(1+y \\right)}^{n}$\\\\\n\nHence,\n\n$$\\sum_{k \\geq 0}b_{n,k}.y^k = {\\left(1+y \\right)}^{n} $$\n\nAfter renaming of variables,\n\n$${\\left(1+x \\right)}^{n} = \\sum_{k \\geq 0}b_{n,k}.x^k $$\n\nAt the beginning of the proof, we assumed that $b_{n,k} = {n \\choose k}$\n\n\\begin{equation}\n\t\\boxed{\\therefore {\\left(1+x \\right)}^{n} = \\sum_{k \\geq 0}{n \\choose k}.x^k}\n\\end{equation}\n\nwhich completes our proof.\n\\\\ \\\\\n\\textbf{Example 2 (Delannoy Numbers):}\\\\\nConsider a nxn grid. Delannoy number D counts the number of paths from the left-bottom corner (0,0) to any other point on the grid (n,m). \\\\\nThe path can be reached by only three paths i.e Upward edges(U), Rightward edges(R) and upward forward diagonals(F). Find the Delannoy Number. \\\\ \\\\\n\\textbf{Solution:}\\\\\nLet $d_{n,m}$ be the number of Delannoy paths from (0,0) to (n,m), by using the above edges only.\n\\\\ \\\\\nFor example,When n=3 and m=3, then the number of Delannoy paths is 63.\n\n\t\\begin{figure}[H]\n\t\t\\centerline{\\includegraphics[width=0.5\\textwidth,height=0.5\\textwidth]{images/DelannoyNumbers.png}}\n\t\\end{figure}\n\t\n\\textbf{\\underline{Aim}:} To find $d_{n,m}$ \\\\ \n\n\\textbf{Recurrence Relation:}\\\\ \\\\\nLets find a recurrence relation for $d_{n,m}$.\n\\\\ \nA point (n,m) can be reached from three ways i.e. 
from (n-1,m), from (n-1,m-1) and from (n,m-1).\n\\\\\nHence, the recurrence relation for $d_{n,m}$ will be,\n\\begin{equation}\n\\boxed{d_{n,m} = d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}}\n\\end{equation}\n\n\\textbf{Generating Function:}\\\\ \\\\\nThe generating function for this problem is,\n\\begin{equation}\n\\boxed{D(x,y) = \\sum_{n,m \\geq 0}d_{n,m}.x^n.y^m}\n\\end{equation}\n\nWe can observe that $d_{n,0}=d_{0,m}=1$.\n\n$$D(x,y) = \\sum_{n \\geq 0,m=0}d_{n,0}.x^n + \\sum_{n=0,m \\geq 1}d_{0,m}.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\sum_{n \\geq 0,m=0}1.x^n + \\sum_{n=0,m \\geq 1}1.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nWe know that,\n$$\\sum_{n \\geq 0}x^n = \\frac{1}{1-x}$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\sum_{n=0,m \\geq 1}y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,m \\geq 1}y^{m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n\nLet $(m-1)=h$,\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,h=0}y^h + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nAfter renaming the variables,\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,m=0}y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\left(\\frac{1}{1-y}\\right) + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nUsing the recurrence relation from equation (18.81),\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + \\sum_{n \\geq 1,m \\geq 1}(d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}).x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m-1}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m-1}.x^{n-1}.y^{m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}$$\n\nLet $(n-1)=h~and~(m-1)=p$,\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{h \\geq 0,p \\geq 0}d_{h,p}.x^{h}.y^{p} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}$$\n\nAfter renaming the variables,\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m}$$\n\nFrom the equation (18.82),\n\\begin{equation}\nD(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.D(x,y) + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m} +\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}\n\\end{equation}\n\nConsider the fourth term in the above equation,\n$$\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m} = x.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n-1}.y^{m}$$\n\nLet $(n-1)=h$\n$$x.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n-1}.y^{m} = x.\\sum_{h \\geq 0,m \\geq 1}d_{h,m}.x^h.y^m$$\nAfter renaming the variables,\n\n$$x.\\sum_{h \\geq 0,m \\geq 1}d_{h,m}.x^h.y^m = x.\\sum_{n \\geq 0,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$x.\\sum_{n \\geq 0,m \\geq 1}d_{n,m}.x^n.y^m = x.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} - \\sum_{n \\geq 0,m = 0}d_{n,0}.x^{n} \\right)$$\n\n$$x.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} - \\sum_{n \\geq 0,m = 0}d_{n,0}.x^{n} \\right) = x.\\left(D(x,y)-\\frac{1}{1-x} \\right) $$ \n\\\\ \\\\\nSubstituting the above value in the equation (18.83),then\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.D(x,y) + x.\\left(D(x,y)-\\frac{1}{1-x} \\right) +\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}$$\n\nAfter rearranging the terms,\n\n\\begin{equation}\nD(x,y) = 1 + \\frac{y}{1-y} + x.y.D(x,y) + 
x.D(x,y) + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}\n\\end{equation}\n\nConsider the last term of the above equation,\n\n$$\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m} = y.\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m-1} $$\n\nLet $p=(m-1)$, then \n\n$$y.\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m-1} = y.\\sum_{n \\geq 1,p \\geq 0}d_{n,p}.x^{n}.y^{p}$$\n\nAfter renaming the variables,\n\n$$y.\\sum_{n \\geq 1,m \\geq 0}d_{n,m}.x^{n}.y^{m} = y.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^n.y^m - \\sum_{n=0,m \\geq 0}d_{0,m}.y^{m} \\right) $$\n\n$$y.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^n.y^m - \\sum_{n=0,m \\geq 0}d_{0,m}.y^{m} \\right) = y.\\left(D(x,y) - \\frac{1}{1-y} \\right) $$\n\nSubstitute the above value in the equation (18.84),\n\n$$D(x,y) = 1 + \\frac{y}{1-y} + x.y.D(x,y) + x.D(x,y) + y.\\left(D(x,y) - \\frac{1}{1-y} \\right)$$\n\n$$D(x,y) = 1 + x.y.D(x,y) + x.D(x,y) + y.D(x,y) $$\n\nAfter rearranging the terms,\n\n$$D(x,y) = \\frac{1}{1-x-y-xy} $$\n\n$$D(x,y) = \\left(\\frac{1}{1-y}\\right).\\left(\\frac{1}{1-\\left(\\frac{1+y}{1-y}\\right).x}\\right) $$\n\nWe know that,\n$$\\frac{1}{1-a.x} = \\sum_{n \\geq 0}a^n.x^n $$\n\n$$D(x,y) = \\left(\\frac{1}{1-y}\\right).\\left(\\sum_{n \\geq 0}{\\left(\\frac{1+y}{1-y} \\right)}^n.x^n \\right) $$\n\nThe generating function is,\n\\begin{equation}\n\\boxed{D(x,y) = \\left(\\sum_{n \\geq 0}{\\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}.x^n \\right)}\n\\end{equation}\n\nThe required number $d_{n,m}$ is,\\\\\n\n$$d_{n,m}~=~Coefficient~of~x^n.y^m~in~D(x,y)$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~{\\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~ {(1+y)}^n.\\left(\\frac{1}{1-y}.\\frac{1}{1-y}. \\dots (n+1) times\\right)$$\n\nWe know that,\n\n$$\\frac{1}{1-y} = 1+y+y^2+ \\dots$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~{(1+y)}^n.\\left((1+y+y^2+\\dots).(1+y+y^2+\\dots). \\dots (n+1) times\\right)$$\n\nSay that for some $k \\geq 0$, the factor $y^k$ (with its coefficient) is taken from ${(1+y)^n}$ and the remaining factor $y^{m-k}$ (with its coefficient) from the $(n+1)$-term product.\\\\\n\nThe coefficient of $y^k$ in ${(1+y)^n}$ is ${n \\choose k}$.\\\\\n\nLet $c_1,c_2,\\dots,c_{n+1}$ be the degrees of $y$ from the $(n+1)$-term product.\\\\\n\nFinding the coefficient of $y^{m-k}$ from the $(n+1)$-term product is equivalent to counting the number of ways of picking the $c_i$'s such that $c_1+c_2+\\dots+c_{n+1}=m-k$.\\\\\n\nThe number of such pickings = ${{n+1+m-k-1} \\choose {m-k}}$ = ${{n+m-k} \\choose {m-k}}$ = ${{n+m-k} \\choose {n}}$.\\\\\n\nTherefore, the required number $d_{n,m}$ is,\n\n$$d_{n,m} = \\sum_{k \\geq 0}{n \\choose k}.{{n+m-k} \\choose n}$$\n\nHence,\n\\begin{equation}\n\\boxed{Delannoy~Number~(D) = \\sum_{k \\geq 0}{n \\choose k}.{{n+m-k} \\choose n}}\n\\end{equation}\n\n\\Lecture{Jayalal Sarma}{Oct 21, 2020}{19}{Generating Functions (continued)}{Pragnya}{$\\alpha$}{JS}\n\n\\section{Introduction}\nIn this section we'll see some examples of ordinary generating functions and get introduced to exponential generating functions.\n\n\\subsection{Example 3 : }\nTo show that two combinatorial quantities are equal, it suffices to show that they have the same generating functions. Consider the following,\n$$ B_n(m) = \\{(x_1, x_2, \\dots x_n) ~|~ \\forall i ~ x_i \\in \\Z , \\sum |x_i| \\leq m \\}$$\nlet $b_{n,m} = |B_n(m)|$. 
Let's see properties of  $b_{n,m}$ :\n\\begin{enumerate}\n    \\item $b_{n, m} = \\sum_{k=0}^n {n \\choose k}{m \\choose k} 2^k$.\n    \\item $b_{n, m} = b_{m, n}$. This can also be proved using bijection.\n    \\item $b_{n, m} = d_{m, n}$. \n\\end{enumerate}\nWe'll prove property $3$ by showing they have same generating functions.\n\\begin{align*}\n    B_{x,y} &= \\sum_{n,m \\geq 0} b_{n, m} x^n y^m\\\\\n    &= \\sum_{n,m \\geq 0} (\\sum_{k=0}^n {n \\choose k}{m \\choose k} 2^k) x^n y^m \\\\\n    &= \\sum_{n,m,k \\geq 0}{n \\choose k}{m \\choose k} 2^k x^n y^m \\\\\n    &= \\sum_{k \\geq 0} 2^k \\sum_{n,m \\geq 0} {n \\choose k}{m \\choose k} x^n y^m \\\\\n    &= \\sum_{k \\geq 0} 2^k (\\sum_{n \\geq 0} {n \\choose k} x^n)(\\sum_{m \\geq 0} {m \\choose k} y^m) \\\\\nB_{x, y}&= \\sum_{k \\geq 0} 2^k (x^k \\sum_{n \\geq 0} {n \\choose k} x^{n-k})(y^k \\sum_{m \\geq 0} {m \\choose k} y^{m-k})\n\\end{align*}\nConsider $\\frac{1}{(1-x)^{k+1}}$ : \n$$\\frac{1}{(1-x)^{k+1}} = \\frac{1}{(1-x)}.\\frac{1}{(1-x)}. \\dots \\frac{1}{(1-x)} ~(k+1 ~times)$$\nCoefficient of $x^{n-k}$ in $\\frac{1}{(1-x)^{k+1}}$ is equivalent to no.of solutions of $a_1 + a_2 + \\dots + a_{k+1} = n-k$ which is $ = {(n-k) + (k+1) -1 \\choose n-k} = {n \\choose k}$.\nHence\n$$ \\sum_{n \\geq 0} {n \\choose k} x^{n-k} = \\frac{1}{(1-x)^{k+1}} $$\nSimilarly \n$$ \\sum_{m \\geq 0} {m \\choose k} y^{m-k} = \\frac{1}{(1-y)^{k+1}} $$\nSubstituting them in the above derived $B_{x, y}$ - \n\\begin{align*}\n    B_{x, y} &= \\sum_{k \\geq 0} 2^k x^k y^k \\frac{1}{(1-x)^{k+1}} \\frac{1}{(1-y)^{k+1}} \\\\\n    &= \\sum_{k \\geq 0} (2xy)^k \\frac{1}{(1-x)^{k+1}} \\frac{1}{(1-y)^{k+1}} \\\\\n    &= \\frac{1}{(1-x)(1-y)} \\sum_{k \\geq 0} \\frac{(2xy)^k}{(1-x)^k(1-y)^k} \\\\\n    &= \\frac{1}{(1-x)(1-y)} \\sum_{k \\geq 0} (\\frac{(2xy)}{(1-x)(1-y)}) ^k \\\\\n    &= \\frac{1}{(1-x)(1-y)} \\frac{1}{1- \\frac{2xy}{(1-x)(1-y)}} \\\\\n    &= \\frac{1}{(1-x)(1-y) - 2xy} \\\\\n    B(x, y) &=  \\frac{1}{1 - x - y - xy} = D(x, y)\n\\end{align*}\nSince $b_{n, m}$ and $d_{n, m}$ have same generating functions, $b_{n, m} = d_{n, m}$. Hence $b_{n, m}$ also satisfies recurrence relation of $d_{n, m}$ - \n$$ b_{n, m} = b_{n-1, m} + b_{n, m-1} + b_{n-1, m-1}$$\n\n\n\\subsection{Example 4 : Stirling number of second kind} \nAs discussed in previous lectures, number of ways to partition set $\\{1, 2, 3 \\dots n \\}$ into $k$ non-empty parts is called stirling number of second kind. Let's represent by $S_{n,k}$. It's recurrence relation is given by -  \n$$S_{n, k} = S_{n-1, k-1} + k S_{n-1, k} $$\nLHS : number of ways to partition set $\\{1, 2, 3 \\dots n \\}$ into $k$ non-empty parts $ = S_{n, k}$ \\\\\nRHS : \\begin{enumerate}\n\\item If element $1$ occurs in a singleton set. No.of ways to partition remaining $n-1$ elements to $k-1$ sets $= S_{n-1, k-1}$.\n\\item If element $1$ doesn't occur in a singleton set. 
Then we can partition the remaining $n-1$ elements into $k$ sets and add element $1$ to one of these $k$ sets $=  k S_{n-1, k}$\n\\end{enumerate}\nWe can also see that $S_{0,0} = 1 ,~ S_{n, 0} = 0 ,~ S_{0, k} = 0$ (for $n, k \\geq 1$).\n\\begin{align*}\n S(x, y) &= {\\sum_{n, k \\geq 0} ^ {\\infty} S_{n,k} x^n y^k} \\\\\n&= S_{0,0}~x^0y^0 + {\\sum_{n = 0, k \\geq 1}^ {\\infty} S_{0,k}~x^0y^k} + {\\sum_{n \\geq 1, k = 0}^ {\\infty} S_{n,0}~x^n y^0} + {\\sum_{n \\geq 1, k \\geq 1}^ {\\infty} S_{n,k}~x^n y^k}  \\\\\n &= 1 + \\sum_{n \\geq 1 , k \\geq 1} ^ {\\infty} S_{n,k} ~ x^n y^k  \\\\\n &= 1 + \\sum_{n \\geq 1 , k \\geq 1} S_{n-1, k-1}~ x^n y^k + \\sum_{n \\geq 1, k \\geq 1} k S_{n-1, k}~ x^n y^k \\\\\n &= 1 + xy \\sum_{n \\geq 1, k \\geq 1} S_{n-1, k-1} x^{n-1} y^{k-1} + x\\sum_{n \\geq 1, k \\geq 1} k S_{n-1, k}~ x^{n-1} y^k \\\\\n &= 1 + xy~ S(x, y) + x \\sum_{n \\geq 0, k \\geq 1} k S_{n, k}~ x^{n} y^k \\\\\n &= 1 + xy~ S(x, y) + xy \\frac{\\partial}{\\partial y} S(x,y)\n\\end{align*}\nNote : $\\frac{\\partial}{\\partial y} S(x, y) = \\sum_{n \\geq 0, k \\geq 1} k S_{n,k}~x^n y^{k-1}$, so $xy \\frac{\\partial}{\\partial y} S(x, y) = x \\sum_{n \\geq 0, k \\geq 1} k S_{n,k}~x^n y^{k}$ \n\n\nConsider $y^k$ coefficients on both sides (for $k \\geq 1$) : \n\\begin{equation}\n  \\begin{split}\n    LHS &= \\sum_{n \\geq 0} S_{n, k} x^n \\\\\n    RHS &= x\\sum_{n \\geq 0} S_{n, k-1} x^n + xk \\sum_{n \\geq 0} S_{n, k}x^n\n\\end{split}  \n\\end{equation}\nEquating LHS and RHS :\n\\begin{align*}\n\\sum_{n \\geq 0} S_{n, k}~ x^n &= x\\sum_{n \\geq 0} S_{n, k-1} ~x^n + xk \\sum_{n \\geq 0} S_{n, k}~x^n \\\\\n\\sum_{n \\geq 0} S_{n, k} ~x^n &= \\frac{x}{1-xk}  \\sum_{n \\geq 0} S_{n, k-1}~ x^n   \\\\\n&= \\frac{x}{1-xk} \\frac{x}{1-x(k-1)} \\sum_{n \\geq 0} S_{n, k-2}~ x^n \\\\\n&= \\frac{x}{1-xk} \\frac{x}{1-x(k-1)} \\dots \\frac{x}{1-x(k-(k-1))}\n\\sum_{n \\geq 0} S_{n, 0}~ x^n \\\\\n&= \\frac{x^k}{(1-x)(1-2x)\\dots (1-kx)} \\times 1 ~( Note : S_{0,0} = 1, S_{n, 0} = 0) \\\\\n\\sum_{n \\geq 0} S_{n, k}~ x^n &= x^k \\times (\\frac{A_1}{1-x} + \\frac{A_2}{1-2x} + \\dots +  \\frac{A_k}{1-kx})\n\\end{align*}\nSolving for $A_1, A_2, \\dots A_k $ we'll get $A_r = (-1)^{k-r} \\frac{r^{k-1}}{(r-1)! (k-r)!}$. \\\\\n$S_{n,k}$ is the coefficient of $x^n$ in the RHS, i.e., \n\\begin{align*}\nS_{n,k} &= coeff~ of ~x^n ~in~ x^k \\times (\\frac{A_1}{1-x} + \\frac{A_2}{1-2x} + \\dots +  \\frac{A_k}{1-kx}) \\\\\n&= coeff ~ of ~ x^{n-k} ~ in ~ \\sum_{r=1}^k \\frac{A_r}{1-rx}\n\\end{align*}\nThe coefficient of $x^{p}$ in $\\frac{1}{1-rx}$ is $r^p$. Hence,\n\\begin{align*}\nS_{n,k} &= \\sum_{r=1}^k A_r r^{n-k} \\\\\n&= \\sum_{r=1}^k (-1)^{k-r} \\frac{r^{k-1}}{(r-1)! (k-r)!} ~ r^{n-k}\\\\\nS_{n,k} &= \\sum_{r=1}^k (-1)^{k-r} \\frac{r^{n-1}}{(r-1)! (k-r)!} = \\sum_{r=1}^k (-1)^{k-r} \\frac{r^{n}}{r! \\, (k-r)!}\n\\end{align*}\nThe above expression is the Stirling number of the second kind. A quick numerical check of this closed form is given below.\n
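The following minimal Python sketch (ours, not part of the lecture) compares the closed form against the recurrence $S_{n,k} = S_{n-1,k-1} + k\\,S_{n-1,k}$:
\\begin{verbatim}
from fractions import Fraction
from math import factorial

def stirling2_rec(n, k):
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2_rec(n - 1, k - 1) + k * stirling2_rec(n - 1, k)

def stirling2_closed(n, k):
    # sum_{r=1}^{k} (-1)^(k-r) r^(n-1) / ((r-1)! (k-r)!)
    return sum(Fraction((-1) ** (k - r) * r ** (n - 1),
                        factorial(r - 1) * factorial(k - r))
               for r in range(1, k + 1))

assert all(stirling2_rec(n, k) == stirling2_closed(n, k)
           for n in range(1, 9) for k in range(1, n + 1))
\\end{verbatim}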
Other such meaningful bases are $\\{\\frac{x^n}{n!}\\}_{n \\in \\N}$, $\\{e^{-x} \\frac{x^n}{n!}\\}_{n \\in \\N}$ and $\\{\\frac{1}{n^x}\\}_{n \\in \\N}$. In this lecture we'll explore exponential generating functions, which use the basis $\\{\\frac{x^n}{n!}\\}_{n \\in \\N}$. \\\\\nSo ${(a_n)}_{n \\geq 0}$ is associated with $E(x) = \\sum_{n \\geq 0} a_n \\frac{x^n}{n!}$. Let's see a few examples - \n\\begin{align*}\n(1, 1, 1, \\dots ) &\\xrightarrow[generating function]{exponential} \\sum_{n \\geq 0} \\frac{x^n}{n!} = e^x    \\\\\n&\\xrightarrow[generating function]{ordinary} \\sum_{n \\geq 0} x^n = \\frac{1}{1-x} \\\\\n(0!, 1!, 2!, \\dots) &\\xrightarrow[generating function]{exponential} \\sum_{n \\geq 0} n! \\frac{x^n}{n!} = \\sum_{n \\geq 0} x^n = \\frac{1}{1-x}\n\\end{align*}\n$\\frac{1}{1-x}$ is the ordinary generating function (ogf) of $(1, 1, 1, \\dots )$ and the exponential generating function (egf) of $(0!, 1!, 2!, \\dots )$.\\\\\n\\\\\n\\textbf{\\Large {Operations on EGFs}}\n\\begin{enumerate}\n\\item{\\textbf{Addition :} } It works just like for ogfs.\n    \\begin{align*}\n        \\{a_n\\}_{n \\geq 0} &\\xrightarrow[]{egf} E(x) \\\\\n        \\{b_n\\}_{n \\geq 0} &\\xrightarrow[]{egf} F(x) \\\\\n        \\{a_n + b_n \\}_{n \\geq 0} &\\xrightarrow[]{egf} E(x) + F(x) \\\\\n    \\end{align*}\n\\item {\\textbf{Shifting :} } For ogfs, multiplying by $x$ shifts the sequence (prepending a $0$), as seen in earlier lectures. For egfs, prepending a $0$ is achieved by integration instead:  \n    \\begin{align*}\n        \\{a_0, a_1, a_2 \\dots \\} &\\xrightarrow[]{egf} E(x) \\\\\n        \\{0, a_0, a_1, a_2 \\dots \\} &\\xrightarrow[]{egf} \\int_0^x E(t)\\,dt \\\\\n    \\end{align*}\nDifferentiating the egf shifts the sequence the other way, dropping $a_0$:\n\\begin{gather*}\n    \\{a_0, a_1, a_2 \\dots \\} \\xrightarrow[]{egf} ~~ E(x) \\\\\n        \\{a_1, a_2, a_3 \\dots \\} \\xrightarrow[]{egf} \\frac{d}{dx} E(x) \\\\\n        \\frac{d}{dx} E(x) = \\sum_{n \\geq 1} a_n~ \\frac{n\\, x^{n-1}}{n!} = \\sum_{n \\geq 1} a_n~ \\frac{x^{n-1}}{(n-1)!} =  \\sum_{n \\geq 0} a_{n+1}~ \\frac{x^{n}}{n!}\n\\end{gather*}\n\\item {\\textbf{Multiplication :} }   EGFs are useful when the sequence counts labelled structures like permutations, derangements and partitions. Let $(a_n)_{n \\geq 0}$, $(b_n)_{n \\geq 0}$ count arrangements of type $A$ and type $B$ respectively using $n$ labelled objects. Suppose a type $C$ arrangement is obtained by a unique split of the $n$ objects into two sets, followed by arranging the first set according to type $A$ and the second set according to type $B$. Then the number of arrangements of type $C$ of size $n$ is $c_n = \\sum_{k=0}^n {n \\choose k} a_k b_{n-k}$.\\\\\n Now let's see how the multiplication of $A(x)$ (egf of $(a_n)$) and $B(x)$ (egf of $(b_n)$) is useful: \n \\begin{align*}\n     A(x).B(x) &= (\\sum_{n=0}^{\\infty} a_n \\frac{x^n}{n!})(\\sum_{n=0}^{\\infty} b_n \\frac{x^n}{n!}) \\\\\n     &= \\sum_{n=0}^{\\infty}(\\sum_{k=0}^{n} \\frac{a_k}{k!}.\\frac{b_{n-k}}{(n-k)!}) x^n \\\\\n     &= \\sum_{n=0}^{\\infty}(\\sum_{k=0}^{n} \\frac{n!}{k!(n-k)!} a_k b_{n-k}) \\frac{x^n}{n!} \\\\\n     &= \\sum_{n=0}^{\\infty}(\\sum_{k=0}^{n} {n \\choose k} a_k b_{n-k}) \\frac{x^n}{n!} \\\\\n     &= \\sum_{n=0}^{\\infty} c_n \\frac{x^n}{n!} \\\\\n     A(x).B(x) &= C(x)\n \\end{align*}\n \n Now let's see a few examples of egfs. \n \\subsection{Derangements } Recall that we've discussed derangements in PIE and recurrence relations. Now let's derive the count using egfs. Let $D_n$ denote the set of derangements of $n$ objects and let $d_n = |D_n|$. We can see that $d_0 = 1 ,~ d_1 = 0,~ d_2 = 1$. 
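(As a quick sanity check of the recurrence recalled next: $d_3 = 2(d_2 + d_1) = 2$, matching the two derangements $231$ and $312$ of $\\{1, 2, 3\\}$, and $d_4 = 3(d_3 + d_2) = 9$.) 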
Recall the recurrence relation : $$ d_{n+2} = (n+1)(d_{n+1} + d_n)$$\n \\begin{align*}\n     D(x) &= \\sum_{n=0}^{\\infty} d_n \\frac{x^n}{n!} \\\\\n     D'(x) &= \\sum_{n=0}^{\\infty} d_{n+1} \\frac{x^n}{n!} ~(by ~the ~shifting ~operation)\\\\\n     &= \\sum_{n=1}^{\\infty} n(d_n + d_{n-1}) \\frac{x^n}{n!} ~(using ~d_{n+1} = n(d_n + d_{n-1}) ~and ~d_1 = 0)\\\\\n     &= \\sum_{n=1}^{\\infty} nd_n \\frac{x^n}{n!} + \\sum_{n=1}^{\\infty} nd_{n-1} \\frac{x^n}{n!} \\\\\n     &= x \\sum_{n=1}^{\\infty} d_n \\frac{x^{n-1}}{(n-1)!} + x \\sum_{n=1}^{\\infty} d_{n-1} \\frac{x^{n-1}}{(n-1)!} \\\\\n     &= x \\sum_{n=0}^{\\infty} d_{n+1} \\frac{x^n}{n!} + x \\sum_{n=0}^{\\infty} d_n \\frac{x^n}{n!} \\\\\n     D'(x) &= xD'(x) + xD(x) \\\\\n     (1-x)D'(x) &= xD(x) \\\\\n     \\frac{D'(x)}{D(x)} &= \\frac{x}{1-x} = \\frac{1}{1-x} - 1\n \\end{align*}\nIntegrating both sides,\n     $$\\ln{D(x)} = -\\ln{(1-x)} - x + c$$\nSince $D(0) = d_0 = 1 \\implies c=0$.\n    $$\\ln{D(x)} = -\\ln{(1-x)} - x$$\n    $$D(x) = \\frac{e^{-x}}{1-x} = \\sum_{n=0}^{\\infty} d_n \\frac{x^n}{n!}$$\nTo get $d_n$ we need the coefficient of $\\frac{x^n}{n!}$ in $\\frac{e^{-x}}{1-x}$.\n$$e^{-x} \\frac{1}{1-x} = (\\sum_{n=0}^{\\infty} (-1)^n \\frac{x^n}{n!})(\\sum_{n=0}^{\\infty} n! \\frac{x^n}{n!})$$\nBy the multiplication property, the coefficient of $\\frac{x^n}{n!}$ is $\\sum_{k=0}^{n} {n \\choose k} a_k b_{n-k}$, here with $a_k = (-1)^k$ and $b_k = k!$:\n\\begin{align*}\n    d_n = \\sum_{k=0}^{n} {n \\choose k} (-1)^{n-k} k!  \n    = n! \\sum_{j=0}^{n} \\frac{(-1)^{j}}{j!} \\quad (using ~k!{n \\choose k} = \\tfrac{n!}{(n-k)!} ~and ~j = n-k)\n\\end{align*}\n$d_n$ is the number of derangements of $n$ objects.\n\n\\subsection{Bell Numbers}\nLet $S_{n,k}$ denote the number of ways of partitioning \\{$1, 2 ,3 \\dots n $\\} into $k$ non-empty blocks and let $B_n$ denote the total number of ways of partitioning \\{$1, 2 ,3 \\dots n $\\} ($B_0 = 1$). By definition, \n$$B_n = \\sum_{k=0}^n S_{n,k}$$\nEquivalent interpretation :  Consider a number whose prime factorization is square-free, i.e., $k \\in \\N$ such that $k = p_1p_2 \\dots p_n$ where $\\{p_1, p_2, \\dots p_n\\}$ are distinct primes. The number of ways of writing $k$ as a product of natural numbers $\\geq 2$ equals the number of ways of partitioning $\\{p_1, p_2, \\dots p_n\\}$, which is $B_n$. \\\\\nRecurrence Relation : \n $$ B_n = \\sum_{k=0}^{n-1} {{n-1} \\choose k} B_k $$ \nTo see this, classify the partitions of \\{$1, 2, 3 \\dots n$\\} by the block containing the element $1$: if that block has $k+1$ elements, its other $k$ elements can be chosen in ${{n-1} \\choose k}$ ways and the remaining $n-1-k$ elements can be partitioned in $B_{n-k-1}$ ways, so $B_n = \\sum_{k=0}^{n-1} {{n-1} \\choose k} B_{n-k-1} = \\sum_{k=0}^{n-1} {{n-1} \\choose k} B_{k}$, using ${{n-1} \\choose k} = {{n-1} \\choose n-1-k}$. \\\\\n Let's derive a closed-form expression for $B_n$ :\n \\begin{align*}\n     B(x) &= \\sum_{n=0}^{\\infty} B_n \\frac{x^n}{n!} \\\\\n     B'(x) &= \\sum_{n=0}^{\\infty} B_{n+1} \\frac{x^n}{n!} ~(by ~ the ~shifting ~rule) \\\\\n     &= \\sum_{n=0}^{\\infty} (\\sum_{k=0}^{n} {n \\choose k} B_k) \\frac{x^n}{n!} \\\\\n     &= \\sum_{n=0}^{\\infty} (\\sum_{k=0}^{n} {n \\choose k} B_k \\cdot 1) \\frac{x^n}{n!} \\\\\n     &= (\\sum_{n=0}^{\\infty} B_n \\frac{x^n}{n!})(\\sum_{n=0}^{\\infty} 1 \\cdot \\frac{x^n}{n!}) ~(by ~ the ~multiplication ~rule) \\\\\n     B'(x) &= B(x) \\cdot 
e^x \\\\\n     \\frac{B'(x)}{B(x)} &= e^x\n \\end{align*}\n Integrating both sides, \n $$\\ln{B(x)} = e^x + c$$\n Since $B(0) = B_0 = 1 \\implies c = -1$. Hence,\n \\begin{align*}\n     B(x) &= e^{e^x - 1} \\\\\n     &= \\frac{e^{e^x}}{e} \\\\\n     &= \\frac{1}{e} (\\sum_{k=0}^{\\infty} \\frac{(e^x)^k}{k!}) \\\\\n     &= \\frac{1}{e} (\\sum_{k=0}^{\\infty} \\frac{e^{kx}}{k!}) \\\\\n     &= \\frac{1}{e} (~\\sum_{k=0}^{\\infty} \\frac{1}{k!}~ (\\sum_{n=0}^{\\infty} \\frac{(kx)^n}{n!})~) \\\\\n     &= \\frac{1}{e} (~\\sum_{n=0}^{\\infty} \\frac{x^n}{n!}~ (\\sum_{k=0}^{\\infty} \\frac{k^n}{k!})~)\\\\\nB(x)&= \\sum_{n=0}^{\\infty} \\frac{1}{e} (\\sum_{k=0}^{\\infty} \\frac{k^n}{k!}) \\frac{x^n}{n!} \\\\\n    B_n &= \\frac{1}{e} (\\sum_{k=0}^{\\infty} \\frac{k^n}{k!})\n \\end{align*}\n We've derived a closed-form expression for the Bell numbers; the above expression for $B_n$ is known as Dobinski's formula.\n \n\\end{enumerate}\n", "meta": {"hexsha": "fa735328bb6768a7f916253fe3b686a168aed04d", "size": 34603, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week06.tex", "max_stars_repo_name": "pot8ohead/theory-toolkit", "max_stars_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week06.tex", "max_issues_repo_name": "pot8ohead/theory-toolkit", "max_issues_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-10-08T07:34:26.000Z", "max_issues_repo_issues_event_max_datetime": "2020-11-30T06:06:12.000Z", "max_forks_repo_path": "week06.tex", "max_forks_repo_name": "pot8ohead/theory-toolkit", "max_forks_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2020-09-25T01:35:07.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-28T11:22:06.000Z", "avg_line_length": 48.0597222222, "max_line_length": 651, "alphanum_fraction": 0.5823194521, "num_tokens": 15153, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.8175744673038222, "lm_q1q2_score": 0.5849123823277308}}
{"text": "\\chapter{Classic ciphers}\n\\label{chapter:classic}\n\nModern cryptography has come a long way. In his excellent book on\ncryptography, Singh traces it back to at least 5th century B.C., to\nthe times of Herodotus and the ancient Greeks~\\cite{Singh:1999:CBE}.\nThat's some 2500 years ago, and surely we do not use those methods\nanymore in modern day cryptography. However, the basic techniques are\nstill relevant for appreciating the art of secret writing.\n\nShift ciphers\\indShiftcipher construct the \\glosCiphertext\nciphertext\\indCiphertext from the \\glosPlaintext\nplaintext\\indPlaintext\\ by means of a predefined {\\em shifting}\noperation,\\glosCipherkey where the cipherkey of a particular shift\nalgorithm defines the shift amount of the cipher.\\indCipherkey\nTransposition ciphers work by keeping the plaintext the same, but {\\em\n  rearrange} the order of the characters according to a certain rule.\nThe cipherkey is essentially the description of how this transposition\nis done.\\indTranspositioncipher Substitution\nciphers\\indSubstitutioncipher generalize shifts and transpositions,\nallowing one to substitute arbitrary codes for plaintext elements.  In\nthis chapter, we will study several examples of these techniques and\nsee how we can code them in Cryptol.\n\nIn general, ciphers boil down to pairs of functions \\emph{encrypt} and\n\\emph{decrypt} which ``fit together'' in the appropriate way.  Arguing\nthat a cryptographic function is \\emph{correct} is subtle.\n\nCorrectness of cryptography is determined by cryptanalyses by expert\ncryptographers.  Each kind of cryptographic primitive (i.e., a hash, a\nsymmetric cipher, an asymmetric cipher, etc.) has a set of expected\nproperties, many of which can only be discovered and proven by hand\nthrough a lot of hard work.  Thus, to check the correctness of a\ncryptographic function, a best practice for Cryptol use is to encode\nas many of these properties as one can in Cryptol itself and use\nCryptol's validation and verification capabilities, discussed\nlater in~\\autoref{cha:high-assur-progr}.  For example, the fundamental\nproperty of most ciphers is that encryption and decryption are\ninverses of each other.\n\nTo check the correctness of an \\emph{implementation} $I$ of a\ncryptographic function $C$ means that one must show that the\nimplementation $I$ behaves as the specification ($C$) stipulates.  In\nthe context of cryptography, the minimal conformance necesssary is\nthat $I$'s output \\emph{exactly} conforms to the output characterized\nby $C$.  But just because a cryptographic implementation is\n\\emph{functionally correct} does not mean it is \\emph{secure}.  The\nsubtleties of an implementation can leak all kinds of information that\nharm the security of the cryptography, including abstraction leaking\nof sensitive values, timing attacks, side-channel attacks, etc.  These\nkinds of properties cannot currently be expressed or reasoned about in\nCryptol.\n\nAlso, Cryptol does \\emph{not} give the user any feedback on the\n\\emph{strength} of a given (cryptographic) algorithm.  While this is\nan interesting and useful feature, it is not part of Cryptol's current\ncapabilities.\n\n%=====================================================================\n\\section{Caesar's cipher}\n\\label{sec:caesar}\n\\sectionWithAnswers{Caesar's cipher}{sec:caesar}\n\nCaesar's cipher (a.k.a. Caesar's shift) is one of the simplest\nciphers.  
The letters in the plaintext\\indPlaintext are shifted by a\nfixed number of elements down the alphabet.\\indCaesarscipher For\ninstance, if the shift is 2, {\\tt A} becomes {\\tt C}, {\\tt B} becomes\n{\\tt D}, and so on. Once we run out of letters, we circle back to {\\tt\n  A}; so {\\tt Y} becomes {\\tt A}, and {\\tt Z} becomes {\\tt B}.  Coding\nCaesar's cipher in Cryptol is quite straightforward (recall from\nSection~\\ref{sec:tsyn} that a {\\tt String n} is simply a sequence of n\n8-bit words.):\\indTSString\n\\begin{code}\n  caesar : {n} ([8], String n) -> String n\n  caesar (s, msg) = [ shift x | x <- msg ]\n        where map     = ['A' .. 'Z'] <<< s\n              shift c = map @ (c - 'A')\n\\end{code}\nIn this definition, we simply get a message {\\tt msg} of type {\\tt\n  String n}, and perform a {\\tt shift} operation on each one of the\nelements.  The {\\tt shift} function is defined locally in the {\\tt\n  where}-clause.\\indWhere To compute the shift, we first find the\ndistance of the letter from the character {\\tt 'A'} (via {\\tt c -\n  'A'}), and look it up in the mapping imposed by the shift. The {\\tt\n  map} is simply the alphabet rotated to the left by the shift amount,\n{\\tt s}. Note how we use the enumeration {\\tt ['A' .. 'Z']} to get all\nthe letters in the alphabet.\\indEnum\n\n\\begin{Exercise}\\label{ex:caesar:0}\n  What is the map corresponding to a shift of 2? Use Cryptol's\n  \\verb+<<<+\\indRotLeft to compute it.  You can use the command {\\tt\n    :set ascii=on}\\indSettingASCII to print strings in ASCII, like\n  this:\n\\begin{Verbatim}\n  Cryptol> :set ascii=on\n  Cryptol> \"Hello World\"\n  \"Hello World\"\n\\end{Verbatim}\nWhy do we use a left-rotate, instead of a right-rotate?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:0}\nHere is the alphabet and the corresponding shift-2 Caesar's alphabet:\n\\begin{verbatim}\n  Cryptol> ['A'..'Z'] \n  \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  Cryptol> ['A'..'Z'] <<< 2\n  \"CDEFGHIJKLMNOPQRSTUVWXYZAB\"\n\\end{verbatim}\nWe use a left rotate to get the characters lined up correctly, as\nillustrated above.  \\indRotLeft\\indRotRight\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:caesar:1}\n  Use the above definition to encrypt the message {\\tt \"ATTACKATDAWN\"}\n  by shifts 0, 3, 12, and 52. What happens when the shift is a\n  multiple of 26? Why?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:1}\nHere are Cryptol's responses:\n\\begin{Verbatim}\n  Cryptol> caesar (0, \"ATTACKATDAWN\")\n  \"ATTACKATDAWN\"\n  Cryptol> caesar (3, \"ATTACKATDAWN\")\n  \"DWWDFNDWGDZQ\"\n  Cryptol> caesar (12, \"ATTACKATDAWN\")\n  \"MFFMOWMFPMIZ\"\n  Cryptol> caesar (52, \"ATTACKATDAWN\")\n  \"ATTACKATDAWN\"\n\\end{Verbatim}\nIf the shift is a multiple of 26 (as in 0 and 52 above), the letters\nwill cycle back to their original values, so encryption will leave the\nmessage unchanged. Users of the Caesar's cipher should be careful\nabout picking the shift amount!\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:caesar:2}\n  Write a function {\\tt dCaesar} which will decrypt a ciphertext\n  constructed by a Caesar's cipher. It should have the same signature\n  as {\\tt caesar}.  Try it on the examples from the previous exercise.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:2}\n  The code is almost identical, except we need to use a right\n  rotate:\\indRotRight\n\n\\begin{code}\n  dCaesar : {n} ([8], String n) -> String n\n  dCaesar (s, msg) = [ shift x | x <- msg ]\n        where map     = ['A' .. 
'Z'] >>> s\n              shift c = map @ (c - 'A')\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> caesar (12, \"ATTACKATDAWN\")\n  \"MFFMOWMFPMIZ\"\n  Cryptol> dCaesar (12, \"MFFMOWMFPMIZ\")\n  \"ATTACKATDAWN\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:caesar:3}\n  Observe that the shift amount in a Caesar cipher is very limited:\n  Any shift of {\\tt d} is equivalent to a shift by {\\tt d \\% 26}. (For\n  instance shifting by 12 and 38 is the same thing, due to wrap around\n  at 26.) Based on this observation, how strong do you think the\n  Caesar's cipher is? Describe a simple attack that will recover the\n  plaintext and automate it using Cryptol.  Use your function to crack\n  the ciphertext {\\tt JHLZHYJPWOLYPZDLHR}.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:3}\n  For the Caesar's cipher, the only good shifts are $1$ through $25$,\n  since shifting by $0$ would return the plaintext unchanged, and any\n  shift amount {\\tt d} of $26$ or more is essentially\n  the same as shifting by {\\tt d \\% 26} due to wrap around. Therefore,\n  all it takes to break the Caesar cipher is to try the shifts $1$\n  through $25$, and see if we have a valid message. We can automate this\n  in Cryptol by returning all possible plaintexts using these shift\n  amounts:\n\\begin{code}\n  attackCaesar : {n} (String n) -> [25](String n)\n  attackCaesar msg = [ dCaesar(i, msg) | i <- [1 .. 25] ]\n\\end{code}\nIf we apply this function to {\\tt JHLZHYJPWOLYPZDLHR}, we get:\n\\begin{Verbatim}\n  Cryptol> :set ascii=on\n  Cryptol> attackCaesar \"JHLZHYJPWOLYPZDLHR\"\n  [\"IGKYGXIOVNKXOYCKGQ\", \"HFJXFWHNUMJWNXBJFP\", \"GEIWEVGMTLIVMWAIEO\",\n   \"FDHVDUFLSKHULVZHDN\", \"ECGUCTEKRJGTKUYGCM\", \"DBFTBSDJQIFSJTXFBL\",\n   \"CAESARCIPHERISWEAK\", \"BZDRZQBHOGDQHRVDZJ\", \"AYCQYPAGNFCPGQUCYI\",\n   \"ZXBPXOZFMEBOFPTBXH\", \"YWAOWNYELDANEOSAWG\", \"XVZNVMXDKCZMDNRZVF\",\n   \"WUYMULWCJBYLCMQYUE\", \"VTXLTKVBIAXKBLPXTD\", \"USWKSJUAHZWJAKOWSC\",\n   \"TRVJRITZGYVIZJNVRB\", \"SQUIQHSYFXUHYIMUQA\", \"RPTHPGRXEWTGXHLTPZ\",\n   \"QOSGOFQWDVSFWGKSOY\", \"PNRFNEPVCUREVFJRNX\", \"OMQEMDOUBTQDUEIQMW\",\n   \"NLPDLCNTASPCTDHPLV\", \"MKOCKBMSZROBSCGOKU\", \"LJNBJALRYQNARBFNJT\",\n   \"KIMAIZKQXPMZQAEMIS\"]\n\\end{Verbatim}\nIf you skim through the potential plaintexts, you will see that the\n$7^{th}$ entry is probably the one we are looking for. Hence the key\nmust be $7$.  Indeed, the message is {\\tt CAESARCIPHERISWEAK}.\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:caesar:4}\n  One classic trick to strengthen ciphers is to use multiple keys. By\n  repeatedly encrypting the plaintext multiple times we can hope that\n  it will be more resistant to attacks. Do you think this scheme might\n  make the Caesar cipher stronger?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:4}\n  No. Using two shifts $d_1$ and $d_2$ is essentially the same as\n  using just one shift with the amount $d_1 + d_2$. Our attack\n  function would work just fine on this scheme as well. In fact, we\n  wouldn't even have to know how many rounds of encryption were\n  applied. Multiple rounds are just as weak as a single round when it\n  comes to breaking the Caesar's cipher.  \\end{Answer}\n\n\\begin{Exercise}\\label{ex:caesar:5}\n  What happens if you pass {\\tt caesar} a plaintext that has\n  non-uppercase letters in it? 
(Let's say a digit.) How can you fix\n  this deficiency?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:caesar:5}\nIn this case we will fail to find a mapping:\n\\begin{Verbatim}\n  Cryptol> caesar (3, \"12\")\n  ... index of 240 is out of bounds\n  (valid range is 0 thru 25).\n\\end{Verbatim}\nWhat happened here is that Cryptol computed the offset {\\tt '1' - 'A'}\nto obtain the $8$-bit index $240$ (remember, modular arithmetic!), but\nour alphabet only has $26$ entries, causing the out-of-bounds error.\nWe can simply remedy this problem by allowing our alphabet\nto contain all $8$-bit numbers:\\indRotLeft\n\\begin{code}\n  caesar' : {n} ([8], String n) -> String n\n  caesar' (s, msg) = [ shift x | x <- msg ]\n    where map     = [0 .. 255] <<< s\n          shift c = map @ c\n\\end{code}\nNote that we no longer have to subtract {\\tt 'A'}, since we are\nallowing a much wider range for our plaintext and ciphertext. (Another\nway to put this is that we are subtracting the value of the first\nelement in the alphabet, which happens to be 0 in this case!\nConsequently, the number of ``good'' shifts increases from $25$ to\n$255$.)  The change in {\\tt dCaesar'} is analogous:\\indRotRight\n\\begin{code}\n  dCaesar' : {n} ([8], String n) -> String n\n  dCaesar' (s, msg) = [ shift x | x <- msg ]\n    where  map     = [0 .. 255] >>> s\n           shift c = map @ c\n\\end{code}\n\\end{Answer}\n\n%=====================================================================\n\\section{\\texorpdfstring{Vigen\\`{e}re}{Vigenere} cipher}\n\\label{sec:vigenere}\n\\sectionWithAnswers{\\texorpdfstring{Vigen\\`{e}re}{Vigenere} cipher}{sec:vigenere}\n\nThe Vigen\\`{e}re cipher is a variation on the Caesar's cipher, where\none uses multiple shift amounts according to a\nkeyword~\\cite{wiki:vigenere}.\\indVigenere Despite its simplicity, it\nearned the notorious description {\\em le chiffre ind\\'{e}chiffrable}\n(``the indecipherable cipher'' in French), as it was unbroken for a\nlong period of time. It was very popular from the 16th century\nonwards, only becoming routinely breakable by the mid-19th century or so.\n\nTo illustrate the operation of the Vigen\\`{e}re cipher, let us\nconsider the plaintext {\\tt ATTACKATDAWN}. The cryptographer picks a\nkey, let's say {\\tt CRYPTOL}. We line up the plaintext and the key,\nrepeating the key as much as necessary, as in the top two lines of\nthe following:\n\\begin{tabbing}\n\\hspace*{2cm} \\= Ciphertext: \\hspace*{.5cm} \\= {\\tt CKRPVYLVUYLG} \\kill \n\\> Plaintext : \\> {\\tt ATTACKATDAWN} \\\\\n\\> Cipherkey : \\> {\\tt CRYPTOLCRYPT} \\\\\n\\> Ciphertext: \\> {\\tt CKRPVYLVUYLG}\n\\end{tabbing}\nWe then proceed pair by pair, shifting the plaintext character by the\ndistance implied by the corresponding key character.  The first pair\nis {\\tt A}-{\\tt C}.  Since {\\tt C} is two positions away from {\\tt A}\nin the alphabet, we shift {\\tt A} by two positions, again obtaining\n{\\tt C}.  The second pair {\\tt T}-{\\tt R} proceeds similarly: Since\n{\\tt R} is 17 positions away from {\\tt A}, we shift {\\tt T} down 17\npositions, wrapping around {\\tt Z}, obtaining {\\tt K}.  Proceeding in\nthis fashion, we get the ciphertext {\\tt CKRPVYLVUYLG}. 
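As a quick cross-check, the second pair is a single application of the {\\tt caesar} function defined earlier, with shift $17$ (the distance from {\\tt A} to {\\tt R}):\n\\begin{Verbatim}\n  Cryptol> caesar (17, \"T\")\n  \"K\"\n\\end{Verbatim}\n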
Note how each\nstep of the process is a simple application of the Caesar's\ncipher.\\indCaesarscipher\n\n\\begin{Exercise}\\label{ex:vigenere:0}\n  One component of the Vigen\\`{e}re cipher is the construction of the\n  repeated key.  Write a function {\\tt cycle} with the following\n  signature:\n\\begin{code}\n  cycle : {n, a} (fin n, n >= 1) => [n]a -> [inf]a\n\\end{code}\nsuch that it returns the input sequence appended to itself repeatedly,\nturning it into an infinite sequence. Why do we need the predicate\n{\\tt n >= 1}?\\indPredicates\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:vigenere:0}\nHere is one way to define {\\tt cycle}, using a recursive definition:\n\\begin{code}\n  cycle xs = xss\n        where xss = xs # xss\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> cycle [1 .. 3]\n  [1, 2, 3, 1, 2, ...]\n\\end{Verbatim}\nIf we do not have the {\\tt n >= 1} predicate, then we can pass {\\tt\n  cycle} the empty sequence, which would cause an infinite loop\nemitting nothing.  The predicate {\\tt n >= 1} makes sure the input is\nnon-empty, guaranteeing that {\\tt cycle} can produce the infinite\nsequence.\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:vigenere:1}\n  Program the Vigen\\`{e}re cipher in Cryptol. It should have the\n  signature:\n\\begin{code}\n  vigenere : {n, m} (fin n, n >= 1) => (String n, String m) -> String m\n\\end{code}\nwhere the first argument is the key and the second is the\nplaintext. Note how the signature ensures that the input string and\nthe output string will have precisely the same number of characters,\n{\\tt m}. \\lhint{Use Caesar's cipher repeatedly.}\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:vigenere:1}\n\\begin{code}\n  vigenere (key, pt) = join [ caesar (k - 'A', [c]) \n                              | c <- pt\n                              | k <- cycle key\n                            ]\n\\end{code}\nNote the shift is determined by the distance from the letter {\\tt 'A'}\nfor each character. Here is the cipher in action:\n\\begin{Verbatim}\n  Cryptol> vigenere (\"CRYPTOL\", \"ATTACKATDAWN\")\n  \"CKRPVYLVUYLG\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:vigenere:2}\n  Write the decryption routine for Vigen\\`{e}re. Then decode \\\\\n  {\\tt \"XZETGSCGTYCMGEQGAGRDEQC\"} with the key {\\tt \"CRYPTOL\"}.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:vigenere:2}\nFollowing the lead of the encryption, we can rely on {\\tt dCaesar}:\n\\begin{code}\n  dVigenere : {n, m} (fin n, n >= 1) => \n              (String n, String m) -> String m\n  dVigenere (key, pt) = join [ dCaesar (k - 'A', [c]) \n                               | c <- pt\n                               | k <- cycle key\n                             ]\n\\end{code}\nThe secret code is:\n\\begin{Verbatim}\n  Cryptol> dVigenere (\"CRYPTOL\", \"XZETGSCGTYCMGEQGAGRDEQC\")\n  \"VIGENERECANTSTOPCRYPTOL\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:vigenere:3}\n  A known-plaintext attack\\indKnownPTAttack is one where an attacker\n  obtains a plaintext-ciphertext pair, without the key. If the\n  attacker can figure out the key based on this pair then he can break\n  all subsequent communication until the key is replaced. Describe how\n  one can break the Vigen\\`{e}re cipher if a plaintext-ciphertext pair\n  is known.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:vigenere:3}\n  All it takes is to decrypt using the plaintext as the key and\n  the ciphertext as the message. Here is this process in action. 
Recall\n  from the previous exercise that encrypting {\\tt ATTACKATDAWN} by the\n  key {\\tt CRYPTOL} yields {\\tt CKRPVYLVUYLG}. Now, if an attacker\n  knows that {\\tt ATTACKATDAWN} and {\\tt CKRPVYLVUYLG} form a pair,\n  he/she can find the key simply by:\\indVigenere\n\\begin{Verbatim}\n  Cryptol> dVigenere (\"ATTACKATDAWN\", \"CKRPVYLVUYLG\")\n  \"CRYPTOLCRYPT\"\n\\end{Verbatim}\nNote that this process will not always tell us what the key is\nprecisely.  It will only be the key repeated for the given message\nsize. For sufficiently large messages, or when the key does not repeat\nany characters, however, it would be really easy for an attacker to\nglean the actual key from this information.\n\nThis trick works since the act of using the plaintext as the key and\nthe ciphertext as the message essentially reverses the shifting\nprocess, revealing the shift amounts for each pair of characters.  The\nsame attack would essentially work for the Caesar's cipher as well,\nwhere we would only need one character to crack it.\\indCaesarscipher\n\\end{Answer}\n\n%%%% Way too complicated for the intro.. skipping for now\n%% \\section{Rail fence cipher}\n%% \\lable{sec:railfence}\n%% \\sectionWithAnswers{Rail fence cipher}{sec:railfence}\\indRailFence\n%% The $k$-rail fence cipher is a simple example of a transposition\n%% cipher\\indTranspositioncipher, where the text is written along {\\em\n%% k}-lines in a zig-zag fashion. For instance, to encrypt {\\tt\n%% ATTACKATDAWN} using a 3-rail fence, we construct the following\n%% text:\n%% \\begin{Verbatim}\n%%  A . . . C . . . D . . .\n%% . T . A . K . T . A . N\n%% . . T . . . A . . . W .\n%% \\end{Verbatim}\n%% going down and up the 3 fences in a zigzag fashion. We then read\n%% the ciphertext\\indCiphertext line by line to obtain:\n%% \\begin{Verbatim}\n%%   ACDTAKTANTAW\n%% \\end{Verbatim}\n%% \n%% \\begin{Exercise}\\label{ex:railfence:0}\n%%   Program the 3-rail fence cipher in Cryptol. You should write the\n%%   functions:\n%% \\begin{code}\n%%   rail3Fence, dRail3Fence : {a} (fin a) => String((4*a))  -> String ((4*a));\n%% \\end{code}\n%% that implements the 3-rails encryption/decryption. Using your\n%% functions, encrypt and decrypt the message {\\tt\n%% RAILFENCECIPHERISTRICKIERTHANITLOOKS}.\n%% \\end{Exercise}\n%% \\begin{Answer}\\ansref{ex:railfence:0}\n%% \\begin{code}\n%%   rail3Fence pt = heads # mids # tails\n%%   where {\n%%      regions = groupBy (4, pt);\n%%      heads   =      [| r @ 0             || r <- regions |];\n%%      mids    = join [| [(r @ 1) (r @ 3)] || r <- regions |];\n%%      tails   =      [| r @ 2             || r <- regions |];\n%%   };\n%% \\end{code}\n%% \\end{Answer}\n\n%=====================================================================\n\\section{The atbash}\n\\label{sec:atbash}\n\\sectionWithAnswers{The atbash}{sec:atbash}\n\nThe atbash cipher is a form of a shift cipher, where each letter is\nreplaced by the letter that occupies its mirror image position in the\nalphabet.\\indAtbash That is, {\\tt A} is replaced by {\\tt Z}, {\\tt B}\nby {\\tt Y}, etc. Needless to say the atbash is hardly worthy of\ncryptographic attention, as it is trivial to break.\n\n\\begin{Exercise}\\label{ex:atbash:0}\n  Program the atbash in Cryptol. What is the code for {\\tt\n    ATTACKATDAWN}?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:atbash:0}\n  Using the reverse index operator, coding atbash is\n  trivial:\\indRIndex\\indAtbash\n\\begin{code}\n  atbash : {n} String n -> String n\n  atbash pt = [ alph ! 
(c - 'A') | c <- pt ]\n      where alph = ['A' .. 'Z']\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> atbash \"ATTACKATDAWN\"\n  \"ZGGZXPZGWZDM\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:atbash:1}\n  Program the atbash decryption in Cryptol. Do you have to write any\n  code at all? Break the code {\\tt ZGYZHSRHHVOUWVXIBKGRMT}.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:atbash:1}\n  Notice that decryption for atbash\\indAtbash is precisely the same\n  operation as encryption. So we do not have to\n  write any code at all; we can simply define:\n\\begin{code}\n  dAtbash : {n} String n -> String n\n  dAtbash = atbash\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> dAtbash \"ZGYZHSRHHVOUWVXIBKGRMT\"\n  \"ATBASHISSELFDECRYPTING\"\n\\end{Verbatim}\n\\end{Answer}\n\n%=====================================================================\n\\section{Substitution ciphers}\n\\label{section:subst}\n\\sectionWithAnswers{Substitution ciphers}{section:subst}\n\nSubstitution ciphers\\indSubstitutioncipher generalize all the ciphers\nwe have seen so far, by allowing arbitrary substitutions to be made\nfor individual ``components'' of the\nplaintext~\\cite{wiki:substitution}.  Note that these components need\nnot be individual characters, but rather can be pairs or even triples\nof characters that appear consecutively in the text. (The\nmulti-character approach is termed {\\em\n  polygraphic}.)\\indPolyGraphSubst Furthermore, there are variants\nutilizing multiple {\\em polyalphabetic} mappings,\\indPolyAlphSubst as\nopposed to a single {\\em monoalphabetic} mapping\\indMonoAlphSubst.  We\nwill focus on monoalphabetic simple substitutions, although the other\nvariants are not fundamentally more difficult to implement.\n\n\\tip{For the exercises in this section we will use a running key\n  repeatedly. To simplify your interaction with Cryptol, put the\n  following definition in your program file:}\n\\begin{code}\n  substKey : String 26\n  substKey = \"FJHWOTYRXMKBPIAZEVNULSGDCQ\"\n\\end{code}\nThe intention is that {\\tt substKey} maps {\\tt A} to {\\tt F}, {\\tt B}\nto {\\tt J}, {\\tt C} to {\\tt H}, and so on.\n\n\\begin{Exercise}\\label{ex:subst:0}\n  Implement substitution ciphers in Cryptol. Your function should have\n  the signature:\n\\begin{code}\n  subst : {n} (String 26, String n) -> String n\n\\end{code}\nwhere the first element is the key (like {\\tt substKey}).\nWhat is the code for \\\\\n{\\tt \"SUBSTITUTIONSSAVETHEDAY\"} for the key {\\tt substKey}?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:subst:0}\n\\begin{code}\n  subst (key, pt) = [ key @ (p - 'A') | p <- pt ]\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> subst(substKey, \"SUBSTITUTIONSSAVETHEDAY\")\n  \"NLJNUXULUXAINNFSOUROWFC\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\paragraph*{Decryption} Programming decryption is more subtle.  We can\nno longer use the simple selection operation ({\\tt @})\\indIndex on the\nkey. Instead, we have to search for the character that maps to the\ngiven ciphertext character.\n\n\\begin{Exercise}\\label{ex:subst:1}\nWrite a function {\\tt invSubst} with the following signature:\n%%   type Char = [8] // now in prelude.cry\n\\begin{code}\n  invSubst : (String 26, Char) -> Char\n\\end{code}\nsuch that it returns the mapped plaintext character. 
For instance,\nwith {\\tt substKey}, {\\tt F} should get you {\\tt A}, since the key\nmaps {\\tt A} to {\\tt F}:\n\\begin{Verbatim}\n  Cryptol> invSubst (substKey, 'F')\n  A\n\\end{Verbatim}\nAnd similarly for other examples:\n\\begin{Verbatim}\n  Cryptol> invSubst (substKey, 'J')\n  B\n  Cryptol> invSubst (substKey, 'C')\n  Y\n  Cryptol> invSubst (substKey, 'Q')\n  Z\n\\end{Verbatim}\nOne question is what happens if you search for a non-existing\ncharacter.  In this case you can just return {\\tt 0}, a non-valid\nASCII character, which can be interpreted as {\\em not found}.\n\\hint{Use a fold (see Pg.~\\pageref{par:fold}).}\\indFold\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:subst:1}\n\\begin{code}\n  invSubst (key, c) = candidates ! 0\n    where candidates = [0] # [ if c == k then a else p\n                             | k <- key\n                             | a <- ['A' .. 'Z']\n                             | p <- candidates\n                             ]\n\\end{code}\nThe comprehension\\indComp defining {\\tt candidates} uses a fold (see\npage~\\pageref{par:fold}).\\indFold The first branch ({\\tt k <- key})\nwalks through all the key elements, the second branch walks through\nthe ordinary alphabet ({\\tt a <- ['A' .. 'Z']}), and the final branch\nwalks through the candidate match so far. At the end of the fold, we\nsimply return the final element of {\\tt candidates}. Note that we\nstart with {\\tt 0} as the first element, so that if no match is found\nwe get a {\\tt 0} back.\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:subst:2}\n  Using {\\tt invSubst}, write the decryption function {\\tt dSubst}.\n  It should have the exact same signature as {\\tt subst}.  Decrypt\n  {\\tt FUUFHKFUWFGI}, using our running key.\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:subst:2}\n\\begin{code}\n  dSubst : {n} (String 26, String n) -> String n\n  dSubst (key, ct) = [ invSubst (key, c) | c <- ct ]\n\\end{code}\nWe have:\n\\begin{Verbatim}\n  Cryptol> dSubst (substKey, \"FUUFHKFUWFGI\")\n  \"ATTACKATDAWN\"\n\\end{Verbatim}\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:subst:3}\n  Try the substitution cipher with the key {\\tt\n    AAAABBBBCCCCDDDDEEEEFFFFGG}. Does it still work?  What is special\n  about {\\tt substKey}?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:subst:3}\nNo, with this key we cannot decrypt properly:\n\\begin{Verbatim}\n  Cryptol> subst (\"AAAABBBBCCCCDDDDEEEEFFFFGG\", \"HELLOWORLD\")\n  \"BBCCDFDECA\"\n  Cryptol> dSubst (\"AAAABBBBCCCCDDDDEEEEFFFFGG\", \"BBCCDFDECA\")\n  \"HHLLPXPTLD\"\n\\end{Verbatim}\nThis is because the given key maps multiple plaintext letters to the\nsame ciphertext letter. (For instance, it maps all of {\\tt A}, {\\tt\n  B}, {\\tt C}, and {\\tt D} to the letter {\\tt A}.) For substitution\nciphers to work, the key must not repeat elements; that is, it must\nprovide a 1-to-1 mapping. This property clearly holds for {\\tt substKey}. 
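One way to check this mechanically is to compare every pair of key entries. The following is a sketch (the name {\\tt injectiveKey} is ours; it uses a comprehension with two generators, which enumerates all index pairs, and compares against the all-{\\tt True} vector obtained by complementing {\\tt zero}):\n\\begin{Verbatim}\n  injectiveKey : String 26 -> Bit\n  injectiveKey key = checks == ~zero\n    where checks = [ (i == j) || (key @ i != key @ j)\n                   | i <- [0 .. 25], j <- [0 .. 25]\n                   ]\n\\end{Verbatim}\nThe equality holds exactly when all distinct positions hold distinct characters.\n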
Note\nthat there is no shortage of keys, since for 26 letters we have 26!\npossible ways to choose keys, which gives us about $4 \\times 10^{26}$\ndifferent choices.\n\\end{Answer}\n\n%=====================================================================\n\\section{The scytale}\n\\label{sec:scytale}\n\\sectionWithAnswers{The scytale}{sec:scytale}\n\nThe scytale is one of the oldest cryptographic devices ever, dating\nback to at least the first century\nA.D.~\\cite{wiki:scytale}.\\indScytale Ancient Greeks used a leather\nstrip on which they would write their plaintext\\indPlaintext message.\nThe strip would be wrapped around a rod of a certain diameter. Once\nthe strip is completely wound, they would read the text row-by-row,\nessentially transposing the letters and constructing the\nciphertext\\indCiphertext. Since the ciphertext is formed by a\nrearrangement of the plaintext, the scytale is an example of a\ntransposition cipher.\\indTranspositioncipher To decrypt, the\nciphertext needs to be wrapped around a rod of the same diameter,\nreversing the process. The cipherkey\\indCipherkey is essentially the\ndiameter of the rod used. Needless to say, the scytale does not\nprovide a very strong encryption mechanism.\n\nAbstracting away from the actual rod and the leather strip, encryption\nis essentially writing the message column-by-column in a matrix and\nreading it row-by-row.  Let us illustrate with the message {\\tt\n  ATTACKATDAWN}, where we can fit 4 characters per column:\n\\begin{verbatim}\n    ACD\n    TKA\n    TAW\n    ATN\n\\end{verbatim}\nTo encrypt, we read the message row-by-row, obtaining {\\tt\n  ACDTKATAWATN}. If the message does not fit properly (i.e., if it has\nempty spaces in the last column), it can be padded by {\\tt Z}'s or\nsome other agreed upon character. To decrypt, we essentially reverse\nthe process, by writing the ciphertext row-by-row and reading it\ncolumn-by-column.\n\nNotice how the scytale's operation is essentially matrix\ntransposition.  Therefore, implementing the scytale in Cryptol is\nmerely an application of the {\\tt transpose} function.\\indTranspose\nAll we need to do is group the message by the correct number of\nelements using {\\tt split}.\\indSplit Below, we define the {\\tt\n  diameter} to be the number of columns we have. The type synonym {\\tt\n  Message} ensures we only deal with strings that properly fit the\n``rod,'' by using {\\tt r} number of rows:\\indJoin\n\n\\begin{code}\n  scytale : {row, diameter} (fin row, fin diameter)\n            => String (row * diameter) -> String (diameter * row)\n  scytale msg = join (transpose msg')\n       where   msg' : [diameter][row][8]\n               msg' = split msg\n\\end{code}\nThe signature\\indSignature on {\\tt msg'} is revealing: We are taking a\nstring that has {\\tt diameter * row} characters in it, and chopping it\nup so that it has {\\tt diameter} elements, each of which is a string\nthat has {\\tt row} characters in it.  
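For the running example (with {\\tt row = 4} and {\\tt diameter = 3}), the intermediate values work out as follows:\n\\begin{Verbatim}\n  msg'           = [\"ATTA\", \"CKAT\", \"DAWN\"]\n  transpose msg' = [\"ACD\", \"TKA\", \"TAW\", \"ATN\"]\n\\end{Verbatim}\nso {\\tt join} produces exactly the row-by-row reading of the matrix shown earlier.\n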
Here is Cryptol in action,\nencrypting the message {\\tt ATTACKATDAWN}:\n\\begin{Verbatim}\n  Cryptol> :set ascii=on\n  Cryptol> scytale \"ATTACKATDAWN\"\n  \"ACDTKATAWATN\"\n\\end{Verbatim}\nDecryption is essentially the same process, except we have to {\\tt\n  split} so that we get {\\tt diameter} elements\nout:\\indSplit\\indJoin\\indScytale\n\\begin{code}\n  dScytale : {row, diameter} (fin row, fin diameter) \n             => String (row * diameter) -> String (diameter * row)\n  dScytale msg = join (transpose msg')\n     where   msg' : [row][diameter][8]\n             msg' = split msg\n\\end{code}\nAgain, the type on {\\tt msg'} tells Cryptol that we now want {\\tt\n  row} strings, each of which is {\\tt diameter} long.  It is important\nto notice that the definitions of {\\tt scytale} and {\\tt dScytale} are\nprecisely the same, except for the signature on {\\tt msg'}! When\nviewed as a matrix, the types precisely tell which transposition we\nwant at each step.  We have:\n\\begin{Verbatim}\n  Cryptol> dScytale \"ACDTKATAWATN\"\n  \"ATTACKATDAWN\"\n\\end{Verbatim}\n\n\\begin{Exercise}\\label{ex:scytale:0}\n  What happens if you comment out the signature for {\\tt msg'} in the\n  definition of {\\tt scytale}? Why?\\indScytale\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:scytale:0}\n  If you do not provide a signature for {\\tt msg'}, you will get the\n  following type-error message from Cryptol:\n\\begin{small}\n\\begin{Verbatim}\n  Failed to validate user-specified signature.\n    In the definition of 'scytale', at classic.cry:40:1--40:8:\n      for any type row, diameter\n        fin row\n        fin diameter\n      =>\n      fin ?b\n        arising from use of expression split at classic.cry:42:17--42:22\n      fin ?d\n        arising from use of expression join at classic.cry:40:15--40:19\n      row * diameter == ?a * ?b\n        arising from matching types at classic.cry:1:1--1:1\n\\end{Verbatim}\n\\end{small}\nEssentially, Cryptol is complaining that it was asked to do a {\\tt\n  split}\\indSplit and it figured that the constraint\n$\\text{\\emph{diameter}}*\\text{\\emph{row}}=a*b$ must hold, but that is\nnot sufficient to determine what {\\tt a} and {\\tt b} should really\nbe. (There could be multiple ways to assign {\\tt a} and {\\tt b} to\nsatisfy that requirement, for instance {\\tt a=4}, {\\tt b=row}; or {\\tt\n  a=2} and {\\tt b=2*row}, resulting in differing behavior.)  This is\nwhy it is unable to ``validate the user-specified signature''.  By\nputting the explicit signature for {\\tt msg'}, we are giving Cryptol\nmore information to resolve the ambiguity. Notice that the code\nfor {\\tt scytale} and {\\tt dScytale} is precisely the same except for\nthe type on {\\tt msg'}; this is a clear indication that the type\nsignature plays an essential role here.\\indAmbiguity\\indSignature\n\\end{Answer}\n\n\\begin{Exercise}\\label{ex:scytale:1}\n  How would you attack a scytale encryption, if you don't know what\n  the diameter is?\n\\end{Exercise}\n\\begin{Answer}\\ansref{ex:scytale:1}\n  Even if we do not know the diameter, we do know that it is a divisor\n  of the length of the message. For any given message size, we can\n  compute the number of divisors of the size and try decryption until\n  we find a meaningful plaintext.  Of course, the number of potential\n  divisors will be large for large messages, but the practicality of\n  scytale stems from the choice of relatively small diameters, hence\n  the search would not take too long. 
(With large diameters, the\n  ancient Greeks would have to carry around very thick rods, which\n  would not be very practical in a battle scenario!)\\indScytale\n\\end{Answer}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"../main/Cryptol\"\n%%% End: \n", "meta": {"hexsha": "ceea8b60f3309d78a7517a38f8029ca45573d099", "size": 33001, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/ProgrammingCryptol/classic/Classic.tex", "max_stars_repo_name": "matcheydj/cryptol", "max_stars_repo_head_hexsha": "875a2e756ebcff292d9fe93cdfb59f21d13f6ed1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2015-11-08T14:54:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-03T17:03:50.000Z", "max_issues_repo_path": "docs/ProgrammingCryptol/classic/Classic.tex", "max_issues_repo_name": "dylanmc/cryptol", "max_issues_repo_head_hexsha": "33e4e3972c5f14e0554bca5cd1272f5444b6b046", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/ProgrammingCryptol/classic/Classic.tex", "max_forks_repo_name": "dylanmc/cryptol", "max_forks_repo_head_hexsha": "33e4e3972c5f14e0554bca5cd1272f5444b6b046", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.1468710089, "max_line_length": 81, "alphanum_fraction": 0.7116147996, "num_tokens": 9687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7718435030872968, "lm_q1q2_score": 0.5848986621444542}}
{"text": "%auto-ignore\n\\providecommand{\\MainFolder}{..}\n\\documentclass[\\MainFolder/Text.tex]{subfiles}\n\n\\begin{document}\n\n\\section{Basics of IBL-infinity-algebras\n%for the \\texorpdfstring{$\\IBLInfty$-theory}{IBL-infinity-theory}\n}\n\\allowdisplaybreaks\n\\label{Sec:Alg1}\n\n%For the following definition, Definition~\\ref{Def:Grading}, \\ref{Def:SymAlgebra}, \\ref{Def:Filtrations} and \\ref{Def:Completion} are relevant.\n\\Correct[caption={DONE Bad picture},noline]{There migh be wrong dimensions on one side of the maurer cartan element. Compare to the section about filtrations. REMARK THAT THE DEFINITION NEEDS THE WEIGHT STRUCTURE OF THE BIALGEBRA. No Problem.}\n\\Add[caption={DONE New algebraic setting in Appendix},noline]{There is a new algebraic description of the iterated compatibility condition in the appendix.}\n\\begin{Definition}[Exterior algebra]\\label{Def:ExtAlg}\nGiven a graded vector space $C$ over $\\R$, we define the \\emph{exterior algebra} over $C$ by\n\\[ \\Ext C \\coloneqq \\Sym(C[1]). \\]\nThe weight-$k$ component is denoted by $\\Ext_k C$ and the weight-reduced part by~$\\RExt C$. If $C$ is in addition filtered, then $\\Ext_k C$ is filtered by the induced filtration and its completion is denoted by $\\hat{\\Ext}_k C$.\n\\end{Definition}\n\n\nWe have the concatenation product $\\Prod : \\Ext C \\otimes \\Ext C \\rightarrow \\Ext C$ and the shuffle coproduct $\\CoProd: \\Ext C \\rightarrow \\Ext C\\otimes \\Ext C$ defined~by\n\\begin{align*}\n&\\Prod(c_{11}\\dots c_{1k} \\otimes c_{21} \\dots c_{2k'}) \\coloneqq c_{11} \\dots c_{1k} c_{21} \\dots c_{2k'}\\quad\\text{and}\\\\[\\jot]\n&\\CoProd(c_1 \\dots c_k) \\coloneqq \\sum_{\\substack{k_1,\\,k_2 \\ge 0\\\\ k_1 + k_2 = k}} \\sum_{\\sigma\\in \\Perm_{k_1, k_2}} \\varepsilon(\\sigma,c) c_{\\sigma^{-1}_1} \\dots c_{\\sigma_{k_1}^{-1}}\\otimes c_{\\sigma_{k_1+1}^{-1}}\\dots c_{\\sigma_{k_1 + k_2}^{-1}}\n\\end{align*}\nfor all homogenous $c_{ij}$, $c_i \\in C[1]$ and $k$, $k'\\ge 0$, respectively, where $\\Perm_{k_1, k_2}\\subset \\Perm_{k_1+k_2}$ denotes the set of shuffle permutations with blocks of lengths $k_1$ and $k_2$. 
These operations satisfy the relations of an \\emph{associative bialgebra} (see \\cite{Loday2012}):\n\\begin{equation}\\label{Eq:Bialgebra}\n\\text{Assoc.~bialg.}\\quad \\left\\{\n\\begin{aligned}\n\\Prod\\circ (\\Id\\otimes \\Prod) &= \\Prod\\circ (\\Prod \\otimes \\Id), \\\\ \n(\\Id\\otimes \\CoProd)\\circ\\CoProd &= (\\CoProd\\otimes \\Id)\\circ \\CoProd, \\\\\n\\CoProd \\circ \\Prod &= (\\Prod\\otimes \\Prod) \\circ (\\Id \\otimes \\tau\\otimes \\Id) \\circ (\\CoProd\\otimes \\CoProd).\n\\end{aligned} \\right.\n\\end{equation}\nHere $\\tau: C_1 \\otimes C_2 \\rightarrow C_2 \\otimes C_1$, $c_1\\otimes c_2 \\mapsto (-1)^{\\Abs{c_1}\\Abs{c_2}}c_2\\otimes c_1$ denotes the \\emph{twist map}.\n\n\nWe will use the bialgebra calculus ($\\coloneqq$\\,relations \\eqref{Eq:Bialgebra}) to write down explicit formulas for the operations $\\circ_{h_1, \\dotsc, h_r}$ which were briefly introduced in \\cite{Cieliebak2015}; these operations take symmetric maps $f_1$, $\\dotsc$, $f_r$ and connect $h_1$, $\\dotsc$, $h_r$ of their outputs to the inputs of a symmetric map $f$ in all possible ways, so that the result, which we denote by $f\\circ_{h_1, \\dotsc, h_r}(f_1, \\dotsc,f_r)$, becomes a symmetric map again.\n\n%In accordance with~\\cite{Cieliebak2015}, one should think of a map $f: \\Ext_k C \\rightarrow \\Ext_l C$ as of a Riemannian surface with $k$ ingoing and $l$ outcoming boundary components (\\eqqcolon\\,ends). The partial composition of two maps then corresponds to gluing of Riemannian surfaces at a given number of boundary components. This will be discussed once more Remark~\\ref{Rem:SignConv}.\n\n\n\\begin{Definition}[Partial compositions] \\label{Def:CircS}\nLet $C$ be a graded vector space. For $i$, $j\\ge 0$, we denote by \n\\begin{align*} \\pi_i : \\Ext C \\longrightarrow \\Ext_i C,& \\quad \\iota_i : \\Ext_i C \\longrightarrow \\Ext C, \\\\\n \\Id_i : \\Ext_i C \\longrightarrow \\Ext_i C,& \\quad \\begin{aligned}[t]\\tau_{i,j}: \\Ext_i C\\otimes \\Ext_j C &\\longrightarrow \\Ext_j C \\otimes \\Ext_i C \\end{aligned}\n \\end{align*}\nthe components of the canonical projection $\\pi$, the canonical inclusion $\\iota$, the identity $\\Id$ and the twist map $\\tau$, respectively. We also set \n\\[ \\CoProd_{i,j} \\coloneqq (\\pi_i \\otimes \\pi_j)\\circ \\CoProd\\circ \\iota_{i+j} \\quad\\text{and}\\quad \\Prod_{i,j}\\coloneqq \\pi_{i+j}\\circ \\Prod\\circ (\\iota_i \\otimes \\iota_j). \\]\nFor $k'$, $k_1$, $l'$, $l_1\\ge 0$, let $f: \\Ext_{k'}C \\rightarrow \\Ext_{l'} C$ and $f_1: \\Ext_{k_1} C \\rightarrow \\Ext_{l_1} C$ be linear maps, and let $0 \\le h \\le \\min(k', l_1)$. We set  \n\\[k \\coloneqq k' + k_1 - h \\quad\\text{and}\\quad l \\coloneqq l' + l_1 - h \\]\nand define the \\emph{composition of $f$ and $f_1$ at $h$ common outputs} to be the linear map $f \\circ_h f_1: \\Ext_k C \\rightarrow \\Ext_l C$ given by \n\\begin{equation}\\label{Eq:CompositionSimple}\nf \\circ_h f_1 \\coloneqq \\begin{multlined}[t]\\Prod_{l', l_1 - h} \\circ (f\\otimes \\Id_{l_1-h})\\circ (\\Prod_{h, k'-h}\\otimes \\Id_{l_1-h})\\circ (\\Id_{h} \\otimes \\tau_{\\rule{0pt}{7pt}l_1-h,k'-h}) \\\\[\\jot] \\circ (\\CoProd_{h,l_1-h}\\otimes \\Id_{k'-h}) \\circ (f_1 \\otimes \\Id_{k'-h} ) \\circ \\CoProd_{k_1, k'-h}. 
\\end{multlined}\n\\end{equation}\nMore generally, we define the composition of $f: \\Ext_{k'} C \\rightarrow \\Ext_{l'} C$ with $r\\ge 1$ linear maps $f_{i}: \\Ext_{k_i} C \\rightarrow \\Ext_{l_i} C$ with $k_i$, $l_i \\ge 0$ for $i=1$,~$\\dotsc$, $r$ at $0 \\le h_i \\le l_i$ common outputs such that $h\\coloneqq h_1 + \\dotsb + h_r \\le k'$ as follows. We set \n\\[ k\\coloneqq k' + k_1 + \\dotsb + k_r-h\\quad\\text{and}\\quad l\\coloneqq l' +  l_1 + \\dotsb + l_r  - h \\]\nand define $f\\circ_{h_1, \\dotsc, h_r}(f_1, \\dotsc, f_r): \\Ext_k C \\rightarrow \\Ext_l C$ by\n\\begin{equation} \\label{Eq:CompositionFormula}\n\\begin{aligned}\n&f\\circ_{h_1, \\dotsc, h_r}(f_1, \\dotsc, f_r)\\\\\n&\\qquad \\coloneqq \\begin{multlined}[t]\n\\Prod\\circ (f\\otimes \\Id)\\circ(\\Prod\\otimes\\Id)\\circ(\\Id\\otimes \\tau) \\\\[\\jot] \\circ \\big(\\big[(\\Prod^{(r)}\\otimes \\Prod^{(r)})\\circ (F_{h_1,\\dotsc,h_r} \\otimes \\Id^{\\otimes r})\\circ \\sigma_r\\circ \\CoProd^{\\otimes r}\\big] \\otimes \\Id \\big) \\\\[\\jot] \\circ (f_1\\otimes \\dotsb \\otimes f_r\\otimes \\Id)\\circ\\CoProd^{(r+1)},\n\\end{multlined}\\end{aligned}\n\\end{equation}\nwhere we have:\n\\begin{itemize}\n\\item The operation $\\Prod^{(r)}$ is the ``product with $r$ inputs'' and the operation $\\CoProd^{(r)}$ the ``coproduct with $r$ outputs''; they are defined~by\n\\begin{align*}\n\\Prod^{(r)} &\\coloneqq \\Prod(\\Id \\otimes \\Prod)\\dotsb(\\Id^{\\otimes r-2} \\otimes \\Prod), & \\Prod^{(1)}&\\coloneqq \\Id,\\\\\n\\CoProd^{(r)} &\\coloneqq (\\Id^{\\otimes r-2} \\otimes \\CoProd)\\dotsb(\\Id \\otimes \\CoProd)\\CoProd, &\\CoProd^{(1)}&\\coloneqq \\Id.\n\\end{align*}\n\n\\item $F_{h_1,\\dotsc, h_r} \\coloneqq (\\iota_{h_1}\\pi_{h_1}) \\otimes \\dotsb \\otimes (\\iota_{h_r}\\pi_{h_r})$.%is the ``filter''.\n\\item The permutation $\\sigma_r \\in \\Perm_{2r}$ is given by\n\\[\\sigma_r: (1,2,3,4,\\dotsc, 2r-1, 2r) \\longmapsto (1,r+1,2,r+2, \\dotsc, r, 2r).\\]\n\\item The symbols $f$ and $f_i$ inside the formula denote the \\emph{trivial extensions} of $f$ and $f_i$, respectively; we extend a linear map $f: \\Ext_{k'} C \\rightarrow \\Ext_{l'} C$ trivially to $f: \\Ext C \\rightarrow \\Ext C$ by defining $f(\\Ext_i C)=0$ for $i\\neq k'$.\n\\end{itemize}\n\\end{Definition}\n\n\\begin{Remark}[On partial compositions]\\phantomsection\\label{Rem:Compositions}\n\\begin{RemarkList}\n\\item Defining $f\\circ_{h_1, \\dotsc, h_r}(f_1, \\dotsc, f_r): \\Ext_k C \\rightarrow \\Ext_l C$ using~\\eqref{Eq:CompositionFormula} makes sense because the right hand side is a trivial extension of its component $\\Ext_k C \\rightarrow \\Ext_l C$. In fact, all $\\Prod$, $\\CoProd$, $\\pi$, $\\iota$ in \\eqref{Eq:CompositionFormula} can be replaced with $\\Prod_{i,j}$, $\\CoProd_{i,j}$, $\\pi_i$, $\\iota_i$ for unique $i$, $j$, so that trivial extensions do not have to be used at all. In this way, it can be seen that \\eqref{Eq:CompositionSimple} is indeed a special case of~\\eqref{Eq:CompositionFormula}. \n\\item If $h = k' = l_1$, then $f \\circ_{h} f_1 = f\\circ f_1$.\n\\item We have $f \\circ_0 f_1 = (-1)^{\\Abs{f}\\Abs{f_1}} f_1 \\circ_0 f$ and \n\\[ f\\circ_{h_1,\\dotsc,h_r}(f_1,\\dotsc,f_r) = \\varepsilon(\\sigma,f) f\\circ_{h_{\\sigma_1^{-1}},\\dotsc,h_{\\sigma_r^{-1}}}(f_{\\sigma_1^{-1}},\\dotsc,f_{\\sigma_r^{-1}}). 
\\]\n\\item Consider the (``non-trivial'') extension $\\hat{f}\\coloneqq \\Prod(f\\otimes \\Id)\\CoProd: \\Ext C \\rightarrow \\Ext C$ and the symmetric product $f_1 \\odot \\dotsb \\odot f_r \\coloneqq \\Prod^{(r)}(f_1 \\otimes \\dotsb \\otimes f_r)\\CoProd^{(r)}: \\Ext C \\rightarrow \\Ext C$. The following formulas from~\\cite{Cieliebak2015} hold:\n\\begin{equation} \\label{Eq:Mix}\n\\begin{aligned}\nf\\circ_{h_1,\\dotsc,h_{r-1},0}(f_1,\\dotsc, f_r) &= f\\circ_{h_1,\\dotsc, h_{r-1}}(f_1,\\dotsc, f_{r-1}) \\odot f_r, \\\\\n\\hat{f} \\circ \\hat{f}_1 &= \\sum_{h = 0}^{\\min(k',l_1)} \\widehat{f\\circ_h f_1}, \\\\ \n   \\hat{f} \\circ (f_1 \\odot \\dotsb \\odot f_r) &= \\sum_{\\substack{h_1, \\dotsc, h_r \\ge 0 \\\\ h_1 + \\dotsb + h_r = k'}} f\\circ_{h_1,\\dotsc, h_r}(f_1,\\dotsc, f_r).\n\\end{aligned}\n\\end{equation}\nWe also have the ``weak associativity''\n\\begin{equation}\\label{Eq:WeakAssoc}\n\\qquad\\qquad \\mathclap{\\sum_{\\substack{0 \\le h_2 \\le \\min(f_3^-, f_2^+) \\\\\n0 \\le h_1 \\le \\min(f_1^+,f_2^- + f_3^- - h_2) \\\\\nh_1 + h_2 = h}}}\\quad\\qquad f_1 \\circ_{h_1} (f_2 \\circ_{h_2} f_3) = \\qquad\\quad \\mathclap{\\sum_{\\substack{0 \\le h_1 \\le \\min(f_1^+,f_2^-) \\\\ 0 \\le h_2 \\le \\min(f_1^+ + f_2^+ - h_1, f_3^-) \\\\ h_1 + h_2 = h}}}\\qquad\\quad (f_1 \\circ_{h_1} f_2) \\circ_{h_2} f_3\n\\end{equation}\nfor every $0\\le h \\le \\min(k_1 + k_2 + k_3, l_1 + l_2 + l_3)$, where $f^+$ denotes the number of inputs and $f^-$ the number of outputs of $f$. The weak associativity of $\\circ_h$ can be proven using the associativity of $\\,\\hat{\\cdot}$ and the second relation of \\eqref{Eq:Mix}.\n\\item We refer to Section~\\ref{Sec:CompConvA} of Appendix~\\ref{App:IBLMV} for a thorough treatment of partial compositions. We show there that $\\circ_{h_1,\\dotsc,h_r}$ can be defined on maps on any connected weight-graded bialgebra using natural bilinear operations $\\SquareComp_A$ on polynomials in the convolution product. In Proposition~\\ref{Prop:PartCompositions} in the appendix, we prove the relations above.\\qedhere\n\\end{RemarkList}\n\\end{Remark}\n\nIf $C$ is filtered by a decreasing filtration, then the bialgebra operations extend continuously to \n\\begin{align*}\n\\Prod: \\hat{\\Ext}_{k_1} C \\COtimes \\hat{\\Ext}_{k_2} C &\\longrightarrow \\hat{\\Ext}_{k_1+k_2} C \\quad \\text{and}\\\\ \n\\CoProd: \\hat{\\Ext}_k C &\\longrightarrow \\bigoplus_{\\substack{l_1, l_2 \\ge 0 \\\\ l_1 + l_2 = k}} \\hat{\\Ext}_{l_1}C\\COtimes\\hat{\\Ext}_{l_2}C\n\\end{align*}\nfor all $k_1$, $k_2$, $k\\in \\N_0$ because they preserve the filtration degree (see~\\cite{Fresse} for a similar construction). Next, if $f_1: \\hat{\\Ext}_{k_1}C \\rightarrow \\hat{\\Ext}_{l_1}C$ and $f_2: \\hat{\\Ext}_{k_2}C\\rightarrow \\hat{\\Ext}_{l_2}C$ have finite filtration degrees, then $f_1 \\otimes f_2: \\hat{\\Ext}_{k_1} C \\otimes \\hat{\\Ext}_{k_2} C \\rightarrow \\hat{\\Ext}_{l_1}C\\otimes\\hat{\\Ext}_{l_2}C$ has finite filtration degree too, and hence it extends continuously to $f_1 \\otimes f_2: \\hat{\\Ext}_{k_1} C\\COtimes \\hat{\\Ext}_{k_2} C \\rightarrow \\hat{\\Ext}_{l_1}C\\COtimes\\hat{\\Ext}_{l_2}C$.\nUsing these facts, we can canonically extend Definition~\\ref{Def:CircS} to maps $f: \\hat{\\Ext}_{k'} C \\rightarrow \\hat{\\Ext}_{l'} C$ and $f_i: \\hat{\\Ext}_{k_i}C \\rightarrow \\hat{\\Ext}_{l_i}C$ of finite filtration degrees. The resulting map $f\\circ_{h_1,\\dotsc,h_r}(f_1,\\dotsc,f_r): \\hat{\\Ext}_k C \\rightarrow \\hat{\\Ext}_l C$ will have finite filtration degree too. 
Moreover, the formulas in Remark~\\ref{Rem:Compositions} will still hold.%The details will be given in~\\cite{MyPhD}.\n\n\nWe will now rephrase the definitions of an $\\IBLInfty$-algebra, a Maurer-Cartan element and twisted operations from~\\cite{Cieliebak2015} in terms of~$\\circ_{h_1, \\dotsc, h_r}$.\n%The following is a combination of Definitions 8.1 and 2.3 and Lemma 2.5 in \\cite{Cieliebak2015}: \n\\begin{Def}[$\\IBLInfty$-algebra] \\label{Def:IBLInfty} Let $C$ be a graded vector space equipped with a decreasing filtration, and let $d\\in \\Z$ and $\\gamma\\ge 0$ be fixed constants. An \\emph{$\\IBLInfty$-algebra of bidegree $(d,\\gamma)$} on~$C$ is a collection of linear maps $\\OPQ_{klg}: \\hat{\\Ext}_k C \\rightarrow \\hat{\\Ext}_l C$ for all $k,l\\ge 1$, $g\\ge 0$ which are homogeneous, of finite filtration degree and satisfy the following conditions: \n% The filtration degree of the canonical dIBL with respect to the filtration by weights is 2. The pushforward Maurer-Cartan element satisfies the filtration condition strictly. All maps p_klg, f_klg constructed by summation over ribbon graphs satisfy the filtration condition with =.\n\\begin{enumerate}[label=\\arabic*)]\n\\item $\\Abs{\\OPQ_{klg}} = - 2d(k+g-1) - 1$.\n\\item $\\Norm{\\OPQ_{klg}} \\ge \\gamma \\chi_{klg}$,\nwhere $\\chi_{klg}\\coloneqq2-2g-k-l$. %(cf. $e$ in \\eqref{Eq:EulerFormula})\n\\item The \\emph{$\\IBLInfty$-relations} hold: for all $k,l\\ge 1$, $g\\ge 0$, we have\n\\begin{equation} \\label{Eq:IBLInfRel}\n\\sum_{h=1}^{g+1} \\sum_{\\substack{k_1, k_2, l_1, l_2 \\ge 1 \\\\ g_1, g_2 \\ge 0 \\\\k_1 + k_2 = k + h \\\\ l_1 + l_2 = l+ h\\\\ g_1 + g_2 = g+ 1 -h }} \\OPQ_{k_2 l_2 g_2} \\circ_h \\OPQ_{k_1 l_1 g_1} = 0.\n\\end{equation}\n%where the operator $\\circ_s$ connects exactly $s$ outputs of $\\OPQ_{k_1 l_1 g_1}$ to exactly $s$ inputs of $\\OPQ_{k_2 l_2 g_2}$ in all possible ways producing a map $\\widehat{E}_k C \\rightarrow \\widehat{E}_l C$.\n\\end{enumerate}\nWe denote a given $\\IBLInfty$-algebra structure on $C$ by $\\IBLInfty(C)$; i.e., we write $\\IBLInfty(C)=(C,(\\OPQ_{klg}))$.\n%We can also write $\\IBLInfty(C) = (\\OPQ_{klg})$, where the notation $(\\OPQ_{klg})$ denotes the collection of $\\OPQ_{klg}$ for all $k$, $l\\ge 1$, $g\\ge 0$.\n%If the filtration is trivial, i.e., $\\Filtr_\\lambda C = 0$ for all $\\lambda\\in \\R$, and hence $\\hat{\\Ext}_k C = \\Ext_k C$ for all $k\\ge 1$ and the condition 2) holds trivially, we call $\\IBLInfty(C)$ simply an \\emph{$\\IBLInfty$-algebra} of degree $d$ on $C$.\n\nIf $\\OPQ_{klg} \\equiv 0$ for all $(k,l,g)\\neq (1,1,0)$, $(2,1,0)$, $(1,2,0)$, then we call $\\IBLInfty(C)$ a \\emph{$\\dIBL$-algebra} and denote it by $\\dIBL(C)$. 
If in addition $\\OPQ_{110} \\equiv 0$, then we have an \\emph{$\\IBL$-algebra} $\\IBL(C)$.\n\nIf the operations on the completed exterior powers~$\\hat{\\Ext}_k C$ arise as continuous extensions of operations $\\OPQ_{klg}: \\Ext_k C \\rightarrow \\Ext_l C$, then we call the $\\IBLInfty$-algebra \\emph{completion-free} and denote $C$ together with the operations $\\OPQ_{klg}: \\Ext_k C \\rightarrow \\Ext_l C$ by $\\ShortIBLInfty(C)$.\n\\end{Def}\n%We note that filtered algebras are defined in Definition~\\ref{Def:FiltAlg} in the appendix and that they are compared to the definition above in Remark~\\ref{Rem:FiltrStr}.\n\nThe acronym $\\IBL$ stands for an \\emph{involutive Lie bialgebra.} Indeed, it follows from the $\\IBLInfty$-relations \\eqref{Eq:IBLInfRel} that for $\\IBL(C) = (C,\\OPQ_{210},\\OPQ_{120})$ the following holds: \n\\begin{equation*}\n%\\label{Eq:IndIBL}\n\\raisebox{2ex}{$\\text{Lie bialg.}\\;\\left\\{\\rule{0pt}{5ex}\\right.$}\\;\n\\begin{aligned}   \n   0&= \\OPQ_{210}\\circ_1 \\OPQ_{210} &&\\leftarrow\\text{Jacobi id.}\\\\\n   0 &= \\OPQ_{120} \\circ_1 \\OPQ_{120} &&\\leftarrow\\text{co-Jacobi id.}\\\\\n   0 &= \\OPQ_{120}\\circ_1 \\OPQ_{210} + \\OPQ_{210}\\circ_1 \\OPQ_{120}&&\\leftarrow\\text{Drinfeld id.} \\\\\n   0 &= \\OPQ_{210} \\circ_2 \\OPQ_{120} &&\\leftarrow\\text{Involutivity}\n\\end{aligned}\n\\end{equation*}\nThe acronym $\\dIBL$ stands for a \\emph{differential involutive Lie bialgebra} --- an involutive Lie bialgebra together with a differential (a boundary operator in our case) such that the bracket and cobracket are chain maps.\n \n\\begin{Proposition}[Odd degree shift of an $\\IBL$-algebra]\\label{Prop:ClasModIBL}\nLet $(C,\\OPQ_{210}, \\OPQ_{120})$ be an $\\IBL$-algebra of degree $d$ from Definition~\\ref{Def:IBLInfty}, and let $\\tilde{\\OPQ}_{210} : C^{\\otimes 2} \\rightarrow C$ and $\\tilde{\\OPQ}_{120}: C \\rightarrow C^{\\otimes 2}$ be the linear maps defined by\n\\begin{equation}\\label{Eq:ClasModIBL}\n\\begin{aligned}\n\\SuspU \\tilde{\\OPQ}_{210}(x_1 \\otimes x_2) &\\coloneqq \\OPQ_{210}(\\pi(\\SuspU^2 x_1 \\otimes x_2)) \\quad\\text{and} \\\\\n\\SuspU^2 \\tilde{\\OPQ}_{120}(x) &\\coloneqq \\iota(\\OPQ_{120}(\\SuspU x))\n\\end{aligned}\n\\end{equation}\nfor all $x_1$, $x_2$, $x\\in C$, where $\\iota: \\Sym_2(C[1]) \\rightarrow C[1]^{\\otimes 2}$ is the section of $\\pi: C[1]^{\\otimes 2} \\rightarrow \\Sym_2(C[1])$ from Definition~\\ref{Def:SymAlgebra} and~$\\SuspU$ is a formal symbol of degree $\\Abs{\\SuspU} = -1$. Then the degrees satisfy\n\\[ \\Abs{\\tilde{\\OPQ}_{210}} = \\Abs{\\OPQ_{210}} - 1 = -2d - 2\\quad\\text{and}\\quad\\Abs{\\tilde{\\OPQ}_{120}} = \\Abs{\\OPQ_{120}} + 1 = 0, \\]\nthe operations $\\tilde{\\OPQ}_{210}$ and $\\tilde{\\OPQ}_{120}$ are graded antisymmetric, i.e., we have\n\\[ \\tilde{\\OPQ}_{210} \\circ\\tau = - \\tilde{\\OPQ}_{210}\\quad\\text{and}\\quad\\tau \\circ \\tilde{\\OPQ}_{120} = - \\tilde{\\OPQ}_{120} \\]\nfor the twist map $\\tau$, and the relations\n\\begin{align*}\n%\\label{Eq:ClassicIBL}\n0&=\\tilde{\\OPQ}_{210}\\circ (\\tilde{\\OPQ}_{210}\\otimes \\Id)\\circ (\\Id^{\\otimes 3}+ t_3 + t_3^2), \\\\\n0&=(\\Id^{\\otimes 3}+t_3 + t_3^2)\\circ (\\tilde{\\OPQ}_{120}\\otimes\\Id)\\circ \\tilde{\\OPQ}_{120}, \\\\\n0&= x_1 \\cdot \\tilde{\\OPQ}_{120}(x_2) - (-1)^{ x_1 x_2} x_2 \\cdot \\tilde{\\OPQ}_{120}(x_1) - \\tilde{\\OPQ}_{120}(\\tilde{\\OPQ}_{210}(x_1,x_2)), \\\\\n0& = \\tilde{\\OPQ}_{210} \\circ \\tilde{\\OPQ}_{120},\n\\end{align*}\nhold for all $x_1$, $x_2\\in C$. 
Here, $t_3 \\in \\Perm_3$ denotes the cyclic permutation with $t_3(1) = 2$ acting on $C^{\\otimes 3}$, and we define\n\\[ x\\cdot (y_1 \\otimes y_2) \\coloneqq \\tilde{\\OPQ}_{210}(x,y_1)\\otimes y_2 + (-1)^{ x y_1} y_1 \\otimes \\tilde{\\OPQ}_{210}(x,y_2) \\]\nfor all $x$, $y_1$, $y_2 \\in C$.\n%On the other hand, starting with~$\\tilde{\\OPQ}_{210}$ and $\\tilde{\\OPQ}_{120}$ satisfying the conditions above and defining $\\OPQ_{210}: \\Ext_2 C \\rightarrow \\Ext_1 C$ and $\\OPQ_{120}: \\Ext_1 C \\rightarrow \\Ext_2 C$ by \\eqref{Eq:ClasModIBL}, we get an $\\IBL$-algebra $(C,\\OPQ_{210},\\OPQ_{120})$ according to Definition~\\ref{Def:IBLInfty}.\n\\end{Proposition}\n\n\\begin{proof}\nThe proof is a lengthy but straightforward computation.\n\\end{proof}\n\nConsider the \\emph{sign-action of $\\Perm_k$} on $C^{\\otimes k}$ given by $\\sigma \\mapsto \\bar{\\sigma}$, where \n\\[ \\bar{\\sigma}(c_1\\otimes\\dotsb\\otimes c_k)\\coloneqq(-1)^\\sigma\\varepsilon(\\sigma,c)c_{\\sigma_1^{-1}}\\otimes\\dotsb\\otimes c_{\\sigma_k^{-1}} \\]\nfor all $c_1$, $\\dotsc$, $c_k\\in C$ and $\\sigma\\in \\Perm_k$. We define \n\\[ \\Lambda C \\coloneqq \\bigoplus_{k=0}^\\infty \\Lambda_k C\\quad\\text{with}\\quad \\Lambda_k C \\coloneqq C^{\\otimes k}/\\sim, \\]\nwhere $c\\sim \\bar{\\sigma}(c)$ for all $c\\in C^{\\otimes k}$ and $\\sigma\\in \\Perm_k$. It is easy to see that the degree-shift map $\\theta^{\\otimes k}: c_1\\otimes \\dotsb \\otimes c_k \\in C^{\\otimes k} \\mapsto \\varepsilon(c,\\theta)(\\theta c_1)\\otimes\\dotsb\\otimes(\\theta c_k)\\in C[1]^{\\otimes k}$ is equivariant with respect to the sign-action of $\\Perm_k$ on $C^{\\otimes k}$ and the standard action of $\\Perm_k$ on $C[1]^{\\otimes k}$ for all $k$, and thus it induces an isomorphism of the vector spaces $\\Lambda C$ and $\\Ext C$. We use the following notation:\n\\[\\begin{tikzcd}\n \\OPQ_{klg}: \\hat{\\Ext}_k C \\arrow{r} \\arrow{d}{\\theta^{\\otimes k}} & \\hat{\\Ext}_l C \\arrow{d}{\\theta^{\\otimes l}} \\\\\n \\tilde{\\OPQ}_{klg}: \\hat{\\Lambda}_k C \\arrow{r} & \\hat{\\Lambda}_l C.\n\\end{tikzcd}\\]\nIn fact, $\\OPQ_{klg}$ and $\\tilde{\\OPQ}_{klg}$ are related precisely by the degree shift \\eqref{Eq:DegreeShiftConvII}. \n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\input{\\GraphicsFolder/ibl.tex}\n\\caption{The term $\\OPQ_{k_2 l_2 g_2} \\circ_h \\OPQ_{k_1 l_1 g_1}$ in the $\\IBLInfty$-equation \\eqref{Eq:IBLInfRel}.}\n\\end{subfigure}\n\\par\\bigskip\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\input{\\GraphicsFolder/MC.tex}\n\\caption{The term $\\OPQ_{k' l' g'}\\circ_{h_1,\\dotsc,h_r} (\\PMC_{l_1 g_1},\\dotsc,\\PMC_{l_r g_r})$ in the Maurer-Cartan equation \\eqref{Eq:MaurerCartanEquation}. We remark that the contour of the surface corresponding to $\\OPQ_{k'l'g'}$ starts on the left and continues to the right along the dotted line behind the two trivial cylinders.}\n\\end{subfigure}\n\\par\\bigskip\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\input{\\GraphicsFolder/twisted.tex}\n\\caption{The term $\\OPQ_{k' l' g'}\\circ_{l_1,\\dotsc, l_r} (\\PMC_{l_1 g_1},\\dotsc,\\PMC_{l_r g_r})$ in the twisted operation \\eqref{Eq:TwistedOperations}. The remark for Figure (b) applies here too.}\n\\end{subfigure}\n\\caption[$\\IBLInfty$-relations, Maurer-Cartan equation and twisting graphically.]{Graphical representation of compositions appearing in Definitions \\ref{Def:IBLInfty}, \\ref{Def:MaurerCartan} and \\ref{Def:TwistedOperations} as gluing of connected Riemann surfaces. 
The figure is to be read from top to bottom, the empty cylinder represents the identity, and the resulting surface must be connected. We emphasize that the gluing is not associative (cf. the weak associativity \\eqref{Eq:WeakAssoc}).}\n\\label{Fig:Surfaces}\n\\end{figure}\n%\n%\\noindent The following is a combination of Definition 9.1 and Lemma 2.9 in \\cite{Cieliebak2015}:\n%\n\\begin{Def}[Maurer-Cartan element] \\label{Def:MaurerCartan}\nA \\emph{Maurer-Cartan element} for an $\\IBLInfty$-algebra $\\IBLInfty(C)$ from Definition~\\ref{Def:IBLInfty} is a collection $\\PMC \\coloneqq (\\PMC_{lg})_{l\\ge 1, g\\ge 0}$ of elements $\\PMC_{lg}\\in \\hat{\\Ext}_l C$ which are homogeneous, of finite filtration degree and satisfy the following conditions:\n\\begin{enumerate}[label=\\arabic*)]\n\\item $\\Abs{\\PMC_{lg}} = - 2d(g-1)$.\n\\item $\\Norm{\\PMC_{lg}}\\ge\\gamma \\chi_{0lg}$ with $>$ for $(l,g)=(1,0)$, $(2,0)$ (see Definition~\\ref{Def:IBLInfty} for~$\\chi_{klg}$).\n\\item The \\emph{Maurer-Cartan equation} holds: for all $l\\ge 1$, $g\\ge 0$, we have\n\\begin{equation} \\label{Eq:MaurerCartanEquation}\n\\begin{aligned}\n\\sum_{r\\ge 1}\\frac{1}{r!}  \\sum_{\\substack{l', k', l_1, \\dotsc, l_r\\ge 1 \\\\ g', g_1, \\dotsc, g_r \\ge 0 \\\\ h_1, \\dotsc, h_r \\ge 1 \\\\ \nl_1 + \\dotsb + l_r + l' - k'= l \\\\ g_1 + \\dotsb + g_r + g' +  k' = g + r \\\\ h_1 + \\dotsb + h_r - k' =0 } } \\OPQ_{k' l' g'}\\circ_{h_1,\\dotsc,h_r} (\\PMC_{l_1 g_1},\\dotsc,\\PMC_{l_r g_r}) = 0,\n\\end{aligned} \n\\end{equation}\nwhere we view $\\PMC_{lg}$ as a linear map $\\PMC_{lg}: \\hat{\\Ext}_0 C = \\R \\rightarrow \\hat{\\Ext}_l C$ with $\\PMC_{lg}(1) = \\PMC_{lg}$.\n%where $\\odot$ is the symmetric product of symmetric maps, and $\\circ_{s_1,\\dotsc,s_r}$ denotes the part of the composition with exactly $s_i$ outputs of $\\PMC_{l_i g_i}$ connected to exactly $s_i$ inputs of $\\OPQ_{klg}$ in all possible ways so that the result is symmetric.\n\n\\end{enumerate}\n\\end{Def}\n\n% \\noindent The following comes from Proposition 9.3 and its proof \\cite{Cieliebak2015}:\n\n\\begin{Def}[Twisted operations] \\label{Def:TwistedOperations}\nIn the setting of Definition~\\ref{Def:MaurerCartan}, the \\emph{twisted operations} $\\OPQ_{klg}^\\PMC: \\hat{\\Ext}_k C\\rightarrow \\hat{\\Ext}_l C$ for $k,l\\ge 1$, $g\\ge 0$ are defined by\n\\begin{equation}\\label{Eq:TwistedOperations}\n \\OPQ_{klg}^\\PMC =\\sum_{r\\ge 0} \\frac{1}{r!} \\sum_{\\substack{k', l', l_1, \\dotsc, l_r \\ge 1 \\\\ g', g_1, \\dotsc, g_r \\ge 0\\\\ h_1, \\dotsc, h_r \\ge 1 \\\\\n%k' \\ge k,\\,l'\\le l \\\\\nl_1 + \\dotsb + l_r + l' - k' = l-k \\\\ g_1 + \\dotsb +g_r + g' + k' = g + r + k \\\\ h_1 + \\dotsb + h_r - k' = -k}} \\OPQ_{k' l' g'}\\circ_{h_1,\\dotsc, h_r} (\\PMC_{l_1 g_1},\\dotsc,\\PMC_{l_r g_r}).\n\\end{equation}\nIn \\cite[Proposition~9.3]{Cieliebak2015}, they prove that $(\\OPQ_{klg}^\\PMC)_{k,l\\ge 1, g\\ge 0}$ is again an $\\IBLInfty$-algebra of bidegree $(d,\\gamma)$ on $C$ --- \\emph{the twisted $\\IBLInfty$-algebra}. We denote it by $\\IBLInfty^\\PMC(C)$.\n\\end{Def}\n\n\n\nLet $(\\OPQ_{klg})$ be an $\\IBLInfty$-algebra on $C$. The boundary operator $\\OPQ_{110}: C[1] \\rightarrow C[1]$ induces the boundary operator $\\Bdd_k : \\Ext_k C \\rightarrow \\Ext_k C$ for every $k\\in \\N$ (see \\eqref{Eq:BddExt}). 
Because of the finite filtration degree, $\\Bdd_k$ continuously extends to $\\Bdd_k: \\hat{\\Ext}_k C \\rightarrow \\hat{\\Ext}_k C$.\n%, where we used that $\\hat{\\Ext}_k \\hat{C} \\simeq \\hat{\\Ext}_k C$ (see Remark~\\ref{Rem:ComplTens}).\nThe following is easy to see using \\eqref{Eq:CompositionSimple}:\n\\[\\begin{aligned}\n \\OPQ_{klg} \\circ_1 \\OPQ_{110} &= \\OPQ_{klg} \\circ \\Bdd_k, \\\\\n \\OPQ_{110} \\circ_1 \\OPQ_{klg} &= \\Bdd_l \\circ \\OPQ_{klg}.\n\\end{aligned}\\]\nBecause $\\OPQ_{klg}$ are odd ($\\coloneqq$\\,have odd degree), we have\n\\[ \\begin{aligned}\n  [\\Bdd,\\OPQ_{klg}] &\\coloneqq \\Bdd_l \\circ \\OPQ_{klg} - (-1)^{\\Abs{\\Bdd}\\Abs{\\OPQ_{klg}}} \\OPQ_{klg}\\circ \\Bdd_k \\\\\n   &= \\Bdd_l \\circ \\OPQ_{klg} + \\OPQ_{klg}\\circ \\Bdd_k \\\\\n   &= \\OPQ_{110}\\circ_1 \\OPQ_{klg} + \\OPQ_{klg}\\circ_1 \\OPQ_{110}.\n  \\end{aligned}\\]\nWith this notation, the $\\IBLInfty$-relations \\eqref{Eq:IBLInfRel} for \n$\\OPQ_{210}: \\hat{\\Ext}_2 C \\rightarrow \\hat{\\Ext}_1 C$ and $\\OPQ_{120}: \\hat{\\Ext}_1 C \\rightarrow \\hat{\\Ext}_2 C$ become $[\\Bdd,\\OPQ_{210}] = 0$ and $[\\Bdd,\\OPQ_{120}] = 0$, respectively. If moreover the canonical maps $\\Ext_k \\H(\\hat{C},\\tilde{\\OPQ}_{110}) \\rightarrow \\H(\\hat{\\Ext}_k C, \\Bdd_k)$ induce the isomorphisms $\\hat{\\Ext}_k \\H(\\hat{C},\\tilde{\\OPQ}_{110})  \\simeq \\H(\\hat{\\Ext}_k C, \\Bdd_k)$ for $k=1$, $2$, e.g., when Proposition~\\ref{Prop:Kuenneth} holds, then we obtain the maps\n\\[ \\OPQ_{210}: \\hat{\\Ext}_2\\H(\\hat{C},\\tilde{\\OPQ}_{110})  \\rightarrow \\hat{\\Ext}_1\\H(\\hat{C},\\tilde{\\OPQ}_{110}) \\quad \\text{and}\\quad \\OPQ_{120}: \\hat{\\Ext}_1\\H(\\hat{C},\\tilde{\\OPQ}_{110})  \\rightarrow \\hat{\\Ext}_2\\H(\\hat{C},\\tilde{\\OPQ}_{110}), \\]\nand $(\\H(\\hat{C},\\tilde{\\OPQ}_{110}),\\OPQ_{210},\\OPQ_{120})$ becomes an $\\IBL$-algebra according to Definition~\\ref{Def:IBLInfty} --- the \\emph{induced $\\IBL$-algebra on homology}.\n\n\\begin{Definition}[Homology]\\label{Def:HomIBL}\nWe define the homology of an $\\IBLInfty$-algebra $\\IBLInfty(C)$ by\n\\[ \\HIBL(C)[1] \\coloneqq \\H(\\hat{C}[1], \\OPQ_{110}). \\]\nIt is a graded vector space with the induced filtration. 
If $\\PMC$ is a Maurer-Cartan element for $\\IBLInfty(C)$, we denote by $\\HIBL^\\PMC(C)$ the homology of $\\IBLInfty^\\PMC(C)$.\n\\end{Definition}\n\n%These relations should be understood like the claim that $\\OPQ_{210}\\circ_1 \\OPQ_{210}: \\hat{\\Ext}_3 C \\rightarrow \\hat{\\Ext}_1 C$ induces the zero map $\\HIBL_3(C) \\rightarrow \\HIBL_1(C)$, for example.\n%It is an exercise to check that the first three relations of \\eqref{Eq:IndIBL} are ``algebraically'' equivalent to \\eqref{Eq:Bialgebra} (i.e., ignoring the completions and using just \\eqref{Eq:CompositionSimple}) under the replacement $\\Prod\\mapsto \\OPQ_{210}$, $\\CoProd\\mapsto \\OPQ_{120}$ (notice that $\\OPQ_{210}$, $\\OPQ_{120}$ are odd whereas $\\Prod$, $\\CoProd$ even).\n\\begin{Remark}[Weak $\\IBLInfty$-algebras and $\\mathrm{BV}$-formalism]\\phantomsection\\label{Rem:BVForm}\n\\begin{RemarkList}\n\\item A possible generalization of the $\\IBLInfty$-theory is to allow $k=0$ and $l=0$.\nSuch structures would be called \\emph{weak $\\IBLInfty$-algebras}, while the structures from this section would be called \\emph{strict $\\IBLInfty$-algebras}.\nIn fact, one does not need filtrations and completions to deal with the category of strict $\\IBLInfty$-algebras unless deformations (twisting) are considered.\nOn the other hand, one needs filtrations and completions already for the definition of a morphism of weak $\\IBLInfty$-algebras.\nWe refer to Appendix~\\ref{App:IBLMV} for more details. \n\\item Let $\\CExt C[[\\hbar]]$, resp.~$\\CExt C((\\hbar))$ be the spaces of formal power, resp.~Laurent series in the variable~$\\hbar$ of degree $\\Abs{\\hbar} = 2d$ with coefficients in $\\Ext C$, completed in a suitable way.\n%\\footnote{Note that $\\Ext C$ has two filtrations $\\F^1_\\lambda C$ and $\\F^2_\\lambda C$: the filtration induced from $C$ and the filtration by weights, respectively. In \\cite{Cieliebak2015}, they take the combined filtration $\\F^1_\\lambda C + \\F^2_\\lambda C$. We will discuss some variations in~\\cite{MyPhD}.}\nOperations of an $\\IBLInfty$-algebra on $C$ can be encoded in a degree~$-1$ operator $\\BVOp: \\CExt C[[\\hbar]] \\rightarrow \\CExt C[[\\hbar]]$ called the \\emph{$\\mathrm{BV}$-operator,} while the data of a Maurer-Cartan element~$(\\PMC_{lg})$ give rise to an element $e^{\\PMC}\\in \\CExt C((\\hbar))$. 
The prescriptions are\n\\[ \\BVOp \\coloneqq \\sum_{i\\ge 0}\\BVOp_{i+1} \\hbar^{i}\\quad\\text{and}\\quad e^{\\PMC} \\coloneqq \\sum_{j\\in \\Z} (e^{\\PMC})_j \\hbar^{j}, \\]\nwhere $\\BVOp_i$ and $(e^\\PMC)_j$ for $i\\ge 1$, $j\\in \\Z$ are defined by \n\\begin{align*}\n\\BVOp_i & \\coloneqq \\sum_{\\substack{k\\ge 1, g\\ge 0 \\\\k+g=i}} \\sum_{l\\ge 1} \\hat{\\OPQ}_{klg}\\quad\\text{and} \\\\\n(e^{\\PMC})_j &\\coloneqq \\sum_{r=0}^\\infty \\frac{1}{r!} \\sum_{\\substack{g_1, \\dotsc, g_r \\ge 0 \\\\ g_1 + \\dotsb +g_r - r= j }} \\sum_{l_1, \\dotsc, l_r\\ge 1} \\PMC_{l_1 g_1} \\odot \\dotsb \\odot \\PMC_{l_r g_r}.\n\\end{align*}\nIt can be shown that the $\\IBLInfty$-relations~\\eqref{Eq:IBLInfRel} and the Maurer-Cartan equation~\\eqref{Eq:MaurerCartanEquation} are equivalent to \n\\begin{equation}\\label{Eq:BVEquat}\n \\BVOp\\circ \\BVOp = 0\\quad\\text{and}\\quad \\BVOp(e^\\PMC) = 0,\n\\end{equation}\nrespectively, and that the $\\BVInfty$-operator $\\BVOp^\\PMC$ for the twisted $\\IBLInfty$-structure $(\\OPQ_{klg}^\\PMC)$ satisfies\n\\begin{equation} \\label{Eq:TwistBV}\n\\BVOp^\\PMC(\\bullet)= e^{-\\PMC}\\BVOp(e^\\PMC\\bullet),\n\\end{equation}\nwhere we multiply by $e^{-\\PMC}$ and $e^{\\PMC}$, respectively.\nThese facts were shown in~\\cite{Cieliebak2015} using~\\eqref{Eq:Mix}.\nWe refer to Appendix~\\ref{App:IBLMV} for the precise formulation of the $\\BV$-formalism using a filtered version of the $\\MV$-formalism from \\cite{Markl2015}.\\qedhere\n%\\footnote{One has to check that the compositions \\eqref{Eq:BVEquat} and \\eqref{Eq:TwistBV} are well-defined and pick a suitable completion $\\CExt C$ so that all the constructions work.} \n%Some technical details, e.g., which completion $\\CExt C$ we take so that $\\BVOp$, $e^\\PMC$ and $\\Prod$ are well-defined, and why the compositions in \\eqref{Eq:BVEquat}, \\eqref{Eq:TwistBV} make sense, will be discussed in~\\cite{MyPhD}. %For more about $\\IBLInfty$-algebras as $\\BVInfty$-algebras see~\\cite{Markl2015a}.\n\\end{RemarkList}\n\\end{Remark}\n\nIn our applications in string topology, a canonical $\\dIBL$-algebra $\\dIBL(C)$ with a natural Maurer-Cartan element $\\PMC$ coming from Chern-Simons theory is given, and we want to study $\\dIBL^\\PMC(C)$, which will be a chain model of string topology.\nWe are also interested in the homology $\\HIBL^\\PMC(C)$, the $\\IBL$-structure $\\IBL(\\HIBL^\\PMC(C))$ and possible higher operations on $\\HIBL^\\PMC(C)$ induced by $\\OPQ_{klg}^\\PMC$; however, these higher maps are not chain maps in general.\nThe following proposition summarizes some observations in this situation:\n\n\\begin{Proposition}[Twist of a $\\dIBL$-algebra]\\label{Prop:dIBL}\nLet $\\dIBL(C) = (C,\\OPQ_{110},\\OPQ_{210},\\OPQ_{120})$ be a $\\dIBL$-algebra, and let $\\PMC = (\\PMC_{lg})$ be a Maurer-Cartan element. 
The Maurer-Cartan equation~\\eqref{Eq:MaurerCartanEquation} reduces to the following:\n\\[ \n%\\raisebox{-3ex}{\\parbox{2em}{\\centering MC-eq.}\\quad $\\Biggl\\{$ } \n\\begin{multlined}[b] 0 = \\OPQ_{110}\\circ_1 \\PMC_{lg} + \\OPQ_{120} \\circ_1 \\PMC_{l-1,g} +  \\OPQ_{210}\\circ_2 \\PMC_{l+1,g-1} \\\\[\\jot]+  \\frac{1}{2}\\sum_{\\substack{l_1, l_2\\ge 1 \\\\ g_1, g_2 \\ge 0 \\\\ l_1 + l_2 = l + 1 \\\\ g_1 + g_2 = g}} \\OPQ_{210}\\circ_{1,1}(\\PMC_{l_1 g_1}, \\PMC_{l_2 g_2}) \\end{multlined}\\quad \\forall l\\ge 1, g\\ge 0.\n\\]\nIn particular, the ``lowest'' equation is given by\\footnote{In \\cite[Definition~2.4]{Cieliebak2015}, they define a partial ordering on the signatures $(k,l,g)$.}\n\\begin{equation} \\label{Eq:MCEq}\n(l,g) = (1,0): \\qquad \\OPQ_{110}(\\PMC_{10}) + \\frac{1}{2}\\OPQ_{210}(\\PMC_{10}, \\PMC_{10}) = 0.\n\\end{equation}\nThis can be visualized as\n{\\begingroup \\def\\dist{0.25} %distance between two surfaces\n  \\def\\rad{0.5} % radius of bdd\n  \\def\\ecc{0.1} % eccentricity of bdd\n  \\def\\hght{1} % height of surfaces\n  \\def\\dif{1.5} % distance of two circles\n  \\def\\radO{\\rad} % radius of bdd\n  \\def\\eccO{\\ecc} % eccentricity of bdd\n  \\def\\hghtO{2*\\hght+\\dist} % height of surfaces\n  \\def\\difO{\\dif} % distance of two circles\n  \\def\\gencanc{0.05} % length of extra line in genus\n  \\def\\genecc{20} % eccentricity of genus\n  \\def\\genrad{0.45} % radius of genus\n\\[0 =\\quad \\vcenterline{\\input{\\GraphicsFolder/p110n10.tex}}\\; + \\frac{1}{2} \\quad \\vcenterline{\\input{\\GraphicsFolder/p210n10n10.tex}}. \\]\n\\endgroup}\n\nThe twisted $\\IBLInfty$-algebra $\\dIBL^\\PMC(C)$ consists of the operations $\\OPQ_{110}^\\PMC$, $\\OPQ_{210}^\\PMC$ and~$\\OPQ_{120}^\\PMC$, which we call the \\emph{basic operations}, and of the operations $\\OPQ_{1lg}^\\PMC$ for the pairs $(l,g)\\in (\\N \\times \\N_0) \\setminus \\{(1,0),(2,0)\\}$, which we call the \\emph{higher operations}. 
These operations are given by \n\\[ \\begin{aligned}\n\\OPQ_{110}^\\PMC &= \\OPQ_{110} + \\OPQ_{210}\\circ_1 \\PMC_{10},\\\\\n\\OPQ_{210}^\\PMC &= \\OPQ_{210}, \\\\\n\\OPQ_{120}^\\PMC & = \\OPQ_{120} + \\OPQ_{210}\\circ_1 \\PMC_{20},\\\\\n\\OPQ_{1lg}^\\PMC & = \\OPQ_{210}\\circ_1 \\PMC_{lg}.\n\\end{aligned}\\]\nThis can be visualized as\n{ \\begingroup \\allowdisplaybreaks\n\\def\\dist{0.25} %distance between two surfaces\n  \\def\\rad{0.5} % radius of bdd\n  \\def\\ecc{0.1} % eccentricity of bdd\n  \\def\\hght{1} % height of surfaces\n  \\def\\dif{1.5} % distance of two circles\n  \\def\\radO{\\rad} % radius of bdd\n  \\def\\eccO{\\ecc} % eccentricity of bdd\n  \\def\\hghtO{2*\\hght+\\dist} % height of surfaces\n  \\def\\difO{\\dif} % distance of two circles\n  \\def\\gencanc{0.05} % legth of extra line in genus\n  \\def\\genecc{20} % eccentricity of genus\n  \\def\\genrad{0.45} % radius of genus\n\\begin{align*}\n\\OPQ_{110}^\\PMC & =\\quad\\vcenterline{\\input{\\GraphicsFolder/p110.tex}}\\quad +\\quad \\vcenterline{\\input{\\GraphicsFolder/twist2.tex}}, \\\\[1ex] \n\\OPQ_{210}^\\PMC &=\\quad \\vcenterline{\\input{\\GraphicsFolder/p210.tex}}, \\\\[1ex] \n\\OPQ_{120}^\\PMC &=\\quad \\vcenterline{\\input{\\GraphicsFolder/p120.tex}}+\\quad\\vcenterline{\\input{\\GraphicsFolder/twist1.tex}}, \\\\[1ex]\n\\OPQ^{\\PMC}_{1lg} & =\\quad \\vcenterline{\\input{\\GraphicsFolder/twistn.tex}}.\n\\end{align*}\n\\endgroup}\nThe $\\IBLInfty$-relations satisfied by $(\\OPQ_{klg}^\\PMC)$ read for all $l\\ge 1$, $g\\ge 0$ as follows:\n\\begin{equation}\\label{Eq:IBLInftydIBL}\n\\begin{aligned}\n(3,1,0):\\quad 0& = \\OPQ_{210}^\\PMC \\circ_1 \\OPQ_{210}^\\PMC, \\\\[\\jot]\n(2,l,g):\\quad 0&=\\OPQ^\\PMC_{1lg}\\circ_1 \\OPQ^\\PMC_{210} + \\OPQ^\\PMC_{210}\\circ_1\\OPQ^\\PMC_{1lg}, \\\\[\\jot]\n(1,l,g):\\quad 0&= \\begin{multlined}[t] \\sum_{\\substack{l_1, l_2 \\ge 1 \\\\ g_1, g_2 \\ge 0 \\\\ l_1 + l_2 = l+1 \\\\ g_1 + g_2 = g}} \\OPQ^\\PMC_{1l_1 g_1}\\circ_1 \\OPQ^\\PMC_{1 l_2 g_2}+\\OPQ^\\PMC_{210} \\circ_2 \\OPQ^\\PMC_{1, l+1, g-1}.\\end{multlined}\n\\end{aligned}\n\\end{equation}\nWe call the relations for $(k,l,g) = (1,1,0)$, $(2,1,0)$, $(1,2,0)$, $(3,1,0)$, $(1,3,0)$, $(2,2,0)$, $(1,1,1)$ \\emph{basic relations} because they contain all compositions of basic operations. In the order above, they read:\n\\[\\begin{aligned} \n 0 & =\\OPQ^\\PMC_{110} \\circ_1 \\OPQ^\\PMC_{110}, && \\\\\n0 &=\\OPQ^\\PMC_{110}\\circ_1\\OPQ^\\PMC_{210} + \\OPQ^\\PMC_{210}\\circ_1 \\OPQ^\\PMC_{110}, && \\\\\n0 &= \\OPQ^\\PMC_{110}\\circ_1 \\OPQ^\\PMC_{120} + \\OPQ^\\PMC_{120}\\circ_1 \\OPQ^\\PMC_{110}, && \\\\\n0 &= \\OPQ^\\PMC_{210} \\circ_1 \\OPQ^\\PMC_{210}, && \\leftarrow\\text{Jacobi identity} \\\\\n0 &=\\OPQ^\\PMC_{120} \\circ_1 \\OPQ^\\PMC_{120} + \\OPQ^\\PMC_{110}\\circ_1 \\OPQ^\\PMC_{130} + \\OPQ^\\PMC_{130}\\circ_1 \\OPQ^\\PMC_{110}, && \\leftarrow\\text{co-Jacobi id. up to htpy.} \\\\\n0 & = \\OPQ^\\PMC_{120}\\circ_1 \\OPQ^\\PMC_{210} + \\OPQ^\\PMC_{210}\\circ_1 \\OPQ^\\PMC_{120}, && \\leftarrow\\text{Drinfeld identity} \\\\\n0 & = \\OPQ^\\PMC_{210}\\circ_2 \\OPQ^\\PMC_{120} + \\OPQ^\\PMC_{111}\\circ_1 \\OPQ^\\PMC_{110} + \\OPQ^\\PMC_{110}\\circ_1 \\OPQ^\\PMC_{111}. 
&& \\leftarrow\\text{Involutivity up to htpy.}\n\\end{aligned}\\]\nThe last four equations can be visualized as\n{ \\begingroup \\allowdisplaybreaks\n\\def\\dist{0.25} %distance between two surfaces\n  \\def\\rad{0.4} % radius of bdd\n  \\def\\ecc{0.1} % eccentricity of bdd\n  \\def\\hght{1} % height of surfaces\n  \\def\\dif{1.1} % distance of two circles\n  \\def\\difbig{1.5*\\dif} % distance of two circles\n  \\def\\radO{\\rad} % radius of bdd\n  \\def\\eccO{\\ecc} % eccentricity of bdd\n  \\def\\hghtO{2*\\hght+\\dist} % height of surfaces\n  \\def\\difO{\\dif} % distance of two circles\n  \\def\\gencanc{0.05} % legth of extra line in genus\n  \\def\\genecc{20} % eccentricity of genus\n  \\def\\genrad{0.3} % radius of genus\n\\begin{align*}\n0 & =\\quad\\vcenterline{\\input{\\GraphicsFolder/jacobi.tex}}, \\\\[1ex] \n0&=\\quad \\vcenterline{\\input{\\GraphicsFolder/cojacobi.tex}}+\n\\vcenterline{\\input{\\GraphicsFolder/cojacobi2.tex}}\n+\\quad\\vcenterline{\\input{\\GraphicsFolder/cojacobi3.tex}}, \\\\[1ex] \n0&=\\quad \\vcenterline{\\input{\\GraphicsFolder/drinfeld.tex}}+\\quad\\vcenterline{\\input{\\GraphicsFolder/drinfeld2.tex}},\\\\[1ex]\n0& =\\quad \\vcenterline{\\input{\\GraphicsFolder/involutivity.tex}}\\quad+\\quad\\vcenterline{\\input{\\GraphicsFolder/involutivity2.tex}}\\quad+\\quad\\vcenterline{\\input{\\GraphicsFolder/involutivity3.tex}}.\n\\end{align*}\n\\endgroup}\n\\end{Proposition}\n\\begin{proof}\nThe proof is clear by specializing \\eqref{Eq:IBLInfRel}, \\eqref{Eq:MaurerCartanEquation} and \\eqref{Eq:TwistedOperations}.\n\\end{proof}\n\n\\begin{Remark}[Higher operations]\\phantomsection\\label{Rem:Higher}\n\\begin{RemarkList}\n\\item We see from Proposition~\\ref{Prop:dIBL} that if $\\OPQ_{120}^\\PMC \\circ_1 \\OPQ_{120}^\\PMC = 0$ and $\\OPQ_{210}^\\PMC \\circ_{2} \\OPQ_{120}^\\PMC = 0$, then $[\\Bdd^\\PMC, \\OPQ_{130}^\\PMC] = 0$ and $[\\Bdd^\\PMC, \\OPQ_{111}^\\PMC] = 0$, respectively, and hence the operations $\\OPQ_{130}^\\PMC: \\hat{\\Ext}_1\\HIBL^\\PMC\\rightarrow \\hat{\\Ext}_3\\HIBL^\\PMC$ and $\\OPQ_{111}^\\PMC: \\hat{\\Ext}_1\\HIBL^\\PMC\\rightarrow \\hat{\\Ext}_1\\HIBL^\\PMC$ are well-defined (provided that the assumption of Definition \\ref{Def:HomIBL} holds). Likewise, the higher operation~$\\OPQ_{1lg}^\\PMC$ defines a map $\\hat{\\Ext}_1\\HIBL^\\PMC \\rightarrow \\hat{\\Ext}_l\\HIBL^\\PMC$, provided that the following equation holds:\n\\[ \\OPQ^\\PMC_{210}\\circ_2 \\OPQ_{1,l+1,g-1}^\\PMC + \\sum_{\\substack{l_1, l_2 \\ge 1 \\\\ g_1, g_2 \\ge 0 \\\\ l_1 + l_2 = l+1 \\\\ g_1 + g_2 = g \\\\ (l_i,g_i)\\neq (1,0)}} \\OPQ^\\PMC_{1l_1 g_1}\\circ_1 \\OPQ^\\PMC_{1 l_2 g_2} = 0. \\]\nThis expression is just the left-over after subtracting the commutator $[\\OPQ_{1lg}^\\PMC,\\OPQ_{110}^\\PMC] = \\OPQ_{110}^\\PMC \\circ_1 \\OPQ_{1lg}^\\PMC + \\OPQ_{1lg}^\\PMC \\circ_1 \\OPQ_{110}^\\PMC$ from \\eqref{Eq:IBLInftydIBL}.\n%Later, when the twist realizes a homotopy transfered $\\IBLInfty$-algebra. Note that this is a similar situation as with higher $\\AInfty$-Massey product. 
\n\\item In the genus-$0$ case, i.e., $\\OPQ_{1lg}^\\PMC = 0$ whenever $g\\ge 1$, relations \\eqref{Eq:IBLInftydIBL} reduce to\n\\begin{align*}\n 0 & = \\OPQ_{210}\\circ_1\\OPQ_{210}, \\\\\n 0 & = \\OPQ_{1lg}^\\PMC\\circ_1\\OPQ_{210} + \\OPQ_{210}\\circ_1\\OPQ_{1lg}^\\PMC, \\\\\n 0 & = \\sum_{\\substack{l_1,l_2\\ge 1\\\\ g_1, g_2 \\ge 0 \\\\ l_1 + l_2 = l + 1 \\\\ g_1 + g_2 = g}} \\OPQ_{1l_1 g_1}^\\PMC \\circ_1 \\OPQ_{1l_2 g_2}^\\PMC.\n\\end{align*}\nThe first relation is the Jacobi identity for $\\OPQ_{210}$, the second relation is a generalization of the Drinfeld identity to higher coproducts $\\OPQ_{1l0}^\\PMC$, and the third relations are \\emph{$\\CoLInfty$-relations} for $\\OPQ_{1lg}^\\PMC$.\nTherefore, allowing $g\\ge 0$, we see that the twisted $\\dIBL$-algebra $\\OPQ_{110}^\\PMC$, $\\OPQ_{210}$, $(\\OPQ_{1lg}^\\PMC)$ is, in fact, a \\emph{quantum $\\CoLInfty$-algebra.}\\qedhere\n\\end{RemarkList}\n\\end{Remark}\n\n%It corresponds to complete gluings of a surface of signature $(k_1, l_1, g_1)$ at its $h$ outgoing ends, plus an appropriate number of trivial cylinders, to $h$ ingoing ends of a surface of signature $(k_2,l_2,g_2)$, plus an appropriate number of trivial cylinders. The signature consists of the number of incoming ends, outgoing ends and genus.\n%\n%\n%It corresponds to complete gluings of $r$ connected surfaces of signatures $(0,l_i,g_i)$, plus $k$ trivial cylinders, at their outgoing ends to the ingoing ends of a connected surface of signature $(k',l',g')$, plus an appropriate number of trivial cylinders, to obtain a connected surface of signature $(k,l,g)$.\n%\n%It corresponds to complete gluings of $r$ connected surfaces of signatures $(0,l_i,g_i)$ at their $h_i$ outgoing ends to the ingoing ends of a connected surface of signature $(k',l',g')$, plus an appropriate number of trivial cylinders, to obtain a connected surface of signature $(0,l,g)$.\n\n\\end{document}\n", "meta": {"hexsha": "3e4984da4f6e5703145293de7c0cdc7562641ca8", "size": 40652, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Subfiles/AlgStr_IBL.tex", "max_stars_repo_name": "p135246/phd-thesis", "max_stars_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Subfiles/AlgStr_IBL.tex", "max_issues_repo_name": "p135246/phd-thesis", "max_issues_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Subfiles/AlgStr_IBL.tex", "max_forks_repo_name": "p135246/phd-thesis", "max_forks_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.7652370203, "max_line_length": 681, "alphanum_fraction": 0.6864606907, "num_tokens": 15507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7718434873426303, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5848986544352027}}
{"text": "\\section{Critical points of the Energy in Euclidean space}\n", "meta": {"hexsha": "ddb388a10ae93019e58a34ec7bf671a0f5885b5a", "size": 59, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/CPoE_EuclideanSpace.tex", "max_stars_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_stars_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-12-28T05:53:38.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T05:56:59.000Z", "max_issues_repo_path": "src/CPoE_EuclideanSpace.tex", "max_issues_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_issues_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/CPoE_EuclideanSpace.tex", "max_forks_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_forks_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5, "max_line_length": 58, "alphanum_fraction": 0.813559322, "num_tokens": 13, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5848986541903077}}
{"text": " %!TEX root = ../dissertation_vkslm.tex\n\n\\chapter{Neural Networks and Deep Learning} \\label{ch:nndl}\nThis chapter introduces and gives a brief overview of the theoretical foundations of Deep Neural Networks. In the first section, basic concepts of Artificial Neural Networks are presented, starting from the original models. The next section gives an introduction and a general overview of the deep learning research field, discussing the recent advancements that enabled training Deep Neural Networks successfully. Finally, we briefly describe the fundamentals of the Convolutional Neural Network and present the Fully Convolutional Networks, which is a model used in this work, and we present the techniques used for training our model.\n\n\\section{Artificial Neural Networks}\n\nA biological neural network is an essential part of human brain. The human brain is a highly complex information processing system capable of interpreting large amounts of information and making decisions. It is a complex, non-linear and parallel ``computer'' consisting of millions of connected neurons \\cite{haykin2009neural}. In many tasks, the human brain is more efficient than computers. For instance, the human brain can recognize a familiar face in about 100-200 ms, while modern computers require minutes or even hours for solving the same problem \\cite{haykin2009neural}.\n\nBased on examples and feedback from the ``teacher'', our brain allows us learning how to\ndistinguish an apple from an orange or recognize letters. Moreover, even without the ``teacher'', we are still able to group similar patterns. Those and other strengths of human brain challenged scientists to emulate those processes by researching how to use machines for tasks that are common for humans. Moreover, one of the concepts that appeared as the result of that research is the Artificial neural network (ANN) concept. \n\n\\subsection{Artificial Neuron}\nThe first model to simulate a single neuron, which is the elemental building block of neural networks, was the perceptron \\cite{rosenblatt1958perceptron}. A single neuron implements a mathematical function given its inputs, to provide an output, as described in Equation \\ref{eq:activation} and illustrated in Figure \\ref{perceptron}.\n\\begin{equation}\ny = f(\\sum_{i=1}^{n} x_{i}w_{i} + b)\n\\label{eq:activation}\n\\end{equation}\n\nIn this equation, $x_{i}$ is the input $i$, $w_{i}$ is the weight associated with input $i$, $b$ is a\nbias term and $f$ the activation function. \n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=3.4in]{perceptron}\n\\caption{Perceptron representation. $x_{1}$ and $x_{2}$ represent the input signal, $b$ the bias term, $w_{0}$, $w_{1}$, $w_{2}$ the weights, $f_{\\:\\sum}$ is the activation function (in this case, a step function) and the output signal is given by $y$.}\n\\label{perceptron}\n\\end{figure}\n\n%The learning rule algorithm for learning relations in data with a neuron can be summarized as: for every input $x_{i}$, make a linear prediction about its label: $y^*_{i} = w^T x_{i}$\n%and update the weights ($w$) as,\n%\\begin{equation}\n%w \\leftarrow w + x_{i}(y_{i} - y^*_{i})\n%\\end{equation}\n\n\nModels based on perceptrons have severe limitations. An evaluation by Minsky and Papert \\cite{papert1969perceptrons} showed that a perceptron cannot model data that is not linearly separable, such as a simple XOR operator. 
It was observed that ``for data sets that are not linearly separable, the perceptron learning algorithm will never converge'' \\cite{bishop2006pattern}. This observation reflects the perceptron's limited representational power: the learning rule only converges to the correct solution if the data is linearly separable. The Multilayer Perceptron (MLP) with a single hidden layer, however, has been proven \\cite{hornik1989multilayer} to approximate any continuous function on a compact input domain to arbitrary precision. For this reason, MLPs are said to be \\textit{universal function approximators} \\cite{hornik1989multilayer}.\n\n\\section{Multi-layer Neural Networks}\nMulti-layer Neural Networks such as the MLP are built by stacking several neuron units together. Specifically, in the MLP, neurons are grouped into layers, and the layers are connected sequentially, without connections between neurons within the same layer, as shown in Figure \\ref{mlp}. Each neuron in the MLP is normally fully connected to all the neurons in the next layer, with its own set of weights. The first layer of an MLP is usually the input sample, and the last layer is the output of the network. The layers between the input and output layers are called hidden layers.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{mlp}\n\\caption{Multilayer perceptron representation. Each layer contains several perceptron units, which are then connected to units in the subsequent layer.}\n\\label{mlp}\n\\end{figure}\n\nAn MLP can be thought of as a function that maps input vectors to output vectors, parameterized by the neuron connection weights. The output of a layer\nis calculated by applying the neuron activation function to all neurons in the layer, as\nshown in Equation \\ref{eq:outputmlp}\n\\begin{equation}\ny^{(l)} = f(W^{(l)} y^{(l-1)} + b^{(l)})\n\\label{eq:outputmlp}\n\\end{equation}\nwhere $W^{(l)}$ is a matrix of weights assigned to each pair of neurons from layers $l$ and $l-1$, and $b^{(l)}$ is a vector of bias terms for each neuron in layer $l$. Calculating the output starting from the first hidden layer up to the output layer is also referred to as the \\textit{forward propagation} phase.\n\n\\subsection{Loss Function}\nIn order to train the model, an objective function, also called a loss function, is defined.\nThis function measures the compatibility between a prediction and the ground truth label. The\nobjective of training is to minimize the sum (or, equivalently, the mean) of\nthis error function applied to all examples in the dataset. Commonly used loss functions\nare the Mean Squared Error (MSE) and the Cross-Entropy (CE). Equations \\ref{eq:mse} and \\ref{eq:ce} describe the MSE and CE, respectively, for a sample of the dataset,\n\n\\begin{equation}\nE =  \\frac{1}{N} \\sum_{c=1}^{N} (t_{(c)} - y_{(c)} )^2 \n\\label{eq:mse}\n\\end{equation}\n\n\\begin{equation}\nE = - \\sum_{c=1}^{N} t_{(c)} \\: \\log \\: y_{(c)} \n\\label{eq:ce}\n\\end{equation} where $y_{(c)}$ is the output of unit $c$ in the last layer, $t_{(c)}$\nis the true label for unit $c$, and $N$ is the number of units in the last layer.\n\\subsection{Backpropagation}\nThe phase called backpropagation consists in minimizing the error $E$ between the network output and the expected target. The algorithm works by calculating the derivatives of the error\nfunction with respect to the model\u2019s parameters (weights and biases). 
Then, it propagates the error from the output layer back to the initial layers, one layer at a time, in order to update the weights of the neuron connections to minimize the error $E$. The weights are updated according to Equation \\ref{eq:bp}\n\n\\begin{equation}\nw(t+1) = w(t) - \\alpha\\ \\nabla E(w(t))\n\\label{eq:bp}\n\\end{equation}\nwhere $E$ is the error measure defined as the sum of the loss over the entire training set, and $t$ indicates iterations (time steps). The term $\\nabla E$ is the gradient vector, which is computed by applying the chain rule through the layers of the NN \\cite{rumelhart1985learning}. The parameter $\\alpha$ is a hyperparameter called the learning rate. Suitable learning rate values help the optimization avoid getting stuck at a poor local minimum or diverging. \n\n\\subsection{Activation Functions}\n\\begin{figure}[!htb]\n\\centering\n \\subfloat[Sigmoid]{\\includegraphics[width=2.5in]{sigmoid}} \n\\hspace*{0.2in} % separation between the subfigures\n\\subfloat[Tanh] {\\includegraphics[width=2.5in]{tanh}}\n\\\\\n\\subfloat[ReLU] {\\includegraphics[width=2.5in]{relu}}\n\\hspace*{0.2in} % separation between the subfigures\n\\subfloat[LReLU] {\\includegraphics[width=2.5in]{lrelu}}\n\n\n\\caption{Neural network activation functions.} \\label{fig:activation}\n\\end{figure}\n\\subsubsection{Sigmoid}\nThe sigmoid activation function is a non-linear function in the range of $]0, 1[$; it has the mathematical form defined in Equation \\ref{eq:sigmoid} and is shown in Figure \\ref{fig:activation} (a).\n\\begin{equation}\nf(x) = \\frac{1}{1+e^{-x}}\n\\label{eq:sigmoid}\n\\end{equation}\nIt takes a real-valued number and transforms it to the range between 0 and 1. Large negative numbers are mapped close to 0 and large positive numbers close to 1. \n\\subsubsection{Tanh}\n\nThe Hyperbolic Tangent (tanh) non-linearity is shown in Figure \\ref{fig:activation} (b) and is defined in Equation \\ref{eq:tanh}. It limits a real-valued number to the range of $]-1, 1[$\n\\begin{equation}\nf(x) = \\frac{e^x - e^{-x}}{e^x + e^{-x}}\n\\label{eq:tanh}\n\\end{equation}\n\n\\subsubsection{Rectified Linear Unit}\n\nThe Rectified Linear Unit (ReLU) has become very popular in the last few years and was one of the main discoveries that enabled the training of deeper neural networks. It computes the function \n\\begin{equation}\nf(x)=\\max \\: (0,x)\n\\label{eq:relu}\n\\end{equation}\nThis activation function can simply be regarded as a threshold at zero, see Figure \\ref{fig:activation} (c). \n\n\\subsubsection{Leaky Rectified Linear Units}\n\nThe Leaky Rectified Linear Unit (LReLU) is an attempt to improve the ReLU. Instead of being zero when $x < 0$, a leaky ReLU has a small slope for negative inputs (around 0.01). The function computes\n\n\\begin{equation}\nf(x)=\\begin{cases}\n      \\alpha x, & \\text{if}\\ x < 0\\\\\n      x, & \\text{otherwise}\n    \\end{cases}\n\\label{eq:lrelu}\n\\end{equation} where $\\alpha$ is a small constant, see Figure \\ref{fig:activation} (d). \n\n\n\\section{Deep Neural Networks}\nDeep architectures are characterized by the multiple levels of non-linear operations\ncontained in a neural network. While many of the early successful applications of neural networks used shallow architectures (up to 3 hidden layers), it was found that the mammalian brain is organized in a deep architecture. The brain appears to process information through multiple stages, which is particularly clear in the primate visual system \\cite{bengio2009learning}. 
\n\nIt was observed in many experiments that deep networks are harder to train than shallow networks, and that training deep networks often gets stuck in apparent local minima (or plateaus) when starting with a random initialization of the network parameters. Deep Neural Networks have been investigated for decades, but training deep networks consistently yielded poor results. \n\nThe proposal of the Convolutional Neural Network \\cite{lecun1995convolutional} and some recent theoretical discoveries made training deep neural networks feasible. The CNN is a particular type of deep, feedforward neural network that is easier to train and generalizes better than networks with full connectivity between adjacent layers \\cite{lecun2015deep}. In addition, these theoretical discoveries include unsupervised pre-training of the layers \\cite{hinton2006fast}, the Rectified Linear Unit (ReLU) \\cite{nair2010rectified} as the activation function, and the regularization techniques Dropout \\cite{srivastava2014dropout} and Batch Normalization \\cite{ioffe2015batch}. \n\n\n\\section{Convolutional Neural Networks}\nCNNs combine three architectural ideas: local receptive fields, shared\n(tied) weights and spatial or temporal sub-sampling \\cite{lecun1998gradient}. When trained with appropriate regularization, CNNs achieve state-of-the-art performance\non visual object recognition and image classification tasks \\cite{lecun2015deep}. The CNN is composed of several layers of trainable filters (convolutional layers) and local neighborhood pooling operations (pooling layers) stacked in an alternating sequence starting with the raw input images. To illustrate the concept, Figure \\ref{lenet} shows an example of a CNN architecture, namely LeNet \\cite{lecun1998gradient}, one of the first applications of the CNN, used for handwriting recognition.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{lenet}\n\\caption{LeNet \\cite{lecun1998gradient}, a convolutional neural network used for handwriting recognition.}\n\\label{lenet}\n\\end{figure}\n\nFigure \\ref{learned} provides an example of the filters learned in a convolutional layer of a network trained for object classification, showing that this type of architecture is\ncapable of learning interesting feature detectors, similar to edge detectors.\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{learned}\n\\caption{Example of feature maps learned by the CNN proposed by \\cite{krizhevsky2012imagenet} for object classification.}\n\\label{learned}\n\\end{figure}\n\n\n\\subsection{Convolutional Layer}\nConvolutional layers have trainable filters that are applied\nacross the entire input \\cite{lecun1995convolutional}; the response of a filter is called a feature map. For each filter, each neuron is connected only to a subset of\nthe neurons in the previous layer. In the case of 2D input (such as images), the filters\ndefine a small area (e.g., 3x3 or 5x5 pixels), and each neuron is connected only to the\nnearby neurons (in this area) in the previous layer. The weights are shared across neurons,\nleading the filters to learn frequent patterns that occur in any part of the image.\n\nThe definition of a 2D convolution layer is presented in Equation \\ref{eq:cnn}. 
A 2D convolution layer is the application of a discrete convolution of the inputs $y^{(l-1)}$ with a filter $w^{(l)}$, adding a bias $b^{(l)}$, followed by the application of an activation function $f$:\n\n\\begin{equation}\ny^{(l)}_{rc} = f(\\sum_{i=1}^{N_{r}} \\sum_{j=1}^{N_{c}} y^{(l-1)}_{(r+i-1)(c+j-1)} w^{(l)}_{ij} + b^{(l)} )\n\\label{eq:cnn}\n\\end{equation}\nwhere $y^{(l)}_{rc}$ is the output at position $\\{r,c\\}$, $N_{r}$ and $N_{c}$ are the number of rows and columns, respectively, of the 2D filter, $w^{(l)}_{ij}$ is the filter value at position $\\{i,j\\}$, and $y^{(l-1)}_{(r+i-1)(c+j-1)}$ is the input value at position $\\{r+i-1,c+j-1\\}$ of the previous layer.\n\n\nThe equation above is defined for all possible applications of the filter, as in Figure \\ref{fig:convop} (a), that is, for\n$r \\in \\{1, ..., I_{r} - N_{r} + 1\\}$ and $c \\in \\{1, ..., I_{c} - N_{c} + 1\\}$, where $I_{r}$ and $I_{c}$ are the number of rows and columns in the input to this layer. The convolutional layer can either apply the filters at all possible positions\\footnote{Animated representation can be found in: \\url{https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/no_padding_no_strides.gif}}, or use a different strategy. Instead of applying the filter for all possible $\\{r, c\\}$ pairs, only the pairs with distance\n$s$ are used, where $s$ is called the stride. A stride $s = 2$\\footnote{Animated representation of strides can be found in: \\url{https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/no_padding_strides.gif}} is equivalent to applying the convolution at half of the possible pairs, as in Figure \\ref{fig:convop} (b).\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat[Convolving a 3x3 kernel over a 4x4 input, with 1x1 stride.] {\\includegraphics[width=\\textwidth]{conv-operation}}\n\\\\\n\\subfloat[Convolving a 3x3 kernel over a 5x5 input, with 2x2 stride.] {\\includegraphics[width=\\textwidth]{conv-operation-strides}}\n\n\n\\caption{Convolution operation with (a) stride 1x1 and (b) stride 2x2. Extracted from \\cite{dumoulin2016guide}.} \\label{fig:convop}\n\\end{figure}\n\n\n\\section{Fully Convolutional Network Architecture}\n\nThe Fully Convolutional Network (FCN) \\cite{long2015fully} is a CNN modified for dense\npredictions. This architecture was developed under the observation that although a typical CNN is designed for spatial inputs, such as images, it normally discards spatial information in its fully connected (FC) layers. \n\nFCNs are, thus, rooted in the observation that spatial filters, which are learned during the CNN training, are useful for extracting low-level features, but FC layers are needed to incorporate high-level reasoning. The main idea of the FCN architecture is to convert the FC layers of a CNN into convolutional layers, expecting that the network retains the ability to learn high-level information and at the same time preserves spatial information. The FCN becomes, therefore, ``fully convolutional'' by having end-to-end convolutional layers. \n\nWhile CNNs are typically built as a sequence of convolutional, pooling, and fully connected layers, the FCN adds an expanding path built with a transposed convolutional layer \\cite{long2015fully}. The expanding path recovers spatial information by merging features skipped from the various resolution levels of the contracting path.\n\nUnlike a CNN, which learns a general, nonlinear function that characterizes the input, FCNs learn an end-to-end nonlinear mapping from one input image to another. 
Because pixel-wise prediction is used, the label space is also transformed from a scalar to a two-dimensional image, where each pixel value represents the object class of its corresponding input pixel. An FCN with an input image and label is illustrated in Figure \\ref{fcn-arch}.\n\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{fcn-arch}\n\\caption{Fully convolutional networks can efficiently learn to make dense predictions for\nper-pixel tasks. Extracted from \\cite{long2015fully}.}\n\\label{fcn-arch}\n\\end{figure}\n\n\\subsection{Transposed Convolution}\n%https://arxiv.org/pdf/1603.07285.pdf p 19\n\nTransposed convolutions (also called deconvolutions or backward convolutions) work by swapping the forward and backward passes of a convolution. Whether it is a direct convolution or a transposed convolution is determined by how the forward and backward passes are computed \\cite{dumoulin2016guide}. Figure \\ref{fig:transposed} illustrates the general backward convolution process \\cite{dumoulin2016guide}.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{transposed}\n\n\\caption{Upsampling with transposed convolutions. Upsampling is done by padding\n(white) the original pixels (blue) and convolving them with a filter (gray). The result is\nan upsampled image (green). Here a stride of one is used, illustrated from left to right. Extracted from \\cite{dumoulin2016guide}.} \\label{fig:transposed}\n\\end{figure}\n\n\\section{Deep Neural Networks Training}\n\n\\subsection{Weight Initialization}\nWhen the training of the neural network starts, the initial values of the weights and biases need to be provided. Several weight initialization techniques have been proposed and shown to smooth the model convergence.\n\n\\subsubsection{Uniform}\nThe uniform initialization is the simplest: the model parameters are drawn from a uniform distribution in the range $[l_1, l_2]$, where $l_1$ and $l_2$ are two constants.\n\n\\subsubsection{Glorot}\nThis weight initialization technique was proposed in \\cite{glorot2010understanding}. The dynamics of the activations and of the gradients of the weights were studied, and it was found that the neural network converges faster using the weight initialization described in Equation \\ref{eq:glorot}.\n\n\\begin{equation}\nW \\sim U \\; \\left[ -\\sqrt{\\frac{6}{f_{in}+f_{out}}},\\; \\sqrt{\\frac{6}{f_{in}+f_{out}}} \\; \\right] \n\\label{eq:glorot}\n\\end{equation}\nwhere $W$ are the weights, $U$ is a uniform distribution, and $f_{in}$ and $f_{out}$ are the number of units in the previous layer and the next layer, respectively.\n\n\\subsection{Optimization Algorithms}\nTraining the network consists in minimizing the error function, based on the gradients of the parameters with respect to the cost function, by updating the weights and biases, expecting that the network learns how to perform the task it is being trained for.\n\n\\subsubsection{Stochastic Gradient Descent}\n\nThe Stochastic Gradient Descent (SGD) algorithm, as defined in \\cite{bishop2006pattern}, can be summarized as a series of iterations over mini-batches of the dataset, performing forward-propagation\nfollowed by a back-propagation to calculate the error derivatives with respect to the parameters. The weights are updated using these derivatives, and a new mini-batch is used. This procedure is repeated until a convergence criterion is reached. 
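A minimal NumPy sketch of this loop for a toy linear model may look as follows (the function names and the data are illustrative and not taken from any specific framework):\n\\begin{verbatim}\nimport numpy as np\n\n# Toy case: linear model y = w . x with squared error E = (t - y)^2;\n# forward and backward are written out explicitly for this model.\ndef forward(w, x):\n    return np.dot(w, x)\n\ndef backward(w, x, t, y):\n    return 2 * (y - t) * x  # error derivative dE/dw\n\ndef sgd(w, data, alpha=0.1, epochs=20):\n    for _ in range(epochs):    # one epoch = one pass over the data\n        for x, t in data:      # mini-batches of size one here\n            y = forward(w, x)                     # forward-propagation\n            w = w - alpha * backward(w, x, t, y)  # weight update\n    return w\n\ndata = [(np.array([1.0, 2.0]), 5.0), (np.array([2.0, 1.0]), 4.0)]\nw = sgd(np.zeros(2), data)  # approaches the exact solution w = (1, 2)\n\\end{verbatim}\n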
Common convergence criteria are: reaching a maximum number of epochs (the number of times that the whole training set has been used); reaching a desired value of the cost function; or observing no improvement of the cost function over a number of iterations.\n\n\\subsubsection{Adaptive Moment Estimation}\nThe Adaptive Moment Estimation (ADAM) \\cite{kingma2014adam} is a stochastic optimization method that can adaptively tune the learning rate per parameter and also keeps exponentially decaying averages of past gradients, $m_t$, and of past squared gradients, $v_t$:\n\n\\begin{subequations}\n\\begin{gather}\n\\label{adamfirst}\nm_t = \\beta_1 m_{t-1} + (1 - \\beta_1) g_t \\\\   \nv_t = \\beta_2 v_{t-1} + (1 - \\beta_2) g_t^2  \n\\end{gather}\n\\end{subequations}\nwhere $m_t$ and $v_t$ are estimates of the first moment and the second moment of the gradients, respectively, $\\beta_1$ and $\\beta_2$ are the decay rates, and $g$ is the gradient with respect to $w$. \n\nSince the moments are biased towards zero mostly during the initial time steps, the authors work around these biases by computing bias-corrected estimates:\n\\begin{subequations}\n\\begin{gather}\n\\hat{m}_t=\\dfrac{m_t}{1 - \\beta^t_1} \\\\\n\\hat{v}_t=\\dfrac{v_t}{1 - \\beta^t_2}\n\\label{adamend}\n\\end{gather}\n\\end{subequations}\nThen, they use these computed variables to update the parameters:\n\\begin{equation}\nw_{t+1} = w_{t} - \\alpha \\dfrac{\\hat{m}_t}{\\sqrt{\\hat{v}_t} + \\epsilon} \n\\label{adamend2}\n\\end{equation}\nThe authors propose default values of 0.9 for $\\beta_1$, 0.999 for $\\beta_2$, and $10^{-8}$ for $\\epsilon$.\n\n\n", "meta": {"hexsha": "23d57078957c06d0e9aece609391532d989d0637", "size": 21642, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "conteudo/ch2.tex", "max_stars_repo_name": "victormelo/dissertation", "max_stars_repo_head_hexsha": "942bd6e57796d760e152dbfcc31745950dc3fd32", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-19T15:39:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-19T15:39:48.000Z", "max_issues_repo_path": "conteudo/ch2.tex", "max_issues_repo_name": "victormelo/dissertation", "max_issues_repo_head_hexsha": "942bd6e57796d760e152dbfcc31745950dc3fd32", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "conteudo/ch2.tex", "max_forks_repo_name": "victormelo/dissertation", "max_forks_repo_head_hexsha": "942bd6e57796d760e152dbfcc31745950dc3fd32", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-19T15:39:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-19T15:39:50.000Z", "avg_line_length": 75.6713286713, "max_line_length": 856, "alphanum_fraction": 0.7755290639, "num_tokens": 5491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721305, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5847446174123375}}
{"text": "\\subsection{Consequences of Uniformisation}\r\n\\begin{corollary}\r\n    If $R$ is a compact Riemann surface with genus at least $2$, then it is uniformised by $\\mathbb D$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    It cannot be uniformised by $\\mathbb C$ or $\\mathbb C_\\infty$.\r\n\\end{proof}\r\n\\begin{corollary}[Riemann Mapping Theorem]\r\n    If $D\\subsetneq\\mathbb C$ is a simply connected domain, then $D$ is conformally equivalent to $\\mathbb D$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    We only have to show that $D$ is not conformally equivalent to $\\mathbb C_\\infty$ or $\\mathbb C$.\r\n    It certainly cannot be conformally equivalent to $\\mathbb C_\\infty$ since $D$ is not compact.\r\n    Suppose $f$ is a conformal equivalence from $\\mathbb C$ to $D$.\r\n    Casorati-Weierstrass shows that the singularity of $f$ at $\\infty$ is not essential (as $f$ has to be injective).\r\n    So $\\infty$ is either removable or a pole, therefore $f$ extends to $\\bar{f}:\\mathbb C_\\infty\\to D\\cup\\{\\bar{f}(\\infty)\\}\\subset\\mathbb C_\\infty$.\r\n    But now $\\mathbb C_\\infty$ is compact, so $\\bar{f}:\\mathbb C_\\infty\\to\\mathbb C_\\infty$ is surjective, hence $\\bar{f}(\\infty)=\\infty$ and thus $D=\\mathbb C$, contradiction.\r\n\\end{proof}\r\n\\begin{corollary}[Picard's Theorem]\r\n    Any analytic function $\\mathbb C\\to\\mathbb C\\setminus\\{0,1\\}$ is constant.\r\n\\end{corollary}\r\nOf course we can replace $\\{0,1\\}$ by any two distinct points in $\\mathbb C$.\r\n\\begin{proof}\r\n    $\\mathbb C\\setminus\\{0,1\\}$ is uniformised by $\\mathbb D$ by Example Sheet.\r\n    The statement then follows from the lifting lemma.\r\n\\end{proof}", "meta": {"hexsha": "b03f3ba867cc59497a9b86c8927e3c89c7a38434", "size": 1579, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "16/cons.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "16/cons.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "16/cons.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.7307692308, "max_line_length": 177, "alphanum_fraction": 0.6972767574, "num_tokens": 507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7745833841649232, "lm_q1q2_score": 0.5847446138268151}}
{"text": "\\section{Criterion}\nThe criterion and divergences listed here can be used to quantify the \"distance\"\nbetween two distributions. Hence, in conjunction with torch optimizers,\none can minimize said difference to learn the paramters of a distribution. For\nsake of notation clarity, $p$ is the true distribution and $q$ is the learned\ndistribution. Hence we \"fit\" $q$ to match $p$. In addition, we provide the\nMonte Carlo approximation.\n\n\\begin{center}\n\\begin{tabular}{l|lC{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}}\n\\toprule\n\\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{2}{c}{$P$} & \\multicolumn{2}{c}{$Q$} \\\\\n\\cmidrule(lr){3-4}  \\cmidrule(lr){5-6}\n\\multicolumn{1}{l}{} & \\multicolumn{1}{l}{Criterion} &  \\multicolumn{1}{c}{$\\log p(x)$} &  \\multicolumn{1}{c|}{$x \\sim P$} &  \\multicolumn{1}{c}{$\\log q(x)$} &  \\multicolumn{1}{c}{$x \\sim Q$} \\\\\n\\midrule\n\\multirow{5}{*}{Divergence}    & Cross-Entropy    &        & \\cmark & \\cmark &        \\\\\n                               & Perplexity       &        & \\cmark & \\cmark &        \\\\\n                               & Exponential      & \\cmark & \\cmark & \\cmark &        \\\\\n                               & Forward KL       & \\cmark & \\cmark & \\cmark &        \\\\\n                               & Reverse KL       & \\cmark &        & \\cmark & \\cmark \\\\\n                               & JS Divergence    & \\cmark & \\cmark & \\cmark & \\cmark \\\\\n\\midrule\n\\multirow{4}{*}{Adversarial}   & GAN              &        & \\cmark &        & \\cmark \\\\\n                               & MMGAN            &        & \\cmark &        & \\cmark \\\\\n                               & WGAN             &        & \\cmark &        & \\cmark \\\\\n                               & LSGAN            &        & \\cmark &        & \\cmark \\\\\n% \\midrule\n% \\multirow{1}{*}{Variational}   & ELBO             &        & \\cmark & \\cmark &        \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\n\\subsection{Divergences}\n\n\\subsubsection{Cross-Entropy}\n\\begin{equation}\n  \\begin{aligned}\n    H(p, q) =& - \\int p(x) \\log q(x) dx \\\\\n    =& - \\frac{1}{n} \\sum_{x \\sim p} \\log q(x)\n  \\end{aligned}\n\\end{equation}\n\\subsubsection{Perplexity}\n\\begin{equation}\n  \\begin{aligned}\n    H(p, q) =& \\exp \\left( - \\int p(x) \\log q(x) dx \\right) \\\\\n    =& \\exp \\left(- \\frac{1}{n} \\sum_{x \\sim p} \\log q(x) \\right)\n  \\end{aligned}\n\\end{equation}\n\n\\subsubsection{Exponential}\n\n\\subsubsection{Forward KL Divergence}\n\\begin{equation}\n  \\begin{aligned}\n    H(p, q) =& \\int p(x) \\log \\frac{p(x)}{q(x)} dx  \\\\\n    =&  \\frac{1}{n} \\sum_{x \\sim p} \\log \\frac{p(x)}{q(x)}\n  \\end{aligned}\n\\end{equation}\n\\subsubsection{Reverse KL Divergence}\n\\begin{equation}\n  \\begin{aligned}\n    H(p, q) =& \\int q(x) \\log \\frac{q(x)}{p(x)} dx  \\\\\n    =&  \\frac{1}{n} \\sum_{x \\sim q} \\log \\frac{q(x)}{p(x)}\n  \\end{aligned}\n\\end{equation}\n\\subsubsection{Jensen-Shannon Divergence}\n\\subsubsection{Earth Mover's Distance}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Adversarial Loss}\nAdversarial Losses are criterion functions that allow for sample-sample based\ntraining between models $p$ and $q$. 
More formally, it hides a Discriminator\nmodel that attempts to discriminate between the real data from $p$ and fake data\ngenerated from $q$.\n\\subsubsection{Adversarial Loss (Base Class)}\n\\subsubsection{GAN Loss}\n\\subsubsection{MMGAN Loss}\n\\subsubsection{WGAN Loss}\n\\subsubsection{LSGAN Loss}\n\\subsubsection{Gradient Penalty}\n\\subsubsection{Spectral Norm}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{ELBO}\n", "meta": {"hexsha": "ec6882aa642e46865214470f0781763542958c4e", "size": 3559, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/subsections/criterion.tex", "max_stars_repo_name": "nextBillyonair/DPM", "max_stars_repo_head_hexsha": "840ffaafe15c208b200b74094ffa8fe493b4c975", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-20T14:02:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-20T14:02:55.000Z", "max_issues_repo_path": "docs/subsections/criterion.tex", "max_issues_repo_name": "nextBillyonair/DPM", "max_issues_repo_head_hexsha": "840ffaafe15c208b200b74094ffa8fe493b4c975", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/subsections/criterion.tex", "max_forks_repo_name": "nextBillyonair/DPM", "max_forks_repo_head_hexsha": "840ffaafe15c208b200b74094ffa8fe493b4c975", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.8705882353, "max_line_length": 194, "alphanum_fraction": 0.5403203147, "num_tokens": 1102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.584744605284746}}
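The Monte Carlo approximations given above for the divergences are straightforward to evaluate when the densities are known. Below is a sketch for two one-dimensional Gaussians; the specific parameters are illustrative assumptions, and the closed-form Gaussian KL is included only as a cross-check.

\begin{verbatim}
# Monte Carlo estimates of the cross-entropy H(p, q) and the forward
# KL divergence, drawing x ~ p as in the formulas above.
# The Gaussian parameters are illustrative choices.
import math, random

def log_pdf(x, mu, sigma):            # log density of N(mu, sigma^2)
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

mu_p, s_p = 0.0, 1.0                  # true distribution p
mu_q, s_q = 1.0, 2.0                  # learned distribution q
xs = [random.gauss(mu_p, s_p) for _ in range(100000)]

ce = -sum(log_pdf(x, mu_q, s_q) for x in xs) / len(xs)
kl = sum(log_pdf(x, mu_p, s_p) - log_pdf(x, mu_q, s_q) for x in xs) / len(xs)

# Closed-form forward KL between two Gaussians, for comparison.
kl_exact = (math.log(s_q / s_p)
            + (s_p ** 2 + (mu_p - mu_q) ** 2) / (2 * s_q ** 2) - 0.5)
print(ce, kl, kl_exact)               # kl and kl_exact agree closely
\end{verbatim}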
{"text": "\\section{Solving random-exist quantified SSAT}\n\\label{sect:ressat-technique}\n\nConsider a random-exist quantified SSAT formula $\\Qf=\\random{} X,\\exists Y.\\pf(X,Y)$.\nThe satisfying probability of $\\Qf$ equals the summation of weight of all SAT minterms over $X$, or, equivalently,\n$1$ minus the summation of weight of all UNSAT minterms over $X$.\nTo identify an assignment $\\as$ over $X$ as a SAT or an UNSAT minterm,\nit suffices to check whether $\\pcf{\\pf(X,Y)}{\\as}$ is satisfiable or not.\nA naive solution to computing the satisfying probability of $\\Qf$ is to exhaustively examine all assignments over $X$, classify them as SAT or UNSAT minterms, and aggregate the weight of collected minterms.\n\nThe above naive idea can be improved by exploiting the minterm-generalization techniques discussed in~\\cref{sect:ressat-generalize}.\nFor instance, in~\\cref{ex:ressat-assign},\n$\\as=x_1x_2$ is a SAT minterm over $\\{x_1,x_2\\}$ for $\\pf(x_1,x_2,y_1,y_2)=x_1 \\land (\\lnot x_2 \\lor y_1 \\lor y_2)$.\nObserve that $\\pf(x_1,x_2,y_1,y_2)$ is satisfiable under the partial assignment $\\as^+=x_1$.\nIn other words, the SAT minterm $\\as$ can be generalized into the SAT cube $\\as^+$, which contains two minterms.\nThrough the generalization analysis, multiple minterms can be collected in a single SAT-solving run,\nenhancing the efficiency to enumerate all possible assignments over $X$.\nAs will be shown in Section~\\ref{sect:ressat-evaluation},\nthe minterm-generalization techniques are essential to the efficiency of the proposed algorithm.\nHowever, the weight of each collected cube cannot be summed up directly due to the potential overlap between generalized cubes.\nThis difficulty is overcome by applying weighted model counting,\nwhich aggregates the total weight of the collected cubes correctly, taking the overlap into account.\n\n\\begin{algorithm}[p]\n    \\caption{Solving random-exist quantified SSAT formulas}\n    \\label{alg:ressat}\n    \\begin{algorithmic}[1]\n        \\REQUIRE\n        $\\Qf=\\random{} X,\\exists Y.\\pf(X,Y)$ and a run-time limit \\timeout\n        \\ENSURE\n        Lower and upper bounds $(P_L,P_U)$ of $\\spb{\\Qf}$\n        \\STATE $\\select(X) := \\top$\n        \\STATE $C_\\top := \\emptyset$\n        \\STATE $C_\\bot := \\emptyset$\n        \\WHILE{($\\sat{\\select}$ \\textbf{and} $\\texttt{run-time} < \\timeout$)}\n        \\STATE $\\as := \\model{\\select}$\n        \\IF{($\\sat{\\pcf{\\pf}{\\as}}$)}\n        \\STATE $\\as^+ := \\texttt{MinimalSatisfying}(\\pf,\\as)$\n        \\STATE $C_\\top := C_\\top \\cup \\{\\as^+\\}$\n        \\ELSE\n        \\STATE $\\as^+ := \\texttt{MinimalConflicting}(\\pf,\\as)$\n        \\STATE $C_\\bot := C_\\bot \\cup \\{\\as^+\\}$\n        \\ENDIF\n        \\STATE $\\select := \\select \\land \\lnot\\as^+$\n        \\ENDWHILE\n        \\RETURN $(\\texttt{ComputeWeight}(C_\\top),1-\\texttt{ComputeWeight}(C_\\bot))$\n    \\end{algorithmic}\n\\end{algorithm}\n\nThe above thoughts give rise to the proposed algorithm in~\\cref{alg:ressat} to compute the satisfying probability of $\\Qf=\\random{} X,\\exists Y.\\pf(X,Y)$.\nThe proposed algorithm works as follows.\nFor now, assume the run-time limit \\timeout to be infinity.\nThe effect of imposing a run-time limit on~\\cref{alg:ressat} will be explained in~\\cref{sect:ressat-approximate}.\nTwo SAT solvers are used in~\\cref{alg:ressat}.\nIn addition to the SAT solver that holds the matrix CNF $\\pf(X,Y)$,\nthe other SAT solver $\\select(X)$, named the selector in the 
following,\nis initialized as a tautology, i.e., without clauses.\nThe selector $\\select(X)$ is in charge of selecting an assignment $\\as$ over $X$.\nAfter $\\as$ is chosen, the matrix solver $\\pf(X,Y)$ will check whether $\\pcf{\\pf(X,Y)}{\\as}$ is satisfiable or not.\nIf $\\pcf{\\pf(X,Y)}{\\as}$ is satisfiable,\n$\\as$ will be generalized into a SAT cube by the subroutine \\texttt{MinimalSatisfying};\nif $\\pcf{\\pf(X,Y)}{\\as}$ is unsatisfiable,\n$\\as$ will be generalized into an UNSAT cube by the subroutine \\texttt{MinimalConflicting}.\n\nInstead of finding a \\textit{minimum} satisfying or conflicting assignment,\nwhich is computationally expensive,\nwe resort to finding a \\textit{minimal} satisfying or conflicting assignment,\ni.e., an assignment that has no literals removable without affecting the (un)satisfiability,\nto leverage the efficient UNSAT-core computation for effective generalization.\nAfter $\\as$ is generalized to $\\as^+$ and enlisted in $C_\\bot$ or $C_\\top$,\nthe negation of $\\as^+$, which becomes a blocking clause,\nwill be conjoined with $\\select$ to prune the assignments contained by $\\as^+$.\n\nThe above process is repeated until $\\select$ becomes unsatisfiable,\nwhich signifies the Boolean space spanned by $X$ has been exhaustively searched.\nThe subroutine \\texttt{ComputeWeight} is then invoked to evaluate the weight of the collected cubes.\nThe subroutines \\texttt{MinimalConflicting}, \\texttt{MinimalSatisfying}, and \\texttt{ComputeWeight} will be detailed below.\n\n\\subsection{Minimal satisfying assignment}\nGiven a SAT minterm $\\as$ over $X$,\nlet $\\mu$ be a satisfying assignment over $Y$ for $\\pcf{\\pf(X,Y)}{\\as}$.\nThe subroutine \\texttt{MinimalSatisfying} generalizes $\\as$ to $\\as^+$ by the following steps.\n\\begin{itemize}\n    \\item[a)] Remove every clause $C$ in $\\pcf{\\pf(X,Y)}{\\as}$ that contains some true literal from $\\mu$.\n    \\item[b)] For each literal $l$ in $\\as$, drop $l$ and examine whether the rest of clauses remain satisfied\n          by scanning these clauses and checking if each of them still contains some true literal.\n          If the rest of clauses are still satisfied, discard $l$; otherwise, put $l$ in $\\as^+$.\n\\end{itemize}\nAfter the above steps, the SAT minterm $\\as$ will be generalized into a minimal satisfying assignment $\\as^+$.\n\n\\subsection{Minimal conflicting assignment}\nLet $\\as$ be an UNSAT minterm over $X$ for $\\pf(X,Y)$.\nThe analysis of unsatisfiability can be done with a modern SAT solver (e.g., using function \\texttt{analyzeFinal()} in \\minisat) to find a conjunction of literals from $\\as$ responsible for the conflict.\nHowever, in general this conjunction of literals might not be minimal,\nand some of the literals could be dropped.\nThe subroutine \\texttt{MinimalConflicting} takes the conjunction of literals responsible for the conflict computed by a SAT solver and makes it minimal as follows.\nFor each literal $l$ in the conjunction,\ndrop $l$ and examine whether $\\pf(X,Y)$ remains unsatisfiable by invoking a SAT call.\nIf it is unsatisfiable, discard $l$; otherwise, put $l$ in $\\as^+$.\nAfter the above steps, the UNSAT minterm $\\as$ will be generalized into a minimal conflicting assignment $\\as^+$.\n\n\\subsection{Weight computation}\nThe subroutine \\texttt{ComputeWeight} aggregates the weight of collected cubes by invoking a weighted model counter.\nBecause a weighted model counter takes CNF formulas as input,\n\\texttt{ComputeWeight} first negates each collected 
cube to turn it into a clause,\nand conjoins the resulting clauses into a CNF formula.\nAs the CNF formula is the negation of the disjunction of the cubes,\nthe weight of the cubes equals $1$ minus the weight of the CNF formula,\nwhich is computed by a weighted model counter.\n\n\\subsection{Modification for approximate SSAT}\n\\label{sect:ressat-approximate}\n\nThe proposed algorithm can be easily modified to solve \\textit{approximate SSAT},\nwhere upper and lower bounds of the satisfying probability of an SSAT formula are computed.\nSuppose~\\cref{alg:ressat} is forced to terminate before the selector $\\select$ becomes unsatisfiable.\nThe weights of the collected SAT and UNSAT cubes are still valid and can be aggregated by \\texttt{ComputeWeight},\nand the resulted weights reflect the lower and upper bounds of the satisfying probability, respectively.\nThe early termination can be triggered by imposing a run-time limit for~\\cref{alg:ressat}.\nCompared to previous DPLL-based approaches that branch on a single variable,\nthe proposed algorithm considers all randomly quantified variables together and exploits the concept of SAT and UNSAT cubes over the Boolean space spanned by randomly quantified variables,\nmaking the intermediate collected SAT and UNSAT cubes convey useful information about the upper and lower bounds of the exact satisfying probability.\nCompared to the DPLL-based state-of-the-art methods,\nwhich cannot be easily modified for approximate SSAT,\nthe proposed method enjoys the flexibility of solving SSAT approximately or exactly,\ndepending on the imposed run-time constraint.\n\nWe note that the proposed algorithm is more efficient in memory consumption than previous DPLL-based algorithms.\nPrior DPLL-based algorithms mostly apply subproblem memorization to avoid repeated computation on the same subproblem.\nHowever, without special treatment, such memorization may result in rapid growth in memory usage.\nOn the other hand, in the proposed algorithm,\nthe numbers of collected cubes are greatly reduced by the minterm-generalization techniques,\nwhich gives rise to the memory efficiency.\nIn our empirical evaluation,\nthe proposed algorithm consumed two orders of magnitude less memory than the state-of-the-art DPLL-based solver.\n\n\\begin{example}\n    \\label{ex:ressat-solve}\n    Consider a random-exist quantified SSAT formula\n    \\begin{align*}\n        \\Qf=\\random{0.5}r_1,\\random{0.5}r_2,\\random{0.5}r_3,\\exists e_1,\\exists e_2,\\exists e_3.\\pf,\n    \\end{align*}\n    with $\\pf$ consisting of the following clauses:\n    \\begin{itemize}\n        \\item[] $C_1: (r_1 \\lor r_2 \\lor e_1)$\n        \\item[] $C_2: (r_1 \\lor \\lnot r_3 \\lor e_2)$\n        \\item[] $C_3: (r_2 \\lor \\lnot r_3 \\lor \\lnot e_1 \\lor \\lnot e_2)$\n        \\item[] $C_4: (r_3 \\lor e_3)$\n        \\item[] $C_5: (r_3 \\lor \\lnot e_3)$\n    \\end{itemize}\n    \\begin{table}[t]\n        \\centering\n        \\caption{Solving process of~\\cref{alg:ressat} on~\\cref{ex:ressat-solve}}\n        \\label{tbl:ressat-solve-example}\n        \\small\n        \\begin{tabular}{c|c|c|c|c}\n            Assignment                            & Minterm Type & Generalization                & UB      & LB      \\\\\n            \\hline\n            $\\as_1=\\lnot r_1 \\lnot r_2 \\lnot r_3$ & UNSAT        & $\\as_1^+=\\lnot r_3$           & $0.5$   & $0$     \\\\\n            $\\as_2=\\lnot r_1 \\lnot r_2 r_3$       & UNSAT        & $\\as_2^+=\\lnot r_1 \\lnot r_2$ & $0.375$ & $0$     \\\\\n           
 $\\as_3=\\lnot r_1 r_2 r_3$             & SAT          & $\\as_3^+=r_2r_3$              & $0.375$ & $0.25$  \\\\\n            $\\as_4=r_1 \\lnot r_2 r_3$             & SAT          & $\\as_4^+=r_1r_3$              & $0.375$ & $0.375$\n        \\end{tabular}\n    \\end{table}\n    The solving process is summarized in~\\cref{tbl:ressat-solve-example}.\n    In the beginning, the selector $\\select(r_1,r_2,r_3)$ is initialized without clauses,\n    and the sets $C_\\top$ and $C_\\bot$ to collect SAT and UNSAT cubes are empty.\n    Suppose $\\select$ first selects an assignment $\\as_1=\\lnot r_1 \\lnot r_2 \\lnot r_3$.\n    Since $\\pcf{\\pf}{\\as_1}$ is unsatisfiable due to the conflict between $C_4$ and $C_5$,\n    the subroutine \\texttt{MinimalConflicting} returns $\\as_1^+=\\lnot r_3$,\n    which is the minimal conflicting assignment responsible for this conflict.\n    Note that this minimal conflicting assignment $\\as_1^+$ reflects an upper bound of $0.5$ for $\\spb{\\Qf}$.\n    The selector $\\select$ is then strengthened through conjunction with the negation of $\\as_1^+$ to block the searched subspace.\n    Next, suppose $\\as_2=\\lnot r_1 \\lnot r_2 r_3$ is selected.\n    Under $\\as_2$,\n    formula $\\pcf{\\pf}{\\as_2}$ is unsatisfiable due to the conflict among clauses $C_1$, $C_2$, and $C_3$,\n    and the minimal conflicting assignment $\\as_2^+$ equals $\\lnot r_1 \\lnot r_2$.\n    After conjoining $\\select$ with $\\lnot \\as_2^+$, suppose $\\as_3=\\lnot r_1 r_2 r_3$ is chosen.\n    Formula $\\pcf{\\pf}{\\as_3}$ is satisfiable through the assignment $\\mu_3=\\lnot e_1 e_2 \\lnot e_3$.\n    The subroutine \\texttt{MinimalSatisfying} is invoked to generalize $\\as_3$ to $\\as_3^+=r_2r_3$,\n    which reflects a lower bound of $0.25$ for $\\spb{\\Qf}$.\n    Similarly, the negation of $\\as_3^+$ is conjoined with $\\select$.\n    Next, let the assignment chosen by $\\select$ be $\\as_4=r_1 \\lnot r_2 r_3$.\n    Since $\\pcf{\\pf}{\\as_4}$ is satisfiable through the assignment $\\mu_4=\\lnot e_1 \\lnot e_2 \\lnot e_3$,\n    assignment $\\as_4$ is generalized to $\\as_4^+=r_1r_3$ by \\texttt{MinimalSatisfying}.\n    After conjoined with $\\lnot \\as_4^+$,\n    formula $\\select$ becomes unsatisfiable,\n    which indicates the Boolean space over $\\{r_1,r_2,r_3\\}$ has been explored exhaustively.\n    At the end, we have\n    $C_\\bot=\\{\\as_1^+,\\as_2^+\\}=\\{\\lnot r_3,\\lnot r_1 \\lnot r_2\\}$ and\n    $C_\\top=\\{\\as_3^+,\\as_4^+\\}=\\{r_2r_3,r_1r_3\\}$.\n    The subroutine \\texttt{ComputeWeight} is finally invoked and returns $0.375$ as the satisfying probability of $\\Qf$.\n\n    For approximate SSAT solving, suppose the procedure is forced to terminate right after $\\as_3^+$ is collected.\n    The subroutine \\texttt{ComputeWeight} will be invoked over $C_\\top=\\{r_2r_3\\}$ and $C_\\bot=\\{\\lnot r_3,\\lnot r_1 \\lnot r_2\\}$.\n    The cubes in $C_\\top$ or $C_\\bot$ are negated into CNF formulas for weighted model counting.\n    To compute an upper bound,\n    the UNSAT cubes $\\lnot r_3$ and $\\lnot r_1 \\lnot r_2$ are rewritten into a CNF formula $(r_3)\\land(r_1 \\lor r_2)$ and yields a probability of $0.375$ with respect to the weights specified by the prefix.\n    This probability is the satisfying probability of the negation of the UNSAT cubes,\n    which gives an upper bound of $0.375$ for $\\spb{\\Qf}$.\n    Similarly, we can obtain a lower bound of $0.25$ for $\\spb{\\Qf}$ from the SAT cube $r_2 r_3$.\n\\end{example}\n\n\\section{Applications}\nIn this section, 
we discuss several applications of random-exist quantified SSAT formulas.\n\n\\subsection{Probability of success in planning}\nMany planning problems can be formulated in terms of forall-exist QBFs,\ni.e., QBFs of the form $\\Qf=\\forall X,\\exists Y.\\pf(X,Y)$.\nChanging the universal quantifiers of these QBFs to randomized ones yields random-exist quantified SSAT formulas.\nUnder the game interpretation of QBFs,\nthe satisfying probability of such an SSAT formula corresponds to the likelihood\nfor the existential player to win the game if the universal player decides its moves at random.\nIn~\\cref{sect:ressat-evaluation},\nwe will use the \\textit{strategic companies} problem~\\cite{Cadoli1997} as an example\nto evaluate SSAT solvers on planning applications.\n\n\\subsection{Probabilistic circuit verification}\nThe second application is the formal verification of \\textit{probabilistic design}.\nAs probabilistic errors are becoming more common in advanced nanometer technology,\nthe \\textit{probabilistic equivalence checking} (PEC) problem asks to compute the probability for a probabilistic circuit to produce different outputs from its faultless specification.\nPEC can be encoded into a random-exist quantified SSAT formula~\\cite{LeeTC18ProbDesign}.", "meta": {"hexsha": "0e13791638b3cdef19bf1fd3adaf3b79f735cc37", "size": 14878, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/random-exist-ssat/technique.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "paper/random-exist-ssat/technique.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/random-exist-ssat/technique.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.018018018, "max_line_length": 206, "alphanum_fraction": 0.7201908859, "num_tokens": 4147, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914975839675, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5847446045992333}}
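As a sanity check of the worked example, the sketch below implements the naive enumeration described at the start of this section (without the minterm-generalization step) on the formula of Example~\ref{ex:ressat-solve}: it classifies every assignment over the randomized variables by brute-forcing the existential variables in place of a SAT oracle, and aggregates the weights directly.

\begin{verbatim}
# Naive solving of the example SSAT formula over r1, r2, r3 (random,
# each with probability 0.5) and e1, e2, e3 (existential).
# A minterm over {r1,r2,r3} is SAT iff some assignment of e1,e2,e3
# satisfies clauses C1..C5; each minterm has weight 0.5^3.
from itertools import product

def phi(r1, r2, r3, e1, e2, e3):
    return ((r1 or r2 or e1) and                    # C1
            (r1 or not r3 or e2) and                # C2
            (r2 or not r3 or not e1 or not e2) and  # C3
            (r3 or e3) and                          # C4
            (r3 or not e3))                         # C5

prob = sum(0.5 ** 3
           for r in product([False, True], repeat=3)
           if any(phi(*r, *e) for e in product([False, True], repeat=3)))
print(prob)   # 0.375, the satisfying probability from the example
\end{verbatim}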
{"text": "\\subsection{L'H\\^{o}pital / l'Hospital}\r\n\r\n\\begin{frame}{Limits with L'H\\^{o}pital}\r\n  \\begin{itemize}\r\n    \\item\r\n      \\textbf{Intuitive:}\\\\\r\n      \\begin{displaymath}\r\n        \\lim\\limits_{n \\rightarrow \\infty} 2 + \\dfrac{1}{n}\r\n        = 2  + \\dfrac{1}{\\infty} = 2\r\n      \\end{displaymath}\r\n      \\vspace{0em}\\\\\r\n    \\item<2- |handout:1>\r\n      \\textbf{With L'H\\^{o}pital:}\r\n      \\begin{itemize}\r\n        \\item\r\n          Let $f, \\, g : \\mathbb{N} \\rightarrow \\mathbb{R}$\r\n        \\item\r\n          If\r\n          \\begin{math}\r\n            \\lim\\limits_{n \\to \\infty} f(n)\r\n              = \\lim\\limits_{n \\to \\infty} g(n)\r\n              = \\infty / 0\r\n          \\end{math}\r\n      \\end{itemize}\r\n      \\begin{displaymath}\r\n        \\Rightarrow \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f(n)}{g(n)}\r\n          = \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f'(n)}{g'(n)}\r\n      \\end{displaymath}\r\n    \\item<3- |handout:1>\r\n      \\textbf{Holy inspiration}\r\n      \\begin{center}\r\n        you need a doctoral degree for that\r\n      \\end{center}\r\n  \\end{itemize}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Limits with L'H\\^{o}pital}\r\n  \\textbf{The limit can not be determined in the way of an Engineer:}\r\n  \\begin{displaymath}\r\n    \\lim_{n \\to \\infty} \\dfrac{\\ln (n)}{n}\r\n      = \\dfrac{\\lim_{n \\to \\infty}\\; \\ln (n)}{\\lim\\limits_{n \\to \\infty}\\; n}\r\n    \\hspace{1em} \\stackrel{\\text{plugging in}}{\\longrightarrow} \\hspace{1em}\r\n      \\dfrac{\\infty}{\\infty}\r\n  \\end{displaymath}\r\n  \\textbf{Determine the limit using L'H\\^{o}pital:}\r\n  \\begin{displaymath}\r\n    \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f(n)}{g(n)}\r\n      = \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f'(n)}{g'(n)}\r\n  \\end{displaymath}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Limits with L'H\\^{o}pital}\r\n  \\begin{block}{\\textbf{Using L'H\\^{o}pital:}}\r\n    Numerator: \\; $\\textit{\\textbf{f(n)}}\\!: n \\mapsto \\ln (n)$\\\\\r\n    Denominator: $\\textit{\\textbf{g(n)}}\\!: n \\mapsto n$\\\\\r\n    \\hspace{1.5em}\r\n    $\\Rightarrow f'(n) = \\dfrac{1}{n}$ \\; (derivation from Numerator)\\\\\r\n    \\hspace{1.5em}\r\n    $\\Rightarrow g'(n) = 1$\\; (derivation from Denominator)\\\\\r\n    \\begin{displaymath}\r\n      \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f'(n)}{g'(n)}\r\n        = \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{1}{n} = 0\r\n      \\hspace{0.5em} \\Rightarrow \\hspace{0.5em}\r\n      \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{f(n)}{g(n)}\r\n        = \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{\\ln (n)}{n} = 0\r\n    \\end{displaymath}\r\n  \\end{block}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Limits with L'H\\^{o}pital}\r\n  \\textbf{What can we take for granted without proofing?}\r\n  \\begin{itemize}\r\n    \\item\r\n      Only things that are trivial\r\n    \\item\r\n      It is always better to proof it\r\n  \\end{itemize}\r\n  \\textbf{Examples:}\r\n  \\begin{eqnarray*}\r\n    \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{1}{n} &= \\; 0\r\n      &\\hspace{2em} \\text{is trivial}\\\\\r\n    \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{1}{n^2} &= \\; 0\r\n      
&\\hspace{2em} \\text{is trivial}\\\\\r\n    \\lim\\limits_{n \\rightarrow \\infty} \\dfrac{\\log (n)}{n} &= \\; 0\r\n      &\\hspace{2em} \\text{use L'Hopital}\r\n  \\end{eqnarray*}\r\n\\end{frame}\r\n", "meta": {"hexsha": "90fe84797d6c57ac0326ffb26a8ef0804e262505", "size": 3347, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-3/Chapter/eng/020_L_Hopital.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-3/Chapter/eng/020_L_Hopital.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-3/Chapter/eng/020_L_Hopital.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 35.6063829787, "max_line_length": 81, "alphanum_fraction": 0.5168807888, "num_tokens": 1167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7745833789613197, "lm_q1q2_score": 0.5847446013564678}}
{"text": "\\subsection*{Q. 1}\n\\vspace{-2em}\n\\begin{center}\n\\begin{karnaugh-map}[4][4][1][$CD$][$AB$]\n\\minterms{0, 1, 5, 8, 9, 10, 11, 13, 15}\n\\implicant{1}{9}\n\\implicant{13}{11}\n\\implicant{8}{10}\n\\implicantedge{0}{1}{8}{9}\n\\end{karnaugh-map}\n\\begin{karnaugh-map}[4][4][1][$CD$][$AB$]\n\\maxterms{2,3,4,6,7,12,14}\n\\implicant{3}{6}\n\\implicantedge{4}{12}{6}{14}\n\\end{karnaugh-map}\n\\end{center}\n\\vspace{-2.5em}\n\\paragraph{(a)}$F(A,B,C,D)=B'C'+C'D+AD+AB'$\n\\vspace{-1em}\n\\paragraph{(b)}$F(A,B,C,D)=(B+C)'+(C+D')'+(A'+D')'+(A'+B)'$\n\\vspace{-1em}\n\\paragraph{(c)}$F(A,B,C,D)=((B'C')'(C'D)'(AD)'(AB')')'$\n\\vspace{-1em}\n\\paragraph{(d)}$F(A,B,C,D)=(B'+D)(A+C')$\n\\vspace{-1em}\n\\paragraph{(e)}$F(A,B,C,D)=((B'+D)'+(A+C')')'$\n\\vspace{-1em}\n\\paragraph{(f)}$F(A,B,C,D)=(BD')'(A'C)'$", "meta": {"hexsha": "314b611f6f7d4f154995510f59bd856972bc83f2", "size": 752, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2021F/CS207/A3/q1.tex", "max_stars_repo_name": "HeZean/SUSTech-Archive", "max_stars_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2021F/CS207/A3/q1.tex", "max_issues_repo_name": "HeZean/SUSTech-Archive", "max_issues_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2021F/CS207/A3/q1.tex", "max_forks_repo_name": "HeZean/SUSTech-Archive", "max_forks_repo_head_hexsha": "0c89d78f232fdef427ca17b7e508881b782d7826", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.8571428571, "max_line_length": 59, "alphanum_fraction": 0.5545212766, "num_tokens": 372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5847446010137116}}
{"text": "\\section{More Gluing; Branching}\r\n\\subsection{A More Detailed Gluing Example}\r\nWe have seen how to glue two copies of $\\mathbb C$ to form the Riemann sphere $\\mathbb C_\\infty$.\r\nWe are gonna try something more interesting:\r\nThe Riemann surface $R$ associated with $w^2=z^3-z$.\r\nTopologically, we know it is homeomorphic to the torus with $4$ points removed.\r\nWe already know its conformal structure, but we are also interested in the analytic functions on it.\r\nWe want to compactify $R$ by adding $4$ points.\r\nBut we already know (from example sheet) that we can extend $R$ to the Riemann surface $R_1=\\{(z,w)\\in\\mathbb C:w^2=z^3-z\\}$, so it remains to compactify $R$ by adding the point $\\infty$.\r\nThe idea for this is to perform a change of coordinate sending $\\infty$ to a finite point.\r\nConsider $u=1/z$ and $v=z/w$ (the method to obtain it will be covered in Algebraic Geometry), so $z=1/u$ and $w=z/v=1/{uv}$, so the original equation becomes $u=v^2(1-u^2)$.\r\nThe set $R_2=\\{(z,w)\\in\\mathbb C^2:u=v^2(1-u^2)\\}$ is then a Riemann surface where the atlas is defined by the restriction of the projections $\\pi,\\tau$ to the first and second coordinate.\r\nOne can do some technical work to check that it works and $\\pi,\\tau$ are analytic functions under this conformal structure.\r\nWe are going to glue $R_1$ and $R_2$ together to get a compact Riemann surface.\r\nThe gluing map is given by $\\Phi(z,w)=(u,v)=(1/z,z/w)$ and $\\Phi^{-1}(u,v)=(z,w)=(1/u,1/(uv))$ between $S_1=R_1\\setminus\\{(0,0),(\\pm 1,0)\\}$ and $S_2=R_1\\setminus\\{(0,0)\\}$.\r\nThese are conformal equivalences as the coordinate functions are holomorphic and the atlases are defined byb coordinate projection.\r\nGlue them together gives $\\hat{R}=R_1\\cup_\\Phi R_2$ which is connected.\r\nBefore we check that it is Hausdorff, we want to first note that $\\hat\\pi_1:R_1\\to\\mathbb C_\\infty$ via $(z,w)\\mapsto z$ and $\\hat{\\pi}_2:R_2\\to\\mathbb C_\\infty$ via $(u,v)\\mapsto 1/u$ satisfy $\\hat\\pi_1=\\hat\\pi_2\\circ\\Phi$.\r\nSo they define a continuous function $\\hat\\pi$ on $\\hat{R}$ (which is automatically meromorphic once we can show that $\\hat{R}$ is Hausdorff hence is a Riemann surface).\r\nLet $i_1:R_1\\to\\hat{R}$ and $i_2:R_2\\to\\hat{R}$ are the respective inclusions, then to see $\\hat{R}$ is Hausdorff, it suffices to seperate $\\hat{R}\\setminus i_2(R_2)=\\{i_1(0,0),i_1(\\pm 1,0)\\}$ from $\\hat{R}\\setminus i_1(R_1)=\\{i_2(0,0)\\}$.\r\nThis can be done by simply taking $\\hat\\pi^{-1}(D(0,2))$ and $\\hat\\pi^{-1}(\\mathbb C_\\infty\\setminus\\bar{D}(0,2))$.\\\\\r\nNow to see $\\hat{R}$ is compact, we shall prove it is sequential compact.\r\nChoose a sequence $(p_n)_{n\\ge 0}$ in $\\hat{R}$.\r\nNow if we, via restricting to a subsequence, have $|\\hat\\pi(p_n)|\\le M$ for all $n$, then a further subsequence of $\\hat\\pi(p_n)$ converges to some $z_0$.\r\nBut there are at most two $w_0$ such that $i_1(z_0,w_0)\\in\\hat{R}$, so $(p_n)$ has a further subsequence that converges to $i_1(z_0,w_0)$ for one of those $w_0$.\r\nOtherwise $|\\hat\\pi(p_n)|\\to\\infty$, but $p_\\infty=i_2(0,0)$ is the only point of $\\hat{R}$ whose image under $\\hat\\pi$ is $\\infty$, therefore $p_n\\to p_\\infty$ as $n\\to\\infty$.\r\nEither way there is a convergent subsequence, so $\\hat{R}$ is sequential compact hence compact.\\\\\r\nIn summary, we obtained a compact Riemann surface $\\hat{R}$ associated with $w^2=z^3-z$.\r\nJust like $\\pi$ extends to a meromorphic function $\\hat\\pi$, we can 
extend $g$ to a meromorphic $\\hat{g}$ in the same way.", "meta": {"hexsha": "857d19798117c6bb0856be51718dcf746ff05eb4", "size": 3434, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "10/glue.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "10/glue.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10/glue.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 118.4137931034, "max_line_length": 240, "alphanum_fraction": 0.7041351194, "num_tokens": 1157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914975839675, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5847445967426769}}
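For completeness, the one-line computation behind the change of coordinates (substituting $z=1/u$ and $w=1/(uv)$ into $w^2=z^3-z$) is:
\begin{align*}
  \frac{1}{u^2v^2}=w^2=z^3-z=\frac{1}{u^3}-\frac{1}{u}=\frac{1-u^2}{u^3}
  \quad\Longrightarrow\quad u=v^2(1-u^2),
\end{align*}
where the last step multiplies both sides by $u^3v^2$.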
{"text": "\\chapter{Primitive Machines}\n\\label{chap:basic}\n\nIn this chapter, we define several classes of \\textit{primitive} machines.\\footnote{Asperti and Ricciotti~\\cite{asperti2015} call these machines\n  \\textit{basic} machines.}  These machines are defined on an arbitrary finite alphabet $\\Sigma$, but they have at most one tape, and they terminate\nafter at most one transition.  Primitive machines are the only concrete machines for that we give transition functions $\\delta$ explicitly.  They are\nalso the only concrete machines, for that we fully specify the type of machine states $Q$.  In the following two chapters, we will show how to combine\nthese machines and build (and verify) complex machines without mentioning $Q$ or $\\delta$ again.  All states and transition functions will be derived\nfrom these machines.\n\n\n\\section{$\\MS{Null}$}\n\\label{sec:Null}\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.Basic.Null}\n\n\nThe machine $\\MS{Null}_\\Sigma$ is parametrised over the alphabet $\\Sigma$.  Because $\\Sigma$ is always clear from the context, we leave the index out.\nWe fix an alphabet $\\Sigma$.  $\\MS{Null}$ has zero tapes and terminates immediately, i.e.\\ after $0$ steps.\n\n\\begin{definition}[$\\MS{Null}$][Null]\n  \\label{def:Null}\n  $\\MS{Null} : \\TM_\\Sigma^{\\mkern+1mu 0}$ is defined as follows:\n  \\begin{align*}\n    Q          &:= \\Unit \\\\\n    start      &:= \\unit \\\\\n    \\delta ~\\_ &:= (\\unit, \\nil) \\\\\n    halt   ~\\_ &:= \\true\n  \\end{align*}\n\\end{definition}\nNote that if we have an unlabelled machine $M:\\TM_\\Sigma^n$, we implicitly label their states over $\\Unit$ with the labelling function\n$lab_M(q):=\\unit$.\n\n% The correctness relation says exactly that the tapes don't change:\nThe correctness relation is the universal relation, because empty vectors do not have information.  However, the correctness lemma also states that\nthe machine terminates in $0$ steps.\n\\begin{lemma}[Correctness of $\\MS{Null}$][Null_Sem]\n  \\label{lem:Null_Sem} $\\MS{Null} \\RealiseIn0 NullRel$ with $NullRel := \\lambda~t~t'.~ \\True$.\n\\end{lemma}\n\\begin{proof}\n  By execution.  The machine terminates after zero steps in the state $\\unit$.\n\\end{proof}\n\n\\section{$\\MS{DoAct}$}\n\\label{sec:DoAct}\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.Basic.Mono}\n\nMachines of the next class, $\\MS{DoAct}~a : \\TM_\\Sigma^1(L)$, do only one action $a:\\Act$ (i.e.\\ they optionally write a symbol and move the head of\nthe tape) and terminate.\n\\begin{definition}[$\\MS{DoAct}~a$][DoAct]\n  \\label{def:DoAct}\n  Let $a:\\Act_\\Sigma$.  Then $\\MS{DoAct}~a : \\TM_\\Sigma^1$ is defined as follows:\n  \\begin{align*}\n    Q          &:= \\Bool \\\\\n    start      &:= \\false \\\\\n    \\delta ~\\_ &:= (\\true, \\Vector{a}) \\\\\n    halt   ~ b &:= b\n  \\end{align*}\n\\end{definition}\nThe semantics of $\\MS{DoAct}$ is easily expressed using the function $\\MS{doAct}$ (see Definition~\\ref{def:step}).  
The machine terminates after one\ntransition:\n\\begin{lemma}[Correctness of $\\MS{DoAct}$][DoAct_Sem]\n  \\label{lem:DoAct_Sem} $\\MS{DoAct}~a \\RealiseIn1 DoActRel~a$ with\n  \\[\n    DoActRel~a := \\lambda t~t'.~ t'[0] = \\MS{doAct}~t[0]~a.\n  \\]\n\\end{lemma}\n% \\begin{proof}\n%   By execution.\n% \\end{proof}\nWe define some abbreviations:\n\\begin{definition}[Machine classes derived from $\\MS{DoAct}$][DoAct_Derived]\n \\label{def:DoAct-derived} \n \\begin{align*}\n   \\MS{Move}       ~d &:= \\MS{DoAct} (\\None, d) \\\\\n   \\MS{Write}    ~s   &:= \\MS{DoAct} (\\Some{s}, \\MS{N}) \\\\\n   \\MS{WriteMove}~s~d &:= \\MS{DoAct} (\\Some{s}, d)\n \\end{align*}\n\\end{definition}\n\n\n\\section{$\\MS{Read}$}\n\\label{sec:basic_machines-Read}\n\n$\\MS{Read} : \\TM_\\Sigma^1(\\Option(\\Sigma))$ is an interesting class of labelled one-tape machines.  The machines of this class have one terminating\nstate for each symbol of the alphabet $\\Sigma$.  They read the current symbol from the tape and terminate in the state that corresponds to that\nsymbol.  For the case that there is no current symbol, they also have a distinct terminating state.  The labelling function maps the terminating state\nfor the symbol $s$ to the label $\\Some{s}$.\n\n\\begin{definition}[$\\MS{Read}$][ReadChar]\n  The machine $\\MS{Read} : \\TM_\\Sigma^1(\\Option(\\Sigma))$ is defined as follows:\n  \\begin{align*}\n    Q          &:= \\Bool+\\Sigma \\\\\n    start      &:= \\inl \\false \\\\\n    \\delta (\\_, s) &:=\n                     \\begin{cases}\n                       (\\inl \\true, \\Vector{(\\None, N)}) & s[0] = \\None \\\\\n                       (\\inr c, \\Vector{(\\None, N)})     & s[0] = \\Some c\n                     \\end{cases} \\\\\n    halt   ~ (\\inl  b) &:= b \\\\\n    halt   ~ (\\inr \\_) &:= \\true \\\\\n    lab    ~ (\\inl \\_) &:= \\None \\\\\n    lab    ~ (\\inr  s) &:= \\Some s\n  \\end{align*}\n\\end{definition}\n\nThe correctness lemma of $\\MS{Read}$ states that the machine terminates after one step, leaves the tape unchanged, and that the label of the\nterminating state corresponds to the current symbol on the tape.\n\n\\begin{lemma}[Correctness of $\\MS{Read}$][ReadChar_Sem]\n  \\label{lem:Read_Sem} $\\MS{Read} \\RealiseIn{1} ReadRel$ with\n  \\[\n    ReadRel := \\lambda~t~(l,t').~ l = \\MS{current}~t[0] \\land t' = t\n  \\]\n\\end{lemma}\n\\begin{proof}\n  Case distinction over $\\MS{current}~t[0]$.  
Both cases follow by executing the machine one step.\n\\end{proof}\n\n\n%%% Local Variables:\n%%% TeX-master: \"thesis\"\n%%% End:", "meta": {"hexsha": "3c3926b64524de8376ad6c357eafe23dee37f823", "size": 5246, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/thesis/Basic.tex", "max_stars_repo_name": "mwuttke97/CoqTM", "max_stars_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-08-30T14:58:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-27T15:44:28.000Z", "max_issues_repo_path": "tex/thesis/Basic.tex", "max_issues_repo_name": "mwuttke97/CoqTM", "max_issues_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-10T09:16:49.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-10T09:16:49.000Z", "max_forks_repo_path": "tex/thesis/Basic.tex", "max_forks_repo_name": "mwuttke97/CoqTM", "max_forks_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-04-09T19:01:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-29T15:39:53.000Z", "avg_line_length": 42.3064516129, "max_line_length": 150, "alphanum_fraction": 0.6631719405, "num_tokens": 1594, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5847445931571547}}
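For intuition, the behaviour of these primitive machines can be mimicked in a few lines; the tape representation (left part, current symbol, right part) and the helper names below are illustrative, not the thesis's Coq definitions.

\begin{verbatim}
# Illustrative interpreter for the primitive one-tape machines above.
# A tape is (left, cur, right); cur is None when the head reads a blank.
# An action is (write, move): write in {None, symbol}, move in {'L','N','R'}.
# This informally mirrors doAct; blanks at the tape ends are simplified.
def do_act(tape, action):
    left, cur, right = tape
    write, move = action
    if write is not None:
        cur = write                              # optional write
    if move == 'N':
        return (left, cur, right)
    if move == 'R':
        return (left + [cur], right[0] if right else None, right[1:])
    return (left[:-1], left[-1] if left else None, [cur] + right)

def read(tape):          # Read: terminate with label = current symbol
    return tape[1]

t = ([], 'a', ['b', 'c'])
t = do_act(t, ('x', 'R'))        # DoAct (Some 'x', R), i.e. WriteMove
print(read(t), t)                # b (['x'], 'b', ['c'])
\end{verbatim}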
{"text": "\\section{Discontinuous Galerkin Method}\n\\label{sec:dg}\n\nHere we briefly outline the DG method for the moment equations.  \n(See, e.g., \\cite{cockburnShu_2001}, for a comprehensive review on the application of DG methods to solve hyperbolic conservation laws.)  \nSince we do not include any physics that couples the energy dimension, the particle energy $\\epsilonNu$ is simply treated as a parameter.  \nFor notational convenience, we will suppress explicit energy dependence of the moments.  \nEmploying Cartesian coordinates, we write the moment equations in $d$ spatial dimensions as\n\\begin{equation}\n  \\pd{\\vect{\\cM}}{t}+\\sum_{i=1}^{d}\\pderiv{}{x^{i}}\\big(\\,\\vect{\\cF}^{i}(\\vect{\\cM})\\,\\big)\n  =\\frac{1}{\\tau}\\,\\vect{\\cC}(\\vect{\\cM}),\n  \\label{eq:angularMomentsCartesian}\n\\end{equation}\nwhere $x^{i}$ is the coordinate along the $i$th coordinate dimension.  \nWe divide the spatial domain $D$ into a disjoint union $\\mathscr{T}$ of open elements $\\bK$, so that $D = \\cup_{\\bK \\in \\mathscr{T}}\\bK$.  \nWe require that each element is a $d$-dimensional box in the logical coordinates; i.e.,\n\\begin{equation}\n  \\bK=\\{\\,\\vect{x} : x^{i} \\in K^{i} := (\\xL^{i},\\xH^{i}),~|~i=1,\\ldots,d\\,\\}, \n\\end{equation}\nwith surface elements denoted $\\tilde{\\bK}^{i}=\\times_{j\\ne i}K^{j}$.  \nWe let $|\\bK|$ denote the volume of an element\n\\begin{equation}\n  |\\bK| = \\int_{\\bK}d\\vect{x}, \\quad\\text{where}\\quad d\\vect{x} = \\prod_{i=1}^{d}dx^{i}.  \n\\end{equation}\nWe also define $\\tilde{\\vect{x}}^{i}$ as the coordinates orthogonal to the $i$th dimension, so that as a set $\\vect{x}=\\{\\tilde{\\vect{x}}^{i},x^{i}\\}$.  \nThe width of an element in the $i$th dimension is $|K^{i}|=\\xH^{i}-\\xL^{i}$.  \n\nWe let the approximation space for the DG method, $\\mathbb{V}^{k}$, be constructed from the tensor product of one-dimensional polynomials of maximal degree $k$.  \nNote that functions in $\\mathbb{V}^{k}$ can be discontinuous across element interfaces.  \nThe semi-discrete DG problem is to find $\\vect{\\cM}_{h}\\in\\mathbb{V}^{k}$ (which approximates $\\vect{\\cM}$ in Eq.~\\eqref{eq:angularMomentsCartesian}) such that\n\\begin{align}\n  &\\pd{}{t}\\int_{\\bK}\\vect{\\cM}_{h}\\,v\\,d\\vect{x}\n  +\\sum_{i=1}^{d}\\int_{\\tilde{\\bK}^{i}}\n  \\big(\\,\n    \\widehat{\\bcF}^{i}(\\vect{\\cM}_{h})\\,v\\big|_{\\xH^{i}}\n    -\\widehat{\\bcF}^{i}(\\vect{\\cM}_{h})\\,v\\big|_{\\xL^{i}}\n  \\,\\big)\\,d\\tilde{\\bx}^{i} \\nonumber \\\\\n  &\\hspace{24pt}\n  -\\sum_{i=1}^{d}\\int_{\\bK}\\bcF^{i}(\\vect{\\cM}_{h})\\,\\pderiv{v}{x^{i}}\\,d\\vect{x}\n  =\\f{1}{\\tau}\\int_{\\bK}\\bcC(\\vect{\\cM}_{h})\\,v\\,d\\vect{x},\n  \\label{eq:semidiscreteDG}\n\\end{align}\nfor all $v\\in\\mathbb{V}^{k}$ and all $\\bK\\in\\mathscr{T}$.  \n\nIn Eq.~\\eqref{eq:semidiscreteDG}, $\\widehat{\\bcF}^{i}(\\vect{\\cM}_{h})$ is a numerical flux, approximating the flux on the surface of $\\bK$ with unit normal along the $i$th coordinate direction.  \nIt is evaluated with a flux function $\\vect{\\mathscr{F}}^{i}$ using the DG approximation from both sides of the element interface; i.e.,\n\\begin{equation}\n  \\widehat{\\bcF}^{i}(\\vect{\\cM}_{h})\\big|_{x^{i}}=\\vect{\\mathscr{F}}^{i}(\\vect{\\cM}_{h}(x^{i,-},\\tilde{\\bx}^{i}),\\vect{\\cM}_{h}(x^{i,+},\\tilde{\\bx}^{i})),\n\\end{equation}\nwhere superscripts $-/+$ in the arguments of $\\vect{\\cM}_{h}$ indicate that the function is evaluated to the immediate left/right of $x^{i}$.  
\nIn this paper we use the simple Lax-Friedrichs (LF) flux given by\n\\begin{equation}\n  \\vect{\\mathscr{F}}_{\\mbox{\\tiny LF}}^{i}(\\vect{\\cM}_{a},\\vect{\\cM}_{b})\n  =\\f{1}{2}\\,\\big(\\,\\bcF^{i}(\\vect{\\cM}_{a})+\\bcF^{i}(\\vect{\\cM}_{b})-\\alpha^{i}\\,(\\,\\vect{\\cM}_{b}-\\vect{\\cM}_{a}\\,)\\,\\big),\n  \\label{eq:fluxFunctionLF}\n\\end{equation}\nwhere $\\alpha^{i}$ is the largest eigenvalue (in absolute value) of the flux Jacobian $\\partial\\bcF^{i}/\\partial\\vect{\\cM}$.  \nFor particles propagating at the speed of light, we can simply take $\\alpha^{i}=1$ (i.e., the global LF flux).  \n\n\\begin{rem}\nFor simplicity, in Eq.~\\eqref{eq:semidiscreteDG}, we have approximated the opacities $\\sigma_{\\Ab}$ and $\\sigma_{\\Scatt}$ (and thus $\\xi$ and $\\tau$) on the right-hand side of Eq.~\\eqref{eq:angularMomentsCartesian} with constants in each element; i.e., $\\sigma_{\\Ab},\\sigma_{\\Scatt}\\in\\bbV^{0}$.  \n\\end{rem}", "meta": {"hexsha": "c6c799bd7f882efabebebe81927793fe8cbb566c", "size": 4168, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/M1/realizableFermionicM1/sections/dg.tex", "max_stars_repo_name": "srichers/thornado", "max_stars_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:16:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:31:21.000Z", "max_issues_repo_path": "Documents/M1/realizableFermionicM1/sections/dg.tex", "max_issues_repo_name": "srichers/thornado", "max_issues_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-07-10T20:13:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-11T13:21:00.000Z", "max_forks_repo_path": "Documents/M1/realizableFermionicM1/sections/dg.tex", "max_forks_repo_name": "srichers/thornado", "max_forks_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-11-14T01:13:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T02:08:20.000Z", "avg_line_length": 67.2258064516, "max_line_length": 297, "alphanum_fraction": 0.6578694818, "num_tokens": 1523, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952921073469, "lm_q2_score": 0.6513548782017745, "lm_q1q2_score": 0.5847182076528873}}
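The LF flux of Eq.~\eqref{eq:fluxFunctionLF} is essentially a one-liner in code. The sketch below applies it componentwise to a vector of moments; the scalar flux $F(M)=M$ and $\alpha=1$ are illustrative stand-ins for the moment flux and the largest wave speed.

\begin{verbatim}
# Lax-Friedrichs numerical flux (componentwise), as in the LF formula
# above. F(M) = M and alpha = 1 are illustrative stand-ins.
def lax_friedrichs(F, M_a, M_b, alpha=1.0):
    F_a, F_b = F(M_a), F(M_b)
    return [0.5 * (fa + fb - alpha * (mb - ma))
            for fa, fb, ma, mb in zip(F_a, F_b, M_a, M_b)]

print(lax_friedrichs(lambda M: M, [1.0, 0.3], [0.8, 0.1]))   # [1.0, 0.3]
\end{verbatim}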
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Continuous distributions}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{frame}\n\\frametitle{Continuous distributions}\n\n\\begin{itemize}\n\n\\item Below is a histogram of the distribution of heights of US adults. \n\n\\item The proportion of data that falls in the shaded bins gives the probability that a randomly sampled US adult is between 180 cm and 185 cm (about 5'11\" to 6'1\").\n\n\\end{itemize}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{3-5_continuous_distributions/figures/usHeightsHist180185/usHeightsHist180185}\n\\end{center}\n\n\n\\end{frame}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{From histograms to continuous distributions}\n\n\\begin{frame}\n\\frametitle{From histograms to continuous distributions}\n\nSince height is a continuous numerical variable, its \\hl{probability density function} is a smooth curve.\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{3-5_continuous_distributions/figures/fdicHeightContDist/fdicHeightContDist}\n\\end{center}\n\n\\end{frame}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{Probabilities from continuous distributions}\n\n\\begin{frame}\n\\frametitle{Probabilities from continuous distributions}\n\nTherefore, the probability that a randomly sampled US adult is between 180 cm and 185 cm can also be estimated as the shaded area under the curve.\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{3-5_continuous_distributions/figures/fdicHeightContDistFilled/fdicHeightContDistFilled}\n\\end{center}\n\n\n\\end{frame}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{frame}\n\\frametitle{By definition...}\n\nSince continuous probabilities are estimated as ``the area under the curve\", the probability of a person being exactly 180 cm (or any exact value) is defined as 0.\n\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{3-5_continuous_distributions/figures/fdicHeightContDist180}\n\\end{center}\n\n\\end{frame}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%", "meta": {"hexsha": "64cb52d2f680d4f8def723a7195a52215a43f304", "size": 1928, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/3-5_continuous_distributions/3-5_continuous_distributions.tex", "max_stars_repo_name": "sumitrmishra/data504", "max_stars_repo_head_hexsha": "e0cb3259f6dd362c9591375390c9d6e5d59689b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/3-5_continuous_distributions/3-5_continuous_distributions.tex", "max_issues_repo_name": "sumitrmishra/data504", "max_issues_repo_head_hexsha": "e0cb3259f6dd362c9591375390c9d6e5d59689b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/3-5_continuous_distributions/3-5_continuous_distributions.tex", "max_forks_repo_name": "sumitrmishra/data504", "max_forks_repo_head_hexsha": "e0cb3259f6dd362c9591375390c9d6e5d59689b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-20T07:26:35.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-20T07:26:35.000Z", "avg_line_length": 27.9420289855, "max_line_length": 165, "alphanum_fraction": 0.7110995851, "num_tokens": 434, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.8031737963569016, "lm_q1q2_score": 0.5846908049524336}}
{"text": "\\chapter{Ray tracing on phase space} \\label{chap:PS}\nRay tracing on phase space is a method which employs the phase space (PS) of the source and the target of the optical system.\nMoreover, it takes into account the trajectory that every ray follows during its propagation.\nBefore explaining the method, we introduce the PS concept.\n\\section{Phase space}\\label{sec:PSconcept}\nEvery ray in three dimensions can be described by three position coordinates and three direction coordinates. \nThe PS of an optical surface is a four-dimensional space characterized by two position and two direction coordinates. The position coordinates are two of the coordinates of the intersection point of the ray with the surface, while the direction coordinates are the momentum coordinates of the vector tangent to the ray projected on that optical surface \\cite{wolf2004geometric}.\n\\\\ \\indent \nIn two dimensions, every ray parametrization is obtained from two position and two direction coordinates. The PS of an optical line is described by one position and one direction coordinate. Hence, for two-dimensional systems, every ray in PS is described by a point in a two-dimensional space.\nGiven an optical line $\\lineai$, the ray position coordinate on PS is the $\\variabile{x}$-coordinate of the intersection point between the ray and the line $\\lineai$. The direction coordinate is the sine of the angle that the ray forms with respect to the normal \\vect{$\\boldsymbol{\\nu}$} of line $\\lineai$ multiplied by the index of refraction \\n. We choose \\vect{$\\boldsymbol{\\nu}$} always directed inside the same medium in which the incident ray travels. The PS is indicated with \\set{S}{}{}$=$\\set{Q}{}{}$\\times$\\set{P}{}{},\nwhere \\set{Q}{}{} is the set of the position coordinates \\variabile{q} and \\set{P}{}{} is the set of the direction coordinates $\\variabile{p}=\\variabile{n}\\sin{\\myangle}$, with $\\myangle\\in[-\\pi/2, \\pi/2]$ the angle between the ray segment inside the system and the normal measured counterclockwise.\nIn the following, the PS is considered only for the source $\\point{S}$ and the target $\\point{T}$ and for no other line of the optical system.\nThe coordinates of every ray on \\set{S}{}{} and \\set{T}{}{} are indicated with $(\\variabile{q}_1,\\variabile{p}_1)$ and $(\\variabile{q},\\variabile{p})$, respectively.\\\\ \\indent\n\\begin{figure}[t]\n  \\begin{minipage}[]{0.49\\textwidth}\n\\centering\n    \\includegraphics[width  = \\textwidth]{source_PS_cup1_rev.pdf}\n    \\caption{\\textbf{Source PS.} Five different paths can occur for the two-faceted cup.}\n    \\label{fig:sourcePS1}\n  \\end{minipage}\n\\hfill\n  \\begin{minipage}[]{0.49\\textwidth}\n\\centering\n    \\includegraphics[width=\\textwidth]{target_PS_cup.pdf}\n  \\caption{\\textbf{Target PS.} Five different paths can occur for the two-faceted cup.}\n   \\label{fig:targetPS1}\n \\end{minipage}\n\\end{figure}\nAs an example, in Figures \\ref{fig:sourcePS1} and \\ref{fig:targetPS1} we show the source and target PS of the two-faceted cup (in Figure \\ref{fig:cup}), sampled with $10^4$ random rays. The coordinates of every point correspond to the position and direction coordinates of a ray which are calculated using the ray tracing procedure. 
Furthermore, we store the path $\\Pi$ that every ray follows, where we refer to a path as the sequence of lines encountered by the ray.\nIn Figures \\ref{fig:sourcePS1} and \\ref{fig:targetPS1} a color is associated with every path.\nWe note that the source and target phase spaces are partitioned into different regions according to the path $\\Pi$ followed by the rays.\nGiven a path $\\Pi$, the corresponding regions are indicated with $\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ and $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ at source and target PS, respectively.\nRays that propagate through the two-faceted cup can follow $5$ different paths. Some rays are emitted from the source and arrive at the target without hitting any other line; they follow path $\\Pi_1= (1,4)$. These rays are depicted in red in the PS pictures. Some other rays can hit the left or the right reflector (line $2$ and $3$, respectively) once; their corresponding paths are $\\Pi_2 = (1,2,4)$ and $\\Pi_3 = (1,3,4)$, respectively. These rays are the green and blue dots in PS. Finally, there is the possibility that the rays have two reflections before hitting the target. They follow either path $\\Pi_4 = (1,2,3,4)$ or path $\\Pi_5 = (1,3,2,4)$ and they are depicted with the cyan and yellow points.\n\\\\ \\indent For the two-faceted cup, all light emitted by the source arrives at the target. In order to derive the photometric variables at the target we need to understand where light ends up, i.e., which parts of the target PS are illuminated by the source. Indeed, while the source PS is completely covered by rays, some parts of the target PS are not reached by any ray at all.\n%, that is \n%\\begin{equation}\n%\\begin{aligned}\n%\\mbox{\\insieme{S}} &= \\bigcup_{\\Pi} \\mbox{\\insieme{R}}_\\textrm{s}(\\Pi),\\\\\n%\\mbox{\\insieme{T}} &\\supset \\bigcup_{\\Pi} \\mbox{\\insieme{R}}_\\textrm{t}(\\Pi),\n%\\end{aligned}\n%\\end{equation}\n%where the unions are over all the possible paths.\nThis means that, while the luminance at the source PS is positive for any possible position and direction, the luminance at the target PS is positive only inside the regions $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$, for every path $\\Pi$, and it is equal to $0$ outside those regions. For this reason, from now on we will refer to $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ as the \\textit{positive luminance regions}. \\\\ \\indent\nIt is very important to remark that, although \\insieme{S} and \\insieme{T} have different ray distributions, the area covered by the rays is conserved. This follows from \\'{e}tendue conservation. From (\\ref{etendue2d}) we rewrite the two-dimensional \\'{e}tendue as:\n\\begin{equation}\nU = \\int_{\\variabile{x}^{\\textrm{min}}}^{\\variabile{x}^{\\textrm{max}}} \\int_{\\myangle^{\\textrm{min}}}^{\\myangle^{\\textrm{max}}} \\n \\cos(\\myangle)\\textrm{d}\\variabile{x}\\,\\textrm{d}\\myangle= \\int_{\\mbox{\\set{P}{}{}}}\\int_{\\mbox{\\set{Q}{}{}}} \\textrm{d}\\variabile{q}\\,\\textrm{d}\\variabile{p},\n\\end{equation}\nwhere we indicated with $\\variabile{x}^{\\textrm{min}}$ and $\\variabile{x}^{\\textrm{max}}$ the minimum and maximum ray position coordinates and with $\\myangle^{\\textrm{min}}$ and $\\myangle^{\\textrm{max}}$ the minimum and maximum angles of the rays with the normal to the target. The second equality holds since $\\textrm{d}\\variabile{p}= \\n\\cos \\myangle\\,\\textrm{d}\\myangle$.\nTherefore, in two dimensions, \\'{e}tendue can be seen as an area in PS. 
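Since the \\'{e}tendue of a region is just its area in PS, it can be estimated numerically from sampled boundary curves. The following is a minimal sketch (our illustration; the helper name \\texttt{etendue} is hypothetical) using the trapezoidal rule, the same quadrature used for the target-side computation below; for the full source strip $[-2,2]\\times[-1,1]$ it returns $8$.
\\begin{verbatim}
import numpy as np

def etendue(p, q_min, q_max):
    # Area between sampled boundary curves q_min(p) <= q_max(p),
    # integrated over p with the trapezoidal rule.
    return np.trapz(q_max - q_min, p)

# Full source PS of the two-faceted cup: q in [-2, 2] for every p in [-1, 1].
p = np.linspace(-1.0, 1.0, 101)
U_s = etendue(p, np.full_like(p, -2.0), np.full_like(p, 2.0))
print(U_s)  # 8.0
\\end{verbatim}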
\\'{E}tendue conservation leads to the conservation of the areas of regions with positive luminance.\\\\ \\indent\nFor the two-faceted cup in Figure \\ref{fig:cup}, \\set{S}{}{}$= [-2,2]\\times[-1,1]$, thus, the \\'{e}tendue at the source is $U_\\textrm{s}=8$ (see Figure \\ref{fig:sourcePS1}). Using the trapezoidal rule, we compute the total area covered by the positive luminance regions at the target, and we obtain $U_\\textrm{t}=8$, which numerically proves \\'{e}tendue conservation for the two-faceted cup. \\\\ \\indent \nIn the next section we provide a literature overview of a fundamental principle in non-imaging optics: the edge-ray principle.\n\\section{The edge-ray principle}\nThe goal in non-imaging optics is to transfer all light from the source aperture to the output aperture. Systems that satisfy this property are referred to as \\textit{ideal optical systems}.\nSeveral methods to design ideal optical systems are based on the edge-ray principle \\cite{welford1978problem, benitez1997design}. \nBasically, it states that all the light rays exiting the edges of the source will end at the edges of the target. \nThis guarantees that all light emitted from the source will arrive at the receiver, see Figure \\ref{fig:edge}. \n \\begin{figure}[h]\n  \\begin{center}\n  \\includegraphics[width= 3.5cm]{Edge-ray}\n  \\end{center}\n  \\caption{\\textbf{A lens receiving light.} The lens redirects light emitted from the source $\\textrm{A}\\textrm{B}$ to the receiver $\\textrm{D}\\textrm{E}$. \nRays that leave the edges of the source hit the edges of the target (blue and red rays). Rays coming from the interior of the source will end at the interior of the target (green rays) \\cite{wiki2}.}\n  \\label{fig:edge}\n\\end{figure}\n\\\\ \\indent\nIn $1985$, Mi{\\~n}ano proved the principle by using the PS of the source and the target of an optical system \\cite{minano1985two, minano1986design}. He proved the principle for systems in inhomogeneous media, where the index of refraction is a continuous function, so the map that connects the source and target phase spaces is a continuous map.\nIndicating with $\\map{M}{}{}(\\point{P})$ the optical map of a point $\\point{P}$, Mi{\\~n}ano showed that if $\\map{M}{}{}(\\partial\\mbox{\\set{S}{}{}})=\\partial\\mbox{\\set{T}{}{}}$ then \n$\\map{M}{}{}(\\mbox{\\set{S}{}{}})=\\mbox{\\set{T}{}{}}$ and vice versa. \n%Note that the trajectory of two rays in PS cannot cross. \nThe first version of the edge-ray principle \\cite{minano1986design} can be enunciated in two dimensions as follows:\n\\begin{lemma}{Edge-ray principle (version 1)}\\\\ \nSuppose that:\n\\begin{itemize}\n\\item[a)] There are two regions \\insieme{R}$_\\textrm{s}$ and \\insieme{R}$_\\textrm{t}$ in source and target PS with the same area such that \n$$\\map{M}{}{}(\\partial\\mbox{\\insieme{R}}_\\textrm{s}) = \\partial\\mbox{\\insieme{R}}_\\textrm{t};$$ \n\\item[b)] The refractive-index distribution $\\n$ is a continuous function; \n%\\item[c)] They have the direction cosine with respect to the optical axis greater than $0$;\n\\end{itemize}\nThen, the following relation holds: $$\\map{M}{}{}(\\mbox{\\insieme{R}}_\\textrm{s}) = \\mbox{\\insieme{R}}_\\textrm{t}.$$\n\\end{lemma} \nThe previous lemma claims that if there exists a map connecting the boundaries of two regions from source to target, then the interiors of those regions are also connected by the same map. 
Note that the second assumption in the previous lemma implies that the optical map is continuous in PS.\nHowever, for some optical systems, as for instance the compound parabolic concentrator (CPC), the ray mapping in PS is not continuous. This is due to multiple reflections that rays can encounter with the reflectors and implies that some rays at the edge of the source may not be mapped into rays at the edges of the target \\cite{davies1994edge}. \\\\ \\indent \nIn 1994, Ries and Rabl reformulated the edge-ray principle providing a version valid for all systems even if the ray map in PS is not continuous \\cite{Ries:2}. \nSuppose that \\insieme{R}$_\\textrm{s}(\\Pi)$ and $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ are the regions, corresponding to path $\\Pi$, at the source and the target PS, respectively. \nThey showed that, for a given path $\\Pi$, if the boundaries\n$\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ are mapped into the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$, then also the regions $\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ are mapped into the regions $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$.\nThen, to map \\set{S}{}{} to \\set{T}{}{} it is necessary and sufficient that the first version of the edge-ray principle is observed for all parts of \\set{S}{}{} and \\set{T}{}{} defined by the number of reflections \\cite{Ries:2}. \n\\begin{lemma}{Edge-ray principle (generalized version)}\\\\\nLet $(\\Pi_\\variabile{j})_{\\variabile{j}=1, \\cdots, \\npath}$ denote the $\\npath$ possible paths.\nEvery possible path corresponds to a certain number of reflections or refractions.\nLet us denote with \\insieme{R}$_\\textrm{s}(\\Pi_\\variabile{j})$ and \n\\insieme{R}$_\\textrm{t}(\\Pi_\\variabile{j})$ the regions at \\set{S}{}{} and \\set{T}{}{} associated with path $\\Pi_\\variabile{j}$ such that they are a partition of \\set{S}{}{} and \\set{T}{}{}, that is:\n\\begin{equation*}\n\\begin{aligned}\n\\mbox{\\set{S}{}{}} &= \\bigcup_{\\variabile{j}=1}^{\\npath} \\mbox{\\insieme{R}}_\\textrm{s}(\\Pi_\\variabile{j}), \\mbox{ with } \\mbox{\\insieme{R}}_\\textrm{s}(\\Pi_\\variabile{j})\\cap \\mbox{\\insieme{R}}_\\textrm{s}(\\Pi_\\variabile{i}) = \\emptyset \\mbox{ for } \\variabile{i}\\neq \\variabile{j}\\\\\n\\mbox{\\set{T}{}{}} & \\supset \\bigcup_{\\variabile{j}=1}^{\\npath} \\mbox{\\insieme{R}}_\\textrm{t}(\\Pi_\\variabile{j}), \\mbox{ with } \\mbox{\\insieme{R}}_\\textrm{t}(\\Pi_\\variabile{j})\\cap \\mbox{\\insieme{R}}_\\textrm{t}(\\Pi_{\\variabile{i}}) = \\emptyset \\mbox{ for } \\variabile{j}\\neq \\variabile{i}.\n\\end{aligned}\n\\end{equation*} \nThen, to map a source region into a target region, it is necessary and sufficient that the first version of the edge-ray principle is observed for all parts of \\set{S}{}{} and \\set{T}{}{}: \n\\begin{equation*}\n\\map{M}{}{}\\big(\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi_{\\variabile{j}})\\big) = \\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi_{\\variabile{j}}), \\quad \\forall \\variabile{j}\\in\\{1, \\cdots, \\npath\\}.\n\\end{equation*}\n\\end{lemma}\nHence, the edge-ray principle constitutes a tool for designing ideal systems and, for this purpose, it is sufficient that the rays of $\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ are transformed to the rays of $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ for every path $\\Pi$ \\cite{minano1992new}. \n\\\\ \\indent Using the PS concept and the edge-ray principle we develop a new ray tracing method. 
\nA non-uniform distribution of the rays is constructed by developing a triangulation refinement at the source PS which is explained in the next section. \nThe triangulation refinement provides more rays close to the boundaries of the regions $\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$, each of which is formed by the rays that follow the same path $\\Pi$.\n\\section{Phase space ray tracing}\\label{sec:PS_raytracing}\nPS ray tracing takes advantage of the fact that there exists an optical map\n$\n\\map{M}{}{}: \\mbox{\\set{S}{}{}} \\rightarrow \\mbox{\\set{T}{}{}}\n$\n such that\n\\begin{equation}\\label{eq:map1}\n\\map{M}{}{}(\\variabile{q}_1,\\variabile{p}_1)=(\\variabile{q},\\variabile{p}),\n\\end{equation} for every $(\\variabile{q}_1,\\variabile{p}_1)\\in$ \\set{S}{}{}.\nFor very simple systems, like the two-faceted cup, it is possible to determine an analytic expression for $\\map{M}{}{}$.\nThis is not the case for most of the optical systems we deal with. In these cases it is necessary to implement ray tracing to calculate how light is distributed at the target.\nAs mentioned in the previous section, for some optical systems $\\map{M}{}{}$ is not even continuous.\nNevertheless, given a path $\\Pi$, the restriction of $\\map{M}{}{}$ to \\insieme{R}$_\\textrm{s}(\\Pi)$, \ni.e., $\\map{M}{}{}(\\Pi): \\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)\\rightarrow \n\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ is a continuous and bijective map. \nThe edge-ray principle guarantees that $\\map{M}{}{}(\\Pi)$ maps $\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ onto $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ preserving topological features. In particular, the boundary $\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ is mapped onto the boundary $\\partial$\\insieme{R}$_\\textrm{t}(\\Pi)$. %, see \\cite{Ries, Davies, Minano}.\nEmploying the maps $\\map{M}{}{}(\\Pi)$ for all the possible paths $\\Pi$, the output light distribution is determined. Therefore, the photometric variables at the target can be calculated.\n\\\\ \\indent The luminance $L(\\variabile{q}, \\variabile{p})$ at the target PS is given by:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:PSluminance}\nL(\\variabile{q}, \\variabile{p}) &> 0  \\mbox{  for } (\\variabile{q}, \\variabile{p})\\in \\mbox{\\insieme{R}}_\\textrm{t}(\\Pi) \\mbox{ for some path } \\Pi,\\\\ \\vspace{0.5 cm}\nL(\\variabile{q}, \\variabile{p}) &= 0  \\mbox{  otherwise}.\n\\end{aligned}\n\\end{equation}\nThe target intensity along a given direction $\\variabile{p}=\\const{const}$ is computed through an integration of the target luminance $L(\\variabile{q}, \\variabile{p})$ over $\\variabile{q}$ and it is defined in \\set{T}{}{}$\\,$ by:\n\\begin{equation}\n\\label{eq:PSintensity}\nI_{\\textrm{PS}}(\\variabile{p}) = \\int_{\\mbox{\\set{Q}{}{}}}L(\\variabile{q}, \\variabile{p}) \\textrm{d}\\variabile{q}.\n\\end{equation}\nNote that, while in the real space the intensity is defined as a function of the angular coordinate $\\myangle$ (see Chapter \\ref{chap:Illumination optics}), in PS the intensity is defined as a function of the direction coordinate $\\variabile{p}=\\n\\sin(\\myangle)$. The previous equation implies that, assuming a Lambertian source, the problem of computing the target intensity is reduced to the problem of calculating the boundaries\n$\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ for all possible paths $\\Pi$. 
Hence, the intensity along the direction $\\variabile{p}= \\textrm{const.}$ is given by the sum of the lengths of the intervals in which the line $\\variabile{p}=\\textrm{const.}$ intersects the support of the luminance. For example, if two intersection points between the line $\\variabile{p}= \\textrm{const.}$ and the boundary $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ are found, we indicate their position coordinates with $\\variabile{q}^\\textrm{min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{max}(\\Pi,\\variabile{p})$, where $\\variabile{q}^\\textrm{min}(\\Pi,\\variabile{p})<\\variabile{q}^\\textrm{max}(\\Pi,\\variabile{p})$; then, using (\\ref{eq:PSluminance}), we obtain that (\\ref{eq:PSintensity}) reduces to:\n\\begin{equation}\\label{eta2}\nI_{\\textrm{PS}}(\\variabile{p}) = \\sum_{\\Pi}\\int_{\\variabile{q}^\\textrm{\\,min}(\\Pi, \\variabile{p})}^{\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})}L(\\variabile{q}, \\variabile{p})\\textrm{d}\\variabile{q} = \\sum_{\\Pi}\\big (\\variabile{q}^\\textrm{max}(\\Pi,\\variabile{p})-\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})\\big )\\,,\n\\end{equation}\nwhere the sum is over all the possible paths and the second equality holds as we assume a Lambertian source with $L=1$ in \\insieme{R}$_\\textrm{t}(\\Pi).$ In case more than two intersection points occur, a generalized equation needs to be used for calculating the intensity. \nNote that for every single ray only one path is possible, as we are assuming that all the lines are reflective.\nBecause of this, the regions \\insieme{R}$_\\textrm{t}(\\Pi)$ do not overlap, i.e.\n\\begin{equation}\n\\bigcap_{\\Pi}\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)= \\emptyset,\n\\end{equation}\nwhere the intersection is over all possible paths. \\\\ \\indent\nFrom equation (\\ref{eta2}) we note that, using the PS structure, only the rays on the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ are required for obtaining the target intensity profile.\n%Instead of tracing the rays randomly (as in MC ray tracing) or following a regular source distribution (as in QMC ray tracing)\nThe aim is to construct a ray tracing procedure that allows us to trace fewer rays overall and more rays close to the discontinuity of the luminance, i.e., close to the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$.\nFor this purpose, we start from a triangulation made of only two triangles; then a triangulation refinement at \\set{S}{}{} is defined as explained in the following. \\\\ \\indent\nThe procedure starts with coordinates $(\\variabile{q}_1^\\variabile{k},\\variabile{p}_1^\\variabile{k})_{\\variabile{k}=1, \\cdots, 4}$ of the four corner points of \\set{S}{}{}. For each of them, the corresponding path $(\\Pi^\\variabile{k})_{\\variabile{k}=1, \\cdots, 4}$ is calculated. Next, the grid is divided into two equal triangles by joining two opposite vertices (in our simulation we always trace the north-west to south-east diagonal to define the new triangles). For each triangle, the rays located at its corners are traced. If the paths corresponding to\nthose rays are not all equal, one or more boundaries\n$\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ are expected to cross the triangle.\nIn that case, the middle points $(\\variabile{q}_1^\\variabile{k},\\variabile{p}_1^\\variabile{k})_{\\variabile{k}=5,6,7}$ of each side of the triangle are added and\nthe three corresponding rays are traced (unless they were already traced in the previous steps). 
Each refinement step leads to four new triangles (see Figure \\ref{fig:refinement}).\n \\begin{figure}[h]\n \\begin{minipage}[h]{\\textwidth}\n\\centering\n    \\includegraphics[width  = 0.9\\textwidth]{region}\n  \\caption{\\textbf{Triangulation refinement.} If the rays related to the vertices of a triangle follow different paths, a new refinement step is required.\n   Each refinement step leads to four new triangles.}\n   %The parameters values are $\\varepsilon_{\\variabile{q}_{max}}~=~ 2$, $\\varepsilon_{\\variabile{p}_{max}}= 1$, $\\varepsilon_{\\variabile{q}_{min}}= 4$ and $\\varepsilon_{\\variabile{p}_{min}}=2$.}\n  \\label{fig:refinement}\n\\end{minipage}\\\\\n\\begin{minipage}[h]{\\textwidth}\n\\centering\n    \\includegraphics[width  = 0.9\\textwidth]{region_inside}\n  \\caption{\\textbf{Triangulation refinement.} The red line encloses a region of rays that follow the path $\\Pi_2$ and is completely located inside a triangle.\n  The algorithm is not able to detect that region and a further refinement is required.}\n   \\label{fig:region inside}\n\\end{minipage}\n  \\end{figure} \\\\ \\indent\nWhen all the rays corresponding to the corners of each triangle follow the same path, it is not necessary to refine the triangles anymore.\nSince the triangles very close to the boundaries are always crossed by at least one boundary, at least two different paths are found for the rays at the vertices of those triangles. \nBecause of this, the procedure could continue indefinitely; therefore, two parameters $\\varepsilon_\\variabile{q}^\\textrm{min}$ and $\\varepsilon_\\variabile{p}^\\textrm{min}$ are introduced to define a stopping criterion.\nThe algorithm stops when the length of the sides of the triangles is smaller than $\\varepsilon_\\variabile{q}^\\textrm{min}$ and $\\varepsilon_\\variabile{p}^\\textrm{min}$ in the $\\variabile{q}$ and $\\variabile{p}$ directions. \\\\ \\indent \nWe indicate all the possible paths with $(\\Pi_{\\variabile{j}})_{\\variabile{j}=1, \\cdots, \\npath}$, where $\\npath$ is the maximum number of paths\\footnote{We remind the reader that we indicate with $\\Pi^\\variabile{k} = \\Pi(\\variabile{q}_1^\\variabile{k},\\variabile{p}_1^\\variabile{k})$ the path followed by rays with coordinates $(\\variabile{q}_1^\\variabile{k},\\variabile{p}_1^\\variabile{k})$ in source PS. 
Note that it\ncan happen that $\\Pi^\\variabile{k}= \\Pi^\\variabile{h}$ for $\\variabile{k}\\neq\\variabile{h}.$\nWith $(\\Pi_{\\variabile{j}})_{\\variabile{j}=1, \\cdots, \\npath}$ we indicate all the possible $\\npath$ paths that can occur; therefore $\\Pi_{\\variabile{i}}\\neq\\Pi_{\\variabile{j}}$ if $\\variabile{i}\\neq\\variabile{j}$.} (\\npath = 5 for the two-faceted cup in Figure \\ref{fig:cup}).\nIf the size of the triangles is too big, it can happen that a region formed by rays that follow a path $\\Pi_\\variabile{j}$ is located completely inside a triangle whose vertices are related to another path $\\Pi_\\variabile{i}$ with $\\variabile{j} \\neq  \\variabile{i}$, see Figure \\ref{fig:region inside}.\nTo avoid this, two parameters $\\varepsilon_\\variabile{q}^{\\textrm{max}}$ and $\\varepsilon_\\variabile{p}^{\\textrm{max}}$ are defined for the $\\variabile{q}_1$-axis and the $\\variabile{p}_1$-axis, respectively.\nWhen the lengths of the sides of a triangle are greater than these parameters, the triangle is refined even if its vertices correspond to the same path.\nThe values of the parameters $\\varepsilon_\\variabile{q}^\\textrm{min}$, $\\varepsilon_\\variabile{p}^\\textrm{min}$, $\\varepsilon_\\variabile{q}^\\textrm{max}$ and $\\varepsilon_\\variabile{p}^\\textrm{max}$ determine the number of rays traced.\nThus, on the one hand, decreasing $\\varepsilon_\\variabile{q}^\\textrm{min}$ and $\\varepsilon_\\variabile{p}^\\textrm{min}$ traces more rays close to the boundaries;\non the other hand, decreasing the values of $\\varepsilon_\\variabile{q}^\\textrm{max}$ and $\\varepsilon_\\variabile{p}^\\textrm{max}$ traces more rays in the interior of the regions. \\\\ \\indent The triangulation refinement is provided by Algorithm \\ref{alg:triangulation}, which uses the two recursive functions L$\\footnotesize{\\textrm{EFT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$ and  R$\\footnotesize{\\textrm{IGHT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$.\nThe function L$\\footnotesize{\\textrm{EFT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$ is defined in Algorithm \\ref{alg:left_triangle} (see Figure \\ref{fig:triangulation_left}). \nA similar procedure gives the function R$\\footnotesize{\\textrm{IGHT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$ (see Figure \\ref{fig:triangulation_right}).\n\\begin{algorithm}[h]\n\\caption{Triangulation refinement algorithm}\\label{alg:triangulation}\nInitialize $\\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}},$ and\n $\\varepsilon_{\\variabile{p}}^{\\textrm{max}}$, Ray = [empty];\\\\\nDefine a structure that contains related data in fields. 
\\\\\n\\Comment $\\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}},$ and\n $\\varepsilon_{\\variabile{p}}^{\\textrm{max}}$ are fixed parameters needed to stop the procedure\\\\\n\\Comment Ray: structure that contains all the data of rays traced (i.e., position, direction and path).\n\\begin{algorithmic}[1]\n%\\Procedure{Triangulation refinement}{}\n\\State $(\\variabile{q}_1^1, \\variabile{p}_1^1)= (-\\variabile{a}, -1)$ \\Comment \\mbox{left bottom corner of source PS \\;}\n\\State $(\\variabile{q}_1^2, \\variabile{p}_1^2) = (\\variabile{a}, -1)$ \\Comment \\mbox{right bottom corner of source PS}\n\\State $(\\variabile{q}_1^3, \\variabile{p}_1^3)  = (\\variabile{a}, 1)$ \\Comment \\mbox{right upper corner of source PS\\;\\;}\n\\State $(\\variabile{q}_1^4, \\variabile{p}_1^4) = (-\\variabile{a}, 1) $ \\Comment \\mbox{left upper corner of source PS\\;\\;\\;\\;\\,}\n\\For{$ \\variabile{k}= 1 \\to 4 $}\n\\State Trace the ray with initial coordinates $(\\variabile{q}_1^\\variabile{k}, \\variabile{p}_1^{\\variabile{k}})$ in \\set{S}{}{};\n\\State Calculate the corresponding path $\\Pi^{\\variabile{k}}$; $\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\\quad$\n\\Comment Store the information found in the structure Ray;\n\\State $\\textrm{Ray}.\\variabile{q}= [\\mbox{\\textrm{Ray}}.\\variabile{q}, \\variabile{q}_1^\\variabile{k}]$;\n\\State $\\textrm{Ray}.\\variabile{p}= [\\mbox{\\textrm{Ray}}.\\variabile{p}, \\variabile{p}_1^\\variabile{k}]$;\n\\State $\\textrm{Ray}.\\Pi = [\\mbox{\\textrm{Ray}}.\\Pi, \\Pi^{\\variabile{k}}]$;\n\\EndFor\n\\State VL $= [1, 2, 4]$ \\Comment{VL vertices of the left triangle\\;\\,\\,\\,}\n\\State VR $= [2,3, 4]$   \\Comment{VR vertices of the right triangle}\n\\State \\Call{Left Triangle}{VL, Ray, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$}\\Comment{Refine the left triangle\\;\\,\\,} \n\\State \\Call{Right Triangle}{VR, \\textrm{Ray}, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$} \\Comment{Refine the right triangle} \\\\\n\\Return Ray;\n%\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\\begin{algorithm}[h]\n\\caption{Algorithm for the refinement of the left triangles}\\label{alg:left_triangle}\n\\begin{algorithmic}[1]\n\\Procedure{Left triangle}{VL, \\textrm{Ray}, $\\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}}^{\\textrm{max}}$}\n\\State $\\variabile{q}_1^1= \\textrm{Ray}.\\variabile{q}(\\textrm{VL}(1))$, $\\variabile{p}_1^1= \\textrm{Ray}.\\variabile{p}(\\textrm{VL}(1))$\n\\State $\\variabile{q}_1^2= \\textrm{Ray}.\\variabile{q}(\\textrm{VL}(2))$, $\\variabile{p}_1^2= \\textrm{Ray}.\\variabile{p}(\\textrm{VL}(2))$\n\\State $\\variabile{q}_1^3= \\textrm{Ray}.\\variabile{q}(\\textrm{VL}(3))$, $\\variabile{p}_1^3= \\textrm{Ray}.\\variabile{p}(\\textrm{VL}(3))$\n\\State $\\textrm{dist}_\\variabile{q}= \\mbox{$|\\variabile{q}_1^2-\\variabile{q}_1^1|$}$\n\\State $\\textrm{dist}_\\variabile{p}= \\mbox{$|\\variabile{p}_1^{3}-\\variabile{p}_1^1|$}$\n\\State RefineTriangle $=$  false;\n\\State DifferentPath $=$  
false;\n\\If{$\\textrm{dist}_\\variabile{q}>\\varepsilon_{\\variabile{q}}^{\\textrm{max}} $ or $\\textrm{dist}_\\variabile{p} >\\varepsilon_{\\variabile{p}}^{\\textrm{max}}$}\n\\State RefineTriangle $=$  true;\n\\EndIf\n\\For{$ \\variabile{k} = 1 \\to 2 $}\n\\If{$\\Pi^{\\variabile{k}} \\neq \\Pi^{\\variabile{k}+1}$}\n\\State DifferentPath $=$  true;\n\\EndIf\n\\EndFor\n\\If{$\\textrm{dist}_\\variabile{q}>\\varepsilon_{\\variabile{q}}^{\\textrm{min}}$ or $\\textrm{dist}_\\variabile{p}>\\varepsilon_{\\variabile{p}}^{\\textrm{min}}$}\n\\State RefineTriangle $=$ RefineTriangle or DifferentPath;\n\\Else\n\\If{(DifferentPath is true)}\n\\State \\textrm{Ray}(\\textrm{VL}).boundary $=$ true; \\Comment{A boundary crosses the triangle}\n\\EndIf\n\\EndIf\n\\If {(RefineTriangle is true)}\n\\State Define the points at the middle of each side of the triangle\n\\State $(\\variabile{q}_1^5, \\variabile{p}_1^5) = ((\\variabile{q}_1^1+\\variabile{q}_1^2)/2, \\variabile{p}_1^1)$\n\\State $(\\variabile{q}_1^6, \\variabile{p}_1^6) = (\\variabile{q}_1^5, (\\variabile{p}_1^1+\\variabile{p}_1^3)/2)$\n\\State $(\\variabile{q}_1^7, \\variabile{p}_1^7) = (\\variabile{q}_1^1, \\variabile{p}_1^6)$\n\\For {$\\variabile{k} = 5 \\to 7 $}\n\\If {The ray with coordinates $(\\variabile{q}_1^\\variabile{k}, \\variabile{p}_1^\\variabile{k})$ is not traced yet}\n\\State Trace the ray with initial coordinates: $(\\variabile{q}_1^\\variabile{k}, \\variabile{p}_1^\\variabile{k})$ in PS;\n\\State Compute the corresponding path $\\Pi^{\\variabile{k}}$;\n\\State Store the ray's coordinates $\\mbox{\\textrm{Ray}}.\\variabile{q}= [\\mbox{Ray}.\\variabile{q}, \\variabile{q}_1^\\variabile{k}]$;\n\\State Store the ray's coordinates $\\mbox{\\textrm{Ray}}.\\variabile{p}= [\\mbox{Ray}.\\variabile{p}, \\variabile{p}_1^\\variabile{k}]$;\n\\State Store the ray path $\\mbox{\\textrm{Ray}}.\\Pi= [\\mbox{\\textrm{Ray}}.\\Pi, \\Pi^{\\variabile{k}}]$;\n\\EndIf\n\\EndFor\n\\State \\Call{Left Triangle}{$[\\textrm{VL}(1),5, 7], \\mbox{\\textrm{Ray}}, \\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}}^{\\textrm{max}}$};\n\\State \\Call{Left Triangle}{$[5,\\textrm{VL}(2), 6], \\mbox{\\textrm{Ray}}, \\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}}^{\\textrm{max}}$};\n\\State \\Call{Left Triangle}{$[7,6,\\textrm{VL}(3)], \\mbox{\\textrm{Ray}}, \\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}}^{\\textrm{max}}$};\n\\State \\Call{Right Triangle}{$[5,6, 7], \\mbox{\\textrm{Ray}}, \\varepsilon_{\\variabile{q}}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}}^{\\textrm{min}}, \\varepsilon_{\\variabile{p}}^{\\textrm{max}}$};\n\\EndIf \\\\\n\\Return \\textrm{Ray};\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\\\\ \\indent Figure \\ref{fig:triangulation_refinement} shows an example of a triangulation refinement at the source PS of the two-faceted cup in Figure \\ref{fig:cup}. 
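Before discussing the parameter values used in this example, we give a compact Python sketch of the refinement recursion (a simplified illustration of ours, not the implementation behind the figures): \\texttt{trace\\_path(q, p)} is a hypothetical routine returning the path $\\Pi$ of the ray with source PS coordinates $(q,p)$, and the boundary bookkeeping of Algorithm \\ref{alg:left_triangle} is reduced to a comment.
\\begin{verbatim}
# Hypothetical sketch of the triangulation refinement; trace_path is assumed.
def refine(tri, trace_path, eps_min, eps_max, rays):
    # tri: three (q, p) vertices of a triangle in source PS.
    # rays: dict mapping traced (q, p) coordinates to their path.
    paths = []
    for v in tri:
        if v not in rays:               # trace each ray only once
            rays[v] = trace_path(*v)
        paths.append(rays[v])
    qs = [q for q, _ in tri]
    ps = [p for _, p in tri]
    dq, dp = max(qs) - min(qs), max(ps) - min(ps)
    too_big = dq > eps_max[0] or dp > eps_max[1]
    at_min_size = dq < eps_min[0] and dp < eps_min[1]
    mixed = len(set(paths)) > 1         # a boundary crosses the triangle
    if at_min_size or not (too_big or mixed):
        return  # resolved; the full algorithm would mark boundary rays here
    (q1, p1), (q2, p2), (q3, p3) = tri  # side midpoints: four sub-triangles
    m12 = ((q1 + q2) / 2, (p1 + p2) / 2)
    m23 = ((q2 + q3) / 2, (p2 + p3) / 2)
    m31 = ((q3 + q1) / 2, (p3 + p1) / 2)
    for sub in (((q1, p1), m12, m31), (m12, (q2, p2), m23),
                (m31, m23, (q3, p3)), (m12, m23, m31)):
        refine(sub, trace_path, eps_min, eps_max, rays)
\\end{verbatim}
Starting from the two triangles that split \\set{S}{}{} along a diagonal, with an empty dictionary of traced rays, reproduces the behaviour of Algorithms \\ref{alg:triangulation} and \\ref{alg:left_triangle} up to bookkeeping details.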
\nFor this optical system, the width of the $\\variabile{q}_1$-axis in source PS is two times the width of the $\\variabile{p}_1$-axis.\nThus, our choice is $\\varepsilon_\\variabile{p}^{\\textrm{min}}=\\frac{1}{2}\\varepsilon_\\variabile{q}^{\\textrm{min}}$ and $\\varepsilon_\\variabile{p}^{\\textrm{max}} = \\frac{1}{2}\\varepsilon_\\variabile{q}^{\\textrm{max}}$\nwith $\\varepsilon_\\variabile{q}^{\\textrm{min}}=0.1$ and $\\varepsilon_\\variabile{q}^{\\textrm{max}}=1$.\n \\begin{figure}[h]\n \\begin{minipage}[h]{0.48\\textwidth}\n\\centering\n    \\includegraphics[width=\\textwidth]{Triangulation_left}\n    \\caption{\\textbf{Left triangulation refinement.} The algorithm used is the recursive function L$\\footnotesize{\\textrm{EFT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$.}\n    \\label{fig:triangulation_left}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.48\\textwidth}\n\\centering\n    \\includegraphics[width = \\textwidth]{Triangulation_right}\n    \\caption{\\textbf{Right triangulation refinement.} The algorithm used is the recursive function R$\\footnotesize{\\textrm{IGHT}}$ T$\\footnotesize{\\textrm{RIANGLE}}$.}\n    \\label{fig:triangulation_right}\n\\end{minipage}\n\\end{figure}\n\\hfill\n\\begin{figure}[h]\n  \\begin{center}\n  \\includegraphics[width=0.9\\textwidth]{triangulation_source}\n  \\end{center}\n  \\caption{\\textbf{Triangulation refinement of source phase space.} \nNear the boundaries more rays are traced.\n    The values of the parameters are $\\varepsilon_{\\variabile{q}_1}^\\textrm{min}~=~ 0.1$, $\\varepsilon_{\\variabile{q}_1}^\\textrm{max}~=~1$, $\\varepsilon_{\\variabile{p}_1}^\\textrm{min}= \\varepsilon_{\\variabile{q}_1}^\\textrm{min}/2$ and $\\varepsilon_{\\variabile{p}_1}^\\textrm{max} = \\varepsilon_{\\variabile{q}_1}^\\textrm{max}/2$.}\n   \\label{fig:triangulation_refinement}\n  \\end{figure}\n\\\\ \\indent The triangulation refinement allows finding \\textit{all} the possible paths $(\\Pi_\\variabile{j})_{\\variabile{j} = 1, \\cdots, \\npath}$ and their corresponding regions \\insieme{R}$_\\textrm{s}(\\Pi_\\variabile{j})_{\\variabile{j} = 1, \\cdots, \\npath}$. Using the edge-ray principle, we conclude that also the regions $\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi_\\variabile{j})_{\\variabile{j} = 1, \\cdots, \\npath}$ at the target are determined and only the rays close to the boundaries\n$\\partial\\mbox{\\insieme{R}}_\\textrm{s}$ need to be considered to obtain the target ray distribution.\n\\section{Conclusions}\nIn this chapter we introduced the PS concept. \nWe explained a new ray tracing method based on the source and the target PS representation. \nIn PS every point corresponds to a unique ray. \nThe coordinates of every point correspond to the initial ray position $\\variabile{q}_1$ and the initial ray direction $\\variabile{p}_1 = \\sin\\myangle_1$ (as $\\n_1 = 1$) expressed with respect to the normal of the source. The method also takes into account the paths followed by every ray traced.\nConsidering only reflection, every single ray follows only one path and, therefore, the PS regions do not overlap. \n\\\\ \\indent\nAs an example, we provided the source and the target PS representation of the two-faceted cup.\nThe edge-ray principle guarantees that all the rays that follow the same path are located in the same regions in PS. If we know these regions at the source we can determine the corresponding regions at the target. 
\nIt is sufficient to map the boundaries at the source $\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ to obtain their corresponding target boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$. \\\\ \\indent\nThe boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ are particularly relevant because there the luminance jumps from $0$ to a positive value. \nAssuming a Lambertian source, only the rays at the boundaries are needed to compute the target intensity. \nBased on this idea, a triangulation in \\set{S}{}{} is constructed such that the rays closest to $\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$\nare selected and more rays in their vicinity are created to get progressively better estimates of the boundaries.\n\\\\ \\indent In Figure \\ref{fig:three_distributions} we show three different ray distributions on the source PS of the two-faceted cup. In Figure \\ref{fig:mc_sample}, $10^3$ random points are shown. MC ray tracing is based on this random distribution of the initial ray set. In Figure \\ref{fig:qmc_sample}, $10^3$ points of a two-dimensional Sobol sequence are shown. \nSince Sobol sequences are defined in a unit square, we scaled them such that the whole source PS \\set{S}{}{}$=[-2, 2]\\times[-1, 1]$ is covered by rays. Such regular distributions can lead to several advantages for the computation of the target intensity (see Section \\ref{sec:QMC}). Finally, Figure \\ref{fig:ps_sample} shows a non-uniform distribution of rays at the source PS on which PS ray tracing is based. Such a distribution is obtained from the triangulation refinement explained in the previous section. The procedure requires tracing more rays close to the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{s}(\\Pi)$ and only a few rays in the interiors of the regions. From the edge-ray principle, it follows that these rays will be located close to the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ of the regions at the target PS. The target PS intensity is calculated using only the rays that are located at the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$. Thus, in order to obtain the intensity profile at the target, the boundaries $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ need to be determined.\\\\ \\indent \nIn the next chapter we provide two different approaches to find $\\partial\\mbox{\\insieme{R}}_\\textrm{t}(\\Pi)$ using a set of rays given by the triangulation refinement.\n\\begin{figure}[h]\n \\begin{subfigure}[t]{\\textwidth}\n\\centering\n    \\includegraphics[width=0.57\\textwidth]{MC_source_cup}\n    \\caption{\\textbf{MC grid.} $10^3$ rays randomly distributed (MC ray tracing).}\n    \\label{fig:mc_sample}\n\\end{subfigure}\n\\hfill\n\\\\\n\\begin{subfigure}[t]{\\textwidth}\n\\centering\n    \\includegraphics[width = 0.57\\textwidth]{QMC_source_cup}\n    \\caption{\\textbf{QMC grid}. 
$10^3$ rays distributed as the points of a Sobol sequence (QMC ray tracing).}\n    \\label{fig:qmc_sample}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{\\textwidth}\n\\centering\n\\includegraphics[width=0.57\\textwidth]{PS_source_cup1_rev}\n\\caption{\\textbf{PS grid.} $1.5\\cdot10^3$ rays distributed using the triangulation refinement (PS ray tracing).}\n\\label{fig:ps_sample}\n\\end{subfigure}\n\\caption{\\textbf{Three different ray distributions.} Source PS of the two-faceted cup.}\n\\label{fig:three_distributions}\n\\end{figure}\n\n% Compare all the PS\n% Definition of luminance intensity and etendue\n%\\indent A similar method as described in this chapter is presented by Moore, \\cite{moore2013methods}. In Moore's method each ray leaves the source at the same position while the angle coordinate changes. The path followed by the rays is taken into account and an interpolation is required to finalize the illumination pattern.\n% This interpolation can affect the efficiency of the method. Our method employs the distribution of the rays at the target phase space and avoids using any interpolation.\n%Moreover, a criterion to stop the algorithm is provided in such a way that no more rays than necessary are traced. This makes ray tracing in phase space more accurate compared with Moore's procedure.\n% Finally, we claim that PS ray tracing is also more accurate than the ray tracing procedure proposed by Moore (2013), \\cite{moore2013methods}.\n%The novelty of our approach compared to the method used by Moore, is briefly explained below.\n%First, to compute the output intensity, we employ the phase space of the target. This avoids the use of any interpolation to compute the photometric variables and therefore, more accurate results are obtained.\n%Second, in \\cite{moore2013methods} all rays that leave the source start at the same position and only a sampling angular range is given. In our approach a rectangular source is considered thus, both the angular and spatial coordinates of each ray change. This extra variable can produce very irregular shapes of the regions at target phase space. 
To overcome this issue, we employ the edge-ray principle and we consider the regions at source phase space where the distribution of the rays is much more regular and the corresponding boundaries are easily computed.\n%As a consequence, our procedure is suitable to compute the output intensity as function of both the angular or the spatial coordinates.\n\n\n% We need to compute the boundaries, explained in next chapter\n", "meta": {"hexsha": "79af14d6c89e198838e20bd7bbd84deb6f4f4c66", "size": 39358, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/PS_ray_tracing.tex", "max_stars_repo_name": "melaniafilosa/ps_raytracing", "max_stars_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/PS_ray_tracing.tex", "max_issues_repo_name": "melaniafilosa/ps_raytracing", "max_issues_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/PS_ray_tracing.tex", "max_forks_repo_name": "melaniafilosa/ps_raytracing", "max_forks_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 110.2464985994, "max_line_length": 1120, "alphanum_fraction": 0.7336246761, "num_tokens": 12149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569016, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5846908002123666}}
{"text": "\\section{Principal Component Analysis}\n\n\\begin{enumerate}[label=\\alph*), leftmargin=*]\n%% a)\n\\item\n%\n\nThe singular values of $\\mathbf{X}$ and $\\mathbf{X}_{noise}$ as well as their squared error between each singular value\nare depicted in figure \\ref{fig:2_3_a}.\n\nThe noiseless input matrix, $\\mathbf{X}$, has 3 non-zero singular values, which reflect its rank.\nThe noise corrupted matrix, $\\mathbf{X}_{noise}$, has 3 dominant singular values, corresponding to the eigenvectors spanning\nthe signal subspace, while the rest non-zero singular values correspond to the noise subspace. Moreover, the signal subspace\nsingular values are offset from the true singular values of $\\mathbf{X}$, due to the noise corruption.\n\nThe noise subspace eigenvalues are half the magnitude compared to the signal subspace eigenvalues and as a result they\ncan be distinguished by thresholding. Nonetheless, if the noise power is increased and its singular values are of comparable\nmagnitude to the signal subspace eigenvalues, then the rank of $\\mathbf{X}_{noise}$ becomes hard to identify.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/a/svd}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/a/error}\n    \\end{subfigure}\n    \\caption{Singular Value Decomposition: squared prediction error for corrupted signal.}\n    \\label{fig:2_3_a}\n\\end{figure}\n\n\n\n%% b)\n\\item\n%\n\nLow-rank approximation of the noisy data $\\mathbf{X}_{noise}$ are obtained by keeping only its $r$ most principle components.\nThis denoising operation relies on the assumption that the $r$ most significant components can adequately explain the data,\nwhile the rest are pure noise. Figure \\ref{fig:2_3_b} shows the approximation error\n($\\|\\mathbf{X} - \\tilde{\\mathbf{X}}_{noise}\\|_{F}$) for different values of $r$. Interestingly, the error is minimised\nfor $r = r_{true} = 3$, demostrating the power of this method, Principle Component Analysis (PCA).\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/b/low-rank_X}\n    \\caption{Singular Value Decomposition: squared prediction error for corrupted signal.}\n    \\label{fig:2_3_b}\n\\end{figure}\n\n%% c)\n\\item\n%\n\nThe parameter matrix $\\mathbf{B}$ estimation is performed using OLS and PCR methods.\nThe estimation errors between $\\mathbf{Y}$-$\\mathbf{Y}_{OLS}$ and $\\mathbf{Y}$-$\\mathbf{Y}_{PCR}$ for the train and test datasets\nare illustrated for different values of $r$ in figure \\ref{fig:2_3_c}. We notice that for $r \\geq 3$ the OLS and PCR methods\nare score equally well at both the train and the test datasets. 
In more detail, while OLS performs better on the training set by $0.4\\%$,\nPCR outperforms it by $0.7\\%$ on the test set.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/c/ols_vs_pcr_train}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/c/ols_vs_pcr_test}\n    \\end{subfigure}\n    \\caption{Singular Value Decomposition: train \\& test error.}\n    \\label{fig:2_3_c}\n\\end{figure}\n\n%% d)\n\\item\n%\n\nThe OLS and PCR models are evaluated over an ensemble of 100 test realisations of the stochastic process generating\n$\\mathbf{X}_{noise}$. Figure \\ref{fig:2_3_d} summarises the prediction errors of the two models, verifying the results of the\nprevious part, where PCR outperforms OLS by $1.2\\%$.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/principal-component-analysis/assets/d/regval}\n    \\caption{Singular Value Decomposition: model evaluation.}\n    \\label{fig:2_3_d}\n\\end{figure}\n\n%\n\\end{enumerate}", "meta": {"hexsha": "babc1293ff4d87a882623dbbc333a60cf32fb310", "size": 4144, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/report/parametric-and-line-spectra/principal-component-analysis/index.tex", "max_stars_repo_name": "filangel/ASPMI", "max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z", "max_issues_repo_path": "tex/report/parametric-and-line-spectra/principal-component-analysis/index.tex", "max_issues_repo_name": "AmjadHisham/ASPMI", "max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/report/parametric-and-line-spectra/principal-component-analysis/index.tex", "max_forks_repo_name": "AmjadHisham/ASPMI", "max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z", "avg_line_length": 43.6210526316, "max_line_length": 130, "alphanum_fraction": 0.7458976834, "num_tokens": 1128, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.8031737892899222, "lm_q1q2_score": 0.5846907998078461}}
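As a companion to part b) above, the following is a minimal sketch (our illustration, with synthetic data and illustrative sizes, not the report's code) of the rank-$r$ reconstruction via the SVD; the error is typically minimised near the true rank, mirroring figure \\ref{fig:2_3_b}.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))  # rank-3 signal
X_noise = X + 0.5 * rng.standard_normal(X.shape)                 # corrupted data

U, s, Vt = np.linalg.svd(X_noise, full_matrices=False)
for r in (1, 2, 3, 5, 10):
    X_tilde = U[:, :r] * s[:r] @ Vt[:r]       # keep r principal components
    err = np.linalg.norm(X - X_tilde)         # Frobenius approximation error
    print(r, err)
\\end{verbatim}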
{"text": "\\chapter{Topological Field Theory}\n\n\\section{Chern-Simons Theory}\nAssume the action of the microscopical theory has the form $S[\\psi_i]$, where $\\{\\psi_i\\}$ denotes all degrees of microscopical freedom.\nIf the system has the U(1) symmetry, we can always rewrite the field theory as a gauge theory:\n\\begin{equation}\n\tS[\\psi_i; A] = S[\\psi_i] + \\int d^dx\\ j^\\mu(x) A_\\mu(x),\n\\end{equation}\nwhere the current $j^\\mu$ is the Noether current.\nThe gauge field $A^\\mu(x)$ is regarded as the back ground field which has no dynamics.\nIf we are interested in the low-energy physics, especially for gapped system, the ground state physics, we can formally integrate out other degrees of freedom, the resulting effective theory has only the gauge degree of freedom:\n\\begin{equation}\n\tZ_{\\mathrm{eff}}[A] = \\int D[\\psi_i] e^{i S[\\psi_i;A]}.\n\\end{equation}\nIn this section, we consider the effective gauge field on $(2+1)$-dimensional space-time.\nThe effective action should also be gauge-invariant.\nThe allowed terms include\n\\begin{equation*}\n\tA \\wedge dA,\\ dA \\wedge dA,\\ \\text{higher order terms}.\n\\end{equation*}\nFrom dimensional analysis, the first term is most relevant in the low-energy.\nSuch effective theory is the \\textit{Chern-Simons theory}:\n\\begin{equation}\n\tS_{\\mathrm{CS}} = \\frac{k}{4\\pi}\\int d^3 x\\ \\varepsilon_{\\mu\\nu\\rho} A^\\mu \\partial^\\nu A^\\rho.\n\\end{equation}\n", "meta": {"hexsha": "d4f400fe41f4c2118c8dc2ede2e9f3f54395a615", "size": 1369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/Topological.tex", "max_stars_repo_name": "jayren3996/Notes_on_QFT", "max_stars_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/Topological.tex", "max_issues_repo_name": "jayren3996/Notes_on_QFT", "max_issues_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/Topological.tex", "max_forks_repo_name": "jayren3996/Notes_on_QFT", "max_forks_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6538461538, "max_line_length": 228, "alphanum_fraction": 0.7399561724, "num_tokens": 402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772384450967, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5846530108355283}}
{"text": "\\documentclass[letter,10pt]{article}\n\n\\usepackage[utf8]{inputenc} % allow utf-8 input\n\\usepackage[T1]{fontenc}    % use 8-bit T1 fonts\n\\usepackage[colorlinks=true]{hyperref}       % hyperlinks\n\\usepackage{url}            % simple URL typesetting\n\\usepackage{booktabs}       % professional-quality tables\n\\usepackage{amsfonts,amsmath,amsthm,amssymb,gensymb}       % blackboard math symbols\n\\usepackage{microtype}      % microtypography\n\\usepackage{nicefrac}       % compact symbols for 1/2, etc.\n\\usepackage{color}\n\\usepackage{lmodern}\n\\usepackage{multicol}\n\\usepackage[margin=1.0in]{geometry}\n\n\\newcommand{\\SE}[1]{ \\mathrm{SE(#1)} }\n\\newcommand{\\SO}[1]{ \\mathrm{SO(#1)} }\n\\newcommand{\\real}{\\mathbb{R}}\n\\newcommand{\\refgroup}{\\mathrm{ref}}\n\\newcommand{\\Exp}{\\mathrm{Exp}}\n\\newcommand{\\Log}{\\mathrm{Log}}\n% \\newcommand{\\asym}[1]{{\\lbrack #1\\rbrack}^\\wedge{}}\n\\newcommand{\\asym}[1]{{\\lbrack #1\\rbrack}_\\times{}}\n% \\newcommand{\\asym}[1]{\\widehat{\\lbrack #1\\rbrack}}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{prop}{Proposition}[section]\n\n\\title{An On Manifold Design of EKF-based VIO:\\\\\nImplementation Notes of XIVO}\n\\author{Xiaohan Fei\\\\\n\\texttt{feixh@cs.ucla.edu}}\n\\begin{document}\n\n\\maketitle\n\n% \\begin{abstract}\n% \\end{abstract}\n\n\\section{Model}\n% We simply layout the structure of this manuscript here.\n% Section~\\ref{sect-model} presents the model used in EKF Visual-Inertial SLAM: from an idealized one to an implementation-friendly one by adding more and more realistic considerations. \n% Section~\\ref{sect-implementation} gives some implementation details including state representation, prediction \\& update steps and initialization. \n% Section~\\ref{sect-discussion} lists some potential improvements. \n% In the Appendix~\\ref{sect-appendix}, we derive the analytical form of some Jacobians useful for implementation.\n\n\\subsection{Idealized model}\nThe model is the same as the one described in Section 2.1 of~\\cite{jonesS10IJRR} of which the major results are briefly reviewed here. An idealized model to generate the sensed inertial $y_{imu}$ and visual data $y^i$ is given by:\n\n\\begin{equation}\n\\begin{cases}\n\\dot{R}_{sb}(t)=R_{sb}(t)\\asym{\\omega_{sb}^b(t)}, &R_{sb}(0)=I\\\\\n\\dot{T}_{sb}(t)=R_{sb}(t)v_{sb}^b(t), &T_{sb}(0)=0\\\\\n\\dot{v}_{sb}^b(t)=-\\asym{\\omega_{sb}^b(t)}v_{sb}^b(t)+\\alpha_{sb}^b(t)\\\\\n\\dot{X}_0^i=0, &i=1\\cdots N\\\\\n\\text{Measurements:}\\\\\ny^i(t)=\\pi\\big(R_{sb}^\\top(t)(X_0^i-T_{sb}(t))\\big)\\\\\ny_{imu}(t)=\n\\begin{bmatrix}\n \\omega_m(t)\\\\\n \\alpha_m(t)\n\\end{bmatrix}\n\\doteq\n\\begin{bmatrix}\n\t    \\omega_{sb}^b(t)\\\\\n\t    \\alpha_{sb}^b(t)-R_{sb}^\\top(t)\\gamma\n           \\end{bmatrix}.\n\\end{cases}\n\\label{eq-ideal-model}\n\\end{equation}\n\n\\begin{itemize}\n \\item \n$R_{sb}(t) \\in \\SO{3}$ and $T_{sb}(t) \\in \\real^3$ are the body to spatial frame rotation and translation, which form $g_{sb}(t)\\doteq (R_{sb}(t), T_{sb}(t)) \\in \\SE{3}$ the rigid body transformation from the body frame to the spatial frame. We adopt the convention in~\\cite{maSKS} for the subscripts, where $g_{ab}$ denotes a transformation from reference frame $b$ to $a$, such that for a point $X$ represented in $b$, $g_{ab} X$ brings it to frame $a$. 
\\footnote{In other words, $T_{sb}$ is the position of the origin of the body frame with respect to the spatial frame with coordinates resolved in the spatial frame and $R_{sb}$ transforms coordinates resolved in the body frame to coordinates resolved in the spatial frame by multiplying on the left: $x^s = R_{sb}x^b$.}\n\n\\item\n$\\omega_{sb}^b(t), v_{sb}^b(t)\\in \\real^3$ are the body to spatial rotational and linear velocity measured in the body frame. $\\alpha_{sb}^b(t)$ is the body to spatial linear acceleration measured in body frame. \\footnote{In other words, they are, respectively, the rotational velocity, linear velocity, and linear acceleration of the body frame with respect to the spatial frame with coordinates resolved in the body frame.}\n\n\\item\nThe ``hat'' operator $\\hat{\\cdot}$ (or $\\asym{\\cdot}$ when the operand is an expression instead of a single variable) maps a vector $v \\doteq [v_1, v_2, v_3]^\\top \\in \\real^3$ to a skew-symmetric matrix \n$$\n\\bar{v}=\n\\asym{v}=\n\\begin{bmatrix}\n    0 & -v_3 & v_2 \\\\\n    v_3 & 0 & -v_1 \\\\\n    -v_2 & v_1 & 0\n\\end{bmatrix}\n\\doteq v_1 E_1 + v_2 E_2 + v_3 E_3$$ \nwhere $\\{E_1, E_2, E_3\\}$ forms a basis of ${\\it so}(3)$ -- the Lie algebra of $\\SO{3}$:\n\n\\begin{equation}\nE_1=\n\\begin{bmatrix}\n0 & 0 & 0 \\\\\n0 & 0 & -1 \\\\\n0 & 1 & 0\n\\end{bmatrix}\n\\quad\nE_2=\n\\begin{bmatrix}\n0 & 0 & 1 \\\\\n0 & 0 & 0 \\\\\n-1 & 0 & 0\n\\end{bmatrix}\n\\quad\nE_3=\n\\begin{bmatrix}\n0 & -1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}.\n\\end{equation}\n\n\n\\item\n$X_0^i \\in \\real^3, i=1\\cdots N$ denotes the set of static 3-D points in the spatial frame, which have trivial dynamics.\n\n\\item\n$y^i(t) \\in \\real^2$ is the measurement of point $i$ at time $t$, which is the projection of the 3-D point in our case.\n\n\\item\n$y_{imu}(t)$ is the inertial measurement, which consists of the instantaneous rotational velocity and linear acceleration measured in the body frame.\n\n\\item\n$\\gamma$ is gravity in spatial frame.\n\n\\item\n$\\pi: \\real^3 \\mapsto \\real^2$ is the perspective projection.\n\n\\end{itemize}\n\n\\subsection{From idealized model to a realistic one}\nDefine new linear velocity $v_{sb}\\doteq R_{sb}v_{sb}^b$ and then we have $\\dot{T}_{sb}(t)=v_{sb}(t)$. Define $\\dot{v}_{sb}(t)\\doteq \\alpha_{sb}(t)$ and then $\\alpha_{sb}=R_{sb}\\alpha_{sb}^b$. For consistency, also define $\\omega_{sb}\\doteq \\omega_{sb}^b$. 
The model becomes:\n\n\\begin{equation*}\n\\begin{cases}\n\\dot{R}_{sb}(t)=R_{sb}(t)\\asym{\\omega_{sb}(t)}, &R_{sb}(0)=I\\\\\n\\dot{T}_{sb}(t)=v_{sb}(t), &T_{sb}(0)=0\\\\\n\\dot{v}_{sb}(t)=\\alpha_{sb}(t)\\\\\n\\dot{X}_0^i=0, &i=1\\cdots N\\\\\n\\text{Measurements:}\\\\\ny^i(t)=\\pi\\big(R_{sb}^\\top(t)(X_0^i-T_{sb}(t))\\big)\\\\\n\\begin{bmatrix}\n \\omega_m(t)\\\\\n \\alpha_m(t)\n\\end{bmatrix}\n\\doteq\n\\begin{bmatrix}\n\t    \\omega_{sb}(t)\\\\\n\t    R_{sb}^\\top(t) \\big(\\alpha_{sb}(t)-\\gamma \\big)\n           \\end{bmatrix}\n+           \n           \\begin{bmatrix}\n           b_g\\\\\n           b_a\n           \\end{bmatrix}.\n\\end{cases}\n\\end{equation*}\n\nIt is common practice to treat inertial measurements as inputs to the system (driving signals), thus we substitute $\\omega_{sb}$ and $\\alpha_{sb}$ with the actual inertial measurements:\n\\begin{equation*}\n\\begin{cases}\n\\omega_{sb}(t) = \\omega_{m}-b_g\\\\\n\\alpha_{sb}(t) = R_{sb}(t)\\big( \\alpha_{m} - b_a\\big) + \\gamma\n\\end{cases}\n\\end{equation*}\n\nand now the model is:\n\n\\begin{equation*}\n\\begin{cases}\n\\dot{R}_{sb}(t)=R_{sb}(t)\\asym{\\omega_{m}(t)-b_g(t)}, &R_{sb}(0)=I\\\\\n\\dot{T}_{sb}(t)=v_{sb}(t), &T_{sb}(0)=0\\\\\n\\dot{v}_{sb}(t)= R_{sb}(t)\\big( \\alpha_m(t) - b_a(t) \\big) + \\gamma\\\\\n\\dot b_g(t)=0, & b_g(0)=\\omega_{bias}^{\\text{calib}}\\\\\n\\dot b_a(t)=0, & b_a(0)=\\alpha_{bias}^{\\text{calib}}\\\\\n\\dot{X}_0^i=0, &i=1\\cdots N\\\\\n\\text{Measurements:}\\\\\ny^i(t)=\\pi\\big(R_{sb}^\\top(t)(X_0^i-T_{sb}(t))\\big).\n\\end{cases}\n\\end{equation*}\n\nIn the above model, we assume the camera frame coincides with the body frame, which usually is not true. We need to insert the camera-to-body transformation $g_{bc}\\doteq (R_{bc}, T_{bc}) \\in \\SE{3}$, which is from an off-line calibration procedure, into the visual measurement process:\n\n\\begin{equation}\n\\begin{aligned}\ny^i(t) \n&= \\pi \\Big( g_{bc}^{-1} g_{sb}^{-1}(t)X_0^i \\Big)\\\\\n&= \\pi \\Big(R_{bc}^\\top \\Big( R_{sb}^\\top(t)\\big( X_0^i - T_{sb}(t)\\big)-T_{bc} \\Big)\\Big).\n\\end{aligned}\n\\label{eq-vismeas}\n\\end{equation}\n\nThis camera-to-body alignment is usually available from an off-line calibration procedure. However, high precision can be obtained if one treats the alignment states as unknown constants and estimates them on-line along with the ego-motion~\\cite{li2012improving}. The alignment states have trivial dynamics since they are essentially constants:\n\n\\begin{equation*}\n\\begin{cases}\n\\dot{R}_{sb}(t)=R_{sb}(t)\\asym{\\omega_{m}(t)-b_g(t)}, &R_{sb}(0)=I\\\\\n\\dot{T}_{sb}(t)=v_{sb}(t), &T_{sb}(0)=0\\\\\n\\dot{v}_{sb}(t)= R_{sb}(t)\\big(\\alpha_{m}(t) - b_a(t) \\big) + \\gamma\\\\\n\\dot b_g(t)=0, & b_g(0)=\\omega_{bias}^{\\text{calib}}\\\\\n\\dot b_a(t)=0, & b_a(0)=\\alpha_{bias}^{\\text{calib}}\\\\\n\\dot{R}_{bc}(t)=0, &R_{bc}(0)=R_{bc}^{\\text{calib}}\\\\\n\\dot{T}_{bc}(t)=0, &T_{bc}(0)=T_{bc}^{\\text{calib}}\\\\\n\\dot{X}_0^i=0, &i=1\\cdots N\\\\\n\\text{Measurements:}\\\\\ny^i(t) = \\pi \\Big(R_{bc}^\\top(t) R_{sb}^\\top(t)\\big( X_0^i - T_{sb}(t)\\big)-T_{bc}(t) \\Big).\n\\end{cases}\n\\end{equation*}\n\nAlso, gravity is assumed to be perfectly known, which is not true in practice -- at least one needs to estimate the orientation of gravity if not both the magnitude and the orientation. 
Let $R_g(t)$ be the orientation correction of gravity, with $R_g(0)=I$ if we initialize the body frame to be gravity-aligned:\n\\begin{equation}\n R_{sb}(0)\\big( \\alpha_{m}(0)-b_a(0)\\big) + R_g(0) \\gamma = 0.\n\\end{equation}\nThe above equation holds if the platform is kept still for a certain amount of time during initialization, typically several seconds. One can either use one good sample of inertial measurements or average all samples during the stationary period to obtain $\\alpha_{m}(0)$, and then solve for $R_{sb}(0)$; denote the solution by $R_{sb}^{\\text{init}}$. The model is:\n\n\\begin{equation*}\n\\begin{cases}\n\\dot{R}_{sb}(t)=R_{sb}(t)\\asym{\\omega_{m}(t)-b_g(t)}, &R_{sb}(0)=R_{sb}^{\\text{init}}\\\\\n\\dot{T}_{sb}(t)=v_{sb}(t), &T_{sb}(0)=0\\\\\n\\dot{v}_{sb}(t)= R_{sb}(t)\\big( \\alpha_{m}(t) - b_a(t) \\big) + R_g(t) \\gamma\\\\\n\\dot b_g(t)=0, & b_g(0)=\\omega_{bias}^{\\text{calib}}\\\\\n\\dot b_a(t)=0, & b_a(0)=\\alpha_{bias}^{\\text{calib}}\\\\\n\\dot{R}_{bc}(t)=0, &R_{bc}(0)=R_{bc}^{\\text{calib}}\\\\\n\\dot{T}_{bc}(t)=0, &T_{bc}(0)=T_{bc}^{\\text{calib}}\\\\\n\\dot{R}_g(t)=0, &R_g(0) = I\\\\\n\\dot{X}_0^i=0, &i=1\\cdots N\\\\\n\\text{Measurements:}\\\\\ny^i(t) = \\pi \\Big(R_{bc}^\\top(t)\\big( R_{sb}^\\top(t)( X_0^i - T_{sb}(t))-T_{bc}(t)\\big) \\Big).\n\\end{cases}\n\\end{equation*}\n\n\\section{Error state}\n\\label{sect-model}\n\nAdding noise to the inertial measurements and dropping the time variable, we obtain the following model (hereafter we abbreviate the measured linear acceleration $\\alpha_m$ as $a_m$):\n\nSystem dynamics:\n\\begin{equation}\n \\begin{cases}\n \\dot R = R 
\\asym{\\omega_m-b_g-n_g}\\\\\n \\dot T = v\\\\\n \\dot v = R(a_m-b_a-n_a) + R_g \\gamma \\\\\n \\dot b_g = 0 \\\\\n \\dot b_a = 0 \\\\\n \\dot R_{bc} = 0 \\\\\n \\dot T_{bc} = 0\n \\end{cases}\n\\end{equation}\n\nEach component can be written as the composition of a nominal state and an error state. For variables defined in Euclidean space the composition is the usual addition, such as $T=\\bar T + \\tilde T$; for rotation, the composition is defined as $R=\\bar R R(\\tilde \\omega)$ where $R(\\tilde \\omega)\\doteq \\exp(\\asym{\\tilde \\omega})$ maps the error vector $\\tilde \\omega \\in \\real^3$ to a rotation matrix $R(\\tilde \\omega) \\in \\SO{3}$. 
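\n\nFor reference, away from the small-angle regime the exponential map can be evaluated in closed form via Rodrigues' formula. A minimal sketch, reusing the hypothetical \\texttt{hat} helper from the earlier sketch:\n\\begin{verbatim}\nimport numpy as np\n\ndef exp_so3(w):\n    # Rodrigues' formula: exp(hat(w)) for w in R^3.\n    theta = np.linalg.norm(w)\n    if theta < 1e-8:\n        # Near zero, fall back to the first-order approximation I + hat(w).\n        return np.eye(3) + hat(w)\n    K = hat(w / theta)\n    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)\n\\end{verbatim}\n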
The exponential map can be approximated as follows when the magnitude of $\\tilde \\omega$ is small:\n\n\\begin{equation}\nR(\\tilde \\omega) \\doteq \\exp(\\asym{\\tilde \\omega})\\approx I + \\asym{\\tilde \\omega}\n\\end{equation}\n\n\nThe derivation of the nominal and error-state dynamics is trivial for translation:\n\\begin{equation}\n\\begin{cases}\n\\dot{\\bar T} &= \\bar v\\\\\n\\dot{\\tilde T} &= \\tilde v\n\\end{cases}\n\\end{equation}\n\nFor rotation, it's more involved:\n\n\\begin{equation}\n \\begin{aligned}\n \\dot R &= R\\asym{\\omega_m-b_g-n_g} \\\\\n \\frac{d}{dt}\\bar R  R(\\tilde \\omega) &= \\bar R R(\\tilde \\omega) \\asym{\\omega_m - (\\bar b_g+\\tilde b_g) - n_g} \\\\\n \\frac{d}{dt}\\bar R  (I+ \\asym{\\tilde\\omega}) &= \\bar R (I+ \\asym{\\tilde\\omega}) \\asym{(\\omega_m - \\bar b_g) + (-\\tilde b_g-n_g)} \\\\\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}}\n &= \\bar R\\asym{\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{(\\omega_m-\\bar b_g) + (-\\tilde b_g - n_g)}\\\\\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}}\n &= \\bar R\\asym{\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{\\omega_m-\\bar b_g} + \\bar R \\asym{\\tilde \\omega}\\asym{-\\tilde b_g - n_g}\\\\\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}}\n &\\approx \\bar R\\asym{\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{\\omega_m-\\bar b_g}\\\\\n \\end{aligned}\n \\label{eq-rotation-derivation}\n\\end{equation}\n\nThe term $\\asym{\\tilde \\omega}\\asym{-\\tilde b_g - n_g}$ is a product of an error state with either another error state or a noise term; it is second-order small and thus dropped.\n\nNow we have the evolution equations of the nominal \\& error rotation states:\n\\begin{equation}\n\\begin{aligned}\n \\dot{\\bar R} &= \\bar R\\asym{\\omega_m - \\bar b_g} = \\bar R \\asym{\\bar \\omega}\\\\\n\\dot{\\bar R}\\asym{\\tilde\\omega} + \\bar R \\asym{\\dot{\\tilde \\omega}} \n&= \\bar R \\asym{-\\tilde b_g - n_g} + \\bar R\\asym{\\tilde\\omega}\\asym{\\bar\\omega}\n\\end{aligned}\n\\end{equation}\nwhere $\\bar \\omega \\doteq \\omega_m - \\bar b_g$.\n\nRecall the identity $\\dot R=R \\asym{\\omega}$, and thus $R^\\top \\dot R = \\asym{\\omega}$. 
\n\nWe left-multiply both sides of the last equation by $\\bar R^\\top$, and use the identity $\\bar R^\\top\\dot{\\bar R}=\\asym{\\bar\\omega}$:\n\\begin{equation}\n\\begin{aligned}\n \\asym{\\bar \\omega}\\asym{\\tilde \\omega} + \\asym{\\dot{\\tilde\\omega}} &= \\asym{-\\tilde b_g - n_g} + \\asym{\\tilde\\omega}\\asym{\\bar\\omega}\\\\\n \\asym{\\dot{\\tilde\\omega}} &= \\asym{-\\tilde b_g - n_g} \n + \\big(\\asym{\\tilde\\omega}\\asym{\\bar\\omega} - \\asym{\\bar\\omega}\\asym{\\tilde\\omega}\\big) \\\\\n \\asym{\\dot{\\tilde\\omega}} &= \\asym{-\\tilde b_g - n_g} + \\asym{\\big(\\tilde\\omega\\times\\bar\\omega\\big)} \\quad\\text{(recall} \\asym{a\\times b}=\\asym{a}\\asym{b}-\\asym{b}\\asym{a}\\text{)}\\\\\n\\dot{\\tilde\\omega} &= -\\tilde b_g - n_g + \\tilde\\omega\\times\\bar\\omega \\quad\\text{(}\\asym{\\cdot} \\text{ is linear)} \\\\\n\\dot{\\tilde\\omega} &= -\\tilde b_g - n_g + \\asym{\\tilde\\omega}\\bar\\omega \\quad\\text{(}a\\times b=\\asym{a}b \\text{)}\\\\\n\\dot{\\tilde\\omega} &= -\\tilde b_g - n_g - \\asym{\\bar\\omega}\\tilde\\omega \\quad\\text{(}\\asym{a}b=-\\asym{b}a\\text{)}\\\\\n\\end{aligned}\n\\end{equation}\n\nTo summarize:\n\\begin{equation}\n\\begin{cases}\n \\dot{\\bar R} &= \\bar R \\asym{\\bar \\omega}\\\\\n\\dot{\\tilde\\omega} &= - \\asym{\\bar\\omega}\\tilde\\omega -\\tilde b_g - n_g.\n\\end{cases}\n\\end{equation}\n\nFinally, let's work on the velocity term:\n\n\\begin{equation}\n\\begin{aligned}\n \\dot v &= R(a_m-b_a-n_a) + R_g \\gamma \\\\\n \\dot{\\bar v} + \\dot{\\tilde v} &= \\bar R R(\\tilde \\omega) (a_m-(\\bar b_a+\\tilde b_a)-n_a) + \\bar R_g(I + \\asym{\\tilde\\omega_g}) \\gamma \\\\\n \\dot{\\bar v} + \\dot{\\tilde v} &= \\bar R (I+\\asym{\\tilde\\omega}) (a_m-(\\bar b_a+\\tilde b_a)-n_a) + \\bar R_g \\gamma + \\bar R_g \\asym{\\tilde\\omega_g} \\gamma\\\\\n \\dot{\\bar v} + \\dot{\\tilde v} &= \\bar R (I+\\asym{\\tilde\\omega}) \\big( (a_m-\\bar b_a) - (\\tilde b_a+n_a) \\big) + \\bar R_g \\gamma - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g\\\\\n \\dot{\\bar v} + \\dot{\\tilde v} &= \\bar R (a_m-\\bar b_a) -\\bar R (\\tilde b_a + n_a) + \\bar R \\asym{\\tilde\\omega}(a_m-\\bar b_a)- \\bar R\\asym{\\tilde\\omega}(\\tilde b_a+n_a) + \\bar R_g \\gamma - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g \\\\\n \\dot{\\bar v} + \\dot{\\tilde v} &\\approx \\bar R (a_m-\\bar b_a) -\\bar R (\\tilde b_a + n_a) + \\bar R \\asym{\\tilde\\omega}(a_m-\\bar b_a) + \\bar R_g \\gamma - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g\\\\\n\\end{aligned}\n \\label{eq-velocity-derivation}\n\\end{equation}\n\nwhere the nominal and error-state equations can be read off as\n\\begin{equation}\n\\begin{cases}\n\\dot{\\bar v} &= \\bar R\\bar a + \\bar R_g \\gamma\\\\\n\\dot{\\tilde v} &= -\\bar R(\\tilde b_a + n_a) + \\bar R \\asym{\\tilde\\omega}\\bar a - \\bar R_g \\asym{\\gamma} \\tilde\\omega_g \n\\end{cases}\n\\end{equation}\nwhere we have defined $\\bar a \\doteq a_m - \\bar b_a$.\n\nNoting that $\\asym{a}b=a\\times b=-b\\times a=-\\asym{b} a$, we can rewrite the last equation as\n\n\\begin{equation}\n\\dot{\\tilde v}=-\\bar R(\\tilde b_a + n_a) - \\bar R\\asym{\\bar a} \\tilde \\omega- \\bar R_g \\asym{\\gamma} \\tilde \\omega_g.\n\\end{equation}\n\nTo summarize:\n\\begin{equation}\n\\begin{cases}\n\\dot{\\bar R} &= \\bar R\\asym{\\bar \\omega}\\\\\n\\dot{\\bar T} &= \\bar v\\\\\n\\dot{\\bar v} &= \\bar R \\bar a + \\bar R_g \\gamma\n\\end{cases}\n\\label{eq:nominal-dynamics}\n\\end{equation}\n\nand\n\n\\begin{equation}\n\\begin{bmatrix}\n\\dot{\\tilde \\omega} 
\\\\\n\\dot{\\tilde T}\\\\\n\\dot{\\tilde v}\\\\\n\\dot{\\tilde b}_g\\\\\n\\dot{\\tilde b}_a\\\\\n\\dot{\\tilde \\omega}_g\n\\end{bmatrix}\n= \n\\begin{bmatrix}\n-\\asym{\\bar\\omega} & 0_{3\\times3} & 0_{3\\times3} & -I_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & I_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}\\\\\n-\\bar R\\asym{\\bar a} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & -\\bar R& -\\bar R_g \\asym{\\gamma}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\tilde \\omega \\\\\n\\tilde T\\\\\n\\tilde v\\\\\n\\tilde b_g\\\\\n\\tilde b_a\\\\\n\\tilde \\omega_g\n\\end{bmatrix}\n+\n\\begin{bmatrix}\n-I_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & -\\bar R & 0_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & I_{3\\times3}& 0_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& I_{3\\times3}\\\\\n0_{3\\times3} & 0_{3\\times3} & 0_{3\\times3}& 0_{3\\times3}\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n n_g\\\\\n n_a\\\\\n n_{b_g}\\\\\n n_{b_a}\\\\\n\\end{bmatrix}\n\\end{equation}\n\nFor convenience, define $\\tilde X \\doteq [\\tilde \\omega, \\tilde T, \\tilde v, \\tilde b_g, \\tilde b_a, \\tilde\\omega_g]^\\top \\in \\real^{18}$ and the equation above reads\n\\begin{equation}\n\\dot{\\tilde X} = F \\tilde X + G n_{imu}\n\\end{equation}\n\n\\subsection{Online calibration of IMU intrinsics}\nIMU intrinsics (scaling and misalignment factors) are typically obtained via an offline calibration procedure. Nevertheless, we can also treat them as unknown constants and calibrate them online by incorporating them into the state with small initial covariance. \n\nLet $C_a\\in\\real^{3\\times 3}$ and $C_g\\in\\real^{3\\times 3}$ be the calibration matrices of the accelerometer and gyroscope, respectively.~\\footnote{To be precise, $C_a$ and $C_g$ live on some manifolds since they have some intrinsic structures. 
For instance, $C_a$ is composed of a scaling matrix which has positive entries on the diagonal and zeros otherwise, and an orthonormal misalignment matrix.}\n\n\\subsubsection{Rotation}\nRecall the last line of Eq.~\\eqref{eq-rotation-derivation}\n$$\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}} \\approx \\bar R\\asym{\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{\\omega_m-\\bar b_g}\n$$\nTo incorporate the gyro calibration matrix $C_g$, substitute the term $\\omega_m$ with $C_g\\omega_m$:\n$$\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}} \\approx \\bar R\\asym{C_g\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{C_g\\omega_m-\\bar b_g},\n$$\nand write the gyro calibration in terms of its nominal and error state $C_g=\\bar C_g + \\tilde C_g$:\n\\begin{align}\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}}\n  &\\approx \\bar R\\asym{(\\bar C_g + \\tilde C_g)\\omega_m-\\bar b_g} + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{(\\bar C_g + \\tilde C_g)\\omega_m-\\bar b_g}\\\\\n \\dot{\\bar R} + \\dot{\\bar R} \\asym{\\tilde \\omega} + \\bar R \\asym{\\dot{\\tilde\\omega}}\n  &\\approx \n  \\bar R\\asym{\\bar C_g\\omega_m-\\bar b_g} \n  + \\bar R\\asym{\\tilde C_g\\omega_m} \\\\\n  & \\quad + \\bar R\\asym{-\\tilde b_g - n_g}  \\\\\n  & \\quad + \\bar R \\asym{\\tilde \\omega}\\asym{\\bar C_g\\omega_m-\\bar b_g}\n  + \\bar R \\asym{\\tilde \\omega}\\asym{\\tilde C_g\\omega_m} \\text{ (the last term is second-order)}\\\\\n  &\\approx\n  \\bar R\\asym{\\bar C_g\\omega_m-\\bar b_g} + \\bar R\\asym{\\tilde C_g\\omega_m}\n  + \\bar R\\asym{-\\tilde b_g - n_g} + \\bar R \\asym{\\tilde \\omega}\\asym{\\bar C_g\\omega_m-\\bar b_g}\n\\end{align}\n\nNow, the nominal state evolves as\n$$\n\\dot{\\bar R} = \\bar R\\asym{\\bar C_g \\omega_m - \\bar b_g}\n$$\nresulting in the identity $\\bar R^\\top \\dot{\\bar R} = \\asym{\\bar C_g \\omega_m - \\bar b_g}$, which can be used to simplify the rest of the equation:\n$$\n\\dot{\\bar R}\\asym{\\tilde\\omega} + \\bar R\\asym{\\dot{\\tilde\\omega}} =\n  \\bar R\\asym{\\tilde C_g\\omega_m} + \\bar R\\asym{-\\tilde b_g - n_g}  \n    + \\bar R \\asym{\\tilde \\omega}\\asym{\\bar C_g\\omega_m-\\bar b_g}.\n$$\nLet $\\bar\\omega=\\bar C_g \\omega_m - \\bar b_g$, and left-multiply both sides of the equation above by $\\bar R^\\top$ to cancel out $\\bar R$:\n\\begin{align}\n\\asym{\\bar\\omega}\\asym{\\tilde\\omega} + \\asym{\\dot{\\tilde\\omega}} &=\n  \\asym{\\tilde C_g\\omega_m} + \\asym{-\\tilde b_g - n_g}  \n    + \\asym{\\tilde \\omega}\\asym{\\bar\\omega} \\\\\n\\asym{\\dot{\\tilde\\omega}} &=\n  \\asym{\\tilde C_g\\omega_m} + \\asym{-\\tilde b_g - n_g}  \n    + \\big(\\asym{\\tilde\\omega}\\asym{\\bar\\omega}-\\asym{\\bar\\omega}\\asym{\\tilde\\omega}\\big) \\\\\n\\asym{\\dot{\\tilde\\omega}} &=\n  \\asym{\\tilde C_g\\omega_m} + \\asym{-\\tilde b_g - n_g}  \n    + \\asym{\\tilde\\omega\\times\\bar\\omega} \\\\\n\\asym{\\dot{\\tilde\\omega}} &=\n  \\asym{\\tilde C_g\\omega_m} + \\asym{-\\tilde b_g - n_g}  \n    - \\asym{\\bar\\omega\\times\\tilde\\omega} \\\\\n\\asym{\\dot{\\tilde\\omega}} &=\n  \\asym{\\tilde C_g\\omega_m} + \\asym{-\\tilde b_g - n_g}  \n    - \\asym{\\asym{\\bar\\omega}\\tilde\\omega} \\\\\n\\dot{\\tilde\\omega} &= \\tilde C_g\\omega_m - \\tilde b_g - n_g - 
\\asym{\\bar\\omega}\\tilde\\omega\n\\end{align}\nleading to the error state equation of $\\tilde\\omega$:\n$$\n\\dot{\\tilde\\omega} = -\\asym{\\bar\\omega}\\tilde{\\omega} + \\tilde C_g \\omega_m - \\tilde b_g - n_g\n$$\n\n\n\\subsubsection{Velocity}\n\nRecall the last line of Eq.~\\eqref{eq-velocity-derivation}\n$$\n \\dot{\\bar v} + \\dot{\\tilde v} \\approx \\bar R (a_m-\\bar b_a) -\\bar R (\\tilde b_a + n_a) + \\bar R \\asym{\\tilde\\omega}(a_m-\\bar b_a) + \\bar R_g \\gamma - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g.\n$$\n\nTo consider the accelerometer calibration, substitute $a_m$ with $C_a a_m=(\\bar C_a+\\tilde C_a)a_m$:\n$$\n \\dot{\\bar v} + \\dot{\\tilde v} \\approx \\bar R \\big((\\bar C_a + \\tilde C_a)a_m-\\bar b_a\\big) -\\bar R (\\tilde b_a + n_a) + \\bar R \\asym{\\tilde\\omega}\\big((\\bar C_a + \\tilde C_a) a_m-\\bar b_a\\big) + \\bar R_g \\gamma - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g.\n$$\n\nThe nominal state evolves as $\\dot{\\bar v} = \\bar R\\bar a + \\bar R_g \\gamma$, where we redefine $\\bar a\\doteq\\bar C_a a_m - \\bar b_a$ to absorb the calibration.\n\nThe rest of the equation can be further simplified\n\\begin{align}\n\\dot{\\tilde v} &= \\bar R \\tilde C_a a_m - \\bar R(\\tilde b_a + n_a) + \\bar R\\asym{\\tilde\\omega}\\big(\\bar a + \\tilde C_a a_m\\big) - \\bar R_g\\asym{\\gamma}\\tilde\\omega_g \\\\\n\\dot{\\tilde v} &= \\bar R \\tilde C_a a_m - \\bar R(\\tilde b_a + n_a) - \\bar R\\asym{\\bar a}\\tilde\\omega - \\bar R_g \\asym{\\gamma}\\tilde\\omega_g\n\\end{align}\nby dropping 2nd-order residual terms and using the identity $\\asym{a}b=-\\asym{b}a$.\n\n\n\n\n\\subsection{Integration of motion}\n\nAs inertial measurements arrive, the motion of the sensor platform can be obtained by numerical integration of Eq.~\\eqref{eq:nominal-dynamics} (there are numerous textbooks on this topic, for instance, \\cite{ascher1998computer}). As pointed out in \\cite{ascher1998computer}, Fehlberg's and Prince-Dormand's embedded Runge-Kutta methods are among the most popular ones. Some reference implementations can be found at \\url{http://www.mymathlib.com/diffeq/embedded_runge_kutta/}.\n\nTo propagate the covariance matrix of the error state, we follow \\cite{mourikis2007multi} where the covariance matrix is partitioned as follows:\n\n\\begin{equation}\n    P_{k|k} = \n    \\begin{bmatrix}\n        P_{k|k}^{(1)} & P_{k|k}^{(2)} \\\\\n        P_{k|k}^{(2)\\top} & P_{k|k}^{(3)}\n    \\end{bmatrix}\n    \\label{eq:covariance-partition}\n\\end{equation}\nwhere $P_{k|k}^{(1)}$ is the $18\\times 18$ covariance matrix of the evolving motion state, $P_{k|k}^{(3)}$ corresponds to the structure state (including pose of groups and position of features, detailed later), and $P_{k|k}^{(2)}$ is the correlation between the errors in the motion state and the structure state. 
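\n\nAs an illustration of the integration step above, the following is a minimal fixed-step sketch of propagating the nominal motion state over one IMU sample. It is a deliberate simplification (a single Euler/geometric step rather than the embedded Runge-Kutta methods cited above); \\texttt{exp\\_so3} is the hypothetical helper from the earlier sketch, and all names are illustrative:\n\\begin{verbatim}\ndef propagate_nominal(R, T, v, b_g, b_a, R_g, gamma, w_m, a_m, dt):\n    # One step of the nominal dynamics (eq:nominal-dynamics); apply\n    # C_g / C_a here if IMU intrinsics are calibrated online.\n    w_bar = w_m - b_g\n    a_bar = a_m - b_a\n    R_new = R @ exp_so3(w_bar * dt)\n    T_new = T + v * dt\n    v_new = v + (R @ a_bar + R_g @ gamma) * dt\n    return R_new, T_new, v_new\n\\end{verbatim}\nThe error-state covariance must be propagated alongside the nominal state.\n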
The covariance matrix of the propagated state is given by:\n\n\\begin{equation}\n    P_{k+1|k} =\n    \\begin{bmatrix}\n        P_{k+1|k}^{(1)} & \\Phi(t_{k + 1}, t_k) P_{k|k}^{(2)} \\\\\n        P_{k|k}^{(2)\\top}\\Phi(t_{k + 1}, t_k)^\\top & P_{k|k}^{(3)}  \\\\\n    \\end{bmatrix}\n    \\label{eq:covariance-propagation}\n\\end{equation}\n\nwhere $P_{k+1|k}^{(1)}$ is computed by numerical integration of the following Lyapunov equation for the time interval $(t_k, t_{k+1})$ with initial condition $P=P_{k|k}^{(1)}$:\n\n\\begin{equation}\n    \\dot{P} = F P + P F^\\top + G Q_{imu} G^\\top.\n\\label{eq:lyapunov}\n\\end{equation}\n\nThe state transition matrix $\\Phi(t_{k+1}, t_k)$ is obtained by numerical integration of the following matrix differential equation over $(t_k, t_{k+1})$ with initial condition $\\Phi(t_k, t_k)=I_{18\\times 18}$:\n\n\\begin{equation}\n    \\dot{\\Phi}(\\tau, t_k) = F \\Phi(\\tau, t_k), \\quad\\tau \\in [t_k, t_{k+1}].\n\\end{equation}\n\n\\section{State augmentation}\nOnce the pose estimate of a newly arrived image is available, its camera-to-spatial transformation $g_{sc} \\in \\SE{3}$ is inserted into the state. $g_{sc}$ is the composition of the body-to-spatial transformation $g_{sb} \\in \\SE{3}$ and the camera-to-body alignment $g_{bc} \\in \\SE{3}$. Written in components:\n\\begin{equation}\n\\begin{cases}\nR_{sc} &= R_{sb} R_{bc}\\\\\nT_{sc} &= R_{sb} T_{bc} + T_{sb}\n\\end{cases}\n\\end{equation}\n\n\\subsection{Rotation}\nWith error state representation, the rotation part is:\n\\begin{equation}\n \\begin{aligned}\nR_{sc} &= R_{sb} R_{bc} \\\\\n (\\bar R_{sc} + \\bar R_{sc} \\asym{\\tilde\\omega_{sc}}) &= \n (\\bar R_{sb} + \\bar R_{sb} \\asym{\\tilde\\omega_{sb}}) (\\bar R_{bc} + \\bar R_{bc} \\asym{\\tilde\\omega_{bc}}) \\\\\n \\bar R_{sc} + \\bar R_{sc}\\asym{\\tilde \\omega_{sc}} &\\approx \\bar R_{sb} \\bar R_{bc} \n + \\bar R_{sb} \\asym{\\tilde \\omega_{sb}} \\bar R_{bc} + \\bar R_{sb} \\bar R_{bc} \\asym{\\tilde\\omega_{bc}} \\quad \\text{ drop } \\asym{\\tilde\\omega_{sb}}\\asym{\\tilde\\omega_{bc}}\n \\end{aligned}\n\\end{equation}\n\nGroup terms on both sides into nominal part and error-state part respectively:\n\\begin{equation}\n\\begin{cases}\n\\bar R_{sc} &= \\bar R_{sb} \\bar R_{bc}\\\\\n\\asym{\\tilde \\omega_{sc}} &= \\bar R_{sc}^\\top \\bar R_{sb} \\asym{\\tilde\\omega_{sb}} \\bar R_{bc} + \\bar R_{sc}^\\top \\bar R_{sb} \\bar R_{bc} \\asym{\\tilde\\omega_{bc}}\n\\end{cases}\n\\end{equation}\nwhere the second equation can be further simplified as $\\tilde \\omega_{sc} = \\bar R_{bc}^\\top \\tilde \\omega_{sb} + \\tilde \\omega_{bc}$ by noticing the following:\n\\begin{itemize}\n \\item \n$\\bar R_{sc}^\\top \\bar R_{sb}=\\bar R_{cb} = \\bar R_{bc}^\\top = \\bar R_{bc}^{-1}$ \n\\item\n$R \\asym{\\omega} R^\\top=\\asym{R \\omega} \\quad \\forall R \\in \\SO{3}, \\omega \\in \\real^3$ (see appendix for a proof)\n\\end{itemize}\n\n\\subsection{Translation}\n\\begin{equation}\n\\begin{aligned}\nT_{sc} &= R_{sb} T_{bc} + T_{sb} \\\\\n\\bar T_{sc} + \\tilde T_{sc} &= (\\bar R_{sb} + \\bar R_{sb} \\asym{\\tilde \\omega_{sb}}) (\\bar T_{bc} + \\tilde T_{bc}) + \\bar T_{sb} + \\tilde T_{sb} \\\\\n\\bar T_{sc} + \\tilde T_{sc} &\\approx \\big(\\bar R_{sb} \\bar T_{bc} + \\bar T_{sb} \\big)\n+ \\bar R_{sb} \\tilde T_{bc} + \\bar R_{sb} \\asym{\\tilde \\omega_{sb}} \\bar T_{bc} + \\tilde T_{sb} \\quad \\text{ drop } \\bar R_{sb} \\asym{\\tilde \\omega_{sb}} \\tilde  T_{bc}\n\\end{aligned}\n\\end{equation}\n\nGroup terms into nominal and error-state 
equations and simplify:\n\\begin{equation}\n\\begin{cases}\n \\bar T_{sc} &= \\bar R_{sb} \\bar T_{bc} + \\bar T_{sb} \\\\\n \\tilde T_{sc} &= \\bar R_{sb} \\tilde T_{bc} - \\bar R_{sb} \\asym{\\bar T_{bc}} \\tilde \\omega_{sb} + \\tilde T_{sb}\n\\end{cases}\n\\end{equation}\n\n\n\nTo summarize, the nominal equations for the augmented pose are\n\\begin{equation}\n\\begin{cases}\n\\bar R_{sc} &= \\bar R_{sb} \\bar R_{bc} \\\\\n \\bar T_{sc} &= \\bar R_{sb} \\bar T_{bc} + \\bar T_{sb}\n\\end{cases}\n\\end{equation}\n\nand the error-state equations are\n\\begin{equation}\n\\begin{cases}\n\\tilde \\omega_{sc} &= \\bar R_{bc}^\\top \\tilde \\omega_{sb} + \\tilde \\omega_{bc} \\\\\n \\tilde T_{sc} &= \\bar R_{sb} \\tilde T_{bc} - \\bar R_{sb} \\asym{\\bar T_{bc}} \\tilde \\omega_{sb} + \\tilde T_{sb}\n\\end{cases}\n\\end{equation}\n\nTo augment the covariance matrix, build the Jacobian $J \\in \\real^{6\\times N}$ of the error-state equations above, where $N$ is the dimension of the state before augmentation; with the covariance matrix before augmentation $P \\in \\real^{N\\times N}$, the update is\n\n\\begin{equation}\n P \\leftarrow \n \\begin{bmatrix}\n  I_{N\\times N}\\\\\n  J\n \\end{bmatrix}\n P\n \\begin{bmatrix}\n I_{N\\times N}\\\\\n J\n \\end{bmatrix}^\\top\n\\end{equation}\n\nNote that if the camera-to-body transformation is not being estimated, simply set its error state to zero, i.e., $\\tilde \\omega_{bc}=0$ and $\\tilde T_{bc}=0$, and set the nominal state to the true state (calibrated offline), i.e., $\\bar R_{bc}=R_{bc}$ and $\\bar T_{bc}=T_{bc}$.\n\n\\subsection{State augmentation with $g_{sb}$}\n\nInstead of $g_{sc}$, we can augment the state with the body-to-spatial transformation $g_{sb}$ of the arriving image and share the camera-to-body alignment $g_{bc}$ among all augmented poses. In this case, initialization of the error state is trivial: simply duplicate the nominal state, the error state and the covariance block corresponding to $g_{sb}$ of the current state being estimated.\n\n\\subsection{State augmentation with 3D points}\n\nSimply duplicate the state and covariance of the depth subfilter (Sect.~\\ref{sect-depth}) into the main state and covariance.\n\n\n\\section{Measurement update - MSCKF}\nAssume the depth subfilter (Sect.~\\ref{sect-depth}) estimates the feature state well, so that its coordinates in the spatial frame $X_s\\in\\real^3$ can be computed. For each group $g_{sc}(t)$ in which the feature is visible, the pixel coordinates read \n\n\\begin{equation}\n x_p(t) = \\pi(g_{sc}^{-1}(t)X_s) = \\pi\\big( R_{sc}^\\top (X_s-T_{sc}) \\big).\n\\end{equation}\nFurthermore, if the state is augmented with the body-to-spatial pose $g_{sb}$ and the shared camera-to-body alignment $g_{bc}$, the visual measurements read\n\n\\begin{equation}\nx_p(t) = \\pi(g_{bc}^{-1}(t)g_{sb}^{-1}(t)X_s)=\\pi\\big(R_{bc}^\\top (R_{sb}^\\top(X_s-T_{sb}) - T_{bc})\\big).\n\\end{equation}\n\nIn this section, we derive the error-state form of the measurement equations with and without the camera-to-body alignment incorporated into the augmented pose. With the $g_{bc}$ incorporated into $g_{sc}$, we are assuming $g_{bc}$ is perfectly known after calibration before running the estimator, which is not realistic due to imperfections of the calibration procedure. 
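\n\nStepping back for a moment, here is a minimal NumPy sketch of the covariance augmentation step introduced above; the shapes are illustrative, with $J$ the $6\\times N$ Jacobian stacked from the two error-state equations:\n\\begin{verbatim}\nimport numpy as np\n\ndef augment_covariance(P, J):\n    # P: N x N covariance before augmentation; J: 6 x N Jacobian of\n    # the new pose error w.r.t. the existing error state.\n    A = np.vstack([np.eye(P.shape[0]), J])\n    return A @ P @ A.T   # (N+6) x (N+6) augmented covariance\n\\end{verbatim}\nReturning to the measurement equations:\n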
With only $g_{sb}$ augmented to the state and $g_{bc}$ shared among all the augmented poses, we can perform online spatial calibration, with the initial nominal value provided by the offline calibration.\n\n\\subsection{State augmentation with $g_{sc}$}\n\nTaylor-expand the projection function:\n\\begin{equation}\n \\pi(\\bar a + \\tilde a) \n = \\pi(\\bar a) + \\frac{d \\pi}{d a}|_{a=\\bar a} \\tilde a + o(\\tilde a)\n \\approx \\pi(\\bar a) + \\pi'(\\bar a) \\tilde a.\n\\end{equation}\n\nTherefore, if we write $R_{sc}^\\top(X_s-T_{sc})$ in the form $\\bar a+\\tilde a$, we obtain the following nominal and error-state equations:\n\n\\begin{equation}\n\\begin{cases}\n\\bar x_p = \\pi(\\bar a)\\\\\n\\tilde x_p = \\pi'(\\bar a) \\tilde a\n\\end{cases}\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\nR_{sc}^\\top(X_s-T_{sc}) \n&= (\\bar R_{sc} + \\bar R_{sc}\\asym{\\tilde \\omega_{sc}})^\\top (\\bar X_s - \\bar T_{sc} + (\\tilde X_s - \\tilde T_{sc})) \\\\\nR_{sc}^\\top(X_s-T_{sc}) \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n&= (\\bar R_{sc}^\\top - \\asym{\\tilde \\omega_{sc}}\\bar R_{sc}^\\top) (\\bar X_s - \\bar T_{sc} + (\\tilde X_s - \\tilde T_{sc})) \\\\\nR_{sc}^\\top(X_s-T_{sc}) \n&\\approx \\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc}) \n+ \\bar R_{sc}^\\top(\\tilde X_s - \\tilde T_{sc})\n- \\asym{\\tilde \\omega_{sc}} \\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc})\\\\\nR_{sc}^\\top(X_s-T_{sc}) \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n&\\approx \\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc}) \n+ \\bar R_{sc}^\\top(\\tilde X_s - \\tilde T_{sc})\n+ \\asym{\\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc})} \\tilde\\omega_{sc}\\\\\n\\end{aligned}\n\\end{equation}\n\nTo summarize:\n\n\\begin{equation}\n\\begin{cases}\n\\bar x_p=\\pi\\big( \\bar R_{sc}^\\top (\\bar X_s - \\bar T_{sc})\\big)\\\\\n\\tilde x_p=\\pi'\\big(\\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc})\\big) \n\\big( \\bar R_{sc}^\\top(\\tilde X_s - \\tilde T_{sc}) + \\asym{\\bar R_{sc}^\\top(\\bar X_s - \\bar T_{sc})} \\tilde \\omega_{sc}\\big)\n\\end{cases}\n\\end{equation}\n\n\\subsection{State augmentation with $g_{sb}$}\n\\label{sect-msckf-gsb}\nAgain, we need to write $R_{bc}^\\top\\big(R_{sb}^\\top(X_s-T_{sb})-T_{bc}\\big)$ in the form $\\bar a + \\tilde a$:\n\n\\begin{equation}\n\\begin{aligned}\n & R_{bc}^\\top\\big(R_{sb}^\\top(X_s-T_{sb})-T_{bc}\\big) \\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n = & (\\bar R_{bc} + \\bar R_{bc}\\asym{\\tilde\\omega_{bc}})^\\top\n \\big( (\\bar R_{sb} + \\bar R_{sb}\\asym{\\tilde\\omega_{sb}})^\\top (\\bar X_s - \\bar T_{sb}) + (\\bar R_{sb} + \\bar R_{sb}\\asym{\\tilde\\omega_{sb}})^\\top (\\tilde X_s - \\tilde T_{sb}) - (\\bar T_{bc} + \\tilde T_{bc}) \\big) \\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n = & (\\bar R_{bc}^\\top - \\asym{\\tilde\\omega_{bc}} \\bar R_{bc}^\\top)\n \\big( (\\bar R_{sb}^\\top - \\asym{\\tilde\\omega_{sb}} \\bar R_{sb}^\\top) (\\bar X_s - \\bar T_{sb}) + (\\bar R_{sb}^\\top - \\asym{\\tilde\\omega_{sb}} \\bar R_{sb}^\\top) (\\tilde X_s - \\tilde T_{sb}) - (\\bar T_{bc} + \\tilde T_{bc}) \\big) \\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n= & (\\bar R_{bc}^\\top - \\asym{\\tilde\\omega_{bc}} \\bar R_{bc}^\\top)\n (\\bar R_{sb}^\\top - \\asym{\\tilde\\omega_{sb}} \\bar R_{sb}^\\top) (\\bar X_s - \\bar T_{sb}) \\\\\n &+ (\\bar 
R_{bc}^\\top - \\asym{\\tilde\\omega_{bc}} \\bar R_{bc}^\\top)(\\bar R_{sb}^\\top - \\asym{\\tilde\\omega_{sb}} \\bar R_{sb}^\\top) (\\tilde X_s - \\tilde T_{sb}) \\\\\n &- (\\bar R_{bc}^\\top - \\asym{\\tilde\\omega_{bc}} \\bar R_{bc}^\\top)(\\bar T_{bc} + \\tilde T_{bc}) \\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \\approx & \\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\asym{\\tilde \\omega_{bc}}\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\asym{\\tilde \\omega_{sb}} \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) \\\\\n &+ \\bar R_{bc}^\\top\\bar R_{sb}^\\top (\\tilde X_s - \\tilde T_{sb}) \\\\\n &- \\bar R_{bc}^\\top \\bar T_{bc} - \\bar R_{bc}^\\top \\tilde T_{bc} + \\asym{\\tilde \\omega_{bc}} \\bar R_{bc}^\\top \\bar T_{bc} \\quad\\text{(drop high-order terms)}\\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n = & \\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) \n + \\asym{\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb})}\\tilde \\omega_{bc} \n + \\bar R_{bc}^\\top \\asym{\\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb})} \\tilde \\omega_{sb} \\\\\n &+ \\bar R_{bc}^\\top\\bar R_{sb}^\\top (\\tilde X_s - \\tilde T_{sb}) \\\\\n &- \\bar R_{bc}^\\top \\bar T_{bc} \n - \\bar R_{bc}^\\top \\tilde T_{bc} \n - \\asym{\\bar R_{bc}^\\top \\bar T_{bc}}\\tilde \\omega_{bc}  \\\\\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n=& \\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\bar T_{bc} \\quad\\text{(nominal term)}\\\\\n&+ \\asym{\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb})}\\tilde \\omega_{bc} \n+ \\bar R_{bc}^\\top \\asym{ \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb})}\\tilde \\omega_{sb}\n + \\bar R_{bc}^\\top\\bar R_{sb}^\\top (\\tilde X_s - \\tilde T_{sb}) \n - \\bar R_{bc}^\\top \\tilde T_{bc} - \\asym{\\bar R_{bc}^\\top \\bar T_{bc}} \\tilde \\omega_{bc} \\\\\n\\end{aligned}\n\\end{equation}\n\nTo summarize:\n\n\\begin{equation}\n\\begin{cases}\n\\bar x_p &= \\pi\\big(\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\bar T_{bc}\\big) \\\\\n\\tilde x_p &= \\pi'\\big(\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\bar T_{bc}\\big) \\\\\n    &\\quad\\cdot\n    \\big( \n    \\asym{\\bar R_{bc}^\\top \\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\bar T_{bc}}\\tilde \\omega_{bc} \n+ \\bar R_{bc}^\\top \\asym{\\bar R_{sb}^\\top (\\bar X_s - \\bar T_{sb})}\\tilde \\omega_{sb}\n+ \\bar R_{bc}^\\top\\bar R_{sb}^\\top (\\tilde X_s - \\tilde T_{sb}) \n- \\bar R_{bc}^\\top \\tilde T_{bc}\n    \\big)\n\\end{cases}\n\\label{eq-msckf-gsb}\n\\end{equation}\n\n\\subsection{Online temporal calibration}\n\nWith the temporal offset $t_d$ in image timestamps, we need to bring the 3D point to the time instant $t+t_d$ at which the visual features are actually measured, instead of the time $t$ at which the measurements are timestamped. 
The transformation reads $X_c = R_{bc}^\\top\\big(R_{sb}(t+t_d)^\\top (X_s - T_{sb}(t+t_d)) - T_{bc}\\big)$, where $T_{sb}(t+t_d)\\approx \\bar T_{sb} + \\bar v_{sb} \\tilde t_d$ (second-order error terms dropped) and\n$$\nR_{sb}(t+t_d) \\approx \\bar R_{sb}(I+\\asym{\\tilde \\omega_{sb}})(I + \\asym{\\bar\\omega}\\tilde t_d)\\approx\\bar R_{sb} + \\bar R_{sb}\\asym{\\tilde \\omega_{sb}} + \\bar R_{sb}\\asym{\\bar\\omega} \\tilde t_d \\quad\\text{(drop 2nd-order error terms)}\n$$\nwhere $\\bar R_{sb}\\doteq \\bar R_{sb}(t+\\bar t_d)$ is the nominal rotation propagated to time $t+\\bar t_d$, and $\\bar \\omega$ is the measured angular velocity.\n\nError terms involving $\\tilde t_d$ in $X_c$ then read\n$$\n\\bar R_{bc}^\\top \\big(\\bar R_{sb}\\asym{\\bar\\omega}\\tilde t_d\\big)^\\top (X_s - \\bar T_{sb}) - \\bar R_{bc}^\\top \\bar R_{sb}^\\top \\bar v_{sb} \\tilde t_d\n= -\\bar R_{bc}^\\top \\big(\\asym{\\bar\\omega}\\bar R_{sb}^\\top (X_s - \\bar T_{sb}) + \\bar R_{sb}^\\top \\bar v_{sb} \\big)\\tilde t_d.\n$$\n\nThis residual block can be inserted into the second part of the measurement residual equation $\\tilde x_p$ of Eq.~\\eqref{eq-msckf-gsb}, i.e., \n$$\n\\tilde x_p=\\pi'(\\ldots)\\cdot\\big(\\ldots\n- \\bar R_{bc}^\\top \\big(\\asym{\\bar\\omega}\\bar R_{sb}^\\top (X_s - \\bar T_{sb}) + \\bar R_{sb}^\\top \\bar v_{sb} \\big)\\tilde t_d\n\\big)\n$$\n\nNote that if online IMU intrinsics calibration is enabled, the angular velocity above is $\\omega=C_g \\omega_m - b_g$, where $C_g = \\bar C_g + \\tilde C_g$ and $b_g=\\bar b_g + \\tilde b_g$, and hence we need to expand $\\tilde x_p$ w.r.t. the error state of both the IMU intrinsics $\\tilde C_g$ and the bias $\\tilde b_g$:\n\n$$\n\\tilde x_p\n= \\pi'(\\ldots)\\cdot\n\\big(\n    \\ldots\n    +\\bar R_{bc}^\\top\\asym{\\bar R_{sb}^\\top (X_s - \\bar T_{sb})} \\bar t_d \n    ( \\tilde C_g \\omega_m - \\tilde b_g)\n\\big)\n$$\n\n\n\n\n\n\n\n\\section{Measurement update - EKF}\n\nThe derivation of the EKF measurement equation in error state form closely follows Sect.~\\ref{sect-msckf-gsb}. In fact, we can write down the error state representation of $X_s\\approx \\bar{X}_s + \\tilde{X}_s$ in terms of the feature parametrization (since the feature is now in the state) and substitute each term into Eq.~\\eqref{eq-msckf-gsb}.\n\n\\subsection{Error state form of $X_s$}\n\nLet's first express $X_s$ in error state form. 
Each 3D point is parametrized as $(x_c, y_c, \\rho_c)$ where $\\rho_c$ is the inverse depth, i.e., the coordinates of the 3D point in its reference camera frame are $X_c = [x_c/\\rho_c, y_c/\\rho_c, 1/\\rho_c]^\\top$.\n\n\\begin{equation}\n\\begin{aligned}\nX_s &= g_{sb}(t_r) g_{bc} X_c \\quad\\text{(let } g_{sb}(t_r)=[R_r|T_r] \\text{)}\\\\\n&= R_r(R_{bc} X_c + T_{bc}) + T_r \\\\ \n%&= R_r(R_{bc} (\\bar X_c + \\tilde X_c)\n%+ T_{bc}) + T_r \\quad\\text{formally of } X_c = \\bar X_c + \\tilde X_c\\\\\n&= (\\bar R_r + \\bar R_r \\asym{\\tilde \\omega_r})(\\bar R_{bc} + \\bar R_{bc}\\asym{\\tilde \\omega_{bc}})(\\bar X_c + \\tilde X_c) + (\\bar R_r + \\bar R_r \\asym{\\tilde \\omega_r})(\\bar T_{bc} + \\tilde T_{bc}) + (\\bar T_r + \\tilde T_r) \\\\\n&= \\bar R_r \\bar R_{bc} \\bar X_c  + \n\\bar R_r \\bar R_{bc} \\tilde X_c +\n\\bar R_r \\bar R_{bc} \\asym{\\tilde\\omega_{bc}} \\bar X_c +\n\\bar R_r \\asym{\\tilde \\omega_r} \\bar R_{bc} \\bar X_c \\quad\\text{(1st term, drop higher-order terms)}\\\\\n&\\quad + \\bar R_r \\bar T_{bc} + \\bar R_r \\tilde T_{bc} + \\bar R_r \\asym{\\tilde\\omega_r} \\bar T_{bc} \\quad\\text{(2nd term, drop higher-order terms)}\\\\\n&\\quad + \\bar T_r + \\tilde T_r.\n\\end{aligned}\n\\end{equation}\n\nTo summarize:\n\\begin{equation}\n\\begin{cases}\n\\bar X_s &= \\bar R_r \\bar R_{bc} \\bar X_c + \\bar R_r \\bar T_{bc} + \\bar T_r\\\\\n\\tilde X_s &= \\bar R_r \\bar R_{bc} \\tilde X_c - \\bar R_r \\bar R_{bc} \\asym{\\bar X_c} \\tilde \\omega_{bc} - \\bar R_r \\asym{\\bar R_{bc} \\bar X_c + \\bar T_{bc}} \\tilde \\omega_r + \\bar R_r \\tilde T_{bc} + \\tilde T_r.\n\\end{cases}\n\\end{equation}\n\nNow we need to expand $X_c = [x_c/\\rho_c, y_c/\\rho_c, 1/\\rho_c]^\\top$ in error state form. Let's start with the x-component (the same procedure applies to the y-component by symmetry):\n\\begin{equation}\n\\begin{aligned}\n\\frac{x_c}{\\rho_c}\n&= \\frac{\\bar x_c + \\tilde x_c}{\\bar \\rho_c + \\tilde \\rho_c} \\\\\n&= (\\bar x_c + \\tilde x_c) \\left (  \\frac{\\bar \\rho_c - \\tilde \\rho_c}{\\bar \\rho_c^2 - \\tilde \\rho_c^2}  \\right ) \\\\\n&\\approx (\\bar x_c + \\tilde x_c) \\left (  \\frac{\\bar \\rho_c - \\tilde \\rho_c}{\\bar \\rho_c^2}  \\right ) \\quad\\text{(drop higher-order term in denominator)}\\\\\n&= (\\bar x_c + \\tilde x_c)\\left (\\frac{1}{\\bar \\rho_c} - \\frac{1}{\\bar \\rho_c^2}\\tilde \\rho_c \\right )\\\\\n&\\approx \\frac{\\bar x_c}{\\bar\\rho_c} - \\frac{\\bar x_c}{\\bar \\rho_c^2} \\tilde\\rho_c + \\frac{1}{\\bar \\rho_c} \\tilde x_c \\quad\\text{(drop higher-order terms)}.\n\\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n\\begin{cases}\n\\bar X_c &=\n\\frac{1}{\\bar \\rho_c}\n\\begin{bmatrix}\n    \\bar x_c\\\\\n    \\bar y_c\\\\\n    1\n\\end{bmatrix} \\\\\n\\tilde X_c &=\n\\begin{bmatrix}\n-\\frac{\\bar x_c}{\\bar \\rho_c^2} \\tilde\\rho_c + \\frac{1}{\\bar \\rho_c} \\tilde x_c \\\\\n-\\frac{\\bar y_c}{\\bar \\rho_c^2} \\tilde\\rho_c + \\frac{1}{\\bar \\rho_c} \\tilde y_c \\\\\n-\\frac{1}{\\bar \\rho_c^2} \\tilde \\rho_c\n\\end{bmatrix} \n=\n\\begin{bmatrix}\n    1/\\bar\\rho_c & 0 & -\\bar x_c / \\bar\\rho_c^2\\\\\n    0 & 1/\\bar\\rho_c & -\\bar y_c / \\bar\\rho_c^2\\\\\n    0 & 0 & -1 / \\bar\\rho_c^2\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n    \\tilde x_c \\\\\n    \\tilde y_c \\\\\n    \\tilde \\rho_c\n\\end{bmatrix}\n\\end{cases}\n\\end{equation}\nSubstitute the above into $\\bar X_s, \\tilde X_s$, and we get the full measurement equation in error state form.\n\n\n\n\n\n\n\n\\section{Depth subfilter}\n\\label{sect-depth}\nWhen 
a feature is first observed at time $t_r$ with pixel coordinates $[x_p(t_r) , y_p(t_r)]^\\top\\in\\real^2$, we associate it with a reference group with camera pose $g_{ref}\\doteq g_{sc}(t_r)$ and initialize its state as $x=[x_c, y_c, \\rho]^\\top \\in \\real^3$, where $\\rho \\in \\real_+$ is the inverse depth, initialized at a nominal distance to the image plane, and $[x_c, y_c]^\\top$ is the feature position in camera coordinates:\n\\begin{equation}\n\\begin{bmatrix}\nx_c\\\\\ny_c\\\\\n1\n\\end{bmatrix}\n= \\pi^{-1}\n\\begin{bmatrix}\nx_p\\\\\ny_p\n\\end{bmatrix}\n\\end{equation}\n\nIn the depth subfilter, camera poses are treated as known and we estimate the state $x$, which has trivial dynamics $\\dot x=0$ under the assumption that the feature is static in space. When a new image arrives at time $t$ in which the same feature is observed at pixel location $x_p(t)\\in\\real^2$, the feature state is updated using the following measurement model:\n\\begin{equation}\n x_p(t) = \\pi\\big(g_{bc}^{-1}(t) g_{sb}^{-1}(t) g_{sc}(t_r)\n \\frac{1}{\\rho}\n\\begin{bmatrix}\nx_c\\\\\ny_c\\\\\n1\n\\end{bmatrix}\n\\big)\n\\end{equation}\n\n% \\section{Feature process}\n\n% When a feature is dropped by the tracker, update all the groups which see the feature.\n\n% When a new image arrives, create a new group (the total number of groups might exceed the limitation, address later), attach the newly created features to the new group. However, since this group is just observed, no features can be used to update its pose. We should create the new group *after we process all the existing features* with the updated pose.\n\n% Once a new group has been created, attach the newly created features to it and initialize the features at nominal distance to the image plane.\n\n% After the measurement update, if there are groups which do not see any features, remove them from the state. Then add the new group to state.\n\n\n% \\section{Evaluation on TUM-VI dataset}\n\n% We compare Corvis~\\cite{tsotsosCS15} and our new implementation XIVO against\n% \\begin{itemize}\n%     \\item OKVIS of Leutenegger {\\it et al.}~\\cite{leutenegger2015keyframe}\n%     \\item ROVIO of Bloesch {\\it et al.}~\\cite{bloesch2015robust,bloesch2017iterated}\n%     \\item VINS of Qin {\\it et al.}~\\cite{qin2018vins}\n% \\end{itemize}\n% which have been benchmarked on the TUM-VI dataset \\cite{schubert2018tum}. 
Corvis, XIVO and ROVIO are EKF-based, whereas OKVIS and VINS perform keyframe-based nonlinear optimization.\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% % EVALUATION WITH TUM RGB-D BENCHMARK SCRIPT \n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% \\begin{table}[h]\n% \\begin{center}\n% \\begin{tabular}{c|c|c|c||c|c|c}\n%     \\hline\n%      \\multicolumn{2}{c|}{} & \\multicolumn{2}{c||}{Keyframe-based} & \\multicolumn{3}{c}{EKF-based} \\\\\n%     \\hline\n%     Sequence & length & OKVIS & VINS & ROVIO & Corvis & XIVO \\\\\n%     \\hline\\hline\n%     room1 & 156{\\em m} & \\textbf{0.06}{\\em m} & 0.07{\\em m} & 0.16{\\em m}& 0.10{\\em m} & 0.22{\\em m} \\\\\n\n%     room2 & 142{\\em m} & 0.11{\\em m} & \\textbf{0.07}{\\em m} & 0.33{\\em m}& 0.15{\\em m} & 0.08{\\em m} \\\\\n\n%     room3 & 135{\\em m} & \\textbf{0.07}{\\em m} & 0.11{\\em m} & 0.15{\\em m}& 0.12{\\em m} &0.11{\\em m} \\\\\n\n%     room4 & 68{\\em m} & \\textbf{0.03}{\\em m} & 0.04{\\em m}  & 0.09{\\em m}& 0.12{\\em m} & 0.08{\\em m} \\\\\n\n%     room5 & 131{\\em m} & \\textbf{0.07}{\\em m} & 0.20{\\em m} & 0.12{\\em m}& 0.12{\\em m} & 0.11{\\em m} \\\\\n\n%     room6 & 67{\\em m} & \\textbf{0.04}{\\em m} & 0.08{\\em m}  & 0.05{\\em m}& 0.06{\\em m} & 0.12{\\em m} \\\\\n\n%     \\hline\n% \\end{tabular}\n% \\end{center}\n% \\caption{\\textit{RMSE ATE} in meters.}\n% \\end{table}\n\n% \\begin{table}[h]\n% \\begin{center}\n% \\begin{tabular}{c|c|c||c|c|c}\n%     \\hline\n%       & \\multicolumn{2}{c||}{Keyframe-based} & \\multicolumn{3}{c}{EKF-based} \\\\\n%      \\hline\n%     Sequence & OKVIS & VINS & ROVIO & Corvis & XIVO \\\\\n%     \\hline\\hline\n%     room1 & \\textbf{0.013}{\\em m} / \\textbf{0.43}$\\degree$ & 0.015{\\em m} / 0.44$\\degree$ & 0.029{\\em m} / 0.53$\\degree$ & 0.047{\\em m} / 3.45$\\degree$ & 0.031{\\em m} / 0.59$\\degree$\\\\\n\n%     room2 & \\textbf{0.015}{\\em m} / \\textbf{0.62}$\\degree$ & 0.017{\\em m} / 0.63$\\degree$ & 0.030{\\em m} / 0.67$\\degree$ & 0.037{\\em m} / 2.56$\\degree$ & 0.023{\\em m} / 0.75$\\degree$\\\\\n\n%     room3 & \\textbf{0.012}{\\em m} / \\textbf{0.63}$\\degree$ & 0.023{\\em m} / \\textbf{0.63}$\\degree$ & 0.027{\\em m} / 0.66$\\degree$ & 0.033{\\em m} / 2.66$\\degree$ & 0.027{\\em m} / 0.73$\\degree$\\\\\n\n%     room4 & \\textbf{0.012}{\\em m} / 0.57$\\degree$ & 0.015{\\em m} / \\textbf{0.41}$\\degree$ & 0.022{\\em m} / 0.61$\\degree$ & 0.13{\\em m} / 4.80$\\degree$& 0.023{\\em m} / 0.62$\\degree$\\\\\n\n%     room5 & \\textbf{0.012}{\\em m} / \\textbf{0.47}$\\degree$ & 0.026{\\em m} / \\textbf{0.47}$\\degree$ & 0.031{\\em m} / 0.60$\\degree$ & 0.052{\\em m} / 4.59$\\degree$& 0.023{\\em m} / 0.65$\\degree$\\\\\n\n%     room6 & \\textbf{0.012}{\\em m} / 0.49$\\degree$ & 0.014{\\em m} / \\textbf{0.44}$\\degree$ & 0.019{\\em m} / 0.50$\\degree$ & 0.020{\\em m} / 2.04$\\degree$& 0.031{\\em m} / 0.53$\\degree$\\\\\n%     \\hline\n% \\end{tabular}\n% \\end{center}\n% \\caption{\\textit{RMSE RPE} on 1 second segments.}\n% \\end{table}\n\n% \\section{Planned features}\n\n% \\begin{enumerate}\n%     \\item Online camera intrinsics calibration.\n%     \\item Loop closure.\n%     \\item Hybrid in-state and out-of-state (MSCKF) measurement update.\n% \\end{enumerate}\n\n\n\\clearpage\n\n\\appendix\n\\section{Skew-symmetric matrix identities}\n\n\\begin{prop}\nIf $R \\in\\SO{3}, \\omega\\in\\real^3$, then $R\\asym{\\omega} R^\\top=\\asym{R\\omega}$.\n\\end{prop}\n\n\\begin{proof}\nLet $\\SO{3} \\ni 
R^\\top=[r_1, r_2, r_3]$, then $r_i^\\top r_j=\\delta_{ij}$ and \n\n\\begin{equation}\n\\begin{aligned}\nr_1 \\times r_2=r_3\\\\\nr_2 \\times r_3=r_1\\\\\nr_3 \\times r_1=r_2\\\\\n\\end{aligned}\n\\label{eq-basis}\n\\end{equation}\n\nRecall that $\\asym{a} b\\doteq a\\times b$ and $a\\times b = -b\\times a$.\n\n\\begin{equation}\n\\begin{aligned}\nR\\asym{\\omega}R^\\top\n&=\n\\begin{bmatrix}\nr_1^\\top\\\\\nr_2^\\top\\\\\nr_3^\\top\n\\end{bmatrix}\n\\asym{\\omega}\n\\begin{bmatrix}\nr_1 & r_2 & r_3\n\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\nr_1^\\top \\asym{\\omega}r_1 & r_1^\\top \\asym{\\omega}r_2 & r_1^\\top\\asym{\\omega} r_3 \\\\\nr_2^\\top \\asym{\\omega}r_1 & r_2^\\top \\asym{\\omega}r_2 & r_2^\\top\\asym{\\omega} r_3 \\\\\nr_3^\\top \\asym{\\omega}r_1 & r_3^\\top \\asym{\\omega}r_2 & r_3^\\top\\asym{\\omega} r_3 \\\\\n\\end{bmatrix}\n\\end{aligned}\n\\end{equation}\n\nNotice $r_i^\\top\\asym\\omega r_i=r_i^\\top (\\omega \\times r_i)=0$ and $\\real\\ni r_i^\\top \\asym\\omega r_j=\\big( r_i^\\top \\asym\\omega r_j \\big)^\\top=r_j^\\top (\\asym\\omega)^\\top r_i=-r_j^\\top\\asym\\omega r_i$ since $\\asym\\cdot$ is skew-symmetric. Therefore $R\\asym\\omega R^\\top \\in so(3)$, to which we can apply the \\textit{vee} operator to bring it back to $\\real^3$:\n\\begin{equation}\n\\big(R\\asym{\\omega}R^\\top\\big)^\\vee\n=\n\\begin{bmatrix}\nr_3^\\top\\asym\\omega r_2\\\\\nr_1^\\top\\asym\\omega r_3\\\\\nr_2^\\top\\asym\\omega r_1\n\\end{bmatrix}\n\\end{equation}\n\nNow we only need to show $\\big(R\\asym\\omega R^\\top\\big)^\\vee=R\\omega=[r_1^\\top \\omega, r_2^\\top \\omega, r_3^\\top \\omega]^\\top$. Without loss of generality, let's show the equality holds for the first component, \\textit{i.e.}, \n\\begin{equation}\n\\begin{aligned}\n& r_3^\\top\\asym\\omega r_2 = r_1^\\top\\omega \\quad \\forall \\omega \\in \\real^3\\\\\n\\Leftrightarrow  & -r_3^\\top \\asym{r_2} \\omega = r_1^\\top \\omega \\quad \\forall \\omega \\in \\real^3 \\\\\n\\Leftrightarrow & -r_3^\\top \\asym{r_2} = r_1^\\top \\\\\n\\Leftrightarrow & \\asym{r_2} r_3 = r_1 \\quad \\text{ skew-symmetric } \\asym\\cdot^\\top=-\\asym\\cdot\\\\\n\\Leftrightarrow & r_2 \\times r_3 = r_1\n\\end{aligned}\n\\end{equation}\nwhere the last equation holds due to the properties of rotation matrices, Eq.~\\eqref{eq-basis}.\n\\end{proof}\n\n\\begin{prop}\nIf $a, b\\in\\real^3$, then $\\big(\\asym{a}\\asym{b}\\big)^\\top=\\asym{b}\\asym{a}$.\n\\end{prop}\n\n\\begin{proof}\n$\\big(\\asym{a}\\asym{b}\\big)^\\top=\\asym{b}^\\top\\asym{a}^\\top=-\\asym{b}\\cdot -\\asym{a}=\\asym{b}\\asym{a}$\n\\end{proof}\n\n\\begin{prop}\nIf $a, b\\in\\real^3$, then $\\asym{(a\\times b)} = \\asym{a}\\asym{b} - \\asym{b}\\asym{a}$.\n\\end{prop}\n\\begin{proof}\n    Expand both sides as $3\\times 3$ matrices and verify each entry.\n\\end{proof}\n\n\\section{Some transformations and their Jacobians}\n\nThis section collects some useful transformations and their Jacobians with respect to the error state of each quantity involved in the transformation. \n\n\\subsection{Notation}\n\\begin{itemize}\n    \\item Rigid body transformation: $g=[R|T]\\in\\SE{3}, \\text{ where } R\\in\\SO{3}, T\\in\\real^3$. 
Error state representation: $R=\\bar R(I+\\asym{\\omega})$, $T=\\bar T + \\tilde T$\n    \\item 3D point: $X=\\bar X + \\tilde X \\in \\real^3$\n\\end{itemize}\n\n\\subsection{Apply rigid body transformation to a 3D point}\n\nThe new point is $X_n=gX$; it is easy to see that the nominal and error state representations of $X_n$ are:\n\n$$\n\\bar X_n = \\bar R \\bar X + \\bar T, \\text{ and } \\tilde X_n = \\bar R\\tilde X - \\bar R\\asym{\\bar X}\\omega + \\tilde T\n$$\n\n\\subsection{Composition of rigid body transformation}\n\nThe composed rigid body motion is $g\\doteq [R|T] = g_1 g_2=[R_1 R_2|R_1 T_2 + T_1]$.\n\nTo obtain the error state representation of the rotation, expand both sides of the rotational part of the equation:\n\\begin{align}\n    \\bar R (I+\\asym{\\omega}) &= \\bar R_1(I+\\asym{\\omega_1})\\bar R_2(I+\\asym{\\omega_2}) \\\\\n    \\bar R + \\bar R\\asym{\\omega} &= \\bar R_1 \\bar R_2 + \\bar R_1\\asym{\\omega_1}\\bar R_2 + \\bar R_1 \\bar R_2 \\asym{\\omega_2}\\quad\\text{(drop 2nd-order error term)}\n\\end{align}\n\nIt's easy to see the nominal state is $\\bar R = \\bar R_1 \\bar R_2$, and the error state equation $\\bar R\\asym{\\omega}=\\bar R_1\\asym{\\omega_1}\\bar R_2 + \\bar R_1 \\bar R_2 \\asym{\\omega_2}$ requires more work, since the error term $\\omega$ is not expressed as a linear combination of $\\omega_1, \\omega_2$ yet:\n\n\\begin{align}\n    \\bar R \\asym{\\omega} &= \\bar R_1\\asym{\\omega_1}\\bar R_2 + \\bar R_1 \\bar R_2 \\asym{\\omega_2} \\\\\n    \\bar R \\asym{\\omega} &= \\bar R_1 \\bar R_2 \\bar R_2^\\top\\asym{\\omega_1}\\bar R_2 + \\bar R_1 \\bar R_2 \\asym{\\omega_2}\\quad\\text{(insert identity} I\\doteq \\bar R_2\\bar R_2^\\top \\text{)}\\\\\n    \\bar R \\asym{\\omega} &= \\bar R_1 \\bar R_2 \\asym{\\bar R_2^\\top\\omega_1} + \\bar R_1 \\bar R_2 \\asym{\\omega_2}\\quad\\text{(use identity} R\\asym{\\omega}R^\\top=\\asym{R \\omega} \\text{)} \\\\\n    \\bar R \\asym{\\omega} &= \\bar R \\asym{\\bar R_2^\\top\\omega_1} + \\bar R\\asym{\\omega_2}\\quad\\text{(use nominal equation} \\bar R=\\bar R_1 \\bar R_2 \\text{)} \\\\\n    \\asym{\\omega} &= \\asym{\\bar R_2^\\top \\omega_1} + \\asym{\\omega_2} \\\\\n    \\omega &= \\bar R_2^\\top \\omega_1 + \\omega_2\\quad\\text{(}\\asym{\\cdot}\\text{ is linear)}\n\\end{align}\n\nFor the translation part:\n\\begin{align}\n\\bar T + \\tilde T &= \\bar R_1(I + \\asym{\\omega_1})(\\bar T_2 + \\tilde T_2) + \\bar T_1 + \\tilde T_1 \\\\\n\\bar T + \\tilde T &= (\\bar R_1 \\bar T_2 + \\bar T_1) + \\bar R_1 \\tilde T_2 + \\bar R_1 \\asym{\\omega_1} \\bar T_2 + \\tilde T_1 \\\\\n\\bar T + \\tilde T &= (\\bar R_1 \\bar T_2 + \\bar T_1) + \\bar R_1 \\tilde T_2 - \\bar R_1 \\asym{\\bar T_2} \\omega_1 + \\tilde T_1,\n\\end{align}\nthus the nominal and error state equations are:\n$$\n\\bar T = \\bar R_1 \\bar T_2 + \\bar T_1 \\quad\\text{ and } \\tilde T = \\bar R_1 \\tilde T_2 -\\bar R_1 \\asym{\\bar T_2} \\omega_1 + \\tilde T_1\n$$\n\n\\subsection{Inverse of rigid body transformation}\nThe inverse of the rigid body transformation $g=[R | T]$ is $g^{-1}\\doteq [R_i | T_i]=[R^\\top|-R^\\top T]$.\n\nError state equation for the rotational part:\n\\begin{align}\n    \\bar R_i (I + \\asym{\\omega_i}) &= \\big(\\bar R(I + \\asym{\\omega})\\big)^\\top \\\\\n    \\bar R_i + \\bar R_i \\asym{\\omega_i} &= \\bar R^\\top + \\asym{\\omega}^\\top \\bar R^\\top \\\\\n    \\bar R_i + \\bar R_i \\asym{\\omega_i} &= \\bar R^\\top - \\asym{\\omega} \\bar R^\\top\\quad\\text{(}\\asym{\\cdot}\\text{ is skew-symmetric)}\\\\\n    \\bar R_i + \\bar 
R_i \\asym{\\omega_i} &= \\bar R^\\top - \\bar R^\\top \\bar R \\asym{\\omega} \\bar R^\\top\\quad\\text{(insert identity} I=\\bar R^\\top\\bar R\\text{ )} \\\\\n    \\bar R_i + \\bar R_i \\asym{\\omega_i} &= \\bar R^\\top - \\bar R^\\top \\asym{\\bar R\\omega}\\quad\\text{(use identity } R\\asym{\\omega}R^\\top=\\asym{R\\omega}\\text{)}.\n\\end{align}\n\nNow the nominal equation is $\\bar R_i = \\bar R^\\top$, and the error state equation is:\n\\begin{align}\n    \\bar R_i\\asym{\\omega_i} &= -\\bar R^\\top\\asym{\\bar R\\omega} \\\\\n    \\bar R^\\top \\asym{\\omega_i} &= -\\bar R^\\top\\asym{\\bar R\\omega}\\quad\\text{(since} \\bar R_i=\\bar R^\\top\\text{ )} \\\\\n    \\asym{\\omega_i} &= -\\asym{\\bar R\\omega} \\\\\n    \\omega_i &= -\\bar R\\omega\\quad\\text{(} \\asym{\\cdot}\\text{ is linear)}.\n\\end{align}\n\nFor the translational part:\n\\begin{align}\n    \\bar T_i + \\tilde T_i &= -\\big(\\bar R(I + \\asym{\\omega})\\big)^\\top(\\bar T + \\tilde T) \\\\\n    \\bar T_i + \\tilde T_i &= (-\\bar R^\\top + \\asym{\\omega}\\bar R^\\top)(\\bar T + \\tilde T) \\\\\n    \\bar T_i + \\tilde T_i &= -\\bar R^\\top \\bar T + \\asym{\\omega}\\bar R^\\top\\bar T -\\bar R^\\top \\tilde T \\\\\n    \\bar T_i + \\tilde T_i &= -\\bar R^\\top \\bar T - \\asym{\\bar R^\\top\\bar T}\\omega -\\bar R^\\top \\tilde T,\n\\end{align}\nthus the nominal and error state equations are:\n$$\n\\bar T_i = -\\bar R^\\top\\bar T,\\text{ and } \\tilde T_i = -\\asym{\\bar R^\\top\\bar T}\\omega - \\bar R^\\top \\tilde T.\n$$\n\n\\subsection{General composition rule}\nGiven a nonlinear function $g_2=f(g_1): \\SE{3}\\mapsto\\SE{3}$, assume its corresponding error state equation reads\n$$\n\\omega_2 = A \\omega_1, \\text{ and } \\tilde T_2 = B \\omega_1 + C \\tilde T_1.\n$$\n\nNow if we pass $g_2$ to another nonlinear function $g_3=h(g_2):\\SE{3}\\mapsto\\SE{3}$ whose error state equation reads\n$$\n\\omega_3 = D \\omega_2, \\text{ and } \\tilde T_3 = E \\omega_2 + F \\tilde T_2,\n$$\n\nthen the error state for $g_3=h(f(g_1))$ reads:\n$$\n\\omega_3 = D A \\omega_1, \\text{ and } \\tilde T_3 = E A \\omega_1 + F(B\\omega_1 + C\\tilde T_1).\n$$\n\n\n\n\n\\clearpage\n\\bibliographystyle{unsrt}\n\\bibliography{refs}\n\n\\end{document}\n", "meta": {"hexsha": "f890f452e18a10b562b4b5f0d301dce4c5817be9", "size": 55088, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doc.tex", "max_stars_repo_name": "tekrajchhetri/xivo", "max_stars_repo_head_hexsha": "3e3b9b978c48fa95e9c1259d0ba278c04de2a0e0", "max_stars_repo_licenses": ["BSD-4-Clause-UC"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/doc.tex", "max_issues_repo_name": "tekrajchhetri/xivo", "max_issues_repo_head_hexsha": "3e3b9b978c48fa95e9c1259d0ba278c04de2a0e0", "max_issues_repo_licenses": ["BSD-4-Clause-UC"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/doc.tex", "max_forks_repo_name": "tekrajchhetri/xivo", "max_forks_repo_head_hexsha": "3e3b9b978c48fa95e9c1259d0ba278c04de2a0e0", "max_forks_repo_licenses": ["BSD-4-Clause-UC"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9233390119, "max_line_length": 775, "alphanum_fraction": 0.638959483, "num_tokens": 21182, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.793105951184112, "lm_q1q2_score": 0.5846445257447541}}
{"text": "\\chapter{Discrete-Time Stochastic Optimal Control}\n\\label{ch:discrete_time_stochastic_optimal_control}\n\nIn sequential decision problems, an agent interacts with an environment by\nselecting a series of actions in order to complete a specific task. During this\ninteraction, the agent receives a numerical reward from the environment and its\ngoal is to find the best strategy in order to maximize a certain measure of its\nperformance. The environment evolves stochastically and may be influenced by\nthe interaction with the agent, so that each action taken by the agent may \ninfluence the circumstances under which future decisions will be made. \nTherefore, the agent must balance its desire to obtain a large reward today \nby acting greedily against the opportunities that will be available in the future. \nWhile this setting appears quite simple, it is general enough to encompass a \nwide range of applications in different fields. A classical example is \nportfolio management, where an investor must allocate his capital so as to \nmaximize his long-term profits. Another standard example is chess, where two\nplayers successively move pieces around the chessboard to checkmate the\nopponent's king.\\\\\nThe purpose of the following sections is to introduce the notation that will be\nused in the rest of this work and to recall the fundamental concepts and \nresults of discrete-time stochastic optimal control theory, which is the \nstandard framework to study sequential decision problems in mathematical\nterms. Since our discussion will be far from comprehensive, we refer the\nreader to the extensive literature on the subject, such as\n\\cite{bertsekas1978stochastic}, \\cite{puterman1994markov}, \n\\cite{bertsekas1995dynamic}.\n\n\\section{Markov Decision Processes}\nA sequential decision problem under uncertainty can be schematized as in Figure\n\\ref{fig:sequential_decision_problem}: at a given time $t$, the agent (also\nknown as decision maker or controller) observes the state $s_t$ of the system \n(also known as the environment) and subsequently performs an action $a_t$. Following\nthis action, the agent receives an immediate reward $r_{t+1}$ (or incurs an\nimmediate cost) and the system evolves to a new state according to a probability\ndistribution which depends on the action selected by the agent. At the\nsubsequent time $t+1$, the agent selects a new action given the new state of\nthe system and the process repeats. This interaction can be modeled rigorously\nusing a Markov decision process.  \n\\begin{definition}[Markov Decision Process]\n\tA Markov decision process (MDP) is a stochastic dynamical system specified by the tuple $<\\S, \\A, \\calP, \\calR,\n\t\\gamma>$, where\n\t\\begin{enumerate}[label={\\roman*)}]\n\t\t\\item $(\\S, \\calS)$ is a measurable space, called the state space.\n\t\t\\item $(\\A, \\calA)$ is a measurable space, called the action space. 
\n\t\t\\item $\\calP: \\S \\times \\A \\times \\calS \\to \\R$ is a Markov transition\n\t\t\tkernel, i.e.\n\t\t\t\\begin{enumerate}[label={\\alph*)}]\n\t\t\t\t\\item for every $s\\in\\S$ and $a\\in\\A$, $B \\mapsto \\calP(s,a,B)$\n\t\t\t\t\t  is a probability distribution over $(\\S, \\calS)$.\n\t\t\t\t\\item for every $B\\in\\calS$, $(s,a) \\mapsto \\calP(s,a,B)$ is\n\t\t\t\t\t  a measurable function on $\\S \\times \\A$.\n\t\t\t\\end{enumerate}\n\t\t\\item $\\calR: \\S \\times \\A \\to \\R$ is a reward function.\n\t\t\\item $\\gamma \\in (0,1)$ is a discount factor.\n\t\\end{enumerate}\n\\end{definition}\nTypically, the state space (and similarly the action space) will be either\nfinite, namely $\\S = \\{s_1, \\ldots, s_d\\}$, or continuous, namely $\\S \\subseteq\n\\R^{D_s}$. The kernel $\\calP$ describes the random evolution of the system:\nsuppose that at time $t$ the system is in state $s$ and that the agent takes\naction $a$; then, regardless of the previous history of the system, the\nprobability of finding the system in a state belonging to $B\\in\\calS$ at time\n$t+1$ is given by \n\\begin{equation}\n\t\\calP(s, a, B) = \\P{S_{t+1} \\in B | S_t = s, A_t = a}\n\\end{equation}\nFollowing this random transition, the agent receives a stochastic reward\n$R_{t+1}$. The reward function $\\calR(s, a)$ gives the expected reward\nobtained when action $a$ is taken in state $s$ \n\\begin{equation}\n\t\\calR(s, a) = \\E{R_{t+1} | S_t = s, A_t = a}\n\\end{equation}\nThis setting can be easily generalized to the following cases\n\\begin{enumerate}\n\t\\item The initial state of the system is a random variable. \n\t\\item The actions that an agent can select depend on the state of the system.\n\\end{enumerate}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{tikzpicture}[node distance = 6em, auto, thick]\n\t\t\\node [block] (Agent) {Agent};\n\t\t\\node [block, below of=Agent] (Environment) {Environment};\t\t    \n\t\t\\path [line] (Agent.0) --++ (4em,0em) |- node [near start]{Action $a_t$} (Environment.0);\n\t\t\\path [line] (Environment.190) --++ (-6em,0em) |- node [near start]{State  $s_{t}$} (Agent.170);\n\t\t\\path [line] (Environment.170) --++ (-4.25em,0em) |- node [near start, right] {Reward $r_{t+1}$} (Agent.190);\n\t\\end{tikzpicture}\n\t\\caption{Agent-environment interaction in sequential decision problems.}\n\t\\label{fig:sequential_decision_problem}\n\\end{figure}\n\n\\section{Policies}\nAt any time step, the agent selects its actions according to a certain policy. \n\\begin{definition}[Policy]\n\tA policy is a function $\\pi: \\S \\times \\calA \\to \\R$ such that\n\t\\begin{enumerate}[label={\\roman*)}]\n\t\t\\item for every $s \\in \\S$, $C \\mapsto \\pi(s,C)$ is a probability\n\t\t\t  distribution over $(\\A, \\calA)$. \n\t\t\\item for every $C \\in \\calA$, $s \\mapsto \\pi(s, C)$ is a measurable\n\t\t\t  function. \n\t\\end{enumerate}\n\\end{definition}\nIntuitively, a policy represents a stochastic mapping from the current state of\nthe system to actions. Deterministic policies are a particular case of this \ngeneral definition. We assume that the agent's policy is stationary and only\ndepends on the current state of the system. We might in fact consider more \ngeneral policies that depend on the whole history of the system. However, as\nwe will see, we can always find an optimal policy that depends only on the\ncurrent state, so that our definition is not restrictive. 
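\n\nAs a concrete illustration of the interaction mechanism described above (see also\nFigure \\ref{fig:sequential_decision_problem}), the following is a minimal sketch of\nsampling a trajectory from a finite MDP. The array-based representation and all names\nare illustrative assumptions, not part of the formal definitions:\n\\begin{verbatim}\nimport numpy as np\n\ndef rollout(P, R, pi, s0, horizon, seed=0):\n    # P[s, a, t] : transition kernel P(s, a, {t})\n    # R[s, a]    : expected reward of action a in state s\n    # pi[s, a]   : stochastic policy pi(s, {a})\n    rng = np.random.default_rng(seed)\n    s, history = s0, []\n    for _ in range(horizon):\n        a = rng.choice(pi.shape[1], p=pi[s])        # A_t ~ pi(S_t, .)\n        s_next = rng.choice(P.shape[2], p=P[s, a])  # S_{t+1} ~ P(S_t, A_t, .)\n        history.append((s, a, R[s, a]))             # reward R_{t+1}\n        s = s_next\n    return history\n\\end{verbatim}\n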
A policy $\\pi$ and an\ninitial state $s_0 \\in \\S$ determine a random state-action-reward sequence\n${\\{(S_t, A_t, R_{t+1})\\}}_{t\\geq 0}$ with values in $\\S \\times \\A \\times \\R$\nfollowing the mechanism described above. \n\\begin{definition}[History]\n\tGiven an initial state $s_0 \\in \\S$ and a policy $\\pi$, a history (or\n\tequivalently trajectory or roll-out) of the system is a random sequence\n\t$H_\\pi = {\\{(S_t, A_t)\\}}_{t\\geq 0}$ with values in $\\S \\times \\A$, defined\n\ton some probability space $(\\Omega, \\mathcal{F}, \\mathbb{P})$, such that for \n\t$t = 0, 1, \\ldots$\n\t\\begin{equation*}\n\t\t\\begin{cases}\n\t\t\tS_0 = s_0\\\\\n\t\t\tA_t \\sim \\pi(S_t, \\cdot)\\\\\n\t\t\tS_{t+1} \\sim \\calP(S_t, A_t, \\cdot)\\\\\n\t\t\\end{cases}\n\t\\end{equation*}\n\twe will denote by $(\\H, \\calH)$ the measurable space of all possible\n\thistories. \n\\end{definition}\nMoreover, we observe that\n\\begin{enumerate}[label={\\roman*)}]\n\t\\item the state sequence ${\\{S_t\\}}_{t\\geq 0}$ is a Markov process $<\\S,\n\t\t  \\calP_\\pi>$.\n\t  \\item the state-reward sequence ${\\{(S_t, R_{t+1})\\}}_{t\\geq 0}$ is a Markov \n\t\t  reward process $<\\S, \\calP_\\pi, \\calR_\\pi, \\gamma>$.\n\\end{enumerate}\nwhere we denoted \n\\begin{equation*}\n\t\\begin{split}\n\t\t\\calP_\\pi(s, s') &= \\int_\\A \\pi(s, a) \\calP(s, a, s') da\\\\\n\t\t\\calR_\\pi(s) &= \\int_\\A \\pi(s, a) \\calR(s, a) da\\\\\n\t\\end{split}\n\\end{equation*}\nIn stochastic optimal control, the goal of the agent is to find a policy that\nmaximizes a measure of the agent's long-term performance. In the next sections\nwe discuss some objective functions that are commonly used in the infinite\nhorizon framework. \n\n\\section{Risk-Neutral Framework}\nIn the risk-neutral setting, the agent is only interested in maximizing its reward, without considering the risk it needs to take on to achieve it. In an infinite horizon task, the agent's performance is typically measured either as the total discounted reward or as the average reward obtained at each time step. These two approaches are radically different both from a theoretical and an algorithmic point of view. For this reason, they will always be treated separately.  \n\n\\subsection{Discounted Reward Formulation}\nIn the discounted reward formulation, the agent's performance is measured as\nthe expected return obtained following a specific policy.\n\\begin{definition}[Return]\n\tThe return is the total discounted reward obtained by the agent starting \n\tfrom $t$  \n\t\\begin{equation*}\n\t\tG_t = \\sum^{\\infty}_{k=0} \\gamma^k R_{t+k+1} \n\t\\end{equation*}\n\twhere $0 < \\gamma < 1$ is the discount factor.\n\\end{definition}\nIn some domains, such as economics, discounting can be used to represent\ninterest earned on rewards, so that an action that generates an immediate\nreward will be preferred over one that generates the same reward some steps\ninto the future. Discounting thus models the trade-off between immediate and\ndelayed reward: if $\\gamma = 0$ the agent selects its actions in a myopic way,\nwhile if $\\gamma \\to 1$ it acts in a far-sighted manner. There are other\npossible reasons for discounting future rewards. The first is because it is\nmathematically convenient, as it avoids infinite returns and it solves many\nconvergence issues. Another interpretation is that it models the uncertainty\nabout the future, which may not be fully represented. 
Indeed, the discount\nfactor could be seen as the probability that the world does not stop at a given \ntime step. \n\\begin{definition}[State-Value Function]\n\tThe state-value function $V_\\pi: \\S \\to \\R$ is the expected return that can\n\tbe obtained starting from a state and following policy $\\pi$\n\t\\begin{equation}\n\t\tV_\\pi(s) = \\E[\\pi]{G_t|S_t = s}\n\t\\end{equation}\n\\end{definition}\nwhere the subscript in $\\mathbb{E}_{\\pi}$ indicates that all the actions are selected according to policy $\\pi$. The state-value function measures how good it is for the agent to be in a given state and follow a certain policy. Similarly, we can introduce an action-value function that measures how good it is for the agent to be in a state, take a certain action and then follow the policy. \n\\begin{definition}[Action-Value Function]\n\tThe action-value function $Q_\\pi: \\S \\times \\A \\to \\R$ is the expected \n\treturn that can be obtained starting from a state, taking an action and\n\tthen following policy $\\pi$\n\t\\begin{equation}\n\t\tQ_\\pi(s,a) = \\E[\\pi]{G_t|S_t = s, A_t = a}\n\t\\end{equation}\n\\end{definition}\nWe have the following relationship between $V_\\pi$ and $Q_\\pi$\n\\begin{equation}\n\tV_\\pi(s) = \\int_\\A \\pi(s,a) Q_\\pi(s,a) da\n\\end{equation}\nAlmost all reinforcement learning algorithms are designed to estimate these \nvalue functions and are typically based on the Bellman equations.\n\\begin{proposition}[Bellman Expectation Equations]\n\t\\begin{equation}\n\t\tV_\\pi(s) = \\calR_\\pi(s) + \\gamma T_\\pi V_\\pi(s)\t\n\t\t\\label{eq:bellman_expectation_eq_V}\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\tQ_\\pi(s,a) = \\calR(s,a) + \\gamma T_a V_\\pi(s)\n\t\t\t\\label{eq:bellman_expectation_eq_Q}\n\t\\end{equation}\n\twhere we denoted by $T_a$ (resp. $T_\\pi$) the transition operator for action \n\t$a$ (resp. for policy $\\pi$)\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\tT_a F(s) &= \\E{F(S_{t+1})|S_t = s, A_t = a} = \\int_\\S \\calP(s, a, s') F(s') ds'\\\\\n\t\t\tT_\\pi F(s) &= \\E[\\pi]{F(S_{t+1})|S_t = s} = \\int_\\A \\pi(s,a) \\int_\\S \\calP(s,a,s') F(s') ds'\n\t\t\tda\\\\ \n\t\t\\end{split}\n\t\\end{equation*}\n\\end{proposition}\nIf we introduce the Bellman expectation operator $B_\\pi$, defined as \n\\begin{equation*}\n\tB_\\pi V_\\pi(s) = \\calR_\\pi(s) + \\gamma T_\\pi V_\\pi(s)\t\n\\end{equation*}\nthen Eq. (\\ref{eq:bellman_expectation_eq_V}) can be written as a fixed-point equation\n\\begin{equation*}\n\tV_\\pi(s) = B_\\pi V_\\pi(s)\n\\end{equation*}\nwhich, under some simple assumptions on the reward functions, admits a unique\nsolution by the contraction mapping theorem. A similar argument holds for Eq.\n(\\ref{eq:bellman_expectation_eq_Q}). The goal of the agent is to select a policy\n$\\pi_*$ that maximizes its expected return in all possible states. 
Such a\npolicy is called \\emph{optimal}.\n\\begin{definition}[Optimal State-Value Function]\n\tThe optimal state-value function $V_*: \\S \\to \\R$ is the largest expected \n\treturn that can be obtained starting from a state\n\t\\begin{equation}\n\t\tV_*(s) = \\sup_\\pi V_\\pi(s)\n\t\\end{equation}\n\\end{definition}\n\\begin{definition}[Optimal Action-Value Function]\n\tThe optimal action-value function $Q_*: \\S \\times \\A \\to \\R$ is the largest\n\texpected return that can be obtained starting from a state and taking an\n\taction\n\t\\begin{equation}\n\t\tQ_*(s,a) = \\sup_\\pi Q_\\pi(s,a)\n\t\\end{equation}\n\\end{definition}\nThe optimal value functions satisfy the following Bellman equations.\n\\begin{proposition}[Bellman Optimality Equations]\n\t\\begin{equation}\n\t\tV_*(s) = \\sup_a Q_*(s,a) = \\sup_a \\left\\{\\calR(s,a) + \\gamma T_a V_*(s)\\right\\}\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\tQ_*(s,a) &= \\calR(s,a) + \\gamma T_a V_*(s)\\\\\n\t\t\t\t\t &= \\calR(s,a) + \\gamma \\int_\\S \\calP(s,a,s') \\sup_{a'} Q_*(s', a') ds'\n\t\t\\end{split}\n\t\\end{equation}\n\\end{proposition}\nAgain, these two equations are fixed-point equations and the existence and\nuniqueness of a solution is guaranteed, under some technical assumptions, by \nthe contraction mapping theorem. Starting from the optimal value functions, we\ncan easily derive an optimal policy. Let us define a partial ordering in the \npolicy space\n\\begin{equation}\n\t\\pi \\succeq \\pi' \\Leftrightarrow V_\\pi(s) \\geq V_{\\pi'}(s) \\;\\;\\; \\forall s \\in \\S\n\\end{equation}\nAn optimal policy $\\pi_*$ then satisfies $\\pi_* \\succeq \\pi$, $\\forall \\pi$. We have the\nfollowing result\n\\begin{theorem}[Optimal Policy]\n\tFor any Markov decision process,\n\t\\begin{enumerate}[label={\\roman*)}]\n\t\t\\item There exists an optimal policy $\\pi_*$ such that $\\pi_* \\succeq\n\t\t\t\\pi$, $\\forall \\pi$. \n\t\t\\item $V_{\\pi_*}(s) = V_*(s)$.\n\t\t\\item $Q_{\\pi_*}(s,a) = Q_*(s,a)$. \n\t\\end{enumerate}\n\\end{theorem}\nAn optimal policy can be found by acting greedily with respect to $Q_*$, which corresponds to selecting the action that maximizes the action-value function in a given state\n\\begin{equation}\n\ta_* = \\argsup_{a\\in\\A} Q_*(s,a)\n\\end{equation}\nThis policy is deterministic and only depends on the current state of the system.\n\n\\subsection{Average Reward Formulation}\nMost of the research in RL has studied a problem formulation where agents\nmaximize the cumulative sum of rewards. However, this approach cannot handle\ninfinite horizon tasks, where there are no absorbing goal states, without\ndiscounting future rewards. Clearly, discounting is only necessary in cyclical\ntasks, where the cumulative reward sum can be unbounded. A more natural long-term\nmeasure of optimality exists for such cyclical tasks, based on maximizing the\naverage reward per action. For a more in-depth presentation, the reader may again refer to the extensive literature on the subject, such as \\cite{arapostathis1993discrete}, \\cite{mahadevan1996average} and the references therein. In the average reward setting, also known as long-run reward or ergodic reward, the goal of the agent is to find a policy that maximizes the expected reward per step. 
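\nBefore moving to the formal definition below, note that this criterion lends itself to a simple simulation-based estimate; the following Python sketch (ours, with hypothetical arrays P, R and pi as in the previous snippet) averages the rewards collected along a long trajectory, which converges to the average reward when the induced chain is ergodic.\n\\begin{verbatim}\n# Sketch: Monte Carlo estimate of the average reward per step of a\n# policy pi on a finite MDP with kernel P[s, a, s'] and rewards R[s, a].\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef estimate_average_reward(P, R, pi, s0=0, T=100000):\n    s, total = s0, 0.0\n    for _ in range(T):\n        a = rng.choice(P.shape[1], p=pi[s])    # A_t ~ pi(S_t, .)\n        total += R[s, a]\n        s = rng.choice(P.shape[2], p=P[s, a])  # S_{t+1} ~ P(S_t, A_t, .)\n    return total / T\n\\end{verbatim}\n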
\n\\begin{definition}[Average Reward]\n\tThe average reward $\\rho_\\pi$ associated with a policy $\\pi$ is defined as  \n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\rho_\\pi &= \\lim_{T\\to\\infty} \\frac{1}{T} \\E[\\pi]{ \\sum^{T-1}_{t=0} R_{t+1}}\\\\\n\t\t\t\t\t &= \\E[\\substack{S\\sim d_\\pi \\\\ A \\sim \\pi}]{\\calR(S,A)}\\\\ \n\t\t\t\t\t &= \\int_\\S d_\\pi(s) \\int_\\A \\pi(s, a) \\calR(s,a) da ds\\\\ \n\t\t\\end{split}\n\t\\end{equation}\nwhere $d_\\pi$ is the stationary distribution of the Markov process induced by $\\pi$.\n\\end{definition}\nThe agent aims to find an \\emph{average optimal} policy\n\\begin{equation}\n\t\\pi_* = \\argsup_\\pi \\rho_\\pi\n\\end{equation}\nIn this setting, we introduce the \\emph{average adjusted} value and action-value \nfunctions. \n\\begin{definition}[Average Adjusted State-Value Function]\n\tThe average adjusted state-value function $V_\\pi : \\S \\to \\R$ is the\n\texpected residual return that can be obtained starting from a state and\n\tfollowing policy $\\pi$\n\t\\begin{equation}\n\t\tV_\\pi(s) = \\E[\\pi]{\\sum^{\\infty}_{t=0} \\left(R_{t+1} - \\rho_\\pi\\right)\n\t\t\\bigg| S_0 = s}\n\t\\end{equation}\n\\end{definition}\nThe term $V_\\pi(s)$ is usually referred to as the \\emph{bias} value, or the\n\\emph{relative} value, since it represents the relative difference in total\nreward gained starting from a state $s$ as opposed to a generic state. \n$\\rho_\\pi$ serves as a baseline that allows one to avoid divergence in the value\nfunction definition.\n\\begin{definition}[Average Adjusted Action-Value Function]\n\tThe average adjusted action-value function $Q_\\pi : \\S \\times \\A \\to \\R$ is \n\tthe expected residual return that can be obtained starting from a state,\n\ttaking an action and then following policy $\\pi$\n\t\\begin{equation}\n\t\tQ_\\pi(s, a) = \\E[\\pi]{\\sum^{\\infty}_{t=0} \\left(R_{t+1} -\n\t\t\t\\rho_\\pi\\right) \\bigg| S_0 = s, A_0\n\t\t= a}\n\t\\end{equation}\n\\end{definition}\nWe have the following relation between the state-value function and the\naction-value function\n\\begin{equation}\\label{eq:VQ_equality}\n\tV_\\pi(s) = \\int_\\A \\pi(s,a) Q_\\pi(s,a) da\n\\end{equation}\nThe value functions satisfy the following Bellman equations \n\\begin{proposition}[Bellman Expectation Equations]\n\t\\begin{equation}\n\t\tV_\\pi(s) = \\calR_\\pi(s) - \\rho_\\pi + T_\\pi V_\\pi(s)\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\tQ_\\pi(s,a) = \\calR(s,a) - \\rho_\\pi + T_a V_\\pi(s)\n\t\\end{equation}\n\\end{proposition}\nAgain, by introducing opportune Bellman operators, these equations can be rewritten as fixed-point equations. In the discrete case, where the transition operators correspond to matrices, these Bellman equations become linear systems that can be solved to obtain the value functions. \n\n\\section{Risk-Sensitive Framework}\n\\label{sec:risk_sensitive_formulation}\nIn many applications, in addition to maximizing the average reward, the agent\nmay want to control risk by minimizing some measure of variability in rewards. In the risk-sensitive framework, the goal of the agent is to find the policy that optimally solves the trade-off between reward and risk. Although risk-sensitive sequential decision-making has a long history in operations research and finance, it has only recently grabbed the attention of the machine learning community. Hence, the literature offers many references that approach the risk-sensitive control problem from the traditional stochastic optimal control perspective. 
On the other hand, there are only a few references that attack the problem in the reinforcement learning setting. Again, we can consider the discounted formulation or the average formulation.    \n\n\\subsection{Discounted Reward Formulation}\nA standard way to measure the risk associated with a policy $\\pi$ is the variance of the total discounted reward obtained starting from a given state $s$\n\\begin{equation}\n\t\\Lambda_\\pi(s) = \\Var[\\pi]{G_t | S_t = s}\n\\end{equation}\nThis approach is the one considered in \\cite{sobel1982variance}. The variance can be decomposed as \n\\begin{equation}\n\t\\label{eq:variance_decomposition}\n\t\\Lambda_\\pi(s) = U_\\pi(s) - V_\\pi(s)^2\n\\end{equation}\nwhere we denoted by $U_\\pi$ the square state-value function.\n\\begin{definition}[Square State-Value Function]\n\tThe square state-value function $U_\\pi(s)$ is the second moment of the return that can be obtained under policy $\\pi$ starting from a state $s$ \n\t\\begin{equation}\n\t\tU_\\pi(s) = \\E[\\pi]{G_t^2 | S_t = s}\n\t\\end{equation}\n\\end{definition}\nAs for the risk-neutral formulation, it comes in handy to introduce a square action-value function.\n\\begin{definition}[Square Action-Value Function]\n\tThe square action-value function $W_\\pi(s,a)$ is the second moment of the return that can be obtained starting from a state $s$, taking action $a$ and then following policy $\\pi$\n\t\\begin{equation}\n\t\tW_\\pi(s, a) = \\E[\\pi]{G_t^2 | S_t = s, A_t = a}\n\t\\end{equation}\n\\end{definition}\nClearly, the two square value functions are related by the following equation\n\\begin{equation}\n\tU_\\pi(s) = \\int_{\\A} \\pi(s,a) W_\\pi(s,a) da\n\\end{equation}\nWe would like to obtain a Bellman expectation equation for $U_\\pi(s)$ and $W_\\pi(s,a)$, in order to piggyback on the discussion about the standard value functions. To do so, let us introduce the square reward function.\n\\begin{definition}[Square Reward Function]\n\tThe square reward function $\\calM(s,a)$ is the second moment of the reward that can be obtained in state $s$ when taking action $a$\n\t\\begin{equation}\n\t\t\\calM(s,a) = \\E{R_{t+1}^2 | S_t = s, A_t = a}\n\t\\end{equation}\n\tLet us also define the state-reward function $\\calM_\\pi(s)$ \n\t\\begin{equation}\n\t\t\\calM_\\pi(s) = \\E[\\pi]{R_{t+1}^2|S_t = s} = \\int_\\A \\pi(s,a) \\calM(s,a) da\n\t\\end{equation}\n\\end{definition}\nMoreover, we will need the reward-return state-covariance function. \n\\begin{definition}[Reward-Return State-Covariance Function]\n\tThe reward-return state-covariance function $C_\\pi(s)$ is the covariance between the immediate reward and the successive returns starting from state $s$ and then following policy $\\pi$\n\t\\begin{equation}\n\t\tC_\\pi(s) = \\Cov[\\pi]{R_{t+1}, G_{t+1} | S_t = s}\n\t\\end{equation}\n\\end{definition}\nAgain, it comes in handy to also introduce the reward-return action-covariance function. 
\n\\begin{definition}[Reward-Return Action-Covariance Function]\n\tThe reward-return action-covariance function $C_\\pi(s,a)$ is the covariance between the first reward and the successive return starting from state $s$, taking action $a$ and then following policy $\\pi$\n\t\\begin{equation}\n\t\tC_\\pi(s,a) = \\Cov[\\pi]{R_{t+1}, G_{t+1} | S_t = s, A_t = a}\n\t\\end{equation}\n\\end{definition}\nThen, it is easy to show that the square state-value function satisfies the following Bellman equation \n\\begin{proposition}[Bellman Expectation Equation]\n\\begin{equation}\n\tU_\\pi(s) = \\calK_\\pi(s) + \\gamma^2 T_\\pi U_\\pi(s)\n\\end{equation}\n\\begin{equation}\n\t\\begin{split}\n\t\tW_\\pi(s,a) = \\calK_\\pi(s, a) + \\gamma^2 T_a U_\\pi(s)\n\t\\end{split}\n\\end{equation}\nwhere we denoted\n\\begin{equation}\n\t\\mathcal{K}_\\pi(s) = \\calM_\\pi(s) + 2\\gamma \\calR_\\pi(s) T_\\pi V_\\pi(s) + 2 \\gamma C_\\pi(s)\n\\end{equation}\n\\begin{equation}\n\t\\calK_\\pi(s, a) = \\calM(s,a) + 2\\gamma \\calR(s,a) T_a V_\\pi(s) + 2 \\gamma C_\\pi(s, a)\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\n\tLet us prove the Bellman expectation equation for the square action-value function. The proof for the square state-value function is analogous. \n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\tW_\\pi(s,a) &= \\E[\\pi]{G_t^2 | S_t = s, A_t = a}\\\\\n\t\t\t\t\t   &= \\E[\\pi]{(R_{t+1} + \\gamma G_{t+1})^2 | S_t = s, A_t = a}\\\\\n\t\t\t\t\t   &= \\calM(s,a) + 2\\gamma \\E[\\pi]{R_{t+1}G_{t+1} | S_t = s, A_t = a} + \\gamma^2 T_a U_\\pi(s)\n\t\t\\end{split}\n\t\\end{equation*}\n\tBy the definition of covariance\n\t\\begin{equation*}\n\t\t\tC_\\pi(s,a) = \\E[\\pi]{R_{t+1}G_{t+1} | S_t = s, A_t = a} - \\calR(s,a) T_a V_\\pi(s)\n\t\\end{equation*}\n\tPlugging in the first equation leads to the result.\n\\end{proof}\nThese equations are analogous to the Bellman expectation equations for the state-value function, with a synthetic reward function $\\mathcal{K}_\\pi(s)$ and a discount factor $\\gamma^2$. It is possible to combine the Bellman expectation equation for $V_\\pi(s)$ and for $U_\\pi(s)$ to obtain a Bellman equation for the return variance $\\Lambda_\\pi(s)$.\n\\begin{proposition}[Bellman Expectation Equation]\n\t\\begin{equation}\n\t\t\t\\Lambda_\\pi(s) = \\mathcal{V}_\\pi(s) + 2 \\gamma C_\\pi(s) + \\gamma^2 \\Var[\\pi]{V_\\pi(S_{t+1}) | S_t = s} + \\gamma^2 T_\\pi \\Lambda_\\pi(s)\t\n\t\\end{equation}\n\twhere $\\mathcal{V}_\\pi(s)$ denotes the conditional variance of the reward\n\t\\begin{equation}\n\t\t\\mathcal{V}_\\pi(s) = \\Var[\\pi]{R_{t+1} | S_t = s} = \\calM_\\pi(s) - \\calR_\\pi(s)^2\n\t\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe result can be proved starting from Eq. (\\ref{eq:variance_decomposition}) and exploiting the Bellman equations for $U_\\pi$ and $V_\\pi$. 
Here we follow an alternative way based on the law of total variance.\n\\begin{equation*}\n\\begin{split}\n\t\t\\Lambda_\\pi(s) &= \\Var[\\pi]{G_t | S_t = s}\\\\\n\t\t\t\t\t   &= \\Var[\\pi]{R_{t+1} + \\gamma G_{t+1} | S_t = s}\\\\\n\t\t\t\t\t   &= \\mathcal{V}_\\pi(s) + 2 \\gamma \\Cov[\\pi]{R_{t+1}, G_{t+1} | S_t = s} + \\gamma^2 \\Var[\\pi]{G_{t+1} | S_t = s}\\\\\n\\end{split}\n\\end{equation*}\nApplying the law of total variance we obtain\n\\begin{equation*}\n\t\\begin{split}\n\t\\Var[\\pi]{G_{t+1} | S_t = s} &= \\E[\\pi]{ \\Var[\\pi]{G_{t+1} | S_{t+1}} | S_t = s} + \\Var[\\pi]{\\E[\\pi]{G_{t+1} | S_{t+1}} | S_t = s}\\\\\n\t&= T_\\pi \\Lambda_\\pi(s) + \\Var[\\pi]{V_\\pi(S_{t+1}) | S_t = s}\n\t\\end{split}\n\\end{equation*}\nPlugging it into the first equality yields the result\n\\begin{equation*}\n\t\\Lambda_\\pi(s) = \\mathcal{V}_\\pi(s) + 2 \\gamma C_\\pi(s) + \\gamma^2 \\Var[\\pi]{V_\\pi(S_{t+1}) | S_t = s} + \\gamma^2 T_\\pi \\Lambda_\\pi(s)\n\\end{equation*}\n\\end{proof}\nAlthough appealing from a theoretical point of view, these equations cannot be easily exploited to derive practical algorithms in a non-episodic environment. Indeed, the covariance term between the immediate reward and the future return appearing in the synthetic reward would be hard to estimate when the lifespan of the experiment can be infinite. This does not necessarily represent a problem in an episodic environment, where the experiments have a finite (possibly random) lifespan. Even more problematic is the variance of the state-value function, which typically is unknown. For this reason, we will not develop any reinforcement learning algorithm in this framework and we will instead focus on the average reward formulation.\\\\\nIn \\cite{tamar2012policy} and \\cite{prashanth2014actor}, the authors circumvent these difficulties by implicitly assuming that the reward $R_{t+1}$ is conditionally independent of the future rewards given the current state. 
Under this assumption, the covariance terms are null and the previous Bellman equations become \n\\begin{corollary}[Bellman Equations Under Independence Assumption]\n\t\\begin{equation}\n\t\tU_\\pi(s) = \\calM_\\pi(s) + 2\\gamma \\calR_\\pi(s) T_\\pi V_\\pi(s) + \\gamma^2 T_\\pi U_\\pi(s)\n\t\\end{equation}\n\t\\begin{equation}\n\t\tW_\\pi(s,a) = \\calM(s,a) + 2\\gamma \\calR(s,a) T_a V_\\pi(s) + \\gamma^2 T_a U_\\pi(s)\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\\Lambda_\\pi(s) = \\mathcal{V}_\\pi(s) + \\gamma^2 \\Var[\\pi]{V_\\pi(S_{t+1}) | S_t = s} + \\gamma^2 T_\\pi \\Lambda_\\pi(s)\t\n\t\\end{equation}\n\\end{corollary}\n\n\\subsection{Average Reward Formulation}\nIn \\cite{prashanth2014actor}, the authors consider the long-run variance as a measure of the risk associated with a policy $\\pi$\n\\begin{definition}[Long-Run Variance]\n\tThe long-run variance $\\Lambda_\\pi$ under policy $\\pi$ is defined as\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\Lambda_\\pi &= \\lim_{T \\to \\infty} \\frac{1}{T} \\E[\\pi]{\n\t\t\t\\sum^{T-1}_{t=0} (R_{t+1} - \\rho_\\pi)^2}\\\\\n\t\t\\end{split}\n\t\\end{equation}\n\\end{definition}\nThe long-run variance can be decomposed as follows\n\\begin{equation}\n\t\\Lambda_\\pi = \\eta_\\pi - \\rho_\\pi^2 \n\\end{equation}\nwhere $\\eta_\\pi$ is the average square reward per step  \n\\begin{definition}[Average Square Reward]\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\eta_\\pi &= \\lim_{T\\to\\infty}\\frac{1}{T}\\E[\\pi]{\\sum_{t=0}^{T-1} R_{t+1}^2}\\\\\n\t\t\t\t\t &= \\E[\\substack{S\\sim d_\\pi \\\\ A \\sim \\pi}]{\\calM(S,A)}\\\\\t\n\t\t\t\t\t &= \\int_\\S d_\\pi(s) \\int_\\A \\pi(s,a) \\calM(s,a) da ds\\\\\n\t\t\\end{split}\n\t\\end{equation}\n\twhere we denoted by $\\calM(s,a)$ the square reward function\n\t\\begin{equation}\n\t\t\\calM(s,a) = \\E{R_{t+1}^2 | S_t = s, A_t = a}\n\t\\end{equation}\n\\end{definition}\nAs before, we introduce the residual state-value and action-value functions \nassociated with the square reward under policy $\\pi$\n\\begin{definition}[Average Adjusted Square State-Value Function]\n\tThe average adjusted square state-value function $U_\\pi : \\S \\to \\R$ is the\n\texpected square residual return that can be obtained starting from a state \n\tand following policy $\\pi$\n\t\\begin{equation}\n\t\tU_\\pi(s) = \\E[\\pi]{\\sum^{\\infty}_{t=0} \\left(R_{t+1}^2 - \\eta_\\pi\\right)\n\t\t\\bigg| S_0 = s}\n\t\\end{equation}\n\\end{definition}\n\\begin{definition}[Average Adjusted Square Action-Value Function]\n\tThe average adjusted square action-value function $W_\\pi : \\S \\times \\A \\to \n\t\\R$ is the expected residual square return that can be obtained starting from a\n\tstate, taking an action and then following policy $\\pi$\n\t\\begin{equation}\n\t\tW_\\pi(s, a) = \\E[\\pi]{\\sum^{\\infty}_{t=0} \\left(R_{t+1}^2 -\n\t\t\t\\eta_\\pi\\right) \\bigg| S_0 = s, A_0 = a}\n\t\\end{equation}\n\\end{definition}\nThe following relation between the square state-value function and the square action-value function holds\n\\begin{equation}\\label{eq:UW_equality}\n\tU_\\pi(s) = \\int_\\A \\pi(s,a) W_\\pi(s,a) da\n\\end{equation}\nThe average adjusted square value functions satisfy the following Bellman equations\n\\begin{proposition}[Bellman Expectation Equations]\n\t\\begin{equation}\n\t\tU_\\pi(s) = \\calM_\\pi(s) - \\eta_\\pi + T_\\pi U_\\pi(s)\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\tW_\\pi(s,a) &= \\calM(s,a) - \\eta_\\pi + T_a U_\\pi(s)\\\\\n\t\t\t&= \\calM(s,a) - \\eta_\\pi + \\int_{\\S} \\calP(s,a,s') 
\\int_{\\A} \\pi(s',a') W_\\pi(s',a') da' ds'\n\t\t\\end{split}\n\t\\end{equation}\n\\end{proposition}\nNow, how can we formally define the agent's objective to take into account the trade-off between rewards and risks? Borrowing from the financial literature, a first alternative is to consider the following mean-variance optimization problem\n\\begin{equation}\\label{eq:risk_sensitive_problem}\n\t\\begin{cases}\n\t\t\\max_\\pi \\rho_\\pi\\\\\n\t\t\\text{subject to}\\ \\Lambda_\\pi \\leq \\alpha\\\\\n\t\\end{cases}\n\\end{equation}\nfor a given $\\alpha > 0$. Using the Lagrangian relaxation procedure, we can \nrecast (\\ref{eq:risk_sensitive_problem}) as the following unconstrained problem\n\\begin{equation}\n\t\\max_\\lambda \\min_\\pi L(\\pi, \\lambda) = - \\rho_\\pi + \\lambda \n\t(\\Lambda_\\pi - \\alpha)\n\\end{equation}\nAlternatively, the agent may want to optimize the Sharpe ratio, a commonly used risk-sensitive performance measure defined as\n\\begin{equation}\n\t\\text{Sh}_\\pi = \\frac{\\rho_\\pi}{\\sqrt{\\Lambda_\\pi}} \n\\end{equation}\n\n\n\\section{Dynamic Programming Algorithms}\n\\label{sec:policy_evaluation}\nIn discrete time, the Bellman optimality equations can be seen as an expression of the dynamic programming principle and completely characterize the optimal value functions. In the case of discrete state and action spaces, the value functions can be represented as vectors and the Bellman operators become matrices so that the optimality equations can be directly solved in polynomial time \\cite{littman1996algorithms}. An alternative approach is to approximate the optimal value functions by using the Bellman optimality equations as the basis of a fixed-point iterative scheme. This leads to the well-known \\emph{value iteration} and \\emph{policy iteration} algorithms \\cite{szepesvari2010algorithms}. The drawback of these methods is that typically they are only applicable to discrete MDPs whose dynamics are perfectly known. Still, these simple algorithms provide some useful insight that can be exploited to develop more advanced reinforcement learning algorithms.\n\n\\subsection{Value Iteration}\n\\emph{Value iteration} is an iterative method to compute the optimal state-value function $V_*(s)$. Starting from an arbitrary function $V_0(s)$, the algorithm iteratively updates the approximation by applying the Bellman optimality operator in a fixed-point scheme\n\\begin{equation*}\n\tV_{k+1}(s) = B_* V_k(s) = \\sup_a \\left\\{\\calR(s,a) + \\gamma T_a V_k(s)\\right\\}\n\\end{equation*}\nThis algorithm is guaranteed to converge to the optimal value function $V_*$ by the contraction mapping theorem. Let us notice that the transition operator $T_a$ requires knowledge of the MDP dynamics, which is not available in the typical reinforcement learning framework. Moreover, the update involves an optimization with respect to all possible actions, which can be carried out efficiently only in the case of finite action spaces. A drawback of this algorithm is that it does not provide an explicit representation of the optimal policy, which in many control problems is what we are looking for.  
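\nAs an illustration, the following Python sketch (ours; the arrays P and R are hypothetical representations of the dynamics and expected rewards) implements value iteration on a finite MDP and extracts a greedy policy once the iterates have approximately converged.\n\\begin{verbatim}\n# Sketch: value iteration on a finite MDP with known dynamics, applying\n# the Bellman optimality operator B_* until (approximate) convergence.\nimport numpy as np\n\ndef value_iteration(P, R, gamma, tol=1e-8):\n    # P has shape (S, A, S), R has shape (S, A).\n    V = np.zeros(P.shape[0])\n    while True:\n        Q = R + gamma * (P @ V)   # Q[s, a] = R(s, a) + gamma * T_a V(s)\n        V_new = Q.max(axis=1)     # B_* V: maximize over the actions\n        if np.max(np.abs(V_new - V)) < tol:\n            return V_new, Q.argmax(axis=1)  # values and a greedy policy\n        V = V_new\n\\end{verbatim}\n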
\n\n\\subsection{Policy Iteration}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{Images/2_0_policy_iteration.png}\n\t\\caption[Policy iteration algorithm.]{Policy iteration algorithm \\cite{sutton1998introduction}.}\n\t\\label{fig:policy_iteration}\n\\end{figure}\nPolicy iteration is an iterative method to approximate both the optimal state-value function $V_*$ and the optimal policy $\\pi_*$. This algorithm alternates an evaluation step, in which the current policy is evaluated using the state-value function, and an improvement step, in which the policy is improved by acting greedily with respect to the action-value function computed in the evaluation step. In the standard version of policy iteration, the state-value function for the current policy is evaluated starting from an arbitrary function $V_0(s)$ and iteratively applying the Bellman expectation operator in a fixed-point iteration scheme \n\\begin{equation*}\n\tV_{k+1}(s) = B_{\\pi_k} V_k(s)\n\\end{equation*}\nThis algorithm is guaranteed to converge to $V_{\\pi_*}$. The new policy is then computed as \n\\begin{equation*}\n\t\\pi_{k+1} = \\text{greedy}(V_{\\pi_k})\n\\end{equation*}\nand we go back to the evaluation step for this new policy. Let us notice that it is not necessary to evaluate the policy $\\pi_k$ perfectly before performing the improvement step, and the evaluation procedure can be stopped before convergence. This algorithm suffers from the same problem as above. However, it provides the basic structure for most of the value-based reinforcement learning methods. In particular, in \\emph{generalized policy iteration} the evaluation step is performed in an arbitrary way which does not necessarily employ the Bellman operator. This scheme is illustrated in Figure \\ref{fig:policy_iteration}.\n\n", "meta": {"hexsha": "acc09e6d38b3eb92c39f92a3087f41475a758ff4", "size": 33594, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/Chapters/2_Discrete_Time_Stochastic_Optimal_Control.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Report/Chapters/2_Discrete_Time_Stochastic_Optimal_Control.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/Chapters/2_Discrete_Time_Stochastic_Optimal_Control.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 56.3657718121, "max_line_length": 967, "alphanum_fraction": 0.731559207, "num_tokens": 9902, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105941403651, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5846445231147017}}
{"text": "%%%%%PLEASE CONSIDER CORRECTIONS AT PLACES INDICATED%%%%%%%%\n\\documentclass{discussion}\n%%%%%Packages Used, add more if necessary%%%%\n\\providecommand{\\tightlist}{%\n\\setlength{\\itemsep}{2pt}\\setlength{\\parskip}{0pt}} \n\n\\begin{document}\n\n%%%%%CHANGE HERE%%%%%%%%%\n%%%%%\\lecture{the ordinal number of the tutorial}{lecture title}{Instructor's name}%%%%%%%%\n\\lecture{5}{Naive Bayes and SVM}{Chansoo Lee}{}\n\n%%%%%CHANGE HERE%%%%%%%\n%%%%%\\section{title of the section} similarly with the rest \\section{} or \\subsection{} or \\subsubsection{} etc\n\n   \n\\renewcommand{\\R}{\\mathbb{R}}\n\\renewcommand{\\vec}[1]{\\boldsymbol{#1}}\n\\newcommand{\\Dtrain}{\\mathcal{D}_{\\mathrm{train}}}\n\\newcommand{\\tm}{\\mathbf{\\theta}_{\\text{model}}}\n\\newcommand{\\xtest}{\\mathbf{\\x}_{\\text{test}}}\n\\newcommand{\\test}{\\mathrm{test}}\n\\newcommand{\\pred}{\\mathrm{pred}}\n\n\\section{Naive Bayes}\n\\emph{Notations:} Let $C$ be number of classes, $D$ be the number of features, and $M$ be the number of values each feature can take. The Naive Bayes prior parameters are $\\vec{\\pi}$ and posterior parameters are $\\{\\vec{\\theta}_{cd}: c=1,\\ldots,C, d = 1,\\ldots,D\\}$. For simplicity we simply use $\\pi, \\theta$ when discussing them as general parameters and thus dimensions are not important in the context.\n\\subsection{Naive Bayes: MLE}\n\n\n\\begin{exercise}[Review Question]\n What are the semantics of these parameters? What are the dimensions of $\\vec{\\pi}$ and $\\vec{\\theta}_{cd}$? They are called \\emph{probability vectors}. What is the key property?\n\n \\textit{The individual elements of a probability vector sums up to 1.}\n\\end{exercise}\n\nFurthermore, suppose $M=2$. When dealing with sparse features such that $x_{d}$ is rarely 1 (\\texttt{Spambase} data in HW2, after our preprocessing step, has this property), we only see a small number of examples to accurately estimate $\\theta_{cd1}$.  This causes overfitting. (\\emph{Understand exactly why this is the case.})\n\n\\begin{exercise}\n\\label{ex:always1}\n  Discuss what happens if $x_d = 1$ for every training example $x$, regardless of its class label? What happens if we recieve a test data with $x_d = 0$?\n\\end{exercise}\n\n\\subsection{Naive Bayes: Mean estimate}\nThe overfitting problem can be fixed by putting Dirichlet priors on $\\vec{\\pi}$. Dirichlet distribution over $M$-dimensional probability vector, parameterized by a vector $\\vec{\\alpha} \\in \\mathbb{R}^{M}$ of positive values, has the following probability density function:\n\\[p(\\vec{u}; \\vec{\\alpha}) = Z(\\vec{\\alpha}) u_{1}^{\\alpha_1 - 1} \\cdots u_{M}^{\\alpha_{M}-1}\\]\nwhere $Z(\\vec{\\alpha})$ is the normalizing constant, which makes the above function integrates to 1.\n\n\\begin{exercise} Let's inspect the Dirichlet pdf.\n\\begin{itemize}\n  \\item   Fix every coordinates but $\\alpha_1$, and increase $\\alpha_1$. What happens to the distribution of $u_1$? What would happen to the mean of $u_1?$.\n\n  \\textit{We are more likely to get a high value, so the mean would increase.}\n  \\item Set all coordinates of $\\vec{\\alpha}$ to $\\lambda$. 
As you increase $\\lambda$, what would happen to the expected value of $\\vec{u}$?\n\n  \\textit{The expected value is always $\\vec{u} = (1/M, \\ldots, 1/M)^\\top$ because the sum $\\sum_{i=1}^{M}u_i$ is constrained to be 1 and the pdf is symmetric.}\n\\end{itemize}\n\\end{exercise}\n\n\nWith Dirichlet prior, we can do the MAP estimate as usual\n\\[(\\hat{\\pi}, \\hat{\\theta}) = \\arg\\max_{\\pi,\\theta} P(\\theta,\\pi | \\mathcal{D}_{\\text{train}}).\\]\nUnfortunately, the expression for the maximizer isn't very pretty (though functionally it is fine). So, we use the mean estimate instead:\n\\[\\hat{\\pi} = E[\\pi| \\mathcal{D}_{\\text{train}}], \\ \\\n \\hat{\\theta} = E[\\theta | \\mathcal{D}_{\\text{train}}]\\]\nwhere $\\vec{\\pi} \\sim \\text{Dirichlet}(\\vec{\\alpha})$ and $\\theta_{cd} \\sim \\text{Dirichlet}(\\vec{\\beta}_{cd}).$ If we compute the means, we get\n\\[\\hat{\\pi}_c = \\frac{N_c + \\alpha_c}{N + \\sum_{c} \\alpha_c}, \\ \\ \n\\hat{\\theta}_{cdm} = \\frac{N_{cdm} + \\beta_{cdm}}{N_c + \\sum_{m'} \\beta_{cdm'}} \\]\n\n\\begin{exercise} Redo Exercise \\ref{ex:always1}. What happens at the prediction time?\n\\end{exercise}\n\n\n\\section{SVM}\n\\newcommand{\\w}{\\mathbf{w}}\n\\renewcommand{\\x}{\\mathbf{x}}\n\\renewcommand{\\c}[1]{t_{#1} (\\w^\\top \\x_{#1} + b)}\n\\newcommand{\\ci}{\\c{i}}\nLet $\\{(\\x_i, t_i) \\in \\mathbb{R}^D \\times \\{-1,1\\}\\}_{i=1}^{N}$ be our training set.\n\n\nA hyperplane is a \\emph{set} of $D$-dimensional vectors that ``constitute'' a $(D-1)$-dimensional object. The key here is \\emph{set}, as a hyperplane is usually described in the form of:\n\\[\\{\\vec{v}: \\mathbf{w}^\\top \\vec{v} + b = 0\\}.\\]\nThe distance between the above hyperplane and a point $\\mathbf{x}$ is\n\\[\\frac{|\\mathbf{w}^\\top \\mathbf{x} + b|}{\\|\\mathbf{w}\\|}.\\]\n\n% Throughout the section, we use the shorthand notation $\\ci = t_i (\\w^\\top \\x_i + b)$. Don't forget it is a function of both the model $(\\w^\\top,b)$ and data $(\\x_i, t_i)$.\n\n\\subsection{Hard SVM}\nThe Hard SVM maximizes the \\emph{margin distance}, which is the minimum distance between a point and the hyperplane $\\{\\vec{v}: \\mathbf{w}^\\top \\vec{v} + b = 0\\}$:\n\\begin{equation}\n\\label{eq:hardsvm}\n  \\arg\\max_{\\mathbf{w}, b} \\min_{i = 1,\\ldots, N} \\frac{|\\mathbf{w}^\\top \\mathbf{x}_i + b|}{\\|\\mathbf{w}\\|} \\ \\ \\text{such that $\\ci > 0$ for all $i$.}\n\\end{equation}\n\\begin{exercise}(Review) Does the solution always exist? What is the condition under which the solution exists? \n\\end{exercise}\n\nWhen the solution exists, \\eqref{eq:hardsvm} is equivalent to\n\\begin{equation}\n\\label{eq:hardsvmci}\n  \\arg\\max_{\\mathbf{w}, b} \\min_{i = 1,\\ldots, N} \\frac{\\ci}{\\|\\mathbf{w}\\|} \\ \\ \\text{such that $\\ci > 0$ for all $i$.}\n\\end{equation}\n\n\\begin{exercise}\nShow the equivalence of \\eqref{eq:hardsvm} and \\eqref{eq:hardsvmci}.\n\\end{exercise}\n\n\\begin{exercise}(Review) Given a solution $(\\w^*,b^*)$ to the above, can we generate another solution? 
\n\\emph{Hint: As you scale $(\\w^*,b^*)$ with $\\gamma$ and set $\\w = \\gamma\\w^*$ and $b = \\gamma b^*$, what happens to $t_i (\\w^\\top \\x_i + b)$?}\n% \\emph{You can scale both with a constant $\\gamma$ and objective function value remains the same.}\n\\end{exercise}\n\nBy fixing $\\c{i^*} = 1$ for the minimizer $i^*$, we get an optimization problem whose unique solution is the maximizer of (to be precise, \\emph{one of} the maximizers of) \\eqref{eq:hardsvmci}:\n\\begin{equation}\n    \\arg\\max_{\\mathbf{w}, b} \\min_{i = 1,\\ldots, N} \\frac{\\ci}{\\|\\mathbf{w}\\|}  \\ \\text{such that $\\ci = 1$ for some $i$ and $\\ci \\geq 1$ for all other $i$.}\n\\end{equation}\n\nThe above is equivalent to \n\\begin{equation}\n    \\arg\\max_{\\mathbf{w}, b} \\frac{1}{\\|\\mathbf{w}\\|}  \\ \\text{such that $\\ci = 1$ for some $i$ and $\\ci \\geq 1$ for all other $i$.}\n\\end{equation}\nwhich is again equivalent to \n\\begin{equation}\n    \\arg\\min_{\\mathbf{w}, b} \\|\\mathbf{w}\\|  \\ \\text{such that $\\ci = 1$ for some $i$ and $\\ci \\geq 1$ for all other $i$.}\n\\end{equation}\n\n\\begin{exercise}\nShow that the above is equivalent to\n\\begin{equation}\n\\label{eq:hardsvm_primal}\n    \\arg\\min_{\\mathbf{w}, b} \\|\\mathbf{w}\\|  \\ \\text{such that $\\ci \\geq 1$ for all $i$.}\n\\end{equation}\n\n\\emph{Proof technique: Suppose our minimizer $(\\w^*, b^*)$ gives $\\min_i t_i({\\w^*}^\\top \\x_i + b^*) > 1$. Can you find a better $(\\w, b)$ and get a contradiction?}\n\\end{exercise}\n\nNow we can change $\\|\\w\\|$ into any increasing function of $\\|\\w\\|$ (like we changed likelihood to log likelihood). So why not something that looks exactly like our regularizer for linear and logistic regression?\n\\begin{equation}\n    \\arg\\min_{\\mathbf{w}, b} \\lambda\\|\\mathbf{w}\\|^2  \\ \\text{such that $\\ci \\geq 1$ for all $i$.}\n\\end{equation}\nwhere $\\lambda >0 $ is the regularization coefficient.\n\n\\subsection{Soft-SVM}\n\nWe relax the problem so that we allow $\\ci$ to be less than 1 (\\textbf{margin error}), but we add a penalty term.\n\n\\begin{equation}\n    \\arg\\min_{\\mathbf{w}, b, \\vec{\\xi}} \\lambda\\|\\mathbf{w}\\|^2 + \\sum_{i=1}^{N} \\xi_i \\ \\text{such that $\\ci \\geq 1 - \\xi_i$ and $\\xi_i \\geq 0$ for all $i$.}\n\\end{equation}\n\nIn the HW, you will show that the above is equivalent to \n\n\\newcommand{\\Lhinge}{L_{\\text{hinge}}}\n\\begin{equation}\n\\label{eq:svm_primal}\n    \\arg\\min_{\\mathbf{w}, b} \\lambda\\|\\mathbf{w}\\|^2 + \\sum_i \\max(0,1 - (\\ci)).\n\\end{equation}\n% where $\\Lhinge(\\w;\\x_i) =\\max(0,1 - \\ci)$\n\\begin{exercise}\n  Plot the \\textbf{hinge loss} function $f(z) = \\max(0, 1 - z)$. What does it look like? What does the SVM penalty function do intuitively? How is it different from RMSE?\n\\end{exercise}\n\n\n\\subsection{Duality and Kernel}\n\nThe dual formulation optimizes over a vector in $\\mathbb{R}^N$, instead of $\\mathbb{R}^D$.\n    \\begin{align*}\n    \\underset{\\vec{\\alpha}}{\\text{maximize}} \\quad &  -0.5 \\sum \\nolimits_{i,j = 1}^N \\alpha_i \\alpha_j t_i t_j \\x_i^\\top \\x_j + \\sum \\nolimits_{i = 1}^N \\alpha_i\\\\\n    \\text{subject to} \\quad & 0 \\leq \\alpha_i \\leq C/N \\quad \\forall i\\ \\\\\n    \\quad & \\sum \\nolimits_{i=1}^N \\alpha_i t_i = 0    \n    \\end{align*} \nGiven a solution to the dual problem, we can get the primal solution, and vice versa. 
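\nAs a quick numerical sanity check of this correspondence (our own sketch, assuming scikit-learn is available; this is not part of the HW), we can fit a linear SVM and verify that the primal weights are recovered from the dual coefficients and the support vectors, as the boxed identity stated next predicts.\n\\begin{verbatim}\n# Sketch: recover the primal w from a dual solution with scikit-learn.\n# For a linear kernel, coef_ equals dual_coef_ @ support_vectors_,\n# i.e. the combination sum_i alpha_i* t_i x_i over the support vectors.\nimport numpy as np\nfrom sklearn.svm import SVC\n\nX = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])\nt = np.array([-1, -1, 1, 1])\n\nclf = SVC(kernel='linear', C=1.0).fit(X, t)\nw_from_dual = clf.dual_coef_ @ clf.support_vectors_\nassert np.allclose(w_from_dual, clf.coef_)\n\\end{verbatim}\n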
The main (arguably the only) reason we study the SVM dual formulation is that it helps us observe that the optimal  $\\w^*$ is a linear combination of examples:\n\\[\\boxed{ \\w^* = \\sum \\nolimits_{i=1}^N \\alpha_i^* t_i \\x_i }\\]\nThe name ``support vector'' comes from this observation.\n\nBecause we know that the optimal $\\w^*$ is a linear combination of $\\x_i$, we can rewrite \\eqref{eq:svm_primal} as:\n\\begin{equation}\n    \\arg\\min_{\\vec{\\alpha},b} \\lambda\\|\\mathbf{w}\\|^2 + \\sum_{i=1}^{N} \\max\\left(0, 1 - t_i\\left( \\left(\\sum_{j=1}^{N} \\alpha_j \\x_j\\right)^\\top \\x_i + b\\right)\\right).\n\\end{equation}\n\nUsing kernelized features, we get\n\\begin{equation}\n    \\arg\\min_{\\vec{\\alpha}} \\lambda\\|\\mathbf{w}\\|^2 + \\sum_{i=1}^{N} \\max\\left(0, 1 - t_i \\left(\\sum_{j=1}^{N} \\alpha_j \\phi(\\x_j)\\right)^\\top \\phi(\\x_i)\\right).\n\\end{equation}\nThe bias term is removed because we can always add a bias coordinate in our feature mapping $\\phi(\\x_j)$.\n\n\\begin{exercise}\n  Take the gradient of the objective function. In particular, take the derivative with respect to $\\alpha_k$.\n\\end{exercise} \n\nObserve that the derivative depends only on the dot product of the features. So we can implement the gradient descent as long as the dot products are well-defined, even if we are in infinite-dimensional space.\n\n\\end{document}\n", "meta": {"hexsha": "96fc0030472e4fd10e959ef29199172a46f90237", "size": 10272, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "discussion05_NB_SVM/discussion05_NB_SVM.tex", "max_stars_repo_name": "xipengwang/umich-eecs445-f16", "max_stars_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 97, "max_stars_repo_stars_event_min_datetime": "2016-09-11T23:15:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T08:03:24.000Z", "max_issues_repo_path": "discussion05_NB_SVM/discussion05_NB_SVM.tex", "max_issues_repo_name": "eecs445-f16/umich-eecs445-f16", "max_issues_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "discussion05_NB_SVM/discussion05_NB_SVM.tex", "max_forks_repo_name": "eecs445-f16/umich-eecs445-f16", "max_forks_repo_head_hexsha": "298407af9fd417c1b6daa6127b17cb2c34c2c772", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 77, "max_forks_repo_forks_event_min_datetime": "2016-09-12T20:50:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T14:41:23.000Z", "avg_line_length": 53.2227979275, "max_line_length": 406, "alphanum_fraction": 0.6778621495, "num_tokens": 3454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799253, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.5846445147829289}}
{"text": "\\documentclass[11pt, letter]{article}\n\n\\usepackage[margin=1in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amssymb}\n\\usepackage{enumitem}\n\n\\newcommand{\\piece}[2]{\\mathbf{#1}\\text{#2}}\n\n\\theoremstyle{definition}\n\\newtheorem{alg}{Algorithm}\n\n\\title{Training a Linear Heuristic on Chess Positions for the Minimax Algorithm}\n\\author{Tae Hyung Kim}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nFirst, we make clear the notation that we shall use. We shall represent the type\nof chess pieces by the following abbreviations:\n\\[\\{ \\mathbf{K} \\leftrightarrow \\text{King}, \n\\mathbf{Q} \\leftrightarrow \\text{Queen},\n\\mathbf{R} \\leftrightarrow \\text{Rook},\n\\mathbf{B} \\leftrightarrow \\text{Bishop},\n\\mathbf{N} \\leftrightarrow \\text{Knight},\n\\mathbf{P} \\leftrightarrow \\text{Pawn}\n\\}. \\]\nIn addition, we shall use the standard method of identifying location of a piece. \nThat is, each column shall be lettered $\\text{a}, \\dots, \\text{f}$ while each row\nwill be numbered $1, \\dots, 8$. Each position will be written with the column\nletter and the row number consecutively; for example, we would say that the\nwhite king initially occupies the position $\\text{a}4$. Finally, we represent a\npiece, which includes its position, by the piece type followed by its position.\nFor example, the king is denoted by $\\piece{K}{a4}$. \n\nRecall the algorithm of minimax with alpha-beta pruning. Suppose that we \nrepresent a \\texttt{Board} as an object (assume Python for simplicity) with the\nmethod \\texttt{get\\_moves}. In addition, suppose that we have a heuristic\nfunction $h : \\mathtt{Board} \\to \\mathbb R$ which evaluates how good a board is.\n\\renewcommand*{\\thealg}{M}\n\\begin{alg}\n    Suppose we have a board \\texttt{board}.\n    \\begin{enumerate}[label=\\textbf{\\thealg\\arabic*}.]\n        \\item If \n    \\end{enumerate}\n\\end{alg}\n\nDefine variables $\\theta_{P_1, P_2}$ where $P_1, P_2$ are any two pieces. \nNow, the heuristic of interest in this paper is $h_\\theta : \\mathtt{Board} \\to\n\\mathbb{R}$ defined by\n\\[ h_\\theta(\\texttt{board}) = \\sum_{P_1, P_2 \\in \\texttt{board}} \\theta_{P_1, P_2} \\cdot \\chi_\\texttt{board}(P_1) \\chi_\\texttt{board}(P_2) \\]\nwhere $\\chi_\\texttt{board} : \\texttt{Piece} \\to \\{0, 1\\}$ is the indicator of\nwhether a certain piece is in the board or not. Note that the product \n$\\chi_\\texttt{board}(P_1) \\chi_\\texttt{board}(P_2)$ simply returns $1$ if the\npair of pieces $P_1, P_2$ is in the board, and $0$ otherwise. \n\nThe ultimate goal is to find the $\\theta$ which will provide the best heuristic\nfor the minimax algorithm. Given that the data we will be given is whether a\ncertain board is ``good'' or ``bad'', the problem of computing $\\theta$ is a\nlogistic regression problem, as we wish to minimize the cost $\\log(g(h_\\theta(\\texttt{board})$ where $g$ is the sigmoid function. 
\n\n\\end{document}\n", "meta": {"hexsha": "d111f2e890a84c6444b5239069febcd4ce19e15c", "size": 2824, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/main.tex", "max_stars_repo_name": "thkim1011/chess-ai", "max_stars_repo_head_hexsha": "f627367e3c4d36d7b72c7665557b330aa44c4b83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-01-05T19:47:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-25T15:59:02.000Z", "max_issues_repo_path": "paper/main.tex", "max_issues_repo_name": "thkim1011/chess-ai", "max_issues_repo_head_hexsha": "f627367e3c4d36d7b72c7665557b330aa44c4b83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/main.tex", "max_forks_repo_name": "thkim1011/chess-ai", "max_forks_repo_head_hexsha": "f627367e3c4d36d7b72c7665557b330aa44c4b83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-09-15T14:13:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T16:23:04.000Z", "avg_line_length": 42.7878787879, "max_line_length": 141, "alphanum_fraction": 0.7347733711, "num_tokens": 850, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619436290699, "lm_q2_score": 0.7090191276365463, "lm_q1q2_score": 0.5845592880414145}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{graphicx} \n\\title{\\fontfamily{cmss}\\selectfont Plagiarism Checker}\n\\author{Navneel Singhal}\n\\date{August 2020}\n\n\\newcommand{\\ttt}{\\texttt}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Description}\n\nThe assignment was to implement a plagiarism checker in C.\n\n\\section*{Usage}\n\nTo compile the code, run the command \\texttt{make}. \\\\\nTo run the code, run \\texttt{./plagChecker <LOCATION\\_OF\\_TEST\\_FILE> <LOCATION\\_OF\\_CORPUS\\_FOLDER>}.\n\\\\\n\\textbf{Important Note:} When passing the location of the corpus folder, ensure that it ends with a /\n\\\\\nThis code has been tested to run on Linux (Ubuntu 20.04), and couldn't be tested on Windows due to the lack of a Windows machine. However this will work on POSIX-compliant machines.\n\n\\section*{The Model}\n\n\\subsection*{Introduction to $N$-grams}\nA concept widely used in natural language processing, word $N$-grams are essentially ``phrases'' of length $N$ in a given text, for our purposes. \nTheir main importance is in the fact that they can represent Markov chains of length $N$, which allows for a simple representation of phrases which is also natural fit for languages like English, because the main structures that stand out in a text are linear owing to the low nesting-levels in most languages.\nTheir relevance to our use-case is that in a plagiarized document, the frequency of any particular phrase should be heuristically similar to that in the document where it was plagiarized from.\n\n\\subsection*{High-level Description of the Algorithm}\n\nFrom any given text, we find the sorted array containing all possible $3$-gram hashes in the text. To find the similarity between the texts, we extract frequency vectors of $3$-grams corresponding to both texts, sorted according to $3$-gram hashes in both the texts (and if a hash is not found in one file, we simply assign the value 0 to the frequency of that hash), and find the cosine similarity between any two such vectors. 
This is going to be the measure of similarity we will use in this implementation.\\\\\nCosine similarity for two vectors $(a_1, \\dots, a_n)$ and $(b_1, \\dots, b_n)$ is defined as $$\\mathcal{S}((a_1, \\dots, a_n), (b_1, \\dots, b_n)) = \\frac{a_1b_1 + \\cdots + a_nb_n}{\\sqrt{(a_1^2 + \\cdots + a_n^2)(b_1^2 + \\cdots + b_n^2)}}$$\nThe reasoning behind this is as follows.\n\\begin{enumerate}\n        \\item For relatively more frequent $3$-grams, the weight of that feature should be intuitively high, and sparse features are quite unimportant.\n        \\item This measure is symmetric between the corpus and the queried file, so it doesn't introduce a bias between who plagiarised whom.\n        \\item It is resistant to scaling; i.e., if there is a small document which has been plagiarised from a larger document, the results are similar to those obtained if the smaller document were slightly larger.\n\\end{enumerate}\nTo make feature extraction robust and resistant to the usage of different cases, we tokenize the text with respect to whitespace, commas, full-stops and newline characters, and then for each of the extracted strings, we convert them to lower case.\nTo make feature extraction simpler, we use a polynomial hash (with base $127$) for each string, and for each $3$-gram (which is now a triple of integers modulo $10^9 + 7$), we use a different polynomial hash (with base $997$ so as to not bring in any unwanted correlation between concatenated words or something else of this sort).\\\\\nWe also remove all words whose length is less than 3, so as to remove commonly used words from consideration.\n\n\\section*{Working of Code}\n\n\\begin{enumerate}\n    \\item \\texttt{long long int stringHash(char* s)}: This function computes the hash of a string as mentioned above. Time complexity is $\\mathcal{O}$(length of string), and space complexity is $\\mathcal{O}(1)$.\n    \\item \\texttt{long long int ngramHash(long long int a, long long int b, long long int c)}: This function computes the hash of a $3$-gram, as mentioned above. Time complexity is $\\mathcal{O}(1)$ and space complexity is $\\mathcal{O}(1)$.\n    \\item \\texttt{void sort(long long int* a, int l, int r)}: This function sorts the subarray \\texttt{a[l..r]} using merge-sort. Time complexity is $\\mathcal{O}((r - l + 1)\\log(r - l + 1))$ and space complexity is $\\mathcal{O}(1)$ (since we use a global buffer).\n    \\item \\texttt{long long int* workFile(char* fileName)}: This function returns a pointer to the sorted array containing the hashes of all the $3$-grams in the file whose name is \\texttt{fileName}. Time complexity is $\\mathcal{O}$(total length + $N \\log N$) and space complexity is $\\mathcal{O}(N)$ where $N$ is the number of words in the file (we reuse the buffer used in the previous function).\n    \\item \\texttt{float findSimilarity(long long int* a, int n, long long int* b, int m)}: This function computes the frequency arrays for the arrays pointed to by \\texttt{a} and \\texttt{b}, and then finds the cosine similarity between them and returns the similarity as a ratio between $0$ and $1$. Time complexity is $\\mathcal{O}(n + m)$ and space complexity is also $\\mathcal{O}(n + m)$.\n    \\item \\texttt{int main(int argc, char* argv[])}: This is the main function, and here we traverse over all files in the given corpus directory, and find the similarity between the queried file and the corpus files, one corpus file at a time. 
\n        For the time complexity, suppose the number of corpus files is $t$, the total number of characters in all corpus text files and the queried test file is $s$, the total number of words in file $i$ is $n_i$, with the queried file being file $0$, and the rest being corpus files. Then the time complexity is $\\mathcal{O}(s + \\sum_{i = 0}^t n_i \\log n_i + tn_0)$. The space complexity is $\\mathcal{O}(n_0 + \\max_{i = 1}^t n_i)$, apart from a global buffer.\n\\end{enumerate}\n\n\n\\end{document}\n\n", "meta": {"hexsha": "52a7d7c0224413a1b8fd06eb3cc484541bc32b06", "size": 5882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report.tex", "max_stars_repo_name": "NavneelSinghal/PlagiarismChecker", "max_stars_repo_head_hexsha": "f6a378a76b8dfe1c013a22a7d89725ada24be0bf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report.tex", "max_issues_repo_name": "NavneelSinghal/PlagiarismChecker", "max_issues_repo_head_hexsha": "f6a378a76b8dfe1c013a22a7d89725ada24be0bf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report.tex", "max_forks_repo_name": "NavneelSinghal/PlagiarismChecker", "max_forks_repo_head_hexsha": "f6a378a76b8dfe1c013a22a7d89725ada24be0bf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.90625, "max_line_length": 512, "alphanum_fraction": 0.7478748725, "num_tokens": 1530, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746911, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5845392735052091}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\\usepackage{hyperref}\n\\usepackage{enumerate}\n\\usepackage[margin=2cm]{geometry}\n\n\\title{Metropolis--Hastings proposal distribution\\\\for $\\theta$}\n\\author{Darjus Hosszejni}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Proposal}\n\nWe want a nice proposal for $\\pi(\\theta\\vert y^\\ast,d,s)$.\nWe'll use the second order Taylor expansion of $g(\\theta)=\\log(\\pi(\\theta\\vert y^\\ast,d,s))$ as the base for the parameters of a normal approximation.\nThe approx. should be somehow around the mode with the correct second order structure.\n\nWe're only interested in approximations $f$ for which $f(\\theta^\\text{new})-f(\\theta^\\text{old})\\sim g(\\theta^\\text{new})-g(\\theta^\\text{old})$,\nso constants that cancel out don't matter.\nIt turns out that a good candidate is\n\\begin{equation*}\nf(\\theta)=-\\frac12(\\theta-\\theta_\\ast)^\\prime\\Sigma_\\ast^{-1}(\\theta-\\theta_\\ast),\n\\end{equation*}\nwhere\n\\begin{align*}\n\\theta_\\ast &= \\hat\\theta+\\Sigma_\\ast\\gamma_\\ast, \\\\\n\\hat\\theta &= \\text{arg}\\max_\\theta g(\\theta), \\qquad \\text{the mode of the posterior}, \\\\\n\\gamma_\\ast &= \\left.\\frac{\\partial g(\\theta)}{\\partial\\theta}\\right\\rvert_{\\theta=\\hat\\theta}, \\qquad \\text{the gradient},\\\\\n-\\Sigma_\\ast^{-1} &= \\left.\\frac{\\partial^2 g(\\theta)}{\\partial\\theta^\\prime\\partial\\theta}\\right\\rvert_{\\theta=\\hat\\theta}, \\qquad \\text{the Hessian},\n\\end{align*}\ni.e. $\\mathcal{N}(\\theta_\\ast,\\Sigma_\\ast)$ fits, its density is a constant times the second order Taylor expansion of the posterior density.\n\nWhen the Hessian is not negative definite we use $\\Sigma_\\ast=cI$ with a large $c$.\n\n\\subsection*{Taylor expansion}\n\nThis is the explanation and proof for the above.\n\\begin{align*}\ng(\\theta) &\\sim g\\left(\\hat\\theta\\right)+g^\\prime\\left(\\hat\\theta\\right)^\\text{T}\\left(\\theta-\\hat\\theta\\right)+\\frac12\\left(\\theta-\\hat\\theta\\right)^\\text{T}g^{\\prime\\prime}\\left(\\hat\\theta\\right)\\left(\\theta-\\hat\\theta\\right), \\\\\n&= -\\frac12\\left[-\\gamma_\\ast^\\text{T}\\left(\\theta-\\hat\\theta\\right)+\\left(\\theta-\\hat\\theta\\right)^\\text{T}\\Sigma_\\ast^{-1}\\left(\\theta-\\hat\\theta\\right)\\right]+g\\left(\\hat\\theta\\right), \\\\\n&= -\\frac12(\\theta-\\theta_\\ast)^\\text{T}\\Sigma_\\ast^{-1}(\\theta-\\theta_\\ast)+\\frac12\\gamma_\\ast^\\text{T}\\Sigma_\\ast\\gamma_\\ast+g\\left(\\hat\\theta\\right), \\\\\n&= f(\\theta)+\\text{constant}(\\hat\\theta).\n\\end{align*}\n\n\\end{document}\n\n", "meta": {"hexsha": "2a5e15e5eebeaccb16bae7f36d65fd1e880fcbc9", "size": 2399, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/other_calculations/MH_proposal_distribution_for_theta.tex", "max_stars_repo_name": "hdarjus/master-thesis", "max_stars_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-23T12:51:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-23T12:51:22.000Z", "max_issues_repo_path": "thesis/other_calculations/MH_proposal_distribution_for_theta.tex", "max_issues_repo_name": "hdarjus/master-thesis-WU", "max_issues_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/other_calculations/MH_proposal_distribution_for_theta.tex", "max_forks_repo_name": "hdarjus/master-thesis-WU", "max_forks_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-12T00:39:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-12T00:39:19.000Z", "avg_line_length": 45.2641509434, "max_line_length": 231, "alphanum_fraction": 0.710712797, "num_tokens": 777, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746911, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5845392691851126}}
{"text": "\nIn this introductory section we review theoretical concepts about the\n\\textit{Riordan group} that will be useful in subsequent chapters;\nadditionally, we provide a very short introduction to symbolic computation\nusing the \\verb|sympy| module implemented using the Python programming\nlanguage, giving a taste of our programming style.\n\n\n\\section{Riordan Arrays, formally}\n\nA \\textit{Riordan array} is an infinite lower triangular array $(d_{n,k} )_{n,k\n\\in \\mathbb{N}},$ defined by a pair of formal power series $(d(t),h(t))$ such\nthat\\newline\\noindent$d(0)\\neq 0, h(0)=0$ and $h^\\prime(0)\\neq 0$; furthermore,\neach element \\begin{displaymath}\n    d_{n,k}=[t^n]d(t)h(t)^k, \\qquad n,k \\geq 0,\n\\end{displaymath}\nis the coefficient of monomial $t^{n}$ in the series\nexpansion of $d(t)h(t)^{k}$ %where $d_{n,k}=0$ for $k>n$,\nand the bivariate generating function\n\\begin{equation}\n    \\label{bgf}\n    R(t,w) = \\sum_{n,k \\in\\mathbb{N}}{d_{n,k} t^n w^k} = {d(t) \\over 1-wh(t)}\n\\end{equation}\nenumerates the sequence $(d_{n,k})_{n,k \\in\\mathbb{N}}$.\n\nThese arrays were introduced in \\citep{SHAPIRO1991229}, with the aim of\ndefining a class of infinite, lower triangular arrays and since then they have\nattracted, and continue to attract, a lot of attention in the literature and\nrecent applications can be found in \\citep{LUZON201475}.\n\n\\begin{example}\nThe most simple Riordan matrix could be the \\textit{Pascal triangle}\n$$\\mathcal{R}\\left(\\frac{1}{1-t}, \\frac{t}{1-t}\\right)\\quad\\text{where}\\quad\nd_{n,k}={n\\choose k};$$ moreover, another remarkable matrix is the\n\\textit{Catalan triangle}\n$$\\mathcal{R}\\left(\\frac{1-\\sqrt{1-4\\,t}}{2\\,t}, \\frac{1-\\sqrt{1-4\\,t}}{2\\,t}\\right)$$\nwhere $\\displaystyle d_{n,k}={{2n-k}\\choose{n-k}} - {{2n-k}\\choose{n-k-1}}$.\n\\end{example}\n\nAn important property of Riordan arrays concerns the computation of\ncombinatorial sums; precisely, it is encoded by the identity\n\\begin{equation}\n    \\label{somme}\n    \\sum_{k=0}^n d_{n,k}f_k=[t^n]d(t)f(h(t))\n\\end{equation}\nand it is extensively commented in\n\\citep{LUZON2012631,Merlini:2009:CSI:2653507.2654195,SPRUGNOLI1994267}.\nIt states that every combinatorial sum involving a Riordan array can be computed by\nextracting the coefficient of $t^n$ from the series expansion of $d(t)f(h(t))$,\nwhere $f(t)=\\mathcal{G}(f_k)=\\sum_{k\\geq 0}f_kt^k$ denotes the generating function\nof the sequence $(f_k)_{k \\in\\mathbb{N}}$ and the symbol $\\mathcal{G}$\ndenotes the generating function operator. 
Due to its importance, relation\n(\\ref{somme}) is commonly known as the \\textit{fundamental rule} of Riordan arrays.\nFor brevity and when no confusion arises, the notation $(f_k)_{k}$ will be used\nas an abbreviation of $(f_k)_{k\\in\\mathbb{N}}$.\n\nAs is well known (see, e.g., \\citep{LUZON201475,MRSV97,SHAPIRO1991229}),\nRiordan arrays constitute an \\textit{algebraic group} with respect to the usual\nrow-by-column product between matrices; formally, the group operation $\\cdot$ applied\nto two Riordan arrays $D_1(d_1(t),\\ h_1(t))$ and $D_2(d_2(t),\\ h_2(t))$ is\ncarried out as\n\\begin{equation}\n  D_1 \\cdot D_2 =(d_1(t)d_2(h_1(t)),\\ h_2(h_1(t))).\n  \\label{eq:riordan:group:op}\n\\end{equation}\nMoreover, the Riordan array $I = (1,\\ t)$ acts as the identity element and the\ninverse of $D =(d(t), h(t))$ is the Riordan array\n\\begin{equation}\nD^{-1} = \\left( \\frac{1}{d(\\overline{h}(t))}, \\overline{h}(t) \\right)\n  \\label{eq:riordan:group:inverse}\n\\end{equation}\nwhere $\\overline{h}(t)$ denotes the compositional inverse of $h(t)$.\n\nAn equivalent characterization of each matrix $\\mathcal{R}(d(t), h(t))$ in\nthe Riordan group is given by two sequences of coefficients\n$\\left(a_{n}\\right)_{n\\in\\mathbb{N}}$  and\n$\\left(z_{n}\\right)_{n\\in\\mathbb{N}}$ called $A$-sequence and $Z$-sequence,\nrespectively. The former one can be used to define every\ncoefficient $d_{n,k}$ with $k>0$,\n\\begin{displaymath}\n    d_{n+1, k+1} = a_{0}d_{n,k} + a_{1}d_{n,k+1} + a_{2}d_{n,k+2} + \\ldots + a_{j}d_{n,k+j} + \\ldots %+ a_{j+1}d_{n,k+j+1} + \\cdots\n\\end{displaymath}\nwhere the sum is finite because $d_{n,k+j}=0$ whenever $j>n-k$; on\nthe other hand, the latter one can be used to define every coefficient\n$d_{n,0}$ lying on the first column,\n\\begin{displaymath}\n    d_{n+1, 0} = z_{0}d_{n,0} + z_{1}d_{n,1} + z_{2}d_{n,2} + \\ldots + z_{n}d_{n,n}\n\\end{displaymath}\nwhere the sum is manifestly finite.\n\nMoreover, let $A(t)$ and $Z(t)$ be the generating functions of the $A$-sequence\nand $Z$-sequence respectively, then relations\n\\begin{displaymath}\n    h(t) = tA(h(t)) \\quad\\text{and}\\quad d(t)=\\frac{d_{0,0}}{1-tZ(h(t))}\n\\end{displaymath}\nconnect them with functions $d(t)$ and $h(t)$, where $d_{0,0}$ is the very\nfirst element of $\\mathcal{R}$; for the sake of completeness,\n\\citep{MRSV97} collects more alternative characterizations.\n\n\\section{Symbolic computation}\n\nThe main part of the symbolic computations supporting the topics discussed in\nthis dissertation has been coded using the Python language, relying on the\nmodule \\verb|sympy| for the mathematical machinery. Quoting from\n\\url{http://www.sympy.org/}:\n\\begin{center}\n\\textit{ ``SymPy is a Python library for symbolic mathematics. It aims to\nbecome a full-featured computer algebra system (CAS) while keeping the code as\nsimple as possible in order to be comprehensible and easily extensible.''}\n\\end{center}\nThe paper \\citep{10.7717/peerj-cs.103} describes it, and many other resources can\nbe found online; for example, comprehensive documentation is available\n\\citep{sympy:doc} and a well written, understandable tutorial\n\\citep{sympy:tutorial} is provided by the original development team.\n\nHere we avoid duplicating the tutorial with similar examples; instead, we state\nthe methodology used while coding our definitions. Python is a very expressive\nlanguage allowing programmers to use both the object-oriented and functional\nparadigms. 
It can tackle different application domains by means of\n\\textit{modules}, and \\verb|sympy| is an example that targets the manipulation\nof symbolic terms.  Unlike proprietary software such as Maple and\nMathematica, which ship their own languages, \\verb|sympy| is implemented\nentirely in Python, allowing transparent and easy integration into other Python\nprograms, as we will see in later chapters.\n\nThe main point to grasp in our opinion is the difference between the\n\\textit{meta language}, which is Python, and the \\textit{object language},\nwhich consists of the mathematical expressions denoted by \\verb|sympy| objects.\n\n\\begin{example}\n\\verb|Symbol| is a fundamental class of objects that introduces arbitrary\nmathematical symbols.\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import Symbol\n>>> a_sym = Symbol('a')\n>>> a_sym\na\n\\end{minted}\nThe previous snippet allows us to clarify the duality between meta and object\nlanguages; precisely, the mathematical expression $a$ is denoted by the Python\nobject \\verb|a_sym|.\n\\end{example}\n\nThe above example is the first one the reader encounters, and it shows a common\npattern used throughout this document to illustrate \\textit{computations}; in\nparticular, when a line starts with (i)~\\verb|>>>| then it is an \\textit{input}\nline holding code to be executed, (ii)~\\verb|...| then it is a\n\\textit{continuation} line holding an unfinished code expression, otherwise\n(iii) it is an \\textit{output} line reporting the result of the evaluation.\n\nA second fundamental methodology that we embrace in our symbolic manipulations is\n\\textit{equational reasoning}: we use equations, denoted by \\verb|Eq| objects,\nto express the identities we reason about, both to define things and\nto be solved with respect to a desired symbol.\n\\begin{example}\nIntroduction of \\verb|Eq|, \\verb|solve| and \\verb|symbols| functions:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import Eq, solve, symbols\n>>> a, t = symbols('a t')\n>>> a_def = Eq(a, 3)\n>>> at_eq = Eq(a+5*t, 1/(1-t))\n>>> a_def, at_eq\n\\end{minted}\n\\begin{displaymath}\n\\left(\na=3,\\quad a + 5 t = \\frac{1}{- t + 1}\n\\right)\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> sols = [Eq(t, s) for s in solve(at_eq, t)]\n>>> sols\n\\end{minted}\n\\begin{displaymath}\n\\begin{array}{c}\n\\left [ t = - \\frac{a}{10} - \\frac{1}{10} \\sqrt{a^{2} + 10 a + 5} + \\frac{1}{2}, \\right . \\\\\n\\left. t = - \\frac{a}{10} + \\frac{1}{10} \\sqrt{a^{2} + 10 a + 5} + \\frac{1}{2}\\right ]\n\\end{array}\n\\end{displaymath}\n\\end{example}\n\nDue to the importance of equations in our code, we introduce two\nhelper functions. 
First, \\verb|define| builds a definition:\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=5,lastline=10]\n    {python}{deps/simulation-methods/src/commons.py}\n\n\\begin{example}\nIntroduction of \\verb|Function| objects:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import Function, sqrt\n>>> f = Function('f')\n>>> f(3)\nf(3)\n>>> t = symbols('t')\n>>> define(let=f(t), be=(1-sqrt(1-4*t))/(2*t), ctor=FEq)\n\\end{minted}\n\\begin{displaymath}\nf{\\left (t \\right )} = \\frac{1}{2 t} \\left(- \\sqrt{- 4 t + 1} + 1\\right)\n\\end{displaymath}\n\\end{example}\n\nThe keyword argument \\verb|ctor=FEq| asks \\verb|define| to promote\nthe equation we are defining as a \\verb|callable| object by means of\nthe \\verb|FEq| class,\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=46, lastline=78]\n    {python}{deps/simulation-methods/src/commons.py}\n\\noindent which provides methods that allow it to be used as a substitution\ntoo; in turn, it depends on\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=12, lastline=19]\n    {python}{deps/simulation-methods/src/commons.py}\n\n\\begin{example}\nIntroduction of \\verb|IndexedBase| objects:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from commons import lift_to_Lambda\n>>> from sympy import IndexedBase\n>>> n = symbols('n')\n>>> a = IndexedBase('a')\n>>> aeq = Eq(a[n], n+a[n-1])\n>>> aeq(n+1)\n\\end{minted}\n\\begin{displaymath}\n    a_{n + 1} = n + a_{n} + 1\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> b = Function('b')\n>>> beq = Eq(b(n), n+b(n-1))\n>>> beq(n+1)\n\\end{minted}\n\\begin{displaymath}\n    b{\\left (n + 1 \\right )} = n + b{\\left (n \\right )} + 1\n\\end{displaymath}\n\\end{example}\n\n\\section{Riordan Arrays, computationally}\n\nIn this section we describe a little framework that implements parts\nof the concepts seen in the previous section; in particular, we provide\nsome strategies to build Riordan arrays, to find the corresponding production\nmatrices, and to compute group inverse elements.\n\nFirst of all we introduce (i)~\\textit{function symbols} \\verb|d_fn| and\n\\verb|h_fn| to denote arbitrary symbolic functions $d$ and $h$,\n(ii)~\\textit{indexed symbols} \\verb|d| and \\verb|h| to denote arbitrary\nsymbolic and indexed coefficients\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> d_fn, h_fn = Function('d'), Function('h')\n>>> d, h = IndexedBase('d'), IndexedBase('h')\n\\end{minted}\nrespectively. To build Riordan matrices we use \\verb|Matrix| objects; in\nparticular, the expression \\verb|Matrix(r, c, ctor)| denotes a matrix with\n\\verb|r| rows and \\verb|c| columns where each coefficient $d_{n,k}$ in the\nmatrix is defined according to \\verb|ctor| which is a \\textit{callable}\n\\marginnote{From the official doc at\n\\url{https://docs.python.org/3/library/functions.html\\#callable}:\n\\texttt{callable(object)} returns \\texttt{True} if the \\texttt{object} argument\nappears callable, \\texttt{False} if not. If this returns true, it is still\npossible that a call fails, but if it is false, calling object will never\nsucceed. Note that classes are callable (calling a class returns a new\ninstance); instances are callable if their class has a \\texttt{\\_\\_call\\_\\_}\nmethod.} object consuming two arguments $n$ and $k$, its row and column\ncoordinates.  
We call it \\verb|ctor| as an abbreviation of \\textit{constructor},\nbecause it allows us to code the definition of each coefficient with a Python\ncallable object.\n\nHere we show how to build a purely symbolic matrix:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import Matrix\n>>> rows, cols = 5, 5\n>>> ctor = lambda i,j: d[i,j]\n>>> Matrix(rows, cols, ctor)\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}d_{0,0} & d_{0,1} & d_{0,2} & d_{0,3} & d_{0,4}\\\\d_{1,0} & d_{1,1} & d_{1,2} & d_{1,3} & d_{1,4}\\\\d_{2,0} & d_{2,1} & d_{2,2} & d_{2,3} & d_{2,4}\\\\d_{3,0} & d_{3,1} & d_{3,2} & d_{3,3} & d_{3,4}\\\\d_{4,0} & d_{4,1} & d_{4,2} & d_{4,3} & d_{4,4}\\end{matrix}\\right]\n\\end{displaymath}\n\nIn the following sections we show a collection of such \\verb|ctor|s; each of them\nimplements one theoretical characterization of Riordan\narrays, and corresponding examples are given.\n\n\\subsection{Convolution ctor}\n\nThe following definition implements a ctor that allows us to build Riordan\narrays by convolution of their $d$ and $h$ functions; here it is,\n\n\\notbreakable{\n    \\inputminted[baselinestretch=0.8,stripnl=false,firstline=26, lastline=45]{python}{deps/simulation-methods/src/sequences.py}\n}\n\n\\begin{example}\nSymbolic Riordan array built by two polynomials with symbolic coefficients:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> m = 5\n>>> d_series = Eq(d_fn(t), 1+sum(d[i]*t**i for i in range(1,m)))\n>>> h_series = Eq(h_fn(t), t*(1+sum(h[i]*t**i for i in range(1,m-1)))).expand()\n>>> d_series, h_series\n\\end{minted}\n\\begin{displaymath}\n\\left ( d{\\left (t \\right )} = t^{4} d_{4} + t^{3} d_{3} + t^{2} d_{2} + t d_{1} + 1, \\quad h{\\left (t \\right )} = t^{4} h_{3} + t^{3} h_{2} + t^{2} h_{1} + t\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(m, m, riordan_matrix_by_convolution(m, d_series, h_series))\n>>> R\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}1 &   &   &   &  \\\\d_{1} & 1 &   &   &  \\\\d_{2} & d_{1} + h_{1} & 1 &   &  \\\\d_{3} & d_{1} h_{1} + d_{2} + h_{2} & d_{1} + 2 h_{1} & 1 &  \\\\d_{4} & d_{1} h_{2} + d_{2} h_{1} + d_{3} + h_{3} & 2 d_{1} h_{1} + d_{2} + h_{1}^{2} + 2 h_{2} & d_{1} + 3 h_{1} & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\end{example}\n\n\\begin{example}\nThe Pascal triangle built using closed generating functions:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> d_series = Eq(d_fn(t), 1/(1-t))\n>>> h_series = Eq(h_fn(t), t*d_series.rhs)\n>>> d_series, h_series\n\\end{minted}\n\\begin{displaymath}\n\\left ( d{\\left (t \\right )} = \\frac{1}{1-t}, \\quad h{\\left (t \\right )} = \\frac{t}{1-t}\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(10, 10, riordan_matrix_by_convolution(10, d_series, h_series))\n>>> R\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\1 & 1 &   &   &   &   &   &   &   &  \\\\1 & 2 & 1 &   &   &   &   &   &   &  \\\\1 & 3 & 3 & 1 &   &   &   &   &   &  \\\\1 & 4 & 6 & 4 & 1 &   &   &   &   &  \\\\1 & 5 & 10 & 10 & 5 & 1 &   &   &   &  \\\\1 & 6 & 15 & 20 & 15 & 6 & 1 &   &   &  \\\\1 & 7 & 21 & 35 & 35 & 21 & 7 & 1 &   &  \\\\1 & 8 & 28 & 56 & 70 & 56 & 28 & 8 & 1 &  \\\\1 & 9 & 36 & 84 & 126 & 126 & 84 & 36 & 9 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\end{example}\n\n\\subsection{Recurrence ctor}\n\nThe following definition implements a ctor that allows us to build Riordan\narrays by a recurrence relation over 
coefficients $d_{n+1, k+1}$; here it is,\n\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=81, lastline=107, mathescape=true]\n    {python}{deps/simulation-methods/src/sequences.py}\n\n\\begin{example}\nSymbolic Riordan Array built according to the recurrence:\n\\begin{displaymath}\n\\begin{split}\nd_{n+1, 0} &= \\bar{b}\\,d_{n, 0} + c\\,d_{n,1}, \\quad n \\in\\mathbb{N} \\\\\nd_{n+1, k+1} &= a\\,d_{n, k} + b\\,d_{n, k+1} + c\\,d_{n,k+2}, \\quad n,k \\in\\mathbb{N}\n\\end{split}\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> dim = 5\n>>> a, b, b_bar, c = symbols(r'a b \\bar{b} c')\n>>> M = Matrix(dim, dim,\n...            riordan_matrix_by_recurrence(\n...               dim, lambda n, k: {(n-1, k-1):a,\n...                                  (n-1, k): b if k else b_bar,\n...                                  (n-1, k+1):c}))\n>>> M\n\\end{minted}\n\\begin{displaymath}\n\\footnotesize\n\\left[\\begin{matrix}1 &   &   &   &  \\\\\\bar{b} & a &   &   &  \\\\\\bar{b}^{2} + a c & \\bar{b} a + a b & a^{2} &   &  \\\\\\bar{b}^{3} + 2 \\bar{b} a c + a b c & \\bar{b}^{2} a + \\bar{b} a b + 2 a^{2} c + a b^{2} & \\bar{b} a^{2} + 2 a^{2} b & a^{3} &  \\\\\\bar{b}^{4} + 3 \\bar{b}^{2} a c + 2 \\bar{b} a b c + 2 a^{2} c^{2} + a b^{2} c & \\bar{b}^{3} a + \\bar{b}^{2} a b + 3 \\bar{b} a^{2} c + \\bar{b} a b^{2} + 5 a^{2} b c + a b^{3} & \\bar{b}^{2} a^{2} + 2 \\bar{b} a^{2} b + 3 a^{3} c + 3 a^{2} b^{2} & \\bar{b} a^{3} + 3 a^{3} b & a^{4}\\end{matrix}\\right]\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> production_matrix(M)\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}\\bar{b} & a &   &  \\\\c & b & a &  \\\\  & c & b & a\\\\  &   & c & b\\end{matrix}\\right]\n\\end{displaymath}\nForcing $a=1$ and $\\bar{b} = b$ yields the simpler matrix \\verb|Msubs|\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> Msubs = M.subs({a:1, b_bar:b})\n>>> Msubs, production_matrix(Msubs)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}1 &   &   &   &  \\\\b & 1 &   &   &  \\\\b^{2} + c & 2 b & 1 &   &  \\\\b^{3} + 3 b c & 3 b^{2} + 2 c & 3 b & 1 &  \\\\b^{4} + 6 b^{2} c + 2 c^{2} & 4 b^{3} + 8 b c & 6 b^{2} + 3 c & 4 b & 1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}b & 1 &   &  \\\\c & b & 1 &  \\\\  & c & b & 1\\\\  &   & c & b\\end{matrix}\\right]\\right )\n\\end{displaymath}\nand the corresponding production matrix confirms the substitution.\n\\end{example}\n\nThe previous examples use the function \\verb|production_matrix| to compute the\n\\textit{production matrix} \\citep{DEUTSCH2005101,Deutsch2009} of a Riordan\narray; here is its definition, with two helper \\verb|ctor|s:\n\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=159, lastline=180]{python}{deps/simulation-methods/src/sequences.py}\n\n\\noindent implemented according to \\citep[page~$215$]{barry2017riordan}.\n\n\\subsection{$A$ and $Z$ sequences ctor}\n\nThe following definition implements a ctor that allows us to build Riordan\narrays by their $Z$ and $A$ sequences; here it is,\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=47, lastline=78]\n    {python}{deps/simulation-methods/src/sequences.py}\nin words, (i)~it deconstructs \\verb|seqs| into objects denoting the $Z$- and $A$-sequences,\npromoting both of them to callables, (ii)~it introduces the formal variable \\verb|t|,\n(iii)~it performs two Taylor expansions with respect to \\verb|t|.\nIn order to build the resulting matrix, it\nvisits each cell according to the order given by \\verb|lattice|; then,\nif the 
cell does not lie on the first column, it combines coefficients of \\verb|A_series|\nup to the cell in position \\verb|(k-1, dim-1)|;\notherwise, %if the cell lies on the first column,\nit combines coefficients of \\verb|Z_series| up to the cell in position\n\\verb|(k-1, dim-1)|. Finally, it stores the combination, applies any\ndesired post-processing, and\nreturns a lambda that uses the computed matrix \\verb|R|.\n\n\\begin{example}\nAgain the Pascal triangle, built using $A$ and $Z$ sequences:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> A, Z = Function('A'), Function('Z')\n>>> A_eq = Eq(A(t), 1 + t)\n>>> Z_eq = Eq(Z(t), 1)\n>>> A_eq, Z_eq\n\\end{minted}\n\\begin{displaymath}\n\\left ( A{\\left (t \\right )} = t + 1, \\quad Z{\\left (t \\right )} = 1\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(10, 10, riordan_matrix_by_AZ_sequences(10, (Z_eq, A_eq)))\n>>> R, production_matrix(R)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\1 & 1 &   &   &   &   &   &   &   &  \\\\1 & 2 & 1 &   &   &   &   &   &   &  \\\\1 & 3 & 3 & 1 &   &   &   &   &   &  \\\\1 & 4 & 6 & 4 & 1 &   &   &   &   &  \\\\1 & 5 & 10 & 10 & 5 & 1 &   &   &   &  \\\\1 & 6 & 15 & 20 & 15 & 6 & 1 &   &   &  \\\\1 & 7 & 21 & 35 & 35 & 21 & 7 & 1 &   &  \\\\1 & 8 & 28 & 56 & 70 & 56 & 28 & 8 & 1 &  \\\\1 & 9 & 36 & 84 & 126 & 126 & 84 & 36 & 9 & 1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}1 & 1 &   &   &   &   &   &   &  \\\\  & 1 & 1 &   &   &   &   &   &  \\\\  &   & 1 & 1 &   &   &   &   &  \\\\  &   &   & 1 & 1 &   &   &   &  \\\\  &   &   &   & 1 & 1 &   &   &  \\\\  &   &   &   &   & 1 & 1 &   &  \\\\  &   &   &   &   &   & 1 & 1 &  \\\\  &   &   &   &   &   &   & 1 & 1\\\\  &   &   &   &   &   &   &   & 1\\end{matrix}\\right]\\right )\n\\end{displaymath}\n\\end{example}\n\n\\begin{example}\nCatalan triangle built using $A$ and $Z$ sequences,\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> A_ones = Eq(A(t), 1/(1-t)) # A is defined as in the previous example\n>>> R = Matrix(10, 10, riordan_matrix_by_AZ_sequences(10, (A_ones, A_ones)))\n>>> R, production_matrix(R)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\1 & 1 &   &   &   &   &   &   &   &  \\\\2 & 2 & 1 &   &   &   &   &   &   &  \\\\5 & 5 & 3 & 1 &   &   &   &   &   &  \\\\14 & 14 & 9 & 4 & 1 &   &   &   &   &  \\\\42 & 42 & 28 & 14 & 5 & 1 &   &   &   &  \\\\132 & 132 & 90 & 48 & 20 & 6 & 1 &   &   &  \\\\429 & 429 & 297 & 165 & 75 & 27 & 7 & 1 &   &  \\\\1430 & 1430 & 1001 & 572 & 275 & 110 & 35 & 8 & 1 &  \\\\4862 & 4862 & 3432 & 2002 & 1001 & 429 & 154 & 44 & 9 & 1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}1 & 1 &   &   &   &   &   &   &  \\\\1 & 1 & 1 &   &   &   &   &   &  \\\\1 & 1 & 1 & 1 &   &   &   &   &  \\\\1 & 1 & 1 & 1 & 1 &   &   &   &  \\\\1 & 1 & 1 & 1 & 1 & 1 &   &   &  \\\\1 & 1 & 1 & 1 & 1 & 1 & 1 &   &  \\\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &  \\\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\end{matrix}\\right]\\right )\n\\end{displaymath}\n\\end{example}\n\n\\begin{example}\nSymbolic Riordan arrays built using $A$ and $Z$ sequences,\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> dim = 5\n>>> a = IndexedBase('a')\n>>> A_gen = Eq(A(t), sum((a[j] if j else 1)*t**j for j in range(dim)))\n>>> R = Matrix(dim, dim, riordan_matrix_by_AZ_sequences(dim, (A_gen, A_gen)))\n>>> 
R\n\\end{minted}\n\\begin{displaymath}\n\\footnotesize\n\\left[\\begin{matrix}1 &   &   &   &  \\\\1 & 1 &   &   &  \\\\a_{1} + 1 & a_{1} + 1 & 1 &   &  \\\\a_{1}^{2} + 2 a_{1} + a_{2} + 1 & a_{1}^{2} + 2 a_{1} + a_{2} + 1 & 2 a_{1} + 1 & 1 &  \\\\a_{1}^{3} + 3 a_{1}^{2} + 3 a_{1} a_{2} + 3 a_{1} + 2 a_{2} + a_{3} + 1 & a_{1}^{3} + 3 a_{1}^{2} + 3 a_{1} a_{2} + 3 a_{1} + 2 a_{2} + a_{3} + 1 & 3 a_{1}^{2} + 3 a_{1} + 2 a_{2} + 1 & 3 a_{1} + 1 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> z = IndexedBase('z')\n>>> A_gen = Eq(A(t), sum((a[j] if j else 1)*t**j for j in range(dim)))\n>>> Z_gen = Eq(Z(t), sum((z[j] if j else 1)*t**j for j in range(dim)))\n>>> Raz = Matrix(dim, dim, riordan_matrix_by_AZ_sequences(dim, (Z_gen, A_gen)))\n>>> Raz\n\\end{minted}\n\\begin{displaymath}\n\\footnotesize\n\\left[\\begin{matrix}1 &   &   &   &  \\\\1 & 1 &   &   &  \\\\z_{1} + 1 & a_{1} + 1 & 1 &   &  \\\\a_{1} z_{1} + 2 z_{1} + z_{2} + 1 & a_{1}^{2} + a_{1} + a_{2} + z_{1} + 1 & 2 a_{1} + 1 & 1 &  \\\\ \\left(\\begin{split} a_{1}^{2} z_{1} &+ 2 a_{1} z_{1} + 2 a_{1} z_{2} + a_{2} z_{1} +\\\\ z_{1}^{2} &+ 3 z_{1} + 2 z_{2} + z_{3} + 1\\end{split}\\right) & a_{1}^{3} + a_{1}^{2} + 3 a_{1} a_{2} + 2 a_{1} z_{1} + a_{1} + a_{2} + a_{3} + 2 z_{1} + z_{2} + 1 & 3 a_{1}^{2} + 2 a_{1} + 2 a_{2} + z_{1} + 1 & 3 a_{1} + 1 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> production_matrix(R), production_matrix(Raz)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}1 & 1 &   &  \\\\a_{1} & a_{1} & 1 &  \\\\a_{2} & a_{2} & a_{1} & 1\\\\a_{3} & a_{3} & a_{2} & a_{1}\\end{matrix}\\right], \\quad \\left[\\begin{matrix}1 & 1 &   &  \\\\z_{1} & a_{1} & 1 &  \\\\z_{2} & a_{2} & a_{1} & 1\\\\z_{3} & a_{3} & a_{2} & a_{1}\\end{matrix}\\right]\\right )\n\\end{displaymath}\n\\end{example}\n\n\\subsection{Exponential ctor}\n\nThe following definition implements a ctor that allows us to build an\nexponential Riordan array; here it is,\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=23, lastline=24]\n    {python}{deps/simulation-methods/src/sequences.py}\n\n\\begin{example}\nBuild the triangle of Stirling numbers of the second kind:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import exp\n>>> d_series = Eq(d_fn(t), 1)\n>>> h_series = Eq(h_fn(t), exp(t)-1)\n>>> d_series, h_series\n\\end{minted}\n\\begin{displaymath}\n\\left ( d{\\left (t \\right )} = 1, \\quad h{\\left (t \\right )} = e^{t} - 1\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(10, 10, riordan_matrix_exponential(\n...                     riordan_matrix_by_convolution(\n...                       
10, d_series, h_series)))\n>>> R\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\  & 1 &   &   &   &   &   &   &   &  \\\\  & 1 & 1 &   &   &   &   &   &   &  \\\\  & 1 & 3 & 1 &   &   &   &   &   &  \\\\  & 1 & 7 & 6 & 1 &   &   &   &   &  \\\\  & 1 & 15 & 25 & 10 & 1 &   &   &   &  \\\\  & 1 & 31 & 90 & 65 & 15 & 1 &   &   &  \\\\  & 1 & 63 & 301 & 350 & 140 & 21 & 1 &   &  \\\\  & 1 & 127 & 966 & 1701 & 1050 & 266 & 28 & 1 &  \\\\  & 1 & 255 & 3025 & 7770 & 6951 & 2646 & 462 & 36 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> production_matrix(R), production_matrix(R, exp=True)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}0 & 1 &   &   &   &   &   &   &  \\\\  & 1 & 1 &   &   &   &   &   &  \\\\  &   & 2 & 1 &   &   &   &   &  \\\\  &   &   & 3 & 1 &   &   &   &  \\\\  &   &   &   & 4 & 1 &   &   &  \\\\  &   &   &   &   & 5 & 1 &   &  \\\\  &   &   &   &   &   & 6 & 1 &  \\\\  &   &   &   &   &   &   & 7 & 1\\\\  &   &   &   &   &   &   &   & 8\\end{matrix}\\right], \\quad \\left[\\begin{matrix}0 & 1 &   &   &   &   &   &   &  \\\\  & 1 & 2 &   &   &   &   &   &  \\\\  &   & 2 & 3 &   &   &   &   &  \\\\  &   &   & 3 & 4 &   &   &   &  \\\\  &   &   &   & 4 & 5 &   &   &  \\\\  &   &   &   &   & 5 & 6 &   &  \\\\  &   &   &   &   &   & 6 & 7 &  \\\\  &   &   &   &   &   &   & 7 & 8\\\\  &   &   &   &   &   &   &   & 8\\end{matrix}\\right]\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> inspect(R)\nnature(is_ordinary=False, is_exponential=True)\n\\end{minted}\n\\end{example}\nIn the above example we introduced another function \\verb|inspect| that studies\nthe type of array it consumes. Before reporting its definition we remark that\nthe matrix on the left is a usual production matrix (which tells us that\n$d_{6,4} = d_{5,3} + 4d_{5,4} = 25 + 4\\cdot 10 = 65$, for example); on the\nother hand, the matrix on the right helps to decide whether the array is an exponential\none by checking that each diagonal is an \\textit{arithmetic progression}, for\nmore on this see \\citep{barry2017riordan}.\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=183, lastline=209]\n    {python}{deps/simulation-methods/src/sequences.py}\n\n\\begin{example}\nIn this example we explore an exponential Riordan array starting from the\ngenerating functions of the Pascal triangle. Surprisingly, the array we get back\nis known in the OEIS (\\url{https://oeis.org/A021009}); looking through its\ncomments we quote the observation\\newline\n\\begin{center} \n\\textit{\"the generalized Riordan array $(e^x, x)$ with respect\nto\\newline the sequence $n!$ is Pascal's triangle A007318\"} \n\\end{center} \nby Peter Bala.\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> d_series, h_series = Eq(d_fn(t), 1/(1-t)), Eq(h_fn(t), t/(1-t))\n>>> d_series, h_series\n\\end{minted}\n\\begin{displaymath}\n\\left ( d{\\left (t \\right )} = \\frac{1}{1-t}, \\quad h{\\left (t \\right )} = \\frac{t}{1-t}\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(10, 10, riordan_matrix_exponential(\n...                     riordan_matrix_by_convolution(\n...                       
10, d_series, h_series)))\n>>> R\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\1 & 1 &   &   &   &   &   &   &   &  \\\\2 & 4 & 1 &   &   &   &   &   &   &  \\\\6 & 18 & 9 & 1 &   &   &   &   &   &  \\\\24 & 96 & 72 & 16 & 1 &   &   &   &   &  \\\\120 & 600 & 600 & 200 & 25 & 1 &   &   &   &  \\\\720 & 4320 & 5400 & 2400 & 450 & 36 & 1 &   &   &  \\\\5040 & 35280 & 52920 & 29400 & 7350 & 882 & 49 & 1 &   &  \\\\40320 & 322560 & 564480 & 376320 & 117600 & 18816 & 1568 & 64 & 1 &  \\\\362880 & 3265920 & 6531840 & 5080320 & 1905120 & 381024 & 42336 & 2592 & 81 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> production_matrix(R), production_matrix(R, exp=True)\n\\end{minted}\n\\begin{displaymath}\n\\left ( \\left[\\begin{matrix}1 & 1 &   &   &   &   &   &   &  \\\\1 & 3 & 1 &   &   &   &   &   &  \\\\  & 4 & 5 & 1 &   &   &   &   &  \\\\  &   & 9 & 7 & 1 &   &   &   &  \\\\  &   &   & 16 & 9 & 1 &   &   &  \\\\  &   &   &   & 25 & 11 & 1 &   &  \\\\  &   &   &   &   & 36 & 13 & 1 &  \\\\  &   &   &   &   &   & 49 & 15 & 1\\\\  &   &   &   &   &   &   & 64 & 17\\end{matrix}\\right], \\quad \\left[\\begin{matrix}1 & 1 &   &   &   &   &   &   &  \\\\1 & 3 & 2 &   &   &   &   &   &  \\\\  & 2 & 5 & 3 &   &   &   &   &  \\\\  &   & 3 & 7 & 4 &   &   &   &  \\\\  &   &   & 4 & 9 & 5 &   &   &  \\\\  &   &   &   & 5 & 11 & 6 &   &  \\\\  &   &   &   &   & 6 & 13 & 7 &  \\\\  &   &   &   &   &   & 7 & 15 & 8\\\\  &   &   &   &   &   &   & 8 & 17\\end{matrix}\\right]\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> inspect(R)\nnature(is_ordinary=False, is_exponential=True)\n\\end{minted}\n%More surprisingly, the same array is used in \\citep{barry2017riordan} as a case study.\n\\end{example}\n\n\\subsection{Group inverse elements}\n\nIn this final section we show how to compute the compositional inverse of a\nfunction and then apply this procedure to find the inverse of a given Riordan\narray. 
By small steps, our task is to find the compositional inverse of\nPascal array's $h$ function\n\\begin{displaymath}\nh(t)= \\frac{t}{1-t},\n\\end{displaymath}\nnamely we want to find a function $\\bar{h}$ such that $\\bar{h}(h(t))=t$.\nStarting from this very last identity we use the substitution notation\n\\begin{displaymath}\n\\bar{h}(h(t)) = t \\leftrightarrow \\left[ \\bar{h}(y) = t\\, | \\, y = h(t) \\right]\n\\end{displaymath}\nthat allows us to reduce the original problem to solving $y = h(t)$ with respect\nto $t$; formally, using the definition of $h$ we rewrite\n\\begin{displaymath}\ny = \\frac{t}{1-t} \\quad\\text{that implies}\\quad t = \\frac{y}{1+y}.\n\\end{displaymath}\nThe latter identity can be used back in $\\bar{h}(y) = t$ as substitution for\n$t$ yielding $\\displaystyle \\bar{h}(y)= \\frac{y}{1+y} $ as required; this\nprocedure is promptly implemented as\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=212, lastline=229, mathescape=true]\n    {python}{deps/simulation-methods/src/sequences.py}\n\\inputminted[baselinestretch=0.8,stripnl=false,firstline=231, lastline=254, mathescape=true]\n    {python}{deps/simulation-methods/src/sequences.py}\n\n\\begin{example}\nCompositional inverse of Catalan triangle's $h$ generating function:\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> catalan_term = (1-sqrt(1-4*t))/(2*t)\n>>> d_series = Eq(d_fn(t), catalan_term)\n>>> h_series = Eq(h_fn(t), t*catalan_term)\n>>> h_series, compositional_inverse(h_series)\n\\end{minted}\n\\begin{displaymath}\n\\left ( h{\\left (t \\right )} = - \\frac{1}{2} \\sqrt{- 4 t + 1} + \\frac{1}{2}, \\quad \\bar{ h }{\\left (y \\right )} = - y \\left(y - 1\\right)\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> C_inverse = group_inverse(d_series, h_series, post=radsimp)\n>>> C_inverse\n\\end{minted}\n\\begin{displaymath}\n\\left ( g{\\left (t \\right )} = \\frac{1}{2} \\sqrt{4 t^{2} - 4 t + 1} + \\frac{1}{2}, \\quad f{\\left (t \\right )} = t \\left(- t + 1\\right)\\right )\n\\end{displaymath}\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> R = Matrix(10, 10, riordan_matrix_by_convolution(\n...                     10, C_inverse[0], C_inverse[1]))\n>>> R\n\\end{minted}\n\\begin{displaymath}\n\\left[\\begin{matrix}1 &   &   &   &   &   &   &   &   &  \\\\-1 & 1 &   &   &   &   &   &   &   &  \\\\  & -2 & 1 &   &   &   &   &   &   &  \\\\  & 1 & -3 & 1 &   &   &   &   &   &  \\\\  &   & 3 & -4 & 1 &   &   &   &   &  \\\\  &   & -1 & 6 & -5 & 1 &   &   &   &  \\\\  &   &   & -4 & 10 & -6 & 1 &   &   &  \\\\  &   &   & 1 & -10 & 15 & -7 & 1 &   &  \\\\  &   &   &   & 5 & -20 & 21 & -8 & 1 &  \\\\  &   &   &   & -1 & 15 & -35 & 28 & -9 & 1\\end{matrix}\\right]\n\\end{displaymath}\n\\end{example}\n\n\\section*{Conclusions}\n\nThis chapter offers the reader a concise review of the theory of Riordan\nArrays by recalling definitions, characterizations and their fundamental\nproperties; moreover, we pair these formal arguments with a set of software\nabstractions that allow us to mimic the theory with objects living in a\ncomputer. 
Coding our definitions using the Python programming language and\ntaking advantage of the symbolic module \\textit{Sympy}, we provide a coherent\nand unified environment to experiment in.\n\n\n\n", "meta": {"hexsha": "ae93c9814a9380c4f597a927e4f04b144db91719", "size": 32590, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "backgrounds/backgrounds.tex", "max_stars_repo_name": "massimo-nocentini/PhD-thesis", "max_stars_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "backgrounds/backgrounds.tex", "max_issues_repo_name": "massimo-nocentini/PhD-thesis", "max_issues_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "backgrounds/backgrounds.tex", "max_forks_repo_name": "massimo-nocentini/PhD-thesis", "max_forks_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.8200972447, "max_line_length": 862, "alphanum_fraction": 0.6132862841, "num_tokens": 12253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.584539264865016}}
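\nAs a computational companion to the group structure recalled in the chapter above, the following sketch (stock \\verb|sympy| only, deliberately avoiding the \\verb|sequences.py| module quoted there) checks on the Pascal triangle that the Riordan array built from $(1/d(\\overline{h}(t)),\\ \\overline{h}(t))$ is the matrix inverse of the one built from $(d(t),\\ h(t))$; the truncation order $m=6$ is an arbitrary choice.\n\\begin{minted}[baselinestretch=0.8]{python}\n>>> from sympy import symbols, Matrix, solve, simplify\n>>> t, y = symbols('t y')\n>>> def riordan(d, h, m=6):\n...     return Matrix(m, m, lambda n, k:\n...         (d*h**k).series(t, 0, m).removeO().coeff(t, n))\n>>> d, h = 1/(1-t), t/(1-t)    # the Pascal triangle\n>>> hbar = solve(y - h, t)[0]  # compositional inverse: y/(y + 1)\n>>> d_inv = simplify((1/d.subs(t, hbar)).subs(y, t))\n>>> h_inv = hbar.subs(y, t)\n>>> riordan(d, h).inv() == riordan(d_inv, h_inv)\nTrue\n\\end{minted}\n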
{"text": "\\chapter{EMT Problems}\n\\section{Magnetostatics}\n\\begin{enumerate}\n\t\\item  A spherical shell of radius $R$, carrying a uniform surface charge $\\sigma$, is set spinning at angular velocity $\\omega$. Find its Magnetic dipole moment.\n\t\\begin{answer}\n\t\t The total charge on the shaded ring is\n\t\t \\begin{figure}[H]\n\t\t \t\\centering\n\t\t \t\\includegraphics[height=5cm,width=4.5cm]{EM prblm-01}\n\t\t \\end{figure}\n\t\t\\begin{align*}\n\t\td q&=\\sigma(2 \\pi R \\sin \\theta) R d \\theta\\\\\n\t\\text{\tTime for one revolution is }d t&=\\frac{2 \\pi}{\\omega}\\\\\n\t\t\\Rightarrow\\text{ Current in the ring }I&=\\frac{d q}{d t}=\\sigma \\omega R^{2} \\sin \\theta d \\theta\n\t\t\\intertext{Area of the ring $=\\pi(\\mathrm{R} \\sin \\theta)^{2}$, so the magnetic moment of the ring is}\n\t\td m&=\\left(\\sigma \\omega R^{2} \\sin \\theta d \\theta\\right) \\times \\pi R^{2} \\sin ^{2} \\theta\\\\\n\t\tm=\\sigma \\omega R^{4} \\int_{0}^{\\pi} \\sin ^{3} \\theta d \\theta&=\\frac{4}{3} \\pi \\times . \\sigma \\omega R^{4} \\Rightarrow \\vec{m}=\\frac{4 \\pi}{3} \\sigma \\omega R^{4} \\hat{z}\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item  A solid cylinder of radius $R$ has total charge $Q$ distributed uniformly over its volume and uniform mass density has total mass $M$. It is rotating about its axis with angular speed $\\omega$. If its angular momentum is $L$ and magnetic moment is $\\mu$, then find the ratio $\\frac{\\mu}{L} .$\n\t\\begin{answer}\n\t\t\\begin{align*}\n\tI=\\frac{1}{2} M R^{2}, L&=I \\omega=\\frac{1}{2} M R^{2} \\omega\\\\\n\t\t\\text{Magnetic moment due to disc }\\mu&=\\frac{\\pi \\sigma \\omega R^{4}}{4}\\\\\n\t\t\\text{Due to cylinder }d \\mu&=\\frac{\\pi \\omega R^{4}}{4}(\\rho d z) \\quad(\\sigma \\rightarrow \\rho d z)\\\\\n\t\t\\mu=\\frac{\\pi \\omega R^{4}}{4} \\int_{0}^{L} \\frac{Q}{\\pi R^{2} L} d z&=\\frac{Q \\omega R^{2}}{4} \\quad \\Rightarrow \\frac{\\mu}{L}=\\frac{\\underline{Q} \\omega R^{2}}{\\frac{1}{2} M R^{2} \\omega}=\\frac{Q}{2 M}\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item  A current carrying loop is placed in a uniform magnetic field in four different orientations I, II, III and IV. Arrange them in the decreasing order of potential energy.\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{I.}]\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2cm,width=3.5cm]{EM prblm-02}\n\t\t\\end{figure}\n\t\t\\task[\\textbf{II.}]\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=1.5cm,width=3cm]{EM prblm-03}\n\t\t\\end{figure}\n\t\t\\task[\\textbf{III.}]\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2cm,width=2.5cm]{EM prblm-04}\n\t\t\\end{figure}\n\t\t\\task[\\textbf{IV.}] \\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2cm,width=3cm]{EM prblm-05}\n\t\t\\end{figure}\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\t\t\\because U&=-\\vec{m} \\cdot \\vec{B}=-m B \\cos \\theta \\\\\n\t\\text { I. } \\theta=180^{\\circ} \\Rightarrow U&=+m B, \\hspace{1.5cm} \\text { II. } \\theta=90^{\\circ} \\Rightarrow U=0 \\\\\n\t \\text { III. } \\theta=\\text { Acute angle } \\Rightarrow U&=-v e,  \\text { IV. } \\hspace{1cm}\\theta=\\text { Obtuse angle } \\Rightarrow U=+v e \\\\ &\\text { Thus } 1>\\mathrm{IV}>\\mathrm{II}>\\mathrm{III} \n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item The region between $x=0$ and $x=L$ is filled with uniform, steady magnetic field $B_{0} \\hat{k}$. 
A particle of mass $m$, positive charge $q$ and velocity $v_{0} \\hat{i}$ travels along the $x$-axis and enters the region of magnetic field. Neglect gravity throughout the question.\\\\\n\t(a) Find the value of $L$ if the particle emerges from the region of magnetic field with its final velocity at an angle $30^{\\circ}$ to the initial velocity.\\\\\n\t(b) Find the final velocity of the particle and the time spent by it in the magnetic field, if the field now extends up to $x=2.1 L$.\n\t\\begin{answer}\n\t\t(a) As the initial velocity of the particle is perpendicular to the field the particle will move along the arc of a circle as shown.\n\t\tIf $r$ is the radius of the circle, then\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=3.7cm,width=5cm]{EM prblm-06}\n\t\t\\end{figure}\n\t\t\\begin{align*}\n\t\t\\frac{m v_{0}^{2}}{r}&=q v_{0} B_{0}\\\\\n\t\t\\text{Also from geometry,}\\hspace{1cm}\n\t\tL&=r \\sin 30^{\\circ}\\\\\n\t\t\\Rightarrow r&=2 L\\\\\n\t\t\\text{or}\n\t\tL&=\\frac{m v_{0}}{2 q B_{0}}\\hspace{1cm}\\\\\n\t\\text{\t(b) In this case, the field region has width}\n\t\t\\quad 2.1 L&=\\frac{2.1 m v_{0}}{2 q B_{0}}=1.05\\, r>r\n\t\t\\end{align*}\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=3.5cm,width=3.5cm]{EM prblm-07}\n\t\t\\end{figure}\n\t\t\tHence the particle will complete a semicircular path and emerge from the field with velocity $-v_{0} \\hat{i}$ as shown. Time spent by the particle in the magnetic field\n\t\t\\begin{align*}\n\t\tT=\\frac{\\pi r}{v_{0}}&=\\frac{\\pi m}{q B_{0}}\n\t\t\\end{align*}\n\t\tThe speed of the particle does not change due to magnetic field.\n\t\\end{answer}\n\t\\item A straight segment $O C$ (of length $L$ meter) of a circuit carrying a current $I$ ampere is placed along the $x$-axis. Two infinitely long straight wires $A$ and $B$, each extending from $z=-\\infty$ to $+\\infty$, are fixed at $y=-a$ metre and $y=+a$ metre respectively, as shown in the figure. If the wires $A$ and $B$ each carry a current $I$ ampere into the plane of the paper, obtain the expression for the force acting on segment $O C$. 
What will be the force on $O C$ if the current in the wire $B$ is reversed?\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3.2cm,width=5cm]{EM prblm-08}\n\t\\end{figure}\n\t\\begin{answer}\n\t\t\tThe magnetic fields produced at $P(x, 0,0)$ by the wires are $B_{A}=B_{B}=\\mu_{0} I / 2 \\pi R$, where $R=\\sqrt{a^{2}+x^{2}}$. Components of $B_{A}$ and $B_{B}$ along the $x$-axis cancel, while those along the $y$-axis add up to give the total field.\n\t\t\t\\begin{figure}[H]\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[height=3.5cm,width=5cm]{EM prblm-09}\n\t\t\t\\end{figure}\n\t\t\\begin{align*}\n\t\tB&=2\\left(\\frac{\\mu_{0} I}{2 \\pi R}\\right) \\cos \\theta=\\frac{2 \\mu_{0} I}{2 \\pi R} \\cdot \\frac{x}{R}=\\frac{\\mu_{0} I}{\\pi} \\frac{x}{\\left(a^{2}+x^{2}\\right)}\\text{ (along $y$ direction)}\\\\\n\t\t\\text{The force }&d F\\text{ acting on the current element is }d \\vec{F}=I(d \\vec{l} \\times \\vec{B})\\\\\n\t\t\\therefore \\quad d F&=\\frac{\\mu_{0} I^{2}}{\\pi} \\frac{x d x}{a^{2}+x^{2}}\\hspace{2cm}\\left[\\because \\sin 90^{\\circ}=1\\right]\\\\\n\t\t\\Rightarrow \\quad F&=\\frac{\\mu_{0} I^{2}}{\\pi} \\int_{0}^{L} \\frac{x d x}{a^{2}+x^{2}}=\\frac{\\mu_{0} I^{2}}{2 \\pi} \\ln \\frac{a^{2}+L^{2}}{a^{2}}\n\t\t\\end{align*}\n\t\tIf the current in $B$ is reversed, the magnetic field due to the two wires would be only along $x$-direction and the force on the current along $x$-direction will be zero.\n\t\\end{answer}\n\\item A wire loop carrying a current $I$ is placed in the $x-y$ plane as shown in the figure. (a) If a particle with charge $q$ and mass $m$ is placed at the centre $P$ and given a velocity $v$ along $N P$, find its instantaneous acceleration. (b) If an external uniform magnetic induction $\\vec{B}=B \\hat{i}$ is applied, find the force and torque acting on the loop.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=4.5cm,width=4.7cm]{EM prblm-10}\n\\end{figure}\n\\begin{answer}\n\t\tAs in the case of a current-carrying straight conductor and arc, the magnitude of $B$ is given by\\\\\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=4.5cm,width=4.7cm]{EM prblm-11}\n\t\t\\end{figure}\n\t\\begin{align*}\n\tB_{1}&=\\frac{\\mu_{0} I}{4 \\pi d}(\\sin \\alpha+\\sin \\beta) \\\\\n\t\\text{and}\\hspace{1cm}B_{2}&=\\frac{\\mu_{0} I \\phi}{4 \\pi r}\n\t\\intertext{So in accordance with right hand screw rule,}\n\t\\end{align*}\n\t\\begin{align*}\n\t\\left(\\vec{B}_{w}\\right)&=\\frac{\\mu_{0}}{4 \\pi} \\frac{I}{(a \\cos 60^{\\circ})} \\times 2 \\sin 60^{\\circ}(-\\hat{k})\\\\\n\t\\text{and due to arc}\\hspace{1cm}\n\t(\\vec{B})_{M N}&=\\frac{\\mu_{0}}{4 \\pi} \\cdot \\frac{I}{a} \\times\\left(\\frac{2}{3} \\pi\\right)(+\\hat{k})\n\\intertext{\tand hence net $\\vec{B}$ at $P$ due to the given loop}\n\t\\vec{B}&=\\vec{B}_{w}+\\vec{B}_{M N} \\\\\n\t\\Rightarrow \\vec{B}&=\\frac{\\mu_{0}}{4 \\pi} \\cdot \\frac{2 I}{a}\\left[\\sqrt{3}-\\frac{\\pi}{3}\\right](-\\hat{k})\n\\intertext{\tNow as force on charged particle in a magnetic field is given by}\n\t\\vec{F}&=q(\\vec{v} \\times \\vec{B})\\\\\n\\text{\tSo here,}\\hspace{1cm}\n\t\\vec{F}&=q v B \\sin 90^{\\circ} \\text { along } P F \\\\\n\t\\vec{F}&=\\frac{\\mu_{0}}{4 \\pi} \\frac{2 q v I}{a}\\left[\\sqrt{3}-\\frac{\\pi}{3}\\right] \\text { along } P F\\\\\n\\text{\tand so }\\quad\n\t\\vec{a}&=\\frac{\\vec{F}}{m}=10^{-7} \\frac{2 q v I}{a}\\left[\\sqrt{3}-\\frac{\\pi}{3}\\right] \\text { along } P F\\\\\n\t\\text{(b) As }\\hspace{1cm}d \\vec{F}&=I d \\vec{L} \\times \\vec{B},\\text{ so }\\vec{F}=\\int I d \\vec{L} 
\\times \\vec{B}\\\\\n\\text{\tAs here $I$ and $\\vec{B}$ are constant\n\t}\\quad\n\t\\vec{F}&=I[\\oint d \\vec{L}] \\times \\vec{B}=0\\quad[\\text { as } \\oint d \\vec{L}=0]\\\\\n\t\\text{Further as area of coil,}\\quad\n\t\\vec{S}&=\\left[\\frac{1}{3} \\pi a^{2}-\\frac{1}{2} \\cdot 2 a \\sin 60^{\\circ} \\times a \\cos 60^{\\circ}\\right] \\hat{k}=a^{2}\\left[\\frac{\\pi}{3}-\\frac{\\sqrt{3}}{4}\\right] \\hat{k}\\\\\n\\text{\tSo}\\quad\n\t\\vec{M}&=I \\vec{S}=I a^{2}\\left[\\frac{\\pi}{3}-\\frac{\\sqrt{3}}{4}\\right] \\hat{k}\\\\\n\\text{\tand hence}\\quad\n\t\\vec{\\tau}\n\t&=\\vec{M} \\times \\vec{B}=I a^{2} B\\left[\\frac{\\pi}{3}-\\frac{\\sqrt{3}}{4}\\right](\\hat{k} \\times \\hat{i})\\\\\n\t \\text{i.e.}\\qquad\\vec{\\tau}&=I a^{2} B\\left[\\frac{\\pi}{3}-\\frac{\\sqrt{3}}{4}\\right] \\hat{j}\\ \\mathrm{N\\,m}\n\t\\quad\\text{as }(\\hat{k} \\times \\hat{i}=\\hat{j})\n\t\\end{align*}\n\\end{answer}\n\t\\item A uniform magnetic field of $30\\ \\mathrm{mT}$ exists in the $+x$ direction. A particle of charge $+e$ and mass $1.67 \\times 10^{-27} \\mathrm{~kg}$ is projected into the field along the $+y$ direction with a speed of $4.8 \\times 10^{6} \\mathrm{~m} / \\mathrm{s} .$\\\\\n\t(a) Find the force on the charged particle in magnitude and direction.\\\\\n\t(b) Find the force if the particle were negatively charged.\\\\\n\t(c) Describe the nature of the path followed by the particle in both the cases.\n\t\\begin{answer}\n\t\t\t(a) Force acting on a charged particle moving in the magnetic field\n\t\t\t\\begin{figure}[H]\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[height=3cm,width=3cm]{EM prblm-12}\n\t\t\t\\end{figure}\n\t\t\\begin{align*}\n\t\\vec{F}&=q(\\vec{v} \\times \\vec{B})\\\\\n\t\\text{Magnetic field}\n\t\\vec{B}&=30\\ \\mathrm{mT}\\, \\hat{i}\\\\\n\\text{\tVelocity of the charge particle }\\quad \\vec{v}&=4.8 \\times 10^{6}(\\mathrm{~m} / \\mathrm{s}) \\hat{j}\\\\\n\\vec{F}&=1.6 \\times 10^{-19}\\left[\\left(4.8 \\times 10^{6} \\hat{j}\\right) \\times\\left(30 \\times 10^{-3}\\right)(\\hat{i})\\right]\\\\\n\\vec{F}&=230.4 \\times 10^{-16}(-\\hat{k}) N\n\\intertext{\t(b) If the particle were negatively charged, the magnitude of the force will be the same but the direction will be along the $(+z)$ direction.}\n\\intertext{\t(c) Since $\\vec{v} \\perp \\vec{B}$, the path described is a circle}\n R &=\\frac{m v}{q B}=\\left(1.67 \\times 10^{-27}\\right) \\cdot\\left(4.8 \\times 10^{6}\\right) /\\left[\\left(1.6 \\times 10^{-19}\\right) \\cdot\\left(30 \\times 10^{-3}\\right)\\right] \\\\ &=1.67 \\mathrm{~m} .\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item A disc of radius $R$ rotates at an angular velocity $\\omega$ about the axis perpendicular to its surface and passing through its centre. 
If the disc has a uniform surface charge density $\\sigma$, find the magnetic induction on the axis of rotation at a distance $x$ from the centre.\n\t\\begin{answer}\n\t\tConsider a ring of radius $r$ and width $d r$.\n\t\t\\begin{align}\n\t\t\\text{\tCharge on the ring,}\td q&=(2 \\pi r d r) \\sigma\\notag\\\\\n\t\\text{\tCurrent due to ring is }d I&=\\frac{d q}{T}\\notag\\\\\n\t&=\\frac{\\omega d q}{2 \\pi}=\\sigma \\omega r d r\\notag\n\\intertext{\tMagnetic field due to ring at point $P$ is}\\notag\n\td B&=\\frac{\\mu_{0}\\, d I\\, r^{2}}{2\\left(r^{2}+x^{2}\\right)^{3 / 2}}\\notag\\\\\n\\text{\tor}\\quad\n\tB&=\\int d B=\\frac{\\mu_{0} \\sigma \\omega}{2} \\int_{0}^{R} \\frac{r^{3} d r}{\\left(r^{2}+x^{2}\\right)^{3 / 2}}\\label{emp20}\\\\\n\\text{\tPutting }r^{2}+x^{2}=t^{2}\\text{ and }2 r d r&=2 t d t\\text{ and integrating (\\ref{emp20}), we get}\\notag\\\\\n\tB&=\\frac{\\mu_{0} \\sigma \\omega}{2}\\left[\\frac{R^{2}+2 x^{2}}{\\sqrt{R^{2}+x^{2}}}-2 x\\right]\\notag\n\t\t\\end{align}\n\t\\end{answer}\n\\end{enumerate}", "meta": {"hexsha": "4ce4b452909d14f451d27fe29384713fd002b86f", "size": 11698, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Electromganetic theory/chapter/Problems.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Electromganetic theory/chapter/Problems.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Electromganetic theory/chapter/Problems.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.5120772947, "max_line_length": 515, "alphanum_fraction": 0.6387416652, "num_tokens": 4515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5845392571058634}}
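\nThe numerical answers above are easy to sanity-check by machine; the following short script (values and the formulas $F=qvB$ and $R=mv/qB$ taken from the uniform-field problem above) reproduces both numbers.\n\\begin{minted}[baselinestretch=0.8]{python}\n# Sanity check: charged particle projected perpendicular to a uniform field.\nq = 1.6e-19   # charge (C)\nm = 1.67e-27  # mass (kg)\nv = 4.8e6     # speed (m/s)\nB = 30e-3     # field (T)\n\nF = q * v * B        # |F| = qvB, since v is perpendicular to B\nR = m * v / (q * B)  # radius of the circular path\nprint(f'F = {F:.3e} N')  # 2.304e-14 N, i.e. 230.4e-16 N as in the answer\nprint(f'R = {R:.2f} m')  # 1.67 m\n\\end{minted}\n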
{"text": "\\documentclass{article}\n\\usepackage[british]{babel}\n\n\\title{Building Management System}\n\\author{Dr Alun Moon}\n\n\\usepackage{tlatex}\n\\usepackage{color}\n\\definecolor{boxshade}{gray}{.8}\n\\setboolean{shading}{true}\n\n\\begin{document}\n\\maketitle\n\\section{Requirements}\n\\begin{quote}\nTo track people as they enter and leave a building.\n\\end{quote}\n\n\\section{Specification}\n\\begin{tla}\n------------------------------ MODULE building ------------------------------\n(* Sample solution for first TLA+ exercise *)\nCONSTANT\n    People      \\* we're dealing with people here\n                \\* this is the set of all people        \nVARIABLE\n    register,   \\* Set of registered users\n    in,         \\* Set of people in the building\n    out         \\* Set of people out of the building\n    \nTypeOK ==  \\* type invarient \n    /\\ register \\subseteq People    \\* Everyone on the register is a person\n    /\\ register = in \\union out     \\* everyones location is known\n    /\\ in \\intersect out = {}       \\* noone can be both in and out of the building\n\n------\n\nInit ==\n   /\\ register = {}    \\* Initially no-one is registered\n   /\\  in      = {}    \\*           no-one is inside\n   /\\  out     = {}    \\*           no-one is outside\n\nRegister(p) ==  \n    /\\  p \\in People \\ register        \\* p is a person and not registered\n    /\\ register' = register \\union {p} \\* add p to register\n    /\\ out' = out \\union {p}           \\* p is outside\n    /\\ in' = in                        \\* must keep set of those inside the same\n\nEnter(p) ==                  \n    /\\ p \\in out               \\* p is outside the building\n    /\\ in' = in \\union {p}     \\* add p to the inside set\n    /\\ out' = out \\ {p}        \\* remove p from the outside set\n    /\\ register' = register    \\* register is unchanged\n    \nLeave(p) ==\n    /\\ p \\in in                \\* p is in the building\n    /\\ in' = in \\ {p}          \\* remove p from the inside set\n    /\\ out' = out \\union {p}   \\* add p to the outside set\n    /\\ register' = register    \\* resigter is unchanged\n\nNext ==\n    \\exists p \\in People :   \\* There is a person who can either\n        \\/ Register(p)       \\* be registered, or\n        \\/ Enter(p)          \\* enter the building, or\n        \\/ Leave(p)          \\* leave the building\n\n=============================================================================\n\\* Modification History\n\\* Last modified Wed Oct 02 10:31:48 BST 2019 by alun\n\\* Last modified Tue Sep 10 12:27:57 BST 2019 by cgam1\n\\* Created Mon Sep 24 11:53:39 BST 2018 by cgam1\n\\end{tla}\n\\begin{tlatex}\n\\@x{}\\moduleLeftDash\\@xx{ {\\MODULE} building}\\moduleRightDash\\@xx{}%\n\\@x{}%\n\\@y{%\n  Sample solution for first TLA+ exercise \n}%\n\\@xx{}%\n\\@x{ {\\CONSTANT}}%\n\\@x{\\@s{16.4} People\\@s{20.5}}%\n\\@y{%\n  we're dealing with people here\n}%\n\\@xx{}%\n\\@x{\\@s{65.90}}%\n\\@y{%\n  this is the set of all people        \n}%\n\\@xx{}%\n\\@x{ {\\VARIABLE}}%\n\\@x{\\@s{16.4} register ,\\,\\@s{10.62}}%\n\\@y{%\n  Set of registered users\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} in ,\\,\\@s{33.93}}%\n\\@y{%\n  Set of people in the building\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} out\\@s{34.75}}%\n\\@y{%\n  Set of people out of the building\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ TypeOK \\.{\\defeq}\\@s{4.1}}%\n\\@y{%\n  type invarient \n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land}\\@s{9.74} register \\.{\\subseteq} People\\@s{18.61}}%\n\\@y{%\n  
Everyone on the register is a person\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land}\\@s{9.74} register \\.{=} in \\.{\\cup} out\\@s{16.4}}%\n\\@y{%\n  everyone's location is known\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land}\\@s{9.74} in \\.{\\cap} out \\.{=} \\{ \\}\\@s{35.06}}%\n\\@y{%\n  no-one can be both in and out of the building\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{}\\midbar\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ Init \\.{\\defeq}}%\n\\@x{\\@s{12.29} \\.{\\land} register \\.{=} \\{ \\}\\@s{14.78}}%\n\\@y{%\n  Initially no-one is registered\n}%\n\\@xx{}%\n\\@x{\\@s{12.29} \\.{\\land}\\@s{4.1} in\\@s{21.69} \\.{=} \\{ \\}\\@s{12.29}}%\n\\@y{%\n            no-one is inside\n}%\n\\@xx{}%\n\\@x{\\@s{12.29} \\.{\\land}\\@s{4.10} out\\@s{13.91} \\.{=} \\{ \\}\\@s{14.78}}%\n\\@y{%\n            no-one is outside\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ Register ( p ) \\.{\\defeq}}%\n \\@x{\\@s{16.4} \\.{\\land}\\@s{8.33} p \\.{\\in} People \\.{\\,\\backslash\\,}\n register\\@s{28.7}}%\n\\@y{%\n  p is a person and not registered\n}%\n\\@xx{}%\n \\@x{\\@s{16.4} \\.{\\land} register \\.{'} \\.{=} register \\.{\\cup} \\{ p\n \\}\\@s{17.68}}%\n\\@y{%\n  add p to register\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} out \\.{'} \\.{=} out \\.{\\cup} \\{ p \\}\\@s{53.71}}%\n\\@y{%\n  p is outside\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} in \\.{'} \\.{=} in\\@s{91.15}}%\n\\@y{%\n  must keep set of those inside the same\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ Enter ( p ) \\.{\\defeq}}%\n\\@x{\\@s{16.4} \\.{\\land} p \\.{\\in} out\\@s{57.4}}%\n\\@y{%\n  p is outside the building\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} in \\.{'} \\.{=} in \\.{\\cup} \\{ p \\}\\@s{29.31}}%\n\\@y{%\n  add p to the inside set\n}%\n\\@xx{}%\n \\@x{\\@s{16.4} \\.{\\land} out \\.{'} \\.{=} out \\.{\\,\\backslash\\,} \\{ p\n \\}\\@s{21.51}}%\n\\@y{%\n  remove p from the outside set\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} register \\.{'} \\.{=} register\\@s{9.55}}%\n\\@y{%\n  register is unchanged\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ Leave ( p ) \\.{\\defeq}}%\n\\@x{\\@s{16.4} \\.{\\land} p \\.{\\in} in\\@s{62.69}}%\n\\@y{%\n  p is in the building\n}%\n\\@xx{}%\n \\@x{\\@s{16.4} \\.{\\land} in \\.{'} \\.{=} in \\.{\\,\\backslash\\,} \\{ p\n \\}\\@s{32.09}}%\n\\@y{%\n  remove p from the inside set\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} out \\.{'} \\.{=} out \\.{\\cup} \\{ p \\}\\@s{18.73}}%\n\\@y{%\n  add p to the outside set\n}%\n\\@xx{}%\n\\@x{\\@s{16.4} \\.{\\land} register \\.{'} \\.{=} register\\@s{9.55}}%\n\\@y{%\n  register is unchanged\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{ Next \\.{\\defeq}}%\n\\@x{\\@s{16.4} \\exists\\, p \\.{\\in} People \\.{:}\\@s{21.44}}%\n\\@y{%\n  There is a person who can either\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Register ( p )\\@s{24.59}}%\n\\@y{%\n  be registered, or\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Enter ( p )\\@s{36.89}}%\n\\@y{%\n  enter the building, or\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Leave ( p )\\@s{19.13}}%\n\\@y{%\n  leave the building\n}%\n\\@xx{}%\n\\@pvspace{8.0pt}%\n\\@x{}\\bottombar\\@xx{}%\n\\@x{}%\n\\@y{%\n  Modification History\n}%\n\\@xx{}%\n\\@x{}%\n\\@y{%\n  Last modified Wed Oct 02 10:31:48 BST 2019 by alun\n}%\n\\@xx{}%\n\\@x{}%\n\\@y{%\n  Last modified Tue Sep 10 12:27:57 BST 2019 by cgam1\n}%\n\\@xx{}%\n\\@x{}%\n\\@y{%\n  Created Mon Sep 24 11:53:39 BST 2018 by cgam1\n}%\n\\@xx{}%\n\\end{tlatex}\n\n\\section{Model}\n\\paragraph{What is the Model?}\nThe model defines the constant \\textit{People}.\n\\begin{tla}\n\tPeople <- {\"Alun\", 
\"Neil\", \"David\", \"Michael\"}\n\\end{tla}\n\\begin{tlatex}\n \\@x{\\@s{32.8} People \\.{\\leftarrow} \\{\\@w{Alun} ,\\,\\@w{Neil} ,\\,\\@w{David}\n ,\\,\\@w{Michael} \\}}%\n\\end{tlatex}\n\n\\paragraph{What is the Behaviour spec?}  The behaviour specification is given\nby an \\emph{Initial predicate and next-state relation}\n\\subparagraph{Init} \\textit{Init}\n\\subparagraph{Next} \\textit{Next}\n\n\\paragraph{Invariants} The invariants checked are :\n\\begin{table}[h]\n\\begin{tabular}{ll}\n$\\mathit{TypeOK}$ & The type invariant from the specification\\\\\n$\\forall p \\in \\mathit{register} : p \\in \\mathit{People}$ & every\nregistered person is in People.\\\\\n\\begin{tla}\nregister\\subseteq People\n\\end{tla}\n\\begin{tlatex}\n\\@x{ register \\.{\\subseteq} People}%\n\\end{tlatex}\n & register is a subset of People\\\\\n\\begin{tla}\n\\A p \\in out : p \\in People\n\\end{tla}\n\\begin{tlatex}\n\\@x{ \\A\\, p \\.{\\in} out \\.{:} p \\.{\\in} People}%\n\\end{tlatex}\n & Everyone outside the building is a person (see next invariant)\\\\\n\\begin{tla}\nout \\subseteq People \n\\end{tla}\n\\begin{tlatex}\n\\@x{ out \\.{\\subseteq} People}%\n\\end{tlatex}\n & out is a subset of people (says the same thing as the last invariant) \\\\\n\\begin{tla}\n\\A p \\in in : p \\in register\n\\end{tla}\n\\begin{tlatex}\n\\@x{ \\A\\, p \\.{\\in} in \\.{:} p \\.{\\in} register}%\n\\end{tlatex}\n & everyone in the building is registered \\\\\n\\begin{tla}\nin \\subseteq register\n\\end{tla}\n\\begin{tlatex}\n\\@x{ in \\.{\\subseteq} register}%\n\\end{tlatex}\n & in is a subset of register \\\\\n\\end{tabular}\n\\end{table}\n\n\\section{Results}\nA summary of the numbers of states found by the model checking is shown below.\n\n\\subsection{Statistics}\n\\paragraph{States found} for model as a whole\n\n\\begin{table}[h]\n\\begin{tabular}{lr}\n\t\\hline\n\tStates Found & 325 \\\\\n\tDistinct States & 81 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\paragraph{Number of next states found} for the actions is:\n\n\\begin{table}[h]\n\\begin{tabular}{lr}\n\t\\hline\n\t\\textbf{Action} & \\textbf{States found} \\\\ \\hline\n\t\\textit{Init} (line 18) & 1 \\\\\n\t\\textit{Register} (line 23) & 108 \\\\\n\t\\textit{Enter} (line 29) & 108 \\\\\n\t\\textit{Leave} (line 35) & 108 \\\\\n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Discussion}\nThe (simple) building model has three state variables; the register of users,\nthe list of people inside the building, and the list of those outside the\nbuilding.  
The model also fixes the set of people to which the specification applies.\n\nThere is a redundancy in the state variables; consistency is enforced by\nthe type invariant $\\textit{register}=\\textit{in}\\cup\\textit{out}$.\n\n\\paragraph{The Next action} can be interpreted as follows.\n\\begin{quote}\n\tThere is a person who can either be registered, or enter the building,\n\tor leave the building.\n\\end{quote}\n\n\\begin{tla}\nNext ==\n    \\exists p \\in People :   \\* There is a person who can either\n        \\/ Register(p)       \\* be registered, or\n        \\/ Enter(p)          \\* enter the building, or\n        \\/ Leave(p)          \\* leave the building\n\\end{tla}\n\\begin{tlatex}\n\\@x{ Next \\.{\\defeq}}%\n\\@x{\\@s{16.4} \\exists\\, p \\.{\\in} People \\.{:}\\@s{21.44}}%\n\\@y{%\n  There is a person who can either\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Register ( p )\\@s{24.59}}%\n\\@y{%\n  be registered, or\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Enter ( p )\\@s{36.89}}%\n\\@y{%\n  enter the building, or\n}%\n\\@xx{}%\n\\@x{\\@s{32.8} \\.{\\lor} Leave ( p )\\@s{19.13}}%\n\\@y{%\n  leave the building\n}%\n\\@xx{}%\n\\end{tlatex}\n\\end{document}\n\n\n", "meta": {"hexsha": "26372dbe794acfa4ada79f7ac937e6dd53a2c8d0", "size": 9787, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Building-doc.tex", "max_stars_repo_name": "kf6009/Building", "max_stars_repo_head_hexsha": "1830d1fa5c2c8889286c5fd26b2979de78db0313", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Building-doc.tex", "max_issues_repo_name": "kf6009/Building", "max_issues_repo_head_hexsha": "1830d1fa5c2c8889286c5fd26b2979de78db0313", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Building-doc.tex", "max_forks_repo_name": "kf6009/Building", "max_forks_repo_head_hexsha": "1830d1fa5c2c8889286c5fd26b2979de78db0313", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-06T20:38:56.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-08T10:59:19.000Z", "avg_line_length": 24.5288220551, "max_line_length": 83, "alphanum_fraction": 0.5558393788, "num_tokens": 3809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7520125626441471, "lm_q1q2_score": 0.5845392527857668}}
{"text": "\\section{Linear algebra}\\label{sec:linear_algebra}\n\nLinear algebra is a branch of mathematics that is both very accessible and enormously useful. It studies \\hyperref[def:vector_space]{vector spaces}, mostly over \\hyperref[def:set_of_real_numbers]{real} or \\hyperref[def:set_of_complex_numbers]{complex} numbers, and \\hyperref[def:semimodule/homomorphism]{linear maps} between them. For \\hyperref[def:vector_space_dimension]{finite-dimensional} vector spaces, this reduces to studying \\hyperref[def:array/matrix]{matrices}, which also leads to a very rich computational theory.\n\nWe have discussed the basics of vector spaces in \\fullref{subsec:modules} in the context of general \\hyperref[def:module]{modules over rings}. The key takeaways are:\n\\begin{itemize}\n  \\item Vector spaces, defined incrementally in \\fullref{def:semimodule}, \\fullref{def:module} and \\fullref{def:vector_space}, with some properties listed in \\fullref{thm:def:vector_space}.\n  \\item Linear combinations, defined in \\fullref{def:free_semimodule} and characterized by \\fullref{thm:free_semimodule_universal_property}, with important remarks in \\fullref{rem:linear_combinations}.\n  \\item Linear spans, defined in \\fullref{def:semimodule/submodel} and characterized via \\fullref{thm:span_via_linear_combinations}.\n  \\item Linear maps, defined in \\fullref{def:semimodule/homomorphism}, and multilinear maps, defined in \\fullref{def:multilinear_function}.\n  \\item Quotient spaces, discussed in \\fullref{def:module/quotient} for general modules.\n  \\item \\Fullref{thm:quotient_module_universal_property} and \\Fullref{thm:quotient_submodule_lattice_theorem}.\n  \\item Linear (in)dependence, defined in \\fullref{def:linear_dependence} with some properties listed in \\fullref{thm:def:linear_dependence}.\n  \\item Hames bases, defined in \\fullref{def:hamel_basis} with some properties listed in \\fullref{thm:def:hamel_basis}.\n  \\item Basis decomposition defined in \\fullref{def:basis_decomposition}.\n  \\item Vector space dimensions, whose existence and uniqueness is shown in \\fullref{thm:vector_space_dimension}.\n  \\item \\Fullref{thm:rank_nullity_theorem}.\n\\end{itemize}\n", "meta": {"hexsha": "6b46f694cb97be54caf5d58cb272cafa495d3f1d", "size": 2134, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/linear_algebra.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/linear_algebra.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/linear_algebra.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.3157894737, "max_line_length": 525, "alphanum_fraction": 0.8097469541, "num_tokens": 562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7772998508568417, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5845392527857668}}
{"text": "\\pgfplotsset{\n  layers/axis lines on top/.define layer set={\n    axis background,\n    axis grid,\n    axis ticks,\n    axis tick labels,\n    pre main,\n    main,\n    axis lines,\n    axis descriptions,\n    axis foreground,\n  }{/pgfplots/layers/standard},\n}\n\n\\pgfkeys{\n\t/pgfplots/unit circle/.style={\n\t\tset layers=axis lines on top,\n\t\twidth=9cm, height=9cm,\n\t\taxis x line=middle,\n\t\taxis y line=middle,\n\t\txlabel=$x$,\n\t\tylabel=$y$,\n\t\tevery axis x label/.style={\n\t\t\tat={(ticklabel* cs:1.01)},\n\t\t\tanchor=west,\n\t\t},\n\t\tevery axis y label/.style={\n\t\t\tat={(ticklabel* cs:1.01)},\n\t\t\tanchor=south,\n\t\t},\n\t\taxis line style={stealth-stealth, thick},\n\t\tlabel style={font=\\Large},\n\t\ttick label style={font=\\Large},\n\t\tsamples=100,\n\t\txmin=-1.25, xmax=1.25,\n\t\tymin=-1.25, ymax=1.25,\n\t\txtick={-1,0,1},\n\t\tytick={-1,0,1},\n\t\txticklabels={},\n\t\tyticklabels={},\n\t\tdomain=-1.5:1.5,\n\t\tgrid=both,\n\t\tmajor grid style={black!5},\n\t},\n}\n\n\\section{Trigonometric functions}\n\\subsection{Basic Definitions}\nConsider a \\emph{right triangle} $\\triangle ABC$ with sides $a,b$, and Hypotenuse $c$, where the angle $\\angle ACB$ is $\\ang{90}$, and the angle $\\angle BAC$ is denoted as $\\alpha$:\n\n\\centering\n\\begin{tikzpicture}[node distance=3mm]\n\t\t\\coordinate (A) at (0,0);\n\t\t\\coordinate (B) at (4,3);\n\t\t\\coordinate (C) at (4,0);\n\n\t\t\\node[left of=A] {$A$};\n\t\t\\node[above of=B] {$B$};\n\t\t\\node[right of=C] {$C$};\n\n\t\t\\draw[fill=xblue!30] (A) -- node (c) [midway, above, rotate=36.87] {$c$ (Hypotenuse)} (B) -- node (a) [midway, right] {$a$ (Opposite)} (C) -- node (b) [midway, below] {$b$ (Adjacent)} cycle;\n\t\t\\draw[thick] ($(C)+(0,0.3)$) rectangle ($(C)-(0.3,0)$);\n\t\t\\draw[thick, xpurple!50!black, fill=xpurple!45] (A) -- ($(A)+(1,0)$) arc (0:36.87:1) node [midway, xshift=-3mm, yshift=-2pt] {$\\alpha$} -- cycle;\n\t\t%\\draw[thick, xblue!50!black, fill=xblue!45] (B) -- ($(B)+(0,-0.8)$) arc (270:216.97:0.8) node [midway, above, xshift=5pt] {$\\beta$} -- cycle;\n\t\t\\draw[thick] (A) -- (B) -- (C) -- cycle;\n\t\\end{tikzpicture}\n\\flushleft\n\nWe use the ratios between the three sides of the triangle to define three functions of $\\alpha$:\n\\vspace{5mm}\n\\begin{itemize}\n\t\\item The \\emph{sine} of the angle $\\alpha$ is $\\sin(\\alpha)=\\frac{a}{c}$,\n\t\\item the \\emph{cosine} of the angle $\\alpha$ is $\\cos(\\alpha)=\\frac{b}{c}$, and\n\t\\item the \\emph{tangent} of the angle $\\alpha$ is $\\tan(\\alpha)=\\frac{a}{b}$, which in turn is equal to $\\frac{\\sin(\\alpha)}{\\cos(\\alpha)}$.\n\\end{itemize}\n\nWe can rearrange the above definitions:\n\\begin{align}\n\ta &= c\\sin(\\alpha),\\nonumber\\\\\n\tb &= c\\cos(\\alpha).\n\t\\label{eq:basic_trig_rearrange}\n\\end{align}\n\nNormally, the Hypotenuse is the longest side of a right triangle. We will consider here the two edge cases where one of the sides $a$ or $b$ is equal to the Hypotenuse (and the other side is thus $0$):\n\\begin{itemize}\n\t\\item if $a=c$ then $\\alpha=\\ang{90}$,\\\n\t\\item if $b=c$ then $\\alpha=\\ang{0}$.\n\\end{itemize}\n\nThe possible length of $a$ is therefore in the range $0\\leq a \\leq c$, which means that $0\\leq \\frac{a}{c} \\leq 1$. Since $\\sin(\\alpha)=\\frac{a}{c}$ this means that the image of $\\sin(\\alpha)$ is $[0,1]$. 
The same idea is also true for $b$, and therefore $[0,1]$ is the image of $\\cos(\\alpha)$ as well.\n\nAs a reminder, the \\emph{Pythagorean theorem}\\footnote{It's worth mentioning that no three positive integers $a, b$, and $c$ satisfy the equation $a^{n}+b^{n}=c^{n}$ for any integer value of $n>2$. \\href{https://en.wikipedia.org/wiki/Fermat\\%27s_Last_Theorem}{This can be proven, however the proof is too large to fit in the footnotes}.} states that for a right triangle with sides $a, b$ and $c$,\n\\begin{equation}\n\ta^{2} + b^{2} = c^{2}.\n\t\\label{eq:pythagorean_theorem}\n\\end{equation}\nBy substituting \\xref[eq]{basic_trig_rearrange} into the Pythagorean theorem we get\n\\begin{align*}\n\tc^{2} &= a^{2}+b^{2}\\\\\n\t&= \\left[ c\\sin(\\alpha) \\right]^{2} + \\left[ c\\cos(\\alpha) \\right]^{2}\\\\\n\t&= c^{2}\\sin^{2}(\\alpha) + c^{2}\\cos^{2}(\\alpha)\\\\\n\t&= c^{2}\\left[ \\sin^{2}(\\alpha) + \\cos^{2}(\\alpha) \\right],\n\\end{align*}\nand therefore\n\\begin{equation}\n\t\\sin^{2}(x) + \\cos^{2}(x) = 1.\n\t\\label{eq:sin sqr plus cos sqr equals 1}\n\\end{equation}\n\n\\subsection{The Unit Circle}\nWe defined $\\sin(\\alpha)$ and $\\cos(\\alpha)$ so far in a way such that their domains are both $[\\ang{0},\\ang{90}]$, and their images are both $[0,1]$. However, there is a simple way to extend these functions such that both their domains are $\\mathbb{R}$, and both their images are $[-1,1]$: by using a \\emph{unit circle}.\n\n\\autoref{fig:unit_circle} depicts a unit circle: it is simply a circle of radius $R=1$, which is placed such that its center lies at the origin of a 2-dimensional axis system (i.e. at the point $\\bm{O}=(0,0)$). We then draw a line from $\\bm{O}$ to a point $\\bm{P}=(x,y)$ on the circumference of the unit circle. We call the angle between the line $\\bm{OP}$ and the $x$-axis $\\theta$. We then draw another line, this time from the point $\\bm{P}$ to a point $\\bm{D}$ on the $x$-axis, such that $\\bm{PD}$ is perpendicular to the $x$-axis.\n\nThe triangle $\\triangle OPD$ is a right triangle. Therefore, we can use the trigonometric functions to calculate the coordinates of the point $\\bm{P}=(x,y)$:\n\\begin{align}\n\tx &= R\\cos(\\theta) = \\cos(\\theta),\\nonumber\\\\\n\ty &= R\\sin(\\theta) = \\sin(\\theta).\n\t\\label{eq:xy_P}\n\\end{align}\n\nWe then define $\\cos(\\theta)$ and $\\sin(\\theta)$ as functions of $\\theta$:\n\\begin{align}\n\t\\sin(\\theta) &= y,\\nonumber\\\\\n\t\\cos(\\theta) &= x.\n\t\\label{eq:unit circle definition of sin and cos}\n\\end{align}\n\nUsing this definition, the angle $\\theta$ can take any value between $\\ang{0}$ and $\\ang{360}$. In fact, the values of $\\theta$ can be extended to any real number in degrees: any real value of degrees is equivalent to some value in the range $[\\ang{0},\\ang{360}]$; the first and most obvious example is that $\\ang{360}$ is equivalent to $\\ang{0}$. Similarly, $\\ang{-30}\\equiv\\ang{330}$, $-\\ang{180}\\equiv\\ang{180}$, $-\\ang{90}\\equiv\\ang{270}$, and so on (see \\autoref{fig:angles equivalency}). 
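In general, two angles are equivalent exactly when they differ by a whole number of full turns:\n\\begin{equation*}\n\t\\theta \\equiv \\theta + k\\cdot\\ang{360} \\quad \\text{for any integer } k.\n\\end{equation*}\n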
In fact, this property makes the trigonometric functions periodic, with a period $T=\\ang{360}$.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}[every node/.style={font=\\small}]\n\t\t\\def\\R{3}\n\t\t\\draw[very thick, fill=xpurple!15] (0,0) circle (\\R);\n\t\t\\foreach \\th in {0, 45, 90, 135, 180, 200, 270, 330}{\n\t\t\t\\draw[ultra thick, densely dashed, white] (0,0) -- ({1.05*\\R*cos(\\th)},{1.05*\\R*sin(\\th)});\n\t\t\t\\draw[very thick] ({0.95*\\R*cos(\\th)},{0.95*\\R*sin(\\th)}) -- ({1.05*\\R*cos(\\th)},{1.05*\\R*sin(\\th)});\n\t\t\t\\node at ({1.075*\\R*cos(\\th)},{1.075*\\R*sin(\\th)}) (\\th) {};\n\t\t}\n\t\t\\fill[black] (0,0) circle (0.15);\n\t\t\\node[right] at (0) {$\\ang{0}\\equiv\\ang{360}\\equiv\\ang{720}\\equiv\\ang{-360}\\equiv\\dots$};\n\t\t\\node[right] at (45) {$\\ang{45}\\equiv\\ang{-315}\\equiv\\ang{405}\\equiv\\dots$};\n\t\t\\node[right] at (90) {$\\ang{90}\\equiv\\ang{-270}\\equiv\\ang{450}\\equiv\\dots$};\n\t\t\\node[left] at (135) {$\\ang{135}\\equiv\\ang{-225}\\equiv\\dots$};\n\t\t\\node[left] at (180) {$\\ang{180}\\equiv\\ang{-180}\\equiv\\ang{540}\\equiv\\dots$};\n\t\t\\node[left] at (200) {$\\ang{200}\\equiv\\ang{-160}\\equiv\\ang{560}\\equiv\\dots$};\n\t\t\\node[right] at (270) {$\\ang{270}\\equiv\\ang{-90}\\equiv\\dots$};\n\t\t\\node[right] at (330) {$\\ang{330}\\equiv\\ang{-30}\\equiv\\dots$};\n\t\\end{tikzpicture}\n\t\\caption{Equivalent angles on a circle.}\n\t\\label{fig:angles equivalency}\n\\end{figure}\n\n\\subsection{Radians}\nUsing degrees to measure angles on a circle creates an inconvenience: the domain and image of the trigonometric functions have different units. In order to measure both these magnitudes using the same unit we switch to measuring angles on a circle using \\emph{radians} instead of degrees. We define an inner angle $\\theta$ to equal one radian if the arc it subtends has length equal to the radius $R$ of the circle (in the case of the unit circle this is always $R=1$); see \\autoref{fig:radians}.\n\nHow much is a radian in degrees? The full circumference of any circle with radius $R$ equals $2\\pi R$, so a full turn of $\\ang{360}$ corresponds to $2\\pi$ radians, which means that a single radian is equivalent to $\\frac{\\ang{180}}{\\pi} \\approx \\ang{57.3}$. 
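As a short worked example of the conversion (using only the relation $\\ang{180}=\\pi$ radians):\n\\begin{equation*}\n\t\\theta_{\\mathrm{rad}} = \\theta_{\\mathrm{deg}}\\cdot\\frac{\\pi}{\\ang{180}}, \\qquad \\text{e.g. } \\ang{60}\\cdot\\frac{\\pi}{\\ang{180}} = \\frac{\\pi}{3} \\approx 1.047.\n\\end{equation*}\n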
\\autoref{tab:rad_degs} shows some common angles and their equivalent values in radians.\n\n\\begin{table}\n\t\\caption{Common angles in radians, and their respective images for the three main trigonometric functions.}\n\t\\label{tab:rad_degs}\n\t\\centering\n\t\\begin{tabular}{lcccc}\n\t\t\\toprule\n\t\t\\multicolumn{2}{c}{$\\theta$} & $\\sin(\\theta)$ & $\\cos(\\theta)$ & $\\tan(\\theta)$\\\\\n\t\tdegrees & radians & & &\\\\\n\t\t\\midrule\n\t\t\\ang{0}   & 0\t\t\t\t\t\t\t\t & $0$\t\t\t\t\t\t\t\t\t& $1$\t\t\t\t\t\t\t\t\t & $0$\\\\\n\t\t\\ang{30}  & $\\frac{\\pi}{6}$\t & $\\frac{1}{2}$\t\t\t\t& $\\frac{\\sqrt{3}}{2}$ & $\\frac{1}{\\sqrt{3}}$\\\\\n\t\t\\ang{45}  & $\\frac{\\pi}{4}$  & $\\frac{\\sqrt{2}}{2}$ & $\\frac{\\sqrt{2}}{2}$ & $1$\\\\\n\t\t\\ang{60}  & $\\frac{\\pi}{3}$  & $\\frac{\\sqrt{3}}{2}$ & $\\frac{1}{2}$\t\t\t\t & $\\sqrt{3}$\\\\\n\t\t\\ang{90}  & $\\frac{\\pi}{2}$  & $1$\t\t\t\t\t\t\t\t\t& $0$\t\t\t\t\t\t\t\t\t & undefined\\\\\n\t\t\\ang{180} & $\\pi$\t\t\t\t\t\t & $0$\t\t\t\t\t\t\t\t\t& $-1$\t\t\t\t\t\t\t\t & $0$\\\\\n\t\t\\ang{270} & $\\frac{3\\pi}{2}$ & $-1$\t\t\t\t\t\t\t\t\t& $0$\t\t\t\t\t\t\t\t\t & undefined\\\\\n\t\t\\ang{360} & $2\\pi$\t\t\t\t\t & $0$\t\t\t\t\t\t\t\t\t& $1$ \t\t\t\t\t\t\t\t & $0$\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}[scale=0.9]\n\t\t\\pgfmathsetmacro{\\ax}{4.5}\n\t\t\\pgfmathsetmacro{\\un}{3.5}\n\t\t\\pgfmathsetmacro{\\th}{35}\n\t\t\\coordinate (D) at ({\\un*cos(\\th)},0);\n\t\t\\coordinate (A) at (0,0); % centre of the circle, used by the draw commands below\n\n\t\t\\def\\xcol{xred}\n\t\t\\draw[thick, fill=\\xcol!5] (A) circle (\\un);\n\t\t\\draw[vector, <->] (-\\ax,0) -- (\\ax,0) node [right] {\\Large$x$};\n\t\t\\draw[vector, <->] (0,-\\ax) -- (0,\\ax) node [above] {\\Large$y$};\n\t\t\\draw[ultra thick, \\xcol, rotate=\\th] (A) -- node [midway, above, rotate=\\th] {$R=1$} (\\un,0) node (B) {};\n\t\t\\draw[very thick, densely dotted, black!50] (B.center) -- ({\\un*cos(\\th)},0);\n\t\t\\draw[thick] ($(A)+(1.1,0)$) arc (0:\\th:1.1) node [midway, xshift=-2mm, yshift=-2pt] {$\\theta$};\n\t\t\\draw[thick] ($(D)+(0,0.3)$) -- ++(-0.3,0) -- ++(0,-0.3);\n\n\t\t\\filldraw (A) circle (2pt) node[below left] {$(0,0)=\\bm{O}$};\n\t\t\\filldraw (\\un,0) circle (2pt) node[below right] {$(1,0)$};\n\t\t\\filldraw (0,\\un) circle (2pt) node[above right] {$(0,1)$};\n\t\t\\filldraw (-\\un,0) circle (2pt) node[below left] {$(-1,0)$};\n\t\t\\filldraw (0,-\\un) circle (2pt) node[below right] {$(0,-1)$};\n\t\t\\filldraw (D) circle (2pt) node[below, anchor=north east, xshift=2mm] {$(x,0)=\\bm{D}$};\n\t\t\\filldraw (B) circle (2pt) node[right, xshift=1mm, yshift=1mm] {$\\bm{P}=(x,y)$};\n\t\\end{tikzpicture}\n\t\\caption{A unit circle with a point $\\bm{P}=(x,y)$ on its circumference. 
The triangle $\\triangle \\bm{OPD}$ is a right triangle with legs $\\bm{OD}=x$ and $\\bm{DP}=y$, hypotenuse $\\bm{OP}=1$, and an angle $\\theta$ opposing the side $\\bm{DP}$.}\n\t\\label{fig:unit_circle}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}[node distance=5mm, every node/.style={font=\\large}]\n\t\t\\begin{axis}[\n\t\t\tunit circle,\n\t\t]\n\t\t\t\\def\\xcol{xpurple}\n\t\t\t\\def\\sr{0.3}\n\t\t\t\\def\\rad{57.3}\n\t\t\t\\coordinate (O) at (0,0);\n\t\t\t\\coordinate (A) at (1,0);\n\t\t\t\\coordinate (B) at ({cos(\\rad)},{sin(\\rad)});\n\t\t\t\\coordinate (C) at ({\\sr*cos(\\rad)},{\\sr*sin(\\rad)});\n\t\t\t\\coordinate (T) at ({\\sr/1.5*cos(\\rad/2)},{\\sr/1.5*sin(\\rad/2)});\n\t\t\t\\coordinate (L1) at ({0.8*cos(\\rad/2)},{0.8*sin(\\rad/2)});\n\t\t\t\\coordinate (L2) at ({0.35*cos(\\rad/2)},{0.35*sin(\\rad/2)});\n\t\t\t\n\t\t\t\\draw[very thick, fill=\\xcol!5] (A) arc (0:360:1);\n\t\t\t%\\draw[line width=5mm, xpurple!75] (A) arc (0:\\rad:1);\n\t\t\t\\def\\outershift#1{\\raisebox{1ex}}\n\t\t\t\\draw[very thick, fill=\\xcol!50, postaction={decorate, decoration={text along path, text align=center, text={|\\large\\outershift|arc length=$R$}}}] (O) -- node[midway, above, rotate=\\rad] {$R$} (B) arc (\\rad:0:1) -- node[midway, below] {$R$} cycle;\n\t\t\t\\def\\innershift#1{\\raisebox{-2.5ex}}\n\t\t\t\\draw[postaction={decorate, decoration={text along path, text align=center, text={|\\large\\innershift|$1$ radian}}}] (B) arc (\\rad:0:1);\n\t\t\t\\draw[very thick, fill=\\xcol!20] (O) -- (C) arc (\\rad:0:\\sr) -- cycle;\n\t\t\t\n\t\t\t\\addplot+[only marks, black] coordinates {(0,0) (1,0) ({cos(\\rad)},{sin(\\rad)})};\n\t\t\t\\node[below left of=O] {$\\bm{O}$};\n\t\t\t\\node[below right of=A] {$\\bm{A}$};\n\t\t\t\\node[above of=B] {$\\bm{B}$};\n\t\t\t\\node at (T) {$\\theta$};\n\t\t\t\\draw[vector, thin] (L1) -- (L2);\n\t\t\\end{axis}\n\t\\end{tikzpicture}\n\t\\caption{In this figure the arc $\\bm{AB}$ has the same length as the radii $\\bm{OA}$ and $\\bm{OB}$ (all are equal to $R$), and therefore $\\theta=1$ radian.}\n\t\\label{fig:radians}\n\\end{figure}\n\n\\subsection{Graphs}\nAs seen previously, the functions $\\sin(x)$ and $\\cos(x)$ are periodic, both having period $T=2\\pi$. Their graphs are depicted in \\autoref{fig:sin and cos graphs}. The value of $\\sin(x)$ is always equal to that of $\\cos\\left(x-\\frac{\\pi}{2}\\right)$: we say that the two functions have a \\emph{phase difference} of $\\pi/2$. The graph of $\\tan(x)$ is depicted in \\autoref{fig:tan graph}.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\t\t\tgraph2d,\n\t\t\t\twidth=15cm, height=4cm,\n\t\t\t\txmin=-10, xmax=10,\n\t\t\t\tymin=-1.2, ymax=1.2,\n\t\t\t\txtick={-9.425,-7.854,...,9.425,10.996},\n\t\t\t\txticklabels={$-3\\pi$, $-\\frac{5}{2}\\pi$, $-2\\pi$, $-\\frac{3}{2}\\pi$, $-\\pi$, $-\\frac{1}{2}\\pi$, , $\\frac{1}{2}\\pi$, $\\pi$, $\\frac{3}{2}\\pi$, $2\\pi$, $\\frac{5}{2}\\pi$, $3\\pi$},\n\t\t\t\tdomain=-10:10,\n\t\t\t]\n\t\t\t\\addplot[function, xred] {sin(deg(\\x))};\n\t\t\t\\addplot[function, xgreen] {cos(deg(\\x))};\n\t\t\\end{axis}\n\t\\end{tikzpicture}\n\t\\caption{The graphs of \\textcolor{xred}{\\bm{$\\sin(x)$}} and \\textcolor{xgreen}{\\bm{$\\cos(x)$}} for $x\\in[-10,10]$. 
Note how the graph of \\textcolor{xgreen}{$\\bm{\\cos(x)}$} is ``lagging'' behind the graph of \\textcolor{xred}{$\\bm{\\sin(x)}$} by $\\pi/2$.}\n\t\\label{fig:sin and cos graphs}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\begin{axis}[\n\t\t\t\tgraph2d,\n\t\t\t\twidth=15cm, height=8cm,\n\t\t\t\txmin=-10, xmax=10,\n\t\t\t\tymin=-4, ymax=4,\n\t\t\t\txtick={-9.425,-7.854,...,9.425,10.996},\n\t\t\t\txticklabels={$-3\\pi$, $-\\frac{5}{2}\\pi$, $-2\\pi$, $-\\frac{3}{2}\\pi$, $-\\pi$, $-\\frac{1}{2}\\pi$, , $\\frac{1}{2}\\pi$, $\\pi$, $\\frac{3}{2}\\pi$, $2\\pi$, $\\frac{5}{2}\\pi$, $3\\pi$},\n\t\t\t\tdomain=-10:10,\n\t\t\t\trestrict y to domain=-5:5,\n\t\t\t\tsamples=500,\n\t\t\t]\n\t\t\t\\addplot[function, xblue] {tan(deg(\\x))};\n\t\t\\end{axis}\n\t\\end{tikzpicture}\n\t\\caption{The graph of $\\tan(x)$ on the domain $[-3\\pi,3\\pi]$.}\n\t\\label{fig:tan graph}\n\\end{figure}\n\n\\subsection{Identities}\nThe following are some useful facts and connections between trigonometric functions:\n\\begin{itemize}\n\t\\item Pythagorean identity:\n\t\t\\begin{equation}\n\t\t\t\\sin^{2}(\\theta) + \\cos^{2}(\\theta) = 1\n\t\t\\end{equation}\n\t\n\t\\item Symmetry/Antisymmetry:\n\t\t\\begin{align}\n\t\t\t\\sin(-\\theta) &= -\\sin(\\theta).\\\\\n\t\t\t\\cos(-\\theta) &= \\cos(\\theta).\\\\\n\t\t\t\\tan(-\\theta) &= -\\tan(\\theta).\n\t\t\\end{align}\n\t\n\t\\item Tangent from sine and cosine:\n\t\t\\begin{equation}\n\t\t\t\\tan(\\theta)=\\frac{\\sin(\\theta)}{\\cos(\\theta)}.\n\t\t\\end{equation}\n\t\n\t\\item Phase between sine and cosine:\n\t\t\\begin{align}\n\t\t\t\\sin\\left(\\theta\\pm\\frac{\\pi}{2}\\right) &= \\pm\\cos(\\theta).\\\\\n\t\t\t\\cos\\left(\\theta\\pm\\frac{\\pi}{2}\\right) &= \\mp\\sin(\\theta).\n\t\t\\end{align}\n\t\n\t\\item Half-period shift:\n\t\t\\begin{align}\n\t\t\t\\sin(\\theta+\\pi) &= -\\sin(\\theta).\\\\\n\t\t\t\\cos(\\theta+\\pi) &= -\\cos(\\theta).\n\t\t\\end{align}\n\t\n\t\\item Angle sum:\n\t\t\\begin{align}\n\t\t\t\\sin(\\alpha\\pm\\beta) &= \\sin(\\alpha)\\cos(\\beta)\\pm\\cos(\\alpha)\\sin(\\beta).\\\\\n\t\t\t\\cos(\\alpha\\pm\\beta) &= \\cos(\\alpha)\\cos(\\beta)\\mp\\sin(\\alpha)\\sin(\\beta).\n\t\t\\end{align}\n\t\n\t\\item Double angle:\n\t\t\\begin{align}\n\t\t\t\\sin(2\\theta) &= 2\\sin(\\theta)\\cos(\\theta) = \\frac{2\\tan \\left( \\theta \\right)}{1+\\tan^{2} \\left( \\theta \\right)}.\\\\\n\t\t\t\\cos(2\\theta) &= 1-2\\sin^{2}(\\theta) = \\frac{1-\\tan^{2} \\left( \\theta \\right) }{1+\\tan^{2} \\left( \\theta \\right)}.\n\t\t\t\\label{eq:tan_double_angles}\n\t\t\\end{align}\n\t\n\t\\item Half angle:\n\t\t\\begin{align}\n\t\t\t\\sin\\left( \\frac{\\theta}{2} \\right) &= \\pm\\sqrt{\\frac{1-\\cos(\\theta)}{2}}.\\\\\n\t\t\t\\cos\\left( \\frac{\\theta}{2} \\right) &= \\pm\\sqrt{\\frac{1+\\cos(\\theta)}{2}}.\\\\\n\t\t\t\\tan\\left( \\frac{\\theta}{2} \\right) &= \\frac{\\sin(\\theta)}{1+\\cos(\\theta)}.\n\t\t\\end{align}\n\t\n\t\\item Product to sum:\n\t\t\\begin{align}\n\t\t\t\\sin(\\theta)\\sin(\\varphi) &= \\frac{1}{2}\\left[ \\cos(\\theta-\\varphi)-\\cos(\\theta+\\varphi) \\right].\\\\\n\t\t\t\\cos(\\theta)\\cos(\\varphi) &= \\frac{1}{2}\\left[ \\cos(\\theta-\\varphi)+\\cos(\\theta+\\varphi) \\right].\\\\\n\t\t\t\\sin(\\theta)\\cos(\\varphi) &= \\frac{1}{2}\\left[ \\sin(\\theta+\\varphi) + \\sin(\\theta-\\varphi) \\right].\\\\\n\t\t\t\\tan(\\theta)\\tan(\\varphi) &= \\frac{\\cos(\\theta-\\varphi)-\\cos(\\theta+\\varphi)}{\\cos(\\theta-\\varphi)+\\cos(\\theta+\\varphi)}.\n\t\t\t\\label{eq:trig product to 
sum}\n\t\t\\end{align}\n\n\t\\item Sum to product:\n\t\t\\begin{align}\n\t\t\t\\sin(\\theta)\\pm\\sin(\\varphi) &= 2\\sin\\left( \\frac{\\theta\\pm\\varphi}{2} \\right)\\cos\\left( \\frac{\\theta\\mp\\varphi}{2} \\right).\\\\\n\t\t\t\\cos(\\theta)+\\cos(\\varphi) &= 2\\cos\\left( \\frac{\\theta+\\varphi}{2} \\right)\\cos\\left( \\frac{\\theta-\\varphi}{2} \\right).\\\\\n\t\t\t\\cos(\\theta)-\\cos(\\varphi) &= -2\\sin\\left( \\frac{\\theta+\\varphi}{2} \\right)\\sin\\left( \\frac{\\theta-\\varphi}{2} \\right).\\\\\n\t\t\t\\tan(\\theta)\\pm\\tan(\\varphi) &= \\frac{\\sin(\\theta\\pm\\varphi)}{\\cos(\\theta)\\cos(\\varphi)}.\n\t\t\t\\label{eq:trig sum to product}\n\t\t\\end{align}\n\\end{itemize}\n\n\\subsection{Useful theorems}\nThe area $S$ of a triangle $\\triangle ABC$ can be calculated using the length $L$ of any side of the triangle (in this context called a \\emph{base}) and the height $h$ to its opposing vertex (see \\autoref{fig:area of a triangle}):\n\\begin{equation}\n\tS = \\frac{1}{2}Lh.\n\t\\label{eq:area of a triangle}\n\\end{equation}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\coordinate (A) at (-2,1);\n\t\t\\coordinate (B) at (1,3.5);\n\t\t\\coordinate (C) at (2,1);\n\t\t\\coordinate (D) at (B|-C);\n\t\t\\draw[very thick, fill=xblue!20] (A) node[left] {$A$} -- (B) node[above] {$B$} node[midway, above left] {$c$} -- node[midway, right] {$a$} (C) node[right] {$C$} -- cycle node[midway, below] {$b$};\n\t\t\\draw[thick, dashed] (B) -- (D) node[midway, left] {$h$};\n\t\t\\draw[thick] (D) -- ++(0.2,0) -- ++(0,0.2) -- ++(-0.2,0);\n\t\t\\pic[draw, thick, angle radius=11mm, angle eccentricity=0.7, \"$\\alpha$\"] {angle=C--A--B};\n\t\\end{tikzpicture}\n\t\\caption{The area of a triangle using the side $b$ as a base, and its corresponding height to the point $B$. The angle opposing the side $a$ is marked as $\\alpha$.}\n\t\\label{fig:area of a triangle}\n\\end{figure}\n\nThe triangle $\\triangle ABD$ (where $D$ is the foot of the height $h$) is a right triangle, $c$ being its hypotenuse. We can therefore infer the size of $h$ using $\\alpha$:\n\\begin{equation}\n\th = c\\sin(\\alpha).\n\t\\label{eq:first equation in law of sines}\n\\end{equation}\nSubstituting this back to \\autoref{eq:area of a triangle} yields that the area of the triangle is\n\\begin{equation}\n\tS = \\frac{1}{2}bc\\sin(\\alpha).\n\t\\label{eq:area using sin theta}\n\\end{equation}\nThere is nothing special about choosing the side $b$ as a base: we can also use $a$ or $c$ for the calculation. This will yield, respectively,\n\\begin{align}\n\tS &= \\frac{1}{2}ab\\sin(\\gamma),\\\\\n\tS &= \\frac{1}{2}ac\\sin(\\beta),\n\\end{align}\nwhere $\\beta$ is the angle opposing $b$ and $\\gamma$ is the angle opposing $c$. Since $S$ is the same in all cases, we simply multiply each of the area equations by $2$ and divide by $abc$, which yields\n\\begin{equation}\n\t\\frac{\\sin(\\alpha)}{a} = \\frac{\\sin(\\beta)}{b} = \\frac{\\sin(\\gamma)}{c},\n\t\\label{eq:law of sines}\n\\end{equation}\ni.e. in a triangle, the ratio between any side and the sine of its opposing angle is always the same no matter which side we choose. 
This theorem is called the \\emph{law of sines}.\n\n\\begin{example}{Law of sines}{}\n\tGiven the triangle $\\triangle ABC$ below, what are $\\beta$ and $b$?\n\n\t\\vspace{2em}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\coordinate (A) at (2,-1);\n\t\t\\coordinate (B) at (2,3);\n\t\t\\coordinate (C) at (-1,4);\n\t\t% a=3.15, b=5.83, c=4\n\t\t% \u03b1=30.79 \u03b2=108.67 \u03b3=40.54\n\t\t\\draw[very thick, fill=xpurple!40] (A) -- (B) node[midway, right] {$c$} -- node[midway, above right] {$a=3.15$} (C) -- cycle node[midway, below left] {$b$};\n\n\t\t% \"local bounding box\" gives the pic its label\n\t\t\\pic[local bounding box=alpha, draw, thick, angle radius=11mm, angle eccentricity=0.7, \"$\\alpha$\"] {angle=B--A--C};\n\t\t\\pic[local bounding box=beta, draw, thick, angle radius=7mm, angle eccentricity=0.6, \"$\\beta$\"] {angle=C--B--A};\n\t\t\\pic[local bounding box=gamma, draw, thick, angle radius=11mm, angle eccentricity=0.7, \"$\\gamma$\"] {angle=A--C--B};\n\n\t\t\\node[above right of=alpha, xshift=2mm] (alphatxt) {$\\ang{30.79}$};\n\t\t\\node[below left of=gamma, xshift=-2mm] (gammatxt) {$\\ang{40.54}$};\n\t\t\\draw[vector, thick] (alphatxt.south) to [out=-90, in=0] (alpha);\n\t\t\\draw[vector, thick] (gammatxt.north) to [out=90, in=180] (gamma);\n\t\\end{tikzpicture}\n\n\t\\vspace{1em}\n\t\\flushleft\n\tSince all angles in a triangle must add up to $\\ang{180}$,\n\t\\[\n\t\t\\beta=\\ang{180}-\\ang{30.79}-\\ang{40.54}=\\ang{108.67}.\n\t\\]\n\n\tUsing the law of sines,\n\t\\[\n\t\tb = \\frac{a}{\\sin(\\alpha)}\\cdot\\sin(\\beta) = \\frac{3.15}{\\sin(\\ang{30.79})}\\cdot\\sin(\\ang{108.67}) \\approx 5.83,\n\t\\]\n\tand\n\t\\[\n\t\tc = \\frac{a}{\\sin(\\alpha)}\\cdot\\sin(\\gamma) = \\frac{3.15}{\\sin(\\ang{30.79})}\\cdot\\sin(\\ang{40.54}) \\approx 4.\n\t\\]\n\\end{example}\n\n\\begin{note}{Ambiguity of solutions}{}\n\tThe above example reveals an issue that might arise due to the symmetrical nature of $\\sin(x)$ around $x=\\frac{\\pi}{2}$ ($\\ang{90}$): say we wanted to calculate $\\beta$ using the law of sines instead of by using $\\beta=\\ang{180}-\\alpha-\\gamma$. In this case we would solve the equation\n\t\\[\n\t\t\\frac{\\sin(\\alpha)}{a} = \\frac{\\sin(\\beta)}{b},\n\t\\]\n\twhich would result in $\\beta=\\arcsin\\left( \\frac{b\\sin(\\alpha)}{a}  \\right)=\\arcsin\\left( 0.95 \\right)$. However, two angles can fit this requirement: the sines of $\\ang{71.34}$ and $\\ang{108.67}$ are both equal to $0.95$! Therefore, we must be careful when using the law of sines and make sure we always choose values that make sense (e.g. such that all angles add up to $\\ang{180}$).\n\\end{note}\n\nOf course, the sine function is not unique in having its own named ``Law'': another useful theorem is the so-called \\emph{law of cosines} (also \\emph{al-Kashi's theorem}). This theorem states that given a triangle with sides $a,b,c$ and an angle $\\gamma$ opposing $c$,\n\\begin{equation}\n\tc^{2} = a^{2} + b^{2} - 2ab\\cos(\\gamma).\n\t\\label{eq:law of cosines}\n\\end{equation}\n\nMuch like the law of sines, the choice of angle does not matter, as long as we plug the correct sides into the equation: for $\\alpha$ and $\\beta$ being the angles opposing $a$ and $b$ respectively,\n\\begin{align}\n\ta^{2} &= b^{2} + c^{2} - 2bc\\cos(\\alpha),\\nonumber\\\\\n\tb^{2} &= a^{2} + c^{2} - 2ac\\cos(\\beta).\n\\end{align}\n\nIf the triangle in question is a right triangle then one of the angles is equal to $\\ang{90}$. Without loss of generality, let us assume that this is $\\gamma$. 
Since $\\cos(\\ang{90})=0$ we get that in the case of a right triangle\n\\begin{equation}\n\tc^{2} = a^{2} + b^{2},\n\t\\label{eq:pythagorean theorem from law of cosines}\n\\end{equation}\ni.e. we retrieve back the Pythagorean theorem.\n\n\\begin{example}{Law of cosines}{}\n\tCalculate all angles in the following triangle:\n\n\t\\vspace{2em}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\t\\coordinate (A) at (-3,0);\n\t\t\\coordinate (B) at (2,-1);\n\t\t\\coordinate (C) at (-1,3);\n\t\t% a=5 b=3.61 c=5.1\n\t\t% \u03b1=70.55 \u03b2=67.58 \u03b3=41.87\n\t\t\\draw[very thick, fill=xgreen!50] (A) -- (B) node[midway, below left] {$c=5.1$} -- node[midway, above right] {$a=5$} (C) -- cycle node[midway, above left] {$b=3.61$};\n\n\t\t% \"local bounding box\" gives the pic its label\n\t\t\\pic[local bounding box=alpha, draw, thick, angle radius=9mm, angle eccentricity=0.6, \"$\\alpha$\"] {angle=B--A--C};\n\t\t\\pic[local bounding box=beta, draw, thick, angle radius=11mm, angle eccentricity=0.7, \"$\\beta$\"] {angle=C--B--A};\n\t\t\\pic[local bounding box=gamma, draw, thick, angle radius=9mm, angle eccentricity=0.6, \"$\\gamma$\"] {angle=A--C--B};\n\t\\end{tikzpicture}\n\n\t\\vspace{1em}\n\t\\flushleft\n\tUsing the law of cosines:\n\t\\begin{align*}\n\t\t\\cos(\\gamma) &= \\frac{c^{2}-b^{2}-a^{2}}{-2ab} = \\frac{5.1^{2}-3.61^{2}-5^{2}}{-2\\cdot5\\cdot3.61} \\approx 0.33302\\Rightarrow \\gamma=\\ang{70.54},\\\\\n\t\t\\cos(\\beta) &= \\frac{b^{2}-a^{2}-c^{2}}{-2ac} = \\frac{3.61^{2}-5^{2}-5.1^{2}}{-2\\cdot5\\cdot5.1} \\approx 0.74466\\Rightarrow \\beta=\\ang{41.87},\\\\\n\t\t\\cos(\\alpha) &= \\frac{a^{2}-b^{2}-c^{2}}{-2cb} = \\frac{5^{2}-3.61^{2}-5.1^{2}}{-2\\cdot5.1\\cdot3.61} \\approx 0.38135\\Rightarrow \\alpha=\\ang{67.58}.\n\t\\end{align*}\n\\end{example}\n\n%\\begin{figure}\n%\t\\centering\n%\t\\begin{tikzpicture}\n%\t\t\\pgfmathsetmacro{\\ax}{4.5}\n%\t\t\\pgfmathsetmacro{\\un}{3.5}\n%\t\t\\pgfmathsetmacro{\\th}{35}\n%\t\t\\coordinate (D) at ({\\un*cos(\\th)},0);\n%\n%\t\t\\filldraw[xred!35] (A) -- (\\un,0) arc (0:90:\\un);\n%\t\t\\filldraw[xblue!35] (A) -- (0,\\un) arc (90:180:\\un);\n%\t\t\\filldraw[xgreen!35] (A) -- (-\\un,0) arc (180:270:\\un);\n%\t\t\\filldraw[xorange!35] (A) -- (0,-\\un) arc (270:360:\\un);\n%\t\t\\node at ({ \\un/2.5},{ \\un/2.5}) {\\Huge$1$};\n%\t\t\\node at ({-\\un/2.5},{ \\un/2.5}) {\\Huge$2$};\n%\t\t\\node at ({-\\un/2.5},{-\\un/2.5}) {\\Huge$3$};\n%\t\t\\node at ({ \\un/2.5},{-\\un/2.5}) {\\Huge$4$};\n%\t\t\n%\t\t\\draw[thick] (A) circle (\\un);\n%\t\t\\draw[vector, <->] (-\\ax,0) -- (\\ax,0) node [right] {\\Large$x$};\n%\t\t\\draw[vector, <->] (0,-\\ax) -- (0,\\ax) node [above] {\\Large$y$};\n%\t\t\\filldraw (A) circle (2pt) node[below right] {$(0,0)$};\n%\t\t\\filldraw (\\un,0) circle (2pt) node[below right] {$(1,0)$};\n%\t\t\\filldraw (0,\\un) circle (2pt) node[above right] {$(0,1)$};\n%\t\t\\filldraw (-\\un,0) circle (2pt) node[below left] {$(-1,0)$};\n%\t\t\\filldraw (0,-\\un) circle (2pt) node[below right] {$(0,-1)$};\n%\t\\end{tikzpicture}\n%\t\\caption{The different quadrants of the unit circle.}\n%\t\\label{fig:unit_circle_quadrants}\n%\\end{figure}\n\n%\\begin{table}\n%\t\\caption{Text}\n%\t\\label{tab:quadrants_trig_vals}\n%\t\\centering\n%\t\\begin{tabular}{lll}\n%\t\t\\toprule\n%\t\tQuadrant & $\\cos(\\theta)=x$ & $\\sin(\\theta)=y$\\\\\n%\t\t\\midrule\n%\t\t\\rowcolor{xred!35}1 & $\\left[ 0,1 \\right]$ & $\\left[ 0,1 \\right]$\\\\\n%\t\t\\rowcolor{xblue!35}2 & $\\left[-1,0 \\right]$ & $\\left[ 0,1 \\right]$\\\\\n%\t\t\\rowcolor{xgreen!35}3 & $\\left[-1,0 
\\right]$ & $\\left[-1,0 \\right]$\\\\\n%\t\t\\rowcolor{xorange!35}4 & $\\left[ 0,1 \\right]$ & $\\left[-1,0 \\right]$\\\\\n%\t\t\\bottomrule\n%\t\\end{tabular}\n%\\end{table}\n\n", "meta": {"hexsha": "95aab00c56e79c68a000d84ac49df50b339a0e94", "size": 25861, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/intro/trigonomentry.tex", "max_stars_repo_name": "JASory/maths_book", "max_stars_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2021-12-25T20:02:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T17:57:59.000Z", "max_issues_repo_path": "chapters/intro/trigonomentry.tex", "max_issues_repo_name": "JASory/maths_book", "max_issues_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2022-01-17T05:01:10.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-20T06:18:24.000Z", "max_forks_repo_path": "chapters/intro/trigonomentry.tex", "max_forks_repo_name": "JASory/maths_book", "max_forks_repo_head_hexsha": "b5fdd19b09e97697f287f5ca83e0d9133b704789", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2022-01-17T10:15:18.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-02T10:45:13.000Z", "avg_line_length": 46.9346642468, "max_line_length": 583, "alphanum_fraction": 0.6213603496, "num_tokens": 9988, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8438951025545426, "lm_q1q2_score": 0.5845171773910699}}
{"text": "\n\\subsection{Estimating infinite state Markov chains}\n\nWe can represent the transition matrix as a series of rules to reduce the number of dimensions\n\n\\(P(x_t |y_{t-1})=f(x,y)\\)\n\n\n\ncan represent states as number, rather than atomic. could be continuous, or even real.\n\nin more complex, can use vectors.\n", "meta": {"hexsha": "5db60cfc27c841abd3b65cf12ebb1382acfcd34a", "size": 303, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/markov/01-02-MC_infinite.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/markov/01-02-MC_infinite.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/markov/01-02-MC_infinite.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.3076923077, "max_line_length": 94, "alphanum_fraction": 0.7557755776, "num_tokens": 73, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8438950947024555, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5845171773065518}}
{"text": "%!TEX root = ../CombinatoricsNotes.tex\n\n\\section{Compression and Shadows} %\\marginnote{Lecture 7: Wednesday, January 27, 2016.}\n\\lect{1}{27}\nAs an aside, let's consider Steiner symmetrization\\sidenote{\\cite{steiner1838einfache}}, a technique in geometry.\nGiven a shape $K\\subset \\R^2$, with a line of ``symmetry'' $L$, we obtain $S_L(K)$ from $K$ by replacing $K\\cap L'$ for every line $L'$ orthogonal to $L$ by an interval of length equal to $|K\\cap L'|$ centered at $L$.\n\\paragraph{Properties:}\n\\begin{enumerate}\n\t\\item $\\area(S_L(K)) = \\area(K)$\n\t\\item $\\diam(S_L(K)) \\leq \\diam (K)$ \\marginnote{The diameter is the largest distance between two points on $K$. We may prove this by drawing trapezoids between two lines $L'$ and $L''$, one of which passes through each point (close to) achieving the diameter.}\n\t\\item $\\perim(S_L(K)) \\leq \\perim(K)$.\n\\end{enumerate}\nGiven a shape in $\\R^2$ with area 1, what is the smallest diameter? That's the diameter of the area 1 disc. Sketch of proof: take the shape acheiving minimum diameter. By a compactness argument, we could show that if we ``repeatedly'' symmetrize, eventually it is symmetric across every line.\n\nIf some bounded shape is symmetric with respect to reflection about three lines $L_1, L_2,$ and $L_3$, then $L_1,L_2,L_3$ all go through the same point. Why? The center of mass must lie on each line.\nThis will show us that we have a disk. Let us consider a discrete to this symmetrization process: \\defn{compression}.\nFor $A\\in [n]^{(r)}$, set\n\\[\nR_{ij}(A) = \\begin{cases}\n(\\A\\setminus \\{j\\})\\cup \\{i\\}, & \\text{if } j\\in A, i\\not \\in A\\\\\nA, & \\text{otherwise}.\n\\end{cases}\n\\]\nFor example,\n\\begin{gather*}\t\nR_{15}(\\{2,3,5\\}) = \\{1,2,3\\}, \\qquad\nR_{15}(\\{2,3,4\\}) = \\{2,3,4\\},\\\\\nR_{15}(\\{1,3,5\\}) = \\{1,3,5\\}.\n\\end{gather*}\nLet $\\tilde{R}_{ij}(\\A) = \\{R_{ij}(A): A\\in \\A\\} \\cup \\{A: R_{ij}(A)\\in \\A \\}$. \\marginnote{Intuition: $\\tilde{R}_{ij}$ applies $R_{ij}$ unless the resulting set is already in $\\A$, to prevent collapse.}\n\n\\paragraph{Properties:}\n\\begin{enumerate}\n\t\\item $|\\tilde{R}_{ij}(\\A)| = |\\A|$.\n\n\tLet $P_{ij} \\subset [n]^{(r)}$ denote the collection of all sets containing $j$ but not $i$. Then $R_{ij}: P_{ij} \\to P_{ji}$  is bijection.\n\\end{enumerate}\n\n\n\\noindent We will say $\\A$ is \\defn{compressed} if $\\tilde{R}_{ij}(\\A)= \\A$ for all $i<j$.\n\\begin{enumerate}\\setcounter{enumi}{1}\n\\item Any set system $\\A$ can be made compressed by applying finitely many compression operators $\\tilde{R}_{ij}$, for $i<j$.\n\nLet $w(A) = \\sum_{i\\in A} i$, and $w(\\A) = \\sum_{A\\in \\A} w(A)$. Then $w(R_{ij}(A)) \\leq w(A)$ with equality iff $R_{ij}(A)=A$. Therefore, $w(\\tilde{R}_{ij}(\\A))\\leq w(\\A)$ with equality iff $\\tilde{R}_{ij}(\\A) = \\A$. Therefore, the process must stop, because we start with finite integer weight and the weight may only decrease (and may not be negative).\n\\end{enumerate}\n\n\nIn human terms, a compressed set system is one where if we try to replace any element of a set with a smaller element, we end up with something already in the set system.\nLet us define a natural partial order on $[n]^{(r)}$. 
If $A = \\{a_1,\\dotsc,a_r\\}$ with $a_1 < a_2 < \\dotsb < a_r$, and $B = \\{b_1,\\dotsc,b_r\\}$ with $b_1<\\dotsb < b_r$, then we say $A\\preceq B$ if $a_i\\leq b_i$ for all $i=1,\\dotsc,r$.\n\n\n\\begin{enumerate} \\setcounter{enumi}{2}\n\\item If $i<j$, then $R_{ij}(A) \\preceq A$, and conversely if $A' \\preceq A$, then $A'$ can be obtained from $A$ by using finitely many compression operators $R_{ij}$ for $i<j$.\n\n\\end{enumerate}\nTherefore, $\\A$ is compressed if and only if for every $A\\prec B$ with $B\\in \\A$, we have that $A\\in \\A$. In other words, $\\A$ is an ideal in this order. See \\cref{fig:compressed_sets_partial_order} for an example.\n\\begin{marginfigure}\n\\begin{center}\n\\begin{tikzcd}\n& \\boxed{\\{1,2\\}} \\arrow{d}\\\\\n&\\boxed{\\{1,3\\}} \\arrow{ld} \\arrow{rd}\\\\\n\\boxed{\\{1,4\\}} \\arrow{d} \\arrow{rrd}  & &\\boxed{\\{2,3\\}} \\arrow{d}\\\\\n\\boxed{\\{1,5\\}} \\arrow{d}&& \\boxed{\\{2,4\\}}\\arrow{d} \\arrow{lld}\\\\\n\\{2,5\\} \\arrow{rd} && \\boxed{\\{3,4\\}} \\arrow{ld}\\\\\n&\\{3,5\\} \\arrow{d}\\\\\n&\\{4,5\\}\n\\end{tikzcd}\n\\end{center}\n\\caption{The partial order on $[5]^{(2)}$, where $A\\rightarrow B$ means $A \\prec B$, and different sets in the same row are incomparable. The boxed elements together form a compressed set.} \\label{fig:compressed_sets_partial_order}\n\\end{marginfigure}\n\n\\begin{enumerate}\\setcounter{enumi}{3}\n\t\\item  If $\\A$ is intersecting, then $\\tilde{R}_{ij}(\\A)$ is intersecting.\n\tSuppose not. Then there exists $A,B\\in \\tilde{R}_{ij}(\\A)$ such that $A\\cap B= \\emptyset$. If both $A,B\\in \\A$, then they are intersecting, so let $A\\not \\in \\A$. Then $A = R_{ij}(A')$ for some $ A'\\in \\A$ with $j\\in A'$. Now, if $B \\not \\in \\A$ too, then $B = R_{ij}(B')$ for some $B'\\in \\A$ with $j\\in B'$, and we'd have $i\\in A\\cap B$, which is a contradiction. So $B\\in \\A$ with $i\\not\\in B$.  If $B= R_{ij}(B)$, then $R_{ij}(B) = B \\in \\A$. Otherwise, we have $j \\in B$, and we must  have $B \\neq R_{ij}(B')$ for every $B'\\in \\A$ (for   $B\\in P_{ij}$, while the image of $R_{ij}$ is  $P_{ji}$ which is disjoint from $P_{ij}$). In this case then, since $B\\in \\tilde{R}_{ij}(\\A)$, we must then have $R_{ij}(B)\\in \\A$ too. So in either case $R_{ij}(B), A' \\in \\A$, which is intersecting, so $R_{ij}(B)\\cap A' \\neq \\emptyset$.\n\n\t% Now, $k\\neq j$ has $k\\in A'\\cap B$, then  then $k \\in R_{ij}(A') = A$, so $k\\in A\\cap B$, which is a contradiction. On the other hand, $A'\\cap B\\neq \\emptyset$ since $A',B\\in \\A$ which is intersecting. So $A'\\cap B = \\{j\\}$, and thus $j\\in B$. So $B\\in P_{ij}$. So $R_{ij}(B) \\neq B$, and $B\\in \\tilde{R}_{ij}(\\A)$, so $B$ is such that $R_{ij}(B) \\in \\A$ (otherwise $B$ would be the non-trivial image of some $B'$ under $R_{ij}$, which leads to a contradiction as showed earlier). Thus $R_{ij}(B)\\cap A' \\neq \\emptyset$.\n\n\n\n\t%   So $A',B\\in \\A$, and thus $A'\\cap B\\neq \\emptyset$.\n\n\n\n\t% Suppose not. Then there exists $A,B\\in \\tilde{R}_{ij}(\\A)$ such that $A\\cap B = \\emptyset$. We may assume that $\\A \\not \\ni A = R_{ij}(A')$ for some $A'\\in \\A$, and $B\\in \\A$\\sidenote{Otherwise $i\\in B$, so $A\\cap B \\ni i$.}, and $A'\\cap B = \\{j\\}$. So $B\\in P_{ij}$, and hence was unchanged by $R_{ij}$, \n\n\n\t% so $R_{ij}(B)\\in \\A$. 
Then $A'\\cap R_{ij}(B) \\neq \\emptyset$, since $\\A$ is intersecting.\n\n\tBut $A'$ is obtained from $A$ by changing $j$ to $i$, and $R_{ij}(B)$ is obtained from $B$ by changing $j$ to $i$, so if $A'\\cap R_{ij}(B) \\neq \\emptyset$, then we have $A\\cap B\\neq \\emptyset$.\n\n\n\n\\end{enumerate}\n\n\\begin{proof}[Proof of \\erdos-Ko-Rado theorem by compression]\nWe wish to show that if $r\\leq n/2$, $\\A\\subset [n]^{(r)}$ is intersecting, then\n\\[\n|\\A| \\leq {n-1 \\choose r-1}.\n\\]\nWe will proceed by induction on $n$ and $r$. The case $r=n/2$ is easy: ${n-1\\choose r-1} = \\frac{1}{2}{n\\choose r}$, and sets in $[n]^{(r)}$ come in complementary pairs, of which an intersecting family can contain at most one from each pair.\n\nAssume $r<n/2$. By the facts we have proven, we may assume $\\A$ is compressed. Let\n\\begin{align*}\n\\A_0 &= \\{A\\in \\A: n\\not\\in \\A\\}, & \\A_1 &= \\{A\\in \\A: n\\in A\\}.\n\\end{align*}\nThen by the IH,\n\\[\n|\\A| = |\\A_0| + |\\A_1| \\leq {n-2 \\choose r-1} + |\\A_1|.\n\\]\nIf $|\\A_1| \\leq {n-2 \\choose r-2}$, then\\sidenote{using ${n-2 \\choose r-1} + {n-2 \\choose r-2} = {n-1\\choose r-1}$} we have $|\\A| \\leq {n-1\\choose r-1}$.\nLet \n\\[\n\\A_1 \\setminus \\{n\\} := \\{A-\\{n\\}: A\\in \\A_1\\}.\n\\]\n If $\\A_1\\setminus \\{n\\}$ is intersecting, then $|\\A_1\\setminus \\{n\\}| \\leq {n-2 \\choose r-2}$ by the IH, since $\\A_1\\setminus \\{n\\} \\subset [n-1]^{(r-1)}$. \nSuppose $\\A_1\\setminus \\{n\\}$ is not intersecting. Then there exists $A,B\\in \\A$ such that \n\\[\nA\\cap B = \\{n\\}.\n\\]\nBecause $2r<n$, there exists $i<n$ such that $i\\not \\in A\\cup B$. Then $R_{in}(B) \\in \\A$ since $\\A$ is compressed.\nBut $A\\cap R_{in}(B) = \\emptyset$, contradicting that $\\A$ is intersecting. \n\\end{proof}\n\nLet $\\A\\subset [n]^{(r)}$, with $|\\A| = m$. What is $\\min |\\partial \\A|$? \\marginnote{Recall: \n\\[\n\\partial \\A = \\{B \\in [n]^{(r-1)}: B\\subset A \\text{ for some } A\\in \\A\\}.\n\\]}\nWe know the local LYM inequality:\n\\begin{equation*}\t\\tag{\\cref{eq:local_LYM}}\n\\frac{|\\partial \\A|}{{n\\choose r-1}}\\geq \\frac{|\\A|}{{n\\choose r}}\n\\end{equation*}\nbut this is rarely tight.\n\nLet's consider $m=1$, that is, $|\\A|=1$. Then the answer is $r= {r\\choose r-1}$. For $m=2$, the answer is $2r-1$, because the two shadows can share at most one set.\nFor $r=2$, we have a simple graph with $m$ edges. What is the minimal number of vertices? We want the minimal $n$ so that $m\\leq {n \\choose 2}$.\n\\lect{2}{1}\n% \\marginnote{Lecture 8: Monday, February 1, 2016.}\nLet us change our question to consider $\\A\\subset \\N^{(r)}$. Given $r$ and $m$, what is $\\min |\\partial \\A|$ given $\\A\\subset \\N^{(r)}$ with $|\\A| = m$?\n\n\\begin{lemma} \\label{lem:compression_decreases_size_of_shadow}We have that\n\\[\n|\\partial \\tilde{R}_{ij}(\\A)| \\leq |\\partial \\A|\n\\]\nfor every $\\A\\subset \\N^{(r)}$.\n\\end{lemma}\n\\begin{proof}\t\n\nIt suffices to show that\n\\marginnote{ $|X|\\geq |Y| \\iff |X\\setminus Y| \\geq |Y\\setminus X| $ because $|X| = |X\\setminus Y| + |X \\cap Y|$ and $|Y| = |Y\\setminus X| + |X\\cap Y|$. }\n\\[\n|\\partial \\tilde{R}_{ij}(\\A)\\setminus(\\partial \\A)| \\leq | \\partial \\A\\setminus(\\partial \\tilde{R}_{ij}(\\A))|.\n\\]\nIf $B\\in \\partial \\tilde{R}_{ij}(\\A)\\setminus (\\partial \\A)$, then $i\\in B$ but $j\\not\\in B$. Let's see why this is true. Let $A\\in  \\tilde{R}_{ij}(\\A)\\setminus (\\A)$ such that $B = A\\setminus\\{k\\}$ for some $k$. Then $i\\in A$, $j\\not \\in A$ (and so $j\\not \\in B$ too). Now, if $i\\not \\in B$, then $k=i$. 
But then $B = R_{ji}(A)\\setminus\\{j\\}$ so $B\\in \\partial \\A$, a contradiction.\n\n\n\nIt is enough to show that\n\\[\nR_{ji}(B)\\in \\partial \\A\\setminus(\\partial \\tilde{R}_{ij}(\\A)).\n\\]\nClearly, $R_{ji}(B)\\in \\partial \\A$, since $B= A\\setminus\\{k\\}$ with $k\\neq i,j$, so $R_{ji}(B) = R_{ji}(A)\\setminus\\{k\\}$.\n\nThe last piece to check is that $R_{ji}(B)\\not\\in \\partial \\tilde{R}_{ij}(\\A)$. If that were the case, then $R_{ji}(A)\\setminus\\{k\\} \\subset C$ for some $C \\in \\tilde{R}_{ij}(\\A)$. Since $j\\in R_{ji}(A)$ and $k\\neq j$, we have $j\\in C$. Then $R_{ij}(C) \\in \\A$. But then $B=A\\setminus \\{k\\} \\subset R_{ij}(C) \\in \\A$, so $B \\in \\partial \\A$, a contradiction.\n% Assume $B= C \\setminus \\{\\ell\\} \\in \\partial \\tilde{R}_{ij}(\\A)$ for some $C \\in \\tilde{R}_{ij}(\\A)$. Then $C\\setminus\\{\\ell\\} = R_{ji}(A)\\setminus \\{k\\}$.\n% \\understand\n\\end{proof}\nThis lemma shows that for the purposes of minimizing $|\\partial \\A|$, we may start with a compressed set.\n\n\\newthought{We defined a partial order} on $\\N^{(r)}$ by: if $A = \\{a_1,\\dotsc,a_r\\}$ with $a_1<a_2<\\dotsb < a_r$ and $B=\\{b_1,\\dotsc,b_r\\}$ with $b_1<b_2<\\dotsb < b_r$, then $A\\preceq B$ if $a_i\\leq b_i$ for $i\\in[r]$.\nWe may define the \\defn{lexicographic order} on $\\N^{(r)}$. We say $A\\leq_L B$ if ($a_1<b_1$) or ($a_1=b_1$ and $a_2 < b_2$) or $(a_1 = b_1$ and $a_2=b_2$, but $a_3<b_3)$, and so on. That is, either $A=B$, or if $s$ is the minimal index such that $a_s\\neq b_s$, then $a_s<b_s$. See \\cref{fig:53_lexographic_order} for an example.\n\\begin{marginfigure}\n\\begin{tikzcd}[column sep=small]\n\\boxed{\\{1,2,3\\}} \\arrow{r} & \n\\boxed{\\{1,2,4\\}} \\rar \\snakesetup &\n\\{1,2,5\\} \\snakearrow{lld}  \\\\\n\\boxed{\\{1,3,4\\}} \\rar &\n\\{1,3,5\\} \\rar \\snakesetup&\n\\{1,4,5\\}  \\snakearrow{lld}\\\\\n\\boxed{\\{2,3,4\\}} \\rar \\snakesetup &\n\\{2,3,5\\}  \\rar   &\n\\{2,4,5\\}   \\snakearrow{lld}\\\\\n\\{3,4,5\\}\n\\end{tikzcd}\n\n\\caption{Elements of $[5]^{(3)}$ in lexicographic order, where $A\\imp B$ means $A\\leq_L B$. The subsets of $[4]^{(3)}$ are boxed; they appear in order, but not next to each other.} \\label{fig:53_lexographic_order}\n\\end{marginfigure}\n\nConsider the lexicographic order on $\\N^{(3)}$. What is the 100th smallest set in this order? $\\{1,2,102\\}$.\n\n\nNow, let us define the \\defn{colexicographic order} on $\\N^{(r)}$. Define $A\\leq B$ if $a_r<b_r$ or ($a_r=b_r$ and $a_{r-1} <b_{r-1}$), and so on. We may write this as $A=B$ or if $s$ is the maximal index such that $a_s\\neq b_s$, then $a_s<b_s$. \n\n\n\n% \\begin{fullwidth}\n\n\\begin{figure*}[ht]\n% \\makeatletter\\setlength\\hsize{\\@tufte@fullwidth}\\setlength\\linewidth{\\@tufte@fullwidth}\\let\\caption\\@tufte@orig@caption\\makeatother\n\\centering\n\n\\begin{tikzcd}[column sep=small]\n[3]^{(3)} &\\{1,2,3\\} \\arrow{d} \\\\\n {[4]^{(3)}}\\setminus[3]^{(3)} & \\{1,2,4\\} \\arrow{r}& \\{1,3,4\\} \\snakesetup \\arrow{r}& \\{2,3,4\\} \\snakearrow{lld}\\\\\n{[5]^{(3)}}\\setminus[4]^{(3)} & \\{1,2,5\\} \\arrow{r} & \\{1,3,5\\}\\arrow{r} & \\{2,3,5\\} \\arrow{r}& \\{1,4,5\\} \\arrow{r}& \\{2,4,5\\}\\arrow{r} & \\{3,4,5\\}\n\\end{tikzcd}\n\\bigskip\n\\caption{Elements of $[5]^{(3)}$ in colexicographic order. Note that $[n]^{(r)}$ is an initial segment of the colex order on $\\N^{(r)}$. 
\\label{fig:53_colexographic_order}} \n\\end{figure*}\n% \\end{fullwidth}\n\n\\begin{theorem}[\\cite{kruskal1963,katona1968}] \\label{thm:kruskal_katona}\nIf $\\A\\subset \\N^{(r)}$ with $|\\A| = m$, then $|\\partial \\A|$ is at least as large as the size of the shadow of the first $m$ elements of $\\N^{(r)}$ in colexicographic order.\n\\end{theorem}\n\\begin{example}\nIf $A= \\{10,7,3\\}$ what is the position of $A$ in the colex order on $\\N^{(3)}$? In the rows\\sidenote{Referring to an analogous diagram to \\cref{fig:53_colexographic_order}.} before $A$, there will be $|[9]^{(3)}| ={9\\choose 3}$ elements.  In the row containing $A$, there are sets of the form $\\{10,x,*\\}$ with $x<7$, i.e. ${6\\choose 2}$ sets. Next, there are sets of the form $\\{10,7,y\\}$ with $y<3$, i.e. ${2\\choose 1}$ sets. Finally, we have our set, $\\{10,7,3\\}$. In total then,\n\\[\n{9\\choose 3} + {6\\choose 2} + {2\\choose 1} + 1 = 102.\n\\]\n\nSee \\cref{fig:colex_near_A} for a diagram.\n\\begin{figure*}[ht]\n\\begin{center}\n\\begin{tikzcd}[column sep=tiny]\n  \\{10,6,5\\} \\rar&\\{10,7,1\\} \\rar &\\{10,7,2\\} \\rar &\\{10,7,3\\} \\rar  & \\{10,7,4\\}  \\rar & \\{10,7,5\\}\\rar   & \\{10,7,6\\}  \\rar & \\{10,8,1\\}.\n\\end{tikzcd}\n\\caption{The colex order near $A=\\{10,7,3\\} \\subset \\N^{(3)}$. The notation $A\\imp B$ means $A\\leq B$ in colex order. Since $A$ is the 102nd set in colex order on $\\N^{(3)}$, $\\{10,7,1\\}$ is the 100th set which provides a nice comparison to $\\{1,2,102\\}$, the 100th set in lexicographic order. In particular, we note that in colex order, every set is finitely many positions away from the smallest set, while in lexicographic order many sets are infinitely far (like $\\{1,3,4\\}$).}\\label{fig:colex_near_A}\n\\end{center}\n\n\\end{figure*}\n% \\improvement{Put captions of full-width figures below the figure in the main column.}\n\\end{example}\n\n\n% \\begin{definition}\nFor $A\\in \\N^{(r)}$, let the \\defn{initial segment of $A$} be $I(A) =\\{B: B\\leq A\\}$ and let $i(A) = |I(A)|$. \n% \\end{definition}\n\\begin{lemma}\nIf $A = \\{a_1,\\dotsc,a_r\\}$ with $a_r>a_{r-1}>\\dotsb> a_1$ then \n\\[\ni(A) = {a_{r}-1\\choose r} + {a_{r-1}-1 \\choose r-1} + \\dotsb + {a_1-1 \\choose 1} + 1.\n\\]\n\\end{lemma}\n\\begin{proof}\t\n${a_{k} -1\\choose k}$ counts the sets $B < A$ which coincide with $A$ over \n\\[\n a_r,a_{r-1},\\dotsc,a_{k+1}\n\\]\n  but whose $k$th largest element is smaller.\n\\end{proof}\n\n\\begin{lemma} We have that the shadow of the initial segment $I(A)$ is the initial segment of $A\\setminus\\{\\min A\\}$. That is,\n\\[\n\\partial I(A) = I( A\\setminus \\{\\min A\\}).\n\\]\n\\end{lemma}\n\\begin{proof}\t\nIf $A = \\{a_r,\\dotsc,a_2,a_1\\}$, then we wish to show that $B\\in \\N^{(r-1)}$ is in $\\partial I(A)$ if and only if $B \\leq \\{a_r,\\dotsc,a_2\\} = A\\setminus \\{\\min A\\}$ in colex order. This is left as an exercise. 
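As a quick sanity check: for $A=\\{2,4\\}\\in\\N^{(2)}$ we have $I(A) = \\{\\{1,2\\},\\{1,3\\},\\{2,3\\},\\{1,4\\},\\{2,4\\}\\}$, whose shadow is $\\{\\{1\\},\\{2\\},\\{3\\},\\{4\\}\\} = I(\\{4\\})$ in $\\N^{(1)}$, exactly as the lemma predicts. 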
One way to proceed is by induction on $r$, splitting into cases where $B$ contains $a_r$ or not.\n% \\add{proof.}\n\\end{proof}\n\n\\begin{corollary}\nIf  \n\\[\n i(A) = {a_r-1\\choose r} + {a_{r-1}-1 \\choose r-1} + \\dotsb + {a_1-1\\choose 1} + 1,\n \\]\nthen \n\\[\n|\\partial (I(A))| = {a_r -1\\choose r-1} + {a_{r-1} -1 \\choose r-2} + \\dotsb + {a_2-1 \\choose 1} + 1.\n\\]\n\\end{corollary}\n\nLet us restate \\cref{thm:kruskal_katona}:\n\\begin{theorem*}[Kruskal--Katona]\nLet $\\A\\subset \\N^{(r)}$ with \n\\[\n|\\A| = {a_r-1\\choose r} + {a_{r-1} -1\\choose r-1} + \\dotsb + {a_1-1\\choose 1} + 1\n\\]\nfor some $a_r>\\dotsb>a_1$. Then\n\\[\n|\\partial \\A| \\geq {a_r -1\\choose r-1} + {a_{r-1} -1 \\choose r-2} + \\dotsb + {a_2-1 \\choose 1} + 1.\n\\]\n\\end{theorem*}\n\\begin{remark}\nFor every positive integer $m$, we may write $m$ as \n\\[\nm={a_r-1\\choose r} + {a_{r-1}-1 \\choose r-1} + \\dotsb + {a_1-1\\choose 1} + 1\n\\]\nfor some $a_r> a_{r-1} > \\dotsb > a_1$. We have shown this already by considering the $m$th element in colex order of $\\N^{(r)}$.\n\\end{remark}\n\n\\begin{proof}[Proof by induction on $r$; for fixed $r$, by induction on $|\\A|$.] The base case $r=1$ is trivial. For the induction step, we will assume that $\\A$ is compressed, by \\cref{lem:compression_decreases_size_of_shadow}. Let\n\\begin{align*}\t\n\\A_1 :\\!&= \\{A\\in \\A: 1\\in A\\}, & \\A_0 :\\!&= \\A\\setminus \\A_1.\n\\end{align*}\n\n\\begin{enumerate}[{Claim} 1:]\n\\item $|\\A_1| \\geq |\\partial \\A_0|$.\\marginnote{Compression pushes us towards smaller elements, so $\\A_1$ should be large.}\n\n\\begin{subproof}\t\nThe map $A\\mapsto A\\cup\\{1\\}$ is an injection from $\\partial \\A_0$ to $\\A_1$, by compression.\n\\end{subproof}\n\t\\item\n\t\\[\n\t|\\A_1|\\geq {a_r-2\\choose r-1} + {a_{r-1}-2\\choose r-2} + \\dotsb + {a_2-2\\choose 1} + 1.\n\t\\]\n\t\\begin{subproof}Assume not. Since $|\\A_0| =|\\A| - |\\A_1|$, we then have\n\t\\begin{align*}\t\n\t|\\A_0|&> \\left( {a_r-1\\choose r} - {a_r-2\\choose r-1} \\right) + \\left( {a_{r-1}-1\\choose r-1} - {a_{r-1}-2\\choose r-2} \\right) + \\dotsb + {a_1-1 \\choose 1}\\\\\n\t&= {a_{r}-2 \\choose r} + {a_{r-1}-2\\choose r-1} + \\dotsb + {a_1-2 \\choose 1}+1.\n\t\\end{align*}\n\tBy the IH\\sidenote{Since $\\A$ is compressed, $\\A_1\\neq \\emptyset$, so $|\\A_0| < |\\A|$ and the induction hypothesis applies to $\\A_0$.}, \n\\[\n|\\partial \\A_0| \\geq {a_r-2\\choose r-1} + \\dotsb + {a_2-2\\choose 1}+1.\n\\]\nThen Claim 1 gives $|\\A_1| \\geq |\\partial \\A_0| > |\\A_1|$, a contradiction.\n\t\\end{subproof}\n\\end{enumerate}\nLet $B = \\{A\\setminus\\{1\\}: A\\in \\A_1\\}\\subset \\N^{(r-1)}$. Note $|B| = |\\A_1|$.\n\\begin{enumerate}[{Claim} 1:]\n\\setcounter{enumi}{2}\n\t\\item \n\t$|\\partial \\A|\\geq |B| + |\\partial B|$.\n\n\t\\begin{subproof}\t\n\tNote $B\\subset \\partial \\A$. Let $B' = \\{C\\cup \\{1\\}: C\\in \\partial B\\} \\subset \\partial \\A$. Then $B\\cup B' \\subset \\partial \\A$. Since sets in $B'$ contain $1$ and sets in $B$ do not, $|B\\cup B'| = |B|+|B'| = |B| + |\\partial B|$.\n\t\\end{subproof}\n\\end{enumerate}\nBy Claims 2, 3, and the IH, we have\n\\begin{align*}\t\n|\\partial \\A| &\\geq |B| + |\\partial B| \\\\\n&\\geq {a_r-2 \\choose r-1} + {a_{r-1}-2\\choose r-2} + \\dotsb + {a_2-2\\choose 1}+1 \\\\\n&\\qquad + {a_r-2 \\choose r-2} + {a_{r-1}-2 \\choose r-3} + \\dotsb + {a_3-2\\choose 1} + 1\\\\\n&= {a_r-1\\choose r-1} + \\dotsb + {a_3-1\\choose 2} + {a_2-1\\choose 1}+1.  
\\qedhere\n\\end{align*}\n\\end{proof}\n\\begin{theorem}[\\cite{lovaszbook_hyper}]\nLet $\\A\\subset \\N^{(r)}$, where $|\\A| = {x\\choose r}$ for $x\\in \\R$.\\marginnote{We define ${x\\choose r} := \\frac{x(x-1)\\dotsm(x-r+1)}{r!}$. Since the polynomials\n${x\\choose r}$ and ${x-1\\choose r} + {x-1\\choose r-1}$ agree on the integers, they agree everywhere.}\nThen $|\\partial \\A| \\geq {x\\choose r-1}$.\\sidenote{If $x$ is an integer, then equality can be achieved by taking $\\A=[x]^{(r)}$.}\n\\end{theorem}\n\\begin{proof}\t\nThe proof follows that of the reformulated Kruskal--Katona. Claims 1 \\& 3 have the same proof. For Claim 2, there is a much simpler proof:\n\\begin{enumerate}[{Claim} 1:]\\setcounter{enumi}{1}\n% \\item $|\\partial \\A_0|\\leq|\\A_1|$\n\\item $|\\A_1|\\geq {x-1\\choose r-1}$.\n\\begin{subproof}\t\nIf not, $|\\A_0| = |\\A| - |\\A_1| > {x\\choose r} - {x-1 \\choose r-1} = {x-1\\choose r}$. Then, by the IH, $|\\partial \\A_0| \\geq {x-1\\choose r-1} > |\\A_1|$, contradicting Claim 1.\n\\end{subproof}\n% \\item \n\\end{enumerate}\nThe rest of the proof is the same.\n\\end{proof}\n\n\n\\begin{corollary} \\label{cor:ell_shadows}\nLet $\\A\\subset \\N^{(r)}$ with $|\\A| = {x\\choose r}$ for $x\\in \\R$. Let \n\\[\n\\partial^{(\\ell)}\\A = \\{B\\in \\N^{(r-\\ell)}: B\\subset A \\text{ for some }A\\in\\A\\}.\n\\]\nThen\n\\[\n|\\partial^{(\\ell)}\\A|\\geq {x\\choose r-\\ell}.\n\\]\n\\end{corollary}\n\\begin{proof}[Proof by induction on $\\ell$.] The base case $\\ell=1$ is Lov\\'asz's theorem. Then\n\\[\n\\partial^{(\\ell)}\\A = \\partial ( \\partial^{(\\ell-1)}\\A)\n\\]\nso the IH and Lov\\'asz's theorem yield the result.\n\\end{proof}\n\n\\begin{corollary}\nLet $G$ be a 2-graph, and write \\marginnote{$\\edges (G)$ denotes the edge set of $G$. We identify $G$ with $\\edges (G)$.}\n\\[\n|G| = |\\edges (G)| = {x\\choose 2}\n\\]\nfor some $x\\in \\R$. Then $G$ contains at most ${x\\choose k}$ complete subgraphs\\sidenote{$k$-tuples of vertices pairwise joined by edges.} of size $k$.\n\\end{corollary}\n\\begin{remark}\nWe may reformulate this as follows, in a special case. Let $G$ be a graph with ${n\\choose 2}$ edges (but possibly more than $n$ vertices). Then the number of triangles in $G$ is maximized when $G$ is a complete graph on $n$ vertices. We think of this as being given a budget of edges and trying to maximize the number of triangles we can make. Here it is intuitive that to do this, we make a complete graph.\n\\end{remark}\n\\begin{proof}\t\nWe may assume that $x\\geq k$; otherwise $G$ contains no complete $k$-subgraphs and there is nothing to prove. If the number of complete $k$-subgraphs of $G$ is strictly larger than ${x\\choose k}$, then it is equal to ${x' \\choose k}$ for some $x' > x$. So we choose $\\A$ to be the family of vertex sets of complete $k$-subgraphs of $G$: \n\\[\n\\A = \\{V(H): H \\text{ is a complete subgraph of } G \\text{ on } k \\text{ vertices}\\} \\subset V(G)^{(k)}.\n\\] Then each $\\{u,v\\} \\in \\partial^{(k-2)} \\A$ is a two-element subset of a complete subgraph of $G$, so $\\partial^{(k-2)} \\A\\subset \\edges(G)$. 
Then by \\cref{cor:ell_shadows}, we must have $|\\edges (G)| \\geq {x'\\choose 2} > {x\\choose 2}$, a contradiction.\n% \\understand\n% Everything that lies in the shadow of order $k-2$ must\n\\end{proof}", "meta": {"hexsha": "5bc0e31c8194553e7fcb396e8d3c628f880698e5", "size": 21390, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/ch4_shadows.tex", "max_stars_repo_name": "ericphanson/CombinatoricsNotes", "max_stars_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-04-24T06:43:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-20T04:27:41.000Z", "max_issues_repo_path": "chapters/ch4_shadows.tex", "max_issues_repo_name": "ericphanson/CombinatoricsNotes", "max_issues_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/ch4_shadows.tex", "max_forks_repo_name": "ericphanson/CombinatoricsNotes", "max_forks_repo_head_hexsha": "6b369a77b77cf6f0281b59f227aaa31e6903079c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-04T19:38:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-04T19:38:24.000Z", "avg_line_length": 54.9871465296, "max_line_length": 825, "alphanum_fraction": 0.6260402057, "num_tokens": 8677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8438951005915208, "lm_q1q2_score": 0.5845171760313986}}
{"text": "\\chapter{Mapping the Data}\n\\label{ch:mapping-the-data}\n\nImagine a foreign visitor to the US who knows nothing about the US geography. He doesn't even have a map; the only data he has is a list of distances between the cities. Oh, yes, and he attended the Introduction to Data Mining.\n\nIf we know distances between the cities, we can cluster them.\\marginnote{For this example we retrieved the data from \\url{http://www.mapcrow.info/united_states.html}, removed the city names from the first line and replaced it with \"31 labelled\".\\break\\break The file is available at \\url{http://file.biolab.si/files/us-cities.dst.zip}. To load it, unzip the file and use the \\widget{File Distance} widget.}\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=\\linewidth]{dendrogram.png}\n    \\caption{$\\;$}\n\\end{figure}\n\nHow much sense does it make? Austin and San Antonio are closer to each other than to Houston; the tree is then joined by Dallas. On the other hand, New Orleans is much closer to Houston than to Miami. And, well, good luck hitchhiking from Anchorage to Honolulu.\n\nAs for Anchorage and Honolulu, they are leftovers; when there were only three clusters left (Honolulu, Anchorage and the big cluster with everything else), Honolulu and Anchorage were closer to each other than to the rest. But not close \u2014 the corresponding lines in the dendrogram are really long.\n\nThe real problem is New Orleans and San Antonio: New Orleans is close to Atlanta and Memphis, Miami is close to Jacksonville and Tampa. And these two clusters are suddenly more similar to each other than to some distant cities in Texas.\n\n\\marginnote{We can't run k-means clustering on this data, since we only have distances, and k-means runs on real (tabular) data. Yet, k-means would have the same problem as hierarchical clustering.}In general, two points from different clusters may be more similar to each other than to some points from their corresponding clusters.\n\nTo get a better impression about the physical layout of cities, people have invented a better tool: a map! Can we reconstruct a map from a matrix of distances? Sure. Take any pair of cities and put them on a paper with the distance corresponding to some scale. Add the third city and put it at the corresponding distance from the two. Continue until done. Excluding, for the sake of scale, Anchorage, we get the following map.\n\n\\begin{figure*}[h]\n    \\centering\n    \\includegraphics[width=\\linewidth]{mds-jitter.png}\n    \\caption{$\\;$}\n\\end{figure*}\n\nWe have not constructed this map manually, of course. We used a widget called \\widget{MDS}, which stands for Multidimensional scaling.\n\nIt is actually a rather exact map of the US from the Australian perspective. You cannot get the orientation from a map of distances, but now we have a good impression about the relations between cities. It is certainly much better than clustering.\n\n\\begin{marginfigure}\n    \\includegraphics[width=\\linewidth]{distance-workflow.png}\n\\end{marginfigure}\n\n\\newpage\n\nRemember the clustering of animals? Can we draw a map of animals?\nDoes the map make any sense? Are similar animals together? 
Color the points by the types of animals and you should see.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.7]{zoo-workflow.png}\n    \\caption{$\\;$}\n\\end{figure}\n\n\\begin{figure*}[h]\n    \\centering\n    \\includegraphics[width=\\linewidth]{zoo-mds.png}\n    \\caption{$\\;$}\n\\end{figure*}\n\nThe map of the US was accurate: one can put the points in a plane so that the distances correspond to actual distances between cities. For most data, this is usually impossible. What we get is a projection (a non-linear projection, if you care about mathematical finesses) of the data. You lose something, but you get a picture.\n\nThe MDS algorithm does not always find the optimal map. You may want to restart the MDS  from random positions. Use the slider \"Show similar pairs\" to see whether the points that are placed together (or apart) actually belong together. In the above case, the honeybee belongs closer to the wasp, but could not fly there as in the process of optimization it bumped into the hostile region of flamingos and swans.\n", "meta": {"hexsha": "5bd14f758009183b9a460b30f98bb638fc15fd6a", "size": 4163, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/015-mapping-the-data/mapping-the-data.tex", "max_stars_repo_name": "PrimozGodec/orange-lecture-notes", "max_stars_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-10-13T14:31:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:47:06.000Z", "max_issues_repo_path": "chapters/015-mapping-the-data/mapping-the-data.tex", "max_issues_repo_name": "PrimozGodec/orange-lecture-notes", "max_issues_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-02-26T13:33:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-25T19:15:34.000Z", "max_forks_repo_path": "chapters/015-mapping-the-data/mapping-the-data.tex", "max_forks_repo_name": "PrimozGodec/orange-lecture-notes", "max_forks_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-01-19T16:55:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-21T20:35:41.000Z", "avg_line_length": 71.775862069, "max_line_length": 426, "alphanum_fraction": 0.7722796061, "num_tokens": 962, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.8438950947024555, "lm_q1q2_score": 0.5845171612440503}}
{"text": "\\subsubsection{FastFourierTransform}\n\\label{FastFourierTransformPP}\nThe \\xmlNode{FastFourierTransform} PostProcessor provides access to the Numpy fast Fourier transform function\n\\texttt{numpy.fft.fft}\nand provides the frequencies, periods, and amplitudes from performing the transform. The periods are simply\nthe inverse of the frequencies, and the frequency units are the deltas between pivot values in the provided\ninput. For example, if data is collected every 3600 seconds, the units of frequency are per-hour.  This\nPostProcessor expects uniformly-spaced pivot values. Note that for each realization in the input data object,\na separate fft will be created for each target.\n\nThe \\xmlNode{FastFourierTransform} PostProcessor can act on any target in a DataObject that depends on a\nsingle index, and generates three histories per sample per target: an independent variable\n\\xmlString{target\\_fft\\_frequency}, and two dependent values \\xmlString{target\\_fft\\_period} and\n\\xmlString{target\\_fft\\_amplitude}, which both depend on the frequency by default. In all three outputs,\n\\emph{target} is replaced by the name of the target for which the fft was requested.\n\n\\ppType{FastFourierTransform}{FastFourierTransform}\n%\n\\begin{itemize}\n  \\item \\xmlNode{target}, \\xmlDesc{comma separated strings, required field}, specifies the names of the\n    target(s) for which the fast Fourier transform should be calculated.\n \\end{itemize}\n\\textbf{Example:}\n\n\\begin{lstlisting}[style=XML]\n<Simulation>\n ...\n  <Models>\n    ...\n    <PostProcessor name=\"pp\" subType=\"FastFourierTransform\">\n      <target>x, y</target>\n    </PostProcessor>\n    ...\n  </Models>\n ...\n</Simulation>\n\\end{lstlisting}\n", "meta": {"hexsha": "656e77474659d58309fb7355fc2138aec0291a16", "size": 1678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/PostProcessors/FastFourierTransform.tex", "max_stars_repo_name": "dgarrett622/raven", "max_stars_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user_manual/PostProcessors/FastFourierTransform.tex", "max_issues_repo_name": "dgarrett622/raven", "max_issues_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/PostProcessors/FastFourierTransform.tex", "max_forks_repo_name": "dgarrett622/raven", "max_forks_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1578947368, "max_line_length": 109, "alphanum_fraction": 0.7818831943, "num_tokens": 400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677660619633, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5845089331750976}}
{"text": "\\subsubsection{EconomicRatio}\n\\label{EconomicRatio}\nThe \\xmlNode{EconomicRatio} post-processor provides the economic metrics from the percent change \nperiod return of the asset or strategy that is given as an input. These metrics measure the risk-adjusted returns. \n%\n\\ppType{EconomicRatio}{EconomicRatio}\n\n\\begin{itemize}\n  \\item \\xmlNode{\"metric\"}, \\xmlDesc{comma separated string or node list, required field},\n  specifications for the metric to be calculated. The name of each node is the requested metric. \n  The text of the node is a comma-separated list of the parameters for which the metric should be calculated. \n\n  Currently the scalar quantities available for request are:\n  \\begin{itemize}\n\n  \\item \\textbf{sharpeRatio}: the Sharpe Ratio, measures the performance of an investment. It is defined as the  historical returns of the investment, divided by the standard deviation of the investment(Volatility). \n  \\item \\textbf{sortinoRatio}: the Sortino ratio, measures the risk-adjusted return of an investment asset. Discounts the excess return of a portfolio above a target threshold by the volatility of downside returns. If this quantity is inputted as \\textit{sortinoRatio} the threshold for separate upside and downside value will assign as $0$. Otherwise the user can specify this quantity with a parameter \\xmlAttr{threshold='X'}, where the \\xmlAttr{X} represents the requested threshold \\xmlAttr{median} or \\xmlAttr{zero}.\n  \n  \\item \\textbf{gainLossRatio}: the gain-loss ratio, discounts the first-order higher partial moment of a portfolio's returns, by the first-order lower partial moment of a portfolio's returns. If this quantity is inputted as \\textit{gainLossRatio} the threshold for separate upside and downside value will assign as $0$. Otherwise the user can specify this quantity with a parameter \\xmlAttr{threshold='X'}, where the \\xmlAttr{X} represents the requested threshold \\xmlAttr{median} or \\xmlAttr{zero}.\n  \n  \n  \\item \\textbf{expectedShortfall}: the expected shortfall (Es) or conditional value at risk (CVaR), the expected return on the portfolio in the worst q of cases. If this quantity is inputted as \\textit{ExpectedShortfall} the q value will assign as $5\\%$. Otherwise the user can specify this quantity with a parameter \\xmlAttr{threshold='X'}, where the \\xmlAttr{X} represents the requested q value (a floating point value between 0.0 and 1.0)\n  \\begin{equation}\n    ES_\\alpha = -\\frac{1}{\\alpha} \\int_0^\\alpha \\operatorname{VaR}_\\gamma(X) \\, d\\gamma\n  \\end{equation}\n  \\item \\textbf{valueAtRisk}: the value at risk for investments. Estimates the maximum possible loss after exclude worse outcomes whose combined probability is at most $\\alpha$. If this quantity is inputted as \\textit{ValueAtRisk} the $\\alpha$ value will assign as $5\\%$. 
Otherwise the user can specify this quantity with a parameter \\xmlAttr{threshold='X'}, where the \\xmlAttr{X} represents the requested $\\alpha$ value (a floating point value between 0.0 and 1.0)\n  \n  \\begin{equation}\n    \\operatorname{VaR}_\\alpha(X)=-\\inf\\big\\{x\\in\\mathbb{R}:F_X(x)>\\alpha\\big\\} = F^{-1}_Y(1-\\alpha).\n  \\end{equation}\n  \\end{itemize}\n  This XML node needs to contain the attribute:\n  \\begin{itemize}\n    \\itemsep0em\n    \\item \\xmlAttr{prefix}, \\xmlDesc{required string attribute}, user-defined prefix for the given \\textbf{metric}.\n      For scalar quantifies, RAVEN will define a variable with name defined as:  ``prefix'' + ``\\_'' + ``parameter name''.\n      For example, if we define ``mean'' as the prefix for \\textbf{expectedValue}, and parameter ``x'', then variable\n      ``mean\\_x'' will be defined by RAVEN.\n      For matrix quantities, RAVEN will define a variable with name defined as: ``prefix'' + ``\\_'' + ``target parameter name'' + ``\\_'' + ``feature parameter name''.\n      For example, if we define ``sen'' as the prefix for \\textbf{sensitivity}, target ``y'' and feature ``x'', then\n      variable ``sen\\_y\\_x'' will be defined by RAVEN.\n      \\nb These variable will be used by RAVEN for the internal calculations. It is also accessible by the user through\n      \\textbf{DataObjects} and \\textbf{OutStreams}.\n  \\end{itemize}\n\n\\end{itemize}\n\n\n\\textbf{Example:}\n\\begin{lstlisting}[style=XML,morekeywords={name,subType,class,type,steps}]\n<Simulation>\n  ...\n    <Models>\n    ...\n    <PostProcessor name=\"EconomicRatio\" subType=\"EconomicRatio\" verbosity=\"debug\">\n      <sharpeRatio prefix=\"SR\">x0,y0,z0,x,y,z</sharpeRatio>\n      <sortinoRatio threshold='zero' prefix=\"stR\">x01,y01,x,z</sortinoRatio>\n      <sortinoRatio threshold='median' prefix=\"stR2\">z01,x0,x01</sortinoRatio>\n      <valueAtRisk threshold='0.07' prefix=\"VaR\">z01,x0,x01</valueAtRisk>\n      <expectedShortfall threshold='0.99' prefix=\"CVaR\">z01,x0,x01</expectedShortfall>\n      <gainLossRatio prefix=\"glR\">x01,y01,z0,x,y,z</gainLossRatio>\n    </PostProcessor>  \n    ...\n  </Models>\n  ...\n</Simulation>\n\\end{lstlisting}", "meta": {"hexsha": "69a8114fe5c141f835181b36ae836f6122ec0223", "size": 4908, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/EconomicRatio.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "doc/user_manual/EconomicRatio.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "doc/user_manual/EconomicRatio.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 73.2537313433, "max_line_length": 521, 
"alphanum_fraction": 0.739405053, "num_tokens": 1303, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519528132451416, "lm_q2_score": 0.6859494485880927, "lm_q1q2_score": 0.5843965624685792}}
{"text": "\\subsection{Proposed Model}\n\\label{sec:models}\n\nWe designed a \\emph{Convolutional Neural Network (CNN)} that takes as inputs \nthe sensor distances and colour data obtained from the laser scanner \n(size: $180 \\times 4$) and produces as output the left and the right wheel \ntarget speeds. \n\nOne peculiarity is that we used convolutional layers with circular padding, \nsince the laser scanner returns a 360\u00b0 view of the world around the robot.The \n\\emph{Rectified Linear Unit} (ReLU) activation function is applied after every \nlayer, except for the last one. \n\nThe input data are normalised by subtracting and dividing the channel-wise mean \nand standard deviation over the training set. Furthermore, channel-wise \nmultiplicative $\\alpha$ and additive $\\beta$ parameters are learned during \ntraining to rescale the input to the most convenient range for the network:\n\\begin{IEEEeqnarray}{lL}\n\ty &= (x - \\mu) / \\sigma \\\\\n\tz &= \\alpha y + \\beta\n\\end{IEEEeqnarray}\nThis is implemented in the code with a \\texttt{BatchNorm1d} layer.\n\nThe training set data are shuffled at the beginning of each epoch, so the \nmini-batches (that are of size $2^{14}$) are generated independently between \nepochs. \n\nThe model is trained with the \\emph{Adam} optimiser and learning rate $0.001$, \nwhile the other parameters have their default values. The training is \ninterrupted using \\emph{early stopping}, if the validation loss doesn't improve \nfor 20 epochs, or after 500 epochs. \n\nDuring the various experiments, four different architectures are evaluated:\n\\begin{itemize}\n\t\\item \\emph{Baseline network}: 3 convolutional and 3 fully-connected\n\tlayers (Table~\\ref{tab: baseline})\n\t\\item Baseline network plus one max pooling layer (Table~\\ref{tab: maxpool})\n\t\\item Baseline network plus dropout (Table~\\ref{tab: baseline} + \n\tTable~\\ref{tab: dropout})\n\t\\item Baseline network plus one max pooling layer and dropout \n\t(Table~\\ref{tab: maxpool} + Table~\\ref{tab: dropout})\n\\end{itemize} \n\n\\begin{table}[htbp]\n\t\\caption{Architecture of the Baseline Network}\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\textbf{Layer}&\\textbf{Channels} &\\textbf{Kernel size} &\\textbf{Stride} &\\textbf{Padding}\\\\\n\t\t\t\\cline{1-5}\n\t\t\tconv1 &  4 $\\rightarrow$ 16 & 5 & 2 & 2, circular \\\\ \\hline\n\t\t\tconv2 & 16 $\\rightarrow$ 32 & 5 & 2 & 2, circular \\\\ \\hline\n\t\t\tconv3 & 32 $\\rightarrow$  \t\t\t 32 & 5 & 1 & 2, circular \\\\ \\hline\n\t\t\tfc1 &   45 $\\times$ 32 $\\rightarrow$ 128 &  &  &  \\\\ \\hline\n\t\t\tfc2 &  128 $\\rightarrow$ 128 &  &  &  \\\\ \\hline\n\t\t\tfc3 &  128 $\\rightarrow$   2 &  &  &  \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\label{tab: baseline}\n\t\\end{center}\n\\end{table}\n\n\\begin{table}[htbp]\n\t\\caption{Architecture of the Network with Max Pooling}\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\textbf{Layer}&\\textbf{Channels} &\\textbf{Kernel size} &\\textbf{Stride} &\\textbf{Padding}\\\\\n\t\t\t\\cline{1-5}\n\t\t\tconv1  &   4 $\\rightarrow$  \\bfseries\t32 & 5 & 2 & 2, circular \\\\ \\hline\n\t\t\tconv2  & \\bfseries 32 $\\rightarrow$  \t96 & 5 & 2 & 2, circular \\\\ \\hline\n\t\t\t\\bfseries mpool1 & \t\t\t\t\t   & \\bfseries 3\t& \\bfseries 3 & \\bfseries 1, circular \\\\ \n\t\t\t\\hline\t\t\t\n\t\t\tconv3  & \\bfseries 96 $\\rightarrow$  \t96 & 5 & 1 & 2, circular \\\\ \\hline\n\t\t\tfc1    & 15 $\\times$ 96 $\\rightarrow$ 128 &  &  &  \\\\ 
\\hline\n\t\t\tfc2    & 128 $\\rightarrow$ 128 &  &  &  \\\\ \\hline\n\t\t\tfc3    & 128 $\\rightarrow$   2 &  &  &  \\\\ \\hline\n\t\t\t%\\multicolumn{5}{l}{$^{\\mathrm{a}}$Sample of a Table footnote.}\n\t\t\\end{tabular}\n\t\t\\label{tab: maxpool}\n\t\\end{center}\n\\end{table}\n\n\\begin{table}[htbp]\n\t\\caption{Architecture of the Network with Dropout}\n\t\\begin{center}\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t\\textbf{Layer}&\\textbf{Channels} &\\textbf{Kernel size} &\\textbf{Stride} &\\textbf{Padding}\\\\\n\t\t\t\\cline{1-5}\n\t\t\t\\multicolumn{5}{|c|}{...} \\\\ \\hline\n\t\t\tfc1 &  1440 $\\rightarrow$ 128 &  &  &  \\\\ \\hline\n\t\t\t\\bfseries drop1 & \\multicolumn{4}{c|}{\\bfseries dropout with p = 0.5} \\\\ \\hline\n\t\t\tfc2 &  128 $\\rightarrow$ 128 &  &  &  \\\\ \\hline\n\t\t\t\\bfseries drop2 & \\multicolumn{4}{c|}{\\bfseries dropout with p = 0.5} \\\\ \\hline\n\t\t\tfc3 &  128 $\\rightarrow$   2 &  &  &  \\\\ \\hline\n\t\t\t%\\multicolumn{5}{l}{$^{\\mathrm{a}}$Sample of a Table footnote.}\n\t\t\\end{tabular}\n\t\t\\label{tab: dropout}\n\t\\end{center}\n\\end{table}\n\n", "meta": {"hexsha": "aa32f137e5a0cbdd7c10bfb78a75146d7a26cbed", "size": 4231, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/models.tex", "max_stars_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_stars_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-31T19:12:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-31T19:12:50.000Z", "max_issues_repo_path": "report/sections/models.tex", "max_issues_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_issues_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/models.tex", "max_forks_repo_name": "GiorgiaAuroraAdorni/learning-relative-interactions-through-imitation", "max_forks_repo_head_hexsha": "ad3cddd7df85a2834e643317fc219189f2743424", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4803921569, "max_line_length": 94, "alphanum_fraction": 0.6705270622, "num_tokens": 1469, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519527869325346, "lm_q2_score": 0.685949467848392, "lm_q1q2_score": 0.5843965608283266}}
{"text": "\\section{Increasing Sparsity}\n\\label{sec:sparsity}\n\nWe noted that the 45 cluster spectral clustering result described in\nthe previous section assigned many more tags to each word than the\ngold standard.  To quantify the difference we used a measure called\ntag perplexity defined as follows:\n\n\\[ 2^{\\frac{1}{N}\\sum_{i=1}^N -\\log_2 p(t_i | w_i)} \\]\n\nHere $N$ is the number of words in the corpus, $w_i$ is the i'th word,\n$t_i$ is its assigned cluster or tag, and $p(t_i|w_i)$ is the fraction\nof times word $w_i$ has been assigned $t_i$.  A model which had to\nchoose from $q$ equally likely tags for each word would have a tag\nperplexity of $q$.  The tag perplexity of the gold standard 45-tag 24K\nword test corpus is 1.09, whereas the tag perplexity of the spectral\nclustering result is 2.76.\n\nWe experimented with two methods for reducing the number of tags\nassigned to each word: collapsing and word penalties.  Collapsing\nenforces the one-tag-per-word constraint by re-tagging the corpus,\nwhereas word penalties encourage it by increasing the distance between\ninstances with different target words.\n\nTo collapse a given tag assignment for a corpus, we re-tag each word\nwith its most frequent tag in the original assignment (we break ties\nrandomly).  This forcefully reduces the tag perplexity to 1 and\nremoves any ambiguity.  Collapsing improves the many-to-one accuracy\nby more than 10\\% from \\spectralResult\\% to \\collapseResult\\%.\n\nInterestingly when we try to enforce the one-tag-per-word restriction\nbefore clustering (by giving the average substitute vector for each\nword type to spectral clustering) the results get worse (58.02\\%\nmany-to-one accuracy).  The information in individual instances seems\nto be necessary for good clusters to arise.\n\nWord penalties include information about the target word in the\ndistance metric.  The substitute vectors and the KL2 distance metric\nbased on them carry no information about the target word, only its\ncontext.  We used the following distance metric which increases the\ndistance between instances with different target words:\n\n\\[ D(i, j) = KL2(s_i,s_j)+\\delta I(w_i \\neq w_j) \\]\n\nHere $s_i$ is the substitute vector and $w_i$ is the target word for\nthe i'th position, $\\delta$ is the regularization parameter, and $I$\nis the indicator function that gives 1 if the two words are different\nand 0 if they are the same.  Increasing the $\\delta$ decreases the tag\nperplexity, but the accuracy change is non-monotonic.  At $\\delta=1$\nwe obtain a tag perplexity of 1.91 and the many-to-one accuracy\nincreases from \\spectralResult\\% to 64.35\\%.  This demonstrates that\nwe can significantly increase the accuracy by including more\ninformation on the target word without employing the full\none-tag-per-word constraint.  \n%% Table~\\ref{tab:results} summarizes the results.\n\n%% % do we need graph of delta vs accuracy? no does not look meaningful.\n\n%% \\begin{table}[h] \\centering\n%% \\begin{tabular}{|lll|} \\hline\n%% algorithm & many-1 & tag-perp. 
\\\\ \\hline\n%% spectral & \\spectralResult & 2.76 \\\\\n%% word-penalty & 64.35 & 1.91 \\\\\n%% collapsed & \\collapseResult & 1.00 \\\\ \\hline\n%% gold & 100 & 1.09 \\\\ \\hline\n%% \\end{tabular}\n%% \\caption{The many-to-one accuracy and tag perplexity of spectral\n%%   clustering, word-penalty, and collapsed algorithms in comparison to\n%%   the gold standard.}\n%% \\label{tab:results}\n%% \\end{table}\n\n\\begin{figure*}[t]\n\\includegraphics[width=\\textwidth]{hinton.eps}\n\\vspace*{-10mm}\n\\caption{Hinton diagram of the most frequent tags (rows) and clusters\n  (columns).  Area of each square is proportional to the joint\n  probability of the given tag and cluster.}\n\\label{fig:hinton}\n\\end{figure*}\n", "meta": {"hexsha": "5b37ba1a6e5d5f843e2bf30128c0e1cff2fbf79f", "size": 3665, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/cl2012/acl12/sparsity.tex", "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_issues_repo_path": "papers/cl2012/acl12/sparsity.tex", "max_issues_repo_name": "ai-ku/upos", "max_issues_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/cl2012/acl12/sparsity.tex", "max_forks_repo_name": "ai-ku/upos", "max_forks_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "avg_line_length": 45.2469135802, "max_line_length": 72, "alphanum_fraction": 0.7593451569, "num_tokens": 958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825007, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5842827243008611}}
{"text": "%% Chapter 8 : Generation Forecasting using \n\n\\section{Introduction to Artificial Neural Networks}\n\\\n\\\n\\\n\\\nArtificial Neural Networks (ANN) are a type of machine learning model, they are inspired from the biological neural networks present in the brains of the animals. Being highly non-linear, data driven and with a self adaptive approach; ANN's are a powerful tool for modelling phenomenons whose underlying principles and/or data relationships are unknown. They try to imitate the learning process of the human brain, and when trained properly can correlate patterns between input data and the corresponding target data; moreover they can predict outcomes for new independent data. Their adaptive nature replaces traditional programming with learning in problem solving. This helps in developing computational models for applications where there is little or incomplete understanding of the problem to be solved but where training data is readily available.\\\\\n\n\\textbf{Characterisctics of ANN's:}\n\n\\begin{itemize}\n\n\\item ANNs possess mapping capability, hence they can map input data to the corresponding output data.\n\n\\item ANNs possess learning capability, hence they can be trained with available data of a problem and be tested for inference on unknown data. They can identify new outcomes for data sets on which they were previously never trained.\n\n\\item ANNs possess generalizing capability, hence they can predict new outcomes from past trends.\n\n\\item ANNs are robust and fault tolerant, hence they can capture capture complete patterns from partial and/or noisy patterns.\n\n\\item ANNs can process information in parallel, at high speed and in a distributed manner.\n\n\\end{itemize}\n\n\\newpage\n\n\\section{Artificial Neuron}\n\\\n\\\n\\\n\\\nThe artificial neuron is the building block of the ANN. The Fig (\\ref{figc8h1}) shows the diagrammatic representation of an artificial neuron.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{ANNImg1}\n\\caption{Artificial Neuron Model}\n\\label{figc8h1} %% to refer use, \\ref{}\n\\end{figure}\n\nIt consists of the weights one each for the connecting input signals, a summation unit which sums the product of the weight and the incoming input for all inputs and a activation function (which is usually a non-linear function). It also sometimes consist of a bias. 
The mathematical model of the artificial neuron is described by the following equation:\n\n\\begin{equation}\n\\label{ANN1}\nf(x_{j})=f \\left (\\alpha_{j} + \\sum\\limits_{i=1}^{k}w_{ij}y_{i}  \\right )\n\\end{equation}\\\\\nwhere,\\\\\n$ f $ = the activation function of the artificial neuron \\\\\n$ \\alpha_{j} $ = the bias associated with the $j^{th}$ neuron in the network \\\\\n$ k $ = the total number of inputs connected to the $j^{th}$ neuron in the network \\\\\n$ w_{ij} $ = the weight associated with the connection between the $i^{th}$ input and the $j^{th}$ neuron \\\\\n$ y_{i} $ = the $i^{th}$ input connected to the neuron \\\\\n$ f(x_{j}) $ = the output of the $j^{th}$ neuron\n\n\nThe Fig (\\ref{ANNActFunc}) shows the different activation functions which can be used in the design of the artificial neurons.\n\n\\begin{figure}[H]    \n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{ANNImg4}\n\\includegraphics[width=.4\\textwidth]{ANNImg5}\n\\includegraphics[width=.4\\textwidth]{ANNImg6}\n\\includegraphics[width=.4\\textwidth]{ANNImg7}\n\\end{center}\n\\caption{Different Activation Functions: Upper Left - Unit Step, Upper Right - Sigmoid, LowerLeft - Piecewise Linear and LowerRight - Gaussian}    \n\\label{ANNActFunc}\n\\end{figure} \n\nThese artificial neurons form the unit processing elements of an artificial neural network. The arrangement (network architecture) of these artificial neurons and its subsequent training lead to the development of ANN models which solve real world problems whose principles and data relationships are little or incompletely understood.\n\n\\section{ANN Architectures}\n\\\n\\\n\\\n\\\nNeural network architectures are the structures developed by the interconnection of artificial neurons. The two most widely used ANN architectures are: Feed Forward Network and Recurrent Network.\n\n\\textbf{Feed Forward Network:}\\\\\n\nIn these networks information flows in one direction, i.e. from the input layer passing through the hidden layers and finally to the output layer. There are no feedbacks, i.e. the output of any layer does not affect the same or the preceding layer. The Fig (\\ref{figc8h2}) illustrates a feed forward neural network.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{ANNImg2}\n\\caption{Schematic of Feed-Forward Network}\n\\label{figc8h2} %% to refer use, \\ref{}\n\\end{figure}\n\n\\textbf{Recurrent Network:}\\\\\n\nThese artificial neural networks consist of feedbacks (at least one). The possible feedbacks are: feedback between two layers or feedback to a single neuron (self-feedback links). The Fig (\\ref{figc8h3}) shows a recurrent neural network where the feedback is provided from the output layer to the input layer.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{ANNImg3}\n\\caption{Schematic of Recurrent Network}\n\\label{figc8h3} %% to refer use, \\ref{}\n\\end{figure}\n\nThe more specific neural network architectures used widely for real world problem solving are: Multi-Layer Perceptron Networks (MLP), Radial Basis Function Networks and Kohonen Self Organizing Feature Maps.\n\n\\textbf{Multi-Layer Perceptron Network:}\\\\\n\nIt is the most popular form of the ANN architecture. 
It has the following properties:\n\n\\begin{itemize}\n\n\\item Has any number of inputs\n\n\\item Has one or more hidden layers with any number of units\n\n\\item Uses linear combination functions in the input layers\n\n\\item Generally uses sigmoid activation functions in the hidden layers\n\n\\item Has any number of outputs with any activation function\n\n\\item Has connections between the input layer and the first hidden layer, hidden layer and hidden layer, and between hidden layer and output layer\n\n\\end{itemize}\n\nThe Fig (\\ref{figc8h4}) illustrates a Multi-Layer Perceptron Network with a single hidden layer.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{ANN_MLP}\n\\caption{Multi-Layer Perceptron Architecture}\n\\label{figc8h4} %% to refer use, \\ref{}\n\\end{figure}\n\nMLPs are known as universal approximators and can be used when we have little prior knowledge of the relationship between the inputs and the targets. Usually one hidden layer with a sufficient number of neurons is enough for creating a good model, but sometimes more hidden layers with fewer neurons can improve generalization.\\\\\n\n\\textbf{Radial Basis Function Network:}\\\\\n\nA Radial Basis Function Network (RBF) is similar to an MLP but with only one hidden layer. The properties of an RBF are as follows:\n\n\\begin{itemize}\n\n\\item Has any number of inputs\n\n\\item Typically has only one hidden layer with any number of units\n\n\\item Uses radial combination functions in the hidden layers, based on the squared Euclidean distance between the input vector and the weight vector\n\n\\item Typically uses exponential or softmax activation functions in the hidden layer, in which case the network is a Gaussian RBF Network\n\n\\item Has any number of outputs with any activation function\n\n\\item Has connections between the input layer and the hidden layer, and between hidden layer and the output layer.\n\n\\end{itemize}\n\nThe Fig (\\ref{figc8h5}) illustrates an RBF Network.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.25]{ANN_RBF}\n\\caption{Radial Basis Function Network Architecture}\n\\label{figc8h5} %% to refer use, \\ref{}\n\\end{figure}\n\nGaussian RBFs are said to be local-processing networks, as the effect of a hidden unit is usually concentrated in a local area centered at the weight vector; as opposed to MLPs, which are said to be distributed-processing networks, as the effect of a hidden unit can be distributed over the entire input space.\\\\\n\n\\textbf{Kohonen Self Organizing Feature Maps:}\\\\\n\nThese are used quite differently from other ANN networks. Most of the ANN networks are designed for supervised learning tasks, but the Self Organizing Feature Maps (SOFM) are primarily designed for unsupervised learning tasks. They help to learn the structure of the data and hence are often very useful for exploratory data analysis of complex data. They are also used for novelty detection. A SOFM learns to recognize clusters in the training data and then responds to them in a fitting manner. A SOFM consists of only two layers: an input layer, and an output layer of radial units. 
The Fig (\\ref{figc8h6}) illustrates a Kohonen (SOFM) network.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANN_Kohonen}\n\\caption{Kohonen Self Organizing Feature Map Architecture}\n\\label{figc8h6} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\\section{Training Methods}\n\\\n\\\n\\\n\\\nLearning methods for neural networks can be classified into three basic types, which are given as follows:\n\n\\begin{enumerate}\n\n\\item \\textbf{Supervised Learning:} In this learning method every input pattern that is used to train the network is associated with the target or the desired pattern. A teacher is assumed to be present during the learning/training process, when a comparison is made between the network's computed output and the correct expected output, to determine the error. The error can then be used to change the network parameters (weights and biases), which results in an improvement in performance.\n\n\\item \\textbf{Unsupervised Learning:} In this learning method, the target output is not presented to the network. It is as if there is no teacher to present the desired patterns. However, the network learns on its own by discovering and adapting to the structural features in the input patterns. \n\n\\item \\textbf{Reinforcement Learning:} In this learning method, a teacher is available, but it does not present the expected answer; it only indicates if the computed answer is correct or incorrect. The information provided helps the network in its learning process. A reward is given for a correct answer and a penalty for a wrong answer. But, this learning method is not very popular.\n\n\\end{enumerate}\n\n\\section{ANN Applications}\n\\\n\\\n\\\n\\\nThe real world applications of ANNs span a large spectrum of categories, as listed below:\n\n\\begin{itemize}\n\n\\item Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling\n\n\\item Classification, including pattern and sequence recognition, novelty detection and sequential decision making\n\n\\item Data processing, including filtering, clustering, blind source separation and compression\n\n\\end{itemize}\n\n\\newpage\n\n\\section{Process Flow for Forecasting using ANN}\n\\\n\\\n\\\n\\\nThe Fig (\\ref{figc8h7}) shows the schematic of the process flow for the development of the ANN model for forecasting.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.8]{ANN_Flow}\n\\caption{Schematic of ANN Model Development for Forecasting}\n\\label{figc8h7} %% to refer use, \\ref{}\n\\end{figure}\n\n\\newpage\n\n\\section{Results}\n\\\n\\\n\\\n\\\nThe results for generation forecasting using ANN have been produced for the GSEC 1MW SPVP. The training data covers 11 months, from November 2014 to September 2015. The ANN models trained on this data have been used to generate intra-hour weather variable forecasts for the month of October 2015. The ANN models have been developed using three different types of input data (Mode1, Mode2 and Mode3) with three different network architectures (Fitnet [FN], Feedforward Net [FF] and Cascaded Feedforward Net [CFF]) and five different hidden network configurations (5-Neurons, 10-Neurons, 15-Neurons, 10-10-Neurons and 10-10-10-Neurons). All the results in the subsequent sections are for the $2^{nd}$ of October 2015 for all the modes, architectures and hidden network configurations. 
The forecasted weather variables from the ANN models have been fed as inputs to the solar energy estimation app to generate the intra-hour generation forecast.\n\n\n\\subsection{Input Type - Mode1}\n\\\n\\\n\\\n\\\nIn Mode1 the Input training matrix consists of the [Day, Month, Year,Time] columns with the Target Training Matrix consisting of the corresponding [Wind Speed, Temperature, Irradiance] columns. The results are given in the following sections which are divided based on the hidden network configurations and each graph consists of the comparison of the actual variable with the forecasted variable from the three different ANN architectures. \n\n\\subsubsection{ANN Architecture - Hidden Nodes [5]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne5w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 1 With Network [5]}\n\\label{ANNResImg1} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne5t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 1 With Network [5]}\n\\label{ANNResImg2} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne5i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 1 With Network [5]}\n\\label{ANNResImg3} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne5e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 1 With Network [5]}\n\\label{ANNResImg4} %% to refer use, \\ref{}\n\\end{figure}\n\n\\newpage\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 1 With Network [10]}\n\\label{ANNResImg5} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 1 With Network [10]}\n\\label{ANNResImg6} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 1 With Network [10]}\n\\label{ANNResImg7} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 1 With Network [10]}\n\\label{ANNResImg8} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [15]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne15w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 1 With Network [15]}\n\\label{ANNResImg9} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne15t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 1 With Network 
[15]}\n\\label{ANNResImg10} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne15i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 1 With Network [15]}\n\\label{ANNResImg11} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne15e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 1 With Network [15]}\n\\label{ANNResImg12} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 1 With Network [10-10]}\n\\label{ANNResImg13} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 1 With Network [10-10]}\n\\label{ANNResImg14} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 1 With Network [10-10]}\n\\label{ANNResImg15} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 1 With Network [10-10]}\n\\label{ANNResImg16} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 1 With Network [10-10-10]}\n\\label{ANNResImg17} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 1 With Network [10-10-10]}\n\\label{ANNResImg18} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 1 With Network [10-10-10]}\n\\label{ANNResImg19} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmOne10-10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 1 With Network [10-10-10]}\n\\label{ANNResImg20} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\\subsection{Input Type - Mode2}\n\\\n\\\n\\\n\\\nIn Mode2 the Input training matrix consists of the [Day, Month, Year,Time] columns with the Target Training Matrix consisting of the corresponding [Wind Speed, Temperature, Irradiance, Previous Wind Speed, Previous Temperature, Previous Irradiance] columns. 
The results are given in the following sections which are divided based on the hidden network configurations and each graph consists of the comparison of the actual variable with the forecasted variable from the three different ANN architectures. \n\n\\subsubsection{ANN Architecture - Hidden Nodes [5]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo5w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 2 With Network [5]}\n\\label{ANNResImg21} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo5t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 2 With Network [5]}\n\\label{ANNResImg22} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo5i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 2 With Network [5]}\n\\label{ANNResImg23} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo5e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 2 With Network [5]}\n\\label{ANNResImg24} %% to refer use, \\ref{}\n\\end{figure}\n\n\\newpage\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 2 With Network [10]}\n\\label{ANNResImg25} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 2 With Network [10]}\n\\label{ANNResImg26} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 2 With Network [10]}\n\\label{ANNResImg27} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 2 With Network [10]}\n\\label{ANNResImg28} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [15]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo15w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 2 With Network [15]}\n\\label{ANNResImg29} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo15t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 2 With Network [15]}\n\\label{ANNResImg30} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo15i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 2 With Network [15]}\n\\label{ANNResImg31} %% to refer use, 
\\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo15e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 2 With Network [15]}\n\\label{ANNResImg32} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 2 With Network [10-10]}\n\\label{ANNResImg33} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 2 With Network [10-10]}\n\\label{ANNResImg34} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 2 With Network [10-10]}\n\\label{ANNResImg35} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 2 With Network [10-10]}\n\\label{ANNResImg36} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 2 With Network [10-10-10]}\n\\label{ANNResImg37} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 2 With Network [10-10-10]}\n\\label{ANNResImg38} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 2 With Network [10-10-10]}\n\\label{ANNResImg39} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmTwo10-10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 2 With Network [10-10-10]}\n\\label{ANNResImg40} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\n\\subsection{Input Type - Mode3}\n\\\n\\\n\\\n\\\nIn Mode2 the Input training matrix consists of the [Day, Month, Year,Time] columns with the Target Training Matrix consisting of the corresponding [Wind Speed, Temperature, Irradiance, Previous Wind Speed, Previous Temperature, Previous Irradiance, Rate of Change of Previous Wind Speed, Rate of Change of Previous Temperature, Rate of Change of Previous Irradiance] columns. The results are given in the following sections which are divided based on the hidden network configurations and each graph consists of the comparison of the actual variable with he forecasted variable from the three different ANN architectures. 
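\n\nContinuing the sketch given for Mode 2 (again purely illustrative; under this reading, the rate-of-change columns are first differences of the lagged series):\n\n\\begin{verbatim}\nimport numpy as np\n\n# wind, temp, irr as in the Mode 2 sketch, one sample per step.\n# Start at the third step so that the previous value and its\n# rate of change are both defined for every training sample.\ndef mode3_columns(x):\n    cur = x[2:]               # value to forecast\n    prev = x[1:-1]            # previous value\n    rate = x[1:-1] - x[:-2]   # rate of change of previous value\n    return cur, prev, rate\n\ncols = [mode3_columns(v) for v in (wind, temp, irr)]\nT3 = np.column_stack([c[0] for c in cols] +\n                     [c[1] for c in cols] +\n                     [c[2] for c in cols])\n\\end{verbatim}\n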
\n\n\\subsubsection{ANN Architecture - Hidden Nodes [5]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree5w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 3 With Network [5]}\n\\label{ANNResImg41} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree5t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 3 With Network [5]}\n\\label{ANNResImg42} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree5i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 3 With Network [5]}\n\\label{ANNResImg43} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree5e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 3 With Network [5]}\n\\label{ANNResImg44} %% to refer use, \\ref{}\n\\end{figure}\n\n\\newpage\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 3 With Network [10]}\n\\label{ANNResImg45} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 3 With Network [10]}\n\\label{ANNResImg46} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 3 With Network [10]}\n\\label{ANNResImg47} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 3 With Network [10]}\n\\label{ANNResImg48} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [15]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree15w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 3 With Network [15]}\n\\label{ANNResImg49} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree15t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 3 With Network [15]}\n\\label{ANNResImg50} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree15i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 3 With Network [15]}\n\\label{ANNResImg51} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree15e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 3 With Network [15]}\n\\label{ANNResImg52} %% to refer use, \\ref{}\n\\end{figure}\n\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 3 With Network [10-10]}\n\\label{ANNResImg53} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 3 With Network [10-10]}\n\\label{ANNResImg54} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 3 With Network [10-10]}\n\\label{ANNResImg55} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 3 With Network [10-10]}\n\\label{ANNResImg56} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{ANN Architecture - Hidden Nodes [10-10-10]}\n\\\n\\\n\\\n\\\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10-10w}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Wind Speed In Mode 3 With Network [10-10-10]}\n\\label{ANNResImg57} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10-10t}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Temperature In Mode 3 With Network [10-10-10]}\n\\label{ANNResImg58} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10-10i}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Irradiance In Mode 3 With Network [10-10-10]}\n\\label{ANNResImg59} %% to refer use, \\ref{}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.5]{ANNmThree10-10-10e}\n\\caption{Comparison Of Different ANN Architectures (FN,FF,CFF) For Forecasting Of Intra-Hour Energy In Mode 3 With Network [10-10-10]}\n\\label{ANNResImg60} %% to refer use, \\ref{}\n\\end{figure}\n\n\\subsubsection{Conclusion from Graphs}\n\\\n\\\n\\\n\\\nIt is observed from the above graphs that, as the input training set holds more data about the previous states of the weather variables to be forecasted, the model learning for one-step-ahead generation forecasting improves, even with less complex network architectures. Learning and model generalization also improve as the complexity of the hidden network architecture increases. Moreover, the learning is largely unaffected by the choice among the different ANN architectures, as FN, FF and CFF all fall in the same family of Multi-Layer Perceptron Models. 
The use of Radial Basis Functions has to be explored to gain a clearer understanding of which ANN architecture is better suited for the job.\n\n\n", "meta": {"hexsha": "d1049b291a42f7d91f5e9ec298974bcabac1718b", "size": 31169, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch8.tex", "max_stars_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_stars_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch8.tex", "max_issues_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_issues_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch8.tex", "max_forks_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_forks_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-28T05:21:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-28T05:21:48.000Z", "avg_line_length": 39.4544303797, "max_line_length": 944, "alphanum_fraction": 0.7681670891, "num_tokens": 8631, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825007, "lm_q2_score": 0.7490872187162396, "lm_q1q2_score": 0.5842827199317213}}
{"text": "% !TEX root = ./Basilisk-Integrators20170724.tex\n\n\\section{Test Description and Success Criteria}\n\nAs the integrator functionality is not a regular BSK module with specified input/output behavior, the integration can only be tested in an integrated test.\nThe integrated test employed is located in:\\\\\n\n{\\tt src/tests/scenarios/test\\_scenarioIntegrators.py}\n\\\\\n\n\\subsection{Test inputs}\nEach simulation uses the point-mass spacecraft equations of motion\n\\begin{equation}\n\t\\ddot{\\bm r} = - \\frac{\\mu}{r^{3}}{\\bm r}\n\\end{equation}\nwith the initial orbit elements shown in Table~\\ref{tbl:oeInitial}.  The only gravitational body considered is Earth.  The simulation time for each case is 3/4 of an orbit period.  Each implemented Integrator method is tested using the above initial conditions and a large time step of 120 seconds.  This large time step makes the integrator errors more easily visible between the integration methods.\n\n\n\\begin{table}[htbp]\n\t\\caption{Initial Spacecraft Ephemeris}\n\t\\label{tbl:oeInitial}\n\t\\centering \\fontsize{10}{10}\\selectfont\n\t\\begin{tabular}{c | r | r } % Column formatting,\n\t\t\\hline\n\t\t\\hline\n\t\tElement    & Description & Value \\\\\n\t\t\\hline\n\t\t$a$      & Semi-Major Axis & 7000km \\\\\n\t\t$e$ & Eccentricity     &  0.0001 \\\\\n\t\t$i$       & Inclination Angle  & 33.3\\dg \\\\\n\t\t$\\Omega$       & Ascending Node   & 33.3\\dg \\\\\n\t\t$\\omega$       & Argument of Periapses  & 48.2\\dg \\\\\n\t\t$f$       & True Anomaly   & 347.8\\dg \\\\\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\n\n\\subsection{Test sections}\nThe same simulation setup is run for each of the integration methods:\n\\begin{enumerate}\n\\item The first test uses the default RK4 integration method.  Here the simulation script does not specify the integration method, testing both that the RK4 integrator is the default integration method, and that that the integrator is implemented correctly.\n\n\\item The 2nd test uses the Euler's integration method.\n\n\\item the 3rd test uses Heun's integration method.\n\n\\end{enumerate}\n\n\\begin{figure}[t]\n\t\\centerline{\n\t\\includegraphics[]{Figures/intResults}\n\t}\n\t\\caption{Illustration of the true orbit position samples (Black) versus the RK4 (Blue), RK2 (Green) and RK1 (Red) integration results.}\n\t\\label{fig:intResults}\n\\end{figure}\nThe resulting data points are illustrated in Figure~\\ref{fig:intResults}.  The large 120 second time step causes all the integration methods to significantly deviate from the truth locations (shown in black).  The integrated validation test ensures that the BSK integrations yield the same integration corruptions to validate the mathematical implementation.\n\n\\subsection{Test success criteria}\nThe integrated position states are checked at 5 even data points along the simulation time again pre-computed truth answers.  These truth answers were generated in Mathematica implementing the same initial conditions, and replicating the integration math.    The accuracy threshold is set to 1 meter, a small value compared to the 7,000,000 meter near-circular orbit radius.\n\n\n\n\\section{Test Parameters}\nThree tests are run controlled through a single test parameter called {\\tt integratorCase}.  
The possible values are shown in Table~\ref{tbl:intCases}.\n\n\\begin{table}[htbp]\n\t\\caption{Error tolerance for each test.}\n\t\\label{tbl:intCases}\n\t\\centering \\fontsize{10}{10}\\selectfont\n\t\\begin{tabular}{ c | c } % Column formatting,\n\t\t\\hline\\hline\n\t\t\\textbf{Test}   \t      \t               & \\textbf{Tolerated Error} \t\t\t\t\t\t           \\\\ \\hline\n\t\t``rk4''                           & 1 meter\t  \\\\\n\t\t``euler''                           & 1 meter\t  \\\\\n\t\t``rk2''                           & 1 meter\t  \\\\\n\t\t\\hline\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\section{Test Results}\nAll integration checks within the integrated test {\\tt scenarios/test\\_scenarioIntegrators.py} passed.  Table~\\ref{tbl:intResults} shows the test results, while Figure~\\ref{fig:intResultsPython} shows the resulting trajectories for each integration test.\n\n\\input{AutoTeX/scenarioIntegrators.tex}\n\n\\begin{table}[h]\n\t\\caption{Integration test results.}\n\t\\label{tbl:intResults}\n\t\\centering \\fontsize{10}{10}\\selectfont\n\t\\begin{tabular}{c | c | p{4in} } % Column formatting,\n\t\t\\hline\\hline\n\t\t\\textbf{Test} \t\t\t& \\textbf{Pass/Fail} \t & \\textbf{BSK Error Notes}\n\t\t\\\\ \\hline\n\t\t``rk4''\t\t  \t&\n\t\t\\input{AutoTex/IntegratorsTestMsg-rk4}      \t  &\n\t\t\\input{AutoTex/IntegratorsMsg-rk4}\n\t\t\\\\ \\hline\n\t\t``euler''\t   \t           \t&\n\t\t\\input{AutoTex/IntegratorsTestMsg-euler}           \t\t&      \n\t\t\\input{AutoTex/IntegratorsMsg-euler}  \n\t\t\\\\ \\hline\n\t\t``rk2''      \t&\n\t\t\\input{AutoTex/IntegratorsTestMsg-rk2}\n\t\t&\n\t\t\\input{AutoTex/IntegratorsMsg-rk2}\n\t\t\\\\\n\t\t\\hline\\hline\n\t\\end{tabular}\n\\end{table}\n", "meta": {"hexsha": "45ec2656837229a29f5823b542c76f688d63cda3", "size": 4667, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/simulation/dynamics/Integrators/_Documentation/secTest.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/simulation/dynamics/Integrators/_Documentation/secTest.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/simulation/dynamics/Integrators/_Documentation/secTest.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6696428571, "max_line_length": 401, "alphanum_fraction": 0.7124491108, "num_tokens": 1312, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.779992900254107, "lm_q2_score": 0.7490872187162396, "lm_q1q2_score": 0.5842827122697622}}
{"text": "\\section{SSC Cashflow and NPV Models}\n\\label{sec:SSCNPV}\n\nImagine a portfolio of candidate projects, in which some of the decisions involve\neither replacing an item now or postponing replacement and facing potentially higher\nmaintenance and replacement costs. We choose to assume that the item must either\nbe replaced now or in the future, and it is in this context we describe the\nappropriate cashflow calculations. We then extend the discussion to allow the\nplanned replacement to occur in year 2 or 3, for example, rather than in year 1.\n\nWe assume that doing nothing is not an option, since the items under consideration\nare of significant importance and, if not replaced in due course, would impose\nan unacceptable risk to either safety or production.\n\nNotation:\n\n\\begin{itemize}\n\t\\item  \\( p \\) : probability of item failure for one year\n\n\t\\item  \\( C_{P} \\) : cost of planned replacement\n\n\t\\item  \\( C_{U} \\) : cost of unplanned replacement\n\n\t\\item  \\( C_{D} \\) : cost of shutdown per day\n\n\t\\item  \\( D \\) : number of days plant is off-line, if a shutdown occurs\n\n\t\\item  \\( N \\) : number of years\n\n\t\\item  \\( R \\) : discount rate\n\\end{itemize}\n\n\n%\\begin{comment}\nWe could incorporate additional parameters, such as weekly or monthly inspection costs,\nfixed costs of shutdown in addition to the daily costs specified above, etc. That said,\nthe setting we describe allows us to illustrate key ideas in the cashflow\ncalculations for computing NPV.\n\nWe further assume that, if we do not replace the item, its failure time is a random\nvariable that follows a geometric distribution where the probability of failure in\none year is denoted by  \\( p \\) (i.e., the probability of survival over one year\nis  \\( 1-p \\) ). Thus, if the plant faces a 20-year decision period, the probability\nof survival up to year  \\( t \\) is  \\(  \\left( 1-p \\right) ^{t} \\), and the probability\nof failing in year  \\( t \\)  is given by  \\( p \\left( 1-p \\right) ^{t-1} \\).\n\nHere is pedantic but useful construct for thinking about the calculations that follow\nis to visualize a {\\it coin flip}  for each year, yielding a {\\it fail}  or {\\it no fail}\nevent for that given year. Immediately after the coin flip, appropriate costs are incurred.\nIn other words, this discrete view of time with Bernoulli trials is useful to simplify the\nlogic of the calculations, rather than viewing time as a continuum and teasing\nout {\\it what happened when} during a given year.\n\nIf the item is not replaced today (time  \\( t=1 \\) ), we can compute the expected\nreplacement cost in any year  \\( t=1, 2, \\ldots ,N-1 \\)  as:\\par\n\n%\\begin{comment}\n\n\n\\begin{equation}\\label{npv_1}\n\\mbox{Expected Replacement Cost in Year } t=C_{U}p \\left( 1-p \\right) ^{t-1}\n\\end{equation}\n\n\nHere, we incur this cost only if the coin flips yielded $``$no fail$\"$  events in\nyears  \\( t=1, 2, \\ldots ,t-1 \\), then a $``$fail$\"$  event in year  \\( t \\)\n(i.e., we incur the cost through the geometric random variable\u2019s probability mass of having\nthe first failure in year  \\( t \\), which is given by  \\( p \\left( 1-p \\right) ^{t-1} \\)).\n\nSince we assume the item must be replaced in year  \\( N \\)  if it has not already failed,\nthe expected replacement cost does \\textit{not} depend on the result of a coin flip\nin year  \\( N \\), in the same way that replacement in year 1 precludes dependence on\nthe year 1 coin flip. 
Rather, we incur this cost with certainty in year  \( N \),\nas long as the item has not failed in previous years\n\(  t=1, 2, \ldots ,N-1 \). In other words, the expected cost is given by:\par\n\n%\begin{comment}\n\n\begin{equation}\label{npv_2}\n\mbox{Expected Replacement Cost in Year }N=C_{P} \left( 1-p \right) ^{N-1}\n\end{equation}\n\nwhere the planned replacement cost is used.\n\nIn order to illustrate the computation of the cash flows, we further assume that,\nif the item is not replaced now, the plant faces a loss of revenue due to a shutdown\nat a cost of  \( C_{D} \)  per day. In this case, the expected downtime cost incurred in\nyear  \( t=1, 2, \ldots ,N-1 \)  is given by:\par\n\n\begin{equation}\label{npv_3}\n\mbox{Expected Downtime Cost in Year }t=DC_{D}p \left( 1-p \right) ^{t-1}.\n\end{equation}\n\nAgain, we assume that the planned replacement in year  \( t=N \)  precludes dependence\non the coin flip and therefore also precludes incurring any downtime cost in that year,\nthough we acknowledge other assumptions are possible. Thus, for practical purposes,\nthere is no coin flip in year  \( N \).\n\nMore generally, if the item is not replaced now, the plant will face the possibility\nof increased costs due to reliability issues. Such costs include increased\ninspection costs, downtime costs for a week-long shutdown, lost revenue from a 6-hour\nshutdown, costs associated with the emergency replacement of an item, etc. We can express\nthe above costs in the following functional form:\n\( Re \_ Cost \left( p,t,C_{1},C_{2}, \cdots ,C_{M} \right)  \),\nwhere  \( C_{1},C_{2}, \cdots ,C_{M} \)  are costs relevant for the considered item, and,\nin general,  \( p \)  could be a vector that incorporates multiple types of shutdown. In\nour case, the expected downtime cost in year  \( t=1, 2, \ldots ,N \)  is given by a function:\n\( Re \_ Cost \left( p,t,C_{P},C_{U},C_{D},D,N \right)  \), with just a scalar value for  \( p \).\n\nThere are two relevant time-series of cash flows: one for replacing the item now and\nanother for replacing it later, either at failure or at the horizon year  \( N \).\nFirst, consider replacing the item today; in this case, we simply incur the cost of\nplanned replacement at time  \( t=1 \) (i.e., we incur cost  \( C_{P} \)).\n\nThe second time-series of cash flows is for replacing the item in the future.\nIn this case, for every year  \( t \), we have the expected replacement cost\nand downtime cost. So, for any year  \( t=1, 2, \ldots ,N-1 \), the cash\nflows will be computed as:\par\n\n\begin{equation}\label{npv_4}\n\mbox{Cash Flow Replaced in Year }t= - \left[  \left( C_{U}+DC_{D} \right)  p \left( 1-p \right) ^{t-1} \right]\n\end{equation}\n\n\nand for the final year as:\par\n\n\begin{equation}\label{npv_5}\n\mbox{Cash Flow Replaced in Year }N= - \left[ C_{P} \left( 1-p \right) ^{N-1} \right] .\n\end{equation}\n\n\nNote the minus sign in front of the cash flows because all of them are costs\n(i.e., cash outflows). 
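\n\nThese expected cash flows are simple to evaluate; the following minimal sketch (standalone Python using the notation above, not the LOGOS implementation) may help fix ideas:\n\n\begin{lstlisting}[language=python]\ndef cash_flow_replace_later(t, N, p, Cp, Cu, Cd, D):\n    # Expected (negative) cash flow in year t = 1, ..., N when\n    # replacement is deferred to failure or to the horizon year N.\n    if t < N:\n        return -(Cu + D * Cd) * p * (1.0 - p) ** (t - 1)\n    return -Cp * (1.0 - p) ** (N - 1)\n\end{lstlisting}\n\n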
The timelines of the two options are illustrated as follows: \par\n\n\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 1 starts here %%%%%%%%%%%%%%%%%%%%\n\n\begin{figure}\n    \centering\n    \centerline{\includegraphics[scale=1]{image1.pdf}}\n    \caption{Graphical representation of option 1: replace now.}\n    \label{fig:_Graphical_representation_of_option_1_replace_now}\n\end{figure}\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 1 Ends here %%%%%%%%%%%%%%%%%%%%\n\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 2 starts here %%%%%%%%%%%%%%%%%%%%\n\n\begin{figure}\n    \centering\n    \centerline{\includegraphics[scale=1]{image2.pdf}}\n    \caption{Graphical representation of option 2: replace later.}\n    \label{fig:_Graphical_representation_of_option_1_replace_later}\n\end{figure}\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 2 Ends here %%%%%%%%%%%%%%%%%%%%\n\n\nThe NPV of option 1 is: \par\n\n\begin{equation}\label{npv_6}\n\mbox{NPV option 1}= -C_{P}\n\end{equation}\n\nThe NPV of option 2 is:\par\n\n\begin{equation}\label{npv_7}\n\mbox{NPV option }2= - \left[  \sum _{t=1}^{N-1}\frac{ \left( C_{U}+DC_{D} \right) p \left( 1-p \right) ^{t-1}}{ \left( 1+R \right) ^{t-1}}+\frac{C_{P} \left( 1-p \right) ^{N-1}}{ \left( 1+R \right) ^{N-1}} \right]\n\end{equation}\n\n\nWe can compare these options in two ways. The first is to simply compute the\nNPV as the difference between the NPV of option 1 and that of option 2 (i.e.,\n\( \mbox{NPV}=\mbox{NPV option }1-\mbox{NPV option }2 \)). The resulting equation is:\par\n\n\begin{equation}\label{npv_7b}\n\mbox{NPV}= -C_{P}+ \left[  \sum _{t=1}^{N-1}\frac{ \left( C_{U}+DC_{D} \right) p \left( 1-p \right) ^{t-1}}{ \left( 1+R \right) ^{t-1}}+\frac{C_{P} \left( 1-p \right) ^{N-1}}{ \left( 1+R \right) ^{N-1}} \right]\n\end{equation}\n\n\nIf the project in question is the only one under consideration, and if  \( \text{NPV} \)  $>$ 0,\nthe decision is to replace the item today; otherwise, we replace it later.\n%As we discuss in detail in Section 6, we\ employ an optimization model when multiple projects are considered simultaneously, and we need to stay within annual budgets in terms of, for example, capital costs.  \par\n\nWe now extend the above logic to the planned replacement occurring in year  \( T_{0} \).\nWe had assumed  \( T_{0}=1 \), but now we allow for\ndelaying this planned replacement to a later year, albeit at the risk of incurring\na failure prior to  \( T_{0} \), along with associated unplanned\ replacement\nand downtime costs.  If the planned replacement happens at time  \( T_{0} \), the\ncorresponding time-series of cash flows will become as follows: one for replacing the item\neither at failure before  \( T_{0} \)  or at  \( T_{0} \),\nand another for replacing it later, either at failure or at the horizon year  \( N \).\par\n\nThe first time-series of cash flows is for replacing the item at  \( T_{0} \).\nIn this case, for every year  \( t=1, 2, \ldots, T_{0}-1 \), we have the expected\nreplacement cost and downtime cost. 
So, for any year  \( 1, 2,\ldots, T_{0}-1 \),\nthe cash flows will be computed as:\par\n\n\begin{equation}\label{npv_8}\n\mbox{Cash Flow Replaced in Year }t= - \left[  \left( C_{U}+DC_{D} \right)  p \left( 1-p \right) ^{t-1} \right]\n\end{equation}\n\nand for  \( t=T_{0} \)  the expected cash flow is:\par\n\n\begin{equation}\label{npv_9}\n\text{Cash Flow Replaced in Year }T_{0}= - \left[ C_{P} \left( 1-p \right) ^{T_{0}-1} \right]\n\end{equation}\n\nThe timeline of this option is illustrated as follows: \par\n\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 3 starts here %%%%%%%%%%%%%%%%%%%%\n\n\begin{figure}\n    \centering\n    \centerline{\includegraphics[scale=1]{image3.pdf}}\n    \caption{Graphical representation of option 1: planned replacement at $T_0$.}\n    \label{fig:_Graphical_representation_of_option_1_planned_replacement_at_}\n\end{figure}\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 3 Ends here %%%%%%%%%%%%%%%%%%%%\n\n\begin{equation}\label{npv_10}\n\mbox{NPV option }1= - \left[  \sum _{t=1}^{T_{0}-1}\frac{ \left( C_{U}+DC_{D} \right) p \left( 1-p \right) ^{t-1}}{ \left( 1+R \right) ^{t-1}}+\frac{C_{P} \left( 1-p \right) ^{T_{0}-1}}{ \left( 1+R \right) ^{T_{0}-1}} \right]\n\end{equation}\n\nNote that, if  \( T_{0}=1 \),  then the first term yields zero, and the NPV of\noption 1 reduces to that discussed above (i.e., it equals  \( -C_{P} \)).\par\n\nThe second time-series of cash flows is for attempting to delay replacement of\nthe item to time  \( N \), and incurring the risk of unplanned replacement\nand downtime in the meantime. It is the same as we calculated before. \par\n\n\begin{equation}\label{npv_11}\n\mbox{NPV option }2= - \left[  \sum _{t=1}^{N-1}\frac{ \left( C_{U}+DC_{D} \right) p \left( 1-p \right) ^{t-1}}{ \left( 1+R \right) ^{t-1}}+\frac{C_{P} \left( 1-p \right) ^{N-1}}{ \left( 1+R \right) ^{N-1}} \right]\n\end{equation}\n\nWe can compute  \( \text{NPV} \)  as:\par\n\n\begin{eqnarray}\label{npv_12}\n\mbox{NPV}&=&\mbox{NPV Option }1-\mbox{NPV Option }2\\\n&=& \left[  \sum _{t=T_{0}}^{N-1}\frac{ \left( C_{U}+DC_{D} \right) p \left( 1-p \right) ^{t-1}}{ \left( 1+R \right) ^{t-1}}+\frac{C_{P} \left( 1-p \right) ^{N-1}}{ \left( 1+R \right) ^{N-1}}-\frac{C_{P} \left( 1-p \right) ^{T_{0}-1}}{ \left( 1+R \right) ^{T_{0}-1}} \right]\n\end{eqnarray}\n\nA new RAVEN \textbf{External Model} with \xmlAttr{subType} \xmlString{LOGOS.IncrementalNPV}\nwas created to compute the NPV described above.\n\nExample RAVEN Input \xmlNode{ExternalModel} XML:\n\begin{lstlisting}[style=XML]\n<ExternalModel name=\"rvi_model\" subType=\"LOGOS.IncrementalNPV\">\n  <variables>fp,rvi_npv_a,rvi_npv_b</variables>\n  <alias variable=\"rvi_p_failure\" type=\"input\">fp</alias>\n  <Cp>19.82</Cp>\n  <Cu>39.64</Cu>\n  <fp>0.05</fp>\n  <Cd>1.</Cd>\n  <D>30</D>\n  <options>\n    <Td>1, 4</Td>\n    <output>rvi_npv_a,rvi_npv_b</output>\n  </options>\n  <discountRate>0.03</discountRate>\n  <startTime>2019</startTime>\n  <lifetime>20</lifetime>\n</ExternalModel>\n\end{lstlisting}\n\nIn order to use this external model to compute NPV, \xmlString{LOGOS.IncrementalNPV}\nshould always be used for \xmlAttr{subType}. 
Except for the common nodes \xmlNode{variables}\nand \xmlNode{alias}, this entity accepts the following sub-nodes:\n\begin{itemize}\n  \item \xmlNode{Cp}, \xmlDesc{float, required parameter}, specifies the cost of planned replacement.\n  \item \xmlNode{Cu}, \xmlDesc{float, required parameter}, specifies the cost of unplanned replacement.\n  \item \xmlNode{Cd}, \xmlDesc{float, required parameter}, specifies the cost of shutdown per day.\n  \item \xmlNode{D}, \xmlDesc{integer, required parameter}, specifies the number of days the\n  plant is off-line, if a shutdown occurs.\n  \item \xmlNode{fp}, \xmlDesc{float, required parameter}, specifies the probability\n  of item failure over one year.\n  \item \xmlNode{discountRate}, \xmlDesc{float, required parameter}, specifies the discount rate.\n  \item \xmlNode{inflation}, \xmlDesc{float, optional parameter}, specifies the inflation rate.\n  \default{0.0}\n  \item \xmlNode{tax}, \xmlDesc{float, optional parameter}, specifies the tax rate.\n  \default{0.0}\n  \item \xmlNode{HardSavings}, \xmlDesc{float, optional parameter}, specifies the hard\n  savings that would be added to the NPV calculation.\n  \default{0.0}\n  \item \xmlNode{count}, \xmlDesc{integer, optional parameter}, specifies the number\n  of the same items/components that need to be replaced at the same time.\n  \default{1.0}\n  \item \xmlNode{startTime}, \xmlDesc{integer, required parameter}, specifies the\n  start time of the project.\n  \item \xmlNode{lifetime}, \xmlDesc{integer, required parameter}, specifies the lifetime\n  of the project.\n  \item \xmlNode{options}, \xmlDesc{optional parameter}, specifies options to compute\n  NPVs of planned replacements occurring in different years relative to the start time.\n  \begin{itemize}\n    \item \xmlNode{Td}, \xmlDesc{comma separated integers, required parameter},\n    specifies the lengths (in years) by which the planned replacement is delayed.\n    \item \xmlNode{output}, \xmlDesc{comma separated strings, required parameter},\n    specifies the output variables corresponding to the different specified lengths of\n    delays in \xmlNode{Td}. 
In order to communicate with RAVEN, these\n    variables need to be listed under node \\xmlNode{variables}.\n    \\nb These variables are defined by the user and would be used to store the\n    calculated NPVs.\n  \\end{itemize}\n\\end{itemize}\n\nThe parameters \\textbf{Cp, Cu, fp, Cd, inflation, and tax} can be sampled by RAVEN.\nIf the user specifies these parameters in the input XML \\xmlNode{ExternalModel},\nthe values for these parameters will be replaced by the sampled values from RAVEN.\n\nExample LOGOS Incremental NPV output CSV:\n\\begin{lstlisting}[language=python]\nfp,rvi_npv_a,rvi_npv_b\n...\n\\end{lstlisting}\n\n\\nb In order to compute the NPVs, the TEAL plugin is required.\nRefer to ~\\url{https://github.com/idaholab/raven/wiki/Plugins} for\ndetails on how to access and install the plugins.\n\nThe whole calculation flow is depicted in Fig.~\\ref{fig:LogosRAVEN}.\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 4 starts here %%%%%%%%%%%%%%%%%%%%\n\n\\begin{figure}\n    \\centering\n    \\centerline{\\includegraphics[scale=0.5]{image11.jpg}}\n    \\caption{Risk-informed capital budgeting via RAVEN and RAVEN plugins.}\n    \\label{fig:LogosRAVEN}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%% Figure/Image No: 4 Ends here %%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "a5997eea77738449b049ddbc7b850cd7f798ba89", "size": 15581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/include/CashflowAndNPVModels.tex", "max_stars_repo_name": "dgarrett622/LOGOS", "max_stars_repo_head_hexsha": "7234b8b5e80bc79526b4cbced7efd5ae482f7c44", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-05-04T08:42:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-03T13:14:12.000Z", "max_issues_repo_path": "doc/user_manual/include/CashflowAndNPVModels.tex", "max_issues_repo_name": "albernsrya/LOGOS", "max_issues_repo_head_hexsha": "535a25ccd3a83259b615acd569257d751fe00439", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 28, "max_issues_repo_issues_event_min_datetime": "2021-01-12T17:41:24.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-03T18:20:16.000Z", "max_forks_repo_path": "doc/user_manual/include/CashflowAndNPVModels.tex", "max_forks_repo_name": "albernsrya/LOGOS", "max_forks_repo_head_hexsha": "535a25ccd3a83259b615acd569257d751fe00439", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-02-05T17:18:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T14:36:42.000Z", "avg_line_length": 46.3720238095, "max_line_length": 274, "alphanum_fraction": 0.6968743983, "num_tokens": 4746, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7490872131147275, "lm_q1q2_score": 0.5842827079006225}}
{"text": "% !TeX root = ../main.tex\n\\section{Conditional Term Rewrite Systems}\nIn this section, we study some basic notions covering \\textit{conditional term rewriting systems}. This type of rewriting was first conceived to reason formally about abstract data types (in the context of algebraic specification). Also, and more important to the study we pretend to carry on, conditional rewriting plays an important role in the integration of functional and logic programming paradigms. The reader is referred to \\cite{Ohlebusch:2010:ATT:1965591} (Chapter 7) for most of the statements we just give here without a proof and a more detailed discussion on the subject.\n\n\\begin{definition}\n    A \\textit{conditional term rewriting system} (abbreviated as CTRS) is a pair $(\\signature, \\trs)$ consisting of a signature $\\signature$ and a set $\\trs$ of \\textit{conditional rewrite rules}. Each of these rewrite rules is of the form $l \\contr r \\Leftarrow s_1 = t_1, \\dots, s_n = t_n$ with $l, r, s_i, t_i \\in \\terms$, for all $1 \\leq i \\leq n$. The term $l$ may not be a variable.\n\\end{definition}\n\nMost of the time we abbreviate the conditional rule $l \\contr r \\Leftarrow s_1 = t_1, \\dots, s_n = t_n$ as just $l \\contr r \\Leftarrow c$. If a rewrite rule has no conditions, we write $l \\contr r$, require $\\vars{l} \\supseteq \\vars{r}$, and call $l \\contr r$ an unconditional rewrite rule.\n\nAs in \\cite{Middeldorp1994}, rewrite rules $l \\contr r \\Leftarrow c$ will be classified according to their distribution of variables among $l, r$, and $c$, as follows:\n\n\\begin{table}[h!]\n    \\centering\n    \\begin{tabular}{c | l }\n    \\cline{1-2}\n        Type & Requirement \\\\\n    \\hline\n    1 & $\\vars{r} \\cup \\vars{c} \\subseteq \\vars{l}$\\\\\n    2 & $\\vars{r} \\subseteq  \\vars{l}$ \\\\\n    3 & $\\vars{r} \\subseteq \\vars{l} \\cup \\vars{c}$ \\\\\n    4 & No restrictions\n    \\end{tabular}\n\\end{table}\n\n\\begin{definition}\n    For every CTRS $(\\signature,\\trs)$ we associate the system $(\\signature, \\trs_u)$, where\n    $$\\trs_u := \\{ l \\contr r \\mid l \\contr r \\Leftarrow c \\in \\trs \\} $$\n\\end{definition}\n\nWe can interpret $=$ in the rules of a conditional term in different ways, which leads to different rewrite relations.\n\n\\begin{definition}\n    \\begin{enumerate}\n        \\item In an \\textit{oriented} CTRS $(\\signature, \\trs)$ the $=$ symbol in the conditions of the rewrite rules is interpreted as \\textit{reachability}, i.e. $l \\contr r \\Leftarrow s_1 \\contr t_1, \\dots, s_n \\contr t_n$.\n        \\item A \\textit{normal} CTRS $(\\signature, \\trs)$ is an oriented CRTS in which the rewrite rules are subjected to the additional constraint that every $t_j$ is a ground normal form with respect to $\\trs_u$. This rewrite rules will also be written as $l \\contr r \\Leftarrow s_1 \\contr t_1, \\dots, s_n \\contr t_n$.\n        \\item In a \\textit{join} CTRS $(\\signature, \\trs)$ the $=$ symbol in the conditions of the rewrite rules is interpreted as \\textit{joinability}. Rewrite rules of a join CTRS will be written as $l \\contr r \\Leftarrow s_1 \\downarrow t_1, \\dots, s_n \\downarrow t_n$.\n        \\item Interpreting the $=$ symbol in the conditions as \\textit{conversion} leads to semiequational CTRSs.\n    \\end{enumerate}\n\\end{definition}\n\nFrom now on we just consider \\textit{oriented} CTRSs. Let us formally define the rewrite relation induced by this type of CTRS.\n\\begin{definition}\n    Let $\\trs$ be a CTRS. 
We inductively define \\textit{unconditional} TRSs $\\trs_n$ for $n \\in \\mathbb{N}$ by:\n    \\begin{align*}\n        \\trs_0 &= \\emptyset \\\\\n        \\trs_{n+1} &= \\{ l\\sigma \\contr r\\sigma \\mid l \\contr r \\Leftarrow s_1 \\contr t_1, \\dots, s_n \\contr t_n \\in \\trs \\text{ and } \\\\\n        &s_j\\sigma \\contr_{\\trs_n}^* t_j \\sigma \\text{ for all } j \\in \\{ 1, \\dots, n \\} \\}\n    \\end{align*}\n\\end{definition}\n\nThe rewrite relation associated with a CTRS $\\trs$ is defined by:\n\\begin{displaymath}\n    \\contr_\\trs :=  \\bigcup\\limits_{n \\in \\mathbb{N}} \\contr_{\\trs_n}\n\\end{displaymath}\n\nWe say that $s \\contr_{\\trs_{n+1}} t$ iff there exists a rewrite rule $\\rho : l \\contr r \\Leftarrow s_1 \\contr t_1, \\dots, s_n \\contr t_n$ in $\\trs$, a substitution $\\sigma$, a position $p \\in \\pos{s}$ such that $\\restr{s}{p} = l \\sigma$ and $t = s[r\\sigma]_p$, and $s_j \\sigma \\stc_{\\trs_n} t_j \\sigma$ for all $1 \\leq j \\leq n$. We say that $s \\contr_\\trs t$ is a rewrite step or reduction step using $\\trs$ at the redex $l\\sigma$. We also omits the subscript $\\trs$ when $\\trs$ is meaningful.\n\nThe depth of a rewrite step $s \\contr t$ is the least $n$ such that $s \\contr_{\\trs_n} t$. This notion also extends to reduction sequences $s \\stc t$.\n\nIn order to work on conditional narrowing later we define the following notion.\n\\begin{definition}\\label{definition:ctrs-approximation-rewrite-relation}\n    Let $(\\signature, \\trs)$ be a join CTRS.We assume that $eq$ and $true$ are function symbols that do not occur in $\\sigma$. For every rewrite rule $\\rho: l \\contr r \\Leftarrow s_1 \\downarrow t_1, \\dots, s_n \\downarrow t_n$ in $\\trs$, we define:\n    $$n(\\rho) := eq(s_1, t_1) \\contr true, \\dots, eq(s_n, t_n) \\contr true$$\n    Then define $$n(\\trs) = \\{ eq(x,x) \\contr true \\} \\cup \\bigcup\\limits_{\\rho \\in \\trs} \\{ n(\\rho) \\}.$$\n\\end{definition}\nNote that $n(\\trs)$ is a normal CTRS. We will use it later on to define conditional narrowing.\n\n", "meta": {"hexsha": "2247b02ea8d5111daec63397d97a3009d6da788c", "size": 5300, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/ctrs.tex", "max_stars_repo_name": "deividrvale/report-narrowing", "max_stars_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/ctrs.tex", "max_issues_repo_name": "deividrvale/report-narrowing", "max_issues_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/ctrs.tex", "max_forks_repo_name": "deividrvale/report-narrowing", "max_forks_repo_head_hexsha": "1e3ce34a1afb5268b4307fcc9af9374d2e121a27", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.8115942029, "max_line_length": 585, "alphanum_fraction": 0.6924528302, "num_tokens": 1620, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5842827068243018}}
{"text": "%!TEX root = ../main.tex\n\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\n\\subsection{Derivation of the Optimal Task Choice} \\label{app:derive_lmb_opt}\nI start from the first order condition of the utility maximization:\n\n\\begin{equation}\n\t\\frac{\\partial u_{i, t}(b_i, \\lambda_{i,t}, \\phi, \\theta, \\tilde{w}_{i,t})}{\\partial \\lambda_{i,t}} = \\tilde{w}_{i,t} + 2 \\phi \\theta \\frac{(|b_i - \\lambda_{i,t}|)^\\phi}{b_i - \\lambda_{i,t}} \\overset{!}{=} 0 \\tag{\\ref{eq:first_derivative}}\n\\end{equation}\nDefining $x \\equiv b_i - \\lambda_{i,t}$, this can be restated as follows.\n\n\\begin{alignat*}{3}\n\t{}\t\t\t\t\t&& \\tilde{w}_t + 2 \\phi \\theta \\frac{(|x|)^\\phi}{x} \t&\\overset{!}{=} 0 \\\\\n\t\\Leftrightarrow \t&& \\frac{(|x|)^\\phi}{x} \t\t\t\t\t\t\t\t&= - \\frac{\\tilde{w}_t}{2 \\phi \\theta} \\label{eq:foc_restated} \\\\\n\\end{alignat*}\nThe absolute value function for any real number $x$ is defined as follows.\n\n\\begin{equation}\n\t|x| = \\left\\{\n\t\t\\begin{array}{ll}\n\t\t\tx, \\: & \\: \\text{if $x \\geq 0$} \\nonumber\\\\\n\t\t\t-x, \\: & \\: \\text{if $x < 0$} \\nonumber\\\\\n\t\t\\end{array}\n\t\\right.\n\\end{equation}\nUsing this definition, two cases can be distinguished when solving for the optimal task choice $\\lambda^*_{i,t}$. \\\\\n\nCase (1): $x \\geq 0$ (i.e., $b_i \\geq \\lambda_{i,t}$).\n\n\\begin{alignat*}{3}\n\t{}\t\t\t\t&& \\frac{x^\\phi}{x} \t&= - \\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta} \\\\\n\t\\Leftrightarrow && x\t\t\t\t\t&= (- \\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta})^{\\frac{1}{\\phi-1}}, \\text{with $x = b_i - \\lambda_{i,t}$} \\\\\n\t\\Leftrightarrow && \\lambda^*_{i,t} \t\t&= b_i - (- \\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta})^{\\frac{1}{\\phi-1}}\n\\end{alignat*}\n\nCase (2): $x < 0 $ (i.e., $b_i < \\lambda_{i,t}$).\n\n\\begin{alignat*}{3}\n\t{}\t\t\t\t&& \\frac{(-x)^\\phi}{x} \t\t\t\t\t&= - \\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta} \\\\\n\t\\Leftrightarrow && \\frac{(-x)^\\phi}{-x} \t\t\t\t&= \\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta} \\\\\n\t\\Leftrightarrow && - x \t\t\t\t\t\t\t\t\t&= (\\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta})^\\frac{1}{\\phi-1}, \\text{with $x = b_i - \\lambda_{i,t}$} \\\\\n\t\\Leftrightarrow && \\lambda^*_{i,t}\t\t\t\t\t\t&= b_i + (\\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta})^\\frac{1}{\\phi-1} \\\\\n\\end{alignat*}\nWhich can be rearranged to the expression in equation (\\ref{eq:lmb_opt}):\n\\begin{equation} \\tag{\\ref{eq:lmb_opt}}\n\t\\lambda^*_{i,t} = \\left\\{\n\t\\begin{array}{ll}\n\t\tb_i - (\\frac{- \\tilde{w}_{i,t}}{2 \\phi \\theta})^{\\frac{1}{\\phi -1}}, \\: & \\: \\text{if $b_i \\geq \\lambda_{i,t}$}\\\\\n\t\tb_i + (\\frac{\\tilde{w}_{i,t}}{2 \\phi \\theta})^{\\frac{1}{\\phi - 1}}, \\: & \\: \\text{if $b_i < \\lambda_{i,t}$}\n\t\\end{array}\n\\right.\n\\end{equation}\n\\end{document}", "meta": {"hexsha": "6730cde757d36d1b9e43ff45525601b9be20528c", "size": 2528, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "latex_files/Appendix/derive_opt_lmb.tex", "max_stars_repo_name": "DaLueke/estimating_skill_prices", "max_stars_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "latex_files/Appendix/derive_opt_lmb.tex", "max_issues_repo_name": "DaLueke/estimating_skill_prices", "max_issues_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_issues_repo_licenses": 
["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "latex_files/Appendix/derive_opt_lmb.tex", "max_forks_repo_name": "DaLueke/estimating_skill_prices", "max_forks_repo_head_hexsha": "bc895c8b0b0439f86c1f2dd53b34108f1eaf69a1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.8148148148, "max_line_length": 240, "alphanum_fraction": 0.573971519, "num_tokens": 1057, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5842827035314827}}
{"text": "\\documentclass{article}\n\\usepackage{mathtools}\n\\usepackage{amssymb}\n\\usepackage{tensor}\n\\usepackage{tikz-cd}\n\n\\begin{document}\n\n\\section*{Double backward of a function composition}\n\nIn a reverse autodiff, given a forward function $g\\circ f:\\mathbf x\\rightarrow \\mathbf y\\rightarrow \\mathbf z$, one needs to implement a backward function $B[g\\circ f]:(\\bar{\\mathbf z},\\mathbf x)\\rightarrow\\bar{\\mathbf x}$, where $\\bar\\square\\equiv{\\partial L_0(\\mathbf z)}/{\\partial \\square}$ and $L_0$ is some arbitrary unknown function.\nA double backward is simply the backward of $B[g\\circ f]$, that is, $B^2[g\\circ f]:(\\tilde{\\bar{\\mathbf x}},\\bar{\\mathbf z},\\mathbf x)\\rightarrow(\\tilde{\\bar{\\mathbf z}},\\tilde{\\mathbf x})$, where $\\tilde{\\bar \\square}\\equiv{\\partial L_1(\\bar{\\mathbf x})}/{\\partial\\bar\\square}$, $\\tilde\\square\\equiv{\\partial L_1(\\bar{\\mathbf x})}/{\\partial\\square}$.\n\nEinstein notation is used:\n$$\n\\frac{\\partial f^i(\\mathbf x)}{\\partial x^j}\\equiv f\\indices{^\\prime^i_j}(\\mathbf x),\\qquad\n\\frac{\\partial^2f^i(\\mathbf x)}{\\partial x^j\\partial x^k}\\equiv f\\indices{^{\\prime\\prime}^i_{jk}}(\\mathbf x),\\qquad\n\\sum_j A_{ij}x^j\\equiv A_{ij}x^j\n$$\n\n\\paragraph{Forward \\& backward}\n\n\\begin{tikzcd}\nx^i \\ar[r,\"f\"]\\ar[d]\n& f^\\mu(\\mathbf x)=y^\\mu \\ar[r,\"g\"]\\ar[d]\n& g^a(\\mathbf y)=z^a \n\\\\\n\\bar x_i=\\bar y_\\mu f\\indices{^\\prime^\\mu_i}(\\mathbf x) & \\bar y_\\mu=\\bar z_ag\\indices{^\\prime^a_\\mu}(\\mathbf y) \\ar[l,\"B\\lbrack f\\rbrack\"'] & \\bar z_a \\ar[l,\"B\\lbrack g\\rbrack\"']\n\\end{tikzcd}\n\n\\paragraph{Double backward}\n\n\\begin{tikzcd}\n\\tilde{\\bar x}^i \\ar[r,\"B^2_{(1)}\\lbrack f\\rbrack\"] \\ar[rd,\"B^2_{(2)}\\lbrack f\\rbrack\"']\n& f\\indices{^\\prime^\\mu_i}(\\mathbf x)\\tilde{\\bar x}^i=\\tilde{\\bar y}^\\mu \\ar[r,\"B^2_{(1)}\\lbrack g\\rbrack\"] \\ar[rd,\"B^2_{(2)}\\lbrack g\\rbrack\"]\n& g\\indices{^\\prime^a_\\mu}(\\mathbf y)\\tilde{\\bar y}^\\mu=\\tilde{\\bar z}^a\n\\\\\n& \\bar y_\\mu f\\indices{^{\\prime\\prime}^\\mu_{ij}}(\\mathbf x)\\tilde{\\bar x}^j=\\tilde x_i \\ar[rd]\n& \\bar z_a g\\indices{^{\\prime\\prime}^a_{\\mu\\nu}}(\\mathbf y)\\tilde{\\bar y}^\\nu=\\tilde y_\\mu \\ar[d]\n\\\\\n& & \\tilde x_i:=\\tilde x_i+\\underbrace{\\tilde y_\\mu f\\indices{^\\prime^\\mu_i}(\\mathbf x)}_{=B[f]_i(\\tilde{\\mathbf y})}\n\\end{tikzcd}\n\n\\section*{\\texttt{SLogLinearDet}}\n\n\\paragraph{Forward}\n$$\nf(\\mathbf A):=\\det\\mathbf A\\equiv D,\\qquad\n%\n\\frac{\\partial D}{\\partial A^{ij}}\n\\equiv f'_{ij}\n=(\\operatorname{Cof}\\mathbf A)_{ij}\n\\equiv C_{ij}\n$$\n%\n$$\ng(\\mathbf c,\\mathbf D):=\\ln c_pD^p,\\qquad\n\\frac{\\partial g}{\\partial c_p}\\equiv g'^p=\\Psi^{-1}D^p,\\quad\n\\frac{\\partial g}{\\partial D^p}\\equiv g'_{p}=\\Psi^{-1}c_p\n$$\n%\n$$\nF(\\mathbf c,\\mathbf A^q)\n:=g(\\mathbf c,f(\\mathbf A^q))\n=\\ln c_p\\det\\mathbf A^p\\equiv\\ln\\Psi\\equiv P\n$$\n\\paragraph{Backward}\n$$\n\\begin{aligned}\nB[g]^p(\\bar P)\\equiv\\bar Pg'^p\\equiv\\bar c^p&=\\bar P\\Psi^{-1}D^p \\\\\nB[g]_p(\\bar P)\\equiv\\bar Pg'_p\\equiv\\bar D_p&=\\bar P\\Psi^{-1}c_p \\\\\nB[f]_{ij}(\\bar D)\\equiv\\bar Df'_{ij}\\equiv\\bar A_{ij}&=\\bar DC_{ij} \\\\\nB[F]^p(\\bar P)\\equiv\\bar c^p&=B[g]^p(\\bar P) \\\\\nB[F]_{pij}(\\bar P)\n=\\bar A_{pij}\n&=B[f]_{ij}(\\bar D_p)\n\\leftarrow\\bar D_p=B[g]_p(\\bar P)\n\\end{aligned}\n$$\n\\paragraph{Double 
backward}\n$$\nf''_{ij,kl}=A^{-1}_{ji}C_{kl}-A^{-1}_{li}C_{kj}\n$$\n%\n$$\ng''^{pq}=-\\Psi^{-2}D^pD^q,\\quad\ng''_{pq}=-\\Psi^{-2}c_pc_q,\\quad\ng''^p_q=-\\Psi^{-2}D^pc_q+\\Psi^{-1}\\delta^p_q\n$$\n%\n$$\n\\begin{aligned}\nB_{(1)}^2[f](\\tilde{\\bar{\\mathbf A}})\n\\equiv f'_{ij}\\tilde{\\bar A}^{ij}\n\\equiv\\tilde{\\bar D}\n&=C_{ij}\\tilde{\\bar A}^{ij}\n=\\operatorname{Tr}[\\mathbf C\\tilde{\\bar{\\mathbf A}}^\\mathrm T]\n\\\\\nB_{(2)}^2[f]_{ij}(\\tilde{\\bar{\\mathbf A}})\n\\equiv\\bar Df''_{ij,kl}\\tilde{\\bar A}^{kl}\n\\equiv\\tilde A_{ij}&=\n\\bar D(A^{-\\mathrm T}_{ij}C_{kl}\\tilde{\\bar A}^{kl}-A^{-\\mathrm T}_{il}\\tilde{\\bar A}^{kl}C_{kj}) \\\\\n&=\\bar D\\{\\mathbf A^{-\\mathrm T}(\\operatorname{Tr}[\\mathbf C\\tilde{\\bar{\\mathbf A}}^\\mathrm T]\\mathbf I-\\tilde{\\bar{\\mathbf A}}^\\mathrm T\\mathbf C)\\}_{ij}\\ (\\equiv\\bar DY_{ij})\n\\\\\nB_{(1)}^2[g](\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})\n\\equiv g'^p\\tilde{\\bar c}_p+g'_p\\tilde{\\bar D}^p\n\\equiv\\tilde{\\bar P}\n&=\\Psi^{-1}D^p\\tilde{\\bar c}_p+\\Psi^{-1}c_p\\tilde{\\bar D}^p=\\Psi^{-1}(\\mathbf D\\cdot\\tilde{\\bar{\\mathbf c}}+\\mathbf c\\cdot\\tilde{\\bar{\\mathbf D}})\n\\\\\nB_{(2)}^2[g]^p(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})\n\\equiv\\bar P(g''^{pq}\\tilde{\\bar c}_q+g''^p_q\\tilde{\\bar D}^q)\n\\equiv\\tilde c^p\n&=\\bar P(-\\Psi^{-2}D^pD^q\\tilde{\\bar c}_q+(-\\Psi^{-2}D^pc_q+\\Psi^{-1}\\delta^p_q)\\tilde{\\bar D}^q) \\\\\n&=\\bar P(-\\Psi^{-2}(\\mathbf D\\cdot\\tilde{\\bar{\\mathbf c}}+\\mathbf c\\cdot\\tilde{\\bar{\\mathbf D}})\\mathbf D+\\Psi^{-1}\\tilde{\\bar{\\mathbf D}})^p\n\\\\\nB_{(2)}^2[g]_p(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})\n\\equiv\\bar P(g''^q_p\\tilde{\\bar c}_q+g''_{pq}\\tilde{\\bar D}^q)\n\\equiv\\tilde D_p\n&=\\bar P((-\\Psi^{-2}D^qc_p+\\Psi^{-1}\\delta^q_p)\\tilde{\\bar c}_q-\\Psi^{-2}c_pc_q\\tilde{\\bar D}^q) \\\\\n&=\\bar P(-\\Psi^{-2}(\\mathbf D\\cdot\\tilde{\\bar{\\mathbf c}}+\\mathbf c\\cdot\\tilde{\\bar{\\mathbf D}})\\mathbf c+\\Psi^{-1}\\tilde{\\bar{\\mathbf c}})_p\n\\\\\nB_{(1)}^2[F](\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf A}}^q)\n\\equiv\\tilde{\\bar P}\n&=B_{(1)}^2[g](\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})\n\\leftarrow\\tilde{\\bar D}^p=B^2_{(1)}[f](\\tilde{\\bar{\\mathbf A}}^p)\n\\\\\nB_{(2)}^2[F]^p(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf A}}^q)\n\\equiv\\tilde c^p\n&=B^2_{(2)}[g]^p(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})\n\\\\\nB_{(2)}^2[F]_{pij}(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf A}}^q)\n\\equiv\\tilde A_{pij}\n&=B^2_{(2)}[f]_{ij}(\\tilde{\\bar{\\mathbf A}}^p)+B^2_{(2)}[g]_p(\\tilde{\\bar{\\mathbf c}},\\tilde{\\bar{\\mathbf D}})C_{pij}\n\\end{aligned}\n$$\n\n\\end{document}\n", "meta": {"hexsha": "7a1ae2cf8b5648a4d22e1fd94371910e6af1d275", "size": 5516, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/sloglindet.tex", "max_stars_repo_name": "zenoone/deepqmc", "max_stars_repo_head_hexsha": "c05e6c0398212aff9a514e7ad2e7a19243115aef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/sloglindet.tex", "max_issues_repo_name": "zenoone/deepqmc", "max_issues_repo_head_hexsha": "c05e6c0398212aff9a514e7ad2e7a19243115aef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-24T20:15:28.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-24T20:15:28.000Z", 
"max_forks_repo_path": "doc/sloglindet.tex", "max_forks_repo_name": "zenoone/deepqmc", "max_forks_repo_head_hexsha": "c05e6c0398212aff9a514e7ad2e7a19243115aef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.9710144928, "max_line_length": 351, "alphanum_fraction": 0.6187454677, "num_tokens": 2488, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835371034368, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5842155622776628}}
{"text": "\\documentclass{article}\n\\usepackage{mathtools}\n\\usepackage{mathrsfs}\n\\usepackage{hyperref}\n\\usepackage{multirow}\n\\hypersetup{colorlinks,citecolor=black,filecolor=black,linkcolor=black,urlcolor=black}\n\n\\begin{document}\n\n\\section*{DEFINITION OF TEXTURE FEATURES}\n\\bigskip\n\n\\noindent \\textbf{Input volume:} Volume of interest $V(x,y,z)$ with isotropic voxel size. The necessity for isotropically resampling $V$ to a given voxel dimension prior to texture analysis results from the fact that: 1) \\textit{Global} features are computed from histograms counting the number of gray-levels in 3D space; and 2) all higher-order texture measurements explicitly or implicitly involve a distance parameter in the matrix computation (GLCM, GLRLM, GLSZM and NGTDM). For this parameter to be meaningful in 3D space and in order for the orientation dependence of the tumour to be minimized, isotropic resolution is required.\n\\bigskip \\bigskip\n\n% GLOBAL TEXTURES\n\\noindent \\textbf{Global texture features (first-order gray-level statistics).} \\\\\nLet $P$ define the first-order histogram of a volume $V(x,y,z)$ with isotropic voxel size. $P(i)$ represents the number of voxels with gray-level $i$, and $N_g$ represents the number of gray-level bins set for $P$. The $i^{\\mathrm{th}}$ entry of the normalized histogram is then defined as:\n\n\\[p(i) = \\frac{P(i)}{\\sum_{i=1}^{N_g} P(i)}.\\] \\\\\n\n\\noindent The \\textit{Global} texture features are then defined as:\n\n\\begin{itemize}\n\t\\item Variance:\n\t\t  \\[\\sigma^2 = \\sum_{i=1}^{N_g} (i-\\mu)^2\\,p(i)\\] \n\t\\item Skewness:\n\t\t  \\[s = \\sigma^{-3} \\sum_{i=1}^{N_g} (i-\\mu)^3\\,p(i)\\]\n\t\\item Kurtosis\n\t\t  \\[k = \\sigma^{-4} \\sum_{i=1}^{N_g} \\left[(i-\\mu)^4\\,p(i)\\right]-3\\]\n\t\\\\\n\\end{itemize}\n\n\n% GLCM TEXTURES\n\\noindent \\textbf{Gray-Level Co-occurence Matrix (GLCM) texture features.} \\\\\nLet $P$ define the GLCM of a quantized volume $V(x,y,z)$ with isotropic voxel size. $P(i,j)$ represents the number of times voxels of gray-level $i$ were neighbours with voxels of gray-level $j$ in $V$, and $N_g$ represents the pre-defined number of quantized gray-levels set in $V$. Only one GLCM of size $N_g \\times N_g$ is computed per volume $V$ by simultaneously adding up the frequency of co-occurences of all voxels with their 26-connected neighbours in 3D space, with all voxels (\\textit{including} the peripheral region) considered once as a center voxel (according to \\cite{HaralickRM1973}, thus always using $d=1$). To account for discretization length differences, neighbours at a distance of $\\sqrt{3}$ voxels around a center voxel increment the GLCM by a value of $\\sqrt{3}$, neighbours at a distance of $\\sqrt{2}$ voxels around a center voxel increment the GLCM by a value of $\\sqrt{2}$, and neighbours at a distance of 1 voxel around a center voxel increment the GLCM by a value of 1. 
The entry $(i,j)$ of the normalized GLCM is then defined as:\n\n\\[p(i,j) = \\frac{P(i,j)}{\\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} P(i,j)}.\\]\n\n\\noindent The following quantities are also defined:\n\\[\\mu_i = \\sum_{i=1}^{N_g} i \\sum_{j=1}^{N_g} p(i,j), \\qquad \\mu_j = \\sum_{j=1}^{N_g} j \\sum_{i=1}^{N_g} p(i,j),\\]\n\\[\\sigma_i = \\sum_{i=1}^{N_g} (i-\\mu_i)^2 \\sum_{j=1}^{N_g} p(i,j), \\qquad \\sigma_j = \\sum_{j=1}^{N_g} (j-\\mu_j)^2 \\sum_{i=1}^{N_g} p(i,j).\\] \\\\\n\n\\noindent The GLCM texture features are then defined as:\n\n\\begin{itemize}\n\t\\item Energy \\cite{HaralickRM1973}:\n\t\t  \\[energy = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\left[p(i,j)\\right]^2\\]\n\t\\item Contrast \\cite{HaralickRM1973}:\n\t\t  \\[contrast = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} (i-j)^2\\,p(i,j)\\]\n\t\\item Correlation (adapted from \\cite{HaralickRM1973}):\n\t\t  \\[correlation = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\frac{(i-\\mu_i)\\,(j-\\mu_j)\\,p(i,j)}\t\n\t\t  {\\sigma_i\\,\\sigma_j}\\]\n\t\\item Homogeneity (adapted from \\cite{HaralickRM1973}):\n\t\t  \\[homogeneity = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\frac{p(i,j)}{1 + \\vert i-j \\vert}\\]\n\t\\item Variance (adapted from \\cite{HaralickRM1973}):\n\t\t  \\[variance = \\frac{1}{N_g \\times N_g} \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \n\t\t  \\left[(i-\\mu_i)^2\\,p(i,j) + (j-\\mu_j)^2\\,p(i,j)\\right]\\]\n\t\\item Sum Average (adapted from \\cite{HaralickRM1973}):\n\t\t  \\[sum\\text{ }average = \\frac{1}{N_g \\times N_g} \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \n\t\t  \\left[i\\,p(i,j) + j\\,p(i,j)\\right]\\]\n\t\\item Entropy \\cite{HaralickRM1973}:\n\t\t  \\[entropy = -\\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} p(i,j)\\,\\text{log}_2\\big(p(i,j)\\big)\\]\n\t\\item Dissimilarity \\cite{ThibaultG2009t}:\n\t\t  \\[dissimilarity = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\vert i-j \\vert \\,p(i,j)\\]\n\t\\\\\t\n\\end{itemize}\n\n\n% GLRLM TEXTURES\n\\noindent \\textbf{Gray-Level Run-Length Matrix (GLRLM) texture features.} \\\\\nLet $P$ define the GLRLM of a quantized volume $V(x,y,z)$ with isotropic voxel size. $P(i,j)$ represents the number of runs of gray-level $i$ and of length $j$ in $V$, $N_g$ represents the pre-defined number of quantized gray-levels set in $V$, and $L_r$ represents the length of the longest run (of any gray-level) in $V$. Only one GLRLM of size $N_g \\times L_r$ is computed per volume $V$ by simultaneously adding up all possible longest run-lengths in the 13 directions of 3D space (one voxel can be part of multiple runs in different directions, but can be part of only one run in a given direction). A MATLAB toolbox created by Xunkai Wei \\cite{WeiX2008} computes GLRLMs from 2D images, and it can be used to facilitate the computation of GLRLMs from 3D volumes. To account for discretization length differences, runs constructed from voxels separated by a distance of $\\sqrt{3}$ increment the GLRLM by a value of $\\sqrt{3}$, runs constructed from voxels separated by a distance of $\\sqrt{2}$ increment the GLRLM by a value of $\\sqrt{2}$, and runs constructed from voxels separated by a distance of 1 increment the GLRLM by a value of 1. 
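For concreteness, here is a minimal Python sketch of run extraction along a single axis-aligned direction (an illustration only, not Wei's toolbox; the full GLRLM repeats this over all 13 directions of 3D space and applies the $\sqrt{2}$ and $\sqrt{3}$ increments described above for the diagonal directions):

```python
from itertools import groupby

import numpy as np

def runs_along_axis(vol, axis):
    """Yield (gray-level, run-length) for each maximal run along one axis."""
    lines = np.moveaxis(vol, axis, -1).reshape(-1, vol.shape[axis])
    for line in lines:
        for level, group in groupby(line):
            yield int(level), sum(1 for _ in group)

vol = np.random.default_rng(0).integers(1, 5, size=(4, 4, 4))
P = np.zeros((4, 4))                    # N_g x L_r (runs here are at most 4 long)
for level, length in runs_along_axis(vol, axis=2):
    P[level - 1, length - 1] += 1.0     # increment is 1 for an axis-aligned direction
```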
The entry $(i,j)$ of the normalized GLRLM is then defined as:\n\n\\[p(i,j) = \\frac{P(i,j)}{\\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} P(i,j)}.\\]\n\n\\noindent The following quantities are also defined:\n\\[\\mu_i = \\sum_{i=1}^{N_g} i \\sum_{j=1}^{L_r} p(i,j), \\qquad \\mu_j = \\sum_{j=1}^{L_r} j \\sum_{i=1}^{N_g} p(i,j).\\] \\\\\n\n\\noindent The GLRLM texture features are then defined as:\n\n\\begin{itemize}\n\t\\item Short Run Emphasis (SRE) \\cite{GallowayMM1975}:\n\t\t  \\[SRE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \\frac{p(i,j)}{j^2}\\]\n\t\\item Long Run Emphasis (LRE) \\cite{GallowayMM1975}:\n\t\t  \\[LRE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} j^2\\,p(i,j)\\]\n\t\\item Gray-Level Nonuniformity (GLN) (adapted from \\cite{GallowayMM1975}):\n\t\t  \\[GLN = \\sum_{i=1}^{N_g}\\left(\\sum_{j=1}^{L_r} p(i,j)\\right)^2\\]\n\t\\item Run-Length Nonuniformity (RLN) (adapted from \\cite{GallowayMM1975}):\n\t\t  \\[RLN = \\sum_{j=1}^{L_r}\\left(\\sum_{i=1}^{N_g} p(i,j)\\right)^2\\]\n\t\\item Run Percentage (RP) (adapted from \\cite{GallowayMM1975}):\n\t\t  \\[RP = \\frac{\\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} p(i,j)}{\\sum_{j=1}^{L_r} j \n\t\t  \\sum_{i=1}^{N_g} p(i,j)}\\]\n\t\\item Low Gray-Level Run Emphasis (LGRE) \\cite{ChuA1990}:\n\t\t  \\[LGRE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \\frac{p(i,j)}{i^2}\\]\n\t\\item High Gray-Level Run Emphasis (HGRE) \\cite{ChuA1990}:\n\t\t  \\[HGRE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} i^2\\,p(i,j)\\]\n\t\\item Short Run Low Gray-Level Emphasis (SRLGE) \\cite{DasarathyBV1991}:\n\t\t  \\[SRLGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \\frac{p(i,j)}{i^2j^2}\\]\n\t\\item Short Run High Gray-Level Emphasis (SRHGE) \\cite{DasarathyBV1991}:\n\t\t  \\[SRHGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \\frac{i^2\\,p(i,j)}{j^2}\\]\n\t\\item Long Run Low Gray-Level Emphasis (LRLGE) \\cite{DasarathyBV1991}:\n\t\t  \\[LRLGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \\frac{j^2\\,p(i,j)}{i^2}\\]\n\t\\item Long Run High Gray-Level Emphasis (LRHGE) \\cite{DasarathyBV1991}:\n\t\t  \\[LRHGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} i^2j^2\\,p(i,j)\\]\n\t\\item Gray-Level Variance (GLV) (adapted from \\cite{ThibaultG2009}):\n\t\t  \\[GLV = \\frac{1}{N_g \\times L_r} \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \n\t\t  \\left(i\\,p(i,j)-\\mu_i\\right)^2\\]\n\t\\item Run-Length Variance (RLV) (adapted from \\cite{ThibaultG2009}):\n\t\t  \\[RLV = \\frac{1}{N_g \\times L_r} \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_r} \n\t\t  \\left(j\\,p(i,j)-\\mu_j\\right)^2\\]\n\t\\\\\n\\end{itemize}\n\n\n% GLSZM TEXTURES\n\\noindent \\textbf{Gray-Level Size Zone Matrix (GLSZM) texture features.} \\\\\nLet $P$ define the GLSZM of a quantized volume $V(x,y,z)$ with isotropic voxel size. $P(i,j)$ represents the number of 3D zones of gray-level $i$ and of size $j$ in $V$, $N_g$ represents the pre-defined number of quantized gray-levels set in $V$, and $L_z$ represents the size of the largest zone (of any gray-level) in $V$. One GLSZM of size $N_g \\times L_z$ is computed per volume $V$ by adding up all possible largest zone-sizes, with zones constructed from 26-connected neighbours of the same gray-level in 3D space (one voxel can be part of only one zone). 
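One way to extract such 26-connected zones is a connected-component labelling pass per gray-level; the following minimal Python sketch (an illustration assuming `scipy` is available, not the toolbox's actual implementation) builds the unnormalized GLSZM:

```python
import numpy as np
from scipy import ndimage

def glszm(vol, n_g):
    """Unnormalized GLSZM: P[i-1, j-1] counts 26-connected zones of
    gray-level i and size j; each voxel belongs to exactly one zone."""
    structure = np.ones((3, 3, 3))  # 26-connectivity in 3D
    counts = {}
    for level in range(1, n_g + 1):
        labels, n_zones = ndimage.label(vol == level, structure=structure)
        for zone in range(1, n_zones + 1):
            size = int((labels == zone).sum())
            counts[(level, size)] = counts.get((level, size), 0) + 1
    l_z = max((size for _, size in counts), default=1)
    P = np.zeros((n_g, l_z))
    for (level, size), count in counts.items():
        P[level - 1, size - 1] = count
    return P

P = glszm(np.random.default_rng(0).integers(1, 5, size=(5, 5, 5)), n_g=4)
```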
The entry $(i,j)$ of the normalized GLSZM is then defined as:\n\n\\[p(i,j) = \\frac{P(i,j)}{\\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} P(i,j)}.\\]\n\n\\noindent The following quantities are also defined:\n\\[\\mu_i = \\sum_{i=1}^{N_g} i \\sum_{j=1}^{L_z} p(i,j), \\qquad \\mu_j = \\sum_{j=1}^{L_z} j \\sum_{i=1}^{N_g} p(i,j).\\] \\\\\n\n\\noindent The GLSZM texture features are then defined as:\n\n\\begin{itemize}\n\t\\item Small Zone Emphasis (SZE) \\cite{GallowayMM1975,ThibaultG2009}:\n\t\t  \\[SZE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \\frac{p(i,j)}{j^2}\\]\n\t\\item Large Zone Emphasis (LZE) \\cite{GallowayMM1975,ThibaultG2009}:\n\t\t  \\[LZE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} j^2\\,p(i,j)\\]\n\t\\item Gray-Level Nonuniformity (GLN) (adapted from \\cite{GallowayMM1975,ThibaultG2009}):\n\t\t  \\[GLN = \\sum_{i=1}^{N_g}\\left(\\sum_{j=1}^{L_z} p(i,j)\\right)^2\\]\n\t\\item Zone-Size Nonuniformity (ZSN) (adapted from \\cite{GallowayMM1975,ThibaultG2009}):\n\t\t  \\[ZSN = \\sum_{j=1}^{L_z}\\left(\\sum_{i=1}^{N_g} p(i,j)\\right)^2\\]\n\t\\item Zone Percentage (ZP) (adapted from \\cite{GallowayMM1975,ThibaultG2009}):\n\t\t  \\[ZP = \\frac{\\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} p(i,j)}{\\sum_{j=1}^{L_z} j \n\t\t  \\sum_{i=1}^{N_g} p(i,j)}\\]\n\t\\item Low Gray-Level Zone Emphasis (LGZE) \\cite{ChuA1990,ThibaultG2009}:\n\t\t  \\[LGZE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \\frac{p(i,j)}{i^2}\\]\n\t\\item High Gray-Level Zone Emphasis (HGZE) \\cite{ChuA1990,ThibaultG2009}:\n\t\t  \\[HGZE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} i^2\\,p(i,j)\\]\n\t\\item Small Zone Low Gray-Level Emphasis (SZLGE) \\cite{DasarathyBV1991,ThibaultG2009}:\n\t\t  \\[SZLGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \\frac{p(i,j)}{i^2j^2}\\]\n\t\\item Small Zone High Gray-Level Emphasis (SZHGE) \\cite{DasarathyBV1991,ThibaultG2009}:\n\t\t  \\[SZHGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \\frac{i^2\\,p(i,j)}{j^2}\\]\n\t\\item Large Zone Low Gray-Level Emphasis (LZLGE) \\cite{DasarathyBV1991,ThibaultG2009}:\n\t\t  \\[LZLGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \\frac{j^2\\,p(i,j)}{i^2}\\]\n\t\\item Large Zone High Gray-Level Emphasis (LZHGE) \\cite{DasarathyBV1991,ThibaultG2009}:\n\t\t  \\[LZHGE = \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} i^2j^2\\,p(i,j)\\]\n\t\\item Gray-Level Variance (GLV) (adapted from \\cite{ThibaultG2009}):\n\t\t  \\[GLV = \\frac{1}{N_g \\times L_z} \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \n\t\t  \\left(i\\,p(i,j)-\\mu_i\\right)^2\\]\n\t\\item Zone-Size Variance (ZSV) (adapted from \\cite{ThibaultG2009}):\n\t\t  \\[ZSV = \\frac{1}{N_g \\times L_z} \\sum_{i=1}^{N_g}\\sum_{j=1}^{L_z} \n\t\t  \\left(j\\,p(i,j)-\\mu_j\\right)^2\\]\n\t\\\\\n\\end{itemize}\n\n\n% NGTDM TEXTURES\n\\noindent \\textbf{Neighbourhood Gray-Tone Difference Matrix (NGTDM) texture features.} \\\\\nLet $P(i)$ define the NGTDM of a quantized volume $V(x,y,z)$ with isotropic voxel size. $P(i)$ represents the summation of the gray-level differences between all voxels with gray-level $i$ and the average gray-level of their 26-connected neighbours in 3D space. $N_g$ represents the pre-defined number of quantized gray-levels set in $V$, and $(N_g)_{eff}$ is the effective number of gray-levels in $V$, with $(N_g)_{eff} \\leq N_g$ (let the vector of gray-level values in $V$ be denoted as $\\textbf{g} = g(1),g(2),\\ldots,g(N_g)$; some gray-levels excluding $g(1)$ and $g(N_g)$ may not appear in $V$ due to different quantization schemes). One NGTDM of size $N_g \\times 1$ is computed per volume $V$. 
To account for discretization length differences, all averages around a center voxel located at position $(j,k,l)$ in $V$ are performed such that the neighbours at a distance of $\\sqrt{3}$ voxels are given a weight of $1/\\sqrt{3}$, the neighbours at a distance of $\\sqrt{2}$ voxels are given a weight of $1/\\sqrt{2}$, and the neighbours at a distance of 1 voxel are given a weight of 1. The $i^{\\mathrm{th}}$ entry of the NGTDM is then defined as:\n\n\\[P(i) = \\begin{cases}\n\t\t \\sum_{\\text{all voxels}\\,\\in \\{N_i\\}} \\vert i-\\overline{A}_i \\vert & \\text{if } N_i > \n\t\t 0, \\\\\n\t\t 0 & \\text{if } N_i = 0.\n\t\t \\end{cases}\n\\]\n\n\\noindent where $\\{N_i\\}$ is the set of all voxels with gray-level $i$ in $V$ (\\textit{including} the peripheral region), $N_i$ is the number of voxels with gray-level $i$ in $V$, and $\\overline{A}_i$ is the average gray-level of the 26-connected neighbours around a center voxel with gray-level $i$ and located at position $(j,k,l)$ in $V$ such that:\n\n\\[\\overline{A}_i = \\overline{A}(j,k,l) = \\frac{\\sum_{m=-1}^{m=1}\\sum_{n=-1}^{n=1}\\sum_{o=-1}^{o=1} w_{m,n,o} \\cdot V(j+m,k+n,l+o)}{\\sum_{m=-1}^{m=1}\\sum_{n=-1}^{n=1}\\sum_{o=-1}^{o=1} w_{m,n,o}},\\]\n\n\\[\\text{where} \\quad w_{m,n,o} =  \\begin{cases}\n\t\t\t\t\t\t\t\t\t1 & \\text{if } |m| + |n| + |o| = 1,\\\\\n\t\t\t\t\t\t\t\t\t\\frac{1}{\\sqrt{2}} & \\text{if } |m| + |n| + |o| = \n\t\t\t\t\t\t\t\t\t2,\\\\\n\t\t\t\t\t\t\t\t\t\\frac{1}{\\sqrt{3}} & \\text{if } |m| + |n| + |o| = \n\t\t\t\t\t\t\t\t\t3,\\\\\n\t\t\t\t\t\t\t\t\t0 & \\text{if } m = n = o = 0 \\text{ or } V(j+m,k+n,l+o) \\text{ is undefined.} \\\\\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\\end{cases}\\]\n\n\\noindent (A short code sketch of this weighted neighbourhood average is given after the bibliography below.)\n\n\\noindent The following quantity is also defined:\n\\[n_i = \\frac{N_i}{N},\\]\n\n\\noindent where $N$ is the total number of voxels in $V$. The NGTDM texture features are then defined as:\n\n\\begin{itemize}\n\t\\item Coarseness \\cite{AmadasunM1989}:\n\t\t  \\[coarseness = \\left[\\epsilon + \\sum_{i=1}^{N_g} n_i\\,P(i)\\right]^{-1}\\]\n\t\t  where $\\epsilon$ is a small number to prevent $coarseness$ becoming infinite.\n\t\\item Contrast \\cite{AmadasunM1989}:\n\t\t  \\[contrast = \\left[\\frac{1}{(N_g)_{eff}\\,\\big[(N_g)_{eff}-1\\big]}\\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \n\t\t  n_i\\,n_j\\,(i-j)^2\\right]\\left[\\frac{1}{N}\\sum_{i=1}^{N_g} P(i)\\right]\\]\n\t\\item Busyness \\cite{AmadasunM1989}:\n\t\t  \\[busyness = \\frac{\\sum_{i=1}^{N_g} n_i\\,P(i)}{\\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\vert i\\,n_i - j\\,n_j \\vert}, \n\t\t  \\quad n_i \\neq 0,n_j \\neq 0\\]\n\t\t  (the magnitude in the denominator is required; without it the double sum vanishes identically)\n\t\\item Complexity \\cite{AmadasunM1989}:\n\t\t  \\[complexity = \\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} \\frac{|i-j|\\,\\big[n_i\\,P(i) + n_j\\,P(j)\\big]}{N\\,(n_i \n\t\t  + n_j)}, \\quad n_i \\neq 0,n_j \\neq 0\\]\n\t\\item Strength \\cite{AmadasunM1989}:\n\t\t  \\[strength = \\frac{\\sum_{i=1}^{N_g}\\sum_{j=1}^{N_g} (n_i + n_j)\\,(i-j)^2}{\\left[\\epsilon + \n\t\t  \\sum_{i=1}^{N_g} P(i)\\right]},  \\quad n_i \\neq 0,n_j \\neq 0\\]\n\t\t  where $\\epsilon$ is a small number to prevent $strength$ becoming infinite.\n\t\\\\\n\\end{itemize}\n\n\n\\section*{Online resources} \nMATLAB$^{\\textregistered}$ software code is freely shared under the GNU General Public License at: \\url{https://github.com/mvallieres/radiomics}.\n\n\\footnotesize\n\\begin{thebibliography}{99} % Referencing close to Harvard style, but better looking than in PMB\n\n\t\\bibitem\t{HaralickRM1973} Haralick, R.M., Shanmugam, K. and Dinstein, I. 
(1973).\n\t\t\t{Textural features for image classification}.\n\t\t\t{\\em IEEE Transactions on Systems, Man, and Cybernetics}, smc \\textbf{3}(6), \t\n\t\t\t610-621.\t\n\t\n\t\\bibitem{ThibaultG2009t}\t Thibault, G. (2009). \n\t\t\t{Indices de formes et de textures: de la 2D vers la 3D. Application au classement de noyaux de cellules}. \n\t\t\t{\\em PhD Thesis}, Universit\\'e AIX-Marseille: p.172.\n\t\n\t\\bibitem{GallowayMM1975} Galloway, M.M. (1975).\n\t\t\t{Texture analysis using gray level run lengths}.\n\t\t\t{\\em Computer Graphics and Image Processing}, \\textbf{4}(2), 172-179.\t\n\t\t\t\n\t\\bibitem{AmadasunM1989} Amadasun, M. and King, R. (1989).\n\t\t\t{Textural features corresponding to textural properties}.\n\t\t\t{\\em IEEE Transactions on Systems, Man, and Cybernetics}, \\textbf{19}(5), 1264-1274. \n\t\t\t\n\t\\bibitem{ChuA1990} Chu, A. Sehgal, C.M. and Greenleaf, J.F. (1990).\n\t\t\t{Use of gray value distribution of run lengths for texture analysis}.\n\t\t\t{\\em Pattern Recognition Letters}, \\textbf{11}(6), 415-419.\n\t\t\t\n\t\\bibitem{DasarathyBV1991} Dasarathy, B.V. and Holder, E.B. (1991).\n\t\t\t{Image characterization based on joint gray level-run length distributions}.\n\t\t\t{\\em Pattern Recognition Letters}, \\textbf{12}(8), 497-502.\n\t\t\t\n\t\\bibitem{ThibaultG2009} Thibault, G. et al. (2009).\n\t\t\t{Texture indexes and gray level size zone matrix. Application to cell nuclei \n\t\t\tclassification}. In {\\em Pattern Recognition and Information Processing}. \n\t\t\tMinsk, Belarus, 140-145.\n\t\t\t\n\t\\bibitem{WeiX2008} Wei, X. (2007). \n\t\t\t{Gray Level Run Length Matrix Toolbox v1.0, computer software}.\n\t\t\t{Beijing Aeronautical Technology Research Center}.\n\t\t\t{Available from: http://www.mathworks.com/matlabcentral/fileexchange/17482-gray-level-run-length-matrix-toolbox}. [11 January 2013]\n\t\t\t\n\\end{thebibliography}\n\n\\end{document}", "meta": {"hexsha": "86f9d5c6181be6f422a417935c38361398af6200", "size": 16865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "radiomics_toolbox/TextureToolbox/DOCS/textures_definition.tex", "max_stars_repo_name": "thomaskuestner/TAToolbox", "max_stars_repo_head_hexsha": "0222d4c551c93965e8e92beffca1ff34103950fd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "radiomics_toolbox/TextureToolbox/DOCS/textures_definition.tex", "max_issues_repo_name": "thomaskuestner/TAToolbox", "max_issues_repo_head_hexsha": "0222d4c551c93965e8e92beffca1ff34103950fd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2017-12-12T10:32:43.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-20T19:18:50.000Z", "max_forks_repo_path": "radiomics_toolbox/TextureToolbox/DOCS/textures_definition.tex", "max_forks_repo_name": "thomaskuestner/TAToolbox", "max_forks_repo_head_hexsha": "0222d4c551c93965e8e92beffca1ff34103950fd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-17T02:36:18.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-17T02:36:18.000Z", "avg_line_length": 66.3976377953, "max_line_length": 1211, "alphanum_fraction": 0.6668840795, "num_tokens": 6398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835493924953, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.5842155603982905}}
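The NGTDM weighting defined above (the $1/\sqrt{2}$ and $1/\sqrt{3}$ neighbour weights, with the center voxel itself excluded from the average) can be sketched as follows; this is a minimal Python illustration only, not the shared MATLAB toolbox, and it assumes an integer volume quantized to levels $1,\ldots,N_g$:

```python
import itertools
import numpy as np

def ngtdm(vol, n_g):
    """NGTDM P(i): sum over voxels of gray-level i of |i - weighted
    neighbourhood average|, with weights 1, 1/sqrt(2), 1/sqrt(3)."""
    P = np.zeros(n_g)
    Z, Y, X = vol.shape
    w = {1: 1.0, 2: 1 / np.sqrt(2), 3: 1 / np.sqrt(3)}
    offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if any(o)]
    for z, y, x in np.ndindex(vol.shape):
        num = den = 0.0
        for m, n, o in offsets:
            zz, yy, xx = z + m, y + n, x + o
            if 0 <= zz < Z and 0 <= yy < Y and 0 <= xx < X:
                weight = w[abs(m) + abs(n) + abs(o)]
                num += weight * vol[zz, yy, xx]
                den += weight
        P[vol[z, y, x] - 1] += abs(vol[z, y, x] - num / den)
    return P

P = ngtdm(np.random.default_rng(0).integers(1, 5, size=(5, 5, 5)), n_g=4)
```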
{"text": "\\subsection{First-order logic}\\label{subsec:first_order_logic}\n\n\\begin{definition}\\label{def:first_order_language}\\mcite[sec. 15.2]{OpenLogicFull}\n  The idea of first-order predicate logic (we will omit \\enquote{predicate} and only refer to \\enquote{first-order logic}) is to create a formal language whose semantics (given by structures) support boolean operations and can quantify over all elements of an ambient universe. Unlike in \\hyperref[subsec:propositional_logic]{propositional logic}, there are different first-order logic languages.\n\n  The alphabet for a \\term{first-order logic \\hyperref[def:formal_language]{language}} \\( \\mscrL \\) extends that of \\hyperref[subsec:propositional_logic]{propositional logic} and consists of two types of symbols (note that \\fullref{rem:propositional_language_is_alphabet} holds here also).\n\n  \\begin{description}\n    \\item[Logical symbols]\n    \\hfill\n    \\begin{thmenum}[series=def:first_order_language]\n      \\thmitem{def:first_order_language/propositional} The entirety of the \\hyperref[subsec:propositional_logic]{propositional logic language} except for the \\hyperref[def:propositional_language/prop]{propositional variables}.\n\n      \\thmitem{def:first_order_language/var} A nonempty at most \\hyperref[thm:omega_is_a_cardinal]{countable} alphabet of \\term{individual variables} \\( \\boldop{Var} \\), usually denoted by small Greek letters \\( \\xi_1, \\xi_2, \\ldots \\) or \\( \\xi, \\eta, \\zeta \\) --- see \\fullref{rem:mathematical_logic_conventions/variable_symbols}.\n\n      \\thmitem{def:first_order_language/quantifiers} The quantifiers \\( Q = \\set{ \\forall, \\exists } \\):\n      \\begin{thmenum}\n        \\thmitem{def:first_order_language/quantifiers/universal} The \\term{universal quantifier} \\( \\forall \\).\n        \\thmitem{def:first_order_language/quantifiers/existential} The \\term{existential quantifier} \\( \\exists \\).\n        \\thmitem{def:first_order_language/quantifiers/dot} The dot \\( . \\) for separating a quantifier from its formula.\n      \\end{thmenum}\n\n      The dot is not itself a quantifier and is not formally necessary --- we use it only for readability.\n\n      \\thmitem{def:first_order_language/equality} A symbol for \\term{formal equality} \\( \\doteq \\). Equality is sometimes omitted by logicians, but examples of first-order languages without formal equality are obscure.\n    \\end{thmenum}\n\n    \\item[Non-logical symbols]\n    \\hfill\n    \\begin{thmenum}[resume=def:first_order_language]\n      \\thmitem{def:first_order_language/func} A set of functional symbols, \\( \\boldop{Fun} \\), whose elements are usually denoted by \\( f_1, f_2, \\ldots \\) or \\( f, g \\) or by symbols like \\( \\otimes \\). In the latter case we usually use infix notation (see \\fullref{rem:first_order_formula_conventions/infix}). Each functional symbol has an associated natural number called its \\term{arity}, denoted by \\( \\# f \\). Functional symbols with a zero arity are called \\term{constants}.\n\n      \\thmitem{def:first_order_language/pred} A set of predicate symbols, \\( \\boldop{Pred} \\), whose elements are usually denoted by \\( p_1, p_2, \\ldots \\) or by symbols like \\( \\leq \\). Again, in the latter case we use infix notation. Predicate symbols also have an associated arity. Predicate symbols with zero arity are called \\term{propositional variables}.\n    \\end{thmenum}\n  \\end{description}\n\n  The logical symbols are common for all first-order languages. 
Thus, first-order languages differ by their non-logical symbols. The collection of functional and predicate symbols of a language are sometimes called its \\term{signature}.\n\\end{definition}\n\n\\begin{definition}\\label{def:first_order_syntax}\n  Similarly to the \\hyperref[def:propositional_syntax]{syntax of propositional logic}, we define the \\term{syntax} of a fixed first-order language \\( \\mscrL \\).\n\n  \\begin{thmenum}\n    \\thmitem{def:first_order_syntax/grammar_schema} Consider the following \\hyperref[ex:natural_number_arithmetic_grammar/backus_naur_form]{grammar schema}:\n    \\begin{bnf*}\n      \\bnfprod{variable}        {v \\in \\boldop{Var}} \\\\\n      \\bnfprod{connective}      {\\bincirc \\in \\Sigma} \\\\\n      \\bnfprod{quantifier}      {\\bnfts{\\( \\forall \\)} \\bnfor \\bnfts{\\( \\exists \\)}} \\\\\n      \\bnfprod{unary function}  {f \\in \\boldop{Fun}, \\#f = 1} \\\\\n      \\bnfmore                  {\\vdots} \\\\\n      \\bnfprod{n-ary function}  {f \\in \\boldop{Fun}, \\#f = n \\T{(standalone rule for each \\( n \\))}} \\\\\n      \\bnfmore                  {\\vdots} \\\\\n      \\bnfprod{unary predicate} {p \\in \\boldop{Pred}, \\#p = 1} \\\\\n      \\bnfmore                  {\\vdots} \\\\\n      \\bnfprod{n-ary predicate} {p \\in \\boldop{Pred}, \\#p = n \\T{(standalone rule for each \\( n \\))}} \\\\\n      \\bnfmore                  {\\vdots} \\\\\n      \\bnfprod{term}            {\\bnfpn{variable} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfpn{unary function} \\bnfsp \\bnfts{(} \\bnfsp \\bnfpn{term} \\bnfsp \\bnfts{)} \\bnfor} \\\\\n      \\bnfmore                  {\\hspace{3cm} \\vdots} \\\\\n      \\bnfmore                  {\\bnfpn{n-ary function} \\bnfsp \\underbrace{\\bnfts{(} \\bnfsp \\bnfpn{term} \\bnfts{,} \\bnfsk \\bnfts{,} \\bnfpn{term} \\bnfsp \\bnfts{)}}_{n \\T{terms}} \\bnfor} \\\\\n      \\bnfmore                  {\\hspace{3cm} \\vdots} \\\\\n      \\bnfprod{atomic formula}  {\\bnfts{\\( \\top \\)} \\bnfor \\bnfts{\\( \\bot \\)} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfts{(} \\bnfsp \\bnfpn{term} \\bnfsp \\bnfts{\\( \\doteq \\)} \\bnfsp \\bnfpn{term} \\bnfsp \\bnfts{)} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfpn{unary predicate} \\bnfsp \\bnfts{(} \\bnfsp \\bnfpn{term} \\bnfsp \\bnfts{)} \\bnfor} \\\\\n      \\bnfmore                  {\\hspace{3cm} \\vdots} \\\\\n      \\bnfmore                  {\\bnfpn{n-ary predicate} \\bnfsp \\underbrace{\\bnfts{(} \\bnfsp \\bnfpn{term} \\bnfts{,} \\bnfsk \\bnfts{,} \\bnfpn{term} \\bnfsp \\bnfts{)}}_{n \\T{terms}} \\bnfor} \\\\\n      \\bnfmore                  {\\hspace{3cm} \\vdots} \\\\\n      \\bnfprod{formula}         {\\bnfpn{atomic formula} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfts{\\( \\neg \\)} \\bnfpn{formula} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfts{(} \\bnfsp \\bnfpn{formula} \\bnfsp \\bnfpn{connective} \\bnfsp \\bnfpn{formula} \\bnfsp \\bnfts{)} \\bnfor} \\\\\n      \\bnfmore                  {\\bnfpn{quantifier} \\bnfsp \\bnfpn{variable} \\bnfsp \\bnfts{.} \\bnfsp \\bnfpn{formula}}\n    \\end{bnf*}\n\n    In practice, we usually only have functions and predicates of specific arities. Note that we can have infinitely many functions, but only finitely many different arities. 
The \\hyperref[rem:theory_of_left_monoid_actions]{theory of monoid actions} is an example of a first-order language with infinitely many unary functional symbols, one constant and one binary functional symbol.\n\n    If we want the grammar schema to reduce to a grammar with a finite set of rules, then, in addition to admitting only finitely many different arities, we need to introduce appropriate naming conventions for variables, functions and predicates, analogously to the \\hyperref[def:propositional_syntax/grammar_schema]{grammar schema of propositional logic}.\n\n    We use the conventions in \\fullref{rem:propositional_formula_parentheses} regarding parentheses by extending them wherever appropriate.\n\n    In order to simplify notation, we also use the conventions in \\fullref{rem:first_order_formula_conventions}.\n\n    \\thmitem{def:first_order_syntax/term} The set \\( \\boldop{Term} \\) of \\term{terms} in \\( \\mscrL \\) is the language \\hyperref[def:grammar_derivation/grammar_language]{generated} by this grammar schema with \\( \\bnfpn{term} \\) as a starting rule.\n\n    The grammar of first-order terms is unambiguous due to \\fullref{thm:first_order_terms_and_formulas_are_unambiguous}, which makes it possible to perform proofs via \\fullref{thm:structural_induction_on_unambiguous_grammars}.\n\n    \\thmitem{def:first_order_syntax/subterm} If \\( \\tau \\) and \\( \\kappa \\) are terms and \\( \\kappa \\) is a \\hyperref[def:formal_language/subword]{subword} of \\( \\tau \\), we say that \\( \\kappa \\) is a \\term{subterm} of \\( \\tau \\).\n\n    \\thmitem{def:first_order_syntax/term_variables} For each term \\( \\tau \\), we define its variables as\n    \\begin{equation}\\label{eq:def:first_order_syntax/term_variables}\n      \\boldop{Var}(\\tau) \\coloneqq \\begin{cases}\n        \\xi,                                                        &\\tau = \\xi \\in \\boldop{Var}, \\\\\n        \\boldop{Var}(\\tau_1) \\cup \\ldots \\cup \\boldop{Var}(\\tau_n), &\\tau = f(\\tau_1, \\ldots, \\tau_n).\n      \\end{cases}\n    \\end{equation}\n\n    As in \\fullref{def:propositional_syntax/variables}, \\( \\boldop{Var} \\) is ordered by the position of the first occurrence of a variable.\n\n    \\thmitem{def:first_order_syntax/ground_term} A term \\( \\tau \\) is called a \\term{ground term} if \\( \\boldop{Var}(\\tau) = \\varnothing \\). 
Ground terms are also called \\term{closed terms}.\n\n    \\thmitem{def:first_order_syntax/atomic_formula} The set \\( \\boldop{Atom} \\) of \\term{atomic formulas} in \\( \\mscrL \\) is the language \\hyperref[def:grammar_derivation/grammar_language]{generated} by this grammar schema with \\( \\bnfpn{atomic formula} \\) as a starting rule.\n\n    \\thmitem{def:first_order_syntax/formula} The set \\( \\boldop{Form} \\) of \\term{formulas} in \\( \\mscrL \\) is the language \\hyperref[def:grammar_derivation/grammar_language]{generated} by this grammar schema with \\( \\bnfpn{formula} \\) as a starting rule.\n\n    The \\term{atomic formulas} are the ones generated from \\( \\bnfpn{atomic formula} \\).\n\n    The grammar of first-order formulas is unambiguous as shown by \\fullref{thm:first_order_terms_and_formulas_are_unambiguous}.\n\n    See \\fullref{ex:first_order_substitution} for examples of first-order formulas.\n\n    \\thmitem{def:first_order_syntax/subformula} If \\( \\varphi \\) and \\( \\psi \\) are formulas and \\( \\psi \\) is a \\hyperref[def:formal_language/subword]{subword} of \\( \\varphi \\), we say that \\( \\psi \\) is a \\term{subformula} of \\( \\varphi \\).\n\n    \\thmitem{def:first_order_syntax/formula_terms} If \\( \\varphi \\) is a formula, if \\( \\tau \\) is a term and if \\( \\tau \\) is a \\hyperref[def:formal_language/subword]{subword} of \\( \\varphi \\), we say that \\( \\tau \\) is a \\term{term} of \\( \\varphi \\).\n\n    \\thmitem{def:first_order_syntax/formula_free_variables} The \\term{free variables} of a formula are defined as\n    \\begin{equation}\\label{eq:def:first_order_syntax/formula_free_variables}\n      \\boldop{Free}(\\varphi) \\coloneqq \\begin{cases}\n        \\varnothing,                                                &\\varphi \\in \\set{ \\top, \\bot } \\\\\n        \\boldop{Var}(\\tau_1) \\cup \\ldots \\cup \\boldop{Var}(\\tau_n), &\\varphi = p(\\tau_1, \\ldots, \\tau_n) \\\\\n        \\boldop{Var}(\\tau_1) \\cup \\boldop{Var}(\\tau_2),             &\\varphi = \\tau_1 \\doteq \\tau_2, \\\\\n        \\boldop{Free}(\\psi),                                        &\\varphi = \\neg \\psi, \\\\\n        \\boldop{Free}(\\psi_1) \\cup \\boldop{Free}(\\psi_2),           &\\varphi = \\psi_1 \\bincirc \\psi_2, \\bincirc \\in \\Sigma, \\\\\n        \\boldop{Free}(\\psi) \\setminus \\set{ \\xi },                  &\\varphi = \\quantifier{Q}{\\xi} \\psi, Q \\in \\set{ \\forall, \\exists }\n      \\end{cases}\n    \\end{equation}\n\n    \\thmitem{def:first_order_syntax/ground_formula} A formula \\( \\varphi \\) is called a \\term{ground formula} if \\( \\boldop{Free}(\\varphi) = \\varnothing \\). Ground formulas are also called \\term{closed formulas} or \\term{sentences} (unlike in propositional logic where all formulas are called sentences --- see \\fullref{def:propositional_syntax/formula}).\n\n    We will not restrict our attention only to closed formulas, and we will even rely on implicit quantification as mentioned in \\fullref{rem:mathematical_logic_conventions/quantification}. 
That being said, certain important theorems like \\fullref{thm:semantic_deduction_theorem} and \\fullref{thm:syntactic_deduction_theorem} require some of the formulas to be closed, so we will often follow this restriction.\n\n    \\thmitem{def:first_order_syntax/formula_bound_variables} Dually, the \\term{bound variables} of a formula are defined as\n    \\begin{equation}\\label{eq:def:first_order_syntax/formula_bound_variables}\n      \\boldop{Bound}(\\varphi) \\coloneqq \\begin{cases}\n        \\varnothing,                                        &\\varphi \\T{is atomic,} \\\\\n        \\boldop{Bound}(\\psi),                               &\\varphi = \\neg \\psi, \\\\\n        \\boldop{Bound}(\\psi_1) \\cup \\boldop{Bound}(\\psi_2), &\\varphi = \\psi_1 \\bincirc \\psi_2, \\bincirc \\in \\Sigma, \\\\\n        \\boldop{Bound}(\\psi) \\cup \\set{ \\xi },              &\\varphi = \\quantifier{Q}{\\xi} \\psi, Q \\in \\set{ \\forall, \\exists }.\n      \\end{cases}\n    \\end{equation}\n\n    \\thmitem{def:first_order_syntax/formula_variables} Finally, the set of all variables of a formula \\( \\varphi \\) is\n    \\begin{equation}\\label{eq:def:first_order_syntax/formula_variables}\n      \\boldop{Var}(\\varphi) \\coloneqq \\boldop{Free}(\\varphi) \\cup \\boldop{Bound}(\\varphi).\n    \\end{equation}\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:first_order_terms_and_formulas_are_unambiguous}\n  The grammars of \\hyperref[def:first_order_syntax/term]{first-order terms} and of \\hyperref[def:first_order_syntax/formula]{first-order formulas} are \\hyperref[def:grammar_derivation/unambiguous]{unambiguous}.\n\\end{proposition}\n\\begin{proof}\n  The proof is more complicated, but similar in spirit to \\fullref{thm:propositional_formulas_are_unambiguous}.\n\\end{proof}\n\n\\begin{remark}\\label{rem:first_order_formula_conventions}\n  In order to simplify exposition, we use the following conventions\n  \\begin{thmenum}\n    \\thmitem{rem:first_order_formula_conventions/infix} Binary functional symbols are often written using \\term{infix notation}, i.e.\n    \\begin{equation*}\n      \\zeta \\doteq \\xi + \\eta\n    \\end{equation*}\n    rather than the \\term{prefix notation}\n    \\begin{equation*}\n      \\zeta \\doteq +(\\xi, \\eta).\n    \\end{equation*}\n\n    This also applies to predicates --- we write \\( \\xi \\sim \\eta \\) rather than \\( \\sim(\\xi, \\eta) \\).\n\n    \\thmitem{rem:first_order_formula_conventions/negation} Negation of an infix binary predicate symbol \\( \\sim \\) can be written more simply as\n    \\begin{equation*}\n      \\xi \\not\\sim \\eta\n    \\end{equation*}\n    rather than\n    \\begin{equation*}\n      \\neg(\\xi \\sim \\eta).\n    \\end{equation*}\n\n    \\thmitem{rem:first_order_formula_conventions/relativization}\\mcite{LeanFOL} If \\( \\sim \\) is a binary predicate, to further shorten notation, we write\n    \\begin{equation*}\n      \\qforall {\\xi \\sim \\eta} \\varphi\n    \\end{equation*}\n    as a shorthand for\n    \\begin{equation*}\n      \\qforall \\xi (\\xi \\sim \\eta \\rightarrow \\varphi)\n    \\end{equation*}\n    and\n    \\begin{equation*}\n      \\qexists {\\xi \\sim \\eta} \\varphi\n    \\end{equation*}\n    as a shorthand for\n    \\begin{equation*}\n      \\qexists \\xi (\\xi \\sim \\eta \\wedge \\varphi).\n    \\end{equation*}\n\n    This is called \\term{relativization} of the quantifier and is immensely useful when working with heterogeneous objects or even in \\hyperref[sec:set_theory]{set theory}.\n\n   
 \\thmitem{rem:first_order_formula_conventions/exists_unique} We sometimes want to specify not only existence, but also uniqueness. This is the case in \\eqref{eq:def:zfc/choice}, for example. It is conventional to write\n    \\begin{equation*}\n      \\qExists \\xi \\varphi\n    \\end{equation*}\n    as a shorthand for\n    \\begin{equation*}\n      \\qexists \\xi \\parens[\\Big]{ \\varphi \\wedge \\parens[\\Big]{ \\qforall \\eta \\varphi[\\xi \\mapsto \\eta] \\rightarrow (\\xi \\doteq \\eta) } }.\n    \\end{equation*}\n\n    \\thmitem{rem:first_order_formula_conventions/necessary_signature} We only add to the language itself the functional and predicate symbols that are necessary for our desired axioms --- see \\fullref{def:first_order_theory}. We can define additional functions and predicates in terms of these, but we avoid using them as much as possible when writing formulas in the object language. For example, we avoid adding the functional symbols \\hyperref[thm:well_ordered_order_type_existence]{\\( \\ord(A) \\)} and \\hyperref[def:cardinal]{\\( \\card(A) \\)} or even \\hyperref[def:basic_set_operations/union]{\\( \\cup \\)} and \\hyperref[def:basic_set_operations/intersection]{\\( \\cap \\)} to \\hyperref[def:zfc]{\\logic{ZFC}}.\n\n    If needed, we can consider these new functions and predicates to be abbreviations for more verbose terms and formulas as described in \\fullref{rem:predicate_formula}.\n  \\end{thmenum}\n\n  As for \\fullref{rem:propositional_formula_parentheses}, these conventions exist only in the metalanguage and the formulas themselves are assumed to have the former form within the object language.\n\\end{remark}\n\n\\begin{definition}\\label{def:first_order_structure}\\mcite[def. 16.1]{OpenLogicFull}\n  Fix a first-order logic language \\( \\mscrL \\). A \\term{structure} for \\( \\mscrL \\) is a pair \\( \\mscrX = (X, I) \\), where\n  \\begin{thmenum}\n    \\thmitem{def:first_order_structure/set} \\( X \\) is a nonempty set called the \\term{domain} or \\term{universe} of the structure \\( \\mscrX \\). See \\fullref{rem:empty_models}.\n\n    \\thmitem{def:first_order_structure/interpretation} The \\term{interpretation} \\( I \\) of the structure \\( \\mscrX \\) is a \\hyperref[def:function]{function} that is defined on the signature of \\( \\mscrL \\) and satisfies the following conditions:\n    \\begin{thmenum}\n      \\thmitem{def:first_order_structure/interpretation/function} For every \\( n \\)-ary function symbol \\( f \\), its interpretation is a \\hyperref[def:function]{function} with signature \\( I(f): X^n \\to X \\).\n\n      \\thmitem{def:first_order_structure/interpretation/predicate} For every \\( n \\)-ary predicate \\( p \\), its interpretation is an \\( n \\)-ary \\hyperref[def:boolean_function]{Boolean-valued function} with signature \\( I(p): X^n \\to \\set{ T, F } \\). A tuple \\( (x_1, \\ldots, x_n) \\) satisfies \\( p \\) if \\( I(p)(x_1, \\ldots, x_n) = T \\).\n\n      It is conventional to define the interpretation of a predicate to be a \\hyperref[def:relation]{relation} \\( I(p) \\subseteq X^n \\) (see e.g. \\mcite[def. 16.1]{OpenLogicFull}), however it is more convenient for us to work with Boolean-valued functions. 
The two approaches are equivalent as explained in \\fullref{rem:boolean_valued_functions_and_predicates}.\n    \\end{thmenum}\n  \\end{thmenum}\n\n  Unlike in the rest of this document, when dealing with first-order structures, it is important to distinguish between the structure \\( \\mscrX \\) as a pair and its domain \\( X \\) as a set. See \\fullref{rem:first_order_model_notation}.\n\\end{definition}\n\n\\begin{remark}\\label{rem:empty_models}\n   If we allow for the domain of a structure to be empty, we would have to reformulate a lot of important theorems (e.g. see the proof of \\fullref{thm:renaming_assignment_compatibility/formulas}), which would complicate compatibility between semantics and \\hyperref[def:deductive_system]{deductive systems}.\n\n   See \\fullref{thm:substructures_form_complete_lattice/bottom} for a context where empty sets are justified as domains of first-order structures.\n\\end{remark}\n\n\\begin{definition}\\label{def:first_order_valuation}\n  Fix a structure \\( \\mscrX = (X, I) \\) for a first-order logic language \\( \\mscrL \\).\n\n  \\begin{thmenum}\n    \\thmitem{def:first_order_valuation/variable_assignment}\\mcite[def. 16.7]{OpenLogicFull} A \\term{variable assignment} for the variables of \\( \\mscrL \\) is any function \\( v: \\boldop{Var} \\to X \\) (loosely similar to \\hyperref[def:propositional_valuation/interpretation]{propositional interpretations}).\n\n    \\thmitem{def:first_order_valuation/modified_assignment} For every variable \\( \\xi \\) and every domain element \\( x \\in X \\) we also define the \\term{modified assignment} at \\( \\xi \\) with \\( x \\):\n    \\begin{equation*}\n      v_{\\xi \\mapsto x}(\\zeta) \\coloneqq \\begin{cases}\n        x,        &\\zeta = \\xi, \\\\\n        v(\\zeta), &\\zeta \\neq \\xi.\n      \\end{cases}\n    \\end{equation*}\n\n    We can also modify the value at \\( \\xi \\) with another variable, e.g.\n    \\begin{equation*}\n      v_{\\xi \\mapsto \\eta}(\\zeta) \\coloneqq \\begin{cases}\n        v(\\eta),  &\\zeta = \\xi, \\\\\n        v(\\zeta), &\\zeta \\neq \\xi.\n      \\end{cases}\n    \\end{equation*}\n\n    Inductively,\n    \\begin{equation*}\n      v_{\\xi_1 \\mapsto x_1, \\ldots, \\xi_n \\mapsto x_n}(\\eta) \\coloneqq ((\\ldots(v_{\\xi_1 \\mapsto x_1})\\ldots)_{\\xi_n \\mapsto x_n})(\\eta).\n    \\end{equation*}\n\n    Except for semantics of quantification, these are also used in other places like \\fullref{thm:renaming_assignment_compatibility} and \\fullref{rem:first_order_formula_valuation_without_variable_assignment}.\n\n    \\thmitem{def:first_order_valuation/term_valuation}\\mcite[def. 16.8]{OpenLogicFull} The \\term{valuation} of a term \\( \\tau \\) is a value in the domain \\( X \\) given by\n    \\begin{equation}\\label{eq:def:first_order_valuation/term_valuation}\n      \\tau\\Bracks{v} \\coloneqq \\begin{cases}\n        v(\\xi),                                           &\\tau = \\xi \\in \\boldop{Var}, \\\\\n        I(f)(\\tau_1\\Bracks{v}, \\ldots, \\tau_n\\Bracks{v}), &\\tau = f(\\tau_1, \\ldots, \\tau_n).\n      \\end{cases}\n    \\end{equation}\n\n    \\thmitem{def:first_order_valuation/formula_valuation}\\mcite[def. 16.11]{OpenLogicFull} We extend the classical propositional valuations from \\fullref{def:propositional_valuation}. 
The (classical) \\term{valuation} of a formula \\( \\varphi \\) is a \\hyperref[def:boolean_value]{Boolean value} given by\n    \\begin{equation}\\label{eq:def:first_order_valuation/formula_valuation}\n      \\varphi\\Bracks{v} \\coloneqq \\begin{cases}\n        T,                                                              &\\varphi = \\top, \\\\\n        F,                                                              &\\varphi = \\bot, \\\\\n        \\tau_1\\Bracks{v} = \\tau_2\\Bracks{v},                            &\\varphi = \\tau_1 \\doteq \\tau_2, \\\\\n        I(p)(\\tau_1\\Bracks{v}, \\ldots, \\tau_n\\Bracks{v}),               &\\varphi = p(\\tau_1, \\ldots, \\tau_n), \\\\\n        \\overline{\\psi\\Bracks{v}},                                      &\\varphi = \\neg \\psi, \\\\\n        \\psi_1\\Bracks{v} \\bincirc \\psi_2\\Bracks{v},                     &\\varphi = \\psi_1 \\bincirc \\psi_2, \\bincirc \\in \\Sigma, \\\\\n        \\bigwedge\\set{ \\psi\\Bracks{v_{\\xi \\mapsto x}} \\given x \\in X }, &\\varphi = \\qforall \\xi \\psi, \\\\\n        \\bigvee\\set{ \\psi\\Bracks{v_{\\xi \\mapsto x}} \\given x \\in X },   &\\varphi = \\qexists \\xi \\psi.\n      \\end{cases}\n    \\end{equation}\n\n    The rules for evaluating constants, negations and connectives are a direct extension of the \\hyperref[def:propositional_valuation/formula_valuation]{rules for propositional logic}.\n\n    It is important that the domain is nonempty: over an empty domain we would have \\( (\\qforall \\xi \\psi)\\Bracks{v} = \\bigwedge\\varnothing = T \\) and \\( (\\qexists \\xi \\psi)\\Bracks{v} = \\bigvee\\varnothing = F \\) for every formula \\( \\psi \\), so \\( \\qforall \\xi \\psi \\) would not entail \\( \\qexists \\xi \\psi \\), contradicting our intent of defining \\( \\exists \\) as a quantifier for existence.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{remark}\\label{rem:first_order_formula_valuation_without_variable_assignment}\n  Somewhat similar to \\fullref{rem:propositional_formula_valuation_without_variable_assignment}, if we know that \\( \\boldop{Free}(\\varphi) \\subseteq \\{ \\xi_1, \\ldots, \\xi_n \\} \\), we know that the \\hyperref[def:first_order_valuation/formula_valuation]{valuation} \\( \\varphi\\Bracks{v} \\) only depends on the values \\( v(\\xi_1), \\ldots, v(\\xi_n) \\). 
This allows us to introduce the shorthand\n  \\begin{equation}\\label{eq:rem:first_order_formula_valuation_without_variable_assignment/long_semantic}\n    \\varphi\\Bracks{\\xi_1 \\mapsto x_1, \\ldots, \\xi_n \\mapsto x_n}\n  \\end{equation}\n  or even\n  \\begin{equation}\\label{eq:rem:first_order_formula_valuation_without_variable_assignment/short_semantic}\n    \\varphi\\Bracks{x_1, \\ldots, x_n}\n  \\end{equation}\n  for\n  \\begin{equation*}\n    \\varphi\\Bracks{v_{\\xi_1 \\mapsto x_1, \\ldots, \\xi_n \\mapsto x_n}}\n  \\end{equation*}\n  because the variable assignment \\( v \\) plays no role here.\n\n  When using either of these shorthand notations, we implicitly assume that \\( \\boldop{Free}(\\varphi) \\subseteq \\set{ \\xi_1, \\ldots, \\xi_n } \\).\n\n  When \\( \\varphi = p(\\xi_1, \\ldots, \\xi_n) \\) is a predicate formula, the shorter notation \\eqref{eq:rem:first_order_formula_valuation_without_variable_assignment/short_semantic} translates to\n  \\begin{equation*}\n    p\\Bracks{x_1, \\ldots, x_n} = I(p)(x_1, \\ldots, x_n) \\in \\set{ T, F }\n  \\end{equation*}\n  and analogously for function terms we have\n  \\begin{equation*}\n    f\\Bracks{x_1, \\ldots, x_n} = I(f)(x_1, \\ldots, x_n) \\in X.\n  \\end{equation*}\n\n  Of course, we avoid this notation for formulas like \\( p(f(\\xi)) \\) because \\( p\\Bracks{x} \\) would mean \\( I(p)(I(f)(x)) \\) rather than \\( I(p)(x) \\), which would be confusing.\n\n  We apply this notation for terms and, in particular, functions.\n\n  We also sometimes use the shortened notation\n  \\begin{equation}\\label{eq:rem:first_order_formula_valuation_without_variable_assignment/short_syntactic}\n    \\varphi[\\tau_1, \\ldots, \\tau_n]\n  \\end{equation}\n  for \\hyperref[def:first_order_substitution/term_in_formula]{substituting terms in formulas}.\n\\end{remark}\n\n\\begin{definition}\\label{def:first_order_equation}\n  A \\term{first-order equation} is a formula of the form\n  \\begin{equation}\\label{eq:def:first_order_equation}\n    f(\\xi_1, \\ldots, \\xi_n) \\doteq g(\\xi_1, \\ldots, \\xi_n),\n  \\end{equation}\n  where \\( f \\) and \\( g \\) are functional symbols applied to the same tuple of variables.\n\n  Given a structure \\( \\mscrX = (X, I) \\), we call the elements of the set defined by this formula \\term{solutions}. That is, we say that the tuple \\( (x_1, \\ldots, x_n) \\) is a solution to the equation \\eqref{eq:def:first_order_equation} if\n  \\begin{equation*}\n    f\\Bracks{x_1, \\ldots, x_n} = g\\Bracks{x_1, \\ldots, x_n}.\n  \\end{equation*}\n\n  We can actually replace \\( f \\) and \\( g \\) with more general terms \\( \\tau \\) and \\( \\sigma \\), in which case the equation becomes \\( (\\tau \\doteq \\sigma) \\) and is satisfied if\n  \\begin{equation*}\n    \\tau\\Bracks{x_1, \\ldots, x_n} = \\sigma\\Bracks{x_1, \\ldots, x_n}.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{example}\\label{ex:equations}\n  A remarkable portion of mathematics concerns the study of different types of equations (even though they are not generally restricted to \\hyperref[def:first_order_equation]{equations in first-order logic}). The reason for this is that equations provide a simple way to specify rich semantic structure using simple syntactic objects.\n\n  \\begin{itemize}\n    \\item Matrix theory can be regarded as the study of linear equations. 
See \\fullref{subsec:matrices}.\n    \\item The theory of differential equations is aptly named, since it studies equations in functional spaces concerning functions and their derivatives. See \\fullref{sec:diffeq}.\n    \\item Roots of generalized derivatives are studied in optimization. See \\fullref{subsec:nonsmooth_derivatives}.\n    \\item Diophantine equations are studied in number theory. See \\fullref{subsec:integers}.\n    \\item Fixed points of functions are studied in different branches of mathematics. See \\fullref{thm:banach_fixed_point_theorem} or \\fullref{thm:knaster_tarski_theorem}.\n    \\item Affine varieties, which are sets of roots of polynomials, are studied in algebraic geometry. See \\fullref{subsec:affine_varieties}.\n  \\end{itemize}\n\\end{example}\n\n\\begin{definition}\\label{def:first_order_semantics}\n  Fix a first-order logic language \\( \\mscrL \\). We introduce notions analogous to \\hyperref[def:propositional_semantics]{propositional semantics}:\n  \\begin{thmenum}\n    \\thmitem{def:first_order_semantics/satisfiability}\\mcite[def. 16.11]{OpenLogicFull} Given a \\hyperref[def:first_order_structure]{structure} \\( \\mscrX = (X, I) \\), a \\hyperref[def:first_order_valuation/variable_assignment]{variable assignment} \\( v \\) and a set \\( \\Gamma \\) of \\hyperref[def:first_order_syntax/formula]{first-order formulas}, we say that the variable assignment \\( v \\) \\term{satisfies} \\( \\Gamma \\) and we write \\( \\mscrX \\vDash_v \\Gamma \\) if, for every formula \\( \\gamma \\in \\Gamma \\) we have \\( \\gamma\\Bracks{v} = T \\).\n\n    If every variable assignment in \\( \\mscrX \\) satisfies \\( \\Gamma \\), we say that \\( \\mscrX \\) itself satisfies \\( \\Gamma \\) or that \\( \\mscrX \\) is a \\term{model} of \\( \\Gamma \\) and write \\( \\mscrX \\vDash \\Gamma \\) (or simply \\( X \\vDash \\Gamma \\) if the interpretation is clear from the context).\n\n    Analogously to \\fullref{def:propositional_semantics/satisfiability}, we say that \\( \\Gamma \\) is satisfiable if there exists a model for \\( \\Gamma \\).\n\n    \\thmitem{def:first_order_semantics/entailment} We say that the set of formulas \\( \\Gamma \\) \\term{entails} the set of formulas \\( \\Delta \\) and write \\( \\Gamma \\vDash \\Delta \\) if every model of \\( \\Gamma \\) is also a model of \\( \\Delta \\).\n\n    \\thmitem{def:first_order_semantics/tautology} The formula \\( \\varphi \\) is a \\term{tautology} if every structure is a model of \\( \\varphi \\).\n\n    \\thmitem{def:first_order_semantics/contradiction} Dually, \\( \\varphi \\) is a \\term{contradiction} if no structure is a model of \\( \\varphi \\).\n\n    \\thmitem{def:first_order_semantics/equivalence} As in the simplest case with \\hyperref[def:propositional_semantics/equivalence]{propositional semantical equivalence}, we say that \\( \\Gamma \\) and \\( \\Delta \\) are \\term{semantically equivalent} and write \\( \\Gamma \\gleichstark \\Delta \\) if both \\( \\Gamma \\vDash \\Delta \\) and \\( \\Delta \\vDash \\Gamma \\).\n\n    \\thmitem{def:first_order_semantics/equisatisfiability} Again as in the simplest case with \\hyperref[def:propositional_semantics/equisatisfiability]{propositional equisatisfiability}, we say that the sets of formulas \\( \\Gamma \\) and \\( \\Delta \\) are \\term{equisatisfiable} when it holds that \\( \\Gamma \\) is satisfiable if and only if \\( \\Delta \\) is satisfiable.\n\n    \\Fullref{thm:quantifier_satisfiability/existential} provides an important example of equisatisfiable formulas 
that are not equivalent.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{remark}\\label{rem:propositional_logic_as_first_order_logic}\n  It is now clear that the \\hyperref[subsec:propositional_logic]{propositional logic language} can be regarded as a degenerate first-order logic language with at most countably many predicate symbols, all of arity \\( 0 \\), and no functional symbols. Thus, first-order logic is indeed an extension of propositional logic.\n\\end{remark}\n\n\\begin{remark}\\label{rem:first_order_model_notation}\n  In first-order logic, \\hyperref[def:first_order_semantics/satisfiability]{models} are defined as pairs \\( \\mscrX = (X, I) \\). Each area of mathematics has its own conventions and models are usually specified as simply as possible without being ambiguous (and sometimes simplicity wins even at the cost of ambiguity).\n\n  A popular convention is to use compatible letters like we did with \\( X \\) and \\( \\mscrX \\) or \\( G \\) and \\( \\mscrG \\), where the structure itself is named using calligraphic letters while the domain is named using the corresponding capital letter in normal font. This only works in very simple cases where we can say \\enquote{Let \\( \\mscrP = (P, \\leq) \\) be a \\hyperref[def:partially_ordered_set]{partially ordered set}}. In the case of Banach lattices, for example, this becomes \\( \\mscrX = (X, \\BbbK, +, \\norm \\anon, \\vee, \\wedge) \\), which is quite cumbersome.\n\n  Consider another example. In the sense of first-order structures, the language of the \\hyperref[def:group/theory]{theory of groups} has a signature consisting of three functional symbols and no predicate symbols. Specifying a structure for this language is thus the same as specifying a quadruple \\( \\mscrG = (G, e, (\\anon)^{-1}, \\cdot) \\). We usually identify the group \\( \\mscrG \\) as a tuple with its domain \\( G \\) and even go as far as to say \\enquote{Let \\( (\\mscrG, \\cdot) \\) be a group}. This is technically wrong, but it is both convenient and conventional. Furthermore, the rest of the definition of the group can easily be inferred. In case of ambiguity, the simplest disambiguation is to use lower indices with the name of the structure, e.g. 
\\( +_\\mscrG \\) and \\( +_\\mscrH \\) may be the addition operation in different abelian groups.\n\\end{remark}\n", "meta": {"hexsha": "5ba2ab4f38b3f22d91eac96c66df22ead778649b", "size": 31881, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/first_order_logic.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/first_order_logic.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/first_order_logic.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.5075757576, "max_line_length": 849, "alphanum_fraction": 0.6965590791, "num_tokens": 9336, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835330070839, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.58421555941327}}
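To make the recursive valuation of terms and formulas concrete, here is a minimal Python sketch of an evaluator over a finite structure (an illustration under simplifying assumptions: formulas are encoded as nested tuples, only \( \neg \), \( \wedge \) and \( \doteq \) plus the two quantifiers are handled, and the modified assignment \( v_{\xi \mapsto x} \) is modelled by a dictionary update):

```python
def evaluate(phi, X, I, v):
    """Recursive first-order valuation over a finite domain X with
    interpretation I; v maps variable names to elements of X."""
    def term(t):  # variables are strings, function terms are ('fun', f, *args)
        if isinstance(t, str):
            return v[t]
        _, f, *args = t
        return I[f](*map(term, args))

    tag = phi[0]
    if tag == 'eq':
        return term(phi[1]) == term(phi[2])
    if tag == 'pred':
        return I[phi[1]](*map(term, phi[2:]))
    if tag == 'not':
        return not evaluate(phi[1], X, I, v)
    if tag == 'and':
        return evaluate(phi[1], X, I, v) and evaluate(phi[2], X, I, v)
    if tag == 'forall':  # all(...) over an empty domain is True -- cf. the
        return all(evaluate(phi[2], X, I, {**v, phi[1]: x}) for x in X)
    if tag == 'exists':  # nonempty-domain caveat in the valuation definition
        return any(evaluate(phi[2], X, I, {**v, phi[1]: x}) for x in X)
    raise ValueError(f'unknown tag {tag!r}')

# forall xi . exists eta . (xi + eta = 0) over the group Z_5:
X = range(5)
I = {'+': lambda a, b: (a + b) % 5, '0': lambda: 0}
phi = ('forall', 'xi', ('exists', 'eta',
       ('eq', ('fun', '+', 'xi', 'eta'), ('fun', '0'))))
print(evaluate(phi, X, I, {}))  # True: every element has an inverse
```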
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{xcolor}\n\\usepackage{amsthm}\n\\usepackage[mathcal]{euscript}\n\n\\usepackage{url}\n\n\\newcommand{\\Hcal}{\\mathcal{H}}\n\\newcommand{\\real}{\\mathbb{R}}\n\n\\title{Study of the constructive proof of weak compactness}\n\\author{Nazarov Ivan}\n\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\nLet $(\\Hcal, \\langle\\cdot,\\cdot\\rangle)$ be a Hilbert space. A sequence $(x_n)_{n\\geq1} \n\\in \\Hcal$ converges {\\bf strongly} to $x\\in\\Hcal$ if $\\|x_n - x\\| \\to 0$, and {\\bf\nweakly} if $\\langle x_n, z \\rangle \\to \\langle x, z\\rangle$ for all $z\\in \\Hcal$.\nWeak convergence is necessary for strong convergence, since for every $z\\in \\Hcal$\nby the Cauchy-Schwartz inequality\n\\begin{equation*}\n  \\liminf_{n\\to \\infty}\n    \\lvert \\langle x_n - x, z\\rangle \\rvert\n    \\leq \\limsup_{n\\to \\infty}\n      \\lvert \\langle x_n - x, z\\rangle \\rvert\n    \\leq \\|z\\| \\limsup_{n\\to \\infty} \\|x_n - x \\|\n    = 0\n    \\,.\n\\end{equation*}\nLet $(x_n)_{n\\geq1} \\in \\Hcal$ be a bounded sequence $M = \\sup_{n\\geq1} \\|x_n\\| < +\\infty$.\nThen there is $(n_k)_{k\\geq1}\\uparrow$ such that $x_{n_k}$ converges weakly to some\n$h \\in \\Hcal$, i.e. $x_{n_k}\\rightharpoonup h$.\n\n\\paragraph{Proof} % (fold)\n\\label{par:proof}\n\nLet $\\Hcal^0_0 = \\{\\sum_{i=1}^n \\beta_i x_i \\colon \\beta_i \\in \\real,\\,n\\geq 0 \\}$.\nThe set $\\Hcal^0_0$ is a linear subspace of $\\Hcal$. It is a pre-Hilbert space with\nrespect to $\\langle\\cdot, \\cdot \\rangle$, since it can be completed w.r.t. this inner\nproduct to get $\\Hcal_0$ and $\\Hcal^0_0$ is dense in its completion.\n\n\\paragraph{Subsequence} % (fold)\n\\label{par:subsequence}\n\nLet's construct a subsequence, which could potentially weakly converge to something.\nLet $x^0_n = x_n$. Suppose for $k\\geq 0$ we have $(x^k_n)_{n\\geq 1} \\subseteq (x_n)_{n\\geq 1}$\nand $(\\alpha_i)_{i\\leq k}$, such that $\\langle x_i, x^k_n\\rangle \\to \\alpha_i$ for\nall $1\\leq i\\leq k$ as $n\\to \\infty$.\n\nA sequence $\\alpha^{k+1}_n = \\langle x_{k+1}, x^k_n \\rangle$ in $\\real$ is bounded,\nsince $\\lvert\\alpha^{k+1}_n\\rvert \\leq \\|x_{k+1}\\| \\|x^k_n\\| \\leq M^2$ and $x^k_n \\in\n\\{x_n\\colon n\\geq 1\\}$. Therefore, it contains a subsequence $(n_p)_{p\\geq1}$ such that\n$\\alpha^{k+1}_{n_p}$ converges to some $\\alpha^{k+1} \\in \\real$. If we let $x^{k+1}_p\n= x^k_{n_p}$, then\n\\begin{itemize}\n  \\item $\\langle x_j, x^{k+1}_p \\rangle \\to \\alpha^j$ since $(x^{k+1}_p)_{p\\geq 1}\n  \\subseteq (x^k_p)_{p\\geq 1} \\subseteq (x^j_p)_{p\\geq 1}$ for $j\\leq k$, and\n  subsequences converge to the same limit as the parent sequence;\n  \\item $\\langle x_{k+1}, x^{k+1}_p \\rangle = \\alpha^{k+1}_{n_p} \\to \\alpha^{k+1}$\n  by construction.\n\\end{itemize}\nFor the diagonal sequence $(x^p_p)_{p\\geq1}$ there are indices $(n_p)_{p\\geq1} \\uparrow$\nsuch that $x^p_p = x_{n_p}$, and we have $(x^p_p)_{p\\geq1} \\subseteq (x^k_p)_{p\\geq1}$\nfor all $k\\geq 0$, whence we must have  $\\langle x_n, x^p_p \\rangle \\to \\alpha^n$\nfor all $n\\geq 1$.\n\n% paragraph subsequence (end)\n\n\\paragraph{A mapping} % (fold)\n\\label{par:a_mapping}\n\nWe shall show that a map $l(x) = \\lim_{p\\to \\infty} \\langle x, x_{n_p} \\rangle$ is\nwell defined.\n\nRight from the start we know that $l(x_n) = \\alpha^n$ for all $n\\geq 1$. 
Since the\ninner product is bilinear and the limit is additive, for any $z = \\sum_{i=1}^n \\beta_i x_i$\nwe have\n\\begin{equation*}\n  l(z)\n    = \\lim_{p\\to \\infty} \\Bigl \\langle \\sum_{i=1}^n \\beta_i x_i, x_{n_p} \\Bigr \\rangle\n    = \\sum_{i=1}^n \\beta_i \\lim_{p\\to \\infty} \\langle x_i, x_{n_p} \\rangle\n    = \\sum_{i=1}^n \\beta_i l(x_i)\n    = \\sum_{i=1}^n \\beta_i \\alpha^i\n    \\in \\real\n    \\,.\n\\end{equation*}\nTherefore $l$ is defined on $\\Hcal^0_0$.\n\nSince $\\Hcal^0_0$ is dense in $\\Hcal_0$ w.r.t. the induced norm, for any $z\\in \\Hcal_0$\nthere is $(z_k)_{k\\geq1} \\in \\Hcal^0_0$ such that $\\|z_k - z\\|\\to 0$. Thus for any\n$\\varepsilon > 0$ there is $k_\\varepsilon \\geq 1$ such that for all $k\\geq k_\\varepsilon$\nwe have $\\|z_k - z\\| \\leq \\tfrac\\varepsilon{M}$. Hence for every $p\\geq 1$ we get\n\\begin{align*}\n  \\bigl\\lvert \\langle z, x_{n_p} \\rangle -  \\langle z_k, x_{n_p} \\rangle \\bigr\\rvert\n    \\leq \\| z - z_k \\| \\|x_{n_p} \\|\n    \\leq \\tfrac\\varepsilon{M} M\n    = \\varepsilon\n    \\,.\n\\end{align*}\nWe conclude that $\\sup_{p\\geq 1} \\lvert \\langle z - z_k, x_{n_p} \\rangle \\rvert \\to 0$\nas $k\\to \\infty$, since for any $\\varepsilon > 0$ there is $k_\\varepsilon \\geq 1$ such\nthat $\\sup_{p\\geq 1} \\lvert \\langle z - z_k, x_{n_p} \\rangle \\rvert \\leq \\varepsilon$ for\nall $k\\geq k_\\varepsilon$.\n\nNext observe that for any $k \\geq j$ and all $p$ we have\n\\begin{align*}\n  \\lvert l(z_k) - l(z_j) \\rvert\n    &\\leq \\lvert l(z_j) - \\langle z_j, x_{n_p} \\rangle \\rvert\n      + \\lvert l(z_k) - \\langle z_k, x_{n_p} \\rangle \\rvert\n    \\\\\n    &+ \\lvert \\langle z_j - z, x_{n_p} \\rangle \\rvert\n      + \\lvert \\langle z_k - z, x_{n_p} \\rangle \\rvert\n      \\,.\n\\end{align*}\nFrom the convergence of the supremum above we can pick $k_\\varepsilon \\geq1$ such\nthat the last two terms are bounded each by $\\tfrac\\varepsilon2$ for all $j,k \\geq\nk_\\varepsilon$. Taking limit suprema of both sides with respect to $p$ eliminates\nthe first two absolute terms of the right hand side, since $\\langle z_j, x_{n_p} \\rangle \\to l(z_j)$\nby the definition of $l$ on $\\Hcal^0_0$, thereby implying that\n$\\lvert l(z_k) - l(z_j) \\rvert \\leq \\varepsilon$ for all such $j$ and $k$. Hence\n$(l(z_k))_{k\\geq 1}$ is Cauchy in $\\real$, and thus $l(z_k) \\to \\alpha$ for some\n$\\alpha \\in \\real$.\n\nTo show that $l(z) = \\alpha$ we make the following observation: for all $k,p\\geq 1$\n\\begin{equation*}\n  \\lvert \\langle z, x_{n_p} \\rangle - \\alpha \\rvert\n    \\leq \\lvert l(z_k) - \\alpha \\rvert\n    + \\lvert \\langle z_k, x_{n_p} \\rangle - l(z_k) \\rvert\n    + \\sup_{p\\geq 1} \\lvert \\langle z - z_k, x_{n_p} \\rangle \\rvert\n    \\,.\n\\end{equation*}\nFor any $\\varepsilon > 0$ there is $k_\\varepsilon \\geq 1$ such that the sum of the\nfirst and last terms on the right hand side is not greater than $\\varepsilon$ at\n$k_\\varepsilon$. Hence the $\\limsup_{p\\to \\infty}$ of the expression on the left\nis not greater than $\\varepsilon$, since\n\\begin{equation*}\n  \\limsup_{p\\to \\infty} \\lvert \\langle z, x_{n_p} \\rangle - \\alpha \\rvert\n    \\leq \\varepsilon\n    + \\limsup_{p\\to \\infty} \\lvert\n        \\langle z_{k_\\varepsilon}, x_{n_p} \\rangle - l(z_{k_\\varepsilon})\n      \\rvert\n    \\,,\n\\end{equation*}\nand $\\langle z_k, x_{n_p} \\rangle \\to l(z_k)$ in $\\real$ as $p\\to \\infty$ for any\n$k \\geq 1$. 
Since the last bound holds for arbitrarily small $\\varepsilon > 0$,\n$\\langle z, x_{n_p} \\rangle \\to \\alpha$, and therefore $l$ is defined on $\\Hcal_0$.\n\nWhat about $z\\in \\Hcal_0^\\perp$, the orthogonal complement of $\\Hcal_0$ in $\\Hcal$?\nFor any $z\\in \\Hcal_0^\\perp$ by orthogonality $\\langle z, x_n \\rangle = 0$ for all\n$n\\geq 1$, whence $\\lim_{p\\to\\infty} \\langle z, x_{n_p} \\rangle = 0$. Therefore $l$\nis defined on $\\Hcal_0^\\perp$. Since every $z \\in \\Hcal$ decomposes as $z = u + v$\nwith $u \\in \\Hcal_0$ and $v \\in \\Hcal_0^\\perp$, and the inner product is additive in\nthe first argument, $l$ is in fact defined on the whole of $\\Hcal$.\n\n% paragraph a_mapping (end)\n\n\\paragraph{Linearity and continuity} % (fold)\n\\label{par:linearity_and_continuity}\n \nThus $z\\mapsto l(z) = \\lim_{p\\to\\infty} \\langle z, x_{n_p} \\rangle$ is a well defined\n$\\Hcal \\to \\real$ map. Furthermore, it is linear, since the inner product is bilinear:\n\\begin{equation*}\n  l(z + \\alpha x)\n    = \\lim_{p\\to\\infty} \\langle z + \\alpha x, x_{n_p} \\rangle\n    = \\lim_{p\\to\\infty} \\langle z, x_{n_p} \\rangle\n    + \\alpha \\lim_{p\\to\\infty} \\langle x, x_{n_p} \\rangle\n    = l(z) + \\alpha l(x)\n    \\,.\n\\end{equation*}\nFinally, $l$ is also a bounded map, since for all $z\\in \\Hcal$\n\\begin{equation*}\n  \\lvert l(z) \\rvert\n    = \\lim_{p\\to\\infty} \\lvert \\langle z, x_{n_p} \\rangle \\rvert\n    \\leq \\sup_{p\\geq 1} \\|z\\| \\| x_{n_p}\\|\n    \\leq M \\|z\\|\n    \\,.\n\\end{equation*}\nSo the map $l$ is linear and bounded, hence continuous.\n\n% paragraph linearity_and_continuity (end)\n\n\\paragraph{Weak limit} % (fold)\n\\label{par:weak_limit}\n\nThe Riesz representation theorem therefore implies the existence of some $h\\in \\Hcal$ such\nthat $l(z) = \\langle z, h\\rangle$ for any $z\\in \\Hcal$. Therefore $x_{n_p} \\rightharpoonup\nh$ for this $(x_{n_p})_{p\\geq 1} \\subseteq (x_n)_{n\\geq 1}$.\n\n% paragraph weak_limit (end)\n\n% paragraph proof (end)\n\n\\end{document}\n", "meta": {"hexsha": "4b22c67816eeb44e36cae8157ef375afcae48e43", "size": 8055, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scribbles/weak-compactness.tex", "max_stars_repo_name": "ivannz/general-scribbles", "max_stars_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-07T20:41:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-28T12:47:40.000Z", "max_issues_repo_path": "scribbles/weak-compactness.tex", "max_issues_repo_name": "ivannz/general-scribbles", "max_issues_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scribbles/weak-compactness.tex", "max_forks_repo_name": "ivannz/general-scribbles", "max_forks_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4773869347, "max_line_length": 94, "alphanum_fraction": 0.6559900683, "num_tokens": 3209, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8354835330070839, "lm_q1q2_score": 0.5842155541769946}}
{"text": "\n\\subsection{Orders of integration}\n\nHow many diffs do you need to do to get a stationary process?\n\nIf something is first order integrated it is \\(I(1)\\).\n\n", "meta": {"hexsha": "7b839ec0d9f0e1a7c80e800cbcc11e20070efd4c", "size": 156, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/probability/stochasticOrderIntegration/01-01-order.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/probability/stochasticOrderIntegration/01-01-order.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/probability/stochasticOrderIntegration/01-01-order.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.5, "max_line_length": 61, "alphanum_fraction": 0.7435897436, "num_tokens": 36, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8558511543206819, "lm_q2_score": 0.6825737344123242, "lm_q1q2_score": 0.5841815185057662}}
{"text": "\\chapter{Arithmetic}\n\nYou are probably somewhat familiar with doing various forms of arithmetic, and have probably even memorized the results of many operations. But the idea in computer science is to break down that process into a sequence of steps called an algorithm. It is important to understand that there are multiple possible algorithms to accomplish the same result. Remember that an algorithm is a set of directions to a destination, but the same destination can have several possible routes.\n\n\\section{Addition and Subtraction}\n\nLet's start simple with the addition of two digits. Like 1+1, or 2+5. You probably don't even need to think about how to do these, you just know the answer. But I want you to think about how to explain what is going on as an algorithm. Something you might not even think about anymore is why do these symbols 1, 2, 3, ... mean anything? We simply memorize their meaning, and the symbol itself does not have any intrinsic value.\\\\\n\nLet's start by adding 1 to any of the 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). We might use some conditional branching, and simply give the answer for each possibility. For the algorithm to be able to handle any digit, we need the idea of a variable. In this example it's called \\(x\\). And let's call the result \\(y\\). So We are basically saying how to computer \\(y = x+1\\), if \\(x\\) is any digit 0 through 9.\\\\\n\n\\begin{center}\\imagegraphic[0.75]{add_1_flowchart.png}\\end{center}\n\nBasically, the algorithm defines what adding 1 means for each possible value of \\(x\\). It checks for what value \\(x\\) actually has, and gives the answer based on that. But if \\(x\\) is not 0 through 9, then the answer is undefined because the algorithm doesn't know what to do in that case.\\\\\n\n\n\n\\section{Multiplication}\n\n\\section{Floating Point Division}\n\n\\section{Integer Division}\n\n\\section{Modulus}\n", "meta": {"hexsha": "856e2ebb2ccf741fad51f6c64c964fd8824ec4fb", "size": 1855, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TeX_files/Arithmetic.tex", "max_stars_repo_name": "kcdodd/ecsp-book", "max_stars_repo_head_hexsha": "371e0e07140bc2fa5a8e3d424510900f368f885a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-07-27T18:34:02.000Z", "max_stars_repo_stars_event_max_datetime": "2015-07-27T18:34:02.000Z", "max_issues_repo_path": "TeX_files/Arithmetic.tex", "max_issues_repo_name": "kcdodd/ecsp-book", "max_issues_repo_head_hexsha": "371e0e07140bc2fa5a8e3d424510900f368f885a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TeX_files/Arithmetic.tex", "max_forks_repo_name": "kcdodd/ecsp-book", "max_forks_repo_head_hexsha": "371e0e07140bc2fa5a8e3d424510900f368f885a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.2916666667, "max_line_length": 480, "alphanum_fraction": 0.7622641509, "num_tokens": 456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511616741042, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5841815179986386}}
{"text": "\n%-------------------------------------------------------------------------------\n% LF                                                                            \n%-------------------------------------------------------------------------------\n\n\\section{LF: A Logical Framework}\n\nThe original LF logical framework was specified as follows:\n$$\n\\begin{array}{llll}\n\\textbf{Kinds} & K & ::= & \\Type \\Spb \\PiTyp{x}{A}{K} \\\\\n\\textbf{Families} & A & ::= & a \\Spb \\PiTyp{x}{A_2}{A} \\Spb \\Lam{x : A_2}{A} \\Spb \\Appl{A}{M} \\\\\n\\textbf{Terms} & M & ::= & c \\Spb x \\Spb \\Lam{x : A}{M} \\Spb \\Appl{M}{M_2} \\\\\n\\textbf{Signatures} & \\Sigma & ::= & \\cdot \\Spb \\Sigma,\\Of{c}{A} \n\\Spb \\Sigma,\\Of{a}{K}\\\\\n\\textbf{Contexts} & \\Gamma & ::= & \\cdot \\Spb \\Gamma,\\Of{x}{A}\\\\\n% \\textbf{Simple Types} & \\alpha & ::= & a \\Spb \\Arr{\\alpha_1}{\\alpha_2} \\\\\n\\end{array}\n$$\n\nWhile this definition is the simplest, the meta-theory of LF was\nsignificantly simplfied by the introduction of Canonical LF, admitting\nonly the so-called \\emph{canonical forms} into the language.\n\n%-------------------------------------------------------------------------------\n% Canonical LF                                                                  \n%-------------------------------------------------------------------------------\n\n\\section{Canonical LF}\nWe begin by describing Canonical LF, the language at the heart of our\nwork.  While the various representations will differ from the one presented here, \n(e.g., for efficiency) this language should always be kept in mind.\n\nThe main difference between Canonical LF and earlier versions\nis the lack of explicit $\\beta$-redices in types and terms.  \nAlso, a type annotation on $\\lambda$-expressions is no longer required (or allowed).\nSee Harper and Licata\\cite{Harper:2006:Mechanizing} for an extended\nexposition.\n\n$$\n\\begin{array}{llll}\n\\textbf{Kinds} & K & ::= & \\Type \\Spb \\PiTyp{x}{A}{K} \\\\\n\\textbf{Canonical Type Families} & A & ::= & P \\Spb \\PiTyp{x}{A_2}{A} \\\\\n\\textbf{Atomic Type Families} & P & ::= & a \\Spb \\Appl{P}{M} \\\\\n\\textbf{Canonical Terms} & M & ::= & R \\Spb \\Lam{x}{M} \\\\\n\\textbf{Atomic Terms} & R & ::= &  x \\Spb c \\Spb \\Appl{R}{M}\\\\\n\\textbf{Signatures} & \\Sigma & ::= & \\cdot \\Spb \\Sigma,\\Of{c}{A} \n\\Spb \\Sigma,\\Of{a}{K}\\\\\n\\textbf{Contexts} & \\Gamma & ::= & \\cdot \\Spb \\Gamma,\\Of{x}{A}\\\\\n% \\textbf{Simple Types} & \\alpha & ::= & a \\Spb \\Arr{\\alpha_1}{\\alpha_2} \\\\\n\\end{array}\n$$\n\n%-------------------------------------------------------------------------------\n% Spine Form LF                                                                 \n%-------------------------------------------------------------------------------\n\n\\subsection{Spine-Form Canonical LF}\n\nThere are a number of difficulties with the name-carrying \nrepresentation\\footnote{i.e., variable names associated with binders}\nof Canonical LF.  The first is that we must\nimplement capture-avoiding substitution and $\\alpha$-conversion,\na notoriously delicate and error-prone process.\nWe can circumvent this difficulty\nby using DeBruijn indices\\cite{DeBruijn:1972:Terms}. \n\n A more significant \ndifficulty lies in the implementation of hereditary substitution. \n\\XXX{citation needed}\nWhen applying a substitution, we often need to determine whether\nthe head of an expression is a constant or a variable in order\nto know which rule to apply.  
Thus, for a term of the form\n$$f\\ x_1\\ x_2\\ \\ldots\\ x_n = (\\ldots((f\\ x_1)\\ x_2)\\ \\ldots\\ x_n) $$\nwe need to take apart $n$ applications just to determine how\na substitution should be applied.  Later, when we implement\nunification, that algorithm will need to compare the heads\nof such terms for equality.  Thus, quick access to the head\nof such a term is essential for a reasonably efficient implementation.\nWe thus define \\emph{Spine-Form Canonical LF}.\n\n$$\n\\begin{array}{llll}\n\\mathbf{Kinds} & K & ::= & \\Type \\Spb \\SpPiTyp{A}{K} \\\\\n\\mathbf{Canonical\\ Type\\ Families} & A & ::= & P \\Spb \\SpPiTyp{A_1}{A_2} \\\\\n\\mathbf{Atomic\\ Type\\ Families} & P & ::= & a\\cdot S \\\\\n\\mathbf{Canonical\\ Terms} & M & ::= & R \\Spb \\SpLam{M} \\\\\n\\mathbf{Atomic\\ Terms} & R & ::= & H\\cdot S \\\\\n\\mathbf{Heads} & H & ::= & c \\Spb i\\\\\n\\mathbf{Spines} & S & ::= & \\Nil \\Spb M;S\\\\\n\\end{array} \n$$\n\nIn the following, judgments will have the same\nform for different classes.  For instance,\nthe rules for $\\Pi$-types and $\\Pi$-kinds will\noftentimes be identical in structure.  To avoid the\nrepetition of rules, we introduce a convenient \nsyntax.\n\n$$\n\\begin{array}{llll}\n\\mathbf{Levels} & L & ::= & \\Type \\Spb \\Kind \\\\\n\\mathbf{Expressions} & U & ::= & L \\Spb \\SpPiTyp{U_1}{U_2} \\Spb \\lambda U \\Spb H\\cdot S \\\\\n\\mathbf{Heads} & H & ::= & c \\Spb i\\\\\n\\mathbf{Spines} & S & ::= & \\Nil \\Spb U;S\\\\\n\\end{array} \n$$\n\nConstants are either type constants ($a$) or term constants ($c$).\nThe rules that follow will refer to this basic syntax.  While this\nis somewhat less precise than the more explicit separation of \nlevels in the syntax above (indeed, we can easily write grammatically\ncorrect nonsense in this language, such as $\\SpLam{(\\SpPiTyp{U_1}{U_2})}$)\nwe are willing to allow such terms to lessen the number of rules.\nTerms that are not even grammatically correct (much less type-correct) \nin the original language will be excluded by type-checking (rather than expression \nformation, as in the previous version).\n\nWe will freely mix the more concrete classes such as $K,A$ and $P$\nabove when the rules restrict the expressions to such cases.\nWe see no potential for confusion however, and again notational\nconvenience is our guide.  
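\n\nAs an aside, the practical benefit of the spine representation is easy to\nsee in any implementation language.  The following rough Python sketch is\npurely illustrative -- it is not part of any actual Twelf code, and all names\nin it are invented for this example -- but it shows why the head of a\nspine-form term is available in constant time, whereas a nested application\nmust be unwound:\n\n\\begin{verbatim}\n# Nested representation: (((f x1) x2) ... xn)\nclass App:\n    def __init__(self, fun, arg):\n        self.fun, self.arg = fun, arg\n\ndef head_nested(term):\n    # O(n): walk through n applications to reach the head.\n    while isinstance(term, App):\n        term = term.fun\n    return term\n\n# Spine representation: a head applied to a list of arguments.\nclass Spine:\n    def __init__(self, head, args):\n        self.head, self.args = head, args\n\ndef head_spine(term):\n    # O(1): the head is stored directly.\n    return term.head\n\\end{verbatim}\n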
\n\n\n", "meta": {"hexsha": "0a606c8c85c3dd35014a964886d000ce1c2f6097", "size": 5525, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/inverse/tex/canonical.tex", "max_stars_repo_name": "kryptine/twelf", "max_stars_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2015-01-24T18:10:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T12:41:05.000Z", "max_issues_repo_path": "src/inverse/tex/canonical.tex", "max_issues_repo_name": "kryptine/twelf", "max_issues_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-27T22:17:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-27T22:17:51.000Z", "max_forks_repo_path": "src/inverse/tex/canonical.tex", "max_forks_repo_name": "kryptine/twelf", "max_forks_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-05-06T01:32:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T19:33:29.000Z", "avg_line_length": 43.8492063492, "max_line_length": 96, "alphanum_fraction": 0.603438914, "num_tokens": 1549, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8872045966995027, "lm_q2_score": 0.658417500561683, "lm_q1q2_score": 0.5841510330457226}}
{"text": "\\hypertarget{polygon-triangulation}{%\n\\section{Polygon Triangulation}\\label{polygon-triangulation}}\n\n\\hypertarget{introduction}{%\n\\subsection{Introduction}\\label{introduction}}\n\nThe decomposition of a polygon into triangles whose union again results\nin the original polygon is called polygon triangulation. In the method,\nno new Vertices are added. Only the diagonals of the polygon results in\nthe triangulation. There need not be a unique triangulation for a given\npolygon.\n\nThere are various methods which can be used to perform polygon\ntriangulation. In our implementation, we\n\\protect\\hyperlink{algorithm-approach}{discuss} two types of\ntriangulation which are: 1) Ear Clipping Triangulation. 2) Plane sweep\nmonotone division combined with Monotone triangulation.\n\n\\hypertarget{how-to-run}{%\n\\subsection{How to Run}\\label{how-to-run}}\n\nThe src folder contains the source code for the convex hull program.\n\\texttt{g++} from the GNU compiler suite is required to compile the\nprogram to a executable.\n\nSteps to Compile:\n\n\\begin{enumerate}\n\\def\\labelenumi{\\arabic{enumi})}\n\\tightlist\n\\item\n  \\texttt{cd} into the src directory\n\\item\n  Run \\texttt{g++\\ main.cpp} which generates an executable called\n  \\texttt{a.out} in the same directory\n\\item\n  Run the executable using \\texttt{./a.out} (on linux)\n\n  \\begin{enumerate}\n  \\def\\labelenumii{\\arabic{enumii})}\n  \\tightlist\n  \\item\n    The executable takes a dataset from command line argument. For\n    example, to use an existing dataset, run\n    \\texttt{./a.out\\ ../datasets/complex.txt}\n  \\item\n    If no command-line argument is given, it takes input from the shell\n    directly (stdin)\n  \\end{enumerate}\n\\end{enumerate}\n\n\\hypertarget{input}{%\n\\subsection{Input}\\label{input}}\n\nThe required file format for the algorithm to work correctly is:\n\n\\begin{itemize}\n\\tightlist\n\\item\n  The input \\textbf{must} be a clockwise ordering of Poins present on\n  the polygon.\n\\item\n  First line must contain the no of Points to be taken as input by the\n  program.\n\\item\n  Each of next line must contain 2 integers, space seperated denoting\n  the (x, y) coordinates of each point.\n\\item\n  Each coordinate must be of integer type in the range -10\\^{}8 to\n  10\\^{}8.\n\\item\n  No of coordinates must be less than 1 Billion.\n\\end{itemize}\n\n\\hypertarget{output}{%\n\\subsection{Output}\\label{output}}\n\nEach line of output contains three coordinates of points present on each\ntriangulation.\\\\\nThe last two lines of output show the time taken to take input in\nmicroseconds and the time taken by the algorithm to compute\nTriangulation (also in microseconds).\n\nThis is the output of one of the datasets\n\\href{./datasets/long.txt}{long.txt}\n\n\\includegraphics[width=8cm,height=8cm]{img/TRIsnake.png}\\\\\n\n\\hypertarget{documentation-and-report}{%\n\\subsection{Documentation and Report}\\label{documentation-and-report}}\n\nDocumentation of this algorithm, functions and classes can be found in\nthe \\texttt{docs} folder in the current directory. 
Open the\n\\href{../Triangulation/docs/html/index.html}{index.html} file from the\ndocs directory with your preferred browser to go through the\ndocumentation.\n\n\\hypertarget{performance-analysis}{%\n\\subsection{Performance Analysis}\\label{performance-analysis}}\n\nThe analysis was performed on a system with the following configuration\n(reported times do not include printing time):\n\n\\begin{itemize}\n\\tightlist\n\\item\n  OS: Arch Linux (64Bit) running Linux Kernel version 5.7.2\n\\item\n  Processor: Intel Core i7 7700HQ\n\\item\n  RAM: 8GB\n\\item\n  Compiler: GNU G++ (GCC) 10.1.0\n\\end{itemize}\n\n\\textbf{The following observations are recorded:}\n\nTime taken to triangulate basic shapes:\n\n\\begin{longtable}[]{@{}lcccc@{}}\n\\toprule\n\\begin{minipage}[b]{0.07\\columnwidth}\\raggedright\nFilename\\strut\n\\end{minipage} & \\begin{minipage}[b]{0.12\\columnwidth}\\centering\nInput Vertices\\strut\n\\end{minipage} & \\begin{minipage}[b]{0.15\\columnwidth}\\centering\nComputed Triangles\\strut\n\\end{minipage} & \\begin{minipage}[b]{0.25\\columnwidth}\\centering\nLine Sweep Triangulation Runtime\\strut\n\\end{minipage} & \\begin{minipage}[b]{0.27\\columnwidth}\\centering\nEar Clipping Triangulation Runtime\\strut\n\\end{minipage}\\tabularnewline\n\\midrule\n\\endhead\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\ntriangle.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n3\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n1\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n30 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n16 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\ndownConvex.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n6\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n4\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n43 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n23 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\ntest.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n6\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n4\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n35 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n24 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nsquare.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n4\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n2\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n36 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n18 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nhexagon.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n6\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n4\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n38 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n16 
microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nmonotone.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n15\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n13\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n109 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n75 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nuniMonotone.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n8\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n6\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n115 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n42 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\ncomplex.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n17\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n15\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n188 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n17 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nstrange.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n16\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n14\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n148 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n122 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nstar.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n10\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n8\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n159 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n101 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nspiral.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n32\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n30\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n278 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n214 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\ntank.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n55\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n53\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n429 microsec\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.27\\columnwidth}\\centering\n452 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\begin{minipage}[t]{0.07\\columnwidth}\\raggedright\nlong.txt\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.12\\columnwidth}\\centering\n72\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.15\\columnwidth}\\centering\n70\\strut\n\\end{minipage} & \\begin{minipage}[t]{0.25\\columnwidth}\\centering\n593 microsec\\strut\n\\end{minipage} & 
\\begin{minipage}[t]{0.27\\columnwidth}\\centering\n1668 microsec\\strut\n\\end{minipage}\\tabularnewline\n\\bottomrule\n\\end{longtable}\n\n\\hypertarget{algorithm-approach}{%\n\\subsection{Algorithm Approach}\\label{algorithm-approach}}\n\n\\hypertarget{ear-clipping}{%\n\\subsubsection{Ear Clipping}\\label{ear-clipping}}\n\nAccording to the two ears theorem, any simple polygon with a minimum of 4\nvertices and without holes has at least two ``ears''. An ear is a vertex\nwhose internal angle is less than $\\pi$ and where the line joining its\nadjacent vertices is a diagonal of the polygon.\n\nThe algorithm works by repeatedly removing these ``ears'' (which are\ntriangles) until the complete polygon is triangulated. This algorithm is\neasy to implement, but slower than some other algorithms, and it only\nworks on polygons without holes. The runtime complexity of this\nalgorithm is O(n\\textsuperscript{2}).\n\n\\hypertarget{plane-sweep-monotone-triangulation}{%\n\\subsubsection{Plane Sweep Monotone\nTriangulation}\\label{plane-sweep-monotone-triangulation}}\n\nA monotone polygon can be easily triangulated in O(n) time. A\nsimple polygon is said to be monotone w.r.t. a line L if and only if any\nline perpendicular to L passes through the polygon at most twice. Any\nmonotone polygon can be divided into two monotone chains. A polygon that\nis monotone w.r.t. the x-axis is called x-monotone. Given an x-monotone\npolygon, the greedy algorithm walks along one chain of the polygon from\ntop to bottom, adding diagonals (forming triangles) whenever possible.\n\nIf a polygon is not monotone, it can be partitioned into monotone\nsub-polygons in O(n log n) time using the line/plane sweep method.\nGenerally, this algorithm can triangulate a planar subdivision in\nO(n log n) time using O(n) space.\n\n\\hypertarget{results}{%\n\\subsection{Results}\\label{results}}\n\n\\includegraphics[width=8cm,height=8cm]{img/TRItank.png}\\\\\n\nIn the above image, the red boundary represents the input given to the\nalgorithm and the randomly colored triangles represent the output given\nby the program.\n\nIt can be observed from the examples that an increase in the number of\nvertices affects the runtime of ear clipping triangulation more than\nthat of plane sweep monotone triangulation. From the results of the\nabove datasets, we can see that just an increase from about 50 to 70\nvertices causes the ear clipping algorithm to triple its triangulation\ntime.\n\n\\hypertarget{conclusion}{%\n\\subsection{Conclusion}\\label{conclusion}}\n\nFrom the above comparisons, we can see that the ear clipping algorithm,\ndespite being simple, is not recommended for applications where the\nnumber of vertices goes beyond about 70 (which almost always happens),\ndue to its O(n\\textsuperscript{2}) asymptotic complexity.\n\nIf we know that the input is a monotone polygon, it is even easier and\nfaster to triangulate it in just O(n). 
The main plane sweep\nalgorithm tries to use this to its advantage by first dividing the\npolygon into monotone pieces.\n", "meta": {"hexsha": "5ac7a04ce528a7269a68ba6951e0eca9594612c7", "size": 12360, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex/Triangulation.tex", "max_stars_repo_name": "RikilG/Geometry-Algorithms", "max_stars_repo_head_hexsha": "7bdf25e425b93dc6955331a48980a4b4d8051a6d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Latex/Triangulation.tex", "max_issues_repo_name": "RikilG/Geometry-Algorithms", "max_issues_repo_head_hexsha": "7bdf25e425b93dc6955331a48980a4b4d8051a6d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex/Triangulation.tex", "max_forks_repo_name": "RikilG/Geometry-Algorithms", "max_forks_repo_head_hexsha": "7bdf25e425b93dc6955331a48980a4b4d8051a6d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.9302325581, "max_line_length": 72, "alphanum_fraction": 0.7741909385, "num_tokens": 4039, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417500561683, "lm_q2_score": 0.8872045862611166, "lm_q1q2_score": 0.5841510261729065}}
{"text": "\\section{Question 1}\nSystem:\n$$\nG_{1_{(s)}} = \\dfrac{1}{(s+1)(2s+1)}\\exp(-s) = \\dfrac{1}{2s^2+3s+1}\n$$\nWe use Optimal PID to design controller with ITAE, ISE and IAE cont function.\n\\newpage\n \\begin{itemize}\n     \\item ITAE\n     $$\n     K_p = 1.6922, \\quad K_i = 0.5472, \\quad K_d = 1.3675\n     $$\n     \\begin{figure}[H]\n        \\caption{Step responde with PID controller and ITAE cost function}\n        \\centering\n        \\includegraphics[width=11cm]{../Figure/Q1/ITAE.png}\n    \\end{figure}\n    \\item ISE\n    $$\n    K_p =1.8957, \\quad K_i = 0.8007, \\quad  K_d =2.5416\n    $$\n    \\begin{figure}[H]\n       \\caption{Step responde with PID controller and ISE cost function}\n       \\centering\n       \\includegraphics[width=11cm]{../Figure/Q1/ISE.png}\n   \\end{figure}\n   \\item IAE\n   $$\n   K_p = 1.8356, \\quad K_i = 0.6085, \\quad K_d = 1.7386\n   $$\n   \\begin{figure}[H]\n      \\caption{Step responde with PID controller and IAE cost function}\n      \\centering\n      \\includegraphics[width=11cm]{../Figure/Q1/IAE.png}\n  \\end{figure}\n \\end{itemize}\n PID designed with ITAE cost function work better system is fast with lower overshoot.", "meta": {"hexsha": "570abe95c35ee2638ab453878c9a5c89d9ba6bbc", "size": 1126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW VII/Report/Q1/Q1.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW VII/Report/Q1/Q1.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW VII/Report/Q1/Q1.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.4324324324, "max_line_length": 86, "alphanum_fraction": 0.6234458259, "num_tokens": 404, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.868826769445233, "lm_q2_score": 0.6723317123102956, "lm_q1q2_score": 0.584139789602136}}
{"text": "\\section[Unverified implementation]{Unverified implementation\\implementation{TwoPassMerge.NoProofs}} \\label{sec:no-proofs}\nWe begin by implementing the described algorithms without any proof of their correctness. We define \\texttt{Heap} datatype as:\n\n\\begin{code}\ndata Heap : Set where\n  empty : Heap\n  node  : Priority \u2192 Rank \u2192 Heap \u2192 Heap \u2192 Heap\n\\end{code}\n\\noindent\nAccording to this definition a heap is either empty or it is a node with priority, rank and two subheaps. Both \\texttt{Priority} and \\texttt{Rank} are aliases to \\texttt{Nat}, which allows us to perform on them any operation that works on \\texttt{Nat} type. Note that storing rank in a node is redundant: we could just compute size of a heap whenever necessary. I choose to store rank in the constructor because it will be instructive to show how it is converted into inductive type family index (see Section \\ref{sec:rank-property}).\n\n\\subsection{Merging two heaps}\\label{sec:twopass-merge}\n\nHeaps \\texttt{h1} and \\texttt{h2} are merged using a recursive algorithm. We need to consider four cases:\n\n\\begin{enumerate}\n \\item (base case) \\texttt{h1} is empty: return \\texttt{h2}.\n \\item (base case) \\texttt{h2} is empty: return \\texttt{h1}.\n \\item (inductive case) priority \\texttt{p1} is higher than \\texttt{p2}: \\texttt{p1} becomes new root, \\texttt{l1} becomes its one child and \\texttt{r1}$\\oplus$\\texttt{h2} becomes the other.\n \\item (inductive case) priority \\texttt{p2} is higher than or equal to \\texttt{p1}: \\texttt{p2} becomes new root, \\texttt{l2} becomes its one child and \\texttt{r2}$\\oplus$\\texttt{h1} becomes the other.\n\\end{enumerate}\n\\noindent\nThere is no guarantee that \\textit{r1}$\\oplus$\\textit{h2} (or \\textit{r2}$\\oplus$\\textit{h1}) is smaller than \\textit{l1} (or \\textit{l2}). To ensure that rank invariant is maintained we use helper function \\texttt{makeT}, as proposed by Okasaki \\cite{Oka99}. We pass new children and the priority to \\texttt{makeT}, which creates a new node with the given priority and swaps the children if necessary (see \\Listing{lst:makeT-merge}). As Okasaki points out this algorithm can be view as having two passes: a top-down pass that performs merging and a bottom-up pass that restores the rank invariant.\n\n\\begin{listing}[htb!]\n\\begin{code}\nmakeT : Priority \u2192 Heap \u2192 Heap \u2192 Heap\nmakeT p l r with rank l \u2265 rank r\nmakeT p l r | true  = node p (suc (rank l + rank r)) l r\nmakeT p l r | false = node p (suc (rank l + rank r)) r l\n\nmerge : Heap \u2192 Heap \u2192 Heap\nmerge empty h2 = h2\nmerge h1 empty = h1\nmerge (node p1 h1-r l1 r1) (node p2 h2-r l2 r2)\n  with p1 < p2\nmerge (node p1 h1-r l1 r1) (node p2 h2-r l2 r2)\n  | true  = makeT p1 l1 (merge r1 (node p2 h2-r l2 r2))\nmerge (node p1 h1-r l1 r1) (node p2 h2-r l2 r2)\n  | false = makeT p2 l2 (merge (node p1 h1-r l1 r1) r2)\n\\end{code}\n\\caption{Implementation of \\texttt{makeT} and \\texttt{merge}. \\texttt{rank} returns rank of a tree.}\\label{lst:makeT-merge}\n\\end{listing}\n\n\\subsection{Inserting element into a heap}\n\nInsert is defined by merging with a singleton heap as described in Section~\\ref{sec:wblh}. 
See companion code for implementation.\n\n\\subsection{Finding and removing element with the highest priority}\n\nTo retrieve the element with the highest priority we return the value stored in the root of the heap:\n\n\\begin{code}\nfindMin : Heap \u2192 Priority\nfindMin empty          = \\hilight{\\{ \\}?}\nfindMin (node p _ _ _) = p\n\\end{code}\n\\noindent\nHere we encounter a problem: what should \\texttt{findMin} return for an empty heap? If we were using a language like Haskell or ML, one thing we could consider is raising an exception. This is the choice made by Okasaki in ``Purely Functional Data Structures''. But throwing an exception is precisely the thing that would make our implementation impure! Besides, Agda is a total language, which means that every function must terminate with a result. Raising an exception is therefore not an option. Another alternative is to assume a default priority that will be returned for an empty heap. This priority would have to be some distinguished natural number. $0$ represents the highest priority so it is unreasonable to use it as the default. We could return $\\infty$, which represents the lowest possible priority. This would require us to extend the definition of \\texttt{Nat} with $\\infty$, which in turn would force us to modify all functions that pattern match on values of the \\texttt{Nat} type. Redefining natural numbers for the sake of getting one function right also does not sound like a good option. Let's face it -- our types do not reflect the fact that the \\texttt{findMin} function is not defined for an empty heap! To solve this problem we need to be more specific about types. One solution would be to use the \\texttt{Maybe} datatype:\n\n\\begin{code}\ndata Maybe (A : Set) : Set where\n  nothing : Maybe A\n  just    : A \u2192 Maybe A\n\nfindMinM : Heap \u2192 Maybe Priority\nfindMinM empty          = nothing\nfindMinM (node p _ _ _) = just p\n\\end{code}\n\n\\noindent\nReturning \\texttt{nothing} is like saying ``no output exists for the given input data''. This allows us to express the fact that \\texttt{findMin} is not defined for some input values. This solution works but it forces every caller of \\texttt{findMinM} to inspect the result and be prepared for \\texttt{nothing}, which means extra boilerplate in the code and extra checks at run time. An implementation of \\texttt{deleteMin} based on the description in Section~\\ref{sec:wblh} faces the same problem.\n\nThe best solution to this issue is to ensure that \\texttt{findMin} and \\texttt{deleteMin} cannot be applied to an empty heap. We can achieve this by indexing \\texttt{Heap} with its size. Doing so will also allow us to prove the rank property.\n\n\\section[Proving rank property]{Proving rank property\\implementation{TwoPassMerge.RankProof}} \\label{sec:rank-property}\n\nWe will now prove that our implementation maintains the rank property. The first step is to express \\texttt{Rank} at the type level as an index of the \\texttt{Heap} datatype. Since the rank of a heap is now part of its type, we can ensure at compile time that the rank of the left subtree is not smaller than the rank of the right subtree. We do this by requiring that the \\texttt{node} constructor is given a proof that the rank invariant holds. 
To express such proof we use \\texttt{\u2265} datatype:\n\n\\begin{code}\ndata _\u2265_ : Nat \u2192 Nat \u2192 Set where\n  ge0 : \\{y : Nat\\} \u2192 y \u2265 zero\n  geS : \\{x y : Nat\\} \u2192 x \u2265 y \u2192 suc x \u2265 suc y\n\\end{code}\n\\noindent\nValues of this type, which is indexed by two natural numbers, prove that: a) any natural number is greater than or equal to \\texttt{0} (\\texttt{ge0} constructor); b) if two numbers are in greater-equal relation then their successors are also in that relation (\\texttt{geS} constructor). This type represents concept of data as evidence~\\cite{AltMcBMcK05}. We use \\texttt{order} function to compare two natural numbers and \\texttt{Order} datatype to express the result. Implementation is located in \\texttt{Basics.Ordering} module of the companion code.\n\nHaving defined \u2265 we can now give new definition of \\texttt{Heap}:\n\n\\begin{code}\ndata Heap : Rank \u2192 Set where\n  empty : Heap zero\n  node  : \\{l r : Rank\\} \u2192 Priority \u2192 l \u2265 r \u2192\n          Heap l \u2192 Heap r \u2192 Heap (suc (l + r))\n\\end{code}\n\n\\noindent\nEmpty heap contains no elements and so \\texttt{empty} returns \\texttt{Heap} indexed with \\texttt{0}. Non-empty node stores an element and two children of rank \\textit{l} and \\textit{r}. Therefore the size of the resulting heap is $1 + l + r$, which we express as $\\suc(l + r)$. We must also supply a value of type \\texttt{l \u2265 r} to the constructor, ie. we must provide evidence that rank invariant holds.\n\nProving the rank invariant itself is surprisingly simple. We can obtain evidence that rank of left subtree is not smaller than rank of right subtree by replacing \\texttt{\u2265} in \\texttt{makeT} with \\texttt{order}, which compares two \\texttt{Nat}s and supplies evidence of the result. But there is another difficulty here. Recall that the merging algorithm is two pass: we use \\texttt{merge} to actually do the merging and \\texttt{makeT} to restore the rank invariant if necessary. Since we index heaps by their rank we now require that \\texttt{makeT} and \\texttt{merge} construct trees of correct rank. We must therefore prove that:\\linebreak a) \\texttt{makeT} creates a node with rank equal to sum of children nodes' ranks plus one;\\linebreak b) \\texttt{merge} creates a heap with rank equal to the sum of ranks of heaps being merged.\n\n\\subsection{Proving makeT}\\label{sec:twopass-makeT-proof}\n\n\\begin{listing}[b!]\n\\begin{code}\nmakeT-lemma : (a b : Nat) \u2192 suc (a + b) \u2261 suc (b + a)\nmakeT-lemma a b = cong suc (+comm a b)\n\nmakeT : \\{l r : Rank\\} \u2192 Priority \u2192 Heap l \u2192 Heap r \u2192 Heap (suc (l + r))\nmakeT \\{l-rank\\} \\{r-rank\\} p l r with order l-rank r-rank\nmakeT \\{l-rank\\} \\{r-rank\\} p l r | ge l\u2265r\n  = node p l\u2265r l r\nmakeT \\{l-rank\\} \\{r-rank\\} p l r | le r\u2265l\n  = subst Heap (makeT-lemma r-rank l-rank) (node p r\u2265l r l)\n\\end{code}\n\\caption{Implementation of \\texttt{makeT} with verified rank property.}\\label{lst:rank-proof-makeT-two-pass}\n\\end{listing}\n\\noindent\n\\texttt{makeT} takes subtrees of rank \\textit{l} and \\textit{r} and produces a new tree with rank $\\suc(l + r)$, where $\\suc$ follows from the fact that the node itself is storing one element. 
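For example, children of ranks $2$ and $1$ must give a node of rank\n$\\suc(2 + 1) = 4$, and the same rank must result when the children are\nswapped -- which is exactly where the $\\suc (a + b) \u2261 \\suc (b + a)$\nobligation below comes from. 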
We must prove that each of two cases of \\texttt{makeT} returns heap of correct rank:\n\n\\begin{enumerate}\n \\item If \\textit{l} is greater than or equal to \\textit{r} then no extra proof is necessary as everything follows from the definition of + and type signature of \\texttt{node}.\n \\item If \\textit{r} is greater than or equal to \\textit{l} then we must swap \\texttt{l} and \\texttt{r} subtrees. This requires us to prove that:\n\n\\begin{equation*}\nsuc (r + l) \u2261 suc (l + r)\n\\end{equation*}\n\nThat proof is done using congruence on $\\suc$ function and commutativity of addition. We will define that proof as \\texttt{makeT-lemma}.\n\\end{enumerate}\n\\noindent\n\\Listing{lst:rank-proof-makeT-two-pass} shows new code of \\texttt{makeT}. Notice how \\texttt{subst} applies the proof to the \\texttt{Heap} type constructor and converts the type produced by \\texttt{(node p r\u2265l r l)} expression into the type given in \\texttt{makeT} type signature.\n\n\\subsection{Proving merge}\n\nWe now verify that all four cases of \\texttt{merge} shown in \\Listing{lst:makeT-merge} produce heap of required rank.\n\n\\subsubsection{base cases}\n\nIn the first base case we have $h1 \u2261 0$. Therefore:\n\n\\begin{equation*}\nh1 + h2 \u2261 0 + h2 \\stackrel{+, (1)}{\u2261} h2\n\\end{equation*}\n\\noindent\nWhich ends the first proof -- everything follows from definition of $+$\\footnote{The $\\stackrel{+, (1)}{\u2261}$ notation means that equality follows from the first defining equation of $+$.}. In the second base case $h2 \u2261 0$ and things are slightly more difficult: the definition of $+$ only says that $0$ is the left identity but it does not say that it is also the right identity. We must therefore construct a proof that:\n\n\\begin{equation*}\nh1 + 0 \\stackrel{?}{\u2261} h1\n\\end{equation*}\n\\noindent\nLuckily for us, we already have that proof defined in the \\texttt{Basics.Reasoning} module as \\texttt{+0}. Since that proof is in the opposite direction -- it proves $a \u2261 a + 0$, not $a + 0 \u2261 a$ -- we have to use symmetry of $\u2261$ .\n\n\\subsubsection{inductive cases}\\label{sec:twopass-merge-inductive}\n\nIn an inductive case we know that neither \\texttt{h1} nor \\texttt{h2} is empty, ie. their ranks are given as $\\suc (l1 + r1)$ and $\\suc (l2 + r2)$ respectively. This means that Agda sees expected rank of the merged heap as:\n\n\\begin{equation*}\n\\suc (l1 + r1) + \\suc (l2 + r2) \\stackrel{+, (2)}{\u2261} \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\\noindent\nThis will be our goal in both proofs of inductive cases.\n\nIn the first inductive case we construct the result by calling\\footnote{Note that \\texttt{node} constructor in the unverified implementation show in \\Listing{lst:makeT-merge} takes slightly different parameters. This is because we changed the definition of \\texttt{Heap} datatype to take the proof of rank property instead of storing the rank in the constructor.}:\n\n\\begin{code}\nmakeT p1 l1 (merge r1 (node p2 l2\u2265r2 l2 r2))\n\\end{code}\n\\noindent\nCall to \\texttt{node} with \\texttt{l2} and \\texttt{r2} as parameters produces node of rank $\\suc(l2 + r2)$. Passing it to \\texttt{merge} together with \\texttt{r1} gives a tree of rank $r1 + \\suc(l2 + r2)$ (by the type signature of \\texttt{merge}). Passing result of \\texttt{merge} to \\texttt{makeT} produces tree of rank $\\suc (l1 + (r1 + \\suc (l2 + r2)))$ by the type signature of \\texttt{makeT}. 
We must therefore construct a proof that:\n\n\\begin{equation*}\n\\suc (l1 + (r1 + \\suc (l2 + r2))) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\\noindent\nAppealing to congruence on $\\suc$ leaves us with a proof of:\n\n\\begin{equation*}\nl1 + (r1 + \\suc (l2 + r2)) \u2261 (l1 + r1) + \\suc (l2 + r2)\n\\end{equation*}\n\\noindent\nSubstituting $a = l1$, $b = r1$ and $c = \\suc (l2 + r2)$ gives:\n\n\\begin{equation*}\na + (b + c) \u2261 (a + b) + c\n\\end{equation*}\n\\noindent\nThis is associativity of addition that we have already proved in \\texttt{Basics.Reasoning}.\n\nThe proof of second inductive case is much more interesting. This time we construct the result by calling:\n\n\\begin{code}\nmakeT p2 l2 (merge r2 (node p1 l1\u2265r1 l1 r1))\n\\end{code}\n\\noindent\nand therefore have to prove:\n\n\\begin{equation*}\n\\suc (l2 + (r2 + \\suc (l1 + r1))) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\\noindent\nAgain we use congruence to deal with the outer calls to $\\suc$ and substitute $a = l2$, $b = r2$ and $c = l1 + r1$. This leaves us with a proof of lemma A:\n\n\\begin{equation*}\na + (b + \\suc c) \u2261 c + \\suc (a + b)\n\\end{equation*}\n\\noindent\nFrom associativity we know that:\n\n\\begin{equation*}\na + (b + \\suc c) \u2261 (a + b) + \\suc c\n\\end{equation*}\n\\noindent\nIf we prove lemma B:\n\n\\begin{equation*}\n(a + b) + \\suc c \u2261 c + \\suc (a + b)\n\\end{equation*}\n\\noindent\nthen we can combine lemmas A and B using transitivity to get the final proof. We substitute $n = a + b$, $m = c$ and rewrite lemma B as:\n\n\\begin{equation*}\nn + \\suc m \u2261 m + \\suc n\n\\end{equation*}\n\\noindent\nFrom symmetry of \\texttt{+suc} we know that:\n\n\\begin{equation*}\nn + \\suc m \u2261 \\suc (n + m)\n\\end{equation*}\n\\noindent\nUsing transitivity we combine it with congruence on commutativity of addition to prove:\n\n\\begin{equation*}\n\\suc (n + m) \u2261 \\suc (m + n)\n\\end{equation*}\n\\noindent\nAgain, using transitivity we combine it with \\texttt{+suc} to show:\n\n\\begin{equation*}\n\\suc (m + n) \u2261 m + \\suc n\n\\end{equation*}\n\\noindent\nWhich proves lemma B and therefore the whole proof is complete (\\Listing{lst:twopass-merge-2nd-proof}, see companion code for complete implementation).\n\n\\begin{listing}[thb!]\n\\begin{code}\nlemma-B : (n m : Nat) \u2192 n + suc m \u2261 m + suc n\nlemma-B n m = trans (sym (+suc n m)) (trans (cong suc (+comm n m)) (+suc m n))\n\nlemma-A : (a b c : Nat) \u2192 a + (b + suc c) \u2261 c + suc (a + b)\nlemma-A a b c = trans (+assoc a b (suc c)) (lemma-B (a + b) c)\n\nproof-2 : (l1 r1 l2 r2 : Nat) \u2192 suc (l2 + (r2  + suc (l1 + r1)))\n                              \u2261 suc ((l1 + r1) + suc (l2 + r2))\nproof-2 l1 r1 l2 r2 = cong suc (lemma-A l2 r2 (l1 + r1))\n\\end{code}\n\\caption{Proof of second inductive case of \\texttt{merge}.}\\label{lst:twopass-merge-2nd-proof}\n\\end{listing}\n\n\\subsection{insert}\n\nInserting new element into the heap increases its rank by one. Now that rank is encoded as a datatype index this fact must be reflected in the type signature of \\texttt{insert}. As previously we define \\texttt{insert} as \\texttt{merge} with a singleton heap. Rank of singleton heap is 1 (ie. \\texttt{suc zero}), while already existing heap has rank n. 
According to definition of merge the resulting heap will therefore have rank:\n\n\\begin{equation*}\n(\\suc \\zero) + n \\stackrel{+, (2)}{\u2261} \\suc (\\zero + n) \\stackrel{+, (1)}{\u2261} \\suc n\n\\end{equation*}\n\\noindent\nWhich is the size we require in the type signature of \\texttt{insert}. This means we don't need any additional proof because expected result follows from definition.\n\n\\subsection{findMin, deleteMin}\n\nEncoding rank at the type level allows us to write total versions of \\texttt{findMin} and \\texttt{deleteMin}. By requiring that input \\texttt{Heap} has rank \\texttt{suc n} we exclude the possibility of passing empty heap to any of these functions.\n\n\\section{Constructing equality proofs using transitivity}\\label{sec:eq-proofs-using-trans}\n\nNow that we have conducted an inductive proof of \\texttt{merge} in Section \\ref{sec:twopass-merge-inductive} we can focus on a general technique used in that proof. Let us rewrite \\texttt{proof-2} in a different way to see closely what is happening at each step. Inlining lemmas A and B into \\texttt{proof-2} gives:\n\n\\begin{code}\nproof-2i : (l1 r1 l2 r2 : Nat) \u2192 suc (l2 + (r2  + suc (l1 + r1)))\n                               \u2261 suc ((l1 + r1) + suc (l2 + r2))\nproof-2i l1 r1 l2 r2 =\n  cong suc (trans (+assoc l2 r2 (suc (l1 + r1)))\n           (trans (sym (+suc (l2 + r2) (l1 + r1)))\n           (trans (cong suc (+comm (l2 + r2) (l1 + r1)))\n                  (+suc (l1 + r1) (l2 + r2))))\n\\end{code}\n\\noindent\nWe see that \\texttt{proof-2} is structured around proofs of elementary properties combined using transitivity. In general, if we have to prove $a \u2261 e$ and we can prove $a \u2261 b$ using $\\prof 1$, $b \u2261 c$ using $\\prof 2$, $c \u2261 d$ using $\\prof 3$, $d \u2261 e$ using $\\prof 4$ then we can combine these proofs to get the final proof of $a \u2261 e$:\n\n\\begin{equation*}\n\\trans\\, (\\prof 1)\\, (\\trans\\, (\\prof 2)\\, (\\trans\\, (\\prof 3)\\, (\\prof 4)))\n\\end{equation*}\n\\noindent\nWhile simple to use, combining proofs using transitivity can be hard to comprehend. The intermediate terms are hidden from us and we have to reconstruct them every time we read our proof. 
Let us then replace usage of transitivity with the following notation, which explicitly shows intermediate proof steps together with their proofs:\n\n\\begin{align*}\na&\\;{\u2261}\\langle \\prof 1 \\rangle\\\\\nb&\\;{\u2261}\\langle \\prof 2 \\rangle\\\\\nc&\\;{\u2261}\\langle \\prof 3 \\rangle\\\\\nd&\\;{\u2261}\\langle \\prof 4 \\rangle\\\\\ne&\n\\end{align*}\n\\noindent\nRewriting \\texttt{proof-2i} in this notation gives:\n\n\\begin{align*}\n                                \\suc (l2 + (r2 + \\suc (l1 + r1)))&\\;{\u2261}\\langle \\congr\\;\\suc \\rangle\\\\\n{\\color{gray} \\suc(} l2 + (r2 + \\suc (l1 + r1))  {\\color{gray})} &\\;{\u2261}\\langle\\assoc\\;l2\\;r2\\;(\\suc (l1 + r1))\\rangle\\\\\n{\\color{gray} \\suc(} (l2 + r2) + \\suc (l1 + r1)  {\\color{gray})} &\\;{\u2261}\\langle \\sym (\\Psuc\\;(l2 + r2)\\;(l1 + r1))\\rangle\\\\\n{\\color{gray} \\suc(} \\suc ((l2 + r2) + (l1 + r1)){\\color{gray})} &\\;{\u2261}\\langle \\congr\\;\\suc\\;(\\comm\\;(l2 + r2)\\;(l1 + r1)) \\rangle\\\\\n{\\color{gray} \\suc(} \\suc ((l1 + r1) + (l2 + r2)){\\color{gray})} &\\;{\u2261}\\langle\\Psuc\\;(l1 + r1)\\;(l2 + r2) \\rangle\\\\\n{\\color{gray} \\suc(} (l1 + r1) + \\suc (l2 + r2)  {\\color{gray})} &\n\\end{align*}\n\n\\noindent\nGrey ${\\color{gray}\\suc}$ denotes that everything happens under call to \\texttt{suc} (thanks to using congruence on $\\suc$ as the first proof). Comparing this notation to \\texttt{proof-2i} on the previous page shows that proofs in angle brackets correspond to proofs combined using $\\trans$, while series of expressions left of $\u2261$ parallels our reasoning from Section \\ref{sec:twopass-merge-inductive}.\n\n\\section[Proving rank property for single pass merge by composing existing proofs]{Proving rank property for single pass merge by composing existing proofs\\implementation{SinglePassMerge.RankProof}} \\label{sec:single-pass-merge-proof-by-comp}\n\nAs mentioned in Section \\ref{sec:twopass-merge} \\texttt{merge} can be viewed as consisting of two passes. We can obtain a single pass version of the algorithm by inlining calls to \\texttt{makeT} into \\texttt{merge}. 
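In the same ad hoc\nPython notation as before (again only an unverified sketch, reusing\n\\texttt{Node} and \\texttt{rank} from the earlier listing, not the Agda code),\nthe inlined version might look like this:\n\n\\begin{verbatim}\ndef merge_sp(h1, h2):\n    # Single-pass merge: children are ordered at the same moment the\n    # new node is built, so no separate make_t pass remains.\n    if h1 is None: return h2\n    if h2 is None: return h1\n    if h1.priority < h2.priority:\n        m, l, p = merge_sp(h1.right, h2), h1.left, h1.priority\n    else:\n        m, l, p = merge_sp(h1, h2.right), h2.left, h2.priority\n    if rank(l) >= rank(m):\n        return Node(p, 1 + rank(l) + rank(m), l, m)\n    return Node(p, 1 + rank(l) + rank(m), m, l)\n\\end{verbatim}\n\n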
This new algorithm has two base cases (as previously) and four inductive cases:\n\n\\begin{enumerate}\n \\item (base case) \\texttt{h1} is empty: return \\texttt{h2}.\n \\item (base case) \\texttt{h2} is empty: return \\texttt{h1}.\n \\item (1st inductive case) priority \\texttt{p1} is higher than \\texttt{p2} and \\textit{l1} is not smaller than  \\textit{r1}$\\oplus$\\textit{h2}: \\texttt{p1} becomes new root, \\texttt{l1} becomes the left child and \\texttt{r1}$\\oplus$\\texttt{h2} becomes the right child.\n \\item (2nd inductive case) priority \\texttt{p1} is higher than \\texttt{p2} and \\textit{r1}$\\oplus$\\textit{h2} is larger than \\textit{l1}: \\texttt{p1} becomes new root, \\texttt{r1}$\\oplus$\\texttt{h2} becomes the left child and \\texttt{l1} becomes the right child.\n \\item (3rd inductive case) priority \\texttt{p2} is higher than or equal to \\texttt{p1} and \\textit{l2} is not smaller than  \\textit{r2}$\\oplus$\\textit{h1}: \\texttt{p2} becomes new root, \\texttt{l2} becomes the left child and \\texttt{r2}$\\oplus$\\textit{h1} becomes the right child.\n \\item (4th inductive case) priority \\texttt{p2} is higher than or equal to \\texttt{p1} and \\textit{r2}$\\oplus$\\textit{h1} is larger than  \\textit{l2}: \\texttt{p2} becomes new root, \\texttt{r2}$\\oplus$\\textit{h1} becomes the left child and \\texttt{l2} becomes the right child.\n\\end{enumerate}\n\nNow that we have inlined \\texttt{makeT} we must construct proofs for the new \\texttt{merge}. Note that previously we made calls to \\texttt{makeT} only in inductive cases. This means that the implementation of the base cases remains unchanged, and so do the proofs. Let us take a closer look at the proofs we need to supply for the inductive cases:\n\n\\begin{itemize}\n \\item (1st inductive case): call to \\texttt{makeT} would not swap left and right children when creating a node from parameters passed to it. We must prove:\n\n\\begin{equation*}\n\\suc (l1 + (r1 + \\suc (l2 + r2))) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\n\\item (2nd inductive case): call to \\texttt{makeT} would swap left and right children when creating a node from parameters passed to it. We must prove:\n\n\\begin{equation*}\n\\suc ((r1 + \\suc (l2 + r2)) + l1) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\n\\item (3rd inductive case): call to \\texttt{makeT} would not swap left and right children when creating a node from parameters passed to it. We must prove:\n\n\\begin{equation*}\n\\suc (l2 + (r2  + \\suc (l1 + r1))) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\n\\item (4th inductive case): call to \\texttt{makeT} would swap left and right children when creating a node from parameters passed to it. We must prove:\n\n\\begin{equation*}\n\\suc ((r2 + \\suc (l1 + r1)) + l2) \u2261 \\suc ((l1 + r1) + \\suc (l2 + r2))\n\\end{equation*}\n\\end{itemize}\n\nFirst thing to note is that inductive cases 1 and 3 require us to supply the same proofs as the ones we gave for inductive cases in two-pass merge. This means we can reuse old proofs. What about cases 2 and 4? One thing we could do is construct proofs of these properties from scratch using the technique described in Section~\\ref{sec:eq-proofs-using-trans}. This is left as an exercise to the reader. Here we will proceed in a different way.\n\nNotice that the properties we have to prove in cases 2 and 4 are very similar to properties 1 and 3. 
The only difference between 1 and 2 and between 3 and 4 is the order of parameters inside the outer $\\suc$ on the left-hand side of the equality. This is expected: in cases 2 and 4 we swap the left and right subtrees passed to \\texttt{node}, and this is directly reflected in the types. Now, if we could prove that:\n\n\\begin{equation*}\n\\suc ((r1 + \\suc (l2 + r2)) + l1) \u2261 \\suc (l1 + (r1 + \\suc (l2 + r2)))\n\\end{equation*}\n\\noindent\nand\n\\begin{equation*}\n\\suc ((r2 + \\suc (l1 + r1)) + l2) \u2261 \\suc (l2 + (r2 + \\suc (l1 + r1)))\n\\end{equation*}\n\\noindent\nthen we could use transitivity to combine these proofs with the proofs of inductive cases 1 and 3. If we abstract the parameters in the above equalities we see that the property we need to prove in both cases is:\n\n\\begin{equation*}\n\\suc (a + b) \u2261 \\suc (b + a)\n\\end{equation*}\n\\noindent\nAnd that happens to be \\texttt{makeT-lemma} from Section~\\ref{sec:twopass-makeT-proof}! In the notation introduced earlier, the proof for the 2nd inductive case is therefore the composition\n\n\\begin{align*}\n\\suc ((r1 + \\suc (l2 + r2)) + l1)&\\;{\u2261}\\langle \\texttt{makeT-lemma} \\rangle\\\\\n\\suc (l1 + (r1 + \\suc (l2 + r2)))&\\;{\u2261}\\langle \\textrm{proof of the 1st case} \\rangle\\\\\n\\suc ((l1 + r1) + \\suc (l2 + r2))&\n\\end{align*}\n\\noindent\nand the 4th case is analogous. The new version of \\texttt{merge} was created by inlining calls to \\texttt{makeT}, and now it turns out we can construct proofs of that implementation by composing the proofs of \\texttt{makeT} and \\texttt{merge} using transitivity. This is exactly the same technique that we developed in Section~\\ref{sec:eq-proofs-using-trans}, only this time it is used on a slightly larger scale. It leads to a very elegant solution presented in module \\texttt{SinglePassMerge.}\\texttt{RankProof} of the companion code.\n\n\\section[Proving priority property]{Proving priority property\\implementation{TwoPassMerge.PriorityProof}} \\label{sec:priority-invariant}\n\nTo prove the priority property I will index \\texttt{Heap} with \\texttt{Priority} and use the technique demonstrated by Altenkirch, McBride and McKinna in Section 5.2 of ``Why Dependent Types Matter'' \\cite{AltMcBMcK05}\\footnote{To keep things simple let's forget about the rank proof we conducted earlier -- in this section we once again store rank explicitly in the \\texttt{node} constructor.}. An index of value \\texttt{n} says that ``this heap can store elements with priorities \\texttt{n} or lower''. In other words, a \\texttt{Heap} indexed with 0 can store any priority, while a \\texttt{Heap} indexed with 3 can store priorities 3, 4 and lower, but cannot store 0, 1 or 2. The new definition of \\texttt{Heap} looks like this\\footnote{The actual implementation in the companion code is slightly different. It uses sized types \\cite{Abe08} to guide the termination checker in the \\texttt{merge} function. This issue is orthogonal to the conducted proofs, hence I avoid sized types here for the sake of simplicity.}:\n\n\\begin{code}\ndata Heap : Priority \u2192 Set where\n  empty : \\{n : Priority\\} \u2192 Heap n\n  node  : \\{n : Priority\\} \u2192 (p : Priority) \u2192 Rank \u2192 p \u2265 n \u2192\n          Heap p \u2192 Heap p \u2192 Heap n\n\\end{code}\n\\noindent\nAs always, \\texttt{Heap} has two constructors. The \\texttt{empty} constructor returns \\texttt{Heap n}, where the index \\texttt{n} is not constrained in any way. This means that an empty heap can be given any restriction on the priorities of stored elements. The \\texttt{node} constructor also creates \\texttt{Heap n} but this time \\texttt{n} is constrained. If we store priority \\texttt{p} in a node, then:\n\n\\begin{enumerate}\n \\item the resulting heap can only be restricted to store priorities at least as high as \\texttt{p}. 
For example, if we create a node that stores priority 3, we cannot restrict the resulting heap to store priorities 4 and lower, because the fact that we store 3 in that node violates the restriction. This restriction is expressed by the \\texttt{p \u2265 n} parameter: if we can construct a value of type \\texttt{p \u2265 n}, then it serves as a proof that priority \\texttt{p} is lower than or equal to \\texttt{n}.\n \\item the children of a node can only be restricted to store priorities that are not higher than \\texttt{p}. Example: if we restrict a node to store priorities 4 and lower, we cannot restrict its children to store priorities 3 or higher. This restriction is expressed by the index \\texttt{p} of the subheaps passed to the \\texttt{node} constructor.\n\\end{enumerate}\n\nAltenkirch, McBride and McKinna \\cite{AltMcBMcK05} used this technique to prove the correctness of merge sort for lists. In a weight biased leftist heap every path from the root to a leaf is a sorted list, so extending their technique to the heap case is straightforward. I elide discussion of \\texttt{merge} as I offer nothing new compared to Altenkirch's paper. I instead focus on the issue of creating singleton heaps and inserting elements into a heap, as these cases now become interesting.\n\nWhen creating a singleton heap we have to answer a question: ``what priorities can we later store in the singleton heap that we just created?'' ``Any'' seems to be a reasonable answer, which means the resulting heap will be indexed with 0, meaning: ``priorities 0 or lower -- i.e. any priorities -- can be stored in this heap''. With such a liberal definition of a singleton heap it is easy to write the definition of \\texttt{insert} by requiring that both the input and output heaps can store any priorities:\n\n\\begin{code}\nsingleton : (p : Priority) \u2192 Heap zero\nsingleton p = node p (suc zero) ge0 empty empty\n\ninsert : Priority \u2192 Heap zero \u2192 Heap zero\ninsert p h = merge (singleton p) h\n\\end{code}\n\nBut what if we want to insert into a heap that is not indexed with 0? One solution is to be liberal and ``promote'' that heap so that after insertion it can store elements with any priorities. The priority restriction can always be loosened but it cannot be tightened easily. However, such a liberal approach might not always be satisfactory. We might actually want to keep priority bounds as tight as possible. Let us explore that possibility.\n\nWe begin by rewriting the \\texttt{singleton} function:\n\n\\begin{code}\nsingletonB' : \\{b : Priority\\} \u2192 (p : Priority) \u2192 p \u2265 b \u2192 Heap b\nsingletonB' p p\u2265b = node p one p\u2265b empty empty\n\\end{code}\n\\noindent\nNow \\texttt{singletonB'} allows us to construct a heap containing a single element with priority \\texttt{p}, but the whole heap is bounded by some \\texttt{b}. To construct such a heap we must supply a proof that \\texttt{p} can actually be stored in \\texttt{Heap b}. We can now implement a new insertion function:\n\n\\begin{small}\n\\begin{code}\ninsertB' : \\{b : Priority\\} \u2192 (p : Priority) \u2192 p \u2265 b \u2192 Heap p \u2192 Heap b\ninsertB' p p\u2265b h = merge (singletonB' p p\u2265b) (liftBound p\u2265b h)\n\\end{code}\n\\end{small}\n\\noindent\nwhere \\texttt{liftBound} is a function that loosens the priority bound of a heap given evidence that it is possible to do so (i.e. that the new bound is less restrictive than the old one). 
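A minimal sketch of \\texttt{liftBound} (assuming a transitivity lemma \\texttt{trans\u2265} for \u2265, a name introduced here only for illustration) needs to weaken just the proof stored at the root, because the children are indexed by the priority of the root rather than by the outer bound:\n\n\\begin{code}\nliftBound : \\{b p : Priority\\} \u2192 p \u2265 b \u2192 Heap p \u2192 Heap b\nliftBound p\u2265b empty = empty\nliftBound p\u2265b (node p' rk p'\u2265p l r) =\n  node p' rk (trans\u2265 p'\u2265p p\u2265b) l r\n\\end{code}\n\\noindent\n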
But if we try to construct a heap using \\texttt{insertB'} we quickly discover that it is useless:\n\n\\begin{code}\nexample-heap : Heap zero\nexample-heap = (insertB' (suc zero) ge0\n               (insertB' zero \\hilight{\\{ \\}?} empty))\n\\end{code}\n\\noindent\nIn the second call to \\texttt{insertB'} we are required to supply a proof that $0 \\ge 1$, which, of course, is not possible. The problem is that using the new \\texttt{insertB'} function we can only lower the bound on the heap and thus insert the elements into the heap in decreasing order:\n\n\\begin{code}\nexample-heap : Heap zero\nexample-heap = (insertB' zero ge0\n               (insertB' (suc zero) ge0 empty))\n\\end{code}\n\\noindent\nThis is a direct consequence of our requirement that the heap we are inserting into is restricted exactly by the priority of the element we are inserting.\n\nThe bottom line is: one has to carefully consider the implications of a proof on the design of functions that manipulate the data structure.", "meta": {"hexsha": "51961c911e98dce2f5d9a85aa7b3a20a065fbae5", "size": 30397, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/3-proofs.tex", "max_stars_repo_name": "jstolarek/dep-typed-wbl-heaps", "max_stars_repo_head_hexsha": "57db566cb840dc70331c29eb7bf3a0c849f8b27e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-05-02T21:48:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-05-02T21:48:43.000Z", "max_issues_repo_path": "paper/3-proofs.tex", "max_issues_repo_name": "jstolarek/dep-typed-wbl-heaps", "max_issues_repo_head_hexsha": "57db566cb840dc70331c29eb7bf3a0c849f8b27e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/3-proofs.tex", "max_forks_repo_name": "jstolarek/dep-typed-wbl-heaps", "max_forks_repo_head_hexsha": "57db566cb840dc70331c29eb7bf3a0c849f8b27e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.0022371365, "max_line_length": 1328, "alphanum_fraction": 0.7237227358, "num_tokens": 9160, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8128673269042765, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5841219079162362}}
{"text": "% !TEX encoding = UTF-8 Unicode\n% !TEX spellcheck = en-US\n\\documentclass[\n    aps,\n    prl,\n    showkeys,\n    nofootinbib,\n    %twocolumn,\n    floatfix\n]{revtex4-1}\n\n\\usepackage[utf8]{inputenc} % UTF-8\n\\usepackage{braket}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\usepackage{graphicx}\n\n\\renewcommand{\\vec}[1]{\\boldsymbol{#1}}\n\n\n\\newcommand{\\fm}{\\,\\mathrm{fm}}\n\\newcommand{\\ifm}{\\,\\mathrm{fm}^{-1}}\n\\newcommand{\\MeV}{\\,\\mathrm{MeV}}\n\n\\begin{document}\n\n\\title{Discretization}\n\n\\author{C.~K\u00f6rber}\n\n\\date{\\today}\n\n\\begin{abstract}%\n\tHow does the contact interaction depend on discretization effects?\n\\end{abstract}\n\n\\maketitle\n\n\\section{No discretization}\n\\subsection{1-d}\n\\subsubsection{Positive energies}\n\nThe momentum space Schr\u00f6dinger equation\n\\begin{equation}\n\t\\frac{p^2}{2 \\mu} \\psi(p) + \\int \\frac{d p}{2\\pi} \\braket{p | \\hat V | p'} \\psi(p') = E \\psi(p')\n\\end{equation}\nfor contact interactions this becomes\n\\begin{equation}\n\t\\frac{p^2}{2 \\mu} \\psi(p) +  c I_0 = E \\psi(p') \\, , \\qquad I_0 = \\int \\frac{d p'}{2\\pi} \\psi(p') \\, .\n\\end{equation}\nThus\n\\begin{equation}\n\t\\psi(p) = \\frac{c I_0}{E - \\frac{p^2}{2\\mu}}\n\\end{equation}\nFor scattering solutions we need $i\\epsilon$ prescription\n\\begin{align}\n\tI_0 &= \\int \\frac{d p}{2\\pi}  \\frac{c I_0}{E - \\frac{p^2}{2\\mu} + i \\epsilon} \\\\\n\t&= I_0 \\times \\frac{\\mu c }{\\pi} \\int\\limits_{ - \\infty}^{\\infty} \\frac{d p}{\\gamma^2 - {p^2} + i \\tilde\\epsilon} \\, , \\quad \\gamma^2 = 2 \\mu E > 0\\,.\n\\end{align}\nThe remaining integral can be solved by the residual theorem: the zeros of the denominator are\n\\begin{align}\n\t\\gamma^2 - {p^2} + i \\tilde\\epsilon &= (\\gamma + i \\tilde\\epsilon - p)(\\gamma + i \\tilde\\epsilon + p) \\, .\n\\end{align}\nNote that the zeros are independent on the sign of $\\epsilon$ (one root in upper and one root in lower plane).\nCompleting the integral in the upper plane selects the residual $p_0 = \\gamma + i \\tilde \\epsilon$ and thus\n\\begin{equation}\n\t\\int\\limits_{ - \\infty}^{\\infty} \\frac{d p}{\\gamma^2 - {p^2} + i \\tilde\\epsilon}\n\t=\n\t \\frac{2 \\pi i}{2 \\gamma} \\, .\n\\end{equation}\nand therefore\n\\begin{equation}\n\t1 = \\frac{\\mu c}{\\pi} \\times \\frac{\\pi i}{\\gamma} \\, ,\n\\end{equation}\nor\n\\begin{equation}\n\t\\gamma = i \\mu c \\, \\Rightarrow  E = - \\frac{\\mu c^2}{2}\n\\end{equation}\nIn other words, you do not get positive energies for $c \\in \\mathbb{R}$.\n\nIt seems like, you only get positive energies once you are in a box...\n\n\\clearpage\n\\section{Understanding the unitary limit}\n\n\nFrom my understanding, the unitary limit appears when the total cross section is independent of microscopic parameters.\nThe cross section is given by\n\\begin{equation}\n\t\\frac{d \\sigma}{d \\Omega} = | f(\\Omega, p) | ^ 2 \\, ,\n\\end{equation}\nwhere\n\\begin{align}\n\tf(\\Omega, p) &= \\sum_{l=0}^{\\infty} f_l(\\Omega, p) \\, , \\\\ \n\tf_l(\\Omega, p) &\\sim g_l(\\Omega) \\times \\left[\\exp( 2 i \\delta_l (p)) - 1\\right]\n\\end{align}\nThis is generally true for all $d = 1, 2, 3$ cases modulo not important factors.\n\nIn our system, for a simple contact interaction of strength $c_0$, we only find contributions to $l=0$.\nRegarding your eq. 
(7), one finds that\n\\begin{equation}\n\t\\cot \\delta_0 (p) = \\begin{cases}\n\t\ta_0 p &, \\, d = 1 \\\\\n\t\t2 / \\pi \\log(a_0 p) &, \\, d = 2 \\\\\n\t\t- 1/(a_0 p) &, \\, d = 3\n\t\\end{cases}\n\\end{equation}\n\nIn other words, we are at the unitary limit if $\\cot \\delta_0$ is independent of $a_0$.\nThis can only happen if $a_0 \\rightarrow \\infty$ or $a_0 \\rightarrow \\pm 0$ because $a_0$ must be independent of $p$.\n\n\\textbf{Does this make sense?}\n\nTo discriminate between infinite volume and lattice, I will label the momentum points which are extracted from the lattice (as in continuum but finite volume) as $\\gamma_i$.\nHere, $\\gamma_i^2 = 2 \\mu E_i$, $\\mu$ being the reduced two-particle mass and $E_i$ being the lattice energy spectrum.\n\nUsing your equations (35-37) and combining this with your eq. (7), one finds\n\\begin{equation}\\label{def:a0}\n\ta_0 = \\begin{cases}\n\t\t\\frac{L}{2\\pi^2}S_1(x_i) & \\, , d =1 \\\\\n\t\t\\frac{L}{2\\pi} \\exp \\left\\{ S_2(x_i) / (2\\pi) \\right\\} & \\, , d =2 \\\\\n\t\t\\frac{ \\pi L}{ S_3(x_i) } & \\, , d =3\n\t\\end{cases}\n\t\\qquad \\forall_i \\, ,\n\\end{equation}\nwith $S_j(x_i)$ being the L\u00fcscher Zeta function in $j$ dimensions and $ x_i = \\mu E_i L ^2 / (2 \\pi^2) $.\nIn other words, if your computation does the right thing, you fix your contact interaction strength $c_0$, which will give you a set of energy levels $E_i(c_0)$.\nFor all of the eigenstates $E_i$, all points $S_j(x_i)$ lie on a line, because $a_0$ is independent of the energy level and eq.~\\eqref{def:a0} tells us that there is a one-to-one relation between $S_j(x_i)$ and $a_0$.\n\nSo in which cases do we find that $a_0 \\to 0, \\pm \\infty$?\n\n\\begin{itemize}\n\t\\item For $d = 1$ find $c_0$ such that $S_1(x_i(c_0)) \\to 0, \\pm \\infty$ $\\forall_i$.\n\t\tRegarding your plots,\n\t\t\\begin{itemize}\n\t\t\t\\item $S_1(x_i(c_0)) = 0$ is not possible $\\forall_i$ (ground state only approaches zero).\n\t\t\t\\item $S_1(x_i(c_0)) \\to - \\infty$ is not possible $\\forall_i$ (ground state $>0$).\n\t\t\t\\item $S_1(x_i(c_0)) \\to + \\infty$ is possible $\\forall_i$ if $c_0 \\to 0^-$.\n\t\t\\end{itemize}\n\t\\item For $d = 2$ find $c_0$ such that $S_2(x_i(c_0)) \\to + \\infty$ $\\forall_i$.\n\t\tRegarding your plots,\n\t\t\\begin{itemize}\n\t\t\t\\item $S_2(x_i(c_0)) \\to + \\infty$ is possible for $c_0 \\to 0^-$.\n\t\t\\end{itemize}\n\t\\item For $d = 3$ find $c_0$ such that $S_3(x_i(c_0)) \\to 0, \\pm \\infty$ $\\forall_i$.\n\t\tRegarding your plots,\n\t\t\\begin{itemize}\n\t\t\t\\item $S_3(x_i(c_0)) = 0$ is possible $\\forall_i$.\n\t\t\t\\item $S_3(x_i(c_0)) \\to \\pm \\infty$ is possible for $c_0 \\to 0^\\mp$.\n\t\t\\end{itemize}\n\\end{itemize}\n\nObviously, the cases for $c_0 \\to 0^\\pm$ make sense --- if there is no interaction, then there is only the mass scale.\nHowever, this solution is not of interest.\nThis leaves $d=3$ as the only interesting case.\n\n\\newpage\n\\section{Momentum Space confusion}\nThis is an incomplete description!\nIt should serve as a summary.\n\nAs we figured out most recently, it is interesting to look at L\u00fcscher's equation in a discrete finite volume if you want to connect discrete finite volume energy levels with continuum infinite volume phase shifts.\nIf you don't do so, you find that the discretization (in a finite volume) apparently induces an effective range (see figure \\ref{fig-cont-lusch}).\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.9\\textwidth]{figs/eff-range-cont-lusch.pdf}\n\\caption{\n    
\\label{fig-cont-lusch}Phase shifts extracted from the continuum L\u00fcscher equation for discrete finite volume eigenvalues of the contact interaction Hamiltonian.\n    The contact interaction was chosen such that the ground state matches the first zero of the zeta function.\n    $\\epsilon$ is the lattice spacing in [fm] for a box of size $L = 1$ [fm].\n    $n_{\\mathrm{step}}$ describes the implementation of the Laplace derivative.\n    E.g., $n_{\\mathrm{step}} = 1$ corresponds to a one-step derivative where corrections are expected to scale with $\\epsilon^2$.\n    The variable $x$ is directly proportional to the eigenvalues of the Hamiltonian $x = 2 \\mu E L^2 / (2 \\pi)^2$.\n    Even though the interaction has no effective range (contact interaction), the effective range expansion for more precise implementations of the derivative is clearly proportional to $x$.\n }\n\\end{figure*}\n\nThe trick you have used to get there was to replace the finite volume continuum zeta function\n\\begin{equation}\n    S_3(x) = \\sum\\limits_{\\vec n \\in M(\\Lambda)} \\frac{1}{\\vec n ^2 - x} - 4 \\pi \\Lambda\n    \\, , \\qquad\n    M(\\Lambda) = \\left\\{ \\vec n \\in \\mathbb N^3 \\middle\\vert 0 \\leq n_i \\leq \\Lambda \\right\\}\n\\end{equation}\nwith a ``discretized version''\n\\begin{equation}\n    \\mapsto\n    S_3^{\\mathrm{lat}}(x) = \\frac{4 \\pi^2}{L^2} \\sum\\limits_{\\vec k \\in M^\\epsilon(L)} \\frac{1}{\\vec k ^2 - 2 \\mu E} - \\mathcal L \\pi^2 \\frac{L}{\\epsilon}\n    \\, , \\qquad\n    M^\\epsilon(L) = \\left\\{ \\vec k \\in \\frac{2 \\pi}{L} \\mathbb Z^3 \\middle\\vert - \\frac{\\pi}{\\epsilon} \\leq k_i < \\frac{\\pi}{\\epsilon} \\right\\}\n    \\, ,\n\\end{equation}\nwhere $\\mathcal L = 0.777551 \\cdots$ is the normalization of the momentum integral and the sum is limited by the momentum cutoff of the discrete spatial lattice.\n\nThis certainly improved the effective range expansion (see fig.~\\ref{fig-discrete-lusch}).\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.9\\textwidth]{figs/eff-range-discrete-lusch.pdf}\n\\caption{\n    \\label{fig-discrete-lusch}Phase shifts extracted from the discrete L\u00fcscher equation for discrete finite volume eigenvalues of the contact interaction Hamiltonian.\n    $\\epsilon$ is the lattice spacing in [fm] for a box of size $L = 1$ [fm].\n }\n\\end{figure*}\n\nEven though the utilization of the discrete L\u00fcscher equation improves the description, e.g., reduces the size of the induced effective range, the results are not perfectly constant (they should be equal to zero for this choice of the contact interaction).\nOn top of that, we now see curvature in the effective range expansion.\nThis might still be caused by further discretization effects.\nNext, I describe how one can improve on that.\n\n\\subsection{Removing all discretization effects -- making ERE flat}\n\\textbf{The idea:} Implement an exact discrete derivative by transforming to momentum space, multiplying by the exact $p^2$, and transforming back to coordinate space.\nDoing so removes all derivative discretization approximations and hopefully recovers a flat effective range expansion.\n\n\\textbf{The conclusion:} This is not possible.\n\n\\textbf{The next step:} Implementing the ``actual'' discrete L\u00fcscher equation.\n\n\\subsubsection{The idea}\nThe idea is straightforward:\n\n\\begin{equation}\n\t\\braket{ \\vec r ' | \\hat H_0 | \\vec r}\n\t=\n\t\\int d \\vec q^3 \\int d \\vec k^3 \\braket{ \\vec k | \\hat H_0 | \\vec q} \\braket{\\vec r' | \\vec k} \\braket{\\vec q | \\vec 
r}\n\t=\n\t\\int d \\vec q^3 H_0(\\vec q) \\exp\\left\\{ i \\vec q \\cdot \\left( \\vec r - \\vec r '\\right)\\right\\}\n\\end{equation}\nDiscrete FFTs are already implemented in python, so the only remaining challenge is to multiply the transformed input by the expectation values of the kinetic Hamiltonian $H_0(\\vec q)$.\n\nBut this turned out to be a major source of confusion.\nAs an example, suppose you have four spatial sites in a one-dimensional discrete box with periodic boundary conditions.\nThis means your momenta are discrete $\\ket{\\vec p} = \\ket{2 \\pi / L \\vec n}$ with $\\vec n \\in \\mathbb Z^3$ and periodic  $\\ket{\\vec p + 2 \\pi / \\epsilon} = \\ket{\\vec p}$.\nA possible choice of momentum basis states is\n\\begin{equation}\n\tB_1(L = 4\\epsilon, \\epsilon)\n\t= \\left\\{ \\frac{2\\pi}{4\\epsilon} \\times n \\middle \\vert n \\in \\{-2, -1, 0, 1\\} \\right\\}\n\t= \\left\\{ -\\frac{\\pi}{\\epsilon}, -\\frac{\\pi}{2\\epsilon}, 0, \\frac{\\pi}{2\\epsilon} \\right\\} \\, .\n\\end{equation}\nThe naive choice of eigenvalues is thus\n\\begin{equation}\n    H(B_1) = \\frac{1}{2\\mu} \\frac{\\pi^2}{4 \\epsilon^2} \\left\\{ 4, 1, 0, 1\\right\\} \\, .\n\\end{equation}\nBut because of the periodicity of momenta $\\ket{\\vec p} = \\ket{\\vec p + 2\\pi/\\epsilon}$, the following basis is a valid choice as well:\n\\begin{equation}\n\tB_2(L = 4\\epsilon, \\epsilon)\n\t= \\left\\{ \\frac{2\\pi}{4\\epsilon} \\times n \\middle \\vert n \\in \\{0, 1, 2, 3\\} \\right\\}\n\t= \\left\\{ 0, \\frac{\\pi}{2\\epsilon}, \\frac{\\pi}{\\epsilon}, \\frac{3\\pi}{2\\epsilon} \\right\\} \\, , \\qquad\n\tH(B_2) = \\frac{1}{2\\mu} \\frac{\\pi^2}{4 \\epsilon^2} \\left\\{ 0, 1, 4, 9\\right\\} \\, .\n\\end{equation}\nAnd since the discrete Fourier transformations are unitary transformations (and thus the eigenvalues of the Hamiltonian do not change through FTs), clearly the different implementations of the Hamiltonian return different answers.\n\nMost interestingly, when implementing a 1-step derivative and running the FFTs in python, the first basis is actually preferred.\n\nWhat's happening?\n\nYou have to manually take care of the periodicity!\nLook at coordinate space.\nL\u00fcscher actually defines his finite volume periodic boundary operators such that\n\\begin{equation}\n    O_L(\\vec r) = O_L(\\vec r + \\vec n L) = \\frac{1}{\\mathcal N} \\sum _{\\vec n \\in \\mathbb Z^3} O(\\vec r + \\vec n L)\\, ,\n\\end{equation}\nwhere $\\mathcal N$ is a normalization and the infinite volume operator has a finite range such that $\\lim\\limits_{L\\to \\infty} O_L(\\vec r) = O(\\vec r)$.\n\nSo, how do we do the same thing for momenta?\nThe answer is basically the same.\nImplement the expectation values of the operators such that\n\\begin{itemize}\n    \\item[(A)] The operators $O_\\epsilon$ are periodic in $\\frac{2\\pi}{\\epsilon}$ and\n    \\item[(B)] In the limit of $\\epsilon \\to 0$, you obtain  $O_\\epsilon \\to  O$.\n\\end{itemize}\n\nSo what's a valid choice?\n\\begin{equation}\n\tD^1( \\vec p, \\epsilon) = \\sum_{i=1}^3 \\frac{2 - 2 \\cos(\\epsilon p_i)}{\\epsilon^2} \\, .\n\\end{equation}\nThat's just the same thing you would do in coordinate space by introducing a finite-step derivative.\nIn principle you can also do better and implement improved operators for faster convergence.\n\nIn other words: a naive infinite order improvement is not possible -- it does not matter where or how you implement your derivative.\nIf you take care of discretization and periodic boundary conditions, 
there is always a one-to-one correspondence of momentum and coordinate space.\nAt some point, it might nevertheless be more efficient numerically to run computations in one or the other space...\n\nBut here is the actual catch: since the momentum dispersion relation you have used $D^\\infty(\\vec p, \\epsilon) = \\vec p^2$ is actually not obtainable on a lattice (unless $n_{\\mathrm{step}} \\to \\infty$), you should use the actual lattice dispersion relation (spectrum of $H_0$) to execute the L\u00fcscher sum.\n\n\\subsection{New equations}\n\nWhat I am suggesting is that the new equations are the following:\nInstead of the infinitely improved dispersion relation $D^\\infty(\\vec p, \\epsilon)$ (which is the continuum dispersion relation in a fixed interval), you should use the discrete lattice dispersion relation\n\\begin{equation}\n    D^\\infty(\\vec p, \\epsilon) = \\vec p ^2 \n    \\to \n    D^{n_{\\mathrm{s}}}(\\vec p, \\epsilon) \n    =\n    \\frac{1}{\\epsilon^2} \\sum\\limits_{i=1}^3 \\sum\\limits_{n_i=-n_{\\mathrm{s}}}^{n_{\\mathrm{s}}} c_{|n_i|}^{n_{\\mathrm{s}}} \\cos( p_i n_i \\epsilon)\n\\end{equation}\nwhere the coefficients $c_i^{n_{\\mathrm{s}}}$ are fixed by the constraint\n\\begin{equation}\\label{def:lattice-dispersion}\n    D^{n_{\\mathrm{s}}}(\\vec p, \\epsilon)\n    =\n    p^2 + \\mathcal O \\left(\\epsilon^{2 n_{\\mathrm{s}}}\\right)\n    \\, .\n\\end{equation}\nA few coefficients are given in table \\ref{tab-dispersion-coeff}.\n\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{r | ccccc}\n$ c_{i}^{n_{\\mathrm{s}}}$ & i = 0 & 1 & 2 & 3 & 4 \\\\ \\hline\n$n_{\\mathrm{s}} = 1$ & 2 & -1 \\\\\n2 & $\\frac{5}{2}$ & $-\\frac{4}{3}$ & $\\frac{1}{12}$ \\\\\n3 & $\\frac{49}{18}$ & $-\\frac{3}{2}$ & $\\frac{3}{20}$ & $-\\frac{1}{90}$ \\\\\n4 & $\\frac{205}{72}$ & $-\\frac{8}{5}$ & $\\frac{1}{5}$  & $-\\frac{8}{315}$ & $\\frac{1}{560}$\n\\end{tabular}\n\\caption{\\label{tab-dispersion-coeff}Coefficients for the discrete momentum dispersion relation.\n    Note that these coefficients are minus the finite-step Laplace coefficients (because $\\partial_x^2 \\leftrightarrow - p^2$).\n}\n\\end{table}\n\nThus one has to update the counter term in the zeta function definition (your $m = 2\\mu$) (do I get the factors right?)\n\\begin{equation}\n    \\frac{\\pi}{\\epsilon}\n    \\mathcal L\n    =\n    \\frac{4 \\pi}{(2\\pi)^3}\n    \\left(\n    \\prod \\limits_{i=1}^3\n    \\int\\limits_{-\\pi /\\epsilon}^{\\pi / \\epsilon}\n    d q_i\n    \\right)\n    \\frac{1}{q_1^2 + q_2^2 + q_3^2}\n    \\to\n    \\frac{4 \\pi}{(2\\pi)^3}\n    \\left(\n    \\prod \\limits_{i=1}^3\n    \\int\\limits_{-\\pi / \\epsilon}^{\\pi / \\epsilon}\n    d q_i\n    \\right)\n    \\frac{1}{D^{n_{\\mathrm{s}}}(\\vec q, \\epsilon)}\n\\end{equation}\nand adjust the zeta function\n\\begin{equation}\n    S_3^{\\mathrm{lat}}(x) = \\frac{4 \\pi^2}{L^2} \\sum\\limits_{\\vec k \\in M^\\epsilon(L)} \\frac{1}{D^{n_{\\mathrm{s}}}(\\vec k, \\epsilon) - 2 \\mu E} - \\mathcal L \\pi^2 \\frac{L}{\\epsilon}\n    \\, , \\qquad\n    M^\\epsilon(L) = \\left\\{ \\vec k \\in \\frac{2 \\pi}{L} \\mathbb Z^3 \\middle\\vert - \\frac{\\pi}{\\epsilon} \\leq k_i < \\frac{\\pi}{\\epsilon} \\right\\}\n    \\, .\n\\end{equation}\n\nLet me know what you think about that.\n\n\\newpage\n\\section{Long Range Forces}\nLong range potentials defined by non-zero effective range and higher ERE parameters are the next step in the direction of realistic nuclear potentials.\nIs it possible to improve the lattice finite volume effective range expansion for 
long range forces in the same manner as in the unitary case?\nParticularly, if length scales like a non-zero scattering length and effective range are present, how do they mix with induced length scales?\nIs there a power counting in terms of induced over physical quantities?\n\nIt is desirable to choose a potential where the infinite volume continuum results can be computed analytically, to answer the above questions in the most accurate way.\nFor this reason, I have chosen the following separable potential\n\\begin{align}\n\tV(\\vec p', \\vec p) &= - g^*(\\vec p') g(\\vec p) \\, ,\n\t&\n\tg(\\vec{p})=\\overline{g} \\sqrt{8 \\pi} \\frac{M^{3}}{\\left(\\vec{p}^{2}+M^{2}\\right)^{2}}, \\quad M, \\overline{g} \\in \\mathbb{R}\n\t\\, .\n\\end{align}\nThe factorization generally allows one to extract numerical solutions for well-defined functions $g(\\vec p)$ (in the sense that all integrals are convergent).\nThe specific choice of $g$ mimics a pion exchange, $V(r, r') \\sim \\exp(- M r - M r')$, converges to a delta function for $M\\to \\infty$, but is well behaved in its ultraviolet and infrared parts and thus does not need any regulator.\n\n\\subsection{Analytical derivation of ERE}\n\nFor this potential, the two-nucleon binding momentum $\\gamma = \\sqrt{-2 \\mu E}$ (where $E < 0$ is the ground state energy and $\\mu$ the reduced mass of the two-nucleon system) is related to the potential parameters by\n\\begin{equation}\\label{def:g-lr-binding}\n\t\\overline{g} = \\frac{2M(\\gamma+M)^{2}}{\\sqrt{\\mu M^{3}\\left(\\gamma^{2}+5 M^{2}+4 \\gamma M\\right)}} \\, .\n\\end{equation}\n\nTo compute the phase shifts, one must know the $T$-matrix, which is given by\n\\begin{align}\n\tT(\\vec p', \\vec p, E)\n\t&=\n\tV(\\vec p', \\vec p) + \\lim\\limits_{\\epsilon \\to 0}\\int \\frac{d \\vec k^3}{(2\\pi)^3} V(\\vec p', \\vec k) G(\\vec k, E + i \\epsilon) T(\\vec k, \\vec p, E) \\, ,\n\t&\n\tG(\\vec k, E+ i \\epsilon) = \\frac{1}{E + i \\epsilon - \\frac{k^2}{2\\mu}}\n\t\\, .\n\\end{align}\nBecause of the separable potential, the equation factorizes to\n\\begin{align}\n\tT(\\vec p', \\vec p, E)\n\t&=\n\t- g^*(\\vec p') \\left[ g(\\vec p) + \\Gamma(\\vec p, E) \\right] \\, ,\n\t&\n\t\\Gamma(\\vec p, E)\n\t&\\equiv\n\t\\lim\\limits_{\\epsilon \\to 0}\\int \\frac{d \\vec k^3}{(2\\pi)^3} g(\\vec k) G(\\vec k, E + i \\epsilon) T(\\vec k, \\vec p, E)\n\t\\, .\n\\end{align}\nSubstituting $T$ back into the definition of $\\Gamma$, one finds\n\\begin{align}\n\t\\Gamma(\\vec p, E)\n\t&= - \\left[ g(\\vec p) + \\Gamma(\\vec p, E) \\right] I_0(E)\n\t\\, , &\n\tI_0(E) &= \\lim\\limits_{\\epsilon \\to 0}\\int \\frac{d \\vec k^3}{(2\\pi)^3} g(\\vec k) G(\\vec k, E + i \\epsilon) g^*(\\vec k) \n\t\\\\&&&= - \\lim\\limits_{\\epsilon \\to 0}\\int \\frac{d \\vec k^3}{(2\\pi)^3}  G(\\vec k, E + i \\epsilon) V(\\vec k, \\vec k)\n\t\\, ,\n\\end{align}\nor equivalently\n\\begin{equation}\n\t\\Gamma(\\vec p, E) = - \\frac{I_0(E)}{1 + I_0(E)}g(\\vec p) \\, .\n\\end{equation}\nThe $T$-matrix becomes\n\\begin{equation}\n\tT(\\vec p', \\vec p, E)\n\t=\n\t\\frac{V(\\vec p', \\vec p)}{1 + I_0(E)}\n\t\\, .\n\\end{equation}\nThe result for the integral is\n\\begin{equation}\n\tI_0\\left(E = - \\frac{\\gamma^2}{2\\mu}\\right)\n\t=\n\t- |\\bar g|^2 \\mu  M \\frac{\\gamma ^2+5 M^2+4 \\gamma  M}{4(\\gamma +M)^4}\n\t\\, , \\qquad\n\t\\gamma \\in \\mathbb R\n\\end{equation}\nIndeed, for the binding energy and the choice of $\\bar g$ as in eq.~\\eqref{def:g-lr-binding}, $I_0(E = - \\gamma^2/(2\\mu)) = -1$ and therefore the $T$-matrix is 
singular.\n\nIn general, one has\n\\begin{equation}\n\tI_0\\left(E = \\frac{k^2}{2\\mu}\\right)\n\t=\n\t-|\\bar g|^2 \\mu  M \\frac{\n\t\t-k^6-5 k^4 M^2+M^5 \\left(5 M+16 i \\sqrt{k^2}\\right)-15 k^2 M^4 \\sqrt{k^2}\n\t}{\n\t\t4\\left(k^2+M^2\\right)^4\n\t}\n\\end{equation}\n\nAnd finally, using\n\\begin{equation}\n\t\\frac{2 \\pi}{ \\mu k } \\frac{1}{\\cot(\\delta_0(k)) - i} = - T_0\\left(\\vec k, \\vec k, \\frac{k^2}{2 \\mu} \\right) \\, ,\n\\end{equation}\none obtains the analytic effective range expansion\n\\begin{align}\n\tk \\cot( \\delta_0(k))\n\t&=\n\t-\\frac{5 |\\bar g|^2 \\mu  M-4 M^2}{16 |\\bar g|^2 \\mu }\n\t+\\frac{15 |\\bar g|^2 \\mu + 16 M}{16 |\\bar g|^2 \\mu M}k^2\n\t+\\frac{5 |\\bar g|^2 \\mu + 24 M}{16 |\\bar g|^2\\mu  M^3}k^4\n\t+\\frac{|\\bar g|^2 \\mu + 16 M}{16|\\bar g|^2 \\mu  M^5}k^6\n\t+\\frac{1}{4 |\\bar g|^2 \\mu  M^6}k^8\n\t\\\\\n\t&=\n\t-\\frac{16\\gamma  M^4+29 \\gamma ^2 M^3+20 \\gamma ^3 M^2+5 \\gamma ^4 M}{16 (\\gamma +M)^4}\n\t+\\frac{15 \\gamma ^4+35 M^4+76\\gamma  M^3+94 \\gamma ^2 M^2+60 \\gamma ^3 M}{16 M (\\gamma +M)^4}k^2\n\t\\\\\\nonumber&\\qquad\n\t+\\frac{5 \\gamma ^4+35 M^4+44 \\gamma  M^3+36 \\gamma ^2 M^2+20\\gamma ^3 M}{16 M^3 (\\gamma +M)^4}k^4\n\t+\\frac{\\gamma ^4+21 M^4+20 \\gamma  M^3+10 \\gamma ^2 M^2+4 \\gamma ^3 M}{16 M^5(\\gamma +M)^4}k^6\n\t\\\\\\nonumber&\\qquad\n\t+ \\frac{\\gamma ^2+5 M^2+4 \\gamma  M}{16 M^5 (\\gamma +M)^4}k^8 \n\t\\\\&\\overset{M\\to\\infty}{\\longrightarrow}-\\gamma\n\\end{align}\n\nThe above quantities are computed in the \\texttt{notebooks/long-range-ERE.nb} notebook.\n\n\\subsection{Long range forces in finite discrete volume}\n\n\\subsubsection{How are discretized finite volume quantities defined?}\n\nThe mandatory requirement is the convergence to the continuum infinite volume counterparts.\nFor example, if we fix the finite volume lattice Hamiltonian by the ground state energy (input), we expect that a second observable, e.g., the discrete finite volume phase shifts (output), converges to the continuum infinite volume phase shifts.\nA finite volume lattice implementation of an observable is better than another if it converges faster to the physical observable.\nAn implementation is incorrect if no convergence pattern, or even a wrong one, is apparent.\n\nThe kinetic Hamiltonian, and in general all finite volume lattice momenta, fulfill the convergence criterion if they are implemented by their finite volume lattice dispersion (see eq.~\\eqref{def:lattice-dispersion}).\n\n\\subsubsection{How to estimate finite volume lattice parameters?}\n\\begin{itemize}\n\t\\item You can identify whether the potential is sufficiently mapped out by the FV lattice grid points (fig.~\\ref{fig:potential-component})\n\t\\item You can verify that the expected continuum infinite volume ERE has well-defined intersections with the zeta function (fig.~\\ref{fig:pcotdelta-intersection})\n\t\\item You can estimate the finite volume energy shifts by the intersections of the zeta function and the ERE (fig.~\\ref{fig:finite-volume-energy-difference})\n\\end{itemize}\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/potential-component.pdf}\n\\caption{\n\t\\label{fig:potential-component}\n\tThe separable component $g(p)$ of the potential $V(p', p) = -g(p') g(p)$ for the chosen lattice parameters agrees at the available lattice momenta with its continuum infinite volume counterpart.\n\tThe ultraviolet momentum region is covered up to $g(p) \\sim 0.1$ 
fm.\n}\n\\end{figure*}\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/pcotdelta-intersection.pdf}\n\\caption{\n\t\\label{fig:pcotdelta-intersection}\n\tThe continuum infinite volume effective range expansion has well-defined intersections with the zeta function for the given parameters.\n\tE.g., there is only one intersection at negative energies, which allows for the extraction of a unique ground state.\n}\n\\end{figure*}\n\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/finite-volume-energy-difference.pdf}\n\\caption{\n\t\\label{fig:finite-volume-energy-difference}\n\tThe difference of continuum infinite volume energies and continuum finite volume energies times the box length, extracted from the first intersection of the zeta function and the effective range expansion (see fig.~\\ref{fig:pcotdelta-intersection}), decays exponentially.\n\tNumerical precision disturbs the result at the $10^{-8}$ level.\n}\n\\end{figure*}\n\nFor the chosen potential, solutions can be extracted reliably for the following parameters:\n\\begin{itemize}\n\t\\item Infinite volume continuum ground state energy $E_0 = - 2.225$ MeV.\n\t\\item Finite volume box length $L = 10$ fm.\n\t\\item Potential parameter $M = 20 m_\\pi$ (this turns the potential more into a smeared delta function than into a pion exchange).\n\t\tThis parameter is also fixed in discrete finite volume (a ``constant of nature'') and only $\\bar g$ will be adjusted.\n\\end{itemize}\n\n\\subsection{How to pin down finite volume lattice potential parameters?}\nHow does one implement the potential in a discrete finite volume?\nOr, more specifically, how does one estimate the potential coefficients?\nIn the following, I will present three different ideas, which take different inputs and try to predict the effective range expansion.\nIs there a well-behaved way to solve the problem without knowing analytic relations between potential parameters and observables?\n\n\\begin{itemize}\n\t\\item Use all known continuum infinite volume parameters to reproduce the result on the lattice.\n\t\tTechnically this cannot be done for more complex potentials (see fig.~\\ref{fig:long-range-ere-v-inf-cont}). 
\n\t\\item Solve the discrete finite volume Schr\u00f6dinger equation to fit $\\bar g$ to the continuum infinite volume ground state for each lattice spacing and finite volume.\n\t\tCompute phase shifts for each fitted $\\bar g$ and observe the convergence pattern toward the ``experimental'' results (see fig.~\\ref{fig:long-range-ere-v-fit}).\n\t\\item Use ``experimental'' phase shifts to compute continuum finite volume energies.\n\t\tFit $\\bar g$ to the continuum finite volume ground state.\n\t\tCompute phase shifts for each fitted $\\bar g$ and observe the convergence pattern toward the ``experimental'' results (see fig.~\\ref{fig:long-range-ere-v-lat}).\n\\end{itemize}\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/long-range-ere-v-inf-cont.pdf}\n\\caption{\n\t\\label{fig:long-range-ere-v-inf-cont}\n\tInfinite volume continuum parameters were fed into the finite volume lattice potential, whose energy levels were then used to compute the effective range expansion.\n\tThe difference between the standard L\u00fcscher phase shift extraction (red markers) and the modified dispersion L\u00fcscher extraction (green markers) is only visible at large normalized scattering momenta $x$.\n\tFor few lattice nodes (large lattice spacings), the extracted ERE does not agree well with the analytic result.\n\tBut for more lattice nodes (finer discretizations), the extracted ERE seemingly converges to the analytic result and the difference between original and dispersion L\u00fcscher shrinks as well.\n}\n\\end{figure*}\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/long-range-ere-v-fit.pdf}\n\\caption{\n\t\\label{fig:long-range-ere-v-fit}\n\tThe infinite volume continuum energy was used to fit the finite volume lattice potential.\n\tThe ERE was extracted after the fit.\n\tThe difference between the standard L\u00fcscher phase shift extraction (red markers) and the modified dispersion L\u00fcscher extraction (green markers) is small at small $x$ but increases more drastically for large $x$ than in the ``known parameter case'' (see fig.~\\ref{fig:long-range-ere-v-inf-cont}).\n\tFor all discretizations, the original L\u00fcscher results agree with themselves at the smallest $x$ because the fit requires each $x$ to be the same and the original L\u00fcscher method does not provide different zeta functions for different discretizations.\n\tThis is different for dispersion L\u00fcscher.\n\tThe $y$-values are different because one uses dispersion-dependent zeta functions.\n\tThe standard L\u00fcscher has a nice internal convergence pattern (the finer the discretization, the smaller the changes of the ERE), but it seemingly converges to a completely wrong value.\n\tThe dispersion L\u00fcscher convergence pattern seems more random (big step, small step, big step in the direction of smaller lattice spacing).\n\tIt is not obvious if dispersion L\u00fcscher and standard L\u00fcscher will converge to the same dispersion relation, but it seems quite reasonable to assume that both approaches will not converge to the analytical result.\n}\n\\end{figure*}\n\n\\begin{figure*}[!htb]\n\\includegraphics[width=0.7\\textwidth]{figs/long-range-ere-v-lat.pdf}\n\\caption{\n\t\\label{fig:long-range-ere-v-lat}\n\tFirst, the finite volume continuum energy was computed using the ``experimental'' ERE.\n\tNext, the finite volume ground state was used to fit the finite volume lattice potential.\n\tThis means that discretization effects were correlated with the potential.\n\tThe ERE was extracted after the 
fit.\n\tThe difference between the standard L\u00fcscher phase shift extraction (red markers) and the modified dispersion L\u00fcscher extraction (green markers) is drastic.\n\tFor all discretizations, again the original L\u00fcscher results agree with themselves at the smallest $x$.\n\tThe standard L\u00fcscher again has a nice internal convergence pattern, but it still seemingly converges to a completely wrong value.\n\tIn this case, the dispersion L\u00fcscher results at small $x$ are on top of each other.\n\tThe dispersion L\u00fcscher convergence pattern seems well behaved: the smaller the lattice spacing, the smaller the changes.\n\tAlso, dispersion L\u00fcscher results now seem to converge to the continuum infinite volume counterpart.\n}\n\\end{figure*}\n\n\\subsubsection{Questions and conclusions}\n\n\\begin{itemize}\n\t\\item Give me a night to sleep over the results.\n\t\\item One big question I have is: assuming that things actually work, why do we still see a convergence pattern in fig.~\\ref{fig:long-range-ere-v-lat}?\n\t\\item It seems to me that the best way of fixing coefficients is mapping the ERE to coefficients and computing binding energies.\n\\end{itemize}\n\n\n\\end{document}", "meta": {"hexsha": "8db28628f26d69d4f8a1c3449bfc6c765f199ce7", "size": 29509, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/notes.tex", "max_stars_repo_name": "ckoerber/luescher-nd", "max_stars_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-12T22:19:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-26T14:06:49.000Z", "max_issues_repo_path": "notes/notes.tex", "max_issues_repo_name": "ckoerber/luescher-nd", "max_issues_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-12-16T19:49:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-02T00:50:31.000Z", "max_forks_repo_path": "notes/notes.tex", "max_forks_repo_name": "ckoerber/luescher-nd", "max_forks_repo_head_hexsha": "d1bc6bff0c6ee9f4dc0d1d0bb4bcfa842c44cceb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.4427350427, "max_line_length": 305, "alphanum_fraction": 0.706665763, "num_tokens": 9211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673133042217, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.5841219030420748}}
{"text": "\\section{Linear combinations}\n\\label{sec:linear-combinations-rn}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item Compute linear combinations of vectors algebraically and\n    geometrically.\n  \\item Determine whether a vector is a linear combination of given\n    vectors.\n  \\item Find the coefficients of one vector as a linear combination of\n    other vectors.\n  \\end{enumerate}\n\\end{outcome}\n\nNow that we have studied both vector addition and scalar\nmultiplication, we can combine the two operations. You may remember\nthat when we talked about the solutions to homogeneous systems of\nequations in Section~\\ref{sec:homogeneous-systems}, we briefly\nmentioned that the general solution of a homogeneous system is a\nlinear combination of its basic solutions. We now return to the\nconcept of a linear combination.\n\n\\begin{definition}{Linear combination}{linear-combination}\n  A vector $\\vect{v}$ is said to be a \\textbf{linear combination}%\n  \\index{linear combination!in Rn@in $\\R^n$} of the vectors\n  $\\vect{u}_1,\\ldots, \\vect{u}_n$ if there exist scalars\n  $a_1,\\ldots,a_n$ such that\n  \\begin{equation*}\n    \\vect{v} = a_1 \\vect{u}_1 + \\ldots + a_n \\vect{u}_n.\n  \\end{equation*}\n  The numbers $a_1,\\ldots,a_n$ are called the \\textbf{coefficients}%\n  \\index{coefficient!of a linear combination}%\n  \\index{linear combination!coefficient} of the linear combination.\n\\end{definition}\n\n\\begin{example}{Linear combination}{linear-combination}\n  We have\n  \\begin{equation*}\n    3\n    \\begin{mymatrix}{r}\n      -4 \\\\\n      1 \\\\\n      0\n    \\end{mymatrix}\n    +\n    2\n    \\begin{mymatrix}{r}\n      -3 \\\\\n      0 \\\\\n      1\n    \\end{mymatrix}\n    =\n    \\begin{mymatrix}{r}\n      -18 \\\\\n      3 \\\\\n      2\n    \\end{mymatrix}.\n  \\end{equation*}\n  Thus we can say that\n  \\begin{equation*}\n    \\vect{v}= \\begin{mymatrix}{r}\n      -18 \\\\\n      3 \\\\\n      2\n    \\end{mymatrix}\n  \\end{equation*}\n  is a linear combination of the vectors\n  \\begin{equation*}\n    \\vect{u}_1 = \\begin{mymatrix}{r}\n      -4 \\\\\n      1 \\\\\n      0\n    \\end{mymatrix}\n    \\quad\\mbox{and}\\quad\n    \\vect{u}_2 =\n    \\begin{mymatrix}{r}\n      -3 \\\\\n      0 \\\\\n      1\n    \\end{mymatrix}.\n  \\end{equation*}\n\\end{example}\n\nFor the specific case of $\\R^3$, there are three special vectors which\nwe often use.  They are given by\n\\begin{equation*}\n  \\vect{i} =\n  \\begin{mymatrix}{c}\n    1 \\\\ 0 \\\\ 0\n  \\end{mymatrix},\n  \\quad\n  \\vect{j} =\n  \\begin{mymatrix}{c}\n    0 \\\\ 1 \\\\ 0\n  \\end{mymatrix},\n  \\quad\\mbox{and}\\quad\n  \\vect{k} =\n  \\begin{mymatrix}{c}\n    0 \\\\ 0 \\\\ 1\n  \\end{mymatrix}.\n\\end{equation*}\nWe can write any vector $\\vect{u} = \\mat{a_1,a_2,a_3}^T$ as a linear\ncombination of these vectors, namely\n\\begin{equation*}\n  \\vect{u} = a_1 \\vect{i} + a_2 \\vect{j} + a_3 \\vect{k}.\n\\end{equation*}\nWe will use this notation from time to time.\n\n\\begin{example}{Determining if linear combination}{determine-linear-comb}\n  Can $\\vect{v}=\\mat{1, 3, 5}^T$ be written as a linear combination\n  of $\\vect{u}_1=\\mat{2, 2, 6}^T$, $\\vect{u}_2=\\mat{1, 6, 8}^T$, and\n  $\\vect{u}_3=\\mat{3, 8, 18}^T$? 
If yes, find the coefficients.\n\\end{example}\n\n\\begin{solution}\n  This question can be rephrased as: can we find scalars $x,y,z$ such\n  that\n  \\begin{equation*}\n    x\\vect{u}_1+y\\vect{u}_2+z\\vect{u}_3=\\vect{v}?\n  \\end{equation*}\n  Multiplying out produces the system of linear equations\n  \\begin{align*}\n    2x+y+3z&=1 \\\\\n    2x+6y+8z&=3 \\\\\n    6x+8y+18z&=5.\n  \\end{align*}\n  Now we row reduce the corresponding augmented matrix to solve:\n  \\begin{align*}\n    &\\begin{mymatrix}{rrr|r} 2 & 1 & 3 & 1 \\\\ 2 & 6 & 8 & 3 \\\\ 6 & 8 & 18 & 5 \\end{mymatrix}\n    \\stackrel{R_1\\rowop R_1-R_2}{\\stackrel{R_3\\rowop R_3-3R_1}{\\roweq}}\n    \\begin{mymatrix}{rrr|r} 0 & -5 & -5 & -2 \\\\ 2 & 6 & 8 & 3 \\\\ 0 & 5 & 9 & 2 \\end{mymatrix}\n    \\stackrel{R_1\\rowop R_1+R_3}{\\roweq}\n    \\begin{mymatrix}{rrr|r} 0 & 0 & 4 & 0 \\\\ 2 & 6 & 8 & 3 \\\\ 0 & 5 & 9 & 2 \\end{mymatrix}\n    \\stackrel{R_2\\rowswap R_1}{\\roweq} \\\\\n    &\\begin{mymatrix}{rrr|r} 2 & 6 & 8 & 3 \\\\ 0 & 0 & 4 & 0 \\\\ 0 & 5 & 9 & 2 \\end{mymatrix}\n    \\stackrel{\\frac{1}{2}R_1}{\\stackrel{R_2\\rowswap R_3}{\\roweq}}\n    \\begin{mymatrix}{rrr|r} 1 & 3 & 4 & \\frac{3}{2} \\\\ 0 & 5 & 9 & 2 \\\\ 0 & 0 & 4 & 0\\end{mymatrix}\n    \\stackrel{\\frac{1}{4}R_3}{\\roweq}\n    \\begin{mymatrix}{rrr|r} 1 & 3 & 4 & \\frac{3}{2} \\\\ 0 & 5 & 9 & 2 \\\\ 0 & 0 & 1 & 0 \\end{mymatrix}\n    \\stackrel{R_1\\rowop R_1-4R_3}{\\stackrel{R_2\\rowop R_2-9R_3}{\\roweq}} \\\\\n    &\\begin{mymatrix}{rrr|r} 1 & 3 & 0 & \\frac{3}{2} \\\\ 0 & 5 & 0 & 2 \\\\ 0 & 0 & 1 & 0 \\end{mymatrix}\n    \\stackrel{\\frac{1}{5}R_2}{\\roweq}\n    \\begin{mymatrix}{rrr|r} 1 & 3 & 0 & \\frac{3}{2} \\\\ 0 & 1 & 0 & \\frac{2}{5} \\\\ 0 & 0 & 1 & 0 \\end{mymatrix}\n    \\stackrel{R_1\\rowop R_1-3R_2}{\\roweq}\n    \\begin{mymatrix}{rrr|r} 1 & 0 & 0 & \\frac{3}{10} \\\\ 0 & 1 & 0 & \\frac{2}{5} \\\\ 0 & 0 & 1 & 0 \\end{mymatrix}.\n  \\end{align*}\n  We are in the case where we have a unique solution:\n  \\begin{align*}\n    x&=\\frac{3}{10} \\\\\n    y&=\\frac{2}{5} \\\\\n    z&=0.\n  \\end{align*}\n  This means that $\\vect{v}$ is a linear combination of $\\vect{u}_1$,\n  $\\vect{u}_2$, and $\\vect{u}_3$:\n  \\begin{equation*}\n    \\vect{v}=\\frac{3}{10}\\vect{u}_1+\\frac{2}{5}\\vect{u}_2+0\\vect{u}_3.\n  \\end{equation*}\n  The coefficients are $\\frac{3}{10}$, $\\frac{2}{5}$, and $0$.\n  In fact, since the coefficient of $\\vect{u}_3$ is zero, $\\vect{v}$ is\n  also a linear combination of just $\\vect{u}_1$ and $\\vect{u}_2$.\n\\end{solution}\n\nIn the following example, we examine the geometric meaning of linear combinations.\n\n\\begin{example}{Graphing a linear combination of vectors}{graphing-linear-combination}\n  Consider the following picture of the vectors $\\vect{u}$ and $\\vect{v}$:\n  \\begin{center}\n    \\begin{tikzpicture}[scale=2]\n      \\draw[->, thick, blue] (0,0) -- node[above,pos=0.45]{$\\vect{u}$} (2,1);\n      \\draw[->, thick, red] (4,1) -- node[above]{$\\vect{v}$} (5,0.5);\n    \\end{tikzpicture}\n  \\end{center}\n  Sketch a picture of $\\vect{u}+2\\vect{v}$ and $\\vect{u}-\\frac{1}{2}\\vect{v}$.\n\\end{example}\n\n\\begin{solution}\n  Both vectors are shown below.\n  \\begin{center}\n    \\begin{tikzpicture}[scale=2]\n      \\fill (1.5,1.25) circle (0.9pt);\n      \\fill (4,0) circle (0.9pt);\n      \\draw[->, thick, red] (2,1) -- node[above]{$2\\vect{v}$} (4,0);\n      \\draw[->, thick, red] (2,1) -- node[above right]{$-\\frac{1}{2}\\vect{v}$} (1.5, 1.25);\n      \\fill (2,1) circle (0.9pt);\n      \\draw[->, thick, blue] (0,0) -- node[below,pos=0.55]{$\\vect{u}$} (2,1);\n      \\draw[->, thick, purple](0,0) -- node[below]{$\\vect{u}+2\\vect{v}$} (4,0);\n      \\draw[->, thick, purple](0,0) -- node[above left]{$\\vect{u} - \\frac{1}{2}\\vect{v}$} (1.5,1.25);\n      \\fill (0,0) circle (0.9pt);\n    \\end{tikzpicture}\n  \\end{center}\n\\end{solution}\n\nGiven any two non-parallel vectors $\\vect{u}$ and $\\vect{v}$ in\n$\\R^2$, we can create a grid of their linear combinations. The\ncombinations with integer coefficients are pictured below. 
Both the picture and the algebra show that all vectors in\n$\\R^2$ can be written as a linear combination of $\\vect{u}$ and\n$\\vect{v}$.\n\n\\begin{center}\n  \\begin{tikzpicture}[scale=1.5]\n    \\draw[->, thick, blue] (0,0) -- node[below]{$\\vect{u}$} (1.5,0.5);\n    \\draw[->, thick, red] (0,0) -- node[left]{$\\vect{v}$} (0.5,1);\n    \\draw[-, gray] (-2,-1.5)--(4,0.5);\n    \\draw[-, gray] (-1.5,-0.5)--(0,0);\n    \\draw[-, gray] (1.5,0.5)--(4.5,1.5);\n    \\draw[-, gray] (-1,0.5)--(5,2.5);\n    \\draw[-, gray] (-0.5,1.5)--(5.5,3.5);\n    \\draw[-, gray] (0,2.5)--(6,4.5);\n    \\draw[-, gray] (-2,-1.5)--(0,2.5);\n    \\draw[-, gray] (-0.5,-1)--(0,0);\n    \\draw[-, gray] (0.5,1)--(1.5,3);\n    \\draw[-, gray] (1,-0.5)--(3,3.5);\n    \\draw[-, gray] (2.5,0)--(4.5,4);\n    \\draw[-, gray] (4,0.5)--(6,4.5);\n    \\draw (-2.0,-1.5) node[left]{$-1\\vect{u}-1\\vect{v}$};\n    \\draw (-1.5,-0.5) node[left]{$-1\\vect{u}+0\\vect{v}$};\n    \\draw (-1.0,0.5) node[left]{$-1\\vect{u}+1\\vect{v}$};\n    \\draw (-0.5,1.5) node[left]{$-1\\vect{u}+2\\vect{v}$};\n    \\draw (0.0,2.5) node[left]{$-1\\vect{u}+3\\vect{v}$};\n    \\draw (1.5,3.0) node[above=0.75ex]{$0\\vect{u}+3\\vect{v}$};\n    \\draw (3.0,3.5) node[above=0.75ex]{$1\\vect{u}+3\\vect{v}$};\n    \\draw (4.5,4.0) node[above=0.75ex]{$2\\vect{u}+3\\vect{v}$};\n    \\draw (6.0,4.5) node[right]{$3\\vect{u}+3\\vect{v}$};\n    \\draw (5.5,3.5) node[right]{$3\\vect{u}+2\\vect{v}$};\n    \\draw (5.0,2.5) node[right]{$3\\vect{u}+1\\vect{v}$};\n    \\draw (4.5,1.5) node[right]{$3\\vect{u}+0\\vect{v}$};\n    \\draw (4.0,0.5) node[right]{$3\\vect{u}-1\\vect{v}$};\n    \\draw (2.5,0.0) node[below=0.75ex]{$2\\vect{u}-1\\vect{v}$};\n    \\draw (1.0,-0.5) node[below=0.75ex]{$1\\vect{u}-1\\vect{v}$};\n    \\draw (-0.5,-1.0) node[below=0.75ex]{$0\\vect{u}-1\\vect{v}$};\n    \\draw[fill] (0,0) circle [radius=1.2pt] node[below right]{$\\vect{0}$};\n  \\end{tikzpicture}\n\\end{center}\n", "meta": {"hexsha": "53d7a127838925d374a8633994b80e64c0d3da68", "size": 10438, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "baseText/content/Vectors-LinearCombinations.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "baseText/content/Vectors-LinearCombinations.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/content/Vectors-LinearCombinations.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 44.6068376068, "max_line_length": 306, "alphanum_fraction": 0.4811266526, "num_tokens": 3695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.8128673155708976, "lm_q1q2_score": 0.5841218948733716}}
{"text": "\\chapter{Algorithms} \\label{chap:algorithms}\n\nThis chapter gives an overview over the available algorithms in {\\ViennaCL}.\nThe focus of {\\ViennaCL} is on iterative solvers, for which {\\ViennaCL} provides a generic implementation that allows the use of the same code on the CPU (either using \\ublas, Eigen, MTL4 or \\OpenCL) and on the GPU (using \\OpenCL).\n\n\\section{Direct Solvers} \\label{sec:direct-solvers}\n{\\ViennaCLversion} provides triangular solvers and LU factorization without pivoting for the solution of dense linear systems. The interface is similar to that of {\\ublas}\n\n\\begin{lstlisting}\n  using namespace viennacl::linalg;  //to keep solver calls short\n  viennacl::matrix<float>  vcl_matrix;\n  viennacl::vector<float>  vcl_rhs;\n  viennacl::vector<float>  vcl_result;\n\n  /* Set up matrix and vectors here */\n\n  //solution of an upper triangular system:\n  vcl_result = solve(vcl_matrix, vcl_rhs, upper_tag());\n  //solution of a lower triangular system:\n  vcl_result = solve(vcl_matrix, vcl_rhs, lower_tag());\n\n  //solution of a full system right into the load vector vcl_rhs:\n  lu_factorize(vcl_matrix);\n  lu_substitute(vcl_matrix, vcl_rhs);    \n\\end{lstlisting}\nIn {\\ViennaCLminorversion} there is no pivoting included in the LU factorization\nprocess, hence the computation may break down or yield results with poor\naccuracy. However, for certain classes of matrices (like diagonal dominant\nmatrices) good results can be obtained without pivoting.\n\nIt is also possible to solve for multiple right hand sides:\n\\begin{lstlisting}\n  using namespace viennacl::linalg;  //to keep solver calls short\n  viennacl::matrix<float>  vcl_matrix;\n  viennacl::matrix<float>  vcl_rhs_matrix;\n  viennacl::matrix<float>  vcl_result;\n\n  /* Set up matrices here */\n\n  //solution of an upper triangular system:\n  vcl_result = solve(vcl_matrix, vcl_rhs_matrix, upper_tag());\n\n  //solution of a lower triangular system:\n  vcl_result = solve(vcl_matrix, vcl_rhs_matrix, lower_tag());\n\\end{lstlisting}\n\n\n\\section{Iterative Solvers} \\label{sec:iterative-solvers}\n{\\ViennaCL} provides different iterative solvers for various classes of\nmatrices, listed in Tab.~\\ref{tab:linear-solvers}. Unlike direct solvers, the\nconvergence of iterative solvers relies on certain properties of the system\nmatrix. Keep in mind that an iterative solver may fail to converge, especially\nif the matrix is ill conditioned or a wrong solver is chosen. \n\n\\TIP{For full details on linear solver calls, refer to the reference\ndocumentation located in \\texttt{doc/doxygen/} and to the tutorials}\n\n\\TIP{The iterative solvers can directly be used for {\\ublas}, Eigen and MTL4 objects! Please have a look at Chap.~\\ref{chap:other-libs} and the respective tutorials in the examples/tutorials/ folder.}\n\n\\NOTE{In {\\ViennaCLversion}, GMRES using ATI GPUs yields wrong results due to a bug in Stream SDK v2.1. 
Consider using newer versions of the Stream SDK.}\n\n\\begin{lstlisting}\nviennacl::compressed_matrix<float>  vcl_matrix;\nviennacl::vector<float>  vcl_rhs;\nviennacl::vector<float>  vcl_result;\n\n/* Set up matrix and vectors here */\n\n//solution using conjugate gradient solver:\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n\t\t\t\t     vcl_rhs,\n\t\t\t\t     viennacl::linalg::cg_tag());\n\n//solution using BiCGStab solver:\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n\t\t\t\t     vcl_rhs,\n\t\t\t\t     viennacl::linalg::bicgstab_tag());\n\n//solution using GMRES solver:\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n\t\t\t\t     vcl_rhs,\n\t\t\t\t     viennacl::linalg::gmres_tag());\n\\end{lstlisting}\n\n\\begin{table}[tb]\n\\begin{center}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{p{4cm}|p{3cm}|p{7.5cm}}\nMethod & Matrix class & ViennaCL\\\\\n\\hline\nConjugate Gradient (CG) & symmetric positive definite & \\texttt{y = solve(A, x, cg\\_tag());} \\\\\nStabilized Bi-CG (BiCGStab) & non-symmetric & \\texttt{y = solve(A, x, bicgstab\\_tag());} \\\\\nGeneralized Minimum Residual (GMRES) & general & \\texttt{y = solve(A, x, gmres\\_tag());} \\\\\n\\hline\n\\end{tabular}\n\\caption{Linear solver routines in {\\ViennaCL} for the computation of $y$ in the expression $Ay = x$ with given $A$, $x$.}\n\\label{tab:linear-solvers}\n\\end{center}\n\\end{table}\n\nCustomized error tolerances can be set in the solver tags. The convention is\nthat solver tags take the relative error tolerance as first argument and the\nmaximum number of iteration steps as second argument. Furthermore, after the\nsolver run the number of iterations and the estimated error can be obtained from\nthe solver tags as follows:\n\\begin{lstlisting}\n// conjugate gradient solver with tolerance 1e10\n// and at most 100 iterations:\nviennacl::linalg::cg_tag custom_cg(1e-10, 100);\nvcl_result = viennacl::linalg::solve(vcl_matrix, vcl_rhs, custom_cg);\n//print number of iterations taken and estimated error:\nstd::cout << \"No. of iters: \" << custom_cg.iters() << std::endl;\nstd::cout << \"Est. error: \" << custom_cg.error() << std::endl;\n\\end{lstlisting}\nThe BiCGStab solver tag can be customized in exactly the same way. The GMRES\nsolver tag takes as third argument the dimension of the Krylov space. 
Thus, a\ntag for GMRES(30) with tolerance $1\\mathrm{E}\\!-\\!10$ and at most $100$ total\niterations\n(hence, up to three restarts) can be set up by \n\\begin{lstlisting}\nviennacl::linalg::gmres_tag custom_gmres(1e-10, 100, 30);\n\\end{lstlisting}\n\n\\section{Preconditioners} \\label{sec:preconditioner}\n{\\ViennaCL} ships with a generic implementation of several preconditioners.\nThe preconditioner setup is expect for simple diagonal preconditioners always carried out on the CPU host due to the need for dynamically allocating memory.\nThus, one may not obtain an overall performance benefit if too much time is spent on the preconditioner setup.\n\n\\TIP{The preconditioner also works for {\\ublas} types!}\n\nAn overview of preconditioners available for the various sparse matrix types is as follows:\n\\begin{center}\n \\begin{tabular}{|l|c|c|c|c|c|c|}\n  \\hline\n  Matrix Type & ICHOL & (Block-)ILU[0/T] & Jacobi & Row-scaling & AMG & SPAI \\\\\n  \\hline\n  \\lstinline|compressed_matrix| & yes & yes & yes & yes & yes & yes \\\\\n  \\lstinline|coordinate_matrix| & no & no & yes & yes & no & no \\\\\n  \\lstinline|ell_matrix| & no & no & no & no & no & no \\\\\n  \\lstinline|hyb_matrix| & no & no & no & no & no & no \\\\\n  \\hline\n \\end{tabular}\n\\end{center}\nBroader support of preconditioners particularly for \\lstinline|ell_matrix| and \\lstinline|hyb_matrix| is scheduled for future releases.\nAMG and SPAI preconditioners are described in Chap.~\\ref{chap:additional-algorithms}.\n\n\nIn the following it is assumed that the sparse linear system of equations is given as follows:\n\\begin{lstlisting}\ntypedef viennacl::compressed_matrix<float>   SparseMatrix;\n\nSparseMatrix  vcl_matrix;\nviennacl::vector<float>  vcl_rhs;\nviennacl::vector<float>  vcl_result;\n\n/* Set up matrix and vectors here */\n\\end{lstlisting}\n\n% \\begin{table}[tb]\n% \\begin{center}\n% \\renewcommand{\\arraystretch}{1.2}\n% \\begin{tabular}{p{3cm}|p{4cm}|p{7cm}}\n% Method & Brief description & Parameters\\\\\n% \\hline\n% ILUT & incomplete LU factorization & First parameter: Maximum number of entries\n% per row. Second parameter: Drop tolerance. \\\\\n% Jacobi & Divide each row in $A$ by its diagonal entry & none \\\\\n% Row Scaling & Divide each row in $A$ by its norm & First parameter specifies\n% the norm (1: $l^1$-norm, 2: $l^2$-norm)\\\\\n% \\hline\n% \\end{tabular}\n% \\caption{Preconditioners for iterative solvers in {\\ViennaCL}.}\n% \\label{tab:preconditioners}\n% \\end{center}\n% \\end{table}\n\n\\subsection{Incomplete LU Factorization with Threshold (ILUT)}\nThe incomplete LU factorization preconditioner aims at computing sparse matrices lower and upper triangular matrices $L$ and $U$ such that the sparse system\nmatrix is approximately given by $A \\approx LU$. In order to control the sparsity pattern of $L$ and $U$, a threshold strategy is used (ILUT)\n\\cite{saad-iterative-solution}. Due to the serial nature of the preconditioner, the setup of ILUT is always computed on\nthe CPU using the respective ViennaCL backend. \n\n\\begin{lstlisting}\n//compute ILUT preconditioner:\nviennacl::linalg::ilut_tag ilut_config;\nviennacl::linalg::ilut_precond< SparseMatrix > vcl_ilut(vcl_matrix,\n                                                        ilut_config);\n\n//solve (e.g. 
using conjugate gradient solver)\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n                                     vcl_rhs,\n                                     viennacl::linalg::bicgstab_tag(),\n                                     vcl_ilut);   //preconditioner here\n\\end{lstlisting}\nThe triangular substitutions may be applied in parallel on GPUs by enabling \\emph{level-scheduling} \\cite{saad-iterative-solution} via the member function call \\lstinline|use_level_scheduling(true)| in the \\lstinline|ilut_config| object.\n\nThree parameters can be passed to the constructor of \\lstinline|ilut_tag|: The first specifies the maximum number of entries per row in $L$ and $U$, while the\nsecond parameter specifies the drop tolerance. The third parameter is the boolean specifying whether level scheduling should be used.\n\n\\TIP{The performance of level scheduling depends strongly on the matrix pattern and is thus disabled by default.}\n\n\\subsection{Incomplete LU Factorization with Static Pattern (ILU0)}\nSimilar to ILUT, ILU0 computes an approximate LU factorization with sparse factors L and U.\nWhile ILUT determines the location of nonzero entries on the fly, ILU0 uses the sparsity pattern of A for the sparsity pattern of L and U \\cite{saad-iterative-solution}.\nDue to the serial nature of the preconditioner, the setup of ILU0 is computed on the CPU.\n\\begin{lstlisting}\n//compute ILU0 preconditioner:\nviennacl::linalg::ilu0_tag ilu0_config;\nviennacl::linalg::ilu0_precond< SparseMatrix > vcl_ilut(vcl_matrix,\n                                                        ilu0_config);\n\n//solve (e.g. using conjugate gradient solver)\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n                                     vcl_rhs,\n                                     viennacl::linalg::bicgstab_tag(),\n                                     vcl_ilut);   //preconditioner here\n\\end{lstlisting}\nThe triangular substitutions may be applied in parallel on GPUs by enabling \\emph{level-scheduling} \\cite{saad-iterative-solution} via the member function call \\lstinline|use_level_scheduling(true)| in the \\lstinline|ilu0_config| object.\n\nOne parameter can be passed to the constructor of \\lstinline|ilu0_tag|, being the boolean specifying whether level scheduling should be used.\n\n\\TIP{The performance of level scheduling depends strongly on the matrix pattern and is thus disabled by default.}\n\n\\subsection{Block-ILU}\nTo overcome the serial nature of ILUT and ILU0 applied to the full system matrix,\na parallel variant is to apply ILU to diagonal blocks of the system matrix.\nThis is accomplished by the \\lstinline|block_ilu| preconditioner, which takes\nthe system matrix type as first template argument and the respective ILU-tag type as second template argument\n(either \\lstinline|ilut_tag| or \\lstinline|ilu0_tag|). 
Support for accelerators using {\\CUDA} or {\\OpenCL} is provided.\n\n\\begin{lstlisting}\n//compute block-ILU preconditioner using ILU0 for each block:\nblock_ilu_precond<SparseMatrix,\n                  ilu0_tag> vcl_block_ilu0(vcl_matrix,\n                                           ilu0_tag());\n\n//solve\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n                                     vcl_rhs,\n                                     viennacl::linalg::bicgstab_tag(),\n                                     vcl_block_ilu0);\n\\end{lstlisting}\nA third argument can be passed to the constructor of \\lstinline|block_ilu_precond|: \nEither the number of blocks to be used (defaults to $8$), or an index vector with fine-grained control over the blocks. Refer to the Doxygen pages in doc/doxygen for details.\n\n\\TIP{The number of blocks is a design parameter for your sparse linear system at hand. Higher number of blocks leads to better memory bandwidth utilization on GPUs, but may increase the number of solver iterations.}\n\n\\subsection{Jacobi Preconditioner}\nA Jacobi preconditioner is a simple diagonal preconditioner given by the reciprocals of the diagonal entries of the system matrix $A$.\nUse the preconditioner as follows:\n\\begin{lstlisting}\n//compute Jacobi preconditioner:\njacobi_precond< SparseMatrix > vcl_jacobi(vcl_matrix,\n                                          viennacl::linalg::jacobi_tag());\n\n//solve (e.g. using conjugate gradient solver)\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n                                     vcl_rhs,\n                                     viennacl::linalg::cg_tag(),\n                                     vcl_jacobi);\n\\end{lstlisting}\n\n\n\\subsection{Row Scaling Preconditioner}\nA row scaling preconditioner is a simple diagonal preconditioner given by the reciprocals of the norms of the rows of the system matrix $A$.\nUse the preconditioner as follows:\n\\begin{lstlisting}\n//compute row scaling preconditioner:\nrow_scaling< SparseMatrix > vcl_row_scaling(vcl_matrix,\n                                      viennacl::linalg::row_scaling_tag());\n\n//solve (e.g. using conjugate gradient solver)\nvcl_result = viennacl::linalg::solve(vcl_matrix,\n                                     vcl_rhs,\n                                     viennacl::linalg::cg_tag(),\n                                     vcl_row_scaling);\n\\end{lstlisting}\nThe tag \\lstinline|viennacl::linalg::row_scaling_tag()| can be supplied with a parameter denoting the norm to be used. A value of \\lstinline|1| specifies the\n$l^1$-norm, while a value of $2$ selects the $l^2$-norm (default).\n\n\n\\section{Eigenvalue Computations}\n%{\\ViennaCL} \nTwo algorithms for the computations of the eigenvalues of a matrix $A$ are implemented in {\\ViennaCL}:\n\\begin{itemize}\n\\item The Power Iteration \\cite{golub:matrix-computations}\n\\item The Lanczos Algorithm \\cite{simon:lanczos-pro}\n\\end{itemize}\nDepending on the parameter \\lstinline|tag| either one of them is called. \nBoth algorithms can be used for either {\\ublas} or {\\ViennaCL} compressed matrices.\\\\\nIn order to get the eigenvalue with the greatest absolut value the power iteration should be called. 
\\\\\nThe lanczos algorithm returns a vector of the largest eigenvalues with the same type as the entries of the matrix.\n\nThe algorithms are called for a matrix object \\lstinline|A| by\n\\begin{lstlisting}\nstd::vector<double> largest_eigenvalues = viennacl::linalg::eig(A, ltag);\ndouble largest_eigenvalue = viennacl::linalg::eig(A, ptag);\n\\end{lstlisting}\n\n\n\\subsection{Power Iteration}\nThe Power iteration aims at computing the eigenvalues of a matrix by calculating the product of the matrix and a vector for several times, where the resulting vector is used for the next product of the matrix and so on. The computation stops as soon as the norm of the vector converges. \\\\\nThe final vector is the eigenvector to the eigenvalue with the greatest absolut value.\\\\\nTo call this algorithm, \\lstinline|piter_tag| must be used.\nThis tag has only one parameter: \\\\ \\lstinline|terminationfactor| defines the accuracy of the computation, i.e. if the new norm of the eigenvector changes less than this parameter the computation stops and returns the corresponding eigenvalue (default: $1e-10$).\\\\\nThe call of the constructor may look like the following:\n\\begin{lstlisting} \nviennacl::linalg::piter_tag ptag(1e-8);\n\\end{lstlisting}\n\n\\TIP{Example code can be found in \\lstinline|examples/tutorial/power-iter.cpp|.}\n\n\\subsection{The Lanczos Algorithm}\nIn order to compute the eigenvalues of a sparse high-dimensional matrix the lanczos algorithm can be used to find these. \nThis algorithm reformulates the given high-dimensional matrix in a way such that the matrix can be rewritten in a tridiagonal matrix at much lower dimension. The eigenvalues of this tridiagonal matrix are equal to the largest eigenvalues of the original matrix. \\\\\nThe eigenvalues of the tridiagonal matrix are calculated by using the bisection method \\cite{golub:matrix-computations}. \\\\\nTo call this lanczos algorithm, \\lstinline|lanczos_tag| must be used.\nThis tag has several parameters that can be passed to the constructor:\n\n\\begin{itemize}\n \\item The exponent of epsilon for the tolerance of the reorthogonalization, defined by the parameter \\lstinline|factor| (default: $0.75$)\n \\item The method of the lanczos algorithm: $0$ uses partial reorthogonalization, $1$ full reothogonalization and $2$ does not use reorthogonalization (default: $0$)\n \\item The number of eigenvalues that are returned is specified by \\lstinline|num_eigenvalues| (default: $10$)\n \\item The size of the krylov space used for the computations can be set by the parameter \\lstinline|krylov_size| (default: $100$). The maximum number of iterations can be equal or less this parameter\n\\end{itemize}\nThe call of the constructor may look like the following:\n\\begin{lstlisting}\nviennacl::linalg::lanczos_tag ltag(0.85, 15, 0, 200);\n\\end{lstlisting}\n\n\\TIP{Example code can be found in \\lstinline|examples/tutorial/lanczos.cpp|.}\n\n\n\\section{QR Factorization}\n\n\\NOTE{The current QR factorization implementation depends on {\\ublas}.}\n\nA matrix $A \\in \\mathbb{R}^{n\\times m}$ can be factored into $A = Q R$, where $Q \\in \\mathbb{R}^{n\\times n}$ is an\northogonal matrix and $R \\in \\mathbb{R}^{n \\times m}$ is upper triangular. This so-called QR-factorization is important for eigenvalue computations as well as\nfor the solution of least-squares problems \\cite{golub:matrix-computations}. 
{\\ViennaCL} provides a generic implementation of the QR-factorization using\nHouseholder reflections in file \\lstinline|viennacl/linalg/qr.hpp|. An example application can be found in \\lstinline|examples/tutorial/qr.hpp|.\n\nThe Householder reflectors $v_i$ defining the Householder reflection $I - \\beta_i v_i v_i^{\\mathrm{T}}$ are stored in the\ncolumns below the diagonal of the input matrix $A$ \\cite{golub:matrix-computations}. The normalization coefficients $\\beta_i$ are returned by the\nworker function \\lstinline|inplace_qr|. The upper triangular matrix $R$ is directly written to the upper triangular part of $A$. \n\\begin{lstlisting}\n  std::vector<ScalarType> betas = viennacl::linalg::inplace_qr(A, 12);\n\\end{lstlisting}\nIf $A$ is a dense matrix from \\ublas, the calculation is carried out on the CPU using a single thread. If $A$ is a \n\\lstinline|viennacl::matrix|, a hybrid implementation is used: The panel factorization is carried out using \\ublas, while expensive BLAS level 3 operations\nare computed on the OpenCL device using multiple threads. \n\nTypically, the orthogonal matrix $Q$ is kept in inplicit form because of computational efficiency\nHowever, if $Q$ and $R$ have to be computed explicitly, the function \\lstinline|recoverQ| can be used:\n\\begin{lstlisting}\n  viennacl::linalg::recoverQ(A, betas, Q, R); \n\\end{lstlisting}\nHere, \\lstinline|A| is the inplace QR-factored matrix, \\lstinline|betas| are the coefficients of the Householder reflectors as returned by\n\\lstinline|inplace_qr|, while \\lstinline|Q| and \\lstinline|R| are the destination matrices. However, the explicit formation of $Q$ is expensive and is usually avoided.\nFor a number of applications of the QR factorization it is required to apply $Q^T$ to a vector $b$. This is accomplished by\n\\begin{lstlisting}\n viennacl::linalg::inplace_qr_apply_trans_Q(A, betas, b);\n\\end{lstlisting}\nwithout setting up $Q$ (or $Q^T$) explicitly.\n\n\\TIP{Have a look at \\lstinline|examples/tutorial/least-squares.cpp| for a least-squares computation using QR factorizations.}", "meta": {"hexsha": "2c90a56dae2c68de571bdc102d88e676cc649cbc", "size": 19571, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/manual/algorithms.tex", "max_stars_repo_name": "bollig/viennacl", "max_stars_repo_head_hexsha": "6dac70e558ed42abe63d8c5bfd08465aafeda859", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-21T08:33:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-21T08:33:10.000Z", "max_issues_repo_path": "doc/manual/algorithms.tex", "max_issues_repo_name": "bollig/viennacl", "max_issues_repo_head_hexsha": "6dac70e558ed42abe63d8c5bfd08465aafeda859", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/manual/algorithms.tex", "max_forks_repo_name": "bollig/viennacl", "max_forks_repo_head_hexsha": "6dac70e558ed42abe63d8c5bfd08465aafeda859", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.4726775956, "max_line_length": 289, "alphanum_fraction": 0.7402278882, "num_tokens": 5025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.7185943925708562, "lm_q1q2_score": 0.5841218899869098}}
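The solver and preconditioner building blocks above can be combined freely. The following sketch is an illustration rather than part of the original manual: it assumes a fully assembled system and the include paths of your {\\ViennaCL} installation (e.g. \\lstinline|viennacl/linalg/bicgstab.hpp| and \\lstinline|viennacl/linalg/ilu.hpp|; header names may differ between versions). It passes a customized BiCGStab tag together with an ILUT preconditioner in a single solver call:
\\begin{lstlisting}
// customized BiCGStab: relative tolerance 1e-8, at most 200 iterations:
viennacl::linalg::bicgstab_tag custom_bicgstab(1e-8, 200);

// ILUT preconditioner with default configuration:
viennacl::linalg::ilut_precond< viennacl::compressed_matrix<float> >
                   vcl_ilut(vcl_matrix, viennacl::linalg::ilut_tag());

/* vcl_matrix, vcl_rhs and vcl_result set up as in the examples above */
vcl_result = viennacl::linalg::solve(vcl_matrix, vcl_rhs,
                                     custom_bicgstab, vcl_ilut);

// iteration count and error estimate, exactly as for cg_tag:
std::cout << "No. of iters: " << custom_bicgstab.iters() << std::endl;
std::cout << "Est. error: " << custom_bicgstab.error() << std::endl;
\\end{lstlisting}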
{"text": "\\documentclass[12pt,notitlepage]{article}\n\\usepackage{hyperref}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm}\n% \\usepackage{kbordermatrix}\n% theorem preambles\n\\newtheorem{mydef}{Definition}\n\\usepackage{graphicx}\t\t\t% allows us to import images\n\\usepackage{listings}\n\\lstset{\n  breaklines=true\n}\n\\usepackage{tikz}\n\\usepackage{verbatim}\n\\usepackage{cmds}\n\\hypersetup{colorlinks=true}\n\\usepackage[text={7in,10in}]{geometry}\n% \\setlength{\\parindent}{0in}\n\n% matrix columns\n\\newcommand\\scalemath[2]{\\scalebox{#1}{\\mbox{\\ensuremath{\\displaystyle #2}}}}\n\\setcounter{MaxMatrixCols}{20}\n\n\\begin{document}\n\\title{Neural Network Notes}\n\\author{Kevin J. Marshall}\n\\maketitle\n\n\\section{Feedforward Neural Networks}\n\\label{sec:ff-nn}\nThe basic computational units of a neural network (NN) are called\nneurons.  Neurons are grouped together in finite sets called layers.\nThe structure of a neural network consists of an input layer followed\nby a finite set of hidden computational layers and a final output\nlayer.  In feedforward NNs, information flows in a single direction from the\ninputs to the outputs and the computation graph may always be\ninterpreted as a directed acyclic\ngraph (DAG).  Since the computational graph is a DAG, the neural\nnetwork may always be arranged such that each layer of the feedforward\nNN only depends on layers to its left (the input side).  This can\nbe accomplished by performing a topological sort on the neurons\n(nodes) of the DAG and\ngrouping them into layers $l_{k}$ with depth $k$ where $l_{k} \\equiv \\{n_{0},\\dots,n_{k-1}\\}$ is the\nset of all neurons in layer $k$.  Upon arranging the $K$ layers from\n$l_{0},l_{1},\\dots,l_{K}$ each layer's neuron set $l_{k}$ will only depend\non neurons from the set of layers $\\{l_{j<k}\\}$.\n\n\\subsection{Matrix Representation}\n\\label{sec:matrix-rep}\n\nDuring training, a set of $N$ vector valued input-output pairs $\\{(\\vx_{0},\\vy_{0}), \\dots,\n(\\vx_{N-1},\\vy_{N-1})\\}$ is fed into the network.  
For each layer, $k$, the\nfollowing computation is carried out,\n\\begin{align}\n  \\label{eq:activation-component}\n  o_{j}^{k} &= w_{ij}^{k}a_{i}^{k-1}\\\\\n  \\label{eq:output-component}\n  a_{j}^{k} &= f( o_{j}^{k})  \n\\end{align}\nwhere we have used Einstein summation notation in\nEqns.~\\ref{eq:activation-component} and \\ref{eq:output-component} and\nwhere the associated vector expressions are\n\\begin{align}\n  \\label{eq:activation-matrix}\n  \\vect{o}^{k} &= (\\vect{W}^{k})^{T}\\vect{a}^{k-1}\\\\\n  \\label{eq:output-matrix}\n  \\vect{a}^{k} &= f(\\vect{o}^{k})\n\\end{align}\nIn these equations\n\\begin{itemize}\n\\item $o_{j}^{k}$ is the output computation of neuron $j$ in layer $k$\n  and $\\vect{o}^{k}$ is a vector with row dimension $n_{k}+1$, including a bias term.\n\\item $a_{j}^{k}$ is the nonlinear output or activation of neuron $j$ in layer $k$ after\n  passing through the nonlinear function $f(\\cdot)$\n\\item $\\vect{W}^{k}$ is the weight matrix of layer $k$ such that\n  elements $W_{ij}^{k}$ describe the weight associated with the $i$'th\n  neuron input in layer $k-1$ to the $j$'th neuron in layer $k$.\n  The dimensions of the weight matrix are $( d_{k-1} + 1 ) \\times\n  d_{k}$ where $d_{k-1} = |l_{k-1}|$ and $d_{k}\n  = |l_{k}|$ are the number of neurons in layers $k-1$ and $k$ and\n  $+1$ accounts for weights associated with the bias neuron.\n\\item $f(\\cdot;j,k)$ is a nonlinear activation function $f: \\mathbb{R}^{n_{k}} \\to\n  \\mathbb{R}^{n_{k}}$ which may be different for each layer and each neuron.  Historically\n  this function is taken to be the same for all neurons in all hidden\n  layers,\n  $f( \\cdot; j,k ) = f( \\cdot )$, and often takes the form of the\n  logistic function\n  \\begin{equation}\n    \\label{eq:logistic-fun}\n    f(x) = \\frac{A}{1+\\exp(-r(x-x_{0}))}\n  \\end{equation}\n  where $A$ is the curve's maximum value, $x_{0}$ is the x-value of\n  the function's midpoint, and $r$ is the logistic growth rate\n  (steepness of the curve).  Standard practice uses $A=1, r=1, x_{0} =\n  0$, which centers the function at $x=0$ and maps inputs to $(0,1)$;\n  however, the shape of this curve may be adjusted in order to\n  transform and/or scale layer outputs.\n\\end{itemize}\nOnce the information flow reaches the output layer, $l_{K}$, an error\nmay be computed for each input based on the nodal output values\n$\\hat{y}_{n}$ and the target values $y_{n}$.  
Gradient descent\ntechniques attempt to minimize the error by adjusting the weights.\nWeight update rules involve computing\n\\begin{equation}\n  \\label{eq:error-grad}\n  \\begin{split}\n    \\pderiv{E}{w^{k}_{ij}} &=\n    \\pderiv{E}{o_{l}^{k}}\\pderiv{o_{l}^{k}}{w_{ij}^{k}}\\\\\n    &= \\pderiv{E}{o_{l}^{k}} \\pderiv{}{w_{ij}^{k}}\\left( w_{ml}^{k}a^{k-1}_{m} \\right)\\\\\n    &= \\pderiv{E}{o_{l}^{k}}\\delta_{mi}\\delta_{jl}a_{m}^{k-1}\\\\\n    &= \\delta_{j}^{k}a_{i}^{k-1}\n  \\end{split}\n\\end{equation}\nwhere $\\delta_{j}^{k} \\equiv \\pderiv{E}{o_{j}^{k}}$ denotes the error signal of\nneuron $j$ in layer $k$.\n\n\\subsection{Bias}\n\\label{sec:bias}\nA bias term has been hidden in Eqn.~\\ref{eq:activation-matrix} such\nthat $\\vect{W}^{T}$ is given by\n\\begin{equation}\n  \\label{eq:wmat-explicit}\n  (\\vect{W}^{k})^{T}\\vect{a}^{k-1} =\n  \\begin{bmatrix}\n    w_{00} & w_{10} & \\dots & w_{n_{k-1}0} & w_{b_{k-1}0} \\\\\n    \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n    w_{0n_{k}} & w_{1n_{k}} & \\dots & w_{n_{k-1}n_{k}} & w_{b_{k-1}n_{k}}\n  \\end{bmatrix}\n  \\begin{bmatrix}\n    a_{0}^{k-1}\\\\\n    \\vdots\\\\\n    a_{n_{k-1}}^{k-1}\\\\\n    1\n  \\end{bmatrix}\n\\end{equation}\nfor $|l_{k}|=n_{k}$ and $|l_{k-1}|=n_{k-1}$ neurons in layers $k$ and $k-1$ and a bias\nweight vector of $[w_{b_{k-1}0},\\dots,w_{b_{k-1}n_{k}}]^{T}$.  In\ngeneral, layers contain a decreasing number of nodes as the depth\nincreases, i.e. $|l_{k}| \\le |l_{k-1}|$.\n\n\\subsection{Activation Functions}\n\\label{sec:activation-fun}\nActivation functions act on neuron layer outputs and may be interpreted as\nindividual layers themselves.  These functions introduce\nnon-linearities into the prediction model.  Several common activation\nfunctions and their derivatives are given below.  In what follows we\nassume $\\vect{o}^{k}$ to be the input to the activation layer and\n$\\vect{a}^{k}$ to be the associated output.\n\n\\begin{itemize}\n\\item Identity:\n  \\begin{align}\n    \\label{eq:identity}\n    a_{i}^{k} &= f(o_{i}^{k}) = o_{i}^{k}\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &= 1\n  \\end{align}\n\\item Logistic:\n  \\begin{align}\n    \\label{eq:logistic-act}\n    a_{i}^{k} &= f(o_{i}^{k}) = \\frac{1}{1+\\exp(-o_{i}^{k})}\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &=\n                                    \\frac{\\exp(-o_{i}^{k})}{(1+\\exp(-o_{i}^{k}))^{2}}\n                                    = f(o_{i}^{k})(1-f(o_{i}^{k}))\n  \\end{align}\n\\item TanH:\n  \\begin{align}\n    \\label{eq:tanh-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) = \\tanh(o_{i}^{k})\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &= \\sech^{2}(o_{i}^{k}) = 1 - f^{2}(o_{i}^{k})\n  \\end{align}\n\\item arcTan:\n  \\begin{align}\n    \\label{eq:arctan-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) = \\arctan(o_{i}^{k})\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &= \\frac{1}{(o_{i}^{k})^{2}+1}\n  \\end{align}\n\\item ReLU (rectified linear unit)\n  \\begin{align}\n    \\label{eq:relu-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) =\n                \\begin{cases}\n                  o_{i}^{k}, \\qquad & o_{i}^{k} \\ge 0\\\\\n                  0, \\qquad & o_{i}^{k} < 0\n                \\end{cases}\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &=\n                                    \\begin{cases}\n                                      1, \\qquad & o_{i}^{k} \\ge 0\\\\\n                                      0, \\qquad & o_{i}^{k} < 0\n                                    \\end{cases}\n  \\end{align}\n\\item PLU (parametric linear unit):\n  \\begin{align}\n    \\label{eq:plu-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) =\n                \\begin{cases}\n                  
o_{i}^{k}, \\qquad & o_{i}^{k} \\ge 0\\\\\n                  \\alpha o_{i}^{k}, \\qquad & o_{i}^{k} < 0\n                \\end{cases}\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &=\n                                    \\begin{cases}\n                                      1, \\qquad & o_{i}^{k} \\ge 0\\\\\n                                      \\alpha, \\qquad & o_{i}^{k} < 0\n                                    \\end{cases}\n  \\end{align}\n\\item ELU (exponential linear unit):\n  \\begin{align}\n    \\label{eq:elu-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) =\n                \\begin{cases}\n                  o_{i}^{k}, \\qquad & o_{i}^{k} \\ge 0\\\\\n                  \\alpha ( \\exp(o_{i}^{k}) - 1 ), \\qquad & o_{i}^{k} < 0\n                \\end{cases}\\\\\n    \\pderiv{a_{i}^{k}}{o_{i}^{k}} &=\n                                    \\begin{cases}\n                                      1, \\qquad & o_{i}^{k} \\ge 0\\\\\n                                      f(o_{i}^{k}) + \\alpha, \\qquad & o_{i}^{k} < 0\n                                    \\end{cases}\n  \\end{align}\n\\item SoftMax:\n  \\begin{align}\n    \\label{eq:softmax-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) = \\frac{\\exp(o_{i}^{k})}{\\sum_{j}\\exp(o_{j}^{k})}\\\\\n    \\pderiv{a_{i}^{k}}{o_{j}^{k}} &= f(o_{i}^{k})\\delta_{ij} - f(o_{i}^{k})f(o_{j}^{k})\n  \\end{align}\n\\item LogSoftMax:\n  \\begin{align}\n    \\label{eq:logsoftmax-fun}\n    a_{i}^{k} &= f(o_{i}^{k}) = o_{i}^{k} - \\log\\left(\n                \\sum_{j}\\exp(o_{j}^{k}) \\right)\\\\\n    \\pderiv{a_{i}^{k}}{o_{j}^{k}} &= \\delta_{ij} - \\frac{\\exp(o_{j}^{k})}{\\sum_{l}\\exp(o_{l}^{k})}\n  \\end{align}\n\\end{itemize}\n\n\\subsection{Loss Functions}\n\\label{sec:loss-fun}\n\n\n\\subsubsection{Mean Squared Error}\n\\label{sec:error-mse}\nThe mean squared error is a suitable loss function for numeric outputs, in\nwhich case weight updates are computed by minimizing the error\n\\begin{equation}\n  \\label{eq:error}\n  \\begin{split}\n    E &= \\frac{1}{2N}\\sum_{n}(\\hat{\\vy}^{n} - \\vy^{n} )^{2}\\\\\n    &= \\frac{1}{2N}\\sum_{n}(f(\\vect{o}^{K};K)^{n} - \\vy^{n} )^{2}\\\\\n    &= \\frac{1}{N}\\sum_{n}E_{n}\\\\\n  \\end{split}\n\\end{equation}\nwhere $\\hat{\\vy}^{n}$ and $\\vy^{n}$ are respectively the\noutput (prediction) vector and target vector of the $n$'th input\n$(\\vx^{n},\\vy^{n})$.  For the output layer we find that\nEqn.~\\ref{eq:error-grad} becomes\n\\begin{equation}\n  \\label{eq:error-grad-output}\n  \\begin{split}\n    \\pderiv{E_{n}}{w^{K}_{ij}} &=\n    \\pderiv{E_{n}}{\\hat{y}_{l}^{K}}\\pderiv{\\hat{y}_{l}^{K}}{o_{m}^{K}}\\pderiv{o_{m}^{K}}{w_{ij}^{K}}\\\\\n    &= (\\hat{y}_{l}^{K} - y^{n}_{l} )f'(o_{l}^{K})\\delta_{ml}\\delta_{iq}\\delta_{jm}a_{q}^{K-1}\\\\\n    &= (\\hat{y}_{j}^{K} - y^{n}_{j} )f'(o_{j}^{K})a_{i}^{K-1}\n  \\end{split}\n\\end{equation}\nWe note that equation \\ref{eq:error-grad-output} actually describes\nthree layer operations which take place sequentially.  
These\noperations may be described from right to left in accordance with back\npropagation as\n\\begin{enumerate}\n\\item Error propagation over the loss function layer,\n  \\begin{equation}\n    \\label{eq:error-loss-fcn}\n    \\pderiv{E_{n}}{\\hat{y}_{l}^{K}} = (\\hat{y}_{l}^{K} - y^{n}_{l})\n  \\end{equation}\n\\item Error propagation over the activation layer\n  \\begin{equation}\n    \\label{eq:error-act-fun}\n    \\pderiv{E_{n}}{o_{m}^{K}} =\n    \\pderiv{E_{n}}{\\hat{y}_{l}^{K}}f'(o_{l}^{K})\\delta_{ml} = \\pderiv{E_{n}}{\\hat{y}_{m}^{K}}f'(o_{m}^{K})\n  \\end{equation}\n\\item Error propagation over the last layer $K$ to find gradient updates\n  \\begin{equation}\n    \\label{eq:error-last-layer}\n    \\pderiv{E_{n}}{w_{ij}^{K}} =\n    \\pderiv{E_{n}}{o_{m}^{K}}\\delta_{iq}\\delta_{jm}a_{q}^{K-1} = \\pderiv{E_{n}}{o_{j}^{K}}a_{i}^{K-1}\n  \\end{equation}\n\\end{enumerate}\nThe key here is to notice that back propagation inputs are error\ngradients computed on the forward pass inputs.  Gradient update rules for\nthe weight matrices are then obtained from stochastic gradient descent and its\nvariants.\n\\subsection{Hidden Layers}\n\\label{sec:hidden-layers}\n\nFor hidden layers $l_{k}$, $0 \\le k < K$, the calculation becomes,\n\\begin{equation}\n  \\label{eq:error-grad-hidden}\n  \\begin{split}\n    \\pderiv{E_{n}}{w^{k}_{ij}} &=\n    \\pderiv{E_{n}}{o_{l}^{k}}\\pderiv{o_{l}^{k}}{w_{ij}^{k}}\\\\\n    &=\n    \\pderiv{E_{n}}{o_{m}^{k+1}}\\pderiv{o_{m}^{k+1}}{o_{l}^{k}}\\pderiv{o_{l}^{k}}{w_{ij}^{k}}\\\\\n    &=\n    \\pderiv{E_{n}}{o_{m}^{k+1}}\\pderiv{o_{m}^{k+1}}{o_{l}^{k}}\\delta_{jl}a_{i}^{k-1}\\\\\n    &=\n    \\pderiv{E_{n}}{o_{m}^{k+1}}\\pderiv{o_{m}^{k+1}}{o_{j}^{k}}a_{i}^{k-1}\\\\\n    &=\n    \\pderiv{E_{n}}{o_{m}^{k+1}}\\pderiv{o_{m}^{k+1}}{a_{p}^{k}}\\pderiv{a_{p}^{k}}{o_{j}^{k}}a_{i}^{k-1}\\\\ \n    &=\n    \\pderiv{E_{n}}{o_{m}^{k+1}}w_{pm}^{k+1}f'(o_{p}^{k})\\delta_{pj}a_{i}^{k-1}\\\\ \n    &= \\delta^{k+1}_{m}w_{jm}^{k+1}f'(o_{j}^{k})a_{i}^{k-1}\\\\\n  \\end{split}\n\\end{equation}\n\n\\end{document}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: t\n%%% End:\n", "meta": {"hexsha": "a721af74afda8cee89b01eca349d5764cf48db6c", "size": 12351, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/nnet-kmarshal.tex", "max_stars_repo_name": "kjmarshall/NNet", "max_stars_repo_head_hexsha": "7b51a1c688666a626011b5b730dae3f6a9e97ec2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/nnet-kmarshal.tex", "max_issues_repo_name": "kjmarshall/NNet", "max_issues_repo_head_hexsha": "7b51a1c688666a626011b5b730dae3f6a9e97ec2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/nnet-kmarshal.tex", "max_forks_repo_name": "kjmarshall/NNet", "max_forks_repo_head_hexsha": "7b51a1c688666a626011b5b730dae3f6a9e97ec2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.085443038, "max_line_length": 108, "alphanum_fraction": 0.586025423, "num_tokens": 4601, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.5841218867046679}}
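As a complement to the matrix expressions in Eqns.~(\\ref{eq:activation-matrix}) and (\\ref{eq:output-matrix}), the following minimal C++ sketch evaluates a single feedforward layer with a logistic activation. It is illustrative only (all names are invented here); the bias is handled by appending a constant $1$ to the previous layer's activations, exactly as in Eqn.~(\\ref{eq:wmat-explicit}).
\\begin{lstlisting}
#include <cmath>
#include <cstdio>
#include <vector>

// One feedforward layer: a^k = f((W^k)^T a^{k-1}), with W[i][j] = w_ij
// (input index i, including the bias row; neuron index j in layer k).
std::vector<double> forward_layer(const std::vector<std::vector<double> >& W,
                                  std::vector<double> a_prev)
{
    a_prev.push_back(1.0);  // bias neuron contributes a constant 1
    std::vector<double> a(W[0].size());
    for (std::size_t j = 0; j < a.size(); ++j) {
        double o = 0.0;  // o_j^k = w_ij^k a_i^{k-1} (sum over i)
        for (std::size_t i = 0; i < a_prev.size(); ++i)
            o += W[i][j] * a_prev[i];
        a[j] = 1.0 / (1.0 + std::exp(-o));  // logistic activation
    }
    return a;
}

int main()
{
    // 2 inputs plus a bias row feeding 1 neuron; weights chosen arbitrarily
    std::vector<std::vector<double> > W = {{0.5}, {-0.25}, {0.1}};
    std::vector<double> a = forward_layer(W, {1.0, 2.0});
    std::printf("a = %f\n", a[0]);  // logistic(0.5 - 0.5 + 0.1) = 0.525
}
\\end{lstlisting}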
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{August 29, 2014}\n\\maketitle\n\nlast time assigned gcd stuff and assigned to read results\n\n\\section*{gcd definition}\n$a,b$ not both 0, notation $\\text{gcd}(a,b)=(a,b)$\n\n\\subsection*{facts}\n\\begin{enumerate}\n\\item\ngcd exists and is unique\n\nfollows from assigned theorems.\n\n\\subsubsection*{example}\n$\\text{gcd}(6,14)=2$\n\\item\nthe gcd of $a$ and $b$ is a linear combination of $a$ and $b$. i.e., there exist $m,n\\in\\mathbb{Z}$ such that $(a,b)=ma+nb$\n\nin fact $\\text{gcd}(a,b)$ is the smallest positive integer that is a linear combination of $a$ and $b$, i.e. the smallest positive element of\n\\begin{align*}\n  \\{ma+nb\\mid m,n\\in\\mathbb{Z}\\}\n\\end{align*}\n\\end{enumerate}\n\n\\subsection*{euclidean algorithm}\n\\begin{align*}\n  (a,b)&=(\\abs{a},\\abs{b})\n\\end{align*}\nwe may assume that $a\\ge b\\ge 0$. $a=bq_1+r_1$.\n\n\\subsubsection*{claim}\n\\begin{align*}\n  (a,b)&=(b,r_1)\\\\\n  d_1=(a,b),&\\quad d_2=(b,r_1)\\\\\n  d_1\\vert a&\\rightarrow d_1\\vert(bq_1+r_1)\\\\\n  d_1\\vert b&\\rightarrow d_1\\vert r_1\\\\\n  &\\rightarrow d_1\\vert d_2 \\text{ because } d_2=(b,r_1)\n\\end{align*}\nsimilarly one shows that $d_2\\vert d_1$, hence $d_1=d_2$\n\nnow we see\n\\begin{align*}\n  a&=bq_1+r_1\\\\\n  b&=r_1q_2+r_2\\\\\n  r_1&=r_2q_3+r_3\\\\\n  &\\vdots\\\\\n  r_n&=0 \\text{ eventually, because the remainders are strictly decreasing}\n\\end{align*}\nso $(a,b)=(r_{n-1},0)=r_{n-1}$\n\\subsection*{example}\nfind $(33,9)$\n\\begin{align*}\n  33&=9*3+6\\\\\n  9&=6*1+3\\\\\n  6&=3*2+0\\\\\n  (33,9)&=3\\\\\n  3&=9-1*6\\\\\n  &=9-1*(33-3*9)\\\\\n  &=9-1*33+3*9\\\\\n  &=4*9+(-1)*33\n\\end{align*}\nwe can also run the euclidean algorithm backwards, as above, to write the gcd as a linear combination\n\\section*{1.2 prime numbers}\n\\subsection*{definition}\n$p>1$ is prime if the only positive divisors of $p$ are $1$ and $p$\n\n$p>1$ is prime if the only divisors of $p$ are $\\pm1$ and $\\pm p$\n\n\\subsection*{definition}\nwe say that $a$ and $b$ are relatively prime if $\\text{gcd}(a,b)=1$\n\\subsection*{proposition}\nlet $p>1,p\\in\\mathbb{Z}$ then $p$ is prime iff the following property holds:\n\n$a,b\\in\\mathbb{Z}$ and $p\\vert ab$ then $p\\vert a$ or $p\\vert b$\n\nthis property requires $p$ prime: $4\\vert 6\\cdot6$ but $4\\nmid 6$\n\\subsubsection*{proof}\nassume $p$ is prime and $p\\vert ab$; then $(p,a)=1$ or $(p,a)=p$, because the only positive divisors of $p$ are $1$ and $p$.\n\ncase 1, $(p,a)=p$. then $p|a$ and we are done.\n\ncase 2, $(p,a)=1$. then there exists $m,n\\in\\mathbb{Z}$ such that $mp+na=1$.\n\\begin{align*}\n  mp+na&=1\\\\\n  bmp+bna&=b\\\\\n  p\\vert ab&\\rightarrow p\\vert bna\\\\\n  p\\vert p&\\rightarrow p\\vert bmp\n\\end{align*}\nsince $p|bmp$ and $p|bna$, therefore $p\\vert(bmp+bna)=b$\n\nconversely\n\nassume $\\alpha|p$ with $\\alpha>0$. Need to prove that $\\alpha=1$ or $\\alpha=p$.\n\n$\\alpha|p\\rightarrow p=\\alpha\\cdot\\beta$ with $\\beta\\in\\mathbb{Z}$\n\nby the assumed property, $p\\vert\\alpha$ or $p\\vert\\beta$. if $p|\\alpha$, since $\\alpha|p$ we have $\\alpha=p$. 
if $p|\\beta$, since $\\beta|p$ we have $\\beta=p$.\n\nif $\\beta=p$ then $\\alpha=1$.\n\\end{document}\n", "meta": {"hexsha": "d348fdec85f23c05b3f237ed5158dc807ff938dc", "size": 2992, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-08-29.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-08-29.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-08-29.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0173913043, "max_line_length": 147, "alphanum_fraction": 0.6537433155, "num_tokens": 1204, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178138, "lm_q2_score": 0.8128673133042217, "lm_q1q2_score": 0.5841218834470269}}
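the back-substitution in the $(33,9)$ example generalizes to the extended euclidean algorithm. the following small program is a sketch added for illustration (it is not part of the original notes); it returns $\\gcd(a,b)$ together with the coefficients $m,n$ of the linear combination:
\\begin{verbatim}
#include <iostream>
#include <tuple>

// extended euclidean algorithm: returns (g, m, n)
// with g = gcd(a,b) = m*a + n*b, assuming a >= b >= 0
std::tuple<long, long, long> ext_gcd(long a, long b)
{
    if (b == 0) return std::make_tuple(a, 1L, 0L);
    long g, m, n;
    std::tie(g, m, n) = ext_gcd(b, a % b);
    // back-substitute, exactly as in the (33,9) example above
    return std::make_tuple(g, n, m - (a / b) * n);
}

int main()
{
    long g, m, n;
    std::tie(g, m, n) = ext_gcd(33, 9);
    std::cout << g << " = " << m << "*33 + " << n << "*9\n";
    // prints: 3 = -1*33 + 4*9
}
\\end{verbatim}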
{"text": "\\documentclass{article}\n\\usepackage[margin=2cm]{geometry}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\title{Flexible Snow Model (FSM2) scientific documentation}\n\\author{Richard Essery}\n\\date{2 January 2019}\n\\begin{document}\n\\maketitle\n\nThe Flexible Snow Model allows alternative process parametrizations to be combined in a complete model of the mass and energy balances of snow on the ground and in forest canopies. Existing models from which parametrizations are taken for inclusion in FSM2 include\n\\href{https://www.tandfonline.com/doi/abs/10.3137/ao.440301}{CLASS},\n\\href{http://www.cesm.ucar.edu/models/cesm1.2/clm/CLM45_Tech_Note.pdf}{CLM},\n\\href{https://www.geosci-model-dev.net/5/773/2012/}{Crocus}, \n\\href{https://www.umr-cnrm.fr/IMG/pdf/snowdoc_v2.pdf}{ISBA} and \n\\href{https://link.springer.com/article/10.1007/s003820050276}{MOSES}.\nParametrizations are selected by setting option numbers in a text file before the model is compiled. Parameter values and input data are read from text files when the model is run. Physical constants, meteorological driving variables, site characteristics, model state variables and parameters are listed in tables 1 to 5; refer to these tables for any variables that are not explicitly defined in the text.\n\n\\section{Snow-free and snow-covered ground albedos}\n\nBare ground with albedo $\\alpha_0$ and snow cover fraction $f_s$ with albedo $\\alpha_s$ have average albedo\n\\begin{equation}\n\\alpha_g = (1 - f_s)\\alpha_0 +f_s\\alpha_s.\n\\end{equation} \nThe snow cover is a function of snow depth $h$ parametrized as\n\\begin{equation}\nf_s(h) = \\frac{h}{h + h_f}\n\\end{equation}\nif option {\\tt SNFRAC=0} is set in the compilation file, and\n\\begin{equation}\nf_s(h) = \\tanh\\left(\\frac{h}{h_f}\\right)\n\\end{equation}\nif {\\tt SNFRAC=1}.\n\n\\subsection{Diagnosed snow albedo (option {\\tt ALBEDO=0})}\n\nFollowing a common approach in earlier generations of climate models, snow albedo is diagnosed as a function of surface temperature \n\\begin{equation}\n\\alpha_s(T_*) = \\alpha_{\\rm min} + (\\alpha_{\\rm max} - \\alpha_{\\rm min})\n                \\min\\left(\\frac{T_* - T_m}{T_\\alpha}, 1\\right).        \n\\end{equation}\n\n\\subsection{Prognostic snow albedo (option {\\tt ALBEDO=1})}\n\nBased on CLASS and ISBA, decreasing albedo as snow ages with time $t$ and increasing albedo as fresh snow falls at rate $S_f$ are parametrized by\n\\begin{equation}\n\\frac{d\\alpha_s}{dt} = \\frac{1}{\\tau_\\alpha}(\\alpha_{\\rm min} - \\alpha_s) + \\frac{S_f}{S_\\alpha}(\\alpha_{\\rm max} - \\alpha_s),\n\\label{eq:albedo} \n\\end{equation} \nwhere the timescale $\\tau_\\alpha$ has different values $\\tau_{\\rm cold}$ and $\\tau_{\\rm melt}$ for cold and melting snow. 
Equation (\\ref{eq:albedo}) is implemented by integrating over a timestep of length $\\delta t$ to give the change in snow albedo as\n\\begin{equation}\n\\delta\\alpha_s = (\\alpha_{\\rm lim} - \\alpha_s)(1-e^{-\\gamma\\delta t}),  \n\\end{equation}\nwhere\n\\begin{equation}\n\\gamma = \\frac{1}{\\tau_\\alpha}+\\frac{S_f}{S_\\alpha}\n\\end{equation}\nand\n\\begin{equation}\n\\alpha_{\\rm lim} = \\frac{1}{\\gamma}\\left(\\frac{1}{\\tau_\\alpha}\\alpha_{\\rm min} +\\frac{S_f}{S_\\alpha}\\alpha_{\\rm max}\\right).\n\\end{equation}\n\n\\section{Snow density}\n\n\\subsection{Compaction}\nSnow may be assumed to have constant density $\\rho_0$ (option {\\tt DENSTY=0}) or to compact at rate\n\\begin{equation}\n\\frac{d\\rho_s}{dt} = f_\\rho(\\rho_s,T_s).\n\\label{eq:drhodt}\n\\end{equation}\nSnow density $\\rho_s$ in layer $k$ with ice mass $I$ and liquid water mass $W$ at the beginning of timestep $n$ is diagnosed as\n\\begin{equation}\n\\rho_{s,k}^{(n)} = \\frac{I_k^{(n)} + W_k^{(n)}}{D_{s,k}}.\n\\label{eq:rhok}\n\\end{equation}\nEquation (\\ref{eq:drhodt}) is then implemented as\n\\begin{equation}\n\\rho_{s,k}^{(n+1)} = \\rho_{s,k}^{(n)} + f_\\rho\\left(\\rho_{s,k}^{(n)},T_{s,k}^{(n)}\\right)\\delta t.\n\\end{equation}\nFinally, the thickness of the compacted layer at the end of the timestep is calculated by inverting Equation (\\ref{eq:rhok}).\n\n\\subsubsection{Empirical maximum densities (option {\\tt DENSTY=1})}\n\nBased on CLASS, the compaction rate function is\n\\begin{equation}\nf_\\rho = \\frac{1}{\\tau_\\rho}(\\rho_{\\max} - \\rho_s)\n\\end{equation}\nwith the same time constant $\\tau_\\rho$ but different asymptotic values $\\rho_{\\rm cold}$ and $\\rho_{\\rm melt}$ for $\\rho_{\\max}$ in cold and melting snow.\n\n\\subsubsection{Viscous compaction by overburden and thermal metamorphism (option {\\tt DENSTY=2})}\n\nFollowing ISBA, the compaction rate function is\n\\begin{equation}\nf_\\rho = \\rho_s\\left\\{\\frac{gm}{\\eta} + c_1 \\exp\\left[\\frac{(T_s - T_m)}{23.8} \n                - \\max\\left(\\frac{\\rho_s - 150}{21.7}, 0\\right)\\right]\\right\\}\n\\end{equation}\nwhere\n\\begin{equation}\n\\eta = \\eta_0\\exp\\left[-\\frac{(T_s - T_m)}{12.4} + \\frac{\\rho_s}{55.6}\\right]\n\\end{equation}\nand the snow mass overlying the middle of a layer is\n\\begin{equation}\nm_k = \\sum_{j=1}^{k-1}\\left[I_j^{(n)} + W_j^{(n)}\\right] + 0.5\\left[I_k^{(n)} + W_k^{(n)}\\right].\n\\end{equation}\n\n\\subsection{Fresh snow density}\n\nFresh snow deposited with density $\\rho_f$ over a timestep increases the snow depth before compaction by $\\rho_f^{-1}S_f\\delta t$. Unless snow density is fixed, the density of fresh snow is taken from\n\\begin{equation}\n\\rho_f = \\max[\\rho_0 + b_\\rho (T_a - T_m) + c_\\rho U_a^{1/2}, 50].\n\\end{equation}\nParameter values can be selected to match CLM ($\\rho_0=100$ kg m$^{-3}$) or Crocus $(\\rho_0=109$ kg m$^{-3}$, $b_\\rho=6$ kg m$^{-3}$ K$^{-1}$ and $c_\\rho=26$ kg m$^{-7/2}$ s$^{1/2}$). Snow unloading from a forest canopy is added to snow on the ground with the bulk density of the existing snow. \n\n\n\\section{Canopy radiative transfer}\n\nForest structure is defined by canopy height $h$, vegetation area index $\\Lambda$ (including leaves and stems) and vegetation fraction $f_v$, which is either parametrized as $1-\\exp(-k_{\\rm veg}\\Lambda)$ or specified as an input. 
The fraction of radiation incident from above at elevation angle $\\theta$ that is transmitted through the canopy without interception is parametrized as \n\\begin{equation}\n\\tau_{\\rm dir} = \\exp[-(G_1 + G_2\\sin\\theta)\\Lambda/\\sin\\theta].\n\\end{equation}\nThe canopy transmissivity for diffuse radiation is parametrized as\n\\begin{equation}\n\\tau_{\\rm dif} = \\exp(-k_{\\rm dif}\\Lambda)\n\\label{eq:fsky}\n\\end{equation}\nby default but can also be specified as a sky view fraction $f_{\\rm sky}$ for sites which are not directly under trees ($\\Lambda=0$) but which are shaded by surrounding vegetation or topography ($f_{\\rm sky}<1$). In principle, the parameters $G_1$, $G_2$ and $k_{\\rm dif}$ are not independent.\n\n\\subsection{Shortwave radiation}\n\nA forest canopy with intercepted snow mass $S_v$ and interception capacity $S_{\\rm cap}$ is assumed to have snow cover fraction \n\\begin{equation}\nf_{cs} = \\frac{S_v}{S_{\\rm cap}}\n\\end{equation}\nand albedo\n\\begin{equation}\n\\alpha_c = f_v[(1 - f_{cs})\\alpha_{c0} + f_{cs}\\alpha_{cs}].\n\\end{equation}\nAs illustrated in Figure \\ref{fig:SWcan}, the canopy is assumed to reflect fraction $\\alpha_c$ of incoming shortwave radiation, but radiation reflected from the ground is transmitted and absorbed by the canopy without further reflections. For incoming shortwave radiation made up of a diffuse component $S_{\\rm dif}$ and a direct-beam component $S_{\\rm dir}$, the net absorption is\n\\begin{align}\nSW_v = & (1 - \\alpha_c)[1 - (1 - \\alpha_g)\\tau_{\\rm dif} \n                          - \\alpha_g\\tau_{\\rm dif}\\tau_{\\rm dif}]S_{\\rm dif} + \\\\\n       & (1 - \\alpha_c)[1 - (1 - \\alpha_g)\\tau_{\\rm dir} \n                          -  \\alpha_g\\tau_{\\rm dif}\\tau_{\\rm dir}]S_{\\rm dir}\n\\label{eq:SWv} \n\\end{align}\nby the forest canopy and\n\\begin{equation}\nSW_g = (1 - \\alpha_c)(1 - \\alpha_g)(\\tau_{\\rm dif}S_{\\rm dif} + \\tau_{\\rm dir}S_{\\rm dir}).\n\\label{eq:SWg} \n\\end{equation}\nby the ground or snow surface. The effective above-canopy albedo, including shortwave radiation reflected from the canopy and the ground, is\n\\begin{equation}\n\\alpha = \\alpha_c + (1 - \\alpha_c)\\alpha_g\\tau_{\\rm dif}\\left(\n         \\frac{\\tau_{\\rm dif}S_{\\rm dif} + \\tau_{\\rm dir}S_{\\rm dir}}\n              {S_{\\rm dif} + S_{\\rm dir}}\\right).\n\\end{equation} \n\n\\begin{figure}[t]\n\\includegraphics[width=6cm]{SWcan.png}\n\\caption{Reflection and transmission of diffuse and direct-beam shortwave radiation by a forest canopy and diffuse reflection by the ground surface.}\n\\label{fig:SWcan}\n\\end{figure}\n\nUsually, only measurements of global shortwave radiation are available. Option {\\tt SWPART=0} treats all of the global radiation as diffuse and option {\\tt SWPART=1} partitions global shortwave radiation into direct and diffuse components as described in Appendix A. If separate measurements of direct and diffuse components are available, they can be read as inputs using option {\\tt SWPART=2}. {\\it Two-stream radiative transfer to be added as an option}\n\n\\subsection{Longwave radiation}\nThe net absorption of longwave radiation is\n\\begin{equation}\nLW_v = (1 - f_{\\rm sky}) (LW_\\downarrow  + \\sigma T_*^4 - 2\\sigma T_v^4) \n\\end{equation} \nby the forest canopy and\n\\begin{equation}\nLW_g = f_{\\rm sky}LW_\\downarrow - \\sigma T_*^4  + (1 - f_{\\rm sky})\\sigma T_v^4\n\\label{eq:LWg}\n\\end{equation}\nby the ground or snow surface. 
Upwelling longwave radiation above a forest canopy is\n\\begin{equation}\nLW_\\uparrow = f_{\\rm sky}\\sigma T_*^4  + (1 - f_{\\rm sky})\\sigma T_v^4.\n\\end{equation}\n\n\n\\section{Turbulent fluxes}\n\n\\subsection{Roughness lengths and aerodynamic resistances}\n\nRoughness lengths for snow-free ground and snow on fraction $f_s$ of the ground are combined to give a ground roughness length\n\\begin{equation}\nz_{0g} =  z_{0f}^{1-f_s} z_{0s}^{f_s}.\n\\end{equation}\nFor vegetation of height $h$ covering fraction $f_v$ of the ground, the roughness length and displacement height are $z_{0v} = R_{z0}h$ and $d = f_vR_dh$. The combined surface roughness length is\n\\begin{equation}\nz_0 = z_{0g}^{1-f_v} z_{0v}^{f_v}.\n\\end{equation}\n\nThe aerodynamic resistance network for turbulent exchanges of heat between the ground, vegetation and the atmosphere is shown in Figure \\ref{fig:rescan}. The resistance between the canopy air space and the atmosphere is\n\\begin{equation}\nr_{aa} = \\frac{1}{f_H k u_*}\\ln\\left(\\frac{z_T-d}{z_0}\\right)\n\\label{eq:raa} \n\\end{equation}\nwith atmospheric stability factor $f_H$ and friction velocity\n\\begin{equation}\nu_* = kU_a\\left[\\ln\\left(\\frac{z_U-d}{z_0}\\right)\\right]^{-1}.\n\\end{equation}\nVegetated and unvegetated fractions of the ground share a common surface temperature but have separate aerodynamic resistances that combine in parallel to give resistance\n\\begin{equation}\nr_{ag} = \\frac{1}{ku_*}\\left[\\frac{(1-f_v)f_h}{\\ln(z_{0g}/z_{0h})} + \n                             f_vf_cC_{\\rm dense}\\right]^{-1}\n\\label{eq:rag} \n\\end{equation}\nbetween the ground and the canopy air space, with sub-canopy stability factor $f_c$. The roughness length for heat $z_{0h}$ is assumed to be equal to $0.1z_{0g}$. Between vegetation and the canopy air space, the aerodynamic resistance is\n\\begin{equation}\nr_{av} = \\frac{C_{\\rm veg}}{\\Lambda u_*^{1/2}}.\n\\end{equation}\n$C_{\\rm dense}$, $C_{\\rm veg}$ are constant parameters. 
In the absence of vegetation ($\\Lambda=f_v=0)$, resistances for heat transfer combine to give\n\\begin{equation}\nr_h \\equiv r_{aa} + r_{ag} = \\frac{1}{f_hk^2U_a}\\ln\\left(\\frac{z_U}{z_{0g}}\\right)\n                                                \\ln\\left(\\frac{z_T}{z_{0h}}\\right).\n\\label{eq:rh}\n\\end{equation}\n\n\\subsubsection{Neutral stability (option {\\tt EXCHNG=0})}\n\nAtmospheric stability is neglected by setting $f_H=1$.\n%and $\\psi_m=\\psi_H=0$.\n\n\\subsubsection{Richardson number stability functions (option {\\tt EXCHNG=1})}\n\nAtmospheric stability is characterized by a bulk Richardson number\n\\begin{equation}\n{\\rm Ri_B} = \\frac{g(z_U-d)^2[T_a - f_vT_v - (1-f_v)T_*]}{(z_T-d) T_a U_a^2}.\n\\end{equation}\nThe aerodynamic resistances in Equation (\\ref{eq:raa}) and (\\ref{eq:rag}) are adjusted by factor\n\\begin{equation}\nf_H  = \\begin{cases} \n       [1 + 3b_h{\\rm Ri_B}(1 + b_h{\\rm Ri_B})^{1/2}]^{-1}  & {\\rm Ri_B} \\geq 0 \\\\\n        1 - 3b_h{\\rm Ri_B}[1 + c(-{\\rm Ri_B})^{1/2}]^{-1}  & {\\rm Ri_B} < 0\n\\end{cases}\n\\end{equation}\nwith \n\\begin{equation}\nc = 3b_h^2k^2\\left(\\frac{z_U}{z_0}\\right)^{1/2}\\left[\\ln\\left(\\frac{z_U}{z_0}\\right)\\right]^{-2}.\n\\end{equation}\nThe sub-canopy stability factor is\n\\begin{equation}\nf_c = \\frac{1}{1 + {\\rm Ri}_c/2}\n\\end{equation}\nwith Richardson number\n\\begin{equation}\n{\\rm Ri}_c = \\frac{gh(T_c - T_*)}{T_c u_*^2}\n\\end{equation}\nlimited to values in the range 0 - 10.\n\n\\subsubsection{Obukhov length stability functions ({\\it option to be added})}\n\n\\begin{figure}[t]\n\\includegraphics[width=8cm]{rescan.png}\n\\caption{Aerodynamic resistance network for heat exchanges between the ground, vegetation and the atmosphere.}\n\\label{fig:rescan}\n\\end{figure}\n\n\\subsection{Snow on short vegetation and bare ground}\n\nSensible heat and moisture fluxes from the ground or snow surface without exposed vegetation to the atmosphere are\n\\begin{equation}\nH_g = \\frac{\\rho c_p}{r_h}(T_* - T_a)\n\\label{eq:Hg}\n\\end{equation}\nand\n\\begin{equation}\nE_g = \\frac{\\rho\\psi_g}{r_h}[Q_{\\rm sat}(T_*,P_s) - Q_a]\n\\label{eq:Eg}\n\\end{equation}\nwhere $Q_{\\rm sat}$ is the specific humidity at saturation over water or ice (Appendix B) and $\\rho=P_s/(R_{\\rm air}T_a)$ is the air density. The moisture availability factor $\\psi_g$ is set to 1 if there is snow on the ground or if $Q_{\\rm sat}(T_*,P_s)<Q_a$ (moisture flux onto the surface) and \n\\begin{equation}\n\\psi_g = \\frac{r_h}{r_h+r_{sg}}\n\\label{eq:psig}\n\\end{equation}\notherwise, where $r_{sg}$ is a resistance for evaporation from soil moisture. Without exposed vegetation, moisture and sensible heat fluxes to the atmosphere are $E=E_g$ and $H=H_g$. The latent heat flux is $LE$, with $L$ taken to be the latent heat of vapourisation if $T_*>T_m$ and the latent heat of sublimation otherwise.\n\n\\subsection{Forest canopies and underlying ground}\n\nSensible heat fluxes are \n\\begin{equation}\nH = \\frac{\\rho c_p}{r_{aa}}(T_c - T_a)\n\\label{eq:H}\n\\end{equation}\nupwards from the canopy air space,\n\\begin{equation}\nH_g = \\frac{\\rho c_p}{r_{ag}}(T_* - T_c)\n\\end{equation}\nfrom the ground to the canopy air space, and\n\\begin{equation}\nH_v = \\frac{\\rho c_p}{r_{av}}(T_v - T_c)\n\\label{eq:Hv}\n\\end{equation}\nfrom the vegetation to the canopy air space. 
Similarly, moisture fluxes are  \n\\begin{equation}\nE = \\frac{\\rho}{r_{aa}}(Q_c - Q_a)\n\\end{equation}\nupwards from the canopy air space,\n\\begin{equation}\nE_g = \\frac{\\rho \\psi_g}{r_{ag}}[Q_{\\rm sat}(T_*,P_s) - Q_c]\n\\end{equation}\nfrom the ground to the canopy air space, and\n\\begin{equation}\nE_v = \\frac{\\rho \\psi_v}{r_{av}}[Q_{\\rm sat}(T_v,P_s) - Q_c]\n\\label{eq:Ev}\n\\end{equation}\nfrom the vegetation to the canopy air space, with moisture availability factor $\\psi_v=1$ if there is snow on the canopy or if $Q_{\\rm sat}(T_v)<Q_c$ (moisture flux onto the canopy) and\n\\begin{equation}\n\\psi_v = \\frac{r_{av}}{r_{av}+r_{sg}}\n\\end{equation}\notherwise.\n\n\\section{Energy balance}\n\n\\subsection{Snow on short vegetation and bare ground}\n\nThe surface energy balance\n\\begin{equation}\nf(T_*) = LW_g + SW_g - G - H_g - LE_g - L_fM = 0\n\\label{eq:ebal}\n\\end{equation}\nwith flux parametrizations substituted from Equations (\\ref{eq:LWg}), (\\ref{eq:Hg}) and (\\ref{eq:Eg}) is a nonlinear equation for the unknown surface temperature and snowmelt rate $M$. From an initial guess of temperature $T_{*0}$ and no melt, a linear estimate of $T_*$ is given by\n\\begin{equation}\nT_* = T_{*0} - f(T_{*0})\\left(\\frac{df}{dT_*}\\right)^{-1}.\n\\label{eq:Newton}\n\\end{equation}\nA single application of Equation (\\ref{eq:Newton}) gives an approximate solution, and repeated applications with $T_*$ calculated in each iteration being used as $T_{*0}$ in the next is the Newton-Raphson method for solving Equation (\\ref{eq:ebal}). Neglecting the complicated temperature dependence of $r_{aa}$, this gives\n\\begin{equation}\nT_* = T_{*0} + \\frac{LW_g + SW_g - G - H_g - LE_g - L_fM}\n                    {4\\sigma T_{*0}^3 + 2\\lambda_1/\\Delta h_1\n                                      + \\rho(c_p + LD_g\\psi_g)/r_h}.\n\\label{eq:Tsurf}\n\\end{equation}\nwhere the fluxes in the numerator are calculated using $T_* = T_{*0}$, and\n\\begin{equation}\nD_g = \\left.\\frac{dQ_{\\rm sat}}{dT}\\right\\rvert_{T=T_{*0}} \n    = \\frac{LQ_{\\rm sat}(T_{*0})}{R_{\\rm wat}T_{*0}^2}.\n\\end{equation}\nEquation (\\ref{eq:Tsurf}) is first evaluated with $M=0$. If this gives $T_* > T_m$ and there is snow with ice mass $I$ on the ground, Equation (\\ref{eq:Tsurf}) is re-evaluated assuming that all of the snow melts and $M = I / \\delta t$. If this gives $T_* < T_m$, the snow does not all melt and $T_* = T_m$ is known; Equation (\\ref{eq:ebal}) is solved instead for the unknown melt rate by substitution of $T_* = T_m$ in the equations for the other fluxes.\n\n\\subsection{Forest canopies and underlying ground}\n\n\\subsubsection{Zero-layer canopy model (option {\\tt CANMOD=0})}\n\nThe zero-layer canopy model does not attempt to calculate the canopy energy balance, instead assuming that the canopy temperature is equal to the air temperature. 
Substituting $T_v = T_a$ in Equations (\\ref{eq:H}) to (\\ref{eq:Ev}) and rearranging gives\n\\begin{equation}\nH_g = \\frac{\\rho c_p}{r_h}(T_* - T_a)\n\\end{equation}\nwith\n\\begin{equation}\nr_h = r_{ag} + \\left(\\frac{1}{r_{aa}}+\\frac{1}{r_{av}}\\right)^{-1}\n\\label{eq:modrh}\n\\end{equation}\nand \n\\begin{equation}\nE_g = \\frac{\\rho\\psi_g}{r_h}[Q_{\\rm sat}(T_*,P_s) - Q_a]\n\\end{equation}\nwith\n\\begin{equation}\n\\frac{r_h}{\\psi_g} = r_{ag} + r_{sg} + \\left(\\frac{1}{r_{aa}}+\\frac{\\psi_v}{r_{av}}\\right)^{-1}.\n\\label{eq:modpsi}\n\\end{equation}\nSurface temperature and melt rate are found by solving Equations (\\ref{eq:Tsurf}) and (\\ref{eq:ebal}) with the modified aerodynamic resistance and moisture factor from Equations (\\ref{eq:modrh}) and (\\ref{eq:modpsi}) instead of Equations (\\ref{eq:rh}) and (\\ref{eq:psig}). The rate of moisture transfer from or to the canopy is \n\\begin{equation}\nE_v = -\\left(\\frac{\\psi_v r_{aa}}{\\psi_v r_{aa} + r_{av}}\\right)E_g.\n\\end{equation}\n\n\n\\subsubsection{One-layer canopy model (option {\\tt CANMOD=1})}\n\nEnergy and mass conservation equations\n\\begin{equation}\nf_1 = (H - H_g - H_v)/(\\rho c_p) = 0,\n\\end{equation}\n\\begin{equation}\nf_2 = (E - E_g - E_v)/\\rho = 0,\n\\end{equation}\n\\begin{equation}\nf_3 = LW_g + SW_g - G - H_g - LE_g = 0,\n\\label{eq:f3}\n\\end{equation}\nand\n\\begin{equation}\nf_4 = LW_v + SW_v - H_v - LE_v - C_{\\rm can}\\frac{dT_v}{dt} = 0,\n\\end{equation}\nform a set of four nonlinear equations with four unknowns: $Q_c$, $T_c$, $T_*$ and $T_v$.\nWriting vectors ${\\bf f} = (f_1, f_2, f_3, f_4)^T$ and ${\\bf x} = (Q_c, T_c, T_*, T_v)^T$, a solution is found by iterating \n\\begin{equation}\n{\\bf x} = {\\bf x}_0 - {\\rm J}^{-1}{\\bf f}({\\bf x}_0)\n\\label{eq:multiNR}\n\\end{equation}\nwhere ${\\rm J}$ is the Jacobian matrix of {\\bf f} with elements\n\\begin{equation}\n{\\rm J}_{ij} = \\frac{\\partial f_i}{\\partial x_j}\n\\end{equation}\nor\n\\begin{equation}\n{\\rm J} =\n\\begin{bmatrix}\n0 & -\\left(r_{aa}^{-1} + r_{ag}^{-1} + r_{av}^{-1} \\right) & r_{ag}^{-1} & r_{av}^{-1} \\\\\n-\\left(r_{aa}^{-1} + \\psi_g r_{ag}^{-1} + \\psi_v r_{av}^{-1}\\right) & 0 \n& \\psi_g D_g r_{ag}^{-1} & \\psi_v D_v r_{av}^{-1} \\\\\n-L\\rho\\psi_g r_{ag}^{-1} & -\\rho c_p r_{ag}^{-1} & J_{33} & -4f_v\\sigma T_v^3\\\\\n-L\\rho\\psi_v r_{av}^{-1} & -\\rho c_p r_{av}^{-1} & -4f_v\\sigma T_*^3\n& J_{44} \\\\\n\\end{bmatrix}\n\\end{equation}\nwith\n\\begin{equation}\nJ_{33} = (c_p+L\\psi_gD_g)\\frac{\\rho}{r_{ag}}+4\\sigma T_*^3 + 2\\frac{\\lambda_1}{\\Delta h_1}\n\\end{equation}\nand\n\\begin{equation}\nJ_{44} = \\frac{C_{\\rm can}}{\\delta t}+(c_p+L\\psi_v D_v)\\frac{\\rho}{r_{av}}+8f_v\\sigma T_v^3.\n\\end{equation}\nEquation (\\ref{eq:multiNR}) is implemented by solving\n\\begin{equation}\n{\\rm J}({\\bf x - x}_0) = {\\bf f}({\\bf x}_0)\n\\label{eq:increments}\n\\end{equation}\nfor ${\\bf x}$ by LU decomposition. If this gives $T_* > T_m$ and there is snow with ice mass $I$ on the ground, Equation (\\ref{eq:f3}) is replaced by\n\\begin{equation}\nf_3 = LW_g + SW_g - G - H_g - LE_g - L_fM.\n\\end{equation}\nIt is first assumed that all of the snow melts and Equation (\\ref{eq:increments}) is solved again with $M = I/\\delta t$. If this gives $T_* < T_m$, the snow does not all melt and the surface temperature is known but the melt rate is unknown. 
\n\n\\subsubsection{Two-layer canopy model ({\\it option to be added})}\n\n\\section{Mass balance}\n\nA forest canopy can intercept a fraction of falling snow up to a maximum capacity which is either parametrized as $S_{\\rm cap} = c_{\\rm vai}\\Lambda$ or read from a file. If the canopy holds snow mass $S_v$ at the beginning of a timestep, the amount of snow intercepted in the timestep is\n\\begin{equation}\n\\delta S_v = (S_{\\rm cap} - S_v)\\left[1 - \\exp\\left(-\\frac{f_vS_f\\delta t}{S_{\\rm cap}}\n                                \\right)\\right]\n\\end{equation}\nand the rate of snowfall reaching the ground is reduced to\n\\begin{equation}\nT_f = S_f - \\frac{\\delta S_v}{\\delta t}.\n\\end{equation}\nSnow sublimates from the canopy at rate $E_v$ and unloads at rate $U_c = \\tau_{\\rm can}^{-1}S_v$ with different values of the time constant $\\tau_{\\rm can}$ for cold and melting snow; if $\\tau_{\\rm can} \\leq \\delta t$, all of the snow unloads from the canopy within a single timestep. Interception and storage of liquid water in the canopy are neglected. The combined mass balance equation for canopy snow is\n\\begin{equation}\n\\frac{dS_v}{dt} = S_f - T_f - U_c - E_v.\n\\end{equation}\n\nIf there is no forest canopy, the throughfall and unloading rates are simply $T_f=S_f$ and $U_c=0$. The mass balance equations for ice and liquid water in snow on the ground are\n\\begin{equation}\n\\frac{dI}{dt} = T_f + U_c - E_g - M + F\n\\end{equation}\nand\n\\begin{equation}\n\\frac{dW}{dt} = R_f + M - F - R_o.\n\\end{equation}\nIf storage of liquid water in snow is neglected (option {\\tt HYDROL=0}), the internal refreezing rate in the snow is $F=0$ and the runoff rate at the base of the snow is $R_o=R_f+M$. If bucket storage is selected (option {\\tt HYDROL=1}), snow layer $k$ with porosity\n\\begin{equation}\n\\phi_k = 1 - \\frac{I_k}{\\rho_{\\rm ice}D_{s,k}}\n\\end{equation}\ncan hold a maximum mass $\\rho_{\\rm wat}\\phi_k W_{\\rm irr} D_{s,k}$ of liquid water.
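\n\nFor illustration, the interception and unloading calculations can be sketched in Python (assuming $S_{\\rm cap} = c_{\\rm vai}\\Lambda$; the variable names are illustrative, not those of the model code).\n\\begin{verbatim}\nimport math\n\ndef canopy_snow_step(Sv, Sf, Ev, fv, Scap, tau_can, dt):\n    # snow intercepted during the timestep\n    dSv = (Scap - Sv) * (1 - math.exp(-fv * Sf * dt / Scap))\n    Tf = Sf - dSv / dt      # reduced snowfall reaching the ground\n    Uc = Sv / tau_can       # unloading rate\n    # dSv/dt = Sf - Tf - Uc - Ev for snow held on the canopy\n    Sv_new = max(Sv + (Sf - Tf - Uc - Ev) * dt, 0.0)\n    return Sv_new, Tf, Uc\n\\end{verbatim}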
\n\n\\section*{Tables}\n\\parindent0pt\n%\\begin{table}\n\\begin{tabular}{|l|l|l|}\n\\hline\nDocumentation & Code & Value \\\\\n\\hline\nHeat capacity of air $c_p$            & {\\tt cp}        & 1005 J K$^{-1}$ kg$^{-1}$  \\\\\nHeat capacity of ice $c_{\\rm ice}$    & {\\tt hcap\\_ice} & 2100 J K$^{-1}$ kg$^{-1}$  \\\\\nHeat capacity of water $c_{\\rm wat}$  & {\\tt hcap\\_wat} & 4180 J K$^{-1}$ kg$^{-1}$  \\\\\nAcceleration due to gravity $g$       & {\\tt g}         & 9.81 m s$^{-2}$            \\\\\nSolar constant $I_0$                  & {\\tt I0}        & 1367 W m$^{-2}$            \\\\\nVon K\\'arm\\'an constant $k$           & {\\tt vkman}     & 0.4                        \\\\\nLatent heat of fusion of water $L_f$  & {\\tt Lf}        & $0.334 \\times 10^6$ J kg$^{-1}$ \\\\\nLatent heat of sublimation of ice $L_s$ & {\\tt Ls}      & $2.835 \\times 10^6$ J kg$^{-1}$ \\\\\nLatent heat of vapourisation of water $L_v$ & {\\tt Lv}  & $2.501 \\times 10^6$ J kg$^{-1}$ \\\\\nGas constant for air $R_{\\rm air}$    & {\\tt Rair}      & 287 J K$^{-1}$ kg$^{-1}$   \\\\\nGas constant for water vapour $R_{\\rm wat}$ & {\\tt Rwat} & 462 J K$^{-1}$ kg$^{-1}$ \\\\\nMelting point of ice $T_m$            & {\\tt Tm}        & 273.15 K                  \\\\\nThermal conductivity of air $\\lambda_{\\rm air}$   & {\\tt hcon\\_air}  & 0.025 W m$^{-1}$ K$^{-1}$ \\\\\nThermal conductivity of clay $\\lambda_{\\rm clay}$ & {\\tt hcon\\_clay} & 1.16 W m$^{-1}$ K$^{-1}$ \\\\\nThermal conductivity of ice $\\lambda_{\\rm ice}$   & {\\tt hcon\\_ice}  & 2.24 W m$^{-1}$ K$^{-1}$ \\\\\nThermal conductivity of sand $\\lambda_{\\rm sand}$ & {\\tt hcon\\_sand} & 1.57 W m$^{-1}$ K$^{-1}$ \\\\\nThermal conductivity of water $\\lambda_{\\rm wat}$ & {\\tt hcon\\_wat}  & 0.56 W m$^{-1}$ K$^{-1}$ \\\\\nDensity of ice $\\rho_{\\rm ice}$ & {\\tt rho\\_ice} & 917 kg m$^{-3}$ \\\\\nDensity of water $\\rho_{\\rm wat}$ & {\\tt rho\\_wat} & 1000 kg m$^{-3}$ \\\\\nStefan-Boltzmann constant $\\sigma$ & {\\tt sb} & $5.67 \\times 10^{-8}$ W m$^{-2}$ K$^{-4}$ \\\\\n\\hline \n\\end{tabular}\n\nTable 1. Physical constants and quantities assumed to be constant in the code and documentation.\n%\\label{table:constants}\n%\\end{table}\n\n\\vskip20pt\n\\begin{tabular}{|l|l|l|}\n\\hline\nDocumentation                                & Code       & Units                  \\\\\n\\hline\nIncoming longwave radiation $LW_\\downarrow$  & {\\tt LW}   & W m$^{-2}$             \\\\\nSurface air pressure $P_s$                   & {\\tt Ps}   & Pa                     \\\\\nSpecific humidity $Q_a$                      & {\\tt Qa}   & kg kg$^{-1}$           \\\\\nRainfall rate $R_f$                          & {\\tt Rf}   & kg m$^{-2}$ s$^{-1}$   \\\\\nSnowfall rate $S_f$                          & {\\tt Sf}   & kg m$^{-2}$ s$^{-1}$   \\\\\nDiffuse shortwave radiation $S_{\\rm dif}$    & {\\tt Sdif} & W m$^{-2}$             \\\\\nDirect-beam shortwave radiation $S_{\\rm dir}$& {\\tt Sdir} & W m$^{-2}$             \\\\\nIncoming shortwave radiation $SW_\\downarrow$ & {\\tt SW}   & W m$^{-2}$             \\\\\nAir temperature $T_a$                        & {\\tt Ta}   & K                      \\\\\nWind speed $U_a$                             & {\\tt Ua}   & m s$^{-1}$             \\\\\nWind direction (not yet used)                & {\\tt Udir} & degrees clockwise of N \\\\\n\\hline \n\\end{tabular}\n\nTable 2. 
Meteorological driving variables in the code and documentation.\n\n\\vskip20pt\n\\begin{tabular}{|l|l|l|}\n\\hline\nDocumentation & Code & Units \\\\\n\\hline\n\\multicolumn{3}{|c|}{Forest canopy} \\\\\n\\hline\nCanopy air space specific humidity $Q_c$ & {\\tt Qcan} & kg kg$^{-1}$ \\\\\nSnow mass on canopy $S_v$                & {\\tt Sveg} & kg m$^{-2}$  \\\\\nCanopy air space temperature $T_c$       & {\\tt Tcan} & K            \\\\\nVegetation temperature $T_v$             & {\\tt Tveg} & K            \\\\\n\\hline \n\\multicolumn{3}{|c|}{Surface} \\\\\n\\hline \nSurface skin temperature $T_*$           & {\\tt Tsrf} & K            \\\\\n\\hline \n\\multicolumn{3}{|c|}{Snow on the ground (up to {\\tt Nsmax} layers)}  \\\\\n\\hline \nNumber of snow layers $N_{\\rm snow}$     & {\\tt Nsnow} & -           \\\\\nAlbedo of snow $\\alpha_s$                & {\\tt albs}  & -           \\\\\nThickness of snow layers $D_s$           & {\\tt Ds}    & m           \\\\\nRadii of grains in snow layers $r$       & {\\tt rgrn}  & m           \\\\\nIce content of snow layers $I$           & {\\tt Sice}  & kg m$^{-2}$ \\\\\nLiquid water content of snow layers $W$  & {\\tt Sliq}  & kg m$^{-2}$ \\\\\nTemperature of snow layers $T_s$         & {\\tt Tsnow} & K           \\\\\n\\hline \n\\multicolumn{3}{|c|} {Soil ({\\tt Nsoil} layers)} \\\\\n\\hline \nTemperature of soil layers $T_g$ & {\\tt Tsoil} & K \\\\\nVolumetric moisture content of soil layers $\\theta_g$ & {\\tt theta} & - \\\\\n\\hline \n\\end{tabular}\n\nTable 3. Model state variables in the code and documentation.\n\n\\vskip20pt\n\\begin{tabular}{|l|l|l|}\n\\hline\nDocumentation & Code & Default \\\\\n\\hline\nSnow-free albedo $\\alpha_0$         & {\\tt alb0} & 0.2                                 \\\\\nCanopy heat capacity $C_{\\rm can}$  & {\\tt canh} & $2500\\Lambda$ (J K$^{-1}$ m$^{-2}$) \\\\\nSoil clay fraction $f_{\\rm clay}$   & {\\tt fcly} & 0.3                                 \\\\\nSoil sand fraction $f_{\\rm sand}$   & {\\tt fsnd} & 0.6                                 \\\\\nSky view fraction $f_{\\rm sky}$     & {\\tt fsky} & $\\exp(-k_{\\rm dif}\\Lambda)$         \\\\\nCanopy cover fraction $f_{\\rm veg}$ & {\\tt fveg} & $1 - \\exp(-\\Lambda)$                \\\\\nCanopy height $h_{\\rm can}$         & {\\tt hcan} & 0 m                                 \\\\\nLatitude $\\phi$                     & {\\tt lat}  & 0$^\\circ$                           \\\\\nVegetation area index $\\Lambda$     & {\\tt VAI}  & 0                                   \\\\\nSnow-free ground roughness length $z_{0g}$ & {\\tt z0sf} & 0.1 m                        \\\\\nTimestep $\\delta t$                 & {\\tt dt}   & 3600 s                              \\\\\nTemperature and humidity measurement height $z_T$& {\\tt zT} & 2 m                      \\\\\nWind speed measurement height $z_U$ & {\\tt zU}   & 10 m                                \\\\\n\\hline \n\\end{tabular}\n\nTable 4. 
Site and driving data characteristics in the code and documentation.\n\n\\vskip20pt\n\\begin{tabular}{|l|l|l|}\n\\hline\nDocumentation                                      & Code       & Default                \\\\\n\\hline\nMaximum albedo for fresh snow $\\alpha_{\\rm max}$   & {\\tt asmx} & 0.8                    \\\\\nMinimum albedo for melting snow $\\alpha_{\\rm min}$ & {\\tt asmn} & 0.5                    \\\\\nSnow-free vegetation albedo $\\alpha_{c0}$          & {\\tt avg0} & 0.1                    \\\\\nSnow-covered vegetation albedo $\\alpha_{cs}$       & {\\tt avgs} & 0.4                    \\\\ \nAtmospheric stability adjustment parameter $b_h$   & {\\tt bstb} & 5                      \\\\\nSnow thermal conductivity exponent $b_\\lambda$     & {\\tt bthr} & 2                      \\\\\nDense canopy turbulent transfer coefficient $C_{\\rm dense}$ & {\\tt cden} & 0.004         \\\\\nCanopy snow capacity per unit VAI $c_{\\rm vai}$    & {\\tt cvai} & 4.4 kg m$^{-2}$        \\\\\nVegetation turbulent transfer coefficient $C_{\\rm veg}$ & {\\tt cveg} & 20                \\\\\nSnow cover fraction depth scale $h_f$ & {\\tt hfsn} & 0.1 m                               \\\\\nReference snow viscosity $\\eta_0$                  & {\\tt eta0} & $3.7 \\times 10^7$ Pa s \\\\\nSurface conductance for saturated soil $g_{\\rm sat}$ & {\\tt gsat} & 0.01 m s$^{-1}$      \\\\\nLeaf angle distribution parameter $G_1$            & {\\tt Gcn1}   & 0.5                  \\\\\nLeaf angle distribution parameter $G_2$            & {\\tt Gcn2}   & 0                    \\\\\nDiffuse radiation extinction coefficient $k_{\\rm dif}$ & {\\tt kdif} &  0.5               \\\\\nFixed snow thermal conductivity $\\lambda_0$        & {\\tt kfix} & 0.24 W m$^{-1}$ K$^{-1}$ \\\\\nCanopy cover coefficient $k_{\\rm veg}$             & {\\tt kveg}   &  1 \\\\\nDisplacement height to canopy height ratio $R_d$   & {\\tt rchd}   & 0.67 \\\\\nRoughness length to canopy height ratio $R_{z0}$   & {\\tt rchz}   & 0.1 \\\\\nFresh snow grain radius $r_0$                      & {\\tt rgr0}   & $5\\times10^{-5}$ m \\\\\nFixed snow density $\\rho_0$                        & {\\tt rho0}   & 300 kg m$^{-3}$ \\\\\nTemperature factor in fresh snow density $b_\\rho$  & {\\tt rhob}   & 0 kg m$^{-3}$ K$^{-1}$ \\\\\nWind factor in fresh snow density $c_\\rho$         & {\\tt rhoc}   & 0 kg m$^{-7/2}$ s$^{1/2}$\\\\\nFresh snow density $\\rho_f$ & {\\tt rhof}           & 100 kg m$^{-3}$ \\\\\nMaximum density for cold snow $\\rho_{\\rm cold}$    & {\\tt rcld} & 300 kg m$^{-3}$ \\\\\nMaximum density for melting snow $\\rho_{\\rm melt}$ & {\\tt rmlt} & 500 kg m$^{-3}$ \\\\\nSnowfall to refresh albedo $S_\\alpha$ & {\\tt Salb} & 10 kg m$^{-2}$ \\\\\nThermal metamorphism parameter $c_1$ & {\\tt snda}  & $2.8 \\times 10^{-6}$ s$^{-1}$ \\\\\nSnow albedo decay temperature threshold $T_\\alpha$ & {\\tt Talb} & -2$^\\circ$C \\\\\nCanopy unloading time scale $\\tau_{\\rm can}$ for cold snow & {\\tt tcnc} & 240 h \\\\\nCanopy unloading time scale $\\tau_{\\rm can}$ for melting snow & {\\tt tcnm} & 2.4 h \\\\\nCold snow albedo decay time scale $\\tau_{\\rm cold}$ & {\\tt tcld} & 1000 h \\\\\nMelting snow albedo decay time scale $\\tau_{\\rm melt}$ & {\\tt tmlt} & 100 h \\\\\nSnow compaction time scale $\\tau_\\rho$ & {\\tt trho} & 200 h \\\\\nIrreducible liquid water content of snow $W_{\\rm irr}$ & {\\tt Wirr} & 0.03 \\\\\nRatio of roughness lengths for momentum and heat $z_0/z_{0h}$ & {\\tt z0zh} & 10 \\\\\nSnow surface roughness length $z_{0s}$ & {\\tt z0sn} & 0.01 m \\\\\n\\hline\n\\end{tabular}\n\nTable 5. Model parameters in the code and documentation.
\n\n\\section*{Appendix: Solar radiation}\n\nFollowing \\href{https://www.elsevier.com/books/an-introduction-to-solar-radiation/iqbal/978-0-12-373750-2}{Iqbal (1983)}, solar declination $\\delta$ (in radians) and equation of time $E_t$ (in hours) are approximated by Fourier series\n\\begin{align}\n\\delta = 0.006918 &- 0.399912\\cos\\Gamma  + 0.070257\\sin\\Gamma  \\nonumber \\\\\n                  &- 0.006758\\cos2\\Gamma + 0.000907\\sin2\\Gamma \\nonumber \\\\\n                  &- 0.002697\\cos3\\Gamma + 0.001480\\sin3\\Gamma\n\\end{align}\nand\n\\begin{align}\nE_t = (12/\\pi)(0.000075 &+ 0.001868\\cos\\Gamma  - 0.032077\\sin\\Gamma \\nonumber \\\\\n                        &- 0.014615\\cos2\\Gamma - 0.04089\\sin2\\Gamma)\n\\end{align}\nfor day of year $d_n$ and day angle $\\Gamma = 2\\pi(d_n - 1)/365$. From a date given by integers $y$, $m$ and $d$ for year, month and day, the day of year can be found using integer division in the magic formula\n\\begin{equation}\nd_n = (7y)/4 - 7(y+(m+9)/12)/4 + (275m)/9 + d - 30.\n\\end{equation}\n\nFor hour $h$ of a day at latitude $\\phi$, the hour angle is defined by\n\\begin{equation}\n\\omega = (\\pi/12)(12 - h - E_t),\n\\end{equation}\nthe sine of the solar elevation is\n\\begin{equation}\n\\sin\\theta = \\sin\\delta\\sin\\phi + \\cos\\delta\\cos\\phi\\cos\\omega\n\\end{equation}\nand the cosine of the solar azimuth is\n\\begin{equation}\n\\cos\\psi = (\\sin\\theta\\sin\\phi - \\sin\\delta)/(\\cos\\theta\\cos\\phi).\n\\end{equation}\nAzimuth is measured anticlockwise from south, so $\\psi$ has the same sign as $\\omega$ (positive in the morning and negative in the afternoon).\n\nSeparate direct and diffuse components of shortwave radiation are estimated by the empirical method of \\href{https://www.sciencedirect.com/science/article/pii/0038092X82903024}{Erbs et al. (1982)} if they are required but have not been measured. First, an atmospheric clearness parameter is found by dividing the global radiation by the radiation at the top of the atmosphere, giving\n\\begin{equation}\nk_t = \\frac{SW_\\downarrow}{I_0\\sin\\theta},\n\\end{equation}\nwhere $I_0$ = 1367 W m$^{-2}$ is the solar constant. The diffuse fraction is then estimated as\n\\begin{equation}\n\\frac{S_{\\rm dif}}{SW_\\downarrow} = \\begin{cases}\n    1 - 0.09k_t &  k_t<0.22  \\\\\n    0.95-0.16k_t+4.39k_t^2-16.64k_t^3+12.34k_t^4 &  0.22<k_t<0.8  \\\\\n    0.165 &  k_t>0.8\n\\end{cases}\n\\end{equation}\nDirect radiation is the remainder $S_{\\rm dir} = SW_\\downarrow - S_{\\rm dif}$.
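\n\nThe solar geometry and the Erbs et al. (1982) split above can be sketched in Python for illustration (function and variable names are not from the model code):\n\\begin{verbatim}\nimport math\n\ndef solar_elevation(lat, dn, hour):\n    # day angle, declination and equation of time (Iqbal, 1983)\n    G = 2 * math.pi * (dn - 1) / 365\n    decl = (0.006918 - 0.399912*math.cos(G) + 0.070257*math.sin(G)\n            - 0.006758*math.cos(2*G) + 0.000907*math.sin(2*G)\n            - 0.002697*math.cos(3*G) + 0.001480*math.sin(3*G))\n    Et = (12/math.pi)*(0.000075 + 0.001868*math.cos(G)\n          - 0.032077*math.sin(G) - 0.014615*math.cos(2*G)\n          - 0.04089*math.sin(2*G))\n    omega = (math.pi/12)*(12 - hour - Et)   # hour angle\n    return math.asin(math.sin(decl)*math.sin(lat)\n                     + math.cos(decl)*math.cos(lat)*math.cos(omega))\n\ndef erbs_diffuse_fraction(SW, elev, I0=1367.0):\n    # returns Sdif/SW from the clearness parameter kt\n    kt = SW / (I0 * math.sin(elev))\n    if kt < 0.22:\n        return 1 - 0.09*kt\n    if kt < 0.8:\n        return (0.95 - 0.16*kt + 4.39*kt**2\n                - 16.64*kt**3 + 12.34*kt**4)\n    return 0.165\n\\end{verbatim}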
\n\nIf a digital elevation model is provided as an input, the array of elevations is assumed to follow the convention that the first row is at the north of the model domain and the first column is at the west. The slope is\n\\begin{equation}\n\\tan^{-1}\\left[\\sqrt{\\left(\\frac{dz}{dx}\\right)^2 + \\left(\\frac{dz}{dy}\\right)^2}\\right]\n\\end{equation}\nand the aspect is\n\\begin{equation}\n- \\tan^{-1}\\left(\\frac{dz}{dy}\\middle/\\frac{dz}{dx}\\right) - \\frac{\\pi}{2}.\n\\end{equation}\nAspect is calculated to follow the same convention as solar azimuth and the Fortran {\\tt atan2} function is used to obtain the correct quadrant. Grid cells are indexed by row number $i$ from north to south and column number $j$ from west to east. The surface gradient is calculated by centred differences\n\\begin{equation}\n\\left(\\frac{dz}{dx},\\frac{dz}{dy}\\right)_{ij} = \n\\left(\\frac{z_{i,j+1} - z_{i,j-1}}{2\\Delta x}, \\frac{z_{i+1,j} - z_{i-1,j}}{2\\Delta x}\\right)\n\\end{equation}\nfor grid spacing $\\Delta x$.\n\n\\end{document}\n\n\n", "meta": {"hexsha": "1c559ee3523166f13469cf368381b29e99db4f8b", "size": 35041, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/FSMsci.tex", "max_stars_repo_name": "mariannecowherd/FSM2", "max_stars_repo_head_hexsha": "9c060a6a3935beb59510e800a228c5029b8df50b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2017-11-20T13:58:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-31T15:07:19.000Z", "max_issues_repo_path": "docs/FSMsci.tex", "max_issues_repo_name": "mariannecowherd/FSM2", "max_issues_repo_head_hexsha": "9c060a6a3935beb59510e800a228c5029b8df50b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 15, "max_issues_repo_issues_event_min_datetime": "2018-01-15T14:01:44.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-26T11:16:33.000Z", "max_forks_repo_path": "docs/FSMsci.tex", "max_forks_repo_name": "mariannecowherd/FSM2", "max_forks_repo_head_hexsha": "9c060a6a3935beb59510e800a228c5029b8df50b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2018-01-12T14:06:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-28T07:14:31.000Z", "avg_line_length": 51.1547445255, "max_line_length": 467, "alphanum_fraction": 0.6484689364, "num_tokens": 11557, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.5841218801893858}}
{"text": "\\SetPicSubDir{ch-Intro}\n\n\\chapter{Introduction}\n\\vspace{2em}\n\nThe concept of graphs was first used by Leonhard Euler to study the K\u00f6nigsberg bridges problem \\cite{euler:konis} in 1735. This laid the foundation for field of mathematics known as graph theory. \n\n\\noindent\nThese days graphs are used to model and study complex networks. Networks like the internet, electric grid, road networks, social networks etc. are too large to be directly studied.  In such situations a mathematical modelling of the structure allows us to study it's properties more efficiently.\n\n\\noindent\nIn this thesis we shall be studying one such property of a graph known as it's hamiltonicity, on particular classes of graphs.\n\n\\section{Preliminaries}\nIn the following section we shall briefly introduce some of the preliminaries and conventions that shall be used throughout the thesis.\n\n\\subsection{Graph Theory}\n\n\\begin{defn}\nWe define a \\textbf{Graph} $G$ as the pair of sets $(V, E)$ where $V$ is the \\textit{vertex set}, and the \\textit{edge set}, $E \\subseteq V \\times V$.\n\\end{defn}\nIn an \\textit{undirected graph} the order of vertices in an edge does not matter. In such graphs we represent an edge as $\\{u, v\\}$. On the other hand, in a \\textit{directed graph} (or digraph) the order of vertices in an edge matters. Edges in such a graph are represented as $(u, v)$.\n\n\\begin{defn}\nThe number of edges \\textit{adjacent} to a vertex, is known as the \\textbf{Degree} of that vertex.\n\\end{defn}\nNote that in a directed graph, we have separate in and out-degrees for each vertex. Henceforth, we shall be using $\\delta$ to represent the minimum degree of a vertex in a graph, and similarly $\\Delta$ to represent the maximum degree.\n\n\\begin{defn}\nA sequence of distinct vertices $P = v_0, v_1, \\cdots v_k$ in a directed graph $G(V, E)$, where $\\forall i<k, e_i = (v_i, v_{i+1}) \\in E$ is known as a \\textbf{Path} in the graph.\n\\end{defn}\n\n\\begin{defn}\nA path $C = v_0, v_1, \\cdots v_k$ in graph $G(V, E)$ where $v_0 = v_k$ is known as a \\textbf{Cycle}.\n\\end{defn}\n\n\\begin{defn}\nA cycle $C$ such that it passes through every vertex of the graph, is called a \\textbf{Hamiltonian Cycle}.\nA graph which contains a Hamiltonian cycle, is said to be a \\textbf{Hamiltonian Graph}.\n\\end{defn}\n\n\\subsection{Random Graph Models}\nThe graphs we shall be looking at in this thesis shall all be sampled from one of the distributions described below.\n\\begin{description}\n    \\item[$D_{n, m}$ model: ] A graph is picked uniformly at random from the set of all digraphs with $n$ vertices and $m$ edges.\n    \\item[$k$-in, $k$-out: ] In the $D_{k-in, k-out}$ model, for a graph with $n$ vertices, $k$ in-neighbours and $l$ out-neighbours are chosen independently at random from the vertex set $V_n$. 
\n    \\item[$k$-regular digraph: ] In a random regular digraph $D_{n, k}$, each of the $n$ vertices has in-degree and out-degree exactly $k$.\n    \\item[Powerlaw graphs: ] Given a fixed degree sequence that has a power law tail, we pick a graph uniformly at random from the set of all graphs that realize the given degree sequence.\n    \\item[Random Tournaments: ] In a complete graph $K_n$, we assign a direction uniformly at random to each edge.\n    \\item[Random n-lifts: ] The \\textit{n-lift} of a given graph $G(V, E)$ is obtained by replacing each vertex in $V$ by a set of $n$ vertices, and by adding a random perfect matching (between the corresponding sets of $u$ \\& $v$) for every $e_i = (u, v) \\in E$.\n\\end{description}\n", "meta": {"hexsha": "39dd88876737313e34460da5317b958cfabdc874", "size": 3482, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapters/ch-intro.tex", "max_stars_repo_name": "LaughingBudda/hachikuji", "max_stars_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/chapters/ch-intro.tex", "max_issues_repo_name": "LaughingBudda/hachikuji", "max_issues_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/chapters/ch-intro.tex", "max_forks_repo_name": "LaughingBudda/hachikuji", "max_forks_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.9615384615, "max_line_length": 295, "alphanum_fraction": 0.7383687536, "num_tokens": 975, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5841081572494383}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{vntex}\n\\usepackage{amsmath}\n\\usepackage{units}\n\n\\title{Machine Learning Homework Week 1}\n\\author{T\u00f4 \u0110\u1ee9c Anh}\n\\date{September 2021}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Problems}\n    \\begin{enumerate}\n        \\item To evaluate a new test for detecting Hansen\u2019s disease, a group of people 5\\% of which are known to have Hansen\u2019s disease are tested. The test finds Hansen\u2019s disease among 98\\% of those with the disease and 3\\% of those who don\u2019t. What is the probability that someone testing positive for Hansen\u2019s disease under this new test actually has it?\n        \n        \\item Proof the following distributions are normalized then calculate the mean and standard deviation of these distribution:\n        \\begin{itemize}\n            \\item Univariate normal distribution.\n            \\item (Optional) Multivariate normal distribution.\n        \\end{itemize}\n            \n    \\end{enumerate}\n\n\\section{Answers}\n    \\subsection{Question 1}\n        \\begin{itemize}\n            \\item Probability that someone testing positive for Hansen\u2019s disease under this new test actually has it:\n               \\begin{equation} \\label{eq:1}\n                    \\begin{split}\n                        P (actually \\ has \\ disease \\mid tested) = \\\\ \\frac{P(tested \\mid actually \\ has \\ disease)*P(actually\\ has \\ disease)}{P(tested)} \n                    \\end{split}\n                \\end{equation}\n            \\item Probability of people with Hansen's disease in the 5\\% are known to have Hansen's disease is: \n                $$ \n                    5\\%\\times 98\\% = 4.9\\% \n                $$\n            \\item Probability of people with Hansen's disease in the 95\\% are known to not have Hansen's disease is: \n                $$\n                    95\\%\\times 3\\%  = 2.85\\% \n                $$\n            \\item Probability of people testing positive for Hansen's disease is: \n                $$ \n                    4.9\\% + 2.85\\% = 7.75\\% \n                $$\n            \\item Probability that someone testing positive for Hansen\u2019s disease under this new test actually has it is:\n                $$\n                    \\frac{5\\%\\times 98\\%}{7.75\\%} \\approx 63.2\\%\n                $$\n            \\end{itemize}\n    \n    \\subsection{Question 2}\n        \\begin{itemize}\n            \\item Univariate normal distribution\n            \\subsubsection{Formula for univariate normal distribution}\n                    \\[\n                        f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2} \n                    \\]\n            With respect to $\\mu$ as Mean and $\\sigma$ as Standard Deviation\n            \n            \\subsubsection {Prove that f(x) always > 0 $\\forall \\mu $ and $ \\sigma $}\n            We can see that $\\frac{1}{\\sigma\\sqrt{2\\pi}}$ will always > 0 since the standard deviation $\\sigma$ always $\\ge$ 0. 
Therefore $ f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2} $ is always positive $\\forall \\mu$ and $\\sigma$.\n            \\subsubsection{Prove that $\\int_{-\\infty}^{\\infty} f(x)dx$ = 1}\n            \n            \\subsubsection{Prove mean = $\\mu$:}\n            We have: \n                \\[\n                        f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2} \n                \\]\n            Substituting $t = \\frac{x - \\mu}{\\sqrt{2}\\sigma}$, the expectation of $X$ is:\n                \\begin{equation}\n                    \\begin{split}\n                        E[X] & = \\int_{-\\infty}^{\\infty} x \\times \\frac{1}{\\sigma\\sqrt{2\\pi}}\\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2}dx \\\\\n                             & = \\frac{1}{\\sigma\\sqrt{2\\pi}} \\times \\int_{-\\infty}^{\\infty} x  \\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2}dx \\\\\n                             & = \\frac{\\sqrt{2}\\sigma}{\\sigma\\sqrt{2\\pi}} \\times \\int_{-\\infty}^{\\infty} \\Big(\\sqrt{2} \\sigma t + \\mu\\Big) \\times e^{-t^2}dt \\\\ \n                             & = \\frac{1}{\\sqrt{\\pi}} \\times \\Bigg(\\sqrt{2} \\sigma \\times\n                             \\int_{-\\infty}^{\\infty}  t \\times e^{-t^2} dt + \\mu \\int_{-\\infty}^{\\infty} e^{-t^2}dt\\Bigg) \\\\ \n                             & = \\frac{1}{\\sqrt{\\pi}} \\times \\Bigg(\\sqrt{2} \\sigma \\times\n                             \\Bigg[\\frac{-1}{2} \\times e^{-t^2}\\Bigg]_{-\\infty}^{\\infty} + \\mu \\times \\sqrt{\\pi}\\Bigg)\\\\\n                             & = \\frac{\\mu \\sqrt{\\pi}}{\\sqrt{\\pi}}\\\\\n                             & = \\mu\n                    \\end{split}\n                \\end{equation}\n            since the bracketed term vanishes at both limits.\n            \\subsubsection{Prove that the variance $= \\sigma^2$}\n            With the same substitution $t = \\frac{x - \\mu}{\\sqrt{2}\\sigma}$ and the standard integral $\\int_{-\\infty}^{\\infty} t^2 e^{-t^2}dt = \\frac{\\sqrt{\\pi}}{2}$:\n                \\begin{equation}\n                    \\begin{split}\n                        Var[X] & = \\int_{-\\infty}^{\\infty} (x - \\mu)^2 \\times \\frac{1}{\\sigma\\sqrt{2\\pi}}\\times e^{ -\\frac{1}{2}(\\frac{x - \\mu}{\\sigma})^2}dx \\\\\n                               & = \\frac{\\sqrt{2}\\sigma}{\\sigma\\sqrt{2\\pi}} \\times \\int_{-\\infty}^{\\infty} 2\\sigma^2 t^2 \\times e^{-t^2}dt \\\\\n                               & = \\frac{2\\sigma^2}{\\sqrt{\\pi}} \\times \\frac{\\sqrt{\\pi}}{2} \\\\\n                               & = \\sigma^2\n                    \\end{split}\n                \\end{equation}\n            so the standard deviation is $\\sigma$.\n            \\item (Optional) Multivariate normal distribution. 
\n                \\begin{itemize}\n                    \\item Proof that the distribution is normalized:\n                    \\item Mean:\n                    \\item Standard Deviation:\n                \\end{itemize}\n        \\end{itemize}\n        \n\n\n\\end{document}", "meta": {"hexsha": "c7672c84ab231be6b43ced00f1d179eee7f58454", "size": 6033, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/Homework W1/To Duc Anh W1 HW.tex", "max_stars_repo_name": "Hyprnx/Machine-Learning", "max_stars_repo_head_hexsha": "eb3613d811ed2f9cf851e442778d7aa4e265ece0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-05T11:09:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-05T11:09:02.000Z", "max_issues_repo_path": "Homework/Homework W1/To Duc Anh W1 HW.tex", "max_issues_repo_name": "Hyprnx/Machine-Learning", "max_issues_repo_head_hexsha": "eb3613d811ed2f9cf851e442778d7aa4e265ece0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/Homework W1/To Duc Anh W1 HW.tex", "max_forks_repo_name": "Hyprnx/Machine-Learning", "max_forks_repo_head_hexsha": "eb3613d811ed2f9cf851e442778d7aa4e265ece0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8454545455, "max_line_length": 355, "alphanum_fraction": 0.4840046411, "num_tokens": 1725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.795658104908603, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5841081561781885}}
{"text": "\\section{Missing data\n\\normalfont\\sffamily, w/o labels $\\color{section-text-color}\\to$ unsupervised}\n\n% ===\n\\emph{Mixture modeling}\n\nModel each class as prob. distribution $P(x|\\theta_j)$\\\\\n$P(D|\\theta) = \\prod_{i=1}^n \\sum_{j=1}^c w_j P(x_i|\\theta_j)$\\\\\n$L(w, \\theta) = - \\sum_{i=1}^n \\log  \\sum_{j=1}^c w_j P(x_i| \\theta_j)$\\\\\n$\\implies \\theta^\\ast = \\arg\\!\\min L(\\theta)$\\enskip\\emph{$\\to$ non convex}\n\n% ===\n\\emph{Gaussian-Mixture Bayes classifiers}\n\nEstimate prior $P(y)$; Est. cond. distr. for each class:\n$P(x|y) = \\sum_{j=1}^{k_y} w_j^{(y)} \\mathcal{N}(x; \\mu_j^{(y)}, \\Sigma_j^{(y)})$\\\\\n\n% ===\n\\emph{Hard-EM algorithm}\\vspace{1pt}\n\\begin{highlightbox}\n\tInitialize $\\theta^{(0)}$; \\enskip Let $Q^{(t)} (z) = P(z\\vert x,\\theta^{(t)})$\\\\\n\t\\textbf{For $t=1,2,...$ do:}\\\\\n\t$\\bullet$ \\textbf{E-step}: estimate log-likelihood\\\\\n\t\\phantom{$\\bullet$} Predict most likely class for each point $x_i$\\\\\n\t\\phantom{$\\bullet$} $\\circ$\n\t\t$z_i^{(t)} = \\arg\\!\\max\\limits_z P(z\\vert x_i,\\theta^{(t-1)})$\\vspace{-5pt}\\\\\n\t\\phantom{$\\bullet$ $\\circ$}\n\t\t\\phantom{$z_i^{(t)}$}$\\;= \\arg\\!\\max\\limits_z P(z\\vert\\theta^{(t-1)}) \\, P(x_i\\vert z,\\theta^{(t-1)})$\\\\\n\t$\\bullet$ \\textbf{M-step}: Maximize (MLE)\\\\\n\t\\phantom{$\\bullet$} $\\circ$\n\t\t$L^{(t)} (\\theta) = \\mathbb{E}_{Q^{(t)}} [\\log P(x,y\\vert \\theta^{(t)})]$\\\\\n\t\\phantom{$\\bullet$} $\\circ$\n\t\t$\\theta^{(t)} = \\arg\\!\\max\\limits_\\theta L^{(t)}$\n\t\t$\\to \\pderiv{~}{\\theta}L = 0$\n\t\t$\\to \\theta^{(t)} = ...$\n\\end{highlightbox}\n$\\theta^{(t)} = \\underset{\\theta}{\\operatorname{argmax}} P(D^{(t)}|\\theta)$, i.e. $\\mu_j^{(t)} = \\frac{1}{n_j} \\sum_{i: z_i = j x_j}$\n\n% ===\n\\emph{Soft-EM algorithm}\n\nE-step: Calc p for each point and cls.: $\\gamma_j^{(t)}(x_i)$\\\\\nM-step: Fit clusters to weighted data points:\\\\\n$w_j^{(t)} = \\frac{1}{n} \\sum_{i=1}^n \\gamma_j^{(t)} (x_i)$; \n$\\mu_j^{(t)} = \\frac{\\sum_{i=1}^n \\gamma_j^{(t)} (x_i) x_i}{\\sum_{i=1}^n \\gamma_j^{(t)} (x_i)}$\\\\\n$\\sigma_j^{(t)} = \\frac{\\sum_{i=1}^n \\gamma_j^{(t)}(x_i) (x_i - \\mu_j^{(t)})^T (x_i - \\mu_j^{(t)})}{\\sum_{i=1}^n \\gamma_j^{(t)}(x_i)}$\n\n% ===\n\\emph{Soft-EM for semi-supervised learning}\n\nlabeled $y_i$: $\\gamma_j^{(t)}(x_i) = [j = y_i]$,\nunlabeled:\\\\ $\\gamma_j^{(t)}(x_i) = P(Z=j|x_i, \\mu^{(t-1)}, \\Sigma^{(t-1)}, w^{(t-1)})$\n\n%If enough space put this on summary.\n\\iffalse\n\\subsection*{Log-likelihood}\n$l(\\theta) = log P(\\mathcal{D})$ \\\\\n$=\\sum_{\\overset{i=1}{y_i=\\times}}^n log P(x_i;\\theta) + \\sum_{\\overset{i=1}{y_i\\not=\\times}}^n log P(x_i,y_i;\\theta)$\\\\\n$=\\sum_{\\overset{i=1}{y_i=\\times}}^n log \\sum_{i=1}^m P(x_i, Y=j;\\theta) +$\\\\\n$ \\sum_{\\overset{i=1}{y_i\\not=\\times}}^n log P(x_i,y_i;\\theta)$\\\\\n$=\\sum_{\\overset{i=1}{y_i=\\times}}^n log \\sum_{i=1}^m P(x_i|Y=j;\\theta)P(Y=j|\\theta) +$\\\\\n$ \\sum_{\\overset{i=1}{y_i\\not=\\times}}^n log P(x_i,y_i;\\theta)$\n\n\\subsection*{Latent variable}\nWe denote the latent variable indicating the component the point is sampled from by Z, which takes on values in $\\{1,...,k\\}$.\n\n\\subsection*{E-step: Posterior probabilities}\n$\\gamma_j^t(x_i) = P(Z=j|x_i, \\theta_t) = \\frac{P(x_i|Z=j, \\theta_t) P(Z=j|\\theta_t)}{P(x_i;\\theta_t)}$\n\n\\subsection*{M-step: maximizing expected log likelihood}\n$\\mathbb{E}_{\\gamma^t}[\\log P(\\mathcal{D;\\theta})] = \n\\mathbb{E}_{\\gamma^t}[\\log \\Pi_{i=1}^nP(x_i,z_i;\\theta)] = 
$ \\\\\n$\\sum_{i=1}^n \\mathbb{E}_{\\gamma^t}[\\log P(x_i,z_i;\\theta)] = $ \\\\\n$\\sum_{i=1}^n \\sum_{j=1}^k \\gamma_j^t(x_i) \\log (P(x_i|z_i=j;\\theta) P(z_i=j;\\theta))$ \\\\\n$\\theta_{t+1} = \\underset{\\theta}{\\operatorname{argmax}} \\mathbb{E}_{\\gamma^t}[\\log P(\\mathcal{D;\\theta})]$\n\\fi\n", "meta": {"hexsha": "2c354ce9b0a4a97301d2fcc2d0ed90c12c2914dd", "size": 3506, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/IML19/sections/Latent.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/IML19/sections/Latent.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/IML19/sections/Latent.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5324675325, "max_line_length": 134, "alphanum_fraction": 0.590416429, "num_tokens": 1527, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7341195152660687, "lm_q1q2_score": 0.584108145850161}}
{"text": "\\documentclass[11pt]{article}\n%Gummi|065|=)\n\\title{\\textbf{Algorithms in Julia}}\n\\author{Valentin Churavy \\and James Schloss}\n\\date{}\n\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage{listings}\n\\usepackage{color}\n\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\n\\lstset{frame=tb,\n  language=python,\n  aboveskip=3mm,\n  belowskip=3mm,\n  showstringspaces=false,\n  columns=flexible,\n  basicstyle={\\small\\ttfamily},\n  numbers=none,\n  numberstyle=\\tiny\\color{gray},\n  keywordstyle=\\color{blue},\n  commentstyle=\\color{dkgreen},\n  stringstyle=\\color{mauve},\n  breaklines=true,\n  breakatwhitespace=true,\n  tabsize=3\n}\n\n\n\\begin{document}\n\\maketitle\nIn this document, we have descriptions of the following algorithms:\n\\begin{itemize}\n\\item Monte Carlo\n\\item Convolution / Edge Detection for simple images\n\\item Split-Operator method for solving the Schrodinger Equation\n\\item Julia Fractal\n\\item Force Integration\n\\item Barnes-Hut (N-body galaxy simulation)\n\\item FFT\n\\end{itemize}\n\nAll algorithms can be found in the examples directory of the \\texttt{skillpill-julia} repository on github\n\n\\newpage\n\\section*{Monte Carlo}\n\nThere are many different methods that all work under the basic Monte Carlo principle of using random numbers to integrate systems.\nFor the sake of time and brevity, please refer to the following link for more information on Monte Carlo integration: \\url{http://leios.github.io/Batman_Montecarlo/}. From there, modify the provided code to work in Julia and to integrate $x^2$ where $-3 < x < 3$.\n\nMy function reads in a total range in x along with the number of points to sample. Try to match (or beat) the following benchmark:\n\n\\begin{lstlisting}\njulia> @benchmark monte_carlo(6.0, 100000)\nBenchmarkTools.Trial: \n  memory estimate:  0 bytes\n  allocs estimate:  0\n  --------------\n  minimum time:     573.931 \u00ce\u00bcs (0.00% GC)\n  median time:      578.105 \u00ce\u00bcs (0.00% GC)\n  mean time:        596.870 \u00ce\u00bcs (0.00% GC)\n  maximum time:     1.125 ms (0.00% GC)\n  --------------\n  samples:          8326\n  evals/sample:     1\n\n\\end{lstlisting}\n\n\\newpage\n\\section*{Edge Detection}\nThis is more about technical implementation than algorithms; however, Canny Edge Detection is a staple algorithm used in many areas -- most importantly image analysis. 
It is basically composed of 5 steps:\n\\begin{enumerate}\n\\item Apply Gaussian filter to smooth the image in order to remove the noise\n\\item Find the intensity gradients of the image\n\\item Apply non-maximum suppression to get rid of spurious response to edge detection\n\\item Apply double threshold to determine potential edges\n\\item Track edge by hysteresis: Finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.\n\\end{enumerate}\n\nFor the most part, this edge detection can be done relatively simply with the following code:\n\n\\begin{lstlisting}\n# First, we need to use the appropriate packages\nusing Images  # All filtering and image stuff\nusing ImageView # imshow for showing images, run in REPL\nusing TestImages # Standard example images in Julia\n\n# simple implementation of canny edge detection using in-built Julia functions\nfunction simple_edge_detection()\n    img = testimage(\"fabio_color_256\")\n    img_edge = canny(img)\n    save(string(\"fabio_edges.png\"), img_edge)\nend\n\nsimple_edge_detection()\n\n\\end{lstlisting}\n\nHowever, that doesn't really show the \\textbf{power} of Julia. Perform each of the 5 above steps without using the \\texttt{canny} function.\n\n\n\\newpage\n\\section*{Split-Operator Method}\nThe Split-Operator Method is a pseudo-spectral method that is important for many areas of physics, including quantum mechanics, where it is used to solve the Schr\\\"odinger equation,\n\n$$\ni \\hbar \\frac{\\partial \\Psi(\\mathbf{r}, t)}{\\partial t} = \\left[\\frac{-\\hbar^2}{2m} \\nabla^2 + V(\\mathbf{r},t) \\right] \\Psi(\\mathbf{r},t)\n$$\n\nTo solve this equation, we basically split the time evolution operator ($\\hat U = e^{-i \\hat H t / \\hbar}$) into 2 via the Baker-Campbell-Hausdorff relation:\n$$\\hat U_k = e^{-\\mathbf{\\hat p}^2 t / 2} \\quad \\hat U_V = e^{-V_0(\\mathbf{\\hat r})t/2}$$\n\nFrom here, the method simply involves initializing the wavefunction and then multiplying it by half of $\\hat U_V$ before flipping to momentum space via an FFT and multiplying it by all of $\\hat U_k$ and then flipping back to real space with an inverse FFT and multiplying by the other half of $\\hat U_V$, as shown in figure \\ref{fig:split-op}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 0.5\\textwidth]{split-op.png}\n\\end{center}\n\\caption{A single step in the split-op method as described in the text}\n\\label{fig:split-op}\n\\end{figure}\n\n\\newpage\n\\section*{Julia Fractal}\nFractals are cool. They are structures that continually repeat themselves no matter how far you zoom in. Because we are using Julia, the obvious fractal to work with is the Julia fractal. Full implementation details in Python can be found here: \\url{https://batchloaf.wordpress.com/2013/02/10/creating-julia-set-images-in-python/}. Basically, all we need to do is iteratively solve the following formula:\n$$z_{n+1} = z_n^2 + c$$\nEvery iteration, we reduce pixel color by 5 and recalculate z until either the pixel color is 0 or $|z|$ exceeds a provided cutoff value (somewhat arbitrary). 
It should be noted here that $c$ is entirely complex and that the value drastically changes the output fractal.\n\nCreate the following: \n\\begin{enumerate}\n\\item A function that sweeps through different $c$ values and outputs an image for each value\n\\item A function that zooms into a fractal of $c=1.0$ to see its self-similar nature\n\\end{enumerate}\n\nIf you want to, you can even combine the images into a gif by using some additional tool (like ffmpeg).\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{fractal_zoom00030.png}\n\\includegraphics[width=0.45\\textwidth]{c_scan00030.png}\n\\end{center}\n\\end{figure}\n\n\\newpage\n\\section*{Force Integration}\nIntegrating forces is a staple of any physics engine and most physics simulations. Though there are many different algorithms to do this, here we focus on the Verlet algorithm (because it's conceptually simple); however, if you want to implement Runge-Kutta (or whatever), feel free to do so!\n\nFirst, let's start with the Taylor Series expansion:\n\n$$x = x_0 + v_0 t + \\frac{1}{2}at^2 + \\ldots$$\n\nIf we are looking for $x(t + \\Delta t)$, we can start by looking at the timesteps immediately before it:\n$$x(t+\\Delta t) = x(t) + v \\cdot \\Delta t + \\frac{1}{2} a\\cdot\\Delta t^2$$\n$$x(t-\\Delta t) = x(t) - v \\cdot \\Delta t + \\frac{1}{2} a\\cdot\\Delta t^2$$\n\nAdding these together and solving for $x(t + \\Delta t)$, we get:\n\n$$x(t + \\Delta t) = 2x(t) - x(t - \\Delta t) + a \\Delta t^2$$\n\nThis means we can solve for any object's new position if we know its acceleration and where it's been! Of course, if we need the velocity (possibly to solve for the acceleration of a damped system), we can find it with:\n\n$$v(t) = \\frac{x(t + \\Delta t) - x(t - \\Delta t)}{2\\Delta t}$$\n\nNow, let's integrate some forces! Start with a ball at a height of 5 m. How long does it take to hit the ground? Also, plot its trajectory with time.
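\n\nOne possible solution sketch for this exercise (written in Python to match the listing style above; certainly not the only way to do it):\n\\begin{lstlisting}\n# Verlet integration of a ball dropped from 5 m\ndef verlet_drop(x0=5.0, a=-9.81, dt=0.001):\n    t = 0.0\n    x_prev, x = x0, x0              # starts at rest\n    while x > 0.0:\n        # x(t+dt) = 2x(t) - x(t-dt) + a*dt^2\n        x_prev, x = x, 2*x - x_prev + a*dt*dt\n        t += dt\n    return t\n\nprint(verlet_drop())                # ~1.01 s, i.e. sqrt(2*5/9.81)\n\\end{lstlisting}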
\n\n\\newpage\n\\section*{Barnes-Hut}\nUsing the previous section on force integration, we know that all we really need for an N-body gravity simulator is a method to calculate the acceleration of an object in space. In principle, this is straightforward: We just add up the contributions of all the particles and calculate\n\n$$a = \\frac{GM}{r^2}$$\n\nIn practice, this means that we have an $O(n^2)$ problem on our hands! That said, we can take our N-body simulation and speed it up drastically with the help of a rather famous data structure known as an octree (or quadtree in 2d).\n\nThese data structures are straightforward. The root node is the entire simulation box, and its children are the octants of that box. Each octant has 8 octchildren, and we keep splitting up the simulation until every particle has its own box. At this point, it's not obvious how this speeds anything up; however, each box has a Center of Mass computed from all the elements inside it. If our objects are sufficiently far away from each other, we can take a larger parent box instead of a smaller child box. This cuts computational time tremendously!\n\nIn particular, the cutoff value is defined as $\\theta = s/d$ where $s$ is the width of the region represented by the internal node and $d$ is the distance between the body and its Center of Mass. By playing with $\\theta$, we can have either a fast, inaccurate simulation or a slow, accurate one. The trick is finding the happy-medium between the two!\n\n\\newpage\n\\section*{FFT}\nFFTs are everywhere nowadays. In fact, we used one previously in this document when talking about the Split-Operator method. For the most part, FFTs use an implementation of the Cooley-Tukey algorithm, which basically involves recursively dividing the array we wish to transform into even and odd components and solving them with a slow Discrete Fourier Transform (DFT) when we get below a certain number of elements. \n\nBy solving our system recursively, we can take an $O(n^2)$ process to $O(n\\log n)$, which is huge savings! Full implementation details can be found here: \\url{https://leios.gitbooks.io/algorithm-archive/content/chapters/computational_mathematics/cooley_tukey.html} (you might need to refresh the page to get the formulas to work).\n\n\\end{document}\n", "meta": {"hexsha": "5031d012840489183d1de1f4194c47a3884d0f0f", "size": 9467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "examples/day3_exercises.tex", "max_stars_repo_name": "vchuravy/skillpill-julia", "max_stars_repo_head_hexsha": "4ef5de50fe6fdc846278ed47eaeeb73acb121a02", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-07-11T18:24:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-19T01:00:44.000Z", "max_issues_repo_path": "examples/day3_exercises.tex", "max_issues_repo_name": "oist/skillpill-julia", "max_issues_repo_head_hexsha": "eb719677ae84bd26c1e3fdb889cf2b54f65d47ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/day3_exercises.tex", "max_forks_repo_name": "oist/skillpill-julia", "max_forks_repo_head_hexsha": "eb719677ae84bd26c1e3fdb889cf2b54f65d47ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-06-23T01:55:08.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-13T03:44:47.000Z", "avg_line_length": 50.3563829787, "max_line_length": 550, "alphanum_fraction": 0.7521918242, "num_tokens": 2534, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5841081433642726}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture VIII Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n% Mathematical Operations:\n\n% Sum: $$\\sum_{n=a}^{b} f(x) $$\n% Integral: $$\\int_{lower}^{upper} f(x) dx$$\n% Limit: $$\\lim_{x\\to\\infty} f(x)$$\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Projectile Motion $-$ 13.4}\n\nGiven an acceleration, $\\overrightarrow{a}(t)$, one may find the velocity and position formulas. In projectile motion, the acceleration is always $\\overrightarrow{a}(t) = g = 9.80665\\left[\\frac{m}{s^2}\\right]$\\\\\n\nFrom here, one may find that: $\\Delta v = \\int \\overrightarrow{a}(t) dt \\Longrightarrow v-v_o=\\overrightarrow{a}(t)t\\Longrightarrow \\overrightarrow{v}(t)=v_o+\\overrightarrow{a}(t)t$\\\\\n\nFurthermore, one may find the position vector by integrating once more: $\\Delta r = \\int_{}^{}  v_o + \\overrightarrow{a}(t)t dt \\Longrightarrow r-r_o=v_ot+\\frac{1}{2}\\overrightarrow{a}(t)t^2\\Longrightarrow \\overrightarrow{r}(t)=r_o + v_ot+\\frac{1}{2}\\overrightarrow{a}(t)t^2$\\\\\n\nIn projectile motion, acceleration only acts in the vertical direction, and, therefore:\n\\begin{tabular}{c}\n\n  $\\overrightarrow{r}_y(t)=r_{oy}+v_{oy}t-\\frac{1}{2}gt^2$ \\\\\n  $\\overrightarrow{v}_y(t)=v_{oy}+gt$ \\\\\n  $\\overrightarrow{r}_x(t)=r_{ox}+v_{ox}t$ \\\\\n  $\\overrightarrow{v}_x(t)=v_{ox}$\n\n    \n\\end{tabular}\n\nOne may find the acceleration through a different method by using the curvature, $\\kappa$. 
$\\kappa = \\frac{|\\overrightarrow{T}'(t)|}{|\\overrightarrow{r}'(t)|}\\Longrightarrow |\\overrightarrow{T}'(t)|=\\kappa |\\overrightarrow{v}(t)|$\\\\\nSimplifying this leaves us with: $\\overrightarrow{a}=|\\overrightarrow{v}|'\\,\\overrightarrow{T} + \\kappa |\\overrightarrow{v}|^2 \\overrightarrow{N}$, where $|\\overrightarrow{v}|'$ is the derivative of the speed\\\\\nThis makes sense, because on a straight line, $\\kappa = 0$, and, therefore, on a straight line, $\\overrightarrow{a}=|\\overrightarrow{v}|'\\,\\overrightarrow{T}$\\\\\n\nThe components of acceleration, $a_N$ and $a_T$, can be found using the formulas: $a_N=\\frac{|\\overrightarrow{r}'(t)\\times \\overrightarrow{r}''(t)|}{|\\overrightarrow{r}'(t)|}$ and $a_T = \\frac{\\overrightarrow{r}'(t)\\cdot \\overrightarrow{r}''(t)}{|\\overrightarrow{r}'(t)|}$\n\n\\end{document}\n", "meta": {"hexsha": "71da94d8eb4d03f0e31c0549347ffb3d7057be3d", "size": 3285, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture8.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture8.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture8.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5555555556, "max_line_length": 277, "alphanum_fraction": 0.6249619482, "num_tokens": 989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5839750065476953}}
{"text": "\\section{Introduction}\n\\label{sec:introduction}\n\nPerceptrons are devices used to solve binary classification problems.\nA perceptron tries to learn a hyperplane that divides the training examples in $2$ groups, depending on their label.\nIn general, if the dataset is linear separable, there is an infinite number of hyperplanes that perfectly separates the examples based on their labels.\nThe MinOver algorithm \\cite{minover} tries to find the hyperplane with the maximum stability, i.e. maximize the distance of the closest examples from the hyperplane.\n\nIn this assignment, we implement the MinOver training algorithm and measured its generalization error on a linearly separable dataset as a function of the rate between the number of examples and their dimension.\nWe also compare its performances with the Rosenblatt algorithm \\cite{rosenblatt} implemented in the previous assignment.\nFinally, we introduce different degrees of noise in the datasets and observe the performances of both algorithms.\n\n\\cref{sec:theory} introduces the MinOver training algorithm.\n\\cref{sec:implementation} describes the implementation of the various experiments.\n\\cref{sec:evaluation} discusses the obtained results.\nFinally, \\cref{sec:conclusion} summarizes the most interesting results.\n", "meta": {"hexsha": "96f147c3fbf75eb70cc6b7413831f533f4e0d805", "size": 1273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment_2/report/01_introduction.tex", "max_stars_repo_name": "davidepedranz/neural_networks_assignments", "max_stars_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment_2/report/01_introduction.tex", "max_issues_repo_name": "davidepedranz/neural_networks_assignments", "max_issues_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment_2/report/01_introduction.tex", "max_forks_repo_name": "davidepedranz/neural_networks_assignments", "max_forks_repo_head_hexsha": "262a2b33d5c3fe67bbeb20fa6ef1f4870bdfa9a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.8823529412, "max_line_length": 211, "alphanum_fraction": 0.8248232522, "num_tokens": 259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5839750014938985}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{zak}\n\\section*{\\hspace*{-1.6cm} zak}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nZak transform.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\ndzt = zak(sig)\ndzt = zak(sig,N)\ndzt = zak(sig,N,M)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty zak} computes the Zak transform of a signal. Its definition is\ngiven by\n\\[Z_{sig}(t,\\nu)=\\sum_{n=-\\infty}^{+\\infty} sig(t+n)\\ e^{-j2\\pi n\\nu}.\\]\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty sig} & Signal to be analyzed {\\ty (length(sig)=N1)}\\\\\n        {\\ty N}   & number of Zak coefficients in time ({\\ty N1} must be a multiple\n              of {\\ty N})          & {\\ty divider(N1)}\\\\\n        {\\ty M}   & number of Zak coefficients in frequency ({\\ty N1} must be a\n              multiple of {\\ty M}) & {\\ty N1/N}\\\\\n \\hline {\\ty dzt} & Output matrix {\\ty (N,M)} containing the discrete Zak transform\\\\\n\n\\hline\n\\end{tabular*}\n\n\\end{minipage}\n\\vspace*{1cm}\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         sig=fmlin(256); \n         DZT=zak(sig);\n         imagesc(DZT);\n\\end{verbatim}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nizak, tfrgabor.\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Reference}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] L. Auslander, I. Gertner, R. Tolimieri, ``The Discrete Zak Transform\nApplication to Time-Frequency Analysis and Synthesis of Nonstationary\nSignals'' IEEE Trans. on Signal Proc., Vol. 39, No. 4, pp. 
825-835, April\n1991.\n\\end{minipage}\n\n", "meta": {"hexsha": "2333b08bd4af3112c652b20d768540eb4fbda605", "size": 2010, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/zak.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/zak.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/zak.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 24.8148148148, "max_line_length": 85, "alphanum_fraction": 0.631840796, "num_tokens": 748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5839749977113974}}
{"text": "\\documentclass{article}\n\n\\usepackage[a4paper, total={6in, 9in}]{geometry}\n\\usepackage{amsmath, amssymb, graphicx, blindtext, tabularx, caption, hyperref, blindtext, physics, amsthm, wrapfig, subcaption}\n\\usepackage[noend, ruled]{algorithm2e}\n\\usepackage[square,numbers]{natbib}\n\\title{A vertical federated learning algorithm for classfication problems with gradient-based optimization}\n\\author{Zhu Xiaochen \\qquad National University of Singapore}\n\\date{}\n\\begin{document}\n\\maketitle\n\\section{The proposed \\texttt{vFedCCE} algorithm}\\label{secvfedcce}\n  To apply the general vertical federated learning architecture to the setting of classification problem, we studied the categorical cross entropy loss function to deploy the gradient-based optimizer on the clients, instead of the centralized server, in order to preserve privacy. Following this method, we proposed the following algorithm of vertical federated learning with categorical cross entropy loss (\\texttt{vFedCCE}). To simplify, we assume that the two clients have already aligned their datasets and train on a shared ID space $I$, which is the result after the preprocessing of private entity resolution.\n\n  Before presenting the algorithm, we need to clarify some of the notations that will be used later in this section. For an $m\\times n$ matrix $x$, $x_i$ is its $i$-th row vector and $x_{ij}$ is its entry at $i$-th row and $j$-th column, which means that $x_i=\\langle x_{i1},x_{i2},\\ldots,x_{in}\\rangle$. For real matrices $x,y$ of the same shape, $x\\oslash y$ is the Hadamard division (element-wise division) of $x$ and $y$, that is, $(x\\oslash y)_{ij}={x_{ij}}/{y_{ij}}$; $\\langle x,y\\rangle_\\mathrm{F}$ is the Frobenius inner product (element-wise dot product) of $x$ and $y$, that is $\\langle x,y\\rangle_\\mathrm{F}=\\sum_{i,j}x_{ij}y_{ij}$. With these notations, the algorithm \\texttt{vFedCCE} is as follows:\n\n  \\noindent\\begin{minipage}{\\textwidth}\n    \\begin{algorithm}[H]\n    \\caption{\\texttt{vFedCCE}. $m$-class classification. Shared ID space $I$. Epoch number $E$. Batch size $N$. Batch examples indexed as $x_i^1,x_i^2,y_i$, $i\\in\\{1,\\ldots,N\\}.$ $w_1,w_2$ are the client weight vectors. 
Client 2 stores labels $y_i\\in\\mathbb{R}^m$ as one-hot arrays of length $m$.}\n    \\label{vflalgo}\n    \\DontPrintSemicolon\n    \\SetAlgoNoLine\n    \\SetArgSty{textnormal}\n    server executes:\\;\n    \\For {each epoch $e=1,2,\\ldots, E$} {\n      $\\mathcal{B}\\leftarrow$ divide $I$ into batches of size $N$\\;\n      \\For{batch in $\\mathcal{B}$}{\n        client 1 sends $a\\leftarrow\\begin{bmatrix} w_1x_1^1& w_1x_2^1& \\ldots &w_1x_N^1 \\end{bmatrix}^T$ and $\\frac{\\partial a_{ij}}{\\partial w_1}$ to client 2\\;\n        server logs $\\mathcal{L}\\leftarrow$ client2.CalculateLossAndUpdate($a$)\\;\n        $g\\leftarrow$ client2.AssembleGradient($\\frac{\\partial a_{ij}}{\\partial w_1}$)\\;\n        client1.UpdateWithGradient($g$)\\;\n      }\n    }\\;\n    \n    \\textbf{CalculateLossAndUpdate}$(a)$:\\quad // client 2 executes\\;\n    $b\\leftarrow\\begin{bmatrix} w_2x_1^2& w_2x_2^2& \\ldots &w_2x_N^2 \\end{bmatrix}^T$\\;\n    shared model output $p\\leftarrow(a+b)/2$\\;\n    $\\mathcal{L}=-\\frac{1}{N}\\sum_{i=1}^{N}y_i\\cdot \\log(p_i)=-\\frac{1}{N}\\sum_{i=1}^N\\sum_{j=1}^m y_{ij}\\log(p_{ij})$\\;\n    $c\\leftarrow\\begin{bmatrix}y_1&y_2&\\ldots&y_N\\end{bmatrix}^T\\oslash p$\\;\n    client2.UpdateWithGradient$\\left(\\frac{\\partial}{\\partial w_2}\\left(-\\frac{1}{2N}\\left\\langle c,b\\right\\rangle_\\mathrm{F}\\right)\\right)$\\;\n    \\Return{$\\mathcal{L}$}\\;\\;\n\n    \\textbf{AssembleGradient}($\\frac{\\partial a_{ij}}{\\partial w_1}$):\\quad // client 2 executes\\;\n    \\Return{$-\\frac{1}{2N}\\sum_{i=1}^N\\sum_{j=1}^mc_{ij}\\frac{\\partial a_{ij}}{\\partial w_1}$}\\;\\;\n\n    \\textbf{UpdateWithGradient}($g$):\\quad // client 1 or 2 executes\\;\n    update client weights via a specified optimizer with gradient $g$\\;\n    \\end{algorithm}\n\\end{minipage}\n\n\\begin{wrapfigure}{r}{0.6\\textwidth}\n  \\centering\n  \\includegraphics[width=0.6\\textwidth]{./images/vfedcce_diagram2.png}\n  \\caption{Sequence diagram for \\texttt{vFedCCE}}\n  \\label{vflseqdia}\n\\end{wrapfigure}\n\n\\paragraph{Correctness.} The correctness of \\texttt{vFedCCE} depends on the gradients calculated or assembled locally on the clients to update their respective weight vectors.\n\n\\begin{proof}\nWe study a batch of $N$ examples, where features $x_1^1,x_2^1,\\ldots,x_N^1$ are stored on client 1, and features $x_1^2,x_2^2,\\ldots,x_N^2$ and labels $y_1,y_2,\\ldots,y_N$ are stored on client 2 as one-hot arrays in $\\mathbb{R}^m$, where $m$ is the number of classes to predict. On a single example $i$, the probability distribution output of client 1 on its feature set $x_i^1$ will be $a_i=w_1x_i^1$ and the probability distribution output of client 2 on its feature set $x_i^2$ will be $b_i=w_2x_i^2$. For the shared model, we aim to train the model such that the average of the client outputs converges to the actual labels $y_i$. Therefore, the predicted probability distribution of the shared model will be $p_i=\\frac{1}{2}(a_i+b_i)$. Note that all probability distributions, i.e. $a_i$, $b_i$ and $p_i$, are vectors in $\\mathbb{R}^m$.\n\nTherefore, on this batch of $N$ examples, the probability distribution outputs of client 1, client 2 and the shared model will be the $N\\times m$ matrices $a=\\begin{bmatrix}a_1&a_2&\\ldots&a_N\\end{bmatrix}^T$, $b=\\begin{bmatrix}b_1&b_2&\\ldots&b_N\\end{bmatrix}^T$ and $p=\\frac{1}{2}(a+b)$. 
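To make the matrix bookkeeping concrete, here is a minimal NumPy sketch of these batched quantities (the function name, the stacked tensor \\texttt{da} of partial gradients $\\frac{\\partial a_{ij}}{\\partial w_1}$, and all shapes are our own illustrative choices, not part of the algorithm itself):\n\\begin{verbatim}\nimport numpy as np\n\n# a, b, y: (N, m) arrays; da: (N, m, d) array stacking the partial\n# gradients of a_{ij} w.r.t. w_1, where d is the dimension of w_1.\ndef shared_loss_and_grad_w1(a, b, y, da):\n    N = a.shape[0]\n    p = (a + b) / 2                    # shared model output\n    loss = -np.sum(y * np.log(p)) / N  # categorical cross entropy\n    c = y / p                          # Hadamard division of y by p\n    # the sum client 2 computes in AssembleGradient():\n    grad_w1 = -np.einsum('ij,ijk->k', c, da) / (2 * N)\n    return loss, grad_w1\n\\end{verbatim}\n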
On the other hand, the actual labels $y$ stored on client 2 also form an $N\\times m$ matrix $y=\\begin{bmatrix}y_1&y_2&\\ldots&y_N\\end{bmatrix}^T$.\n\nAs previously mentioned, the shared model aims to make $p$ converge towards $y$, and therefore the optimization goal is to minimize the categorical cross entropy loss of $p$ against $y$, which is\n$$\n  \\mathcal{L}=-\\frac{1}{N}\\sum_{i=1}^N y_i \\cdot \\log p_i = -\\frac{1}{N}\\sum_{i=1}^N \\sum_{j=1}^m y_{ij}\\log p_{ij}=-\\frac{1}{N}\\sum_{i=1}^N \\sum_{j=1}^m (y_i)_j\\log \\frac{(a_i)_j+(b_i)_j}{2}.\n$$\nWhen using a gradient-based optimizer to minimize $\\mathcal{L}$, one must calculate the gradients $\\frac{\\partial\\mathcal{L}}{\\partial w_1}$ and $\\frac{\\partial\\mathcal{L}}{\\partial w_2}$. Note that only $a_{ij}$, not $b_{ij}$ or $y_{ij}$, is a function of $w_1$; since $\\frac{\\partial p_{ij}}{\\partial a_{ij}}=\\frac{1}{2}$, the chain rule gives\n\\begin{equation}\n  \\label{client1}\n  \\frac{\\partial\\mathcal{L}}{\\partial w_1}=-\\frac{1}{N}\\sum_{i=1}^N \\sum_{j=1}^m y_{ij}\\frac{1}{a_{ij}+b_{ij}}\\frac{\\partial a_{ij}}{\\partial w_1}=-\\frac{1}{2N}\\sum_{i=1}^N \\sum_{j=1}^m\\frac{y_{ij}}{p_{ij}}\\frac{\\partial a_{ij}}{\\partial w_1}.\n\\end{equation}\nSimilarly, for client 2,\n\\begin{equation}\n  \\label{client2}\n  \\frac{\\partial\\mathcal{L}}{\\partial w_2}=-\\frac{1}{N}\\sum_{i=1}^N \\sum_{j=1}^m y_{ij}\\frac{1}{a_{ij}+b_{ij}}\\frac{\\partial b_{ij}}{\\partial w_2}=-\\frac{1}{2N}\\sum_{i=1}^N \\sum_{j=1}^m\\frac{y_{ij}}{p_{ij}}\\frac{\\partial b_{ij}}{\\partial w_2}.\n\\end{equation}\n\nNote that for privacy reasons, the labels $y$ stored on client 2 are not allowed to leave client 2. Therefore, we cannot directly send $y$ to client 1 in order to complete the calculations from (\\ref{client1}) entirely locally on client 1. However, it is possible to send example-wise partial gradients $\\frac{\\partial a_{ij}}{\\partial w_1}$ to client 2 to assemble into the total gradient $\\frac{\\partial\\mathcal{L}}{\\partial w_1}$ with the assistance of $y$ and $p$, which is what the function \\texttt{AssembleGradient()} does in Algorithm \\ref{vflalgo}.\n\nAs for client 2, since the labels $y$ are originally stored there and client 1's model output $a$ is safe to send, it can complete the calculations from (\\ref{client2}) entirely locally. If we denote the element-wise division $y\\oslash p$ by a constant matrix $c$, client 2's gradient can be calculated as follows:\n\n\\begin{align*}\n  \\frac{\\partial\\mathcal{L}}{\\partial w_2}&=-\\frac{1}{2N}\\sum_{i=1}^N \\sum_{j=1}^m\\frac{y_{ij}}{p_{ij}}\\frac{\\partial b_{ij}}{\\partial w_2}=-\\frac{1}{2N}\\sum_{i=1}^N \\sum_{j=1}^m c_{ij}\\frac{\\partial b_{ij}}{\\partial w_2}\\\\\n  &=\\frac{\\partial}{\\partial w_2}\\left(-\\frac{1}{2N}\\sum_{i=1}^N\\sum_{j=1}^m c_{ij} b_{ij}\\right)\n  =\\frac{\\partial}{\\partial w_2}\\left(-\\frac{1}{2N}\\left\\langle c,b\\right\\rangle_\\mathrm{F}\\right)\n\\end{align*} which is what the function \\texttt{CalculateLossAndUpdate()} does in Algorithm \\ref{vflalgo} to calculate the gradient for client 2.\n\nTherefore, the gradients calculated locally on the clients in Algorithm \\ref{vflalgo} equal the actual gradients of the loss function w.r.t. $w_1$ and $w_2$ respectively. With a specified gradient-based optimizer, the output of the shared model, i.e. 
the average of the clients' models, $p$, will eventually converge to $y$, achieving the desired learning outcomes.\n\\end{proof}\n\n\\paragraph{Theoretical performance.} Unlike \\cite{hardy2017private}, where a Taylor approximation of the loss function is used to apply vertical federated learning, \\texttt{vFedCCE}, according to the previous proof, calculates the exact gradients without approximation. Therefore, theoretically \\texttt{vFedCCE} should produce the same training outcomes (e.g. accuracy) as a conventional machine learning algorithm with the same model over a combined dataset.\n\nBesides preserving privacy, one major advantage of vertical federated learning is to reduce communication cost by avoiding sending huge datasets from the clients to the server to combine and then apply conventional deep learning over them. Compared with the VFL example from \\cite{yang2019federated}, which only sends local gradients $\\frac{\\partial L}{\\partial w}$ (possible there because they deal with a linear regression model and an $L^2$-regularized MSE loss function), much more intermediate data is sent in Algorithm \\ref{vflalgo} in the form of $\\frac{\\partial a_{ij}}{\\partial w_1}$. Nevertheless, we expect the communication cost to still be reduced when the dataset is too huge to efficiently send and combine on the server: on each batch, \\texttt{vFedCCE} sends $Nm$ different partial gradients $\\frac{\\partial a_{ij}}{\\partial w_1}$, whose amount of data is independent of the number of features in the datasets.\n\nOne potential issue with Algorithm \\ref{vflalgo} is that, when assembling client 1's gradient from all the partial gradients $\\frac{\\partial a_{ij}}{\\partial w_1}$, a lot of floating-point arithmetic is carried out and manually summed up, which may lead to a precision problem compared with a conventional approach. This concern is dismissed by the experimental results shown later in section \\ref{vflexp}.\n\n\\paragraph{Privacy.} To analyze whether Algorithm \\ref{vflalgo} preserves privacy, we need to evaluate the data being sent across the clients, which, referring to Figure \\ref{vflseqdia}, is:\n\\begin{itemize}\n  \\item Client 1's model output $a=\\begin{bmatrix} w_1x_1^1& w_1x_2^1& \\ldots &w_1x_N^1 \\end{bmatrix}^T$. This data is safe and necessary to send, as the shared model needs to average the two outputs for predictions.\n  \\item Client 1's partial gradients $\\frac{\\partial a_{ij}}{\\partial w_1}$. These are safe to send, as client 2 cannot restore the $x$ values from the partial derivatives $\\frac{\\partial a_{ij}}{\\partial w_1}$. Gradients are also sent in other literature such as \\cite{yang2019federated} and \\cite{hardy2017private}.\n  \\item Client 1's total gradient, sent from client 2 to client 1. This is also safe and necessary to send, as this is the gradient client 1 needs to update its own weight vector.\n\\end{itemize}\n\nAlthough Algorithm \\ref{vflalgo} only sends secure intermediate data instead of sensitive raw data, it is still important to discuss the feasibility of applying some encryption scheme on top of \\texttt{vFedCCE}. In our case, since all the intermediate data (i.e. 
$a$ and $\\frac{\\partial a_{ij}}{\\partial w_1}$) is sent for linear operations, it is possible to apply additively homomorphic encryption on those data, such that client 2 receives encrypted values, applies linear operations on the encrypted values and returns the encrypted assembled gradient to client 1, without even knowing the decrypted values of the data it has been sent. More details on encryption will be discussed later.\n\n\\section{Implementation and experimental results}\\label{vflexp}\n\\paragraph{Implementation details.} The proposed \\texttt{vFedCCE} algorithm is implemented in Python with the TensorFlow Keras library on the vertically partitioned SGCP dataset from \\cite{groemping2019south}. In the experiments, the local models on client 1 and client 2 are the same neural network model:\n\\begin{itemize}\n  \\setlength\\itemsep{0em}\n  \\item one normalizer layer;\n  \\item two fully connected, $L^2$-regularized dense layers with dropout, activated by \\texttt{elu};\n  \\item one more dense layer with $2$ units to convert the output into logits;\n  \\item one \\texttt{Softmax} layer to output probability distributions.\n\\end{itemize}\nThis neural network model is chosen because it leads to acceptable results with a conventional approach on the combined dataset; this baseline method, denoted \\texttt{combinedNN}, is used to benchmark the performance of \\texttt{vFedCCE}.\n\nFor hyperparameters, no deliberate tuning is applied, since the goal of the experiments is to verify that the learning outcomes of \\texttt{combinedNN} and \\texttt{vFedCCE} are similar, not to achieve the best possible performance. Both methods are implemented with the same hyperparameters: the learning rate (of the Adam optimizer) $\\eta=0.001$, the number of epochs $E=50$ and the batch size $N=32$.\n\n\\paragraph{Experimental results.} Figure \\ref{fig:vflexp} demonstrates the training loss and accuracy of \\texttt{combinedNN} and \\texttt{vFedCCE} under the implementation specified previously. It shows that the two methods are able to minimize loss and increase accuracy over the epochs at a similar speed. Table \\ref{vfltest} presents the learning outcomes of the two methods, which, as expected, show similar and acceptable accuracy, precision, recall, F-score and AUC values. 
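For reference, the following is a minimal Keras sketch consistent with the local client model described above; the unit counts, dropout rate and $L^2$ factor here are our own assumptions for illustration, as only the layer types and their order are specified in this report:\n\\begin{verbatim}\nimport tensorflow as tf\nfrom tensorflow.keras import layers, regularizers\n\ndef make_client_model(n_features):\n    # normalizer layer; norm.adapt(train_features) is run before training\n    norm = layers.Normalization()\n    return tf.keras.Sequential([\n        tf.keras.Input(shape=(n_features,)),\n        norm,\n        layers.Dense(64, activation='elu',\n                     kernel_regularizer=regularizers.l2(0.01)),\n        layers.Dropout(0.5),\n        layers.Dense(64, activation='elu',\n                     kernel_regularizer=regularizers.l2(0.01)),\n        layers.Dropout(0.5),\n        layers.Dense(2),     # logits\n        layers.Softmax(),    # probability distribution output\n    ])\n\\end{verbatim}\n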
These results also dismiss the concern about floating-point precision raised in section \\ref{secvfedcce}.\n\n\\begin{figure}[ht]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{./images/vfl_train_loss.eps}\n  \\caption{Training loss over 50 epochs}\n  \\label{fig:vflloss}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{./images/vfl_train_acc.eps}\n  \\caption{Training accuracy over 50 epochs}\n  \\label{fig:vflacc}\n\\end{subfigure}\n\\caption{Training process of \\texttt{vFedCCE} against \\texttt{combinedNN}}\n\\label{fig:vflexp}\n\\end{figure}\n\n\\begin{table}[ht]\n\\centering\n  \\resizebox{0.7\\textwidth}{!}{%\n  \\begin{tabular}{|c|c|c|c|c|c|}\n    \\hline\n                        & Accuracy & Precision & Recall & F-score & AUC    \\\\ \\hline\n    \\texttt{combinedNN} & 0.74     & 0.7881    & 0.8561 & 0.8207  & 0.7720 \\\\ \\hline\n    \\texttt{vFedCCE}    & 0.74     & 0.7669    & 0.8993 & 0.8278  & 0.7561 \\\\ \\hline\n  \\end{tabular}%\n  }\n\\caption{Learning outcomes on testing data of \\texttt{vFedCCE} against \\texttt{combinedNN}}\n\\label{vfltest}\n\\end{table}\n\n\\bibliographystyle{abbrvnat}\n\\bibliography{references}\n\\end{document}", "meta": {"hexsha": "ea00ddaa749743f2ab2ebc2e48cb238c3e6940e0", "size": 15143, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/report.tex", "max_stars_repo_name": "zhxchd/vFedCCE", "max_stars_repo_head_hexsha": "bfb8f10f722b486064f93f287c3c22d189181055", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-07-27T07:01:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T09:47:49.000Z", "max_issues_repo_path": "docs/report.tex", "max_issues_repo_name": "zhxchd/vFedCCE", "max_issues_repo_head_hexsha": "bfb8f10f722b486064f93f287c3c22d189181055", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-22T03:55:54.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-22T03:55:54.000Z", "max_forks_repo_path": "docs/report.tex", "max_forks_repo_name": "zhxchd/vFedCCE", "max_forks_repo_head_hexsha": "bfb8f10f722b486064f93f287c3c22d189181055", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-10T09:51:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T09:51:25.000Z", "avg_line_length": 96.4522292994, "max_line_length": 956, "alphanum_fraction": 0.7401439609, "num_tokens": 4582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529376, "lm_q2_score": 0.7461389873857265, "lm_q1q2_score": 0.5839749895107473}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n%%%%%%%%%%%%%%%%%%%%\n\\section{Algebra}\nFor algebra, the problem seems to fall into three categories: group theory, rep theory, and ring/Galois theory.\n\n\\subsection{Group Theory}\nThe most important thing here is the Sylow theorems and how they are used. Sylow's theorem is about subgroups of prime orders. The idea is looking at the orbits of the conjugation action of $G$ on $G$.\n\n\n\n\\begin{definition}\nLet $G$ be a finite group of order $p^n m$ where $m$ is coprime to $p$ to $p$. A Sylow $p$-subgroups of $G$ is a subgroup of order $p^n$. They are the maximal $p$-subgroups.\n\\end{definition}\n\nHere the Sylow theorems:\n\n\\begin{theorem}[Sylow's 1st Theorem]\nIf $p$ is a prime number and $p | |G|$, then there exists a Sylow $p$-subgroup of $G$.\n\\end{theorem}\n\n\\begin{proof}\nWe find the group by looking at orbits of $G$ on subset of $G$ of size $p^n$, and then finding one $w$ with orbit size coprime to $p$. Then the stabilizer of this is $H$, which as it stabilizes it must act on the elements of the set (very confusing sentence). As it acts on the elements of $w$ freely, $H | |w| = p^n$. On the other hand, $|G|/|H| = orb(w)$ has size coprime to $p$, this means that $|H| = p^n$. (In fact $w = H$ as sets.\n\\end{proof}\n\n\\begin{theorem}[Sylow's 2nd Theorem]\nFor a given prime $p$, all Sylow $p$-subgroups are conjugate to each other.\n\\end{theorem}\n\n\\begin{corollary}\nA Sylow $p$-subgroup of $G$ is unique iff it is normal in $G$. Thus it is unique iff it is abelian.\n\\end{corollary}\n\n\\begin{theorem}[Sylow's 3rd Theorem]\nLet the number of Sylow $p$-subgroups to be $n_p$. Then the floowing results hold:\n\\begin{enumerate}\n    \\item $n_p = 1\\ mod\\ p$\n    \\item If $|G| = p^n m$ so that $m$ and $p$ are coprime, then $n_p | m$\n    \\item If $P$ is any Sylow $p$-subgroup of $G$, then $n_p = [G: N(P)]$, where $N(P)$ is the normalizer of $P$ in $G$\n\\end{enumerate}\n\n\n\\end{theorem}\n\\begin{remark}\npart 2 and 3 of Sylow's 3rd theorem follows direct from  the facts that $G$ acts on the set of Sylow $p$-groups by conjugation, Sylow's 2nd theorem says that the action is transitive, and the stabilizer of this action at $P$ is exactly $N(P)$.\n\\end{remark}\n\\begin{corollary}\nLet $G$ be a fintie group and $p$ a prime number. If $p^n$ divides the order of $G$, then $G$ has a subgroup of order $p^n$.\n\\end{corollary}\n\nWe also have the Cauchy's theorem:\n\n\\begin{theorem}[Cauchy's theorem]\nIf prime $q$ divides $|G|$, then there exists an element of order $q$.\n\\end{theorem}\n\nLet $G$ acts on $G$ by conjugation, then the orbits are conjugacy classes. \nThen the class equations says that $|G| = |Z(G)| + k_1 + k_2 ..$, where $|Z(G)|$ is the center of $G$ (those that commutes with all elements those belong to a single orbit), and $k_1$ are the size of distinct non-singlet orbits.\n\n\n\\subsubsection{Groups of order $p^2$}\nGroups of order $p$ are isomorphic to $\\mathbb{Z}/p\\mathbb{Z}$, namely abelian. Similar thing is true for groups of order $p^2$, they are all abelian, so either $\\mathbb{Z}/p\\mathbb{Z} \\sum \\mathbb{Z}/p\\mathbb{Z}$ or $\\mathbb{Z}/p^2$\n\n\\subsubsection{Groups of order $p^3$}\nThere are ofc abelian once $3, (2,1), (1,1,1)$, and there are two non-isomorphic nonabelian. 
\n\n\\subsubsection{Groups of order $pq$}\nThis uses semidirect products: \n\\begin{definition}\nGiven groups $N, K$ and an action of $K$ on $N$ (i.e. a homomorphism $K \\rightarrow Aut(N)$), the semidirect product $N \\rtimes K$ of $N$ and $K$ is $N \\times K$ as a set, with multiplication:\n$$\n(a_1, b_1) (a_2, b_2) \\coloneqq (a_1 (b_1 \\cdot a_2), b_1 b_2)\n$$\n\\end{definition}\n\nWe have the inclusion $K \\rightarrow N \\rtimes K$ and the projection $N \\rtimes K \\rightarrow K$. Conversely, a splitting of the projection of a group onto $K$ gives an action of $K$ on the kernel $N$ of the projection (by conjugation), and exhibits the total group as the semidirect product $N \\rtimes K$.\n\n\\begin{proposition}\nLet $p > q$ be primes. If $q \\nmid (p-1)$ then there is only one iso class of groups of order $pq$, namely the abelian group $\\mathbb{Z}/pq\\mathbb{Z}$. If $q | (p-1)$, then there are two groups of order $pq$:\n\\begin{enumerate}\n    \\item The abelian one: $\\mathbb{Z}/pq\\mathbb{Z}$\n    \\item The nonabelian one given by any non-trivial \n    $$\n    \\phi: \\mathbb{Z}/q\\mathbb{Z} \\rightarrow Aut(\\mathbb{Z}/p\\mathbb{Z}) \\cong \\mathbb{Z}/(p-1)\\mathbb{Z}\n    $$\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{corollary}\nFor any odd prime $p$ there are two non-isomorphic groups of order $2p$:\n\\begin{enumerate}\n    \\item The cyclic group $\\mathbb{Z}/2p\\mathbb{Z}$\n    \\item The dihedral group $D_p = \\mathbb{Z}/p\\mathbb{Z} \\rtimes \\mathbb{Z}/2\\mathbb{Z}$ with the sign action.\n\\end{enumerate}\n\\end{corollary}\n\n\n\n\n\\subsubsection{Dihedral Groups}\n\nThis leads us right into the dihedral group $D_n$. \n\\begin{definition}\n\nFor any $n \\geq 1$, we have the dihedral group \n$$\nD_n \\coloneqq \\langle r, s \\mid r^n = 1, s^2 = 1, s r s = r^{-1} \\rangle\n$$\nThis is the symmetry group of the regular $n$-gon, with $r$ a minimal rotation and $s$ a reflection.\n\nThe subgroup of rotations is normal; all reflections are conjugate when $n$ is odd, and they form two conjugacy classes when $n$ is even (it is easy to see that there are two different kinds of reflections).\n\\end{definition}\n\n\\subsubsection{$GL_n(\\mathbb{F}_p)$}\nThe size of the finite group $GL_n(\\mathbb{F}_p)$ is\n$$\n(p^n -1) (p^n - p) \\cdots (p^n - p^{n-1})\n$$\nas this counts the ways of choosing a nonzero first vector, then another vector outside the line spanned by the first (a line over $\\mathbb{F}_p$ has size $p$), and so on.\n\n\\subsection{Representation Theory}\nThis concerns the $\\mathbb{C}$-linear representation theory of finite groups, via character theory.\n\n\n\\subsubsection{Basics}\nLet $G$ be a finite group.\nThe category of (complex) finite dimensional representations $Rep(G)$ is quite simple; literally, it is a semisimple category:\n\n\\begin{theorem}[Maschke's Theorem]\n\nAny short exact sequence of representations splits, and irreducibles are simple. \n\n\\end{theorem}\n\nOne can show this by the following proposition:\n\\begin{proposition}\nFor any $G$-rep $V$ there is a $G$-equivariant Hermitian structure on $V$.\n\\end{proposition}\nThis is done by choosing any Hermitian structure and then averaging it over the $G$-action.\n\n\nThe maps between irreducible representations are also simple:\n\n\\begin{theorem} [Schur's Lemma]\nLet $V$ and $W$ be irreducible representations, and let $f : V \\rightarrow W$ be a map of representations; then $f$ is either an isomorphism or 0. 
Moreover, any endomorphism of an irreducible representation $V$ is multiplication by a scalar in $\\mathbb{C}$.\n\\end{theorem}\n\n\\begin{proof}\nAs $V$ and $W$ are irreducible, the kernel of $f$ is either $V$ or 0; similarly, the cokernel is either $0$ or $W$.\n\nAs for the second part, given $f : V \\rightarrow V$, it must have an eigenvector $v$ with some eigenvalue $\\lambda$ (this is why we are in $\\mathbb{C}$ and not in, say, $\\mathbb{R}$; think rotation), so $f - \\lambda Id$ has a nonzero kernel, which must then be all of $V$. So $f = \\lambda Id$.\n\\end{proof}\n\nThe most important representation is the regular representation, that is, $G$ acting on $\\mathbb{C}[G]$ by left translation.\n\nHere's the decomposition theorem, which we can easily prove using character theory:\n\\begin{theorem}\nThere are finitely many non-isomorphic irreducible representations; in fact there are as many as there are conjugacy classes of $G$. Moreover:\n$$\n\\mathbb{C}[G] = \\bigoplus_i V_i ^{\\dim V_i}\n$$\nwhere the $V_i$ are the distinct irreducible representations.\n\n\n\\end{theorem}\n\nNext we need character theory:\n\\subsubsection{Character Theory}\nThis is the crux of the theory of finite dimensional reps. First we define characters:\n\n\\begin{definition}\nLet $\\rho$ be a $G$-rep on $V$. Then the character associated to $\\rho$ is a complex-valued map on the set $\\frac{G}{G}$ of conjugacy classes of $G$:\n$$\n\\chi_V(g) \\coloneqq tr(\\rho(g))\n$$\n\\end{definition}\nNote that $\\chi_V(1) = \\dim V$. It turns out characters encode all the info about a rep $V$. \n\nAs $tr$ is conjugation-invariant, this is well-defined on conjugacy classes. A complex-valued map on $\\frac{G}{G}$ is called a class function on $G$.\n\nThere is an inner product on class functions:\n$$\n(f_1, f_2) \\coloneqq \\frac{1}{|G|}\\sum_{g \\in G} f_1(g) \\overline{f_2(g)}\n$$\n\nIt turns out the characters of irreducible representations are orthonormal with respect to this inner product:\n\n\\begin{lemma}\nLet $V$ and $V'$ be irreducible representations; then \n$$\n(\\chi_V, \\chi_{V'}) = \\delta_{V, V'},\n$$\nthat is, $1$ if $V \\cong V'$ and $0$ otherwise.\n\\end{lemma}\n\nWe also have a second orthogonality relation, now over columns:\n\\begin{lemma}\nGiven $g, h \\in G$, \n$$\n\\sum_{V_i}\\chi_{V_i}(g) \\overline{\\chi_{V_i}(h)} = |C_G(g)|\\ \\delta_{[g], [h]},\n$$\nwhere $C_G(g)$ is the centralizer of $g$.\n\\end{lemma}\n\nTogether, these relations tell us that the irreducible characters $\\chi_{V_i}$ form an orthonormal basis for the space of class functions!\n\nThis, for example, tells us that there are $|\\frac{G}{G}|$ many irreducible representations.\n\nThis also means that we can easily compute the irreducibles in a rep $V$: $(\\chi_V, \\chi_{V_i})$ is the multiplicity of $V_i$ in $V$.\n\nGiven a finite group $G$, we can write down a table, with columns indexed by conjugacy classes of $G$ and rows indexed by irreducible representations. The value at $(i,j)$ is $\\chi_{V_i}(g_j)$. The orthogonality relations hold for both rows and columns (for rows one needs to weight by the sizes of the conjugacy classes). This can be used to find the characters of the irreducible reps.\n\n\n\n\n\\subsection{Galois Theory}\n\nGalois theory is very similar to covering space theory; however, in this case the (absolute) Galois group is profinite, so a bit harder to understand. \n\n\\subsubsection{Normality}\n\n\nLet $F$ be a field. We will consider finite (algebraic) field extensions over $F$. 
Examples of them are $F[x]/f(x)$ where $f$ is an irreducible polynomial over $F$. \n\\begin{remark}\nWe will assume that we have an algebraic closure $\\overline{F}$ over $F$. Note that this choice is not canonical; it is like choosing a universal cover together with a base-point, equivalently, choosing a point of $BG$, where $BG$ is viewed as a non-based space. Making a choice will trivialize a lot of other things. This probably will not play a big role in the quals but it is something really important to keep in mind.\n\\end{remark}\n\nGiven an algebraic field extension $K/F$ and $k \\in K$, there exists a unique monic minimal polynomial $f \\in F[x]$ such that $f(k) = 0$, and every other polynomial that vanishes at $k$ has $f$ as a factor (hence minimality). This is because we have a map $F[x] \\rightarrow K$ sending $x \\mapsto k$. Since the extension is algebraic, this map has a nonzero ideal as kernel; as $F[x]$ is a PID, this ideal is generated by some monic polynomial $f$.\n\nHere's the notion of normality:\n\\begin{definition}\nLet $K/F$ be a finite extension; it is normal if for every $k \\in K$, the minimal polynomial $f_k(x) \\in F[x]$ of $k$ splits completely over $K$, that is, we can find all the roots of $f_k$ in $K$. Equivalently, it is normal if every irreducible $f \\in F[x]$ which has a root in $K$ splits completely over $K$.\n\\end{definition}\n\nHere's a criterion for normality: \n\\begin{lemma}\nLet $f \\in F[x]$ be an irreducible polynomial; the field extension $K \\coloneqq F[x]/f(x)$ of $F$ is normal iff it contains all the roots of $f$ ($f$ splits completely).\n\\end{lemma}\n\nHere are some other equivalent formulations of normality:\n\\begin{enumerate}\n    \\item All embeddings $K \\xhookrightarrow{} \\overline{F}$\n    have the same image\n    \\item The automorphism group $Aut_F(K)$ acts transitively on the set of homomorphisms $K \\xhookrightarrow{} \\overline{F}$.\n\\end{enumerate}\n\nGiven an extension $K/F$, it is easy to see that there is a normal closure $N(K)/F$, where $N(K)$ is the minimal normal extension of $F$ containing $K$. It is generated by all the other roots of the minimal polynomials of elements of $K$. \n\nGiven $f \\in F[x]$ not necessarily irreducible, we have the splitting field of $f$, which is the smallest normal extension over which $f$ splits completely.\n\n\\begin{example}\nExamples of normal extensions are cyclotomic extensions ($\\mathbb{Q}[\\zeta_n]$ where $\\zeta_n$ is a primitive $n$-th root of unity; equivalently, the splitting field of $x^n -1$ over $\\mathbb{Q}$). \n\nAnother one is $\\mathbb{Q}[x]/(x^2 - 2)$, aka adjoining $\\sqrt{2}$. In fact, any degree 2 extension is normal. \n\nAn example of a non-normal extension is $\\mathbb{Q}[\\sqrt[3]{2}]$, because this field is contained in $\\mathbb{R}$ (really, an embedding of it into $\\mathbb{C}$ lands in $\\mathbb{R}$), but the other two roots of $x^3 - 2$ are complex. \n\\end{example}\n\n\n\\subsubsection{Separability and Discriminants}\n\nGiven polynomials $f$ and $g$, we can ask how to determine, based on their coefficients, whether they have a common factor (a common root in some extension). 
The answer is that in that case $f, xf, \\ldots, x^{m-1} f, g, xg, \\ldots, x^{n-1} g$ (for $\\deg f = n$, $\\deg g = m$) are linearly dependent, so the determinant of the matrix associated to this system (the resultant) vanishes; up to a constant it is the product of $(\\lambda_i - \\mu_j)$ over the roots $\\lambda_i$ of $f$ and $\\mu_j$ of $g$.\n\nAsking whether $f$ has repeated roots is equivalent to asking whether $f$ and $f'$ have a common factor; the resultant of $f$ and $f'$ gives the discriminant, with the formula \n$$\nDisc(f) = (-1)^{n(n-1)/2}\\prod_{i \\neq j}(r_i - r_j)\n$$\nIt is invariant under the Galois group, thus it is a polynomial in the coefficients, homogeneous of degree $2(n-1)$ in the coefficients of $f$. THIS IS SUPER IMPORTANT!!!!\nWe also have the notion of separable:\n\n\\begin{definition}\nA polynomial $f \\in F[x]$ is separable if it has no repeated roots (in $\\overline{F}$).\nAn extension $K/F$ is separable if the minimal polynomial of any element of $K$ is separable. \n\nAn extension $K/F$ is purely non-separable if the minimal polynomial of any element of $K - F$ is non-separable.\n\\end{definition}\n\nWe have a criterion for a polynomial to be separable:\n\\begin{lemma}\nThe following conditions are equivalent:\n\\begin{enumerate}\n    \\item $f$ is separable\n    \\item $(f, f') = 1$, where $f'$ is the derivative of $f$.\n    \\item The discriminant of $f$ is nonzero, where the discriminant is a homogeneous degree $2n-2$ polynomial in the coefficients of $f$; it is something like $\\prod_{i \\neq j} (r_i - r_j)$, where the $r_i, r_j$ are the roots of $f$.\n\\end{enumerate}\n\\end{lemma}\n\nHere's an equivalent definition of separable extensions:\n\n\\begin{lemma}\nAn extension $K/F$ is separable iff there are exactly $[K:F]$ embeddings of $K$ into $\\overline{F}$ (or into the normal closure of $K$ over $F$).\n\\end{lemma}\n\nHere's the discriminant in low degrees: in degree 2, the discriminant of $f = ax^2 + bx + c$ is $b^2 - 4ac$. In degree 3, the discriminant of $f = x^3 + px + q$ is $-4 p^3 - 27 q^2$ (note that this is not a degree 4 polynomial in the coefficients, as the coefficient of $x^3$ has been set to 1).\n\nFor an irreducible polynomial $f$ to be non-separable, we must have $f' = 0$. This can happen iff we are in char $p$ and $f$ is a polynomial in $x^p$ (only monomials of the form $x^{np}$ appear). \n\n\\begin{example}\nAll char $0$ irreducible polynomials are separable. \n\nAll char 0 field extensions and finite field extensions are separable. For a char $p$ field, purely non-separable extensions come from $X^{p^n} - a$ for $a \\in F$. \nA typical example is to take $F = \\mathbb{F}_p (t)$ and $K = \\mathbb{F}_p (t^{\\frac{1}{p}})$\n\\end{example}\n\n\\begin{definition}\nA field $F$ is called perfect if $\\overline{F}/F$ is separable.\n\\end{definition}\n\nExamples are char 0 fields and finite fields.\n\n\n\\subsubsection{Galois theory}\nWe first start with the definition of a Galois extension:\n\n\\begin{definition}\nA field extension $K/F$ is Galois iff it is normal and separable. In this case, the Galois group $Gal(K/F)$ is defined as $Aut_F(K)$.\n\\end{definition}\n\nHere's where the Galois group comes in:\n\\begin{lemma}\n$|Gal(K/F)| = [K: F]$.\n\\end{lemma}\n\n\\begin{proof}\nBy normality, the Galois group acts transitively on the set of embeddings of $K$ into $\\overline{F}$. One can show that the action is also free. 
Separability implies that the size of the set of embeddings of $K$ into $\\overline{F}$ is $[K:F]$.\n\\end{proof}\n\nFor towers of extensions, we also have the following lemma:\n\\begin{lemma}\nIf we have $E/K/F$, and $E/F$ is normal (separable, Galois), then so is $E/K$.\n\\end{lemma}\n\nGiven a Galois extension $E/F$, we have the associated group $G = Gal(E/F)$. We have the following construction:\n\nFor any intermediate field extension $E/K/F$, we have the associated group $Gal(E/K) \\subset G = Gal(E/F)$. Conversely, given a subgroup $H$, we can construct the subfield $E^H$ of fixed points of $E$ under the $H$-action. \n\nIt is easy to see that $Gal(E/E^H) = H$ and $E^{Gal(E/K)} = K$. Thus we have the following:\n\n\\begin{theorem}[Fundamental theorem of Galois Theory]\nThere exists an inclusion-reversing bijection (an equivalence of poset categories) between the poset of intermediate extensions of $E/F$ and the poset of subgroups of $G = Gal(E/F)$.\n\\end{theorem}\n\n\\begin{remark}\nThis is completely analogous to the covering-space-with-a-base-point situation.\n\\end{remark}\n\nHowever, as with covering spaces, it is more interesting to think about extensions of $F$ not necessarily with an embedding, but in this case it gets a bit harder (there are profinite issues, for example). Nevertheless we still have that $Aut (K/F) = N(Gal (E/K))/Gal (E/K)$, where $N(-)$ denotes the normalizer of $Gal(E/K)$ in $Gal(E/F)$. This means that if $K/F$ is Galois, then we have $Gal (K/F) = Gal(E/F) / Gal(E/K)$\n\n\n\\todo[inline]{Review the basics of field extensions and Galois theory, discriminants. How to know if a Galois extension is in $A_n$ etc. Examples of basic constructions of Galois extensions that are like $\\mathbb{Z}/2\\mathbb{Z} \\times \\mathbb{Z}/2 \\mathbb{Z}$ etc.}\n\n\\subsubsection{Galois theory of Cyclotomic Extensions}\nThe $n$-th cyclotomic field is the splitting field of $x^n-1$ over $\\mathbb{Q}$. Note that $x^n -1$ is of course not irreducible. It breaks into cyclotomic polynomials: \n$\\Phi_n = \\prod_{i} (x - \\zeta_i)$ where the $\\zeta_i$ are the primitive $n$-th roots of unity (there are $\\phi(n)$ of them, where $\\phi$ is of course the Euler function). They actually have integer coefficients. We have that \n$$\nx^n - 1 = \\prod_{d | n} \\Phi_d\n$$\n\nThus we see that $\\mathbb{Q}[\\zeta]$, where $\\zeta$ is a primitive $n$-th root of unity, is a degree $\\phi(n)$ extension over $\\mathbb{Q}$. It is easy to see that this is the splitting field of $x^n -1$, as all other roots are powers of $\\zeta$. The Galois group is $(\\mathbb{Z}/n\\mathbb{Z})^\\times$, a group of order $\\phi(n)$, acting on $\\zeta$ as $\\zeta \\mapsto \\zeta^i$ for $i$ coprime to $n$. Of course this determines where all other powers of $\\zeta$ go.\n\n\\subsubsection{Primitive Element and Normal Basis Theorems}\n\nHere are two nice theorems:\n\\begin{theorem}[Primitive element theorem]\nFor any finite separable field extension $K/F$, there exists an element $k \\in K$ such that $K = F(k)$.\n\\end{theorem}\n\n\\begin{remark}\nA counterexample without separability is the degree $p^2$ extension $\\mathbb{F}_p(U^\\frac{1}{p}, T^\\frac{1}{p})$ over $\\mathbb{F}_p(U,T)$: any rational function in $U^\\frac{1}{p}, T^\\frac{1}{p}$ has its $p$-th power in $\\mathbb{F}_p(U,T)$, so a single element can only generate a subextension of degree at most $p$.\n\\end{remark}\n\nHere's the normal basis theorem:\n\\begin{theorem}[Normal basis theorem]\nFor any Galois extension $K/F$ with Galois group $G$, there exists an element $\\alpha \\in K$ such that the orbit $G \\alpha$ spans $K$ over $F$. Equivalently, we have $F[G] \\cong K$ as left $F[G]$-modules. 
\n\\end{theorem}\n\n\\begin{remark}\nOf course when the extension is not normal this is not true, as the size of the automorphism group is smaller than $[K:F]$.\n\\end{remark}\n\n\\subsubsection{Galois Theory of Finite Fields}\n\nThe theory of finite field extensions is easy: fix a characteristic $p \\neq 0$. Then all finite extensions of char $p$ finite fields are Galois; for any $n > 0$, there is a unique finite field of size $p^n$ (the degree $n$ extension of $\\mathbb{F}_p$). All the Galois groups are cyclic and generated by the Frobenius. Basically it is as easy as it gets.\n\nFix a prime $p$. Given a finite field $K$ of size $p^n$, it is a degree $n$ extension of $\\mathbb{F}_p$. They are isomorphic to the splitting field of $x^{p^n} - x$:\n\n\\begin{lemma}\n$K^\\times$ is cyclic, thus $K^\\times \\cong \\mathbb{Z}/(p^n - 1)\\mathbb{Z} $\n\\end{lemma}\n\n\\begin{proof}\nConsider the group of units $K^\\times$; it is a group of size $p^n -1$. By Lagrange's theorem, we have that for any $x \\in K^\\times$, $x^{p^n -1} = 1$. Moreover, for every $d$ there are at most $d$ elements of order dividing $d$, since the polynomial $x^{d} - 1$ has at most $d$ roots in the field $K$. A finite abelian group with this property is cyclic.\n\\end{proof}\n\n\n\\begin{lemma}\nAny finite field $K$ of size $p^n$ is a splitting field of $x^{p^n} -x$.\n\\end{lemma}\n\n\\begin{proof}\nBy the above, we see that $x^{p^n} -x$ splits completely over $K$. Note that $x^{p^n} -x$ is separable and has $p^n$ distinct roots. Thus $K$ is the splitting field.\n\\end{proof}\n\nThis also implies that extensions of finite fields are normal and separable, thus Galois.\n\nA field $K$ of order $p^n$ contains a field of order $p^m$ iff $m | n$. \n\nNow we need to find the Galois group. Recall that for a ring $R$ over $\\mathbb{F}_p$, there is a Frobenius morphism: $F : R \\rightarrow R$, $x \\mapsto x^p$.\n\n\\begin{theorem}\nLet $K$ be a field of order $p^n$; then $Gal(K/\\mathbb{F}_p) \\cong \\mathbb{Z}/n\\mathbb{Z}$ is cyclic with generator the Frobenius morphism $F$.\n\\end{theorem}\n\n\\begin{proof}\nNote that $F$ fixes $\\mathbb{F}_p$ pointwise. It is an automorphism of $K$ as it is injective and $K$ is finite. As before, since $x^{p^n} = x$ is true for any element of $K$, we see that $F^n = Id$ on $K$. As $[K : \\mathbb{F}_p] = n$, this implies the statement.\n\\end{proof}\n\n\\begin{corollary}\nLet $F$ be a field of order $p^m$ and $K/F$ a degree $n$ extension. Then $Gal(K/F) \\cong \\mathbb{Z}/n\\mathbb{Z}$ with generator $F^m$ (the $m$-th power of the Frobenius). \n\\end{corollary}\n\n\n\\subsubsection{Examples of Galois Extensions with a given Galois Group}\n\n\\todo[inline]{We will see how important this section is. Like $\\mathbb{Z}/2 \\times \\mathbb{Z}/2$, $A_3$, $S_3$, $D_n$, etc.}\n\n\\subsection{Ring Theory and Algebraic Number Theory}\n\nWe will first start with some (commutative) ring theory. There are a bunch of properties a ring can satisfy: integral domain, integrally closed, UFD, PID... All rings are commutative.\n\n\\begin{definition}\nA ring $R$ is an integral domain if it has no zero-divisors, that is, if $xy = 0$ then either $x = 0$ or $y = 0$.\n\\end{definition}\n\nGeometrically they correspond to integral (reduced and irreducible) schemes.\n\n\\begin{remark}\nGiven an integral domain $R$, we can localize at all nonzero elements and get a field $K(R)$, called the field of fractions. 
$R$ injects into $K(R)$.\n\\end{remark}\n\n\\begin{definition}\nGiven an injective ring homomorphism $R \\xhookrightarrow{} S$, an element $s \\in S$ is integral over $R$ if there is a monic polynomial $f(x)$ with coefficients in $R$ such that $f(s) = 0$. The subset of integral elements of $S$ forms a ring, called the integral closure of $R$ in $S$.\n\\end{definition}\n\nGiven the injection $R \\xhookrightarrow{} K(R)$, we have $N(R)$, the integral closure of $R$.\n\n\\begin{definition}\nA ring $R$ is integrally closed iff it equals its integral closure.\n\\end{definition}\n\nGiven an integral domain $R$, we can ask if factorization is unique:\n\n\\begin{definition}\nA domain $R$ is a unique factorization domain (UFD) if every element of $R$ factors into irreducibles uniquely up to units and reordering.\n\\end{definition}\n\n\\begin{lemma}\nAny UFD is integrally closed; the proof is the same argument showing $\\mathbb{Z}$ is integrally closed in $\\mathbb{Q}$.\n\\end{lemma}\n\n\\begin{example}\nIf $R$ is a UFD (like a field), then so is $R[x]$ (though they are not PIDs in general; think $(2, x)$ in $\\mathbb{Z}[x]$).\n\nTo find an integrally closed ring that is not a UFD, we go to dimension one, where Noetherian, integrally closed, Krull dimension 1 domains are called Dedekind domains. They are UFDs iff PIDs. The ring $\\mathbb{Z}[\\sqrt{-5}]$ is not a UFD, as \n$$\n(1 + \\sqrt{-5}) (1 - \\sqrt{-5}) = 2 \\times 3\n$$\nIn fact, all rings of integers of number fields are Dedekind domains, but most are not UFDs (or PIDs).\n\\end{example}\n\nWe can also ask if all ideals are principal (generated by one element):\n\n\\begin{definition}\nA domain $R$ is a principal ideal domain (PID) iff all ideals are principal.\n\\end{definition}\n\n\\begin{remark}\nAll PIDs are UFDs, but not conversely. Example: $\\mathbb{Z}[x]$ above is a UFD but not a PID ($(2, x)$).\n\\end{remark}\n\n\\begin{example}\nThe Gaussian integers $\\mathbb{Z}[i]$ form a PID. Factorization is by the Euclidean algorithm with respect to the norm $|a + bi|^2 = a^2 + b^2$. For a Gaussian integer $a + bi$, its prime factors are elements of prime norm. So factor the norm of $a + bi$ into rational primes and see which Gaussian factors of these primes divide $a + bi$.\n\\end{example}\n\n\n\\subsubsection{The Ramification Theory of Number Fields}\n\nLet $R$ be a Dedekind domain, that is, a Noetherian, integrally closed domain of Krull dimension 1. Given a finite extension $K$ of $F = K(R)$, let $R_K$ be the integral closure of $R$ in $K$. It is also a Dedekind domain.\n\n\\begin{remark}\nThe setting that we are interested in is when $R = \\mathbb{Z}$; then the integral closure is written as $\\mathcal{O}_K$, called the ring of integers.\n\\end{remark}\n\n\\begin{lemma}\nIn a Dedekind domain, ideals factor uniquely into products of prime ideals.\n\\end{lemma}\n\n\\begin{remark}\nThere is this whole theory about fractional ideals (line bundles over the curve) and the class group (the Picard group of the curve). A Dedekind domain is a UFD (which in this case implies PID) iff its class group is zero.\n\\end{remark}\n\nGeometrically, $Spec \\mathcal{O}_K$ is the \"smooth curve\" with generic point corresponding to $K$.\n\nGeometrically, Dedekind domains are one-dimensional curves. We can study $R_K$ by looking at the fibers over points of $Spec\\ R$, aka prime ideals. Algebraically, given a prime ideal $\\mathfrak{q}$ of $R$, we study the quotient ring $R_K/\\mathfrak{q}R_K$. This is a finite extension ring of $R/\\mathfrak{q}$, of constant rank $[K: F]$. 
\n\nThe fiber over each prime $\\mathfrak{q}$ is a disjoint union of points, each one possibly non-reduced (ramification) and possibly of residue degree larger than $1$.\n\nAlgebraically, this means that the ideal $\\mathfrak{q}R_K$ factorizes as a product of primes $\\mathfrak{p}_i$, each of which may appear with multiplicity $> 1$, and each $R_K/\\mathfrak{p}_i$ might be a nontrivial field extension of $R/\\mathfrak{q}$. The multiplicity is called the ramification index $e_i$, and the field extension degree $[R_K/\\mathfrak{p}_i: R/\\mathfrak{q}]$ is called the inertial degree (residue degree) $f_i$. Note that $\\sum_i e_i f_i = [K : F = K(R)]$, where we sum over all primes over $\\mathfrak{q}$. All of this works in the general setting of an (injective) map of Dedekind domains and the splitting of primes.\n\n\n\n\\begin{example}\nTake $R = \\mathbb{Z}$, $F = \\mathbb{Q}$, $K = \\mathbb{Q}[i]$ and $\\mathcal{O}_K = \\mathbb{Z}[i]$; then $(2) = (1+i)^2$ (this is because $1-i = -i (1 + i)$), $(5) = (2 + i) (2 -i)$, and $(7) = (7)$. So we see that $(2)$ ramifies, $(5)$ splits and $(7)$ is inert. As $\\mathbb{Q}[i]$ is a degree 2 extension of $\\mathbb{Q}$, these are the only three possible behaviors.\n\\end{example}\n\n\nWhen the extension $K/F$ is Galois, the situation is even better. Namely, we can study how the Galois group acts. Note that the Galois group restricts to an action on $R_K$ fixing $R$. Thus it acts on the prime ideals lying over $\\mathfrak{q}$. This action is transitive, and thus the ramification indices and inertial degrees are all the same. If there are $g$ distinct primes, we see that $[K:F] = efg$. Fix a prime $\\mathfrak{p}$ over $\\mathfrak{q}$; then the decomposition group $D_\\mathfrak{p}$ is the subgroup of $G = Gal(K/F)$ that fixes $\\mathfrak{p}$. Therefore $|D_\\mathfrak{p}| = ef$. There is an induced map $D_\\mathfrak{p} \\rightarrow Gal((R_K/\\mathfrak{p})/(R/\\mathfrak{q}))$, and this is surjective with kernel $I_\\mathfrak{p}$, the inertia group. This group has size $e$. This is the general ramification theory of Galois extensions.\n\n\\subsubsection{Quadratic Extensions}\nLet $d$ be a square-free integer; then we have the ring $\\mathbb{Z}[\\sqrt{d}]$. One natural question to ask is: is $\\mathbb{Z}[\\sqrt{d}]$ integrally closed? Equivalently, is it the ring of integers in $\\mathbb{Q}[\\sqrt{d}]$? The answer is: not always.\n\n\\begin{theorem}\nIf $d \\equiv 2,3 \\pmod 4$ then $\\mathbb{Z}[\\sqrt{d}]$ is the ring of integers; if $d \\equiv 1 \\pmod 4$, then $\\mathbb{Z}[(1 + \\sqrt{d})/2]$ is the ring of integers.\n\\end{theorem}\n\n\\begin{example}\nSo $\\mathbb{Z}[i]$ and $\\mathbb{Z}[\\sqrt{2}]$ are the rings of integers of $\\mathbb{Q}[i]$ and $\\mathbb{Q}[\\sqrt{2}]$, since $-1 \\equiv 3$ and $2 \\equiv 2 \\pmod 4$.\n\\end{example}\n\n\n\\subsection{Linear Algebra}\n\n\nGiven a finite dimensional vector space $V$ over a field $k$ and $T: V \\rightarrow V$, we want to understand how to decompose $T$. The idea is simple: use algebraic geometry. Consider $T$ as $x$ in $k[x]$; then we can view $V$ as a coherent sheaf over $\\mathbb{A}^1 = Spec\\ k[x]$. Being finite dimensional, $V$ is supported at finitely many points. It is supported on the closed subscheme cut out by a monic polynomial $f$ in $k[x]$ of minimal degree such that $f(T) = 0$. 
This is called the minimal polynomial of $T$: \n\n\\begin{definition}\nThe minimal polynomial $f$ of $T$ is the unique monic polynomial of lowest degree such that $f(T) = 0$.\n\\end{definition}\n\nOn the other hand, we also have the characteristic polynomial, which is a degree $n$ polynomial in $T$:\n\\begin{definition}\n$char(x) \\coloneqq \\det (x \\cdot Id - T)$ is called the characteristic polynomial of $T$.\n\\end{definition}\n\nIts $x^{n-1}$ coefficient is $-tr(T)$ and the $x^0$ coefficient is $(-1)^{n} \\det T$. \n\nThe Cayley-Hamilton theorem says that \n\\begin{theorem}[Cayley-Hamilton]\n$char(T) = 0$\n\\end{theorem}\nThus the minimal polynomial divides the characteristic polynomial.\n\nIn fact, using the classification theory of finitely generated modules over a PID (in this case $k[x]$), we can do more:\n\nUsing the invariant factor decomposition, we see that $V \\cong k[x]/(f_1) \\oplus k[x]/(f_2) \\oplus \\ldots \\oplus k[x]/(f_n)$ with $f_1 | f_2 | \\ldots | f_n$. In this case $f_n$ will be the minimal polynomial. \n\nIn linear algebra, this means that we can find direct summands $V_i$ of $V$ such that $T$ sends $V_i$ to $V_i$ ($T$ looks block diagonal). On each $V_i$ we have a basis in which $T$ acts by the companion matrix of $f_i$, with $1$'s just below the diagonal: just think about how $x$ acts on $k[x]/(f(x))$ with basis $1, x, x^2, \\ldots$. This is the rational canonical form. The minimal polynomial of $T|V_i$ is of course $f_i$, equaling the characteristic polynomial of $T|V_i$ (just by degree reasons). Note that for the first basis vector $v$, the iterates $v, T v, T^2 v, \\ldots$ span $V_i$.\n\nThis companion-matrix form is called the rational canonical form. Note that this exists over any field $k$. \n\nNow if $k$ is algebraically closed, then we know that we can factor each $f$ as $(x-a_1)^{e_1} (x - a_2)^{e_2}\\cdots$,\n\nSo we are reduced to the case where $T$ has characteristic polynomial (equal to its minimal polynomial) $(x-a)^n$. This means that $T' = T - a\\, Id$ has characteristic polynomial $x^n$. There is a basis $x^{n-1} = (T')^{n-1}\\cdot 1, \\ldots, x = T'\\cdot 1, 1$; in this basis, $x^{n-1}$ is an eigenvector for $T'$ with eigenvalue $0$, thus an eigenvector for $T$ with eigenvalue $a$. Moreover, the $i$-th basis vector gets annihilated by $(T')^i$ (this is all the theory of nilpotent matrices). Thus we see that in this basis, $T'$ can be written with $0$ on the diagonal and $1$ on the superdiagonal. Therefore $T$ can be written with $a$ on the diagonal and $1$ on the superdiagonal. \n\nFor a more general $T$, we can then find a basis in which $T$ is block diagonal with blocks as above. This is called the Jordan normal form of $T$.\n\n\\begin{remark}\nNote that this is true for any matrix whose characteristic polynomial factors into linear polynomials. 
Of course this is true for any matrix over an algebraically closed field.\n\\end{remark}\n\n\n\\end{document}", "meta": {"hexsha": "7bede2710f7068e0cb102d77625b1f06632cc78f", "size": 31600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "A.tex", "max_stars_repo_name": "leon2k2k2k/harvard-qual", "max_stars_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "A.tex", "max_issues_repo_name": "leon2k2k2k/harvard-qual", "max_issues_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "A.tex", "max_forks_repo_name": "leon2k2k2k/harvard-qual", "max_forks_repo_head_hexsha": "c3abbcee3d77a688ce060de697f31e92b60ee393", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8611111111, "max_line_length": 841, "alphanum_fraction": 0.707943038, "num_tokens": 9648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.8198933425148214, "lm_q1q2_score": 0.5839544361984628}}
{"text": "\\documentclass{article}\n\n\\usepackage[png,comments]{mgltex}\n\\usepackage{hyperref}\n\n\\title{\\mglTeX*{} usage sample}\n\\author{Diego Sejas Viscarra \\and Alexey Balakin}\n\\date{\\today}\n\n\\mglsettings{\n\tdir=MGL/,\n\tscriptsdir=scripts/,\n\tgraphicsdir=graphics/,\n\tbackupsdir=backups/\n}\n\n\\begin{mglsetupscript}\n\tdefine gravity 9.81 # [m/s^2]\n\\end{mglsetupscript}\n\n\\begin{document}\n\t\n\t\\maketitle\n\t\n\t\\begin{abstract}\n\t\t\\noindent \\mglTeX{} is a \\LaTeX{} package that allows the creation of graphics directly from MGL scripts of the MathGL library (by Alexey Balakin) inside documents. The MGL code is extracted, executed (if shell escape is activated), and the resulting graphics are automatically included.\n\t\t\n\t\tThis document is intended as a sample of the capabilities of \\mglTeX{}, as well as a brief introduction to the package, for those who want to start right away to use it, without diving into the a little bit more technical documentation.\n\t\\end{abstract}\n\t\n\t\\section{Basics on environments}\n\t\\begin{description}\n\t\t\\item[mgl] The easiest way to embed MGL code is the \\verb|mgl| environment. It extracts its contents to a main script associated to the document.\\footnote{Generally, the main script has the same name as the document being compiled. In order to rename it or create a new one, the \\texttt{\\textbackslash mglname} command can be used.} If shell escape is activated, \\LaTeX{} will take care of calling \\verb|mglconv| (the MathGL compiler) with the appropriate settings, and the resulting image will be automatically included.\n\t\t\n\t\tFor example, you could write:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\centering\n\t\t\t  \\begin{mgl}[width=0.85\\textwidth,height=6cm]\n\t\t\t    call 'prepare1d'\n\t\t\t    subplot 2 1 0 '<_' : title 'Standard data plot'\n\t\t\t    box : axis : grid 'xy' ';k'\n\t\t\t    plot y \u2019rGb\u2019\n\t\t\t    \n\t\t\t    subplot 2 1 1 '<_' : title 'Region plot'\n\t\t\t\t  ranges -1 1 -1 1 : origin 0 0\n\t\t\t\t  new y1 200 'x^3-x' : new y2 200 'x'\n\t\t\t\t  axis : grid 'xy' 'W'\n\t\t\t\t  region y1 y2 'ry'\n\t\t\t\t  plot y1 '2k' : plot y2 '2k'\n\t\t\t\t  text -0.75 -0.35 '\\i{A}_1' 'k' : text 0.75 0.25 '\\i{A}_2' 'k'\n\t\t\t  \\end{mgl}\n\t\t\t  \\caption{A simple plot create by \\mglTeX's \\texttt{mgl} environment}\n\t\t\t\\end{figure}\n\t\t\\end{verbatim}\n\t\tThis will produce the following image:\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\begin{mgl}[width=0.85\\textwidth,height=5.5cm]\n\t\t\t\tcall 'prepare1d'\n\t\t\t\tsubplot 2 1 0 '<_' : title 'Standard data plot'\n\t\t\t\tbox : axis : grid 'xy' ';k'\n\t\t\t\tplot y '2'\n\t\t\t\t\n\t\t\t\tsubplot 2 1 1 '<_' : title 'Region plot'\n\t\t\t\tranges -1 1 -1 1 : origin 0 0\n\t\t\t\tnew y1 200 'x^3-x' : new y2 200 'x'\n\t\t\t\taxis 'AKDTVISO' : grid 'xy' ';W'\n\t\t\t\tregion y1 y2 'ry'\n\t\t\t\tplot y1 '2k' : plot y2 '2k'\n\t\t\t\ttext -0.75 -0.35 '\\i{A}_1' 'k' -2 : text 0.75 0.25 '\\i{A}_2' 'k' -2\n\t\t\t\\end{mgl}\n\t\t\t\\caption{A simple plot create by \\mglTeX's \\texttt{mgl} environment}\n\t\t\\end{figure}\n\t\t\n\t\tTwo important aspects of \\mglTeX{} can be noted from this example: First, the \\verb|mgl| environment accepts the same optional argument as the \\verb|\\includegraphics| command from the \\verb|graphicx| package. 
Actually, it also accepts other optional arguments, called \\verb|gray| (to activate/deactivate gray-scale mode), \\verb|mglscale| (to set the factor for scaling the image file), \\verb|quality| (to set the quality of the image), \\verb|variant| (to chose the variant of the arguments of MGL commands in the script), \\verb|imgext| (to specify the extension of the resulting graphic file), and \\verb|label| (to specify a name to save the image). Most of these options are available to every \\mglTeX{} environment or command to create graphics.\n\t\t\n\t\tThe second aspect to be noted about the example is that this script calls a MGL function, \\verb|prepare1d|, which hasn't been defined yet. \\mglTeX{} provides the \\verb|mglfunc| environment for this purpose (see below).\n\t\t\n\t\t\\item[mglfunc] This environment can be used in any part of the \\LaTeX{} document; \\mglTeX{} takes care of placing the corresponding code at the end of the main script, as has to be done in the MGL language.\n\t\t\n\t\tFor example, the function \\verb|prepare1d| that is called in the script above is defined like this\n\t\t\\begin{verbatim}\n\t\t  \\begin{mglfunc}{prepare1d}\n\t\t    new y 50 3\n\t\t    modify y '0.7*sin(2*pi*x)+0.5*cos(3*pi*x)+0.2*sin(pi*x)'\n\t\t    modify y 'sin(2*pi*x)' 1\n\t\t    modify y 'cos(2*pi*x)' 2\n\t\t  \\end{mglfunc}\n\t\t\\end{verbatim}\n\t\t\\begin{mglfunc}{prepare1d}\n\t\t\tnew y 50 3\n\t\t\tmodify y '0.7*sin(2*pi*x)+0.5*cos(3*pi*x)+0.2*sin(pi*x)'\n\t\t\tmodify y 'sin(2*pi*x)' 1\n\t\t\tmodify y 'cos(2*pi*x)' 2\n\t\t\\end{mglfunc}\n\t\tAs you can see, only the body of the function has to be written. The number of arguments of the function can be passed to \\verb|mglfunc| as optional argument, like in the code \\verb|\\begin{mglfunc}[3]{func_with_three_args}|.\n\t\t\n\t\t\\item[mgladdon] This environment just adds its contents to the main script, without producing any image. It is useful to load dynamic libraries, define constants, etc.\n\t\t\n\t\t\\item[mglcode] The \\verb|mglcode| environment is similar to \\verb|mgl|, but it creates its own script, whose name is passed as mandatory argument. The same optional arguments are accepted, except \\verb|label| (for obvious reasons).\n\t\t\\begin{verbatim}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\begin{mglcode}[scale=0.5]{vectorial_flow}\n\t\t\t    new a 20 30 'sin(pi*x)*sin(pi*y)+cos(2*pi*x*y)'\n\t\t\t    new b 20 30 'cos(pi*x)*cos(pi*y)+cos(2*pi*x*y)'\n\t\t\t    \n\t\t\t    subplot 1 1 0 '' : title 'Flow of vector field' : box\n\t\t\t    flow a b 'v'; value 20\n\t\t\t  \\end{mglcode}\n\t\t\t\\end{figure}\n\t\t\\end{verbatim}\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\begin{mglcode}[scale=0.5]{vectorial_flow}\n\t\t\t\tnew a 20 30 'sin(pi*x)*sin(pi*y)+cos(2*pi*x*y)'\n\t\t\t\tnew b 20 30 'cos(pi*x)*cos(pi*y)+cos(2*pi*x*y)'\n\t\t\t\t\n\t\t\t\tsubplot 1 1 0 '' : title 'Flow of a vector field' : box\n\t\t\t\tflow a b '2v'; value 10\n\t\t\t\\end{mglcode}\n\t\t\\end{figure}\n\t\t\n\t\t\\item[mglscript] This environment just creates a script, whose name is specified as mandatory argument. 
It is useful, for example, to create MGL scripts which can later be post-processed by another package, like \\verb|listings| or \\verb|pygments|.\n\t\t\n\t\tFor example, the following won't produce any image, just a script:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{mglscript}{Gaston_surface}\n\t\t\t  subplot 1 1 0 '' : title 'Gaston\\'s surface'\n\t\t\t  ranges -13 13 -40 40\n\t\t\t  new a 200 200 '-x+(2*0.84*cosh(0.4*x)*sinh(0.4*x))/' \\\n\t\t\t    '(0.4*((sqrt(0.84)*cosh(0.4*x))^2+(0.4*sin(sqrt(0.84)*y))))+' \\\n\t\t\t    '0.5*sin(pi/2*x)'\n\t\t\t  new b 200 200 '(2*sqrt(0.84)*cosh(0.45*x)*(-(sqrt(0.84)*sin(y)*' \\\n\t\t\t    'cos(sqrt(0.84)*y))+cos(y)*sin(sqrt(0.84)*y)))/' \\\n\t\t\t    '(0.4*((sqrt(0.84)*cosh(0.4*x))^2+2*(0.4*sin(sqrt(0.84)*x))^2))'\n\t\t\t  new c 200 200 '(2*sqrt(0.84)*cosh(0.45*x)*(-(sqrt(0.84)*cos(y)*' \\\n\t\t\t    'cos(sqrt(0.84)*y))-sin(y)*sin(sqrt(0.84)*y)))/' \\\n\t\t\t    '(0.4*((sqrt(0.84)*cosh(0.4*x))^2+2*(0.4*sin(sqrt(0.84)*x))^2))'\n\t\t\t  rotate 60 60\n\t\t\t  light on\n\t\t\t  xrange c : yrange b : zrange a : crange c\n\t\t\t  surf c b a '#'; meshnum 100\n\t\t\t\\end{mglscript}\n\t\t\\end{verbatim}\n\t\t\\begin{mglscript}{Gaston_surface}\n\t\t\tsubplot 1 1 0 ''\n\t\t\tranges -13 13 -40 40\n\t\t\tnew a 200 200 '-x+(2*0.84*cosh(0.4*x)*sinh(0.4*x))/(0.4*((sqrt(0.84)*cosh(0.4*x))^2+(0.4*sin(sqrt(0.84)*y))))+0.5*sin(pi/2*x)'\n\t\t\tnew b 200 200 '(2*sqrt(0.84)*cosh(0.45*x)*(-(sqrt(0.84)*sin(y)*cos(sqrt(0.84)*y))+cos(y)*sin(sqrt(0.84)*y)))/(0.4*((sqrt(0.84)*cosh(0.4*x))^2+2*(0.4*sin(sqrt(0.84)*x))^2))'\n\t\t\tnew c 200 200 '(2*sqrt(0.84)*cosh(0.45*x)*(-(sqrt(0.84)*cos(y)*cos(sqrt(0.84)*y))-sin(y)*sin(sqrt(0.84)*y)))/(0.4*((sqrt(0.84)*cosh(0.4*x))^2+2*(0.4*sin(sqrt(0.84)*x))^2))'\n\t\t\trotate 60 60\n\t\t\tlight on\n\t\t\txrange c : yrange b : zrange a : crange c\n\t\t\tsurf c b a '#'; meshnum 100\n\t\t\ttitle 'Gaston surface'\n\t\t\\end{mglscript}\n\t\t\n\t\t\\item[mglblock] It writes its contents verbatim to a file, specified as a mandatory argument, and to the \\LaTeX{} document.\n\t\t\n\t\tFor example:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{mglblock}{fractal}\n\t\t\t\tlist A [0,0,0,.16,0,0,.01] [.85,.04,-.04,.85,0,1.6,.85] [.2,-.26,.23,.22,0,1.6,.07] [-.15,.28,.26,.24,0,.44,.07]\n\t\t\t\tifs2d f A 100000\n\t\t\t\tsubplot 2 1 0 '<_' : title 'A fractal fern'\n\t\t\t\tranges f(0) f(1) : axis\n\t\t\t\tplot f(0) f(1) 'G#o '; size 0.05\n\t\t\t\t\n\t\t\t\tsubplot 2 1 1 '<_' : title 'Bifurcation plot'\n\t\t\t\tranges 0 4 0 1 : axis\n\t\t\t\tbifurcation 0.005 'x*y*(1-y)' 'R'\n\t\t\t\\end{mglblock}\n\t\t\\end{verbatim}\n\\begin{mglblock}{fractal}\nlist A [0,0,0,.16,0,0,.01] [.85,.04,-.04,.85,0,1.6,.85] [.2,-.26,.23,.22,0,1.6,.07] [-.15,.28,.26,.24,0,.44,.07]\nifs2d f A 100000\nsubplot 2 1 0 '<_' : title 'A fractal fern'\nranges f(0) f(1) : axis\nplot f(0) f(1) 'G#o '; size 0.05\n\nsubplot 2 1 1 '<_' : title 'Bifurcation plot'\nranges 0 4 0 1 : axis\nbifurcation 0.005 'x*y*(1-y)' 'R'\n\\end{mglblock}\n\t\tAs you can see, although this is a verbatim-like environment, very long lines of code are split to fit the paragraph. Each line of code is numbered; this can be disabled with the \\verb|lineno| option, like \\verb|\\begin{mglblock}[lineno=false]{fractal}|.\n\t\t\n\t\t\\item[mglverbatim] This is like the \\verb|mglblock| environment, but it doesn't produce any script; it just typesets the code in the \\LaTeX{} document. 
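For instance, a snippet like the following would only be typeset, without creating any script or image (a minimal sketch added for illustration; the MGL lines are placeholders, not taken from the original sample):\n\t\t\\begin{verbatim}\n\t\t\t\\begin{mglverbatim}\n\t\t\t  new y 100 'sin(pi*x)'\n\t\t\t  plot y '2b'\n\t\t\t\\end{mglverbatim}\n\t\t\\end{verbatim}\n\t\t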
The environment accepts the \\verb|lineno| option, plus the \\verb|label| option, in case you want to associate a name with the code.\n\t\t\n\t\t\\item[mglcomment] This environment is used to embed comments in the document. You can control whether the contents of this environment are displayed or not, using the \\verb|comments| and \\verb|nocomments| package options, or the \\verb|\\mglcomments{on}| and \\verb|\\mglcomments{off}| commands.\n\t\t\n\t\tAn example of this would be:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{mglcomment}\n\t\t\t\tThis comment will be shown because we used the \"comments\" package option for mglTeX\n\t\t\t\\end{mglcomment}\n\t\t\\end{verbatim}\n\\begin{mglcomment}\nThis comment will be shown because we used the \"comments\" package option for mglTeX\n\\end{mglcomment}\n\t\tOnce again, long lines are broken down to fit the paragraph.\n\t\\end{description}\n\t\n\t\\section{Basics on commands}\n\t\\begin{description}\n\t\t\\item[\\textbackslash mglgraphics] This command takes the name of an external MGL script, compiles it, and includes the resulting image. It accepts the same optional arguments as the \\verb|mgl| environment, except for \\verb|label|, plus a \\verb|path| option, which can be used to specify the location of the script. This is useful when you have a script outside of the \\LaTeX{} document (sent by a colleague for example), but you don't want to transcribe it into your document.\n\t\t\n\t\tFor example, in order to display the image of the script we created with the \\verb|mglscript| environment, we write:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\centering\n\t\t\t  \\mglgraphics[height=9cm,width=9cm]{Gaston_surface}\n\t\t\t  \\caption{Gaston's surface}\n\t\t\t\\end{figure}\n\t\t\\end{verbatim}\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\mglgraphics[height=9cm,width=9cm]{Gaston_surface}\n\t\t\t\\caption{Gaston's surface: Three-dimensional parametric surface}\n\t\t\\end{figure}\n\t\t\n\t\tWe could also compile the script we created with the \\verb|mglblock| environment:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\centering\n\t\t\t  \\mglgraphics[height=7cm,width=10cm]{fractal}\n\t\t\t  \\caption{Examples of fractal behavior}\n\t\t\t\\end{figure}\n\t\t\\end{verbatim}\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\mglgraphics[height=7cm,width=10cm]{fractal}\n\t\t\t\\caption{Examples of fractal behavior}\n\t\t\\end{figure}\n\t\t\n\t\t\\item[\\textbackslash mglinclude] This is equivalent to the \\verb|mglblock| environment, but works for external scripts.\n\t\t\n\t\t\\item[\\textbackslash mglplot] This command allows the fast creation of plots. It takes one mandatory argument, which is a block of MGL code to produce the plot. It accepts the same optional arguments as the \\verb|mgl| environment, plus an additional one, \\verb|setup|, that can be used to specify a block of code to append, defined inside a \\verb|mglsetup| environment (see the example below).\n\t\t\n\t\tThe \\verb|mglsetup| environment can be used if many plots will have the same settings (background color, etc.). 
Instead of writing the same code over and over again, it can be introduced in that environment, and used with the \\verb|\\mglplot| command.\n\t\t\n\t\tAn example of use of the \\verb|mglsetup| environment and the \\verb|\\mglplot| command would be:\n\t\t\\begin{verbatim}\n\t\t\t\\begin{mglsetup}{3d}\n\t\t\t  clf 'W'\n\t\t\t  rotate 50 60\n\t\t\t  light on\n\t\t\t  box : axis : grid 'xyz' ';k'\n\t\t\t\\end{mglsetup}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\centering\n\t\t\t  \\mglplot[setup=3d,scale=0.5]{fsurf 'cos(4*pi*hypot(x,y))*exp(-abs(x+y))'}\n\t\t\t\\end{figure}\n\t\t\t\\begin{figure}[!ht]\n\t\t\t  \\centering\n\t\t\t  \\mglplot[setup=3d,scale=0.5]{fsurf 'sin(pi*(x+y))'}\n\t\t\t\\end{figure}\n\t\t\\end{verbatim}\n\t\t\\begin{mglsetup}{3d}\n\t\t\tclf 'W'\n\t\t\trotate 50 60\n\t\t\tlight on : light 0 0 1 0 'w' 0.25\n\t\t\tbox : axis : grid 'xyz' ';k'\n\t\t\\end{mglsetup}\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\mglplot[setup=3d,scale=0.5]{fsurf 'cos(4*pi*hypot(x,y))*exp(-abs(x+y))'}\n\t\t\\end{figure}\n\t\t\\begin{figure}[!ht]\n\t\t\t\\centering\n\t\t\t\\mglplot[setup=3d,scale=0.5]{fsurf 'sin(pi*(x+y))'}\n\t\t\\end{figure}\n\t\\end{description}\n\t\n\tThere are more environments and commands defined by \\mglTeX{}. The ones presented here are the most basic. More on this topic can be found in the documentation.\n\\end{document}", "meta": {"hexsha": "91447c8de17f9d7e7730dbfdf8799e39bad39b4e", "size": 13180, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mathgl-2.4.3/mgltex/sample.tex", "max_stars_repo_name": "angelamsj/cruise-control", "max_stars_repo_head_hexsha": "0fb94e86217afee2e637de694b0148b99b052ccf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2017-05-06T19:05:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T15:12:09.000Z", "max_issues_repo_path": "mathgl-2.4.3/mgltex/sample.tex", "max_issues_repo_name": "angelamsj/cruise-control", "max_issues_repo_head_hexsha": "0fb94e86217afee2e637de694b0148b99b052ccf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2018-01-17T20:22:42.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-18T20:29:43.000Z", "max_forks_repo_path": "mathgl-2.4.3/mgltex/sample.tex", "max_forks_repo_name": "angelamsj/cruise-control", "max_forks_repo_head_hexsha": "0fb94e86217afee2e637de694b0148b99b052ccf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-04-19T15:33:06.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-15T16:13:12.000Z", "avg_line_length": 47.7536231884, "max_line_length": 749, "alphanum_fraction": 0.6734446131, "num_tokens": 4510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8198933381139645, "lm_q1q2_score": 0.5839544230478703}}
{"text": "\\chapter{Algebraic Number Theory}\n\\label{chap:chap-two}\nAlgebraic number theory is a major branch of number theory that studies algebraic structures related to algebraic integers. Usually, it studies algebraic properties of the algebraic integers' ring such as factorization, the behaviour of ideals, and field extensions. In this chapter, we will introduce some definitions and useful results in algebraic number theory as well as some preliminaries of analytic number theory based on \\citep{Xianke2006ANT,cohen1993course,mollin1999algebraic}. \n\n\\section{Algebraic Numbers and Number Fields}\nWe first give the necessary background on algebraic numbers, number fields etc. Let $\\alpha\\in\\mathbb{C}$. Then $\\alpha$ is called an \\textbf{algebraic number} if there exists $f(x)\\in\\mathbb{Z}[x]/{0}$ such that $f(\\alpha)=0$. The number $\\alpha$ is called an \\textbf{algebraic integer} if, in addition, one can choose $f$ to be monic.(i.e. with leading coefficient equal to 1).\n\nMore generally, we can define the integral element of a ring(See \\citep{Xianke2006ANT}) through similar definition.\n\n%\\begin{definition}\n%Let $\\alpha\\in\\mathbb{C}$ be an algebraic number, and $f(x)$ its minimal polynomial. The conjugates of $\\alpha$ are all the $deg(f)$ roots of $f$ in $\\mathbb{C}$.\n%\\end{definition}\n\nA \\textbf{number field} is a field containing $\\mathbb{Q}$ which, considered\nas a $\\mathbb{Q}$-vector space, is finite dimensional. The number $d=[K:\\mathbb{Q}]=\\text{dim}_{\\mathbb{Q}}K$ is called the degree of the number field $K$.\n\nThe \\textbf{signature} of a number field is the pair $(r_1,r_2)$ where $r_1$ is the number of embeddings of $K$ whose image lie in $\\mathbb{R}$, and $2r_2$ is the number of non-real complex embeddings, so that $r_1+2r_2=n$. If $T$ is an irreducible polynomial defining the number field $K$ by one of its roots, the signature of $K$ will also be called the signature of $T$.\n\nThe following proposition \\citep{cohen1993course} shows that there are only two possibilities for the signature of a Galois extensions.\n\\begin{lemma}\\label{lem:signaturegal}\nLet $K$ be a Galois extension of $\\mathbb{Q}$ of degree $n$. Then,\neither $K$ is totally real $(r_1=n)$,or K is totally complex $(r_2=n/2)$ which can occur only if $n$ is even.\n\\end{lemma}\n\n\\section{Discriminants of Elements and Fields}\nThe definition of discriminants of polynomials can be found in the Appendix \\ref{chap:appB} if need some reviews, now we will introduce the definition of discriminant of elements and fields.\nLet $K$ be a number field of degree $n$, $\\sigma_i$ be the $n$ embeddings of $K$ into $\\mathbb{C}$, and $\\alpha_j$ be the set of $n$ elements of $K$. Then we have $$\\operatorname{Disc}(\\alpha_1,\\dots,\\alpha_n)=\\det(\\sigma_i(\\alpha_j))^2=\\det(\\operatorname{Tr}_{K/\\mathbb{Q}}(\\alpha_i\\alpha_j))$$\nIn particular,If $K=\\mathbb{Q}(\\alpha)$, $f(x)$ is the minimal polynomial of $\\alpha$, then $$\\operatorname{Disc}(f)=\\operatorname{Disc}(\\alpha)=\\operatorname{Disc}(1,\\dots,\\alpha^{n-1})$$\n\nwe denote by $O_K$ the ring of algebraic(rational) integers of $K$. Then we have that the ring $O_K$ is a free $\\mathbb{Z}$-module of rank $n=\\operatorname{deg}(K)$. Hence we can define the (absolutely) integral basis as follows:\n\n\\begin{definition}\nA $\\mathbb{Z}$-basis of the free module $O_K$ will be called an (absolutely)\\textbf{integral basis} of K. 
\nSimilarly, we can define a \\textbf{relative integral basis}: let $A$ be a commutative integral domain with 1, $K=\\operatorname{Frac}(A)$, let $L/K$ be an extension with $[L:K]=n$, and let $B$ be the integral closure of $A$ in $L$. If $B$ is a free $A$-module, i.e.\\ $B=A\\alpha_1\\oplus\\cdots\\oplus A\\alpha_n$, then we call $\\alpha_1,\\dots,\\alpha_n$ an $A$-basis of $B$ (resp.\\ an integral basis of $L/K$). Then we have $$\\operatorname{Disc}(L/K)=(\\operatorname{Disc}(\\alpha_1,\\dots,\\alpha_n)),$$ which is the discriminant of $L/K$. \n\nNext, we state an important theorem \\citep{mollin1999algebraic} regarding the relationship between the discriminant of the minimal polynomial and the discriminant of the field. \n\n\\begin{lemma}\\label{lem:discpoly-field}\nLet $T$ be a monic irreducible polynomial of degree $n$ in $\\mathbb{Z}[x]$, $\\theta$ a root of $T$, and $K=\\mathbb{Q}(\\theta)$. If $f=[O_K:\\mathbb{Z}[\\theta]]$, then $$\\operatorname{Disc}(T)=d(K)f^2.$$\n\\end{lemma}\n\nThe number $f$ is called the \\textbf{index} of $\\theta$ in $O_K$. A criterion for recognizing an integral basis of a field is important; moreover, there are general results for relative integral bases \\citep{Xianke2006ANT}. The following proposition combines these two cases.\n\n\\begin{proposition}\nThe algebraic numbers $\\alpha_1,\\dots,\\alpha_n$ form an integral basis if and only if they are algebraic integers and if $\\operatorname{Disc}(\\alpha_1,\\dots,\\alpha_n)=d(K)$.\nMore generally, if $L/K$ has an integral basis, then $\\beta_1,\\dots,\\beta_n\\in B$ form an integral basis if and only if $(\\operatorname{Disc}(\\beta_1,\\cdots,\\beta_n))=\\operatorname{Disc}(L/K)$.\n\\end{proposition}\n\nThe following result on the structure of the discriminant of a field, due to Stickelberger, cannot be omitted when discussing discriminants:\n\\begin{lemma}[Stickelberger's criterion]\\label{thm:stickelberger}\nLet $K$ be a number field; then $$d(K)\\equiv 0\\text{ or }1\\ (\\operatorname{mod}\\ 4).$$\n\\end{lemma}\n \nThe determination of an explicit integral basis and of the discriminant of a number field is not an easy problem, and is one of the main tasks of this article. However, there is one case in which the result is trivial:\n\\begin{corollary}\nLet $f$ be a monic irreducible (i.e.\\ minimal) polynomial in $\\mathbb{Z}[x]$, $\\theta$ a root of $f$, and $K=\\mathbb{Q}(\\theta)$. Assume that the discriminant of $f$ is squarefree or is equal to $4d$ where $d$ is squarefree and not congruent to 1 modulo 4. Then the discriminant of $K$ is equal to the discriminant of $f$, and an integral basis of $K$ is given by $1,\\theta,\\dots,\\theta^{n-1}$.\n\\end{corollary}\n\nFinally, the following result determining the sign of the discriminant is useful:\n\\begin{lemma}\nLet $K$ be an algebraic number field; then $$d(K)=(-1)^{r_2}|d(K)|.$$\n\\end{lemma}\n\n\\section{Decomposition of Prime Numbers}\nFor simplicity, we continue to work with a number field $K$ considered as a (finite) extension of $\\mathbb{Q}$, and not considered as a relative extension. Many of the results which are explained in that context are still true in the more general case, but some are not. 
Almost always, these generalizations fail because the ring of integers of the base field is not a PID (it is only a Dedekind domain).\nThe main results concerning the decomposition of primes are as follows:\n\\begin{proposition}\\label{thm:decomposition}\nLet $p$ be a prime number; then there exist positive integers $e_i$ such that $$pO_K=\\prod_{i=1}^g \\wp_i^{e_i},$$ where $\\wp_i$ are all the prime ideals above $p$, i.e. $\\wp_i\\cap \\mathbb{Z}=p\\mathbb{Z}$.\n\\end{proposition}\n\nThe integer $e_i$ is called the \\textbf{ramification index} of $p$ at $\\wp_i$ and is denoted $e(\\wp_i|p)$. The degree $f_i$ of the field extension defined by $$f_i=[O_K/\\wp_i:\\mathbb{Z}/p\\mathbb{Z}]$$ is called the \\textbf{residue degree} of $p$ and is denoted $f(\\wp_i|p)$. $g$ is called the \\textbf{decomposition number} of $p$ in $K$.\n\nThere is an important relation between these numbers, given by the following theorem.\n\\begin{proposition}\nLet $[K:\\mathbb{Q}]=n$; then for any $p$, the decomposition in Proposition \\ref{thm:decomposition} satisfies $$\\sum_{i=1}^g e_if_i=n.$$\n\\end{proposition}\n\nLet $pO_K=\\prod_{i=1}^g \\wp_i^{e_i}$ be the decomposition of a prime $p$. We will say that $p$ is \\textbf{inert} if $g=1$ and $e_1=1$, i.e.\\ $pO_K=\\wp_1$.\nWe will say that $p$ \\textbf{splits completely} if $g=n$.\nFinally, we say that $p$ is \\textbf{ramified} if there is an $e_i$ which is greater than or equal to 2 (in other words, if $pO_K$ is not squarefree); otherwise we say that $p$ is \\textbf{unramified}. Those prime ideals $\\wp_i$ such that $e_i>1$ are called the ramified prime ideals of $O_K$. In particular, if $e_1=n$, then we say that $p$ \\textbf{ramifies totally}.\n\nFrom the definitions, the ramification indices and residue degrees satisfy a chain rule in towers of fields, which is a very important fact when decomposing a prime number in a compositum of fields. \n\nIn the case when $K/\\mathbb{Q}$ is a Galois extension, the result is more specific:\nassume $K/\\mathbb{Q}$ is a Galois extension. Then for any $p$, the ramification indices $e_i$ are equal, the residue degrees $f_i$ are equal as well, hence $e f g=n$. \nIn addition, the Galois group operates \\textbf{transitively} on the prime ideals above $p$: i.e.\\ there exists $\\sigma\\in\\operatorname{Gal}(K/\\mathbb{Q})$ such that $\\sigma(\\wp_i)=\\wp_j$.\n\n\nThe existence of ramified primes was shown by Minkowski: if $K$ is a number field different from $\\mathbb{Q}$, then $|d(K)|>1$; in particular, there exists at least one ramified prime in $K$. Moreover, the fundamental ramification theorem \\citep{cohen1993course} is as follows:\n\\begin{proposition}\\label{thm:ramification}\nLet $p$ be a prime number; then $p$ is ramified in $K$ if and only if $p$ divides the discriminant $d(K)$. In particular, there are only a finite number of ramified primes (exactly $w(d(K))$, where $w(x)$ is the number of distinct prime divisors of an integer $x$).\n\\end{proposition}\n\nConversely, for unramified primes we have another theorem, given by Stickelberger \\citep{cohen1993course}.\n\\begin{proposition}[Stickelberger]\\label{thm:unramified}\nIf $p$ is an unramified prime in $K$ with $pO_K=\\prod_{i=1}^g \\wp_i$, we have $$\\left(\\frac{d(K)}{p}\\right)=(-1)^{n-g};$$ for $p=2$, the symbol $\\left(\\frac{d(K)}{2}\\right)=(-1)^{n-g}$ is to be understood as the Jacobi--Kronecker symbol (see Appendix \\ref{chap:appA}).\n\\end{proposition}\n
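\nFor instance (a quick numerical check added for illustration): for $K=\\mathbb{Q}(i)$ we have $d(K)=-4$ and $n=2$; for $p=5$, $\\left(\\frac{-4}{5}\\right)=1=(-1)^{2-g}$ forces $g=2$, and indeed $5=(2+i)(2-i)$ splits completely in $O_K=\\mathbb{Z}[i]$.\n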
\n%\\begin{corollary}[quadratic field]\n%The decomposition type of a prime number $p$ in a quadratic field $K$ of discriminant $D$ is the %following: if $\\left(\\frac{D}{p}\\right)=-1$, then $p$ is inert. If  $\\left(\\frac{D}{p}\\right)=0$, then $p$ is ramified. Finally, if $\\left(\\frac{D}{p}\\right)=1$, then $p$ splits(completely).\n%\\end{corollary}\n\nWe now consider a more difficult algorithmic problem, that of determining the decomposition of prime numbers in a number field. The basic theorem on the subject, which unfortunately is not completely sufficient (though it is valid for Dedekind domains with a power integral basis, even without an essential factor), is as follows.\n\n\\begin{proposition}[Kummer]\\label{thm:kummer}\nLet $K=\\mathbb{Q}(\\theta)$ be a number field, where $\\theta$ is an algebraic integer, whose minimal polynomial is denoted $T(x)$. Let $f$ be the index of $\\theta$, i.e., by definition, $f=[O_K:\\mathbb{Z}[\\theta]]$. Then for any prime $p$ not dividing $f$ one can obtain the prime decomposition of $pO_K$ as follows. Let $$T(x)\\equiv \\prod_{i=1}^g T_{i}(x)^{e_i} (\\operatorname{mod } p)$$ be the decomposition of $T$ into irreducible factors in $\\mathbb{F}_p[x]$, where the $T_i$ are also monic. Then $$pO_K=\\prod_{i=1}^g\\wp_i^{e_i},$$ where $$\\wp_i=(p,T_i(\\theta))=pO_K+T_i(\\theta)O_K.$$\nFurthermore, the residue degree $f_i$ is equal to the degree of $T_i$. \n\\end{proposition}\n\n\\section{Units and Ideal Classes}\\label{sec:unitsideal}\nLet $K$ be a number field and $O_K$ be the ring of integers of $K$. We say that two (fractional) ideals\\footnote{A fractional ideal $I$ of $O_K$ is a non-zero $O_K$-submodule of $K$ such that there exists a non-zero integer $d$ with $dI$ an ideal of $O_K$.} $I$ and $J$ of $K$ are equivalent if there exists $\\alpha\\in K^*$ such that $J=\\alpha I$. The set of equivalence classes is called the \\textbf{class group} of $O_K$ and is denoted $Cl(K)$. \n\nSince fractional ideals of $O_K$ form a group, it follows that $Cl(K)$ is also a group. The main theorem concerning $Cl(K)$ is that it is finite.\n\nFor any number field $K$, the class group $Cl(K)$ is a finite Abelian group, whose cardinality, called the \\textbf{class number}, is denoted $h(K)$. Note that $h(K)=1$ if and only if $O_K$ is a PID (equivalently, a UFD).\n\nDenote by $I(K)$ the set of fractional ideals of $K$, and $P(K)$ the set of principal ideals. We clearly have the exact sequence:\n$$1\\rightarrow P(K)\\rightarrow I(K)\\rightarrow Cl(K)\\rightarrow1.$$\n\nThe set of units in $K$ forms a multiplicative group which we will denote by $U(K)$. Units are algebraic integers of norm equal to $\\pm1$. The torsion subgroup of $U(K)$, i.e. the group of roots of unity in $K$, will be denoted by $\\mu(K)$.\n\nIt is clear that we have the exact sequence:\n$$1\\rightarrow U(K)\\rightarrow K^{\\times}\\rightarrow P(K)\\rightarrow1.$$\nSplicing the above two sequences together, we obtain the following exact sequence:\n$$1\\rightarrow U(K)\\rightarrow K^{\\times}\\rightarrow P(K)\\rightarrow I(K)\\rightarrow Cl(K)\\rightarrow1.$$\n\nThe main result concerning units is the following theorem:\n\\begin{proposition}[Dirichlet's Unit Theorem]\\label{thm:Dirichlet}\nLet $(r_1,r_2)$ be the signature of $K$; then $U(K)$ is a finitely generated Abelian group of rank $r_1+r_2-1$, i.e.\\ we have a group isomorphism $$U(K)\\cong\\mu(K)\\times\\mathbb{Z}^{r_1+r_2-1},$$ where $\\mu(K)$ is a finite cyclic group.\n\\end{proposition}\n
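For example (a classical illustration added here): for the real quadratic field $K=\\mathbb{Q}(\\sqrt{2})$ we have $(r_1,r_2)=(2,0)$, so $U(K)$ has rank $1$; indeed $U(K)=\\{\\pm(1+\\sqrt{2})^n : n\\in\\mathbb{Z}\\}$, with fundamental unit $1+\\sqrt{2}$ and $\\mu(K)=\\{\\pm1\\}$.\n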
If we set $r = r_1+r_2-1$, we see that there exist units $u_1,\\dots,u_r$ such that every element $x$ of $U(K)$ can be written in a unique way as $$x=\\zeta u_1^{n_1}\\cdots u_r^{n_r},$$ where $n_i\\in\\mathbb{Z}$ and $\\zeta$ is a root of unity in $K$. Such a family $(u_i)$ is called a system of \\textbf{fundamental units} of $K$. \n\nA very important property is that this number is finite; moreover, as claimed before, the ideal class group of any number field is a finite Abelian group. As for the class number, we can give an explicit upper bound for it. First of all, we give the definition of the Minkowski bound \\citep{mollin1999algebraic}:\n\\begin{definition}\\label{def:minkowski}\nIf $K$ is a number field, the quantity $$C_K=\\left(\\frac{4}{\\pi}\\right)^{r_2}\\frac{n!}{n^n}\\sqrt{|d(K)|}$$ is called \\textbf{Minkowski's bound}, where $d(K)$ is the discriminant of $K$ and $[K:\\mathbb{Q}]=n$ with signature $(r_1,r_2)$.\n\\end{definition}\nThe following lemma \\citep{mollin1999algebraic} will give us a useful method to determine the class number and class group when $n=[K:\\mathbb{Q}]$ is small.\n\\begin{lemma}\nAny ideal class of a number field $K$ contains an (integral) ideal $I$ such that $$\\operatorname{N}(I)\\leq C_K.$$\n\\end{lemma}\nFrom this lemma, we can also get the so-called Hermite's theorem on discriminants: there are only finitely many number fields having a given discriminant $d$.\n\nIn fact, we have the following method to determine the class number and class group: first, we calculate Minkowski's bound for the number field $K$. By the lemma, we can find all rational primes $p\\leq C_K$. Given the prime decomposition of $pO_K$, we can compute all prime ideals $\\wp$ above $p$. Hence the class group $Cl(K)$ is generated by $A=\\{[\\wp] : \\wp\\mid p,\\ p\\leq C_K\\}$, where $[\\wp]$ is the ideal class in which $\\wp$ lies. If $A$ is not too big, we can then work out the multiplicative relations between its elements and obtain the class group and class number.\n\n\\section{Character and Conductor}\nA \\textbf{character} on a group $G$ is a group homomorphism from $G$ to the multiplicative group of a field (usually the field of complex numbers). The set $\\hat{G}$ of these morphisms forms an abelian group under pointwise multiplication. Sometimes we only consider unitary characters, so that the image lies in the unit circle.\n\nNow we consider a special kind of character, the \\textbf{Dirichlet character}, which is defined as follows:\n\\begin{definition}\nA Dirichlet character is any function $\\chi$ from the integers $\\mathbb{Z}$ to the complex numbers $\\mathbb{C}$ such that $\\chi$ has the following properties:\n\\begin{enumerate}\n\\item the function is periodic, i.e. $\\exists k\\in\\mathbb{Z^+}$, s.t. $\\chi(n)=\\chi(n+k),\\forall n$;\n\\item if $\\operatorname{gcd}(n,k)>1$, then $\\chi(n)=0$; if $\\operatorname{gcd}(n,k)=1$, then $\\chi(n)\\neq0$;\n\\item $\\chi(mn)=\\chi(m)\\chi(n)$ for all integers $m,n$.\n\\end{enumerate}\n\\end{definition}\nDirichlet characters have the following properties. From the definition we directly get $\\chi(1)=1$. Since the character is periodic with period $k$, we say that $\\chi$ is a character to the \\textbf{modulus} $k$; i.e.\\ we have $$a\\equiv b\\ (\\operatorname{mod}\\ k) \\Rightarrow \\chi(a)=\\chi(b).$$\n
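For example (a standard instance included for concreteness), the non-principal character modulo $4$ is given by $$\\chi(n)=\\begin{cases}1, & n\\equiv 1\\ (\\operatorname{mod}\\ 4),\\\\ -1, & n\\equiv 3\\ (\\operatorname{mod}\\ 4),\\\\ 0, & n\\ \\text{even},\\end{cases}$$ which is periodic with period $4$ and completely multiplicative.\n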
If $\\operatorname{gcd}(a,k)=1$, then from Euler's theorem we have $a^{\\phi(k)}\\equiv 1\\ (\\operatorname{mod}\\ k)$, and therefore $\\chi(a^{\\phi(k)})=\\chi(1)=1$; on the other hand, $\\chi(a^{\\phi(k)})=\\chi(a)^{\\phi(k)}$. That is, for all $a$ relatively prime to $k$, $\\chi(a)$ is a $\\phi(k)$-th complex root of unity. \n\nA character $\\chi$ is said to be \\textbf{odd} if $\\chi(-1)=-1$ and \\textbf{even} if $\\chi(-1)=1$. A character is called \\textbf{principal} if it assumes the value $1$ for arguments coprime to its modulus and is $0$ otherwise. A character of an (Abelian) field can be viewed as a character of the Galois group of the field.\n\nFor example, the character group of a quadratic field $K$ is $\\hat{K}=\\{1,\\chi\\}$ (see \\citep{Xianke2006ANT} etc.); the character $\\chi$ coincides with the Legendre-Kronecker symbol (see Appendix \\ref{chap:appA}).\n\nNext, we will introduce the conductor. Taking as base the field of rational numbers, the Kronecker\u2013Weber theorem (Theorem \\ref{thm:kronecker}) states that an algebraic number field $K$ is abelian over $\\mathbb{Q}$ if and only if it is a subfield of a cyclotomic field $\\mathbb{Q}(\\zeta_n)$. The \\textbf{conductor} of $K$ is then the smallest such $n$. \n\nWe can also define the conductor of a character, i.e.\\ the smallest modulus of $\\chi$. More precisely, the conductor of a Dirichlet character $\\chi$ modulo $k$ is the smallest positive integer $k_0$ which divides $k$ and which has the property that $\\chi(n+k_0)=\\chi(n)$ for all $n$. In this case, $\\chi$ is called the \\textbf{primitive character of conductor} (or modulus) $f_{\\chi}$. \n\nThe relation between the conductors of characters and an Abelian number field is that the conductor $f$ of the field $K$ is the least common multiple of the conductors of the characters of its Galois group, i.e. $$f=\\operatorname{lcm}_{\\chi\\in\\hat{K}}\\{f_{\\chi}\\}.$$\n\nFor example, for a real quadratic field, $f$ is the fundamental discriminant of the field. For a cyclic cubic field, $f$ is exactly the quantity $e$ in Theorem \\ref{thm:ccpoly}, i.e.\\ the positive square root of the discriminant of the cyclic cubic field.\n\nLet $p$ be an odd prime and $K/\\mathbb{Q}$ a cyclic extension of degree $p$. Then it is well known \\citep{maki1980determination} that the conductor of $K$ must have the form $f=p^e\\cdot q_1q_2\\cdots q_n$, where $e=0$ or $2$, $n\\geq0$, and the $q_i$ are pairwise distinct rational primes satisfying $q_i\\equiv1(\\operatorname{mod} p)$ for $i=1,2,\\dots,n$. The discriminant of $K$ is just a power of the conductor, $d_K=f^{p-1}$.\n\n
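For instance (an illustration added for concreteness), the real subfield $\\mathbb{Q}(\\zeta_7)^{+}$ is a cyclic cubic field of conductor $f=7$ (here $e=0$, $n=1$, $q_1=7\\equiv1(\\operatorname{mod} 3)$), and its discriminant is $d_K=f^{p-1}=7^2=49$.\n\n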
A theorem called the \\textbf{conductor-discriminant formula}, relating the conductor of a field and the discriminant of the field, was first found by Dedekind. Then, at the beginning of the 1930's, E. Artin and H. Hasse found the general formula (see \\citep{rzedowski2011conductor} etc.) shown below:\n\\begin{proposition}\\label{thm:conddisc}\nLet $K/F$ be a finite Galois extension of global fields with Galois group $G$. Then $$d(K/F)=\\prod_{\\chi\\in\\hat{G}}f_{\\chi}^{\\chi(1)}.$$\nSince for an abelian field $\\chi(1)=1$, when $K$ is an abelian number field we have the special form $$d(K)=(-1)^{r_2}\\prod_{\\chi\\in\\hat{G}}f_{\\chi}.$$\n\\end{proposition}\n\n", "meta": {"hexsha": "5e3ef314532ca654bfe80d9f906c78314bd56cd1", "size": 19635, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/chap-two.tex", "max_stars_repo_name": "daidahao/sustcthesis", "max_stars_repo_head_hexsha": "af536c6559c5a8a3c1315438b99d166153665187", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-03-17T08:46:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-12T02:50:26.000Z", "max_issues_repo_path": "chapter/chap-two.tex", "max_issues_repo_name": "daidahao/sustcthesis", "max_issues_repo_head_hexsha": "af536c6559c5a8a3c1315438b99d166153665187", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/chap-two.tex", "max_forks_repo_name": "daidahao/sustcthesis", "max_forks_repo_head_hexsha": "af536c6559c5a8a3c1315438b99d166153665187", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2015-06-17T06:55:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-17T01:21:27.000Z", "avg_line_length": 107.8846153846, "max_line_length": 589, "alphanum_fraction": 0.7277310924, "num_tokens": 5927, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8198933271118221, "lm_q1q2_score": 0.5839544152117905}}
{"text": "%!TEX root = ./egpaper_for_review.tex\n\\subsection{Datasets}\\label{sec:datasets}\n\\textbf{Social Networks.}\\label{sec:nets}\nOne important application for large scale correlation clustering are social networks.\nWe consider two of those networks from the Stanford Large Network Dataset Collection\\footnote{\\url{http://snap.stanford.edu/data/index.html}}.\nBoth networks are given by weighted directed graphs with edge weights $-1$ and $+1$. \n%\nThe first network is called \\emph{Epinions}. \nThis is a who-trust-whom online social network of a general consumer review site. \nEach directed edge $a\\to b$ indicates that user $a$ trusts  or does not trust user $b$ by a  positive or negative edge-weight, respectively.\nThe network contains $131828$ nodes and $841372$ edges from which $85.3\\%$ are positively weighted.\n%\nThe second network is called \\emph{Slashdot}. \nSlashdot is a technology-related news website known for its specific user community. \nIn 2002 Slashdot introduced the Slashdot Zoo feature which allows users to tag each other as friend or foe. \nThe network was obtained in November 2008 and contains $77350$ nodes and $516575$ edges of which $76.73\\%$ are positively weighted.\n\nWe consider the problem to cluster these graphs such that positively weighted edges ($E^+_{\\to}$) link inside and negatively weighted edges ($E^-_{\\to}$) between clusters.\nIn other words friends and people who trust each other should be in the same segment and foes and non-trusting people in different clusters.\n% \nTo compensate the large impact of nodes with high degree we can normalize the edge weights such that each person has the same impact on the overall network, by enforcing.\n\\begin{align}\n  \\sum_{i\\to j \\in E_{\\to}} |w_{i\\to j}| &= 1&\\forall i\\in V, deg^{\\textrm{out}}(i)\\geq 1 \n\\end{align}\nWe define the following energy function\n\\begin{align}\n J(y) &= \\sum_{i\\to j \\in E^+_{\\to}} y_{ij}\\cdot w_{i \\to j} +  \\sum_{i\\to j \\in E^-_{\\to}} (y_{ij}-1)\\cdot w_{i \\to j} \\nonumber\\\\\n      &= \\sum_{ij \\in E} y_{ij}\\cdot \\underbrace{(w_{i \\to j}+w_{j \\to i})}_{w_{ij}} + \\textrm{const}\n\\end{align}\nwhich is zero if the given partitioning does not violate any relation and larger otherwise.\nWe name these two datasets \\emph{social nets} and \\emph{normalized social nets}.\n\n\\textbf{Network Modularity Clustering.}\nAs another example for network clustering we use the \\emph{modularity-clustering} models from~\\cite{kappes-2015-ijcv} which are small but fully connected.\n\n\\textbf{2D and 3D Image Segmentation}\\label{sec:imseg}\nTo segment images or volumes into a previously\nunknown number of clusters, correlation clustering\nhas been used~\\cite{andres_2011_iccv,kroeger_2012_eccv}.\n\nStarting from a super-pixel/-voxel segmentation,\ncorrelation clustering finds the clustering with the lowest energy.\nThe energy is based on a likelihood of merging adjacent super-voxels.\nEach edge has a probability to keep adjacent segments separate ($p(y_{ij} =1)$)\nor to merge them ($p(y_{ij} = 0)$).\nThe energy function is\n\\begin{align}\n J(y)  &= \\sum_{ij \\in E} y_{ij}\\cdot \\underbrace{  log\\left( \\frac{p(y_{ij} =0)}{p(y_{ij} =1)}\\right) + log \\frac{1-\\beta}{\\beta}  }_{w_{ij}}\n\\end{align}\nwhere $\\beta$ is used as a prior~\\cite{andres_2011_iccv}.\n\nWe use the publicly available benchmark instances from~\\cite{kappes_2013_benchmark_cvpr,kappes-2015-ijcv}.\nFor 2D images from the Berkeley Segmentation Database~\\cite{martin_2001}, we took the 
segmentation problems called \\emph{image-seg}~\\cite{andres_2011_iccv,kappes_2013_benchmark_cvpr}.\nFor 3D volume segmentation we use the models \\emph{knott-3d-150}, \\emph{-300} and \\emph{-450} from~\\cite{kroeger_2012_eccv,kappes-2015-ijcv} as well as the large\ninstance from the \\emph{3d-seg} model~\\cite{andres_2011_iccv,kappes_2013_benchmark_cvpr}. These instances have underlying cube sizes of  $150^3$, $300^3$, $450^3$, and $900^3$, respectively.\nWe also requested larger instances from the authors of~\\cite{kroeger_2012_eccv} who kindly provided us the dataset~\\emph{knott-3d-550} with cube size  $550^3$.\n%More information of the size of instances is given in Tab.~\\ref{tab:instance_sizes}.\n", "meta": {"hexsha": "b665cba9c96ebfb2d4f7f12f66662ae10302f167", "size": 4115, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/datasets.tex", "max_stars_repo_name": "DerThorsten/boring_spaghetti", "max_stars_repo_head_hexsha": "0cacb5cf66d5ebd09f060fda87efcdb7e9c487f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/datasets.tex", "max_issues_repo_name": "DerThorsten/boring_spaghetti", "max_issues_repo_head_hexsha": "0cacb5cf66d5ebd09f060fda87efcdb7e9c487f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/datasets.tex", "max_forks_repo_name": "DerThorsten/boring_spaghetti", "max_forks_repo_head_hexsha": "0cacb5cf66d5ebd09f060fda87efcdb7e9c487f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.9482758621, "max_line_length": 190, "alphanum_fraction": 0.7582017011, "num_tokens": 1203, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8311430645886583, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5839029952220415}}
{"text": "% !TEX root = ../../main.tex\n% !TEX spellcheck = en_GB\n\n\\section{Implementation}\n\\label{sec:impl}\nThe implementation has been done in the framework made by our supervisor Kim Bjerge.\nThe complete idea of the system has not been implemented on the Blackfin.\nWhat has been implemented on the blackfin is the Instantaneous Frequency algorithm, which determines the main frequency in the signal and a sine generator.\nTo the framework has been added a function called \\mintinline{c}{hilbert()} as can be seen below and and a function called \\mintinline{c}{sineGenerator()}.\nBefore the \\mintinline{c}{hilbert()} is called the FFT of the signal will be made using the function \n\n\\begin{minted}[breaklines]{c}\nvoid rfft_fr16(const fract16 input[], complex_fract16 output[], const complex_fract16 twiddle_table[], int twiddle_stride, int fft_size, int *block_exponent, int scale_method);\n\\end{minted}\n\nAfter the FFT, the \\mintinline{c}{hilbert()} will use the FFT signal. In \\mintinline{c}{hilbert()} the FFT signal will first be rearranged into the array $zHilbert[N\\_FFT]$.\nThe first value and the $N\\_FFT/2$ value will be waited less and therefore are multiplied with $0.5$ using the optimized function $cmlt\\_fr16(a,b)$.\nFrom $N\\_FFT/2$ and beyond everything is set to zero. \nAfter rearranging the inverse FFT of zHilbert is made and placed in $m\\_ifft\\_output$.\nThe inverse FFT is also made using the build in functions.\n\n\\begin{minted}[linenos, breaklines, bgcolor=lightgray]{c}\nvoid DynamicFilter::hilbert(){\n\t// parameters\n\tint block_exponent;\n\tfract16 instafreq[N_FFT];\n\tfract16 meanFreq;\n\tfract32 temp = 0;\n\tuint16_t dif;\n\tcomplex_fract16 scale;\n\tscale.re=0.5; scale.im=0.5;\n\tfract32 temp32[512];\n\t\t\t\n\t// Hilbert transform\n\tzHilbert[0] = cmlt_fr16(m_fft_output[0], scale);\n\t\n\tfor (int i = 1; i < N_FFT/2; i++){\n\t\tzHilbert[i] = m_fft_output[i];\n\t}\n\t\t\n\tzHilbert[N_FFT / 2] = cmlt_fr16(m_fft_output[N_FFT / 2], scale);\n\t\n\t// ifft of the hilbert signal\n\tifft_fr16(zHilbert, m_ifft_output, m_twiddle_table, 1, N_FFT, &block_exponent, 1);\n\t\t\n\t////// instant frequency finding //////\n\t//first finding the phase\n\tfor (int i = 0; i<N_FFT; i++){\n\t\tinstafreq[i] = atan2_fr16(m_ifft_output[i].im, m_ifft_output[i].re);\n\t}\n\t\n\t// unwrapping with threshold 2PI\n\ttemp32[0] = instafreq[0];\n\tfor(int i = 1; i<N_FFT;i++){\n\t\tdif = (((int16_t)instafreq[i - 1] - (int16_t)instafreq[i]) + (1 << 15)) % (2 * (1 << 15)); //1<<15 = pi\n\t\t\n\t\ttemp32[i] = ((int32_t)(dif - (1 << 15)));\n\t\t\t\n\t\ttemp32[i] = sub_fr1x32(temp32[i-1],\ttemp32[i]); // optimization\n\t}\n\t\t\n\t// find the diff\n\tfor(int i = 1; i<N_FFT;i++){\n\t\ttemp32[i-1] = (sub_fr1x32(temp32[i],temp32[i-1])) * fs / (2 * PI_FLOAT); // optimization\n\t}\n\t\t\n\t//finding the mean and therefore the value\n\tfor(int i = 100; i < N_FFT-99; i++){\n\t\ttemp += temp32[i];\n\t}\n\t\t\n\tmeanFreq = ((temp / (N_FFT-200)) * PI_FLOAT) / (1 << 15);\n\t\t\n}\n\\end{minted}\nNow the Instantaneous Frequency finding starts. 
First, the argument of every value of the IFFT signal is found using \\mintinline{c}{atan2_fr16}.\nAfterwards, the remainder is found after the subtraction of two consecutive arguments.\nLastly, the difference is found, and a mean of the middle values is used to find \\mintinline{c}{meanFreq}.\n\n\\subsection{Optimized functions}\nIn the implementation, some of the built-in functions have been used; these functions are optimized and should use as few clock cycles as possible to execute.\nIn \\cref{tab:optimized_func}, some of the different built-in functions which are used can be seen.\n\n\\begin{table}\n\t\\centering\n\t\\begin{tabularx}{\\textwidth}{X X}\n\t\t\\toprule\n\t\t\\textbf{Function name} & \\textbf{Description }\\\\\n\t\t\\midrule\n\t\t\\mintinline[breaklines]{c}{rfft_fr16(input, output, twiddle_table, twiddle_stride, fft_size, *block_exponent, scale_method)} & Calculates the FFT of the input signal.\\\\\n\t\t\\mintinline[breaklines]{c}{ifft_fr16(input, output, twiddle_table, twiddle_stride, fft_size, *block_exponent, scale_method)} & Calculates the inverse FFT of the input signal. \\\\\n\t\t\\mintinline{c}{cmlt_fr16(a, b)} & Multiplies two complex fract16 values. \\\\\n\t\t\\mintinline{c}{sub_fr1x32(a, b)}& Subtracts two fract32 values. \\\\\n\t\t\\mintinline[breaklines]{c}{atan2_fr16(imaginary_part, real_part)} & Finds the argument of a complex fract16 value. \\\\\n\t\t\\bottomrule\n\t\\end{tabularx}\n\t\\caption{The used optimized functions.}\n\t\\label{tab:optimized_func}\n\\end{table}\n\n\\mintinline{c}{twiddle_table} is a table made in the setup of the program.\nIt is $W_n = \\cos(\\frac{2\\pi kn}{N})-j\\sin(\\frac{2\\pi kn}{N})$.\nIt is made so that the FFT and IFFT are executed faster, since these functions do not have to calculate this part of the equation but can just take the value from the \\mintinline{c}{twiddle_table}.\nAs long as the size of the FFT or IFFT does not change, the values in the twiddle table do not have to be calculated again.\n\n\\FloatBarrier", "meta": {"hexsha": "c3a30fd59cabf383794eb943713bdfb318707e38", "size": 4820, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/Report/SimpleSine/implementation.tex", "max_stars_repo_name": "lsangild/ETISB", "max_stars_repo_head_hexsha": "7ed401e1a9d7b34120f953d1afe5266d57f9e7a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/Report/SimpleSine/implementation.tex", "max_issues_repo_name": "lsangild/ETISB", "max_issues_repo_head_hexsha": "7ed401e1a9d7b34120f953d1afe5266d57f9e7a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/Report/SimpleSine/implementation.tex", "max_forks_repo_name": "lsangild/ETISB", "max_forks_repo_head_hexsha": "7ed401e1a9d7b34120f953d1afe5266d57f9e7a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.4716981132, "max_line_length": 195, "alphanum_fraction": 0.7265560166, "num_tokens": 1453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430394931456, "lm_q2_score": 0.702530051167069, "lm_q1q2_score": 0.5839029620622729}}
{"text": "%----------------------------------------------------------------------------------------\n% Limitations\n%----------------------------------------------------------------------------------------\n\n\\section{Limitations}\n\nAlthough what we have discussed in this paper so far is robust, there are some chinks in Randomization Based Inference\u2019s armor. \n\n\\begin{enumerate}\n    \\item \\textbf{Random Data Condition} \\newline\n\tThe first and most glaring limitation of RBI is that it is not truly conditionless. The one condition that it does have is that the initial data has to be random. Some basic cases where this is not met is when you have access to the whole population, in which case you do not need randomization based inference, or when collecting data you failed to use a randomization procedure. The latter is the situation we are more focused on in this paper. In general, when a sample fails to be random, it is no longer assumed to be representative of the population. \n\tWe can connect this to randomization based inference in that we are trying to find an asymptotically correct p-value. In order to do this, we have to randomly shuffle the data and extract our test statistic and build the randomization distribution and then calculate the p-value from that. This process becomes invalid when the original data fails to be random because of two key issues. We can start with the simpler one: The original sample contains high levels of bias. Let\u2019s assume that we are trying to figure out if a bill on an upcoming ballot does not have equal support in a community. Instead of using your city hall and a random number generator to sample people, you decide to ask everyone who walks into the local Hobby Lobby. In your mind, this seems reasonable, considering that you don\u2019t know who is going to walk in and what they are thinking about the bill. However, you have failed to consider the population of people who shop at Hobby Lobby. This can introduce bias into your sample. It could be response bias or under-coverage bias (specific groups of people left out). The fatal flaw that both the former and the latter present is that they alter the null hypothesis of your test. If you were to continue with your experiment and use randomization based inference, you won\u2019t be able to trust any of your permuted test statistics and your final p-value due to the bias present. \\newline\\\\\n\tThe second and more complex issue of not obtaining a random sample focuses on the obvious connection between a random sample and randomization based inference. The main point to be made here is that if a sample is not random, we cannot assume it to be representative of the population our sample comes from. In the Hobby Lobby story, the sample taken could be representative of the population. Here, even if some randomized sampling procedure is taken, the sample could still be not random. Typically when we code in R, we use tests like Bartel\u2019s test for randomness to determine if data has been sampled randomly. If the test fails to reject the null hypothesis that the data is random, then we are fair to move forward with randomization-based inference. When the null hypothesis is rejected, randomization-based inference will not work. This is due to the fact that we are typically using a test to make an inference about the population. We use the term correlation, because our tests can\u2019t conclude causation. 
Regardless of the prior statement, the main problem here is that our random variables, no matter the type of test, are not approximately normally distributed. This is because the Central Limit Theorem needs random variables for it to apply. When a sample is not random, the variables we are observing are also not random, and so the quantity we are observing is not approximately normally distributed. No matter the test statistic calculated, we will not be able to trust it, causing a breakdown of the entire randomization process. \\newline\\\\\n\tIn summary, in order for our randomization based inference to be of value, we need a sample that is free of bias and random. This ensures that our variables are random variables and we can trust the test statistic created from them. \n    \\item \\textbf{Small Sample Sizes} \\newline\n\tFor most tests, there is typically a condition that requires the initial sample to be of an acceptable size. For example, when trying to determine whether two different treatment groups have a difference in means that is not equal to zero, the Central Limit Theorem requires at least 30 data points, assuming our sample is random. Although randomization based inference can generate many permutations of a given sample, the one thing it can\u2019t do is increase the sample size. Randomization based inference does not deal with the counterfactual statement that \u201cif the sample size were larger, we could be more sure of our conclusion\u201d. Instead, the point here is that a randomization distribution built from a small original sample will be further from normal than one built from a satisfactory sample size. This affects the robustness of our randomization based inference. Sure, it created something that could pass as normal to any given statistician, but the level of asymptotic certainty we have is in jeopardy. \n\t\\item \\textbf{Population Parameter Estimation} \\newline\n\tOne of the important aspects of model creation in statistics is the idea of parameter estimation and statistical accuracy. We often fit one model at the beginning of our experiment and then go through rigorous testing and fitting procedures to develop the best model. This best model is not only accurate at its forecasting and initial predictions, but also does not over-fit to the data, so that it can be generalized to new data. The limitation we are getting at here is that randomization based inference is not something we can use to develop better population parameter estimates. \\newline\\\\\n\tIn general, we use randomization based inference to help us figure out if the perceived effects we see in our original model are trustworthy. However, if we are doing linear regression, our beta estimates are only as good as the original sample we have. In randomization based inference, we resample data without replacement. This means we assume our null hypothesis is true. However, if we wanted to make sure our sample statistics were better at estimating the true population parameters given our test, we would have to take a different approach. Instead of just randomly shuffling our response variable, we could take all of our observations and pool them back up, then choose one response value at random and assign it to a given observation. We could then put the response value back into the pool of responses and repeat the process. This is sampling with replacement. 
Instead of building a randomization distribution, which looks to estimate the distribution of the given test statistic, this estimates the sampling distribution. This estimated sampling distribution is centered around our original sample mean and has a standard deviation derived from our original sample as well. However, with this distribution, we can make confidence intervals that give us more precise estimates of the population parameter. This process is referred to as bootstrapping. In most cases, we should really pair these methods so that we can have better model accuracy as well as more clarity around the effect we are testing for. However, the computational cost required for both procedures is quite high. Lastly, the level of interpretability when using both methods of resampling is murky due to the different procedures used. Depending on what your goal is, picking one of these methods will be more useful than including both. \n\\end{enumerate}\n", "meta": {"hexsha": "256db4c18c4de1858b6203bd3958bfd8b99795d8", "size": 7797, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project/Math%20420%20Final%20Paper/content/4-limitations.tex", "max_stars_repo_name": "jake-caldwell/Math420Proj", "max_stars_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project/Math%20420%20Final%20Paper/content/4-limitations.tex", "max_issues_repo_name": "jake-caldwell/Math420Proj", "max_issues_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project/Math%20420%20Final%20Paper/content/4-limitations.tex", "max_forks_repo_name": "jake-caldwell/Math420Proj", "max_forks_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 371.2857142857, "max_line_length": 1813, "alphanum_fraction": 0.7906887264, "num_tokens": 1502, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680199891789, "lm_q2_score": 0.7853085884247212, "lm_q1q2_score": 0.5836162287400971}}
{"text": "In this appendix we will show in detail some operations on lists built on top of what presented in Section \\ref{sec:ch_background_template_metaprogramming} that can be built by using meta-programming in C++ templates. The goal of this appendix is to convince the reader about the level of complexity of using C++ templates to express meta-programming and why it is preferable to use a dedicated meta-compiler.\n\n\\section{Element Getter}\n\\label{sec:app_templates_getter}\nAccessing the n-th element of a list defined with templates mimics the behaviour of the its definition in a functional programming languages given below:\n\n\\begin{lstlisting}\nlet rec nth (n : int) (l : List<'a>) : 'a =\n  match l,n with\n  | x :: xs,0 -> x\n  | x :: xs,_ -> nth (n - 1) xs\n\\end{lstlisting}\n\n\\noindent\nThe recursion base case is when the index we want to access is 0, which means that we want to access the head of the list. In this case we simply return the head by decomposing the list through pattern matching. In the other case we simply make a recursive call by passing the index decreased by 1 and the tail of the list. In template meta-programming, this is translated into a template that performs the same task:\n\n\\begin{lstlisting}\ntemplate <typename List> struct Nth<LST, 0> \n{\n    typedef typename List::Head result;\n};\n\\end{lstlisting}\n\n\\noindent\nAs shown in Section \\ref{sec:ch_background_template_metaprogramming}, the arguments of the function are passed as arguments of the template itself. This version of the template is specialized for the integer 0, which corresponds to the base case of the recursion. The general case of the recursion has a dedicated template as follows:\n\n\\begin{lstlisting}\ntemplate <typename List, int N> struct Nth \n{\n    typedef typename List::Tail Tail;\n    typedef typename Nth<Tail, N - 1>::result result;\n};\n\\end{lstlisting}\n\n\\noindent\nThe template contains a type definition for the parameter corresponding to the list tail and another type definition corresponding to the recursive call to another \\texttt{Nth} template, this time containing only the tail of the list and the counter decreased by 1. To test this we can use the following sample:\n\n\\begin{lstlisting}\ntemplate <int N> struct Int \n{\n  static const int result = N;\n};\n\ntypedef List<Int<1>, List<Int<2>, List<Int<3>>>> testList;\n\nint main()\n{\n  cout << Nth<testList, 2>::result::result << endl;\n}\n\\end{lstlisting}\n\n\\noindent\nNote that we need to access \\texttt{result} twice, because the first \\texttt{result} is the type of the head of the list generated by template, which is \\texttt{Int}. So calling \n\n\\begin{lstlisting}\nNth<testList, 2>::result\n\\end{lstlisting}\n\n\\noindent\nreturns \\texttt{Int}, that is a type. If we want to access the value stored in \\texttt{Int} then we must access the constant integer \\texttt{result} contained in it. Note that if we try to access an invalid index in the list, the compiler will complain because it will try to generate a template with the tail of a list that does not exist. 
In this way, something that in a normal program would be a runtime error is here treated as a compilation error.\n\n\\section{Element Existence}\n\\label{sec:app_templates_existance}\nThe code that tests the existence of an element within a list is recursive as well and mimics the behaviour of its functional counterpart:\n\n\\begin{lstlisting}\nlet rec exists (element : 'a) (l : List<'a>) : bool =\n  match l with\n  | [] -> false\n  | x :: xs when element = x -> true\n  | x :: xs -> exists element xs\n\\end{lstlisting}\n\n\\noindent\nThe function returns \\texttt{false} as a base case when the list is empty, because it means that the whole list has been examined and the element has not been found. The second case is when the head of the list matches the element, which returns \\texttt{true}. The last case is used when the head of the list does not match the element; thus we recursively call \\texttt{exists} on the tail. In order to implement this function with C++ templates, we need to define two utility templates able to compare two elements:\n\n\\begin{lstlisting}\ntemplate <class X, class Y> struct Eq { static const bool result = false; };\ntemplate <class X> struct Eq<X, X> { static const bool result = true; };\n\\end{lstlisting}\n\n\\noindent\nThe first template has its result set to \\texttt{false} when its arguments are different, while the second is a specialization of the first for the case where the two template arguments are the same, and its result is \\texttt{true}. With these utility templates we can correctly compare the values of a list defined with templates and define the recursive template for the existence function:\n\n\\begin{lstlisting}\ntemplate <class Element, class List> struct Exists\n{\n  static const bool result = \n    Eq<Element, typename List::Head>::result || Exists<Element, typename List::Tail>::result;\n};\n\ntemplate <class Element> struct Exists<Element, NIL>\n{\n  static const bool result = false;\n};\n\\end{lstlisting}\n\n\\noindent\nThe first template is the general case of the recursion. It uses \\texttt{Eq} to test the value of the searched element against the head of the list. It then combines this result, through a logical or, with \\texttt{Exists} applied to the remaining tail of the list. The second template is the base case and contains a constant set to \\texttt{false}. 
This corresponds to the base case of the recursive function above.", "meta": {"hexsha": "02c8efcd4359121ba290b5b38c2011394e7e1902", "size": 5376, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PhD Thesis Francesco/Chapters/appendix_template.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "PhD Thesis Francesco/Chapters/appendix_template.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PhD Thesis Francesco/Chapters/appendix_template.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.303030303, "max_line_length": 516, "alphanum_fraction": 0.763578869, "num_tokens": 1287, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5836162194047138}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\n% Main maths packages\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\n\\begin{document}\n\n\\subsection*{Math spacing}\n\t\nNormal space in math mode:\n\n\\[1 2\\]\n\nCustom spaces in math mode::\n\n% [mu] is the math unit equal to 1/18 em, where em is taken from the math symbols family, according to ShareLaTeX.\n\\[1\\!2\\] % -3/18 of \\quad (= -3 mu)\n\\[1\\,2\\] % 3/18 of \\quad (= 3 mu)\n\\[1\\:2\\] % 4/18 of \\quad (= 4 mu)\n\\[1\\;2\\] % 5/18 of \\quad (= 5 mu)\n\\[1\\ 2\\] % equivalent of space in normal text\n\\[1\\quad2\\] % space equal to the current font size (= 18 mu)\n\\[1\\qquad2\\] % twice of \\quad (= 36 mu)\n\n\\subsection*{Phantom spacing}\n\nThe command \\emph{phantom} creates a horizontal space according the width of the argument.\n\n\\[\n\t\\begin{pmatrix}\n\t\t123 \t\t\t& 456\\\\\n\t\t\\phantom{12}3\t& 4\\phantom{56}\n\t\\end{pmatrix}\n\\]\n\n\\end{document}", "meta": {"hexsha": "0c934c2a22966180af9d4021c754105cabeac96d", "size": 906, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "compendium/mathematics/spaces.tex", "max_stars_repo_name": "ZenLulz/LatexCompendium", "max_stars_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-07-30T21:43:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-23T20:16:19.000Z", "max_issues_repo_path": "compendium/mathematics/spaces.tex", "max_issues_repo_name": "ZenLulz/LatexCompendium", "max_issues_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "compendium/mathematics/spaces.tex", "max_forks_repo_name": "ZenLulz/LatexCompendium", "max_forks_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.0975609756, "max_line_length": 114, "alphanum_fraction": 0.6633554084, "num_tokens": 329, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5836162015362768}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\usepackage{tikz}\n\\usepackage{pgfplots}\n\\usepackage[margin=1.5in]{geometry}\n\\usepackage[T1]{fontenc}\n\n\\newcommand{\\norm}[1]{\\lvert #1 \\rvert}\n\n% see the following for various vector notations\n% http://www.tapdancinggoats.com/latex-vector-notation.htm\n\n\\title{Quintic Splines for FTC}\n\\author{Ryan Brott}\n\\date{}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nIn this paper, we explore the use of quintic splines for more sophisticated robot pathing in FTC. Traditionally, FTC robot autonomous motion consists of a series of linear movements (including holonomic drive strafing) and point turns. Although these simple path primitives generally suffice, they are not time-optimal under reasonable kinematic constraints, especially for nonholonomic (e.g., tank/differential) drivetrains. That is, the fastest route between two poses in general is not two point turns and a line.\\footnote{The keen reader will realize that this is not necessarily the case for holonomic drives. All pose-to-pose movements can be executed with a combination of strafing and rotating. Nevertheless, splines can still be of use. For one, traveling along the tangent can reduce dead reckoning odometrical errors accrued from translating on the lateral axis (especially for mecanum drives). Additionally, splines can help navigate around obstacles in situations where piecewise linear paths would normally be required.} To address this, we propose the use of quintic splines to achieve quick, smooth motion on the field.\n\nIn the first part of the paper, we will describe the problem in depth and give motivation for splines. In the second half, we explore some of the mathematics behind quintic splines (and parametric curves more generally) including interpolation, reparametrization, and heading computation.\n\n\\section{Problem}\nIn FTC, it's standard to have a sequence a poses that you want the robot to follow in autonomous (although pre-planned motions can also be utilized in TeleOp). For example, in the Relic Recovery game, a common movement task may be moving from one pose on the balancing stone to another in front of the cryptobox. This is traditionally handled with a series of straight lines and turns that are executed with 1D position PID controllers and potentially motor-based velocity PID (e.g., \\texttt{RUN\\_USING\\_ENCODER}).\n\nFor many game tasks and teams, this is a perfectly viable approach. However, in scenarios where speed is desirable (such as the effectively unlimited scoring potential in Relic Recovery autonomous), conventional methods tend to become less effective. Higher speeds usually lead to greater wheel accelerations and slippage, hindering odometry. Additionally, the feedback controllers may have more chaotic transient behavior or decreased steady-state performance. \n\nIn this case, motion profiling can be used to achieve higher speeds without sacrificing accuracy by observing the robot's physical constraints (e.g., maximum velocity, acceleration).\\footnote{For a good introduction to motion profiling, see the canonical talk by mentors from FRC teams 254 and 971: \\url{https://www.youtube.com/watch?v=8319J1BEHwM}}. However, this is only part of the solution; in fact, motion profiling amplifies the discontinuities between the elements of a conventional pose-to-pose path. 
Each transition between rotations and translations demands a full deceleration-acceleration cycle.\n\nTo eliminate this unnecessary pause, we seek to combine the rotation and translation into a single smooth curve. Although many curves suffice, splines are typically employed for this purpose. For simplicity, this paper only considers polynomial splines\\footnote{\\url{http://www.ryanjuckett.com/programming/biarc-interpolation/} is a great introduction to biarc splines.}. Quintic splines in particular were selected for their balance between continuity and curvature, although the methods described can be easily extended to polynomials of arbitrary odd degree.\n\n\\begin{figure}\n    \\centering\n\n    \\begin{tikzpicture}\n        \\begin{axis}[\n            width=8cm,\n            height=8cm,\n            axis x line=center,\n            axis y line=center,\n            xtick={-72,-48,...,72},\n            ytick={-72,-48,...,72},\n            x dir=reverse,\n            xlabel={$y$},\n            ylabel={$x$},\n            xlabel style={below left, at={(-0.02, 0.5)}},\n            ylabel style={above right, at={(0.52, 0.98)}},\n            xmin=-80,\n            xmax=80,\n            ymin=-80,\n            ymax=80,\n            axis equal=true]\n            \\addplot graphics[xmin=-72,ymin=-72,xmax=72,ymax=72] {../image/transparent_field.png};\n            \\addplot [mark=none, style=very thick, black!30!green, domain=0:1] ({-108.00000*x^5+288.00000*x^4-216.00000*x^3+0.00000*x^2+0.00000*x+48.00000},{44.73506*x^5-119.29351*x^4+89.47013*x^3+0.00000*x^2-50.91169*x+48.00000});\n            \\addplot[mark=none, style=very thick, black!30!green] coordinates {(-48,48) (-48,30) (-12,12)};\n        \\end{axis}\n    \\end{tikzpicture}\n    \n    \\caption{Sample coordinate system for describing positions on the Relic Recovery field. This right-handed frame can be extended to three dimensions with the positive z-axis protruding from the origin. The path on the blue side is a typical autonomous route seen in FTC (don't forget there's a turn between the line segments --- three profiles in total). The path on the red side is a quintic spline version of roughly the same move. The spline path is 1.6 times faster than the conventional path with reasonable drive constraints.}\n    \\label{fig:coordinate_system}\n\\end{figure}\n\n\\section{Preliminaries}\n\n\\subsection{Coordinate System}\nBefore constructing paths, it is imperative to define a consistent coordinate system. With this coordinate frame, points on the field can be uniquely described. This allows for the specification of robot poses, which consist of a position and a heading direction. This heading is typically encoded as the angle from the $x$-axis. \\autoref{fig:coordinate_system} shows a sample coordinate system for the Relic Recovery field. When determining a coordinate system, it is advantageous to choose axes around the symmetries of the environment. For the standard square FTC field, this comprises putting the origin at the center and placing the planar axes perpendicular to the field walls.\n\n\\subsection{Vectors and Parametric Curves}\nTo fully understand the following mathematics, it helps to grasp a little vector calculus (don't be scared --- only basic knowledge of single-variable calculus is required). Recall that a Euclidean vector $\\mathbf{v}$ in $n$ dimensions ($\\mathbf{v} \\in \\mathbb{R}^n$) uniquely encodes a single point in the space (commonly represented as an arrow from the origin to the point). 
This vector can be decomposed into $n$ components:\n$$\n\\mathbf{v} = (x_1, x_2, \\ldots, x_n)\n$$\nThe length of a vector is given by its norm:\n$$\n\\norm{\\mathbf{v}} = \\sqrt{x_1^2 + x_2^2 + \\ldots + x_n^2}\n$$\n\nAside from vector addition and scalar multiplication, two additional operations are commonly defined between Euclidean vectors. First, the dot product is defined as the scalar sum of the element-wise products of two vectors:\n$$\n\\mathbf{v} \\cdot \\mathbf{w} = v_1w_1 + v_2w_2 + \\ldots + v_nw_n\n$$\nNote that the norm can be alternatively expressed as $\\norm{\\mathbf{v}} = \\sqrt{\\mathbf{v} \\cdot \\mathbf{v}}$. Second, the cross product is defined for three-dimensional vectors as the pseudo-determinant of this quasi-matrix:\n$$\n\\mathbf{v} \\times \\mathbf{w} =\n\\begin{vmatrix}\n    \\hat{\\imath} & \\hat{\\jmath} & \\hat{k}\\\\[4pt]\n    v_1 & v_2 & v_3\\\\[4pt]\n    w_1 & w_2 & w_3\n\\end{vmatrix}\n$$\n\nTwo-dimensional curves can be represented as graphs of one-dimensional functions ($y = f(x)$). However, this puts unnecessary constraints on the set of possible curves (e.g., no two points on the curve can share the same x-value, no self-intersections, no vertical tangents), especially in higher dimensions. Instead, we will represent curves as a series of vectors that trace out the shape. This is analogous to a parametric curve $\\mathbf{r}(t)$ which maps a single real parameter $t$ to the corresponding path vector ($\\mathbf{r}:\\mathbb R\\to\\mathbb R^n$). Intuitively, $\\mathbf{r}(t)$ can be thought of as the trajectory of a fly located at $\\mathbf{r}(t_0)$ at the instant $t=t_0$.\n\nParametric curves can be represented as a vector of single-variable functions:\n$$\n\\mathbf{r}(t) = (x_1(t), x_2(t), \\ldots, x_n(t))\n$$\nAs one might expect, $\\mathbf{r}(t)$ can be differentiated component-by-component:\n$$\n\\mathbf{r}'(t) = (x_1'(t), x_2'(t), \\ldots, x_n'(t))\n$$\nDerivatives of dot products and cross products can also be computed:\n\\begin{equation*}\n\\begin{split}\n    \\frac{d}{dt} \\Big[\\ \\mathbf{r}_1(t) \\cdot \\mathbf{r}_2(t)\\ \\Big] &= \\mathbf{r}_1(t) \\cdot \\mathbf{r}_2'(t) +\\mathbf{r}_1'(t) \\cdot \\mathbf{r}_2(t) \\\\\n    \\frac{d}{dt} \\Big[\\ \\mathbf{r}_1(t) \\times \\mathbf{r}_2(t)\\ \\Big] &= \\mathbf{r}_1(t) \\times \\mathbf{r}_2'(t) +\\mathbf{r}_1'(t) \\times \\mathbf{r}_2(t)\n\\end{split}\n\\end{equation*}\n\nA parametric curve is said to be $C^n$ if its $n$th-order derivatives are defined and continuous everywhere on its domain ($n$ here is distinct from the $n$ in $\\mathbb R^n$). For the rest of the paper, we will restrict our attention to two-dimensional parametric curves $\\mathbf{r}(t)=(x(t), y(t))$.\n\n\\section{Interpolation}\nQuintic splines consist of a series of segments assembled together into a single piecewise curve. Each of the segments is a parametric curve with a quintic polynomial for each component. The $i$th segment of a two-dimensional quintic spline of $n$ segments can be represented as the following:\n$$\n    \\begin{cases}\n        x^{(i)}(t) = a^{(i)}_xt^5 + b^{(i)}_xt^4 + c^{(i)}_xt^3 + d^{(i)}_xt^2 + e^{(i)}_xt + f^{(i)}_x\\\\\n        y^{(i)}(t) = a^{(i)}_yt^5 + b^{(i)}_yt^4 + c^{(i)}_yt^3 + d^{(i)}_yt^2 + e^{(i)}_yt + f^{(i)}_y\n    \\end{cases}\n    \\quad \\text{where} \\quad 0 \\leq t \\leq 1\n$$\nNow the goal of interpolation is to ``fit'' these polynomials between a series of $n+1$ points (commonly referred to as knots) labeled $(x_i, y_i)$. 
To accomplish this, we can impose the conditions $x^{(i)}(0) = x_i$ and $x^{(i)}(1) = x_{i+1}$ (and correspondingly for $y^{(i)}(t)$). Additionally, to ensure greater continuity, we also match the first and second derivatives at each knot point\\footnote{This is generally the best choice for splines in the context of path planning, although other schemes are occasionally employed. For instance, one can instead force the third and fourth derivatives to equal zero.}:\n\\begin{equation*}\n\\begin{split}\n    \\frac{dx^{(i)}}{dt}\\biggr\\rvert_{t=0} &= x'_i\\\\\n    \\frac{dx^{(i)}}{dt}\\biggr\\rvert_{t=1} &= x'_{i+1}\\\\\n    \\frac{d^2x^{(i)}}{dt^2}\\biggr\\rvert_{t=0} &= x^{\\prime\\prime}_i\\\\\n    \\frac{d^2x^{(i)}}{dt^2}\\biggr\\rvert_{t=1} &= x^{\\prime\\prime}_{i+1}\\\\\n\\end{split}\n\\end{equation*}\n$$(\\text{and analogously for }y)$$\nCollectively, these constraints guarantee that the overall spline will be (by construction) $C^2$. They also fully define the polynomial coefficients for each spline segment. To actually compute the coefficients, we simply need to solve the linear system with the following matrix representation:\n\\[\n\\begin{bmatrix}\n    0 & 0 & 0 & 0 & 0 & 1 \\\\[4pt]\n    0 & 0 & 0 & 0 & 1 & 0 \\\\[4pt]\n    0 & 0 & 0 & 2 & 0 & 0 \\\\[4pt]\n    1 & 1 & 1 & 1 & 1 & 1 \\\\[4pt]\n    5 & 4 & 3 & 2 & 1 & 0 \\\\[4pt]\n    20 & 12 & 6 & 2 & 0 & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n    a_x \\\\[4pt]\n    b_x \\\\[4pt]\n    c_x \\\\[4pt]\n    d_x \\\\[4pt]\n    e_x \\\\[4pt]\n    f_x\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n    x_i \\\\[4pt]\n    x'_i \\\\[4pt]\n    x^{\\prime\\prime}_i \\\\[4pt]\n    x_{i+1} \\\\[4pt]\n    x'_{i+1} \\\\[4pt]\n    x^{\\prime\\prime}_{i+1}\n\\end{bmatrix}\n\\]\nThis equation can then be solved to yield the coefficients with your favorite matrix library (or put in row echelon form to yield a set of back-substitutable equations).\n\n\\section{Reparametrization}\n\n\\subsection{Arc Length}\nNow that we've interpolated the spline segments, we have to join them all together into a single piecewise function of a unified variable. To accomplish this, we're going to reparametrize $\\mathbf{r}(t)$ from $t \\in [0, 1]$ to the arc length parameter $s$. Although it sounds complex, $s$ is just the true displacement along the path, and it traces out the curve with unit speed ($\\norm{\\mathbf{r}'(s)} = \\sqrt{\\big(\\frac{dx}{ds}\\big)^2 + \\big(\\frac{dy}{ds}\\big)^2} = 1$). This procedure is completely generic, allowing you to combine arbitrary parametric curves with (basically) the same knot derivatives.\n\nTo shift from $t$ to $s$, we first have to define $t(s)$. This is difficult to represent analytically, although the inverse function $s(t)$ is relatively simple:\n$$\n    s(t) = \\int_0^t \\! \\norm{\\mathbf{r}'(\\tau)} \\, d\\tau = \\int_0^t \\! \\sqrt{\\bigg(\\frac{dx}{d\\tau}\\bigg)^2 + \\bigg(\\frac{dy}{d\\tau}\\bigg)^2} \\, d\\tau\n$$\nIn practice, $t(s)$ can be obtained by numerically evaluating the above integral and stopping when the integral sum reaches $s$.\n\nBy computing $t(s)$, we can determine $\\mathbf{r}(s)$ by composition: $\\mathbf{r}(s) = \\mathbf{r}(t(s))$. 
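\n\nTying the interpolation and reparametrization steps together, here is a minimal Python/numpy sketch (an illustration added to this writeup, not part of the original derivation; the knot positions and derivatives are placeholder values) that solves the interpolation system above for a single segment and tabulates $s(t)$ numerically so that $t(s)$ can be recovered by table lookup:\n\n\\begin{verbatim}\nimport numpy as np\n\n# Constraint matrix from the interpolation section: rows correspond to\n# x(0), x'(0), x''(0), x(1), x'(1), x''(1)\nA = np.array([[0, 0, 0, 0, 0, 1],\n              [0, 0, 0, 0, 1, 0],\n              [0, 0, 0, 2, 0, 0],\n              [1, 1, 1, 1, 1, 1],\n              [5, 4, 3, 2, 1, 0],\n              [20, 12, 6, 2, 0, 0]])\n\n# Placeholder knot conditions (x_i, x'_i, x''_i, x_{i+1}, x'_{i+1}, x''_{i+1})\nbx = np.array([0.0, 10.0, 0.0, 30.0, 10.0, 0.0])\nby = np.array([0.0, 0.0, 0.0, 20.0, 10.0, 0.0])\ncx = np.linalg.solve(A, bx)  # quintic coefficients a..f for x(t)\ncy = np.linalg.solve(A, by)  # quintic coefficients a..f for y(t)\n\n# Tabulate s(t) by integrating the speed |r'(t)| (trapezoid rule)\nts = np.linspace(0.0, 1.0, 1001)\ndx = np.polyval(np.polyder(cx), ts)\ndy = np.polyval(np.polyder(cy), ts)\nspeed = np.hypot(dx, dy)\ns = np.concatenate(([0.0],\n    np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ts))))\n\ndef t_of_s(query):\n    # invert s(t) by linear interpolation on the precomputed table\n    return np.interp(query, s, ts)\n\\end{verbatim}\n\n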
Next, differentiating yields the new parametrized derivative:\n\\begin{equation*}\n\\begin{split}\n    \\mathbf{r}'(s) &= \\mathbf{r}'(t)\\ t'(s) \\\\\n                   &= \\frac{\\mathbf{r}'(t)}{s'(t)} \\\\\n                   &= \\frac{\\mathbf{r}'(t)}{\\norm{\\mathbf{r}'(t)}} \\\\\n\\end{split}\n\\end{equation*}\nThis makes sense intuitively as $\\frac{\\mathbf{r}'(t)}{\\norm{\\mathbf{r}'(t)}}$ is just a normalized version of $\\mathbf{r}'(t)$ that satisfies the condition $\\norm{\\mathbf{r}'(s)} = 1$. We similarly obtain the new second and third derivatives by differentiating again\\footnote{This derivation depends heavily on the BAC-CAB identity: $\\mathbf{a} \\times (\\mathbf{b} \\times \\mathbf{c}) = \\mathbf{b} (\\mathbf{a} \\cdot \\mathbf{c}) - \\mathbf{c} (\\mathbf{a} \\cdot \\mathbf{b})$.}$^{,}$\\footnote{$\\kappa(s) = \\norm{\\mathbf{r}''(s)}$ is the curvature of $\\mathbf{r}(s)$. Note that quintic splines also guarantee continuous curvature.}:\n\\begin{equation*}\n\\begin{split}\n    \\mathbf{r}''(s) &= \\frac{\\mathbf{r}''(t)}{\\norm{\\mathbf{r}'(t)}^2} - \\frac{\\mathbf{r}'(t)\\ \\mathbf{r}''(t) \\cdot \\mathbf{r}'(t)}{\\norm{\\mathbf{r}'(t)}^4} \\\\\n                    &= \\frac{\\mathbf{r}''(t)\\ \\mathbf{r}'(t) \\cdot \\mathbf{r}'(t) - \\mathbf{r}'(t)\\ \\mathbf{r}''(t) \\cdot \\mathbf{r}'(t)}{\\norm{\\mathbf{r}'(t)}^4} \\\\\n                    &= \\frac{\\mathbf{r}'(t) \\times (\\mathbf{r}''(t) \\times \\mathbf{r}'(t))}{\\norm{\\mathbf{r}'(t)}^4} \\\\\n    \\mathbf{r}'''(s) &= \\frac{\\mathbf{r}'(t) \\times (\\mathbf{r}'''(t) \\times \\mathbf{r}'(t)) + \\mathbf{r}''(t) \\times (\\mathbf{r}''(t) \\times \\mathbf{r}'(t))}{\\norm{\\mathbf{r}'(t)}^9}\n\\end{split}\n\\end{equation*}\n\n\\subsection{Trajectory}\nNow, all of the spline segments and various other parametric curves are combined into a single curve $\\mathbf{r}(s)$. However, this still needs to be combined with the motion profile to yield the robot's kinematic state over time.\n\nLet $s(t)$ be the generated motion profile (note: this $t$ actually refers to time; it's different from the $t$ used earlier). Now the velocity and acceleration of the robot can be determined:\n\\begin{equation*}\n\\begin{split}\n    \\mathbf{v}(t) &= \\frac{d}{dt} \\Big[\\ \\mathbf{r}(s(t))\\ \\Big] \\\\\n                  &= \\mathbf{r}'(s(t)) s'(t) \\\\\n    \\mathbf{a}(t) &= \\frac{d}{dt} \\Big[\\ \\mathbf{r}'(s(t)) s'(t)\\ \\Big] \\\\\n                  &= \\mathbf{r}''(s(t)) [s'(t)]^2 + \\mathbf{r}'(s(t)) s''(t)\n\\end{split}\n\\end{equation*}\n\n\\section{Heading}\nThe discussion in the previous sections has been limited to the translational components of the path. This, of course, must be accompanied by some sort of angular motion. For holonomic drives, the heading can be controlled completely independently. In this case, heading can be treated as a third independent path component that can be specified by an arbitrary parametric curve.\n\nHowever, for nonholonomic drives, the heading is constrained to the direction of travel. For parametric curves, this direction is given by the vector $\\mathbf{r}'(s)$:\n$$\n    \\theta(s) = \\arctan \\frac{y'(s)}{x'(s)}\n$$\nIn practice, $\\theta(s)$ is computed with the two-argument version of $\\arctan$, eliminating issues arising from signs and cases when $x'(s) = 0$. 
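\n\nAs another small illustrative sketch (again an addition, reusing the \\texttt{dx} and \\texttt{dy} arrays from the sketch above; $\\mathbf{r}'(t)$ and $\\mathbf{r}'(s)$ point in the same direction, so either may be used), the tangent heading can be computed with numpy's two-argument arctangent:\n\n\\begin{verbatim}\nimport numpy as np\n\n# dx, dy: sampled values of x' and y' along the curve\ntheta = np.arctan2(dy, dx)  # handles x' = 0 and all sign cases\ntheta = np.unwrap(theta)    # remove 2*pi jumps for a continuous heading\n\\end{verbatim}\n\n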
Of course, the derivatives are also necessary:\n\\begin{equation*}\n\\begin{split}\n    \\theta'(s) &= \\frac{1}{\\big(\\frac{y'(s)}{x'(s)}\\big)^2+1} \\cdot \\frac{x'(s)y''(s) - y'(s)x''(s)}{[x'(s)]^2} \\\\\n               &= \\frac{x'(s)y''(s) - y'(s)x''(s)}{[x'(s)]^2 + [y'(s)]^2} \\\\\n               &= x'(s)y''(s) - y'(s)x''(s) \\\\\n    \\theta''(s) &= \\big[x''(s)y''(s) + x'(s)y'''(s)\\big] - \\big[y''(s)x''(s) + y'(s)x'''(s)\\big] \\\\\n                &= x'(s)y'''(s) - y'(s)x'''(s)\n\\end{split}\n\\end{equation*}\nThe last step of the $\\theta'(s)$ derivation uses the fact that $[x'(s)]^2 + [y'(s)]^2 = 1$ for an arc length parametrization. Notice that the expressions for $\\theta^{(n)}(s)$ involve $\\mathbf{r}(s)$ derivatives of order $n + 1$ and lower. Hence, $C^2$ $\\mathbf{r}(s)$ only guarantees $C^1$ $\\theta(s)$. This heading derivative continuity is one of the primary reasons for choosing quintic over cubic splines.\n\n\\section{Conclusion}\nThis paper discussed the motivation for splines in FTC and some mathematics for generating basic quintic splines. The author hopes this will help further the proliferation of advanced motion planning techniques in FTC.\n\n\\end{document}", "meta": {"hexsha": "0dbee4d548b9cc22b1835efaf9ca421b9351efb9", "size": 17452, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/pdf/Quintic_Splines_for_FTC.tex", "max_stars_repo_name": "acmerobotics/motion-planner", "max_stars_repo_head_hexsha": "448a3adb2d2f821aed9e42998ef216ca4732f690", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 140, "max_stars_repo_stars_event_min_datetime": "2018-08-07T00:54:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-15T18:17:25.000Z", "max_issues_repo_path": "doc/pdf/Quintic_Splines_for_FTC.tex", "max_issues_repo_name": "acmerobotics/motion-planner", "max_issues_repo_head_hexsha": "448a3adb2d2f821aed9e42998ef216ca4732f690", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 69, "max_issues_repo_issues_event_min_datetime": "2018-08-09T03:13:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-23T20:25:36.000Z", "max_forks_repo_path": "doc/pdf/Quintic_Splines_for_FTC.tex", "max_forks_repo_name": "acmerobotics/motion-planner", "max_forks_repo_head_hexsha": "448a3adb2d2f821aed9e42998ef216ca4732f690", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 65, "max_forks_repo_forks_event_min_datetime": "2018-08-10T03:49:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T22:12:41.000Z", "avg_line_length": 75.224137931, "max_line_length": 1135, "alphanum_fraction": 0.6953930782, "num_tokens": 5217, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7853085708384735, "lm_q1q2_score": 0.5836161978021234}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{hyperref}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\n\\usepackage{graphicx}\n\n\\usepackage{array}\n\\newcolumntype{C}[1]{>{\\centering\\arraybackslash\\hspace{0pt}}m{#1}}\n\n\\setlength{\\parindent}{0cm}\n\n\\title{Udacity Navigation Project}\n\\author{Thomas Lecat}\n\\date{April 2021}\n\n\\begin{document}\n\n    \\maketitle\n\n    \\section{Environment}\\label{sec:environment}\n\n    The environment is described as Markov Decision Process $(S, A, R, p, \\gamma)$ with unknown dynamics.\n    The goal is to approximate the optimal policy $\\pi^*$ that maximizes the upcoming cumulative reward in every state.\n\n    This project uses value based methods, which seek to approximate the optimal $Q$ value $Q^*$, defined as:\n    \\[\n    Q^*(s, a) = \\max_{\\pi} \\mathbb{E}_{\\pi} (\\sum_{t} \\gamma^t r_t | S_0=s, A_0=a),  \\forall (a, s) \\in A \\times S\n    \\]\n\n    Given $Q^*$, one can derive $\\pi^*$ to solve the problem:\n    $\\pi^*(s) = \\argmax_a Q^*(s, a), \\forall s \\in S$\n\n    \\section{Agent}\\label{sec:agent}\n\n    \\subsection{DQN}\\label{subsec:dqn}\n\n\n    A vanilla \\href{https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf}{DQN} agent\n    is implemented as the base solution.\n\n    DQN approximates the $Q$ value by a neural network parameterized by $\\theta$.\n    Its loss function is defined as the mean squared TD error of the Bellman optimality equation:\n    \\[\n    \\mathcal{L}(\\theta) = \\mathbb{E}_{s, a, r, s'} \\big[((r + \\gamma \\max_{a'} Q(s', a' | \\theta^-)) - Q(s, a | \\theta))^2\\big]\n    \\]\n\n    where $\\theta^-$ are the parameters of a target network, slowly updated towards $\\theta$ at every training step.\n    This target network makes the TD target $r + \\gamma \\max_{a'} Q(s', a' | \\theta^-)$ independent from the parameters $\\theta$,\n    which stabilizes the training.\n\n    This loss is minimized by an Adam optimizer, a variant of the Stochastic Gradient Descent algorithm.\n    At each step, the gradient is approximated from a minibatch of experiences ${(s, a, r, s', d)_i}$.\n    To avoid a strong correlation between the steps of the minibatch, transitions are stored in a large replay buffer\n    during rollouts and then sampled uniformly.\n\n    \\subsection{Double DQN}\\label{subsec:double-dqn}\n\n    The DQN agent is implemented in a modular way, to enable extension from recent papers to be added later.\n\n    As a start, we implemented the improvements from the \\href{https://arxiv.org/abs/1509.06461}{Double DQN} paper:\n    Instead of using the target network to chose the next action and estimate its value in the target, use $Q_\\theta$ to chose the action,\n    and $Q_{\\theta^-}$ to estimate its value.\n\n    \\section{Hyperparemeters}\\label{sec:hyperparemeters}\n\n    The network architecture is composed of 2 fully connected layers with 64 neurons each.\n    The agent follows an $\\epsilon$-greedy policy, with epsilon gradually decreasing from $1$ to $0.1$ over $30,000$ steps.\n\n    The table below summarizes the main hyperparameters:\n\n    \\begin{tabular}{ |p{5cm} C{1cm}|p{3cm}|p{3cm}| }\n        \\hline\n        \\multicolumn{4}{|c|}{Hyperparameters} \\\\\n        \\hline\n        \\multicolumn{2}{|c|}{Parameter}   & Vanilla DQN & Double DQN                    \\\\\n        \\hline\n        learning rate                     & $\\alpha$    
& \\multicolumn{2}{|c|}{5e-04}   \\\\\n        \\hline\n        discount factor                   & $\\gamma$    & \\multicolumn{2}{|c|}{0.99}    \\\\\n        \\hline\n        target network update coefficient & $\\tau$      & \\multicolumn{2}{|c|}{0.001}   \\\\\n        \\hline\n        buffer size                       &             & \\multicolumn{2}{|c|}{100,000} \\\\\n        \\hline\n        batch size                        &             & \\multicolumn{2}{|c|}{64}      \\\\\n        \\hline\n    \\end{tabular}\n\n    \\section{Results}\\label{sec:results}\n\n    Both vanilla DQN and double DQN have been trained with 3 seeds.\n    The reward curves are averaged over seeds and plotted below:\n\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.5]{results/reward_per_episode.png}\\label{fig:figure}\n    \\end{figure}\n\n    The addition of the double DQN strategy doesn't improve the results much, and both\n    agents manage to solve the environment in less than 400 episodes.\n\n    \\section{Next steps}\\label{sec:next-steps}\n\n    The next steps for the project consist of implementing other DQN extensions such as\n    \\href{https://arxiv.org/abs/1511.06581}{Dueling DQN} and\n    \\href{https://arxiv.org/abs/1511.05952}{Prioritized Experience Replay}.\n    Hyperparameters could also be further tuned using strategies such as\n    \\href{https://github.com/hyperopt/hyperopt}{HyperOpt} or\n    \\href{https://deepmind.com/blog/article/population-based-training-neural-networks}{Population Based Training}.\n\n\\end{document}\n", "meta": {"hexsha": "c859ba7f8cdac05056cd429ae14d130e68574c5e", "size": 4884, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report.tex", "max_stars_repo_name": "ThomasLecat/udacity-navigation-project", "max_stars_repo_head_hexsha": "ddf503932a11cc726fea352a900337fe8376696a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report.tex", "max_issues_repo_name": "ThomasLecat/udacity-navigation-project", "max_issues_repo_head_hexsha": "ddf503932a11cc726fea352a900337fe8376696a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report.tex", "max_forks_repo_name": "ThomasLecat/udacity-navigation-project", "max_forks_repo_head_hexsha": "ddf503932a11cc726fea352a900337fe8376696a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.1034482759, "max_line_length": 138, "alphanum_fraction": 0.664004914, "num_tokens": 1381, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7981867753392728, "lm_q1q2_score": 0.5835212894607958}}
{"text": "\\chapter{Approach}\\label{chap:approach}\n\n\n\\section{Soft Actor-Critic}\n\nThe Soft Actor-Critic Algorithm \\cite{haarnoja2018soft} is based on the maximizing entropy objective\n\n\\begin{equation}\n\\label{eq:maxent_objective}\n  J(\\pi)  = \\sum_{t=0}^{T} \\mathbb{E}(s_{t}, a_{t}) \\sim \\rho_{pi}[r(s_{t},a_{t}) + \\alpha \\mathcal{H}(\\pi(\\cdot|s_{t}))]\n\\end{equation}\n\n\n\n\n\nreparameterization trick\n\n\n\\section{Truncated Quantile Critics TQC}\n\nFor training the agent the algorithm proposed in \\cite{} was used.\n\nEstimating a distribution of the Q-value and an ensemble of critics\nenables it to controll  over and under estimation bias  \n\n\n\\subsection{Distributional Reinforcement Learning with\nQuantile Regression}\n\nWith Distributional Reinforcement Learning the objectiv changes to\n$ Z^{\\pi}(s, a) := \\sum_{t=0}^{\\infty} \\gamma^{t} R\\left(s_{t}, a_{t}\\right)$\nand the Q Function to $Q^{\\pi}(s, a):=  \\mathbb{E} {Z^{\\pi}(s, a)}$.\nIn the work QR-DQN % \\cite{dabney2018distributional}% \nthe distribution $Z^\\pi(s,a)$ is approximated with\n$ Z_{\\psi}(s, a):=\\frac{1}{M} \\sum_{m=1}^{M} \\delta ( \\theta^{m}_\\psi(s, a) )$,\na mixture of atoms---Dirac delta functions at locations $\\theta^1_\\psi(s,a), \\dots ,\\theta^M_\\psi(s,a)$ \ngiven by a parametric model  $\\theta_{\\psi}: \\mathcal{S} \\times \\mathcal{A} \\rightarrow \\mathbb{R}^M$.\n\n\n\n\n\n\\begin{equation}\n  Z_{\\psi_n}(s, a) := \\frac{1}{M} \\sum_{m=1}^{M} \\delta \\left(\\theta_{\\psi_n}^m(s, a) \\right) ,\n\\end{equation}\n\nsupported on atoms $ \\theta_{\\psi_n}^1(s,a), \\dots, \\theta_{\\psi_n}^M(s,a) $ \n\n\n\n\n\n\n\\subsection{Loss Function}\n\nThe author chooses to use the Wasserstein distance as a metric to minimize \nthe differences between each of the Q-value distribution  \nand the temporal difference target distribution Y (s, a).\n\n\\begin{equation}\n  J_Z (\\psi_n) = \\mathbb{E}_{\\mathcal{D}, \\pi} \\mathcal{L}^k(s_t,a_t; \\psi_n)]\n\\end{equation}\n\nover the parameters $\\psi_n$, where\n\n\\begin{equation}\n  \\mathcal{L}^k(s,a; \\psi_n) = \\frac{1}{kNM} \\sum_{m=1}^M \\sum_{i=1}^{kN} \\rho_{ \\tau_m}^H(y_i(s, a) - \\theta_{\\psi_n}^>\n\\end{equation}\n\n\n\nThe TQC pseudo code is given in Algorithm \\ref{alg:main}.   
\n\n\\input{figures/approach/Tqc.tex}\n\n", "meta": {"hexsha": "4e713ff74a8fa2fefaeeac24b2f21ef8c6ff4a25", "size": 2150, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "4-approach.tex", "max_stars_repo_name": "ChrisProgramming2018/bachelorThesisLatex", "max_stars_repo_head_hexsha": "46ca9c643797dea09c11d72cc95fbe85e35b169b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "4-approach.tex", "max_issues_repo_name": "ChrisProgramming2018/bachelorThesisLatex", "max_issues_repo_head_hexsha": "46ca9c643797dea09c11d72cc95fbe85e35b169b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4-approach.tex", "max_forks_repo_name": "ChrisProgramming2018/bachelorThesisLatex", "max_forks_repo_head_hexsha": "46ca9c643797dea09c11d72cc95fbe85e35b169b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9220779221, "max_line_length": 121, "alphanum_fraction": 0.6841860465, "num_tokens": 751, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.5835212818616525}}
{"text": "\\section{Asset Allocation with Transaction Costs}\n\\label{sec:asset_allocation}\n\n\\begin{frame}{Asset Allocation with Transaction Costs}\n\t\n\t\\begin{block}{Goal}\n\t\tHow to dynamically invest the available capital in a portfolio of different assets in order to maximize the expected total return or another relevant performance measure.\n\t\\end{block}\n\t\n\n\n\t\\begin{block}{\\textbf{Reward function}: portfolio log-return with transaction costs}\n\t\t\\vspace{-0.5cm}\n\t\t\\begin{equation*}\n\t\t\\resizebox{.9 \\textwidth}{!}{ \n\t\t $R_{t+1} = \\log \\left\\{ 1 + \\sum^{I}_{i=0} \\left[ a_t^i X_{t+1}^i - \\delta_i\n\t\t \t\t\t \t\\left| a_t^i - \\tilde{a}_t^i \\right| - \\delta_s {(a_t^i)}^- \\right] -\n\t\t \t\t\t \t\\delta_f \\mathbf{1}_{{a}_t \\neq \\tilde{{a}}_{t-1}}\\right\\}$\n\t\t }\n\t\t \\end{equation*}\n\t\\end{block}\n\n\t\\begin{block}{\\textbf{Actions}: Portfolio weights}\n\t\t\\begin{equation*}\n\t\t\t\\{a_t^i\\}_{i=0}^I \\;\\;\\; \\text{s.t.}\\;\\;\\; \\sum^{I}_{i=0} a_t^i = 1 \\;\\;\\;\\;\\; \\forall t \\in \\{0, 1, 2, \\ldots\\}\n\t\t\\end{equation*}\n\t\\end{block}\n\n\t\\begin{block}{\\textbf{State}: assets past returns and current allocation}\n\t\t\\begin{equation*}\n\t\t\tS_t = \\{X, X_t, X_{t-1}, \\ldots, X_{t-P}, \\tilde{a}_t\\}\n\t\t\\end{equation*}\n\t\\end{block}\n\n\\end{frame}\n\n", "meta": {"hexsha": "1b2d2090faf62ce4568b1dea23d17945c10c1915", "size": 1188, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Presentation/Sections/3_asset_allocation.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Pacs/Presentation/Sections/3_asset_allocation.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pacs/Presentation/Sections/3_asset_allocation.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 32.1081081081, "max_line_length": 172, "alphanum_fraction": 0.638047138, "num_tokens": 456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916205190226, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.58344949960108}}
{"text": "\\documentclass{standalone}\n\\begin{document}\n\n\\subsection{Clustering}\n\n\tClustering approach is similar to the classifiers one, but in an unsupervised fashion, so does not require a training dataset.\n\tClustering iteratively alternates between the image segmentation and its classes properties characterization. In this way, we can say that clustering approach trains itself by using the available data information.\n\tWe can identify 3 main clustering algorith\n\t\\begin{itemize}\n\t\n\t\t\\item \\textbf{k-means clustering: } partition $n$ observations in $k$ clusters iteratively computing a mean intensity for each class. It segments the image by classifying each pixel in the class with the closest mean\n\t\n\t\t\\item \\textbf{Fuzzy C-means: } this algorithm generalizes the K-means clustering in order to achieve soft-segmentation;\n\t\t\n\t\t\\item \\textbf{Expectation Maimization:} uses the same clustering principle as k-means by assuming that each observation belongs to a Gaussian mixture model. It is an iterative method that seek to find maximum likelihood estimates means, covariances and mixing coefficients of the mixture model.\n\t\\end{itemize}\n\n\tIt does not require training data, but it suffers from high sensitivity to the initial parameters and it does not incorporate spatial models: it is a pixel classification technique~\\cite{ART:Pham}. \n\t\n\tThe most used algorithm for clustering is the k-means clustering, which seeks to assign each pixel to a particular cluster aiming to minimize the average square distance between pixels in the same cluster~\\cite{Arthur2007}. Each cluster is described by a vector representing its centre, so this technique is described as a centroid model~\\cite{ART:Morisette}. The labelling is performed by assigning each point to the cluster with the nearest centroid. Each point is assigned to the cluster with the nearest centre.\n\t\n\tGiven an integer $k$ and a set of $n$ data points from $\\mathbb{R}^d$, the k-means clustering seeks to find the $k$ centres that minimize a potential function given by the sum of squares: \n\t\\begin{equation}\n\t\t\\Phi = \\sum_{x\\in S}\\min\\| x - c\\|^2\n\t\\end{equation} \n\tWhere $S\\subset \\mathbb R^d$ is a set of points. In this work, $\\mathbb{R}^d$ will be the colours space and $S$ is the space of colour of each voxel.\n\t\n\tThe steps of the algorithm are the following: \n\t\t\\begin{enumerate}\n\t\t\t\\item Randomly initialize the values of the $k$ centres,\n\t\t\t\\item Generates the $k$ clusters associating each point to its nearest centroid\n\t\t\t\\item Re-compute the centroid for each cluster until the centroid does not change.\n\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\tArthur and Vassilvitskii ~\\cite{Arthur2007} have pointed out that this algorithm is not accurate and can produce arbitrarily bad clusters. So they have developed a method, the \"k-means++\" which improves the clustering accuracy by made an accurate choice of the initial cluster centres.\n\n\tThey pointed out that the bad clustering is caused to the fact that $\\frac{\\Phi}{\\Phi_{opt}}$ is unbounded even if the number of clusters and points are fixed, where $\\Phi_{opt}$ is the potential function in the optimal centroids case. They proposed a variant for the choosing of the centroids: They randomly chose only the first centroid. To chose the other, they weight the initial points according to the distance square ($D(x)^2$) from the closest centre already chosen. 
The final algorithm is thus identical to k-means except for the initial centroid selection, which is made as follows: \n\t\\begin{enumerate}\n\t\t\t\\item Take one centre $c_1$, chosen uniformly at random from $S$.\n\t\t\t\n\t\t\t\\item  Take a new centre $c_i$, choosing $x \\in S$ with probability $\\frac{D(x)^2}{\\sum _{x \\in S} D(x)^2}$\n\t\t\t\n\t\t\t\\item Repeat step 2 until $k$ centres have been chosen\n\t\t\t\n\t\t\t\\item Proceed as in classical k-means clustering.\n\t\t\t\n\t\t\\end{enumerate}\n\t\t\n\tThey have proved that this approach leads to better results in less time. For more details refer to~\\cite{Arthur2007}.\n\t\n\\end{document}", "meta": {"hexsha": "e1afcf619662d05199bea8dd9c8a2fcc143900e0", "size": 3958, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/Chapter1/MainSegmentation/clustering.tex", "max_stars_repo_name": "RiccardoBiondi/SCDthesis", "max_stars_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/Chapter1/MainSegmentation/clustering.tex", "max_issues_repo_name": "RiccardoBiondi/SCDthesis", "max_issues_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/Chapter1/MainSegmentation/clustering.tex", "max_forks_repo_name": "RiccardoBiondi/SCDthesis", "max_forks_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.1153846154, "max_line_length": 591, "alphanum_fraction": 0.7693279434, "num_tokens": 956, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.5832368997350014}}
{"text": "\\hypertarget{regression}{%\n\\chapter{Regression}\\label{regression}}\n\nIn the previous chapter we used simple regression to quantify the\nrelationship between two variables. In this chapter we'll get farther\ninto regression, including multiple regression and one of my all-time\nfavorite tools, logistic regression.\n\nThese tools will allow us to explore relationships among sets of\nvariables. As an example, we will use data from the General Social\nSurvey (GSS) to explore the relationship between income, education, age,\nand sex. But first let's understand the limits of simple regression.\n\n\\hypertarget{limits-of-simple-regression}{%\n\\section{Limits of Simple\nRegression}\\label{limits-of-simple-regression}}\n\nIn a previous exercise, you made a scatter plot of vegetable consumption\nas a function of income, and plotted a line of best fit. Here's what it\nlooks like:\n\n\\includegraphics{figs/fig09-01.png}\n\nThe slope of the line is 0.07, which means that the difference between\nthe lowest and highest income brackets is about 0.49 servings per day.\nSo that's not a very big difference.\n\nBut it was an arbitrary choice to plot vegetables as a function of\nincome. We could have plotted it the other way around, like this.\n\n\\includegraphics{figs/fig09-02.png}\n\nThe slope of this line is about 0.2, which means that the difference\nbetween 0 and 10 servings per day is about 2 income levels, roughly from\nlevel 5 to level 7.\n\nAnd the difference between income levels 5 and 7 is about \\$30,000 per\nyear, which is substantial.\n\nSo if we use vegetable consumption to predict income, we see a big\ndifference. But when we used income to predict vegetable consumption, we\nsaw a small difference.\n\nThis example shows that regression is not symmetric; the regression of A\nonto B is not the same as the regression of B onto A.\n\nWe can see that more clearly by putting the two figures side by side and\nplotting both regression lines on both figures.\n\n\\includegraphics{figs/fig09-03.png}\n\nThey are different because they are based on different assumptions.\n\n\\begin{itemize}\n\\item\n  On the left, we treat income as a known quantity and vegetable\n  consumption as random.\n\\item\n  On the right, we treat vegetable consumption as known and income as\n  random.\n\\end{itemize}\n\nWhen you run a regression model, you make decisions about how to treat\nthe data, and those decisions affect the results you get.\n\nThis example demonstrates another point, which is that regression\ndoesn't tell you much about causation.\n\n\\begin{itemize}\n\\item\n  If you think people with lower income can't afford vegetables, you\n  might look at the figure on the left and conclude that it doesn't make\n  much difference.\n\\item\n  If you think better diet increases income, the figure on the right\n  might make you think it does.\n\\end{itemize}\n\nBut in general, regression can't tell you what causes what. If you see a\nrelationship between any two variables, A and B, the reason for the\nrelationship might be that A causes B, or B causes A, or there might be\nother factors that cause both A and B. Regression alone can't tell you\nwhich way it goes.\n\nHowever, we have tools for teasing apart relationships among multiple\nvariables; one of the most important is multiple regression.\n\nSciPy doesn't do multiple regression, so we'll to switch to a new\nlibrary, StatsModels. 
Here's the import statement.\n\n\\begin{lstlisting}[language=Python,style=source]\nimport statsmodels.formula.api as smf\n\\end{lstlisting}\n\nFor the first example, we'll load data from the Behavioral Risk Factor\nSurveillance Survey (BRFSS), which we saw in the previous chapter.\n\n\\begin{lstlisting}[language=Python,style=source]\nimport pandas as pd\n\nbrfss = pd.read_hdf('brfss.hdf5', 'brfss')\n\\end{lstlisting}\n\nNow we can use StatsModels to fit a regression model. The name of the\nfunction is \\passthrough{\\lstinline!ols!}, which stands for ``ordinary\nleast squares'', another name for regression.\n\n\\begin{lstlisting}[language=Python,style=source]\nresults = smf.ols('INCOME2 ~ _VEGESU1', data=brfss).fit()\n\\end{lstlisting}\n\nThe first argument is a \\textbf{formula string} that specifies that we\nwant to regress income as a function of vegetable consumption.\n\nThe second argument is the BRFSS \\passthrough{\\lstinline!DataFrame!}.\nThe names in the formula correspond to columns in the\n\\passthrough{\\lstinline!DataFrame!}.\n\nThe result from \\passthrough{\\lstinline!ols!} represents the model; then\nwe run \\passthrough{\\lstinline!fit!} to get the results.\n\nResults is a \\passthrough{\\lstinline!RegressionResultsWrapper!}, which\ncontains a lot of information; the first thing we'll look at is the\nattribute \\passthrough{\\lstinline!params!}, which contains the estimated\nintercept and the slope associated with\n\\passthrough{\\lstinline!\\_VEGESU1!}.\n\n\\begin{lstlisting}[language=Python,style=source]\nresults.params\n\\end{lstlisting}\n\n\\begin{tabular}{lr}\n\\toprule\n{} &         0 \\\\\n\\midrule\nIntercept &  5.450700 \\\\\n\\_VEGESU1  &  0.204935 \\\\\n\\bottomrule\n\\end{tabular}\n\nAnd we get the same results we got from SciPy, so that's good!\n\nIn the next section we'll move on to multiple regression. But first,\nsome exercises.\n\n\\textbf{Exercise:} In the BRFSS dataset, there is a strong relationship\nbetween vegetable consumption and income. The income of people who eat 8\nservings of vegetables per day is double the income of people who eat\nnone, on average.\n\nWhich of the following conclusions can we draw from this data?\n\n\\begin{enumerate}\n\\def\\labelenumi{\\Alph{enumi}.}\n\\item\n  Eating a good diet leads to better health and higher income.\n\\item\n  People with higher income can afford a better diet.\n\\item\n  People with high income are more likely to be vegetarians.\n\\end{enumerate}\n\n\\textbf{Exercise:} Let's run a regression using SciPy and StatsModels,\nand confirm we get the same results.\n\n\\begin{itemize}\n\\item\n  Compute the regression of \\passthrough{\\lstinline!\\_VEGESU1!} as a\n  function of \\passthrough{\\lstinline!INCOME2!} using SciPy's\n  \\passthrough{\\lstinline!linregress()!}.\n\\item\n  Compute the regression of \\passthrough{\\lstinline!\\_VEGESU1!} as a\n  function of \\passthrough{\\lstinline!INCOME2!} using StatsModels'\n  \\passthrough{\\lstinline!smf.ols()!}.\n\\end{itemize}\n\nNote: \\passthrough{\\lstinline!linregress!} does not handle\n\\passthrough{\\lstinline!NaN!} values, so you will have to use\n\\passthrough{\\lstinline!dropna!} to select the rows with valid data.\n\n\\hypertarget{multiple-regression}{%\n\\section{Multiple Regression}\\label{multiple-regression}}\n\nNow that we have StatsModels, getting from simple to multiple regression\nis easy. 
As an example, we'll use data from the General Social Survey\n(GSS) and we'll explore variables that are related to income.\n\nFirst, let's load the GSS data.\n\n\\begin{lstlisting}[language=Python,style=source]\nimport pandas as pd\n\ngss = pd.read_hdf('gss_eda.hdf', 'gss')\n\\end{lstlisting}\n\nHere are the first few rows of \\passthrough{\\lstinline!gss!}:\n\n\\begin{lstlisting}[language=Python,style=source]\ngss.head()\n\\end{lstlisting}\n\n\\begin{tabular}{lrrrrrrrr}\n\\toprule\n{} &  YEAR &  ID\\_ &   AGE &  EDUC &  SEX &  GUNLAW &  GRASS &  REALINC \\\\\n\\midrule\n0 &  1972 &    1 &  23.0 &  16.0 &    2 &     1.0 &    NaN &  18951.0 \\\\\n1 &  1972 &    2 &  70.0 &  10.0 &    1 &     1.0 &    NaN &  24366.0 \\\\\n2 &  1972 &    3 &  48.0 &  12.0 &    2 &     1.0 &    NaN &  24366.0 \\\\\n3 &  1972 &    4 &  27.0 &  17.0 &    2 &     1.0 &    NaN &  30458.0 \\\\\n4 &  1972 &    5 &  61.0 &  12.0 &    2 &     1.0 &    NaN &  50763.0 \\\\\n\\bottomrule\n\\end{tabular}\n\nWe'll start with another simple regression, estimating the parameters of\nreal income as a function of years of education.\n\n\\begin{lstlisting}[language=Python,style=source]\nresults = smf.ols('REALINC ~ EDUC', data=gss).fit()\nresults.params\n\\end{lstlisting}\n\n\\begin{tabular}{lr}\n\\toprule\n{} &             0 \\\\\n\\midrule\nIntercept & -13054.459834 \\\\\nEDUC      &   3464.463066 \\\\\n\\bottomrule\n\\end{tabular}\n\nOn the left side of the formula string,\n\\passthrough{\\lstinline!REALINC!} is the variable we are trying to\npredict; on the right, \\passthrough{\\lstinline!EDUC!} is the variable we\nare using to inform the predictions.\n\nThe estimated slope is about \\passthrough{\\lstinline!3450!}, which means\nthat each additional year of education is associated with an additional\n\\$3450 of income. But income also depends on age, so it would be good to\ninclude that in the model, too. Here's how:\n\n\\begin{lstlisting}[language=Python,style=source]\nresults = smf.ols('REALINC ~ EDUC + AGE', data=gss).fit()\nresults.params\n\\end{lstlisting}\n\n\\begin{tabular}{lr}\n\\toprule\n{} &             0 \\\\\n\\midrule\nIntercept & -16152.855386 \\\\\nEDUC      &   3514.291894 \\\\\nAGE       &     54.008253 \\\\\n\\bottomrule\n\\end{tabular}\n\nOn the right side of the formula string, you can list as many variables\nas you like, in this case, education and age. The\n\\passthrough{\\lstinline!plus!} sign indicates that we expect the\ncontributions of the two variables to be additive, which is a common\nassumption for models like this.\n\nThe estimated slope for \\passthrough{\\lstinline!EDUC!} is a little more\nthan what we saw before, about \\$3514 per year.\n\nThe estimated slope for \\passthrough{\\lstinline!AGE!} is only about \\$54\nper year, which is surprisingly small. To see what's going on, let's\nlook more closely at the relationship between income and age.\n\n\\hypertarget{groupby}{%\n\\section{Groupby}\\label{groupby}}\n\nI'll use \\passthrough{\\lstinline!groupby()!}, which is a Pandas feature\nwe have not seen before, to divide the DataFrame into age groups. The\nresult is a \\passthrough{\\lstinline!GroupBy!} object that contains one\ngroup for each value of \\passthrough{\\lstinline!AGE!}.\n\n\\begin{lstlisting}[language=Python,style=source]\ngrouped = gss.groupby('AGE')\ntype(grouped)\n\\end{lstlisting}\n\n\\begin{lstlisting}[style=output]\npandas.core.groupby.generic.DataFrameGroupBy\n\\end{lstlisting}\n\nThe \\passthrough{\\lstinline!GroupBy!} object behaves like a DataFrame in\nmany ways. 
You can use brackets to select a column, like\n\\passthrough{\\lstinline!REALINC!} in this example, and then invoke a\nmethod like \\passthrough{\\lstinline!mean()!}.\n\n\\begin{lstlisting}[language=Python,style=source]\nmean_income_by_age = grouped['REALINC'].mean()\n\\end{lstlisting}\n\nThe result is a Pandas series that contains the mean income for each age\ngroup, which we can plot like this.\n\n\\begin{lstlisting}[language=Python,style=source]\nimport matplotlib.pyplot as plt\n\nplt.plot(mean_income_by_age, 'o', alpha=0.5)\nplt.xlabel('Age (years)')\nplt.ylabel('Income (1986 $)')\nplt.title('Average income, grouped by age');\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[scale=0.75]{10_regression_files/10_regression_36_0.pdf}\n\\end{center}\n\nAverage income increases from age 20 to age 50, then starts to fall.\nThat explains why the estimated slope is so small: the\nrelationship is non-linear. Remember that correlation and simple\nregression can't measure non-linear relationships.\n\nBut multiple regression can! To describe a non-linear relationship, one\noption is to add a new variable that is a non-linear combination of\nother variables.\n\nAs an example, I'll create a new variable called\n\\passthrough{\\lstinline!AGE2!} that equals \\passthrough{\\lstinline!AGE!}\nsquared.\n\n\\begin{lstlisting}[language=Python,style=source]\ngss['AGE2'] = gss['AGE']**2\n\\end{lstlisting}\n\nNow we can run a regression with both \\passthrough{\\lstinline!AGE!} and\n\\passthrough{\\lstinline!AGE2!} on the right side.\n\n\\begin{lstlisting}[language=Python,style=source]\nmodel = smf.ols('REALINC ~ EDUC + AGE + AGE2', data=gss)\nresults = model.fit()\nresults.params\n\\end{lstlisting}\n\n\\begin{tabular}{lr}\n\\toprule\n{} &             0 \\\\\n\\midrule\nIntercept & -49865.446557 \\\\\nEDUC      &   3293.454914 \\\\\nAGE       &   1758.622812 \\\\\nAGE2      &    -17.341566 \\\\\n\\bottomrule\n\\end{tabular}\n\nNow the slope associated with \\passthrough{\\lstinline!AGE!} is\nsubstantial, about \\$1760 per year.\n\nThe slope associated with \\passthrough{\\lstinline!AGE2!} is about -\\$17,\nbut that's harder to interpret.\n\nIn the next section, we'll see methods to interpret multivariate models\nand visualize the results. But first, let's practice multiple\nregression.\n\n\\textbf{Exercise:} To get a closer look at the relationship between\nincome and education, let's use the variable\n\\passthrough{\\lstinline!EDUC!} to group the data, then plot mean income\nin each group.\n\n\\begin{itemize}\n\\item\n  Group \\passthrough{\\lstinline!gss!} by \\passthrough{\\lstinline!EDUC!}.\n  Store the result in \\passthrough{\\lstinline!grouped!}.\n\\item\n  From \\passthrough{\\lstinline!grouped!}, extract\n  \\passthrough{\\lstinline!REALINC!} and compute the mean.\n\\item\n  Plot mean income in each education group as a scatter plot.\n\\end{itemize}\n\nWhat can you say about the relationship between education and income?\nDoes it look like a linear relationship?\n\n\\textbf{Exercise:} The graph in the previous exercise suggests that the\nrelationship between income and education is non-linear. 
So let's try\nfitting a non-linear model.\n\n\\begin{itemize}\n\\item\n  Add a column named \\passthrough{\\lstinline!EDUC2!} to the\n  \\passthrough{\\lstinline!gss!} DataFrame; it should contain the values\n  from \\passthrough{\\lstinline!EDUC!} squared.\n\\item\n  Run a regression model that uses \\passthrough{\\lstinline!EDUC!},\n  \\passthrough{\\lstinline!EDUC2!}, \\passthrough{\\lstinline!AGE!}, and\n  \\passthrough{\\lstinline!AGE2!} to predict\n  \\passthrough{\\lstinline!REALINC!}.\n\\end{itemize}\n\n\\hypertarget{visualizing-regression-results}{%\n\\section{Visualizing regression\nresults}\\label{visualizing-regression-results}}\n\nIn the previous section we ran a multiple regression model to\ncharacterize the relationships between income, age, and education.\nBecause the model includes quadratic terms, the parameters are hard to\ninterpret. For example, you might notice that the parameter for\n\\passthrough{\\lstinline!EDUC!} is negative, and that might be a\nsurprise, because it suggests that higher education is associated with\nlower income.\n\nBut the parameter for \\passthrough{\\lstinline!EDUC2!} is positive, and\nthat makes a big difference. In this section we'll see a way to\ninterpret the model visually and validate it against data.\n\nHere's the model from the previous exercise.\n\n\\begin{lstlisting}[language=Python,style=source]\ngss['EDUC2'] = gss['EDUC']**2\n\nmodel = smf.ols('REALINC ~ EDUC + EDUC2 + AGE + AGE2', data=gss)\nresults = model.fit()\nresults.params\n\\end{lstlisting}\n\n\\begin{tabular}{lr}\n\\toprule\n{} &             0 \\\\\n\\midrule\nIntercept & -26080.884938 \\\\\nEDUC      &   -522.032930 \\\\\nEDUC2     &    153.405410 \\\\\nAGE       &   1711.838648 \\\\\nAGE2      &    -17.128130 \\\\\n\\bottomrule\n\\end{tabular}\n\nSometimes we can understand a model by looking at its parameters, but\noften it is better to look at its predictions.\n\nThe regression results provide a method called\n\\passthrough{\\lstinline!predict!} that uses the model to generate\npredictions. It takes a \\passthrough{\\lstinline!DataFrame!} as a\nparameter and returns a \\passthrough{\\lstinline!Series!} with a\nprediction for each row in the \\passthrough{\\lstinline!DataFrame!}. To\nuse it, I'll create a new \\passthrough{\\lstinline!DataFrame!} with\n\\passthrough{\\lstinline!AGE!} running from 18 to 89, and\n\\passthrough{\\lstinline!AGE2!} set to \\passthrough{\\lstinline!AGE!}\nsquared.\n\n\\begin{lstlisting}[language=Python,style=source]\nimport numpy as np\n\ndf = pd.DataFrame()\ndf['AGE'] = np.linspace(18, 89)\ndf['AGE2'] = df['AGE']**2\n\\end{lstlisting}\n\nNext, I'll pick a level for \\passthrough{\\lstinline!EDUC!}, like 12\nyears, which is the most common value. When you assign a single value to\na column in a \\passthrough{\\lstinline!DataFrame!}, Pandas makes a copy\nfor each row.\n\n\\begin{lstlisting}[language=Python,style=source]\ndf['EDUC'] = 12\ndf['EDUC2'] = df['EDUC']**2\n\\end{lstlisting}\n\nThen we can use \\passthrough{\\lstinline!results!} to predict the average\nincome for each age group, holding education constant.\n\n\\begin{lstlisting}[language=Python,style=source]\npred12 = results.predict(df)\n\\end{lstlisting}\n\nThe result from \\passthrough{\\lstinline!predict!} is a\n\\passthrough{\\lstinline!Series!} with one prediction for each row. So we\ncan plot it with age on the \\(x\\)-axis and the predicted income for each\nage group on the \\(y\\)-axis. 
And we can plot the data for comparison.\n\n\\begin{lstlisting}[language=Python,style=source]\nplt.plot(mean_income_by_age, 'o', alpha=0.5)\n\nplt.plot(df['AGE'], pred12, label='High school', color='C4')\n\nplt.xlabel('Age (years)')\nplt.ylabel('Income (1986 $)')\nplt.title('Income versus age, grouped by education level')\nplt.legend();\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[scale=0.75]{10_regression_files/10_regression_53_0.pdf}\n\\end{center}\n\nThe dots show the average income in each age group. The line shows the\npredictions generated by the model, holding education constant. This\nplot shows the shape of the model, a downward-facing parabola.\n\nWe can do the same thing with other levels of education, like 14 years,\nwhich is the nominal time to earn an Associate's degree, and 16 years,\nwhich is the nominal time to earn a Bachelor's degree.\n\n\\begin{lstlisting}[language=Python,style=source]\ndf['EDUC'] = 16\ndf['EDUC2'] = df['EDUC']**2\npred16 = results.predict(df)\n\ndf['EDUC'] = 14\ndf['EDUC2'] = df['EDUC']**2\npred14 = results.predict(df)\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Python,style=source]\nplt.plot(mean_income_by_age, 'o', alpha=0.5)\n\nplt.plot(df['AGE'], pred16, ':', label='Bachelor')\nplt.plot(df['AGE'], pred14, '--', label='Associate')\nplt.plot(df['AGE'], pred12, label='High school', color='C4')\n\nplt.xlabel('Age (years)')\nplt.ylabel('Income (1986 $)')\nplt.title('Income versus age, grouped by education level')\nplt.legend();\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[scale=0.75]{10_regression_files/10_regression_56_0.pdf}\n\\end{center}\n\nThe lines show mean income, as predicted by the model, as a function of\nage, for three levels of education. This visualization helps validate\nthe model, since we can compare the predictions with the data. 
And it\nhelps us interpret the model since we can see the separate contributions\nof age and education.\n\nIn the exercises, you'll have a chance to run a multiple regression,\ngenerate predictions, and visualize the results.\n\n\textbf{Exercise:} At this point, we have a model that predicts income\nusing age and education.\n\nLet's see what it predicts for different levels of education, holding\n\passthrough{\lstinline!AGE!} constant.\n\n\begin{itemize}\n\item\n  Create an empty \passthrough{\lstinline!DataFrame!} named\n  \passthrough{\lstinline!df!}.\n\item\n  Using \passthrough{\lstinline!np.linspace()!}, add a variable named\n  \passthrough{\lstinline!EDUC!} to \passthrough{\lstinline!df!} with a\n  range of values from \passthrough{\lstinline!0!} to\n  \passthrough{\lstinline!20!}.\n\item\n  Add a variable named \passthrough{\lstinline!AGE!} with the constant\n  value \passthrough{\lstinline!30!}.\n\item\n  Use \passthrough{\lstinline!df!} to generate predicted income as a\n  function of education.\n\end{itemize}\n\n\textbf{Exercise:} Now let's visualize the results from the previous\nexercise!\n\n\begin{itemize}\n\item\n  Group the GSS data by \passthrough{\lstinline!EDUC!} and compute the\n  mean income in each education group.\n\item\n  Plot mean income for each education group as a scatter plot.\n\item\n  Plot the predictions from the previous exercise.\n\end{itemize}\n\nHow do the predictions compare with the data?\n\n\textbf{Optional Exercise:} Extend the previous exercise to include\npredictions for a few other age levels.\n\n\hypertarget{categorical-variables}{%\n\section{Categorical variables}\label{categorical-variables}}\n\nMost of the variables we have used so far --- like income, age, and\neducation --- are numerical. But variables like sex and race are\ncategorical; that is, each respondent belongs to one of a specified set\nof categories.\n\nWith StatsModels, it is easy to include a categorical variable as part\nof a regression model. Here's an example:\n\n\begin{lstlisting}[language=Python,style=source]\nformula = 'REALINC ~ EDUC + EDUC2 + AGE + AGE2 + C(SEX)'\nresults = smf.ols(formula, data=gss).fit()\nresults.params\n\end{lstlisting}\n\n\begin{tabular}{lr}\n\toprule\n{} &             0 \\\n\midrule\nIntercept   & -24567.566164 \\\nC(SEX)[T.2] &  -4626.727813 \\\nEDUC        &   -303.398385 \\\nEDUC2       &    143.954306 \\\nAGE         &   1702.199322 \\\nAGE2        &    -16.995151 \\\n\bottomrule\n\end{tabular}\n\nIn the formula string, the letter \passthrough{\lstinline!C!} indicates\nthat \passthrough{\lstinline!SEX!} is a categorical variable.\n\nThe regression treats the value \passthrough{\lstinline!SEX=1!}, which\nis male, as the default, and reports the difference associated with the\nvalue \passthrough{\lstinline!SEX=2!}, which is female. So this result\nindicates that income for women is about \$4600 less than for men, after\ncontrolling for age and education.\n\n\hypertarget{logistic-regression}{%\n\section{Logistic Regression}\label{logistic-regression}}\n\nIn the previous section, we added a categorical variable on the right\nside of a regression formula; that is, we used it as a predictive\nvariable.\n\nBut what if the categorical variable is on the left side of the\nregression formula; that is, it's the value we are trying to predict? 
In\nthat case, we can use \textbf{logistic regression}.\n\nAs an example, one of the questions in the General Social Survey asks\n``Would you favor or oppose a law which would require a person to obtain\na police permit before he or she could buy a gun?'' The responses are in\na column called \passthrough{\lstinline!GUNLAW!}; here are the values.\n\n\begin{lstlisting}[language=Python,style=source]\ngss['GUNLAW'].value_counts()\n\end{lstlisting}\n\n\begin{tabular}{lr}\n\toprule\n{} &  GUNLAW \\\n\midrule\n1.0 &   32038 \\\n2.0 &    9975 \\\n\bottomrule\n\end{tabular}\n\n\passthrough{\lstinline!1!} means yes and \passthrough{\lstinline!2!}\nmeans no, so most respondents are in favor.\n\nTo explore the relationship between this variable and factors like age,\nsex, and education, we can use StatsModels, which provides a function\nthat does logistic regression.\n\nTo use it, we have to recode the variable so \passthrough{\lstinline!1!}\nmeans ``yes'' and \passthrough{\lstinline!0!} means ``no''. We can do\nthat by replacing \passthrough{\lstinline!2!} with\n\passthrough{\lstinline!0!}.\n\n\begin{lstlisting}[language=Python,style=source]\ngss['GUNLAW'] = gss['GUNLAW'].replace([2], [0])\n\end{lstlisting}\n\nNow we can run the regression. Instead of\n\passthrough{\lstinline!ols()!}, we use\n\passthrough{\lstinline!logit()!}, which is named for the logit\nfunction that logistic regression is based on.\n\n\begin{lstlisting}[language=Python,style=source]\nformula = 'GUNLAW ~ AGE + AGE2 + EDUC + EDUC2 + C(SEX)'\nresults = smf.logit(formula, data=gss).fit()\n\end{lstlisting}\n\n\begin{lstlisting}[style=output]\nOptimization terminated successfully.\n         Current function value: 0.532878\n         Iterations 6\n\end{lstlisting}\n\nEstimating the parameters for the logistic model is an iterative\nprocess, so the output contains information about the number of\niterations. Other than that, everything is the same as what we have seen\nbefore. And here are the results.\n\n\begin{lstlisting}[language=Python,style=source]\nresults.params\n\end{lstlisting}\n\n\begin{tabular}{lr}\n\toprule\n{} &         0 \\\n\midrule\nIntercept   &  1.347289 \\\nC(SEX)[T.2] &  0.771791 \\\nAGE         & -0.021314 \\\nAGE2        &  0.000222 \\\nEDUC        & -0.075406 \\\nEDUC2       &  0.004867 \\\n\bottomrule\n\end{tabular}\n\nThe parameters are in the form of \textbf{log odds}, which you may or\nmay not be familiar with. I won't explain them in detail here, except to\nsay that positive values are associated with things that make the\noutcome more likely, and negative values make the outcome less likely.\n\nFor example, the parameter associated with\n\passthrough{\lstinline!SEX=2!} is about 0.77, which indicates that women\nare more likely to support this form of gun control. 
To see how much more\nlikely, we can generate predictions, as we did with linear regression.\n\nAs an example, I'll generate predictions for different ages and sexes,\nwith education held constant.\n\nFirst we need a \passthrough{\lstinline!DataFrame!} with\n\passthrough{\lstinline!AGE!} and \passthrough{\lstinline!EDUC!}.\n\n\begin{lstlisting}[language=Python,style=source]\ndf = pd.DataFrame()\ndf['AGE'] = np.linspace(18, 89)\ndf['EDUC'] = 12\n\end{lstlisting}\n\nThen we can compute \passthrough{\lstinline!AGE2!} and\n\passthrough{\lstinline!EDUC2!}.\n\n\begin{lstlisting}[language=Python,style=source]\ndf['AGE2'] = df['AGE']**2\ndf['EDUC2'] = df['EDUC']**2\n\end{lstlisting}\n\nWe can generate predictions for men like this.\n\n\begin{lstlisting}[language=Python,style=source]\ndf['SEX'] = 1\npred1 = results.predict(df)\n\end{lstlisting}\n\nAnd for women like this.\n\n\begin{lstlisting}[language=Python,style=source]\ndf['SEX'] = 2\npred2 = results.predict(df)\n\end{lstlisting}\n\nNow, to visualize the results, I'll start by plotting the data. As we've\ndone before, we'll divide the respondents into age groups and compute\nthe mean in each group. The mean of a binary variable is the fraction of\npeople in favor.\n\nThen we can plot the predictions, for men and women, as a function of\nage.\n\n\begin{lstlisting}[language=Python,style=source]\ngrouped = gss.groupby('AGE')\nfavor_by_age = grouped['GUNLAW'].mean()\nplt.plot(favor_by_age, 'o', alpha=0.5)\n\nplt.plot(df['AGE'], pred2, label='Female')\nplt.plot(df['AGE'], pred1, '--', label='Male')\n\nplt.xlabel('Age')\nplt.ylabel('Probability of favoring gun law')\nplt.title('Support for gun law versus age, grouped by sex')\nplt.legend();\n\end{lstlisting}\n\n\begin{center}\n\includegraphics[scale=0.75]{10_regression_files/10_regression_83_0.pdf}\n\end{center}\n\nAccording to the model, people near age 50 are least likely to support\ngun control (at least as this question was posed). And women are more\nlikely to support it than men, by almost 15 percentage points.\n\nLogistic regression is a powerful tool for exploring relationships\nbetween a binary variable and the factors that predict it. In the\nexercises, you'll explore the factors that predict support for\nlegalizing marijuana.\n\n\textbf{Exercise:} Let's use logistic regression to predict a binary\nvariable. Specifically, we'll use age, sex, and education level to\npredict support for legalizing cannabis (marijuana) in the U.S.\n\nIn the GSS dataset, the variable \passthrough{\lstinline!GRASS!} records\nthe answer to the question ``Do you think the use of marijuana should be\nmade legal or not?''\n\n\begin{enumerate}\n\def\labelenumi{\arabic{enumi}.}\n\item\n  First, use \passthrough{\lstinline!replace!} to recode the\n  \passthrough{\lstinline!GRASS!} column so that\n  \passthrough{\lstinline!1!} means yes and \passthrough{\lstinline!0!}\n  means no. Use \passthrough{\lstinline!value\_counts!} to check.\n\item\n  Next, use \passthrough{\lstinline!smf.logit()!} to predict\n  \passthrough{\lstinline!GRASS!} using the variables\n  \passthrough{\lstinline!AGE!}, \passthrough{\lstinline!AGE2!},\n  \passthrough{\lstinline!EDUC!}, and \passthrough{\lstinline!EDUC2!},\n  along with \passthrough{\lstinline!SEX!} as a categorical variable.\n  Display the parameters. Are men or women more likely to support\n  legalization?\n\item\n  To generate predictions, start with an empty DataFrame. 
Add a column\n  called \passthrough{\lstinline!AGE!} that contains a sequence of\n  values from 18 to 89. Add a column called\n  \passthrough{\lstinline!EDUC!} and set it to 12 years. Then compute a\n  column, \passthrough{\lstinline!AGE2!}, which is the square of\n  \passthrough{\lstinline!AGE!}, and a column,\n  \passthrough{\lstinline!EDUC2!}, which is the square of\n  \passthrough{\lstinline!EDUC!}.\n\item\n  Use \passthrough{\lstinline!predict!} to generate predictions for men\n  (\passthrough{\lstinline!SEX=1!}) and women\n  (\passthrough{\lstinline!SEX=2!}).\n\item\n  Generate a plot that shows (1) the average level of support for\n  legalizing marijuana in each age group, (2) the level of support the\n  model predicts for men as a function of age, and (3) the level of\n  support predicted for women as a function of age.\n\end{enumerate}\n\n
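Here is a minimal sketch of one possible solution, assuming\n\passthrough{\lstinline!GRASS!} is coded like\n\passthrough{\lstinline!GUNLAW!} (\passthrough{\lstinline!1!} for yes,\n\passthrough{\lstinline!2!} for no); the steps mirror the gun law\nexample above.\n\n\begin{lstlisting}[language=Python,style=source]\n# Recode so 1 means yes and 0 means no\ngss['GRASS'] = gss['GRASS'].replace([2], [0])\n\n# Logistic regression with SEX as a categorical variable\nformula = 'GRASS ~ AGE + AGE2 + EDUC + EDUC2 + C(SEX)'\nresults = smf.logit(formula, data=gss).fit()\nresults.params\n\n# Predictions for men and women, holding education constant\ndf = pd.DataFrame()\ndf['AGE'] = np.linspace(18, 89)\ndf['EDUC'] = 12\ndf['AGE2'] = df['AGE']**2\ndf['EDUC2'] = df['EDUC']**2\n\ndf['SEX'] = 1\npred1 = results.predict(df)\ndf['SEX'] = 2\npred2 = results.predict(df)\n\n# Compare the predictions with the data\nfavor_by_age = gss.groupby('AGE')['GRASS'].mean()\nplt.plot(favor_by_age, 'o', alpha=0.5)\nplt.plot(df['AGE'], pred1, '--', label='Male')\nplt.plot(df['AGE'], pred2, label='Female')\nplt.xlabel('Age')\nplt.ylabel('Probability of favoring legalization')\nplt.legend();\n\end{lstlisting}\n\n\hypertarget{summary}{%\n\section{Summary}\label{summary}}\n\nAt this point, I'd like to summarize the topics we've covered so far,\nand make some connections that might clarify the big picture.\n\nA central theme of this book is \textbf{exploratory data analysis},\nwhich is a process and a set of techniques for working with data,\nespecially in the early stages of a project, or when you are working\nwith a new data set.\n\nThe last four chapters demonstrate the steps of this process:\n\n\begin{itemize}\n\item\n  Chapter 7 is about importing and cleaning data, and checking for\n  errors and other special conditions. This might not be the most\n  exciting part of the process, but time spent validating data can save\n  you from embarrassing errors.\n\item\n  Chapter 8 is about exploring variables one at a time, visualizing\n  distributions using PMFs, CDFs, and KDE, and choosing appropriate\n  summary statistics.\n\item\n  In Chapter 9 we explored relationships between variables two at a\n  time, using scatter plots and other visualizations; and we quantified\n  those relationships using correlation and simple regression.\n\item\n  Finally, in this chapter, we explored multivariate relationships using\n  multiple regression and logistic regression.\n\end{itemize}\n\nIn Chapter 7, we looked at the distribution of birth weights from the\nNational Survey of Family Growth. If you only remember one thing,\nremember the 99-pound babies, and how much it can affect your results if\nyou don't validate the data.\n\nIn Chapter 8 we looked at the distributions of age, income, and other\nvariables from the General Social Survey. I recommended using CDFs as\nthe best way to explore distributions. But when you present to audiences\nthat are not familiar with CDFs, you can use PMFs if there are a small\nnumber of unique values, and KDE if there are a lot.\n\nIn Chapter 9 we looked at heights and weights from the BRFSS, and\ndeveloped several ways to visualize relationships between variables,\nincluding scatter plots, violin plots, and box plots.\n\nWe used the coefficient of correlation to quantify the strength of a\nrelationship. We also used simple regression to estimate slope, which is\noften what we care about more than correlation itself.\n\nBut remember that both of these methods only capture linear\nrelationships; if the relationship is non-linear, they can be\nmisleading. Always look at a visualization, like a scatter plot, before\ncomputing correlation or simple regression.\n\nIn Chapter 10 we used multiple regression to add control variables and\nto describe non-linear relationships. 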
And finally we used logistic\nregression to explain and predict binary variables.\n\nWe moved through a lot of material quickly, but if you practice and\napply these methods to other questions and other datasets, you will\nlearn more as you go.\n\n", "meta": {"hexsha": "dd662b05ec26de5c884ea6176f727dd6579879b5", "size": 30931, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/10_regression.tex", "max_stars_repo_name": "AllenDowney/ElementsOfDataScienceBook", "max_stars_repo_head_hexsha": "3b87dfdd81c68ebd17f84a818326ed87da265ddb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-05-06T13:57:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T18:21:30.000Z", "max_issues_repo_path": "book/10_regression.tex", "max_issues_repo_name": "AllenDowney/ElementsOfDataScienceBook", "max_issues_repo_head_hexsha": "3b87dfdd81c68ebd17f84a818326ed87da265ddb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/10_regression.tex", "max_forks_repo_name": "AllenDowney/ElementsOfDataScienceBook", "max_forks_repo_head_hexsha": "3b87dfdd81c68ebd17f84a818326ed87da265ddb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-27T10:41:22.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T10:41:22.000Z", "avg_line_length": 35.0691609977, "max_line_length": 74, "alphanum_fraction": 0.7540331706, "num_tokens": 8362, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.5832368981292042}}
{"text": "\\section{Application to Color Transfer}\n\\label{sec-appli-color} \n\nThis section shows how the relaxed and regularized OT formulation can be applied to imaging problems, more specifically to color transfer, and how the regularization and the relaxation improve the results obtained by previous methods. The color transfer problem consists in modifying an input image $X^0$ so that its colors match the colors of another input image $Y^0$. \n\n\\begin{figure*}[h]\n\\centering\n\\begin{tabular}{@{}c@{\\hspace{1mm}}c@{\\hspace{1mm}}c@{}}\n\\includegraphics[width=.32\\linewidth]{colorization/parrot_1} &\n\\includegraphics[width=.32\\linewidth]{colorization/parrot_2} &\n\\includegraphics[width=.32\\linewidth]{colorization/asymmetricparrot_l00008_KX1_KY1_nn4} \\\\ \n$X^0$ & $Y^0$ & ${\\tilde X^0}$ \\\\\n\\includegraphics[width=.32\\linewidth]{colorization/histo3Dx} &\n\\includegraphics[width=.32\\linewidth]{colorization/histo3Dy} &\n\\includegraphics[width=.32\\linewidth]{colorization/histo3Du}\\\\\n$\\mu_{X^0}$ & $\\mu_{Y^0}$ & $\\mu_{\\tilde X^0}$\n\\end{tabular}\n\\caption{Example of the colorization problem. Given images $X^0$ and $Y^0$ with their corresponding 3-D color distributions $\\mu_{X^0}$ and $\\mu_{Y^0}$ (represented here using their 2-D projection on the RG plane), the goal of colorization methods is to define an image ${\\tilde X^0}$ that has the geometry of $X^0$ and a histogram $\\mu_{{\\tilde X}^0}$  that is similar to $\\mu_{Y^0}$.}\\label{imcolorization}\n\\end{figure*}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Color Images and Histograms}\n\nIn the following, an image is stored as a vector $X^0 \\in \\RR^{N_0 \\times d}$ where $d=3$ is the number of channels (here $d=3$ since we handle color images, with R, G and B color channels) and where $N_0=N_1 N_2$ is the number of pixels ($N_1$ being horizontal and $N_2$ vertical dimensions). The color histogram of such an image $X^0$ can be estimated using the empirical distribution $\\mu_{X^0}$. The goal of color transfer algorithms is to compute a transformation $T^0$ such that $(\\tilde X^0)_i = T^0(X^0_i)$, where the new empirical distribution $\\mu_{\\tilde X^0}$ is close (or equal) to $\\mu_{Y^0}$. Figure~\\ref{imcolorization} shows an example where $X^0$, $Y^0$ are the original input images, the second row displays the 2-D projection of the 3-D distribution of pixels $\\mu_{X^0}$ and $\\mu_{Y^0}$, and in the third column, we show the $\\mu_{\\tilde X^0}$ which is the result of applying $T^0$ to $X^0$, where $T^0$ is computed using the method described below. The associated image ${\\tilde X^0}$ has the geometry of $X^0$ and the color palette (3-D histogram) of $Y^0$.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Regularized OT Color Transfer}\n\nAs exposed in Section~\\ref{subsec-ot-imaging}, OT is now routinely used to perform color palette modification, and in particular color transfer. As we illustrate below in the numerical examples, relaxing the mass conservation constraint is crucial in order to better match the modes (i.e. the dominant colors) of each distribution. Regularizing the transport is also important to reduce colorization artifacts.\n\nTo make the optimization problem~\\eqref{eq-symm-reg-energy} tractable for histograms obtained from large scale images, we apply the method on a sub-sampled point cloud. That is to say, before computing the relaxed and regularized transport, we define two smaller point clouds $X$ and $Y$ from $X^0$ and $Y^0$. 
These clouds are created such that their respective distributions $\mu_X$ and $\mu_Y$ are close to the two original distributions $\mu_{X^0}$ and $\mu_{Y^0}$. The mapping $T$ between these small clouds is then extended by interpolation to the original clouds. The complete algorithm for regularized OT color transfer between a pair of images $(X^0,Y^0)$ is given in Algorithm~\ref{alg-rot}. We now detail each step of the method. \n\n\begin{algorithm}[ht!]\n\caption{Regularized OT Color Transfer}\n\label{alg-rot}\n% \begin{algorithmic}[1]\n\Require Images $X^0,Y^0 \in \RR^{N_0 \times d}$, $\la_X,\la_Y \in \RR^+ $, and $k_X,K_X,k_Y,K_Y \in \RR^+$, where $k_X \le K_X$ and $k_Y \le K_Y$.\n\n\Ensure Image $\tilde X^0 \in \RR^{N_0 \times d}$.\n% \Statex\n\begin{enumerate}\n\t\algostep{Histogram down-sample} Compute $X,Y$ from $X^0,Y^0$ respectively \\ using K-means clustering.\n\t\algostep{Compute Mapping} Compute the optimal $\Sig$ such that $T(X) = \diag(\Sig \U)^{-1} \Sig Y$ by solving eq.~\eqref{eq-symm-reg-energy} with algorithm~\eqref{eq-frankwolfe-update}, or by solving the linear program~\eqref{eq-symm-TV} with an interior point algorithm.\n\t%\algostep{Transport up-sample} Compute $\tilde T^0$ by solving eq.~\eqref{eq-upsample}, where  $T(X) = diag(\Sig \U)^{-1} \Sig Y$.\n\t\algostep{Obtain high resolution result} Compute $\tilde X^0$ with eq.~\eqref{eq-upsample}.\n\end{enumerate}\n% \end{algorithmic}\n\end{algorithm}\n\n\n%%%%%%%%\n\paragraph{Pixels down-sampling} \n\nWe construct a smaller data set $X \in \RR^{N \times d}$ by clustering the set $X^0$ into $N$ clusters with the K-means algorithm (see~\cite{Lloyd57}).\n% and~\cite{Quantization} for its application to image quantization). \nEach cluster corresponds to a point $X_i$ in our smaller data set $X$. The same procedure is done for $Y^0$ to obtain $Y \in \RR^{N \times d}$.%, and we compute  \n\n%%%%%%%%\n\paragraph{Graph and $(G_X,G_Y)$ operator} \n\nAs described in Section~\ref{sec:regsymme}, the regularization is defined using gradient operators $(G_X,G_Y)$ on graphs $(\Gg_X,\Gg_Y)$ connecting the points in $X$ and $Y$. Inspired by several recent works on manifold learning (see Section~\ref{subsec-regul-intro}), we use here an $n$-nearest neighbor graph, where $n$ is the number of edges adjacent to each vertex, i.e. $\abs{\enscond{j}{(i,j) \in E_X}} = n$, where $E_X$ is the set of edges of $\Gg_X$. The weights of the graphs are defined as $w_{i,j}=\norm{X_i - X_j}^{-1}$ (and similarly for $Y$), which is consistent with the computation of the directional derivatives. An example of this graph can be observed in Figure~\ref{im:matching}. Note that this graph does not need to be connected. \n\n%%%%%%%%\n\paragraph{Transport map computation}\n\nThe regularized transport map $T$ between the sub-sampled data $(X,Y)$ is computed as\n\eq{\n\tT(X_i)=\left(\diag(\Sig \U)^{-1}\Sig Y\right)_i \qforq i={1,\ldots,N}\n} \nwhere $\Sig$ is a solution of~\eqref{eq-symm-reg-energy}.\n\n%%%%%%%%\n\paragraph{Transport map up-sampling}\n\nThe transport map $T$ is extended to the whole space using a nearest neighbor interpolation \n% to every point the transport map of the nearest point in $X$: \n\eql{\label{eq-upsample}\n\t% (\tilde X^0)_{i} = \n\t\foralls x \in \RR^d, \quad\n\tT^0(x) = T(X_{i(x)}) + x - X_{i(x)}, \n\t\qwhereq\n\ti(x) = \uargmin{1 \leq i \leq N} \norm{ x - X_i }. \n} \nNote that this interpolation scheme contains an additive term $x-X_{i(x)}$. 
This corresponds to adding back the quantization error (due to the K-means sub-sampling) to the nearest neighbors interpolation, which helps restore small-scale textural details and improves the visual quality of the result. Note also that one could use smoother interpolation schemes (for instance linear or natural neighbors) to help reduce the quantization error. These interpolations, however, require computing a Delaunay tetrahedrization of the 3-D color space, which leads to a significant computational overhead.  \n\nThe resulting transport computed in~\eqref{eq-upsample} can now be applied to the input image $X^0$ to obtain the new pixel values $(\tilde X^0)_{i} = T^0(X^0_i)$.\n\n\begin{figure}[ht]\n\centering\n\begin{tabular}{@{}c@{\hspace{1mm}}c}\n\includegraphics[width=.4\linewidth]{barycenter/flowers/flowers-1.jpg} &\n\fbox{\includegraphics[width=.55\linewidth,height=5.1cm]{coloredgraph}} \\\n(a) & (b)\vspace{-0.3cm}\n\end{tabular}\n\caption{(a) Flower image; (b) its empirical distribution projected on the Red-Blue plane. The line segments represent the edges $E_X$ of the $n$-nearest neighbor graph computed with $n=4$. \vspace{-0.1cm}}\n\label{im:matching}\n\end{figure}\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\subsection{Parameters Selection}\n\nThe color transfer method has several parameters. We explain below their influence and how they were selected in the numerical examples shown in the following section.\n\n\n%%%%%%%%\n\paragraph{Parameters $(p,q)$: Sobolev vs. total variation}\n\nAll the numerical results reported in the remainder of the paper are computed using the total variation (TV) regularization, which corresponds to using the parameters $(p,q)=(1,1)$. We found empirically, on the color transfer application, that both choices produce visually similar results. The main difference is that the resulting optimal $\Sig$ is somewhat sparser with the total variation prior, i.e. the resulting weights are more concentrated on a few entries. \n\n\n%%%%\n\paragraph{Relaxation parameter $\kappa$} \n\nThe impact of this parameter has already been discussed in Section~\ref{subsec-relaxed-transport} on synthetic 2-D examples. \n\n%%%%\n\paragraph{Regularization parameters $(\la_X,\la_Y)$} \n\n%While the impact of these parameters has already been explored in Section~\ref{sec:regsymme} on 2-D examples, we further illustrate it for color transfer of synthetic images.\nFigure~\ref{im:synth} shows an example of color transfer between two synthetic images $X^0$ and $Y^0$ shown in Figure~\ref{im:synth}~(a). We apply Algorithm~\ref{alg-rot} to obtain the image ${\tilde X^0}$ with a color palette close to $Y^0$, but with the geometry of the original $X^0$. We now study the influence of the parameters $\la_X$ and $\la_Y$. Figure~\ref{exlk} shows a 2-D projection in the Red-Green plane of $X$ and $Y$, displayed in red and blue respectively, and of $\tilde X$ in green. As already pointed out in Section~\ref{secalgosymm}, low values of $\la_X$ and $\la_Y$ (zero for the first column) tend to match the points in $X$ to the closest point in $Y$. This behavior can be observed in the mapping of column (b). Many points in the big cluster of $X$ are mapped to very few points in the small cluster of $Y$, which corresponds in the images to mapping many red values of $X^0$ to very few brown values in $Y^0$. 
The consequence is that the color resolution of $\tilde X$ is reduced: the brown area of Figure~\ref{im:synth}~(b) is flat, unlike the original brown values in $Y$. As we increase the values of $\la_X$ and $\la_Y$, the mapping spreads within the small cluster of $Y$ in Figure~\ref{exlk}(b) and we gain color resolution, as can be observed in Figure~\ref{im:synth}~(c). It should be observed that, even in this case, the contrast renditions of $\tilde{X}$ and $Y$ are still not the same. Comparing the convex hull sizes of the color histograms $\mu_{Y^0}$ and $\mu_{{\tilde X}^0}$, we see that the convex hull of $\mu_{{\tilde X}^0}$ is smaller; thus the contrast is reduced. \n\nOn the other hand, if we increase the values of $\la_X$ and $\la_Y$ too much, many points in $X$ get matched to the big cluster in $Y$ in Figure~\ref{exlk}~(c), which leads to a single dominant color in the final image $\tilde X^0$, in Figure~\ref{im:synth}~(d). \n\n\n\n\newlength{\mylenX}\settowidth{\mylenX}{\includegraphics[width=1.75cm]{../images/syntheticexamples/X} } % Widest element\n\newcommand{\sidecapX}[1]{ {\begin{sideways}\parbox{1.65cm}{\centering #1}\end{sideways}} }\n\n\begin{figure}[ht]\n\centering\n\begin{tabular}{@{}c@{}}\n\sidecapX{\scriptsize $X^0$} \includegraphics[width=.13\linewidth]{../images/syntheticexamples/X} \\\n\sidecapX{\scriptsize $Y^0$} \includegraphics[width=.13\linewidth]{../images/syntheticexamples/Y} \\\n\hspace{0.4cm} (a)\vspace{-0.25cm}\n\end{tabular} \n\begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{}}\n\includegraphics[width=.27\linewidth]{../images/syntheticexamples/symmetricsyntheticinv_l0_KX8_KY8_nn4} &\n\includegraphics[width=.27\linewidth]{../images/syntheticexamples/symmetricsyntheticinv_l0001_KX8_KY8_nn4} &\n\includegraphics[width=.27\linewidth]{../images/syntheticexamples/symmetricsyntheticinv_l10_KX8_KY8_nn4} \\\n(b) &  (c) & (d) \vspace{-0.25cm}\n\end{tabular} \n\caption{Effect of changing the parameters $\la_X$ and $\la_Y$ of the relaxed and regularized OT formulation presented in Section~\ref{sec:regsymme}, using parameters $\kappa=(0.1,8,0.1,8)$.  \n\t{(a)} original input images, \n\t{(b)} relaxed OT, $\la_X=\la_Y=0$; \n\t{(c)} $\la_X=\la_Y=0.001$; \n\t{(d)} $\la_X=\la_Y=10$. \n\tEach of these mappings can be observed in Figure~\ref{exlk}.\vspace{-0.1cm}}\n  \label{im:synth}\n\end{figure}\n\n\paragraph{Number $N$ of quantization points}\n\nIn our numerical experiments, we have used $N=400$, which corresponds to a tradeoff between computation time and visual quality. In practice, of course, this number should be set as a function of the complexity (e.g. number of modes) of the input color distributions, but we found that $N=400$ was sufficient for all considered images in our tests. Figure~\ref{changeN} shows some experiments to support this claim and to show that even setting $N$ to a smaller value did not affect the results much. \n \n % For images with a small number of homogeneous regions, where we could do with less clusters, the points just cluster around a single value. Then, for images with a huge amount of regions, it seems that 400 is enough to summarize the information, of course this value may depend on the size of the image. \n\n % Given the moderate impact of a change in this parameter, we decided to not introduce these experiments in the paper. 
\n \n\newcommand{\myfigb}[1]{\includegraphics[width=.25\linewidth,height=.25\linewidth]{./images/influence-quantiz/#1}}\n\n\begin{figure}[h!]\n\centering\n\begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{}}\n\myfigb{parrot_1.jpg} &\n\myfigb{parrotX_N64} &\n\myfigb{parrotX_N144} &\n\myfigb{parrotX_N400} \\\n\myfigb{parrot_2.jpg} &\n\myfigb{parrotY_N64} &\n\myfigb{parrotY_N144} &\n\myfigb{parrotY_N400} \\\nOriginal & $N=64$ & $N=144$ & $N=400$\n\end{tabular}\n\vspace{0.1in}\n\begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{}}\n\myfigb{flowers-1} & \n\myfigb{flowersX_N64} &\n\myfigb{flowersX_N144} &\n\myfigb{flowersX_N400} \\\n\myfigb{flowers-2} & \n\myfigb{flowersY_N64} &\n\myfigb{flowersY_N144} &\n\myfigb{flowersY_N400} \\\nOriginal & $N=64$ & $N=144$ & $N=400$\vspace{-0.1cm}\n\end{tabular}\n\caption{Results obtained by computing the relaxed-regularized OT between the original images while changing the value of $N$. The parameters were set as $n=4$, $M=N$, for all experiments. Then, for the first and second rows, $\kappa=(0.1,1,0.1,1)$ and $\la_X=\la_Y=0.001$. For the second experiment, involving the third and fourth rows, $\kappa=(0.1,1.1,0.1,1.1)$ and $\la_X=\la_Y=9 \cdot 10^{-4}$. Each column corresponds to a single experiment; the experiments differ only in the value of $N$.\vspace{-0.1cm}}\n\n%\caption{Results obtained computing the relaxed-regularized OT between the image in the first and third row ($X$) with the image in the second and forth row ($Y$), respectively. The parameters were set as $n=4$, $M=N$, for all experiments. Then, for the first and second row $\kappa=(0.1,1 ,0.1,1)$, and $\la_X=\la_Y=0.001$. For the second experiment that involves the third and forth row, $\kappa=(0.1,1.1 ,0.1,1.1)$ and $\la_X=\la_Y=9 \cdot 10^{-4}$. Each column corresponds to a single experiment which differ on the value of $N$.\vspace{-0.1cm}}\n\label{changeN}\n\end{figure}\n\n\n%%%%\n\paragraph{Number $n$ of nearest neighbors} \n\nAs for the other parameters of the method, the value of $n$ is set in order to obtain the best visual result. This parameter is related to the connectivity of the graphs $\mathcal{G}_X$ and $\mathcal{G}_Y$. There are two behaviors that we need to avoid when setting it: \n\begin{itemize}\n\item Too small $n$: The graph has very few edges, so an area that is almost homogeneous in the spatial domain could end up being disconnected. In this case, this homogeneous area in the original image could be transported to different locations, generating color artifacts. In practice, this situation has been observed only for very small values of $n$. An example can be seen in the first row of Figure~\ref{imn}: for $n=1$ and $n=2$, there is an area in the background that, after the color matching, gets two different colors. For greater values of $n$ this artifact disappears. \n\item Too large $n$: The graph has many edges and all the points are highly interconnected, so the regularization term plays a very strong role by maintaining the structure of the original point cloud, which reduces the flexibility of the map. This effect is quite difficult to observe in the resulting images. 
\n\end{itemize}\n\newcommand{\myfig}[1]{\includegraphics[width=.25\linewidth]{./images/influence-nn/#1}}\n\n\begin{figure}[h]\n\centering\n\begin{tabular}{@{}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{}}\n\myfig{parrotX_nnx1} &\n\myfig{parrotX_nnx2} &\n\myfig{parrotX_nnx4} &\n\myfig{parrotX_nnx10} \\\n\myfig{parrotY_nnx1} &\n\myfig{parrotY_nnx2} &\n\myfig{parrotY_nnx4} &\n\myfig{parrotY_nnx10} \\\n\myfig{flowersX_nnx1} &\n\myfig{flowersX_nnx2} &\n\myfig{flowersX_nnx4} &\n\myfig{flowersX_nnx10} \\\n\myfig{flowersY_nnx1} &\n\myfig{flowersY_nnx2} &\n\myfig{flowersY_nnx4} &\n\myfig{flowersY_nnx10} \\\n$n=1$ & $n=2$ & $n=4$ & $n=10$\n\end{tabular}\n\caption{Results obtained by changing the $n$ parameter. The original images are presented in Fig.~\ref{changeN}.}\n\label{imn}\n\end{figure}\n\nAs pointed out before, in our experiments we saw that setting $n=4$ avoids situations related to both cases. In Fig.~\ref{imn}, we can see how the final image evolves with a change in the value of the parameter $n$. Note that the only noticeable difference between the images can be observed in the results obtained with very small values of $n$ ($n=1,2$), where homogeneous areas get divided. %Above a certain value, the results do not change.\n\n\n\n%%%%\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\subsection{Results}\n\n\nFigure~\ref{star} shows some results on natural images and compares them with the methods of Piti\'e et al.~\cite{Pitie07}, Rabin et al.~\cite{Rabin_ip11}\footnote{We thank Julien Rabin for providing us with these results} and Papadakis et al.~\cite{Papadakis_ip11}. Note that the method of~\cite{Pitie07} includes a post-processing step to reduce the color artifacts, but it is applied here without the post-processing (it thus corresponds to the application of an approximate optimal transport).\n The goal of the experiment is to transfer the color palette of the images in the second row to the images in the first row. Note that the state-of-the-art methods introduce color artifacts (in the first column there is violet outside the flower, and in the second column the wheat is bluish), which can be avoided with the proposed method by an appropriate choice of $\la_X$, $\la_Y$ and $\kappa$. These results were obtained by setting $N=400$ and constructing the graph as a $4$-nearest neighbor graph. 
By column, the values of $\la_X=\la_Y$ are $9\times 10^{-4}$, $5\times 10^{-4}$, and $10^{-3}$, and $\kappa$ was set to $(0.1,1.1,0.1,1.1)$, $(0.1,1.3,0.1,1.3)$, and $(0.1,1,0.1,1)$, respectively.\n \n\n\newlength{\mylen}\settowidth{\mylen}{\includegraphics[width=3.8cm,height=3.3cm]{star/fleur_1}} % Widest element\n\n\newcommand{\sidecap}[1]{{\begin{sideways}\parbox{3.3cm}{\centering #1}\end{sideways}} \hspace{-4mm}}\n\n\newcommand{\myimg}[1]{\includegraphics[width=.25\linewidth,height=.22\linewidth]{#1}}\n\n\n\begin{figure}[!ht]\n\centering\n\begin{tabular}{@{}@{}cc@{\hspace{1mm}}c@{\hspace{1mm}}c@{}}\n\sidecap{ Original $X^0$ }  & \n\myimg{star/fleur_1} &\n\myimg{star/wheat_1.jpg} &\n\myimg{star/parrot_1.jpg} \vspace{-0.45cm}\\\n\sidecap{ Original $Y^0$ } & \n\myimg{star/fleur_2} &\n\myimg{star/wheat_2.jpg} &\n\myimg{star/parrot_2.jpg}\vspace{-0.45cm} \\\n\sidecap{ Piti\'e et al.~\cite{Pitie07} } & \n\myimg{star/fleur_pitie} &\n\myimg{star/wheat_pitie} &\n\myimg{star/parrot_pitie}\vspace{-0.45cm} \\\n\sidecap{ Rabin et al.~\cite{Rabin_ip11} } & \n\myimg{star/fleur_Rabin} &\n\myimg{star/wheat_Rabin} &\n\myimg{star/parrot_Rabin}\vspace{-0.45cm} \\\n\sidecap{ Papadakis et al.~\cite{Papadakis_ip11} } & \n\myimg{star/fleur_papadakis} &\n\myimg{star/wheat_papadakis}  &\n\myimg{star/parrot_papadakis} \vspace{-0.45cm}\\\n\sidecap{ Proposed method } & \n\myimg{symmetric/symmetricfleur_l00009_KX11_KY11_nn4.eps} &\n\myimg{symmetric/symmetricwheat_l00005_KX13_KY13_nn4.eps} &\n\myimg{symmetric/symmetricparrot_l0001_KX1_KY1_nn4.eps} \\\n & (a) & (b) &(c)\vspace{-0.25cm}\n\end{tabular}\n\caption{Comparison between the results obtained with our method and with the methods of~\cite{Pitie07}, \cite{Rabin_ip11}, and~\cite{Papadakis_ip11} for image color transfer. 
Note how the proposed method is able to generate results without color artifacts: for example, in \textbf{(a)} the violet color of the flower is not spread outside the flower, in \textbf{(b)} the wheat does not become bluish, and in \textbf{(c)} the result does not enhance or colorize differently the flat areas of the background.\vspace{-0.25cm}}\n\label{star}\n\end{figure}\n\n", "meta": {"hexsha": "4a2b8a0aceeecff537f9e06b3d3b7e8422aba8a5", "size": 21020, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/sec-applications.tex", "max_stars_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_stars_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_stars_repo_licenses": ["CECILL-B"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-06-27T03:15:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-19T17:21:04.000Z", "max_issues_repo_path": "paper/sections/sec-applications.tex", "max_issues_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_issues_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_issues_repo_licenses": ["CECILL-B"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/sec-applications.tex", "max_forks_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_forks_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_forks_repo_licenses": ["CECILL-B"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-10-12T17:29:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-04T01:52:32.000Z", "avg_line_length": 72.9861111111, "max_line_length": 1607, "alphanum_fraction": 0.7284966698, "num_tokens": 6380, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324848629214, "lm_q2_score": 0.7154240079185318, "lm_q1q2_score": 0.583236891706015}}
{"text": "\\section{Proof of ELBO lower bound}\n\\label{sec:nvmp-proof}\n\n\\begin{proof}\n(This proof is very similar to that in \\citet{Johnson2016}.)\nIn each step of mean-field variational inference,\nwe maximize the ELBO with respect to a single variational factor.\nConsider a latent variable $\\z_\\n$.\nChoosing a form for $\\q(\\z_\\n)$\nthat does not capture the optimal $\\q^*(\\z_\\n)$\nwill result\nin a lower maximum than the maximizing\nover all possible $\\q(\\z_\\n)$ distributions.\nWe express this as\n\\begin{equation}\n    \\max_{\\q(\\z_\\n)}\\L[\\q] \\ge\\q  \\L(\\eta_{\\z_\\n}^*)\n\\end{equation}\nwhere \n\\[\n\\eta_{\\z_\\n}^* \\triangleq \\argmax_{\\eta_{\\z_\\n}} \\L[\\q]\n\\]\nis the optimal parameter of some chosen form of $\\q(\\z_\\n)$.\n\nIn NVMP, we further modify this local optimization\nby modifying the ELBO to include recognition networks\nusing the trick from \\cite{Johnson2016}.\nThis results in a surrogate objective $\\hat{\\L}_\\recparam[\\q]$,\nand maximizing this objective can do no better than maximizing the\nparameterized objective $\\L[\\q]$.\n\\begin{equation}\n    \\max_{\\q(\\z_\\n)}\\L[\\q] \\ge\\q  \\L(\\eta_{\\z_\\n}^*) \\ge\\q\n    \\L(\\hat{\\eta}_{\\z_\\n}^*)\n\\end{equation}\nwhere \n\\[\n\\hat{\\eta}_{\\z_\\n}^* \\triangleq \\argmax_{\\eta_{\\z_\\n}} \\hat{\\L}_\\recparam[\\q]\n\\]\n\nAs each local optimization lower-bounds the ELBO,\nthe NVMP optimization holistically \nlower-bounds the ELBO.\n\nThis bound is tight in that\nif there exists a set of recognition\nnetwork weights $\\recparam$ such that\n\\begin{equation}\n\\log \\p(c \\given \\z_\\n, \\parents_c \\not{\\z_\\n}) = \\psi(c, \\z_\\n, \\parents_{c} \\not{\\z_\\n}; \\recparam) \\\\\n\\end{equation}\nwe have\n\\begin{equation}\n    \\max_{\\q(\\z_\\n)}\\L[\\q] \\geq  \\L(\\eta_{\\z_\\n}^*) = \n    \\L(\\hat{\\eta}_{\\z_\\n}^*)\n\\end{equation}\n\\end{proof}\n\n\\section{Extension: stochastic variational inference}\n\\label{sec:nvmpsvi}\nSVI \\citep{Hoffman2013} treats local and global variational factors differently,\nfirst performing VMP to optimize local\nfactors to completion. Note that these factors are only\noptimized over mini-batches of data.\nGlobal variational factors are then updated\nwith natural gradients $\\tilde{\\nabla}_{\\globals}\\L$ of the ELBO, \nwhich take a simple form in conjugate exponential models. Specifically,\nfor the natural parameter for the global variables\n$\\eta_{\\globals}^\\q$,\nthe natural gradient update is\n\\begin{equation}\n    \\tilde{\\nabla}_{\\eta^\\q_{\\globals}} \\L =  f_{\\globals}\\left(\\{\\vmpmessage{p}{\\globals}\\}_{p \\in \\pi_{\\globals}}\\right) + B \\left(\\sum_{c \\in \\children_{\\globals}}\\vmpmessage{c}{\\globals}\\right) - \\eta^\\q_{\\globals}\n\\end{equation}\nwhere $B$ is the number of minibatches in the dataset.\n\nImportantly,\nthis natural gradient update natural gradient can be computed from VMP messages.\nThus, integrating SVI into NVMP requires no extra steps, \nas NVMP simply modifies the messages.\nNVMP can thus be treated as the inner-loop\nin stochastic VMP, where natural gradients are computed\nfrom NVMP messages when necessary. \\footnote{This results in natural gradients computed over the NVMP surrogate objective.}\n\n\\section{Extension: amortized inference}\n\\label{sec:amortizeddetail}\nNVMP produces approximates previously messages. 
\nThus, the role\nof the recognition network is to assist in inference,\nrather than directly amortizing inference.\nThis distinction is analogous to the difference\nbetween the VAE and SVAE. In the VAE,\nthe recognition network directly outputs\nthe model parameters of the latent variables,\nwhereas in SVAE, the recognition network\noutputs a message. \nThis difference plays a part in practice, \nwhen we may desire a network that directly acts as a variational posterior,\navoiding the use of message passing in answering an inference query.\n\nNVMP can be modified to include\nrecognition networks that directly output parameters.\nPreviously, where the optimal natural parameter for a factor $\\q(\\z_\\n)$\nwas computed as a combination of messages\n\\begin{equation}\n   \\eta^\\q_{\\z_\\n} = f_{\\z_\\n}\\left(\\{\\vmpmessage{p}{\\z_\\n}\\}_{p \\in \\parents_{\\z_\\n}}\\right) + \\sum_{c \\in \\children_{\\z_\\n}} \\vmpmessage{c}{\\z_\\n}\n\\end{equation}\nwe replace the entire natural parameter with the output of a neural network\nthat takes children and coparent samples as input\n\\begin{equation}\n   \\eta^{\\text{amort.}}_{\\z_\\n} = r^{\\z_\\n}_\\recparam(\\left\\{c'\\sim \\q(c)\\right\\}_{c \\in \\children_{\\z_\\n}}, \\{v' \\sim \\q(v)\\}_{v \\in \\parents_c\\not{\\z_\\n}})\n\\end{equation}\nThis produces a VAE-style algorithm, where the recognition networks\ndirectly produce parameters and thus have utility outside of message passing.\nHowever, we do lose the coordinate-ascent style of the algorithm, as\n$\\eta^{\\text{amort.}}_{\\z_\\n}$ is not the optimal parameter\nof a surrogate objective, but the natural parameter of a variational\ndistribution parameterized by $\\recparam$.\n", "meta": {"hexsha": "cbd6c0c57f7e6a3c546955bb24ee1a9b3fe7d8c0", "size": 4741, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writeup/content/appendix/nvmp-appendix.tex", "max_stars_repo_name": "sharadmv/thesis", "max_stars_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-30T01:28:54.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-30T01:28:54.000Z", "max_issues_repo_path": "writeup/content/appendix/nvmp-appendix.tex", "max_issues_repo_name": "sharadmv/thesis", "max_issues_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writeup/content/appendix/nvmp-appendix.tex", "max_forks_repo_name": "sharadmv/thesis", "max_forks_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.2260869565, "max_line_length": 218, "alphanum_fraction": 0.7399282852, "num_tokens": 1393, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232480373843, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5832368835479738}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage[utf8]{luainputenc}\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{13 f\u00e9vrier, 2015}\n\\maketitle\n\\section*{weierstrass m-test}\nif $f_{k}:S\\to \\mathbb{R}^n$ and for all $k$ there is $M_k$ with $||f_k||_\\infty\\le M_k$ then if $\\sum\\limits{M_k}$ converges then the series $\\sum\\limits{f_n}$ converges uniformly.\n\nthat is to say, if we can bound $f_n$ by  $M_n$ and the series of $M_n$ converges, then $f_n$ converges.\n\\subsection*{example}\nif $\\sum\\limits_{n=1}^\\infty{|a_n|}<\\infty$ then $\\sum\\limits_{n=1}^\\infty{a_n\\cos(nx)}$ converges uniformly on $\\mathbb{R}$.\n\n$|a_n\\cos(nx)|\\le |a_n|$ for any  $x$. $a_n=M_n$\\dots$\\Box$\n\nnote that $\\sum a_n\\cos(nx)$ is continuous because $a_n\\cos(nx)$ is continuous and $\\sum a_n\\cos(nx)$ converges uniformly.\n\n\\section*{theorem}\nif $f_n(x)$ is continuous and $\\sum\\limits_{n=1}^\\infty{f_n(x)}$ converges uniformly to $F(x)$ then $F(x)$ is continuous (the sums are continuous if $f_n$ is continuous)\n\n\\subsection*{example}\n$\\sum\\limits_{n=1}^\\infty{x^ne^{-nx}}$ converges uniformly  on $[0,A]$ for any $A>0$.\n\n$|x^ne^{-nx}|\\le |x^n|\\le A^n$. $f'(x)=(nx^{n-1}e^{-nx})-nx^ne^{-nx}=ne^{-nx}x^{n-1}(1-x)=0$ so critical points are $0,1,A$.\n\n$f''$ shows us that $1$ is concave down so it is a max. and so $x^ne^{-nx}\\le e^{-n}$ on $[0,\\infty)$.\n\nso $\\sum\\limits{e^{-n}}=\\sum\\limits{\\left(\\frac{1}{e}\\right)^n}$ which is geometric and so it converges to $\\left(\\frac{1}{1-\\frac{1}{e}}\\right)-1$ by m-test we converge uniformly on $[0,\\infty)$\n\\end{document}\n", "meta": {"hexsha": "69bcb67e7cfd0950da55c30122b21d11a3840b0d", "size": 1685, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ra2/ra2-notes-2015-02-13.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ra2/ra2-notes-2015-02-13.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ra2/ra2-notes-2015-02-13.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2051282051, "max_line_length": 195, "alphanum_fraction": 0.6700296736, "num_tokens": 637, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8152324871074608, "lm_q1q2_score": 0.5832368834189184}}
{"text": "\\documentclass[11pt]{article}\n\n% Thanks to Evan Chen (http://www.mit.edu/~evanchen/index.html)\n% for the template and helpful resources.\n\n% Include math\n\\usepackage{amsmath,amsthm,amssymb,xspace}\n% Include links\n\\usepackage{hyperref}\n\n\n%%%%%%%%%%%%%  THEOREMS  %%%%%%%%%%%%%%%%%\n\\theoremstyle{plain} % other options: definition, remark\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}[theorem]{Lemma}\n\n\\theoremstyle{definition}\n\\newtheorem*{definition}{Definition} % the star prevents numbering\n\n% Remarks\n\\theoremstyle{remark}\n\\newtheorem{remark}{Remark}\n\n%%%%%%%%%%%%%%  PAGE SETUP %%%%%%%%%%%%%%%%%\n\\usepackage[margin=1.5in]{geometry}\n\n% The following sets up some headers\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{An eggs-ample of a linear programming problem} % Left Header\n\\rhead{\\thepage} % Right Header\n\\cfoot{} % Center Foot (empty)\n\n%%%%%%%%%%%%% SHORTCUTS %%%%%%%%%%%%%%%%%%%%\n\n%latex symbol needs a forced space afterwords for God knows what reason\n\\newcommand{\\latex}{\\LaTeX\\xspace}\n\n\\begin{document}\n\n\\title{Visualizing The Four Fundamental Subspaces in a Linear Programming Problem}\n\\author{Brooks Mershon}\n\\date{\\today}\n\\maketitle\nPlease refer to Figure 1.6 on page 25 of \\textit{BT} throughout this exercise. Pages 21-25 cover all of section 1.4, which provides some useful techniques for visualizing standard form  problems. Consider reading section 1.4 before continuing. I found section 1.4 helpful, but wanted to further explore the connection between a particular example contained within to some of the linear algebra prerequisites assumed for the course. In particular, I hope this exercise will help tie together the concepts of solving a system of linear equations with the Four Fundamental Subspaces in the context of a linear programming problem.\n\n\\bigskip\n\n\\section{Constraints in $\\mathbb{R}^3$}\nOn page 25 of Bertsimas and Tsitsiklis (BT), we are instructed to visualize the feasible set in $\\mathbb{R}^3$ defined by the constraints:\n$$x_1 + x_2 + x_3 = 1 $$\nand\n$$x_1, x_2, x_3 \\ge 0 $$\n\nTry graphing the feasible set defined by this constraint in the empty box provided. You should get a view similar to Figure 1.6a from the textbook. The equation $x_1 + x_2 + x_3 = 1 $ defines a hyperplane, which is in this case a plane in $\\mathbb{R}^3$.\n\n\\begin{center}\\framebox(200, 200){}\\end{center}\n\nIgnoring for a moment the constraint that $x_1$, $x_2$, and $x_3$ are all positive, we can represent the information contained in the hyperplane we visualized with the $3\\times 1$ matrix A as follows:\n\n\\[\n\\mathbf{A}=\n  \\begin{bmatrix}\n    1 & 1 & 1\n  \\end{bmatrix}\n\\]\n\n\\[\n\\vec{x}=\n  \\begin{bmatrix}\n    x_1 \\\\\n    x_2 \\\\\n    x_3\n  \\end{bmatrix}\n\\]\n\n\\[\n\\vec{b}=\n  \\begin{bmatrix}\n    1\n  \\end{bmatrix}\n\\]\n\nIt is common for us to specify constraints in linear programming problems with the expression $\\mathbf{A}\\vec{x}=\\vec{b}$. Here, $\\vec{x}$ is a decision variable, usually chosen such that $\\mathbf{A}\\vec{x}=\\vec{b}$ holds while some objective function of $\\vec{x}$ is minimized. The graph you have just drawn shows a set in 3-dimensional space where the information contained in $\\mathbf{A}$ and $\\vec{b}$ has defined a hyperplane. Tasked with finding an assignment of  $x_1$, $x_2$, and $x_3$ for which $\\mathbf{A}\\vec{x}=\\vec{b}$ holds, you only need to produce a vector $\\vec{x}$ whose tip lies somewhere on the hyperplane (the feasible set). 
This feasible set is a triangle in $\mathbb{R}^3$ if $x_1, x_2, x_3 \ge 0 $. \n\n\bigskip\n\nGo ahead and add a vector $\vec{x}$ which stretches from the origin to a point somewhere in the feasible region of your drawing (the triangle in $\mathbb{R}^3$).\n\n\newpage\n\n\section{Resource allocation}\nIt's often easier to think about abstract ideas by playing with concrete examples. The \textit{hyperplane in three dimensions} example we just encountered can be made even more concrete by relating it to a ``real-world'' problem. Let us imagine a linear programming problem (LPP) that would be served well by the $\mathbf{A}\vec{x}=\vec{b}$ model of constraints that we just examined. This example will let us connect several linear algebra concepts to our new desire to solve all things that look like $\mathbf{A}\vec{x}=\vec{b}$.\n\n\bigskip\n\nSuppose you run a Bed \& Breakfast in Vermont that specializes in local food. You keep chickens around so you always have lots of fresh eggs on hand, and you use these eggs to prepare, among other things, three fantastic breakfast dishes, each of which uses just one egg. You never know how many of these dishes you will serve on a particular day. Some days one of the three dishes (call them dish A, B, and C) is very popular while the others are rarely ordered. On other days guests have you cooking up all three dishes in similar quantities. You do, however, only have a set number of eggs on hand; perhaps 40 eggs on a good day and only 20 or so on a bad day. On a bad day on which you have only 24 fresh eggs, you will be able to serve 9 guests who order dish A, 4 guests who order dish B, and 11 guests who order dish C, but you will not be able to serve 30 total orders for an egg dish. On that day, you would have to recommend other choices to those 6 who were late to come to breakfast!\n\nIf we wanted to visualize a vector space that told us something about the combinations of orders we could satisfy with $b$ eggs on hand, we could imagine a $1 \times 3$ matrix $\mathbf{A}$ that looked like this:\n\n\[\n\mathbf{A}=\n  \begin{bmatrix}\n    1 & 1 & 1\n  \end{bmatrix}\n\]\n\nAnd a vector $\vec{x}$ specifying how many of each dish we serve:\n\n\n\n\[\n\vec{x}=\n  \begin{bmatrix}\n    \text{\# of dish A}\\\n   \text{\# of dish B} \\\n   \text{\# of dish C}\n  \end{bmatrix}\n\]\n\n$\mathbf{A}\vec{x}$ gives us the total number of eggs used in satisfying the orders vector $\vec{x}$. This is a scalar for all practical purposes, but we should still consider $\vec{b}$ as a column vector of length 1:\n\n\n\[\n\vec{b}=\n  \begin{bmatrix}\n   \text{\# of eggs on hand}\n  \end{bmatrix}\n\]\n\n\nGo back to your graph and annotate the three axes, where $x_1$ is now dish A, $x_2$ is now dish B, and $x_3$ is now dish C. You have just created \textit{dish space}. Any vector $\vec{x}$ in \textit{dish space}, where $x_1, x_2, x_3 \ge 0 $, corresponds to some particular day's distribution of orders. For example, 10 of dish A, 3 of dish B, and no orders for dish C. The graph you have now annotated shows the combinations of orders you could satisfy if you had ONLY ONE EGG TO USE. If you relax the requirement that you cook whole dishes, rather than fractions of a dish, then you can pick a vector $\vec{x}$ anywhere in the feasible set you have drawn.\n\n\n\section{Column space in $\mathbb{R}$} \n\nThe column space of a matrix $\mathbf{A}$ is the span of the column vectors of $\mathbf{A}$. 
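To make the definition concrete with our particular matrix: the columns of $\mathbf{A} = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$ are the three $1 \times 1$ vectors $[1]$, $[1]$, and $[1]$, so every combination\n\[\nc_1 [1] + c_2 [1] + c_3 [1] = [c_1 + c_2 + c_3]\n\]\nsweeps out all of $\mathbb{R}$ as the coefficients $c_i$ vary. 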
In this example, the three column vectors are not linearly independent. Intuitively, this is because the three dishes we make all use eggs. Viewed as a resource allocation problem, we have 3 products and one material. Each product (dish) uses one material (eggs). Our specific problem indicates that each product uses the same amount of material, but this could easily be changed so that dish A took 3 eggs, dish B took 2 eggs, and dish C required only 1 egg. So the column space in this case is really a one-dimensional ``number line.'' The vector $\\vec{b}$ lies in \\textit{egg space}, or \\textit{material space}. Given a vector $\\vec{x}$ in $\\mathbb{R}^3$, $\\mathbf{A}\\vec{x}$ gives us the number of eggs used, which is actually a vector in \\textit{egg space}. The \\textit{egg space} we have now visualized is in fact the \\textit{column space} of the matrix $\\mathbf{A}$. And this column space is one-dimensional.\n\nGo ahead and draw a number line in the box below. This is \\textit{egg space}. This is the \\textit{column space} of matrix A. The matrix $\\mathbf{A}$ maps vectors from $\\mathbb{R}^3$ to $\\mathbb{R}$, or from \\textit{dish space} to \\textit{egg space}.\n\n\\begin{center}\\framebox(375, 100){}\\end{center}\n\n\\section{Null space in $\\mathbb{R}^3$}\n\nThe null space of a matrix $\\mathbf{A}$ is the set of vectors $\\vec{x}$ that satisfy $\\mathbf{A}\\vec{x}=\\vec{0}$. If we restrict ourselves to nonnegative numbers of dishes, then the only way to use zero eggs is to make none of each dish. Without that restriction, the null space is a plane through the origin in \\textit{dish space}: the solutions to $x_1 + x_2 + x_3 = 0$. This plane can be shifted away from the origin in the direction of a normal to our plane by increasing the amount of eggs we have to use. This corresponds to picking a vector $\\vec{b}$ on our number line in \\textit{egg space} further to the right. The plane can be tilted if we change the entries of $\\mathbf{A}$. Draw a ``tilted'' version of the feasible region by specifying a different matrix $\\mathbf{A}$ and corresponding graph in the box below.\n\n\\begin{center}\\framebox(375, 200){}\\end{center}\n\n\\bigskip\n\nNotice that a normal to our feasible set is orthogonal to our null space...\n\n\\section{Row space in $\\mathbb{R}^3$}\n\nThe row space of a matrix $\\mathbf{A}$ is the span of the row vectors. In this case, we see that this is a line specified by scalar multiples of the normal vector to our hyperplane. The row space is the orthogonal complement to the null space, and it is spanned by a normal vector to our feasible set! Any vector with positive components in $\\mathbb{R}^3$ (\\textit{dish space}) can be thought of as some vector in the null space plus some vector in the row space of $\\mathbf{A}$. Remember, the column space lives elsewhere, on the number line you have drawn, but the row space lives in the same space where you decide how many of each dish to make (the vector $\\vec{x}$).\n\n\\bigskip\n\nSo here's the really neat part: you can think of the row space as providing a direction to move if you want to require more or less eggs \\textit{in total}. If you want to reallocate your numbers of each type of dish so as to use the same number of eggs, then you should move around in the feasible set. How do you move around in the feasible set? Moving strictly along the row space keeps the number of each dish in proportion; you simply end up requiring more or less total eggs. 
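\n\\bigskip\n\nTo make this concrete, here is a small worked example (the numbers are our own illustration, not from \\textit{BT}). Suppose $b = 10$ eggs and $\\vec{x} = (2, 3, 5)^T$, which is feasible because $\\mathbf{A}\\vec{x} = 10$. Projecting onto the row space direction $(1, 1, 1)^T$ splits $\\vec{x}$ into a row space part and a null space part:\n\n\\[\n\\vec{x} =\n  \\frac{10}{3}\\begin{bmatrix}\n    1 \\\\\n    1 \\\\\n    1\n  \\end{bmatrix}\n  +\n  \\begin{bmatrix}\n    -4/3 \\\\\n    -1/3 \\\\\n    5/3\n  \\end{bmatrix},\n\\]\n\nwhere the first piece lies in the row space and the second piece sums to zero, so $\\mathbf{A}$ applied to it uses $0$ eggs.\n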
If you take a vector $\\vec{x}$ that already satisfies $\\mathbf{A}\\vec{x}=\\vec{b}$, and add to that vector some vector in the null space, you get another feasible combination of dishes. You are moving from one feasible combination of orders in \\textit{dish space}, given a set number of eggs on hand, to another feasible combination of orders in \\textit{dish space}.\n\n\n\\section{Left null space in $\\mathbb{R}$}\n\nThe left null space is the orthogonal complement to the column space. This subspace lives in $\\mathbb{R}^m$ with $m=1$; since $\\mathbf{A}$ has rank 1, the dimension of this subspace is $m - \\text{rank}(\\mathbf{A}) = 0$.\n\n\n\\section{Where's the optimization?}\n\nLastly, the linear programming formulation of the bed \\& breakfast problem requires some objective function to minimize (maximize). With just the feasible set that you have drawn, you might just stop at imagining the combinations of orders you will be able to satisfy as the guests show up and request various dishes, given the number of fresh eggs $b$ that you have for the day. However, let us say each dish has a different cost, and the line chef gets bored if he doesn't get to experience some variety in the orders for the three dishes. With a price vector $\\vec{p}$ and an interest vector $\\vec{q}$, we can aim to maximize the restaurant's success together with the chef's fulfillment. Let $p_i$ represent the price of dish $i$, and let $q_i$ represent the chef's interest, normalized in a unit comparable to dollars, in cooking dish $i$. We wish to maximize:\n\n$$\\sum_{i=1}^{n} (p_i + q_i)x_i$$ \n\n\nLook back at the graphs you have drawn and annotated. If things got messy, draw $\\mathbb{R}^3$ and $\\mathbb{R}$ again in the boxes below. Try drawing and labeling the basis vectors corresponding to the column space (egg space) and row space.\n\n\\begin{center}\\framebox(200, 200){}\\end{center}\n\n\\begin{center}\\framebox(375, 100){}\\end{center}\n\n\nI hope this helps. Writing this sure helped me better understand the connection between subspaces and solving constraints of the form $\\mathbf{A}\\vec{x}=\\vec{b}$. The big box below is a good space to write down a diagram representing the Four Fundamental Subspaces. Gilbert Strang's (MIT) essay on the Four Fundamental Subspaces is a useful resource. 
\n\\bigskip\n\n\\url{http://web.mit.edu/18.06/www/Essays/newpaper_ver3.pdf}\n\n\\begin{center}\\framebox(375, 200){}\\end{center}\n\nFor a refresher on the interaction between row space and null space, consider watching the following series of videos from Khan Academy:\n\n\\bigskip\n\n\\url{https://www.khanacademy.org/math/linear-algebra/alternate_bases}\n\n\\bigskip\n\nAreas for further exploration:\n\n\\begin{itemize}\n  \\item The left null space and quotient spaces and their relation to LPPs.\n  \\item Other ``real-world'' examples of LPPs visualized as linear transformations.\n  \\item Find interactive resources to put textbook theory into practice.\n    \n\\end{itemize}\n\n\\end{document}", "meta": {"hexsha": "48b61d85685cc0b5394dc6cdef2b1ddafb79112b", "size": 13023, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "egg-resource-example-09012015.tex", "max_stars_repo_name": "bmershon/four-fundamental-subspaces", "max_stars_repo_head_hexsha": "60d02462566b41196410e92869d2c2edfe25dba8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "egg-resource-example-09012015.tex", "max_issues_repo_name": "bmershon/four-fundamental-subspaces", "max_issues_repo_head_hexsha": "60d02462566b41196410e92869d2c2edfe25dba8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "egg-resource-example-09012015.tex", "max_forks_repo_name": "bmershon/four-fundamental-subspaces", "max_forks_repo_head_hexsha": "60d02462566b41196410e92869d2c2edfe25dba8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.5268292683, "max_line_length": 1008, "alphanum_fraction": 0.7414574215, "num_tokens": 3443, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8152324826183822, "lm_q1q2_score": 0.5832368802073239}}
{"text": "\\problemname{Identifying Map Tiles}\n\nMap websites such as Bing Maps and Google Maps often store their maps as many different image files, called tiles. The lowest zoom level (level $0$) consists of a single tile with a low-detail image of the whole map, zoom level $1$ consists of four tiles each containing a slightly more detailed version of a quarter of the map, and in general zoom level $n$ contains $4^n$ different tiles that each contain a part of the map.\n\nOne way of identifying a tile is by means of a \\emph{quadkey}. A quadkey is a string of digits uniquely identifying a tile at a certain zoom level. The first digit specifies in which of the four  quadrants of the whole map the tile lies: \\texttt{0} for the top-left quadrant, \\texttt{1} for the top-right quadrant, \\texttt{2} for the bottom-left quadrant and \\texttt{3} for the bottom-right quadrant. The subsequent digits specify in which sub quadrant of the current quadrant the tile is. The quadkeys for zoom levels $1$ to $3$ are shown in Figure~\\ref{fig:maps}(a).\n\n\\begin{figure}[h]\n\t\\centering\n\t\\subfigure[Quadkeys for zoom levels $1$ to $3$]{ \\label{fig:quadkey}\n\\ifplastex\n\t\t\\includegraphics[width=0.85\\textwidth]{maptiles.jpg}\n\\else\n\t\t\\includegraphics[width=0.6\\textwidth]{maptiles.jpg}\n\\fi\n\t}\n\\ifplastex\\else\t\\quad \\fi\n\t\\subfigure[Coordinates for zoom level 3]{ \\label{fig:coords}\n\t\t% Ugly, use specific width to make the height equal, but problem\n\t\t% is that there's text in the left image so we can't easily use height\n\\ifplastex\n\t\t\\includegraphics[width=0.90\\textwidth]{maptiles2.jpg}\n\\else\n\t\t\\includegraphics[width=0.338\\textwidth]{maptiles2.jpg}\n\\fi\n\t}\n\t\\caption{Visualisation of the two representations. The images are taken from the \\href{https://msdn.microsoft.com/en-us/library/bb259689.aspx}{MSDN}.}\n    \\label{fig:maps}\n\\end{figure}\n\nAnother way of identifying a tile is to give the zoom level and $x$ and $y$ coordinates, where $(0,0)$ is the left-top corner. The coordinates for the tiles of zoom level 3 are shown in Figure~\\ref{fig:maps}(b). 
Given the quadkey of a tile, output the zoom level and $x$ and $y$ coordinates of that tile.\n\n\\section*{Input}\n\nThe input consists of:\n\\begin{itemize}\n   \\item one line with a string $s$ ($1\\leq \\text{length}(s) \\leq 30$), the quadkey of the map tile.\n\\end{itemize}\nThe string $s$ consists of only the digits `\\texttt{0}', `\\texttt{1}', `\\texttt{2}' and `\\texttt{3}'.\n\n\\section*{Output}\n\nOutput three integers, the zoom level and the $x$ and $y$ coordinates of the tile.\n", "meta": {"hexsha": "69344a9946765c3e81cb62aba06711a2de544316", "size": 2503, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/identifyingmaptiles/problem_statement/problem.en.tex", "max_stars_repo_name": "stoman/CompetitiveProgramming", "max_stars_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-22T13:21:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-12T22:26:26.000Z", "max_issues_repo_path": "problems/identifyingmaptiles/problem_statement/problem.en.tex", "max_issues_repo_name": "stoman/CompetitiveProgramming", "max_issues_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/identifyingmaptiles/problem_statement/problem.en.tex", "max_forks_repo_name": "stoman/CompetitiveProgramming", "max_forks_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.2093023256, "max_line_length": 568, "alphanum_fraction": 0.7447063524, "num_tokens": 723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8080672204860316, "lm_q1q2_score": 0.5832170266175765}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amsfonts}\n\\usepackage{graphicx}\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{enumitem}\n\\usepackage{listings}\n\\usepackage{soul}\n\n\\title{Depth First Learning Problem Set 2: Random matrix theory basics}\n\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Problem 1: Avoided crossings in random matrix spectra}\n\nIn the first DFL session's intro to RMT, we mentioned that eigenvalues of random matrices tend to \\emph{repel} each other.  In fact, one of the recommended textbooks on RMT says that \n\\begin{quote}\n    This interplay between confinement and repulsion is the physical mechanism at the heart of many results in RMT.\n\\end{quote}\n\nThis problem explores that statement, relating it to a concept which comes up often in physics: the avoided crossing.  \n\n\\begin{enumerate}[label=(\\alph*)]\n\\item The simplest example of an avoided crossing is in a two-by-two matrix.  Let's take the matrix \n \\begin{center}\n\\begin{pmatrix}\n\\Delta & J \\\\\nJ & -\\Delta\n\\end{pmatrix}\n\\end{center}\n\nSince this matrix is symmetric, its eigenvalues will be real.  \\textbf{What are its eigenvalues?}\n\nTo see the avoided crossing here, \\textbf{plot the eigenvalues as a function of $\\Delta$, first for $J=0$, then for a few non-zero values of $J$.}\n\nYou should see a gap open up as $J$ grows beyond zero.  \\textbf{What is the size of this gap?}\n\n\\item Now take a matrix of the form \n \\begin{center}\n\\begin{pmatrix}\nA & C \\\\\nC & D\n\\end{pmatrix}\n\\end{center}\n\nIn terms of $A$, $B$, $C$, and $D$, \\textbf{what is the absolute value of the difference between the two eigenvalues of this matrix?}\n\n\\item Now let's make the matrix a random matrix.  We will take $A$, $C$, and $D$ to be random variables, where the diagonal entries $A$ and $D$ are distributed according to a normal distribution with mean zero and variance one, while the non-diagonal entry $C$ is also a zero-mean Gaussian but with a variance of $1/2$.  \n\n\\textbf{Use the formula you derived in the previous part of the question to calculate the probability distribution function for the spacing between the two eigenvalues of the matrix.}\n\n\\textbf{What is the behavior of this pdf at zero?  How does this relate to the avoided crossing you calculated earlier?}\n\n\\item \\textbf{Verify using numerical simulation that the pdf you found in the previous part is correct.}\n\n\\end{enumerate}\n\n\n\\section*{Problem 2: Properties of the Gaussian Orthogonal Ensemble}\n\nOne of the simplest and most-studied random matrices is the so-called Gaussian Orthogonal Ensemble, or GOE.  In this problem, we will discover some properties of the GOE.  \n\\begin{enumerate}[label=(\\alph*)]\n\\item To sample from the Gaussian orthogonal ensemble, one way is to generate an $N$ by $N$ random matrix where all of the entries are independent, identically-distributed Gaussians with zero mean and unit variance.  The eigenvalues of this matrix will in general be complex, so we symmetrize the matrix by adding it to its transpose and dividing by two.  \n\n\\textbf{From the above information, what is the joint probability density function of the entries of a GOE matrix?}\n\n\\item One of the celebrated properties of the GOE is its rotational invariance. 
Rotational invariance here means that any unit vector in $N$ dimensions is equally likely to be an eigenvector of a GOE matrix: no direction is preferred.  The rest of the problem is dedicated to showing that the GOE is in fact rotationally invariant.  \n\nFirst, imagine a diagonalizable matrix $M$, which has some (real) eigenvalues and eigenvectors.  \\textbf{If I want to make a matrix which is the same as $M$ except that every eigenvector is rotated by a rotation matrix $O$, what is the resulting matrix?}\n\n\\item The trace of a matrix is defined as the sum of its diagonal entries.  \\textbf{Show that a matrix $M$, and its rotated version, have the same trace.}\n\n\\item \\textbf{Write the pdf of entries of a GOE matrix (which you found in part (a) above) in terms of the trace of some matrix, and use this to argue that the distribution is rotationally invariant.}\n\n\\end{enumerate}\n\n\\section*{Problem 3: Analysis with random matrices warm-up}\n\nIn this problem, we'll explore one of the first brushes with actually analyzing random matrices that one will come across in studying the subject - getting some notion of the sizes of various random matrix ensembles, and understanding how the distributions, conditions, and dependencies that define the various random matrix ensembles generate this size. This will also be directly relevant to the family of papers we're interested in, since the size of the random matrices we'll be considering is a prerequisite to understanding both what initial constraints to place on the matrices we study and how to properly normalize them for analysis.\n\nWe'll break this problem down into several concrete steps, building up to the final definition of size we'll use and how to calculate it for the prototypical example of random matrices, which also happens to be the ensemble relevant for us: Wigner matrices.\n\n\\begin{enumerate}\n    \\item What exactly does it mean for a matrix (which is really just a linear transformation on, say, some Euclidean space) to have a magnitude? Let's start with how we measure size for numbers where this already makes sense, and then generalize. The absolute value of a real (or complex) number is typically conceptualized as the ``distance from zero'', but since matrices are linear transformations it makes more sense to treat numbers as one-dimensional linear transformations too, visualized as acting via multiplication to stretch the real line. The absolute value, then, is the stretch factor. Matrices, however, do not have a single stretch factor - after all, if they did, all their eigenvalues would be the same - but instead we can simply use the maximum stretch factor, defining the \\textbf{operator norm} of a matrix $ M $ as $ || M ||_\\text{op} = \\sup_{x \\neq 0} \\frac{|| M x ||_2}{|| x ||_2} $.\n    \n    \\textbf{Let's start by proving that this norm is coordinate-independent. Take a matrix $ M $ and any orthogonal change-of-basis matrix $ P $, and show that $ M $ and $ P M P^{-1} $ have the same operator norm.} (For a general invertible $ P $ this can fail, since $ P $ may distort lengths.)\n    \n    \\item \\textbf{How is the operator norm related to the spectrum of a matrix?} Assume the matrix is symmetric.\n    \n    \\item Now let's take an $ N \\times N $ random matrix $ M_N $ whose entries are i.i.d. with zero mean and unit variance. To make this a Wigner matrix, we'll also stipulate that it's symmetric - so $ (M_N)_{ji} = (M_N)_{ij} $ - and is therefore guaranteed $ N $ (counting multiplicity) real eigenvalues. 
Exactly computing the operator norm of $ M_N $ requires techniques that are out of scope (see the epsilon-net method, or the moment method, for how to do this) but something we can determine using the spectrum is how the operator norm grows with $ N $.\n    \n    \\textbf{To start this off, prove that (taking the limit over even $ p $)}\n    $$ || M_N ||_\\text{op} = \\lim_{p \\to \\infty} \\left( \\text{tr}(M_N^p) \\right)^\\frac{1}{p} $$\n    \n    \\item \\textbf{Taking } $ p = 2 $ \\textbf{above, show that } $ \\frac{1}{N} \\text{tr}(M_N^2) $ \\textbf{is the average squared eigenvalue.} Note that this is a reasonable proxy for the average size of the eigenvalues - without squaring, we would have unwanted cancellations in the average.\n    \n    \\item \\textbf{Show that the average squared eigenvalue of $ M_N $ grows as $ O(N) $.}\n    \n    \\item \\textbf{Finally, show why dividing each entry of $ M_N $ by $ \\sqrt{N} $ is precisely what we need to ensure that the spectrum is on average $ O(1) $ (and thus the operator norm is too, though you don't have to prove this).}\n    \n\\end{enumerate}\n\n\\section*{Problem 4: Proving the semicircle law}\n\nIn this problem, we'll prove one of the most prominent results in random matrix theory: the semicircle law. Essentially, we'll find that the distribution of eigenvalues of an $ N \\times N $ Wigner matrix converges (in $ N $) to a particular distribution - that of a semicircle. This is striking because the definition of a Wigner matrix imposes a small number of mild constraints on the structure of the random matrices, basically just the bare minimum we need to ensure our random matrices are well-defined enough to actually study their spectra, but beyond requiring the matrix be symmetric and its entries be i.i.d.\\ with bounded moments, we've stipulated nothing about the actual distributions of the entries. We'll use a powerful technique commonly applied in random matrix theory, and which we'll meet again in upcoming lectures: the Stieltjes transform.\n\nLet's define the $ N \\times N $ Wigner matrix $ W_N $ by normalizing the $ M_N $ from the previous problem. This way, we can guarantee that the eigenvalue spectrum doesn't explode without bound as we take $ N \\to \\infty $ and actually forms a valid distribution worth studying. More formally, define\n\n$$ \\begin{aligned} W_N := \\frac{1}{\\sqrt{N}} \\begin{bmatrix} \\zeta_{ij} \\end{bmatrix}_{1 \\leq i, j \\leq N} &\\text{ where for } \\begin{cases} i > j&: \\mathbb{E}(\\zeta_{ij}) = 0 \\text{ (centered) and } \\text{Var}(\\zeta_{ij}) = \\mathbb{E}(| \\zeta_{ij} |^2) = 1 \\\\ i < j&: \\zeta_{ij} = \\zeta_{ji} \\\\ i = j&: \\mathbb{E}(\\zeta_{ii}) = 0, \\text{Var}(\\zeta_{ii}) < \\infty \\end{cases} \\\\ &\\text{ and the } \\zeta_{ij} \\text{ are i.i.d. for } i \\geq j \\end{aligned} $$\n\nTo formally study the eigenvalue distribution of $ W_N $, let's define the \\textit{empirical spectral distribution} $ \\mu_N = \\frac{1}{N} \\sum_{i = 1}^N \\delta_{\\lambda_i} $ where the $ \\lambda_i $ are the $ N $ eigenvalues of $ W_N $ and $ \\delta $ refers to the Dirac delta function. So, we're essentially constructing an artificial ``distribution'' of the finitely many eigenvalues of $ W_N $ by simply placing a point mass at each eigenvalue and then normalizing. This is the distribution we'll show converges to the semicircle distribution.
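\n\nBefore working through the proof, it can help to see the empirical spectral distribution numerically. The following is a minimal sketch of ours (not part of the original problem set), using \\texttt{numpy} and \\texttt{matplotlib}:\n\n\\begin{lstlisting}[language=Python]\n# Sample one normalized Wigner matrix and compare its empirical\n# spectral distribution to the semicircle density sqrt(4 - x^2)/(2 pi).\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN = 2000\nrng = np.random.default_rng(0)\nA = rng.standard_normal((N, N))\nW = (A + A.T) / np.sqrt(2)  # symmetrize; off-diagonal variance stays 1\nW = W / np.sqrt(N)          # the 1/sqrt(N) normalization from Problem 3\neigs = np.linalg.eigvalsh(W)\n\nx = np.linspace(-2, 2, 400)\nplt.hist(eigs, bins=60, density=True, alpha=0.5, label=\"empirical spectrum\")\nplt.plot(x, np.sqrt(4 - x**2) / (2 * np.pi), label=\"semicircle density\")\nplt.legend()\nplt.show()\n\\end{lstlisting}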
\n\n\\begin{enumerate}\n    \\item The \\textit{Stieltjes transform} of $ W_N $ is defined\n        $$ s_{\\mu_N}(z) = \\int_\\mathbb{R} \\frac{\\text{ d} \\mu_N(t)}{t - z} $$\n    \\textbf{How can we rewrite this in terms of the trace of $ (W_N - z I_N)^{-1} $?}\n        \n    \\item The \\textit{semicircle distribution} that we'll show $ \\mu_N $ converges to is defined precisely to have the shape of a semicircle in the upper half plane with radius 2 (because the eigenvalues of $ W_N $ concentrate in the interval $ [-2, 2] $, though this isn't super relevant).\n        $$ \\rho_\\text{sc}(x) = \\frac{1}{2 \\pi} \\sqrt{4 - x^2} $$\n    \\textbf{What is the Stieltjes transform of the semicircle distribution?}\n        \n    \\end{enumerate}\n        \nYou might be wondering why the semicircle, of all shapes, comes up here. This is related to another observation you may have made about the Stieltjes transform of the semicircle from the previous part of this problem, namely that it looks strikingly like something akin to the quadratic formula. Indeed, this is because the semicircle distribution is in fact the solution to a quadratic equation: the \\textit{self-consistency equation}. $ \\rho_\\text{sc} $ is the unique distribution that satisfies\n    $$ s_\\rho = \\frac{1}{-z - s_\\rho} $$\nBecause the self-consistency equation has one unique solution (the proof of uniqueness is out of scope), if we can show that\n    $$ s_{\\mu_N} \\approx \\frac{1}{-z - s_{\\mu_N}} $$\nwith the approximation becoming exact as $ N \\to \\infty $, then we'll be done. The next couple parts of this question will walk you through proving this.\n\n\\begin{enumerate}\n    \\item[3.] The Stieltjes transform of a Wigner matrix is basically the average diagonal entry of the quantity $ (W_N - z I_N)^{-1} $, which is also known as the resolvent. The resolvent is not a diagonal matrix in general, but based on the assumption that its last diagonal entry is typical (it is, but we won't prove this here), we can approximate the Stieltjes transform, which is exactly equal to $ \\frac{1}{N} \\text{tr} \\left( (W_N - z I_N)^{-1} \\right) $, with $(W_N - z I_N)^{-1}_{NN}$.\n    \n    \\textbf{Letting $\\zeta_{ij}$ denote the unnormalized entries in the definition of $ W_N $, use the Schur complement formula}\n    $$ \\text{ for an } n \\times n \\text{ block matrix } M = \\begin{bmatrix} A & B \\\\ C & D \\end{bmatrix}, \\quad (M^{-1})_{nn} = (D - C A^{-1} B)^{-1} $$\n    \\textbf{to show that}\n    $$(W_N - z I_N)^{-1}_{NN} = \\left( \\frac{1}{\\sqrt{N}} \\zeta_{NN} - z - \\frac{1}{N} X^* (W_{N - 1} - z I_{N - 1})^{-1} X \\right)^{-1}$$\n    \\textbf{for a random vector $X$ with i.i.d.\\ entries and the top-left minor $W_{N - 1}$ of $ W_N$.}\n    \\textit{Hint: Decompose $ W_N $ into a block matrix first.}\n    \n    \\item[4.] \\textbf{Finally, show that}\n    $$ \\mathbb{E} \\left( \\frac{1}{N} X^* (W_{N - 1} - z I_{N - 1})^{-1} X \\right) = \\frac{1}{N} \\text{tr} \\left( (W_{N - 1} - z I_{N - 1})^{-1} \\right) = s_{\\mu_{N - 1}}(z) $$\n    \n\\end{enumerate}\n\nThis part suffices to complete the proof, because it shows that at least in expectation, as we take $ N \\to \\infty $, the $ \\frac{1}{\\sqrt{N}} \\zeta_{NN} $ term vanishes since $ \\frac{1}{\\sqrt{N}} \\rightarrow 0 $ and $ \\zeta_{NN}$ is $ O(1) $, and so we are left with\n$$ s_{\\mu_N}(z) \\approx (W_N - z I_N)^{-1}_{NN} \\approx \\left(-z - s_{\\mu_{N - 1}}(z) \\right)^{-1} \\approx \\left(-z - s_{\\mu_N}(z) \\right)^{-1} $$\nfor large $ N $. 
Hence, the empirical spectral distribution of $W_N$ obeys the self-consistency equation as $N \\to \\infty$, and thus converges to the semicircle distribution. Note that while we've shown this only in expectation, it can be easily extended rigorously using the Cauchy interlacing formula, though this is out of scope.\n\n\\end{document}", "meta": {"hexsha": "f8b20985a1342befe19c68e465bbddb746729a39", "size": 13702, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/sigmoid/problem-sets/source/4/pset_RMT.tex", "max_stars_repo_name": "skroon/depthfirstlearning.com", "max_stars_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 192, "max_stars_repo_stars_event_min_datetime": "2018-06-06T16:01:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T16:56:14.000Z", "max_issues_repo_path": "assets/sigmoid/problem-sets/source/4/pset_RMT.tex", "max_issues_repo_name": "skroon/depthfirstlearning.com", "max_issues_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 39, "max_issues_repo_issues_event_min_datetime": "2018-06-05T02:46:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T08:43:34.000Z", "max_forks_repo_path": "assets/sigmoid/problem-sets/source/4/pset_RMT.tex", "max_forks_repo_name": "skroon/depthfirstlearning.com", "max_forks_repo_head_hexsha": "326c2a5de7497c2383caae937753da3309506e7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 27, "max_forks_repo_forks_event_min_datetime": "2018-06-07T06:08:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-26T09:25:16.000Z", "avg_line_length": 88.4, "max_line_length": 908, "alphanum_fraction": 0.7255145234, "num_tokens": 3759, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.8080672089305841, "lm_q1q2_score": 0.5832170086046247}}
{"text": "% !TEX root = thesis.tex\n\n\\section{Methodology}\\label{sec:style}\n\nAs we've seen in the previous section the log data comprises of error messages of multiple unknown classes. The basic idea is to analyse the data present and cluster the messages in to different classes. One can see from the error logs that a few features of all the messages are unique (eg. timestamps, ErrorID.. etc). These unique features are given lesser priority in the clustering of the data  hence corresponding weights are assigned to these features. The distance between the messages is compared by ignoring these unique features. The trivial logic is to put the messages whose distance is less than a predefined threshold are put into the same class. A dummy illustration of the algorithm is shown below. \n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=7cm]{img/proc.png}\n    \\caption{Illustration of the log data analysis procedure}\n    \\label{fig:galaxy}\n\\end{figure}\n\nThe figure shows the demonstration of the clustering and the classification procedure. Each one of the 3 shapes represents as class and the variables inside each shape represent the unique features in the dataset. The data is clustered based on how similar the shapes are ignoring the unique features of the message. Once the messages are clustered, the principal features are extracted from each class and marked as a feature vector of that class. Now, every new entry in the data point is compared with these principal feature vectors to \n\nThe first step idea is to extract message \"keys\" which\ncorrespond to messages generated by the same lines of code but with different message variables, implemented using regex. They proceed to utilise\nthe similarity in clustering raw log keys by using the edit distance as their\nmetric, which is defined as the number of insertion, deletion or replacement\noperations required to convert one message into another. This distance does\nnot account for the position of the words in the logs, and so they use the\nweighted edit distance which uses sigmoid function to place greater emphasis on words appearing earlier in the message than later based on the\nidea/observation that programmers tend to place important information in\nthis way. The edit distances to all other logs is calculated, and all members\nwhose distance is less than a threshold is connected via a link. \n\nIn this section, we define how we approach the problem\nmathematically, starting with our choice of similarity measure, the Jaccard\nSimilarity, J: in order to identify similar items, we must first identify how\nto define and quantify similarity. We progress to describe shingling, a na\u00efve\nmethod to fully iterate over the parameter space in order to compute J, and\nlearn how it is infeasible to apply to large datasets, such as when hundreds\nof thousands of logs are generated every minute, as can be the case in distributed systems.\n\n\\subsection{Min hash}\\label{sec:_academic_style}\nIn order to identify similar items, a similarity measure is necessary. A common and intuitive means is that of the Jaccard Similarity. This treats two strings as sets of words, and defines the size of their intersection relative to their union as the defining metric of similarity. We therefore see that not only can the relation between two logs be evaluated,but the distances between different logs represent hierarchies of similarity, and these can be mined to reveal underlying clusters. 
Alternative methods include measuring distance as the number of edits required to transform one message into another. The signatures we desire to construct for sets are composed of the results of a large number of calculations, say several hundred, each of which is a ``minhash''. To minhash a set represented by a column of the characteristic matrix, pick\na permutation of the rows. The minhash value of any column is the number of\nthe first row, in the permuted order, in which the column has a 1. The probability that the minhash function for a random permutation of\nrows produces the same value for two sets equals the Jaccard similarity\nof those sets. \\textbf{[Add reference for minhash here]}\n\n\n\\subsection{Min Hash Locality Sensitive Hashing}\\label{sec:_tips_for_writing}\n\nMinHash is an effective means for reducing the dimensionality of variable\nlength alphanumerical data to fixed length numerical output, with which\nJaccard similarity can be probabilistically calculated via random sampling.\nHowever, each log hash must still be compared to every other in order to\nidentify relationships, resulting in complexity $O(n^2)$. There are a range of\nsearch methods for identifying nearest neighbours (examples, references)\nbased on space partitioning, but it has been shown that they all degrade to linear search for sufficiently high dimensions. Locality Sensitive Hashing\nis used to reduce the search space by using hash collisions as a proxy for\nsimilarity.  \\textbf{{[Add reference for LSH here]}}\n\n\\subsection{Shingling}\\label{sec:_shingling}\n", "meta": {"hexsha": "534a4e7df2f47ca001bfe6d84db8bad01e60b4cd", "size": 5000, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2_style.tex", "max_stars_repo_name": "RavicharanN/Log-Data-Analysis-Report", "max_stars_repo_head_hexsha": "c93bd2066a073b083c38cb21f4981b4fdf1a9d6e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2_style.tex", "max_issues_repo_name": "RavicharanN/Log-Data-Analysis-Report", "max_issues_repo_head_hexsha": "c93bd2066a073b083c38cb21f4981b4fdf1a9d6e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2_style.tex", "max_forks_repo_name": "RavicharanN/Log-Data-Analysis-Report", "max_forks_repo_head_hexsha": "c93bd2066a073b083c38cb21f4981b4fdf1a9d6e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.2857142857, "max_line_length": 849, "alphanum_fraction": 0.81, "num_tokens": 1031, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.5832170052685983}}
{"text": "\\graphicspath{ {lecture2/} }\n\\section{Models of Computation}\n\nWhat is an Algorithm ? \n\\begin{enumerate}\n    \\item Mathematical abstraction of computer program\n    \\item Computational procedure to solve a problem\n\\end{enumerate}\n\n\\begin{figure}[H]\n    \\includegraphics[width=\\textwidth]{Figure1}\n    \\centering\n    \\caption{Algorithm}\n    \\label{fig:img1}\n\\end{figure}\n\n\n\\noindent Model of computation specifies\n\\begin{enumerate}\n    \\item what operations an algorithm is allowed\n    \\item cost (time, space, . . .) of each operation\n    \\item cost of algorithm = sum of operation costs\n\\end{enumerate}", "meta": {"hexsha": "2f011ad5f2314a7165587489b2dea6bad6565f8f", "size": 601, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "OpenCourse/MIT6.006-Introduction toAlgorithms/NoteBook/lecture2/lecture2.tex", "max_stars_repo_name": "FontaineGuo/PathOfLearning", "max_stars_repo_head_hexsha": "a5af03af6dcea409375276b53a397f014a4de231", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "OpenCourse/MIT6.006-Introduction toAlgorithms/NoteBook/lecture2/lecture2.tex", "max_issues_repo_name": "FontaineGuo/PathOfLearning", "max_issues_repo_head_hexsha": "a5af03af6dcea409375276b53a397f014a4de231", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "OpenCourse/MIT6.006-Introduction toAlgorithms/NoteBook/lecture2/lecture2.tex", "max_forks_repo_name": "FontaineGuo/PathOfLearning", "max_forks_repo_head_hexsha": "a5af03af6dcea409375276b53a397f014a4de231", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1304347826, "max_line_length": 54, "alphanum_fraction": 0.7271214642, "num_tokens": 155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.78793120560257, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5832064774375153}}
{"text": "%-----------------------------------------------------------------------------------------------\n\\section{Illustration of Carnot's General Proposition}\n\n    Solutions to the integrals of Eqs.~(\\ref{eq:W}) and (\\ref{eq:Q})  are  given  by  Cheng  and\n    Yang~\\cite{2012-ChengCH+YangHS-ApEnergy}, which applied the residual theorem. The  resulting\n    expressions are:\n    %\n    \\begin{align}\n        \\label{eq:Ws}\n        \\bar{W} &= \\frac{2\\pi\\tau\\kappa(1 - \\tau)\\sin\\alpha}{a^2 + b^2}\\cdot\\Lambda,\\\\\n        \\label{eq:Qs}\n        \\bar{Q}_{in} &= \\frac{2\\pi\\tau\\kappa\\sin\\alpha}{a^2 + b^2}\\cdot\\Lambda,\n    \\end{align}\n    %\n    \\noindent with\n    %\n    \\begin{equation}\n        \\label{eq:Lamda}\n        \\Lambda = \\left(\n                \\frac{\\beta - \\sqrt{\\beta^2 - (a^2 + b^2)}}{\\sqrt{\\beta^2 - (a^2 + b^2)}}\n            \\right),\n    \\end{equation}\n    %\n    \\noindent where $a$, $b$, and $\\beta$ are configuration dependent expressions, given as:\n    %\n    \\begin{align}\n        \\label{eq:aa}\n        a^{\\alpha}  &= \\tau\\sin\\alpha,\\\\\n        \\label{eq:ab}\n        a^{\\beta}   &= -(1 - \\tau)\\sin\\alpha,\\\\\n        \\label{eq:ag}\n        a^{\\gamma}  &= -(1 - \\tau)\\sin\\alpha;\\\\\n        \\label{eq:ba}\n        b^{\\alpha}  &= \\kappa + \\tau\\cos\\alpha,\\\\\n        \\label{eq:bb}\n        b^{\\beta}   &= \\kappa - (1 - \\tau)\\cos\\alpha,\\\\\n        \\label{eq:bg}\n        b^{\\gamma}  &= \\kappa - (1 - \\tau)\\cos\\alpha;\\\\\n        \\label{eq:betaa}\n        \\beta^{\\alpha}  &= 4\\tau\\chi/(1 + \\tau) + \\tau + \\kappa,\\\\\n        \\label{eq:betab}\n        \\beta^{\\beta}   &= 4\\tau\\chi/(1 + \\tau) + \\tau \\nonumber\\\\\n                        &+ \\sqrt{1 - 2\\kappa\\cos\\alpha + \\kappa^2},\\\\\n        \\label{eq:betag}\n        \\beta^{\\gamma}  &= 4\\tau\\chi/(1 + \\tau) + 1 + \\tau + \\kappa.\n    \\end{align}\n\n    Although the  cycle  net  work,  $\\bar{W}$,  and  inlet  heat,  $\\bar{Q}_{in}$,  are  indeed\n    \\emph{complicated} functions of the engine's configuration, internal  details,  and  of  the\n    reservoir temperatures, as shown by Eqs.~(\\ref{eq:Ws}) and (\\ref{eq:Qs}), one  can  realize,\n    by mere inspection on these same equations, that the thermal  efficiencies,  $\\eta_t  \\equiv\n    \\bar{W} / \\bar{Q}_{in}$, of all these reversible engines:\n    %\n    \\begin{enumerate}\n        \\item Are indeed \\emph{the same}: $\\eta^{\\alpha}_t = \\eta^{\\beta}_t = \\eta^{\\gamma}_t$;\n        \\item Are \\emph{equal to any reversible engine's} thermal efficiency, $\\eta_{t,rev} =  1\n            - \\tau$, operating between the same temperature reservoirs; and\n        \\item  Are  indeed   a   function   of   \\emph{only}   the   reservoirs'   temperatures:\n            $\\eta_t\\!:\\!\\eta_t(\\tau)$.\n    \\end{enumerate}\n\n    Therefore, the reversible $\\alpha$-, $\\beta$-, and $\\gamma$-types Stirling engine models  of\n    Cheng and Yang~\\cite{2012-ChengCH+YangHS-ApEnergy}, are shown  to  follow  Carnot's  general\n    proposition, according to the second law of thermodynamics.\n\n    The worked out illustration serves thus as an instance of defined  reversible  engines  with\n    different internal details and  working  fluids  of  possibly  different  properties,  whose\n    thermal efficiencies are restricted by Carnot's general proposition, despite the  fact  that\n    their net work and inlet heat are different and complicated functions of the various 
working\n    parameters and engine configuration.\n\n%-----------------------------------------------------------------------------------------------\n\n", "meta": {"hexsha": "cc5bc51f62f09399a664fbfb0b3b9c148e790b19", "size": 3461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "01-03-Section-03.tex", "max_stars_repo_name": "cnaak/man-CarnotPrinciple", "max_stars_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-03-Section-03.tex", "max_issues_repo_name": "cnaak/man-CarnotPrinciple", "max_issues_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-03-Section-03.tex", "max_forks_repo_name": "cnaak/man-CarnotPrinciple", "max_forks_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.7702702703, "max_line_length": 96, "alphanum_fraction": 0.5515746894, "num_tokens": 1072, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.78793120560257, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5832064729215098}}
{"text": "\\section{Interpreter Exercise}\n\n\\begin{lstlisting}[language=Haskell]\ndivProg =\n  CpsCmd [\n    AssiCmd \"q\" (LitAExpr 0),\n    AssiCmd \"r\" (IdAExpr \"m\"),\n    WhileCmd (RelBExpr GreaterEq (IdAExpr \"r\") (IdAExpr \"n\"))\n      (CpsCmd [\n         AssiCmd \"q\" (DyaAExpr Plus (IdAExpr \"q\") (LitAExpr 1)),\n         AssiCmd \"r\" (DyaAExpr Minus (IdAExpr \"r\") (IdAExpr \"n\"))\n      ])\n  ]\n\ntype Value = Int\ntype Ident = String\n\ntype State = Ident -> Value\n\nstate_q5 \"q\" = 5\nstate_q5 _ = 0\n\nreadS :: State -> Ident -> Value\nreadS state ident = state ident\n\nupdateS :: State -> (Ident, Value) -> State\nupdateS s (ident, val) =\n  \\ident' -> if ident' == ident then val\n             else s ident'\n\ndata ArithExpr\n  = LitAExpr Int   -- Literal Arithmetic Expression\n  | IdAExpr Ident\n  | DyaAExpr ArithOperator ArithExpr ArithExpr   -- dyadic, means two\n  deriving Show\n\ndata ArithOperator\n  = Times | Div | Mod | Plus | Minus\n  deriving Show\n\n-- (2 + 3) * 4\nae1, ae2 :: ArithExpr\nae1 = DyaAExpr Plus (LitAExpr 2) (LitAExpr 3)\nae2 = DyaAExpr Times ae1 (LitAExpr 4)\n\n-- (q + 1)\naeq1 = DyaAExpr Plus (IdAExpr \"q\") (LitAExpr 1)\n-- (r - n)\naern = DyaAExpr Minus (IdAExpr \"r\") (IdAExpr \"n\")\n\nevalAExpr :: ArithExpr -> State -> Int   -- Semantic function\nevalAExpr (LitAExpr c) _ = c\nevalAExpr (IdAExpr ident) state = readS state ident\nevalAExpr (DyaAExpr opr expr1 expr2) state =\n    operation val1 val2\n  where operation = evalAOpr opr\n        val1 = evalAExpr expr1 state\n        val2 = evalAExpr expr2 state\n\nevalAOpr :: ArithOperator -> (Int -> Int -> Int)\nevalAOpr Times = (*)\nevalAOpr Div = div\nevalAOpr Mod = mod\nevalAOpr Plus = \\a b -> a + b\nevalAOpr Minus = (-)\n\nevalAOprV2 :: ArithOperator -> (Int -> Int -> Int)\nevalAOprV2 Times a b = a * b\n\n\n\ndata RelOperator\n  = LessEq\n  | GreaterEq\n  deriving Show\n\nevalROpr :: RelOperator -> (Value -> Value -> Bool)\nevalROpr LessEq = (<=)\nevalROpr GreaterEq = (>=)\n\ndata BoolExpr\n  = RelBExpr RelOperator ArithExpr ArithExpr\n  deriving Show\n\nevalBExpr :: BoolExpr -> State -> Bool   -- Semantic function\nevalBExpr (RelBExpr opr expr1 expr2) state =\n    operation val1 val2\n  where operation = evalROpr opr\n        val1 = evalAExpr expr1 state\n        val2 = evalAExpr expr2 state\n\ndata Command\n  = AssiCmd Ident ArithExpr\n  | CpsCmd [Command]\n  | WhileCmd BoolExpr Command\n  deriving Show\n\ninterCmd :: Command -> State -> State\ninterCmd (AssiCmd ident expr) state =\n    updateS state (ident, val)\n  where\n    val = evalAExpr expr state\n\ninterCmd (CpsCmd cmds) state =\n  foldl (flip interCmd) state cmds\n\ninterCmd (WhileCmd guard repetend) state\n  | evalBExpr guard state =\n      interCmd (WhileCmd guard repetend) state'\n  | otherwise = state\n        where state' = interCmd repetend state\n\nstate175 \"m\" = 17\nstate175 \"n\" = 5\nstate175 _ = 0\n\nstateOut = interCmd divProg state175\n\nqres = readS stateOut \"q\"\nrres = readS stateOut \"r\"\n\n--evalAOpr :: ArithOperator -> (Int -> Int -> Int)\n--evalAOpr Times = (*)\n\n--data T = Prelude.False\n\\end{lstlisting}\n\\clearpage", "meta": {"hexsha": "061c034531dcd98e7b6880d880bbc11da7dd365e", "size": 2984, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TSM_AdvPrPa/Excercises/Haskell/09_InterpreterExercise.tex", "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": ["Beerware"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TSM_AdvPrPa/Excercises/Haskell/09_InterpreterExercise.tex", "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_licenses": ["Beerware"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TSM_AdvPrPa/Excercises/Haskell/09_InterpreterExercise.tex", "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": ["Beerware"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "avg_line_length": 23.4960629921, "max_line_length": 69, "alphanum_fraction": 0.6658847185, "num_tokens": 989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.583206470065619}}
{"text": "\\section{Automaton $\\rarr$ Regular Expression}\n\\textbf{BMC} (Brzozowski \\& McCluskey) method.\n\n\\textbf{Assumptions}: unique initial state $i$ without incoming arcs, unique final state $t$ without outgoing arcs.\n\nRemove internal states (i.e. not $i$ nor $t$) compensating with new moves.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.5\\linewidth]{automata/bmc.png}\n\\end{figure}\n\nReplace $q$ with: for each pair $p_i$, $r_j$ a compensating transition $p_i \\xrightarrow{H_iJ^*K_j} r_j$. All the parallel transitions should be merged ASAP in $p \\xrightarrow{e_1|e_2} r$.\n", "meta": {"hexsha": "4aa1424885a299038fc97adb32c524b985ffeb03", "size": 584, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "automata/automaton-to-reg.tex", "max_stars_repo_name": "TiberioG/FLC-cheatsheet", "max_stars_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-01-13T14:36:20.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-18T16:22:18.000Z", "max_issues_repo_path": "automata/automaton-to-reg.tex", "max_issues_repo_name": "TiberioG/FLC-cheatsheet", "max_issues_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "automata/automaton-to-reg.tex", "max_forks_repo_name": "TiberioG/FLC-cheatsheet", "max_forks_repo_head_hexsha": "d86e8ba9c80fece75ffccf47c273334b677f4934", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-21T11:05:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-17T14:59:50.000Z", "avg_line_length": 41.7142857143, "max_line_length": 188, "alphanum_fraction": 0.7380136986, "num_tokens": 183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5832064672097276}}
{"text": "\\section{Moment Realizability for the Fermionic Two-Moment Model}\n\\label{sec:realizability}\n\nOur goal is to simulate massless fermions (e.g., neutrinos) and study their interactions with matter.  \nThe principal objective is to obtain the fermionic distribution function $f$ (or moments of $f$ as in the two-moment model employed here).  \nThe Pauli exclusion principle requires the distribution function to satisfy the condition $0 \\le f \\le 1$, which puts restrictions on the admissible values for the moments of $f$.  \nIn this paper, we seek to design a numerical method for solving the system of moment equations given by Eq.~\\eqref{eq:momentEquations} that preserves realizability of the moments; i.e., the moments evolve within the set of admissible values as dictated by Pauli's exclusion principle.  \n(Since we are only concerned with the angular dependence of $f$ in this section, we simplify the notation by suppressing the $\\vect{z}$ and $t$ dependence and write $f(\\omega,\\vect{z},t)=f(\\omega)$.)  \n\n\\modified{\nWe let\n\\begin{equation}\n  \\mathfrak{R} := \\left\\{\\,f~|~0\\le f \\le 1 ~\\text{and}~0<\\f{1}{4\\pi}\\int_{\\bbS^{2}}f\\,d\\omega<1\\,\\right\\},\n\\end{equation}\nand begin with the following definition of moment realizability.}  \n\\begin{define}\n  \\modified{The moments $\\vect{\\cM}=\\big(\\cJ,\\vect{\\cH}\\big)^{T}$ are realizable if they can be obtained from a distribution function $f(\\omega)\\in\\mathfrak{R}$.}  \n  %satisfying $0 < f(\\omega) < 1~\\forall~\\omega\\in\\bbS^{2}$.  \n  The set of all realizable moments $\\cR$ is\n  \\begin{equation}\n    \\cR:=\\big\\{\\,\\vect{\\cM}=\\big(\\cJ,\\vect{\\cH}\\big)^{T}~|~\\cJ\\in(0,1)~\\text{and}~\\gamma(\\vect{\\cM}) > 0\\,\\big\\},\n    \\label{eq:realizableSet}\n  \\end{equation}\n  where we have defined the concave function $\\gamma(\\vect{\\cM})\\equiv(1-\\cJ)\\cJ-|\\vect{\\cH}|$.  \n  \\label{def:momentRealizability}\n\\end{define}\n\\begin{rem}\n  Following \\cite{lareckiBanach_2011}, in Definition~\\ref{def:momentRealizability}, and in the rest of this paper, we exclude the cases $f=0$ and $f=1$ almost everywhere (a.e.) on $\\bbS^{2}$, which would give $\\cJ=0$, $\\vect{\\cH}=0$ and $\\cJ=1$, $\\vect{\\cH}=0$, respectively.  \n\\end{rem}\n\\modified{\n\\begin{rem}\n  The conditions in Eq.~\\eqref{eq:realizableSet} are necessary and sufficient for the moments to be realizable \\cite[Theorem 7.1]{banachLarecki_2017a}.  (See also \\cite{lareckiBanach_2011,banachLarecki_2013}.)\n\\end{rem}\n}\n\n%The algebraic constraints in Eq.~\\eqref{eq:realizableSet} are proven in \\cite{banachLarecki_2017a}.  \n%\\begin{lemma}\n%  Suppose that $0 < f(\\omega) < 1~\\forall~\\omega\\in\\bbS^{2}$.  \n%  Then the moments $\\vect{\\cM}=(\\cJ,\\vect{\\cH})^{T}$, defined as in Eq.~\\eqref{eq:angularMoments}, satisfy $0 < \\cJ < 1$ and $\\big(1-\\cJ\\big)\\,\\cJ-|\\vect{\\cH}| > 0$. \n%  \\label{lem:MomentRealizable} \n%\\end{lemma}\n\n%\\begin{define}\n%  For the fermion two-moment model, the realizable set is\n%  \\begin{equation}\n%    \\cR:=\\big\\{\\,\\vect{\\cM}=\\big(\\cJ,\\vect{\\cH}\\big)^{T}~|~\\cJ\\in(0,1)~\\text{and}~\\gamma(\\vect{\\cM})\\equiv(1-\\cJ)\\cJ-|\\vect{\\cH}| > 0\\,\\big\\}.\n%    \\label{eq:realizableSet}\n%  \\end{equation}\n%\\end{define}\n\n\\begin{lemma}\n  The realizable set $\\cR$ is convex.  \n\\end{lemma}\n\\begin{proof}\n%  In order to prove that $\\cR$ is convex, it is sufficient to show that any convex combination of elements in $\\cR$ also belongs to $\\cR$.  
\n  Let $\\vect{\\cM}_{a}=\\big(\\cJ_{a},\\vect{\\cH}_{a}\\big)^{T}$ and $\\vect{\\cM}_{b}=\\big(\\cJ_{b},\\vect{\\cH}_{b}\\big)^{T}$ be two arbitrary elements in $\\cR$, and let $\\vect{\\cM}_{c} = \\theta\\,\\vect{\\cM}_{a} + (1-\\theta)\\,\\vect{\\cM}_{b}$, with $0\\leq\\theta\\leq1$.\n  The first component of $\\vect{\\cM}_{c}$ is\n  \\begin{equation*}\n    \\cJ_{c} = \\theta\\,\\cJ_{a} + (1-\\theta)\\,\\cJ_{b}.\n  \\end{equation*}\n  Since $\\cJ_{a},\\cJ_{b} \\in (0,1)$, it follows that $\\cJ_{c} \\in (0,1)$.  \n  Concavity of $\\gamma$ implies that\n%  Since $\\cJ_{a},\\cJ_{b} \\in (0,1)$, it is straightforward to verify that\n  \\begin{equation*}\n  \\gamma(\\vect{\\cM}_{c}) \\geq \\theta\\,\\gamma(\\vect{\\cM}_{a}) + (1-\\theta)\\,\\gamma(\\vect{\\cM}_{b}) > 0.\n  \\end{equation*}\n  Hence, $\\vect{\\cM}_{c}\\in\\cR$.\n\\end{proof}\n\nFigure~\\ref{fig:RealizableSetFermionic} illustrates the geometry of the convex set $\\cR$ in the $(\\cH,\\cJ)$-plane (light blue region).  \nThe boundary $\\partial\\cR$ (black curves) is given by $\\gamma(\\vect{\\cM})=0$.  \nThe realizable domain of positive distribution functions, $\\cR^{+}$ (no upper bound on $f$), which is a convex cone defined by\n\\begin{equation}\n  \\cR^{+}:=\\big\\{\\,\\vect{\\cM}=\\big(\\cJ,\\vect{\\cH}\\big)^{T}~|~\\cJ > 0~\\text{and}~\\cJ > |\\vect{\\cH}|\\,\\big\\}, \n  \\label{eq:realizableSetPositive}\n\\end{equation}\nis partially shown as the light red region above the red lines, which mark the boundary of $\\cR^{+}$ (denoted $\\partial\\cR^{+}$).  \nThe realizable set $\\cR$ is a bounded subset of $\\cR^{+}$.  \n%The realizable domains $\\cR$ and $\\cR^{+}$ overlap for low particle densities ($\\cJ\\ll1$), but for larger values of $\\cJ$, the realizable domain of particles governed by Fermi-Dirac statistics is much more restricted than that of particles described by positive distribution functions with no upper bound.  \n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=1.0\\linewidth]{figures/RealizableSetFermionic}\n  \\caption{Illustration of the realizable set $\\cR$ (light blue region) defined in Eq.~\\eqref{eq:realizableSet}.  \n  The black lines define the boundary $\\partial\\cR$, while the red lines indicate the boundary of the realizable set $\\cR^{+}$ (light red region) defined in Eq.~\\eqref{eq:realizableSetPositive}.}\n  \\label{fig:RealizableSetFermionic}\n\\end{figure}\n\nFor the realizability-preserving scheme developed in Section~\\ref{sec:realizableDGIMEX}, we state some additional results.  \nLemma~\\ref{lem:explicitStep} is used to help prove the realizability-preserving property of explicit steps in the IMEX scheme, while Lemmas~\\ref{lem:implicitStep} and \\ref{lem:correctionStep} are used to prove realizability-preserving properties of implicit steps.  \n\\begin{lemma}\n  Let $\\big\\{\\cJ_{a},\\vect{\\cH}_{a},\\vect{\\cK}_{a}\\big\\}$ and $\\big\\{\\cJ_{b},\\vect{\\cH}_{b},\\vect{\\cK}_{b}\\big\\}$ be moments defined as in Eq.~\\eqref{eq:angularMoments} with distribution functions $f_{a}$ and $f_{b}$, respectively, such that $f_{a}(\\omega),f_{b}(\\omega)\\in(0,1)\\,\\forall\\,\\omega\\in\\bbS^{2}$.  \n  Let $\\Phi^{\\pm}(\\vect{\\cM},\\vect{\\cK})=\\f{1}{2}\\big(\\vect{\\cM}\\pm\\widehat{\\vect{e}}\\cdot\\vect{\\cF}\\big)$, where $\\widehat{\\vect{e}}\\in\\bbR^{3}$ is an arbitrary unit vector, and $\\widehat{\\vect{e}}\\cdot\\vect{\\cF}=\\big(\\widehat{\\vect{e}}\\cdot\\vect{\\cH},\\widehat{\\vect{e}}\\cdot\\vect{\\cK}\\big)^{T}$.  
\n  Then\n  \\begin{equation*}\n    \\vect{\\cM}_{ab} \\equiv \\Phi^{+}(\\vect{\\cM}_{a},\\vect{\\cK}_{a})+\\Phi^{-}(\\vect{\\cM}_{b},\\vect{\\cK}_{b})\\in\\cR.\n  \\end{equation*}\n  \\label{lem:explicitStep}\n\\end{lemma}\n\\begin{proof}\n  The components of $\\vect{\\cM}_{ab}$ are\n  \\begin{equation*}\n    \\cJ_{ab}=\\f{1}{4\\pi}\\int_{\\bbS^{2}}f_{ab}(\\omega)\\,d\\omega\n    \\quad\\text{and}\\quad\n    \\vect{\\cH}_{ab}=\\f{1}{4\\pi}\\int_{\\bbS^{2}}f_{ab}(\\omega)\\,\\vect{\\ell}(\\omega)\\,d\\omega,\n  \\end{equation*}\n  where $f_{ab}(\\omega)=\\vartheta\\,f_{a}(\\omega)+(1-\\vartheta)\\,f_{b}(\\omega)$ and $\\vartheta(\\omega)=(1+\\widehat{\\vect{e}}\\cdot\\vect{\\ell}(\\omega))/2\\in[0,1]$.  \n  Then, since $f_{ab}(\\omega)\\in(0,1)\\,\\forall\\,\\omega\\in\\bbS^{2}$, it follows that $\\vect{\\cM}_{ab}\\in\\cR$.  \n\\end{proof}\n\n\\begin{lemma}\n  Let $\\vect{\\cM}_{a}=(\\cJ_{a},\\vect{\\cH}_{a})^{T}\\in\\cR$ and $\\alpha>0$.  \n  Let $\\vect{\\cM}_{b}=(\\cJ_{b},\\vect{\\cH}_{b})^{T}$ satisfy\n  \\begin{equation}\n    \\vect{\\cM}_{b}=\\vect{\\cM}_{a}+\\alpha\\,\\vect{\\cC}(\\vect{\\cM}_{b}), \n    \\label{eq:implicitStep}\n  \\end{equation}\n  where $\\vect{\\cC}(\\vect{\\cM})=\\vect{\\eta}-\\vect{\\cD}\\,\\vect{\\cM}$ is the collision term in Eq.~\\eqref{eq:collisionTermMoments}.  \n  Then $\\vect{\\cM}_{b}\\in\\cR$.  \n  \\label{lem:implicitStep}\n\\end{lemma}\n\\begin{proof}\n  Solving Eq.~\\eqref{eq:implicitStep} for $\\vect{\\cM}_{b}$ gives $\\vect{\\cM}_{b} = \\big(\\vect{I}+\\alpha\\,\\vect{\\cD}\\big)^{-1}\\big(\\vect{\\cM}_{a}+\\alpha\\,\\vect{\\eta}\\big)$.  \n  The first component of $\\vect{\\cM}_{b}$ can be written as\n  \\begin{equation*}\n    \\cJ_{b}=\\f{1}{4\\pi}\\int_{\\bbS^{2}}f_{b}(\\omega)\\,d\\omega,\n  \\end{equation*}\n  where $f_{b}(\\omega)=\\zeta\\,f_{a}(\\omega)+(1-\\zeta)\\,f_{0}$, $\\zeta=1/(1+\\alpha\\,\\xi)\\in[0,1]$, and $f_{0}$ and $\\xi$ are defined in Eq.~\\eqref{eq:collisionTerm} in Section~\\ref{sec:model}.  \n  Then, since $f_{b}(\\omega)\\in(0,1)\\,\\forall\\,\\omega\\in\\bbS^{2}$, $\\cJ_{b}\\in(0,1)$.  \n  Meanwhile, \n  \\begin{equation*}\n    \\vect{\\cH}_{b}=\\f{(1+\\alpha\\,\\xi)}{(1+\\alpha)}\\,\\widetilde{\\vect{\\cH}}_{b},\n    \\quad\\text{where}\\quad\n    \\widetilde{\\vect{\\cH}}_{b}=\\f{1}{4\\pi}\\int_{\\bbS^{2}}f_{b}(\\omega)\\,\\vect{\\ell}(\\omega)\\,d\\omega.  \n  \\end{equation*}\n  It follows that $\\widetilde{\\vect{\\cM}}_{b}=(\\cJ_{b},\\widetilde{\\vect{\\cH}}_{b})^{T}\\in\\cR$.  \n  Then, since $0\\le\\xi\\le1$, $|\\vect{\\cH}_{b}|\\le|\\widetilde{\\vect{\\cH}}_{b}| < (1-\\cJ_{b})\\,\\cJ_{b}$.  \n\\end{proof}\n\n\\begin{lemma}\n  Let $\\vect{\\cM}_{a}=(\\cJ_{a},\\vect{\\cH}_{a})^{T}\\in\\cR$ and $\\alpha>0$.  \n  Let $\\vect{\\cM}_{b}$ satisfy\n  \\begin{equation*}\n    \\vect{\\cM}_{b}=\\vect{\\cM}_{a}+\\alpha\\,\\vect{\\cD}\\,\\vect{\\cC}(\\vect{\\cM}_{b}),    \n  \\end{equation*}\n  where $\\vect{\\cD}$ and $\\vect{\\cC}(\\vect{\\cM})$ are given by Eq.~\\eqref{eq:collisionTermMoments}.  \n  Then $\\vect{\\cM}_{b}\\in\\cR$.  \n  \\label{lem:correctionStep}\n\\end{lemma}\nThe proof of Lemma~\\ref{lem:correctionStep} follows along the same lines as the proof of Lemma~\\ref{lem:implicitStep} and is omitted.  
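\nAs a quick numerical illustration of Definition~\\ref{def:momentRealizability} (with values chosen here purely for illustration): the pair $\\cJ=0.5$, $|\\vect{\\cH}|=0.2$ is realizable, since $\\gamma=(1-0.5)\\,0.5-0.2=0.05>0$, whereas $\\cJ=0.5$, $|\\vect{\\cH}|=0.3$ is not, even though it satisfies the positivity condition $|\\vect{\\cH}|<\\cJ$ of Eq.~\\eqref{eq:realizableSetPositive}; this illustrates that $\\cR$ is a strict subset of $\\cR^{+}$.  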
\n", "meta": {"hexsha": "19afe4d12128aa6dbb3f17b61cdadcac8254b314", "size": 9501, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/M1/realizableFermionicM1/sections/realizability.tex", "max_stars_repo_name": "srichers/thornado", "max_stars_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:16:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:31:21.000Z", "max_issues_repo_path": "Documents/M1/realizableFermionicM1/sections/realizability.tex", "max_issues_repo_name": "srichers/thornado", "max_issues_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-07-10T20:13:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-11T13:21:00.000Z", "max_forks_repo_path": "Documents/M1/realizableFermionicM1/sections/realizability.tex", "max_forks_repo_name": "srichers/thornado", "max_forks_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-11-14T01:13:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T02:08:20.000Z", "avg_line_length": 62.9205298013, "max_line_length": 310, "alphanum_fraction": 0.6534049047, "num_tokens": 3614, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760727, "lm_q2_score": 0.7879311856832191, "lm_q1q2_score": 0.5832064536617116}}
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n%%% \\documentclass{article}\n%%% \\usepackage{graphicx}\n%%% \\usepackage{color}\n\n%%% \\sloppy\n%%% \\definecolor{lightgray}{gray}{0.5}\n\\setlength{\\parindent}{0pt}\n\n%%% \\begin{document}\n\n    \n    \n\\subsection*{Ratio of signal to noise and distortion and Effective number of bits (time space)}\n\n\\begin{par}\nExample for algorithm SINAD-ENOB.\n\\end{par} \\vspace{1em}\n\\begin{par}\nAlgorithm calculates Ratio of signal to noise and distortion and Effective number of bits in time space. First signal is generated, then estimates of signal parameters are calculated by Four parameter sine wave fitting and next SINAD and ENOB are calculated. This example do not take into account a quantisation noise.\n\\end{par} \\vspace{1em}\n\n\\subsubsection*{Contents}\n\n\\begin{itemize}\n\\setlength{\\itemsep}{-1ex}\n   \\item Generate sample data\n   \\item Calculate estimates of signal parameters\n   \\item Copy results to inputs\n   \\item Calculate SINAD and ENOB\n   \\item Display results:\n\\end{itemize}\n\n\n\\subsubsection*{Generate sample data}\n\n\\begin{par}\nTwo quantities are prepared: \\lstinline{t} and \\lstinline{y}, representing 1 second of sinus waveform of nominal frequency 1 kHz, nominal amplitude 1 V, nominal phase 1 rad and offset 1 V sampled at sampling frequency 10 kHz.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDI = [];\nAnom = 2; fnom = 100; phnom = 1; Onom = 0.2;\nDI.t.v = [0:1/1e4:1-1/1e4];\nDI.y.v = Anom*sin(2*pi*fnom*DI.t.v + phnom) + Onom;\n\\end{lstlisting}\n\\begin{par}\nAdd a noise with normal distribution probability:\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nnoisestd = 1e-4;\nDI.y.v = DI.y.v + noisestd.*randn(size(DI.y.v));\n\\end{lstlisting}\n\\begin{par}\nLets make an estimate of frequency 0.2 percent higher than nominal value:\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDI.fest.v = 100.2;\n\\end{lstlisting}\n\n\n\\subsubsection*{Calculate estimates of signal parameters}\n\n\\begin{par}\nUse QWTB to apply algorithm \\lstinline{FPNLSF} to data \\lstinline{DI}.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nCS.verbose = 1;\nDO = qwtb('FPNLSF', DI, CS);\n\\end{lstlisting}\n\n        \\begin{lstlisting}[style=output]\nQWTB: no uncertainty calculation\nFitting started\n\nLocal minimum found.\n\nOptimization completed because the size of the gradient is less than\nthe default value of the function tolerance.\n\n\n\nFitting finished\n\\end{lstlisting} \\color{black}\n    \n\n\\subsubsection*{Copy results to inputs}\n\n\\begin{par}\nTake results of \\lstinline{FPNLSF} and put them as inputs \\lstinline{DI}.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDI.f = DO.f;\nDI.A = DO.A;\nDI.ph = DO.ph;\nDI.O = DO.O;\n\\end{lstlisting}\n\\begin{par}\nSuppose the signal was sampled by a 20 bit digitizer with full scale range \\lstinline{FSR} of 6 V (+- 3V). (The signal is not quantised, so the quantization noise is not present. 
Thus the simulation and its results are not fully correct.):\n\end{par} \vspace{1em}\n\begin{lstlisting}[style=mcode]\nDI.bitres.v = 20;\nDI.FSR.v = 3;\n\end{lstlisting}\n\n\n\subsubsection*{Calculate SINAD and ENOB}\n\n\begin{lstlisting}[style=mcode]\nDO = qwtb('SINAD-ENOB', DI, CS);\n\end{lstlisting}\n\n        \begin{lstlisting}[style=output]\nQWTB: no uncertainty calculation\n\end{lstlisting} \color{black}\n    \n\n\subsubsection*{Display results:}\n\n\begin{par}\nThe results are:\n\end{par} \vspace{1em}\n\begin{lstlisting}[style=mcode]\nSINADdB = DO.SINADdB.v\nENOB = DO.ENOB.v\n\end{lstlisting}\n\n        \begin{lstlisting}[style=output]\n\nSINADdB =\n\n   82.9229\n\n\nENOB =\n\n   13.0657\n\n\end{lstlisting} \color{black}\n    \begin{par}\nThe theoretical value of SINADdB is 20*log10(Anom./(noisestd.*sqrt(2))). The theoretical value of ENOB is log2(DI.FSR.v./(noisestd.*sqrt(12))). The absolute errors of the results are:\n\end{par} \vspace{1em}\n\begin{lstlisting}[style=mcode]\nSINADdBtheor = 20*log10(Anom./(noisestd.*sqrt(2)));\nENOBtheor = log2(DI.FSR.v./(noisestd.*sqrt(12)));\nSINADerror = SINADdB - SINADdBtheor\nENOBerror = ENOB - ENOBtheor\n\end{lstlisting}\n\n        \begin{lstlisting}[style=output]\n\nSINADerror =\n\n   -0.0874\n\n\nENOBerror =\n\n   -0.0145\n\n\end{lstlisting} \color{black}\n    \n\n\n%%% \end{document}\n    \n", "meta": {"hexsha": "0b8e6b6827a2d2940bda4baf246c58a773d6e57b", "size": 4216, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/algs_examples_published/doc_SINAD-ENOB.tex", "max_stars_repo_name": "qwtb/qwtb", "max_stars_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-12-09T13:18:54.000Z", "max_stars_repo_stars_event_max_datetime": "2015-12-09T13:18:54.000Z", "max_issues_repo_path": "doc/algs_examples_published/doc_SINAD-ENOB.tex", "max_issues_repo_name": "qwtb/qwtb", "max_issues_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2015-12-09T13:08:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-13T11:33:41.000Z", "max_forks_repo_path": "doc/algs_examples_published/doc_SINAD-ENOB.tex", "max_forks_repo_name": "qwtb/qwtb", "max_forks_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2016-11-11T02:12:47.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-17T12:59:18.000Z", "avg_line_length": 24.9467455621, "max_line_length": 318, "alphanum_fraction": 0.7215370019, "num_tokens": 1333, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8459424256566558, "lm_q2_score": 0.6893056231680122, "lm_q1q2_score": 0.583112870881521}}
{"text": "%% Chapter 3 : Basics of Solar Energy\n\n\\section{Solar Radiation and Solar Spectrum}\n\\\n\\\n\\\n\\\nSun is the source of radiation. It is a thermonuclear furnace, where hydrogen atoms undergo fusion to become helium atoms. The mass lost in the process is converted into electromagnetic energy of the order of $3.8\\times10^{20}$ MW. This energy is radiated outwards from the surface of he sun into space.\\\\\n\nFor the purpose of understanding the nature of absorption and emission of electromagnetic radiation by an object, we use a mathematical abstraction called a \\textit{Blackbody}. It is the perfect emitter and absorber of radiation; which means that for a given area and temperature the blackbody will absorb  all radiation impinging on it, and emit more radiation per square unit area as compared to any other object. Moreover, no radiation is reflected or transmitted through a blackbody. The idea of the blackbody is credited to the German physicist Max Planck.\\\\\n\nThe wavelengths emitted by a blackbody are dependent on its temperature. This is described by the \\textit{Planck's Law} given in eq (\\ref{planck})\n\n\\begin{equation}\n\\label{planck}\nE_{\\lambda}=\\frac{3.74\\times10^{8}}{\\lambda^{5} \\left[\\text{exp}\\left( \\frac{14,400}{\\lambda T}\\right)-1 \\right]}\n\\end{equation}\\\\\nwhere,\\\\\n$ E_{\\lambda} $ = Emissive power per unit area of a blackbody $ (W/m^{2}) $ \\\\\n$ T $ = Absolute temperature of the body $ (K) $ \\\\\n$ \\lambda $ = Wavelength $ (\\mu m) $ \\\\\n\nThe \\textit{Stefan-Boltzman Law of Radiation} given in eq (\\ref{stefbolt}) expresses the area under the Planck's curve between any two points, which represents the power emitted between those two wavelengths. This is called as the total emitted radiant power.\n\n\\begin{equation}\n\\label{stefbolt}\nE = A \\sigma T^{4}\n\\end{equation}\\\\\nwhere,\\\\\n$ E $ = Total blackbody emission rate $ (W) $ \\\\\n$ A $ = Area of the body $ (m^{2}) $ \\\\\n$ \\sigma $ = Stefan-Boltzman constant $ (5.67\\times10^{-8} W/m^{2}K^{4}) $ \\\\\n\nOne characteristic feature of the blackbody radiation curve is given by \\textit{Wien's Displacement Rule} given in eq (\\ref{wien}), which identifies the wavelength where the blackbody will emit maximum radiation.\n\n\\begin{equation}\n\\label{wien}\n\\lambda_{{max}}(\\mu m)=\\frac{2898}{T(K)}\n\\end{equation}\\\\\nwhere,\\\\\n$ \\lambda_{max} $ = Wavelength at which blackbody emits maximum radiation $ (\\mum) $ \\\\\n\nThe Fig \\ref{figc3h1} shows the ideal blackbody Planck's curve at 5800 K, which is the surface temperature of the sun. The total area under this blackbody curve gives the solar insolation just outside the earth's atmosphere, which is approximately $1.37 \\ \\text{kW/m}^{2}$. The yellow part of the Fig \\ref{figc3h1} shows the actual Solar Spectrum prior to entering earth's atmosphere, it approximately depicts the blackbody curve. The actual solar spectrum consists of:\n\n\\begin{enumerate}\n\\item Ultraviolet (UV) is 7\\%\n\\item Visible is 47\\%\n\\item Infrared (IR) is 46\\%\n\\end{enumerate}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{SolarSpec}\n\\caption{Solar Spectrum [5]}\n\\label{figc3h1} %% to refer use, \\ref{}\n\\end{figure}\n\nAlso, the red part of Fig \\ref{figc3h1} shows the Solar Spectrum (terrestrial spectrum) after passing through the earth's atmosphere, where it undergoes various stages of absorption. 
Hence, the available solar energy on the surface of the earth is less than that available outside the atmosphere. \n\n\section{Photovoltaic Cell}\n\\\n\\\n\\\n\\\nThe photovoltaic cell is physically equivalent to a p-n junction diode shown in Fig (\ref{figc3h2}). A p-n junction diode consists of two types of semiconducting materials, which are n-type and p-type materials. The n-type material is doped with atoms of a valency +5 element, each of which donates an extra electron that is free to move within the crystal lattice formed by the valency 4 silicon atoms and the donor atoms; hence, n-type materials have free electrons. For creating a p-type material, atoms of a valency +3 element are used, each of which creates a hole in the crystal lattice that can readily accept a free-moving electron; hence, p-type materials have positively charged holes (the lack of an electron can be mathematically treated as a positive charge). However, the net charge of both the n and p type materials is neutral.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{pn-junction}\n\caption{PN-Junction Diode}\n\label{figc3h2} %% to refer use, \ref{}\n\end{figure}\n\nA pn-junction diode is formed by joining a p-type material with an n-type material. Now, at the junction of the diode the mobile electrons from the n region and the mobile holes from the p region of the diode move by diffusion across the junction, leaving behind static positive charges and negative charges in the n region and p region of the diode respectively. These immobile charges create a static electric field which opposes the diffusion of mobile electrons and holes (also called charge carriers). The diffusion process continues until the electric field becomes strong enough to stop all further diffusion of the charge carriers across the junction. This region of immobile charges at the junction is called the depletion region (depleted of mobile charge carriers). The width of the depletion region is about 1 $\mu \text{m} $ and the voltage across it is about 1 V; hence, the electric field strength is about 10,000 V/cm. Fig (\ref{diode12}) shows the symbol and characteristics of a pn-junction diode.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{diode1}\n\caption{PN-Junction Diode Symbol and Characteristics [5]}\n\label{diode12} %% to refer use, \ref{}\n\end{figure}\n\nThere are two modes of operation of a p-n junction diode. The first is the forward conduction mode, where an external positive voltage is applied from the p-side to the n-side (the external voltage cancels the effect of the electric field developed by the depletion region and allows further diffusion, reducing the width of the depletion region); in this mode the diode conducts very well, offering the least resistance. But when the voltage polarity across the pn-junction diode is reversed, it blocks the flow of current, offering maximum resistance; this mode of operation is called reverse bias. Still, a small amount of current flows due to thermally generated carriers; this current is called the reverse saturation current. 
The voltage-current characteristic of the pn-junction diode is described by the Shockley diode equation, given in eq (\ref{diode}) \n\n\begin{equation}\n\label{diode}\nI_{d} = I_{0}( e^{qV_{d}/kT}-1)\n\end{equation}\\\\\nwhere,\\\\\n$ I_{d} $ = Diode Current $ (A) $ \\\\\n$ I_{0} $ = Reverse saturation current $ (A) $ \\\\\n$ q $ = Electron charge $ (1.602\times10^{-19} C) $ \\\\\n$ V_{d} $ = Voltage across diode terminals from p-side to n-side $ (V) $ \\\\\n$ k $ = Boltzmann's constant $ (1.381\times10^{-23} J/K) $ \\\\\n\nFig (\ref{d2}) shows the operating principle of a photovoltaic cell. When a pn-junction diode is exposed to sunlight, owing to the photoelectric effect, photons of sunlight are absorbed and electron-hole pairs are formed.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv8}\n\caption{PN-Junction Diode as a Photovoltaic Cell [5]}\n\label{d2} %% to refer use, \ref{}\n\end{figure}\n\nIf these mobile charge carriers reach near the depletion layer, the electric field in the depletion layer will push the holes to the p-side and the electrons to the n-side. Hence, holes will accumulate in the p-side and electrons in the n-side of the pn-junction diode. Now, if electrical contacts are attached to the pn-junction diode ends, conventional current will flow from the p-side (electrons flow from the n-side) into the wire, through the load and back to the n-side, thus completing the circuit.\\\\\n\nA simple model of a photovoltaic cell consists of a pn-junction diode in parallel with an ideal current source shown in Fig (\ref{p1}), where the ideal current source produces current in direct proportion to the amount of solar flux incident on it.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv9}\n\caption{Simple Equivalent circuit of a Photovoltaic Cell [5]}\n\label{p1} %% to refer use, \ref{}\n\end{figure}\n\nFrom Fig (\ref{p1}) we can write the voltage and current equations for the PV cell as,\n\n\begin{equation}\n\label{pv1}\nI = I_{SC}-I_{d}\n\end{equation}\\\\\nwhere,\\\\\n$ I $ = Current output from PV Cell $ (A) $ \\\\\n$ I_{SC} $ = Short circuit current of PV Cell $ (A) $ \\\\\n\nSubstituting the value of $I_{d}$ in the above equation we get,\n\n\begin{equation}\n\label{pv2}\nI = I_{SC}-I_{0} \left(e^{qV/kT}-1 \right )\n\end{equation}\n\nwhere $I_{SC}$ is the short circuit current, i.e. the current measured when the p-side and n-side are shorted, which means $V=0$, and $I$ is the load current.\\\\\n\nConversely, when the leads from the PV cell are kept open, no current flows i.e. 
$I=0$, the voltage measured is called the open-circuit voltage $(V_{OC})$, which is given by the following equation,\n\n\begin{equation}\n\label{pv3}\nV_{OC} = \frac{kT}{q}\ln \left (\frac{I_{SC}}{I_{0}}+1\right )\n\end{equation}\\\\\nwhere,\\\\\n$ V_{OC} $ = Open circuit voltage of PV Cell $ (V) $\\\\\n\nEqs (\ref{pv2},\ref{pv3}) at $25^{\circ} \text{C}$ reduce to the following equations,\n\n\begin{equation}\n\label{pv4}\nI = I_{SC}-I_{0} \left(e^{38.9V}-1 \right )\n\end{equation}\n\n\begin{equation}\n\label{pv5}\nV_{OC} = 0.0257\ln \left (\frac{I_{SC}}{I_{0}}+1\right )\n\end{equation} \n\n\n\n\subsection{PV Cell Model with Shunt Resistance}\n\\\n\\\n\\\n\\\nA more accurate model of the PV cell is developed when a resistance is added in parallel to the pn-junction diode as shown in Fig (\ref{figc3h4})\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{PVmo}\n\caption{PV Cell Model With Shunt Resistance [5]}\n\label{figc3h4} %% to refer use, \ref{}\n\end{figure}\n\nThis model describes the behaviour of a PV cell in a string of PV cells. If we consider that one of the PV cells in the string is shaded, then it will not produce current; moreover, the pn-junction diode pertaining to that cell will be reverse biased and will not allow any flow of current. This will cause the entire current output from the string of PV cells to be reduced to zero, and no power is delivered to the load according to the simple equivalent model. But in reality this is not the case, and power, though reduced in amount, is delivered to the load even during shading. Hence, the voltage and current characteristics of a PV cell model with parallel resistance $R_{P}$ are given as follows,\n\n\begin{equation}\n\label{pv6}\nI = (I_{SC}-I_{d})-\frac{V}{R_{P}}\n\end{equation}\\\\\nwhere,\\\\\n$ V $ = Voltage at output of PV Cell $ (V) $ \\\\\n$ R_{P} $ = Parallel resistance of the PV Cell $ (\Omega) $ \n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv1}\n\caption{Effect of Parallel Resistance on I-V Curve [5]}\n\label{figc3h111} %% to refer use, \ref{}\n\end{figure}\n\nFig (\ref{figc3h111}) shows the effect of parallel resistance on the I-V characteristics of the PV cell. It can be observed that the parallel resistance causes the load current of the simple model to be decreased by $V/R_{P}$.\n\n\subsection{PV Cell Model with Series Resistance}\n\\\n\\\n\\\n\\\nThere is some intrinsic resistance of the semiconductor and also the contact resistance of the bond between the wires and the cell leads. 
This resistance is called the series resistance ($R_{S}$), as it reduces the voltage available at the load terminals.\nHence, an accurate model of a PV cell should consider $R_{S}$ as shown in Fig (\ref{figc3h3})\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{PVmm}\n\caption{PV Cell Model with Series Resistance [5]}\n\label{figc3h3} %% to refer use, \ref{}\n\end{figure}\n\nThe voltage and current characteristics of the PV cell with a series resistance are given by the following equations,\n\n\begin{equation}\n\label{pv7}\nV_{d}=V+I.R_{S}\n\end{equation}\\\\\nwhere,\\\\\n$ R_{S} $ = Series resistance of the PV Cell $ (\Omega) $ \n\n\begin{equation}\n\label{pv8}\nI = I_{SC}-I_{0} \left\{\text{exp}\left[\frac{q(V+I.R_{S})}{kT} \right]-1 \right\}\n\end{equation}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv2}\n\caption{Effect of Series Resistance on I-V Curve [5]}\n\label{figc3h222} %% to refer use, \ref{}\n\end{figure}\n\nFig (\ref{figc3h222}) shows the effect of series resistance on the I-V curve of the PV cell. It can be observed that the series resistance reduces the voltage available at the load by $IR_{S}$.\n\n\subsection{Complete PV Cell Model}\n\\\n\\\n\\\n\\\nThe complete PV cell model as shown in Fig (\ref{figc3h333}) includes both the parallel and the series resistance. This model of the PV cell is highly accurate.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv3}\n\caption{Complete Model of PV Cell [5]}\n\label{figc3h333} %% to refer use, \ref{}\n\end{figure}\n\nThe load current equation for this model is given by the following equation,\n\n\begin{equation}\n\label{pv10}\nI_{SC}=I+I_{d}+I_{P}\n\end{equation}\\\\\nwhere,\\\\\n$ I_{P} $ = Current in the parallel branch of the PV Cell $ (A) $\\\\ \n\nThe effect of series resistance is given by the following equation,\n\n\begin{equation}\n\label{pv11}\nV=V_{d}-IR_{S}\n\end{equation}\n\nThe combined effect of the series and parallel resistances is given in the following equation, \n\n\begin{equation}\n\label{pv9}\nI = I_{SC}-I_{0} \left\{\text{exp}\left[\frac{q(V+I.R_{S})}{kT} \right]-1 \right\}-\left(\frac{V+I.R_{S}}{R_{P}}\right)\n\end{equation}\n\n\subsection{I-V Curve}\n\\\n\\\n\\\n\\\nFig (\ref{figc3h5}) shows the I-V curve of the PV cell. The I-V curve helps us identify the key parameters of the PV cell. On the horizontal axis we find the open-circuit voltage $(V_{OC})$ when the current is zero, and on the vertical axis we find the short-circuit current $(I_{SC})$ when the voltage is zero; at both these points the power delivered by the PV cell is zero, as either voltage or current is zero $(P=VI)$.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{PVivp}\n\caption{PV Cell I-V Curve [5]}\n\label{figc3h5} %% to refer use, \ref{}\n\end{figure}\n\nThe maximum power delivered is obtained at the knee of the I-V curve; this point is called the maximum power point (MPP). The voltage and current at the MPP are represented as $V_{m}$ and $I_{m}$. The I-V curve summarizes the entire performance characteristic of the PV cell. Any other point on the I-V curve is represented as $V_{R}$ and $I_{R}$.\n\n\subsection{Effect of Insolation and Temperature on I-V Curve}\n\\\n\\\n\\\n\\\n\\\nFig (\ref{figc3h6}) shows the PV cell curves for two different values of insolation. We observe that the short-circuit current is directly proportional to insolation, and it changes linearly with insolation. 
The open-circuit voltage, on the other hand, increases only modestly with insolation, as it follows a logarithmic relationship (see eq (\ref{pv5})). Hence, as insolation increases the power output from the PV cell increases.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{PVinso}\n\caption{Effect of Insolation on I-V Curve [5]}\n\label{figc3h6} %% to refer use, \ref{}\n\end{figure}\n\nFig (\ref{figc3h7}) shows the PV cell curves for two different values of temperature. We observe that as temperature increases the open-circuit voltage decreases considerably, while the short-circuit current increases very slightly. Hence, the power output of a PV cell increases on a clear, cold day. \n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{PVtemp}\n\caption{Effect of Temperature on I-V Curve [5]}\n\label{figc3h7} %% to refer use, \ref{}\n\end{figure}\n\nThe cell temperature depends not only on the ambient temperature but also on the insolation, as only a small fraction of the insolation hitting the cell is converted to electricity, and the rest is converted to heat. Hence, manufacturers of PV cells provide clients with a normal operating cell temperature (NOCT); it is the temperature in the cell when the ambient temperature is $20^{\circ} \text{C}$, the solar irradiance is $0.8 \ \text{kW/m}^{2}$, and the wind speed is 1 m/s. This value helps in accounting for the changes in the PV cell performance with temperature. The cell temperature can be calculated using the following equation, \n\n\begin{equation}\n\label{tefpv}\nT_{{cell}}=T_{{amb}}+\left(\frac{NOCT-20^{\circ}}{0.8}\right)\cdot S\n\end{equation}\\\\\nwhere,\\\\\n$ S $ = Solar insolation $ (\text{kW/m}^{2}) $ \\\\\n\n\newpage\n\n\section{Cells, Modules and Arrays}\n\\\n\\\n\\\n\\\nThe individual cell with its small voltage of 0.5 V cannot be used alone for producing power for everyday applications. Hence, to increase the power output, a number of cells are connected together in series and parallel to form a module, which is encased in a tough weather-resistant package. Usually 36 cells are connected in series to give a module with a voltage of 12 V, and the number of parallel strings can be adjusted according to the amount of power to be generated from a module. Also, there are large modules with 72 cells connected in series giving a voltage of 24 V.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv4}\n\caption{PV - Cell, Module and Array [5]}\n\label{figc3h444} %% to refer use, \ref{}\n\end{figure}\n\nTo increase the power generation capacity further, modules can be connected in series to increase the voltage, and a number of similar parallel strings can be connected to increase the current. Such a combination of modules is called an array.\n\nEq (\ref{mod1}) gives the voltage equation for a module, where $n$ is the number of cells connected in series.\n\n\begin{equation}\n\label{mod1}\nV_{{module}}=n(V_{d}-IR_{S})\n\end{equation}\\\\\nwhere,\\\\\n$ V_{module} $ = Total voltage of a PV Module $ (V) $ \\\\\n$ n $ = Total number of PV Cells in one string in the PV Module  \\\\\n\nFrom Fig (\ref{figc3h555}) we observe that as the number of cells in series in a module increases, the open-circuit voltage increases. 
But the short-circuit current remains the same; hence the MPP moves horizontally, and the power output of the module increases.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv5}\n\caption{Effect of Series connected modules on I-V Curve [5]}\n\label{figc3h555} %% to refer use, \ref{}\n\end{figure}\n\nEq (\ref{mod2}) gives the current equation for a module, where $p$ is the number of strings in the module.\n\n\begin{equation}\n\label{mod2}\nI_{{module}}=p(I_{SC}-I_{d}-I_{P})\n\end{equation}\\\\\nwhere,\\\\\n$ I_{{module}} $ = Total current of the PV Module $ (A) $ \\\\\n$ p $ = Total number of parallel strings of PV Cells in the PV Module\\\\\n\nFrom Fig (\ref{figc3h666}) we observe that as the number of strings in a module increases, the short-circuit current increases. But the open-circuit voltage remains the same; hence the MPP moves vertically, and the power output of the module increases.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{pv6}\n\caption{Effect of Parallel connected modules on I-V Curve [5]}\n\label{figc3h666} %% to refer use, \ref{}\n\end{figure}\n\n\newpage\n\n\section{Effect of Shading on PV Modules}\n\\\n\\\n\\\n\\\nTo understand the concept of shading in PV cells, let us consider Fig (\ref{figc3h888}). It shows a string of $n$ cells; the figure on the left shows the condition when all the cells are in the sun. This gives us the usual behaviour of the PV module, where the major component of the current from the preceding cells passes through the $I_{SC}$ current source branch.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{shading1}\n\caption{PV module with \textit{n} Cells - top cell in sun, or in shade [5]}\n\label{figc3h888} %% to refer use, \ref{}\n\end{figure}\n\nBut the figure on the right shows the condition when one cell in the string is shaded (it does not produce current). Here we can see that the current from the preceding cells in the string will have to pass through the branch of the parallel resistance, which causes a large voltage drop as the value of the parallel resistance is high. This drop in voltage reduces the power output of the module severely.\\\\\n\nThe output voltage $(V_{SH})$ of the $n$-cell shaded module is given in the following equation,\n\n\begin{equation}\n\label{shade1}\nV_{SH}=V_{n-1}-I(R_{P}+R_{S})\n\end{equation}\n\nThe output voltage across $n-1$ cells $(V_{n-1})$, if $V$ is the output voltage of $n$ cells, is given by, \n\n\begin{equation}\n\label{shade2}\nV_{n-1}=\left(\frac{n-1}{n}\right)V\n\end{equation}\\\\\nwhere,\\\\\n$ n $ = Total number of cells in the string \\\\\n$ V_{n-1} $ = Output voltage across $ n-1 $ cells $ (V) $ \\\\\n$ \triangledown V $ = Voltage drop caused by the shaded cell $ (V) $ \n\nPutting eq (\ref{shade2}) in eq (\ref{shade1}), we get,\n\n\begin{equation}\n\label{shade3}\nV_{SH}=\left(\frac{n-1}{n}\right)V-I(R_{P}+R_{S})\n\end{equation}\\\\\nwhere,\\\\\n$ V_{SH} $ = Output voltage of an $ n $-cell shaded module $ (V) $ \\\\\n\nHence, the drop in voltage $\triangledown V=V-V_{SH}$ at any given current $I$, caused by the shaded cell, is given by the following equation,\n\n\begin{equation}\n\label{shade4}\n\triangledown V=\frac{V}{n}+I(R_{P}+R_{S})\n\end{equation}\\\\\nwhere,\\\\\n$ \triangledown V $ = Voltage drop caused by the shaded cell $ (V) $ \\\\\n\nThe above equation can be simplified, as the parallel resistance is very large compared to the series resistance. 
Hence, neglecting the effect of the series resistance, we get the following equation, \n\n\begin{equation}\n\label{shade5}\n\triangledown V=\frac{V}{n}+IR_{P}\n\end{equation}\n\nFig (\ref{figc3h999}) shows the comparison of I-V curves for an un-shaded and a shaded module. We can observe that in the module with shading both the short-circuit current and the open-circuit voltage are reduced and the curve assumes a linear shape, which drastically reduces the power output from the module.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{shading2}\n\caption{Effect of shading one cell in \textit{n} cell module [5]}\n\label{figc3h999} %% to refer use, \ref{}\n\end{figure}\n\n\newpage\n\n\subsection{Bypass Diode}\n\\\n\\\n\\\n\\\nIn order to overcome the problem of shading, bypass diodes are used as shown in Fig (\ref{figc3h100}). The figure on the left in Fig (\ref{figc3h100}) shows a PV cell which is un-shaded; here the bypass diode will be reverse biased, as the PV cell produces a reverse voltage across the bypass diode; hence, no current flows through the bypass diode when the cell is un-shaded.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{bydiode1}\n\caption{Mitigation of shading problem with Bypass Diode - In sunny cell bypass diode is cut-off, in shaded cell it conducts [5]}\n\label{figc3h100} %% to refer use, \ref{}\n\end{figure}\n\nThe figure on the right in Fig (\ref{figc3h100}) shows a shaded PV cell with a bypass diode. As the cell is shaded, the output voltage of the cell is reduced drastically; moreover, the current from the preceding cells, rather than passing through the parallel resistance of the shaded cell and causing a large voltage drop, flows through the bypass diode as it gets forward biased. Hence, the bypass diode limits the voltage lost due to shading (the voltage drop across the bypass diode is only about 0.5 V).\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=1]{bydiode2}\n\caption{Effect of Bypass Diode on I-V Curve [5]}\n\label{figc3h190} %% to refer use, \ref{}\n\end{figure}\n\nFig (\ref{figc3h190}) shows the comparison of I-V curves for a shaded cell, an un-shaded cell and a shaded cell with a bypass diode. We observe that the bypass diode improves the performance of the shaded cell by raising the MPP; however, it does not completely eliminate the effect of shading.\n\n\subsection{Blocking Diode}\n\\\n\\\n\\\n\\\nBlocking diodes, as shown in Fig (\ref{figc3h200}), are connected at the top of each string of an array. The blocking diode conducts only when the string is the source of current, as it is then forward biased. But when the string malfunctions or is shaded, it acts as a load and draws current; the blocking diode then becomes reverse biased and prevents any flow of current into the string. 
Hence a blocking diode helps improve the performance of a PV array.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.75]{bldiode1}\n\caption{Blocking Diode prevents reverse flow of current through PV modules [5]}\n\label{figc3h200} %% to refer use, \ref{}\n\end{figure}\n\n\newpage\n\n\section{Results of PV-IV Curve App}\n\\\n\\\n\\\n\\\nTable (\ref{tabc3h1}) gives the information on the different PV modules, which are used to simulate P-V and I-V curves with the PV-IV Curve App.\n\n\begin{table}[htbp]\n  \centering\n  \caption{PV Module Information}\n    \begin{tabular}{|l|c|c|c|c|}\n    \hline\n    \textbf{TYPE} & \textbf{Poly} & \textbf{Mono} & \textbf{CdTe} & \textbf{A-Si} \bigstrut\\\\\n    \hline\n    \textbf{MAKE} & Lanco & Lanco & First Solar & Du-Pont \bigstrut\\\\\n    \hline\n    \textbf{Crys/TF} & Crys & Crys & TF & TF \bigstrut\\\\\n    \hline\n    \textbf{Pmpp (W)} & 235 & 250 & 85 & 107 \bigstrut\\\\\n    \hline\n    \textbf{Vmpp (V)} & 29.56 & 31.15 & 46.4 & 74 \bigstrut\\\\\n    \hline\n    \textbf{Impp (A)} & 7.95 & 8.034 & 1.83 & 1.44 \bigstrut\\\\\n    \hline\n    \textbf{Voc (V)} & 37.17 & 38.04 & 60.5 & 99 \bigstrut\\\\\n    \hline\n    \textbf{Isc (A)} & 8.4 & 8.712 & 1.94 & 1.81 \bigstrut\\\\\n    \hline\n    \textbf{Kv} & -0.31 & -0.33 & -0.27 & -0.3 \bigstrut\\\\\n    \hline\n    \textbf{Ki} & 0.06 & 0.036 & 0.04 & 0.09 \bigstrut\\\\\n    \hline\n    \textbf{Kp} & -0.43 & -0.47 & -0.25 & -0.25 \bigstrut\\\\\n    \hline\n    \end{tabular}%\n  \label{tabc3h1}%\n\end{table}%\n\\\nWhere Crys-Crystalline, TF-Thin Film, mpp-Maximum Power Point, oc-Open Circuit, sc-Short Circuit, P-Power, V-Voltage, I-Current and Kv, Ki, Kp are the temperature co-efficients of voltage, current and power respectively.\\\\\n\nThe P-V and I-V curves, plotted at different temperatures ($-25$, 0, 25, 50 and 75 $^{\circ}$C) and different irradiances (200, 400, 600, 800 and 1000 W/m$^{2}$), are as follows,\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Poly_Irradiance}\n\caption{Polycrystalline Solar PV module I-V and P-V Curves at different Irradiances}\n\label{figc3h15} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Poly_Temperature}\n\caption{Polycrystalline Solar PV module I-V and P-V Curves at different Temperatures}\n\label{figc3h16} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Mono_Irradiance}\n\caption{Monocrystalline Solar PV module I-V and P-V Curves at different Irradiances}\n\label{figc3h17} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Mono_Temperature}\n\caption{Monocrystalline Solar PV module I-V and P-V Curves at different Temperatures}\n\label{figc3h18} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{A-si_Irradiance}\n\caption{A-Si Thin Film Solar PV module I-V and P-V Curves at different Irradiances}\n\label{figc3h19} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{A-si_Temperature}\n\caption{A-Si Thin Film Solar PV module I-V and P-V Curves at different Temperatures}\n\label{figc3h20} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{CDTE_Irradiance}\n\caption{CDTE Thin Film Solar PV module I-V and P-V Curves at different 
Irradiances}\n\label{figc3h21} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{CDTE_Temperature}\n\caption{CDTE Thin Film Solar PV module I-V and P-V Curves at different Temperatures}\n\label{figc3h22} %% to refer use, \ref{}\n\end{figure}\nThe effect of decreasing irradiance, with the temperature kept constant at 25$^{\circ}$C, can be observed in Figs (\ref{figc3h15}, \ref{figc3h17}, \ref{figc3h19}, \ref{figc3h21}). It is clearly seen that the photovoltaic module current output and power are directly proportional to the irradiance level; however, the module voltage decreases only slightly with decreasing irradiance levels. This concurs with the theory that the module current is a direct function of the solar irradiance.\\\\\n\nThe effect of increasing temperature, with the irradiance level kept constant at 1000 W/m$^{2}$, can be observed in Figs (\ref{figc3h16}, \ref{figc3h18}, \ref{figc3h20}, \ref{figc3h22}). It is clearly seen that the module voltage and power output are inversely proportional to the temperature; however, the module current increases very slightly with an increase in temperature. This concurs with the theory that the diode voltage decreases with increasing temperature.\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Poly_Shading_WithoutByPassDiode}\n\caption{Polycrystalline Solar PV module P-V and I-V curves for Shading without Bypass Diode at different numbers of shaded cells}\n\label{figc3h23} %% to refer use, \ref{}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{Poly_Shading_With_ByPassDiode}\n\caption{Polycrystalline Solar PV module P-V and I-V curves for Shading with Bypass Diode at different numbers of shaded cells}\n\label{figc3h24} %% to refer use, \ref{}\n\end{figure}\n\nIn Fig (\ref{figc3h23}) we can observe that as the number of shaded cells within a module increases, both the P-V and I-V curves start becoming flat. Hence, shading of even a single cell reduces the module power by almost half; this can be attributed to the voltage loss in the series and parallel equivalent resistances due to the flow of the module output current.\\\\\n\nTo mitigate the shading effect, bypass diodes are used; they provide an alternative path for the current, which would otherwise have flowed through $R_{P}$ and $R_{S}$, causing large voltage drops and consequently large reductions in output power. Ideally, for the best shading mitigation, each cell within a module should be supplied with its own bypass diode; but as this would escalate the cost and manufacturing complexity, an optimal number of cells per bypass diode has to be selected. 
In Fig (\ref{figc3h24}) it can be observed that as the number of cells per bypass diode increases, the performance of the module deteriorates.\\\\\n\n\newpage\n\n\section{Variables Affecting Solar PV Cell Output}\n\\\n\\\n\\\n\\\nFrom the discussions in the previous sections, it is clear that the performance of a PV Cell depends on the following four variables:\n\n\begin{enumerate}\n\item Insolation\n\item Temperature\n\item Wind Speed\n\item Cloud Cover\n\end{enumerate}\n\nFig (\ref{mfigc3h1}) shows pictures of some instruments which help in measuring the above mentioned variables.\n\n\begin{figure}[H]\n    \centering\n    \begin{subfigure}[b]{0.45\textwidth}\n       \centering\n\includegraphics[scale=0.5]{Pyranometer}\n\caption{Pyranometer}\n\label{figc3h8}\n    \end{subfigure}\n    \hfill%% Fill Horizontal Space between two Figures\n        \begin{subfigure}[b]{0.45\textwidth}\n       \centering\n\includegraphics[scale=0.5]{Pyrheliometer}\n\caption{Pyrheliometer}\n\label{figc3h9} %% to refer use, \ref{}\n    \end{subfigure}\n    \caption{Irradiance Measurement Instruments}\n    \label{mfigc3h0}\n\end{figure} \n   \n\n\begin{figure}[H]\n    \centering\n    \begin{subfigure}[b]{0.45\textwidth}\n\centering\n\includegraphics[scale=0.5]{Anemometer}\n\caption{Anemometer}\n\label{figc3h10} %% to refer use, \ref{}\n    \end{subfigure}\n    \hfill%% Fill Horizontal Space between two Figures\n        \begin{subfigure}[b]{0.45\textwidth}\n\centering\n\includegraphics[scale=0.5]{Temperature}\n\caption{Temperature Sensor}\n\label{figc3h11} %% to refer use, \ref{}\n    \end{subfigure}\n    \caption{Weather Variables Measurement Instruments}\n    \label{mfigc3h1}\n\end{figure}  \n\nFig (\ref{figc3h8}) shows a Pyranometer, which measures the Global Horizontal Irradiance. Fig (\ref{figc3h9}) shows a Pyrheliometer, which measures the Direct Beam Irradiance of the sun. Fig (\ref{figc3h10}) shows an Anemometer, which measures the wind speed. Fig (\ref{figc3h11}) shows a temperature sensor. Also, there is a sky imager which can measure the cloud cover.\n\n\section{PV Cell Technologies}\n\\\n\\\n\\\n\\\nThere are mainly three PV technologies, Mono-Crystalline, Poly-Crystalline and Thin Film, which are widely used commercially in Solar PV Plants.\n\n\subsection{Mono-Crystalline PV}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=2]{PVmono}\n\caption{Mono-Crystalline Cell and its Crystal}\n\label{figc3h12} %% to refer use, \ref{}\n\end{figure}\n\nIt is manufactured by slicing thin wafers from a single long Silicon crystal rod (shown in the figure at the right of Fig (\ref{figc3h12})) grown using the Czochralski process. It is the most expensive of the three technologies. It has an efficiency of 15-20\%, and has the highest density of power production (area/power is lowest). It has a market share of 36\%.\n\n\subsection{Poly-Crystalline PV}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=0.5]{PVPoly}\n\caption{Poly-Crystalline Cell and its Crystal}\n\label{figc3h13} %% to refer use, \ref{}\n\end{figure}\n\nIt is manufactured from metallurgical grade silicon (shown in the figure at the right of Fig \ref{figc3h13}) by a chemical process called the Siemens process. It is moderately priced. It has an efficiency of 13-16\%, and has a moderate density of power production (area/power is higher than Mono-Crystalline). 
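\n\nAs a rough illustrative calculation of these area/power figures (a sketch using assumed nominal efficiencies from the ranges quoted above, not data from Table (\ref{tabc3h1})): under a standard irradiance of 1 kW/m$^{2}$, a module delivering 250 W requires an area of about $250/(0.18\times1000)\approx1.4 \ \text{m}^{2}$ at a Mono-Crystalline efficiency of 18\%, but about $250/(0.14\times1000)\approx1.8 \ \text{m}^{2}$ at a Poly-Crystalline efficiency of 14\%. This is why area/power is lowest for Mono-Crystalline and higher for the less efficient technologies.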
\n\n\subsection{Thin Film PV}\n\n\begin{figure}[H]\n\centering\n\includegraphics[scale=2.5]{PVthin}\n\caption{Thin Film Sheet}\n\label{figc3h14} %% to refer use, \ref{}\n\end{figure}\n\nIt is manufactured by depositing one or more thin layers of PV material on a substrate such as glass, plastic or metal. Fig (\ref{figc3h14}) shows the Thin Film PV sheet. It is the least expensive of the three technologies. But it has an efficiency of 7-13\%, and has the lowest density of power production (area/power is highest). ", "meta": {"hexsha": "9930e63f980bf1a1f11b87853cdd259366267920", "size": 33543, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch3.tex", "max_stars_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_stars_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch3.tex", "max_issues_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_issues_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8_ProjectReport/2_ProjectProposal/Latex Support Files/Chapters/Ch3.tex", "max_forks_repo_name": "ninadkgaikwad/ARMA_TimeSeries_Forecasting_Project", "max_forks_repo_head_hexsha": "18329e436f823d55d2aad02b1d81d8cdda506ab2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-28T05:21:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-28T05:21:48.000Z", "avg_line_length": 47.9871244635, "max_line_length": 1024, "alphanum_fraction": 0.7431356766, "num_tokens": 9475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256512199032, "lm_q2_score": 0.6926419958239132, "lm_q1q2_score": 0.5830837991967192}}
{"text": "\\section{Logic Gates}\n\nThe basic building blocks of computers are called \\emph{gates}. Their only function is to report, with some yes-or-no electrical output, the result of their circuit following some input from somewhere else in the circuit. The way to understand how gates make decisions is called ``Boolean Algebra\", but really it's a fairly simple set of decisions that get strung together in useful ways. The circuits are called gates because they either let electricity pass or prevent electricity from passing through, like a gate allows physical movement or prevents it.\n\n\\subsection*{Boolean Logic}\n\nBoolean logic is the sort of decision making and answers that computers use. You can use it too, of course, and it will help you learn to program computers. The answer to every question in Boolean logic is either ``yes'' or ``no'' -- ``true'' or ``false''. In computers, that's a one or a zero, of course.   Every operation (decision) is composed of a set of operations: \\emph{and}, \\emph{or}, and \\emph{not}. A British man named George Boole introduced these ideas to the world in 1847\\footnote{\n\\emph{The Mathematical Analysis of Logic} (1847), and \\emph{An Investigation of the Laws of Thought} (1854). For more, see his {\\color{webblue}\\href{https://en.wikipedia.org/wiki/George_Boole}{Wikipedia entry}}.}. He did not realize he was laying the foundations for computing, but 90 years later another very important mathematician, Claude Shannon, understood that Boole's system for understanding decisions could be used to design electronic circuits\\footnote{Shannon, Claude E. 1938. ``A Symbolic Analysis of Relay and Switching Circuits\". \\emph{Trans. AIEE.} 57 (12): 713--723.}.\n\nIt may be helpful to see some of the logic and operations (decisions) made in a graphical form, looking at the overlap between circles, or between a circle and its surrounding area. Here, the shaded area shows the part of the relationship that is ``true'' for the given operation. That is, the region that is {\\sc{not}} A, in the third diagram, is shaded.\n\n% Draw Venn diagrams to graphically demonstrate OR, AND, and NOT.\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}{6in}\n\\input{./include/vennand.tex}\n\\caption{A graphic representing {\\sc{and}}, {\\sc{or}}, and {\\sc{not}} with overlapping circles. 
The shaded area represents the location where the Boolean operation is ``true''.}\n\label{fig:vennlogic}\n\end{minipage}\n\end{center}\n\end{figure}\n\n\nAlso note: more complex decisions can be made by combining these operations (NOT $+$ AND, for instance).\n\n\n\subsection*{The OR Gate}\n\nOR gates take two inputs and provide a ``yes'' if line 1 OR line 2 is equal to 1 -- that is, if either line has a voltage level above zero.\n\n\medskip\n\begin{center}\n\n\begin{tabular}{p{2.3in} p{3.7in} }\n\hline\\\\[\negsep]\nThe symbol for an OR gate is:\n\n\vspace{0.25in}\n\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american or port](orgate){}\n\t(orgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(orgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t(orgate.out) node [right](){{\color{red}$OUT$}}\n;\n\end{circuitikz}\n\n&\n\n\centering\n\nThe ``Truth Table\" (how they behave) is:\n\vspace{0.15in}\n\n\begin{tabular}{ll | c}\n\multicolumn{3}{c}{\textbf{OR Gate }}\\\\\n\multicolumn{3}{c}{\textbf{Truth Table}}\\\\\n\hline\\\\[\negsep]\n\textbf{A} & \textbf{B} & \textbf{OUT}\\\\\n\hline\n0 & 0 & 0  \\\\\n1 & 0 & 1  \\\\\n0 & 1 & 1  \\\\\n1 & 1 & 1  \\\\\n\hline\n\end{tabular}\n\\\n\tabularnewline\n\hline\\\\[\negsep]\n\end{tabular}\n\n\end{center}\n\bigskip\n\n\noindent OR gates are very simple. They are so simple that transistors are not even needed, though they certainly can be used.\n\n\begin{figure}[!ht]\n\begin{center}\n\input{./include/diodeorgate.tex}\n\caption{A very simple OR gate schematic. Diodes conduct electricity only in the ``forward'' direction. So Input A cannot affect Input B. But either way, if there is a signal on Input A or on Input B, OUT will carry a voltage (and therefore, a ``1''). Diode \emph{propagation delay} (how much time the signal takes to cross through the diode) is longer than that of transistors, meaning transistorized circuits run faster.}\n\label{fig:diodeorgate}\n\end{center}\n\end{figure}\n\n\stbox{6.0in}{\emph{Experiment:} Wire up an OR gate on a breadboard, like the one seen in the cover image. Use LEDs as indicators for Input A, Input B, and OUT. Feel free to use diodes, but using transistors is pretty easy. It's best to use N-type transistors as switches for each LED and for the OR-output LED.}\n\n\begin{figure}[!ht]\n\begin{center}\n\input{./include/npnorgate.tex}\n\caption{A simple transistor-based OR gate. Transistors are better than diodes in this application because transistors amplify (or at least do not degrade) signals, while diodes lose half a volt or more.  In this schematic, each transistor acts as an independent switch for the output -- either switch pushes OUT to high.}\n\label{fig:npnorgate}\n\end{center}\n\end{figure}\n\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=1.25]{orgateschembetter.png}\n\caption{A schematic from a real circuit design program showing a practical, real implementation of the idealized schematic in Figure \ref{fig:npnorgate}. The wires connecting components are the green lines. Mostly there are more resistors to help keep the transistors behaving well. For instance, the 100K resistors make sure the transistors are always at a low logic level (zero volts) when the inputs are `off'.}\n\label{fig:orgateeagleschematic}\n\end{center}\n\end{figure}\n\n\n\n\n\begin{figure}[!hb]\n\begin{center}\n\includegraphics[scale=0.25]{trimmedorgate.png}\n\caption{A circuit board designed using the above OR gate schematic. You can solder these without much trouble. 
You can see the top and bottom of the board here, but all components go on the top. However, both top and bottom have \\emph{traces}, that carry electricity instead of using loose/floppy wires. The larger resistors are the lowest value (they're just there to make the transistors behave a little better): 100R means ``100 Ohms''. The smaller resistors are either 100,000 Ohms (that is, 100K Ohms) or 1,000 Ohms (1K Ohms). The OR gate takes two \\emph{inputs}, A and B, and produces one \\emph{result}, R.}\n\\label{fig:orgatepcbs}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\\clearpage\n\\newpage\n\n\\subsection*{The AND Gate}\n\nAND gates take two inputs and provide a ``yes'' (a one, or a ``logic high\" signal) only if both line 1 AND line 2 are equal to 1.\n\n\\medskip\n\\begin{center}\n\n\\begin{tabular}{p{2.3in} p{3.7in} }\n\\hline\\\\[\\negsep]\n\nThe symbol for an AND gate is:\n\n\\vspace{0.25in}\n\n\\begin{circuitikz}\n\t\\draw(0,0)\n\tnode[american and port](andgate){}\n\t(andgate.in 1) node [left](){{\\color{red}$INPUT~A$}}\n\t(andgate.in 2) node [left](){{\\color{red}$INPUT~B$}}\n\t(andgate.out) node [right](){{\\color{red}$OUT$}};\n\n\\end{circuitikz}\n\n&\n\n\\centering\n\nThe ``Truth Table\" (how they behave) is:\n\\vspace{0.15in}\n\n\\begin{tabular}{ll | c}\n\\multicolumn{3}{c}{\\textbf{AND Gate }}\\\\\n\\multicolumn{3}{c}{\\textbf{Truth Table}}\\\\\n\\hline\\\\[\\negsep]\n\\textbf{A} & \\textbf{B} & \\textbf{OUT}\\\\\n\\hline\n0 & 0 & 0  \\\\\n1 & 0 & 0  \\\\\n0 & 1 & 0  \\\\\n1 & 1 & 1  \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\tabularnewline\n\n\\hline\\\\[\\negsep]\n\n\\end{tabular}\n\\end{center}\n\n\\bigskip\n\n\nThere are much more complicated AND gates, but see below for an easy-to-understand example (and it totally works). Remember that transistors act like switches: the \\emph{gate} acts like the switch that opens up the flow of electricity from the \\emph{source} to the \\emph{drain}. Look at the schematic in Figure \\ref{fig:simpleandgate} -- note that the drain of the top transistor ($Q_1$) flows into the source of the other transistor ($Q_2$), meaning that even if $Q_2$ is turned on, it may not necessarily start carrying electricity (because $Q_1$ may not allow the electricity from $V_{cc}$ to pass through). Thus, both gate 1 AND gate 2 have to be turned on in order to see voltage at the output.\n\n\\bigskip\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\input{./include/simpleand.tex}\n\n\\caption{A simple AND gate schematic. When Input A is a zero, no current flows into the ``collector\" (voltage input wire) of Transistor 2. When Input B is zero, no current can flow to OUT, regardless of whether Input A is 0 or 1. Only when Input A is 1 (current can flow past Q1) and when Input B is 1 (current is available to Q2, and Q2 is turned ``on'', allowing current to pass) does OUT go ``high\". $R_3$ prevents too much current from flowing to ground, and is not strictly necessary in all gates, but is included here because it is common to see them. $R_1$ and $R_2$ should be about 100 ohms. $R_3$ should be about 100K ohms. This design is just for demonstration.}\n\\label{fig:simpleandgate}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\fbox{\n\\includegraphics[scale=0.14]{andgatebreadboard.jpg}\n}\n\\medskip\n\nA single AND gate on a solderless breadboard. The transistor on the left, controlled by the gray button, makes electricity available to the input of the second transistor, controlled by the green button. 
Only if both transistors are ``on'' (that is, A \emph{and} B are passing electricity) will the LED light up. Using this picture, wire up an AND gate on a breadboard with N-type transistors. Use LEDs as indicators for Input A, Input B, and the result.\n\end{center}\n\end{figure}\n\n\n\n\n\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=0.25]{andgate.png}\n\caption{A circuit board designed using the above AND gate schematic, though with two 100K pull-down resistors also implemented (these guarantee the transistors' gates are reliably held at 0 volts instead of being allowed to \emph{float} until either of the inputs goes high, that is, pushes some voltage into the transistor gate). The pull-down resistors are important practically, but they do not appear in this circuit diagram for simplicity. This board looks almost exactly like the OR gate, but the wiring is different! Also, you can see both the front and back sides of the circuit board in this image. The AND gate takes two \emph{inputs}, A and B, and produces one \emph{result}, R.}\n\end{center}\n\label{fig:andgate}\n\end{figure}\n\n\n\clearpage\n\newpage\n\n\subsection*{The NOT Gate}\n\nNOT gates, sometimes called \emph{inverters}, take one input and reverse the value of that input. So a 1 becomes a 0, or vice versa. Put another way, this gate creates the \emph{complement} of the input value.\n\n\medskip\n\begin{center}\n\n\begin{tabular}{p{2.3in} p{3.7in} }\n\hline\\\\[\negsep]\n\nThe symbol for a NOT gate is:\n\vspace{0.25in}\n\n\begin{circuitikz}\n\t\draw(0,-0.5)\n\tnode[american not port](notgate){}\n\t(notgate.in) node [left](){{\color{red}$INPUT$}}\n\t(notgate.out) node [right](){{\color{red}$OUT$}}\n;\n\end{circuitikz}\n\n&\n\n\centering\n\nThe ``Truth Table\" (how they behave) is:\n\vspace{0.15in}\n\n\begin{tabular}{l | c}\n\multicolumn{2}{c}{\textbf{NOT Gate }}\\\\\n\multicolumn{2}{c}{\textbf{Truth Table}}\\\\\n\hline\\\\[\negsep]\n\textbf{IN} & \textbf{OUT}\\\\\n\hline\n0 & 1 \\\\\n1 & 0  \\\\\n\hline\n\end{tabular}\n\\\n\tabularnewline\n\n\hline\\\\[\negsep]\n\n\end{tabular}\n\end{center}\n\n\bigskip\n\nBelow is a simple NOT circuit, using a very old resistor-transistor logic (RTL) design. It is easy to understand, though this example is not how NOT gates are actually implemented any more, because of the poor efficiency---wasted energy turns into heat, which then has to be cooled somehow. It is better not to make the heat in the first place.\n\n\begin{figure}[!ht]\n\begin{center}\n\input{./include/simplenot.tex}\n\caption{A very simple NOT gate schematic. When IN is logic low (off), electricity cannot pass through to the ground, and so it is available at the OUT terminal---OUT is a 1. When IN is carrying current, the transistor opens up and electricity starts flowing to ground -- the ``path of least resistance''. As electricity flows to ground instead of OUT, OUT has a very low voltage available -- making it a zero.}\n\label{fig:simplenot}\n\end{center}\n\end{figure}\n\n\n\begin{figure}[!ht]\n\begin{center}\n\input{./include/CMOSNOT.tex}\n\n\caption{A practical inverter (NOT) circuit. While more complicated than a simple NOT circuit, it is more efficient, wasting much less energy. It uses one P-type and one N-type \emph{field effect transistor} (FET). P-type transistors clamp off electricity when their gate \emph{has} voltage on it. N-type transistors clamp off when their gate \emph{does not have} voltage on it. 
So, here, when the input is high, the top transistor clamps off voltage input, and the bottom transistor opens up, allowing any current to flow to ground--but there's no current available! The situation reverses when the input is low: the top transistor opens up, allowing current to flow, and the bottom transistor clamps off, preventing current from going anywhere -- but putting a high signal at the output.}\n\label{fig:cmosnot}\n\end{center}\n\end{figure}\n\n\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=0.15]{trimmednotgate.png} % convert NOTgate.png -alpha off -trim  trimmednotgate.png\n\caption{A circuit board designed using the more elegant NOT gate schematic seen in Figure \ref{fig:cmosnot}. The NOT gate takes exactly one \emph{input}, A, and produces a \emph{result}, R.}\n\label{fig:notgateboard}\n\end{center}\n\end{figure}\n\n\begin{table}\n\input{./include/NOTgateBOM.tex}\n\caption{The list of components needed to make the NOT gate seen in Figures \ref{fig:cmosnot} and \ref{fig:notgateboard}.}\n\n\end{table}\n\n\clearpage\n\newpage\n\n\section{Complex Logic Gates}\n\nBy combining gates, it is possible to produce more complex circuits that answer different questions, like outputs that tell us if two inputs are ``NOT AND\" or ``NOT OR\" or ``Exclusively OR\". Each of these gates allows new comparisons between two inputs, but each is composed of the fundamental three gates, seen above. There is even an ``Exclusive-NOR\" (XNOR) gate, which is how computers determine whether two numbers are equal to each other.\n\n\subsection*{The NAND Gate}\n\nA ``NOT-AND\", or ``NAND\", gate just reverses the value obtained from an AND gate -- so it has the exact opposite truth table. One way to create a NAND gate is therefore to put a NOT gate on the result (output) line of an AND gate, just like snapping Legos together: the AND output goes through the NOT gate and becomes NOT-AND, or ``NAND\".\n\n\n\medskip\n\begin{center}\n\n\begin{tabular}{p{2.3in} p{3.7in} }\n\hline\\\[\negsep]\n\nThe NAND gate symbol is:\n\n\vspace{0.25in}\n\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american nand port](nandgate){}\n\t(nandgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(nandgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t(nandgate.out) node [right](){{\color{red}$OUT$}};\n\n\end{circuitikz}\n\n\vspace{0.15in}\n\nSee the circle on the output? 
That means ``\emph{not}\".\n\n&\n\n\centering\n\nThe ``Truth Table\" (how they behave) is:\n\vspace{0.15in}\n\n\begin{tabular}{ll | c}\n\multicolumn{3}{c}{\textbf{NAND Gate }}\\\n\multicolumn{3}{c}{\textbf{Truth Table}}\\\n\hline\\\[\negsep]\n\textbf{A} & \textbf{B} & \textbf{OUT}\\\n\hline\n0 & 0 & 1  \\\n1 & 0 & 1  \\\n0 & 1 & 1  \\\n1 & 1 & 0  \\\n\hline\n\end{tabular}\n\\\n\tabularnewline\n\n\hline\\\[\negsep]\n\n\end{tabular}\n\end{center}\n\n\nYou can think of a NAND gate as two separate gates (AND and NOT):\n\begin{center}\n\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american and port](andgate){}\n\t(andgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(andgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t;\n\t\n\t\draw(1.5,0)\n\tnode[american not port](notgate){}\n\t(andgate.out) |- (notgate.in)\n\t;\n\t\n\t\draw(3,0)\n\t(notgate.out) node [right](){{\color{red}$OUT$}}\n\t;\n\n\end{circuitikz}\n\n\end{center}\n\n\bigskip\n\n\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=0.16]{trimmednandgate.png}\n\caption{A circuit board designed to make a NAND gate using four transistors.}\n\label{fig:nandgate}\n\end{center}\n\end{figure}\n\n\clearpage\n\newpage\n\n\subsection*{The NOR Gate}\n\nNot surprisingly, there is also a NOR gate, which has the reverse truth table of an OR gate. Simple NOR gates are easy to implement, though as we said for the NOT gate, the more complicated designs save power and run cooler.\n\n\medskip\n\begin{center}\n\n\begin{tabular}{p{2.3in} p{3.7in} }\n\hline\\\[\negsep]\n\nThe symbol for a NOR gate is:\n\n\vspace{0.25in}\n\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american nor port](norgate){}\n\t(norgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(norgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t(norgate.out) node [right](){{\color{red}$OUT$}};\n\n\end{circuitikz}\n\n\vspace{0.10in}\n\nSee the circle on the output? That means ``\emph{not}\".\n\vspace{0.15in}\n\n&\n\n\centering\n\nThe ``Truth Table\" (how they behave) is:\n\vspace{0.15in}\n\n\begin{tabular}{ll | c}\n\multicolumn{3}{c}{\textbf{NOR Gate }}\\\n\multicolumn{3}{c}{\textbf{Truth Table}}\\\n\hline\\\[\negsep]\n\textbf{A} & \textbf{B} & \textbf{OUT}\\\n\hline\n0 & 0 & 1  \\\n1 & 0 & 0  \\\n0 & 1 & 0  \\\n1 & 1 & 0  \\\n\hline\n\end{tabular}\n\\\n\n\tabularnewline\n\n\hline\\\[\negsep]\n\n\end{tabular}\n\end{center}\n\n\nYou can think of a NOR gate as two separate gates (OR and NOT):\n\begin{center}\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american or port](orgate){}\n\t(orgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(orgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t;\n\t\n\t\draw(1.5,0)\n\tnode[american not port](notgate){}\n\t(orgate.out) |- (notgate.in)\n\t;\n\t\n\t\draw(3,0)\n\t(notgate.out) node [right](){{\color{red}$OUT$}}\n\t;\n\n\end{circuitikz}\n\end{center}\n\n\n\bigskip\n\n\n\begin{figure}[!hb]\n\begin{center}\n\includegraphics[scale=0.4]{Agc_nor2.jpg}\n\caption{An early integrated circuit NOR gate. It has six transistors, eight resistors, and three pairs of A/B inputs (thus, it has three separate yes/no outputs). This chip was part of the guidance computer that took Americans to the moon in 1969. \emph{Photo credit: Wikimedia Foundation}.}\n\end{center}\n\label{fig:apollonorgate}\n\end{figure}\n\n\n\clearpage\n\newpage\n\n\subsection*{The Exclusive-OR Gate}\nThere is also the ``eXclusive-OR\", or ``XOR\", gate. 
Here, ``exclusive'' means ``one, but not both.'' The XOR gate output (result) is 1 when \emph{the inputs are not the same} -- it returns a 1 if, and \emph{only} if, the two inputs are different. That is, it provides a one ``exclusively if'' there is a single ``one'' (sometimes called ``logic high\", for ``voltage above zero\") in the two inputs. XOR gates take a lot of transistors to implement. The most common version requires 12, though it is possible to make an XOR gate with 8 transistors. See Figures \ref{fig:xorcomposite}, \ref{fig:xorbreadboard}, and \ref{fig:xorgate}.\n\n\medskip\n\begin{center}\n\n\begin{tabular}{p{2.3in} p{3.7in} }\n\hline\\\[\negsep]\n\nThe symbol for an XOR gate is:\n\n\vspace{0.25in}\n\n\begin{circuitikz}\n\t\draw(0,0)\n\tnode[american xor port](xorgate){}\n\t(xorgate.in 1) node [left](){{\color{red}$INPUT~A$}}\n\t(xorgate.in 2) node [left](){{\color{red}$INPUT~B$}}\n\t(xorgate.out) node [right](){{\color{red}$OUT$}};\n\n\end{circuitikz}\n\n\vspace{0.15in}\n\n(the curved line behind the OR gate means that it is ``exclusive\")\n\n&\n\n\centering\n\nThe ``Truth Table\" (how they behave) is:\n\vspace{0.15in}\n\n\begin{tabular}{ll | c}\n\multicolumn{3}{c}{\textbf{XOR Gate }}\\\n\multicolumn{3}{c}{\textbf{Truth Table}}\\\n\hline\\\[\negsep]\n\textbf{A} & \textbf{B} & \textbf{OUT}\\\n\hline\n0 & 0 & 0  \\\n1 & 0 & 1  \\\n0 & 1 & 1  \\\n1 & 1 & 0  \\\n\hline\n\end{tabular}\n\\\n\tabularnewline\n\n\hline\\\[\negsep]\n\n\end{tabular}\n\end{center}\n\n\bigskip\n\n% XOR gate, composited:\n\begin{figure}[h!]\n\begin{center}\n\input{include/xorcomposited.tex}\n\caption{XOR gate, as a composite of the simple gates.}\n\label{fig:xorcomposite}\n\end{center}\n\end{figure}\n\nXOR gates return ``true'' if---and \emph{only} if---the inputs are different. Put another way, the inputs ``must sum to 1'', or the inputs ``must not be equal''. OR gates return ``true'' if the inputs are different, but \emph{also} if both inputs are 1. So the two gates are not the same. There are two cases where XOR can be ``true''--where A is 1 and B is 0, or where A is 0 and B is 1. The logic of the XOR circuit must handle either case, and report a result of 1 (``true'') for only those two cases.\n\nTake a look at Figure \ref{fig:xorcomposite}: You can see that there are two AND gates. Recall that AND gates only return ``true'' if both inputs are 1. For XOR, we require that the two inputs are different. So to correctly identify ``A is 1 and B is 0'', we use an inverter (a NOT gate) on the B input -- so if B is a zero, the NOT gate converts the B to a 1, and then the AND gate gets both inputs as 1. The lower AND gate ($AND_2$) captures the opposite case: if A is 0 and B is 1, then the A input gets inverted by the NOT gate into a 1, and in that case, too, the AND gate correctly reports that the inputs are ``true''. Since the OR gate returns ``true'' if either input is 1, both opposite cases where XOR is true are passed along to the result. \n\nIt's also worth knowing that XOR gates can be used like a NOT gate, by holding one of the two inputs at 1. Then the other input will always be negated. 
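\n\nIf you have a computer handy, you can check this trick without any transistors at all. Below is a short simulation -- written in Python purely for illustration; any language would do -- that holds input B at 1 and shows that the output is always the opposite of input A, exactly like a NOT gate:\n\n\begin{verbatim}\n# A tiny simulation of the XOR-as-NOT trick.\ndef xor(a, b):\n    # XOR returns 1 only when the two inputs differ.\n    return 1 if a != b else 0\n\nfor a in (0, 1):\n    # With B held at 1, the output is always NOT A:\n    print(a, \"->\", xor(a, 1))\n# Prints: 0 -> 1 and 1 -> 0 -- the NOT gate truth table.\n\end{verbatim}\n\n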
If one input of an XOR gate is always held at zero, then the XOR gate always passes the same value that comes in on the other input (a gate that passes on its input without changing it is called a ``buffer\").\n\n\\bigskip\n\n\\begin{figure}[!hb]\n\\begin{center}\n\\fbox{\n\\includegraphics[scale=0.15, clip=true, trim=0 0 100 0]{xorgatebreadboard.jpg}\n}\n\\caption{An XOR gate wired up on a breadboard with discrete transistors and resistors. This is the same schematic (and the same color-coded signal wires) seen in Figure \\ref{fig:xorgate}. The red switch is signal A and the blue switch is signal B. On the LED line, 0 and 1 are A and B, and 2 and 3 (lit up) are NOT A and NOT B.}\n\\label{fig:xorbreadboard}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\input{include/xorgate.tex}\n\\caption{An exclusive-or (XOR) gate, composed of 12 transistors and some supporting resistors. You don't need to understand how it works, but it's not that hard if you break it down into the colored signals, two NOT gates, two AND gates with N-type FETs, and two linked AND gates with P-type FETs. A ``0'' (ground) signal opens a P-type FET, and a ``1'' signal opens an N-type FET.}\n\\label{fig:xorgate}\n\\end{center}\n\\end{figure}\n\n\\clearpage\n", "meta": {"hexsha": "a468a4276af21b9ee3171e8dea2e904bfc556907", "size": 22159, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/logicgates.tex", "max_stars_repo_name": "jessehamner/TechMillForKids", "max_stars_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2017-11-13T21:45:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T09:31:54.000Z", "max_issues_repo_path": "chapters/logicgates.tex", "max_issues_repo_name": "jessehamner/TechMillForKids", "max_issues_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-03-10T21:46:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-25T19:21:58.000Z", "max_forks_repo_path": "chapters/logicgates.tex", "max_forks_repo_name": "jessehamner/TechMillForKids", "max_forks_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-11-14T04:40:14.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-17T05:31:36.000Z", "avg_line_length": 37.8786324786, "max_line_length": 790, "alphanum_fraction": 0.7229116837, "num_tokens": 6697, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8499711794579723, "lm_q1q2_score": 0.5830372673218668}}
{"text": "\\chapter{Interpreting a Linear Regression Model \\label{chapter:linreg}}\n\nThis chapter is devoted to understanding the structure of linear regression models. We first encountered them in Chapter~\\ref{chapter:regression} as ``just one example'' of a regression model. However, linear regression's overwhelming popularity in the clinical domain means that one cannot do clinical data science without fully understanding these models' structure and how to interpret the output produced by model fitting software. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Biomarker Example from Chapter~\\ref{chapter:regression}}\n\nIn Chapter~\\ref{chapter:regression}, we saw an example where information about two predictors -- a disease severity score ($x_1$) and a social determinants score ($x_2$) -- was used to predict the level of a disease recurrence biomarker. One of the three supervised learning algorithms we tried was a \\textbf{linear regression} model (Section~\\ref{ssect:linreg}). The output from that model is repeated below.\n\\begin{center}\n\\includegraphics[width=0.35\\textwidth]{img/esl-reg-linear.png}\n\\includegraphics[width=0.64\\textwidth]{img/linear-regression-model-output.png}\n\\end{center}\n\n\\begin{question}{}\nWhat are the number of samples, $n$, and the number of predictors, $p$, for this dataset?\n\\end{question}\n\n\\noindent But what do all these numbers \\emph{mean}?\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Understanding the Model Summary}\n\nA linear regression model looks like this (see also Chapter~\\ref{chapter:regression}, Section~\\ref{ssect:linreg}):\n$$ y = \\beta_0 + \\beta_1 x_1 + \\dots + \\beta_p x_p + \\varepsilon $$\nwhere we assume that the error, $\\varepsilon$, is normally distributed, $\\mathcal{N}(0, \\sigma)$. Another way of saying this is that we are assuming the outcome, $y$, is normally distributed with mean $\\beta_0 + \\beta_1 x_1 + \\dots + \\beta_p x_p$ and standard deviation $\\sigma$. \n\n\\subsection{The Call}\n\nThe first line of the output is just repeating the call you made to the \\verb|lm| function in R to fit the model. The \\texttt{lm} package fits linear regression models using ordinary least squares. However, linear regression models are also a type of generalized linear model (see Chapter~\\ref{chapter:glms}) and can be fit using maximum likelihood. In R, if you use the \\texttt{glm} package with \\texttt{family = \"gaussian\"} you should get identical coefficients and error estimates to what you get using the \\texttt{lm} package.\n\n\\subsection{The Residuals}\n\nA \\textbf{residual} is a measure of how much the true outcome value ($y$) of one datapoint differs from the model's prediction. For a linear regression model, the residual of training point $i$ is:\n$$ \\text{residual}^{(i)} = y^{(i)} - \\hat{y}^{(i)} $$\nwhere $\\hat{y}^{(i)}$ is the model's prediction:\n$$ \\hat{y}^{(i)} = \\beta_0 + \\beta_1 x_1^{(i)} + \\dots + \\beta_p x_p^{(i)}. $$\nHere is a histogram of the residuals for this model:\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{img/linreg-example-residuals.png}\n\\end{center}\n\n\\begin{question}{}\nEstimate the maximum, minimum, and mean residuals from this graph. Do they match what is in the model output?\n\\end{question}\n\nThe residuals play an important role in linear regression models because they are what allow us to estimate $\\sigma$. 
They also play an important role in model diagnostics because they enable us to check one of the core model assumptions: the assumption that $\\sigma$ is constant. We can check this assumption by making a plot of the residuals vs. $\\hat{y}$:\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{img/linreg-example-residual-vs-fitted.png}\n\\end{center}\nWe can also check the normality of the residuals by making a plot of their values vs. what we would expect if the residuals were perfectly normally distributed with the same mean and standard deviation:\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{img/linreg-example-almost-qq-plot.png}\n\\end{center}\nThis is not quite a \\textbf{QQ-plot}, one of the standard diagnostic plots of linear regression, because it's plotting the actual values of the residuals instead of their quantiles. But aside from the axis scales, it's exactly the same. We'll see formal QQ-plots later. \n\nThe fact that the points in the second plot lie close to a line is a good indication that the residuals are normally distributed. Another thing to notice from this plot is that the colors of the points (their $y$ values) are unrelated to their residuals. \n\\vspace{5mm}\n\n\\begin{question}{}\nThe four plots below show a famous dataset called \\textbf{Anscombe's quartet}. The regression lines produced by fitting a linear regression model to each dataset are identical, but only one dataset actually fulfills the assumptions of a linear regression model. \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{img/anscombe.png}\n\\end{center}\nWe can check these assumptions by examining plots of the residuals vs. fitted values of the model (here the ``fitted value'' of point $i$ means $\\hat{y}^{(i)}$). \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{img/anscombe-diagnostics-1.png}\n\\end{center}\nWhich of the four datasets fulfills the assumption of a linear regression model that the error has constant variance? How can you tell?\n\\end{question}\n\n\\subsection{Coefficients and Standard Errors}\n\nAlthough residuals are important, the parts of the model output that will be scrutinized, reported in papers, etc. are the coefficients, along with their standard errors and hypothesis tests. The coefficients are what create the regression surface, which captures the model's prediction for every point in the feature space (see Section~\\ref{ssect:linreg}). \n\nThere are actually a few different ways to derive the coefficients in a linear regression model. The most common way uses \\textbf{least squares}, which adjusts the coefficients $\\beta_0, \\beta_1, \\dots, \\beta_p$ until the sum of the squared residuals is minimized:\n\\begin{align*} \\text{SSE} &= \\sum_{i=1}^n (y^{(i)} - \\hat{y}^{(i)})^2 \\\\\n&= \\sum_{i=1}^n (y^{(i)} - \\beta_0 - \\beta_1 x_1^{(i)} - \\beta_2 x_2^{(i)} - \\dots - \\beta_p x_p^{(i)})^2 \\end{align*}\nIt turns out that you can find the optimal values of the $\\beta$s analytically by taking derivatives of this thing and setting them equal to zero. Alas, this requires some matrix multiplication and the taking of matrix inverses, so we will save it for a later chapter. Suffice it to say that the $\\beta$s are adjusted to minimize the SSE, and the values in the model output are the optimal values.\n\\vspace{5mm}\n\n\\begin{question}{}\nLooking at the form of the linear regression model\n$$ y = \\beta_0 + \\beta_1 x_1 + \\dots + \\beta_p x_p + \\varepsilon $$\nwhat does the value of each of the $\\beta$s mean? 
What is $\\beta_j$ telling us about how $y$ varies with the predictor $j$, all else being equal?\n\\end{question}\n\nThe model's overall estimate of $\\sigma$, the standard deviation of the error term, is obtained very naturally by averaging over the residuals, taking into account the number of predictors, $p$:\n$$ \\hat{\\sigma}^2 = \\frac{1}{n-p-1} \\sum_{i=1}^n (y^{(i)} - \\hat{y}^{(i)})^2.$$\n\n\\begin{question}{}\nThe sum of the squared residuals for our model is $4480.678$. There are $n=200$ datapoints, and the number of predictors, $p$, is 2. Calculate $\\hat{\\sigma}$ for this model. Do you see this number anywhere in the model output? What is it called?\n\\end{question}\n\nThe standard errors of the model coefficients likewise require matrix multiplication to fully understand, but they are related to three factors: (1) the value of $\\hat{\\sigma}$, (2) the spread of the values of the corresponding covariate about its mean (more spread will decrease the standard error), and (3) correlations between that covariate and other covariates in the model (tighter correlations will increase the standard error).\n\\vspace{5mm}\n\n\\begin{question}{}\nThe standard errors attempt to capture how much we expect our estimates of the model coefficients to vary if we were to refit the model using a different dataset, provided that the new dataset is similar to (i.e., sampled from the same population distribution as) the one used to fit the model. On average, approximately how much would we expect $\\beta_0$ (the intercept) to deviate from its fitted value of 49.8600? How much would we expect $\\beta_1$ and $\\beta_2$ to deviate from their fitted values? \n\\end{question}\n\n\\subsection{Hypothesis Tests of Coefficients}\n\nArmed with our coefficients and standard errors, we can perform a hypothesis test on each regression coefficient. Our null hypothesis in each case will be that the true value of that coefficient is zero: that is, it has no effect on the outcome. Under the null hypothesis that $\\beta_j = 0$ and assuming $n$, our number of samples, is large enough, the quantity $\\hat{\\beta_j}/\\text{se}(\\hat{\\beta}_j)$ will be distributed according to a Student's T distribution (see Chapter~\\ref{chapter:probabilitydistributions}, Section~\\ref{sect:tdist}) with $n-p-1$ degrees of freedom.\n\\vspace{5mm}\n\n\\begin{question}{}\nSketch the null distributions for the hypothesis tests of our three regression coefficients, $\\beta_0$, $\\beta_1$, and $\\beta_2$. Do you see why the $p$-values for these tests are so low?\n\\end{question}\n\n\\subsection{Other Model Output}\n\nThe software also provides some other output. The quantity \\textbf{R-squared} is defined as:\n\\begin{align*} R^2 &= 1 - \\frac{\\sum_{i=1}^n (y^{(i)} - \\hat{y}^{(i)})^2}{\\sum_{i=1}^n (y^{(i)} - \\overline{y})^2} \\\\[3mm]\n&=  1 - \\frac{\\sum_{i=1}^n (y^{(i)} - \\hat{\\beta}^T x^{(i)})^2}{\\sum_{i=1}^n (y^{(i)} - \\overline{y})^2} \\end{align*}\nor rather, the proportion of total variance in $y$ explained by the model. The \\textbf{adjusted R-squared} is almost exactly the same except it fixes a source of bias in $R^2$, namely that $R^2$ will favor models with more parameters. Adjusted $R^2$ penalizes models with more parameters. 
It is defined as:\n\\begin{align*}\n R^2_{\\text{adj}} &= 1 - \\frac{n-1}{n-p-1} \\frac{\\sum_{i=1}^n (y^{(i)} - \\hat{y}^{(i)})^2}{\\sum_{i=1}^n (y^{(i)} - \\overline{y})^2} \\\\[3mm]\n &= 1 - (1 - R^2) \\frac{n-1}{n-p-1}\n\\end{align*}\nThe \\textbf{F-statistic} is the ratio of two variances: the variance in the outcome, $y$, that is explained by the model parameters (``sum of squares of regression'', or SSR) and the residual, or unexplained variance (``sum of squares of error'', or SSE). The corresponding \\textbf{F-test} tests the null hypothesis that a model with no independent variables (that is, an intercept-only model with $\\beta_1 = 0$ and $\\beta_2 = 0$) fits the data as well as our model. The F-statistic follows an $F$ distribution with $p$ and $n-p-1$ degrees of freedom (see Chapter~\\ref{chapter:probabilitydistributions}, Section~\\ref{sect:fdist}). The $p$-value reported in the model output is the $p$-value for this hypothesis test.  \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Example: Small Cities Pollution Dataset}\n\nThe following data come from an early study that examined the possible link between air pollution and mortality. The authors examined 60 cities throughout the United States and recorded the following data:\n\\begin{center}\n\\texttt{ \\small \\begin{tabular}{ll}\n\\toprule\nMORT & Total age-adjusted mortality from all causes, \\\\\n& in deaths per 100,000 population \\\\\nPRECIP & Mean annual precipitation (in inches) \\\\\nEDUC & Median number of school years completed \\\\\n& for persons of age 25 years or older \\\\\nNONWHITE & Percentage of the 1960 population that is nonwhite \\\\\nNOX & Relative pollution potential of oxides of nitrogen \\\\\nSO2 & Relative pollution potential of sulfur dioxide \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\nNote: ``Relative pollution potential'' refers to the product of the tons emitted per day per square kilometer and a factor correcting the SMSA dimensions and exposure.\n\nWe want to predict the value of \\texttt{MORT} ($y$) using the predictors \\texttt{PRECIP, EDUC, NONWHITE, NOX,} and \\verb|SO2| ($x_1, x_2, x_3, x_4$ and $x_5$). Here is the output for a fitted linear regression model:\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{img/linear-small-cities-model.png}\n\\end{center}\n\n\\begin{question}{}\nInterpret the values of each of these coefficients. Based on the coefficient values and their standard errors, which predictor(s) do you think have the greatest impact on mortality? \n\\end{question}\n\n\\begin{question}{}\nIn this model, is the effect of one predictor (say, \\verb|PRECIP|) impacted by the value(s) of any of the other predictor(s)? How does this differ from the other regression algorithms we've seen (KNN and decision trees)? What are the advantages and disadvantages of this choice? \n\\end{question}\n\n\\begin{question}{}\nIs a normal distribution the right distribution to model an outcome of age-adjusted mortality (MORT)? Why or why not? Look back at our discussion of the normal distribution in Chapter~\\ref{chapter:probabilitydistributions} if you need a refresher. 
\n\\end{question}\n", "meta": {"hexsha": "caee05964fdb21d0330d9fa9cff2f8cd0287d3fd", "size": 13109, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/mcds-linear-regression.tex", "max_stars_repo_name": "blpercha/mcds-notes", "max_stars_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-12-10T16:51:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-03T01:31:23.000Z", "max_issues_repo_path": "tex/mcds-linear-regression.tex", "max_issues_repo_name": "blpercha/mcds-notes", "max_issues_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/mcds-linear-regression.tex", "max_forks_repo_name": "blpercha/mcds-notes", "max_forks_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-14T17:16:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-14T17:16:44.000Z", "avg_line_length": 80.9197530864, "max_line_length": 718, "alphanum_fraction": 0.7385765505, "num_tokens": 3411, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8376199653600371, "lm_q1q2_score": 0.582948577542183}}
{"text": "\\documentclass{subfile}\n\n\\begin{document}\n\t\\section{Majorization and Symmetric Sums}\\label{sec:mazorization}\n\n\tLet $\\mathbf{x}$ and $\\mathbf{y}$ be two vectors of $n$ real numbers. We say that $\\mathbf{x}$ \\textit{dominates} or \\index{majorization}\\textit{majorizes} $y$ if\n\t\t\\begin{align*}\n\t\t\tx_{1}\n\t\t\t\t& \\geq x_{2}\\geq \\ldots\\geq x_{n}\\\\\n\t\t\ty_{1}\n\t\t\t\t& \\geq y_{2}\\geq \\ldots\\geq y_{n}\\\\\n\t\t\tx_{1}+\\ldots+x_{n}\n\t\t\t\t& = y_{1}+\\ldots+y_{n}\\\\\n\t\t\tx_{1}+\\ldots+x_{k}\n\t\t\t\t& \\geq y_{1}+\\ldots+y_{k}\n\t\t\\end{align*}\n\tfor $1\\leq k\\leq n-1$. If $\\mathbf{x}$ dominates $\\mathbf{y}$ (resp. $\\mathbf{y}$ is \\textit{dominated by} $\\mathbf{x}$), then we write $\\mathbf{x}\\succ\\mathbf{y}$ (resp. $\\mathbf{y}\\prec\\mathbf{x}$). For example, $(4,0,0)\\succ(3,1,0)\\succ(2,2,0)$. The vectors $\\mathbf{x}$ and $\\mathbf{y}$ need not be monotonic because we can just sort them into monotonic vectors.\n\n\tWe will also introduce the cyclic and symmetric polynomials and notations juxtaposed with them. The expression $x^{2}+y^{2}+z^{2}$ is \\textit{symmetric} whereas $x^{2}y+y^{2}z+z^{2}x$ is \\textit{cyclic} but not symmetric because $y^{2}x+z^{2}y+x^{2}z\\neq x^{2}y+y^{2}z+z^{2}x$. A symmetric polynomial in the variables $x_{1},\\ldots,x_{n}$ should remain same regardless of the order in which the variables are used. So $f(x_{1},\\ldots,x_{n})$ is symmetric if $f$ remains \\textit{invariant} for all permutations of $x_{1},\\ldots,x_{n}$ in the expression unlike the cyclic example we just saw. For example, $xy+yz+zx$ is symmetric and so is $xyz$. And $\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}$ is cyclic but not symmetric. We can use the cyclic and symmetric notations to represent the expressions in a short form. Here are some demonstrations.\n\t\t\\begin{align*}\n\t\t\ta^{2}+b^{2}+c^{2}\n\t\t\t\t& = \\sum_{cyc}a^{2}\\\\\n\t\t\ta^{2}b+b^{2}c+c^{2}a\n\t\t\t\t& = \\sum_{cyc}a^{2}b\\\\\n\t\t\txy+yz+zx\n\t\t\t\t& = \\sum_{cyc}xy\n\t\t\\end{align*}\n\tNote that even though the expression $xy+yz+zx$ and $a^{2}+b^{2}+c^{2}$ are symmetric, we do not consider them symmetric polynomial sums in this notation. A symmetric polynomial sum should have all $n!$ terms in the sum since it is symmetric on all $n!$ permutations of the variables in it. Even if there can be duplicates, the total number of terms should still remain $n!$. For this reason, this sum is often denoted by\n\t\t\\begin{align*}\n\t\t\t\\sum{!}\n\t\t\t\t&\\mbox{ or }\\sum_{\\sigma}\n\t\t\\end{align*}\n\twhere $\\sigma$ indicates that the sum is taken over all possible permutations. Here are some examples.\n\t\t\\begin{align*}\n\t\t\t\\sum_{sym}a^{2}\n\t\t\t\t& = 2(a^{2}+b^{2}+c^{2})\\\\\n\t\t\t\\sum{!} x^{2}y\n\t\t\t\t& = x^{2}y+y^{2}x+y^{2}z+z^{2}y+z^{2}x+x^{2}z\\\\\n\t\t\t\\sum_{\\sigma}x^{3}y\n\t\t\t\t& = x^{3}y+x^{3}z+y^{3}z+y^{3}x+z^{3}x+z^{3}y\n\t\t\\end{align*}\n\tLet $\\mathbf{x}$ and $\\mathbf{a}$ be vectors with $n$ elements. We write\n\t\t\\begin{align*}\n\t\t\tF(\\mathbf{x}, \\mathbf{a})\n\t\t\t\t& = x_{1}^{a_{1}}\\cdots x_{n}^{a_{n}}\\\\\n\t\t\tT(\\mathbf{x},\\mathbf{a})\n\t\t\t\t& = \\sum{!}F(\\mathbf{x},\\mathbf{a})\\\\\n\t\t\t\t& = \\sum_{sym} F(\\mathbf{x},\\mathbf{a})\\\\\n\t\t\t\t& = \\sum{!}x_{1}^{a_{1}}\\cdots x_{n}^{a_{n}}\n\t\t\\end{align*}\n\tWe will explain it a little bit more. Say, we want to find the symmetric polynomials of the type $x^{3}y$ for the variables $x,y,z$. Write it as $x^{3}y^{1}z^{0}$. 
Now, we fix the powers $3,1,0$ in their respective places and let $x,y,z$ run through the permutations. Then we get\n\t\t\begin{align*}\n\t\t\tx^{3}y^{1}z^{0}+x^{3}z^{1}y^{0}+y^{3}z^{1}x^{0}+y^{3}x^{1}z^{0}+z^{3}x^{1}y^{0}+z^{3}y^{1}x^{0}\n\t\t\end{align*}\n\tSee that this is exactly what we got in the last line of\n\t\t\begin{align*}\n\t\t\t\sum_{\sigma}x^{3}y\n\t\t\end{align*}\n\tSome authors write this notation as\n\t\t\begin{align*}\n\t\t\tT[\mathbf{a}](\mathbf{x})\n\t\t\t\t& = T[a_{1},\ldots,a_{n}](x_{1},\ldots,x_{n})\n\t\t\end{align*}\n\tSimply $T[\mathbf{a}]=T[a_{1},\ldots,a_{n}]$ is used if it is clear what $\mathbf{x}$ is.\n\t\t\begin{align*}\n\t\t\tT[1,0,\ldots,0]\n\t\t\t\t& = (x_{1}+\ldots+x_{n})(n-1)!\\\n\t\t\tT\left[\dfrac{1}{n},\ldots,\dfrac{1}{n}\right]\n\t\t\t\t& = n!\sqrt[n]{x_{1}\cdots x_{n}}\n\t\t\end{align*}\n\tThen for a vector of positive real numbers $\mathbf{x}$, we can write the arithmetic-geometric mean inequality as\n\t\t\begin{align*}\n\t\t\tT[1,0,\ldots,0]\n\t\t\t\t& \geq T\left[\dfrac{1}{n},\ldots,\dfrac{1}{n}\right]\n\t\t\end{align*}\n\tWe can also define mean values based on the symmetric polynomials. We call\n\t\t\begin{align*}\n\t\t\t\mathfrak{M}[\mathbf{a}](\mathbf{x})\n\t\t\t\t& = \dfrac{T[\mathbf{a}](\mathbf{x})}{n!}\n\t\t\end{align*}\n\tthe \index{symmetrical mean}\textit{symmetrical mean}. For example,\n\t\t\begin{align*}\n\t\t\t\mathfrak{M}[1,0,\ldots,0]\n\t\t\t\t& = \dfrac{(n-1)!}{n!}(a_{1}+\ldots+a_{n})\\\n\t\t\t\t& = \mathfrak{A}(\mathbf{a})\\\n\t\t\t\mathfrak{M}\left[\dfrac{1}{n},\ldots,\dfrac{1}{n}\right]\n\t\t\t\t& = \dfrac{n!}{n!}a_{1}^{\frac{1}{n}}\cdots a_{n}^{\frac{1}{n}}\\\n\t\t\t\t& = \mathfrak{G}(\mathbf{a})\n\t\t\end{align*}\n\tSo, the symmetrical mean is a generalization of $\mathfrak{A}$ and $\mathfrak{G}$. While we are talking about symmetric sums, we should also consider the following identity:\n\t\t\begin{align*}\n\t\t\t(x+a_{1})\cdots(x+a_{n})\n\t\t\t\t& = x^{n}+\binom{n}{1}x^{n-1}D_{1}+\binom{n}{2}x^{n-2}D_{2}+\ldots+\binom{n}{n}D_{n}\n\t\t\end{align*}\n\twhere $D_{k}$ is the average of the products of $a_{1},\ldots,a_{n}$ taken $k$ at a time (that is, the sum of all such products divided by $\binom{n}{k}$). 
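For example, for $n=3$ the identity reads (a small worked example added for illustration):\n\t\t\begin{align*}\n\t\t\t(x+a_{1})(x+a_{2})(x+a_{3})\n\t\t\t\t& = x^{3}+\binom{3}{1}x^{2}D_{1}+\binom{3}{2}xD_{2}+\binom{3}{3}D_{3}\n\t\t\end{align*}\n\twith $D_{1}=\frac{a_{1}+a_{2}+a_{3}}{3}$, $D_{2}=\frac{a_{1}a_{2}+a_{1}a_{3}+a_{2}a_{3}}{3}$, and $D_{3}=a_{1}a_{2}a_{3}$. 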
If $S_{k}$ is the coefficient of $x^{n-k}$ in the expansion, then\n\t\t\begin{align*}\n\t\t\tS_{k}\n\t\t\t\t& = \binom{n}{k}D_{k}\n\t\t\end{align*}\n\n\t\t\begin{theorem}[\itshape Newton's inequality]\label{thm:newton}\n\t\t\tFor $1\leq i\leq n-1$, we have\n\t\t\t\t\begin{align*}\n\t\t\t\t\tD_{i}^{2}\n\t\t\t\t\t\t& \geq D_{i-1}D_{i+1}\n\t\t\t\t\end{align*}\n\t\t\end{theorem}\n\n\t\t\begin{theorem}[\itshape Maclaurin's inequality]\label{thm:maclaurin}\n\t\t\tWith the same notation as \nameref{thm:newton},\n\t\t\t\t\begin{align*}\n\t\t\t\t\tD_{1}\n\t\t\t\t\t\t& \geq \sqrt{D_{2}}\geq\ldots\geq\sqrt[n]{D_{n}}\n\t\t\t\t\end{align*}\n\t\t\end{theorem}\n\end{document}", "meta": {"hexsha": "b08db1edab7048a9508fa95e1c8a4b6d335697e8", "size": 5772, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "majorization.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "majorization.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "majorization.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.7586206897, "max_line_length": 838, "alphanum_fraction": 0.6181566182, "num_tokens": 2399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210895, "lm_q2_score": 0.8376199592797929, "lm_q1q2_score": 0.5829485733105864}}
{"text": "% !TEX TS-program = pdflatexmk\n\\documentclass{article} % For LaTeX2e\n\\usepackage{nips15submit_e,times}\n\\usepackage{hyperref}\n\\usepackage{url}\n%\\documentstyle[nips14submit_09,times,art10]{article} % For LaTeX 2.09\n\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\\usepackage{thmtools}\n\n\\usepackage{algpseudocode,algorithm,algorithmicx}  \n\n%\\newtheorem{theorem}{Theorem}\n%\\newtheorem{corollary}{Corollary}[theorem]\n%\\newtheorem{lemma}[theorem]{Lemma}\n%\\newtheorem{example}{Example}[theorem]\n\n%\\theoremstyle{definition}\n\\declaretheorem[style=definition,qed=$\\blacksquare$]{definition}\n\\declaretheorem[style=definition,qed=$\\blacktriangle$,sibling=definition]{example}\n\n\\declaretheorem[style=plain,qed=$\\blacktriangle$,sibling=definition]{theorem}\n\\declaretheorem[style=plain,qed=$\\lhd$,sibling=definition]{lemma}\n\n%\\newtheorem{definition}{Definition}\n\\newtheorem{assumption}{A}\n\\newtheorem{question}{Q}\n\n\\newcommand\\myeq{\\stackrel{\\mathclap{\\tiny\\mbox{d}}}{=}}\n\n\\newcommand\\mc{\\mathcal} %calligraphic\n\\newcommand\\ts{\\mathcal} %tensor\n\\newcommand\\mt{} %matrix\n\\newcommand\\vt{\\mathbf} %vector\n\\newcommand\\fn{} %function\n\\newcommand\\triple[3]{(#1 \\stackrel{#2}\\rightarrow #3)}\n%\\newcommand\\triple[3]{(#1, #2, #3)}\n\n\\title{Collaborative Matrix Completion}\n\n\\author{\nDongwoo Kim\n}\n\n% The \\author macro works with any number of authors. There are two commands\n% used to separate the names and addresses of multiple authors: \\And and \\AND.\n%\n% Using \\And between authors leaves it to \\LaTeX{} to determine where to break\n% the lines. Using \\AND forces a linebreak at that point. So, if \\LaTeX{}\n% puts 3 of 4 authors names on the first line, and the last on the second\n% line, try using \\AND instead of \\And before the third author name.\n\n\\newcommand{\\fix}{\\marginpar{FIX}}\n\\newcommand{\\new}{\\marginpar{NEW}}\n\n\\nipsfinalcopy % Uncomment for camera-ready version\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nLow-rank or approximately low-rank matrix approximation is a popular method to complete a matrix given partial observation of its entry. As the seminar work by \\cite{candes2009exact} pointed out, one can recover the original matrix with high probability given a certain number of samples. In practice, when the number of observations is not satisfying this criteria, one widely used heuristic to overcome the shortage of samples is to jointly completing multiple matrices that shares some common dimensions. For example, to get a reliable estimator of user-movie rating matrix, one may use movie-actor matrix as additional information to get more information about movies. In this work, we would like to uncover when and how this joint completion would work and why.\n\\end{abstract}\n\n\\section{Preliminary}\nAssume that unknown rank-$r$ matrix $M$ is $m_1 \\times m_2$. The goal of single matrix completion is to recover original matrix $M$ from partially observed entries. The singular value decomposition (SVD) is\n\\begin{equation}\nM = \\sum_{i=1}^{r}\\sigma_i u_i v_i^\\top,\n\\end{equation}\nwhere $\\sigma_i,...,\\sigma_r \\geq 0$ are the singular values, and $u_1,...,u_r$ and $v_1,...,v_r$ are two sets of orthonormal singular vectors. The degree of freedom of this matrix is $(m_1+m_2)r - r^2$, which means we cannot recover the original matrix if less than $(m_1+m_2)r - r^2$ entries are available.  
\n\nLet $\\Omega_M$ be a set of indices of observed entries, i.e. $(i,j) \\in \\Omega_M$ if $M_{ij}$ is observed, and $\\mc{P}_{\\Omega}: \\mathbb{R}^{m_1\\times m_2} \\rightarrow \\mathbb{R}^{m_1\\times m_2}$ be the orthogonal projection onto index set $\\Omega$ which vanishes outside of $\\Omega$; that is $\\bar{M} = \\mc{P}_{\\Omega}(M)$ is defined as\n\\begin{equation}\n\\bar{M}_{ij} = \\left\\{\n  \\begin{array}{lr}\n    M_{ij}, & (i,j) \\in \\Omega\\\\\n    0, & \\text{otherwise}.\n  \\end{array}\n\\right.\n\\end{equation}\n\nThe spectral norm of matrix $M$ is denoted by\n\\begin{equation}\n||M|| := \\sup_{u\\in \\mathbb{R}^n:||u||=1}||Mu|| = \\sup_{j\\in[n]}\\sigma_j(M),\n\\end{equation}\nwhich corresponds to the largest singular value of matrix $M$. The nuclear norm of matrix $M$ is denoted by\n\\begin{equation}\n||M||_* := \\sum_{j\\in[n]}\\sigma_j(M),\n\\end{equation}\nwhich corresponds to the sum of singular values of matrix $M$. Additionally, we let $||M||_\\infty := \\max_{i,j}|M_{ij}|$.\n\nThe Euclidean inner product between two matrices is defined by\n\\begin{align}\n\\langle M, N \\rangle := \\text{tr}(M^\\top N),\n\\end{align}\nand corresponding Frobenius norm is denoted as\n\\begin{align}\n||M||_F\n\\end{align}\n\nBy the duality between nuclear and spectral norm, we can compute an upper bound on the Euclidean inner product between matricies\n\\begin{align}\n\\langle M, N \\rangle & = ||M|| \\langle \\frac{M}{||M||}, N \\rangle \\\\\n& = ||M|| \\text{tr}(\\frac{M}{||M||}^\\top N)\\\\\n& \\leq ||M|| \\sup_{||X|| \\leq 1} \\text{tr}(X^\\top A)\\\\\n& = ||M||\\,||A||_*.\n\\end{align}\n\n\\subsection{Sampling scheme}\nLet $\\pi_{ij}$ be the probability to observe the $(i,j)$ entiry of $M$. Let $C_j = \\sum_{i=1}^{m_1} \\pi_{ij}$ be the probability of observing an entry from column $j$ and $R_i = \\sum_{j=1}^{m_2} \\pi_{ij}$ be the probability of observing an entry from row $i$.\nIf there is a noise in observation, then noisy observation of entry\n\\begin{align}\n\\bar{M}_{ij} = M_{ij} + \\sigma\\xi_{ij}, \\quad (i,j) \\in \\Omega\n\\end{align}\nwhere noise variable $\\xi_{ij}$ satisfies $\\mathbb{E}[\\xi_{ij}] = 0$ and $\\mathbb{E}[\\xi_{ij}^2] = 1$, and $\\sigma$ is the variance of the noise.\n\nIn terms of the trace regression model, the observation also can be defined as\n\\begin{align}\n\\bar{M}_{i} = \\text{tr}(E_i^\\top M) + \\sigma\\xi_i, \\quad i=1,...,n\n\\end{align}\nwhere $E_i$ are random matrices of dimension $m_1 \\times m_2$, and tr($X$) denotes the trace of matrix $X$. Each $E_i$ is i.i.d copy from a random matrix $E$ having distribution $\\Pi$ on the set $\\mathcal{E} = \\{e_j(m_1)e_k^\\top(m_2), 1\\leq j \\leq m_1, 1\\leq k \\leq m_2 \\}$, where $e_j(m)$ is $j$th canonical basis vector in $\\mathbb{R}^m$, i.e., zero vector except $i$th entry equal to 1. Following two stochastic terms play an important role in the case of noisy observation analysis. For an Rademacher sequence $\\{\\epsilon_i\\}_{i=1}^{n}$, we define\n\\begin{align}\n\\Sigma_R = \\frac{1}{n}\\sum_{i=1}^{n}\\epsilon_i E_i \\quad \\text{and} \\quad\\Sigma = \\frac{\\sigma}{n}\\sum_{i=1}^{n}\\xi_i E_i.\n\\end{align}\nLet $||M||_{L_2(\\Pi)}^2 = \\mathbb{E}[\\langle E,M \\rangle)]^2$, where the expectation is taken over ${E}$. 
If $\pi_{ij} \geq (pm_1m_2)^{-1}$ for all entries, then this implies\n\begin{align}\n\label{eqn:fro_ineq}\n||M||_{L_2(\Pi)}^2 \geq \frac{||M||_F^2}{pm_1m_2}.\n\end{align}\n\n\subsection{Orthogonal projection}\n\nSet $U := \text{span}(u_1,...,u_r)$ and $V := \text{span}(v_1,...,v_r)$. It is convenient to introduce the orthogonal decomposition $\mathbb{R}^{m_1 \times m_2} = T \oplus T^\perp$ where $T$ is the linear space spanned by elements of the form $u_ky^\top$ and $xv_k^\top$, $1\leq k \leq r$, where $x$ and $y$ are arbitrary vectors, and $T^\perp$ is its orthogonal complement. The orthogonal projection $\mc{P}_T:\mathbb{R}^{m_1 \times m_2} \rightarrow T$ onto $T$ is given by\n\begin{equation}\n\mc{P}_T(Z) = P_U(Z) + ZP_V - P_UZP_V,\n\end{equation}\nwhere $P_U$ and $P_V$ are the orthogonal projections onto $U$ and $V$ respectively. The orthogonal projection onto $T^\perp$ is given by\n\begin{equation}\n\mc{P}_{T^\perp}(Z) = (I_{m_1} - P_U)Z(I_{m_2} - P_V),\n\end{equation}\nwhere $I_m$ denotes the $m \times m$ identity matrix.\n\n\section{Single Matrix Completion}\nNoiseless matrix completion was first studied by \cite{candes2009exact}, where the number of samples needed to recover a matrix of rank $r$ exactly is provided under some incoherence assumptions on the singular vectors of the matrix.\n\begin{definition}[Coherence \cite{candes2009exact}]\nLet $U$ be a subspace of $\mathbb{R}^n$ of dimension $r$, i.e., $U := \text{span}(u_1, ..., u_r)$, and $P_U$ be the orthogonal projection onto $U$. Then the coherence of $U$ is defined to be\n\begin{equation}\n\mu(U) := \frac{m_1}{r}\max_{1\leq i \leq m_1}||P_U e_i||^2,\n\end{equation}\nwhere $e_i$ is the $i$th canonical basis vector.\n\end{definition}\n\n\begin{assumption}\nThe coherences obey $\max(\mu(U),\mu(V)) \leq \mu_0$ for some positive $\mu_0$.\n\end{assumption}\n\begin{assumption}\nThe matrix $E=\sum_{1\leq k \leq r} u_k v_k^\top$ has a maximum entry bounded by $\mu_1 \sqrt{r/(m_1m_2)}$ in absolute value for some positive $\mu_1$.\n\end{assumption}\n\n\begin{theorem}[\cite{candes2009exact}]\nUnder Assumptions 1 and 2, the nuclear norm minimisation\n\begin{align}\n\text{minimise}&\quad ||X||_* \\\n\text{subject to}&\quad \mc{P}_{\Omega}(X) = \mc{P}_{\Omega}(M) \n\end{align}\nperfectly recovers the original matrix with high probability if the number of uniformly sampled entries $n$ obeys $n\geq \mathcal{O}(\max(\mu_1^2,\mu_0^{1/2}\mu_1,\mu_0m_2^{1/4}) m_2 r \beta \log m_2)$ for some $\beta > 2$.\n\end{theorem}\nLater, a tighter analysis of the same convex relaxation was developed in \cite{candes2010power,recht2011simpler}. More practical settings, where the observed entries are corrupted by noise, have been studied extensively \cite{candes2010matrix,keshavan2010matrix,negahban2012restricted,klopp2014noisy,lafond2015low}. 
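\n\nFor readers who want to experiment, the noiseless nuclear norm minimisation above can be prototyped in a few lines with an off-the-shelf convex solver. The following sketch uses Python with the \texttt{cvxpy} package (an assumed dependency, not part of this note) on a toy rank-2 matrix:\n\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\nrng = np.random.default_rng(1)\nm1, m2, r = 30, 30, 2\nM = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))\n\n# Observe each entry independently with probability 0.5.\nmask = (rng.random((m1, m2)) < 0.5).astype(float)\n\n# Minimise ||X||_* subject to matching M on the observed entries.\nX = cp.Variable((m1, m2))\nproblem = cp.Problem(cp.Minimize(cp.normNuc(X)),\n                     [cp.multiply(mask, X) == mask * M])\nproblem.solve()\n\n# Relative error; close to zero when exact recovery succeeds.\nprint(np.linalg.norm(X.value - M) / np.linalg.norm(M))\n\end{verbatim}\n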
\n%These studies show that when the distribution on noise is additive and sub-exponential, then the prediction error with the nuclear norm minimiser $\hat{X}$ satisfies with high probability\n%\begin{equation}\n%\frac{||\hat{X} - X||^2_{F}}{m_1 m_2} = \mathcal{O}\bigg(\n%\frac{(m_1+m_2) \text{rank}(X) \log (m_1 + m_2)}{m}\n%\bigg),\n%\end{equation}\n%where $||\cdot||_F$ denotes the Frobenius norm, and $X\in\mathbb{R}^{m_1 \times m_2}$.\nTo prevent overfitting to the noisy matrix observations, a penalised nuclear norm estimator of $M$ has been introduced:\n\begin{align}\label{eqn:nn_est}\n\hat{X} = {\arg\min}_{||{A}||_\infty \leq \gamma} \bigg\{\frac{1}{n} \sum_{(i,j) \in \Omega}(\bar{M}_{ij} - {A}_{ij})^2 + \lambda ||{A}||_* \bigg\},\n\end{align}\nwhere $\bar{M}_{ij}$ is a noisy observation of $M_{ij}$, $\lambda > 0$ is a regularisation parameter, and $\gamma$ is an upper bound on the absolute value of each entry. The objective consists of the sum of a data-fitting term and a nuclear norm penalisation term.\n\n\begin{theorem}[Noisy matrix completion \cite{klopp2014noisy}] \nLet noisy observations of $M$ be sampled i.i.d. from distribution $\Pi$ with known noise standard deviation $\sigma$. Suppose $\lambda \geq 3||\Sigma||$ and the following two assumptions are satisfied:\n\n\begin{assumption}\n\label{assume:row_column}\nThere exists a positive constant $L \geq 1 $ such that\n$\max_{i,j}(C_i, R_j) \leq L / \min(m_1, m_2)$.\n\end{assumption}\n\n\begin{assumption}\n\label{assume:min_p}\nThere exists a positive constant $p \geq 1 $ such that\n$\pi_{jk} \geq (p m_1 m_2)^{-1}$.\n\end{assumption}\n\nThen there exist numerical constants $c_1$ and $c_2$ such that\n\begin{equation}\n\frac{||\hat{X} - M||_F^2}{m_1m_2} \leq \max\bigg\{ c_1 p^2 m_1 m_2 r (\lambda^2 + \gamma^2(\mathbb{E}||\Sigma_R||)^2), c_2\gamma^2p\sqrt{\frac{\log(m_1 + m_2)}{n}} \bigg\},\n\end{equation}\nwith probability at least $1 - 2/(m_1 + m_2)$.\n\end{theorem}\n\nAn upper bound on the stochastic term $\mathbb{E}||\Sigma_R||$ can also be obtained under certain assumptions (e.g., sub-exponential noise); see \cite{klopp2014noisy} for more details.\n\n%\textbf{Sketch of Proof}. From the definition of estimator in Equation $\ref{eqn:nn_est}$, we know that\n%\begin{equation}\n%\frac{1}{n} \sum_{(i,j) \in \Omega}(\bar{M}_{ij} - \hat{X}_{ij})^2 + \lambda||\hat{X}||_*\n%\leq \frac{1}{n} \sum_{(i,j) \in \Omega}(\bar{M}_{ij} - {M}_{ij})^2 + \lambda||M||_*,\n%\end{equation}\n%which implies\n%\begin{equation}\n%\frac{1}{n}||\mathcal{P}_\Omega(M - \hat{X}) ||_F^2 \leq 2||\Sigma|| ||M - \hat{X}||_* + \lambda(||M||_* - ||\hat{X}||_*).\n%\end{equation}\n\n\section{Joint Matrix Completion}\nLet $A_0 \in \mathbb{R}^{m_1 \times m_2}$ and $B_0 \in \mathbb{R}^{m_1 \times m_3}$ be two matrices whose first dimension represents the same objects, e.g., a user-rating matrix $A_0$ and a user-attribute matrix $B_0$. When the number of observed entries $n_A$ of matrix $A_0$ is insufficient to obtain a stable estimate of the unobserved entries, one widely used heuristic method is a joint factorisation of $A_0$ and $B_0$, with the hope that both matrices share some common low-rank structure in their first dimension. 
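\n\nConcretely, the shared-structure hypothesis can be written down in a few lines. The following toy construction (in Python, with arbitrary dimensions, for illustration only) builds $A_0$ and $B_0$ from a common left factor, matching Assumption \ref{assume:share} below:\n\begin{verbatim}\nimport numpy as np\n\n# A0 and B0 share the same left (row-space) factor U, so the\n# concatenation Z0 = [A0, B0] is still only rank r.\nrng = np.random.default_rng(2)\nm1, m2, m3, r = 50, 40, 30, 3\nU = rng.standard_normal((m1, r))    # shared row factors\nVA = rng.standard_normal((r, m2))   # e.g. rating-side factors\nVB = rng.standard_normal((r, m3))   # e.g. attribute-side factors\n\nA0, B0 = U @ VA, U @ VB\nZ0 = np.hstack([A0, B0])\nprint(np.linalg.matrix_rank(Z0))    # 3: extra columns add no rank\n\end{verbatim}\n\n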
Collaborative matrix factorisation has been widely used in practice, but there is no theoretical guarantee so far.\n\nOne of our main questions is how many samples are required to perfectly recover the original matrices $A_0$ and $B_0$, and under what assumptions. More precisely, an interesting question is the number of observations $n_A$ of matrix $A_0$ needed to recover the matrix $A_0$, given the number of observations $n_B$ of matrix $B_0$, or vice versa. Often it is easier to obtain entries of $B_0$ than entries of $A_0$; therefore, understanding the nature of joint completion will provide some guidelines for the joint completion approach. We start from an assumption to make the analysis feasible.\n\n\begin{assumption}\label{assume:share}\nLet the SVDs be $A_0 = \sum_{i=1}^{r_A} \sigma_i^{(A_0)} u_i^{(A_0)} v_i^{(A_0)\top}$ and $B_0 = \sum_{i=1}^{r_B} \sigma_i^{(B_0)} u_i^{(B_0)} v_i^{(B_0)\top}$. We assume that the ranks of the two matrices are the same, i.e., $r_A = r_B = r$, and that the left singular vectors $u_i^{(A_0)}$ and $u_i^{(B_0)}$ are the same, so that the two matrices share a common low-rank structure\footnote{Remark: most existing knowledge graph factorisation models, such as RESCAL, follow this assumption if we consider each relation as a matrix (a knowledge graph is a collection of matrices).}.\n\end{assumption}\n\nLet $Z_0 = [A_0, B_0] \in \mathbb{R}^{m_1 \times (m_2+m_3)}$ be the horizontally combined matrix of $A_0$ and $B_0$. Then $Z_0$ decomposes as $\sum_{i=1}^{r}\sigma_i u_i v_i^\top$, where $v_i^\top = [v_i^{(A_0)\top}, v_i^{(B_0)\top}]$ stacks the right singular vectors of $A_0$ and $B_0$. If the singular vectors satisfy the incoherence assumptions, the joint matrix completion problem reduces to a single matrix completion problem with different sampling rates between the first $m_2$ columns and the rightmost $m_3$ columns of the combined matrix. Some previous studies focus on non-uniform sampling distributions of the matrix \cite{foygel2011learning,lounici2011optimal,negahban2012restricted,klopp2014noisy}. For example, popular movies are more likely to be rated by many users in collaborative filtering. This leads to different sampling distributions across columns (rows). \n\nOur problem is a special case where we draw $n_A$ samples from the leftmost $m_2$ columns and $n_B$ from the rightmost $m_3$ columns. More formally, let $\Pi_A$ be a distribution on the set $\mc{E}^{A} = \{e_j(m_1)e_k^\top(m_2+m_3), 1\leq j \leq m_1, 1\leq k \leq m_2\}$, and $\Pi_{B}$ a distribution on the set $\mc{E}^{B} = \{e_j(m_1)e_k^\top(m_2+m_3), 1\leq j \leq m_1, 1+m_2\leq k \leq m_2+m_3\}$. Define $\Pi_\rho = \rho \Pi_{A} + (1-\rho)\Pi_{B}$ where $0 < \rho < 1/2$. If $\Pi_A$ and $\Pi_B$ satisfy Assumption \ref{assume:min_p} with parameters $p_A$ and $p_B$, Equation \ref{eqn:fro_ineq} implies\n\begin{align}\n||Z_0||_{L_2(\Pi_\rho)}^2 \geq \min(\frac{\rho}{p_Am_1m_2},\frac{1-\rho}{p_Bm_1m_3})||Z_0||_F^2 \geq \frac{\rho}{pm_1(m_2+m_3)} ||Z_0||_F^2,\n\end{align}\nwhere $p = \max(p_A, p_B)$.\n\n%And define $\Pi_\rho = \rho \Pi^{(M)} + (1-\rho) \Pi^{(N)}$ where $0 \leq \rho \leq 1$. 
%If we set $\rho = \frac{n_M}{n_M+n_N}$, and \n%There are $p^{(M)}$ and $p^{(N)}$ which satisfies Assumption \ref{assume:min_p} for each distribution, respectively, then the inequality in Equation \ref{eqn:fro_ineq} also holds as well\n%\begin{align}\n%||Z||_{\Pi_\rho}^2 \geq \frac{||Z||_F^2}{p'm_1(m_2+m_3)},\n%\end{align}\n%where $p' = \min(\rho p^{(M)}m_1m_2, (1-\rho) p^{(N)}m_1m_3)$.\n\n\begin{question}[$n_A$ given $n_B$]\nGiven $n_B$ uniformly sampled entries of matrix $B_0$, how many samples do we need from $A_0$ to perfectly recover matrix $A_0$? More practically, what is the relation between the reconstruction error on $A_0$ and the sample sizes $n_A$ and $n_B$?\n\end{question}\nLet the penalised estimator of the joint matrix be\n\begin{align}\n\label{eqn:z_estimator}\n\hat{Z} = {\arg\min}_{||{A}||_\infty \leq \gamma} \bigg\{\frac{1}{n_A + n_B} \sum_{(i,j) \in \Omega}(\bar{Z}_{ij} - {A}_{ij})^2 + \lambda ||{A}||_* \bigg\},\n\end{align}\nwhere $\bar{Z}_{ij}$ denotes the noisy observation of the $(i,j)$ entry of $Z_0$. Let $\hat{A}$ (\textit{resp.} $\hat{B}$) be the part of $\hat{Z}$ corresponding to $A_0$ (\textit{resp.} $B_0$); then the first question corresponds to identifying an upper bound on\n$||\hat{A} - A_0||_F^2$\nwith respect to the other quantities, such as $n_A$ and $n_B$.\n\nThe Frobenius norm error of the combined matrix can be decomposed into two parts:\n\begin{equation}\n\label{eqn:decomp}\n\frac{||\hat{Z} - Z_0||_F^2}{m_1(m_2+m_3)} = \frac{||\hat{A} - A_0||_F^2}{m_1(m_2+m_3)} + \frac{||\hat{B} - B_0||_F^2}{m_1(m_2+m_3)}.\n\end{equation}\n\n%From the previous work, it is possible to obtain an upper bound on Frobenius error of combined matrix. However, this does not give any information gain obtained from the shared low rank structure. Therefore, a lower bound on the second term in RHS of equation \ref{eqn:decomp} is needed to induce an upper bound on equation \ref{eqn:upper_m}.\n\nFrom the definition of the estimator in Equation \ref{eqn:z_estimator}, the following inequality holds (the objective function at the optimiser $\hat{Z}$ is always less than or equal to the objective function at the unknown true matrix $Z_0$):\n\begin{align}\n\frac{1}{n} \sum_{(i,j) \in \Omega}(\bar{Z}_{ij} - \hat{Z}_{ij})^2 + \lambda ||\hat{Z}||_* \n\leq \frac{1}{n} \sum_{(i,j) \in \Omega}(\bar{Z}_{ij} - {Z}_{0ij})^2 + \lambda ||Z_0||_*,\n\end{align}\nwhere $n = n_A + n_B$. This implies\n\begin{equation}\n\frac{1}{n}||\mathcal{P}_\Omega(Z_0 - \hat{Z}) ||_F^2 \leq 2||\Sigma||\,||Z_0 - \hat{Z}||_* + \lambda(||Z_0||_* - ||\hat{Z}||_*). \\\n\end{equation}\nApplying Lemma \ref{lmm:normdiff} and using the triangle inequality $||Z_0 - \hat{Z}||_* \leq ||\mc{P}_{Z_0}(Z_0 - \hat{Z})||_* + ||\mc{P}_{Z_0}^\perp(Z_0 - \hat{Z})||_*$, we obtain\n\begin{equation}\n\frac{1}{n}||\mathcal{P}_\Omega(Z_0 - \hat{Z}) ||_F^2 = \frac{1}{n}||\mc{P}_{\Omega_A}(A_0-\hat{A})||_F^2 + \frac{1}{n}||\mc{P}_{\Omega_B}(B_0-\hat{B})||_F^2\n\leq \frac{5}{3}\lambda ||\mc{P}_{Z_0}(Z_0-\hat{Z})||_*.\n\label{eqn:tri}\n\end{equation}\n\nLet $\zeta = 3168p\gamma^2\text{rank}(Z_0)m_1(m_2+m_3)\rho^{-1}(\mathbb{E}||\Sigma_R||)^2$. 
Since $||\hat{Z} - Z_0||_* \leq \sqrt{72\text{rank}(Z_0)}||\hat{Z} - Z_0||_F$, if $||\hat{Z}-Z_0||_\infty^{-1}(\hat{Z}-Z_0) \in \mc{C}(72\text{rank}(Z_0))$, then Lemmas \ref{lmm:constraintset} and \ref{lmm:prj_rankbound} lead to an upper bound on the squared error of the entire matrix:\n\begin{align}\n\frac{1}{2}||\hat{Z}-Z_0||_{L_2(\Pi_\rho)}^2 &\leq \frac{5}{3}\lambda\sqrt{2\text{rank}(Z_0)}||\hat{Z} - Z_0||_F + \zeta \\\n\frac{\rho}{2pm_1(m_2+m_3)} ||\hat{Z} - Z_0||_F^2 &\leq \frac{5}{3}\lambda\sqrt{2\text{rank}(Z_0)}||\hat{Z} - Z_0||_F + \zeta \label{eqn:bound1}\n\end{align}\nwith probability at least $1-2/(m_1+m_2+m_3)$. By using the inequality $ab \leq a^2 + b^2$, the first term on the RHS is bounded:\n\begin{align}\n\frac{5}{3}\lambda\sqrt{2\text{rank}(Z_0)}||\hat{Z} - Z_0||_F & \leq \bigg(\frac{5}{3}\lambda\sqrt{\frac{8\text{rank}(Z_0)pm_1(m_2+m_3)}{\rho}}\bigg)^2 \n+ \bigg(\sqrt{\frac{\rho}{4pm_1(m_2+m_3)}}||\hat{Z} - Z_0||_F\bigg)^2 \\\n& = \frac{200\lambda^2 \text{rank}(Z_0)pm_1(m_2+m_3)}{9\rho} + \frac{\rho}{4pm_1(m_2+m_3)}||\hat{Z}-Z_0||_F^2.\n\end{align}\nApplying this inequality to Equation \ref{eqn:bound1} yields\n\begin{align}\n\frac{\rho}{4pm_1(m_2+m_3)} ||\hat{Z} - Z_0||_F^2 \leq  36\lambda^2 \text{rank}(Z_0)pm_1(m_2+m_3)\rho^{-1} + \zeta\n\end{align}\nTherefore, there exists a numerical constant $c_1$ such that\n\begin{align}\n||\hat{Z} - Z_0||_F^2 \leq  c_1 \text{rank}(Z_0)\bigg(\frac{pm_1(m_2+m_3)}{\rho}\bigg)^2(\lambda^2 + \gamma^2(\mathbb{E}||\Sigma_R||)^2).\n\end{align}\nIt is then straightforward to read off the corresponding bound for matrix $A_0$:\n\begin{align}\n\frac{||\hat{A}-A_0||_F^2}{m_1m_2} \leq  c_1 \text{rank}(Z_0)\frac{m_1}{m_2}\bigg(\frac{p(m_2+m_3)}{\rho}\bigg)^2(\lambda^2 + \gamma^2(\mathbb{E}||\Sigma_R||)^2).\n\end{align}\n\n%Let $\alpha = ||M - \hat{M}||_\infty (\leq 2\gamma)$. 
If $\\alpha^{-1}(M-\\hat{M}) \\in \\mc{C}(t)$, then by Lemma \\ref{lmm:constraintset}\n%\\begin{equation}\n%\\frac{n_M}{2n\\alpha^2}||M-\\hat{M}||_{\\Pi^{(M)}}^2  - 44 \\frac{ptm_1m_2n_M}{n}(\\mathbb{E}||\\Sigma_R^{(M)}||)^2 \n%\\leq \\frac{n_M}{n}\\frac{1}{\\alpha^2n_M}||\\mc{P}_{\\Omega_M}(M-\\hat{M})||_F^2 \\label{eqn:apl_lmm}\n%\\end{equation}\n%Same argument can be made on the lower bound of $||\\mc{P}_{\\Omega_N}(N-\\hat{N})||_F^2$ if $\\beta^{-1}(N-\\hat{N}) \\in \\mc{C}(t)$\n%\\begin{equation}\n%\\label{eqn:apl_lmm2}\n%\\frac{n_N}{2n\\beta^2}||N-\\hat{N}||_{\\Pi^{(N)}}^2  - 44 \\frac{ptm_1m_3n_N}{n}(\\mathbb{E}||\\Sigma_R^{(N)}||)^2 \n%\\leq \\frac{n_N}{n}\\frac{1}{\\beta^2n_N}||\\mc{P}_{\\Omega_N}(N-\\hat{N})||_F^2 \n%\\end{equation}\n%Combining \\ref{eqn:tri}, \\ref{eqn:apl_lmm} and \\ref{eqn:apl_lmm2} yields\n%\\begin{align}\n%&\\frac{n_M}{2n}||M-\\hat{M}||_{\\Pi^{(M)}}^2 + \\frac{n_N}{2n}||N-\\hat{N}||_{\\Pi^{(N)}}^2&  \\notag\\\\\n%&\\quad \\leq \\frac{5}{3}\\lambda ||\\mc{P}_Z(Z-\\hat{Z})||_* + 176 \\frac{ptm_1\\gamma^2}{n}\\bigg(m_2n_M(\\mathbb{E}||\\Sigma_R^{(M)}||)^2 + m_3n_N(\\mathbb{E}||\\Sigma_R^{(N)}||)^2\\bigg).&\n%\\end{align}\n%Equation \\ref{eqn:fro_ineq} implies\n%\\begin{align}\n%&\\frac{n_M}{2nm_1m_2p}||M-\\hat{M}||_F^2 + \\frac{n_N}{2nm_1m_3p}||N-\\hat{N}||_F^2& \\notag\\\\\n%&\\quad \\leq \\frac{5}{3}\\lambda ||\\mc{P}_Z(Z-\\hat{Z})||_* + 176 \\frac{ptm_1\\gamma^2}{n}\\bigg(m_2n_M(\\mathbb{E}||\\Sigma_R^{(M)}||)^2 + m_3n_N(\\mathbb{E}||\\Sigma_R^{(N)}||)^2\\bigg).&\n%\\end{align}\n%The upper bound on $||\\mc{P}_Z(Z-\\hat{Z})||_*$ can be decomposed into Frobenius error on two sub matrices:\n%\\begin{align}\n%||\\mc{P}_Z(Z-\\hat{Z})||_* & \\leq \\sqrt{2r}||Z-\\hat{Z}||_F \\\\\n%& =  \\sqrt{2r} \\sqrt{||M-\\hat{M}||_F^2 + ||N-\\hat{N}||_F^2}\\\\\n%& \\leq 2r + \\frac{1}{4}\\bigg(||M-\\hat{M}||_F^2 + ||N-\\hat{N}||_F^2\\bigg),\n%\\end{align}\n%where the first inequality comes from $||A||_* \\leq \\sqrt{\\text{rank}(A)}||A||_F$ and $\\text{rank}(\\mc{P}_Z(A)) \\leq \\text{rank}(P_{U^\\perp}AP_V) + \\text{rank}(P_{U}A)$, and we use the inequality $ab \\leq a^2 + b^2/4$ in the third line.\n\n\n\\begin{lemma} \\label{lmm:constraintset} Given constraint set\n\\begin{equation}\n\\mc{C}(t) = \\bigg\\{ A \\in \\mathbb{R}^{m_1 \\times m_2}: ||A||_\\infty = 1, ||A||_{L_2(\\Pi)}^2 \\geq \\sqrt{\\frac{64\\log(m_1+m_2)}{\\log(6/5)n}}, ||A||_* \\leq \\sqrt{t}||A||_F \\bigg\\},\n\\end{equation}\n%where $A$ is a submatrix of $B$. Note that the condition $||A||_* \\leq \\sqrt{t}||B||_F$ always satisfied if $t$ = rank($B$) since $||A||_* \\leq ||B||_*$. \nLet $E_i$ be i.i.d. from distribution $\\Pi$ on $\\mc{E}$ which satisfies Assumptions \\ref{assume:row_column} and \\ref{assume:min_p}, and $\\Sigma_R = n^{-1}\\sum_{i=1}^{n}\\epsilon_i E_i$, where $\\{\\epsilon_i\\}_{i=1}^{n}$ is Rademacher sequence. Then, for all $A\\in \\mc{C}(t)$\n\\begin{align}\n\\frac{1}{2}||A||_{L_2(\\Pi)}^2  - 44 ptm_1m_2(\\mathbb{E}||\\Sigma_R||)^2 \\leq \\frac{1}{n}||\\mc{P}_{\\Omega}(A)||_F^2\n\\end{align}\nwith probability at least $1-2(m_1+m_2)^{-1}$.\n\\end{lemma}\n\n\\begin{lemma}\\label{lmm:normdiff}For any pair of matrices $A, B \\in \\mathbb{R}^{m_1 \\times m_2}$, we have\n\\begin{equation}\n||A||_* - ||B||_* \\leq ||\\mc{P}_A(A-B)||_* - ||\\mc{P}_A^\\perp(A-B)||_*\n\\end{equation}\n\\end{lemma}\n\\textbf{Proof} The definition on $\\mc{P}_A^\\perp(B)$ for any matrix $B$ implies $||A + \\mc{P}_A^\\perp(B-A)||_* = ||A||_* + ||\\mc{P}_A^\\perp(B-A)||_*$. 
Then the nuclear norm on $B$\n\\begin{align}\n||B||_* = ||B + A - A||_* &= ||A + \\mc{P}^\\perp_A(B-A) + \\mc{P}_A(B-A)||_*\\\\\n&\\geq ||A + \\mc{P}^\\perp_A(B-A)||_* + ||\\mc{P}_A(B-A)||_*\\\\\n&= ||A||_* + ||\\mc{P}^\\perp_A(B-A)||_* + ||\\mc{P}_A(B-A)||_*,\n\\end{align}\nwhere we use the triangular inequality on the projected space.\n\n\\begin{lemma}\\label{lmm:bregman} If $\\lambda > 3||\\Sigma||$, then\n$||\\mc{P}_{Z_0}^\\perp(\\hat{Z}-Z_0)||_* \\leq 5 ||\\mc{P}_{Z_0}(\\hat{Z}-Z_0)||_*$\n\\end{lemma}\n\\textbf{Proof} Let $Q^2(Z) = \\frac{1}{n}\\sum_{i=1}^{n}(\\bar{Z}_i - \\langle E_i, Z \\rangle)^2$ which is convex in $Z$. From the Bregman divergence between $\\hat{Z}$ and $Z_0$ with generating function $Q^2$, the following inequality holds\n\\begin{align}\nQ^2(\\hat{Z}) - Q^2(Z_0) &\\geq - \\frac{2}{n}\\sum_{i=1}^n(\\bar{Z}_i - \\langle E_i, Z \\rangle)\\langle E_i, \\hat{Z} - Z_0 \\rangle = -2 \\langle \\Sigma, \\hat{Z} - Z_0 \\rangle \\\\\n&\\geq -2 ||\\Sigma||\\,||\\hat{Z} - Z_0||_* \\\\\n&\\geq - \\frac{2}{3} \\lambda ||\\hat{Z} - Z_0||_*\n\\end{align}\nThe definition of estimator $\\hat{Z}$ implies\n\\begin{align}\n\\lambda||\\hat{Z}||_* - \\lambda||Z_0||_* \\leq Q^2(Z_0) - Q^2(\\hat{Z}) \\leq \\frac{2}{3} \\lambda ||\\hat{Z} - Z_0||_*\n\\end{align}\nUsing Lemma \\ref{lmm:normdiff}, we compute\n\\begin{align}\n\\frac{2}{3}||\\hat{Z} - Z_0||_* \\leq ||\\mc{P}_{Z_0}(\\hat{Z} - Z_0)||_* - ||\\mc{P}_{Z_0}^\\perp(\\hat{Z} - Z_0)||_*,\n\\end{align}\nwhich together with the triangular inequality on $||\\hat{Z} - Z_0||_*$ leads to Lemma \\ref{lmm:bregman}.\n\n\\begin{lemma}\\label{lmm:prj_rankbound} For any pair of matrices $A, B \\in \\mathbb{R}^{m_1 \\times m_2}$, $\\text{rank}(\\mc{P}_{A}(B)) \\leq 2 \\text{rank}(A)$\n\\end{lemma}\n\n\\subsection{Lower Bound on $||B_0-\\hat{B}||_F^2$}\n\nIt is crucial to derive a lower bound on $||N-\\hat{N}||_F^2$. We can use the same argument used in \\cite{lafond2015low} to show the lower bound.\n\n\n\\begin{question}[Approximate singular vectors]\nWhat if the left singular vectors of $M$ and those of $N$ are not equal, but approximate one another? (i.e., $u_i^{(M)} \\approx u_i^{(N)}$)\n\\end{question}\n\n\\begin{question}[Lower bound on $n_M$]\nAssume that we have fully observed matrix $N$. How many samples do we need from $M$ to perfectly recover matrix $M$?\n\\end{question}\n\n\\begin{question}[Active learning on $N$ to recover $M$]\nIn many cases, it is more easy to obtain entries of $N$ instead of $M$. The question is which entry of $N$ would be actively sampled to improve the reconstruction performance of $M$.\n\\end{question}\n\n\n%If the low-rank representations are the same, then we can apply the standard theories on the combined matrix $[X, Y] \\in \\mathbb{R}^{m_1 \\times (m_2 + m_3)}$. However, this is not the case, under which conditions we can apply joint optimisation for reconstructing $X$ and what would be the error bound of the estimate? Following questions might be of interest to solve this problem.\n%\\begin{itemize}\n%\\item What are the corresponding optimiser of the nuclear norm optimiser for the joint factorisation?\n%\\item What structural assumptions should be made to link two matrices? for example, a certain similarity assumption between two singular value vectors or two low-rank representations? \n%\\item Noise / noiseless assumptions. which noise model?\n%\\item Sampling distribution. uniform? 
non-uniform?\n%\\end{itemize}\n\n\\bibliographystyle{apalike}\n\\bibliography{ref}\n\n\\end{document}\n", "meta": {"hexsha": "c15296c0dd44448c1e89917378e449fc77e6ccf4", "size": 26963, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/multiple_matrix_completion/mmc.tex", "max_stars_repo_name": "arongdari/sparse-graph-prior", "max_stars_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-12-08T19:04:31.000Z", "max_stars_repo_stars_event_max_datetime": "2016-12-08T19:04:31.000Z", "max_issues_repo_path": "notes/multiple_matrix_completion/mmc.tex", "max_issues_repo_name": "dongwookim-ml/sparse-graph-prior", "max_issues_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-07-10T05:20:44.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-10T05:20:44.000Z", "max_forks_repo_path": "notes/multiple_matrix_completion/mmc.tex", "max_forks_repo_name": "dongwookim-ml/sparse-graph-prior", "max_forks_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.603406326, "max_line_length": 901, "alphanum_fraction": 0.6812669213, "num_tokens": 10008, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8221891283434876, "lm_q1q2_score": 0.5829478235856205}}
{"text": "\\part{Algorithms \\& Implementations}\n\n\\chapter{Hamiltonian Monte Carlo Sampling}\\label{hmc.chapter}\n\n\\noindent\nThis part of the manual details the algorithm implementations used by\nStan and how to configure them. This chapter presents the Hamiltonian\nMonte Carlo (HMC) algorithm and its adaptive variant the no-U-turn\nsampler (NUTS) along with details of their implementation and\nconfiguration in Stan; the next two chapters present Stan's optimizers\nand diagnostics.\n\n\n\\section{Hamiltonian Monte Carlo}\n\nHamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC)\nmethod that uses the derivatives of the density function being sampled\nto generate efficient transitions spanning the posterior (see, e.g.,\n\\citep{Betancourt-Girolami:2013,Neal:2011} for more details). It uses\nan approximate Hamiltonian dynamics simulation based on numerical\nintegration which is then corrected by performing a\nMetropolis acceptance step.\n\nThis section translates the presentation of HMC by\n\\cite{Betancourt-Girolami:2013} into the notation of\n\\cite{GelmanEtAl:2013}.\n\n\\subsection{Target Density}\n\nThe goal of sampling is to draw from a density $p(\\theta)$ for\nparameters $\\theta$.  This is typically a Bayesian posterior\n$p(\\theta|y)$ given data $y$, and in particular, a Bayesian posterior\ncoded as a Stan program.\n\n\n\n\\subsection{Auxiliary Momentum Variable}\n\nHMC introduces auxiliary momentum variables $\\rho$ and draws from a\njoint density\n%\n\\[\np(\\rho,\\theta) = p(\\rho|\\theta) p(\\theta).\n\\]\n%\nIn most applications of HMC, including Stan, the auxiliary density is\na multivariate normal that does not depend on the parameters $\\theta$,\n\\[\n\\rho \\sim \\distro{MultiNormal}(0, \\Sigma).\n\\]\nThe covariance matrix $\\Sigma$ acts as a Euclidean metric to rotate\nand scale the target distribution; see \\citep{Betancourt-Stein:2011}\nfor details of the geometry.\n\nIn Stan, this matrix may be set to the identity matrix (i.e., unit\ndiagonal) or estimated from warmup draws and optionally restricted\nto a diagonal matrix. 
The inverse $\\Sigma^{-1}$ is known as the mass\nmatrix, and will be a unit, diagonal, or dense if $\\Sigma$ is.\n\n\\subsection{The Hamiltonian}\n\nThe joint density $p(\\rho,\\theta)$ defines a Hamiltonian\n%\n\\begin{eqnarray*}\nH(\\rho,\\theta) & = & - \\log p(\\rho,\\theta)\n\\\\[3pt]\n& = & - \\log p(\\rho|\\theta) - \\log p(\\theta).\n\\\\[3pt]\n& = & T(\\rho|\\theta) + V(\\theta),\n\\end{eqnarray*}\n%\nwhere the term\n\\[\nT(\\rho|\\theta) = - \\log p(\\rho | \\theta)\n\\]\nis called the ``kinetic energy'' and the term\n\\[\nV(\\theta) = - \\log p(\\theta)\n\\]\nis called the ``potential energy.''  The potential energy is specified\nby the Stan program through its definition of a log density.\n\n\\subsection{Generating Transitions}\n\nStarting from the current value of the parameters $\\theta$, a\ntransition to a new state is generated in two stages before being\nsubjected to a Metropolis accept step.\n\nFirst, a value for the momentum is drawn independently of the current\nparameter values,\n%\n\\[\n\\rho \\sim \\distro{MultiNormal}(0,\\Sigma).\n\\]\n%\nThus momentum does not persist across iterations.\n\nNext, the joint system $(\\theta,\\rho)$ made up of the current\nparameter values $\\theta$ and new momentum $\\rho$ is evolved\nvia Hamilton's equations,\n%\n\\[\n\\begin{array}{rcccl}\n\\displaystyle\n\\frac{d\\theta}{dt}\n& = &\n\\displaystyle\n+ \\frac{\\partial H}{\\partial \\rho}\n& = &\n\\displaystyle\n+ \\frac{\\partial T}{\\partial \\rho}\n\\\\[12pt]\n\\displaystyle\n\\frac{d\\rho}{dt}\n& = &\n\\displaystyle\n- \\frac{\\partial H}{\\partial \\theta }\n& = &\n\\displaystyle\n- \\frac{\\partial T}{\\partial \\theta}\n- \\frac{\\partial V}{\\partial \\theta}.\n\\end{array}\n\\]\n%\nWith the momentum density being independent of the target density,\ni.e., $p(\\rho|\\theta) = p(\\rho)$, the first term in the\nmomentum time derivative, ${\\partial T} / {\\partial \\theta}$ is\nzero, yielding the pair time derivatives\n%\n\\begin{eqnarray*}\n\\frac{d \\theta}{d t} & = & +\\frac{\\partial T}{\\partial \\rho}\n\\\\[2pt]\n\\frac{d \\rho}{d t} & = & -\\frac{\\partial V}{\\partial \\theta}.\n\\end{eqnarray*}\n\n\\subsection{Leapfrog Integrator}\n\nThe last section leaves a two-state differential equation to solve.\nStan, like most other HMC implementations, uses the leapfrog\nintegrator, which is a numerical integration algorithm that's\nspecifically adapted to provide stable results for Hamiltonian systems\nof equations.\n\nLike most numerical integrators, the leapfrog algorithm takes discrete\nsteps of some small time interval $\\epsilon$. The leapfrog algorithm\nbegins by drawing a fresh momentum term independently of the parameter\nvalues $\\theta$ or previous momentum value.\n%\n\\[\n\\rho \\sim \\distro{MultiNormal}(0,\\Sigma).\n\\]\nIt then alternates half-step updates of the momentum and full-step\nupdates of the position.\n%\n\\vspace*{-6pt}\n\\begin{eqnarray*}\n\\rho & \\leftarrow\n     & \\rho \\, - \\, \\frac{\\epsilon}{2} \\frac{\\partial V}{\\partial \\theta}\n\\\\[6pt]\n\\theta & \\leftarrow\n       & \\theta \\, + \\, \\epsilon \\, \\Sigma \\, \\rho\n\\\\[6pt]\n\\rho & \\leftarrow\n     & \\rho \\, - \\, \\frac{\\epsilon}{2} \\frac{\\partial V}{\\partial \\theta}.\n\\end{eqnarray*}\n%\nBy applying $L$ leapfrog steps, a total of $L \\, \\epsilon$ time is\nsimulated. 
The resulting state at the end of the simulation ($L$\nrepetitions of the above three steps) will be denoted\n$(\\rho^{*},\\theta^{*})$.\n\nThe leapfrog integrator's error is on the order of $\\epsilon^3$ per\nstep and $\\epsilon^2$ globally, where $\\epsilon$ is the time interval\n(also known as the step size);  \\cite{LeimkuhlerReich:2004} provide a\ndetailed analysis of numerical integration for Hamiltonian systems,\nincluding a derivation of the error bound for the leapfrog\nintegrator.\n\n\n\\subsection{Metropolis Accept Step}\n\nIf the leapfrog integrator were perfect numerically, there would no\nneed to do any more randomization per transition than generating a\nrandom momentum vector. Instead, what is done in practice to account\nfor numerical errors during integration is to apply a Metropolis\nacceptance step, where the probability of keeping the proposal\n$(\\rho^{*},\\theta^{*})$ generated by transitioning from $(\\rho,\\theta)$ is\n%\n\\[\n\\min\\!\\left(1, \\ \\exp\\!\\left( H(\\rho,\\theta) - H(\\rho^{*},\\theta^{*})\\right)\\right).\n\\]\n%\nIf the proposal is not accepted, the previous parameter value is\nreturned for the next draw and used to initialize the next iteration.\n\n\n\\subsection{Algorithm Summary}\n\nThe Hamiltonian Monte Carlo algorithm starts at a specified initial\nset of parameters $\\theta$; in Stan, this value is either\nuser-specified or generated randomly. Then, for a given number of\niterations, a new momentum vector is sampled and the current value of\nthe parameter $\\theta$ is updated using the leapfrog integrator with\ndiscretization time $\\epsilon$ and number of steps $L$ according to\nthe Hamiltonian dynamics. Then a Metropolis acceptance step is\napplied, and a decision is made whether to update to the new state\n$(\\theta^{*},\\rho^{*})$ or keep the existing state.\n\n\n\\section{HMC Algorithm Parameters}\n\nThe Hamiltonian Monte Carlo algorithm has three parameters which must\nbe set,\n%\n\\begin{itemize}\n\\item discretization time $\\epsilon$,\n\\item mass matrix $\\Sigma^{-1}$, and\n\\item number of steps taken $L$.\n\\end{itemize}\n%\nIn practice, sampling efficiency, both in terms of iteration speed and\niterations per effective sample, is highly sensitive to these three\ntuning parameters \\citep{Neal:2011,Hoffman-Gelman:2014}.\n\nIf $\\epsilon$ is too large, the leapfrog integrator will be inaccurate\nand too many proposals will be rejected. If $\\epsilon$ is too small,\ntoo many small steps will be taken by the leapfrog integrator leading\nto long simulation times per interval. Thus the goal is to balance the\nacceptance rate between these extremes.\n\nIf $L$ is too small, the trajectory traced out in each iteration will\nbe too short and sampling will devolve to a random walk.  If $L$ is\ntoo large, the algorithm will do too much work on each iteration.\n\nIf the mass matrix $\\Sigma$ is poorly suited to the covariance of the\nposterior, the step size $\\epsilon$ will have to be decreased to\nmaintain arithmetic precision while at the same time, the number of\nsteps $L$ is increased in order to maintain simulation time to ensure\nstatistical efficiency.\n\n\\subsection{Integration Time}\n\nThe actual integration time is $L \\, \\epsilon$, a function of number\nof steps.  Some interfaces to Stan set an approximate integration time\n$t$ and the discretization interval (step size) $\\epsilon$.  
In these\ncases, the number of steps will be rounded down as\n\\[\nL = \\left\\lfloor \\frac{t}{\\epsilon} \\right\\rfloor.\n\\]\nand the actual integration time will still be $L \\, \\epsilon$.\n\n\\subsection{Automatic Parameter Tuning}\n\nStan is able to automatically optimize $\\epsilon$ to match an\nacceptance-rate target, able to estimate $\\Sigma$ based on warmup\nsample iterations, and able to dynamically adapt $L$ on the fly during\nsampling (and during warmup) using the no-U-turn sampling (NUTS)\nalgorithm \\citep{Hoffman-Gelman:2014}.\n\n\\begin{figure}\n\\setlength{\\unitlength}{0.005in}\n\\centering\n\\begin{picture}(1000, 200)\n%\n\\footnotesize\n\\put(25, 20) { \\framebox(75, 200)[c]{I} }\n\\put(100, 20) { \\framebox(25, 200)[c]{II} }\n\\put(125, 20) { \\framebox(50, 200)[c]{II} }\n\\put(175, 20) { \\framebox(100, 200)[c]{II} }\n\\put(275, 20) { \\framebox(200, 200)[c]{II} }\n\\put(475, 20) { \\framebox(400, 200)[c]{II} }\n\\put(875, 20) { \\framebox(50, 200)[c]{III} }\n\\put(25, 20) { \\vector(1, 0){950} }\n\\put(800, -10) { \\makebox(200, 20)[l]{{\\small Iteration}} }\n%\n\\end{picture}\n\\caption{ \\small\\it Adaptation during warmup occurs in three stages:\n  an initial fast adaptation interval (I),\n  a series of expanding slow adaptation intervals (II),\n  and a final fast adaptation interval (III).\n  For HMC, both the fast and slow intervals are used for adapting the\n  step size, while the slow intervals are used for learning the\n  (co)variance necessitated by the metric.  Iteration numbering starts\n  at 1 on the left side of the figure and increases to the right.}%\n\\label{adaptation.figure}\n\\end{figure}\n\nWhen adaptation is engaged (it may be turned off by fixing a step size\nand mass matrix), the warmup period is split into three stages, as\nillustrated in \\reffigure{adaptation}, with two \\textit{fast}\nintervals surrounding a series of growing \\textit{slow} intervals.\nHere fast and slow refer to parameters that adapt using local and\nglobal information, respectively; the Hamiltonian Monte Carlo\nsamplers, for example, define the step size as a fast parameter and\nthe (co)variance as a slow parameter. The size of the the initial and\nfinal fast intervals and the initial size of the slow interval are all\ncustomizable, although user-specified values may be modified slightly\nin order to ensure alignment with the warmup period.\n\nThe motivation behind this partitioning of the warmup period is to\nallow for more robust adaptation.  
The stages are as follows.\n\n\\begin{enumerate}\n\\item[I.]\nIn the initial fast interval the chain is allowed to converge\ntowards the typical set,%\n%\n\\footnote{The typical set is a concept borrowed from information\n  theory and refers to the neighborhood (or neighborhoods in\n  multimodal models) of substantial posterior probability mass through\n  which the Markov chain will travel in equilibrium.}\n%\nwith only parameters that can learn from local information adapted.\n\\item[II.]\nAfter this initial stage parameters that require global\ninformation, for example (co)variances, are estimated in a series of\nexpanding, memoryless windows; often fast parameters will be adapted\nhere as well.\n\\item[III.]\nLastly, the fast parameters are allowed to adapt to the\nfinal update of the slow parameters.\n\\end{enumerate}\n\nThese intervals may be controlled through the following\nconfiguration parameters, all of which must be positive integers:\n%\n\\begin{center}\n\\begin{tabular}{l|lc}\n{\\it parameter} & {\\it description} & {\\it default}\n\\\\ \\hline\n{\\it initial buffer} & width of initial fast adaptation interval\n                     & 75\n\\\\\n{\\it term buffer} & width of final fast adaptation interval\n                  &  50\n\\\\\n{\\it window}  & initial width of slow adaptation interval\n              & 25\n\\end{tabular}\n\\end{center}\n\n\\subsection{Discretization-Interval Adaptation Parameters}\n\nStan's HMC algorithms utilize dual averaging \\citep{Nesterov:2009} to\noptimize the step size.%\n%\n\\footnote{This optimization of step size during\nadaptation of the sampler should not be confused with running Stan's\noptimization method.}\n%\nThis warmup optimization procedure is extremely flexible and for\ncompleteness, Stan exposes each tuning option for dual averaging,\nusing the notation of \\cite{Hoffman-Gelman:2014}. In practice, the\nefficacy of the optimization is sensitive to the value of these\nparameters, but we do not recommend changing the defaults without\nexperience with the dual-averaging algorithm. For more information,\nsee the discussion of dual averaging in \\citep{Hoffman-Gelman:2011,\n Hoffman-Gelman:2014}.\n\nThe full set of dual-averaging parameters are\n%\n\\begin{center}\n\\begin{tabular}{c|lcc}\n{\\it parameter} & {\\it description} & {\\it constraint} & {\\it default}\n\\\\ \\hline\n$\\delta$ & target Metropolis acceptance rate\n         & $\\delta \\in [0,1]$\n         & 0.80\n\\\\\n$\\gamma$ & adaptation regularization scale\n         & $\\gamma > 0$\n         & 0.05\n\\\\\n$\\kappa$ & adaptation relaxation exponent\n         & $\\kappa > 0$\n         & 0.75\n\\\\\n$t_0$    & adaptation iteration offset\n         & $t_0 > 0$\n         & 10\n\\end{tabular}\n\\end{center}\n%\nBy setting the target acceptance parameter $\\delta$ to a value closer\nto 1 (its value must be strictly less than 1 and its default value is\n0.8), adaptation will be forced to use smaller step sizes. This can\nimprove sampling efficiency (effective sample size per iteration) at the\ncost of increased iteration times. Raising the value of $\\delta$ will\nalso allow some models that would otherwise get stuck to overcome\ntheir blockages.\n\n\\subsection{Step-Size Jitter}\n\nAll implementations of HMC use numerical integrators requiring a step\nsize (equivalently, discretization time interval). Stan allows the\nstep size to be adapted or set explicitly. 
Stan also allows the step\nsize to be ``jittered'' randomly during sampling to avoid any poor\ninteractions with a fixed step size and regions of high curvature. The\njitter is a proportion that may be added or subtracted, so the maximum\namount of jitter is 1, which will cause step sizes to be selected in\nthe range of 0 to twice the adapted step size. The default value is 0,\nproducing no jitter.\n\nSmall step sizes can get HMC samplers unstuck that would otherwise get\nstuck with higher step sizes. The downside is that jittering below the\nadapted value will increase the number of leapfrog steps required and\nthus slow down iterations, whereas jittering above the adapted value\ncan cause premature rejection due to simulation error in the\nHamiltonian dynamics calculation. See \\citep{Neal:2011} for further\ndiscussion of step-size jittering.\n\n\n\\subsection{Euclidean Metric}\n\nAll HMC implementations in Stan utilize quadratic kinetic energy\nfunctions which are specified up to the choice of a symmetric,\npositive-definite matrix known as a \\textit{mass matrix} or, more\nformally, a \\textit{metric} \\citep{Betancourt-Stein:2011}.\n\nIf the metric is constant then the resulting implementation is known\nas \\textit{Euclidean} HMC.  Stan allows for three Euclidean HMC\nimplementations,\n%\n\\begin{itemize}\n\\item a unit metric (diagonal matrix of ones),\n\\item a diagonal metric (diagonal matrix with positive diagonal\n  entries), and\n\\item a dense metric (a dense, symmetric positive definite matrix)\n\\end{itemize}\n%\nThe user may configure the form of the metric.\n\nIf the mass matrix is specified to be diagonal, then regularized\nvariances are estimated based on the iterations in each slow-stage\nblock (labeled II in \\reffigure{adaptation}).  Each of these estimates\nis based only on the iterations in that block.  This allows early\nestimates to be used to help guide warmup and then be forgotten later\nso that they do not influence the final covariance estimate.\n\nIf the mass matrix is specified to be dense, then regularized\ncovariance estimates will be carried out, regularizing the estimate to\na diagonal matrix, which is itself regularized toward a unit matrix.\n\nVariances or covariances are estimated using Welford accumulators\nto avoid a loss of precision over many floating point operations.\n\n\\subsubsection{Warmup Times and Estimating the Mass Matrix}\n\nThe mass matrix can compensate for linear (i.e. global) correlations\nin the posterior which can dramatically improve the performance of HMC\nin some problems. This requires knowing the global correlations.\n\nIn complex models, the global correlations are usually difficult, if\nnot impossible, to derivate analytically; for example, nonlinear model\ncomponents convolve the scales of the data, so standardizing the data\ndoes not always help.  Therefore, Stan estimates these correlations\nonline with an adaptive warmup.  In models with strong nonlinear\n(i.e. local) correlations this learning can be slow, even with\nregularization. This is ultimately why warmup in Stan often needs to\nbe so long, and why a sufficiently long warmup can yield such\nsubstantial performance improvements.\n\n\\subsubsection{Nonlinearity}\n\nThe mass matrix compensates for only linear (equivalently global or\nposition-independent) correlations in the posterior. 
The hierarchical\nparameterizations, on the other hand, affect some of the nasty\nnonlinear (equivalently local or position-dependent) correlations\ncommon in hierarchical models.%\n%\n\\footnote{Only in Riemannian HMC does the metric, which can be thought\n  of as a position-dependent mass matrix, start compensating for\n  nonlinear correlations.}\n\nOne of the biggest difficulties with dense mass matrices is the\nestimation of the mass matrix itself which introduces a bit of a\nchicken-and-egg scenario;  in order to estimate an appropriate mass\nmatrix for sampling, convergence is required, and in order to\nconverge, an appropriate mass matrix is required.\n\n\\subsubsection{Dense vs.\\ Diagonal Mass Matrices}\n\nStatistical models for which sampling is problematic are not typically\ndominated by linear correlations for which a dense mass matrix can\nadjust.  Rather, they are governed by more complex nonlinear\ncorrelations that are best tackled with better parameterizations or\nmore advanced algorithms, such as Riemannian HMC.\n\n\\subsubsection{Warmup Times and Curvature}\n\nMCMC convergence time is roughly equivalent to the autocorrelation\ntime.  Because HMC (and NUTS) chains tend to be lowly autocorrelated\nthey also tend to converge quite rapidly.\n\nThis only applies when there is uniformity of curvature across the\nposterior, an assumption which is violated in many complex models.\nQuite often, the tails have large curvature while the bulk of the\nposterior mass is relatively well-behaved; in other words, warmup is slow\nnot because the actual convergence time is slow but rather because the\ncost of an HMC iteration is more expensive out in the tails.\n\nPoor behavior in the tails is the kind of pathology that can be\nuncovered by running only a few warmup iterations. By looking at the\nacceptance probabilities and step sizes of the first few iterations\nprovides an idea of how bad the problem is and whether it must be\naddressed with modeling efforts such as tighter priors or\nreparameterizations.\n\n\n\\subsection{NUTS and its Configuration}\n\nThe no-U-turn sampler (NUTS) automatically selects an appropriate\nnumber of leapfrog steps in each iteration in order to allow the\nproposals to traverse the posterior without doing unnecessary work.\nThe motivation is to maximize the expected squared jump distance (see,\ne.g., \\citep{RobertsEtAl:1997}) at each step and avoid the random-walk behavior\nthat arises in random-walk Metropolis or Gibbs samplers when there is\ncorrelation in the posterior. For a precise definition of the NUTS\nalgorithm and a proof of detailed balance, see\n\\citep{Hoffman-Gelman:2011,\n  Hoffman-Gelman:2014}.\n\nNUTS generates a proposal by starting at an initial position\ndetermined by the parameters drawn in the last iteration. It then\ngenerates an independent unit-normal random momentum vector. It then\nevolves the initial system both forwards and backwards in time to form\na balanced binary tree. At each iteration of the NUTS algorithm the\ntree depth is increased by one, doubling the number of leapfrog steps\nand effectively doubles the computation time. 
The algorithm terminates\nin one of two ways, either\n%\n\\begin{itemize}\n\\item the NUTS criterion (i.e., a U-turn in Euclidean space on a\n  subtree) is satisfied for a new subtree or the completed tree, or\n\\item the depth of the completed tree hits the maximum depth allowed.\n\\end{itemize}\n%\nRather than using a standard Metropolis step, the final parameter\nvalue is selected via multinomial sampling with a bias toward the\nsecond half of the steps in the trajectory \\citep{Betancourt:2016}.%\n%\n\\footnote{Stan previously used slice sampling along the trajectory,\n  following the original NUTS paper of \\cite{Hoffman-Gelman:2014}.}\n\nConfiguring the no-U-turn sample involves putting a cap on the depth\nof the trees that it evaluates during each iteration.   This is\ncontrolled through a maximum depth parameter.   The number of leapfrog\nsteps taken is then bounded by 2 to the power of the maximum depth minus 1.\n\nBoth the tree depth and the actual number of leapfrog steps computed\nare reported along with the parameters in the output as\n\\code{treedepth\\_\\_} and \\code{n\\_leapfrog\\_\\_}, respectively. Because\nthe final subtree may only be partially constructed, these two will\nalways satisfy\n%\n\\[\n2^{\\mathrm{treedepth} - 1} - 1 < N_{\\mathrm{leapfrog}} \\le 2^{\\mathrm{treedepth} } - 1.\n\\]\n\nTree depth is an important diagnostic tool for NUTS. For example, a\ntree depth of zero occurs when the first leapfrog step is immediately\nrejected and the initial state returned, indicating extreme curvature\nand poorly-chosen step size (at least relative to the current\nposition). On the other hand, a tree depth equal to the maximum depth\nindicates that NUTS is taking many leapfrog steps and being terminated\nprematurely to avoid excessively long execution time. Taking very many\nsteps may be a sign of poor adaptation, may be due to targeting a very\nhigh acceptance rate, or may simply indicate a difficult posterior\nfrom which to sample. In the latter case, reparameterization may help\nwith efficiency. But in the rare cases where the model is correctly\nspecified and a large number of steps is necessary, the maximum depth\nshould be increased to ensure that that the NUTS tree can grow as\nlarge as necessary.\n\n\n\\subsection{Sampling without Parameters}\n\nIn some situations, such as pure forward data simulation in a directed\ngraphical model (e.g., where you can work down generatively from known\nhyperpriors to simulate parameters and data), there is no need to\ndeclare any parameters in Stan, the model block will be empty, and all\noutput quantities will be produced in the generated quantities block.\nFor example, to generate a sequence of $N$ draws from a binomial with\ntrials $K$ and chance of success $\\theta$, the following program suffices.\n%\n\\begin{stancode}\ndata {\n  real<lower=0,upper=1> theta;\n  int<lower=0> K;\n  int<lower=0> N;\n}\nmodel {\n}\ngenerated quantities {\n  int<lower=0,upper=K> y[N];\n  for (n in 1:N)\n    y[n] = binomial_rng(K, theta);\n}\n\\end{stancode}\n%\nThis program includes an empty model block because every Stan program\nmust have a model block, even if it's empty.  For this model, the\nsampler must be configured to use the fixed-parameters setting because\nthere are no parameters.  
Without parameter sampling there is no need\nfor adaptation and the number of warmup iterations should be set to\nzero.\n\nMost models that are written to be sampled without parameters will not\ndeclare any parameters, instead putting anything parameter-like in the\ndata block.  Nevertheless, it is possible to include parameters for\nfixed-parameters sampling and initialize them in any of the usual ways\n(randomly, fixed to zero on the unconstrained scale, or with\nuser-specified values).  For example, \\code{theta} in the example\nabove could be declared as a parameter and initialized as a parameter.\n\n\n\n\\section{General Configuration Options}\\label{general-config.section}\n\nStan's interfaces provide a number of configuration options that are\nshared among the MCMC algorithms (this chapter), the optimization\nalgorithms (\\refchapter{optimization-algorithms}), and the diagnostics\n(\\refchapter{diagnostic-algorithms}).\n\n\\subsection{Random Number Generator}\n\nThe random-number generator's behavior is fully determined by the\nunsigned seed (positive integer) it is started with. If a seed is not\nspecified, or a seed of 0 or less is specified, the system time is\nused to generate a seed. The seed is recorded and included with Stan's\noutput regardless of whether it was specified or generated randomly\nfrom the system time.\n\nStan also allows a chain identifier to be specified, which is useful\nwhen running multiple Markov chains for sampling. The chain identifier\nis used to advance the random number generator a very large number of\nrandom variates so that two chains with different identifiers draw\nfrom non-overlapping subsequences of the random-number sequence\ndetermined by the seed.  When running multiple chains from a single\ncommand, Stan's interfaces will manage the chain identifiers.\n\n\\subsubsection{Replication}\n\nTogether, the seed and chain identifier determine the behavior of the\nunderlying random number generator. See \\refchapter{reproducibility} for a\nlist of requirements for replication.\n\n\\subsection{Initialization}\n\nThe initial parameter values for Stan's algorithms (MCMC,\noptimization, or diagnostic) may be either specified by the user or\ngenerated randomly. If user-specified values are provided, all\nparameters must be given initial values or Stan will abort with an\nerror message.\n\n\\subsubsection{User-Defined Initialization}\n\nIf the user specifies initial values, they must satisfy the\nconstraints declared in the model (i.e., they are on the constrained\nscale).\n\n\\subsubsection{System Constant Zero Initialization}\n\nIt is also possible to provide an initialization of 0, which causes\nall variables to be initialized with zero values on the unconstrained\nscale. The transforms are arranged in such a way that zero\ninitialization provides reasonable variable initializations for most\nparameters, such as 0 for unconstrained parameters, 1 for parameters\nconstrained to be positive, 0.5 for variables to constrained to lie\nbetween 0 and 1, a symmetric (uniform) vector for simplexes, unit\nmatrices for both correlation and covariance matrices, and so on. See\n\\refchapter{variable-transforms} for full details of the\ntransformations.\n\n\n\\subsubsection{System Random Initialization}\n\nRandom initialization by default initializes the parameter values with\nvalues drawn at random from a $\\distro{Uniform}(-2,2)$ distribution.\nAlternatively, a value other than 2 may be specified for the absolute\nbounds. 
These values are on the unconstrained scale, and are inverse\ntransformed back to satisfy the constraints declared for parameters.\nSee \\refchapter{variable-transforms} for a complete description of the\ntransforms used.\n\nBecause zero is chosen to be a reasonable default initial value for\nmost parameters, the interval around zero provides a fairly diffuse\nstarting point. For instance, unconstrained variables are initialized\nrandomly in $(-2,2)$, variables constrained to be positive are\ninitialized roughly in $(0.14,7.4)$, variables constrained to fall\nbetween 0 and 1 are initialized with values roughly in $(0.12,0.88)$.\n\n\n\\section{Divergent Transitions}\n\nThe Hamiltonian Monte Carlo algorithms (HMC and NUTS) simulate the\ntrajectory of a fictitious particle representing parameter values when\nsubject to a potential energy field, the value of which at a point is\nthe negative log posterior density (up to a constant that does not\ndepend on location).  Random momentum is imparted independently in\neach direction, by drawing from a standard normal distribution.  The\nHamiltonian is defined to be the sum of the potential energy and\nkinetic energy of the system. The key feature of the Hamiltonian is\nthat it is conserved along the trajectory the particle moves.\n\nIn Stan, we use the leapfrog algorithm to simulate the path of a\nparticle along the trajectory defined by the initial random momentum\nand the potential energy field.  This is done by alternating updates\nof the position based on the momentum and the momentum based on the\nposition.  The momentum updates involve the potential energy and are\napplied along the gradient.  This is essentially a stepwise\n(discretized) first-order approximation of the trajectory.\n\\cite{LeimkuhlerReich:2004} provide details and error analysis for the\nleapfrog algorithm.\n\nA divergence arises when the simulated Hamiltonian trajectory departs\nfrom the true trajectory as measured by departure of the Hamiltonian\nvalue from its initial value.  When this divergence is too high,%\n%\n\\footnote{The current default threshold is a factor of $10^3$, whereas\n  when the leapfrog integrator is working properly, the divergences\n  will be around $10^{-7}$ or so and do not compound due to the\n  symplectic nature of the leapfrog integrator.}\n%\nthe simulation has gone off the rails and cannot be trusted.  The\npositions along the simulated trajectory after the Hamiltonian\ndiverges will never be selected as the next draw of the MCMC\nalgorithm, potentially reducing Hamiltonian Monte Carlo to a simple\nrandom walk and biasing estimates by not being able to thoroughly\nexplore the posterior distribution.  \\cite{Betancourt:2016b} provides\ndetails of the theory, computation, and practical implications of\ndivergent transitions in Hamiltonian Monte Carlo.\n\nThe Stan interfaces report divergences as warnings and provide ways to\naccess which iterations encountered divergences.  ShinyStan provides\nvisualizations that highlight the starting point of divergent\ntransitions to diagnose where the divergences arise in parameter\nspace.  A common location is in the neck of the funnel in a centered\nparameterization (as in the funnel example discussed in\n\\refsection{reparameterization}).\n\nIf the posterior is highly curved, very small step sizes are required\nfor this gradient-based simulation of the Hamiltonian to be accurate.\nWhen the step size is too large (relative to the curvature), the\nsimulation diverges from the true Hamiltonian.  
This definition is\nimprecise in the same way that stiffness for a differential equation\nis imprecise; both are defined by the way they cause traditional\nstepwise algorithms to diverge from where they should be.\n\nThe primary cause of divergent transitions in Euclidean HMC (other\nthan bugs in the code) is highly varying posterior curvature, for\nwhich small step sizes are too inefficient in some regions and diverge\nin other regions.  If the step size is too small, the sampler becomes\ninefficient and halts before making a U-turn (hits the maximum tree\ndepth in NUTS); if the step size is too large, the Hamiltonian\nsimulation diverges.\n\n\n\\subsection{Diagnosing and Eliminating Divergences}\n\nIn some cases, simply lowering the initial step size and increasing\nthe target acceptance rate will keep the step size small enough that\nsampling can proceed.  In other cases, a reparameterization is\nrequired so that the posterior curvature is more manageable; see the\nfunnel example in \\refsection{reparameterization} for an example.\nBefore reparameterization, it may be helpful to plot the posterior\ndraws, highlighting the divergent transitions to see where they arise.\nThis is marked as a divergent transition in the interfaces; for\nexample, ShinyStan and RStan have special plotting facilities to\nhighlight where divergent transitions arise.\n\n\n\n\\chapter{Transformations of Constrained Variables}\\label{variable-transforms.chapter}\n\n\\noindent\nTo avoid having to deal with constraints while simulating the\nHamiltonian dynamics during sampling, every (multivariate) parameter\nin a Stan model is transformed to an unconstrained variable behind\nthe scenes by the model compiler.  The transform is based on the\nconstraints, if any, in the parameter's definition.  Scalars or the\nscalar values in vectors, row vectors or matrices may be constrained\nwith lower and/or upper bounds.  Vectors may alternatively be\nconstrained to be ordered, positive ordered, or simplexes.  Matrices\nmay be constrained to be correlation matrices or covariance matrices.\nThis chapter provides a definition of the transforms used for each\ntype of variable.\n\nStan converts models to \\Cpp classes which define probability\nfunctions with support on all of $\\reals^K$, where $K$ is the number\nof unconstrained parameters needed to define the constrained\nparameters defined in the program.  The \\Cpp classes also include\ncode to transform the parameters from unconstrained to constrained and\napply the appropriate Jacobians.\n\n\n\\section{Changes of Variables}\\label{change-of-variables.section}\n\nThe support of a random variable $X$ with density $p_X(x)$ is that\nsubset of values for which it has non-zero density,\n%\n\\[\n\\mbox{supp}(X) = \\{ x | p_X(x) > 0 \\}.\n\\]\n\nIf $f$ is a total function defined on the support of $X$, then $Y =\nf(X)$ is a new random variable.  This section shows how to compute the\nprobability density function of $Y$ for well-behaved transforms $f$.\nThe rest of the chapter details the transforms used by Stan.\n\n\n\n\\subsection{Univariate Changes of Variables}\n\nSuppose $X$ is one dimensional and $f: \\mbox{supp}(X) \\rightarrow\n\\reals$ is a one-to-one, monotonic function with a differentiable\ninverse $f^{-1}$.  
Then the density of $Y$ is given by\n%\n\\[\np_Y(y) = p_X(f^{-1}(y))\n         \\,\n         \\left| \\, \\frac{d}{dy} f^{-1}(y)\\, \\right|.\n\\]\nThe absolute derivative of the inverse transform measures how the\nscale of the transformed variable changes with respect to the\nunderlying variable.\n\n\n\\subsection{Multivariate Changes of Variables}\n\nThe multivariate generalization of an absolute derivative is a\nJacobian, or more fully the absolute value of the determinant of the\nJacobian matrix of the transform.  The Jacobian matrix measures the change of\neach output variable relative to every input variable and the absolute\ndeterminant uses that to determine the differential change in volume\nat a given point in the parameter space.\n\nSuppose $X$ is a $K$-dimensional random variable with probability\ndensity function $p_X(x)$.  A new random variable $Y = f(X)$ may be\ndefined by transforming $X$ with a suitably well-behaved function $f$.\nIt suffices for what follows to note that if $f$ is one-to-one\nand its inverse $f^{-1}$ has a well-defined Jacobian, then the\ndensity of $Y$ is\n%\n\\[\np_Y(y) = p_X(f^{-1}(y)) \\, \\left| \\, \\det \\, J_{f^{-1}}(y) \\, \\right|,\n\\]\n%\nwhere $\\det{}$ is the matrix determinant operation and $J_{f^{-1}}(y)$\nis the Jacobian matrix of $f^{-1}$ evaluated at $y$.  Taking $x =\nf^{-1}(y)$, the Jacobian matrix is defined by\n\\[\nJ_{f^{-1}}(y) =\n\\left[\n\\begin{array}{ccc}\\displaystyle\n\\frac{\\partial x_1}{\\partial y_1}\n& \\cdots\n& \\displaystyle \\frac{\\partial x_1}{\\partial y_{K}}\n\\\\[6pt]\n\\vdots & \\vdots & \\vdots\n\\\\\n\\displaystyle\\frac{\\partial x_{K}}{\\partial y_1}\n& \\cdots\n& \\displaystyle\\frac{\\partial x_{K}}{\\partial y_{K}}\n\\end{array}\n\\right].\n\\]\n%\nIf the Jacobian matrix is triangular, the determinant reduces to the\nproduct of the diagonal entries,\n%\n\\[\n\\det \\, J_{f^{-1}}(y)\n= \\prod_{k=1}^K \\frac{\\partial x_k}{\\partial y_k}.\n\\]\n%\nTriangular matrices naturally arise in situations where the variables\nare ordered, for instance by dimension, and each variable's\ntransformed value depends on the previous variable's transformed\nvalues.  
Diagonal matrices, a simple form of triangular matrix, arise\nif each transformed variable only depends on a single untransformed\nvariable.\n\n\\section{Lower Bounded Scalar}\\label{lower-bound-transform.section}\n\nStan uses a logarithmic transform for lower and upper bounds.\n\n\\subsection{Lower Bound Transform}\n\nIf a variable $X$ is declared to have lower bound $a$, it is\ntransformed to an unbounded variable $Y$, where\n%\n\\[\nY = \\log(X - a).\n\\]\n\n\\subsection{Lower Bound Inverse Transform}\n%\nThe inverse of the lower-bound transform maps an unbounded\nvariable $Y$ to a variable $X$ that is bounded below by $a$ by\n%\n\\[\nX = \\exp(Y) + a.\n\\]\n\n\\subsection{Absolute Derivative of the Lower Bound Inverse Transform}\n\nThe absolute derivative of the inverse transform is\n\\[\n\\left| \\,\n\\frac{d}{dy} \\left( \\exp(y) + a \\right)\n\\, \\right|\n= \\exp(y).\n\\]\nTherefore, given the density $p_X$ of $X$, the density of $Y$ is\n%\n\\[\np_Y(y)\n= p_X\\!\\left( \\exp(y) + a \\right) \\cdot \\exp(y).\n\\]\n\n\n\\section{Upper Bounded Scalar}\n\nStan uses a negated logarithmic transform for upper bounds.\n\n\\subsection{Upper Bound Transform}\n\nIf a variable $X$ is declared to have an upper bound $b$, it is\ntransformed to the unbounded variable $Y$ by\n%\n\\[\nY = \\log(b - X).\n\\]\n\n\\subsection{Upper Bound Inverse Transform}\n%\nThe inverse of the upper bound transform converts the unbounded\nvariable $Y$ to the variable $X$ bounded above by $b$ through\n%\n\\[\nX = b - \\exp(Y).\n\\]\n\n\\subsection{Absolute Derivative of the Upper Bound Inverse Transform}\n\nThe absolute derivative of the inverse of the upper bound transform is\n\\[\n\\left| \\,\n\\frac{d}{dy} \\left( b - \\exp(y) \\right)\n\\, \\right|\n= \\exp(y).\n\\]\n%\nTherefore, the density of the unconstrained variable $Y$ is defined in\nterms of the density of the variable $X$ with an upper bound of $b$ by\n%\n\\[\np_Y(y)\n =   p_X \\!\\left( b - \\exp(y) \\right) \\cdot \\exp(y).\n\\]\n\n\n\\section{Lower and Upper Bounded Scalar}\\label{logit-transform-jacobian.section}\n\nFor lower and upper-bounded variables, Stan uses a scaled and\ntranslated log-odds transform.\n\n\\subsection{Log Odds and the Logistic Sigmoid}\n\nThe log-odds function is defined for $u \\in (0,1)$ by\n%\n\\[\n\\mbox{logit}(u) = \\log \\frac{u}{1 - u}.\n\\]\n%\nThe inverse of the log odds function is the logistic sigmoid, defined\nfor $v \\in (-\\infty,\\infty)$ by\n%\n\\[\n\\mbox{logit}^{-1}(v) = \\frac{1}{1 + \\exp(-v)}.\n\\]\n%\nThe derivative of the logistic sigmoid is\n%\n\\[\n\\frac{d}{dy} \\mbox{logit}^{-1}(y)\n= \\mbox{logit}^{-1}(y) \\cdot \\left( 1 - \\mbox{logit}^{-1}(y) \\right).\n\\]\n\n\\subsection{Lower and Upper Bounds Transform}\n\nFor variables constrained to be in the open interval $(a,b)$, Stan\nuses a scaled and translated log-odds transform.  
If variable $X$ is\ndeclared to have lower bound $a$ and upper bound $b$, then it is\ntransformed to a new variable $Y$, where\n%\n\\[\nY = \\mbox{logit} \\left( \\frac{X - a}{b - a} \\right).\n\\]\n%\n\n\\subsection{Lower and Upper Bounds Inverse Transform}\n\nThe inverse of this transform is\n%\n\\[\nX = a + (b - a) \\cdot \\mbox{logit}^{-1}(Y).\n\\]\n%\n\n\\subsection{Absolute Derivative of the Lower and Upper Bounds Inverse\n  Transform}\n\nThe absolute derivative of the inverse transform is given by\n\\[\n\\left|\n  \\frac{d}{dy}\n    \\left(\n      a + (b - a) \\cdot \\mbox{logit}^{-1}(y)\n    \\right)\n  \\right|\n= (b - a)\n    \\cdot \\mbox{logit}^{-1}(y)\n    \\cdot \\left( 1 - \\mbox{logit}^{-1}(y) \\right).\n\\]\nTherefore, the density of the transformed variable $Y$ is\n%\n\\[\np_Y(y)\n=\n p_X \\! \\left( a + (b - a) \\cdot \\mbox{logit}^{-1}(y) \\right)\n    \\cdot (b - a)\n    \\cdot \\mbox{logit}^{-1}(y)\n    \\cdot \\left( 1 - \\mbox{logit}^{-1}(y) \\right).\n\\]\n%\nDespite the apparent complexity of this expression, most of the terms\nare repeated and thus only need to be evaluated once.  Most\nimportantly, $\\mbox{logit}^{-1}(y)$ only needs to be evaluated once,\nso there is only one call to $\\exp(-y)$.\n\n\n\\section{Ordered Vector}\n\nFor some modeling tasks, a vector-valued random variable $X$ is\nrequired with support on ordered sequences.  One example is the set of\ncut points in ordered logistic regression (see \\refsection{ordered-logistic}).\n\nIn constraint terms, an ordered $K$-vector $x \\in \\reals^K$ satisfies\n\\[\nx_k < x_{k+1}\n\\]\n%\nfor $k \\in \\setlist{1,\\ldots,K-1}$.\n\n\n\\subsection{Ordered Transform}\n\nStan's transform follows the constraint directly.  It maps an\nincreasing vector $x \\in \\reals^{K}$ to an unconstrained vector $y \\in\n\\reals^K$ by setting\n%\n\\[\ny_k\n=\n\\left\\{\n\\begin{array}{ll}\nx_1 & \\mbox{if } k = 1, \\mbox{ and}\n\\\\[4pt]\n\\log \\left( x_{k} - x_{k-1} \\right) & \\mbox{if } 1 < k \\leq K.\n\\end{array}\n\\right.\n\\]\n\n\\subsection{Ordered Inverse Transform}\n\nThe inverse transform for an unconstrained $y \\in \\reals^K$ to an\nordered sequence $x \\in \\reals^K$ is defined by the recursion\n%\n\\[\nx_k\n=\n\\left\\{\n\\begin{array}{ll}\ny_1 & \\mbox{if } k = 1, \\mbox{ and}\n\\\\[4pt]\nx_{k-1} + \\exp(y_k) & \\mbox{if } 1 < k \\leq K.\n\\end{array}\n\\right.\n\\]\n%\n$x_k$ can also be expressed iteratively as\n\\[\nx_k = y_1 + \\sum_{k'=2}^k \\exp(y_{k'}).\n\\]\n\n\\subsection{Absolute Jacobian Determinant of the Ordered\n  Inverse Transform}\n\nThe Jacobian of the inverse transform $f^{-1}$ is lower triangular,\nwith diagonal elements for $1 \\leq k \\leq K$ of\n\\[\nJ_{k,k} =\n\\left\\{\n\\begin{array}{ll}\n1 & \\mbox{if } k = 1, \\mbox{ and}\n\\\\[4pt]\n\\exp(y_k) & \\mbox{if } 1 < k \\leq K.\n\\end{array}\n\\right.\n\\]\n%\nBecause $J$ is triangular, the absolute Jacobian determinant is\n%\n\\[\n\\left| \\, \\det \\, J \\, \\right|\n\\ = \\\n\\left| \\, \\prod_{k=1}^K J_{k,k} \\, \\right|\n\\ = \\\n\\prod_{k=2}^K \\exp(y_k).\n\\]\n\nPutting this all together, if $p_X$ is the density of $X$, then the\ntransformed variable $Y$ has density $p_Y$ given by\n%\n\\[\np_Y(y)\n= p_X(f^{-1}(y))\n\\\n\\prod_{k=2}^K \\exp(y_k).\n\\]\n\n\n\\section{Unit Simplex}\\label{simplex-transform.section}\n\nVariables constrained to the unit simplex show up in multivariate\ndiscrete models as both parameters (categorical and multinomial) and\nas variates generated by their priors (Dirichlet and 
multivariate\nlogistic).\n\nThe unit $K$-simplex is the set of points $x \\in \\reals^K$ such that\nfor $1 \\leq k \\leq K$,\n\\[\nx_k > 0,\n\\]\nand\n\\[\n\\sum_{k=1}^K x_k = 1.\n\\]\n%\nAn alternative definition is to take the convex closure of the\nvertices.  For instance, in 2-dimensions, the simplex vertices are the\nextreme values $(0,1)$, and $(1,0)$ and the unit 2-simplex is the line\nconnecting these two points; values such as $(0.3,0.7)$ and\n$(0.99,0.01)$ lie on the line.  In 3-dimensions, the basis is\n$(0,0,1)$, $(0,1,0)$ and $(1,0,0)$ and the unit 3-simplex is the\nboundary and interior of the triangle with these vertices.  Points in\nthe 3-simplex include $(0.5,0.5,0)$, $(0.2,0.7,0.1)$ and all other\ntriplets of non-negative values summing to 1.\n\nAs these examples illustrate, the simplex always picks out a subspace\nof $K-1$ dimensions from $\\reals^K$.  Therefore a point $x$ in the\n$K$-simplex is fully determined by its first $K-1$ elements $x_1, x_2,\n\\ldots, x_{K-1}$, with\n%\n\\[\nx_K = 1 - \\sum_{k=1}^{K-1} x_k.\n\\]\n%\n\n\\subsection{Unit Simplex Inverse Transform}\n\nStan's unit simplex inverse transform may be understood using the\nfollowing stick-breaking metaphor.%\n%\n\\footnote{For an alternative derivation of the same transform using\n  hyperspherical coordinates, see \\citep{Betancourt:2010}.}\n%\n\\begin{quote}\n  Take a stick of unit length (i.e., length 1).  Break a piece off and\n  label it as $x_1$, and set it aside.  Next, break a piece off what's\n  left, label it $x_2$, and set it aside.  Continue doing this until\n  you have broken off $K-1$ pieces labeled $(x_1,\\ldots,x_{K-1})$.\n  Label what's left of the original stick $x_K$.  The vector $x =\n  (x_1,\\ldots,x_{K})$ is obviously a unit simplex because each piece\n  has non-negative length and the sum of their lengths is 1.\n\\end{quote}\n%\nThis full inverse mapping requires the breaks to be represented as the\nfraction in $(0,1)$ of the original stick that is broken off.  These\nbreak ratios are themselves derived from unconstrained values in\n$(-\\infty,\\infty)$ using the inverse logit transform as described\nabove for unidimensional variables with lower and upper bounds.\n\nMore formally, an intermediate vector $z \\in \\reals^{K-1}$, whose\ncoordinates $z_k$ represent the proportion of the stick broken off in\nstep $k$, is defined elementwise for $1 \\leq k < K$ by\n%\n\\[\nz_k = \\mbox{logit}^{-1} \\left( y_k\n                             + \\log \\left( \\frac{1}{K - k}\n                                            \\right)\n                       \\right).\n\\]\n%\nThe logit term\n$\\log\\left(\\frac{1}{K-k}\\right) (i.e., \\mbox{logit}\\left(\\frac{1}{K-k+1}\\right)$) in\nthe above definition adjusts the transform so that a\nzero vector $y$ is mapped to the simplex $x = (1/K,\\ldots,1/K)$.  For instance, if\n$y_1 = 0$, then $z_1 = 1/K$; if $y_2 = 0$, then $z_2 = 1/(K-1)$; and\nif $y_{K-1} = 0$, then $z_{K-1} = 1/2$.\n\nThe break proportions $z$ are applied to determine the stick sizes and\nresulting value of $x_k$ for $1 \\leq k < K$ by\n%\n\\[\nx_k =\n\\left( 1 - \\sum_{k'=1}^{k-1} x_{k'} \\right) z_k.\n\\]\n%\nThe summation term represents the length of the original stick left at\nstage $k$.  This is multiplied by the break proportion $z_k$ to yield\n$x_k$.  
Only $K-1$ unconstrained parameters are required, with\nthe last dimension's value $x_K$ set to the length of the remaining\npiece of the original stick,\n\\[\nx_K = 1 - \\sum_{k=1}^{K-1} x_k.\n\\]\n\n\\subsection{Absolute Jacobian Determinant of the Unit-Simplex\n  Inverse Transform}\n\nThe Jacobian $J$ of the inverse transform $f^{-1}$ is\nlower-triangular, with diagonal entries\n\\[\nJ_{k,k}\n=\n\\frac{\\partial x_k}{\\partial y_k}\n=\n\\frac{\\partial x_k}{\\partial z_k} \\,\n\\frac{\\partial z_k}{\\partial y_k},\n\\]\n%\nwhere\n\\[\n\\frac{\\partial z_k}{\\partial y_k}\n= \\frac{\\partial}{\\partial y_k}\n   \\mbox{logit}^{-1} \\left(\n                       y_k + \\log \\left( \\frac{1}{K-k}\n                                          \\right)\n                    \\right)\n= z_k (1 - z_k),\n\\]\n%\nand\n%\n\\[\n\\frac{\\partial x_k}{\\partial z_k}\n=\n\\left(\n  1 - \\sum_{k' = 1}^{k-1} x_{k'}\n   \\right)\n.\n\\]\n%\nThis definition is recursive, defining $x_k$ in terms of\n$x_{1},\\ldots,x_{k-1}$.\n\nBecause the Jacobian $J$ of $f^{-1}$ is lower triangular and positive, its\nabsolute determinant reduces to\n%\n\\[\n\\left| \\, \\det J \\, \\right|\n\\ = \\\n\\prod_{k=1}^{K-1} J_{k,k}\n\\ = \\\n\\prod_{k=1}^{K-1}\nz_k\n\\,\n(1 - z_k)\n\\\n\\left(\n1 - \\sum_{k'=1}^{k-1} x_{k'}\n\\right)\n.\n\\]\n%\nThus the transformed variable $Y = f(X)$ has a density given by\n%\n\\[\np_Y(y)\n= p_X(f^{-1}(y))\n\\,\n\\prod_{k=1}^{K-1}\nz_k\n\\,\n(1 - z_k)\n\\\n\\left(\n1 - \\sum_{k'=1}^{k-1} x_{k'}\n\\right)\n.\n\\]\n%\nEven though it is expressed in terms of intermediate values $z_k$,\nthis expression still looks more complex than it is. The exponential\nfunction need only be evaluated once for each unconstrained parameter\n$y_k$; everything else is just basic arithmetic that can be computed\nincrementally along with the transform.\n\n\\subsection{Unit Simplex Transform}\n\nThe transform $Y = f(X)$ can be derived by reversing the stages of the\ninverse transform.  Working backwards, given the break proportions\n$z$, $y$ is defined elementwise by\n%\n\\[\ny_k\n= \\mbox{logit}(z_k)\n- \\mbox{log}\\left(\n   \\frac{1}{K-k}\n   \\right)\n.\n\\]\n%\nThe break proportions $z_k$ are defined to be the ratio of $x_k$ to\nthe length of stick left after the first $k-1$ pieces have been broken\noff,\n%\n\\[\nz_k\n= \\frac{x_k}\n       {1 - \\sum_{k' = 1}^{k-1} x_{k'}}\n.\n\\]\n\n\\section{Unit Vector}\\label{unit-vector.section}\n\nAn $n$-dimensional vector $x \\in \\reals^n$ is said to be a unit vector\nif it has unit Euclidean length, so that\n%\n\\[\n\\Vert x \\Vert\n\\ = \\ \\sqrt{x^{\\top}\\,x}\n\\ = \\ \\sqrt{x_1^2 + x_2^2 + \\cdots + x_n^2}\n\\ = \\ 1\\ .\n\\]\n%\n\n\n\\subsection{Unit Vector Inverse Transform}\n\nStan divides an unconstrained vector $y \\in \\reals^{n}$ by its norm,\n$\\Vert y \\Vert = \\sqrt{y^\\top y}$, to obtain a unit vector $x$,\n%\n\\[\nx = \\frac{y}{\\Vert y \\Vert}.\n\\]\n%\n\nTo generate a unit vector, Stan generates points at random in\n$\\reals^n$ with independent unit normal distributions, which are then\nstandardized by dividing by their Euclidean length.\n\\cite{Marsaglia:1972} showed this generates points uniformly at random\non $S^{n-1}$.  That is, if we draw $y_n \\sim \\distro{Normal}(0, 1)$\nfor $n \\in 1{:}n$, then $x = \\frac{y}{\\Vert y \\Vert}$ has a uniform\ndistribution over $S^{n-1}$.  
This allows us to use an $n$-dimensional\nbasis for $S^{n-1}$ that preserves local neighborhoods in that points\nthat are close to each other in $\\reals^n$ map to points near each\nother in $S^{n-1}$.  The mapping is not perfectly distance preserving,\nbecause there are points arbitrarily far away from each other in\n$\\reals^n$ that map to identical points in $S^{n-1}$.\n\n\n\\subsubsection{Warning: undefined at zero!}\n\nThe above mapping from $\\reals^n$ to $S^n$ is not defined at zero.\nWhile this point outcome has measure zero during sampling, and may\nthus be ignored, it is the default initialization point and thus unit\nvector parameters cannot be initialized at zero.  A simple workaround\nis to initialize from a very small interval around zero, which is an\noption built into all of the Stan interfaces.\n\n\\subsection{Absolute Jacobian Determinant of the Unit Vector Inverse\n  Transform}\n\nThe Jacobian matrix relating the input vector $y$ to the output vector\n$x$ is singular because $x^\\top x = 1$ for any non-zero input vector\n$y$. Thus, there technically is no unique transformation from $x$ to\n$y$. To circumvent this issue, let $r = \\sqrt{y^\\top y}$ so that $y = r\nx$. The transformation from $\\left(r, x_{-n}\\right)$ to $y$ is\nwell-defined but $r$ is arbitrary, so we set $r = 1$. In this case,\nthe determinant of the Jacobian is proportional to $-\\frac{1}{2} y^\\top y$,\nwhich is the kernel of a standard multivariate normal distribution\nwith $n$ independent dimensions.\n\n\\section{Correlation Matrices}\\label{correlation-matrix-transform.section}\n\nA $K \\times K$ correlation matrix $x$ must be is a symmetric, so that\n%\n\\[\nx_{k,k'} = x_{k',k}\n\\]\nfor all $k,k' \\in \\setlist{1,\\ldots,K}$, it must have a unit diagonal,\nso that\n\\[\nx_{k,k} = 1\n\\]\nfor all $k \\in \\setlist{1,\\ldots,K}$, and it must be positive\ndefinite, so that for every non-zero $K$-vector $a$,\n\\[\na^{\\top} x a > 0.\n\\]\nTo deal with this rather complicated constraint, Stan implements the\ntransform of \\cite{LewandowskiKurowickaJoe:2009}.  The number of free\nparameters required to specify a $K \\times K$ correlation matrix is $\\binom{K}{2}$.\n\n\\subsection{Correlation Matrix Inverse Transform}\n\nIt is easiest to specify the inverse, going from its $\\binom{K}{2}$\nparameter basis to a correlation matrix.  The basis will actually be\nbroken down into two steps.  To start, suppose $y$ is a vector\ncontaining $\\binom{K}{2}$ unconstrained values.  These are first\ntransformed via the bijective function $\\tanh : \\reals \\rightarrow\n(0,1)$\n%\n\\[\n\\tanh x = \\frac{\\exp(2x) - 1}{\\exp(2x) + 1}.\n\\]\n%\nThen, define a $K \\times K$ matrix $z$, the upper triangular values of\nwhich are filled by row with the transformed values.  For example, in\nthe $4 \\times 4$ case, there are $\\binom{4}{2}$ values arranged as\n%\n\\[\nz\n=\n\\left[\n\\begin{array}{cccc}\n0 & \\tanh y_1 & \\tanh y_2 & \\tanh y_4\n\\\\\n0 & 0 & \\tanh y_3 & \\tanh y_5\n\\\\\n0 & 0 & 0 & \\tanh y_6\n\\\\\n0 & 0 & 0 & 0\n\\end{array}\n\\right]\n.\n\\]\n%\nLewandowski et al.\\ show how to bijectively map the array $z$ to a correlation\nmatrix $x$.  
The entry $z_{i,j}$ for $i < j$ is interpreted as the
canonical partial correlation (\CPC) between $i$ and $j$, which is the
correlation between $i$'s residuals and $j$'s residuals when both $i$
and $j$ are regressed on all variables $i'$ such that $i' < i$.
In the case of $i=1$, there are no earlier variables,
so $z_{1,j}$ is just the Pearson correlation between variables $1$ and $j$.

In Stan, the \LKJ transform is reformulated in terms of a Cholesky factor $w$
of the final correlation matrix, defined for $1 \leq i,j \leq K$ by
%
\[
w_{i,j} =
\left\{
\begin{array}{cl}
%
0 & \mbox{if } i > j,
\\[4pt]
1 & \mbox{if } 1 = i = j,
\\[12pt]
\prod_{i'=1}^{i - 1} \left( 1 - z_{i'\!,\,j}^2 \right)^{1/2}
& \mbox{if } 1 < i = j,
\\[12pt]
z_{i,j} & \mbox{if } 1 = i < j, \mbox{ and}
\\[12pt]
z_{i,j} \, \prod_{i'=1}^{i-1} \left( 1 - z_{i'\!,\,j}^2 \right)^{1/2}
& \mbox{ if } 1 < i < j.
%
\end{array}
\right.
\]
%
This does not require as much computation per matrix entry as it may
appear, because the products of square-root terms can be computed
incrementally down each column.  Writing
%
\[
p_{i,j} = \prod_{i'=1}^{i-1} \left( 1 - z_{i'\!,\,j}^2 \right)^{1/2},
\]
%
the partial products satisfy the recursion $p_{1,j} = 1$ and
$p_{i,j} = p_{i-1,j} \left( 1 - z_{i-1,j}^2 \right)^{1/2}$, so that
$w_{i,j} = z_{i,j} \, p_{i,j}$ below the diagonal and
$w_{j,j} = p_{j,j}$ on the diagonal.
Given the upper-triangular Cholesky factor $w$, the final correlation
matrix is
\[
x = w^{\top} w.
\]

Lewandowski et al.\ show that the determinant of the correlation
matrix can be defined in terms of the canonical partial correlations
as
%
\[
\mbox{det} \, x = \prod_{i=1}^{K-1} \ \prod_{j=i+1}^K \ (1 - z_{i,j}^2)
 = \prod_{1 \leq i < j \leq K} (1 - z_{i,j}^2).
\]

\subsection{Absolute Jacobian Determinant of the Correlation
  Matrix Inverse Transform}
From the inverse of equation 11 in \cite{LewandowskiKurowickaJoe:2009},
the absolute Jacobian determinant is
%
\[
\sqrt{\prod_{i=1}^{K-1}\prod_{j=i+1}^K \left(1-z_{i,j}^2\right)^{K-i-1}} \
\times \prod_{i=1}^{K-1}\prod_{j=i+1}^K
\frac{\partial \tanh y_{i,j}}{\partial y_{i,j}},
\]
where $y_{i,j}$ denotes the unconstrained value mapped to the entry
$z_{i,j} = \tanh y_{i,j}$.
\subsection{Correlation Matrix Transform}

The correlation transform is defined by reversing the steps of the
inverse transform defined in the previous section.

Starting with a correlation matrix $x$, the first step is to find the
unique upper triangular $w$ such that $x = w^{\top} w$.
Because $x$
is positive definite, this can be done by applying the Cholesky
decomposition,
\[
w = \mbox{chol}(x).
\]

The next step from the Cholesky factor $w$ back to the array $z$ of
{\CPC}s is simplified by the ordering of the elements in the
definition of $w$, which when inverted yields
%
\[
z_{i,j} =
\left\{
\begin{array}{cl}
0 & \mbox{if } i \geq j,
\\[8pt]
w_{i,j} & \mbox{if } 1 = i < j, \mbox{ and}
\\[8pt]
{w_{i,j}}
\
\prod_{i'=1}^{i-1} \left( 1 - z_{i'\!,j}^2 \right)^{-1/2}
& \mbox{if } 1 < i < j.
\end{array}
\right.
\]
The final stage of the transform reverses the hyperbolic tangent
transform, which is defined by
\[
\tanh^{-1} v = \frac{1}{2} \log \left( \frac{1 + v}{1 - v} \right).
\]
The inverse hyperbolic tangent function, $\tanh^{-1}$, is also called
the Fisher transformation.


\section{Covariance Matrices}

A $K \times K$ matrix is a covariance matrix if it is symmetric and
positive definite (see the previous section for definitions).  It
requires $K + \binom{K}{2}$ free parameters to specify a $K \times K$
covariance matrix.


\subsection{Covariance Matrix Transform}

Stan's covariance transform is based on a Cholesky decomposition
composed with a log transform of the positive-constrained diagonal
elements.%
%
\footnote{An alternative to the transform in this section, which can
  be coded directly in Stan, is to parameterize a covariance matrix
  as a scaled correlation matrix.  An arbitrary $K \times K$
  covariance matrix $\Sigma$ can be expressed in terms of a $K$-vector
  $\sigma$ and correlation matrix $\Omega$ as
  \[
  \Sigma = \mbox{diag}(\sigma) \times \Omega \times \mbox{diag}(\sigma),
  \]
  so that each entry is just a deviation-scaled correlation,
  \[
  \Sigma_{m,n} = \sigma_m \times \sigma_n \times \Omega_{m,n}.
  \]
}

If $x$ is a covariance matrix (i.e., a symmetric, positive definite
matrix), then there is a unique lower-triangular matrix $z =
\mbox{chol}(x)$ with positive diagonal entries, called a Cholesky
factor, such that
\[
x = z \, z^{\top}.
\]
The off-diagonal entries of the Cholesky factor $z$ are unconstrained,
but the diagonal entries $z_{k,k}$ must be positive for $1 \leq k
\leq K$.

To complete the transform, the diagonal is log-transformed to produce
a fully unconstrained lower-triangular matrix $y$ defined by
\[
y_{m,n} =
\left\{
\begin{array}{cl}
0 & \mbox{if } m < n,
\\[4pt]
\log z_{m,m} & \mbox{if } m = n, \mbox{ and}
\\[4pt]
z_{m,n} & \mbox{if } m > n.
\end{array}
\right.
\]

\subsection{Covariance Matrix Inverse Transform}

The inverse transform reverses the two steps of the transform.
Given an unconstrained lower-triangular $K \times K$ matrix $y$, the
first step is to recover the intermediate matrix $z$ by reversing the
log transform,
\[
z_{m,n} =
\left\{
\begin{array}{cl}
0 & \mbox{if } m < n,
\\[4pt]
\exp(y_{m,m}) & \mbox{if } m = n, \mbox{ and}
\\[4pt]
y_{m,n} & \mbox{if } m > n.
\end{array}
\right.
\]
%
The covariance matrix $x$ is recovered from its Cholesky factor $z$ by
taking
%
\[
x = z \, z^{\top}.
\]

\subsection{Absolute Jacobian Determinant of the Covariance
  Matrix Inverse Transform}

The Jacobian is the product of the Jacobians of the exponential
transform from the unconstrained lower-triangular matrix $y$ to matrix
$z$ with positive diagonals and the product transform from the
Cholesky factor
$z$ to $x$.

The transform from unconstrained $y$ to Cholesky factor $z$ has a
diagonal Jacobian matrix, the absolute determinant of which is thus
%
\[
\prod_{k=1}^K  \frac{\partial}{\partial y_{k,k}} \, \exp(y_{k,k})
\ = \
\prod_{k=1}^K \exp(y_{k,k})
\ = \
\prod_{k=1}^K z_{k,k}.
\]

The Jacobian matrix of the second transform from the Cholesky factor $z$ to
the covariance matrix $x$ is also triangular, with diagonal entries
corresponding to pairs $(m,n)$ with $m \geq n$, defined by
\[
\frac{\partial}{\partial z_{m,n}}
\left( z \, z^{\top} \right)_{m,n}
\ = \
\frac{\partial}{\partial z_{m,n}}
\left( \sum_{k=1}^K z_{m,k} \, z_{n,k} \right)
\ = \
\left\{
\begin{array}{cl}
2 \, z_{n,n} & \mbox{if } m = n, \mbox{ and}
\\[4pt]
z_{n,n} & \mbox{if } m > n.
\end{array}
\right.
\]
%
The absolute Jacobian determinant of the second transform is thus
\[
2^{K} \ \prod_{m = 1}^{K} \ \prod_{n=1}^{m} z_{n,n}
\ = \
2^{K} \ \prod_{n=1}^K \ \prod_{m=n}^K z_{n,n}
\ = \
2^{K} \ \prod_{k=1}^K z_{k,k}^{K - k + 1}.
\]
Finally, the full absolute Jacobian determinant of the inverse
of the covariance matrix transform from the unconstrained lower-triangular
$y$ to a symmetric, positive definite matrix $x$ is the product of the
Jacobian determinants of the exponentiation and product transforms,
\[
\left( \prod_{k=1}^K z_{k,k} \right)
\left(
2^{K} \ \prod_{k=1}^K z_{k,k}^{K - k + 1}
\right)
\ = \
2^K
\, \prod_{k=1}^K z_{k,k}^{K-k+2}.
\]

Let $f^{-1}$ be the inverse transform from a $K + \binom{K}{2}$-vector
$y$ to the $K \times K$ covariance matrix $x$.  A density function
$p_X(x)$ defined on $K \times K$ covariance matrices is transformed to
the density $p_Y(y)$ over $K + \binom{K}{2}$ vectors $y$ by
\[
p_Y(y) = p_X(f^{-1}(y)) \ 2^K \ \prod_{k=1}^K z_{k,k}^{K-k+2}.
\]

\section{Cholesky Factors of Covariance Matrices}

An $M \times M$ covariance matrix $\Sigma$ can be Cholesky factored to
a lower triangular matrix $L$ such that $L\,L^{\top} = \Sigma$.  If
$\Sigma$ is positive definite, then $L$ will be $M \times M$.  If
$\Sigma$ is only positive semi-definite, then $L$ will be $M \times N$,
with $N < M$.

A matrix is a Cholesky factor for a covariance matrix if and only if
it is lower triangular, the diagonal entries are positive, and $M \geq
N$.  A matrix satisfying these conditions ensures that $L \,
L^{\top}$ is positive semi-definite if $M > N$ and positive definite
if $M = N$.

A Cholesky factor of a covariance matrix requires $N + \binom{N}{2} +
(M - N)N$ unconstrained parameters.

\subsection{Cholesky Factor of Covariance Matrix Transform}

Stan's Cholesky factor transform only requires the first step of the
covariance matrix transform, namely log transforming the positive
diagonal elements.  Suppose $x$ is an $M \times N$ Cholesky factor.
The above-diagonal entries are zero, the diagonal entries are
positive, and the below-diagonal entries are unconstrained.  The
transform required is thus
%
\[
y_{m,n} =
\left\{
\begin{array}{cl}
0 & \mbox{if } m < n,
\\[4pt]
\log x_{m,m} & \mbox{if } m = n, \mbox{ and}
\\[4pt]
x_{m,n} & \mbox{if } m > n.
\end{array}
\right.
\]

\subsection{Cholesky Factor of Covariance Matrix Inverse Transform}

The inverse transform need only invert the logarithm with an
exponentiation.
If $y$ is the unconstrained matrix representation,
then the elements of the constrained matrix $x$ are defined by
\[
x_{m,n} =
\left\{
\begin{array}{cl}
0 & \mbox{if } m < n,
\\[4pt]
\exp(y_{m,m}) & \mbox{if } m = n, \mbox{ and}
\\[4pt]
y_{m,n} & \mbox{if } m > n.
\end{array}
\right.
\]

\subsection{Absolute Jacobian Determinant of Cholesky Factor Inverse Transform}

The transform has a diagonal Jacobian matrix, the absolute determinant
of which is
%
\[
\prod_{n=1}^N  \frac{\partial}{\partial y_{n,n}} \, \exp(y_{n,n})
\ = \
\prod_{n=1}^N \exp(y_{n,n})
\ = \
\prod_{n=1}^N x_{n,n}.
\]

Let $x = f^{-1}(y)$ be the inverse transform from an $N + \binom{N}{2}
+ (M - N)N$ vector to an $M \times N$ Cholesky factor for a covariance
matrix $x$ defined in the previous section.  A density function
$p_X(x)$ defined on $M \times N$ Cholesky factors of covariance
matrices is transformed to the density $p_Y(y)$ over $N + \binom{N}{2}
+ (M - N)N$ vectors $y$ by
%
\[
p_Y(y) = p_X(f^{-1}(y)) \prod_{n=1}^N x_{n,n}.
\]

\section{Cholesky Factors of Correlation Matrices}

A $K \times K$ correlation matrix $\Omega$ is positive definite and
has a unit diagonal.  Because it is positive definite, it can be
Cholesky factored to a $K \times K$ lower-triangular matrix $L$ with
positive diagonal elements such that $\Omega = L\,L^{\top}$.  Because
the correlation matrix has a unit diagonal,
\[
\Omega_{k,k} = L_k\,L_k^{\top} = 1,
\]
each row vector $L_k$ of the Cholesky factor is of unit length.  The
length and positivity constraint allow the diagonal elements of $L$ to
be calculated from the off-diagonal elements, so that a Cholesky
factor for a $K \times K$ correlation matrix requires only
$\binom{K}{2}$ unconstrained parameters.

\subsection{Cholesky Factor of Correlation Matrix Inverse Transform}

It is easiest to start with the inverse transform from the
$\binom{K}{2}$ unconstrained parameters $y$ to the $K \times K$
lower-triangular Cholesky factor $x$.  The inverse transform is based
on the hyperbolic tangent function, $\tanh$, which satisfies $\tanh(x)
\in (-1,1)$.  Here it will function like an inverse logit with a sign
to pick out the direction of an underlying canonical partial
correlation (see \refsection{correlation-matrix-transform} for more
information on the relation between canonical partial correlations and
the Cholesky factors of correlation matrices).

Suppose $y$ is a vector of $\binom{K}{2}$ unconstrained values.  Let
$z$ be a lower-triangular matrix with zero diagonal and below
diagonal entries filled by row.  For example, in the $3 \times 3$
case,
\[
z =
\left[
\begin{array}{ccc}
0 & 0 & 0
\\
\tanh y_1 & 0 & 0
\\
\tanh y_2 & \tanh y_3 & 0
\end{array}
\right].
\]
%
The matrix $z$, with entries in the range $(-1,1)$, is then
transformed to the Cholesky factor $x$, by taking%
%
\footnote{For convenience, a summation with no terms, such as
  $\sum_{j' < 1} x_{i,j'}$, is defined to be 0.
This implies
  $x_{1,1} = 1$ and that $x_{i,1} = z_{i,1}$ for $i > 1$.}
%
\[
x_{i,j}
=
\left\{
\begin{array}{lll}
0 & \mbox{ if } i < j & \mbox{ [above diagonal]}
\\[12pt]
\sqrt{1 - \sum_{j' < j} x_{i,j'}^2}
  & \mbox{ if } i = j & \mbox{ [on diagonal]}
\\[12pt]
z_{i,j} \ \sqrt{1 - \sum_{j' < j} x_{i,j'}^2}
  & \mbox{ if } i > j & \mbox{ [below diagonal]}
\end{array}
\right.
\]
%
In the $3 \times 3$ case, this yields
\[
x =
\left[
\begin{array}{ccc}
1 & 0 & 0
\\[6pt]
z_{2,1} & \sqrt{1 - x_{2,1}^2} & 0
\\[6pt]
z_{3,1} & z_{3,2} \sqrt{1 - x_{3,1}^2}
        & \sqrt{1 - (x_{3,1}^2 + x_{3,2}^2)}
\end{array}
\right],
\]
where the $z_{i,j} \in (-1,1)$ are the $\tanh$-transformed $y$.

The approach is a signed stick-breaking process on the quadratic
(Euclidean length) scale.  Starting from length 1 at $j=1$, each
below-diagonal entry $x_{i,j}$ is determined by the (signed) fraction
$z_{i,j}$ of the remaining length for the row that it consumes. The
diagonal entries $x_{i,i}$ get any leftover length from earlier
entries in their row.  The above-diagonal entries are zero.

\subsection{Cholesky Factor of Correlation Matrix Transform}

Suppose $x$ is a $K \times K$ Cholesky factor for some correlation
matrix.  The first step of the transform reconstructs the intermediate
values $z$ from $x$,
\[
z_{i,j} = \frac{x_{i,j}}{\sqrt{1 - \sum_{j' < j}x_{i,j'}^2}}.
\]
%
The mapping from the resulting $z$ to $y$ inverts
$\tanh$,
\[
y
\ = \
\tanh^{-1} z
\ = \
\frac{1}{2} \left( \log (1 + z) - \log (1 - z) \right).
\]
%


\subsection{Absolute Jacobian Determinant of Inverse Transform}

The Jacobian of the full transform is the product of the Jacobians of
its component transforms.

First, for the inverse transform $z = \tanh y$, the derivative is
%
\[
\frac{d}{dy} \tanh y = \frac{1}{(\cosh y)^2}.
\]
%

Second, for the inverse transform of $z$ to $x$, the resulting
Jacobian matrix $J$ is of dimension $\binom{K}{2} \times
\binom{K}{2}$, with indexes $(i,j)$ for $i > j$.  The Jacobian
matrix is lower triangular, so that its determinant is the product of
its diagonal entries, of which there is one for each $(i,j)$ pair,
%
\[
\left| \, \mbox{det} \, J \, \right|
  \ = \ \mathlarger{\prod}_{i > j} \left| \frac{d}{dz_{i,j}} x_{i,j} \right|,
\]
%
where
%
\[
\frac{d}{dz_{i,j}} x_{i,j}
= \sqrt{1 - \sum_{j' < j} x^2_{i,j'}}.
\]
%
So the combined density for unconstrained $y$ is
\[
p_Y(y)
= p_X(f^{-1}(y))
  \ \
  \mathlarger{\prod}_{n=1}^{\binom{K}{2}} \frac{1}{(\cosh y_n)^2}
  \ \
  \mathlarger{\prod}_{i > j} \left( 1 - \sum_{j' < j} x_{i,j'}^2
  \right)^{1/2},
\]
%
where $x = f^{-1}(y)$ is used for notational convenience.  The log
Jacobian determinant of the complete inverse transform $x = f^{-1}(y)$
is given by
\[
\log \left| \, \det J \, \right|
=
-2 \sum_{n=1}^{\binom{K}{2}}
\log \cosh y_n
\
+
\
\frac{1}{2} \
\sum_{i > j}
\log \left( 1 - \sum_{j' < j} x_{i,j'}^2 \right)
.
\]


\chapter{Optimization Algorithms}%
\label{optimization-algorithms.chapter}

\noindent
Stan provides optimization algorithms which find modes of the density
specified by a Stan program.
Such modes may be used as parameter
estimates or as the basis of approximations to a Bayesian posterior;
see \refchapter{mle} for background on point estimation.

Stan provides three different optimizers: a Newton optimizer and two
related quasi-Newton algorithms, BFGS and L-BFGS; see
\citep{NocedalWright:2006} for a thorough description and analysis of
all of these algorithms. The L-BFGS algorithm is the default
optimizer. Newton's method is the least efficient of the three, but
has the advantage of setting its own stepsize.

\section{General Configuration}

All of the optimizers are iterative and allow the maximum number of
iterations to be specified; the default maximum number of iterations
is 2000.

All of the optimizers are able to stream intermediate output reporting
on their progress.  Whether or not to save the intermediate iterations
and stream progress is configurable.

\section{BFGS and L-BFGS Configuration}

\subsection{Convergence Monitoring}

Convergence monitoring in (L-)BFGS is controlled by a number of
tolerance values, the satisfaction of any one of which causes the
algorithm to terminate with a solution. Any of the convergence tests
can be disabled by setting its corresponding tolerance parameter to
zero.  The tests for convergence are as follows.

\subsubsection{Parameter Convergence}

The parameters $\theta_i$ in iteration $i$ are considered to have
converged with respect to tolerance \code{tol\_param} if
%
\[
|| \theta_{i} - \theta_{i-1} || < \mbox{\code{tol\_param}}.
\]


\subsubsection{Density Convergence}
%
The (unnormalized) log density
$\log p(\theta_{i}|y)$ for the parameters $\theta_i$ in iteration $i$
given data $y$ is considered to have converged with
respect to tolerance \code{tol\_obj} if
%
\[
\left| \log p(\theta_{i}|y) - \log p(\theta_{i-1}|y) \right| <
\mbox{\code{tol\_obj}}.
\]
%
The log density is considered to have converged to within
relative tolerance \code{tol\_rel\_obj} if
%
\[
\frac{\left| \log p(\theta_{i}|y) - \log p(\theta_{i-1}|y) \right|}{\
  \max\left(\left| \log p(\theta_{i}|y)\right|,\left| \log
      p(\theta_{i-1}|y)\right|,1.0\right)}
 < \mbox{\code{tol\_rel\_obj}} * \epsilon.
\]
%


\subsubsection{Gradient Convergence}

The gradient is considered to have converged to 0 relative to a
specified tolerance \code{tol\_grad} if
%
\[
|| g_{i} || < \mbox{\code{tol\_grad}},
\]
where $\nabla_{\theta}$ is the gradient operator with respect to
$\theta$ and $g_{i} = \nabla_{\theta} \log p(\theta_{i}|y)$ is the gradient at
iteration $i$.

The gradient is considered to have converged to 0 relative to a
specified relative tolerance
\code{tol\_rel\_grad} if
%
\[
\frac{g_{i}^T \hat{H}_{i}^{-1} g_{i} }{ \max\left(\left|\log p(\theta_{i}|y)\right|,1.0\right) } < \mbox{\code{tol\_rel\_grad}} * \epsilon,
\]
%
where $\hat{H}_{i}$ is the estimate of the Hessian at iteration $i$,
$|u|$ is the absolute value (L1 norm) of $u$, $||u||$ is the vector
length (L2 norm) of $u$, and $\epsilon \approx 2e-16$ is machine
precision.


\subsection{Initial Step Size}

The initial step size parameter $\alpha$ for BFGS-style optimizers may
be specified. If the first iteration takes a long time (and requires a
lot of function evaluations), initialize $\alpha$ to be roughly
equal to the $\alpha$ used in that first iteration.
The default value\nis intentionally small, 0.001, which is reasonable for many problems\nbut might be too large or too small depending on the objective\nfunction and initialization. Being too big or too small just means\nthat the first iteration will take longer (i.e., require more gradient\nevaluations) before the line search finds a good step length. It's not\na critical parameter, but for optimizing the same model multiple times\n(as you tweak things or with different data), being able to tune\n$\\alpha$ can save some real time.\n\n\\subsection{L-BFGS History Size}\n\nL-BFGS has a command-line argument which controls the size of the\nhistory it uses to approximate the Hessian. The value should be less than\nthe dimensionality of the parameter space and, in general, relatively\nsmall values (5--10) are sufficient; the default value is 5.\n\nIf L-BFGS performs poorly but BFGS performs well, consider increasing\nthe history size. Increasing history size will increase the\nmemory usage, although this is unlikely to be an issue for typical\nStan models.\n\n\n\\section{General Configuration Options}\n\nThe general configuration options for optimization are the same as\nthose for MCMC;  see \\refsection{general-config} for details.\n\n\n\\section{Writing Models for Optimization}\n\n\\subsection{Constrained vs.\\ Unconstrained Parameters}\n\nFor constrained optimization problems, for instance, with a standard\ndeviation parameter $\\sigma$ constrained so that $\\sigma > 0$, it can\nbe much more efficient to declare a parameter \\code{sigma} with no\nconstraints.  This allows the optimizer to easily get close to 0\nwithout having to tend toward $-\\infty$ on the $\\log \\sigma$ scale.\n\nThe Jacobian adjustment is not an issue for posterior modes, because\nStan turns off the built-in Jacobian adjustments for optimization.\n\nWith unconstrained parameterizations of parameters with constrained\nsupport, it is important to provide a custom initialization that is\nwithin the support.  For example, declaring a vector\n%\n\\begin{quote}\n\\begin{Verbatim}\nvector[M] sigma;\n\\end{Verbatim}\n\\end{quote}\n%\nand using the default random initialization which is\n$\\distro{Uniform}(-2,2)$ on the unconstrained scale, means that there\nis only a $2^{-M}$ chance that the initialization will be within\nsupport.\n\nFor any given optimization problem, it is probably worthwhile trying\nthe program both ways, with and without the constraint, to see which\none is more efficient.\n\n\n\\chapter{Variational Inference}\\label{vi-algorithms.chapter}\n\n\\noindent\nStan implements an automatic variational inference algorithm, called\nAutomatic Differentiation Variational Inference (ADVI)\n\\citep{Kucukelbir:2015}. In this chapter, we describe the specifics of\nhow ADVI maximizes the variational objective. For a high-level\ndescription, please see \\refchapter{vi-advanced}.\n\n\\section{Stochastic Gradient Ascent}\n\nADVI optimizes the ELBO in the real-coordinate space using stochastic\ngradient ascent. We obtain noisy (yet unbiased) gradients of the\nvariational objective using automatic differentiation and Monte Carlo\nintegration. The algorithm ascends these gradients using an adaptive\nstepsize sequence. We evaluate the ELBO also using Monte Carlo\nintegration and measure convergence similar to the relative tolerance\nscheme in Stan's optimization feature.\n\n\\subsection{Monte Carlo Approximation of the ELBO}\n\nADVI uses Monte Carlo integration to approximate the variational\nobjective function, the ELBO. 
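In sketch form, a single noisy ELBO evaluation for a mean-field
Gaussian approximation can be written as below; the
\texttt{log\_joint} callback and all names are illustrative
assumptions, not the ADVI implementation.
%
\begin{quote}
\begin{Verbatim}
import numpy as np

def elbo_estimate(log_joint, mu, sigma, num_draws, rng):
    # Entropy of a diagonal Gaussian is available in closed form.
    d = len(mu)
    entropy = 0.5 * d * (1 + np.log(2 * np.pi)) + np.sum(np.log(sigma))
    # Monte Carlo average of the log joint over draws from q.
    draws = mu + sigma * rng.standard_normal((num_draws, d))
    energy = np.mean([log_joint(theta) for theta in draws])
    return energy + entropy
\end{Verbatim}
\end{quote}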
The number of draws used to
approximate the ELBO is denoted by \texttt{elbo\_samples}. We
recommend a default value of $100$, as we only evaluate the ELBO every
\texttt{eval\_elbo} iterations, which also defaults to $100$.

\subsection{Monte Carlo Approximation of the Gradients}

ADVI uses Monte Carlo integration to approximate the gradients of the
ELBO. The number of draws used to approximate the gradients is
denoted by \texttt{grad\_samples}. We recommend a default value of
$1$, as this is the most efficient. It is also a very noisy estimate of
the gradient, but stochastic gradient ascent is capable of following
such gradients.

\subsection{Adaptive Stepsize Sequence}

ADVI uses a finite-memory version of AdaGrad \citep{Duchi:2011}. This
has a single parameter that we expose, denoted \texttt{eta}. We now
have a warmup adaptation phase that selects a good value for
\texttt{eta}. The procedure does a heuristic search over \texttt{eta}
values that span 5 orders of magnitude.

\subsection{Assessing Convergence}

ADVI tracks the progression of the ELBO through the stochastic
optimization.  Specifically, ADVI heuristically determines a rolling
window over which it computes the average and the median change of the
ELBO. Should either number fall below a threshold, denoted by
\texttt{tol\_rel\_obj}, we consider the algorithm to have
converged. The change in ELBO is calculated the same way as in Stan's
optimization module.


\chapter{Diagnostic Mode}\label{diagnostic-algorithms.chapter}

\noindent
Stan's diagnostic mode runs a Stan program with data, initializing
parameters either randomly or with user-specified initial values, and
then evaluates the log probability and its gradients. The gradients
computed by the Stan program are compared to values calculated by
finite differences.

Diagnostic mode may be configured with two parameters.
%
\begin{center}
\begin{tabular}{c|lcc}
{\it parameter} & {\it description} & {\it constraints} & {\it
  default}
\\ \hline
{\it $\epsilon$} & finite difference size & $\epsilon > 0$ & 1e--6
\\
{\it error} & error threshold for matching & $\mbox{error} > 0$ & 1e--6
\end{tabular}
\end{center}
%
If the difference between the Stan program's gradient value and that
calculated by finite difference is higher than the specified
threshold, the argument will be flagged.

\section{Output}

Diagnostic mode prints the log posterior density (up to a proportion)
calculated by the Stan program for the specified initial values. For
each parameter, it prints the gradient at the initial parameter values
calculated by Stan's program and by finite differences over Stan's
program for the log probability.

\subsection{Unconstrained Scale}

The output for the variable values and their gradients is on the
unconstrained scale, which means each variable is a vector of size
corresponding to the number of unconstrained variables required to
define it. For example, an $N \times N$ correlation matrix requires
$\binom{N}{2}$ unconstrained parameters. The
transformations from constrained to unconstrained parameters are based
on the constraints in the parameter declarations and described in
detail in \refchapter{variable-transforms}.

\subsection{Includes Jacobian}

The log density includes the Jacobian adjustment implied by the
constraints declared on variables; see
\refchapter{variable-transforms} for full details.
The Jacobian
adjustment will be turned off when optimization is used, but
there is as yet no way to turn it off in diagnostic mode.

\section{Configuration Options}

The general configuration options for diagnostics are the same as
those for MCMC; see \refsection{general-config} for details. Initial
values may be specified, or they may be drawn at random. Setting the
random number generator seed will only have an effect if a random
initialization is specified.

\section{Speed Warning and Data Trimming}

Due to the application of finite differences, the computation time
grows linearly with the number of parameters. This can require a
very long time, especially in models with latent parameters that grow
with the data size. It can be helpful to diagnose a model with smaller
data sizes in such cases.
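The finite-difference comparison at the heart of diagnostic mode can
be sketched as follows (a schematic Python illustration with a
hypothetical \texttt{log\_prob} callback, not Stan's diagnostic
code); the two density evaluations per parameter are the source of
the linear cost just described.
%
\begin{quote}
\begin{Verbatim}
import numpy as np

def finite_diff_grad(log_prob, theta, epsilon=1e-6):
    # Central differences: two log_prob evaluations per parameter.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        step = np.zeros_like(theta)
        step[i] = epsilon
        grad[i] = (log_prob(theta + step)
                   - log_prob(theta - step)) / (2 * epsilon)
    return grad
\end{Verbatim}
\end{quote}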
{"text": "\\chapter{Sparse Component Analysis, data restoration and inpainting on the sphere}\n\\label{ch_sca_datarest}\n\n\\section{Morphological Component Analysis on the sphere}\n\nA usual task  in processing signals, images as well as spherical data maps, is to decompose the data \ninto its elementary building blocks. This can be formulated as an inverse problem where the data is \nassumed to have been generated according to the following model : \n\\begin{equation}\n y = \\sum_{i} \\alpha_i \\phi_i + \\eta\n\\end{equation}\nthat is a linear combination of relevant waveforms $\\phi_i \\in \\mathbb{R}^n$ with weights $\\alpha_i$. \nHere $\\eta$ represents possible contamination by additive, typically Gaussian white noise. Given data \n$y\\in \\mathbb{R}^n$, one then wants to recover the underlying structures that is to say estimate a set \nof waveforms $\\phi_i$ that build the data and their corresponding weights $\\tilde{\\alpha}_i$. The solution \nto this estimation problem will depend heavily on the available prior information. Of interest here is \nthe case where one is given \\emph{a priori} a set a waveforms from which to select a good subset. This set \nmay be a basis, a frame or several bases or frames grouped into a large redundant dictionary.\\\\\n%\nPossible dictionaries in 1D and 2D include Fourier and related bases, wavelet bases, as well as \nother more recent multiscale systems such as the ridgelet~\\cite{cur:candes99_1} and curvelet \nframes~\\cite{cur:donoho99,starck:sta01_3}, etc. Depending on the morphology of the data, each of \nthese dictionaries will have different performance characteristics \n%perform more or less efficiently \nin a non-linear approximation scheme. For instance, sparse approximations of piecewise smooth \nsignals or images with point singularities are easily obtained using wavelets. However these \nare no longer optimal in the case of piecewise smooth images with singularities along smooth \ncurves or edges. Such images are more efficiently approximated using curvelets which are highly \nanisotropic and thus exhibit high directional selectivity. Digital implementations of both ridgelet \nand curvelet transforms and their application to image denoising are described in~\\cite{starck:sta01_3}.\\\\\n%\nAvailable transforms in the spherical topology include the spherical harmonics and several \nwavelet transforms. Software packages such as Healpix\\footnote {http://www.eso.org/science/healpix}~\\cite{pixel:healpix} \nor Glesp~\\cite{pixel:glesp} provide approximate digital spherical harmonic transform routines \nbased on their specific pixelization schemes. Schr{\\\"o}der and Sweldens~\\cite{wave:sweldens95a} \nhave developed an orthogonal wavelet transform on the sphere based on the Haar wavelet function \nwhich then suffers from the poor frequency domain localization properties of the primitive Haar function \n%properties of the Haar function \nand from the problems inherent in orthogonal decomposition (\\emph{e.g.} lack of translation invariance). \nA few papers describe continuous wavelet transforms on the sphere~\\cite{wave:antoine99bis,wave:cayon01,wave:holschneider96,wave:wiaux,bogdanova} \nwhich have been extended to directional wavelet transforms~\\cite{wave:wiaux2005,wave:hobson04}. 
\nAlthough useful for data analysis, these continuous transforms lack an inverse transform and hence \nare clearly not suitable for restoration or synthesis purposes.\\\\\n%\nIn their pioneering  work, Freeden and Maier~\\cite{freeden02,freeden03} gave a wavelet transform \nand reconstruction scheme on the sphere which is based on the spherical harmonic transform. \nFollowing this idea, Starck \\emph{et al.}~\\cite{starck:sta05_2} have proposed a new invertible \nisotropic undecimated wavelet transform (UWT) on the sphere which preserves the same desirable \nproperties as the standard isotropic UWT for flat 2D maps~\\cite{starck:book98}: the reconstruction \nis simple and immediate since it is just the addition of all the wavelet bands with the coarsest scale. \nBased on this new decomposition, other multiscale transforms such as the pyramidal wavelet transform, \nthe ridgelet transform and the curvelet transform have been successfully constructed on the sphere \\cite{starck:sta05_2}. \n%\nEach of these decompositions on the sphere will sparsely represent parts of the image based \non their morphological properties. Wavelets will easily detect more or less isotropic localized \nstructures, while curvelets are better suited for efficiently detecting highly anisotropic objects.\\\\\n%\n% ,  based on the spherical harmonics decomposition, is to our knowledge the only one to have an inverse transform. This set of spherical transforms has recently been enriched with a new invertible  isotropic undecimated wavelet transform~\\cite{starck:sta05_2} the properties of which are similar to those of the \\emph{\\`a trous} algorithm and with digital ridgelet and curvelet transforms on the sphere. These tools make it possible to detect both point singularities and  edges in spherical maps. \n%\n%\nA data set $y$ has an exact representation over any complete basis of the data space, or several \nsuch exact representations in the case of redundant overcomplete dictionaries. However, these \nrepresentations are not equally interesting in terms of data modeling or feature detection. In fact, \na strong \\emph{a priori} is to favor representations of $y$ that use only a small number of waveforms \nleading to a more concise and possibly more interpretable representation of the data. In fact, \nbuilding sparse representations or approximations is the (he)art of structured data processing: \nthe design of good detection, denoising, restoration and compression algorithms relies on the \navailability of good dictionaries and good selection algorithms. Indeed, selecting the smallest \nsubset of waveforms from a large dictionary, that will linearly combine to reproduce the salient \nfeatures of a given signal or image, is a hard combinatorial problem. Several \\emph{pursuit} algorithms \nhave been proposed that can help build very sparse decompositions such as the greedy Matching Pursuit (MP)~\\cite{wave:mallat93} \nalgorithm which refines the signal approximation by picking at each iteration the one waveform \nwhich best correlates with the current approximation error. Basis Pursuit (BP) ~\\cite{wave:donoho98} \nis a global procedure which seeks an approximation $\\tilde{y}$ to $y$ by solving the linear programming problem:\n\\begin{equation}\n\\min_{ \\alpha } ~ \\|\\alpha\\|_{{ \\ell_1}} \\mbox{ subject to } y = \\Phi \\alpha.\n\\label{eqn_bp}\n\\end{equation}\nwhere the ${\\ell_1}$ norm measures sparsity in place of the ${\\ell_0}$ counting norm. 
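As a toy illustration of the sparsity-promoting effect of the
$\ell_1$ norm, problem~(\ref{eqn_bp}) can be recast as a linear
program and handed to an off-the-shelf solver. The following Python
sketch, with an arbitrary random dictionary, typically recovers a
sparse ground truth exactly, while the minimum-$\ell_2$ solution
spreads energy over all coefficients; it is an illustration under
these stated assumptions, not a production implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 10, 20                       # 10 measurements, 20 atoms
Phi = rng.standard_normal((n, m))
alpha_true = np.zeros(m)
alpha_true[[3, 11]] = [1.0, -2.0]   # a 2-sparse ground truth
y = Phi @ alpha_true

# min ||alpha||_1 s.t. Phi alpha = y, as an LP over (alpha, t)
# with -t <= alpha <= t and objective sum(t).
c = np.concatenate([np.zeros(m), np.ones(m)])
A_ub = np.block([[np.eye(m), -np.eye(m)], [-np.eye(m), -np.eye(m)]])
b_ub = np.zeros(2 * m)
A_eq = np.hstack([Phi, np.zeros((n, m))])
bounds = [(None, None)] * m + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
alpha_bp = res.x[:m]                # typically matches alpha_true
alpha_l2 = np.linalg.pinv(Phi) @ y  # dense minimum-l2 solution
\end{verbatim}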
%
In the presence of noise, a noise-aware variant of BP, known as BPDN (for BP denoising), can be
stated as a convex quadratic programming problem and solved using the Interior Point method
\cite{wave:donoho98}. The BPDN problem can also be written in the augmented Lagrangian form:
\begin{equation}
\min_{ \alpha } ~  \|y - \Phi\alpha\|_{{ \ell_2}} ^2 + \lambda \cdot \|\alpha\|_{ \ell_1} .
\label{eqn_mp}
\end{equation}
Among all possible solutions, the chosen one has the minimum ${\ell_1}$ norm. This choice of
$\ell_1$ norm is very important. An $\ell_2$ norm, as used in the method of frames \cite{wave:daube88b},
does not favor sparsity \cite{wave:donoho98}. A number of recent results prove that these
algorithms will recover the unique maximally sparse decomposition provided this solution is
sparse enough and the dictionary is sufficiently incoherent~\cite{Donoho-Elad,cur:elad02,miki:Gribonval-Nielsen,miki:Temlyakov,miki:fuchs}.
Nevertheless, in problems involving large data sets~(\emph{e.g.} images, spherical maps),
BP or MP synthesis algorithms are computationally prohibitive.
%
Morphological Component Analysis (MCA) is a recent, faster alternative described in~\cite{starck:sta04}
%\cite{starck:elad05,starck:sta04_1,starck:sta04,SPIE2005a}
that constructs a sparse representation of a signal or an image assuming that it is a combination
of morphologically distinct features which are sparsely represented in different dictionaries
associated with fast transform algorithms. For instance, images commonly combine contours and textures:
the former are well accounted for using curvelets, while the latter may be well represented using
local cosine functions. In searching for a sparse decomposition of a signal or image $y$, it is
assumed that $y$ is a sum of $K$ components $ (s_k)_{k=1,\ldots,K}$, where each can be described as
$s_k = {\bf \Phi}_k \alpha_k$ with a possibly over-complete dictionary ${\bf \Phi}_k$ and a sparse
vector of coefficients $\alpha_k$. It is further assumed that for any given component the sparsest
decomposition over the proper dictionary yields a highly sparse description, while its decomposition
over the other dictionaries, ${\bf \Phi}_{k'\ne k}$, is not sparse. Thus, the different ${\bf \Phi}_k$
can be seen as discriminating between the different components of the initial signal. MCA achieves its
sparse decomposition relying on an iterative thresholding algorithm with a successively decreasing
threshold~\cite{starck:bobin06_tip}, thus refining the current approximation by including finer structures
alternatingly in the different morphological components. Based on MCA, it has also been shown that we can
derive a very efficient inpainting method \cite{starck:elad05}.\\
%
{\em This chapter: } Motivated by the success of MCA in signal and image processing, the purpose of this contribution
is to take advantage of the variety of transforms on the sphere recently made available~\cite{starck:sta05_2} to extend
the applicability of MCA to the analysis of spherical maps, which are commonly recorded in a number of areas such as
geophysics, astrophysics or medical imaging. As in the case of Euclidean 2D images, we further extend the MCA algorithm
on the sphere in order to perform inpainting tasks. The proposed numerical tools are shown to be valuable
in several selected applications in physics and astrophysics.
The construction of the undecimated isotropic wavelet and
curvelet transforms on the sphere is reviewed in the next section. Sections~\ref{sect_mca} and~\ref{sect_inpaint} describe
the extension to the sphere of the MCA algorithm and of its modification for inpainting purposes.


\section{MCA on the Sphere}
\label{sect_mca}
%---------------------------------------------------------------------------------------------------------------------------------------
%---------------------------------------------------------------------------------------------------------------------------------------
 \subsection{Principle and algorithm}
 %---------------------------------------------------------------------------------------------------------------------------------------
For a given spherical map $y$ modeled as a linear combination of $K$ spherical maps $s_k$, $y = \sum_{k=1}^K s_k$,
having different morphologies, MCA assumes that a dictionary of bases $\{ {\bf \Phi}_1,\cdots,{\bf \Phi}_K\}$ exists
such that, for each $ ~ k$, $ s_k$ is sparse in $ ~ {\bf \Phi}_k$ while its representation in the other
$ ~ {\bf \Phi}_{k'}$ ($ ~ k' \ne k$) is not sparse: $ ~ \forall k' \neq k, ||{\bf \Phi}_k^T s_k||_0 < ||{\bf \Phi}_{k'}^T s_k||_0$,
where $||x||_0$ denotes the $\ell_0$ pseudo-norm of the vector $x$,
\textit{i.e.} the number of non-zero coefficients of $x$.
The problem is to separate the mixture $y$ into its constitutive morphological components $ (s_k)_{k=1,\cdots,K}$ relying
on the discriminating power of the different dictionaries ${\bf \Phi}_k$. Ideally, the $\alpha_k$ are the solutions to:
\begin{equation}\label{eq:Separation1}
 ~ 
\min_{ {\alpha}_1, \dots, ~{\alpha}_K }~\sum_{k=1}^K\|{\alpha}_k\|_0 \quad \mbox{subject to } \quad y= \sum_{k=1}^K  {\bf \Phi}_k {\alpha}_k
\end{equation}
While sensible from the point of view of the desired solution, the problem formulated in Equation (\ref{eq:Separation1})
is non-convex and combinatorial by nature. Its complexity grows exponentially with the number of columns in the overall
dictionary (the problem is NP-hard). Motivated by recent equivalence results, \emph{e.g.} in~\cite{Donoho-Elad}, the MCA
algorithm seeks a solution to the following minimization problem:
\begin{equation}\label{mca:model1}
 ~ 
\min_{s_1,\ldots,s_K} \lambda \sum_{k=1}^K  \|\alpha_k\|_1 + \left\|y-\sum_{k=1}^K s_k\right\|_2^2\,\,\,\textrm{with}\,\,\,s_k ={\bf \Phi}_k \alpha_k
\end{equation}
where an ${ \ell_1}$ sparsity measure is substituted for the $\ell_0$ counting norm following a prescription of
the Basis Pursuit algorithm \cite{wave:donoho98}. In the above, the equality constraint was relaxed and again
$ ~ s_k = {\bf \Phi}_k \alpha_k $.
In the case where each ${\bf \Phi}_k$ is an orthonormal basis,
a block-coordinate solution to the above problem is given by the following set of coupled equations:
\begin{equation}
 ~
\forall k, s_k = r_k - \frac{\lambda_k}{2}  {\bf \Phi}_k  \mbox{sign}( {\bf \Phi}_k^{T} s_k ) \mbox{  with  } r_k = y - \sum_{k' \neq k} s_{k'}
\end{equation}
This can be solved efficiently using the iterative \textit{Block-Coordinate Relaxation Method} \cite{text:Bruce98}
in conjunction with, at a given $k$, a soft-thresholding of the decomposition of $r_k$ over ${\bf \Phi}_k$.
However, when non-unitary or redundant transforms are used, the above is no longer strictly valid. Nevertheless,
simple shrinkage still gives satisfactory results as explained in~\cite{miki:shrinkage}. Finally, denoting by
${\bf T}_k$ and ${\bf R}_k$ the forward and inverse transforms associated with the redundant dictionary
${\bf \Phi}_k$, MCA seeks a solution to problem~(\ref{mca:model1}) with the following algorithm:
\begin{center}
\begin{minipage}[b]{0.9\linewidth}
\vspace{0.1in}
\footnotesize{\textsf{1. Set the number of iterations $I_{\max}$ and the initial thresholds $ ~ \left(\lambda_k^{(0)}\right)_{k}$}

\textsf{2. While $\lambda^{(t)}_{k}$ is greater than a given lower bound $\lambda_{\min}$ (which can depend on the noise standard deviation),}

\hspace{0.15in} \textsf{-- Proceed with the following iteration to estimate components $ (s_k)_{k=1,\ldots,K}$ at iteration $t$:}

\hspace{0.25in} \textsf{For $ ~ k=1,\cdots,K$ }

\hspace{0.5in} \textsf{$\bullet$ Compute the residual term $ ~ r_k^{(t)}$ assuming the current estimates $\tilde{s}_{k' \neq k}^{(t-1)}$ of $ ~ s_{k' \neq k}$ are held fixed:}

\hspace{0.75in} \textsf{ $ ~ r_k^{(t)} = y - \sum_{k' \neq k} \tilde{s}_{k'}^{(t-1)}$}

\hspace{0.5in} \textsf{$\bullet$ Estimate the current coefficients of $ ~ \tilde{s}_k^{(t)}$ by thresholding with threshold $ ~ \lambda_k^{(t)}$:}

\hspace{0.75in} \textsf{$ \tilde{\alpha}_k^{(t)} = \delta_{\lambda_k^{(t)}}\left( {\bf T}_k r_k^{(t)}\right)$}

\hspace{0.5in} \textsf{$\bullet$ Get the new estimate of $ s_k$ by reconstructing from the selected coefficients $ \tilde{\alpha}_k^{(t)}$:}

\hspace{0.75in} \textsf{$ ~ \tilde{s}_k^{(t)} = {\bf R}_k \tilde{\alpha}_k^{(t)}$}

\hspace{0.15in} \textsf{-- Decrease the thresholds $\lambda_{k}$ following a given strategy.}
}
\vspace{0.05in}
\end{minipage}
\end{center}
%---------------------------------------------------------------------------------------------------------------------------------------
	\subsection{Thresholding strategy}
%---------------------------------------------------------------------------------------------------------------------------------------
The operator $\delta$ in the above algorithm is a soft thresholding operator as a result of the use of an
${ \ell_1}$ sparsity measure in approximation to the ideal $ ~ \ell_0$ norm. In practice, hard thresholding
generally leads to better results~\cite{starck:sta04}. The final threshold should vanish in the noiseless
case or it may be set to a multiple of the noise standard deviation in the presence of noise, as in common
detection or denoising methods. The way the threshold is decreased along the iterations of the proposed
iterative thresholding scheme is paramount in terms of performance of the MCA separation mechanism.
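In code, the soft and hard variants of the thresholding operator $\delta_\lambda$
can be sketched as follows (a minimal Python illustration; the names are ours):
\begin{verbatim}
import numpy as np

def soft_threshold(coeffs, lam):
    # Shrink toward zero; entries below lam in magnitude vanish.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

def hard_threshold(coeffs, lam):
    # Keep large entries untouched, zero out the rest.
    return np.where(np.abs(coeffs) > lam, coeffs, 0.0)
\end{verbatim}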
The original algorithm~\cite{starck:sta04} used a linear strategy:
\begin{equation}
\lambda^{(t)} = \lambda^{(0)} - (t-1)\frac{\lambda^{(0)} -\lambda_{\min}}{I_{\max}-1}
\end{equation}
where $\lambda^{(0)}$ is the initial threshold, and $I_{\max}$ is the number of iterations. The first threshold
can be set automatically to a large enough value such as the maximum of all coefficients, $\lambda^{(0)}=\max_k~\|{\bf T}_k y\|_\infty$.
But there is no way to estimate the minimum number of iterations yielding a successful separation. Too small a number
of iterations leads to bad separation, while too large a number is computationally costly. Further, experiments have
clearly shown that the optimal number of iterations depends on the data. We recently focused on devising new
data-adaptive thresholding strategies that speed up the MCA decomposition while preserving the quality of the component separation.
Hereafter we describe two promising strategies, namely MAD and MOM, in the case where $K = 2$;
generalizing to $K > 2$ is straightforward.
%~\cite{starck:bobin06_tip,starck:dave06}.
\paragraph{MAD} Consider a map $y$ such that $ ~ y = s_1 + s_2 = {\bf \Phi}_1\alpha_1 + {\bf \Phi}_2\alpha_2$
where $ ~ s_1$ and $ ~ s_2$ have similar ${ \ell_2}$ norms and $ \alpha_{k=1,2} = {\bf \Phi}_{k=1,2}^T s_{k=1,2}$ are sparse.
When both $ ~ {\bf \Phi}_{k=1,2}$ are orthonormal bases, decomposing $y$ in ${\bf \Phi}_{1}$ leads to
$ {\bf \Phi}_{1}^T y = \alpha_1 + {\bf \Phi}_1^T{\bf \Phi}_2\alpha_2 $. Provided the mutual coherence~\cite{Elad-Bruckstein,miki:Gribonval-Nielsen,Donoho-Elad}
of ${\bf \Phi}_1$ and ${\bf \Phi}_2$ is low, $s_2$ has no particular structure in ${\bf \Phi}_{1}$ and hence
it is tempting to model $ ~ {\bf \Phi}_1^Ts_2$ as Gaussian \emph{noise}. Its standard deviation $\sigma$ can be
estimated using a robust estimator such as the Median Absolute Deviation (MAD)~\cite{rest:donoho93_1}. It follows that
estimating the significant entries $\tilde{\alpha}_1$ in $\alpha_1$ is a denoising problem readily solved
by thresholding ${\bf \Phi}_{1}^T y$ with a threshold $k\sigma$ (typically $k$ is in the range 3 to 4).
The next step is to project the residual $ ~ y - \tilde{s}_1 = y - {\bf \Phi}_{1}\tilde{\alpha}_1$ on ${\bf \Phi}_2$, and so on.
Clearly, the variance of the residual decreases along iterations, and so this provides a simple strategy to adaptively
control the threshold in the MCA algorithm. In practice, this strategy remains fruitful in the case of redundant dictionaries.
Donoho \emph{et al.} in \cite{starck:dave06} have recently focused on an iterative thresholding scheme applied to solving
under-determined linear sparse problems in which they use a similar rule to manage their decreasing threshold.
%
\paragraph{MOM}  Let $ ~ \tilde{s}_1^{(t)}$ and $ ~ \tilde{s}_2^{(t)}$ denote the current estimates of
components $ ~ s_1$ and $ ~ s_2$ at the $t^{th}$ iteration of the MCA decomposition of $y$. The current
residual is $ ~ r^{(t)} = y -  \tilde{s}_1^{(t)} - \tilde{s}_2^{(t)} $.
In the strategy coined MOM as in \n\"Mean of Max\", the value of the threshold at iteration $ ~ t$ is given by : \n\\begin{equation}\n\\label{MOM_threshold}\n ~ \n\\lambda^{(t)} = \\frac{1}{2}\\left[ ||{\\bf \\Phi}_1^T\\left( y- \\tilde{s}_1^{(t-1)} - \\tilde{s}_2^{(t-1)} \\right)||_\\infty + ||{\\bf \\Phi}_2^T\\left( y- \\tilde{s}_1^{(t-1)} - \\tilde{s}_2^{(t-1)} \\right)||_\\infty\\right]\n\\end{equation}\nwhich is easily computed at each step of the iterative process. When one considers more than two dictionaries, \none should take the mean of the two largest decomposition coefficients of the full residual over two distinct dictionaries. \nThe intuition underlying this strategy is that the next significant coefficients to be selected should be attached \nto the dictionary in which the projection of the full residual has coefficients of largest amplitudes. Assuming the \ncoefficients selected at iteration $ ~ t$ are in ${\\bf \\Phi}_1$, it can be shown, under some conditions on the sparsity \nof the components and the mutual coherence of the dictionary~\\cite{starck:bobin06_tip}, that the proposed strategy \nfixes the threshold so that :\n\\begin{equation}\n\\label{intuitive_choice}\n||{\\bf \\Phi}_1^T{\\bf \\Phi}_2\\bar{\\alpha}_2^{(t-1)} ||_\\infty < \\lambda_1^{(t)} < ||\\bar{\\alpha}_1^{(t-1)} ||_\\infty, \\bar{\\alpha}_{k=1,2}^{(t-1)}=\\alpha_{k=1,2}-\\tilde{\\alpha}_{k=1,2}^{(t-1)}\n\\end{equation}\nhence avoiding false detections (upper bound) and ensuring that at least one coefficient is selected (lower bound). \nThis thresholding strategy can easily be made more or less conservative depending on the desired decomposition speed. \nWith these new thresholding strategies, MCA is a fast and robust algorithm to achieve sparse decompositions in redundant \ndictionaries and a practical alternative to other well-known sparse decomposition algorithms~\\cite{starck:bobin06_tip}. \n\\subsection*{Example}\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n%\\psfig{figure=test_gauss_alm_bw.ps,bbllx=0.cm,bblly=6.5cm,bburx=22.cm,bbury=21.5cm,height=6.5cm,width=8cm,clip=}\n\\psfig{figure=test_gauss_alm_bw.ps,bbllx=0.cm,bblly=8.5cm,bburx=22.cm,bbury=19.5cm,height=4cm,width=6.5cm,clip=}\n}\n}\n\\centerline{\n\\hbox{\n%\\psfig{figure=test_gauss_alm_mca_alm_bw.ps,bbllx=0.cm,bblly=8.5cm,bburx=22cm,bbury=19.5cm,height=4.5cm,width=8cm,clip=}\n%\\psfig{figure=test_gauss_alm_mca_wt_bw.ps,bbllx=0.cm,bblly=8.5cm,bburx=22cm,bbury=19.5cm,height=4.5cm,width=8cm,clip=}\n\\psfig{figure=test_gauss_alm_mca_alm_bw.ps,bbllx=0.cm,bblly=8.5cm,bburx=22cm,bbury=19.5cm,height=4cm,width=6.5cm,clip=}\n\\psfig{figure=test_gauss_alm_mca_wt_bw.ps,bbllx=0.cm,bblly=8.5cm,bburx=22cm,bbury=19.5cm,height=4cm,width=6.5cm,clip=}\n}\n}\n}\n\\caption{ Simple toy experiment with MCA on the sphere - The top map shows a linear combination of a spherical harmonic function and a localized Gaussian-like function on the sphere. The bottom maps show the resulting separated components that were obtained using the proposed Morphological Component Analysis on the sphere.}\n\\label{Figure:mcatoy}\n\\end{figure}\nThe spherical maps shown on { figure~\\ref{Figure:mcatoy}} illustrate a simple numerical experiment. \nWe applied the proposed Morphological Component Analysis on the Sphere to synthetic data resulting \nfrom the linear mixture of components respectively sparse in the spherical harmonics and the isotropic \nwavelet representations. The method was able to separate the data back into its original constituents. 
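In schematic form, the separation loop behind this experiment reduces
to the following Python sketch, where \texttt{T1}/\texttt{R1} and
\texttt{T2}/\texttt{R2} are assumed forward/inverse transform
callbacks for the two dictionaries; this is an illustration of the
algorithm above with the MOM rule, not the actual software used.
\begin{verbatim}
import numpy as np

def mca_separate(y, T1, R1, T2, R2, n_iter=100):
    s1 = np.zeros_like(y)
    s2 = np.zeros_like(y)
    for t in range(n_iter):
        # MOM threshold: mean of the two largest coefficients of
        # the full residual over the two dictionaries.
        r = y - s1 - s2
        lam = 0.5 * (np.max(np.abs(T1(r))) + np.max(np.abs(T2(r))))
        # Update each component from its marginal residual by
        # hard thresholding and reconstruction.
        a1 = T1(y - s2)
        s1 = R1(np.where(np.abs(a1) > lam, a1, 0.0))
        a2 = T2(y - s1)
        s2 = R2(np.where(np.abs(a2) > lam, a2, 0.0))
    return s1, s2
\end{verbatim}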
\nA more involved application is described in the next section.    \n%---------------------------------------------------------------------------------------------------------------------------------------\n\\subsection{Application in Physics}\n\\label{section:bedros}\nIn Inertial Confinement Fusion (ICF) a spherical shell is irradiated by laser energy directly or after the \nlaser energy has been converted to soft X-rays~\\cite{bedros:atzeni}. Either way, the aim is to implode the \ncapsule which contains a shell of nuclear fusion fuel (deuterium and tritium) ready to ignite if, after it \nhas been imploded, its density is high enough and a hot spot in its center becomes hot enough to cause a \npropagating nuclear burn wave to travel through the rest of the fuel. This ultimate energy source will not \nwork if during the implosion hydrodynamic instabilities develop which can break apart the shell before it \nassembles at the center and a hot spot forms~\\cite{bedros:lindl}. Hydrodynamic instabilities such as Rayleigh-Taylor \noccur due to nonuniformities in the laser spatial profile or imperfections in the composition of multiple \nsurfaces which make up the layers of thin material that surround the nuclear  fuel. Very small amplitude imperfections \ninitially can result in the ultimate failure of the target due to the large compression ratios involved in ICF.\n\\begin{figure}\n\\centerline{\n\\hbox{\n\\psfig{figure=bed_mca_orig_data.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\\psfig{figure=bed_mca_large_scale.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n\\caption{ \\textbf{left : }Surface structures of ICF spherical shells measured on the nanometer scale are a superposition of global scale variations, isolated bumps and scratches as well as artifacts which look like interference patterns on intermediate scales. \\textbf{right :} Coarsest scale of the undecimated isotropic wavelet transform of the surface measurements of an ICF target.}\n\\label{bedros_mca_data}\n\\end{figure}\nIt is therefore extremely important to characterize the inner and outer surfaces of ICF shell targets \nso as to know whether they are worthy of consideration for ICF implosions. One day in a reactor setting \ntens of thousands of targets will have to be imploded daily so that checking each one is totally \nout of the question. Instead, very good target fabrication quality control processes have to be adopted so that \nconfidence levels in proper performance will be high. A major step along this path to fusion energy then is to \nunderstand why imperfections occur and to correct the systematic elements and control the harm done by random sources.\n%\nFine structures on the surfaces of spherical shells can be measured on the nanometer scale, among others, \nby atomic force microscopy or phase shifting spherical diffractive optical interferometry. An example of \nsuch measurements is shown on figure~\\ref{bedros_mca_data}. As can be seen from the figure, there appears \nto be a superposition of global scale variations, isolated bumps and scratches as well as artifacts which \nlook like interference patterns on intermediate scales of localization. The latter must be isolated and \neliminated from consideration when deciding the readiness of the target for implosion. We have achieved \nthe morphological feature separation by first doing an isotropic wavelet transform on the spherical data \nand subtracting the coarsest scale information. 
MCA on the sphere was used on the rest of the image using
the undecimated wavelet and the local cosine transforms on the sphere. The isolated bumps were thus identified,
and the artifacts caused by the measurement technique were easily removed. Adding the resulting bumps back to the coarsest scale
yields the clean data, with the interference patterns and artifacts removed, as shown in figure~\ref{bedros_mca_result}.
The spherical harmonic decomposition of the cleaned image gives rise to coefficients of the various $\ell$ modes
that will be amplified by the implosion process; this amplification can now be assessed correctly using growth factors
generated by numerical hydrodynamics simulations. If the bumps are clustered and not randomly distributed, then systematic errors
in the manufacturing process can be tracked down. A code called MODEM has been put together to study such
target surface data and extract the localized bump statistics, including their correlations in height,
size and relative location. For more details see~\cite{bedros:modem}.
\begin{figure}
\vbox{
\centerline{
\hbox{
\psfig{figure=bed_mca_in_data.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}
}
}
\centerline{
\hbox{
\psfig{figure=bed_mca_out_dct.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}
\psfig{figure=bed_mca_out_wt.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}
}
}
}
\caption{\textbf{top:} Spherical map obtained by subtracting the coarse scale map on the right of figure~\ref{bedros_mca_data} from the initial map on the left of figure~\ref{bedros_mca_data}. \textbf{bottom:} Component maps separated by the MCA method on the sphere: interference patterns and measurement artifacts were grabbed by the local cosine functions on the sphere (left) while the isolated bumps were caught using the undecimated wavelet on the sphere (right). Adding back the coarse scale on the right of figure~\ref{bedros_mca_data} to the latter map results in a clean map of the surface structures of an ICF spherical shell with the interference patterns and artifacts removed.}
\label{bedros_mca_result}
\end{figure}
%---------------------------------------------------------------------------------------------------------------------------------------
\section{Inpainting on the Sphere}
\label{sect_inpaint}
%---------------------------------------------------------------------------------------------------------------------------------------
\subsection{Algorithm}
%---------------------------------------------------------------------------------------------------------------------------------------
Named after the expert recovery process used for the restoration of deteriorated masterpieces, inpainting refers to a set
of techniques used to alter images in a way that is undetectable to people who are unaware of the original images. There are
numerous applications, among which are removing scratches or objects from digitized photographs, removing overlaid text or graphics,
filling in missing blocks in unreliably transmitted images, and predicting values in images for better compression or image upsampling.
Inpainting algorithms strive to interpolate through the gaps in the image relying on the available pixels, the continuation of edges,
the periodicity of textures, etc.
The preservation of edges and texture, in other words discontinuities, across gaps has attracted \nmuch interest, and many contributions have been proposed to solve this interpolation task. Non-texture image inpainting has received \nconsiderable interest and excitement since the pioneering paper by Masnou and Morel~\cite{Masnou98, Masnou02}, who proposed variational \nprinciples for image disocclusion. A more recent wave of interest in inpainting started with the contributions of Sapiro \n\emph{et al.}~\cite{text:sapiro1,text:sapiro2,text:sapiro3}, followed by Chan and Shen~\cite{text:chan1}. In these works, the authors point \nto the importance of geometry and design anisotropic diffusion PDEs to fill in gaps by smooth continuation of isophotes. \nPDE methods have been shown to perform well on piecewise smooth functions.\n% \nA very different approach is the inpainting algorithm based on MCA described in~\cite{starck:elad05}, which has proved capable \nof filling in holes in either texture or cartoon content in 2D images. To make the link between building sparse representations \nand inpainting, consider the effect of a rectangular gap on the set of Fourier coefficients of a monochromatic sine wave : \nbecause of the non-locality of the Fourier basis functions it takes a large number of coefficients to account for the gap, \nwhich is known as the Gibbs effect. Seeking a sparse representation of the incomplete sine wave outside the gap, that is without \nfitting the gap, enables the recovery of the complete monochromatic sine wave.\n%\nFollowing~\cite{starck:elad05}, an inpainting algorithm on the sphere is readily built from the Morphological Component Analysis \non the sphere described in the previous section. Consider a discrete spherical data map $y$ and a binary map $M$ such that ones \nin $M$ indicate that the corresponding pixels in $y$ are valid data while zeros indicate invalid data. The objective function of \nMCA~(eq.~\ref{mca:model1}) can be modified as follows : \n \begin{equation}\label{inp:model}\n\min_{s_1,\ldots,s_K} \lambda \sum_{k=1}^K  \|\alpha_k\|_1 + \left\|M  \odot (y-\sum_{k=1}^K s_k) \right\|_2^2\,\,\,\textrm{with}\,\,\,s_k ={\bf \Phi}_k \alpha_k ,\n\end{equation}\nwhere $\odot$ stands for entry-wise multiplication. Thus we are preventing the sparse model under construction from attempting to fit \nthe invalid data. Other constraints can easily be imposed on the interpolated sparse components. For instance, in~\cite{starck:elad05}, \na total variation penalty is shown to enhance the recovery of piecewise smooth components. Asking for the regularity across the gaps of \nsome localized statistics (\emph{e.g.} enforcing that the empirical variance of a given inpainted sparse component be \emph{nearly equal} \noutside and inside the masked areas) is another possible constraint. In practice, because of the limited accuracy of some digital \ntransforms we used in the spherical topology, additional constraints, which may be relaxed close to convergence, were also found \nuseful in some cases to stabilize the described iterative algorithms. \n%\nIt is proposed that a solution to the above minimization problem can be reached using the same iterative thresholding process as \nin the MCA algorithm detailed in the previous section, with the only required modification consisting of \emph{masking} the full \nresidual using $M$ after each residual estimation. 
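Before stating the full algorithm, the sine wave argument above can be made concrete in a few lines. The NumPy sketch below recovers a monochromatic sine wave from a randomly masked version of itself by iteratively thresholding its Fourier coefficients while fitting only the valid samples; the signal frequency, the 40 percent sampling rate and the linear threshold schedule are illustrative choices of ours, not data from this paper.\n\begin{verbatim}\nimport numpy as np\n\n# Toy masked inpainting with a single (Fourier) transform: the sparse\n# model is fit to the valid samples only, so the gap gets interpolated.\nt = np.arange(256)\ny = np.sin(2 * np.pi * 8 * t / 256)            # complete signal\nrng = np.random.default_rng(1)\nM = (rng.random(256) < 0.4).astype(float)      # 1 = valid sample\n\ns = np.zeros(256)\nfor lam in np.linspace(0.4, 0.01, 100):        # decreasing threshold\n    alpha = np.fft.fft(s + M * (y - s)) / 256  # masked residual update\n    alpha[np.abs(alpha) < lam] = 0.0           # keep significant modes\n    s = (np.fft.ifft(alpha)).real * 256\n\nprint(np.max(np.abs(s - y)))                   # reconstruction error\n\end{verbatim}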
The MCA-inpainting algorithm is as follows : \n\begin{center}\n\begin{minipage}[b]{0.9\linewidth}\n\vspace{0.1in}\n\footnotesize{\textsf{1. Set the number of iterations $I_{\max}$ and the initial thresholds $\lambda^{(0)}$}\n\n\textsf{2. While $\lambda^{(t)}_{k}$ is greater than a given lower bound $\lambda_{\min}$ (e.g. can depend on the noise standard deviation), }\n\n\hspace{0.15in} \textsf{-- Proceed with the following iteration to estimate components $(s_k)_{k=1,\ldots,K}$ at iteration $t$:}\n\n\hspace{0.25in} \textsf{For $k=1,\cdots,K$ }\n\n\hspace{0.5in} \textsf{$\bullet$ Compute the residual term $r^{(t)}$ :}\n\n\hspace{0.75in} \textsf{ $r^{(t)} = y - \sum_{k} \tilde{s}_{k}^{(t-1)}$}\n\n\hspace{0.5in} \textsf{$\bullet$ Estimate the current coefficients of $\tilde{s}_k^{(t)}$ by thresholding with threshold $\lambda_k^{(t)}$:}\n\n\hspace{0.75in} \textsf{$\tilde{\alpha}_k^{(t)} = \delta_{\lambda_k^{(t)}}\left( {\bf T}_k \left(M \odot  r^{(t)}+\tilde{s}_k^{(t-1)}\right)\right)$}\n\n\hspace{0.5in} \textsf{$\bullet$ Get the new estimate of $s_k$ by reconstructing from the selected coefficients $\tilde{\alpha}_k^{(t)}$ :}\n\n\hspace{0.75in} \textsf{$\tilde{s}_k^{(t)} = {\bf R}_k \tilde{\alpha}_k^{(t)}$}\n\n\hspace{0.15in} \textsf{-- Decrease the thresholds $\lambda_{k}$ following a given strategy.}\n}\n\vspace{0.05in}\n\end{minipage}\n\end{center} \nThe different thresholding strategies described in the previous section can be used in the proposed MCA inpainting iterative thresholding algorithm. \n%\n\subsection*{Example}\n\begin{figure*}\n\vbox{\n\centerline{\n\hbox{\n\psfig{figure=earth_ori.ps,bbllx=0.cm,bblly=8.8cm,bburx=22cm,bbury=19.75cm,height=3.6cm,width=6.3cm,clip=}\n\psfig{figure=earth_mask.ps,bbllx=0.cm,bblly=8.8cm,bburx=22cm,bbury=19.75cm,height=3.6cm,width=6.3cm,clip=}\n}\n}\n\centerline{\n\hbox{\n\psfig{figure=earth_recons.ps,bbllx=0.cm,bblly=8.8cm,bburx=22cm,bbury=19.75cm,height=3.6cm,width=6.3cm,clip=}\n\psfig{figure=earth_diff.ps,bbllx=0.cm,bblly=8.8cm,bburx=22cm,bbury=19.75cm,height=3.6cm,width=6.3cm,clip=}\n}\n}\n}\n\caption{ Application of the proposed MCA-inpainting algorithm on the sphere. \textbf{top left :} original satellite view of the Earth (~$mean= 76.9$, $\sigma =  47.7$~).  \textbf{top right :} incomplete map retaining 40 percent of the original pixels.  \textbf{bottom left : } inpainted map.  \textbf{bottom right :} map of reconstruction errors (~$mean= 0.0$, $\sigma =  2.86$ empirically estimated from the reconstructed pixels only~).}\n\label{Figure:mcaearth}\n\end{figure*}\nA simple numerical experiment is shown on figure~\ref{Figure:mcaearth}. Starting with a full satellite view \nof the Earth\footnote{available from http://www.nasa.gov/vision/earth/features/bmng\_gallery\_4.html}, \nan incomplete spherical map was obtained by randomly masking some of the pixels. In fact, as much as sixty percent \nof the pixels were masked. Using both the spherical harmonics transform and the curvelet transform on the sphere \nwithin the proposed MCA inpainting algorithm, it is possible to fill in the missing pixels in a visually undetectable way. \nThe residual map is shown at the bottom right of figure~\ref{Figure:mcaearth}. 
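The iteration above also carries over directly to code for arbitrary transforms. The following NumPy sketch is a minimal transcription of it; the function and argument names are ours, the analysis and synthesis operators ${\bf T}_k$ and ${\bf R}_k$ are assumed to be supplied as plain function pairs (on the sphere these would be the undecimated wavelet, curvelet or spherical harmonic transforms used in this section), and the linear threshold decrease is only one of the possible strategies.\n\begin{verbatim}\nimport numpy as np\n\ndef mca_inpaint(y, M, transforms, lam0, lam_min, n_iter):\n    # y          : observed map, flattened to a 1-D array\n    # M          : binary validity mask (1 = valid pixel), same shape as y\n    # transforms : list of (analysis T_k, synthesis R_k) function pairs\n    K = len(transforms)\n    s = [np.zeros_like(y) for _ in range(K)]\n    for lam in np.linspace(lam0, lam_min, n_iter):\n        for k, (T, R) in enumerate(transforms):\n            r = y - sum(s)                # full residual\n            alpha = T(M * r + s[k])       # mask residual, keep current s_k\n            alpha = np.where(np.abs(alpha) >= lam, alpha, 0)  # delta_lambda\n            s[k] = R(alpha)\n    return sum(s)\n\end{verbatim}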
\n%---------------------------------------------------------------------------------------------------------------------------------------\n\subsection{Application in Astrophysics}\n%---------------------------------------------------------------------------------------------------------------------------------------\nA major issue in modern cosmology is the measurement and the statistical characterization (spatial power spectrum, Gaussianity) \nof the slight fluctuations in the Cosmic Microwave Background radiation field. These are indeed strongly related to the cosmological \nscenarios describing the properties and evolution of our Universe. Some 370 000 years after the 'Big Bang', when the temperature \nof the Universe was around 3000~K, thermal energy was no longer sufficient to keep electrons and positively charged particles apart, \nso they combined. Photons were then set free in a nearly transparent Universe. As the Universe has further expanded, these photons \nare now in the microwave range, but they should still be distributed according to a black body emission law. Indeed, before recombination, \nthe Universe was a highly homogeneous opaque plasma in near thermal equilibrium in which photons and charged particles were highly interacting. \nHence the slight fluctuations in matter density from which such large scale structures as galaxies or clusters of galaxies have evolved \nare also imprinted on the distribution of photons. \n\begin{figure}\n\vbox{\n\centerline{\n\hbox{\n\psfig{figure=ima_wmap3y.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\psfig{figure=wmap3y_inp.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n}\n\caption{ \textbf{left :} CMB data map provided by the WMAP team. Areas of significant foreground contamination in the galactic region and at the locations of strong radio point sources have been masked out. \textbf{right :} Map obtained by applying the proposed MCA-inpainting algorithm on the sphere to the former incomplete WMAP CMB data map.}\n\label{Figure:cmb_wmap_inpainting}\n\end{figure}\nThe Cosmic Microwave Background~(CMB) was first observed in 1965 by Penzias and Wilson, confirming a prediction \nmade by Gamow in the late 1940s. But it was not until the early 1990s that evidence for small fluctuations \nin the CMB sky could finally be found thanks to the observations made by COBE~\cite{astro:COBE}. This was confirmed \nby several subsequent observations and recently by NASA's Wilkinson Microwave Anisotropy Probe\footnote{The WMAP data and mask we used here are available online at http://map.gsfc.nasa.gov/}. \nFull-sky multi-spectral observations with unprecedented sensitivity and angular resolution are expected from \nthe ESA's PLANCK\footnote{http://astro.estec.esa.nl/Planck} mission, which is to be launched in 2008. 
The statistical \nanalysis of this data set will help set tighter bounds on major cosmological parameters.\n%\n\begin{figure}\n\vbox{\n\centerline{\n\hbox{\n\psfig{figure=tpjls_mask_kp0.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\psfig{figure=wt_wmap_inp_scale_1.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n\n\centerline{\n\hbox{\n\psfig{figure=wt_wmap_inp_scale_2.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\psfig{figure=wt_wmap_inp_scale_3.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n\n\centerline{\n\hbox{\n\psfig{figure=wt_wmap_inp_scale_4.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\psfig{figure=wt_wmap_inp_scale_5.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n\centerline{\n\hbox{\n\psfig{figure=wt_wmap_inp_scale_6.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n\psfig{figure=wt_wmap_inp_scale_7.ps,bbllx=1.5cm,bblly=10cm,bburx=19cm,bbury=19.cm,height=4cm,width=6.5cm,clip=}\n}\n}\n}\n\caption{ \textbf{left :} Mask provided by the WMAP team. The dark blue pixels indicate areas of high level foreground contamination in the WMAP CMB data map.  \textbf{From top to bottom and left to right :} Maps of the wavelet decomposition on seven scales of the inpainted WMAP CMB map shown on the right of figure~\ref{Figure:cmb_wmap_inpainting}. From the visual point of view, the masked area cannot be distinguished anymore in the wavelet scales of the inpainted map.}\n\label{Figure:cmb_scale_wmap_inpainting}\n\end{figure}\nThere are nonetheless a few practical issues, notably that several other astrophysical sources also emit radiation \nin the frequency range used for CMB observations~\cite{fb-rg99}. Separating the observed mixtures back into maps of \nthe different astrophysical contributions in order to isolate the CMB properly is a difficult inverse problem for which \nmethods and algorithms are being actively designed (see \emph{e.g.}~\cite{astro:2005MNRAS.364.1185P,starck:bobin06,starck:yassir05,wlens:pires06} \nand references therein). The estimated spherical CMB maps will inevitably be contaminated by some level of residual \ncontributions, most significantly in the galactic region and at the locations of strong radio point sources. Therefore, it is \ncommon practice to mask out that part of the data (\emph{e.g.} using the mask shown on figure~\ref{Figure:cmb_scale_wmap_inpainting} \nupper left, provided by the WMAP team) in order to reliably assess the non-gaussianity of the CMB field through estimated \nhigher order statistics (\emph{e.g.} skewness, kurtosis) in various representations (\emph{e.g.} wavelet, curvelet, etc.) \n\cite{starck:sta03_1,starck:jin05}. But the gaps in the data thus created need to be handled properly as the detection \nof non-gaussianity in the CMB would have a major scientific impact. The proposed MCA-inpainting algorithm on the sphere was successfully used here \nto fill in the masked regions in order to restore the stationarity of the observed CMB field and lower the impact \nof the incompleteness of the data set on the estimated measures of non-gaussianity or any other non-local statistical test. \nThe experiment was conducted on several simulations of full-sky Gaussian CMB maps. 
A typical CMB map (the CMB data map disclosed \nby the WMAP consortium) is shown on figure~\ref{Figure:cmb_wmap_inpainting} along with the map obtained as a result of \nthe inpainting process, allowing for a first visual assessment of the quality of the proposed method. Figure~\ref{Figure:cmb_scale_wmap_inpainting} \nshows the wavelet decomposition of the inpainted map. We can see that the mask is not visible at all in the different scales. \nHere we have applied the MCA-Inpainting algorithm with 200 iterations and a single transform, the Spherical \nHarmonic Decomposition. A more quantitative evaluation of the proposed inpainting algorithm is reported on figure~\ref{Figure:mca_skew_kur}, \nwhere plots of the estimated measures of non-Gaussianity on both the original map and the inpainted map are given. \nThese reveal no significant discrepancy: we believe that the proposed method will help discriminate between truly non-Gaussian CMB \nand non-Gaussianity related to the non-stationarity of incomplete maps. This will be further investigated in the future.\n\begin{figure}\n\vbox{\n\centerline{\n\hbox{\n\psfig{figure=skewness.ps,bbllx=1cm,bblly=1cm,bburx=17cm,bbury=14cm,height=4.5cm,width=6.5cm,clip=}\n\psfig{figure=kurtosis.ps,bbllx=1cm,bblly=1cm,bburx=17cm,bbury=14cm,height=4.5cm,width=6.5cm,clip=}\n}\n}\n}\n\caption{ The horizontal axis gives the scale number, which increases toward lower frequencies. \textbf{Left :} skewness of the wavelet coefficients in a given scale of the original complete simulated spherical CMB map ($\times$) and of the inpainted map ($\lozenge $). \textbf{Right :} kurtosis of the wavelet coefficients in a given scale of the original complete simulated spherical CMB map ($\times$) and of the inpainted map ($\lozenge $). Error bars were estimated on a small set of fifteen simulated complete CMB maps.}\n\label{Figure:mca_skew_kur}\n\end{figure}\n\n\n\section{Generalized Morphological Component Analysis on the sphere}\n\n\label{sec:gmca}\n\subsection{The GMCA model}\n\label{sec:model}\n\nAssume that the sky is observed by $m$ sensors operating at different frequencies. The observation with detector $i$ is then a noisy linear mixture of $n$ independent sources $\{s_j\}_{j=1,\cdots,n}$ : \n$x_i = \sum_{j=1}^n a_{ij} s_j + n_i$. The coefficient $a_{ij}$ reflects the emission law of source $s_j$ in the \nfrequency band of the $i$-th sensor; $n_i$ models instrumental noise. Collecting the observations from all $m$ sensors, this linear mixture model can be rewritten in a more convenient matrix formulation :\n\begin{equation}\n\label{eq:lm_model}\n{\bf X} = {\bf AS} + {\bf N}\n\end{equation}\nwhere ${\bf X}$ is the $m \times t$ data matrix, the rows of which are the observed data maps in each channel, ${\bf A}$ \nis the $m \times n$ mixing matrix, ${\bf S}$ is the $n \times t$ source matrix, the rows of which are the sources $s_j$, \nand ${\bf N}$ is the $m \times t$ noise matrix.\n\nWe further assume that all the protagonists of the model in Equation~\ref{eq:lm_model} are random components (variables \nor vectors). More precisely, the entries of the noise matrix ${\bf N}$ are assumed to be \textit{independently} \ndistributed according to a zero mean Gaussian distribution with variance $\sigma_i^2$ depending on the detector. \nFrom physical considerations, ${\bf N}$ models instrumental noise, the level of which varies independently from one \ndetector to another. 
${\bf N}$ is thus a random Gaussian variable with zero mean and covariance matrix \n${\bf \Gamma_N} = \mbox{diag}(\sigma_1^2,\cdots,\sigma_m^2)$. In practice, as the detectors are assumed to be accurately calibrated, \n${\bf \Gamma_N}$ is known with high precision. The log-likelihood function is then given by :\n\begin{equation}\n\label{eq:ll}\n\log P({\bf X} \big| {\bf A},{\bf S},{\bf \Gamma_N}) = -\frac{1}{2} \|{\bf X} - {\bf AS}\|_{2,{\bf \Gamma_N}}^2 + C\n\end{equation}\nwhere $C$ is a constant. The notation $\| . \|_{2,{\bf \Gamma_N}}^2$ stands for the squared Frobenius norm of a matrix ${\bf Y}$ in the noise \ncovariance metric : $\| {\bf Y} \|_{2,{\bf \Gamma_N}}^2 = \mbox{Trace}\left( {\bf Y}^T {\bf \Gamma_N}^{-1} {\bf Y}\right)$. \nFrom a Bayesian point of view, adding physical priors should help the separation task. We first assume no particular knowledge \nabout the emission laws of the components modeled by ${\bf A}$. For simplicity, we consider that each entry of the mixing \nmatrix ${\bf A}$ is drawn \textit{i.i.d.}\footnote{Independently and identically distributed.} from a zero mean uniform distribution. \nNote that it would be possible to add some physical constraint on the emission laws reflected in ${\bf A}$.\n\nIn the general case, source separation is merely a question of diversity and contrast between the sources (see \cite{Cardo1}). \nFor instance, on the one hand JADE relies on non-gaussianity to distinguish between the sources. On the other hand, SMICA takes advantage \nof the diversity of the mixed components' power spectra to achieve the separation task. ``Non-gaussianity\" and ``power spectra diversity\" \nare contrasts between the sources. A combination of both characteristics, ``Non-gaussianity\" and ``power spectra diversity\", \nwas also proposed to separate the CMB from the kinetic SZ signal, which are otherwise indistinguishable \cite{forni}. Recent work has \nalready emphasized sparsity as a source of diversity to improve component separation (see \cite{Zibu} and \cite{MMCA}). \nIn that setting, each source $\{s_j\}_{j=1,\cdots,n}$ is assumed to be sparse in a representation (potentially overcomplete) $\mathcal{D}$. \nFormally, $\mathcal{D}$ is a fixed dictionary of signal waveforms written as a $T \times t$ matrix. We define the set of projection \ncoefficients $\alpha_j$ such that : $\forall j \in \{1,\cdots,n\}, \quad s_j = \alpha_j \mathcal{D}$. Any source $s_j$ is said \nto be sparse in $\mathcal{D}$ if most of the entries of $\alpha_j$ are nearly zero and only a few have ``significant\" amplitudes. \nThe dictionary $\mathcal{D}$ is said to be overcomplete when $T > t$. The attractiveness of overcomplete representations \nin image processing leans on their potential to generate very sparse representations of data based on morphological \ncontent (see e.g. \cite{DH} and references therein).\n\nIn the field of blind source separation we showed in \cite{MMCA} that morphological diversity and sparsity are key properties \nleading to better separation. We noticed that the gist of sparsity-based source separation methods leans on the rationale : \n``\textit{independent sources are distinctly sparse in a dictionary $\mathcal{D}$}\". In that study, we considered the simple \ncase of morphologically different sources : components were assumed to be sparsely represented in different sub-dictionaries. 
\nWe illustrated that such a sparsity prior provides a very effective way to distinguish between sources. In the present paper, \nwe focus on a more general setting : the sources can have similar morphologies (\textit{i.e.} all the sources are sparsely \nrepresented over the whole $\mathcal{D}$). When the overcomplete dictionary $\mathcal{D}$ is made of the union of $D$ orthonormal \nbases (\textit{i.e.} $\mathcal{D} = \left[\Phi_1,\cdots,\Phi_D\right]$), each source is modeled as the linear combination \nof $D$ so-called morphological components (see \cite{SED} for details on Morphological Component Analysis), each morphological \ncomponent being sparse in a different orthonormal basis $\{\Phi_1,\cdots,\Phi_D\}$:\n\begin{eqnarray}\n\forall  j\in \{1,\cdots,n\}, \quad s_j  & = &  \sum_{k=1}^D \varphi_{jk} = \sum_{k=1}^D \alpha_{jk} \Phi_k\n\end{eqnarray}\nFrom a statistical viewpoint, we assume that the entries of $\alpha_{jk} = \varphi_{jk}\Phi_k^T$ are drawn \textit{i.i.d.} from \na Laplacian probability distribution with scale parameter $1/\mu$:\n\begin{equation}\n\label{eq:source_prior}\nP(\varphi_{jk}) \propto \exp\left(- \mu \|\varphi_{jk}\Phi_k^T\|_1\right)\n\end{equation}\nwhere the $\ell_1$-norm $\|.\|_1$ stands for $\|x\|_1 = \sum_{p=1}^t |x[p]|$ in which $x[p]$ is the $p$-th entry of $x$. In practice, \nthe Laplacian prior is well adapted to model leptokurtic sparse signals. We classically assume that the morphological components are \nstatistically mutually independent : $P({\bf S}) = \prod_{j,k} P(\varphi_{jk})$. Estimating the sources ${\bf S}$ is then equivalent \nto estimating the set of morphological components $\{\varphi_{jk}\}_{j=1,\cdots,n;k=1,\cdots,D}$. In this Bayesian context, we propose \nto estimate those morphological components $\{\varphi_{jk}\}$ and the mixing matrix ${\bf A}$ by \textit{maximum a posteriori} (MAP) estimation, \nleading to the following optimization problem:\n\begin{equation}\n\left\{\{\hat{\varphi}_{jk}\},{\bf \hat{A}}\right\} = {\arg\max}_{\{\varphi_{jk}\},{\bf A}} P({\bf X}|{\bf A},\{\varphi_{jk}\},{\bf \Gamma_N}) \prod_{j,k}P(\varphi_{jk}) P({\bf A})\n\end{equation}\nwhere we further assumed that the morphological components $\{\varphi_{jk}\}$ are independent of ${\bf A}$. Owing to Equations~\ref{eq:ll} \nand \ref{eq:source_prior}, the mixing matrix ${\bf A}$ and the morphological components $\{\varphi_{jk}\}$ are obtained by minimizing \nthe following negative log \textit{a posteriori}:\n\begin{equation}\n\label{eq:optim}\n\left\{\{\hat{\varphi}_{jk}\},{\bf \hat{A}}\right\} = {\arg\min}_{\{\varphi_{jk}\},{\bf A}}\|{\bf X} - {\bf AS}\|_{2,{\bf \Gamma_N}}^2 + 2 \mu \sum_{j=1}^n \sum_{k=1}^D \|\varphi_{jk}\Phi_k^T\|_1\n\end{equation}\nwhere $\forall j \in \{1,\cdots,n\},\quad s_j = \sum_{k=1}^D \varphi_{jk}$. Equation~\ref{eq:optim} leads to the GMCA estimates \nof the sources and the mixing matrix in a general sparse component separation context. Interestingly, in the case of CMB data, \nthe sources we look for (CMB, galactic dust and SZ) are quite sparse in the same unique orthonormal wavelet basis. The dictionary\n$\mathcal{D}$ then reduces to a single orthonormal basis $\Phi$. 
In that case, since $\\Phi$ is unitary, Equation~\\ref{eq:optim} \ncan be rewritten as follows :\n\\begin{eqnarray}\n\\label{eq:foptim}\n\\left\\{{\\bf \\hat{\\alpha}},{\\bf \\hat{A}}\\right\\} &=& {\\arg\\min}_{{\\boldsymbol \\alpha},{\\bf A}} \\|{\\bf X}\\Phi^T - {\\bf A\\alpha}\\|_{2,{\\bf \\Gamma_N}}^2 + 2 \\mu \\|{\\boldsymbol \\alpha}\\|_1 \\nonumber \\\\\n\t\t\t\t\t\t&=& {\\arg\\min}_{{\\boldsymbol \\alpha},{\\bf A}} f_\\mu({\\boldsymbol \\alpha},{\\bf A}) = {\\arg\\min}_{{\\boldsymbol \\alpha},{\\bf A}} f_0({\\bf A}, {\\boldsymbol \\alpha}) + 2\\mu f_1({\\boldsymbol \\alpha})\n\\end{eqnarray}\nwhere ${\\boldsymbol \\alpha} = {\\bf S}\\Phi^T$. Note that the estimation is done in the sparse representation $\\Phi$ requiring a single \ntransform of the data ${\\bf X}\\Phi^T$. To remain computationally efficient, GMCA relies on practical transforms which generally involve \nfast implicit operators (typical complexity of $\\mathcal{O}\\left(t\\right)$ or $\\mathcal{O}\\left(t \\log t \\right)$). In \\cite{Zibu}, \nthe authors also used a unique orthonormal wavelet basis. While a gradient descent is used in \\cite{Zibu}, we use a fast and efficient \niterative thresholding optimization scheme which we describe in the next section.\n\n\\subsection{Solving the optimization problem}\n\\label{sec:algo}\nThe \\textit{maximum a posteriori} estimates of the coefficients ${\\boldsymbol \\alpha}$ and the mixing matrix in Equation~\\ref{eq:foptim} \nlead to a non-convex minimization problem. Note that in Equation~\\ref{eq:foptim} the functional to be minimized suffers from several \ninvariances : any permutation or rescaling of the sources and the mixing matrix leaves the product $\\bf A{\\boldsymbol \\alpha}$ unaltered. \nThe scale invariance is computationally alleviated by forcing the columns of ${\\bf A}$ to have unit $\\ell_2$ norm~: $\\forall i\\in{1,\\cdots,n},\\quad a^{i^T}a^i = 1$ \nwhere $a^i$ is the $i$-th column of ${\\bf A}$. \n\nAs solutions of problem~(\\ref{eq:foptim}) have no explicit formulation, we propose solving it by means of a block-coordinate relaxation \niterative algorithm such that each iteration $(h)$ is decomposed into two steps : (i) estimation of the sources ${\\bf S}$ assuming the \nmixing matrix is fixed to its current estimate ${\\bf \\hat{A}}^{(h-1)}$ and (ii) estimation of the mixing matrix assuming the sources are \nfixed to their current estimates ${\\bf \\hat{S}}^{(h)}$. It is not difficult to see that the objective MAP functional in (\\ref{eq:foptim}) \nis continuous on its effective domain and has compact level sets. Moreover, this objective function is convex in the source coefficient \nvectors $(\\alpha_1,\\ldots,\\alpha_n)$, and $f_0$ has an open domain, is continuous and G\\^ateaux differentiable. Thus by \\cite[Theorem 4.1]{Tseng2001}, \nthe iterates generated by our alternating algorithm are defined and bounded, and each accumulation point is a stationary point of the MAP \nfunctional. In other words, our iterative algorithm will converge. Hence, at iteration $(h)$, the sources are estimated from a \\textit{maximum a posteriori} \nassuming ${\\bf A} = {\\bf \\hat{A}}^{(h-1)}$. By classical ideas in convex analysis, a necessary condition for ${\\boldsymbol \\alpha}$ to be \na minimizer is that the zero is an element of the subdifferential of the objective at ${\\boldsymbol \\alpha}$. 
We calculate\\footnote{For clarity, \nwe drop the upper script $(h-1)$ and write $\\hat{\\bf A} = \\hat{\\bf A}^{(h-1)}$.}:\n\\begin{equation}\n\\label{eq:subdiff}\n\\partial_{\\boldsymbol \\alpha} f_\\mu({\\boldsymbol \\alpha},{\\bf A})= -2{\\bf {\\bf A}}^T{\\bf \\Gamma_N}^{-1}({\\bf X}\\Phi^T - {\\bf A}{\\boldsymbol \\alpha}) + 2\\mu \\partial_{\\boldsymbol \\alpha} \\|{\\boldsymbol \\alpha}\\|_1\n\\end{equation}\nwhere $\\partial_{\\boldsymbol \\alpha} \\|{\\boldsymbol \\alpha}\\|_1$ is defined as (owing to the separability of the prior):\n\\[\n\\partial_{\\boldsymbol \\alpha} \\|{\\boldsymbol \\alpha}\\|_1 = \\left\\{U \\in \\mathbb{R}^{n \\times t} \\Bigg| \n\\begin{array}{ccc}\nU{j,k} & = \\mbox{ sign}(\\alpha_{j,k}), & ~ \\alpha_{j,k} \\neq 0 \\\\\nU{j,k} & \\in [-1,1], & ~ \\alpha_{j,k} = 0\n\\end{array} \\right\\}.\n\\]\nHence, Equation \\ref{eq:subdiff} can be rewritten equivalently as two conditions leading to the following (proximal) fixed point equation:\n\\begin{equation}\n\\label{eq:it_estim1}\n\\begin{array}{cc}\n\\hat{\\alpha}_{j,k} = 0, & \\text{if} ~ \\left|{\\left({\\bf A}^T{\\bf \\Gamma_N}^{-1}{\\bf X}\\Phi^T\\right)}_{j,k} \\right| \\leq \\mu \\\\\n{\\bf {\\bf A}}^T{\\bf \\Gamma_N}^{-1}({\\bf X}\\Phi^T - {\\bf A}\\hat{\\boldsymbol \\alpha}) = \\mu \\mbox{ sign}\\left(\\hat{\\boldsymbol \\alpha}\\right), & \\text{otherwise}. \n\\end{array}\n\\end{equation}\nUnfortunately, Equation~\\ref{eq:it_estim1} has no closed-form solution in general. It must be iterated and is thus computationally demanding. \nFortunately, it can be simplified when ${\\bf A}$ has nearly orthogonal columns in the noise covariance matrix (\\textit{i.e.} \n${\\bf \\hat{A}}^T{\\bf \\Gamma_N}^{-1}{\\bf \\hat{A}} \\simeq \\mbox{diag}\\left({\\bf \\hat{A}}^T{\\bf \\Gamma_N}^{-1}{\\bf \\hat{A}}\\right)$). \nLet ${\\bf C} = {\\left({\\bf \\hat{A}}^T{\\bf \\Gamma_N}^{-1}{\\bf \\hat{A}}\\right)}^{-1}{\\bf A}^T{\\bf \\Gamma_N}^{-1}{\\bf X}\\Phi^T$, Equation~\\ref{eq:it_estim1} \nboils down to the following set of equations $\\forall j\\in\\{1,\\cdots,n\\}$:\n\\begin{equation}\n\\label{eq:it_estim}\n\\begin{array}{ccc}\n\\hat{\\alpha}_{j,k} & = 0, \\quad \\text{if} ~ \\left|{{\\bf C}}_{j,k} \\right| \\leq \\mu^{(h)} \\sigma_j^2 \\\\\n\\hat{\\alpha}_j & = {\\left[{\\bf C}\\right]}_j - \\mu \\sigma_j^2  \\mbox{ sign}\\left(\\hat{\\alpha}_j\\right), \\quad \\text{otherwise}.\n\\end{array}\n\\end{equation}\nwhere $[{\\bf Y}]_j$ is the $j$-th row of ${\\bf Y}$. In practice, even if the approximation we make is not strictly valid, such a simplification \nleads to good computational results. These equations are known as soft-thresholding with threshold $\\mu^{(h)} \\sigma_j^2$. We define $\\mathrm{ST}_{\\delta}(.)$, \nthe soft-thresholding operator with threshold $\\delta$. At iteration $(h)$, the sources are thus estimated such that:\n\\begin{equation}\n\\hat{\\alpha}_j^{(h)} = \\mathrm{ST}_{\\mu^{(h)} \\sigma_j^2}\\left(\\left[{\\bf C}\\right]_j\\right)\n\\end{equation}\nThe $j$th source is reconstructed as $\\hat{s}_j^{(h)} = \\hat{\\alpha}_j^{(h)}\\Phi$. The mixing matrix ${\\bf A}$ is then estimated by a maximum \nlikelihood estimate amounting to a simple least-squares update assuming ${\\bf S}$ is fixed. The GMCA algorithm is then described as follows :\n\\begin{flushleft}\n\\vspace{0.15in}\n\\centering\n\\begin{tabular}{|c|} \\hline\n\\begin{minipage}[h]{0.95\\linewidth}\n\\vspace{0.025in} \\footnotesize{\\textsf{1. 
Set the number of iterations $I_{\max}$ and thresholds $\delta_j^{(0)} = \mu^{(0)}\sigma_j^2$\\} \n\textsf{2. While each $\mu^{(h)}$ is higher than a given lower bound $\mu_{\min}$ (e.g. can depend on the noise variance), \\}\n\hspace{0.1in} \textsf{-- Proceed with the following iteration to estimate source coefficients ${\boldsymbol \alpha}$ at iteration $h$ assuming ${\bf A}$ is fixed:}\n\hspace{0.2in} \textsf{$\hat{\alpha}_j^{(h)} = \mathrm{ST}_{\mu^{(h)} \sigma_j^2}\left(\left[{\left({\bf \hat{A}}^T{\bf \Gamma_N}^{-1}{\bf \hat{A}}\right)}^{-1}{\bf \hat{A}}^T{\bf \Gamma_N}^{-1}{\bf X}\Phi^T\right]_j\right)$\\}\n\hspace{0.1in} \textsf{-- Update $\bf A$ assuming ${\boldsymbol \alpha}$ is fixed :}\n\hspace{0.2in} \textsf{${\bf \hat{A}}^{(h)} = {\bf X}\Phi^T{\bf \hat{\boldsymbol \alpha}}^T\left({\bf \hat{\boldsymbol \alpha}}{\bf \hat{\boldsymbol \alpha}}^T\right)^{-1}$\\}\n\textsf{-- Decrease the threshold $\mu^{(h)}$ following a given strategy}}\n\vspace{0.05in}\n\end{minipage}\n\\\hline\n\end{tabular}\n\vspace{0.15in}\n\end{flushleft}\nNote that the overall optimization scheme is based on an iterative and alternating thresholding algorithm involving a \n\textit{coarse to fine} estimation process. Indeed, \textit{coarse} versions of the sources (\textit{i.e.} containing \nthe most ``significant\" features of the sources) are first computed with high values of $\mu^{(h)}$.\n%\begin{equation}\n%\forall j\in\{1,\cdots,n\},\quad \hat{s}_j = \mathrm{ST}_{\mu^{(h)} \sigma_j^2}\left({\left({\bf \hat{A}}^T{\bf \Gamma_N}^{-1}{\bf \hat{A}}\right)}^{-1}{\bf \hat{A}}^T{\bf \Gamma_N}^{-1}{\bf X}\Phi^T\right)\Phi\n%\end{equation}\nIn the early stages of the algorithm, the mixing matrix is then estimated from the most ``significant\" features of the sources, \nwhich are less perturbed by noise. The estimation of ${\bf A}$ and ${\bf S}$ is then refined at each iteration as $\mu^{(h)}$ \n(and thus the thresholds $\{\mu^{(h)}\sigma_j^2\}_{j=1,\cdots,n}$) decreases towards a final value $\mu_{\min}$. We already used \nthis minimization scheme in \cite{MMCA}, where this optimization process provided robustness and helped convergence even in a \nnoisy context. 
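In compact form, each iteration is a weighted least-squares projection followed by soft-thresholding, and then a least-squares update of the mixing matrix. The NumPy sketch below is a minimal transcription of this scheme under simplifying assumptions of ours : a single scalar threshold $\mu$ in place of the per-source thresholds $\mu^{(h)}\sigma_j^2$, a linear threshold decrease, and a pseudo-inverse to guard against degenerate iterates. It is an illustration, not the reference implementation.\n\begin{verbatim}\nimport numpy as np\n\ndef soft_threshold(x, delta):\n    # ST_delta : entrywise soft-thresholding operator\n    return np.sign(x) * np.maximum(np.abs(x) - delta, 0.0)\n\ndef gmca(Xw, Gamma_inv, n_src, mu0, mu_min, n_iter, seed=0):\n    # Xw        : m x t transformed data matrix (X Phi^T)\n    # Gamma_inv : m x m inverse noise covariance (diagonal in the model)\n    m, _ = Xw.shape\n    rng = np.random.default_rng(seed)\n    A = rng.standard_normal((m, n_src))\n    A /= np.linalg.norm(A, axis=0)               # unit l2-norm columns\n    for mu in np.linspace(mu0, mu_min, n_iter):  # decreasing thresholds\n        # (i) source update: weighted least squares + soft-thresholding\n        AtG = A.T @ Gamma_inv\n        alpha = soft_threshold(np.linalg.solve(AtG @ A, AtG @ Xw), mu)\n        # (ii) mixing matrix update: least squares with alpha fixed\n        A = Xw @ alpha.T @ np.linalg.pinv(alpha @ alpha.T)\n        A /= np.linalg.norm(A, axis=0) + 1e-12   # re-impose unit norm\n    return alpha, A\n\end{verbatim}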
Experiments in Section~\\ref{sec:results} illustrate that it achieves good results with GMCA as well.\n\n", "meta": {"hexsha": "92a6f2d075fba44397b34aaee69c0562f7cdc575", "size": 61199, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_isap/archive_tex/mrs_sca.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_isap/archive_tex/mrs_sca.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_isap/archive_tex/mrs_sca.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.9254742547, "max_line_length": 697, "alphanum_fraction": 0.7323158875, "num_tokens": 16838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7634837635542925, "lm_q1q2_score": 0.5829074572110267}}
{"text": "\\problemname{Better Productivity}   \n\nACME Inc.~is reorganizing their factory, in order to maximize their\nproductivity of useless trinkets.  The new factory design consists of\n$p$ independent and identical production lines.  Each production line\ncan be assigned some number of workers.\n\nThe actual work is of course all done by machines, so the production\nrate of a production line is independent of the actual number of\nworkers assigned to it.  However, the workers work in shifts, and due\nto local safety regulations, a production line can only be active\nwhile all of its workers are present (any worker who arrives before\nthe last person to arrive, or leaves after the first person leaves,\nwill be spending the inactive time having a coffee break).  In other\nwords, the productivity of a production line equals the length of the\ntimespan during which all of the workers assigned to this production line are present.  Crucially, the\nproductivity of each line \\emph{must} be positive (i.e., the last worker to arrive for a line must\narrive strictly before the first worker for that line leaves), since otherwise the\nworkers feel that their jobs are meaningless.\n\nUnfortunately, due to some pesky labor laws, ACME are not allowed to\nfire any of their workers, which means that each of their $n$ workers\nmust be assigned to some production line, even though this may\nactually decrease the overall productivity of the factory.\n\nAll these rules and regulations are making a headache for ACME\nmanagement.  Can you help them figure out the maximum possible total\nproductivity (sum of productivities of the $p$ production lines) of\ntheir new factory?\n\n\\section*{Input}\n\nThe input consists of:\n\\begin{itemize}\n\\item one line containing two integers $n$ and $p$ ($1 \\le p \\le n \\le 200$), the number of employees and the number of production lines;\n\\item $n$ lines each containing two integers $a$ and $b$ ($0 \\le a < b \\le 100\\,000$), representing a worker that arrives at time $a$ and leaves at time $b$.\n\\end{itemize}\n\nYou may assume that there exists at least one valid assignment of\nworkers to production lines.\n\n\\section*{Output}\n\nOutput the maximum productivity level possible for the\nfactory. 
\n\n", "meta": {"hexsha": "6ece20933b960c5c81d74fb1c12be41ede43d768", "size": 2195, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/betterproductivity/problem_statement/problem.en.tex", "max_stars_repo_name": "stoman/CompetitiveProgramming", "max_stars_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-22T13:21:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-12T22:26:26.000Z", "max_issues_repo_path": "problems/betterproductivity/problem_statement/problem.en.tex", "max_issues_repo_name": "stoman/CompetitiveProgramming", "max_issues_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/betterproductivity/problem_statement/problem.en.tex", "max_forks_repo_name": "stoman/CompetitiveProgramming", "max_forks_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.7021276596, "max_line_length": 157, "alphanum_fraction": 0.7835990888, "num_tokens": 505, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5829074572110263}}
{"text": "\n\\section{Complexity Analysis\\label{sec:complexity}}\nIn this section, we analyze the complexity of the algorithms required by \\qbkix. \nThe input to our overall algorithm is a domain boundary $\\Gamma$ with $\\Ninit$ patches and boundary condition $f$.\nWe begin with a summary of algorithm parameters that impact complexity: \n\n\\begin{itemize}\n\\item The number of patches $N$ \\emph{after} admissibility refinement.  \n  This is a function of $\\Ninit$, the geometry of $\\Gamma$, the definition of $f$, and the choices of parameters $a$ and $b$ in check point construction. \n\\item Quadrature order $q$ and the degree of smoothness $k$ of $\\Gamma$ and $f$.\n  We assume that $k$ is sufficiently high to obtain optimal error behavior for a given $q$ by letting $k=2q$ in \\cref{eq:error_quad_gen_target}.\n\\item \\qbkix interpolation order $p$. \n\\item The numbers of evaluation points in different zones $\\Nf$, $\\Ni$, and $\\Nn$, with $\\Nt = \\Nf+\\Ni+\\Nn$.\n%\\item Maximum upsampling ratio $m = \\max_i m_i$, where $m_i$ is the upsampling ratio for the $i$-th patch.\n%\\item Target error $\\etrg$ used to determine check point location.\n\\end{itemize}\n\n%The complexity is also affected by the geometric characteristics of the surface. These include maximum patch size $\\Lmx$, relative minimal patch size $\\Lmn = \\beta_0 \\Lmx$, $\\beta_0 \\leq 1$, as well as \\emph{minimal feature size} $\\ellm = \\alpha_0\\Lmx$ and \\emph{the variation of area distortion} of the parametrization $C_J$, which we now define more precisely.  \n%The local feature size $\\ellm$ is defined in terms of the \\emph{medial axis} of the surface.  The medial axis of a surface $\\Gamma$, $M(\\Gamma)$, is the set of points in $\\mathbb{R}^3$ with more than one closest point on $\\Gamma$. For $\\vx \\in\\Gamma$, the local feature size  $\\ell(\\vx)$ is the distance from $\\vx$ to $M(\\Gamma)$. We assume that the minimal feature size \\emph{relative to $\\Lmx$} is bounded from below by $\\alpha_0$, i.e., $\\ell(\\vx) \\geq \\alpha_0 \\Lmx$.  \n%The variation of the area distortion $C_J(P)$ is $\\max_{(u,v)} |J|/\\min_{(u,v)} |J|$, where $J$ is the Jacobian of the patch parametrization for an initial patch $P$, and $C_J$ is the maximum of $C_J(P)$ over all patches; this constant indicates how nonuniform the parametrization is, and allows us to estimate how the patch size decreases with refinement \\note[MJM]{add a sentence with example: high $C_J$ vs. low $C_J$}. \n%We assume that the $\\alpha_0$, $\\beta_0$ and $C_J$ are independent of $N^{init}$. We also assume that surface principal curvatures are bounded, independently of $N^{init}$. \nThe complexity is also affected by the geometric characteristics of $\\Gamma$. \nThese include:\n\\begin{itemize}\n  \\item The \\textit{maximum patch length} $\\Lmx = \\max_P L(P)$\n  \\item The \\textit{relative minimal patch length} $\\Lmn = \\beta_0 \\Lmx$, $\\beta_0 \\leq 1$.\n  \\item The \\textit{minimal feature size relative to $\\Lmx$}, $\\ellm = \\alpha_0\\Lmx$, which is defined in terms of the \\textit{local feature size} and the \\textit{medial axis} of $\\Gamma$.\nThe medial axis of $\\Gamma$, denoted $M(\\Gamma)$, is the set of points in $\\mathbb{R}^3$ with more than one closest point on $\\Gamma$. 
\nFor $\\vy \\in\\Gamma$, the local feature size  $\\ell(\\vy)$ is the distance from $\\vy$ to $M(\\Gamma)$.\nWe assume that the local feature size is bounded below by $\\alpha_0\\Lmx$, i.e., $\\ell(\\vy) \\geq \\alpha_0 \\Lmx = \\ellm$ for $\\vy \\in \\Gamma$.\n  \\item The \\emph{maximum variation of area distortion} of the parametrization $C_J$.\nThe variation of the area distortion of a patch $P$ is $C_J(P) = \\max_{(u,v)} |J_P(u,v)|/\\min_{(u,v)} |J_P(u,v)|$, where $J_P(u,v)$ is the Jacobian of $P$ at the point $(u,v)$. \nWe define $C_J = \\max_{P\\in\\Gamma} C_J(P)$.\nThis value is an indicator of how non-uniform the parametrization of $P$ is and allows us to estimate how the patch length decreases with refinement. \n\\end{itemize}\n\n\nWe assume that the $\\alpha_0$, $\\beta_0$ and $C_J$ are independent of $\\Ninit$. \nWe also assume that principal curvatures are bounded globally on $\\Gamma$ and independent of $\\Ninit$. \nWe now briefly summarize the results of this section:\n\n\\begin{itemize}\n\\item \\emph{Admissibility.} (\\cref{sec:complexity_admissibility}) The complexity of this step is $O(\\Ninit \\log \\Ninit)$,\n  with constants dependent on $\\alpha_0$, $\\beta_0$ and $C_J$. \n  The logarithmic factor is due to use of an \\aabb tree for closest surface point queries. \n\n\\item \\emph{Upsampling.} (\\cref{sec:complexity_upsampling}) The complexity of upsampling is $ O(m  N \\log(N))$, where $m$ is the upsampling ratio.\n  The logarithmic factor appears for similar reason to admissibility,\n  with constants that depend on geometric parameters and the boundary condition through the error estimate of \\cref{sec:error}.\n  We show that the upsampling ratio is independent of $N$.\n  %The upsampling ratio is $m = O(\\etrg^{-1/q})$.\n  \n\\item \\emph{Point marking.} (\\cref{sec:complexity-point-marking}) Identifying which zone an evaluation point belongs to ($\\Omega_F, \\Omega_I$ or $\\Omega_N$) depends on $N$ and the total number of points to be classified $\\Nt = \\Nf + \\Ni + \\Nn$. \n  The complexity is $O(\\Nt \\log N)$ with constants dependent on geometric parameters, due to the cost of closest surface point queries. \n  \n\\item\\emph{ Far, intermediate and near zone integral evaluation.} (\\cref{sec:complexity_matvec}) The complexity of these components depends on $N$ and  $\\Nf$, $\\Ni$ and $\\Nn$ respectively, with the general form $O(s_1 N + s_2 N')$, where $N'$ is the number of evaluation points in the corresponding class.  For the far field, $s_1 = s_2 = 1$.  For the intermediate evaluation,\n$s_2 =1$, and $s_1 = m q^2$; finally, for the near zone, $s_2 = p$, and $s_1 = mq^2$, the same as in the intermediate zone.  \nIf $b$ is chosen appropriately, the intermediate and near zone error is $\\etrg$. \n\n  %\\note[DZ]{This is not quite true: we should probably modify the upsampling alg description, so that we actually get\n  %  this in the near zone, not just intermediate; otherwise it is a pain to explain, as there is also the extra dependence on the extrapolation order, so smth like $(\\Lmx*2^\\eta_1)^q$}\n  %  \\note[MJM]{fix} \n  \\item \\emph{\\gmres solve.} Due to the favorable conditioning of the double-layer formulation in \\cref{eq:int-eq}, \\gmres converges rapidly to a solution in a constant number of iterations for a given $\\Gamma$ that is independent of $N$. 
\n    This means that the complexity to solve \\cref{eq:int-eq} is asymptotically equal (up to a constant dependent on $\\Gamma$) to the complexity equal to a near-zone evaluation with $\\Nn=N(q+1)^2$. \n\\end{itemize}\n\n\n\\subsection{Admissibility \\label{sec:complexity_admissibility}}\n\nThe patch refinement procedure \\cref{sec:admissible} to enforce \\cref{criteria:1,criteria:2} of admissibility and achieve given approximation errors of the geometry $\\err{g}$ and boundary data $\\err{f}$ is a local operation on each patch.\nIf we assume that $\\Lmn$, $\\Lmx$, the partial derivatives of all patches composing $\\Gammah$, and the partial derivatives of $f$ are bounded, then \n errors $\\err{g}$ and $\\err{f}$ can always be achieved after a fixed number of refinement steps. As a consequence, this stage must have complexity $O(\\Ninit)$. \n\n\nWe focus on the additional refinement needed to satisfy \\cref{criteria:3}: ensuring that each check center $\\vhc$ is closest to its corresponding quadrature point $\\vy$. \nThis can be restated in terms of local feature size: for a quadrature patch $P \\in \\Gamma$ and quadrature node $\\vx \\in P$ with check center $\\vhc$, $\\|\\vx - \\vhc\\|_2 \\leq \\ell(\\vx) \\leq \\alpha_0 L_0$. \nWe will first relate the number of required refinement steps $\\eta$ to satisfy \\cref{criteria:3} to the shape parameters $\\alpha_0$ and $C_J$, then we will show that this number does not depend on $N$ under our assumptions.\n\nRecall that the distance from a check center to the surface  for a patch $P$ is given by\n$R + r(p+1)/2 = (a+ (p+1)b/2)L(P)= K L(P)$. %, where $L(P)$ is the square root of the area of $P$.\nAfter $\\eta$ refinement steps, the area of each child of $P$ relative to $P$ itself will have decreased by at least by $C_J(P)(1/4)^\\eta$.  \nSince the distance from $\\vhc$ to the surface is proportional to $L(P)$, we can estimate the required level\nof uniform refinement to satisfy \\cref{criteria:3} by requiring that the check center distance is less than the minimal local feature size, then taking the maximum value of $L(P)$ over all patches:\n\\[\nK \\Lmx \\sqrt{C_J} (1/2)^\\eta \\leq \\ellm = \\alpha_0 \\Lmx\n\\]\nThis yields\n\n\\begin{equation}\n  \\eta   =  \\lceil -\\log_2 \\frac{\\alpha_0}{K \\sqrt{C_J}}\\rceil,\n\\label{eq:eta-estimate}\n\\end{equation}\nwhich we note depends only on nondimensional quantities $\\alpha_0$, $K$ and $C_J$ characterizing the shape of the surface and its parametrization.  \nIf we assume these to be independent of $N$, then\nthe number of required levels of refinement $\\eta$ are also independent of $N$. \nThis means that the number of patches $N$ generated \\cref{alg:admissibility} is a linear function of $\\Ninit$, bounded by $4^{\\eta} \\Ninit$.\n\nNext, we estimate the complexity of work per patch in \\cref{alg:admissibility} to determine if a given patch requires refinement. \nAs described in \\cref{sec:admissible}, for each patch, we query the \\aabb tree $T_B$ for patches that are at the distance $R + r(p+1)/2 = K L(P)$ from a check center $\\vhc$.\nThe cost of the query is logarithmic in the number of patches $\\Ninit$ and proportional to the number of patches $N(\\vhc)$ returned.  \nThis means that we need to estimate the number of patches that can be within the distance $K L(P)$ from $\\vhc$.\n\nConsider an area element $dA$ of $\\Gammah$ at a point $\\vx_0$. 
The parallel surface of $dA$,\ngiven by $\vx_0 + h \vn(\vx_0)$, does not have self-intersections when $|h| \leq \ellm$ and has a corresponding area element given by $dA^{h} = (1+ h \kappa_1)(1+h \kappa_2)dA$ \cite[Section 6.2]{K}, where $\kappa_1$ and $\kappa_2$ are the principal curvatures of $\Gammah$ at $\vx_0$.\nThe volume of the truncated cone bounded by $dA$ and $dA^{h}$ of height $\ellm$ can be computed directly from the integral $\int_0^{\ellm} dA^h dh$:\n\[\n  dV = dA \ellm (1 + \frac{1}{2} (\kappa_1 + \kappa_2) \ellm + \frac{1}{3} \kappa_1 \kappa_2 \ellm^2)   = dA\ellm(1 + \frac{1}{2} H \ellm + \frac{1}{3} K_G \ellm^2)\n\]\nwhere $K_G$ and $H$ are the Gaussian and mean curvatures respectively (we write $K_G$ to avoid a clash with the constant $K$ above). As principal curvatures satisfy\n$\kappa_i \geq -1/\ellm$, this expression has minimal value for $\kappa_1 = \kappa_2 = -1/\ellm$:\n\begin{equation}\ndV \geq \frac{1}{3}\ellm dA\n\label{eq:vol-lower-bound}\n\end{equation}\nIn other words, each surface element $dA$ has associated with it a volume of at least $\frac{1}{3}\ellm dA$ containing no other surface elements.  From this, we can estimate the total area of surface contained within distance $K L(P)$ from $\vhc$ by equating \cref{eq:vol-lower-bound} with the volume of a sphere of radius $KL(P)$, producing $4 \pi K^3 L(P)^3/\ellm$.\nSince the area of each patch is at least $\Lmn^2$, the number of patches within distance $KL(P)$ of $\vhc$ is bounded by\n\n\begin{equation}\n  N(\vhc) \leq  4 \pi K^3 \frac{L(P)^3}{\ellm \Lmn^2} \leq 4 \pi K^3 \frac{\Lmx^3}{\ellm \Lmn^2} =\n  \frac{4 \pi K^3}{\alpha_0 \beta_0^2}\n \label{eq:numpatches-estimate}\n \end{equation}\n\nThis is independent of $\Ninit$, which means that the complexity of nearest patch retrieval is $O( \Ninit \log \Ninit)$, with constant given by the product of\n\eqref{eq:numpatches-estimate} and $4^\eta$, with $\eta$ given by \eqref{eq:eta-estimate}.\n\nTo complete the complexity estimate of the admissibility refinement, we need to estimate the cost of computing the closest point on each patch.\nThe complexity of Newton's method for finding roots of polynomials in \cref{app:closest_point} depends only on the polynomial degree and the desired accuracy of the optimization, which we can assume to be bounded by floating-point precision \cite{schleicher2017newton}.\nWe conclude that the overall complexity of admissibility refinement is $O( \Ninit \log \Ninit)$ with constants proportional to the patch degree and optimization accuracy.\n\n\subsection{Upsampling\label{sec:complexity_upsampling}}\nWe estimate the complexity of the upsampling algorithm in \cref{sec:adaptive_upsampling} in terms of $N$, the number of patches produced by admissibility refinement, and a parameter $\eps$, which is the desired accuracy achieved by the final upsampled patches at the check points.\nAs the distance from the surface to the check points $\vc_i$ is bounded from below by\n$a \Lmn$, the $\tilde{V}$ term \nin \cref{eq:error_quad_gen_target} is bounded from above by $C \Lmn^{-2q-1}$, for a constant $C$ independent of $q$. \nFurthermore, since $\Gammah$ and $f$ are assumed to be smooth, the density and its derivatives can also be assumed to be bounded.\nThe overall form of the estimate in \cref{eq:error_quad_gen_target} can then be bounded and written as $\tilde{C}(q) \Lmn^{-2q-1} \tilde{L}^{2q}$ for some constant $\tilde{C}(q)$. 
\nThe maximum patch length obtained by refinement $\tilde{L}$ is \n\begin{equation}\n  \tilde{L} = \Lmx^\lbl{fine} \leq \Lmx 2^{-\tilde{\eta}},\n  \label{eq:Ltilde}\n\end{equation}\nwhere $\tilde{\eta}$ is the maximum amount of required patch refinement.\nBy setting $\tilde{C}(q) \Lmn^{-2q-1} \tilde{L}^{2q} \leq \eps$ and using \cref{eq:Ltilde}, we can \nobtain an upper bound for $\tilde{\eta}$ as a function of $\Lmn$, $\Lmx$, and $\eps$:\n\begin{equation}\n  \tilde{\eta} \leq -\frac{1}{2q}\log_2 \left( \frac{\eps}{\Lmn^{-2q-1} \Lmx^{2q} \tilde{C}(q)}\right)= \log_2 \eps^{-1/(2q)} + \bar{C}(q,\Lmn,\Lmx),\n  \label{eq:tildeeta}\n\end{equation}\nfor some constant $\bar{C}(q,\Lmn,\Lmx)$.\n\nThe number of points generated by upsampling is $O(4^{\tilde{\eta}} N )$.\nTaking powers of both sides of \cref{eq:tildeeta} yields an\nestimate in terms of $\eps$:  $O( (2^{\tilde{\eta}})^2N ) \leq O(\eps^{-2/(2q)}N ) =  O(\eps^{-1/q}N )$. \nAs discussed in \cref{sec:complexity_admissibility}, the closest point computation needed to determine if a checkpoint is in $\Omega_I$ has $\log (N)$ cost per point, leading to $O(\eps^{-1/q} N \log(N))$ overall complexity and an upsampling factor of $\eps^{-1/q}$. \nSince we desire upsampled quadrature with an accuracy of $10^{-12}$, we set $\eps$ accordingly to arrive at the desired complexity.\n\n\n%Note that both $C$ and $\tilde{C}(q)$ above depend on higher-order derivatives of the patch $P$ and therefore implicitly on principal curvatures $\kappa_1$ and $\kappa_2$.\n%This partial explains the difference in accuracy between \cref{eq:error_quad_gen_target} and \cref{eq:rem2d_heuristic}.\n%In practice, we use the more precise error bound \cref{eq:rem2d_heuristic}, so the actual refinement is less than \cref{eq:error_quad_gen_target} might suggest in this analysis\n\n\subsection{Point marking\label{sec:complexity-point-marking}}\nIn the point marking algorithm of \cref{app:point_marking}, we first use the Laplace \fmm to cull points far from $\Gamma$, which requires $O(N+\Nt)$ time. \nLet $\bar{L} = \frac{1}{M}\sum_{P \in \Pcoarse}L(P)$ be the average patch length, where $M$ is the number of patches in $\Pcoarse$.\nAfter \fmm culling, the remaining unmarked evaluation points are those whose distances from $\Gamma$ are approximately $\bar{L}$ or less.  For each unmarked point $\vx$, we query the \aabb tree $T_T$ for the nearest triangle in the linear approximation of $\Pcoarse$.\n\nSince there are $O(N)$ such triangles in $T_T$, we can perform this query in $O(\log N)$ time \cite{samet2006foundations}.\nThis triangle provides a candidate closest patch that is distance $d_0$ from $\vx$. \nWe then use $d_0$ to query $T_B$ for all bounding boxes within distance $d_0$ of $\vx$.  \nThis query too can be performed in $O(\log N)$ time \cite{samet2006foundations} and returns a bounded number of boxes, each of which is processed in constant time, as discussed in \cref{sec:complexity_admissibility}.\nAs the number of unmarked points after culling is bounded above by $\Nt$, the overall complexity of our marking scheme is $O(\Nt \log N)$.\n\n\n\subsection{Integral evaluation complexity \label{sec:complexity_matvec}}\nWe assume that the geometric admissibility criteria are already satisfied. \nAll integral evaluation is accelerated using an \fmm with complexity $O(N + \Nt)$. 
\n\\paragraph{Far zone}  The complexity of far evaluation is just the complexity of computing the integrals on $\\Pcoarse$ using standard quadrature and \\fmm acceleration, i.e.,  $O(q^2N + \\Nf)$.% \\sim O(n^2N + \\Nf)$.\n\n\\paragraph{Intermediate zone} The complexity of the intermediate zone evaluation is similar to that of the far zone.\nHowever the computation is performed on $\\Pfine$ rather than $\\Pcoarse$, which is up to $m$ times finer than $\\Pcoarse$,  with $m = O(\\eps^{-1/q})$ and $\\eps=10^{-12}$.\nThe density values must be interpolated from points in $\\Pcoarse$ to points in $\\Pfine$: this can be computed in $O(mq^4N)$ time using a \\twod version of the barycentric interpolation formula \\cite{BT}.\nThis yields an overall complexity of  $O(mq^4N + m q^2N + \\Ni)$.\nAlthough not asymptotically dominant, for all practical target errors, the quadrature evaluation is the dominant cost in practice due to suppressed \\fmm-related constants, as demonstrated in \\cref{sec:results-compare}.\n\n\\paragraph{Near zone} %In our algorithm, the near zone includes points on the surface itself. \n\\cref{sec:singular-eval} requires a closest point computation, an intermediate-zone evaluation at $p$ check points and an extrapolation for each target point in $\\Omega_N$.\nThe intermediate zone calculation is the dominant cost, resulting in a complexity of $O(mq^4N + m q^2 N + p\\Nn)$.\n\n\\paragraph{\\gmres solve} As a result of the second-kind integral formulation in \\cref{sec:formulation}, the cost of solving \\cref{eq:int-eq} via \\gmres is asymptotically equal to the cost of a single singular integral evaluation, since the low number of iterations are independent of $N$.  \nIn our algorithm, this is a special case of near-zone evaluation with $\\Nn = q^2N$, producing a complexity of $O(mq^4N + mq^2N + pq^2N) =O( (m+p+mq^2) q^2 N)$. 
\n\n\\paragraph{Overall complexity for uniform point distribution}\nWe now suppose that we wish to evaluate the solution $u$ determined by a density $\\phi$ at a set of uniformly distributed points throughout $\\Omega$.\nWe also assume that $\\Gammah$ is discretized uniformly by $N$ patches, i.e., $\\Lmx = O(N^{-1/2})$ and that the distances between samples in $\\Omega$ and from samples to $\\Gammah$ are also $O(N^{-1/2})$.\nSince the total number of evaluation points is proportional to $1/\\Lmx^3$, this implies that $\\Nt = O(N^{3/2})$.\n\nThe size of the intermediate zone $\\Omega_I$ is bounded by the estimate discussed in \\cref{sec:complexity_upsampling}.\nLetting $d_I$ be the shortest distance along a normal vector of $\\Gammah$ which is contained in $\\Omega_I$, following the discussion in \\cref{sec:complexity_upsampling} yields the following relation:\n\\begin{equation}\n  \\tilde{C}(n) d_I^{-2q-1} \\Lmx^{2q} \\leq \\eps.\n\\end{equation}\nSolving for $d_I$ gives us\n\\begin{equation}\n  d_I \\leq \\left(\\frac{\\eps}{C(n)}\\right)^{-\\frac{1}{2q-1}} \\left(\\Lmx\\right)^{\\frac{2q}{2q-1}}.\n\\end{equation}\nWe are interested in the regime as $N \\to \\infty$, or $\\Lmx \\to 0$.\nSince $\\Lmx^{\\frac{2q}{2q-1}} \\leq \\sqrt{\\Lmx} = O(N^{-1/4})$, this gives us \n\\begin{equation}\n  d_I \\leq \\left(\\frac{\\eps}{C(n)}\\right)^{-\\frac{1}{2q-1}} N^{-1/4} = O(\\eps^{-1/2q}N^{-1/4}) = O(\\sqrt{m}N^{-1/4}),\n\\end{equation}\nafter recalling from above that $m = O(\\eps^{-1/q})$ is the average upsampling rate to produce $\\Pfine$ from $\\Pcoarse$.\nThe size of the near zone is, by construction, of the order $\\Lmx$.\nIt follows that $\\Ni =O(  \\sqrt{m} N^{5/4})$, and $\\Nn =O(N)$.\n\nThe overall complexity for this evaluation is the sum of the cost of each separate evaluation: \n\\begin{align*}\n  &O(q^2N + \\Nf + mq^4 N + mq^2 N + \\Ni + mq^4 N + mq^2N + p\\Nn)\\\\\n  %&= O((m + mq^2)q^2N + \\Nf + \\Ni  + p\\Nn)\\\\\n  &= O\\left( (m+mq^2)q^2N + \\Nt + (p-1)\\Nn \\right)\n\\end{align*}\nUsing the estimates for $\\Nt$ and $\\Nn$ and dropping dominated terms, we obtain  $O( (m + m q^2)q^2 N + N^{3/2})$ for the overall complexity.\nThis suggests that for a given $q$ and $\\eps$, the minimal cost is obtained from choosing the number of discretization points \n$N =O(m^2)$, i.e., $ N = O(\\eps^{-2/q})$. \n\n%The overall complexity, $O(n^2N + \\Nf + mn^2 N + \\Ni + n\\Nn  + (m+n) n^2 N)$,\n%i.e., $O( (m + n)n^2 N  + \\Ni + n \\Nn + \\Nf) = O( (m+n)n^2 N + \\Nt + (n-1)\\Nn)$.  \n%Using the estimates for $\\Nt$ and $\\Nn$,\n%we obtain, dropping  dominated terms,  $O( m n^2 N + N^{3/2})$ for the overall complexity.\n%This suggests that for a given target error $\\etrg$, the minimal cost is obtained from choosing the number of surface points\n%$N \\sim m^2$, i.e., $\\etrg^{-4/n}$. 
\n\n\n\n\n\\iffalse\n\\subsection{AABB Tree complexity}\nIt is a non-trivial task to prove that an AABB tree query has logarithmic complexity with respect to the number of contained boxes for general geometries in 3D.\nSince the locations of the bounding boxes are a function of $\\Gamma$, they have no well-defined spatial partition in general; this makes it challenging to make claims about the tree depth.\nThis is a research topic in its own right \\note[MJM]{list two recent-ish papers}.\nHowever, Proposition 1 of \\cite{RKO} similarly applies to AABB trees:\n\\begin{proposition}\n  Any query of an AABB tree requires $O(d+n_\\lbl{box})$ work, where $d$ is the number of levels in the tree, and $n_\\lbl{box}$ is the number of bounding boxes returned from the query.\n\\end{proposition}\n\\note[MJM]{we can comment about how no closed boundary can have the worst case complexity of \\cite{hammer2002box} globally, but I don't think we can prove $\\log(N)$ time}.\n\n\\subsection{Complexity comparison with \\cite{YBZ}}\n\\note[MJM]{some comments about why we're only looking at one solver, not sure how to justify}\nWe compare \\qbkix with the singular quadrature scheme of \\cite{YBZ}, which we will refer to as \\textit{partition of unity quadrature}, or \\pou quadrature.\nThis scheme is quite different from the \\qbkix method presented in this work; we summarize the key differences to avoid confusion.\n\\note[MJM]{add image highlighting difference in discretization, overlapping vs non overlapping}\nIn \\cite{YBZ}, $\\Gamma$ is defined as a set of compactly supported overlapping patches, detailed in \\cite{ying2004simple}.\nEach patch is discretized with a periodic tensor-product trapezoidal rule with spacing $h$.\nThe potential is computed at all discretization points; then a new partition of unity function $\\eta(\\vx)$, centered at the target point, is used to subtract the inaccurate near contribution.\n$\\eta(\\vx)$ is then integrated with a periodic polar trapezoidal rule, which cancels the singularity by centering the polar grid at the target, and added to the accumulated potential.\n\nThe complexity of this algorithm is $O(N^{3/2})$, because the number of points used in the polar trapezoidal rule for each target point is $O(\\sqrt{N})$.
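\nSpelling that count out: with $O(N)$ on-surface targets and $O(\\sqrt{N})$ polar quadrature points per target, the singular correction alone costs\n\\begin{equation*}\n  O(N) \\cdot O(\\sqrt{N}) = O(N^{3/2}).\n\\end{equation*}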
\nAsymptotically, \\qbkix will outperform \\pou quadrature.\nHowever, consider that the dominant cost of \\qbkix is the \\fmm evaluation of size $O(kN + pN)$; meanwhile, the \\fmm evaluation of \\pou quadrature is simply $O(N+N)$, since the sources and targets coincide.\nThis also implies that the size of the \\fmm tree for \\pou quadrature is smaller than for \\qbkix, whose source and target points are more widely distributed in space.\nFor small values of $N$ and low target accuracies, we can expect \\pou quadrature to have a shorter \\fmm setup and evaluation time than \\qbkix, and therefore a lower overall runtime.\n\n\\fi\n", "meta": {"hexsha": "33e38cf422990509db6f203226bd4d2d19ee58aa", "size": 23622, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hedgehog/complexity.tex", "max_stars_repo_name": "mmorse1217/nyu-thesis-template", "max_stars_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hedgehog/complexity.tex", "max_issues_repo_name": "mmorse1217/nyu-thesis-template", "max_issues_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hedgehog/complexity.tex", "max_forks_repo_name": "mmorse1217/nyu-thesis-template", "max_forks_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.1396226415, "max_line_length": 474, "alphanum_fraction": 0.7245787825, "num_tokens": 7014, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5829074489935083}}
{"text": "\\subsection{Test 1}\r\n\r\n\\begin{enumerate}[label=\\arabic*.]\r\n\t\\item \r\n\t\tDetermine if $y - \\ln{y} = x^2 + 1$ is an implicit solution to $\\dd{y}{x} = \\frac{2xy}{y-1}$. Give reasons for your answer.\r\n\t\\item \r\n\t\tFind the function $y(x)$ that satisfies the differential equation\r\n\t\t\\begin{equation*}\r\n\t\t\t\\dd{y}{x} + 4y - e^{-x} = 0 \\text{ and } y(0) = \\frac{4}{3}\r\n\t\t\\end{equation*}\r\n\t\\item\r\n\t\tFind the solution to the initial value problem $\\dd{x}{t} = x\\frac{(t-3)^2}{x+2} \\text{, } x(3) = -1$. You may leave the solution in implicit form.\r\n\t\\item\r\n\t\tFind the general solution to the differential equation $y''' + y'' - 2y = 0$.\r\n\t\\item\r\n\t\tA tank contains 90kg of salt and 2000L of water. Pure water enters the tank at a rate of 3L/min. The solutions is mixed and drains from the tank at a rate of 6L/min.\r\n\t\t\\begin{enumerate}[label=(\\alph*)]\r\n\t\t\t\\item\r\n\t\t\t\tWhat is the amount of salt in the tank initially?\r\n\t\t\t\\item \r\n\t\t\t\tFind the amount of salt in the tank after 2 hours.\r\n\t\t\\end{enumerate}\r\n\t\\item\r\n\t\tA detective arrives at a murder scene at noon that has an ambient temperature of $16^{\\circ}$C. He measures the temperature of the body as $34^{\\circ}$C. He then returns an hour later and measures the temperature of the body as $32^{\\circ}$C.\r\n\t\tAssuming the normal body temperature is $37^{\\circ}$C, write a differential equation using Newton's law of cooling that models the situation. Describe, without actually solving the equation, how one would solve the equation to find then the murder took place.\r\n\\end{enumerate}", "meta": {"hexsha": "a440f43c73494ff27aad8b4698d5ea903944ab53", "size": 1522, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/additionalResources/tests/test1.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "diffEq/additionalResources/tests/test1.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffEq/additionalResources/tests/test1.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.5384615385, "max_line_length": 262, "alphanum_fraction": 0.6839684625, "num_tokens": 469, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7634837527911056, "lm_q1q2_score": 0.5829074489935082}}
{"text": "\\documentclass{article}\n\\usepackage[elliptic]{qed}\n\n\\begin{document}\n\n\\section*{Elliptic integrals}\n\nThe complete elliptic integrals of the first and second kinds are defined as\n%\n\\begin{qed}\n    \\ellipk(k^2) = \\int_0^{\\frac{\\pi}{2}}\n    \\frac{1}{\\sqrt{1 - k^2 \\sin^2\\theta}} \\, d\\theta\n\\end{qed}\n%\nand\n%\n\\begin{qed}\n    \\ellipe(k^2) = \\int_0^{\\frac{\\pi}{2}}\n    \\sqrt{1 - k^2 \\sin^2\\theta} \\, d\\theta\n\\end{qed}\n%\nrespectively. They have the following properties:\n%\n\\begin{qed}\n    \\ellipk(0) = \\frac{\\pi}{2}\n\\end{qed}\n%\n\\begin{qed}\n    \\ellipe(0) = \\frac{\\pi}{2}\n\\end{qed}\n%\n\\begin{qed}\n    \\ellipk(1) = \\cinfty\n\\end{qed}\n%\n\\begin{qed}\n    \\ellipe(1) = 1\n\\end{qed}\n%\n\\begin{qed}\n    \\frac{d}{d k}\\ellipk(k^2) = -\\frac{\\ellipk(k^2)\n        + \\frac{\\ellipe(k^2)}{k^2 - 1}}{k}\n\\end{qed}\n%\n\\begin{qed}\n    \\frac{d}{d k}\\ellipe(k^2) = \\frac{\\ellipe(k^2) - \\ellipk(k^2)}{k}\n\\end{qed}\n%\nThey also satisfy the Legendre relation:\n%\n\\begin{qed}\n    \\ellipe(k^2) \\ellipk(1-k^2) + \\ellipe(1 - k^2) \\ellipk(k^2)\n    - \\ellipk(k^2) \\ellipk(1 - k^2) = \\frac{\\pi}{2}\n\\end{qed}\n%\n\\end{document}\n", "meta": {"hexsha": "c2e2eb20fef99a92707b3d4df8d6d45bd3de023b", "size": 1079, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "examples/elliptic_integrals.tex", "max_stars_repo_name": "rodluger/qed", "max_stars_repo_head_hexsha": "36c9cd37c5f29398e36b6a35790c678cd099535c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-15T18:49:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-15T18:49:48.000Z", "max_issues_repo_path": "examples/elliptic_integrals.tex", "max_issues_repo_name": "rodluger/qed", "max_issues_repo_head_hexsha": "36c9cd37c5f29398e36b6a35790c678cd099535c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2021-03-15T17:27:11.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-22T18:10:56.000Z", "max_forks_repo_path": "examples/elliptic_integrals.tex", "max_forks_repo_name": "rodluger/qed", "max_forks_repo_head_hexsha": "36c9cd37c5f29398e36b6a35790c678cd099535c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.9298245614, "max_line_length": 76, "alphanum_fraction": 0.5866543095, "num_tokens": 474, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5828841265857063}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{natbib,amsfonts,graphics,amsmath}\n\\usepackage{graphicx}\n\n\\include{newcommands}\n\n\\begin{document}\n\n\\title{Derivation of the nondimensional lee wave equations and perturbation expansions}\n\n\\author{Oliver Fringer and Eric Mayer}\n\n\\maketitle\n\n\\section{Dimensional analysis}\n\nThe form drag $F$ (per unit width) due to a hill of height $h_0$ in an infinitely \ndeep fluid is characterized by the dimensional quantities\n$U$, $k$, $N$, and $h_0$.  Choosing $U$ and $N$ to nondimensionalize $k$ and $h_0$, the governing\nnondimensional parameters are $J = N h_0/U$ and $\\epsilon = k U/N$.  Since the units of the form drag\nare length$^3$~time$^{-2}$, then a scale for $F$ in terms of $U$ and $N$ is $\\rho_0 U^3 N^{-1}$. Therefore,\nthe form drag must satisfy\n\\[\n\\frac{F}{\\rho_0 U^3 N^{-1}} = f(J, \\epsilon)\\,.\n\\]\n\n\\section{Nondimensional equations} \\label{eq:equations}\n\nLet $\\ub_{\\mbox{total}} = U {\\bf e}_x + \\ub$, $\\rho_{\\mbox{total}} = \\rhobar(z) + \\rho$, and\n$p_{\\mbox{total}} = \\rho_0 \\overline{p}(z) + \\rho_0 p$ (the total notation is\nto avoid carrying around the prime, but in a paper you should carry around the prime).\nThe steady momentum and density transport equations are given by\n\\begin{eqnarray}\nU \\D{u}{x} + \\ub\\cdot\\nabla u &=& -\\D{p}{x}\\,,\\\\\nU \\D{w}{x} + \\ub\\cdot\\nabla w &=& -\\D{p}{z} - \\frac{\\rho}{\\rho_0} g\\,,\\\\\nU \\D{\\rho}{x} + \\ub\\cdot\\nabla\\rho &=& \\frac{\\rho_0 N^2}{g} w\\,,\n\\end{eqnarray}\nwhere $N^2 = -g/\\rho_0 \\partial\\rhobar/\\partial z$, subject to continuity $\\nabla\\cdot\\ub=0$ and\nthe kinematic bottom boundary condition\n\\[\nU\\D{h}{x} + u \\D{h}{x} = w\\,.\n\\]  \nNondimensionalize with\n\\begin{eqnarray*}\nu &=& u_0 u^*\\,,\\\\\nw &=& w_0 w^*\\,,\\\\\n\\rho &=& R \\rho^*\\,,\\\\\np &=& P p^*\\,,\\\\\nx &=& k^{-1} x^*\\,,\\\\\nz &=& \\delta z^*\\,.\n\\end{eqnarray*}\nNondimensionalizing the kinematic bottom boundary condition gives\n(after ignoring the $*$)\n\\[\nk U h_0 \\D{h}{x} + k u_0 h_0 \\D{h}{x} = w_0 w\\,.\n\\]\nIf we require a balance between the linear terms, this implies $w_0 = k h_0 U$, so that\n\\[\n\\D{h}{x} + F \\D{h}{x} = w\\,,\n\\]\nwhere $F = u_0/U$ is a Froude number. Now, the vertical scale of the flow as given by $\\delta$ is not the\nsame as the hill height $h_0$, since $\\delta$ must be finite as $h_0\\to 0$ (the linear limit). 
The \nvertical scale is thus dictated by continuity, which requires\n\\[\nk u_0 \\D{u}{x} + \\frac{w_0}{\\delta} \\D{w}{z} = 0\\,,\n\\]\nor, since this implies $k u_0 = w_0/\\delta$, then we must have $\\delta = w_0/(k u_0) = k h_0 U/(k u_0) = F^{-1} h_0$.\nNondimensionalizing the $u$-momentum equation,\n\\begin{eqnarray}\nk u_0 U \\D{u}{x} + k u_0^2\\ub\\cdot\\nabla u  &=& -k P \\D{p}{x}\\,.\n\\end{eqnarray}\nSince we require a leading-order balance between the pressure gradient and the linear momentum advection term,\nwe must have $P = u_0 U$, which gives\n\\begin{eqnarray}\n\\D{u}{x} + F \\ub\\cdot\\nabla u &=& -\\D{p}{x}\\,.\n\\end{eqnarray}\nThe nondimensional density transport equation is given by\n\\[\nk U R \\D{\\rho}{x} + k u_0 R \\ub\\cdot\\nabla\\rho = \\frac{k \\rho_0 h_0 N^2 U}{g} w\\,.\n\\]\nIf we require a balance between the linear advection terms, then the scale for the density\nperturbation is \n\\[\nR = \\frac{\\rho_0 N^2 h_0}{g}\\,,\n\\]\nso that the nondimensional density equation is\n\\[\n\\D{\\rho}{x} + F \\ub\\cdot\\nabla\\rho = w\\,.\n\\]\nNondimensionalizing the vertical momentum equation, we have\n\\[\nk^2 h_0 U^2 \\D{w}{x} + k^2 h_0 u_0^2\\ub\\cdot\\nabla w = -\\frac{P}{\\delta}\\D{p}{z} - \\frac{g R}{\\rho_0} \\rho\\,.\n\\]\nIf we require a vertical hydrostatic balance to leading order, then we must have \n\\[\n\\frac{P}{\\delta} = \\frac{g R}{\\rho_0} \n  = N^2 h_0\\,,\n\\]\nand\n\\[\nP = \\delta N^2h_0\n= \\frac{N^2h_0^2}{F}\n= \\frac{N^2h_0^2U}{u_0}.\n\\]\n%\\[\n%P = \\frac{g R \\delta}{\\rho_0} \n%  = \\frac{g}{\\rho_0} \\frac{\\rho_0 N^2 h_0}{g} \\frac{h_0}{F}\n%  = \\frac{g}{\\rho_0} \\frac{\\rho_0 N^2 h_0}{g} \\frac{U}{N}\n%  = U N h_0\\,.\n%\\]\nwhich gives\n\\[\n\\epsilon^2 \\left(\\D{w}{x} + F\\ub\\cdot\\nabla w\\right) = -\\D{p}{z} - \\rho\\,,\n\\]\nwhere \n\\[\n\\epsilon = \\frac{k U}{N}\n\\]\nis the nonhydrostatic parameter. Now, returning to the pressure, since from the vertical momentum equation we\nrequire $P = N^2h_0^2U u_0^{-1}$ and from the horizontal momentum equation we require $P = u_0 U$, then equating the\ntwo implies that $u_0 = N h_0$ and the Froude number is given by\n\\[\nF = \\frac{u_0}{U} = \\frac{N h_0}{U} = J\\,,\n\\]\nwhere\n\\[\nJ = \\frac{N h_0}{U}\\,.\n\\]\nTherefore, in terms of $J$, the governing nondimensional equations are given by\n\\begin{eqnarray*}\n\\D{u}{x} + J\\ub\\cdot\\nabla u &=& -\\D{p}{x}\\,,\\\\\n\\epsilon^2 \\left(\\D{w}{x} + J\\ub\\cdot\\nabla w\\right) &=& -\\D{p}{z} - \\rho\\,,\\\\\n\\D{\\rho}{x} + J \\ub\\cdot\\nabla\\rho &=& w\\,,\n\\end{eqnarray*}\nsubject to $\\nabla\\cdot\\ub=0$ and the kinematic bottom boundary condition\n\\[\n\\left(1 + J \\right)\\D{h}{x} = w\\,.\n\\]\nThese nondimensional equations are consistent with the original nondimensionalization which implied\nthat the problem is uniquely characterized by $\\epsilon$ and $J$.\nThe relevant scales (nondimensionalized by $N$ and $U$) are given by\n\\begin{eqnarray*}\n\\frac{u_0}{U} &=& J\\,,\\\\\n\\frac{w_0}{U} &=& \\epsilon J\\,,\\\\\n\\frac{gR}{\\rho_0 U N} &=& J\\,,\\\\\n\\frac{P}{U^2} &=& J\\,,\\\\\n\\frac{\\delta N}{U} &=& 1\\,.\n\\end{eqnarray*}\n\nContrary to what is stated in the literature, we can show that it is\nin fact appropriate to refer to $J$ as an internal Froude number. If we define\n$Fr_\\delta = u_0/\\sqrt{g' \\delta}$, where $g'\\delta =g (R/\\rho_0) \\delta = J U^2$, this gives\n$Fr_{\\delta} = J^{1/2}$. 
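Written out, using $R = \\rho_0 N^2 h_0/g$ and $\\delta = U/N$ from the scaling above: $g'\\delta = \\frac{g}{\\rho_0}\\,\\frac{\\rho_0 N^2 h_0}{g}\\,\\frac{U}{N} = N h_0 U = JU^2$, so that $Fr_\\delta = \\frac{u_0}{\\sqrt{g'\\delta}} = \\frac{JU}{J^{1/2}U} = J^{1/2}$. 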
Therefore, $J^{1/2}$ can be thought of as \nthe ratio of the inertial to gravitational forces arising from the perturbed\nflow above the hill. Although we would expect a larger $N$ to block the flow and\nreduce the magnitude of the perturbation above the hill, the scaling shows that\n$u_0 = J U$, implying that the perturbation velocity above the sill increases in step with\nincreasing $J$. However, the gravitational force resulting from the perturbation is\ngiven by $\\sqrt{g'\\delta} = J^{1/2} U$, which grows more slowly than $u_0$ with\nincreasing $J$.\n\nThis identification of $J$ as the internal Froude number squared appears to have gone without notice in the 65 years of studying Long's model. However, an inquiry into the upper limit of this relationship recently emerged from consideration of blocked flow past a half cylinder (Winters and Armi, 2012). In the blocked regime, that is, for flow in which $J>1$, the lowest fluid elements upstream of the obstacle lack sufficient kinetic energy to overtop the obstacle, and thus form a pool of stagnant fluid at the obstacle's base. As a result, the obstacle takes on the apparent height to the unblocked flow of $U/N$, that is, exactly the height of the flow's kinetic hill-climbing capacity, giving $J_{\\text{unblocked}}=1$. In this case, Winters and Armi show that the internal Froude number above the hill is exactly 1, and in analogy to hydraulic control of an unstratified river, the flow directly above the crest exhibits a transition from a supercritical upstream jet to a subcritical (and unsteady) downstream lee wave. In other words, the relation between $J$ and $Fr_{\\delta}$ holds only up to $J=Fr_{\\delta}^2=1$. Above this limit, $Fr_{\\delta}^2=1$, and $J$ informs the depth of the stagnant layer as well as the thickness and speed of the accelerated jet resulting from hydraulic control (Winters and Armi, 2012).\n\nWinters and Armi's analysis highlights an essential difference between this problem and the single-layer channel flow from which our standard understanding of the Froude number derives: here the ``depth'' of the fluid as it travels over subcritical bathymetry is oblivious to both the ocean's depth and the bathymetry's height, and is instead entirely specified by the impinging flow's properties, namely $U$ and $N$. \n\nThat $J$ plays the role of a Froude number only up to the critical limit $J=1$ has prevented its general interpretation as a Froude number. Nonetheless, in the subcritical regime of interest here, $J^{1/2}=(Nh_0/U)^{1/2}=u_0/\\sqrt{g'\\delta}$ carries the true meaning of a Froude number, relating the speeds with which the competing processes of advective non-linearities and gravity waves respond to the introduction of bathymetry into the flow. When $J$ is small, waves accommodate the disturbance adiabatically and carry it away from the site of generation, just as in an unstratified river flowing over a sub-critical sill. Likewise, when $J$ is large, non-linear advective processes respond faster than gravity, often resulting in a local energy sink, in analogy to a hydraulic jump downstream of supercritical river flow.\n\n(In the 2-D case, blocking is an adiabatic advective process while spanwise vorticity generation and downslope winds are diabatic. 
In the 3-D case, horizontal splitting is an adiabatic advective response while vertical vorticity generation represents a nearly adiabatic advective non-linearity with the unique capacity to carry energy away from the site of generation.)\n\nThat the outer and inner variable representations of $J$ should present velocity and gravity inversely highlights a unique element of this problem's physics. In the limit of subcritical bathymetry, for which the bottom boundary condition becomes linear, there is no vertical scale imposed upon the potential energy of the flow from any boundary condition. Rather, the vertical scale of the wave must come from properties of the fluid itself, hence $\\delta = U/N$. \n\n%Understanding the kinetic energy in the wave is more nuanced. Beginning with the denominators in the relation $J^{1/2}=(Nh_0/U)^{1/2}=u_0/\\sqrt{g'\\delta}$, and multiplying $J=Nh_0/U$ by $U/U$ for dimensional consistency, we have $\\sqrt{g'\\delta}= U$, this expresses that the gravity wave response exists as a direct consequence of the background velocity. Indeed, in the frame of reference moving with the water, U is the magnitude of the group velocity of this wave (as both borne out in the math and evidenced by observation of the steady state hydrostatic wave standing motionless above a hill). Similarly, considering the numerators: $(Nh_0U)^{1/2}=u_0$, we see that perturbation velocity above the hill is a direct consequence of buoyancy's attempt to restore an element to its equilibrium position, where the maximum possible displacement is the height of the hill. \n\nThere are still two other candidates for the title of Froude number of this flow. One of these we have termed $\\epsilon$. It is perhaps closer to Froude's original use of the number in the study of a ship's hull length relative to the surface gravity wavelength. For our flow, as we will see below, it represents the partition of the wave response between a propagating and an evanescent regime. It is a ratio of the excitation rate to the wave response rate. If the flow excites perturbations from stability faster than buoyancy can respond, the wave cannot propagate. It is like punching a speedbag faster than the bag can bounce back. \n\nThe other arises when one considers the reality of finite depth. In this case, the number $Fr=U/(ND)$ arises as a measure of the flow speed relative to the first-mode internal gravity wave speed. Baines reserved the term $Fr$ for this number. \n\nFinally, we would be remiss in omitting an alternative interpretation of $J$ that relates to its description of flow around the critical limit. NF2010 show that, upon making the hydrostatic approximation, $J$ is equivalent to the steepness parameter from the internal tide problem; that is, it is a ratio of the slope of the topography to the slope of the wave group velocity. \n\n\\subsection{Rotation and viscosity}\nIncluding rotation and viscosity in the nondimensional equations requires only slight modification. Because rotational effects necessarily involve a spanwise direction, we must now include an equation for the spanwise momentum. 
Assume there is no background spanwise velocity, redefine $\\ub = u\\hat{x} + v\\hat{y} + w\\hat{z}$ as the perturbation velocity vector, and the steady transport equations for momentum and density become\n\\begin{eqnarray*}\nU \\D{u}{x} + \\ub\\cdot\\nabla u &=& -\\D{p}{x} + fv + \\nu \\nabla^2 u\\,,\\\\\nU \\D{v}{x} + \\ub\\cdot\\nabla v &=& -\\D{p}{y} - fu + \\nu \\nabla^2 v\\,,\\\\\nU \\D{w}{x} + \\ub\\cdot\\nabla w &=& -\\D{p}{z} - \\frac{\\rho}{\\rho_0} g + \\nu \\nabla^2 w\\,,\\\\\nU \\D{\\rho}{x} + \\ub\\cdot\\nabla\\rho &=& \\frac{\\rho_0 N^2}{g} w + \\kappa \\nabla^2 \\rho\\,.\n\\end{eqnarray*}\nNondimensionalizing as above, with the addition of $v = u_0 v^*$, the $u$-momentum equation becomes (after ignoring the $*$)\n\\begin{eqnarray*}\nk u_0 U \\D{u}{x} + k u_0^2\\ub\\cdot\\nabla u  &=& -k P \\D{p}{x} + fu_0v + \\nu k^2 u_0 \\nabla^2 u \\,.\n\\end{eqnarray*}\nAgain, requiring a first order balance between linear advection and pressure gives\n\\begin{eqnarray*}\n\\D{u}{x} + J\\ub\\cdot\\nabla u  &=& -\\D{p}{x} + Ro^{-1} \\, v + Re^{-1}\\nabla^2 u \\,,\n\\end{eqnarray*}\nwhere $Ro =  \\frac{Uk}{f}$ and $Re = \\frac{U}{\\nu k}$.\nNondimensionalizing the $v$-momentum equation, we have\n\\begin{eqnarray*}\nk u_0 U  \\D{v}{x} +k u_0^2 \\ub\\cdot\\nabla v &=& -k P\\D{p}{y} - f u_0 u + \\nu k^2 u_0 \\nabla^2 v\\,.\n\\end{eqnarray*}\nUpon again requiring a balance of linear advection and pressure, this becomes\n\\begin{eqnarray*}\n\t\\D{v}{x} + J\\ub\\cdot\\nabla v  &=& -\\D{p}{y} - Ro^{-1} \\, u + Re^{-1}\\nabla^2 v \\,.\n\\end{eqnarray*}\nNondimensionalizing the $w$-momentum equation with the scaling $w_0 = Uk h_0$ that we gleaned from the bottom boundary condition gives\n\\[\nk^2 h_0 U^2 \\D{w}{x} + k^2 h_0 u_0^2\\ub\\cdot\\nabla w = -\\frac{P}{\\delta}\\D{p}{z} - \\frac{g R}{\\rho_0} \\rho + \\frac{\\nu U k h_0 }{\\delta^2}\\nabla^2 w\\,.\n\\]\nRequiring hydrostatic balance, as before, we have\n\\[\n\\epsilon^2 \\left(\\D{w}{x} + J\\ub\\cdot\\nabla w\\right) = -\\D{p}{z} - \\rho + \\frac{\\nu U k h_0 }{\\delta^2 h_0 N^2}\\nabla^2 w\\,.\n\\]\nAnd recalling $\\delta = U/N$, this simplifies to \n\\[\n\\epsilon^2 \\left(\\D{w}{x} + J\\ub\\cdot\\nabla w\\right) = -\\D{p}{z} - \\rho + Re^{-1}\\nabla^2 w\\,.\n\\]\nLastly, the nondimensional density transport equation is\n\\[\nk U R \\D{\\rho}{x} + k u_0 R \\ub\\cdot\\nabla\\rho = \\frac{k \\rho_0 h_0 N^2 U}{g} w + \\kappa k^2 R \\nabla^2 \\rho\\,.\n\\]\nRequiring a balance between the linear advection terms, as before, results in\n\\[\n\\D{\\rho}{x} + J\\ub\\cdot\\nabla\\rho = w + \\frac{1}{Re\\,Pr}\\nabla^2 \\rho\\,,\n\\]\nwhere $Pr=\\nu/\\kappa$ is the Prandtl number.\n\\section{Linear, nonhydrostatic equations}\nThe governing equations in the linear limit $J\\to 0$ are given by\n\\begin{eqnarray*}\n\\D{u}{x} &=& -\\D{p}{x}\\,,\\\\\n\\epsilon^2 \\D{w}{x} &=& -\\D{p}{z} - \\rho\\,,\\\\\n\\D{\\rho}{x} &=& w\\,.\n\\end{eqnarray*}\nThe governing equation for $w$ is then given by\n\\[ \n\\ND{w}{z}{2} + \\epsilon^2 \\ND{w}{x}{2} + w = 0\\,.\n\\]\nAssume a sinusoidal topography such that $h(x) = \\sin(x)$.\n\n\\subsection{Propagating solution}\n\nWhen $\\epsilon<1$, the propagating solution is of the form\n$w(x,z) = \\cos(x + m z)$, which implies $m = (1-\\epsilon^2)^{1/2}$.  
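Indeed, inserting $w = \\cos(x + mz)$ into $\\ND{w}{z}{2} + \\epsilon^2 \\ND{w}{x}{2} + w = 0$ gives $-m^2 - \\epsilon^2 + 1 = 0$, so $m = (1-\\epsilon^2)^{1/2}$ is real precisely when $\\epsilon<1$. 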
Substitution into\nthe governing linear equations gives\n\\begin{eqnarray*}\nu(x,z) &=& -m \\cos(x + m z)\\,,\\\\\nw(x,z) &=& \\cos(x + m z)\\,,\\\\\n\\rho(x,z) &=& \\sin(x + m z)\\,,\\\\\np(x,z) &=& m \\cos(x + m z)\\,.\n\\end{eqnarray*}\nThe dimensional form drag (per unit width) over one wavelength is given by (here, $*$ implies dimensional\nquantities)\n\\[\nF^* = \\int_{0}^{2\\pi/k^*} p^*(x^*,z^*=0)\\D{h^*}{x^*}\\, dx^*\\,.\n\\]\nNondimensionalizing gives\n\\[\n\\frac{F^*}{\\rho_0 U^3 N^{-1}} = J^2 \\int_{0}^{2\\pi} p(x,z=0)\\D{h}{x}\\, dx\\,.\n\\]\nSubstitution of $p$ and $h$ then gives\n\\[\n\\frac{F^*}{\\rho_0 U^3 N^{-1}} = \\pi J^2\\left(1 - \\epsilon^2\\right)^{1/2}\\,.\n\\]\n\n\n\\subsection{Evanescent solution}\n\nWhen $\\epsilon>1$, the evanescent solution is of the form\n$w(x,z) = \\cos(x)\\exp(-m z)$, which implies $m = (\\epsilon^2-1)^{1/2}$.  Substitution into\nthe governing linear equations gives\n\\begin{eqnarray*}\nu(x,z) &=& m \\sin(x) \\exp(-m z)\\,,\\\\\nw(x,z) &=& \\cos(x) \\exp(-m z)\\,,\\\\\n\\rho(x,z) &=& \\sin(x) \\exp(-m z)\\,,\\\\\np(x,z) &=& -m \\sin(x) \\exp(-m z)\\,.\n\\end{eqnarray*}\nSince $p$ and $w$ are $\\pi/2$ out of phase, the form drag is identically zero.\n\n\\subsection{Long's Model}\n\nLong 1953 recast the equations in terms of $\\delta$, where $\\delta(x,z) = z(x)-z_0$ is the displacement of the streamline found at $(x,z)$ from its upstream height $z_0$. Note that by definition, $\\delta$ is a transcendental equation in $z$. In taking the linear limit, we essentially say that this difference is everywhere very small with respect to the length scale of the restoring force. \n\nLong's Model (Long 1953):\n\\[\n\\nabla^2\\delta + \\frac{N^2}{U^2}\\delta = 0.\n\\]\nUpon taking the hydrostatic approximation, this simplifies further to\n\\[\n\\partial_z^2\\delta +  \\frac{N^2}{U^2}\\delta = 0,\n\\]\nwhere the subscript $z$ represents differentiation with respect to $z$.\nAssuming sinusoidal topography of the form $h(x) = h_0 \\cos(kx)$, the kinematic boundary condition now reads\n\\[\n\\delta(x,h(x)) = h(x) = h_0 \\cos(kx).\n\\]\n\nLong's model is a linear Helmholtz equation in $\\delta$.\n\n\\subsection{Perturbation expansion}\n\n\nWe are now in a position to consider a perturbation solution to Long's equation, as in Smith 1977. We can postulate a perturbation expansion of $\\delta$ in powers of $J$ as\n\\[\n\\delta(x,z) = \\delta_0 + J\\delta_1 +  J^2 \\delta_2 + O(J^3).\n\\]\nAnd we can write the bottom boundary condition as a Taylor expansion around $z=0$ (employing the scaling $h(x)\\propto h_0$ and $\\partial_z \\propto N/U$)\n\\[\n\\delta(x,h(x)) = \\delta(x,0) + h \\partial_z \\delta |_0 +  \\frac{h^2}{2} \\partial_z^2 \\delta |_0 + O(J^3) = h(x).\n\\]\nThe 0th order solution comes rapidly upon plugging the perturbation expansion into the boundary condition\n\\begin{eqnarray*}\n\t\\delta &=& \\delta_0 + O(J) \\\\\n\t\\delta(x,0) + O(J) &=& h(x)\\\\\n\t\\implies \\delta_0|_0 &=& h_0\\cos(kx)\n\\end{eqnarray*}\nAnd, unsurprisingly, upon appeal to Long's Model, the 0th order propagating solution for $\\delta$ emerges as\n\\[\n\\delta = h_0\\cos(kx+lz)  + O(J),\n\\]\nwhere\n\\[\nl = \\left(\\frac{N^2}{U^2}-k^2\\right)^{1/2} = \\frac{N}{U}(1-\\epsilon^2)^{1/2}\n\\]\nfor the nonhydrostatic propagating solution, and simply \n\\[l=\\frac{N}{U} \\]\nin the hydrostatic approximation. \n\nIf $\\epsilon>1$, the solution is instead evanescent, giving\n\\[\n\\delta = h_0\\cos(kx)\\exp(-lz)  + O(J).\n\\]\nFor the sake of brevity, I will omit the higher order evanescent solutions in what follows. 
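\n\nAs a quick consistency check on the propagating form above, substituting $\\delta = h_0\\cos(kx+lz)$ into Long's equation gives\n\\[\n\\left(-k^2 - l^2 + \\frac{N^2}{U^2}\\right)h_0\\cos(kx+lz) = 0,\n\\]\nwhich holds exactly when $l^2 = \\frac{N^2}{U^2} - k^2$, recovering the stated vertical wavenumber.\n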
\n\nBy similar steps, Smith arrived at the 1st order solution\n\\begin{eqnarray*}\n\t\\delta &=& \\delta_0 + J\\delta_1 + O(J^2) \\\\\n\t\\delta(x,0) + h \\partial_z \\delta |_0 +  O(J^2) &=& h(x)\\\\\n\t\\delta_0|_0 + J\\delta_1|_0+  h \\partial_z \\delta_0 |_0  +O(J^2)&=& h(x)\\\\\n\tand \\   \\delta_0|_0 &=& h(x) \\\\\n\t\\implies h \\partial_z \\delta_0 |_0 &=& -J\\delta_1|_0\\\\\n\th_0 cos(kx) \\partial_z(h_(cos(kx+lz))|_0&=& -J\\delta_1|_0\\\\\n\t -J h_0 cos(kx)sin(kx) &=& -J\\delta_1|_0\\\\\n\t  -J h_0 \\frac{1}{2}sin(2kx) &=& -J\\delta_1|_0\\\\\n\t\\implies \\delta_1|_0 &=& \\frac{h_0}{2}sin(2kx)\n\\end{eqnarray*}\nAnd, on appeal to Long's Model, we have \n\\[\n\\delta(x,z) = h_0cos(kx+lz) + \\frac{J h_0}{2}sin(2kx+lz) + O(J^2)\n\\]\nWith a little more paper and time, it can be shown by identical steps that the 2nd order solution is  \n\\[\n\\delta(x,z) = h_0cos(kx+lz) + \\frac{J h_0}{2}sin(2kx+lz) + \\frac{J ^2h_0}{2}cos(kx+lz) + O(J^3)\n\\]\nSo far, all of these solutions are evaluated at the displaced streamline height $z$, rather than the undisturbed height of the streamline far upstream, $z_0$. The two are related via\n\\[\n\\delta(x,z) = \\delta(x,z_0+\\eta(x,z_0)) = \\eta(x,z_0)\n\\]\nWe can now use $\\eta(x,z_0)$ to write $\\delta(x,z)$ as a Taylor expansion around $z_0$, and after some effort, arrive at the 2nd order solution for $\\eta(x,z_0)$:\n\\begin{eqnarray*}\n\\eta(x,z_0) &=& h_0cos(kx+lz_0)\\\\\n &+&\\frac{1}{2}Jh_0(sin(2kx+lz_0)_ - sin(2kx + 2lz_0))\\\\\n &+&\\frac{1}{2}J^2h_0[cos(kx+lz_0)+cos(kx+lz_0)cos(2kx+lz_0)\\\\\n &-&sin(kx+lz_0)sin(2kx+lz_0)+sin(kx+lz_0)sin(2kx+2lz_0)\\\\\n &-&3cos^3(kx+lz_0)]\\\\\n &+& O(J^3h_0)\n\\end{eqnarray*}\nNote that this is an extension of Smith 1977, in which the solution is given only to $O(Jh_0)$. \n\nPlots of these solutions for J=0.3 are given below, as well as those of the velocity fields $u$ and $w$ that these displacements entail. Note the steepening of the streamlines increases with the order of the solution. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{delta_solutions.eps}\n\t\\caption{0th, 1st, and 2nd order solutions for $\\delta(x,z)$ with $J=0.3$.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{eta_solutions.eps}\n\t\\caption{0th, 1st, and 2nd order solutions for $\\eta(x,z_0)$ with $J=0.3$.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{u_solutions.eps}\n\t\\caption{0th, 1st, and 2nd order solutions for $u(x,z_0)$ with $J=0.3$.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{w_solutions.eps}\n\t\\caption{0th, 1st, and 2nd order solutions for $w(x,z_0)$ with $J=0.3$.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{vel_via_diff.eps}\n\t\\caption{2nd order solutions for velocity calculated by numerical differentiation of $\\eta_2$.}\n\\end{figure}\n\n\\subsection{Instability}\n\nThese if taken to large enough order, these perturbation expansions are asymptotically exact so long as $J<1$. However, before $J$ gets to unity, these solutions suggest that two forms of instability will arise, invalidating the underlying hypothesis of Long's model. \n\nFirstly, as $J$ increases, the streamlines steepen, and at some value of $J<1$, become locally vertical. Vertical streamlines imply wave breaking (Long 1953, Miles 1969) which results in heavy water pooling in valleys and reduced wave amplitude (Kimura and Manins 1980). 
The condition of vertical streamlines can be expressed mathematically as $\\partial_z \\delta = 1$. \n\nBelow, I include a plot of the maximum value of $\\partial_z \\delta$ as a function of $J$ for each order of the solution. As Smith reported in 1977, the 1st order accurate solution for $\\delta$ predicts that wave breaking will commence at $J=0.75$. With our next order solution, this threshold drops to $J=0.65$. Of course, neither solution is suitable for analysis of such nonlinear flow; in order to limit the error in a $J=0.65$ solution to less than $5\\%$, we would need a 6th order accurate expansion ($0.65^7 = 0.049$). \n\nA second measure of instability derives from consideration of turbulent energetics, and is quantified by a condition on $Ri = N^2/(\\partial_z u)^2$, where $Ri$ is the Richardson number and is a ratio of the contributions of buoyancy flux and shear production to the turbulent kinetic energy equation. If $Ri>1/4$, buoyancy will not allow the energy cascade to perpetuate, and the flow stabilizes. However, if $Ri<1/4$, vertical shear wins and the flow collapses into turbulence (Miles 1961, Baines 1995). \n\nNote that from our scaling above, we have $u_0 = JU$, $\\partial_z = N/U$, and together $(\\partial_z u)^2 \\propto N^2J^2$ and $Ri \\propto J^{-2}$. Thus we expect the flow to go unstable at least by $J=2$. To get at this more quantitatively, however, consider the vertical gradient of our perturbation solutions for $u$, plotted as a function of $J$ below. I find (numerically) that in the $O(J^6)$ solution (recall, it is a square of the gradient of $u$, which we have solved to $O(J^2)$), $Ri$ falls below $1/4$ for $J>0.69$. Furthermore, because of the higher order of this solution, the error in this estimate is much more acceptable than for the vertical streamlines ($0.69^7=0.0745$).\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{max_slopes.eps}\n\t\\caption{Maximum slope in $\\delta$ as a function of $J$.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{max_shearsq.eps}\n\t\\caption{Maximum slope in $(\\partial_z u)^2$ as a function of $J$.}\n\\end{figure}\n\n\\subsection{Momentum and Energy fluxes}\nThe generation of a lee wave represents a transfer of momentum and energy from the mean flow into the perturbation flow. This transfer initiates at the bottom boundary, where it is felt as a form drag. The fluxes then radiate upwards with the wave, to infinity as far as this problem is concerned. 
\n\nThe horizontal force of the topography on the mean flow from the pressure gradient at the surface is \n\\[\nF_D =  \\int \\D{P}{x} h(x) dx = -\\int P(x) \\D{h}{x} dx.\n\\]\nUsing the work energy theorem, the energy expended in flowing over the topography is then\n\\[\nE_D = -\\int u(x)P(x) \\D{h}{x} dx.\n\\]\nAway from the hill, the wave momentum flux is\n\\[\nF_w(x,z_0) = -\\rho u(z,x_0)w(x,z_0),\n\\]\nand the wave energy flux is\n\\[\nE_w(x,z_0) = P(x,z_0)w(x,z_0).\n\\]\nFor our inviscid and irrotational problem, there is no energy lost or converted to inertial oscillations while fluxing through the domain, and $E_D = E_w$.\n\nTo evaluate these fluxes, it will be helpful to have a clear relation between the pressure field and $\\eta(x,z_0)$.\n\n\\subsection{Fourier synthesis}\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{witch_bathymetry.eps}\n\t\\caption{Recreating the Witch of Agnesi using Fourier synthesis.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{witch_eta.eps}\n\t\\caption{Streamlines above a Witch of Agnesi found via Fourier synthesis.}\n\\end{figure}\n\n\\end{document}\n\n\n\n\nw = cos(x)exp(-m z)\nwx = -sin(x)exp(-m z)\nux = -wz = m cos(x) exp(-m z)\nu = m sin(x) exp(-m z)\n\nrhox = w = cos(x)exp(-m z)\nrho = sin(x)exp(-m z)\n\ne2 wx = -pz - rho\npz = -e2 wx - rho = e2 sin(x) exp(-m z) - sin(x) exp(-m z) = m^2 sin(x) exp(-m z)\np = (-1/m) m^2 sin(x) exp(-m z) = -m sin(x) exp(-m z)\n\n\n\n", "meta": {"hexsha": "26a40f4f7696d70ac1a2ff6835c3db3ee8b9e4e2", "size": 25321, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "nondimensional/nondimensional.tex", "max_stars_repo_name": "fmayer2010/LeeWavePaper", "max_stars_repo_head_hexsha": "39a0c6edd22da86c12b9cbec560c8fb350ba62be", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nondimensional/nondimensional.tex", "max_issues_repo_name": "fmayer2010/LeeWavePaper", "max_issues_repo_head_hexsha": "39a0c6edd22da86c12b9cbec560c8fb350ba62be", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nondimensional/nondimensional.tex", "max_forks_repo_name": "fmayer2010/LeeWavePaper", "max_forks_repo_head_hexsha": "39a0c6edd22da86c12b9cbec560c8fb350ba62be", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.4244306418, "max_line_length": 1309, "alphanum_fraction": 0.6999328621, "num_tokens": 8508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079208, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5828841265857062}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{hyperref}\n\\usepackage{amsmath}\n\\usepackage{listings}\n\\usepackage[dvipsnames]{xcolor}\n\\lstset{\n    language=[LaTeX]Tex,\n    showspaces=true,\n    keywordstyle=[1]\\color{Red},\n    keywordstyle=[2]\\color{Orange},\n    keywordstyle=[3]\\color{Sepia},\n    keywordstyle=[4]\\color{PineGreen},\n    keywordstyle=[5]\\color{Blue},\n    keywords=[1]{P,JacobiP,cos,cos@,Cos},\n    keywords=[2]{alpha,Alpha},\n    keywords=[3]{beta,Beta},\n    keywords=[4]{n},\n    keywords=[5]{a,Theta,CapitalTheta},\n    xleftmargin=.1\\textwidth, \n    xrightmargin=.1\\textwidth\n}\n\n\\begin{document}\n\\begin{center}\n    \\Large\n    \\LaTeX-Equation 2 CAS\\\\[12pt]\n    \\large Examples\n\\end{center}\n\n\\section{The Jacobi Polynomial}\nLet us take a look to the Jacobi polynomial (\\href{http://dlmf.nist.gov/18.3\\#T1.t1.r2}{\\textcolor{blue}{\\underline{DLMF-Link}}}). We want to translate the general expression (section \\ref{sec:generic}) to a CAS (section \\ref{sec:cas}) representation. For instance we have the following term:\n\\begin{equation}\\label{eq:jacP}\n    P_n^{(\\alpha, \\beta)}(\\cos(a \\Theta))\n\\end{equation}\n\n\\subsection{Generic \\LaTeX}\\label{sec:generic}\n\\begin{lstlisting}[mathescape]\nP_n^{(\\alpha,\\beta)}(\\cos(a\\Theta))\n\\end{lstlisting}\n\n\\subsection{Semantic \\LaTeX}\\label{sec:semantic}\n\\begin{lstlisting}[mathescape]\n\\JacobiP{\\alpha}{\\beta}{n}@{\\cos@{a\\Theta}}\n\\end{lstlisting}\n\n\\subsection{Computer Algebra Systems (CAS)}\\label{sec:cas}\nFrom now on, be aware of the spaces between the symbols. For instance, a space between $a$ and $\\Theta$ is needed to show $a$ and $\\Theta$ are different variables. Otherwise Maple (for example) would think $aTheta$ is one new variable.\n\\subsubsection{Maple}\\label{subsec:maple}\n\\begin{lstlisting}[mathescape]\nJacobiP(n,alpha,beta,cos(a Theta))\n\\end{lstlisting}\n\n\\subsubsection{Mathematica}\\label{subsec:mathematica}\n\\begin{lstlisting}[mathescape]\nJacobiP[n,\\[Alpha],\\[Beta],Cos[a \\[CapitalTheta]]]\n\\end{lstlisting}\n\\end{document}\n", "meta": {"hexsha": "dfc991fc0dc4c1febae31dd37232fb6e3e57ad69", "size": 1997, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "misc/LaTeX2CAS.tex", "max_stars_repo_name": "ag-gipp/LaCASt", "max_stars_repo_head_hexsha": "e24830e16cdc4ae34c193b0bac11fef7066b1f23", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "misc/LaTeX2CAS.tex", "max_issues_repo_name": "ag-gipp/LaCASt", "max_issues_repo_head_hexsha": "e24830e16cdc4ae34c193b0bac11fef7066b1f23", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "misc/LaTeX2CAS.tex", "max_forks_repo_name": "ag-gipp/LaCASt", "max_forks_repo_head_hexsha": "e24830e16cdc4ae34c193b0bac11fef7066b1f23", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.8474576271, "max_line_length": 292, "alphanum_fraction": 0.7195793691, "num_tokens": 630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.8006920092299293, "lm_q1q2_score": 0.5828841248545868}}
{"text": "\\section{Partial Dependence Plots}\n\n\\begin{frame}[c]\n\\Huge{\\centerline{Partial Dependence Plots}}\n\\end{frame}\n%----------------------------------------------------------------------------------------\n\n\\begin{frame}\\frametitle{Review of Marginal Expectation}\n\t\\begin{itemize}\n\t\t\\item Recall that in a $P$-dimensional feature space, we can consider a single feature $X_j \\in \\mathcal{P}$ and its complement set $\\mathcal{P}_{(-j)}$ (where $\\{X_j\\} \\cup \\mathcal{P}_{(-j)} = \\mathcal{P}$). \n\t\t\\item To describe the partial dependence $g(\\mathbf{X})$ on $X_j$, we go through the following enumerated explanation:\n\t\t\\begin{enumerate}\n\t\t\t\\item Expected value is $\\mathbb{E}[g\\mathbf(X)] = \\displaystyle\\sum_{i = 0}^{N-1}g(x^{(i)})p(x^{(i)})$.\n\t\t\t\\item  Let $g(\\mathbf{X}) = g(X_j, X_{(-j)})$ and set $X_j = [x_{j}^{0}, \\dots, x_{j}^{P-1}]^T)$. \n\t\t\t\\item Then the marginal expectation over $X_{(-j)}$ is $\\mathbb{E}_{X_{(-j)}}\\left[g(X_j, X_{(-j)})\\right] =  \\displaystyle\\sum_{i = 0}^{N-1} g(X_j, X_{(-j)}) p(x^{(i)})$.\n\t\t\t\\item Given that  $\\displaystyle\\sum_{i = 0}^{N-1} p(x^{(i)}) = 1$, and equal probabilities, $p(x^{(i)}) = \\frac{1}{N}$.\n\t\t\t\\item Thus $\\mathbb{E}_{X_{(-j)}}\\left[g(X_j, X_{(-j)})\\right] = \\displaystyle\\frac{1}{N}\\sum_{i = 0}^{N-1}g(x_j, \\mathbf{x}_{(-j)}^{(i)})$.\n\t\t\\end{enumerate}\n\t\\end{itemize}\n\\end{frame}\n\n%----------------------------------------------------------------------------------------\n\n\\begin{frame}\\frametitle{Partial Dependence Plots (Cont.)}\n\t\\begin{itemize}\n\t\\item The partial dependence of a given feature $X_j$ is the average of the response function $g$, where all the components of $X_j$ are set to $x_j$ $(X_j= [x_j^{(0)}, \\dots, x_j^{(N-1)}]^T)$, and all other feature vectors of the complement set $\\mathbf{x}_{(-j)}^{(i)}$ are left as the original dataset specified.\n\t\\item Thus, the \\textit{one-dimensional partial dependence} of a function $g$ on  $X_j$ is the marginal expectation:\n\t\t\t\\begin{equation}\\label{eq:pd1}\n\t\t\t\\begin{aligned}\n\t\t\t\t\\text{PD}(X_j, g) &= \\mathbb{E}_{X_{(-j)}}\\left[g(X_j, X_{(-j)})\\right]\\\\\n\t\t\t&= \\frac{1}{N}\\sum_{i = 0}^{N-1}g(x_j, \\mathbf{x}_{(-j)}^{(i)})\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\\end{itemize}\n\\end{frame}\n\n%----------------------------------------------------------------------------------------\n\n\n\\begin{frame}\\frametitle{Partial Dependence Plots (Cont.)}\t\t\n\t\\begin{itemize}\n\t\t\\item Partial dependence plots show the partial dependence as a function of \\textit{specific values} of our feature subset.\n\t\t\\item The plots show how machine-learned response functions change based on the values of an input feature of interest, while taking nonlinearity into consideration and averaging out the effects of all other input features.\n\t\t\\item Partial dependence plots enable increased transparency in $g$ and enable the ability to validate and debug $g$ by comparing a feature's average predictions across its domain to known standards and reasonable expectations. \n\t\\end{itemize}\n\\end{frame}\n\n%----------------------------------------------------------------------------------------\n\n\\begin{frame}\n\\frametitle{Conceptual Example}\n\\begin{figure}\n\\includegraphics[scale=0.25]{images/pdp_image.png}\n\\caption{We want to predict whether someone will default on their monthly utility bill. 
The image shows one feature that we fixed ($Rent=2000$), keeping the rest unchanged, to see its impact on the dataset's average default rate.}\n\\end{figure}\n\\end{frame}", "meta": {"hexsha": "dc3c9abb4dfb320fa61a51c1c70dbfede2f9a14e", "size": 3428, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/sections/pdp.tex", "max_stars_repo_name": "sparsh999gupta/interpretable-ml", "max_stars_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2018-08-21T09:37:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-04T17:34:07.000Z", "max_issues_repo_path": "tex/sections/pdp.tex", "max_issues_repo_name": "sparsh999gupta/interpretable-ml", "max_issues_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2018-09-03T19:27:19.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-06T21:27:47.000Z", "max_forks_repo_path": "tex/sections/pdp.tex", "max_forks_repo_name": "sparsh999gupta/interpretable-ml", "max_forks_repo_head_hexsha": "a2c06777f686cb8e23c210e23120ccab6650c508", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-10-22T16:22:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-24T17:24:41.000Z", "avg_line_length": 61.2142857143, "max_line_length": 316, "alphanum_fraction": 0.6137689615, "num_tokens": 1037, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.800692004473946, "lm_q1q2_score": 0.5828841213923477}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{October 3, 2014}\n\\maketitle\nif $\\liminf x_n=L$ then there exists $\\{x_{n_k}\\}$ such that $\\lim x_{n_k}=L$\n\n$l=\\liminf x_n=\\lim(inf\\{\\underbrace{x_{n_1},x_{n_2},x_{n_3},\\dots}_{c_n}\\}$\n\nwhy not just let $c_n$ be the subseuence? because $c_n$ may not be equal to any of the $x_k$ in the sequence\n\n$c_n=\\inf\\{x_{n_1},x_{n_2},\\dots\\}$ give $\\varepsilon=2^{-n}$ there exists $x_{n_k}\\in\\{x_{n_1},x_{n+1},x_{n+2},\\dots\\}$ such that $\\left\\lvert c_n-x_{n_k}\\right\\rvert<2^{-n}$ by def of infinum\n\nwe has a sequence $\\{c_n\\}$ given $\\varepsilon>0$ there exists $N$ such that $\\left\\lvert c_n-L\\right\\rvert<\\varepsilon$ if $n\\ge N$. we approximate each $c_n$ by some $x_{n_k}$ from the oricinal sequence sutch that ....\n\n\\section*{convergence test for series}\nfirst we talk about series with positive terms $\\sum\\limits_{k=1}^\\infty{a_k}$, $s_n=\\sum\\limits_{k=1}^n{a_k}$. So if $s_n$ is bounded about then the series is convergent. and if not, it is divergent.\n\ngeometric series $\\sum\\limits_{n=0}^\\infty{r^n}$ is convergent if $\\left\\lvert r\\right\\rvert<1$. $s_n=\\sum\\limits_{k=0}n{r^k}=1+r+r^2+\\dots+r^n, rs_n=r+r^2+r^3+\\dots, sn-rSn=1-r^{n+1}$\n\n$s_n=\\frac{1-r^{n+1}}{1-r}\\to\\frac{1}{1-r}$\n\n\n\\section*{comparison test}\nif $\\forall n, |a_n|\\le b_n$\n\\begin{itemize}\n\\item\n  if $\\sum\\limits_{n=1}^\\infty{b_n}$ is convergent then $\\sum\\limits_{n=1}^\\infty{a_n}$ is convergent,\n  \\item\n  if $\\sum\\limits{a_n}$ is divergent, so is $\\sum\\limits{b_n}$.\n\\end{itemize}\n\\section*{3.2.b}\nshow that if $\\left(|a_n|\\right)_{n=1}^\\infty$ is summable then so is $\\left(a_n\\right)_{n=1}^\\infty$.\n\\begin{align*}\n  \\sum\\limits_{k=n+1}^m{|a_k|}&<\\varepsilon\\text{ for all }N\\le n\\le m \\text{ because is is summable}\\\\\n  \\left\\lvert\\sum\\limits_{k=n+1}^m{a_k}\\right\\rvert&\\le\\sum\\limits_{k=n+1}^m{|a_k|}<\\varepsilon\n\\end{align*}\nso then $\\sum\\limits{a_k}$ is also cauchy and summable\n\n\\section*{cauchy-schwartz inequality}\n$\\sum\\limits_{k=1}^n{a_kb_k}\\le \\left(\\sum\\limits_{k=1}^n{a_k^2}\\right)^{1/2}\\left(\\sum\\limits_{k=1}^n{b_k^2}\\right)^{1/2}$\n\n\\section*{3.2.f}\n\n\\section*{leibniz test for alternating series}\nif $\\{a_n\\}$ is a monotone decreasing sequence of positive terms with the $\\lim a_n=0$ then $\\sum\\limits_{n=1}^\\infty{(-1)^na_n}$ is convergent\n\n\\section*{note!} a sequence my have the property $\\lim |a_n-a_{n+!}|=0$ but not be cauchy\n\\section*{3.2.h}\nShow that if $\\sum\\limits_{n=1}^\\infty{a_n}$ and $\\sum\\limits_{n=1}^\\infty{b_n}$ are series with $b_n\\ge 0$ such that $\\limsup\\limits_{n\\to\\infty}<\\infty$ and $\\sum\\limits_{n=1}^\\infty{b_n}<\\infty$, then the series $\\sum\\limits_{n=1}^\\infty{a_n}$ converges.\n\n\\begin{align*}\n  \\left\\lvert\\left(\\sup\\limits_{k\\ge n}\\frac{|a_k|}{b_k}\\right)-L\\right\\rvert<\\varepsilon\\\\\n  \\left(\\sup\\limits_{k\\ge n}\\frac{|a_k|}{b_k}\\right)<L+\\varepsilon\\\\\n  \\frac{|a_k|}{b_k}<L\\varepsilon\\\\\n  |a_k|<(L+\\varepsilon) b_k\\\\\n\\end{align*}\n\\section*{3.2.j}\n$\\liminf\\frac{a_n+1}{a_n}\\le\\liminf{a_n^{\\frac{1}{n}}}\\le\\limsup a_n^{\\frac{1}{n}}\\le \\limsup\\frac{a_n+1}{a_n}$.\n\\subsection*{step 1}\nif $x\\ge 
r$ for all $r>b$ then $x$ is a lower bound for the set $\\{r\\in \\mathbb{R}:r>b\\}$, $x\\le\\inf\\{r\\in\\mathbb{R}:r>b\\}=b$\n\nwe will show that if $\\limsup \\frac{a_n}{b_n}<r$ then $\\limsup a_n^{\\frac{1}{n}}\\le r$ and then apply step one.\n\nlet $r>\\limsup \\frac{a_{n+1}}{a_n}$ then $\\exists N$ such that $r>\\frac{a_{n+!}}{a_n}\\forall n\\ge N$\n\\begin{align*}\n  a_{N+1}<ra_N\\\\\n  a_{N+2}<ra_{N+1}\\le r^2a_{N}\\\\\n  a_{N+K}<r^{k}a_N\\\\\n  a_{N+k}^{\\frac{1}{N+k}}<(r^ka_N)^{\\frac{1}{N+k}}\n\\end{align*}\n\\section*{quiz from 10/1/2014}\n$L_k\\to L$ then $\\{x_n\\}$ such that $\\forall k, \\exists$ a subsequence of $\\{x_n\\}$ converging to $L_k$. prove that $\\{x_n\\}$ has a subsequence converging to $L$.\n\ngiven $\\varepsilon>0\\exists N_0$ such that $\\left\\lvert L_k-L\\right\\rvert<\\varepsilon$ if $k\\ge N_0$\n\n$\\left\\lvert x_{N_k}-L\\right\\rvert\\le\\left\\lvert x_{N_k}-L_k\\right\\rvert+\\left\\lvert L_k-L\\right\\rvert<2\\varepsilon$\n\n\\section*{example}\nlet $A,B\\subseteq \\mathbb{R}$, prove that $\\sup A\\le\\inf B$, if $\\forall a\\in A, b\\in B, a\\leb$\n\\end{document}\n\n", "meta": {"hexsha": "f85e08d6cf513d0b1effd08fe47c270b01f7f818", "size": 4298, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-10-03.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-10-03.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-10-03.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.2150537634, "max_line_length": 257, "alphanum_fraction": 0.6654257794, "num_tokens": 1814, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5828841179301086}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section*{9 October 2019}\n\nClarification from last lecture.\nDithering is caused by the fact that any single pixel in an X-ray CCD may be arbitrarily noisy, and if the telescope only looks at an astronomical object from a certain viewpoint without rotation a certain region of the object will always be imaged by the same pixel, therefore the errors between images will be correlated.\nSo, we move our telescope around in order to have a certain region of the astronomical object be imaged by several uncorrelated pixels.\n\nNormal distribution: the pdf is\n%\n\\begin{equation}\n  \\mathcal N (x; \\mu, \\sigma^2) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} \\exp(\\frac{-(x-\\mu)^2}{2\\sigma^2} ) \\,.\n\\end{equation}\n\nPoisson distribution: the probability of getting \\(m\\) events if they happen independently at a rate \\(\\lambda\\) is\n%\n\\begin{equation}\n\\mathbb{P} (m ; \\lambda) = \\frac{e^{-\\lambda} \\lambda^m}{m!} \\,,\n\\end{equation}\n%\nand it is easy to prove that for a Poisson distribution \\(\\mu = \\sigma^2 = \\lambda\\).\n\nIf the rate is, say, a temporal rate like \\([\\alpha] = \\SI{}{\\hertz}\\) then we substitute \\(\\lambda = \\alpha t_{\\text{obs}}\\).\n\nWe discuss detectors: semiconductors, NP junctions.\n\nWe define the signal-to-noise ratio:\n\n\\begin{equation}\n\t\\text{SNR} = \\frac{S}{\\sqrt{S + B}}\n\t= \\frac{S_{\\text{ph}} t A W_b}{\\sqrt{\\qty(S_{\\text{ph}} + B_\\text{sky})tAW_b + Dt + R^2_N } }\\,,\n\\end{equation}\n%\nwhere $S= S_{\\text{ph}} tAW_b$ is the (constant) signal photon count: \\(t\\) is the observation time, \\(A\\) is the telescope area, \\(W_b\\) is the bandwidth, and \\(S_{\\text{ph}}\\) is the source photon intensity.\n\nWe must also account for the background $B$: it is caused by dark counts whose rate is \\(D\\), by background photons whose intensity is \\(B_{\\text{sky}}\\), and by readout noise \\(R_N^2\\).\n\nWe can define the detection threshold\\dots\nThis will be talked about next lecture.\n\n\\end{document}\n", "meta": {"hexsha": "8b7f8dd2e224ca7b3cb1a6decd62f1ef6568f88b", "size": 1937, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_first_semester/astrophysics_lab/09oct.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_first_semester/astrophysics_lab/09oct.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_first_semester/astrophysics_lab/09oct.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 45.0465116279, "max_line_length": 323, "alphanum_fraction": 0.7077955601, "num_tokens": 577, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7279754371026368, "lm_q1q2_score": 0.5828841171348658}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header2.tex}\n\n\n\\title{Phys 220A -- Classical Mechanics -- HW03}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{23}{10}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\maketitle\n\n\n\\section*{Problem 1 (15 pts)}\n\\textit{\nIn this problem you will show that for small times in a one dimensional system the solution of the Euler-Lagrange equations minimizes the action. Consider an action of the form\n\\begin{eqn}\nS[x] = \\int_0^T \\dif{t} \\left( \\frac{1}{2} m \\dot{x}^2 - V(x) \\right).\n\\end{eqn}\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nShow that for a variation $\\delta x (0) = \\delta x(T) = 0$ that the variation in the action is\n\\begin{eqn}\n\\Delta S = - \\int_0^T \\dif{t} \\left( m \\ddot{x} + \\pd{V}{x} \\right) \\delta x + \\bigO(\\delta x^2).\n\\end{eqn}\n}\n\n\n% part B\n\\item \\textit{\nShow that for an $x(t)$ which satisfies the Euler-Lagrange equations the second variation has the form\n\\begin{eqn}\n\\delta S = \\frac{1}{2} \\int_0^T \\dif{t} \\left( m(\\delta \\dot{x})^2 - g(t) \\delta x^2 \\right) + \\bigO(\\delta x^3)\n\\end{eqn}\nand show that\n\\begin{eqn}\ng(t) = \\eval[3]{ \\pd[2]{V}{x} }_{x = x(t)}.\n\\end{eqn}\n}\n\n\n% part C\n\\item \\textit{\nUsing the boundary condition $\\delta x(0) = \\delta x(T) = 0$ and estimating the value of $g(t)$ show that one can choose a time $T$ small enough that the order $\\delta x^2$ term in the variation of $S$ is always positive.\n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 2 (15 pts)}\n\\textbf{\nNote: A variant of this problem appeared in the 2013 comprehensive exam.\n}\n\n\\textit{\nA mass $m_1$ is moving (without friction) on a desk in the presence of constant gravitational fields. It is attached by a massless rope through a hole in the desk to a second mass $m_2$, the first mass rotates on the desk (see figure 1).\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nFind the constraints and the Lagrangian for this system.\n}\n\n\n% part B\n\\item \\textit{\nCalculate the equations of motion.\n}\n\n\n% part C\n\\item \\textit{\nFind conserved quantities of the system to reduce the equations of motion to a single differential equation of first order.\n}\n\n\n% part D\n\\item \\textit{\nFind the initial conditions for which the mass $m_1$ moves on a circle, calculate the tension in the string for this solution.\n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 3 (15 pts)}\n\\textit{\nA mass $m_1$ is moving frictionless on a ramp with angle $\\alpha$ with base accelerates with constant acceleration $x_0(t) = \\frac{1}{2} a t^2$. (See Figure 2.) \n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nSet up the constraints and the Lagrangian.\n}\n\n\n% part B\n\\item \\textit{\nFind generalized coordinates and solve the Euler Lagrangian equations \n}\n\n\n% part C\n\\item \\textit{\nWhat is the condition that the mass leaves the ramp?\n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 4 (20 pts)}\n\\textit{\nA particle of mass $m_1$ is constrained to move on a circle $x^2 + y^2 = r_1^2$, a second particle of mass $m_2$ is constrained to move on a second circle $(x - a)^2 + y^2 = r_2^2$. The two masses are connected by a (massless) spring with constant $k$ (i.e. 
there is a potential which takes the form $V = \\frac{1}{2} k d^2$, where $d$ is the Euclidean distance between the masses).\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nIdentify the two generalized coordinates and express the Lagrangian in terms of these. \n}\n\n\n% part B\n\\item \\textit{\nWrite down the equations of motion following from the Lagrangian.\n}\n\n\n% part C\n\\item \\textit{\nFor which value of $a$ is there an additional conserved quantity?\n}\n\n\n% part D\n\\item \\textit{\nFor the special case in (c) reduce the equation of motion to a first-order equation.\n}\n\n\n\\end{enumproblem}\n\n\n\\end{document}\n", "meta": {"hexsha": "491771d1744b58fa5d5214e6d74aff53fe797298", "size": 3916, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classical/hw03.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classical/hw03.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classical/hw03.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0245398773, "max_line_length": 379, "alphanum_fraction": 0.7048008172, "num_tokens": 1194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.8006919925839875, "lm_q1q2_score": 0.5828841032859094}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                                                                      %\n% GLE - Graphics Layout Engine <http://www.gle-graphics.org/>          %\n%                                                                      %\n% Modified BSD License                                                 %\n%                                                                      %\n% Copyright (C) 2009 GLE.                                              %\n%                                                                      %\n% Redistribution and use in source and binary forms, with or without   %\n% modification, are permitted provided that the following conditions   %\n% are met:                                                             %\n%                                                                      %\n%    1. Redistributions of source code must retain the above copyright %\n% notice, this list of conditions and the following disclaimer.        %\n%                                                                      %\n%    2. Redistributions in binary form must reproduce the above        %\n% copyright notice, this list of conditions and the following          %\n% disclaimer in the documentation and/or other materials provided with %\n% the distribution.                                                    %\n%                                                                      %\n%    3. The name of the author may not be used to endorse or promote   %\n% products derived from this software without specific prior written   %\n% permission.                                                          %\n%                                                                      %\n% THIS SOFTWARE IS PROVIDED BY THE AUTHOR \"AS IS\" AND ANY EXPRESS OR   %\n% IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED       %\n% WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE   %\n% ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY       %\n% DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL   %\n% DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE    %\n% GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS        %\n% INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER %\n% IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR      %\n% OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN  %\n% IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.                        %\n%                                                                      %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter {GLE Utilities}\n\\label{util:chap}\n\\section{Fitls}\n\\index{Fitls}\nThe FITLS utility allows an equation with $n$ unknown constants to \nbe fitted to experimental data.\n\nFor example to fit a simple least squares regression line to a set of \npoints you would give FITLS the equation: \\verb# a*x+b#\n\nFITLS would then solve the equation to find the {\\em best} values for\nthe constants $a$ and $b$. 
\n\nFITLS can work with non linear equations, it will ask for initial\nvalues for the parameters so that a solution around those initial\nguesses will be found.\n\nFITLS writes out a GLE file containing commands to draw the data\npoints and the equation it has fitted to them.\n\nHere is a sample FITLS session:\n\\begin{verbatim}\n$ fitls\nInput data file (x and y columns optional) [test.dat,1,2] ? fitls.dat\nLoading data from file, fitls.dat,  xcolumn=1, ycolumn=2\nValid operators: +, -, *, /, ^ (power) \nValid functions:\n        abs(), atn(), cos(), exp(), fix(), int()\n        log(), log10(), not(), rnd(), sgn(), sin()\n        sqr(), sqrt(), tan()\n\n Enter a function of 'x' using constants 'a'...'z' \n e.g.    a + b*x   (standard linear least squares fit) \n         sin(x)*a+b \n         a + b*x + c*x^2 + d*x^3  \n         log(a*x)+(b+x)*c+a \n\nEquation ? sin(a*x)*b+c*x^2+d\n\nOutput file name to write gle file [fitls.gle] ?\nPrecision of fit required, [1e-4] ?\nInitial value for constant a [1.0] ? \nInitial value for constant b [1.0] ? \nInitial value for constant c [1.0] ? \nInitial value for constant d [1.0] ? \n0 evaluations, 1 1 1 1 , fit = 1355.36 \n20 evaluations, 1.97005 1 1 1 , fit = 1281.95 \n40 evaluations, 1.97005 10.228 0.151285 1 , fit = 54.7694 \n60 evaluations, 2.01053 10.228 0.151285 1.06365 , fit = 54.1771 \n.\n.\n.\n440 evaluations, -0.640525 -2.81525 0.13997 1.13871 , fit = 0.940192 \n460 evaluations, -0.638055 -2.82934 0.140971 1.10502 , fit = 0.93842 \n480 evaluations, -0.63808 -2.82357 0.140993 1.10452 , fit = 0.938389 \na = -0.638262 b = -2.81719 c = 0.140722 d = 1.11256 \n\n10 Iterations, sum of squares divided by n = 0.938389\ny = sin(-0.638262*x)*-2.81719+0.140722*x^2+1.11256\n\\end{verbatim}\n\\vspace*{1.0cm}\n\\psgraphin{utilities/fig/fitls}{8.0}{5.0}{\\sf fitls}             \n", "meta": {"hexsha": "4a7f33bbad2fa4274b69bb14d7f081e3e707f72c", "size": 4888, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "utilities/fitls.tex", "max_stars_repo_name": "vlabella/gle-manual", "max_stars_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "utilities/fitls.tex", "max_issues_repo_name": "vlabella/gle-manual", "max_issues_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "utilities/fitls.tex", "max_forks_repo_name": "vlabella/gle-manual", "max_forks_repo_head_hexsha": "b7ff1659436a544efacd9fd5df13b6206131605b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.396039604, "max_line_length": 72, "alphanum_fraction": 0.5362111293, "num_tokens": 1130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115012, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5828817557940509}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath, amsthm, amssymb, amsfonts}\n\\usepackage{mathtools, physics, xcolor}\n\\usepackage{hyperref}\n\n\\newcommand{\\sor}{\\mathbf{R}}\n\\DeclareMathOperator{\\aut}{Aut}\n\\DeclareMathOperator{\\card}{card}\n\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}\n\\numberwithin{thm}{section}\n\n\\theoremstyle{plain}\n\\newtheorem{prop}{Proposition}\n\\numberwithin{prop}{section}\n\n\\theoremstyle{definition}\n\\newtheorem{defn}{Definition}\n\\numberwithin{defn}{section}\n\n\\theoremstyle{remark}\n\\newtheorem*{rem}{Remark}\n\n\\newtheorem*{cor}{Corollary}\n\n\\numberwithin{equation}{section}\n\n\\title{Group Theory}\n\\author{Amey Joshi}\n\\date{24-Dec-2020}\n\\begin{document}\n\\maketitle\n\\section{Basic definitions}\\label{s1}\n\\begin{defn}\\label{s1d1}\nA group is a set $G$ and a binary operation $\\cdot$ defined on it with the\nfollowing properties:\n\\begin{enumerate}\n\\item For all $a, b \\in G$, $a \\cdot b \\in G$;\n\\item For all $a, b, c \\in G$, $a \\cdot (b \\cdot c)= (a \\cdot b) \\cdot c$;\n\\item There exists an element $e \\in G$ such that for all $a \\in G$, $a \\cdot e\n= e \\cdot a = a$. It is called the identity element.\n\\item For all $a \\in G$ there exists an element $x_1$ such that $a \\cdot x_1 = \ne$ and an element $x_2$ such that $x_2 \\cdot a = e$. The elements $x_1$ and \n$x_2$ are called the right and left inverse of $a$.\n\\end{enumerate}\n\\end{defn}\n\\begin{rem}\nWe frequently abbreviate $a \\cdot b$ as $ab$.\n\\end{rem}\n\nA few propositions follow immediately.\n\\begin{prop}\\label{s1p1}\nIdentity element is unique.\n\\end{prop}\n\\begin{proof}\nLet, if possible, there be two identity elements $e_1$ and $e_2$. Since $e_1$\nis an identity $a e_1 = a$ for all $a \\in G$. Choose $a = e_2$ so that $e_2\ne_1 = e_2$. But $e_2$ is also an identity so that $e_1 = e_2$.\n\\end{proof}\n\n\\begin{prop}\\label{s1p2}\nThe right and left inverse of an element are identical.\n\\end{prop}\n\\begin{proof}\nLet is possible there be elements $x, y \\in G$ such that $xa = e$ and $ay = e$.\nThen $(xa)y = ey \\Rightarrow x(ay) = y \\Rightarrow xe = y \\Rightarrow x = y$.\n\\end{proof}\n\n\\begin{rem}\nWe no longer have to differentiate between the right and the left inverse of \nan element and call it just the inverse.\n\\end{rem}\n\n\\begin{prop}\\label{s1p3}\nThe inverse of an element is unique.\n\\end{prop}\n\\begin{proof}\nLet if possible there be elements $x, y \\in G$ such that $ax = e$ and $ay = e$.\nThen $y(ax) = ye \\Rightarrow (ya)x = y \\Rightarrow ex = y \\Rightarrow x = y$.\n\\end{proof}\n\n\\begin{rem}\nThe unique inverse of an element $a$ is denoted by $a^{-1}$.\n\\end{rem}\n\n\\begin{prop}\\label{s1p4}\nIf $x, y \\in G$ then $(xy)^{-1} = y^{-1}x^{-1}$.\n\\end{prop}\n\\begin{proof}\nIt is easy to check that $y^{-1}x^{-1}$ is an inverse of $xy$. From proposition\n\\ref{s1p3} it is the only inverse.\n\\end{proof}\n\n\\begin{prop}\\label{s1p5}\nFor all $x \\in G$, $(x^{-1})^{-1} = x$.\n\\end{prop}\n\\begin{proof}\nFollows from the fact that $xx^{-1} = e$ and the uniqueness of inverse of \n$x^{-1}$.\n\\end{proof}\n\n\\begin{defn}\\label{s1d2}\nA group $G$ is called abelian if $xy = yx$ for all $x, y \\in G$.\n\\end{defn}\n\n\\section{Conjugate elements}\\label{s2}\n\\begin{defn}\\label{s2d1}\nElements $a$ and $b$ of a group $G$ are said to be conjugate to each other\nif there exists an element $c \\in G$ such that $a = cbc^{-1}$. \n\\end{defn}\n\nIf $a$ is conjugate to $b$ it is written as $a \\sim b$. 
Clearly, conjugacy is\na binary relation on $G$. Here are a few examples of conjugate elements.\n\n\\begin{enumerate}\n\\item\nLet $a$ denote a rotation of a rigid body by an angle $\\phi$ about the $x$-axis\nand $b$ be the rotation by an angle $\\phi$ about the $y$ axis. Then,\n\\begin{eqnarray}\na &=& \\begin{pmatrix}1 & 0 & 0 \\\\ 0 & \\cos\\phi & \\sin\\phi \\\\\n0 & -\\sin\\phi & \\cos\\phi\\end{pmatrix} \\label{s2e1} \\\\\nb &=& \\begin{pmatrix} \\cos\\phi & 0 & \\sin\\phi \\\\ 0 & 1 & 0 \\\\\n-\\sin\\phi  & 0 & \\cos\\phi\\end{pmatrix} \\label{s2e2} \n\\end{eqnarray}\nChoose\n\\begin{equation}\\label{s2e3}\nc = \\begin{pmatrix}0 & 1 & 0 \\\\ -1 & 0 & 0 \\\\ 0 & 0 & 1\\end{pmatrix}\n\\end{equation}\nso that\n\\begin{equation}\\label{s2e4}\nc^{-1} = \\begin{pmatrix}0 & -1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 1\\end{pmatrix}.\n\\end{equation}\nWe can readily confirm that $cac^{-1} = b$. The matrix $c$ is just the\nrotation around the $z$-axis by $\\pi/2$. Therefore the operation $cbc^{-1}$ \nmeans\n\\begin{enumerate}\n\\item Apply $c^{-1}$, that is, rotate around the $z$ axis by $-\\pi/2$. This\nbrings the $y$ axis to the where the $x$ axis was prior to rotation;\n\\item Apply $b$, that is, rotate around the `rotated $y$ axis' by $\\phi$;\n\\item Apply $c$, that is, rotate around the $z$ axis by $\\pi/2$. This restores\nthe $x$ and $y$ axes to their original positions.\n\\end{enumerate}\n\n\\item Consider the elements of the symmetric group $S_3$ which involve \ntransposing two elements. They are written in Cayley's two line notation as\n\\begin{eqnarray}\nx &=& \\begin{pmatrix}1 & 2 & 3 \\\\ 1 & 3 & 2 \\end{pmatrix} \\label{s2e5} \\\\\ny &=& \\begin{pmatrix}1 & 2 & 3 \\\\ 3 & 2 & 1 \\end{pmatrix} \\label{s2e6} \\\\\nz &=& \\begin{pmatrix}1 & 2 & 3 \\\\ 2 & 1 & 3 \\end{pmatrix} \\label{s2e7}\n\\end{eqnarray}\nIt is easy to check that the elements $x, y, z$ are their own inverses. This\nis not a surprising observation because carrying out the same transposition\ntwice gives the original arrangement. It is easy to confirm that $zxz^{-1} = y$.\n\\end{enumerate}\n\nIn some sense, conjugate elements are `similar' to each other. The matrices\n$a$ and $b$ in equations \\eqref{s2e1} and \\eqref{s2e2} are both rotations \nby the same angle but about different axes. This notion of similarity is \ncaptured in\n\\begin{prop}\\label{s2p1}\nConjugacy is an equivalence relation.\n\\end{prop}\n\\begin{proof}\nChoose $c = a, b = a$ in the equation $a = cbc^{-1}$ to show that $a \\sim a$.\nIf $a = cbc^{-1}$ then $c^{-1}a = bc^{-1} \\Rightarrow c^{-1}ac = b$ so that\n$b \\sim a$. Finally, let $a \\sim b$ and $b \\sim c$. Thus, there exist $x, y\n\\in G$ such that $a = xbx^{-1}$ and $b = ycy^{-1}$ so that $a = xycy^{-1}x^{-1}\n= (xy)c(xy)^{-1}$. Thus, $a \\sim c$.\n\\end{proof}\n\nTherefore, the conjugacy relation partitions the groups into equivalence \nclasses. 
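For instance, in the symmetric group $S_3$ of the second example, conjugacy partitions the six elements into three classes: the identity on its own, the three transpositions $x, y, z$ (we verified that $zxz^{-1} = y$; the remaining conjugations can be checked in the same way), and the two cyclic permutations of order three. 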
The members of the equivalence classes are indeed `similar' to each\nother in the sense illustrated by the two examples above.\n\n\\section{Subgroups}\\label{s3}\n\\begin{defn}\\label{s3d1}\nA subset $H$ of a group $G$ is called a subgroup if $H$ is a group under\nthe same binary operation.\n\\end{defn}\n\n\\begin{rem}\nA subgroup must contain the identity element.\n\\end{rem}\n\n\\begin{rem}\n$H = G$ and $H = \\{e\\}$ are called trivial subgroups of $G$.\n\\end{rem}\n\n\\begin{prop}\\label{s3p1}\n$H$ is a subgroup of a group $G$ if and only if for all $h_1, h_2 \\in H$,\n$h_1^{-1}h_2 \\in H$.\n\\end{prop}\n\\begin{proof}\nThe `only if' part follows immediately from the closure and inverse properties\nof $H$.\n\nLet $H \\subset G$ such that for all $h_1, h_2 \\in H$, $h_1^{-1}h_2 \\in H$.\nChoose $h_2 = h_1$ to conclude that $e \\in H$. Choose $h_2 = e$ to conclude\nthat $h_1^{-1} \\in H$. Since $h_1^{-1} \\in H$, $(h_1^{-1})^{-1}h_2 = h_1h_2\n\\in H$. Associativity of the binary operation holds for all elements of $G$\nand therefore also for the elements of $H$.\n\\end{proof}\n\nA subgroup $H$ of a group $G$ is a set from which one cannot escape by carrying\nout group operations among its elements. The only way to move out of $H$ is\nby operating an $h \\in H$ with a $g \\in G$ such that $g \\notin H$.\n\n\\begin{defn}\\label{s3d2}\nGiven a subgroup $H$ of a group $G$, the left coset of an element $a \\in G$\nis the set $aH = \\{ah | h \\in H\\}$.\n\\end{defn}\n\n\\begin{prop}\\label{s3p2}\n$\\card{aH} = \\card{H}$.\n\\end{prop}\n\\begin{proof}\nIf $ah_i = ah_j$ then $a^{-1}ah_i = a^{-1}ah_j \\Rightarrow h_i = h_j$. Thus\nall elements of $aH$ are distinct.\n\\end{proof}\n\n\\begin{prop}\\label{s3p3}\n$a \\in aH$. If $b \\in aH$ then $aH = bH$.\n\\end{prop}\n\\begin{proof}\nSince $H$ is a subgroup, $e \\in H$. Therefore, the set $aH$ contains $ae = a$.\n\nLet $b \\in aH$. Therefore, there exists $h \\in H$ such that $b = ah$. Therefore\nany element of $bH$ is of the form $bh^\\prime = ahh^\\prime$. Thus $bH \\subset\naH$. Since $b = ah$, $a = bh^{-1}$, and any element of $aH$ can be written\nas $ah^\\prime = b(h^{-1}h^\\prime)$, which is an element of $bH$. Thus $aH \\subset bH$.\n\\end{proof}\n\nGiven a subgroup $H$ of $G$, the left cosets of $H$ partition the set $G$.\nTherefore, the relation $a R b$ if $b \\in aH$ is an equivalence relation. We\ncan prove this fact independently of proposition \\ref{s3p3}.\n\n\\begin{prop}\\label{s3p4}\nLet $H$ be a subgroup of a group $G$ and $aH$ be one of its left cosets. Define\na relation $R$ on $G$ as $a R b$ if $b \\in aH$. Then $R$ is an equivalence\nrelation.\n\\end{prop}\n\\begin{proof}\nSince $H$ is a subgroup of $G$, $e \\in H$ and hence $a \\in aH$. We showed\nin the proof of proposition \\ref{s3p3} that if $b \\in aH$ then $a \\in bH$.\nFinally, assume $a \\in bH$ and $b \\in cH$. Then $a = bh_1$ and $b = ch_2$ \nso that $a = c(h_2h_1)$. Thus $a \\in cH$.\n\\end{proof}\n\n\\begin{prop}\\label{s3p5}\nLeft coset $aH$ of a subgroup $H$ of a group $G$ is not a subgroup of $G$\nunless $a = e$.\n\\end{prop}\n\\begin{proof}\nIf $a \\ne e$ then $e \\notin aH$.\n\\end{proof}\n\n\\begin{rem}\nNote that $a \\notin H$ when we talk about the coset $aH$. Therefore, $a^{-1}$\ntoo is not a member of $H$. For if it were, then $a$ too would be a member of \nthe subgroup $H$.\n\nTherefore, we are guaranteed not to have $e$ in the set $aH$.\n\\end{rem}\n
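\nAs a simple example, take $G$ to be the group of residues $\\{0, 1, 2, 3, 4, 5\\}$ under\naddition modulo $6$ and $H = \\{0, 3\\}$. The left cosets (written additively) are $\\{0, 3\\}$,\n$\\{1, 4\\}$ and $\\{2, 5\\}$: they are pairwise disjoint, each has the cardinality of $H$, and\ntogether they cover $G$.\n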
\nSince all left cosets of a subgroup $H$ of a group $G$ have the same cardinality\nand since they partition $G$, we have\n\\begin{thm}\\label{s3t1}\n$\\card H | \\card G$.\n\\end{thm}\n\n\\begin{defn}\\label{s3d3}\nGiven a subgroup $H$ of a group $G$, the right coset of an element $a \\in G$\nis the set $Ha = \\{ha | h \\in H\\}$.\n\\end{defn}\n\nRight cosets have properties similar to those of left cosets. In particular,\nthey obey propositions analogous to \\ref{s3p2}, \\ref{s3p3}, \\ref{s3p4} and\n\\ref{s3p5}.\n\nAlthough cosets of a subgroup are not themselves subgroups, the set\n\\begin{equation}\\label{s3e1}\nH_a = \\{aha^{-1} | h \\in H\\}\n\\end{equation}\nfor a fixed element $a \\in G$ is indeed a group. We prove it as \n\\begin{prop}\\label{s3p6}\n$H_a$ is a subgroup of $G$.\n\\end{prop}\n\\begin{proof}\nConsider two elements $x_1 = ah_1a^{-1}$ and $x_2 = ah_2a^{-1}$. Then $x_1^{-1}\nx_2 = ah_1^{-1}a^{-1}ah_2 a^{-1} = a(h_1^{-1}h_2) a^{-1}$. From proposition\n\\ref{s3p1}, $h_1^{-1}h_2 \\in H$ so that $x_1^{-1}x_2 \\in H_a$.\n\\end{proof}\n\nWe also have\n\\begin{prop}\\label{s3e7}\nIf $a \\in H$, $H_a = H$.\n\\end{prop}\n\\begin{proof}\nFrom the closure properties of $H$ it is clear that $H_a \\subset H$. Conversely,\nlet $x \\in H$. Then $x = a(a^{-1}xa)a^{-1}$ and $a^{-1}xa \\in H$, so that $x \\in\nH_a$. Thus $H \\subset H_a$.\n\\end{proof}\n\n\\begin{rem}\nThe set $H_a$ contains elements conjugate to elements of $H$. We observed\npreviously that conjugate elements are `similar' in some sense. Therefore, it\nshould be no surprise that the `similar elements' also form a subgroup.\n\\end{rem}\n\n\\begin{defn}\\label{s3d4}\nIf $H = H_a$ for all $a \\in G$ then $H$ is called an invariant subgroup of $G$.\n\\end{defn}\n\n\\begin{rem}\nAn invariant subgroup is also called a normal subgroup. An invariant subgroup\nremains unchanged under the conjugacy operation.\n\\end{rem}\n\n\\begin{prop}\\label{s3e8}\nIf $H$ is an invariant subgroup of $G$ then $aH = Ha$ for all $a \\in G$.\n\\end{prop}\n\\begin{proof}\nSince $H$ is invariant, $aha^{-1}$ must be equal to an element $h^\\prime$ of $H$.\nTherefore, $ah = h^\\prime a$. Since this is true for all $h \\in H$, $aH = Ha$.\n\\end{proof}\n\nIf $H$ is an invariant subgroup of $G$ then we can define the product of its\ncosets as\n\\begin{equation}\\label{s3e2}\naHbH  = \\{ahbh^\\prime | h, h^\\prime \\in H\\}.\n\\end{equation}\n\n\\begin{prop}\\label{s3e9}\nThe set of all cosets of an invariant subgroup is closed under coset composition\ndefined by equation \\eqref{s3e2}.\n\\end{prop}\n\\begin{proof}\n$ahbh^\\prime = a(hb)h^\\prime$. Since $H$ is an invariant subgroup $bH = Hb$.\nTherefore, we can write $hb = bh^{\\prime\\prime}$ and hence $ahbh^\\prime = \nab(h^{\\prime\\prime}h^\\prime)$, which is a member of $abH$.\n\\end{proof}\n\nIf $H$ is an invariant subgroup then the set $G/H$ of its cosets with a \ncomposition law defined by equation \\eqref{s3e2} forms a group. It is called\nthe factor group. Its closure is proved in proposition \\ref{s3e9}. The set\n$eH = H$ is its identity. 
The set $a^{-1}H$ is the inverse of $aH$ and the\nassociativity of coset composition follows immediately from the associativity\nof the elements of $G$.\n\n\\begin{defn}\\label{s3d5}\nThe set $G/H$ of all cosets of an invariant subgroup $H$ of a group $G$ is\ncalled the factor group.\n\\end{defn}\n\n\\begin{defn}\\label{s3d6}\n$G$ is called a simple group if it has no nontrivial invariant subgroups.\n\\end{defn}\n\n\\begin{defn}\\label{s3d7}\n$G$ is called a semisimple group if it has no nontrivial abelian invariant \nsubgroups.\n\\end{defn}\n\nFrom their definitions it is clear that\n\\begin{prop}\\label{s3p10}\nIf $G$ is simple then it is semisimple.\n\\end{prop}\n\n\\begin{rem}\nThe converse need not be true.\n\\end{rem}\n\n\\section{Some important groups}\\label{s4}\nIn this section we list a few groups that are of importance in physics.\n\\begin{enumerate}\n\\item $O(n)$, the set of all orthogonal $n \\times n$ matrices. Recall that an\n$n \\times n$ matrix $M$ is orthogonal if $MM^T = M^T M = I_n$.\n\\item $SO(n)$, a subgroup of $O(n)$ of matrices whose determinant is $1$.\n\\item $U(n)$, the set of all unitary matrices. $M$ is a unitary matrix if\n$MM^\\dagger = M^\\dagger M = I_n$.\n\\item $SU(n)$, a subgroup of $U(n)$ of matrices whose determinant is $1$.\n\\item $GL(n, \\sor)$, the set of all invertible, real $n \\times n$ matrices.\n\\item $SL(n, \\sor)$ is a subgroup of $GL(n, \\sor)$ of matrices with determinant\nequal to $1$.\n\\item A $(2n) \\times (2n)$ matrix $M$ is called a symplectic matrix if\n\\begin{equation}\\label{s4e1}\nM^T\\Omega M = \\Omega,\n\\end{equation}\nwhere\n\\begin{equation}\\label{s4e2}\n\\Omega = \\begin{pmatrix}0 & I_n \\\\ -I_n & 0 \\end{pmatrix}.\n\\end{equation}\nThe set of all symplectic matrices also forms a group. It is called $Sp(2n,\n\\sor)$.\n\\end{enumerate}\n\n\\section{Commutator subgroup}\\label{s5}\n\\begin{defn}\\label{s5d1}\nA function $q: G \\times G \\rightarrow G$ defined as\n\\[\nq(a, b) = aba^{-1}b^{-1}\n\\]\nis called the commutator of $a$ and $b$.\n\\end{defn}\n\n\\begin{prop}\\label{s5p1}\nIf $G$ is abelian then $q(a, b) = e$ for all $a, b \\in G$.\n\\end{prop}\n\\begin{proof}\nSince $G$ is abelian, $aba^{-1}b^{-1} = abb^{-1}a^{-1} = e$.\n\\end{proof}\n\n\\begin{prop}\\label{s5p2}\nThe commutator function has the following properties:\n\\begin{itemize}\n\\item $q(a, b)^{-1} = q(b, a)$.\n\\item $cq(a, b)c^{-1} = q(cac^{-1}, cbc^{-1})$.\n\\end{itemize}\n\\end{prop}\n\\begin{proof}\n$q(a, b)q(b, a) = aba^{-1}b^{-1}bab^{-1}a^{-1} = e$ implies that $q(b, a)$\nis an inverse of $q(a, b)$. From proposition \\eqref{s1p3} it is the only\ninverse.\n\nThe second property follows from \n\\begin{eqnarray*}\ncq(a, b)c^{-1} &=& caba^{-1}b^{-1}c^{-1} \\\\\n &=& (cac^{-1})(cbc^{-1})(ca^{-1}c^{-1})(cb^{-1}c^{-1}) \\\\\n &=& (cac^{-1})(cbc^{-1})(cac^{-1})^{-1}(cbc^{-1})^{-1} \\\\\n &=& q(cac^{-1}, cbc^{-1}).\n\\end{eqnarray*}\n\\end{proof}\n\n\\begin{defn}\\label{s5d2}\n$Q(G, G) = \\{\\prod_{i=0}^nq(a_i, b_i) | a_i, b_i \\in G, i \\in \\mathbf{N}\\}$.\n\\end{defn}\n\n\\begin{rem}\n$Q(G, G)$ is the set of arbitrary products of commutators of $G$.\n\\end{rem}\n\\begin{rem}\n$Q(G,G) \\subset G$.\n\\end{rem}\n\n\\begin{prop}\\label{s5p3}\n$Q(G, G)$ is an invariant subgroup of $G$.\n\\end{prop}\n\\begin{proof}\nLet $x, y \\in Q(G, G)$. Then $x = q(a, b)$ and $y = q(c, d)$ for some $a, b,\nc, d \\in G$. Consider $q(a, b)^{-1}q(c, d) = q(b, a)q(c, d) \\in Q(G, G)$, by\ndefinition. 
Therefore, by proposition \\ref{s3p1}, $Q(G, G)$ is a subgroup of\n$G$.\n\nLet $c \\in G$. Now $cq(a, b)c^{-1} = q(cac^{-1}, cbc^{-1}) \\in Q(G, G)$. \nTherefore, $Q(G, G)_c = Q(G, G)$. Since $c$ was an arbitrary element of $G$,\n$Q(G, G)_c = Q(G, G)$ for all $c \\in G$. This makes it an invariant subgroup.\n\\end{proof}\n\n\\begin{prop}\\label{s5p4}\nThe factor group $G/Q(G, G)$ is abelian.\n\\end{prop}\n\\begin{proof}\nRecall that $G/Q(G, G)$ is the set of all cosets of $Q(G, G)$ with the coset\ncomposition law defined in equation \\eqref{s3e2}. By definition of $Q(G, G)$,\n$x^{-1}y^{-1}xyQ(G, G) = Q(G, G)$. Therefore, $y^{-1}xyQ(G, G) = xQ(G, G)$ and\nhence $xyQ(G, G) = yxQ(G, G)$ or $xQ(G, G) yQ(G, G) = yQ(G, G) xQ(G, G)$.\n\\end{proof}\n\n\\begin{prop}\\label{s5p5}\nLet $H$ be an invariant subgroup of $G$. $G/H$ is abelian if and only if $H\n\\supset Q(G, G)$.\n\\end{prop}\n\\begin{proof}\nLet $G/H$ be abelian. Therefore, $abH = baH$ for all $a, b \\in G$. In other\nwords, $H = b^{-1}a^{-1}baH$ or that $b^{-1}a^{-1}ba = q(b^{-1}, a^{-1}) \\in\nH$ for all $a, b \\in G$. Therefore, $Q(G, G) \\subset H$.\n\nNow consider a normal subgroup $H \\supset Q(G, G)$ and its coset $abH$.\nSince $H \\supset Q(G, G)$, $x = b^{-1}a^{-1}ba \\in H$. Therefore, $abxH = abH$,\nor $abb^{-1}a^{-1}ba H = abH$ or $baH = abH$.\n\\end{proof}\n\nA statement equivalent to proposition \\ref{s5p5} is\n\\begin{prop}\\label{s5p6}\n$Q(G, G)$ is the smallest invariant subgroup of $G$ whose factor group is\nabelian.\n\\end{prop}\n\nLet $G_0 = G$ and $G_1 = Q(G_0, G_0) = Q(G, G)$. Then we saw that $G_1$ is an\ninvariant subgroup of $G_0$. In some sense $G_1$ is smaller than $G_0$. We can\ndefine a sequence of groups\n\\begin{equation}\\label{s5e1}\nG_j = \\begin{cases}\nG & \\text{ if } j = 0 \\\\\nQ(G_{j-1}, G_{j-1}) & \\text{ if } j \\ge 1.\n\\end{cases}\n\\end{equation}\nClearly, each $G_{j}$ is an invariant subgroup of $G_{j-1}$ and is in some\nsense smaller than $G_{j-1}$. \n\n\\begin{defn}\\label{s5d3}\nA group $G$ is solvable if $G_j = \\{e\\}$ for a finite $j \\in \\mathbf{N}$, where\n$G_j$ is defined by equation \\eqref{s5e1}.\n\\end{defn}\n\nIf $G$ is abelian then $G_1 = Q(G, G) = \\{e\\}$. Therefore, an abelian group\nis (trivially) solvable. Solvability is thus a generalisation of commutativity.\n\nRecall definition \\ref{s3d6} of a simple group. For a nonabelian solvable group\n$G$, the subgroup $Q(G, G)$ is invariant, different from $\\{e\\}$ and, by\nsolvability, different from $G$. It immediately follows that\n\\begin{prop}\\label{s5e7}\nA nonabelian solvable group is not simple.\n\\end{prop}\nAn equivalent, contrapositive statement is\n\\begin{prop}\\label{s5e8}\nA nonabelian simple group is not solvable.\n\\end{prop}\n\n\\section{Mappings between groups}\\label{s6}\n\\begin{defn}\\label{s6d1}\nA homomorphism from a group $G$ to a group $G^\\prime$ is a mapping $\\phi: G\n\\mapsto G^\\prime$ such that $\\phi(a)\\phi(b) = \\phi(ab)$ for all $a, b \\in G$.\n\\end{defn}\n\\begin{rem}\nA homomorphism, as its name suggests, indicates that $G$ and $G^\\prime$ have\nsimilar structure. The elements $a, b, ab$ of $G$ correspond to the elements\n$\\phi(a), \\phi(b), \\phi(a)\\phi(b) = \\phi(ab)$ of $G^\\prime$. 
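A familiar example is the exponential map $\\phi(x) = e^x$ from the additive group of real\nnumbers to the multiplicative group of positive reals: $\\phi(x)\\phi(y) = e^xe^y = e^{x+y} =\n\\phi(x + y)$.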
\n\\end{rem}\n\n\\begin{prop}\\label{s6p1}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\phi(e)$ is an identity\nof $G^\\prime$.\n\\end{prop}\n\\begin{proof}\n$\\phi(a)\\phi(e) = \\phi(ae) = \\phi(a)$ for all $a \\in G$.\n\\end{proof}\n\nSimilarly, one can show that\n\\begin{prop}\\label{s6p2}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\phi(a^{-1}) = \n(\\phi(a))^{-1}$ for all $a \\in G$.\n\\end{prop}\n\\begin{proof}\nPut $b = a^{-1}$ in definition \\ref{s6d1} to conclude that $\\phi(a)\n\\phi(a^{-1}) = \\phi(aa^{-1}) = \\phi(e)$. From the previous proposition we \nknow that $\\phi(e)$ is the identity of $G^\\prime$.\n\\end{proof}\n\n\\begin{prop}\\label{s6p3}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\phi(G) \\subset G^\\prime$.\n\\end{prop}\n\\begin{proof}\nChoose $b = e$ in definition \\ref{s6d1} to conclude that $\\phi(a) \\in\nG^\\prime$ for all $a \\in G$.\n\\end{proof}\n\nIn fact, we can also show that\n\\begin{prop}\\label{s6p4}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\phi(G)$ is a subgroup\nof $G^\\prime$.\n\\end{prop}\n\\begin{proof}\nLet $y_1 = \\phi(x_1)$ and $y_2 = \\phi(x_2)$. From proposition \\ref{s6p2}\n$y_1^{-1} = \\phi(x_1^{-1})$ so that $y^{-1}_1y_2 = \\phi(x_1^{-1})\\phi(x_2)\n= \\phi(x_1^{-1}x_2) \\in \\phi(G)$.\n\\end{proof}\n\n\\begin{defn}\\label{s6d2}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\ker(\\phi) = \\{a \\in G\n| \\phi(a) = e^\\prime\\}$ where $e^\\prime$ is the identity of $G^\\prime$, is\ncalled the kernel of $\\phi$.\n\\end{defn}\n\n\\begin{prop}\\label{s6p5}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\ker(\\phi)$ is a subgroup \nof $G$.\n\\end{prop}\n\\begin{proof}\nLet $a, b \\in \\ker(\\phi)$; then $\\phi(a) = \\phi(b) = e^\\prime$. Now, \n$(\\phi(a))^{-1} = (e^\\prime)^{-1} \\Rightarrow \\phi(a^{-1}) = e^\\prime$\nso that $a^{-1} \\in \\ker(\\phi)$. Therefore $\\phi(a^{-1})\\phi(b) = e^\\prime\ne^\\prime = e^\\prime \\Rightarrow \\phi(a^{-1}b) = e^\\prime \\Rightarrow a^{-1}b\n\\in \\ker(\\phi)$, making $\\ker(\\phi)$ a subgroup of $G$.\n\\end{proof}\n\n\\begin{prop}\\label{s6p6}\nIf $\\phi: G \\mapsto G^\\prime$ is a homomorphism then $\\ker(\\phi)$ is an\ninvariant subgroup of $G$.\n\\end{prop}\n\\begin{proof}\nLet $a \\in G$; then, using equation \\eqref{s3e1}, $\\ker(\\phi)_a = \\{aha^{-1}|\nh \\in \\ker(\\phi)\\}$. Now $\\phi(aha^{-1}) = \\phi(a)\\phi(h)\\phi(a^{-1}) =\n\\phi(a)e^\\prime(\\phi(a))^{-1} = e^\\prime$, so that $aha^{-1} \\in \\ker(\\phi)$. \nTherefore $\\ker(\\phi)_a = \\ker(\\phi)$ for all $a \\in G$.\n\\end{proof}\n
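\nAs an example, the determinant is a homomorphism from $GL(n, \\sor)$ of section \\ref{s4} to\nthe multiplicative group of nonzero real numbers, because $\\det(AB) = \\det(A)\\det(B)$; its\nkernel, the set of matrices of determinant $1$, is precisely the invariant subgroup $SL(n, \\sor)$.\n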
\n\\begin{defn}\\label{s6d3}\nA homomorphism $\\phi: G \\mapsto G^\\prime$ is an isomorphism if $\\phi$ is \nbijective.\n\\end{defn}\n\n\\begin{prop}\\label{s6p7}\nIf $\\phi:G \\mapsto G^\\prime$ is a homomorphism then $G/\\ker(\\phi)$ is \nisomorphic to $\\phi(G)$.\n\\end{prop}\n\\begin{proof}\n$G/\\ker(\\phi) = \\{a\\ker(\\phi)| a \\in G\\}$. Define a mapping $\\psi:G/\\ker(\\phi)\n\\mapsto \\phi(G)$ such that $\\psi(a\\ker(\\phi)) = \\phi(a)$. Now $\\psi(a\\ker(\\phi)\nb\\ker(\\phi)) = \\psi(ab\\ker(\\phi)) = \\phi(ab) = \n\\psi(a\\ker(\\phi))\\psi(b\\ker(\\phi)) $ so that $\\psi$ is indeed a homomorphism.\n\nLet $g^\\prime \\in \\phi(G)$. Therefore, there exists $g \\in G$ such that \n$\\phi(g) = g^\\prime$. Therefore $\\psi(g\\ker(\\phi)) = g^\\prime$. Thus, $\\psi$\nis surjective.\n\nLet, if possible, $\\psi(a\\ker(\\phi)) = \\psi(b\\ker(\\phi))$. Then, by the\ndefinition of $\\psi$, $\\phi(a) = \\phi(b)$, so that $\\phi(a^{-1}b) =\n(\\phi(a))^{-1}\\phi(b) = e^\\prime$. Thus $a^{-1}b \\in \\ker(\\phi)$ and hence\n$a\\ker(\\phi) = b\\ker(\\phi)$. This shows that $\\psi$ is injective.\n\\end{proof}\n\n\\begin{defn}\\label{s6d4}\nAn automorphism on a group $G$ is an isomorphism $G \\mapsto G$.\n\\end{defn}\n\n\\section{Direct and semidirect products}\\label{s7}\n\\begin{defn}\\label{s7d1}\nThe direct product $G_1 \\times G_2$ of two groups $G_1$ and $G_2$ is the set\n$\\{(g_1, g_2) | g_1 \\in G_1, g_2 \\in G_2\\}$ with the binary operation\n\\[\n(a_1, a_2)(b_1, b_2) = (a_1b_1, a_2b_2).\n\\]\n\\end{defn}\n\\begin{rem}\nNote that $a_1, b_1 \\in G_1$ so that $a_1b_1$ is a well-defined operation. \nThe same holds for $a_2b_2$.\n\\end{rem}\n\n\\begin{rem}\nThe difference between direct product and direct sum is explained in \nthis \\href{https://math.stackexchange.com/questions/2412232/direct-product-vs-direct-sum-of-infinite-dimensional-vector-spaces}{math.stackexchange} post.\n\\end{rem}\n\nIt is easy to confirm that\n\\begin{thm}\\label{s7t1}\n$G_1 \\times G_2$ is a group.\n\\end{thm}\n\\begin{proof}\nThe operation is clearly closed. $(e_1, e_2)$ is the identity, $(a_1^{-1}, \na_2^{-1})$ is the inverse of $(a_1, a_2)$ and the associativity of the binary\noperation follows from the associativity of components. \n\\end{proof}\n\n\\begin{defn}\\label{s7d2}\nLet $N$ and $H$ be two groups, $\\aut{N}$ be the set of all automorphisms on $N$ \nand let $\\varphi : H \\rightarrow \\aut{N}$ be a homomorphism. Then a semidirect \nproduct of $N$ and $H$ with respect to $\\varphi$, denoted $N \\rtimes_\\varphi H$,\nis defined as the set $N \\times H$ with a binary operation $\\ast$ defined as\n\\[\n(n_1, h_1)\\ast(n_2, h_2) = (n_1 + \\varphi(h_1)(n_2), h_1 \\cdot h_2),\n\\]\nwhere $+$ is the group operation of $N$ and $\\cdot$ that of $H$.\n\\end{defn}\n\n\\begin{prop}\\label{s7t2}\n$(N \\rtimes_\\varphi H, \\ast)$ is a group.\n\\end{prop}\n\\begin{proof}\nClosure of $\\ast$ follows from the definition. \n\nLet $e_N$ and $e_H$ be identities of $N$ and $H$. Therefore, $(e_N, e_H)\\ast\n(n, h) = (e_N + \\varphi(e_H)(n), e_H \\cdot h)$. Since $\\varphi$ is a \nhomomorphism $\\varphi(e_H) = I$, where $I$ is the identity automorphism, so that \n$(e_N, e_H)\\ast(n, h) = (e_N + I(n), h) = (n, h)$. Similarly $(n, h)\\ast\n(e_N, e_H) = (n + \\varphi(h)(e_N), h \\cdot e_H) = (n + e_N, h) = (n, h)$. Thus, \n$(e_N, e_H)$ is an identity.\n\nConsider \n\\[\n(n, h) \\ast (\\varphi(h^{-1})(n^{-1}), h^{-1}) = \n(n + \\varphi(h)\\varphi(h^{-1})(n^{-1}), h\\cdot h^{-1})\n\\]\nSince $\\varphi$ is a homomorphism, $\\varphi(h)\\varphi(h^{-1}) = \\varphi(hh^{-1})\n = \\varphi(e_H) = I$ so that,\n\\[\n(n, h) \\ast (\\varphi(h^{-1})(n^{-1}), h^{-1}) = \n(n + I(n^{-1}), e_H) = (n + n^{-1}, e_H) = (e_N, e_H)\n\\]\nSimilarly, $(\\varphi(h^{-1})(n^{-1}), h^{-1}) \\ast (n, h) = (e_N, e_H)$. Thus \n$(\\varphi(h^{-1})(n^{-1}), h^{-1})$ is an inverse of $(n, h)$.\n\nLastly \n\\begin{eqnarray*}\n(n_1, h_1) \\ast ((n_2, h_2) \\ast (n_3, h_3)) &=& \n(n_1, h_1) \\ast (n_2 + \\varphi(h_2)(n_3), h_2 \\cdot h_3) \\\\\n &=& (n_1 + \\varphi(h_1)(n_2 + \\varphi(h_2)(n_3)), h_1 \\cdot (h_2 \\cdot h_3)) \\\\\n &=& (n_1 + \\varphi(h_1)(n_2) + \\varphi(h_1)\\varphi(h_2)(n_3), (h_1 \\cdot h_2) \n \\cdot h_3) \\\\\n &=& (n_1 + \\varphi(h_1)(n_2) + \\varphi(h_1h_2)(n_3), (h_1 \\cdot h_2) \\cdot h_3) \\\\\n &=& (n_1 + \\varphi(h_1)(n_2), h_1 \\cdot h_2) \\ast (n_3, h_3) \\\\\n &=& ((n_1, h_1) \\ast (n_2, h_2)) \\ast (n_3, h_3)\n\\end{eqnarray*}\nThus $\\ast$ is an associative operation.\n\\end{proof}\n
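\nA standard example is the group of rigid motions of the plane: take $N = (\\sor^2, +)$ and\n$H = SO(2)$ from section \\ref{s4}, with $\\varphi(R)$ the automorphism $v \\mapsto Rv$ of\n$\\sor^2$. The law $\\ast$ then reproduces the composition of motions: applying $v \\mapsto\nR_2v + n_2$ followed by $v \\mapsto R_1v + n_1$ gives $v \\mapsto R_1R_2v + (n_1 + R_1n_2)$,\nin agreement with $(n_1, R_1) \\ast (n_2, R_2) = (n_1 + R_1n_2, R_1R_2)$.\n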
\nIf $\\varphi$ in definition \\ref{s7d2} maps every $h \\in H$ to the identity\nautomorphism then \n\\begin{equation}\\label{s7e1}\n(n_1, h_1) \\ast (n_2, h_2) = (n_1 + n_2, h_1h_2).\n\\end{equation}\nThus, the semidirect product becomes the direct product. We also observe that\n\\begin{enumerate}\n\\item $\\{e_N\\} \\rtimes_\\varphi H$ is a subgroup of $N \\rtimes_{\\varphi} H$ and \nit is isomorphic to $H$.\n\\item $N \\rtimes_\\varphi \\{e_H\\}$ is a subgroup of $N \\rtimes_{\\varphi} H$ and \nit is isomorphic to $N$.\n\\end{enumerate}\n\n\\begin{prop}\\label{s7p1}\n$N \\rtimes_\\varphi \\{e_H\\}$ is an invariant subgroup of $N \\rtimes_\\varphi H$.\n\\end{prop}\n\\begin{proof}\nLet $(n, e_H)$ be an element of $N \\rtimes_\\varphi \\{e_H\\}$ and $(m, h)$ be\nan element of $N \\rtimes_\\varphi H$. \n\\[\n(m,h)\\ast(n,e_H)\\ast(m,h)^{-1} = (m + \\varphi(h)(n), h \\cdot e_H)\\ast(m, h)^{-1}\n= (m + \\varphi(h)(n), h)\\ast(m, h)^{-1}.\n\\]\nSince $(m, h)^{-1} = (\\varphi(h^{-1})(m^{-1}), h^{-1})$,\n\\begin{eqnarray*}\n(m,h)\\ast(n,e_H)\\ast(m,h)^{-1} &=& (m + \\varphi(h)(n), h)\\ast\n(\\varphi(h^{-1})(m^{-1}), h^{-1}) \\\\\n &=& (m + \\varphi(h)(n) + \\varphi(h)(\\varphi(h^{-1})(m^{-1})), hh^{-1}) \n\\end{eqnarray*}\nSince $\\varphi$ is a homomorphism, $\\varphi(h)\\varphi(h^{-1}) = \\varphi(e_H) = I$,\nso that\n\\[\n(m,h)\\ast(n,e_H)\\ast(m,h)^{-1} = (m + \\varphi(h)(n) + m^{-1}, e_H),\n\\]\nwhich is again an element of $N \\rtimes_\\varphi \\{e_H\\}$. Since $(n, e_H)$ and\n$(m, h)$ were arbitrary, $N \\rtimes_\\varphi \\{e_H\\}$ is an invariant subgroup.\n\\end{proof}\n\n\\nocite{*}\n\\bibliographystyle{plain}\n\\bibliography{gt}\n\\end{document}\n", "meta": {"hexsha": "1854298fcc3705db62e62c7505ad5855f8cc6160", "size": 26372, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "group-theory/gt.tex", "max_stars_repo_name": "amey-joshi/physics", "max_stars_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "group-theory/gt.tex", "max_issues_repo_name": "amey-joshi/physics", "max_issues_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "group-theory/gt.tex", "max_forks_repo_name": "amey-joshi/physics", "max_forks_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.256684492, "max_line_length": 142, "alphanum_fraction": 0.6462536023, "num_tokens": 10034, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.6825737214979745, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.5828584056168253}}
{"text": "\\documentclass[notitlepage]{simple}\n\n\\author{Matt McCarthy}\n\\title{Finding Trigonometric Identities using DeMoivre's Theorem}\n\\date{April 2016}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{problem*}\n\tFor $x,r\\in\\RR$ find\n\t\\[\n\t\t\\sum_{k=0}^n r^k\\cos kx \\text{ and } \\sum_{k=0}^n r^k\\sin kx.\n\t\\]\n\tThen, assuming $|r|<1$ take their limits as $n\\rightarrow\\infty$.\n\\end{problem*}\n\n\\section{Background}\n\nBefore beginning, we need to mention DeMoivre's theorem.\n\\begin{thm}\n\t\\[\n\t\t\\cos kx + i\\sin kx = \\paren{\\cos x+i\\sin x}^k\n\t\\]\n\\end{thm}\n\\begin{proof}\n\tWe know by Euler's formula that\n\t\\[\n\t\t\\paren{\\cos x+i\\sin x}^k = e^{ikx} = \\cos kx +i\\sin kx.\n\t\\]\n\\end{proof}\n\nAnother thing to know is that for $z=a+bi\\in\\CC$, the \\textit{conjugate} of $z$, denoted $\\overline{z}$, is $\\overline{z}=a-bi$.\nFurthermore, $z\\cdot\\overline{z}=a^2+b^2$.\n\nLastly, we need that\n\\[\n\t\\sum_{k=0}^n z^k = \\frac{1-z^{k+1}}{1-z}.\n\\]\n\n\\section{Solution}\n\nConsider the following.\n\\begin{align*}\n\t\\sum_{k=0}^n r^k\\cos kx + i\\sum_{k=0}^n r^k\\sin kx =& \\sum_{k=0}^n r^k \\paren{\\cos kx + i\\sin kx}\\\\\n\t=& \\sum_{k=0}^n r^k \\paren{\\cos x + i\\sin x}^k\\\\\n\t=& \\frac{1-r^{n+1}\\paren{\\cos x + i\\sin x}^{n+1}}{1-r\\cos x -ir\\sin x}\\\\\n\t=& \\frac{1-r^{n+1}\\cos((n+1)x) -ir^{n+1}\\sin((n+1)x)}{(1-r\\cos x) -ir\\sin x}\\cdot\\frac{(1-r\\cos x)+ir\\sin x}{(1-r\\cos x) +ir\\sin x}\\\\\n\t=& \\frac{1-r\\cos x-r^{n+1}\\cos((n+1)x)+r^{n+2}\\cos x\\cos((n+1)x)+r^{n+2}\\sin x\\sin((n+1)x)}{1-2r\\cos x +r^2\\cos^2 x+r^2\\sin^2 x}\\\\\n\t&+i\\frac{r\\sin x -r^{n+1}\\sin((n+1)x)-r^{n+2}\\cos((n+1)x)\\sin x +r^{n+2}\\cos x\\sin((n+1)x)}{1-2r\\cos x +r^2\\cos^2 x+r^2\\sin^2 x}\n\\end{align*}\n\nBy matching real parts and imaginary parts we get\n\\begin{align*}\n\t\\sum_{k=0}^n r^k\\cos kx &= \\frac{1-r\\cos x-r^{n+1}\\cos((n+1)x)+r^{n+2}\\cos x\\cos((n+1)x)+r^{n+2}\\sin x\\sin((n+1)x)}{1-2r\\cos x +r^2}\\\\\n\t\\sum_{k=0}^n r^k\\sin kx &= \\frac{r\\sin x -r^{n+1}\\sin((n+1)x)-r^{n+2}\\cos((n+1)x)\\sin x +r^{n+2}\\cos x\\sin((n+1)x)}{1-2r\\cos x +r^2}.\n\\end{align*}\n\nIf we suppose $|r|<1$ and let $n\\rightarrow\\infty$ we get that\n\\begin{align*}\n\t\\sum_{k=0}^\\infty r^k\\cos kx &= \\frac{1-r\\cos x}{1-2r\\cos x +r^2}\\\\\n\t\\sum_{k=0}^\\infty r^k\\sin kx &= \\frac{r\\sin x}{1-2r\\cos x +r^2}.\n\\end{align*}\n\n\\end{document}\n", "meta": {"hexsha": "a46b734690bf910a9104d1c4e87060e01a2d0de4", "size": 2183, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016-spring/demoivre/demoivre.tex", "max_stars_repo_name": "matt-mccarthy/problem-solving", "max_stars_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2016-spring/demoivre/demoivre.tex", "max_issues_repo_name": "matt-mccarthy/problem-solving", "max_issues_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016-spring/demoivre/demoivre.tex", "max_forks_repo_name": "matt-mccarthy/problem-solving", "max_forks_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.5820895522, "max_line_length": 135, "alphanum_fraction": 0.5904718278, "num_tokens": 1077, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145998, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5828046871353003}}
{"text": "\\chapter{\\proj 1D Data}\n\\label{ch_1d}\n\\markright{1D Data}\n\n\n\\section{1D Tools}\n\\subsection{Conversion: im1d\\_convert}\n\\index{im1d\\_convert}\nProgram {\\em im1d\\_convert} converts a 1D signal from one data format \nto another one. \nCurrently supported signal formats are the ascii format, the FITS format, \n  and Excel format. Suffixes for these formats are \nrespectively ``.dat'', ``.fits'',\n  and ``.csv\"''. The ``-s'' option allows the user to \nsuppress a given number\n  of lines at the beginning of the file. This option has an effect only  \n  for ascii and Excel input formats. The ascii format consists of a series of\n  numbers separated by a space, a tab, or a new line.\n{\\bf\n\\begin{center}\n     USAGE: im1d\\_convert [-s NbrLines] file\\_name\\_in file\\_name\\_out\n\\end{center}}\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item im1d\\_convert sig.dat image.fits \\\\\nConverts an ascii file to FITS format.\n\\item im1d\\_convert -s 2 sig.dat image.fits  \\\\\nDitto, but the first lines are not taken into account.\n\\end{itemize}\n\n\n\\subsection{Statistical information: im1d\\_info}\n\\index{im1d\\_info}\n\n{\\em im1d\\_info} gives information about a signal:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item the number of pixels\n\\item the minimum, the maximum \n\\item the arithmetic mean: $  {\\bar x} = \\frac{1}{N}\\sum_k x_k$\n\\item the standard deviation: $\\sigma = \\frac{1}{N} \\sum_k (x_k - {\\bar x})^2 = \\sqrt{{\\bar {x^2}} - {\\bar x}^2}$\n\\item the flux: $F  = \\sum_k x_k$\n\\item the energy: $E = \\sum_k x_k^2$\n\\item the skewness: \n$\nS  =   \\frac{1}{N \\sigma^3}\\sum_k (x_k - {\\bar x})^3  \n        =   \\frac{1}{\\sigma^3} ({\\Bar {x^3}} - 3 {\\bar x}{\\Bar {x^2}} + 2{\\bar x}^3)\n$ \\\\\n\\item the kurtosis:\n$\nC  =   \\frac{1}{N \\sigma^4}\\sum_k (x_k - {\\bar x})^4 - 3 \n   =  \\frac{1}{\\sigma^4} ({\\Bar {x^4}} - 4 {\\bar x}{\\Bar {x^3}} + 6 {\\Bar {x^2}}{\\bar x}^2 - 3 {\\bar x}^4) - 3\n$ \\\\\n\\item Measure of dependence: calculate the autoregressive model  \nwhich fits the data, for all orders between $1$ and $M$, and select the \nAR model which minimizes the following equation: \n\\begin{eqnarray*}\nJ(p) = \\log \\sigma_{A(p)}^2 + {\\cal P}(A(p))\n\\end{eqnarray*}\nwhere $\\sigma_{A}$ is the prediction error, and\n${\\cal P}$ is a penalty function, which increases with the AR order.\nExamples of penalty functions are:\n\\begin{itemize}\n\\item AIC: $AIC = \\log \\sigma_{A_j}^2 + \\frac{2 A_j}{N}$ \n\\item AICC: $AICC = \\log \\sigma_{A_j}^2 + \\frac{N +  A_j}{N - A_j - 2}$\n\\item SIC: $SIC = \\log \\sigma_{A_j}^2 + \\frac{ A_j\\log N }{N}$\n\\end{itemize}\n\\end{itemize}\nWhen the ``-a'' option is set, then the  autocorrelation function $\\rho(h)$\nis also calculated:\n\\begin{eqnarray}\n\\rho(h) = \\frac{\\gamma(h)}{\\gamma(0)} \t\n\\end{eqnarray}\n where $\\gamma(h)$ is the autocovariance function defined by:\n \\begin{eqnarray}\n \\gamma(h)= \\frac{1}{N} \\sum_k (x_{k+h} - {\\bar x})(x_k - {\\bar x})\n \\end{eqnarray}\n\nThe command line is:\n{\\bf\n\\begin{center}\n     USAGE: im1d\\_info file\\_name\\_in \n\\end{center}\n}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf[-a Nbr\\_of\\_Lags]} \\\\\n Calculate the autocorrelation function (AF) with a given number of lags.\nThe output AF is saved in a file of name ``autocor''.\nDefault number of lags is 10.\n\\item {\\bf[-O Estimation\\_AR\\_Order\\_Method]} 
\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item AIC \n\\item AICC\n\\item BIC\n\\end{enumerate}\nDefault is BIC method.\n\\item {\\bf[-M MaxAROrder]} \\\\\nMaximum AR model order. Default is 10.\n\\end{itemize}\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\itemsep=0.1truecm\n\\item im1d\\_info data.dat  \\\\\nGives information about the data.\n\\item im1d\\_info -a 10 data.dat \\\\\nDitto, but calculates also autocorrelation function with 10 lags.\n\\end{itemize}\n\n\\subsection{Tendency estimation: im1d\\_tend}\n\\index{im1d\\_tend}\nIf the signal exhibits a tendency, it may be convenient to remove it\nbefore starting the analysis. A well-known fitting\nmethod is the polynomial one.\nA polynomial of order $p$ is fitted around each pixel \nin a window of size $T$ ($p$ equals 2 in this program). \nIn order to have smooth tendency estimation,\nit is recommended to weight the pixels with weights from 1 to zero for pixels\nin the middle to the border of the window.\n{\\bf\n\\begin{center}\n USAGE: im1d\\_tend option in\\_data out\\_tend out\\_signal\\_no\\_tend\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-T WindowSize\\_for\\_tendency\\_estimation]} \\\\\nDefault is 100.\n\\item {\\bf [-f FirstPixels]} \nDefault is the input signal size. If this option is set, the tendency is\nestimated only on the first {\\em FirstPixels} pixels.\n\\end{itemize}\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item im1d\\_tend sig.dat tend.dat sig\\_out \\\\\nRemove the tendency in the input data with all default options.\n\\item im1d\\_tend -T 200 sig.dat tend.dat sig\\_out   \\\\\nDitto, but increase the window size.\n\\end{itemize}\n\\begin{figure}[htb]\n\\centerline{\n\\vbox{ \n\\psfig{figure=fig_ar_tend.ps,bbllx=2cm,bblly=12.5cm,bburx=20cm,bbury=25.5cm,width=13.5cm,height=10cm,clip=}\n}}\n\\caption{Top, input signal containing an AR(4) signal and a tendency. Middle,\nestimated tendency. Bottom, difference between the two previous signals.}\n\\label{fig_artend}\n\\end{figure}  \nFigure~\\ref{fig_artend} shows the results when applying the {\\em tend\\_est}\nprogram (with a window size equal to 400) \nto a time series containing a tendency and an AR(4). \n\n\n\\section{Wavelet Transform}\n\\subsection{Multiresolution transform: mr1d\\_trans}\n\\label{sect_mr1dtrans}\n\\index{mr1d\\_trans}\n\nProgram {\\em mr1d\\_trans} applies a one-dimensional multiresolution\ntransform to a signal. The ouput file contains an image, in which\neach row is a band of the multiresolution transform. Morlet, Mexican hat,\nand French hat wavelet transforms are non-dyadic (the resolution is\nnot decreased by a factor two between two scales), \nand 12 voices (fixed value) are calculated (instead of one for the\ndyadic case) when the resolution is divided by two. Then the number of \nrows in the output image will be equal to 12*{\\em number\\_of\\_scales}.\nThe Morlet wavelet is complex, so the wavelet transform is complex too.\nUsing this transform, the first 12*{\\em number\\_of\\_scales}  lines represent\nthe real part of the transform, and the 12*{\\em number\\_of\\_scales} last lines \nthe imaginary part. Four transforms are non-redundant: the bi-orthogonal,\nthe lifting scheme, and the wavelet packet methods (transforms 16 and 17). 
\nIn this case, the output is not an \nimage but a signal (i.e.\\ 1D rather than 2D), \nwhich has the same size as the original one.\nPosition and length of a given band in the output signal can be found by\nreading the file created using the ``-w'' option. For the lifting scheme\nbased method, the type of lifting can be changed using the ``-l'' option,\nand for the (bi-) orthogonal and packet one, the filter can be changed\nby the ``-f'' option.\n\n19 transforms are available, which are grouped into 5 classes \n\\begin{itemize}\n\\item{Class 1:} no decimation (transforms 1 to 7 and 11 to 14).\n\\item{Class 2:} pyramidal transform (transforms 8 to 10).\n\\item{Class 3:} orthogonal transform (15 and 16).\n\\item{Class 4:} Wavelet packets (17 and 18).\n\\item{Class 5:} Wavelet packets via the \\`a trous algorithm  (19).\n\\end{itemize}\nDepending on the class, the transform does not contain the\nsame number of pixels, and the data representation differs.\nBy default, the number of scales is calculated from\nthe length of the signal.\n{\\bf\n\\begin{center}\n USAGE: mr1d\\_trans option signal\\_in image\\_out\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-t type\\_of\\_multiresolution\\_transform]}\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Linear wavelet transform: \\`a trous algorithm \n\\item B$_1$-spline wavelet transform: \\`a trous algorithm \n\\item B$_3$-spline wavelet transform: \\`a trous algorithm \n\\item Derivative of a B$_3$-spline: \\`a trous algorithm \n\\item Undecimated Haar wavelet transform: \\`a trous algorithm\n\\item Morphological median transform \n\\item Undecimated (bi-) orthogonal wavelet transform \n\\item Pyramidal linear wavelet transform \n\\item Pyramidal B$_3$-spline wavelet transform \n\\item Pyramidal median transform \n\\item Morlet's wavelet transform \\\\\nContinuous wavelet transform with a complex wavelet \nwhich can be decomposed into two parts, one for\nthe real part, and the other for the imaginary part:\n\\begin{eqnarray*}\n\\psi_r(x) & = & \\frac{1}{\\sqrt{2 \\pi}\\sigma}  e^{-\\frac{x^2}{2\\sigma^2}} \\cos(2\\pi\\nu_0 \\frac{x}{\\sigma}) \\nonumber \\\\\n\\psi_i(x) & = & \\frac{1}{\\sqrt{2 \\pi}\\sigma}  e^{-\\frac{x^2}{2\\sigma^2}} \\sin(2\\pi\\nu_0 \\frac{x}{\\sigma}) \n\\end{eqnarray*}\nFirst scale $\\sigma_0$ is chosen equal to $2 \\nu_0$ and $\\nu_0 = 0.8$, \nusing 12 voices per octave.\n\\item Mexican hat wavelet transform \\\\\nContinuous wavelet transform with the Mexican hat wavelet.\nThis is the second derivative of a Gaussian\n\\begin{eqnarray}\n\\psi(x) = (1 - \\frac{x^2}{\\sigma^2}) e^{-\\frac{ x^2}{2\\sigma^2}} \n\\end{eqnarray}\nFirst scale $\\sigma_0$ is chosen equal to $\\frac{1}{\\sqrt{3}}$, using 12 voices per octave.\n\\item French hat wavelet transform \\\\\nContinuous wavelet transform with the French  hat wavelet\n\\begin{eqnarray}\n\\psi(x) =  \\left \\{\n\\begin{array}{ll}\n     0  & \\mbox{ if }  \\mid \\frac{x}{\\sigma} \\mid > 1 \\\\\n     -1 & \\mbox{ if } \\mid  \\frac{x}{\\sigma} \\mid \\in [\\frac{1}{3},1]  \\\\\n     2  & \\mbox{ if } \\mid  \\frac{x}{\\sigma} \\mid < \\frac{1}{3}\n\\end{array}\n\\right.\n\\end{eqnarray}  \nFirst scale $\\sigma_0$ equal to $0.66$, and 12 voices per octave. \n\\item Gaussian derivative wavelet transform \\\\\n Continuous wavelet transform. 
The wavelet is the first \nderivative of a Gaussian\n \\begin{eqnarray}\n \\psi(x) =  - x e^{-\\frac{1}{2}x^2} \n \\end{eqnarray}\n First scale $\\sigma_0$ equal to $\\frac{1}{\\sqrt{3}}$, and 12 voices per octave. \n\\item (bi-) orthogonal transform. \n\\item (bi-) orthogonal transform via lifting scheme. \n\\item Wavelet packets.\n\\item Wavelet packets via lifting scheme. \n\\item Wavelet packets using the \\`a trous algorithm.\n\\end{enumerate}\n\\item {\\bf [-n number\\_of\\_scales]} \\\\\nNumber of scales used in the multiresolution transform.\n\\item {\\bf [-r]} \\\\\nRebin all scales to the input signal size \n(for pyramidal transforms only).\n\\item {\\bf [-k]} \\\\\nSet to 0 pixels contaminated by the border problem.\n\\item {\\bf [-T type\\_of\\_filters]}  \n{\\small\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Antonini 7/9 filters. \n\\item Daubechies filter 4. \n\\item Biorthogonal 2/6 Haar filters.\n\\item Biorthogonal 2/10 Haar filters.\n\\item Odegard 7/9 filters.\n\\item User's filters.\n\\end{enumerate}}\nDefault is Antonini 7/9 filters. \\\\\n This option is only available if the chosen transform method is\n the (bi-) orthogonal transform (-t option in [7,15,17]).\n\\item {\\bf [-L]} \\\\\nUse an $L_2$ normalization. Default is $L_1$.\n\\item {\\bf [-l type\\_of\\_lifting\\_transform]}  \n{\\small \n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Lifting scheme: CDF WT. \n\\item Lifting scheme: median prediction.\n\\item Lifting scheme: integer Haar WT. \n\\item Lifting scheme: integer CDF WT. \n\\item Lifting scheme: integer (4,2) interpolating transform. \n\\item Lifting scheme:  Antonini 7/9 filters.\n\\item Lifting scheme: integer Antonini 7/9 filters. \n\\end{enumerate}}\n Default is Lifting scheme: integer Haar WT. \\\\\n This option is only available if the chosen transform method is\n the lifing scheme (-t 24).\n\\item {\\bf [-w InfoFileName]} \\\\\nWrite in a file the size and the starting index of each band.\nThis file contains a 2D float array (Array[2, NbrBand+3]).\n\\begin{verbatim}\n        info[0,0] = transform number\n        info[1,0] = number of scales\n        info[0,1] = transform class number (5 classes)\n        info[1,1] = number of bands\n                  it is not equal to the number of scales\n                  for wavelet packets transform.\n        info[0,2] = number of pixels\n        info[1,2] = lifting scheme type\n        info[0,3] = type of filter\n        info[1,3] = type of normalization\n        for i=4 to Number_of_bands + 3\n        info[0,i] = number of pixels in the band i\n        info[1,i] = position number of the pixel of the band\n\\end{verbatim}\nIf a user filter file is given (i.e. -T 6,filename), with a filename \nof $L$ characters, $L$ lines are added to the array:\n\\begin{verbatim}\n        info[1,Number_of_bands + 4] = number of characters of the filter file name\n        for i=Number_of_bands+4 to  Number_of_bands+4+L-1\n\tinfo[0,i] = ascii number of the ith character.\n\\end{verbatim}\n\n\\end{itemize}\n\\subsubsection*{Examples:}\n\\baselineskip=0.4truecm\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item mr1d\\_trans -n 7 -t 3 sig.fits image.fits \\\\\n\\`A trous algorithm with 7 scales.\n\\item mr1d\\_trans -n 7 -T 5 -t 15 sig.fits image.fits  \\\\\nBi-orthogonal wavelet transform (Odegard 7/9 filters).\n\\item mr1d\\_trans -n 7 -T 5 -t 15 -w info.fits sig.fits image.fits  \\\\\nDitto, but an information file is written. 
This information \nis necessary for reconstruction.\n\\item mr1d\\_trans -n 7 -T 6,dau4 -t 15 -w info.fits sig.fits image.fits  \\\\\nDitto, but the filters defined in the file ``dau4.wvf'' are used instead\nof the Odegard filters.\n\\end{itemize}\n\n\n\\subsection{Reconstruction: mr1d\\_recons}\n\\index{mr1d\\_recons}\nProgram {\\em mr1d\\_recons} reconstructs a signal from its transform.\nTo be able to reconstruct it, the program needs the information file\nwhich has been created by the ``-w'' option during the transformation.\n{\\bf\n\\begin{center}\n USAGE: mr1d\\_recons TransformFileName\\_in InfoFileName\\_in  signal\\_out\n\\end{center}}\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item mr1d\\_trans -n 7 -t 3 -w info sig.fits image.fits \\\\\n      mr1d\\_recons image.fits info sig\\_rec.fits \\\\\n\\`A trous algorithm with 7 scales, and reconstruction.\n\\item mr1d\\_trans -n 7 -T 5 -t 15 -w info sig.fits image.fits  \\\\\nmr1d\\_recons image.fits info sig\\_rec.fits \\\\\nBi-orthogonal wavelet transform (Odegard 7/9 filters), and reconstruction.\n\\end{itemize}\n\nReconstruction is not implemented for transforms 10, 11, and 13. \nFor pyramidal transforms,\nthe reconstruction cannot be performed by {\\em mr1d\\_recons} if \nthe rebin option ``-r'' was set.\n\n\n\\section{Filtering: mr1d\\_filter}\n\\index{mr1d\\_filter}\nProgram {\\em mr1d\\_filter} filters a signal using different methods. \n{\\bf\n\\begin{center}\n USAGE: mr1d\\_filter option signal\\_in signal\\_out\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-f type\\_of\\_filtering]} \n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item  Multiresolution Hard K-Sigma Thresholding \n\\item Multiresolution Soft K-Sigma Thresholding \n\\item Iterative Multiresolution Thresholding \n\\item Universal Hard Thresholding \n\\item Universal Soft Thresholding \n\\item SURE Hard Thresholding \n\\item SURE Soft Thresholding \n\\item MULTI-SURE Hard Thresholding \n\\item MULTI-SURE Soft Thresholding \n\\item Median Absolute Deviation (MAD) Hard Thresholding \n\\item Median Absolute Deviation (MAD) Soft Thresholding \n\\item Total Variation + Wavelet Constraint \n\\end{enumerate}\nDefault is Multiresolution Hard K-Sigma Thresholding.\n\\item {\\bf [-t type\\_of\\_multiresolution\\_transform]} \n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Linear wavelet transform: \\`a trous algorithm \n\\item B$_1$-spline wavelet transform: \\`a trous algorithm \n\\item B$_3$-spline wavelet transform: \\`a trous algorithm \n\\item Derivative of a B$_3$-spline: \\`a trous algorithm \n\\item Undecimated Haar wavelet transform: \\`a trous algorithm \n\\item Morphological median transform \n\\item Undecimated (bi-) orthogonal wavelet transform \n\\item Pyramidal linear wavelet transform \n\\item Pyramidal B$_3$-spline wavelet transform \n\\item Pyramidal median transform \n\\end{enumerate}\nDefault is 3. \n\\item {\\bf [-T type\\_of\\_filters]}  \n{\\small\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Antonini 7/9 filters. \n\\item Daubechies filter 4. \n\\item Biorthogonal 2/6 Haar filters.\n\\item Biorthogonal 2/10 Haar filters.\n\\item Odegard 7/9 filters.\n\\item User's filters.\n\\end{enumerate}}\nDefault is Antonini 7/9 filters. 
\\\\\nThis option is only available if the chosen transform method is\nthe (bi-) orthogonal transform (-t 7).\n\\item {\\bf [-m type\\_of\\_noise]}\n{\\small\n\\begin{enumerate}\n\\itemsep=0.1truecm\n\\baselineskip=0.4truecm\n\\item Gaussian Noise \n\\item Poisson Noise \n\\item Poisson Noise + Gaussian Noise \n\\item Multiplicative Noise \n\\item Non-stationary additive noise \n\\item Non-stationary multiplicative noise \n\\item Undefined stationary noise \n\\item Undefined noise \n\\item Stationary correlated noise \n\\item Poisson noise with few events \n\\end{enumerate}\n}\nDefault is Gaussian noise.\n\\item {\\bf [-g sigma]} \n\\item {\\bf [-c gain,sigma,mean]} \n\\item {\\bf [-E Epsilon]} \\\\\nEpsilon = precision for computing thresholds\n(only used in case of Poisson noise with few events).\nDefault is $10^{-3}$.\n\\item {\\bf [-n number\\_of\\_scales]} \n\\item {\\bf [-s NSigma]} \n\\item {\\bf [-i number\\_of\\_iterations]} \n\\item {\\bf [-e epsilon]}  \\\\\nConvergence parameter. Default is $10^{-4}$.\n\\item {\\bf [-K]}  \\\\\nSuppress the last scale. Default is no.\n\\item {\\bf [-p]}  \\\\\nDetect only positive structure. Default is no.\n\\item {\\bf [-k]}  \\\\\nSuppress isolated pixels in the support. Default is no.\n\\item {\\bf [-S SizeBlock]} \\\\\nSize of the blocks used for local variance estimation. Default is 7.\n\\item {\\bf [-N NiterSigmaClip]} \\\\\nIteration number used for local variance estimation. Default is 1.\n\\item {\\bf [-F first\\_detection\\_scale]} \\\\\nIf this option is set, all wavelet coefficients detected at scales lower \nthan {\\em first\\_detection\\_scale} are considered significant.\n\\item {\\bf [-G RegulParam]} \\\\\nRegularization parameter for filtering methods 11 and 12. Default is 0.01.\n\\item {\\bf [-v]}  \\\\\nVerbose. Default is no.\n\\end{itemize}\n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item mr1d\\_filter sig.fits filter\\_sig.fits \\\\\nFiltering using the \\`a trous algorithm, and a Gaussian noise model.\n\\item mr1d\\_filter -m 2 sig.fits filter\\_sig.fits   \\\\\nDitto, but assuming Poisson noise.\n\\end{itemize}\n\n\n\\section{Band Detection: mr1d\\_detect}\n\\index{mr1d\\_detect}\n\nProgram {\\em mr1d\\_detect} detects emission or absorption bands in \na spectrum \\cite{starck:sta97}. The output is an image, in which each row\ncontains a detected band. By default, {\\em mr1d\\_detect} detects\nboth emission and absorption bands. The program gives better results\nif we detect only emission bands (``-e'' option) or only absorption bands\n(``-a'' option). This is due to the fact that positivity and negativity \nconstraints can be introduced in the iterative reconstruction for, \nrespectively, \nemission and absorption band detection. 
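In schematic form (our notation; not necessarily the exact scheme implemented by the program), one iteration of an emission-band reconstruction with a positivity constraint can be written as\n\\begin{eqnarray}\ns^{(j+1)} = P_{+}\\left[ s^{(j)} + R \\left( w - W s^{(j)} \\right) \\right],\n\\end{eqnarray}\nwhere $W$ denotes the wavelet transform restricted to the significant coefficients $w$, $R$ an approximate inverse transform, and $P_{+}$ the projection that sets negative values to zero; for absorption bands the analogous projection $P_{-}$ is used.\n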
If the user wants to \ndetect both kinds of bands at the same time, he/she should \nuse the multiresolution\nmedian transform (``-M'' option), which in this case gives better results\nthan the wavelet transform.\n{\\bf\n\\begin{center}\n USAGE: mr1d\\_detect option signal\\_in tab\\_band\\_out\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-n number\\_of\\_scales]} \n\\item {\\bf [-s NSigma]} \n\\item {\\bf [-m type\\_of\\_noise]}\n{\\small\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item Gaussian Noise \n\\item Poisson Noise \n\\item Poisson Noise + Gaussian Noise \n\\item Multiplicative Noise \n\\item Non-stationary additive noise \n\\item Non-stationary multiplicative noise \n\\item Undefined stationary noise \n\\item Undefined noise \n\\end{enumerate}\n}\nDescription in section~\\ref{sect_filter}. Default is Gaussian noise.\n\\item {\\bf [-g sigma]} \n\\item {\\bf [-c gain,sigma,mean]} \n\\item {\\bf [-a]} \\\\\nDetection of absorption lines only. Default is not to do this. \n\\item {\\bf [-e]} \\\\\nDetection of emission lines only. Default is not to do this. \n\\item {\\bf [-f FirstScale]} \\\\\nFirst scale. Default is 1. All bands detected at scales smaller than \n{\\em FirstScale} are not reconstructed.\n\\item {\\bf [-l LastScale]} \\\\\nLast scale. Default is number\\_of\\_scales -- 2.\nAll bands detected at scales higher than {\\em LastScale} are not reconstructed.\n\\item {\\bf [-i IterNumber]} \\\\\nNumber of iterations for the reconstruction. Default is 20.\n \\item {\\bf [-M]} \\\\\nUse the multiresolution median transform \ninstead of the \\`a trous algorithm. If this option is set, the reconstruction\nis not iterative, and the previous option (``-i'') is not active.\n\\item {\\bf [-A ]} \\\\\nDetect only negative multiresolution coefficients. Default is no. \n\\item {\\bf [-E ]} \\\\\nDetect only positive multiresolution coefficients. Default is no. \n\\item {\\bf [-w ]} \\\\\nWrite two other files: \n\\begin{itemize}\n\\item tabadd.fits: sum of the reconstructed objects. All detected bands are co-added. \n\\item tabseg.fits: segmented wavelet transform.\n\\end{itemize}\n\\item {\\bf [-v ]} \\\\\nVerbose. Default is no.\n\\end{itemize}\n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item mr1d\\_detect -a -A sig.fits band\\_sig.fits \\\\\nDetection using the \\`a trous algorithm, and a Gaussian noise model.\nThe detection is performed from the negative wavelet coefficients, and\nonly absorption bands are reconstructed.\n\\item mr1d\\_detect -m 2 -a -A -s 5 sig.fits band\\_sig.fits   \\\\\nDitto, but assuming Poisson noise and a five sigma detection.\n\\end{itemize}\n\n\\begin{figure}[htb] \n\\centerline{\n\\vbox{\n\\hbox{\n\\psfig{figure=ch4_sig10_wid02.ps,bbllx=1.5cm,bblly=19cm,bburx=20cm,bbury=25.5cm,height=5cm,width=16cm,clip=}}\n\\hbox{\n\\psfig{figure=ch4_extr_simu3.ps,bbllx=1.5cm,bblly=13cm,bburx=20cm,bbury=25.5cm,height=10cm,width=16cm,clip=}}\n}}\n\\caption{Simulation: top, simulated spectrum; \nmiddle, reconstructed simulated band (full line) and original band (dashed line); bottom,\nsimulated spectrum minus the reconstructed band.}\n\\label{fig_detect_spect}\n\\end{figure}\n\nA simulated spectrum was created containing a smooth continuum \nwith superimposed variable Gaussian noise and an emission band at $3.50\\mu$m \nwith a maximum of five times the local noise standard \ndeviation and a width of \nFWHM = $0.01 \\mu$m (see Figure~\\ref{fig_detect_spect} top). 
\nFigure~\\ref{fig_detect_spect}\n(middle) shows the comparison of the \nreconstructed emission band with the original Gaussian.\nFigure~\\ref{fig_detect_spect} (bottom) shows the  difference between \nthe simulated spectrum and the reconstructed band.\n\n\n\\section{Modulus Maxima Representation}\n\\subsection{Modulus maxima detection: mr1d\\_max}\n\\index{mr1d\\_max}\nProgram {\\em mr1d\\_max} detects the modulus maxima using the dyadic\n  \\`a trous algorithm  with the wavelet function equal to the derivative\nof a B-spline (transform number 4 in {\\em mr1d\\_trans}).\n{\\bf\n\\begin{center}\n USAGE: mr1d\\_max option signal\\_in  output\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\item {\\bf [-n number\\_of\\_scales]} \n\\item {\\bf [-v ]} \\\\\nVerbose. Default is no.\n\\end{itemize}\n\\subsubsection*{Example:}\n\\begin{itemize}\n\\item mr1d\\_max sig.fits band\\_sig.fits   \\\\\nModulus maxima detection.\n\\end{itemize}\n\n\n\\subsection{Reconstruction: mr1d\\_maxrecons}\n\\index{mr1d\\_maxrecons}\nProgram {\\em mr1d\\_maxrecons} reconstructs a signal from its modulus maxima.\n{\\bf\n\\begin{center}\n USAGE: mr1d\\_maxrecons option tab\\_band\\_in signal\\_out \n\\end{center}}\nwhere options are:\n\\begin{itemize} \n \\item {\\bf [-i number\\_of\\_iterations]}  \\\\\nMaximum number of iterations for the reconstruction. Default is 10.\n\\item {\\bf [-v ]} \\\\\nVerbose. Default is no.\n\\end{itemize}\n\\subsubsection*{Example:}\n\\begin{itemize}\n\\item mr1d\\_max sig.fits band\\_sig.fits   \\\\\n      mr1d\\_maxrecons band\\_sig.fits sig\\_rec.fits \\\\\nModulus maxima detection, and reconstruction from the maxima.\n\\end{itemize}\n\n", "meta": {"hexsha": "d4af118dc0302bb40c6dd7eb0eab48c23c2c39f8", "size": 23832, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_mra/doc_mr1/ch_1d.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_mra/doc_mr1/ch_1d.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_mra/doc_mr1/ch_1d.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8346213292, "max_line_length": 118, "alphanum_fraction": 0.7297750923, "num_tokens": 7271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7690802476562641, "lm_q1q2_score": 0.5828046827324236}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{tftb\\_window}\n\\section*{\\hspace*{-1.6cm} tftb\\_window}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nWindow generation.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nh = tftb_window(N)\nh = tftb_window(N,name)\nh = tftb_window(N,name,param)\nh = tftb_window(N,name,param,param2)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty tftb\\_window} yields a window of length {\\ty N} with a given shape.\\\\\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty N}      & length of the window\\\\\n        {\\ty name}   & name of the window shape & {\\ty 'Hamming'}\\\\\n        {\\ty param}  & optional parameter\\\\\n        {\\ty param2} & second optional parameter\\\\\n \\hline {\\ty h}      & output window\\\\\n\\hline\n\\end{tabular*}\n\\vspace*{.5cm}\n\n       Possible names are:\\\\\n        {\\ty 'Hamming', 'Hanning', 'Nuttall', 'Papoulis', 'Harris',\n        'Rect', 'Triang', 'Bartlett', 'BartHann', 'Blackman',\n        'Gauss', 'Parzen', 'Kaiser', 'Dolph', 'Hanna', 'Nutbess', 'spline'}\\\\\n\n        For the Gaussian window, an optional parameter {\\ty k}\n        sets the value at both extremities. The default value is {\\ty 0.005}.\\\\\n \n        For the Kaiser-Bessel window, an optional parameter\n        sets the scale. The default value is {\\ty 3*pi}.\\\\\n \n        For the spline windows, {\\ty h=tftb\\_window(N,'spline',nfreq,p)}\n        yields a spline weighting function of order {\\ty p} and frequency\n        bandwidth proportional to {\\ty nfreq}.\n\n\\end{minipage}\n\\vspace*{1cm}\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         h=tftb_window(256,'Gauss',0.005); \n         plot(h);\n\\end{verbatim}\n\n%\\newpage\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\ndwindow.\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Reference}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] F. Harris, ``On the Use of Windows for Harmonic Analysis with the\nDiscrete Fourier Transform'', Proceedings of the IEEE, Vol. 66, pp. 51-83,\n1978.\\\\\n\n[2] A.H. Nuttall, ``A Two-Parameter Class of Bessel Weighting \n\tFunctions for Spectral Analysis or Array Processing'', \n\tIEEE Trans. on ASSP, Vol. 31, pp. 1309-1311, Oct. 1983.\\\\\n\n[3] Y. Ho Ha, J.A. Pearce, ``A New Window and Comparison to\n\tStandard Windows'', IEEE Trans. on ASSP, Vol. 37, No. 2, \n\tpp. 298-300, February 1989.\\\\\n\n[4] C.S. 
Burrus, ``Multiband Least Squares FIR Filter Design'',\n\tTrans IEEE SP, Vol 43, No 2, pp 412-421, February 1995.\n\\end{minipage}\n\n", "meta": {"hexsha": "cf907a11701f586f343e1f51af4bd9dd645fbbef", "size": 2904, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/tftb_window.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/tftb_window.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/tftb_window.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 27.9230769231, "max_line_length": 82, "alphanum_fraction": 0.645661157, "num_tokens": 1018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746407, "lm_q2_score": 0.7690802476562641, "lm_q1q2_score": 0.5828046827324236}}
{"text": "% This is a model template for the solutions in computational science. You can find very useful documentation for LaTeX in Finnish at ftp://ftp.funet.fi/pub/TeX/CTAN/info/lshort/finnish/ or in English at ftp://ftp.funet.fi/pub/TeX/CTAN/info/lshort/english/. The section List of mathematical symbols in Chapter 3 is especially useful for the typesetting of mathematical formulas.\n\n% Compile the document to PDF by command 'pdflatex model.tex' in the terminal. The command must be run twice for the references in the text to be correct.\n\n\\documentclass[a4paper,11pt]{article}\n\\usepackage[utf8]{inputenc}\n% This includes letters such as ä and ö\n\\usepackage[T1]{fontenc}\n% Use here 'Finnish' for Finnish hyphenation. You may have to compile the code twice after the change. \n\\usepackage[english]{babel}\n\\usepackage{graphicx}\n% Some math stuff\n\\usepackage{amsmath,amsfonts,amssymb,amsbsy,commath,booktabs}  \n% This is just to include the urls\n\\usepackage{hyperref}\n\\usepackage[margin=2cm]{geometry}\n\n\\setlength{\\parindent}{0mm}\n\\setlength{\\parskip}{1.0\\baselineskip}\n\n\\usepackage{listings}\n\\usepackage{color}\n\\usepackage{pdfpages}\n\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\n\\lstset{frame=tb,\n\tlanguage=Python,\n\taboveskip=3mm,\n\tbelowskip=3mm,\n\tshowstringspaces=false,\n\tcolumns=flexible,\n\tbasicstyle={\\tiny\\ttfamily},\n\tnumbers=none,\n\tnumberstyle=\\tiny\\color{gray},\n\tkeywordstyle=\\color{blue},\n\tcommentstyle=\\color{dkgreen},\n\tstringstyle=\\color{mauve},\n\tbreaklines=true,\n\tbreakatwhitespace=true,\n\ttabsize=4\n}\n\n\\begin{document}\n\n\\title{Becs-114.1100 Computational Science -- exercise round 7} % Replace the exercise round number\n\\author{Kunal Ghosh, 546247} % Replace with your name and student number\n\\maketitle\n\\section{Solution to Question 3}\n\\subsection{Determining a polynomial regressor to fit a given dataset}\\label{prob3a}\nIn this exercise we are required to fit a polynomial regressor to a given dataset. The dataset can be found in Table 1. \n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n \\textbf{x}&\\textbf{y} \\\\ \\hline\n-9.7 & 3.76 \\\\\n-7.3 & 1.78 \\\\\n-5.4 & 1.52 \\\\\n-5.0 & 1.31 \\\\\n-3.01 & 0.31 \\\\\n-2.13 & 0.23 \\\\\n-1.2 & 0.45 \\\\\n-0.56 & 0.29 \\\\\n0.0 & 0.0 \\\\\n1.2 & 0.45 \\\\\n4.5 & 0.28 \\\\\n6.7 & 2.12 \\\\\n9.9 & 3.91 \\\\\n10.0 & 3.47 \\\\\n12.3 & 5.59 \\\\\n\\hline\n\\end{tabular}\n\\caption{Tabulated dataset that needs to be fit with a polynomial.}\n\\label{initial-label}\n\\end{table}\n\n The general idea is that we fit polynomials of increasing degree, and we stop when the error variances of two successive fits become almost the same (difference less than 0.1). 
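In symbols, and matching the code listed in Appendix A: if $\\rho_{n}$ denotes the residual sum of squares of the degree-$n$ fit over the $m+1$ data points and $\\sigma_{n}^{2} = \\rho_{n}/(m-n)$ the corresponding error variance, then the algorithm stops at degree $n$ as soon as\n\\begin{equation}\n\\sigma_{n+1}^{2} > \\sigma_{n}^{2} \\quad \\textrm{or} \\quad \\left| \\sigma_{n}^{2} - \\sigma_{n+1}^{2} \\right| < 0.1 .\n\\end{equation}\n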
For the given dataset, our algorithm determines that using a polynomial of degree $n=2$ results in the least variance with respect to the given data, as can be seen from Table 2.\n\nWe calculate the polynomials $q_n(x)$ of successively higher degrees $n$ using the orthogonal polynomial generation scheme shown below.\n\\begin{equation}\nq_{n+1}(x) = xq_{n}(x) - \\alpha_n q_{n}(x) - \\beta_n q_{n-1}(x)\n\\end{equation}\nHere the base cases are\n$q_{0}(x) =  1$ and $q_{1}(x) = x - \\alpha_{0}$,\nwhere $\\alpha_{n} = \\frac{<xq_{n},q_{n}>}{<q_{n},q_{n}>}$ and $\\beta_{n} = \\frac{<xq_{n},q_{n-1}>}{<q_{n-1},q_{n-1}>}$.\n\nOnce we have determined the degree $n$ of the polynomial we want to fit, the polynomial is given by:\n\\begin{equation}\n p_{n}(x) = \\sum_{i=0}^{n}c_{i}q_{i}\n\\end{equation}\nwhere $c_{i} = \\frac{<F,q_{i}>}{<q_{i},q_{i}>}$ and $q_{i}$ is the orthogonal polynomial of degree $i$.\n\nThe output of the algorithm on the given dataset can be found in Table 2 and the dataset along with the polynomial regressor can be found in Figure 1.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n i  &    alpha   &    beta  &  variance     &     c \\\\ \\hline\n 0  &  0.686667  &  0.000000  & 2.990889 &  1.698000 \\\\\n 1  & 2.777125  & 41.713196  & 2.541700  & 0.118797 \\\\\n 2  & -0.453838 & 38.348371 &  0.089604 &  0.036500 \\\\\n\\hline\n\\end{tabular}\n\\caption{Tabulated values of $i$ (the degree $n$ of the polynomial), alpha ($\\alpha$) and beta ($\\beta$) (the multipliers used to generate the $n$th degree polynomial in the equation $q_{n+1}(x) = xq_{n}(x) - \\alpha_n q_{n}(x) - \\beta_n q_{n-1}(x)$), and finally $c$, the coefficient used in evaluating the polynomial $p_{n}(x) = \\sum_{i=0}^{n}c_{i}q_{i}$.}\n\\label{polynomial_args}\n\\end{table}\n\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{Q1_fig.png}\n    \\caption{Plot showing the data points and the polynomial as determined by the algorithm. The polynomial has been evaluated at 100 equidistant points in $[-9.7, 12.3]$ and the degree of the polynomial fit is 2.}\n\t\\label{fig:Q1_fig}\n\\end{figure}\n\n% \\begin{figure}[ht]\n% \t\\center\n%     \\includegraphics[scale=0.75]{plotS_dash.png}\n%     \\caption{Plot showing the first derivative of the natural cubic interpoland S'(X). Values of $S'(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.} \n% \t\\label{fig:sdash}\n% \\end{figure}\n\n\n% \\begin{table}[ht]\n% \\centering\n% \\label{my-label}\n% \\begin{tabular}{|c|c|c|}\n% \\hline\n%  \\textbf{n}&\\textbf{RMS Error}&\\textbf{Condition Number}  \\\\ \\hline\n%  2&0.73833521&15.0167409881  \\\\ \\hline\n%  5&1.02602254&282901.77002 \\\\ \\hline\n%  8&1.09505307&7657562245.89 \\\\ \\hline\n%  12&1.16783884&5.89342127254e+15 \\\\ \\hline\n%  15&2.11656192&1.74359790918e+17 \\\\ \\hline\n% \\end{tabular}\n% \\caption{Table showing the values of n and the RMS error after solving the system of linear equations with \\textbf{hilbert(n)} as the coefficients.}\n% \\end{table}\nThe corresponding Python code can be found in Section~\\ref{code:problem3a}.\n\\subsection{Fitting the polynomial regressor on \\textit{World Population} data.}\\label{prob3b}\nIn this question we are asked to repeat the polynomial regression as above, but on a different dataset. 
The dataset and the output of the algorithm can be found in Table 3 and Table 4 respectively.\n\nIn the figure below we have plotted the degree 4 polynomial as suggested by the algorithm. It can be seen that the polynomial does not fit the data very well.\n\nThis is because of the large set of missing values between years 1000 and 1650. Because of these missing values, the dataset does not represent the underlying trend very well, resulting in a bad polynomial fit.\n \nHence, polynomial regression is not very well suited to fit data with a lot of missing values; it should be used only when the dataset is fairly evenly spread through the range of values over which we want to fit the polynomial.\n\n\\textbf{NOTE:} In the subsequent section we evaluate the fit of the same data augmented with some dummy values between 1000 and 1650, to evaluate whether the fit of the polynomial regressor improves.\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{Q1b_fig.png}\n    \\caption{Plot showing the World population dataset and the 4th degree polynomial as suggested by our algorithm. It can be visually verified that the data is not very well fit by the given 4th degree polynomial.}\n\t\\label{fig:world_pop}\n\\end{figure}\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n \\textbf{x}&\\textbf{y} \\\\ \\hline\n1000 & 0.34 \\\\\n1650 & 0.545 \\\\\n1800 & 0.907 \\\\\n1900 & 1.61 \\\\\n1950 & 2.51 \\\\\n1960 & 3.15 \\\\\n1970 & 3.65 \\\\\n1980 & 4.2 \\\\\n1990 & 5.3 \\\\\n\\hline\n\\end{tabular}\n\\caption{The \\textit{World Population} dataset that needs to be fit with the polynomial regressor used in the previous section.}\n\\label{population-label}\n\\end{table}\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n i &       alpha  &       beta    &  variance   &         c\\\\ \\hline\n 0 &  1800.000000 &    0.000000   &   3.035407  &   2.468000\\\\\n 1 &  1201.833741 &  90888.888889 &    1.821713 &    0.003755\\\\\n 2 &  1671.383205 &  58100.462442 &    0.714551 &    0.000013\\\\\n 3 &  1811.975200 &  10318.358986 &    0.270872 &    0.000000\\\\\n 4 &  1881.282629 &  6073.275792  &   0.063930  &   0.000000\\\\\n\\hline\n\\end{tabular}\n\\caption{Tabulated values of $i$ (the degree $n$ of the polynomial), alpha ($\\alpha$) and beta ($\\beta$) (the multipliers used to generate the $n$th degree polynomial in the equation $q_{n+1}(x) = xq_{n}(x) - \\alpha_n q_{n}(x) - \\beta_n q_{n-1}(x)$), and finally $c$, the coefficient used in evaluating the polynomial $p_{n}(x) = \\sum_{i=0}^{n}c_{i}q_{i}$; these values are for the \\textit{World Population} dataset. Here the algorithm computes that a polynomial of degree 4 would be the best fit for the given data.}\n\\label{polynomial-args}\n\\end{table}\n\\clearpage\n\\subsubsection{Augmenting the world population data with dummy values to check if the overall fit of the polynomial regressor improves.}\\label{augmented_section}\n% Augmented data\nIn this section, we run a small experiment to check if the overall fit of the polynomial regressor improves if the dataset has fewer gaps (missing values).\n\nFor the experiment, we have augmented the \\textit{World Population dataset} with a few dummy values between years 1000 and 1650. The new dataset is given in Table 5. 
\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n \\textbf{x}&\\textbf{y} \\\\ \\hline\n1000 & 0.34 \\\\\n1100 & 0.38 \\\\\n1200 & 0.42 \\\\\n1300 & 0.5 \\\\\n1450 & 0.51 \\\\\n1650 & 0.545 \\\\\n1800 & 0.907 \\\\\n1900 & 1.61 \\\\\n1950 & 2.51 \\\\\n1960 & 3.15 \\\\\n1970 & 3.65 \\\\\n1980 & 4.2 \\\\\n1990 & 5.3 \\\\\n\\hline\n\\end{tabular}\n\\caption{The \\textit{World Population} dataset \\textbf{\\textit{augmented with dummy data}} between years 1000 and 1650 to check the hypothesis that adding in the missing values indeed results in a polynomial regressor which has a better fit to the overall data.}\n\\label{augmented-label}\n\\end{table}\nWe run the same polynomial regression algorithm on the new dataset and find that a 4th degree polynomial would give the smallest least squares error with the augmented dataset. The output of the algorithm can be seen in Table 6.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n i &       alpha  &       beta     &  variance &         c\\\\ \\hline\n 0 &  1634.615385 &    0.000000    &  2.962034 &   1.847846\\\\\n 1 &  1443.872563 &  129609.467456 &  1.225232 &  0.003619\\\\\n 2 &  1433.542239 &  49394.313405  &  0.671342 &  0.000009\\\\\n 3 &  1478.542841 &  67784.468510  &  0.342014 &  0.000000\\\\\n 4 &  1492.567253 &  58694.769631  &  0.187048 &  0.000000\\\\\n\\hline\n\\end{tabular}\n\\caption{Tabulated values of $i$ (the degree $n$ of the polynomial), alpha ($\\alpha$) and beta ($\\beta$) (the multipliers used to generate the $n$th degree polynomial in the equation $q_{n+1}(x) = xq_{n}(x) - \\alpha_n q_{n}(x) - \\beta_n q_{n-1}(x)$), and finally $c$, the coefficient used in evaluating the polynomial $p_{n}(x) = \\sum_{i=0}^{n}c_{i}q_{i}$; these values are for the \\textit{World Population} dataset \\textit{\\textbf{augmented with dummy data}} to test the hypothesis that adding the missing data between years 1000 and 1650 gives a better polynomial regressor for the entire dataset. Here the algorithm computes that a polynomial of degree 4 would be the best fit for the data.}\n\\label{table:polynomial_args_3}\n\\end{table}\n\nWe see that, visually, the overall fit of the 4th degree polynomial to the augmented dataset looks much better; see Figure~\\ref{fig:test_fig}.\n\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{Q1b_test_fig.png}\n    \\caption{Plot showing the augmented data and the polynomial regressor. It can be visually verified that this fit is better than the one in which the data between years 1000 and 1650 were missing.}\n\t\\label{fig:test_fig}\n\\end{figure}\n\n\n% \\begin{figure}[ht]\n% \t\\center\n%     \\includegraphics[scale=0.75]{matlabS_dash.png}\n%     \\caption{Plot showing the first derivative of the natural cubic interpoland S'(X) as calculated using Matlab. Values of $S'(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.} \n% \t\\label{fig:sdash_matlab}\n% \\end{figure}\n% The corresponding plots created by matlab are much more smoother because matlab implements an additional constraint that the third derivatives of the piecewise polynomials are also equal at the second and the last knots. This are apparently called the \"Not-A-Knot\" end conditions. This is only done when the length of \\textbf{t} and \\textbf{y} are same.\n% \n% Natural splines are a good choice only when the functions have 0 second derivative at the end points. I read it up from here. 
\\url{http://www.mathworks.com/matlabcentral/newsreader/view_thread/172988} would really appreciate some more information about why using the Not-A-Knot condition is better. \n% \\\\\n% The corresponding matlab code can be found at \\ref{code:problem2b}\n\\clearpage\n\n\\section{Appendix A}\\label{code:problem3a}\nPython source code for Sections~\\ref{prob3a} and \\ref{prob3b}.\n{\\footnotesize\n\\begin{lstlisting}\nimport numpy as np\nimport pylab as pl\n\ndef dot_prod(x,y):\n    # the lengths of the arrays must be the same.\n    assert(len(x) == len(y))\n    # get corresponding values from x and y \n    # for v in zip(x,y)\n    # find their product and put the result in a list, list comprehension.\n    # sum over the list\n    return np.sum([v[0]*v[1] for v in zip(x,y)])\n\ndef get_next_alpha(x,qPrev):\n    # Calculating new alpha now\n    alpha_num = dot_prod(x*qPrev,qPrev)\n    alpha_denom = dot_prod(qPrev,qPrev)\n    return np.true_divide(alpha_num, alpha_denom) \n\ndef get_next_beta(x,qPrev,qOld):\n    # Calculating new beta now\n    beta_num = dot_prod(x*qPrev,qOld) # beta's numerator\n    beta_denom = dot_prod(qOld,qOld) # beta's denominator\n    return np.true_divide(beta_num, beta_denom) \n\ndef get_parameters(x,y):\n    m = len(x) - 1\n    # qi-2\n    qOld = np.ones(x.shape)\n\n    # Initial alpha values\n    alpha = np.zeros(x.shape)\n    alpha[0] = np.true_divide(dot_prod(x * qOld, qOld),dot_prod(qOld,qOld))\n\n    # allocating space for beta\n    beta = np.zeros(x.shape)\n\n    # qi-1\n    qPrev = x - alpha[0]\n\n    # Initializing c\n    c = np.zeros(x.shape)\n    c[0] = np.true_divide(dot_prod(y,qOld),dot_prod(qOld,qOld))\n    c[1] = np.true_divide(dot_prod(y,qPrev),dot_prod(qPrev,qPrev))\n\n    # Rho i and i-1\n    rho = np.ones(x.shape)\n    rho[0] = dot_prod(y,y) - np.true_divide(np.square(dot_prod(y,qOld)),dot_prod(qOld,qOld))\n    rho[1] = rho[0] - np.true_divide(np.square(dot_prod(y,qPrev)),dot_prod(qPrev,qPrev))\n\n    # Start with degree 1\n    n = 1 \n\n    # Initializing sigma_sq\n    sigma_sq = np.ones(x.shape)\n    sigma_sq[0] = np.true_divide(rho[0], m)\n    sigma_sq[1] = np.true_divide(rho[1], m-n) \n    while(True):\n        # Calculating new beta now\n        beta[n] = get_next_beta(x,qPrev,qOld)\n        \n        # Calculating new alpha now\n        alpha[n] = get_next_alpha(x,qPrev)\n\n        # Calculating q_n+1\n        qNew = x*qPrev - alpha[n]*qPrev - beta[n]*qOld\n\n        # Calculating new C\n        c[n+1] = np.true_divide(dot_prod(y,qNew),dot_prod(qNew,qNew))\n\n        # Calculate rho_n+1\n        rho[n+1] = rho[n] - np.true_divide(np.square(dot_prod(y,qNew)),dot_prod(qNew,qNew))\n        # Calculate sigma_sq_n+1\n        sigma_sq[n+1] = np.true_divide(rho[n+1],m-(n+1))\n        # if sigma_sq_n+1 > sigma_sq_n \n        # or abs(sigma_sq_n - sigma_sq_n+1) < 0.1\n        # Then stop\n        if sigma_sq[n+1] > sigma_sq[n] or np.abs(sigma_sq[n] - sigma_sq[n+1]) < 0.1:\n            break\n        else:\n            # Update n += 1\n            n += 1\n            # update qOld,qPrev = qPrev,qNew\n            np.copyto(qOld,qPrev)\n            np.copyto(qPrev,qNew)\n    \n    return c,alpha,beta,sigma_sq,n\n\ndef evaluate(t,alpha,beta,c,n):\n    x_vals = np.asarray(t)\n    retVals = []\n    for x in x_vals:\n        sum = 0\n        # qi-2\n        qOld = np.ones(1)\n        sum += c[0]*qOld\n        # qi-1\n        qPrev = x - alpha[0]\n        sum += c[1]*qPrev\n        for i in range(2,n+1): # 2,n+1 because 0 and 1 are already calculated\n            qNew = x*qPrev 
- alpha[i]*qPrev - beta[i]*qOld\n            qOld = qPrev\n            qPrev = qNew\n            sum += c[i] * qNew\n        retVals.append(sum)\n    return retVals\n\ndef process_and_plot(data,pl,file_name=None):\n    x,y = map(np.asarray,zip(*data))\n    c,alpha,beta,sigma_sq,n = get_parameters(x,y)\n    \n    print(\"\\n{:>2} {:>10} {:>10} {:>10} {:>10}\".format(\"i\",\"alpha\",\"beta\",\"variance\",\"c\"))\n    for i in range(n+1):\n        print(\"{:2d} {:10f} {:10f} {:10f} {:10f}\".format(i,alpha[i],beta[i],sigma_sq[i],c[i]))\n        # print(i,alpha[i],beta[i],sigma_sq[i],c[i])\n\n    pl.scatter(x,y,label=\"Original Data\")\n    xvals = np.linspace(min(x),max(x),100)\n    yvals = evaluate(xvals,alpha,beta,c,n)\n    pl.plot(xvals,yvals,c='r',label=\"Fitted Polynomial\")\n    pl.xlim(min(x)-5,max(x)+5)\n    pl.ylim(min(y)-5,max(y)+5)\n    pl.xlabel(\"X\")\n    pl.ylabel(\"Y\")\n    pl.legend()\n    if file_name != None:\n        pl.savefig(file_name)\n\nif __name__ == '__main__':\n    # Q3.a data\n    data = [(-9.70,3.76),\n            (-7.30,1.78),\n            (-5.40,1.52),\n            (-5.00,1.31),\n            (-3.01,0.31),\n            (-2.13,0.23),\n            (-1.20,0.45),\n            (-0.56,0.29),\n            (0.00,0.00),\n            (1.20,0.45),\n            (4.50,0.28),\n            (6.70,2.12),\n            (9.90,3.91),\n            (10.00,3.47),\n            (12.30,5.59)]\n    pl.figure()\n    process_and_plot(data,pl,\"Q1_fig.png\") \n    # test data, almost straight line\n    data_2 = [(1,4),(2,4.5),(3,3.9),(4,4.6),(4,3.7)]\n    pl.figure()\n    process_and_plot(data_2,pl)\n    # Q3.b data\n    data_3 = [(1000,0.340),(1650,0.545),(1800,0.907),(1900,1.61),(1950,2.51),(1960,3.15),(1970,3.65),(1980,4.20),(1990,5.30)]\n    pl.figure()\n    process_and_plot(data_3,pl,\"Q1b_fig.png\")\n    # Q3.b test data\n    test_data = [(1000,0.340),(1100,0.38),(1200,0.42),(1300,0.5),(1450,0.51),(1650,0.545),(1800,0.907),(1900,1.61),(1950,2.51),(1960,3.15),(1970,3.65),(1980,4.20),(1990,5.30)]\n    pl.figure()\n    process_and_plot(test_data,pl,\"Q1b_test_fig.png\")\n\n    pl.show()\n    pl.close()\n\\end{lstlisting}\n}\n\\end{document}\n\n\n", "meta": {"hexsha": "2b3bd2c68649313b8e171b8e4a42f0dcc216bbc4", "size": 17952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercise7/report.tex", "max_stars_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_stars_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercise7/report.tex", "max_issues_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_issues_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercise7/report.tex", "max_forks_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_forks_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.8, "max_line_length": 706, "alphanum_fraction": 0.6732397504, "num_tokens": 5868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5828046662970572}}
{"text": "\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{fullpage}\n\\usepackage{graphicx}\n\\usepackage{exsheets}\n\n\\linespread{1.1}\n\\setlength\\parindent{0pt}\n\n\\begin{document}\n\n\\section{Problem: The nonlinear pendulum}\n\nThe equation of motion for the pendulum using Newton's second law can be written in terms of the displacement angle $\\theta$ as:\n%\n\\begin{equation}\\label{eq:pendulum}\n\\frac{d^2\\theta}{dt^2} = - \\frac{g}{l} \\sin \\theta\n\\end{equation}\n\nWe first use the standard trick of turning the second-order equation into two first-order equations by introducing a new variable $\\omega$:\n%\n\\begin{eqnarray}\n\\frac{d\\theta}{dt} & = & \\omega \\label{eq:pendulum1}\\\\ \n\\frac{d\\omega}{dt} & = & -\\frac{g}{l} \\sin \\theta \\label{eq:pendulum2}\n\\end{eqnarray}\n\n\\begin{question}\nWrite a program to solve the equations~(\\ref{eq:pendulum1}) and (\\ref{eq:pendulum2}) using the second-order Runge-Kutta method for a pendulum with a 10 cm arm.\nCalculate the angle $\\theta$ of displacement for several periods of the pendulum starting from the initial condition $(\\theta, \\omega) = (179^\\circ, 0)$.\n\\end{question}\n\n\\begin{question}\nMake a plot of the total energy of the system as a function of time:\n%\n\\begin{equation}\nE = K + U = \\frac{1}{2} m (\\omega^2 l^2) + m g l (1 - \\cos \\theta) \n\\end{equation}\n\\end{question}\n\n\\newpage\n\n\\section{Problem: The 1-D diffusion equation}\n\nThe 1-dimensional diffusion equation assuming constant diffusion coefficient and steady state is\n%\n\\begin{equation}\\label{eq:diffusion}\n-D\\frac{\\partial^2 f(x)}{\\partial x^2} = Q(x)\n\\end{equation}\n%\nwhere $x$ is a spatial coordinate in the interval $[-H,H]$ and we assume as boundary conditions: $f(x = \\pm H) = 0$.\n\nTo provide us with an analytical solution (with the correct boundary conditions) we prescribe \n%\n\\begin{equation}\n\\tilde f(x) = \\cos(\\pi x / 2)\n\\end{equation}\n%\nas the analytical solution and derive the corresponding source term using equation~(\\ref{eq:diffusion}):\n%\n\\begin{equation}\nQ(x) = \\dots \n\\end{equation}\n\nEquipped now with a source term, we numerically find the solution of equation~(\\ref{eq:diffusion}) as the limit for $t \\rightarrow \\infty$ of the time-dependent equation:\n%\n\\begin{equation}\n\\frac{\\partial f(x,t)}{\\partial t} -D \\frac{\\partial^2 f(x,t)}{\\partial x^2} = Q(x)\n\\end{equation}\n\nWe do this by using the Crank-Nicolson method and by plotting as a function of time a measure (e.g., the 2-norm) of the distance between the numerical solution and the analytical one.\n\n\\begin{question}\nUsing the parameter values $H = 1$, $D = 0.1$, $dt = 10^{-3}$, $N = 2^8$ the plot should look like the following figure.\n\\end{question}\n\n\\begin{center}\n\\includegraphics[scale=0.35]{DiffusionTimeEvolution.pdf}\n\\end{center}\n\n\\begin{question}\nShow also the case for $N = 2^9$ and $N = 2^{10}$. 
What is the error ratio in the limit $t \\rightarrow \\infty$ between these cases?\n\\end{question}\n\n\\end{document}\n", "meta": {"hexsha": "54e208b7dae421b6549db35a995adca7af571dc6", "size": 2859, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/exercises2021.tex", "max_stars_repo_name": "carmeloevoli/NumericalMethodsGSSI", "max_stars_repo_head_hexsha": "5f09a60d407e60270de2ffc82afe7cd36eec7971", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-03-10T02:51:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-17T14:59:23.000Z", "max_issues_repo_path": "exercises/exercises2021.tex", "max_issues_repo_name": "carmeloevoli/NumericalMethodsGSSI", "max_issues_repo_head_hexsha": "5f09a60d407e60270de2ffc82afe7cd36eec7971", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/exercises2021.tex", "max_forks_repo_name": "carmeloevoli/NumericalMethodsGSSI", "max_forks_repo_head_hexsha": "5f09a60d407e60270de2ffc82afe7cd36eec7971", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0357142857, "max_line_length": 179, "alphanum_fraction": 0.7205316544, "num_tokens": 876, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8757870013740061, "lm_q1q2_score": 0.5827579179274741}}
{"text": "\\chapter{Finite element discretisation}\n\\label{sec:discretization}\n\nIn this chapter we introduce a mixed finite element discretisation for a steady-state incompressible MHD problem that models electrically conductive fluids. Following the setting in \\cite{schotzau2004mixed}, we use curl-conforming elements for the magnetic field and conforming continuous elements for the velocity field. The resulting discretisation is verified through a series of numerical experiments which appear later in Chapter \\ref{chap:results}. For simplicity, we only discuss in detail homogeneous Dirichlet boundary conditions, that is\n\\begin{equation} \\label{eq:homogeneousBC}\n    \\uu{u} = \\uu{0} \\quad \\mbox{and} \\quad \\uu{n}\\times \\uu{b} = \\uu{0}.\n\\end{equation}\nInhomogeneous conditions as in \\eqref{eq:bc} can be incorporated in a straightforward fashion.\n\n\n\\section{Variational formulation}\n\\label{sec:variation}\n\nTo express \\eqref{eq:mhd}, \\eqref{eq:bc} in weak form we follow \\cite{schotzau2004mixed} and denote the $L^2$ inner product on $L^2(\\Omega)^d$ by $(\\cdot,\\cdot)_\\Omega$, for $d = 2,3$. We introduce the standard Sobolev spaces\n\\begin{equation} \\label{eq:FuncSpace}\n \\left. \\begin{aligned}\n\\uu{V}&=H_0^1(\\Omega)^d=\\left\\{\\,\\uu{u}\\in H^1(\\Omega)^d\\,:\\,\\text{$\\uu{u}=\\uu{0}$ on $\\partial\\Omega$}\\,\\right\\},\\\\\nQ&=L^2_0(\\Omega)=\\{\\,p\\in L^2(\\Omega)\\,:\\,(p\\,,1)_\\Omega=0\\,\\},\\\\\n\\uu{C}&=H_0({\\rm curl};\\Omega) = \\left\\{\\,\\uu{b}\\in L^2(\\Omega)^d\\,:\\,\\nabla\\times\\uu{b}\\in L^2(\\Omega)^{2d-3}, \\\n\\text{$\\uu{n}\\times\\uu{b}=\\uu{0}$ on $\\partial\\Omega$}\\,\\right\\},\\\\\nS&=H^1_0(\\Omega)=\\{\\,r\\in H^1(\\Omega)\\,:\\,r=0\\ \\mbox{on $\\partial\\Omega$}\\,\\}.\n \\end{aligned}\n \\right.\n\\end{equation}\nWe write $\\|\\cdot\\|_{L^2(\\Omega)}$, $\\|\\cdot\\|_{H^1(\\Omega)}$ and $\\|\\cdot\\|_{H(\\rm{curl};\\Omega)}$ for the associated natural norms. More precisely, for vector fields $\\uu{u},\\uu{b}$ and a scalar function $r$ the norms are defined as follows:\n\\begin{equation} \\nonumber\n \\left. \\begin{aligned}\n    \\|\\uu{u}\\|_{L^2 (\\Omega)} &= \\left({\\int_{\\Omega} \\uu{u}\\cdot\\uu{u}\\;dx}\\right)^{\\frac{1}{2}},\\\\\n   \\|\\uu{u}\\|_{H^1(\\Omega)} &=  \\left(\\|\\uu{u}\\|_{L^2(\\Omega)}^2 + \\|\\nabla  \\uu{u}\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}},\\\\\n   \\|\\uu{b}\\|_{H(\\rm{curl},\\Omega)} &=  \\left(\\|\\uu{b}\\|_{L^2(\\Omega)}^2 + \\|\\nabla \\times \\uu{b}\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}}, \\\\\n    \\|r\\|_{L^2 (\\Omega)} &= \\left({\\int_{\\Omega} r^2\\;dx}\\right)^{\\frac{1}{2}},\\\\\n    \\|r\\|_{H^1(\\Omega)} &=  \\left(\\|r\\|_{L^2(\\Omega)}^2 + \\|\\nabla  r\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}},\\\\\n \\end{aligned}\n \\right.\n\\end{equation}\nwhere $\\|\\nabla  \\uu{u}\\|_{L^2(\\Omega)}^2$ is the $L^2$-norm of the gradient tensor $\\nabla \\uu{u}$. 
The weak formulation of the incompressible MHD system (\\ref{eq:mhd}) and the boundary conditions (\\ref{eq:bc}) consists in finding~$(\\uu{u},p,\\uu{b},r)\\in \\uu{V} \\times Q\\times \\uu{C} \\times S$ such that\n\\begin{subequations}\n\\label{eq:weak}\n\\begin{eqnarray}\n\\label{eq:weak1} A(\\uu{u},\\uu{v}) + O(\\uu{u};\\uu{u},\\uu{v})\n+C(\\uu{b};\\uu{v},\\uu{b})\n+B(\\uu{v}, p) & =& (\\uu{f}, \\uu{v})_{\\Omega},\\\\[.1cm]\n\\label{eq:weak2}\nB(\\uu{u},q)&=&0, \\\\[.1cm]\n\\label{eq:weak3}\nM(\\uu{b},\\uu{c})-C(\\uu{b};\\uu{u},\\uu{c})+D(\\uu{c},r)&=& (\\uu{g},\\uu{c})_\\Omega, \\\\[.1cm]\n\\label{eq:weak4} D(\\uu{b},s)&=&0,\n\\end{eqnarray}\n\\end{subequations}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V} \\times Q\\times \\uu{C}\\times\nS$. The individual variational forms are given by\n\\begin{alignat*}2\\nonumber\n&A(\\uu{u},\\uu{v})=  \\int_\\Omega \\nu \\, \\nabla\\uu{u}:\n\\nabla\\uu{v}\\,d\\uu{x},&\\qquad  & O(\\uu{w};\\uu{u},\\uu{v}) = \\int_\\Omega\n(\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{v} \\, d\\uu{x},\n\\\\[.1cm]\n&  B(\\uu{u},q) = -\\int_\\Omega\\,(\\nabla\\cdot\\uu{u}) \\,q \\,d\\uu{x},\n&\\qquad  &\n M(\\uu{b},\\uu{c})= \\int_\\Omega\\, \\kappa\\nu_m\n(\\nabla\\times\\uu{b})\\cdot(\\nabla\\times\\uu{c})\\,d\\uu{x},\\\\[0.1cm]\n& D(\\uu{b},s) = \\int_\\Omega\\, \\uu{b} \\cdot \\nabla s\\,\nd\\uu{x}, & \\qquad &\nC(\\uu{d};\\uu{v},\\uu{b}) =  \\int_\\Omega \\kappa\\, (\\uu{v}\\times\\uu{d})\\cdot\n(\\nabla\\times\\uu{b})\\, d\\uu{x},\n\\end{alignat*}\nwhere $\\nabla \\uu{u}:\\nabla \\uu{u}$ is defined as\n$$\\nabla \\uu{u}:\\nabla \\uu{u} = \\sum^d_{i,j=1}(\\nabla \\uu{u})_{ij}(\\nabla \\uu{u})_{ij}.$$ In \\cite{schotzau2004mixed} it has been shown that this formulation of the problem is energy-stable and has a unique solution for small data.\n\n\\section{Mixed finite element discretisation}\n\nConsider the domain $\\Omega$ to be divided into a regular and quasi-uniform mesh ${\\mathcal T}_h=\\{K\\}$ consisting of triangles ($d = 2$) or tetrahedra ($d = 3$)  with mesh size $h$. Based on the function spaces defined in \\eqref{eq:FuncSpace}, our finite element approximation will be sought in the finite element spaces given by:\n\\begin{equation}\n\\label{eq:FiniteSpace}\n\\begin{split}\n\\uu{V}_h &=  \\{\\, \\uu{u}\\in H^1( \\Omega)\\, :\\, \\uu{u}|_K \\in {\\mathcal P}_{k}(K)^d, \\, K \\in{\\mathcal T}_h \\, \\},\\\\[.1cm]\nQ_h&=  \\{\\, p\\in L^2(\\Omega) \\cap H^1(\\Omega)\\,:\\, p|_K \\in {\\mathcal P}_{k-1}(K), \\, K \\in{\\mathcal T}_h \\,\\},\\\\[.1cm]\n\\uu{C}_h &=  \\{\\, \\uu{b}\\in H_0({\\rm curl}; \\Omega) \\,:\\, \\uu{b}|_K \\in {\\mathcal P}_{k-1}(K)^d \\oplus \\uu{R}_k(K), \\, K \\in{\\mathcal T}_h \\,\\},\\\\[.1cm]\nS_h&=  \\{\\, r\\in H_0^1(\\Omega) \\,:\\, r|_K \\in {\\mathcal P}_{k}(K), \\, K \\in {\\mathcal T}_h \\, \\},\n\\end{split}\n\\end{equation}\nfor $k\\geq 2$. Here we note that we are using ${\\mathcal P_k}/{\\mathcal P_{k-1}}$ Taylor-Hood elements for the fluid unknowns $(\\uu{u},p)$ \\cite{taylor1973numerical}. For the magnetic variables $(\\uu{b},r)$ we use the curl-conforming \\nedelec elements of the first kind \\cite{nedelec1980mixed}. These choices of finite element spaces $\\uu{V}_h, \\, \\uu{C}_h, \\, Q_h$ and $S_h$ imply that we have conforming subspaces of our Sobolev spaces $\\uu{V}, \\, \\uu{C}, \\,Q$ and $S$, respectively. 
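For concreteness, these spaces (here for $k=2$) could be set up as follows in a finite element library. This is only an illustrative sketch in legacy FEniCS/DOLFIN syntax; the choice of library is our assumption, as the discretisation itself is independent of any particular implementation.\n\\begin{verbatim}\nfrom dolfin import *\n\nmesh = UnitSquareMesh(16, 16)           # triangulation T_h of Omega (d = 2)\nV = VectorFunctionSpace(mesh, \"CG\", 2)  # velocity: continuous P2 (Taylor-Hood)\nQ = FunctionSpace(mesh, \"CG\", 1)        # pressure: continuous P1\nC = FunctionSpace(mesh, \"N1curl\", 2)    # magnetic field: Nedelec, first kind\nS = FunctionSpace(mesh, \"CG\", 2)        # magnetic multiplier: continuous P2\n\\end{verbatim}\n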
Then the finite element solution consists in finding $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)\\in \\uu{V}_h\\times Q_h\\times \\uu{C}_h\\times S_h$ such that\n\\begin{subequations}\n\\label{eq:VariationForm}\n\\begin{eqnarray}\n\\label{eq:bn1} \\hspace{-15mm} A(\\uu{u}_h,\\uu{v}) + \\tilde{O}(\\uu{u}_h;\\uu{u}_h,\\uu{v}) +C(\\uu{b}_h;\\uu{v},\\uu{b}_h) +B(\\uu{v}, p_h) & = & ( \\uu{f},\\uu{v}),\\\\[.1cm]\n\\label{eq:bn2}\nB(\\uu{u}_h,q)&=& 0, \\\\[.1cm]\n\\label{eq:bn3} M(\\uu{b}_h,\\uu{c})-C(\\uu{b}_h;\\uu{u}_h,\\uu{c})+ D(\\uu{c},r_h)&=& (\\uu{g},\\uu{c}),\\\\[.1cm]\n\\label{eq:bn4} D(\\uu{b}_h,s)&=&0,\n\\end{eqnarray}\n\\end{subequations}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$.\n\nThe forms $A, M, B, D$ and $C$ stay the same as on the continuous level. However, for the convection term $\\tilde{O}(\\cdot;\\cdot,\\cdot)$ we need to modify the form $O(\\uu{w};\\uu{u},\\uu{v})$ in a standard fashion to ensure the energy-stability property\n\\begin{equation} \\label{eq:convection}\n    \\tilde{O}(\\uu{w};\\uu{u},\\uu{u}) = 0, \\quad \\forall \\uu{w},\\uu{u} \\in  \\uu{V}_h.\n\\end{equation}\nTo ensure this property we integrate by parts the convection form $O(\\uu{w};\\uu{u},\\uu{u})$ to obtain\n\\begin{equation} \\nonumber\n \\left. \\begin{aligned}\n     \\int_\\Omega (\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{u} \\, d\\uu{x} =& -\\frac{1}{2}\\int_{\\Omega} (\\nabla \\cdot \\uu{w})\\, \\uu{u} \\cdot \\uu{u} \\, d\\uu{x}\n     +\\frac{1}{2}\\int_{\\partial \\Omega} (\\uu{w}\\cdot \\uu{n})\\, |\\uu{u}|^2\\, ds,\n \\end{aligned}\n \\right.\n\\end{equation}\nrecalling that $\\uu{n}$ is the unit outward normal on $\\partial \\Omega$. Therefore, we choose the modified convection form $\\tilde{O}(\\uu{w};\\uu{u},\\uu{v})$ as\n$$\\tilde{O}(\\uu{w};\\uu{u},\\uu{v}) =  \\int_\\Omega (\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{v} \\, d\\uu{x} +\\frac{1}{2}\\int_{\\Omega} (\\nabla \\cdot \\uu{w})\\, \\uu{u} \\cdot \\uu{v}\\, d\\uu{x}-\\frac{1}{2}\\int_{\\partial \\Omega} (\\uu{w}\\cdot \\uu{n})\\, \\uu{u} \\cdot \\uu{v}\\, ds.$$\nBy construction, the property \\eqref{eq:convection} is now satisfied. Note also that for homogeneous boundary conditions as assumed in \\eqref{eq:homogeneousBC}, the boundary integral term in $\\tilde{O}$ can be omitted.\n\nAgain, in \\cite{schotzau2004mixed} it has been shown that this discrete formulation of the MHD system is energy-stable and has a unique solution for small data. Also, optimal order error estimates in the mesh size $h$ have been derived for small data using the stability property \\eqref{eq:convection}. Namely, for sufficiently smooth solutions, we have that\n$$\\|\\uu{u}-\\uu{u}_h\\|_{H^1(\\Omega)}+\\|\\uu{b}-\\uu{b}_h\\|_{H(\\rm{curl};\\Omega)}+\\|p-p_h\\|_{L^2(\\Omega)}+\\|r-r_h\\|_{H^1(\\Omega)} \\leq C h^k,$$\nfor a constant $C>0$ independent of the mesh size. However, the $L^2$-norm error for the velocity field is of order $\\mathcal{O}(h^{k+1})$ (as $\\uu{V}_h$ consists of a full polynomial space on each element). In contrast, we cannot expect $L^2$-norm errors of order $\\mathcal{O}(h^{k+1})$ for the magnetic field (as $\\uu{C}_h$ does not consist of a full polynomial space on each element).\n\n\n\\subsection{Matrix representation}\n\nThis variational formulation \\eqref{eq:VariationForm} can now be converted into a matrix representation. 
To do this, we introduce the basis functions for the finite element spaces in \\eqref{eq:FiniteSpace}:\n\\begin{alignat}2\n\\label{eq:bases1}\n\\uu{V}_h & = \\mbox{span}\\langle  \\uu{\\psi}_j \\rangle _{j=1}^{n_u}, & \\qquad &\nQ_h  = \\mbox{span} \\langle  \\alpha_i \\rangle _{i=1}^{m_u},\\\\[0.1cm]\n \\uu{C}_h& =\\mbox{span}\\langle \\uu{\\phi}_j \\rangle _{j=1}^{n_b}, & \\qquad & S_h = \\mbox{span} \\langle \\beta_i\n\\rangle_{i=1}^{m_b}.\n\\end{alignat}\nThe aim now is to find the coefficient vectors $u = (u_1, \\ldots , u_{n_u}) \\in \\mathbb{R}^{n_u}$, $p = (p_1, \\ldots , p_{m_u}) \\in \\mathbb{R}^{m_u}$, $b = (b_1, \\ldots , b_{n_b}) \\in \\mathbb{R}^{n_b}$, and $r = (r_1, \\ldots , r_{m_b}) \\in \\mathbb{R}^{m_b}$ of the finite element functions $(\\uu{u}_h, p_h,\\uu{b}_h, r_h)$. As usual, this is done by writing the bilinear forms in \\eqref{eq:VariationForm} in terms of the following stiffness matrices and load vectors:\n\\begin{alignat*}2\nA_{i,j} &= A(\\uu{\\psi}_j,\\uu{\\psi}_i), &\\quad  &1 \\leq i,j \\leq n_u,\\\\[0.1cm]\nB_{i,j} &= B(\\uu{\\psi}_j,\\alpha_i), &\\quad &1 \\leq i \\leq m_u, \\ 1 \\leq j \\leq n_u,\\\\[.1cm]\nD_{i,j} &= D(\\uu{\\phi}_j,\\beta_i),  & & 1 \\leq i \\leq m_b,\\ 1 \\leq j \\leq n_b,\\\\[.1cm]\nM_{i,j}&= M(\\uu{\\phi}_j,\\uu{\\phi}_i), &\\qquad & 1 \\leq i,j \\leq n_b,\\\\[.1cm]\nf_i &= (\\uu{f},\\uu{\\psi}_i)_\\Omega, & & 1\\leq i\\leq n_u,\\\\[.1cm]\ng_i &= (\\uu{g},\\uu{\\phi}_i)_\\Omega, & & 1\\leq i \\leq n_b.\n\\end{alignat*}\nFor the two non-linear forms, $\\tilde{O}$ and $C$, we define the corresponding stiffness matrices with respect to given finite element functions $\\uu{w} \\in \\uu{V}_h$ and $\\uu{d}\\in \\uu{C}_h$ in the first argument and their associated coefficient vectors $w$ and $d$ as\n\\begin{alignat*}2\nO(w)_{i,j} &=\\tilde{O}(\\uu{w};\\uu{\\psi}_j,\\uu{\\psi}_i), &\\quad  &1 \\leq i,j \\leq n_u,\\\\[.1cm]\nC(d)_{i,j} &= C(\\uu{d};\\uu{\\psi}_j,\\uu{\\phi}_i), & & 1\\leq i \\leq n_b,\\ 1 \\leq j \\leq n_u.\n\\end{alignat*}\n\nThus, the numerical solution to \\eqref{eq:mhd} consists in solving the non-linear system\n\\begin{equation}\n\\label{eq:matrix-system}\n\\left(\n\\begin{array}{cccc}\nA+O(u) & B^T & C^T(b) & 0\\\\\nB & 0 & 0 & 0\\\\\n-C(b) & 0 & M & D^T \\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\nu\\\\\np\\\\\nb\\\\\nr\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c} f\\\\0\\\\g\\\\0\n\\end{array}\n\\right),\n\\end{equation}\nwhere the vectors $u\\in\\mathbb{R}^{n_u}$, $p\\in\\mathbb{R}^{m_u}$,  $b\\in\\mathbb{R}^{n_b}$, and $r\\in\\mathbb{R}^{m_b}$ are the unknown coefficients of the finite element functions. We shall omit the dependence of $O$ and $C$ on $u$ and $b$, respectively, and simply write $O$ and $C$.\n\n\\section{Picard iteration (P)}\n\\label{sec:nonlinear}\nThe discrete system \\eqref{eq:matrix-system} is non-linear, and therefore applying a non-linear solver to this problem is necessary. A common choice to deal with the non-linearity within the incompressible Navier-Stokes equations in isolation is to perform Oseen or Picard iterations \\cite{elman2005finite}. This involves linearising around the current velocity and solving for updates.\n\nWe adapt this approach for the full MHD system as well. 
Given a current iterate $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$, we solve for updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$ and introduce the next iterate by setting:\n\\begin{equation}\\nonumber\n\\begin{array}{cc}\n% \\label{eq:updates}\n\\uu{u}_h& \\hspace{-3mm} \\rightarrow \\uu{u}_h +\\delta \\uu{u}_h, \\quad p_h \\rightarrow p_h +\\delta p_h,\\\\\n\\uu{b}_h& \\hspace{-3mm}  \\rightarrow \\uu{b}_h +\\delta \\uu{b}_h, \\quad r_h \\rightarrow r_h +\\delta r_h.\n\\end{array}\n\\end{equation}\nIn variational form, the updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$ are found by solving the Picard system (P):\n\\begin{equation} \\nonumber\n% \\label{eq:picard}\n\\begin{split}\nA(\\delta\\uu{u}_h, \\uu{v}) +\\tilde{O}(\\uu{u}_h;\\delta\\uu{u}_h,\\uu{v})+ C(\\uu{b}_h;\\uu{v},\\delta \\uu{b}_h) + B(\\uu{v}, \\delta p_h) & = R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v}),\\\\[.1cm]\nB(\\delta\\uu{u}_h,q)&= R_p(\\uu{u}_h;q), \\\\[.1cm]\nM(\\delta \\uu{b}_h,\\uu{c})+\nD(\\uu{c},\\delta r_h)-C(\\uu{b}_h;\\delta \\uu{u}_h,\\uu{c})&= R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c}),\\\\[.1cm]\nD(\\delta \\uu{b}_h,s)&= R_r(\\uu{b}_h;s),\n\\end{split}\n\\end{equation}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$. The right hand side linear forms correspond to the residual at the current iteration $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ defined by:\n\\begin{align*}\n R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v})&=(\\uu{f}, \\uu{v})_\\Omega-A(\\uu{u}_h,\\uu{v})\n-  \\tilde{O}(\\uu{u}_h;\\uu{u}_h,\\uu{v})  - C(\\uu{b}_h;\\uu{v},\\uu{b}_h)-B(\\uu{v},p_h),\\\\[.1cm]\nR_p(\\uu{u}_h;q)&=-B(\\uu{u}_h,q),\\\\[.1cm]\n R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c})&=(\\uu{g},\\uu{c})_\\Omega -M(\\uu{b}_h,\\uu{c})\n+ C(\\uu{b}_h;\\uu{u}_h,\\uu{c})-D(\\uu{c},r_h),\\\\[.1cm]\nR_r(\\uu{b}_h;s)&=-D(\\uu{b}_h,s),\n\\end{align*}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$.\n\nIn \\cite{schotzau2004mixed} it is shown that for small data the Picard iteration (P) will converge to the exact solution given any initial guess.\n\nTo formulate the variational form of the Picard iteration (P) in matrix form, let $({u},p,{b},r)$ be the coefficient vectors associated with $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ and $(\\delta{u},\\delta p,\\delta{b},\\delta r)$ be the coefficient vectors of $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$; then it can readily be seen that the Picard iteration (P) amounts to solving the matrix system\n\\begin{equation}\n\\label{eq:mhd_saddle}\n%\\mathcal{K} x \\equiv\n\\left(\n\\begin{array}{cccc}\nA+O & B^T & C^T & 0\\\\\nB & 0 & 0 & 0 \\\\\n-C & 0 & M & D^T\\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\n\\delta u\\\\\n\\delta p\\\\\n\\delta b\\\\\n\\delta r\n\\end{array}\n\\right)  =\n\\begin{pmatrix}\nr_u \\\\\nr_p\\\\\nr_b\\\\\nr_r\n\\end{pmatrix},\n\\end{equation}\nwith\n\\begin{align*}\nr_u &= f- Au -O(u) u - C(b)^T b- B^T p,\\\\[0.1cm]\nr_p &=-B u,\\\\[0.1cm]\nr_b &=g-Mb+C(b)u-D^T r,\\\\[0.1cm]\nr_r &=-D b.\n\\end{align*}\nHere, the matrix $A$ is symmetric positive-definite (SPD), $O$ is non-symmetric and $-C,C^T$ appear in a skew-symmetric fashion. 
We also note that $M$ is symmetric positive-semidefinite (SPSD) with nullity $m_b$ corresponding to the discrete gradients.\n\n\n\\section{Decoupled iterations}\n\\label{sec:FEMdecouple}\n\n\nThe full MHD system \\eqref{eq:mhd}, \\eqref{eq:bc} is a coupled system consisting of the incompressible Navier-Stokes and Maxwell's equations, coupled through the non-linear skew-symmetric coupling term $C$. In addition, the convection term $O$ is non-linear as well. These two terms make the numerical solution challenging. Therefore, if one or both of these terms are small, it may be possible to iterate explicitly. In particular, if the coupling term $C$ is small, we may completely decouple the system into a Navier-Stokes problem and a Maxwell problem. The two resulting decoupling schemes are what we call Magnetic and Complete Decoupling and are both described below.\n\n\n\\subsection{Magnetic decoupling (MD)}\n\\label{sec:FEMmd}\n\nConsider the first situation where there is weak coupling within the system, that is when $C$ is small. Then it may be possible to drop these terms to completely decouple the system into the two subproblems, the Navier-Stokes and Maxwell's equations. We will call this Magnetic decoupling.\n% For a given solution $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$, neglecting the coupling terms in \\eqref{eq:picard} results in solving for the updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h) \\in \\uu{V}_h \\times Q_h \\times \\uu{C}_h \\times S_h$  such that\n% \\begin{equation}\n% \\label{eq:picard_explicit_MD}\n% \\begin{split}\n% A(\\delta\\uu{u}_h, \\uu{v}) +O(\\uu{u};\\delta\\uu{u}_h,\\uu{v})+ B(\\uu{v}, \\delta p_h) & = R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v})\\\\[.1cm]\n% B(\\delta\\uu{u}_h,q)&= R_p(\\uu{u}_h;q), \\\\[.1cm]\n% M(\\delta \\uu{b}_h,\\uu{c})+\n% D(\\uu{c},\\delta r_h)&= R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c}),\\\\[.1cm]\n% D(\\delta \\uu{b}_h,s)&=R_r(\\uu{b}_h;s),\n% \\end{split}\n% \\end{equation}\n% where again $(\\uu{v},q,\\uu{c},s)\\in\\uu{V}_h\\times Q_h\\times\\uu{C}_h\\times S_h$ and $R_u$, $R_p$, $R_b$ and $R_r$ which are defined in section \\ref{sec:nonlinear}. Again, let $({u},p,{b},r)$ be the coefficient vectors of $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ and $(\\delta{u},\\delta p,\\delta{b},\\delta r)$ be the coefficient vectors of $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$, then this amounts to solving the linear system:\nThen \\eqref{eq:mhd_saddle} amounts to\n\\begin{equation}\n\\label{eq:matrix_MD}\n%\\mathcal{K} x \\equiv\n\\left(\n\\begin{array}{cccc}\nA+O(u) & B^T & 0 & 0\\\\\nB & 0 & 0 & 0 \\\\\n0 & 0 & M & D^T\\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\n\\delta u\\\\\n\\delta p\\\\\n\\delta b\\\\\n\\delta r\n\\end{array}\n\\right)  =\n\\begin{pmatrix}\nr_u \\\\\nr_p\\\\\nr_b\\\\\nr_r\n\\end{pmatrix},\n\\end{equation}\nwith\n\\begin{align*}\nr_u &= f- Au -O u - C^T b- B^T p,\\\\[0.1cm]\nr_p &=-B u,\\\\[0.1cm]\nr_b &=g-Mb+Cu-D^T r,\\\\[0.1cm]\nr_r &=-D b.\n\\end{align*}\nFrom \\eqref{eq:matrix_MD} we can see that the system is now completely decoupled. This enables us to solve each individual subproblem separately, and possibly in parallel.\n\n\\subsection{Complete decoupling}\n\n\nFor the second decoupling scheme, we again assume weak coupling of the system, but we also assume that the fluid equations are diffusion-dominated, so that the convection term can be excluded as well.\n% This is the simplest technique as it removes all non-linear terms. 
Again, for a given solution $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$, removing the coupling and convection terms in \\eqref{eq:picard} results in solving for the updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h) \\in \\uu{V}_h \\times Q_h \\times \\uu{C}_h \\times S_h$  such that\n% \\begin{equation}\n% \\label{eq:picard_explicit_CD}\n% \\begin{split}\n% A_h(\\delta\\uu{u}_h, \\uu{v}) + B(\\uu{v}, \\delta p_h) & = R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v})\\\\[.1cm]\n% B(\\delta\\uu{u}_h,q)&= R_p(\\uu{u}_h;q), \\\\[.1cm]\n% M(\\delta \\uu{b}_h,\\uu{c})+\n% D(\\uu{c},\\delta r_h)&= R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c}),\\\\[.1cm]\n% D(\\delta \\uu{b}_h,s)&=R_r(\\uu{b}_h;s),\n% \\end{split}\n% \\end{equation}\n% where $(\\uu{v},q,\\uu{c},s)\\in\\uu{V}_h\\times Q_h\\times\\uu{C}_h\\times S_h$.  Taking $({u},p,{b},r)$ as the coefficient vectors of $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ and $(\\delta{u},\\delta p,\\delta{b},\\delta r)$ be the coefficient vectors of $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$, then the proposed decoupled linear system is\nThis amounts to solving\n\\begin{equation}\n\\label{eq:matrix_CD}\n%\\mathcal{K} x \\equiv\n\\left(\n\\begin{array}{cccc}\nA & B^T & 0 & 0\\\\\nB & 0 & 0 & 0 \\\\\n0 & 0 & M & D^T\\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\n\\delta u\\\\\n\\delta p\\\\\n\\delta b\\\\\n\\delta r\n\\end{array}\n\\right)  =\n\\begin{pmatrix}\nr_u \\\\\nr_p\\\\\nr_b\\\\\nr_r\n\\end{pmatrix},\n\\end{equation}\nwith\n\\begin{align*}\nr_u &= f- Au -O(u) u - C(b)^T b- B^T p,\\\\[0.1cm]\nr_p &=-B u,\\\\[0.1cm]\nr_b &=g-Mb+C(b)u-D^T r,\\\\[0.1cm]\nr_r &=-D b.\n\\end{align*}\nThis is the simplest technique as it removes all non-linear terms and hence leaves a linear Stokes problem in the leading $2\\times 2$ block of the matrix.\n\nIn this chapter we have introduced a mixed finite element approximation to the full MHD system given in \\eqref{eq:mhd} and \\eqref{eq:bc}. We followed the mixed approach outlined in \\cite{schotzau2004mixed} and expressed the MHD system in the matrix form \\eqref{eq:mhd_saddle}. Starting from the Picard iteration \\eqref{eq:mhd_saddle}, we introduced two possible decoupling schemes which may be simpler to solve, depending on the parameters ($\\kappa$, $\\nu$ and $\\nu_m$). 
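In either scheme, each Picard step reduces to a fluid solve and a magnetic solve; for instance, \\eqref{eq:matrix_CD} separates into the two independent saddle point systems\n\\begin{equation}\\nonumber\n\\left(\n\\begin{array}{cc}\nA & B^T\\\\\nB & 0\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\n\\delta u\\\\\n\\delta p\n\\end{array}\n\\right) =\n\\begin{pmatrix}\nr_u \\\\\nr_p\n\\end{pmatrix},\n\\qquad\n\\left(\n\\begin{array}{cc}\nM & D^T\\\\\nD & 0\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\n\\delta b\\\\\n\\delta r\n\\end{array}\n\\right) =\n\\begin{pmatrix}\nr_b \\\\\nr_r\n\\end{pmatrix},\n\\end{equation}\nwhile for \\eqref{eq:matrix_MD} the fluid block is $A+O(u)$ instead of $A$. 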
The next chapter will discuss possible preconditioning approaches to these systems.", "meta": {"hexsha": "a08da0621f2486f31204b70b0b91b20869ec386e", "size": 20460, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MHD/THESISnew/FEM/FEM.tex", "max_stars_repo_name": "wathen/PhD", "max_stars_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-25T13:30:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T21:27:30.000Z", "max_issues_repo_path": "MHD/THESISnew/FEM/FEM.tex", "max_issues_repo_name": "wathen/PhD", "max_issues_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MHD/THESISnew/FEM/FEM.tex", "max_forks_repo_name": "wathen/PhD", "max_forks_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-28T16:12:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-13T13:59:44.000Z", "avg_line_length": 57.3109243697, "max_line_length": 682, "alphanum_fraction": 0.6499022483, "num_tokens": 8198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246118695629, "lm_q2_score": 0.6992544335934766, "lm_q1q2_score": 0.5827059294723549}}
{"text": "\\documentclass[11pt, oneside]{article} \n\\usepackage{geometry}\n\\geometry{letterpaper}\n\\usepackage{graphicx}\t\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{natbib}\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true,linkcolor=blue,citecolor=blue,urlcolor=blue}\n\\usepackage{parskip}\n\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\title{CSS-only quadrant rotations}\n\\author{}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section{The first rotation}\n\nWe want to rotate a quadrant square of side length $L$ by $\\pi/2$, translating it diagonally\nas we go to avoid intersecting the crossbars.\nAs we do the rotation, the corner of the quadrant moves through an angle of $\\pi/2$ from $-\\pi/4$ to $\\pi/4$,\nrequiring a shift of\n\\begin{align*}\nm(t) &= \\frac{L}{\\sqrt{2}} \\cos(\\pi t/2 - \\pi/4) d - \\frac{L}{2} d = \\cos(\\pi t/2 - \\pi/4) w + w_0\n\\end{align*}\nas $t$ goes from 0 to 1, where $d = (\\pm 1, \\pm 1)$ is the appropriate diagonal vector, so that\n$w = \\frac{L}{\\sqrt{2}} d$ and $w_0 = -\\frac{L}{2} d$.\nIn terms of the complex plane, the effect we want is the animated transform\n\\begin{align*}\nf(t,z) &= w_0 + \\cos(\\pi t/2 - \\pi/4) w + e^{i \\pi t/2} z\n\\end{align*}\nwhere $z \\in \\C$, $w \\in \\C$ is the maximum translation, and $t \\in [0,1]$.  Let\n\\begin{align*}\ne(t) &= \\exp(2 \\pi i t) \\\\\nc(t) &= \\cos(2 \\pi t) \\\\\ns(t) &= \\sin(2 \\pi t)\n\\end{align*}\nso that\n\\begin{align*}\ns(t) &= \\frac{i}{2} (e(-t) - e(t)) \\\\\nc(t) &= \\frac{1}{2} (e(t) + e(-t))\n\\end{align*}\nand $f(t,z)$ becomes\n\\begin{align*}\nf(t,z) &= w_0 + c(t/4 - 1/8) w + e(t/4) z \\\\\n  &= w_0 + (e(t/4 - 1/8) + e(-t/4 + 1/8)) w/2 + e(t/4) z \\\\\n  &= w_0 + e(t/4)e(-1/8) w/2 + e(-t/4)e(1/8) w/2 + e(t/4) z \\\\\n  &= w_0 + e(t/4)(e(-1/8) w/2 + e(-t/2)e(1/8) w/2 + z) \\\\\n  &= w_0 + e(t/4)(e(-1/8) w/2 + e(-t/2) (e(1/8) w/2 + e(t/2) z))\n\\end{align*}\nwhich is now expressible as primitive transforms:\n\\begin{align*}\nf(t) &= T(w_0) R(t/4) T(e(-1/8) w/2) R(-t/2) T(e(1/8) w/2) R(t/2)\n\\end{align*}\nwhere $R$, $T$ are rotate, translate.  If we set the CSS transform property to that sequence of 6 primitive\ntransforms, everything will animate correctly.\n\n\\section{The second rotation}\n\nBut wait!  If we extrapolate this into the next $\\pi/2$, our translation will go the wrong way.  To fix this, we\nneed to make our translation correction ping-pong back and forth between $-\\pi/4$ and $\\pi/4$ without leaving that\ninterval:\n\\begin{align*}\nf(t,z) &= w_0 + \\cos(\\pi/2 \\operatorname{jag}(t) - \\pi/4) w + e^{i \\pi t/2} z \\\\\n\\operatorname{jag}(t) &= \\min(\\operatorname{rem}(t, 2), \\operatorname{rem}(2-t,2))\n\\end{align*}\nwhere $\\operatorname{rem}$ is the (correctly signed) floating-point remainder function.  
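On $[0,2]$ this gives $\\operatorname{jag}(0)=0$, $\\operatorname{jag}(1)=1$, $\\operatorname{jag}(1.5)=\\min(1.5,\\,0.5)=0.5$ and $\\operatorname{jag}(2)=0$, so the correction sweeps back and forth between the two diagonal extremes as intended.  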
Fortunately for us we\nonly need to evaluate $\\operatorname{jag}$ at integers, where it has the simpler form\n\\begin{align*}\n\\operatorname{jag}(n) &= \\operatorname{rem}(n, 2)\n\\end{align*}\nSetting $j = \\operatorname{jag}(t)$, we can repeat our derivation:\n\\begin{align*}\nf(t,z) &= w_0 + c(j/4 - 1/8) w + e(t/4) z \\\\\n  &= w_0 + (e(j/4 - 1/8) + e(-j/4 + 1/8)) w/2 + e(t/4) z \\\\\n  &= w_0 + e(j/4)e(-1/8) w/2 + e(-j/4)e(1/8) w/2 + e(t/4) z \\\\\n  &= w_0 + e(j/4)(e(-1/8) w/2 + e(-j/2)e(1/8) w/2 + e(t/4-j/4) z) \\\\\n  &= w_0 + e(j/4)(e(-1/8) w/2 + e(-j/2) (e(1/8) w/2 + e(t/4+j/4) z)) \\\\\nf(t) &= T(w_0) R(j/4) T(e(-1/8) w/2) R(-j/2) T(e(1/8) w/2) R(t/4+j/4)\n\\end{align*}\n\n\\end{document}  ", "meta": {"hexsha": "70ea762f11562d33f1bbd849a21ea8e3fcafb4c7", "size": 3255, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "web/client/tex/transforms.tex", "max_stars_repo_name": "girving/pentago-learn", "max_stars_repo_head_hexsha": "b28399aed8a77152786e97e62ee3e3a8c384af2a", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2015-01-25T14:37:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T16:29:18.000Z", "max_issues_repo_path": "web/client/tex/transforms.tex", "max_issues_repo_name": "girving/pentago-learn", "max_issues_repo_head_hexsha": "b28399aed8a77152786e97e62ee3e3a8c384af2a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-07-12T22:35:53.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-12T22:35:53.000Z", "max_forks_repo_path": "web/client/tex/transforms.tex", "max_forks_repo_name": "girving/pentago-learn", "max_forks_repo_head_hexsha": "b28399aed8a77152786e97e62ee3e3a8c384af2a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2015-05-06T11:04:40.000Z", "max_forks_repo_forks_event_max_datetime": "2017-04-18T00:22:11.000Z", "avg_line_length": 38.2941176471, "max_line_length": 115, "alphanum_fraction": 0.6086021505, "num_tokens": 1371, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245787544825, "lm_q2_score": 0.6992544335934767, "lm_q1q2_score": 0.5827059063164882}}
{"text": "\\renewcommand{\\inl}{\\textsf{inl}\\;}\n\\renewcommand{\\inr}{\\textsf{inr}\\;}\n\\newcommand{\\caseof}[3]{\\textsf{case}\\,#1\\,\\textsf{of inl}\\,x \\Rightarrow #2 \\mid \\textsf{inr}\\,y \\Rightarrow #3}\n\n\n\\section{Extension: Disjoint sums}\n\nWe will now extend our simply-typed lambda-calculus to disjoint sums.\n\n\\[\n\\begin{array}{llcl}\n\\mbox{Types}  & A & \\bnfas & \\ldots \\mid A + B\\\\\n\\mbox{Terms}  & M & \\bnfas & \\ldots \\mid \\inl M \\mid \\inr M \\mid \\caseof{M}{N_1}{N_2}\n\\end{array}\n\\]\n\nLet us first extend our definition of $\\SN$ and $\\SNe$ (see Fig.~\\ref{fig:sncase}).\n\n\\begin{figure}\n \\centering\n\n\\[\n\\begin{array}{c}\n\\multicolumn{1}{l}{\\mbox{Neutral terms}} \\\\% [0.5em]\n\\infer{\\Gamma \\vdash \\caseof{M}{N_1}{N_2} : C \\in \\SNe}{\\Gamma \\vdash M : A + B \\in \\SNe & \\Gamma, x{:}A \\vdash N_1 : C \\in \\SN & \\Gamma, y{:}B \\vdash N_2 : C\\in \\SN}\n\\\\% [0.5em]\n%\n\\multicolumn{1}{l}{\\mbox{Normal terms}} \\\\% [0.5em]\n\\infer{\\Gamma \\vdash \\inl M : A + B \\in \\SN}{\\Gamma \\vdash M : A \\in \\SN} \\qquad \\infer{\\Gamma \\vdash \\inr M : A + B \\in \\SN}{\\Gamma \\vdash M : B \\in \\SN}\n\\\\% [0.5em]\n\\multicolumn{1}{l}{\\mbox{Strong head reduction}} \\\\[1em]\n\\infer{\\Gamma \\vdash \\caseof{(\\inl M)}{N_1}{N_2} \\redSN [M/x]N_1 : C}{\\Gamma \\vdash M : A \\in \\SN & \\Gamma, y{:}B \\vdash N_2 : C \\in \\SN}\n\\\\[0.75em]\n\\infer{\\Gamma \\vdash \\caseof{(\\inr M)}{N_1}{N_2} \\redSN [M/x]N_2 : C}{\\Gamma \\vdash M : B \\in \\SN & \\Gamma, x{:}A \\vdash N_1 : C \\in \\SN}\n\\\\[0.75em]\n\\infer{\\Gamma \\vdash \\caseof{M}{N_1}{N_2} \\redSN \\caseof{M'}{N_1}{N_2} : C}{\\Gamma \\vdash M \\redSN M' : A + B}\n\\end{array}\n\\]\n\n   \\caption{Inductive definition of strongly normalizing terms - extended for case-expressions and injections}\n   \\label{fig:sncase}\n \\end{figure}\n\n\nNext, we extend our definition of semantic type to disjoint sums. A first attempt might be to define $\\denot{A + B}$ as follows:\n\n\\paragraph{Attempt 1}\n\\begin{alignat*}{3}\n\\inden{\\Gamma}{M}{A + B} ~\\text{iff} &&M = \\inl M' &~\\text{and}~ \\inden{\\Gamma}{M'}{A}, \\\\\n& ~~\\emph{or}~ &M = \\inr M' &~\\text{and}~ \\inden{\\Gamma}{M'}{B}.\n% \\den{A+B} := \\{\\inl M \\mid M \\in \\den{A} \\} \\cup \\{\\inr M \\mid M \\in \\den{B} \\}\n\\end{alignat*}\n\nHowever, this definition would not satisfy the key property $\\CR3$ and hence would fail to be a reducibility candidate. For example,  while we have $\\inden{y : A}{\\inl y}{A + B}$, it is not the case that $\\inden{y : A}{(\\lambda x. \\inl x)\\;y}{A + B}$, despite the fact that $(\\lambda x. \\inl x)\\;y \\redSN \\inl y$.\n\\\\[1em]\nOur definition of $\\denot{A + B}$ is not closed under the reduction relation $\\redSN$. Let $\\A$ denote the denotation of $\\denot{A}$. 
We then define the closure of $\\denot{A} = \\A$, written as  $\\clos\\A$, inductively as follows:\n\n%\\[\n%\\begin{array}{c}\n%\\ianc{M \\in \\A}{M\\in \\clos\\A}{}  \\qquad\n%\\ianc{M \\in \\SNe}{M \\in \\clos\\A}{} \\qquad\n%\\ibnc{M \\in \\clos\\A}{N \\redSN M}{N \\in \\clos\\A}{}\n%\\end{array}\n%\\]\n\n\\[\n\\begin{array}{c}\n\\ianc{\\Gamma \\vdash M \\in \\A}{\\Gamma \\vdash M \\in \\clos\\A}{} \\qquad\n\\ianc{\\Gamma \\vdash M : A \\in \\SNe}{\\Gamma \\vdash M \\in \\clos\\A}{} \\qquad\n\\ibnc{\\Gamma \\vdash M \\in \\clos\\A}{\\Gamma \\vdash N \\redSN M : A}{\\Gamma \\vdash N \\in \\clos\\A}{}\n\\end{array}\n\\]\n\nand we define\n\n\\[\n\\begin{array}{lcl}\n\\inden{\\Gamma}{M}{A + B} & \\text{iff} & \\Gamma \\vdash M \\in \\clos{ \\{\\inl M' \\mid \\inden{\\Gamma}{M'}{A} \\} \\cup \\{\\inr M' \\mid \\inden{\\Gamma}{M'}{B} \\}  }\n\\end{array}\n\\]\n\n\\subsection{Semantic type $\\denot{A + B}$ is a reducibility candidate}\nWe first extend our previous theorem which states that all denotations of types must be in $\\CR$.\n\n\\begin{theorem}\nFor all types $C$, $\\denot{C}  \\in \\CR$.\n\\end{theorem}\n\\begin{proof}\nBy induction on the structure of $C$. We highlight the case for disjoint sums.\n\n\\paragraph{Case $C = A + B$.}\n\n  \\begin{enumerate}\n  \\item \\textit{Show} $\\CR1$. Assume that $\\inden{\\Gamma}{M}{A + B}$. We consider different subcases and prove by an induction on the closure defining $\\denot{A + B}$ that $\\Gamma \\vdash M : A + B \\in \\SN$.\n\n\\paragraph{Subcase:} $\\Gamma \\vdash M \\in \\{\\inl N \\mid \\inden{\\Gamma}{N}{A}\\}$. Therefore $M = \\inl N$. Since $\\inden{\\Gamma}{N}{A}$, by i.h. ($\\CR1$) we have $\\Gamma \\vdash N : A \\in \\SN$. By definition of $\\SN$, we have that $\\Gamma \\vdash \\inl N : A + B \\in \\SN$.\n\n\\paragraph{Subcase:} $\\Gamma \\vdash M \\in \\{\\inr N \\mid \\inden{\\Gamma}{N}{B}\\}$. Therefore $M = \\inr N$. Since $\\inden{\\Gamma}{N}{B}$, by i.h. ($\\CR1$) we have $\\Gamma \\vdash N : B \\in \\SN$. By definition of $\\SN$, we have that $\\Gamma \\vdash \\inr N : A + B \\in \\SN$.\n\n\\paragraph{Subcase:} $\\Gamma \\vdash M : A + B \\in \\SNe$. By definition of $\\SN$, we conclude that $\\Gamma \\vdash M : A + B \\in \\SN$.\n\n\\paragraph{Subcase:} $\\Gamma \\vdash M \\redSN M' : A + B$ and $\\inden{\\Gamma}{M'}{A+B}$.\n\\\\[0.5em]\n$\\Gamma \\vdash M \\redSN M' : A + B$ and $\\inden{\\Gamma}{M'}{A + B}$ \\hfill by assumption\\\\\n$\\Gamma \\vdash M' : A + B \\in \\SN$ \\hfill by inner i.h. \\\\\n$\\Gamma \\vdash M : A + B \\in \\SN$ \\hfill by reduction $\\redSN$\n\n \\item \\textit{Show} $\\CR2$. If $\\Gamma \\vdash M : A + B \\in \\SNe$, then $\\inden{\\Gamma}{M}{A + B}$. \\\\\nBy definition of the closure, if $\\Gamma \\vdash M : A + B \\in \\SNe$, we have $\\inden{\\Gamma}{M}{A + B}$.\n\n\n  \\item \\textit{Show} $\\CR3$. 
If $\\Gamma \\vdash M \\redSN M' : A + B$ and $\\inden{\\Gamma}{M'}{A+B}$\n    then $\\inden{\\Gamma}{M}{A+B}$.\\\\\nBy definition of the closure, if $\\Gamma \\vdash M \\redSN M' : A + B$ and $\\inden{\\Gamma}{M'}{A+B}$, we have\n$\\inden{\\Gamma}{M}{A+B}$.\n\\end{enumerate}\n\n\\end{proof}\n\n\n \\subsection{Revisiting the fundamental lemma}\n\n We can now revisit the fundamental lemma.\n\n \\begin{lemma}[Fundamental lemma]\n If $\\Gamma \\vdash M : C$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$\n then $\\inden{\\Gamma'}{[\\sigma]M}{C}$.\n \\end{lemma}\n \\begin{proof}\n By induction on $\\Gamma \\vdash M : C$.\n\n \\paragraph{Case} $\\D = \\ianc{\\Gamma \\vdash M \\hastype A}{\\Gamma \\vdash \\inl M \\hastype A + B}{}$\n \\\\[1em]\n $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n $\\inden{\\Gamma'}{[\\sigma]M}{A}$ \\hfill by i.h. \\\\\n $\\inden{\\Gamma'}{\\inl [\\sigma]M}{A + B}$ \\hfill by definition of $\\denot{A + B}$ \\\\\n $\\inden{\\Gamma'}{[\\sigma] \\inl M}{A + B}$ \\hfill by subst. definition\n\n \\paragraph{Case} $\\D = \\ianc{\\Gamma \\vdash M \\hastype B}{\\Gamma \\vdash \\inr M \\hastype A + B}{}$\n \\\\[1em]\n similar to the case above.\n\n \\paragraph{Case} $\\D = \\icnc{\\Gamma \\vdash M : A + B}{\\Gamma, x{:}A \\vdash M_1 :  C}{\\Gamma, y{:}B \\vdash M_2 : C}\n {\\Gamma \\vdash \\caseof{M}{M_1}{M_2} : C}{}$\n \\\\[1em]\n $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n $\\inden{\\Gamma'}{[\\sigma]M}{A + B}$ \\hfill by i.h.\n \\\\[1em]\n We consider different subcases and prove by induction on the closure defining $\\denot{A + B}$, that $\\inden{\\Gamma'}{[\\sigma](\\caseof{M}{M_1}{M_2})}{C}$.\n\n\\paragraph{Subcase $\\Gamma' \\vdash [\\sigma]M \\in \\{\\inl N \\mid \\inden{\\Gamma'}{N}{A}\\}$}$\\;$\\\\[1em]\n$[\\sigma]M = \\inl N$ for some $\\inden{\\Gamma'}{N}{A}$ \\hfill by assumption \\\\\n$\\Gamma' \\vdash N : A \\in \\SN$ \\hfill by $\\CR1$ \\\\\n$\\Gamma' \\vdash \\inl N : A + B \\in \\SN$ \\hfill by definition \\\\\n% $x \\in \\den{A}$,\n$\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n$\\inden{\\Gamma'}{[\\sigma, N/x]}{\\Gamma, x:A}$ \\hfill by definition \\\\\n$\\inden{\\Gamma'}{[\\sigma, N/x]M_1}{C}$ \\hfill by outer i.h \\\\\n$\\inden{\\Gamma', y{:}B}{y}{B}$  \\hfill by definition \\\\\n$\\inden{\\Gamma', y{:}B}{[\\sigma, y/y]}{\\Gamma, y:B}$ \\hfill by definition \\\\\n$\\inden{\\Gamma', y{:}B}{[\\sigma, y/y]M_2}{C}$ \\hfill by outer i.h. \\\\\n$\\Gamma', y{:}B \\vdash [\\sigma, y/y]M_2 : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n$\\Gamma' \\vdash \\caseof{(\\inl N)}{[\\sigma,x/x]M_1}{[\\sigma, y/y]M_2} \\redSN [\\sigma, N/x]M_1 : C$ \\hfill by $\\redSN$\\\\\n$\\caseof{(\\inl N)}{[\\sigma,x/x]M_1}{[\\sigma, y/y]M_2}$ \\\\\n$\\qquad = [\\sigma](\\caseof{M}{M_1}{M_2}) $ \\hfill by subst. 
definition and $[\\sigma]M = \\inl N$\\\\\n$\\inden{\\Gamma'}{[\\sigma](\\caseof{M}{M_1}{M_2})}{C}$ \\hfill by $\\CR3$\n\n\\paragraph{Subcase $\\Gamma' \\vdash [\\sigma]M \\in \\{\\inr N \\mid \\inden{\\Gamma'}{N}{B}\\}$}$\\;$\\\\[1em]\nsimilar to the case above.\n\n\\paragraph{Subcase: $\\Gamma' \\vdash [\\sigma]M : A + B \\in \\SNe$}.$\\;$\\\\\n$\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n$\\inden{\\Gamma', x{:}A}{x}{A}$ \\hfill by definition \\\\\n$\\inden{\\Gamma', y{:}B}{y}{B}$  \\hfill by definition \\\\\n$\\inden{\\Gamma', x{:}A}{[\\sigma, x/x]}{\\Gamma, x:A}$ \\hfill by definition \\\\\n$\\inden{\\Gamma', y{:}B}{[\\sigma, y/y]}{\\Gamma, y:B}$ \\hfill by definition \\\\\n$\\inden{\\Gamma', x{:}A}{[\\sigma, x/x]M_1}{C}$ \\hfill by outer i.h. \\\\\n$\\inden{\\Gamma', y{:}B}{[\\sigma, y/y]M_2}{C}$ \\hfill by outer i.h. \\\\\n$\\Gamma', x{:}A \\vdash [\\sigma, x/x]M_1 : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n$\\Gamma', y{:}B \\vdash [\\sigma, y/y]M_2 : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n$\\Gamma' \\vdash \\caseof{[\\sigma]M}{[\\sigma, x/x]M_1}{[\\sigma, y/y]M_2} : C \\in \\SNe$ \\hfill by $\\SNe$ \\\\\n$\\Gamma' \\vdash [\\sigma](\\caseof{M}{M_1}{M_2}) : C \\in \\SNe$ \\hfill by substitution def. \\\\\n$\\inden{\\Gamma'}{[\\sigma](\\caseof{M}{M_1}{M_2})}{C}$ \\hfill by $\\CR2$\n\n\n\\paragraph{Subcase: $\\Gamma' \\vdash [\\sigma]M \\redSN M' : A + B$ and $\\inden{\\Gamma'}{M'}{A+B}$}$\\;$\\\\\n$\\Gamma' \\vdash [\\sigma]M \\redSN M' : A + B$ and $\\inden{\\Gamma'}{M'}{A+B}$ \\hfill by assumption \\\\\n$\\inden{\\Gamma'}{\\caseof{M'}{[\\sigma,x/x]M_1}{[\\sigma,y/y]M_2}}{C}$ \\hfill by inner i.h. \\\\\n$\\Gamma' \\vdash \\caseof{[\\sigma]M}{[\\sigma,x/x]M_1}{[\\sigma,y/y]M_2} $ \\\\\n$\\qquad \\qquad\\redSN\n\\caseof{M'}{[\\sigma,x/x]M_1}{[\\sigma,y/y]M_2} : C$ \\hfill by $\\redSN$\\\\\n$\\inden{\\Gamma'}{[\\sigma](\\caseof{M}{M_1}{M_2})}{C}$ \\hfill by $\\CR3$\n\n\\end{proof}\n\n\n \\section{Extension: Recursion}\n \\newcommand{\\zero}{\\mathsf{z}}\n % \\renewcommand{\\suc}{\\textsf{s}\\;}\n % \\newcommand{\\recnat}[3]{\\textsf{rec} (#1,\\,#2,\\; #3)}\n \\newcommand{\\recnat}[3]{\\recmatch{}{#1}{#2}{#3}}\n % \\renewcommand{\\nat}{\\textsf{Nat}}\n We now extend our simply-typed lambda-calculus to include natural numbers\n defined by $\\zero$ and $\\suc t$ as well as a primitive recursion operator\n written as $\\recnat{M}{M_z}{M_s}$ where $M$ is the argument we recurse over,\n $M_z$ describes the branch taken if $M = \\zero$ and $M_s$ describes the branch\n taken when $M = \\suc N$ where $n$ will be instantiated with $N$ and $f\\;n$\n describes the recursive call.\n\n\n\\[\n\\begin{array}{llcl}\n\\mbox{Types}  & A & \\bnfas & \\ldots \\mid \\nat \\\\\n\\mbox{Terms}  & t & \\bnfas & \\ldots \\mid \\zero \\mid \\suc t \\mid \\recnat{t}{t_z}{t_s}\n\\end{array}\n\\]\n\nTo clarify, we give the typing rules for the additional constructs.\n\n\\[\n\\begin{array}{c}\n\\infer{\\Gamma \\vdash \\zero : \\nat}{} \\qquad\n\\infer{\\Gamma \\vdash \\suc M : \\nat}{\\Gamma \\vdash M : \\nat}\n\\\\[1em]\n\\infer{\\Gamma \\vdash \\recnat M {M_z} {M_s} : C}{\n\\Gamma \\vdash M : \\nat & \\Gamma \\vdash M_z : C &\n\\Gamma, n:\\nat,\\;f\\,n:C \\vdash M_s : C}\n\\end{array}\n\\]\n\n\nWe again extend our definition of $\\SN$ and $\\SNe$.\n\n\\[\n\\begin{array}{c}\n\\multicolumn{1}{l}{\\mbox{Neutral terms}} \\\\[1em]\n\\infer{\\Gamma \\vdash \\recnat{M}{M_z}{M_s} : C \\in \\SNe}{\\Gamma \\vdash M : \\nat \\in \\SNe & \\Gamma \\vdash M_z : C \\in \\SN & \\Gamma, n \\hastype \\nat, f~n 
\\hastype C \\vdash M_s : C \\in \\SN}\\\\[1em]\n\\multicolumn{1}{l}{\\mbox{Normal terms}} \\\\[1em]\n\\infer{\\Gamma \\vdash \\zero : \\nat \\in \\SN}{} \\qquad \\infer{\\Gamma \\vdash \\suc M : \\nat \\in \\SN}{\\Gamma \\vdash M : \\nat \\in \\SN}\\\\[1em]\n\\multicolumn{1}{l}{\\mbox{Strong head reduction}} \\\\[1em]\n\\infer{\\Gamma \\vdash \\recnat{\\zero}{M_z}{M_s} \\redSN M_z : C}{\\Gamma, n \\hastype \\nat, f~n \\hastype C \\vdash M_s : C \\in \\SN} \\\\[1em]\n\\infer{\\Gamma \\vdash \\recnat{(\\suc N)}{M_z}{M_s} \\redSN [N/n,\\, f_r/f\\,n]M_s : C}{\n  \\Gamma \\vdash N : \\nat \\in \\SN & \\Gamma \\vdash M_z : C \\in \\SN & \\Gamma, n \\hastype \\nat, f~n \\hastype C \\vdash M_s : C \\in \\SN & f_r = \\recnat{N}{M_z}{M_s}} \\\\[1em]\n\\infer{\\Gamma \\vdash \\recnat{M}{M_z}{M_s} \\redSN \\recnat{M'}{M_z}{M_s} : C}{\\Gamma \\vdash M \\redSN M' : \\nat}\n\\end{array}\n\\]\n\n \\section{Extension: Natural numbers}\nHere we add natural numbers to our language and show how the language remains normalizing.\n\\subsection{Semantic type $\\denot{\\nat}$} We define the denotation of $\\nat$ as\n follows:\n\n% \\[\n% \\den{\\nat} := \\clos{\\{\\zero\\}\\cup \\{\\suc M \\mid M \\in \\den{\\nat}\\} }\n% \\]\n\n\t\\[\n\t\\begin{array}{lcl}\n\t\\inden{\\Gamma}{M}{\\nat} & \\text{iff} & \\Gamma \\vdash M \\in \\clos{ \\{\\zero\\}\\cup \\{\\suc M' \\mid \\inden{\\Gamma}{M'}{\\nat}\\} }\n\t\\end{array}\n\t\\]\n\n\n\\subsection{Semantic type $\\denot{\\nat}$ is a reducibility candidate}\nWe again extend our previous theorem which states that all denotations of types must be in $\\CR$.\n\n\\begin{theorem}\nFor all types $C$, $\\denot{C}  \\in \\CR$.\n\\end{theorem}\n\\begin{proof}\nBy induction on the structure of $C$. We highlight the case for $\\nat$.\n\n\\paragraph{Case $C = \\nat$.}\n\n\\begin{enumerate}\n\\item \\textit{Show} $\\CR1$. Assume $\\inden{\\Gamma}{M}{\\nat}$. We consider different subcases\n  and prove by induction on the closure defining $\\denot{\\nat}$ that $\\Gamma \\vdash M : \\nat \\in \\SN$.\n%\n\n\\paragraph{Subcase:} $M = \\zero$. By definition of $\\SN$, $\\Gamma \\vdash \\zero : \\nat \\in \\SN$.\n\n\\paragraph{Subcase:} $M = \\suc N$ where $\\inden{\\Gamma}{N}{\\nat}$.  By i.h. ($\\CR1$),\n$\\Gamma \\vdash N : \\nat \\in \\SN$. By definition of $\\SN$, $\\Gamma \\vdash \\suc N : \\nat \\in \\SN$.\n\n \\paragraph{Subcase:} $\\Gamma \\vdash M  : \\nat \\in \\SNe$. By definition of $\\SN$, $\\Gamma \\vdash M : \\nat \\in \\SN$.\n\n \\paragraph{Subcase:} $\\Gamma \\vdash M \\redSN M' : \\nat$ and $\\inden{\\Gamma}{M'}{\\nat}$.\n \\\\\n $\\Gamma \\vdash M \\redSN M' : \\nat$ and $\\inden{\\Gamma}{M'}{\\nat}$ \\hfill by assumption \\\\\n $\\Gamma \\vdash M' : \\nat \\in \\SN$ \\hfill by inner i.h. \\\\\n $\\Gamma \\vdash M  : \\nat \\in \\SN$ \\hfill by reduction $\\redSN$\n\n\n \\item \\textit{Show} $\\CR2$. If $\\Gamma \\vdash M : \\nat \\in \\SNe$, then $\\inden{\\Gamma}{M}{\\nat}$. \\\\\n\t By definition of the closure, if $\\Gamma \\vdash M : \\nat \\in \\SNe$, then $\\inden{\\Gamma}{M}{\\nat}$.\n\n \\item \\textit{Show} $\\CR3$. If $\\Gamma \\vdash M \\redSN M' : \\nat$ and $\\inden{\\Gamma}{M'}{\\nat}$, then $\\inden{\\Gamma}{M}{\\nat}$. 
\\\n   By definition of the closure, if $\\Gamma \\vdash M \\redSN M' : \\nat$ and $\\inden{\\Gamma}{M'}{\\nat}$, we have $\\inden{\\Gamma}{M}{\\nat}$.\n \\end{enumerate}\n\\end{proof}\n\n \\subsection{Revisiting the fundamental lemma}\n\n We can now revisit the fundamental lemma.\n \\begin{lemma}[Fundamental lemma]\n If $\\Gamma \\vdash M : C$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$\n then $\\inden{\\Gamma'}{[\\sigma]M}{C}$.\n \\end{lemma}\n \\begin{proof}\n By induction on $\\Gamma \\vdash M : C$.\n\n \\paragraph{Case} $\\D = \\ianc{}{\\Gamma \\vdash \\zero : \\nat}{}$\n \\\\\n $[\\sigma]\\zero = \\zero$ \\hfill by subst. def \\\\\n $\\inden{\\Gamma'}{\\zero}{\\nat}$ \\hfill by definition.\n\n\n \\paragraph{Case} $\\D = \\ianc{\\Gamma \\vdash M : \\nat}{\\Gamma \\vdash \\suc M :  \\nat}{}$\n \\\\\n $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n $\\inden{\\Gamma'}{[\\sigma]M}{\\nat}$ \\hfill by i.h. \\\\\n $\\inden{\\Gamma'}{\\suc [\\sigma]M}{\\nat}$ \\hfill by definition \\\\\n $\\inden{\\Gamma'}{[\\sigma] \\suc M}{\\nat}$ \\hfill by subst. def\n\n\n \\paragraph{Case} $\\D = \\icnc\n {\\Gamma \\vdash M : \\nat}\n {\\Gamma \\vdash M_z : C}\n {\\Gamma, n:\\nat,\\,f~n:C \\vdash M_s : C}\n {\\Gamma \\vdash \\recnat M {M_z} {M_s} : C}{}$\n \\\\\n $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n $\\inden{\\Gamma'}{[\\sigma]M}{\\nat}$ \\hfill by i.h. \\\\[1em]\n We distinguish cases based on $\\inden{\\Gamma'}{[\\sigma]M}{\\nat}$ and prove by induction on $\\inden{\\Gamma'}{[\\sigma]M}{\\nat}$ that $\\inden{\\Gamma'}{[\\sigma](\\recnat M {M_z} {M_s})}{C}$.\n\n \\paragraph{Subcase } $[\\sigma]M = \\zero$.\n \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{n}{\\nat}$ \\hfill by definition of $\\SNe$, $\\denot{\\nat}$ \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{f~n}{C}$ \\hfill by definition of $\\SNe$, $\\denot{\\nat}$ \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]}{\\Gamma, n:\\nat, f~n:C}$ \\hfill by definition \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]M_s}{C}$ \\hfill by outer i.h. \\\\\n $\\Gamma', n \\hastype \\nat, f~n \\hastype C \\vdash [\\sigma, n/n, f~n/f~n]M_s : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n $\\inden{\\Gamma'}{[\\sigma]M_z}{C}$ \\hfill by outer i.h. \\\\\n $\\Gamma' \\vdash \\recnat \\zero {[\\sigma]M_z} {[\\sigma, n/n,f~n/f~n]M_s} \\redSN [\\sigma]M_z : C$\n \\hfill by $\\redSN$ \\\\\n $\\recnat \\zero {[\\sigma]M_z} {[\\sigma, n/n,f~n/f~n]M_s}$ \\\\\n $\\qquad = [\\sigma](\\recnat{M}\n {M_z} {M_s})$ \\hfill by subst. def. and $[\\sigma]M = \\zero$\\\\\n $\\inden{\\Gamma'}{[\\sigma](\\recnat{M} {M_z}{M_s})}{C}$ \\hfill by $\\CR3$.\n\n \\paragraph{Subcase } $[\\sigma]M = \\suc M'$ where $\\inden{\\Gamma'}{M'}{\\nat}$.\n \\\\\n $\\inden{\\Gamma'}{M'}{\\nat}$ \\hfill by assumption \\\\\n $\\Gamma' \\vdash M' : \\nat \\in \\SN$ \\hfill by $\\CR1$ \\\\\n $\\inden{\\Gamma'}{[\\sigma]M_z}{C}$ \\hfill by outer i.h. \\\\\n $\\Gamma' \\vdash [\\sigma]M_z : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]}{\\Gamma, n:\\nat, f~n:C}$ \\hfill by definition \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]M_s}{C}$ \\hfill by outer i.h. 
\\\n $\\Gamma', n \\hastype \\nat, f~n \\hastype C \\vdash [\\sigma, n/n, f~n/f~n]M_s : C \\in \\SN$ \\hfill by $\\CR1$ \\\\\n $\\inden{\\Gamma'}{\\recnat{M'}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s}}{C}$ \\hfill by inner i.h. \\\\\n $\\inden{\\Gamma'}{[\\sigma, M'/n, (\\recnat{M'}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s})/f~n]}{\\Gamma, n:\\nat, f~n:C}$ \\\\\n $~$ \\hfill by definition \\\\\n $\\inden{\\Gamma'}{[\\sigma, M'/n, (\\recnat{M'}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s})/f~n]M_s}{C}$ \\\\\n $~$ \\hfill by outer i.h. \\\\\n $\\Gamma' \\vdash \\recnat{(\\suc M')}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s} $ \\\\\n $\\qquad \\qquad \\redSN\n [\\sigma, M'/n, (\\recnat{M'}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s})/f~n]M_s : C$ \\\\\n $~$ \\hfill by $\\redSN$ \\\\\n $\\inden{\\Gamma'}{[\\sigma](\\recnat{M}{M_z}{M_s})}{C}$ \\hfill by $\\CR3$.\n\n \\paragraph{Subcase } $\\Gamma' \\vdash [\\sigma]M : \\nat \\in \\SNe$. \\\\\n $\\inden{\\Gamma'}{[\\sigma]M_z}{C}$ \\hfill by outer i.h.\\\\\n $\\Gamma' \\vdash [\\sigma]M_z : C \\in \\SN$ \\hfill by $\\CR1$\\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]}{\\Gamma, n:\\nat, f~n:C}$ \\hfill by definition \\\\\n $\\inden{\\Gamma', n \\hastype \\nat, f~n \\hastype C}{[\\sigma, n/n, f~n/f~n]M_s}{C}$ \\hfill by outer i.h. \\\\\n $\\Gamma', n \\hastype \\nat, f~n \\hastype C \\vdash [\\sigma, n/n, f~n/f~n]M_s : C\\in \\SN$ \\hfill by $\\CR1$ \\\\\n $\\Gamma' \\vdash \\recnat{[\\sigma]M}{[\\sigma]M_z}{[\\sigma, n/n, f~n/f~n]M_s} : C \\in \\SNe$ \\hfill by $\\SNe$\\\\\n $\\Gamma' \\vdash [\\sigma](\\recnat{M}{M_z}{M_s}) : C \\in \\SNe$ \\hfill by subst. def. \\\\\n $\\inden{\\Gamma'}{[\\sigma](\\recnat{M}{M_z}{M_s})}{C}$ \\hfill by $\\CR2$.\n\n\n \\paragraph{Subcase } $\\Gamma' \\vdash [\\sigma]M \\redSN M' : \\nat$ and $\\inden{\\Gamma'}{M'}{\\nat}$.\\\\\n $\\Gamma' \\vdash [\\sigma]M \\redSN M' : \\nat$ and $\\inden{\\Gamma'}{M'}{\\nat}$ \\hfill by assumption.\\\\\n $\\inden{\\Gamma'}{\\recnat{M'}{[\\sigma]M_z}{[\\sigma,n/n, f~n/f~n]M_s}}{C}$ \\hfill by inner i.h. \\\\\n $\\Gamma' \\vdash \\recnat{[\\sigma]M}{[\\sigma]M_z}{[\\sigma,n/n, f~n/f~n]M_s}$ \\\\\n $\\qquad \\qquad \\redSN \\recnat{M'}{[\\sigma]M_z}{[\\sigma,n/n, f~n/f~n]M_s} : C$ \\hfill by $\\redSN$\\\\\n $\\inden{\\Gamma'}{[\\sigma](\\recnat{M}{M_z}{M_s})}{C}$ \\hfill by $\\CR3$.\n\n\n \\end{proof}\n\n\n%  \\section{Exercises}\n%  \\begin{problem}\n% % \\begin{exercise}\n%  The Def. \\ref{def:norm}, defines strong normalization informally. We can replace this definition with a more formal definition.\n\n%  \\begin{definition}[Inductive definition of strongly normalizing terms] A term $M$ is strongly normalizing, if all its reducts are strongly normalizing, i.e.\n%  $M \\csn$ if for all $M'$, if $M \\red M'$ then $M' \\in \\csn$.\n%  \\end{definition}\n\n%  This definition gives rise to an induction principle to reason about strongly normalizing terms. To prove $\\forall M \\csn. P(M)$, we can assume the property holds for $P(M')$ for any $M'$ s.t. $M \\red M'$. Using this induction principle, we can now prove  constructively that any subterm of a strongly normalizing term is itself normalizing.\n\n%  Before we however, precisely define our notion of subterm to simplify our reasoning. We write $M = C[N]$ for $N$ is a subterm of $M$; $C$ denotes an evaluation context, i.e. the term $M$ where we identify $N$ as a subterm at a given position. 
Evaluation contexts can  be defined as follows.\n\n%  \\[\n%  \\begin{array}{lcl}\n%  \\mbox{Evaluation Context}\\;C & \\bnfas & [ ] \\mid \\lambda x. C \\mid C\\;N \\mid M\\;C\n%  \\end{array}\n%  \\]\n\n%  As an alternative to the congruence rules, we can redefine evaluation using evaluation contexts:\n\n%  \\[\n%  \\ianc{M \\red N}{C[M] \\red C[N]}{}\n%  \\]\n\n% \\begin{theorem}\n% Any subterm of a strongly normalizing term is strongly normalizing itself, i.e. if $M \\csn$ and $N$ is a subterm of $M$, i.e. $M = C[N]$ then $N \\in \\csn$.\n% \\end{theorem}\n% \\end{exercise}\n% \\end{problem}\n\n\n\n% \\begin{problem}\n%   \\begin{exercise}\n% Extend the semantic strong normalization proof to treat $A \\times B$.\n% \\begin{itemize}\n% \\item Extend our definition of normal and neutral terms (i.e. $\\SN$ and $\\SNe$). You might also need to extend our definition of strong head reduction (i.e. $\\redSN$).\n% \\item Define an appropriate denotation of $\\den{A \\times B}$.\n% % \\den{A * B} := {M | \\fst(M) \\in \\den{A} or \\snd{M} \\in \\den{B}}\n% \\item Show that $\\den{A \\times B}$ is a reducibility candidate.\n% \\item Show the additional cases in the fundamental lemma.\n% \\end{itemize}\n%   \\end{exercise}\n% \\end{problem}", "meta": {"hexsha": "7e93cdc61fc52ae5444f7a1a4444c4b003a844e3", "size": 21066, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sn-proof/extensions.tex", "max_stars_repo_name": "andreasabel/strong-normalization", "max_stars_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2017-05-22T14:33:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-05T12:12:03.000Z", "max_issues_repo_path": "sn-proof/extensions.tex", "max_issues_repo_name": "andreasabel/strong-normalization", "max_issues_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-02-14T16:42:36.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-20T14:54:18.000Z", "max_forks_repo_path": "sn-proof/extensions.tex", "max_forks_repo_name": "andreasabel/strong-normalization", "max_forks_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-11-10T16:44:52.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-23T18:22:17.000Z", "avg_line_length": 48.2059496568, "max_line_length": 344, "alphanum_fraction": 0.6014905535, "num_tokens": 8253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587586, "lm_q2_score": 0.8333245932423308, "lm_q1q2_score": 0.5827059060016915}}
{"text": "\\chapter{Algorithms for Parametric Runs}\n\\label{sec:algParRun}\n\nThe algorithms for \nparametric runs described here can be used to determine how sensitive a function is \nwith respect to changes in the independent variables.\nThey can also be used to do a parametric\nsweep of a function over a set of parameters.\nThe algorithm described in Section~\\ref{sec:algParametric} varies one parameter at a time while holding all other parameters fixed at the value \nspecified by the keyword \\texttt{Ini}. The algorithm described in Section~\\ref{sec:algParRunGen}, in contrast, constructs a mesh in the space of the independent parameters, and evaluates the objective function at each mesh point.\n\n\\section{Parametric Runs by Single Variation}\n\\lab{sec:algParametric}\n\\subsection{Algorithm Description}\n\nThe \\texttt{Parametric} algorithm allows doing parametric runs in which\none parameter at a time is varied and\nall other parameters are fixed at their initial values\n(specified by the keyword \\texttt{Ini}).\n\nEach parameter must have a lower and an upper bound.\nFor the logarithmic scale, the lower and upper bounds must be greater than zero.\nTo allow negative increments, the lower bound can be larger than the upper bound.\nThe absolute value of the keyword \\texttt{Step} defines into how many intervals \neach coordinate axis will be divided.\nIf $\\text{\\texttt{Step}} < 0$, then the spacing is logarithmic; otherwise it is linear. Set $\\text{\\texttt{Step}} = 0$ to keep the parameter always fixed at the value\nspecified by \\texttt{Ini}.\n\nThis algorithm can also be used with discrete parameters. This allows, for example, using a string to specify a window construction.\n\\\\\n\n\nThe spacing is computed as follows:\nFor simplicity, we explain the computation for a single parameter.\nLet $l \\triangleq \\text{\\texttt{Min}}$, $u \\triangleq \\text{\\texttt{Max}}$ and $m \\triangleq |\\text{\\texttt{Step}}|$,\nwhere \\texttt{Min}, \\texttt{Max} and \\texttt{Step} are specified in the command file.\\\\\n\n\\noindent\nIf $\\text{\\texttt{Step}} < 0$, we compute, for $i \\in \\{0, \\ldots , m \\}$,\n\\begin{subequations}\n  \\begin{eqnarray}\n    p & = & \\frac{1}{m} \\, \\log \\frac{u}{l}, \\\\\n    x_i & = & l \\, 10^{p \\, i}.\n  \\end{eqnarray}\nIf $\\text{\\texttt{Step}} > 0$, we compute, for $i \\in \\{0, \\ldots , m \\}$,\n  \\begin{equation}\n    x_i = l + \\frac{i}{m} \\, (u-l).\n  \\label{eq:AlgParLinSpa}\n  \\end{equation}\n  \\label{subeq:AlgParSpa}\n\\end{subequations}\n\n\\begin{example}[Parametric run with logarithmic and linear spacing]\n{\\em\nSuppose the parameter specification is of the form\n\\vspace{-0.5\\baselineskip}\n\\begin{lstlisting}\nVary{\n   Parameter{ Name = x1; Ini = 5; Step = -2; Min = 10; Max = 1000; }\n   Parameter{ Name = x2; Ini = 3; Step = 1;  Min = 2;  Max = 20;   }\n}\n\\end{lstlisting}\n\\vspace{-0.5\\baselineskip}\nand the cost function takes two arguments, $x_1, x_2 \\in \\Re$.\nThen, the cost function will be evaluated at the points\\\\\n\\noindent $(x_1, x_2) \\in \\{ (10,3), \\ (100,3), \\ (1000,3), \\ (5,2), \\ (5,20) \\}$.\\rbox \\\\\n}\n\\end{example}\n
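\nTo make the spacing rule concrete, the following OCaml sketch (ours, not part of GenOpt) computes the values $x_0, \\ldots, x_m$ of a single parameter from \\texttt{Min}, \\texttt{Max} and \\texttt{Step}:\n\\begin{lstlisting}\n(* Grid values of one parameter, following the spacing formulas above.\n   Assumes step <> 0; step < 0 gives logarithmic, step > 0 linear spacing. *)\nlet grid l u step =\n  let m = abs step in\n  if step < 0 then\n    let p = log10 (u /. l) /. float_of_int m in\n    List.init (m + 1) (fun i -> l *. 10. ** (p *. float_of_int i))\n  else\n    List.init (m + 1)\n      (fun i -> l +. float_of_int i /. float_of_int m *. (u -. l))\n\\end{lstlisting}\nFor the first parameter in the example above ($l=10$, $u=1000$, $m=2$, logarithmic spacing), \\texttt{grid 10. 1000. (-2)} evaluates to \\texttt{[10.; 100.; 1000.]}, matching the points listed.\n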
\n\\subsection{Keywords}\nFor this algorithm, the command file (see page~\\pageref{par:comFil}) can contain continuous and discrete parameters.\\\\\n\nThe \\texttt{Parametric} algorithm is invoked by the following specification in the command file:\n\\begin{lstlisting}\nAlgorithm{\n   Main = Parametric;\n   StopAtError = true | false;\n}\n\\end{lstlisting}\n\n\\noindent The keywords have the following meaning:\n\\begin{codedescription}\n\\item[Main]\nThe name of the main algorithm.\n\\item[StopAtError]\nIf \\texttt{true}, then the parametric run stops if a simulation error occurs. \nIf \\texttt{false}, then the parametric run does not stop if a simulation error occurs.\nThe failed function evaluation will be assigned the function value zero.\nFor information, an error message will be written to the user interface\nand the optimization log file.\n\\end{codedescription}\n\n% ===================================\n\\section{Parametric Runs on a Mesh}\n\\label{sec:algParRunGen}\n\\subsection{Algorithm Description}\n\nIn contrast to the algorithm \\texttt{Parametric}, the algorithm \\texttt{Mesh} spans a multi-dimensional grid in the space of the independent parameters, and it evaluates the objective function at each grid point.\n\nNote that the number of function evaluations increases exponentially with the number \nof independent parameters.\nFor example, a $5$-dimensional grid with $2$ intervals in each dimension requires $3^5=243$ function evaluations, whereas a $10$-dimensional grid would require $3^{10}=59049$ function evaluations.\\\\\n\nThe values that each parameter can take on are computed in the same way\nas for the algorithm \\texttt{Parametric}. Therefore, the specification of a\n\\texttt{Parameter} is subject to the same constraints as for the algorithm \\texttt{Parametric}, which is described above.\n\n\\begin{example}[Parametric run on a mesh]~\\\\\n{\\em\nSuppose the parameter specification is of the form\n\\vspace{-0.5\\baselineskip}\n\\begin{lstlisting}\nVary{\n  Parameter{ Name = x1; Min = -10; Ini = 99; Max = 10; Step = 1; } \n  Parameter{ Name = x2; Min = 1; Ini = 99; Max = 100; Step = -2; } \n}\n\\end{lstlisting}\n\\vspace{-0.5\\baselineskip}\nand the cost function takes two arguments, $x_1, x_2 \\in \\Re$.\nThen, the cost function will be evaluated at the points\\\\\n$(x_1, x_2) \\in\n\\{(-10,\\,  1)$,\n$ ( 10, \\, 1)$, \n$ (-10, \\,  10)$, \n$ ( 10, \\,  10)$, \n$ (-10, \\, 100)$, \n$ ( 10, \\, 100)\\}$.\n\nAn alternative specification for $x_2$ that uses a discrete parameter and gives the same result is\n\\vspace{-0.5\\baselineskip}\n\\begin{lstlisting}\nParameter{ \n  Name = x2; \n  Ini = \\\"1\\\"; \n  Values = \\\"1, 10, 100\\\";\n} \n\\end{lstlisting}\n\\vspace{-\\baselineskip}\n\\rbox\n}\n\\end{example}\n\n\n\\subsection{Keywords}\n\nThe \\texttt{Mesh} algorithm is invoked by the following specification in the command file:\n\\begin{lstlisting}\nAlgorithm{\n   Main        = Mesh;\n   StopAtError = true | false;\n}\n\\end{lstlisting}\n\n\\noindent The keywords have the following meaning:\n\\begin{codedescription}\n\\item[Main]\nThe name of the main algorithm.\n\\item[StopAtError]\nIf \\texttt{true}, then the parametric run stops if a simulation error occurs. 
\nIf \\texttt{false}, then the parametric run does not stop if a simulation error occurs.\nThe failed function evaluation will be assigned the function value zero.\nFor information, an error message will be written to the user interface\nand the optimization log file.\n\\end{codedescription}\n\n\\label{sec:algImpEnd}\n", "meta": {"hexsha": "fd3115be816656881a5a334bc5c27331b508a564", "size": 6369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/manual/algParametric.tex", "max_stars_repo_name": "bergsee/GenOpt", "max_stars_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2015-08-30T09:47:56.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-06T15:16:18.000Z", "max_issues_repo_path": "src/manual/algParametric.tex", "max_issues_repo_name": "bergsee/GenOpt", "max_issues_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2016-01-14T00:01:46.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-21T15:28:52.000Z", "max_forks_repo_path": "src/manual/algParametric.tex", "max_forks_repo_name": "lbl-srg/GenOpt", "max_forks_repo_head_hexsha": "3925277af881cea6e12e3d1bf0285bd657bbcced", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2015-08-30T09:47:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-01T18:07:07.000Z", "avg_line_length": 38.1377245509, "max_line_length": 229, "alphanum_fraction": 0.7288428325, "num_tokens": 1801, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5826762282909761}}
{"text": "\\section{Typed realizability}\n\\label{sec:typed-realizability}\n\nRZ is based on \\emph{typed realizability} by John\nLongley~\\cite{Longley99}.   This variant of realizability corresponds most\ndirectly to programmers' intuition about implementations.\n\nWe approach typed realizability and its relationship to\nreal-world programming by way of example. Suppose we are asked to\ndesign a data structure for the set $\\mathcal{G}$ of all finite\nsimple%\n\\iflong\n\\footnote{Simple means at most one arrow between any two vertices.}\n\\fi % \\iflong\n\\ directed graphs with vertices labeled by distinct integers. \n%\n\\iflong\nAn exemplar\ndirected graph~$G$ is shown in Figure~\\ref{fig:digraph}.\n%\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=0.3\\textwidth]{digraph}\n  \\caption{A finite directed graph $G$}\n  \\label{fig:digraph}\n\\end{figure}\n\\fi % \\iflong\n%\nOne common representation of such graphs uses a pair of lists $(\\ell_V, \\ell_A)$, where\n$\\ell_V$ is the list of vertex labels and $\\ell_A$ is the \\emph{adjacency list} \nrepresenting the arrows by pairing the labels of each source and target.\n\\iflong\nFor the above graph $G$,\n$\\ell_V = [1; 2;\n3; 4]$ and $\\ell_A = [(1,2); (2,2); (2,3); (3,2); (3,1)]$.\n\\fi % \\iflong\n%\nThus we define the datatype of graphs as\\footnote{We use OCaml\n  notation in which $\\clist{t}$ classifies finite lists of elements of\n  type~$t$, and $t_1 * t_2$ classifies pairs containing a value of\n  type $t_1$ and a value of type $t_2$.}\n%\n\\begin{equation*}\n  \\ctype \\mathtt{graph} = \\clist{\\cint} \\ \\ * \\ \\ \\clist{(\\cint * \\cint)}\n\\end{equation*}\n%\nHowever, this is not a complete description of the intended representation, as\nthere are representation invariants and conditions not expressed by the type, e.g.,\n%\n\\iflong\n\\begin{enumerate}\n\\item The order in which vertices and arrows are listed is not\n  important%\n; for example, $[1;2;3;4]$ and $[4;1;2;3]$ represent the same vertices.\n\\item Each vertex and arrow must be listed exactly once.\n\\item The source and target of each arrow must appear in the list of vertices.\n\\end{enumerate}\n\\else % \\iflong\nthe order in which vertices and arrows are listed is not\nimportant, each vertex and arrow must be listed exactly once, and\nthe source and target of each arrow must appear in the list of vertices.\n\\fi %\\iflong\n\n%\nThus, to implement the mathematical set~$\\mathcal{G}$, we must not\nonly decide on the underlying datatype $\\mathtt{graph}$, but also\ndetermine what values of that type represent which elements\nof~$\\mathcal{G}$. As we shall see next, this can be expressed either\nusing a \\emph{realizability relation} or a \\emph{partial equivalence\n  relation (per)}.\n
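\nFor instance, the invariants and the induced notion of ``represents the same graph'' are directly expressible in OCaml; the following sketch (our illustration, with hypothetical helper names, not code produced by RZ) makes them concrete:\n\\begin{verbatim}\n(* Does (lv, la) satisfy the representation invariants above? *)\nlet ok (lv, la) =\n  let distinct l =\n    List.length (List.sort_uniq compare l) = List.length l in\n  distinct lv && distinct la &&\n  List.for_all (fun (s, t) -> List.mem s lv && List.mem t lv) la\n\n(* Do two valid representations denote the same graph? *)\nlet equiv (lv, la) (lv', la') =\n  let norm l = List.sort_uniq compare l in\n  norm lv = norm lv' && norm la = norm la'\n\\end{verbatim}\nHere \\texttt{ok} carves out the values that represent a graph at all, and \\texttt{equiv} relates any two values representing the same graph; this is exactly the structure that is formalized next.\n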
\n\\subsection{Modest sets and pers}\n\\label{sec:modest-sets-pers}\n\n\\iflong\nWe now define typed realizability as it\napplies to OCaml. Other general-purpose programming languages could be\nused instead, as long as they provide the usual ground types, product\nand function types.\\footnote{It is also convenient to work with a\nlanguage that supports sum types, as this allows a more natural\nrepresentation of disjoint unions.}\n\\else\nWe now define typed realizability as it\napplies to OCaml. Other general-purpose programming languages could be\nused instead.\n\\fi % \\iflong\n\nLet $\\Type$ be the collection of all (non-parametric) OCaml types. To\neach type $t \\in \\Type$ we assign the set $\\values{t}$ of values of\ntype~$t$ that behave \\emph{functionally} in the sense of\nLongley~\\cite{longley99when}. Such values are represented by\nterminating expressions that do not throw exceptions or return\ndifferent results on different invocations. They may \\emph{use}\nexceptions, store, and other computational effects, provided they\nappear functional from the outside; a useful example using\ncomputational effects is presented in\nSection~\\ref{sec:we-show-modulus-of-continuity-example}. A functional\nvalue of function type may diverge as soon as it is applied. The\ncollection $\\Type$ with the assignment of functional values\n$\\values{t}$ to each $t \\in \\Type$ forms a \\emph{typed partial\n  combinatory algebra (TPCA)}%\n\\iflong\n, which provides a theoretical basis for\nthe definition of a realizability model that suits our needs%\n\\fi%\\iflong\n.\n\nGoing back to our example, we see that an implementation of directed\ngraphs $\\mathcal{G}$ specifies a datatype $\\typeOf{\\mathcal{G}} =\n\\mathtt{graph}$ together with a \\emph{realizability relation}\n$\\rz_{\\mathcal{G}}$ between $\\mathcal{G}$ and\n$\\values{\\mathtt{graph}}$. The meaning of $(\\ell_V, \\ell_A)\n\\rz_\\mathcal{G} G$ is ``OCaml value $(\\ell_V, \\ell_A)$\nrepresents/realizes/implements graph $G$''.\n%\n\\iflong\n%\nThere are two natural\nconditions that $\\rz_\\mathcal{G}$ ought to satisfy: (1) for every $G\n\\in \\mathcal{G}$ there should be at least one realizer $(\\ell_V,\n\\ell_A)$ representing it, and (2) if $(\\ell_V, \\ell_A)$ represents\nboth $G$ and $G'$ then $G = G'$.\\footnote{The latter condition is\n  called \\emph{modesty} and is not strictly necessary for the\n  development of the theory, though programmers would naturally expect\n  it to hold.} If $(\\ell_V, \\ell_A)$ and $(\\ell'_V, \\ell'_A)$\nrepresent the same graph (e.g., because $\\ell_V$ is a permutation of\n$\\ell'_V$, and similarly for $\\ell_A$ and $\\ell'_A$) we say that they\nare \\emph{equivalent} and write $(\\ell_V, \\ell_A) \\per_\\mathcal{G}\n(\\ell'_V, \\ell'_A)$. The relation $\\per_\\mathcal{G}$ is a\n\\emph{partial} equivalence relation (symmetric and transitive, but not\nreflexive) because not every $(\\ell_V, \\ell_A) \\in\n\\values{\\mathtt{graph}}$ represents a graph.\n\n\\smallskip\n\nA general definition is in order. A \\emph{modest set} is \n%\n\\else % iflong\n%\nGeneralizing from this, we define a \\emph{modest set} to be\n%\n\\fi\n%\na triple $A = (\\setOf{A}, \\typeOf{A}, {\\rz_A})$ where $\\setOf{A}$ is\nthe \\emph{underlying set}, $\\typeOf{A} \\in \\Type$ is the\n\\emph{underlying type}, and $\\rz_A$ is a \\emph{realizability relation}\nbetween $\\values{\\typeOf{A}}$ and $\\setOf{A}$, satisfying\n%\n\\iflong\n% \n\\begin{enumerate}\n\\item \\emph{totality:} for every $x \\in \\setOf{A}$ there is $v \\in\n  \\values{\\typeOf{A}}$ such that $v \\rz_A x$, and\n\\item \\emph{modesty:} if $u \\rz_A x$ and $u \\rz_A y$ then $x = y$.\n\\end{enumerate}\n%\n\\else % iflong\n%\n(1) \\emph{totality:} for every $x \\in \\setOf{A}$ there is $v \\in\n\\values{\\typeOf{A}}$ such that $v \\rz_A x$, and (2) \\emph{modesty:} if\n$u \\rz_A x$ and $u \\rz_A y$ then $x = y$.\n%\n\\fi % iflong\n%\nThe \\emph{support} of $A$ is the set $\\support{A} = \\set{v \\in\n  \\values{\\typeOf{A}} \\such \\xsome{x}{\\setOf{A}}{v \\rz_A x}}$ of those\nvalues that realize something. 
We define the relation $\\per_A$ on\n$\\values{\\typeOf{A}}$ by\n%\n\\begin{equation*}\n  u \\per_A v\n  \\iff\n  \\some{x}{\\setOf{A}}{u \\rz_A x \\land v \\rz_A x} \\;.\n\\end{equation*}\n%\nFrom totality and modesty of $\\rz_A$ it follows that $\\per_A$ is a per,\ni.e., symmetric and transitive. Observe that $\\support{A} = \\set{v \\in\n  \\values{\\typeOf{A}} \\such v \\per_A v}$, whence $\\per_A$\nrestricted to $\\support{A}$ is an equivalence relation. In fact, we\nmay recover a modest set up to isomorphism from $\\typeOf{A}$ and\n$\\per_A$ by taking $\\setOf{A}$ to be the set of equivalence classes of\n$\\per_A$, and $v \\rz_A x$ to mean $v \\in x$.\n\nThe two views of implementations, as modest sets $(\\setOf{A},\n\\typeOf{A}, {\\rz_A})$, and as pers $(\\typeOf{A}, {\\per_A})$, are\nequivalent.\\footnote{And there is a third view, as a partial surjection\n  $\\delta_A : {\\subseteq}\\values{\\typeOf{A}} \\epito \\setOf{A}$, with\n  $\\delta_A(v) = x$ when $v \\rz_A x$. This is how realizability is\n  presented in Type Two Effectivity~\\cite{Wei00}.} \n%\nWe concentrate on the view of modest sets as pers. They are more\nconvenient to use in RZ because they refer only to types and values,\nas opposed to arbitrary sets.\n%\nNevertheless, it is useful to understand\nhow modest sets and pers arise from natural programming practice.\n\n\\iflong\n%\nModest sets form a category whose objects are modest sets and\nmorphisms are the realized functions. A \\emph{realized function} $f :\nA \\to B$ is a function $f : \\setOf{A} \\to \\setOf{B}$ for which there\nexists $v \\in \\values{\\typeOf{A} \\to \\typeOf{B}}$ such that, for all\n$x \\in \\setOf{A}$ and $u \\in \\typeOf{A}$,\n%\n\\begin{equation}\n  \\label{eq:rz-function-space}\n  u \\rz_A x \\implies v\\;u \\rz_B f(x) \\;.\n\\end{equation}\n%\nThis condition is just a mathematical expression of the usual idea\nthat~$v$ is an implementation of~$f$ if it does to realizers\nwhat~$f$ does to the elements they represent.\n\\fi\n\n\\iflong\nThe equivalent category of pers has as objects\n\\else\nPers form a category whose objects are\n\\fi\n%\npairs $A = (\\typeOf{A}, {\\per_A})$ where $\\typeOf{A} \\in \\Type$ and\n$\\per_A$ is a per on $\\values{\\typeOf{A}}$. A morphism $A \\to B$ is\nrepresented by a function $v\\in \\values{\\typeOf{A} \\to \\typeOf{B}}$\nsuch that, for all $u, u' \\in \\support{A}$,\n%\n\\iflong\n\\begin{equation}\n  \\label{eq:per-exponential}\n  u \\per_A u' \\implies v\\;u \\per_B v\\;u' \\;.\n\\end{equation}\n%\nValues $v$ and $v'$ that both satisfy~\\eqref{eq:per-exponential}\n\\else % \\iflong\n$u \\per_A u' \\implies v\\;u \\per_B v\\;u'$.  Two such functions $v$ and $v'$\n\\fi\nrepresent the same morphism if, for all $u, u' \\in \\support{A}$,\n$u \\per_A u'$ implies $v\\;u \\per_B v'\\;u'$.\n\n\\iflong\n%\nThe category of pers has a very rich structure. 
For example, we may\nform a cartesian product $A \\times B$ of pers $A$ and $B$ by\n%\n\\begin{align*}\n  \\typeOf{A \\times B} &= \\typeOf{A} * \\typeOf{B},\\\\\n  (u_1, v_1) \\per_{A \\times B} (u_2, v_2) &\\iff\n  u_1 \\per_A u_2 \\land v_1 \\per_B v_2.\n\\end{align*}\n%\nThe projections $\\pi_1 : A \\times B \\to A$ and $\\pi_2 : A \\times B \\to\nB$ are realized by $\\mathtt{fst}$ and $\\mathtt{snd}$, respectively.\n\nThe morphisms between pers~$A$ and~$B$ again form a per\n$B^A$, also written as $A \\to B$, called the \\emph{exponential} of~$A$\nand~$B$, with\n%\n\\begin{align*}\n  \\typeOf{B^A} &= \\typeOf{A} \\to \\typeOf{B},\\\\\n  \\support{B^A} &=\n  \\set{v \\in \\values{\\typeOf{A} \\to \\typeOf{B}} \\such \n    \\all{u,u'}{\\values{\\typeOf{A}}}{u \\per_A u' \\implies v\\,u \\per_B v\\,u'}}\n  \\\\\n  u \\per_{B^A} v &\\iff u, v \\in \\support{B^A} \\land\n  \\xall{w}{\\support{A}}{u\\;w \\per_B v\\;w}.\n\\end{align*}\n%\nThe evaluation map $e : B^A \\times A \\to B$ is realized by OCaml\napplication, $\\cfun{(u,v)}{u\\;v}$. If a function $f : C \\times A \\to\nB$ is realized by $v$, then its transpose $\\tilde{f} : C \\to B^A$,\n$\\tilde{f}(z)(x) = f(z,x)$, is realized by $\\cfun{z\\;x}{v \\; (z,x)}$.\nThis shows that the category of pers is cartesian closed. In\nSection~\\ref{sec:transl-sets-terms} we review other canonical\nconstructions on modest sets.\n\n\\bigskip\n\\else % \\iflong\n%\nThe category of pers has a very rich structure, namely that of a\nregular locally cartesian closed category~\\cite{Bauer:00}. This\nsuffices for the interpretation of first-order logic and (extensional)\ndependent types~\\cite{JacobsB:cltt}.\n%\n\\fi % \\iflong\n\n\\iflong\n%\nAs an example we consider the cyclic group on seven elements $(\\ZZ_7,\n0, {-}, {+})$. To implement the group, we must give a representation\nof $\\ZZ_7$ as a per~$Z = (\\typeOf{Z}, {\\per_Z})$, and\nprovide realizers for the neutral element~$0$, negation~$-$, and\naddition~$+$. \n\nOne possibility is to choose $\\cint$ as the underlying type\n$\\typeOf{Z}$, and to let $\\support{Z}$ be only the integers \\texttt{0}\nthrough \\texttt{6}. Then negation and addition must work modulo\n\\texttt{7} (i.e., must return an integer in the range\n\\texttt{0}--\\texttt{6} when given integers in this range). The neutral\nelement would be the integer constant \\texttt{0}, and the equivalence\n$\\per_Z$ would be integer equality.\n\nAlternatively, we could take $\\cint$ as the underlying type\n$\\typeOf{Z}$, but let $\\support{Z}$ include all values of type $\\cint$. In this\ncase, negation and addition could be simply integer negation and\naddition\\footnote{Taking care to prevent integer\n  overflow.}. Here the neutral element could be implemented as any\ninteger multiple of \\texttt{7}, and the equivalence $\\per_Z$ would be\nequivalence-modulo-7.\n
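\nAs a small illustration (our sketch, not code produced by RZ), the second alternative can be written down directly:\n\\begin{verbatim}\n(* Z_7 with underlying type int and support all of int;\n   integer overflow is ignored, as in the footnote. *)\nlet zero = 0\nlet neg x = - x\nlet add x y = x + y\n(* the per: two values are equivalent iff they agree modulo 7 *)\nlet equiv u v = (u - v) mod 7 = 0\n\\end{verbatim}\nThe first alternative would instead restrict the support to $\\{\\mathtt{0}, \\ldots, \\mathtt{6}\\}$, reduce modulo \\texttt{7} inside \\texttt{neg} and \\texttt{add}, and take \\texttt{equiv} to be ordinary equality.\n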
\nBoth of these pers happen to be \\emph{decidable}, i.e., it can be\nalgorithmically decided whether two values in $\\support{Z}$ represent the same element\nof~$\\ZZ_7$.\\fi % \\iflong\n\n\\iflong\n%\nNot all pers are decidable.\n%\n\\else\n%\nNot all pers are \\emph{decidable}, i.e., there may be no algorithm for deciding\nwhen two values are equivalent.\n%\n\\fi\n%\nExamples include implementations of semigroups with an undecidable\nword problem~\\cite{post47:_recur_unsol_probl_thue}%\n\\iflong, implementations of computable sets of integers (which might\nbe realized by membership functions of type $\\cint\\to\\cbool$),\\fi\n\\ and implementations of computable real numbers (which might be\nrealized by infinite Cauchy sequences).\n%\n\\iflong\n%\nThere is no presupposition that pers are computable\n(implementable). We can require decidable equivalence by adding a\nsuitable axiom; see Section~\\ref{sec:decidable-sets}. \\fi\n\n\\subsection{Interpretation of logic}\n\\label{sec:interpretation-logic}\n\nIn the realizability interpretation of logic, each formula~$\\phi$ is\nassigned a set of \\emph{realizers}, which can be thought of as\ncomputations that witness the validity of~$\\phi$. The situation is\nsomewhat similar, but not equivalent, to the propositions-as-types\ntranslation of logic into type theory, where proofs of a\nproposition correspond to terms of the corresponding type. More\nprecisely, to each formula~$\\phi$ we assign an underlying type\n$\\typeOf{\\phi}$ of realizers, but unlike the propositions-as-types\ntranslation, not all terms of type $\\typeOf{\\phi}$ are necessarily\nvalid realizers for~$\\phi$, and some terms that are realizers may not\ncorrespond to any proofs (for example, if they denote partial\nfunctions or use computational effects).\n\nIt is customary to write $t \\rz \\phi$ when $t \\in\n\\values{\\typeOf{\\phi}}$ is a realizer for~$\\phi$. 
The underlying types\nand the realizability relation~$\\rz$ are defined inductively on the\nstructure of~$\\phi$; an outline is shown in Figure~\\ref{fig:rz-logic}.\nWe say that a formula~$\\phi$ is \\emph{valid} if it has at least one\nrealizer.\n%\n\\begin{figure*}[t]\n  \\textbf{Underlying types of realizers:}\n\\[\n  \\begin{array}{ll@{\\qquad}ll}\n    \\typeOf{\\itrue} &= \\mathtt{unit} &\n    \\typeOf{\\ifalse} &= \\mathtt{unit} \\\\\n    \\typeOf{\\iequal{x}{y}} &= \\mathtt{unit} &\n    \\typeOf{\\iand{\\phi}{\\psi}} &= \\oprod{\\typeOf{\\phi}}{\\typeOf{\\psi}} \\\\\n    \\typeOf{\\iimply{\\phi}{\\psi}} &= \\oarrow{\\typeOf{\\phi}}{\\typeOf{\\psi}} &\n    \\typeOf{\\ior{\\phi}{\\psi}} &=\n    \\osumtyx{\\mathtt{or}_0}{\\typeOf{\\phi}}{\\mathtt{or}_1}{\\typeOf{\\psi}} \\\\\n    \\typeOf{\\iforall{x}{A}{\\phi}} &= \\oarrow{\\typeOf{A}}{\\typeOf{\\phi}} &\n    \\typeOf{\\iexists{x}{A}{\\phi}} &= \\oprod{\\typeOf{A}}{\\typeOf{\\phi}}\n  \\end{array}\n\\]\n  \\textbf{Realizers:}\n\\[\n  \\begin{array}{ll}\n    () \\rz \\itrue & \\\\\n    () \\rz \\iequal{x}{y}\n    &\\quad\\text{iff}\\quad \n    x = y\n    \\\\\n    (t_1,t_2) \\rz \\iand{\\phi}{\\psi}\n    &\\quad\\text{iff}\\quad\n    \\text{$t_1 \\rz \\phi$ and $t_2 \\rz \\psi$}\n    \\\\\n    t \\rz \\iimply{\\phi}{\\psi}\n    &\\quad\\text{iff}\\quad\n    \\text{for all $u \\in \\values{\\typeOf{\\phi}}$, if $u \\rz \\phi$ then $t\\,u\n      \\rz \\psi$}\n    \\\\\n    \\oinj{\\mathtt{or}_0}{t} \\rz \\ior{\\phi}{\\psi}\n    &\\quad\\text{iff}\\quad\n    \\text{$t \\rz \\phi$}\n    \\\\\n    \\oinj{\\mathtt{or}_1}{t} \\rz \\ior{\\phi}{\\psi}\n    &\\quad\\text{iff}\\quad\n    \\text{$t \\rz \\psi$}\n    \\\\\n    t \\rz \\iforall{x}{A}{\\phi(x)}\n    &\\quad\\text{iff}\\quad\n    \\text{for all $u \\in \\values{\\typeOf{A}}$, if $u \\rz_A x$ then $t\\,u \\rz \\phi(x)$}\n    \\\\\n    (t_1, t_2) \\rz \\iexists{x}{A}{\\phi(x)}\n    &\\quad\\text{iff}\\quad\n    \\text{$t_1 \\rz_A x$ and $t_2 \\rz \\phi(x)$}\n  \\end{array}\n\\]\n\\vspace{-0.5cm}\n  \\caption{Realizability interpretation of logic (outline)}\n  \\label{fig:rz-logic}\n\\end{figure*}\n\nIn classical mathematics, a predicate on a set~$X$ may be viewed as a\nsubset of~$X$ or a (possibly non-computable) function $X \\to \\oProp$,\nwhere $\\oProp = \\set{\\ofalse, \\otrue}$ is the set of truth values.\nAccordingly, since in realizability propositions are witnessed by\nrealizers,\n%\n\\iflong\na predicate~$\\phi$ on a modest set~$A$ may be viewed as a\nsubset of $\\setOf{A} \\times \\values{\\typeOf{\\phi}}$, or a (possibly\nnon-computable) function $\\setOf{A} \\times \\values{\\typeOf{\\phi}} \\to\n\\set{\\ofalse, \\otrue}$. In terms of pers, which is what RZ uses,\n\\fi\n%\na predicate~$\\phi$ on a per~$A = (\\typeOf{A}, {\\per_A})$ is a\n(possibly non-computable) function $\\phi : \\values{\\typeOf{A}} \\times\n\\values{\\typeOf{\\phi}} \\to \\oProp$ that is\n%\n\\iflong\n\\begin{itemize}\n\\item \\emph{strict:} if $\\phi(u,v)$ then $u \\in \\support{A}$, and\n\\item \\emph{extensional:} if $\\phi(u_1,v)$ and $u_1 \\per_A u_2$ then\n  $\\phi(u_2,v)$.\n\\end{itemize}\n\\else % \\iflong\n\\emph{strict} (if $\\phi(u,v)$ then $u \\in \\support{A}$) \nand \\emph{extensional} (if $\\phi(u_1,v)$ and $u_1 \\per_A u_2$ then\n  $\\phi(u_2,v)$).\n\\fi\n\n\\iflong\nWe illustrate how the realizability interpretation extracts the\ncomputational content of a proposition. 
To make an interesting\nexample, suppose\n\\else % \\iflong\nSuppose\n\\fi % \\iflong\nwe have implemented the real\nnumbers~$\\RR$ as a per~$R = (\\mathtt{real}, {\\per_R})$, and\nconsider\n\\iflong\nthe statement\nthat every cubic $x^3 + a x + b$ has a root,\n%\n\\begin{equation}\n  \\label{eq:square-root}%\n  \\iforall{a}{R}{\\iforall{b}{R}{\\iexists{x}{R}{\\iequal{x^3 + a x + b}{0}}}}.\n\\end{equation}\n\\else % \\iflong\n$\\iforall{a}{R}{\\iforall{b}{R}{\\iexists{x}{R}{\\iequal{x^3 + a x + b}{0}}}}$.\n\\fi % \\iflong\n%\nBy computing according to Figure~\\ref{fig:rz-logic}, we see that\na realizer for this proposition is a value~$r$ of type\n$\\oarrow{\\mathtt{real}}{\\oarrow{\\mathtt{real}}{\\oprod{\\mathtt{real}}{\\mathtt{unit}}}}$\nsuch that, if $t$ realizes $a \\in \\RR$ and $u$ realizes $b \\in\n\\RR$, then $\\oapp{\\oapp{r}{t}}{u} = (v, w)$ with $v$ realizing a real\nnumber~$x$ such that $x^3 + a x + b = 0$, and with $w$ being trivial. (This\ncan be ``thinned'' to a realizer of type\n$\\oarrow{\\mathtt{real}}{\\oarrow{\\mathtt{real}}{\\mathtt{real}}}$ that\ndoes not bother to compute~$w$.) In essence, the realizer~$r$\ncomputes a root of the cubic equation. Note\nthat $r$ is \\emph{not} extensional, i.e., different realizers~$t$\nand~$u$ for the same~$a$ and~$b$ may result in different roots. \nTo put it another way, $r$ realizes a \\emph{multi-valued}\nfunction\\footnote{The multi-valued nature of the realizer comes from\n  the fact that it computes \\emph{any one} of many values, not that it\n  computes \\emph{all} of the many values.} rather than a per\nmorphism. It is well known in computable mathematics that certain\noperations, such as equation solving, are only computable if we allow\nthem to be multi-valued. They arise naturally in RZ as translations of\n$\\forall\\exists$~statements.\n\n\\iflong\nThere are propositions whose realizers are ``irrelevant'' or free of\ncomputational content. For example, realizers for $\\itrue$ and\nequality have type $\\ounit$. Another example is a negation\n$\\inot{\\phi}$, which is defined to be the same as\n$\\iimply{\\phi}{\\ifalse}$, whose realizers have type\n$\\oarrow{\\typeOf{\\phi}}{\\ounit}$. Such realizers do not compute\nanything useful, and we may as well throw them away. Sometimes only a\npart of a realizer is computationally irrelevant, as we saw in the\nlast example. Propositions that are free of computational content\nare characterized as the \\emph{$\\lnot\\lnot$-stable propositions}. A\nproposition~$\\phi$ is said to be $\\lnot\\lnot$-stable, or just\n\\emph{stable} for short, when $\\iimply{\\inot{\\inot{\\phi}}}{\\phi}$ is\nvalid. Any \\emph{negative} proposition, i.e., one built from $\\itrue$,\n$\\ifalse$, $=$, $\\land$, $\\Rightarrow$ and $\\forall$, is stable, but\nthere may be stable propositions that are not written in negative\nform.\n\nIt would be unproductive to bother the programmer with requirements\nfor useless code.\n%\n\\else\n%\nSome propositions, such as equality and negation, have ``irrelevant'' realizers\nfree of computational content. Sometimes only a\npart of a realizer is computationally irrelevant. \nPropositions that are free of computational content are\ncharacterized as the \\emph{$\\lnot\\lnot$-stable propositions}. A\nproposition~$\\phi$ is said to be $\\lnot\\lnot$-stable, or just\n\\emph{stable} for short, when $\\iimply{\\inot{\\inot{\\phi}}}{\\phi}$ is\nvalid.\n%\n\\fi\n%\nOn input, one can specify whether abstract predicates\nhave computational content. 
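(To make the cubic example concrete: below is a minimal sketch, ours rather than RZ output, of one such realizer, with real numbers crudely modelled by floating-point values; the trivial unit component is omitted, i.e., already ``thinned''.)\n%\n\\begin{verbatim}\ndef cubic_root(a, b, tol=1e-12):\n    # returns *some* root of x^3 + a*x + b by bisection; which root is\n    # found depends on the search interval, so the procedure is\n    # multi-valued in the sense discussed above\n    f = lambda x: x**3 + a*x + b\n    lo, hi = -1.0, 1.0\n    while f(lo) > 0: lo *= 2.0  # a cubic is eventually negative on the left\n    while f(hi) < 0: hi *= 2.0  # ... and eventually positive on the right\n    while hi - lo > tol:\n        mid = 0.5 * (lo + hi)\n        lo, hi = (mid, hi) if f(mid) <= 0 else (lo, mid)\n    return lo\n\nx = cubic_root(-1.0, 0.0)       # x^3 - x has roots -1, 0 and 1\nassert abs(x**3 - x) < 1e-9\n\\end{verbatim}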
On output, extracted realizers\ngo through a \\emph{thinning} phase, which removes\nirrelevant realizers.\n\n\n\\iflong\n\\subsection{Uniform families of modest sets}\n\\fi % \\iflong\n\\label{sec:uniform-families}\n\nMany structures are naturally viewed as families of sets, or sets\ndepending on parameters, or \\emph{dependent types} as they are called\nin type theory. For example, the $n$-dimensional Euclidean space\n$\\RR^n$ depends on the dimension $n \\in \\NN$, the Banach space\n$\\mathcal{C}([a,b])$ of uniformly continuous real functions on the\nclosed interval $[a,b]$ depends on $a, b \\in \\RR$ such that $a < b$,\netc. In general, a family of sets $\\family{A_i}{i \\in I}$ is an\nassignment of a set $A_i$ to each $i \\in I$ from an \\emph{index\n  set}~$I$.\n\n\\iflong\nIn the category of modest sets the appropriate notion is that of a\n\\emph{uniform} family~$\\family{A_i}{i \\in I}$, which is an assignment\nof a modest set $A_i = (\\setOf{A_i}, \\typeOf{A}, {\\rz_{A_i}})$ to each\n$i \\in \\setOf{I}$, where $I$ is an index modest\nset~\\cite[6.3]{JacobsB:cltt}. The uniformity comes from the\nrequirement that all the~$A_i$'s share the same underlying\ntype~$\\typeOf{A_i} = \\typeOf{A}$. It is a desirable restriction from\nthe implementation point of view, because it removes dependencies at\nthe level of types. Note also that there is no dependency on the\nrealizers, only on the elements of the underlying set.\n\nWe may express uniform families in terms of pers, too.\n\\else\nIn the category of pers the appropriate notion is that of a\n\\emph{uniform} family.\n%\n\\fi\nA uniform family of pers $\\family{A_i}{i \\in I}$ indexed\nby a per~$I$ is given by an underlying type $\\typeOf{A}$ and a family\nof pers $(\\per_{A_i})_{i \\in \\values{\\typeOf{I}}}$ that is\n% \n\\iflong\n\\begin{itemize}\n\\item \\emph{strict:} if $u \\per_{A_i} v$ then $i \\in \\support{I}$, and\n\\item \\emph{extensional:} if $u \\per_{A_i} v$ and $i \\per_I j$ then $u\n  \\per_{A_j} v$.\n\\end{itemize}\n\\else % \\iflong\nstrict (if $u \\per_{A_i} v$ then $i \\in \\support{I}$) and\nextensional (if $u \\per_{A_i} v$ and $i \\per_I j$ then $u\n  \\per_{A_j} v$).\n\\fi % \\iflong\n\n\\iflong\nWe may form the \\emph{sum} $\\depsum{i \\in I}{A_i}$ of a uniform family\n$\\family{A_i}{i \\in I}$ as\n%\n\\begin{align*}\n  \\typeOf{\\depsum{i \\in I}{A_i}} &=\n  \\typeOf{I} \\times \\typeOf{A}\n  \\\\\n  (i_1, u_1) \\per_{\\depsum{i \\in I}{A_i}} (i_2, u_2)\n  &\\iff\n  i_1 \\per_I i_2 \\land u_1 \\per_{A_{i_1}} u_2\n\\end{align*}\n%\nand the \\emph{product} $\\depprod{i \\in I}{A_i}$ as\n%\n\\begin{align*}\n  \\typeOf{\\depprod{i \\in I}{A_i}} &=\n  \\typeOf{I} \\to \\typeOf{A}\n  \\\\\n  \\support{\\depprod{i \\in I}{A_i}} &=\n  \\set{v \\in \\values{\\typeOf{I} \\to \\typeOf{A}} \\such\n    \\all{i, j}{\\values{\\typeOf{I}}}{i \\per_{I} j \\implies v\\,i\n      \\per_{A_i} v\\,j}\n    } \\\\\n  u \\per_{\\depprod{i \\in I}{A_i}} v\n  &\\iff\n  u, v \\in \\support{\\depprod{i \\in I}{A_i}} \\land\n  \\all{i,j}{\\values{\\typeOf{I}}}{\n    i \\per_I j \\implies u\\;i \\per_{A_i} v\\;j\n  }.\n\\end{align*}\n%\nThese constructions allow us to interpret (extensional) dependent type\ntheory in the category of modest sets.\n\\else % \\iflong\nWe can also form the \\emph{sum} $\\depsum{i \\in I}{A_i}$ or\n\\emph{product} $\\depprod{i \\in I}{A_i}$ of\na uniform family, allowing an interpretation of (extensional) dependent type\ntheory.\n\\fi % \\iflong\n\n\\iflong\nAs an example of a uniform family we consider the 
cyclic group\n$(\\ZZ_n, 0, {-}, {+})$ of order~$n$. To keep things simple, we assume\nthat~$n$ ranges over natural numbers that can be represented by\ntype~$\\cint$ (i.e., $\\typeOf{N}=\\cint$), and that $\\per_N$ is equality.\n%\nThe uniform family $\\family{Z_n}{n \\in N}$ is then like the cyclic\ngroup of order~$7$, with $7$ replaced by~$n$. Ignoring overflow, the\nunderlying type would be $\\typeOf{Z_n} = \\cint$. Any of the\nimplementations suggested for $\\ZZ_7$ would work here, with~$7$\nreplaced by the parameter~$n$; in one case we would have $u \\per_{Z_n}\nv \\iff u = v$ and in the other $u \\per_{Z_n} v \\iff u\n\\mathbin{\\mathrm{mod}} n = v \\mathbin{\\mathrm{mod}} n$.\n%\nNegation would be specified as a constant of dependent type\n$\\Pi_{n \\in N} Z_n \\to Z_n$. Its realizer \\texttt{neg} would then have\ntype $\\typeOf{N}\\to\\typeOf{Z_n}\\to{\\typeOf{Z_n}}$, i.e.,\n$\\cint\\to\\cint\\to\\cint$, so that $\\mathtt{neg}(n)$ would be a realizer\nfor negation on $\\ZZ_n$. The realizer for addition would similarly\ntake an extra argument $n$.\n\n\\fi % \\iflong\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"cie-long\"\n%%% End: \n", "meta": {"hexsha": "2ab3fb44f78cda8e2b97caaff86385db60cad318", "size": 24607, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "private/cie/modest_sets.tex", "max_stars_repo_name": "andrejbauer/rz", "max_stars_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-08-28T10:12:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-07T21:04:22.000Z", "max_issues_repo_path": "private/cie/modest_sets.tex", "max_issues_repo_name": "andrejbauer/rz", "max_issues_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "private/cie/modest_sets.tex", "max_forks_repo_name": "andrejbauer/rz", "max_forks_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.0913312693, "max_line_length": 87, "alphanum_fraction": 0.6969561507, "num_tokens": 8189, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7718434873426302, "lm_q1q2_score": 0.5826762161111261}}
{"text": "\\documentclass[]{amsart}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{color}\n\\usepackage[\n   pagebackref=true,\n   colorlinks=true,\n   urlcolor=ogblue,\n   citecolor=oggreen,\n   linkcolor=ogblue]{hyperref}\n%\\usepackage{epsfig}\n%\\usepackage{subfigure}\n\\usepackage{natbib}\n\\usepackage{pgf}\n\n\\pgfdeclareimage[height=0.7cm]{oglogo}{../OpenGammaLogo}\n\n\\definecolor{ogblue}{RGB}{0,85,129}\n\\definecolor{oggreen}{RGB}{93,194,164}\n\\definecolor{ogbluelight}{RGB}{17,116,158} \n\\definecolor{ogred}{RGB}{204,44,0}\n\\definecolor{oggray}{RGB}{178,178,178} \n\\definecolor{oggraylight}{RGB}{234,234,234}\n\n\\setlength{\\oddsidemargin}{0.5cm}\n\\setlength{\\evensidemargin}{0.5cm}\n\\setlength{\\textwidth}{15cm}\n\\setlength{\\textheight}{22cm}\n\n\\newcommand{\\TODO}[1]%\n   {{\\sffamily \\textbf{\\color{oggreen}\\noindent TO DO}\\marginpar{\\hspace*{0.5cm}\n     \\sffamily \\textbf{\\color{oggreen}TO DO}}: #1}}\n     \n\\newtheorem{theo}{Theorem}\n     \n\\newcommand{\\rem}{\\par\\noindent \\textit{Remark:} }\n\\newcommand{\\class}[1]{{\\texttt{#1}}}\n\\newcommand{\\spot}{{\\operatorname{Spot}}}\n\\newcommand{\\pv}{{\\operatorname{PV}}}\n\\newcommand{\\rr}{{\\operatorname{RR}}}\n\\newcommand{\\str}{{\\operatorname{SS}}}\n\\newcommand{\\atm}{{\\operatorname{ATM}}}\n\\newcommand{\\E}[2]{\\operatorname{E}^{#1}\\left[#2\\right]}\n%\\newcommand{\\N}{{\\mathbb N}}\n     \n\\title[Forex options]%\n   {Forex option}\n\n\\author[M. Henrard]%\n   {Marc Henrard}\n\n\\date{First version: 10 June 2011; this version: 21 June 2011}\n\n\\thanks{Version 1.1}\n\n\\address{Marc Henrard \\\\ Quantitative Research \\\\ OpenGamma}\n\n\\email{marc@opengamma.com}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{center}\n\\pgfuseimage{oglogo} \n\\end{center}\n\n\\begin{abstract}\nPricing of vanilla Forex options using a Black formula and a delta and time dependent volatility.\n\\end{abstract}\n\n\\section{Introduction}\n\nThe note presents the pricing of vanilla Forex European options. The data is described by the standard figures in the Forex market with the smile represented by the ATM volatility, risk reversals and strangles.\n\nA Forex transaction is represented by an amount in foreign currency $N_1$ and amount in domestic currency $N_2$ (also called quote currency) and a payment date $t_p$ ($N_1$ and $N_2$ have opposite signs). The option on the forex transaction is represented by the underlying Forex transaction, a strike $K$, the expiry $\\theta \\leq t_p$ and a call/put feature. The strike is relate to the amounts by $K = -N_2/N_1$. \n\nThere are two currencies involved and thus two discounting curves. They are denoted $P_D$ for the domestic currency and $P_F$ for the foreign currency. The $P_F$ used here should not be confused with the $P^F$ used in interest rate modelling where the $F$ refers to forward curves.\n\nFor a EURUSD transaction, buying EUR at a rate of 1.47 and for a nominal of 1,000,000 EUR, the domestic currency is USD and the foreign currency is EUR. The amounts are $N_1=1,000,000$ and $N_2=-1,470,000$.\n\n\\section{Preliminaries}\n\nIn this note (and in the implementation), we consider that the exchange rate used is today's exchange rate (and not spot exchange rate). \n\nThe options present value are computed from a Black formula with strike and expiry dependent volatility. 
The forward rate for (payment) date $t$ and today's rate $S_0$ is given by\n\\[\nF^{t}_0 = \\frac{P_F(0,t)}{P_D(0,t)} S_0.\n\\]\n\nThe present value is computed in the domestic currency.\nThe present value with the Black formula and a volatility $\\sigma$ is\n\\[\n\\pv = N_1 P_D(0,t_p) \\omega \\left( F^{t_p}_0 N(\\omega d_+) - K N(\\omega d_-)\\right)\n\\]\nwhere $\\omega = 1$ for a call, $\\omega = -1$ for a put and \n\\[\nd_{\\pm} = \\frac{\\ln\\left(\\frac{F^{t_p}_0}{K}\\right) \\pm \\frac12 \\sigma^2 t}{ \\sigma \\sqrt{t}}.\n\\]\n\nThe delta of the forward value with respect to the forward rate is given by\n\\begin{equation}\n\\label{EqnDeltaForward}\n\\Delta_F = \\omega N(\\omega d_+).\n\\end{equation}\nThe delta of the present value with respect to today's rate is given by\n\\[\n\\Delta_\\spot = \\omega P_F(0,t) N(\\omega d_+).\n\\]\n\n\\section{Smile}\n\nThe smile is the description of the implied volatility for each strike, i.e. of a volatility function $\\sigma = \\sigma(K)$.\nA standard smile description uses the at-the-money (ATM) volatility, risk reversals (RR) and smile strangles (SS) (see \\cite{CLA.2011.1} for more details and other possibilities).\n\nThe delta used here is the delta forward described in (\\ref{EqnDeltaForward}).\nThe \\emph{at-the-money} (ATM) used is the ATM - Delta Neutral Straddle (DNS), i.e. the strike for which the straddle has 0 delta forward. It is denoted $K_\\atm$.\n\nThe risk reversals and strangles are given for some deltas, in general for deltas 0.10 and 0.25. This means that the figures given are related to puts with deltas -0.10 and -0.25 and to calls with deltas 0.25 and 0.10. All the options are out-of-the-money; the puts have strikes below the ATM strike and the calls have strikes above it. The strikes are denoted $K_{x,T}$ with $x$ the absolute value of the delta and $T$ the type ($C$ for call and $P$ for put).\n\nThe risk reversals contain information about the skew (or slope) of the smile. The risk reversal figure is the difference between the volatility at the call and the volatility at the put:\n\\[\n\\rr(x) = \\sigma(K_{x,C}) - \\sigma(K_{x,P}).\n\\]\n\nThe strangles contain information about the curvature of the smile. 
The strangle figure is the difference between the average of the out-of-the-money volatilities and the at-the-money volatility:\n\\[\n\\str(x) = \\frac12 \\left(\\sigma(K_{x,C}) + \\sigma(K_{x,P})\\right) - \\sigma(K_{\\atm}).\n\\]\n\n\nThe strike can be computed explicitly from the forward delta by\n\\begin{equation}\n\\label{EqnDelta}\nK = F \\exp\\left( -(\\sigma \\sqrt{t} \\omega N^{-1}(\\omega \\Delta) - \\frac12 \\sigma^2 t)  \\right).\n\\end{equation}\n\nThe smile is described with the figures\n\\begin{enumerate}\n\\item Deltas (usually 0.25 and 0.10).\n\\item ATM volatility.\n\\item Risk reversal for each delta.\n\\item Strangle for each delta.\n\\end{enumerate}\nThe strikes/volatilities table is obtained through\n\\begin{enumerate}\n\\item Compute the wing volatilities from the risk reversals and strangles.\n\\item Compute the ATM strike from the ATM volatility.\n\\item Compute the put/call strikes for each delta from the wing volatilities with (\\ref{EqnDelta}).\n\\end{enumerate}\n\nIn the current implementation, the volatility is interpolated linearly on strikes and extrapolated flat beyond the extreme strikes.\n\n\\TODO{Add a smooth interpolation approach.}\n\n\\section{Currency exposure}\n\nFor non-forex instruments and forex forward transactions, the currency risk is simply the present value in each currency. \n\nThis is obviously not the case for currency options. For currency options it is the amount in each currency required to neutralise the currency movements. The neutrality will depend on the model used for the option valuation. Here we use the Black model (with the volatility provided by the smile description). By using the Black model, the change of volatility implied by the change of spot is not taken into account.\n\nFor vanilla options the currency exposure is computed in the following way: for the foreign currency the exposure is\n\\[\n\\Delta_\\spot N_1\n\\]\nand for the domestic currency it is\n\\[\n-\\Delta_\\spot N_1 \\spot + \\pv.\n\\]\n\nIn other words, if one buys an option at the fair price and at the same time sells $\\Delta_\\spot N_1$ foreign currency units against receiving $\\Delta_\\spot N_1 \\spot$ domestic currency units, he will have no currency position. The $\\pv$ part of the currency exposure hedges the premium paid in the domestic currency.\n\n\\section{Implementation}\n\nThe class implementing the computation of the implied strike from a delta and a volatility is\n\\class{BlackImpliedStrikeFromDeltaFormula}.\n\nThe smile data for one time to expiry can be stored in the class\n\\class{SmileDeltaParameter}. It can be constructed from deltas, ATM, strangles and risk reversals as described above.\n\nThe term structure version of the data, consisting of an array of the previous parameters and an array of times to expiry, can be stored in the class\n\\class{SmileDeltaTermStructureParameter}.\n\nThe pricing method for vanilla Forex options is \\class{ForexOptionVanillaMethod}. 
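As a numerical illustration of the formulas above, here is a small sketch of ours (Python; it is not the OpenGamma implementation, and all names are hypothetical), covering the Black present value and the strike-from-delta inversion~(\\ref{EqnDelta}):\n%\n\\begin{verbatim}\nimport math\n\ndef norm_cdf(x):                  # the standard normal CDF N\n    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))\n\ndef black_pv(n1, pd, f, k, sigma, t, omega):\n    # PV = N1 P_D(0,t_p) omega (F N(omega d+) - K N(omega d-))\n    dp = (math.log(f / k) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))\n    dm = dp - sigma * math.sqrt(t)\n    return n1 * pd * omega * (f * norm_cdf(omega * dp)\n                              - k * norm_cdf(omega * dm))\n\ndef strike_from_delta(f, sigma, t, delta, omega):\n    # invert Delta_F = omega N(omega d+) by bisection (a library\n    # inverse normal CDF would normally be used), then apply (EqnDelta)\n    lo, hi = -10.0, 10.0\n    while hi - lo > 1e-12:\n        mid = 0.5 * (lo + hi)\n        lo, hi = (mid, hi) if norm_cdf(mid) < omega * delta else (lo, mid)\n    return f * math.exp(-(sigma * math.sqrt(t) * omega * lo\n                          - 0.5 * sigma**2 * t))\n\n# e.g. the strike of a 0.25-delta call, one year, F = 1.45, sigma = 11%\nk_25c = strike_from_delta(1.45, 0.11, 1.0, 0.25, omega=1)\n\\end{verbatim}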
\\class{ForexOptionVanillaMethod} provides both a present value method and a currency exposure method.\n\n\\bibliography{../bibtex/finance}\n\\bibliographystyle{apa}\n\n\\tableofcontents\n\n\\end{document}\n", "meta": {"hexsha": "c4aed4a1ad4ecb9081bf680ee95837f1caa4ff00", "size": 8320, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "projects/OG-Analytics/docs/notes/forex-option/ForexOption.tex", "max_stars_repo_name": "gsteri1/OG-Platform", "max_stars_repo_head_hexsha": "e682c31e69cadde06dd3776544913dde17fe41ba", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-27T21:05:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-27T21:05:05.000Z", "max_issues_repo_path": "projects/OG-Analytics/docs/notes/forex-option/ForexOption.tex", "max_issues_repo_name": "gsteri1/OG-Platform", "max_issues_repo_head_hexsha": "e682c31e69cadde06dd3776544913dde17fe41ba", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "projects/OG-Analytics/docs/notes/forex-option/ForexOption.tex", "max_forks_repo_name": "gsteri1/OG-Platform", "max_forks_repo_head_hexsha": "e682c31e69cadde06dd3776544913dde17fe41ba", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.1088082902, "max_line_length": 466, "alphanum_fraction": 0.7506009615, "num_tokens": 2323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914975839675, "lm_q2_score": 0.7718434873426302, "lm_q1q2_score": 0.5826762075992722}}
{"text": "\\section{Verifying Strategies}\n\nThe problem of \\emph{verifying strategies} is to decide whether a given partition $(W'_0, W'_1)$ of a parity \ngame $G$ along with two strategies $\\sigma_0: W'_0 \\cap V_0 \\rightarrow V$ and \n$\\sigma_1: W'_1 \\cap V_1 \\rightarrow V$ matches the partition into winning sets, i.e. whether $W'_0 = W_0$ \nand $W'_1 = W_1$ as well as $\\sigma_0$ and $\\sigma_1$ ensure the win of the respective player in the \nrespective winning set.\n\nThe verification process basically traverses three phases. The algorithm verifies in the first phase that \n$\\sigma_0$ and $\\sigma_1$ are welldefined in the sense that a strategy actually uses valid transitions in \nthe game graph. In the second phase the algorithm checks whether the strategies stay in their winning \nregions, i.e. $\\sigma_i[W'_i] \\subseteq W'_i$ for both player $i$, as well as whether the winning regions \nare closed w.r.t. the respective player.\n\nThe third phase finally checks whether the given strategies are winning strategies. In order to solve this \nproblem, the algorithm computes the subgames $G_i := (G|_{W'_i})|_{\\sigma_i}$ induced by the respective sets \nand strategies in question. Note that both $G_i$ are special games, namely one-player games. Hence, they\ncan be solved as described above.\n\nSince $\\sigma_i$ is a winning strategy on $W'_i$ iff $\\sigma_i$ is a winning strategy on $G_i$, it suffices \nto check whether the computed winning set $W^{G_i}_i$ correspond with \n$W'_i$, and if they do not, the counter strategy for player $1 - i$ can be used to extract a cycle in $G$ \nfollowing strategy $\\sigma_i$ that is won by $1-i$.\n\n\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End:\n", "meta": {"hexsha": "80ef793a13f7bc47b291f889b92e948b0223049f", "size": 1693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/verify.tex", "max_stars_repo_name": "tcsprojects/pgsolver", "max_stars_repo_head_hexsha": "88202c9452ccdcd4092280b4e76c31a16085d14c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2016-04-03T22:53:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T20:53:38.000Z", "max_issues_repo_path": "doc/verify.tex", "max_issues_repo_name": "tcsprojects/pgsolver", "max_issues_repo_head_hexsha": "88202c9452ccdcd4092280b4e76c31a16085d14c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2015-03-28T15:29:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-09-22T16:48:34.000Z", "max_forks_repo_path": "doc/verify.tex", "max_forks_repo_name": "tcsprojects/pgsolver", "max_forks_repo_head_hexsha": "88202c9452ccdcd4092280b4e76c31a16085d14c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2015-01-06T10:32:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T15:58:23.000Z", "avg_line_length": 52.90625, "max_line_length": 109, "alphanum_fraction": 0.7430596574, "num_tokens": 449, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5824964482306131}}
{"text": "\\pageId{pmathmlEnhancement}\n\n\\newcommand{\\ue}[1]{\\upConversionExample{#1}}\n\nThe first part of the up-conversion process takes the rather flat MathML normally\noutput by SnuggleTeX and creates ``enhanced'' presentation MathML that displays in the\nsame way whilst having a structure that is more amenable to further up-conversion,\nsuch as the conversion to Content MathML and Maxima input syntax.\n\nGenerally speaking, the enhancements made are as follows:\n\n\\begin{itemize}\n\\item Precedence of infix operators is inferred, as described in the table\nbelow, using \\verb|<mrow>...</mrow>| to make groupings and delimit inferred arguments.\n\\item Implicit multiplications are inferred, using \\verb|<mo>&InvisibleTimes;</mo>|\n(a.k.a.\\ \\verb|<mo>&#8290;</mo>| or \\verb|<mo>&#x2062;</mo>|)\nto represent this in the resulting MathML.\n\\item Applications of pre-defined functions like \\verb|sin| and explicitly assumed\nfunctions are inferred, using\n\\verb|<mo>&ApplyFunction;</mo>| (a.k.a.\\ \\verb|<mo>&#8289;</mo>| or \\verb|<mo>&#x2061;</mo>|)\nto represent this.\n\\end{itemize}\n\n\\subsection*{Precedence Table}\n\nThe enhancement process works on a list of one or more adjacent MathML element siblings by applying\neach test in the table below, in the order shown. When the first match occurs,\nthe siblings are mapped into a new form, with the process being repeated in each of the operands.\n\n\\begin{tabular}{|c|c|c|}\n\\hline\nTest & Result & Live Example \\\\\n\\hline\nInfix $,$ & Grouped into a \\verb|<mfenced>| with empty opener and closer and a child for each operand & \\ue{\\verb|x,y,z+1|} \\\\\nInfix $\\vee$ & Associative Grouping & \\ue{\\verb|x\\vee \\lnot y|} \\\\\nInfix $\\wedge$ & Associative Grouping & \\ue{\\verb|x\\vee y \\wedge z|} \\\\\nInfix relation operator(s) & Associative Grouping (all at same level) & \\ue{\\verb|1\\leq x-a < 2|} \\\\\nInfix $\\cup$ & Associative Grouping & \\ue{\\verb|A\\cup B \\cap C|} \\\\\nInfix $\\cap$ & Associative Grouping & \\ue{\\verb|A\\cup B \\cap C|} \\\\\nInfix $\\setminus$ & Left-associative Grouping & \\ue{\\verb|A\\setminus B+x|} \\\\\nInfix $+$ & Associative Grouping & \\ue{\\verb|x-1+y-2|} \\\\\nInfix $-$ & Left-associative Grouping & \\ue{\\verb|--x-y-z|} \\\\\nInfix $*$, $\\times$ and $\\cdot$ & Associative Grouping (all at same level) & \\ue{\\verb|2x+5\\times (y-4)|} \\\\\nInfix $/$ and $\\div$ & Left-associative Grouping & \\ue{\\verb|a/b/c/(1 \\div x)|} \\\\\n``Space'' operators (i.e. 
anything producing \\verb|<mspace/>|) & Treated as explicit multiplication, grouped associatively & \\ue{\\verb|a\\,b|} \\\\\nAny infix operator in unary context & Operator ``applied'' by wrapping in \\verb|<mrow/>| & \\ue{\\verb|-+x|} \\\\\nNo Infix Operator present & Split into subgroups (as defined below) and apply implicit product & \\ue{\\verb|\\sin x\\cos y|} \\\\\n``Atoms'' & Kept, applying conversion process to children & \\ue{\\verb|\\sqrt{x}|} \\\\\n\\hline\n\\end{tabular}\n\n\\subsection*{``No Infix Operator'' Handling}\n\nGroups of two or more MathML sibling elements that do not contain any infix\noperators are treated as an implicit product of adjacent sibling subgroups\nstarting with MathML elements satisfying any of the following conditions:\n\n\\begin{itemize}\n\\item The first sibling\n\\item The first sibling after an \\verb|<mfenced/>|\n\\item The first of one or more prefix operator or function siblings\n\\item The first non-postfix operator after one or more postfix operator siblings\n\\end{itemize}\n\nThe handling of these subgroups is described below. The following examples\nhopefully illuminate this process in more detail:\n\n\\subsubsection*{Grouping Examples}\n\n\\begin{tabular}{|c|c|c|}\n\\hline\nInput & Subgroups & Live Example \\\\\n\\hline\n\\verb|xy| & Kept together as one subgroup & \\ue{\\verb|xy|} \\\\\n\\verb|\\sin 2x\\cos y| & Split into \\verb|\\sin 2x| and \\verb|\\cos y| & \\ue{\\verb|\\sin 2x\\cos y|} \\\\\n\\verb|\\sin f(x)| & Treated as \\verb|\\sin (f(x))|. (\\verb|f| is assumed to be a function in these examples; this is configurable) & \\ue{\\verb|\\sin f(x)|} \\\\\n\\verb|\\min(x,y)z| & Split into \\verb|\\min(x,y)| and \\verb|z|. This demonstrates the rule for handling fences and appears reasonable here & \\ue{\\verb|\\min(x,y)z|} \\\\\n\\verb|\\sin(x+1)z| & Split into \\verb|\\sin(x+1)| and \\verb|z|. This is perhaps contentious, but allows brackets to be used to explicitly delimit function arguments & \\ue{\\verb|\\sin(x+1)z|} \\\\\n\\verb|x!y!| & Split into \\verb|x!| and \\verb|y!| & \\ue{\\verb|x!y!|} \\\\\n\\verb|\\cos x!y!| & Split into \\verb|\\cos x!| and \\verb|y!| & \\ue{\\verb|\\cos x!y!|} \\\\\n\\verb|\\sin\\cos x| & Kept together & \\ue{\\verb|\\sin\\cos x|} \\\\\n\\verb|xy\\sin\\cos 2ax!y!\\min(x,y)a| & Split into \\verb|xy|, \\verb|\\sin\\cos 2ax!|, \\verb|y!|, \\verb|\\min(x,y)| and \\verb|a| & \\ue{\\verb|xy\\sin\\cos 2ax!y!\\min(x,y)a|} \\\\\n\\hline\n\\end{tabular}\n\n\\subsection*{Sibling Subgroup Handling}\n\nOnce subgroups have been identified (as described above), each of these\nsubgroups is then split into:\n\n\\begin{itemize}\n\\item Zero or more prefix operators or unary/n-ary functions\n\\item Zero or more adjacent ``atoms''\n\\item Zero or more postfix operators\n\\end{itemize}\n\nThese are then treated as follows:\n\n\\begin{itemize}\n\n\\item Any postfix operators are ``applied'' from right to left to the\natoms before them, using a \\verb|<mo>&ApplyFunction;</mo>| to represent the\noperator applications.\n\nNote that the factorial operator is handled specially in that it only\ngets applied to the preceding item, so that \\verb|2ax!| is treated as\n\\verb|2a(x!)|, which fits in with common conventions.\n\n\\ue{\\verb|x!!|}\n\n\\item The atoms resulting after applying postfix operators are treated as an\nimplicit multiplication.\n\nRevisiting the \\verb|2ax!| example, we end up with a product of \\verb|2|,\n\\verb|a| and \\verb|x!|.\n\n\\ue{\\verb|2ax!|}\n\n\\item Any prefix functions (e.g. 
\\verb|\\sin|) and operators\n(e.g. \\verb|\\lnot|) are ``applied'' from left to right to\nwhatever is left, using a \\verb|<mo>&ApplyFunction;</mo>| to represent the\nfunction applications.\n\nSo, \\verb|\\sin\\cos 2ax!| is handled as it if were \\verb|\\sin(\\cos(2ax!))|.\n\n\\ue{\\verb|\\sin\\cos 2ax!|}\n\n\\end{itemize}\n", "meta": {"hexsha": "3eef9bc21669cf3005abfc24a4e61f7e4ac8cb92", "size": 5975, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/pmathml-enhancement.tex", "max_stars_repo_name": "davemckain/snuggletex", "max_stars_repo_head_hexsha": "1d694f9e07bef4dd99792cca5d5eaade76663f1f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/pmathml-enhancement.tex", "max_issues_repo_name": "davemckain/snuggletex", "max_issues_repo_head_hexsha": "1d694f9e07bef4dd99792cca5d5eaade76663f1f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/pmathml-enhancement.tex", "max_forks_repo_name": "davemckain/snuggletex", "max_forks_repo_head_hexsha": "1d694f9e07bef4dd99792cca5d5eaade76663f1f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3178294574, "max_line_length": 190, "alphanum_fraction": 0.7077824268, "num_tokens": 1816, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5824964482306131}}
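The subgrouping rules above can be prototyped in a few lines; the following sketch of ours (Python, over plain token strings rather than MathML elements, with a bare \\verb|)| standing in for a closed \\verb|<mfenced/>|) reproduces the grouping examples:\n%\n\\begin{verbatim}\nPREFIX = {"sin", "cos", "min", "lnot"}   # prefix functions/operators\nPOSTFIX = {"!"}                          # postfix operators\n\ndef subgroups(siblings):\n    groups, cur, prev = [], [], None\n    for s in siblings:\n        starts_new = (\n            prev is None                               # first sibling\n            or prev == ")"                             # after a fence\n            or (s in PREFIX and prev not in PREFIX)    # prefix run starts\n            or (prev in POSTFIX and s not in POSTFIX)  # after postfix run\n        )\n        if starts_new and cur:\n            groups.append(cur)\n            cur = []\n        cur.append(s)\n        prev = s\n    if cur:\n        groups.append(cur)\n    return groups\n\nassert subgroups(["sin", "2", "x", "cos", "y"]) == [["sin", "2", "x"],\n                                                    ["cos", "y"]]\nassert subgroups(["x", "!", "y", "!"]) == [["x", "!"], ["y", "!"]]\n\\end{verbatim}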
{"text": "\\section*{Introduction}\n\nIn this part, we present a development  for the \\coq{} proof assistant, after the work of Kirby and Paris. This formalization contains the following main parts:\n\n\\begin{itemize}\n\\item Representation in \\coq{} of hydras and hydra battles.\n\\item A proof that every battle is finite and won by Hercules. This proof is based on a \\emph{variant} which maps any hydra to an ordinal strictly less than $\\epsilon_0$ and is strictly decreasing along any battle.\n\n\\item Using a combinatorial toolkit designed by J.~Ketonen and R.~Solovay~\\cite{KS81}, we prove that, for any ordinal $\\mu<\\epsilon_0$, there exists no such variant mapping any hydra to an ordinal strictly less than $\\mu$. Thus, the complexity of $\\epsilon_0$ is really needed in the previous proof.\n\n\\item We prove a relation between the length of a ``classic''  kind of  battles \\footnote{This class is also called \\emph{standard} in this document (text and proofs). The \\emph{replication factor} of the hydra is exactly $i$ at the $i$-th round of the battle (see Sect~\\vref{sect:replication-def}).}\nand the Wainer-Hardy hierarchy of ``rapidly growing functions'' $H_\\alpha$~\\cite{Wainer1970}. The considered class of battles, which we call \\emph{standard},  is the most considered one in the scientific  literature (including popularization).\n\\end{itemize}\n\n\nSimply put, this document tries to combine the scientific interest of two articles~\\cite{KP82, KS81} and a book~\\cite{schutte} with the playful activity of truly proving theorems.\nWe hope  that such a work, besides exploring a nice piece of discrete mathematics,\nwill show how \\coq{} and its standard library are well fitted to help us to understand some non-trivial mathematical developments, and also to experiment the constructive parts of  the proof through functional programming.\n\n We also hope to provide a little clarification on infinity (both potential and actual) through the notions of function, computation, limit,\n type and proof.\n\n\n\n%\\section{Remarks}\n\n\\subsection*{Difference from Kirby and Paris's work}\nIn~\\cite{KP82}, Kirby and Paris show  that there is no proof of termination of all hydra battles in Peano Arithmetic (PA).\nSince we are used to writing proofs in higher order logic, the restriction to PA was quite unnatural for us. So we chose to prove another statement without any reference to PA, by considering a class of proofs indexed by ordinal numbers up to $\\epsilon_0$.\n\n\\subsection*{State of the development}\nThe \\coq{} scripts herein are in constant development since our contribution~\\cite{CantorContrib} on  notations for the ordinals $\\epsilon_0$ and $\\Gamma_0$.\nWe added new material: axiomatic definitions of countable ordinals after Sch\u00fctte~\\cite{schutte}, combinatorial aspects of $\\epsilon_0$, after Ketonen and Solovay~\\cite{KS81} and Kirby and Paris~\\cite{KP82}, recent \\coq{} technology: type classes, equations, etc.\n\nWe are now working in order to make clumsy proofs more readable, simplify definitions, and ``factorize'' proofs as much as possible. \nMany possible improvements are suggested as ``todo''s or ``projects'' in this text.\n\n\n\\section*{Future work (projects)}\n\\index{hydras}{Projects}\n\nThis document and the proof scripts are far from being complete.\n\nFirst, there must be a lot of typos to correct, references and index items to add. 
Many proofs are too complex and should be simplified, etc.\n\nThe following extensions are planned, but help is needed:\n\n\\begin{itemize}\n\\item Semi-automatic tactics for proving inequalities $\\alpha < \\beta$, even when $\\alpha$ and $\\beta$ are not closed terms.\n\\item Extension to $\\Gamma_0$ (in Veblen normal form)\n\\item More lemmas about hierarchies of rapidly growing functions, and their relationship \n    with primitive recursive functions and provability in Peano arithmetic \n(following~\\cite{KS81, KP82}).\n\\item From \\coq's point of view, this development could be used as an illustration of the evolution of the software, as new libraries and sets of tactics make it possible to simplify the proofs.\n\\end{itemize}\n\n\\subsection*{Main references}\n\nIn our development, we adapt the definitions and prove many theorems which\nwe found in the following articles. \n\\begin{itemize}\n\\item ``Accessible independence results for Peano arithmetic'' by Laurie Kirby and Jeff Paris~\\cite{KP82}\n\\item ``Rapidly growing Ramsey Functions'' by Jussi Ketonen and Robert Solovay~\\cite{KS81}\n\\item ``The Termite and the Tower'', by Will Sladek~\\cite{Sladek07thetermite}\n\\item Chapter V of ``Proof Theory'' by Kurt Sch\u00fctte~\\cite{schutte}\n\\end{itemize}\n\n\n\n\n\n\n\\chapter{Hydras and hydra games}\n\n\\label{sec:orgheadline91}\n\\label{chapter:hydras}\n\n\n\n\nThis chapter is dedicated to the representation of hydras and of the rules of the hydra game in \\coq's specification language:~\\gallina. \n\n\nTechnically, a \\emph{hydra} is just a finite ordered tree, each node of which \nhas any number of sons. Contrary to the computer science tradition, we display the hydras \nwith the heads up and the foot (i.e., the root of the tree) down.\nFig.~\\ref{fig:Hy} represents such a hydra, which will be referred to as \\texttt{Hy} in our examples (please look at the \nmodule~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}). \n\\emph{For a less formal description of hydras, please see \n\\url{https://www.smbc-comics.com/comic/hydra}.}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.6]\n\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4) {$\\bullet$};\n\\node (N3) at (2,6) {$\\bullet$};\n\\node (H0) at (0,2) {$\\Smiley[2][vertfluo]$};\n\\node (H1) at (0,8) {$\\Smiley[2][vertfluo]$};\n\\node (H2) at (4,8) {$\\Smiley[2][vertfluo]$};\n\\node (H4) at (4,2) {$\\Smiley[2][vertfluo]$};\n\\node (H5) at (4,4) {$\\Smiley[2][vertfluo]$};\n\\draw (foot) -- (N1)[very thick] ;\n\\draw (N1) -- (N2);\n\\draw (N2) -- (N3);\n\\draw (N3) to [bend left= 10]  (H1) ;\n\\draw (N3) to [bend right= 16] (H2);\n\\draw (foot) to [bend left= 10]  (H0) ;\n\\draw (foot) to [bend right = 10] (H4) ;\n\\draw (N1) to [bend right= 16] (H5);\n\\end{tikzpicture}\n\\caption{The hydra \\texttt{Hy} \\label{fig:Hy}}\n\\end{figure}\n\n\n\nWe use a specific vocabulary for talking about hydras. 
Table~\\ref{tab:hyd2tree} shows the correspondence between our terminology and the usual vocabulary for trees in computer science.\n\n\n\\begin{table}[h]\n  \\centering\n  \\begin{tabular}{ll}\nHydras & Finite rooted trees\\\\\n\\hline\nfoot & root\\\\\nhead & leaf\\\\\nnode & node\\\\\nsegment & (directed) edge \\\\\nsub-hydra & subtree\\\\\ndaughter & immediate subtree\\\\\n\\end{tabular}\n  \\caption{Translation from hydras to trees}\n  \\label{tab:hyd2tree}\n\\end{table}\n\n\nThe hydra \\texttt{Hy} has a \\emph{foot} (below), five \\emph{heads}, and eight \\emph{segments}. \nWe leave it to the reader to define various parameters such as the height, the size, the highest arity (number of sons of a node) of a hydra. In our example, these parameters have the respective values $4$, $9$ and $3$.\n\n\n\n\n\\subsection{The rules of the game}\n\n\\label{sec:orgheadline44}\n\\label{sect:replication-def}\n\nA \\emph{hydra battle} is a fight between Hercules and the Hydra. \nMore formally, a battle is a sequence of \\emph{rounds}.\nAt each round:\n\\begin{itemize}\n\\item If the hydra is composed of just one head, the battle is finished\nand Hercules is the winner.\n\\item Otherwise, Hercules chops off \\emph{one} head of the hydra,\n\n\\begin{itemize}\n\\item If the head is at distance 1 from the foot, the head is just lost by the hydra, with no further reaction.\n\\item Otherwise, let us denote by \\(r\\) the node that was at distance \\(2\\) from \nthe removed head in the direction of the foot, and consider the sub-hydra \\(h'\\) of the hydra, whose root is \\(r\\)\\footnote{$h'$ will be called ``the wounded part of the hydra'' in the subsequent text. In Figures~\\vref{fig:Hy2} and~\\vref{fig:Hy4}, this sub-hydra is displayed in red.}. Let $n$ be some natural number.\nThen $h'$ is replaced by $n+1$ copies of \\(h'\\), which share the same root $r$.\nThe \\emph{replication factor} $n$ may be different (and generally is) at each round of the fight.\nIt may be chosen by the hydra, according to its strategy, or imposed by some \nparticular rule. In many presentations of hydra battles, this number is increased by $1$ at each round. In the following presentation, we will also consider battles where the hydra is free to choose its replication factor at each round of the battle\\footnote{Let us recall that, if the chopped-off head was at distance 1 from the foot, the replication factor is meaningless.}.\n\\end{itemize}\n\\end{itemize}\n\n\n\nNote that the description given in~\\cite{KP82} of the replication process in hydra battles is also semi-formal. \n\n\\label{original-rules}\n\n\\begin{quote}\n  ``From the node that used to be attached to the head which was just chopped off, traverse one \nsegment towards the root until the next node is reached. From this node sprout $n$ replicas of \nthat part of the hydra (after decapitation) which is ``above'' the segment just traversed, i.e., those \nnodes and segments from which, in order to reach the root, this segment would have to be \ntraversed. If the head just chopped off had the root as one of its nodes, no new head is grown. ''\n\\end{quote}\n\nMoreover, we note that this description is in \\emph{imperative} terms. 
In order to formally study the properties of hydra battles, we prefer to use a mathematical vocabulary, i.e., graphs, relations, functions, etc.\nThus, the replication process will be represented as a binary relation on a data type \\texttt{Hydra},\nlinking the state of the hydra \\emph{before} and \\emph{after} the transformation.\nA battle will thus be represented as a sequence of terms of type \\texttt{Hydra}, respecting the rules of the game.\n\n\n\n\n\n\\subsection{Example}\nLet us start a battle between Hercules and the hydra \\texttt{Hy} of Fig.~\\ref{fig:Hy}.\n\nAt the first round, Hercules chooses to chop off the rightmost head of \\texttt{Hy}.\nSince this head is near the floor, the hydra simply loses this head. Let us call \n \\texttt{Hy'} the resulting state of the hydra, represented in Fig.~\\vref{fig:Hy-prime}.\n\nNext, assume Hercules chooses to chop off one of the two highest heads of \\texttt{Hy'}, for instance the rightmost one. Fig.~\\vref{fig:Hy2} represents the broken segment in dashed lines, and the part that will be replicated in red. Assume also that the hydra decides to add 4 copies of the red part\\footnote{In other words, the replication factor at this round is equal to $4$.}. We obtain a new state \\texttt{Hy''} depicted in Fig.~\\ref{fig:Hy3}.\n\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.6]\n\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4) {$\\bullet$};\n\\node (N3) at (2,6) {$\\bullet$};\n\\node (H0) at (0,2) {$\\Smiley[2][vertfluo]$};\n\\node (H1) at (0,8) {$\\Smiley[2][vertfluo]$};\n\\node (H2) at (4,8) {$\\Smiley[2][vertfluo]$};\n\\node (H5) at (4,4) {$\\Smiley[2][vertfluo]$};\n%\\node (H4) at (6,0) {$\\Xey[2][lightgray]$};\n\\draw (foot) -- (N1)[very thick] ;\n\\draw (N1) -- (N2);\n\\draw (N2) -- (N3) ;\n\\draw (N3) to [bend left= 10]  (H1) ;\n\\draw (N3) to [bend right= 16] (H2);\n\\draw (foot) to [bend left= 10]  (H0) ;\n\\draw (N1) to [bend right= 16] (H5);\n\\end{tikzpicture}\n\n\\caption{\\texttt{Hy'}: the state  of \\texttt{Hy} after one round \\label{fig:Hy-prime}}\n\\end{figure}\n\n\n\\begin{figure}[hp]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4)  {{\\color{lightred}$\\bullet$}};\n\\node (N3) at (2,6) {{\\color{lightred}$\\bullet$}};\n\\node (H0) at (0,2) {$\\Smiley[2][vertfluo]$};\n\\node (H1) at (0,8) {$\\Sey[2][lightred]$};\n%\\node (H2) at (5,0) {$\\Xey[2][lightgray]$};\n\\node (H5) at (4,4) {$\\Smiley[2][vertfluo]$};\n\\node (ex) at (5,8) {};\n\\draw (foot) -- (N1)[very thick] ;\n\\draw (N1) -- (N2);\n\\draw  (N2) -- (N3)[draw=lightred];\n\\draw (N3) to   [bend left= 10](H1) [draw=lightred];\n\\draw [dashed] (N3) to [bend left= 10](ex);\n\\draw (foot) to [bend left= 10]  (H0) ;\n\\draw (N1) to [bend right= 16] (H5);\n\\end{tikzpicture}\n\\caption{A second beheading}\n\\label{fig:Hy2}\n\\end{figure}\n\n\\begin{figure}[hp]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.6]\n\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4) {$\\bullet$};\n\\node (N3) at (2,6) {{\\color{lightred}$\\bullet$}};\n\\node (H1) at (0,8) {$\\Smiley[2][vertfluo]$};\n\\node (H11) at (2,8) {$\\Smiley[2][vertfluo]$};\n\\node (H12) at (4,8) {$\\Smiley[2][vertfluo]$};\n\\node (H13) at (6,8) {$\\Smiley[2][vertfluo]$};\n\\node (H14) at (8,8) {$\\Smiley[2][vertfluo]$};\n\n\\node (N3) at (1,6) {$\\bullet$};\n\\node (N31) at (2,6) 
{$\\bullet$};\n\\node (N32) at (3,6) {$\\bullet$};\n\\node (N33) at (4,6) {$\\bullet$};\n\\node (N34) at (5,6) {$\\bullet$};\n\n\\node (H0) at (0,2) {$\\Smiley[2][vertfluo]$};\n\\node (H5) at (4,4) {$\\Smiley[2][vertfluo]$};\n\\draw (foot) -- (N1)[very thick] ;\n\\draw (N1) -- (N2);\n\\draw (N2) -- (N3);\n\\draw (N2) -- (N31);\n\\draw (N2) -- (N32);\n\\draw (N2) -- (N33);\n\\draw (N2) -- (N34);\n\\draw (N3) to   [bend left= 10](H1) ;\n\\draw (N31) to   [bend left= 10](H11) ;\n\\draw (N32) to   [bend left= 10](H12) ;\n\\draw (N33) to   [bend left= 10](H13) ;\n\\draw (N34) to   [bend left= 10](H14) ;\n\\draw (foot) to [bend left= 10]  (H0) ;\n\\draw (N1) to [bend left= 10]  (H5) ;\n\\end{tikzpicture}\n\\caption{\\texttt{Hy''}: the state of \\texttt{Hy} after two rounds \\label{fig:Hy3}}\n\\end{figure}\n\nFigs.~\\ref{fig:Hy4} and~\\vref{fig:Hy5} represent a possible third round of the battle, with a replication factor equal to $2$. Let us call \\texttt{Hy'''} the state of the hydra after that third round.\n\n\\begin{figure}[hp]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.6]\n\n\\node (foot) at (2,0)  {{\\color{lightred}$\\bullet$}};\n\\node (N1) at (2,2) {{\\color{lightred}$\\bullet$}};\n\\node (N2) at (2,4) {{\\color{lightred}$\\bullet$}};\n\\node (N3) at (2,6) {{\\color{lightred}$\\bullet$}};\n\\node (exN4) at (4,4) {};\n\\node (H1) at (0,8) {$\\Sey[2][lightred]$};\n\\node (H11) at (2,8) {$\\Sey[2][lightred]$};\n\\node (H12) at (4,8) {$\\Sey[2][lightred]$};\n\\node (H13) at (6,8) {$\\Sey[2][lightred]$};\n\\node (H14) at (8,8) {$\\Sey[2][lightred]$};\n\n\\node (N3) at (1,6) {{\\color{lightred}$\\bullet$}};\n\\node (N31) at (2,6) {{\\color{lightred}$\\bullet$}};\n\\node (N32) at (3,6) {{\\color{lightred}$\\bullet$}};\n\\node (N33) at (4,6) {{\\color{lightred}$\\bullet$}};\n\\node (N34) at (5,6) {{\\color{lightred}$\\bullet$}};\n\n\\node (H0) at (0,2) {$\\Smiley[2][vertfluo]$};\n%\\node (H5) at (4,0) {$\\Xey[2][lightgray]$};\n\\draw (foot) -- (N1)[very thick,draw=lightred] ;\n\\draw (N1) -- (N2)[draw=lightred];\n\\draw (N2) -- (N3)[draw=lightred];\n\\draw (N2) -- (N31)[draw=lightred];\n\\draw (N2) -- (N32)[draw=lightred];\n\\draw (N2) -- (N33)[draw=lightred];\n\\draw (N2) -- (N34)[draw=lightred];\n\\draw (N3) to   [bend left= 10](H1) [draw=lightred];\n\\draw (N31) to   [bend left= 10](H11) [draw=lightred];\n\\draw (N32) to   [bend left= 10](H12) [draw=lightred];\n\\draw (N33) to   [bend left= 10](H13) [draw=lightred];\n\\draw (N34) to   [bend left= 10](H14) [draw=lightred];\n\\draw (foot) to [bend left= 10]  (H0) ;\n\\draw [dashed] (N1) to  [bend left= 10](exN4);\n\\end{tikzpicture}\n\\caption{A third beheading (wounded part in red) \\label{fig:Hy4}}\n\\end{figure}\n\n\\begin{figure}[hp]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.4]\n\n\\node (foot) at (10,0) {$\\bullet$};\n\n\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4) {$\\bullet$};\n\\node (N3) at (2,6) {{\\color{lightred}$\\bullet$}};\n\\node (H1) at (0,8) {$\\Smiley[1][vertfluo]$};\n\\node (H11) at (2,8) {$\\Smiley[1][vertfluo]$};\n\\node (H12) at (4,8) {$\\Smiley[1][vertfluo]$};\n\\node (H13) at (6,8) {$\\Smiley[1][vertfluo]$};\n\\node (H14) at (8,8) {$\\Smiley[1][vertfluo]$};\n\n\\node (N3) at (1,6) {$\\bullet$};\n\\node (N31) at (2,6) {$\\bullet$};\n\\node (N32) at (3,6) {$\\bullet$};\n\\node (N33) at (4,6) {$\\bullet$};\n\\node (N34) at (5,6) {$\\bullet$};\n\n\\node (H0) at (-3,3) {$\\Smiley[1][vertfluo]$};\n\n\\draw (foot) to [bend left=10] (N1)[very thick] ;\n\\draw (N1) -- (N2);\n\\draw (N2) -- 
(N3);\n\\draw (N2) -- (N31);\n\\draw (N2) -- (N32);\n\\draw (N2) -- (N33);\n\\draw (N2) -- (N34);\n\\draw (N3) to   [bend left= 10](H1) ;\n\\draw (N31) to   [bend left= 10](H11) ;\n\\draw (N32) to   [bend left= 10](H12) ;\n\\draw (N33) to   [bend left= 10](H13) ;\n\\draw (N34) to   [bend left= 10](H14) ;\n\\draw (foot) to [bend left = 15]  (H0) ;\n\n\n% second copy \n\\node (N01) at (12,2) {$\\bullet$};\n\\node (N02) at (12,4) {$\\bullet$};\n\\node (N03) at (12,6) {{\\color{lightred}$\\bullet$}};\n\\node (H001) at (10,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0011) at (12,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0012) at (14,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0013) at (16,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0014) at (18,8) {$\\Smiley[1][vertfluo]$};\n\n\\node (N03) at (11,6) {$\\bullet$};\n\\node (N031) at (12,6) {$\\bullet$};\n\\node (N032) at (13,6) {$\\bullet$};\n\\node (N033) at (14,6) {$\\bullet$};\n\\node (N034) at (15,6) {$\\bullet$};\n\n\\draw (foot) -- (N01)[very thick] ;\n\\draw (N01) -- (N02);\n\\draw (N02) -- (N03);\n\\draw (N02) -- (N031);\n\\draw (N02) -- (N032);\n\\draw (N02) -- (N033);\n\\draw (N02) -- (N034);\n\\draw (N03) to   [bend left= 10](H001) ;\n\\draw (N031) to   [bend left= 10](H0011) ;\n\\draw (N032) to   [bend left= 10](H0012) ;\n\\draw (N033) to   [bend left= 10](H0013) ;\n\\draw (N034) to   [bend left= 10](H0014) ;\n\n% third copy \n\\node (N001) at (22,2) {$\\bullet$};\n\\node (N002) at (22,4) {$\\bullet$};\n\\node (N003) at (22,6) {{\\color{lightred}$\\bullet$}};\n\\node (H001) at (20,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0011) at (22,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0012) at (24,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0013) at (26,8) {$\\Smiley[1][vertfluo]$};\n\\node (H0014) at (28,8) {$\\Smiley[1][vertfluo]$};\n\n\\node (N003) at (21,6) {$\\bullet$};\n\\node (N0031) at (22,6) {$\\bullet$};\n\\node (N0032) at (23,6) {$\\bullet$};\n\\node (N0033) at (24,6) {$\\bullet$};\n\\node (N0034) at (25,6) {$\\bullet$};\n\n\\draw (foot) -- (N001)[very thick] ;\n\\draw (N001) -- (N002);\n\\draw (N002) -- (N003);\n\\draw (N002) -- (N0031);\n\\draw (N002) -- (N0032);\n\\draw (N002) -- (N0033);\n\\draw (N002) -- (N0034);\n\\draw (N003) to   [bend left= 10](H001) ;\n\\draw (N0031) to   [bend left= 10](H0011) ;\n\\draw (N0032) to   [bend left= 10](H0012) ;\n\\draw (N0033) to   [bend left= 10](H0013) ;\n\\draw (N0034) to   [bend left= 10](H0014) ;\n\\end{tikzpicture}\n\\caption{The configuration \\texttt{Hy'''} of \\texttt{Hy} \\label{fig:Hy5}}\n\\end{figure}\n\\FloatBarrier\n\nWe leave it to the reader  to guess the following  rounds of the battle \\dots\n % Please keep in mind that, in this \n% the hydra is free to chose any number of replications at each time, whereas\n% Hercules chops only one head per round.\n\n% Let us precise that, in this game, Hercules wins if the hydra is eventually reduced \n% to a single head. \n% We know from~\\cite{KP82} that, whichever the initial configuration of the\n% hydra, and the strategies of both players, Hercules eventually wins. 
The \n% aforementioned paper also shows that there does not exist any \\emph{simple} proof of this result.\n\n\n\\section{Hydras and their representation in \\emph{Coq}}\n\\label{sec:orgheadline48}\n\n\\index{hydras}{Library Hydra!Types!Hydra}\n\\index{hydras}{Library Hydra!Types!Hydrae}\n\n\nIn order to describe trees where each node can have an arbitrary (but finite) number of sons, it is usual to define a type where each node carries a \\emph{forest}, \\emph{i.e.,} a list of trees\n(see for instance Chapter 14, pages 400-406 of \\cite{BC04}).\n\nFor this purpose, we define two \\emph{ad-hoc} mutually inductive types, where \\texttt{Hydra} is the main type, and \\texttt{Hydrae} is a helper for describing finite sequences of hydras.\n\\label{types:Hydra}\n\\label{types:Hydrae}\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#Hydra}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/HydraDef}\n\n\n%\\index{To do}\n\\index{hydras}{Projects}\n\n\\begin{project}\nLook for an existing library on trees with nodes of arbitrary arity, in order to replace this ad-hoc type with something more generic.\n\\end{project}\n\n\n\\index{hydras}{Projects}\n\n\\begin{project}\n\n Another very similar representation could use the \\texttt{list} type family instead of the specific \ntype \\texttt{Hydrae}:\n\n\n\\input{movies/snippets/Hydra_Definitions/HydraAlt}\n\nUsing this representation, re-define all the constructions of this chapter.\nYou will probably have to use patterns described for instance in~\\cite{BC04} or the archives of the Coq-club~\\cite{Coq}.\n\n  \n\\end{project}\n\n\n\\index{hydras}{Projects}\n\n\\begin{project}\nThe type \\texttt{Hydra} above describes hydras as \\emph{plane trees}, i.e., as drawn on a sheet of paper or computer screen. Thus, hydras are \\emph{oriented},\nand it is appropriate to consider a \\emph{leftmost} or \\emph{rightmost} head of\nthe beast. It could be interesting to consider another representation, in which\nevery non-leaf node has a \\emph{multi-set} -- not an ordered list -- of daughters.\n\\end{project}\n\n\\subsubsection{Abbreviations}\n\nWe provide several notations for hydra patterns which occur often in our developments. \n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#head}{Hydra.Hydra\\_Definitions}}\n\n\n\\input{movies/snippets/Hydra_Definitions/headsEtc}\n\nFor instance, the hydra \\texttt{Hy} of Figure~\\vref{fig:Hy} is defined in \\emph{Gallina} as follows:\n\n\\vspace{4mm}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html\\#Hy}{Hydra.Hydra\\_Examples}}\n\n\\input{movies/snippets/Hydra_Examples/Hy}\n\n\n\nHydras quite frequently contain multiple adjacent copies of the same subtree. 
The following functions\nwill help us to describe and reason about replications in hydra battles.\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#hcons_mult}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/hconsMult}\n\n\n\n\\vspace{4mm}\n\n\n\nFor instance, the hydra \\texttt{Hy''} of Fig.~\\vref{fig:Hy3} can be defined in \\coq{} as follows:\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}}\n\n\\input{movies/snippets/Hydra_Examples/HySecond}\n\n\n\n\n\n\n\\subsubsection{Recursive functions on type \\texttt{Hydra}}\n\\label{sec:orgheadline41}\n\\label{sec:hsize-def}\n\n\n\n\nIn order to define a recursive function over the type \\texttt{Hydra}, one has to consider the three constructors \n\\texttt{node}, \\texttt{hnil} and \\texttt{hcons} of the mutually inductive types \\texttt{Hydra} and \\texttt{Hydrae}. \nLet us define for instance the function which computes the number of nodes of any hydra:\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/hsize}\n\n\\input{movies/snippets/Hydra_Examples/HySize}\n\n\nLikewise, the \\emph{height} (maximum distance between the foot and a head) \nis defined by mutual recursion:\n\n\\input{movies/snippets/Hydra_Definitions/height}\n\n\\input{movies/snippets/Hydra_Examples/HyHeight}\n\n\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nDefine a function \\texttt{max\\_degree: Hydra $\\arrow$ nat} which returns the highest degree of a node in any hydra. For instance, the evaluation of the term \\texttt{(max\\_degree Hy)} should return $3$.\n\\end{exercise}\n\n\\subsection{Induction principles for hydras}\n\\label{sec:orgheadline42}\n\n\nIn this section, we show how induction principles are used to prove properties of the type \n\\texttt{Hydra}. Let us consider for instance the following statement:\n\\begin{quote}\n  `` The height of any hydra is strictly less than its size. ''\n\\end{quote}\n\n\n\n\\subsubsection{A failed attempt}\n\nOne may try to use the default tactic of proof by induction, which corresponds to an application of the automatically generated induction principle for type \\texttt{Hydra}:\n\n\\input{movies/snippets/Hydra_Examples/HydraInd}\n\nLet us start a simple proof by induction.\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}}\n\n\\input{movies/snippets/Hydra_Examples/BadInductiona}\n\n\n\nWe might be tempted to do an induction on the sequence \\texttt{s}:\n\n\\input{movies/snippets/Hydra_Examples/BadInductionb}\n\nThe first subgoal is trivial.\n\n\\input{movies/snippets/Hydra_Examples/BadInductionc}\n\nLet us look at the second subgoal of the induction.\n\n\\input{movies/snippets/Hydra_Examples/BadInductiond}\n\nWe notice that this subgoal does not contain any hypothesis\non the height and size of the hydra \\texttt{h}. So, it looks hard to prove the conclusion. 
Let's stop.\n\n\\input{movies/snippets/Hydra_Examples/BadInductione}\n\n\\subsubsection{A principle of mutual induction}\nIn order to get an appropriate induction scheme for the types \n\\texttt{Hydra} and \\texttt{Hydrae}, we can use \\coq{}'s command \\texttt{Scheme}.\n\n\n\\index{coq}{Mutually inductive types}\n\\index{coq}{Commands!Scheme}\n\n\\input{movies/snippets/Hydra_Definitions/HydraRect2}\n\n\\input{movies/snippets/Hydra_Examples/HydraRect2Check}\n\n\n\n\n\n\\subsubsection{A correct proof}\n\nLet us now use \\texttt{Hydra\\_rect2} for proving that the height of any hydra is strictly less than its size.\nUsing this scheme requires an auxiliary predicate, called \\texttt{P0} in \\texttt{Hydra\\_rect2}'s statement. \n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/hForall}\n\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}}\n\\input{movies/snippets/Hydra_Examples/heightLtSizea}\n\n\n\n\nThe first subgoal is easily solvable, using some arithmetic.\nThe second and third ones are\nalmost trivial. We let the reader look at the source. \n\\input{movies/snippets/Hydra_Examples/heightLtSizez}\n\n\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nIt happens very often that, in the proof of a proposition of the form \n(\\texttt{$\\forall\\,$ h:Hydra, $P$ h}), the predicate \\texttt{P0}\nis (\\texttt{h\\_forall $P$}). Design a tactic for induction on hydras that frees the user from explicitly binding \\texttt{P0}, and solves trivial subgoals. Apply it to write a shorter proof of \\texttt{height\\_lt\\_size}.\n\\end{exercise}\n \n\n\n\\section{Relational description of hydra battles}\n\n\nIn this section, we represent the rules of hydra battles as a binary relation associated with\na \\emph{round}, i.e., an interaction composed of the following two actions:\n\\begin{enumerate}\n\\item Hercules chops off one head of the hydra.\n\\item Then, the hydra replicates the wounded part (if the head is at distance $\\geq 2$ from the foot).\n\\end{enumerate}\nThe relation associated with each round of the battle is parameterized by the \\emph{expected} replication factor (irrelevant if the chopped head is at distance 1 from the foot,\nbut present for consistency's sake).\n\nIn our description, we will use the following naming convention: if $h$ represents the configuration of the hydra before a round, then the configuration of $h$ after this round will be called $h'$.\n Thus, we are going to define a proposition (\\texttt{round\\_n $n\\;h\\;h'$}) whose intended meaning will be `` the hydra $h$ is transformed into $h'$ in a single round of a battle, with the expected replication factor $n$ ''.\n\n\nSince the replication of parts of the hydra depends on the distance of the chopped head from the foot, we decompose our description into two main cases, in the form of a collection of (mutually) inductive predicates over the types \\texttt{Hydra} and \\texttt{Hydrae}.\n\nThe mutually exclusive cases we consider are the following:\n\\begin{itemize}\n\\item \\textbf{R1}: The chopped-off head was at distance 1 from the foot.\n\\item \\textbf{R2}: The chopped-off head was at a distance greater than or equal to $2$ from the foot.\n\\end{itemize}\n\n\n\n\\subsection{Chopping off a head at distance 1 from the foot (relation R1)}\n\nIf Hercules chops off a head near the floor, there is no replication at all. 
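In other words, the wounded hydra is obtained by deleting one leaf somewhere in the sequence of daughters of the foot. Such a deletion relation on sequences may be sketched as follows (the constructor names are hypothetical; the library's actual definitions are quoted below):\n\n\\begin{Coqsrc}\n(* Sketch: (S0 s s') holds when s' is s minus one head, where head\n   is the abbreviation for a leaf introduced above. *)\nInductive S0 : Hydrae -> Hydrae -> Prop :=\n| S0_first : forall s, S0 (hcons head s) s\n| S0_rest  : forall h s s', S0 s s' -> S0 (hcons h s) (hcons h s').\n\\end{Coqsrc}\n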
This is the role of the auxiliary predicate \\texttt{S0}, associated with the removal of one head from a sequence of hydras.\n\n\n\\vspace{4pt}\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/S0Def}\n\\input{movies/snippets/Hydra_Definitions/R1Def}\n\n\\subsubsection{Example}\n\\label{sec:orgheadline45}\n\nLet us represent in \\coq{} the transformation of the hydra of Fig.~\\vref{fig:Hy} into\nthe configuration represented in Fig.~\\ref{fig:Hy-prime}.\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}}\n\n\n\\input{movies/snippets/Hydra_Examples/Hy1}\n\n\n\\subsection{Chopping off a head at distance \\texorpdfstring{$\\geq 2$}{>= 2} from the foot (relation R2) }\n\n\nLet us now consider beheadings where the chopped-off head is at distance greater than or equal to $2$ from the foot. All the following relations are parameterized by the replication factor $n$.\n\n Let $s$ be a sequence of hydras. \nThe proposition (\\texttt{S1 n s s'}) holds if $s'$ is obtained by replacing some element $h$ of $s$ by \n$n+1$ copies of $h'$, where the proposition (\\texttt{R1 h h'}) holds; in other words, $h'$ is just $h$, without the chopped-off head. \\texttt{S1} is an inductive relation with two constructors that allow us to choose the position in $s$ of the wounded sub-hydra $h$.\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#S1}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/S1Def}\n\n\nThe rest of the definition is composed of two mutually inductive relations on hydras and sequences of hydras. The first constructor of \\texttt{R2} describes the case where the chopped head is exactly at height $2$. The other constructors allow us to consider beheadings at height strictly greater than $2$.\n\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#R2}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/R2Def}\n\n\\subsubsection{Example}\nLet us prove the transformation of \\texttt{Hy'} into \\texttt{Hy''} (see Fig.~\\vref{fig:Hy3}). We use an experimental set of tactics for specifying the place where the interaction between Hercules and the hydra takes place. \n\n\n\\vspace{4pt}\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Examples.html}{Hydra.Hydra\\_Examples}}
\n\n\\input{movies/snippets/Hydra_Examples/R2Example}\n\n\nThe reader is encouraged to look at all the successive subgoals of this example.\n\\emph{Please consider also exercise~\\vref{exo:interactive-battle}.}\n\n\n\\subsection{Binary relation associated with a round}\n\nWe combine the two cases above into a single relation.\nFirst, we define the relation \\texttt{(round\\_n n h h')} where \\texttt{n} is the expected number of replications (irrelevant in the case of an \\texttt{R1}-transformation).\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#round_n}{Hydra.Hydra\\_Definitions}}\n\n\\index{hydras}{Library Hydra!Predicates!round\\_n}\n\n\\input{movies/snippets/Hydra_Definitions/roundNDef}\n\nBy abstraction over \\texttt{n}, we define a \\emph{round} (small step) of a battle:\n\n\\index{hydras}{Library Hydra!Predicates!round}\n\\label{sect:infix-round}\n\n\\input{movies/snippets/Hydra_Definitions/roundDef}\n\n\\index{hydras}{Projects}\n\n\\begin{project}\nGive a direct translation of Kirby and Paris's description of hydra battles (quoted on page~\\pageref{original-rules}) and prove that our relational description is consistent with theirs.\n\\end{project}\n\n\n\\subsection{Rounds and battles}\n\n\nUsing library \\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Relations.Relation_Operators.html}{Relations.Relation\\_Operators}, we define \\texttt{round\\_plus}, the transitive closure of \\texttt{round}, and \\texttt{round\\_star}, the reflexive and transitive closure of \\texttt{round}.\n\n\\label{sect:infix-rounds} \n\n\\input{movies/snippets/Hydra_Definitions/roundPlus}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\n  Prove that if \\texttt{$h$ -+-> $h'$}, then\n  the height of $h'$ is less than or equal to the height of $h$.\n\n\\end{exercise}\n\n\\begin{remark}\n\\label{remark:transitive-closure}\n\\coq's library \\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Relations.Relation_Operators.html}{Coq.Relations.Relation\\_Operators} \ncontains three logically equivalent definitions of the transitive closure of a binary relation. This equivalence is proved in \n\\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Relations.Operators_Properties.html}{Coq.Relations.Operators\\_Properties}. \n\nWhy three definitions for a single mathematical concept?\nEach definition generates an associated induction principle. \n According to the form of the statement one would like to prove, there is a ``best choice'':\n\n\\begin{itemize}\n\\item For proving $\\forall y, x\\,R^+\\,y \\;\\arrow\\; P\\,y$, prefer \n\\texttt{clos\\_trans\\_n1}\n\\item For proving $\\forall x,\\,x\\,R^+\\,y \\;\\arrow\\; P\\,x$, prefer \\texttt{clos\\_trans\\_1n}\n\\item For proving $\\forall x\\,y, \\,x\\,R^+\\,y \\;\\arrow\\;P\\,x\\,y$,  \nprefer \\texttt{clos\\_trans}.\n\\end{itemize}\nBut there is no ``wrong choice'' at all: the equivalence lemmas in \\linebreak \n\\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Relations.Operators_Properties.html}{Coq.Relations.Operators\\_Properties} \n allow the user\nto convert any one of the three closures into another one before applying the corresponding elimination tactic.\nThe same remark also holds for reflexive and transitive closures. 
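For reference, the three definitions differ only in how a new step is attached to an existing derivation; in a context where \\texttt{A : Type} and \\texttt{R : relation A}, their shapes are roughly the following:\n\n\\begin{Coqanswer}\nInductive clos_trans (x : A) : A -> Prop :=\n  | t_step : forall y, R x y -> clos_trans x y\n  | t_trans : forall y z,\n      clos_trans x y -> clos_trans y z -> clos_trans x z.\n\nInductive clos_trans_1n (x : A) : A -> Prop :=\n  | t1n_step : forall y, R x y -> clos_trans_1n x y\n  | t1n_trans : forall y z,\n      R x y -> clos_trans_1n y z -> clos_trans_1n x z.\n\nInductive clos_trans_n1 (x : A) : A -> Prop :=\n  | tn1_step : forall y, R x y -> clos_trans_n1 x y\n  | tn1_trans : forall y z,\n      R y z -> clos_trans_n1 x y -> clos_trans_n1 x z.\n\\end{Coqanswer}\n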
\n\\end{remark}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nDefine a restriction of \\coqsimple{round}, where Hercules always chops off\nthe leftmost among the lowest heads.\n\nProve that, if $h$ is not a simple head, then there exists a unique $h'$ such that \\texttt{h} is transformed into \\texttt{h'} in one round, according to this restriction.\n\n\n\\end{exercise}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}[Interactive battles]\n\\label{exo:interactive-battle}\nGiven a hydra \\texttt{h}, the specification of a hydra battle for \\texttt{h} is the type \n\\Verb@{h':Hydra | h -*-> h'}@. In order to avoid long sequences of \\texttt{split}, \\texttt{left}, and \n\\texttt{right}, design a set of dedicated tactics for the interactive building of a battle.\nYour tactics will have the following functionalities:\n\\begin{itemize}\n\\item Choose to stop a battle, or continue\n\\item Choose an expected number of replications\n\\item Navigate in a hydra, looking for a head to chop off.\n\\end{itemize}\n\nUse your tactics to simulate a small part of a hydra battle, for instance the rounds which lead from\n\\texttt{Hy} to \\texttt{Hy'''}  (Fig.~\\vref{fig:Hy5}).\n\n\\textbf{Hints:} \n\\begin{itemize}\n\n\\item Please keep in mind that the last configuration of your interactively built battle is known only at the end of the battle. Thus, you will have to create and solve subgoals with existential variables. For that purpose, the tactic \\texttt{eexists}, applied to the \ngoal \\Verb@{h':Hydra | h -*-> h'}@ generates the subgoal \\Verb|h -*-> ?h'|.\n\\item You may use Gérard Huet's \\emph{zipper} data structure~\\cite{zipper} for writing tactics associated with Hercules's interactive search for a head to chop off.\n\\end{itemize}\n\n\n\n\n\n\n\\end{exercise}\n\n\n\n\n\\subsection{Classes of battles}\n\\label{sect:battle-classes}\n\nIn some presentations of hydra battles, e.g.~\\cite{KP82, bauer2008}, the transformation associated with the $i$-th round may depend on $i$. For instance, in these articles, the replication factor at the $i$-th round is equal to $i$. In other examples, one can allow the hydra to apply any replication factor at any time. In order to be as general as possible, we define the type of predicates which relate the state of the hydra before and after the $i$-th round of a battle.\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html}{Hydra.Hydra\\_Definitions}}\n\\label{types:Battle}\n\\index{hydras}{Library Hydra!Type classes!Battle}\n\n\\input{movies/snippets/Hydra_Definitions/BattleDef}\n\nThe most general class of battles is \\texttt{free}, which allows the hydra to choose any replication factor at every step:\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#free}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/freeDef}\n\nWe chose to call \\emph{standard}\\footnote{This appellation is ours. If there is a better one, we will change it.} the kind of battles which appear most often in the literature and correspond to an arithmetic progression of the replication factor: $0,1,2,3,\\dots$\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#standard}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/standardDef}\n\n\n\n\\subsection{Big steps}\n\nLet $B$ be any instance of class \\texttt{Battle}. 
It is easy to define inductively the relation between the $i$-th and the $j$-th steps of a battle of type $B$. \n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#fight}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/battleRelDef}\n\nThe following property allows us to build battles by composition of smaller ones.\n\n%% TODO display subgoals when fixed\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Lemmas.html}{Hydra.Hydra\\_Lemmas}}\n\n\\noindent\n\\input{movies/snippets/Hydra_Lemmas/battleTrans}\n\n\n% \\begin{remark}\n%  The class \\texttt{free} is strongly related with the transitive closure  \\texttt{round\\_plus}, as expressed by the following lemmas.\n\n% \\vspace{4pt}\n% \\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Lemmas.html}{Hydra.Hydra\\_Lemmas}}\n\n%  \\begin{Coqsrc}\n%  Lemma battle_free_equiv1 : forall i j h h',  \n%              battle free i h j h' ->   h -+-> h'.\n \n%  Lemma battle_free_equiv2 : forall h h',\n%      h -+-> h' ->\n%     forall i, exists j,  battle free i h j h'.\n%  \\end{Coqsrc}\n\n% \\end{remark}\n\n\n\n\\section{A long battle}\n\\label{sect:big-battle}\n\n\nIn this section we consider a simple example of a battle, starting with a small hydra,\nshown in Figure~\\vref{fig:hinit}, with a simple strategy for both players:\n\n\\begin{itemize}\n\\item At each round, Hercules chops off the rightmost head of the hydra.\n\\item The battle is {standard}: at round number $i$, the expected replication is $i$.\n\\end{itemize}\n\n\n\n\\begin{figure}[h]\n  \\centering\n  \\begin{tikzpicture}[thick, scale=0.30]\n \\node (foot) at (6,0) {$\\bullet$};\n\\node (n1) at  (3,3) {$\\bullet$};\n\\node (h1) at  (1,6) {$\\Smiley[1][green]$};\n\\node (h2) at  (3,6) {$\\Smiley[1][green]$};\n\\node (h3) at  (6,6) {$\\Smiley[1][green]$};\n\\node (h5) at  (6,3) {$\\Smiley[1][green]$};\n\\node (h6) at  (9,3) {$\\Smiley[1][green]$};\n\\draw (foot) -- (n1);\n\\draw (n1) to   [bend left=20] (h1);\n\\draw (n1) to   (h2);\n\\draw (n1) to   [bend right=20] (h3);\n\\draw (foot) -- (h5);\n\\draw (foot) to  [bend right=20] (h6);\n\\end{tikzpicture}\n\n  \\caption{The hydra \\texttt{hinit}}\n  \\label{fig:hinit}\n\\end{figure}\n\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.BigBattle.html}{Hydra.BigBattle}}\n\n\\input{movies/snippets/BigBattle/hinitDef}\n\n\nThe lemma we would like to prove is ``The considered battle lasts exactly $N$ rounds'',\nwith $N$ being a natural number we have to guess.\n\nBut the battle is so long that no \\emph{test} can give us any estimation of its length, and we do need the expressive power of logic to compute this length. However, in order to guess this length, we made some experiments, computing with \\gallina{}, \\coq{}'s functional programming language.\nThus, we can consider this development as a collaboration between proof and computation.\nIn the following lines, we show how we experimentally found the value of the number $N$.\nThe complete proof is in the file \\url{../theories/html/hydras.Hydra.BigBattle.html}. \n\n\\subsection{The beginning of hostilities}\nDuring the first two rounds, our hydra loses its two rightmost heads.  
Figure~\\vref{fig:hinit-plus2} shows the state of the hydra just before the third round.\n\n\n\\begin{figure}[h]\n  \\centering\n  \\begin{tikzpicture}[thick, scale=0.30]\n \\node (foot) at (3,0) {$\\bullet$};\n\\node (n1) at  (3,3) {$\\bullet$};\n\\node (h1) at  (1,6) {$\\Smiley[1][green]$};\n\\node (h2) at  (3,6) {$\\Smiley[1][green]$};\n\\node (h3) at  (6,6) {$\\Smiley[1][green]$};\n\\draw (foot) -- (n1);\n\\draw (n1) to   [bend left=20] (h1);\n\\draw (n1) to   (h2);\n\\draw (n1) to   [bend right=20] (h3);\n\\end{tikzpicture}\n\n  \\caption{The hydra (\\texttt{hyd1 h3})}\n  \\label{fig:hinit-plus2}\n\\end{figure}\n\nThe following lemma is a formal description of these first rounds, in terms of the\n\\texttt{battle} predicate.\n\n\n\\input{movies/snippets/BigBattle/L02}\n\n\n\\subsection{Looking for regularities}\n\n\nA first study with pencil and paper suggested to us that, after three rounds, the hydra always looks as in Figure~\\vref{fig:hinit-plusn} (with a variable number of \nsubtrees of height 1 or 0).\nThus, we introduce a few handy abbreviations.\n\n\\input{movies/snippets/BigBattle/Notations}\n\n\nFor instance, the hydra shown in Fig.~\\ref{fig:hinit-plusn} is (\\texttt{hyd 3 4 2}). The hydra (\\texttt{hyd 0 0 0}) is the ``final'' hydra of any terminating battle, \\emph{i.e.,}\na tree with exactly one node and no edge.\n\n\\begin{figure}[h]\n  \\centering\n  \\begin{tikzpicture}[thick, scale=0.30]\n \\node (foot) at (15,0) {$\\bullet$};\n\\node (a) at  (3,4) {$\\bullet$};\n\\node (b) at  (6,4) {$\\bullet$};\n\\node (c) at  (9,4) {$\\bullet$};\n\\node (d) at  (13,4) {$\\bullet$};\n\\node (e) at  (16,4) {$\\bullet$};\n\\node (f) at  (19,4) {$\\bullet$};\n\\node (g) at  (22,4) {$\\bullet$};\n\\node (h) at  (25,4) {$\\Smiley[1][green]$};\n\\node (i) at  (28,4) {$\\Smiley[1][green]$};\n\\node (aa) at  (2.5,8) {$\\Smiley[1][green]$};\n\\node (ab) at  (3.5,8) {$\\Smiley[1][green]$};\n\\node (ba) at  (5.5,8) {$\\Smiley[1][green]$};\n\\node (bb) at  (6.5,8) {$\\Smiley[1][green]$};\n\\node (ca) at  (8.5,8) {$\\Smiley[1][green]$};\n\\node (cb) at  (9.5,8) {$\\Smiley[1][green]$};\n\\node (da) at  (13,8) {$\\Smiley[1][green]$};\n\\node (ea) at  (16,8) {$\\Smiley[1][green]$};\n\\node (fa) at  (19,8) {$\\Smiley[1][green]$};\n\\node (ga) at  (22,8) {$\\Smiley[1][green]$};\n\\draw (foot) -- (a);\n\\draw (foot) -- (b);\n\\draw (foot) -- (c);\n\\draw (foot) -- (d);\n\\draw (foot) -- (e);\n\\draw (foot) -- (f);\n\\draw (foot) -- (g);\n\\draw (foot) -- (h);\n\\draw (foot) -- (i);\n\\draw (a) -- (aa);\n\\draw (a) -- (ab);\n\\draw (b) -- (ba);\n\\draw (b) -- (bb);\n\\draw (c) -- (ca);\n\\draw (c) -- (cb);\n\\draw (d) -- (da);\n\\draw (e) -- (ea);\n\\draw (f) -- (fa);\n\\draw (g) -- (ga);\n\\end{tikzpicture}\n\n  \\caption{The hydra (\\texttt{hyd 3 4 2})}\n  \\label{fig:hinit-plusn}\n\\end{figure}\n\n\nWith these notations, we get a formal description of the first three rounds.\n\n\\input{movies/snippets/BigBattle/L23L03}\n\n\n\\subsection{Testing  \\dots}\nIn order to study \\emph{experimentally} the different configurations of the battle, we will use a simple data type for representing the states as tuples composed of\nthe round number and the respective numbers of daughters \\texttt{h2}, \\texttt{h1}, and heads\nof the current hydra.\n\n\n\\input{movies/snippets/BigBattle/stateDef}\n\n\n\nThe following function returns the next configuration of the game.\nNote that this function is defined only for making experiments and is not ``certified''.  
Formal proofs about our battle only start with the lemma\n\\texttt{step\\_battle}, page~\\pageref{lemma:step-battle}.\n\n\\input{movies/snippets/BigBattle/nextDef}\n\n\n\nWe can make bigger steps through iterations of \\texttt{next}.\nThe functional \\texttt{iterate}, similar to Standard Library's \\texttt{Nat.iter},\nis defined and studied in~\\href{../theories/html/hydras.Prelude.Iterates.html\\#iterate}{Prelude.Iterates}.\n\\index{hydras}{Library Prelude!iterate}\n\n\\label{Functions:iterate}\n\n\\input{movies/snippets/Iterates/iterateDef}\n\n\nThe following function computes the state of the battle at the $n$-th round.\n\n\\input{movies/snippets/BigBattle/testDefTests}\n\nThe battle we are studying seems to be awfully long. Let us concentrate our\ntests on some particular events: the states where $\\texttt{nh}=0$.\nFrom the value of \\texttt{test 5}, it is obvious that at the 10-th round, the counter \\texttt{nh} is equal to zero.\n\n\n\\input{movies/snippets/BigBattle/smartTest}\n\nThus, $(1 + 11)$ rounds later, the \\texttt{n1} field is equal to $2$, and \n\\texttt{nh} to $0$. \n\n\\input{movies/snippets/BigBattle/smartTestb}\n\n\n\nAt the next round, we decrement \\texttt{n2} and set \\texttt{n1} to $95$.\n\n\\input{movies/snippets/BigBattle/smartTestc}\n\n\n\nWe now have some intuition of the sequence.\nIt looks like the next ``\\texttt{nh}=0'' event will happen at the $192=2(95+1)$-th round, then at the $2(192+1)$-th round, etc.\n\n\\input{movies/snippets/BigBattle/doubleS}\n\n\n\\subsection{Proving \\dots}\nWe are now able to reason about the sequence of transitions defined by our hydra battle. Instead of using the data type \\texttt{state}, we study the relationship\nbetween different configurations of the battle.\n\nLet us define a binary relation associated with every round of the battle.\nIn the following definition, \\texttt{i} is associated with the round number (or date, if we consider a discrete time), and \\texttt{a}, \\texttt{b}, \\texttt{c} are respectively associated with the numbers of \\texttt{h2}s, \\texttt{h1}s, and heads connected to the hydra's foot.\n\n\\input{movies/snippets/BigBattle/oneStep}\n\nThe relation between \\texttt{one\\_step} and the rules of hydra battles is asserted by the following lemma. 
\n\n\\label{lemma:step-battle}\n\n\\input{movies/snippets/BigBattle/stepBattle}\n\n\\vspace{4pt}\n\nNext, we define ``big steps'' as the transitive closure of \\texttt{one\\_step},\nand reachability (from the initial configuration of Figure~\\ref{fig:hinit} at time $0$).\n\n\n\\input{movies/snippets/BigBattle/steps}\n\n\n\nThe following lemma establishes a relation between \\texttt{steps} and the predicate \\texttt{battle}.\n\n\\input{movies/snippets/BigBattle/stepsBattle}\n\n\\vspace{4pt}\n\nThus, any result about \\texttt{steps} will be applicable to standard battles.\nUsing the predicate \\texttt{steps}, our study of the length of the considered battle\ncan be decomposed into three parts:\n\n\\begin{enumerate}\n\\item Characterizing regularities of some events,\n\\item Studying the beginning of the battle,\n\\item Computing the exact length of the battle.\n\\end{enumerate}\n\nFirst, we prove that, if at round $i$ the hydra is equal to\n(\\texttt{hyd a (S b) 0}), then it will be equal to (\\texttt{hyd a b 0}) at the $2(i+1)$-th round.\n\n\\vspace{4pt}\n\n\n\\input{movies/snippets/BigBattle/LS}\n\nFrom now on, the lemma \\texttt{reachable\\_S} allows us to watch larger and larger steps of \nthe battle.\n\n\n\n\\input{movies/snippets/BigBattle/L4}\n\n\\input{movies/snippets/BigBattle/L10To95}\n\n\\subsection{Giant steps}\n\nWe are now able to make bigger steps in the simulation of the battle.\nFirst, we iterate the lemma \\texttt{reachable\\_S}.\n\n\n\\vspace{4pt}\n\n\\input{movies/snippets/BigBattle/Bigstep}\n\n\\vspace{4pt}\n\nApplying the lemmas \\texttt{BigStep} and \\texttt{L95}, we make a first jump.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/BigBattle/MDef}\n\n\n\nFigure~\\ref{fig:HM} represents the hydra at the $M$-th round.\nAt the $(M+1)$-th round, it will look as in Fig.~\\ref{fig:HM-plus1}.\n\n\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (3,4) {$\\Smiley[2][green]$};\n\\node (N3) at (1,4) {$\\Smiley[2][green]$};\n\\draw (foot) -- (N1);\n\\draw (N1) to [bend right =15] (N2);\n\\draw (N1) to  [bend left=15](N3);\n\\end{tikzpicture}\n\\caption{\\label{fig:HM}The state of the hydra after $M$ rounds.}\n% The hydra \\texttt{h} of the proof that \\(\\omega^2\\) is too small for proving Hercules' victory\n\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\\node (foot) at (10,0) {$\\bullet$};\n\\node (N1) at (0,5) {$\\bullet$};\n\\node (N12) at (0,8) {$\\Smiley[2][green]$};\n\\node (N2) at (2,5) {$\\bullet$};\n\\node (N22) at (2,8) {$\\Smiley[2][green]$};\n\\node (N3) at (4,5) {$\\bullet$};\n\\node (N32) at (4,8) {$\\Smiley[2][green]$};\n\\node (N4) at (6,5) {$\\bullet$};\n\\node (N42) at (6,8) {$\\Smiley[2][green]$};\n\n\\node (Ndots) at (12,8) {\\Huge $\\dots$};\n\\node (Ndots2) at (12,5) {\\Huge $\\dots$};\n\n\\node (N8) at (18,5) {$\\bullet$};\n\\node (N82) at (18,8) {$\\Smiley[2][green]$};\n\\node (N9) at (20,5) {$\\bullet$};\n\\node (N92) at (20,8) {$\\Smiley[2][green]$};\n\n\n\\draw (foot) -- (N1);\n\\draw (foot) -- (N2);\n\\draw (foot) -- (N3);\n\\draw (foot) -- (N4);\n\\draw (foot) -- (N8);\n\\draw (foot) -- (N9);\n\\draw (N1) to  (N12);\n\\draw (N2) to  (N22);\n\\draw (N3) to  (N32);\n\\draw (N4) to  (N42);\n\\draw (N8) to  (N82);\n\\draw (N9) to  (N92);\n\\end{tikzpicture}\n\\caption{\\label{fig:HM-plus1}The state of the hydra after $M+1$ rounds (with $M+1$ heads).}
\n\n\\end{figure}\n\n\n\\input{movies/snippets/BigBattle/L295S}\n\n\\vspace{4pt}\n\nThen, applying once more the lemma \\texttt{BigStep}, we get the exact time when\nHercules wins!\n\n\\vspace{4pt}\n\n\\input{movies/snippets/BigBattle/NDef}\n\n\\vspace{4pt}\n\nWe are now able to prove formally that the considered battle is \ncomposed of $N$ steps.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/BigBattle/Done}\n\n\n\\subsection{A lower-bound lemma}\n\nNow, we would like to get an intuition of how big the number $N$ is.\nFor that purpose, we bound the function \\texttt{doubleS} from below by the\nfunction (\\texttt{fun n => 2 * n}).\n\n\\vspace{4pt}\n\n\\input{movies/snippets/Exp2/exp2Def}\n\n\\vspace{4pt}\n\nUsing a few facts (proven in \n\\href{../theories/html/hydras.Hydra.BigBattle.html}{hydras.Hydra.BigBattle}), we get several lower bounds.\n\n\\input{movies/snippets/BigBattle/minorationLemmas}\n\n\\vspace{4pt}\n\n\nThe number $N$ is greater than or equal to $2^{2^{95}\\times 95}$. If we wrote $N$ in base $10$, $N$ would require at least $10^{30}$ digits!\n\n\n\\section{Generic properties}\n\n\nThe example we just studied shows that the termination of a battle may take a very long time. If we want to study hydra battles in general, we have to consider \nany hydra and any strategy, both for Hercules and the hydra itself. So, we first give some definitions, generally borrowed from the vocabulary of transition systems (see~\\cite{tel_2000} for instance).\n\n\n\\subsection{Liveness}\n\n\nLet $B$ be an instance of \\texttt{Battle}. We say that $B$ is \\emph{alive} if\nfor any configuration $(i,h)$, where $h$ is not a head, there exists a further step in class $B$.\n\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#Alive}{Hydra.Hydra\\_Definitions}}\n\n\\input{movies/snippets/Hydra_Definitions/AliveDef}\n\n\nThe theorems \\texttt{Alive\\_free} and \\texttt{Alive\\_standard} of the module \n\\href{../theories/html/hydras.Hydra.Hydra_Theorems.html}{Hydra.Hydra\\_Theorems} show that the classes \\texttt{free} and \\texttt{standard} satisfy this property.\n\n\n\\input{movies/snippets/Hydra_Theorems/AliveThms}\n\n\nBoth theorems are proved with the help of the following strongly specified function:\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Lemmas.html\\#next_round_dec}{Hydra.Hydra\\_Lemmas}}\n\n\\input{movies/snippets/Hydra_Lemmas/nextRoundDec}\n\n\\subsection{Termination}\n\nThe termination of \\emph{any} battle is naturally expressed by the predicate \\texttt{well\\_founded} defined in the module \\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Init.Wf.html}{Coq.Init.Wf} \n of the Standard Library.\n\n\\index{hydras}{Library Hydra!Predicates!Termination}\n\n\\input{movies/snippets/Hydra_Definitions/TerminationDef}\n\n\n\nLet $B$ be an instance of class \\texttt{Battle}. 
A \\emph{variant} for $B$ consists\nof a well-founded relation $<$ on some type \\texttt{A}, and a function\n(also called a \\emph{measure}) \\texttt{m:Hydra->A} such that for any successive steps $(i,h)$ and $(1+i,h')$ of a battle in $B$, the inequality $m(h')<m(h)$ holds.\n\n\n\\vspace{4pt}\n\\noindent\n\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Hydra_Definitions.html\\#Hvariant}{Hydra.Hydra\\_Definitions}}\n\n\n\\label{sect:hvariant-def}\n\n\\index{hydras}{Library Hydra!Type classes!Hvariant}\n\n\\input{movies/snippets/Hydra_Definitions/HvariantDef}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\n Prove that, if there exists some instance of (\\texttt{Hvariant Lt wf\\_Lt $B$ $m$}), then there exists no infinite battle in $B$.\n\\end{exercise}\n\n\n\n\n\\subsection{A small proof of impossibility}\n\\index{coq}{Proofs of impossibility}\n\n\\label{omega-case}\n\nWhen one wants to prove a termination theorem with the help of a variant, \none has to consider first a well-founded set $(A,<)$, then a strictly decreasing measure on this set. The following two lemmas show that if the order structure $(A,<)$ is too simple, it is useless to look for a convenient measure, which simply does not exist. Such a result is useful, because it saves time and effort.\n\n\nThe best known well-founded order is the natural order on the set $\\mathbb{N}$ of natural numbers (the type \\texttt{nat} of the Standard Library). It would be interesting to look for some measure $m:\\texttt{Hydra}\\arrow\\texttt{nat}$ and prove it is a variant.\n\nUnfortunately, we can prove that \n\\emph{no} instance of the class \\texttt{Hvariant} can be built over the well-founded order (\\texttt{nat}, \\texttt{Peano.lt}), where\n$m$ is \\emph{any} function of type \\texttt{Hydra $\\arrow$ nat}.\n\n\nLet us present the main steps of that proof, the script of which is in the module~\\href{../theories/html/hydras.Hydra.Omega_Small.html}{Hydra/Omega\\_Small.v}\\footnote{The name of this file means ``the ordinal $\\omega$ is too small for proving the termination of [free] hydra battles''. Indeed, the elements of $\\omega$, considered as a set, are just the natural numbers (see the next chapter for more details).}.\n\n%\\subsubsection{Preliminaries}\n\n\nLet us assume there exists some variant $m$ from \\texttt{Hydra} into \\texttt{nat} for proving\n    the termination of all hydra battles.\n\n\\input{movies/snippets/Omega_Small/omegaSmalla}\n    \nWe define an injection $\\iota$ from the type \\texttt{nat} into \\texttt{Hydra}.\nFor any natural number $i$, $\\iota(i)$ is the hydra composed of a foot and\n$i+1$ heads at height $1$. 
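Using the abbreviation \\texttt{head} and the replication helper \\texttt{hcons\\_mult} introduced earlier in this chapter, this injection may be sketched as follows (a sketch; the actual definition is quoted below):\n\n\\begin{Coqsrc}\n(* Sketch: iota i is a foot carrying i+1 heads. *)\nDefinition iota (i : nat) : Hydra :=\n  node (hcons_mult head (S i) hnil).\n\\end{Coqsrc}\n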
For instance, Fig.~\\ref{fig:flower} represents the hydra $\\iota(3)$.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\\node (foot) at (4,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\Smiley[2][green]$};\n\\node (N2) at (4,2) {$\\Smiley[2][green]$};\n\\node (N3) at (6,2) {$\\Smiley[2][green]$};\n\\node (N4) at (8,2) {$\\Smiley[2][green]$};\n\\draw (foot) to [bend left =25] (N1);\n\\draw (foot) to [bend left =15] (N2);\n\\draw (foot) to [bend right =15] (N3);\n\\draw (foot) to [bend right =25] (N4);\n\\end{tikzpicture}\n\\caption{\\label{fig:flower}\nThe hydra $\\iota(3)$}\n\\end{figure}\n\n\\input{movies/snippets/Omega_Small/iotaDef}\n\nLet us now consider some hydra \\texttt{big\\_h} outside the range of the injection $\\iota$ (see Fig.~\\vref{fig:h-omega-omega}).\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (2,4) {$\\Smiley[2][green]$};\n\\draw (foot) -- (N1);\n\\draw (N1) to  (N2);\n\\end{tikzpicture}\n\\caption{\\label{fig:h-omega-omega}The hydra \\texttt{big\\_h}.}\n\\end{figure}\n\n\\input{movies/snippets/Omega_Small/bigHDef}\n\n\n Using the functions $m$ and $\\iota$, we define a second hydra \\texttt{small\\_h}, and show\n there is a one-round battle that transforms \\texttt{big\\_h} into \\texttt{small\\_h}. Please note that,\ndue to the hypothesis \\texttt{Hvar}, we are interested in the termination of \\emph{free} battles. \nThere is thus no problem in considering a round with (\\texttt{m big\\_h}) as the replication factor.\n\n\n\\input{movies/snippets/Omega_Small/smallHDef}\n\n\n \nBut, by hypothesis, $m$ is a variant. Hence, we infer the following inequality.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/Omega_Small/mLt}\n\n\nIn order to get a contradiction, it suffices to prove the inequality\n$m(\\texttt{big\\_h}) \\leq m(\\texttt{small\\_h})$, i.e., $m(\\texttt{big\\_h})\\leq m(\\iota(m(\\texttt{big\\_h})))$.\n\n\n\\input{movies/snippets/Omega_Small/mGea}\n\n\nIntuitively, it means that, from any hydra of the form (\\texttt{iota $i$}), the battle will \ntake (at least) $i$ rounds. Thus the associated measure cannot be less than $i$.\nTechnically, we prove this lemma by Peano induction on $i$.\n\n\\begin{itemize}\n\\item The base case $i=0$ is trivial.\n\\item Otherwise, let $i$ be any natural number and assume the inequality\n  $i \\leq m(\\iota(i))$.\n  \\begin{enumerate}\n  \\item The hydra $\\iota(S(i))$ can be transformed in one round into\n    $\\iota(i)$ (by losing its rightmost head, for instance).\n  \\item Since $m$ is a variant, we have $m(\\iota(i)) < m(\\iota(S(i)))$,\n    hence $i< m(\\iota(S(i)))$, which implies $S(i)\\leq m(\\iota(S(i)))$.\n  \\end{enumerate}\n\\end{itemize}\n\nWe are now ready to complete our impossibility proof.\n\n\\vspace{4pt}\n\n\\inputsnippets{Omega_Small/mGeb, Omega_Small/mGez,\n  Omega_Small/omegaSmallz}\n \n\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nProve that there exists no variant $m$ from \\texttt{Hydra} into \\texttt{nat} for proving\n    the termination of all \\emph{standard} battles.\n\\end{exercise}\n\n\n\n\n\n\n\\subsubsection{Conclusion}\n\nIn order to build a variant for proving the termination of all hydra battles, we need to consider order structures more complex than the usual order on type \\texttt{nat}. 
\nThe notion of \\emph{ordinal number} provides a catalogue of well-founded order types.\nFor a reasonably large collection of ordinal numbers, \\emph{ordinal notations} are data types which allow the \\coq{} user to define functions, to compute and prove some properties, for instance by reflection.\n\nThe next chapter is dedicated to a generic formalization of ordinal notations, and Chapter~\\ref{chap:T1} to a proof of termination of all hydra battles with the help of an ordinal notation for the interval $[0,\\epsilon_0)$\\,\\footnote{We use the mathematical notation $[a,b)$ for the interval $\\{x|a\\leq x < b\\}$.}.\n\\index{maths}{Notations!Interval}\n \n%--------------------------------------------------------------\n\n\\chapter{Introduction to ordinal numbers and ordinal notations}\n\n\nThe proof of termination of all hydra battles presented in~\\cite{KP82} is based\non \\emph{ordinal numbers}.\nFrom a mathematical point of view, an ordinal is a representative of an equivalence class for isomorphisms of totally ordered well-founded sets.\n\nFor the computer scientist, ordinals are tools for proving the totality of a given recursive function, or the termination of a transition system. \\emph{Ordinal arithmetic} \nprovides a set of functions whose properties, like \\emph{monotonicity}, make it possible to define \\emph{variants}, \\emph{i.e.,} strictly decreasing measures used in proofs of termination.\n\n\\vspace{4pt}\n\nLet us have a look at Figure~\\ref{fig:ordinal-sequence}. It presents a few items of a sequence of ordinal numbers, which extends the sequence of natural numbers. \n\n\n\n\n\\begin{figure}[h]\n  \\centering\n\\fbox{\\Large\n  \\begin{minipage}{1.0\\linewidth}\n  \\begin{align*}\n     &\\textcolor{blue}{0},\\,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,\\ldots\\\\\n&\\textcolor{red}{\\omega},\\,\\omega+1,\\omega+2,\\omega+3,\\ldots\\\\\n&\\textcolor{red}{\\omega\\times 2},\\,\\omega\\times 2+1,\\ldots, \\textcolor{red}{\\omega\\times 3},\\,\\omega\\times 3+1,\\ldots, \\textcolor{red}{\\omega\\times 4},\\ldots\\\\\n&\\textcolor{red}{\\omega^2},\\ldots, \\textcolor{red}{\\omega^2\\times 42},\\ldots,\\textcolor{red}{\\omega^3},\\ldots, \\textcolor{red}{\\omega^4},\\omega^4+1,\\ldots\\\\\n&\\textcolor{red}{\\omega^\\omega},\\ldots, \\textcolor{red}{\\omega^\\omega+\\omega^7\\times 8},\\ldots,\\textcolor{red}{\\omega^\\omega\\times 2},\\omega^\\omega\\times 2+1, \\ldots\\\\\n&\\textcolor{red}{\\omega^{\\omega^\\omega}},\\ldots, \\textcolor{red}{\\omega^{\\omega^\\omega}+\\omega^\\omega\\times 42+ \\omega^{55}+\\omega}, \\ldots, \\textcolor{red}{\\omega^{\\omega^{\\omega+1}}}, \\omega^{\\omega^{\\omega+1}}+1,\\dots\\\\\n& \\textcolor{red}{\\epsilon_0\\,(=\\omega^{\\epsilon_0})}, \\epsilon_0+1, \\epsilon_0+2, \\epsilon_0+3, \\ldots\\\\\n& \\textcolor{red}{\\epsilon_1}, \\ldots, \\textcolor{red}{\\epsilon_2}, \\ldots, \\textcolor{red}{\\epsilon_\\omega},\\ldots \\\\\n& \\textcolor{red}{\\Gamma_0}, \\Gamma_0+1, \\Gamma_0+2, \\Gamma_0+3,\\ldots, \\textcolor{red}{\\Gamma_0+\\omega}, \\ldots\\\\\n&\\ldots\n  \\end{align*}   \n  \\end{minipage}}\n \n \n  \\caption{A short overview of the sequence of ordinal numbers}\n  \\label{fig:ordinal-sequence}\n\\end{figure}\n\n\nLet us comment on some features of this figure:\n\n\\begin{itemize}\n\\item The ordinals are listed in a strictly increasing order. \n\\item Dots ``$\\ldots$'' stand for infinite sequences of ordinals, not shown for lack of space. 
For instance, the ordinal $42$ is not shown in the first line, but it exists, somewhere between $17$ and $\\omega$.\n\\item Each ordinal printed in black is the immediate successor of another ordinal. We call it a \\emph{successor} ordinal. For instance, $12$ is the successor of $11$, and $\\omega^4+1$ the successor of $\\omega^4$.\n\\item Ordinals (displayed in red) that immediately follow dots are called \\emph{limit ordinals}. With respect to the order induced by this sequence, any limit ordinal $\\alpha$ is the least upper bound of the set $\\mathbb{O}_\\alpha$ of all ordinals strictly less than $\\alpha$.\n\\item\nFor instance $\\omega$ is the least upper bound of the set of all finite ordinals (in the first line). It is also the first limit ordinal, and the first infinite ordinal, in the sense that \nthe set $\\mathbb{O}_\\omega$ is infinite.\n\\item The ordinal $\\epsilon_0$ is the first number which is equal to its own exponential of base $\\omega$. It plays an important role in proof theory, and is particularly studied in Chapters~\\ref{chap:T1} to \\ref{chap:alpha-large}.\n\\item Any ordinal is either the ordinal \\textcolor{blue}{$0$},\na successor ordinal, or a \\textcolor{red}{limit ordinal}.\n\\end{itemize}\n\n\n\n\n\\section{The mathematical point of view}\n\n\\subsection{Well-ordered sets}\nLet us start with some definitions.\nA \\emph{well-ordered set} is a set provided with a binary relation $<$ which has the following properties.\n\\begin{description}\n\\item[irreflexivity]: $\\forall x\\in A, x\\not< x$\n\\item[transitivity]: $\\forall x\\,y\\,z\\in A, x<y \\Rightarrow y<z \\Rightarrow x<z$\n\\item[trichotomy]: $\\forall x\\,y\\in A, x<y \\vee x = y \\vee y < x$\n\\item[well-foundedness]: $<$ is well-founded (every element of $A$ is accessible)\\footnote{In classical mathematics, we would say that there is no infinite sequence $a_1>a_2> \\dots a_n> a_{n+1}\\dots$ in $A$. In contrast, \\coq's standard library contains\nan inductive definition of a predicate \\texttt{Acc} which allows us to write \nconstructive proofs of accessibility (See \\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Init.Wf.html}{Coq.Init.Wf}).}.\n\\end{description}\n\nThe best known examples of well-ordered sets are the set $\\mathbb{N}$ of natural numbers (with the usual order $<$), as well as any finite segment $[0,i)=\\{j\\in\\mathbb{N}\\,|\\,j<i\\}$.\nThe disjoint union of two copies of $\\mathbb{N}$, \\emph{i.e.,} the set $\\{0,1\\}\\times\\mathbb{N}$, is also well-ordered,\nwith respect to the order below:\n\n\\begin{align*}\n(i,j) < (i,k) & \\;\\textit{\\textbf{if} }\\; j < k\\\\\n(0,k) < (1,l) & \\;\\textit{\\textbf{for\\,any}}\\;k \\;\\textit{\\textbf{and}} \\; l\n\\end{align*}\n\n\\subsection{Ordinal numbers}\n\n\\index{maths}{Ordinal numbers}\n\nLet $(A,<_A)$ and $(B,<_B)$ be two well-ordered sets. $A$ and $B$ are said to have \\emph{the same order type} if \nthere exists a strictly monotone bijection $b$ from $A$ to $B$, \\emph{i.e.,} one which satisfies the proposition\n$\\forall x\\,y\\in A,\\, x <_A y \\Rightarrow b(x) <_B  b(y)$.\n\nHaving the same order type is an equivalence relation between well-ordered sets. 
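For instance, $\\mathbb{N}$ and the set of even natural numbers (both with the usual order) have the same order type, as witnessed by the strictly monotone bijection\n\\[ b : \\mathbb{N} \\arrow 2\\,\\mathbb{N}, \\qquad b(n)=2n. \\]\nIn contrast, $\\mathbb{N}$ and $\\{0,1\\}\\times\\mathbb{N}$ do \\emph{not} have the same order type: the element $(1,0)$ of the latter has infinitely many predecessors, whereas every natural number has finitely many.\n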
Ordinal numbers (in short: \\emph{ordinals}) are descriptions (\\emph{names}) of the equivalence classes.\nFor instance, the order type of $(\\mathbb{N},<)$ is associated with the ordinal called $\\omega$, and the order we considered on \nthe disjoint union of $\\mathbb{N}$ and itself is named $\\omega+\\omega$.\n\nIn a set-theoretic framework, one can consider any ordinal $\\alpha$ as a well-ordered set, whose elements are just the ordinals strictly less than $\\alpha$, \\emph{i.e.,} the \\emph{segment} $\\mathbb{O}_\\alpha=[0, \\alpha)$. So, one can speak about \\emph{finite}, \\emph{infinite}, \\emph{countable}, etc., ordinals. Nevertheless, since we work within type theory, \nwe do not identify ordinals with sets of ordinals, but consider the correspondence between ordinals and sets of ordinals as the function that maps $\\alpha$ to $\\mathbb{O}_\\alpha$.\nFor instance $\\mathbb{O}_\\omega=\\mathbb{N}$, and $\\mathbb{O}_7=\\{0,1,2,3,4,5,6\\}$.\n\n\nWe cannot cite all the literature published on ordinals since Cantor's book\n\\cite{cantorbook}, so we \nleave it to the reader to explore the bibliography. \n\n\n\\section{Ordinal numbers in Coq}\n\nTwo kinds of representation of ordinals are defined in our development.\n\n\\begin{itemize}\n\\item A ``mathematical'' representation of the set of countable ordinal numbers, after Kurt Schütte~\\cite{schutte}. This representation uses several (hopefully harmless) axioms. We use it as a reference for proving the correctness of ordinal notations.\n\\item A family of \\emph{ordinal notations} (also called \\emph{[ordinal] notation systems}), \\emph{i.e.,} data types used to represent segments $[0,\\mu)$, where $\\mu$ is some countable ordinal. Each ordinal notation is defined inside the Calculus of Inductive Constructions (without axioms). Many functions are defined, allowing proofs by computation. Note that proofs of \ncorrectness of a given ordinal notation with respect to Schütte's model obviously use axioms.\nPlease execute the \\texttt{Print Assumptions} command in case of doubt.\n\\end{itemize}\n\n% \\section{Countable ordinals}\n\n% Chapter~\\ref{chap:schutte} of this document presents an adaptation to \\coq{} of an axiomatization in classical logic of the set of countable ordinals by K. Schütte~\\cite{schutte}. \n% That formalization is quite complex, technical and unashamedly non-constructive,  so we put its description  in the last chapter of this document. \n\n% Please note that Schütte considers the (uncountable) set $\\mathbb{O}$ of all countable ordinals. This set is well ordered (which is one of Schütte's axioms), and associates to any ordinal $\\alpha$ the segment $\\mathbb{O}_\\alpha$ of all ordinals strictly less than $\\alpha$.\n\n% In our adaptation to \\coq{}, we declare a type \\texttt{Ord}, a binary relation \\texttt{lt} (with infix notation \\texttt{\"\\_<\\_\"}), and assume Schütte's axiom. In Chapter~\\ref{chap:schutte},\n% we derive some interesting properties of countable ordinals from these axioms.\n\nIt is interesting to compare proofs of a given property (for instance the associativity of addition) both in the computational framework of some ordinal notation, and in the axiomatic model of Schütte.\n\n\\section{Ordinal Notations}\n\n\nFortunately, the ordinals we need for studying hydra battles are much simpler than Schütte's, and can be represented as quite simple data types in \\gallina. 
\n\nLet $\\alpha$ be some (countable) ordinal; \nin \\coq{} terms, we call \\emph{ordinal notation for $\\alpha$} a structure composed \nof:\n\\begin{itemize}\n\\item A data type $A$ for representing all ordinals strictly below $\\alpha$,\n\\item A well-founded order $<$ on $A$, \n\\item A correct function for comparing two ordinals. Note that the reflexive closure of $<$ is thus a \\emph{total order}.\n\\end{itemize}\n\n\nSuch a structure can be proved correct relative to a larger ordinal notation or\nto Schütte's model.\n\n\n\n\n\n\\subsection{A class for ordinal notations}\n\nFrom the \\coq{} user's point of view, an ordinal notation is\na structure that allows the user to compare two ordinals by computation, and to prove statements by well-founded induction.\n\n\\subsubsection{The \\texttt{Comparable} class}\n\nThe following class, contributed by Jérémy Damour and Théo Zimmermann, allows us to apply generic lemmas and tactics\nabout decidable strict orders.\nThe correctness of the comparison function is expressed through Stdlib's type \n\\texttt{Datatypes.CompareSpec}.\n\n\n\\begin{Coqsrc}\nInductive CompareSpec (Peq Plt Pgt : Prop) : comparison -> Prop :=\n    CompEq : Peq -> CompareSpec Peq Plt Pgt Eq\n  | CompLt : Plt -> CompareSpec Peq Plt Pgt Lt\n  | CompGt : Pgt -> CompareSpec Peq Plt Pgt Gt.\n\\end{Coqsrc}\n\n\\emph{From Module~\\href{../theories/html/hydras.Prelude.Comparable.html}{Prelude.Comparable}}\n\n\\label{sect:comparable-def}\n\n\\input{movies/snippets/Comparable/ComparableDef}\n\n\n\n\n\n\\subsubsection{The \\texttt{ON} class}\n\nThe following class definition, parameterized by a type $A$ and a binary relation \\texttt{lt} on $A$, specifies that \\texttt{lt} is a well-founded strict order, provided with a correct comparison function.\n\n\n\\vspace{4pt}\n\\noindent\\emph{From\nLibrary~\\href{../theories/html/hydras.OrdinalNotations.ON_Generic.html}{OrdinalNotations.ON\\_Generic}}\n\n\\label{types:ON}\n\\index{hydras}{Library OrdinalNotations!Type classes!ON}\n\n\\input{movies/snippets/ON_Generic/ONDef}\n\n\nWe also give a few handy definitions and lemmas for any ordinal notation.\n\n\\label{sect:on-lt-notation}\n\\label{sect:on-le-notation}\n\\label{sect:measure-ON}\n\\label{sect:bigO-ON}\n\n\n\\inputsnippets{ON_Generic/ONDefsa, ON_Generic/ONDefsb}\n\n\\begin{remark}\nThe infix notations \\texttt{o<} and \\texttt{o<=} are defined in order to make apparent the distinction between the various notation scopes that may coexist in the same statement. So the infix \\texttt{<} and \\texttt{<=} are reserved for the natural numbers. In the mathematical formulas, however, we still use $<$ and $\\leq$ for comparing ordinals.\n\\end{remark}\n\n\n% \\subsection{Ordinal notations and  termination measures}\n% \\label{sect:measure-ON}\n\n% The following lemma (together with the type class mechanism) allows us to define termination measures over any ordinal notation. It is just an application of  the libraries \\texttt{Coq.Wellfounded.Inverse\\_Image}\n% and  \\texttt{Coq.Wellfounded.Inclusion}. \n\n% \\begin{Coqsrc}\n% Definition measure_lt {A:Type}{lt: relation A}\n%             {compare : A -> A -> comparison}\n%             {on : ON lt compare}\n%             {B : Type} (m : B -> A) : relation B :=\n%              fun x y => on_lt (m x) (m y).\n            \n% Lemma wf_measure  {A:Type}(lt: relation A)\n%             {compare : A -> A -> comparison}\n%             {on : ON lt compare}\n%             {B : Type}\n%             (m : B -> A):  well_founded (measure_lt m). 
\n% \\end{Coqsrc}\n\n% A simple example of application is given in Sect.~\\vref{sect:merge-example}.\n\n\n\\section{Example: the ordinal \\texorpdfstring{$\\omega$}{omega}}\n\n\n\n\nThe simplest example of ordinal notation is built over the type \\texttt{nat} of \\coq's standard library. We just have to apply already proven lemmas about Peano numbers.\n\n\\vspace{4pt}\n\\noindent\\emph{From Library~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega.html}{OrdinalNotations.ON\\_Omega}}\n\n\\inputsnippets{ON_Omega/OmegaDefa, ON_Omega/OmegaDefb}\n\n\n\\section{Sum of two ordinal notations}\n\nLet \\texttt{NA} and \\texttt{NB} be two ordinal notations, on the respective types \\texttt{A} and \\texttt{B}.\n\n We consider a new strict order\non the disjoint sum of the associated types, by putting all elements of \\texttt{A} before the elements of \\texttt{B} (thanks to Standard Library's relation operator \\texttt{le\\_AsB}).\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Library~\\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Relations.Relation_Operators.html}{Relations.Relation\\_Operators}}.\n\n\\begin{Coqanswer}\nInductive\nle_AsB (A B : Type) (leA : A -> A -> Prop) (leB : B -> B -> Prop)\n  : A + B -> A + B -> Prop :=\n| le_aa : forall x y : A, leA x y -> le_AsB A B leA leB (inl x) (inl y)\n| le_ab : forall (x : A) (y : B), le_AsB A B leA leB (inl x) (inr y)\n| le_bb : forall x y : B, leB x y -> le_AsB A B leA leB (inr x) (inr y)\n\\end{Coqanswer}\n\n\\pagebreak\n\n\\vspace{4pt}\n\\noindent\\emph{From Library~\\href{../theories/html/hydras.OrdinalNotations.ON_plus.html}{OrdinalNotations.ON\\_plus}}\n\n\n\\input{movies/snippets/ON_plus/Defs}\n\nIn order to build an instance of \\texttt{Comparable}, we have to define a correct comparison function.\n\n\\inputsnippets{ON_plus/compareDef,\n  ON_plus/compareCorrect, ON_plus/plusComp}\n\n\nThe lemma \\texttt{Wellfounded.Disjoint\\_Union.wf\\_disjoint\\_sum} of the Standard Library\nhelps us to prove that our order \\texttt{lt} is well-founded; then we can build an instance of \\texttt{ON}:\n\n\\inputsnippets{ON_plus/ltWf, ON_plus/OnPlus}\n\n\\subsection{The ordinal \\texorpdfstring{$\\omega+\\omega$}{omega + omega}}\n\nThe ordinal $\\omega+\\omega$ (also known as $\\omega\\times 2$) may be represented as the concatenation \nof two copies of $\\omega$ (Figure~\\ref{fig:omega-plus-omega}).\nIt is also represented by the first two lines of Figure~\\ref{fig:ordinal-sequence}.\n\n\\begin{figure}[h]\n   \\centering\n   \\begin{tikzpicture}[very thick, scale=0.5]\n\\begin{scope}[color=blue]\n\\node(A0) at (2,0)[label=below:$0$]{$\\bullet$};\n\\node(A1) at (3,0)[label=below:$1$]{$\\bullet$};\n\\node(A2) at (4,0)[label=below:$2$]{$\\bullet$};\n\\node (Adots) at (6,0) {$\\ldots$};\n\\node(An) at (8,0)[label=below:$n$]{$\\bullet$};\n\\node(An1) at (10,0)[label=below:$n+1$]{$\\bullet$};\n\\node (Adots1) at (12,0) {$\\ldots$};\n\\end{scope}\n\\begin{scope}[color=red]\n\\node(B0) at (14,0)[label=below:$0$,label=above:\\textcolor{red}{$\\omega$}]{$\\bullet$};\n\\node(B1) at (16,0)[label=below:$1$, label=above:$\\omega+1$]{$\\bullet$};\n\\node(B2) at (18,0)[label=below:$2$,label=above:$\\omega+2$]{$\\bullet$};\n\\node (Bdots) at (20,0) {$\\ldots$};\n\\node (Bn) at (22,0) [label=below:$p$, label=above:$\\omega+p$]{$\\bullet$};\n\\node (Bdots2) at (24,0) {$\\ldots$};\n\\end{scope}\n\\end{tikzpicture}\n   \\caption{\\textcolor{blue}{$\\omega+{\\color{red}\\omega}$}}\n   \\label{fig:omega-plus-omega}\n \\end{figure}\n\nWe can define this notation in \\coq{} as an 
instance of \\texttt{ON\\_plus}.\n\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega_plus_omega.html}{OrdinalNotations.ON\\_Omega\\_plus\\_omega}}\n\n\\input{movies/snippets/ON_Omega_plus_omega/OmegaPlusOmegaDef}\n\n\\vspace{4pt}\n\nWe can now define abbreviations. For instance, the finite ordinals are represented by terms built with the constructor \\texttt{inl}, and the first infinite ordinal $\\omega$ by the term \\texttt{(inr 0)}.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/ON_Omega_plus_omega/finiteOmega}\n\n\\vspace{4pt}\n\n\\input{movies/snippets/ON_Omega_plus_omega/ltOmega}\n\n\n\n% \\label{warning:coercions}\n% \\index{Coq!Coercions} \n% \\begin{remark}\n% Beware of coercions and notation scopes!\n% Let us consider the following goal:\n\n% \\begin{Coqsrc}\n%  Goal (6 o< 8).\n%  auto with arith.\n% \\end{Coqsrc}\n\n\n% \\begin{Coqanswer}\n% 1 subgoal (ID 9)\n  \n%   ============================\n%   6 o< 8\n% \\end{Coqanswer}\n\n% Please keep in mind that the current notation scope interprets the infix \\texttt{``<''} as the predicate \\texttt{Omega\\_plus\\_omega.lt} and not \\texttt{Nat.lt}. Moreover,  the coercion mechanism converts the terms \\texttt{6:nat} [resp. \\texttt{8:nat} ]\n% into \\texttt{inl 6} [resp. \\texttt{inl 8}].  So, the initial goal is correctly interpreted by \\coq{}, but not as an inequality between two natural numbers.\n\n\n% \\begin{Coqsrc}\n% Set Printing All.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n% 1 subgoal (ID 337)\n  \n%   ============================\n%   @on_lt nat Peano.lt Nat.compare Omega (S (S (S (S (S (S O))))))\n%     (S (S (S (S (S (S (S (S O))))))))\n% \\end{Coqanswer}\n\n\n% Anyway, the initial goal is provable, using \\texttt{le\\_AsB}'s first constructor.\n\n% \\begin{Coqsrc}\n%   constructor; auto with arith.\n% Qed.\n% \\end{Coqsrc}\n\n% \\end{remark}\n%\n\n\n\n\\section{Limits and successors}\n\nLet us look again at our implementation of $\\omega+\\omega$. We can classify its elements into three categories:\n\n\\begin{itemize}\n\\item The least ordinal, \\texttt{(inl 0)}, also known as \\texttt{(fin 0)}.\n\\item The first infinite ordinal $\\omega$.\n\\item The remaining ordinals, either of the form \\texttt{(inl (S $i$))} or \\texttt{(inr (S $i$))} (in black on Figure~\\ref{fig:ordinal-sequence}), called \\emph{successor ordinals}.\n\\end{itemize}\n\n\\subsection{Definitions}\nIt would be interesting to specify, at the most generic level, what a zero, a successor, or a limit ordinal is. Let $<$ be a strict order on a type $A$.\n\n\\begin{itemize}\n\\item A \\emph{least} element is a lower bound (in the weak sense) of the full set on $A$,\n\\item $y$ is a \\emph{successor} of $x$ if $x<y$ and there is no element between $x$ and $y$. We will also say that $x$ is a \\emph{predecessor} of $y$.\n\\item $x$ is a \\emph{limit} if $x$ is not a least element, and for any $y$ such that $y<x$,\n there exists some $z$ such that $y<z<x$.\n\\end{itemize}\n\n\nThe following definitions are in Library \\href{../theories/html/hydras.Prelude.MoreOrders.html}{Prelude.MoreOrders}.\n\n\\input{movies/snippets/MoreOrders/Defs}\n\n\n\\index{hydras}{Exercises}\n\\begin{exercise}\nProve that, in any ordinal notation system, every ordinal has at most one predecessor and at most one successor. 
\n\n\\emph{You may start this exercise with the file\n\\href{https://github.com/coq-community/hydra-battles/blob/master/exercises/ordinals/predSuccUnicity.v}{exercises/ordinals/predSuccUnicity.v}.}\n\n\\end{exercise}\n\n\\index{hydras}{Exercises}\n\\begin{exercise}\nProve that, in any ordinal notation system, if $\\beta$ is a successor of $\\alpha$,\nthen for any $\\gamma$, $\\gamma<\\beta$ implies \n$\\gamma\\leq\\alpha$.\n\n\\emph{You may start this exercise with the file\n\\href{https://github.com/coq-community/hydra-battles/blob/master/exercises/ordinals/lt_succ_le.v}{exercises/ordinals/lt\\_succ\\_le.v}.}\n\\end{exercise}\n\n\n\n\n\\subsection{Limits and successors in \\texorpdfstring{$\\omega+\\omega$}{omega+omega}}\n\nUsing the definitions above, we can prove the following lemma:\n\n\\vspace{4pt}\n\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega_plus_omega.html}{OrdinalNotations.ON\\_Omega\\_plus\\_omega}}\n\n\\input{movies/snippets/ON_Omega_plus_omega/limitIff}\n\n\\vspace{4pt}\n\nRegarding successors, let us define the following function and prove its correctness:\n\n\n\\input{movies/snippets/ON_Omega_plus_omega/succDef}\n\n\\input{movies/snippets/ON_Omega_plus_omega/succCorrect}\n\n\\vspace{4pt}\n\n\n\nWe can also check whether an ordinal is a successor by a simple computation:\n\n\\input{movies/snippets/ON_Omega_plus_omega/succb}\n\n\\vspace{4pt}\n\nFinally, the nature of any ordinal is decidable (inside this notation system):\n\n\n\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega_plus_omega.html}{OrdinalNotations.ON\\_Omega\\_plus\\_omega}}\n\n\\input{movies/snippets/ON_Omega_plus_omega/ZeroLimitSuccDec}\n\n\\section{Product of ordinal notations}\n\nLet \\texttt{NA} and \\texttt{NB} be two ordinal notations, on the respective ordered types \\texttt{A} and \\texttt{B}. The product of \\texttt{NA} and \\texttt{NB} is considered as the concatenation of $B$ copies of $A$, ordered by the lexicographic order on $B\\times A$.\n\nIn \\coq{}, we build an instance of class \\texttt{ON} through a sequence of steps, as for the sum of ordinal notations.\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_mult.html}{OrdinalNotations.ON\\_mult}}\n\n\\inputsnippets{ON_mult/Defs}\n\\inputsnippets{ON_mult/multComp,  ON_mult/ONMult}\n\\inputsnippets{ON_mult/endDefs} \n\n\\section{The ordinal \\texorpdfstring{$\\omega^2$}{omega2}}\n\nThe ordinal $\\omega^2$ (also called $\\phi_0(2)$; see Chap.~\\ref{chap:schutte}) is an instance of the multiplication presented in the preceding section.\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega2.html}{OrdinalNotations.ON\\_Omega2}}\n\n\\inputsnippets{ON_Omega2/Omega2Def, ON_Omega2/Defs}\n\n\\subsection{Arithmetic of \\texorpdfstring{$\\omega^2$}{omega^2}} \n\n\\subsubsection{Successor}\n\nThe successor of any ordinal is defined by a simple pattern matching.\n\n\\input{movies/snippets/ON_Omega2/succ}\n\n\nThis function is proved to be correct w.r.t. the \\texttt{Successor} predicate.\n\n\\input{movies/snippets/ON_Omega2/succOK}\n\n\\input{movies/snippets/ON_Omega2/succLemmas}\n\n\n\\subsubsection{Addition}\n\nWe can define on \\texttt{Omega2} an addition which extends the addition on \\texttt{nat}. 
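On the underlying representation, where the pair $(i,j)$ denotes the ordinal $\\omega\\times i+j$, such an addition may be sketched as follows (a sketch only; the library's actual definition is in the snippet below):\n\n\\begin{Coqsrc}\n(* Sketch: an infinite right addend absorbs the finite part of the\n   left addend. *)\nDefinition plus (alpha beta : nat * nat) : nat * nat :=\n  match alpha, beta with\n  | (a, b), (0, d) => (a, b + d)  (* finite right addend *)\n  | (a, _), (c, d) => (a + c, d)  (* infinite right addend *)\n  end.\n\n(* plus (0,1) (1,0) = (1,0), i.e., 1 + omega = omega, whereas\n   plus (1,0) (0,1) = (1,1), i.e., omega + 1. *)\n\\end{Coqsrc}\n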
\n\n\\subsubsection{Multiplication}\n\nThe restriction of ordinal multiplication to the segment $[0,\\omega^2)$ is not a total function.\nFor instance $\\omega\\times\\omega= \\omega^2$ is outside the set of represented values.\nNevertheless, we can define two operations mixing natural numbers and ordinals.\n\n\\input{movies/snippets/ON_Omega2/multFinDef}\n\\input{movies/snippets/ON_Omega2/multFinExamples}\n\nMultiplication by a finite ordinal and addition are related through the following lemma:\n\n\\inputsnippets{ON_Omega2/uniqueDecompositiona,\n  ON_Omega2/uniqueDecompositionb,\n  ON_Omega2/uniqueDecompositionz}\n\n\\subsection{A proof of termination using \\texorpdfstring{$\\omega^2$}{omega^2}} \n\\label{sect:merge-example}\n\nUsing the lemma of Sect.~\\vref{sect:measure-ON}, we can easily define a total function that merges two lists (example contributed by Pascal Manoury).\n\n\\index{coq}{Commands!Function}\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega2.html}{OrdinalNotations.ON\\_Omega2}}\n\n\\input{movies/snippets/ON_Omega2/Merge}\n\n\\subsection{Yet another proof of impossibility}\n\\label{omega2-case}\n\nIn Sect.~\\vref{omega-case}, we proved that there exists no variant from \\texttt{Hydra} to \\texttt{(nat,$<$)}\n(\\emph{i.e.} the ordinal $\\omega$) for proving the termination of all hydra battles.\nWe now prove that the ordinal $\\omega^2$ is also insufficient for this purpose. \n\nThe proof we are going to comment on has exactly the same structure as in Section~\\ref{omega-case}.\nNevertheless, the proofs of the technical lemmas are a little more complex, due to \nthe structure of the lexicographic order on $\\mathbb{N}\\times\\mathbb{N}$. \nConsider, for instance, that there are infinitely many ordinals between\n$\\omega$ and $\\omega\\times 2$.\n\nThe detailed proof script is in the file\n\\href{https://github.com/coq-community/hydra-battles/blob/master/theories/ordinals/Hydra/Omega2_Small.v}{theories/ordinals/Hydra/Omega2\\_Small.v}.\n\n\\subsubsection{Preliminaries}\nLet us assume there is a variant from \\texttt{Hydra} into $\\omega^2$ for proving the termination of all hydra battles.\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Omega2_Small.html}{Hydra.Omega2\\_Small}}\n\n\\input{movies/snippets/Omega2_Small/Impossibility}\n\\input{movies/snippets/Omega2_Small/Impossibilitya}\n\nWe follow the same pattern as in Sect.~\\ref{omega-case}.\nFirst, we define an injection $\\iota$ from type \\texttt{t} into \\texttt{Hydra}, by\nassociating with each ordinal $\\omega\\times i+ j = (i,j)$ the hydra with $i$ branches of length $2$ and\n$j$ branches of length $1$.\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Omega2_Small.html\\#iota}{Hydra.Omega2\\_Small}}\n\n\\input{movies/snippets/Omega2_Small/Impossibilityc}\n\nFor instance, Figure~\\vref{fig:essai2} shows the hydra associated with the ordinal \n$(3,5)$, a.k.a. $\\omega\\times 3 + 5$.
\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.4]\n\\node (foot) at (6,0) {$\\bullet$};\n\\node (N1) at (1,3) {$\\bullet$};\n\\node (N2) at (3,3) {$\\bullet$};\n\\node (N3) at (5,3) {$\\bullet$};\n\\node (N4) at (8,3) {$\\Smiley[2][green]$};\n\\node (N5) at (11,3) {$\\Smiley[2][green]$};\n\\node (N6) at (14,3) {$\\Smiley[2][green]$};\n\\node (N7) at (17,3){$\\Smiley[2][green]$};\n\\node (N8) at (20,3){$\\Smiley[2][green]$};\n\\node  (N9) at (0,5) {$\\Smiley[2][green]$};\n\\node (N10) at (2,5) {$\\Smiley[2][green]$};\n\\node (N11) at (4,5) {$\\Smiley[2][green]$};\n\\draw (foot) to [bend left=10] (N1);\n\\draw (foot) -- (N2);\n\\draw (foot) -- (N3);\n\\draw (foot) -- (N4);\n\\draw (foot) -- (N5);\n\\draw (foot) -- (N6);\n\\draw (foot) to [bend right=10] (N7);\n\\draw (foot) to [bend right=15] (N8);\n\\draw (N1) to [bend left=10] (N9);\n\\draw (N2) -- (N10);\n\\draw (N3) -- (N11);\n\\end{tikzpicture}\n\\caption{\\label{fig:essai2}\nThe hydra $\\iota(\\omega\\times 3+5)$}\n\\end{figure}\n\nAs in Sect.~\\ref{omega-case}, we build a hydra out of the range of \\texttt{iota} (represented in Fig.~\\vref{fig:h-omega2-small}).\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5]\n\\node (foot) at (2,0) {$\\bullet$};\n\\node (N1) at (2,2) {$\\bullet$};\n\\node (N2) at (3,4) {$\\Smiley[2][green]$};\n\\node (N3) at (1,4) {$\\Smiley[2][green]$};\n\\draw (foot) -- (N1);\n\\draw (N1) to [bend right =15] (N2);\n\\draw (N1) to  [bend left=15](N3);\n\\end{tikzpicture}\n\\caption{\\label{fig:h-omega2-small}The hydra \\texttt{big\\_h}.}\n\\end{figure}\n\n\\input{movies/snippets/Omega2_Small/Impossibilityb}\n\nIn a second step, we build a ``smaller'' hydra\\footnote{With respect to the measure $m$.}.\n\n\\input{movies/snippets/Omega2_Small/Impossibilityd}\n\n\\vspace{4pt}\n\nAs in Sect.~\\ref{omega-case}, we prove the double inequality \\texttt{m big\\_h o<= m small\\_h o< m big\\_h}, which is impossible.\n\n\\subsubsection{Proof of the inequality \\texttt{m small\\_h o< m big\\_h}}\n\nIn order to prove the inequality \\texttt{m\\_lt: m small\\_h o< m big\\_h}, it suffices to\nbuild a battle transforming \\texttt{big\\_h} into \\texttt{small\\_h}.\n\nFirst, we prove that \\texttt{small\\_h} is reachable from \\texttt{big\\_h} in one or two steps. Let us decompose \\texttt{m big\\_h} as $(i,j)$.\nIf $j=0$, then one round suffices to transform \\texttt{big\\_h} into $\\iota(i,j)$.\nIf $j>0$, then a first round transforms \\texttt{big\\_h} into $\\iota(i+1,0)$ and a second round into $\\iota(i,j)$. So, we have the following result.\n\n\\input{movies/snippets/Omega2_Small/bigToSmall}\n\nSince $m$ is a variant, we infer the following inequality:\n\n\\input{movies/snippets/Omega2_Small/mLt}\n\n\\subsubsection{Proof of the inequality \\texttt{m big\\_h o<= m small\\_h}}\n\nThe proof of the inequality \\texttt{m big\\_h o<= m small\\_h} is significantly more complex than in Sect.~\\ref{omega-case}. If we consider any ordinal $\\alpha=(i,j)$, where $i>0$, there are infinitely many\nordinals strictly less than $\\alpha$, and infinitely many battles that start from\n$\\iota(\\alpha)$. Indeed, at any configuration $\\iota(k,0)$, where $k>0$, the hydra can freely choose any replication number.
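\n\nIn symbols: one round played from such a configuration may lead to any of infinitely many successors,\n\\[ \\iota(k,0) \\;\\longrightarrow\\; \\iota(k-1,\\,n) \\qquad (k>0,\\ n\\in\\mathbb{N}\\mbox{ arbitrary}). \\]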
\n\nIntuitively, the measure of such a hydra must be large enough to take into account\nall the possible battles starting from that hydra.\nLet us now give more technical details.\n\nThe first steps of our proof prepare a well-founded induction on $\\omega^2$.\n\n\\input{movies/snippets/Omega2_Small/mGe}\n\nThen a case analysis on $i$ and $j$ allows us to\nconsider three cases:\n\n\\begin{itemize}\n\\item $i=j=0$: the inequality is trivial.\n\\item $i=1+l, j=0$ ($(i,j)$ is a limit ordinal): by the induction hypothesis \\texttt{IHij},\n  $(l,k)\\leq m(\\iota(l,k))$ for any $k$. But (by the rules of the hydra game), $\\iota(i,0)$ is transformed into any $\\iota(l,k)$ in one round. Thus $m(\\iota(l,k)) < m(\\iota(i,0))$ for any $k$.\n  Therefore, $(l,k) <  m(\\iota(i,0))$ for any $k$, thus\n  $(i,0) \\leq m(\\iota(i,0))$.\n\\item $j= l+1$ ($(i,j)$ is a successor): a similar, but simpler case; we apply the induction hypothesis to the pair $(i,l)$.\n\\end{itemize}\n\nPlease look at the proof script for more details.\n\n\\input{movies/snippets/Omega2_Small/mGeb}\n\n\\subsubsection{End of the proof}\nFrom \\texttt{m\\_ge}, we get \\texttt{m big\\_h o<= m (iota (m big\\_h)) = m small\\_h}. \nSince $<$ is a strict order (irreflexive and transitive), this inequality is incompatible with the strict inequality \\texttt{m small\\_h o< m big\\_h} (lemma \\texttt{m\\_lt}).\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Omega2_Small.html\\#Impossible}{Hydra.Omega2\\_Small}}\n\n\\input{movies/snippets/Omega2_Small/Impossible}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nProve that there exists no variant $m$ from \\texttt{Hydra} into $\\omega^2$ for proving\nthe termination of all \\emph{standard} battles.\n\\end{exercise}\n\n\\begin{remark}\nIn Chapter~\\ref{ks-chapter}, we prove a generalization of the impossibility lemmas of\nSect.~\\ref{omega-case} and this section, with the same proof structure, but with much more \ncomplex technical details.\n\\end{remark}\n\n\\section{A notation for finite ordinals}\n\nLet $n$ be some natural number. The segment associated with $n$ is the interval \n$[0,n)\\,=\\,\\{0,1,\\dots,n-1\\}$.
\nOne may represent the ordinal $n$ by a sigma type.\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Finite.html}{OrdinalNotations.ON\\_Finite}}\n\n\\label{def: Finite-ord-type}\n\n\\input{movies/snippets/ON_Finite/Defs}\n\nFor instance, let us build two elements of the segment $[0, 7)$, \\emph{i.e.} two\ninhabitants of type (\\texttt{t 7}), and prove a simple inequality (see Fig.~\\ref{fig:O7}).\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.6]\n\n\\node (N0) at (0,0) {$\\bullet$};\n\\node (i0) at (0,1) {$0$};\n\\node (N1) at (2,0) {$\\bullet$};\n\\node (i1) at (2,1) {$1$};\n\\node (N2) at (4,0) {$\\bullet$};\n\\node (i2) at (4,1) {$2$};\n\\node (N3) at (6,0) {$\\bullet$};\n\\node (i3) at (6,1) {$3$};\n\\node (N4) at (8,0) {$\\bullet$};\n\\node (i4) at (8,1) {$4$};\n\\node (N5) at (10,0) {$\\bullet$};\n\\node (i5) at (10,1) {$5$};\n\\node (N6) at (12,0) {$\\bullet$};\n\\node (i6) at (12,1) {$6$};\n\\node(alpha1) at (4,-1) {$\\alpha_1$};\n\\node(alpha2) at (10,-1) {$\\beta_1$};\n\\end{tikzpicture}\n\n\\caption{The segment $\\mathbb{O}_7$\\label{fig:O7}}\n\\end{figure}\n  \n\\index{coq}{Commands!Program}\n\n\\input{movies/snippets/ON_Finite/Example1}\n\nNote that the type (\\texttt{t 0}) is empty, and that, for any natural number\n$n$, $n$ does not belong to (\\texttt{t $n$}).\n\n\\input{movies/snippets/ON_Finite/t0Empty}\n\n\\begin{Coqsrc}\nProgram Definition bad : t 10 := 10.\nNext Obligation.\n\\end{Coqsrc}\n\n\\begin{Coqanswer}\n1 subgoal (ID 118)\n  \n  ============================\n  10 <? 10 \n\\end{Coqanswer}\n\n\\begin{Coqsrc}\n  compute.\nAbort.\n\\end{Coqsrc}\n\nIn order to build an instance of \\texttt{ON}, we define a comparison function, and prove its correctness.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/ON_Finite/compareDef}\n\n\\begin{remark}\nThe proof of \\texttt{compare\\_correct} uses a well-known pattern of \\coq{}.\nLet us consider the following subgoal.\n\n\\begin{Coqanswer}\n 1 subgoal (ID 110)\n  \n  n, x0 : nat\n  i, i0 : x0 <? S n\n  ============================\n  exist (fun i1 : nat => i1 <=? n) x0 i =\n  exist (fun i1 : nat => i1 <=? n) x0 i0\n\\end{Coqanswer}\n\nApplying the tactic \\texttt{f\\_equal} generates a simpler subgoal.\n\n\\begin{Coqanswer}\n1 subgoal (ID 112)\n  \n  n, x0 : nat\n  i, i0 : x0 <? S n\n  ============================\n  i = i0\n\\end{Coqanswer}\n\nWe now have to prove that there exists at most one proof of (\\texttt{Nat.ltb x0 (S n)}). This is not obvious, but it is a consequence of the following lemma of library \n\\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Logic.Eqdep_dec.html}{Coq.Logic.Eqdep\\_dec}.\n\n\\index{Coq}{Unicity of equality proofs}\n\\label{sect:eq-proof-unicity}\n\n\\begin{Coqanswer}\neq_proofs_unicity_on :\nforall (A : Type) (x : A),\n(forall y : A, x = y \\/ x <> y) -> \nforall (y : A) (p1 p2 : x = y), p1 = p2\n\\end{Coqanswer}\n\nThus unicity of proofs of \\texttt{Nat.ltb x0 (S n)} comes from the decidability of\nequality on type \\texttt{bool}.
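\n\nAs an illustration of this pattern (with a hypothetical lemma name of our own, not taken from the library), the instantiation to \\texttt{bool} can be carried out as follows:\n\n\\begin{Coqsrc}\nFrom Coq Require Import Eqdep_dec.\n\n(* any two proofs of an equality between booleans are equal,\n   because equality on bool is decidable *)\nLemma bool_proofs_unicity (b c : bool) (p q : b = c) : p = q.\nProof.\n  apply eq_proofs_unicity_on.\n  intro y; destruct b, y;\n    (left; reflexivity) || (right; discriminate).\nQed.\n\\end{Coqsrc}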
\n\nThis is why we used the boolean function \\texttt{Nat.ltb} instead of the inductive predicate \\texttt{Nat.lt} in the definition of type \\texttt{t $n$} (see page~\\pageref{def: Finite-ord-type}).\nFor more information about this pattern, please look at the numerous mailing lists and \nFAQs on \\coq{}.\n\n\\end{remark}\n\nPlease note that attempting to compare a term of type (\\texttt{t $n$}) with a term of\ntype (\\texttt{t $p$}) leads to an error if $n$ and $p$ are not convertible.\n\n\\input{movies/snippets/ON_Finite/Example2}\n\nApplying lemmas of the libraries \\texttt{Coq.Wellfounded.Inverse\\_Image}, \\linebreak\n\\texttt{Coq.Wellfounded.Inclusion}, and \\texttt{Coq.Arith.Wf\\_nat}, we prove that our\nrelation \\texttt{lt} is well founded.\n\n\\input{movies/snippets/ON_Finite/ltWf}\n\nNow we can build our instance of \\texttt{OrdinalNotation}.\n\n\\input{movies/snippets/ON_Finite/ONInstance}\n\n\\begin{remark}\nIt is important to keep in mind that the integer $n$ is not an ``element'' of \\texttt{FinOrd $n$}. In set-theoretic presentations of ordinals, the set associated with the ordinal $n$ is $\\{0,1,\\dots,n-1\\}$. \nIn our formalization, the interpretation of an ordinal as a set is realized by the function \\texttt{bigO} (see Section~\\vref{sect:bigO-ON}).\n\\end{remark}\n\n\\begin{remark}\nThere is no interesting arithmetic on finite ordinals, since functions like successor, addition, etc., cannot be represented in \\coq{} as \\emph{total} functions.\n\\end{remark}\n\n\\begin{remark}\nFinite ordinals are also formalized in MathComp~\\cite{MCB}. See also Adam Chlipala's \\emph{CPDT}~\\cite{chlipalacpdt2011} for a thorough study of the use of dependent types.\n\\end{remark}\n\n\\section{Comparing two ordinal notations}\n\nIt is sometimes useful to compare two ordinal notations with respect to expressive power\n(the segment of ordinals they represent). \n\nThe following class specifies a strict inclusion of segments. The notation \\texttt{OA} describes a segment $[0,\\alpha)$, and \\texttt{OB} describes a larger segment (which contains a notation for $\\alpha$, whilst $\\alpha$ is not represented in \\texttt{OA}).
 We also require that the comparison functions of the two notation systems are compatible.\n\n\\begin{figure}[h]\n   \\centering\n   \\begin{tikzpicture}[very thick, scale=0.6]\n\\begin{scope}[color=blue]\n\\node (A) at (0,0) {$A$};\n\\node(A0) at (2,0)[label=below:$0$]{$\\bullet$};\n\\node(A1) at (3,0)[label=below:$1$]{$\\bullet$};\n\\node(A2) at (4,0)[label=below:$2$]{$\\bullet$};\n\\node (Adots) at (6,0) {$\\ldots$};\n\\end{scope}\n\\begin{scope}[color=red]\n\\node (B) at (0,2) {$B$};\n\\node(B0) at (2,2)[label=above:$0$]{$\\bullet$};\n\\node(B1) at (3,2)[label=above:$1$]{$\\bullet$};\n\\node(B2) at (4,2)[label=above:$2$]{$\\bullet$};\n\\node (Bdots) at (6,2) {$\\ldots$};\n\\node (b) at (8,2) [label=above:$\\omega$]{$\\bullet$};\n\\node (bsucc) at (9,2) [label=above:$\\omega+1$]{$\\bullet$};\n\\node (Bdots2) at (10,2) {$\\ldots$};\n\\end{scope}\n\\begin{scope}[color=red!50!blue]\n\\draw [->,thin] (A0) -- node [auto] {$\\iota$} (B0);\n\\draw [->,thin] (A1) -- node [auto] {$\\iota$} (B1);\n\\draw [->,thin] (A2) -- node [auto] {$\\iota$} (B2);\n\\draw [->,thin] (Adots) -- node [auto] {$\\iota$} (Bdots);\n\\end{scope}\n\\end{tikzpicture}\n   \\caption{\\textcolor{blue}{$A$} is a sub-segment  of \\textcolor{red}{$B$}}\n   \\label{fig:subsegment}\n \\end{figure}\n\nIf \\texttt{OB} is presumed to be correct, then we may consider that \\texttt{OA} ``inherits'' its correctness from the bigger notation system \\texttt{OB}.\n\n\\label{types:SubON}\n\\index{hydras}{Library OrdinalNotations!Type classes!SubON}\n\nThis requirement is expressed by the following definition\n(in~\\href{../theories/html/hydras.OrdinalNotations.ON_Generic.html}{ON\\_Generic}).\n\n\\input{movies/snippets/ON_Generic/SubONDef}\n\nFor instance, we prove that \\texttt{Omega} is a sub-notation of\n\\texttt{Omega\\_plus\\_Omega} (with $\\omega$ as the first ``new'' ordinal, and \\texttt{fin} as the injection).\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Omega_plus_omega.html}{OrdinalNotations.ON\\_Omega\\_plus\\_omega}}\n\n\\input{movies/snippets/ON_Omega_plus_omega/Incl}\n\nWe can also show that, if $i<j$, then the segment $[0,i)$ is a ``sub-segment'' of\n$[0,j)$. Since the terms ($t\\;i$) and ($t\\;j$) are not convertible, we consider a ``cast'' \nfunction $\\iota$ from ($t\\;i$) into ($t\\;j$), and prove that this function is a monotonous bijection from ($t\\;i$) to\nthe segment $[0,i)$ of ($t\\;j$).\n\n\\index{coq}{Commands!Program}\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.OrdinalNotations.ON_Finite.html}{OrdinalNotations.ON\\_Finite}}\n\n\\input{movies/snippets/ON_Finite/InclIJ}\n\\input{movies/snippets/ON_Finite/InclIJa}\n\\input{movies/snippets/ON_Finite/InclIJb}\n\\input{movies/snippets/ON_Finite/InclIJc}\n\\input{movies/snippets/ON_Finite/InclIJd}\n\n\\index{hydras}{Exercises}\n\\begin{exercise}\nProve that \\texttt{Omega\\_plus\\_Omega} cannot be a sub-notation of \\texttt{Omega}.\n\\end{exercise}\n\n\\index{hydras}{Projects}\n\\begin{project}\nAdapt the definition of \\texttt{Hvariant} (Sect.~\\ref{sect:hvariant-def}) in order to\nhave an ordinal notation as argument.
 Prove that if $O_A$ is a sub-notation of $O_B$, then any variant defined on $O_A$ can be automatically transformed into \na variant on $O_B$.\n\\end{project}\n\n\\section{Comparing an ordinal notation with Schütte's model}\n\nFinally, it may be interesting to compare an ordinal notation with the more theoretical model from Schütte (well, at least with our formalization of that model). This would be a relative proof of correctness of the considered ordinal notation.\n\nThe following class specifies that a notation \\texttt{OA} describes a segment $[0,\\alpha)$,\nwhere $\\alpha$ is a countable ordinal \\emph{à la} Schütte.\n\n\\label{types:ON-for}\n\\index{hydras}{Library OrdinalNotations!Type classes!ON\\_correct}\n\n\\input{movies/snippets/ON_Generic/ONCorrect}\n\nFor instance, the following theorem tells us that \\texttt{Epsilon0}, our notation system for the segment $[0,\\epsilon_0)$, is a correct implementation of the theoretically defined ordinal $\\epsilon_0$\n(see Chapter~\\ref{chap:schutte} for more details).\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.Schutte.Correctness_E0.html}{Schutte.Correctness\\_E0}}\n\n\\input{movies/snippets/Correctness_E0/Epsilon0Correct}\n\n\\index{hydras}{Projects}\n\n\\begin{project}\n  When you have read Chapter~\\ref{chap:schutte}, prove that the sum of two ordinal notations \\texttt{ON\\_plus} implements the addition of ordinals.\n\\end{project}\n\n\\section{Isomorphism of ordinal notations}\n\nIn some cases, we want to show that two notation systems describe the same segment (for instance $[0,3+\\omega)$ and $[0,\\omega)$\\;). For this purpose, one may prove that the two notation systems are order-isomorphic.\n\n\\index{hydras}{Library OrdinalNotations!Type classes!ON\\_Iso}\n\n\\label{types:ON-iso} \n\n\\input{movies/snippets/ON_Generic/ONIso}\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\n\\label{exo:i-plus-omega}\nLet $i$ be some natural number. Prove that the notation systems \n\\texttt{Omega} and (\\texttt{ON\\_plus (FinOrd $i$) Omega}) are isomorphic.\n\n{\\it \\textbf{Note:} This property reflects the equality $i+\\omega=\\omega$, which we also prove in larger notation systems, as well as in Schütte's model.}\nThis exercise is partially solved for $i=3$ (in~\\href{../theories/html/hydras.OrdinalNotations.Example_3PlusOmega.html}{OrdinalNotations.Example\\_3PlusOmega}).\n\n\\end{exercise}\n\n\\index{hydras}{Projects}\n\\label{exo:ON-mult}\n\\begin{project}
\nThis project is about the non-commutativity of the multiplication of ordinals, reflected in ordinal notations.\n\nFor instance, the\nelements of the product (\\texttt{ON\\_mult Omega (FinOrd 3)}) are ordered as follows.\n\\[(0,0),(0,1),(0,2),(0,3),(0,4),\\dots,{\\color{red}(1,0),} (1,1),(1,2),\\dots, {\\color{red}(2,0)},(2,1),(2,2),\\dots\\]\n\nNote that the elements of (\\texttt{ON\\_mult (FinOrd 3) Omega}) are differently ordered (without limit ordinals):\n\\[(0,0),(1,0),(2,0),(0,1),(1,1),(2,1),(0,2),(1,2),(2,2),(0,3),\\dots\\]\n\nProve formally that \\texttt{ON\\_mult (FinOrd $i$) Omega} is isomorphic to\n\\texttt{Omega} whilst\n\\texttt{Omega} is a sub-notation of \\texttt{ON\\_mult Omega (FinOrd $i$)},\nfor any strictly positive $i$. \n\n\\textbf{Note:} Like Exercise~\\ref{exo:i-plus-omega}, this project corresponds to the [in]equalities $i\\times\\omega=\\omega<\\omega\\times i$ (the strict inequality holding for $i\\geq 2$).\n\\end{project}\n\n\\index{hydras}{Projects}\n\\begin{project}\nConsider two isomorphic ordinal notations \\texttt{OA} and \\texttt{OB}.\nProve that, if \\texttt{OA} [resp. \\texttt{OB}] is a correct implementation \nof $\\alpha$ [resp. $\\beta$], then $\\alpha=\\beta$.\n\\end{project}\n\n\\index{hydras}{Projects}\n\\begin{project}\n\\label{project:succ-limit-dec}\nAdd to the class \\texttt{ON} the requirement that for any $\\alpha$ it is decidable whether $\\alpha$ is $0$, a successor or a limit ordinal.\n\n\\textbf{Hint:} Beware of the instances associated with sum and product of notations!\nYou may consider additional fields \nto make the sum and product of notations ``compositional''.\n\n\\end{project}\n\n\\index{hydras}{Projects}\n\\begin{project}\n\\label{project:on-setoid}\nReconsider the class \\texttt{ON}, with an equivalence instead of Leibniz equality.\n\\end{project}\n\n\\section{Other ordinal notations}\n%%% TODO : Fix the multiplication function in branch FixOmegaOmega\n\n\\index{hydras}{Projects}\n\n\\begin{project}\nLet $N_A$ be a notation system for ordinals strictly less than $\\alpha$, \nwith the strict order $(A,<_A)$. Please build the notation system\n\\texttt{ON\\_Expl $N_A$}, on the type of multisets of elements of $A$\n(or, if preferred, the type of non-increasing finite sequences on $A$,\nprovided with the lexicographic ordering on lists).\n\nFor instance, let us take $N_A=\\texttt{Omega}$, and consider the lists $u=\\langle 4,4,2,1,0\\rangle$,\n$v=\\langle 4,3,3,3,3,3,2\\rangle$, and $w=\\langle 5\\rangle$. Then $v<u<w$. \n\nIn contrast, the list $\\langle5,6,3,3\\rangle$ is not non-increasing (\\emph{i.e.} sorted w.r.t. $\\geq$), so it is not to be considered.
\n\nNote that if the notation $N_A$ implements the ordinal \n$\\alpha$, the new notation $\\omega^{N_A}$ must implement the ordinal $\\phi_0(\\alpha)$, a.k.a. $\\omega^\\alpha$ (see Chapter~\\ref{chap:schutte}).\n\n\\end{project}\n\n\\begin{remark}\nThe sets of ordinal terms in Cantor normal form (see Chap.~\\ref{chap:T1}) and \nin Veblen normal form (see \n\\href{../theories/html/hydras.Gamma0.Gamma0.html}{Gamma0.Gamma0}) are shown to be ordinal notation systems, but there is a lot of work to be done in order to unify ad-hoc definitions and proofs which were written before the definition of the \\texttt{ON} type class.\n\\end{remark}\n\n%------------------------------------------------------------------------\n\n\\chapter[A proof of termination, using epsilon0]{A proof of termination, using ordinals below \\texorpdfstring{$\\epsilon_0$}{Epsilon0}}\n\n\\label{cnf-math-def}\n\\label{chap:T1}\n\nIn this chapter, we adapt to \\coq{} the well-known~\\cite{KP82} proof that Hercules eventually wins every battle, whatever strategy each player follows.\nIn other words, we present a formal and self-contained proof of termination of all [free] hydra battles.\nFirst, we take from Manolios and Vroon~\\cite{Manolios2005} a representation of the ordinal $\\epsilon_0$ as terms in Cantor normal form. Then, we define a variant for hydra battles as a measure that maps any hydra to some ordinal strictly less than $\\epsilon_0$.\n\n\\section{The ordinal \\texorpdfstring{\\(\\epsilon_0\\)}{epsilon0}}\n\\label{sec:epsilon0-intro}\n\n\\subsection{Cantor normal form}\n\\index{maths}{Cantor normal form}\n\nThe ordinal \\(\\epsilon_0\\) is the least ordinal number that satisfies \nthe equation \\(\\alpha = \\omega^\\alpha\\), where \\(\\omega\\) is \nthe least infinite ordinal.
 Thus, we can consider \(\epsilon_0\) as an\n\\emph{infinite} \\(\\omega\\)-tower.\nNevertheless, \nany ordinal strictly less than \\(\\epsilon_0\\) \ncan be finitely represented by a unique \\emph{Cantor normal form}, \nthat is, an expression which is either the ordinal \\(0\\) or \na sum \\(\\omega^{\\alpha_1} \\times n_1 + \\omega^{\\alpha_2} \\times n_2 + \n  \\dots + \\omega^{\\alpha_p} \\times n_p\\) where all the \\(\\alpha_i\\) \nare ordinals in Cantor normal form, \\(\\alpha_1 > \\alpha_2 > \\dots > \\alpha_p\\), \nand all the \\(n_i\\) are positive integers.\n\nAn example of Cantor normal form is displayed in Fig.~\\ref{fig:cnf-example}.\nNote that any ordinal of\nthe form \\(\\omega^0 \\times i + 0\\) is just written \\(i\\).\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[scale=2, every node/.style={transform shape}]\n\\node[color=blue]{$\\omega^{(\\omega^\\omega\\,+\\, \\omega^2 \\times 8 \\,+\\, \\omega)}+ \\omega^\\omega + \\omega^4+ 6$};\n\\end{tikzpicture}\n\\caption{\\label{fig:cnf-example}\nAn ordinal in Cantor normal form}\n\\end{figure}\n\nIn the rest of this section, we define an inductive type for representing in \\texttt{Coq}\nall the ordinals strictly less than \\(\\epsilon_0\\), then extend some arithmetic operations\nto this type, and finally prove that our representation fits well with \nthe expected mathematical properties: the order we define is a well order, \nand the decomposition into Cantor normal form is consistent \nwith the implementation of the arithmetic operations of exponentiation of base \\(\\omega\\) \nand addition.\n\n\\paragraph*{Remark}\n\\label{sec:orgheadline65}\nUnless explicitly mentioned, the term ``ordinal'' will be used instead of\n``ordinal strictly less than \\(\\epsilon_0\\)'' (except in Chapter~\\ref{chap:schutte} where it stands for ``countable ordinal'').\n\n\\subsection{A data type for ordinals in Cantor normal form}\n\\label{sec:orgheadline72}\n\\label{sec:T1-inductive-def}\n\nLet us define an inductive type whose \nconstructors are respectively associated\nwith the ways to build Cantor normal forms:\n\n\\begin{itemize}\n\\item the ordinal \\(0\\)\n\\item the construction \\((\\alpha,\\, n,\\,\\beta)  \\mapsto \\omega^\\alpha \\times (n + 1)+ \\beta \\quad (n\\in\\mathbb{N})\\)\n\\end{itemize}\n\n\\vspace{4pt}\n\\noindent\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#T1}{Epsilon0.T1}}\n\n\\label{types:T1}\n\\index{hydras}{Library Epsilon0!Types!T1}\n\n\\input{movies/snippets/T1/T1Def}
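\n\nFor instance (using only the two constructors just presented), the ordinal $\\omega+2 \\,=\\, \\omega^{\\omega^0}\\times 1+\\omega^0\\times 2$ corresponds to the following term; the name \\texttt{omega\\_plus\\_two} is ours, chosen for this illustration.\n\n\\begin{Coqsrc}\nExample omega_plus_two : T1 :=          (* omega + 2 *)\n  ocons (ocons zero 0 zero) 0           (* omega ^ (omega^0) * 1 + ... *)\n        (ocons zero 1 zero).            (* ... + omega^0 * 2 *)\n\\end{Coqsrc}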
\n\n\\paragraph{Remark}\nThe name \\texttt{T1} we gave to this data-type is specific to this development and refers\nto a hierarchy of ordinal notations. For instance, in Library \\href{../theories/html/hydras.Gamma0.T2.html}{Gamma0.T2}, the following type is used to represent ordinals strictly less than \\(\\Gamma_0\\), in Veblen normal form (see also~\\cite{schutte}).\n\\noindent\n\n\\input{movies/snippets/T2/T2Def}\n\n\\subsubsection{Example}\n\n\\label{alpha0-def}\nFor instance, the ordinal $\\omega^\\omega+\\omega^3\\times 5+2$ is represented by the following term:\n\n\\input{movies/snippets/T1/alpha0}\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[very thick, scale=0.5, level 1/.style={sibling distance=6cm},\nlevel 2/.style={sibling distance=35mm},  \nlevel 3/.style={sibling distance=17mm}]\n\\node  {ocons}\n  child {  node {ocons}\n            child { node {ocons} child {node {zero}} child {node{0}} child{node{zero}}}\n         child {node {0}}\n         child {node {zero}}}\n    child {node {0}}\n   child {node {ocons} \n child { node {ocons} child {node {zero}} child {node{2}} child{node{zero}}}\n  child {node {4}}\n         child {node {ocons} child {node {zero}} child {node{1}} child{node{zero}}}};\n\n\\end{tikzpicture}\n\n\\caption{The tree-like representation of the ordinal $\\omega^\\omega+\\omega^3\\times 5 +2$\\label{fig:cnf-tree}}\n\n\\end{figure}\n\n\\paragraph{Remark}\nFor simplicity's sake, we chose to forbid expressions of the form $\\omega^\\alpha\\times 0 + \\beta$. Thus, the construction (\\texttt{ocons $\\alpha$ $n$ $\\beta$}) is intended to represent the\nordinal $\\omega^\\alpha\\times(n+1)+\\beta$ and not $\\omega^\\alpha\\times n+\\beta$.\nIn a future version, we should replace the type \\texttt{nat} with \\texttt{positive} in \\texttt{T1}'s \ndefinition. But this replacement would take a lot of time \\dots{}\n\n\\subsection{Abbreviations}\n\nSome abbreviations may help to write complex ordinal terms more concisely.\n\n\\subsubsection{Finite ordinals}\n\\label{sec:orgheadline67}\n\nFor representing finite ordinals, \\emph{i.e.} natural numbers, we first introduce a notation for terms of the form $n+1$, then define a coercion from type \\texttt{nat} into \\texttt{T1}.\n\\label{sect:notation-FS}\n\n\\label{sect:notation-F}\n\n\\input{movies/snippets/T1/finiteOrds}\n\n\\subsubsection{The ordinal \\(\\omega\\)}\n\\label{sec:orgheadline68}\n\nSince \\(\\omega\\)'s Cantor normal form is \\(\\omega^{\\omega^0}\\times 1+ 0\\), we can define the following abbreviation:\n\n\\label{sect:omega-notation2}\n\n\\input{movies/snippets/T1/omegaDef}\n\nNote that \\texttt{omega} is not an identifier, thus any tactic like \\texttt{unfold omega} would fail.
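\n\nComputation is not hindered, though. Assuming the notation \\texttt{omega} denotes a term convertible to $\\omega^{\\omega^0}\\times 1+0$ written with the raw constructors, the following check (ours, not taken from the library) succeeds by mere expansion of the notation:\n\n\\begin{Coqsrc}\nExample omega_explicit : omega = ocons (ocons zero 0 zero) 0 zero.\nProof. reflexivity. Qed.\n\\end{Coqsrc}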
\n\n\\subsubsection{The ordinal \\(\\omega^\\alpha\\), a.k.a. \\(\\phi_0(\\alpha)\\)}\n\\label{sect:notation-phi0}\nWe also provide a notation for ordinals of the form $\\omega^\\alpha$.\n\n\\index{hydras}{Library Epsilon0!Notations!phi0@phi0 (exponential of base omega)}\n\n\\input{movies/snippets/T1/phi0Def}\n\n\\index{maths}{Additive principal ordinals}\n\n\\begin{remark}\n\\label{sec:orgheadline69}\nThe name \\(\\phi_0\\) comes from ordinal numbers theory. In~\\cite{schutte}, Schütte defines \n$\\phi_0$ as the ordering (\\emph{i.e.} enumerating) function of the set of \\emph{additive principal ordinals}, \\emph{i.e.} strictly positive ordinals $\\alpha$ that verify $\\forall \\beta<\\alpha, \\beta+\\alpha=\\alpha$. For Schütte, $\\omega^\\alpha$ is just a notation for $\\phi_0(\\alpha)$. See also Chapter~\\ref{chap:schutte} of this document.\n\\end{remark}\n\n\\subsubsection{The hierarchy of \\(\\omega\\)-towers}\n\\label{sec:orgheadline71}\n\nThe ordinal $\\epsilon_0$, although not represented by a finite term in Cantor normal form, is approximated by the sequence of $\\omega$-towers (see also Sect.~\\vref{sect:epsilon0-as-limit}).\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html}{Epsilon0.T1}}\n\n\\input{movies/snippets/T1/towerDef}\n\nFor instance, Figure~\\ref{fig:tower7} represents the ordinal returned by the\nevaluation of the term \\texttt{omega\\_tower 7}.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[scale=2, every node/.style={transform shape}]\n\\node[color=blue]{$\\omega^{{{\\omega}^{{{\\omega}}^{{{\\omega}}^{{\\omega^{{\\omega}^{\\omega}}}}}}}}$};\n\\end{tikzpicture}\n\\caption{\\label{fig:tower7}\nThe $\\omega$-tower of height 7}\n\\end{figure}\n\n\\subsection{Pretty-printing ordinals in Cantor normal form}\n\\label{sect:ppT1}\n\\index{hydras}{Library Epsilon0!Types!ppT1}\n\nLet us consider again the ordinal $\\alpha_0$ defined in Section~\\vref{alpha0-def}.\nIf we ask \\coq{} to print its normal form, we get a hardly readable term of type \\texttt{T1}.\n\n\\input{movies/snippets/T1/alpha0Compute}\n\nThe following data type defines an abstract syntax for more readable ordinal terms in Cantor normal form:\n\n\\label{types:ppT1}\n\\index{hydras}{Library Epsilon0!Functions!pp@ pp (pretty printing terms in Cantor normal form)}\n\n\\input{movies/snippets/T1/ppT1Def}\n\nThe function \\texttt{pp: T1 -> ppT1} converts any closed term of type \\texttt{T1} into a human-readable expression. For instance, let us convert the term \\texttt{alpha\\_0}.\n\n\\input{movies/snippets/T1/ppAlpha0}\n\n\\index{hydras}{Projects}\n\\begin{project}\nDesign (in \\ocaml?) a set of tools for systematically pretty printing ordinal terms in Cantor normal form.\n\\end{project}\n\n\\subsection{Comparison between ordinal terms}\n\\label{sec:orgheadline73}\n\nIn order to compare two terms of type \\texttt{T1}, we define a recursive function \\texttt{compare} that maps two ordinal terms $\\alpha$ and $\\beta$ to a value of type \\texttt{comparison}.
 This type is defined in \\coq{}'s standard library \n\\texttt{Init.Datatypes} and\ncontains three constructors: \\texttt{Lt} (less than), \\texttt{Eq} (equal), and\n\\texttt{Gt} (greater than).\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#compare}{Epsilon0.T1}}\n\n\\input{movies/snippets/T1/compareDef}\n\n\\label{Predicates:lt-T1}\nPlease note that this definition of \\texttt{lt} makes it easy to write proofs by computation, as shown by the following examples.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/T1/ltExamples}\n\n%%% TODO : link to Comparable and StrictOrder classes\n\nLinks between the function \\texttt{compare} and the relations\n\\texttt{lt} and \\texttt{eq} are established through the following lemmas (see~\\vref{sect:comparable-def}).\n\\vspace{4pt}\n\n\\input{movies/snippets/T1/Instances}\n\n\\subsubsection{A Predicate for Characterizing Normal Forms}\n\\label{sect:t1-nf}\n\n\\label{sec:orgheadline74}\n\\label{sec:orgheadline75}\nOur data-type \\texttt{T1} allows us to write expressions that\nare not properly in Cantor normal form as specified in Section~\\ref{sec:epsilon0-intro}.\nFor instance, consider the following term of type \\texttt{T1}. \n\n\\input{movies/snippets/T1/badTerm}\n\nThis term would have been written \\(\\omega^1\\times 2 + \\omega^\\omega \\times 3\\) in the usual mathematical notation.
 We note that the exponents of $\\omega$ are not in the right (strictly decreasing) order.\nNevertheless, with the help of the order \\texttt{lt} on \\texttt{T1}, we are now able to characterize\nthe set of all well-formed ordinal terms:\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#nf_b}{Epsilon0.T1}}\n\n\\label{Predicates:nf-T1}\n\n\\input{movies/snippets/T1/nfDef}\n\n\\input{movies/snippets/T1/nfAlpha0}\n\n\\input{movies/snippets/T1/nfBadTerm}\n\n\\subsection{Making normality implicit}\nWe would like to get rid of terms of type \\texttt{T1} which are not in Cantor normal form.\nA simple way to do this is to consider statements of the form \n\\texttt{forall alpha: T1, nf alpha -> $P$ alpha}, where $P$ is a predicate over type \\texttt{T1}, as in the following lemma\\footnote{Ordinal addition is formally defined a little later (page~\\pageref{sect:infix-plus-T1}).}.\n\n\\input{movies/snippets/T1/plusIsZero}\n\n\\vspace{4pt}\n\nBut this style leads to clumsy statements, and generates too many subgoals in interactive proofs (although they are often solved with \\texttt{auto} or \\texttt{eauto}).\n\nOne may encapsulate conditions of the form \\texttt{(nf $\\alpha$)} in\nthe most frequently used predicates. For instance, we introduce the restriction of \\texttt{lt} to terms in normal form, and provide a handy notation for this restriction.\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Prelude.Restriction.html}{hydras.Prelude.Restriction}}\n\n\\input{movies/snippets/Restriction/restrictionDef}\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#LT}{Epsilon0.T1}}\n\n\\input{movies/snippets/T1/LTDef}\n\n\\label{Predicates:LT-T1}\n\nFor instance, in the following lemma, the condition that $\\alpha$ is in normal form is included in the condition $\\alpha< 1$.\n\n\\input{movies/snippets/T1/LTOne}\n\n\\subsubsection{A sigma-type for \\texorpdfstring{$\\epsilon_0$}{epsilon0}}\n\nAs we noticed in Sect.~\\ref{sect:t1-nf}, the type \\texttt{T1} is not a correct ordinal notation, since it contains terms that are not in Cantor normal form. In certain contexts (for instance in Sections~\\ref{sect:L-equations}, \\ref{sect:hardy},\nand \\ref{sect:wainer}), we need to define total recursive functions on well-formed ordinal terms less than $\\epsilon_0$, using the \\texttt{Equations} plug-in~\\cite{sozeau:hal-01671777}.\nIn order to define a type whose inhabitants represent just ordinals, we build a type gathering a term of type \\texttt{T1} and a proof that this term is in normal form.\n\n\\label{sect:E0-def}\n\\label{types:E0}\n\\index{hydras}{Library Epsilon0!Types!E0}\n\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.E0.html}{Epsilon0.E0}}\n\n\\input{movies/snippets/E0/E0Def}\n\nMany constructs (types, predicates, functions, notations, etc.) on type \\texttt{T1} are adapted to \\texttt{E0}.\n\nFirst, we declare a notation scope for \\texttt{E0}.\n\n\\input{movies/snippets/E0/E0Scope}\n\nThen we redefine the predicates of comparison.\n\n\\label{Predicates:Lt-E0}\n\n\\input{movies/snippets/E0/LtLeDef}\n\nEquality in \\texttt{E0} is just Leibniz equality.
 Note that, since \\texttt{nf} is\ndefined by a Boolean function, for any term $\\alpha:\\texttt{T1}$, there exists at most one proof of \\texttt{nf $\\alpha$}; thus, two ordinals of type \\texttt{E0} are\nequal if and only if their projections to \\texttt{T1} are equal (see also Sect.~\\vref{sect:eq-proof-unicity}).\n\n\\vspace{4pt}\n\n\\index{coq}{Unicity of equality proofs}\n\n\\input{movies/snippets/E0/nfProofUnicity}\n\n\\vspace{4pt}\n\n\\input{movies/snippets/E0/E0EqIff}\n\n\\vspace{4pt}\n\nIn order to upgrade constants and functions from type \\texttt{T1} to \\texttt{E0}, we have to prove that\nthe terms they build are in normal form.\nFor instance, let us represent the ordinals $0$ and $\\omega$ as instances of the class \\texttt{E0}.\n\n\\vspace{4pt}\n\n\\label{sect:omega-T1}\n\n\\input{movies/snippets/E0/ZeroOmega}\n\n\\subsection{Syntactic definition of limit and successor ordinals}\n\nPattern matching and structural recursion allow us to define Boolean characterizations of successor and limit ordinals.\n\n\\vspace{4pt}\n\\noindent\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#succb}{Epsilon0.T1}}\n\n\\input{movies/snippets/T1/succbLimitb}\n\nThe correctness of these definitions with respect to the mathematical notions of\nlimit and successor ordinals is established through several lemmas. For instance,\nLemma \\texttt{canonS\\_limit\\_lub}, page~\\pageref{lemma:canonS-limit}, shows that\nif $\\alpha$ is (syntactically) a limit ordinal, then it is the least upper bound of\na strictly increasing sequence of ordinals.\n\n\\subsection{Arithmetic on \\texorpdfstring{$\\epsilon_0$}{epsilon0}}\n\\subsubsection{Successor}\n\n\\index{hydras}{Library Epsilon0!Functions!succ}\n\nThe successor of any ordinal $\\alpha< \\epsilon_0$ is defined by structural \nrecursion on its Cantor normal form.\n\n\\label{Functions:succ-T1}\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.T1.html\\#succ}{Epsilon0.T1}}\n\n\\input{movies/snippets/T1/succDef}\n\nThe following lemma establishes the connection between the function\n\\texttt{succ} and the Boolean predicate \\texttt{succb}.\n\n\\vspace{4pt}\n\n\\input{movies/snippets/T1/succbIff}\n\n\\index{hydras}{Exercises}\n\\begin{exercise}\nProve in \\coq{} that for any ordinal $0< \\alpha<\\epsilon_0$, $\\alpha$ is a limit if \nand only if for all $\\beta<\\alpha$, the interval $[\\beta,\\alpha)$ is\ninfinite.\n\n\\emph{You may start this exercise with the file\n\\href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/Limit_Infinity.v}{exercises/ordinals/Limit\\_Infinity.v}.}\n\\end{exercise}\n\n\\subsubsection{Addition and multiplication}\n\nOrdinal addition and multiplication are also defined by structural recursion over the type \\texttt{T1}. Please note that they use the \\texttt{compare} function on some subterms of their arguments.\n\n\\label{sect:infix-plus-T1}\n\n\\input{movies/snippets/T1/plusDef}\n\n\\input{movies/snippets/T1/multDef}\n\n\\subsubsection{Examples}\n\nThe following examples are instances of \\emph{proofs by computation}. Please note that addition and multiplication on \\texttt{T1}\nare not commutative. Moreover, both operations fail to be strictly monotonous in their first argument.
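\n\nFor instance, in standard ordinal arithmetic,\n\\[ 1+\\omega \\,=\\, 2+\\omega \\,=\\, \\omega \\,<\\, \\omega + 1,\n\\qquad\n1\\times\\omega \\,=\\, 2\\times\\omega \\,=\\, \\omega \\,<\\, \\omega\\times 2, \\]\nfacts which the following snippets reflect on \\texttt{T1}.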
\n\n\\input{movies/snippets/T1/plusMultExamples}\n\n\\input{movies/snippets/T1/notMono}\n\nThe function \\texttt{succ} is related to addition through the following lemma:\n\n\\input{movies/snippets/T1/succIsPlusOne}\n\n\\subsubsection{Arithmetic on type \\texttt{E0}}\n\nWe define an addition on type \\texttt{E0}, since the sum of two terms in normal form is in normal form too.\n\n\\input{movies/snippets/T1/plusNf}\n\n\\input{movies/snippets/E0/plusE0}\n\\input{movies/snippets/E0/CheckPlus}\n\n\\begin{remark}\nThroughout this development, two representations of ordinals co-exist: ordinal terms (type \\texttt{T1}, notation scope \\texttt{t1\\_scope}, for reasoning on the tree-structure of Cantor normal forms), and ordinal terms \\emph{known to be in normal form} (type \\texttt{E0}, notation scope \\texttt{E0\\_scope}). Looking at the contexts displayed by \\coq{} protects you from any risk of confusion.\n\\end{remark}\n\n\\index{hydras}{Exercises} \n\\begin{exercise}\nProve that for any ordinal $\\alpha:\\texttt{E0}$, \n$\\omega\\leq \\alpha$ if and only if, for any natural number $i$,\n$i+\\alpha=\\alpha$.\n\n\\emph{You may start this exercise with the file\n\\href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/ge_omega_iff.v}{exercises/ordinals/ge\\_omega\\_iff.v}.}\n\\end{exercise}\n\n\\section{Well-foundedness and transfinite induction}\n\n\\index{maths}{Transfinite induction}\n\n\\subsection{About well-foundedness}\n\\label{sec:orgheadline82}\nIn order to use \\texttt{T1} for proving termination results,\nwe need to prove that our order \\texttt{<} is well-founded. Then we will get \\emph{transfinite induction} for free.\n\nThe proof of well-foundedness of the strict order $<$ on Cantor normal forms is already \navailable in the Cantor contribution by Castéran and Contejean~\\cite{CantorContrib}. That proof relies on a library on recursive path orderings written by\nE. Contejean. We present here a direct proof of the same result, which does not require any knowledge of r.p.o.s.\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nProve that the \\emph{total} order \\texttt{lt} on \\texttt{T1} is not well-founded. \n\\textbf{Hint:} You will have to build a counter-example with terms of type \\texttt{T1}\nwhich are not in Cantor normal form.\n\n\\emph{You may start this exercise with the file\n\\href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/T1_ltNotWf.v}{exercises/ordinals/T1\\_ltNotWf.v}.}\n\\end{exercise}\n\n\\subsubsection{A first attempt}\n\\label{sec:orgheadline77}\n\\index{coq}{Well-founded induction}\n\nIt is natural to try to prove by structural induction over \\texttt{T1} \nthat every term in normal form is accessible through \\texttt{LT}.\n\nUnfortunately, it won't work. Let us consider some well-formed term\n$\\alpha=\\texttt{ocons $\\beta\\;n\\;\\gamma$}$, and assume that \\(\\beta\\) and \\(\\gamma\\) are accessible\nthrough \\texttt{LT}. For proving the accessibility of $\\alpha$, we have to consider\nany well-formed term \\(\\delta\\) such that \\(\\delta<\\alpha\\). \nBut nothing guarantees that \\(\\delta\\) is strictly less than \\(\\beta\\) or \\(\\gamma\\), so we cannot use the induction hypotheses on \\(\\beta\\) or \\(\\gamma\\).\n\n\\input{movies/snippets/T1/wfLTBada}\n\nThe problem comes from the hypothesis \\texttt{Hdelta}.
 It does not prevent \\(\\delta\\) from being bigger than \\(\\beta\\) or\n\\(\\gamma\\);\nfor instance \\(\\delta\\) may be of the form\n\\texttt{ocons $\\beta$ $p'$ $\\gamma'$},\nwhere \\(p' < n\\).\nThus, the induction hypotheses \\texttt{IHbeta} and \\texttt{IHgamma} are useless for finishing our proof.\n\n\\input{movies/snippets/T1/wfLTBadz}\n\n\\subsubsection{Using a stronger inductive predicate}\n\\label{sec:orgheadline78}\nInstead of trying to prove directly that any ordinal term \\(\\alpha\\) in normal form is accessible\nthrough \\texttt{LT}, we propose to show first that any well-formed \nterm of the form \\(\\omega^\\alpha\\times(n+1)+\\beta\\) is accessible (which is a stronger result).\n\n\\input{movies/snippets/T1/AccStrongDef}\n\nThe following lemma is an application of the strict inequality \n$\\alpha < \\omega ^\\alpha$. If $\\alpha$ is strongly accessible, then, by definition,\n$\\omega^\\alpha$ is accessible, thus $\\alpha$ is \\emph{a fortiori} accessible.\n\n\\input{movies/snippets/T1/AccStrongStronger}\n\nThus, it remains to prove that every ordinal strictly less than $\\epsilon_0$\nis strongly accessible.\n\n\\label{sec:orgheadline81}\n\\label{proof-wf-epsilon0}\n\\paragraph{A helper}\n\\label{sec:orgheadline79}\n\nFirst, we prove that, for any \\texttt{LT}-accessible term $\\alpha$, $\\alpha$ is \nstrongly accessible too (\\emph{i.e.} any well-formed\nterm (\\texttt{ocons $\\alpha$ $n$ $\\beta$}) is accessible).\n\nThe proof is structured as an induction on $\\alpha$'s accessibility. Let us consider\nany accessible term $\\alpha$.\n\n\\input{movies/snippets/T1/AccImpAccStrong}\n\nLet \\texttt{n:nat} and \\texttt{beta:T1} be such that (\\texttt{ocons alpha n beta}) is in normal form. \nWe first prove that \\texttt{beta} is accessible, which allows us to prove, by well-founded induction on \\texttt{beta} \nand natural induction on \\texttt{n}, that (\\texttt{ocons alpha n beta}) is accessible.\nThe proof, quite long, can be consulted in \\href{../theories/html/hydras.Epsilon0.T1.html}{Epsilon0.T1}.\n\n\\paragraph{Accessibility of any well-formed ordinal term}\n\\label{sec:orgheadline80}\n\nOur goal is still to prove accessibility of any well-formed ordinal term.\nThanks to our previous lemmas, we are almost done (by a simple structural induction!).\n\n\\input{movies/snippets/T1/nfAcc}\n\n\\input{movies/snippets/T1/T1Wf}\n\n\\index{maths}{Transfinite induction}\n\n\\input{movies/snippets/T1/transfiniteRecursor}\n\nThe following tactic starts a proof by transfinite induction on any ordinal $\\alpha<\\epsilon_0$.\n\n\\input{movies/snippets/T1/transfiniteInduction}\n\n\\begin{remark}\n\\label{remark:a3pat}\nThe alternate proof of well-foundedness using \\'Evelyne Contejean's work on\nrecursive path ordering~\\cite{DershowitzRPO, a3pat} is available in the\nlibrary \\href{../theories/html/hydras.Epsilon0.Epsilon0rpo.html}{Epsilon0.Epsilon0rpo}.\n\\end{remark}\n\n\\subsection{An ordinal notation for \\texorpdfstring{$\\epsilon_0$}{epsilon0}}\n\nWe build an instance of \\texttt{ON}, and prove its correctness w.r.t.
 Schütte's model.\n\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.E0.html}{Epsilon0.E0}}\n\n\\input{movies/snippets/E0/InstanceEpsilon0}\n\n\\label{instance-epsilon0}\n\n\\emph{From Module~\\href{../theories/html/hydras.Schutte.Correctness_E0.html}{Schutte.Correctness\\_E0}}\n\n\\input{movies/snippets/Correctness_E0/injectDef}\n\n\\input{movies/snippets/Correctness_E0/Epsilon0Correct}\n\n\\index{hydras}{Projects}\n\\begin{project}\n\\emph{This exercise is a continuation of Project~\\vref{exo:ON-mult}.}\nUse \\texttt{ON\\_mult} to define an ordinal notation \\texttt{Omega2} for $\\omega^2=\\omega\\times\\omega$.\n\nProve that \\texttt{Omega2} is a sub-notation of \\texttt{Epsilon0}.\n\nDefine on \\texttt{Omega2} an addition compatible with the addition on \\texttt{Epsilon0}.\n\n\\textbf{Hint:} You may use the following definition (in \n\\href{../theories/html/hydras.OrdinalNotations.ON_Generic.html}{OrdinalNotations.ON\\_Generic}).\n\n\\input{movies/snippets/ON_Generic/SubONSameOp}\n\n\\end{project}\n\n\\index{hydras}{Projects}\n\\begin{project}\nThe class \\texttt{ON} of ordinal notations was defined long after this \nchapter, and is not yet used in the development of the type \\texttt{E0}.\nA better integration of both notions should simplify the development on ordinals in Cantor normal form. This integration is planned for future versions.\n\\end{project}\n\n\\section{A refinement of \\texttt{E0}: an ordinal notation for \\texorpdfstring{$\\omega^\\omega$}{omega\\^omega}}\n\nIn Module \\href{https://github.com/coq-community/hydra-battles/blob/master/theories/ordinals/OrdinalNotations/OmegaOmega.v}{theories/ordinals/OrdinalNotations/OmegaOmega.v},\nwe represent ordinals below $\\omega^\\omega$ by lists of pairs of natural numbers (with the same coefficient shift as in \\texttt{T1}).\nFor instance, the ordinal $\\omega^4\\times 10 + \\omega^3 + \\omega+ 5$ is represented by the list \\texttt{(4,9)::(3,0)::(1,0)::(0,4)::nil}.\n\nThe usual operations (\\texttt{succ}, \\texttt{+}, \\texttt{*}) are simple variants of the same operations on \\texttt{T1}.\n\nWe establish this representation as a \\emph{refinement} of the data types we used to represent ordinals less than $\\epsilon_0$. Thus, many properties of this ordinal notation, like the well-foundedness of $<$ and the associativity of $+$, have very short proofs.\n\n\\section{A variant for hydra battles}\n\nIn order to prove the termination of any hydra battle, we try to define a variant mapping hydras to ordinals strictly less than $\\epsilon_0$.\nTo make such a variant easy to define (for instance by structural recursion), we introduce a variant of addition which, contrary to\n$+$, is commutative and strictly monotonous in both of its arguments. This last property makes it possible to prove that our function is \ntruly a variant for hydra battles (in Sect.~\\vref{sect:variant-decr}).\n\n\\subsection{Natural sum (a.k.a. Hessenberg's sum)}\n\\label{sec:orgheadline87}\n\\label{hydra-variant}\n\nNatural sum (Hessenberg's sum) is a commutative and monotonous version of\naddition.
 It is used as an auxiliary operation for defining variants\nfor hydra battles, where Hercules is allowed to chop off any head of the hydra.\n\nIn the literature, the natural sum of ordinals \\(\\alpha\\) and \\(\\beta\\)\nis often denoted by \\(\\alpha \\# \\beta\\) or \\(\\alpha \\oplus \\beta\\).\nThus, we called the associated \\emph{Coq} function \\texttt{oplus}.\n\n\\subsubsection{Definition of \\texttt{oplus}}\n\\label{sec:orgheadline84}\n\nThe definition of \\texttt{oplus} is recursive in both of its \narguments and uses the same pattern as the \\texttt{merge} function on lists from the library\n\\texttt{Coq.Sorting.Mergesort}.\n\n\\begin{enumerate}\n\\item Define a nested recursive function, using the \\texttt{Fix} \nconstruct\n\n\\item Build a principle of induction dedicated to \\texttt{oplus}\n\n\\item Establish equations associated with each case of the definition.\n\\end{enumerate}\n\n\\paragraph{Nested recursive definition}\n\\label{sec:orgheadline83}\n\nThe following definition is composed of \n\\begin{itemize}\n\\item A main function \\texttt{oplus}, structurally recursive in its \nfirst argument \\texttt{alpha}\n\\item An auxiliary function \\texttt{oplus\\_aux} within the scope of \\texttt{alpha},\nstructurally recursive in its argument \\texttt{beta}; \\texttt{oplus\\_aux beta} \nis supposed to compute \\texttt{oplus alpha beta}.\n\\end{itemize}\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Epsilon0.Hessenberg.html\\#oplus}{Epsilon0.Hessenberg}}\n\n\\label{sect:infix-oplus}\n\n\\input{movies/snippets/Hessenberg/oplusDef}\n\nThe reader will note that each recursive call of the functions\n\\texttt{oplus} and \\texttt{oplus\\_aux} satisfies \\emph{Coq}'s constraints\non recursive definitions: the function \\texttt{oplus} is recursively called on a sub-term of its first argument,\nand \\texttt{oplus\\_aux} on a sub-term of its unique argument.\nThus, \\texttt{oplus}'s definition is accepted by \\coq{} as a structurally recursive function.
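\n\nFor instance, in standard ordinal arithmetic, the natural sum merges the two Cantor normal forms instead of absorbing the smaller exponents:\n\\[ 1 \\oplus \\omega \\,=\\, \\omega \\oplus 1 \\,=\\, \\omega + 1,\n\\qquad\\mbox{whereas}\\qquad 1 + \\omega \\,=\\, \\omega. \\]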
They can be proved almost immediately.\n\n    \\input{movies/snippets/Hessenberg/oplusNeutral}\n    \n\n\n    % \\subsubsection{A hand-made induction principle}\n    % \\label{sec:orgheadline85}\n\n    % \\index{Coq!Commands!Functional Scheme}\n\n    % \\emph{Coq} contains a command  \\texttt{Functional Scheme} that \n    % generates induction principles which correspond to recursive functions.\n    % Unfortunately, the current version ( \\texttt{8.11.0} ) doesn't work on \\texttt{oplus},\n    % probably because of the inner \\texttt{Fix}.\n\n    % \\begin{Coqsrc}\n    % Functional Scheme oplus_ind := Induction for oplus Sort Prop.\n    % \\end{Coqsrc}\n\n    % \\begin{Coqanswer}\n    % Error: Anomaly \"todo.\" Please report at http://coq.inria.fr/bugs/.\n    % \\end{Coqanswer}\n\n\n    % Fortunately, it's a good exercise for a semi-experienced user, to write\n    % her/him-self induction principles similar to the ones returned by\n    % \\texttt{Functional Scheme}.\n\n    % \\begin{itemize}\n    % \\item First, we choose to write a version for sort \\texttt{Type}, since versions\n    % for sorts \\texttt{Prop} and \\texttt{Set} can be easily derived from\n    % the former one. According to \\emph{Coq}'s naming politics, we will call our \n    % principle \\texttt{oplus\\_rect}\n\n    % \\item The conclusion of \\texttt{oplus\\_rect} will be (\\texttt{$P$ a b (oplus a b)}),\n    % where $P$ is an arbitrary function of type \n    % \\texttt{T1 -> T1 -> T1 -> Type}\n\n    % \\item The premises of \\texttt{oplus\\_rect} will describe how to build an induction \n    % on the graph of \\texttt{oplus}.\n    % \\end{itemize}\n\n    % We are now ready to state and prove \\texttt{oplus\\_rect}, and the reader\n    % will note that the statement is longer than the proof script itself,\n    % which is a standard proof by induction, simplification and case-analysis \n    % that follows  \\texttt{oplus}'s definition.\n\n    % We associate also a tactic to the application of \\texttt{oplus\\_rect}.\n\n    % \\begin{Coqsrc}\n    %  Lemma oplus_rect:\n    %       forall P: T1 -> T1 -> T1 -> Type, \n    %         (forall a:T1, P zero a a) ->\n    %         (forall a: T1, P a zero a) ->\n    %         (forall a1 n1 b1 a2 n2 b2 o,\n    %            compare a1 a2 = Gt ->\n    %            P b1 (ocons a2 n2 b2) o ->\n    %            P (ocons a1 n1 b1) (ocons a2 n2 b2)\n    %              (ocons a1 n1 o)) ->\n    %         (forall a1 n1 b1 a2 n2 b2 o,\n    %            compare a1 a2 = Lt ->\n    %            P (ocons a1 n1 b1) b2 o ->\n    %            P (ocons a1 n1 b1) (ocons a2 n2 b2) \n    %            (ocons a2 n2 o)) ->\n    %         (forall a1 n1 b1 a2 n2 b2 o,\n    %            compare a1 a2 = Eq ->\n    %            P b1 b2 o ->\n    %           P (ocons a1 n1 b1) (ocons a2 n2 b2)\n    %             (ocons a1 (S (n1 + n2)%nat) o)) ->\n    %          forall a b, P a b (oplus a b).\n    % Proof with auto.\n    %    induction a.\n    %    -    intro; simpl; destruct b;auto.\n    %    -   induction b.\n    %        + apply X0.\n    %        + case_eq (compare a1 b1).\n    %          * intro Comp; unfold oplus; rewrite Comp.\n    %            cbn; apply X3 ...\n    %          * intro Comp; cbn; rewrite Comp; apply X2...\n    %          * intro Comp; cbn; rewrite Comp ...\n    %  Defined.\n\n\n    % Ltac oplus_induction a b:= pattern (oplus a b); apply oplus_rect.\n    % \\end{Coqsrc}\n\n    % \\index{Exercises}\n\n    % \\begin{exercise}\n    % The induction principle \\texttt{oplus\\_rect} is still 
% unused in our development.
% Please build some nice examples of application.
% \end{exercise}

\index{hydras}{Projects}
\begin{project}
Compare \texttt{oplus}'s definition (with inner fixpoint) with other possibilities
(\texttt{coq-equations}, \texttt{Function}, etc.).
\end{project}
\subsection{More theorems on Hessenberg's sum}

We need to prove some properties of $\oplus$, particularly about
its relation with the order $<$ on \texttt{T1}.

\subsubsection{Commutativity, associativity}

We prove the commutativity of $\oplus$ in two steps.

First, we prove by transfinite induction on $\alpha$ that the restriction of $\oplus$ to the
interval $[0,\alpha)$ is commutative.

\index{maths}{Transfinite induction}

\input{movies/snippets/Hessenberg/oplusComm0}


Then, we infer $\oplus$'s commutativity for any pair of ordinals:
let $\alpha$ and $\beta$ be two ordinals strictly less than $\epsilon_0$. Both ordinals $\alpha$ and $\beta$ are
strictly less than $\textrm{max}(\alpha,\beta)+1$.
Thus, it suffices to apply the lemma \coqsimple{oplus\_comm\_0}.

\input{movies/snippets/Hessenberg/oplusComm}


Associativity of Hessenberg's sum is proved the same way.

\input{movies/snippets/Hessenberg/oplusAssoc0}
\input{movies/snippets/Hessenberg/oplusAssoc}


\subsubsection{Monotonicity}

Finally, we prove that $\oplus$ is strictly monotone in both of its arguments.

\input{movies/snippets/Hessenberg/oplusMono}

\index{hydras}{Projects}

\begin{project}
The library \texttt{Hessenberg} is rather long (both proof scripts and compilation time).
Please try to make it simpler and more efficient!
Thanks!
\end{project}

\subsection{A termination measure for hydra battles}

\label{sec:hydra-measure}

Let us define a measure from type \texttt{Hydra} into \texttt{T1}.


\vspace{4pt}
\emph{From Module~\href{../theories/html/hydras.Hydra.Hydra_Termination.html\#m}{Hydra.Hydra\_Termination}}

\input{movies/snippets/Hydra_Termination/mDef}
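Although we shall not paraphrase the snippet here, the following is a plausible reading of such a measure, in our own notation (an assumption to be checked against the actual definition in \texttt{Hydra.Hydra\_Termination}): for a hydra $h$ whose root has daughters $h_1,\dots,h_k$,
\[ m(h) \;=\; \omega^{m(h_1)} \oplus \omega^{m(h_2)} \oplus \dots \oplus \omega^{m(h_k)}, \]
the empty natural sum being $0$. The commutativity of $\oplus$ makes such a value independent of the order of the daughters, and its strict monotonicity is the key property for proving that the measure decreases at each round.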
First, we prove that the measure $m(h)$ of any hydra $h$ is a well-formed ordinal term of type \texttt{T1}.

\input{movies/snippets/Hydra_Termination/mNf}

For proving the termination of all hydra battles, we have to prove that
\texttt{m} is a variant. First, a few technical lemmas follow the decomposition of \texttt{round} into several relations.
Then the lemma \texttt{round\_decr} gathers all the cases.

\label{sect:variant-decr}

\input{movies/snippets/Hydra_Termination/S0Decr}
\input{movies/snippets/Hydra_Termination/R1Decr}
\input{movies/snippets/Hydra_Termination/S1Decr}
\input{movies/snippets/Hydra_Termination/R2Decr}
\input{movies/snippets/Hydra_Termination/RoundDecr}

Finally, we prove termination of all (free) battles.

\label{thm:every-battle-terminates}

\input{movies/snippets/Hydra_Termination/FinalThm}

\section*{Conclusion}

Let us recall two results we have proved so far.
\begin{itemize}
\item There exists a strictly decreasing variant which maps \texttt{Hydra} into
the segment $[0,\epsilon_0)$ for proving the termination of any hydra battle.
\item There exists \emph{no} such variant from \texttt{Hydra} into
$[0,\omega^2)$, \emph{a fortiori} into $[0,\omega)$.
\end{itemize}

So, a natural question is ``Does there exist any strictly decreasing variant mapping
type \texttt{Hydra} into some interval $[0,\alpha)$ (where $\alpha <\epsilon_0$) for proving the termination of all hydra battles?''. The next chapter is dedicated to a formal proof that there exists no such $\alpha$, even if we consider a restriction to the set of ``standard'' battles.

%\include{epsilon0}

%\include{impossibility-proofs}

%-------------------------------------------------------------------

\chapter[The Ketonen-Solovay machinery]{Accessibility inside \texorpdfstring{$\epsilon_0$}{Epsilon0}: The Ketonen-Solovay Machinery\label{ks-chapter}}
\label{chap:ketonen}
\index{maths}{Ordinal numbers!Ketonen-Solovay machinery}

\section{Introduction}
The reader may think that our proof of termination in the previous chapter requires a lot of mathematical tools and may be too complex. So, the question is ``Is there any simpler proof?''

In their article~\cite{KP82}, Kirby and Paris show that this result cannot be proved in Peano arithmetic. Their proof uses some knowledge about model theory and non-standard models of Peano arithmetic. In this chapter, we focus on a specific class of proofs of termination of hydra battles: the construction of some variant mapping the type \texttt{Hydra} into a given initial segment of ordinals.
Our proof relies only on the Calculus of Inductive Constructions and is a natural complement of the results proven in the previous chapters.

\begin{itemize}
\item There is no variant mapping the type \texttt{Hydra} into the interval $[0,\omega^2)$ (section~\vref{omega2-case}), and \emph{a fortiori} into
$[0,\omega)$ (section~\vref{omega-case}).

\item There exists a variant which maps the type \texttt{Hydra} into the
interval $[0,\epsilon_0)$ (theorem \texttt{every\_battle\_terminates}, in section~\vref{thm:every-battle-terminates}).
\end{itemize}


Thus, a very natural question is the following:
\begin{quote}
``Is there any variant from
\texttt{Hydra} into some interval $[0,\mu)$, where $\mu<\epsilon_0$, for proving the termination of all hydra battles?''
\end{quote}

We prove in \coq{} the following result:

\begin{quote}
There is no variant for proving the termination of all hydra battles
from \texttt{Hydra} into the interval $[0,\mu)$, where
$\mu< \epsilon_0$.
The same impossibility holds even if we consider only standard battles (with the successive replication factors $0,1,2,\dots,t,t+1,\dots$).
\end{quote}

Our proofs are constructive and require no axioms: they are closed terms of the CIC, and are mainly composed of function definitions and proofs of properties of these functions.
They share much theoretical material with Kirby and Paris's, although they use no knowledge of Peano arithmetic or model theory. The combinatorial arguments we use and implement
come from
an article by J.~Ketonen and R.~Solovay~\cite{KS81}, already cited in the work
by L.~Kirby and J.~Paris.% on the termination of Goodstein sequences and hydra battles~\cite{KP82}.
Section~2 of this article, ``A hierarchy of provably recursive functions'', contains a systematic study of \emph{canonical sequences}, which are closely related to
rounds of hydra battles.
Nevertheless, our proofs have the same global structure as the simple proofs described in
sections~\vref{omega-case} and \vref{omega2-case}.
We invite the reader to compare the three proofs step by step, lemma by lemma.

\section{Canonical Sequences}
\label{ketonen-solovay-sect}
\index{maths}{Ordinal numbers!Canonical sequences}

Canonical sequences are functions that associate an ordinal $\canonseq{\alpha}{i}$ to every ordinal $\alpha<\epsilon_0$ and positive integer $i$. They satisfy several nice properties:

\index{maths}{Transfinite induction}
\begin{itemize}
\item If $\alpha\not=0$, then $\canonseq{\alpha}{i}<\alpha$.
Thus canonical sequences can be used for proofs by transfinite induction, or for function definitions by transfinite recursion.
\item If $\lambda$ is a limit ordinal, then $\lambda$ is the least upper bound of the set
$\{\canonseq{\lambda}{i}\;|\,i\in\mathbb{N}_1\}$.


\item If $\beta<\alpha<\epsilon_0$, then there is a ``path'' from $\alpha$ to $\beta$, \emph{i.e.} a
sequence $\alpha_0=\alpha, \alpha_1, \dots, \alpha_n=\beta$, where for every $k<n$, there exists some $i_k$ such that $\alpha_{k+1}=\canonseq{\alpha_k}{i_k}$.
\item Canonical sequences correspond tightly to rounds of hydra battles: if $\alpha\not=0$,
then $\iota(\alpha)$ is transformed into $\iota(\canonseq{\alpha}{i+1})$ in one round with
the replication factor $i$ (Lemma \href{../theories/html/hydras.Hydra.O2H.html\#canonS_iota_i}{Hydra.O2H.canonS\_iota\_i}).
\item From the two previous properties, we infer that whenever $\beta<\alpha<\epsilon_0$, there exists a (free) battle from $\iota(\alpha)$ to $\iota(\beta)$.
\end{itemize}

\begin{remark}
In~\cite{KS81}, canonical sequences are defined for any ordinal $\alpha <\epsilon_0$,
by stating that if $\alpha$ is a successor ordinal $\beta+1$, the sequence associated with
$\alpha$ is simply the constant sequence whose terms are equal to $\beta$.
Likewise, the canonical sequence of $0$ maps any natural number to $0$.

This convention allows us to define $\canonseq{\alpha}{i}$ as a total function of the ordinal $\alpha$ and the natural number $i$.
\end{remark}


First, let us recall how canonical sequences are defined in~\cite{KS81}. For efficiency's sake, we decided not to implement K.\&S.'s definitions directly, but to define in \gallina{} simply typed, structurally recursive functions which share the abstract properties used in the mathematical proofs\footnote{With a small difference: the $0$-th term of the canonical sequence is not the same in our development as in~\cite{KS81}.}.


\subsubsection{Mathematical definition of canonical sequences}

In~\cite{KS81} the definition of $\canonseq{\alpha}{i}$ is based on the following remark:
\begin{quote}
Any non-zero ordinal $\alpha$ can be decomposed in a unique way as the product
$\omega^\beta\times (\gamma+1)$.
\end{quote}

Thus the $\canonseq{\alpha}{i}$\,s are defined in terms of this decomposition:
\begin{definition}[Canonical sequences: mathematical definition]
\label{def:canonseq-math}
\end{definition}
\begin{mathframe}
\begin{itemize}
\item Let $\lambda<\epsilon_0$ be a limit ordinal.

\begin{itemize}
\item If $\lambda=\omega^{\alpha+1}\times (\beta+1)$, then
$\canonseq{\lambda}{i}= \omega^{\alpha+1}\times\beta + \omega^\alpha \times i$
\item If $\lambda=\omega^{\gamma}\times (\beta+1)$, where $\gamma<\lambda$ is a limit ordinal, then
$\canonseq{\lambda}{i}=\omega^{\gamma}\times \beta + \omega^{\canonseq{\gamma}{i}}$
\end{itemize}

\item For successor ordinals, we have $\canonseq{\alpha+1}{i}= \alpha$

\item Finally, $\canonseq{0}{i}= 0$.
\end{itemize}
\end{mathframe}
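For instance, $\omega^\omega=\omega^{\omega}\times(0+1)$, where the exponent $\omega$ is a limit ordinal; by the second clause, $\canonseq{\omega^\omega}{i}=\omega^{\canonseq{\omega}{i}}$. Since $\omega=\omega^{0+1}\times(0+1)$, the first clause yields $\canonseq{\omega}{i}=\omega^{0}\times i=i$. Hence $\canonseq{\omega^\omega}{i}=\omega^{i}$; we check this computation in \coq{} below.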
\subsubsection{Canonical sequences in Coq}
\index{hydras}{Library Epsilon0!Functions!canon}
\index{hydras}{Library Epsilon0!Functions!canonS}

Our definition may look more complex than the mathematical one, but it
uses plain structural recursion over the type \coqsimple{T1}. Thus, tactics like
\coqsimple{cbn}, \coqsimple{simpl}, \coqsimple{compute}, etc., are applicable.

\vspace{4pt}
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Canon.html\#canon}{Epsilon0.Canon}}

\label{Functions:canonS}
\label{Functions:canon}

\input{movies/snippets/Canon/canonDef}

In the present state of this library, the following specializations of \texttt{canon} are still used in some proofs or lemma statements.

\input{movies/snippets/Canon/CanonS0}



For instance, \coq's computing facilities allow us to verify the equalities\linebreak
\mathcolor{$\canonseq{\omega^\omega}{3} = \omega^3$} and
\mathcolor{$\canonseq{\omega^\omega*3}{42} = \omega^\omega*2 + \omega^{42}$}.

\input{movies/snippets/Canon/canonExamples}


% \index{hydras}{Projects}
% \begin{project}
% Many lemmas presented in this chapter were stated and proved before the introduction of
% the type class \texttt{ON} of ordinal notations, and in particular its  instance \texttt{Epsilon0}.
% Thus definitions and lemmas refer to the type \texttt{T1} of possibly not well-formed terms.
% This should be fixed in  a future version.
% \end{project}


\subsection{Basic properties of canonical sequences}

We did not try to prove that our definition truly implements Ketonen and Solovay's~\cite{KS81} canonical sequences. The most important point is that we were able to prove the
abstract properties of canonical sequences that are really used in our proof. The complete proofs are in the
module~\href{../theories/html/hydras.Epsilon0.Canon.html}{Epsilon0.Canon}.

For instance, the equality $\canonseq{\alpha+1}{i}=\alpha$ can be proved by structural induction on $\alpha$.

\input{movies/snippets/Canon/canonSucc}


\subsubsection{Canonical sequences and the order $<$}

\index{maths}{Transfinite induction}

We prove by transfinite induction over $\alpha$ that $\canonseq{\alpha}{i+1}$ is an ordinal strictly less than $\alpha$ (assuming $\alpha\not=0$). This property allows us to use the function \texttt{canonS} and its derivatives in function definitions by transfinite recursion.

\label{lemma:canonS_LT}

\input{movies/snippets/Canon/canonSLT}

\subsubsection{Limit ordinals are truly limits}
The following theorem states that any limit ordinal $\lambda<\epsilon_0$
is the limit of the sequence \showmath{\canonseq{\lambda}{i}\;(1\le i)}.


\vspace{4pt}

\emph{From Module~\href{../theories/html/hydras.Epsilon0.Canon.html\#canonS_limit_strong}{Epsilon0.Canon}}

\input{movies/snippets/Canon/canonSLimitStrong}

\label{lemma:canonS-limit}


Note the use of \coq's \texttt{sig} type in the theorem's statement, which
relates the boolean function \texttt{limitb} defined on the \texttt{T1} data-type with a constructive view of the limit of a sequence: for any $\beta<\lambda$, we can compute an item of the canonical sequence of $\lambda$ which is greater than $\beta$.
We can also state directly that $\lambda$ is a (strict) least upper bound of the elements of its canonical sequence.

\input{movies/snippets/Canon/canonSLimitLub}

\index{hydras}{Exercises}

\begin{exercise}\label{exo:simply-typed-canonseq}
Instead of using the \texttt{sig} type, define a simply typed function that, given two ordinals $\alpha$ and $\beta$, returns a natural number $i$ such that, if $\alpha$ is a limit ordinal and $\beta<\alpha$, then $\beta< \canonseq{\alpha}{i+1}$.
Of course, you will have to prove the correctness of your function.

\textbf{Hint:} You may add to your function a third argument, usually called \texttt{fuel}, in order to make it structurally
recursive (\emph{cf.} the post by Guillaume Melquiond on Coq-club (Dec 21, 2020):
\url{https://sympa.inria.fr/sympa/arc/coq-club/2020-12/msg00069.html}).
The type \texttt{fuel}, an alternative
to \texttt{nat}, is available in \href{../theories/html/hydras.Prelude.Fuel.html}{Prelude.Fuel}.
\index{coq}{Giving fuel to a long computation}

\end{exercise}
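To illustrate the hint, here is a toy instance of the fuel pattern, with our own hypothetical names (it is not a solution of the exercise): the extra argument, consumed at each recursive call, makes an otherwise unbounded search structurally recursive.

\begin{Coqsrc}
(* A toy illustration of the fuel pattern: search for the first
   i' >= i satisfying P, giving up when the fuel is exhausted. *)
Fixpoint find_from (fuel : nat) (P : nat -> bool) (i : nat)
  : option nat :=
  match fuel with
  | O => None                         (* out of fuel *)
  | S f => if P i then Some i
           else find_from f P (S i)   (* structurally smaller fuel *)
  end.
\end{Coqsrc}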

\section{Accessibility inside \texorpdfstring{$\epsilon_0$}{epsilon0}: paths}
\index{maths}{Ordinal numbers!Accessibility inside epsilon0}
\label{sect:pathes-intro}

Let us consider a kind of accessibility problem inside $\epsilon_0$: given two ordinals $\alpha$ and $\beta$, where $\beta<\alpha<\epsilon_0$, find a \emph{path} consisting of a finite sequence $\gamma_0=\alpha,\dots,\gamma_l=\beta$,
where, for every $i<l$, $\gamma_i \not= 0$\footnote{This condition allows us to ignore paths which end with a lot of useless $0$s.} and there exists some strictly positive integer $s_i$
such that $\gamma_{i+1}=\canonseq{\gamma_i}{s_i}$.

Let $s$ be the sequence $\langle s_0,s_1,\dots, s_{l-1} \rangle$. We describe the
existence of such a path with the notation $\alpha\xrightarrow [s]{}\beta$.

We also say that the considered path from $\alpha$ to $\beta$ \emph{starts at [index] $s_0$ and ends at $s_{l-1}$}.

For instance, we have $\omega\times 2 \xrightarrow[2,2,2,4,5]{}3$, through the
path $\langle\omega\times 2, \omega+2,\omega+1,\omega,4,3\rangle$: indeed,
$\canonseq{\omega\times 2}{2}=\omega+2$, $\canonseq{\omega+2}{2}=\omega+1$, $\canonseq{\omega+1}{2}=\omega$, $\canonseq{\omega}{4}=4$, and $\canonseq{4}{5}=3$.


\begin{remark}

Note that, given $\alpha$ and $\beta$, where $\beta < \alpha$, the sequence $s$ which leads from $\alpha$ to $\beta$ is not unique.

Indeed, if $\alpha$ is a limit ordinal, the first element of $s$ can be any integer $i$ such that $\beta<\canonseq{\alpha}{i}$, and if $\alpha$ is a successor ordinal,
then the sequence $s$ can start with any positive integer.


For instance, we also have
$\omega\times 2 \xrightarrow[3,4,5,6]{}\omega$.
Likewise,
$\omega\times 2 \xrightarrow[1,2,1,4]{} 0$ and
$\omega\times 2 \xrightarrow[3,3,3,3,3,3,3,3]{} 0$.
\end{remark}

\subsection{Formal definition}

\label{path-to-definition}

In \coq{}, the notion of path can be simply defined as an inductive predicate
parameterized by the destination $\beta$.

\vspace{4pt}
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Paths.html}{Epsilon0.Paths}}

\index{hydras}{Library Epsilon0!Predicates!path\_to}
\label{sect:path-to-def}

\input{movies/snippets/Paths/transitionDefs}

\input{movies/snippets/Paths/pathDef}

\begin{remark}
In the present version of our library, we use a variant \texttt{path\_toS} of
\texttt{path\_to}, where the proposition
(\texttt{path\_toS $\beta$ $s$ $\alpha$}) is equivalent to
(\texttt{path\_to $\beta$ (List.map S $s$) $\alpha$}). This variant is scheduled to be deprecated.
\end{remark}

\index{hydras}{Exercises}

\begin{exercise}
Write a tactic for solving goals of the form (\texttt{path\_to $\beta$ $s$ $\alpha$})
where $\alpha$, $\beta$ and $s$ are closed terms.
You should be able to solve the following goals automatically:

\begin{Coqsrc}
 path_to omega (2::2::2::nil) (omega * 2).

 path_to omega (3::4::5::6::nil) (omega * 2).

 path_to zero (interval 3 14) (omega * 2).

 path_to zero (repeat 3 8) (omega * 2).
\end{Coqsrc}

\end{exercise}



\subsection{Existence of a path}

\index{maths}{Transfinite induction}

By transfinite induction on $\alpha$, we prove that for any $\beta<\alpha$,
one can build a path from $\alpha$ to $\beta$ (in other words, $\beta$ is accessible from $\alpha$).

\input{movies/snippets/Paths/LTPathTo}



\noindent
From the lemma \texttt{canonS\_LT}~\vref{lemma:canonS_LT}, we can convert any path into an inequality on ordinals (by induction on paths).

\input{movies/snippets/Paths/pathToLT}

\index{hydras}{Exercises}

\begin{exercise}[continuation of exercise~\vref{exo:simply-typed-canonseq}]
Define a simply typed function for computing a path from $\alpha$ to $\beta$.
\end{exercise}

\subsection{Paths and hydra battles}
\label{KS-o2h}

In order to apply our knowledge about ordinal numbers less than $\epsilon_0$ to the study of hydra battles, we define an injection
from the interval $[0,\epsilon_0)$ into the type \texttt{Hydra}.

\vspace{4pt}

\emph{From Module~\href{../theories/html/hydras.Hydra.O2H.html}{Hydra.O2H}}

\input{movies/snippets/O2H/iotaDef}


For instance, Fig.~\ref{fig:iota-example} shows the image by $\iota$ of the ordinal \textcolor{black}{$\omega^{\omega+2}+\omega^\omega \times 2 + \omega + 1$}.

\begin{figure}[htb]
\centering
\begin{tikzpicture}[very thick, scale=0.3]
\node (foot) at (10,0) {$\bullet$};
\node (N1) at (2,2) {$\bullet$};
\node (N2) at (10,2) {$\bullet$};
\node (N22) at (7,2) {$\bullet$};
\node (N3) at (14,2) {$\bullet$};
\node (N4) at (18,2) {$\Smiley[2][green]$};
\node (N5) at (0,4) {$\bullet$};
\node (N6) at (2,5) {$\Smiley[2][green]$};
\node (N7) at (4,6) {$\Smiley[2][green]$};
\node (N88) at (7,4) {$\bullet$};
\node (N8) at (10,4) {$\bullet$};
\node (N9) at (14,6) {$\Smiley[2][green]$};
\node (N10) at (0,8) {$\Smiley[2][green]$};
\node (N11) at (10,7) {$\Smiley[2][green]$};
\node (N111) at (7,7) {$\Smiley[2][green]$};
\draw (foot) to [bend left=10] (N1);
\draw (foot) -- (N2);
\draw (foot) -- (N22);
\draw (foot) -- (N3);
\draw (foot) -- (N4);
\draw (N1) to  (N5);
\draw (N1) to   [bend left=10] (N6);
\draw (N1) to   [bend right=20] (N7);
\draw (N2) to  (N8);
\draw (N22) to  (N88);
\draw (N8) to  (N11);
\draw (N88) to  (N111);
\draw (N3) to  (N9);
\draw (N5) to  (N10);
\end{tikzpicture}
\caption{The hydra $\iota(\omega^{\omega+2}+\omega^\omega \times 2 + \omega + 1)$ \label{fig:iota-example}}

\end{figure}


The following lemma (proved in~\href{../theories/html/hydras.Hydra.O2H.html}{Hydra.O2H.v}) maps canonical sequences to rounds of hydra battles.


\label{lemma:canonS-iota}

\input{movies/snippets/O2H/canonSIota}

The next step of our development extends this relationship to
the order $<$ on $[0,\epsilon_0)$ on one side, and hydra battles on the other side.

\input{movies/snippets/O2H/pathToRoundPlus}

As a corollary, we are now able to transform any inequality $\beta<\alpha<\epsilon_0$ into a (free) battle.

\input{movies/snippets/O2H/LTToRoundPlus}


\section{A proof of impossibility}
$\\mu<\\epsilon_0$ for proving the termination   of all battles. The proof we are going to show is a proof by contradiction. It  can\n be considered as a generalization of the\nproofs described in  sections~\\vref{omega-case} and \\vref{omega2-case}.\n\n\n\nIn the module\n\\href{../theories/html/hydras.Hydra.Epsilon0_Needed_Generic.html}{Hydra.Epsilon0\\_Needed\\_Generic}, we assume there exists some variant $m$ bounded by some ordinal $\\mu<\\epsilon_0$. This part of the development is parameterized by some class $B$ of battles, which will be instantiated later to \\texttt{free} or \\texttt{standard}.\n\n\n\n\\input{movies/snippets/Hydra_Definitions/BoundedVariant}\n\nLet us assume there exists such a variant:\n\n\\input{movies/snippets/Epsilon0_Needed_Generic/theContext}\n\n\\label{remark:m-decrease}\n\\begin{remark}\n  The hypothesis \\texttt{m\\_decrease} is not provable  in general, but is satisfied by\nthe  \\texttt{free} and \\texttt{standard} kinds of battles. This trick allows to \n``factorize'' our proofs  of impossibility.\n\\end{remark}\n\n\\index{maths}{Transfinite induction}\n\nFirst, we prove that $m(\\iota(\\alpha))$ is always greater than or equal to $\\alpha$, by  transfinite induction over $\\alpha$.\n\n\n\\input{movies/snippets/Epsilon0_Needed_Generic/mGe0}\n\n\\begin{itemize}\n\\item If $\\alpha=0$, the inequality trivially holds\n\\item If $\\alpha$ is the successor of  some ordinal $\\beta$, the inequality $\\beta \\leq m(\\iota(\\beta))$ holds (by induction hypothesis). But the hydra $\\iota(\\alpha)$ is transformed in one round into \n$\\iota(\\beta)$, thus $m(\\iota(\\beta))<m(\\iota(\\alpha))$. Hence $\\beta<m(\\iota(\\alpha))$, which implies $\\alpha \\leq m(\\iota(\\alpha))$\n\\item If $\\alpha$ is a limit ordinal, then $\\alpha$ is the least upper bound of the set\nof all  the $\\canonseq{\\alpha}{i}$.  Thus, we have just to prove that $\\canonseq{\\alpha}{i}< m(\\iota(\\alpha))$ for any $i$. \n\\begin{itemize}\n\\item Let $i$ be some natural number.\nBy the induction hypothesis, we have $\\canonseq{\\alpha}{i} \\leq m(\\iota(\\canonseq{\\alpha}{i}))$. But the hydra $\\iota(\\alpha)$ is transformed into $\\iota(\\canonseq{\\alpha}{i})$ in one round, thus $m(\\iota(\\canonseq{\\alpha}{i})) < m(\\iota(\\alpha))$, by our hypothesis \\texttt{m\\_decrease}.\n\\end{itemize}\n\\end{itemize}\n\nPlease note that the impossibility proofs of \nsections~\\vref{omega-case} and \\vref{omega2-case} contain a similar lemma, also called \\texttt{m\\_ge}.\nWe are now able to build a counter-example.\n\n\\input{movies/snippets/Epsilon0_Needed_Generic/mGeGeneric}\n\nThe (big) rest of the proof is dedicated to prove formally the converse inequality \n\\texttt{m small\\_h t1< m big\\_h}. \n\n\n\n\\subsection{The case of free battles}\n\\label{sec:free-battles-case}\nLet us now consider that $B$ is instantiated to \\texttt{free} (which means that we are considering proofs of termination of \\emph{all} battles). 
The following lemmas are proved in Module~\href{../theories/html/hydras.Hydra.Epsilon0_Needed_Free.html}{Hydra.Epsilon0\_Needed\_Free}.
The case $B=\texttt{standard}$ is studied in section~\vref{std-case}.


\input{movies/snippets/Epsilon0_Needed_Free/theContext}


\begin{enumerate}
\item The following lemma is an application of \texttt{m\_ge\_generic}, since \texttt{free}
trivially satisfies the hypothesis \texttt{m\_decrease} (see page~\pageref{remark:m-decrease}).

\input{movies/snippets/Epsilon0_Needed_Free/mGe}


\item From the hypothesis \texttt{Hy}, we have \texttt{m big\_h t1< mu}.
\item By Lemma \texttt{LT\_to\_round\_plus}, we get a (free) battle from
\texttt{big\_h = iota mu} to \texttt{small\_h = iota (m big\_h)}.

\input{movies/snippets/Epsilon0_Needed_Free/bigToSmall}


\item From the hypotheses on $m$, we infer:
\input{movies/snippets/Epsilon0_Needed_Free/mLt}

\item From lemmas \texttt{m\_ge} and \texttt{m\_lt}, and the irreflexivity of $<$, we get a contradiction.
\input{movies/snippets/Epsilon0_Needed_Free/ImpossibilityFree}

\end{enumerate}

We have now proved that there exists no bounded variant for the class of free battles.


\input{movies/snippets/Epsilon0_Needed_Free/CheckDemo}

\section{The case of standard battles}
\label{sec:standard-intro}\label{std-case}
One may wonder whether our theorem also holds in the framework of standard battles. Unfortunately, its proof relies on the lemma \texttt{LT\_to\_round\_plus} of
Module~\href{../theories/html/hydras.Hydra.O2H.html}{Hydra.O2H}.

\input{movies/snippets/O2H/LTToRoundPlus}

This lemma builds a battle out of any inequality $\beta<\alpha$.
It is a straightforward application of \texttt{LT\_path\_to} of
Module~\href{../theories/html/hydras.Epsilon0.Paths.html}{Epsilon0.Paths}:

\input{movies/snippets/Paths/LTPathTo}

The sequence $s$, used to build the sequence of replication factors of the battle, depends on
$\beta$, so we cannot be sure that the generated battle is a genuine standard battle.


The solution to this issue comes once again from Ketonen and Solovay's article~\cite{KS81}. Instead of considering plain paths, i.e. sequences
$\alpha_0=\alpha,\alpha_1,\dots,\alpha_k=\beta$ where $\alpha_{j+1}$ is equal
to $\canonseq{\alpha_j}{i_j}$ for \emph{arbitrary} natural numbers $i_j$,
we consider various constraints on these sequences.
In particular, a path is called \emph{standard} if $i_{j+1} = i_j + 1$ for every $j<k-1$.
For instance, the path $\omega\times 2 \xrightarrow[3,4,5,6]{}\omega$ shown above is standard.
Such a path corresponds to a ``segment'' of some standard battle.
Please note that the vocabulary on paths is ours, but all the concepts really come from~\cite{KS81}.

In \coq{}, standard paths can be defined as follows.

\vspace{4pt}
\emph{From
Module~\href{../theories/html/hydras.Epsilon0.Paths.html}{Epsilon0.Paths}}

\input{movies/snippets/Paths/standardPathR}

In the mathematical text and figures, we shall use the notation
$\alpha \xrightarrow[i,j]{}\beta$ for the proposition
(\texttt{standard\_path $i$ $\alpha$ $j$ $\beta$}).
In~\cite{KS81} the notation is
$\alpha \xrightarrow[i]{*}\beta$
for
the proposition $\exists j, i<j \wedge \alpha \xrightarrow[i,j]{} \beta$.



Our goal is now to transform any inequality $\beta<\alpha<\epsilon_0$ into a standard path $\alpha \xrightarrow[i,j]{} \beta$ for some $i$ and $j$, then into a standard battle
from $\iota(\alpha+i)$ to $\iota(\beta)$.
\nFollowing~\\cite{KS81}, we proceed in two stages:\n\\begin{enumerate}\n\\item we simulate plain (free) paths from $\\alpha$ to $\\beta$ with\npaths made of steps $(\\gamma,\\canonseq{\\gamma}{n})$, \\emph{with the same $n$ all along the path}\n\\item we simulate any such path by a standard path.\n\\end{enumerate}\n\n\n\n\\subsection{Paths with a constant index}\n\nFirst of all, paths with a constant index \nenjoy nice properties. They are defined as paths where all the $i_j$ are equal to the same natural number $i$, for some $i>0$. \n\n\nLike in~\\cite{KS81}, we shall use the notation $\\alpha \\xrightarrow[i]{} \\beta$ for denoting such a path, also called an $i$-path.\n\n\\input{movies/snippets/Paths/constPathDef}\n\n% Paths with a given index can be effectively computed.\n% Given $i$, $\\alpha$ and $l$, the following function returns the ordinal $\\beta$ such that there exists a path \n% $\\alpha \\xrightarrow [i+1] {} \\beta$ of length $l$. \n\n% \\begin{Coqsrc}\n% Fixpoint const_funS (i:nat)(alpha : T1)(l:nat):  T1  :=\n%   match l\n%   with\n%   | 0 => alpha\n%   | S m => const_funS i (canonS i alpha) m\n%   end.\n% \\end{Coqsrc}\n\n% The following computations show  applications of \\texttt{constS\\_fun} to the \n% ordinal $\\omega^\\omega$, with various values of $i$ and $l$.\n\n% \\begin{Coqsrc}\n% Compute  (const_funS 2 (omega ^omega)  55).\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = zero\n%      : T1 \n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute pp (const_funS 2 (omega ^omega) 15).\n% \\end{Coqsrc}\n\n%   \\begin{Coqanswer}\n%  = (omega ^ 2 * 2)%pT1\n%      : ppT1   \n%   \\end{Coqanswer}\n\n\n% \\begin{Coqsrc}\n% Compute pp (const_funS 4 (omega^omega)  100).\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n% = (omega ^ 4 * 4 + omega ^ 3 * 4 + omega ^ 2 + omega * 4 + 4)%pT1\n%      : ppT1\n% \\end{Coqanswer}\n\n\nA most interesting property of $i$-paths is that we can ``upgrade'' their index, as stated by K.\\&S.'s Corollary 12.\n\n\\index{maths}{Transfinite induction}\n\n\\emph{From\n  Module~\\href{../theories/html/hydras.Epsilon0.Paths.html}{Epsilon0.Paths}}\n\n\\input{movies/snippets/Paths/Cor12}\n\n\nWe  also use a version of \\texttt{Cor12} with large inequalities.\n\n\\input{movies/snippets/Paths/Cor121}\n\n\n\n\\subsubsection{Sketch of proof of \\texttt{Cor12}}\n\\index{maths}{Transfinite induction}\n\nWe prove this lemma by transfinite induction on $\\alpha$.\nLet us consider a path $\\alpha \\xrightarrow [i]{} \\beta$ $(i>0)$. Its first step is\nthe pair $(\\alpha,\\canonseq{\\alpha}{i})$, We have $\\canonseq{\\alpha}{i}<\\alpha$ and\n$\\canonseq{\\alpha}{i} \\xrightarrow [i]{} \\beta$. \nLet $n$ be any natural number such that $n>i$.\nBy the induction hypothesis, there exists a path $\\canonseq{\\alpha}{n} \\xrightarrow[i]{} \\beta$.\n\\begin{itemize}\n\\item  If $\\alpha$ is a successor ordinal $\\gamma+1$, then $\\canonseq{\\alpha}{n} =\n\\canonseq{\\alpha}{i}=\\gamma$. Thus we have a path \n$\\alpha  \\xrightarrow [n]{}  \\gamma \\xrightarrow [n]{} \\beta$\n\\item If $\\alpha$ is a limit ordinal, we apply the following theorem (numbered \\texttt{2.4} in Ketonen and Solovay's article). \n\n%   \\begin{theorem}\n% Let $\\lambda$ be a limit ordinal, then for any pair of indices $0<i<j$, there is a path $\\canonseq{\\lambda}{j} \\xrightarrow[1]{} \\canonseq{\\lambda}{i}$.    
\n%   \\end{theorem}\n\n\\input{movies/snippets/Paths/KSThm24}\n  \n\n We build the following paths :\n\n \\begin{enumerate}\n \\item $\\alpha \\xrightarrow[n]{} \\canonseq{\\alpha}{n}$\n \\item $\\canonseq{\\alpha}{n} \\xrightarrow[1]{} \\canonseq{\\alpha}{i}$ (by \\texttt{Theorem\\_2\\_4}),\n\\item $\\canonseq{\\alpha}{n} \\xrightarrow[n]{} \\canonseq{\\alpha}{i}$ (applying the induction hypothesis to the preceding path);\n\\item $\\canonseq{\\alpha}{i} \\xrightarrow[n]{} \\beta$ (applying the induction hypothesis)\\item $\\alpha \\xrightarrow[n]{} \\beta$ (by composition of 1, 3, and 4).\n\n\n \\end{enumerate}\n\n\n\\end{itemize}\n\n\n\n\n\n\\begin{remark}\n \\texttt{Cor12} ``casts'' $i$-paths into $n$-paths for any $n>i$.\nBut the obtained $n$-path can be much longer than the original $i$-path.\nThe following exercise will give an idea of this increase. \n\\end{remark}\n\n\\index{hydras}{Exercises}\n\\begin{exercise}\n  Prove that  the length of the $i+1$-path from\n  $\\omega^\\omega$ to $\\omega^i$ is $1 + (i+1)^{(i+1)}$, for any $i$. Note that the $i$-path from\n  $\\omega^\\omega$ to $\\omega^i$ is only one step long.\n \\end{exercise}\n\n\nWhy is \\texttt{Cor12} so useful? \nLet us  consider two ordinals  $\\beta<\\alpha<\\epsilon_0$. By induction on $\\alpha$,\nwe decompose any inequality $\\beta<\\alpha$ into $\\beta < \\canonseq{\\alpha}{i}< \\alpha$, where $i$ is some integer. Applying corollary \\texttt{Cor12'} we build a $n$-path from $\\beta$ to $\\alpha$,\nwhere $n$ is the maximum of the indices $i$ met in the induction.\n\n Lemma 1, Section 2.6 of~\\cite{KS81} is naturally expressed in terms of \\coq's\n\\verb@sig@ construct.\n\n\\label{lemma:L-2_6-1}\n\\index{coq}{Sigma types}\n\n\\input{movies/snippets/Paths/Lemma261}\n\n\nIntuitively, lemma   \\texttt{Lemma2\\_6\\_1}  shows that if $\\beta<\\alpha<\\epsilon_0$, then there exists  a battle from $\\iota(\\alpha)$ to $\\iota(\\beta)$ where the replication factor is constant, although large enough. \n\n\n\n\n\n\n\n\\subsection{Casting paths with a constant index into a standard path}\n\n\nThe article~\\cite{KS81} contains \nthe following lemma, the proof of which is quite complex, which allows to simulate $i$-paths by $[i+1,j]$-paths, where $j$ is large enough.\n\n\\input{movies/snippets/Paths/constantToStandard}\n\n\\subsubsection{Sketch of proof of \\texttt{constant\\_to\\_standard\\_path}}\n\nOur proof follows the proof by Ketonen and Solovay, including its organization as a sequence of lemma.  Since it is a non-trivial proof, we will comment its main steps below.\n\n\\subsubsection*{Preliminaries}\n\n\nPlease note that, given an ordinal $\\alpha:\\texttt{T1}$, and two natural numbers $i$ and $l$, there exists at most a standard path $\\alpha \\xrightarrow [i,i+l]{*} \\beta$.\nThe following function computes $\\beta$ from $\\alpha$, $i$ and $l$.\n\n\\input{movies/snippets/Paths/standardGnaw}\n\n\n\n\\index{maths}{Transfinite induction}\n\nBy transfinite induction over  $\\alpha$, we prove that the ordinal $0$ is reachable from any ordinal $\\alpha<\\epsilon_0$ by some standard path.\n\n\\input{movies/snippets/Paths/standardPathToZero}\n\n\n\\paragraph*{}\nNow, let us consider two ordinals  $\\beta<\\alpha<\\epsilon_0$.  
Let $p$ be some $(n+1)$-path from $\alpha$ to $\beta$.

\input{movies/snippets/Paths/ConstantToStandardProof}


Applying \texttt{standard\_path\_to\_zero}, $0$ is reachable from $\alpha$ by some standard path (see figure~\vref{fig:belle-preuve-1}).

\begin{figure}[h]
  \centering

\begin{tikzpicture}[very thick, scale=0.25]
\node (alpha) at (0,0) {$\alpha$};
\node (beta) at (32, 0){$\beta$};

  \draw[->, very thick,blue] (alpha)-- node [below]{$n+1$} node [above] {$+$} (beta);

  \node (alpha1) at (5,5) {};
  \node (alpha2) at (13,5) {};
  \node (alpha3) at (20,5) {};
  \node (alphalast) at (35,5) {};
  \node (zero) at (45,0) {$0$};
  \draw [->, dashed,very thick,blue] (alpha)-- node [below, rotate=40]{$n+1$}  (alpha1);
  \draw [->, dashed,very thick,blue] (alpha1)-- node [below]{$n+2$}  (alpha2);
  \draw [->, dashed,very thick,blue] (alpha2)-- node [below]{$n+3$}  (alpha3);

  \node (dots) at (24,5) {$\dots$};
  \draw [->, dashed, very thick,blue] (alphalast)-- node [below, rotate=-26]{$n+p+1$}  (zero);

\end{tikzpicture}
\caption{A nice proof (1)}
  \label{fig:belle-preuve-1}
\end{figure}


\paragraph*{}


Since comparison on \texttt{T1} is decidable, one can compute the last step $\gamma$ of the standard path issued from $(\alpha,n+1)$ such that $\beta\leq \gamma$.
Let $l$ be the length of the path from $\alpha$ to $\gamma$.
This step of the proof is illustrated in figure~\vref{fig:belle-preuve-2}.


\begin{figure}[h]
  \centering

\begin{tikzpicture}[very thick, scale=0.25]
\node (alpha) at (0,0) {$\alpha$};
\node (beta) at (32, 0){$\beta$};

  \node (alpha1) at (5,5) {};
  \node (alpha2) at (13,5) {};
  \node (dots) at (17,5) {$\ldots$};
  \node (alpha3) at (20,5) {};
  \node (gamma) at (24,0) {$\gamma$};
  \node (delta) at (38,0) {$\delta$};
  \draw [->, dashed,very thick,blue] (alpha)-- node [below,rotate=35]{$n+1$}  (alpha1);
  \draw [->, dashed,very thick,blue] (alpha1)-- node [below]{$n+2$}  (alpha2);
  \draw [->, dashed,very thick,blue] (alpha3)-- node [below,rotate = -48]{\tiny $n+l$}  (gamma);
  \draw  [->, dashed, blue] (gamma) to    [bend left=80] node [below]{$n+l+1$} (delta);
  \draw[->, very thick,blue] (alpha) to [bend right=34] node [below]{$n+1$} node [above] {$+$} (beta);
  \draw[thick] (alpha)--  (gamma);
  \draw[thick] (gamma)--  node [above] {$\geq$} (beta);
  \draw[thick] (beta)--  node [above] {$>$} (delta);
\end{tikzpicture}

\caption{A nice proof (2)}
  \label{fig:belle-preuve-2}
\end{figure}

\paragraph*{}

\begin{itemize}
\item If $\beta=\gamma$, we are done: we have a standard path
from
$\alpha$ to $\beta$ with successive indices $n+1, n+2, \dots, n+l+1$.

\item Otherwise, $\beta < \gamma$.
Let us consider  $\\delta=\\canonseq{\\gamma}{n+l+1}$.\nBy applying several times lemma \\texttt{Cor12},  one converts  every path of Fig~\\ref{fig:belle-preuve-2} into\n a $n+l+1$-path  (see figure~\\ref{fig:belle-preuve-3}).\n\n\nBut $\\gamma$ is on the $n+l+1$-path from $\\alpha$ to $\\beta$.\nAs shown by figure~\\vref{fig:fin-belle-preuve}, the ordinal $\\delta$, reachable from\n$\\gamma$ in one single step,  must be greater than or equal to $\\beta$, which contradicts our  hypothesis $\\beta < \\gamma$.\n\n\n\\begin{figure}[h]\n  \\centering\n  \n\\begin{tikzpicture}[very thick, scale=0.25]\n\\node (alpha) at (0,0) {$\\alpha$};\n    \\node (beta) at (32, 0){$\\beta$};\n    \\node (alpha1) at (5,5) {};\n  \\node (alpha2) at (13,5) {};\n  \\node (dots) at (17,5) {$\\ldots$};\n    \\node (alpha3) at (20,5) {};\n    \\node (gamma) at (24,0) {$\\gamma$};\n    \\node (delta) at (38,0) {$\\delta$};\n     \\draw [->, dashed,very thick,blue] (alpha)-- node [below, rotate = 40] {\\tiny $n+l+1$}  node [above, rotate = 40]{\\tiny $+$}  (alpha1);\n  \\draw [->, dashed,very thick,blue] (alpha1)-- node [below]{\\tiny $n+l+1$} node [above]{\\tiny $+$} (alpha2);\n   \\draw [->, dashed,very thick,blue] (alpha3)-- node [below, rotate = -48]{\\tiny $n+l+1$} node [above, rotate = -36]{\\tiny $+$}  (gamma);\n   \\draw  [->, dashed, blue] (gamma) to    [bend left=80] node [below]{\\tiny $n+l+1$} node [above]{\\color{red} $1$} (delta);\n   \\draw[->, very thick,blue] (alpha) to [bend right=34] node [below]{\\small $n+l+1$} node [above] {\\tiny $+$} (beta);\n    \\draw[thick] (gamma)--   node [above]{\\color{red} $>$}(beta);\n   \\draw[thick] (alpha)--  (gamma);\n  \n    \\draw[thick] (beta)--  node [above] {$>$} (delta);\n\n  \n  \n\\end{tikzpicture}\n\n\\caption{A nice proof (3)}\n  \\label{fig:belle-preuve-3}\n\\end{figure}\n\n\n\\begin{figure}[h]\n  \\centering\n\\begin{tikzpicture}[very thick, scale=0.25]\n\\node (alpha) at (0,0) {$\\alpha$};\n    \\node (beta) at (32, 0){$\\beta$};\n  \n  \\node (alpha1) at (5,5) {};\n  \\node (alpha2) at (13,5) {};\n  \\node (dots) at (15,5) {$\\ldots$};\n    \\node (alpha3) at (18,5) {};\n    \\node (gamma) at (24,0) {$\\gamma$};\n    \\node (delta) at (42,0) {$\\delta$};\n    \\draw [->, dashed,very thick,blue] (alpha)-- node [below, rotate = 40] {\\tiny $n+l+1$}  node [above, rotate = 40]{\\tiny $+$}  (alpha1);\n  \\draw [->, dashed,very thick,blue] (alpha1)-- node [below]{\\tiny $n+l+1$} node [above]{\\tiny $+$} (alpha2);\n   \\draw [->, dashed,very thick,blue] (alpha3)-- node [below, rotate = -36]{\\tiny $n+l+1$} node [above, rotate = -36]{\\tiny $+$}  (gamma);\n   \\draw  [->, dashed, blue] (gamma) to    [bend left=80] node [below]{\\small $n+l+1$} node [above]{\\color{red} $1$} (delta);\n   \\draw[->, very thick,blue] (alpha) to [bend right=34] node [below]{\\small $n+l+1$} node [above] {\\tiny $+$} (beta);\n   \\draw[thick] (alpha)--  (gamma);\n  \n    \\draw[thick] (gamma)--  node [below]{\\tiny $n+l+1$} node [above]{\\color{red} $+$}(beta);\n    \\draw[thick] (beta)--  node [above] {$>$} (delta);\n\n\\end{tikzpicture}\n\n\\caption{A nice proof (4)}\n  \\label{fig:fin-belle-preuve}\n\\end{figure}\n\n\n\\end{itemize}\n The only possible case is  thus $\\beta=\\gamma$, so we have got a standard path  from $\\alpha$ to $\\beta$.\n\n\\input{movies/snippets/Paths/constantToStandard0}\n \n\n\nHere is the full statement of the conversion from constant to standard paths.\n\n\\input{movies/snippets/Paths/constantToStandardPath}\n\n\nApplying \\texttt{Lemma2\\_6\\_1} 
and \\texttt{constant\\_to\\_standard\\_path}, we get the following corollary.\n\n\\input{movies/snippets/Paths/LTToStandardPath}\n\n\n\\subsection{Back to hydras}\n\\label{sec:standard-battles-cases}\nWe are now able to complete our proof that there exists no bounded variant for proving the termination of standard hydra battles. This proof can\nbe consulted in the module \n\\href{../theories/html/hydras.Hydra.Epsilon0_Needed_Std.html}{Hydra.Epsilon0\\_Needed\\_Std}.\nPlease note that it has the same global structure as in section\\ref{sec:free-battles-case} \n% ICI !\nApplying the  lemmas  \\texttt{Lemma2\\_6\\_1} of the module \n\\href{../theories/html/hydras.Epsilon0.Paths.html\\#Lemma2_6_1}%\n{Epsilon0.pathS}   and \n\\href{../theories/html/hydras.Epsilon0.Paths.html\\#constant_to_standard_path}%\n{\\texttt{constant\\_to\\_standard\\_path}},\nwe can convert any inequality $\\beta<\\alpha<\\epsilon_0$ into a standard path from\n$\\alpha$ to  $\\beta$, then into a fragment of a standard battle from \n$\\iota(\\alpha)$ to $\\iota(\\beta)$, hence the inequality $m(\\iota(\\beta))<m(\\iota(\\alpha))$.\n\n\n\\vspace{4pt}\n\\emph{From Module~\\href{../theories/html/hydras.Hydra.Epsilon0_Needed_Std.html\\#LT_to_standard_battle}{Hydra.Epsilon0\\_Needed\\_Std}}\n\n\\input{movies/snippets/Epsilon0_Needed_Std/LTToStandardBattle}\n\n\nNext, please consider the following context:\n\n\\input{movies/snippets/Epsilon0_Needed_Std/theContext}\n\nIn the same way as for free battles, we import a large inequality \nfrom \nthe module \\href{../theories/html/hydras.Hydra.Epsilon0_Needed_Generic.html}{Epsilon0\\_Needed\\_Generic}.\n\n\\input{movies/snippets/Epsilon0_Needed_Std/mGe}\n\n\n\n\\paragraph*{} If remains to prove the following strict inequality, in order to have a contradiction.\n\n\\input{movies/snippets/Epsilon0_Needed_Std/mLt}\n\n\n\\paragraph*{Sketch of proof:} Let us recall that $\\texttt{big\\_h} = \\iota(\\mu)$\n and $\\texttt{small\\_h} = \\iota (m (\\texttt{big\\_h}))$.\n\nSince $m(\\texttt{big\\_h})< \\mu$, there exists a standard path from $\\mu$ to\n$m(\\texttt{big\\_h})$, hence a   standard battle from $\\iota(\\mu)$  to\n$\\iota(m(\\texttt{big\\_h}))$,  i.e. from \\texttt{big\\_h} to \\texttt{small\\_h}.\n\nSince $m$ is assumed to be a variant for standard battles, we get the inequality  $m(\\texttt{small\\_h}) < m(\\texttt{big\\_h})$.\n\n\n\\input{movies/snippets/Epsilon0_Needed_Std/endOfProof}\n\n\n\n\n\\subsection{Remarks}\n\nWe are grateful to \n J. Ketonen and R. 
We are grateful to J.~Ketonen and R.~Solovay for the high quality of their explanations and proof details.
Our proof follows tightly the sequence of lemmas in their article, with a focus on
constructive aspects.
Roughly speaking, our implementation \emph{builds}, out of a hypothetical
variant $m$, bounded by some ordinal $\mu<\epsilon_0$, a hydra \texttt{big\_h} which verifies the impossible inequality $m(\texttt{big\_h})< m(\texttt{big\_h})$.



One may ask whether the preceding results are too restrictive, since they
refer to a particular data type \texttt{T1}.
In fact, our representation of ordinals strictly less than
$\epsilon_0$ is faithful to their mathematical definition, at least
Kurt Schütte's~\cite{schutte}, as proved in Chapter~\vref{chap:schutte}
(see also the module
\href{../theories/html/hydras.Schutte.Correctness_E0.html}{hydras.Schutte.Correctness\_E0}).

Thus, we can infer that our theorems can be applied to any well order.

\index{hydras}{Projects}
\begin{project}
Study a possible modification of the definition of a variant (for standard battles).

\begin{itemize}
\item The variant is assumed to be strictly decreasing \emph{on configurations
reachable from some initial configuration where the replication factor is equal to $0$}.
\item The variant may depend on the number of the current round.
\end{itemize}

In other words, its type should be \texttt{nat -> Hydra -> T1}, and it must
verify the inequality $m\, (S\,i)\, h' < m\,i\, h$ whenever the configuration
$(i,h)$ is reachable from some initial configuration $(0,h_0)$
and \texttt{h} is transformed into \texttt{h'} in the considered round.
Can we still prove the theorems of section~\ref{std-case} with this new definition?

\end{project}



%---------------------------------------------------------------------
\chapter{Large sets and rapidly growing functions}\label{chap:alpha-large}

\begin{remark}
Some notations (mainly names of fast-growing functions) of our development may differ slightly from the literature. Although this fact does not affect our proofs, we are preparing a future version where the names $F\_alpha$, $f\_alpha$, $H\_alpha$, etc., are fully consistent with the cited articles.

\end{remark}
%\section{Introduction}

In this chapter, we try to get a feeling for how long a standard battle can be.
To be precise, for any ordinal $\alpha<\epsilon_0$ and any positive integer $k$,
we give a lower bound on the number of steps of a standard battle which
starts with the hydra $\iota(\alpha)$ and the replication factor $k$.

We express this number in terms of the Hardy hierarchy of fast-growing
functions~\cite{BW85, Wainer1970, KS81, Promel2013}.
From the \coq{} user's point of view, such functions are very
attractive: they are defined as functions in \gallina{}, and we can apply them \emph{in theory}, but they are so complex that one will never be able to look at the result of such a computation.
Thus, our knowledge of these functions must rely on \emph{proofs}, not tests. In our development, we often use the rewriting rules generated by \coq's \texttt{Equations} plug-in.


\section{Definitions}

%\subsection{Definition}

\begin{definition}
Let $0<\alpha<\epsilon_0$ be any ordinal, and $s=\langle s_1, s_2, \dots, s_N\rangle$ a finite sequence of strictly positive natural numbers.

We say that $s$ is \emph{$\alpha$-large} if the sequence $\langle \alpha_0=\alpha,\; \alpha_1=\canonseq{\alpha_0}{s_1},\;\dots,\; \alpha_N=\canonseq{\alpha_{N-1}}{s_N}\rangle$ leads to $0$, i.e. $\alpha_N=0$.
We also say that $s$ is \emph{minimally $\alpha$-large} (in short:
\emph{$\alpha$-mlarge}) if $s$ is $\alpha$-large
and every strict prefix of $s$ leads to a non-zero ordinal (\emph{cf.}~Sect.~\vref{sect:path-to-def}).

\index{maths}{Ordinal numbers!Large sets}
\index{maths}{Ordinal numbers!Minimal large sets}

\end{definition}



\begin{remark}
Ketonen and Solovay~\cite{KS81} consider large finite \emph{sets} of natural numbers, but they are mainly used as sequences. Thus, we chose to represent them explicitly as (sorted) lists.
\end{remark}

For instance, since $\canonseq{\omega}{i}=i$, the sequence $\langle 2,3,4\rangle$ is $\omega$-large ($\omega \to 2 \to 1 \to 0$), and even $\omega$-mlarge, whereas $\langle 2,3\rangle$ is not $\omega$-large.

The following function ``gnaws'' an ordinal $\alpha$, following a sequence of indices (ignoring the $0$s).

\vspace{4pt}

\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Paths.html\#gnaw}{Epsilon0.Paths}}

\input{movies/snippets/Paths/gnawDef}


\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html}{Epsilon0.Large\_Sets}}


\input{movies/snippets/Large_Sets/largeDef}



Minimal large sequences can be directly defined in terms of the
predicate \texttt{path\_to} (\vref{sect:path-to-def}), which already prohibits paths containing non-final \texttt{zero}s.

\vspace{4pt}

\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html\#mlarge}{Epsilon0.Large\_Sets}}


\index{hydras}{Library Epsilon0!Predicates!mlarge@mlarge (minimal large sequences)}

\input{movies/snippets/Large_Sets/largeDef}


Let us consider two integers $k$ and $l$, such that $0<k<l$. In order to check whether the interval $[k,l]$ is minimally large for $\alpha$, it is enough to
follow from $\alpha$ the path associated with the interval $[k,l)$ and verify that the last ordinal we obtain is equal to $1$.

\subsection{Example}

For instance, the interval $[6,70]$ leads $\omega^2$ to $\omega\times 2 + 56$. Thus this interval is not $\omega^2$-mlarge.



\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets_Examples.html\#mlarge}{Epsilon0.Large\_Sets\_Examples}}

\input{movies/snippets/Large_Sets_Examples/gnawEx1}



Let us try another computation.
\input{movies/snippets/Large_Sets_Examples/gnawEx2}



We may say that the interval $[6,700]$ is $\omega^2$-large, since it leads to $0$, but nothing assures us that the condition of minimality is satisfied.

The following lemma relates minimal largeness with the
function
\texttt{gnaw}.

\input{movies/snippets/Large_Sets/mlargeIff}



For instance, we can verify that the interval $[6,510]$ is $\omega^2$-mlarge.

\vspace{4pt}
\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets_Examples.html}{Epsilon0.Large\_Sets\_Examples}}

\input{movies/snippets/Large_Sets_Examples/Ex1Lemma}

\section{Length of minimal large sequences}

Now, consider any natural number $k>0$ and any ordinal $0<\alpha<\epsilon_0$. We would like to compute
a number $l$ such that the interval $[k,l]$ is $\alpha$-mlarge.
So,
the standard battle starting with $\iota(\alpha)$ and the replication factor $k$ will end after $(l-k+1)$ steps.



First, we notice that this number $l$ exists, since the segment $[0,\epsilon_0)$ is well-founded and $\canonseq{\alpha}{i}<\alpha$ for any $i$ and $\alpha>0$.
Moreover, it is unique:

\vspace{4pt}
\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html}{Epsilon0.Large\_Sets}}

\input{movies/snippets/Large_Sets/mlargeUnicity}


Thus, we would like to define a function, parameterized by $\alpha$, which associates to any strictly positive integer $k$ the number $l$ such that
the interval $[k,l]$ is $\alpha$-mlarge. It would be fine to write in \gallina{} a definition like this:

\begin{Coqbad}
Function L_ (alpha: E0) (i:nat) :  nat := ...
\end{Coqbad}

But we do not know how to fill the dots yet \dots{} In the next section, we
use \coq{} to reason about the \emph{specification} of \texttt{L},
and prove properties of any function which satisfies this specification.
In Sect.~\ref{sect:L-equations}, we use the \texttt{coq-equations} plug-in
to define a function \texttt{L\_}, and prove its correctness w.r.t.\ its specification.


\subsection{Formal specification}


Let $0<\alpha<\epsilon_0$ be an ordinal term. We consider any function which maps any strictly positive integer $k$ to the number $l$ such that
the interval $[k,l)$ is $\alpha$-mlarge.

\begin{remark}
In~\cite{KS81} Ketonen and Solovay consider the least natural number $l$ where the interval $[k,l]$ ($l$ included) is $\alpha$-large, and call $H_\alpha$ the function which maps $k$ to $l$. We chose to consider intervals $[k,l)$ instead of $[k,l]$
in order to simplify some statements and proofs in composition lemmas associated with the ordinals of the form $\alpha\times i$ and
$\omega^\alpha\times i + \beta$.
Clearly, both approaches are related through the equality
$L_\alpha(k)=H_\alpha(k)+1$, for any non-null $\alpha$ and $k$.
\end{remark}




Our specification of the function \texttt{L} is as follows:

\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html}{Epsilon0.Large\_Sets}}

\input{movies/snippets/Large_Sets/LSpecDef}


\begin{todo}
Check if the functions $L_\alpha$ are the same as the functions $f_\alpha$ of~\cite{KS81} (p.~297).
\end{todo}


Note that, for $\alpha\not=0$, the value of $f(0)$ is not specified.
Nevertheless, the restriction of $f$ to the set of strictly positive integers is unique (up to extensionality).

\input{movies/snippets/Large_Sets/LSpecUnicity}


\subsection{Abstract properties}



Let us now prove properties of any function $f$ (if any) which satisfies
\texttt{L\_spec}. We are looking for properties which could be used for writing \emph{equations} and for proving the correctness of the function generated by the \texttt{coq-equations} plug-in.
Moreover, they will give us some examples (for small values of $\alpha$).



Our exploration of the $L_\alpha$\,s follows the usual scheme: transfinite induction, and proof by cases: zero, successors and limit ordinals.

\index{maths}{Transfinite induction}

\subsubsection{The ordinal zero}
\label{sect:L-spec-zero}
The base case is directly a consequence of the specification.

\input{movies/snippets/Large_Sets/LZeroInv}


\subsubsection{Successor ordinals}
\label{sect:L-spec-succ}
Let $\beta$ be some ordinal, and assume the arithmetic function $f$ satisfies
the specification $(\texttt{L\_spec}\;\beta)$. Let $k$ be any natural number.
Any path from $\texttt{succ}\,\beta$ to $0$ starting at $k+1$ can be decomposed into a first step from $\texttt{succ}\,\beta$ to $\beta$, then a path from
$\beta$ at $k+2$ to $0$.
By hypothesis, the interval $[k+2, f(k+2)-1]$ is $\beta$-mlarge.
But the interval $[k+1, f(k+2)-1]$ is the concatenation of the singleton
$\{k+1\}$ and the interval $[k+2, f(k+2)-1]$.
So, the function $\lambda\,k.\,f(k+1)$ satisfies the specification $(\texttt{L\_spec}\;(\texttt{succ}\,\beta))$.


Note that our decomposition of intervals works only if the intervals we consider are not empty. In order to ensure this property, we assume that $f\;k$ is always greater than $k$, which we note \texttt{S <<= f}, or \texttt{(fun\_le S f)}.

Useful abstract properties of arithmetic functions are
defined
in~\href{../theories/html/hydras.Prelude.Iterates.html\#fun_le}{Prelude.Iterates}.

\label{sect:abstract-arith-prop}

\input{movies/snippets/Iterates/funLeDef}

We also prove that the functions we consider are strictly monotone. The section on successor ordinals has the following structure.

\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html}{Epsilon0.Large\_Sets}}

\input{movies/snippets/Large_Sets/SectionSucc}


\subsubsection{Limit ordinals}
\label{sect:L-spec-lim}

Let $\lambda<\epsilon_0$ be any limit ordinal. In a similar way as for successors, we decompose any path from $\lambda$ into a first step to
$\canonseq{\lambda}{k}$, followed by a path to $0$. In the following section, we assume that there exists a correct function computing $L_{\canonseq{\lambda}{k}}$ for any strictly positive $k$.

\input{movies/snippets/Large_Sets/SectionLim}


\subsection{First results}

Applying the previous lemmas on successors and limit ordinals,
we obtain a few correct implementations of \texttt{(L\_spec $\alpha$)} for small values of $\alpha$.

\subsubsection{Finite ordinals}

By iterating the functional \texttt{L\_succ}, we get a realization of
\texttt{(L\_spec (fin $i$))} for any natural number $i$.

\input{movies/snippets/Large_Sets/LFinDef}
\vspace{-16pt}
\input{movies/snippets/Large_Sets/LFinOk}

\subsubsection{The first limit ordinal \texorpdfstring{$\omega$}{omega}}

The lemmas \texttt{L\_fin\_ok} and \texttt{L\_lim\_ok} allow us to get
by diagonalization a correct implementation of
\texttt{(L\_spec omega)}.

\input{movies/snippets/Large_Sets/LOmegaDef}
\vspace{-16pt}
\input{movies/snippets/Large_Sets/LOmegaOk}
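To make these realizations concrete, here is a hedged sketch of the functions they compute; the definitions below use our own hypothetical names, and are only a plausible reading of the snippets above.

\begin{Coqsrc}
(* Hypothetical sketch (our names, to be compared with the
   snippets above). *)
Definition L_fin_sketch (i k : nat) : nat := k + i.

(* Diagonalization: a path from omega starting at k first reaches
   canon(omega,k) = k, then goes on at k+1. *)
Definition L_omega_sketch (k : nat) : nat := L_fin_sketch k (S k).
(* L_omega_sketch k = 2*k + 1 : the interval [k, 2k] is omega-mlarge. *)
\end{Coqsrc}

In particular, one expects $L_{\omega}(k)=2k+1$, which agrees, for $i=1$, with the equality $L_{\omega\times i}(k)=2^i\times(k+1)-1$ proved below.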
\subsubsection{Towards  \texorpdfstring{$\omega^2$}{omega*omega}}

We would like to get exact formulas for the ordinal $\omega^2$, a.k.a.
$\phi_0(2)$. This ordinal is the limit of the sequence $\omega\times i\;(i \in \mathbb{N})$. Thus, we have to study ordinals of this form, and then use
our lemma on limits.

The following lemma establishes a path from $\omega\times (i+1)$ to
$\omega \times i$.

\input{movies/snippets/Large_Sets/pathToOmegaMult}

Let us consider a path from $\omega\times(i+1)$ to $0$ starting at $k+1$.
A first ``big step'' leads to $\omega\times i$ at $2(k+1)$. If $i>0$, the
next jump leads to $\omega\times(i-1)$ at $2(2(k+1))+1$, etc.

The following lemma expresses the length of the mlarge sequences associated with the finite multiples of $\omega$.

\input{movies/snippets/Large_Sets/omegaMultMlarge0}

\emph{From Module~\href{../theories/html/hydras.Epsilon0.Large_Sets.html\#L_omega_mult}{Epsilon0.Large\_Sets}}

\input{movies/snippets/Large_Sets/LOmegaMultDef}

More generally, we prove the equality $L_{\omega\times i}(k)=2^i\times(k+1)-1$.

\input{movies/snippets/Large_Sets/LOmegaMultEqn}

Correctness of the function \texttt{L\_omega\_mult} is asserted by the following lemma.

\input{movies/snippets/Large_Sets/LOmegaMultOk}

By diagonalization, we obtain a simple formula for $L_{\omega^2}$.

\input{movies/snippets/Large_Sets/LOmegaSquare}
\input{movies/snippets/Large_Sets/LOmegaSquareEqn}
\input{movies/snippets/Large_Sets/LOmegaSquareOk}

\subsubsection{Going further}
Let us consider a last example: ``computing'' $L_{\omega^3}$.
Since the canonical sequence associated with this ordinal is composed of the
$\omega^2\times i\;(i\in\mathbb{N}_1)$, we have to study this sequence.

To this end, we prove a generic lemma which expresses $L_{\omega^\alpha\times i}$ as an iterate of $L_{\omega^\alpha}$. Note that, in this lemma, we assume that the function associated with $\alpha$ is strictly monotone and
pointwise greater than or equal to the successor function, and we prove that $L_{\omega^\alpha\times i}$ satisfies the same properties.

\input{movies/snippets/Large_Sets/phi0Mult}

Let us now look at the ordinal $\omega^2\times i$, using \texttt{L\_phi0\_mult}.

\input{movies/snippets/Large_Sets/LOmegaSquareTimes}

We are now ready to get an exact formula for $L_{\omega^3}$, by diagonalization upon $L_{\omega^2\times i}$.

\input{movies/snippets/Large_Sets/LOmegaCube}

Thus, for instance, $L_{\omega^3}(3)=L_{\omega^2\times 4}(3)$.

\input{movies/snippets/Large_Sets/LOmegaCube3Eq}

This number is quite big. Using \texttt{OCaml}'s \texttt{float} arithmetic,
we can under-approximate it by $2^{3.8\times10^{30}}\times 3.8\times 10^{30}$.

\begin{Coqsrc}
# let exp2 x = 2.0 ** x;;
val exp2 : float -> float = <fun>
# let n = exp2 95.0;;
# let p = n *. 97.0 -. 1.0;;
val p : float = 3.84256588194182037e+30
(* Estimation: 2 ** (3.84e+30) *. 3.84e+30 *)
\end{Coqsrc}
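As a cross-check of this estimation, the exponent $2^{95}\times 97 - 1$ itself can be computed exactly inside \coq{}, on binary natural numbers (a side computation, not part of the library):

\begin{Coqsrc}
Require Import NArith.

(* The exact value under-approximated above with floats. *)
Compute (2 ^ 95 * 97 - 1)%N.
\end{Coqsrc}

\begin{Coqanswer}
= 3842565881941820373286881591295%N
: N
\end{Coqanswer}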
\subsection{Using \texttt{Equations}}
\label{sect:L-equations}

Note that we have not defined any function $L_\alpha$ \emph{for arbitrary $\alpha<\epsilon_0$} yet. So far, we only have a collection of proved realizations of $\texttt{L\_spec}\;\alpha$ for a few values of $\alpha$.

\index{coq}{Plug-ins!Equations}

Using the \texttt{coq-equations} plug-in by
M. Sozeau~\cite{sozeau:hal-01671777}, we will now define a function \texttt{L\_} which maps any ordinal $\alpha<\epsilon_0$ to a proven realization of
$\texttt{L\_spec}\;\alpha$.
To this end, we represent ordinals as inhabitants of the type
\texttt{E0} of well-formed ordinal terms (see Sect.~\vref{sect:E0-def}). So, we define a total function \texttt{L\_} of type
\texttt{E0 -> nat -> nat}, by transfinite recursion, considering the usual three cases: $\alpha=0$, $\alpha$ is a successor, and $\alpha$ is a limit ordinal.

\subsubsection{Definition}

\vspace{4pt}
\noindent
\emph{From Module~\href{../theories/html/hydras.Epsilon0.L_alpha.html\#L_}{L\_alpha}.}

\label{Functions:L-alpha}
\index{hydras}{Library Epsilon0!Functions!L\_@L\_ (final step of a minimal path)}

\input{movies/snippets/L_alpha/LDef}

This definition results in a bunch of automatically generated lemmas. For instance:

\input{movies/snippets/L_alpha/AboutLEquation1}

In most cases, it is useful to write human-readable paraphrases of these statements.

\input{movies/snippets/L_alpha/Paraphrases}

Using these three lemmas as rewrite rules, we can prove more properties of the functions \texttt{L\_$\alpha$}.

\input{movies/snippets/L_alpha/LFiniteOmega}

By well-founded induction on $\alpha$, we prove the following lemmas:

\input{movies/snippets/L_alpha/LGeS}

\input{movies/snippets/L_alpha/LCorrect}

Please note that the proof of \texttt{L\_correct} applies the lemmas proven in Sections~\ref{sect:L-spec-zero}, \ref{sect:L-spec-succ} and~\ref{sect:L-spec-lim}.
Our previous study of \texttt{L\_spec} paved the way for the definition by \texttt{Equations} and for the correctness proof.

\subsubsection{Back to hydra battles}
\label{def:L-alpha}

Lemma \texttt{battle\_length\_std} of
Module~\href{../theories/html/hydras.Hydra.Battle_length.html}{Hydra.Battle\_length} relates the length of standard battles to the functions $L_\alpha$.

\input{movies/snippets/Battle_length/battleLengthStd}

\index{hydras}{Projects}
\begin{project}
Instead of considering standard paths and battles, consider ``constant'' paths and the corresponding battles. Please use \texttt{Equations} in order to define the function which computes the length of the $k$-path leading from $\alpha$ to $0$.
Prove a few exact formulas and lower-bound lemmas.
\end{project}

\section{A variant of the Wainer-Hardy hierarchy}

\label{sect:hardy}

In order to give a feeling of the complexity of the functions $L_\alpha$, we compare them with a better-known family of functions: the \emph{Wainer-Hardy hierarchy} of fast-growing functions,
presented for instance in~\cite{Promel2013}.
\index{maths}{Rapidly growing functions!Hardy Hierarchy}

\begin{remark}
  Indeed, the functions presented in this section are a \emph{variant} of the Hardy hierarchy of functions. In future versions of this development, we will correct the references to the literature. For the time being, we call our functions $H'_\alpha$ in order to underline the difference from the ``classic'' Hardy functions.
\end{remark}

For each ordinal $\alpha$ below $\epsilon_0$, $H'_\alpha$ is
a total arithmetic function, defined by transfinite recursion on $\alpha$, according to three cases:

\index{maths}{Transfinite induction}

\begin{itemize}
\item if $\alpha=0$, then $H'_\alpha (k)= k$ for any natural number $k$;
\item if $\alpha=\textrm{succ}(\beta)$, then
$H'_\alpha(k)=H'_\beta(k+1)$ for any $k \in \mathbb{N}$;
\item if $\alpha$ is a limit ordinal, then
$H'_\alpha(k) = H'_{(\canonseq{\alpha}{k+1})}(k)$ for any $k\in \mathbb{N}$.
\end{itemize}

\begin{remark}
The ``classic'' definition of the Wainer-Hardy hierarchy differs in the third equation:

\begin{itemize}
\item if $\alpha=0$, then $H_\alpha (k)= k$ for any natural number $k$;
\item if $\alpha=\textrm{succ}(\beta)$, then
$H_\alpha(k)=H_\beta(k+1)$ for any $k \in \mathbb{N}$;
\item if $\alpha$ is a limit ordinal, then
$H_\alpha(k) = H_{(\canonseq{\alpha}{k})}(k)$ for any $k\in \mathbb{N}$.
\end{itemize}

\end{remark}
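To see where the two hierarchies part ways, here is a small hand computation (assuming, as in our development, that $\canonseq{\omega}{i}=i$). Both definitions give $H'_i(k)=H_i(k)=k+i$ for any finite $i$; at the first limit ordinal, however,
$$H'_\omega(k)=H'_{\canonseq{\omega}{k+1}}(k)=H'_{k+1}(k)=2k+1,
\qquad
H_\omega(k)=H_{\canonseq{\omega}{k}}(k)=H_k(k)=2k.$$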
\subsection{Definition in \texttt{Coq}}

We define a function \texttt{H'\_} of type \texttt{E0 -> nat -> nat} by transfinite induction over the type \texttt{E0} of the well-formed ordinal terms below $\epsilon_0$.

\vspace{4pt}
\emph{From Module~\href{../theories/html/hydras.Epsilon0.Hprime.html\#H_}{Epsilon0.Hprime}}

\index{hydras}{Library Epsilon0!Functions!H\_@H\_ (Hardy hierarchy (variant))}
\index{coq}{Plug-ins!Equations}
\label{Functions:Hprime-alpha}

\input{movies/snippets/Hprime/HprimeDef}
 
\input{movies/snippets/Hprime/paraphrases}

\subsection{First steps of the H' hierarchy}
Using the rewrite rules \texttt{H'\_eq1} to \texttt{H'\_succ\_eqn}, we can explore the functions $H'_\alpha$ for small values of $\alpha$.

\subsubsection{Finite ordinals}

By induction on $i$, we prove a simple expression of \texttt{H'\_ (Fin i)}, where
\texttt{Fin $i$} is the $i$-th finite ordinal.

\input{movies/snippets/Hprime/HprimeFin}

\subsubsection{Multiples of \texorpdfstring{$\omega$}{omega}}

Since the canonical sequence of $\omega$ is composed of finite ordinals,
it is easy to get the formula associated with $H'_\omega$.

\input{movies/snippets/Hprime/HprimeOmega}

Before going further, we prove a useful rewriting lemma:

\input{movies/snippets/Hprime/HprimePlusFin}

Then, we easily get formulas for $H'_{\omega+i}$ and $H'_{\omega\times i}$, for any natural number $i$.

\input{movies/snippets/Hprime/HprimeExamples}

Crossing a new limit, we prove the following equality:
$$H'_{\omega^2} (k) = 2 ^ {k+1} \times (k+1) - 1.$$

\input{movies/snippets/Hprime/HprimeOmegaSqr}
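This equality can be understood through the following informal computation (with the same conventions on canonical sequences as above). The limit equation, followed by $k+1$ applications of the successor equation, gives
$$H'_{\omega\times(i+1)}(k)=H'_{\omega\times i+(k+1)}(k)=H'_{\omega\times i}(2k+1),$$
so $H'_{\omega\times i}$ is the $i$-th iterate of $H'_\omega : k \mapsto 2k+1$. Since the $i$-th iterate of $k\mapsto 2k+1$ is $k\mapsto 2^i(k+1)-1$, the limit equation $H'_{\omega^2}(k)=H'_{\omega\times(k+1)}(k)$ yields $2^{k+1}(k+1)-1$.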
\subsubsection{New limits}

Our next step is to prove an exact formula for $H'_{\omega^\omega}(k)$.
Since the canonical sequence of $\omega^\omega$ is composed of all the
$\omega^i$, we first need to express $H'_{\omega^i}$ for any natural number $i$.

Let $i$ and $k$ be two natural numbers.
The ordinal $\canonseq{\omega^{i+1}}{k}$ is the product
$\omega^i \times k$, so we also need to consider ordinals of this form.

\begin{enumerate}
\item First, we express $H'_{\omega^\alpha \times (i+2)}$ in terms of
$H'_{\omega^\alpha \times (i+1)}$.

\input{movies/snippets/Hprime/HprimeOmegaTerm1}

\item
Then, we prove by induction on $i$ that $H'_{\omega^\alpha \times (i+1)}$ is just the
$(i+1)$-th iterate of $H'_{\omega^\alpha}$.

\input{movies/snippets/Hprime/HprimeOmegaTerm}

\item In particular, we derive a formula for $H'_{\omega^{i+1}}$.

\input{movies/snippets/Hprime/HprimeSuccFun}
\input{movies/snippets/Hprime/HprimePhi0SI}  

\item We now get a formula for $H'_{\omega^3}$:

\input{movies/snippets/Hprime/HprimeOmegaCube}
\end{enumerate}

\subsubsection{A numerical example}

It looks hard to capture the complexity of this function by looking only at this
``exact'' formula.
Let us consider a simple example: the number $H'_{\omega^3}(3)$.

\input{movies/snippets/Hprime/HprimeOmegaCube3a}

Thus, the number $H'_{\omega^3}(3)$ can be written as four nested applications of $f$.
 
\input{movies/snippets/Hprime/HprimeOmegaCube3b}

In order to make this statement more readable, we can introduce a local definition.

\input{movies/snippets/Hprime/HprimeOmegaCube3c}

This number looks quite big; let us compute an approximation with \texttt{OCaml}:

\begin{Coqsrc}
# (2.0 ** 64.0 *. 64.0 -. 1.0);; 
\end{Coqsrc}

\begin{Coqanswer}
- : float = 1.1805916207174113e+21
\end{Coqanswer}

\input{movies/snippets/Hprime/HprimeOmegaCube3d}

In more classical notation, this number is displayed as follows:

{\Large
$$
H'_{\omega^3}(3) =  2 ^ {(2 ^ {N + 1} \, (N+1) )}   \,  (2 ^ {N+1} \, ( N +1) ) - 1
$$
}

We leave it as an exercise to determine as good an approximation as possible of
the size of this number (for instance, its number of digits). If
we do not take the multiplications in the formula above into account,
we obtain that, in base $2$, the number $H'_{\omega^3}(3)$ has at least
$2^{10^{21}}$ digits; and even this is an under-approximation!

We now have, at last, an exact formula for $H'_{\omega^\omega}$.

\input{movies/snippets/Hprime/HprimePhi0Omega}

Using extensionality of the functional \texttt{iterate}, we also get a closed formula.

\input{movies/snippets/Hprime/HprimePhi0OmegaClosed}

Note that this formula contains two occurrences of the functional \texttt{iterate}: the outer one is in fact a second-order iteration (on type \texttt{nat -> nat}),
while the inner one is first-order (on type \texttt{nat}).
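To make the distinction concrete, here is a minimal sketch of such an iteration functional, together with a first-order and a second-order use (the library's version is the one defined in Module~\href{../theories/html/hydras.Prelude.Iterates.html\#iterate}{Prelude.Iterates}; the sketch below may differ from it in details):

\begin{Coqsrc}
(* A minimal sketch of an iteration functional. *)
Fixpoint iterate {A : Type} (f : A -> A) (n : nat) (x : A) : A :=
  match n with
  | 0 => x
  | S p => f (iterate f p x)
  end.

(* First-order iteration, on type nat: *)
Compute iterate S 5 0.
(* = 5 : nat *)

(* Second-order iteration, on type nat -> nat: iterating
   self-composition twice on S builds the function fun n => n + 4. *)
Compute iterate (fun (f : nat -> nat) (n : nat) => f (f n)) 2 S 0.
(* = 4 : nat *)
\end{Coqsrc}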
\subsection{Abstract properties of
\texorpdfstring{$H'_\alpha$}{H'}}
\label{sect:H-alpha-prop}

Since pure computation seems to be useless for dealing with expressions of the form $H'_\alpha(k)$, even for small values of $\alpha$ and $k$, we need theorems for comparing $H'_\alpha(k)$ and $H'_\beta(l)$ in terms of a comparison
between $\alpha$ and $\beta$ on the one hand, and between $k$ and $l$ on the other hand.

But beware of fake theorems! For instance, one could believe that $H'$ is monotone in its first argument. The following proof shows that this is false.

\input{movies/snippets/Hprime/HprimeNonMono1}

On the contrary, the functions of the $H'$ hierarchy enjoy the following five properties~\cite{KS81}: for any $\alpha < \epsilon_0$,
\begin{itemize}
\item the function $H'_\alpha$ is strictly monotone:
      for all $n,p \in\mathbb{N}$, $n < p \Rightarrow H'_\alpha(n)< H'_\alpha(p)$;
\item if $\alpha \not= 0$, then for every $n$, $n<H'_\alpha(n)$;
\item the function $H'_\alpha$ is pointwise less than or equal to $H'_{\alpha+1}$;
\item for any $n\geq 1$, $H'_\alpha(n)<H'_{\alpha+1}(n)$
(\emph{we say that $H'_{\alpha+1}$ dominates $H'_\alpha$ from $1$});
\item for any $n$ and $\beta$, if $\alpha \xrightarrow[n]{} \beta$, then
$H'_\beta(n)\leq H'_\alpha(n)$.
\end{itemize}

\index{maths}{Abstract properties of arithmetic functions}
\index{hydras}{Abstract properties of arithmetic functions}

\index{maths}{Transfinite induction}

In \coq{}, we follow the proof in~\cite{KS81}. It is mainly a single proof, by transfinite induction on $\alpha$, of the conjunction of the five properties.
For each $\alpha$, the three cases ($\alpha=0$, $\alpha$ is a limit, $\alpha$ is a successor) are considered. Inside each case, the five sub-properties are proved sequentially, using the abstract properties defined in Sect.~\vref{sect:abstract-arith-prop}.

\input{movies/snippets/Hprime/PAP}

% By elimination, we get a catalogue of properties of the functions $L'_\alpha$:

% \begin{Coqsrc}
% Section Abstract_Properties.
%  Variable alpha : E0.

%  Theorem H'_alpha_mono : strict_mono (H'_ alpha).

%  Theorem H'_alpha_gt : alpha <> Zero ->
%         forall n, n < H'_ alpha n.

%  Theorem H'_alpha_Succ_le :
%         H'_ alpha <<= H'_ (Succ alpha).
  
%  Theorem H'_alpha_dom :
%     dominates_from 1 (H'_ (Succ alpha)) (H'_ alpha).

%  Theorem H'_restricted_mono_l :
%     forall beta n, Canon_plus n alpha beta -> 
%                                  H'_ beta n <= H'_ alpha n.

%   Lemma H'_alpha_ge_id : id <<= H'_ alpha.

%   Lemma H'_alpha_mono_weak : forall k l, k <= l ->
%                                         H'_ alpha k <= H'_ alpha l.
  
% End Abstract_Properties.
% \end{Coqsrc}

Using a few lemmas \emph{à la} Ketonen-Solovay, we prove that,
if $\alpha<\beta$, then $H'_\beta$ eventually dominates
$H'_\alpha$.
We let the reader look at the proof (Section \texttt{Proof\_of\_H'\_mono\_l} of \href{../theories/html/hydras.Epsilon0.Hprime.html\#H_}{Epsilon0.Hprime}).

\noindent
\input{movies/snippets/Hprime/HprimeDom}

\subsection{Comparison between \texttt{L\_} and \texttt{H'\_}}

By well-founded induction on $\alpha$, we prove a simple relation between $L_\alpha$ and $H'_\alpha$.

\emph{From Module~\href{../theories/html/hydras.Epsilon0.L_alpha.html\#H'_L_}{Epsilon0.L\_alpha}}

\input{movies/snippets/L_alpha/HprimeL}
 
\subsubsection{Back to hydras}

The following theorem relates the length of (standard) battles to the $H'$ family of fast-growing functions.

\vspace{4pt}

\noindent
\emph{From Module~\href{../theories/html/hydras.Hydra.Hydra_Theorems.html}{Hydra.Hydra\_Theorems}}

\input{movies/snippets/Hydra_Theorems/battleLengthStdHardy}

\section{A variant of the Wainer hierarchy (functions \texorpdfstring{$F_\alpha$}{F\_alpha})}
\label{sect:wainer}

\index{maths}{Rapidly growing functions!Wainer Hierarchy}

Ketonen and Solovay introduce in~\cite{KS81} a ``trivial'' variant of the Wainer hierarchy~\cite{BW85, Wainer1970} of fast-growing functions, indexed by the ordinals below $\epsilon_0$.

\label{F_equations}
\begin{itemize}
\item $F_0(i)=i+1$;
\item $F_{\beta+1}(i)= (F_\beta)^{(i+1)}(i)$, where $f^{(i)}$ is the $i$-th iterate of $f$;
\item $F_\alpha(i) = F_{\canonseq{\alpha}{i}} (i)$ if $\alpha$ is a limit ordinal.
\end{itemize}

\begin{remark}
The difference with the ``classic'' Wainer hierarchy
$f_\alpha\;(\alpha<\epsilon_0)$ lies in the second equation:
$f_{\beta+1}(i) = (f_\beta)^{(i)}(i)$, and not
$f_{\beta+1}(i) = (f_\beta)^{(i+1)}(i)$. A module about
the classic Wainer hierarchy is in preparation.
\end{remark}
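From these equations, the first levels can be computed by hand (a worked example): since $F_0$ is the successor function,
$$F_1(i)=(F_0)^{(i+1)}(i)=2i+1,
\qquad
F_2(i)=(F_1)^{(i+1)}(i)=2^{i+1}\,(i+1)-1,$$
whereas the classic hierarchy gives $f_1(i)=(f_0)^{(i)}(i)=2i$.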
A first attempt is to define $F_\alpha$ by equations, in the same way as for $H'_\alpha$. We use the functional \texttt{iterate} defined in
Module~\href{../theories/html/hydras.Prelude.Iterates.html\#iterate}{Prelude.Iterates}.

\index{hydras}{Library Prelude!iterate}

\input{movies/snippets/Iterates/iterateDef}

The following code comes from
\href{../theories/html/hydras.Epsilon0.F_alpha.html}{Epsilon0.F\_alpha}.

\index{coq}{Plug-ins!Equations}

\input{movies/snippets/F_alpha/FailDemo}

We presume that this error comes from the recursive call of \texttt{F\_} inside
an application of \texttt{iterate}. The workaround we propose is to first define
the iteration of \texttt{F\_} as a helper $F^*$, and then to define the function $F$ by ``iterating $F^*$ once''.

\texttt{Equations} accepts the following definition, which relies on the lexicographic ordering on pairs $(\alpha,n)$.

\label{sect:F-equations}

\index{coq}{Plug-ins!Equations}
\label{Functions:F-alpha}
\index{maths}{Rapidly growing functions}
\index{hydras}{Library Epsilon0!Functions!F\_@F\_ (Wainer hierarchy)}
  
\input{movies/snippets/F_alpha/goodDef}

It is quite easy to prove that our function \texttt{F\_} satisfies the equations on page~\pageref{sect:F-equations}.
\index{hydras}{Library Prelude!iterate}

\input{movies/snippets/F_alpha/FEquations}

As for the Hardy functions, we can use these equalities as rewrite rules for
``computing'' some values of $F_\alpha(i)$, for small values of $\alpha$.

\input{movies/snippets/F_alpha/FirstValues}

As in Sect.~\ref{sect:H-alpha-prop}, we prove by induction the following properties (see~\cite{KS81}).

\input{movies/snippets/F_alpha/FalphaThms}

As a corollary, we prove the following proposition, p.~284 of~\cite{KS81}.
\begin{quote}
  If $\beta<\alpha$, $F_\alpha$ dominates $F_\beta$.
\end{quote}

\input{movies/snippets/F_alpha/FDomContext}
\input{movies/snippets/F_alpha/FDom}

\index{hydras}{Exercises}

\begin{exercise}
Prove the following property:

\begin{Coqsrc}
Lemma LF3 : dominates_from  2 (F_ 3) (fun  n => iterate exp2 n n).
\end{Coqsrc}

\emph{You may start this exercise with the file
\href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/F_3.v}{exercises/ordinals/F\_3.v}.}
\end{exercise}
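To get a feeling for the bound in this exercise, note that $n \mapsto \texttt{iterate exp2}\;n\;n$ is a tower of exponentials of height $n$. Here is a small self-contained experiment (we re-define \texttt{exp2} and \texttt{iterate} locally, assuming they behave as their library counterparts):

\begin{Coqsrc}
(* Local sketches of exp2 and iterate, for experimenting. *)
Definition exp2 (n : nat) : nat := 2 ^ n.

Fixpoint iterate {A : Type} (f : A -> A) (n : nat) (x : A) : A :=
  match n with
  | 0 => x
  | S p => f (iterate f p x)
  end.

Compute iterate exp2 2 2.
(* = 16 : nat, i.e. 2 ^ (2 ^ 2); with n = 3 we would already get
   2 ^ (2 ^ (2 ^ 3)) = 2 ^ 256, far too big to compute on nat. *)
\end{Coqsrc}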
\index{hydras}{Exercises}

\begin{exercise}
Prove that, for any $\alpha\geq 3$ and $n\geq 2$,
$F_\alpha(1+n)\geq 2^{F_\alpha(n)}$.

\emph{You may start this exercise with the file
    \href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/F_3.v}{exercises/ordinals/F\_3.v}.}
\end{exercise}

\index{hydras}{Exercises}

\begin{exercise}
It is tempting to prove a simple monotonicity property
of the function \texttt{F\_}.

\begin{quote}
   Let $\alpha\leq\beta<\epsilon_0$. For any $n\geq 2$,
$F_\alpha(n)\leq F_\beta(n)$.
\end{quote}
Prove or disprove this statement.

\emph{You may start this exercise with the file
    \href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/is_F_monotonous.v}{exercises/ordinals/is\_F\_monotonous.v}.}
\end{exercise}

\index{hydras}{Exercises}
\begin{exercise}
Prove that, for any $n\geq 2$, $\textrm{Ack}\;n\;n\leq F_\omega(n)$, where $\textrm{Ack}$ is the Ackermann function. Next, prove that $F_\alpha$ is not primitive recursive for any $\alpha\geq\omega$ (please see Sect.~\vref{sect:ack-not-PR}).
On the other hand, please show that, for any natural number $n$, the function $F_n$ is primitive recursive.
Thus, $F_\alpha$ is primitive recursive if and only if $\alpha$ is finite.

\emph{You may start this exercise with the file
    \href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/F_omega.v}{exercises/ordinals/F\_Omega.v}.
Properties of the Ackermann function are studied in
    \href{https://github.com/coq-community/hydra-battles/tree/master/theories/ordinals/MoreAck/Ack.v}{theories/ordinals/MoreAck/Ack.v} and
    \href{https://github.com/coq-community/hydra-battles/tree/master/theories/ordinals/MoreAck/AckNotPR.v}{theories/ordinals/MoreAck/AckNotPR.v}.}
\end{exercise}

\section{Conclusion}

In Sect.~\vref{sect:battle-length-notPR}, we prove that the length of hydra battles (for a given hydra, viewed as a function of the initial replication factor) is not primitive recursive in general.
This proof uses properties of the Ackermann function, and of the $H'_\alpha$, $F_\alpha$ and $L_\alpha$ families of functions.


%%% TODO : Check the statement !
%
% \index{hydras}{Exercises}
% \begin{exercise}
% Let us quote a theorem from ~\cite{KS81} (page 297).

% \begin{quote}
% \begin{align*}
%   H'_{\omega^\alpha}(n+1) &\geq F_{\alpha}(n) \quad (n\geq 1, \alpha<\epsilon_0) \\
%  F_{\alpha}(n+1) &\geq H'_{\omega^\alpha}(n) \quad (n\geq 1, \alpha<\epsilon_0) 
% \end{align*}
% Thus $H'_{\omega^\alpha}$ and $F_{\alpha}$ have essentially the same order of growth.

% \end{quote}

%  But, before trying to prove these facts, look at the definition of function $H$ in Ketonen and Solovay's paper ! 
Is it really the same as the definition we quote from Pr{\\H o}mel's chapter~\\cite{Promel2013},\n% whereas \\cite{KS81} define $H_\\alpha(n)$ as ``the least integer $k$ such that $[n,k]$ is $\\alpha$-large''. Thus, it may be useful to adapt the statement above.\n\n\n\n% \\end{exercise}\n\n\n\n\n%  \\subsection{Gnawing ordinals}\n\n% \\begin{definition}[After~\\cite{KS81}]\n%   Let $S$ be a finite set of positive integers, and $\\alpha$ be an ordinal strictly less than $\\epsilon_0$.  Let us denote by $s=s_1,s_2, \\dots, s_N$ the sequence of \n% the elements of $S$, enumerated in strictly increasing order.\n\n% We consider the sequence of ordinals $\\alpha_o=\\alpha, \\alpha_1=\\canonseq{\\alpha_0}{s_1},\\dots,\\alpha_{i+1}=\\canonseq{\\alpha_i}{s_{i+1}},\\dots, \n% \\alpha_{N}=\\canonseq{\\alpha_{N-1}}{s_N}$.\n% We denote by $\\gnaw{s}{\\alpha}$ the last ordinal of the sequence, \\emph{i.e.}  $\\alpha_N$.\n% \\end{definition}\n\n\n% The following function computes $\\gnaw{s}{\\alpha}$ by recursion on $s$.\n% \\vspace{4pt}\n\n\n\n% For instance, let us consider the ordinal $\\omega^\\omega$, and try some sequence of integers. \n\n\n% \\begin{Coqsrc}\n% Compute pp (gnaw (omega ^ omega) (1::nil)).  \n% \\end{Coqsrc}\n\n\n% \\begin{Coqanswer}\n%   = omega%pT1 : ppT1\n% \\end{Coqanswer}\n\n\n% \\begin{Coqsrc}\n% Compute pp (gnaw (omega ^ omega) (1::2::nil)).  \n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n% = P_fin 2\n%      : ppT1\n% \\end{Coqanswer}\n\n% Likewise, we can verify that $\\gnaw{\\omega^\\omega}{\\langle 1,2,3 \\rangle}=1$\n% and $\\gnaw{\\omega^\\omega}{\\langle 1,2,3,4 \\rangle}=0$.\n\n% \\begin{Coqsrc}\n% Example omega_omega_1_4 : gnaw (omega ^omega) (interval 1 4) = 0.\n% Proof. trivial. Qed.\n\n% Example omega_omega_1_3 : gnaw (omega ^omega) (interval 1 3) = 1.\n% Proof. trivial. Qed.\n% \\end{Coqsrc}\n\n% \\begin{remark}\n% \\label{remark:gnaw-vs-battles}\n% Let us consider an hydra battle, where Hercules always chops off the rightmost head of the hydra. Let $\\alpha<\\epsilon_0$ be an ordinal number, and \n% $s=\\langle i_1<i_2<\\dots<i_N\\rangle$ be  any finite sequence of positive integers.\n% Then the battle initiated by the hydra $\\iota(\\alpha)$, with $s$ as the sequence of successive replication factors, leads to the hydra $\\iota(\\gnaw{s}{\\alpha})$ as the final state.\n% \\end{remark}\n\n\n% \\subsection{Large sequences}\n\n% \\begin{remark}\n% In their article~\\cite{KS81}, Ketonen and Solovay use the appellation ``large set'' instead of ``large sequence'', but their definitions use an enumeration of the elements in increasing order. Thus, we shall use the term ``sequence'' when referring to our implementation in \\coq{}, and ``set'' when referring to the statements of~\\cite{KS81}.\n% \\end{remark}\n\n\n% \\begin{definition}[After\\cite{KS81}]\n% The sequence $s$ is said to be \\emph{$\\alpha$-large} if $\\gnaw{\\alpha}{s}=0$.  \n% \\end{definition}\n\n\n% \\begin{Coqsrc}\n% Definition largeb (alpha : T1) (s: list nat) :=\n%   match gnaw alpha s with\n%     | zero => true\n%     | _ => false\n%   end.\n\n\n% Definition large (alpha : T1) (s : list nat) : Prop :=\n%   largeb alpha s.\n% \\end{Coqsrc}\n\n\n\n\n% For instance, the sequence $\\langle 1,2,3,4 \\rangle$ is $\\omega^\\omega$-large but not\n% $\\omega^{\\omega+1}$-large, since $\\gnaw{\\omega^{\\omega+1}}{\\langle 1,2,3,4\\rangle}$ is equal to $\\omega \\times 2 + 4$.\n\n\n% %\\omega^\\omega+\\omega^2\\times 3 + \\omega\\times 4+ 5$.\n\n\n% \\begin{Coqsrc}\n% Compute largeb (omega^omega) (interval 1 4).  
\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n% = true : bool\n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute largeb (omega^(omega+1)) (interval 1 4).\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n% = false : bool\n% \\end{Coqanswer}\n\n\n% \\begin{remark}\n% A sequence $s$ is $\\alpha$-large if (still considering Hercules ``righmost-head'', tactic), it leads to Hercules' victory.\n% \\end{remark}\n\n\n% \\subsection{A little game}  \n\n% Let $\\alpha<\\epsilon_0$ be an ordinal, and $i$ a positive integer. We want to guess the least natural number  $j$  such that the interval $[i,i+j]$ is $\\alpha$-large.\n% Equivalently, we should have $\\gnaw{\\alpha}{[i, i+j-1]}=1$.\n\n% The following functions takes three arguments: an ordinal $\\alpha$, and two positive natural numbers  $i$ and $j$ (we assume, but not verify that $i$ and $j$ are strictly positive). It returns one of the three possible answers:\n\n% \\begin{itemize}\n% \\item \\texttt{Ok} if $j$ is the smallest integer such that the interval $[i,i+j]$ is $\\alpha$-large\n% \\item \\texttt{Too\\_far} if $[i,i+j]$ is  $\\alpha$-large, but $j$ is not the smallest such positive integer\n% \\item \\texttt{(Remaining $\\beta$)} if $j$ is too small, and gnawing $\\alpha$ with\n%   $[i,i+j]$ is still equal to $\\beta$, instead of $0$\n% \\end{itemize}\n\n\n% \\vspace{4pt}\n% \\noindent\n% \\emph{From Module~\\href{../theories/html/hydras.Epsilon0.Large_Sets_Demo.html}{Ordinals.Epsilon0.Large\\_Sets\\_Demo}}   \n\n% \\begin{Coqsrc}\n% Inductive answer : Set := You_won | Too_far | Remaining (rest : ppT1).\n\n% Definition large_set_check alpha i j :=\n%   let beta := gnaw alpha (interval i (Nat.pred j))\n%   in match beta with\n%      | one => Ok\n%      | zero => Too_far\n%      |  _ => Remaining (pp (canonseq j beta))\n%      end.\n% \\end{Coqsrc}\n\n% \\subsubsection{\\texorpdfstring{$\\omega$}{omega}-large intervals}\n\n\n% For instance, let us consider the ordinal $\\omega$ and start with $i=1$.\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 1 2.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%      = Ok\n%      : answer\n% \\end{Coqanswer}\n\n% Let us give greater values of $i$, still with the ordinal $\\omega$.\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 2 3.\n% \\end{Coqsrc}\n\n\n% \\begin{Coqanswer}\n%   = Remaining 1\n%      : answer\n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 3 3.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Remaining 3\n%      : answer\n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 2 4.\n% \\end{Coqsrc}\n\n\n% \\begin{Coqanswer}\n%   = Ok\n%      : answer\n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 3 6.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Ok\n%      : answer\n% \\end{Coqanswer}\n\n% It looks like every request to compute (\\texttt{large\\_set\\_check omega $i$ $2\\times i$})  will succeed. Let us try an example. 
\n\n% \\begin{Coqsrc}\n% Compute large_set_check omega 49 98.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Ok\n%      : answer\n% \\end{Coqanswer}\n\n% \\subsubsection{\\texorpdfstring{$\\omega^2$}{omega\\^{}2}-large intervals}\n\n% Still using \\texttt{Compute} and \\texttt{large\\_set\\_check}, we obtained the \n% following values of $j$, the least integer such that the interval $[i,j]$ is $\\omega^2$ large.\n\n% $$\n% \\begin{array}{|c|c|}\n% \\hline\n%   i & j \\\\\n% \\hline \n% 1 & 4 \\\\\n% 2 & 14 \\\\\n% 3 & 38 \\\\\n% 4 & 94 \\\\\n% 5 & 222 \\\\\n% % 6 & 510 \n% \\hline\n% \\end{array}\n% $$\n\n% \\begin{exercise}\n% Please give the 6-th and 7-th line of the array above.\n% \\end{exercise}\n\n\n% \\subsubsection{The limits of (pure) computation}\n\n% Let us now play with bigger ordinals, for instance $\\alpha=\\omega^\\omega$ or\n% $\\alpha=\\omega^{\\omega + 1}$. We notice that, even for small values of $i$, it is hard\n% to guess values of $j$ such that $[i,i+j]$ is $\\alpha$-large.\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ omega) 1 4.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%  = Ok\n%      : answer\n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ omega) 2 38.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%  = Ok\n%      : answer\n% \\end{Coqanswer}\n\n\n\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ omega) 3 1000.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Remaining (omega ^ 2 * 2 + omega * 220 + 798)%pT1\n%      : answer\n% \\end{Coqanswer}\n\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ omega) 3 1798.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Remaining (omega ^ 2 * 2 + omega * 220)%pT1\n%      : answer\n% \\end{Coqanswer}\n\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ omega) 3 5000.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%   = Remaining (omega ^ 2 * 2 + omega * 218 + 2198)%pT1\n%      : answer\n% \\end{Coqanswer}\n\n\n\n\n\n\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ (omega + 1)) 3 5000.\n% \\end{Coqsrc}\n\n% \\begin{Coqanswer}\n%  = Remaining\n%          (omega ^ omega * 2 + omega ^ 3 * 4 + \n%           omega ^ 2 * 4 + omega * 1148 +  4222)%pT1\n%      : answer  \n% \\end{Coqanswer}\n\n% \\begin{Coqsrc}\n% Compute large_set_check (omega ^ (omega + 1)) 3 10000.\n% \\end{Coqsrc}\n\n\n% \\begin{Coqanswer}\n% Warning: To avoid stack overflow, large numbers in nat are interpreted as\n% applications of Init.Nat.of_uint. 
[abstract-large-number,numbers]

%     = Remaining
%          (omega ^ omega * 2 + omega ^ 3 * 4 + 
%           omega ^ 2 * 4 + omega * 1147 +
%           8446)%pT1
%      : answer
% \end{Coqanswer}

% Since computation is not enough, let us comme back to proofs, or, better, proofs \emph{and} computations.

% \subsection{Proving largeness}



% %%% ICI %%%

% \subsection{$n$-large and $\omega$-large intervals }

% A finite ordinal is just made out of \texttt{zero} and \texttt{succ} constructions.
% Thus any sequence of strictly positive integers of length greater than  or equal to $n$ will completely gnaw the ordinal $n$.

% Concerning the first limit ordinal $\omega$ the largeness of a sequence $s$ depends only on its first element and the length of the rest of the sequence (please keep in mind that the argument $i$ of $\canonseq{\alpha}{i}$ is meaningful only if $\alpha$ is a limit ordinal).
% The following proposition is labeled $4.2$ in~\cite{KS81}.

% \begin{proposition}
%   For $n<\omega$, a set $X$id $n$-large if and only if $|X|\geq n$.
% A finite set $X$ is $\omega$-large if $|X|>\min{X}$.
% \end{proposition}

% Rewritten in terms of list of strictly positive integers, we get the following statements:

% \begin{Coqsrc}
% Lemma large_n_iff : forall s (n:nat) , ~ In 0 s  ->
%                                   large n s  <-> (n <= List.length s)%nat.
% \end{Coqsrc}


% \begin{Coqsrc}
% Lemma large_omega_iff : forall s n,  ~In 0 (n::s) -> 
%                                             large omega (n::s) <->
%                                             (n <=  List.length s)%nat.
% \end{Coqsrc}



% % \begin{Coqsrc}
% % (* sorted list of natural numbers greater than or equal to n *)

% % Inductive sorted_ge (n: nat) : list nat -> Prop :=
% % | sorted_ge_nil : sorted_ge n nil
% % | sorted_ge_one : forall p, n<=p ->
% %                             sorted_ge n (p::nil)
% % | sorted_ge_cons: forall p q s,  n<=p -> p<q ->
% %                                  sorted_ge p (q::s) ->
% %                                  sorted_ge n (p::q::s).
% % \end{Coqsrc}



% \subsection{Mimimal large sequences}



% Let us consider \emph{minimal} large sequences, \emph{i.e.} large sequences 
% the strict prefix of which do not lead to $0$. In other words, only the last ordinal 
% defined by the sequence is null.  For this purpose, we use the predicate \texttt{path\_to} introduced page~\pageref{path-to-definition}.

% \begin{Coqsrc}
% Definition mlarge alpha s := path_to zero s alpha.
% \end{Coqsrc}



%------------------------------------------------------
\section{A certified catalogue of rapidly growing functions}

In this section, we summarize the properties of the main variants of fast-growing hierarchies of functions, either found in the literature or defined as helpers in our proofs.

\subsection{Ketonen-Solovay's \texttt{F\_alpha}}

This hierarchy is presented on p.~280 of~\cite{KS81} as ``a trivial variant of one introduced by Wainer''.
Let us recall the equations shown in Sect.~\vref{F_equations}.

\begin{itemize}
\item $F_0(i)=i+1$;
\item $F_{\beta+1}(i)= (F_\beta)^{(i+1)}(i)$, where $f^{(i)}$ is the $i$-th iterate of $f$;
\item $F_\alpha(i) = F_{\canonseq{\alpha}{i}} (i)$ if $\alpha$ is a limit ordinal.
\end{itemize}

Note that \cite{KS81} also defines $F_{\epsilon_0}$ (by the third equation).
Since $\epsilon_0$ is not representable in the type \texttt{E0}, our translation into \coq{} does not take $F_{\epsilon_0}$ into account.

Several properties of $F_\alpha$ have already been presented in Sect.~\ref{F_equations}.

%----------------------------------------------------------------------
\chapter[Countable ordinals (after Sch\"{u}tte)]{Kurt Schütte's axiomatic definition of countable ordinals}

\label{chap:schutte}

In the present chapter, we compare our implementation of the segment $[0,\epsilon_0)$ with a mathematical text, in order to ``validate'' our constructions.
Our reference here is the axiomatic definition of the set of countable ordinals given in chapter V of Kurt Schütte's book ``Proof Theory''~\cite{schutte}.

\begin{remark}
\emph{In all this chapter, the word ``ordinal'' is to be understood as a synonym of
``countable ordinal''.}
\end{remark}

Schütte's definition of the countable ordinals relies on the following three axioms:
there exists a strictly ordered set $(\mathbb{O},<)$ such that
\begin{enumerate}
\item $(\mathbb{O},<)$ is well-ordered;
\item every bounded subset of $\mathbb{O}$ is countable;
\item every countable subset of $\mathbb{O}$ is bounded.
\end{enumerate}

Starting from these three axioms, Schütte re-defines the vocabulary of ordinal numbers: the null ordinal $0$, limits and successors, the addition of ordinals, the infinite ordinals $\omega$, $\epsilon_0$, $\Gamma_0$, etc.

This chapter describes an adaptation of Schütte's axiomatization to \coq{}.
Unlike the rest of our libraries, the library
\href{../theories/html/hydras.Schutte.Schutte.html}{hydras.Schutte}
is not constructive, and relies on several axioms.

\begin{itemize}
\item First, please keep in mind that the set of countable ordinals is not countable. Thus, we cannot hope to represent all countable ordinals as finite terms of an inductive type, as was possible for the set of ordinals strictly less than $\epsilon_0$ (resp. $\Gamma_0$).
\item We tried to stay as close as possible to K. Schütte's text, which uses ``classical'' mathematics: excluded middle, Hilbert's $\epsilon$ (choice) operator, and Russell's $\iota$ (definite description) operator. Both operators allow us to write definitions close to natural mathematical language, such as ``$\textrm{succ}\,\alpha$ is \emph{the} least ordinal strictly greater than $\alpha$''.
\item Please note that only the library \href{../theories/html/hydras.Schutte.Schutte.html}{Schutte/*.v} is ``contaminated'' by axioms; the rest of our libraries remain constructive.
\end{itemize}

\section{Declarations and axioms}

Let us declare a type
\texttt{Ord} for representing countable ordinals, and a binary relation
\texttt{lt}. Note that, in our development, \texttt{Ord} is a type, while the \emph{set} of countable ordinals (called $\mathbb{O}$ by Schütte)
is the full set over the type \texttt{Ord}.

\label{types:Ord}

We use Florian Hatat's library on countable sets, written when he was a student at the \emph{École Normale Supérieure de Lyon}. A set $A$ is countable if there is an injective function from $A$ to $\mathbb{N}$ (see
Library \href{../theories/html/hydras.Schutte.Countable.html}%
{\texttt{Schutte.Countable}}).
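To fix ideas, countability could be stated as follows on \coq{} \texttt{Ensemble}s (a sketch only; the actual definition in \texttt{Schutte.Countable} may differ in form):

\begin{Coqsrc}
Require Import Coq.Sets.Ensembles.

(* A sketch: a subset X of U is countable when some function
   U -> nat is injective on X. *)
Definition countable_sketch (U : Type) (X : Ensemble U) : Prop :=
  exists f : U -> nat,
    forall x y : U, In U X x -> In U X y -> f x = f y -> x = y.
\end{Coqsrc}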
\vspace{6pt}

\emph{From Module~\href{../theories/html/hydras.Schutte.Schutte_basics.html}%
{\texttt{Schutte.Schutte\_basics}}}

\index{hydras}{Library Schutte!Types!Ord}

\input{movies/snippets/Schutte_basics/OrdDecl}

Schütte's first axiom states that \texttt{lt} is a well order on the set
\texttt{ordinal} (the class \texttt{WO} is defined in
Module~\href{../theories/html/hydras.Schutte.Well_Orders.html}{Schutte.Well\_Orders.v}).

\index{hydras}{Library Schutte!Type classes!WO@ WO (well order)}

\label{types:WO}

\input{movies/snippets/Well_Orders/Mdecl}

\input{movies/snippets/Well_Orders/WODef}

\input{movies/snippets/Schutte_basics/AX1}

The second and third axioms say that a subset $X$ of $\mathbb{O}$ is
(strictly) bounded if and only if it is countable.

\input{movies/snippets/Schutte_basics/AX23}

\texttt{AX2} and \texttt{AX3} could have been merged into a single axiom (using the \texttt{iff} connector), but we decided to respect as much as possible the structure of Schütte's definitions.

\section{Additional axioms}

The adaptation of Schütte's mathematical discourse to \coq{} led us to
import a few axioms from the standard library. We encourage the reader to consult \coq{}'s FAQ about the safe use of axioms:
\url{https://github.com/coq/coq/wiki/The-Logic-of-Coq#axioms}.

\subsubsection{Classical logic}

In order to work with classical logic, we import the module
\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Logic.Classical.html}{Coq.Logic.Classical} of \coq{}'s standard library, specifically the following axiom:

\begin{Coqsrc}
 Axiom classic : forall P:Prop, P \/ ~P.
\end{Coqsrc}

\subsubsection{Description operators}

In order to respect Schütte's style, we also imported the library
\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Logic.Epsilon.html}{\texttt{Coq.Logic.Epsilon}}. The rest of this section presents a few examples of
how Hilbert's choice operator and Church's definite description operator allow us
to write understandable definitions, close to natural mathematical language.

\subsubsection{The definition of zero}

According to the definition of a well order, every non-empty subset of \texttt{Ord} has a least element. Furthermore, this least element is unique. We would like to call this element \texttt{zero}.

\input{movies/snippets/Schutte_basics/iotaDemo}

Indeed, the basic logic of \coq{} does not allow us to eliminate a proof of a proposition
$\exists!\,x:A,\,P(x)$ in order to build a term whose type lies in the sort \texttt{Type}.
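The following (deliberately failing) attempt makes this restriction tangible:

\begin{Coqsrc}
(* Rejected: a proof of an existential statement (in Prop) cannot be
   eliminated in order to build an inhabitant of a Type. *)
Fail Definition witness (A : Type) (H : exists x : A, True) : A :=
  match H with ex_intro _ x _ => x end.
\end{Coqsrc}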
The reasons for this impossibility are explained in many documents~\cite{BC04, chlipalacpdt2011, Coq}.

Let us import the library \texttt{Coq.Logic.Epsilon}, which contains the following axiom and lemmas.

\begin{Coqsrc}
Axiom epsilon_statement:
  forall (A : Type) (P : A->Prop), inhabited A ->
    {x : A | (exists x, P x) -> P x}.
\end{Coqsrc}

Hilbert's $\epsilon$ \emph{operator} is derived from this axiom.

\begin{Coqsrc}
Definition epsilon (A : Type) (i:inhabited A) (P : A->Prop) : A
  := proj1_sig (epsilon_statement P i).

Lemma constructive_indefinite_description :
  forall (A : Type) (P : A->Prop),
    (exists x, P x) -> { x : A | P x }.
\end{Coqsrc}

If we consider the \emph{unique existential} quantifier $\exists!$, we obtain
Church's \emph{definite description operator}.

\begin{Coqsrc}
Definition iota (A : Type) (i:inhabited A) (P : A->Prop) : A
  := proj1_sig (iota_statement P i).
\end{Coqsrc}

\begin{Coqsrc}
Lemma constructive_definite_description :
  forall (A : Type) (P : A->Prop),
    (exists! x, P x) -> { x : A | P x }.
\end{Coqsrc}

\begin{Coqsrc}
Definition iota_spec (A : Type) (i:inhabited A) (P : A->Prop) :
  (exists! x:A, P x) -> P (iota i P)
  := proj2_sig (iota_statement P i).
\end{Coqsrc}
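As a toy example on \texttt{nat} (outside the library), \texttt{iota} lets us \emph{name} ``the unique $n$ such that $n+2=5$'', and its specification lets us reason about that name:

\begin{Coqsrc}
Require Import Coq.Logic.Epsilon Lia.

(* "the unique n such that n + 2 = 5"; nat is inhabited by 0. *)
Definition three : nat := iota (inhabits 0) (fun n => n + 2 = 5).

Lemma uniq : exists! n : nat, n + 2 = 5.
Proof. exists 3; split; [reflexivity | intros y Hy; lia]. Qed.

Lemma three_plus_two : three + 2 = 5.
Proof. exact (iota_spec (inhabits 0) (fun n => n + 2 = 5) uniq). Qed.
\end{Coqsrc}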
Indeed, the operators \texttt{epsilon} and \texttt{iota} allowed us to keep our definitions
quite close to Schütte's text. Our libraries \href{../theories/html/hydras.Schutte.MoreEpsilonIota.html}%
{\texttt{Schutte.MoreEpsilonIota}}
and
\href{../theories/html/hydras.Schutte.PartialFun.html}%
{\texttt{Schutte.PartialFun}} are extensions of \texttt{Coq.Logic.Epsilon} which make such definitions easier. See also an article in French~\cite{PCiota}.

\input{movies/snippets/MoreEpsilonIota/Defs}

In order to use these tools, we have to tell \coq{} that the declared type \texttt{Ord} is not empty:

\input{movies/snippets/Schutte_basics/inhOrd}

We are now able to define \texttt{zero} as the least ordinal. For this purpose,
we define a function returning the least element of any (non-empty) subset.

\emph{From Module~\href{../theories/html/hydras.Schutte.Well_Orders.html}%
  {\texttt{Schutte.Well\_Orders}}}

\input{movies/snippets/Well_Orders/leastMemberDef}
\input{movies/snippets/Well_Orders/theLeast}

\vspace{4pt}

From Module~\href{../theories/html/hydras.Schutte.Schutte_basics.html}%
{\texttt{Schutte.Schutte\_basics}}

\label{Constants:zero:Ord}
\index{hydras}{Library Schutte!Constants!zero}

\input{movies/snippets/Schutte_basics/zeroDef}

We now want to prove that \texttt{zero} is less than or equal to any ordinal number.

\input{movies/snippets/Schutte_basics/zeroLe}

\subsubsection{Remarks on \texttt{epsilon} and \texttt{iota}}

What would happen in case of a misuse of \texttt{epsilon} or \texttt{iota}?
For instance, one could give an unsatisfiable specification to \texttt{epsilon}, or
a specification for \texttt{iota} that admits several realizations.

Let us consider an example:

\input{movies/snippets/Schutte_basics/BadBottoma}

Since we won't be able to prove the proposition
\linebreak \Verb|exists! a: Ord, least_member (Empty_set Ord) a|, the only properties we would be able to prove about \texttt{bottom} are \emph{trivial} ones,
\emph{i.e.}, properties satisfied by \emph{any} element of type \texttt{Ord}, such as
\texttt{bottom = bottom} or \texttt{zero <= bottom}.

\input{movies/snippets/Schutte_basics/trivialProps}

On the other hand, the following attempt fails, because of its unprovable first subgoal (please notice that the second subgoal is easy to solve!).

\input{movies/snippets/Schutte_basics/Failure}

In short, using \texttt{epsilon} and \texttt{iota} in our implementation of countable ordinals after Schütte has two main advantages.

\begin{itemize}
\item It allows us to give a \emph{name} (using \texttt{Definition}) to witnesses
of existential quantifiers (let us recall that, in classical logic, one may consider non-constructive proofs of existential statements).
\item By separating definitions from proofs of [unique] existence, one may make definitions more concise and readable. Look for instance at
the definitions of \texttt{zero}, \texttt{succ}, \texttt{plus}, etc., in the rest of this chapter.
\end{itemize}

\section{The successor function}

The definition of the function \texttt{succ: Ord -> Ord} is very concise: the successor of any ordinal $\alpha$ is the smallest ordinal strictly greater than $\alpha$.

\label{Functions:succ-sch}
\index{hydras}{Library Schutte!Functions!succ}

\input{movies/snippets/Schutte_basics/succDef}

Using \texttt{succ}, we define the following predicates.

\input{movies/snippets/Schutte_basics/isSuccIsLimit}

% \begin{remark}
% Please look at remark~\vref{warning:coercions}.

How do we prove properties of the successor function?
First, we make its specification explicit.

\input{movies/snippets/Schutte_basics/succSpec}

Then, we prove that our function \texttt{succ} meets this specification.

\input{movies/snippets/Schutte_basics/succOka}

We now have to prove that the set of all ordinals strictly greater than $\alpha$ has a unique least element. The singleton set $\{\alpha\}$ is countable, hence bounded (by axiom \texttt{AX3}). Hence, the set $\{\beta\in\mathbb{O}\mid\alpha < \beta\}$ is not empty,
and therefore has a unique least element.

The rest of the \coq{} proof script is quite short.

\input{movies/snippets/Schutte_basics/succOkb}

We can ``uncap'' the description operator for proving properties of the
\texttt{succ} function.

\input{movies/snippets/Schutte_basics/succProps}

\section{Finite ordinals}

Using \texttt{succ}, it is now easy to define recursively all the finite ordinals.

\label{sect:notation-F-sch}

\input{movies/snippets/Schutte_basics/finiteDef}
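The definition presumably boils down to iterating \texttt{succ} from \texttt{zero}; the following self-contained sketch shows the pattern (with \texttt{Ord}, \texttt{zero} and \texttt{succ} abstracted as section variables):

\begin{Coqsrc}
Section FiniteSketch.
  Variable Ord  : Type.
  Variable zero : Ord.
  Variable succ : Ord -> Ord.

  (* Embed each natural number n as the n-th iterate of succ on zero. *)
  Fixpoint finite (n : nat) : Ord :=
    match n with
    | 0 => zero
    | S p => succ (finite p)
    end.
End FiniteSketch.
\end{Coqsrc}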
\section{The definition of \texttt{omega}}
In order to define $\omega$, the first infinite ordinal, we use an operator which
``returns'' the least upper bound (if it exists) of a subset $X\subseteq \mathbb{O}$.
For that purpose, we first use a predicate:
(\texttt{is\_lub $D$ \textit{lt} $X$ $a$}) holds if $a$ belongs to $D$ and is the least
upper bound of $X$ (with respect to \textit{lt}).

\input{movies/snippets/Lub/isLubDef}

\input{movies/snippets/Schutte_basics/supDef}

Then, we define the function \texttt{omega\_limit}, which returns the least upper bound
of the (denumerable) range of any sequence \texttt{s: nat -> Ord}.
By \texttt{AX3}, this range is bounded; hence the set of its upper bounds is not empty and has a least element.
We then define \texttt{omega} as the limit of the sequence of finite ordinals.

\input{movies/snippets/Schutte_basics/omegaDef}

\label{sect:notation-omega}

Among the numerous properties of the ordinal $\omega$, let us quote the following ones
(proved in Module
\href{../theories/html/hydras.Schutte.Schutte_basics.html\#finite_lt_omega}{\texttt{Schutte.Schutte\_basics}}).

\input{movies/snippets/Schutte_basics/omegaProps}

\subsection{Ordering functions and ordinal addition}

Having defined the finite ordinals and the infinite ordinal $\omega$, we now define the sum $\alpha+\beta$ of two countable ordinals.
Schütte's definition reads as follows:

\begin{quote}
``$\alpha+\beta$ is the $\beta$-th ordinal greater than or equal to $\alpha$.''
\end{quote}

The purpose of this section is to give a meaning to the construction
``the $\alpha$-th element of $X$'', where $X$ is any non-empty subset of $\mathbb{O}$.
We follow Schütte's approach, by defining the notion of \emph{ordering functions},
a way to associate a unique ordinal with each element of $X$.
Complete definitions and proofs can be found in Module
\href{../theories/html/hydras.Schutte.Ordering_Functions.html}%
{\texttt{Schutte.Ordering\_Functions}}.

\subsection{Definitions}

A \emph{segment} is a set $A$ of ordinals such that, whenever $\alpha\in A$ and
$\beta<\alpha$, then $\beta\in A$; a segment is \emph{proper} if it is strictly included in $\mathbb{O}$.

\input{movies/snippets/Ordering_Functions/segmentDef}

Let $A$ be a segment, and $B$ a subset of $\mathbb{O}$: an \emph{ordering function for $A$ and $B$} is a strictly increasing bijection from $A$ to $B$.
The segment $A$ is then said to be an \emph{ordering segment} of $B$.
Our definition in \coq{} is a direct translation of the mathematical text of~\cite{schutte}.

\index{maths}{Ordinal numbers!Ordering functions}
\index{hydras}{Library Schutte!Predicates!ordering function@ordering\_function}

\input{movies/snippets/Ordering_Functions/orderingFunctionDef}

We are now able to associate with any subset $B$ of $\mathbb{O}$ its ordering segment and its ordering function.

\input{movies/snippets/Ordering_Functions/ordDef}

Thus, (\texttt{ord $B \;\alpha$}) is the $\alpha$-th element of $B$.
Please note that the last definition uses the epsilon-based operator \texttt{some}, and
not \texttt{the}. This is due to the fact that we cannot prove the unicity (w.r.t. Leibniz' equality) of the ordering function of a given set.
By contrast, we admit the axiom \texttt{Extensionality\_Ensembles} from the library
\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Sets.Ensembles.html}{Coq.Sets.Ensembles}, so we use the operator \texttt{the} in the definition of
\texttt{the\_ordering\_segment}.

One of the main theorems of
\href{../theories/html/hydras.Schutte.Ordering_Functions.html\#ordering_function_ex}%
{\texttt{Ordering\_Functions}}
associates a unique segment and a unique (up to extensionality) ordering function with every subset $B$ of $\mathbb{O}$.

\input{movies/snippets/Ordering_Functions/orderingFunctionEx}

Thus, our function \texttt{ord}, which enumerates the elements of $B$, is defined in a non-ambiguous way.
Let us quote the following theorems (see Library
\href{../theories/html/hydras.Schutte.Ordering_Functions.html}%
{\texttt{Schutte.Ordering\_Functions}} for more details).
 
\input{movies/snippets/Ordering_Functions/orderingLe}
\input{movies/snippets/Ordering_Functions/Th1352}

\subsection{Ordinal addition}

We are now ready to define and study addition on the type \texttt{Ord}.
The following definitions and proofs can be consulted in Module
\href{../theories/html/hydras.Schutte.Addition.html}%
{\texttt{Schutte.Addition.v}}.

\index{hydras}{Library Schutte!Functions!plus}

\input{movies/snippets/Addition/additionDef}

In other words, $\alpha + \beta$ is the $\beta$-th ordinal greater than or equal to $\alpha$.
Thanks to generic properties of ordering functions, we can show the following
properties of addition on $\mathbb{O}$. First, we prove a useful lemma:

\begin{Coqsrc}
Lemma plus_elim (alpha : Ord) :
  forall P : (Ord->Ord)->Prop,
    (forall f: Ord->Ord, 
        ordering_function f ordinal (ge alpha)-> P f) ->
    P (plus alpha).
\end{Coqsrc}

As a use case, let us prove that $0$ is a right neutral element for $+$.

\input{movies/snippets/Addition/alphaPlusZero}

The following lemmas are proved the same way.

\input{movies/snippets/Addition/bunchOfLemmas}

The following lemmas are not direct applications of \texttt{plus\_elim}.

\input{movies/snippets/Addition/plusAssoc}
\input{movies/snippets/Addition/finitePlusInfinite}

It is interesting to compare the proofs of these lemmas with the
computational proofs of the corresponding statements in Module
\href{../theories/html/hydras.Epsilon0.T1.html}%
{\texttt{Epsilon0.T1}}.
For instance, the proof of the lemma
\texttt{one\_plus\_omega} uses the continuity of ordering functions (applied to \texttt{(plus 1)}) and compares the limits of the $\omega$-sequences $(i)_{i \in \mathbb{N}}$ and
$(1+i)_{i \in \mathbb{N}}$, whereas in the library \texttt{Epsilon0/T1}, the equality
$1+\omega=\omega$ is simply proved by \texttt{reflexivity}!
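Spelled out as a worked equality, the continuity argument reads:
$$1+\omega \;=\; \sup_{i<\omega}\,(1+i) \;=\; \sup_{i<\omega}\, i \;=\; \omega,$$
since the $\omega$-sequences $(i)_{i\in\mathbb{N}}$ and $(1+i)_{i\in\mathbb{N}}$ have the same least upper bound.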
\subsubsection{Multiplication by a natural number}

The multiplication of an ordinal by a natural number is defined in terms of addition.
This operation is useful for the study of Cantor normal forms.

\input{movies/snippets/Addition/multFin}

\section{The exponential of base \texorpdfstring{$\omega$}{omega}}

In this section, we define the function which maps any $\alpha\in\mathbb{O}$ to
the ordinal $\omega^\alpha$, also written
$\phi_0(\alpha)$.
It is an opportunity to apply the definitions and results of the preceding section.
Indeed, Schütte first defines a subset of $\mathbb{O}$, the set of additive principal ordinals, and $\phi_0$ is then defined as the ordering function of this set.

\subsection{Additive principal ordinals}

\index{maths}{Ordinal numbers!Additive principal ordinals}
\index{hydras}{Library Schutte!Predicates!AP@AP (additive principal ordinals)}

\begin{definition}
A non-zero ordinal $\alpha$ is said to be \emph{additive principal} if, for all $\beta<\alpha$, $\beta+\alpha$ is equal to $\alpha$.
We call \texttt{AP} the set of additive principal ordinals.
\end{definition}

\noindent\emph{From Module \href{../theories/html/hydras.Schutte.AP.html}%
{\texttt{Schutte.AP}}}

\input{movies/snippets/AP/APDef}

\subsection{The function \texttt{phi0}}

Let us call $\phi_0$ the ordering function of \texttt{AP}.
In the mathematical text, we shall use the notations $\omega^\alpha$ and $\phi_0(\alpha)$ interchangeably.

\index{hydras}{Library Schutte!Functions!phi0}

\input{movies/snippets/AP/phi0Def}

\subsection{Omega-towers and the ordinal \texorpdfstring{$\epsilon_0$}{epsilon0}}

Using $\phi_0$, we can define recursively the set of finite omega-towers.

\input{movies/snippets/AP/omegaTower}

\label{sect:epsilon0-as-limit}
Then, the ordinal $\epsilon_0$ is defined as the limit of the sequence of all finite towers (a kind of infinite tower).

\input{movies/snippets/AP/epsilon0Def}

The rest of our library \texttt{AP} is devoted to proving properties of additive principal ordinals, hence of the ordering function $\phi_0$ and of the ordinal $\epsilon_0$ (which we could not express within the type \texttt{T1}).

\subsection{Properties of the set \texttt{AP}}

The set of additive principal ordinals is not empty: it contains at least the ordinals $1$ and $\omega$.

\inputsnippets{AP/APOne,AP/APOne_least_AP,AP/APOne_AP_omega,AP/APOne_omega_second_AP}
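For instance, working directly from the definition: $1\in\textit{AP}$, since the only $\beta<1$ is $0$ and $0+1=1$; the ordinal $2$ is not additive principal, since $1+2=3\neq 2$; and $\omega\in\textit{AP}$, since $n+\omega=\omega$ for every finite $n$.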
Since we are now considering the set of all countable ordinals, we can prove some properties of this ordinal.\n\n\nWe prove the inequality  $\\alpha<\\omega^\\alpha$ whenever $\\alpha < \\epsilon_0$.\n\\emph{Note that this condition was implicit in Module\n\\href{../theories/html/hydras.Epsilon0.T1.html\\#lt_phi0}{Epsilon0.T1}.}\n\n\\input{movies/snippets/AP/ltPhi0}\n\n\nThe proof is as follows:\n\\begin{enumerate}\n\\item Since $\\alpha<\\epsilon_0$, consider the least $i$ such that $\\alpha$ is strictly less than the omega-tower of height $i$.\n\\item\n  \\begin{itemize}\n  \\item If $i=0$, then the result is trivial (because $\\alpha=0$).\n \\item  Otherwise let $i=j+1$; \n          $\\alpha$ is greater than or equal to the omega-tower of height $j$.\n         By monotonicity,  $\\phi_0(\\alpha)$ is greater than or equal to \n        the omega-tower of height $j+1$, thus strictly greater than $\\alpha$.\n  \\end{itemize}\n \\end{enumerate}\n\nMoreover,  $\\epsilon_0$ is the least ordinal $\\alpha$ that verifies the equality \n$\\alpha = \\omega^\\alpha$, in other words, the least fixpoint of the function  $\\phi_0$.\n\n\\input{movies/snippets/AP/epsilon0Lfp}\n\n\\section{Critical ordinals}\n\n\\index{maths}{Ordinal numbers!Critical ordinals}\n\\index{hydras}{Library Schutte!Predicates!Cr@Cr (critical ordinals)}\n\nFor any  (countable) ordinal $\\alpha$, the set $\\textit{Cr}(\\alpha)$ is inductively defined \nas follows by Sch\u00fctte (p.81 of~\\cite{schutte}).\n\n\\begin{quote}\n  \\begin{itemize}\n  \\item $\\textit{Cr}(0)$ is the set \\textit{AP} of additive principal ordinals.\n  \\item If $0<\\alpha$, then $\\textit{Cr}(\\alpha)$ is the intersection of all the sets of fixpoints of the ordering functions of the $\\textit{Cr}(\\beta)$, for $\\beta<\\alpha$.\n  \\end{itemize}\n\\end{quote}\n\nThis definition is translated into \\coq{} in \nModule \\href{../theories/html/hydras.Schutte.Critical.html}%\n{\\texttt{Schutte.Critical}}, as the least fixpoint of a functional. \n\n\\input{movies/snippets/Critical/CrDef}\n\nLet us denote by $\\varphi_\\alpha$ the ordering function of the set $\\textit{Cr}(\\alpha)$ and by $A_\\alpha$ its ordering segment.\n\n\\input{movies/snippets/Critical/phiDef}\n\n\\label{sect:phi-schutte}\n\n\n\nFor instance,  we prove that $\\textit{Cr}(0)$ is the set of additive principal ordinals and that $\\epsilon_0$\nbelongs to $\\textit{Cr}(1)$.\n\n\n\\input{movies/snippets/Critical/CrZeroAP}\n\\input{movies/snippets/Critical/epsilon0Cr1}\n\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\n Prove that $\\epsilon_0$ is the least element of $\\textit{Cr}(1)$.\n\\end{exercise}\n\n\n\\subsection{A flavor of infinity}\n\n\n\nThe family of the $\\textit{Cr}(\\alpha)$s is made of infinitely many unbounded (hence infinite) sets.\nLet us quote Lemma 5, p. 
82  of~\\cite{schutte}:\n\\begin{quote}\n  For all $\\alpha$, the set $\\textit{Cr}(\\alpha)$ is closed (for the least upper bound of non-empty countable sets) and unbounded.\n\\end{quote}\n\nWe prove this result by a transfinite induction on $\\alpha$, establishing the conjunction of both properties at once.\n\n\n\n\n\\index{maths}{Transfinite induction}\n\n\\input{movies/snippets/Critical/Lemma5}\n\n\n\n\\section{Cantor normal form}\n\nThe notion of Cantor normal form is defined for all countable ordinals.\nNevertheless, note that, contrary to the implementation based on type \\texttt{T1},\nthe Cantor normal form of an ordinal $\\alpha$ may contain $\\alpha$ as a \nsub-term\\footnote{This would prevent us from trying to represent Cantor normal forms as finite trees (like in Sect.~\\ref{sec:T1-inductive-def}).}.\n\n\\index{maths}{Ordinal numbers!Cantor normal form}\n\\index{hydras}{Library Schutte!Predicates!is\\_cnf\\_of@is\\_cnf\\_of (to be a Cantor normal form of)}\n\nWe represent  Cantor normal forms as lists of ordinals.\nA  list $l$ is a Cantor normal form of a given ordinal $\\alpha$ if it satisfies two conditions:\n\n\n\n\\begin{itemize}\n\\item The list  $l$ is sorted (in decreasing order) w.r.t. the order $\\leq$.\n\\item The sum of all the  $\\omega^{\\beta_i}$, where the $\\beta_i$ are the terms of $l$ (in this order), is equal to $\\alpha$.\n\\end{itemize}\n\n\n\n\\vspace{4pt}\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Schutte.CNF.html\\#cnf_t}%\n{\\texttt{Schutte.CNF}}}\n\n\\input{movies/snippets/CNF/Defs}\n\n\n\n\\index{maths}{Transfinite induction}\n\nBy transfinite induction on $\\alpha$, we prove that every countable ordinal $\\alpha$ \n has at least one Cantor normal form.\n\n\\input{movies/snippets/CNF/cnfExists}\n\nBy structural induction on lists, we prove that this normal form is unique.\n\n\n\\input{movies/snippets/CNF/cnfUnicity}\n\n\\input{movies/snippets/CNF/cnfExUnique}\n\n\n\nFinally, the following two lemmas relate  $\\epsilon_0$ with Cantor normal forms.\n\nIf $\\alpha<\\epsilon_0$, then the Cantor normal form of $\\alpha$ is made of ordinals strictly less than $\\alpha$.\n\n\\input{movies/snippets/CNF/cnfLtEpsilon0}\n\n\n\n\\index{hydras}{Exercises}\n\n\\begin{exercise}\nPlease consider the following statement:\n\n\\begin{Coqsrc}\nLemma cnf_lt_epsilon0_iff : \n forall l alpha, \n   is_cnf_of alpha l ->  \n   (alpha < epsilon0 <->  Forall (fun beta =>  beta < alpha) l).\n\\end{Coqsrc}\n\nIs it true?\n\n\\emph{You may start this exercise with the file\n    \\href{https://github.com/coq-community/hydra-battles/tree/master/exercises/ordinals/schutte_cnf_counter_example.v}{exercises/ordinals/schutte\\_cnf\\_counter\\_example.v}.}\n\\end{exercise}\n\nFinally, the Cantor normal form of $\\epsilon_0$ is just $\\omega^{\\epsilon_0}$.\n\n\\input{movies/snippets/CNF/cnfOfEpsilon0}\n\n\\index{hydras}{Projects}\n\n\\begin{project}\nImplement pages 82 to 85 of~\\cite{schutte} (critical, strongly critical, maximal critical ordinals, Feferman's ordinal $\\Gamma_0$).\n\\end{project}\n\n\\begin{remark}\nThe sub-directory\n    \\href{https://github.com/coq-community/hydra-battles/tree/master/theories/ordinals/Gamma0}{theories/ordinals/Gamma0} contains an (incomplete, still undocumented) implementation of the set of ordinals below $\\Gamma_0$, represented in Veblen normal form. 
\n\\end{remark}\n\n\\section{An embedding of \\texttt{T1} into \\texttt{Ord}}\n\n\nOur library \n\\href{../theories/html/hydras.Schutte.Correctness_E0.html}%\n{\\texttt{Schutte.Correctness\\_E0}} establishes the link between two very different formalizations of ordinal numbers. In other words, it ``validates'' a data structure in terms of\na classical mathematical discourse considered as a model. \nFirst, we define a function from \\texttt{T1} into  \\texttt{Ord} by structural recursion.\n\n\\input{movies/snippets/Correctness_E0/injectDef}\n\n\nThis function enjoys good commutation properties with respect to the main operations which\nallow us to build Cantor normal forms.\n\n\\input{movies/snippets/Correctness_E0/commutationLemmas}\n\\input{movies/snippets/Correctness_E0/injectPlus}\n\\input{movies/snippets/Correctness_E0/injectMultFinR}\n\n% \\begin{Coqsrc}\n% Theorem inject_mono (beta gamma : T1) :\n%   T1.lt  beta gamma -> \n%   T1.nf beta -> T1.nf gamma -> \n%   inject beta < inject gamma.\n\n% Theorem inject_injective (beta gamma : T1) : nf beta -> nf gamma ->\n%   inject beta = inject gamma -> beta = gamma.\n% \\end{Coqsrc}\n\nFinally, we prove that \\texttt{inject} is a bijection from the set of all terms of \\texttt{T1} in normal form to the set \n\\texttt{members epsilon0} of the elements of \\texttt{Ord} strictly less than  $\\epsilon_0$.\n\n\\input{movies/snippets/Correctness_E0/injectLtEpsilon0}\n\\input{movies/snippets/Correctness_E0/embedding}\n\n\n\n\\subsection{Remarks}\nLet us recall that the library \\href{../theories/html/hydras.Schutte.Schutte.html}%\n{\\texttt{Schutte}} depends on five \\emph{axioms} and lies explicitly in the  \nframework of classical logic with a weak version of the axiom of choice\n(please look at the documentation of\n\\href{https://coq.inria.fr/distrib/current/stdlib/Coq.Logic.ChoiceFacts.html}{\\texttt{Coq.Logic.ChoiceFacts}}).\nNevertheless, the other modules:\n\\href{../theories/html/hydras.Epsilon0.Epsilon0.html}%\n{\\texttt{Epsilon0}},\n\\href{../theories/html/hydras.Hydra.Hydra.html}%\n{\\texttt{Hydra}}, and \n\\href{../theories/html/hydras.Gamma0.Gamma0.html}%\n{\\texttt{Gamma0}}\ndo not import any axioms and are entirely constructive.\n\n\\index{hydras}{Projects}\n\\begin{project}\nThere is no construction of ordinal multiplication in~\\cite{schutte}. \nIt would be interesting to derive this operation from Sch\u00fctte's axioms,\nand prove its consistency with multiplication in ordinal notations for \n$\\epsilon_0$ and $\\Gamma_0$.\n\\end{project}\n\n\\section{Related work}\n\nIn~\\cite{grimm:hal-00911710}, Jos\u00e9 Grimm establishes the consistency between our ordinal notations \\texttt{T1} and \\texttt{T2} (Veblen normal form) and his implementation\nof ordinal numbers after Bourbaki's set theory.\n\nThe Gaia project~\\url{https://github.com/coq-community/gaia} maintains Grimm's theory of ordinals as part of coq-community on GitHub. Integration\nof the present ordinal theory with Gaia, i.e., relating the different notions of ordinals\nand transferring relevant results, is an interesting project.\nFirst experiments in that direction are developed in\nthe \\href{https://github.com/coq-community/hydra-battles/blob/master/theories/gaia/}{theories/gaia/} directory.\n\n\n\n\n\\chapter{The Ordinal \\texorpdfstring{$\\Gamma_0$}{Gamma0} (first draft)}\n\n\n\\emph{This chapter and the files it presents are still very incomplete, considering the impressive properties of $\\Gamma_0$~\\cite{Gallier91}.  
We hope to add new material soon, and accept contributions!}\n\n\n\\section{Introduction}\nWe present a notation system for the ordinal $\\Gamma_0$, following Chapter V, Section 14 of~\\cite{schutte}: ``A notation system for the ordinals $<\\Gamma_0$''.\nWe try to be as close as possible to Sch\u00fctte's text and to the usual practices of \\coq{} developments.\n\nThe ordinal $\\Gamma_0$ is defined in Section 13 of~\\cite{schutte} as the least \\emph{strongly critical ordinal}. It is widely known as the \\emph{Feferman-Sch\u00fctte ordinal}.\n\n\nSection V, 13 of~\\cite{schutte} defines \\emph{strongly critical} and\n\\emph{maximal $\\alpha$-critical} ordinals: \n\n\\begin{itemize}\n\\item $\\alpha$ is strongly critical if\n$\\alpha$ is $\\alpha$-critical,\n\\item $\\gamma$ is maximal $\\alpha$-critical if $\\gamma$ is $\\alpha$-critical, and, for all $\\xi>\\alpha$, $\\gamma$ is not $\\xi$-critical.\n\n\\end{itemize}\n\n\n\n\n\n\\vspace{4pt}\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Schutte.Critical.html\\#strongly_critical}%\n{\\texttt{Schutte.Critical}}}\n\n\\input{movies/snippets/Critical/Gamma0Def}\n\n\n\\index{hydras}{Projects}\n\\begin{project}\nProve that a (countable)  ordinal $\\alpha$ is strongly critical iff \n$\\phi_\\alpha(0)=\\alpha$ (Theorem 13.13 of~\\cite{schutte}). \n\\end{project}\n\n\n\\index{hydras}{Projects}\n\\begin{project}\nProve that the set of strongly critical ordinals is unbounded and closed (Theorem 13.14 of~\\cite{schutte}). Thus this set is not empty,  hence has a least element. Otherwise, the definition of $\\Gamma_0$ above would be useless.\n\\end{project}\n\n\n\n\nIn the present version of this development, we  only study $\\Gamma_0$ as a notation system, much more powerful than the ordinal notation for $\\epsilon_0$.\n\n%\\index{Projects}\n%\n%\\begin{project}\n% Sch\u00fctte's section 13 of~\\cite{schutte} contains several definitions and lemmas which give another view on $\\Gamma_0$.  We leave it as a project to implement them in \\coq{}.  \n%\\end{project}\n\n\n\n\n\\section{The type \\texttt{T2} of ordinal terms}\n\nThe notation system for ordinals less than $\\Gamma_0$ comes from the following theorem of~\\cite{schutte}, where $\\psi\\,\\alpha$ is the ordering function \nof the set of maximal $\\alpha$-critical ordinals.\n\n\\index{hydras}{Library Gamma0!Types!T2}\n\n\\begin{quote}\n  Any ordinal $\\not= 0$ which is not strongly critical can be expressed in terms of $+$ and $\\psi$.\n\\end{quote}\n\n\\index{hydras}{Projects}\n\\begin{project}\nThis theorem is not formally proved in this development yet. 
It should be!\n\\end{project}\n\n\nAs in Chapter~\\ref{chap:T1}, we define an inductive type with two constructors, one for $0$, the other for the construction $\\psi(\\alpha,\\beta)\\times(n+1)+\\gamma$, adapting a Manolios-Vroon-like notation~\\cite{Manolios2005} to\n\\emph{Veblen normal forms}.\n\\label{types:T2}\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Gamma0.T2.html\\#T2}%\n{\\texttt{Gamma0.T2}}}\n\n\\input{movies/snippets/T2/T2Def}\n\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=11cm]{epsilon0.jpg}\n  \\caption{Veblen normal form}\n  \\label{fig:gamma0}\n\\end{figure}\n\nAs in Chapter~\\ref{chap:T1}, we become familiar with the type \\texttt{T2} by recognizing simple constructs like finite ordinals, $\\omega$, etc., as inhabitants of \\texttt{T2}.\n\n\\input{movies/snippets/T2/Notations}\n\n\\section{How big is \\texorpdfstring{$\\Gamma_0$}{\\texttt{Gamma0}}?}\n\nLet us define a strict order on type \\texttt{T2}. The following definition is \nan adaptation of Sch\u00fctte's, taking into account the multiplications by a natural number (inspired by~\\cite{Manolios2005}, and also present in \\texttt{T1}).\n\n\\label{sect:t2-lt-def}\n\n\\input{movies/snippets/T2/ltDef}\n\n\nSeven constructors! In order to get accustomed to this definition, let us look at a small set of examples, covering all the constructors of \\texttt{lt}.\n\n\n\\input{movies/snippets/T2/ltExamples}\n\n\n\\index{hydras}{Projects}\n\\begin{project}\nWrite a tactic that automatically solves goals of the form (\\texttt{$\\alpha$ t2< $\\beta$}), where $\\alpha$ and $\\beta$ are closed terms of type \\texttt{T2}.\n\\end{project}\n\n\\section{Veblen normal form}\n\\begin{definition}\n  A term of the form $\\psi(\\alpha_1,\\beta_1)\\times n_1+ \\psi(\\alpha_2,\\beta_2)\\times n_2+\\dots+\\psi(\\alpha_k,\\beta_k)\\times n_k$ is said to be in\n \\emph{[Veblen] normal form} if for every $i<k$, $\\psi(\\alpha_{i+1},\\beta_{i+1})<\\psi(\\alpha_i,\\beta_i)$ (i.e., the terms are in strictly decreasing order), all the $\\alpha_i$ and $\\beta_i$ are in normal form, and all the $n_i$ are strictly positive integers.\n\\end{definition}\n\n\\input{movies/snippets/T2/nfDef}\n\n\n\nLet us look at some positive examples (we have to prove some inversion lemmas before proving counter-examples).\n\n\\input{movies/snippets/T2/nfExamples}\n\n\n\\subsection{Length of a term}\n\nThe notion of \\emph{term length} is introduced by Sch\u00fctte as a helper for proving (at least) the \\emph{trichotomy} property and transitivity of the strict order \\texttt{lt} on \\texttt{T2}. 
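As a rough illustration only, such a length measure could be sketched as follows; the constructor names \\texttt{zero} and \\texttt{gcons}, as well as the name \\texttt{t2\\_length}, are assumptions made for this sketch, and the library's actual definition may differ.\n\n\\begin{Coqsrc}\n(* Hypothetical sketch (not the library's code): count one\n   unit for each psi-node of a term. *)\nFixpoint t2_length (t : T2) : nat :=\n  match t with\n  | zero => 0\n  | gcons alpha beta _ gamma =>\n      1 + t2_length alpha + t2_length beta + t2_length gamma\n  end.\n\\end{Coqsrc}\n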
These properties are proved by induction on length.\n\n\n\\subsection{Trichotomy}\n\n\\emph{Trichotomy} is another name for the well-known property of decidable total ordering (like Standard Library's \\texttt{Compare\\_dec.lt\\_eq\\_lt\\_dec}).\n\nWe first prove by induction on $l$ the following lemma:\n\n\\vspace{4pt}\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Gamma0.Gamma0.html\\#tricho_aux}%\n{\\texttt{Gamma0.Gamma0}}}\n\n\\input{movies/snippets/Gamma0/tricho}\n\n\\input{movies/snippets/Gamma0/compare}\n\n\nWith the help of \\texttt{compare}, we get a boolean version of \\texttt{nf}\n(being in Veblen normal form).\n\n\\input{movies/snippets/Gamma0/nfb}\n\n\n\n\n\n\\section{Main functions on \\texttt{T2}}\n\n\\subsection{Successor}\nThe successor function is defined by structural recursion.\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Gamma0.T2.html\\#succ}%\n{\\texttt{Gamma0.T2}}}\n\n\\input{movies/snippets/T2/succDef}\n\n\n\\subsection{Addition}\n\nAs for Cantor normal forms (see Sect.~\\ref{sect:infix-plus-T1}),  the definition of addition in \\texttt{T2}  requires comparison between ordinal terms.\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Gamma0.Gamma0.html\\#succ}%\n  {\\texttt{Gamma0.Gamma0}}}\n\n\\input{movies/snippets/Gamma0/plusDef}\n\n\n\n\\subsection{The Veblen function \\texorpdfstring{$\\phi$}{\\texttt{phi}}}\n\nThe enumeration function of critical ordinals, presented in Sect.~\\vref{sect:phi-schutte}, is recursively defined in type \\texttt{T2}.\n\n\\input{movies/snippets/Gamma0/phiDef}\n\nDespite its complexity, the function \\texttt{phi} is well adapted to proofs by simplification or computation.\n\n\nThe relation between the constructor $\\psi$ and the function $\\phi$ is\nstudied in~\\cite{schutte}, and partially implemented in this development.\n\\emph{Please contribute!}\n \nFor instance, the following theorem states that, if $\\gamma$ is the sum of a limit ordinal $\\beta$ and a finite ordinal $n$, and $\\beta$ is a fixpoint of\n$\\phi(\\alpha)$, then $\\psi(\\alpha,\\gamma)=\\phi_\\alpha(\\gamma+1)$.\n\n\\input{movies/snippets/Gamma0/phiPsi}\n\\input{movies/snippets/Gamma0/Ex9}\n\n\n\n\nOn the other hand, $\\phi$ can be expressed in terms of $\\psi$.\n\n\\input{movies/snippets/Gamma0/phiOfPsi}\n\\input{movies/snippets/Gamma0/Ex10}\n\n\\index{hydras}{Projects}\n\\begin{project}\nPlease study a way to pretty-print ordinal terms in Veblen normal form (see Section~\\vref{sect:ppT1}).\n\\end{project}\n\n\\section{An ordinal notation for \\texorpdfstring{$\\Gamma_0$}{\\texttt{Gamma0}}}\n\nIn order to consider type \\texttt{T2} as an ordinal notation, we have to build an instance of class \\texttt{ON} (see Definition page~\\pageref{types:ON}).\n\nFirst, we define a type that contains only terms in Veblen normal form, and redefine \\texttt{lt} and \\texttt{compare} by delegation (see for comparison the construction of type \\texttt{E0} in Sect.~\\vref{sect:E0-def}).\n\n\\input{movies/snippets/Gamma0/G0a}\n\\input{movies/snippets/Gamma0/G0b}\n\n\nThen, we build an instance of class \\texttt{ON}, after proving that \\texttt{lt} is a well-founded strict total order and that the\nfunction \\texttt{compare} is correct.\n\n\\input{movies/snippets/Gamma0/ltSto}\n\\input{movies/snippets/Gamma0/ltWf}\n\\input{movies/snippets/Gamma0/ONGamma0}\n\n\n\n\\begin{remark}\nThe proof of \\texttt{lt\\_wf} has been written by \\'Evelyne Contejean, using her library on the recursive path ordering (see also remark~\\vref{remark:a3pat}).\n\\end{remark}\n\n\\index{hydras}{Projects}\n\\begin{project}\nProve that \\texttt{Epsilon0} (page~\\pageref{instance-epsilon0})\nis a sub-notation system of \\texttt{Gamma0}.\n\nProve that the implementations of \\texttt{succ}, \\texttt{+}, $\\phi_0$, etc.\nare compatible in both notation systems.\n\nNote that a function \\texttt{T1\\_to\\_T2} from \\texttt{T1} to \\texttt{T2} has already been defined. It may help to complete the task.\n\n\n\n\\noindent\\emph{From \\href{../theories/html/hydras.Gamma0.T2.html\\#T1_to_T2}%\n{\\texttt{Gamma0.T2}}}\n\n\\input{movies/snippets/T2/T1ToT2}\n\n\\end{project}\n\n\\begin{project}\nProve that the notation system \\texttt{Gamma0} is a correct implementation \nof the segment $[0,\\Gamma_0)$ of the set of countable ordinals.\n\\end{project}\n\n\n\n\n", "meta": {"hexsha": "0a9ea705ee7eee893a68d4fcac88bf2832ed8df1", "size": 263998, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/part-hydras.tex", "max_stars_repo_name": "palmskog/hydra-battles", "max_stars_repo_head_hexsha": "8ad2a5397c748cac56c676e5a0745a5935ceec46", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/part-hydras.tex", "max_issues_repo_name": "palmskog/hydra-battles", "max_issues_repo_head_hexsha": "8ad2a5397c748cac56c676e5a0745a5935ceec46", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/part-hydras.tex", "max_forks_repo_name": "palmskog/hydra-battles", "max_forks_repo_head_hexsha": "8ad2a5397c748cac56c676e5a0745a5935ceec46", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8969951083, "max_line_length": 535, "alphanum_fraction": 0.7187289298, "num_tokens": 81418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125793176222, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5824964447092612}}
{"text": "\\section{Lecture 5 - A \\tsolver for \\Idl}\n\n\\begin{frame}\n  \\frametitle{A \\tsolver for \\Idl}\n\n  \\scriptsize\n\n  The constraint $\\quad x - y \\leq c \\quad$ says that \n  \n  \\begin{center}\n    ``the distance between $x$ and $y$ is at most $c$''\n  \\end{center}\n\n  This can be encoded as $(x,y;c)$\n\n  \\begin{center}\n    \\input{arc}\n  \\end{center}\n\n  \\vfill\n  So, a set $\\babst{\\mu}$ can be encoded as a graph. Concrete example:\n\n  \\begin{columns}\n\n    \\begin{column}{.3\\textwidth}\n      \\ra{1.3}\n      $$\n      \\begin{array}{rcr}\n\tx - y & \\leq & 8   \\\\\n\ty - z & \\leq & -1  \\\\\n\tz - x & \\leq & -6  \\\\\n\tz - w & \\leq & 2   \\\\\n\tw - x & \\leq & -10 \\\\\n\tw - t & \\leq & 0   \\\\\n\tt - x & \\leq & 3     \n      \\end{array}\n      $$\n    \\end{column}\n\n    \\begin{column}{.6\\textwidth}\n      \\begin{center}\n\t\\input{example}\n      \\end{center}\n    \\end{column}\n\n  \\end{columns}\n  \\vfill\n  $G( V, E )$: $V = \\{ x, y, z, w, t \\}$, $E = \\{ (x,y;8), (y,z;-1), (z,x;-6), (z,w;2), (w,x;-10), \\ldots \\}$ \n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{A \\tsolver for \\Idl}\n\n  \\scriptsize\n\n    \\begin{theorem}[Translation]\n      \\label{the:idl}\n      $\\babst{\\mu}$ is \\Idl-unsatisfiable \n      \\begin{center}\n      iff\n      \\end{center}\n      there is a {\\bf negative cycle} in the corresponging graph $G(V,E)$\n    \\end{theorem}\n\n  \\vfill\n  E.g.:\n\n  \\begin{columns}\n\n    \\begin{column}{.3\\textwidth}\n      \\ra{1.3}\n      $$\n      \\begin{array}{rcr}\n\t\\coloneat{x - y}{1} & \\coloneat{\\leq}{1} & \\coloneat{8}{1}   \\\\\n\t\\coloneat{y - z}{1} & \\coloneat{\\leq}{1} & \\coloneat{-1}{1}  \\\\\n\t\\colfouat{z - x}{1} & \\colfouat{\\leq}{1} & \\colfouat{-6}{1}  \\\\\n\t\\coloneat{z - w}{1} & \\coloneat{\\leq}{1} & \\coloneat{2}{1}   \\\\\n\t\\coloneat{w - x}{1} & \\coloneat{\\leq}{1} & \\coloneat{-10}{1} \\\\\n\t\\colfouat{w - t}{1} & \\colfouat{\\leq}{1} & \\colfouat{0}{1}   \\\\\n\t\\colfouat{t - x}{1} & \\colfouat{\\leq}{1} & \\colfouat{3}{1}     \n      \\end{array}\n      $$\n    \\end{column}\n\n    \\begin{column}{.6\\textwidth}\n      \\begin{center}\n\t\\begin{overlayarea}{.5\\textwidth}{3cm}\n\t  %\\only<2|handout:0>{\\input{example}}\n\t  \\input{example_hl}\n\t\\end{overlayarea}\n      \\end{center}\n    \\end{column}\n\n  \\end{columns}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lecture 5 - Exercize 4}\n  \n  \\scriptsize\n\n  Let's first recall the notion of {\\bf minimality}\n  \\begin{center}\n  A conflict $\\nu^\\Boo$ is {\\bf minimal} if it does not contain redundant \\tatoms\n  \\end{center}\n  A \\tatom $P$ in a conflict $\\nu^\\Boo$ is redundant if $\\nu^\\Boo \\setminus \\{ P \\}$\n  is still a conflict\n  \\vfill\n  So, how do we check, in general, that a conflict $\\nu^\\Boo = \\{ P_1, \\ldots, P_n \\}$ is minimal ?\n  Iteratively for $i=1,\\ldots,n$, we see if $\\nu^\\Boo \\setminus \\{ P_i \\}$ is still a conflict.\n  \\vfill\\pause\n  In the case of difference logic every conflict is minimal {\\bf by construction}. 
In fact $\\nu^\\Boo$ \n  is a conflict if and only if it is a {\\bf cycle} with negative sum.\n  \\vfill\n  Doing $\\nu^\\Boo \\setminus \\{ P_i \\}$ is equivalent to breaking the cycle, no matter which $P_i$ we remove.\n  Therefore no \\tatom is redundant, and so conflicts are minimal.\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lecture 5 - Exercise 5}\n\n  \\scriptsize\n\n  Prove\n  \\vfill\n\n  \\begin{exampleblock}{Observation 2}\n    A set of constraints \\\\\n    $\\{\\ \\colone{x_1} - x_2 \\leq c_1,\\ x_2 - x_3 \\leq c_2,\\ \\ldots,\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}\\ \\}$ \\\\\n    implies \n    $\\quad\\quad\\colone{x_1} - \\coltwo{x_n} \\leq c_n\\quad\\quad$ \n    iff \n    $\\quad\\quad c_1 + c_2 + \\ldots + c_{n-1} \\leq c_n$\n  \\end{exampleblock}\n\n  \\vfill\n  using\n  \\vfill\n\n  \\begin{lemma}[Farkas' Lemma for \\Idl]\n    $\\babst{\\mu}$ is unsatisfiable iff there exists a subset \n    $\\babst{\\nu} = \\{\\ \\coltwo{x_1} - x_2 \\leq c_1,\\ x_2 - x_3 \\leq c_2,\\ \\ldots,\\ x_n - \\coltwo{x_1} \\leq c_n\\ \\}$ of $\\babst{\\mu}$\n    such that $c_1 + \\ldots + c_n < 0$\n  \\end{lemma}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lecture 5 - Exercise 5}\n\n  \\scriptsize\n\n  \\begin{exampleblock}{Observation 2}\n    A set of constraints \\\\\n    $\\{\\ \\colone{x_1} - x_2 \\leq c_1,\\ x_2 - x_3 \\leq c_2,\\ \\ldots,\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}\\ \\}$ \\\\\n    implies \n    $\\quad\\quad\\colone{x_1} - \\coltwo{x_n} \\leq c_n\\quad\\quad$ \n    iff \n    $\\quad\\quad c_1 + c_2 + \\ldots + c_{n-1} \\leq c_n$\n  \\end{exampleblock}\n\n  \\vfill\n  is better formalized as\n  \\vfill\n\n  \\begin{center}\n  $(\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}) \\rightarrow (\\colone{x_1} - \\coltwo{x_n} \\leq d_n)$ \\\\\n  is valid iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} \\leq d_n$\n  \\end{center}\n\n  \\begin{lemma}[Farkas' Lemma for \\Idl]\n    $\\babst{\\mu}$ is unsatisfiable iff there exists a subset \n    $\\babst{\\nu} = \\{\\ \\coltwo{x_1} - x_2 \\leq c_1,\\ x_2 - x_3 \\leq c_2,\\ \\ldots,\\ x_n - \\coltwo{x_1} \\leq c_n\\ \\}$ of $\\babst{\\mu}$\n    such that $c_1 + \\ldots + c_n < 0$\n  \\end{lemma}\n\n  \\vfill\n  is better formalized as\n  \\vfill\n\n  \\begin{center}\n  $\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - x_n \\leq c_{n-1}\\ \\wedge\\ x_n - \\colone{x_1} \\leq c_n$ \\\\\n  is unsatisfiable iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} + c_n < 0$\n  \\end{center}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Lecture 5 - Exercise 5}\n\n  \\scriptsize\n\n  \\begin{center}\n  $(\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}) \\rightarrow (\\colone{x_1} - \\coltwo{x_n} \\leq d_n)$ \\\\\n  is valid iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} \\leq d_n$\n  \\end{center}\n\n  \\vfill\n  is equivalent to (using the well-known fact: $\\varphi \\rightarrow \\psi$ is valid iff $\\varphi \\wedge \\neg \\psi$ is unsat)\n  \\vfill\n\n  \\begin{center}\n  $\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}\\ \\wedge\\ \\neg(\\colone{x_1} - \\coltwo{x_n} \\leq d_n)$ \\\\\n  is unsatisfiable iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} \\leq d_n$\n  \\end{center}\n\n  \\vfill\\pause\n  which is equivalent to (using the fact $\\neg (x - y \\leq c)\\ 
\\Longleftrightarrow\\ y - x \\leq - c - 1$)\n  \\vfill\n\n  \\begin{center}\n  $\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - \\coltwo{x_n} \\leq c_{n-1}\\ \\wedge\\ \\coltwo{x_n} - \\colone{x_1} \\leq - d_n - 1$ \\\\\n  is unsatisfiable iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} \\leq d_n$\n  \\end{center}\n\n  \\vfill\\pause\n  which is equivalent to (using the fact $c \\leq d\\ \\Longleftrightarrow\\ c - d \\leq 0\\ \\Longleftrightarrow\\ c - d - 1 < 0$)\n  \\vfill\n\n  \\begin{center}\n  $\\colone{x_1} - x_2 \\leq c_1\\ \\wedge\\ x_2 - x_3 \\leq c_2\\ \\wedge\\ \\ldots\\ \\wedge\\ x_{n-1} - x_n \\leq c_{n-1} \\wedge x_n - \\colone{x_1} \\leq - d_n - 1$ \\\\\n  is unsatisfiable iff \\\\\n  $c_1 + c_2 + \\ldots + c_{n-1} - d_n - 1 < 0$\n  \\end{center}\n\n  \\vfill\n  which is Farkas' Lemma if we set $c_n \\equiv - d_n - 1$\n\n\\end{frame}\n", "meta": {"hexsha": "d68bd386d60f1482fad5f1392861b2c98935d459", "size": 6806, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture10/dl.tex", "max_stars_repo_name": "formalmethods/smtlectures", "max_stars_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-11-07T19:34:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-24T08:05:50.000Z", "max_issues_repo_path": "lecture10/dl.tex", "max_issues_repo_name": "formalmethods/smtlectures", "max_issues_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture10/dl.tex", "max_forks_repo_name": "formalmethods/smtlectures", "max_forks_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-06T00:40:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-06T00:40:41.000Z", "avg_line_length": 28.5966386555, "max_line_length": 176, "alphanum_fraction": 0.5828680576, "num_tokens": 2933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5824964443174377}}
{"text": "\\section{Overview}\\label{sec:boundedrefinementtypes:overview}\n\nWe start with a high level overview of bounded refinement types.\n%\nWe first present a short introduction to refinement type specifications, \nto make this chapter self contained.\n%\nThen, we introduce bounded refinements,\nand show how they permit \\emph{modular} higher-order specifications.\n%\nFinally, we describe how they are implemented via an elaboration\nprocess that permits \\emph{automatic} first-order verification.\n\n\\subsection{Preliminaries}\n\n\\mypara{Refinement Types} let us precisely specify subsets of values,\nby conjoining base types with logical predicates that constrain the values.\nWe get decidability of type checking, by limiting these predicates to\ndecidable, quantifier-free, first-order logics, including the theory\nof linear arithmetic, uninterpreted functions, arrays, bit-vectors\nand so on. \n%\nFor example, the refinement types\n%\n\\begin{code}\n  type Pos     = {v:Int | 0 < v}\n  type IntGE x = {v:Int | x <= v}\n\\end{code}\n%\nspecify subsets of @Int@ corresponding to values\nthat are positive or larger than some other value @x@\nrespectively. \n%\nWe can use refinement types to specify contracts\nlike pre- and post-conditions by suitably refining the input\nand output types of functions.\n\n\\mypara{Preconditions} are specified by refining input types.\nWe specify that the function @assert@ must\n\\emph{only} be called with @True@,\n%\nwhere the type @TRUE@ contains only the singleton @True@:\n%\n\\begin{code}\n  type TRUE = {v:Bool | v <=> True}\n\n  assert         :: TRUE -> a -> a\n  assert True x  = x\n  assert False _ = error \"Provably Dead Code\"\n\\end{code}\n\n\\mypara{We can specify post-conditions} by refining output types.\nFor example, a primitive @Int@ comparison operator @leq@ can be\nassigned a type that says that the output is @True@ iff the\nfirst input is actually less than or equal to the second:\n%\n\\begin{mcode}\n  leq :: x:Int -> y:Int -> {v:Bool | v  <=> x <= y}\n\\end{mcode}\n\n\\mypara{Refinement Type Checking} proceeds by checking that at each\napplication, the types of the actual arguments are \\emph{subtypes}\nof those of the function inputs, in the environment (or context) in\nwhich the call occurs.\n%\nConsider the function:\n%\n\\begin{code}\n  checkGE     :: a:Int -> b:IntGE a -> Int\n  checkGE a b = assert cmp b\n    where cmp = a `leq` b\n\\end{code}\n%\nTo verify the call to @assert@ we check that\nthe actual parameter @cmp@ is a subtype of @TRUE@,\nunder the assumptions given by the input types for\n@a@ and @b@.\n%\nVia subtyping~\\cite{Vazou14} the check reduces to establishing\nthe validity of the \\emph{verification condition}~(VC)\n%\n\\begin{code}\n  a <= b => (cmp <=> a <= b) => v = cmp => (v<=>true)\n\\end{code}\n%\nThe first antecedent comes from the input type of @b@, the\nsecond from the type of @cmp@ obtained from the output of @leq@,\nthe third from the \\emph{actual} input passed to @assert@,\nand the goal comes from the input type \\emph{required} by @assert@.\n%\nAn SMT solver \\cite{Nelson81} readily establishes the validity\nof the above VC, thereby verifying @checkGE@.\n\n\\begin{comment}\n\n% \\subsection{Abstract Refinements}\n\n\\mypara{First order refinements prevent modular specifications.}\nConsider the function that returns the largest element of a list:\n%\n\\begin{code}\n  maximum         :: List Int -> Int\n  maximum [x]     = x\n  maximum (x:xs)  = max x (maximum xs)\n    where max a b = if a < b then b else 
a\n\\end{code}\n%\nHow can one write a first-order refinement type specification for\n@maximum@ that will let us verify the below code?\n%\n\\begin{code}\n  posMax :: List Pos -> Pos\n  posMax = maximum\n\\end{code}\n%\n%   posMax :: List Neg -> Neg\n%   posMax = maximum\n%\nAny suitable specification would have to enumerate the\nsituations under which @maximum@ may be invoked\nbreaking modularity.\n\n\\mypara{Abstract Refinements} overcome the above modularity\nproblems \\cite{vazou13}.\n%\nThe main idea is that we can type @maximum@ by observing\nthat it returns \\emph{one of} the elements in its input list.\nThus, if every element of the list enjoys some refinement @p@\nthen the output value is also guaranteed to satisfy @p@.\n%\nConcretely, we can type the function as:\n%\n\\begin{code}\nmaximum :: forall<p::Int->Bool>. List Int<p> -> Int<p>\n\\end{code}\n%\nwhere informally, @Int<p>@ stands for @{v:Int | p v}@,\nand @p@ is an \\emph{uninterpreted function} in the refinement\nlogic~\\cite{Nelson81}.\n%\nThe signature states that for any refinement @p@ on @Int@,\nthe input is a list of elements satisfying @p@\nand returns as output an integer satisfying @p@.\n%\nIn the sequel, we will drop the explicit quantification\nof abstract refinements; all free abstract refinements\nwill be \\emph{implicitly} quantified at the top-level\n(as with classical type parameters.)\n\n\\paragraph{Abstract Refinements Preserve Decidability.}\nAbstract refinements do not require the use of higher-order\nlogics. Instead, abstractly refined signatures (like @maximum@)\ncan be verified by viewing the abstract refinements @p@ as\nuninterpreted functions that only satisfy the axioms of\ncongruence, namely:\n%\n\\begin{code}\n  forall x y. x = y => p x <=> p y\n\\end{code}\n%\nAs the quantifier free theory of uninterpreted functions\nis decidable~\\cite{Nelson81}, abstract refinement type\nchecking remains decidable \\cite{vazou13}.\n\n\\paragraph{Abstract Refinements are Automatically Instantiated} at call-sites,\nvia the abstract interpretation framework of Liquid Typing~\\cite{vazou13}.\nEach instantiation yields fresh refinement variables on\nwhich subtyping constraints are generated; these constraints\nare solved via abstract interpretation yielding the instantiations.\n%\nHence, we verify @posMax@ % and @negMax@\nby instantiating:\n%\n\\begin{code}\n  p |-> \\ v -> 0 < v   -- at posMax\n\\end{code}\n  % p |-> \\ v -> v < 0   -- at negMax\n\n\\end{comment}\n\n\\subsection{Bounded Refinements}\n\nRefinement types hit various expressiveness walls since, \nfor decidability, refinements are constrained to \nfirst-order, decidable logics.\n%\nConsider the following example\nfrom~\\cite{TerauchiPOPL13}.\n%\n@find@ takes as input a predicate @q@, a continuation\n@k@ and a starting number @i@; it proceeds to compute\nthe smallest @Int@ (larger than @i@) that satisfies\n@q@, and calls @k@ with that value.\n%\n@ex1@ passes @find@ a continuation that checks that the\n``found'' value equals or exceeds @n@.\n%\n\\begin{code}\n  ex1 :: (Int -> Bool) -> Int -> ()\n  ex1 q n = find q (checkGE n) n\n\n  find q k i\n    | q i       = k i\n    | otherwise = find q k (i + 1)\n\\end{code}\n\n\\mypara{Verification fails} as there is no way to specify that\n@k@ is only called with arguments greater than @n@.\n%\nFirst, the variable @n@ is not in scope at the function\ndefinition so we cannot refer to it.\n%\nSecond, we could try to say that @k@ is invoked with values\ngreater than or equal to @i@, which gets substituted with @n@\nat 
the call-site. Alas, due to the currying order, @i@ too is\nnot in scope at the point where @k@'s type is defined, so\nthe type for @k@ cannot depend upon @i@.\n\n\\mypara{Can Abstract Refinements Help?} Let's try to\nuse Abstract Refinements, from chapter~\\ref{chapter:abstractrefinements},\nto abstract over the refinement that @i@ enjoys, and\nassign @find@ the type:\n%\n\\begin{code}\n  find :: (Int -> Bool) -> (Int<p> -> a) -> Int<p> -> a\n\\end{code}\n%\nwhich states that for any refinement @p@, the function takes\nan input @i@ which satisfies @p@ and hence that the continuation\nis also only invoked on a value which trivially enjoys @p@, namely @i@.\n%\nAt the call-site in @ex1@ we can instantiate\n\\begin{equation}\n\\cc{p} \\mapsto \\lambda \\cc{v} \\rightarrow \\cc{n} \\leq \\cc{v} \\label{eq:inst:find}\n\\end{equation}\n%\nThis instantiated refinement is satisfied by the parameter @n@ and is\nsufficient to verify, via function subtyping, that @checkGE n@ will\nonly be called with values satisfying @p@, and hence larger than @n@.\n\n\\mypara{The function find is ill-typed} as the signature requires that\nat the recursive call site, the value @i+1@ \\emph{also}\nsatisfies the abstract refinement @p@.\n%\nWhile this holds for the example we have in mind~(\\ref{eq:inst:find}),\nit does not hold \\emph{for all} @p@, as required by the type of @find@!\n%\nConcretely, @{v:Int | v = i + 1}@ is in general \\emph{not} a subtype of\n@Int<p>@, as the associated VC\n% Concretely, the recursive call generates the VC\n%\n\\begin{equation}\n    ... \\Rightarrow \\cc{p i} \\Rightarrow \\cc{p (i+1)} \\label{eq:vc:find}\n\\end{equation}\n%\n%\\begin{equation}\n%p\\ i \\Rightarrow p\\ (i + 1) \\label{eq:vc:find}\n%\\end{equation}\n%\nis \\emph{invalid} -- the type checker thus (soundly!) rejects @find@.\n\n\\mypara{We must Bound the Quantification} of @p@ to limit\nit to refinements satisfying some constraint, in this case\nthat @p@ is \\emph{upward closed}. In the dependent setting,\nwhere refinements may refer to program values, bounds\nare naturally expressed as constraints between refinements.\n% Horn clauses over refinements.\nWe define a bound, @UpClosed@,\n%\nwhich states that @p@ is a refinement that is \\emph{upward closed},\n\\ie satisfies @forall x. p x =>  p (x+1)@,\nand use it to type @find@ as:\n%\n\\begin{code}\n  bound UpClosed (p :: Int -> Bool)\n    = \\x -> p x => p (x+1)\n\n  find :: (UpClosed p) => (Int -> Bool)\n                       -> (Int<p> -> a)\n                       ->  Int<p> -> a\n\\end{code}\n%\nThis time, the checker is able to use the bound to\nverify the VC~(\\ref{eq:vc:find}).\n%\nWe do so in a way that refinements (and thus VCs) remain quantifier\nfree and hence, SMT decidable~(\\S~\\ref{sec:overview:implementation}).\n\n\\mypara{At the call} to @find@ in the body of @ex1@, we perform\nthe instantiation~(\\ref{eq:inst:find}) which generates the\n\\emph{additional} VC\n%\n\\hbox{@n <=  x => n <=  x+1@}\n%\nby plugging in the concrete refinements to the bound constraint.\n%\nThe SMT solver checks the validity of the VC\nand hence this instantiation, thereby statically\nverifying @ex1@, \\ie that the assertion inside\n@checkGE@ cannot fail.\n%\n\n\\subsection{Bounds for Higher-Order Functions}\n\nNext, we show how bounds expand the scope of refinement typing by\nletting us write precise modular specifications for various canonical\nhigher-order functions.\n\n\\subsubsection{Function Composition}\\label{sec:compose}\n\nFirst, consider @compose@. 
What is a modular specification\nfor @compose@ that would let us verify that @ex2@ enjoys\nthe given specification?\n%\n\\begin{code}\n  compose f g x = f (g x)\n\n  type Plus x y = {v:Int | v = x + y}\n  \n  ex2    :: n:Int -> Plus n 2\n  ex2    = incr `compose` incr\n\n  incr   :: n:Int -> Plus n 1\n  incr n = n + 1\n\\end{code}\n\n\\mypara{The challenge is to chain the dependencies} between the\ninput and output of @g@ and the input and output of @f@ to\nobtain a relationship between the input and output of the\ncomposition. We can capture the notion of chaining in a bound:\n%\n%% f -> p\n%% g -> q\n\\begin{code}\n  bound Chain p q r = \\x y z ->\n        q x y => p y z => r x z\n\\end{code}\n%\nwhich states that for any @x@, @y@ and @z@, if\n%\n(1) @x@ and @y@ are related by @q@, and\n(2) @y@ and @z@ are related by @p@, then\n(3) @x@ and @z@ are related by @r@.\n\nWe use @Chain@ to type @compose@ using three abstract\nrefinements @p@, @q@ and @r@, relating the arguments\nand return values of @f@ and @g@ to their composed value.\n%\n(Here, @c<r x>@ abbreviates @{v:c | r x v}@).\n\n\\begin{code}\n  compose :: (Chain p q r) => (y:b -> c<p y>)\n                           -> (x:a -> b<q x>)\n                           -> (w:a -> c<r w>)\n\\end{code}\n\n\\mypara{To verify} @ex2@ we instantiate, at the call to @compose@,\n%\n\\begin{code}\n  p, q |-> \\x v -> v = x + 1\n     r |-> \\x v -> v = x + 2\n\\end{code}\n%\nThe above instantiation satisfies the bound, as shown by the validity\nof the VC derived from instantiating @p@, @q@, and @r@ in @Chain@:\n%\n\\begin{code}\n  y = x + 1 => z = y + 1 => z == x + 2\n\\end{code}\n%\nand hence, we can check that @ex2@ implements its specified type.\n\n\n\\subsubsection{List Filtering}\n\nNext, consider the list @filter@ function.\n%\nWhat type signature for @filter@ would let us check @positives@?\n\\begin{code}\n  filter q (x:xs)\n    | q x         = x : filter q xs\n    | otherwise   = filter q xs\n  filter _ []     = []\n\n  positives       :: [Int] -> [Pos]\n  positives       = filter isPos\n    where isPos x = 0 < x\n\\end{code}\n%\nSuch a signature would have to relate the @Bool@ returned by\n@f@ with the property of the @x@ that it checks for.\n%\nTyped Racket's latent predicates~\\cite{typedracket}\naccount for this idiom, but are a special construct\nlimited to @Bool@-valued ``type'' tests, and not\narbitrary invariants.\n%\nAnother approach is to avoid the so-called\n``Boolean Blindness'' that accompanies\n@filter@ by instead using option types\nand @mapMaybe@.\n\n\\mypara{We overcome blindness using a witness} bound:\n%\n\\begin{code}\n  bound Witness p w = \\x b -> b => w x b => p x\n\\end{code}\n%\nwhich says that @w@ \\emph{witnesses} the\nrefinement @p@. That is, for any boolean @b@ such\nthat @w x b@ holds, if @b@ is @True@ then @p x@ also holds.\n\n\\mypara{We can give} @filter@ a type that says that the output values\nenjoy a refinement @p@ as long as the test predicate @q@ returns\na boolean witnessing @p@:\n%\n\\begin{code}\n  filter :: (Witness p w) => (x:a -> Bool<w x>)\n                          -> List a\n                          -> List a<p>\n\\end{code}\n\n\\mypara{To verify} @positives@ we infer the following type and\ninstantiations for the abstract refinements @p@ and @w@ at the\ncall to @filter@:\n%\n\\begin{code}\n  isPos :: x:Int -> {v:Bool | v <=> 0 < x}\n  p     |-> \\v    -> 0 < v\n  w     |-> \\x b  -> b <=> 0 < x\n\\end{code}\n\n\\subsubsection{List Folding}\n\nNext, consider the list fold-right function. 
Suppose we\nwish to prove the following type for @ex3@:\n%\n\\begin{code}\n  foldr :: (a -> b -> b) -> b -> List a -> b\n  foldr op b []     = b\n  foldr op b (x:xs) = x `op` foldr op b xs\n\n  ex3 :: xs:List a -> {v:Int | v == len xs}\n  ex3 = foldr (\\_ -> incr) 0\n\\end{code}\n%\nwhere @len@ is a \\emph{logical} or \\emph{measure}\nfunction used to represent the number of elements of\nthe list in the refinement logic (\\S~\\ref{sec:measures}):\n%\n\\begin{code}\n  measure len   :: List a -> Nat\n    len []      = 0\n    len (x:xs)  = 1 + len xs\n\\end{code}\n\n\\mypara{We specify induction as a bound.} Let\n(1)~@inv@ be an abstract refinement relating a list @xs@\n    and the result @b@ obtained by folding over it and\n(2)~@step@ be an abstract refinement relating the\n    inputs @x@, @b@ and output @b'@ passed to and\n    obtained from the accumulator @op@ respectively.\n%\nWe state that @inv@ is closed under @step@ as:\n%\n\\begin{code}\n  bound Inductive inv step = \\x xs b b' ->\n        inv xs b => step x b b' => inv (x:xs) b'\n\\end{code}\n%\n\n\\mypara{We can give} @foldr@ a type that says that the\nfunction \\emph{outputs} a value that is built inductively\nover the entire \\emph{input} list:\n%\n\\begin{code}\n  foldr :: (Inductive inv step)\n        => (x:a -> acc:b -> b<step x acc>)\n        -> b<inv []>\n        -> xs:List a\n        -> b<inv xs>\n\\end{code}\n%\nThat is, for any invariant @inv@ that is inductive\nunder @step@, if the initial value @b@ is @inv@-related\nto the empty list, then the folded output is @inv@-related\nto the input list @xs@.\n\n\\mypara{We verify} @ex3@ by inferring, at the call to @foldr@:\n%\n\\begin{code}\n  inv  |-> \\xs v   -> v  == len xs\n  step |-> \\x b b' -> b' == b + 1\n\\end{code}\n%\nThe SMT solver validates the VC obtained by plugging the\nabove into the bound.\n%\nInstantiating the signature for @foldr@ yields precisely the\noutput type desired for @ex3@.\n\n%% \\nv{addition:}\nPreviously, chapter~\\ref{chapter:abstractrefinements} describes a way to type @foldr@\nusing abstract refinements that required the operator @op@\nto have one extra ghost argument.\nBounds let us express induction without ghost arguments.\n\n\\subsection{Implementation}\\label{sec:overview:implementation}\n\nTo implement bounded refinement typing, we must solve two\nproblems. 
Namely, how do we\n%\n(1)~\\emph{check} and\n%\n(2)~\\emph{use}\n%\nfunctions with bounded signatures?\n%\nWe solve both problems via an insight inspired\nby the way typeclasses are implemented in Haskell.\n%\n\\begin{enumerate}\n%\n\\item \\emphbf{A Bound Specifies} a function type\nwhose inputs are unconstrained and whose output is\nsome value that carries the refinement corresponding\nto the bound's body.\n%\n\\item \\emphbf{A Bound is Implemented} by a ghost\nfunction that returns @True@, but is defined\nin a context where the bound's constraint holds when\ninstantiated to the concrete refinements at the context.\n%\n\\end{enumerate}\n\n\\mypara{We elaborate bounds into ghost functions} satisfying\nthe bound's type.\n%\nTo \\emph{check} bounded functions, we need to\n\\emph{call} the ghost function to materialize the\nbound constraint at particular values of interest.\n%\nDually, to \\emph{use} bounded functions, we need to\n\\emph{create} ghost functions whose outputs are\nguaranteed to satisfy the bound constraint.\n%\nThis elaboration reduces \\emph{bounded} refinement\ntyping to the simpler problem\nof \\emph{unbounded} abstract refinement typing.\n%\nThe formalization of our elaboration is described in\n\\S~\\ref{sec:check}.\n%\nNext, we illustrate the elaboration by explaining how\nit addresses the problems of checking and using bounded\nsignatures like @compose@.\n\n\\mypara{We Translate Bounds into Function Types} called the\nbound-type where the inputs are unconstrained, and the\noutputs satisfy the bound's constraint.\n%\nFor example, the bound @Chain@ used to type @compose@ in\n\\S~\\ref{sec:compose}, corresponds to a function type, yielding\nthe translated type for @compose@:\n%\n\\begin{code}\n  type ChainTy p q r\n    =  x:a -> y:b -> z:c -> {v:Bool | q x y => p y z => r x z}\n\n  compose :: (ChainTy p q r) \n          -> (y:b -> c<p y>)\n          -> (x:a -> b<q x>)\n          -> (w:a -> c<r w>)\n\\end{code}\n\n\\mypara{To Check Bounded Functions} we view the bound constraints\nas extra (ghost) function parameters (cf. type class dictionaries),\nthat satisfy the bound-type. Crucially, each expression where a\nsubtyping constraint would be generated (by plain refinement typing)\nis wrapped with a ``call'' to the ghost to materialize the constraint\nat values of interest. For example we elaborate @compose@ into:\n%\n\\begin{code}\n  compose dollarchain f g x =\n    let t1 = g x\n        t2 = f t1\n        _  = dollarchain x t1 t2   -- materialize\n    in  t2\n\\end{code}\n%\nIn the elaborated version @dollarchain@ is the ghost parameter %\ncorresponding to the bound. As is standard \\cite{LiquidPLDI08},\nwe perform ANF-conversion to name intermediate values, and then\nwrap the function output with a call to the ghost to materialize\nthe bound's constraint. Consequently, the output of compose, namely\n@t2@, is checked to be a subtype of the specified output type,\nin an environment \\emph{strengthened} with the bound's constraint\ninstantiated at @x@, @t1@ and @t2@. 
This subtyping reduces to a\nquantifier-free VC:\n%\n\\begin{code}\n      q x t1\n  =>  p t1 t2\n  => (q x t1 => p t1 t2 => r x t2)\n  =>  v = t2 => r x v\n\\end{code}\n%\nwhose first two antecedents are due to the types of @t1@ and @t2@\n(via the output types of @g@ and @f@ respectively) and the third\ncomes from the call to @dollarchain@.\n%\nThe output value @v@ has the singleton refinement that\nstates that it equals @t2@, and finally the VC states that the\noutput value @v@ must be related to the input @x@ via @r@.\n%\nAn SMT solver validates this decidable VC easily, thereby\nverifying @compose@.\n\nOur elaboration inserts materialization calls \\emph{for all}\nvariables (of the appropriate type) that are in scope at the\ngiven point. This could introduce up to $n^k$ calls where $k$\nis the number of parameters in the bound and $n$ the number\nof variables in scope. In practice (\\eg in @compose@) this\nnumber is small (\\eg 1) since we limit ourselves to variables\nof the appropriate types.\n\n%% Colin's comment: how about strictness and termination?\nTo preserve semantics we ensure that none of these materialization\ncalls can diverge, by carefully constraining the structure of\nthe arguments that instantiate the ghost functional parameters.\n\n\\mypara{At Uses of Bounded Functions} our elaboration uses\nthe bound-type to create lambdas with appropriate parameters\nthat just return @true@. For example, @ex2@ is elaborated to:\n%\n\\begin{code}\n  ex2 = compose (\\_ _ _ -> true) incr incr\n\\end{code}\n%\nThis elaboration seems too na\\\"ive to be true: how do we\nensure that the function actually satisfies the bound type?\n\nHappily, that is automatically taken care of by function subtyping.\n%\nRecalling the translated type for @compose@, the elaborated lambda\n@(\\_ _ _ ->  true)@ is constrained to be a subtype of @ChainTy p q r@.\n%\nIn particular, given the call-site instantiation\n%\n\\begin{mcode}\n  p $\\mapsto$ \\ y z -> z == y + 1\n  q $\\mapsto$ \\ x y -> y == x + 1\n  r $\\mapsto$ \\ x z -> z == x + 2\n\\end{mcode}\n%\nthis subtyping constraint reduces to the quantifier-free VC:\n%\n\\begin{equation}\n\\inter{\\Gamma}\n  \\Rightarrow \\mathtt{true}\n  \\Rightarrow \\cc{(z == y + 1)}\n   \\Rightarrow \\cc{(y == x + 1)}\\notag\n   \\Rightarrow \\cc{(z == x + 2)}  \\label{vc:bounded:ex2}\n\\end{equation}\n%\n%\nwhere $\\Gamma$ contains assumptions about the various binders in\nscope.\n%\nThe above VC is easily proved valid by an SMT solver, thereby\nverifying the subtyping obligation defined by the bound, and hence,\nthat @ex2@ satisfies the given type.\n", "meta": {"hexsha": "f62dafc4eb82550e5fc77f3457f20bdf6b96a0de", "size": 21247, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/boundedrefinements/overview.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/boundedrefinements/overview.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/boundedrefinements/overview.tex", 
"max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 31.5237388724, "max_line_length": 81, "alphanum_fraction": 0.7025933073, "num_tokens": 6094, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.774583389368527, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5824964439256142}}
{"text": "% !TEX root = scombinatorics.tex\n\\documentclass[scombinatorics.tex]{subfiles}\n\\begin{document}\n\\chapter{Stability}\n\\label{sauer}\n\n\n\n\\def\\medrel#1{\\parbox[t]{5ex}{$\\displaystyle\\hfil #1$}}\n\\def\\ceq#1#2#3{\\parbox[t]{20ex}{$\\displaystyle #1$}\\medrel{#2}{$\\displaystyle #3$}}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The order property}\\label{ladder}\n\nThe \\emph{chain index\\/} of $\\phi(\\U\\,;b)_{b\\in\\V}$, or of $\\phi(x\\,;z)$ when $\\U$ and $\\V$ are clear, is the maximal length (that is, $n+1$) of a chain of the form\n\n\\ceq{\\ssf{ch}\\hfill\\phi(A\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A\\,;b_n)}\n\nfor some set $A\\subseteq\\U$ and some $b_0,\\dots,b_n\\in\\V$.\nNote that we allow $\\phi(A\\,;b_0)$ to be empty.\nThis choice produces a small asymmetry in the definition of ladder; see also Fact~\\ref{fact_stability_dual}.\n\n\\begin{example}\n  If $\\phi(\\U\\,;b)_{b\\in\\V}$ consists of just one set, the ladder index is $1$.\n  If it contains two distinct sets, the  chain index is at least $2$ and it is exactly $2$ if there are no more two sets, or if all sets are disjoint.\\QED\n\\end{example}\n\nIf a maximal length does not exist, we say that $\\phi(x\\,;z)$ is \\emph{unstable,} or that it has the \\emph{order-property.} \nOtherwise we say that it is \\emph{stable.}\n\nIn place of requiring the existence of the chain in \\ssf{ch}, we could equivalently ask for a pair of tuples $a_1,\\dots,a_n\\in\\U$ and $b_0,\\dots,b_n\\in\\V$ such that\n\n\\ceq{\\ssf{ld}\\hfill\\phi(a_h\\,;b_k)}\n{\\IFF}\n{h\\le k.}\n\nWe call this pair of tuples a \\emph{ladder\\/} of length $n+1$.\nWe may also say \\emph{ladder index\\/} instead of chain index.\nSetting $A=\\{a_1,\\dots,a_n\\}$ we easily obtain a chain from a ladder, the converse is left as an easy exercise for the reader.\n\n\\begin{exercise}\n  Let $\\phi(x\\,;z)$ have chain index $n+1$ or more. \n  Let $A\\subseteq\\U$ be a minimal set such that a chain as in \\ssf{ch} obtains for some $b_0,\\dots,b_n\\in\\V$.\n  Prove that there is a ladder $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$ such that $A=\\{a_1,\\dots,a_n\\}$ and conclude that $\\phi(x\\,;z)$ has ladder index $n+1$.\\QED\n\\end{exercise}\n\nThe following facts are obvious but worth noting.\n\n\\begin{fact}\\label{fact_stability_dual}\n  Let $\\phi(x\\,;z)$ have ladder index $n+1$. \n  Then $\\phi(x\\,;z)^{\\rm op}$ has ladder index $\\ge n$.\\QED\n\\end{fact}\n \n\\begin{fact}\\label{fact_stability_neg}\n  Let $\\phi(x\\,;z)$ have ladder index $n$. 
\n  Then $\\neg\\phi(x\\,;z)$ has ladder index $n$.\\QED\n\\end{fact}\n\n\\begin{proof}\n  If for all $h\\in(n]$ and $k\\in[n]$ \n\n  \\ceq{\\hfill\\neg\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k}\n\n  then $a'_h=a_{n+1-h}$ and $b'_k=b_{n-k}$ satisfy\n\n  \\ceq{\\hfill\\phi(a'_h\\,;b'_k)}\n  {\\IFF}\n  {\\phi(a_{n+1-h}\\,;b_{n-k})}\n\n  \\ceq{}\n  {\\IFF}\n  {n+1-h>n-k}\n\n  \\ceq{}\n  {\\IFF}\n  {h\\le k}\n\\end{proof}\n \nThe following definition is connected with those above, though in a less evident manner.\nWe write ${}^n2$ for the set of binary sequences of length $n$ or, more precisely, the set of functions $s:[n)\\to[2)$.\nWe write $s_h$ for the value of $s$ at $h$, and $s{\\restriction} h$ for the restriction of $s$ to $[h)$.\nWe define ${}^{<n}2=\\big\\{r\\, :\\, r\\in {}^h2,\\ h\\in[n)\\big\\}$.\n\nA \\emph{branching tree\\/} of height $n$ for the formula $\\phi(x\\,;z)$ is a function \n\n\\ceq{\\hfill\\bar{a}\\ :\\ {}^{<n}2}{\\to}{\\U}\n\\nopagebreak[4]\\par\n\\ceq{\\hfill r}{\\mapsto}{a_r},\n\nwhich we may also present by writing $\\bar a=\\<a_r:r\\in{}^{<n}2\\>$, such that\n\n\\ceq{\\ssf{2r}\\hfill\\0}\n{\\neq}\n{\\bigcap_{h=0}^{n-1}\\neg^{1-s_h}\\,\\phi(a_{s\\restriction h}\\,;\\V)}\\hfill for all $s\\in {}^n2$\n\nwhere $\\neg^i$, for $i$ a non-negative integer, denotes a negation symbol repeated $i$ times.\nIn other words, \\ssf{2r} requires the existence of some $\\<b_s:s\\in{}^n2\\>$ such that\n\n\\ceq{\\hfill\\phi(a_{s\\restriction h}\\,;b_s)}\n{\\IFF}\n{s_h=1}\\hfill for all pairs $s\\in {}^n2$ and $h\\in[n)$\n\nor, with slightly different notation,\n\n\\ceq{\\hfill\\phi(a_r\\,;b_s)}\n{\\IFF}\n{r^\\frown1\\subseteq s}\\hfill for all pairs $r\\subset s\\in {}^n2$.\n\nIt helps to represent a branching tree as follows.\nFor definiteness, fix $n=3$.\nConsider a full binary tree of height $n+1$ and assign to each internal node (different from the root and the leaves) a formula as depicted below.\nThen \\ssf{2r} requires that all formulas in each branch $s\\in{}^n2$ are satisfied by some $b_s\\in\\V$.\n\\medskip\n\n% Set the overall layout of the tree\n\\tikzstyle{level 1}=[level distance=3.5cm, sibling distance=2.5cm]\n\\tikzstyle{level 2}=[level distance=3.5cm, sibling distance=1.2cm]\n\\tikzstyle{level 3}=[level distance=2.5cm, sibling distance=0.5cm]\n\\tikzstyle{level 4}=[level distance=0.5cm, sibling distance=0.5cm]\n\n% Define styles for bags and leafs\n\\tikzstyle{bag0} = [text width=2.5ex, align=left]\n\\tikzstyle{bag} = [text width=7.5ex, align=right]\n%\\tikzstyle{end} = [circle, minimum width=3pt,fill, inner sep=0pt]\n\n\\begin{tikzpicture}[grow=right]\n\\node[bag0] {{$\\top$}}\n    child {\n        node[bag] {$\\phi(a_{\\0};z)$}      \n            child {\n                node[bag] {$\\phi(a_1;z)$}\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{11};z)$\\rlap{ ----- $b_{111}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] 
It helps to represent a branching tree as follows.\nFor definiteness, fix $n=3$.\nConsider a full binary tree of height $n+1$ and assign to each internal node (different from the root and the leaves) a formula as depicted below.\nThen \\ssf{2r} requires that all formulas in each branch $s\\in{}^n2$ are satisfied by some $b_s\\in\\V$.\n\\medskip\n\n% Set the overall layout of the tree\n\\tikzstyle{level 1}=[level distance=3.5cm, sibling distance=2.5cm]\n\\tikzstyle{level 2}=[level distance=3.5cm, sibling distance=1.2cm]\n\\tikzstyle{level 3}=[level distance=2.5cm, sibling distance=0.5cm]\n\\tikzstyle{level 4}=[level distance=0.5cm, sibling distance=0.5cm]\n\n% Define styles for bags and leafs\n\\tikzstyle{bag0} = [text width=2.5ex, align=left]\n\\tikzstyle{bag} = [text width=7.5ex, align=right]\n%\\tikzstyle{end} = [circle, minimum width=3pt,fill, inner sep=0pt]\n\n\\begin{tikzpicture}[grow=right]\n\\node[bag0] {{$\\top$}}\n    child {\n        node[bag] {$\\phi(a_{\\0};z)$}      \n            child {\n                node[bag] {$\\phi(a_1;z)$}\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{11};z)$\\rlap{ ----- $b_{111}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize$\\llap{$\\neg$}\\phi(a_{11};z)$\\rlap{ ----- $b_{110}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n            child {\n                node[bag] {\\llap{$\\neg$}$\\phi(a_1;z)$}\n                edge from parent\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{10};z)$\\rlap{ ----- $b_{101}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{10};z)$\\rlap{ ----- $b_{100}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n       edge from parent \n    }\n    child {\n        node[bag] {\\llap{$\\neg$}$\\phi(a_{\\0};z)$}         \n            child {\n                node[bag] {$\\phi(a_0;z)$}\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{01};z)$\\rlap{ ----- $b_{011}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{01};z)$\\rlap{ ----- $b_{010}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n            child {\n                node[bag] {\\llap{$\\neg$}$\\phi(a_0;z)$}\n                edge from parent\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{00};z)$\\rlap{ ----- $b_{001}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{00};z)$\\rlap{ ----- $b_{000}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            } \n        edge from parent\n    };\n\\end{tikzpicture}\n\\smallskip\n\nThe Shelah local \\emph{2-rank\\/} of $\\phi(x\\,;z)$ is the maximal height of a branching tree for $\\phi(x\\,;z)$.\nIf no such maximal height exists, we say that the 2-rank is infinite.\n\nA branching tree $\\bar a'=\\<a'_r:r\\in{}^{<m}2\\>$ is a \\emph{subtree\\/} of $\\bar a$ if there is an $\\subseteq$-preserving map $f:{}^{<m}2\\to{}^{<n}2$ such that $a'_r=a_{fr}$.\n\nFor the theorem below we need the following Ramsey-like lemma.\n\n% \\begin{lemma}\\label{lem_Ramsey_on_trees}\n%   Let $\\bar a=\\<a_r\\, :\\, r\\in{}^{<2m}2\\>$ be a branching tree for $\\phi(x\\,;z)$.\n%   Let $k\\in[2m]$ be given.\n%   Then, for every $2$-coloring of $\\range(\\bar a)$, there is a branching tree $\\bar a'=\\<a'_r\\, :\\, r\\in{}^m2\\>$ such that  $\\range(\\bar a')$ is a monochromatic subset of $\\range(\\bar a)$.\\QED\n% \\end{lemma}\n\n% As it happens, a more general lemma is easier to prove.\n\n\\begin{lemma}\\label{lem_Reamsey_tree}\n  Let $\\bar a=\\<a_r\\, :\\, r\\in{}^{<n}2\\>$ be a branching tree for $\\phi(x\\,;z)$.\n  Let $k\\in[n]$ be given.\n  Then, for every red-blue coloring of $\\range(\\bar a)$, there is a monochromatic branching subtree, say $\\bar a'=\\<a'_r\\, :\\, r\\in{}^{<n'}2\\>$, and either all nodes of $\\bar a'$ are red and $n'=k$, or they are all blue and $n'=n-k$.\n\\end{lemma}\n\n\\begin{proof}\n  Induction on $n$.\n  First, note that the lemma is trivial when $k=0$ or $k=n$.\n  In particular the lemma holds for $n=0$.\n\n  Assume the lemma true for $n$ and prove it for $n+1$.\n  Let $\\bar a=\\<a_r\\, :\\, r\\in{}^{<\\,n+1}2\\>$ be given.\n  Fix some $k\\in[n+1]$. 
\n  We want a branching tree $\\bar a'$ that is either red of height $k$ or blue of height $n+1-k$.\n  \n  If we discard the trivial cases, we can assume $k\\in(n]$.\n\n  For $i=0,1$ we define $\\bar a_i=\\<a_{i^\\frown r}\\;:\\;r\\in {}^{<n}2\\>$.\n\n  First suppose that $a_\\0$ is blue.\n  If, for either $i=0$ or $i=1$, there is a red branching tree $\\bar a'_i$ of height $k$, we are done.\n  Otherwise, for both $i=0,1$ there is a blue branching tree $\\bar a'_i$ of height $n-k$.\n  Then we graft these two trees on $a_\\0$, which is also blue, and obtain the required blue tree of height $n-k+1$.\n  \n  Now suppose that $a_\\0$ is red.\n  If, for either $i=0$ or $i=1$, there is a blue branching tree $\\bar a'_i\\subseteq \\bar a_i$ of height $n-(k-1)$, we are done.\n  Otherwise, we graft on $a_\\0$ two red trees of height $k-1$ to obtain a red tree of height $k$.  \n\\end{proof}\n\nWe are now ready to characterize stability via the 2-rank.\nThe proof is based on Hodges~\\cite{hodges}.\nIt is a direct proof which yields an explicit bound on the 2-rank given the ladder index (it also yields the converse bound, but that direction is easy).\nHowever, this bound seems to be far from optimal.\nShelah's original proof is model-theoretic and does not give explicit bounds.\nIn the next section we will introduce some of the ideas used in Shelah's proof.\n% In the next section we sketch a second proof which elaborates on Shelah's original ideas.\n% This proof is indirect but.\n\n\\begin{theorem}\\label{thm_hodges}\n  The following are equivalent\n  \\begin{itemize}\n    \\item[1.] $\\phi(x\\,;z)$ is stable;\n    \\item[2.] $\\phi(x\\,;z)$ has finite 2-rank.\n  \\end{itemize}\n  Precisely, if $n_{\\rm ld}$ and $n_{\\rm 2r}$ are the ladder index and the 2-rank, respectively, then \n  \n  \\ceq{\\hfill n_{\\rm ld}}{<}{2^{n_{\\rm 2r}+1};}\n\n  \\ceq{\\hfill n_{\\rm 2r}}{<}{2^{n_{\\rm ld}}+2.}\n\\end{theorem}\n\n\\begin{proof}[Proof (\\kern.1ex\\ssf{2}\\kern.3ex\\boldmath$\\IMP$\\ssf{1})]\n  % \\def\\ceq#1#2#3{\\parbox[t]{29ex}{$\\displaystyle #1$}\\medrel{#2}{$\\displaystyle #3$}}\n  % \\ssf{1}$\\IMP$\\ssf{2} \n  % We prove the contrapositive.\n  % If there is a binary tree of hight $n$ then there is a ladder of length $n$.\n  % This also proves the first inequality of the proposition.\n\n  % Let $\\{a_s\\; :\\; s\\in2^{<n}\\}$ and $\\{b_s\\; :\\; s\\in2^{n}\\}$ satisfy \\ssf{2r}.\n  % Apply \\ssf{2r} to sequences of the form $1^k\\,0^{n-k}$ for $0\\le k\\le n$.\n\n  % % Define $a'_1,\\dots,a'_n$ and $b'_0,\\dots,b'_n$, where $a'_h=a_{1^{h-1}}$ and $b'_k=b_{1^k0^{n-k}}$.\n  % % By \\ssf{2r} we have\n\n  % \\ceq{\\hfill\\phi(a_{1^k\\,0^{n-k}\\,\\restriction\\, h-1}\\;;\\,b_{1^k\\,0^{n-k}})}\n  % {\\IFF}\n  % {\\big(1^k0^{n-k}\\big)_{h-1}=1}\n\n  % \\ceq{}\n  % {\\IFF}\n  % {h\\le k}\n\n  \n  We prove the contrapositive.\n  We show that if there is a ladder of length $m=2^n$, say $a_1,\\dots,a_{m-1}$ and $b_0,\\dots,b_{m-1}$, then there is a branching tree $\\bar a'$ of height $n$.\n  This also proves the first inequality above.\n  In fact, if $n_{\\rm ld}\\ge 2^{n_{\\rm 2r}+1}$, there would exist a branching tree of height $n_{\\rm 2r}+1$, which is a contradiction.\n  \n  The branching tree $\\bar a'=\\<a'_r\\, :\\, r\\in{}^{<n}2\\>$ is defined as follows:\n\n  \\quad $a'_r=a_h$\\quad  where $h$ is obtained by reading $r^\\frown1^\\frown0^{n-|r|-1}$ as an $n$-digit binary number.\n\n  To verify \\ssf{2r} we define for $s\\in{}^n2$ \n  \n  \\quad $b'_s=b_k$\\quad  where $k$ is obtained by reading $s$ as an $n$-digit binary number.\n\n
  Then it is easy to verify that for all pairs $r\\subset s\\in{}^n2$\n\n  \\ceq{\\hfill\\phi(a'_r\\,;b'_s)}\n  {\\IFF}\n  {\\phi(a_h\\,;b_k)}\\hfill where $h$ and $k$ are as above\n\n  \\ceq{}\n  {\\IFF}\n  {h\\le k}\n\n  \\ceq{}\n  {\\IFF}\n  {r^\\frown1^\\frown0^{n-|r|-1}\\ \\le\\ s}\\hfill  as $n$-digit binary numbers\n\n  \\ceq{}\n  {\\IFF}\n  {r^\\frown1\\ \\subseteq\\ s}\n\n  \\textbf{(\\ssf{1}\\kern.2ex\\boldmath$\\IMP$\\ssf{2}\\kern.1ex)}\\ \n  We prove the contrapositive.\n  We claim that if there is a branching tree of height $2^n+2$ then there is a ladder of length $n+1$.\n  This also yields the second inequality of the theorem.\n  In fact, if $n_{\\rm 2r}\\ge 2^{n_{\\rm ld}}+2$, there would exist a ladder of length $n_{\\rm ld}+1$, which is a contradiction.\n\n  The claim is true for $n=0$, as witnessed by the ladder $a_\\0$.\n  Now we assume the claim is true for $n$ and prove it for $n+1$.\n\n  Let $\\<a_r\\, :\\, r\\in{}^{<\\, 2m+2}2\\>$, where $m=2^n$, be a branching tree of height $2^{n+1}+2$.\n  To each $b\\in\\V$ we associate a red-blue coloring of $\\range(\\bar a)$ as follows.\n  A node $a\\in\\range(\\bar a)$ is colored\n  \n  \\qquad\\qquad \\llap{red}\\qquad if $\\phi(a\\,;b)$ holds;\n  \n  \\qquad\\qquad \\llap{blue}\\qquad otherwise, that is, if $\\neg\\phi(a\\,;b)$.\n  \n  We consider two cases that are exhaustive by Lemma~\\ref{lem_Reamsey_tree}.\n  Note that we are applying the lemma only to the subtree $\\<a_{1^\\frown r}:r\\in{}^{<\\,2m+1}2\\>$.\n\n  Case 1: for some $b$ there is a red subtree of $\\<a_{1^\\frown r}:r\\in{}^{<\\,2m+1}2\\>$ of height $m+1$.\n  Let $\\bar a'$ be this red tree and consider its subtree $\\<a'_{0^\\frown r}\\,:\\,r\\in {}^{<\\,m}2\\>$.\n  By the induction hypothesis, there are $A\\subseteq\\{a'_{0^\\frown r}\\,:\\,r\\in {}^{<\\,m}2\\}$ and $b_0,\\dots,b_n$ such that\n\n  \\ceq{(1)\\hfill\\phi(A\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A\\,;b_n)}\n\n  Let $A'=A\\cup\\{a'_\\0\\}$; then\n\n  \\ceq{(2)\\hfill\\phi(A'\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A'\\,;b_n)}\n\n  In fact, as $b_0,\\dots,b_n\\in\\neg\\phi(a'_\\0\\,;\\V)$, this is the same chain as (1).\n  Therefore, if we extend the chain on the right with $\\phi(A'\\,;b)=A'$, we obtain the required chain of length $n+2$.\n  \n  Case 2: for every $b$ there is a blue subtree of $\\<a_{1^\\frown r}:r\\in{}^{<\\,2m+1}2\\>$ of height $m$.\n  Choose $b$ such that $\\neg\\phi(a_\\0\\,;b)$ and let $\\bar a'$ be the corresponding blue subtree.\n  Apply the induction hypothesis to obtain $A\\subseteq\\range(\\bar a')$ and $b_0,\\dots,b_n$ such that (1) holds.\n  We claim that (2) above holds with $A'=A\\cup\\{a_\\0\\}$.\n  In fact, $b_0,\\dots,b_n\\in\\phi(a_\\0\\,;\\V)$, so (2) is the chain in (1) with all sets augmented by $a_\\0$.\n  We can extend the chain on the left with $\\phi(A'\\,;b)=\\0$ and obtain the required chain of length $n+2$.\n\\end{proof}\n\n\n
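The two binary encodings in the proof of \\ssf{2}$\\IMP$\\ssf{1} can be checked mechanically; here is a small sketch (for illustration only, with $r$ ranging over the proper initial segments of $s$).\n\n\\begin{verbatim}\nfrom itertools import product\n\ndef as_int(bits):          # read a 0/1 tuple as a binary number\n    return int(\"\".join(map(str, bits)), 2)\n\nn = 5\nfor s in product((0, 1), repeat=n):\n    for l in range(n):\n        r = s[:l]          # r = s restricted to [l)\n        h = as_int(r + (1,) + (0,) * (n - l - 1))\n        k = as_int(s)\n        # r^1^0...0 <= s as binary numbers iff r^1 is an\n        # initial segment of s, i.e. iff s_l = 1\n        assert (h <= k) == (s[l] == 1)\n\\end{verbatim}\n\n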
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Approximable sets}\n\nThe notion of approximable set is a combinatorial counterpart of the model-theoretic notion of externally definable set.\n\n\\begin{definition}\\label{def_approx}\n  We say that $\\Aa\\subseteq\\U$ is \\emph{approximable\\/} if for every finite set $A\\subseteq\\U$ there is a $b\\in\\V$ such that $\\phi(A\\,;b)=\\Aa\\cap A$.\n  If we also have that $\\phi(\\U\\,;b)\\subseteq\\Aa$, then we say that $\\Aa$ is approximable \\emph{from below.}\n\\end{definition}\n\nThe following is immediate.\n\n\\begin{fact}\n  The following are equivalent for every $\\Aa\\subseteq\\U$\n  \\begin{itemize}\n    \\item[1.] $\\Aa$ is approximable from below;\n    \\item[2.] for every finite set $A\\subseteq\\Aa$ there is a $b\\in\\V$ such that $A\\subseteq\\phi(\\U\\,;b)\\subseteq\\Aa$.\\QED\n  \\end{itemize}\n\\end{fact}\n\n\\begin{proposition}\n  Let $\\phi(x\\,;z)$ be a stable formula.\n  Every set $\\Aa\\subseteq\\U$ approximable by $\\phi(x\\,;z)$ is approximable from below by the formula\n\n  \\ceq{\\hfill \\psi(x\\,;z_0,\\dots,z_n)}{=}{\\bigwedge_{i=0}^n\\phi(x\\,;z_i)}\n  \n  where $n+1$ is the ladder index of $\\phi(x\\,;z)$.\n\\end{proposition}\n\n\\begin{proof}\n  Let $A\\subseteq\\Aa$ be finite.\n  We prove that $A\\subseteq\\psi(\\U\\,;b_0,\\dots,b_n)\\subseteq\\Aa$ for some $b_0,\\dots,b_n$.\n  To obtain the $b_k$, we construct a ladder inductively.\n  Pick any $b_0$ such that $A\\subseteq\\phi(\\U\\,;b_0)$, which we can do by approximability.\n  Now, suppose that $a_1,\\dots,a_k$ and $b_0,\\dots,b_k$ have been defined.\n  If \n\n  \\ceq{\\hfill A}{\\subseteq}{\\bigcap_{i=0}^k\\phi(\\U\\,;b_i)\\medrel{\\subseteq}\\Aa}\n\n  we set $b_{k+1}=\\dots=b_n=b_k$ and stop, as we already have the required parameters.\n  Otherwise pick any element\n\n  \\ceq{\\hfill a_{k+1}}{\\in}{\\bigcap_{i=0}^k\\phi(\\U\\,;b_i)\\sm\\Aa}\n\n  and some $b_{k+1}$ such that\n\n  \\ceq{\\hfill A}{\\subseteq}{\\phi(\\U\\,;b_{k+1})\\medrel{\\subseteq}\\U\\sm\\{a_1,\\dots,a_{k+1}\\}}.\n\n  Such a parameter $b_{k+1}$ exists because $\\Aa$ is approximable (apply Definition~\\ref{def_approx} with $A\\cup\\{a_1,\\dots,a_{k+1}\\}$ for $A$).\n  Note that at stage $n$ we have constructed a ladder of length $n+1$ for $\\neg\\phi(x\\,;z)$.\n  In fact for all $h\\in(n]$ and $k\\in[n]$ we have\n  \n  \\ceq{\\hfill\\neg\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k.}\n  \n  As $\\phi(x\\,;z)$ has ladder index $n+1$, by Fact~\\ref{fact_stability_neg} it is not possible to further prolong this ladder, hence\n\n  \\ceq{\\hfill A}{\\subseteq}{\\bigcap_{i=0}^n\\phi(\\U\\,;b_i)\\medrel{\\subseteq}\\Aa}\n\n  as required.\n\\end{proof}\n\n
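The construction in this proof is effectively a greedy algorithm. The sketch below runs it on a toy finite instance; the set family and the brute-force search stand in for $\\phi(\\U\\,;b)_{b\\in\\V}$ and for the approximability of $\\Aa$, and are illustrative assumptions only.\n\n\\begin{verbatim}\nfrom functools import reduce\n\nU = {0, 1, 2, 3, 4}\nAA = {0, 1, 2}                              # the target set\nfamily = [{0, 1, 3}, {0, 1, 4}, {0, 1, 2}]  # the sets phi(U; b)\n\ndef approx(A):    # approximability: phi(A; b) = AA cap A for some b\n    return next(i for i, S in enumerate(family) if S & A == AA & A)\n\nA = {0, 1}                                  # finite A inside AA\nbs, picked = [approx(A)], []\nwhile True:\n    inter = reduce(set.__and__, (family[i] for i in bs), set(U))\n    if inter <= AA:                         # then A <= inter <= AA\n        break\n    picked.append(next(iter(inter - AA)))   # some a_{k+1} outside AA\n    bs.append(approx(A | set(picked)))      # next b avoids a_1..a_{k+1}\nprint(bs, inter)                            # here: [0, 1] {0, 1}\n\\end{verbatim}\n\n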
We prove that the formula $\\psi(x\\,;z_0,\\dots,z_n)$ in the proposition above is itself stable, though with a larger ladder index.\n  \n\\begin{lemma}\n  Let $\\phi_i(x\\,;z)$, where $i=1,\\dots,m$, be formulas with ladder index $n_i$. Let\n  \n  \\ceq{\\hfill \\phi(x\\,;z)}{=}{\\bigwedge_{i=1}^m\\phi_i(x\\,;z)}\n  \n  Then $\\phi(x\\,;z)$ has ladder index $<R(n_1,\\dots, n_m)$, the Ramsey number for $m$-colorings.\n\\end{lemma}\n  \n\\begin{proof} \n  Suppose for a contradiction that there is a ladder $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$ with $n=R(n_1,\\dots, n_m)$.\n  Let $C_i$ contain the pairs $\\{h,k\\}$ such that $0\\le k<h\\le n$ and $\\neg\\phi_i(a_h\\,;b_k)$.\n  Then from \\ssf{ld} we obtain \n  \n  \\ceq{\\hfill\\bigcup_{i=1}^mC_i}{=}{\\binom{[n]}{2}}\n  \n  By the definition of $n$, for some $i\\in(m]$, there is a set $H$ of cardinality $n_i$ such that $\\binom{H}{2}\\subseteq C_i$.\n  Assume $i{=}1$ for definiteness.\n\n  Write $a'_1,\\dots,a'_{n_1}$ and $b'_0,\\dots,b'_{n_1}$ for the tuples obtained by restricting $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$ to the indexes in $H$.\n  These tuples witness that $\\phi_1(x\\,;z)$ has ladder index at least $n_1+1$, which contradicts the assumption of the lemma.\n\\end{proof}\n\nTo apply the lemma to the formula $\\psi(x\\,;z_0,\\dots,z_n)$ in the proposition above, let $z=z_0,\\dots,z_n$ and let $\\phi_i(x\\,;z)=\\phi(x\\,;z_i)$.\nThat is, $\\phi_i(x\\,;z)$ defines a relation between $\\U$ and $\\V^{n+1}$ that depends only on the $i$-th coordinate.\nThe ladder index of every $\\phi_i(x\\,;z)$ as a relation between $\\U$ and $\\V^{n+1}$ is the same as that of $\\phi(x\\,;z)$ as a relation between $\\U$ and $\\V$.\n\n\n\\begin{lemma}\n  If $\\Aa$ is approximable from below by a stable formula $\\phi(x\\,;z)$ then for some $b_0,\\dots,b_n\\in\\V$ \n  \\\\[-1ex]\n  \\ceq{\\hfill \\Aa}{=}{\\bigcup^n_{i=0}\\phi(\\U\\,;b_i)}\n  \n  where $n+1$ is the ladder index of $\\phi(x\\,;z)$. \n\\end{lemma}\n  \n\\begin{proof}\n  To obtain the $b_k$, we construct a ladder by stages.\n  Pick any $b_0$ such that $\\phi(\\U\\,;b_0)\\subseteq\\Aa$.\n  Now, suppose that $a_1,\\dots,a_k$ and $b_0,\\dots,b_k$ have been defined.\n  If \n  \\\\[-1ex]\n  \\ceq{\\hfill\\Aa}{\\subseteq}{\\bigcup_{i=0}^k\\phi(\\U\\,;b_i)}\n\n  we set $b_{k+1}=\\dots=b_n=b_k$ and stop.\n  As the construction guarantees the converse inclusion, we have the required parameters.\n  Otherwise pick any element\n\n  \\ceq{\\hfill a_{k+1}}{\\in}{\\Aa\\sm\\bigcup_{i=0}^k\\phi(\\U\\,;b_i)}\n\n  and some $b_{k+1}$ such that\n\n  \\ceq{\\hfill \\{a_1,\\dots,a_{k+1}\\}}{\\subseteq}{\\phi(\\U\\,;b_{k+1})\\medrel{\\subseteq}\\Aa}.\n\n  Such a parameter $b_{k+1}$ exists because $\\Aa$ is approximable from below.\n  Note that at stage $n$ we have constructed a ladder of length $n+1$ for $\\phi(x\\,;z)$.\n  In fact for all $h\\in(n]$ and $k\\in[n]$ we have\n  \n  \\ceq{\\hfill\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k.}\n  \n  As $\\phi(x\\,;z)$ has ladder index $n+1$, it is not possible to prolong this ladder, hence\n\n  \\ceq{\\hfill\\Aa}{=}{\\bigcup_{i=0}^n\\phi(\\U\\,;b_i)}\n\n  as required.\n\\end{proof}\n  \nThe main theorem of this section is an immediate corollary of the three lemmas above.\n\n\\begin{theorem}\\label{thm_stable_definability}\n  Let $\\phi(x\\,;z)$ be stable.\n  Then there is a formula of the form\n\n  \\ceq{\\hfill\\psi(x\\,;\\bar z)}{=}{\\bigvee_{j=0}^m\\bigwedge_{i=0}^n\\phi(x\\,;z_{i,j})}\n  \n  such that every set $\\Aa\\subseteq\\U$ approximable by $\\phi(x\\,;z)$ is definable by $\\psi(x\\,;\\bar b)$ for some $n{\\times}m$-tuple of parameters $\\bar b\\in\\V^{n\\times m}$.\\QED\n\\end{theorem}\n\nWe conclude this section with the sketch of an argument that shows that stable formulas have finite 2-rank, that is, 
\\ssf{1}$\\IMP$\\ssf{2} of Theorem~\\ref{thm_hodges}.\n\n% Assume the 2-rank is infinite and reason for a contradiction.\n% Assume for the moment that the rank is $\\omega$, in the sense that there is an infinite branching tree $\\<a_r:r\\in2^{<\\omega}\\>$. \n% Let $\\<b_s:s\\in2^{\\omega}\\>$ be witnesses of \\ssf{2r}.\n% Let $Q\\subseteq\\{b_s:s\\in2^{\\omega}\\}$ the set of sequences that are eventually $0$.\n% Note that in the proof of Theorem~\\ref{thm_stable_definability} we can require that $\\bar b\\in Q^{n\\times m}$.\n% But there are \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\section{Stable Erd\\H{o}s-Hajnal}\n\n\\end{document}", "meta": {"hexsha": "b316739d1cc264f69a661bd35a84753a86a0f3e3", "size": 21568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "stable.tex", "max_stars_repo_name": "LorenzoNot/scombinatorics", "max_stars_repo_head_hexsha": "64392c4f5793019b479376acb5eb248115335b93", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "stable.tex", "max_issues_repo_name": "LorenzoNot/scombinatorics", "max_issues_repo_head_hexsha": "64392c4f5793019b479376acb5eb248115335b93", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "stable.tex", "max_forks_repo_name": "LorenzoNot/scombinatorics", "max_forks_repo_head_hexsha": "64392c4f5793019b479376acb5eb248115335b93", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.2390057361, "max_line_length": 233, "alphanum_fraction": 0.6014929525, "num_tokens": 7707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5824964404042623}}
{"text": "\\section{Green's Theorems}\r\n\\noindent\r\nSimilar to how we had the fundamental theorem of calculus (FTC) for single-variable calculus, there are several higher-dimensional versions of the FTC.\\\\\r\nRecall that the fundamental theorem of calculus is $\\int_{a}^{b}{f^\\prime(x)\\mathrm{d}x} = f(b) - f(a)$. One way to think of this statement is \u201csumming up the derivative on a closed interval is the same as the \u2018sum\u2019 of $f$ on the boundary.\u201d We will see that many of these higher versions of the FTC deal with intervals and boundaries.\r\n\r\n\\input{./vectorAnalysis/greensTheoremCirculation}\r\n\\input{./vectorAnalysis/greensTheoremFlux}", "meta": {"hexsha": "5c0319ff3e9350a321236a83dae94cccf4a2a3c6", "size": 625, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/vectorAnalysis/greensTheorems.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "multiCalc/vectorAnalysis/greensTheorems.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "multiCalc/vectorAnalysis/greensTheorems.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 89.2857142857, "max_line_length": 335, "alphanum_fraction": 0.7648, "num_tokens": 162, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.5824056088361771}}
{"text": "\\input{../../style/preamble} \n\\input{../../latex-math/basic-math}\n\\input{../../latex-math/basic-ml}\n\\input{../../latex-math/ml-ensembles.tex}\n\\input{../../latex-math/ml-trees.tex}\n\n\\newcommand{\\titlefigure}{figure/boosting_classif_title.png}\n\\newcommand{\\learninggoals}{\n  \\item Transfering gradient boosting for regression to binary classification problems\n  \\item Introducing gradient boosting for multiclass problems\n}\n\n\\title{Introduction to Machine Learning}\n\\date{}\n\n\\begin{document}\n\n\\lecturechapter{Gradient Boosting for Classification}\n\\lecture{Introduction to Machine Learning}\n\n\n\\begin{vbframe}{Binary classification}\n\n\nFor $\\Yspace = \\{0, 1\\}$, we simply have to select an appropriate loss function, so let us\nuse Bernoulli loss as in logistic regression:\n\n$$ \\Lxy = - y \\cdot \\fx + \\log(1 + \\exp(\\fx)).$$\n\nThen,\n\n\\vspace{-0.5cm}\n\n\\begin{align*}\n\\tilde{r}(f) &=-\\pd{\\Lxy}{\\fx} \\\\\n&= y - \\frac{\\exp(\\fx)}{1 + \\exp(\\fx)} \\\\\n&= y - \\frac{1}{1 + \\exp(-\\fx)} = y - s(\\fx).\n\\end{align*}\n\nHere, $s(\\fx)$ is the logistic function, applied to a scoring model.\nHence, effectively, the pseudo-residuals are $y - \\pix$.\n\nThrough $\\pix = s(\\fx)$ we can also estimate posterior probabilities.\n\n\\framebreak\n%\n\n\\begin{itemize}\n\\item Using this loss function, we can simply run GB as for regression.\n  NB: Also here, we fit regression base learners against our numerical \n  vector of pseudo-residuals with $L2$ loss. \n%\\item The tree extension for boosting for the binary case works exactly as in regression.\n\\item  We could also have used the exponential loss for classification with \n  GB. It can be shown that the resulting GB algorithm is basically equivalent \n    to AdaBoost. In practice there is no big difference, although Bernoulli loss \n    makes a bit more sense from a theoretical (maximum likelihood) perspective.\n\\item It follows that GB is a generalization of AdaBoost which can also use other loss functions and be used for different ML scenarios.\n\\end{itemize}\n\n\n\\end{vbframe}\n\n\\begin{vbframe}{Example}\n\nWe now illustrate the boosting iterations for a classification example in a similar manner as we did for regression.\nHowever, we will now look at a simulation example with 2 instead of 1 influencial features and one binary target variable.\n\n\\begin{columns}\n\\column{5.5cm}\n\\begin{itemize}\n\\item We used the \\texttt{mlbench} dataset \\texttt{circle} with $n = 50$ observations.\n\\item We used the Bernoulli loss to calculate pseudo-residuals and fitted in each iteration a base learner (regression tree with max. depth of 3) on them.\n\\item We initialized with $f^{[0]} = log(1) = 0.$\n\\end{itemize}\n\\column{4.5cm}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_example.png}\n\\end{center}\n\n\\end{columns}\n\n\n\n\\end{vbframe}\n\n\\begin{frame}{Example}\nLeft: Background color refers to prediction probabilites $\\hat{\\pi}^{[m]}$ and points to $y$ (probabilities);  \nRight: Background color refers to predictions of base learner $\\hat{b}^{[m]}$ and points to $\\tilde{r}$. 
\\framebreak\n%\n\n\\begin{itemize}\n\\item Using this loss function, we can simply run GB as for regression.\n  NB: Also here, we fit regression base learners against our numerical \n  vector of pseudo-residuals with $L2$ loss. \n%\\item The tree extension for boosting for the binary case works exactly as in regression.\n\\item  We could also have used the exponential loss for classification with \n  GB. It can be shown that the resulting GB algorithm is basically equivalent \n    to AdaBoost. In practice there is no big difference, although the Bernoulli loss \n    makes a bit more sense from a theoretical (maximum likelihood) perspective.\n\\item It follows that GB is a generalization of AdaBoost which can also use other loss functions and be used for different ML scenarios.\n\\end{itemize}\n\n\n\\end{vbframe}\n\n\\begin{vbframe}{Example}\n\nWe now illustrate the boosting iterations for a classification example in a similar manner as we did for regression.\nHowever, we will now look at a simulation example with two influential features instead of one, and a binary target variable.\n\n\\begin{columns}\n\\column{5.5cm}\n\\begin{itemize}\n\\item We used the \\texttt{mlbench} dataset \\texttt{circle} with $n = 50$ observations.\n\\item We used the Bernoulli loss to calculate pseudo-residuals and fitted in each iteration a base learner (regression tree with max. depth of 3) on them.\n\\item We initialized with $f^{[0]} = \\log(1) = 0.$\n\\end{itemize}\n\\column{4.5cm}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_example.png}\n\\end{center}\n\n\\end{columns}\n\n\n\n\\end{vbframe}\n\n\\begin{frame}{Example}\nLeft: Background color refers to prediction probabilities $\\hat{\\pi}^{[m]}$ and points to $y$ (probabilities);  \nRight: Background color refers to predictions of base learner $\\hat{b}^{[m]}$ and points to $\\tilde{r}$. \n\n\n\\vfill\n\n\\only<1>{ \n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_1.png}\n\n\n\\vfill\nIteration 1\n}\n\\only<2>{ \n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_2.png}\n\n\\vfill\nIteration 2\n}\n\n\n\n\\only<3>{ \n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_5.png}\n\n\n\\vfill\nIteration 5\n}\n\\only<4>{ \n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_10.png}\n\n\\vfill\nIteration 10\n}\n\n\\only<5>{ \n\\includegraphics[width=\\textwidth]{figure_man/boosting_classif_100.png}\n\n\\vfill\nIteration 100\n}\n\n\n\\end{frame}\n\n%\\section{Interactive Playgrounds}\n\n% \\begin{vbframe}{Gradient Boosting Playground}\n% \\begin{center}\n% \n% \\includegraphics[width=0.7\\textwidth]{figure_man/gbm_playground.png}\n% \n% \\href{http://arogozhnikov.github.io/2016/07/05/gradient_boosting_playground.html}{\\beamergotobutton{Open in browser.}}\n% \n% \\end{center}\n% \\end{vbframe}\n\n\\begin{vbframe}{mlrPlayground}\n\\begin{center}\n\n\\includegraphics[width=\\linewidth]{figure_man/mlrplayground_welcome.png}\n\n\\href{https://compstat-lmu.shinyapps.io/mlrPlayground/}{\\beamergotobutton{Open in browser.}}\n\n\\end{center}\n\\end{vbframe}\n\n% \\section{Gradient Boosting for Multiclass Problems}\n\n\\begin{vbframe}{Multiclass problems}\n\nWe proceed as in softmax regression and model a categorical distribution with multinomial / log loss.\nFor $\\Yspace = \\{1, \\ldots, g\\}$, we create $g$ discriminant functions $\\fkx$, one for each class, each being an \\textbf{additive} model of base learners.\n\nWe define the $\\pi_k(\\xv)$ through the softmax function:\n$$ \\pikx = s_k(f_1(\\xv), \\ldots, f_g(\\xv)) = \\exp(\\fkx) / \\sum_{j=1}^g \\exp(f_j(\\xv)). $$\n\nMultinomial loss $L$:\n$$ L(y, f_1(\\xv), \\ldots, f_g(\\xv)) = - \\sumkg \\mathds{1}_{\\{y = k\\}} \\ln \\pikx. $$\n\nPseudo-residuals:\n$$-\\pd{L(y, f_1(\\xv), \\ldots, f_g(\\xv))}{\\fkx} =  \\mathds{1}_{\\{y = k\\}} - \\pikx. $$\n
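\nFor instance (illustrative numbers): with $g = 3$ and scores $(f_1(\\xv), f_2(\\xv), f_3(\\xv)) = (1, 0, -1)$, the softmax gives $\\pi(\\xv) \\approx (0.67, 0.24, 0.09)$; if $y = 2$, the pseudo-residuals across the classes are $\\approx (-0.67, 0.76, -0.09)$. They always sum to zero, since $\\sumkg (\\mathds{1}_{\\{y = k\\}} - \\pikx) = 1 - 1 = 0$.\n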
\n\\framebreak\n\n\\begin{algorithm}[H]\n  \\begin{footnotesize}\n  \\begin{center}\n  \\caption{GB for Multiclass}\n    \\begin{algorithmic}[1]\n      \\State Initialize $f_{k}^{[0]}(\\xv) = 0,\\ k = 1,\\ldots,g$\n      \\For{$m = 1 \\to M$}\n      \\State Set $\\pik^{[m]}(\\xv) = \\frac{\\exp(f_k^{[m-1]}(\\xv))}{\\sum_j \\exp(f_j^{[m-1]}(\\xv))}, k = 1,\\ldots,g$\n            \\For{$k = 1 \\to g$}\n            \\State For all $i$: Compute $\\rmi_k = \\mathds{1}_{\\{\\yi = k\\}} - \\pik^{[m]}(\\xi)$\n              \\State Fit a regression base learner $\\hat{b}^{[m]}_k$ to the pseudo-residuals $\\rmi_k$.\n              \\State Obtain $\\betamh_k$ by constant learning rate or line-search.\n              \\State Update $\\hat{f}_k^{[m]} = \\hat{f}_k^{[m-1]} + \\betamh_k \\hat{b}^{[m]}_k$\n            \\EndFor\n      \\EndFor\n    \\State Output $\\hat{f}_1^{[M]}, \\ldots, \\hat{f}_g^{[M]}$\n    \\end{algorithmic}\n    \\end{center}\n    \\end{footnotesize}\n\\end{algorithm}\n\n\\end{vbframe}\n\n\n\n\\endlecture\n\\end{document}\n\n", "meta": {"hexsha": "0c20537f0ca2df952a9ae6c435fa0101d5ea4d14", "size": 5804, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/boosting/slides-boosting-gbm-classification.tex", "max_stars_repo_name": "compstat-lmu/lecture_i2ml", "max_stars_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_issues_repo_path": "slides/boosting/slides-boosting-gbm-classification.tex", "max_issues_repo_name": "compstat-lmu/lecture_i2ml", "max_issues_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 323, "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_forks_repo_path": "slides/boosting/slides-boosting-gbm-classification.tex", "max_forks_repo_name": "compstat-lmu/lecture_i2ml", "max_forks_repo_head_hexsha": "f02753d581fb1de3f5db70ae5b23d50cf39e1326", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "avg_line_length": 28.7326732673, "max_line_length": 161, "alphanum_fraction": 0.6952101999, "num_tokens": 1808, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789132480439, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.582405607241334}}
{"text": "\n\n\\begin{wrapfigure}{r}{0.7\\textwidth}\n    %\\centering\n    \\includegraphics[width = 0.7\\textwidth]{Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/Figs/DBNs/Complex2TDBN.PNG}\n    \\caption{A DBN used for \\textbf{S}imultaneous \\textbf{L}ocalisation \\textbf{a}nd \\textbf{M}apping (SLAM), based on \\cite[p.~311]{Thrun:2005:ProbabilisticRobotics}.}\n    \\label{fig:2TDBNExample}\n\\end{wrapfigure}\n\nDBNs are a generalization of HMMs and are used to model time series. The key difference between DBNs and HMMs is that DBNs are not limited in how they decompose the hidden state variables of a complex stochastic system into the variables that represent its constituent conditional distributions \\cite{AIAMA}. Rather than use a single random variable, $X_t$ to represent the state space, a set of random variables are used \\cite{Murphy1994DynamicLearning}. DBNs can be considered as a special case of a Bayesian Network with an infinite number of nodes and repeating structure which specifies the transition probabilities between one time step and the next \\cite[p.~204]{KollerPGM}. The hidden state variables are represented by nodes which have an associated conditional probability distribution (CPD) which states the probability of observing a value of a random variable given its parents: $p(X_i | Parents(X_i))$, where $i$ is the index of a node in the network \\cite[p.~14]{Murphy1994DynamicLearning}. Given this representation, it is possible to calculate the joint distribution, $p(X_t | X_{t-1})$, of all $N$ state variables at time $t$ given the joint distribution at $t-1$ as \\cite[p.~14]{Murphy1994DynamicLearning}:\n\\[\np(X_t | X_{t-1}) = \\prod_{i=1}^{N}p(X_t^i | Parents(X_t^i)\n\\]\n$i$ denotes the index of each $N$ variables in the network. \\citeauthor{KollerPGM} provide a technical definition of DBNs: \n\"\\textit{A dynamic Bayesian network (DBN) is a pair $\\langle B_0, B_\\rightarrow \\rangle$, where $B_0$ is a Bayesian network over $X^{(0)}$, representing the initial distribution over states, and $B_\\rightarrow$ is a 2-TBN for the process. For any desired time span T $\\geq$ 0, the distribution over $X^{(0:T)}$ is defined as a unrolled Bayesian network, where, for any $i$ = 1, . . . , $n$:\n\\begin{itemize}\n    \\item The structure and CPDs of $X^{(0)}_i$ are the same as those for $X_i$ in $B_0$,\n    \\item The structure and CPD of $X^{(t)}_i$ for $t > 0$ are the same as those for $X^{'}_i$ in $B_\\rightarrow$.\n\\end{itemize}}\" \\cite[p.~204]{KollerPGM}. What this essentially means is that a DBN is a compact way of specifying an infinite set of Bayesian Networks (one for each $T>0$) with repeating structure.\n\n\nTechnically, every DBN can be represented as a HMM and visa versa, however, the number of parameters that need to be determined to represent DBNs can be significantly less than that of HMMs \\cite{KollerPGM}. For example, consider the case of $n$ discrete random variables, each with support of size $m$. Specifying the full joint distribution without assuming any conditional independences between variables would require a number of probabilities exponential in the number of random variables, $m^{n}-1$, whereas specifying the same joint distribution as factored conditional distributions may require far fewer, reaching a number of probabilities directly proportional to the number of random variables, $n \\times m$ in the degenerate case \\cite[p.~63]{KollerPGM}. 
\n%\\subsubsection{A Note on Bayesian Networks}\n%A Bayesian Network (BN) is a graphical way to represent a particular factorization of a joint distribution. For a detailed explanation, the reader is referred to \\cite{KollerPGM}. To fully explain a complex system, it is often natural to model it as the joint distribution of a number of random variables. This is in general intractable \\cite{KollerPGM}. In order to circumvent this, independence properties in the distribution can be exploited to provide a much more compact representation of the distribution. BNs exploit the fact that independence is a strong notion that doesn't often occur in the real-world; however conditional independence is a weaker property that is far more prevalent, which still leads to the desired compact representation. BNs are described by a directed acyclic graph (DAG), $G$. The nodes in the graph correspond to the random variables whose joint distribution is of interest, and the arcs represent conditional independences. Specifically, if one Burglary points to Alarm as in figure \\ref{fig:BayesianNetwork}, it is implied that the distribution of Alarm is conditionally independent on all other nodes in the network given Burglary. This means that only local conditional probability distributions must be provided in order to specify the full joint distribution.\n\n%\\begin{figure}\n%example bayesian network figure\n%\\begin{tikzpicture}[\n%  node distance=1cm and 0cm,\n%  mynode/.style={draw,ellipse,text width=2cm,align=center}\n%]\n%\\node[mynode] (i) {Burglary};\n%\\node[mynode,below right=of i] (g) {Alarm};\n%\\node[mynode,above right=of g] (c) {Earthquake};\n%\\path %(ra) edge[latex-] (sp)\n%(g) edge[latex-] (c) \n%(g) edge[latex-] (i);\n%\\node[left=0.45cm of i]\n%{\n%\\begin{tabular}{cM{3}M{3}}\n%\\toprule\n%\\multicolumn{2}{c}{Burglary} \\\\\n%\\multicolumn{1}{c}{T} & \\multicolumn{1}{c}{F} \\\\\n%\\cmidrule(r){1-2}\n%0.001 & 0.999 \\\\\n%\\bottomrule\n%\\end{tabular}\n%};\n%\\node[right=0.45cm of c]\n%{\n%\\begin{tabular}{cM{3}M{3}}\n%\\toprule\n%\\multicolumn{2}{c}{Earthquake} \\\\\n%\\multicolumn{1}{c}{T} & \\multicolumn{1}{c}{F} \\\\\n%\\cmidrule(r){1-2}\n%0.008 & 0.992 \\\\\n%\\bottomrule\n%\\end{tabular}\n%};\n%\\node[below=0.5cm of g]\n%{\n%\\begin{tabular}{ccM{2}M{2}}\n%\\toprule\n%& & \\multicolumn{2}{c}{Alarm} \\\\\n%\\multicolumn{2}{l}{Burglary Earthquake} & %\\multicolumn{1}{c}{T} & \\multicolumn{1}{c}{F} \\\\\n%\\cmidrule(r){1-2}\\cmidrule(l){3-4}\n%F & F & 0.01 & 0.99 \\\\\n%F & T & 0.95 & 0.05 \\\\\n%T & F & 0.8 & 0.2 \\\\\n%T & T & 0.99 & 0.01 \\\\\n%\\bottomrule\n%\\end{tabular}\n%};\n%\\end{tikzpicture}\n\n%\\caption{Simple Bayesian Network based on %\\cite[P.~512]{AIAMA}}\n%\\label{fig:BayesianNetwork}\n%\\end{figure}\n\n\n\n%\\subsubsection{Dynamic Bayesian Network Technical Description}\n\n\n\n%, which represent a general temporal probability model that describes a random system which is assumed to have a number of random variables, some of which are observable and some not \\cite{AIAMA}.\n%Dynamic Bayesian networks are usually used to represent the dynamics of a random system over time and can be described generally by specifying how the system transitions from its state at $t-1$ to $t$, if the first order Markov property is assumed. This thesis is concerned with the case when the first order Markov property holds. 
\nIt is often convenient to categorise the random variables in the network into state variables (assumed to be hidden), control variables and observation variables \\cite{Thrun:2005:ProbabilisticRobotics}. An example of a Dynamic Bayesian Network can be seen in Figure \\ref{fig:2TDBNExample}. The hidden state variables are shaded in light grey, the observation variables in light orange and the control variables in light green. The directed arcs represent \"instantaneous\" causation, as with regular Bayesian Networks \\cite[p.~15]{Murphy1994DynamicLearning}. Note that unlike HMMs, there isn't a requirement to have a single state and observation variable across each time-step and other variables can be introduced into the model.\n\n%The nodes in the graph correspond to the random variables whose joint distribution we would like to calculate, and the arcs represent conditional independences \\cite{KollerPGM}. Specifically, if one node points to another. \\par\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "dba6aecd0b4e6c423dd70f73efbc1bf45d8b6efb", "size": 7630, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/DynamicBayesNets.tex", "max_stars_repo_name": "DavidLSmyth/ResearchMScThesis", "max_stars_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/DynamicBayesNets.tex", "max_issues_repo_name": "DavidLSmyth/ResearchMScThesis", "max_issues_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-18T11:59:42.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-18T11:59:42.000Z", "max_forks_repo_path": "Chapters/BackgroundKnowledgeAndRelatedWork/MultiAgentTargetDetectionBackground/DynamicBayesNets.tex", "max_forks_repo_name": "DavidLSmyth/ResearchMScThesis", "max_forks_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.8039215686, "max_line_length": 1301, "alphanum_fraction": 0.756225426, "num_tokens": 2069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8289388040954683, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.582354430781046}}
{"text": "\n\\subsection{Structural breaks}\n\nTesting for structural breaks with the Chow test.\n\n", "meta": {"hexsha": "47355f698ed26cfd6024fab054377ad227304eab", "size": 84, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/time/03-04-Structural_breaks.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/time/03-04-Structural_breaks.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/time/03-04-Structural_breaks.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.0, "max_line_length": 49, "alphanum_fraction": 0.7976190476, "num_tokens": 17, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8289388083214156, "lm_q2_score": 0.702530051167069, "lm_q1q2_score": 0.5823544234244133}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Contents: TeX and LaTeX and AMS symbols for Maths\n% $Id: lssym.tex 477 2011-06-18 13:47:14Z oetiker $\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\section{List of Mathematical Symbols}  \\label{symbols}\n \nThe following tables demonstrate all the symbols normally accessible\nfrom \\emph{math mode}.  \n\n%\n% Conditional Text in case the AMS Fonts are installed\n%\nNote that some tables show symbols only accessible after loading the \\pai{amssymb}\npackage in the preamble of your document\\footnote{The tables were derived\n  from \\texttt{symbols.tex} by David~Carlisle and subsequently changed\nextensively as suggested by Josef~Tkadlec.}. If the \\AmS{} package and\nfonts are not installed on your system, have a look at\n\\CTANref|CTAN:pkg/amslatex|. An even more comprehensive list of\nsymbols can be found at \\CTANref|CTAN:info/symbols/comprehensive|.\n \n\\begin{table}[!h]\n\\caption{Math Mode Accents.}  \\label{mathacc}\n\\begin{symbols}{*3{cl}}\n\\W{\\hat}{a}   & \\W{\\check}{a} & \\W{\\tilde}{a}       \\\\\n\\W{\\grave}{a} & \\W{\\dot}{a}   & \\W{\\ddot}{a}        \\\\\n\\W{\\bar}{a}   & \\W{\\vec}{a}   & \\W{\\widehat}{AAA}   \\\\  \n\\W{\\acute}{a} & \\W{\\breve}{a} & \\W{\\widetilde}{AAA} \\\\\n\\W{\\mathring}{a}\n\\end{symbols}\n\\end{table}\n \n\n\\begin{table}[!h]\n\\caption{Greek Letters.} \\label{greekletters}\n\\bigskip\nThere is no uppercase of some of the letters like \\ci{Alpha}, \\ci{Beta} and so\non, because they look the same as normal roman letters: A, B\\ldots\n\\begin{symbols}{*4{cl}}\n \\X{\\alpha}     & \\X{\\theta}     & \\X{o}          & \\X{\\upsilon}  \\\\\n \\X{\\beta}      & \\X{\\vartheta}  & \\X{\\pi}        & \\X{\\phi}      \\\\\n \\X{\\gamma}     & \\X{\\iota}      & \\X{\\varpi}     & \\X{\\varphi}   \\\\\n \\X{\\delta}     & \\X{\\kappa}     & \\X{\\rho}       & \\X{\\chi}      \\\\\n \\X{\\epsilon}   & \\X{\\lambda}    & \\X{\\varrho}    & \\X{\\psi}      \\\\\n \\X{\\varepsilon}& \\X{\\mu}        & \\X{\\sigma}     & \\X{\\omega}    \\\\\n \\X{\\zeta}      & \\X{\\nu}        & \\X{\\varsigma}  &               \\\\\n \\X{\\eta}       & \\X{\\xi}        & \\X{\\tau} & \\\\\n \\X{\\Gamma}     & \\X{\\Lambda}    & \\X{\\Sigma}     & \\X{\\Psi}      \\\\\n \\X{\\Delta}     & \\X{\\Xi}        & \\X{\\Upsilon}   & \\X{\\Omega}    \\\\\n \\X{\\Theta}     & \\X{\\Pi}        & \\X{\\Phi} \n\\end{symbols}\n\\end{table}\n\n\n\n\\begin{table}[!tbp]\n\\caption{Binary Relations.} \\label{binaryrel}\n\\bigskip\nYou can negate the following symbols by prefixing them with a \\ci{not} command.\n\\begin{symbols}{*3{cl}}\n \\X{<}           & \\X{>}           & \\X{=}          \\\\\n \\X{\\leq}or \\verb|\\le|   & \\X{\\geq}or \\verb|\\ge|   & \\X{\\equiv}     \\\\\n \\X{\\ll}         & \\X{\\gg}         & \\X{\\doteq}     \\\\\n \\X{\\prec}       & \\X{\\succ}       & \\X{\\sim}       \\\\\n \\X{\\preceq}     & \\X{\\succeq}     & \\X{\\simeq}     \\\\\n \\X{\\subset}     & \\X{\\supset}     & \\X{\\approx}    \\\\\n \\X{\\subseteq}   & \\X{\\supseteq}   & \\X{\\cong}      \\\\\n \\X{\\sqsubset}$^a$ & \\X{\\sqsupset}$^a$ & \\X{\\Join}$^a$    \\\\\n \\X{\\sqsubseteq} & \\X{\\sqsupseteq} & \\X{\\bowtie}    \\\\\n \\X{\\in}         & \\X{\\ni}, \\verb|\\owns|  & \\X{\\propto}    \\\\\n \\X{\\vdash}      & \\X{\\dashv}      & \\X{\\models}    \\\\\n \\X{\\mid}        & \\X{\\parallel}   & \\X{\\perp}      \\\\\n \\X{\\smile}      & \\X{\\frown}      & \\X{\\asymp}     
\\\\\n \\X{:}           & \\X{\\notin}      & \\X{\\neq}or \\verb|\\ne|\n\\end{symbols}\n\\centerline{\\footnotesize $^a$Use the \\textsf{latexsym} package to access this symbol}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{Binary Operators.}\n\\begin{symbols}{*3{cl}}\n \\X{+}              & \\X{-}              & &                 \\\\\n \\X{\\pm}            & \\X{\\mp}            & \\X{\\triangleleft} \\\\\n \\X{\\cdot}          & \\X{\\div}           & \\X{\\triangleright}\\\\\n \\X{\\times}         & \\X{\\setminus}      & \\X{\\star}         \\\\\n \\X{\\cup}           & \\X{\\cap}           & \\X{\\ast}          \\\\\n \\X{\\sqcup}         & \\X{\\sqcap}         & \\X{\\circ}         \\\\\n \\X{\\vee}, \\verb|\\lor|     & \\X{\\wedge}, \\verb|\\land|  & \\X{\\bullet}       \\\\\n \\X{\\oplus}         & \\X{\\ominus}        & \\X{\\diamond}      \\\\\n \\X{\\odot}          & \\X{\\oslash}        & \\X{\\uplus}        \\\\\n \\X{\\otimes}        & \\X{\\bigcirc}       & \\X{\\amalg}        \\\\\n \\X{\\bigtriangleup} &\\X{\\bigtriangledown}& \\X{\\dagger}       \\\\\n \\X{\\lhd}$^a$         & \\X{\\rhd}$^a$         & \\X{\\ddagger}      \\\\\n \\X{\\unlhd}$^a$       & \\X{\\unrhd}$^a$       & \\X{\\wr}\n\\end{symbols}\n \n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{BIG Operators.}\n\\begin{symbols}{*4{cl}}\n \\X{\\sum}      & \\X{\\bigcup}   & \\X{\\bigvee}  \\\\\n \\X{\\prod}     & \\X{\\bigcap}   & \\X{\\bigwedge} \\\\\n \\X{\\coprod}   & \\X{\\bigsqcup} & \\X{\\biguplus} \\\\\n \\X{\\int}      & \\X{\\oint}     & \\X{\\bigodot} \\\\\n \\X{\\bigoplus} & \\X{\\bigotimes} & \\\\\n\\end{symbols}\n \n\\end{table}\n\n\n\\begin{table}[!tbp]\n\\caption{Arrows.} \\label{tab:arrows}\n\\begin{symbols}{*2{cl}}\n \\X{\\leftarrow}or \\verb|\\gets|& \\X{\\longleftarrow} \\\\\n \\X{\\rightarrow}or \\verb|\\to|& \\X{\\longrightarrow} \\\\\n \\X{\\leftrightarrow}    & \\X{\\longleftrightarrow} \\\\\n \\X{\\Leftarrow}         & \\X{\\Longleftarrow}     \\\\\n \\X{\\Rightarrow}        & \\X{\\Longrightarrow}    \\\\\n \\X{\\Leftrightarrow}    & \\X{\\Longleftrightarrow}\\\\\n \\X{\\mapsto}            & \\X{\\longmapsto}        \\\\\n \\X{\\hookleftarrow}     & \\X{\\hookrightarrow}    \\\\\n \\X{\\leftharpoonup}     & \\X{\\rightharpoonup}    \\\\\n \\X{\\leftharpoondown}   & \\X{\\rightharpoondown}  \\\\\n \\X{\\rightleftharpoons} & \\X{\\iff}(bigger spaces) \\\\\n \\X{\\uparrow}   & \\X{\\downarrow} \\\\\n \\X{\\updownarrow} & \\X{\\Uparrow} \\\\\n \\X{\\Downarrow} &  \\X{\\Updownarrow} \\\\\n \\X{\\nearrow} &  \\X{\\searrow} \\\\\n  \\X{\\swarrow} & \\X{\\nwarrow} \\\\\n \\X{\\leadsto}$^a$\n\\end{symbols}\n\\centerline{\\footnotesize $^a$Use the \\textsf{latexsym} package to access this symbol}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{Arrows as Accents.}  \\label{arrowacc}\n\\begin{symbols}{*2{cl}}\n\\W{\\overrightarrow}{AB}     & \\W{\\underrightarrow}{AB}     \\\\\n\\W{\\overleftarrow}{AB}      & \\W{\\underleftarrow}{AB}      \\\\\n\\W{\\overleftrightarrow}{AB} & \\W{\\underleftrightarrow}{AB} \\\\\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{Delimiters.}\\label{tab:delimiters}\n\\begin{symbols}{*3{cl}}\n \\X{(}            & \\X{)}            & \\X{\\uparrow} \\\\\n \\X{[}or \\verb|\\lbrack|   & \\X{]}or \\verb|\\rbrack|  & \\X{\\downarrow}   \\\\\n \\X{\\{}or \\verb|\\lbrace|  & \\X{\\}}or \\verb|\\rbrace|  & \\X{\\updownarrow} \\\\\n \\X{\\langle}      & \\X{\\rangle}      &  \\X{\\Uparrow} \\\\\n \\X{|}or \\verb|\\vert| & \\X{\\|}or \\verb|\\Vert| & 
\\X{\\Downarrow} \\\\\n  \\X{/}            & \\X{\\backslash}   &   \\X{\\Updownarrow}  \\\\ \n \\X{\\lfloor}      & \\X{\\rfloor}      &  \\\\\n \\X{\\rceil}       &  \\X{\\lceil}  &&\\\\\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{Large Delimiters.}\n\\begin{symbols}{*3{cl}}\n \\Y{\\lgroup}      & \\Y{\\rgroup}      & \\Y{\\lmoustache}  \\\\\n \\Y{\\arrowvert}   & \\Y{\\Arrowvert}   & \\Y{\\bracevert} \\\\\n \\Y{\\rmoustache} \\\\\n\\end{symbols}\n\\end{table}\n\n\n\\begin{table}[!tbp]\n\\caption{Miscellaneous Symbols.}\n\\begin{symbols}{*4{cl}}\n \\X{\\dots}       & \\X{\\cdots}      & \\X{\\vdots}      & \\X{\\ddots}     \\\\\n \\X{\\hbar}       & \\X{\\imath}      & \\X{\\jmath}      & \\X{\\ell}       \\\\\n \\X{\\Re}         & \\X{\\Im}         & \\X{\\aleph}      & \\X{\\wp}        \\\\\n \\X{\\forall}     & \\X{\\exists}     & \\X{\\mho}$^a$      & \\X{\\partial}   \\\\\n \\X{'}           & \\X{\\prime}      & \\X{\\emptyset}   & \\X{\\infty}     \\\\\n \\X{\\nabla}      & \\X{\\triangle}   & \\X{\\Box}$^a$     & \\X{\\Diamond}$^a$ \\\\\n \\X{\\bot}        & \\X{\\top}        & \\X{\\angle}      & \\X{\\surd}      \\\\\n\\X{\\diamondsuit} & \\X{\\heartsuit}  & \\X{\\clubsuit}   & \\X{\\spadesuit} \\\\\n \\X{\\neg}or \\verb|\\lnot| & \\X{\\flat}       & \\X{\\natural}    & \\X{\\sharp}\n\\end{symbols}\n\\centerline{\\footnotesize $^a$Use the \\textsf{latexsym} package to access this symbol}\n\\end{table}\n\n\n\\begin{table}[!tbp]\n\\caption{Non-Mathematical Symbols.}\n\\bigskip\nThese symbols can also be used in text mode.\n\\begin{symbols}{*4{cl}}\n \\SC{\\dag}  &  \\SC{\\S}  &  \\SC{\\copyright} &  \\SC{\\textregistered}  \\\\\n \\SC{\\ddag} &  \\SC{\\P}  &  \\SC{\\pounds}    &  \\SC{\\%}               \\\\\n\\end{symbols}\n\\end{table}\n\n\\clearpage\n\n%\n%\n% If the AMS Stuff is not available, we drop out right here :-)\n%\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Delimiters.}\\label{AMSD}\n\\bigskip\n\\begin{symbols}{*4{cl}}\n\\X{\\ulcorner}&\\X{\\urcorner}&\\X{\\llcorner}&\\X{\\lrcorner}\\\\\n\\X{\\lvert}&\\X{\\rvert}&\\X{\\lVert}&\\X{\\rVert}\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Greek and Hebrew.}\n\\begin{symbols}{*5{cl}}\n\\X{\\digamma}     &\\X{\\varkappa} & \\X{\\beth} &\\X{\\gimel} & \\X{\\daleth}    \n\\end{symbols}\n\\end{table}\n\n\\begin{table}[tbp]\n  \\caption{Math Alphabets.} \\label{mathalpha}\n\\bigskip See Table~\\ref{mathfonts} on \\pageref{mathfonts} for other math fonts.\n\\begin{symbols}{@{}*3l@{}}\nExample& Command &Required package\\\\\n\\hline\n\\rule{0pt}{1.05em}$\\mathrm{ABCDE abcde 1234}$\n        & \\verb|\\mathrm{ABCDE abcde 1234}|\n        &       \\\\\n$\\mathit{ABCDE abcde 1234}$\n        & \\verb|\\mathit{ABCDE abcde 1234}|\n        &       \\\\\n$\\mathnormal{ABCDE abcde 1234}$\n        & \\verb|\\mathnormal{ABCDE abcde 1234}|\n        &  \\\\\n$\\mathcal{ABCDE abcde 1234}$\n        & \\verb|\\mathcal{ABCDE abcde 1234}|\n        &  \\\\\n$\\mathscr{ABCDE abcde 1234}$\n        &\\verb|\\mathscr{ABCDE abcde 1234}|\n        &\\pai{mathrsfs}\\\\\n$\\mathfrak{ABCDE abcde 1234}$\n        & \\verb|\\mathfrak{ABCDE abcde 1234}|\n        &\\pai{amsfonts}  or \\textsf{amssymb}  \\\\\n$\\mathbb{ABCDE abcde 1234}$\n        & \\verb|\\mathbb{ABCDE abcde 1234}|\n        &\\pai{amsfonts}  or \\textsf{amssymb} \\\\\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Binary Operators.}\n\\begin{symbols}{*3{cl}}\n \\X{\\dotplus}        & \\X{\\centerdot}      &       \\\\\n 
\\X{\\ltimes}         & \\X{\\rtimes}         & \\X{\\divideontimes} \\\\\n \\X{\\doublecup}      & \\X{\\doublecap}\t   & \\X{\\smallsetminus} \\\\\n \\X{\\veebar}         & \\X{\\barwedge}       & \\X{\\doublebarwedge}\\\\\n \\X{\\boxplus}        & \\X{\\boxminus}       & \\X{\\circleddash}   \\\\\n \\X{\\boxtimes}       & \\X{\\boxdot}         & \\X{\\circledcirc}   \\\\\n \\X{\\intercal}       & \\X{\\circledast}     & \\X{\\rightthreetimes} \\\\\n \\X{\\curlyvee}       & \\X{\\curlywedge}     & \\X{\\leftthreetimes}\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Binary Relations.}\n\\begin{symbols}{*3{cl}}\n \\X{\\lessdot}           & \\X{\\gtrdot}            & \\X{\\doteqdot} \\\\\n \\X{\\leqslant}          & \\X{\\geqslant}          & \\X{\\risingdotseq}     \\\\\n \\X{\\eqslantless}       & \\X{\\eqslantgtr}        & \\X{\\fallingdotseq}    \\\\\n \\X{\\leqq}              & \\X{\\geqq}              & \\X{\\eqcirc}           \\\\\n \\X{\\lll}or \\verb|\\llless| & \\X{\\ggg}            & \\X{\\circeq}  \\\\\n \\X{\\lesssim}           & \\X{\\gtrsim}            & \\X{\\triangleq}        \\\\\n \\X{\\lessapprox}        & \\X{\\gtrapprox}         & \\X{\\bumpeq}           \\\\\n \\X{\\lessgtr}           & \\X{\\gtrless}           & \\X{\\Bumpeq}           \\\\\n \\X{\\lesseqgtr}         & \\X{\\gtreqless}         & \\X{\\thicksim}         \\\\\n \\X{\\lesseqqgtr}        & \\X{\\gtreqqless}        & \\X{\\thickapprox}      \\\\\n \\X{\\preccurlyeq}       & \\X{\\succcurlyeq}       & \\X{\\approxeq}         \\\\\n \\X{\\curlyeqprec}       & \\X{\\curlyeqsucc}       & \\X{\\backsim}          \\\\\n \\X{\\precsim}           & \\X{\\succsim}           & \\X{\\backsimeq}        \\\\\n \\X{\\precapprox}        & \\X{\\succapprox}        & \\X{\\vDash}            \\\\\n \\X{\\subseteqq}         & \\X{\\supseteqq}         & \\X{\\Vdash}            \\\\\n \\X{\\shortparallel}     & \\X{\\Supset}            & \\X{\\Vvdash}           \\\\\n \\X{\\blacktriangleleft} & \\X{\\sqsupset}          & \\X{\\backepsilon}      \\\\\n \\X{\\vartriangleright}  & \\X{\\because}           & \\X{\\varpropto}        \\\\\n \\X{\\blacktriangleright}& \\X{\\Subset}            & \\X{\\between}          \\\\\n \\X{\\trianglerighteq}   & \\X{\\smallfrown}        & \\X{\\pitchfork}        \\\\\n \\X{\\vartriangleleft}   & \\X{\\shortmid} \t & \\X{\\smallsmile} \t\\\\\n \\X{\\trianglelefteq}    & \\X{\\therefore} \t & \\X{\\sqsubset}  \n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Arrows.}\n\\begin{symbols}{*2{cl}}\n \\X{\\dashleftarrow}      & \\X{\\dashrightarrow}     \\\\\n \\X{\\leftleftarrows}     & \\X{\\rightrightarrows}   \\\\\n \\X{\\leftrightarrows}    & \\X{\\rightleftarrows}    \\\\\n \\X{\\Lleftarrow}         & \\X{\\Rrightarrow}        \\\\\n \\X{\\twoheadleftarrow}   & \\X{\\twoheadrightarrow}  \\\\\n \\X{\\leftarrowtail}      & \\X{\\rightarrowtail}     \\\\\n \\X{\\leftrightharpoons}  & \\X{\\rightleftharpoons}  \\\\\n \\X{\\Lsh}                & \\X{\\Rsh}                \\\\\n \\X{\\looparrowleft}      & \\X{\\looparrowright}     \\\\\n \\X{\\curvearrowleft}     & \\X{\\curvearrowright}    \\\\\n \\X{\\circlearrowleft}    & \\X{\\circlearrowright}   \\\\\n \\X{\\multimap}  &  \\X{\\upuparrows}  \\\\\n \\X{\\downdownarrows} & \\X{\\upharpoonleft} \\\\\n \\X{\\upharpoonright} & \\X{\\downharpoonright} \\\\\n \\X{\\rightsquigarrow} & \\X{\\leftrightsquigarrow} \\\\\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp]\n\\caption{\\AmS{} Negated Binary 
Relations and Arrows.}\\label{AMSNBR}\n\\begin{symbols}{*3{cl}}\n \\X{\\nless}           & \\X{\\ngtr}            & \\X{\\varsubsetneqq}  \\\\\n \\X{\\lneq}            & \\X{\\gneq}            & \\X{\\varsupsetneqq}  \\\\\n \\X{\\nleq}            & \\X{\\ngeq}            & \\X{\\nsubseteqq}     \\\\\n \\X{\\nleqslant}       & \\X{\\ngeqslant}       & \\X{\\nsupseteqq}     \\\\\n \\X{\\lneqq}           & \\X{\\gneqq}           & \\X{\\nmid}           \\\\\n \\X{\\lvertneqq}       & \\X{\\gvertneqq}       & \\X{\\nparallel}      \\\\\n \\X{\\nleqq}           & \\X{\\ngeqq}           & \\X{\\nshortmid}      \\\\\n \\X{\\lnsim}           & \\X{\\gnsim}           & \\X{\\nshortparallel} \\\\\n \\X{\\lnapprox}        & \\X{\\gnapprox}        & \\X{\\nsim}           \\\\\n \\X{\\nprec}           & \\X{\\nsucc}           & \\X{\\ncong}          \\\\\n \\X{\\npreceq}         & \\X{\\nsucceq}         & \\X{\\nvdash}         \\\\\n \\X{\\precneqq}        & \\X{\\succneqq}        & \\X{\\nvDash}         \\\\\n \\X{\\precnsim}        & \\X{\\succnsim}        & \\X{\\nVdash}         \\\\\n \\X{\\precnapprox}     & \\X{\\succnapprox}     & \\X{\\nVDash}         \\\\\n \\X{\\subsetneq}       & \\X{\\supsetneq}       & \\X{\\ntriangleleft}  \\\\\n \\X{\\varsubsetneq}    & \\X{\\varsupsetneq}    & \\X{\\ntriangleright} \\\\\n \\X{\\nsubseteq}       & \\X{\\nsupseteq}       & \\X{\\ntrianglelefteq}\\\\\n \\X{\\subsetneqq}      & \\X{\\supsetneqq}      &\\X{\\ntrianglerighteq}\\\\[0.5ex]\n \\X{\\nleftarrow}      & \\X{\\nrightarrow}     & \\X{\\nleftrightarrow}\\\\\n \\X{\\nLeftarrow}      & \\X{\\nRightarrow}     & \\X{\\nLeftrightarrow}\n\n\\end{symbols}\n\\end{table}\n\n\\begin{table}[!tbp] \\label{AMSmisc}\n\\caption{\\AmS{} Miscellaneous.}\n\\begin{symbols}{*3{cl}}\n \\X{\\hbar}             & \\X{\\hslash}           & \\X{\\Bbbk}            \\\\\n \\X{\\square}           & \\X{\\blacksquare}      & \\X{\\circledS}        \\\\\n \\X{\\vartriangle}      & \\X{\\blacktriangle}    & \\X{\\complement}      \\\\\n \\X{\\triangledown}     &\\X{\\blacktriangledown} & \\X{\\Game}            \\\\\n \\X{\\lozenge}          & \\X{\\blacklozenge}     & \\X{\\bigstar}         \\\\\n \\X{\\angle}            & \\X{\\measuredangle}    & \\\\\n \\X{\\diagup}           & \\X{\\diagdown}         & \\X{\\backprime}       \\\\\n \\X{\\nexists}          & \\X{\\Finv}             & \\X{\\varnothing}      \\\\\n \\X{\\eth}              & \\X{\\sphericalangle}   & \\X{\\mho}              \n\\end{symbols}\n\\end{table}\n\n\n\n\n\n\\endinput\n\n%\n\n% Local Variables:\n% TeX-master: \"lshort2e\"\n% mode: latex\n% mode: flyspell\n% End:\n", "meta": {"hexsha": "ddb9e8fed205d07f6f5f822d873d5ef3cd9db103", "size": 14986, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "synctex test files/less basic/2017/lshort-5.05/src/lssym.tex", "max_stars_repo_name": "templateK/synctex", "max_stars_repo_head_hexsha": "555467da1535b0b1d7e97532a2c6251d9c2b3957", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2016-12-23T02:21:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-18T22:07:39.000Z", "max_issues_repo_path": "synctex test files/less basic/2017/lshort-5.05/src/lssym.tex", "max_issues_repo_name": "templateK/synctex", "max_issues_repo_head_hexsha": "555467da1535b0b1d7e97532a2c6251d9c2b3957", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 27, "max_issues_repo_issues_event_min_datetime": "2016-12-23T13:20:04.000Z", "max_issues_repo_issues_event_max_datetime": 
"2022-03-30T22:31:48.000Z", "max_forks_repo_path": "synctex test files/less basic/2017/lshort-5.05/src/lssym.tex", "max_forks_repo_name": "templateK/synctex", "max_forks_repo_head_hexsha": "555467da1535b0b1d7e97532a2c6251d9c2b3957", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2017-08-28T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-25T21:50:14.000Z", "avg_line_length": 40.722826087, "max_line_length": 86, "alphanum_fraction": 0.4927265448, "num_tokens": 5595, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.8289388104343893, "lm_q1q2_score": 0.5823544197460968}}
{"text": "% jam 2004-09-10\n\n\\section{Mesh Operations}\n\\label{sec:mesh-operations}\n\n\\subsection{Split}\n\nSplitting a simplex produces a refined complex by adding a new vertex\nand subdivided some of the simplices.\n\n\nSuppose $\\Ssimplex$ is a $d$-simplex\nin the pure $n$-dimensional simplicial complex $\\Kcomplex_0$.\nLet $\\Vvertex$ be a vertex not in $\\Kcomplex_0$.\n{\\it Splitting $\\Ssimplex$ around $\\Vvertex$}\nproduces a refined pure $n$-dimensional simplicial complex, $\\Kcomplex_1$:\n\n$\\Kcomplex_1$ contains all the  $n$-simplexes of $\\Kcomplex_0$\nthat do not contain $\\Ssimplex$.\n\nThe simplexes that do contain $\\Ssimplex$ are split:\nLet $\\Tsimplex$ be an $n$-simplex that contains $\\Ssimplex$.\nLet $\\Rsimplex$ be the $(n-d-1)$-simplex opposite $\\Ssimplex$\nin $\\Tsimplex$.\n(If $\\Tsimplex = \\Ssimplex$, then $\\Rsimplex$ is the empty set.)\nLet $\\{ \\Ffacet_0 \\ldots \\Ffacet_d \\}$ be the\nd $(d-1)$-simplex facets of $\\Ssimplex$.\n$\\Kcomplex_1$ has\n$d$ $n$-simplices formed from\n$\\Vvertex$, $\\Rsimplex$, and\n$\\Ffacet_i$ where $i=0 \\ldots d$.\n\nThis is also known as \"placing a vertex\" \n(See Lee~\\cite[sec.~17.2]{lee-hdcg-17-2004}).\n\n\\subsection{Collapse}\n\nCollapsing a simplex produces a reduced complex by mergng\nthe vertices of the simplex into a single new one.\n\nSuppose $\\Ssimplex$ is a $d$-simplex\nin the pure $n$-dimensional simplicial complex $\\Kcomplex_0$.\nLet $\\Vvertex$ be a vertex not in $\\Kcomplex_0$.\n{\\it Collapsing $\\Ssimplex$ to $\\Vvertex$}\nproduces a reduced pure $n$-dimensional simplicial complex, $\\Kcomplex_1$:\n\n$\\Kcomplex_1$ contains all the\n$n$-simplices of $\\Kcomplex_0$ that share no vertices with $\\Ssimplex$.\nAny $n$-simplex in $\\Kcomplex_0$ sharing more than one vertex\nwith $\\Ssimplex$ is ignored.\nFor every $n$-simplex in $\\Kcomplex_0$ sharing one vertex with $\\Ssimplex$,\n$\\Kcomplex_1$ gets an $n$-simplex with $\\Vvertex$ replacing the\nshared vertex.\n\n", "meta": {"hexsha": "ebed05b7e132a3aadd57c3b82fb4448b5ae84a96", "size": 1865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/old/fosm/mesh-operations.tex", "max_stars_repo_name": "palisades-lakes/les-elemens", "max_stars_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/old/fosm/mesh-operations.tex", "max_issues_repo_name": "palisades-lakes/les-elemens", "max_issues_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/old/fosm/mesh-operations.tex", "max_forks_repo_name": "palisades-lakes/les-elemens", "max_forks_repo_head_hexsha": "970bcbf5e31e40017b2333039e1505c7ea2f56dd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.9090909091, "max_line_length": 75, "alphanum_fraction": 0.727613941, "num_tokens": 618, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388083214155, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5823544182616692}}
{"text": "\n\\subsection{Testing for cointegration with Johansen}\n\n\n", "meta": {"hexsha": "edb2cf2fd3bf7601927e07c769a298b7f4c37bd6", "size": 56, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/forecastingMulti/01-01-johansen.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/forecastingMulti/01-01-johansen.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/forecastingMulti/01-01-johansen.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 11.2, "max_line_length": 52, "alphanum_fraction": 0.8035714286, "num_tokens": 13, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8289387998695209, "lm_q2_score": 0.7025300511670689, "lm_q1q2_score": 0.5823544174867032}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n\n\\begin{defn}[Holomorphic] % Testing\n\\label{defn:Holomorphic}\n\tLet \\(\\Omega\\subset \\C\\) be open. A complex-valued function \\(f:\\Omega\\to \\C\\) is \\textbf{holomorphic} (or \\textbf{regular}, \\textbf{complex differentiable}) if for every \\(z\\in \\Omega\\), the limit exists:\n\t\\begin{align*}\n\t\t\\lim_{h \\to 0} \\frac{f(z+h)-f(z)}{h} := f'(z)\n\t\\end{align*}\n\t\\(f'\\) is called the \\textbf{complex derivative} of \\(f\\).\n\\end{defn}\nNote that \\(h\\) is a non-zero complex number in \\(\\Omega\\). Hence the derivative has to exist and be equal for any possible direction that \\(h\\) may approach zero.\\\\\n\n\\begin{anki}\nTARGET DECK\nComplex Qual::Complex Analysis\nSTART\nMathJaxCloze\nText: Let \\(\\Omega\\subset \\C\\) be open. A complex-valued function \\(f:\\Omega\\to \\C\\) is **holomorphic** (or **regular**, **complex differentiable**) if for every \\(z\\in \\Omega\\), the limit exists:\n {{c1::\\(\\begin{align*}\n         \t\\lim_{h \\to 0} \\frac{f(z+h)-f(z)}{h} := f'(z)\n         \\end{align*}\\)}} \n\t\\(f'\\) is called the **complex derivative** of \\(f\\).\nExtra: Note that \\(h\\) is a non-zero complex number in \\(\\Omega\\). Hence the derivative has to exist and be equal for any possible direction that \\(h\\) may approach zero.\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1624408998120-->\nEND\n\\end{anki}\n\n\\begin{exmp}[Holomorphic and non-holomorphic functions]\n\tAny polynomial \\(p \\in \\C[z]\\) defined by\n\t\\begin{align*}\n\t\tp(z) = a_0 + a_1z + \\ldots + a_n z^{n}\n\t\\end{align*}\n\tis holomorphic in the entire complex plane with\n\t\\begin{align*}\n\t\tp'(z) = a_1 + \\ldots + na_n z^{n-1}.\n\t\\end{align*}\n\tHowever, \\(f(z) = 1 / z\\) is only holomorphic on open sets that do not contain the origin.\\\\\n\n\tFor an example of a function that is never holomorphic, consider the involuntary transformation\n\t\\begin{align*}\n\t\tf(z) = \\overline{z}.\n\t\\end{align*}\n\tOne can see that\n\t\\begin{align*}\n\t\t\\frac{f(z_0+h) - f(z_0)}{h} = \\frac{\\overline{h}}{h}\n\t\\end{align*}\n\twhich has no limit as \\(h\\to 0\\), as if \\(h\\) approaches zero on the real axis, then \\(\\frac{\\overline{h}}{h} = 1\\), and if it approaches zero on the imaginary axis, then \\(\\frac{\\overline{h}}{h}= -1\\).\n\\end{exmp}\n\nIt is useful to review the definition of real differentiation in \\(\\R^2\\). At first, it seems there should be no reason to view complex and real differentiation differently, but we will start to see some subtle and important differences soon.\n\\begin{defn}[Real Differentiation]\n\tLet \\(\\Omega\\subset \\R^2\\) be open. Let \\(F:\\Omega\\to \\R^2\\). Then \\(F\\) is real-differentiable if there exists a linear transformation \\(J:\\R^2\\to \\R^2\\) such that, for every \\(z \\in \\Omega\\)\n\t\\begin{align*}\n\t\t\\lim_{\\left| h \\right|  \\to 0} \\frac{\\left| F(z+h)-F(z)-J_F(h) \\right| }{\\left| h \\right| } = 0\n\t\\end{align*}\n\twhere \\(h \\in \\R^2\\). 
\\(J_F\\) is called the Jacobian and is exactly the \\(2\\times 2\\) real matrix of partial derivatives\n\\begin{align*}\n\tJ = J_F(x,y) = \\begin{pmatrix} \\sfrac{\\partial u}{\\partial x} & \\sfrac{\\partial u}{\\partial y} \\\\ \\sfrac{\\partial v}{\\partial x} & \\sfrac{\\partial v}{\\partial y}  \\end{pmatrix} \n\\end{align*}\n\\end{defn}\nWhen \\(F\\) is a complex function, there is a special relation between the entries of the Jacobian, if it is holomorphic.\n\n\\begin{prop}[Cauchy-Riemann Equations]\n\tLet \\(f\\) be a complex function with \\(f = u + iv\\) for \\(u,v\\) real-valued functions. If \\(f\\) is holomorphic, then \\(f\\) satisfies the \\textbf{Cauchy-Riemann equations}:\n\\begin{align*}\n\t\\frac{\\partial f}{\\partial x} = \\frac{1}{i}\\frac{\\partial f}{\\partial y} &\\implies \\\\\n\t\\frac{\\partial u}{\\partial x} &= \\frac{\\partial v}{\\partial y} \\quad \\frac{\\partial u}{\\partial y} = -\\frac{\\partial v}{\\partial x} \n\\end{align*}\nwhere \\((x,y)\\) is a complex variable.\n\\end{prop}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\(f\\) be a complex function with \\(f = u + iv\\) for \\(u,v\\) real-valued functions. If \\(f\\) is holomorphic, then \\(f\\) satisfies the **Cauchy-Riemann equations**:\n {{c1::\\(\\begin{align*}\n         \t\\frac{\\partial f}{\\partial x} = \\frac{1}{i}\\frac{\\partial f}{\\partial y} \\implies \\\\\n         \t\\frac{\\partial u}{\\partial x} = \\frac{\\partial v}{\\partial y} \\quad \\frac{\\partial u}{\\partial y} = -\\frac{\\partial v}{\\partial x} \n         \\end{align*}\\)}} \nwhere \\((x,y)\\) is a complex variable.\nExtra: The reverse direction holds: \\(f\\) is complex-differentiable iff \\(f\\) is real-differentiable and satisfies the Cauchy-Riemann equation.\nTags: analysis complex_analysis complex_analyticity\n<!--ID: 1624408998160-->\nEND\n\\end{anki}\n\n\n\\begin{proof}[Construction of Cauchy-Riemann Equations]\n\tRecall that any complex-valued function \\(f\\) can be parametrized by some mapping\n\t\\begin{align*}\n\t\tf = F(x,y) = (u(x,y), v(x,y))\n\t\\end{align*}\n\twhere \\(x,y\\) represent the real and imaginary coordinate respectively, and \\(u,v\\) are real-valued functions. If \\(F\\) is real-differentiable, then the partial derivatives of \\(u,v\\) exist and hence\n\t\\begin{align*}\n\t\tJ_F(x,y) = \\begin{pmatrix} \\sfrac{\\partial u}{\\partial x} & \\sfrac{\\partial u}{\\partial y} \\\\ \\sfrac{\\partial v}{\\partial x} & \\sfrac{\\partial v}{\\partial y}  \\end{pmatrix}\n\t\\end{align*}\n\tsatisfies the necessary properties as \\(h\\to 0\\). However, there is an implicit relation imposed, as we utilize the parametrization \\(h = (h_1,h_2)\\). 
Observe that we can treat \\(x\\) or \\(y\\) as fixed when approaching from the imaginary or real axes respectively, and get\n\t\\begin{align*}\n\t\tf'(z) &= \\lim_{h_1 \\to 0} \\frac{f(x+h_1,y) - f(x,y)}{h_1} = \\frac{\\partial f}{\\partial x} (z)\\\\\n\t\tf'(z) &= \\lim_{h_2 \\to 0} \\frac{f(x,y+h_2) - f(x,y)}{i h_2} = \\frac{1}{i} \\frac{\\partial f}{\\partial y} (z)\n\t\\end{align*}\n\tHence, if \\(f\\) is holomorphic, these limits must be equal and thus\n\t\\begin{align*}\n\t\t\\frac{\\partial f}{\\partial x} = \\frac{1}{i} \\frac{\\partial f}{\\partial y} \n\t\\end{align*}\n\tSubstituting \\(f = u + iv\\), we get the relations\n\t\\begin{align*}\n\t\t\\frac{\\partial u}{\\partial x} = \\frac{\\partial v}{\\partial y} \\quad \\frac{\\partial u}{\\partial y} = - \\frac{\\partial v}{\\partial x} \n\t\\end{align*}\n\\end{proof}\n\n\\begin{thm}\n\tLet \\(\\Omega\\subset \\C\\) be open. Let \\(f:\\Omega\\to \\C\\). Express \\(z = x+yi\\) and \\(f = u+vi\\) in the usual way. Then \\(f\\) is complex-differentiable if and only if \\(f\\) is real-differentiable and the Cauchy-Riemann equations are satisfied:\n\t\\begin{align*}\n\t\t\\frac{\\partial u}{\\partial x} = \\frac{\\partial v}{\\partial y} \\quad \\frac{\\partial u}{\\partial y} = -\\frac{\\partial v}{\\partial x} \n\t\\end{align*}\n\\end{thm}\nThis follows directly from the work above.\n\n\\begin{prop}[Properties of Complex Differentiation]\\label{prop:properties_of_complex_differentiation}\n\tLet \\(f,g:\\Omega \\to \\C\\) be holomorphic complex-valued functions. Then\n\t\\begin{align*}\n\t\t(f+g)' &= f'+g'\\\\\n\t\t(fg)' &= f'g + fg'\\\\\n\t\t\\left( \\frac{f}{g} \\right)' &= \\frac{f'g-fg'}{g^2} \\text{ at all \\(g(z)\\neq 0\\)}\\\\\n\t\t(f \\circ g)' &= (f'\\circ g)\\cdot g' \\text{ at all \\(g(z) \\in \\Omega\\)}\n\t\\end{align*}\n\tand hence all of these compositions are holomorphic functions.\n\\end{prop}\n\n\\begin{hw}\n\tProve \\ref{prop:properties_of_complex_differentiation}.\n\\end{hw}\n\n\\subsection{Complex Differential Forms}\n\\label{sub:complex_differential_forms}\n\nWhen first encountering the complex plane and complex functions, it is tempting to ``view'' these structures as a variant on \\(\\R^2\\). While this interpretation is not wrong, it can be restrictive at times: it is true that a complex function is a parametrization \\(f = (u,v)\\) where each component represents the real and imaginary coordinate, but the underlying relation between the real and imaginary coordinates can be hidden.\\\\\n\nThus, we encourage the reader to view complex functions \\(f:\\C\\to \\C\\) as mappings \\(f:z\\mapsto f(z)\\). This interpretation comes in handy when working with differential forms on \\(\\C\\). Let \\(f(z) = f((x,y))\\) be given. Then the differential \\(df\\) can be given by\n\\begin{align*}\n\tdf = \\frac{\\partial f}{\\partial x} \\,d x + \\frac{\\partial f}{\\partial y} \\,d y.\n\\end{align*}\nThis form is perfectly valid, but can be clunky depending on how \\(f\\) is defined. Instead, let \\(z = (x,y)\\). 
Then we have a relation given by\n\\begin{align*}\n\tz &= (x,y) &\\quad x &= (\\sfrac{1}{2},0)(z+\\overline{z})\\\\\n\t\\overline{z}&=(x,-y) &\\quad y &= (0,-\\sfrac{1}{2})(z-\\overline{z})\n\\end{align*}\n\nThese identities carry over to the differentials \\(dx, \\,d y\\) so that\n\\begin{align*}\n\tdz &= (dx,dy) & \\quad dx &= (\\sfrac{1}{2},0)(dz + d\\overline{z}) \\\\\n\td \\overline{z} &= (dx,-dy) &\\quad \\,d y &= (0,-\\sfrac{1}{2})(dz - d \\overline{z})\n\\end{align*}\nThis change of variable offers a few distinct advantages, but first we point out its limitations. One convenience offered by \\(dx,\\,d y\\) is that they are independent of one another, unlike \\(z\\) and \\(\\overline{z}\\). We cannot treat \\(dz,\\,d \\overline{z}\\) as independent variables-- however, we do have a form of independence:\n\\begin{prop}\n\tLet \\(g,h\\) be complex-valued functions. Then\n\t\\begin{align*}\n\t\tg \\,d z + h \\,d \\overline{z} = 0 \\iff g= h = 0.\n\t\\end{align*}\n\\end{prop}\n\nThis differential form is not particularly useful if we cannot define the differential \\(df\\) in the form\n\\begin{align*}\n\tdf = \\frac{\\partial f}{\\partial z} \\,d z + \\frac{\\partial f}{\\partial \\overline{z}} \\,d \\overline{z}.\n\\end{align*}\nWe can already obtain\n\\begin{align*}\n\tdf &= \\frac{\\partial f}{\\partial x} \\,d x + \\frac{\\partial f}{\\partial y} \\,d y\\\\\n\t   &= \\frac{\\partial f}{\\partial x} \\left( (\\sfrac{1}{2},0)(dz + d \\overline{z} ) \\right) + \\frac{\\partial f}{\\partial y} \\left( (0,-\\sfrac{1}{2})(dz - d \\overline{z} ) \\right) \\\\\n\t   &= \\frac{1}{2}\\left( \\frac{\\partial f}{\\partial x} , - \\frac{\\partial f}{\\partial y}  \\right) \\,d z + \\frac{1}{2} \\left( \\frac{\\partial f}{\\partial x} , \\frac{\\partial f}{\\partial y}  \\right) \\,d \\overline{z}.\n\\end{align*}\nThis makes it clear what a natural definition for \\(\\frac{\\partial }{\\partial z} \\) and \\(\\frac{\\partial }{\\partial \\overline{z}} \\) should look like. With this, we formally define what we have just discussed:\n\n\\begin{defn}[Complex Differential 1-form]\n\tLet \\((x,y)\\) be complex variables with corresponding differentials \\(dx , \\,d y\\) and partial derivatives \\(\\frac{\\partial }{\\partial x} , \\frac{\\partial }{\\partial y} \\). Then taking \\(z = (x,y)\\), we define the \\textbf{complex differential 1-form} by\n\t\\begin{align*}\n\t\tdz = (dx, dy)\\\\\n\t\td \\overline{z} = (dx,-dy)\n\t\\end{align*}\n\tso that\n\t\\begin{align*}\n\t\tdx = (\\sfrac{1}{2},0) (dz + d \\overline{z})\\\\\n\t\tdy = (0, -\\sfrac{1}{2}) (dz - d \\overline{z})\n\t\\end{align*}\n\tis satisfied. Furthermore, we define the \\textbf{complex differential operators} by\n\t\\begin{align*}\n\t\t\\frac{\\partial }{\\partial z} &= \\frac{1}{2}\\left( \\frac{\\partial }{\\partial x} , - \\frac{\\partial }{\\partial y}  \\right)\\\\\n\t\t\\frac{\\partial }{\\partial \\overline{z}} &= \\frac{1}{2} \\left( \\frac{\\partial }{\\partial x} , \\frac{\\partial }{\\partial y}  \\right) \n\t\\end{align*}\n\tso that\n\t\\begin{align*}\n\t\tdf = \\frac{\\partial f}{\\partial z} \\,d z + \\frac{\\partial f}{\\partial \\overline{z}} \\,d \\overline{z}\n\t\\end{align*}\n\tis satisfied.\n\\end{defn}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\((x,y)\\) be complex variables with corresponding differentials \\(dx , \\,d y\\) and partial derivatives \\(\\frac{\\partial }{\\partial x} , \\frac{\\partial }{\\partial y} \\). 
Then taking \\(z = (x,y)\\), we define the **complex differential 1-form** by\n\\(\\begin{align*}\n  \tdz = (dx, dy)\\\\\n  \td \\overline{z} = (dx,-dy)\n  \\end{align*}\\)\nso that\n{{c1::\\(\\begin{align*}\n      \tdx = (\\sfrac{1}{2},0) (dz + d \\overline{z})\\\\\n      \tdy = (0, -\\sfrac{1}{2}) (dz - d \\overline{z})\n        \\end{align*}\\)}}\nis satisfied. Furthermore, we define the **complex differential operators** by\n{{c2::\\(\\begin{align*}\n        \t\\frac{\\partial }{\\partial z} &= \\frac{1}{2}\\left( \\frac{\\partial }{\\partial x} , - \\frac{\\partial }{\\partial y}  \\right)\\\\\n        \t\\frac{\\partial }{\\partial \\overline{z}} &= \\frac{1}{2} \\left( \\frac{\\partial }{\\partial x} , \\frac{\\partial }{\\partial y}  \\right) \n        \\end{align*}\\)}} \nso that\n\\(\\begin{align*}\n  \tdf = \\frac{\\partial f}{\\partial z} \\,d z + \\frac{\\partial f}{\\partial \\overline{z}} \\,d \\overline{z}\n  \\end{align*}\\)\nis satisfied.\nExtra: One convenience offered by \\(dx,\\,d y\\) is that they are independent of one another, unlike \\(z\\) and \\(\\overline{z}\\). We cannot treat \\(dz,\\,d \\overline{z}\\) as independent variables-- however, we do have a form of independence. If \\(g,h\\) are complex functions, then:\n\\(\\begin{align*}\n  \tg \\,d z + h \\,d \\overline{z} = 0 \\iff g= h = 0.\n  \\end{align*}\\)\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1626483183627-->\nEND\n\\end{anki}\n\n\n\\begin{exmp}\n\tWhat is the derivative of \\(f(z) = z^2 \\)?\n\\end{exmp}\n\n\\begin{prop}\n\tA complex-valued function \\(f\\) is holomorphic if and only if\n\t\\begin{align*}\n\t\t\\frac{\\partial f}{\\partial \\overline{z}} = 0.\n\t\\end{align*}\n\\end{prop}\nThis follows directly from applying \\(\\frac{\\partial }{\\partial \\overline{z}} \\) to \\(f = (u,v)\\). We encourage the reader to compute this by hand as a short exercise if it is not immediately apparent.\\\\\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: A complex-valued function \\(f\\) is holomorphic if and only if\n {{c1::\\(\\begin{align*}\n        \t\\frac{\\partial f}{\\partial \\overline{z}} = 0.\n        \\end{align*}\\)::complex differential operator}} \nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1626483183638-->\nEND\n\\end{anki}\n\n\nOne should also verify that \\(\\frac{\\partial }{\\partial z} \\) and \\(\\frac{\\partial }{\\partial \\overline{z}} \\) commute as one expects partial derivatives to commute from real analysis. Furthermore, they satisfy the product rule and chain rule.\\\\\n\nWe conclude the discussion on the complex differential with one last identity of note. We will use this tool throughout, although we will not see the full scope of the complex differential operator until we discuss harmonic functions.\n\\begin{prop}\nIf \\(f = (u,v)\\), then we can express the complex derivative by\n\\begin{align*}\n\tf'(z) = 2 \\frac{\\partial u}{\\partial z} = \\frac{\\partial u}{\\partial x} - i \\frac{\\partial u}{\\partial y} .\n\\end{align*}\n\\end{prop}\n\n% Info on potential functions??\n\n\\subsection{Analytic Continuation}\n\\label{sub:analytic_continuity}\n\nWe briefly mentioned that the set of zeroes of a non-constant holomorphic function is discrete. This property is a remarkably strong statement on holomorphic functions. First, we state it a bit more formally.\n\n\\begin{thm}[Discreteness of Holomorphic Zeroes]\n\tLet \\(\\Omega \\subset \\C\\) be a connected open set. 
If \\(f:\\Omega \\to \\C\\) is holomorphic on \\(\\Omega \\) and non-constant, then \\(U = f^{-1}(\\left\\{ 0 \\right\\}) \\subset \\Omega \\) is discrete.\\\\\n\n\tIn particular, if \\(f,g\\) are holomorphic on \\(\\Omega \\), and there exists \\(U\\subset \\Omega \\) non-discrete so that \\(f(z)=g(z)\\) for \\(z \\in U\\), then \\(f(z)=g(z)\\) for \\(z \\in \\Omega \\).\n\\end{thm}\n\n\\begin{proof}\n\tThe second statement follows from the first by observing that \\(f-g \\) is holomorphic and hence must have a discrete set of zeroes. Hence, if \\(f-g=0\\) on a non-discrete set, \\(f-g\\) must be constant, and since it vanishes somewhere, it is identically zero.\\\\\n\n\tWe will prove the first part once we introduce the notion of locally constant.\n\\end{proof}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\(\\Omega \\subset \\C\\) be a connected open set. If \\(f\\) is holomorphic on \\(\\Omega \\) and non-constant, then the set of zeroes of \\(f\\) on \\(\\Omega \\) is {{c1::discrete}}.\n\n\tIn particular, if \\(f,g\\) are holomorphic on \\(\\Omega \\), and there exists \\(U\\subset \\Omega \\) {{c1::non-discrete}} so that \\(f(z)=g(z)\\) for \\(z \\in U\\), then {{c1::\\(f(z)=g(z)\\) for \\(z \\in \\Omega \\)}}.\nTags: analysis complex_analysis complex_analyticity\n<!--ID: 1625191420102-->\nEND\n\\end{anki}\n\n\nThe theorem above is so powerful because it gives us a uniqueness principle for holomorphic functions.\n\n\\begin{defn}[Analytic Continuation]\n\tLet \\(f\\) be a holomorphic function on an open set \\(U\\) with \\(U\\subset \\Omega \\subset \\C \\) for some \\(\\Omega \\) open. Suppose \\(g\\) is a holomorphic function on \\(\\Omega \\) with \\(f(z)=g(z)\\) for \\(z \\in U\\). Then \\(g\\) is the \\textbf{analytic continuation} of \\(f\\) to \\(\\Omega \\).\\\\\n\n\tInstead, suppose there exists an open set \\(V \\subset \\C\\), \\(U,V\\) connected, with \\(U\\cap V \\neq \\emptyset\\). Then if \\(g\\) is holomorphic on \\(V\\) and equal to \\(f\\) on \\(U\\cap V\\), then we also say that \\(g\\) is the analytic continuation of \\(f\\) to \\(V\\).\n\\end{defn}\nIn general, depending on the structure of our sets, there are many ways to extend \\(f\\) by ensuring agreement on non-discrete sets. We may refer to any of these extensions as analytic continuations. The theorem above guarantees that these extensions are unique.\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\(f\\) be a holomorphic function on an open set \\(U\\) with \\(U\\subset \\Omega \\subset \\C \\) for some \\(\\Omega \\) open. Suppose \\(g\\) is {{c1::a holomorphic function}} on \\(\\Omega \\) with {{c1::\\(f(z)=g(z)\\)}} for \\(z \\in U\\). Then \\(g\\) is the **analytic continuation** of \\(f\\) to \\(\\Omega \\).\nExtra: Another form of analytic continuation: suppose there exists an open set \\(V \\subset \\C\\), \\(U,V\\) connected, with \\(U\\cap V \\neq \\emptyset\\). Then if \\(g\\) is holomorphic on \\(V\\) and equal to \\(f\\) on \\(U\\cap V\\), then we also say that \\(g\\) is the analytic continuation of \\(f\\) to \\(V\\).\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1625191420112-->\nEND\n\\end{anki}\n\n\n\\subsection{Conformality of Holomorphic Functions}\n\\label{sub:conformality_of_holomorphic_functions}\n\n\nHolomorphic functions play an important role with regard to curves.\\\\\n\nWe will ask that curves in \\(\\C\\) are differentiable (and not, say, holomorphic). 
This is because we cannot construct a concept of holomorphicity that makes sense with complex curves at the moment (why?).\\\\\n\nHence, we merely say that a curve \\(\\gamma \\) is differentiable provided its components are differentiable. That is, if \\(\\gamma = (x,y)\\) for real-valued curves \\(x,y\\), then \\(\\gamma \\) is differentiable if and only if \\(x,y\\) are differentiable. Differentiable curves get a special name.\n\n\\begin{defn}[Smooth]\n\tLet \\(\\gamma =(x,y)\\) be a curve in \\(\\C\\), where \\(x,y\\) are real-valued curves. Then \\(\\gamma \\) is \\textbf{smooth} if it is component-wise \\(C^{1}\\)-- that is, \\(x,y \\in C^{1}\\), and define\n\t\\begin{align*}\n\t\t\\gamma '(t) = (x'(t),y'(t)).\n\t\\end{align*}\n\tFurthermore, we require that \\(\\gamma'(t) \\neq 0\\) for any time \\(t\\).\n\\end{defn}\nA smooth curve is \\textbf{closed} if \\(\\gamma(t_0) = \\gamma (t_1)\\), where \\(t_0,t_1\\) are the endpoints of its domain, and \\textbf{simple} if \\(\\gamma(t) \\neq \\gamma(s)\\) for \\(t\\neq s\\), where \\(t,s\\) are not both endpoints.\n\n\\begin{defn}[Tangent Vectors]\n\tLet \\(\\gamma ,\\eta \\) be smooth curves passing through a point \\(z_0 \\in \\C\\), that is:\n\t\\begin{align*}\n\t\t\\gamma (\\tau_1) = \\eta (\\tau_2) = z_0\n\t\\end{align*}\n\tfor some \\(\\tau_1,\\tau_2 \\in \\R\\). Then \\(\\gamma '(\\tau_1)\\) is the \\textbf{tangent vector of \\(\\gamma \\) at \\(z_0\\)}.\\\\\n\n\tThe \\textbf{angle between \\(\\gamma ,\\eta \\)} is defined to be the angle between \\(\\gamma'(\\tau_1)\\) and \\(\\eta'(\\tau_2)\\), that is\n\t\\begin{align*}\n\t\t\\theta_{z_0} (\\gamma ,\\eta ) = \\theta (\\gamma'(\\tau_1), \\eta'(\\tau_2))\n\t\\end{align*}\n\\end{defn}\n\n\\begin{prop}[Preservation of Angles]\n\tLet \\(\\gamma ,\\eta \\) be smooth curves passing through a point \\(z_0 \\in \\Omega\\subset \\C\\), and let \\(f:\\Omega \\to \\C\\) be holomorphic with \\(f'(z_0) \\neq 0\\). Then\n\t\\begin{align*}\n\t\t\\frac{d}{dt} f(\\gamma (t)) = f'(\\gamma (t)) \\gamma '(t) \n\t\\end{align*}\n\tand\n\t\\begin{align*}\n\t\t\\theta_{z_0}(\\gamma,\\eta ) = \\theta_{f(z_0)}(f\\circ \\gamma , f\\circ \\eta ).\n\t\\end{align*}\n\\end{prop}\nIn other words, holomorphic functions preserve the angles between curves.\\\\\n\nAny isomorphism between metric spaces with curves that satisfies\n\\begin{align*}\n\t\\theta_{z_0}(\\gamma ,\\eta ) = \\theta_{f(z_0)}(f\\circ \\gamma , f\\circ \\eta )\n\\end{align*}\nis called \\textbf{conformal}. Hence, the proposition above states that all holomorphic isomorphisms are conformal.\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Any isomorphism between metric spaces with curves that satisfies for every \\(z_0\\) in the metric space and \\(\\gamma ,\\eta \\) curves within the space:\n {{c1::\\(\\begin{align*}\n        \t\\theta_{z_0}(\\gamma ,\\eta ) = \\theta_{f(z_0)}(f\\circ \\gamma , f\\circ \\eta )\n        \\end{align*}\\)}}\nis called \\textbf{conformal}.\nTags: analysis complex_analysis defn\n<!--ID: 1624675761922-->\nEND\n\\end{anki}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Holomorphic isomorphisms in \\(\\C\\) are {{c1::conformal}} maps.\nTags: \n<!--ID: 1624675761966-->\nEND\n\\end{anki}\n\n\n\\begin{proof}\n\tThe derivative of the composition follows from the rule of complex differentiation on compositions. For the second part, let \\(f'(z_0) = \\alpha \\). 
Then\n\t\\begin{align*}\n\t\t\\langle \\alpha z, \\alpha w \\rangle = \\textrm{Re}(\\alpha z \\overline{\\alpha } \\overline{w}) = \\alpha \\overline{\\alpha } \\textrm{Re}(z \\overline{w}) = \\left| \\alpha  \\right|^2 \\langle z,w \\rangle .\n\t\\end{align*}\n\tHence\n\t\\begin{align*}\n\t\t\\langle (f\\circ \\gamma)' , (f\\circ \\eta)'  \\rangle = \\langle \\alpha \\gamma' , \\alpha \\eta'  \\rangle \\\\\n\t\t\\left| (f\\circ \\gamma)'  \\right| = \\left| \\alpha  \\right|\\left| \\gamma ' \\right| \n\t\\end{align*}\n\twhich implies that\n\t\\begin{align*}\n\t\t\\theta ( (f\\circ \\gamma)' , (f \\circ \\eta )') = \\theta (\\alpha \\gamma' , \\alpha \\eta' )\n\t\\end{align*}\n\tOne can check that multiplication of complex vectors by a non-zero scalar does not change the angle between them, and hence it holds that \\(f\\) is conformal.\n\\end{proof}\nThe converse is false in general.\\\\\n\nThis casts holomorphic functions in an interesting new light. We can instead treat functions that are holomorphic on \\(\\C\\) as isomorphisms of \\(\\C\\). Hence we can view functions as holomorphic transformations of the complex plane. This equivalence allows us to infer a lot of information about geometric transformations of the complex plane.\\\\\n\nFurthermore, one can show that \\(\\overline{f}\\) is \\textbf{indirectly conformal}-- that is, angles are preserved but reversed in direction. In general, holomorphic transformations scale the areas of sets. That is, let \\(E\\subset \\C\\) be a measurable set. Then\n\\begin{align*}\n\tA(E) &= \\int_{E} 1 \\,d \\mu \\\\\n\t     &\\implies A(f(E)) = \\int_E \\left| f' \\right|^2 \\,d \\mu .\n\\end{align*}\nThis holds because in general for any real-differentiable isomorphism \\(f = u+iv\\)\n\\begin{align*}\n\tA(f(E)) = \\int_E \\left| u_x v_y - u_y v_x \\right| \\,d x \\,d y\n\\end{align*}\nWhen \\(f\\) is conformal, we have \\(u_xv_y - u_yv_x = \\left| f'(z) \\right|^2\\) and hence the statement holds.\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: One can show that \\(\\overline{f}\\) is **indirectly conformal**-- that is, {{c1::angles are preserved but reversed in direction}}. \n\nHolomorphic transformations {{c2::scale the areas of sets}} . That is, let \\(E\\subset \\C\\) be a measurable set. 
Then\n {{c2::\\(\\begin{align*}\n        \tA(E) &= \\int_{E} 1 \\,d \\mu \\\\\n        \t     &\\implies A(f(E)) = \\int_E \\left| f' \\right|^2 \\,d \\mu .\n        \\end{align*}\\)}} \n\nExtra: In general, for any real-differentiable isomorphism \\(f = u+iv\\)\n\\(\\begin{align*}\n  \tA(f(E)) = \\int_E \\left| u_x v_y - u_y v_x \\right| \\,d x \\,d y\n  \\end{align*}\\)\nWhen \\(f\\) is conformal, we have \\(u_xv_y - u_yv_x = \\left| f'(z) \\right|^2\\) and hence the statement holds.\nTags: analysis complex_analysis defn complex_analyticity\n<!--ID: 1624675762007-->\nEND\n\\end{anki}\n\n\n\\end{document}\n", "meta": {"hexsha": "76b81b3c122b052de86eccab35cf63871ff97670", "size": 23456, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Complex Analysis/Notes/source/2020-02-21-Holomorphic.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Complex Analysis/Notes/source/2020-02-21-Holomorphic.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Complex Analysis/Notes/source/2020-02-21-Holomorphic.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.5516483516, "max_line_length": 430, "alphanum_fraction": 0.6640944748, "num_tokens": 7728, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.8705972751232809, "lm_q1q2_score": 0.582325320569818}}
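As a quick sanity check on the criterion \(\partial f/\partial \overline{z} = 0\) from the notes above, the following Python snippet (a sketch; the helper name \texttt{d\_dzbar}, the test point, and the step size are illustrative) approximates \(\frac{1}{2}\left(\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}\right)\) by central differences:

\begin{lstlisting}
# Numerical check that df/dzbar = (1/2)(df/dx + i df/dy) vanishes
# for a holomorphic f; names and step size are illustrative.
def d_dzbar(f, z, h=1e-6):
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx + 1j * dfdy)

z0 = 1.3 + 0.7j
print(abs(d_dzbar(lambda z: z ** 2, z0)))         # ~0: z^2 is holomorphic
print(abs(d_dzbar(lambda z: z.conjugate(), z0)))  # ~1: conj(z) is not
\end{lstlisting}

Replacing the sum with a difference gives the analogous approximation of \(\partial f/\partial z\); for \(f(z)=z^2\) it returns roughly \(2z_0\), consistent with \(f' = \partial f/\partial z\) for holomorphic \(f\).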
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{Nilplex Numbers}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nAn exo-1 real number has the form\n\\begin{equation}\n    a_{0} + a_{1} W\n\\end{equation}\nThese follow from a parabolic Cayley-Dickson construct on the reals.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "8fb4de3b23666c46595b04211146e0e70072d10a", "size": 2067, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/C.tex", "max_stars_repo_name": "meirizarrygelpi/plexifications", "max_stars_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/C.tex", "max_issues_repo_name": "meirizarrygelpi/plexifications", "max_issues_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/C.tex", "max_forks_repo_name": "meirizarrygelpi/plexifications", "max_forks_repo_head_hexsha": "0976772cc56b7f4b6cb9856c6d2bb052fed0b525", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9772727273, "max_line_length": 80, "alphanum_fraction": 0.1915820029, "num_tokens": 211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972616934406, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.5823253173333294}}
{"text": "%TODOS FOR THIS LAB\n%1.  Maybe merge mst and ImgSegMST\n% Note: Prim's section has been taken out since it was covered in class, content is commented out\n% Kruskal's section also contains a short comparison of Kruskal's and Prim's\n\n\\lab{Kruskal's Algorithm}{Kruskal's Algorithm}\n\n\\label{lab:Kruskal}\n\n\\objective{Find a minimum spanning tree for a connected, weighted graph using Kruskal's Algorithm}\n\n\\section*{Weighted Graphs and Spanning trees}\n\nIn this lab we will focus on graphs that are \\emph{connected, undirected}, and \\emph{weighted}. Recall that a graph is represented as a set of nodes combined with a set of edges that connect the nodes. A \\emph{connected graph} is a graph where there is a path between each pair of nodes. If the directions of these edges are specified, we have a \\emph{directed graph}, and if all of the edges are bidirectional, we have an \\emph{undirected graph}.\n\n\\begin{figure}[h]\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, B/F, F/C, B/C, F/E, E/D, C/D, E/C, A/F}{\\path[draw] (\\s) edge (\\t);}\n\n\\end{tikzpicture}\n\\caption{An example of a connected, undirected graph}\n\\label{mst:graph1}\n\\end{figure}\n\nA \\emph{weighted graph} is a graph where each edge has a value associated with it.\nUsually these values represent some sort of cost or distance. For example, if we had a graph representing cities in the U.S., the weights between nodes (cities) would be the distance between the two cities. We represent the graph in the figure above as an adjacency matrix. Note that since the graph is undirected, the matrix is symmetric.\n\n\\[\n\\bordermatrix{\\hspace{.4cm}&A&B&C&D&E&F\\cr\nA&0 & 1 & 0 & 0 & 0 & 1\\cr\nB&1 & 0 & 1 & 0 & 0 & 1\\cr\nC&0 & 1 & 0 & 1 & 1 & 1\\cr\nD&0 & 0 & 1 & 0 & 1 & 0\\cr\nE&0 & 0 & 1 & 1 & 0 & 1\\cr\nF&1 & 1 & 1 & 0 & 1 & 0\\cr}\n\\]\n\nNow consider the graph in Figure \\ref{mst:graph4}.  This graph is the same as the graph in Figure \\ref{mst:graph1}, except now there is a weight attached to each edge.  The adjacency matrix for this graph is\n\\[\n\\bordermatrix{\\hspace{.4cm}&A&B&C&D&E&F\\cr\nA&0 & 3 & 0 & 0 & 0 & 6\\cr\nB&3 & 0 & 5 & 0 & 0 & 4\\cr\nC&0 & 5 & 0 & 1 & 1 & 5\\cr\nD&0 & 0 & 1 & 0 & 2 & 0\\cr\nE&0 & 0 & 1 & 2 & 0 & 4\\cr\nF&6 & 4 & 5 & 0 & 4 & 0\\cr}\n\\]\n\nWe can also represent weighted graphs as adjacency lists. When we make the adjacency list of a directed, unweighted graph, we can make a dictionary with each node as a key and a list of its neighbors as the corresponding value. We can also make a list of the pairs of nodes corresponding to each edge (we will need two pairs to represent each edge in an undirected graph). 
For a weighted graph, we add a third value representing the weight of that edge.\nSuch a list for our undirected, weighted graph would look like the following:\n\n\\begin{lstlisting}\n[('A', 'F', 6), ('A', 'B', 3), ('B', 'A', 3), ('B', 'C', 5), ('B', 'F', 4),\n ('C', 'B', 5), ('C', 'D', 1), ('C', 'E', 1), ('C', 'F', 5), ('D', 'C', 1), \n ('D', 'E', 2), ('E', 'C', 1), ('E', 'D', 2), ('E', 'F', 4), ('F', 'A', 6), \n ('F', 'B', 4), ('F', 'C', 5), ('F', 'E', 4)]\n\\end{lstlisting}\n\nA \\emph{spanning tree} of a connected, undirected graph $G$ is an undirected graph that contains all the nodes of $G$, a subset of the edges, and no cycles.\nA cycle, for undirected graphs, is a path where you start and end on the same node without crossing any edge more than once.\nThe red in Figure \\ref{mst:graph2} is an example of a cycle in an undirected graph.\n\n\\begin{figure}[H]\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, A/F, F/C, B/C}{\\path[draw, red](\\s) edge (\\t);}\n\\foreach \\s/\\t in {E/C, C/D}{\\path[draw](\\s)edge(\\t);} \n\n\\end{tikzpicture}\n\\caption{A cycle in an undirected graph.}\n\\label{mst:graph2}\n\\end{figure}\n\nThe minimum spanning tree (MST) of a weighted, undirected graph is a spanning tree where the total weight is less than or equal to the total weight of every other spanning tree.\nKruskal's Algorithm is a method that finds the minimum spanning tree of a weighted, undirected graph.\nFigure \\ref{mst:graph3} shows a spanning tree of the graph shown in Figure \\ref{mst:graph1}.\nFigure \\ref{mst:graph5} shows a minimum spanning tree of the graph shown in Figure \\ref{mst:graph4}.\n\n\\begin{figure}[H]\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, B/C, F/E, E/C, C/D}{\\path[draw](\\s)edge(\\t);}\n\\end{tikzpicture}\n\\caption{A spanning tree with no cycles for the graph in Figure \\ref{mst:graph1}.}\n\\label{mst:graph3}\n\\end{figure}\n\n\n\n\\begin{figure}[H]\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\node[draw=none]at (-.6,.2)(3){3};\n\\node[draw=none]at (-.2,.75)(4){4};\n\\node[draw=none]at (.75,1.7)(4_2){4};\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\\node[draw=none]at (-.6,1.3)(6){6};\n\\node[draw=none]at (.75, -.2)(5){5};\n\\node[draw=none]at (2.1,1.3)(2){2};\n\\node[draw=none]at 
(.6,.6)(five){5};\n\n\\foreach \\s/\\t in {A/B, B/F, F/C, B/C, F/E, E/D, C/D, E/C, A/F, E/D}{\\path[draw] (\\s) edge (\\t);}\n\n\\end{tikzpicture}\n\\caption{A weighted, undirected graph}\n\\label{mst:graph4}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, B/F, F/E, E/C, C/D}{\\path[draw](\\s)edge(\\t);}\n\\node[draw=none]at (-.6,.2)(3){3};\n\\node[draw=none]at (-.2,.75)(4){4};\n\\node[draw=none]at (.75,1.7)(4_2){4};\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\caption{The MST of the graph in Figure \\ref{mst:graph4}.}\n\\label{mst:graph5}\n\\end{figure}\n\n\\section*{Kruskal's algorithm}\n\nGiven a connected, weighted graph G with $n$ nodes, Kruskal's algorithm finds a minimum spanning tree. It is similar to Prim's algorithm, which also finds a minimum spanning tree of a connected, weighted graph. While Prim's algorithm is much faster for dense graphs since it avoids sorting the edges, it is much slower than Kruskal's for sparse graphs. Note that although we have not explicitly required that the graph be undirected, the fact that we are finding a minimum spanning tree means that the graph must be undirected.\n\nKruskal's algorithm first sorts the edges from smallest to largest, then starting with the smallest, the algorithm adds edges to the tree as long as the addition of each new edge does not create a cycle.\nWhen $n-1$ edges have been added, the algorithm stops.\n\nIn order to avoid creating cycles while building the tree, it is necessary to keep track of which portions of the tree currently lie in connected groups.\nThis can be done by creating a dictionary where, when the algorithm starts, each node points to itself as the root of its own tree.\nAs we iterate over the edges, we compare the roots of the nodes connected by the edge. If they are distinct, then the nodes are in separate trees, so adding this edge will not create a cycle. When we add an edge, we update the root of one of the nodes connected by it so we can track which nodes are already connected by our growing MST.\nWe apply Kruskal's algorithm to the graph in Figure \\ref{mst:graph4}. We are given a list of edges. Note that since the graph is undirected, we only have one entry for each edge instead of including both \\li{(A, B, 2)} and \\li{(B, A, 2)} for an edge from \\li{A} to \\li{B} with weight 2. Before beginning, we initialize what will become our MST as an empty list, initialize a dictionary \\li{\\{A:A, B:B, C:C, D:D, E:E, F:F\\}} where each node points to itself, and sort the edges by weight to get the list \\li{[(C, D, 1), (C, E, 1), (D, E, 2), (A, B, 3), (B, F, 4), (E, F, 4), (B, C, 5), (C, F, 5), (A, F, 6)]}. 
Since there are six nodes in the graph, we will continue until we have 5 edges in the tree.\n\n\\vspace{.25cm}\n\n\\begin{minipage}{0.35\\textwidth}\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {C/D}{\\path[draw](\\s)edge(\\t);}\n%\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.55\\textwidth}\nWe begin iterating through the edges to build the tree.\nThe first edge in the list is \\li{(C,D,1)}.\nThe root for \\li{C} is \\li{C} and the root for \\li{D} is \\li{D}. Since their roots are distinct, they are in separate trees, so we add this edge to the tree. The tree becomes \\li{[(C, D, 1)]}.\nWe then change the root node of \\li{D} to \\li{C}.\nThe dictionary of root nodes is now \\li{\\{A:A, B:B, C:C, D:C, E:E, F:F\\}}.\n\\end{minipage}\n\n\\vspace{.25cm}\n\n\\begin{minipage}{0.35\\textwidth}\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {C/D, C/E}{\\path[draw](\\s)edge(\\t);}\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.55\\textwidth}\nNow we process the next edge, \\li{(C, E, 1)}.\nThe root node of \\li{C} is \\li{C}, and the root node of \\li{E} is \\li{E}, so adding this edge does not create a cycle.\nWe add this edge into the tree, so the tree is now \\li{[(C, D, 1), (C, E, 1)]}.\nThen we change the root of \\li{E} to \\li{C}, so the dictionary is \\li{\\{A:A, B:B, C:C, D:C, E:C, F:F\\}}.\n\\end{minipage}\n\n\\vspace{.25cm}\n\nThe next edge is \\li{(D, E, 2)}.\nThe root node of the tree containing \\li{D} is \\li{C} and the root node of the tree containing \\li{E} is \\li{C}, so these nodes are already connected, so we do not add this edge to the tree.\n\n\\vspace{.25cm}\n\n\\begin{minipage}{0.35\\textwidth}\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, E/C, C/D}{\\path[draw](\\s)edge(\\t);}\n\\node[draw=none]at (-.6,.2)(3){3};\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.55\\textwidth}\nThe next edge is \\li{(A, B, 3)}.\nThe root node for \\li{A} is \\li{A}, and the root node for \\li{B} is \\li{B}, so we add the edge to the tree and change the root node of 
\\li{B} to \\li{A}.\nThe dictionary becomes \\li{\\{A:A, B:A, C:C, D:C, E:C, F:F\\}}\n\\end{minipage}\\hfill\n\n\\vspace{.25cm}\n\n\\begin{minipage}{0.35\\textwidth}\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, B/F, E/C, C/D}{\\path[draw](\\s)edge(\\t);}\n\\node[draw=none]at (-.6,.2)(3){3};\n\\node[draw=none]at (-.2,.75)(4){4};\n%\\node[draw=none]at (.75,1.7)(4_2){4};\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.55\\textwidth}\nThe next edge is \\li{(B, F, 4)}.\nThe root node for \\li{B} is \\li{A} and the root node for \\li{F} is \\li{F}, so we add the edge to the tree and change the root node of \\li{F} to \\li{A}.\nThe dictionary becomes \\li{\\{A:A, B:A, C:C, D:C, E:C, F:A\\}}\n\\end{minipage}\\hfill\n\n\\vspace{.25cm}\n\n\\begin{minipage}{0.35\\textwidth}\n\\begin{tikzpicture}[node distance= 1.5cm, thick, main \n\tnode/.style={circle, draw, minimum size = .75cm}]\n\n\\node[main node](B)[]{B};\n\\node[main node](C)[right of=B]{C};\n\\node[main node](F)[above of=B]{F};\n\\node[main node](E)[right of=F]{E};\n\\node[draw=none, node distance=.75cm](dummy)[above of= B]{};\n\\node[main node, node distance=1cm](A)[left of=dummy]{A};\n\\node[main node, node distance=2.5cm](D)[right of=dummy]{D};\n\n\\foreach \\s/\\t in {A/B, B/F, F/E, E/C, C/D}{\\path[draw](\\s)edge(\\t);}\n\\node[draw=none]at (-.6,.2)(3){3};\n\\node[draw=none]at (-.2,.75)(4){4};\n\\node[draw=none]at (.75,1.7)(4_2){4};\n\\node[draw=none]at (1.7,.75)(1){1};\n\\node[draw=none]at (2.1,.2)(1_2){1};\n\n\\end{tikzpicture}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.55\\textwidth}\nThe next edge is \\li{(E, F, 4)}.\nThe root node for \\li{E} is \\li{C} and the root node for \\li{F} is \\li{A}.\nWe add the edge to the tree and end the algorithm since there are now 5 edges in the tree.\n(If we were to continue the algorithm, we would change the root node of \\li{A} to \\li{C}.)\n\\end{minipage}\n\n\\vspace{.25cm}\n\nNotice how we updated the root of the root node to one of the nodes in the edge added.\nWe manage the root nodes in this manner in order to find the root node by tracing back through the dictionary until we find a node that points to itself.\nFor example, to find the root node of \\li{F} we first need to get the value for \\li{F} from the dictionary.  
This is \\li{A}.\nThen we get the value for \\li{A} from the dictionary, which is \\li{C}.\nSince the value of \\li{C} in the dictionary is itself, \\li{C} is the root node of the tree containing \\li{F}.\nIt is necessary to trace through the dictionary in this manner each time because it allows us to avoid iterating over all of the nodes and updating their root values every time.\n\n\nAlgorithm \\ref{alg:kruskal} outlines the pseudocode for the algorithm as described above:\n\\begin{algorithm}\n\\begin{algorithmic}[1]\n\\Procedure{kruskal}{$edges$}\n\t\\State $tree \\gets \\text{Empty list of edges for MST}$\n\t\\State $nodes \\gets \\text{Dictionary that points each node towards its root, initially itself}$\n\t\\State $remaining \\gets \\text{Number of edges still needed, } n-1$\n\t\\Function{track}{$node$} \\Comment{Find the root of the given node}\n\t\t\\State $temp \\gets \\text{Node whose root we are finding}$\n\t\t\\While{$temp$ does not point to itself in the dictionary}\n\t\t\t\\State Update $temp$ to be the node it currently points to in $nodes$\n\t\t\\EndWhile\n\t\t\\State \\pseudoli{return} $temp$\n\t\\EndFunction\n\t\\For{$n_1, n_2, weight$ in sorted list of edges}\n\t\t\\State $root \\gets \\text{Root node of } n_1$\n\t\t\\State $remove \\gets \\text{Root node of } n_2$\n\t\t\\If{$root$ is not $remove$}\n\t\t\t\\State Add the edge to the tree\n\t\t\t\\State Lower $remaining$ by 1\n\t\t\t\\If{$remaining$ is 0}\n\t\t\t\t\\State \\pseudoli{return} $tree$\n\t\t\t\\EndIf\n\t\t\t\\State Change the value associated with $remove$ to $root$\n\t\t\\EndIf\n\t\\EndFor\t\n\\EndProcedure\n\\end{algorithmic}\n\\caption{Kruskal's algorithm for finding an MST}\n\\label{alg:kruskal}\n\\end{algorithm}\n\n\\begin{comment}\nPrevious description of the algorithm\n\\begin{itemize}\n\n% Note: The comments in the solutions match the pseudocode.\n% When making changes, please keep that in mind.\n\n\\item Initialize an empty list of edges for the minimum spanning tree.\n\n\\item Make a dictionary that points each node toward its root (not always directly to it).\nStart with each node pointing to itself.\n\n\\item Initialize the number of edges that still need to be added to the number of nodes minus 1.\n\n\\item Define a helper function that, given a node, traces through the dictionary to find the root of its tree.\nThis can be done like this:\n\n\t\\begin{itemize}\n\n\t\\item Initialize a temporary variable to be the node for which we are finding the root.\n\n\t\\item While the temporary node does not point to itself in the dictionary:\n\n\t\t\\begin{itemize}\n\n\t\t\\item Update the temporary node to be the node it currently points to in the dictionary.\n\n\t\t\\end{itemize}\n\n\t\\item Return the temporary node.\n\n\t\\end{itemize}\n\n\\item Iterate over the edges by ascending weight.\nUse a \\li{for} loop for this and return the tree when it is big enough, which breaks the loop for you.\n\n\t\\begin{itemize}\n\n\t\\item Trace through the dictionary to find the root node of each of the nodes in the edge you are processing.\n\n\t\\item If the roots are not the same (i.e. 
if adding the edge doesn't form a cycle):\n\n\t\t\\begin{itemize}\n\n\t\t\\item Add the edge to the tree.\n\n\t\t\\item Lower the number of edges remaining by one.\n\n\t\t\\item If the number of edges remaining is 0, return the tree (which also breaks the loop).\n\n\t\t\\item Update the root of the root of the second node in the edge to be the root of the first node in the edge.\n\t\t\tThis lets us record that the two subtrees are now connected.\n\n\t\t\\end{itemize}\n\n\t\\end{itemize}\n\n\\end{itemize}\n\\end{comment}\nNotice that when $remaining$ reaches 0, the MST is as big as it needs to be, so the for loop ends; also, updating the root of the second node in the edge is what allows us to check whether two nodes are already connected.\n\nYou can iterate over a sorted copy of a list using the built-in \\li{sorted} function.\nYou can sort by the third value in each tuple using the \\li{itemgetter} function that is part of the \\li{operator} library included with Python.\nFor example:\n\\begin{lstlisting}\nfrom operator import itemgetter\n\nfor n1, n2, weight in sorted(edges, key=itemgetter(2)):\n    ...\n\\end{lstlisting}\n\n\\begin{problem}\nImplement Kruskal's algorithm as in Algorithm \\ref{alg:kruskal}.\nTest your algorithm on lists of weighted edges generated by the \\li{randgraph} function below. Also use the data from MSTdata.npy to test your algorithm.\nUse \\li{np.load(\"MSTdata.npy\")} to get it, and use the \\li{formChanger} function below to put it in the right form.\n\\begin{lstlisting}\nfrom scipy import linalg as la\nimport numpy as np\n\n# n is the number of nodes in the graph\ndef randgraph(n):\n    # Creates a sparse upper triangular matrix of ints between 1 and 49\n    A = la.triu(np.random.randint(1,50,(n,n))*(np.random.rand(n,n)>.5))\n    S = []\n    # Iterate through (i,j) pairs in A; x is the value of (i,j)\n    for index, x in np.ndenumerate(A):\n        # If an edge exists\n        if x != 0:\n            # Append (i,j) as an edge with weight x\n            S.append((str(index[0]), str(index[1]), x))\n    # Return the list of weighted edges\n    return S\n\ndef formChanger(oldData):\n    newData=[]\n    for i in oldData: newData.append((i[0],i[1],int(i[2])))\n    return newData\n\\end{lstlisting}\n\\end{problem}\n\n\\begin{comment}\n\\section*{Prim's algorithm}\n\nPrim's is a similar algorithm for finding minimum spanning trees.\nWhile it is much slower than Kruskal's algorithm for sparse graphs, it is much faster for dense graphs because Prim's algorithm avoids sorting the edges.\n\nAgain, consider the example shown in Figure 7.4.\nWe first initialize a dictionary with all the nodes as keys in order to track which nodes have not been processed.\nAt the beginning it will be \\li{\\{A:False, B:False, C:False, D:False, E:False, F:False\\}}.\nWe then form the dictionary that maps each node to the edges that contain it.\nSince we will already know one node of each edge while we use the dictionary, we only need to store the node that is not being looked up.\nIt should end up looking like \\li{\\{A:[(B, 3), (F, 6)], B:[(A, 3), (C, 5), (F, 4)], C:[(B, 5), (D, 1), (E, 1), (F, 5)], D:[(C, 1), (E, 2)], E:[(C, 1), (D, 2), (F, 4)], F:[(A, 6), (B, 4), (C, 5), (E, 4)]\\}}.\nWe will also initialize an empty dictionary to track the shortest edges that run between nodes we have processed and nodes that we haven't.\nAs we iterate over the edges in our initialization step, we are also able to find the shortest edge.\nIn this case, it is \\li{(D, C, 1)}.\nLet's start with \\li{(D, C, 
\n\n\\begin{problem}\nImplement Kruskal's algorithm as in Algorithm \\ref{alg:kruskal}.\nTest your algorithm on lists of weighted edges generated by the \\li{randgraph} function below. Also use the data from MSTdata.npy to test your algorithm.\nUse np.load(\"MSTdata.npy\") to get it, and use the \\li{formChanger} function below to put it in the right form.\n\\begin{lstlisting}\nfrom scipy import linalg as la\nimport numpy as np\n\n# n is the number of nodes in the graph\ndef randgraph(n):\n    # Creates a sparse upper triangular matrix of ints between 1 and 50\n    A = la.triu(np.random.randint(1,50,(n,n))*(np.random.rand(n,n)>.5))\n    S = []\n    # Iterate through (i,j) pairs in A; x is the value of (i,j)\n    for index, x in np.ndenumerate(A):\n        # If an edge exists\n        if x != 0:\n            # Append (i,j) as an edge with weight x\n            S.append((str(index[0]), str(index[1]), x))\n    return S\n\ndef formChanger(oldData):\n    newData=[]\n    for i in oldData: newData.append((i[0],i[1],int(i[2])))\n    return newData\n\\end{lstlisting}\n\\end{problem}\n\n\\begin{comment}\n\\section*{Prim's algorithm}\n\nPrim's is a similar algorithm for finding minimum spanning trees.\nWhile it is much slower than Kruskal's algorithm for sparse graphs, it is much faster for dense graphs because Prim's algorithm avoids sorting the edges.\n\nAgain, consider the example shown in Figure 7.4.\nWe first initialize a dictionary with all the nodes as keys in order to track which nodes have not been processed.\nAt the beginning it will be \\li{\\{A:False, B:False, C:False, D:False, E:False, F:False\\}}.\nWe then form the dictionary that maps each node to the edges that contain it.\nSince we will already know one node of each edge while we use the dictionary, we only need to store the node that is not being looked up.\nIt should end up looking like \\li{\\{A:[(B, 3), (F, 6)], B:[(A, 3), (C, 5), (F, 4)], C:[(B, 5), (D, 1), (E, 1), (F, 5)], D:[(C, 1), (E, 2)], E:[(C, 1), (D, 2), (F, 4)], F:[(A, 6), (B, 4), (C, 5), (E, 4)]\\}}.\nWe will also initialize an empty dictionary to track the shortest edges that run between nodes we have processed and nodes that we haven't.\nAs we iterate over the edges in our initialization step, we are also able to find the shortest edge.\nIn this case, it is \\li{(D, C, 1)}.\nLet's start with \\li{(D, C, 1)} and initialize our tree as the list \\li{[(D, C, 1)]}.\nNext, we mark \\li{D} and \\li{C} as processed in the dictionary that tracks which nodes have been processed.\nWe now start to build our dictionary of nodes that are one edge away from our processed nodes.\nIn this case, after adding the shortest edges between processed and unprocessed nodes to the dictionary, the dictionary becomes \\li{\\{B:(C, B, 5), E:(C, E, 1), F:(C, F, 5)\\}}.\nNotice we did not include \\li{E:(D, E, 2)} because there is a shorter edge to \\li{E} from \\li{C}.\n\nOf the edges in the dictionary of edges that can be processed next, the shortest is \\li{(C, E, 1)}, so we add that edge to the tree and mark \\li{E} as processed.\nWith \\li{E} being processed, we can now reach \\li{F} at a cost of 4.\nAfter making this change, the dictionary of edges to process becomes \\li{\\{B:(C, B, 5), E:(C, E, 1), F:(E, F, 4)\\}}.\nSince \\li{E} no longer needs to be processed, we can remove it from consideration.\nSo this dictionary now becomes \\li{\\{B:(C, B, 5), F:(E, F, 4)\\}}.\n\nOf the edges to be processed next, \\li{(E, F, 4)} is the shortest, so we add it to the tree and mark \\li{F} as processed.\nAfter performing the appropriate modifications to the dictionary of edges to be processed next, it becomes \\li{\\{B:(F, B, 4), A:(F, A, 6)\\}}.\n\nOf the edges to be processed next, \\li{(F, B, 4)} is the shortest, so we add it to the tree and mark \\li{B} as processed.\nAfter performing the appropriate modifications to the dictionary of edges to be processed next, it becomes \\li{\\{A:(B, A, 3)\\}}.\n\nThe only edge to be considered is \\li{(B, A, 3)}, so we add it to the tree.\nThe tree is long enough that it spans the nodes, so the algorithm is finished.\n\nHere's pseudocode for a version of Prim's algorithm.\nWhile it is not perfectly optimized, it is reasonably good.\n\n\\begin{itemize}\n\n% Note: The comments in the solutions match the pseudocode.\n% When making changes, please keep that in mind.\n\n\\item Initialize a dictionary to track which nodes have been processed.\n\n\\item Initialize an empty dictionary of lists to track the edges containing each node.\n\n\\item Fill the edge list.\n\tBe sure to add each edge to the list corresponding to both of its nodes.\n\n\\item Get the first edge to add (the shortest edge from any given node is a good pick).\n\n\\item Mark the nodes in the first edge as processed.\n\n\\item Initialize the tree to be the list containing the first edge.\n\n\\item Initialize an empty dictionary that will be used to contain the edges that can be processed next.\n\n\\item Define a helper function to insert an edge into the dictionary (if that insertion is needed).\n\tThis can be done as follows:\n\n\t\\begin{itemize}\n\n\t\\item Get the value of the node that is reached by the edge.\n\n\t\\item If that node isn't in the dictionary, set its value to be the edge passed to the function.\n\n\t\\item If it is in the dictionary already, set its value to be the shorter of the edge being processed and the edge already in the dictionary.\n\n\t\\end{itemize}\n\n\\item Use the helper function to insert the edges reached by the first two processed nodes into the dictionary of edges to be processed.\n\n\\item Until the tree contains enough edges to span all the nodes:\n\n\t\\begin{itemize}\n\n\t\\item Find the shortest edge in the dictionary of edges to be processed.\n\n\t\\item Remove the shortest edge from the dictionary.\n\n\t\\item Add it to the tree.\n\n\t\\item Mark the node reached by the new edge as processed.\n\n\t\\item Use the helper function to insert the edges reached by the newly processed node into the dictionary of edges to be processed.\n\n\t\\end{itemize}\n\n\\item Return the completed tree.\n\n\\end{itemize}
\n\n\\begin{problem}\nWrite a Python function that uses Prim's algorithm to find the minimum spanning tree of a graph.\nTest your implementation with the same data as the previous problem.\nCompare the speed of Prim's algorithm with the speed of Kruskal's algorithm.\nCreate a function which prints these two times.\n\\end{problem}\n\\end{comment}\n\n\\section*{NetworkX}\n\nYou have already used NetworkX to find the shortest path between nodes in a graph. You can also use NetworkX to find the minimal spanning tree. Recall that NetworkX is used as follows:\n\\begin{lstlisting}\nimport networkx as nx\n\n# Create an undirected graph G.\nG = nx.Graph()\n# Add the node x.\nG.add_node(x)\n\n# Add an edge with weight n from x to y.\n# Remember that NetworkX will add the nodes in an edge if they do not already exist, and it does not permit duplicate nodes.\nG.add_edge(x, y, weight=n)\n\n# Add multiple nodes from a list at once\nG.add_nodes_from(N)\n\n# Return the minimal spanning tree using Kruskal's algorithm\nnx.minimum_spanning_tree(G)\n\\end{lstlisting}\n\n\\begin{problem}\nUse NetworkX to find the minimal spanning tree of the data in MSTdata.npy.\nCompare the timing of your implementation with that of NetworkX.\n\n\\emph{Helpful Hint:} You should not use the output of your \\li{formChanger} function to create your NetworkX graph. Instead, use a for loop to iterate over the original data and add an edge \\li{i[0], i[1], weight=int(i[2])} for each element \\li{i}.\n\\end{problem}\n\n%Add specifications
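\n\nFor instance, the graph for this problem can be set up roughly as follows. This is a minimal sketch following the hint above; the variable names are our own.\n\\begin{lstlisting}\nimport networkx as nx\nimport numpy as np\n\noldData = np.load(\"MSTdata.npy\")\n\nG = nx.Graph()\n# Add each row of the raw data as a weighted edge.\nfor i in oldData:\n    G.add_edge(i[0], i[1], weight=int(i[2]))\n\n# The result is itself a NetworkX graph whose edges form the MST.\ntree = nx.minimum_spanning_tree(G)\n\\end{lstlisting}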
\n\n\\section*{Image Segmentation}\n\n%Lab \\ref{MSTImgSeg}\n\nOne application of Minimal Spanning Trees (MSTs) is image segmentation.\nKruskal's algorithm is especially good at this.\nYou can convert an image into a graph. Each pixel becomes a node and you define weights between the nodes. You then build the minimal spanning tree. If you take away the edge with the greatest weight, the tree splits into two trees (removing an edge from any graph with no cycles increases the number of connected components by one). You can then repeat this on each piece to split it further. Each resulting tree corresponds to a segment of the image.\nLet $k$ be the number of divisions that is wanted and $n$ be the number of nodes.\nKruskal's algorithm is performed until $n-k$ edges are added, which is equivalent to leaving out the $k-1$ edges of greatest weight that the algorithm would otherwise add.\n\nThere are many different ways to turn an image into a graph and weight the edges.\nA simple yet effective version is to make every pixel a node and to connect each pixel to its neighbors in the four cardinal directions, weighting each edge by the difference in intensities.\n\nThis means that there are fewer than $4n$ edges in the graph.\nMany other image segmentation algorithms have to use $n^2$ space.\nThis gives the MST algorithm a critical advantage over other image segmentation algorithms.\n\n\\begin{problem}\nWrite a function that takes a black and white .jpg image as input and outputs a list of the weighted edges.\nStore the edges using the form \\li{(node,node,weight)}. Use \\li{scipy.ndimage.imread('filename.jpg')} to read in an RGB .jpg image. The output will be a three-dimensional Numpy array with the third dimension representing the red, green, and blue intensities of each pixel. In a black and white image, these will all be the same, so use the command \\li{array[:,:,0]} to create a two-dimensional array holding the first of the three channels for each pixel.\n\n\\emph{Helpful Hint:} Use the \\li{ndenumerate()} command to easily iterate through the indices of the array. Keep in mind that since the graph is undirected, you do not actually have to create four edges for every index; two directions suffice.\n\\end{problem}\n\nOur original Kruskal's algorithm returns a list of edges representing a MST. We modify this algorithm to take in an additional argument representing the number of divisions desired and return a dictionary with the nodes as the keys and their roots as the values (remember that the dictionary used in the original algorithm points each node \\emph{towards} its root instead of directly at the root itself). The number of distinct roots is the number of divisions, and the number of keys associated with each root gives the size of its tree. When we segment an image, the number of divisions often has to be higher than the number actually needed, because isolated pixels whose intensities differ greatly from their surroundings sometimes form tiny divisions of their own. You will have to adjust the number of divisions until the desired result is found.  See Figure \\ref{mst:segmented}.\n\n\\begin{algorithm}\n\\begin{algorithmic}[1]\n\\Procedure{kruskal}{$edges$, $div$}\n\t\\State $nodes \\gets \\text{Dictionary that points each node towards its root, initially itself}$\n\t\\State $end \\gets \\text{Number of edges to be added, } n-div$\n\t\\Function{track}{$node$} \\Comment{Find the root of the given node}\n\t\t\\State $temp \\gets \\text{Node whose root we are finding}$\n\t\t\\While{$temp$ does not point to itself in the dictionary}\n\t\t\t\\State Update $temp$ to be the node it currently points to in $nodes$\n\t\t\\EndWhile\n\t\t\\State \\pseudoli{return} $temp$\n\t\\EndFunction\n\t\\For{$n_1, n_2, weight$ in sorted list of edges}\n\t\t\\State $root \\gets \\text{Root node of } n_1$\n\t\t\\State $remove \\gets \\text{Root node of } n_2$\n\t\t\\If{$root$ is not $remove$}\n\t\t\t\\State Lower $end$ by 1\n\t\t\t\\If{$end$ is 0}\n\t\t\t\t\\State Change the value associated with $remove$ to $root$\n\t\t\t\t\\State \\pseudoli{return} dict with nodes as keys and their roots as values\n\t\t\t\\EndIf\n\t\t\t\\State Change the value associated with $remove$ to $root$\n\t\t\\EndIf\n\t\\EndFor\t\n\\EndProcedure\n\\end{algorithmic}\n\\caption{Modified Kruskal's algorithm for Image Segmentation}\n\\label{alg:modifiedkruskal}\n\\end{algorithm}
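\n\nIn Python, Algorithm \\ref{alg:modifiedkruskal} might be sketched as follows. This is only one possible rendering; the names and the final flattening pass that maps every node directly to its root are our own choices, not requirements.\n\\begin{lstlisting}\nfrom operator import itemgetter\n\ndef kruskal_segment(edges, div):\n    # Each node starts as its own root.\n    nodes = {n: n for edge in edges for n in edge[:2]}\n    end = len(nodes) - div  # number of edges to add\n\n    def track(node):\n        # Find the root of the given node.\n        while nodes[node] != node:\n            node = nodes[node]\n        return node\n\n    for n1, n2, weight in sorted(edges, key=itemgetter(2)):\n        root, remove = track(n1), track(n2)\n        if root != remove:\n            nodes[remove] = root\n            end -= 1\n            if end == 0:\n                break\n    # Flatten the pointer chains so every node maps directly to its root.\n    return {node: track(node) for node in nodes}\n\\end{lstlisting}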
\n\n\\begin{problem}\nConvert the given image to a list of weighted edges using your previous solution. Implement the modified Kruskal's algorithm as described above and use it to segment the image, then graph the original image and the three largest divisions. \n\n\\emph{Helpful Hint:} Use the \\li{Counter} class from \\li{collections} to find the number of pixels (nodes) in each division, plot the images as subplots, and use \\li{plt.imshow()} to view the images.\n\\end{problem}\n\n\\begin{comment}\n\\begin{problem}\nMake a division of the image a different color.\n\\end{problem}\n\\end{comment}\n\n\\vfill\n\\begin{figure}[ht]\n\\begin{minipage}[b]{0.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{MSTseg1.jpg}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{MSTseg2.jpg}\n\\end{minipage}\n\\begin{minipage}[b]{0.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{MSTseg3.jpg}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{MSTseg4.jpg}\n\\end{minipage}\n\\caption{The original image is in the top left-hand corner. The three largest segments are shown in the other corners. The original image was $498\\times498$ and 50000 divisions were used.}\n\\label{mst:segmented}\n\\end{figure}\n\\vfill \n", "meta": {"hexsha": "01bf6af14c5d7b92c63efbc40811493f0a326f2c", "size": 32273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Labs/MST/mst.tex", "max_stars_repo_name": "jessicaleete/numerical_computing", "max_stars_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2016-10-18T19:54:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-09T20:12:38.000Z", "max_issues_repo_path": "Labs/MST/mst.tex", "max_issues_repo_name": "jessicaleete/numerical_computing", "max_issues_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Labs/MST/mst.tex", "max_forks_repo_name": "jessicaleete/numerical_computing", "max_forks_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-05-14T16:07:59.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-20T09:05:06.000Z", "avg_line_length": 46.5028818444, "max_line_length": 883, "alphanum_fraction": 0.7087968271, "num_tokens": 10019, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8175744806385543, "lm_q1q2_score": 0.5823028581232201}}
{"text": "\\section{Part 3. Linear Structure Growth}\n\nThe file of the functions used for this exercise is:\n\n\\lstinputlisting{part_three.py}\n\nThe result of this function is shown in \\ref{fig:lgf}:\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=0.9\\linewidth]{./plots/growth_factors.png}\n  \\caption{Linear Growth Factors for different cases.}\n  \\label{fig:lgf}\n\\end{figure}\n\nAs can be seen from the plot, the numerical solution is close the analytical solution, but not exact.\nThe Runge-Kutta 4th order method was chosen for its ease of implementation and relative accuracy.\nOne issue that I was not able to figure out was for the third case, the analytical and numerical solutions differ by\nquite a bit. This is unexpected, since the other two cases, it does match fairly closely. Infact, the third case, the numerical\nsolution seems to converge on the first case for some reason. Quite possibly going to a 5th order Runge-Kutta could fix the issue,\nbut there was not time to implement it.\n\nI did not have time to determine why this is the case and fix it.", "meta": {"hexsha": "b3dca0437d1ce7e1eb5b1a21bc3f3e934e213118", "size": 1055, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "part_3.tex", "max_stars_repo_name": "jacobbieker/NUR_Handin2", "max_stars_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "part_3.tex", "max_issues_repo_name": "jacobbieker/NUR_Handin2", "max_issues_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "part_3.tex", "max_forks_repo_name": "jacobbieker/NUR_Handin2", "max_forks_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-05-17T07:33:07.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-17T07:33:07.000Z", "avg_line_length": 45.8695652174, "max_line_length": 130, "alphanum_fraction": 0.7791469194, "num_tokens": 258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744673038222, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5823028536197109}}
{"text": "\\section{Numerical Results}\n\\label{sec:numerical_results}\n\nIn this section we present the numerical results of the online version of the policy gradient algorithms discussed in Section \\ref{sec:basics_reinforcement_learning} for the asset allocation problem. \n\n\\subsection{Synthetic Asset}\nTo assess the different reinforcement learning methods in a controlled environment, the algorithms have been tested on a synthetic asset whose behavior presents some features that can be traded profitably. We simulated the log-price series $\\{z_t\\}$ for the risky asset as a random walk with autoregressive trend $\\{\\beta_t\\}$. The two-parameter model is thus given by\n\\begin{equation}\n\t\\begin{split}\n\t\tz_t &= z_{t-1} + \\beta_{t-1} + \\kappa \\epsilon_t\\\\\n\t\t\\beta_t &= \\alpha \\beta_{t-1} + \\nu_t\\\\\n\t\\end{split}\n\\end{equation}\nWe then define the synthetic price series as\n\\begin{equation}\n\tZ_t = \\exp\\left(\\frac{z_t}{\\max_t z_t - \\min_t z_t}\\right)\n\\end{equation}\nThis model is often taken as a benchmark test in the automated trading literature, see for instance \\cite{moody1998performance}. In addition to presenting some exploitable patterns, the model is stationary and therefore the policy learned on the training set should generalize well on the test set, also known as backtest in the financial jargon. We would thus expect our learning algorithms to perform well on this test case. \n\n\\subsection{Experimental Setup}   \nAll the algorithms were tested on the same price series of size $9000$, generated from the process above using $\\alpha = 0.9$ and $\\kappa = 3$. The learning process consisted of $500$ training epochs on the first $7000$ days of the series with a learning rate that decreased at each epoch according to a polynomial schedule. The trained agents were subsequently backtested on the final $2000$ days, during which the agents kept learning online in order to try to adapt to the changing environment. The results that we present are the average of $10$ independent experiments that used slightly different random initialization of the policy parameters.   \n\n\\subsection{Convergence}\nLet us first discuss the case with no transaction costs. Figure \\ref{fig:single_synthetic_neutral_convergence} shows the learning curves three algorithms in terms of average daily reward, which is the quantity being maximized by the algorithms, the daily reward standard deviation and the annualized Sharpe ratio. \nThe NPGPE algorithm is an enhancement of the PGPE algorithm based on the natural gradient technique \\cite{miyamae2010natural}. The first thing we observe is the ARAC algorithm seems not to be improving the trading strategy as the training epochs go by. The average reward obtained is close to zero and will be surely be negative once transaction costs are introduced. On the other hand, NPGPE slowly converges to a profitable strategy which is however suboptimal compared to the one found by PGPE, that is better in all three measures considered. It is interesting to notice that PGPE and NPGPE yield a learning curve for the Sharpe ratio very similar to the one for the average reward. Even if the algorithm is risk-neutral, it manages to improve a risk-senitive measure at the same time of the average reward. This might be simply a peculiarity of the very simple model assumed for the synthetic risky asset. Moreover, since the price process is stationary, the trading strategy learned on the training set generalizes well to the test set. 
\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[height=6cm,width=1.0\\textwidth]{Images/6_0_single_synthetic_neutral_convergence}\n\t\\caption[Risk-neutral learning process for one synthetic risky asset]{Risk-neutral learning process for the asset allocation problem with one synthetic risky asset.}\n\t\\label{fig:single_synthetic_neutral_convergence}\n\\end{figure}\n\n\\subsection{Performances}\nFigure \\ref{fig:single_synthetic_neutral_performance} compares the backtest performances of the three learned policies and a Buy and Hold strategy, which simply consists of investing all the available capital in the risky asset. Let us repeat that the solid lines are the averages of $10$ independent experiments, which allows us to determine the $95\\%$ confidence intervals represented with the dashed lines. We clearly see that NPGPE and PGPE consistently outperform the market, realizing a total profit of $231.63\\%$ and $314.34\\%$ respectively against the $7.81\\%$ profit of the Buy and Hold strategy over the same period. More statistics of the trading strategies are reported in Table \\ref{tab:single_synthetic_neutral_performance}.\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[height=6cm,width=1.0\\textwidth]{Images/6_1_single_synthetic_neutral_performance}\n\t\\caption[Backtest performance with one synthetic risky asset]{Backtest performance of trained trading systems for the asset allocation problem with one synthetic risky asset.}\n\t\\label{fig:single_synthetic_neutral_performance}\n\\end{figure}\n\\begin{table}[t!]\n\\centering\n\\begin{tabular}{@{}lllll@{}}\n\\toprule\n & \\multicolumn{1}{c}{Buy and Hold} & \\multicolumn{1}{c}{ARAC} & \\multicolumn{1}{c}{NPGPE} & \\multicolumn{1}{c}{PGPE} \\\\ \\midrule\nTotal Return & 7.81\\% & -0.86\\% & 231.63\\% & 314.34\\% \\\\\nDaily Sharpe & 0.27 & -0.02 & 4.13 & 4.95 \\\\\nMonthly Sharpe & 0.19 & -0.07 & 2.90 & 3.26 \\\\\nYearly Sharpe & 0.23 & -0.10 & 1.55 & 1.76 \\\\\nMax Drawdown & -22.35\\% & -12.60\\% & -3.72\\% & -3.27\\% \\\\\nAvg Drawdown & -1.75\\% & -1.81\\% & -0.49\\% & -0.43\\% \\\\\nAvg Up Month & 2.87\\% & 1.14\\% & 2.47\\% & 2.74\\% \\\\\nAvg Down Month & -2.58\\% & -1.10\\% & -0.73\\% & -0.67\\% \\\\\nWin Year \\% & 40.00\\% & 44.00\\% & 98.00\\% & 100.00\\% \\\\\nWin 12m \\% & 56.36\\% & 48.00\\% & 100.00\\% & 100.00\\% \\\\\nReallocation Freq & 0.00\\% & 50.01\\% & 19.99\\% & 15.43\\% \\\\\nShort Freq & 0.00\\% & 50.13\\% & 41.59\\% & 44.25\\% \\\\ \\bottomrule\n\\end{tabular}\n\\caption[Backtest statistics for risk-neutral learning with one synthetic risky asset]{Backtest statistics of the risk-neutral trading strategies for the asset allocation problem with one synthetic risky asset.}\n\\label{tab:single_synthetic_neutral_performance}\n\\end{table}\n\n\\subsection{Impact of Transaction Costs}\nIn the algorithmic trading literature there are many examples of strategies based on the prediction of future returns from more or less complex indicators \\cite{kamijo1990stock}, \\cite{saad1998comparative}, \\cite{liang2011stock}. However, as pointed out in \\cite{deng2016deep}, the performances of these methods quickly degrade when transaction costs for changing the portfolio composition or for shorting a security\nare considered. Indeed, these methods simply invest based on the prediction of the future returns, without explicitly taking into account transaction costs. 
On the other hand, reinforcement learning algorithms should learn to avoid frequent reallocations or shorts thanks to the feedback mechanism between the learning agent and the system, thus generating better trading performances. In this section we analyze how the strategies learned by PGPE and by NPGPE change when gradually increasing the proportional transaction costs and the short-selling fees. Intuitively, we expect a progressive reduction of the frequency of reallocation and of shorting the risky asset.\\\\\nFigure \\ref{fig:impact_transaction_costs} shows the impact of proportional transaction costs on the trading strategies learned by PGPE and by NPGPE. As expected, the frequency of reallocation for both strategies quickly drops to zero as the transaction costs increase, converging to the profitable buy and hold strategy. It is peculiar that the reallocation frequency for the PGPE strategy initially drops more quickly than for the NPGPE strategy, but then slows down and even increases when $\\delta_P = 20$ bps.\nIn summary, both algorithms are able to identify reallocation as the cause for lower rewards and to subsequently reduce the rate of reallocation, converging towards the simple yet profitable buy and hold strategy. Figure \\ref{fig:impact_short_selling_fees} shows the impact of short-selling fees on the trading strategies learned by PGPE and NPGPE. Both algorithms behave as expected, displaying a progressive reduction of the frequency of short positions as the fees increase. For large values of short-selling fees, both strategies converge to the profitable buy and hold strategy, which completely avoids paying the fees. In particular, PGPE quickly replicates the buy and hold strategy. On the other hand, NPGPE is not able to exactly reproduce the buy and hold strategy but it seems to converge to it for very large values of the short-selling fee. 
\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[height=6cm,width=1.0\\textwidth]{Images/6_2_impact_transaction_costs}\n\t\\caption[Impact of proportional transaction costs]{Impact of proportional transaction costs on the trading strategies learned by PGPE and NPGPE.}\n\t\\label{fig:impact_transaction_costs}\n\\end{figure}\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[height=6cm,width=1.0\\textwidth]{Images/6_3_impact_short_selling_fees}\n\t\\caption[Impact of short-selling fees]{Impact of short-selling fees on the trading strategies learned by PGPE and NPGPE.}\n\t\\label{fig:impact_short_selling_fees}\n\\end{figure}", "meta": {"hexsha": "94f47471bd82997c5891a703c11fe91dc3ac1fb6", "size": 9202, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pacs/Report/Sections/7_numerical_results.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Pacs/Report/Sections/7_numerical_results.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pacs/Report/Sections/7_numerical_results.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 115.025, "max_line_length": 1043, "alphanum_fraction": 0.785155401, "num_tokens": 2304, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.817574471748733, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5823028517916033}}
{"text": "%\n% originally set up at Sep. 17th, 2009\n% \n% working list:\n% TDDFT energy expression\n%\n%\n\n% problem list:\n%  1  why we have the expression of eq:functional:45 established?\n%   Why Can I use the chain rule of functional and delta function \n%   (just the above content) can not prove it?\n%   By the way, if we replace the g(x^{'}) with the f(x^{'}) in the \n%   functional chain rule and use the delta function, we can not get\n%   This result. Puzzling.....\n%\n\n\n\n\n    \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Introduction to the functional analysis}\n%\n%\n%\n%\n%\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Original concepts for the functional and functional\n  derivatives}\n%\n% 1 how to make define the functional, and examples 2 how to usher\n% into the concept for the differential of functional from the limit\n% of the finite space case\n%  \n%\n%\nIn mathematical analysis, the definition for the function is very\nclear; it's some relation that connects two number fields together:\n\\begin{equation}\\label{eq:functional:1}\n  x \\xrightarrow{f}  y \\quad \\text{we can abbreviate the $y$ as $f(x)$}\n\\end{equation}\nHere the $x$ and $y$ are varied in some given field, it's usually the\nreal number field or the complex number field. As an example, the $y =\n\\sin x$ just makes the $ (-\\infty, +\\infty)$ field correspond to the\n$[-1, +1]$ field.\n\nFurthermore, in mathematics a functional is also some ``mapping\nrelation'' between some function and a number, here function is some\nbasic variables and through the expression we can get some number:\n\\begin{equation}\\label{eq:functional:2}\n  f(x) \\xrightarrow{F} y\n\\end{equation}\nThe $F$ is the functional for the functions of $f(x)$, here such\nrelation can be abbreviated as $F[f(x)] = y$.\n\nthere are many examples of functional that we can put forward in\nphysics. In fact, in traditional quantum mechanics, the average\nexpectation value of $\\bra{\\Psi}\\hat{A}\\ket{\\Psi}$ is just some\nfunctional of wave function of $\\Psi$:\n\\begin{equation}\\label{eq:functional:3}\n  \\Psi \\xrightarrow{F} \\bra{\\Psi}\\hat{A}\\ket{\\Psi}\n\\end{equation}\nIf the operator is Hamiltonian, then the functional is just transform\nthe wave function into it's energy.\n\nOn the other hand, let's prompt some mathematical examples:\n\\begin{equation}\\label{eq:functional:4}\n  A[y(x)] = \\int^{1}_{0} y^{2}(x) d x\n\\end{equation}\nHere the $ A[y(x)]$ corresponds to some number which is depending on\nthe function of $y(x)$. For example, if $y(x) = x^{2}$, then $A[y(x)]\n= \\frac{1}{5}$; if $y(x) = e^{x}$, then $A[y(x)]$ gives\n$\\frac{1}{2}(e^{2} - 1)$.\n\nNow let's proceed to the functional derivatives, which is very\nimportant in density functional theory. Firstly let's return back to\nthe math analysis to recall the differential definition for function.\nConsider a function $F(y)$. If $y$ is making some infinitesimal change\nof $\\bigtriangleup y$, the corresponding change for the function of\n$\\bigtriangleup F(y)$ will be $\\bigtriangleup F(y) =\nF(y+\\bigtriangleup y) - F(y) = F^{'}(y)\\bigtriangleup y + \\alpha\nO(\\bigtriangleup y)$.\n\nHere, the $\\alpha$ means higher derivatives for the function $F(y)$\nwith respect to the $y$. 
It can be made clear by the Taylor expansion (see standard mathematics books for more detail).\n\nHence the infinitesimal change of a function mainly contains two parts: one is the linear part $F^{'}(y)\\bigtriangleup y$, and the other is the non-linear part $\\alpha O(\\bigtriangleup y)$. The linear part is linear in $\\bigtriangleup y$, so as $\\bigtriangleup y$ becomes smaller and smaller, the difference between $\\bigtriangleup F(y)$ and $F^{'}(y)\\bigtriangleup y$ becomes very small; that means we can safely replace $\\bigtriangleup F(y)$ with $F^{'}(y)\\bigtriangleup y$. That is the key idea of the differential of a function.\n\nFor the functional differential we have a similar situation. When the function $f(x)$ makes some infinitesimal change $\\delta f(x)$, the corresponding change of the functional can always be expressed as:\n\\begin{equation}\n  \\delta A[f(x)] = A[f(x) + \\delta f(x)] - A[f(x)] = M[f(x)]\\delta\n  f(x) + N[f(x)]O(\\delta f(x))\n\\end{equation}\nHere $O(\\delta f(x))$ denotes the higher-order changes in $\\delta f(x)$. Compared with the definition of the function differential, we can see that the functional differential is just the linear part $M[f(x)]\\delta f(x)$; hence the functional derivative is $M[f(x)]$.\n\nNow let us put forward the definition of the functional derivative for a functional of the form $F[f(x)] = \\int L[f(x)] dx$. First let us define the infinitesimal change of the functional:\n\\begin{equation}\n  \\label{eq:functional:5}\n \\delta F[ f(x)] = \\int \\left(\\frac{\\delta L}{\\delta\n      f(x)}\\delta f(x)\\right)dx\n\\end{equation}\nHere $\\delta F[ f(x)]$ is the change of the functional corresponding to the infinitesimal $\\delta f(x)$, and we can see that $\\dfrac{\\delta L}{\\delta f(x)}$ is taken at a certain point $x$, for example the point $x_{0}$; the integration over all $x$ then gives the infinitesimal change of the functional.\n\nAny derivation of the functional derivative should be made to fit the expression (\\ref{eq:functional:5}). 
On the other hand, (\\ref{eq:functional:5}) has an equivalent form:\n\\begin{align}\n  \\label{eq:functional:6}\n \\delta F[ f(x)] &= \\lim_{\\epsilon \\rightarrow 0}\\Bigg[ \\frac{F[f(x)\n    +\\epsilon \\delta f(x) ] - F[f(x)]}{\\epsilon}\\Bigg] \\nonumber \\\\\n  &= F^{'}_{\\epsilon=0}[f(x) + \\epsilon \\delta f(x) ]\n\\end{align}\nHence in (\\ref{eq:functional:6}) the functional derivative is reduced to an ordinary function derivative with respect to $\\epsilon$.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{How to derive the functional derivative}\n%\n%\n%\n%\n%\nThe preceding discussion gives a definition of the functional derivative, but it does not give a useful method for calculating it. Mathematically, we can start with (\\ref{eq:functional:5}) as a definition of the functional derivative and use it to calculate.\n\nAs an example to show how to perform this process, it is better to start with a simple case; hence let us derive the functional derivative of the kinetic energy obtained from the uniform electron gas model, $T_{TF}[\\rho] = c_{F}\\int \\rho^{\\frac{5}{3}}(r) d^{3}r$:\n\\begin{equation}\\label{eq:functional:9}\n  \\begin{split}\n    T_{TF}[\\rho] + \\delta T_{TF}[\\rho] &= c_{F}\\int \\left(\\rho(r) +\n      \\delta\\rho(r) \\right)^{\\frac{5}{3}} d^{3}r \\\\\n    &= c_{F}\\int \\left(\\rho^{\\frac{5}{3}}(r) +\n      \\frac{5}{3}\\rho^{\\frac{2}{3}}(r)\\delta\\rho(r)\n      + O(\\delta\\rho(r)) \\right) d^{3}r  \\\\\n    &= T_{TF}[\\rho] + \\frac{5}{3}c_{F}\\int\n    \\rho^{\\frac{2}{3}}(r)\\delta\\rho(r)d^{3}r\n    + O(\\delta\\rho(r)) \\Rightarrow \\\\\n   \\delta T_{TF}[\\rho] &= \\frac{5}{3}c_{F}\\int\\rho^{\\frac{2}{3}}(r)\n    \\delta\\rho(r)d^{3}r\n  \\end{split}\n\\end{equation}\nNow if we compare the form in (\\ref{eq:functional:9}) with the functional derivative in (\\ref{eq:functional:5}), we can see that the derivative of $T_{TF}[\\rho]$ is:\n\\begin{equation}\n  \\label{eq:functional:10}\n  \\frac{\\delta T_{TF}[\\rho]}{\\delta\\rho(r)} = \\frac{5}{3}c_{F}\n  \\rho^{\\frac{2}{3}}(r)   \n\\end{equation}\nWe note that in (\\ref{eq:functional:9}) we expanded $\\left(\\rho(r) + \\delta\\rho(r) \\right)^{\\frac{5}{3}}$ according to the Taylor expansion rule:\n\\begin{equation}\n  \\label{eq:functional:11}\n  f(x) = f(x_{0}) +\n  \\sum_{n=1}^{\\infty}\\frac{\\left.f^{(n)}(x)\\right|_{x=x_{0}}}{n!}\\times\n  (x-x_{0})^{n}   \n\\end{equation}\nHere $x$ should be close enough to $x_{0}$ so that (\\ref{eq:functional:11}) converges.\n\nCompared with (\\ref{eq:functional:11}), in (\\ref{eq:functional:9}) the variable $x_{0}$ is replaced by the function $\\rho(r)$, and the variable $x$ corresponds to $\\rho(r) + \\delta\\rho(r)$; then a similar expansion can be used. However, here we omit the convergence discussion, so we assume that all the expansions are mathematically convergent.\n\nThis simple example reveals how to derive the functional derivative for a more general expression. Suppose we have the more general form:\n\\begin{equation}\n  \\label{eq:functional:12}\n  F[f(x)] = \\int A[f(x)] dx\n\\end{equation}\nHere $A[f(x)]$ represents some analytical expression of the given function $f(x)$, for example $A[f(x)] = c_{F}f^{\\frac{5}{3}}(x)$ in (\\ref{eq:functional:9}). 
Then if $f(x)$ receives an infinitesimal change, $f(x) + \\delta f(x)$, by using the Taylor expansion defined in (\\ref{eq:functional:11}) we can expand $A[f(x)] + \\delta A[f(x)]$ into the corresponding series; omitting all the higher-order infinitesimal terms in $\\delta f(x)$, we finally reach an expression of the same form as (\\ref{eq:functional:5}) and thus obtain the functional derivative of $F[f(x)]$.\n\nFinally, we note that the discussion made for (\\ref{eq:functional:12}) is only for the one-dimensional function $f(x)$; however, it can easily be extended to the multidimensional case.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The general functional derivative expression}\n%\n%\n%\n%\n%\nWhen the functional is a simple integral (most of the functionals in physics belong to this type), the equation below gives a powerful formula for quick calculation of the functional derivative. Let us start with the simplest case:\n\\begin{equation}\n  F[y(x)] = \\int L(x, y(x)) dx\n\\end{equation}\nAccording to the general rule described for (\\ref{eq:functional:12}), we have:\n\\begin{multline}\n  \\label{eq:functional:13}\n  F[y(x)] + \\delta F[ y(x)] =  \\\\\n  \\int \\left\\{ L(x, y(x)) + \\frac{\\partial L(x, y(x))}{\\partial y(x)}\n    \\delta y(x) + O(\\delta y(x))\\right\\} dx\n\\end{multline}\nOmitting the term $O(\\delta y(x))$, (\\ref{eq:functional:13}) turns into:\n\\begin{equation}\n  \\label{eq:functional:14}\n  \\delta F[ y(x)] = \\int \\left(\\frac{\\partial\n      L(x, y(x))}{\\partial y(x)} \\delta y(x)\\right) dx\n\\end{equation}\nHence the functional derivative for (\\ref{eq:functional:13}) is:\n\\begin{equation}\n  \\label{eq:functional:15}\n  \\frac{\\delta  F[y(x)]}{\\delta y(x)} =  \\frac{\\partial\n    L(x, y(x))}{\\partial y(x)} \n\\end{equation}\n\nFor example, the functional $T_{TF}[\\rho]$ in (\\ref{eq:functional:9}) belongs to exactly this type, hence we have:\n\\begin{equation}\n  \\label{eq:functional:16}\n  \\frac{\\delta T_{TF}}{\\delta \\rho} =  c_{F}\\frac{5}{3}\\rho^{\\frac{2}{3}} \n\\end{equation}\n\nHere we note something important: the difference between the $\\delta$ symbol and the $\\partial$ symbol. On the left side of equation (\\ref{eq:functional:15}), the $\\delta$ symbol means that the infinitesimal change of $y(x)$ causes a corresponding change of the functional. On the right side, the $\\partial$ symbol means that only the explicit dependence of $L$ on $y(x)$ is considered, so it is a partial derivative.\n\nFor a functional of the form $F[y(x)] = \\int L(x, y(x)) dx$ it is actually difficult to see the difference; however, when the functional is composed of many variables, such as\n\\begin{equation}\n F[f(x)] = \\int L(f(x), f^{'}(x), f^{''}(x), \\cdots) dx\n\\end{equation} \nthe meaning of the partial derivative becomes clearer. For example, $\\dfrac{\\partial L}{\\partial f^{'}(x)}$ only lets the $f^{'}(x)$ component change while keeping $f(x), f^{''}(x), \\cdots$ constant. However, since $\\delta f(x)$ also causes changes in $f^{'}(x), f^{''}(x), \\cdots$, the functional derivative on the left side of the equation describes the total linear change of $F$ with respect to the change $\\delta f(x)$. 
All in all, the two symbols possess different meanings.\n\nNext let us consider a slightly more complicated case, where the functional has the general form:\n\\begin{equation}\n  \\label{eq:functional:17}\n  F[y(x)] = \\int L(x, y(x), y^{'}(x)) dx\n\\end{equation}\nThis functional depends on two types of functions: $y(x)$ and $y^{'}(x)$. If $y(x)$ and $y^{'}(x)$ make some infinitesimal change, then in terms of the Taylor expansion (\\ref{eq:functional:17}) becomes:\n\\begin{multline}\n  \\label{eq:functional:18}\n  F[y(x)] + \\delta F[ y(x)] =  \\\\\n  \\int \\left\\{ L(x, y(x), y^{'}(x)) + \\frac{\\partial L(x, y(x),\n      y^{'}(x))} {\\partial y(x)} \\delta y(x) + \\right. \\\\\n  \\left. \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial y^{'}(x)}\n    \\delta y^{'}(x) + O(\\delta y(x)) + O(\\delta y^{'}(x)) \\right\\} dx\n\\end{multline}\n\nOmitting the higher-order infinitesimal terms, we have:\n\\begin{multline}\n  \\label{eq:functional:19}\n\\delta  F[ y(x)] =  \\\\\n  \\int \\left\\{ \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial y(x)}\n    \\delta y(x) + \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial\n      y^{'}(x)} \\delta y^{'}(x) \\right\\} dx\n\\end{multline}\n\nHowever, we cannot simply compare (\\ref{eq:functional:19}) with (\\ref{eq:functional:5}) to obtain the functional derivative, since in (\\ref{eq:functional:19}) we have $\\delta y^{'}(x)$ and not only $\\delta y(x)$; an additional treatment must be applied, namely to turn $\\delta y^{'}(x)$ into $\\delta y(x)$ in the last term. We do this with an integration by parts:\n\\begin{equation}\n  \\label{eq:functional:20}\n  \\begin{split}\n    \\int^{b}_{a} \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial\n      y^{'}(x)} \\delta y^{'}(x) dx &= \\int^{b}_{a} \\frac{\\partial L(x,\n      y(x), y^{'}(x))}\n    {\\partial y^{'}(x)} d (\\delta y(x)) \\\\\n    &= \\left. \\frac{\\partial L(x, y(x), y^{'}(x))}\n      {\\partial y^{'}(x)} \\delta y(x) \\right |^{b}_{a} -\\\\\n    &\\int^{b}_{a} \\frac{d}{dx} \\left(\\frac{\\partial L(x, y(x),\n        y^{'}(x))} {\\partial y^{'}(x)} \\right) \\delta y(x) dx\n  \\end{split}\n\\end{equation}
\n\nHere the first term in (\\ref{eq:functional:20}) is called the ``boundary term'' (in quantum physics $a$ and $b$ usually denote the points where the boundary conditions for the wave functions are imposed). Since the variation $\\delta y(x)$ vanishes at the boundary points $a$ and $b$, this term can be safely omitted here.\n\nFinally, (\\ref{eq:functional:20}) turns into:\n\\begin{equation}\n  \\label{eq:functional:21}\n  \\int^{b}_{a} \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial\n    y^{'}(x)} \\delta y^{'}(x) dx = - \\int^{b}_{a} \\frac{d}{dx}  \n  \\left(\\frac{\\partial L(x, y(x), y^{'}(x))}\n    {\\partial y^{'}(x)} \\right) \\delta y(x)  dx\n\\end{equation}\n\nThe differential of the functional (\\ref{eq:functional:17}) then becomes:\n\\begin{multline}\n  \\label{eq:functional:22}\n \\delta F[ y(x)] =  \\\\\n  \\int \\left\\{ \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial y(x)} -\n    \\frac{d}{dx} \\left(\\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial\n        y^{'}(x)} \\right) \\right\\} \\delta y(x) dx\n\\end{multline}\n\nThe functional derivative then becomes:\n\\begin{equation}\n  \\label{eq:functional:23}\n  \\frac{\\delta F[y(x)]}{\\delta y(x)} = \n  \\frac{\\partial L(x, y(x), y^{'}(x))} {\\partial y(x)} -\n  \\frac{d}{dx} \\frac{\\partial L(x, y(x), y^{'}(x))}\n  {\\partial y^{'}(x)}\n\\end{equation}\n\nThe result in (\\ref{eq:functional:23}) is easily extended to higher-order derivatives of $y(x)$. For example, if the integrand is $L(x, y(x), y^{'}(x), y^{''}(x))$, then we have the functional:\n\\begin{equation}\n  \\label{eq:functional:25}\n  F[y(x)] = \\int L(x, y(x), y^{'}(x), y^{''}(x)) dx\n\\end{equation}\n\nThrough the same procedure we can get its derivative:\n\\begin{multline}\n  \\label{eq:functional:24}\n  \\frac{\\delta F[ y(x)]}{\\delta y(x)} = \\frac{\\partial L(x, y(x),\n    y^{'}(x), y^{''}(x))} {\\partial y(x)} - \\frac{d}{dx}\n  \\frac{\\partial L(x, y(x), y^{'}(x), y^{''}(x))}\n  {\\partial y^{'}(x)} + \\\\\n  \\frac{d^{2}}{dx^{2}} \\frac{\\partial L(x, y(x), y^{'}(x), y^{''}(x))}\n  {\\partial y^{''}(x)}\n\\end{multline}\nThe second derivative term comes from the need to integrate by parts twice to deal with $y^{''}(x)$, as in (\\ref{eq:functional:20}), and the two minus signs make a plus sign.
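\n\nAs a quick check of (\\ref{eq:functional:23}), consider the following simple example (added here for illustration):\n\\begin{equation*}\n  F[y(x)] = \\int \\frac{1}{2}\\, y^{'}(x)^{2}\\, dx\n\\end{equation*}\nHere $L$ does not depend on $y(x)$ explicitly, so the first term in (\\ref{eq:functional:23}) vanishes and\n\\begin{equation*}\n  \\frac{\\delta F[y(x)]}{\\delta y(x)} = - \\frac{d}{dx}\\,\\frac{\\partial L}{\\partial y^{'}(x)} = - \\frac{d}{dx}\\, y^{'}(x) = - y^{''}(x)\n\\end{equation*}\nwhich is the same result one obtains by a direct expansion as in (\\ref{eq:functional:9}).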
\n\nFinally, let us extend the one-dimensional case to the three-dimensional situation. For the simplest case, (\\ref{eq:functional:13}), we now have:\n\\begin{equation}\n  \\label{eq:functional:26}\n  F[y(r)] = \\int L(r, y(r)) d^{3}r\n\\end{equation}\nIts functional derivative becomes:\n\\begin{equation}\n  \\label{eq:functional:27}\n  \\frac{\\delta F}{\\delta y(r)} =  \\frac{\\partial\n    L(r, y(r))}{\\partial y(r)} \n\\end{equation}\n\nFor the functional containing the first derivative, the general expression in three dimensions is:\n\\begin{equation}\n  \\label{eq:functional:28}\n  F[y(r)] = \\int L(r, y(r), \\nabla y(r)) d^{3}r\n\\end{equation}\nCompared with (\\ref{eq:functional:23}), its functional derivative is:\n\\begin{equation}\n  \\label{eq:functional:29}\n  \\frac{\\delta  F[y(r)]}{\\delta y(r)} = \n  \\frac{\\partial L(r, y(r), \\nabla y(r))} {\\partial y(r)} -\n  \\nabla \\cdot \\frac{\\partial L(r, y(r), \\nabla y(r))}\n  {\\partial (\\nabla y(r))}\n\\end{equation}\n\nFinally, for the functional containing the second derivative:\n\\begin{equation}\n  \\label{eq:functional:30}\n  F[y(r)] = \\int L(r, y(r), \\nabla y(r), \\nabla^{2} y(r)) d^{3}r\n\\end{equation}\nEquation (\\ref{eq:functional:24}) becomes:\n\\begin{multline}\n  \\label{eq:functional:31}\n  \\frac{\\delta F[y(r)]}{\\delta y(r)} = \\frac{\\partial L(r, y(r),\n    \\nabla y(r), \\nabla^{2} y(r))} {\\partial y(r)} - \\nabla \\cdot\n  \\frac{\\partial L(r, y(r), \\nabla y(r), \\nabla^{2} y(r))}\n  {\\partial (\\nabla y(r))} \\\\\n  + \\nabla^{2} \\cdot \\frac{\\partial L(r, y(r), \\nabla y(r), \\nabla^{2}\n    y(r) )} {\\partial (\\nabla^{2} y(r))}\n\\end{multline}\n\nIn general, for a functional which contains derivatives of the function up to order $N$:\n\\begin{equation}\n  \\label{eq:functional:32}\n  F[y(r)] = \\int L(r, y(r), \\nabla y(r), \\nabla^{2} y(r), \\cdots\n  \\nabla^{N} y(r)) d^{3}r \n\\end{equation}\n\nIts corresponding functional derivative can be expressed as:\n\\begin{multline}\n  \\label{eq:functional:33}\n  \\frac{\\delta  F[y(r)]}{\\delta y(r)} = \\frac{\\partial L} {\\partial\n    y(r)} - \\nabla \\cdot \\frac{\\partial L} {\\partial (\\nabla y(r))} +\n  \\nabla^{2} \\cdot \\frac{\\partial L} {\\partial\n    (\\nabla^{2} y(r))} \\\\\n  + \\cdots + (-1)^{i} \\nabla^{i} \\cdot \\frac{\\partial L} {\\partial\n    (\\nabla^{i} y(r))} + \\cdots\n\\end{multline}\nAll of these formulas can be derived in the same way as the functional derivative (\\ref{eq:functional:23}).\n\nFinally, let us add an important note on the evaluation of the expression below:\n\\begin{equation}\n\\int^{+\\infty}_{-\\infty}\\frac{d L(f(x))}{dx} g(x) dx \n\\end{equation}\nIf $g(x)$ vanishes at the boundary ($+\\infty$ and $-\\infty$), then we can use integration by parts to transform the above integral:\n\\begin{align}\n \\int^{+\\infty}_{-\\infty}\\frac{d L(f(x))}{dx} g(x) dx &= \n \\int^{+\\infty}_{-\\infty} g(x) dL(f(x)) \\nonumber \\\\\n&= g(x)L(f(x))|^{+\\infty}_{-\\infty} - \\int^{+\\infty}_{-\\infty} L(f(x))\ndg(x) \\nonumber \\\\\n&= - \\int^{+\\infty}_{-\\infty} L(f(x)) dg(x)\n\\end{align}\nHence the differentiation of $L(f(x))$ is dropped. Such a transformation is usually very useful, because the differentiation of $L(f(x))$ is always much more complicated than the differentiation of $g(x)$. In quantum chemistry $g(x)$ is the approximate wave function, which is a bound state and therefore becomes zero at infinity. 
\n\nWhat is more, if it is not the first-order but the $N$-th order differentiation of $L(f(x))$:\n\\begin{equation}\n \\int^{+\\infty}_{-\\infty}\\frac{d^{N} L(f(x))}{dx^{N}} g(x) dx \n\\end{equation} \nthen by repeating the above procedure we can get:\n\\begin{equation}\n  \\int^{+\\infty}_{-\\infty}\\frac{d^{N} L(f(x))}{dx^{N}} g(x) dx  =\n(-1)^{N} \\int^{+\\infty}_{-\\infty} L(f(x)) \\frac{d^{N}g(x)}{dx^{N}} dx\n\\end{equation} \nHere we have assumed that $g(x)$ can be differentiated up to $N$ times, and that all the derivatives of $g(x)$ vanish at infinity.\n\nFinally, the result can be extended to the three-dimensional case, where it takes the form:\n\\begin{equation}\n \\label{eq:integration_rule_functional}\n  \\int^{+\\infty}_{-\\infty}\\nabla^{N}L(f(r)) g(r) d^{3}r  =\n(-1)^{N} \\int^{+\\infty}_{-\\infty} L(f(r)) \\nabla^{N}g(r) d^{3}r\n\\end{equation} \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{More rigorous explanation for the expansion of functional\nderivative}\n%\n%\n%\n%\nIn the content above we generally derived the functional derivative by expanding the functional into a series of partial derivatives according to the Taylor expansion. However, why can we do this? In the following we give a brief explanation of the reason, which comes from the rigorous functional derivative shown in (\\ref{eq:functional:6}).\n\nLet us start from the gradient form in (\\ref{eq:functional:28}); the other, more complicated expressions can be treated in the same way:\n\\begin{equation}\n   F[y(r)] = \\int L(r, y(r), \\nabla y(r)) d^{3}r\n\\end{equation}\n\nThen according to (\\ref{eq:functional:6}) we have:\n\\begin{equation}\n\\begin{split}\n \\left.\\frac{d F[y(r) + \\epsilon \\delta\ny(r)]}{d\\epsilon}\\right|_{\\epsilon=0} &= \\dfrac{d}{d\\epsilon}\\int L(r,\ny(r) + \\epsilon \\delta y(r), \\nabla y(r) +\n\\epsilon\\nabla\\delta y(r)) d^{3}r \\\\\n&= \\int \\left\\lbrace \\dfrac{\\partial L}{\\partial\ny}\\delta y + \\dfrac{\\partial L}{\\partial\\nabla y}\\nabla\\delta\ny \\right\\rbrace d^{3}r \\\\\n&=  \\int \\left\\lbrace \\dfrac{\\partial L}{\\partial\ny}\\delta y + \\left[ \\nabla \\cdot\\left( \\dfrac{\\partial\nL}{\\partial\\nabla y} \\delta y\\right) \\right.\\right. \\\\\n&-\\left.\\left.  \\nabla \\cdot\\left( \\dfrac{\\partial L}{\\partial\\nabla\ny}\\right) \\delta y \\right]\\right\\rbrace  d^{3}r \\\\\n&= \\int \\left\\lbrace \\dfrac{\\partial L}{\\partial\ny}\\delta y - \\nabla \\cdot\\left( \\dfrac{\\partial\nL}{\\partial\\nabla\ny}\\right) \\delta y \\right\\rbrace  d^{3}r \n\\end{split}\n\\end{equation}\nNow we can see that it is the same as the result in (\\ref{eq:functional:29}). Moreover, we have used integration by parts in the above derivation, just as in the derivation of (\\ref{eq:functional:20}).\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Other properties}\n\\label{general_functional_other_properties}\n%\n%\n%\n%\nIn this section we gather some common properties related to functionals. They are very useful in the calculation of functional derivatives.\n\nThe first one we wish to put forward is the chain rule for functional derivatives. 
Suppose that we have a functional of the form $F[f(x)] = \\int L(f(x))dx$, and that $f(x)$ is itself, at each point $x$, a functional of another function $g(x^{'})$:\n\\begin{equation}\n f(x) = f[g](x)\n\\end{equation}\n\nThen we can evaluate $\\delta f$ as:\n\\begin{equation}\n  \\delta f = \\int \\frac{\\delta f}{\\delta g(x^{'})} \\delta\ng(x^{'}) dx^{'} \n\\end{equation}\n\nNow let us combine the functional derivative expressions for $f[g]$ and $F[f(x)]$ together; we get:\n\\begin{equation}\n\\begin{split}\n\\delta F[ f(x)] &= \\int \\frac{\\delta L}{\\delta f(x)} \\delta f(x) dx\n\\\\ \n&= \\int \\int \\frac{\\delta L}{\\delta f(x)} \\frac{\\delta f}{\\delta\ng(x^{'})} \\delta g(x^{'}) dx dx^{'}\n\\end{split}\n\\end{equation}\nOn the other hand, since $x$ and $x^{'}$ are only two dummy variables, independent of each other, we can also write:\n\\begin{equation}\n\\delta F[ g(x^{'})] = \\int \\int \\frac{\\delta L}{\\delta f(x)}\n\\frac{\\delta f}{\\delta\ng(x^{'})} \\delta g(x^{'}) dx dx^{'}\n\\end{equation}\nThen, comparing with the definition of the functional derivative, we get:\n\\begin{equation}\n \\label{eq:functional:42}\n\\frac{\\delta F[ g(x^{'})]}{\\delta g(x^{'})} = \\int \\frac{\\delta\nL}{\\delta\nf(x)}\n\\frac{\\delta f(x)}{\\delta\ng(x^{'})} dx\n\\end{equation} \nThis is the chain rule for functional derivatives, and it can easily be extended to the three-dimensional case.\n\nBy the way, there is something interesting to note: if we derive the chain rule for the partial derivative of the functional, what is its expression? It is actually the same as the chain rule for partial derivatives of ordinary variables:\n\\begin{equation}\n\\label{partial_derivative_functional}\n\\frac{\\partial F[ g(x^{'})]}{\\partial g(x^{'})} = \\frac{\\partial\nL}{\\partial f(x^{'})}\\frac{\\partial f(x^{'})}{\\partial\ng(x^{'})}\n\\end{equation} \nComparing with (\\ref{eq:functional:42}), we can see the difference between the partial derivative of a functional and the functional derivative itself.\n\nThe second important property of the functional derivative is the use of the delta function. The function itself can be expressed as a functional:\n\\begin{equation}\n  \\label{eq:functional:43}\n  f(x) = \\int f(x^{'})\\delta(x-x^{'})dx^{'}\n\\end{equation}\nIn this sense, we can even calculate the functional derivative of the function with respect to itself by using the equation above:\n\\begin{align}\n  \\label{eq:functional:44}\n\\frac{\\delta f(x)}{\\delta f(x^{'})} = \\frac{\\partial\n  [f(x^{'})\\delta(x-x^{'})]}{\\partial f(x^{'})} = \\delta(x-x^{'})  \n\\end{align}\nThis relation is very important in evaluating expressions such as $\\dfrac{\\delta F[f(x)]}{\\delta f(x^{'})}$. Mathematically, it addresses the following question: if the function $f(x)$ makes an infinitesimal change $\\delta f(x)$ at the point $x_{0}$, how can we calculate the response of $\\delta F[f(x)]$ at another point $x$? In physics we meet such problems all the time (for example in TDDFT). For this case, we have:\n\\begin{equation}\n  \\label{eq:functional:45}\n  \\frac{\\delta F[f(x)]}{\\delta f(x^{'})} = \\frac{\\delta\n    F[f(x)]}{\\delta f(x^{'})} \\delta(x-x^{'})\n\\end{equation}\n\nSometimes the functional $F[f(x)]$ depends indirectly on some variable $\\lambda$; that is to say, $F[f(x)]$ depends on $f(x)$ and $f(x)$ depends on the variable $\\lambda$. This is the usual situation in the variational method of quantum mechanics. 
In section (\\ref{SE:2}) we described the Ritz variational process in (\\ref{SEeq:7}); there the basis functions are fixed and the change of the wave function depends on the variables $c_{1}, c_{2}, \\cdots$. In practice we meet this case in quantum chemistry all the time, so it is very important.\n\nIf we want to evaluate the derivative of $F[f(x)]$ with respect to $\\lambda$, then according to the chain rule in (\\ref{eq:functional:42}) we have:\n\\begin{equation}\n \\label{lambda_variable:1}\n  \\frac{\\partial F[f(x)]}{\\partial \\lambda} = \\int \\frac{\\delta\nF}{\\delta f(x)}\\frac{\\partial f(x)}{\\partial\\lambda}dx\n\\end{equation} \nHere we mention that $\\dfrac{\\delta F}{\\delta f(x)}$ is the functional derivative of $F[f(x)]$, while $\\dfrac{\\partial f(x)}{\\partial\\lambda}$ is the partial derivative of $f(x)$. We should keep in mind the different meanings of the symbols $\\partial$ and $\\delta$.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Functional derivatives etc. in DFT}\n\\label{sec:examples_functional_derivative}\n%\n%\n%\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Minimization with respect to density VS. density matrix}\n\\label{sec:minimization_density_vs_density_matrix}\n%\n%\n%\n%\nBefore further discussion, let us first talk about the energy minimization. In density functional theory the energy is minimized with respect to the density; we then get the optimized density and all the other physical properties related to the resulting density.\n\nHowever, there is a difficulty in the evaluation process: we have to use the functional derivative expressions. As the energy functional becomes more complicated, for example in meta-GGA form, evaluating the functional derivatives through the minimization over the density becomes more and more difficult. How can we avoid this?\n\nFortunately, in the quantum chemistry field things are easier to solve. There the density depends solely on the density matrix:\n\\begin{equation}\n \\label{minimization_density_vs_density_matrix_eq:1}\n\\rho^{\\sigma} = \\sum_{\\mu\\nu}P_{\\mu\\nu}^{\\sigma}\\phi_{\\mu}\\phi_{\\nu} \n\\end{equation} \nHere the so-called basis set functions $\\phi_{\\mu}$ and $\\phi_{\\nu}$ are fixed during the whole calculation; only the density matrix is varied. Hence it is easy to see:\n\\begin{equation}\n  \\label{minimization_density_vs_density_matrix_eq:2}\n\\delta\\rho^{\\sigma} =\n\\sum_{\\mu\\nu}\\delta P_{\\mu\\nu}^{\\sigma}\\phi_{\\mu}\\phi_{\\nu} \n\\end{equation} \nHence when the energy is minimized with respect to each $P_{\\mu\\nu}^{\\sigma}$, it is minimized with respect to the density. 
The density is solely determined by an $n\\times n$ matrix:\n\\begin{equation}\n \\label{minimization_density_vs_density_matrix_eq:3}\n\\rho \\Leftrightarrow \n\\begin{bmatrix}\n P_{11} & P_{12} & \\cdots & P_{1n} \\\\\n P_{21} & P_{22} & \\cdots & P_{2n} \\\\\n \\cdots & \\cdots & \\cdots & \\cdots \\\\\n P_{n1} & P_{n2} & \\cdots & P_{nn} \\\\\n\\end{bmatrix}\n\\end{equation}  \nWhen the energy is minimized over each element, it is minimized over the whole density. Therefore, the minimization of the energy with respect to the density can be transformed into the minimization of the energy with respect to the density matrix:\n\\begin{equation}\n \\label{minimization_density_vs_density_matrix_eq:4}\n\\frac{\\delta E_{XC}}{\\delta \\rho} \\Leftrightarrow \\frac{\\partial\nE_{XC}}{\\partial P_{\\mu\\nu}} (\\mu,\\nu = 1,2, \\cdots, n)\n\\end{equation}  \nFinally the energy functional $F(\\rho)$ is transformed back into an ordinary multidimensional function $F(P_{\\mu\\nu})$, and we can use partial derivatives to evaluate the functional derivatives for any given expression! As we will see in the following content, this is fairly important in the meta-GGA evaluation.\n\nFurthermore, along this way we can find more correspondences for the functional derivatives, as indicated below:\n\\begin{equation}\n \\label{minimization_density_vs_density_matrix_eq:5}\n\\begin{split}\n \\int d^{3}r\\frac{\\delta F}{\\delta\\rho(r)}\\rho(r) \n &\\Leftrightarrow\n \\sum_{\\mu\\nu}\\frac{\\partial E_{XC}}{\\partial P_{\\mu\\nu}}P_{\\mu\\nu}\\\\\n \\int d^{3}r\\int d^{3}r^{'}\n \\frac{\\delta^{2} F}\n      {\\delta\\rho(r)\\delta\\rho(r^{'})}\n       \\rho(r)\\rho(r^{'})\n&\\Leftrightarrow\n \\sum_{\\lambda\\eta}\\sum_{\\mu\\nu}\n\\frac{\\partial^{2} E_{XC}}\n     {\\partial P_{\\mu\\nu}\\partial\nP_{\\lambda\\eta}}P_{\\mu\\nu}P_{\\lambda\\eta} \\\\\n&\\cdots\\cdots\n\\end{split}\n\\end{equation}  \nThis comparison is enlightening, though not rigorous. Later we will obtain more results from this clue. 
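\n\nAs a concrete illustration of (\\ref{minimization_density_vs_density_matrix_eq:4}) (this example is added here for clarity, suppressing the spin index), take the Thomas-Fermi kinetic energy of (\\ref{eq:functional:9}). Combining the chain rule (\\ref{lambda_variable:1}) with the derivative (\\ref{eq:functional:10}) and with $\\partial\\rho(r)/\\partial P_{\\mu\\nu} = \\phi_{\\mu}(r)\\phi_{\\nu}(r)$ from (\\ref{minimization_density_vs_density_matrix_eq:1}), we obtain\n\\begin{equation*}\n  \\frac{\\partial T_{TF}}{\\partial P_{\\mu\\nu}} = \\int \\frac{\\delta T_{TF}}{\\delta\\rho(r)}\\,\\frac{\\partial\\rho(r)}{\\partial P_{\\mu\\nu}}\\, d^{3}r = \\frac{5}{3}c_{F}\\int \\rho^{\\frac{2}{3}}(r)\\,\\phi_{\\mu}(r)\\phi_{\\nu}(r)\\, d^{3}r\n\\end{equation*}\nwhich is an ordinary matrix of numbers that can be evaluated on a numerical grid.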
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Functional derivatives for ground state DFT}
%
%
%
%
Consider the general functional expression in terms of the density; it
represents the skeleton of the GGA formulation in DFT:
\begin{equation}
  \label{eq:functional:34}
  E_{XC} = \int  F \Big(\rho_{\alpha}, \rho_{\beta},
  \gamma_{\alpha\alpha}, \gamma_{\beta\beta},
  \gamma_{\alpha\beta}\Big) d^{3}r
\end{equation}
where we have:
\begin{align}
  \label{eq:functional:35}
  \gamma_{\alpha\alpha} &= |\nabla\rho_{\alpha}|^{2} =
\nabla\rho_{\alpha}\cdotp\nabla\rho_{\alpha}  \nonumber \\
  \gamma_{\beta\beta}  &= |\nabla\rho_{\beta}|^{2} =
 \nabla\rho_{\beta}\cdotp\nabla\rho_{\beta}\nonumber \\
  \gamma_{\alpha\beta} &=\nabla\rho_{\alpha} \cdot \nabla\rho_{\beta}
\end{align}

According to (\ref{eq:functional:33}), we can easily derive the
corresponding potential $V_{XC}$ as:
\begin{align}
  \label{eq:functional:36}
  V_{\alpha}^{XC} &= \frac{\delta E_{XC}} {\delta \rho_{\alpha}} =
  \frac{\partial F} {\partial \rho_{\alpha}} - \nabla\cdot \left(
    \frac{\partial F}{\partial (\nabla\rho_{\alpha})}
  \right) \nonumber \\
  &= \frac{\partial F}{\partial \rho_{\alpha}} - \nabla\cdot \left(
    \frac{\partial F}{\partial \gamma_{\alpha\alpha}}\frac{\partial
      \gamma_{\alpha\alpha}} {\partial (\nabla\rho_{\alpha})} \right)-
  \nabla\cdot \left( \frac{\partial F}{\partial
      \gamma_{\alpha\beta}}\frac{\partial \gamma_{\alpha\beta}}
    {\partial (\nabla\rho_{\alpha})} \right)
  \nonumber \\
  &= \frac{\partial F}{\partial \rho_{\alpha}} - 2\nabla\cdot \left(
    \frac{\partial F} {\partial \gamma_{\alpha\alpha}}
    \nabla\rho_{\alpha}\right) - \nabla\cdot \left( \frac{\partial F}
    {\partial \gamma_{\alpha\beta}} \nabla\rho_{\beta} \right)
\end{align}
Note that $\frac{\partial F} {\partial\gamma_{\alpha\alpha}}$ is a
scalar at a given point $r$ (since $\gamma_{\alpha\alpha}$ is a
scalar), so there is neither a dot product nor a cross product between
this partial derivative and $\nabla\rho_{\alpha}$.

The potential for the beta electron density can be derived in a
similar way. Because of the symmetric form of $\rho_{\alpha}$ and
$\rho_{\beta}$ in (\ref{eq:functional:34}), we only need to exchange
$\rho_{\alpha}$ and $\rho_{\beta}$ to obtain $V_{\beta}^{XC}$.
Finally, we note that this form has been obtained in previous
papers\cite{CPL_1992_6_557,johnson:5612}.

Now the next question is how to express the energy in terms of the
density matrix, so that the whole scheme can be turned into a
practical calculation. By the chain rule, we have:
\begin{align}
  \label{eq:XC_functional.9}
F^{\alpha}_{\mu\nu} &= \frac{\partial E^{\alpha}_{XC}}{\partial
P^{\alpha}_{\mu\nu}}
\nonumber \\
&=
\int \frac{\delta F}{\delta
  \rho^{\alpha}(r)}\frac{\partial \rho^{\alpha}(r)}{\partial
P^{\alpha}_{\mu\nu}} d^{3}r
\nonumber \\
&= \int V^{\alpha}_{XC}(r) \phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r
\end{align}
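On a numerical integration grid, the last line of
(\ref{eq:XC_functional.9}) becomes a weighted sum over grid points.
The sketch below assumes real AOs and uses placeholder values for all
grid data:
\begin{verbatim}
import numpy as np

nbf, ngrid = 4, 1000
rng = np.random.default_rng(1)
ao  = rng.normal(size=(ngrid, nbf))  # phi_mu(r_g), placeholder values
w   = np.full(ngrid, 1e-3)           # quadrature weights (placeholder)
vxc = rng.normal(size=ngrid)         # V_XC(r_g), assumed precomputed

# F_{mu,nu} = sum_g w_g V_XC(r_g) phi_mu(r_g) phi_nu(r_g)
F = np.einsum('g,gm,gn->mn', w * vxc, ao, ao)
assert np.allclose(F, F.T)           # symmetric for real AOs
\end{verbatim}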
Now let us discuss the form of the electron density. Generally we
consider the unrestricted case, where the alpha electrons and beta
electrons do not need to share the same spatial orbitals (the
restricted type can be seen as a special case of the unrestricted
method). Hence we have:
\begin{align}
  \label{eq:XC_functional.5}
  \rho_{\alpha} &=
  \sum_{j}^{n}\sum_{k}^{n}P^{\alpha}_{jk}\phi_{j}^{*}\phi_{k}
  \nonumber
  \\
  \rho_{\beta} &=
  \sum_{j}^{n}\sum_{k}^{n}P^{\beta}_{jk}\phi_{j}^{*}\phi_{k}
\end{align}

The expression in (\ref{eq:XC_functional.9}) can also be obtained
directly from the Kohn-Sham equation:
\begin{equation}
\begin{split}
  \hat{H}_{KS}\varphi_{i} = \epsilon_{i}\varphi_{i} &\Rightarrow
(\cdots + \frac{\delta E_{XC}}{\delta \rho})\varphi_{i} =
\epsilon_{i}\varphi_{i} \Rightarrow
\\
\cdots + \int d^{3}r \varphi^{*}_{i}\frac{\delta E_{XC}}{{\delta
\rho}}\varphi_{i} = \epsilon_{i} &\Rightarrow
\cdots + \sum_{\mu\nu}c^{*}_{\mu i}c_{\nu i}\int d^{3}r
\phi^{*}_{\mu}\frac{\delta E_{XC}}{{\delta \rho}}\phi_{\nu}
= \epsilon_{i} \Rightarrow
\\
& \cdots + \sum_{\mu\nu}c^{*}_{\mu i}c_{\nu i}F_{\mu\nu} = \epsilon_{i}
\end{split}
\end{equation}

In evaluating (\ref{eq:XC_functional.9}) we meet some difficulty in
calculating the gradient part in (\ref{eq:functional:36}); however,
there is a simple way to avoid it. Let us take one gradient term as an
example:
\begin{equation}
  \label{eq:XC_functional.7}
  \begin{split}
    & \int \phi^{*}_{\mu}(r) \Bigg\{2\nabla\cdot \left( \frac{\partial
        f} {\partial \gamma_{\alpha\alpha}}
      \nabla\rho_{\alpha}\right)\Bigg\}\phi_{\nu}(r) d^{3}r  \\
    &=\int \Bigg\{2\nabla\cdot \left( \frac{\partial f} {\partial
        \gamma_{\alpha\alpha}}
      \nabla\rho_{\alpha}\right)\Bigg\}\phi^{*}_{\mu}(r) \phi_{\nu}(r)
    d^{3}r  \\
    &= 2\int \phi^{*}_{\mu}(r) \phi_{\nu}(r)\Bigg\{
\nabla\cdot\left(
      \frac{\partial f} {\partial \gamma_{\alpha\alpha}}
      \nabla\rho_{\alpha}\right) \Bigg\} d^{3}r \\
    &= 2\phi^{*}_{\mu}(r) \phi_{\nu}(r)\left. \left( \frac{\partial f}
        {\partial \gamma_{\alpha\alpha}}
        \nabla\rho_{\alpha}\right)\right|^{+\infty}_{-\infty} - 2\int
    \left( \frac{\partial f} {\partial \gamma_{\alpha\alpha}}
\nabla\rho_{\alpha}\right)\cdotp\Big\{\nabla(\phi^{*}_{\mu}(r)
\phi_{\nu}(r))\Big\} d^{3}r \\
    &= - 2\int \left( \frac{\partial f} {\partial
        \gamma_{\alpha\alpha}}
\nabla\rho_{\alpha}\right)\cdotp\Big\{\nabla(\phi^{*}_{\mu}(r)
    \phi_{\nu}(r))\Big\}d^{3}r
  \end{split}
\end{equation}
In this derivation we have used integration by parts; since the wave
functions vanish at $+\infty$ and $-\infty$, the boundary term is
clearly zero, which gives the final result. Here the $\nabla$ symbol
denotes the gradient operator in three-dimensional space:
$\nabla f(x,y,z) = \frac{\partial f}{\partial x}\vec{e}_{x} +
\frac{\partial f}{\partial y}\vec{e}_{y} +
\frac{\partial f}{\partial z}\vec{e}_{z}$.
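The vanishing of the boundary term can be checked numerically in one
dimension; the two functions below are arbitrary smooth, rapidly
decaying stand-ins for $\frac{\partial f}{\partial
\gamma_{\alpha\alpha}}\nabla\rho_{\alpha}$ and
$\phi^{*}_{\mu}\phi_{\nu}$, not quantities from a real calculation:
\begin{verbatim}
import numpy as np

# 1-D check of integration by parts for functions vanishing at
# +/- infinity:   integral(u' * v) = -integral(u * v')
x = np.linspace(-10.0, 10.0, 20001)
u = np.exp(-x**2)             # stand-in for (df/dgamma) * grad(rho)
v = x * np.exp(-0.5 * x**2)   # stand-in for phi_mu * phi_nu

du = np.gradient(u, x)
dv = np.gradient(v, x)

lhs = np.trapz(du * v, x)
rhs = -np.trapz(u * dv, x)
print(lhs, rhs)   # agree to quadrature accuracy: boundary term is zero
\end{verbatim}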
Later we will come back to this derivation for further discussion.

In terms of the transformation in (\ref{eq:XC_functional.7}), we
finally reach the result:
\begin{equation}
  \label{eq:XC_functional.8}
  \begin{split}
      F^{\alpha}_{\mu\nu} &= \int
  \phi^{*}_{\mu}(r)\hat{V}^{\alpha}_{XC}\phi_{\nu}(r) d^{3}r  \\
  &= \int \frac{\partial f}{\partial
      \rho_{\alpha}}\phi^{*}_{\mu}(r) \phi_{\nu}(r)d^{3}r + \\
  &\int\Bigg\{2\left( \frac{\partial f} {\partial
\gamma_{\alpha\alpha}}
    \nabla\rho_{\alpha}\right) + \left( \frac{\partial f} {\partial
      \gamma_{\alpha\beta}} \nabla\rho_{\beta} \right)
  \Bigg\}\cdot\nabla(\phi^{*}_{\mu}(r) \phi_{\nu}(r))d^{3}r
  \end{split}
\end{equation}
This is the working expression for calculating the ground state
Kohn-Sham equation within the GGA.

On the other hand, the energy for the GGA can be expressed in terms of
the variable form:
\begin{equation}
 E_{XC} = \int F(\rho_{\alpha}, \rho_{\beta}, \nabla\rho_{\alpha},
\nabla\rho_{\beta})d^{3}r
\end{equation}
so we do not have the $\gamma$ terms.

According to the chain rule for the functional, we have:
\begin{align}\label{TDADDEDeq:2}
F_{\mu\nu} &= \sum_{\sigma}\frac{\partial E_{XC}}{\partial
P^{\sigma}_{\mu\nu}}
\nonumber \\
&=
\sum_{\sigma}\int \frac{\delta F}{\delta
  \rho^{\sigma}(r)}\frac{\partial \rho^{\sigma}(r)}{\partial
P^{\sigma}_{\mu\nu}} d^{3}r
\nonumber \\
&= \sum_{\sigma}\int V^{\sigma}_{XC}
\phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r
\end{align}

Here $V^{\sigma}_{XC}$ is the exchange-correlation potential, which
has the form:
\begin{align}
  \label{TDADDEDeq:3}
  V_{\sigma}^{XC} &= \frac{\delta E_{XC}} {\delta \rho_{\sigma}} =
  \frac{\partial F} {\partial \rho_{\sigma}} - \nabla\cdot \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
  \right)
\end{align}
These results coincide with the previous ones.

Combined with (\ref{TDADDEDeq:2}), we get:
\begin{align}\label{TDADDEDeq:4}
F_{\mu\nu} &=
\sum_{\sigma}\int \left\lbrace \frac{\partial f} {\partial
\rho_{\sigma}} - \nabla\cdot \left(
    \frac{\partial f}{\partial (\nabla\rho_{\sigma})}
  \right) \right\rbrace
\phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r \nonumber \\
&=  \sum_{\sigma}\int \frac{\partial f} {\partial
\rho_{\sigma}} \phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r \nonumber \\
&- \sum_{\sigma}\int
\nabla\cdot \left(
    \frac{\partial f}{\partial (\nabla\rho_{\sigma})}
  \right)
\phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r
\end{align}

Since we have the following transformation:
\begin{align}
 &\sum_{\sigma}\int
\nabla\cdot \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
  \right)
\phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r \nonumber \\
&= -\sum_{\sigma}\int
 \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
  \right)\cdot
\nabla(\phi^{*}_{\mu}(r)\phi_{\nu}(r)) d^{3}r
\end{align}
which follows easily from integration by parts, we can write the
expression in (\ref{TDADDEDeq:2}) as:
\begin{equation}
 F_{\mu\nu} = \sum_{\sigma}\int \frac{\partial F} {\partial
\rho_{\sigma}} \phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r
+ \sum_{\sigma}\int
 \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
  \right)\cdot
\nabla(\phi^{*}_{\mu}(r)\phi_{\nu}(r)) d^{3}r
\end{equation}
This is the ground state DFT evaluation in AO form. Finally we note
that this form is easily transformed into the $\gamma$ form. Let us
take the case $\sigma = \alpha$ as an example:
\begin{align}
\label{TDADDEDeq:1}
\frac{\partial F}{\partial (\nabla\rho_{\alpha})} &=
\frac{\partial F}{\partial\gamma_{\alpha\alpha}}
\frac{\partial\gamma_{\alpha\alpha}}{\partial(\nabla\rho_{\alpha})}
+
\frac{\partial F}{\partial\gamma_{\alpha\beta}}
\frac{\partial\gamma_{\alpha\beta}}{\partial(\nabla\rho_{\alpha})}
\nonumber \\
&= 2\frac{\partial F} {\partial \gamma_{\alpha\alpha}}
   \nabla\rho_{\alpha} + \frac{\partial F}
   {\partial \gamma_{\alpha\beta}} \nabla\rho_{\beta}
\end{align}
Hence in terms of the $\alpha$ component we come back to the
expression in (\ref{eq:XC_functional.8}).

Because this way of deriving the functional derivatives in DFT is much
clearer, in the following content we will stick to this form of
expression.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Extension to the meta-GGA form}
%
%
%
%
%
Now let's extend the discussion to the meta-GGA form, where the energy
is expressed as:
\begin{equation}
 \label{functional_mega_gga_eq:1}
E_{XC} = \int  F(\rho_{\alpha}, \rho_{\beta}, \nabla\rho_{\alpha},
\nabla\rho_{\beta}, \nabla^{2}\rho_{\alpha},
\nabla^{2}\rho_{\beta}, \tau_{\alpha}, \tau_{\beta})d^{3}r
\end{equation}
Here we note that $\nabla^{2}\rho_{\alpha}$ and
$\nabla^{2}\rho_{\beta}$ are scalars rather than vectors, because of
the operator form $\nabla^{2} = \nabla\cdotp\nabla$. On the other
hand, $\tau_{\sigma}$ is expressed as:
\begin{equation}
 \begin{split}
  \tau_{\sigma}
&= \frac{1}{2}\sum_{i=1}^{occ}|\nabla\varphi_{i\sigma}|^{2} \\
&= \frac{1}{2}\sum_{i=1}^{occ}(\nabla\varphi_{i\sigma}\cdotp
\nabla\varphi_{i\sigma}) \\
&= \frac{1}{2}\sum_{\mu\nu}P^{\sigma}_{\mu\nu}(\nabla\phi_{\mu}\cdotp
\nabla\phi_{\nu})
 \end{split}
\label{functional_mega_gga_eq:2}
\end{equation}
In the expression above, $\varphi$ denotes the KS orbitals and $\phi$
the AO basis set functions, so $P_{\mu\nu}$ is the corresponding
density matrix. In meta-GGA this term is called the ``kinetic energy
density''.

Here, however, we meet a problem: the kinetic energy density is not
directly expressed through the density, so if we simply minimize the
energy with respect to the density it is difficult to handle the
kinetic energy density term.
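Although $\tau_{\sigma}$ is not an explicit functional of the density,
the last line of (\ref{functional_mega_gga_eq:2}) shows that it is a
simple bilinear form in the density matrix. A minimal numerical sketch
with placeholder AO gradient values:
\begin{verbatim}
import numpy as np

nbf, ngrid = 4, 1000
rng = np.random.default_rng(2)
ao_grad = rng.normal(size=(3, ngrid, nbf))  # nabla phi_mu: x,y,z parts
P = rng.normal(size=(nbf, nbf))
P = 0.5 * (P + P.T)

# tau(r_g) = 1/2 sum_{mu,nu} P_{mu,nu} grad phi_mu . grad phi_nu
tau = 0.5 * np.einsum('mn,xgm,xgn->g', P, ao_grad, ao_grad)
\end{verbatim}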
However, as stated in Section
\ref{sec:minimization_density_vs_density_matrix}, we can proceed the
other way, by seeking the minimization with respect to the density
matrix:
\begin{align}
\label{functional_mega_gga_eq:3}
F_{\mu\nu}^{\sigma}  &= \frac{\partial E_{XC}}{\partial
P^{\sigma}_{\mu\nu}}
\nonumber \\
  &= \sum_{\xi}\int \frac{\partial F}{\partial
\xi_{\sigma}} \frac{\partial \xi_{\sigma}}{\partial
P^{\sigma}_{\mu\nu}} d^{3}r
\end{align}
In quantum chemistry the energy finally turns out to be a function of
the density matrix, so each of the variables in
(\ref{functional_mega_gga_eq:1}) is itself a function of the density
matrix; we can therefore use the chain rule for multivariate functions
to evaluate the final expression.

On the other hand, we note that $\frac{\partial F}{\partial
\xi_{\sigma}}$ is not the functional derivative of
(\ref{functional_mega_gga_eq:1}). The functional derivative of $F$ is
taken with respect to the density, that is to say:
\begin{equation}
 \label{not_the_functional_derivatives}
V_{XC} = \frac{\delta F(r)}{\delta \rho} \Leftrightarrow F_{pq} = \int V_{XC}
\varphi_{p}(r)\varphi_{q}(r) d^{3} r
\end{equation}
A simple mathematical argument shows that if we leave out the kinetic
energy density, the functional derivative route and the
$\frac{\partial F}{\partial \xi_{\sigma}}$ route are equivalent to
each other; once the kinetic energy density enters, they are no longer
equal. That is all because the kinetic energy density is not directly
related to the density.

Here we have introduced ``$\xi$'', called the ``variables''; it
represents each term in (\ref{functional_mega_gga_eq:1}):
\begin{equation}
\label{functional_mega_gga_eq:4}
 \xi_{\sigma} = \rho_{\sigma}, \nabla\rho_{\sigma},
\nabla^{2}\rho_{\sigma}, \tau_{\sigma}
\end{equation}
The most important property of the variables is that they are all
linear in the density matrix, that is to say:
\begin{equation}
 \label{functional_mega_gga_eq:5}
\xi_{\sigma} = \sum_{\mu\nu}P^{\sigma}_{\mu\nu}f(\phi_{\mu},\phi_{\nu})
\end{equation}
The $f$ is determined by the form of the variable: for the Laplacian
variable, $f(\phi_{\mu},\phi_{\nu}) = \nabla^{2}(\phi_{\mu}\phi_{\nu})$,
and for the kinetic energy density, $f(\phi_{\mu},\phi_{\nu}) =
\nabla\phi_{\mu}\cdotp\nabla\phi_{\nu}$; the linearity of the
Laplacian variable is spelled out just below.
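For the Laplacian variable, the linearity in
(\ref{functional_mega_gga_eq:5}) can be checked explicitly with the
product rule (assuming real basis functions):
\begin{equation*}
\nabla^{2}\rho_{\sigma}
= \sum_{\mu\nu}P^{\sigma}_{\mu\nu}\nabla^{2}(\phi_{\mu}\phi_{\nu})
= \sum_{\mu\nu}P^{\sigma}_{\mu\nu}\Big[(\nabla^{2}\phi_{\mu})\phi_{\nu}
+ 2\nabla\phi_{\mu}\cdotp\nabla\phi_{\nu}
+ \phi_{\mu}(\nabla^{2}\phi_{\nu})\Big]
\end{equation*}
The density matrix carries all the variational freedom, while
everything inside the brackets is fixed.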
As noted above, each variable is determined solely by the density
matrix; the basis set functions are all fixed during the variation.

By bringing the potential into the expression, we can finally get
$F_{\mu\nu}^{\sigma}$ as:
\begin{equation}
\begin{split}
F_{\mu\nu}^{\sigma} &=
\int \frac{\partial F} {\partial
\rho_{\sigma}} \phi^{*}_{\mu}(r)\phi_{\nu}(r) d^{3}r
+
\int \left(
     \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
     \right)\cdot
\nabla(\phi^{*}_{\mu}(r)\phi_{\nu}(r)) d^{3}r \\
&+
\int
    \frac{\partial F}{\partial (\nabla^{2}\rho_{\sigma})}
\nabla^{2}(\phi^{*}_{\mu}(r)\phi_{\nu}(r)) d^{3}r + \frac{1}{2}
\int
    \frac{\partial F}{\partial \tau_{\sigma}}
(\nabla\phi^{*}_{\mu}(r)\cdotp\nabla\phi_{\nu}(r)) d^{3}r
\end{split}
\label{functional_mega_gga_eq:6}
\end{equation}
By further applying the linear relation between the variables and the
density matrix, we have:
\begin{equation}
 \label{functional_mega_gga_eq:7}
\begin{split}
 E_{XC}
&=\sum_{\sigma}\sum_{\mu\nu}F_{\mu\nu}^{\sigma}P_{\mu\nu}^{\sigma} \\
&= \sum_{\sigma}\int \frac{\partial F} {\partial
\rho_{\sigma}}\rho_{\sigma}d^{3}r
+
\sum_{\sigma}\int \left(
     \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
     \right)\cdot
\nabla\rho_{\sigma} d^{3}r \\
&+
\sum_{\sigma}\int
    \frac{\partial F}{\partial (\nabla^{2}\rho_{\sigma})}
\nabla^{2}\rho_{\sigma} d^{3}r + \sum_{\sigma}
\int
    \frac{\partial F}{\partial \tau_{\sigma}}\tau_{\sigma}
 d^{3}r \\
&= \sum_{\sigma}\sum_{\xi}\int
    \frac{\partial F}{\partial \xi_{\sigma}}
\xi_{\sigma}d^{3}r
\end{split}
\end{equation}
Hence we arrive at a very simple form.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Variable expression for TDDFT}
%
%
%
%
%
In TDDFT, the exchange-correlation part is defined as:
\begin{equation}
 \label{functional_derivatives_TDDFT_new_variable:1}
\int d^{3}r \int d^{3}r^{'}
\varphi_{p\sigma}^{*}(r)\varphi_{q\sigma}(r)
\frac{\delta^{2} F(r) }{\delta\rho_{\sigma}(r)
\delta\rho_{\tau}(r^{'})}
\varphi_{s\tau}^{*}(r^{'})\varphi_{t\tau}(r^{'})
\end{equation}
Moreover, we know that this expression is equivalent to the expression
below:
\begin{equation}
\label{functional_derivatives_TDDFT_new_variable:2}
\begin{split}
&\sum_{\mu\nu}\sum_{\lambda\eta}
c^{*}_{\mu p,\sigma}c_{\nu q, \sigma}
c^{*}_{\lambda s, \tau}c_{\eta t, \tau}
\frac{\partial F_{\mu\nu\sigma}}{\partial P_{\lambda\eta\tau}} \\
&=\sum_{\mu\nu}\sum_{\lambda\eta}
c^{*}_{\mu p,\sigma}c_{\nu q, \sigma}
c^{*}_{\lambda s, \tau}c_{\eta t, \tau}
\int d^{3}r^{'} \frac{\delta F_{\mu\nu\sigma}}
{\delta\rho_{\tau}(r^{'})}
\frac{\partial\rho_{\tau}(r^{'})}
{\partial P_{\lambda\eta\tau}} \\
&=\sum_{\mu\nu}\sum_{\lambda\eta}
c^{*}_{\mu p,\sigma}c_{\nu q, \sigma}
c^{*}_{\lambda s, \tau}c_{\eta t, \tau}
\int d^{3}r^{'}
\frac{\delta F_{\mu\nu\sigma}}
{\delta\rho_{\tau}(r^{'})}
\phi^{*}_{\lambda\tau}(r^{'})\phi_{\eta\tau}(r^{'})
\\
&=\sum_{\mu\nu}\sum_{\lambda\eta}
c^{*}_{\mu p,\sigma}c_{\nu q, \sigma}
c^{*}_{\lambda s, \tau}c_{\eta t, \tau}
\int d^{3}r\int d^{3}r^{'}
\phi^{*}_{\mu\sigma}(r)\phi_{\nu\sigma}(r)
\frac{\delta V^{\sigma}_{XC}(r)}{\delta\rho_{\tau}(r^{'})}
\phi^{*}_{\lambda\tau}(r^{'})\phi_{\eta\tau}(r^{'}) \\
&=\sum_{\mu\nu}\sum_{\lambda\eta}
c^{*}_{\mu p,\sigma}c_{\nu q, \sigma}
c^{*}_{\lambda s, \tau}c_{\eta t, \tau} \\
&\int d^{3}r\int d^{3}r^{'}
\phi^{*}_{\mu\sigma}(r)\phi_{\nu\sigma}(r)
\frac{\delta^{2}
F(r)}{\delta\rho_{\sigma}(r)\delta\rho_{\tau}(r^{'})}
\phi^{*}_{\lambda\tau}(r^{'})\phi_{\eta\tau}(r^{'}) \\
&= \int d^{3}r \int d^{3}r^{'}
\varphi_{p\sigma}^{*}(r)\varphi_{q\sigma}(r)
\frac{\delta^{2} F(r) }{\delta\rho_{\sigma}(r)
\delta\rho_{\tau}(r^{'})}
\varphi_{s\tau}^{*}(r^{'})\varphi_{t\tau}(r^{'})
\end{split}
\end{equation}
Actually this long and unwieldy expression only demonstrates that we
can evaluate the integral back in AO space; furthermore, we can use
the new variables to calculate the Fock matrix element while avoiding
the functional derivative expression.

Now let us concentrate on $\dfrac{\partial
F_{\mu\nu\sigma}}{\partial P_{\lambda\eta\tau}}$; this is the central
expression we need to express in variable form. According to
(\ref{functional_mega_gga_eq:6}), this Fock matrix element can be
expressed as:
\begin{equation}
\begin{split}
 \frac{\partial F_{\mu\nu\sigma}}{\partial P_{\lambda\eta\tau}}
&=
\frac{\partial}{\partial P_{\lambda\eta\tau}}
\sum_{\xi} \int d^{3}r
\frac{\partial F}{\partial \xi_{\sigma}}
f(\phi_{\mu}(r),\phi_{\nu}(r)) \\
&=\sum_{\xi}\sum_{\zeta}\int d^{3} r^{'}\int d^{3}r
\frac{\partial^{2} F}
{\partial \xi_{\sigma} \partial \zeta_{\tau}}
f(\phi_{\mu}(r),\phi_{\nu}(r))
f(\phi_{\lambda}(r^{'}),\phi_{\eta}(r^{'}))
\end{split}
\end{equation}
This is the kernel of the second functional derivative; by further
considering the spin, the density matrix and so on, we can finally get
the exchange-correlation part expressed in terms of the variables.
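As a rough numerical sketch of this kernel contraction, consider a toy
model with a single variable $\xi = \rho$, so that
$f(\phi_{\mu},\phi_{\nu}) = \phi_{\mu}\phi_{\nu}$; the grid values
below are placeholders rather than the output of a real functional:
\begin{verbatim}
import numpy as np

nbf, ngrid = 4, 500
rng = np.random.default_rng(4)
ao = rng.normal(size=(ngrid, nbf))
w = np.full(ngrid, 1e-3)                 # quadrature weights

# Second derivative d2F/drho2 on the grid (assumed precomputed):
d2f = rng.normal(size=ngrid)
pair = np.einsum('gm,gn->gmn', ao, ao)   # f(phi_mu, phi_nu) on the grid

# K_{mu nu, lambda eta}
#   = sum_g w_g d2F(r_g) f_{mu nu}(r_g) f_{lambda eta}(r_g)
K = np.einsum('g,gmn,gle->mnle', w * d2f, pair, pair)
\end{verbatim}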
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The functional derivative for the gradient of DFT}
\label{Func_Deriv_gradient_variable_GGA}
%
%
%
%
%
Now let's turn to the energy gradient of ground state DFT. Firstly,
the energy expression for the exchange-correlation part is still
expressed as:
\begin{equation}
\label{NUCLEAR_GRADIENT_DFTeq:1}
 E_{XC} = \int F(\rho_{\alpha}, \rho_{\beta}, \nabla\rho_{\alpha},
\nabla\rho_{\beta})d^{3}r
\end{equation}
so we do not have the $\gamma$ terms.

The derivative with respect to the nuclear coordinate $r_{A}$ is:
\begin{align}
\label{NUCLEAR_GRADIENT_DFTeq:2}
 E_{XC}^{A} &=  \int  F^{A}(\rho_{\alpha}, \rho_{\beta},
\nabla\rho_{\alpha},\nabla\rho_{\beta}) d^{3}r \nonumber \\
  &= \sum_{\sigma}\int  \frac{\delta E_{XC} }{\delta\rho_{\sigma}
}\rho_{\sigma}^{A} d^{3}r \nonumber \\
  &=  \sum_{\sigma}\sum_{\mu\nu}\int
V^{\sigma}_{XC}(r)P_{\mu\nu}^{\sigma}(\phi_{\mu}\phi_{\nu
})^{A} d^{3}r \nonumber \\
  &=  \sum_{\sigma}\sum_{\mu\nu}\int\left\lbrace \frac{\partial F}
{\partial \rho_{\sigma}} - \nabla\cdot \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
  \right) \right\rbrace  P_{\mu\nu}^{\sigma}(\phi_{\mu}\phi_{\nu
})^{A} d^{3}r \nonumber \\
  &= \sum_{\sigma}\sum_{\mu\nu} P_{\mu\nu}^{\sigma}\int\left\lbrace
\left(
\frac{\partial F}{\partial \rho_{\sigma}}(\phi_{\mu}\phi_{\nu
})^{A}\right) + \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
    \cdotp\nabla(\phi_{\mu}\phi_{\nu
})^{A}\right)\right\rbrace  d^{3}r
\end{align}

Now let us look more closely at $(\phi_{\mu}\phi_{\nu})^{A}$; this is
shorthand for the derivative of $(\phi_{\mu}\phi_{\nu})$ with respect
to $r_{A}$. Since the basis set functions generally take the form:
\begin{equation}
 \phi = f(r)e^{-ar^{2}}
\end{equation}
where $r = r_{e} - r_{I}$, with $r_{I}$ the nuclear position and
$r_{e}$ the electron position, we see that if $r_{I}$ is not $r_{A}$
the derivative of $\phi$ is zero; only the basis set functions
centered on nucleus A have a non-zero gradient.

On the other hand, since the movement of nucleus A can be decomposed
into three components $r_{Ax}$, $r_{Ay}$ and $r_{Az}$, the derivative
with respect to nucleus A is actually taken as:
\begin{align}
\label{NUCLEAR_GRADIENT_DFTeq:6}
\frac{\partial \phi}{\partial r_{Ax}} &= \frac{\partial
\phi}{\partial r_{x}}\frac{\partial r_{x}}{\partial r_{Ax}}
=-\frac{\partial\phi}{\partial r_{x}}  \nonumber \\
\frac{\partial \phi}{\partial r_{Ay}} &= \frac{\partial
\phi}{\partial r_{y}}\frac{\partial r_{y}}{\partial r_{Ay}}
=-\frac{\partial\phi}{\partial r_{y}}  \nonumber \\
\frac{\partial \phi}{\partial r_{Az}} &= \frac{\partial
\phi}{\partial r_{z}}\frac{\partial r_{z}}{\partial r_{Az}}
=-\frac{\partial\phi}{\partial r_{z}}
\end{align}
where $r_{i}$ ($i=x,y,z$) are the components of $r$:
$r^{2} = \sum_{i}^{3}r_{i}^{2}$.
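This sign relation can be verified numerically for a one-dimensional
Gaussian; the exponent, center and test point below are arbitrary
placeholders:
\begin{verbatim}
import numpy as np

# For a Gaussian basis function centered at A, phi(x) = exp(-a (x-A)^2),
# the nuclear derivative equals minus the electronic derivative.
a, A = 0.8, 1.3
phi = lambda x, A: np.exp(-a * (x - A)**2)

x0, h = 0.4, 1e-5
d_nuc  = (phi(x0, A + h) - phi(x0, A - h)) / (2 * h)  # d phi / d A
d_elec = (phi(x0 + h, A) - phi(x0 - h, A)) / (2 * h)  # d phi / d x
print(d_nuc, -d_elec)   # agree: phi^A = -grad(phi)
\end{verbatim}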
Furthermore, an important point in the derivation above is that only
$r_{i}$ is connected to $r_{Ai}$; hence in the chain rule of partial
differentiation only the $r_{i}$ term appears.

Finally, we can see that $\phi^{A}$ is a vector with three components,
and only the basis set functions centered on nucleus A have a non-zero
gradient:
\begin{equation}
\label{NUCLEAR_GRADIENT_DFTeq:3}
\phi^{A} =
 \begin{cases}
 -\nabla\phi & r = r^{'} - r_{A} \\
 0           & r = r^{'} - r_{I}, I \neq A
 \end{cases}
\end{equation}

Now we can apply (\ref{NUCLEAR_GRADIENT_DFTeq:3}) to the expression
below:
\begin{equation}
  \sum_{\mu\nu}(\phi_{\mu}\phi_{\nu})^{A} =
  -2\sum_{\mu}^{A}\sum_{\nu}(\nabla\phi_{\mu}\phi_{\nu})
\label{NUCLEAR_GRADIENT_DFTeq:4}
\end{equation}
Here the gradient is taken only on the $\phi_{\mu}$ centered on A.
Since $\mu$ and $\nu$ are symmetric in the summation, we multiply by a
factor of 2 in front of the summation and restrict the nuclear
gradient to act only on $\phi_{\mu}$ (see
\cite{CPL_1992_6_557,johnson:5612} etc.); this step is spelled out at
the end of this subsection.

Similarly, we have:
\begin{equation}
\label{NUCLEAR_GRADIENT_DFTeq:5}
\begin{split}
  \sum_{\mu\nu}\nabla(\phi_{\mu}\phi_{\nu})^{A} &=
  2\sum_{\mu}^{A}\sum_{\nu}\nabla(\phi_{\mu}\phi_{\nu})^{A} \\
  &=
  2\sum_{\mu}^{A}\sum_{\nu}(\nabla\phi_{\mu}\phi_{\nu} +
\phi_{\mu}\nabla\phi_{\nu})^{A} \\
   &=
  2\sum_{\mu}^{A}\sum_{\nu}\left\lbrace
(\nabla\phi_{\mu})^{A}\phi_{\nu} +
\phi_{\mu}^{A}\nabla\phi_{\nu}\right\rbrace  \\
&=-2\sum_{\mu}^{A}\sum_{\nu}\left\lbrace
\nabla\cdotp(\nabla\phi_{\mu})^{T}\phi_{\nu} +
\nabla\phi_{\mu}\cdotp\nabla\phi_{\nu}^{T}\right\rbrace
\end{split}
\end{equation}
where the superscript $T$ means the ``transpose'' of the vector
$\nabla\phi$, so that the final result is actually a number rather
than a vector. Hence we can label it as $X_{\mu\nu}$, following the
convention in \cite{CPL_1992_6_557,johnson:5612}.
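To spell out the factor of 2 in (\ref{NUCLEAR_GRADIENT_DFTeq:4}),
assume real AOs and a symmetric density matrix
($P_{\mu\nu} = P_{\nu\mu}$), as in the contraction actually used in
the gradient; then, since $\phi_{\mu}^{A}$ vanishes unless
$\phi_{\mu}$ is centered on A:
\begin{equation*}
\sum_{\mu\nu}P_{\mu\nu}(\phi_{\mu}\phi_{\nu})^{A}
= \sum_{\mu\nu}P_{\mu\nu}\left(\phi_{\mu}^{A}\phi_{\nu}
+ \phi_{\mu}\phi_{\nu}^{A}\right)
= 2\sum_{\mu}^{A}\sum_{\nu}P_{\mu\nu}\,\phi_{\mu}^{A}\phi_{\nu}
= -2\sum_{\mu}^{A}\sum_{\nu}P_{\mu\nu}\,(\nabla\phi_{\mu})\phi_{\nu}
\end{equation*}
The second equality uses the index symmetry $\mu\leftrightarrow\nu$,
and the third uses (\ref{NUCLEAR_GRADIENT_DFTeq:3}).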
Now by applying (\ref{NUCLEAR_GRADIENT_DFTeq:4}) and
(\ref{NUCLEAR_GRADIENT_DFTeq:5}), we can improve the result in
(\ref{NUCLEAR_GRADIENT_DFTeq:2}) as:
\begin{equation}
\label{NUCLEAR_GRADIENT_DFTeq:7}
\begin{split}
 E_{XC}^{A} &=  \sum_{\sigma}\sum_{\mu\nu}
P_{\mu\nu}^{\sigma}\int\left\lbrace
\left(
\frac{\partial F}{\partial \rho_{\sigma}}(\phi_{\mu}\phi_{\nu
})^{A}\right) + \left(
    \frac{\partial F}{\partial (\nabla\rho_{\sigma})}
    \cdotp\nabla(\phi_{\mu}\phi_{\nu
})^{A}\right)\right\rbrace  d^{3}r \\
&=  -2\sum_{\sigma}\sum_{\mu}^{A}\sum_{\nu}
P_{\mu\nu}^{\sigma}\int\left\lbrace
\left(
\frac{\partial F}{\partial \rho_{\sigma}}(\nabla\phi_{\mu}\phi_{\nu
})\right) + \left(
    X_{\mu\nu}\frac{\partial F}{\partial (\nabla\rho_{\sigma})}
    \right)\right\rbrace  d^{3}r
\end{split}
\end{equation}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Extension to the meta-GGA form}
\label{Func_Deriv_gradient_variable_META_GGA}
%
%
%
%
%
Now let's extend the gradient form to the meta-GGA by using the
variables; in this case the energy is expressed as:
\begin{equation}
 \label{functional_meta_gga_gradient_eq:1}
E_{XC} = \int  F(\rho_{\alpha}, \rho_{\beta}, \nabla\rho_{\alpha},
\nabla\rho_{\beta}, \nabla^{2}\rho_{\alpha},
\nabla^{2}\rho_{\beta}, \tau_{\alpha}, \tau_{\beta})d^{3}r
\end{equation}
As suggested by (\ref{NUCLEAR_GRADIENT_DFTeq:2}), the general gradient
form in terms of the variables should be:
\begin{equation}
 \label{functional_meta_gga_gradient_eq:2}
E_{XC}^{A} = \sum_{\sigma}\sum_{\xi}\int
\frac{\partial F}{\partial \xi_{\sigma}} \xi_{\sigma}^{A} d^{3}r
\end{equation}
Since we already have the derivatives $\dfrac{\partial
F}{\partial \xi_{\sigma}}$, the problem is how to obtain the gradient
form of the variables.

For the tau variable, we have:
\begin{equation}
 \label{functional_meta_gga_gradient_eq:3}
\begin{split}
 \sum_{\mu\nu}(\nabla\phi_{\mu}\cdot\nabla\phi_{\nu})^{A}
&=2
\sum_{\mu}^{A}\sum_{\nu}(\nabla\phi_{\mu})^{A}\cdot(\nabla\phi_{\nu})
\\
&=-2
\sum_{\mu}^{A}\sum_{\nu}\nabla^{2}\phi_{\mu}(\nabla\phi_{\nu})
\end{split}
\end{equation}
Note that this is a vector quantity; we can label it as
$\xi_{tau}^{A}$.

For the Laplacian variable, it gives:
\begin{equation}
 \begin{split}
  \sum_{\mu\nu}\nabla^{2}(\phi_{\mu}\phi_{\nu})^{A} &=
  2\sum_{\mu}^{A}\sum_{\nu}\nabla^{2}(\phi_{\mu}\phi_{\nu})^{A} \\
  &=
  2\sum_{\mu}^{A}\sum_{\nu}\left[
\nabla\cdotp(\nabla\phi_{\mu}\phi_{\nu} +
\phi_{\mu}\nabla\phi_{\nu})\right]^{A} \\
   &= 2\sum_{\mu}^{A}\sum_{\nu}\left[
(\nabla^{2}\phi_{\mu})\phi_{\nu} +
2\nabla\phi_{\mu}\cdotp\nabla\phi_{\nu} +
\phi_{\mu}(\nabla^{2}\phi_{\nu})\right]^{A} \\
&=2\sum_{\mu}^{A}\sum_{\nu}\left[
(\nabla^{2}\phi_{\mu})^{A}\phi_{\nu} +
2(\nabla\phi_{\mu})^{A}\cdotp\nabla\phi_{\nu} +
(\phi_{\mu})^{A}(\nabla^{2}\phi_{\nu})\right] \\
&=-2\sum_{\mu}^{A}\sum_{\nu}\left[
(\nabla^{3}\phi_{\mu})\phi_{\nu} +
2(\nabla^{2}\phi_{\mu})\cdotp\nabla\phi_{\nu} +
(\nabla\phi_{\mu})(\nabla^{2}\phi_{\nu})\right]
 \end{split}
\label{functional_meta_gga_gradient_eq:4}
\end{equation}
Note that this is also a vector quantity, since $\nabla^{3}$ and
$\nabla$ are vector operators while $\nabla^{2}$ is a scalar operator;
we label it as $\xi_{lap}^{A}$.
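Before assembling the full meta-GGA gradient, here is a hedged
quadrature sketch of how the first ($\partial F/\partial\rho_{\sigma}$)
term of (\ref{NUCLEAR_GRADIENT_DFTeq:7}) could be accumulated atom by
atom; the AO-to-atom mapping and all grid quantities are hypothetical
placeholders:
\begin{verbatim}
import numpy as np

natm, nbf, ngrid = 2, 4, 500
rng = np.random.default_rng(5)
ao      = rng.normal(size=(ngrid, nbf))
ao_grad = rng.normal(size=(3, ngrid, nbf))
w  = np.full(ngrid, 1e-3)
P  = rng.normal(size=(nbf, nbf)); P = 0.5 * (P + P.T)
dF_drho = rng.normal(size=ngrid)         # dF/drho on the grid
on_atom = [np.array([0, 1]), np.array([2, 3])]  # AOs per atom

grad = np.zeros((natm, 3))
for A in range(natm):
    mu = on_atom[A]
    # -2 sum_{mu on A} sum_nu P_{mu nu}
    #     int (dF/drho) (grad phi_mu) phi_nu d3r
    grad[A] = -2 * np.einsum('g,xgm,gn,mn->x',
                             w * dF_drho, ao_grad[:, :, mu], ao, P[mu])
\end{verbatim}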
Finally, the gradient for the meta-GGA form can be expressed as:
\begin{equation}
 \label{functional_meta_gga_gradient_eq:5}
\begin{split}
E_{XC}^{A} &= -2\sum_{\mu}^{A}\sum_{\nu}\sum_{\sigma}
P_{\mu\nu}^{\sigma}\int\left\lbrace
\left(
\frac{\partial F}{\partial \rho_{\sigma}}(\nabla\phi_{\mu}\phi_{\nu
})\right) + \left(
    X_{\mu\nu}\frac{\partial F}{\partial (\nabla\rho_{\sigma})}
\right)
\right. \\
&\left. +
 \frac{1}{2}\left(
    \xi_{tau}^{A}\frac{\partial F}{\partial \xi_{tau}}\right) +
 \left(
    \xi_{lap}^{A}\frac{\partial F}{\partial \xi_{lap}}\right)
    \right\rbrace  d^{3}r
\end{split}
\end{equation}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{comment}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The functional derivative for the time dependent DFT}
%
%
%
%
%
The XC term in the time dependent density functional theory is
expressed as:
\begin{equation}
 \label{eq:functional:38}
E_{TD} = \int d^{3}r \int d^{3}r^{'}
\varphi_{p}^{*}(r)\varphi_{q}(r)
f_{xc}\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})
\end{equation}
Here $f_{XC}$ is called the exchange-correlation kernel, which is
expressed as:
\begin{equation}
 \label{eq:functional:39}
f_{XC} = \frac{\delta^{2} E_{XC} }{\delta\rho(r)
\delta\rho(r^{'})}
\end{equation}
Hence we need to derive the second functional derivative of $E_{XC}$.
Additionally, compared with (\ref{eq:XC_functional.8}), this term is
expressed in MO space.

Let's start from $\dfrac{\delta V_{\sigma}^{XC}(r)}{\delta
\rho_{\sigma^{'}}(r^{'})}$ (now the spin is considered in general) and
assume that the exchange-correlation energy is generally expressed as:
\begin{equation}
 E_{XC} = \int F(\rho_{\alpha}, \rho_{\beta}, \nabla\rho_{\alpha},
\nabla\rho_{\beta})d^{3}r
\end{equation}
It is in GGA form and expressed in variables.
Furthermore, we can also express it in the $\gamma$ form with slight
changes.

Here the only expression we need to evaluate is the second implicit
derivative of $E_{XC}$; hence with the result from
(\ref{TDADDEDeq:3}) we have:
\begin{equation}
 \begin{split}
\frac{\delta^{2}
E_{XC}}{\delta \rho_{\sigma}(r)\delta \rho_{\sigma^{'}}(r^{'})}
&= \frac{\partial V_{\sigma}^{XC}(r)} {\partial
\rho_{\sigma^{'}}(r^{'})}
- \nabla_{r^{'}}\cdot \left(
    \frac{\partial V_{\sigma}^{XC}(r)} {\partial
(\nabla_{r^{'}}\rho_{\sigma^{'}}(r^{'}))}
  \right) \\
&= \frac{\partial^{2} E_{XC} }{\partial\rho_{\sigma}(r)
\partial\rho_{\sigma^{'}}(r^{'})} - \nabla_{r}\cdotp\frac{\partial^{2}
E_{XC} }{\partial(\nabla_{r}\rho_{\sigma}(r))
\partial\rho_{\sigma^{'}}(r^{'})} \\
&-
\nabla_{r^{'}}\cdotp\frac{\partial^{2} E_{XC}}
{\partial\rho_{\sigma}(r)
\partial(\nabla_{r^{'}}\rho_{\sigma^{'}}(r^{'}))} \\
&+
\nabla_{r}\nabla_{r^{'}}\cdotp\frac{\partial^{2} E_{XC}}
{\partial(\nabla_{r}\rho_{\sigma}(r))
\partial(\nabla_{r^{'}}\rho_{\sigma^{'}}(r^{'}))}
 \end{split}
\label{TDADDEDeq:5}
\end{equation}
In this derivation we just bring in the expression for the exchange-
correlation potential and obtain the result.

Finally, we can use the integration by parts method, so all the
gradient operators in (\ref{TDADDEDeq:5}) can be dropped. We note that
for the $\nabla_{r}\nabla_{r^{'}}$ term we actually have to apply
integration by parts twice. Finally we get:
\begin{equation}
\begin{split}
  &XC_{ia, jb}   \\
&=\sum_{\mu\nu}\sum_{\lambda\eta}c^{*}_{i\mu\sigma}c_{a\nu\sigma}
c^{*}_{j\lambda\sigma^{'}}c_{b\eta\sigma^{'}}\int d^{3}r \int
d^{3}r^{'} \\
&\phi_{i\mu}^{*}(r)\phi_{a\nu}(r)
\frac{\partial^{2} E_{XC} }{\partial\rho_{\sigma}(r)
\partial\rho_{\sigma^{'}}(r^{'})}
\phi_{j\lambda}^{*}(r^{'})\phi_{b\eta}(r^{'}) \\
&+ \sum_{\mu\nu}\sum_{\lambda\eta}c^{*}_{i\mu\sigma}c_{a\nu\sigma}
c^{*}_{j\lambda\sigma^{'}}c_{b\eta\sigma^{'}}
\int d^{3}r \int d^{3}r^{'} \\
&\nabla_{r}\cdotp(\phi_{i\mu}^{*}(r)\phi_{a\nu}(r))
\frac{\partial^{2}
E_{XC} }{\partial(\nabla_{r}\rho_{\sigma}(r))
\partial\rho_{\sigma^{'}}(r^{'})}
\phi_{j\lambda}^{*}(r^{'})\phi_{b\eta}(r^{'}) \\
&+\sum_{\mu\nu}\sum_{\lambda\eta}c^{*}_{i\mu\sigma}c_{a\nu\sigma}
c^{*}_{j\lambda\sigma^{'}}c_{b\eta\sigma^{'}}
\int d^{3}r \int d^{3}r^{'} \\
&\phi_{i\mu}^{*}(r)\phi_{a\nu}(r)
\frac{\partial^{2} E_{XC}}
{\partial\rho_{\sigma}(r)
\partial(\nabla_{r^{'}}\rho_{\sigma^{'}}(r^{'}))}
\nabla_{r^{'}}\cdotp\phi_{j\lambda}^{*}(r^{'})\phi_{b\eta}(r^{'}) \\
&+\sum_{\mu\nu}\sum_{\lambda\eta}c^{*}_{i\mu\sigma}c_{a\nu\sigma}
c^{*}_{j\lambda\sigma^{'}}c_{b\eta\sigma^{'}}
\int d^{3}r \int d^{3}r^{'} \\
&\nabla_{r}\cdotp(\phi_{i\mu}^{*}(r)\phi_{a\nu}(r))
\frac{\partial^{2} E_{XC}}
{\partial(\nabla_{r}\rho_{\sigma}(r))
\partial(\nabla_{r^{'}}\rho_{\sigma^{'}}(r^{'}))}
\nabla_{r^{'}}\cdotp\phi_{j\lambda}^{*}(r^{'})\phi_{b\eta}(r^{'})
\end{split}
\label{TDADDEDeq:6}
\end{equation}
In this final expression we have just expanded the MO expression into
the AO expression.
By using (\ref{eq:functional:45}) and the delta function we can
finally drop the $r^{'}$ integral and change $r^{'}$ into $r$, which
gives the final expression.




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../main"
%%% End:


%
% here below is my first derivation for the TDDFT functional expression.
% wrong and not accurate. So abandon it.
%
%



Now let's expand it in a careful way. Firstly, let's try to expand
$\dfrac{\partial V_{\alpha}^{XC}(r)}
{\partial\rho_{\alpha}(r^{'})}$ by using the expression in
(\ref{eq:functional:36}):
\begin{equation}
  \label{eq:functional:7}
\begin{split}
\frac{\partial V_{\alpha}^{XC}(r)} {\partial
\rho_{\alpha}(r^{'})} &=
\frac{\partial^{2} f(r)} {\partial \rho_{\alpha}(r)\partial
\rho_{\alpha}(r^{'})}
- 2\nabla_{r}\cdot
\left(\frac{ \partial
\left(  \frac{\partial f(r)} {\partial
\gamma_{\alpha\alpha}(r)}\nabla_{r}\rho_{\alpha}(r) \right) }
    {\partial \rho_{\alpha}(r^{'})}
\right) \\
&-
\nabla_{r}\cdot
\left(
\frac{\partial\left( \frac{\partial f(r)}
    {\partial \gamma_{\alpha\beta}(r)} \nabla_{r}\rho_{\beta}(r)
\right) }
{\partial \rho_{\alpha}(r^{'})}
\right) \\
&= \frac{\partial^{2} f(r)} {\partial \rho_{\alpha}(r)\partial
\rho_{\alpha}(r^{'})}  -
2\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\alpha}(r)\partial\rho_{\alpha}(r^{'})}
\nabla_{r}\rho_{\alpha}(r)\right) \\
&-
\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\beta}(r)\partial\rho_{\alpha}(r^{'})}
\nabla_{r}\rho_{\beta}(r)\right)
\end{split}
\end{equation}
In this derivation we have to remember that when the partial
derivative is applied, $\nabla\rho$ and $\rho$ are independent, so if
we differentiate with respect to $\rho$, then $\nabla\rho$ should be
viewed as a constant.

%Here in this derivation, we have omitted some steps:
%\begin{align}
% \label{eq:XC_functional.3}
%& 2\nabla_{r}\cdot
%\left(  \frac{\partial f(r)} {\partial
%\gamma_{\alpha\alpha}(r)}\frac{\partial(\nabla_{r}\rho_{\alpha}(r))}{
%\partial \rho_{\alpha}(r^{'})} \right) \nonumber \\
%&= 2\nabla_{r}\cdot
%\left(  \frac{\partial f(r)} {\partial
%\gamma_{\alpha\alpha}(r)}\nabla_{r}\cdot\left(
%\frac{\partial\rho_{\alpha}(r) } {
%\partial \rho_{\alpha}(r^{'})} \right) \right) \nonumber \\
%&= 2\nabla_{r}\cdot
%\left(  \frac{\partial f(r)} {\partial
%\gamma_{\alpha\alpha}(r)}\nabla_{r}\cdot\delta(r-r^{'}) \right)
%\nonumber \\
%&= 0
%\end{align}

Then we expand the second term in (\ref{eq:functional:40}) again by
using (\ref{eq:functional:36}):
\begin{equation}
  \label{eq:functional:8}
\begin{split}
&\nabla_{r^{'}}\cdot \left(
\frac{\partial V_{\alpha}^{XC}(r)} {\partial
(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right) \\
&=
\nabla_{r^{'}}\cdot \left( \frac{\partial^{2} f(r)} {\partial
\rho_{\alpha} (r)
\partial(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}\right)
- 2\nabla_{r^{'}}\nabla_{r}\cdot
\left(\frac{ \partial
\left(  \frac{\partial f(r)} {\partial
\gamma_{\alpha\alpha}(r)}\nabla_{r}\rho_{\alpha}(r) \right) }
    {\partial (\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right) \\
&-
\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial\left( \frac{\partial f(r)}
    {\partial \gamma_{\alpha\beta}(r)} \nabla_{r}\rho_{\beta}(r)
\right) }
{\partial (\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right) \\
&= \nabla_{r^{'}}\cdot \left( \frac{\partial^{2} f(r)} {\partial
\rho_{\alpha}(r) \partial(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}\right)
-
2\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial f(r)}
{\partial\gamma_{\alpha\alpha}(r)}
\delta(r-r^{'})\right) \\
&-
2\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\alpha}(r)\partial(\nabla_{r^{'}}\rho_{\alpha}
(r^{'})) }
\nabla_{r}\rho_{\alpha}(r)\right) \\
&-
\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\beta}(r)\partial(\nabla_{r^{'}}\rho_{\alpha}
(r^{'}))}
\nabla_{r}\rho_{\beta}(r)\right)
\end{split}
\end{equation}
Additionally, we note that here $\nabla_{r}\rho_{\beta}(r)$ is
actually independent of $\nabla_{r^{'}}\rho_{\alpha}(r^{'})$, since
the alpha electron density and the beta electron density are assumed
to occupy different and independent spatial spaces. The closed-shell
case can be viewed as a special case obtained by forcing the alpha
electron density to equal the beta electron density, and the formulas
generated from the unrestricted type are still useful.


Here we can use an old trick, already used in
(\ref{eq:functional:36}), to drop the differentiation with respect to
$\nabla_{r^{'}}\rho_{\alpha}$:
\begin{align}
 \label{eq:functional:52}
\frac{\partial L(r)}{\partial(\nabla_{r^{'}}\rho_{\alpha}
(r^{'}))}
&= \int \frac{\partial
L(r)}{\partial\gamma_{\alpha\alpha}(r^{'})}
\frac{\partial \gamma_{\alpha\alpha}(r^{'})}
{\partial(\nabla_{r^{'}} \rho_{\alpha} (r^{'}))}
d^{3}r^{'} \nonumber \\
&= \int \frac{\partial
L(r)}{\partial\gamma_{\alpha\alpha}(r^{'})}
2\nabla_{r^{'}} \rho_{\alpha} (r^{'})
d^{3}r^{'} \nonumber \\
&= \frac{\partial
L(r)}{\partial\gamma_{\alpha\alpha}(r^{'})}
2\nabla_{r^{'}} \rho_{\alpha} (r^{'}) \delta(r-r^{'})
\end{align}

By using the result in (\ref{eq:functional:52}), the expression in
(\ref{eq:functional:8}) can be transformed as:
\begin{equation}
 \begin{split}
  &\nabla_{r^{'}}\cdot \left(
\frac{\partial V_{\alpha}^{XC}(r)} {\partial
(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right) \\
&= 2\nabla_{r^{'}}\cdot \left( \frac{\partial^{2} f(r)} {\partial
\rho_{\alpha}(r) \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})\delta(r-r^{'})\right) \\
&-
2\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial f(r)}
{\partial\gamma_{\alpha\alpha}(r)}
\delta(r-r^{'})\right) \\
&-
4\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\alpha}(r)
 \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})
\nabla_{r}\rho_{\alpha}(r) \delta(r-r^{'}) \right) \\
&-
2\nabla_{r^{'}}\nabla_{r}\cdot
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\beta}(r)
 \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})
\nabla_{r}\rho_{\beta}(r) \delta(r-r^{'}) \right)
 \end{split}
 \label{eq:functional:53}
\end{equation}

Now let's begin to deal with the integral form shown in
(\ref{eq:functional:38}).
For (\ref{eq:functional:53}) we can apply the general method in
(\ref{eq:integration_rule_functional}), so that we can shift the
$\nabla$ from the functional onto the orbitals:
\begin{align}
  \label{eq:functional:54}
&\int \int f(r) \left[ \nabla_{r^{'}}\nabla_{r}\cdot
L(r,r^{'})\right] g(r^{'}) d^{3}r d^{3}r^{'} \nonumber \\
&= \int f(r)d^{3}r \int \left[ \nabla_{r^{'}}\nabla_{r}\cdot
L(r,r^{'})\right] g(r^{'}) d^{3}r^{'} \nonumber \\
&= \int f(r)d^{3}r \left[ -\int \nabla_{r}\cdot L(r,r^{'})
\nabla_{r^{'}}g(r^{'})
d^{3}r^{'} \right] \nonumber \\
&= -\int \nabla_{r^{'}}\cdot g(r^{'})
d^{3}r^{'} \int \nabla_{r} \cdot L(r,r^{'}) f(r)d^{3}r  \nonumber \\
&= -\int \nabla_{r^{'}}\cdot g(r^{'})
d^{3}r^{'} \left[ -\int  L(r,r^{'}) \nabla_{r}\cdot f(r) d^{3}r\right]
 \nonumber \\
&= \int \int \nabla_{r}\cdot f(r) L(r,r^{'})
\nabla_{r^{'}}\cdot g(r^{'})
d^{3}r d^{3}r^{'}
\end{align}
By this result, we can safely transform the integral for
(\ref{eq:functional:53}) as:
\begin{equation}
 \begin{split}
&-\int d^{3}r \int d^{3}r^{'}
\varphi_{p}^{*}(r)\varphi_{q}(r)
\left[ \nabla_{r^{'}}\cdot \left(
\frac{\partial V_{\alpha}^{XC}(r)} {\partial
(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right)\right] \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'}) \\
&=
-2\int d^{3}r \int d^{3}r^{'}
\left( \frac{\partial^{2} f(r)} {\partial
\rho_{\alpha}(r) \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})\delta(r-r^{'})\right) \\
&\left(\varphi_{p}^{*}(r)\varphi_{q}(r)\right)
\nabla_{r^{'}}\cdot\left(
\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right) \\
&+2
\int d^{3}r \int d^{3}r^{'}
\left(
\frac{\partial f(r)}
{\partial\gamma_{\alpha\alpha}(r)}
\delta(r-r^{'})\right) \\
&\nabla_{r}\cdot
\left(\varphi_{p}^{*}(r)\varphi_{q}(r)\right)
\nabla_{r^{'}}\cdot
\left(\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right) \\
&+4
\int d^{3}r \int d^{3}r^{'}
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\alpha}(r)
 \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})
\nabla_{r}\rho_{\alpha}(r) \delta(r-r^{'}) \right) \\
&\nabla_{r}\cdot
\left(\varphi_{p}^{*}(r)\varphi_{q}(r)\right)
\nabla_{r^{'}}\cdot
\left(\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right) \\
&+2
\int d^{3}r \int d^{3}r^{'}
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\beta}(r)
 \partial\gamma_{\alpha\alpha}(r^{'})}
\nabla_{r^{'}}\rho_{\alpha}(r^{'})
\nabla_{r}\rho_{\beta}(r) \delta(r-r^{'}) \right) \\
&\nabla_{r}\cdot
\left(\varphi_{p}^{*}(r)\varphi_{q}(r)\right)
\nabla_{r^{'}}\cdot
\left(\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right)
\end{split}
\label{eq:functional:55}
\end{equation}

Finally, let's use the delta function; in general we have:
\begin{equation}
 \label{eq:functional:56}
\int d^{3}r \int d^{3} r^{'} f(r^{'}) g(r) \delta (r-r^{'}) =
\int d^{3}r f(r) g(r)
\end{equation}
Hence in the result of (\ref{eq:functional:55}), what we do in the
presence of the delta function is to transform all functions and
variables related to $r^{'}$ into $r$, and then drop the integration
symbol $\int d^{3} r^{'}$.
After this procedure, the result in (\ref{eq:functional:55}) finally
becomes:
\begin{equation}
 \begin{split}
&-\int d^{3}r \int d^{3}r^{'}
\varphi_{p}^{*}(r)\varphi_{q}(r)
\left[ \nabla_{r^{'}}\cdot \left(
\frac{\partial V_{\alpha}^{XC}(r)} {\partial
(\nabla_{r^{'}}\rho_{\alpha}(r^{'}))}
\right)\right] \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'}) \\
&=
-2\int d^{3}r
\left( \frac{\partial^{2} f} {\partial
\rho_{\alpha} \partial\gamma_{\alpha\alpha}}
\nabla\rho_{\alpha}\right)
\left(\varphi_{p}^{*}\varphi_{q}\right)
\left[ \nabla\cdot\left(
\varphi_{s}^{*}\varphi_{t}\right)\right]  \\
&+2
\int d^{3}r
\left(
\frac{\partial f}
{\partial\gamma_{\alpha\alpha}}\right)
\left[ \nabla\cdot
\left(\varphi_{p}^{*}\varphi_{q}\right)\right]
\left[ \nabla\cdot
\left(\varphi_{s}^{*}\varphi_{t}\right)\right] \\
&+4
\int d^{3}r
\left(
\frac{\partial^{2} f}
{\partial\gamma_{\alpha\alpha}^{2}}
\nabla\rho_{\alpha}
\nabla\rho_{\alpha}\right)
\left[ \nabla\cdot
\left(\varphi_{p}^{*}\varphi_{q}\right)\right]
\left[ \nabla\cdot
\left(\varphi_{s}^{*}\varphi_{t}\right)\right] \\
&+2
\int d^{3}r
\left(
\frac{\partial^{2} f}
{\partial\gamma_{\alpha\beta}
 \partial\gamma_{\alpha\alpha}}
\nabla\rho_{\alpha}
\nabla\rho_{\beta}\right)
\left[ \nabla\cdot
\left(\varphi_{p}^{*}\varphi_{q}\right)\right]
\left[ \nabla\cdot
\left(\varphi_{s}^{*}\varphi_{t}\right)\right]
\end{split}
\label{eq:functional:57}
\end{equation}

Next let's repeat the same procedure for the first term in
(\ref{eq:functional:40}), for which we already have an explicit
formula in (\ref{eq:XC_functional.7}).
We can just derive the integration form by the method shown in
(\ref{eq:functional:54}):
\begin{equation}
 \begin{split}
&\int d^{3}r \int d^{3}r^{'}
\varphi_{p}^{*}(r)\varphi_{q}(r)
\left[ \frac{\partial V_{\alpha}^{XC}(r)} {\partial
\rho_{\alpha}(r^{'})}\right]
\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'}) \\
&=\int d^{3}r \int d^{3}r^{'}
\left( \varphi_{p}^{*}(r)\varphi_{q}(r)\right)
\frac{\partial^{2} f(r)} {\partial \rho_{\alpha}(r)\partial
\rho_{\alpha}(r^{'})}
\left( \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right)  \\
&+2\int d^{3}r \int d^{3}r^{'}
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\alpha}(r)\partial\rho_{\alpha}(r^{'})}
\nabla_{r}\rho_{\alpha}(r)\right)
\left[ \nabla\cdot\varphi_{p}^{*}(r)\varphi_{q}(r)\right]
\left( \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right) \\
&+\int d^{3}r \int d^{3}r^{'}
\left(
\frac{\partial^{2} f(r)}
{\partial\gamma_{\alpha\beta}(r)\partial\rho_{\alpha}(r^{'})}
\nabla_{r}\rho_{\beta}(r)\right)
\left[ \nabla\cdot\varphi_{p}^{*}(r)\varphi_{q}(r)\right]
\left( \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})\right)
 \end{split}
\label{eq:functional:58}
\end{equation}
According to (\ref{eq:functional:45}), we have:
\begin{equation}
 \label{eq:functional:59}
\frac{\partial^{2} f(r)} {\partial \rho_{\alpha}(r)\partial
\rho_{\alpha}(r^{'})} = \frac{\partial^{2} f(r)} {\partial
\rho_{\alpha}(r)\partial
\rho_{\alpha}(r^{'})}\delta(r-r^{'})
\end{equation}
So we can drop all the $r^{'}$ dependence, and
(\ref{eq:functional:58}) becomes:
\begin{equation}
 \begin{split}
&\int d^{3}r \int d^{3}r^{'}
\varphi_{p}^{*}(r)\varphi_{q}(r)
\left[ \frac{\partial V_{\alpha}^{XC}(r)} {\partial
\rho_{\alpha}(r^{'})}\right]
\varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'}) \\
&=\int d^{3}r
\left(
\varphi_{p}^{*}\varphi_{q}\varphi_{s}^{*}\varphi_{t}
\right)
\frac{\partial^{2} f}
{\partial \rho_{\alpha}^{2}} \\
&+2\int d^{3}r
\left(
\frac{\partial^{2} f}
{\partial\gamma_{\alpha\alpha}\partial\rho_{\alpha}}
\nabla\rho_{\alpha}\right)
\left[ \nabla\cdot\varphi_{p}^{*}\varphi_{q}\right]
\left( \varphi_{s}^{*}\varphi_{t}\right) \\
&+\int d^{3}r
\left(
\frac{\partial^{2} f}
{\partial\gamma_{\alpha\beta}\partial\rho_{\alpha}}
\nabla\rho_{\beta}\right)
\left[ \nabla\cdot\varphi_{p}^{*}\varphi_{q}\right]
\left( \varphi_{s}^{*}\varphi_{t}\right)
 \end{split}
\label{eq:functional:60}
\end{equation}


From (\ref{functional_mega_gga_eq:7}) it turns out that the functional
derivative takes a very simple expression; it equals the sum of the
partial derivatives with respect to the variables:
\begin{equation}
 \label{functional_mega_gga_functional_derivatives_eq:1}
\frac{\delta E_{XC}}{\delta \rho(r)} = \sum_{\sigma}\sum_{\xi}
    \frac{\partial F}{\partial \xi_{\sigma}}
\end{equation}

Then from this expression we can easily get the higher orders of
functional derivatives:
\begin{equation}
 \label{functional_mega_gga_functional_derivatives_eq:2}
\begin{split}
\frac{\delta^{2} E_{XC}}{\delta \rho(r) \delta \rho(r^{'})} &=
\sum_{\sigma^{'}}\sum_{\zeta}\left\lbrace \sum_{\sigma}\sum_{\xi}
   \frac{\partial\left( {\frac{\partial F(r)}{\partial
\xi_{\sigma}(r)}}\right)}{\partial\zeta_{\sigma^{'}}(r^{'})}
\right\rbrace  \\
&= \sum_{\sigma^{'}}\sum_{\zeta}\sum_{\sigma}\sum_{\xi}
   \frac{\partial^{2} F(r)}{\partial
\xi_{\sigma}(r)\partial\zeta_{\sigma^{'}}(r^{'})}
\end{split}
\end{equation}
Here $\xi$ and $\zeta$ both represent the variables.

Furthermore, even the third functional derivative can be derived in a
similar way:
\begin{equation}
 \label{functional_mega_gga_functional_derivatives_eq:3}
\begin{split}
\frac{\delta^{3} E_{XC}}{\delta \rho(r) \delta \rho(r^{'})\delta
\rho(r^{''})}
&=
\sum_{\sigma^{''}}\sum_{\kappa}
\sum_{\sigma^{'}}\sum_{\zeta}
\sum_{\sigma }\sum_{\xi}
   \frac{\partial^{3} F(r)}{\partial
\xi_{\sigma}(r)\partial\zeta_{\sigma^{'}}(r^{'})\partial\kappa_{
\sigma^{''}}(r^{''})}
\end{split}
\end{equation}


% First, the expression for the
% $f_{XC}$ is related to two variables: $r$ and $r^{'}$. We can make
% use of (\ref{eq:functional:45}) to drop the additional $r^{'}$:
% \begin{align}
%   \label{eq:functional:41}
% f_{XC} &= \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% \rho_{\alpha}(r^{'})}
% - \nabla\cdot \left(
%     \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r^{'}))}
%   \right) \nonumber \\
% &= \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% \rho_{\alpha}(r)} \delta(r-r^{'}) - \nabla\cdot \left(
%     \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))} \delta(r-r^{'}) \right)
% \end{align}
% Combined with the integration form in (\ref{eq:functional:38}), we
% can see that the integration for the $r^{'}$ is dropped:
% \begin{equation}
%  \begin{split}
%   \label{eq:functional:46}
% E_{XC} &= \int d^{3}r \int d^{3}r^{'} \varphi_{p}^{*}(r)\varphi_{q}(r)
% \left\lbrace \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% \rho_{\alpha}(r)}
% \delta(r-r^{'})\right\rbrace \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})
% \\
% &- \int d^{3}r \int d^{3}r^{'} \varphi_{p}^{*}(r)\varphi_{q}(r)
% \left\lbrace \nabla\cdot \left(
%     \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))}  \right)\delta(r-r^{'})\right\rbrace
% \varphi_{s}^{*}(r^{'})\varphi_{t}(r^{'})
% \\
% &= \int d^{3}r \varphi_{p}^{*}(r)\varphi_{q}(r)
% \left\lbrace \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% \rho_{\alpha}(r)}
% \right\rbrace \varphi_{s}^{*}(r)\varphi_{t}(r) \\
% &- \int d^{3}r \varphi_{p}^{*}(r)\varphi_{q}(r)
% \left\lbrace \nabla\cdot \left(
%     \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))}  \right)\right\rbrace
% \varphi_{s}^{*}(r)\varphi_{t}(r)
%  \end{split}
% \end{equation}
% We can get this because of the definition of the delta function.
% Then
% $f_{XC}$ is transformed into:
% \begin{equation}
%  \label{eq:functional:47}
% f_{XC} = \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% \rho_{\alpha}(r)} - \nabla\cdot \left(
%     \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))}  \right)
% \end{equation}
% In the later content, since it is merely a function of $r$, we will
% omit the argument in the expression.


% Next let's evaluate the second term of $\nabla\cdot \left(
%     \dfrac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))}  \right)$ in (\ref{eq:functional:47}):
% \begin{equation}
%   \label{eq:functional:49}
% \begin{split}
% \nabla\cdot \left(
% \frac{\partial V_{\alpha}^{XC}(r)} {\partial
% (\nabla\rho_{\alpha}(r))}
% \right) &=
% \nabla\cdot \left( \frac{\partial^{2} f} {\partial
% (\nabla\rho_{\alpha})\partial \rho_{\alpha}}\right)
% - 2\nabla^{2}\cdot
% \left(\frac{ \partial
% \left(  \frac{\partial f} {\partial
% \gamma_{\alpha\alpha}}\nabla\rho_{\alpha} \right) }
%     {\partial (\nabla\rho_{\alpha})}
% \right) \\
% &-
% \nabla^{2}\cdot
% \left(
% \frac{\left( \frac{\partial f}
%     {\partial \gamma_{\alpha\beta}} \nabla\rho_{\beta} \right) }
% {\partial (\nabla\rho_{\alpha})}
% \right) \\
% &= \nabla\cdot \left( \frac{\partial^{2} f} {\partial
% (\nabla\rho_{\alpha})\partial \rho_{\alpha}}\right)  -
% 2\nabla^{2}\cdot
% \left(
% \frac{\partial f}
% {\partial\gamma_{\alpha\alpha}}
% \right) \\
% &-
% 2\nabla^{2}\cdot
% \left(
% \frac{\partial^{2} f}
% {\partial\gamma_{\alpha\alpha}\partial(\nabla\rho_{\alpha})}
% \nabla\rho_{\alpha}\right) \\
% &-
% \nabla^{2}\cdot
% \left(
% \frac{\partial^{2} f}
% {\partial\gamma_{\alpha\beta}\partial(\nabla\rho_{\alpha})}
% \nabla\rho_{\beta}\right)
% \end{split}
% \end{equation}

% Now we begin to deal with the integration. For the first term in
% (\ref{eq:functional:46}), according to the derivation in
% (\ref{eq:integration_rule_functional}), its result is:
% \begin{equation}
%  \begin{split}
% &\int d^{3}r \varphi_{p}^{*}(r)\varphi_{q}(r)
% \left\lbrace \frac{\partial V_{\alpha}^{XC}(r)}
% {\partial\rho_{\alpha}(r)}
% \right\rbrace
% \varphi_{s}^{*}(r)\varphi_{t}(r) \\
% &=
% \int d^{3}r \varphi_{p}^{*}\varphi_{q}
% \left\lbrace
% \frac{\partial^{2} f} {\partial \rho^{2}_{\alpha}}
% \right\rbrace
% \varphi_{s}^{*}\varphi_{t} + \int \left[
% 2
% \left(
% \frac{\partial^{2} f}
% {\partial\gamma_{\alpha\alpha}\partial\rho_{\alpha}}
% \nabla\rho_{\alpha}\right) \right. \\
% &+
% \left. \left(
% \frac{\partial^{2} f}
% {\partial\gamma_{\alpha\beta}\partial\rho_{\alpha}}
% \nabla\rho_{\beta}\right) \right] \nabla
% (\varphi_{p}^{*}\varphi_{q}\varphi_{s}^{*}\varphi_{t})d^{3}r
%  \end{split}
%   \label{eq:functional:50}
% \end{equation}

% For the second term, we apply the same idea, so the $\nabla$
% operator and the Laplacian operator can all be dropped from the
% functional.
Hence we can get the result:\n% \\begin{equation}\n%  \\begin{split}\n%   &\\int d^{3}r \\varphi_{p}^{*}(r)\\varphi_{q}(r)\n% \\left\\lbrace \\nabla\\cdot \\left(\n%     \\frac{\\partial V_{\\alpha}^{XC}(r)} \n%          {\\partial (\\nabla\\rho_{\\alpha}(r))} \\right)\n% \\right\\rbrace\n% \\varphi_{s}^{*}(r)\\varphi_{t}(r) \\\\\n% &=- \\int d^{3}r\n% \\frac{\\partial^{2} f} {\\partial\n% (\\nabla\\rho_{\\alpha})\\partial \\rho_{\\alpha}} \n% \\nabla (\\varphi_{p}^{*}\\varphi_{q}\\varphi_{s}^{*}\\varphi_{t}) \\\\\n% &+ \\int d^{3}r\n% \\left[ \n% -2\\left(\n% \\frac{\\partial f}\n% {\\partial\\gamma_{\\alpha\\alpha}}\n% \\right) \n% -2 \n% \\left(\n% \\frac{\\partial^{2} f}\n% {\\partial\\gamma_{\\alpha\\alpha}\\partial(\\nabla\\rho_{\\alpha})}\n% \\nabla\\rho_{\\alpha}\\right) \\right. \\\\\n% &-\\left. \n% \\left(\n% \\frac{\\partial^{2} f}\n% {\\partial\\gamma_{\\alpha\\beta}\\partial(\\nabla\\rho_{\\alpha})}\n% \\nabla\\rho_{\\beta}\\right)\n%  \\right]\n% \\nabla^{2}(\\varphi_{p}^{*}\\varphi_{q}\\varphi_{s}^{*}\\varphi_{t})\n%  \\end{split}\n% \\label{eq:functional:51}\n% \\end{equation} \n\n\\end{comment}\n\n
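For a concrete special case (our own illustration, added as a sanity check and not part of the derivation above): if $F$ is of LDA type, i.e. it depends only on the densities $\\rho_{\\alpha}(r)$ and $\\rho_{\\beta}(r)$ and not on their gradients, then the sums over the variables collapse to the density terms, and (\\ref{functional_mega_gga_functional_derivatives_eq:3}) reduces to\n\\begin{equation*}\n\\frac{\\delta^{3} E_{XC}}{\\delta \\rho(r) \\delta \\rho(r^{'})\\delta \\rho(r^{''})}\n=\n\\sum_{\\sigma^{''}}\\sum_{\\sigma^{'}}\\sum_{\\sigma}\n\\frac{\\partial^{3} F(r)}{\\partial\\rho_{\\sigma}(r)\\partial\\rho_{\\sigma^{'}}(r^{'})\\partial\\rho_{\\sigma^{''}}(r^{''})}\n\\end{equation*}\n\n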
{"text": "\\section{Syntactical considerations}\n\n\\begin{defn}\nA language $\\Li$ is a set of symbols. Throughout this dissertation we are going to work only with relational laguages; in some specific moments we will add constants to the language, but we will be explicit when we are doing so. We use $P, Q, P\\p, Q\\p, \\dots$ to denote relation symbols. It is assumed that each relation symbol $P$ of $\\Li$ is an $n$-ary relation symbol for $n \\in \\omega$. We call a $0$-ary relation symbol a \\textit{propositional letter}, and we use $p,q,p\\p,q\\p, \\dots$ to denote propositional letters (also called \\textit{propositional variables}). \n\n\\qquad We use $\\Li, \\Li^{\\prime}, \\Li^{\\prime \\prime}, \\dots$ as variables for languages. If $\\Li \\subseteq \\Li^{\\prime}$, we say that $\\Li^{\\prime}$ is an \\textit{expansion} of $\\Li$, and that $\\Li$ is a \\textit{reduction} of $\\Li^{\\prime}$.\n\\end{defn}\n\n\\begin{defn}\nTogether with $\\Li$ we define the following \\textit{logical symbols}:\n\n\\begin{itemize} \n\\item $x_{0}, x_{1}, x_{2}, \\dots$ (\\textit{variables});\n\\item $\\nao, \\ou$ (\\textit{not, or});\n\\item $\\ex$ (\\textit{there exists});\n\\item $\\Diamond$ (\\textit{possibility symbol});\n\\item $=$ (\\textit{equality symbol});\n\\item $),($ (\\textit{parentheses}).\n\\end{itemize}\n\\qquad We use $x, y, z, \\dots$ as syntactical variables for variables.\n\\end{defn}\n\n\\begin{defn}\nThe set $\\FLi$ of formulas of $\\Li$ is defined by the following rules:  \n\\begin{itemize} \n\\item If $x, y$ are variables, then $x =y$ is a formula of $\\Li$.\n\\item If $x_1, \\dots, x_n$ $(n \\geq 0)$  are variables and $P$ is an $n$-ary relation symbol of $\\Li$, then $Px_1 \\dots x_n$ is a formula of $\\Li$.\n\\item If $\\varphi$ is a formula of $\\Li$, then $\\nao \\varphi$ is a formula of $\\Li$.\n\\item If $\\varphi$ and $\\psi$ are formulas of $\\Li$, then $(\\varphi \\ou \\psi)$ is a formula of $\\Li$.\n\\item If $\\varphi$ is a formula of $\\Li$, then $\\Diamond \\varphi$ is a formula of $\\Li$.\n\\item If $\\varphi$ is a formula of $\\Li$ and $x$ a variable, then $\\ex x \\varphi$ is a formula of $\\Li$.\n\\end{itemize}\n\\qquad We assume the standard syntactical notions of \\textit{atomic formula}, \\textit{free variable}, \\textit{bound variable}, \\textit{sentence}, \\textit{formula complexity} and \\textit{proof (definition) by induction on formulas}. We are going to employ the usual abbreviations: \n\n\\begin{center}\t\n$(\\varphi \\e \\psi) := \\nao (\\nao \\varphi \\ou \\nao \\psi)$\\\\\n$(\\varphi \\impli \\psi) := (\\nao \\varphi \\ou \\psi)$\\\\\n$(\\varphi \\see \\psi) := (\\varphi \\impli \\psi)\\e (\\psi \\impli \\varphi)$\\\\\n$\\todo x \\varphi := \\nao\\ex x \\nao \\varphi$\\\\\n$\\Box \\varphi := \\nao\\Diamond \\nao \\varphi$\\\\\n\\end{center}\n\n\\qquad We write $\\varphi(x_{1}, \\dots, x_{n})$ to denote that the free variables of $\\varphi$ are among $\\{x_{1}, \\dots, x_{n}\\}$. Where $y_{1}, \\dots, y_{n}$ are variables, we write $\\varphi(y_{1}/x_{1}, \\dots, y_{n}/x_{n})$ to denote the formula obtained by substitution of $y_{1}, \\dots, y_{n}$ for all the free occurrences of $x_{1}, \\dots, x_{n}$ in $\\varphi$,  respectively. When it is clear from the context which variables are free in $\\varphi$ we simply write $\\varphi(y_{1}, \\dots, y_{n})$ instead of $\\varphi(y_{1}/x_{1}, \\dots, y_{n}/x_{n})$. 
We use $\\vec{x},\\vec{y}, \\dots$ for sequences of variables; and we write $\\todo \\vec{x} \\varphi(\\vec{x})$ in place of $\\todo x_1 \\dots \\todo x_n\\varphi(x_1, \\dots ,x_n)$. \n\\end{defn}\n\n\n\\section{Models: basic notions}\n\n\\begin{defn}\nA \\textit{frame} is a tuple $\\bl\\W,\\R\\br$ in which:\n\n\\begin{itemize}\n\\item $\\W \\neq \\vazio$.\n\\item $\\R \\subseteq \\W\\times \\W$.\n\\end{itemize}\n\\end{defn}\n\n\n\\begin{defn}\nA \\textit{skeleton}\\footnote{Sometimes called \\textit{augmented frame}.} is a quadruple $\\Frame$ in which $\\bl\\W,\\R\\br$ is a frame and: \n\\begin{itemize} \n\\item $\\D \\neq \\vazio$.\n\\item $\\barD: \\W \\impli \\Pa(\\D)$, and for every $w$ of $\\W$, $\\barD(w) \\neq \\vazio$. \n\\item $\\D = \\bigcup_{w \\in \\W} \\barD_{w}$.\n\\end{itemize}\n\n\\qquad The intuition behind the notion of skeleton is the same as in \\cite{Kripke63}: $\\W$ is the set of all `possible worlds'; $\\R$ is the accessibility relation between worlds; $\\barD$ is a function which gives to each world a domain of individuals, and $\\D$ is the set of all possible individuals.\n\n\\qquad We use $w, v, u, w^{\\prime}, w_0, w_1, \\dots$ as variables for worlds. From now on we write $\\barD_{w}$ instead of $\\barD(w)$. In the cases where $\\barD$ is a constant function we write $\\bl\\W,\\R, \\D\\br$ instead of $\\Frame$.\n\\end{defn}\n\n\\begin{defn}\nA \\textit{(modal) model} for $\\Li$ is a quintuple $\\M = \\strucA$ in which $\\Frame$ is a skeleton and $\\I$ is an \\textit{interpretation function}, i.e., a function assigning to each $n$-ary relational symbol $P$ of $\\Li$ and each possible world $w$ an $n$-ary relation $\\I(P,w)$ on $\\D$. \n\n\\qquad We use $\\M, \\N, \\M\\p, \\dots$ as variables for models. \n\\end{defn}\n\n\\begin{defn}\nLet $\\Li$ and $\\Li^{\\prime}$ be languages such that $\\Li^{\\prime} \\subseteq \\Li$, $\\M = \\strucA$ be a model for $\\Li$ and $\\M^{\\prime} = \\bl\\W\\p,\\R\\p, \\D\\p, \\barD\\p,\\I\\p\\br$ be a model for $\\Li^{\\prime}$. We call $\\M^{\\prime}$ a \\textit{reduct} of $\\M$ (and $\\M$ an \\textit{expansion} of $\\M^{\\prime}$) iff $\\W =\\W^{\\prime}$, $\\R =\\R^{\\prime}$, $\\D =\\D^{\\prime}$, $\\barD =\\barD^{\\prime}$, and $\\I$ and $\\I^{\\prime}$ agree on the symbols of $\\Li^{\\prime}$. We write $\\M^{\\prime} = \\M|_{\\Li^{\\prime}}$.\n\\end{defn}\n\n\\begin{defn}\nA \\textit{valuation} in a model $\\M = \\strucA$ is a function $h$ from the set of variables to $\\D$.\\footnote{Although it is more natural to use $v$ to denote a valuation, it is easy to get lost in the proofs when we use $v$ for valuations and $w$ and $u$ for worlds.} We say that $h\\p$ is an $x$\\textit{-variant} of $h$ if the two valuations agree on all variables except possibly $x$. Similarly, we say that a valuation $h\\p$ is an $x$\\textit{-variant of $h$ at $w$} if $h\\p$ is an $x$-variant of $h$ and $h\\p(x) \\in \\barD_w$.\n\\end{defn}\n\n\\begin{defn}\nLet $\\M = \\strucA$ be a model for $\\Li$, $\\varphi$ a formula of $\\Li$, $h$ a valuation in $\\M$ and $w \\in \\W$. The notion \\textit{$\\varphi$ is true at world $w$ of $\\M$ with respect to valuation $h$}, in symbols $\\M,w \\vSs \\varphi$, is defined recursively as follows: \n\n\\begin{itemize} \n\\item[] $\\M,w \\vSs x = y$ iff $h(x) = h(y)$. \n\\item[] $\\M,w \\vSs Px_{1} \\dots x_{n}$ iff $\\bl h(x_{1}), \\dots, h(x_{n})\\br \\in \\I(P,w)$. 
\n\\item[] $\\M,w \\vSs \\nao \\psi$ iff $\\M,w \\nvSs \\psi$.\n\\item[] $\\M,w \\vSs \\psi \\ou \\theta$ iff $\\M,w \\vSs \\psi$ or $\\M,w \\vSs \\theta$.\n\\item[] $\\M,w \\vSs \\Diamond \\psi$ iff there is a $w\\p \\in \\W$ such that $w\\R w\\p$ and $\\M,w\\p \\vSs \\psi$.\n\\item[] $\\M,w \\vSs \\ex x \\psi$ iff there is an $x$-variant $h\\p$ of $h$ at $w$ such that $\\M,w \\vSp \\psi$.\n\\end{itemize}\n\n\\qquad This definition enables us to speak of the truth of a formula at a world in a model without mentioning the valuation. We write $\\M,w \\models \\varphi$ if for every valuation $h$, $\\M,w \\vSs \\varphi$; and when that is the case we say that \\textit{$\\varphi$ is true in $\\M$ at $w$}. We write $\\M \\models \\varphi$ if for every world $w$ of $\\M$, $\\M,w \\models \\varphi$. And we say that a formula $\\varphi$ is \\textit{valid in a class of models} if for every model $\\M$ of this class, $\\M \\models \\varphi$.\n\n\\qquad Let $\\Gamma$ be a set of formulas of $\\Li$ (we also call $\\Gamma$ a \\textit{theory}); then $\\M,w \\models \\Gamma$ if for every $\\varphi \\in \\Gamma$, $\\M,w \\models \\varphi$. In this case, we say that the pair $\\M,w$ is \\textit{a model for} $\\Gamma$. Two formulas $\\varphi$ and $\\psi$ are \\textit{equivalent} if for every model $\\M$, $\\M \\models \\varphi$ iff $\\M \\models \\psi$.\n\n% Similarly, $\\M \\models \\Gamma$ if for every $\\varphi \\in \\Gamma$, $\\M \\models \\varphi$.\n\n%, or that \\textit{$\\M, w$ is a model for $\\varphi$}. Similarly, if $\\M,w \\models T$, we say that \\textit{$\\M,w$ is a model for $T$}.\n\\end{defn}\n\n\\qquad There are some basic propositions about the relation $\\models$. Since their proofs are straightforward and they can be found in many different textbooks, we are going to state these propositions without proof.\n\n\\begin{pro}\nSuppose that $\\M$ and $\\M\\p$ are models for $\\Li$ and $\\Li^{\\prime}$, respectively; that $\\Li\\subseteq\\Li\\p$; and that $\\M$ is the reduct of $\\M\\p$ to $\\Li$. Then for every world $w$ of $\\M$, for every valuation $h$ in $\\M$, if $\\varphi$ is a formula of $\\Li$ then:\n\n\\begin{center}\n $\\M,w \\vSs \\varphi$ iff  $\\M\\p,w \\vSs \\varphi$.\n\\end{center} \n\\end{pro}\n\n\\begin{pro}\nLet $\\M$ be a model for $\\Li$, $w$ a world of $\\M$, $h_{1}$ and $h_{2}$ valuations in $\\M$ and $\\varphi$ a formula of $\\Li$. If $h_{1}$ and $h_{2}$ agree on all the free variables of $\\varphi$, then\n\n\\begin{center}\n $\\M,w \\models_{h_{1}} \\varphi$ iff  $\\M,w \\models_{h_{2}} \\varphi$.\n\\end{center} \n\\end{pro}\n\n\\begin{defn}\nLet $\\M = \\strucA$ be a model for $\\Li$:\n\n\\begin{itemize} \n\\item $\\M$ is an \\textit{S5-model} iff $\\R$ is an equivalence relation.\n\\item $\\M$ is a \\textit{universal model} iff $\\R = \\W\\times \\W$.\n\n\\item $\\M$ is a \\textit{constant domain model} iff for every $w,v \\in \\W$, $\\barD_{w} = \\barD_{v}$.\n\\item $\\M$ is a \\textit{monotonic model} iff for every $w,v \\in \\W$, if $w\\R v$, then $\\barD_{w} \\subseteq \\barD_{v}$.  \n\\item $\\M$ is an \\textit{anti-monotonic model} iff for every $w,v \\in \\W$, if $w\\R v$, then $\\barD_{v} \\subseteq \\barD_{w}$.   \n\\item $\\M$ is a \\textit{locally constant domain model} iff for every $w,v \\in \\W$, if $w\\R v$, then $\\barD_{w} = \\barD_{v}$. \n\\end{itemize}\n\\end{defn}\n\n\\qquad Very often in books and papers on first-order modal logic the `Barcan Formula' is mentioned. 
The following explains the connection between locally constant domain models and this formula.  \n\n\\begin{defn}\nLet $\\M = \\strucA$ be a model for $\\Li$:\n\n\\begin{itemize} \n\\item We say that \\textit{$\\M$ satisfies the Barcan Formula} iff for every $\\varphi \\in Fml(\\Li)$ of the form $\\todo x \\Box \\psi \\impli \\Box \\todo x\\psi$, we have that $\\M \\models \\varphi$.\n\\item We say that \\textit{$\\M$ satisfies the Converse Barcan Formula} iff for every $\\varphi \\in Fml(\\Li)$ of the form $\\Box \\todo x\\psi \\impli \\todo x \\Box \\psi$, we have that $\\M \\models \\varphi$.\n\\end{itemize}\n\\end{defn}\n\n\\qquad By well-known equivalences of first-order modal logic, we have: \n\n\\begin{itemize} \n\\item[] $\\M$ satisfies the Barcan Formula iff for every $\\varphi \\in Fml(\\Li)$ of the form $\\Diamond \\ex x\\psi \\impli \\ex x \\Diamond \\psi$, we have that $\\M \\models \\varphi$.\n\\item[] $\\M$ satisfies the Converse Barcan Formula iff for every $\\varphi \\in Fml(\\Li)$ of the form $\\ex x \\Diamond \\psi \\impli \\Diamond \\ex x\\psi$, we have that $\\M \\models \\varphi$.\n\\end{itemize}\n\n\\begin{pro}\nLet $\\M = \\strucA$ be a model for $\\Li$:\n\\begin{enumerate}[(a)]\n\\item $\\M$ is an anti-monotonic model iff $\\M$ satisfies the Barcan Formula.\n\\item $\\M$ is a monotonic model iff $\\M$ satisfies the Converse Barcan Formula.\n\\item $\\M$ is a locally constant domain model iff $\\M$ satisfies the Barcan Formula and its converse.\n\\end{enumerate}\n\\end{pro}\n\n\\begin{proof}\n(a) ($\\Rightarrow$) Let $\\varphi \\in Fml(\\Li)$ be a formula of the form $\\todo x \\Box \\psi \\impli \\Box \\todo x\\psi$, $w \\in \\W$ and $h$ a valuation. If $\\M,w \\vSs \\todo x \\Box \\psi$, then for every $x$-variant $h\\p$ of $h$ at $w$, $\\M,w \\vSp \\Box \\psi$. Let $v$ be a member of $\\W$ such that $w\\R v$. By hypothesis, $\\barD_{v} \\subseteq \\barD_{w}$, so every $x$-variant $h\\p$ of $h$ at $v$ is an $x$-variant $h\\p$ of $h$ at $w$, thus $\\M,v \\vSs \\todo x\\psi$. Since $v$ was arbitrarily chosen, $\\M,w \\vSs \\Box \\todo x \\psi$, and hence $\\M,w \\vSs \\varphi$.\n\n\\qquad ($\\Leftarrow$) Suppose that $\\M$ satisfies the Barcan Formula and \n$\\M$ is not an anti-monotonic model; then there are $w,v \\in \\W$ such that $w\\R v$ and $\\barD_{v} \\nsubseteq \\barD_{w}$. Hence, there is an $a \\in \\D$ such that $a \\in \\barD_{v}$ and $a \\notin \\barD_{w}$. Then for a valuation $h$ such that $h(y) = a$, $\\M,v \\vSs \\ex x (x =y)$; and, since $w\\R v$, $\\M,w \\vSs \\Diamond \\ex x (x =y)$.  By hypothesis, $\\M,w \\models \\Diamond \\ex x (x=y) \\impli \\ex x \\Diamond (x =y)$. So, in particular, $\\M,w \\vSs \\Diamond \\ex x (x=y) \\impli \\ex x \\Diamond (x =y)$; hence, $\\M,w \\vSs \\ex x \\Diamond (x =y)$. Then, there is an $x$-variant $h\\p$ of $h$ at $w$ such that $\\M,w \\vSp \\Diamond (x =y)$; so there is a $w\\p \\in \\W$ such that $w\\R w\\p$ and $\\M,w\\p \\vSp (x=y)$, hence $h\\p(x) =h\\p(y)$. Since $h\\p(x) \\in \\barD_{w}$, $a \\in \\barD_{w}$; a contradiction. Therefore, if $\\M$ satisfies the Barcan Formula, then $\\M$ is an anti-monotonic model.\n\n\\qquad (b) ($\\Rightarrow$) Let $\\varphi \\in Fml(\\Li)$ be a formula of the form $\\Box \\todo x\\psi \\impli \\todo x \\Box \\psi$, $w \\in \\W$ and $h$ a valuation. If $\\M,w \\vSs \\Box \\todo x\\psi$, then let $v$ be a member of $\\W$ such that $w\\R v$; so $\\M,v \\vSs \\todo x\\psi$. Then, for every $x$-variant $h\\p$ of $h$ at $v$, $\\M,v \\vSp \\psi$. 
By hypothesis, $\\barD_{w} \\subseteq \\barD_{v}$; therefore every $x$-variant $h\\p$ of $h$ at $w$ is an $x$-variant $h\\p$ of $h$ at $v$, thus $\\M,v \\vSp \\psi$ for every $x$-variant $h\\p$ of $h$ at $w$. Since $v$ was arbitrarily chosen, $\\M,w \\vSs \\Box \\psi$ for every $x$-variant $h\\p$ of $h$ at $w$. So $\\M,w \\vSs \\todo x \\Box \\psi$ and hence $\\M,w \\vSs \\varphi$.\n\n\\qquad ($\\Leftarrow$) Suppose that $\\M$ satisfies the Converse Barcan Formula and $\\M$ is not a monotonic model; then there are $w,v \\in \\W$ such that $w\\R v$ and $\\barD_{w} \\nsubseteq \\barD_{v}$. Hence, there is an $a \\in \\D$ such that $a \\in \\barD_{w}$ and $a \\notin \\barD_{v}$. So for a valuation $h$ such that $h(x)=a$, $\\M,v \\vSs \\todo y (y \\neq x)$, and since $w\\R v$, $\\M,w \\vSs \\Diamond \\todo y (y\\neq x)$ and so $\\M,w \\vSs \\ex x \\Diamond \\todo y (y\\neq x)$. By hypothesis, $\\M,w\\models \\ex x \\Diamond \\todo y (y\\neq x) \\impli \\Diamond \\ex x \\todo y (y\\neq x)$. So, in particular, $\\M,w \\vSs \\Diamond \\ex x \\todo y (y\\neq x)$. Thus there is a $w\\p \\in \\W$ such that $w\\R w\\p$ and $\\M,w\\p \\vSs \\ex x \\todo y (y\\neq x)$; this is a contradiction, since no world can satisfy $\\ex x \\todo y (y\\neq x)$ (we may always take $y$ equal to $x$). Therefore, if $\\M$ satisfies the Converse Barcan Formula, then $\\M$ is a monotonic model.\n\n\\qquad (c) The result follows directly from (a) and (b). \n\\end{proof}\n\n\\qquad Strictly speaking, both the Barcan Formula and the Converse Barcan Formula are not formulas but formula schemes. So it is natural to ask if there is a formula which has the same `expressive power' as the Barcan Formula and the Converse Barcan Formula. In fact, for S5-models we can find such a formula.\n\n\\begin{pro}\nLet $\\M = \\strucA$ be an S5-model for $\\Li$. Then:\n\\begin{center}\n$\\M \\models \\Box \\todo x \\Box \\ex y (y=x)$ iff $\\M$ satisfies the Barcan Formula and its converse.\n\\end{center}\n\\end{pro}\n\n\n\\begin{proof}\n($\\Rightarrow$) Let $w$ and $v$ be members of $\\W$ such that $w \\R v$. If $a \\in \\barD_{w}$, then since $\\M,w \\models \\Box\\todo x \\Box \\ex y (y=x)$ and $w\\R w$, we have that $\\M,w \\models \\todo x \\Box \\ex y (y=x)$. In particular, for a valuation $h$ such that $h(x) = a$, $\\M,w \\vSs \\Box \\ex y (y=x)$. Then, $\\M,v \\vSs \\ex y (y=x)$. So there is a $y$-variant $h\\p$ of $h$ at $v$ such that $\\M,v \\vSp y=x$, thus $h\\p (y) = h\\p (x)$ and so $a \\in \\barD_{v}$. Hence, $\\barD_{w} \\subseteq \\barD_{v}$. We can prove that $\\barD_{v} \\subseteq \\barD_{w}$ in a similar way. Therefore, $\\M$ is a locally constant domain model; by Proposition 3, $\\M$ satisfies the Barcan Formula and its converse.\n\n\\qquad ($\\Leftarrow$) Suppose that $\\M$ satisfies the Barcan Formula and its converse and there is a $w \\in \\W$ such that $\\M,w \\not\\models \\Box \\todo x \\Box \\ex y (y=x)$. So, for some valuation $h$, $\\M,w \\nvSs \\Box \\todo x \\Box \\ex y (y=x)$. By Proposition 3, $\\M$ is a locally constant domain model; and by equivalences of first-order modal logic, $\\M,w \\vSs \\Diamond \\ex x \\Diamond \\todo y (y \\neq x)$. Hence there is a $v \\in \\W$ such that $w\\R v$ and $\\M,v \\vSs \\ex x \\Diamond \\todo y (y \\neq x)$. So there is an $x$-variant $h\\p$ of $h$ at $v$ such that $\\M,v \\vSp \\Diamond \\todo y (y \\neq x)$. Then there is a $w\\p \\in \\W$ such that $v\\R w\\p$ and $\\M,w\\p \\vSp \\todo y (y \\neq x)$. 
Therefore, $h\\p(x) \\in \\barD_{v}\\backslash\\barD_{w\\p}$;  contradicting the assumption that $\\M$  is a locally constant domain model.  \n\\end{proof}\n\n\\section{First-order S5: two versions}\n\n\\qquad Before we advance, we need to address some technical details concerning S5-models. In order to save time we are going to skip the proofs of the propositions in this section.  \n\n\\qquad First, since $\\R$ is an equivalence relation in an S5-model, all the different notions of monotonic, anti-monotonic and locally constant domain model become equivalent when we work with an S5-model. Therefore, we shall only distinguish between locally constant domain models and \\textit{varying domain models} (models with no restriction on the domains). \n\n\\qquad Second, the distinction between constant domain models and locally constant domain models can be dropped. Of course, as mathematical structures constant domain models and locally constant domain models are very different objects. But from the point of view of modal formulas they are the same. The following proposition states this fact more clearly:\n\n\\begin{pro}\nLet $\\varphi$ be a formula of $\\Li$. $\\varphi$ is valid in the class of constant domain models for $\\Li$ iff $\\varphi$ is valid in the class of locally constant domain models for $\\Li$.\n\\end{pro}\n\n\n\\qquad Third, sometimes both for technical and theoretical reasons it is more useful to deal with universal models instead of S5-models. And as before, although they are different mathematical structures, from the point of view of the valid formulas we can take them as the same:\n\n\\begin{pro}\nLet $\\varphi$ be a formula of $\\Li$. $\\varphi$ is valid in the class of universal models for $\\Li$ iff $\\varphi$ is valid in the class of S5-models for $\\Li$. \n\\end{pro}\n\n\n\\qquad We can now define the two main kinds of models that we are going to work with. Propositions 5 and 6 serve to show the non-arbitrariness of the following definition and to connect it with the results of the previous section. \n\n\\begin{defn}\nFor a fixed language $\\Li$ we say that:\n\\begin{itemize}\n\n\\item a \\textit{model for first-order S5 with constant domains}, denoted FOS5-model, is a universal and constant domain model.\n\n\\item a \\textit{model for first-order S5 with varying domains}, denoted FOS5V-model, is a universal and varying domain model.\n\t\n\\end{itemize}\n\t\n\t\n\\end{defn}\n\n\\qquad The following definitions apply both to FOS5 and FOS5V models; to avoid duplication of definitions we use L as a variable for FOS5 and FOS5V.  From now on, when dealing with FOS5V-models we omit the accessibility relation, and when working with FOS5-models we omit the $\\barD$ function too.  \n\n\\pagebreak\n\\begin{defn}\nLet $\\Li$ be a language and let $\\{\\varphi\\}, \\{\\psi\\}$ and $\\Gamma$ be sets of sentences of $\\Li$:\n\n\\begin{itemize} \n\\item $\\varphi$ is \\textit{L-valid}, in symbols $\\vSL \\varphi$, iff $\\varphi$ is valid in the class of L-models. We say that $\\varphi$ is \\textit{L-satisfiable} iff there is an L-model $\\M$ and a world $w$ of $\\M$ such that $\\M,w \\models \\varphi$. And we say that $\\varphi$ is \\textit{L-unsatisfiable} iff $\\nao \\varphi$ is L-valid.   \n\\item $\\varphi$ is a \\textit{consequence of $\\Gamma$ in L}, in symbols $\\Gamma \\vSL \\varphi$, iff for every pair $\\M,w$, if $\\M,w$ is an L-model for $\\Gamma$, then $\\M,w \\models \\varphi$. 
Instead of $\\{\\psi\\} \\vSL \\varphi$ we write $\\psi \\vSL \\varphi$.\n\\end{itemize}\n\\end{defn}\n\n\\qquad For example, from propositional modal logic it is well-known that:\n\n\\begin{center}\n$\\vSL \\Box (\\varphi \\impli \\psi) \\impli (\\Box \\varphi \\impli \\Box \\psi)$\\\\\n$\\vSL \\Box \\varphi \\impli \\varphi$\\\\\n$\\vSL \\Box \\varphi \\impli \\Box \\Box \\varphi$\\\\\n$\\vSL \\nao \\Box \\varphi \\impli \\Box \\nao \\Box \\varphi$\\\\\n\\end{center}\n\n\n\\qquad And using what we have seen so far, we have: \n\n\\begin{center}\n$ \\vS \\Box \\todo x \\Box \\ex y (y=x) \\impli (\\Box \\todo x Px \\see  \\todo x \\Box Px)$\\\\\n$\\vSB  \\Box \\todo x Px \\see  \\todo x \\Box Px$\\\\\n\\end{center}\n\n\\qquad These last two examples are just instances of a more general fact that is an immediate consequence of Propositions 3 and 4.\n\n\\begin{pro}\nA sentence $\\varphi$ of $\\Li$ is FOS5-valid iff $\\Box \\todo x \\Box \\ex y (y=x) \\impli \\varphi$ is FOS5V-valid. \n\\end{pro}\n\nNow we have all the ingredients to present a notion of logic.\n\n\\begin{defn}\nThe logic L is a tuple $\\bl Lan, \\vSL\\br$ where $Lan$ is a function which associates to every language $\\Li$ a set $sen(\\Li)$, the set of sentences of $\\Li$; and $\\vSL$ is the relation as defined above.\n\\end{defn}\n\n\\qquad One last basic topic worth noting is that we can define a unary relation symbol $E$ such that $Ex$ expresses that the individual denoted by $x$ exists in the world in question. The definition of this relation, often called the \\textit{existence predicate}, is:\n\n\\begin{center}\n$Ex:= \\ex y (y=x) $\n\\end{center}\n\n\\qquad Obviously, $\\M,w \\vSs Ex$ iff $h(x) \\in \\barD_{w}$. The following proposition states some useful facts about the relation $\\models$ and $Ex$. 
\n\n\\begin{pro}\nFor a formula $\\varphi$ of $\\Li$ such that $fv(\\varphi) = \\{x_1, \\dots, x_n\\}$, let $E\\vec{x}$ be an abbreviation of $Ex_{1} \\e \\dots \\e Ex_{n}$, then:\n\\begin{itemize}\n\\item $\\vS \\todo \\vec{x}\\varphi$ iff $\\vS (E\\vec{x} \\impli \\varphi)$.\n\\item $\\vSB \\todo \\vec{x}\\varphi$ iff $\\vSB \\varphi$.\n\\item If $\\vS\\varphi$, then $\\vS \\todo \\vec{x}\\varphi$.\n\\end{itemize}\n\\end{pro}\n\n
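\\qquad To see the difference between constant and varying domains in a concrete case, here is a small countermodel (our own toy example, not taken from the references above). Let $\\Li = \\{P\\}$ with $P$ unary, and let $\\M$ be the FOS5V-model with $\\W = \\{w,v\\}$, $\\barD_{w} = \\{a\\}$, $\\barD_{v} = \\{a,b\\}$ and $\\I(P,w) = \\I(P,v) = \\{a\\}$. For any valuation $h$ we have $\\M,w \\vSs \\todo x \\Box Px$, since $a$ satisfies $P$ at both worlds. But $\\M,v \\nvSs \\todo x Px$, because $b \\in \\barD_{v}$ does not satisfy $P$ at $v$; hence $\\M,w \\nvSs \\Box \\todo x Px$. So this instance of the Barcan Formula fails in an FOS5V-model, while by Proposition 3 it is valid in every FOS5-model.\n\n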
{"text": "\\subsection{Polynomials}\\label{subsec:polynomials}\n\n\\begin{remark}\\label{rem:polynomials_vs_polynomial_functions}\n  Polynomial are a purely algebraic framework for describing certain well-behaved functions. The link between a polynomial and a function is given by \\fullref{def:polynomial_function}, but in general these are different concepts. The link between a polynomial and its conventional symbolic expression is given by \\fullref{def:algebra_of_polynomials}.\n\n  \\Fullref{thm:polynomial_embedding_behavior} allow us to ignore this distinction in most applications of polynomials outside of algebra.\n\\end{remark}\n\n\\begin{definition}\\label{def:polynomial}\\mcite[149]{Knapp2016BasicAlgebra}\n  A \\term{polynomial} \\( p \\) over \\( R \\) is a sequence of members of \\( R \\) called \\term{coefficients},\n  \\begin{equation*}\n    p \\coloneqq ( a_0, a_1, a_2, \\ldots ) \\subseteq R,\n  \\end{equation*}\n  such that only finitely many coefficients are nonzero. We can regard \\( p \\) as a \\hyperref[def:topological_net]{net} over \\( \\BbbZ_{\\geq 0} \\) with finite \\hyperref[def:function_support]{support}.\n\n  \\begin{thmenum}\n    \\thmitem{def:polynomial/zero_polynomial} An exception to most rules is the \\term{zero polynomial}, all of whose coefficients are zeroes.\n\n    \\thmitem{def:polynomial/leading_coefficient} The last nonzero coefficient of a nonzero polynomial is called the \\term{leading coefficient} and is denoted by \\( \\op{LC}(p) \\). If \\( \\op{LC}(p) = 1 \\), we call the polynomial \\term{2}.\n\n    \\thmitem{def:polynomial/degree} The zero-based index of the leading coefficient is called the \\term{degree} of the polynomial as is denoted by \\( \\deg(p) \\). That is, if only \\( a_0 \\) is nonzero, then \\( \\deg(p) = a_0 \\). The degree of the zero polynomial is left undefined.\n\n    \\thmitem{def:polynomial/monomial} A \\term{monomial} is a polynomial with only one nonzero element.\n\n    \\thmitem{def:polynomial/degree_names} Polynomials of degree \\( n \\) with special names include\n    \\begin{itemize}\n      \\item \\term{constant} for the zero polynomial or if \\( n = 0 \\).\n      \\item \\term{linear} if \\( n = 1 \\).\n      \\item \\term{quadratic} if \\( n = 2 \\).\n      \\item \\term{cubic} if \\( n = 3 \\).\n      \\item \\term{quartic} if \\( n = 4 \\).\n      \\item \\term{quintic} if \\( n = 5 \\).\n    \\end{itemize}\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{definition}\\label{def:algebra_of_polynomials}\n  Denote by \\( R[X] \\) the set of polynomials over \\( R \\). Note that it is bijective with \\( c_{00} \\), and we can inherit pointwise addition and scalar multiplication from there. 
That is,\n\n  \\begin{thmenum}\n    \\thmitem{def:algebra_of_polynomials/addition} We define \\term{polynomial addition} pointwise as\n    \\begin{balign*}\n       & +: R[X] \\times R[X] \\to R[X]                                                     \\\\\n       & (a_0, a_1, \\ldots) + (b_0, b_1, \\ldots) \\coloneqq (a_0 + b_0, a_1 + b_1, \\ldots)\n    \\end{balign*}\n\n    \\thmitem{def:algebra_of_polynomials/scalar_multiplication} We define \\term{scalar multiplication} as\n    \\begin{balign*}\n       & \\cdot: R \\times R[X] \\to R[X]                            \\\\\n       & t \\cdot (a_0, a_1, \\ldots) \\coloneqq (t a_0, t a_1, \\ldots)\n    \\end{balign*}\n\n    \\thmitem{def:algebra_of_polynomials/polynomial_multiplication} In order to make \\( R[X] \\) into an \\hyperref[def:algebra_over_ring]{algebra}, we define \\term{Cauchy multiplication} \\( \\odot: R[X] \\times R[X] \\to R[X] \\) as follows: if \\( (a_0, a_1, \\ldots) \\) and \\( (b_0, b_1, \\ldots) \\) are polynomials, their product is defined to be the polynomial with coefficients\n    \\begin{equation}\n      c_l \\coloneqq \\sum_{i+j=l} a_i b_j, l = 0, 1, \\ldots\n    \\end{equation}\n\n    Polynomial multiplication is bilinear, associative and commutative.\n  \\end{thmenum}\n\n  We will implicitly use the canonical embedding \\( \\iota: R \\to R[X] \\), which sends an element \\( r \\) of \\( R \\) into the constant polynomial \\( p(X) \\coloneqq r \\).\n\n  We usually refer to \\( R[X] \\) as the \\term{polynomial ring}, especially since scalar multiplication is the same as multiplication by constants.\n\\end{definition}\n\n\\begin{remark}\\label{rem:polynomial_symbolic_expression}\n  We choose a \\hyperref[def:formal_language]{symbol}, usually \\( X \\), called an \\term{indeterminate}, and give it \\hyperref[def:first_order_semantics/satisfiability]{semantics} using the \\hyperref[def:polynomial/monomial]{monomial}\n  \\begin{equation*}\n    X \\coloneqq (0, 1, 0, 0, \\ldots)\n  \\end{equation*}\n  (the assignment should be understood as \\enquote{defining the interpretation of \\( X \\) in a certain first-order language}).\n\n  Using the definition of multiplication, we see that the coefficients \\( c_0, c_1, \\ldots \\) of \\( X^2 = X \\cdot X \\) can be expressed in terms of the coefficients \\( a_0, a_1, \\ldots \\) of \\( X \\) as\n  \\begin{balign*}\n    c_0 & = a_0 \\cdot a_0 = 0                                                                 \\\\\n    c_1 & = a_0 \\cdot a_1 + a_1 \\cdot a_0 = 0 + 0 = 0                                         \\\\\n    c_2 & = a_0 \\cdot a_2 + a_1 \\cdot a_1 + a_2 \\cdot a_0 = 0 + 1 + 0 = 1                     \\\\\n    c_3 & = a_0 \\cdot a_3 + a_1 \\cdot a_2 + a_2 \\cdot a_1 + a_3 \\cdot a_0 = 0 + 0 + 0 + 0 = 0 \\\\\n    c_4 & = \\cdots = 0                                                                        \\\\\n    c_5 & = \\cdots = 0                                                                        \\\\\n    \\vdots\n  \\end{balign*}\n\n  Proceeding by induction, we see that \\( X^k \\) corresponds to the sequence\n  \\begin{equation*}\n    (\\underbrace{0, 0, \\ldots, 0, 0}_{k \\text{ times}}, 1, 0, 0, \\ldots).\n  \\end{equation*}\n\n  We define \\( X^0 \\coloneqq 1 \\).\n\n  It is conventional to then write a polynomial of degree \\( n \\) as the \\hyperref[def:formal_language]{expression}\n  \\begin{equation}\\label{eq:rem:polynomial_symbolic_expression}\n    p(X) = a_n X^n + a_{n-1} X^{n-1} + \\ldots + a_2 X^2 + a_1 X + a_0 = \\sum_{i=0}^n a_i X^i\n  \\end{equation}\n  and the zero polynomial as\n  \\begin{equation*}\n    p(X) \\coloneqq 0.\n  \\end{equation*}\n\n  We use capital letters to highlight that this is not a function - see \\fullref{def:polynomial_function}. We say that \\( p(X) \\) is a polynomial in one indeterminate.\n\\end{remark}\n\n
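As a quick worked instance of the Cauchy product in \\fullref{def:algebra_of_polynomials/polynomial_multiplication} (the computation and the label below are ours):\n\\begin{example}\\label{ex:cauchy_product_small}\n  Over \\( \\BbbZ \\), let \\( p = (1, 1, 0, \\ldots) \\) and \\( q = (2, 1, 0, \\ldots) \\), i.e. \\( p(X) = X + 1 \\) and \\( q(X) = X + 2 \\). Then\n  \\begin{balign*}\n    c_0 & = a_0 b_0 = 1 \\cdot 2 = 2, \\\\\n    c_1 & = a_0 b_1 + a_1 b_0 = 1 + 2 = 3, \\\\\n    c_2 & = a_0 b_2 + a_1 b_1 + a_2 b_0 = 0 + 1 + 0 = 1,\n  \\end{balign*}\n  and \\( c_l = 0 \\) for \\( l > 2 \\). Thus \\( p \\odot q = (2, 3, 1, 0, \\ldots) \\), which matches \\( (X + 1)(X + 2) = X^2 + 3X + 2 \\).\n\\end{example}\n\n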
\\begin{proposition}\\label{thm:def:polynomial_degree/properties}\n  The polynomial degree has the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:polynomial_degree/properties/sum} For nonzero polynomials \\( p, q \\in R[X] \\) with \\( p \\neq -q \\), we have\n    \\begin{equation*}\n      \\deg (p + q) \\leq \\max \\{ \\deg p, \\deg q \\}.\n    \\end{equation*}\n\n    \\thmitem{thm:def:polynomial_degree/properties/product} For nonzero polynomials \\( p, q \\in R[X] \\) with \\( pq \\neq 0 \\), we have\n    \\begin{equation*}\n      \\deg (pq) \\leq \\deg p + \\deg q,\n    \\end{equation*}\n    with equality holding if \\( R \\) is an integral domain.\n\n    The requirement that \\( pq \\neq 0 \\) may also be dropped if \\( R \\) is an integral domain as per \\fullref{thm:polynomials_over_integral_domain_are_integral_domain}.\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  Fix nonzero polynomials\n  \\begin{align}\n    p(X) &\\coloneqq \\sum_{k=0}^n a_k X^k, \\label{eq:thm:def:polynomial_degree/properties/p} \\\\\n    q(X) &\\coloneqq \\sum_{k=0}^m b_k X^k. \\label{eq:thm:def:polynomial_degree/properties/q}\n  \\end{align}\n\n  \\SubProofOf{thm:def:polynomial_degree/properties/sum} We have assumed that \\( p \\neq -q \\) since otherwise \\( p + q = 0 \\) and \\( \\deg(p + q) \\) is undefined. Thus, there exists at least one index \\( k \\) such that \\( a_k + b_k \\neq 0 \\). Denote by \\( k_0 \\) the largest such index (only finitely many coefficients are nonzero). Then\n  \\begin{equation*}\n    a_k + b_k = 0 \\T{for} k > k_0.\n  \\end{equation*}\n\n  Therefore, \\( \\deg(p + q) = k_0 \\). Note that \\( k_0 \\) cannot exceed both \\( \\deg p \\) and \\( \\deg q \\), because beyond both degrees all coefficients of \\( p \\) and \\( q \\) vanish. Thus, \\( k_0 \\leq \\max\\{ \\deg p, \\deg q \\} \\).\n\n  \\SubProofOf{thm:def:polynomial_degree/properties/product} By assumption \\( pq \\neq 0 \\), so \\( pq \\) has a leading coefficient \\( c_k \\) with \\( k = \\deg(pq) \\). By the definition of multiplication, \\( c_l = 0 \\) for every \\( l > \\deg p + \\deg q \\). Thus,\n  \\begin{equation*}\n    k = \\deg (pq) \\leq \\deg p + \\deg q.\n  \\end{equation*}\n\n  If \\( R \\) is an integral domain, the product of nonzero elements is nonzero. Thus, the leading coefficient \\( \\op{LC}(pq) = \\op{LC}(p)\\op{LC}(q) \\) is nonzero and\n  \\begin{equation*}\n    \\deg(pq) = \\deg p + \\deg q.\n  \\end{equation*}\n\\end{proof}\n\n\\begin{proposition}\\label{thm:polynomial_ring_universal_property}\\mcite[150]{Knapp2016BasicAlgebra}\n  For any nontrivial commutative unital ring \\( T \\), any unital ring homomorphism \\( \\varphi: R \\to T \\) and any \\( t \\in T \\), there exists a unique homomorphism \\( \\Phi_t: R[X] \\to T \\) such that \\( \\Phi_t(X) = t \\) and the following diagram commutes:\n\n  \\begin{alignedeq}\\label{thm:polynomial_ring_universal_property/diagram}\n    \\includegraphics[page=1]{output/thm__polynomial_ring_universal_property.pdf}\n  \\end{alignedeq}\n\n  We call the map \\( \\Phi_t \\) a \\term{substitution homomorphism}.\n\n  If \\( T \\) is a superring of \\( R \\), we call \\( \\Phi_t \\) an \\term{evaluation} at \\( t \\in T \\). 
We also write\n  \\begin{equation}\n    R[t] \\coloneqq \\Phi_t(R[X]).\n  \\end{equation}\n\n  This allows us to \\term{adjoin} elements from a superring to a subring. See \\fullref{ex:polynomial_evaluation_gaussian_integers}.\n\\end{proposition}\n\\begin{proof}\n  We will first prove uniqueness. Let \\( \\Phi_t: R[X] \\to T \\) and \\( \\Psi_t: R[X] \\to T \\) be two such homomorphisms. For every \\( r \\in R \\) we have\n  \\begin{equation*}\n    \\Phi_t(\\iota(r)) = \\varphi(r) = \\Psi_t(\\iota(r)),\n  \\end{equation*}\n  and by assumption \\( \\Phi_t(X) = t = \\Psi_t(X) \\).\n\n  Now, every polynomial can be written as a finite sum \\( p = \\sum_{i=0}^n \\iota(a_i) X^i \\). Since both maps are homomorphisms of rings,\n  \\begin{equation*}\n    \\Phi_t(p) = \\sum_{i=0}^n \\varphi(a_i) t^i = \\Psi_t(p).\n  \\end{equation*}\n\n  Thus, \\( \\Phi_t = \\Psi_t \\).\n\n  Now we will show existence. Take a polynomial\n  \\begin{equation*}\n    p = (a_0, a_1, \\ldots, a_n, 0, 0, \\ldots).\n  \\end{equation*}\n\n  Define\n  \\begin{equation*}\n    \\Phi_t(p) \\coloneqq \\sum_{i=0}^n \\varphi(a_i) t^i.\n  \\end{equation*}\n\n  A direct computation with the Cauchy product shows that \\( \\Phi_t: R[X] \\to T \\) is a homomorphism, and \\( \\Phi_t((0, 1, 0, \\ldots)) = t \\). Therefore, we have proven existence.\n\\end{proof}\n\n\\begin{example}\\label{ex:polynomial_evaluation_gaussian_integers}\n  The Gaussian \\hyperref[def:gaussian_integers]{integers} \\( \\BbbZ[i] \\) are complex numbers with integer real and complex parts. We will now motivate this notation.\n\n  Consider the substitution map \\( \\Phi_i: \\BbbZ[X] \\to \\BbbC \\) for the imaginary unit given by \\fullref{thm:polynomial_ring_universal_property}. Let \\( p(X) \\in \\BbbZ[X] \\). Then\n  \\begin{equation*}\n    p(i)\n    =\n    \\Phi_i(p)\n    =\n    \\sum_{j=0}^n a_j i^j\n    =\n    \\sum_{\\rem(j, 4) = 0}^n a_j - \\sum_{\\rem(j, 4) = 2}^n a_j + i \\left(\\sum_{\\rem(j, 4) = 1}^n a_j - \\sum_{\\rem(j, 4) = 3}^n a_j \\right).\n  \\end{equation*}\n\n  This is clearly a Gaussian integer.\n\n  Now fix a Gaussian integer \\( z = a + bi \\). It can be given by the polynomial\n  \\begin{equation*}\n    p_z(X) \\coloneqq a + bX.\n  \\end{equation*}\n\n  It remains to show that multiplication in \\( \\BbbZ[X] \\) is compatible with multiplication in \\( \\BbbC \\). But complex \\hyperref[def:set_of_complex_numbers]{multiplication} is defined to be compatible with the notation \\( a + bi \\), that is,\n  \\begin{equation*}\n    (a + bi) (c + di)\n    =\n    ac + ibc + iad - bd\n    =\n    (ac - bd) + i(bc + ad).\n  \\end{equation*}\n\n  Thus, the Gaussian integers are precisely the homomorphic image of \\( \\BbbZ[X] \\) under \\( \\Phi_i \\).\n\\end{example}\n\n\\begin{proposition}\\label{thm:polynomial_ring_units}\n  The units of the polynomial ring \\( R[X] \\) are precisely the units of \\( R \\).\n\\end{proposition}\n\\begin{proof}\n  Any unit of \\( R \\) is obviously a unit of \\( R[X] \\).\n\n  For the converse, fix a nonzero constant polynomial \\( p(X) = r \\). In order for it to have an inverse \\( q(X) \\), we should have\n  \\begin{equation*}\n    1 = p(X) q(X) = r q(X),\n  \\end{equation*}\n  which can only happen if \\( q(X) \\) is a constant and a multiplicative inverse of \\( r \\).\n\\end{proof}\n\n
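For instance (our own quick illustration): in \\( \\BbbZ[X] \\) the only units are the constant polynomials \\( 1 \\) and \\( -1 \\), since these are the only units of \\( \\BbbZ \\), while in \\( \\BbbQ[X] \\) every nonzero constant polynomial is a unit.\n\n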
\\begin{proposition}\\label{thm:polynomial_algebra_basis}\n  The polynomial \\hyperref[def:algebra_of_polynomials]{algebra} \\( R[X] \\) has a Hamel \\hyperref[def:left_module_hamel_basis]{basis} consisting of all \\hyperref[def:polynomial/monomial]{monomials}\n  \\begin{equation*}\n    B \\coloneqq \\{ 1, X, X^2, X^3, \\ldots \\}.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  By \\fullref{def:algebra_of_polynomials/addition}, every polynomial can easily be represented as a sum of finitely many monomials.\n\\end{proof}\n\n\\begin{definition}\\label{def:polynomial_free_module}\n  It is convenient, especially in \\hyperref[sec:approximation_theory]{approximation theory}, to work with the free module of polynomials of degree at most \\( n \\). We define\n  \\begin{equation*}\n    \\pi_n(R[X]) \\coloneqq \\linspan \\{ 1, X, \\ldots, X^{n-1}, X^n \\}.\n  \\end{equation*}\n\n  We will use \\( \\pi_n \\) when the ring is clear from the context.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:polynomials_over_integral_domain_are_integral_domain}\n  If \\( R \\) is an integral domain, the polynomial ring \\( R[X] \\) is also an integral domain.\n\\end{proposition}\n\\begin{proof}\n  Polynomial multiplication inherits its commutativity from multiplication in \\( R \\). It remains only to show that \\( R[X] \\) has no zero divisors.\n\n  Fix two polynomials \\( p, q \\in R[X] \\). If either of them is zero, their product \\( pq \\) is zero.\n\n  Assume that both are nonzero polynomials. The leading coefficient of their product is, by definition of multiplication, \\( \\op{LC}(pq) = \\op{LC}(p) \\op{LC}(q) \\). Since \\( R \\) has no zero divisors, then \\( \\op{LC}(pq) \\neq 0 \\) and thus \\( pq \\) is a nonzero polynomial.\n\n  Therefore, \\( R[X] \\) is an integral domain.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:polynomials_over_unique_factorization_domain_are_unique_factorization_domain}\n  If \\( R \\) is a unique factorization domain, the polynomial ring \\( R[X] \\) is also a unique factorization domain.\n\\end{proposition}\n\n\\begin{theorem}[Euclidean division of polynomials]\\label{thm:euclidean_division_of_polynomials}\\mcite[10]{Knapp2016BasicAlgebra}\n  Let \\( a, b \\in R[X] \\) and \\( b \\) be \\hyperref[def:polynomial/leading_coefficient]{monic} (in particular, \\( b \\neq 0 \\)). Then there exist unique polynomials \\( q, r \\in R[X] \\), where \\( r \\) is either zero or \\( \\deg r < \\deg b \\), such that\n  \\begin{equation*}\n    a = bq + r.\n  \\end{equation*}\n\\end{theorem}\n\\begin{proof}\n  Let \\( a, b \\in R[X] \\) and \\( b \\neq 0 \\). If \\( a = 0 \\) or \\( \\deg a < \\deg b \\), define\n  \\begin{balign*}\n    q & \\coloneqq 0, \\\\\n    r & \\coloneqq a.\n  \\end{balign*}\n\n  In this case, \\( \\deg r = \\deg a < \\deg b \\).\n\n  Suppose that \\( \\deg b \\leq \\deg a \\). We will use proof by induction on \\( \\deg a \\). If \\( \\deg a = 0 \\), obviously \\( \\deg b = 0 \\) (thus \\( b = 1 \\)) and we define\n  \\begin{balign*}\n    q & \\coloneqq a, \\\\\n    r & \\coloneqq 0.\n  \\end{balign*}\n\n  In this case, \\( r \\) is the zero polynomial.\n\n  Assume the result holds for \\( \\deg a < n \\) and let \\( \\deg a = n, \\deg b = m \\). 
Then there exists a polynomial \\( \\hat a(X) \\) that is either zero or \\( \\deg \\hat a = n - 1 \\) such that\n  \\begin{equation*}\n    a(X) = a_n X^n + \\hat a(X).\n  \\end{equation*}\n\n  Analogously, we find \\( \\hat b(X) \\) that is either zero or \\( \\deg \\hat b = m - 1 \\) such that\n  \\begin{equation*}\n    b(X) = X^m + \\hat b(X).\n  \\end{equation*}\n\n  Thus,\n  \\begin{balign*}\n    \\hat r(X)\n     & \\coloneqq\n    a(X) - b(X) a_n X^{n-m}\n    =            \\\\ &=\n    a_n X^n + \\hat a(X) - (X^m + \\hat b(X)) a_n X^{n-m}\n    =            \\\\ &=\n    a_n X^n + \\hat a(X) - a_n X^n - \\hat b(X) a_n X^{n-m}\n    =            \\\\ &=\n    \\hat a(X) - \\hat b(X) a_n X^{n-m}.\n  \\end{balign*}\n\n  Therefore, \\( \\hat r(X) \\) is either the zero polynomial (in which case we define \\( r(X) \\coloneqq \\hat r(X) \\)) or \\( \\deg \\hat r \\leq n - 1 \\). In the latter case, by the induction hypothesis we can divide \\( \\hat r \\) by \\( b \\) to obtain \\( \\hat q(X) \\) and \\( r(X) \\) such that\n  \\begin{equation*}\n    \\hat r(X) = b(X) \\hat q(X) + r(X),\n  \\end{equation*}\n  where \\( r = 0 \\) or \\( \\deg r < \\deg b \\).\n\n  Substitute into the definition of \\( \\hat r(X) \\):\n  \\begin{balign*}\n    \\hat r(X)                                         & = a(X) - b(X) a_n X^{n-m} \\\\\n    b(X) \\hat q(X) + r(X)                             & = a(X) - b(X) a_n X^{n-m} \\\\\n    b(X) \\left(\\hat q(X) + a_n X^{n-m} \\right) + r(X) & = a(X).\n  \\end{balign*}\n\n  Define\n  \\begin{equation*}\n    q(X) \\coloneqq \\hat q(X) + a_n X^{n-m}.\n  \\end{equation*}\n\n  We have obtained polynomials \\( r(X) \\) and \\( q(X) \\) where \\( r(X) \\) is either zero or \\( \\deg r < \\deg b \\).\n\n  It remains only to show uniqueness. Suppose that\n  \\begin{equation*}\n    a = bq + r = bq' + r'.\n  \\end{equation*}\n\n  If \\( r - r' \\) is the zero polynomial, so is \\( q - q' \\), and uniqueness follows.\n\n  If \\( r - r' \\) is not zero, then neither is \\( q - q' \\). Then \\( b(q - q') = -(r - r') \\) and\n  \\begin{equation*}\n    \\deg b + \\deg(q - q') = \\deg[b (q - q')] = \\deg(r - r') \\leq \\max(\\deg r, \\deg r') < \\deg b,\n  \\end{equation*}\n  which is a contradiction.\n\n  This proves uniqueness.\n\\end{proof}\n\n\\begin{corollary}\\label{thm:polynomials_over_field_are_euclidean_domain}\\mcite[10]{Knapp2016BasicAlgebra}\n  The polynomial \\hyperref[def:semiring/integral_domain]{ring} \\( \\BbbK[X] \\) over a field \\( \\BbbK \\) is an \\hyperref[def:semiring/euclidean_domain]{Euclidean} domain with \\( \\delta(p) \\coloneqq \\deg p \\). Furthermore, the remainder and quotient are unique.\n\\end{corollary}\n\\begin{proof}\n  By \\fullref{thm:polynomials_over_integral_domain_are_integral_domain}, \\( \\BbbK[X] \\) is an integral domain.\n\n  To show that it is Euclidean, fix two polynomials \\( a, b \\in \\BbbK[X] \\) with \\( b \\neq 0 \\). We use \\fullref{thm:euclidean_division_of_polynomials} to perform Euclidean division of \\( a \\) by the monic polynomial \\( \\frac {b} {\\op{LC}(b)} \\) and obtain polynomials \\( q, r \\), where either \\( r = 0 \\) or \\( \\deg r < \\deg b \\), such that\n  \\begin{equation*}\n    a = \\frac {b} {\\op{LC}(b)} q + r.\n  \\end{equation*}\n\n  Instead of dividing \\( b \\) by its leading coefficient \\( \\op{LC}(b) \\), we can divide \\( q \\) and thus obtain the required factorization.\n\\end{proof}\n\n
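As a small worked instance of \\fullref{thm:euclidean_division_of_polynomials} (the concrete numbers are ours): over \\( \\BbbZ \\), dividing \\( a(X) = X^2 + 1 \\) by the monic polynomial \\( b(X) = X - 1 \\) yields\n\\begin{equation*}\n  X^2 + 1 = (X - 1)(X + 1) + 2,\n\\end{equation*}\nso \\( q(X) = X + 1 \\) and \\( r(X) = 2 \\), with \\( \\deg r = 0 < 1 = \\deg b \\).\n\n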
\\begin{definition}\\label{def:polynomial_function}\n  Denote by \\( \\cat{Set}(R) \\) the ring of all functions over \\( R \\) (see \\fullref{thm:functions_over_ring_form_algebra}). Define the unital ring homomorphism\n  \\begin{balign*}\n     & \\Phi: R[X] \\to \\cat{Set}(R)                                                                      \\\\\n     & \\Phi((a_0, a_1, \\ldots, a_n, 0, 0, \\ldots)) \\coloneqq \\left( x \\to \\sum_{i=0}^n a_i x^i \\right),\n  \\end{balign*}\n  which constructs a \\term{polynomial function} from a polynomial. This map is not injective in general and multiple polynomials may be equivalent as \\hyperref[def:function]{functions}.\n\n  If we want to highlight that we are referring to a polynomial function rather than a polynomial \\( p(X) \\), we usually use a lowercase letter for the variable, i.e.\n  \\begin{equation*}\n    p(x) = \\sum_{i=0}^n a_i x^i.\n  \\end{equation*}\n\n  If the ring is \\hyperref[def:set_finiteness]{infinite}, \\fullref{thm:polynomial_embedding_behavior} allows us to set aside the difference between polynomials and polynomial functions and we usually identify the two when working over \\( \\BbbR \\) or \\( \\BbbC \\).\n\\end{definition}\n\\begin{proof}\n  It is obvious that \\( \\Phi \\) is a homomorphism of the additive groups of \\( R[X] \\) and \\( \\cat{Set}(R) \\).\n\n  We will prove that multiplication of polynomials corresponds to multiplication of polynomial functions. 
Take\n  \\begin{balign*}\n    p(X) & \\coloneqq \\sum_{i=0}^n a_i X^i, \\\\\n    q(X) & \\coloneqq \\sum_{j=0}^m b_j X^j.\n  \\end{balign*}\n\n  For their product \\( s(X) = \\sum_{i=0}^{n + m} c_i X^i \\) by definition we have\n  \\begin{equation*}\n    c_l = \\sum_{i+j=l} a_i b_j, l = 0, 1, \\ldots, n + m.\n  \\end{equation*}\n\n  We need to show that for any \\( r \\in R \\),\n  \\begin{equation*}\n    \\Phi_r(p) \\Phi_r(q) = \\Phi_r(s).\n  \\end{equation*}\n\n  Using associativity and commutativity of multiplication and distributivity of multiplication over addition, we obtain\n  \\begin{balign*}\n    \\Phi_r(p) \\Phi_r(q)\n     & =\n    \\left( \\sum_{i=0}^n a_i r^i \\right) \\left( \\sum_{j=0}^m b_j r^j \\right)\n    =    \\\\ &=\n    \\sum_{i=0}^n a_i r^i \\left( \\sum_{j=0}^m b_j r^j \\right)\n    =    \\\\ &=\n    \\sum_{i=0}^n \\sum_{j=0}^m a_i r^i b_j r^j\n    =    \\\\ &=\n    \\sum_{i=0}^n \\sum_{j=0}^m a_i b_j r^{i + j}\n    =    \\\\ &=\n    \\sum_{l=0}^{n + m} \\sum_{i+j=l} a_i b_j r^l\n    =    \\\\ &=\n    \\sum_{l=0}^{n + m} c_l r^l\n    =\n    \\Phi_r(s),\n  \\end{balign*}\n  where we have used that \\( a_i = 0, i > n \\) and \\( b_j = 0, j > m \\).\n\n  Thus, \\( \\Phi: R[X] \\to \\cat{Set}(R) \\) is indeed a homomorphism of rings.\n\\end{proof}\n\n\\begin{definition}\\label{def:formal_power_series}\n  If we extend \\fullref{def:polynomial} to allow for polynomials with infinitely many terms (that is, to allow the sequence to have infinitely many nonzero entries), we obtain a set \\( R\\Bracks{X} \\) which we call the \\term{formal power series} over \\( R \\).\n\n  Note that the operations in \\fullref{def:algebra_of_polynomials} are well-defined and still make \\( R\\Bracks{X} \\) into an algebra. Evaluation (see \\fullref{thm:polynomial_ring_universal_property} and \\fullref{def:polynomial_function}) is problematic, however, since algebraic operations are finitary by nature. In practice, we use a topology over \\( R \\) and speak of convergent and divergent power series.\n\\end{definition}\n\n\\begin{theorem}[Newton's binomial theorem]\\label{thm:binomial_theorem}\n  \\begin{equation*}\n    (X + Y)^n = \\sum_{k=0}^n \\binom n k X^k Y^{n-k}\n  \\end{equation*}\n\\end{theorem}\n\\begin{proof}\n  We use induction on \\( n \\). For \\( n = 0 \\), the theorem trivially holds. Assume that the theorem holds for \\( 1, \\ldots, n \\). Then\n  \\begin{balign*}\n    (X + Y)^{n+1}\n     & =\n    X (X + Y)^n + Y (X + Y)^n\n    =    \\\\ &=\n    \\sum_{k=0}^n \\binom n k X^{k+1} Y^{n-k} + Y \\sum_{k=0}^n \\binom n k X^k Y^{n-k}\n    =    \\\\ &=\n    X^{n+1} + Y \\sum_{k=0}^{n-1} \\binom n k X^{k+1} Y^{n-(k+1)} + Y \\sum_{k=0}^n \\binom n k X^k Y^{n-k}\n    =    \\\\ &=\n    X^{n+1} + Y \\left[ \\sum_{k=1}^n \\binom n {k-1} X^k Y^{n-k} + \\sum_{k=1}^n \\binom n k X^k Y^{n-k} \\right] + Y^{n+1}\n    =    \\\\ &\\reloset {\\ref{thm:pascals_identity}} =\n    X^{n+1} + Y \\sum_{k=1}^n \\binom {n+1} k X^k Y^{n-k} + Y^{n+1}\n    =    \\\\ &=\n    \\sum_{k=0}^{n+1} \\binom {n+1} k X^k Y^{(n+1)-k}.\n  \\end{balign*}\n\\end{proof}\n\n
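For a concrete instance of \\fullref{thm:binomial_theorem} (computed by us as a sanity check): with \\( n = 3 \\),\n\\begin{equation*}\n  (X + Y)^3 = \\binom 3 0 Y^3 + \\binom 3 1 X Y^2 + \\binom 3 2 X^2 Y + \\binom 3 3 X^3 = Y^3 + 3XY^2 + 3X^2Y + X^3.\n\\end{equation*}\n\n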
\\begin{proposition}\\label{thm:xn_minus_one_factorization}\n  For any nonnegative \\( n \\) we have\n  \\begin{equation}\\label{eq:thm:xn_minus_one_factorization}\n    X^{n + 1} - 1 = (X - 1)(X^n + X^{n-1} + \\cdots + 1) = (X - 1) \\sum_{k=0}^n X^k.\n  \\end{equation}\n\\end{proposition}\n\\begin{proof}\n  We use induction on \\( n \\).\n  \\begin{itemize}\n    \\item For \\( n = 0 \\) we have\n    \\begin{equation*}\n      X^1 - 1 = X - 1 = (X - 1) 1.\n    \\end{equation*}\n\n    \\item If for some fixed \\( n \\)\n    \\begin{equation*}\n      X^{n + 1} - 1 = (X - 1) \\sum_{k=0}^n X^k,\n    \\end{equation*}\n    then\n    \\begin{balign*}\n      X^{n + 2} - 1\n      &=\n      X(X^{n + 1} - 1) + (X - 1)\n      \\reloset {\\T{ind.}} = \\\\ &=\n      X (X - 1) \\sum_{k=0}^n X^k + (X - 1)\n      = \\\\ &=\n      (X - 1) \\parens*{ X \\sum_{k=0}^n X^k + 1 }\n      = \\\\ &=\n      (X - 1) \\sum_{k=0}^{n + 1} X^k.\n    \\end{balign*}\n  \\end{itemize}\n\\end{proof}\n\n\\begin{proposition}\\label{thm:polynomial_root_iff_divisible}\n  The value \\( u \\in R \\) is a \\hyperref[def:semiring_kernel]{root} of the polynomial function \\( p(x) \\) if and only if the polynomial \\( (X - u) \\) divides \\( p(X) \\).\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Suppose that \\( u \\in R \\) is a root of \\( p(x) \\). By \\fullref{thm:euclidean_division_of_polynomials}, we can divide \\( p(X) \\) by the monic polynomial \\( (X - u) \\):\n  \\begin{equation*}\n    p(X) = (X - u) q(X) + r(X).\n  \\end{equation*}\n\n  Assume that \\( r(X) \\) is nonzero. Evaluating \\( p(X) \\) at \\( u \\) gives us\n  \\begin{equation*}\n    0 = p(u) = (u - u) q(u) + r(u),\n  \\end{equation*}\n  hence \\( u \\) is a root of \\( r(X) \\). But \\( \\deg r < \\deg (X - u) = 1 \\), that is, \\( r \\) is a nonzero constant and cannot have roots. The obtained contradiction proves the statement.\n\n  \\NecessitySubProof Suppose that \\( (X - u) \\) divides \\( p(X) \\) with quotient \\( q(X) \\). Then\n  \\begin{equation*}\n    p(X) = (X - u) q(X).\n  \\end{equation*}\n\n  Evaluation at \\( u \\) gives us\n  \\begin{equation*}\n    p(u) = (u - u) q(u) = 0.\n  \\end{equation*}\n\n  Therefore, \\( u \\) is a root of \\( p(X) \\).\n\\end{proof}\n\n
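A tiny illustration of \\fullref{thm:polynomial_root_iff_divisible} (our own): over \\( \\BbbZ \\), the value \\( u = 1 \\) is a root of \\( p(X) = X^3 - 1 \\), and indeed \\( X - 1 \\) divides \\( p(X) \\): by \\fullref{thm:xn_minus_one_factorization}, \\( X^3 - 1 = (X - 1)(X^2 + X + 1) \\).\n\n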
\\begin{definition}\\label{def:polynomial_root_multiplicity}\n  We say that the polynomial \\( p \\in R[X] \\) has the root \\( r \\in R \\) with multiplicity \\( m \\in \\BbbZ_{>0} \\) if there exists a polynomial \\( q \\in R[X] \\) of degree \\( \\deg q = \\deg p - m \\) such that \\( (X - r) \\) does not divide \\( q(X) \\) and\n  \\begin{equation*}\n    p(X) = (X - r)^m q(X).\n  \\end{equation*}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:integral_domain_polynomial_root_limit}\n  If \\( R \\) is an integral domain, a nonzero polynomial of degree \\( n \\) has at most \\( n \\) (not necessarily \\hyperref[def:polynomial_root_multiplicity]{distinct}) roots.\n\\end{proposition}\n\\begin{proof}\n  We will use induction on \\( n \\).\n\n  In the case \\( n = 0 \\), we have that \\( p \\) is a nonzero constant polynomial. Such a polynomial cannot have roots, hence the statement holds.\n\n  Now assume that the statement holds for polynomials of degrees \\( 0, 1, \\ldots, n - 1 \\). Let \\( p \\in R[X] \\) be a polynomial of degree \\( n \\) and let \\( r \\in R \\) be a root of \\( p \\). \\Fullref{thm:polynomial_root_iff_divisible} implies that there exists a polynomial \\( q(X) \\) of degree \\( n - 1 \\) such that\n  \\begin{equation*}\n    p(X) = (X - r) q(X).\n  \\end{equation*}\n\n  Fix \\( t \\neq r \\) that is not a root of \\( q(X) \\). Evaluation at \\( t \\) gives us\n  \\begin{equation*}\n    p(t) = (t - r) q(t).\n  \\end{equation*}\n\n  Both \\( (t - r) \\neq 0 \\) and \\( q(t) \\neq 0 \\). Since \\( R \\) has no zero divisors, the product \\( p(t) \\) of \\( (t - r) \\) and \\( q(t) \\) is also nonzero. Thus, the only roots of \\( p(X) \\) are \\( r \\) and the roots of \\( q(X) \\).\n\n  By the induction hypothesis, \\( q(X) \\) has at most \\( n - 1 \\) roots (counting multiplicities). Thus, \\( p(X) \\) has at most \\( (n - 1) + 1 = n \\) roots.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:polynomials_with_identical_values}\n  In an integral domain, two polynomials \\eqref{eq:thm:def:polynomial_degree/properties/p} and \\eqref{eq:thm:def:polynomial_degree/properties/q} with \\( m \\leq n \\) are equal (i.e. have the same coefficients) if and only if their functions agree at \\( n + 1 \\) points.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Obvious.\n\n  \\NecessitySubProof Define\n  \\begin{equation*}\n    r(X) \\coloneqq p(X) - q(X) = \\sum_{k=0}^n (a_k - b_k) X^k.\n  \\end{equation*}\n\n  This is a polynomial of degree at most \\( n \\) that has \\( n + 1 \\) roots. By \\fullref{thm:integral_domain_polynomial_root_limit}, \\( r \\) is the zero polynomial. Hence, \\( p = q \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:primitive_polynomial}\\mcite[394]{Knapp2016BasicAlgebra}\n  A nonzero polynomial is called \\term{primitive} if its coefficients are \\hyperref[def:coprime_ring_ideals]{coprime}.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:polynomial_quotient_rings_equinumerous_with_module_of_polynomials}\n  Fix a monic polynomial \\( p(X) \\in R[X] \\) of degree \\( n \\). The free module \\( \\pi_{n-1}(R[X]) \\), as defined in \\fullref{def:polynomial_free_module}, is \\hyperref[def:equinumerosity]{equinumerous} with the quotient ring \\( R[X] / \\braket{p(X)} \\). 
This allows us to choose a \\enquote{canonical representative} of the cosets of the quotient ring similarly to \\fullref{thm:cyclic_group_isomorphic_to_integers_modulo_n}.\n\n  Consequently, for different monic polynomials \\( p(X) \\) of the same degree \\( n \\), only the ring structure on \\( R[X] / \\braket{p(X)} \\) differs.\n\\end{proposition}\n\\begin{proof}\n  Euclidean \\hyperref[thm:euclidean_division_of_polynomials]{division} allows us to define the homomorphism\n  \\begin{balign*}\n     & \\Theta: R[X] / \\braket{p(X)} \\to \\pi_{n-1}(R[X])             \\\\\n     & \\Theta(q(X) + \\braket{p(X)}) \\coloneqq \\rem(q(X), p(X)).\n  \\end{balign*}\n\n  It is injective since if \\( q_1(X) \\) and \\( q_2(X) \\) are not congruent modulo \\( \\braket{p(X)} \\), they have different remainders. Conversely, \\( \\Theta \\) is surjective because any remainder \\( r(X) \\) belongs to the coset\n  \\begin{equation*}\n    r(X) + \\braket{p(X)}.\n  \\end{equation*}\n\n  See \\fullref{ex:polynomial_quotient_rings_gaussian_integers} and \\fullref{ex:polynomial_quotient_rings_z2} for differing ring structures in \\( \\pi_{n-1}(R[X]) \\).\n\\end{proof}\n\n\\begin{example}\\label{ex:polynomial_quotient_rings_gaussian_integers}\n  The value of \\fullref{thm:polynomial_quotient_rings_equinumerous_with_module_of_polynomials} is that, like \\fullref{thm:integers_modulo_isomorphic_to_quotient_group}, it allows us to identify elements of the quotient rings of the form \\( R[X] / \\braket{p(X)} \\), for monic \\( p \\), with concrete polynomials.\n\n  \\Fullref{thm:polynomial_quotient_rings_equinumerous_with_module_of_polynomials} tells us that we can choose a concrete polynomial from \\( \\pi_{n-1}(R[X]) \\) for every equivalence class in \\( R[X] / \\braket{p(X)} \\) and that these polynomials have degree strictly less than \\( n \\) (if they are not zero).\n\n  For example, consider the polynomial \\( p(X) \\coloneqq X^2 + 1 \\) over the \\hyperref[def:set_of_integers]{integers} \\( \\BbbZ \\). It has degree \\( 2 \\), so the quotient ring \\( R[X] / \\braket{p(X)} \\) can be identified with polynomials of the form\n  \\begin{equation}\\label{ex:polynomial_quotient_rings_gaussian_integers/linear_polynomial}\n    bX + a\n  \\end{equation}\n  with integer coefficients.\n\n  In order to make sense of the imposed ring structure in \\( \\BbbZ[X] / \\braket{X^2 + 1} \\), we can see how multiplication modulo \\( X^2 + 1 \\) works. We have\n  \\begin{balign*}\n    (bX + a) (dX + c)\n     & \\cong\n    bdX^2 + (ad + bc)X + ac\n     & \\pmod {X^2 + 1} \\cong            \\\\ &\\cong\n    bd[X^2 + 1] + [(ad + bc)X - bd + ac]\n     & \\pmod {X^2 + 1} \\cong            \\\\ &\\cong\n    (ad + bc)X + (ac - bd)\n     & \\pmod {X^2 + 1}. \\phantom{\\cong}\n  \\end{balign*}\n\n  This is precisely the definition of multiplication of complex numbers (see \\fullref{def:set_of_complex_numbers}). 
Thus, we can identify\n  \\begin{equation*}\n    \\BbbZ[X] / \\braket{X^2 + 1} \\cong \\BbbZ[i].\n  \\end{equation*}\n\n  Like in \\fullref{ex:polynomial_evaluation_gaussian_integers}, we arrive at the Gaussian \\hyperref[def:gaussian_integers]{integers}, but using a different approach.\n\\end{example}\n\n\\begin{example}\\label{ex:polynomial_quotient_rings_z2}\n  Similarly to how the Gaussian integers were identified in \\fullref{ex:polynomial_quotient_rings_gaussian_integers}, we will provide a different ring structure on \\( \\BbbZ[X] / \\braket{p(X)} \\) for a polynomial \\( p(X) \\) of degree \\( 2 \\).\n\n  Consider the polynomial \\( p(X) \\coloneqq X^2 - 2 \\) over the \\hyperref[def:set_of_integers]{integers} \\( \\BbbZ \\). We know that \\( \\sqrt 2 \\) is a root of \\( X^2 - 2 \\) in \\( \\BbbR \\), so we can identify\n  \\begin{equation*}\n    \\BbbZ[X] / \\braket{X^2 - 2} \\cong \\BbbZ[\\sqrt 2].\n  \\end{equation*}\n\n  We will verify that multiplication is indeed compatible. Multiplication modulo \\( X^2 - 2 \\) works as follows:\n  \\begin{balign*}\n    (aX + b) (cX + d)\n     & \\cong\n    acX^2 + (bc + ad)X + bd\n     & \\pmod {X^2 - 2} \\cong            \\\\ &\\cong\n    ac[X^2 - 2] + [(bc + ad)X + 2ac + bd]\n     & \\pmod {X^2 - 2} \\cong            \\\\ &\\cong\n    (bc + ad)X + (2ac + bd)\n     & \\pmod {X^2 - 2}. \\phantom{\\cong}\n  \\end{balign*}\n\n  Multiplication in \\( \\BbbZ[\\sqrt 2] \\) works as follows:\n  \\begin{balign*}\n    (a \\sqrt 2 + b) (c \\sqrt 2 + d)\n    =\n    2ac + (bc + ad) \\sqrt 2 + bd.\n  \\end{balign*}\n\n  The two results are identical.\n\\end{example}\n\n\\begin{definition}\\label{def:algebraic_derivative}\n  Generalizing \\fullref{def:differentiability} from analysis, we define the \\term{algebraic derivative} of a polynomial \\( p(X) = a_n X^n + a_{n-1} X^{n-1} + \\cdots + a_1 X + a_0 \\in R[X] \\) as\n  \\begin{equation*}\n    p'(X) \\coloneqq n a_n X^{n-1} + (n-1) a_{n-1} X^{n-2} + \\cdots + 2 a_2 X + a_1.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:algebraic_derivative_product_rule}\n  Algebraic \\hyperref[def:algebraic_derivative]{derivatives} satisfy the product rule\n  \\begin{equation*}\n    (pq)' = p'q + pq'.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  By linearity, it is enough to consider the case where both \\( p(X) = a_n X^n \\) and \\( q(X) = b_m X^m \\) are monomials.\n\n  \\begin{balign*}\n    p'(X) q(X) + p(X) q'(X)\n     & =\n    n a_n X^{n-1} b_m X^m + a_n X^n m b_m X^{m-1}\n    =    \\\\ &=\n    (n + m) a_n b_m X^{n+m-1}\n    =    \\\\ &=\n    (a_n b_m X^{n+m})'\n    =    \\\\ &=\n    (pq)'(X).\n  \\end{balign*}\n\\end{proof}\n\n\\begin{proposition}\\label{thm:algebraic_derivative_of_linear_polynomial_power}\n  The algebraic \\hyperref[def:algebraic_derivative]{derivative} of\n  \\begin{equation*}\n    p(X) \\coloneqq a (X - u)^n\n  \\end{equation*}\n  is\n  \\begin{equation*}\n    p'(X) = an(X - u)^{n-1}.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  We use induction on \\( n \\). The case \\( n = 1 \\) is obvious. 
Assume that the statement holds for \\( 1, \\ldots, n - 1 \\).\n\n  By \\fullref{thm:algebraic_derivative_product_rule},\n  \\begin{balign*}\n    p'(X)\n     & =\n    [a(X - u)^{n-1}]' (X - u) + a(X - u)^{n-1} [(X - u)]'\n    =    \\\\ &=\n    a(n-1)(X - u)^{n-2} (X - u) + a(X - u)^{n-1}\n    =    \\\\ &=\n    an(X - u)^{n-1}.\n  \\end{balign*}\n\\end{proof}\n\n\\begin{corollary}\\label{thm:repeated_root_iff_derivatives_divisible}\n  The value \\( u \\in R \\) is a \\hyperref[def:semiring_kernel]{root} of multiplicity \\( m \\) of the polynomial function \\( p(x) \\) if and only if \\( u \\) is a root of multiplicity \\( m - 1 \\) of its algebraic \\hyperref[def:algebraic_derivative]{derivative} \\( p'(x) \\).\n\\end{corollary}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( u \\) be a root of \\( p(X) \\) of multiplicity \\( m \\), i.e. there exists a polynomial \\( q(X) \\) of degree \\( \\deg(q) = \\deg(p) - m \\) such that \\( (X - u) \\) does not divide \\( q(X) \\) and\n  \\begin{equation*}\n    p(X) = (X - u)^m q(X).\n  \\end{equation*}\n\n  By \\fullref{thm:algebraic_derivative_product_rule} and \\fullref{thm:algebraic_derivative_of_linear_polynomial_power},\n  \\begin{equation*}\n    p'(X)\n    =\n    m (X - u)^{m-1} q(X) + (X - u)^m q'(X)\n    =\n    (X - u)^{m-1} [m q(X) + (X - u) q'(X)].\n  \\end{equation*}\n\n  Then \\( u \\) is a root of \\( p'(X) \\) of multiplicity at least \\( m - 1 \\). Assume that the multiplicity is at least \\( m \\), that is,\n  \\begin{equation*}\n    (X - u) \\mid [mq(X) + (X - u) q'(X)].\n  \\end{equation*}\n\n  Since \\( X - u \\) obviously divides \\( (X - u) q'(X) \\), the above implies\n  \\begin{equation*}\n    (X - u) \\mid mq(X).\n  \\end{equation*}\n\n  But this contradicts our choice of \\( q(X) \\), which is not divisible by \\( (X - u) \\).\n\n  The obtained contradiction proves that \\( u \\) is a root of \\( p'(X) \\) of multiplicity exactly \\( m - 1 \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:multivariate_polynomial}\n  We define the \\term{multivariate polynomial ring} in \\( n \\) indeterminates as the \\enquote{iterated} single-variable polynomial ring\n  \\begin{equation*}\n    R[X_1, X_2, \\ldots, X_n] \\coloneqq R[X_1][X_2] \\cdots [X_n].\n  \\end{equation*}\n\n  Note that we are using the letter \\( n \\) to represent the number of indeterminates rather than the degree of the multivariate polynomial. When we need the degree, we will note it specifically.\n\n  Just like a polynomial \\( p(X) \\) in one indeterminate is a sequence, that is, a \\hyperref[def:topological_net]{net} over \\( \\BbbZ_{\\geq 0} \\), each multivariate polynomial \\( p(X_1, \\ldots, X_n) \\) is a map from \\( \\BbbZ_{\\geq 0}^n \\) to \\( R \\) (there is no natural order on \\( \\BbbZ_{\\geq 0}^n \\) for defining \\( p \\) as a net). We can regard it as an \\( n \\)-dimensional \\hyperref[def:array]{array} over \\( R \\), although arrays are purposely defined to only have finitely many elements, which makes it more difficult to define operations for an arbitrary pair of polynomials. For example, if there are only two variables, multivariate polynomials are \\enquote{infinite matrices}\n  \\begin{equation*}\n    p(X_1, X_2) \\coloneqq \\begin{pmatrix}\n      a_{0,0} & a_{0,1} & \\cdots \\\\\n      a_{1,0} & a_{1,1} & \\cdots \\\\\n      \\vdots  & \\vdots  & \\ddots \\\\\n    \\end{pmatrix}\n  \\end{equation*}\n  with only finitely many nonzero elements.\n\n  A \\term{monomial} is a polynomial with only one nonzero element. 
The monomial\n  \\begin{equation*}\n    p(X_1, X_2) \\coloneqq \\begin{pmatrix}\n      0      & 0      & 0      & \\cdots \\\\\n      0      & 0      & 0      & \\cdots \\\\\n      0      & r      & 0      & \\cdots \\\\\n      0      & 0      & 0      & \\cdots \\\\\n      \\vdots & \\vdots & \\vdots & \\ddots \\\\\n    \\end{pmatrix}\n  \\end{equation*}\n  can also be written symbolically as \\( p(X_1, X_2) = r X_1^2 X_2 \\), with the power for each variable corresponding to the zero-based index of the element along the corresponding axis.\n\n  The sum of the indices over all axes is called the \\term{degree} of this monomial. The above monomial has degree \\( 3 = 2 + 1 \\). We leave the degree of the zero monomial undefined.\n\n  Every polynomial can be regarded as the sum of finitely many monomials by taking each element of the array and putting it in its own monomial array.\n\n  The \\term{degree} \\( \\deg p \\) of a multivariate polynomial \\( p \\) is defined as the maximal degree among all of its nonzero monomials. If all monomials are zero, the degree is left undefined.\n\n  If all nonzero monomials have the same degree, the polynomial is said to be \\term{homogeneous}. Homogeneous polynomials of degree \\( 1 \\) are called \\term{linear}.\n\\end{definition}\n\n\\begin{theorem}\\label{thm:polynomial_embedding_behavior}\n  Let \\( \\mscrR \\) be an integral domain and \\( \\xi \\coloneqq \\min \\{ \\card(\\mscrR), \\aleph_0 \\} \\).\n\n  \\begin{thmenum}\n    \\thmitem{thm:polynomial_embedding_behavior/zero} The only polynomial corresponding to the zero function \\( f(x) = 0 \\) is the zero polynomial.\n\n    \\thmitem{thm:polynomial_embedding_behavior/univariate} Let \\( p(X) \\) be a nonzero polynomial and let \\( \\Phi(p) \\) be its polynomial function. Then there exists exactly one polynomial \\( q(X) \\) of degree \\( \\deg(q) < \\xi \\) such that \\( \\Phi(q) = \\Phi(p) \\).\n\n    \\thmitem{thm:polynomial_embedding_behavior/multivariate} Let \\( p(X_1, \\ldots, X_n) \\) be a nonzero multivariate polynomial and let \\( \\Phi(p) \\) be its polynomial function. Then there exists exactly one polynomial \\( q(X_1, \\ldots, X_n) \\) such that the power of every variable in every monomial is strictly less than \\( \\xi \\) and \\( \\Phi(q) = \\Phi(p) \\).\n  \\end{thmenum}\n\\end{theorem}\n\\begin{proof}\n  \\SubProofOf{thm:polynomial_embedding_behavior/zero} The nonzero constant polynomials are obviously different from the zero function. The nonconstant polynomials have at least one value different from zero, hence are again different from the zero function.\n\n  Therefore, only the zero polynomial gives rise to the zero function.\n\n  \\SubProofOf{thm:polynomial_embedding_behavior/univariate} In the univariate case, by \\fullref{thm:polynomials_with_identical_values}, two nonzero polynomials of degree less than \\( \\xi \\) are equal if and only if they agree at \\( \\xi \\) points.\n\n  If \\( \\deg(p) < \\xi \\), which is automatically true if \\( \\mscrR \\) is an infinite ring, then \\( q \\coloneqq p \\) is the desired polynomial. It is unique by the previous paragraph.\n\n  If \\( \\deg(p) \\geq \\xi \\), which is only possible if \\( \\mscrR \\) is a finite ring, we can easily give a non-constructive proof in the general case. 
\\Fullref{alg:finite_field_polynomial_reduction} gives a concrete procedure for finding \\( q \\) in the case of certain Galois fields.\n\n  If \\( \\mscrR \\) is finite, both the \\hyperref[def:polynomial_free_module]{polynomial free module} \\( \\pi_{\\xi - 1} \\) and the \\hyperref[thm:functions_over_ring_form_algebra]{function space} \\( \\fun(\\mscrR) \\) have exactly \\( \\xi^\\xi \\) elements. Furthermore, by the first paragraph of the proof, two distinct polynomials in \\( \\pi_{\\xi - 1} \\) give rise to distinct functions. Therefore, the evaluation \\( \\Phi: \\pi_{\\xi - 1} \\to \\fun(\\mscrR) \\) is an injective function between finite sets of the same cardinality. It is therefore a bijection. That is, to each endofunction over \\( \\mscrR \\), in particular to \\( \\Phi(p) \\), there corresponds a unique univariate polynomial over \\( \\mscrR \\) of degree less than \\( \\xi \\).\n\n  \\SubProofOf{thm:polynomial_embedding_behavior/multivariate} As in the univariate case, if \\( \\mscrR \\) is infinite, the statement trivially holds.\n\n  Assume that \\( \\mscrR \\) is a finite ring. A simple counting argument shows that the vector spaces \\( \\fun(\\mscrR^n, \\mscrR) \\) and\n  \\begin{equation*}\n    \\mscrL \\coloneqq \\linspan \\{ X_1^{k_1} \\cdots X_n^{k_n} \\colon k_j = 0, \\ldots, \\xi - 1 \\T{for} j = 1, \\ldots, n \\}\n  \\end{equation*}\n  have exactly \\( \\xi^{\\xi^n} \\) vectors.\n\n  We must, therefore, only show that \\( \\Phi \\) is injective on \\( \\mscrL \\).\n\n  We use induction on the number of variables. We already showed injectivity for one variable, and we now assume that the statement is true for all polynomials with (strictly) less than \\( n \\) variables.\n\n  Assume that \\( \\Phi \\) is not injective for polynomials in \\( n \\) variables. Then there exist polynomials \\( f, g \\in \\mscrL \\) such that \\( r \\coloneqq f - g \\) is nonzero and yet \\( \\Phi(r) = 0 \\). We know that\n  \\begin{equation*}\n    f \\in \\mscrR[X_1, \\ldots, X_n] = \\mscrR[X_1, \\ldots, X_{n-1}][X_n],\n  \\end{equation*}\n  hence\n  \\begin{equation*}\n    f(X_1, \\ldots, X_n) = \\sum_{k=0}^{\\xi-1} f_k(X_1, \\ldots, X_{n-1}) X_n^k,\n  \\end{equation*}\n  where the \\( f_k(X_1, \\ldots, X_{n-1}) \\) satisfy the inductive hypothesis, and analogously for \\( g \\).\n\n  Then, since \\( \\Phi \\) is a homomorphism,\n  \\begin{equation*}\n    \\Phi(r(X_1, \\ldots, X_n)) = \\sum_{k=0}^{\\xi-1} \\Phi(f_k(X_1, \\ldots, X_{n-1}) - g_k(X_1, \\ldots, X_{n-1})) \\Phi(X_n^k) = 0.\n  \\end{equation*}\n\n  By the inductive hypothesis, the vectors \\( \\Phi[f_k - g_k], k = 0, \\ldots, \\xi - 1 \\) are linearly independent. Therefore, \\( \\Phi[r(X_1, \\ldots, X_n)] = 0 \\) if and only if \\( \\Phi(X_n^k) = 0 \\) for all \\( k = 0, \\ldots, \\xi - 1 \\). But evaluating \\( \\Phi(X_n) \\) at \\( t \\) gives \\( t \\), which may be nonzero, hence \\( \\Phi(X_n) \\) is necessarily a nonzero function.\n\n  Hence, \\( \\Phi(r) \\) is a nonzero function as a nontrivial combination of linearly independent nonzero functions. But this contradicts our assumption that \\( \\Phi(r) = 0 \\). 
Therefore, \\( \\Phi \\) is injective over \\( \\mscrL \\).\n\n  Therefore, as in the univariate case, to each function from \\( \\fun(\\mscrR^n, \\mscrR) \\), in particular to \\( \\Phi(p) \\), there corresponds a unique polynomial from \\( \\mscrL \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:rational_algebraic_function}\n  We denote the field of \\hyperref[def:field_of_fractions]{fractions} of \\( R[X_1, \\ldots, X_n] \\) by \\( R(X_1, \\ldots, X_n) \\) and call it the field of \\term{rational algebraic functions}.\n\\end{definition}\n\n\\begin{definition}\\label{def:laurent_polynomial}\n  We generalize \\hyperref[def:polynomial]{polynomials} by allowing negative exponents.\n\n  \\begin{thmenum}\n    \\thmitem{def:laurent_polynomial/polynomial} A \\term{Laurent polynomial} over \\( R \\) is, formally, a \\hyperref[def:topological_net]{net} with finite support over the integers \\( \\BbbZ \\) rather than a net over \\( \\BbbZ_{\\geq 0} \\) (i.e. the nonnegative integers). The operations from \\fullref{def:algebra_of_polynomials} make the Laurent polynomials into a ring, which we denote by \\( R[X^\\pm] \\). This notation is consistent with \\fullref{thm:polynomial_ring_universal_property}. Note that\n    \\begin{equation*}\n      R[X^\\pm] \\cong R[X, X^{-1}] \\cong R[X, Y] / (XY - 1),\n    \\end{equation*}\n    so all three notations are used.\n\n    The terms \\enquote{\\hyperref[def:polynomial/degree]{degree}} and \\enquote{\\hyperref[def:polynomial/leading_coefficient]{leading coefficient}} do not generalize naturally, so we leave them undefined.\n\n    We use the notation\n    \\begin{equation*}\n      p(X) = \\sum_{k \\in \\BbbZ} a_k X^k = \\sum_{k=-\\infty}^\\infty a_k X^k.\n    \\end{equation*}\n\n    \\thmitem{def:laurent_polynomial/multivariate} Analogously to \\fullref{def:multivariate_polynomial}, we define the ring of multivariate Laurent polynomials \\( R[X_1^\\pm, \\ldots, X_n^\\pm] \\) as maps from \\( \\BbbZ^n \\) to \\( R \\) rather than from \\( \\BbbZ \\) to \\( R \\) or, inductively, as\n    \\begin{equation*}\n      R[X_1^\\pm, \\ldots, X_n^\\pm] = R[X_1^\\pm, \\ldots, X_{n-1}^\\pm][X_n^\\pm].\n    \\end{equation*}\n\n    \\thmitem{def:laurent_polynomial/series} A \\term{formal Laurent series} is simply a Laurent polynomial in which we remove the restriction of only finitely many nonzero coefficients. 
We denote the set of all formal Laurent series over \\( R \\) by \\( R\\Bracks{X^\\pm} \\).\n  \\end{thmenum}\n\\end{definition}\n", "meta": {"hexsha": "638adbafa6fec90bdd8a8e2c7cb2fed9ebb45b9f", "size": 45404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/polynomials.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/polynomials.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/polynomials.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.8568102445, "max_line_length": 725, "alphanum_fraction": 0.6398334948, "num_tokens": 15494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7772998663336158, "lm_q1q2_score": 0.5822653993344077}}
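The quotient-ring examples in the record above lend themselves to a quick machine check. The following minimal Python sketch is an illustration added here and is not part of the source notebook; the helper names \verb|polymul| and \verb|reduce_quadratic| are ours. It multiplies linear polynomials given as little-endian coefficient lists, reduces the product modulo \( X^2 - c \), and reproduces multiplication in \( \BbbZ[i] \) (with \( c = -1 \)) and in \( \BbbZ[\sqrt 2] \) (with \( c = 2 \)).
\begin{verbatim}
def polymul(p, q):
    """Multiply integer polynomials given as little-endian coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def reduce_quadratic(p, c):
    """Reduce p modulo X^2 - c by repeatedly rewriting X^2 as the constant c."""
    out = list(p)
    for k in range(len(out) - 1, 1, -1):
        out[k - 2] += c * out[k]
        out[k] = 0
    return out[:2]

p, q = [3, 2], [4, 5]                       # 3 + 2X and 4 + 5X
# c = -1 models X^2 + 1 = 0: expect (3 + 2i)(4 + 5i) = 2 + 23i.
print(reduce_quadratic(polymul(p, q), -1))  # [2, 23]
# c = 2 models X^2 - 2 = 0: expect (3 + 2*sqrt(2))(4 + 5*sqrt(2)) = 32 + 23*sqrt(2).
print(reduce_quadratic(polymul(p, q), 2))   # [32, 23]
\end{verbatim}
Both printed results match the hand computations in the two examples, which is exactly the compatibility the notebook verifies symbolically.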
{"text": "%!TEX root = ../gfkg.tex\n\\subsection{Chapter 3}\n\\begin{p}\n{Show that $v+w$ and $gw\\in \\text{Vect}(M)$.}\n\\end{p}\n\n$[v+w](f+g)=v(f+g)+w(f+g)=v(f)+w(f)+v(g)+w(g)=[v+w](f)+[v+w](g)$\\\\\n$[v+w](\\alpha f)=v(\\alpha f)+w(\\alpha f)=\\alpha (v(f)+w(f))=\\alpha[v+w](f)$\\\\\n$[v+w](fg)=v(fg)+w(fg)=v(f)g+f v(g)+w(f)g+f w(g)=[v+w](f)g+f[v+w](g)$.\\\\\n\n$[gw](f+h)=g\\, w(f+h)=g(w(f)+w(h))=[gw](f)+[gw](h)$.\\\\\n$[gw](\\alpha f)=g\\, w(\\alpha f)=\\alpha g\\, w(f)=\\alpha[gw](f)$.\\\\\n$[gw](fh)=g\\,w(fh)=g(w(f)h+fw(h))=g\\,w(f)h+fg\\,w(h)=[gw](f)\\,h+f\\,[gw](h)$.\n\n\n\\begin{p} Show that the following rules hold for all $v, w\\in \\text{Vect}(M)$ {\\em and}\n$f,g\\in C^\\infty(M)$: 1. $f(v+w)=fv+fw$, 2. $(f+g)v=fv+gv$, 3. $(fg)v=f(gv)$, 4. $1v=v$, where 1 denotes the constant function equal to 1 on all of $M$. This makes $\\text{Vect}(M)$ \na \\emph{module} over $C^\\infty(M)$.\n\\end{p}\n$f(v+w)[g]=fv(g)+fw(g)$, so the first one holds. The remainder work exactly the same way:\napply both sides of the equation to a test function and show that they are equal.\n\n\\begin{p}\n{Show that if $v^\\mu\\partial_\\mu=0$ we must have $v^\\mu=0$ for all $\\mu$.}\n\\end{p}\nChoose as test functions the coordinate functions $x^\\mu$ so that $\\partial_\\mu x^\\mu = 1$ for all\n$\\mu$, meaning $v=0$.\n\n\\begin{p}\n{Let $v,w\\in$ \\emph{Vect($M$)}. Show that $v=w$ iff $v_p=w_p$ for all $p\\in M$.}\n\\end{p}\nIf $v=w$ then $v_p=w_p$ since $v_p(f)=v(f)[p]=w(f)[p]=w_p$ for all $f$. If $v_p=w_p$, \nthen $v(f)[p]=v_p(f)=w_p(f)=w(f)[p]$ for all $p\\in M$ and $f\\in C^\\infty(M)$, so $v=w$.\n\n\n\\begin{p}\n{Show that $T_p M$ is a vector space over the real numbers.}\n\\end{p}\n... Need to show linearity, closedness, and existence of a zero.\n\n\\begin{p}\n{Check that $\\gamma'(t)\\in T_{\\gamma(t)}M$.}\n\\end{p}\n\n$\\gamma'(t)[f+g]=\\frac{\\rm d}{{\\rm d}t}(f(\\gamma(t))+g(\\gamma(t)))=\\gamma'(t)[f]+\\gamma'(t)[g]$. Similarly\nwe get $\\gamma'(t)[\\alpha f]=\\alpha \\gamma'(t)[f]$. Finally, $\\gamma'(t)[fg]=\\frac{\\rm d}{{\\rm d}t}\n(f(\\gamma(t))g(\\gamma(t)))=\\gamma'(t)[f]g+f\\gamma'(t)[g]$.\n\n\\begin{p}\nLet $\\phi: \\R\\rightarrow \\R$ be given by $\\phi(t)=e^t$. Let\n$x$ be the usual coordinate function on $\\R$. Show that $\\phi^*x=e^x$.\n\\end{p}\n\nThe coordinate of the transformed point $\\phi(p)$ is clearly $e^{x(p)}$, so $\\phi^*x$ should be $e^x$. More\nformally, $\\phi^*x(p)=x(\\phi(p))=x(e^{p})=e^{x(p)}$ so indeed $\\phi^*x=e^x$.\n%It's a little tricky to keep the notion of the coordinate function $x:M\\rightarrow \\R$ and its value $x(p)$ at the point $p$ distinct, and it will just get worse when looking at derivatives of $x$.\\\\\n\n{\\begin{p}\n{Let $\\phi:\\R^2\\rightarrow \\R^2$ be a rotation counterclockwise\nby an angle $\\theta$. Let $x,y$ be the usual coordinate functions on $\\R^2$. Show that\n$\\phi^*x=x\\cos \\theta-y\\sin\\theta$ and $\\phi^*y=x\\sin\\theta+y\\cos\\theta$.}\n\\end{p}\n\nAgain $\\phi^*x(p)=x(\\phi(p))$, the $x$ coordinate of the rotated point. \nIf $p$ is the point $(s,t)$, then \n$\\phi(p)=(s',t')=(s\\cos\\theta-t\\sin\\theta,s\\sin\\theta+t\\cos\\theta)$. \nThus $\\phi^*x(p)=s\\cos\\theta-t\\sin\\theta=x(p)\\cos\\theta-y(p)\\sin\\theta$ and similarly for $\\phi^*y$. \nSince the point $p$ is arbitrary, the desired result follows.\n\n\\begin{p}{$\\phi:M\\rightarrow N$ is smooth if $f\\in C^\\infty(N)$ implies $\\phi^*f\\in C^\\infty(M)$. 
Show\nthat this definition is consistent with smooth functions $f:M\\rightarrow\\R$ and smooth\ncurves $\\gamma:\\R\\rightarrow M$.}\n\\end{p}\n\nA function $f:N\\rightarrow\\R$ is smooth if \n$f\\circ\\varphi^{-1}_\\alpha:\\R^n\\rightarrow \\R$ is smooth\nfor all $\\alpha$. Pulling the function back by using $\\phi$ makes a new smooth function $f':M\\rightarrow \n\\R$ where $f'=f\\circ\\phi$ since its smoothness is determined by\nthat of $(f\\circ\\phi)\\circ(\\varphi_\\alpha\\circ\\phi)^{-1}=f\\circ\\varphi^{-1}_\\alpha$ which is assumed to\nbe smooth. Any other coordinate system for $M$ would also produce a smooth result, since the \ndifferent systems are related by smooth coordinate transformations.\n\n\\begin{p}{Prove that $(\\phi\\circ\\gamma)'(t)=\\phi_*(\\gamma'(t))$}\n\\end{p}\n\n$\\phi_*(\\gamma'(t))[f]=\\gamma'(t)(\\phi^*f)=\\gamma'(t)(f\\circ\\phi)=\\frac{\\rm d}{{\\rm d}t}(f\\circ\\phi)(\\gamma(t))=\\frac{\\rm d}{{\\rm d}t}f(\\phi(\\gamma(t)))=\\frac{\\rm d}{{\\rm d}t}f((\\phi\\circ\\gamma)(t))=(\\phi\\circ\\gamma)'(t)[f]$\n\n\\begin{p}\n Show that the pushforward operation $\\phi_*:T_pM\\rightarrow T_{\\phi(p)}N$ is linear.\n \\end{p}\n\n$\\phi_*(v+w)[f]=(v+w)(\\phi^*f)=v(\\phi^*f)+w(\\phi^*f)=\\phi_*v[f]+\\phi_*w[f]$ and similarly for scalars $\\alpha$.\n\n\\begin{p}{Show that if $\\phi:M\\rightarrow N$\nwe can push forward a vector field $v$ on $M$ to obtain a vector field \n$\\phi_*v$ on $N$ satisfying $(\\phi_*v)_q=\\phi_*(v_p)$ whenever $\\phi(p)=q$.}\n\\end{p}\n\nSince the vector fields can be specified pointwise, it suffices to consider a general point $p$. \nNow, $(\\phi_*v)_q[f]=\\phi_*v(f)[q]=v(f\\circ\\phi)[p]=v_p(\\phi^*f)=\\phi_*(v_p)[f]$.\n\n\\begin{p}\n{Let $\\phi:\\R^2\\rightarrow\\R^2$ be rotation counterclockwise\nby an angle $\\theta$. Let $\\partial_x,\\partial_y$ be the coordinate vector fields on $\\R^2$. \nShow that at any point of $\\R^2$: $\\phi_*\\partial_x=(\\cos\\theta)\\partial_x+(\\sin\\theta)\\partial_y$\nand $\\phi_*\\partial_y=-(\\sin\\theta)\\partial_x+(\\cos\\theta)\\partial_y$.}\n\\end{p}\n\nIntuitively, this is just a rotation of the vectors, since they are tangent vectors to the coordinate paths\nwhich are rotated by $\\phi$. To make this more precise, consider $(\\phi_*\\partial_x)f=\\partial_x(\\phi^*f)=\n\\partial_x(f\\circ\\phi)$. Note that $\\partial_x$ really means the derivative of the\nfunction with respect to whatever quantity is in the first input; choosing the point $(x,y)$ as the input\nmakes this automatic. Then $\\partial_x(f\\circ\\phi)(x,y)=\\partial_uf \\cdot \\partial_x u+\\partial_v f\\cdot \\partial_x v$, where $(u,v)=(x\\cos\\theta-y\\sin\\theta,x\\sin\\theta+y\\cos\\theta)=\\phi(x,y)$. We then obtain\n$(\\phi_*\\partial_x)f=(\\cos\\theta)\\partial_u f+(\\sin\\theta)\\partial_vf$. Since $u$ is the first input to $f$, $\\partial_u$ is\nthe same as $\\partial_x$ in the rotated space, and similarly for $v$ and $y$. This gives us\n$\\phi_*\\partial_x=(\\cos\\theta)\\partial_x+(\\sin\\theta)\\partial_y$; the same argument works for $\\phi_*\\partial_y$.\n\n\\begin{p}{Let $v$ be the vector field $x^2\\partial_x+y\\partial_y$ on $\\R^2$. Calculate \nthe integral curves $\\gamma(t)$ and see which ones are defined for all $t$.}\n\\end{p}\n\nWe start by finding a coordinate representation. $\\gamma'(t)[f]=\\frac{\\rm d}{{\\rm d}t}f(\\gamma(t))\n=\\partial_\\mu f\\frac{{\\rm d}\\gamma^\\mu}{{\\rm d}t}$, so $\\gamma'(t)=\n\\frac{{\\rm d}\\gamma^\\mu}{{\\rm d}t}\\partial_\\mu$. 
\nIf $\\gamma(t)=(x(t),y(t))$, then $\\gamma'(t)=\\dot{x}(t)\\partial_x+\\dot{y}(t)\\partial_y$. \nThen $\\dot{x}=x^2$ and $\\dot{y}=y$. The solution is $\\gamma(t)=(x(0)/(1-x(0)t),y(0)e^t)$, which\nis finite for finite $t$ provided $1-x(0)t\\neq 0$. Thus, only the curves with $x(0)=0$ are defined for all\n$t$.\n\n\\begin{p}{Show that $\\phi_0$ is the identity map id:$X\\rightarrow X$, and that for all $s,t\\in\\R$ we have $\\phi_t\\circ\\phi_s=\\phi_{t+s}$.}\n\\end{p}\n\nThe point $p$ is translated to the point $\\gamma(t)$ along the integral curve $\\gamma$ of $v$ with $p$\nas the initial point. Since $\\gamma(0)=p$, $\\phi_0(p)=p$. Moreover, as the integral curve is the solution\nto a first-order differential equation, the solutions are unique and the curves don't intersect. If we\napply $\\phi_t\\circ\\phi_s$ to $p$ we are first moving the point $p=\\gamma(0)$ to $\\gamma(s)$ and then using\nthis point as the new initial value for the integral curve so that we can move it to $\\widetilde{\\gamma}(t)$. But \nthe curves are unique and any point on the curve defines a boundary condition which, together with\nthe differential equation, yields the curve. Thus, $\\gamma$ and $\\widetilde{\\gamma}$ are the same curve\nwith the same parameterization (except for the offset).\n\n\\begin{p}\n{Consider the normalized vector fields in the $r$ and $\\theta$ direction on the plane in polar coordinates (not defined at the origin):\n\\[\nv=\\frac{x\\partial_x+y\\partial_y}{\\sqrt{x^2+y^2}},\\qquad w=\\frac{x\\partial_y-y\\partial_x}{\\sqrt{x^2+y^2}}\n\\]\nCalculate $[v,w]$}.\n\\end{p}\n\nLet's put this into component notation first. Define $\\bar{r}=1/\\sqrt{x^2+y^2}$,\n$x_0=x$, and $x_1=y$. Then $v=\\bar{r}x^k\\partial_k$ while $w=\\bar{r}\\epsilon^j_i x^i\\partial_j$, for $\\epsilon^j_i$ the antisymmetric symbol/tensor. Before getting started, note that $\\partial_j\\bar{r}=-x_j\\bar{r}^3$ (watch out for the lower index!). Now\n\\begin{eqnarray*}\n[v,w]&=&vw-wv=\\bar{r}x^k\\partial_k(\\bar{r}\\epsilon^j_ix^i\\partial_j)-\n\\bar{r}\\epsilon^j_ix^i\\partial_j(\\bar{r}x^k\\partial_k)\\\\\n&=&\\bar{r}^2\\epsilon^j_ix^k\\left(\\delta_k^i\\partial_j+x^i\\partial_k\\partial_j-\\bar{r}^2x_k x^i\\partial_j\\right)-\\bar{r}^2\\epsilon^j_i x^i\\left(\\partial_j+x^k\\partial_j\\partial_k-\\bar{r}^2 x_j x^k\\partial_k\\right)\\\\\n&=&-\\bar{r}^4x^kx_k\\epsilon^j_i x^i\\partial_j=-\\bar{r}^2\\epsilon^j_i x^i\\partial_j=-\\frac{x\\partial_y-y\\partial_x}{x^2+y^2}=-\\frac{w}{r},\n\\end{eqnarray*}\nsince $[\\partial_i,\\partial_j]=0$ and $\\epsilon^j_ix^ix_j=0$.\n\n\\begin{p}{Check that $[v,w](f)(p)=\\frac{\\partial^2}{\\partial t \\partial s}\\left[\nf(\\psi_s(\\phi_t(p)))-f(\\phi_t(\\psi_s(p)))\\right]_{s=t=0}$}\n\\end{p}\n\nStart with $(vf)(p)=\\frac{d}{dt}f(\\phi_t(p))|_{t=0}$ and\n$(wf)(p)=\\frac{d}{ds}f(\\psi_s(p))|_{s=0}$. \nThen we have $(vwf)(p)=v(wf)(p)=\\frac{d}{dt}wf(\\phi_t(p))|_{t=0}=\\frac{\\partial^2}{\\partial t\\partial s}f(\\psi_s(\\phi_t(p)))|_{s=t=0}$. Note that $wf$ means take $f$ and displace its input a little bit, and evaluate the derivative at zero\ndisplacement. In the compound expression, the input to $f$ is $\\phi_t(p)$, so $\\psi_s$ ends up first inside $f$ in the final expression. 
Alternatively, we could work from inside-out: $v(wf)(p)=v\\left(\\frac{d}{ds}f(\\psi_s(\\cdot))|_{s=0}\\right)(p)=\\frac{\\partial^2}{\\partial t\\partial s}f(\\psi_s(\\phi_t(p)))|_{s=t=0}.$\nSimilarly, $(wvf)(p)=\\frac{\\partial^2}{\\partial t\\partial s}f(\\phi_t(\\psi_s(p)))|_{s=t=0}$, so the expression for the commutator holds.\n\n\\begin{p}{Show that for all vector fields $u, v, w$ on a manifold, and all real numbers $\\alpha$ and $\\beta$, we have:\n\\begin{enumerate}\n\\item $[v,w]=-[w,v]$\n\\item $[u,\\alpha v+\\beta w]=\\alpha[u,v]+\\beta[u,w]$\n\\item The {\\bf Jacobi identity}: $[u,[v,w]]+[v,[w,u]]+[w,[u,v]]=0$\n\\end{enumerate}}\n\\end{p}\n\\begin{enumerate}\n\\item $[u,v]=uv-vu=-(vu-uv)=-[v,u]$\n\\item $[u,\\alpha v+\\beta w]=u(\\alpha v+\\beta w)-(\\alpha v+\\beta w)u=\\alpha[u,v]+\\beta[u,w]$\n\\item $[u,[v,w]]=u(vw-wv)-(vw-wv)u$, so $[u,[v,w]]+[v,[w,u]]+[w,[u,v]]=\nu(vw-wv)-(vw-wv)u+v(wu-uw)-(wu-uw)v+w(uv-vu)-(uv-vu)w=0$\n\\end{enumerate}\n\n", "meta": {"hexsha": "f8e5b5255eceacde43370a2d9f7dc26fcbd36b92", "size": 10438, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/I3.tex", "max_stars_repo_name": "joerenes/Baez-Muniain-solutions", "max_stars_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-04-13T12:10:03.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-19T18:18:34.000Z", "max_issues_repo_path": "src/I3.tex", "max_issues_repo_name": "joerenes/Baez-Muniain-solutions", "max_issues_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-04-13T12:15:30.000Z", "max_issues_repo_issues_event_max_datetime": "2017-04-13T20:19:44.000Z", "max_forks_repo_path": "src/I3.tex", "max_forks_repo_name": "joerenes/Baez-Muniain-solutions", "max_forks_repo_head_hexsha": "e1e38de9acab877bc4200af59c7910d42de748ca", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.7282608696, "max_line_length": 316, "alphanum_fraction": 0.6500287411, "num_tokens": 3938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.582265394490783}}
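The commutator computation $[v,w]=-w/r$ in the record above can also be cross-checked symbolically. The following is a hedged Python/SymPy sketch added here for illustration (not part of the solutions); the function names are ours, and we assume SymPy's \verb|simplify| manages the final cancellation:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # stay away from the origin r = 0
f = sp.Function('f')(x, y)
r = sp.sqrt(x**2 + y**2)

def v(g):  # radial unit vector field applied to a scalar g
    return (x * sp.diff(g, x) + y * sp.diff(g, y)) / r

def w(g):  # angular unit vector field applied to a scalar g
    return (x * sp.diff(g, y) - y * sp.diff(g, x)) / r

commutator = v(w(f)) - w(v(f))             # [v, w] acting on f
print(sp.simplify(commutator + w(f) / r))  # expect 0, i.e. [v, w] = -w/r
\end{verbatim}
The mixed second derivatives of $f$ cancel identically, so only first-order terms remain, matching the index computation above.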
{"text": "\\documentclass[11pt]{article}\n\n\\textheight 22cm\n\\textwidth  16cm\n\\hoffset= -0.6in \n\\voffset= -0.5in   \n\\setlength{\\parindent}{0cm}\n\\setlength{\\parskip}{10pt plus 2pt minus 2pt}\n\\pagenumbering{roman} \n\\setcounter{page}{-9}\n\n\\newcommand{\\be}{\\begin{equation}}\n\\newcommand{\\ee}{\\end{equation}}\n\\newcommand{\\bd}{\\begin{displaymath}}\n\\newcommand{\\ed}{\\end{displaymath}}\n\\begin{document}\n\n\\title{Astrophysics II: Laboratory 6}\n\\author{\\Large The Shape and Scale of the Milky Way}\n\\maketitle\n\n\\setlength{\\parindent}{0.2pt}\n\\setlength{\\parskip}{2ex}\n\n\\section{{\\bf Objectives:}}\n\n{\\it The student investigates the shape, structure and scale of the\n Milky Way using only visible wavelengths using the locations of stellar clusters and compares the galactic distribution of open vs. globular clusters.}\n\n\n\\section{{\\bf Procedure}}\n\n\nPart I - Open Clusters A: The Orientation of the Milky Way\n\n1) Go to www.astro.iag.usp.br/$\\sim$wilton/clusters.txt.  You will see data\non 1777 Milky Way open clusters.  The format is the following:\\\\\ncolumn 1 - object name\\\\\ncolumn 2-4 - right accession (RA) (hours : minutes : seconds)\\\\\ncolumn 5-7 - declination ($\\delta$) (degrees : arc-minutes : arc-seconds)\n\nIn order to make importing the part of this information relevant to this lab easier, I have written a python script in order to extract only the values of columns 2-7. This is much easier to import into Matlab. It is not necessary to understand the python script, but if you know python, it is available at http://www.astro.umd.edu/~mavara/matlab/121labs.html\n\n2) Save the simplified data text file to your desktop.\n\n3a) Import the file into Matlab like this:\n$\\gg$ load clusters\\_relevant.txt\n\n3b) Double click on the clusters\\_relevant structure name in the ``Workspace\" window to check that this structure is a matrix with the correct information: each row contains locational information for each cluster.\n\n4) Convert the RA and $\\delta$ into decimals of hours and degrees with the\nfollowing formulas:\n\\begin{equation}\nRA = RA(hr) + \\frac{RA(min)}{60 min/hr} + \\frac{RA(sec)}{3600 sec/hr} \n\\end{equation}\n\\begin{equation}\n\\delta = \\delta(deg) + \\frac{\\delta(arcmin)}{60 arcmin/deg} + \\frac{\\delta(arcsec)}{3600 arcsec/deg}\n\\end{equation}\n\nIn other words, create a new matrix with each three value set of coordinates transformed into a single value each; so, this new matrix has two columns, the first giving RA and the second giving $\\delta$.\n\n5) Plot the $\\delta$ (vertical axis) and RA (horizontal axis) of all 1777\nopen clusters.  What does this plot tell you about the shape of the\nMilky Way?  Feel free to sketch this out with labels to show or turn in to your TA.  \n\n6) On this plot the ecliptic (plane of the solar system) is a sine\nwave starting at RA = 0 hours, with a period of 24 hours and an\namplitude of 23 degrees.  Print out your plot and draw the ecliptic.  \n\nPart II - Open Clusters B: The Scale of the Milky Way\n\n1) Go to www.astro.iag.usp.br/$\\sim$wilton/clustersGAL.txt.  You will see the data\non the same 1777 Milky Way open clusters, this time with galactic coordinates.  
The format is the following:\\\\\ncolumn 1 - object name\\\\\ncolumn 2 - galactic longitude\\\\\ncolumn 3 - galactic latitude (with sign)\\\\\ncolumn 4 - classification flag (you will not need this)\\\\\ncolumn 5 - apparent size (you will not need this either)\\\\\ncolumn 6 - distance in pc\n\n2) Save the data file clusters\\_relevantGAL.txt to your desktop (this file was made from that on the website using clusters2.py).\n\n3) Import this data into a Matlab matrix.  Go ahead and change the values of the first two columns from degrees to radians.\n\n4) Next you will be changing into a Cartesian coordinate system\ncentered on the Sun.  To make life easier, the positive x axis points\ntowards l=0, b=0. Make three variables which will give Cartesian positions in units of distance in parsecs.  Use these formulas to convert from (l,b,d) to (x,y,z):\n\\begin{equation}\nx = d\\cos(b)\\cos(l)\n\\end{equation}\n\\begin{equation}\ny = d\\cos(b)\\sin(l)\n\\end{equation}\n\\begin{equation}\nz = d\\sin(b)\n\\end{equation}\n\n5) Make three scatter plots, x versus y, x versus z, and y versus z.\nBased on your plots, what is the shape of the Milky Way?  Make sure to\nbase your answer only on what you see in your plots, not anything you\nread in the text or on-line.\\\\  \n\n6) On your plots label the thickness and diameter of the Milky Way in\nparsecs and kiloparsecs.  Where is the center of the Milky Way?\\\\\n\nPart III - Globular Clusters: The Scale of the Milky Way, Reloaded\n\n1) Go to the Globular Cluster Catalog at\nhttp://physwww.physics.mcmaster.ca/$\\sim$harris/mwgc.dat.  The format is provided\nat the top of the file.  Note, the distances this time are in\nkiloparsecs, not parsecs.\n\n2) Save the simplified file just containing the positions of the clusters, mwgc-short\\_relevant.txt, to your desktop. Each row is the (x,y,z) position of the cluster centered on the Sun, in the same format as the data you produced in Part II, but with distances in kpc now.\n\n3) Load the positions into Matlab vectors for x, y, and z.\n\n4) Plot the positions for the globular\nclusters in the same way you did in Part II.  What does your plot tell you about the distribution of globular clusters relative to the Milky Way plane?  Where would you\nexpect the center of the globular cluster distribution to be?\\\\\n\n5) Mark the diameter on your plots in kiloparsecs.  Where is the center\nof your distribution (check out x versus y and x versus z for the best\nview of this)?\\\\\n\n\\section{{\\bf Questions}}\n\n{\\it Answer these questions on a separate sheet of paper and hand them\n  in with your lab}\n\n1) What does your plot from Part I tell you about the orientation of\nthe Milky Way plane with respect to the plane of the solar system? \n\n2) How does the size of the Milky Way you found with the open clusters\ncompare to the size you found with the globulars?  What happened to\nthe position of the Sun in the galaxy when you used the globular\nclusters?  Which answer is closer to the one given in your book for\nthe size of the Milky Way and the Sun's position in it?\n\n3) Which set of objects were better probes of the Milky Way's size?  Why?\n\n4) Which set of objects were better probes of the Milky Way's disk?  Why?   \n\n5) Many of the distance measurements to globulars were initially done\nusing Cepheid variables.  
Why would Cepheids make such excellent\nstandard candles for the Milky Way's globular clusters?\n\n\\end{document}\n\n", "meta": {"hexsha": "44c59215d5604bc1e773b084e4181b77c8a22f2a", "size": 6437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MATLAB/ASTR121/labMW/shapeMW.tex", "max_stars_repo_name": "cheyu-c/cheyu-c.github.io", "max_stars_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MATLAB/ASTR121/labMW/shapeMW.tex", "max_issues_repo_name": "cheyu-c/cheyu-c.github.io", "max_issues_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MATLAB/ASTR121/labMW/shapeMW.tex", "max_forks_repo_name": "cheyu-c/cheyu-c.github.io", "max_forks_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2013422819, "max_line_length": 359, "alphanum_fraction": 0.7542333385, "num_tokens": 1723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5822653828973295}}
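For readers who prefer to prototype the lab's coordinate conversions outside of Matlab, here is an equivalent Python/NumPy sketch of the conversion formulas from Parts I and II; it is not part of the handout, and the function names are ours:
\begin{verbatim}
import numpy as np

def sexagesimal_to_decimal(d, m, s):
    """(hours or degrees, minutes, seconds) -> decimal value.
    For negative declinations, m and s must carry the same sign as d."""
    return d + m / 60.0 + s / 3600.0

def galactic_to_cartesian(l_deg, b_deg, d_pc):
    """Galactic (l, b) in degrees plus distance -> Sun-centred (x, y, z),
    with the positive x axis pointing towards l = 0, b = 0."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d_pc * np.cos(b) * np.cos(l)
    y = d_pc * np.cos(b) * np.sin(l)
    z = d_pc * np.sin(b)
    return x, y, z

print(sexagesimal_to_decimal(5, 30, 0))           # 5.5 hours
# Hypothetical cluster at l = 120 deg, b = -5 deg, d = 2500 pc:
print(galactic_to_cartesian(120.0, -5.0, 2500.0))
\end{verbatim}
Because the functions accept NumPy arrays, they can convert all 1777 clusters at once, mirroring the matrix operations the lab performs in Matlab.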
{"text": "\\SecDef{intro}{Introduction}\nAfter the introduction of \\emph{linear cryptanalysis} in~\\cite{EC:Matsui93} as a powerful method to attack symmetric cryptographic primitives, people started studying how to generalize this method in order to exploit \\emph{nonlinear approximations} for cryptanalysis, see, e.g.,~\\cite{EC:HarKraMas95} and~\\cite{EC:KnuRob96}. While it might be easier to find a nonlinear approximation over parts of the primitive, e.g., over an S-box of small size, a crucial problem in nonlinear cryptanalysis is to find nonlinear approximations that hold true for the \\emph{whole round function} of the primitive. An example  that exploits nonlinear approximations that are preserved over the whole round function is \\emph{bilinear cryptanalysis} over Feistel ciphers~\\cite{C:Courtois04}.\n\nMore recently, an interesting solution for the above problem was described by Todo, Leander and Sasaki in~\\cite{NonlinInv} for round functions that can be described in terms of an LS-design~\\cite{FSE:GLSV14}.  Let one round of a substitution-permutation cipher operating on $n$ S-boxes of $t$-bit length be given as depicted in \\FigRef{spn} and let the linear layer $L^{(t)} \\colon \\F_2^{nt} \\rightarrow \\F_2^{nt}$ only XOR the outputs of the S-boxes, i.e., each $(y_1,\\dots,y_{n})$ for $y_j \\in \\F_2^t$ is mapped to $(z_1,\\dots,z_{n})$ where $z_j = \\sum_{i=1}^{n}\\alpha_{i,j} y_i$ for particular $\\alpha_{i,j} \\in \\F_2$. In that case, $L^{(t)}$ can be  defined by $t$ parallel applications of the matrix $L$ given by\n\\[L = \\left[\\begin{array}{cccc} \\alpha_{1,1} & \\alpha_{1,2} & \\dots & \\alpha_{1,n} \\\\\n\\alpha_{2,1} & \\alpha_{2,2} & \\dots& \\alpha_{2,n} \\\\\n\\vdots & \\vdots & \\ddots& \\vdots \\\\\n\\alpha_{n,1} & \\alpha_{n,2} & \\dots & \\alpha_{n,n} \\\\\n\\end{array}\\right]\\;.\\]\nTodo \\etal{} observed that if $L$ is orthogonal, then for \\emph{any} $t$-bit Boolean function $f$ of algebraic degree less than or equal to $2$ and for any $y_1,\\ldots,y_n \\in \\F_2^t$ it is \n\\begin{equation}\n\\EqLabel{target}\n    f(y_1) + f(y_2) + \\dots + f(y_{n}) = f(z_1) + f(z_2) + \\dots + f(z_{n})\\;.\n\\end{equation}\nThis fact was used to successfully cryptanalyze the block ciphers Midori, Scream and iScream in a weak key setting. Indeed, if $f$ is any invariant function for the S-box $S$, i.e., if for all $x \\in \\F_2^t$, $f(x) = f(S(x))$, and if $\\deg(f) \\leq 2$, one obtains an invariant function for the whole round according to \\EqRef{target}.\n\nAn interesting question is whether the property of $L$ being orthogonal is also necessary for \\EqRef{target} to hold for all $f$ with degree upper-bounded by $2$. More generally, we would like to understand the necessary and sufficient properties of the linear layer that preserve such invariants in the case when $\\deg(f) \\leq d$ for $d > 2$. Although the existence of a non-trivial\\footnote{By \\emph{non-trivial}, we mean that the matrix of $L$ is not a permutation matrix.} linear layer for which \\EqRef{target} holds for all $f$ with $\\deg f \\leq d$ is totally unclear, such a construction would be of significant interest. 
On the one hand, it would deepen the knowledge of how to design strong symmetric cryptographic primitives and avoid possible attacks; on the other hand, it could be useful for designing symmetric \\emph{trapdoor ciphers} to be used as public-key schemes, see, e.g.,~\\cite{FSE:RijPre97,DBLP:conf/icics/PatarinG97a,DBLP:journals/iacr/BannierBF16}.\nThe idea would be to hide a nonlinear approximation as the trapdoor information. If the linear layer is designed such that it preserves \\emph{all invariants} of a special form, e.g., all functions of degree at most $d$, the specification of the linear layer would not leak more information on the particular invariant and thus on the trapdoor. There could also be applications besides cryptography, so the above problem might be of independent interest.\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.9]{\\PathFig{spn_round.pdf}}\n    \\FigDef{spn}{The round function of a substitution-permutation cipher based on an LS-design.}\n\\end{figure}\n\n\\subsection{Our Contribution} In this work we answer the above question and consider the case of $L \\in \\F_2^{n \\times m}$, i.e., the number of outputs might be different from the number of inputs. We precisely characterize the matrices that preserve \\emph{all} invariants of a form similar to the one given in \\EqRef{target}, i.e.,\n\\begin{equation}\n\\EqLabel{target_general}\n    f(y_1) + \\dots + f(y_{n}) = f(z_1) + \\dots + f(z_{m}) + f(0)\\cdot (m+n \\mod 2)\\;,\n\\end{equation}\nwhere the degree of $f$ is upper bounded by $d$. We call such matrices \\emph{degree-$d$ sum-invariant}. We show that such matrices correspond to $n$-bit Boolean functions of degree at most $n-d-1$ which admit no linear annihilators. We call the \\emph{supports} of such Boolean functions \\emph{degree-$d$ zero-sum sets of rank $n$}. This characterization is obtained in \\PropRef{zero_sum}, \\PropRef{inner_product} and \\PropRef{annihilator_construction}. Our results imply that $m \\geq n$ and, for the case of $d=2$, the property of $L$ being (semi-)orthogonal is not only sufficient, but also necessary. Moreover, we obtain an interesting characterization of orthogonal matrices over $\\F_2$, i.e., $L \\in \\F_2^{n \\times n}$ is orthogonal if and only if in every $2 \\times 2n$ submatrix of $\\left[\\begin{array}{c|c}I_n & L\\end{array}\\right]$, each column occurs an even number of times.\n\nBesides showing the link between degree-$d$ zero-sum sets and degree-$d$ sum-invariant matrices, we study degree-$d$ zero-sum sets of full rank in more detail. We are in particular interested in the smallest of such sets. Let $\\minzs{n}{d}$ denote the minimum number of elements in a degree-$d$ zero-sum set of rank $n$. The following theorem summarizes our main results.\n\n\\begin{theorem}\n\\ThmLabel{main}\nLet $n,d \\in \\ZZplus$ with $n > d \\geq 1$. Then the following properties of $\\minzs{n}{d}$ hold.\n \\renewcommand{\\labelenumi}{(\\roman{enumi})}\n\\begin{enumerate}\n    \\item $\\minzs{n}{d} = \\min\\{ \\wt(g) \\mid g \\in \\BF{n}{n-d-1} \\setminus \\{0\\} \\text{ with } g \\text{ having at most 1 affine annihilator} \\}$.\n    \\vspace{.3em}\n    \\item $\\minzs{n}{1} = n+2-(n \\mod 2)$ and, for $n = 4$ or $n > 5$, $\\minzs{n}{2} = 2n$.\n    \n    As exceptions, $\\minzs{3}{2} = 8$ and $\\minzs{5}{2} = 12$.\n    \\vspace{.3em}\n    \\item $\\minzs{d+1}{d} = \\minzs{d+2}{d} = 2^{d+1}$. Moreover, $\\minzs{d+3}{d} = 3 \\cdot 2^d$ and $\\minzs{2d+4}{d} = 2^{d+2}$. 
For $d+4 \\leq n \\leq 2d+3$,  \\[\\minzs{n}{d} = 2^{2d-n+4}(2^{n-d-2}-1)\\;.\\]\n    \\item for any fixed $d$, the sequence $\\minzs{n}{d}$ is increasing, i.e., $\\minzs{n+1}{d} \\geq \\minzs{n}{d}$.\n    \\vspace{.3em}\n    \\item for $n_1, n_2 > d$, the inequality \\[\\minzs{n_1+n_2}{d} \\leq \\minzs{n_1}{d} + \\minzs{n_2}{d}\\] holds. Moreover, for $d \\geq 2$, we have \\[\\minzs{n+d}{d-1} \\leq \\minzs{n}{d} \\leq 2 \\minzs{n-1}{d-1}\\;.\\]\n\\end{enumerate}\nThe last inequality implies that, for $n \\geq 4$,  $\\minzs{n}{3} \\geq 2n + 6$.\n\\end{theorem}\n\nWe prove the above values by providing a construction of the corresponding zero-sum sets (resp. Boolean functions). In cases where we only prove an upper bound, we provide a construction that meets this bound. \\TabRef{bounds} shows the values and bounds for $\\minzs{n}{d}$ for $n \\leq 30$ and $d \\leq 10$.\n\nThe last inequality in \\ThmRef{main} implies that any degree-$d$ sum-invariant matrix $L \\in \\F_2^{n \\times n}$ for $d \\geq 3$ must be a permutation matrix, i.e. an invertible matrix with exactly $n$ ones. In other words, the observation of Todo \\etal cannot be extended to higher-degree invariants without $L$ being expanding.\n\n\\subsection{Organization} In \\SecRef{prelim}, I fix notation specific to this chapter. I also recall the observations made in~\\cite{NonlinInv} with regard to orthogonal matrices and the preservation of degree-$2$ invariants. For motivating the remainder of the chapter, I directly present an example construction of an expanding linear mapping that preserves higher-degree invariants. \n\nIn \\SecRef{zerosum}, I show equivalent characterizations of degree-$d$ zero-sum sets and explain the links between degree-$d$ sum-invariant matrices and degree-$d$ zero-sum sets. \n\nMinimal degree-$d$ zero-sum sets are studied in \\SecRef{minimal}, where \\ThmRef{main} is proven. I further summarize the implications for degree-$d$ sum-invariant matrices in \\SecRef{implications}. Finally, the chapter is concluded in \\SecRef{conclusions}.\n\n", "meta": {"hexsha": "afbf20dbca018475924b8a64e2d0ee00754410dd", "size": 8443, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9niLinear/1intro.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9niLinear/1intro.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9niLinear/1intro.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 129.8923076923, "max_line_length": 979, "alphanum_fraction": 0.7205969442, "num_tokens": 2573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5822653780537046}}
{"text": "\\section{Two Other CRDTs: Counter and Set}\n\\label{sect.simple.crdts}\n\nTo demonstrate that our proof framework provides reusable components that significantly simplify SEC proofs for new algorithms, we show proofs for two other well-known operation-based CRDTs: the Observed-Remove Set (ORSet) and the Increment-Decrement Counter as described by \\citet{Shapiro:2011wy}.\nThese proofs build upon the abstract convergence theorem and the network model of Sections~\\ref{sect.abstract.convergence} and~\\ref{sect.network}, and reuse some of the proof techniques developed in the formalisation of RGA in Section~\\ref{sect.rga}.\n\nAs these proofs leverage the framework's machinery and proof techniques, we were able to develop them very quickly: the counter was proved correct in a matter of minutes, and the specification and correctness proof of the ORSet was done in about four hours by one of the authors, an Isabelle novice who had never used any proof assistant software prior to the start of this project.\nAlthough these anecdotes do not constitute a formal evaluation of ease of use, we take them as being an encouraging sign.\n\n\\subsection{Increment-Decrement Counter}\n\\label{subsect.increment-decrement.counter}\n\nThe Increment-Decrement Counter is perhaps the simplest CRDT, and a paradigmatic example of a replicated data structure with commutative operations.\nAs the name suggests, the data structure supports two operations: $\\isa{increment}$ and $\\isa{decrement}$ which respectively increment and decrement a shared integer counter:\n\\begin{isabelle}\n\\isacommand{datatype}\\ operation {\\isacharequal}\\ Increment\\ {\\isacharbar}\\ Decrement\n\\end{isabelle}\n\\noindent The interpretation function for these two operations is straightforward:\n\\begin{isabelle}\n~~~~{\\isachardoublequoteopen}counter{\\isacharunderscore}op\\ Decrement\\ \\=\\kill\n\\isacommand{fun}\\ counter{\\isacharunderscore}op\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}operation\\ {\\isasymRightarrow}\\ int\\ {\\isasymRightarrow}\\ int\\ option{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}counter{\\isacharunderscore}op\\ Increment\\>x\\ {\\isacharequal}\\ Some\\ {\\isacharparenleft}x\\ {\\isacharplus}\\ {\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\\ {\\isacharbar}\\\\\n~~~~{\\isachardoublequoteopen}counter{\\isacharunderscore}op\\ Decrement\\>x\\ {\\isacharequal}\\ Some\\ {\\isacharparenleft}x\\ {\\isacharminus}\\ {\\isadigit{1}}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nNote that the operations do not fail on under- or overflow, as they are defined on a type of unbounded (mathematical) integers.\nWe could also have implemented the counter using fixed-size integers---e.g. 
signed 32- or 64-bit machine words---with wrap-around on overflow, which would not have impacted the proofs.\nShowing commutativity of the operations is an easy exercise in applying Isabelle's proof automation:\n\\begin{isabelle}\n\\isacommand{lemma}\\ {\\isachardoublequoteopen}counter{\\isacharunderscore}op\\ x\\ {\\isasymrhd}\\ counter{\\isacharunderscore}op\\ y\\ {\\isacharequal}\\ counter{\\isacharunderscore}op\\ y\\ {\\isasymrhd}\\ counter{\\isacharunderscore}op\\ x{\\isachardoublequoteclose}\n\\end{isabelle}\nUnlike more complex CRDTs such as RGA, the operations of the increment-decrement counter commute unconditionally.\nAs a result, this CRDT converges in any asynchronous broadcast network, without requiring causally ordered delivery.\nFor simplicity, we define $\\isa{counter}$ as a simple extension of our existing $\\isa{network}{\\isacharunderscore}\\isa{with}{\\isacharunderscore}\\isa{ops}$ locale.\nWe need only specify the interpretation function and the initial state 0:\n\\begin{isabelle}\n\\isacommand{locale}\\ counter\\ {\\isacharequal}\\ network{\\isacharunderscore}with{\\isacharunderscore}ops\\ {\\isacharunderscore}\\ counter{\\isacharunderscore}op\\ {\\isadigit{0}}\n\\end{isabelle}\nIt is then straightforward to prove that $\\isa{counter}$ is a sublocale of $\\isa{strong-eventual-consistency}$ (see Section~\\ref{sect.abstract.sec.spec}), from which we obtain concrete convergence and progress theorems for the counter CRDT.\n\n\\subsection{Observed-Remove Set}\n\\label{subsect.orset}\n\nThe Observed-Remove Set (ORSet) is a well-known CRDT for implementing replicated sets, supporting two operations: \\emph{adding} and \\emph{removing} arbitrary elements in the set.\nIt has mostly been studied in its state-based formulation \\cite{Bieniusa:2012wu,Bieniusa:2012gt,Brown:2014hs,Zeller:2014fl}, but here we use the operation-based formulation as described by \\citet{Shapiro:2011wy}.\nThe name derives from the fact that the algorithm ``observes'' the state of a node when removing an element from the set, as explained below.\n\nWe start by defining the two possible operations of the datatype:\n\\begin{isabelle}\n\\isacommand{datatype}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ operation\\ {\\isacharequal}\\ Add\\ {\\isachardoublequoteopen}{\\isacharprime}id{\\isachardoublequoteclose}\\ {\\isachardoublequoteopen}{\\isacharprime}a{\\isachardoublequoteclose}\\ {\\isacharbar}\\ Rem\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id\\ set{\\isacharparenright}{\\isachardoublequoteclose}\\ {\\isachardoublequoteopen}{\\isacharprime}a{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent Here, $\\isacharprime\\isa{id}$ is an abstract type of message identifiers, and the type variable $\\isacharprime\\isa{a}$ represents the type of values that the application wishes to add to the set.\nWhen an element $\\isa{e}$ is added to the set, the operation $\\isa{Add}\\ i\\ e$ is tagged with a unique identifier $\\isa{i}$ in order to distinguish it from other operations that may concurrently add the same element $\\isa{e}$ to the set.\nWhen an element $\\isa{e}$ is removed from the set, the operation $\\isa{Rem}\\ is\\ e$ contains a set of identifiers $\\isa{is}$, identifying all of the additions of that element that causally happened-before the removal.\n\nThe state maintained at each node is a function that maps each element $\\isacharprime\\isa{a}$ to the set of identifiers of operations that have added that 
element:\n\\begin{isabelle}\n\\isacommand{type{\\isacharunderscore}synonym}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ state\\ {\\isacharequal}\\ {\\isachardoublequoteopen}{\\isacharprime}a\\ {\\isasymRightarrow}\\ {\\isacharprime}id\\ set{\\isachardoublequoteclose}\n\\end{isabelle}\nWe consider an element $\\isacharprime\\isa{a}$ to be a member of the ORSet if the set of addition identifiers is non-empty.\nThe initial state of a node---the empty ORSet---is then simply $\\isasymlambda\\isa{x}\\isachardot\\ \\isacharbraceleft\\isacharbraceright$, i.e. the function that maps every possible element $\\isacharprime\\isa{a}$ to the empty set of identifiers $\\isacharbraceleft\\isacharbraceright$.\n\nWhen interpreting an $\\isa{Add}$ operation, we must add the identifier of that operation to the node state.\nWhen interpreting a $\\isa{Rem}$ operation, we must update the node state to remove all causally prior $\\isa{Add}$ identifiers.\nIf there are no concurrent additions of the same element, this has the effect of making the set of identifiers for that element empty, and thus considering the element as no longer being in the set.\nWe express this as follows:\n\\begin{isabelle}\n~~~~~~~~let\\ \\=before\\ \\={\\isacharequal}\\ case\\ oper\\ of\\ \\=Rem\\ \\=is\\ \\=e\\kill\n\\isacommand{definition}\\ op{\\isacharunderscore}elem\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ operation\\ {\\isasymRightarrow}\\ {\\isacharprime}a{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}op{\\isacharunderscore}elem\\ oper\\ {\\isasymequiv}\\ case\\ oper\\ of\\ Add\\ i\\ e\\ {\\isasymRightarrow}\\ e\\ {\\isacharbar}\\ Rem\\ is\\ e\\ {\\isasymRightarrow}\\ e{\\isachardoublequoteclose}\\\\[4pt]\n\\isacommand{definition}\\ interpret{\\isacharunderscore}op\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ operation\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ state\\ {\\isasymRightarrow}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ state\\ option{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}interpret{\\isacharunderscore}op\\ oper\\ state\\ {\\isasymequiv}\\\\\n~~~~~~~~let\\>before\\>{\\isacharequal}\\ state\\ {\\isacharparenleft}op{\\isacharunderscore}elem\\ oper{\\isacharparenright}{\\isacharsemicolon}\\\\\n\\>after\\>{\\isacharequal}\\ case\\ oper\\ of \\>Add\\>i\\>e\\ {\\isasymRightarrow}\\ before\\ {\\isasymunion}\\ {\\isacharbraceleft}i{\\isacharbraceright}\\ {\\isacharbar}\\\\\n\\>\\>\\>Rem\\>is\\>e\\ {\\isasymRightarrow}\\ before\\ {\\isacharminus}\\ is\\\\\n~~~~~~~~in \\>Some\\ {\\isacharparenleft}state\\ {\\isacharparenleft}{\\isacharparenleft}op{\\isacharunderscore}elem\\ oper{\\isacharparenright}\\ {\\isacharcolon}{\\isacharequal}\\ after{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\nHere, $\\isa{state}{\\isacharparenleft}{\\isacharparenleft}op{\\isacharunderscore}elem\\ oper{\\isacharparenright}\\ {\\isacharcolon}{\\isacharequal}\\ \\isa{after}{\\isacharparenright}$ is Isabelle's syntax for pointwise function update.\nA remove operation effectively undoes the prior additions of that 
element of the set, while leaving any concurrent or later additions of the same element unaffected.\nWhen an element $e$ is concurrently added and removed, the identifier of the addition operation will not be in the identifier set of the removal operation.\nAs a result, the final state after interpreting these two operations will contain the element $e$.\n\nAs the last part of specifying ORSet, we must require that $\\isa{Add}$ and $\\isa{Rem}$ use identifiers correctly.\nWe require the identifier of $\\isa{Add}$ operations to be globally unique, which we can express by making it equal to the unique ID of the message containing the operation (Section~\\ref{sect.network.broadcast}).\nA $\\isa{Rem}$ operation must contain the set of addition identifiers in the node state at the moment when the $\\isa{Rem}$ operation was issued.\nWe express these constraints using the following $\\isa{valid-behaviours}$ predicate:\n\\begin{isabelle}\n~~~~~~~~case\\ msg\\ of\\ \\={\\isacharparenleft}i{\\isacharcomma}\\ Rem\\ \\=is\\ \\=e\\kill\n\\isacommand{definition}\\ valid{\\isacharunderscore}behaviours\\ {\\isacharcolon}{\\isacharcolon}\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ state\\ {\\isasymRightarrow}\\ {\\isacharprime}id\\ {\\isasymtimes}\\ {\\isacharparenleft}{\\isacharprime}id{\\isacharcomma}\\ {\\isacharprime}a{\\isacharparenright}\\ operation\\ {\\isasymRightarrow}\\ bool{\\isachardoublequoteclose}\\ \\isakeyword{where}\\\\\n~~~~{\\isachardoublequoteopen}valid{\\isacharunderscore}behaviours\\ state\\ msg\\ {\\isasymequiv}\\\\\n~~~~~~~~case\\ msg\\ of \\>{\\isacharparenleft}i{\\isacharcomma}\\ Add\\>j\\>e{\\isacharparenright}\\ {\\isasymRightarrow}\\ i\\ {\\isacharequal}\\ j\\ {\\isacharbar}\\\\\n\\>{\\isacharparenleft}i{\\isacharcomma}\\ Rem\\>is\\>e{\\isacharparenright}\\ {\\isasymRightarrow}\\ is\\ {\\isacharequal}\\ state\\ e{\\isachardoublequoteclose}\n\\end{isabelle}\nTo prove that ORSet satisfies the specification of strong eventual consistency, we follow the same pattern as before.\nWe first define a locale $\\isa{orset}$ that extends $\\isa{network-with-constrained-ops}$:\n\\begin{isabelle}\n\\isacommand{locale}\\ orset\\ {\\isacharequal}\\ network{\\isacharunderscore}with{\\isacharunderscore}constrained{\\isacharunderscore}ops\\ {\\isacharunderscore}\\ interpret{\\isacharunderscore}op\\ {\\isachardoublequoteopen}{\\isacharparenleft}{\\isasymlambda}x{\\isachardot}\\ {\\isacharbraceleft}{\\isacharbraceright}{\\isacharparenright}{\\isachardoublequoteclose}\\ valid{\\isacharunderscore}behaviours\n\\end{isabelle}\n\n\\noindent \nRecall the requirements of the $\\isa{strong-eventual-consistency}$ specification (Section~\\ref{sect.abstract.sec.spec}).\nFirstly, we must show that $\\isa{apply-operations}$ never fails, which is easy in this case, since the interpretation function never returns $\\isa{None}$:\n\\begin{isabelle}\n\\isacommand{theorem}\\ apply{\\isacharunderscore}operations{\\isacharunderscore}never{\\isacharunderscore}fails{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ {\\isachardoublequoteopen}hb.apply{\\isacharunderscore}operations\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isacharparenright}\\ {\\isasymnoteq}\\ None{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent Secondly, we must show that concurrent operations commute.\nIsabelle's proof automation can easily verify that two addition 
operations commute unconditionally, as do two removal operations:\n\\begin{isabelle}\n\\isacommand{lemma}\\ add{\\isacharunderscore}add{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ {\\isachardoublequoteopen}{\\isasymlangle}Add\\ i{\\isadigit{1}}\\ e{\\isadigit{1}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Add\\ i{\\isadigit{2}}\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isacharequal}\\ {\\isasymlangle}Add\\ i{\\isadigit{2}}\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Add\\ i{\\isadigit{1}}\\ e{\\isadigit{1}}{\\isasymrangle}{\\isachardoublequoteclose}\\\\[4pt]\n\\isacommand{lemma}\\ rem{\\isacharunderscore}rem{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ {\\isachardoublequoteopen}{\\isasymlangle}Rem\\ i{\\isadigit{1}}\\ e{\\isadigit{1}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Rem\\ i{\\isadigit{2}}\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isacharequal}\\ {\\isasymlangle}Rem\\ i{\\isadigit{2}}\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Rem\\ i{\\isadigit{1}}\\ e{\\isadigit{1}}{\\isasymrangle}{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent However, add and remove operations commute only if the identifier of the addition is not one of the identifiers affected by the removal:\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ \\=\\kill\n\\isacommand{lemma}\\ add{\\isacharunderscore}rem{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}i\\ {\\isasymnotin}\\ is{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}{\\isasymlangle}Add\\ i\\ e{\\isadigit{1}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Rem\\ is\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isacharequal}\\ {\\isasymlangle}Rem\\ is\\ e{\\isadigit{2}}{\\isasymrangle}\\ {\\isasymrhd}\\ {\\isasymlangle}Add\\ i\\ e{\\isadigit{1}}{\\isasymrangle}{\\isachardoublequoteclose}\n\\end{isabelle}\n\nProving that the assumption $\\isa{i} \\mathbin{\\isasymnotin} \\isa{is}$ holds for all concurrent $\\isa{Add}$ and $\\isa{Rem}$ operations is a bit more laborious.\nWe define \\isa{added-ids} to be the identifiers of all $\\isa{Add}$ operations in a list of delivery events, even if those elements are subsequently removed.\nThen we prove that the set of identifiers in the node state is a subset of \\isa{added-ids} (since $\\isa{Add}$ operations only ever add identifiers to the node state, and $\\isa{Rem}$ operations only ever remove identifiers):\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ \\=\\kill\n\\isacommand{lemma}\\ apply{\\isacharunderscore}operations{\\isacharunderscore}added{\\isacharunderscore}ids{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}{\\isasymexists}suf{\\isachardot}\\ pre\\ {\\isacharat}\\ suf\\ {\\isacharequal}\\ history\\ i{\\isachardoublequoteclose}\\\\\n~~~~~~~~\\isakeyword{and}\\>{\\isachardoublequoteopen}apply{\\isacharunderscore}operations\\ pre\\ {\\isacharequal}\\ Some\\ state{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}state\\ e\\ {\\isasymsubseteq}\\ set\\ {\\isacharparenleft}added{\\isacharunderscore}ids\\ pre\\ e{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent From this lemma, we deduce that when an $\\isa{Add}$ and a $\\isa{Rem}$ operation are concurrent, the identifier of the $\\isa{Add}$ cannot be in the set of identifiers removed by $\\isa{Rem}$:\n\\begin{isabelle}\n~~~~\\isakeyword{assumes}\\ 
\\={\\isachardoublequoteopen}Rem\\ \\=is\\ \\=e{\\isadigit{2}}\\ \\={\\isasymin}\\kill\n\\isacommand{lemma}\\ concurrent{\\isacharunderscore}add{\\isacharunderscore}remove{\\isacharunderscore}independent{\\isacharcolon}\\\\\n~~~~\\isakeyword{assumes}\\>{\\isachardoublequoteopen}{\\isacharparenleft}Add\\ i\\ e{\\isadigit{1}}{\\isacharparenright}\\ $\\|$ {\\isacharparenleft}Rem\\ is\\ e{\\isadigit{2}}{\\isacharparenright}{\\isachardoublequoteclose}\\ \\\\\n~~~~~~~~\\isakeyword{and}\\>{\\isachardoublequoteopen}Add\\>i\\>e{\\isadigit{1}}\\>{\\isasymin}\\ set\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ j{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~~~~~\\isakeyword{and}\\>{\\isachardoublequoteopen}Rem\\>is\\>e{\\isadigit{2}}\\>{\\isasymin}\\ set\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ j{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\\\\\n~~~~\\isakeyword{shows}\\>{\\isachardoublequoteopen}i\\ {\\isasymnotin}\\ is{\\isachardoublequoteclose}\n\\end{isabelle}\n\\noindent Now that we have proved that the assumption of $\\isa{add-rem-commute}$ holds for all concurrent operations, we can deduce that all concurrent operations commute:\n\\begin{isabelle}\n\\isacommand{theorem}\\ concurrent{\\isacharunderscore}operations{\\isacharunderscore}commute{\\isacharcolon}\\\\\n~~~~\\isakeyword{shows}\\ \\ {\\isachardoublequoteopen}hb{\\isachardot}concurrent{\\isacharunderscore}ops{\\isacharunderscore}commute\\ {\\isacharparenleft}node{\\isacharunderscore}deliver{\\isacharunderscore}messages\\ {\\isacharparenleft}history\\ i{\\isacharparenright}{\\isacharparenright}{\\isachardoublequoteclose}\n\\end{isabelle}\n\nHaving proved $\\isa{apply-operations-never-fails}$ and $\\isa{concurrent-operations-commute}$, we can now immediately prove that $\\isa{orset}$ is a sublocale of $\\isa{strong-eventual-consistency}$, using the familiar proof pattern from the other CRDTs.\nThis proof produces concrete convergence and progress theorems for the ORSet.\n", "meta": {"hexsha": "0b03400476fba123c5d9e736a701c6ea43cf9222", "size": 18065, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/simple-crdts.tex", "max_stars_repo_name": "trvedata/crdt-isabelle", "max_stars_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 62, "max_stars_repo_stars_event_min_datetime": "2017-07-09T22:49:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-28T07:18:10.000Z", "max_issues_repo_path": "paper/simple-crdts.tex", "max_issues_repo_name": "trvedata/crdt-isabelle", "max_issues_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-29T14:33:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-17T09:22:42.000Z", "max_forks_repo_path": "paper/simple-crdts.tex", "max_forks_repo_name": "trvedata/crdt-isabelle", "max_forks_repo_head_hexsha": "8fde89bfb5e88acce000ac26e45f3f1cfe334e6e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-03-03T00:36:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-08T02:18:48.000Z", "avg_line_length": 121.2416107383, "max_line_length": 509, "alphanum_fraction": 0.7780791586, "num_tokens": 5328, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059609645724, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5822345820721122}}
{"text": "\\section{Problem Definition}\r\n\\subsection{Graph Problems}\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n\t\\begin{itemize}\r\n\t\t\\item Nodes are devices\r\n\t\t\\item Edges indicate which devices a device is in range of\r\n\t\\end{itemize}\r\n\\end{frame}\r\n\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture}  [\r\n        node distance = 1 cm, \r\n        vertex/.style = {circle, draw, fill=blue!10}, \r\n        label/.style={fill=white},\r\n        <->\r\n    ]\r\n\r\n    \\node[draw=none](0){};\r\n    \\node[vertex, left  = 2cm of 0]   (1) {$v_1$};\r\n    \\node[vertex, above =     of 0]   (2) {$v_2$};\r\n    \\node[vertex, right = 2cm of 0]   (3) {$v_3$};\r\n    \\node[vertex, below =     of 0]   (4) {$v_4$};\r\n    \r\n    \\draw (1) -- (2);\r\n    \\draw (1) -- (3);\r\n    \\draw (1) -- (4);\r\n    \\draw (2) -- (3);\r\n    \\draw (2) -- (4);\r\n    \\draw (3) -- (4);\r\n\r\n\\end{tikzpicture}\r\n\\end{figure}\r\nCompletely Connected Reliable Communication (CCRC)\r\n\\end{frame}\r\n\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture}  [\r\n        node distance = 1 cm, \r\n        vertex/.style = {circle, draw, fill=blue!10}, \r\n        label/.style={fill=white},\r\n        <->\r\n    ]\r\n\r\n    \\node[draw=none](0){};\r\n    \\node[vertex, left  = 2cm of 0]   (1) {$v_1$};\r\n    \\node[vertex, above =     of 0]   (2) {$v_2$};\r\n    \\node[vertex, right = 2cm of 0]   (3) {$v_3$};\r\n    \\node[vertex, below =     of 0]   (4) {$v_4$};\r\n    \r\n    \\draw (1) -- (2);\r\n    \\draw (1) -- (4);\r\n    \\draw (2) -- (3);\r\n    \\draw (2) -- (4);\r\n    \\draw (3) -- (4);\r\n\r\n\\end{tikzpicture}\r\n\\end{figure}\r\nStrongly Connected Reliable Communication (SCRC)\r\n\\end{frame}\r\n\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n    \\begin{itemize}\r\n        \\item Nodes are devices\r\n        \\item Edges indicate which devices a device is in range of\r\n        \\item Weights indicate the chance of a transmission being received\r\n    \\end{itemize}\r\n\\end{frame}\r\n\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture} [\r\n        node distance = 1 cm, \r\n        vertex/.style = {circle, draw, fill=blue!10}, \r\n        edge/.style = {draw, -stealth, shorten >= 1pt},\r\n        label/.style={fill=white}\r\n    ]\r\n\r\n    \\node[draw=none](0){};\r\n    \\node[vertex, left =  3cm of 0]   (1) {$v_1$};\r\n    \\node[vertex, above=  2cm of 0]   (2) {$v_2$};\r\n    \\node[vertex, right=  3cm of 0]   (3) {$v_3$};\r\n    \\node[vertex, below=  2cm of 0]   (4) {$v_4$};\r\n    \r\n    \\path[edge] (1) edge[bend left=15] node [label] {0.7} (2) edge[bend left=15] node [label] {0.5} (4);\r\n    \\path[edge] (1) edge[bend left=15] node [label] {0.7} (2) edge[bend left=15] node [label] {0.3} (3);\r\n    \\path[edge] (3) edge[bend left=15] node [label] {0.7} (2) edge[bend left=15] node [label] {0.3} (1);\r\n    \\path[edge] (2) edge[bend left=15] node [label] {0.7} (1) edge[bend left=15] node [label] {0.5} (4);\r\n    \\path[edge] (4) edge[bend left=15] node [label] {0.7} (1) edge[bend left=15] node [label] {0.5} (2);\r\n    \\path[edge] (2) edge[bend left=15] node [label] {0.8} (1) edge[bend left=15] node [label] {0.7} (3);\r\n    \\path[edge] (3) edge[bend 
left=15] node [label] {0.9} (2) edge[bend left=15] node [label] {0.8} (4);\r\n    \\path[edge] (4) edge[bend left=15] node [label] {0.4} (1) edge[bend left=15] node [label] {0.8} (3);\r\n    \r\n\\end{tikzpicture}\r\n\\end{figure}\r\nCompletely Connected Unreliable Communication (CCUC)\r\n\\end{frame}\r\n\r\n\r\n\\begin{frame}{The Problem Presented as Graphs}\r\n\\framesubtitle{How?}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture} [\r\n        node distance = 1 cm, \r\n        vertex/.style = {circle, draw, fill=blue!10}, \r\n        edge/.style = {draw, -stealth, shorten >= 1pt},\r\n        label/.style={fill=white}\r\n    ]\r\n\r\n    \\node[draw=none](0){};\r\n    \\node[vertex, left =  2cm of 0]   (1) {$v_1$};\r\n    \\node[vertex, above=      of 0]   (2) {$v_2$};\r\n    \\node[vertex, right=  2cm of 0]   (3) {$v_3$};\r\n    \\node[vertex, below=      of 0]   (4) {$v_4$};\r\n    \r\n    \\path[edge] (1) edge[bend left=15] node [label] {0.7} (2) edge[bend left=15] node [label] {0.5} (4);\r\n    \\path[edge] (2) edge[bend left=15] node [label] {0.8} (1) edge[bend left=15] node [label] {0.7} (3);\r\n    \\path[edge] (3) edge[bend left=15] node [label] {0.9} (2) edge[bend left=15] node [label] {0.8} (4);\r\n    \\path[edge] (4) edge[bend left=15] node [label] {0.4} (1) edge[bend left=15] node [label] {0.8} (3);\r\n    \r\n\\end{tikzpicture}\r\n\\end{figure}\r\nStrongly Connected Unreliable Communication (SCUC)\r\n\\end{frame}\r\n\r\n\r\n\r\n\\subsection{TDMA Problem}\r\n\\begin{frame}{Time Division Multiple Access (TDMA)}\r\n\\framesubtitle{The generic method}\r\n\\begin{itemize}\r\n\t\\item One frequency\r\n\t\\item Using the frequency in turns\r\n\t\\item Frame and time-slots\r\n\\end{itemize}\r\n\\vspace{-15pt}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture}[thick,scale=0.6, every node/.style={scale=0.6}]\r\n    \\path[draw] (-5,0) -- (-5,1) -- (5,1) -- (5,0) -- (-5,0)\r\n                (-3,1) -- (-3,0)\r\n                (-1,1) -- (-1,0)\r\n                (1,1) -- (1,0)\r\n                (3,1) -- (3,0);\r\n                \r\n    \\node at (-4,0.5) {Device A};\r\n    \\node at (-2,0.5) {Device B};\r\n    \\node at (0,0.5) {Device C};\r\n    \\node at (2,0.5) {Device D};\r\n    \\node at (4,0.5) {Device E};\r\n    \r\n    \\path[draw, dashed] (-1,1) -- (-5,2)\r\n                        (1,1) -- (5,2);\r\n    \r\n    \\path[draw] (-5,2) -- (-5,3) -- (5,3) -- (5,2) -- (-5,2)\r\n                (-3.8,2) -- (-3.8,3)\r\n                (-2.6,2) -- (-2.6,3)\r\n                (-1.2,2) -- (-1.2,3)\r\n                (4,2) -- (4,3);\r\n    \r\n    \\node[text width=1cm, align=center] at (-4.4,2.5) {Guard\\\\time};\r\n    \\node at (-3.2,2.5) {Sync};\r\n    \\node at (-1.9,2.5) {Control};\r\n    \\node at (1.5,2.5) {Transmission Data};\r\n    \\node[text width=1cm, align=center] at (4.5,2.5) {CRC\\\\check};\r\n    \r\n    \\path (-5,4) -- node{\\textbf{(b)} Typical time slot (some fields may not be used)} (5,3);\r\n    \r\n    \\path (5,-1) -- node{\\textbf{(a)} A frame with five time slots} (-5,0);\r\n\\end{tikzpicture}\r\n\\end{figure}\r\n\\vspace{-15pt}\r\nFor our project:\r\n\t\\begin{itemize}\r\n\t\t\\item Devices should be able to join\r\n\t\t\\item The number of devices is not known in advance\r\n\t\\end{itemize}\r\n\\end{frame}\r\n\r\n\\section{CCRC}\r\n\\begin{frame}{Design of CCRC}\r\n\\framesubtitle{Changes from generic TDMA}\r\n\r\n\\begin{itemize}\r\n\t\\item The TDMA solution for CCRC\r\n\t\\item Joining devices wait for Empty-Slot and then connect to the 
network\r\n\\end{itemize}\r\n\\begin{figure}\r\n\\centering\r\n\\begin{tikzpicture}[thick,scale=0.6, every node/.style={scale=0.6}]\r\n    \\path[draw] (-5,0) -- (-5,1) -- (5,1) -- (5,0) -- (-5,0)\r\n                (-3,1) -- (-3,0)\r\n                (-1,1) -- (-1,0)\r\n                (1,1) -- (1,0)\r\n                (3,1) -- (3,0);\r\n                \r\n    \\node at (-4,0.5) {Device A};\r\n    \\node at (-2,0.5) {Device B};\r\n    \\node at (0,0.5) {Device C};\r\n    \\node at (2,0.5) {Device D};\r\n    \\node at (4,0.5) {Empty-Slot};\r\n    \r\n    \\path[draw, dashed] (-1,1) -- (-5,2)\r\n                        (1,1) -- (6.7,2);\r\n    \r\n    \\path[draw] (-5,2) -- (-5,3) -- (6.6,3) -- (6.6,2) -- (-5,2)\r\n                (-1,2) -- (-1,3)\r\n                (5,2) -- (5,3)\r\n                (1,2) -- (1,3)\r\n                (-3,2) -- (-3,3);\r\n\r\n    \\node at (-4,2.5) {Guard time};\r\n    \\node at (-2,2.5) {ASK Sync};\r\n    \\node at (0,2.5) {Control};\r\n    \\node at (3,2.5) {Transmission Data};\r\n     \\node[text width=1cm, align=center] at (5.8,2.5) {CRC\\\\check};\r\n\r\n    \\path[draw, dashed] (1,3) -- (-6,4)\r\n                        (5,3) -- (7.5,4);\r\n\r\n    \\path[draw] (-6,4) -- (-6,5) -- (7.5,5) -- (7.5,4) -- (-6,4)\r\n    \t\t\t(-3.4,4) -- (-3.4,5)\r\n    \t\t\t(-1.2,4) -- (-1.2,5)\r\n    \t\t\t(0.8,4) -- (0.8,5);\r\n\r\n    \\node at (-4.7,4.5) {Current Slot};\r\n    \\node at (-2.3,4.5) {Slot Count};\r\n    \\node[text width=1cm, align=center] at (-0.35,4.5) {Address};\r\n    \\node[text width=1cm, align=center] at (4.0,4.5) {Data};\r\n    \r\n    \\path (5,-1) -- node{Figure is not to scale} (-5,0);\r\n\r\n\\end{tikzpicture}\r\n\\end{figure}\r\n\\begin{itemize}\r\n\\item Presentation of CCRC in UPPAAL\r\n\\end{itemize}\r\n\\end{frame}\r\n", "meta": {"hexsha": "c14ebc824a4523518c931dcbf9554777de3a3308", "size": 8199, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/SW5/soren.tex", "max_stars_repo_name": "SW618/SW6-Presentation", "max_stars_repo_head_hexsha": "d7448e6b12085924892f8e2d14edfee8d1794b6b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/SW5/soren.tex", "max_issues_repo_name": "SW618/SW6-Presentation", "max_issues_repo_head_hexsha": "d7448e6b12085924892f8e2d14edfee8d1794b6b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/SW5/soren.tex", "max_forks_repo_name": "SW618/SW6-Presentation", "max_forks_repo_head_hexsha": "d7448e6b12085924892f8e2d14edfee8d1794b6b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9277108434, "max_line_length": 105, "alphanum_fraction": 0.5184778632, "num_tokens": 3076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5822345702785424}}
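The slot-claiming rule described on the CCRC slide above can be illustrated with a short Python sketch (ours, purely illustrative; the frame size and device names are hypothetical, not taken from the project):\r\n\\begin{verbatim}\r\n# A frame is a fixed list of slot owners; None marks an Empty-Slot.\r\n# A joining device waits for an Empty-Slot and claims the first one\r\n# it observes, as described on the slide.\r\ndef join_network(frame, device_id):\r\n    for slot, owner in enumerate(frame):\r\n        if owner is None:\r\n            frame[slot] = device_id   # claim the empty slot\r\n            return slot\r\n    return None   # no Empty-Slot this frame; wait for the next frame\r\n\r\nframe = [\"A\", \"B\", \"C\", \"D\", None]   # five slots, as in the figure\r\nprint(join_network(frame, \"E\"))       # -> 4\r\n\\end{verbatim}\r\nCollisions between two devices claiming the same Empty-Slot in the same frame are not handled in this sketch; the slides present the full CCRC design in UPPAAL.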
{"text": "\\subsection{Times Table} % (fold)\n\\label{sub:times_table_flow}\n\nThis program prints out the times table for a number entered by the user, displaying from 1 x n to 10 x n. The description of the program is in Table \\ref{tbl:flow-times-table}, the pseudocode in Listing \\ref{lst:flow-times-pseudo}, the C code in Listing \\ref{lst:flow-times-c}, and the Pascal code in Listing \\ref{lst:flow-times-pas}.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{l|p{10cm}}\n  \\hline\n  \\multicolumn{2}{c}{\\textbf{Program Description}} \\\\\n  \\hline\n  \\textbf{Name} & \\emph{Times Table} \\\\\n  \\\\\n  \\textbf{Description} & Displays the Times Table from 1 x n to 10 x n. \\\\\n  \\hline\n\\end{tabular}\n\\caption{Description of the Times Table program}\n\\label{tbl:flow-times-table}\n\\end{table}\n\n\\pseudocode{lst:flow-times-pseudo}{Pseudocode for Times Table program.}{./topics/control-flow/examples/times-table.txt}\n\n\\mynote{\nThis is an updated version of the Seven Times Table Program. See \\sref{sub:times_table} \\nameref{sub:times_table}.\n}\n\n\\clearpage\n\n\\csection{\\ccode{lst:flow-times-c}{C Times Table}{topics/control-flow/examples/times_table.c}}\n\n\\passection{\\pascode{lst:flow-times-pas}{Pascal Times Table}{topics/control-flow/examples/TimesTable.pas}}\n\n% subsection times_table (end)\n\n\\clearpage\n\\subsection{Circle Area} % (fold)\n\\label{sub:circle_area_control_flow}\n\nThis program prints out the area of a circle. The description of the program is in Table \\ref{tbl:flow-circle-area}, the pseudocode in Listing \\ref{lst:flow-circle-areas-pseudo}, the C code in Listing \\ref{lst:flow-circle-areas-c}, and the Pascal code in Listing \\ref{lst:flow-circle-areas-pas}.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{l|p{10cm}}\n  \\hline\n  \\multicolumn{2}{c}{\\textbf{Program Description}} \\\\\n  \\hline\n  \\textbf{Name} & \\emph{Circle Areas} \\\\\n  \\\\\n  \\textbf{Description} & Displays the Circle Areas for circles with radius from 1.0 to 5.0 with increments of 0.1. \\\\\n  \\hline\n\\end{tabular}\n\\caption{Description of the Circle Areas program}\n\\label{tbl:flow-circle-area}\n\\end{table}\n\n\\pseudocode{lst:flow-circle-areas-pseudo}{Pseudocode for Circle Areas program.}{./topics/control-flow/examples/circle_areas.txt}\n\n\\mynote{\nThis is an updated version of the Circle Areas Program. See Section \\ref{sub:circle_area_data} \\nameref{sub:circle_area_data}.\n}\n\n\n\\clearpage\n\n\\csection{\\ccode{lst:flow-circle-areas-c}{C Circle Areas}{topics/control-flow/examples/circle_areas.c}}\n\n\\passection{\\pascode{lst:flow-circle-areas-pas}{Pascal Circle Areas}{topics/control-flow/examples/CircleAreas.pas}}\n\n% subsection circle_area (end)\n\n\\clearpage\n\\subsection{Moving Rectangle} % (fold)\n\\label{sub:moving_rectangle}\n\nThis example SwinGame code will move a rectangle back and forth across the screen.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{l|p{10cm}}\n  \\hline\n  \\multicolumn{2}{c}{\\textbf{Program Description}} \\\\\n  \\hline\n  \\textbf{Name} & \\emph{Moving Rectangle} \\\\\n  \\\\\n  \\textbf{Description} & Displays a rectangle that is moved back and forth across the screen. 
\\\\\n  \\hline\n\\end{tabular}\n\\caption{Description of the Moving Rectangle program}\n\\label{tbl:flow-moving-rect}\n\\end{table}\n\n\\begin{figure}[h]\n   \\centering\n   \\includegraphics[width=0.8\\textwidth]{./topics/control-flow/examples/MovingRect.png} \n   \\caption{Example execution of the Moving Rectangle program}\n   \\label{fig:moving-rect-img}\n\\end{figure}\n\n\n\\clearpage\n\n\\cppsection{\\ccode{clst:moving-rect}{C++ Moving Rect SwinGame code}{topics/control-flow/examples/moving-rect.cpp}}  \n\n\\passection{\\pascode{plst:moving-rect}{Pascal Moving Rect SwinGame code}{topics/control-flow/examples/MovingRect.pas}}  \n\n\n% subsection moving_rectangle (end)\n\n\\clearpage\n\\subsection{Button Click in SwinGame} % (fold)\n\\label{sub:button_click_in_swingame}\n\nThis example SwinGame code draws a rectangle that the user can `click'.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{l|p{10cm}}\n  \\hline\n  \\multicolumn{2}{c}{\\textbf{Program Description}} \\\\\n  \\hline\n  \\textbf{Name} & \\emph{Button Click} \\\\\n  \\\\\n  \\textbf{Description} & Displays a rectangle the user can `click'. Having the mouse held down over the rectangle changes it to a filled rectangle. Clicking the rectangle shows the text `Clicked' in the top left corner. \\\\\n  \\hline\n\\end{tabular}\n\\caption{Description of the Button Click program}\n\\label{tbl:flow-button-click}\n\\end{table}\n\n\\begin{figure}[h]\n   \\centering\n   \\includegraphics[width=0.8\\textwidth]{./topics/control-flow/examples/ButtonClick.png} \n   \\caption{Example execution of the Button Click program}\n   \\label{fig:button-click-img}\n\\end{figure}\n\n\n\\clearpage\n\n\\cppsection{\\ccode{clst:swingame_button}{C++ Simple Button using SwinGame code}{topics/control-flow/examples/button-click.cpp}}  \n\n\\passection{\\pascode{plst:button-click}{Pascal Button Click code}{topics/control-flow/examples/ButtonClick.pas}}  \n\n\n% subsection button_click_in_swingame (end)\n\n% \\clearpage\n% \\subsection{Rocket Launch} % (fold)\n% \\label{sub:rocket_launch}\n% \n% \\cppsection{\\ccode{clst:rocket_launch}{C++ Rocket Launch using SwinGame code (Continues in \\lref{clst:rocket_launch1}) }{topics/control-flow/examples/rocket_launch.cpp}}\n% \n% \\begin{figure}[p]\n%   \\cppsection{\\ccode{clst:rocket_launch1}{C++ Rocket Launch using SwinGame code (cont.) }{topics/control-flow/examples/rocket_launch1.cpp}}\n% \\end{figure}\n% \n% \\begin{figure}[p]\n%   \\cppsection{\\ccode{clst:rocket_launch2}{C++ Rocket Launch using SwinGame code (cont.) 
}{topics/control-flow/examples/rocket_launch2.cpp}}\n% \\end{figure}\n% \n% \n% % subsection rocket_launch (end)", "meta": {"hexsha": "04342c2f609f48641f764e7ab9a34593897d1276", "size": 5525, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "topics/control-flow/examples/examples-flow.tex", "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_issues_repo_path": "topics/control-flow/examples/examples-flow.tex", "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_forks_repo_path": "topics/control-flow/examples/examples-flow.tex", "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "avg_line_length": 34.1049382716, "max_line_length": 335, "alphanum_fraction": 0.7466063348, "num_tokens": 1603, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5822345692550135}}
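As a quick companion to the Times Table example above, here is a minimal Python sketch of the behaviour the program description specifies (illustrative only; the book's own listings are the C and Pascal files included above):\n\\begin{verbatim}\n# Print the times table for n, from 1 x n to 10 x n.\ndef times_table(n):\n    for i in range(1, 11):\n        print(f\"{i} x {n} = {i * n}\")\n\ntimes_table(7)   # e.g. the seven times table\n\\end{verbatim}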
{"text": "%% Introduction %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Introduction}\n\\label{sec:intro}\n\n% What is essential to say?\n\nIn this paper, we present a decision procedure for solving systems of\ninteger linear constraints where each expression is subject\nto both upper and lower bounds.  Such systems have the form:\n%\n\\begin{equation}\n    \\label{eq:prob-inequalities}\n    \\begin{array}{ccccc}\n        l_1    & \\le & a_{11} x_1 + \\cdots + a_{1n} x_n & \\le & u_1 \\\\\n        l_2    & \\le & a_{21} x_1 + \\cdots + a_{2n} x_n & \\le & u_2 \\\\\n        \\vdots &     & \\vdots                         &     & \\vdots \\\\\n        l_m    & \\le & a_{m1} x_1 + \\cdots + a_{mn} x_n & \\le & u_m\n    \\end{array}\n\\end{equation}\n%\nwhere $l_i, u_i, a_{ij}$ are rational constants and the $x_i$ are unknown\ninteger variables.\n%\nAs a more compact notation, we use $\\v{l} \\le \\mat{A} \\v{x} \\le \\v{u}$\nto denote systems with this form.\n\n\n% The problem is also\n%known as \\emph{integer linear programming} (ILP), though we are only concerned\n%with the feasibility question here.\n\nOur decision procedure is based on a variant of the Schnorr-Euchner\nalgorithm~\\cite{Schnorr-Euchner} for computing the closest lattice\nelement to a target point, and is implemented in a tool which we call\n\\emph{BLT}.  The decision procedure reduces the constraint problem to\nthe problem of checking whether there is a common point $\\v{y}$ in\nboth the lattice $\\mathcal{L}_{\\mat{A}}$ generated by the columns of $\\mat{A}$\nand the hyperrectangle containing the points between $\\v{l}$ and\n$\\v{u}$.\n\n%BLT does this by\n%rescaling the coefficients in $\\v{l}$, $\\v{u}$ and $\\mat{A}$\n%attempting to find the lattice point closest (with respect to the $\\linf{}$-norm) to\n%the point $p$ in the center of the hyperrectangle.\n\n%The main difference is the algorithm uses the $L^\\infty$-norm to compute distances while %Schnorr-Euchner uses the $L^2$-norm.  BLT first\n%rescales the\n% preprocesing the formula with the form~\\eqref{eq:prob-inequalities}, BLT computes the center %point $P$ of the hyperrectangle containing the points between $L$ and $U$.  To check %satisfiability, BLT attempts to find an assignment to $\\v{x} \\in \\ZZ^n$ such that $A\\v{x}$ is\n%the closest point to $\\v{P}$ in the lattice generated by the columns of $A$.\n%If during the search BLT can find a point $A \\v{x}$ inside the hyperrectangle defined\n%by $L$ and $U$, then BLT returns SAT with the coefficients $\\v{x}$.  If it can prove\n%no such point exists, then it returns UNSAT.\n\nWe developed BLT while trying to apply constraint solving to signal\nprocessing algorithms.  In particular, we were studying the problem of\n\\emph{reversing JPEG decompression}, that is finding JPEGs that decompress\ninto images that satisfy given constraints.  This could be used to encode\nspecific values on pixels in the image, perhaps for steganographic purposes.\nAs we will show, this problem can be expressed as an integer linear constraint\nproblem with the form~\\eqref{eq:prob-inequalities} over $64$ variables.\n\nBefore developing BLT, we had generated various instances of this\nproblem, including both unsatisfiable and satisfiable cases.  
We\napplied several SMT solvers, including Yices~\\cite{Dutertre:cav2014},\nCVC4~\\cite{DBLP:conf/cav/BarrettCDHJKRT11}, and\nZ3~\\cite{DeMoura:2008:ZES:1792734.1792766}, as well as an evaluation\nversion of Gurobi\\footnote{Available\n  at~\\url{http://www.gurobi.com/}.}, an industrial linear programming\nsolver. The solvers we tried were incapable of solving\nall but the most trivial instances of this problem without being given\nadditional hints. This was true even after running some of the\nproblems for months on selected solvers.  In contrast, BLT is able to\nsolve the majority of the problems in under a second.\n\n%For problems of this shape, we have found that our algorithm is able to solve problems in %seconds that are intractable to all other solvers that we have tried, including %\n%  , but is non-trivial to solve; JPEG is a lossy compression algorithm.  Even if a set of %constraints is satisfiable on arbitrary images, it may not be accessible on images obtained %from decompressed images.\n\n%The rest of the paper is organized as follows. First, we describe the lattice\n%operations we use in Section~\\ref{sec:preliminaries}. Then, we describe the\n%algorithm in Section~\\ref{sec:dp} and the JPEG preimage problem\n%and performance of BLT on that problem in~\\ref{sec:jpeg}.  Finally, we\n%conclude with a brief discussion of related and future work in\n%Section~\\ref{sec:final}.", "meta": {"hexsha": "7141b1947bfc275a353db10ff9c735a4e7e1f658", "size": 4535, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "publications/2015_smt/introduction.tex", "max_stars_repo_name": "benjaminfjones/blt", "max_stars_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 63, "max_stars_repo_stars_event_min_datetime": "2016-11-15T22:09:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T02:47:27.000Z", "max_issues_repo_path": "publications/2015_smt/introduction.tex", "max_issues_repo_name": "benjaminfjones/blt", "max_issues_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-03-24T18:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-28T03:03:27.000Z", "max_forks_repo_path": "publications/2015_smt/introduction.tex", "max_forks_repo_name": "benjaminfjones/blt", "max_forks_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-11-15T23:16:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T08:32:52.000Z", "avg_line_length": 54.6385542169, "max_line_length": 275, "alphanum_fraction": 0.7316427784, "num_tokens": 1255, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7931059414036511, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5822345677120576}}
{"text": "\n\n\\section{Preliminaries and exploratory data analysis}\n\nWe start off by loading a useful support library, the \\texttt{cluster} library.   We need to load the \\texttt{flexclust} library to get hold of some interesting data.   Make that data available (the \\texttt{milk} data) and start doing an exploratory data analysis.   The suggested code here includes star plots (quite interesting here) and faces (not very useful).   You may also consider parallel co-ordinates plots and scatterplots, but we will come back to them later.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(cluster)\n> library(flexclust)\n> library(MASS)\n> library(TeachingDemos)\n> data(milk)\n> stars(milk, full = FALSE, draw.segments = TRUE, key.loc = c(10, \n+     0.4), main = \"Milk data\")\n> faces(milk, fill = TRUE, ncol = 4, scale = TRUE, nrow = 7)\n\\end{Sinput}\n\\end{Schunk}\n\n\n\\section{Calculating distances and performing hierarchical clustering}\n\nThe first piece of clustering will demonstrate the defaults, i.e. euclidean distance and complete linkage.   A good habit is to create a distance object, then create a \\texttt{hclust} object and then extract whatever information we need from that \\texttt{hclust} object.   For example, \\texttt{plot()} will produce a dendrogram and \\texttt{cutree()}, given an argument for \\texttt{k} will ``cut'' the tree at the given number of clusters (you can also cut based on height if you prefer)\n\n\\begin{Schunk}\n\\begin{Sinput}\n> milk.dist <- dist(milk)\n> milk.hclust <- hclust(milk.dist)\n> plot(milk.hclust)\n> milk.cut <- cutree(milk.hclust, 3)\n\\end{Sinput}\n\\end{Schunk}\n\n\nIt is of interest investigating the stability of your cluster solution to different choices of distance measure, for example using the Manhattan distance:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> milk.dist.man <- dist(milk, method = \"manhattan\")\n> milk.hclust.man <- hclust(milk.dist.man)\n> plot(milk.hclust.man)\n> milk.cut.man <- cutree(milk.hclust.man, 3)\n> xtabs(~milk.cut + milk.cut.man)\n\\end{Sinput}\n\\end{Schunk}\n\nor the Minkowski, with $\\lambda=5$ (R calls this \\texttt{p})\n\n\\begin{Schunk}\n\\begin{Sinput}\n> milk.dist.min5 <- dist(milk, method = \"minkowski\", p = 5)\n> milk.hclust.min5 <- hclust(milk.dist.min5)\n> plot(milk.hclust.min5)\n> milk.cut.min5 <- cutree(milk.hclust.min5, 3)\n> xtabs(~milk.cut + milk.cut.min5)\n\\end{Sinput}\n\\end{Schunk}\n\n\nAlso, one may wish to consider different clustering strategies, in this case we consider Ward's method (based on the Euclidean distance):\n\n\\begin{Schunk}\n\\begin{Sinput}\n> milk.hclust.ward <- hclust(milk.dist, method = \"ward\")\n> plot(milk.hclust.ward)\n> milk.cut.ward <- cutree(milk.hclust.ward, 3)\n> xtabs(~milk.cut + milk.cut.ward)\n\\end{Sinput}\n\\end{Schunk}\n\n\nYou should actually find for these data, whatever you do yields quite a stable three cluster ``solution''.\n\n\n\\section{Visualising the solution in terms of the data}\n\nWe can use some standard exploratory data analysis techniques, only in this case it is possible to ``label'' the rows in terms of the cluster membership we have proposed:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> parcoord(milk, col = milk.cut, lty = milk.cut, main = \"milk data, four group clustering\")\n> pairs(milk, lower.panel = function(x, y) {\n+     points(x, y, pch = milk.cut, col = milk.cut)\n+ }, main = \"Scatterplot for milk data\")\n\\end{Sinput}\n\\end{Schunk}\n\n\\section{k-means clustering}\n\nk-means clustering assumes we have some idea of $k$ before we start.  
 In this example, we seem to be quite sure that $k=3$, but that won't necessarily be the case with other problems.   The \\texttt{kmeans} object contains other information about the solution, such as the centroids, but the cluster assignments are easily extracted and compared with the solutions we found earlier.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> milk.kmeans <- kmeans(milk, centers = 3)\n> milk.kmeans$cluster\n> xtabs(~milk.cut + milk.kmeans$cluster)\n\\end{Sinput}\n\\end{Schunk}\n\nDo note that k-means clustering is applied to the \\emph{data} and \\emph{not} to the distance matrix!\n\n\\section{Measures of fit}\n\nThis is a massive area.   We consider just one possibility amongst many, based on the cophenetic distance: for any two points, this is the level (height in the dendrogram) at which they are first merged.   Measuring the correlation between this distance and the original distance matrix tells us something about how well a given dendrogram represents a given distance matrix.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> d.retained <- cophenetic(milk.hclust)\n> cor(milk.dist, d.retained)\n\\end{Sinput}\n\\end{Schunk}\n\n(Too) many more possibilities are given in \\texttt{cluster.stats()} in \\texttt{library(fpc)}.   One of the more common ones is the Rand statistic, which can be used to compare two cluster solutions.   Again, we supply the original distance matrix, and then the cluster solution we have chosen and an alternative.   You should find with the milk data that the Rand statistic = 1, as these solutions agree perfectly.  The help file for this function gives you references to the literature if you want to know what all the other statistics are measuring.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(fpc)\n> cluster.stats(milk.dist, milk.cut, alt = milk.cut.man)\n\\end{Sinput}\n\\end{Schunk}\n\n\n\n\\section{Summary}\n\n\n\\fbox{\\parbox[c]{0.9\\textwidth}{\\color{blue}\nThis week, we should have:\n\n\\begin{itemize}\n\\item Quickly carried out an e.d.a. 
using scripts saved from week 1 in order to look for similar subgroups within a dataset\n\\item Carried out cluster analysis using a variety of hierarchical schemes as well as k-means clustering\n\\item be able to describe the difference between hierarchical and k-means clustering; and to demonstrate this understanding by working simple examples by hand\n\\item be able to interpret the results of a cluster analysis; and  explain whether a particular hierarchical cluster analysis can be considered a ``good'' fit\n\\end{itemize}\n}}\n", "meta": {"hexsha": "2c862d31e178ce66569429b7e40705c3ba3df4e7", "size": 5850, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "newworksheet/week4cluster.tex", "max_stars_repo_name": "phewson/mvstats", "max_stars_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "newworksheet/week4cluster.tex", "max_issues_repo_name": "phewson/mvstats", "max_issues_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-08-28T16:37:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-28T16:49:11.000Z", "max_forks_repo_path": "newworksheet/week4cluster.tex", "max_forks_repo_name": "phewson/mvstats", "max_forks_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.700729927, "max_line_length": 550, "alphanum_fraction": 0.7485470085, "num_tokens": 1575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059609645724, "lm_q2_score": 0.7341195152660688, "lm_q1q2_score": 0.5822345636179416}}
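For reference, the quantity computed by \\texttt{cor(milk.dist, d.retained)} in the worksheet above is the standard cophenetic correlation: the product-moment correlation between the original dissimilarities $d_{ij}$ and the cophenetic distances $c_{ij}$ (the dendrogram heights at which points $i$ and $j$ are first merged),\n\\begin{displaymath}\nr = \\frac{\\sum_{i<j} (d_{ij} - \\bar{d})(c_{ij} - \\bar{c})}{\\sqrt{\\sum_{i<j} (d_{ij} - \\bar{d})^2 \\; \\sum_{i<j} (c_{ij} - \\bar{c})^2}} .\n\\end{displaymath}\nValues close to 1 indicate that the dendrogram distorts the original distances very little.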
{"text": "\\documentclass{article}\n\\usepackage{amsmath,amssymb,amsfonts}\n\\begin{document}\n\\title{LIP-BFGS Theory}\n\\author{Jesse Lu}\n\\maketitle\n\\tableofcontents\n\n\\section{Introduction}\nThe acronym LIP-BFGS stands for \n  Limited-memory Interior-Point Broyden-Fletcher-Goldfarb-Shanno.\nIt is simply an interior-point (IP) method which uses the \n  limited-memory BFGS (L-BFGS) algorithm.\nThe main body of the algorithm is described in Chapter 19.3 of \\cite{NW04}.\n\nThe purpose of this document is \n  to allow the user to understand the accompanying \\textsc{Matlab} \n  implementation.\n\n\\part{Theory}\n\\section{Theory for the interior-point method}\nInterior-point methods attempt to minimize $f(x)$ \n    subject to the equality and inequality constraints $c_E(x)$ and $c_I(x)$,\n\\begin{subequations}\n\\begin{align}\n\\text{minimize} \\quad & f(x) \\\\\n\\text{subject to} \\quad & c_E(x) = 0 \\\\\n                        & c_I(x) \\ge 0,\n\\end{align}\n\\end{subequations}\n    by satisfying the Karush-Kuhn-Tucker (KKT) conditions\n    (see Chapter 12.3 of \\cite{NW04})\n\\begin{subequations}\\begin{align}\n    \\nabla f(x) - A_E^T(x) y - A_I^T(x) z &= 0 \\label{Lagrangian} \\\\\n    z - \\mu s^{-1} &= 0 \\\\\n    c_E(x) &= 0 \\\\\n    c_I(x) - s &= 0 \\\\\n    s &\\ge 0 \\\\\n    z &\\ge 0,\n\\end{align}\\end{subequations}\n    where\n\\begin{subequations}\\begin{align}\n    A_E(x) &= \\nabla c_E(x), \\quad\\text{the Jacobian of $c_E(x)$} \\\\\n    A_I(x) &= \\nabla c_I(x), \\quad\\text{the Jacobian of $c_I(x)$} \n\\end{align}\\end{subequations}\n    $y$ and $z$ are the dual variables for $c_E(x)$ and $c_I(x)$ respectively,\n    and $s$ is the slack variable.\nNote that the expression $s^{-1}$ \n    refers to the element-wise inverse of the vector $s$.\nAlso, the expression in \\eqref{Lagrangian} can also be written as\n    $\\nabla_x \\mathcal{L} = 0$, \n    where $\\mathcal{L}$ is the Lagrangian of the problem.\n\nAs taken from Chapter 19.3 of \\cite{NW04}, \n    the interior point method obtains step direction $p$ by solving\n\\begin{equation}\n\\begin{bmatrix}\n    \\nabla^2_{xx}\\mathcal{L} & 0 & A_E^T(x) & A_I^T(x) \\\\\n    0 & \\Sigma & 0 & -I \\\\\n    A_E(x) & 0 & 0 & 0 \\\\\n    A_I(x) & -I & 0 & 0\n\\end{bmatrix}\n\\begin{bmatrix} p_x \\\\ p_s \\\\ -p_y \\\\ -p_z \\end{bmatrix}\n    = -\n\\begin{bmatrix}\n    \\nabla f(x) - A_E^T(x) y - A_I^T(x) z \\\\\n    z - \\mu s^{-1} \\\\\n    c_E(x) \\\\\n    c_I(x) - s\n\\end{bmatrix}.\n\\end{equation}\n    where \n\\begin{equation} \\Sigma = \\text{diag}(z/s).\\end{equation}\nThis equation can be simplified by \n    first backsubstituting for $p_s$ and then for $p_z$.\nThe reduced system is then\n\\begin{multline}\n\\begin{bmatrix}\n    \\nabla^2_{xx}\\mathcal{L} + A_I^T(x) \\Sigma A_I^T(x) & A_E^T(x) \\\\\n    A_E(x) & 0 \n\\end{bmatrix}\n\\begin{bmatrix} p_x \\\\ -p_y \\end{bmatrix}\n    = -\n\\begin{bmatrix}\n    \\nabla f(x) - A_E^T(x) y - A_I(x) h \\\\\n    c_E(x)\n\\end{bmatrix},\n\\end{multline}\n    where \n\\begin{equation} h = z - \\Sigma c_I(x) + \\mu s^{-1} \\end{equation} \n    and\n\\begin{subequations}\\begin{align}\n    p_s &= A_I(x) p_x + c_I(x) - s \\\\\n    p_z &= -\\Sigma A_I(x) p_x - \\Sigma c_I(x) + \\mu s^{-1}.\n\\end{align}\\end{subequations}\n\nWe now choose to consider only  \n    simple bound inequality constraints $l \\le x \\le u$, and\n    affine equality constraints $A x - b = 0$.\nOur problem can then be written down as\n\\begin{equation}\n\\begin{bmatrix}\n    \\nabla^2 f(x) + \\Sigma_0 + 
\\Sigma_1 & A^T \\\\\n    A & 0 \n\\end{bmatrix}\n\\begin{bmatrix} p_x \\\\ -p_y \\end{bmatrix}\n    = -\n\\begin{bmatrix}\n    \\nabla f(x) - A^T y + h_0 + h_1 \\\\ A x - b\n\\end{bmatrix},\n\\label{IP}\n\\end{equation}\n    where\n    \\begin{subequations}\\begin{align}\n    \\Sigma_0 &= \\text{diag}(z_0 / s_0) \\\\\n    \\Sigma_1 &= \\text{diag}(z_1 / s_1),\n    \\end{align}\\end{subequations}\n    and\n    \\begin{subequations}\\begin{align}\n    h_0 &= -z_0 + \\Sigma_0 (x - l) - \\mu s_0^{-1} \\\\\n    h_1 &= z_1 - \\Sigma_1 (u - x) + \\mu s_1^{-1}, \n    \\end{align}\\end{subequations}\n    and the other components of $p$ are\n\\begin{subequations}\\begin{align}\n    p_{s_0} &= p_x + (x - l) - s_0 \\\\\n    p_{z_0} &= -\\Sigma_0 p_x - \\Sigma_0 (x - l) + \\mu s_0^{-1} \\\\\n    p_{s_1} &= -p_x + (u - x) - s_1 \\\\\n    p_{z_1} &= \\Sigma_1 p_x - \\Sigma_1 (u - x) + \\mu s_1^{-1}.\n\\end{align}\\label{p_other}\\end{subequations}\n\n\\section{Theory for the limited-memory BFGS algorithm}\nPractically, computing and solving for $\\nabla^2 f(x)$ in \\eqref{IP}, \n    the \\emph{Hessian} of $f(x)$, is often computationally challenging.\nFor this reason, we use the \n    limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm\n    to approximate $\\nabla^2 f(x)$.\nSpecifically, we use the compact or outer-product representation of \n    \\begin{equation} B \\sim \\nabla^2 f(x), \\end{equation}\n    as described in chapter 7.2 of \\cite{NW04},\n    to efficiently solve for $p$ in \\eqref{IP}.\n\nThe BFGS algorithm works by approximating the Hessian of a function\n    based on a list of the previous values of $x$ and $\\nabla f(x)$.\nThe approximate Hessian, $B$, is recursively updated by the following formula,\n    taken from chapter 6.1 of \\cite{NW04},\n    \\begin{equation}\n    B_{k+1} = B_k - \\frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} +\n            \\frac{y_k y_k^T}{y_k^T s_k},\n    \\end{equation}\n    where\n    \\begin{subequations}\\begin{align}\n    s_k &= x_{k+1} - x_k \\\\\n    y_k &= \\nabla f(x_{k+1}) - \\nabla f(x_k).\n    \\end{align}\\end{subequations}\n\nThe limited-memory BFGS algorithm simply truncates the list of $(s_k, y_k)$\n    to the most recent $m$ values,\n    which allows us to store $B$ efficiently\n    in what is called the compact or outer-product representation\n    (see chapter 7.2 of \\cite{NW04}):\n    \\begin{equation}\n    B_k = B_0 - \n        \\begin{bmatrix} B_0 S_k & Y_k \\end{bmatrix}\n        \\begin{bmatrix}\n            S_k^T B_0 S_k & L_k \\\\\n            L_k^T & -D_k\n        \\end{bmatrix}^{-1}\n        \\begin{bmatrix} S_k^T B_0^T \\\\ Y_k^T \\end{bmatrix},\n    \\end{equation}\n    where $B_0$ is an initial guess for $B$, \n    \\begin{subequations}\\begin{align}\n        S_k &= [s_{k-m}, \\ldots, s_{k-1}] \\\\\n        Y_k &= [y_{k-m}, \\ldots, y_{k-1}]\n    \\end{align}\\end{subequations}\n    and\n    \\begin{subequations}\\begin{align}\n        (L_k)_{i,j} &= \n            \\begin{cases}\n                s_{i-1}^T y_{j-1} & \\text{if $i > j$,} \\\\\n                0 & \\text{otherwise,} \n            \\end{cases} \\\\\n        D_k &= \\text{diag}([s_{k-m}^T y_{k-m}, \\ldots, s_{k-1}^T y_{k-1}]).\n    \\end{align} \\end{subequations}\n\nSpecifically, we choose\n    \\begin{equation} B_0 = \\delta_k I, \\end{equation}\n    where $\\delta_k$ is a scaling variable, given by\n    \\begin{equation} \n        \\delta_k = \\frac{y_{k-1}^T y_{k-1}}{s_{k-1}^T y_{k-1}}.\n    \\end{equation}\nThis results in a computationally-efficient \n  
  diagonal-plus-low-rank structure for $B_k$,\n    \\begin{equation}\n    B_k = \\delta_k I + W_k M_k W_k^T \\label{lbfgs}\n    \\end{equation}\n    where\n    \\begin{subequations}\\begin{align}\n        W_k &= \\begin{bmatrix} \\delta_k S_k & Y_k \\end{bmatrix} \\\\\n        M_k &= \n            -\\begin{bmatrix}\n                \\delta_k S_k^T S_k & L_k \\\\\n                L_k^T & -D_k\n            \\end{bmatrix}^{-1},\n    \\end{align}\\end{subequations}\n    so that this expression agrees with the compact representation above when $B_0 = \\delta_k I$.\n\nLastly, when $k = 0$ and there are no $(s_k, y_k)$ pairs\n    with which to construct $B_k$,\n    we simply choose $B_0 = I$.\n\n\n\\section{Efficiently solving an arrow-plus-low-rank system}\nSubstituting the expression for $B_k$ in \\eqref{lbfgs} for\n    $\\nabla^2 f(x)$ in \\eqref{IP} yields\n    \\begin{multline}\n    \\left(\n    \\begin{bmatrix}\n        \\delta_k I + \\Sigma_0 + \\Sigma_1 & A^T \\\\\n        A & 0 \n    \\end{bmatrix}\n    +\n    \\begin{bmatrix} WM \\\\ 0 \\end{bmatrix}\n    \\begin{bmatrix} W^T & 0 \\end{bmatrix}\n    \\right)\n    \\begin{bmatrix} p_x \\\\ -p_y \\end{bmatrix}\n        \\\\ = -\n    \\begin{bmatrix}\n        \\nabla f(x) - A^T y + h_0 + h_1 \\\\ A x - b\n    \\end{bmatrix},\\label{lipbfgs}\n    \\end{multline}\n    which can be efficiently solved by taking advantage of the structure of \n    the matrix\n    \\begin{equation}\n    \\begin{bmatrix}\n        \\delta_k I + \\Sigma_0 + \\Sigma_1 & A^T \\\\\n        A & 0 \n    \\end{bmatrix}\n    +\n    \\begin{bmatrix} WM \\\\ 0 \\end{bmatrix}\n    \\begin{bmatrix} W^T & 0 \\end{bmatrix}. \\label{aplr}\n    \\end{equation}\n\nSuch a matrix contains arrow-plus-low-rank structure,\n    in the sense that the \n    \\begin{equation}\n    \\begin{bmatrix}\n        \\delta_k I + \\Sigma_0 + \\Sigma_1 & A^T \\\\\n        A & 0 \n    \\end{bmatrix} \\label{arrow}\n    \\end{equation}\n    term has ``arrow'' structure (pointing down and to the left)\n    especially if $A$ is fat ($A \\in \\mathbb{R}^{m \\times n}, m \\ll n$), \n    and that the \n    \\begin{equation}\n    \\begin{bmatrix} WM \\\\ 0 \\end{bmatrix}\n    \\begin{bmatrix} W^T & 0 \\end{bmatrix}\n    \\end{equation}\n    term is a low-rank matrix.\n\nThe arrow matrix can be efficiently solved via block substitution,\n    meaning that we solve\n    \\begin{equation}\n    \\begin{bmatrix}\n        \\tilde{D} & A^T \\\\\n        A & 0 \n    \\end{bmatrix} \n    \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} \n    =\n    \\begin{bmatrix} b_1 \\\\ b_2 \\end{bmatrix} \n    \\end{equation}\n    where\n    \\begin{equation} \\tilde{D} = \\delta_k I + \\Sigma_0 + \\Sigma_1 \\end{equation}\n    by computing, in order,\n    \\begin{subequations}\\begin{align}\n    A \\tilde{D}^{-1} A^T x_2 &= \n        A \\tilde{D}^{-1} b_1 - b_2 \\\\\n    x_1 &= \\tilde{D}^{-1}\n        (b_1 - A^T x_2).\n    \\end{align}\\end{subequations}\nThis is computationally efficient because the term $A \\tilde{D}^{-1} A^T$ is \n    small if the number of rows in $A$ is small,\n    and therefore easy to invert.\n\nNow that we can compute $\\tilde{A}^{-1} b$, \n    where $\\tilde{A}$ is the arrow matrix in \\eqref{arrow},\n    we employ the matrix inversion lemma\n    (also known as the Sherman-Morrison-Woodbury formula),\n    which states\n    \\begin{equation}\n    (A+UV^T)^{-1}b = A^{-1}b - A^{-1}U(I + V^T A^{-1}U)^{-1} V^T A^{-1}b,\n    \\end{equation}\n    in order to solve the entire arrow-plus-low-rank system in \\eqref{aplr},\n    \\begin{equation} \\tilde{A} + \\tilde{U}\\tilde{V}^T \\end{equation}\n    where\n    
\\begin{subequations}\\begin{align}\n    \\tilde{U} &= \\begin{bmatrix} WM \\\\ 0 \\end{bmatrix} \\\\\n    \\tilde{V}^T &= \\begin{bmatrix} W^T & 0 \\end{bmatrix}.\n    \\end{align}\\end{subequations}\n\n\\part{Implementation}\n\\section{Outline of LIP-BFGS algorithm}\nLIP-BFGS requires the following input parameters:\n\\begin{itemize}\n    \\item $\\nabla f(x)$,  a function $\\mathbb{C}^n \\to \\mathbb{C}^n$\n        that evaluates the gradient at $x$,\n    \\item $x \\in \\mathbb{C}^n$,  initial value of optimization variable,\n    \\item $l, u \\in \\mathbb{R}^n$,  lower and upper bounds on $x$, and\n    \\item $A \\in \\mathbb{C}^{m \\times n}, b \\in \\mathbb{C}^m$, \n        equality constraint on $x$.\n\\end{itemize}\n\nThe basic outline of the LIP-BFGS algorithm is:\n\\begin{enumerate}\n    \\item Determine initial values of $s_{0,1}$, $y$, and $z_{0,1}$,\n    \\item Check termination condition; if needed,\n        update $\\mu$ and perform steps \\ref{start}-\\ref{stop},\n\n    \\item Form or update $B_k$ using \\eqref{lbfgs}, \\label{start}\n    \\item Compute step-direction $p$ by solving \n        \\eqref{lipbfgs} and \\eqref{p_other},\n    \\item Perform a line-search to determine step-size along $p$,\n        update $x$, $s_{0,1}$, $y$, and $z_{0,1}$. \\label{stop}\n\\end{enumerate}\n\n\\section{Determining initial values of $s_{0,1}$, $y$, and $z_{0,1}$}\n\\section{Termination condition}\nThe suggested termination condition from chapter 19.2 of \\cite{NW04}\n    is used (with $\\mu = 0$),\n    \\begin{equation}\n    \\text{if } E(x, s_0, s_1, y, z_0, z_1) \\le \\epsilon_\\text{tol}\n    \\text{ then terminate,}\n    \\end{equation}\n    where\n    \\begin{multline}\n    E(x, s_0, s_1, y, z_0, z_1) = \\text{max}\\{ \n        \\| \\nabla f(x) - A^T y - z_0 + z_1 \\|, \\\\\n        \\| s_0 z_0 \\|, \n        \\| s_1 z_1 \\|, \n        \\| A x - b \\|, \n        \\|(x - l) - s_0 \\|, \n        \\|(u - x) - s_1 \\| \\},\n    \\end{multline}\n    and $s_0 z_0$, $s_1 z_1$ are element-wise vector products.\n\n\n\\section{L-BFGS update of $B_k$}\n\\section{Computing step direction $p$}\n\\section{Line-search and variable update}\n    \nLastly, inspired by section 11.7.3 of \\cite{BL04}, \n    we perform a backtracking line search (see section 9.2 of \\cite{BL04})\n    in order to guarantee decrease of the residual\n    $r(x^+, s_0^+, s_1^+, y^+, z_0^+, z_1^+, \\mu)$, where\n    \\begin{subequations}\\begin{align}\n    x^+ &= x + t \\alpha_p p_x \\\\\n    s_0^+ &= s_0+t \\alpha_p p_{s_0} \\\\\n    s_1^+ &= s_1+t \\alpha_p p_{s_1} \\\\\n    y^+ &= y +\\alpha_d p_y \\\\\n    z_0^+ &= z_0 + \\alpha_d p_{z_0} \\\\\n    z_1^+ &= z_1+\\alpha_d p_{z_1} \n    \\end{align}\\end{subequations}\n    and\n\\begin{equation}\n    r(x, s_0, s_1, y, z_0, z_1, \\mu) = \n\\left\\|\n\\begin{bmatrix}\n    \\nabla f(x) - A^T y + h_0 + h_1 \\\\ A x - b\n\\end{bmatrix}\\right\\|_2.\n\\end{equation}\nThe exit condition for the line search is \n    \\begin{equation} r(x^+, s_0^+, s_1^+, y^+, z_0^+, z_1^+, \\mu) \\le\n    (1-\\alpha t) r(x, s_0, s_1, y, z_0, z_1, \\mu),\n    \\end{equation}\n    where $t$ is initially set to $t = \\alpha_p$.\n\n\n\\begin{thebibliography}{99}\n\\bibitem{NW04} Nocedal and Wright,\n    Numerical Optimization, Second Edition (Springer 2006)\n\\bibitem{BL04} Boyd and Vandenberghe,\n    Convex Optimization (Cambridge 2004)\n\\end{thebibliography}\n\\end{document}\n", "meta": {"hexsha": "5ea61b343c229cf3f47757536d4b13cad2d759cf", "size": 13183, "ext": "tex", "lang": "TeX", 
"max_stars_repo_path": "doc/doc.tex", "max_stars_repo_name": "JesseLu/lip-bfgs", "max_stars_repo_head_hexsha": "7a4ca28c0003d81caee7c464159e8c9d621ed15d", "max_stars_repo_licenses": ["Xnet", "X11"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-01-16T15:19:43.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-21T08:54:32.000Z", "max_issues_repo_path": "doc/doc.tex", "max_issues_repo_name": "JesseLu/lip-bfgs", "max_issues_repo_head_hexsha": "7a4ca28c0003d81caee7c464159e8c9d621ed15d", "max_issues_repo_licenses": ["Xnet", "X11"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/doc.tex", "max_forks_repo_name": "JesseLu/lip-bfgs", "max_forks_repo_head_hexsha": "7a4ca28c0003d81caee7c464159e8c9d621ed15d", "max_forks_repo_licenses": ["Xnet", "X11"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-11-03T15:14:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-27T13:56:55.000Z", "avg_line_length": 34.4203655352, "max_line_length": 80, "alphanum_fraction": 0.6058560267, "num_tokens": 4879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5822345610514573}}
{"text": "\\section*{Problem 1 Solution}\n\n\\begin{enumerate}[a)]\n\n\\item Reactivity is defined as $\\rho \\equiv \\frac{k - 1}{k}$. From the information provided in the problem, we can determine $k$ using the 6 factor formula. \n$$ k = \\varepsilon p f \\eta P_{\\textit{FNL}} P_{\\textit{TNL}} $$\nWe are told that 13\\% of neutrons are absorbed in resonances, so $p = 0.87$. We are told 10\\% of neutrons leak from the reactor, so $P_{\\textit{NL}} = 0.9$ and we will assume $P_{\\textit{NL}} = P_{\\textit{FNL}} P_{\\textit{TNL}}$. We are also given the fraction of fissions that are fast fissions,\n$$ \\frac{\\Sigma_f^F}{\\Sigma_f^T + \\Sigma_f^F} = 0.02 ,$$\nfrom which we can calculate the fraction of fissions that are thermal:\n$$ \\frac{\\Sigma_f^T}{\\Sigma_f^T + \\Sigma_f^F} = 0.98 .$$\nNoting that the definition of the fast fission factor, $\\varepsilon$, is just the inverse of the thermal fission fraction, we can find a value for $\\varepsilon$:\n\\begin{align*}\n\\varepsilon\t&\\equiv \\frac{\\Sigma_f^T + \\Sigma_f^F}{\\Sigma_f^T} \\\\\n\t\t\t&= \\left(\\frac{\\Sigma_f^T}{\\Sigma_f^T + \\Sigma_f^F}\\right)^{-1} \\\\\n\t\t\t&= \\left(0.98\\right)^{-1} \\\\\n\t\t\t&= 1.02 \n\\end{align*}\nFinally, using the cross sections provided we determine $f$ and $\\eta$. In general,\n$$ \\eta = \\frac{\\nu\\Sigma_f}{\\Sigma_a^{\\text{fuel}}} ,$$\nand for a homogeneous reactor, \n$$ f = \\frac{\\Sigma_a^{\\text{fuel}}}{\\Sigma_a} .$$\nWhen multiplied together,\n\\begin{align*}\nf \\eta\t&= \\frac{\\nu\\Sigma_f}{\\Sigma_a} \\\\\n\t\t&= \\frac{2.4\\left(0.6\\text{ cm}^{-1}\\right)}{1\\text{ cm}^{-1}} \\\\\n\t\t&= 1.44\n\\end{align*}\nAltogether,\n\\begin{align*}\nk\t&= \\varepsilon p f \\eta P_{\\textit{FNL}} P_{\\textit{TNL}} \\\\\n\t&= \\left(1.02\\right)\\left(0.87\\right)\\left(1.44\\right)\\left(0.9\\right) \\\\\n\t&= 1.15\n\\end{align*}\nand\n$$ \\rho = \\frac{1.15-1}{1.15} $$\n$$\\boxed{ \\rho = 0.13 }$$\n\n\\item Having calculated a value for the reactor's excess reactivity, we can use the provided plot to find the start of the xenon-induced deadtime and its duration. The plot gives 4 curves describing the negative reactivity created by xenon in a reactor over the first few days after shutdown. Each curve corresponds to an operating flux level in the reactor. While we are not given the flux in our reactor, we can approximate it from the information given. \n\nWe know that the reactor is producing 10,000 W/cm$^3$. This is equal to the energy produced per second, which is also equal to the product of the fission reaction rate and the energy produced per fission,\n$$ P \\approx E_f \\Sigma_f \\phi .$$\nWe solve this relationship for the flux (using the conversion from MeV to Joules),\n\\begin{align*}\n\\phi\t&\\approx \\frac{P}{E_f \\Sigma_f} \\\\\n\t\t&\\approx \\frac{\\left(10,000\\text{ W/cm}^3\\right)}{\\left(3.204\\times10^{-11}\\text{ J}\\right)\\left(0.6\\text{ cm}^{-1}\\right)} \\\\\n\t\t&\\approx 5.2\\times10^{14}\\text{ cm}^{-2}\\cdot\\text{s}^{-1}\n\\end{align*}\nAt this flux level, we can use the top line as a reasonable approximation for our reactor's behavior. Our excess reactivity of 0.13 is shown as a dashed line on the plot, and we have added large points where our flux curve intersects our excess reactivity. When there is more negative reactivity than excess reactivity, there is no possible way to restart the reactor (even if all control rods, poisons, and absorbers are removed, the negative reactivity will surpass the excess reactivity and keep the reactor subcritical). 
\n\n\\includegraphics[width=12cm]{xenon-deadtime-marked}\n\nThe point at which the negative reactivity becomes greater than 0.13, and hence the deadtime begins, is about 1 hour after shutdown. The deadtime then lasts for slightly more than 40 hours, and the reactor can be made critical again at the 42-43 hour mark.\n\n\n\\end{enumerate}\n\n", "meta": {"hexsha": "b7780a8d0dccbfdfbe0ade67de8f60696c27fd7e", "size": 3657, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/drafts/disc13/disc13_solution01.tex", "max_stars_repo_name": "mitchnegus/NE150-discussion", "max_stars_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/drafts/disc13/disc13_solution01.tex", "max_issues_repo_name": "mitchnegus/NE150-discussion", "max_issues_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/drafts/disc13/disc13_solution01.tex", "max_forks_repo_name": "mitchnegus/NE150-discussion", "max_forks_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.1578947368, "max_line_length": 525, "alphanum_fraction": 0.7087776866, "num_tokens": 1167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660687, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.5822345546429079}}
{"text": "\\section{Parameter Estimation and Likelihood}\\label{S:ParamEstAndLikelihood}\n\nNow that we have been introduced to point and set estimation of the population mean and the population proportion using the notion of convergence in distribution for sequences of RVs as well as concentration inequalities, \nwe can begin to appreciate the art of estimation in a more general setting.  \nParameter estimation is the basic problem in statistical inference and machine learning.  \nWe will formalize the general estimation problem here.  \n\nAs we have already seen, when estimating the population mean or population proportion, there are two basic types of estimators.  \nIn point estimation, as seen in Definition~\\ref{D:Estimator}, we are interested in estimating a particular point of interest that is supposed to belong to a set of points.  \nIn (confidence) set estimation, we are interested in estimating a set with a particular form that has a specified probability of ``trapping'' the particular point of interest from a set of points.  \nHere, a point should be interpreted as an element of a collection of elements from some space.\n\n\\subsection{Point and Set Estimation -- A General Likelihood Approach}\\label{S:PointSetEstimationLikelihood}\n{\\bf Point estimation} is any statistical methodology that provides one with a ``{\\bf single best guess}'' of some specific quantity of interest.  Traditionally, we denote this {\\bf quantity of interest as $\\theta^*$} and {\\bf its point estimate as $\\widehat{\\theta}$ or $\\widehat{\\theta}_n$}.  The subscript $n$ in the point estimate $\\widehat{\\theta}_n$ emphasizes that our estimate is based on $n$ observations or data points from a given statistical experiment to estimate $\\theta^*$.  This quantity of interest, which is usually unknown, can be: %, namely $\\theta$\n%, may be an {\\bf integral} $\\Iz$ of a real valued function $h(x)$, i.e.~$\\theta=\\Iz := \\int_a^b h(x)\\,dx \\in \\Rz$, or simply a \n\\begin{itemize}\n\\item a {\\bf parameter} $\\theta^*$ which is an element of the {\\bf parameter space} $\\BB{\\Theta}$, i.e.~$\\theta^* \\in \\BB{\\Theta}$ such that $\\theta^*$ specifies the ``law'' of the observations (realizations or samples) of the \\rv~$(X_1,\\ldots,X_n)$ modeled by JPDF or JPMF $f_{X_1,\\ldots,X_n}(x_1,\\ldots,x_n; \\theta^*)$, or\n%\\item a {\\bf distribution function (DF)} $F^* \\in \\Fz := \\text{the set of all DFs}$\n%\\item a {\\bf density function (pdf)} $f \\in \\{ \\text{``not too wiggly Sobolev functions''} \\}$, or \n\\item a {\\bf regression function} $\\theta^* \\in \\BB{\\Theta}$, where $\\BB{\\Theta}$ is a class of regression functions in a regression experiment with model: $Y=\\theta^*(X)+\\epsilon$, such that $\\e(\\epsilon)=0$ and $\\theta^*$ specifies the ``law'' of pairs of observations $\\{(X_i,Y_i)\\}_{i=1}^n$, for e.g., fitting parameters in noisy ODE or PDEs from observed data --- one can always do a {\\bf prediction} in a regression experiment, i.e.~when you want to estimate $Y_i$ given $X_i$, or\n\\item a {\\bf classifier} $\\theta^* \\in \\BB{\\Theta}$, i.e.~a regression experiment with discrete $Y = \\theta^*(X)+\\epsilon$, for e.g.~training an scrub-nurse robot to assist a human surgeon, or \n\\item an {\\bf integral} $\\theta^* := \\int_A h(x)\\,dx \\in \\BB{\\Theta}$.  
If $\\theta^*$ is finite, then $\\BB{\\Theta} = \\Rz$; e.g.~$\\theta^*$ could be the volume of a high-dimensional irregular polyhedron, a traffic congestion measure on a network of roadways, the expected profit from a new brew of beer, or the probability of an extreme event such as the collapse of a dam in the Southern Alps in the next 150 years.%  For e.g.~see \\ref{S:BMC} on Monte Carlo integration.\n\\end{itemize}\n{\\bf Set estimation} is any statistical methodology that provides one with a ``{\\bf best smallest set}'', such as an interval, rectangle, ellipse, etc.~that contains $\\theta^*$ with a high probability $1-\\alpha$.\n\nRecall that a statistic is a RV or \\rv~$T(X)$ that maps every data point $x$ in the data space $\\Xz$ to a point $T(x)=t$ in its range $\\Tz$, i.e.~$T:\\Xz \\to \\Tz$ (\\hyperref[D:Statistic]{Definition~\\ref*{D:Statistic}}).  \nNext, we look at a specific class of estimators based on the likelihood of the data.\n\n\\remove{\n\\begin{definition}[Point Estimator]\\label{D:Estimator}\n%{\\rm\nA {\\bf point estimator} $\\widehat{\\Theta}$ of some {\\bf fixed and possibly unknown} $\\theta^* \\in \\BB{\\Theta}$ is a statistic that associates each data point $x \\in \\Xz$ with a {\\bf point estimate} $\\widehat{\\Theta}(x)=\\widehat{\\theta} \\in \\BB{\\Theta}$,  \n\\[\n\\boxed{\n \\widehat{\\Theta} := \\widehat{\\Theta}(x)=\\widehat{\\theta}: \\Xz \\to \\BB{\\Theta}\n } \\ .\n\\]\nIf our data point $x := (x_1,x_2,\\ldots,x_n)$ is an $n$-vector or a point in the $n$-dimensional real space, i.e.~$x := (x_1,x_2,\\ldots,x_n) \\in \\Xz_n \\subset \\Rz^n$, then we emphasize the dimension $n$ in our point estimator $\\widehat{\\Theta}_n$ of $\\theta^* \\in \\BB{\\Theta}$.\n\\[\n\\boxed{\n\\widehat{\\Theta}_n :=  \\widehat{\\Theta}_n(x:=(x_1,x_2,\\ldots,x_n))=\\widehat{\\theta}_n : \\Xz_n \\to \\BB{\\Theta}, \\quad \\Xz_n \\subset \\Rz^n \n} \\ .\n\\]\nThe typical situation for us involves point {\\bf estimation} of $\\theta^* \\in \\BB{\\Theta}$ from \n$(x_1,x_2,\\ldots,x_n)$, the {\\bf observed data} (realization or sample), based on the {\\bf model}\n\\[\nX=(X_1,X_2,\\ldots,X_n) \\sim f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta^*) \\enspace .\n\\]\n%}\n\\end{definition}\n\n\\begin{example}[Coin Tossing Experiment ($X_1,\\ldots,X_n \\overset{IID}{\\sim} \\bernoulli(\\theta^*)$)]\\label{EX:CoinTossing}\n{\\rm\nI tossed a coin that has an unknown probability $\\theta^*$ of landing Heads independently and identically $10$ times in a row.  Four of my outcomes were Heads and the remaining six were Tails, with the actual sequence of Bernoulli outcomes (Heads $\\to 1$ and Tails $\\to 0$) being $(1,0,0,0,1,1,0,0,1,0)$.  I would like to estimate the probability $\\theta^* \\in \\BB{\\Theta} = [0,1]$ of observing Heads using the natural estimator $\\widehat{\\Theta}_n((X_1,X_2,\\ldots,X_n))$ of $\\theta^*$:\n\\[\n\\widehat{\\Theta}_n((X_1,X_2,\\ldots,X_n)) := \\widehat{\\Theta}_n = \\frac{1}{n} \\sum_{i=1}^n X_i =: \\overline{X}_n\n\\]\nFor the coin tossing experiment I just performed ($n=10$ times), the point estimate of the unknown $\\theta^*$ is:\n\\begin{eqnarray}\n\\widehat{\\theta}_{10} = \\widehat{\\Theta}_{10}((x_1,x_2,\\ldots,x_{10})) \n&=&\\widehat{\\Theta}_{10}((1,0,0,0,1,1,0,0,1,0)) \\notag \\\\\n&=& \\frac{1+0+0+0+1+1+0+0+1+0}{10}=\\frac{4}{10}=0.40 \\notag \\ .\n\\end{eqnarray}\n}\n\\end{example}\n}\n\n\\subsection{Likelihood}\\label{S:Likelihood}\nWe take a look at {\\bf likelihood} --- one of the most fundamental concepts in Statistics.  
\n\n\\begin{definition}[Likelihood Function]\\label{D:LklFn}\n{\\rm\nSuppose $(X_1,X_2,\\ldots,X_n)$ is a \\rv~with JPDF or JPMF $f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta)$ specified by parameter $\\theta \\in \\BB{\\Theta}$.  \nLet the observed data be $(x_1,x_2,\\ldots,x_n)$.  \nThen the {\\bf likelihood} function, denoted by $L_n(\\theta)$, is merely the joint probability (or joint probability density) of the data, except that we view it as a function of the parameter:\n\\begin{equation}\n\\boxed{\nL_n(\\theta) := L_n(x_1,x_2,\\ldots,x_n; \\theta) = f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta) \\enspace .\n}\n\\end{equation}\nThe {\\bf log-likelihood} function is defined by:\n\\begin{equation}\n\\boxed{\n\\ell_n(\\theta) := \\log(L_n(\\theta))\n} \\enspace\n\\end{equation}\n}\n\\end{definition}\n\n\\begin{example}[Likelihood of the IID $\\bernoulli(\\theta^*)$ experiment]\\label{EX:LklCoinTossing}\n{\\rm\nConsider our IID Bernoulli experiment:\n$$\nX_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\bernoulli(\\theta^*), \\text{ with PMF } f_{X_i}(x_i;\\theta)=\\theta^{x_i}(1-\\theta)^{1-x_i} \\BB{1}_{\\{0,1\\}}(x_i), \\, \\text{for } i \\in \\{1,2,\\ldots,n\\} \\enspace .\n$$\nLet us understand the likelihood function for one observation first.  There are two possibilities for the first observation.  \n\nIf we only have one observation and it happens to be $x_1=1$, then our likelihood function is:\n$$L_1(\\theta)=L_1(x_1;\\theta)\n= f_{X_1}(x_1;\\theta)\n=\\theta^{1}(1-\\theta)^{1-1} \\BB{1}_{\\{0,1\\}}(1)\n=\\theta (1-\\theta)^0 1\n=\\theta \\enspace\n$$\nIf we only have one observation and it happens to be $x_1=0$, then our likelihood function is:\n$$L_1(\\theta)=L_1(x_1;\\theta)\n= f_{X_1}(x_1;\\theta)\n=\\theta^{0}(1-\\theta)^{1-0} \\BB{1}_{\\{0,1\\}}(0)\n=1 (1-\\theta)^1 1\n=1-\\theta \\enspace\n$$\nIf we have $n$ observations $(x_1,x_2,\\ldots,x_n)$, i.e.~a vertex point of the unit hyper-cube $\\{0,1\\}^n$ (see top panel of Figure~\\ref{F:BernoulliSampleLkl} when $n \\in \\{1,2,3\\}$), then our likelihood function (see bottom panel of Figure~\\ref{F:BernoulliSampleLkl}) is obtained by multiplying the marginal PMFs, due to our IID assumption:\n\\begin{eqnarray}\\label{E:LklBernoulli}\nL_n(\\theta) &:=& L_n(x_1,x_2,\\ldots,x_n; \\theta)  = f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n ; \\theta) \\notag \\\\\n&=& f_{X_1}(x_1 ; \\theta) f_{X_2}(x_2 ; \\theta) \\cdots f_{X_n}(x_n ; \\theta) := \\prod_{i=1}^n f_{X_i}(x_i ; \\theta) \\notag \\\\\n&=& \\theta^{\\sum_{i=1}^n x_i} (1-\\theta)^ {n-\\sum_{i=1}^n x_i} %:= \\theta^{t_n} (1-\\theta)^{n-t_n} \\notag \n\\end{eqnarray}\n%In the last step, we have formally defined the following statistic of the data: \n%$$T_n(X_1,X_2,\\ldots,X_n)=\\sum_{i=1}^n X_i :  \\Xz_n \\rightarrow \\Tz_n$$ with the corresponding realisation $t_n := T_n(x_1,x_2,\\ldots,x_n)=\\sum_{i=1}^n x_i \\in \\Tz_n$. 
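\nFor instance, for the coin-tossing data $(1,0,0,0,1,1,0,0,1,0)$ of Example~\\ref{EX:CoinTossing}, where $n=10$ and $\\sum_{i=1}^{10} x_i = 4$, this formula gives the likelihood $L_{10}(\\theta) = \\theta^{4} (1-\\theta)^{6}$.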
\n}\n\\end{example}\n\n\\begin{figure}[htbp]\n\\caption{Data Spaces $\\Xz_1=\\{0,1\\}$, $\\Xz_2=\\{0,1\\}^2$ and $\\Xz_3=\\{0,1\\}^3$ for one, two and three IID Bernoulli trials, respectively, and the corresponding likelihood functions.\\label{F:BernoulliSampleLkl}}\n\\centering   \\makebox{\\includegraphics[width=5.0in]{figures/BernoulliSampleLkl}}\n\\end{figure}\n\n\n\\begin{definition}[Maximum Likelihood Estimator (MLE)]\\label{D:MLE}\n{\\rm\nLet the model for the data be\n$$(X_1,\\ldots,X_n) \\sim f_{X_1,X_2,\\ldots,X_n}(x_1,\\ldots,x_n;\\theta^*) \\enspace .$$ \nThen the maximum likelihood estimator (MLE) $\\widehat{\\Theta}_n$ of the fixed and possibly unknown parameter $\\theta^* \\in \\BB{\\Theta}$ is the value of $\\theta$ that maximizes the likelihood function:\n\\[\n\\boxed{\n\\widehat{\\Theta}_n := \\widehat{\\Theta}_n(X_1,X_2,\\ldots,X_n) :=  \\argmax_{\\theta \\in \\BB{\\Theta}} L_n(\\theta) \\enspace .\n}\n\\]\nEquivalently, the MLE is the value of $\\theta$ that maximizes the log-likelihood function (since $\\log=\\log_e=\\ln$ is a monotone increasing function):\n\\[\n\\boxed{\n\\widehat{\\Theta}_n := \\argmax_{\\theta \\in \\BB{\\Theta}} \\ell_n(\\theta) \\enspace .\n}\n\\]\n}\n\\end{definition}\n\n\\subsubsection*{Useful Properties of the Maximum Likelihood Estimator}\n\\be\n\\item The ML Estimator is {\\em asymptotically consistent} (gives the ``true'' $\\theta^*$ as sample size $n \\to \\infty$):\n\\[\n\\boxed{\\widehat{\\Theta}_n \\rightsquigarrow \\pointmass(\\theta^*)}\n\\]\n\\item The ML Estimator is asymptotically normal (has a normal distribution concentrating on $\\theta^*$ as $n \\to \\infty$):\n\\[\n\\boxed{\\widehat{\\Theta}_n \\rightsquigarrow \\normal(\\theta^*,(\\widehat{\\mathsf{se}}_n)^2)}\n\\]\nor equivalently:\n\\[\n\\boxed{(\\widehat{\\Theta}_n - \\theta^*) / \\widehat{\\mathsf{se}}_n \\rightsquigarrow \\normal(0,1)}\n\\]\nwhere $\\widehat{\\mathsf{se}}_n$ is the {\\bf estimated standard error}, i.e.~an estimate of the standard deviation of $\\widehat{\\Theta}_n$, and it is given by the square-root of the inverse negative curvature of $\\ell_n(\\theta)$ at $\\widehat{\\theta}_n$:\n\\[\n\\boxed{\n\\widehat{\\mathsf{se}}_n = \\sqrt{\\left( \\left[ -\\frac{d^2 \\ell_n(\\theta)}{d \\theta^2}\\right]_{\\theta=\\widehat{\\theta}_n}\\right)^{-1}}\n}\n\\]\n\\item Because of the previous two properties, an approximate $1-\\alpha$ confidence interval is:\n\\[\n\\boxed{\n\\widehat{\\Theta}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n \n}\n\\]\n%\\item The ML Estimator is {\\bf equivariant}, i.e.~$\\widehat{\\psi}_n=g(\\widehat{\\theta}_n)$ is the ML Estimate of $\\psi^*=g(\\theta^*)$, for some smooth function $g(\\theta)=\\psi: \\BB{\\Theta} \\to \\BB{\\Psi}$.  \n\\ee\n\nMLE is a general methodology for parameter estimation in an essentially arbitrary parameter space $\\BB{\\Theta}$ that defines or indexes the laws in a parametric family of models, although here we only see it in action when $\\BB{\\Theta} \\subset \\Rz$, for the simplest parametric families of models involving IID product experiments.  
\nWhen $\\BB{\\Theta} \\subset \\Rz^d$ with $2 \\leq d < \\infty$, the MLE still satisfies $\\widehat{\\Theta}_n \\rightsquigarrow \\pointmass(\\theta^*)$, where $\\theta^*=(\\theta^*_1,\\theta^*_2,\\ldots,\\theta^*_d)^{T}$ is a column vector in $\\BB{\\Theta} \\subset \\Rz^d$ \nand $\\widehat{\\Theta}_n \\rightsquigarrow \\normal\\left(\\theta^*,\\widehat{\\Sigma(\\mathsf{se})}_n\\right)$, a multivariate Normal distribution with mean vector $\\theta^*$ and variance-covariance matrix of standard errors given by the inverse of the negative {\\em Hessian} (a $d \\times d$ matrix of mixed partial derivatives) of $\\ell_n(\\theta_1,\\theta_2,\\ldots,\\theta_d)$, evaluated at the MLE.  The ideas in the case of dimension $d=1$ naturally generalize to an arbitrary, but finite, dimension $d$.\n \n\\begin{rem}\nIn order to use MLE for parameter estimation we need to ensure that the following two conditions hold:\n\\be\n\\item The {\\em support} of the data, i.e.~the set of possible values of $(X_1,X_2,\\ldots,X_n)$ must not depend on $\\theta$ for every $\\theta \\in \\BB{\\Theta}$ --- of course the probabilities do depend on $\\theta$ in an {\\em identifiable} manner, i.e.~for every $\\theta$ and $\\vartheta$ in $\\BB{\\Theta}$, if $\\theta \\neq \\vartheta$ then $f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n;\\theta) \\neq f_{X_1,\\ldots,X_n}(x_1,x_2,\\ldots,x_n;\\vartheta)$ at least for some $(x_1,x_2,\\ldots,x_n) \\in \\Xz$.\n\\item If the parameter space $\\BB{\\Theta}$ is bounded then $\\theta^*$ must not lie on the boundary of $\\BB{\\Theta}$.\n\\ee\n\\end{rem}\n\n\\subsubsection*{Maximum Likelihood Estimation Method in Six Easy Steps}\n{\\bf Background:} We have observed data:\n\\[\n(x_1,x_2,\\ldots,x_n)\n\\]\nwhich is modeled as a sample or realization from the random vector:\n\\[\n(X_1,X_2,\\ldots,X_n) \\sim f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta^*), \\quad \\theta^* \\in \\BB{\\Theta} \\enspace .\n\\]\n{\\bf Objective:} We want to obtain an estimator $\\widehat{\\Theta}_n$ that will give:\n\\be\n\\item the point estimate $\\widehat{\\theta}_n$ of the ``true'' parameter $\\theta^*$ and\n\\item the $(1-\\alpha)$ confidence interval for $\\theta^*$.\n\\ee\n{\\bf Steps of MLE:} \n\\begin{itemize}\n\\item{{\\sf Step 1}:} Find the expression for the log likelihood function:\n\\[\n\\ell_n(\\theta) = \\log(L_n(\\theta)) = \\log\\left( f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta) \\right) \\enspace .\n\\]\nNote that if the model assumes that $(X_1,X_2,\\ldots,X_n)$ is jointly independent with identically distributed components, i.e.~we have an independent and identically distributed (IID) experiment, then $\\ell_n(\\theta)$ simplifies further as follows:\n\\[\n\\ell_n(\\theta) = \\log(L_n(\\theta)) = \\log\\left( f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta) \\right) = \\log \\left(\\prod_{i=1}^n f_{X_i}(x_i;\\theta) \\right) %= \\sum_{i=1}^n \\log\\left(f_{X_i}(x_i;\\theta)\\right) \n\\enspace .\n\\]\n\n\\item{{\\sf Step 2}:} Obtain the derivative of $\\ell_n(\\theta)$ with respect to $\\theta$:\n\\[\n\\frac{d}{d \\theta}\\left(\\ell_n(\\theta)\\right) \\enspace .\n\\]\n\\item{{\\sf Step 3}:} Set the derivative equal to zero, solve for $\\theta$, and set $\\widehat{\\theta}_n$ equal to this solution. 
\n\\item{{\\sf Step 4}:} Check that this solution is indeed a maximum of $\\ell_n(\\theta)$ by verifying that:\n\\[\n\\frac{d^2}{d \\theta^2}\\left(\\ell_n(\\theta)\\right) < 0 \\enspace .\n\\]\n\\item{{\\sf Step 5}:} If $\\frac{d^2}{d \\theta^2}\\left(\\ell_n(\\theta)\\right) < 0$ then you have found the maximum likelihood estimate $\\widehat{\\theta}_n$.\n\\item{{\\sf Step 6}:} If you also want the $(1-\\alpha)$ confidence interval then get it from\n\\[\n\\widehat{\\theta}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n \\quad \\text{, where } \\quad \\widehat{\\mathsf{se}}_n = \\sqrt{\\left( \\left[ -\\frac{d^2 \\ell_n(\\theta)}{d \\theta^2}\\right]_{\\theta=\\widehat{\\theta}_n}\\right)^{-1}} \\enspace .\n\\]\n\\end{itemize}\n\nLet us apply this method to some examples.\n\n\n{\n\\begin{example}[Maximum likelihood estimation for IID $\\exponential(\\lambda^*)$ trials]\\label{Eg:ExponentialMLE}\nFind (or derive) the maximum likelihood estimate $\\widehat{\\lambda}_n$ and the $(1-\\alpha)$ confidence interval for the fixed and possibly unknown parameter $\\lambda^*$ in the IID experiment:\n$$X_1,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda^*), \\qquad \\lambda^* \\in \\BB{\\Lambda} = (0,\\infty) \\enspace .$$\nNote that $\\BB{\\Lambda}$ is the parameter space.\n\nWe first obtain the log-likelihood function $\\ell_n(\\lambda)$ given data $(x_1,x_2,\\ldots,x_n)$.\n\\begin{flalign*}\n\\ell_n(\\lambda) \n& := \\log(L(x_1,x_2,\\ldots,x_n;\\lambda))  = \\log \\left( \\prod_{i=1}^n f_{X_i}(x_i;\\lambda) \\right) = \\log \\left( \\prod_{i=1}^n \\lambda e^{-\\lambda x_i}  \\right)\\\\\n& = \\log \\left( \\lambda e^{-\\lambda x_1} \\cdot \\lambda e^{-\\lambda x_2}  \\cdots \\lambda e^{-\\lambda x_n}  \\right) = \\log \\left( \\lambda^n e^{-\\lambda \\sum_{i=1}^n x_i}  \\right) =\\log \\left( \\lambda^n \\right) + \\log \\left( e^{-\\lambda \\sum_{i=1}^n x_i}  \\right) \\\\\n&= \\boxed{\\log \\left( \\lambda^n \\right) -\\lambda \\sum_{i=1}^n x_i}\n\\end{flalign*}\n\nNow, let us take the derivative with respect to $\\lambda$,\n\\begin{flalign*}\n\\frac{d}{d \\lambda} \\left( \\ell_n(\\lambda) \\right) \n& :=  \\frac{d}{d \\lambda} \\left( \n\\log \\left( \\lambda^n \\right) -\\lambda \\sum_{i=1}^n x_i\n\\right) = \\frac{d}{d \\lambda} \\left( \n\\log \\left( \\lambda^n \\right) \\right) -  \\frac{d}{d \\lambda} \\left( \\lambda \\sum_{i=1}^n x_i \\right)  = \\frac{1}{\\lambda^n}  \\frac{d}{d \\lambda} \\left( \\lambda^n \\right) - \\sum_{i=1}^n x_i \\\\\n&= \\frac{1}{\\lambda^n}  n \\lambda^{n-1}  - \\sum_{i=1}^n x_i = \\boxed{\\frac{n}{\\lambda} - \\sum_{i=1}^n x_i } \\enspace .\n\\end{flalign*}\nNext, we set the derivative to $0$, solve for $\\lambda$, and set the solution equal to the ML estimate $\\widehat{\\lambda}_n$.\n\\begin{flalign*}\n0 = \\frac{d}{d \\lambda} \\left( \\ell_n(\\lambda) \\right) \n& \\iff 0 = \\frac{n}{\\lambda} - \\sum_{i=1}^n x_i \\iff \\sum_{i=1}^n x_i = \\frac{n}{\\lambda} \\iff \\lambda = \\frac{n}{\\sum_{i=1}^n x_i} \\quad \\text{ and let } \\boxed{\\widehat{\\lambda}_n = \\frac{1}{\\overline{x}_n}} \\enspace .\n\\end{flalign*}\nNext, we find the second derivative and check if it is negative.\n\\begin{flalign*}\n\\frac{d^2}{d \\lambda^2} \\left( \\ell_n(\\lambda) \\right) \n&= \\frac{d}{d \\lambda} \\left( \\frac{d}{d \\lambda} \\left( \\ell_n(\\lambda) \\right) \\right) = \\frac{d}{d \\lambda} \\left( \\frac{n}{\\lambda} - \\sum_{i=1}^n x_i \\right) = \\boxed{-n\\lambda^{-2}} \\enspace .\n\\end{flalign*}\nSince $\\lambda>0$ and $n \\in \\Nz$, 
$\\boxed{-n\\lambda^{-2}=-n/\\lambda^2 < 0}$, so we have found the maximum likelihood estimate:\n\\[\n\\boxed{\\widehat{\\lambda}_n = \\frac{1}{\\overline{x}_n} } \\enspace .\n\\]\nNow, let us find the estimated standard error:\n\\begin{align*}\n\\boxed{\\widehat{\\mathsf{se}}_n} \n&= \\sqrt{\\left( \\left[ -\\frac{d^2 \\ell_n(\\lambda)}{d \\lambda^2}\\right]_{\\lambda=\\widehat{\\lambda}_n}\\right)^{-1}} \n= \\sqrt{\\left( \\left[ - \\left( -\\frac{n}{\\lambda^2}\\right) \\right]_{\\lambda=\\widehat{\\lambda}_n}\\right)^{-1}} = \\sqrt{\\left( \\frac{n}{\\widehat{\\lambda}_n^2}\\right)^{-1}}=\\sqrt{\\frac{\\widehat{\\lambda}_n^2}{n}}=\\frac{\\widehat{\\lambda}_n}{\\sqrt{n}}\\\\\n&=\\boxed{\\frac{1}{\\ol{x}_n\\sqrt{n}}}\n\\enspace .\n\\end{align*}\nAnd finally, the $(1-\\alpha)$ confidence interval is\n\\[\n\\widehat{\\lambda}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n \n= \\boxed{\\frac{1}{\\overline{x}_n} \\pm z_{\\alpha/2} \\frac{1}{\\ol{x}_n\\sqrt{n}}} \\enspace . \n\\]\n\\end{example}\n}\n\nSince we have worked ``hard'' to get the maximum likelihood estimate for a general IID model $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda^*)$, let us kill two birds with one stone by applying it to two datasets:\n\\be\n\\item Orbiter waiting times and\n\\item Time between measurable earthquakes in New Zealand over a few months.\n\\ee\nTo summarize, the ML estimate $\\widehat{\\lambda}_n$ of the unknown rate parameter $\\lambda^* \\in \\BB{\\Lambda}$ on the basis of $n$ IID observations $x_1,x_2,\\ldots,x_n \\overset{IID}{\\sim} \\exponential(\\lambda^*)$ is $1/\\overline{x}_n$ and the ML estimator is $\\widehat{\\Lambda}_n=1/\\overline{X}_n$.  \n\n\\begin{example}[Orbiter Waiting Times]\\label{EgOrbiterMLE}\nLet us apply this ML estimator of the rate parameter to the supposedly exponentially distributed waiting times at the on-campus Orbiter bus-stop.\n\nJoshua Fenemore and Yiran Wang collected data on waiting times between buses at an Orbiter bus-stop close to campus.  \nThey collected a sample of size $n=132$ with sample mean $\\overline{x}_{132}=9.0758$.\n\\begin{VrbM}\n% Joshua Fenemore's Data from 2007 on Waiting Times at Orbiter Bus Stop by Balgay Street\n% The raw data -- the waiting times to the nearest minute between Orbiter buses\n>> orbiterTimes=[8 3 7 18 18 3 7 9 9 25 0 0 25 6 10 0 10 8 16 9 1 5 16 6 4 1 3 21 0 28 3 8 ...\n 6 6 11 8 10 15 0 8 7 11 10 9 12 13 8 10 11 8 7 11 5 9 11 14 13 5 8 9 12 10 13 6 11 13 ...\n 0 0 11 1 9 5 14 16 2 10 21 1 14 2 10 24 6 1 14 14 0 14 4 11 15 0 10 2 13 2 22 ...\n 10 5 6 13 1 13 10 11 4 7 9 12 8 16 15 14 5 10 12 9 8 0 5 13 13 6 8 4 13 15 7 11 6 23 1];\n>> mean(orbiterTimes)\nans =\n    9.0758\n\\end{VrbM}\n\nFrom our work in Example~\\ref{Eg:ExponentialMLE} we can now easily obtain the maximum likelihood estimate of $\\lambda^*$ and the $95\\%$ confidence interval for it, under the assumption that the waiting times $X_1,\\ldots,X_{132}$ are IID $\\exponential(\\lambda^*)$ RVs as follows: \n$$\\widehat{\\lambda}_{132}=1/\\overline{x}_{132}=1/9.0758=0.1102 \\quad \n(0.1102 \\pm 1.96 \\times 0.1102/\\sqrt{132}) = (0.0914,0.1290) \\enspace ,$$ \nand thus the estimated mean waiting time is \n$$1/\\widehat{\\lambda}_{132}=\\overline{x}_{132}=9.0758 \\text{ minutes} .$$  \nThe estimated mean waiting time for a bus to arrive is well within the $10$ minutes promised by the Orbiter bus company.  
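\nWe can verify these numbers directly in {\\sc Matlab} (a quick sketch, assuming the {\\tt orbiterTimes} vector entered above is still in the workspace):\n\\begin{VrbM}\n>> lambdaHat = 1/mean(orbiterTimes);    % ML estimate of lambda: 0.1102\n>> seHat = lambdaHat/sqrt(132);         % estimated standard error\n>> lambdaHat + [-1 1]*1.96*seHat        % approximate 95% CI: (0.0914, 0.1290)\n\\end{VrbM}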
\nThis data and its maximum likelihood analysis is presented visually in Figure~\\ref{F:ExponentialMLEOrbiter}.\n \nThe following script was used to generate the \\hyperref[F:ExponentialMLEOrbiter]{Figure \\ref*{F:ExponentialMLEOrbiter}}:\n\\begin{figure}[htpb]\n\\caption{Plot of $\\log(L(\\lambda))$ as a function of the parameter $\\lambda$  and the MLE $\\widehat{\\lambda}_{132}$ of $0.1102$ for the Fenemore-Wang Orbiter Waiting Times Experiment from STAT 218 S2 2007.  The density or PDF and the DF at the MLE of $0.1102$ are compared with a histogram and the empirical DF.\\label{F:ExponentialMLEOrbiter}}\n\\centering   \\makebox{\\includegraphics[width=4.5in]{figures/ExponentialMLEOrbiter}}\n\\end{figure}\nNotice how the exponential PDF $f(x;\\widehat{\\lambda}_{132}=0.1102)$ and the DF $F(x;\\widehat{\\lambda}_{132}=0.1102)$ based on the MLE fit the histogram and the empirical DF, respectively.  \n%This is an indication of the inadequacy of our parametric model.  Partly this discrepancy is due to the resolution of the measurements being confined to whole minutes.  We can overcome this problem by fitting a minute-discretized PMF from the $\\exponential(\\lambda)$ PDF.  \n\n\\begin{figure}[htpb]\n\\caption{Comparing the $\\exponential(\\widehat{\\lambda}_{6128}= 28.6694)$ PDF and DF with a histogram and empirical DF of the times (in units of days) between earthquakes in NZ.  The epicenters of $6128$ earthquakes are shown in the left panel.\\label{F:NZSIEarthQuakesExponentialMLE}}\n\\centering   \\makebox{\\includegraphics[width=4.5in]{figures/NZSIEarthQuakesExponentialMLE}}\n\\end{figure}\n\\end{example}\n\n\\begin{example}[Waiting Times between Earthquakes in NZ]\\label{EX:NZSIEarthQuakesExponentialMLE}  \nOnce again from our work in Example~\\ref{Eg:ExponentialMLE} we can now easily obtain the maximum likelihood estimate of $\\lambda^*$ and the $95\\%$ confidence interval for it, under the assumption that the waiting times (in days) between the $6128$ measurable earthquakes in NZ from 18-Jan-2008 02:23:44 to 18-Aug-2008 19:29:29 are IID $\\exponential(\\lambda^*)$ RVs as follows: \n$$\\widehat{\\lambda}_{6128}=1/\\overline{x}_{6128}=1/0.0349=28.6694 \\quad \n(28.6694 \\pm 1.96 \\times 28.6694/\\sqrt{6128}) = (27.95,29.39) \\enspace ,$$ \nand thus the estimated mean time in days and minutes between earthquakes (somewhere in NZ over the first 8 months in 2008), as processed in Labwork~\\ref{LW:NZSIEarthQuakesExponentialMLE}, is \n$$1/\\widehat{\\lambda}_{6128}=\\overline{x}_{6128}=0.0349 \\text{ days } \\quad = \\quad 0.0349*24*60=50.2560  \\text{ minutes} \\enspace .$$  \nThis data and its maximum likelihood analysis is presented visually in Figure~\\ref{F:NZSIEarthQuakesExponentialMLE}.  The PDF and DF corresponding to the $\\widehat{\\lambda}_{6128}$ (blue curves in Figure~\\ref{F:NZSIEarthQuakesExponentialMLE}) are the best fitting PDF and DF from the parametric family of PDFs in $\\{\\lambda e^{-\\lambda x}: \\lambda \\in (0,\\infty) \\}$ and DFs in $\\{1- e^{-\\lambda x}: \\lambda \\in (0,\\infty) \\}$ to the density histogram and the empirical distribution function given by the data, respectively.  Clearly, there is room for improvement beyond the model of IID $\\exponential(\\lambda)$ RVs, but the fit with just one real-valued parameter is not too bad either.  
Finally, with the best fitting PDF $28.6694 e^{-28.6694 x}$ we can get probabilities of events and answer questions like: ``what is the probability that there will be three earthquakes somewhere in NZ within the next hour?'', etc.\n\\end{example}\n\n\\begin{labwork}[Inter-Earthquake Time Processing]\\label{LW:NZSIEarthQuakesExponentialMLE}\nTo process the data to get the times between earthquakes, we can compute as in the following script:\\VrbMf[label=NZSIEarthQuakesExponentialMLE.m]{scripts/NZSIEarthQuakesExponentialMLE.m}\nWe first load the data in the text file {\\tt earthquakes.csv} into a matrix {\\tt EQ}.  Using the {\\tt datenum} function in {\\sc Matlab} we transform the time stamps into numbers starting at zero.  These transformed time stamps are in units of days.  Then we find the times between consecutive events and estimate a histogram.  We finally compute the ML estimate of $\\lambda^*$ and superimpose the PDF of the $\\exponential(\\widehat{\\lambda}_{6128}= 28.6694)$ upon the histogram.\n\\begin{VrbM}\n>> NZSIEarthQuakesExponentialMLE\nans =        6128          13\n\nEarth Quakes in NZ between\n18-Jan-2008 02:23:44 and18-Aug-2008 19:29:29\n\nSampleMean =    0.0349\nMLELambdaHat =   28.6694\n\\end{VrbM}\nThus, the average time between earthquakes is $0.0349*24*60=50.2560$ minutes.\n\\end{labwork}\n\n\\begin{example}[ML Estimation for the IID $\\bernoulli(\\theta^*)$ experiment]\\label{EX:MLECoinTossing}\nLet us do maximum likelihood estimation for the coin-tossing experiment of Example~\\ref{EX:CoinTossing} with likelihood derived in Example~\\ref{EX:LklCoinTossing} to obtain the maximum likelihood estimate $\\widehat{\\theta}_n$ of the unknown parameter $\\theta^* \\in \\BB{\\Theta} = [0,1]$ and the $(1-\\alpha)$ confidence interval for it.\n\nFrom Equation~\\eqref{E:LklBernoulli} the log likelihood function is\n\\begin{eqnarray}\n\\ell_n(\\theta) = \\log(L_n(\\theta)) \n= \\log \\left( \\theta^{\\sum_{i=1}^n x_i} (1-\\theta)^ {n-\\sum_{i=1}^n x_i} \\right) \n= \\boxed{\\left({\\sum_{i=1}^n x_i}\\right) \\log(\\theta) + \\left(n-{\\sum_{i=1}^n x_i}\\right) \\log(1-\\theta)} \\notag \\enspace .\n\\end{eqnarray}\nNext, we take the derivative with respect to the parameter $\\theta$:\n\\begin{eqnarray}\n\\frac{d}{d \\theta} \\left(\\ell_n(\\theta)\\right) \n&=& \\frac{d}{d \\theta}  \\left( \\left({\\sum_{i=1}^n x_i}\\right) \\log(\\theta) \\right) + \\frac{d}{d \\theta} \\left(  \\left( n-{\\sum_{i=1}^n x_i} \\right) \\log(1-\\theta) \\right) \\notag = \\boxed{\\frac{{\\sum_{i=1}^n x_i}}{\\theta} - \\frac{n-{\\sum_{i=1}^n x_i}}{1-\\theta}} \\notag \\enspace .\n\\end{eqnarray}\nNow, set $\\frac{d}{d \\theta} \\log(L_n(\\theta))=0$, solve for $\\theta$ and set the solution equal to $\\widehat{\\theta}_n$: \n\\begin{align*}\n\\frac{d}{d \\theta} \\left( \\ell_n(\\theta) \\right) = 0 \n&\\iff  \\frac{{\\sum_{i=1}^n x_i}}{\\theta} = \\frac{n-{\\sum_{i=1}^n x_i}}{1-\\theta} \\iff\n\\frac{1-\\theta}{\\theta} = \\frac{n-{\\sum_{i=1}^n x_i}}{{\\sum_{i=1}^n x_i}} \\\\\n&\\iff\n\\frac{1}{\\theta}-1 = \\frac{n}{{\\sum_{i=1}^n x_i}}-1 \\iff \\theta = \\frac{{\\sum_{i=1}^n x_i}}{n} \\quad \\text{ and let } \\boxed{\\widehat{\\theta}_n = \\frac{{\\sum_{i=1}^n x_i}}{n}}\n\\end{align*}\nNext, we find the second derivative and check if it is negative.\n\\begin{align*}\n\\frac{d^2}{d \\theta^2} (\\ell_n(\\theta)) \n&= \\frac{d}{d \\theta} \\left( \\frac{{\\sum_{i=1}^n x_i}}{\\theta} - \\frac{n-{\\sum_{i=1}^n x_i}}{1-\\theta} \\right) \n= \\boxed{-\\frac{{\\sum_{i=1}^n x_i}}{\\theta^2} - \\frac{n-{\\sum_{i=1}^n x_i}}{(1-\\theta)^2}} 
\n\\end{align*}\n\nSince each term in the numerators and denominators of the two fractions in the above box is non-negative, $\\frac{d^2}{d \\theta^2} (\\ell_n(\\theta))< 0$ and therefore we have found the maximum likelihood estimate\n\\[\n\\widehat{\\theta}_n  = \\frac{1}{n} \\sum_{i=1}^n x_i = \\overline{x}_n \\enspace .\n\\]\nWe already knew this to be a point estimate for $E(X_i)=\\theta^*$ from LLN and CLT.  But now we know that the MLE also agrees.\nNow, let us find the estimated standard error:\n\\begin{align*}\n\\boxed{\\widehat{\\mathsf{se}}_n} \n&= \\sqrt{\\left( \\left[ -\\frac{d^2 \\ell_n(\\theta)}{d \\theta^2}\\right]_{\\theta=\\widehat{\\theta}_n}\\right)^{-1}} \n= \\sqrt{\\left( \\left[ - \\left( -\\frac{{\\sum_{i=1}^n x_i}}{\\theta^2} - \\frac{n-{\\sum_{i=1}^n x_i}}{(1-\\theta)^2}\\right) \\right]_{\\theta=\\widehat{\\theta}_n}\\right)^{-1}} \\\\\n&= \\sqrt{\\left( \\frac{{\\sum_{i=1}^n x_i}}{\\widehat{\\theta}_n^2} + \\frac{n-{\\sum_{i=1}^n x_i}}{(1-\\widehat{\\theta}_n)^2} \\right)^{-1}}\n= \\sqrt{\\left( \\frac{n\\ol{x}_n}{\\ol{x}_n^2} + \\frac{n-n\\ol{x}_n}{(1-\\ol{x}_n)^2} \\right)^{-1}}\n= \\sqrt{\\left( \\frac{n}{\\ol{x}_n} + \\frac{n}{(1-\\ol{x}_n)} \\right)^{-1}}\\\\\n&= \\sqrt{\\left( \\frac{n(1-\\ol{x}_n)+n\\ol{x}_n}{\\ol{x}_n(1-\\ol{x}_n)} \\right)^{-1}}\n= \\sqrt{\\frac{\\ol{x}_n(1-\\ol{x}_n)}{n((1-\\ol{x}_n)+\\ol{x}_n)}}\n= \\boxed{\\sqrt{\\frac{\\ol{x}_n(1-\\ol{x}_n)}{n}}} \\enspace .\n\\end{align*}\nAnd finally, the $(1-\\alpha)$ confidence interval is\n\\[\n\\widehat{\\theta}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n \n= \\boxed{\\overline{x}_n \\pm z_{\\alpha/2} \\sqrt{\\frac{\\ol{x}_n(1-\\ol{x}_n)}{n}}} \\enspace . \n\\]\n\nFor the coin tossing experiment that was performed ($n=10$ times) in Example~\\ref{EX:CoinTossing}, the maximum likelihood estimate of $\\theta^*$ and the $95\\%$ confidence interval for it, under the model that the tosses are IID $\\bernoulli(\\theta^*)$ RVs, are as follows:\n\\[\n\\widehat{\\theta}_{10} \n= \\ol{x}_{10} =\\frac{4}{10}=0.40\n\\quad \\text{and} \\quad \\left(0.4 \\pm 1.96 \\times \\sqrt{\\frac{0.4 \\times 0.6}{10}}\\right) =  (0.0964, 0.7036) \\enspace .\n\\]\nSee Figures~\\ref{F:BernoulliMLE} and \\ref{F:BernoulliMLEConsistency} to completely understand parameter estimation for IID Bernoulli experiments.\n% (not just coin toss, but for any event of interest!) through the pictures.\n\\end{example}\n\n\\begin{figure}[htpb]\n\\caption{Plots of the log likelihood $\\ell_n(\\theta)=\\log(L(1,0,0,0,1,1,0,0,1,0;\\theta))$ as a function of the parameter $\\theta$ over the parameter space $\\BB{\\Theta}=[0,1]$ and the MLE $\\widehat{\\theta}_{10}$ of $0.4$ for the coin-tossing experiment shown in standard scale (left panel) and log scale for $x$-axis (right panel).\\label{F:BernoulliMLE}}\n\\centering   \\makebox{\\includegraphics[width=4.5in]{figures/BernoulliMLE}}\n\\end{figure}\n\n\n\\begin{figure}[htbp]\n\\caption{{\\small $100$ realizations of $95\\%$ confidence intervals based on samples of size $n$ $=$ $10$, $100$ and $1000$ simulated from IID $\\bernoulli(\\theta^*=0.5)$ RVs.  %as per \\hyperref[Mf:BernoulliMLEConsistency]{Labwork~\\ref*{Mf:BernoulliMLEConsistency}}.  \nThe MLE $\\widehat{\\theta}_n$ (cyan dot) and the log-likelihood function (magenta curve) for each of the $100$ replications of the experiment for each sample size $n$ are depicted.  
\nThe approximate normal-based $95\\%$ confidence intervals with blue boundaries are based on the exact $\\mathsf{se}_n=\\sqrt{\\theta^*(1-\\theta^*)/n}=\\sqrt{1/(4n)}$, while those with red boundaries are based on the estimated $\\widehat{\\mathsf{se}_n}=\\sqrt{\\widehat{\\theta}_n(1-\\widehat{\\theta}_n)/n} = \\sqrt{\\frac{\\ol{x}_n(1-\\ol{x}_n)}{n}}$.  \nThe fractions of times the true parameter $\\theta^*=0.5$ was contained by the exact and the approximate confidence intervals (known as {\\em empirical coverage}) over the $100$ replications of the simulation experiment for each of the three sample sizes are given by the numbers after {\\tt Cvrg.=} and {\\tt $\\sim$=}, above each sub-plot, respectively.}\\label{F:BernoulliMLEConsistency}}\n\\begin{center}\n\\makebox{\\includegraphics[width=5.00in]{figures/BernoulliMLEConsistency}}\n\\end{center}\n\\end{figure}  \n\n\\clearpage\n\n%\\iftoggle{PlaceTutsHere}{%\n%\\subsection{Tutorial Exercises}\n%\\input{Tutorials/Tut_Lkl_preps.tex}\n%~\\\\\n%\\input{Tutorials/Tut_Lkl_inTut.tex}\n%}{%\n%  % don't do anything otherwise\n%}\n\n\\begin{Exercise}[title={Likelihoods of tiny $\\bernoulli$ trials},label={ExLklOfTinyBernoulliTrials}]\nFind and plot the likelihood function for each of the following observations $(x_1,x_2,\\ldots,x_n)$ from an IID sequence of $\\mathrm{Bernoulli}(\\theta)$ RVs:\n\\begin{enumerate}\n\\item~$(x_1)=(1)$\n\\item~$(x_1)=(0)$\n\\item~$(x_1,x_2)=(0,0)$\n\\item~$(x_1,x_2)=(1,1)$\n\\item~$(x_1,x_2)=(1,0)$\n\\item~$(x_1,x_2)=(0,1)$\n\\item~$(x_1,x_2,x_3)=(1,1,0)$\n\\item~$(x_1,x_2,x_3)=(0,0,1)$\n\\end{enumerate}\n[Hint: your $x$-axis is $\\theta$ with values in $[0,1]$, the parameter space, and your $y$-axis is $L_n(\\theta; x_1,x_2,\\ldots,x_n) = \\prod_{i=1}^n f_{X_i}(x_i;\\theta)$, where $f_{X_i}(x_i;\\theta)$ is the PMF of the $\\mathrm{Bernoulli}(\\theta)$ RV $X_i$]\n\\end{Exercise}\n\n\\begin{Answer}\n~\\\\\nLikelihood is just the joint PDF (if continuous) or joint PMF (if discrete) of the data {\\bf but} seen as a function of the parameter $\\theta$: \n\\[\nL(\\theta)=L(\\theta; (x_1,x_2,\\ldots,x_n)) = f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta)\n\\]\nWe sometimes write $L(\\theta)$ instead of $L(\\theta; (x_1,x_2,\\ldots,x_n))$ for notational simplicity.  \nIn this case, since we are assuming independent and identically distributed observations, the joint PDF/PMF is simply the product of the marginal PDFs/PMFs:\n\\[\nL(\\theta) = f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta) = \\prod_{i=1}^n f_{X_i}(x_i;\\theta) = \\prod_{i=1}^n f_{X_1}(x_i;\\theta)\n\\]\nSince each $X_i \\overset{IID}{\\sim} \\mathrm{Bernoulli}(\\theta)$, the marginal PMF of each $X_i$ is the same as that of the first RV $X_1$, which is:\n\\[\nf_{X_1}(x_i; \\theta) \n= \n\\begin{cases}\n\\theta & \\text{ if } x_i = 1\\\\\n1-\\theta & \\text{ if } x_i=0 \\enspace.\n\\end{cases}\n\\]\nFrom this we get the eight likelihood functions in the question:\n\\begin{enumerate}\n\\item~When $(x_1)=(1)$, there is only one data point so $n=1$ and therefore\n$$L(\\theta)= \\prod_{i=1}^1 f_{X_1}(x_1;\\theta) = f_{X_1}(x_1=1;\\theta) = f_{X_1}(1;\\theta)=\\theta$$\nThe above step-by-step breakdown is for understanding only.  
In the exam, you can just write:\n$$L(\\theta)= \\theta$$\nMake a plot of $L(\\theta)=\\theta$ as a function of $\\theta$ (with x-axis values taken by $\\theta$ in the unit interval $[0,1]$).\n\\item~When $(x_1)=(0)$\n$$L(\\theta)= \\prod_{i=1}^1 f_{X_1}(x_1;\\theta) = f_{X_1}(x_1=0;\\theta) = f_{X_1}(0;\\theta)=1-\\theta$$\nIn the exam, you can just write:\n$$L(\\theta)= 1-\\theta$$\nMake a plot of $L(\\theta)=1-\\theta$ as a function of $\\theta$.\n\\item~When $(x_1,x_2)=(0,0)$, we have $n=2$ data points and therefore\n\\begin{multline*}\nL(\\theta)= \\prod_{i=1}^2 f_{X_1}(x_i;\\theta) = f_{X_1}(x_1=0;\\theta) \\times f_{X_1}(x_2=0;\\theta) \\\\\n= f_{X_1}(0;\\theta)\\times f_{X_1}(0;\\theta) =(1-\\theta) \\times (1-\\theta) = (1-\\theta)^2\n\\end{multline*}\nOr just\n\\[\nL(\\theta) = (1-\\theta) \\times (1-\\theta) = (1-\\theta)^2 \\enspace .\n\\]\nMake a plot of $L(\\theta)=(1-\\theta)^2$ as a function of $\\theta$.\n\\item~When $(x_1,x_2)=(1,1)$, we have $n=2$ data points and therefore\n\\begin{multline*}\nL(\\theta)= \\prod_{i=1}^2 f_{X_1}(x_i;\\theta) = f_{X_1}(x_1=1;\\theta) \\times f_{X_1}(x_2=1;\\theta) \\\\\n= f_{X_1}(1;\\theta)\\times f_{X_1}(1;\\theta) = \\theta \\times \\theta = \\theta^2\n\\end{multline*}\nOr just\n\\[\nL(\\theta) = \\theta \\times \\theta = \\theta^2 \\enspace .\n\\]\nMake a plot of $L(\\theta)=\\theta^2$ as a function of $\\theta$.\n\\item~When $(x_1,x_2)=(1,0)$, we have $n=2$ data points and therefore\n\\[\nL(\\theta) = \\theta \\times (1-\\theta)  = \\theta  - \\theta^2 \\enspace .\n\\]\nMake a plot of $L(\\theta)=\\theta-\\theta^2$ as a function of $\\theta$.\nThis plot is easy to draw if you overlay the plot for $\\theta$ and $\\theta^2$ (that you just made separately) and then see where $\\theta > \\theta^2$ and by how much.\n\\item~When $(x_1,x_2)=(0,1)$,\n\\[\nL(\\theta) = (1-\\theta) \\times \\theta  = \\theta  - \\theta^2 \\enspace .\n\\]\nNotice that the likelihood with data $(x_1,x_2)=(0,1)$ is the same as the likelihood with the previous data $(x_1,x_2)=(1,0)$.  \nThis is because the likelihood, being a product (from the IID assumption), is invariant to the order of the data, i.e., first observing $0$ and then observing $1$ has the same likelihood as first observing $1$ and then observing $0$.  \nThis means that, in general, even when $n>2$, only the number of $1$'s and $0$'s in the $n$ IID $\\mathrm{Bernoulli}(\\theta)$ experiment affects the likelihood of $\\theta$.       \nYou have already made the plot of $L(\\theta)=\\theta-\\theta^2$ as a function of $\\theta$ in the previous problem!\n\\item~When $(x_1,x_2,x_3)=(1,1,0)$,\n\\[\nL(\\theta) = \\theta \\times \\theta \\times (1-\\theta)  = \\theta^2  (1-\\theta) = \\theta^2-\\theta^3 \\enspace .\n\\]\nThis plot is easy to draw if you first plot $\\theta^2$ and $\\theta^3$ separately and then see how far apart they are to get a sense for $\\theta^2-\\theta^3$.\n\\item~When $(x_1,x_2,x_3)=(0,0,1)$,\n\\[\nL(\\theta) = (1-\\theta) \\times (1-\\theta) \\times \\theta  = (1-\\theta)^2  \\theta = \\theta-2\\theta^2+\\theta^3 \\enspace .\n\\]\nThis is just a polynomial in $\\theta$ and can be plotted.  
It should be clear from this exercise that the likelihood with observation $(x_1,x_2,\\ldots,x_n)$ from $n$ IID $\\mathrm{Bernoulli}(\\theta)$ RVs is just \n$$L(\\theta) = \\theta^{\\sum_{i=1}^{n}x_i}  (1-\\theta)^{n-\\sum_{i=1}^{n}x_i}$$\nwhere the number of $1$'s is $\\sum_{i=1}^{n}x_i$ and the number of $0$'s is $n-\\sum_{i=1}^{n}x_i$.\n\\end{enumerate}\n\nSee Figure 19 on page 125 of the notes for the plots of these likelihoods as well as those of all possible observations one could have from $n=1$, $n=2$ or $n=3$ trials (seen as vertices of the unit interval $[0,1]$, the unit square $[0,1]^2$ and the unit cube $[0,1]^3$, respectively).\n\\end{Answer}\n\n\\begin{Exercise}[title={MLE Exercises},label={ExsMLEExercises}]\nAssume that an independent and identically distributed sample $X_1 , X_2 , \\ldots, X_n$ \nis drawn from the distribution of $X$ with PDF $f(x; \\theta^*)$ for a fixed and unknown \nparameter $\\theta^*$ and derive the maximum likelihood estimate of $\\theta^*$ (you only need to do {\\sf Steps 1--5} from {\\bf Steps of MLE} in Lecture Notes on pages 126--127).  \nConsider the following PDFs:\n\\begin{enumerate}\n\\item~\nThe parameter $\\theta$ is a real number in $(0,\\infty)$ and the PDF is given by\n\\[\nf(x;\\theta) = \n\\begin{cases}\n\\theta x^{\\theta-1} & \\text{ if } 0 < x < 1\\\\\n0 & \\text{otherwise} \\enspace .\n\\end{cases}\n\\]\n\n\\item~\nThe parameter $\\theta$ is a real number in $(0,\\infty)$ and the PDF is given by\n\\[\nf(x;\\theta) = \n\\begin{cases}\n\\frac{1}{\\theta} x^{(1-\\theta)/\\theta} & \\text{ if } 0 < x < 1\\\\\n0 & \\text{otherwise} \\enspace .\n\\end{cases}\n\\]\n\n\\item~\nThe parameter $\\theta$ is a real number in $(0,\\infty)$ and the PDF is given by\n\\[\nf(x;\\theta) = \n\\begin{cases}\n\\frac{1}{2\\theta^3} x^{2}e^{-x/\\theta} & \\text{ if } 0 < x < \\infty\\\\\n0 & \\text{otherwise} \\enspace .\n\\end{cases}\n\\]\n\n\\item~\nThe parameter $\\theta$ is a real number in $(0,\\infty)$ and the PDF is given by\n\\[\nf(x;\\theta) = \n\\begin{cases}\n\\frac{x}{\\theta^2} e^{-\\frac{1}{2}(x/\\theta)^2} & \\text{ if } 0 < x  < \\infty \\\\\n0 & \\text{otherwise} \\enspace .\n\\end{cases}\n\\]\n\n\n\n%\\item~\n%The parameter $\\theta$ is a real number in $(0,\\infty)$ and the PDF is given by\n%\\[\n%f(x;\\theta) = \n%\\begin{cases}\n%\\frac{1}{\\theta^2} x e^{-x/\\theta} & \\text{ if } 0 < x < \\infty\\\\\n%0 & \\text{otherwise} \\enspace .\n%\\end{cases}\n%\\]\n\\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n~\\\\\nSince all four of these problems involve $n$ IID samples we note that the likelihood function is\n\\[\nL(\\theta) = f_{X_1,X_2,\\ldots,X_n}(x_1,x_2,\\ldots,x_n; \\theta)\n= \\prod_{i=1}^n f_{X_i}(x_i;\\theta) = \\prod_{i=1}^n f_{X_1}(x_i;\\theta)\n\\]\nFor ease of notation, we just write $f(x;\\theta)$, instead of the more accurate $f_{X_1}(x;\\theta)$, for the {\\bf common} PDF/PMF of each RV $X_i$.  Thus, for IID samples we just write the likelihood as\n\\[\nL(\\theta) = \\prod_{i=1}^n f(x_i;\\theta) \\enspace.\n\\] \nRecall that the logarithm of a product is the sum of the logarithms of each term in the product, i.e., $\\log(a \\times b) = \\log(a)+\\log(b)$.  
\nMore generally this means: \n$$\\log\\left(\\prod_{i=1}^n a_i\\right) = \\log\\left( a_1 \\times a_2 \\times \\cdots \\times a_n\\right) = \\log(a_1)+\\log(a_2)+\\cdots+\\log(a_n)=\\sum_{i=1}^n \\log(a_i)$$\nThe above formula won't appear in the formula sheet --- you should know this by now.\nPutting all of the above facts together, we get the log-likelihood as\n\\[\n\\ell(\\theta) = \\log \\left( L(\\theta) \\right) = \\log \\left( \\prod_{i=1}^n f(x_i;\\theta) \\right) \n= \\sum_{i=1}^n \\log\\left(f(x_i;\\theta)\\right)\n\\] \nRecall that the main steps (from Section~\\ref{S:Likelihood}) to find $\\widehat{\\theta}_n$, the maximum likelihood estimate (MLE) of the unknown parameter $\\theta^*$ according to which the data is independently and identically distributed, are as follows:\\\\ \n({\\sf Step 1:}) find $\\ell(\\theta)$, the log-likelihood as a function of the parameter $\\theta$, \n({\\sf Step 2:}) find $\\frac{d}{d \\theta} \\ell(\\theta)$, the first derivative of $\\ell(\\theta)$ with respect to $\\theta$, \n({\\sf Step 3:}) solve the equation $\\frac{d}{d \\theta} \\ell(\\theta)=0$ for $\\theta$ and set this solution equal to $\\widehat{\\theta}_n$, \n({\\sf Step 4:}) find $\\frac{d^2}{d \\theta^2} \\ell(\\theta)$, the second derivative of $\\ell(\\theta)$ with respect to $\\theta$, and finally \n({\\sf Step 5:}) $\\widehat{\\theta}_n$ is the MLE if $\\frac{d^2}{d \\theta^2} \\ell(\\theta) < 0$.\n\nWe are now ready to answer the four questions in this problem.\n\\begin{enumerate}\n\\item~\\\\\n{\\sf Step 1:} If $x_i \\in (0,1)$ for each $i \\in \\{1,2,\\ldots,n\\}$, i.e.~when each data point lies inside the open interval $(0,1)$, the log-likelihood is\n\\begin{multline*}\n\\ell(\\theta) = \\sum_{i=1}^n \\log\\left(f_{X}(x_i;\\theta)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left( \\theta x_i^{\\theta-1} \\right)\\right)\n= \\sum_{i=1}^n \\left(\\log( \\theta)+ \\log\\left( x_i^{\\theta-1} \\right)\\right)\\\\\n= \\sum_{i=1}^n \\left(\\log( \\theta)+ (\\theta-1)(\\log( x_i))\\right)\n= \\sum_{i=1}^n \\left(\\log( \\theta)+ \\theta\\log( x_i)-\\log( x_i)\\right)\\\\\n= \\sum_{i=1}^n \\log( \\theta)+ \\sum_{i=1}^n \\theta\\log( x_i)- \\sum_{i=1}^n \\log( x_i)\n= n \\log( \\theta) + \\theta \\sum_{i=1}^n \\log( x_i)- \\sum_{i=1}^n \\log( x_i)\n\\end{multline*} \n{\\sf Step 2:}\n\\[\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) = \\frac{d}{d \\theta} \\left( n \\log( \\theta) + \\theta \\sum_{i=1}^n \\log( x_i)- \\sum_{i=1}^n \\log( x_i)\\right) = \\frac{n}{\\theta} + \\sum_{i=1}^n \\log( x_i)-0=\\frac{n}{\\theta} + \\sum_{i=1}^n \\log( x_i)\n\\]\n{\\sf Step 3:}\n\\[\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) = 0 \\iff \\frac{n}{\\theta} + \\sum_{i=1}^n \\log( x_i)=0 \\iff\n\\frac{n}{\\theta} = - \\sum_{i=1}^n \\log( x_i) \\iff \\theta = -\\frac{n}{\\sum_{i=1}^n \\log( x_i)}\n\\]\nLet $$\\widehat{\\theta}_n = -\\frac{n}{\\sum_{i=1}^n \\log( x_i)} \\enspace .$$\n{\\sf Step 4:}\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta) = \\frac{d}{d \\theta} \\left( \\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right)\\right)\n=  \\frac{d}{d \\theta} \\left( \\frac{n}{\\theta} + \\sum_{i=1}^n \\log( x_i)\\right)\n=  \\frac{d}{d \\theta} \\left( n\\theta^{-1} + \\sum_{i=1}^n \\log( x_i)\\right)\\\\\n=   -n\\theta^{-2} + 0 = -\\frac{n}{\\theta^2} \\qquad \\quad\n\\end{multline*}\n{\\sf Step 5:}\nThe problem states that $\\theta > 0$.  
Since $\\theta^2 > 0$ and $n \\geq 1$, we have indeed checked that \n$$\\frac{d^2}{d \\theta^2} \\ell(\\theta)=-\\frac{n}{\\theta^2}<0$$\nand therefore the MLE is indeed\n$$\\widehat{\\theta}_n = \\frac{-n}{\\sum_{i=1}^n \\log(x_i)} \\enspace .$$\n\n\\item~\n{\\sf Step 1:} If $x_i \\in (0,1)$ for each $i \\in \\{1,2,\\ldots,n\\}$, i.e.~when each data point lies inside the open interval $(0,1)$, the log-likelihood is\n\\begin{multline*}\n\\ell(\\theta) = \\sum_{i=1}^n \\log\\left(f_{X}(x_i;\\theta)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left( \\frac{1}{\\theta} x_i^{(1-\\theta)/\\theta} \\right)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left( \\frac{1}{\\theta}\\right) + \\log \\left( x_i^{(1-\\theta)/\\theta} \\right)\\right)\\\\\n= \\sum_{i=1}^n \\left(\\log\\left( \\frac{1}{\\theta}\\right) + \\left(\\frac{1-\\theta}{\\theta}\\right)\\log \\left( x_i \\right)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left( \\frac{1}{\\theta}\\right) + \\left(\\frac{1}{\\theta}-1\\right)\\log \\left( x_i \\right)\\right)\\\\\n= \\sum_{i=1}^n \\left(\\log\\left( \\frac{1}{\\theta}\\right) + \\frac{1}{\\theta} \\log \\left( x_i \\right)-\\log \\left( x_i \\right)\\right)\n= \\sum_{i=1}^n \\log\\left( \\frac{1}{\\theta}\\right) + \\sum_{i=1}^n \\frac{1}{\\theta} \\log \\left( x_i \\right)- \\sum_{i=1}^n \\log \\left( x_i \\right)\\\\\n= n \\log\\left( \\frac{1}{\\theta}\\right) + \\frac{1}{\\theta} \\sum_{i=1}^n \\log \\left( x_i \\right)- \\sum_{i=1}^n \\log \\left( x_i \\right)\n= n \\log\\left( \\theta^{-1}\\right) + \\frac{1}{\\theta} \\sum_{i=1}^n \\log \\left( x_i \\right)- \\sum_{i=1}^n \\log \\left( x_i \\right)\\\\\n= -n \\log\\left( \\theta \\right) + \\theta^{-1} \\sum_{i=1}^n \\log \\left( x_i \\right)- \\sum_{i=1}^n \\log \\left( x_i \\right)\n\\end{multline*} \n{\\sf Step 2:}\n\\begin{multline*}\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) \n= \\frac{d}{d \\theta} \\left( -n \\log\\left( \\theta \\right) + \\theta^{-1} \\sum_{i=1}^n \\log \\left( x_i \\right)- \\sum_{i=1}^n \\log \\left( x_i \\right)\\right)\n= -n \\theta^{-1} - \\theta^{-2} \\sum_{i=1}^n \\log \\left( x_i \\right) \n\\end{multline*}\n{\\sf Step 3:}\n\\[\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) = 0 \\iff -n \\theta^{-1} - \\theta^{-2} \\sum_{i=1}^n \\log \\left( x_i \\right)=0\n\\]\nMultiplying both sides of the above equality by $\\theta^2$ we get\n\\begin{multline*}\n\\theta^2 \\times \\left(-n \\theta^{-1} - \\theta^{-2} \\sum_{i=1}^n \\log \\left( x_i \\right) \\right)=0 \\times \\theta^2\n\\iff\n\\left(-n \\theta - \\sum_{i=1}^n \\log \\left( x_i \\right) \\right)=0 \\\\\n\\iff\nn \\theta = - \\sum_{i=1}^n \\log \\left( x_i \\right) \\iff\n\\theta = -\\frac{1}{n}\\sum_{i=1}^n \\log \\left( x_i \\right)\n\\end{multline*}\nLet $$\\widehat{\\theta}_n =  -\\frac{1}{n}\\sum_{i=1}^n \\log \\left( x_i \\right) \\enspace .$$\n{\\sf Step 4:}\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta) = \\frac{d}{d \\theta} \\left( \\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right)\\right)\n=  \\frac{d}{d \\theta} \\left( -n \\theta^{-1} - \\theta^{-2} \\sum_{i=1}^n \\log \\left( x_i \\right) \\right)\n= n \\theta^{-2} + 2 \\theta^{-3} \\sum_{i=1}^n \\log \\left( x_i \\right) \n\\end{multline*}\n{\\sf Step 5:}\nSince $\\theta > 0$ and $n \\geq 1$, we know that $n \\theta^{-2}=n/\\theta^2 > 0$, $2 \\theta^{-3}=2/\\theta^3 >0$.  \nAnd since every $x_i$ only takes values in $(0,1)$ we know that $\\log(x_i)<0$ and therefore $\\sum_{i=1}^n \\log \\left( x_i \\right) < 0$.  
\nThis problem is more interesting because we have some positive and some negative terms in $\\frac{d^2}{d \\theta^2} \\ell(\\theta)$.     \nLet us find out when $\\frac{d^2}{d \\theta^2} \\ell(\\theta) < 0$:\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta)= n \\theta^{-2} + 2 \\theta^{-3} \\sum_{i=1}^n \\log \\left( x_i \\right) <0 \\\\\n\\iff 2 \\theta^{-3} \\sum_{i=1}^n \\log \\left( x_i \\right) < 0 - n \\theta^{-2} \\qquad \\text{{\\tiny subtracting $n \\theta^{-2}$ from both sides preserves the inequality}}\\\\\n\\iff \\sum_{i=1}^n \\log \\left( x_i \\right) <  - \\frac{n \\theta^{-2}}{2 \\theta^{-3}} \\qquad \\text{{\\tiny dividing by the positive quantity $2 \\theta^{-3}$ on both sides preserves the inequality}}\\\\\n\\iff \\sum_{i=1}^n \\log \\left( x_i \\right) <  - \\frac{n \\theta^{3}}{2 \\theta^{2}}  \\iff \\sum_{i=1}^n \\log \\left( x_i \\right) < - \\frac{n \\theta}{2} \\qquad \\qquad \\quad \\qquad \\qquad \\quad \\qquad \\qquad \\quad\n\\end{multline*}\nand therefore only when the observed data and the parameter jointly satisfy the condition: $$\\sum_{i=1}^n \\log \\left( x_i \\right) <  - \\frac{n \\theta}{2}$$ will the MLE be\n$$\\widehat{\\theta}_n = -\\frac{1}{n}{\\sum_{i=1}^n \\log(x_i)} \\enspace .$$\nIf the condition is not satisfied then we cannot be sure about the MLE we found by setting the first derivative of the log-likelihood function to $0$.  \nThis exercise illustrates that merely ensuring that the slope, or first derivative, of the log-likelihood function is zero at the MLE does not guarantee that the curvature, or second derivative, of the log-likelihood function is negative (i.e., concave downward) there, as required for a global maximum at the MLE, for every observable dataset and every possible parameter.\n\n\\item~\n{\\sf Step 1:} If $x_i \\in (0,\\infty)$ for each $i \\in \\{1,2,\\ldots,n\\}$, i.e.~when each data point is greater than $0$, the log-likelihood is\n\\begin{multline*}\n\\ell(\\theta) = \\sum_{i=1}^n \\log\\left(f_{X}(x_i;\\theta)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left(  \\frac{1}{2 \\theta^3}x_i^2 e^{-x_i/\\theta} \\right)\\right)\\\\\n= \\sum_{i=1}^n \\left(\\log\\left(  \\frac{1}{2} \\right) + \\log\\left(\\frac{1}{ \\theta^3}\\right) + \\log \\left(x_i^2 \\right) + \\log \\left( e^{-x_i/\\theta} \\right) \\right)\\\\\n= \\sum_{i=1}^n \\left(\\log\\left(  2^{-1} \\right) + \\log\\left(\\theta^{-3}\\right) + 2 \\log \\left(x_i \\right) + (-x_i/\\theta) \\right)\\\\\n= \\sum_{i=1}^n \\left(-\\log\\left(  2 \\right) -3 \\log\\left(\\theta\\right) + 2 \\log \\left(x_i \\right) - x_i \\theta^{-1} \\right)\\\\\n= -\\sum_{i=1}^n \\log(  2 ) - \\sum_{i=1}^n 3 \\log(\\theta) + \\sum_{i=1}^n 2 \\log (x_i) - \\sum_{i=1}^n x_i \\theta^{-1}\\\\ \n= -n \\log(  2 ) - 3 n  \\log(\\theta) + \\sum_{i=1}^n 2 \\log (x_i) - \\sum_{i=1}^n x_i \\theta^{-1} \n\\end{multline*} \n{\\sf Step 2:}\n\\begin{multline*}\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) \n= \\frac{d}{d \\theta} \\left( -n \\log(  2 ) - 3 n  \\log(\\theta) + \\sum_{i=1}^n 2 \\log (x_i) - \\sum_{i=1}^n x_i \\theta^{-1} \\right)\\\\ \n= -0 - 3 n \\theta^{-1} + \\frac{d}{d \\theta} \\left(\\sum_{i=1}^n 2 \\log (x_i)\\right) - \\frac{d}{d \\theta} \\left( \\sum_{i=1}^n x_i \\theta^{-1} \\right)\\\\ \n= - 3 n \\theta^{-1} + 0 - \\sum_{i=1}^n \\frac{d}{d \\theta} \\left( x_i \\theta^{-1} \\right) \n= - 3 n \\theta^{-1} - \\sum_{i=1}^n \\left( -x_i \\theta^{-2} \\right) \n= - 3 n \\theta^{-1} + \\sum_{i=1}^n x_i \\theta^{-2} \n\\end{multline*}\n{\\sf Step 
3:}\n\\begin{multline*}\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) = 0 \\iff - 3 n \\theta^{-1} + \\sum_{i=1}^n x_i \\theta^{-2}=0\n\\iff 3 n \\theta^{-1} = \\sum_{i=1}^n x_i \\theta^{-2} \\\\\n\\iff 3 n \\theta^{-1} \\times \\theta^2 =  \\sum_{i=1}^n x_i \\theta^{-2} \\times \\theta^2 \\qquad {\\text{\\tiny Multiplying both sides of the equality by $\\theta^2$}}\\\\\n\\iff 3 n \\theta = \\sum_{i=1}^n x_i \\iff \\theta = \\frac{1}{3n} \\sum_{i=1}^n x_i \\qquad \\qquad \\qquad \\qquad\n\\end{multline*}\nLet $$\\widehat{\\theta}_n =  \\frac{1}{3n} \\sum_{i=1}^n x_i \\enspace .$$\n{\\sf Step 4:}\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta) = \\frac{d}{d \\theta} \\left( \\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right)\\right)\n=  \\frac{d}{d \\theta} \\left( - 3 n \\theta^{-1} + \\sum_{i=1}^n x_i \\theta^{-2}\\right)\n=   3 n \\theta^{-2} + \\frac{d}{d \\theta} \\left(\\sum_{i=1}^n x_i \\theta^{-2}\\right)\\\\\n=   3 n \\theta^{-2} + \\sum_{i=1}^n \\frac{d}{d \\theta} \\left( x_i \\theta^{-2}\\right)\n=   3 n \\theta^{-2} + \\sum_{i=1}^n  \\left( -2 x_i \\theta^{-3}\\right)\n=   3 n \\theta^{-2} - 2 \\theta^{-3} \\sum_{i=1}^n  x_i\n\\end{multline*}\n{\\sf Step 5:}\nThe problem states that $\\theta > 0$ and each data point $x_i>0$.  \nThus $3 n \\theta^{-2}=3n/\\theta^2 > 0$, and more crucially the cubic term $2 \\theta^{-3}=2/\\theta^3 > 0$.  \nFinally, with at least one sample $n \\geq 1$ and each data point $x_i>0$, we have the following condition for the negativity of the second derivative:\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta) <0\n\\iff\n3 n \\theta^{-2} - 2 \\theta^{-3} \\sum_{i=1}^n  x_i  < 0 \\quad {\\text{ \\small you can stop here for full credit in exam}}\\\\\n\\iff\n3 n \\theta^{-2} < 2 \\theta^{-3} \\sum_{i=1}^n  x_i  \n\\iff\n\\theta^{-2}\\theta^3 < \\frac{2}{3n} \\sum_{i=1}^n  x_i  \n\\iff\n\\theta < \\frac{2}{3n} \\sum_{i=1}^n  x_i  \n\\end{multline*}\nand therefore when the above condition is satisfied the MLE is indeed\n$$\\widehat{\\theta}_n = \\frac{\\sum_{i=1}^n x_i}{3n} \\enspace .$$\n\n\\item~\n{\\sf Step 1:} If $x_i \\in (0,\\infty)$ for each $i \\in \\{1,2,\\ldots,n\\}$, the log-likelihood is\n\\begin{multline*}\n\\ell(\\theta) \n= \\sum_{i=1}^n \\log\\left(f_{X}(x_i;\\theta)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left(  \\frac{x_i}{\\theta^2} e^{-\\frac{1}{2} (x_i/\\theta)^2} \\right)\\right)\n= \\sum_{i=1}^n \\left(\\log\\left(  \\frac{x_i}{\\theta^2}\\right)+\\log\\left( e^{-\\frac{1}{2} (x_i/\\theta)^2} \\right)\\right)\\\\\n=  \\sum_{i=1}^n \\left( \\log(x_i) - \\log(\\theta^2)  {-\\frac{1}{2} (x_i/\\theta)^2} \\right) \n= \\sum_{i=1}^n \\left( \\log(x_i) \\right) - \\sum_{i=1}^n \\left( 2 \\log(\\theta) \\right) - \\sum_{i=1}^n \\left(  \\frac{1}{2} x_i^2 \\theta^{-2} \\right)\\\\ \n= \\sum_{i=1}^n \\log(x_i) - 2n \\log(\\theta) - \\sum_{i=1}^n  \\left(  \\frac{1}{2} x_i^2 \\theta^{-2} \\right) \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \n\\end{multline*} \n{\\sf Step 2:}\n\\begin{multline*}\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) \n=  \\frac{d}{d \\theta} \\left( \\sum_{i=1}^n \\left( \\log(x_i) \\right) - 2 n \\log(\\theta) - \\sum_{i=1}^n \\left(  \\frac{1}{2} x_i^2 \\theta^{-2} \\right) \\right) \\\\\n= \\frac{d}{d \\theta} \\left( \\sum_{i=1}^n \\left( \\log(x_i) \\right) \\right) -  \\frac{d}{d \\theta} \\left( 2 n \\log(\\theta) \\right) -  \\frac{d}{d \\theta} \\left( \\sum_{i=1}^n \\left(  \\frac{1}{2} x_i^2 \\theta^{-2} \\right) \\right) \\\\\n= 0 - 2n \\frac{1}{\\theta} - 
\\sum_{i=1}^n \\left(  \\frac{1}{2} x_i^2 (-2 \\theta^{-3}) \\right)\n= - 2n \\theta^{-1} + \\theta^{-3} \\sum_{i=1}^n  x_i^2 \n\\end{multline*}\n{\\sf Step 3:}\n\\begin{multline*}\n\\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right) = 0 \n\\iff - 2n \\theta^{-1} + \\theta^{-3} \\sum_{i=1}^n x_i^2 = 0\n\\iff 2n \\theta^{-1} = \\theta^{-3} \\sum_{i=1}^n x_i^2 \\\\\n\\iff 2n \\theta^{-1} \\theta^{3} = \\sum_{i=1}^n x_i^2 \n\\iff \\theta^{2} = \\frac{1}{2n} \\sum_{i=1}^n x_i^2\n\\iff \\theta = \\sqrt{  \\frac{1}{2n} \\sum_{i=1}^n x_i^2 }\n\\end{multline*}\nLet $$\\widehat{\\theta}_n =  \\sqrt{  \\frac{1}{2n} \\sum_{i=1}^n x_i^2 } \\enspace .$$\n{\\sf Step 4:}\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta) = \\frac{d}{d \\theta} \\left( \\frac{d}{d \\theta} \\left( \\ell(\\theta)\\right)\\right)\n=  \\frac{d}{d \\theta} \\left( - 2n \\theta^{-1} + \\theta^{-3} \\sum_{i=1}^n  x_i^2 \\right)\n=   2n \\theta^{-2} -3 \\theta^{-4} \\sum_{i=1}^n  x_i^2 \n\\end{multline*}\n{\\sf Step 5:}\nSince $\\theta > 0$, we have the following condition for the second derivative to be negative:\n\\begin{multline*}\n\\frac{d^2}{d \\theta^2} \\ell(\\theta)= 2n \\theta^{-2} -3 \\theta^{-4} \\sum_{i=1}^n  x_i^2 <0 \\quad {\\text{ \\small you can stop here for full credit in exam}}\\\\\n\\iff\n2n \\theta^{-2} < 3 \\theta^{-4} \\sum_{i=1}^n  x_i^2\n\\iff \\theta^{2} < \\frac{3}{2n}  \\sum_{i=1}^n  x_i^2\n\\end{multline*}\nand therefore when the inequality above is satisfied the MLE is indeed\n$$\\widehat{\\theta}_n = \\sqrt{  \\frac{1}{2n} \\sum_{i=1}^n x_i^2 } \\enspace.$$\nA numerical sanity check of these hand derivations is sketched right after this list.\n\\end{enumerate}\n
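\nAs a sanity check, the log-likelihoods above can also be maximised numerically.  The following {\\sc Matlab} fragment is a minimal sketch and not one of the course scripts: it simulates data from the density of the last problem under an assumed true parameter $\\theta^* = 2$ (the sample size, search interval and inversion sampler are illustrative choices) and compares the analytical estimate $\\widehat{\\theta}_n = \\sqrt{\\frac{1}{2n} \\sum_{i=1}^n x_i^2}$ with a direct numerical minimisation of the negative log-likelihood by {\\tt fminbnd}.\n\\begin{VrbM}\n% minimal sketch: assumed theta* = 2 and n = 1000 are illustrative choices\nn = 1000; thetaTrue = 2;\nx = thetaTrue * sqrt(-2*log(rand(1,n)));     % inversion sampler for this density\nnegLogLkl = @(t) -(sum(log(x)) - 2*n*log(t) - sum(x.^2)/(2*t^2));\nNumericalMLE = fminbnd(negLogLkl, 0.001, 10) % numerical minimiser of -loglkl\nAnalyticalMLE = sqrt(sum(x.^2)/(2*n))        % should agree closely\n\\end{VrbM}\n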
\n%{\\bf For more MLE problems if you want:} \n%Every common random variable we have seen in EMTH119 and EMTH210 has a maximum likelihood estimate.  Obtain MLE for Poisson (see \\url{http://en.wikipedia.org/wiki/Poisson_distribution}), Geometric (see ?), Normal (make your own problem with say $\\sigma^2=1$ to be known and we are only estimating $\\mu^*$, the true mean parameter), etc.  See Wikipedia for the solutions to these problems or email me if you are stuck and can't get the solutions they are supposed to get!\n\\end{Answer}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n%Sorry ---  no time to make an exhaustive glossary of all the notation yet. \n%\\subsubsection*{Summarizing Table of Point Estimators}\n%Using the sample mean $\\overline{X}_n$ and sample standard deviation $S_n$ defined in \\eqref{E:SampleMeanRV} and \\eqref{E:SampleStdDevRV}, respectively, we summarise the two point estimators of the parameters of some common distributions below.  For some cases, the MLE is the same as the MME and can be solved analytically.\n%\\begin{center}\n%\\begin{table}[htbp]\n%\\caption{Summary of the Method of Moment Estimator (MME) and the Maximum Likelihood Estimator (MLE) for some IID Experiments. \\label{T:MMEMLE}}\n%\\begin{tabular}{l | r | r}\n%\\hline\n%Statistical Experiment & MLE & MME \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\bernoulli(\\theta)$ & $\\widehat{\\theta}=\\overline{X}_n$ & same as MLE \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda)$ & $\\widehat{\\lambda}={1}/{\\overline{X}_n} $ & same as MLE \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\normal(\\mu,\\sigma^2)$ & $\\widehat{\\mu}=\\overline{X}_n, \\widehat{\\sigma} = \\sqrt{\\frac{n-1}{n}S^2_n} $ & $\\widehat{\\mu}=\\overline{X}_n, \\widehat{\\sigma} = S_n $ \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\lognormal(\\lambda,\\zeta)$ & $\\widehat{\\lambda}=\\frac{1}{n}{\\sum_{i=1}^n \\log(X_i)} $ & $\\widehat{\\lambda} = \\log(\\overline{X}_n) - \\frac{1}{2} {\\widehat{\\zeta}} \\ ^2$ \\\\ %\\\\\n% & $\\widehat{\\zeta} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n{(\\log(X_i)-\\widehat{\\lambda})^2}} $ & $\\widehat{\\zeta} = \\sqrt{\\log \\left({S_n^2}/{\\overline{X}_n^2} +1 \\right)}$ \\\\\n%\\hline\n%\\end{tabular}\n%\\end{table}\n%\\end{center}\n\n\\subsection{Moment Estimator (MME)}\\label{S:MME}\nSee notes from class.  \n\n\n\n%original cse book notes to be reabsorbed into the condensed content above\n\\remove{\n\\chapter{Maximum Likelihood Estimator}\n\nXXXX\n\nNext we look at a specific point estimator called the maximum likelihood estimator (MLE) of a possibly unknown but fixed parameter $\\theta^*$ in a parametric experiment, i.e.~$\\theta^* \\in \\BB{\\Theta} \\subset \\Rz^k$ with $k < \\infty$.  
Other point estimators in such a setting include the moment estimator (MME). \n\nRecall that the likelihood function (See \\hyperref[D:LklFn]{Definition~\\ref*{D:LklFn}}) for an IID experiment with observations $x_1,x_2,\\ldots,x_n$ is simply the product of the densities:\n$$L_n(\\theta)=\\prod_{i=1}^n f(x_i;\\theta) : \\BB{\\Theta} \\to (0,\\infty) \\enspace , $$\nand its logarithm or log-likelihood function is:\n$$\\ell_n(\\theta)= \\log(L_n(\\theta)) = \\sum_{i=1}^n \\log(f(x_i)) :  \\BB{\\Theta} \\to (-\\infty,\\infty) \\enspace . $$\n\n\\section{Introduction to Maximum Likelihood Estimation}\\label{S:MLE}\n\\begin{definition}[Maximum Likelihood Estimator (MLE)]\\label{D:MLE}\nLet $X_1,\\ldots,X_n \\sim f(x_1,\\ldots,x_n;\\theta^*)$.  The maximum likelihood estimator (MLE) $\\widehat{\\Theta}_n$ of the fixed and possibly unknown parameter $\\theta^* \\in \\BB{\\Theta}$ is the value of $\\theta$ that maximises the likelihood function:\n\\[\n\\boxed{\n\\widehat{\\Theta}_n := \\widehat{\\Theta}_n(X_1,X_2,\\ldots,X_n) :=  \\argmax_{\\theta \\in \\BB{\\Theta}} L_n(\\theta) \\enspace ,\n}\n\\]\nEquivalently, MLE is the value of $\\theta$ that maximises the log-likelihood function:\n\\[\n\\boxed{\n\\widehat{\\Theta}_n := \\argmax_{\\theta \\in \\BB{\\Theta}} \\ell_n(\\theta) \\enspace ,\n}\n\\]\nsince the maximum of the likelihood coincides with that of the log-likelihood.  It is analytically and numerically convenient to work with the log-likelihood instead of the likelihood.  Optimisation algorithms can be used to find the MLE numerically.  Such algorithms by convention tend to find the minimum and the value that minimises a function.  So, the MLE is also the the value of $\\theta$ that minimises the negative likelihood or negative log-likelihood functions:\n\\[\n\\boxed{\n\\widehat{\\Theta}_n := \\argmin_{\\theta \\in \\BB{\\Theta}} -L_n(\\theta), \\qquad\n\\widehat{\\Theta}_n := \\argmin_{\\theta \\in \\BB{\\Theta}} -\\ell_n(\\theta)  \\enspace .\n}\n\\]\nOnce again, the realisation of the MLE, namely $\\widehat{\\theta}_n = \\widehat{\\Theta_n}(x_1,\\ldots,x_n)$ based on the observation is the maximum likelihood estimate (MLe) of the $\\theta^*$.\n\\end{definition}\n\n\\begin{example}[Coin Tossing Experiment]\\label{EX:CoinTossingML}\nI tossed a coin that has an unknown probability $\\theta^*$ of landing Heads independently and identically $10$ times in a row, i.e., $X_1,\\ldots,X_{10} \\overset{IID}{\\sim} \\bernoulli(\\theta^*)$.  Four of my outcomes were Heads and the remaining six were Tails, with the actual sequence of Bernoulli outcomes (Heads $\\to 1$ and Tails $\\to 0$) being $(1,0,0,0,1,1,0,0,1,0)$.  I would like to estimate the probability $\\theta^* \\in \\BB{\\Theta} = [0,1]$ of observing Heads using the maximum likelihood estimator or MLE $\\widehat{\\Theta}_n((X_1,X_2,\\ldots,X_n))$ of $\\theta$. We derive the MLE next.\n\nFirst, the likelihood function is:\n\\begin{eqnarray}\nL_n(\\theta) &:=& L_n(x_1,x_2,\\ldots,x_n; \\theta)  =  \\prod_{i=1}^n f(x_i|\\theta) = \\theta^{\\sum_{i=1}^n x_i} (1-\\theta)^ {n-\\sum_{i=1}^n x_i} := \\theta^{t_n} (1-\\theta)^{n-t_n} \\notag \n\\end{eqnarray}\nIn the last step, we have formally defined the following statistic of the data: \n$$T_n(X_1,X_2,\\ldots,X_n)=\\sum_{i=1}^n X_i :  \\Xz_n \\rightarrow \\Tz_n$$ with the corresponding realisation $t_n := T_n(x_1,x_2,\\ldots,x_n)=\\sum_{i=1}^n x_i \\in \\Tz_n$.  
Let us now take the natural logarithm of both sides:\n\\begin{eqnarray}\n\\log(L_n(\\theta)) := \\log(L(x_1,x_2,\\ldots,x_n; \\theta))   \n= \\log \\left( \\theta^{t_n} (1-\\theta)^ {n-t_n} \\right) \n= t_n \\log(\\theta) + (n-t_n) \\log(1-\\theta) \\notag\n\\end{eqnarray}\nNext, we take the derivative with respect to the parameter $\\theta$:\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial \\theta} \\log(L_n(\\theta)) \n&=& \\frac{\\partial}{\\partial \\theta}  t_n \\log(\\theta) + \\frac{\\partial}{\\partial \\theta}  (n-t_n) \\log(1-\\theta) \\notag \\\\\n&=& \\frac{t_n}{\\theta} - \\frac{n-t_n}{1-\\theta} \\notag\n\\end{eqnarray}\nNow, set $\\frac{\\partial}{\\partial \\theta} \\log(L_n(\\theta))=0$ and solve for $\\theta$ to obtain the maximum likelihood estimate  $\\widehat{\\theta}_n$:\n\\[\n\\frac{\\partial}{\\partial \\theta} \\log(L(\\theta)) = 0 \\iff  \n\\frac{t_n}{\\theta} = \\frac{n-t_n}{1-\\theta} \\iff\n\\frac{1-\\theta}{\\theta} = \\frac{n-t_n}{t_n} \\iff\n\\frac{1}{\\theta}-1 = \\frac{n}{t_n}-1 \\iff \\widehat{\\theta}_n = \\frac{t_n}{n}\n\\]\nTherefore the MLE is:\n\\[\n\\widehat{\\Theta}_n(X_1,X_2,\\ldots,X_n) = \\frac{1}{n}T_n(X_1,X_2,\\ldots,X_n) = \\frac{1}{n} \\sum_{i=1}^n X_i = \\overline{X}_n\n\\]\nFor the coin tossing experiment I just performed ($n=10$ times), the point estimate of $\\theta$ is:\n\\begin{eqnarray}\n\\widehat{\\theta}_{10} = \\widehat{\\Theta}_{10}((x_1,x_2,\\ldots,x_{10})) \n&=&\\widehat{\\Theta}_{10}((1,0,0,0,1,1,0,0,1,0)) \\notag \\\\\n&=& \\frac{1+0+0+0+1+1+0+0+1+0}{10}=\\frac{4}{10}=0.40 \\notag \\ .\n\\end{eqnarray}\n\\end{example}\n}% remove\n\n\\newpage\n\n\\section{Practical Excursion in One-dimensional Optimisation}\nNumerically maximising a log-likelihood function of one parameter is a useful technique.  This can be used for models with no analytically known MLE.  A fairly large field of maths, called optimisation, exists for this sole purpose.  Conventionally, in optimisation, one is interested in minimisation.  Therefore, the basic algorithms are cast in the ``find the minimiser and the minimum'' of a target function $f:\\Rz \\to \\Rz$.  Since we are interested in maximising our target, which is the likelihood or log-likelihood function, say $\\log(L(x_1,\\ldots,x_n; \\theta)): \\BB{\\Theta} \\to \\Rz$, we will simply apply the standard optimisation algorithms directly to $-\\log(L(x_1,\\ldots,x_n; \\theta)):\\BB{\\Theta}\\to \\Rz$.\n\nThe algorithm implemented in {\\tt fminbnd} is based on the golden section search and an inverse parabolic interpolation, and attempts to find the minimum of a function of one variable within a given fixed interval.  Briefly, the golden section search proceeds by successively {\\bf bracketing} the minimum of the target function within an acceptably small interval inside the given starting interval [see Section 8.2 of Forsythe, G.~E., M.~A.~Malcolm, and C. B. Moler, 1977, {\\em Computer Methods for Mathematical Computations}, Prentice-Hall].  {\\sc Matlab}'s {\\tt fminbnd} also relies on Brent's inverse parabolic interpolation [see Chapter 5 of Brent, Richard.~P., 1973, {\\em Algorithms for Minimization without Derivatives}, Prentice-Hall, Englewood Cliffs, New Jersey].  Briefly, additional smoothness conditions are assumed for the target function to aid in a faster bracketing strategy through polynomial interpolations of past function evaluations.  {\\sc Matlab}'s {\\tt fminbnd} has several limitations, including:\n\\begin{itemize}\n\\item The likelihood function must be continuous. 
\n\\item Only local MLE solutions, i.e.~those inside the starting interval, are given.\n\\item One needs to know or carefully guess the starting interval that contains the MLE.\n\\item {\\sc Matlab}'s {\\tt fminbnd} exhibits slow convergence when the solution is on a boundary of the starting interval.\n\\end{itemize}\n\n\\remove{\n\\begin{figure}[htpb]\n\\caption{Plot of $\\log(L(1,0,0,0,1,1,0,0,1,0;\\theta))$ as a function of the parameter $\\theta$ over the parameter space $\\BB{\\Theta}=[0,1]$ and the MLE $\\widehat{\\theta}_{10}$ of $0.4$ for the coin-tossing experiment.\\label {F:BernoulliMLE}}\n\\centering   \\makebox{\\includegraphics[width=6.5in]{figures/BernoulliMLE}}\n\\end{figure}\n}\n\n\\begin{labwork}[Coin-tossing experiment]\\label{LW:BernoulliMLE}\nThe following script was used to study the coin-tossing experiment in {\\sc Matlab}.  The plot of the log-likelihood function and the numerical optimisation of MLE are carried out using {\\sc Matlab}'s built-in function {\\tt fminbnd} (See \\hyperref[F:BernoulliMLE]{Figure \\ref*{F:BernoulliMLE}}).\n\n{\\VrbMf[label=BernoulliMLE.m]{scripts/BernoulliMLE.m}}\n\n\\begin{VrbM}\n>> BernoulliMLE\nx =     1     0     0     0     1     1     0     0     1     0\nt =     4\nMLE =    0.4000\nFunc-count     x          f(x)         Procedure\n    1       0.381966      6.73697        initial\n    2       0.618034      7.69939        golden\n    3       0.236068       7.3902        golden\n    4       0.408979      6.73179        parabolic\n    5       0.399339      6.73013        parabolic\n    6       0.400045      6.73012        parabolic\n    7       0.400001      6.73012        parabolic\n    8       0.399968      6.73012        parabolic\nOptimisation terminated:\n the current x satisfies the termination criteria using OPTIONS.TolX of 1.000000e-04 \nNumericalMLE =   0.4000\n\\end{VrbM}\n\\end{labwork}\n\n\\remove{\n\\begin{example}[MLE of an IID $\\exponential(\\lambda^*)$ experiment]\nLet us derive the MLE $\\widehat{\\Lambda}_n$ of the fixed and possibly unknown $\\lambda^*$ for the IID experiment:\n$$X_1,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda^*), \\qquad \\lambda^* \\in \\BB{\\Lambda} = (0,\\infty) \\enspace .$$\nNote that $\\BB{\\Lambda}$ is the parameter space.\n\nWe first obtain the log-likelihood function of $\\lambda$ for the data $x_1,x_2,\\ldots,x_n \\overset{IID}{\\sim} \\exponential(\\lambda)$.\n\\begin{flalign*}\n\\ell(\\lambda) & := \\log(L(x_1,x_2,\\ldots,x_n;\\lambda)) = \\log \\left( \\prod_{i=1}^n f(x_i;\\lambda) \\right) \n= \\log \\left( \\prod_{i=1}^n \\lambda e^{-\\lambda x_i}  \\right) \\\\\n&= \\log \\left( \\lambda e^{-\\lambda x_1} \\cdot \\lambda e^{-\\lambda x_2}  \\cdots \\lambda e^{-\\lambda x_n}  \\right)\n= \\log \\left( \\lambda^n e^{-\\lambda \\sum_{i=1}^n x_i}  \\right) \\\\\n&=\\log \\left( \\lambda^n \\right) + \\log \\left( e^{-\\lambda \\sum_{i=1}^n x_i}  \\right) \n= \\log \\left( \\lambda^n \\right) -\\lambda \\sum_{i=1}^n x_i\n\\end{flalign*}\nNow, let us take the derivative with respect to $\\lambda$,\n\\begin{flalign*}\n\\frac{\\partial}{\\partial \\lambda} \\left( \\ell(\\lambda) \\right) \n& :=  \\frac{\\partial}{\\partial \\lambda} \\left( \n\\log \\left( \\lambda^n \\right) -\\lambda \\sum_{i=1}^n x_i\n\\right) = \\frac{\\partial}{\\partial \\lambda} \\left( \n\\log \\left( \\lambda^n \\right) \\right) -  \\frac{\\partial}{\\partial \\lambda} \\left( \\lambda \\sum_{i=1}^n x_i \\right) \\\\\n& = \\frac{1}{\\lambda^n}  \\frac{\\partial}{\\partial \\lambda} \\left( \\lambda^n 
\\right) - \\sum_{i=1}^n x_i \n= \\frac{1}{\\lambda^n}  n \\lambda^{n-1}  - \\sum_{i=1}^n x_i \n= \\frac{n}{\\lambda} - \\sum_{i=1}^n x_i \\enspace .\n\\end{flalign*}\nNext, we set the derivative to $0$, solve for $\\lambda$, and set the solution equal to the ML estimate $\\widehat{\\lambda}_n$.\n\\begin{flalign*}\n0 = \\frac{\\partial}{\\partial \\lambda} \\left( \\ell(\\lambda) \\right) \n& \\iff 0 = \\frac{n}{\\lambda} - \\sum_{i=1}^n x_i \\iff \\sum_{i=1}^n x_i = \\frac{n}{\\lambda} \\iff \\lambda = \\frac{n}{\\sum_{i=1}^n x_i} \\iff \\boxed{\\widehat{\\lambda}_n = \\frac{1}{\\overline{x}_n}} \\enspace .\n\\end{flalign*}\nTherefore, the ML estimate $\\widehat{\\lambda}_n$ of the unknown rate parameter $\\lambda^* \\in \\BB{\\Lambda}$ on the basis of $n$ IID observations $x_1,x_2,\\ldots,x_n \\overset{IID}{\\sim} \\exponential(\\lambda^*)$ is $1/\\overline{x}_n$ and the ML estimator $\\widehat{\\Lambda}_n=1/\\overline{X}_n$.  Let us apply this ML estimator of the rate parameter for the supposedly exponentially distributed waiting times at the on-campus Orbiter bus-stop.\n\\end{example}\n}\n\n\\begin{labwork}[Numerical MLE of $\\lambda$ from n IID $\\exponential(\\lambda)$ RVs]\\label{LW:ExponentialMLEOrbiter}\nJoshua Fenemore and Yiran Wang collected data on waiting times between buses at an Orbiter bus-stop close to campus and modelled the waiting times as IID $\\exponential(\\lambda^*)$ RVs (\\href{http://www.math.canterbury.ac.nz/~r.sainudiin/courses/STAT218/projects/Stat218StudentProjects2007.pdf}{\\url{http://www.math.canterbury.ac.nz/~r.sainudiin/courses/STAT218/projects/Stat218StudentProjects2007.pdf}}).  We can use their data {\\tt sampleTimes} to find the MLE of $\\lambda^*$ under the assumption that the waiting times $X_1,\\ldots,X_{132}$ are IID $\\exponential(\\lambda^*)$.  We find the ML estimate $\\widehat{\\lambda}_{132}=0.1102$ and thus the estimated mean waiting time is $1/\\widehat{\\lambda}_{132}=9.0763$ minutes.  The estimated mean waiting time for a bus to arrive is well within the $10$ minutes promised by the Orbiter bus company.  The following script was used to generate the \\hyperref[F:ExponentialMLEOrbiter]{Figure \\ref*{F:ExponentialMLEOrbiter}}:\n\n\\VrbMf[label=ExponentialMLEOrbiter.m]{scripts/ExponentialMLEOrbiter.m}\n\nThe script output the following in addition to the plot:\n\\begin{VrbM}\n>> ExponentialMLEOrbiter\nMLE =    0.1102\nMeanEstimate =    9.0763\n\\end{VrbM}\n\\end{labwork}\n\n\\remove{\n\\begin{figure}[htpb]\n\\caption{Plot of $\\log(L(\\lambda))$ as a function of the parameter $\\lambda$  and the MLE $\\widehat{\\lambda}_{132}$ of $0.1102$ for Fenemore-Wang Orbiter Waiting Times Experiment from STAT 218 S2 2007.  The density or PDF and the DF at the MLE of $0.1102$ are compared with a histogram and the empirical DF.\\label{F:ExponentialMLEOrbiter}}\n\\centering   \\makebox{\\includegraphics[width=6.5in]{figures/ExponentialMLEOrbiter}}\n\\end{figure}\nNotice how poorly the exponential PDF $f(x;\\widehat{\\lambda}_{132}=0.1102)$ and the DF $F(x;\\widehat{\\lambda}_{132}=0.1102)$ based on the MLE fits with the histogram and the empirical DF, respectively.  This is an indication of the inadequacy of our parametric model.  Partly this discrepancy is due to the resolution of the the measurements being confined to whole minutes.  We can overcome this problem by fitting a minute-discretized PMF from the $\\exponential(\\lambda)$ PDF.  
In the next Labwork, we simulate data from an $\\exponential(\\lambda^*=0.1)$ RV to conduct point estimation in the theoretically ideal setting. \n\n\\begin{labwork}[MLE of the rate parameter for waiting times at my bus stop]\\label{LW:ExponentialBusMLE}\nRecall \\hyperref[LW:Next7Buses]{Labwork~\\ref*{LW:Next7Buses}} where you modeled the arrival of buses at a bus stop using the IID $\\exponential(\\lambda^*=0.1)$ distributed inter-arrival times with a mean of $1/\\lambda^*=10$ minutes.  Once again, seed the fundamental sampler by your Student ID (e.g.~if your ID is {\\tt 11424620} then type {\\tt rand('twister', 11424620);}), just before simulating the inter-arrival times of the next seven buses.  Hand in the following six items:\n\\begin{enumerate}\n\\item Waiting times $x_1,x_2,\\ldots,x_7$ between arrivals of the next seven buses at your ID-seeded bus stop;\n\\item A plot of the empirical DF $\\widehat{F}_n$  from your (simulated) data $x_1,x_2,\\ldots,x_7$.  [You may use the {\\sc Matlab} function {\\tt ECDF} of  \\hyperref[Mf:ECDF]{Labwork \\ref*{Mf:ECDF}})];\n\\item The first, second and third sample quartiles as well as the $0.20^{\\text{th}}$ sample quantile for your data $x_1,x_2,\\ldots,x_7$.  [You may use the {\\sc Matlab} function {\\tt qthSampleQuantile} of \\hyperref[Mf:qthSampleQuantile]{Labwork \\ref*{Mf:qthSampleQuantile}}];\n\\item Pretending that you did not know the true parameter ($\\lambda^*=0.1$) used in the simulation, produce the maximum likelihood estimate (ML estimate) $\\widehat{\\lambda}_7$ from your seven observations $x_1,x_2,\\ldots,x_7$;\n\\item Plot the log-likelihood function for your data $x_1,x_2,\\ldots,x_7$ as a function of the parameter $\\lambda$; and\n\\item Show that you have verified that the numerical optimisation routine {\\tt fminbnd} returns the correct ML estimate $\\widehat{\\lambda}_7$.\n\\end{enumerate}\n \\end{labwork}\n \n%\\subsubsection*{Summarizing Table of Point Estimators}\n%Using the sample mean $\\overline{X}_n$ and sample standard deviation $S_n$ defined in \\eqref{E:SampleMeanRV} and \\eqref{E:SampleStdDevRV}, respectively, we summarise the two point estimators of the parameters of some common distributions below.  For some cases, the MLE is the same as the MME and can be solved analytically.\n%\\begin{center}\n%\\begin{table}[htbp]\n%\\caption{Summary of the Method of Moment Estimator (MME) and the Maximum Likelihood Estimator (MLE) for some IID Experiments. 
\\label{T:MMEMLE}}\n%\\begin{tabular}{l | r | r}\n%\\hline\n%Statistical Experiment & MLE & MME \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\bernoulli(\\theta)$ & $\\widehat{\\theta}=\\overline{X}_n$ & same as MLE \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda)$ & $\\widehat{\\lambda}={1}/{\\overline{X}_n} $ & same as MLE \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\normal(\\mu,\\sigma^2)$ & $\\widehat{\\mu}=\\overline{X}_n, \\widehat{\\sigma} = \\sqrt{\\frac{n-1}{n}S^2_n} $ & $\\widehat{\\mu}=\\overline{X}_n, \\widehat{\\sigma} = S_n $ \\\\ \\hline\n%$X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\lognormal(\\lambda,\\zeta)$ & $\\widehat{\\lambda}=\\frac{1}{n}{\\sum_{i=1}^n \\log(X_i)} $ & $\\widehat{\\lambda} = \\log(\\overline{X}_n) - \\frac{1}{2} {\\widehat{\\zeta}} \\ ^2$ \\\\ %\\\\\n% & $\\widehat{\\zeta} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n{(\\log(X_i)-\\widehat{\\lambda})^2}} $ & $\\widehat{\\zeta} = \\sqrt{\\log \\left({S_n^2}/{\\overline{X}_n^2} +1 \\right)}$ \\\\\n%\\hline\n%\\end{tabular}\n%\\end{table}\n%\\end{center}\n\\begin{figure}[htpb]\n\\caption{Comparing the $\\exponential(\\widehat{\\lambda}_{6128}= 28.6694)$ PDF and DF with a histogram and empirical DF of the times (in units of days) between earth quakes in  NZ.  The epicentres of $6128$ earth quakes are shown in left panel.\\label{F:NZSIEarthQuakesExponentialMLE}}\n\\centering   \\makebox{\\includegraphics[width=6.5in]{figures/NZSIEarthQuakesExponentialMLE}}\n\\end{figure}\n\\begin{labwork}[Time between Earth Quakes in NZ]\\label{LW:NZSIEarthQuakesExponentialMLE}  \nWe model the time between $6128$ earth-quakes in NZ from 18-Jan-2008 02:23:44 to 18-Aug-2008 19:29:29 as:\n\\[\nX_1,X_2,\\ldots,X_{6128} \\overset{IID}{\\sim} \\exponential(\\lambda^*) \\enspace .\n\\]\nThen, the ML estimate of $\\lambda^* = 1/\\overline{x}_{6128} = 1/0.0349=28.6694$ as computed in the following script:\n\\VrbMf[label=NZSIEarthQuakesExponentialMLE.m]{scripts/NZSIEarthQuakesExponentialMLE.m}\n\nWe first load the data in the text file {\\tt earthquakes.csv} into a matrix {\\tt EQ}.  Using the {\\tt datenum} function in {\\sc Matlab} we transform the time stamps into a number starting at zero.  These transformed time stamps are in units of days.  Then we find the times between consecutive events and estimate a histogram.  
We finally compute the ML estimate of $\\lambda^*$ and super-impose the PDF of the $\\exponential(\\widehat{\\lambda}_{6128}= 28.6694)$ upon the histogram.\n\\begin{VrbM}\n>> NZSIEarthQuakesExponentialMLE\nans =        6128          13\n\nEarth Quakes in NZ between\n18-Jan-2008 02:23:44 and18-Aug-2008 19:29:29\n\nSampleMean =    0.0349\nMLELambdaHat =   28.6694\n\\end{VrbM}\nThus, the average time between earth quakes is $0.0349*24*60=50.2560$ minutes.\n\\end{labwork}\n}%remove\n\n\\begin{figure}[htpb]\n\\caption{The ML fitted ${\\rm Rayleigh}(\\widehat{\\alpha}_{10}= 2)$ PDF and a histogram of the ocean wave heights.\\label{F:RayleighOceanHeightsMLE}}\n\\centering   \\makebox{\\includegraphics[width=5.5in]{figures/RayleighOceanHeightsMLE}}\n\\end{figure}\n\\begin{example}[6.7, p.~275 of Ang \\& Tang]\\label{LW:RayleighOceanHeightsMLE}\nThe distribution of ocean wave heights, $H$, may be modeled with the ${\\rm Rayleigh}(\\alpha)$ RV with parameter $\\alpha$ and probability density function,\n\\[\nf(h;\\alpha) = \\frac{h}{\\alpha^2} \\exp \\left({-\\frac{1}{2} (h/\\alpha)^2}\\right), \\qquad h \\in \\Hz := [0,\\infty) \\ .\n\\]\nThe parameter space for $\\alpha$ is $\\BB{A} = (0,\\infty)$.  Suppose that the following measurements $h_1,h_2,\\ldots,h_{10}$ of wave heights in meters were observed to be\n\\[\n1.50, \\  2.80, \\ 2.50, \\ 3.20, \\ 1.90, \\ 4.10, \\ 3.60, \\ 2.60, \\ 2.90, \\ 2.30 \\ ,\n\\]\nrespectively.  Under the assumption that the $10$ samples are IID realisations from a ${\\rm Rayleigh}(\\alpha^*)$ RV with a fixed and unknown parameter $\\alpha^*$, find the ML estimate $\\widehat{\\alpha}_{10}$ of $\\alpha^*$.\n\nWe first obtain the log-likelihood function of $\\alpha$ for the data $h_1,h_2,\\ldots,h_n \\overset{IID}{\\sim} {\\rm Rayleigh}(\\alpha)$.\n\\begin{flalign*}\n\\ell(\\alpha) & := \\log(L(h_1,h_2,\\ldots,h_n;\\alpha)) = \\log \\left( \\prod_{i=1}^n f(h_i;\\alpha) \\right) = \\sum_{i=1}^n \\log(f(h_i; \\alpha))\\\\\n& = \\sum_{i=1}^n \\log \\left( \\frac{h_i}{\\alpha^2} e^{-\\frac{1}{2} (h_i/\\alpha)^2} \\right) \n=  \\sum_{i=1}^n \\left( \\log(h_i) - 2 \\log(\\alpha)  {-\\frac{1}{2} (h_i/\\alpha)^2} \\right) \\\\\n& = \\sum_{i=1}^n \\left( \\log(h_i) \\right) - 2 n \\log(\\alpha) - \\sum_{i=1}^n \\left(  \\frac{1}{2} h_i^2 \\alpha^{-2} \\right) \n\\end{flalign*}\nNow, let us take the derivative with respect to $\\alpha$,\n\\begin{flalign*}\n\\frac{\\partial}{\\partial \\alpha} \\left( \\ell(\\alpha) \\right) \n& :=  \\frac{\\partial}{\\partial \\alpha} \\left( \\sum_{i=1}^n \\left( \\log(h_i) \\right) - 2 n \\log(\\alpha) - \\sum_{i=1}^n \\left(  \\frac{1}{2} h_i^2 \\alpha^{-2} \\right) \\right) \\\\\n& = \\frac{\\partial}{\\partial \\alpha} \\left( \\sum_{i=1}^n \\left( \\log(h_i) \\right) \\right) -  \\frac{\\partial}{\\partial \\alpha} \\left( 2 n \\log(\\alpha) \\right) -  \\frac{\\partial}{\\partial \\alpha} \\left( \\sum_{i=1}^n \\left(  \\frac{1}{2} h_i^2 \\alpha^{-2} \\right) \\right) \\\\\n& = 0 - 2n \\frac{1}{\\alpha} - \\sum_{i=1}^n \\left(  \\frac{1}{2} h_i^2 (-2 \\alpha^{-3}) \\right)\n= - 2n \\alpha^{-1} + \\alpha^{-3} \\sum_{i=1}^n \\left( h_i^2  \\right)\n\\end{flalign*}\nNext, we set the derivative to $0$, solve for $\\alpha$, and set the solution equal to the ML estimate $\\widehat{\\alpha}_n$.\n\\begin{flalign*}\n0 = \\frac{\\partial}{\\partial \\alpha} \\left( \\ell(\\alpha) \\right) \n& \\iff 0 = - 2n \\alpha^{-1} + \\alpha^{-3} \\sum_{i=1}^n h_i^2 \\iff 2n \\alpha^{-1} = \\alpha^{-3} \\sum_{i=1}^n h_i^2 \\\\\n& \\iff 2n \\alpha^{-1} \\alpha^{3} 
= \\sum_{i=1}^n h_i^2 \\iff \\alpha^{2} = \\frac{1}{2n} \\sum_{i=1}^n h_i^2\n\\iff \\widehat{\\alpha}_n = \\sqrt{  \\frac{1}{2n} \\sum_{i=1}^n h_i^2 }\n\\end{flalign*}\nTherefore, the ML estimate of the unknown $\\alpha^* \\in \\BB{A}$ on the basis of our $10$ observations $h_1,h_2,\\ldots,h_{10}$ of wave heights is\n\\begin{flalign*}\n\\widehat{\\alpha}_{10} & =  \\sqrt{  \\frac{1}{2*10} \\sum_{i=1}^{10} h_i^2 } \\\\\n& = \\sqrt{ \\frac{1}{20} \\left( 1.50^2 + 2.80^2 + 2.50^2 + 3.20^2 + 1.90^2 + 4.10^2 + 3.60^2 + 2.60^2 + 2.90^2 + 2.30^2 \\right)} \\approxeq 2\n\\end{flalign*}\nWe use the following script file to compute the MLE $\\widehat{\\alpha}_{10}$ and plot the PDF at $\\widehat{\\alpha}_{10}$ in \\hyperref[F:RayleighOceanHeightsMLE]{Figure~\\ref*{F:RayleighOceanHeightsMLE}}.\n\\VrbMf[label=RayleighOceanHeightsMLE.m]{scripts/RayleighOceanHeightsMLE.m}\n\\begin{VrbM}\n>> RayleighOceanHeightsMLE\nAlphaHat =    2.0052\n\\end{VrbM}\n\\end{example}\n\n%\\begin{labwork}\n%Choose some parameter $p\\in[0,1]$ and some sample size $n$.  Then, using the {\\sc Matlab} expression {\\tt floor(rand(1,n)+p)} (See \\hyperref[SIM:Bernoulli]{Simulation~\\ref*{SIM:Bernoulli}}) simulate $n$ IID $\\bernoulli(p)$ RVs $X_1,X_2,\\ldots,X_n$.  Once you have generated the data, pretend that you don't know the $p$ used in your simulation and estimate $p$ using the sample mean estimator we have seen in \\hyperref[EX:EstimatePFromNIIDBernoulliTrials]{Example~\\ref*{EX:EstimatePFromNIIDBernoulliTrials}},  That is, compute the estimate $\\widehat{p}_n=n^{-1}\\sum_{i=1}^n x_i$ and also the approximate $95\\%$ Normal-based confidence interval $C_n = \\widehat{p}_n \\pm 1.96 \\sqrt{\\frac{\\widehat{p}_n(1-\\widehat{p}_n)}{n}}$.\n\n%For each value of $p \\in [0, 1/100, 1/10, 2/10, 3/10, 4/10, 5/10, 6/10, 7/10, 8/10, 9/10, 9/100, 1]$,  generate $n$ IID $\\bernoulli(p)$ samples where $n$ ranges in $\\{10^i: i =1,2,3,4,5 \\}$ and estimate $p$ from the simulated data of different sample sizes.  Do you intuitively agree with the behavior of the confidence intervals, in terms of the changes in their widths, as $n$ gets large for each of the fixed $p$'s ?  Is the point estimate $\\widehat{p}_n$ approaching the $p$ from which the data were simulated for each of the fixed $p$'s as $n$ gets large ?\n%For each value of $p$ as before, generate, say $1000$ data sets each of sample size $n \\in \\{10^i: i =1,2,3,4,5 \\}$ and empirically study the coverage properties of the estimator.  That is, for each $p$ and $n$, find what fraction of the Normal-based  $95\\%$ confidence intervals constructed from each of the $1000$ replicate data sets actually contain the parameter $p$ that they were simulated from.  Try to explain why coverage properties are both a function of how close $p$ is to the boundary of the the parameter space $[0,1]$ and the sample size $n$.  
[Inspired by Russell Gribble]\n%\\end{labwork}\n\n\\section{More Properties of the Maximum Likelihood Estimator}\\label{S:PropsMLE}\nNext, we list some nice properties of the ML Estimator $\\widehat{\\Theta}_n$ for the fixed and possibly unknown $\\theta^* \\in \\BB{\\Theta}$.\n\\begin{enumerate}\n\\item The ML Estimator is asymptotically consistent, i.e.~\n$\\widehat{\\Theta}_n \\overset{P}{\\longrightarrow} \\theta^*$.\n\\item The ML Estimator is asymptotically normal, i.e.~\n$(\\widehat{\\Theta}_n - \\theta^*) / \\widehat{\\mathsf{se}}_n \\rightsquigarrow \\normal(0,1)$.\n\\item The estimated standard error of the ML Estimator, $\\widehat{\\mathsf{se}}_n$, can usually be computed analytically using the \\hyperref[S:FisherInfo]{\\bf Fisher Information}.\n\\item Because of the previous two properties, the $1-\\alpha$ confidence interval can also be computed analytically as $\\widehat{\\Theta}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n$.\n\\item The ML Estimator is {\\bf equivariant}, i.e.~$\\widehat{\\psi}_n=g(\\widehat{\\theta}_n)$ is the ML Estimate of $\\psi^*=g(\\theta^*)$, for some smooth function $g(\\theta)=\\psi: \\BB{\\Theta} \\to \\BB{\\Psi}$.  \n\\item We can also obtain the estimated standard error of the estimator \n$\\widehat{\\Psi}_n$ of $\\psi^* \\in \\BB{\\Psi}$ via the \\hyperref[S:DeltaMethod]{\\bf Delta Method}.\n\\item The ML Estimator is {\\bf asymptotically optimal} or {\\bf efficient}.  This means that the MLE has the smallest variance among the well-behaved class of estimators as the sample size gets larger.\n\\item The ML Estimator is close to the Bayes estimator (obtained in the Bayesian inferential paradigm).\n\\end{enumerate}\n\n\\section{Fisher Information}\\label{S:FisherInfo}\nLet $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} f(X_1;\\theta)$.  Here, $f(X_1;\\theta)$ is the probability density function (pdf) or the probability mass function (pmf) of the RV $X_1$.  
Since all RVs are identically distributed, we simply focus on $X_1$ without loss of generality.\n\\begin{definition}[Fisher Information]\\label{D:FisherInfo}\nThe {\\bf score function} of an RV $X$ for which the density is parameterised by $\\theta$ is defined as:\n\\[\n\\mathscr{S}(X;\\theta) := \\frac{\\partial}{\\partial \\theta} \\log f(X;\\theta), \\qquad \\text{and} \\quad \n\\E_{\\theta} (\\mathscr{S}(X;\\theta))=0 \\ .\n\\]\nThe expectation of the score function is $0$ if $X$ is distributed according to $f(x; \\theta)$ and under regularity conditions that allow for the interchange of differentiation and integration or summation, as shown below:\n\\begin{eqnarray*}\n\\E_{\\theta} (\\mathscr{S}(X;\\theta)) \n&=& \\E_{\\theta} \\left( \\frac{\\partial}{\\partial \\theta} \\log f(X;\\theta) \\right) = \\int \\frac{\\frac{\\partial}{\\partial \\theta} f(x;\\theta)}{f(x;\\theta)} d F(x;\\theta)\\\\ \\notag \n&=& \n\\begin{cases}\n\\smashoperator[r]{\\sum_{x \\in \\Xz}}  \\left( \\frac{\\frac{\\partial}{\\partial \\theta} f(x;\\theta)}{f(x;\\theta)}  \\right) f(x;\\theta) \n%= \\sum_{x \\in \\Xz} \\frac{\\partial}{\\partial \\theta} f(x;\\theta) \n= \\frac{\\partial}{\\partial \\theta} \\left( \\sum_{x \\in \\Xz} f(x;\\theta) \\right) = \\frac{\\partial}{\\partial \\theta} (1) = 0  & \\text{for discrete $X$}\\\\\n\\smashoperator[r]{\\int_{x \\in \\Xz}}  \\left( \\frac{\\frac{\\partial}{\\partial \\theta} f(x;\\theta)}{f(x;\\theta)}  \\right) f(x;\\theta) dx  \n%=  \\int_{x \\in \\Xz} \\frac{\\partial}{\\partial \\theta} f(x;\\theta) dx \n= \\frac{\\partial}{\\partial \\theta} \\left( \\int_{x \\in \\Xz}  f(x;\\theta) dx \\right) = \\frac{\\partial}{\\partial \\theta} (1) = 0 & \\text{for continuous $X$} \n\\end{cases} \n\\end{eqnarray*}\nThe {\\bf Fisher Information} is simply the variance of the score function.\n\\begin{equation}\\label{E:FisherInfo}\nI_n := \\V_{\\theta} \\left( \\sum_{i=1}^n \\mathscr{S}(X_i;\\theta) \\right) \n=  \\sum_{i=1}^n \\V_{\\theta} \\left( \\mathscr{S}(X_i;\\theta) \\right) \n= n I_1(\\theta),\n\\end{equation}\nwhere $I_1$ is the Fisher Information of just one of the RVs $X_i$, say for e.g.~$X$:\n\\begin{eqnarray}\nI_1 (\\theta) &:=& \\V_{\\theta} \\left( \\mathscr{S}(X;\\theta) \\right) \n= \\E_{\\theta} \\left(  \\mathscr{S}^2(X,\\theta) \\right) - \\left( \\E_{\\theta}  \\mathscr{S} (X,\\theta) \\right)^2 = \\E_{\\theta} \\left(  \\mathscr{S}^2(X,\\theta) \\right) - 0^2 \\notag \\\\\n&=& \\E_{\\theta} \\left(  \\mathscr{S}^2(X,\\theta) \\right) \n= \n\\begin{cases}\n\\smashoperator[r]{\\sum_{x \\in \\Xz}}  \\left( \\frac{\\partial}{\\partial \\theta} \\log ( f(x;\\theta) )  \\right)^2 f(x;\\theta)  & \\text{for discrete $X$}\\\\\n\\smashoperator[r]{\\int_{x \\in \\Xz}}  \\left( \\frac{\\partial}{\\partial \\theta} \\log ( f(x;\\theta) )  \\right)^2 f(x;\\theta) dx & \\text{for continuous $X$} \n\\end{cases}\n\\label{E:FisherInfoOriginal}\n\\end{eqnarray}\nIf $\\log(f(x;\\theta))$ is twice differentiable with respect to $\\theta$ and under further regularity conditions, we can also express Fisher Information as the negative of the expected curvature of the log likelihood function:\n\\begin{eqnarray}\nI_1 (\\theta)\n&=& - \\E_{\\theta} \\left(  \\frac{\\partial^2 \\log f(X;\\theta)}{\\partial \\theta^2} \\right)\n=\n\\begin{cases}\n-\\smashoperator[r]{\\sum_{x \\in \\Xz}}  \\left( \\frac{\\partial^2 \\log f(x;\\theta)}{\\partial \\theta^2} \\right) f(x;\\theta)  & \\text{for discrete $X$}\\\\\n-\\smashoperator[r]{\\int_{x \\in \\Xz}}  \\left( 
\\frac{\\partial^2 \\log f(x;\\theta)}{\\partial \\theta^2} \\right) f(x;\\theta) dx  & \\text{for continuous $X$} \\qquad \\label{E:FisherInfo1}\n\\end{cases}\n\\end{eqnarray}\nNote that \\eqref{E:FisherInfo1} is due to taking expectation, $\\E_{\\theta} (\\cdot)$, of the LHS and RHS of the following equalities:\n\\begin{eqnarray*}\n\\left( \\frac{\\partial^2}{\\partial \\theta^2} \\log (f(x; \\theta)) \\right)\n&=& \n\\frac{\\partial}{\\partial \\theta} \\left( \\frac{\\partial}{\\partial \\theta} \\log ( f(x;\\theta) )   \\right)\n=\n\\frac{\\partial}{\\partial \\theta} \\left( \\frac{\\frac{\\partial}{\\partial \\theta} f(x;\\theta)}{f(x;\\theta)}  \\right) \\\\\n&=&\n\\frac{\\frac{\\partial^2}{\\partial \\theta^2} f(x;\\theta)}{f(x;\\theta)}  - \\left( \\frac{\\frac{\\partial}{\\partial \\theta} f(x;\\theta)}{f(x; \\theta)} \\right)^2\n=\n\\frac{\\frac{\\partial^2}{\\partial \\theta^2} f(x;\\theta)}{f(x;\\theta)}  - \\left( \\frac{\\partial}{\\partial \\theta} \\log(f(x;\\theta)) \\right)^2\n\\end{eqnarray*}\nand due to the $\\E_{\\theta}$ of the second-last term above being $0$ and that of the last term being $I_1$ as per \\eqref{E:FisherInfoOriginal}.\n\\end{definition}\nNext, we give a {\\bf general method} for obtaining:\n\\begin{enumerate}\n\\item\nThe standard error $\\mathsf{se}_n(\\widehat{\\Theta}_n)$ of {\\bf any} maximum likelihood estimator $\\widehat{\\Theta}_n$ of the possibly unknown and fixed parameter of interest $\\theta^* \\in \\BB{\\Theta}$, and\n\\item The $1-\\alpha$ confidence interval for $\\theta^*$.\n\\end{enumerate}\n\n\\begin{prop}[Asymptotic Normality of the ML Estimator \\& Confidence Intervals]\nLet $\\widehat{\\Theta}_n$ be the maximum likelihood estimator of $\\theta^* \\in \\BB{\\Theta}$ with standard error $\\mathsf{se}_n := \\sqrt{\\V_{\\theta^*} (\\widehat{\\Theta}_n)}$.  Under appropriate regularity conditions, the following propositions are true:\n\\begin{enumerate}\n\\item The standard error $\\mathsf{se}_n$ can be approximated by the side of a square whose area is the inverse Fisher Information at $\\theta^*$, and the distribution of $\\widehat{\\Theta}_n$ approaches that of the $\\normal(\\theta^*, \\mathsf{se}_n^2)$ distribution as the sample size $n$ gets larger.  In other terms:\n\\[\n\\mathsf{se}_n \\approxeq \\sqrt{1/I_n(\\theta^*)} \\qquad \\text{and} \\quad \\frac{\\widehat{\\Theta}_n-\\theta^*}{\\mathsf{se}_n} \\rightsquigarrow \\normal(0,1)\n\\]\n\\item The approximation holds even if we substitute the ML Estimate $\\widehat{\\theta}_n$ for $\\theta^*$ and use the estimated standard error $\\widehat{\\mathsf{se}}_n$ instead of $\\mathsf{se}_n$.  Let $\\widehat{\\mathsf{se}}_n = \\sqrt{1/I_n(\\widehat{\\theta}_n)}$.  Then:\n\\[\n \\frac{\\widehat{\\Theta}_n-\\theta^*}{\\widehat{\\mathsf{se}}_n} \\rightsquigarrow \\normal(0,1)\n\\]\n\\item Using the fact that $\\widehat{\\Theta}_n \\rightsquigarrow \\normal(\\theta^*,\\widehat{\\mathsf{se}}_n^2)$, we can construct the estimate of an approximate Normal-based $1-\\alpha$ confidence interval as:\n\\[\nC_n  =[\\underline{C}_n, \\overline{C}_n]= [\\widehat{\\theta}_n - z_{\\alpha/2} \\widehat{\\mathsf{se}}_n, \\widehat{\\theta}_n + z_{\\alpha/2} \\widehat{\\mathsf{se}}_n]= \\widehat{\\theta}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n\n\\]\n\\end{enumerate}\n\\end{prop}\nNow, let us do an example.\n\\begin{example}[MLE and Confidence Interval for the IID $\\poisson(\\lambda)$ experiment]\nSuppose the fixed parameter $\\lambda^* \\in \\BB{\\Lambda} = (0,\\infty)$ is unknown.  
Let $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\poisson(\\lambda^*)$.  We want to find the ML Estimate $\\widehat{\\lambda}_n$ of $\\lambda^*$ and produce a $1-\\alpha$ confidence interval for $\\lambda^*$.\n\nThe MLE can be obtained as follows:\n\nThe likelihood function is:\n\\[\nL(\\lambda) := L(x_1,x_2,\\ldots,x_n; \\lambda) = \\prod_{i=1}^n f(x_i;\\lambda) = \\prod_{i=1}^n e^{-\\lambda} \\frac{\\lambda^{x_i}}{x_i!}\n\\]\nHence, the log-likelihood function is:\n\\begin{eqnarray}\n\\ell(\\lambda) := \\log(L(\\lambda)) \n&=&  \\log \\left( \\prod_{i=1}^n e^{-\\lambda} \\frac{\\lambda^{x_i}}{x_i!} \\right) =  \\sum_{i=1}^n \\log \\left(  e^{-\\lambda} \\frac{\\lambda^{x_i}}{x_i!} \\right) \n=  \\sum_{i=1}^n \\left( \\log (e^{-\\lambda}) + \\log( \\lambda^{x_i}) - \\log({x_i!}) \\right)  \\notag \\\\\n&=& \\sum_{i=1}^n \\left({-\\lambda} + x_i \\log( \\lambda) - \\log({x_i!}) \\right) \n= \\sum_{i=1}^n {-\\lambda} + \\sum_{i=1}^n x_i \\log( \\lambda) - \\sum_{i=1}^n \\log({x_i!})  \\notag \\\\\n&=& n ( {-\\lambda}) + \\log( \\lambda) \\left( \\sum_{i=1}^n x_i \\right) - \\sum_{i=1}^n \\log({x_i!})  \\notag \n\\end{eqnarray}\nNext, take the derivative of $\\ell(\\lambda)$:\n\\[\n\\frac{\\partial}{\\partial \\lambda} \\ell (\\lambda) \n= \\frac{\\partial}{\\partial \\lambda} \\left(  n ( {-\\lambda}) + \\log( \\lambda) \\left( \\sum_{i=1}^n x_i \\right) - \\sum_{i=1}^n \\log({x_i!})  \\right)\n= n(-1) + \\frac{1}{\\lambda} \\left( \\sum_{i=1}^n x_i \\right) + 0\n\\]\nand set it equal to $0$ to solve for $\\lambda$, as follows:\n\\[\n0 = n(-1) + \\frac{1}{\\lambda} \\left( \\sum_{i=1}^n x_i \\right) + 0 \\iff n = \\frac{1}{\\lambda} \\left( \\sum_{i=1}^n x_i \\right) \\iff \\lambda =  \\frac{1}{n} \\left( \\sum_{i=1}^n x_i \\right) = \\overline{x}_n\n\\]\nFinally, the ML Estimator of $\\lambda^*$ is $\\widehat{\\Lambda}_n = \\overline{X}_n$ and the ML estimate is $\\widehat{\\lambda}_n = \\overline{x}_n$.\n\nNow, we want a $1-\\alpha$ confidence interval for $\\lambda^*$ using the $\\widehat{\\mathsf{se}}_n \\approxeq \\sqrt{1/I_n(\\widehat{\\lambda}_n)}$ that is based on the Fisher Information $I_n(\\lambda) = n I_1(\\lambda)$ given in \\eqref{E:FisherInfo}.  We need $I_1$ given in \\eqref{E:FisherInfo1}.  
Since $X_1,X_2,\\ldots,X_n \\sim \\poisson(\\lambda)$, we have discrete RVs:\n\\[\nI_1 = -\\sum_{x \\in \\Xz}  \\left( \\frac{\\partial^2 \\log (f(x;\\lambda))}{\\partial \\lambda^2} \\right) f(x;\\lambda) = -\\sum_{x =0}^{\\infty}  \\left( \\frac{\\partial^2 \\log (f(x;\\lambda))}{\\partial \\lambda^2} \\right) f(x;\\lambda)\n\\]\nFirst find \n\\begin{eqnarray}\n\\frac{\\partial^2 \\log (f(x;\\lambda))}{\\partial \\lambda^2}\n&=& \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial} {\\partial \\lambda} \\log \\left(  f(x;\\lambda) \\right) \\right)\n=  \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial} {\\partial \\lambda} \\log \\left( e^{-\\lambda} \\frac{\\lambda^x}{x!} \\right) \\right) \\notag \\\\\n&=& \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial} {\\partial \\lambda} \\left( -\\lambda + x \\log(\\lambda)-\\log({x!}) \\right) \\right)\n= \\frac{\\partial}{\\partial \\lambda} \\left( -1 + \\frac{x}{\\lambda}-0 \\right)\n= -\\frac{x}{\\lambda^2} \\notag\n\\end{eqnarray}\nNow, substitute the above expression into the right-hand side of $I_1$ to obtain:\n\\[\nI_1= - \\sum_{x=0}^{\\infty} \\left( -\\frac{x}{\\lambda^2} \\right) f(x;\\lambda)\n= \\frac{1}{\\lambda^2} \\sum_{x=0}^{\\infty} \\left( x \\right) f(x;\\lambda)\n= \\frac{1}{\\lambda^2} \\sum_{x=0}^{\\infty} \\left( x \\right) e^{-\\lambda} \\frac{\\lambda^x}{x!} \n=  \\frac{1}{\\lambda^2} \\E_{\\lambda}(X)\n= \\frac{1}{\\lambda^2} \\lambda\n=\\frac{1}{\\lambda}\n\\]\nIn the third-to-last step above, we recognise the sum as the expectation of the $\\poisson(\\lambda)$ RV $X$, namely $\\E_{\\lambda}(X)=\\lambda$.  Therefore, the estimated standard error is:\n\\[\n\\widehat{\\mathsf{se}}_n \\approxeq \\sqrt{1/I_n(\\widehat{\\lambda}_n)}\n= \\sqrt{1/(n I_1(\\widehat{\\lambda}_n))}\n= \\sqrt{1/(n (1/\\widehat{\\lambda}_n))}\n= \\sqrt{\\widehat{\\lambda}_n/n}\n\\]\nand the approximate $1-\\alpha$ confidence interval is\n\\[\n\\widehat{\\lambda}_n \\pm z_{\\alpha/2} \\widehat{\\mathsf{se}}_n \n= \\widehat{\\lambda}_n \\pm z_{\\alpha/2} \\sqrt{\\widehat{\\lambda}_n/n}\n\\]\n\\end{example}\n\nThus, using the MLE and the estimated standard error via the Fisher Information, we can carry out point estimation and confidence interval construction in {\\bf most} parametric families of RVs encountered in typical engineering applications.  \n
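\nFor the $\\poisson$ case just derived, the following minimal {\\sc Matlab} sketch (not one of the course scripts; the ten counts are hypothetical, illustrative data) computes the point estimate and the approximate $95\\%$ confidence interval with $z_{0.025}=1.96$:\n\\begin{VrbM}\nx = [2 1 0 3 2 1 4 0 2 1];             % hypothetical Poisson counts\nn = length(x);\nLambdaHat = mean(x)                    % ML estimate = sample mean\nStdErr = sqrt(LambdaHat/n)             % estimated standard error\nCI95 = LambdaHat + [-1 1]*1.96*StdErr  % approximate 95% confidence interval\n\\end{VrbM}\n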
\n\\begin{example}[Fisher Information of the $\\bernoulli$ Experiment]\\label{EX:BernoulliFisherInfo}\nSuppose $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim}$ $\\bernoulli(\\theta^*)$.  Also, suppose that $\\theta^* \\in \\BB{\\Theta} = [0,1]$ is unknown.  We have already shown in \\hyperref[EX:MLECoinTossing]{Example~\\ref*{EX:MLECoinTossing}} that the ML estimator of $\\theta^*$ is $\\widehat{\\theta}_n = \\overline{X}_n$.  Using the identity:\n\\[\n\\widehat{\\mathsf{se}}_n = \\frac{1}{\\sqrt{I_n(\\widehat{\\theta}_n)}}\n\\]\n(1) we can compute $\\widehat{\\mathsf{se}}_n(\\widehat{\\Theta}_n)$, the estimated standard error of the ML estimator of the unknown parameter $\\theta^*$, as follows:\n\\begin{flalign*}\n\\widehat{\\mathsf{se}}_n (\\widehat{\\Theta}_n) &= \\frac{1}{\\sqrt{I_n(\\widehat{\\theta}_n)}} =   \\frac{1}{\\sqrt{n I_1(\\widehat{\\theta}_n)}} \\ .\n\\end{flalign*}\nSo, we need to first compute $I_1(\\theta)$, the Fisher Information of one sample.  Due to \\eqref{E:FisherInfo1} and the fact that the $\\bernoulli(\\theta^*)$ distributed RV $X$ is discrete with probability mass function $f(x;\\theta)=\\theta^{x} (1-\\theta)^{1-x}$, for $x\\in \\Xz := \\{0,1\\}$, we have,\n\\begin{flalign*}\nI_1(\\theta) &= - \\E_{\\theta} \\left(  \\frac{\\partial^2 \\log f(X;\\theta)}{\\partial \\theta^2} \\right) = - \\sum_{x \\in \\Xz = \\{0,1\\}} \\left( \\frac{\\partial^2 \\log \\left( \\theta^{x} (1-\\theta)^{1-x} \\right)}{\\partial \\theta^2} \\right) \\theta^{x} (1-\\theta)^{1-x}\n\\end{flalign*}\nNext, let us compute,\n\\begin{flalign*}\n\\frac{\\partial^2 \\log \\left( \\theta^{x} (1-\\theta)^{1-x} \\right)}{\\partial \\theta^2} & :=\n\\frac{\\partial}{\\partial \\theta} \\left( \\frac{\\partial}{\\partial \\theta} \\left( \\log \\left( \\theta^{x} (1-\\theta)^{1-x} \\right) \\right) \\right) = \n\\frac{\\partial}{\\partial \\theta} \\left( \\frac{\\partial}{\\partial \\theta} \\left( x \\log(\\theta) + (1-x) \\log(1-\\theta)  \\right) \\right) \\\\\n& = \\frac{\\partial}{\\partial \\theta} \\left(  x \\theta^{-1} + (1-x)(1-\\theta)^{-1} (-1)  \\right) \n=\\frac{\\partial}{\\partial \\theta} \\left(  x \\theta^{-1} - (1-x)(1-\\theta)^{-1}  \\right)  \\\\\n&=  x (-1) \\theta^{-1-1} - (1-x) (-1) (1-\\theta)^{-1-1} (-1)\n= - x \\theta^{-2} - (1-x) (1-\\theta)^{-2}\n\\end{flalign*}\nNow, we compute the expectation $I_1$, i.e.~the sum over the two possible values of $x\\in\\{0,1\\}$,\n\\begin{flalign*}\nI_1(\\theta) &= - \\sum_{x \\in \\Xz = \\{0,1\\}} \\left( \\frac{\\partial^2 \\log \\left( \\theta^{x} (1-\\theta)^{1-x} \\right)}{\\partial \\theta^2} \\right) \\theta^{x} (1-\\theta)^{1-x} \\\\\n& = - \\left( \\left(- 0 \\ \\theta^{-2} - (1-0) (1-\\theta)^{-2} \\right) \\theta^{0} (1-\\theta)^{1-0} + \\left( - 1 \\ \\theta^{-2} - (1-1) (1-\\theta)^{-2} \\right) \\theta^{1} (1-\\theta)^{1-1} \\right) \\\\\n& = - \\left( \\left(0 - 1 (1-\\theta)^{-2} \\right) \\ 1 \\ (1-\\theta)^1 + \\left( - \\theta^{-2} - 0  \\right) \\theta^1 \\ 1 \\right) \n=  (1-\\theta)^{-2} (1-\\theta)^1 +  \\theta^{-2} \\theta^1 \\\\\n& = (1-\\theta)^{-1} +  \\theta^{-1}  = \\frac{1}{1-\\theta} + \\frac{1}{\\theta} \n= \\frac{\\theta}{\\theta (1-\\theta)} +  \\frac{1-\\theta}{\\theta (1-\\theta)} \n= \\frac{\\theta+(1 - \\theta)}{\\theta (1-\\theta)} = \\frac{1}{\\theta (1-\\theta)}\n\\end{flalign*}\nTherefore, the desired estimated standard error of our estimator can be obtained by substituting the ML estimate $\\widehat{\\theta}_n=\\overline{x}_n := n^{-1}\\sum_{i=1}^n x_i$ of the unknown $\\theta^*$ as follows:\n\\begin{flalign*}\n\\widehat{\\mathsf{se}}_n(\\widehat{\\theta}_n) = \\frac{1}{\\sqrt{I_n(\\widehat{\\theta}_n)}} \n = \\frac{1}{\\sqrt{n I_1(\\widehat{\\theta}_n)}}\n = \\sqrt{\\frac{1}{n \\frac{1}{\\widehat{\\theta}_n (1-\\widehat{\\theta}_n)} }} \n= \\sqrt{\\frac{\\widehat{\\theta}_n(1-\\widehat{\\theta}_n)}{n}} \n= \\sqrt{\\frac{\\overline{x}_n(1-\\overline{x}_n)}{n}}  \\ .\n\\end{flalign*}\n(2) Using $\\widehat{\\mathsf{se}}_n(\\widehat{\\theta}_n)$ we can construct an approximate $95\\%$ confidence interval $C_n$ for $\\theta^*$, due to the asymptotic normality of the ML estimator of $\\theta^*$, as follows:\n\\[\nC_n = \\widehat{\\theta}_n \\pm 1.96 \\sqrt{\\frac{\\widehat{\\theta}_n(1-\\widehat{\\theta}_n)}{n}}\n= \\overline{x}_n \\pm 1.96 \\sqrt{\\frac{\\overline{x}_n(1-\\overline{x}_n)}{n}}\n\\]\nRecall that $C_n$ is the realisation of a random set based on your observed samples or data  $x_1,x_2,\\ldots,x_n$.   
Furthermore, $C_n$'s construction procedure ensures the engulfing of the unknown $\\theta^*$ with probability approaching $0.95$ as the sample size $n$ gets large.\n\n%(3) Flip any New Zealand coin as identically and independently as possible exactly $30$ times and record the outcomes ($1$ for heads and $0$ for tails).  Report the ML point estimate and the $95\\%$ confidence interval from your data.  Do you think that the way you have flipped your coin and the outcomes you have witnessed can hint at the fairness ($\\theta^*=0.5$) or unfairness ($\\theta^*\\neq0.5$) of the coin.  Write a couple of sentences to make your case.    [Take the time to flip coins this many times in a row, if you have not done so already.  Be honest and really do it.  I flipped an American quarter $100$ times to produce the data in \\hyperref[EX:EstimatePFromNIIDBernoulliTrials]{Example \\ref*{EX:EstimatePFromNIIDBernoulliTrials}}].\n\\end{example}\n
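\nHere is a minimal {\\sc Matlab} sketch of this recipe (not one of the course scripts); the $n=10$ outcomes are the coin-tossing data $(1,0,0,0,1,1,0,0,1,0)$ used earlier in these notes:\n\\begin{VrbM}\nx = [1 0 0 0 1 1 0 0 1 0];              % coin-tossing outcomes, 1 = Heads\nn = length(x);\nThetaHat = mean(x)                      % ML estimate = 0.40\nStdErr = sqrt(ThetaHat*(1-ThetaHat)/n)  % estimated standard error\nCI95 = ThetaHat + [-1 1]*1.96*StdErr    % approximate 95% confidence interval\n\\end{VrbM}\n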
\n\\begin{example}[Fisher Information of the $\\exponential$ Experiment]\\label{EX:ExponentialFisherInfo}\nLet us get our hands dirty with a continuous RV next.  Let $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\exponential(\\lambda^*)$.  We saw that the ML estimator of $\\lambda^* \\in \\BB{\\Lambda}=(0,\\infty)$ is $\\widehat{\\Lambda}_n = 1/\\overline{X}_n$ and its ML estimate is $\\widehat{\\lambda}_n=1/\\overline{x}_n$, where $x_1,x_2,\\ldots,x_n$ are our observed data.\n\n(1) Let us obtain the Fisher Information $I_n$ for this experiment to find the standard error:\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Lambda}_n) = \\frac{1}{\\sqrt{I_n(\\widehat{\\lambda}_n)}}\n= \\frac{1}{\\sqrt{n I_1(\\widehat{\\lambda}_n)}}\n\\]\nand construct an approximate $95\\%$ confidence interval for $\\lambda^*$ using the asymptotic normality of its ML estimator $\\widehat{\\Lambda}_n$.  \n\nSo, we need to first compute $I_1(\\lambda)$, the Fisher Information of one sample.  Due to \\eqref{E:FisherInfo1} and the fact that the $\\exponential(\\lambda^*)$ distributed RV $X$ is continuous with probability density function $f(x;\\lambda)=\\lambda e^{-\\lambda x}$, for $x\\in \\Xz := [0,\\infty)$, we have,\n\\begin{flalign*}\nI_1(\\lambda) &= - \\E_{\\lambda} \\left(  \\frac{\\partial^2 \\log f(X;\\lambda)}{\\partial \\lambda^2} \\right) = - \\int_{x \\in \\Xz = [0,\\infty)} \\left( \\frac{\\partial^2 \\log \\left( \\lambda e^{-\\lambda x} \\right)}{\\partial \\lambda^2} \\right) \\lambda e^{-\\lambda x} \\ dx\n\\end{flalign*}\nLet us compute the above integrand next.\n\\begin{flalign*}\n\\frac{\\partial^2 \\log \\left( \\lambda e^{-\\lambda x} \\right)}{\\partial \\lambda^2}\n&:= \n\\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial}{\\partial \\lambda} \\left( \\log \\left( \\lambda e^{-\\lambda x} \\right)   \\right) \\right)\n= \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial}{\\partial \\lambda} \\left( \\log(\\lambda) + \\log(e^{-\\lambda x}) \\right) \\right) \\\\\n&= \\frac{\\partial}{\\partial \\lambda} \\left( \\frac{\\partial}{\\partial \\lambda} \\left( \\log(\\lambda) -\\lambda x \\right) \\right) \n= \\frac{\\partial}{\\partial \\lambda} \\left( {\\lambda}^{-1} - x \\right) = - \\lambda^{-2} - 0 = -\\frac{1}{\\lambda^2}\n\\end{flalign*}\nNow, let us evaluate the integral by recalling that the expectation of the constant $1$ is 1 for any RV $X$ governed by some parameter, say $\\theta$.  For instance when $X$ is a continuous RV, $\\E_{\\theta}(1) = \\int_{x \\in \\Xz} 1 \\ f(x;\\theta) =  \\int_{x \\in \\Xz} \\ f(x;\\theta) = 1$.  Therefore, the Fisher Information of one sample is\n\\begin{flalign*}\nI_1(\\lambda) = - \\int_{x \\in \\Xz = [0,\\infty)} \\left( \\frac{\\partial^2 \\log \\left( \\lambda e^{-\\lambda x} \\right)}{\\partial \\lambda^2} \\right) \\lambda e^{-\\lambda x} \\ dx\n &=  - \\int_{0}^{\\infty} \\left(-\\frac{1}{\\lambda^2} \\right) \\lambda e^{-\\lambda x} \\ dx \\\\\n& = -  \\left(-\\frac{1}{\\lambda^2} \\right) \\int_{0}^{\\infty} \\lambda e^{-\\lambda x} \\ dx = \\frac{1}{\\lambda^2} \\ 1 = \\frac{1}{\\lambda^2}\n\\end{flalign*}\nNow, we can compute the desired estimated standard error, by substituting in the ML estimate $\\widehat{\\lambda}_n = 1/\\overline{x}_n := n / \\left( \\sum_{i=1}^n x_i \\right)$ of $\\lambda^*$, as follows:\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Lambda}_n) = \\frac{1}{\\sqrt{I_n(\\widehat{\\lambda}_n)}}\n= \\frac{1}{\\sqrt{n I_1(\\widehat{\\lambda}_n)}} \n= \\frac{1}{\\sqrt{n \\frac{1}{\\widehat{\\lambda}_n^2} }}\n= \\frac{\\widehat{\\lambda}_n}{\\sqrt{n}}\n= \\frac{1}{\\sqrt{n} \\ \\overline{x}_n}\n\\]\nUsing $\\widehat{\\mathsf{se}}_n(\\widehat{\\lambda}_n)$ we can construct an approximate $95\\%$ confidence interval $C_n$ for $\\lambda^*$, due to the asymptotic normality of the ML estimator of $\\lambda^*$, as follows:\n\\[\nC_n \n= \\widehat{\\lambda}_n \\pm 1.96 \\frac{\\widehat{\\lambda}_n}{\\sqrt{n}}\n= \\frac{1}{\\overline{x}_n} \\pm 1.96 \\frac{1}{\\sqrt{n} \\ \\overline{x}_n} \\enspace .\n\\]\nLet us compute the ML estimate and the $95\\%$ confidence interval for the rate parameter for the waiting times at the Orbiter bus-stop (see \\hyperref[LW:ExponentialMLEOrbiter]{labwork~\\ref*{LW:ExponentialMLEOrbiter}}).  The sample mean $\\overline{x}_{132}=9.0758$ and the ML estimate is:\n$$\\widehat{\\lambda}_{132}=1/\\overline{x}_{132}=1/9.0758=0.1102 \\enspace ,$$\nand the $95\\%$ confidence interval is:  \n\\[\nC_n \n= \\widehat{\\lambda}_{132} \\pm 1.96 \\frac{\\widehat{\\lambda}_{132}}{\\sqrt{132}}\n= \\frac{1}{\\overline{x}_{132}} \\pm 1.96 \\frac{1}{\\sqrt{132} \\ \\overline{x}_{132}} = 0.1102 \\pm 1.96 \\cdot 0.0096 = [0.0914, 0.1290] \\enspace .\n\\]\nNotice how poorly the exponential PDF $f(x;\\widehat{\\lambda}_{132}=0.1102)$ and the DF $F(x;\\widehat{\\lambda}_{132}=0.1102)$ based on the MLE fit with the histogram and the empirical DF, respectively, in \\hyperref[F:ExponentialMLECIOrbiter]{Figure~\\ref*{F:ExponentialMLECIOrbiter}}, despite taking the confidence interval into account.  This is a further indication of the inadequacy of our parametric model.\n\\end{example}\n\n\\begin{figure}[htpb]\n\\caption{Plot of $\\log(L(\\lambda))$ as a function of the parameter $\\lambda$, the MLE \n$\\widehat{\\lambda}_{132}=0.1102$ and $95\\%$ confidence interval $C_n=[0.0914, 0.1290]$ for Fenemore-Wang Orbiter Waiting Times Experiment from STAT 218 S2 2007.  
The PDF and the DF at (1) the MLE $0.1102$ (black), (2) lower $95\\%$ confidence bound $0.0914$ (red) and (3) upper $95\\%$ confidence bound $0.1290$ (blue) are compared with a histogram and the empirical DF.\n\\label{F:ExponentialMLECIOrbiter}}\n\\centering   \\makebox{\\includegraphics[width=6.5in]{figures/ExponentialMLECIOrbiter}}\n\\end{figure}\n\n\\begin{labwork}[Maximum likelihood estimation for Orbiter bus-stop]\\label{LW:ExponentialMLECIOrbiter}\nThe above analysis was undertaken with the following M-file:\n\\VrbMf[label=ExponentialMLECIOrbiter.m]{scripts/ExponentialMLECIOrbiter.m}\nA call to the script generates \\hyperref[F:ExponentialMLECIOrbiter]{Figure~\\ref*{F:ExponentialMLECIOrbiter}} and the following output of the sample mean, MLE, sample size, standard error and the $95\\%$ confidence interval.  \n\\begin{VrbM}\n>> ExponentialMLECIOrbiter\nSampleMean =    9.0758\nMLE =    0.1102\nn =   132\nStdErr =    0.0096\nMLE95CI =    0.0914    0.1290\n\\end{VrbM}\n\\end{labwork}\n\n\\begin{labwork} [Maximum likelihood estimation for your bus-stop]\\label{LW:IDSeededBusStopMLECI}\nRecall \\hyperref[LW:Next7Buses]{labwork~\\ref*{LW:Next7Buses}} where you modeled the arrival of buses using $\\exponential(\\lambda^*=0.1)$ distributed inter-arrival time with a mean of $1/\\lambda^*=10$ minutes.  Using the data of these seven inter-arrival times at your ID-seeded bus stop and pretending that you do not know the true $\\lambda^*$, report (1) the ML estimate of $\\lambda^*$, (2) $95\\%$ confidence interval for it and (3) whether the true value $\\lambda^*=1/10$ is engulfed by your confidence interval.  \n\\end{labwork}\n\n\\section{Delta Method}\\label{S:DeltaMethod}\nA more general estimation problem of interest concerns some function of the parameter $\\theta \\in \\BB{\\Theta}$, say $g(\\theta)=\\psi:\\BB{\\Theta} \\to \\BB{\\Psi}$.  So, $g(\\theta)=\\psi$ is a function from the parameter space $\\BB{\\Theta}$ to $\\BB{\\Psi}$.  Thus, we are not only interested in estimating the fixed and possibly unknown $\\theta^* \\in \\BB{\\Theta}$ using the ML estimator $\\widehat{\\Theta}_n$ and its ML estimate $\\widehat{\\theta}_n$, but also in estimating $\\psi^* = g(\\theta^*) \\in \\BB{\\Psi}$ via an estimator $\\widehat{\\Psi}_n$ and its estimate $\\widehat{\\psi}_n$.  We exploit the equivariance property of the ML estimator $\\widehat{\\Theta}_n$ of $\\theta^*$ and use the Delta method to find the following analytically:\n\\begin{enumerate}\n\\item The ML estimator of $\\psi^*=g(\\theta^*) \\in \\BB{\\Psi}$ is \n$$\\boxed{\\widehat{\\Psi}_n = g(\\widehat{\\Theta}_n)}$$ \nand its point estimate is \n$$\\boxed{\\widehat{\\psi}_n=g(\\widehat{\\theta}_n)}$$\n\\item Suppose $g(\\theta)=\\psi:\\BB{\\Theta} \\to \\BB{\\Psi}$ is {\\bf any} smooth function of $\\theta$, i.e.~$g$ is differentiable, and $g'(\\theta) := \\frac{\\partial}{\\partial \\theta}g(\\theta) \\neq 0$.  
Then, the distribution of the ML estimator $\\widehat{\\Psi}_n$ is asymptotically $\\normal(\\psi^*, {\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n)}^2)$, i.e.:\n\\[\n\\boxed{\n\\frac{\\widehat{\\Psi}_n - \\psi^*}{\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n)} \\rightsquigarrow\n\\normal(0,1)\n}\n\\]\nwhere the standard error $\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n)$ of the ML estimator $\\widehat{\\Psi}_n$ of the unknown quantity $\\psi^* \\in \\BB{\\Psi}$ can be obtained from the standard error $\\widehat{\\mathsf{se}}_n(\\widehat{\\Theta}_n)$ of the ML estimator $\\widehat{\\Theta}_n$ of the parameter $\\theta^* \\in \\BB{\\Theta}$, as follows:\n\\[\n\\boxed{\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n) = |g'(\\widehat{\\theta}_n)| \\widehat{\\mathsf{se}}_n(\\widehat{\\Theta}_n)\n}\n\\]\n\\item Using $\\normal(\\psi^*, {\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n)}^2)$, we can construct the estimate of an approximate Normal-based $1-\\alpha$ confidence interval for $\\psi^* \\in \\BB{\\Psi}$:\n\\[\n\\boxed{\nC_n  =[\\underline{C}_n, \\overline{C}_n]= \\widehat{\\psi}_n \\pm z_{\\alpha/2} {\\widehat{\\mathsf{se}}_n(\\widehat{\\psi}_n)}\n}\n\\]\n\\end{enumerate}\n\nLet us do an example next.\n\\begin{example}\\label{EX:BernoulliDelta}\nLet $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\bernoulli(\\theta^*)$.  Let $\\psi=g(\\theta)=\\log(\\theta/(1-\\theta))$.  Suppose we are interested in producing a point estimate and confidence interval for $\\psi^*=g(\\theta^*)$.  We can use the Delta method as follows: \n\nFirst, the estimated standard error of the ML estimator of $\\theta^*$, as shown in \\hyperref[EX:BernoulliFisherInfo]{Example~\\ref*{EX:BernoulliFisherInfo}}, is\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Theta}_n) = \\sqrt{\\frac{\\widehat{\\theta}_n (1-\\widehat{\\theta}_n)}{n}} \\ .\n\\]\nThe ML estimator of $\\psi^*$ is:\n$$\\widehat{\\Psi}_n=\\log(\\widehat{\\Theta}_n / (1-\\widehat{\\Theta}_n))$$ \nand the ML estimate of $\\psi^*$ is:\n$$\\widehat{\\psi}_n=\\log(\\widehat{\\theta}_n / (1-\\widehat{\\theta}_n)) \\ .$$\nSince, $g'(\\theta) = 1/(\\theta (1-\\theta))$, by the Delta method, the estimated standard error of the ML estimator of $\\psi^*$ is:\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n) = |g'(\\widehat{\\theta}_n)| (\\widehat{\\mathsf{se}}_n(\\widehat{\\Theta}_n))\n= \\frac{1}{\\widehat{\\theta}_n (1-\\widehat{\\theta}_n)} \\sqrt{\\frac{\\widehat{\\theta}_n (1-\\widehat{\\theta}_n)}{n}}\n= \\frac{1}{\\sqrt{n\\widehat{\\theta}_n (1-\\widehat{\\theta}_n)}}\n= \\frac{1}{\\sqrt{n \\overline{x}_n (1-\\overline{x}_n)}} \\ .\n\\]\nAn approximate $95\\%$ confidence interval for $\\psi^*=\\log(\\theta^*/(1-\\theta^*))$ is:\n\\[\n\\widehat{\\psi}_n \\pm \\frac{1.96}{\\sqrt{n \\widehat{\\theta}_n (1-\\widehat{\\theta}_n)}}\n= \\log(\\widehat{\\theta}_n / (1-\\widehat{\\theta}_n)) \\pm \\frac{1.96}{\\sqrt{n \\widehat{\\theta}_n (1-\\widehat{\\theta}_n)}}\n= \\log(\\overline{x}_n / (1-\\overline{x}_n)) \\pm \\frac{1.96}{\\sqrt{n \\overline{x}_n (1-\\overline{x}_n)}} \\ .\n\\]\n\\end{example}\n\n\\begin{example}[Delta Method for a $\\normal$ Experiment]\\label{EX:NormalDelta}\nLet us try the Delta method on a continuous RV.  Let $X_1,X_2,\\ldots,X_n \\overset{IID}{\\sim} \\normal(\\mu^*, {\\sigma^*}^2)$.  Suppose that $\\mu^*$ is known and $\\sigma^*$ is unknown.  
Let us derive the ML estimate $\\widehat{\\psi}_n$ of $\\psi^* = \\log(\\sigma^*)$ and a $95\\%$ confidence interval for it in 6 steps.\n\n(1) First let us find the log-likelihood function $\\ell(\\sigma)$ \n\\begin{flalign*}\n\\ell(\\sigma) \n& := \\log (L(\\sigma)) := \\log( L(x_1,x_2,\\ldots,x_n; \\sigma)) = \\log \\left( \\prod_{i=1}^n f(x_i; \\sigma) \\right) = \\sum_{i=1}^n \\log \\left( f(x_i; \\sigma) \\right) \\\\\n& = \\sum_{i=1}^n \\log \\left( \\frac{1}{\\sigma \\sqrt{2 \\pi}}\n \\exp{\\left( - \\frac{1}{2 \\sigma^2} (x_i-\\mu)^2 \\right)} \\right) \\quad \\because \\text{\\scriptsize{$f(x_i;\\sigma)$ in \\eqref{eqn:norm_pdf} \nis pdf of $\\normal(\\mu,\\sigma^2)$ RV with known $\\mu$}} \\\\\n & =  \\sum_{i=1}^n \\left( \\log \\left( \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\right) +\n \\log \\left( \\exp{\\left( - \\frac{1}{2 \\sigma^2} (x_i-\\mu)^2 \\right)} \\right) \\right) \\\\\n& =  \\sum_{i=1}^n \\log \\left( \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\right) +\n \\sum_{i=1}^n \\left( - \\frac{1}{2 \\sigma^2} (x_i-\\mu)^2 \\right)\n = n \\log \\left( \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\right) +\n\\left( - \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \\\\\n& = n \\left( \\log \\left( \\frac{1}{\\sqrt{2 \\pi}} \\right) + \\log \\left( \\frac{1}{\\sigma} \\right) \\right) -\n\\left( \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \\\\\n& = n \\log \\left({\\sqrt{2 \\pi}}^{-1} \\right) + n \\log \\left( \\sigma^{-1} \\right)  -\n\\left( \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \\\\\n& = - n \\log \\left({\\sqrt{2 \\pi}} \\right) - n \\log \\left( \\sigma \\right)  -\n\\left( \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \n\\end{flalign*}\n(2) Let us find its derivative with respect to the unknown parameter $\\sigma$ next.\n\\begin{flalign*}\n\\frac{\\partial}{\\partial \\sigma } \\ell(\\sigma) \n& :=\n\\frac{\\partial}{\\partial \\sigma } \\left( - n \\log \\left({\\sqrt{2 \\pi}} \\right) - n \\log \\left( \\sigma \\right)  -\n\\left( \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \\right) \\\\\n& = \\frac{\\partial}{\\partial \\sigma } \\left( - n \\log \\left({\\sqrt{2 \\pi}} \\right) \\right) \n-  \\frac{\\partial}{\\partial \\sigma } \\left( n \\log \\left( \\sigma \\right) \\right) \n-  \\frac{\\partial}{\\partial \\sigma } \\left( \\left( \\frac{1}{2 \\sigma^2} \\right) \\sum_{i=1}^n (x_i-\\mu)^2 \\right)  \\\\\n& = 0 - n  \\frac{\\partial}{\\partial \\sigma } \\left( \\log(\\sigma) \\right) - \\left( \\frac{1}{2} \\sum_{i=1}^n (x_i-\\mu)^2 \\right) \\frac{\\partial}{\\partial \\sigma } \\left( \\sigma^{-2} \\right) \\\\\n& = -n \\sigma^{-1} - \\left( \\frac{1}{2} \\sum_{i=1}^n (x_i-\\mu)^2 \\right) \\left(-2  \\sigma^{-3} \\right)\n = -n \\sigma^{-1} +  \\sigma^{-3} \\sum_{i=1}^n (x_i-\\mu)^2\n\\end{flalign*}\n(3) Now, let us set the derivative equal to $0$ and solve for $\\sigma$.\n\\begin{flalign*}\n0  =  -n \\sigma^{-1} +  \\sigma^{-3} \\sum_{i=1}^n (x_i-\\mu)^2 \n& \\iff n \\sigma^{-1}   =   \\sigma^{-3} \\sum_{i=1}^n (x_i-\\mu)^2 \n \\iff n \\sigma^{-1} \\sigma^{+3}    =   \\sum_{i=1}^n (x_i-\\mu)^2 \\\\\n& \\iff n \\sigma^{-1+3}    =   \\sum_{i=1}^n (x_i-\\mu)^2 \n\\iff n \\sigma^{2}    =   \\sum_{i=1}^n (x_i-\\mu)^2 \\\\\n& \\iff \\sigma^{2}    =  \\left( \\sum_{i=1}^n (x_i-\\mu)^2 \\right) / n\n\\iff \\sigma = \\sqrt{\\sum_{i=1}^n(x_i-\\mu)^2/n}\n\\end{flalign*}\nFinally, we set the solution, i.e.~the maximiser of the concave-down log-likelihood function of $\\sigma$ with a known and fixed $\\mu^*$ as our 
ML estimate $\\widehat{\\sigma}_n=\\sqrt{\\sum_{i=1}^n(x_i-\\mu^*)^2/n}$.  Analogously, the ML estimator of $\\sigma^*$ is $\\widehat{\\Sigma}_n=\\sqrt{\\sum_{i=1}^n(X_i-\\mu^*)^2/n}$.  Don't confuse $\\Sigma$, the upper-case sigma, with $\\sum_{i=1}^n \\bigcirc_i $, the summation over some $\\bigcirc_i$'s.  This is usually clear from the context.\n\n(4) Next, let us get the estimated standard error $\\widehat{se}_n$ for the estimator of $\\sigma^*$ via Fisher Information.  The Log-likelihood function of $\\sigma$, based on one sample from the $\\normal(\\mu, \\sigma^2)$ RV with known $\\mu$ is,\n\\[\n\\log f(x;\\sigma) = \\log \\left( \\frac{1}{\\sigma \\sqrt{2 \\pi}}\n \\exp{\\left( - \\frac{1}{2 \\sigma^2} (x-\\mu)^2 \\right)} \\right)\n = - \\log \\left({\\sqrt{2 \\pi}} \\right) - \\log \\left( \\sigma \\right)  -\n\\left( \\frac{1}{2 \\sigma^2} \\right)  (x-\\mu)^2 \\\\\n\\]\nTherefore, in much the same way as in part (2) earlier,\n\\begin{flalign*}\n\\frac{\\partial^2 \\log f(x;\\sigma)}{\\partial \\sigma^2} \n& := \\frac{\\partial}{\\partial \\sigma} \n\\left( \n\\frac{\\partial}{\\partial \\sigma} \n\\left(  \n- \\log \\left({\\sqrt{2 \\pi}} \\right) - \\log \\left( \\sigma \\right)  - \\left( \\frac{1}{2 \\sigma^2} \\right)  \n(x-\\mu)^2 \\right) \\right) \\\\\n& = \\frac{\\partial}{\\partial \\sigma} \n\\left( -\\sigma^{-1}  +  \\sigma^{-3} \n(x-\\mu)^2 \\right) \n = \\sigma^{-2}  -3  \\sigma^{-4} (x-\\mu)^2 \n\\end{flalign*}\nNow, we compute the Fisher Information of one sample as an expectation of the continuous RV $X$ over $\\Xz = (-\\infty,\\infty)$ with density $f(x;\\sigma)$,\n\\begin{flalign*}\nI_1(\\sigma) \n& = - \\int_{x \\in \\Xz = (-\\infty,\\infty)} \\left( \\frac{\\partial^2 \\log f(x;\\sigma)}{\\partial \\lambda^2} \\right) f(x;\\sigma) \\ dx\n =  - \\int_{-\\infty}^{\\infty} \\left( \\sigma^{-2}  -3  \\sigma^{-4} (x-\\mu)^2  \\right) f(x;\\sigma) \\ dx \\\\\n &=   \\int_{-\\infty}^{\\infty} -\\sigma^{-2}  f(x;\\sigma) \\ dx +  \\int_{-\\infty}^{\\infty} 3  \\sigma^{-4} (x-\\mu)^2  f(x;\\sigma) \\ dx \\\\\n& =  -\\sigma^{-2} \\int_{-\\infty}^{\\infty} f(x;\\sigma) \\ dx + 3  \\sigma^{-4} \\int_{-\\infty}^{\\infty} (x- \\mu)^2  f(x;\\sigma) \\ dx \\\\\n& =  -\\sigma^{-2}  + 3  \\sigma^{-4} \\sigma^2  \\qquad \\qquad \\because \\sigma^2 = \\V(X) = \\E(X- \\E(X))^2 = \\int_{-\\infty}^{\\infty} (x- \\mu)^2  f(x;\\sigma) \\ dx \\\\\n& =  -\\sigma^{-2}  + 3  \\sigma^{-4+2} =  -\\sigma^{-2}  + 3  \\sigma^{-2}  = 2 \\sigma^{-2}  \n\\end{flalign*}\nTherefore, the estimated standard error of the estimator of the unknown $\\sigma^*$ is\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Sigma}_n) = \\frac{1}{\\sqrt{I_n(\\widehat{\\sigma}_n)}} = \\frac{1}{\\sqrt{n I_1 (\\widehat{\\sigma}_n)}}\n=  \\frac{1}{\\sqrt{n 2 \\sigma^{-2}}} =  \\frac{\\sigma}{\\sqrt{2 n}}  \\ .\n\\]\n(5) Given that $\\psi=g(\\sigma)=\\log(\\sigma)$, we derive the estimated standard error of $\\psi^*=\\log(\\sigma^*)$ via the Delta method as follows:\n\\[\n\\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n) = |g'(\\sigma)| \\widehat{\\mathsf{se}}_n(\\widehat{\\Sigma}_n) = \\left| \\frac{\\partial}{\\partial \\sigma} \\log(\\sigma) \\right|  \\frac{\\sigma}{\\sqrt{2 n}} \n= \\frac{1}{\\sigma} \\frac{\\sigma}{\\sqrt{2 n}} = \\frac{1}{\\sqrt{2 n}} \\ .\n\\]\n(6) Finally, the $95\\%$ confidence interval for $\\psi^*$ is $\\widehat{\\psi}_n \\pm 1.96 \\widehat{\\mathsf{se}}_n(\\widehat{\\Psi}_n) = \\log(\\widehat{\\sigma}_n) \\pm 1.96 \\frac{1}{\\sqrt{2 n}} $.\n\\end{example}\n\n\n\\remove{\n%In summary, there are three 
basic experimental situations to bear in mind, when estimating confidence sets from data.  We already saw the first two situations.\n%\\begin{enumerate}\n%\\item\n%{\\bf Variance of point estimator, $\\mathbf {\\mathsf{se}}_n^2$, is known {\\em a priori}:}\\\\\n%This was the case in Example \\hyperref[EX:CLTPoisson]{\\ref*{EX:CLTPoisson}} as well as Example 6.8 of Ang \\& Tang.  In such a case, we may can obtain the {\\bf exact} confidence interval directly via:\n%\\[\n%C_n := [\\underline{C}_{\\, n}, \\overline{C}_{\\, n}]\n%= [\\widehat{\\Theta}_n - z_{\\alpha/2} {\\mathsf{se}}_n, \\widehat{\\Theta}_n + z_{\\alpha/2} {\\mathsf{se}}_n] \\ .\n%\\]\n%\\item\n%{\\bf Variance of point estimator, $\\mathbf {\\mathsf{se}}_n^2$, is unknown but we have numerous samples, say $n \\geq 30$:}\\\\\n%In this case, we may use the asymptotically valid approximation $\\widehat{\\mathsf{se}}_n$ for ${\\mathsf{se}}_n$ to compute the confidence interval.\n%\\[\n%C_n := [\\underline{C}_{\\, n}, \\overline{C}_{\\, n}]\n%= [\\widehat{\\Theta}_n - z_{\\alpha/2} \\widehat{\\mathsf{se}}_n, \\widehat{\\Theta}_n + z_{\\alpha/2} \\widehat{\\mathsf{se}}_n]\n%\\]\n%\\item\n%{\\bf Variance of point estimator, $\\mathbf {\\mathsf{se}}_n^2$, is unknown and we have few samples, say $n < 30$:}\n%In this case, the asymptotically valid approximation may not hold any longer and we see how one can handle this situation in \\hyperref[S:SmallSamples]{\\ref*{S:SmallSamples}}.\n%\\end{enumerate}\n\n%\\section{One-sided Confidence Intervals}\n%So far, we have only considered two-sided confidence intervals, i.e.~both end-points of the confidence interval are random variables, actually estimators.  In some decision situations, we may want to fix one of these end-points to some value and only estimate the other one.\n\n%notes...\n\n%\\section{Small Samples, Measurement Theory and Student's $t$-Distrubution}\\label{S:SmallSamples}\n\n%See notes in class.\n\n%Student's $\\mathfrak{t}$-distribution\n\n%determining sample size\n\n%measurement theory\n\\clearpage\n}\n", "meta": {"hexsha": "899c06776aa1fafdd7f00a0c59c4e843d80e91bc", "size": 119978, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "matlab/csebook/SingleParamEstim.tex", "max_stars_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_stars_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-19T07:54:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T13:55:18.000Z", "max_issues_repo_path": "matlab/csebook/SingleParamEstim.tex", "max_issues_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_issues_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "matlab/csebook/SingleParamEstim.tex", "max_forks_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_forks_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-07-18T07:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-19T11:28:24.000Z", "avg_line_length": 72.802184466, "max_line_length": 1021, "alphanum_fraction": 0.6680808148, "num_tokens": 43607, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7341195152660688, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.5822345546429079}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath,amssymb,cite,graphicx,array}\n\n\\newcommand{\\bpm}{\\left(\\begin{matrix}}\n\\newcommand{\\epm}{\\end{matrix}\\right)}\n\\newcommand{\\grad}{\\nabla}\n\\newcommand{\\dx}{\\Delta x}\n\\newcommand{\\du}{\\Delta u}\n\\newcommand{\\dt}{\\Delta t}\n\\newcommand{\\dv}{\\Delta v}\n\\newcommand{\\dz}{\\Delta z}\n\\newcommand{\\dlam}{\\Delta\\lambda}\n\\newcommand{\\dnu}{\\Delta\\nu}\n\\newcommand{\\packname}{{\\sc $\\ell_1$-magic}\\ }\n\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\<}{\\langle}\n\\renewcommand{\\>}{\\rangle}\n\n\\newcommand{\\diag}{\\operatorname{diag}}\n\\newcommand{\\vzero}{\\mathbf{0}}\n\\newcommand{\\vone}{\\mathbf{1}}\n\n% formatting\n\\parindent = 0 pt\n\\parskip = 8 pt      \n\\addtolength{\\textwidth}{1in}\n\\addtolength{\\oddsidemargin}{-0.5in}\n\\addtolength{\\textheight}{1.6in}\n\\addtolength{\\topmargin}{-0.8in}         \n                                                                                                                                                 \n%-------------------------------------------------------------------------------\n\n\\title{\\packname: Recovery of Sparse Signals\\\\ via Convex Programming}\n\n\\author{Emmanuel Cand\\`es and Justin Romberg, Caltech}\n\n\\date{October 2005}\n\n\\begin{document}\n\n\\maketitle\n\n%-------------------------------------------------------------------------------\n\\section{Seven problems}\n\nA recent series of papers \n\\cite{candes04ro,candes04ne,candes05qu,candes05st,candes05da,candes05de} develops a theory of signal recovery from highly incomplete information.  The central results state that a sparse vector $x_0\\in\\R^N$ can be recovered from a small number of linear measurements $b=Ax_0\\in\\R^K,~K\\ll N$ (or $b=Ax_0+e$ when there is measurement noise) by solving a convex program.  \n\nAs a companion to these papers, this package includes MATLAB code that implements this recovery procedure in the seven contexts described below.  The code is not meant to be cutting-edge, rather it is a proof-of-concept showing that these recovery procedures are computationally tractable, even for large scale problems where the number of data points is in the millions.  \n\nThe problems fall into two classes: those which can be recast as linear programs (LPs), and those which can be recast as second-order cone programs (SOCPs).  The LPs are solved using a generic path-following primal-dual method.  The SOCPs are solved with a generic log-barrier algorithm.  The implementations follow Chapter 11 of \n\\cite{boyd04co}.\n\nFor maximum computational efficiency, the solvers for each of the seven problems are implemented separately.  They all have the same basic structure, however, with the computational bottleneck being the calculation of the Newton step (this is discussed in detail below).  
The code can be used in either ``small scale'' mode, where the system is constructed explicitly and solved exactly, or in ``large scale'' mode, where an iterative matrix-free algorithm such as conjugate gradients (CG) is used to approximately solve the system.\n \nOur seven problems of interest are:\n\\begin{itemize}\n%\n\\item {\\bf Min-$\\ell_1$ with equality constraints.}  The program\n\\[\n(P_1) \\quad \\min~\\|x\\|_1\\quad\\text{subject~to}\\quad Ax=b,\n\\]\nalso known as {\\em basis pursuit}, finds the vector with smallest {\\em $\\ell_1$ norm}\n\\[\n\\|x\\|_1 ~:=~ \\sum_i |x_i|\n\\]\nthat explains the observations $b$.\nAs the results in \\cite{candes04ro,candes04ne} show, if a sufficiently sparse $x_0$ exists such that $Ax_0=b$,\nthen $(P_1)$ will find it.  When $x,A,b$ have real-valued entries, $(P_1)$ can be recast as an LP (this is discussed in detail in \\cite{chen99at}).  \n%\n\\item {\\bf Minimum $\\ell_1$ error approximation.}  Let $A$ be a $M\\times N$ matrix with full rank.  Given $y\\in\\R^M$, the program\n\\[\n(P_A) \\quad \\min_x~\\|y-Ax\\|_1\n\\]\nfinds the vector $x\\in\\R^N$ such that the {\\em error} $y-Ax$ has minimum \n$\\ell_1$ norm (i.e. we are asking that the difference between $Ax$ and $y$ be sparse).\nThis program arises in the context of channel coding \\cite{candes05de}. \n\nSuppose we have a channel code that produces a codeword $c=Ax$ for a message $x$.  The message travels over the channel, and has an unknown number of its entries corrupted.  The decoder observes $y = c + e$, where $e$ is the corruption.\nIf $e$ is sparse enough, then the decoder can use $(P_A)$ to recover $x$ exactly.  Again, $(P_A)$ can be recast as an LP.\n%\n\\item {\\bf Min-$\\ell_1$ with quadratic constraints.}  This program finds the\nvector with minimum $\\ell_1$ norm that comes close to explaining the observations:\n\\[\n(P_2) \\quad \\min~\\|x\\|_1\\quad\\text{subject~to}\\quad \\|Ax-b\\|_2\\leq \\epsilon,\n\\]\nwhere $\\epsilon$ is a user specified parameter.  As shown in \\cite{candes05st}, if a sufficiently sparse $x_0$ exists such that $b = Ax_0 + e$, for some small error term $\\|e\\|_2\\leq\\epsilon$, then the solution $x^\\star_2$ to $(P_2)$ will be close to $x_0$.  That is, $\\|x^\\star_2-x_0\\|_2\\leq C\\cdot\\epsilon$, where $C$ is a small constant.  $(P_2)$ can be recast as a SOCP.\n%\n\\item {\\bf Min-$\\ell_1$ with bounded residual correlation.} Also referred to as the {\\em Dantzig Selector},\nthe program\n\\[\n(P_D) \\quad \\min~\\|x\\|_1\\quad\\text{subject~to}\\quad \\|A^*(Ax-b)\\|_\\infty\\leq\\gamma,\n\\]\nwhere $\\gamma$ is a user specified parameter,\nrelaxes the equality constraints of $(P_1)$ in a different way.  $(P_D)$ requires that the residual $Ax-b$\nof a candidate vector $x$ not be too correlated with any of the columns of $A$ (the product $A^*(Ax-b)$ measures each of these correlations).  If $b = Ax_0 + e$, where $e_i\\sim \\mathcal{N}(0,\\sigma^2)$, then the solution $x^\\star_D$ to $(P_D)$ has near-optimal minimax risk:\n\\[\nE\\|x^\\star_D-x_0\\|^2_2 \\leq C (\\log N)\\cdot\\sum_i \\min(x_0(i)^2, \\sigma^2),\n\\]\n(see \\cite{candes05da} for details).  
For real-valued $x,A,b$, $(P_D)$ can again be recast as an LP; in the complex case, there is an equivalent SOCP.\n%\n\\end{itemize}\nIt is also true that when $x,A,b$ are complex, the programs $(P_1),(P_A),(P_D)$ can be written as SOCPs, but we will not pursue this here.\n\nIf the underlying signal is a 2D image, an alternate recovery model is that the \n{\\em gradient} is sparse \\cite{rudin92no}.\nLet $x_{ij}$ denote the pixel in the $i$th row and $j$ column of an $n\\times n$ image $x$, and\ndefine the operators \n\\[\nD_{h;ij}x = \\begin{cases} x_{i+1,j}-x_{ij} & i < n\\\\\n0 & i = n  \\end{cases}\n\\qquad\nD_{v;ij}x = \\begin{cases} x_{i,j+1}-x_{ij} & j < n\\\\\n0 & j = n  \\end{cases},\n\\]\nand\n\\begin{equation}\n\\label{eq:Dij}\nD_{ij}x = \\left(\\begin{array}{c} D_{h;ij}x \\\\ D_{v;ij}x \\end{array}\\right).\n\\end{equation}\nThe $2$-vector $D_{ij}x$ can be interpreted as a kind of discrete gradient of the digital image $x$.\nThe {\\em total variation} of $x$ is simply the sum of the magnitudes of this discrete gradient at every point:\n\\[\n\\operatorname{TV}(x) := \\sum_{ij} \\sqrt{(D_{h;ij}x)^2 + (D_{v;ij}x)^2} = \n\\sum_{ij} \\|D_{ij}x\\|_2.\n\\]\n\nWith these definitions, we have three programs for image recovery, \neach of which can be recast as a SOCP:\n\\begin{itemize}\n%\n\\item {\\bf Min-TV with equality constraints.}\n\\[\n(TV_1) \\quad \\min~\\operatorname{TV}(x) \\quad\\text{subject~to}\\quad Ax=b\n\\]\nIf there exists a piecewise constant $x_0$ with sufficiently few edges (i.e.\\ $D_{ij}x_0$ is nonzero for only a small number of indices $ij$), then $(TV_1)$ will recover $x_0$ exactly --- see \\cite{candes04ro}.\n%\n\\item {\\bf Min-TV with quadratic constraints.}\n\\[\n(TV_2) \\quad \\min~\\operatorname{TV}(x) \\quad\\text{subject~to}\\quad \\|Ax-b\\|_2\\leq\\epsilon\n\\]\nExamples of recovering images from noisy observations using $(TV_2)$ were presented in \\cite{candes05st}.  Note that if $A$ is the identity matrix, $(TV_2)$ reduces to the standard Rudin-Osher-Fatemi image restoration problem \\cite{rudin92no}.  See also \n\\cite{chan99no,goldfarb04se,hintermueller05in,lobo98ap} for SOCP solvers specifically designed for the total-variational functional.\n\n%\n\\item {\\bf Dantzig TV.}\n\\[\n(TV_D) \\quad \\min~\\operatorname{TV}(x) \\quad\\text{subject~to}\\quad \\|A^*(Ax-b)\\|_\\infty\\leq\\gamma\n\\]\nThis program was used in one of the numerical experiments in \\cite{candes05da}.\n%\n\\end{itemize}\n\nIn the next section, we describe how to solve linear and second-order cone programs using modern interior point methods.\n\n\n%-------------------------------------------------------------------------------\n\\section{Interior point methods}\n\nAdvances in interior point methods for convex optimization over the past 15 years, led by the seminal work \\cite{nesterov94in}, have made large-scale solvers for the seven problems above feasible.  Below we overview the generic LP and SOCP solvers used in the \\packname package to solve these problems.  \n\n%-------------------------------------------------------------------------------\n\\subsection{A primal-dual algorithm for linear programming}\n\\label{sec:primaldual}\n\nIn Chapter 11 of \\cite{boyd04co}, Boyd and Vandenberghe outline a relatively simple primal-dual\nalgorithm for linear programming which we have followed very closely for the implementation of \n$(P_1)$,$(P_A)$, and $(P_D)$.  
For the sake of completeness, and to set up the notation, we briefly review their algorithm here.\n\nThe standard-form linear program is\n\\begin{align*}\n\\min_{z}~ \\<c_0,z\\> \\quad\\text{subject~to}\\quad\n A_0 z & = b, \\\\[-2mm]\n f_i(z) &\\leq 0,\n\\end{align*}\nwhere the search vector $z\\in\\R^N$, $b\\in\\R^K$, $A_0$ is a $K\\times N$ matrix, and each of the $f_i,~i=1,\\ldots,m$ is a linear functional:\n\\[\nf_i(z) = \\<c_i,z\\> + d_i,\n\\]\nfor some $c_i\\in\\R^N$, $d_i\\in\\R$.  At the optimal point $z^\\star$, there will exist dual vectors $\\nu^\\star\\in\\R^K,\\lambda^\\star\\in\\R^m,\\lambda^\\star\\geq 0$ such that the Karush-Kuhn-Tucker conditions are satisfied:\n\\begin{align*}\n(KKT)\\quad\\quad\nc_0 + A_0^T\\nu^\\star + \\sum_i \\lambda^\\star_i c_i & = \\mathbf{0}, \\\\\n\\lambda^\\star_i f_i(z^\\star) & = 0,~~i=1,\\ldots,m, \\\\\nA_0 z^\\star & = b, \\\\\nf_i(z^\\star) & \\leq 0, ~~i=1,\\ldots,m.\\\\\n\\end{align*}\nIn a nutshell, the primal dual algorithm finds the optimal $z^\\star$ (along with optimal dual vectors $\\nu^\\star$ and $\\lambda^\\star$) by solving this system of nonlinear equations.   The solution procedure is the classical Newton method: at an {\\em interior point} $(z^k, \\nu^k, \\lambda^k)$ (by which we mean $f_i(z^k) < 0$, $\\lambda^k > 0$), the system is linearized and solved.  \nHowever, the step to new point $(z^{k+1}, \\nu^{k+1}, \\lambda^{k+1})$ must be modified so that we remain in the interior.\n\nIn practice, we relax the {\\em complementary slackness} condition $\\lambda_i f_i = 0$ to \n\\begin{equation}\n\\label{eq:relaxedcs}\n\\lambda^k_i f_i(z^k) = -1/\\tau^k,\n\\end{equation}\nwhere we judiciously increase the parameter $\\tau^k$ as we progress through the Newton iterations.  This biases the solution of the linearized equations towards the interior, allowing a smooth, well-defined ``central path'' from an interior point to the solution on the boundary (see \n\\cite{nocedal99nu,wright97pr} for an extended discussion).\n\nThe primal, dual, and central residuals quantify how close a point $(z,\\nu,\\lambda)$ is to satisfying $(KKT)$ with \\eqref{eq:relaxedcs} in place of the slackness condition:\n\\begin{eqnarray*}\nr_{\\mathrm{dual}} & = & c_0 + A_0^T\\nu + \\sum_i \\lambda_i c_i \\\\\nr_{\\mathrm{cent}} & = & -\\Lambda f - (1/\\tau)\\mathbf{1} \\\\\nr_{\\mathrm{pri}} & = & A_0 z-b,\\\\\n\\end{eqnarray*}\nwhere $\\Lambda$ is a diagonal matrix with $(\\Lambda)_{ii} = \\lambda_i$, and \n$f = \\bpm f_1(z) & \\ldots & f_m(z) \\epm^T$.\n\nFrom a point $(z,\\nu,\\lambda)$, we want to find a step $(\\dz,\\dnu,\\dlam)$ such that\n\\begin{equation}\n\\label{eq:res0}\nr_\\tau(z+\\dz,\\nu+\\dnu,\\lambda+\\dlam) = 0.\n\\end{equation}\nLinearizing \\eqref{eq:res0} with the Taylor expansion around $(z,\\nu,\\lambda)$,\n\\[\nr_\\tau(z+\\dz,\\nu+\\dnu,\\lambda+\\dlam) \\approx \nr_\\tau(z,\\nu,\\lambda) +  J_{r_\\tau}(z,\\nu\\lambda)\n\\bpm \\dz \\\\ \\dnu \\\\ \\dlam \\epm,\n\\]\nwhere $J_{r_\\tau}(z,\\nu\\lambda)$ is the Jacobian of $r_\\tau$, we have the system\n\\[\n\\bpm \\mathbf{0} & A_0^T & C^T \\\\\n-\\Lambda C & \\mathbf{0} & -F \\\\\nA_0 & \\mathbf{0} & \\mathbf{0} \n\\epm\n\\bpm \\dz \\\\ \\dv \\\\ \\dlam \\epm = \n- \\bpm\nc_0 + A_0^T\\nu + \\sum_i \\lambda_i c_i  \\\\\n-\\Lambda f - (1/\\tau)\\mathbf{1} \\\\\nA_0 z-b\n\\epm,\n\\]\nwhere $m\\times N$ matrix $C$ has the $c^T_i$ as rows, and $F$ is diagonal with \n$(F)_{ii} = f_i(z)$.\nWe can eliminate $\\dlam$ using:\n\\begin{equation}\n\\label{eq:dlambda}\n\\dlam = -\\Lambda F^{-1}C\\dz - 
\\lambda - (1/\\tau)f^{-1}\n\\end{equation}\nleaving us with the core system\n\\begin{equation}\n\\label{eq:pdnewton}\n\\bpm -C^TF^{-1}\\Lambda C & A_0^T \\\\ A_0 & \\mathbf{0} \\epm \\bpm \\dz \\\\ \\dnu \\epm = \n\\bpm -c_0 + (1/\\tau)C^Tf^{-1} - A_0^T\\nu\n\\\\ b-A_0 z \\epm.\n\\end{equation}\n\nWith the $(\\dz,\\dnu,\\dlam)$ we have a step direction.  To choose the step length $0<s\\leq 1$, we ask that it satisfy two criteria:\n\\begin{enumerate}\n\\item $z+s\\dz$ and $\\lambda+s\\dlam$ are in the interior, i.e.\\ $f_i(z+s\\dz)<0,~\\lambda_i > 0$ for all $i$. \n%\n\\item The norm of the residuals has decreased sufficiently:\n\\[\n\\|r_\\tau(z+s\\dz,\\nu+s\\dnu,\\lambda+s\\dlam)\\|_2 \\leq (1-\\alpha s)\\cdot\\|r_\\tau(z,\\nu,\\lambda) \\|_2,\n\\]\nwhere $\\alpha$ is a user-sprecified parameter (in all of our implementations, we have set $\\alpha=0.01$).\n\\end{enumerate}\nSince the $f_i$ are linear functionals, item $1$ is easily addressed.\nWe choose the maximum step size that just keeps us in the interior.  Let \n\\[\n\\mathcal{I}^+_f = \\{i : \\<c_i,\\dz\\> > 0\\},\\quad\n\\mathcal{I}^-_\\lambda \\{i : \\dlam < 0\\},\n\\]\nand set\n\\[\ns_{\\mathrm{max}} = 0.99\\cdot\\min\\{1,~ \n\\{-f_i(z)/\\<c_i,\\dz\\>,~i\\in\\mathcal{I}^+_f\\},~\n\\{-\\lambda_i/\\dlam_i,~i\\in\\mathcal{I}^-_\\lambda\\}\\}.\n\\] \nThen starting with $s=s_{\\mathrm{max}}$, we check if item $2$ above is satisfied; if not, we set $s^\\prime = \\beta\\cdot s$ and try again.\nWe have taken $\\beta=1/2$ in all of our implementations.\n\nWhen $r_{\\mathrm{dual}}$ and $r_{\\mathrm{pri}}$ are small, the {\\em surrogate duality gap} $\\eta = -f^T\\lambda$ is an approximation to how close a certain $(z,\\nu,\\lambda)$ is to being opitmal \n(i.e.\\ $\\<c_0,z\\>-\\<c_0,z^\\star\\>\\approx\\eta$).  The primal-dual algorithm repeats the Newton iterations described above until $\\eta$ has decreased below a given tolerance.\n\nAlmost all of the computational burden falls on solving \\eqref{eq:pdnewton}.  When the matrix $-C^TF^{-1}\\Lambda C$ is easily invertible (as it is for $(P_1)$), or there are no equality constraints (as in $(P_A),(P_D)$), \\eqref{eq:pdnewton} can be reduced to a symmetric positive definite set of equations.\n\nWhen $N$ and $K$ are large, forming the matrix and then solving the linear system of equations in \\eqref{eq:pdnewton} is infeasible.  However, if fast algorithms exist for applying $C,C^T$ and $A_0,A_0^T$, we can use a ``matrix free'' solver such as Conjugate Gradients.  CG is iterative, requiring a few hundred applications of \nthe constraint matrices (roughly speaking) to get an accurate solution.  A CG solver (based on the very nice exposition in \\cite{shewchuk94in}) is included with the \\packname package. \n  \nThe implementations of $(P_1),(P_A),(P_D)$ are nearly identical, save for the calculation of the Newton step.  In the Appendix, we derive the Newton step for each of these problems using notation mirroring that used in the actual MATLAB code.\n  \n%-------------------------------------------------------------------------------\n\\subsection{A log-barrier algorithm for SOCPs}\n\\label{sec:logbarrier}\n\nAlthough primal-dual techniques exist for solving second-order cone programs \n(see \\cite{alizadeh03se,lobo98ap}), their implementation is not quite as straightforward as in the LP case.  Instead, we have implemented each of the SOCP recovery problems using a {\\em log-barrier method}.  
The log-barrier method, for which we will again closely follow the generic (but effective) algorithm described in \\cite[Chap. 11]{boyd04co}, is conceptually more straightforward than the primal-dual method  described above, but at its core is again solving for a series of Newton steps.\n\nEach of $(P_2),(TV_1),(TV_2),(TV_D)$ can be written in the form\n\\begin{align}\n\\nonumber\n\\min_z~\\<c_0,z\\> \\quad\\text{subject~to}\\quad A_0z & = b \\\\\n\\label{eq:socp}\nf_i(z) &\\leq 0,~~i=1,\\ldots,m\n\\end{align}\nwhere each $f_i$ describes either a constraint which is linear\n\\[\nf_i = \\<c_i,z\\> + d_i\n\\]\nor second-order conic\n\\[\nf_i(z) = \\frac{1}{2}\\left(\\|A_i z\\|_2^2 - (\\<c_i,z\\> + d_i)^2\\right)\n\\]\n(the $A_i$ are matrices, the $c_i$ are vectors, and the $d_i$ are scalars). \n\n\nThe standard log-barrier method transforms \\eqref{eq:socp}\ninto a series of linearly constrained programs:\n\\begin{equation}\n\\label{eq:logbarrier} \n\\min_z ~ \\<c_0,z\\> + \\frac{1}{\\tau^k} \\sum_{i}\n-\\log(-f_i(z)) \\quad\\text{subject~to}\\quad A_0 z=b,\n\\end{equation}\nwhere $\\tau^k > \\tau^{k-1}$.  The inequality constraints have been incorporated into the functional via a penalty function\\footnote{The choice of $-\\log(-x)$ for the barrier function is not arbitrary, it has a property (termed {\\em self-concordance}) that is very important for quick convergence of \\eqref{eq:logbarrier} to \\eqref{eq:socp} both in theory and in practice (see the very nice exposition in \\cite{renegar01ma}).} \nwhich is infinite when the constraint is violated (or even met exactly), and smooth elsewhere.  As $\\tau^k$ gets large, the solution $z^k$ to \\eqref{eq:logbarrier} approaches the solution $z^\\star$ to \\eqref{eq:socp}: \nit can be shown that $\\<c_0,z^k\\> - \\<c_0,z^\\star\\> < m/\\tau^k$, i.e.\\ we are within $m/\\tau^k$ of the optimal value after iteration $k$ ($m/\\tau^k$ is called the {\\em duality gap}).\nThe idea here is that each of the smooth subproblems can be solved to fairly high accuracy with just a few iterations of Newton's method, especially\nsince we can use the solution $z^k$ as a starting point for subproblem $k+1$.\n\nAt log-barrier iteration $k$, Newton's method (which is again iterative) proceeds by forming a series of quadratic approximations to \\eqref{eq:logbarrier}, and minimizing each by solving a system of equations (again, we might need to modify the step length to stay in the interior).  
The quadratic approximation of the functional \n\\[\nf_0(z) = \\<c_0,z\\> + \\frac{1}{\\tau}\\sum_i -\\log(-f_i(z))\n\\]\nin \\eqref{eq:logbarrier} around a point $z$ is given by\n\\[\nf_0(z+\\dz) ~\\approx~ z + \\<g_z,\\dz\\> + \\frac{1}{2}\\<H_z \\dz,\\dz\\> ~:=~ q(z+\\dz),\n\\]\nwhere $g_z$ is the gradient\n\\[\ng_z = c_0 + \\frac{1}{\\tau}\\sum_i \\frac{1}{-f_i(z)}\\grad f_i(z)\n\\]\nand $H_z$ is the Hessian matrix\n\\[\nH_z ~=~ \\frac{1}{\\tau}\\sum_i \\frac{1}{f_i(z)^2} \\grad f_i(z) (\\grad f_i(z))^T ~+ ~\n\\frac{1}{\\tau}\\sum_i \\frac{1}{-f_i(z)}\\grad^2 f_i(z).\n\\]\nGiven that $z$ is feasible (that $A_0 z=b$, in particular), the $\\dz$ that minimizes $q(z+\\dz)$ subject to $A_0 z=b$ is the solution to the set of linear equations\n\\begin{equation}\n\\label{eq:lbnewton}\n\\tau \\bpm H_z & A_0^T \\\\ A_0 & \\vzero \\epm \\bpm \\dz \\\\ v \\epm= -\\tau g_z.\n\\end{equation}\n(The vector $v$, which can be interpreted as the Lagrange multipliers for the quality constraints in the quadratic minimization problem, is not directly used.)\n\nIn all of the recovery problems below with the exception of $(TV_1)$, there are no equality constraints ($A_0=\\vzero$).  In these cases, the system \\eqref{eq:lbnewton} is symmetric positive definite, and thus can be solved using CG when the problem is ``large scale''.  For the $(TV_1)$ problem, we use the SYMMLQ algorithm (which is similar to CG, and works on symmetric but indefinite systems, see\\cite{paige75so}). \n\nWith $\\dz$ in hand, we have the Newton step direction.  The step length $s\\leq 1$ is chosen so that \n\\begin{enumerate}\n%\n\\item $f_i(z+s\\dz) < 0$ for all $i=1,\\ldots,m$,\n%\n\\item The functional has decreased suffiently:\n\\[\nf_0(z+s\\dz) < f_0(z) + \\alpha s\\dz \\<g_z,\\dz\\>,\n\\]\nwhere $\\alpha$ is a user-specified parameter (each of the implementations below uses $\\alpha=0.01$).  This requirement basically states that the decrease must be within a certain percentage of that predicted by the linear model at $z$.\n%\n\\end{enumerate}\nAs before, we start with $s=1$, and decrease by multiples of $\\beta$ until both these conditions are satisfied (all implementations use $\\beta = 1/2$).\n\n\nThe complete log-barrier implementation for each problem follows the outline:\n\\begin{enumerate}\n\\item Inputs: a feasible starting point $z^0$, a tolerance $\\eta$, and parameters $\\mu$ and an initial $\\tau^1$.  Set $k=1$.\n\\item Solve \\eqref{eq:logbarrier} via Newton's method (followed by the backtracking line search), using $z^{k-1}$ as an initial point.  Call the solution $z^k$.\n\\item If $m/\\tau^k < \\eta$, terminate and return $z^k$.\n\\item Else, set $\\tau^{k+1} = \\mu\\tau^k,~k=k+1$ and go to step 2.\n\\end{enumerate}\nIn fact, we can calculate in advance how many iterations the log-barrier algorithm will need:\n\\[\n\\mathrm{barrier~iterations} = \\left\\lceil \\frac{\\log m - \\log\\eta -\\log\\tau^1}{\\log\\mu}\\right\\rceil.\n\\]\nThe final issue is the selection of $\\tau^1$.  
Our implementation chooses $\\tau^1$ conservatively; it is set so that the duality gap $m/\\tau^1$ after the first iteration is equal to $\\<c_0,z^0\\>$.\n\nIn Appendix, we explicitly derive the equations for the Newton step for each of $(P_2),(TV_1),(TV_2),(TV_D)$, again using notation that mirrors the variable names in the code.\n\n\n%-------------------------------------------------------------------------------\n\\section{Examples}\n\\label{sec:examples}\n\nTo illustrate how to use the code, the \\packname package includes m-files for solving specific instances of each of the above problems (these end in ``\\texttt{\\_example.m}'' in the main directory).\n\n\\subsection{$\\ell_1$ with equality constraints}\n\nWe will begin by going through \\texttt{l1eq\\_example.m} in detail.  This m-file creates a sparse signal, takes a limited number of measurements of that signal, and recovers the signal exactly by solving $(P_1)$.  The first part of the procedure is for the most part self-explainatory:\n\\begin{verbatim}\n% put key subdirectories in path if not already there \npath(path, './Optimization'); \npath(path, './Data');\n\n% load random states for repeatable experiments \nload RandomStates \nrand('state', rand_state);\nrandn('state', randn_state);\n\n% signal length\nN = 512;\n% number of spikes in the signal\nT = 20;\n% number of observations to make\nK = 120;\n\n% random +/- 1 signal\nx = zeros(N,1);\nq = randperm(N);\nx(q(1:T)) = sign(randn(T,1));\n\\end{verbatim}\nWe add the 'Optimization' directory (where the interior point solvers reside) and the 'Data' directories to the path.  The file \\texttt{RandomStates.m} contains two variables: \\texttt{rand\\_state} and \\texttt{randn\\_state}, which we use to set the states of the random number generators on the next two lines (we want this to be a ``repeatable experiment'').  The next few lines set up the problem: a length $512$ signal that contains $20$ spikes is created by choosing $20$ locations at random and then putting $\\pm 1$ at these locations.  The original signal is shown in Figure~\\ref{fig:l1eqexample}(a).  The next few lines:\n\\begin{verbatim}\n% measurement matrix\ndisp('Creating measurment matrix...');\nA = randn(K,N);\nA = orth(A')';\ndisp('Done.');\n\t\n% observations\ny = A*x;\n\n% initial guess = min energy\nx0 = A'*y;\n\\end{verbatim}\ncreate a measurement ensemble by first creating a $K\\times N$ matrix with iid Gaussian entries, and then orthogonalizing the rows.  The measurements \\texttt{y} are taken, \nand the ``minimum energy'' solution \\texttt{x0} is calculated (\\texttt{x0}, which is shown in Figure~\\ref{fig:l1eqexample} is the vector in $\\{x: Ax=y\\}$ that is closest to the origin).  Finally, we recover the signal with:\n\\begin{verbatim}\n% solve the LP\ntic\nxp = l1eq_pd(x0, A, [], y, 1e-3);\ntoc\n\\end{verbatim}\nThe function \\texttt{l1eq\\_pd.m} (found in the 'Optimization' subdirectory) implements the primal-dual algorithm presented in Section~\\ref{sec:primaldual}; we are sending it our initial guess \\texttt{x0} for the solution, the measurement matrix (the third argument, which is used to specify the transpose of the measurement matrix, is unnecessary here --- and hence left empty --- since we are providing \\texttt{A} explicitly), the measurements, and the precision to which we want the problem solved (\\texttt{l1eq\\_pd} will terminate when the surrogate duality gap is below $10^{-3}$).  
Running the example file at the MATLAB prompt, we have the following output:\n{\\tiny\n\\begin{verbatim}\n>> l1eq_example\nCreating measurment matrix...\nDone.\nIteration = 1, tau = 1.921e+02, Primal = 5.272e+01, PDGap = 5.329e+01, Dual res = 9.898e+00, Primal res = 1.466e-14\n                  H11p condition number = 1.122e-02\nIteration = 2, tau = 3.311e+02, Primal = 4.383e+01, PDGap = 3.093e+01, Dual res = 5.009e+00, Primal res = 7.432e-15\n                  H11p condition number = 2.071e-02\nIteration = 3, tau = 5.271e+02, Primal = 3.690e+01, PDGap = 1.943e+01, Dual res = 2.862e+00, Primal res = 1.820e-14\n                  H11p condition number = 2.574e-04\nIteration = 4, tau = 7.488e+02, Primal = 3.272e+01, PDGap = 1.368e+01, Dual res = 1.902e+00, Primal res = 1.524e-14\n                  H11p condition number = 8.140e-05\nIteration = 5, tau = 9.731e+02, Primal = 2.999e+01, PDGap = 1.052e+01, Dual res = 1.409e+00, Primal res = 1.380e-14\n                  H11p condition number = 5.671e-05\nIteration = 6, tau = 1.965e+03, Primal = 2.509e+01, PDGap = 5.210e+00, Dual res = 6.020e-01, Primal res = 4.071e-14\n                  H11p condition number = 2.054e-05\nIteration = 7, tau = 1.583e+04, Primal = 2.064e+01, PDGap = 6.467e-01, Dual res = 6.020e-03, Primal res = 3.126e-13\n                  H11p condition number = 1.333e-06\nIteration = 8, tau = 1.450e+05, Primal = 2.007e+01, PDGap = 7.062e-02, Dual res = 6.020e-05, Primal res = 4.711e-13\n                  H11p condition number = 1.187e-07\nIteration = 9, tau = 1.330e+06, Primal = 2.001e+01, PDGap = 7.697e-03, Dual res = 6.020e-07, Primal res = 2.907e-12\n                  H11p condition number = 3.130e-09\nIteration = 10, tau = 1.220e+07, Primal = 2.000e+01, PDGap = 8.390e-04, Dual res = 6.020e-09, Primal res = 1.947e-11\n                  H11p condition number = 3.979e-11\nElapsed time is 0.141270 seconds.    \n\\end{verbatim}\n}\nThe recovered signal \\texttt{xp} is shown in Figure~\\ref{fig:l1eqexample}(c).  The signal is recovered to fairly high accuracy:\n\\begin{verbatim}\n>> norm(xp-x)\n\nans =\n\n   8.9647e-05\n   \n\\end{verbatim}\n\n%%%\n\\begin{figure}\n\\centerline{\n\\begin{tabular}{ccc}\n\\includegraphics[height=1.5in]{Figs/l1eqexample_signal.pdf} & \n\\includegraphics[height=1.5in]{Figs/l1eqexample_minl2.pdf}& \n\\includegraphics[height=1.5in]{Figs/l1eqexample_recovered.pdf} \\\\\n(a) Original & (b) Minimum energy reconstruction & (c) Recovered\n\\end{tabular}\n}\n\\caption{\\small\\sl 1D recovery experiment for $\\ell_1$ minimization with equality constraints.  (a) Original length 512 signal \\texttt{x} consisting of 20 spikes.  (b) Minimum energy (linear) reconstruction \\texttt{x0}.  (c) Minimum $\\ell_1$ reconstruction \\texttt{xp}.}\n\\label{fig:l1eqexample}\n\\end{figure}\n%%%\n\n\n\\subsection{Phantom reconstruction}\n\nA large scale example is given in \\texttt{tveq\\_phantom\\_example.m}.  This files recreates the phantom reconstruction experiment first published in \\cite{candes04ro}.  The $256\\times 256$ Shepp-Logan phantom, shown in Figure~\\ref{fig:phantom}(a), is measured at $K=5481$ locations in the 2D Fourier plane; the sampling pattern is shown in Figure~\\ref{fig:phantom}(b).  
The image is then reconstructed exactly using $(TV_1)$.\n\nThe star-shaped Fourier-domain sampling pattern is created with\n\\begin{verbatim}\n% number of radial lines in the Fourier domain\nL = 22;\n\n% Fourier samples we are given\n[M,Mh,mh,mhi] = LineMask(L,n);\nOMEGA = mhi;\n\\end{verbatim}\nThe auxiliary function \\texttt{LineMask.m} (found in the `Measurements' subdirectory) creates the star-shaped pattern consisting of 22 lines through the origin.  The vector \\texttt{OMEGA} \ncontains the locations of the frequencies used in the sampling pattern.\n\nThis example differs from the previous one in that the code operates in {\\em large-scale} mode.  The measurement matrix in this example is $5481\\times 65536$, making the system \\eqref{eq:lbnewton} far too large to solve (or even store) explicitly.  (In fact, the measurment matrix itself would require almost 3 gigabytes of memory if stored in double precision.)  Instead of creating the measurement matrix explicitly, we provide {\\em function handles} that take a vector $x$, and return $Ax$.  As discussed above, the Newton steps are solved for using an implicit algorithm.\n\nTo create the implicit matrix, we use the function handles\n\\begin{verbatim}\nA = @(z) A_fhp(z, OMEGA);\nAt = @(z) At_fhp(z, OMEGA, n);\n\\end{verbatim}\nThe function \\texttt{A\\_fhp.m} takes a length $N$ vector (we treat $n\\times n$ images as $N:=n^2$ vectors), and returns samples on the $K$ frequencies.  (Actually, since the underlying image is real, \\texttt{A\\_fhp.m} return the real and imaginary parts of the 2D FFT on the upper half-plane of the domain shown in Figure~\\ref{fig:phantom}(b).)  \n\nTo solve $(TV_1)$, we call\n\\begin{verbatim}\nxp = tveq_logbarrier(xbp, A, At, y, 1e-1, 2, 1e-8, 600);\n\\end{verbatim}\nThe variable \\texttt{xbp} is the initial guess (which is again the minimal energy reconstruction shown in Figure~\\ref{fig:phantom}(c)), \\texttt{y} are the measurements, and\\texttt{1e-1} is the desired precision.  The sixth input is the value of $\\mu$ (the amount by which to increase $\\tau^k$ at each iteration; see Section~\\ref{sec:logbarrier}).  The last two inputs are parameters for the large-scale solver used to find the Newton step.  The solvers are iterative, with each iteration requiring one application of \\texttt{A} and one application of \\texttt{At}.  The seventh and eighth arguments above state that we want the solver to iterate until the solution has precision $10^{-8}$ (that is, it finds a $z$ such that $\\|Hz-g\\|_2/\\|g\\|_2 \\leq 10^{-8}$), or it has reached 600 iterations.\n\nThe recovered phantom is shown in Figure~\\ref{fig:phantom}(d).  \nWe have $\\|X_{TV}-X\\|_2/\\|X\\|_2 \\approx 8\\cdot 10^{-3}$. \n\n\n%%%\n\\begin{figure}\n\\centerline{\n\\begin{tabular}{cccc}\n\\includegraphics[height=1.5in]{Figs/phantom_orig} &\n\\includegraphics[height=1.5in]{Figs/phantom_sampling} &\n\\includegraphics[height=1.5in]{Figs/phantom_backproj} &\n\\includegraphics[height=1.5in]{Figs/phantom_tv} \\\\\n{\\small (a) Phantom} & {\\small (b) Sampling pattern} &\n{\\small (c) Min energy} & {\\small (d) min-TV reconstruction}\n\\end{tabular}\n}\n\\caption{\\small\\sl Phantom recovery experiment.}\n\\label{fig:phantom}\n\\end{figure}\n%%%\n\n%-------------------------------------------------------------------------------\n\\subsection{Optimization routines}\n\nWe include a brief description of each of the main optimization routines (type\n\\texttt{help <function>} in MATLAB for details).  
Each of these m-files is found in the \\texttt{Optimization} subdirectory.\n\n\\begin{tabular}{|p{1.5 in}m{4 in}|} \\hline\n%\n\\texttt{cgsolve} & Solves $Ax=b$, where $A$ is symmetric positive definite, using the Conjugate Gradient method. \\\\[2mm]\\hline\n%\n\\texttt{l1dantzig\\_pd} & Solves $(P_D)$ (the Dantzig selector) using a primal-dual algorithm. \\\\[2mm]\\hline\n%\n\\texttt{l1decode\\_pd} & Solves the norm approximation problem $(P_A)$ (for decoding via linear programming) using a primal-dual algorithm. \\\\[2mm]\\hline\n%\n\\texttt{l1eq\\_pd} & Solves the standard Basis Pursuit problem $(P_1)$ using a primal-dual algorithm.  \\\\[2mm]\\hline\n%\n\\texttt{l1qc\\_logbarrier} & Barrier (``outer'') iterations for solving quadratically constrained $\\ell_1$ minimization $(P_2)$. \\\\[2mm]\\hline\n%\n\\texttt {l1qc\\_newton} & Newton (``inner'') iterations for solving quadratically constrained $\\ell_1$ minimization $(P_2)$. \\\\[2mm]\\hline\n%\n\\texttt{tvdantzig\\_logbarrier} & Barrier iterations for solving the TV Dantzig selector $(TV_D)$. \\\\[2mm]\\hline\n%\n\\texttt{tvdantzig\\_newton} & Newton iterations for $(TV_D)$. \\\\[2mm]\\hline\n%\n\\texttt{tveq\\_logbarrier} & Barrier iterations for equality constrained TV minimizaiton $(TV_1)$. \\\\[2mm]\\hline\n%\n\\texttt{tveq\\_newton} & Newton iterations for $(TV_1)$. \\\\[2mm]\\hline\n%\n\\texttt{tvqc\\_logbarrier} & Barrier iterations for quadratically constrained TV minimization $(TV_2)$. \\\\[2mm]\\hline\n%\n\\texttt{tvqc\\_newton} & Newton iterations for $(TV_2)$. \\\\[2mm]\\hline\n%\n\n\\end{tabular}\n\n%-------------------------------------------------------------------------------\n\\section{Error messages}\n\nHere we briefly discuss each of the error messages that the $\\ell_1$-{\\sc magic} may produce.\n\n\\begin{itemize}\n%\n\\item \\texttt{Matrix ill-conditioned.  Returning previous iterate.}  \nThis error can occur when the code is running in {\\em small-scale} mode; that is, the matrix $A$ is provided explicitly.  The error message is produced when the condition number of the \nlinear system we need to solve to find the step direction (i.e.\\ \\eqref{eq:pdnewton} for the linear programs, and \\eqref{eq:lbnewton} for the SOCPs) has an estimated condition number of less than $10^{-14}$.  \n\nThis error most commonly occurs during the last iterations of the primal-dual or log-barrier algorithms.  While it means that the solution is not within the tolerance specified (by the primal-dual gap), in practice it is usually pretty close.\n%\n\\item  \\texttt{Cannot solve system.  Returning previous iterate.}\nThis error is the large-scale analog to the above.  The error message is produced when the residual produced by the conjugate gradients algorithm was above $1/2$; essentially this means that CG has not solved the system in any meaningful way.  Again, this error typically occurs in the late stages of the optimization algorithm, and is a symptom of the system being ill-conditioned.\n%\n\\item \\texttt{Stuck backtracking, returning last iterate.}\nThis error occurs when the algorithm, after computing the step direction, cannot find a step size small enough that decreases the objective.  It is generally occurs in large-scale mode, and is a symptom of CG not solving for the step direction to sufficient precision (if the system is solved perfectly, a small enough step size will always be found).  
Again, this will typically occur in the late stages of the optimization algorithm.\n%\n\\item \\texttt{Starting point infeasible; using x0 = At*inv(AAt)*y.}\nEach of the optimization programs expects an initial guess which is {\\em feasible} (obeys the constraints).  If the \\texttt{x0} provided is not, this message is produced, and the algorithm proceeds using the least-squares starting point $x_0 = A^T(AA^T)^{-1}b$.\n%\n\\end{itemize}\n\n%-------------------------------------------------------------------------------\n\n\\newpage\n\\section*{Appendix}\n\\appendix\n\n%-------------------------------------------------------------------------------\n\\section{$\\ell_1$ minimization with equality constraints}\n\nWhen $x$, $A$ and $b$ are real, then $(P_1)$ can be recast as the linear program\n\\begin{align*}\n\\min_{x,u}~\\sum_i u_i \\quad\\text{subject~to}\\quad\n x_i - u_i  & \\leq 0 \\\\[-4mm]\n -x_i - u_i & \\leq 0, \\\\\n Ax & = b\n\\end{align*}\nwhich can be solved using the standard primal-dual algorithm outlined in Section~\\ref{sec:primaldual}\n(again, see \\cite[Chap.11]{boyd04co} for a full discussion).\nSet\n\\begin{eqnarray*}\nf_{u_1;i} & := & x_i - u_i \\\\\nf_{u_2;i} & := & -x_i - u_i,\n\\end{eqnarray*}\nwith $\\lambda_{u_1;i},\\lambda_{u_2;i}$ the corresponding dual variables, \nand let $f_{u_1}$ be the vector $(f_{u_1;1}~~\\ldots~~f_{u_1;N})^T$ (and likewise for $f_{u_2},\\lambda_{u_1},\\lambda_{u_2}$).\nNote that\n\\[\n\\grad f_{u_1;i} = \\bpm \\delta_i \\\\ -\\delta_i \\epm,\n\\quad\n\\grad f_{u_2;i} = \\bpm -\\delta_i \\\\ -\\delta_i \\epm,\n\\quad\n\\grad^2 f_{u_1;i} = 0,\n\\quad\n\\grad^2 f_{u_2;i} = 0,\n\\]\nwhere $\\delta_i$ is the standard basis vector for component $i$.  \nThus at a point $(x,u; v,\\lambda_{u_1},\\lambda_{u_2})$, the central and dual \nresiduals are \n\\begin{eqnarray*}\nr_{\\mathrm{cent}} & = & \\bpm -\\Lambda_{u_1} f_{u_1} \\\\ -\\Lambda_{u_2} f_{u_2} \\epm \n- (1/\\tau)\\mathbf{1}, \\\\\nr_{\\mathrm{dual}} & = & \\bpm \\lambda_{u_1}-\\lambda_{u_2} + A^Tv \\\\ \n\\mathbf{1} - \\lambda_{u_1}-\\lambda_{u_2} \\epm,\n\\end{eqnarray*}\nand the Newton step \n\\eqref{eq:pdnewton} is given by:\n\\[\n\\bpm \\Sigma_1 & \\Sigma_2 & A^T \\\\ \\Sigma_2 & \\Sigma_1 & 0 \\\\ A & 0 & 0 \\epm\n\\bpm \\dx \\\\ \\du \\\\ \\dv \\epm =\n\\bpm w_1 \\\\ w_2 \\\\ w_3 \\epm :=\n\\bpm (-1/\\tau)\\cdot(-f^{-1}_{u_1} + f^{-1}_{u_2}) - A^Tv \\\\\n-\\mathbf{1} - (1/\\tau)\\cdot(f^{-1}_{u_1} + f^{-1}_{u_2}) \\\\\nb - Ax \\epm,\n\\]\nwith\n\\[\n\\Sigma_1 = \\Lambda_{u_1} F^{-1}_{u_1} - \\Lambda_{u_2} F^{-1}_{u_2}, \\quad\n\\Sigma_2 = \\Lambda_{u_1} F^{-1}_{u_1} + \\Lambda_{u_2} F^{-1}_{u_2},\n\\]\n(The $F_\\bullet$, for example, are diagonal matrices with $(F_\\bullet)_{ii} = f_{\\bullet;i}$, and $f^{-1}_{\\bullet;i} = 1/f_{\\bullet;i}$.)\nSetting\n\\[\n\\Sigma_x = \\Sigma_1 - \\Sigma_2^2\\Sigma_1^{-1},\n\\]\nwe can eliminate \n\\begin{eqnarray*}\n\\dx & = & \\Sigma_x^{-1}(w_1 - \\Sigma_2\\Sigma_1^{-1}w_2 - A^T\\dv)\\\\\n\\du & = & \\Sigma_1^{-1}(w_2 - \\Sigma_2\\dx),\n\\end{eqnarray*}\nand solve\n\\[\n-A\\Sigma_x^{-1}A^T\\dv =  w_3 - A(\\Sigma_x^{-1}w_1 - \\Sigma_x^{-1}\\Sigma_2\\Sigma_1^{-1}w_2).\n\\]\nThis is a $K\\times K$ positive definite system of equations, and can be solved using conjugate gradients.\n\nGiven $\\dx,\\du,\\dv$, we calculate the change in the inequality dual variables as in \\eqref{eq:dlambda}:\n\\begin{eqnarray*}\n\\dlam_{u_1}  & = & \\Lambda_{u_1} F^{-1}_{u_1}(-\\dx + \\du) - \\lambda_{u_1}  - (1/\\tau)f^{-1}_{u_1} \\\\\n\\dlam_{u_2} & = & 
\\Lambda_{u_2} F^{-1}_{u_2}(\\dx+\\du) - \\lambda_{u_2} - (1/\\tau)f^{-1}_{u_2}. \n\\end{eqnarray*}\n\n%The line search proceeds exactly as described in Section~\\ref{sec:primaldual}.\n\n%-------------------------------------------------------------------------------\n\\section{$\\ell_1$ norm approximation}\n\\label{sec:l1approx}\n\nThe $\\ell_1$ norm approximation problem $(P_A)$ can also be recast as a linear program:\n\\begin{align*}\n\\min_{x,u}~\\sum_{m=1}^M u_m \\quad\\text{subject~to}\\quad\n Ax - u -y & \\leq 0 \\\\[-4mm]\n -Ax - u + y & \\leq 0,\n\\end{align*}\n(recall that unlike the other 6 problems, here the $M\\times N$ matrix $A$ has more rows than columns).\nFor the primal-dual algorithm, we define\n\\[\nf_{u_1} = A x - u - y,\\quad\nf_{u_2} = -A x - u + y.\n\\]\nGiven a vector of weights $\\sigma\\in\\R^M$,\n\\[\n\\sum_m \\sigma_m\\grad f_{u_1;m} = \n\\bpm A^T\\sigma \\\\ -\\sigma \\epm, \\quad\n\\sum_m \\sigma_m\\grad f_{u_2;m} = \n\\bpm -A^T\\sigma \\\\ -\\sigma \\epm,\n\\]\n\\[\n\\sum_m \\sigma_m\\grad f_{u_1;m}\\grad f_{u_1;m}^T = \n\\bpm A^T\\Sigma A & -A^T\\Sigma \\\\ -\\Sigma A & \\Sigma \\epm,\\quad\n\\sum_m \\sigma_m\\grad f_{u_2;m}\\grad f_{u_2;m}^T = \n\\bpm A^T\\Sigma A & A^T\\Sigma \\\\ \\Sigma A & \\Sigma \\epm.\n\\]\nAt a point $(x,u;\\lambda_{u_1},\\lambda_{u_2})$, the dual residual is\n\\[\nr_{\\mathrm{dual}} = \n\\bpm A^T(\\lambda_{u_1}-\\lambda_{u_2}) \\\\ -\\lambda_{u_1}-\\lambda_{u_2} \\epm, \n\\]\nand the Newton step is the solution to\n\\[\n\\bpm A^T\\Sigma_{11}A & A^T\\Sigma_{12} \\\\ \\Sigma_{12}A &  \\Sigma_{11}\\epm \n\\bpm \\dx \\\\ \\du \\epm = \n\\bpm -(1/\\tau)\\cdot A^T(-f^{-1}_{u_1} + f^{-1}_{u_2}) \\\\ \n-\\mathbf{1} - (1/\\tau)\\cdot(f^{-1}_{u_1} + f^{-1}_{u_2}) \\epm :=\n\\bpm w_1 \\\\ w_2 \\epm\n\\]\nwhere\n\\begin{eqnarray*}\n\\Sigma_{11} & = & -\\Lambda_{u_1} F^{-1}_{u_1} - \\Lambda_{u_2} F^{-1}_{u_2} \\\\\n\\Sigma_{12} & = & \\Lambda_{u_1} F^{-1}_{u_1} - \\Lambda_{u_2} F^{-1}_{u_2}.\n\\end{eqnarray*}\nSetting\n\\[\n\\Sigma_x = \\Sigma_{11} - \\Sigma^2_{12}\\Sigma^{-1}_{11},\n\\]\nwe can eliminate $\\du = \\Sigma_{11}^{-1}(w_2 - \\Sigma_{22}A\\dx)$, and solve\n\\[\nA^T\\Sigma_x A\\dx = w_1 - A^T\\Sigma_{22}\\Sigma^{-1}_{11}w_2\n\\]\nfor $\\dx$.  Again, $A^T\\Sigma_x A$ is a $N\\times N$ symmetric positive definite matrix (it is straightforward to verify that each element on the diagonal of $\\Sigma_x$ will be strictly positive), \nand so the Conjugate Gradients algorithm can be used for large-scale problems. \n\nGiven $\\dx,\\du$, the step directions for the inequality dual variables are given by\n\\begin{eqnarray*}\n\\dlam_{u_1} & = & \n-\\Lambda_{u_1} F^{-1}_{u_1} (A\\dx-\\du) - \\lambda_{u_1} - (1/\\tau)f^{-1}_{u_1} \\\\\n\\dlam_{u_2} & = & \n\\Lambda_{u_2} F^{-1}_{u_2}(A\\dx+\\du) - \\lambda_{u_2} - (1/\\tau)f^{-1}_{u_2}.\n\\end{eqnarray*}\n\n%-------------------------------------------------------------------------------\n\\section{$\\ell_1$ Dantzig selection}\n\\label{sec:l1dantzig}\n\nAn equivalent linear program to $(P_D)$ in the real case is given by:\n\\begin{align*}\n\\min_{x,u}~\\sum_i u_i \\quad\\text{subject~to}\\quad\n x - u & \\leq 0, \\\\[-4mm]\n -x - u & \\leq 0, \\\\\n A^T r - \\epsilon & \\leq 0, \\\\\n -A^T r - \\epsilon & \\leq 0,\n\\end{align*}\nwhere $r = Ax-b$.  
Taking\n\\[\nf_{u_1} = x - u,\\quad\nf_{u_2} = -x -u,\\quad\nf_{\\epsilon_1} = A^T r - \\epsilon,\\quad\nf_{\\epsilon_2} = -A^T r - \\epsilon,\n\\]\nthe residuals at a point \n$(x,u;\\lambda_{u_1},\\lambda_{u_2},\\lambda_{\\epsilon_1},\\lambda_{\\epsilon_2})$, \nthe dual residual is\n\\[\nr_{\\mathrm{dual}} = \n\\bpm \\lambda_{u_1}-\\lambda_{u_2} + A^TA(\\lambda_{\\epsilon_1} - \\lambda_{\\epsilon_2})\n\\\\ \\mathbf{1} - \\lambda_{u_1}-\\lambda_{u_2}\\epm, \n\\]\nand the Newton step is the solution to\n\\[\n\\bpm A^TA\\Sigma_{a}A^TA + \\Sigma_{11} & \\Sigma_{12} \\\\ \n\\Sigma_{12} &  \\Sigma_{11}\\epm \n\\bpm \\dx \\\\ \\du \\epm = \n\\bpm -(1/\\tau)\\cdot (A^TA(-f^{-1}_{\\epsilon_1} + f^{-1}_{\\epsilon_2})) -\nf^{-1}_{u_1} + f^{-1}_{u_2} \\\\ \n-\\mathbf{1} - (1/\\tau)\\cdot(f^{-1}_{u_1} + f^{-1}_{u_2}) \\epm :=\n\\bpm w_1 \\\\ w_2 \\epm\n\\]\nwhere\n\\begin{eqnarray*}\n\\Sigma_{11} & = & -\\Lambda_{u_1}F^{-1}_{u_1} - \\Lambda_{u_2}F^{-1}_{u_2} \\\\\n\\Sigma_{12} & = &  \\Lambda_{u_1}F^{-1}_{u_1} - \\Lambda_{u_2}F^{-1}_{u_2} \\\\\n\\Sigma_a & = & -\\Lambda_{\\epsilon_1}F^{-1}_{\\epsilon_1} - \\Lambda_{\\epsilon_2}F^{-1}_{\\epsilon_2}.\n\\end{eqnarray*}\nAgain setting\n\\[\n\\Sigma_x = \\Sigma_{11} - \\Sigma^2_{12}\\Sigma^{-1}_{11},\n\\]\nwe can eliminate\n\\[\n\\du = \\Sigma^{-1}_{11}(w_2 - \\Sigma_{12}\\dx),\n\\]\nand solve\n\\[\n(A^TA\\Sigma_a A^TA + \\Sigma_x)\\dx = w_1 - \\Sigma_{12}\\Sigma^{-1}_{11}w_2\n\\]\nfor $\\dx$.  As before, the system is symmetric positive definite, and the CG algorithm can be used to solve it.\n\nGiven $\\dx,\\du$, the step directions for the inequality dual variables are given by\n\\begin{eqnarray*}\n\\dlam_{u_1} & = & -\\Lambda_{u_1} F^{-1}_{u_1}(\\dx-\\du) - \\lambda_{u_1} - (1/\\tau)f^{-1}_{u_1} \\\\\n\\dlam_{u_2} & = & -\\Lambda_{u_2} F^{-1}_{u_2}(-\\dx-\\du) - \\lambda_{u_2} - (1/\\tau)f^{-1}_{u_2} \\\\\n\\dlam_{\\epsilon_1} & = & -\\Lambda_{\\epsilon_1} F^{-1}_{\\epsilon_1}(A^TA\\dx) - \n\\lambda_{\\epsilon_1} - (1/\\tau)f^{-1}_{\\epsilon_1} \\\\\n\\dlam_{\\epsilon_2} & = &  -\\Lambda_{\\epsilon_2} F^{-1}_{\\epsilon_2}(-A^TA\\dx) - \n\\lambda_{\\epsilon_2} - (1/\\tau)f^{-1}_{\\epsilon_2}.\n\\end{eqnarray*} \n\n\n\n%-------------------------------------------------------------------------------\n\\section{$\\ell_1$ minimization with quadratic constraints}\n\\label{sec:l1qc}\n\nThe quadractically constrained $\\ell_1$ minimization problem $(P_2)$ can be recast as the second-order cone program\n\\begin{align*}\n\\min_{x,u}~\\sum_i u_i \\quad\\text{subject~to}\\quad\n x - u & \\leq 0, \\\\[-4mm]\n -x - u & \\leq 0, \\\\\n \\frac{1}{2}\\left(\\|Ax-b\\|^2_2 - \\epsilon^2\\right) & \\leq 0.\n\\end{align*}\nTaking\n\\[\nf_{u_1} = x - u,\\quad\nf_{u_2} = -x -u,\\quad\nf_\\epsilon = \\frac{1}{2}\\left(\\|Ax-b\\|^2_2 - \\epsilon^2\\right),\n\\]\nwe can write the Newton step (as in \\eqref{eq:lbnewton}) at a point $(x,u)$ for a given $\\tau$ as\n\\[\n\\bpm \\Sigma_{11} - f_\\epsilon^{-1}A^TA + f_\\epsilon^{-2}A^Trr^TA & \\Sigma_{12} \\\\\n\\Sigma_{12} & \\Sigma_{11} \\epm\n\\bpm \\dx \\\\ \\du \\epm = \n\\bpm\nf_{u_1}^{-1} - f_{u_2}^{-1} + f_\\epsilon^{-1}A^Tr \\\\\n-\\tau\\mathbf{1} - f_{u_1}^{-1} - f_{u_2}^{-1}\n\\epm :=\n\\bpm w_1 \\\\ w_2 \\epm\n\\]\nwhere $r = Ax-b$, and\n\\begin{eqnarray*}\n\\Sigma_{11} & = & F_{u_1}^{-2} + F_{u_2}^{-2} \\\\\n\\Sigma_{12} & = & -F_{u_1}^{-2} + F_{u_2}^{-2}.\n\\end{eqnarray*}\nAs before, we set\n\\[\n\\Sigma_x = \\Sigma_{11} - \\Sigma_{12}^2\\Sigma_{11}^{-1}\n\\]\nand eliminate $\\du$\n\\[\n\\du = \\Sigma_{11}^{-1}(w_2 - 
\\Sigma_{12}\\dx),\n\\]\nleaving us with the reduced system\n\\[\n(\\Sigma_x - f_\\epsilon^{-1}A^TA + f_\\epsilon^{-2}A^Trr^TA)\\dx = \nw_1 - \\Sigma_{12}\\Sigma_{11}^{-1}w_2\n\\]\nwhich is symmetric positive definite and can be solved using CG.  \n\n%-------------------------------------------------------------------------------\n\\section{Total variation minimization with equality constraints}\n\\label{sec:tveq}\n\nThe equality constrained TV minimization problem\n\\[\n\\min_{x}~\\operatorname{TV}(x)\\quad\\text{subject~to}\\quad Ax=b,\n\\]\ncan be rewritten as the SOCP\n\\begin{align*}\n\\min_{t,x}~\\sum_{ij} t_{ij} \\quad\\text{s.t.}\\quad \\|D_{ij}x\\|_2 & \\leq t_{ij} \\\\[-2mm]\n Ax &= b. \n\\end{align*}\nDefining the inequality functions\n\\begin{equation}\n\\label{eq:ftij}\nf_{t_{ij}} = \\frac{1}{2}\\left(\\|D_{ij}\\|_2^2 - t^2_{ij}\\right)\\quad i,j=1,\\ldots,n\n\\end{equation}\nwe have \n\\[\n\\grad f_{t_{ij}} =\n\\left( \\begin{array}{c} D^T_{ij}D_{ij} x \\\\ -t_{ij}\\delta_{ij} \\end{array}\\right)\n\\]\n\\[\n\\grad f_{t_{ij}} \\grad f_{t_{ij}}^T =\n\\bpm D^T_{ij}D_{ij} xx^TD^T_{ij}D_{ij} & \n-t_{ij}D^T_{ij}D_{ij}x\\delta_{ij}^T \\\\\n-t_{ij}\\delta_{ij}x^T D^T_{ij}D_{ij} & \nt_{ij}^2\\delta_{ij}\\delta_{ij}^T \\\\\n\\epm,\\quad\n\\grad^2 f_{t_{ij}} = \n\\bpm D_{ij}^*D_{ij} & \\vzero \\\\\n\\vzero & -\\delta_{ij}\\delta_{ij}^T \n\\epm,\n\\]\nwhere $\\delta_{ij}$ is the Kronecker vector that is $1$ in entry $ij$ and zero elsewhere.\nFor future reference:\n\\[\n\\sum_{ij}\\sigma_{ij} \\grad f_{t_{ij}} = \n\\bpm D_h^T\\Sigma D_h x + D_v^T\\Sigma D_v x \\\\ -\\sigma t\\epm,\n\\]\n\\[\n\\sum_{ij} \\sigma_{ij} \\grad f_{t_{ij}} \\grad f_{t_{ij}}^T = \n\\left(\\begin{array}{cc}\nB\\Sigma B^T & -BT\\Sigma  \\\\ -\\Sigma TB^T & \\Sigma T^2 \n\\end{array}\\right),\n\\]\n\\[\n\\sum_{ij} \\sigma_{ij} \\grad^2 f_{t_{ij}} = \n\\left(\\begin{array}{cc}\nD_h^T \\Sigma D_h + D_v^T \\Sigma D_v & \\vzero \\\\ \\vzero & -\\Sigma\n\\end{array}\\right)\n\\]\nwhere $\\Sigma = \\diag(\\{\\sigma_{ij}\\})$, $T = \\diag(t)$, $D_h$ has the $D_{h;ij}$ as rows (and likewise for $D_v$), and  $B$ is a matrix that depends on $x$:\n\\[\nB = D_h^T\\Sigma_{\\partial h} + D_v^T\\Sigma_{\\partial v}.\n\\]\nwith $\\Sigma_{\\partial h} = \\diag(D_h x),~ \\Sigma_{\\partial v} = \\diag(D_v x)$.\n\n\nThe Newton system \\eqref{eq:lbnewton} for the log-barrier algorithm is then\n\\[\n\\bpm H_{11} & B\\Sigma_{12} & A^T \\\\ \n\\Sigma_{12}B^T & \\Sigma_{22} & \\vzero \\\\\nA & \\vzero & \\vzero \\epm\n\\bpm \\dx \\\\ \\dt \\\\ \\dv \\epm ~=~\n\\bpm  D_h^T F_t^{-1} D_h x + D_v^T F_t^{-1} D_vx \\\\ \n-\\tau\\mathbf{1} - F_{t}^{-1}t \\\\ \n\\vzero \\epm ~:= ~\n\\bpm w_1 \\\\ w_2 \\\\ \\vzero \\epm,\n\\]\nwhere \n\\[\nH_{11} = D_h^T(-F_t^{-1})D_h ~+~ D_v^T(-F_t^{-1})D_v ~+~ B F_t^{-2}B^T.\n\\]\nEliminating $\\dt$\n\\begin{eqnarray*}\n\\dt & = & \\Sigma_{22}^{-1}(w_2 - \\Sigma_{12}B^T\\dx) \\\\\n & = & \\Sigma_{22}^{-1}(w_2 - \\Sigma_{12}\\Sigma_{\\partial h}D_h\\dx - \\Sigma_{12}\\Sigma_{\\partial v}D_v\\dx),\n\\end{eqnarray*}\nthe reduced $(N+K)\\times (N+K)$ system is\n\\begin{equation}\n\\label{eq:tveqsys}\n\\bpm H'_{11} & A^T \\\\ A & \\vzero \\epm\n\\bpm \\dx \\\\ \\dv \\epm = \n\\bpm w'_1 \\\\ \\vzero \\epm\n\\end{equation}\nwith\n\\begin{align*}\nH'_{11} = ~& H_{11} - B\\Sigma^2_{12}\\Sigma_{22}^{-1}B^T \\\\\n= ~& D_h^T(\\Sigma_b\\Sigma^2_{\\partial h}-F_t^{-1})D_h ~+~\nD_v^T(\\Sigma_b\\Sigma^2_{\\partial v}-F_t^{-1})D_v ~+~\\\\\n~& D_h^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_v 
~+~\nD_v^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_h\\\\\nw'_1 = ~& w_1 - B\\Sigma_{12}\\Sigma_{22}^{-1}w_2 \\\\\n = ~& w_1 - (D_h^T\\Sigma_{\\partial h} + D_v^T\\Sigma_{\\partial v} )\\Sigma_{12}\\Sigma_{22}^{-1}w_2 \\\\\n\\Sigma_b = ~& F_t^{-2} - \\Sigma_{22}^{-1}\\Sigma_{12}^2. \n\\end{align*}\n\nThe system of equations \\eqref{eq:tveqsys} is symmetric, but not positive definite.\nNote that $D_h$ and $D_v$ are (very) sparse matrices, and hence can be stored and applied very efficiently.  This allows us to again solve the system above using an iterative method such as SYMMLQ \\cite{paige75so}.\n\n\n\n%-------------------------------------------------------------------------------\n\\section{Total variation minimization with quadratic constraints}\n\\label{sec:tvqc}\n\nWe can rewrite $(TV_2)$ as the SOCP\n\\begin{align*}\n\\min_{x,t}~\\sum_{ij} t_{ij}\n\\quad\\text{subject~to}\\quad\n& \\|D_{ij} x\\|_2\\leq t_{ij}, ~~ i,j=1,\\ldots,n \\\\[-4mm]\n& \\|Ax - b\\|_2 \\leq \\epsilon\n\\end{align*}\nwhere $D_{ij}$ is as in \\eqref{eq:Dij}.\n%\nTaking $f_{t_{ij}}$ as in \\eqref{eq:ftij} and \n\\[\nf_\\epsilon = \\frac{1}{2}\\left( \\|Ax-b\\|^2_2 - \\epsilon^2\\right),\n\\]\nwith\n\\[\n\\grad f_\\epsilon = \\bpm A^Tr \\\\ \\vzero \\epm,\\quad \n\\grad f_\\epsilon \\grad f_\\epsilon^T =\n\\bpm  A^Trr^TA & \\vzero \\\\ \\vzero & \\vzero \\epm, \\quad\n\\grad^2 f_\\epsilon = \\bpm A^*A & \\vzero \\\\\n\\vzero & \\vzero \\epm\n\\]\nwhere $r = Ax-b$.\n\n\n Also,\n\\[\n\\grad^2 f_{t_{ij}} = \\left(\\begin{array}{cc} D_{ij}^*D_{ij} & \\vzero \\\\\n\\vzero & -\\delta_{ij}\\delta_{ij}^T \\end{array}\\right)\n\\quad\n\\grad^2 f_\\epsilon = \\left(\\begin{array}{cc} A^*A & \\vzero \\\\\n\\vzero & \\vzero \\end{array}\\right).\n\\]\n\nThe Newton system is similar to that in equality constraints case:\n\\[ \n\\bpm H_{11} & B\\Sigma_{12} \\\\\n\\Sigma_{12}B^T & \\Sigma_{22} \\epm\n\\bpm \\dx \\\\ \\dt \\epm =\n\\bpm D_h^T F_t^{-1}D_h x + D_v^TF_t^{-1}D_v x + f_\\epsilon^{-1}A^Tr \\\\\n-\\tau\\mathbf{1} - tf^{-1}_t \\epm :=\n\\bpm w_1 \\\\ w_2 \\epm.\n\\]\nwhere $(tf^{-1}_t)_{ij} = t_{ij}/f_{t_{ij}}$, and \n\\begin{align*}\nH_{11} = ~& D_h^T(-F_t^{-1})D_h ~+~ D_v^T(-F_t^{-1})D_v ~+~ B F_t^{-2}B^T ~- \\\\\n & f_\\epsilon^{-1} A^TA ~+~ f_\\epsilon^{-2} A^Trr^TA, \\\\\n\\Sigma_{12} = ~& -TF_t^{-2}, \\\\\n\\Sigma_{22} = ~& F_t^{-1} + F_t^{-2}T^2,\n\\end{align*}\nAgain eliminating $\\dt$\n\\[\n\\dt = \\Sigma_{22}^{-1}(w_2 - \\Sigma_{12}\\Sigma_{\\partial h}D_h\\dx - \\Sigma_{12}\\Sigma_{\\partial v}D_v\\dx),\n\\]\nthe key system is\n\\[\nH'_{11}\\dx  = \nw_1 - (D_h^T\\Sigma_{\\partial h} + D_v^T\\Sigma_{\\partial v} )\\Sigma_{12}\\Sigma_{22}^{-1}w_2\n\\]\nwhere\n\\begin{align*}\nH'_{11} = ~& H_{11} - B\\Sigma^2_{12}\\Sigma_{22}^{-1}B^T \\\\\n= ~& D_h^T(\\Sigma_b\\Sigma^2_{\\partial h}-F_t^{-1})D_h ~+~\nD_v^T(\\Sigma_b\\Sigma^2_{\\partial v}-F_t^{-1})D_v ~+~\\\\\n~& D_h^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_v ~+~\nD_v^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_h ~-~ \\\\\n~& f_\\epsilon^{-1} A^TA ~+~ f_\\epsilon^{-2} A^Trr^TA, \\\\\n\\Sigma_b = ~& F_t^{-2} - \\Sigma_{12}^2\\Sigma_{22}^{-1}.\n\\end{align*}\n\nThe system above is symmetric positive definite, and can be solved with CG.\n\n\n%-------------------------------------------------------------------------------\n\\section{Total variation minimization with bounded residual correlation}\n\\label{sec:tvdantzig}\n\nThe $TV$ Dantzig problem has an equivalent SOCP as well:\n\\begin{align*}\n\\min_{x,t}~\\sum_{ij} 
t_{ij}\n\\quad\\text{subject to}\\quad\\quad\n\\|D_{ij} x\\|_2 & \\leq t_{ij}, ~~ i,j=1,\\ldots,n \\\\[-4mm]\nA^T(Ax-b) -\\epsilon & \\leq 0 \\\\\n-A^T(Ax-b) -\\epsilon & \\leq 0.\n\\end{align*}\n%\nThe inequality constraint functions are\n\\begin{eqnarray*}\nf_{t_{ij}} & = & \\frac{1}{2}\\left( \\|D_{ij}x\\|^2_2 - t_{ij}^2\\right) \\quad i,j=1,\\ldots,n\\\\\nf_{\\epsilon_1} & = & A^T(Ax-b)-\\epsilon, \\\\\nf_{\\epsilon_2} & = & -A^T(Ax-b)-\\epsilon,\n\\end{eqnarray*}\nwith\n\\[\n\\sum_{ij}\\sigma_{ij} \\grad f_{\\epsilon_1;ij} = \n\\bpm A^TA\\sigma \\\\ \\vzero \\epm, \\quad\n\\sum_{ij}\\sigma_{ij} \\grad f_{\\epsilon_2;ij} = \n\\bpm -A^TA\\sigma \\\\ \\vzero \\epm,\n\\]\nand\n\\[\n\\sum_{ij} \\sigma_{ij} \\grad f_{\\epsilon_1;ij} \\grad f_{\\epsilon_1;ij}^T = \n\\sum_{ij} \\sigma_{ij} \\grad f_{\\epsilon_2;ij} \\grad f_{\\epsilon_2;ij}^T =\n\\bpm A^TA\\Sigma A^TA & \\vzero \\\\ \\vzero & \\vzero \\epm.\n\\]\nThus the log barrier Newton system is nearly the same as in the quadratically constrained case:\n\\[\n\\bpm H_{11} & B\\Sigma_{12} \\\\\n\\Sigma_{12}B^T & \\Sigma_{22} \\epm\n\\bpm \\dx \\\\ \\dt \\epm =\n\\bpm D_h^T F_t^{-1}D_h x + D_v^TF_t^{-1}D_v x + \nA^TA(f_{\\epsilon_1}^{-1}-f_{\\epsilon_2}^{-1}) \\\\\n-\\tau\\mathbf{1} - tf^{-1}_t \\epm :=\n\\bpm w_1 \\\\ w_2 \\epm.\n\\]\nwhere\n\\begin{align*}\nH_{11} = ~& D_h^T(-F_t^{-1})D_h ~+~ D_v^T(-F_t^{-1})D_v ~+~ B F_t^{-2}B^T ~+~ A^TA\\Sigma_a A^TA, \\\\\n\\Sigma_{12} = ~& -TF_t^{-2}, \\\\\n\\Sigma_{22} = ~& F_t^{-1} + F_t^{-2}T^2, \\\\\n\\Sigma_a = ~& F_{\\epsilon_1}^{-2} + F_{\\epsilon_2}^{-2}.\n\\end{align*}\nEliminating $\\dt$ as before\n\\[\n\\dt ~=~ \\Sigma_{22}^{-1}(w_2 - \\Sigma_{12}\\Sigma_{\\partial h}D_h\\dx - \n\\Sigma_{12}\\Sigma_{\\partial v}D_v\\dx),\n\\]\nthe key system is\n\\[\nH'_{11}\\dx  =  \nw_1 - (D_h^T\\Sigma_{\\partial h} + D_v^T\\Sigma_{\\partial v} )\\Sigma_{12}\\Sigma_{22}^{-1}w_2\n\\]\nwhere\n\\begin{align*}\nH'_{11} = ~& D_h^T(\\Sigma_b\\Sigma^2_{\\partial h}-F_t^{-1})D_h ~+~\nD_v^T(\\Sigma_b\\Sigma^2_{\\partial v}-F_t^{-1})D_v ~+~\\\\\n~& D_h^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_v ~+~\nD_v^T(\\Sigma_b\\Sigma_{\\partial h}\\Sigma_{\\partial v})D_h ~+~ \nA^TA\\Sigma_a A^TA, \\\\\n\\Sigma_b = ~& F_t^{-2} - \\Sigma_{12}^2\\Sigma_{22}^{-1}.\n\\end{align*}\n\n\n\n\n\n\n%-------------------------------------------------------------------------------\n\\vspace{10mm}\n\n\\bibliographystyle{plain}\n\\bibliography{l1magic}\n\n\\end{document}\n", "meta": {"hexsha": "cb38b954892fa4a09764e2a50bbbf9e49af05c47", "size": 52133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "l1magic/l1magic/Notes/l1magic_notes.tex", "max_stars_repo_name": "TideDancer/matrixMultiplications", "max_stars_repo_head_hexsha": "607c60d136f7df874d63816e006d20a85138c00a", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2021-06-30T05:37:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T06:01:19.000Z", "max_issues_repo_path": "l1magic/l1magic/Notes/l1magic_notes.tex", "max_issues_repo_name": "TideDancer/matrixMultiplications", "max_issues_repo_head_hexsha": "607c60d136f7df874d63816e006d20a85138c00a", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-07-08T02:03:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-04T14:31:17.000Z", "max_forks_repo_path": "l1magic/l1magic/Notes/l1magic_notes.tex", "max_forks_repo_name": "TideDancer/matrixMultiplications", 
"max_forks_repo_head_hexsha": "607c60d136f7df874d63816e006d20a85138c00a", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-07-09T08:23:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-23T08:19:42.000Z", "avg_line_length": 46.3404444444, "max_line_length": 792, "alphanum_fraction": 0.6588533175, "num_tokens": 18148, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660687, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.582234552847901}}
{"text": "\\section{Language}\n\\label{sec:language}\n\n\n\\newcommand{\\lgb}{\\langle}\n\\newcommand{\\rgb}{\\rangle}\n\\newcommand{\\gb}[1]{\\left<{#1}\\right>}\n\\[\n\\begin{array}{rll}\n\\SC{InTm} ::= & \\verb!Set! \n                & \\mbox{Universe of sets} \\\\\n            | & \\verb!Prop! \n                & \\mbox{Universe of propositions} \\\\\n            | & \\lgb\\verb!(!\\SC{Nom}\\verb! : !\\SC{InTm} \\verb!)!\\rgb^+\\verb! -> ! \\SC{InTm} \n                & \\mbox{\\(\\Pi\\)-type} \\\\\n            | & \\SC{InTm} \\verb! -> ! \\SC{InTm}  \n                & \\mbox{nondependent function type} \\\\\n            | & \\verb!\\! \\SC{Nom}^* \\verb! -> ! \\SC{InTm} \n                & \\mbox{\\(\\lambda\\)-abstraction} \\\\\n            | & \\verb!:- ! \\SC{InTm} \n                & \\mbox{set of proofs} \\\\\n            | & \\SC{NAT} \\lgb\\verb!+ ! \\SC{InTm}\\rgb^? \n                & \\mbox{Elements of enumerations} \\\\\n            | & \\verb!con ! \\SC{InTm} \n                & \\mbox{Constructor for inductive definitions} \\\\\n            | & \\verb!Sig (!\\SC{Sig}\\verb!)! \n                & \\mbox{`record' signature} \\\\\n            | & \\verb!()! \n                & \\mbox{unit} \\\\\n            | & \\verb![]! \n                & \\mbox{void} \\\\\n            | & \\verb![!\\SC{InTm}^+\\:\\lgb\\verb!, !\\SC{InTm}\\rgb^?\\verb!]! \n                & \\mbox{tuple} \\\\\n            | & \\verb!Enum ! \\SC{InTm} \n                & \\mbox{enumeration} \\\\\n            | & \\verb!'! \\SC{Nom} \\: \\SC{InTm}\n                & \\mbox{tag or (co)data} \\\\\n            | & \\lgb\\verb!(!\\SC{Nom}\\verb! : !\\SC{InTm} \\verb!)!\\rgb^+\\verb! => ! \\SC{InTm} \n                & \\mbox{propositional \\(\\forall\\)} \\\\\n            | & \\SC{InTm} \\verb! && ! \\SC{InTm} \n                & \\mbox{propositional And} \\\\\n            | & \\verb!TT ! | \\verb| FF|\n                & \\mbox{propositional Trivial and Absurd} \\\\\n            | & \\verb!1  ! | \\verb| 0|\n                & \\mbox{proofs of Trivial and Absurd} \\\\\n            | & \\SC{ExTm} \\verb! == ! \\SC{ExTm}\n                & \\mbox{blue equality} \\\\\n            | & \\verb!(!\\SC{InTm}\\verb!)! \n                & \\mbox{grouping} \\\\\n            | & \\SC{ExTm} \n                & \\mbox{term with synthesizable type} \\medskip \\\\\n\\SC{Sig} ::= & \\varepsilon \n                & \\mbox{unit signature} \\\\\n           | & \\lgb\\SC{Nom}\\verb! : !\\rgb^? \\SC{InTm} \\: \\SC{SigMore} \n                & \\Sigma x : S . T \\\\\n\\SC{SigMore} ::= & \\verb!;! \\SC{InTm} \n                   & \\verb!; T! \\mbox{ means } \\verb!T!\\\\\n                 & \\verb!;! \\SC{Sig}  \n                   & \\verb!; S! \\mbox{ means } \\verb!S!\\\\\n                 & \\verb!:-! \\SC{InTm} \n                   & \\verb!:- P! \\mbox{ means } \\verb!:- P!\\\\\n                 & \\verb!:-! \\SC{InTm} \\: \\SC{SigMore} \n                   & \\verb!:- P S! \\mbox{ means } \\Sigma \\_ . P S \\\\\n\\SC{ExTm} ::= & \\verb!(: ! \\SC{InTm} \\verb!)!\n                & \\mbox{type-cast} \\\\\n            | & \\SC{Nom}\\lgb\\verb!^!\\SC{Nat}\\rgb^? \\lgb \\verb!.! \\SC{Nom} \\lgb\\verb!^!\\SC{Nat}\\rgb^? \\rgb^*\n                & \\mbox{relative name} \\\\\n            | & \\SC{ExTm} \\: \\SC{InTm} \n                & \\mbox{application} \\\\\n            | & \\SC{ExTm}\\:\\verb|!| \n                & \\mbox{car} \\\\\n            | & \\SC{ExTm}\\:\\verb!-! 
\n                & \\mbox{cdr} \\\\\n            | & \\SC{Op}\\verb!(!\\SC{InTm}^{\\verb!,!*}\\verb!)! \n                & \\mbox{operator} \\\\\n            | & \\verb!(!\\SC{InTm}\\verb! : !\\SC{InTm}\\verb!)!  \\verb! <-> ! \\verb!(!\\SC{InTm}\\verb! : !\\SC{InTm}\\verb!)! \n                & \\mbox{green equality} \\\\\n\\end{array}\n\\]\n  \nand more to come. For developments:\n\n\\[\n\\begin{array}{rll}\n\\SC{Top} ::= & \\lgb \\verb![! \\SC{Girl}^* \\verb!]! \\rgb^? \\SC{Com}^{\\verb!;!*} \n               & \\mbox{top-level development} \\\\\n\n\\SC{Girl} ::= & \\SC{Nom} \\lgb \\verb![! \\SC{Line}^* \\verb!]! | \\verb!:=!  \\rgb \\lgb \\verb!?! | \\SC{InTm} \\rgb \\verb!:! \\SC{InTm} \\lgb \\verb![| ! \\SC{Com}^{\\verb!;!*} \\verb! |]! \\rgb^? \\verb! ;!\n               & \\mbox{development} \\\\\n           | & \\SC{Nom} \\verb![! \\SC{Line}^* \\verb!]! \\lgb \\verb![| ! \\SC{Com}^{\\verb!;!*} \\verb! |]! \\rgb^? \\verb! ;!\n               & \\mbox{module} \\\\\n\n\\SC{Line} ::= & \\SC{Girl} | \\SC{Boy}\n                 & \\mbox{line in development} \\\\\n\n\\SC{Boy} ::= & \\verb!\\! \\SC{Nom} \\verb!:! \\SC{InTm} \\verb! ->!\n               & \\mbox{$\\lambda$-boy} \\\\\n           | & \\verb!(! \\SC{Nom} \\verb!:! \\SC{InTm} \\verb!) ->!\n               & \\mbox{$\\Pi$-boy} \\\\\n           | & \\verb!(! \\SC{Nom} \\verb!:! \\SC{InTm} \\verb!;) ->!\n               & \\mbox{$\\Sigma$-boy} \\\\\n           | & \\verb!:- (! \\SC{Nom} \\verb!:! \\SC{InTm} \\verb!) =>!\n               & \\mbox{$\\forall$-boy} \\\\\n\n\\end{array}\n\\]\n\nThis is all very tenative. One overriding principle is that we should stick to ASCII\ncharacters throughout. Users may use Unicode if they wish, but it should not be\nforced upon them.\n", "meta": {"hexsha": "c82cef14dfddb29def4475c486c5c15fe4e2a6a4", "size": 4764, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/Documentation/Language.tex", "max_stars_repo_name": "mietek/epigram", "max_stars_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2016-01-09T17:36:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-11T01:55:28.000Z", "max_issues_repo_path": "src/Documentation/Language.tex", "max_issues_repo_name": "mietek/epigram", "max_issues_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Documentation/Language.tex", "max_forks_repo_name": "mietek/epigram", "max_forks_repo_head_hexsha": "8c46f766bddcec2218ddcaa79996e087699a75f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2016-08-14T21:36:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T01:57:40.000Z", "avg_line_length": 42.9189189189, "max_line_length": 192, "alphanum_fraction": 0.4061712846, "num_tokens": 1763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8615381952105442, "lm_q2_score": 0.6757645944891559, "lm_q1q2_score": 0.5821970091233727}}
{"text": "\\section{Topological Spaces}\n\\subsection{Fundamental Notions}\n\\paragraph{3.}\n\\begin{proof}\n  If $A$ is open, then clear that $x \\in A \\subset A$. For the converse, let \n  $E = \\bigcup_{x\\in A}O_x$ where $x\\in O_x \\subset A$. Since $O_x\\subset A$,\n  $E \\subset A$. Meanwhile, for every $x\\in A$, $x \\in O_x\\subset E$. Hence, \n  $A = E$. Thus, $A$ is open since $E$ is the union of open sets.\n\\end{proof}\n\n\\paragraph{7.}\n\\begin{proof}\n  $\\,$\\par\n  (a) We argue by contradiction. Assume $x \\in F^c$. Since $F$ is closed, $F^c$\n  is open and therefore there is neighborhood $O$ of $x$ s.t. $O \\cap F\n  \\ne \\varnothing$, which contradicts the fact that $x$ is a cluster point.\n  Thus, $x \\in F$.\n  \n  (b) Let $y := f(x)$ and $y_n := f(x_n)$. Let $O$ be an arbitrary\n  neighborhood of $y$. Since $f$ is continuous, there is a neighborhood $U$ of\n  $x$ s.t. $f(U) \\subset O$. Since $x = \\lim x_n$, there is an integer $N$ s.t.\n  $x_n \\in U$ for every $n > N$. Hence, $y_n = f(x_n) \\in f(U) \\subset O$ for\n  all $n > N$. Thus, $y_n \\to y$.\n  \n  (c) The previous argument, \\textit{mutatis mutandis}, yields the result.\n\\end{proof}\n\n\\paragraph{10.}\n\\begin{proof}\n  $\\,$\\par\n  (a) Suppose that both $A_1$ and $A_2$ are open.\n  Let $f_1 := f|_{A_1}$, $x\\in A$ and $y = f(x)$. We may assume without\n  loss of generality that $x \\in A_1$. For every neighborhood $O$ of $y$, since\n  $f_1$ is continuous, there is a neighborhood $A_1\\cap U$ s.t. $f(A_1\\cap U)\n  \\subset O$ where $U$ is an open set of $X$. Since $A_1\\cap U$ is still open,\n  $f$ is continuous at $x$. Thus, $f$ is continuous.\n  \n  (b) Let $f(x) = 0$ on $A_1 = (-1, 0)$ and $f(x) = 1$ on $A_2 = [0, 1]$.\n\\end{proof}\n\n\\subsection{Bases and Contability}\n\\paragraph{11.}\n\\begin{proof}\n  $\\,$\\par\n  (a) Suppose for every $B\\in\\mcal{B}$ containing $x$, there is a $y\\in \n  B\\cap E$. Let $O$ be an arbitrary open set containing $x$. By the definition\n  of bases, there is a base set $B$ s.t. $x \\in B \\subset \\mcal{B}$. Thus,\n  $O\\cap E \\supset B\\cap E \\ne \\varnothing$. Namely, $x \\in \\cl E$. The\n  reversed direction follows immediately from the definition of cluster points.\n  \n  (b) If there is sequence in $E$ that converges to $x$, clear that $x\\in\n  \\cl E$. Now we show the reverse. Let $\\mcal{B}_x = (B_k)$ be a countable base\n  at $x$ and $S_n = \\bigcap_{k=1}^n B_k$. Choose $x_n \\in S_n$. For every\n  open set $O$ containing $x$, there exists a $B_m$ s.t. $x \\in B_m\n  \\subset O$. Hence, for every $n > m$, $x_n \\in S_n\\subset S_m\n  \\subset S_m \\subset O$. Thus, $x_n \\to x$.\n  \n  (c) It follows immediately from (b).\n\\end{proof}\n\n\\paragraph{13.}\n\\begin{proof}\n  By the construction of $\\mcal{B}$ and Prop. 5, $\\mcal{B}$ is a base for some\n  topology on $X$. Let $\\mcal{T}$ be a topology containing $\\mcal{C}$. Since\n  $X \\in \\mcal{T}$ and $\\mcal{T}$ is closed under finite intersection,\n  $\\mcal{T} \\supset \\mcal{B}$. Thus, $\\mcal{T}$ contains the topology generated\n  by $\\mcal{B}$. Since the choice of $\\mcal{T}$ is arbitrary, we conclude that\n  $\\mcal{B}$ is a base for the weakest topology containing $\\mcal{C}$.\n\\end{proof}\n\n\n\\paragraph{16.}\n\\begin{proof}\n  Let $\\mcal{B}$ be a countable base for the topology on $X$ and \n  $\\mcal{U}$ an open cover of $X$. For each $B\\in\\mcal{B}$, if there is\n  some $U \\in \\mcal{U}$ with $B \\subset U$, then pick that $U$. 
This yields\n  an at most countable subset $\\mcal{V}$ of $\\mcal{U}$. Now we show that \n  $\\mcal{V}$ covers $X$. For every $x\\in X$, since $\\mcal{U}$ covers $X$, there\n  is a $U\\in\\mcal{U}$ s.t. $x\\in U$. Meanwhile, since $\\mcal{B}$ is a base, \n  there is a $B \\in \\mcal{B}$ s.t. $x\\in B \\subset U$. Hence, by our\n  construction, some $U\\hp \\in \\mcal{U}$ with $B \\subset U\\hp$ was picked for\n  $\\mcal{V}$, and $x \\in B \\subset U\\hp$. Thus,\n  $x$ is covered by $\\mcal{V}$.\n\\end{proof}\n\n\\subsection{The Separation Axioms and Continuous Real-Valued Functions}\n\\paragraph{18.}\n\\begin{proof}\n  $\\,$\\par\n  (a) If $x \\ne y$, then $d(x, y) > 0$. It suffices to choose the open balls\n  of radius $d(x, y) / 2$ centered at $x$ and $y$ respectively.\n  \n  (b) Let $O_1 = \\{x\\mid \\rho(x, F_1) < \\rho(x, F_2) \\}$ and $O_2 = \\{\n  x\\mid \\rho(x, F_1) > \\rho(x, F_2)\\}$. Clear that $O_1$ and $O_2$ are\n  disjoint. Since $F_1$ and $F_2$ are closed and disjoint, for every $x\\in F_1$\n  we have $\\rho(x, F_1) = 0$ and $\\rho(x, F_2) > 0$ (as $x \\notin F_2$ and\n  $F_2$ is closed). Therefore, $F_1 \\subset O_1$. Similarly, $F_2 \\subset O_2$. Meanwhile,\n  for every $x\\in O_1$, the open ball centered at $x$ and of radius \n  $(\\rho(x, F_2) - \\rho(x, F_1))/3$ is contained in $O_1$. Therefore, $O_1$ is\n  open and similarly for $O_2$. Thus, $X$ is normal.\n\\end{proof}\n\n\\paragraph{20.}\n\\begin{proof}\n  If $f$ is continuous, then clear that $\\{x\\mid f(x) < a\\} = \n  f\\inv((-\\infty, a))$ and $\\{x\\mid f(x) > a\\} = f\\inv((a, \\infty))$ are open.\n  Now we show the reverse. For every open interval $(a, b)$, $f\\inv((a, b))\n  = f\\inv((a, \\infty))\\cap f\\inv((-\\infty, b))$ is open. Meanwhile, every open\n  set $O$ in $\\mathbb{R}$ is a union $\\bigcup I_\\alpha$ of open intervals. Thus,\n  $f\\inv(O) = \\bigcup f\\inv(I_\\alpha)$ is open. Namely, $f$ is continuous. \n  The second result follows immediately from the fact that $\\{f \\ge a\\}$ is closed iff\n  $\\{f < a\\}$ is open.\n\\end{proof}\n\n\\paragraph{23.}\n  It seems that in (a), the Hausdorff condition is not necessary. \n\\begin{proof}\n  $\\,$\\par\n  (a) First, suppose that $X$ is normal. Let $G = O^c$. $F$ and $G$\n  are disjoint closed sets and, therefore, there are disjoint open sets $U$ and\n  $V$ s.t. $F\\subset U$ and $G\\subset V$. To show $\\cl U\\subset O$, note that\n  $O = G^c \\supset V^c \\supset U$, since $U$ and $V$ are disjoint. Since $V^c$\n  is closed, $V^c \\supset \\cl U$. Thus, $F\\subset U$ and $\\cl U \\subset O$.\n  \n  For the reverse, let $F_1$ and $F_2$ be two disjoint closed sets. Then\n  $F_1^c$ is an open set containing $F_2$ and therefore there exists an open\n  $U_2$ s.t. $F_2 \\subset U_2$ and $\\cl U_2\\subset F_1^c$. Note that $\\cl U_2$\n  and $F_1$ are again disjoint closed sets. Hence, similarly, we can find an\n  open $U_1$ s.t. $F_1\\subset U_1$ and $\\cl U_1 \\subset (\\cl U_2)^c$. Thus, \n  $X$ is normal.  \n\n  (b) We index the sequence by $n$ instead of $r$. Suppose that $N$ is the\n  smallest integer s.t. $r = p2^{-N} < 1$. By (a), we may find an open set\n  $U_{N}$ s.t. $F\\subset U_N$ and $\\cl U_N\\subset O$. Now, $U_N$ is again an\n  open set containing $F$ and therefore we can find $U_{N+1}$ s.t. \n  $F\\subset U_{N+1}$ and $\\cl U_{N+1} \\subset U_N$. Proceed iteratively and \n  we get the required sequence. \n  \n  (c) Clear that $0 \\le f \\le 1$, $f\\equiv 0$ on $F$ and $f\\equiv 1$ on $O^c$. \n  Hence, it suffices to show the continuity. 
For every $x \\in X$ and $\\vep> 0$,\n  choose $r_1, r_2 \\in \\mathbb{Q}$ such that \n  \\[\n    f(x) - \\vep < r_1 < f(x) < r_2 < f(x) + \\vep.\n  \\]\n  Let $U = U_{r_2} \\setminus \\cl U_{r_1}$. Clear that $U$ is open. Meanwhile,\n  \\[\n    f(x) < r_2 \\quad\\Rightarrow\\quad\n    \\inf\\{r\\mid x \\in U_r\\} < r_2 \\quad\\Rightarrow\\quad\n    \\exists r < r_2 \\text{ s.t. } x \\in U_r.\n  \\]\n  Hence, $x \\in U_{r_2}$. If $x \\in \\cl U_{r_1}$, then $x \\in U_r$ for all\n  $r > r_1$ since $U_r \\supset \\cl U_{r_1}$ and, therefore, $f(x) \\le r_1$. \n  Contradiction. Thus, $x \\notin U_{r_1}$. Hence, $U$ is an open neighborhood\n  of $x$. For every $y \\in U$, clear that $f(x) < r_2$. Also, $y \\notin \n  \\cl U_{r_1}$ implies that $f(y) \\ge r_1$. Hence, $f(U)\\subset (f(x) - \\vep,\n  f(x) + \\vep)$. Thus, $f$ is continuous. \n  \n  (d) If $X$ is normal, then clear that the function described in (c) satisfies\n  the requirements. For the reverse, let $O_1 = \\{x\\mid f(x) < 1/2\\}$ and \n  $O_2 = \\{x \\mid f(x) > 1/2\\}$. Clear that $O_1$ and $O_2$ are disjoint and\n  since $f$ is continuous, they are open. Meanwhile, as $f\\equiv 0$ on $A$\n  and $f\\equiv 1$ on $B$, $O_1$ contains $A$ and $O_2$ contains $B$. Thus,\n  $X$ is normal.\n\\end{proof}\n\n\\paragraph{24.}\n  The function in (f) should be $g = \\varphi k / (1 - |\\varphi k|)$.\n\\begin{proof}\n  $\\,$\\par\n  (a) Obvious.\n  \n  (b) Clear that $B$ and $C$ are disjoint. Since $f$ is continuous, so is\n  $h$ and therefore $B$ and $C$ are closed. By Urysohn's lemma, there exists\n  continuous $0 \\le h_{1, B}, h_{1, C} \\le 1$ s.t. $h_{1, B} \\equiv 1$ (resp. \n  $h_{1, C} \\equiv 1$) on $B$ (resp. $C$) and vanishes on $C$ (resp. $B$). Let\n  $h_1 = (-h_{1, B} + h_{1, C}) / 3$. Then $h_1 \\equiv -1/3$ on $B$ and \n  $h_1 \\equiv 1/3$ on $C$. Meanwhile, $h_1$ is continuous and $|h_1| \\le 1/3$.\n  Let $x \\in A$. If $h(x) \\le -1/3$, then $|h(x) - h_1(x)| = |h(x) + 1/3| <\n  2/3$. Similarly for $x$ with $h(x) \\ge 1/3$. If $-1/3 < h(x) < 1/3$, then\n  $|h(x) - h_1(x)| < |h(x)| + |h_1(x)| \\le 2/3$. Thus, $|h - h_1| < 2/3$.\n  \n  (c) Suppose that we have constructed $h_n$. Let $s_n = \\sum_{i=1}^n h_i$. Let\n  \\[\n    B_n = \\{x \\mid h(x) - s(x) < -2^n/3^{n+1}\\}\n    \\quad\\text{and}\\quad\n    C_n = \\{x \\mid h(x) - s(x) > -2^n/3^{n+1}\\}.\n  \\]\n  The previous argument, \\textit{mutatis mutandis}, yields a continuous \n  function $h_{n+1}$ s.t. $|h_{n+1}| < 2^n/3^{n+1}$ and $|h - s_n - h_{n+1}|\n  = |h - s_{n+1}| < 2^{n+1} / 3^{n+1}$ for all $x \\in A$.\n  \n  (d) By the Weierstrass $M$-test, $h_n$ is uniformly summable. Hence,\n  $k = \\sum_{n=1}^\\infty h_n$ is continuous as each $h_n$ is. Clear that $|k|\n  \\le 1$. Moreover, by the estimation in (c), $h = k$ on $A$. \n  \n  (e) By Urysohn's lemma, there is a continuous function $\\varphi$ on $X$ s.t.\n  $\\varphi \\equiv 1$ on $A$ and $\\varphi \\equiv 0$ on $\\{x\\mid k(x) = 1\\}$.\n  \n  (f) Let $g = \\varphi k / (1 - |\\varphi k|)$. By (e), $g$ is well-defined on \n  entire $X$. Also, $g$ is continuous and for $x \\in A$\n  \\[\n    g(x) = \\frac{\\varphi(x) k(x)}{1 - |\\varphi(x) k (x)|}\n    = \\frac{1\\times h(x)}{1 - |1\\times h(x)|}\n    = \\frac{\\frac{f}{1 + |f|}}{1 - \\frac{|f|}{1 + |f|}}\n    = f.\n  \\]\n\\end{proof}\n\n\\paragraph{26.}\n\\begin{proof}\n  Let $\\mcal{J}$ be the topology generated by $\\mcal{F}$. Since every\n  $f \\in \\mcal{F}$ is continuous, $\\mcal{J} \\subset \\mcal{T}$. Let $O \\in\n  \\mcal{T}$. 
For every $x \\in O$, there is a continuous function $f\\in\\mcal{F}$\n  s.t. $f(x) = 1$ and vanishes on $O^c$. Namely, \n  \\[\n    U_x := f\\inv (\\mathbb{R}\\setminus\\{0\\}) \\in \\mcal{J}\n  \\]\n  is an open set with $x \\in U_x \\subset O$. Clear that $O = \\bigcup_x U_x$.\n  Thus, $O \\in \\mcal{J}$. Namely, $\\mcal{T} = \\mcal{J}$.\n\\end{proof}\n\n\n\\subsection{Connectedness}\n\\paragraph{32.}\n\\begin{proof}\n  Assume that $G$ is not connected and let $(O_1, O_2)$ be a separation of $G$.\n  Since each $G_\\alpha$ is connected, $G_\\alpha$ is contained by exactly\n  one of $O_1$ and $O_2$ since otherwise $(G_\\alpha\\cap O_1, G_\\alpha\\cap O_2)$\n  would be a separation of $G_\\alpha$. Thus, there are two of $\\{G_\\alpha\\}$\n  s.t. one is contained by $O_1$ and the other contained by $O_2$ and hence \n  they have no point in common. Contradiction.\n\\end{proof}\n\n\\paragraph{33.}\n\\begin{proof}\n  Assume that $B$ is not connected and let $(O_1, O_2)$ be a separation of $B$.\n  Since $A$ is connected, either $A\\cap O_1$ or $A\\cap O_2$ is empty. Assume\n  without loss of generality that $A\\cap O_2 = \\varnothing$. Then, $O_2\n  \\subset \\partial A$ and therefore has empty interior. Contradiction. \n\\end{proof}\n\n\\paragraph{35.}\n\\begin{proof}\n  $\\,$\\par\n  (a) If $X$ is not connected, then we choose two points $x, y$ from each set\n  of a separation. Clear that they can not be connected by some arc since the\n  image of $[0, 1]$ under a continuous map is still connected.\n  \n  (b) Assume that $X$ is not connected and let $(O_1, O_2)$ be a separation.\n  Since each of $X_1=\\{(x, y) \\mid x = 0, -1\\le y \\le 1\\}$ and $X_2\\{(x, y)\n  \\mid y = \\sin x, 0 < x \\le 1\\}$ is connected, they are contained in, say,\n  $O_1$ and $O_2$ respectively. Clear that any neighborhood of $(0, 0)$\n  contains points in $X_2$ and therefore $O_1\\cap O_2 \\ne \\varnothing$. \n  Contradiction. \n  \n  Now we show that $X$ is not arcwise connected. (TODO)\n  \n  (c) Let $x \\in G$ and $H$ be the points of $G$ that can be connected to $x$\n  by a polygonal arc. For every $y \\in H$, since $G$ is open, there is an open\n  ball $B$ centered at $y$ with $B \\subset G$. Clear that we can connect every\n  $z \\in B$ by the arc connecting $x$ and $y$, and the segment from $y$ to $z$.\n  Thus, $H$ is open. Now, we show that $K = G\\setminus H$ is open. Let $y$ be\n  a point in $K$ and $B$ a small open ball centered at $y$ that is contained in\n  $G$. Assume, to obtain a contradiction, that $B \\cap H \\ne \\varnothing$. Then\n  the previous argument, \\textit{mutatis mutandis}, show that $y$ can be\n  connected to $x$. Contradiction. Thus, $H$ is both open and closed in $G$.\n  Since $G$ is connected, $H = G$. Thus, $G$ is arcwise connected. 
\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "553fb11ffa080fceb7384348c1a4c626211ba748", "size": 12642, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real_analysis_3rd/ch8_topological_spaces.tex", "max_stars_repo_name": "Engineev/solutions", "max_stars_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-07-13T08:36:44.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T17:37:17.000Z", "max_issues_repo_path": "real_analysis_3rd/ch8_topological_spaces.tex", "max_issues_repo_name": "Engineev/solutions", "max_issues_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real_analysis_3rd/ch8_topological_spaces.tex", "max_forks_repo_name": "Engineev/solutions", "max_forks_repo_head_hexsha": "4e33274fe1ed9e46fd0e6671c57cb589704939bd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-28T00:05:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-28T00:05:28.000Z", "avg_line_length": 44.514084507, "max_line_length": 79, "alphanum_fraction": 0.6088435374, "num_tokens": 5020, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702642896702, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5821968037433126}}
{"text": "\\section{Geometry}\n\\lst{Geometry}{}{geometry/geometry.cc}\n\n\\lst{Polygon}{inPolygon: $\\mathcal{O}(\\log n)$, area: $\\mathcal{O}(n)$, isConvex: $\\mathcal{O}(n)$}{geometry/polygon.cc}\n\n\\lst{Convex Hull}{$\\mathcal{O}(n\\log n)$}{geometry/convexHull.cc}\n\n", "meta": {"hexsha": "fc96b2a4500c0d6f6245d7f0b1b4fc1b9bcdc042", "size": 248, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/geometry.tex", "max_stars_repo_name": "Zeldacrafter/CompProg", "max_stars_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-02-06T15:44:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-21T03:51:21.000Z", "max_issues_repo_path": "document/geometry.tex", "max_issues_repo_name": "Zeldacrafter/CompProg", "max_issues_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document/geometry.tex", "max_forks_repo_name": "Zeldacrafter/CompProg", "max_forks_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0, "max_line_length": 120, "alphanum_fraction": 0.6774193548, "num_tokens": 92, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392817460332, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5820669402109819}}
{"text": "\\section{Numerical Integration}{}{}\\label{sec:Numberical Integration}\nWe have now seen some of the most generally useful methods\nfor discovering antiderivatives, and there are others. \nUnfortunately, some functions\nhave no simple antiderivatives. In such cases, if the value of a\ndefinite integral is needed it will have to be approximated. We will\nsee two methods that work reasonably well and yet are fairly simple;\nin some cases more sophisticated techniques will be needed.\n\nOf course, we already know one way to approximate an integral: If we\nthink of the integral as computing an area, we can add up the areas of\nsome rectangles. While this is quite simple, it is usually the case\nthat a large number of rectangles is needed to get acceptable\naccuracy. A similar approach is much better. We approximate the area\nunder a curve over a small interval as the area of a trapezoid. In\nfigure~\\ref{fig:trapezoid approximation} we see an area under \na curve approximated by rectangles and by trapezoids; it is apparent\nthat the trapezoids give a substantially better approximation on each\nsubinterval. \n\n\\figure[H]\n%\\texonly\n\\centerline{\\vbox{\\beginpicture\n\\normalgraphs\n%\\ninepoint\n\\setcoordinatesystem units <1.25truecm,0.5truecm>\n\\setplotarea x from 0 to 4.1, y from 0 to 7.1\n\\axis left shiftedto x=0 /\n\\axis bottom shiftedto y=0 /\n\\setquadratic\n\\plot 0.000 4.000 0.200 5.368 0.400 6.304 0.600 6.856 0.800 7.072\n1.000 7.000 1.200 6.688 1.400 6.184 1.600 5.536 1.800 4.792\n2.000 4.000 2.200 3.208 2.400 2.464 2.600 1.816 2.800 1.312\n3.000 1.000 3.200 0.928 3.400 1.144 3.600 1.696 3.800 2.632\n4.000 4.000 /\n\\setdashes <2pt>\n\\putrule from 0 4 to 1 4\n\\putrule from 1 7 to 1 0\n\\putrule from 1 7 to 2 7\n\\putrule from 2 7 to 2 0\n\\putrule from 2 4 to 3 4\n\\putrule from 3 4 to 3 0\n\\putrule from 3 1 to 4 1\n\\putrule from 4 1 to 4 0\n\\setsolid\n\\setcoordinatesystem units <1.25truecm,0.5truecm> point at -6 0\n\\setplotarea x from 0 to 4.1, y from 0 to 7.1\n\\axis left shiftedto x=0 /\n\\axis bottom shiftedto y=0 /\n\\setquadratic\n\\plot 0.000 4.000 0.200 5.368 0.400 6.304 0.600 6.856 0.800 7.072\n1.000 7.000 1.200 6.688 1.400 6.184 1.600 5.536 1.800 4.792\n2.000 4.000 2.200 3.208 2.400 2.464 2.600 1.816 2.800 1.312\n3.000 1.000 3.200 0.928 3.400 1.144 3.600 1.696 3.800 2.632\n4.000 4.000 /\n\\setlinear\n\\setdashes <2pt>\n\\plot 0 4 1 7 2 4 3 1 4 4 /\n\\putrule from 1 0 to 1 7\n\\putrule from 2 0 to 2 4\n\\putrule from 3 0 to 3 1\n\\putrule from 4 0 to 4 4\n\\endpicture}}\n%\\htmlfigure{Integration_techniques-numerical_rectangles_trapezoids.html}\n\\caption{\\label{fig:trapezoid approximation}\nApproximating an area with rectangles and with trapezoids.}\n\\endfigure\n\nAs with rectangles, we divide the interval into $n$ equal subintervals\nof length $\\Delta x$.\nA typical trapezoid is pictured in figure~\\ref{fig:one trapezoid};\nit has area $$\\ds{f(x_i)+f(x_{i+1})\\over2}\\Delta x.$$ \nIf we add up the\n  areas of all trapezoids we get\n$${f(x_0)+f(x_1)\\over2}\\Delta x+{f(x_1)+f(x_2)\\over2}\\Delta x+\\cdots+\n  {f(x_{n-1})+f(x_n)\\over2}\\Delta x$$\n  $$=\\left({f(x_0)\\over2}+f(x_1)+f(x_2)+\\cdots+f(x_{n-1})+{f(x_n)\\over2}\\right)\n  \\Delta x.$$\nFor a modest number of subintervals this is not too difficult to do\nwith a calculator; a computer can easily do many subintervals.\n\n\\figure[H]\n\\centerline{\\vbox{\\beginpicture\n\\normalgraphs\n%\\ninepoint\n\\setcoordinatesystem units <1.25truecm,0.5truecm>\n\\setplotarea x from 0 to 3.0, y 
from 0 to 7.1\n\\axis bottom shiftedto y=0 ticks length <2pt> \n  withvalues {$x_i$} {$x_{i+1}$} / at 1 2 / /\n\\put {$(x_i,f(x_i))$} [r] <-3pt,0pt> at 1 7\n\\put {$(x_{i+1},f(x_{i+1}))$} [l] <3pt,0pt> at 2 4\n\\setquadratic\n\\plot 1.000 7.000 1.167 6.755 1.333 6.370 1.500 5.875 1.667 5.296\n1.833 4.662 2.000 4.000 /\n\\setlinear\n\\setdashes <2pt>\n\\plot 1 7 2 4  /\n\\putrule from 2 0 to 2 4\n\\putrule from 1 0 to 1 7\n\\endpicture}}\n%\\endtexonly\n%\\figrdef{fig:one trapezoid}\n%\\htmlfigure{Integration_techniques-numerical_one_trapezoid.html}\n\\caption{\\label{fig:one trapezoid}\nA single trapezoid.}\n\\endfigure\n\nIn practice, an approximation is useful only if we know how accurate\nit is; for example, we might need a particular value accurate to three\ndecimal places. When we compute a particular approximation to an\nintegral, the error is the difference between the approximation and\nthe true value of the integral. For any approximation technique, we\nneed an \\dfont{error bound}, a value that\nis guaranteed to be larger than the actual error. If $A$ is an\napproximation and $E$ is the associated error bound, then we know\nthat the true value of the integral is between $A-E$ and\n$A+E$. In the case of our approximation of the integral, we want\n$E=E(\\Delta x)$ to be a function of $\\Delta x$ that gets small rapidly\nas $\\Delta x$ gets small. Fortunately, for many functions, there is\nsuch an error bound associated with the trapezoid approximation.\n\n\\begin{theorem}{Error for Trapezoid Approximation}{Error for Trapezoid Approximation}\\label{Error for Trapezoid Approximation}\nSuppose $f$ has a second derivative $f''$ everywhere on the\ninterval $[a,b]$, and $|f''(x)|\\le M$ for all $x$ in the\ninterval. With $\\Delta x= (b-a)/n$, an error bound for the\ntrapezoid approximation is\n$$\n  E(\\Delta x) = {b-a\\over12}M(\\Delta x)^2={(b-a)^3\\over 12n^2}M.\n$$\n\\end{theorem}\nLet's see how we can use this. \n\n\\begin{example}{Approximate an Integral With Trapezoids}{Approximate an Integral With Trapezoids}\\label{Approximate an Integral With Trapezoids}\nApproximate $\\ds\\int_0^1 e^{-x^2}\\,dx$ to two decimal places.\n\\end{example}\n\n\\begin{solution}\nThe second derivative of $\\ds f=e^{-x^2}$ is $\\ds(4x^2-2)e^{-x^2}$, and\nit is not hard to see that on $[0,1]$ $|f''(x)|$ has a maximum value of 2, thus\nwe begin by estimating the number of subintervals we are likely to\nneed. To get two decimal places of accuracy, we will certainly need\n$E(\\Delta x)<0.005$ or\n\\begin{eqnarray*}\n  {1\\over12}(2){1\\over n^2} &<& 0.005\\cr\n  {1\\over6}(200)&<&n^2\\cr\n  5.77\\approx\\sqrt{100\\over3}&<&n\n\\end{eqnarray*}\nWith $n=6$, the error bound is thus $\\ds1/6^3<0.0047$.\nWe compute the trapezoid approximation for six intervals:\n$$\n  \\left({f(0)\\over2}+f(1/6)+f(2/6)+\\cdots+f(5/6)+{f(1)\\over2}\\right){1\\over6}\n  \\approx 0.74512.\n$$\nSo the true value of the integral is between $0.74512-0.0047=0.74042$ and\n$0.74512+0.0047=0.74982$. Unfortunately, the first rounds to $0.74$\nand the second \nrounds to $0.75$, so we can't be sure of the correct value in\nthe second decimal place; we need to pick a larger $n$.\nAs it turns out, we need to go to $n=12$ to get two bounds that both\nround to the same value, which turns out to be $0.75$. 
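\nA quick numerical check of these values is easy to script. The following sketch (Python with NumPy; the helper is ours, not part of the text) reproduces the trapezoid sums above:\n\\begin{verbatim}\nimport numpy as np\n\ndef trap(f, a, b, n):\n    # (f(x_0)/2 + f(x_1) + ... + f(x_{n-1}) + f(x_n)/2) * dx\n    x = np.linspace(a, b, n + 1)\n    dx = (b - a) / n\n    return (f(x[0]) / 2 + f(x[1:-1]).sum() + f(x[-1]) / 2) * dx\n\nf = lambda x: np.exp(-x**2)\nprint(trap(f, 0, 1, 6))    # 0.74512...\nprint(trap(f, 0, 1, 12))   # 0.74640...\n\\end{verbatim}\n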
For comparison,\nusing $12$ rectangles to approximate the area gives $0.7727$, which is\nconsiderably less accurate than the approximation using six trapezoids.\n\nIn practice it\ngenerally pays to start by requiring better than the maximum possible\nerror; for example, we might have initially required $E(\\Delta\nx)<0.001$, or \n\\begin{eqnarray*}\n  {1\\over12}(2){1\\over n^2} &<& 0.001\\cr\n  {1\\over6}(1000)&<&n^2\\cr\n  12.91\\approx\\sqrt{500\\over3}&<&n\n\\end{eqnarray*}\nHad we immediately tried $n=13$ this would have given us the desired\nanswer. \n\\end{solution}\n\nThe trapezoid approximation works well, especially compared to\nrectangles, because the tops of the trapezoids form a reasonably good\napproximation to the curve when $\\Delta x$ is fairly small. We can\nextend this idea: what if we try to approximate the curve more\nclosely by using something other than a straight line? The obvious\ncandidate is a parabola: If we can approximate a short piece of the\ncurve with a parabola with equation $\\ds y=ax^2+bx+c$, we can easily\ncompute the area under the parabola.\n\nThere are an infinite number of parabolas through any two given\npoints, but only one through three given points. If we find a parabola\nthrough three consecutive points $(x_i,f(x_i))$,\n$(x_{i+1},f(x_{i+1}))$, $(x_{i+2},f(x_{i+2}))$ on the curve, it should\nbe quite close to the curve over the whole interval $[x_i,x_{i+2}]$,\nas in Figure~\\ref{fig:one parabola}. If we divide the interval\n$[a,b]$ into an even number of subintervals, we can then approximate\nthe curve by a sequence of parabolas, each covering two of the\nsubintervals. For this to be practical, we would like a simple formula\nfor the area under one parabola, namely, the parabola through\n$(x_i,f(x_i))$, $(x_{i+1},f(x_{i+1}))$, and\n$(x_{i+2},f(x_{i+2}))$. That is, we should attempt to write down the\nparabola $y=ax^2+bx+c$ through these points and then integrate it, and\nhope that the result is fairly simple. Although the algebra involved\nis messy, this turns out to be possible. The algebra is well within\nthe capability of a good computer algebra system like Sage, so we will\npresent the result without all of the algebra.\n\nTo find the parabola, we solve these three equations\nfor $a$, $b$, and $c$:\n\\begin{eqnarray*}\n  f(x_i)&=&a(x_{i+1}-\\Delta x)^2+b(x_{i+1}-\\Delta x)+c\\cr\n  f(x_{i+1})&=&a(x_{i+1})^2+b(x_{i+1})+c\\cr\n  f(x_{i+2})&=&a(x_{i+1}+\\Delta x)^2+b(x_{i+1}+\\Delta x)+c\n\\end{eqnarray*}\n\nNot surprisingly, the solutions turn out to be quite\nmessy. 
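\nIf Sage is not at hand, the same verification takes only a few lines in SymPy (a sketch; the symbol names are our own):\n\\begin{verbatim}\nimport sympy as sp\n\n# solve for the parabola a*x^2 + b*x + c through the three points,\n# then integrate it over [x1 - h, x1 + h]\nx, x1, h, f0, f1, f2 = sp.symbols('x x1 h f0 f1 f2')\na, b, c = sp.symbols('a b c')\nsol = sp.solve([a*(x1 - h)**2 + b*(x1 - h) + c - f0,\n                a*x1**2 + b*x1 + c - f1,\n                a*(x1 + h)**2 + b*(x1 + h) + c - f2],\n               [a, b, c], dict=True)[0]\narea = sp.integrate(a*x**2 + b*x + c, (x, x1 - h, x1 + h)).subs(sol)\nprint(sp.simplify(area))   # h*(f0 + 4*f1 + f2)/3\n\\end{verbatim}\n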
Nevertheless, Sage can easily compute and simplify the integral\nto get\n$$\n  \\int_{x_{i+1}-\\Delta x}^{x_{i+1}+\\Delta x} ax^2+bx+c\\,dx=\n  {\\Delta x\\over3}(f(x_i)+4f(x_{i+1})+f(x_{i+2})).\n$$\nNow the sum of the areas under all parabolas is\n$$\n  \\displaylines{\n  {\\Delta x\\over3}(f(x_0)+4f(x_{1})+f(x_{2})+f(x_2)+4f(x_{3})+f(x_{4})+\\cdots\n  +f(x_{n-2})+4f(x_{n-1})+f(x_{n}))=\\cr\n  {\\Delta x\\over3}(f(x_0)+4f(x_{1})+2f(x_{2})+4f(x_{3})+2f(x_{4})+\\cdots\n  +2f(x_{n-2})+4f(x_{n-1})+f(x_{n})).\\cr}\n$$\n\nThis is just slightly more complicated than the formula for\ntrapezoids; we need to remember the alternating 2 and 4 coefficients, and that the interval must be divided into an \\emph{even} number of subintervals.\nThis approximation technique is referred to as \\dfont{Simpson's Rule}.\n\n\\figure[H]\n%\\texonly\n\\centerline{\\vbox{\\beginpicture\n\\normalgraphs\n%\\ninepoint\n\\setcoordinatesystem units <3truecm,8truecm>\n\\setplotarea x from 0.8 to 2.2, y from 0 to 0.4\n\\axis bottom shiftedto y=0 ticks length <2pt> \n  withvalues {$x_i$} {$x_{i+1}$} {$x_{i+2}$} / at 1 1.5 2 / /\n\\put {$(x_i,f(x_i))$} [r] <-3pt,0pt> at 1 0.163\n\\put {$(x_{i+2},f(x_{i+2}))$} [l] <3pt,0pt> at 2 0.381\n\\setquadratic\n\\plot 1.000 0.163 1.050 0.138 1.100 0.120 1.150 0.109 1.200 0.102\n1.250 0.100 1.300 0.102 1.350 0.107 1.400 0.116 1.450 0.128\n1.500 0.142 1.550 0.158 1.600 0.177 1.650 0.197 1.700 0.219\n1.750 0.243 1.800 0.268 1.850 0.295 1.900 0.322 1.950 0.351\n2.000 0.381 /\n\\setdashes <2pt>\n\\plot \n1.000 0.162 1.050 0.149 1.100 0.137 1.150 0.129 1.200 0.123\n1.250 0.120 1.300 0.119 1.350 0.121 1.400 0.125 1.450 0.132\n1.500 0.142 1.550 0.154 1.600 0.169 1.650 0.186 1.700 0.206\n1.750 0.229 1.800 0.254 1.850 0.282 1.900 0.312 1.950 0.346\n2.000 0.381 /\n\\endpicture}}\n%\\caption\n%{A parabola (dashed) approximating a curve (solid).\n%(\\expandafter\\url\\expandafter{\\liveurl jsxgraph/numerical_int.html}%\n%AP\\endurl)}\n%\\endcaption\n%\\endtexonly\n%\\figrdef{fig:one parabola}\n%\\htmlfigure{Integration_techniques-numerical_one_parabola.html}\n\\caption{\\label{fig:one parabola}\nA parabola (dashed) approximating a curve (solid).}\n\\endfigure\n\n\\noindent As with the trapezoid method, this is useful only with an error\nbound:\n\n\\begin{theorem}{Error for Simpson's Approximation}{Error for Simpson's Approximation}\\label{Error for Simpson's Approximation}\nSuppose $f$ has a fourth derivative $f^{(4)}$ everywhere on the\ninterval $[a,b]$, and $|f^{(4)}(x)|\\le M$ for all $x$ in the\ninterval. With $\\Delta x= (b-a)/n$, an error bound for Simpson's\napproximation is\n$$\n  E(\\Delta x) = {b-a\\over180}M(\\Delta x)^4={(b-a)^5\\over 180n^4}M.\n$$\n\\end{theorem}\n\n\\begin{example}{Approximate an Integral With Parabolas}{Approximate an Integral With Parabolas}\\label{Approximate an Integral With Parabolas}\nLet us again approximate $\\ds\\int_0^1 e^{-x^2}\\,dx$ to two\ndecimal places.  \n\\end{example}\n\n\\begin{solution}\nThe fourth derivative of $\\ds f=e^{-x^2}$ is\n$\\ds(16x^4-48x^2+12)e^{-x^2}$; on $[0,1]$ this is at most\n$12$ in absolute value.  We begin by estimating the number of\nsubintervals we are likely to need. 
To get two decimal places of\naccuracy, we will certainly need $E(\\Delta x)<0.005$, but taking a cue\nfrom our earlier example, let's require $E(\\Delta x)<0.001$:\n\\begin{eqnarray*}\n  {1\\over180}(12){1\\over n^4} &<& 0.001\\cr\n  {200\\over3}&<&n^4\\cr\n  2.86\\approx\\root 4 \\of {200\\over3}&<&n\n\\end{eqnarray*}\nSo we try $n=4$, since we need an even number of subintervals. Then\nthe error bound is $\\ds12/180/4^4<0.0003$ and the approximation is\n$$\n  (f(0)+4f(1/4)+2f(1/2)+4f(3/4)+f(1)){1\\over3\\cdot4}\n  \\approx 0.746855.\n$$\nSo the true value of the integral is between $0.746855-0.0003=0.746555$ and\n$0.746855+0.0003=0.747155$, both of which round to $0.75$.\n\\end{solution}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:Numberical Integration}}\n\n\\begin{enumialphparenastyle}\n\nIn the following problems, compute the trapezoid and Simpson\napproximations using 4 subintervals, and compute the error bound\nfor each. (Finding the maximum values of the second and fourth\nderivatives can be challenging for some of these; you may use a\ngraphing calculator or computer software to estimate the maximum\nvalues.) \n\n%If you have access to Sage or similar software, approximate\n%each integral to two decimal places.  You can use this\n%\\texonly\n%\\expandafter\\url\\expandafter{\\sageurl simpson_and_trapezoid}% \n%\\endurl\\ to get started.\n%\\endtexonly\n%\\htmlknowl{Integration_techniques-Simpson_and_trapezoid_in_sage.html}{Sage worksheet to get started.}\n\n\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_1^3 x\\,dx$\n\\begin{sol}\n T,S: $4\\pm0$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_0^3 x^2\\,dx$\n\\begin{sol}\n T: $9.28125\\pm0.28125$; S: $9\\pm0$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_2^4 x^3\\,dx$\n\\begin{sol}\n T: $60.75\\pm1$; S: $60\\pm0$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_1^3 {1\\over x}\\,dx$\n\\begin{sol}\n T: $1.1167\\pm 0.0833$; S: $1.1000\\pm 0.0167$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_1^2 {1\\over 1+x^2}\\,dx$\n\\begin{sol}\n T: $0.3235\\pm 0.0026$; S: $0.3217\\pm 0.000065$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_0^1 x\\sqrt{1+x}\\,dx$\n\\begin{sol}\n T: $0.6478\\pm 0.0052$; S: $0.6438\\pm 0.000033$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_1^5 {x\\over 1+x}\\,dx$\n\\begin{sol}\n T: $2.8833\\pm 0.0834$; S: $2.9000\\pm 0.0167$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_0^1 \\sqrt{x^3+1}\\,dx$\n\\begin{sol}\n T: $1.1170\\pm 0.0077$; S: $1.1114\\pm 0.0002$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_0^1 \\sqrt{x^4+1}\\,dx$\n\\begin{sol}\n T: $1.097\\pm 0.0147$; S: $1.089\\pm 0.0003$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n $\\ds\\int_1^4 \\sqrt{1+1/x}\\,dx$\n\\begin{sol}\n T: $3.63\\pm 0.087$; S: $3.62\\pm 0.032$\n\\end{sol}\n\\end{ex}\n%%%%%%%%%%\n\n%%%%%%%%%%\n\\begin{ex}\n Using Simpson's rule on a parabola $f(x)$, even with just\ntwo subintervals, gives the exact value of the integral, because the\nparabolas used to approximate $f$ will be $f$ itself. Remarkably,\nSimpson's rule also computes the integral of a cubic function\n$f(x)=ax^3+bx^2+cx+d$ exactly. 
Show this is true by showing that\n$$\n  \\int_{x_0}^{x_2}\n  f(x)\\,dx={x_2-x_0\\over3\\cdot2}(f(x_0)+4f((x_0+x_2)/2)+f(x_2)).\n$$\nThis does require a bit of messy algebra, so you may prefer to use Sage.\n\\end{ex}\n%%%%%%%%%%\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "72b665b50230de48d9d787d5527d056aae74aa06", "size": 15488, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "7-techniques-of-integration/7-6-numerical-int.org.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "7-techniques-of-integration/7-6-numerical-int.org.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "7-techniques-of-integration/7-6-numerical-int.org.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2801822323, "max_line_length": 151, "alphanum_fraction": 0.697572314, "num_tokens": 5866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417500561683, "lm_q2_score": 0.8840392771633079, "lm_q1q2_score": 0.5820669312682221}}
{"text": "%!TEX root = paper.tex\r\n\\subsection{Depth-averaged \\nheswe}\r\nThe \\nheswe\\ is linked to the solution of the Euler equations using kinematic boundary conditions on the surface and the bottom. The basis of the method is a projection method (see e.g. \\cite{Chorin.1968}) solving the time-discretized equations stepwise. \r\nThe pressure is decomposed into a hydrostatic and a \\nh\\ part \\cite{CasulliStelling.1998, StansbyZhou.1998}. This splitting has the advantage that the solver for the \\nh\\ equations can resort to the solver for the shallow water equations. \r\nThe \\da\\ version of this apporach was derived from a multi-layer formulation  \\cite{StellingZijlema.2003} applying linear approximations between different layers, s.t. also the \\nhp\\ is assumed to be linear in the multi-layer equations. On the other hand, the vertical Euler equation leads to a quadratic vertical profile of the \\nhp when considering \\da\\ equations. Therefore, there are two possibilities to choose the pressure profile.\r\nUnder the assumption of small vertical variations of horizontal velocities, the \\danheswe\\ is derived out of the Euler equations for \\da\\ variables\r\n\\begin{align}\r\n  (u,v)=\\bu:=\\frac{1}{h}\\int_{-d}^{\\xi}{\\bU}\\,dz, \\qquad w:=\\frac{1}{h}\\int_{-d}^{\\xi}{W}\\,dz, \\qquad \\pnh&:=\\frac{1}{h}\\int_{-d}^{\\xi}{\\Pnh}\\,dz. \\label{eq:def_pnh}\r\n\\end{align}\r\nThe \\danheswe\\ is described with the equation system\r\n\\begin{align}\r\n\\partial_t \\xi+\\bnabla \\cdot (h\\bu)=&0, \\label{eq:nh_conti} \\\\\r\n\\partial_t \\bu+(\\bu \\cdot \\bnabla)\\bu=&-g\\bnabla \\xi-\\frac{1}{\\rho h}\\left(\\bnabla \\left(hp^{nh} \\right) -\\fnh\\pnh \\bnabla d\\right), \\label{eq:nh_Momxy} \\\\\r\n\\partial_t w+(\\bu \\cdot \\bnabla)w=&\\frac{1}{\\rho h}\\fnh\\pnh, \\label{eq:nh_Momz} \\\\\r\n2 \\left(w+\\bu \\cdot \\bnabla d \\right)=&-h\\left(\\bnabla \\cdot \\bu\\right), \\label{eq:nh_closure}\r\n\\end{align} \r\nwhere the scalar $\\fnh$ refers to the chosen pressure profile. In case of the linear pressure profile\r\n\\begin{equation}\r\n  P^{nh}(z)=\\frac{\\Pnhd}{h}(\\xi -z),\r\n\\end{equation}\r\nwith $\\Pnhd$ being the \\nhp\\ at the bottom, it is $\\fnh=2$, whereas it is $\\fnh=1.5$ in the case of the quadratic pressure profile\r\n\\begin{equation}\r\n P^{nh}(z)=\\frac{1}{2}\\frac{\\Gamma}{h}\\left(-(z+d)^2+h^2\\right)%+\\rho\\Phi\\left(\\xi-z\\right) \\label{eq:Pnh_quadr_z}\r\n\\end{equation}\r\nwith\r\n\\begin{align*}\r\n  \\Gamma &:= \\rho h\\left( -(\\bnabla \\cdot \\partial_t \\bu) - (\\bu \\cdot \\bnabla) (\\bnabla \\cdot \\bu) + (\\bnabla \\cdot \\bu)^2 \\right). \\\\\r\n  %\\Phi &:= - \\bnabla d \\cdot \\left( \\partial_t  \\bu + (\\bu \\cdot \\bnabla)\\bu \\right) - \\bu \\cdot \\bnabla(\\bnabla d) \\cdot \\bu.\r\n\\end{align*}\r\nFor a more detailed derivation see \\cite{Jeschke.2016}.\r\n\r\n\r\nThe spatial discretization of the shallow water equations implements the $P^{NC}_1$--$P_1$ finite element method as described in \\cite{Hanert.2005, LeRouxPouliot.2008} with the $P^{NC}_1$--$P_1$ advection scheme of \\cite{Androsov.2011}. It uses nonconforming linear basis functions for the horizontal velocities and conforming linear basis functions for the height. A two-dimensional computational domain is represented by a structured triangulation generated with the adaptive mesh generator amatos \\cite{Behrens.2005}. 
The time-stepping scheme applies the Leapfrog method stabilized with the Robert-Asselin filter \\cite{Asselin.1972} using $\\alpha=0.025$.\r\n\r\nThe \\danheswe\\ \\eqref{eq:nh_conti}--\\eqref{eq:nh_Momz} is solved based on the projection method proposed in \\cite{StellingZijlema.2003}. The advantage of this method is that the shallow water solver does not have to change in order to solve the \\nh\\ equation system. This projection method involves the solution of a Poisson equation for the \\nhp\\ in each timestep, which is constructed as follows: The time-discretized horizontal momentum equations are written as an intermediate solution in the present timestep plus a correction term depending on the \\nhp. Together with the time-discretized vertical momentum equation, this splitting is substituted into equation \\eqref{eq:nh_closure} in its weak formulation to obtain the Poisson equation. The emerging linear equation system is solved by means of a GMRES algorithm. The computed \\nhp\\ is added to the intermediate horizontal velocities and to the vertical velocity of the previous time step. At this point, the velocities have been updated and the numerical solution of the continuity equation completes the computation of the timestep.\r\nTo discretize the \\nhp\\ and vertical velocity, conforming linear basis functions are used. The same approach was taken in \\cite{Fuchs.2013}, but we retain a bathymetry gradient term that is absent there. ", "meta": {"hexsha": "3bace8c6122e8bc2b1e8798f12d006fa68e4c848", "size": 4593, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_stars_repo_name": "mandli/coastal", "max_stars_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_issues_repo_name": "mandli/coastal", "max_issues_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_forks_repo_name": "mandli/coastal", "max_forks_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 127.5833333333, "max_line_length": 1094, "alphanum_fraction": 0.7474417592, "num_tokens": 1344, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392695254318, "lm_q2_score": 0.658417500561683, "lm_q1q2_score": 0.5820669262393108}}
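As a concrete illustration of the two pressure profiles discussed above, here is a minimal sketch (not the solver's code; the inputs p_bottom and gamma are assumed given, and $h=\xi+d$ follows from the depth-averaging bounds). Both profiles vanish at the free surface $z=\xi$ and differ only in how the pressure varies towards the bottom.
\begin{verbatim}
# A minimal sketch evaluating the linear and quadratic non-hydrostatic
# pressure profiles; all names are illustrative assumptions.
def pressure_linear(z, xi, d, p_bottom):
    """Linear profile P(z) = P_b/h * (xi - z), for which f_nh = 2."""
    h = xi + d
    return p_bottom / h * (xi - z)

def pressure_quadratic(z, xi, d, gamma):
    """Quadratic profile P(z) = Gamma/(2h) * (h**2 - (z + d)**2), f_nh = 1.5."""
    h = xi + d
    return 0.5 * gamma / h * (h**2 - (z + d)**2)

# Both profiles vanish at the surface z = xi:
assert abs(pressure_linear(2.0, 2.0, 10.0, 1.0e3)) < 1e-12
assert abs(pressure_quadratic(2.0, 2.0, 10.0, 1.0e3)) < 1e-12
\end{verbatim}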
{"text": "\\vsssub\n\\subsubsection{~$S_{tr}$: Triad nonlinear interactions (LTA)} \\label{sec:TR1}\n\\vsssub\n\n\\opthead{TR1}{SWAN}{A. Van der Westhuysen}\n\nNonlinear triad interactions are modelled using the LTA model of \n\\cite{rep:Eld96}. This stochastic model is based on the \nBoussinesq-type deterministic equations of \\cite{art:MS93}. \nThese deterministic equations are ensemble averaged, and the hierarchy \nof spatial evolution equations truncated by a zero-fourth-order-cumulant \nassumption, yielding a set of equations for the spectral and bispectral \nevolution in one-dimension. The bispectrum appearing in the spectral \nevolution equation is split up into a biamplitude and a biphase. The \nbiphase corresponding to the self interaction of the peak frequency \n$\\sigma_p$ is parameterised as a function of the local Ursell number by\n\n\\begin{equation}\n   \\beta(\\sigma_p,\\sigma_p) = -\\frac{\\pi}{2} + \\frac{\\pi}{2}\\tanh\\left( \\frac{0.2}{Ur} \\right),\n   \\label{eq:biphase}\n\\end{equation}\n\n\\noindent\nin which the spectrally based Ursell number $Ur$ is given by\n\n\\begin{equation}\n   Ur = \\frac{g}{8\\sqrt{2} \\pi^2} \\frac{H_s {T_{m01}}^2}{d^2}\\  .\n   \\label{eq:ursell}\n\\end{equation}\n\n\nThe biamplitude is obtained by spatially integrating the evolution equation\nfor the bispectrum, by which the biamplitude is rendered a spatially local\nfunction. This result in a expression for the biamplitude which has a\nspatially slowly-varying component and a fast-oscillating component, of which\nthe latter is neglected. Using the derived expressions for the biphase and\nbiamplitude, the spectral evolution equation (a one-equation model) can be\nsolved. To reduce the computational cost even further, the complete set of all\ninteracting triads are represented by only the set of \\textit{self sum\n  interactions}, that is, triads in which a component of frequency $\\sigma$\ninteracts with a component of the same frequency to exchange energy flux with\na component of frequency $\\sigma + \\sigma = 2\\sigma$. The final expression for\nthe effect of triad interactions on a component with frequency $\\sigma$ is\nmade up of two contributions---one adding energy flux to $\\sigma$ (transferred\nflux arriving from $1/2 \\sigma$) and one subtracting energy flux from $\\sigma$\n(transfer going to $2 \\sigma$). The expression implemented, adapted for radian\nfrequencies, reads:\n\n\\begin{equation}\n   S_{\\rm nl3} (\\sigma,\\theta) = S^-_{\\rm nl3} (\\sigma,\\theta) + S^+_{\\rm nl3}\n   (\\sigma,\\theta)  ,\n   \\label{eq:snl3}\n\\end{equation}\n\n\\noindent\nwith\n\n\\begin{equation}\n  S^+_{\\rm nl3} (\\sigma,\\theta) = \\max [0,\\alpha_{\\rm EB} 2 \\pi c c_g J^2 |\\sin\\beta| \\left \\{ E^2(\\sigma/2,\\theta) - \n  2 E(\\sigma/2,\\theta) E(\\sigma,\\theta) \\right \\}]  ,\n  \\label{eq:snl3plu}\n\\end{equation}\n\n\\noindent\nand\n\n\\begin{equation}\n  S^-_{\\rm nl3} (\\sigma,\\theta) = -2 S^+_{\\rm nl3} (2\\sigma,\\theta)\\  .\n  \\label{eq:snl3min}\n\\end{equation}\n\n\\noindent\nBecause of a Jacobian in the transfer of the energy flux from $\\sigma$ to $2\n\\sigma$, the flux density arriving at $2 \\sigma$ is half that leaving $\\sigma$\n(hence the factor 2 appearing in Eq. (\\ref{eq:snl3min})). 
The interaction\ncoefficient $J$, describing self interaction in the nonlinearity range $0 \\leq\nUr \\leq 1$, is given by \\citep{art:MS93}:\n\n\\begin{equation}\n   J = \\frac{k^2_{\\sigma/2} (gd + 2 c^2_{\\sigma/2})}\n      {k_\\sigma d (gd + \\frac{2}{15} gd^3 k^2_\\sigma - \\frac{2}{5} \\sigma^2\n        d^2)}\\ \\:\\:\\:  .\n   \\label{eq:intcoef}\n\\end{equation}\n\n\\noindent\nThe LTA formulation is implemented along each propagation direction of \nthe directional spectrum, yielding an isotropic, directionally decoupled \nrepresentation of triad interaction. The value of the proportionality \ncoefficient is set at $\\alpha_{\\rm EB}$ = 0.05. The results produced by \nthe LTA are furthermore quite sensitive to the choice of the frequency up \nto which the interactions are calculated, denoted here as $f_{max,EB}$. \n\\citep{rep:Eld95} recommends that the interactions be computed \nup to a frequency of 2.5 times the mean frequency ($f_{max,EB} = \n2.5 f_{m01}$).\n", "meta": {"hexsha": "0c1363c56decf2b803d87d2abae3394adff08ebe", "size": 4034, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "WW3/manual/eqs/TR1.tex", "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "WW3/manual/eqs/TR1.tex", "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_forks_repo_path": "WW3/manual/eqs/TR1.tex", "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "avg_line_length": 42.0208333333, "max_line_length": 118, "alphanum_fraction": 0.7332672286, "num_tokens": 1219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392756357326, "lm_q2_score": 0.658417487156366, "lm_q1q2_score": 0.5820669184116131}}
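The biphase and Ursell-number parameterisations above are simple enough to evaluate directly. The following sketch (illustrative helper functions, not WW3 source code) shows how the biphase tightens towards $-\pi/2$ as a wave of fixed height and period moves into shallower water.
\begin{verbatim}
import math

G = 9.81  # gravitational acceleration [m/s^2], assumed value

def ursell(hs, tm01, d):
    """Spectral Ursell number Ur = g/(8*sqrt(2)*pi^2) * Hs*Tm01^2 / d^2."""
    return G / (8.0 * math.sqrt(2.0) * math.pi**2) * hs * tm01**2 / d**2

def biphase(ur):
    """beta = -pi/2 + (pi/2)*tanh(0.2/Ur): near 0 for small Ur (deep water),
    approaching -pi/2 for large Ur (shallow water)."""
    return -math.pi / 2 + math.pi / 2 * math.tanh(0.2 / ur)

# Example: an Hs = 1 m, Tm01 = 8 s sea state at 20 m and at 2 m depth.
for depth in (20.0, 2.0):
    ur = ursell(1.0, 8.0, depth)
    print(f"d = {depth:5.1f} m  Ur = {ur:6.3f}  beta = {biphase(ur):+.3f} rad")
\end{verbatim}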
{"text": "\\documentclass{article}\n\n\\usepackage{graphicx}\n\\usepackage{parskip}\n\\usepackage{amsmath}\n\n\\title{BSP Tree}\n\\author{Gerard Martin Teixidor}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Overview}\nA binary space partitioning (BSP) tree, is a data structure which subdivides space into convex subspaces by using hyperplanes~\\cite{original}. This data structure can be seen as a generalization of a \\textit{k}-d tree which, unlike a \\textit{k}-d tree, the orientation and position of the spiting planes are arbitrary.\n\nIn a BSP tree, each inner node represents an hyperplane that splits the space into two half-spaces. The right child represents the positive half-space, while the left child represents negative half-space. The leaf nodes, unlike the inner nodes, nodes does contain any information about the hyperplane and represent a convex subspace.\n\n\\begin{figure}[h]\n\\includegraphics[width=0.9\\linewidth]{bsp_tree.png}\n\\caption{Example of a BSP tree.}\n\\end{figure}\n\nBecause of the freedom of choosing the position and orientation of the hyperplanes, a common approach is to use the planes defined by the input data. Given a set of polygons as the input data, when all the hyperplanes are polygons from the input data, it is called an auto-partition BSP tree.\n\nIn the computer graphics field, BSP trees are used in multiple applications. The most common applications are:\n\n\\begin{description}\n\\item[Constructive solid geometry] Auto-partition BSP trees allow to represent volumetric polygonal closed objects by defining if a space space region is inside or outside the object. This representation also allows to check very efficiently if a point belongs inside or outside the object. Figure~\\ref{object} shows an example of a 2D BSP tree representing an object.\n\n\\item[Boolean operations] Another interesting property of an object represented by a BSP tree is the ability to perform very efficiently boolean operation between objects. Given two objects, $A$ and $B$, being able to perform the boolean operations $\\{\\cup, \\cap, \\setminus, \\ominus\\}$. As a simple overview, the main idea behind this operations is to split one of the two BSP tree by the planes of the other BSP tree, and merging them with the corresponding boolean operation.\n\n\\item[Visibility priority] When rendering the polygons on the screen, an auto-partition BSP tree can be used to determine the drawing priority of those polygons~\\cite{siggraph}.\n\\end{description}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{bsp_tree_object.png}\n\\caption{Auto-partition BSP tree which represents a closed object.}\n\\label{object}\n\\end{figure}\n\n\\subsection*{Visibility priority}\nIn computer graphics, the visibility problem of a 3D scene consists on defining which parts of the geometry are going to be visible on screen. The painter's algorithm allows to solve this problem by painting those polygons from back to front by overdrawing those portions of the screen which are occluded by the polygons painted on top. 
\n\nIn static scenes, an auto-partition BSP tree can be used to solve this problem since it stores implicitly the polygon painting order from any view point.\n\n\\subsubsection*{Generation}\nGiven a set of polygons as the input data, to generate an auto-partition BSP tree the following algorithm is performed:\n\\begin{enumerate}\n\\item A polygon is chosen from the list.\n\\item For each other polygon in the list:\n\\begin{enumerate}\n\\item If the polygon is completely in front of the hyperplane, it is added to the right child.\n\\item If the polygon is completely behind the hyperplane, it is added to the left child.\n\\item If the polygon intersects with the hyperplane, the polygon is split and each new fragment is added to the corresponding child.\n\\item If the polygon lies in the plane, it is added to the list of polygons of that node.\n\\end{enumerate}\n\\item This algorithm is applied recursively to each child until there is only one polygon in each node.\n\\end{enumerate}\n\nIt is important to note that the selection of the polygon which specifies the hyperplane (step~1) is going to impact the number of fragments generated down the tree. There exist different heuristics which try to minimize the number of splits.\n\n\\subsubsection*{Image generation}\nOnce the BSP tree has been generated, the tree can be traversed in linear time in order to determine the visibility priorities of each polygon.\n\nFrom the root node:\n\\begin{enumerate}\n\\item If the view location is in front of the hyperplane:\n\\begin{itemize}\n\\item Paint the left child (behind) polygons recursively.\n\\item Paint the current node polygons. \n\\item Paint the right child (in front) polygons recursively.\n\\end{itemize}\n\\item If the view location is behind the hyperplane:\n\\begin{itemize}\n\\item Paint the right child (in front) polygons recursively.\n\\item Paint the current node polygons. \n\\item Paint the left child (behind) polygons recursively.\n\\end{itemize}\n\\end{enumerate}\n\nChecking whether the view is in front of or behind the plane is just a matter of computing the signed distance of the view point with respect to the hyperplane.\n\n\\section*{Personal opinion}\nNowadays BSP trees are not that relevant in the computer graphics field and seem to have been taken over by other, more advanced techniques. This is obvious in the case of the visibility problem. Nowadays the default solution to this problem is the use of the Z-Buffer. This alternative is not a data structure by itself, but instead is a hardware accelerated GPU buffer. This GPU buffer allows performing the priority sorting directly on the GPU, at pixel level, with a minimal performance penalty.\n\nAlso, in the case of solid modeling, the rendering of those objects does not require them to be represented by BSP trees. Like with the visibility problem, this rendering can be performed with object primitives and the help of another hardware buffer called the stencil buffer.\n\n\\section*{Experiments}\nOne of the main problems with BSP trees is the size of the tree. In the case of a BSP tree constructed from a set of polygons, the size of the tree can be much larger than the number of polygons $n$ due to the hyperplanes splitting those polygons. While the expected number of fragments in a 2D auto-partition BSP tree is bounded by $O(n \\log n)$, in higher dimensions the known bound is much wider. 
In the case of 3D BSP trees, there can be as many as $O(n^2)$ fragments.\n\nEven though there can be as many as a quadratic number of fragments, the reality is that, due to the principle of locality, BSP trees contain far fewer fragments. The principle of locality states that polygons tend to be small compared to the whole space.\n\nThe implemented experiment tries to show this reality by constructing multiple BSP trees and comparing the number of fragments with respect to the number of polygons. The BSP trees generated are auto-partition BSP trees built from real models and from a set of random polygons.\n\nAs we can see in the results (Table~\\ref{results}), the number of fragments is in practice far from the upper bound. This experiment also shows the principle of locality: when using random polygons, the number of fragments increases with respect to the models. This is due to the models being composed of small polygons, while the size of the random polygons is variable.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n\\textbf{Polygons} & $\\boldsymbol{Fragments/Polygons^2}$ \\\\\n\\hline\\hline\nMonkey & 0.004\\% \\\\\nSphere & 0.028\\% \\\\\nTeapot & 0.017\\% \\\\\nRandom & 1.633\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{Table showing the number of fragments generated by the BSP tree.}\n\\label{results}\n\\end{table}\n\n\\section*{Conclusions}\nEven if somewhat forgotten nowadays, BSP trees are a very interesting data structure which is still being used. It seems that one of the causes of their decrease in popularity is that many applications now use more advanced and efficient techniques which allow parallelizing the computation (e.g. the Z-Buffer for the visibility problem).\n\n\\bibliographystyle{unsrt}\n\\bibliography{report.bib}\n\n\\end{document}", "meta": {"hexsha": "81d0889088926ae20511666a310d5fa437e4cab7", "size": 8034, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/report/report.tex", "max_stars_repo_name": "GerardMT/BSP-Tree", "max_stars_repo_head_hexsha": "30d7fa8e88fa67061d704523952d7bcd84d8a8d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-08-01T06:25:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-03T09:29:29.000Z", "max_issues_repo_path": "docs/report/report.tex", "max_issues_repo_name": "GerardMT/BSP-Tree", "max_issues_repo_head_hexsha": "30d7fa8e88fa67061d704523952d7bcd84d8a8d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/report/report.tex", "max_forks_repo_name": "GerardMT/BSP-Tree", "max_forks_repo_head_hexsha": "30d7fa8e88fa67061d704523952d7bcd84d8a8d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.512605042, "max_line_length": 505, "alphanum_fraction": 0.7922579039, "num_tokens": 1799, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7799928900257127, "lm_q1q2_score": 0.5819831095349307}}
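To make the generation and traversal steps described in the report concrete, here is a deliberately simplified sketch (not the report's implementation; all names are illustrative) that builds an auto-partition BSP over 2D line segments, with segments standing in for polygons. For brevity, segments lying in the splitting line are folded into the front subtree rather than stored at the node.
\begin{verbatim}
# Simplified 2D auto-partition BSP: build + painter's (back-to-front) order.
EPS = 1e-9

def side(p, a, b):
    """Signed area test: > 0 if p is left of the oriented line a -> b."""
    s = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    return 0.0 if abs(s) < EPS else s

def split(seg, a, b):
    """Point where segment seg crosses the line a -> b (spanning case only)."""
    p, q = seg
    sp, sq = side(p, a, b), side(q, a, b)
    t = sp / (sp - sq)  # linear interpolation of the signed distance
    return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))

class Node:
    def __init__(self, seg):
        self.seg, self.front, self.back = seg, None, None

def build(segs):
    if not segs:
        return None
    a, b = segs[0]                      # first segment defines the hyperplane
    node, front, back = Node(segs[0]), [], []
    for seg in segs[1:]:
        sp, sq = side(seg[0], a, b), side(seg[1], a, b)
        if sp >= 0 and sq >= 0:
            front.append(seg)           # completely in front (or coplanar)
        elif sp <= 0 and sq <= 0:
            back.append(seg)            # completely behind
        else:                           # spanning: split into two fragments
            m = split(seg, a, b)
            first, second = (seg[0], m), (m, seg[1])
            (front if sp > 0 else back).append(first)
            (back if sp > 0 else front).append(second)
    node.front, node.back = build(front), build(back)
    return node

def paint_order(node, eye, out):
    """Append segments back-to-front with respect to the eye position."""
    if node is None:
        return
    a, b = node.seg
    if side(eye, a, b) > 0:
        near, far = node.front, node.back
    else:
        near, far = node.back, node.front
    paint_order(far, eye, out)           # farther half-space first
    out.append(node.seg)
    paint_order(near, eye, out)

# Tiny usage example.
segs = [((0, 0), (4, 0)), ((1, -1), (1, 3)), ((-2, 2), (5, 2))]
order = []
paint_order(build(segs), (3, -2), order)
print(order)
\end{verbatim}
Choosing the splitting segment is left to the input order here; shuffling the input, or trying a few candidates and keeping the one causing fewest splits, are common stand-ins for the heuristics the report mentions.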
{"text": "\\chapter{Neuron Update Optimisation}\n\n\n\\section{Initial Method}\nOriginally, each update iteration of the Hopfield network involved scanning through all neurons and computing the $h$ value. As this involves scanning through all the neurons in every step, it is clearly an $O(n)$ operation.\n\n\\section{Optimisation}\nFirst we shuffle the list of neurons\\footnote{Although this is an extra $O(n)$ operation, it has a negligible footprint to computing the $h$ value}. We then select the first neuron. If it is updatable then we are done. If not then continue with the second neuron, and so on. In the worst case, we will perform as bad as the initial method described above. In general, however, we will always beat or match the above algorithm as we avoid computing the $h$ value for every node.\n\n\n\nAt the beginning of the convergence process, the patterns tend to undergo a lot of transformations, and have a large number of updatable neurons. Depending on the size of the basin of attraction in which the pattern will land, and on the journey the pattern has to go trough until it reaches the basin, the number of updatable neurons during the journey varies greatly.\n\nAs a pattern approaches convergence, the number of updatable neurons decreases (as for a fixed point, there are no updatable neurons) so we slightly reach the old performance. In any case our algorithm is still at least as good as the initial method.\n", "meta": {"hexsha": "7c538b297ae1d33b4d04b7ab9e609412f51884a2", "size": 1416, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/optimise.tex", "max_stars_repo_name": "imperialhopfield/hopfield", "max_stars_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-07-30T10:00:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-10T15:49:06.000Z", "max_issues_repo_path": "report/optimise.tex", "max_issues_repo_name": "imperialhopfield/hopfield", "max_issues_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/optimise.tex", "max_forks_repo_name": "imperialhopfield/hopfield", "max_forks_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-12-19T13:06:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-03T13:32:21.000Z", "avg_line_length": 94.4, "max_line_length": 477, "alphanum_fraction": 0.793079096, "num_tokens": 299, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799928900257126, "lm_q2_score": 0.746138993030751, "lm_q1q2_score": 0.5819831095349305}}
{"text": "% !TEX root = main.tex\n\n\\chapter{Cauchy's Theorem}\n\\section{The Estimation Lemma}\nThe Estimation Lemma is an extremely important result that will be used to prove a number of important Theorems in subsequent sections.  For this reason, it has been promoted to the rank of `Theorem.'\n\n\\begin{theorem}[The Estimation Lemma]\nLet $f$ be a function which is continuous along a smooth path $\\Gamma$ given by the function $\\gamma : [a,b] \\to \\C$, then\n\\[\n\\abs{ \\int_{\\Gamma} f } \\leq \\int_a^b \\abs{ f(\\gamma(t))} \\abs{ \\gamma' (t) }\\ dt,\n\\] \n and if there exists a real number $M>0$ with $\\abs{f(z)} \\leq M$ for all $z \\in \\Gamma$,\n\\[\n\\abs{\\int_{\\Gamma} f} \\leq ML,\n\\]\nwhere $L$ is the length of $\\Gamma$.\n\\end{theorem}\n\\begin{exercise}\nUse the following steps to prove this Theorem.  \n\\begin{enumerate}\n\\item[(i)]  Define a new function $g:[a,b] \\to \\C$ via $\\displaystyle g(t)=f(\\gamma(t))\\gamma' (t)$.  Explain why the result is trivial if $\\displaystyle \\int_a^b g(t)\\ dt =0$.\n\n\\item[(ii)] Assuming $\\displaystyle \\int_a^b g(t)\\ dt \\neq 0$, define $\\lambda$ via \n\\[\n\\lambda = \\frac{\\abs{\\int_a^b g(t)\\ dt}}{\\int_a^b g(t)\\ dt}.\n\\]\nShow that $\\abs{\\lambda}=1$ and that\n\\[\n\\abs{\\int_a^b g(t)\\ dt } = \\int_a^b \\Re \\brac{ \\lambda g(t) }\\ dt.\n\\]\n\\item[(iii)] Show that $\\Re \\brac{ \\lambda g(t) } \\leq \\abs{g(t)}$ for all $t \\in [a,b]$.\n\\item[(iv)] Deduce the result from parts (ii) and (iii).\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{proof}\nBy definition of the integral of $f$ along $\\Gamma$, we have\n\\[\n\\abs{ \\int_{\\Gamma} f } = \\abs{ \\int_a^b f \\left( \\gamma(t) \\right) \\gamma' (t)\\ dt },\n\\]\nand thus we need to show that\n\\[\n\\abs{ \\int_a^b f \\left( \\gamma (t) \\right) \\gamma'(t)\\ dt } \\leq \\int_a^b \\abs{ f \\left( \\gamma (t) \\right) }\\ \\abs{ \\gamma'(t) }\\ dt.\n\\]\nLet $g(t)=f(\\gamma(t))\\gamma'(t)$, so that we need to show\n\\[\n\\abs{\\int_a^b g(t)\\ dt } \\leq \\int_a^b \\abs{g(t)}\\ dt.\n\\]\nNote that if $\\int_a^b g(t) = 0$, then the inequality holds trivially.  
If not, let\n\\[\n\\lambda = \\frac{\\abs{\\int_a^b g(t)\\ dt}}{\\int_a^b g(t)\\ dt},\n\\]\nand note that $\\abs{\\lambda} = 1$ and\n\\[\n\\lambda \\int_a^b g(t)\\ dt = \\abs{ \\int_a^b g(t)\\ dt }.\n\\]\nIt follows that\n\\begin{align*}\n\\abs{\\int_a^b g(t)\\ dt } &= \\int_a^b \\lambda g(t)\\ dt  \\\\\n& = \\int_a^b \\Re \\left( \\lambda g(t) \\right)\\ dt + i \\int_a^b \\Im \\left( \\lambda g(t) \\right)\\ dt.\n\\end{align*}\nSince the modulus is always real, we must have\n\\[\n\\int_a^b \\Im \\left( \\lambda g(t) \\right)\\ dt =0\n\\]\nand so\n\\[\n\\abs{\\int_a^b g(t)\\ dt } = \\int_a^b \\Re \\left( \\lambda g(t) \\right)\\ dt.\n\\]\nNow, since $\\Re (z) \\leq \\abs{z}$ for all $z \\in \\C$, we have\n\\[\n\\Re \\left( \\lambda g(t) \\right) \\leq \\abs{\\lambda g(t) } = \\abs{\\lambda} \\abs{g(t)} = \\abs{g(t)}.\n\\]\nTogether with Monotonicity of the real integral (That is to say, if $\\phi_1,\\phi_2:[a,b] \\to \\R $ with $\\phi_1(t) \\leq \\phi_2 (t)$ for all $t$, we have $\\int_a^b \\phi_1(t)\\ dt \\leq \\int_a^b \\phi_2 (t)\\ dt$ (Theorem~\\ref{t:realint})), we have\n\\begin{align*}\n\\abs{ \\int_a^b g(t)\\ dt } =&  \\int_a^b \\Re \\left( \\lambda g(t) \\right)\\ dt \\\\\n& \\leq \\int_a^b \\abs{g(t)}\\ dt.\n\\end{align*}\nIn other words, we have shown that\n\n\\[\n\\abs{\\int_{\\Gamma} f } = \\abs{ \\int_a^b f \\left( \\gamma (t) \\right) \\gamma'(t)\\ dt } \\leq \\int_a^b \\abs{ f \\left( \\gamma (t) \\right) }\\ \\abs{ \\gamma'(t) }\\ dt.\n\\]\n  For the second part, if there is some $M>0$ with $\\abs{f(z)} \\leq M$ for all $z \\in \\Gamma$, then monotonicity and linearity of the real integral gives us\n  \\begin{align*}\n  \\abs{\\int_{\\Gamma} f } & \\leq \\int_a^b M \\abs{ \\gamma ' (t) } \\ dt \\\\\n  & = M \\int_a^b \\abs{\\gamma ' (t)}\\ dt \\\\\n  & = M L,\n  \\end{align*}\nwhere the final equality follows from the definition of the length of a smooth path.\n\\end{proof}\nIt follows easily from the definition of the integral of $f$ along a contour $\\mathcal{C}=\\Gamma_1 + \\ldots + \\Gamma_n$ that \n\\[\n\\abs{\\contint f} \\leq M L\n\\]\nwhere $\\abs{f(z)} \\leq M$ for all $z \\in \\mathcal{C}$ and $L$ is the length of $\\mathcal{C}$.\n\\begin{note}\nIf we can show that\n\\[\n\\abs{ \\contint f } \\leq K,\n\\]\nthen $K$ is called an \\emph{upper estimate} for $\\displaystyle \\contint f$\n\\end{note}\n\\begin{example}\n\\label{e:estimation}\nThe function $f$ is defined by\n\\[\nf(z) = \\frac{1}{1+z^2}\n\\]\nand $\\Gamma$ is described by \\[ \\gamma:[0,\\pi] \\to \\C, \\quad \\gamma(t) = r \\cos (t) +i r \\sin (t), \\]\nwhere $r>1$.  We shall find an upper estimate for $\\int_{\\Gamma} f$ and show that\n\\[\n\\int_{\\Gamma} f \\to 0 \\text{ as } r \\to \\infty.\n\\]\n\\end{example}\n\\begin{solution}\nThe domain of $f$ is $\\C \\backslash \\set{i,-i}$, as $f$ is defined and continuous everywhere except where $1+z^2=0$.  As $\\Gamma$ consists of part of the circle with centre $0$ and radius $r>1$, $\\Gamma$ does not contain $i$ or $-i$.\n\nNow, for any $z \\in \\Gamma$, the Backwards Triangle Inequality gives\n\\[\n\\abs{1+z^2} \\geq \\abs{\\abs{1}-\\abs{z^2}} = r^2-1\n\\]\nsince $\\abs{z}=r>1$ for all $z \\in \\Gamma$.  Thus\n\\[\n\\abs{f(z)} = \\abs{\\frac{1}{1+z^2}} \\leq \\frac{1}{r^2-1}\n\\]\nfor all $z \\in \\Gamma$.\n\nSetting $M = \\frac{1}{r^2-1}$, we have $\\abs{f(z)} \\leq M$ for all $z \\in \\Gamma$, and since $\\Gamma$ is a semicircle, the length $L$ of $\\Gamma$ is equal to $\\pi r$.  
Thus the Estimation Lemma gives the upper estimate\n\\[\n\\abs{ \\int_{\\Gamma} f } \\leq ML = \\frac{\\pi r}{r^2-1}\n\\]\nfor this integral.\n\nIf we rewrite the above inequality as\n\\[\n\\abs{ \\int_{\\Gamma} f } \\leq \\frac{\\pi}{r-\\frac{1}{r}},\n\\]\nwe see that\n\\[\n\\abs{\\int_{\\Gamma} f } \\to 0 \\text{ as } r \\to \\infty\n\\]\nand hence\n\\[\n \\int_{\\Gamma} f \\to 0 \\text{ as } r \\to \\infty.\n\\]\n\n\\end{solution}\n\\section{Cauchy's Theorem for a Triangle}\n\\begin{definition}\nA region $\\mathcal{R}$ is called \\emph{simply connected} if given any closed contour $\\mathcal{C}$ in $\\mathcal{R}$, all points enclosed by $\\mathcal{C}$ also belong to $\\mathcal{R}$.\n\\end{definition}\nIt is surprisingly difficult to write down the precise definition of a point being enclosed by a contour $\\mathcal{C}$, thus we shall treat this notion informally and avoid complicated cases.  Essentially, a simply connected region cannot have any `holes.'\n\n\\begin{center}\n\\begin{tabular}{cc}\n\\altgraphics[scale=0.6]{ch4_upperhalf_full}{ch4_upperhalf} & \\altgraphics[scale=0.6]{ch4_cless0_full}{ch4_cless0}\n\\end{tabular}\n\\vspace{1cm}\n\\begin{tabular}{cc}\n\\altgraphics[scale=0.4]{ch4_notsc1_full}{ch4_notsc1} & \\altgraphics[scale=0.4]{ch4_notsc2_full}{ch4_notsc2}\n\\end{tabular}\n\\end{center}\n\n%\\hspace{-2cm}\n%\\altgraphics[scale=0.6]{sc_full}\n%\\end{center}\n%\\end{full}\n%\\begin{student}\n%\\begin{center}\n%\\includegraphics[scale=1]{upperhalf}\n%\\includegraphics[scale=1]{hplus}\n%\\vspace*{1cm}\n%\\includegraphics[scale=1]{sc1}\n%\\hspace{1cm}\n%\\includegraphics[scale=1]{sc2}\n%\\end{center}\n%\\vspace*{2cm}\n%\\end{student}\nIn the proof of Cauchy's Theorem we will need the following fact about the integral of a continuous function $f$ along a smooth path $\\Gamma$: if $\\tilde{\\Gamma}$ denotes the reverse of $\\Gamma$ then\n\\[\n\\int_{\\tilde{\\Gamma}} f = - \\int_{\\Gamma} f.\n\\]\nWe shall also fix the following notation: $T(z_1,z_2,z_3)$ denotes the triangle with vertices $z_1,z_2,z_3$, and hence with edges given by the line segments $[z_1,z_2],[z_2,z_3],[z_3,z_1]$.  The boundary of $T$ is denoted by $\\partial T$, which defines a closed contour\n\\[\n\\partial T = [z_1,z_2] + [z_2,z_3] + [z_3,z_1].\n\\]\n\\begin{theorem}[Cauchy's Theorem for a Triangle]\n\\label{t:cauchyt} Let $f$ be a function that is holomorphic on a simply connected region $\\mathcal{R}$ and let $T(z_1,z_2,z_3)$ be a triangle in $\\mathcal{R}$ with boundary $\\partial T$.  Then\n\\[\n\\int_{\\partial T} f = 0.\n\\]\n\\end{theorem}\n\\begin{center}\n\\includegraphics[scale=0.4]{ch4_cauchyt_full}\n\\end{center}\nNote that if we knew that $f$ were to have an antiderivative $F$ on $\\mathcal{R}$, then Theorem~\\ref{t:closed} would show that\n\\[\n\\int_{\\partial T} f =0.\n\\]\nLater we will see that in fact $f$ does have an antiderivative on $\\mathcal{R}$, but the proof will rely on Cauchy's Theorem for a triangle - thus we need to prove Theorem~\\ref{t:cauchyt} without this assumption.\n\nBefore proving Theorem~\\ref{t:cauchyt}, we need the following lemma:\n\\begin{lemma}\n\\label{l:cauchyt}\nLet $f$ be a function that is holomorphic on a simply connected region $\\mathcal{R}$ and let $T(z_1,z_2,z_3)$ be a triangle in $\\mathcal{R}$.  
Then there exists a nested sequence of triangles $T \\supseteq T_1 \\supseteq T_2 \\supseteq \\ldots \\supseteq T_n \\supseteq \\ldots$ with the property that\n\\[\n\\frac{1}{4^n} \\abs{ \\int_{\\partial T} f } \\leq \\abs{ \\int_{\\partial T_n} f } \\text{ and } \\ell ( \\partial T_n) = \\frac{1}{2^n} \\ell ( \\partial T)\n\\]\nfor all $n$. Moreover, there exists a point $z_0 \\in \\mathcal{R}$ with $z_0 \\in T_n$ for all $n$.\n\\end{lemma}\n{\\bf Proof }\n Using the midpoints of the sides of $T$, construct four similar triangles $T^{(1)},T^{(2)},T^{(3)}$ and $T^{(4)}$ as shown.\n\n\\begin{blankbox}\n%\\vspace*{2cm}\n\\begin{center}\n\\altgraphics[scale=0.6]{ch4_cauchyt2_full}{ch4_cauchyt2}\n\\end{center}\n%\\vspace*{2cm}\nThen for $j=1,2,3,4$ we have $\\ell(\\partial T^{(j)}) = \\frac{1}{2} \\ell ( \\partial T)$ by construction.\n\n\n\nNote that with all contours taken anticlockwise,\n\\[\n\\int_{\\partial T^{(1)}} f + \\int_{\\partial T^{(2)} } f + \\int_{\\partial T^{(3)}} f + \\int_{\\partial T^{(4)}} f = \\int_{\\partial T} f,\n\\]\n\nas the integrals along the `interior' edges of the $T^{(j)}$ cancel in pairs (this uses $\\int_{\\tilde{\\Gamma}} f = - \\int_{\\Gamma} f$, where $\\tilde{\\Gamma}$ is the reverse of $\\Gamma$ ):\n\\begin{center}\n\\altgraphics[scale=0.6]{ch4_cauchyt3_full}{ch4_cauchyt3}\n\\end{center}\n\\end{blankbox}\nMoreover, by the triangle inequality\n\\[\n\\abs{\\int_{\\partial T} f} \\leq \\abs{ \\int_{\\partial T^{(1)}} f} + \\abs{\\int_{\\partial T^{(2)} } f }+ \\abs{\\int_{\\partial T^{(3)}} f }+ \\abs{\\int_{\\partial T^{(4)}} f }. \n\\]\n%\\vspace*{5cm}\n\\begin{blankbox}\nAt least one of these four integrals has modulus greater than or equal to the other three; call the corresponding triangle $T_1$.  Then\n\\[\n\\abs{\\int_{\\partial T} f} \\leq \\abs{ \\int_{\\partial T^{(1)}} f} + \\abs{\\int_{\\partial T^{(2)} } f }+ \\abs{\\int_{\\partial T^{(3)}} f }+ \\abs{\\int_{\\partial T^{(4)}} f } \\leq 4 \\abs{ \\int_{\\partial T_1} f},\n\\]\nand so\n\\[\n\\frac{1}{4} \\abs{ \\int_{\\partial T } f } \\leq \\abs{\\int_{\\partial T_1} f}.\n\\]\n\\end{blankbox}\nNow, use the midpoints of the edges of $T_1$ to get four more triangles, one of which is a triangle $T_2$ satisfying\n\\[\n\\frac{1}{4} \\abs{ \\int_{\\partial T_1} f } \\leq \\abs{ \\int_{\\partial T_2} f } \\quad\\text{and}\\quad \\ell ( \\partial T_2) = \\frac{1}{2} \\ell ( \\partial T_1) = \\frac{1}{2^2} \\ell(\\partial T),\n\\]\n\n\\begin{center}\n\\altgraphics[scale=0.6]{ch4_cauchyt4_full}{ch4_cauchyt4}\n\\end{center}\n\n\\begin{blankbox}\nThus \n\\[\n\\frac{1}{4^2} \\abs{ \\int_{\\partial T} f } \\leq \\abs{ \\int_{\\partial T_2} f }.\n\\]\n\n\nContinuing in this way we get $T\\supseteq T_1 \\supseteq T_2 \\supseteq \\ldots \\supseteq T_n \\supseteq \\ldots$ with \n\\[\n\\frac{1}{4^n} \\abs{ \\int_{\\partial T} f} \\leq \\abs{ \\int_{\\partial T_n} f }\\quad\\text{and}\\quad \\ell(\\partial T_n) = \\frac{1}{2^n} \\ell ( \\partial T)\n\\]\nfor each $n$.\n\\end{blankbox}\n\\begin{center}\n\\includegraphics[scale=0.6]{ch4_cauchyt5}\n\\end{center}\n\n\nAs the sequence is nested, there exists a point $z_0$ with $z_0 \\in T_n$ for all $n$, and since $\\mathcal{R}$ is simply connected, $z_0 \\in \\mathcal{R}$.\n\\qed\n\nWe are now in a position to prove Cauchy's Theorem for a Triangle.\n\\begin{exercise}\nTry to prove Cauchy's Theorem for a Triangle, using the following steps.\n\\begin{enumerate}\n\\item[(i)] Let $\\epsilon >0$ be given.  
Use the fact that $f$ is differentiable at $z_0$ to show that there exists $\\delta >0$ such that \n\\[ \nz \\in \\mathcal{R},\\  \\abs{z-z_0} < \\delta \\Rightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} \\leq \\epsilon \\abs{z-z_0}.\n\\]\n\\item[(ii)] With $\\epsilon$ and $\\delta$ as chosen in the previous step, show that there exists $n \\in \\mathbb{N}$ such that\n\\[\nz \\in \\partial T_n \\Rightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} < \\epsilon \\ell( \\partial T_n).\n\\]\nAs a hint, first explain why we can choose $n$ with $\\ell ( \\partial T_n) < \\delta$.\n\\item[(iii)] Show that for any constants $\\alpha,\\beta \\in \\C$ we have\n\\[\n\\int_{\\partial T_n} (\\alpha + \\beta (z-z_0) )\\ dz = 0,\n\\]\nand deduce that\n\\[\n\\int_{\\partial T_n} f = \\int_{\\partial T_n} (f(z)-f(z_0)-f'(z_0)(z-z_0))\\ dz.\n\\]\n\\item[(iv)] Use step (iii) and the Estimation Lemma to show that\n\\[\n\\abs{\\int_{\\partial T_n} f } \\leq \\epsilon \\cdot \\frac{1}{4^n} \\cdot \\brac{\\ell(\\partial T) }^2.\n\\]\n\\item[(v)] Complete the proof.\n\\end{enumerate}\n\\end{exercise}\n\\begin{proof}[of Cauchy's Theorem for a Triangle (Theorem~\\ref{t:cauchyt})]\nWe are going to show that given any $\\epsilon>0$ we have\n\\[\n\\abs{ \\int_{\\partial T} f } \\leq \\epsilon [\\ell (\\partial T)]^2\n\\]\nwhere $\\ell ( \\partial T )$ is the length of the boundary of the triangle $T$.\n\nLet $\\set{ T_n}$ be the nested sequence of triangles constructed in Lemma~\\ref{l:cauchyt} and $z_0 \\in \\mathcal{R}$ such that $z_0 \\in T_n$ for all $n$.\n\n\\emph{Step 1:}    Let $\\epsilon>0$ be given.  We will show that there exists $\\delta>0$ such that \n\\[\n\\abs{z-z_0}<\\delta \\Longrightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} \\leq \\epsilon \\abs{z-z_0}.\n\\]\n\n\n  Since $f$ is holomorphic on $\\mathcal{R}$ it is differentiable at $z_0$ with derivative $f'(z_0)$, or in other words\n  \\[\n  \\lim_{h \\to 0} \\frac{f(z_0+h)-f(z_0)}{h} = f'(z_0).\n  \\]  \n\n  This means that there is some $\\delta>0$ such that\n  \\begin{align*}\n  0 < \\abs{h} < \\delta &\\Longrightarrow \\abs{ \\frac{f(z_0+h)-f(z_0)}{h} - f'(z_0) } < \\epsilon \\\\\n  \\shortintertext{ or in other words, writing $z=z_0+h$,}\n  0 < \\abs{z-z_0} < \\delta &\\Longrightarrow \\abs{ \\frac{f(z)-f(z_0)}{z-z_0} - f'(z_0) } < \\epsilon \\\\\n  & \\Longrightarrow \\abs{ \\frac{f(z)-f(z_0)-(z-z_0)f'(z_0)}{z-z_0} } < \\epsilon.\n  \\end{align*}\nThus\n\\[\n0 < \\abs{z-z_0} < \\delta \\Longrightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} < \\epsilon \\abs{z-z_0}.\n\\]\nIf we allow $z=z_0$, this becomes\n\\[\n\\abs{z-z_0} < \\delta \\Longrightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} \\leq \\epsilon \\abs{z-z_0}\n\\]\n(since when $z=z_0$, both sides are zero).\n\n\\emph{Step 2: } With $\\epsilon>0$ as before, and $\\delta>0$ as chosen in the previous step, we choose $n$ large enough so that $\\ell ( \\partial T_n ) < \\delta$, which can always be done as $\\ell ( \\partial T_n ) = \\frac{1}{2^n} \\ell ( \\partial T)$ (note that $n$ depends on $\\delta$, which in turn depends on $\\epsilon$).\n\nWe will show that for $z \\in \\partial T_n$ we have\n\\[\n\\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)}<\\epsilon \\ell ( \\partial T_n).\n\\]\n\n\n\nIf $z \\in \\partial T_n$, then the distance from $z$ to $z_0$ can be no more than the length of the longest edge of $T_n$, which is less than $\\ell ( \\partial T_n)$.\n%\\vspace*{5cm}\n\\begin{center}\n\\includegraphics[scale=0.6]{ch4_cauchyt6_full}\n\\end{center}\n%\\vspace*{7cm}\nThus\n\\[\nz \\in \\partial T_n \\Longrightarrow 
\\abs{z-z_0} \\underbrace{< \\ell (\\partial T_n) }_{\\text{geometry}} \\underbrace{< \\delta}_{\\text{choice of }n},\n\\]\nand hence $T_n \\subset D(z_0,\\delta)$.\n\nThus for any $n$ large enough so that $\\ell ( \\partial T_n) < \\delta$, we have\n\\begin{align*}\nz \\in \\partial T_n \\Longrightarrow \\abs{f(z)-f(z_0)-(z-z_0)f'(z_0)} &\\leq \\epsilon \\abs{z-z_0} \\\\\n& < \\epsilon \\ell ( \\partial T_n).\n\\end{align*}\n\\begin{center}\n\\includegraphics[scale=1]{ch4_cauchyt7}\n\\end{center}\n\n\\emph{Step 3}: We use the Estimation Lemma to find an upper estimate for $\\int_{\\partial T_n} f$, where $n$ was chosen in the previous step.  \n\nNote that\n\\[\n\\int_{\\partial T_n} (\\alpha+\\beta(z-z_0)) dz =0 \\text{ for }\\alpha,\\beta \\in \\C,\n\\]\nbecause $z \\mapsto \\alpha + \\beta (z-z_0)$ has an antiderivative $z \\mapsto \\alpha z + \\frac{\\beta(z-z_0)^2}{2}$.\n\n\nThis means that\n\\[\n\\int_{\\partial T_n} {\\Big(} \\underbrace{f(z_0)}_{\\alpha}+\\underbrace{f'(z_0)}_{\\beta} (z-z_0) {\\Big)} dz =0,\n\\]\nand so\n\\[\n\\int_{\\partial T_n} f = \\int_{\\partial T_n} {\\Big(} f(z)-\\underbrace{f(z_0)-f'(z_0)(z-z_0)}_{\\text{integral }0\\text{ along } \\partial T_n} {\\Big)} dz\n\\]\nWe shall use the Estimation Lemma to find an upper estimate for this integral.  Indeed, we have\n\\begin{align*}\n\\abs{\\int_{\\partial T_n} f} & = \\abs{\\int_{\\partial T_n} \\left[ f(z)-f(z_0)-f'(z_0)(z-z_0) \\right]\\ dz} &&\\\\\n& \\leq \\epsilon \\ell ( \\partial T_n ) \\ell ( \\partial T_n) &&\\text{ By step 3} \\\\\n& = \\epsilon \\left( \\frac{1}{2^n} \\ell ( \\partial T ) \\right) \\left( \\frac{1}{2^n} \\ell ( \\partial T) \\right) && \\text{ as } \\ell (\\partial T_n) = \\frac{1}{2^n} \\ell ( \\partial T) \\\\\n& = \\epsilon \\frac{1}{4^n} \\left[ \\ell ( \\partial T) \\right]^2.\n\\end{align*}\n\n\n\n\n\n\\emph{Step 4:} Completing the proof.\n\nBy Step 3, given any $\\epsilon>0$, there is $n$ such that\n\\[\n\\abs{ \\int_{\\partial T_n} f } \\leq \\epsilon \\frac{1}{4^n} \\left[ \\ell ( \\partial T) \\right]^2.\n\\]\n\nIn Lemma~\\ref{l:cauchyt}, we showed that\n\\[\n\\frac{1}{4^n} \\abs{ \\int_{\\partial T} f } \\leq \\abs{ \\int_{\\partial T_n} f},\n\\]\nand so combining these facts, we have\n\\[\n\\abs{ \\int_{\\partial T} f } \\leq \\epsilon \\left[ \\ell ( \\partial T) \\right]^2\n\\]\nfor all $\\epsilon > 0$.  Because $\\epsilon>0$ was arbitrary and $\\ell ( \\partial T)$ is fixed, we get\n\\[\n\\abs{ \\int_{\\partial T} f } = 0, \\text{ hence } \\int_{\\partial T} f =0.\n\\]\n\n\\end{proof}\n\n\n\\begin{example}\nLet $f:\\C \\to \\C$ be defined by $f(z)=\\sin \\left( \\exp \\left( \\cos \\left( z^3-4z^2+i \\right) \\right) \\right)$, which is holomorphic on $\\C$.  There is no obvious antiderivative for $f$, but nonetheless, Cauchy's Theorem shows us that for any triangle $T$ we have\n\\[\n\\int_{\\partial T} f = 0.\n\\]\n\\end{example}\n\\begin{example}\nLet $f(z) = \\dfrac{1}{z}$, which is holomorphic on $\\C \\backslash \\set{0}$.  
We shall evaluate $\\int_{\\partial T} f$ for different triangles $T$: let $T_1$ be a triangle enclosing the origin, and $T_2$ a triangle lying to the left of the imaginary axis as shown.\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[scale=1]{ch4_cauchytexample1} & \\includegraphics[scale=1]{ch4_cauchytexample2}\n\\end{tabular}\n\\end{center}\n\\end{example}\n\\begin{solution}\nSince $\\C \\backslash \\set{0}$ is not simply connected, Cauchy's Theorem for a triangle tells us nothing about\n\\[\n\\int_{\\partial T_1} \\frac{1}{z}\\ dz.\n\\]\nHowever, $\\C \\backslash \\set{0}$ has simply connected subregions, for example, the left-half plane\n\\[\n\\mathcal{L} = \\set{ z \\in \\C : \\Re (z) < 0 }.\n\\]\n\nThe triangle $T_2$ is contained in $\\mathcal{L}$, so Cauchy's Theorem for a triangle gives\n\\[\n\\int_{\\partial T_2} \\frac{1}{z}\\ dz =0.\n\\]\nIn fact, if $T$ is any triangle in $\\C \\backslash \\set{0}$ that does not enclose the origin, we can find a simply connected subregion of $\\C \\backslash \\set{0}$ that contains $T$, and conclude that\n\\[\n\\int_{\\partial T} \\frac{1}{z}\\ dz =0.\n\\]\n\\end{solution}\n%\\newpage\n%\\begin{absolutelynopagebreak}\n\\section{Cauchy's Theorem for Starlit Regions}\n\n\\begin{definition}\nA region $\\mathcal{R}$ is called \\emph{starlit} if there is a point $z_{\\ast} \\in \\mathcal{R}$ such that for any $z \\in \\mathcal{R}$ the line segment $[z_{\\ast},z]$ lies inside $\\mathcal{R}$.  The point $z_{\\ast}$ is called a \\emph{star centre} for $\\mathcal{R}$.\n\\end{definition}\n\nThe name starlit comes from the idea that the `rays of light' from the `star' $z_{\\ast}$ fall on every point of $\\mathcal{R}$.  \n%\\begin{student}\n\\begin{center}\n\\altgraphics[scale=1.5]{ch4_starlit1_full}{ch4_starlit1} \\\\\n\\blank{The region $\\mathcal{R}_1$ on the left is starlit, as any point $z \\in \\mathcal{R}_1$ can be connected to $z_*$ using the line segment $[z_*,z]$.  The region $\\mathcal{R}_2$ on the right is not, since no matter which point we choose as $z_*$, the line segment joining $z_*$ to the point opposite it leaves $\\mathcal{R}_2$.}\n\\end{center}\n\\begin{tabular}{c m{0.5\\textwidth} }\n\\begin{minipage}{0.5\\textwidth}\n\\centering\n\\altgraphics[scale=1.5]{ch4_starlit2_full}{ch4_starlit2}\n\\end{minipage} & \\blank{The region $\\C \\backslash \\set{0}$ is not starlit; indeed if $z_*$ were a star centre, then the line segment $[z_*,-z_*]$ would be contained in $\\C \\backslash \\set{0}$, which is not the case for any choice of $z_*$.}\n\\end{tabular}\n\\bigskip\n\n\\begin{theorem}[The Existence of Antiderivatives on Starlit Regions]\n\\label{t:starlit}\nLet $f$ be holomorphic on a starlit region $\\mathcal{R}$ with star centre $z_{\\ast} \\in \\mathcal{R}$.  
Then the function $F:\\mathcal{R} \\to \\C$ defined by\n\\[\nF(z) = \\int_{[z_{\\ast},z]} f\n\\]\nis an antiderivative for $f$ on $\\mathcal{R}$.\n\\end{theorem}\n\n\\leftimage{\n\\includegraphics[scale=0.75]{ch4_starlitantid}\n}\n{\nSince $\\mathcal{R}$ is starlit, $[z_*,z]$ is contained in $\\mathcal{R}$ and so $\\int_{[z_*,z]} f$ is well-defined for all $ z \\in \\mathcal{R}$.\n}\n\n\\begin{exercise}\nUse the following steps to prove this Theorem.\n\\begin{enumerate}\n\\item[(i)] Use the definition of $F$, together with Cauchy's Theorem for a Triangle, to show that for $h \\in \\C$,\n\\[\nF(z_0+h)-F(z_0) = \\int_{[z_0,z_0+h]} f.\n\\]\n\\item[(ii)] Show that\n\\[\n\\frac{F(z_0+h)-F(z_0)}{h} - f(z_0) = \\frac{1}{h} \\int_{[z_0,z_0+h]} (f(z)-f(z_0))\\ dz.\n\\]\n\\item[(iii)] Let $\\epsilon >0$ be given.  Use the Estimation Lemma to show that there exists $\\delta >0$ such that\n\\[ \n\\abs{h} < \\delta \\Rightarrow \\abs{\\int_{[z_0,z_0+h]} \\brac{f(z)-f(z_0)}\\ dz} \\leq \\epsilon \\abs{h}.\n\\]\n(Hint: you will need to use the fact that differentiability implies continuity).\n\\item[(iv)] Combine steps (ii) and (iii) to complete the proof.\n\\end{enumerate}\n\\end{exercise}\n\n\\begin{proof}\nWe need to show that for any $z_0 \\in \\mathcal{R}$, $F'(z_0)$ exists and is equal to $f(z_0)$, or in other words\n\\[\n\\lim_{h \\to 0} \\frac{F(z_0+h)-F(z_0)}{h} = f(z_0).\n\\]\n  Write\n\\[\n\\frac{F(z_0+h)-F(z_0)}{h} = \\frac{1}{h} \\int_{[z_{\\ast},z_0+h]} f - \\frac{1}{h} \\int_{[z_{\\ast},z_0]} f.\n\\]\nSince $\\mathcal{R}$ is open, for sufficiently small $h$, the triangle $T(z_{\\ast},z_0+h,z_0)$ is contained in $\\mathcal{R}$.\n\\begin{center}\n\\includegraphics[scale=1]{ch4_starlittriangle}\n\\end{center}\n\n  Cauchy's Theorem for a Triangle gives\n\\[\n\\int_{[z_*,z_0+h]} f + \\int_{[z_0+h,z_0]} f + \\int_{[z_0,z_*]} f = 0,\n\\]\nhence\n\\[\n\\int_{[z_*,z_0+h]}f - \\int_{[z_*,z_0]}f = - \\int_{[z_0+h,z_0]} f = \\int_{[z_0,z_0+h]} f.\n\\]\nThis gives\n\\[\n\\frac{F(z_0+h)-F(z_0)}{h} = \\frac{1}{h} \\int_{[z_0,z_0+h]} f.\n\\]\nWe now use the fact that\n\\[\n\\int_{[z_0,z_0+h]} 1 dz = h,\n\\]\nwhich implies that\n\\[\nf(z_0) = f(z_0) \\cdot \\frac{1}{h} \\cdot h = f(z_0) \\frac{1}{h} \\int_{[z_0,z_0+h]} 1 dz = \\frac{1}{h} \\int_{[z_0,z_0+h]} f(z_0)\\ dz\n\\]\n(since $f(z_0)$ is a constant).  Combining this with the previous step, we see that \n%\\newpage\n%\\vspace*{5cm}\n%\\newpage\n\\begin{align*}\n \\frac{F(z_0+h)-F(z_0)}{h}-f(z_0)  &= \\frac{1}{h} \\left[ \\int_{[z_0,z_0+h]} f(z)\\ dz - \\int_{[z_0,z_0+h]} f(z_0)\\ dz \\right] \\\\\n& = \\frac{1}{h} \\int_{[z_0,z_0+h]} (f(z)-f(z_0))\\ dz \n\\end{align*}\n(since both integrals are along the same path, we can combine them).\n%\\begin{absolutelynopagebreak}\nHence\n\\[\n\\abs{ \\frac{F(z_0+h)-F(z_0)}{h}-f(z_0) } = \\frac{1}{\\abs{h}} \\abs{ \\int_{[z_0,z_0+h]} (f(z)-f(z_0)) dz }\n\\]\nNow we shall use the Estimation Lemma to obtain an upper estimate for this integral.  It is easy to see that $\\ell([z_0,z_0+h]) = \\abs{h}$, thus we must find an upper bound for $\\abs{f(z)-f(z_0)}$ when $z \\in [z_0,z_0+h]$.\n%\\end{absolutelynopagebreak}\n\n\nWe use the fact that $f$ holomorphic on $\\mathcal{R}$ implies $f$ continuous on $\\mathcal{R}$, and in particular, continuous at $z_0$.  
Hence given any $\\epsilon >0$ there is some $\\delta >0$ such that\n\\[\n0< \\abs{z-z_0} < \\delta \\Longrightarrow \\abs{f(z)-f(z_0)} < \\epsilon.\n\\]\nThus if $\\abs{h} < \\delta$ then\n\\begin{align*}\nz \\in [z_0,z_0+h] & \\Longrightarrow \\abs{z-z_0} < \\abs{h} < \\delta \\\\\n& \\Longrightarrow \\abs{f(z)-f(z_0)} < \\epsilon.\n\\end{align*}\n%\\newpage\n%\\vspace*{25cm}\nIn other words, once $\\abs{h}<\\delta$, $\\epsilon$ is an upper bound for $\\abs{f(z)-f(z_0)}$ for $z \\in [z_0,z_0+h]$.  Thus the Estimation Lemma gives the upper estimate\n\\[\n\\abs{ \\int_{[z_0,z_0+h]} \\left( f(z)-f(z_0) \\right)\\ dz } \\leq \\epsilon \\abs{h}\n\\]\nwhenever $\\abs{h} < \\delta$.\n\nThus given any $\\epsilon>0$ there is $\\delta>0$ such that\n\\begin{align*}\n0 < \\abs{h} < \\delta \\Longrightarrow \n\\abs{ \\frac{F(z_0+h)-F(z_0)}{h}-f(z_0) } & = \\frac{1}{\\abs{h}} \\abs{ \\int_{[z_0,z_0+h]} (f(z)-f(z_0))\\ dz } \\\\\n& \\leq \\frac{1}{\\abs{h}} \\epsilon \\abs{h} = \\epsilon.\n\\end{align*}\nBut this is equivalent to saying\n\\[\n\\lim_{h \\to 0} \\frac{F(z_0+h)-F(z_0)}{h} = f(z_0),\n\\]\nor in other words, $F'(z_0)=f(z_0)$, as required.\n%\\vspace*{8cm}\n\n\\end{proof}\n\\begin{theorem}[Cauchy's Theorem for Starlit Regions]\n\\label{t:cauchyst}\nLet $f$ be a function that is holomorphic in a starlit region $\\mathcal{R}$ and let $\\mathcal{C}$ be a closed contour in $\\mathcal{R}$.  Then\n\\[\n\\int_{\\mathcal{C}} f = 0.\n\\]\n\\end{theorem}\n\\begin{blankbox}\n{\\bf Proof}\nBy Theorem~\\ref{t:starlit}, $f$ has an antiderivative in $\\mathcal{R}$.  Thus by Theorem~\\ref{t:closed}\n\\[\n\\int_{\\mathcal{C}} f = 0.\n\\]\n\\qed\n\\end{blankbox}\n\nIn fact, Cauchy's Theorem extends to more general regions, as we shall see later.\n\n\\section{Application: The Complex Logarithm and Power Functions}\n\nIn this section we will define the Complex Logarithm and Power functions.\n\n\nConsider the real function $f:(0,+\\infty) \\to \\R$ defined by $f(x) = \\dfrac{1}{x}$. Since $f$ is continuous on $(0,+\\infty)$, the Fundamental Theorem of Calculus ensures that $f$ has an antiderivative on $(0,+\\infty)$.  One way of defining the \\emph{natural logarithm} function is to define it as an antiderivative of $\\dfrac{1}{x}$ on $(0,+\\infty)$:\n\\begin{equation}\n\\label{e:reallog}\n\\log (x) = \\int_1^x \\frac{1}{t}\\ dt \\quad \\text{ for all } x \\in (0,+\\infty).\n\\end{equation}\n\\begin{note}\n\\begin{enumerate}\n\\item[(i)] There are of course other ways of defining $\\log (x)$ for $x \\in (0,+\\infty)$, most commonly as the inverse of the exponential function $\\exp:\\mathbb{R} \\to (0,+\\infty)$.  This definition is equivalent.\n\\item[(ii)]  The Fundamental Theorem of Calculus actually tells us that for \\emph{any} choice of $a \\in (0,+\\infty)$, the function $F: (0,+\\infty) \\to \\mathbb{R}$ defined by\n\\[\nF(x) = \\int_a^x \\frac{1}{t}\\ dt\n\\]\nis an antiderivative for $x \\mapsto \\frac{1}{x}$.  However, we must take $a=1$ to ensure that $F$ is indeed the inverse of $x \\mapsto e^x$: since $e^0=1$ we want $\\log (1)=0$.\n\\end{enumerate}\n\\end{note}\nWe shall generalise~\\eqref{e:reallog} to define the Complex Logarithm function, using Theorem~\\ref{t:starlit} (the existence of antiderivatives on starlit regions).  The function\n\\[\nf(z) = \\frac{1}{z}\\quad (z \\in \\C \\backslash \\set{0})\n\\]\nis holomorphic on $\\C \\backslash \\set{0}$. However, this set is neither simply connected nor starlit.  
Therefore, we will restrict our attention to the subset\n\\[\n\\C_{\\pi} = \\set{ z \\in \\C: z \\neq 0 \\text{ and } \\Arg(z) \\neq \\pi },\n\\]\nwhich is both starlit and simply connected. \n\\begin{center}\n\\includegraphics[scale=1]{ch4_cpi2}\n\\end{center}\n The point $1$ on the real axis is a star centre for $\\C_{\\pi}$ - in particular, this allows us to apply Theorem~\\ref{t:starlit} to conclude that\n\\[\nz \\mapsto \\int_{[1,z]} \\frac{1}{\\zeta}\\ d \\zeta\n\\]\nis an antiderivative for $z \\mapsto \\frac{1}{z}$ on $\\C_{\\pi}$.  We use the variable $\\zeta$ (zeta) as the variable of integration because $z$ has been used already.\n\n\\begin{definition}\nThe \\emph{Complex Logarithm} function $\\Log : \\C_{\\pi} \\to \\C$ is defined by\n\\[\n\\Log (z) = \\int_{[1,z]} \\frac{1}{\\zeta}\\ d \\zeta\\quad \\text{ for } z \\in \\C_{\\pi}.\n\\]\nIt is holomorphic on $\\C_{\\pi}$.\n\\end{definition}\nIn other words, to find the value of $\\Log (z)$ we need to evaluate the integral\n\\[\n\\int_{[1,z]} \\frac{1}{\\zeta}\\ d \\zeta.\n\\]\nWe can do this directly by choosing a parametrisation of the path $[1,z]$, however, it is much easier to obtain an alternative definition using the following example.\n\\begin{example}\nWe shall show that for all $z \\in \\C_{\\pi}$ we have\n\\[\n\\Log(z) = \\log ( \\abs{z} ) + i \\Arg (z)\n\\]\nwhere $\\log:(0,+\\infty) \\to \\R$ denotes the real (natural) logarithm and $\\Arg$ is the principal value of the argument.\n\\end{example}\n\\begin{solution}\nSince the function $f$ defined by $f(z)=\\frac{1}{z}$ has an antiderivative on $\\C_{\\pi}$, the Contour Independence Theorem (Theorem~\\ref{t:contint}) tells us that\n$\\displaystyle \\int_{\\mathcal{C}} f = \\int_{[1,z]} f$ for any contour $\\mathcal{C}$ in $\\C_{\\pi}$ that starts at $1$ and ends at $z$.\n\\begin{center}\n\\altgraphics[scale=1]{ch4_logpath}{ch4_cpi2}\n%{logpath_full}{cpi3}\n\\end{center}\nWe shall use the contour $\\mathcal{C}=\\Gamma_1+\\Gamma_2$, where $\\Gamma_1=[1,\\abs{z}]$ is the straight line segment along the real axis from $1$ to $\\abs{z}$, and $\\Gamma_2$ is the arc of the circle with centre $0$ and radius $\\abs{z}$ from the point $\\abs{z}$ on the positive real axis to $z$. \n\nParametrise $\\Gamma_1$ with $\\gamma_1:[1,\\abs{z}] \\to \\C$, $\\gamma_1(t)=t$, and $\\Gamma_2$ with $\\gamma_2:[0,\\Arg (z)] \\to \\C,$\n\\[\n\\gamma_2(t) = \\polar{\\abs{z}}{t}\n\\]\nNote that $\\Gamma_2$ is a path from $\\gamma_2 (0) = \\abs{z}$ to $\\gamma_2 ( \\Arg (z)) = z$, hence is traversed \n\\begin{itemize}\n\\item anticlockwise if $\\Arg (z) \\geq 0$, and\n\\item clockwise if $\\Arg(z)<0$,\n\\end{itemize}\nwhich ensures that we stay within $\\C_{\\pi}$.  
\n\nWe have $\\gamma_1'(t)=1$ and $\\gamma_2'(t) =  i \\gamma_2 (t)$, and so\n\\begin{align*}\n\\Log (z) & = \\int_{\\Gamma_1} \\frac{1}{\\zeta}\\ d\\zeta + \\int_{\\Gamma_2} \\frac{1}{\\zeta}\\ d\\zeta \\\\\n& = \n\\int_{1}^{\\abs{z}} \\frac{1}{t}\\ dt + \\int_0^{\\Arg (z)} \\frac{1}{\\gamma_2(t)} \\gamma_2'(t)\\ dt \\\\\n& = \\left[ \\log \\abs{t} \\right]_1^{\\abs{z}} + \\int_0^{\\Arg (z)} i\\ dt \\\\\n& = \\log \\abs{z} + i \\Arg (z).\n\\end{align*}\n\\end{solution}\n%\\vspace*{20cm}\nThis yields the following alternative definition of the Complex Logarithm function:\n\\begin{equation}\n\\Log (z) = \\log ( \\abs{z} )+i \\Arg (z) \\quad (z \\in \\C_{\\pi}).\n\\end{equation}\n\\begin{note}\nThis definition of $\\Log (z)$ uses the principal value of the argument - that is to say, our definition depends on us taking $\\arg (z) \\in (-\\pi, \\pi]$.  For this reason it is sometimes called the \\emph{principal branch} of the (complex) Logarithm function.  Other `branches' (i.e. other definitions) of $\\Log (z)$ are possible by taking different values of the argument.\n\\end{note}\n\\begin{example}\nCompute the following principal logarithms: $\\Log (7)$, $\\Log (-2i)$ and $\\Log (1+i)$.\n\\end{example}\n\\begin{solution}\n\\begin{align*}\n\\Log (7) & = \\log \\abs{7} + i \\Arg (7) = \\log(7) \\\\\n\\Log (-2i) & = \\log \\abs{-2i} + i \\Arg (-2i) \\\\\n& = \\log(2) -i \\frac{\\pi}{2} \\\\\n\\Log (1+i ) & = \\log \\abs{1+i} + i \\Arg (1+i) \\\\\n& = \\log ( \\sqrt{2} ) + i \\frac{\\pi}{4} \\\\\n& = \\frac{1}{2} \\log (2) + i \\frac{\\pi}{4}.\n\\end{align*}\n\\end{solution}\n\nSo far, $\\Log (z)$ is defined (and is holomorphic) on $\\C_{\\pi}$. We can extend the domain of $\\Log$ to $\\C \\backslash\\set{0}$ by defining it on the negative real axis, where points have principal argument $\\pi$, as follows:\n\\[\n\\Log (z) = \\log \\left( \\abs{z} \\right) + i \\pi, \\mbox{ for $z \\neq 0$ on the negative real axis.}\n\\]\n\\begin{question}\nIs $\\Log(z)$ continuous on $\\C \\backslash \\set{0}$?\n\\end{question}\n\\begin{answer}\n%\\begin{center}\n%\\altgraphics[scale=0.4]{log_discontinuous_full}{cless0}\n%\\end{center}\nFor $t$ on the negative real axis,\n\\[\n\\rlim{z \\to t}{ \\Im (z) > 0 } \\Arg (z) = \\pi, \\text{ while } \\rlim{z \\to t}{\\Im(z)<0} \\Arg (z) = -\\pi.\n\\]\nThus $z \\mapsto \\Arg (z)$ is discontinuous on the negative real axis.  Since $\\Arg (z)$ is the imaginary part of $\\Log(z)$, it follows that $\\Log$ is also discontinuous on the negative real axis.\n\\end{answer}\n\n\n\\begin{question}\nIs $\\Log$ the inverse of $\\exp$?\n\\end{question}\n\\begin{answer}\nNot exactly.  For any $z \\in \\C \\backslash \\set{0}$, we have\n\\begin{align*}\n\\exp ( \\Log (z)) & = \\exp \\left( \\log ( \\abs{z} ) + i \\Arg (z) \\right) \\\\\n& = e^{\\log \\abs{z}} \\left( \\cos ( \\Arg (z))+i \\sin ( \\Arg (z) ) \\right) \\\\\n& = \\underbrace{\\abs{z}\\left( \\cos ( \\Arg (z))+i \\sin ( \\Arg (z) ) \\right)}_{\\text{Polar form of }z} \\\\\n& = z.\n\\end{align*}\nHowever, $\\exp:\\C \\to \\C$ is not injective (e.g. $\\exp(0)=\\exp(i2\\pi)=1$), thus we cannot have $\\Log ( \\exp(z)) = z$ for all $z$.  We do have\n\\begin{align*}\n\\Log ( \\exp (x+iy ) ) & = \\log \\abs{\\exp(x+iy)} + i \\Arg (\\exp(x+iy)) &&\\\\\n& = \\log(e^x) + i (y-2k\\pi) && \\text{ for some }k \\in \\mathbb{Z} \\\\\n\\end{align*}\nHence $\\Log (\\exp(z))=z$ if and only if $-\\pi<\\Im(z) \\leq \\pi$.\n%\\newpage\n%\\vspace*{10cm}\n\\end{answer}\n\n\n\nWe have seen already how to take roots of complex numbers. 
We will now extend this definition to \\emph{complex} powers of complex numbers, allowing us to compute expressions such as\n\\[\n2^{i}, i^{i}, (1+3i)^{1-i}\n\\]\nand so on. \n\\begin{blankbox}\n The following observation about powers of real numbers may be useful: suppose we want to find the value of $x^a$ for some $a,x \\in \\mathbb{R}$ with $x>0$.  We know that\n\\[\n\\log (x^a) = a \\log (x),\n\\]\nand since $e^{\\log(y)}=y$ for all $y \\in (0,+\\infty)$, we have\n\\[\nx^a = e^{  \\log \\left( x^a \\right) } = e^{  a \\log (x) }.\n\\]\nThus we may take $e^{ a \\log (x) }$ to be the definition of $x^a$.\n\\end{blankbox}\n\\begin{definition}\n\\label{d:ppower}\nFor $\\alpha \\in \\C$, the \\emph{Principal $\\alpha^{th}$ Power Function} is defined by\n\\[\nz^{\\alpha} = \\exp \\left( \\alpha \\Log (z) \\right)\n\\]\nfor all $z \\in \\C \\backslash \\set{0}$.\n\\end{definition}\nBecause $\\Log$ is holomorphic on $\\C_{\\pi}$ and $\\exp$ is holomorphic on $\\C$, the Principal Power function is holomorphic on $\\C_{\\pi}$.  `Principal' refers to the fact that we have used the principal branch of the Logarithm function to define the power function - taking different branches of the Logarithm function will give different power functions.\n\n\\begin{example}\n Let us verify that $i^2=-1$ agrees with Definition~\\ref{d:ppower}.\n\\end{example}\n\\begin{solution}\nIndeed, we have\n\\begin{align*}\ni^2 = \\exp \\left( 2 \\Log (i) \\right) & = \\exp \\left( 2 \\left[ \\log \\abs{i}+i \\Arg(i) \\right] \\right) \\\\\n& = \\exp \\left( 2 \\left[ \\log(1)+i \\frac{\\pi}{2} \\right] \\right) \\\\\n& = \\exp(i\\pi) = -1.\n\\end{align*}\n\\end{solution}\n%\\vspace*{7cm}\n\n\n\n\\begin{example}\nCalculate $2^i$ and $(1+i)^{-i}$.\n\\end{example}\n\\begin{solution}\nThis time, we have\n\\begin{align*}\n2^i = \\exp(i \\Log(2) ) & = \\exp(i \\log (2) ) \\\\\n& = \\cos(\\log(2))+i \\sin(\\log(2))\n\\end{align*}\nand\n\\begin{align*}\n(1+i)^{-i} & = \\exp \\left( -i \\Log (1+i) \\right) \\\\\n& = \\exp \\brac{-i\\brac{\\tfrac{1}{2}\\log(2)+i\\frac{\\pi}{2}}} \\\\\n& = \\exp \\brac{\\tfrac{\\pi}{4}-i\\tfrac{1}{2}\\log(2)} \\\\\n& = \\polar{e^{\\pi/4}}{\\tfrac{1}{2}\\log(2)}.\n\\end{align*}\n\\end{solution}\n\n\n\n\n", "meta": {"hexsha": "8e91f1c17e28e51ec61dc097ee92b6913890f3f8", "size": 34179, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2003/LectureNotes/Chapter_4.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2003/LectureNotes/Chapter_4.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2003/LectureNotes/Chapter_4.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 41.1299638989, "max_line_length": 371, "alphanum_fraction": 0.6434945434, "num_tokens": 12993, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228758499942, "lm_q2_score": 0.8791467690927438, "lm_q1q2_score": 0.5819273576920998}}
{"text": "\\chapter{Probabilistic Analysis and Randomized Algorithms}\n\n\\section{The hiring problem}\n\n\\begin{enumerate}\n\n\\item[5.1{-}1]{Show that the assumption that we are always able to determine\nwhich candidate is best, in line 4 of procedure \\textsc{Hire-Assistant}, implies\nthat we know a total order on the ranks of the candidates.}\n\n\\begin{framed}\nLet $A$ be the set of candidates in random order and $R$ the binary relation\n``is better than or equal''. $R$ is a total order if\n\\begin{enumerate}\n  \\item $R$ is \\textbf{reflexive}. That is, $a\\;R\\;a \\; \\Forall a \\in A$;\n  \\item $R$ is \\textbf{antisymmetric}. That is, $a\\;R\\;b$ and $b\\;R\\;a$ imply $a = b$;\n  \\item $R$ is \\textbf{transitive}. That is, $a\\;R\\;b$ and $b\\;R\\;c$ imply $a\\;R\\;c$;\n  \\item $R$ is a \\textbf{total relation}. That is, $a\\;R\\;b$ or $b\\;R\\;a \\; \\Forall a, b \\in A$.\n\\end{enumerate}\n\nThe above properties are necessary because\n  \\begin{enumerate}\n    \\item if two different candidates have the same qualification, it is\n      necessary so that they can be compared;\n    \\item if both $a$ is ``better than or equal'' than $b$ and $b$ is ``better\n      than or equal'' than $a$ and they qualifications are not equal, we would\n      not be able to choose one of them and still be hiring ``the best candidate\n      we have seen so far'';\n    \\item if we hire $b$ because he is ``better than or equal'' than $a$ and\n      then we hire $c$ because he is ``better than or equal'' than $b$ and $c$\n      is not ``better than or equal'' than $a$, we are not hiring ``the best\n      candidate we have seen so far'';\n    \\item if $R$ is not a total relation, we would not be able to compare any\n      two candidates.\n\\end{enumerate}\n\\end{framed}\n\n\\item[5.1{-}2]{($\\star$) Describe an implementation of the procedure \\textsc{Random}(a, b)\nthat only makes calls to \\textsc{Random}(0, 1). What is the expected running\ntime of your procedure, as a function of a and b?}\n\n\\begin{framed}\nThe pseudocode is stated below.\\\\\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{RandomInterval}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{a, b}}{%\n\\nl $flips = \\ceil{\\lg (b - a)}$\\;\n\\nl $count = \\infty$\\;\n\\nl \\While{$count > b$}{%\n\\nl   $count = 0$\\;\n\\nl   \\For{$i = 1$ \\KwTo $flips$}{%\n\\nl     $count = count + (2^{i - 1} \\cdot \\texttt{Random}(0, 1))$\\; } }\n\\nl \\Return{$count + a$}\\; }\n\\end{algorithm}\n\nThe expected running time is\n\\[\n  \\underbrace{2^{\\ceil{\\lg(b - a)}}/(b - a)}_\\text{while loop} \\cdot \\underbrace{\\ceil{\\lg(b - a)}}_\\text{for loop} < 2 \\cdot \\ceil{\\lg(b - a)},\n\\]\nwhere the last inequality is valid since $1 \\le 2^{\\ceil{\\lg(b - a)}}/(b - a) < 2$.\n\\end{framed}\n\n\\item[5.1{-}3]{($\\star$) Suppose that you want to output 0 with probability\n$1/2$ and 1 with probability $1/2$.  At your disposal is a procedure\n\\textsc{Biased-Random}, that outputs either 0 or 1. It outputs 1 with some\nprobability $p$ and 0 with probability $1 - p$, where $0 < p < 1$, but you do\nnot know what $p$ is. Give an algorithm that uses \\textsc{Biased-Random} as\na subroutine, and returns an unbiased answer, returning 0 with probability $1/2$\nand 1 with probability $1/2$. 
What is the expected running time of your
algorithm as a function of $p$?}

\begin{framed}
The pseudocode is stated below.\\
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{Random}
\SetKwProg{myalg}{}{}{}
\myalg{\algo{}}{%
\nl \While{$1$}{%
\nl   $r_1 = \texttt{Biased-Random}()$\;
\nl   $r_2 = \texttt{Biased-Random}()$\;
\nl   \If{$r_1 \neq r_2$}{%
\nl     \Return{$r_1$}\; } } }
\end{algorithm}

The expected running time is
\[
  \frac{1}{\underbrace{(1 - p) p}_\text{$(r_1, r_2) = (0, 1)$} + \underbrace{p (1 - p)}_\text{$(r_1, r_2) = (1, 0)$}} \cdot 1 = \frac{1}{2p (1- p)}.
\]
\end{framed}

\end{enumerate}

\newpage

\section{Indicator random variables}

\begin{enumerate}

\item[5.2{-}1]{In \textsc{Hire-Assistant}, assuming that the candidates are
presented in a random order, what is the probability that you hire exactly
one time? What is the probability that you hire exactly $n$ times?}

\begin{framed}
Since the initial dummy candidate is the least qualified,
\textsc{Hire-Assistant} will always hire the first candidate. It hires exactly
one time when the best candidate is the first to be interviewed. Thus, the
probability is $1/n$. To hire exactly $n$ times, the candidates have to be in
increasing order of quality. Since there are $n!$ possible orderings (each one
with equal probability of happening), the probability is $1/n!$.
\end{framed}

\item[5.2{-}2]{In \textsc{Hire-Assistant}, assuming that the candidates are
presented in a random order, what is the probability that you hire exactly
twice?}

\begin{framed}
The first candidate is always hired, thus the best qualified candidate cannot be
the first to be interviewed. Also, among all the candidates that are better
qualified than the first candidate, the best candidate must be interviewed first.
Otherwise, a third candidate would be hired between them. Now assume that the
first candidate to be interviewed is the $i$th best qualified, for $i = 2,
\dots, n$. This occurs with a probability of $1/n$. To hire exactly twice, the
best candidate must be the first to be interviewed among the $i - 1$ candidates
that are better qualified than candidate $i$. This occurs with a probability
of $1/(i - 1)$. Thus, the probability of hiring exactly twice is
\[
  \sum_{i = 2}^{n} \frac{1}{n} \frac{1}{i - 1} = \frac{1}{n} \sum_{i = 1}^{n - 1} \frac{1}{i} = \frac{1}{n} (\ln(n - 1) + O(1)).
\]
\end{framed}

\item[5.2{-}3]{Use indicator random variables to compute the expected value of
the sum of $n$ dice.}

\begin{framed}
Let $X_i$ be an indicator random variable of a die coming up the number $i$. We
have $\text{Pr}\{X_i\} = 1/6$. Let $X$ be a random variable denoting the
result of throwing a die. Then
\[
  \text{E}[X] = \sum_{i = 1}^{6} i \cdot \text{Pr}\{X_i\}
              = \sum_{i = 1}^{6} i \cdot \frac{1}{6}
              = \frac{1}{6} \sum_{i = 1}^{6} i
              = \frac{1}{6} \frac{6 \cdot 7}{2} = 3.5.
\]
By linearity of expectation, the expected value of the sum of $n$ dice is the
sum of the expected values of the individual dice. Thus,
\[
  \sum_{i = 1}^{n} \text{E}[X] = \sum_{i = 1}^{n} 3.5 = 3.5 \cdot n.
\]
\end{framed}

\item[5.2{-}4]{Use indicator random variables to solve the following problem,
which is known as the \textbf{\emph{hat-check problem}}. Each of $n$ customers
gives a hat to a hat-check person at a restaurant.
The hat-check person\ngives the hats back to the customers in a random order. What is the expected\nnumber of customers who get back their own hat?}\n\n\\begin{framed}\nLet $X_i$ be an indicator random variable of customer $i$ getting back his own\nhat. We have\n\\[\n\\text{Pr}\\{X_i\\} = \\text{E}[X_i] = 1/n.\n\\]\nLet $X$ be a random variable denoting the number of customers who get back their\nown hat. Then\n\\[\n  X = X_1 + X_2 + \\cdots + X_n,\n\\]\nwhich implies\n\\begin{equation*}\n\\begin{aligned}\n  \\text{E}[X] &= \\text{E}\\left[\\sum_{i = 1}^{n} X_i\\right]\\\\\n              &= \\sum_{i = 1}^{n} \\text{E}[X_i]\\\\\n              &= \\sum_{i = 1}^{n} \\frac{1}{n}\\\\\n              &= 1.\n\\end{aligned}\n\\end{equation*}\n\\end{framed}\n\n\\item[5.2{-}5]{Let $A[1, \\dots, n]$ be an array of $n$ distinct numbers. If\n$i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an\n$\\textbf{\\emph{inversion}}$ of $A$. (See Problem 2-4 for more on inversions.)\nSuppose that the elements of $A$ form a uniform random permutation of\n$\\langle 1, 2, \\dots, n \\rangle$. Use indicator random variables to compute\nthe expected number of inversions.}\n\n\\begin{framed}\nLet $X_{ij}$ be an indicator random variable for the event that the pair\n$(i, j)$ is inverted. Since $A$ forms a uniform random permutation, we have\n\\[\n\\text{Pr}\\{X_{ij}\\} = \\text{Pr}\\{\\overline{X_{ij}}\\} = 1/2,\n\\]\nwhich implies\n\\[\n  \\text{E}[X_{ij}] = 1/2.\n\\]\nLet $X$ be a random variable denoting the number of\ninversions of $A$. Since there are $\\binom{n}{2}$ possible pairs on $A$, each\nwith probability $1/2$ of being inverted, we have\n\\[\n  \\text{E}[X] = \\binom{n}{2} \\frac{1}{2} = \\frac{n!}{2! \\cdot (n - 2)!} \\frac{1}{2} = \\frac{n (n - 1)}{4}.\n\\]\n\\end{framed}\n\n\\end{enumerate}\n\n\\newpage\n\n\\section{Randomized algorithms}\n\n\\begin{enumerate}\n\n\\item[5.3{-}1]{Professor Marceau objects to the loop invariant used in the proof\nof Lemma 5.5. He questions whether it is true prior to the first iteration. He\nreasons that we could just as easily declare that an empty subarray contains\nno 0-permutations. Therefore, the probability that an empty subarray\ncontains a 0-permutation should be 0, thus invalidating the loop invariant\nprior to the first iteration. Rewrite the procedure \\textsc{Randomize-In-Place}\nso that its associated loop invariant applies to a nonempty subarray prior\nto the first iteration, and modify the proof of Lemma 5.5 for your procedure.}\n\n\\begin{framed}\nJust select a random element in the array and swap it with the first element.\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n  \\SetKwFunction{algo}{Randomize-In-Place}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A}}{%\n\\nl $n = A.length$\\;\n\\nl swap $A[1]$ with $A[\\texttt{Random}(1, n)]$\\;\n\\nl \\For{$i = 2$ \\KwTo $n - 1$}{%\n\\nl   swap $A[i]$ with $A[\\texttt{Random}(i, n)]$\\; } }\n\\end{algorithm}\n\nThe only difference in the proof of Lemma 5.5 is the initialization of the loop\ninvariant:\n\\begin{itemize}\n  \\item \\textbf{Initialization.} Consider the situation just before the first\n    loop iteration, so that $i = 2$. The loop invariant says that for each\n    possible 1-permutation, the subarray $A[1, \\dots, 1]$ contains this\n    1-permutation with probability $(n - i + 1)/n! = (n - 1)!/n! = 1/n$. The\n    subarray $A[1, \\dots, 1]$ has a single element and this element was\n    randomly choosed among the $n$ elements of the array. 
Thus, $A[1, \\dots, 1]$\n    contains this $1$-permutation with probability $1/n$, and the loop invariant\n    holds prior to the first iteration.\n\\end{itemize}\n\\end{framed}\n\n\\item[5.3{-}2]{Professor Kelp decides to write a procedure that produces at\nrandom any permutation besides the identity permutation. He proposes the\nfollowing procedure:\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{Permute-Without-Identity}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A}}{%\n\\nl $n = A.length$\\;\n\\nl \\For{$i = 1$ \\KwTo $n - 1$}{%\n\\nl   swap $A[i]$ with $A[\\texttt{Random}(i + 1, n)]$\\; } }\n\\end{algorithm}\n\nDoes this code do what Professor Kelp intends?\n}\n\n\\begin{framed}\nNo. This code enforces that every position $i$ of the resulting array receives\nan element that is different from the $i$th element of the original array.\nHowever, this requirement discards much more permutations than just the identity\npermutation. For instance, consider the array $A = [1, 2, 3]$ and a permutation\nof it $A' = [1, 3, 2]$. In this case, the permutation $A'$ is not identical\nto the original array $A$. However, Professor Kelp's code is not able to produce\nthis permutation.\n\\end{framed}\n\n\\item[5.3{-}3]{Suppose that instead of swapping element $A[i]$ with a random\nelement from the subarray $A[i, \\dots, n]$, we swapped it with a random element\nfrom anywhere in the array:\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{Permute-With-All}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A}}{%\n\\nl $n = A.length$\\;\n\\nl \\For{$i = 1$ \\KwTo $n$}{%\n\\nl   swap $A[i]$ with $A[\\texttt{Random}(1, n)]$\\; } }\n\\end{algorithm}\n\nDoes this code produce a uniform random permutation? Why or why not?\n}\n\n\\begin{framed}\nNo. As a counterexample, consider the input array $A = [1, 2, 3]$. Since each\ncall to \\textsc{Random} can produce one of three values, the number of possible\noutcomes after all the \\textsc{Random} calls can be seen as the number of\nstrings over the set $\\{1, 2, 3\\}$, which is $3^3 = 27$. However, since an array\nof size $3$ has $3! = 6$ distinct permutations, and 27 is not divisible by 6, it\nis not possible that each of the 6 permutations of $A$ has the same probability\nof happening among the 27 possible outcomes of \\textsc{Permute-With-All}.\n\\end{framed}\n\n\\newpage\n\n\\item[5.3{-}4]{Professor Armstrong suggests the following procedure for\ngenerating a uniform random permutation:\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{Permute-By-Cyclic}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A}}{%\n\\nl $n = A.length$\\;\n\\nl let $B[1 \\dots n]$ be a new array\\;\n\\nl \\emph{offset} $= \\texttt{Random}(1, n)$\\;\n\\nl \\For{$i = 1$ \\KwTo $n$}{%\n\\nl   $\\text{\\emph{dest}} = i + \\text{\\emph{offset}}$\\;\n\\nl   \\If{\\emph{dest} $> n$}{%\n\\nl     \\emph{dest} $=$ \\emph{dest} $- n$\\; }\n\\nl   $B[\\text{\\emph{dest}}] = A[i]$\\; } }\n\\end{algorithm}\n\nShow that each element $A[i]$ has a $1/n$ probability of winding up in any\nparticular position in $B$. Then show that Professor Armstrong is mistaken by\nshowing that the resulting permutation is not uniformly random.\n}\n\n\\begin{framed}\nWhat Professor Armstrong's code does is a circular shift of all the elements to\nthe right by $i$ positions. 
Since each of the $n$ possible shifts has the same\nprobability of happening, each element has indeed a probability of $1/n$ of\nwinding up in any particular position of the final array $B$. However, since\nthis code has only $n$ possible outcomes and $A$ has $n!$ permutations, it can\nnot produce a uniform random distribution over $A$. More precisely, the\nProfessor Armstrong's code is not able to produce any permutation of $A$ that\nis not a circular shift of $A$.\n\\end{framed}\n\n\\item[5.3{-}5]{($\\star$) Prove that in the array $P$ in procedure\n\\textsc{Permute-By-Sorting}, the probability that all elements are unique is at\nleast $1 - 1/n$.}\n\n\\begin{framed}\nLet $X_i$ be an indicador random variable for the event that the $i$th priority\nis not unique. Since the subarray $P[1, \\dots, i - 1]$ has at most $i - 1$\ndistinct numbers, we have $\\text{Pr}\\{X_i\\} = \\text{E}[X_i] \\le (i - 1)/n^3$.\nLet $X$ be a random variable for the event that at least one priority is not\nunique. Then\n\\[\n  X = (X_1 \\cup X_2 \\cup \\cdots X_n) = X_1 + X_2 + \\cdots + X_n,\n\\]\nwhich implies\n\\begin{equation*}\n\\begin{aligned}\n  \\text{E}[X] &=   \\text{E}\\left[\\sum_{i = 1}^{n} X_i\\right]\\\\\n              &=   \\sum_{i = 1}^{n} \\text{E}[X_i]\\\\\n              &\\le \\sum_{i = 1}^{n} \\frac{i - 1}{n^3}\\\\\n              &=   \\frac{1}{n^3} \\sum_{i = 0}^{n - 1} i\\\\\n              &=   \\frac{1}{n^3} \\frac{(n - 1) \\cdot n}{2}\\\\\n              &=   \\frac{n - 1}{2 n^2}\\\\\n              &\\le \\frac{1}{n}.\n\\end{aligned}\n\\end{equation*}\n\nThus, the probability that all elements are unique is\n\\[\n  \\text{E}[\\overline{X}] = 1 - \\text{E}[X] \\ge 1 - \\frac{1}{n}.\n\\]\n\\end{framed}\n\n\\newpage\n\n\\item[5.3{-}6]{Explain how to implement the algorithm\n\\textsc{Permute-By-Sorting} to handle the case in which two or more priorities\nare identical. That is, your algorithm should produce a uniform random\npermutation, even if two or more priorities are identical.}\n\n\\begin{framed}\nThe pseudocode is stated below.\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{Permute-By-Sorting-Unique}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A}}{%\n\\nl $n = A.length$\\;\n\\nl let $P[1 \\dots n]$ be a new array\\;\n\\nl \\Repeat{\\text{unique}}{%\n\\nl   \\For{$i = 1$ \\KwTo $n$}{%\n\\nl     $P[i] = \\texttt{Random}(1, n^3)$\\; }\n\\nl   let $Q$ be a copy of $P$\\;\n\\nl   sort $Q$\\;\n\\nl   $\\text{\\emph{unique}} = \\text{True}$\\;\n\\nl   \\For{$i = 2$ \\KwTo $n$}{%\n\\nl     \\If{$Q[i] == Q[i - 1]$} {%\n\\nl       $\\text{\\emph{unique}} = \\text{False}$\\;\n\\nl       break\\; } } }\n\\nl sort $A$, using $P$ as sort keys\\; }\n\\end{algorithm}\n\nBefore sorting $A$ using $P$ as sort keys, the above algorithm verifies if $P$\nhas unique priorities. If the priorities are not unique, $P$ is generated again\nuntil it has unique priorities. Since the probability that a random $P$ is\nunique is at least $1 - 1/n$, the expected number of iterations of the repeat\nloop of lines 3-12 is less than $2$.\n\\end{framed}\n\n\\newpage\n\n\\item[5.3{-}7]{Suppose we want to create a \\textbf{\\emph{random sample}} of the\nset $\\{1, 2, 3, \\dots, n\\}$, that is, an $m$-element subset $S$, where\n$0 \\le m \\le n$, such that each $m$-subset is equally likely to be created. One\nway would be to set $A[i] = i$ for $i = 1, 2, 3, \\dots, n$, call\n\\textsc{Randomized-In-Place}($A$), and then take just the first $m$ array\nelements. 
This method would make $n$ calls to the \textsc{Random} procedure. If
$n$ is much larger than $m$, we can create a random sample with fewer calls to
\textsc{Random}. Show that the following recursive procedure returns a random
$m$-subset $S$ of $\{1, 2, 3, \dots, n\}$, in which each $m$-subset is equally
likely, while making only $m$ calls to \textsc{Random}:

\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{Random-Sample}
\SetKwProg{myalg}{}{}{}
\myalg{\algo{m, n}}{%
\nl \If{$m == 0$}{%
\nl   \Return{$\emptyset$}\; }
\nl  \Else{%
\nl    $S = \texttt{Random-Sample}(m - 1, n - 1)$\;
\nl    $i = \texttt{Random}(1, n)$\;
\nl    \If{$i \in S$}{%
\nl      $S = S \cup \{n\}$\; }
\nl    \Else{%
\nl      $S = S \cup \{i\}$\; }
\nl    \Return{S}\; } }
\end{algorithm}
}

\begin{framed}
The recursion has $m + 1$ levels. Let $R_k$, for $k = 0, 1, \dots, m$, denote
the recursion at depth $k$, in which a $k$-subset is returned ($R_0$ returns
the empty set; $R_m$ returns the final $m$-subset). After $R_k$, $S$ will
consist of $k$ elements from the set $\{1, 2, \dots, n - (m - k)\}$. There are
$\binom{n - (m - k)}{k}$ ways to choose $k$ elements from an
$(n - (m - k))$-set. Thus, for $S$ to be a random sample, we wish to show that, in
each recursion level $k$, each particular $k$-subset is selected with
probability $1/\binom{n - (m - k)}{k}$.

For the base case of the recursion, which occurs when $k = 0$, there are
$\binom{n - m}{0} = 1$ distinct $0$-subsets and the algorithm returns the empty
set with probability $1 = 1/\binom{n - m}{0}$. Now assume $R_{k - 1}$ returns a
random $(k - 1)$-sample. There are two ways to add the $k$th element to $S$ on
$R_k$:
\begin{itemize}
\item The element $n - (m - k)$ is added. This occurs when line 5 either
selects the element $n - (m - k)$ or an element $e$ such that $e \in R_{k - 1}$.
This probability is
\[
  \underbrace{\frac{1}{n - (m - k)}}_\text{$(n - (m - k))$ is selected} +
  \underbrace{\frac{k - 1}{n - (m - k)}}_\text{$e \in R_{k - 1}$ is selected} = \frac{k}{n - (m - k)}.
\]
Thus, $R_k$ produces a particular $k$-sample with the element $n - (m - k)$ with
probability
\begin{equation*}
\begin{aligned}
  \frac{k}{n - (m - k)} \cdot \frac{1}{\binom{n - (m - k) - 1}{k - 1}}
  &= \frac{k}{n - (m - k)} \cdot \left(\frac{(n - (m - k) - 1)!}{(k - 1)! \cdot (n - (m - k) - k)!}\right)^{-1}\\
  &= \left(\frac{(n - (m - k))!}{k! \cdot (n - (m - k) - k)!}\right)^{-1}\\
  &= \frac{1}{\binom{n - (m - k)}{k}}.
\end{aligned}
\end{equation*}

\item An element $j < n - (m - k)$ is added. The probability of line 5 selecting
such an element is
\[
  \frac{n - (m - k) - k}{n - (m - k)} = \frac{n - m}{n - (m - k)}.
\]
Thus, $R_k$ produces a particular $k$-sample with the element $j$ with
probability
\begin{equation*}
\begin{aligned}
  \frac{n - m}{n - (m - k)} \cdot \frac{1}{\binom{n - (m - k) - 1}{k}}
  &= \frac{n - m}{n - (m - k)} \cdot \left(\frac{(n - (m - k) - 1)!}{k! \cdot (n - (m - k) - 1 - k)!}\right)^{-1}\\
  &= \left(\frac{(n - (m - k))!}{k! \cdot (n - (m - k) - k)!}\right)^{-1}\\
  &= \frac{1}{\binom{n - (m - k)}{k}}.
\end{aligned}
\end{equation*}
\end{itemize}

Since each recursion level $R_k$ such that $k > 0$ makes exactly one
call to \textsc{Random}, there are $m$ such calls. (The short simulation below
gives an empirical check of the uniformity.)
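As a hedged aside (not part of the original solution), the uniformity claim is easy to test empirically; the sketch below assumes Python's \texttt{random} module plays the role of \textsc{Random}:

\begin{verbatim}
import random
from collections import Counter

def random_sample(m, n):
    # The recursive sampler above; makes exactly m calls to the RNG.
    if m == 0:
        return set()
    s = random_sample(m - 1, n - 1)
    i = random.randint(1, n)        # plays the role of Random(1, n)
    s.add(n if i in s else i)
    return s

# Every 2-subset of {1,...,4} should appear about 1/C(4,2) = 1/6 of the time.
counts = Counter(frozenset(random_sample(2, 4)) for _ in range(60000))
for subset, c in sorted(counts.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(subset), round(c / 60000, 3))
\end{verbatim}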
Also, among the\n$\\binom{n}{m}$ ways of choosing $m$ elements from an $n$-set,\n\\textsc{Random-Sample} returns each of them with probability\n\\[\n  \\frac{1}{\\binom{n - (m - m)}{m}} = \\frac{1}{\\binom{n}{m}}.\n\\]\n\\end{framed}\n\n\\end{enumerate}\n\n\\newpage\n\n\\section*{Problems}\n\\addcontentsline{toc}{section}{\\protect\\numberline{}Problems}%\n\n\\begin{enumerate}\n\\item[5{-}1]{\\textbf{\\emph{Probabilistic counting}}\\\\\nWith a $b$-bit counter, we can ordinarily only count up to $2^b - 1$. With R.\nMorri's \\textbf{\\emph{probabilistic counting}}, we can count up to a much larger\nvalue at the expense of some loss of precision.\n\nWe let a counter value of $i$ represent a count of $n_i$ for\n$i = 0, 1, \\dots, 2^b - 1$, where the $n_i$ form an increasing sequence of\nnonnegative values. We assume that the initial value of the counter is 0,\nrepresenting a count of $n_0 = 0$. The \\textsc{Increment} operation works on\na counter containing the value $i$ in a probabilistic manner. If $i = 2^b - 1$,\nthen the operation reports an overflow error. Otherwise, the \\textsc{Increment}\noperation increases the counter by 1 with probability $1/(n_{i + 1} - n_i)$.\n\nIf we select $n_i = i$ for all $i \\ge 0$, then the counter is an ordinary one.\nMore interesting situations arise if we select, say, $n_i = 2^{i - 1}$ for\n$i > 0$ or $n_i = F_i$ (the $i$th Fibonacci number {--} see Section 3.2).\n\nFor this problem, assume $n_{2^b - 1}$ is large enough that the probability of\nan overflow error is negligible.\n\n\\begin{enumerate}\n\\item[a.] Show that the expected value represented by the counter after $n$\n\\textsc{Increment} operations have been performed is exactly $n$.\n\\item[b.] The analysis of the variance of the count represented by the counter\ndepends on the sequence of the $n_i$. Let us consider a simple case:\n$n_i = 100i$ for all $i \\ge 0$. Estimate the variance in the value represented\nby the register after $n$ \\textsc{Increment} operations have been performed.\n\\end{enumerate}\n}\n\n\\begin{framed}\n\\begin{enumerate}\n\\item\nLet $X_i$ denote a random variable for the expected \\emph{increment} of the\ncount represented by a counter of value $i$ after \\emph{one} \\textsc{Increment}\noperation. We have\n\\[\n  \\text{E}[X_i] = 0 \\cdot \\left(1 - \\frac{1}{n_{i + 1} - n_{i}}\\right) + (n_{i + 1} - n_{i}) \\cdot \\frac{1}{n_{i + 1} - n_{i}} = 1,\n\\]\nwhich shows that, independently from the current state of the \\emph{counter},\nthe expected \\emph{increment} of the \\emph{count} after each \\textsc{Increment}\noperation is always 1. Thus, after $n$ \\textsc{Increment} operations, the\nexpected \\emph{count} is:\n\\[\n  \\sum_{i = 1}^{n} \\text{E}[X_0] = \\sum_{i = 1}^{n} 1 = n,\n\\]\n\n\\item We have\n\\begin{equation*}\n\\begin{aligned}\n  \\text{Var}[X_i] &= \\text{E}[{X_i}^2] - \\text{E}^2[X_i]\\\\\n                  &= \\left( 0^2 \\cdot \\left(1 - \\frac{1}{100}\\right) + 100^2 \\cdot \\frac{1}{100} \\right) - 1\\\\\n                  &= 99,\n\\end{aligned}\n\\end{equation*}\nwhich shows that the estimated variance after each \\textsc{Increment} operation\ndoes not depend on the current state of the \\emph{counter}. 
Thus, after $n$\n\\textsc{Increment} operations, the\nestimated variance is\n\\[\n  \\sum_{i = 1}^{n} \\text{Var}[X_0] = \\sum_{i = 1}^{n} 99 = 99n.\n\\]\n\\end{enumerate}\n\\end{framed}\n\n\\newpage\n\n\\item[5{-}2]{\\textbf{\\emph{Searching an unsorted array}}\\\\\nThis problem examines three algorithms for searching for a value $x$ in an\nunsorted array $A$ consisting of $n$ elements.\n\nConsider the following randomized strategy: pick a random index $i$ into $A$. If\n$A[i] = x$, then we terminate; otherwise, we continue the search by picking\na new random index into $A$. We continue picking random indices into $A$ until\nwe find an index $j$ such that $A[j] = x$ or until we have checked every element\nof $A$. Note that we pick from the whole set of indices each time, so that we\nmay examine a given element more than once.\n\n\\begin{enumerate}\n\\item[\\textbf{a.}] Write pseudocode for a procedure \\textsc{Random-Search} to\nimplement the strategy above. Be sure that your algorithm terminates when all\nindices into $A$ have been picked.\n\\item[\\textbf{b.}] Suppose that there is exactly one index $i$ such that\n$A[i] = x$. What is the expected number of indices into $A$ that we must pick\nbefore we find $x$ and \\textsc{Random-Search} terminates?\n\\item[\\textbf{c.}] Generalizing your solution to part (b), suppose that there\nare $k \\ge 1$ indices $i$ such that $A[i] = x$. What is the expected number of\nindices into $A$ that we must pick before we find $x$ and \\textsc{Random-Search}\nterminates? Your answer should be a function of $n$ and $k$.\n\\item[\\textbf{d.}] Suppose that there are no indices $i$ such that $A[i] = x$.\nWhat is the expected number of indices into $A$ that we must pick before we\nhave checked all elements of $A$ and \\textsc{Random-Search} terminates?\n\\end{enumerate}\n\nNow consider a deterministic linear serach algorithm, which we refer to as\n\\textsc{Deterministic-Search}. Specifically, the algorithm searches $A$ for $x$\nin order, considering $A[1], A[2], A[3], \\dots, A[n]$ until either it finds\n$A[i] = x$ or it reaches the end of the array. Assume that all possible\npermutations of the input array are equally likely.\n\n\\begin{enumerate}\n\\item[\\textbf{e.}] Suppose that there is exactly one index $i$ such that\n$A[i] = x$. What is the average-case running time of\n\\textsc{Deterministic-Search}?  What is the worst-case running time of\n\\textsc{Deterministic-Search}?\n\\item[\\textbf{f.}] Generalizing your solution to part (e), suppose that there\nare $k \\ge 1$ indices $i$ such that $A[i] = x$. What is the average-case running\ntime of \\textsc{Deterministic-Search}? What is the worst-case running time of\n\\textsc{Deterministic-Search}? Your answer should be a function of $n$ and $k$.\n\\item[\\textbf{g.}] Suppose that there are no indices $i$ such that $A[i] = x$.\nWhat is the average-case running time of \\textsc{Deterministic-Search}? What\nis the worst-case running time of \\textsc{Deterministic-Search}?\n\\end{enumerate}\n\nFinally, consider a randomized algorithm \\textsc{Scramble-Search} that works by\nfirst randomly permuting the input array and then running the deterministic\nlinear search given above on the resulting permuted array.\n\n\\begin{enumerate}\n\\item[\\textbf{h.}] Letting $k$ be the number of indices $i$ such that\n$A[i] = x$, give the worst-case and expected running times of\n\\textsc{Scramble-Search} for the cases in which $k = 0$ and $k = 1$. 
Generalize\nyour solution to handle the case in which $k \\ge 1$.\n\\item[\\textbf{i.}] Which of the three searching algorithms would you use?\nExplain your answer.\n\\end{enumerate}\n}\n\n\\begin{framed}\n\\begin{enumerate}\n\\item The pseudocode is stated below.\n\n\\begin{algorithm}[H]\n\\SetAlgoNoEnd\\DontPrintSemicolon\n\\BlankLine\n\\SetKwFunction{algo}{Random-Search}\n\\SetKwProg{myalg}{}{}{}\n\\myalg{\\algo{A, x}}{%\n\\nl $I = \\emptyset$\\;\n\\nl $n = A.length$\\;\n\\nl $\\text{\\emph{index}} = -1$\\;\n\\nl \\While{$|I| < n$}{%\n\\nl   $i = \\texttt{Random}(1, n)$\\;\n\\nl   $I = I \\cup \\{i\\}$\\;\n\\nl   \\If{$A[i] == x$}{%\n\\nl     $\\text{\\emph{index}} = i$\\;\n\\nl     break\\; } }\n\\nl \\Return{index}\\; }\n\\end{algorithm}\n\n\\item This can be viewed as a sequence of Bernoulli trials, each with\na probability $p = 1/n$ of success. Let $X$ be a random variable for the number\nof trials needed to pick $i$ such that $A[i] = x$. From Equation (C.32), we have\n\\[\n  \\text{E}[X] = \\frac{1}{p} = n.\n\\]\n\n\\item This can also be viewed as a sequence of Bernoulli trials, but with\na probability $p = k/n$ of success. Thus, we have\n\\[\n  \\text{E}[X] = \\frac{1}{p} = \\frac{n}{k}.\n\\]\n\n\\item Let $I$ be the set of indexes that was already checked. Let $X_i$ be\na random variable for the number of trials needed to pick an index $i$, for\n$i = 1, 2, \\dots, n$, such that $i \\notin I$ and $|I| = i - 1$. This can be\nviewed as a sequence of Bernoulli trials. Thus, we have\n\\[\n  p = \\frac{n - |I|}{n} = \\frac{n - i + 1}{n},\n\\]\nand\n\\[\n  \\text{E}[X_i] = \\frac{1}{p} = \\frac{n}{n - i + 1}.\n\\]\n\nNow let $X$ be a random variable for the number of trials to pick all elements\nof $A$. We have\n\\begin{equation*}\n\\begin{aligned}\n  \\text{E}[X] &= \\text{E}\\left[\\sum_{i = 1}^{n} X_i \\right]\n              = \\sum_{i = 1}^{n} \\text{E}[X_i]\\\\\n              &= \\sum_{i = 1}^{n} \\frac{n}{n - i + 1}\\\\\n              &= n \\sum_{i = 1}^{n} \\frac{1}{n - i + 1}\\\\\n              &= n \\sum_{i = 0}^{n - 1} \\frac{1}{n - i}\\\\\n              &= n \\sum_{i = 1}^{n} \\frac{1}{i} & \\text{($n$th harmonic number)}\\\\\n              &= n (\\ln n + O(1)).\n\\end{aligned}\n\\end{equation*}\n\n\\item Lets first consider the average case. Among the $n - 1$ elements that is\nnot $x$, $(n - 1)/2$ of them are expected to be before the element $x$ on the\narray. Thus, the expected running time of the algorithm is\n\\[\n  \\frac{n - 1}{2} + 1 = \\frac{n + 1}{2}.\n\\]\n\nThe worst-case occur when the number of elements before $x$ is $n - 1$. In this\ncase, the algorithm will make $n$ checks.\n\n\\item Let $I$ be the set of indexes such that $i \\in I \\rightarrow A[i] = x$.\nFor each element $e$ such that $e \\neq x$, there are $k + 1$ possibilities to\nposition $e$ with respect to $I$ (before all elements of $I$, after one\nelement of $I$, but before the remaining $k - 1$ elements of $I$, and so on).\nEach of these positions is equally likely.  Therefore, among the $n - k$\nelements that is not $x$, $(n - k) \\cdot 1/(k + 1) = (n - k)/(k + 1)$ are\nexpected to be before all the elements of $I$. Thus, the expected running time\nof the algorithm is\n\\[\n  \\frac{n - k}{k + 1} + 1 = \\frac{(n - k) + (k + 1)}{k + 1} = \\frac{n + 1}{k + 1}.\n\\]\n\nThe worst-case occurs when the number of elements before the first $x$ is\n$n - k$. In this case, the algorithm will make $n - k + 1$ checks.\n\n\\item In every case, the algorithm will check all elements of $A$. 
Thus, there
will be $n$ checks.

\item Suppose the algorithm uses \textsc{Randomize-In-Place} to randomize the
input array. Independently of the value of $k$, the algorithm spends
$n$ steps on this operation, so let us focus on the number of checks in each
case. When $k = 0$, the algorithm makes exactly $n$ checks in every case, so
the expected running time is $n + n = 2n$. When $k = 1$, the
behaviour of the algorithm is similar to that of item (e), so the expected
running time is $n + (n + 1)/2 = (3n + 1)/2$. As for the worst case, note that
this notion refers to the distribution of inputs. Since for every input the
expected running time is the same, the worst case (over the inputs) is also
$n + (n + 1)/2 = (3n + 1)/2$. Similarly, for a given $k \ge 1$, item (f) gives
both an expected and a worst-case running time of $n + (n + 1)/(k + 1)$.

\item \textsc{Deterministic-Search} is better in all cases.
\end{enumerate}
\end{framed}

\end{enumerate}
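As an editorial footnote to Problem 5-2, the expected counts in items (c) and (h) can be checked with a short simulation (a sketch assuming Python's \texttt{random}; \textsc{Scramble-Search} corresponds to running the deterministic search on the shuffled array):

\begin{verbatim}
import random

def random_search(a, x):
    # Pick uniformly random indices until x is found (or all have been seen).
    seen, probes = set(), 0
    while len(seen) < len(a):
        i = random.randrange(len(a)); probes += 1
        seen.add(i)
        if a[i] == x:
            break
    return probes

def deterministic_search(a, x):
    # Scan left to right; Scramble-Search is this, run on a shuffled copy.
    for j, v in enumerate(a, 1):
        if v == x:
            return j
    return len(a)

n, k, trials = 100, 5, 20000
base = [1] * k + [0] * (n - k)          # k copies of the target value 1
tot_r = tot_d = 0
for _ in range(trials):
    a = base[:]
    random.shuffle(a)                   # uniformly random arrangement
    tot_r += random_search(a, 1)
    tot_d += deterministic_search(a, 1)
print("Random-Search:", tot_r / trials)   # about n/k = 20 picks
print("Deterministic:", tot_d / trials)   # about (n+1)/(k+1) ~ 16.8 checks
\end{verbatim}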
\section{Background Gauge Fields}
Let $P$ be the fermion parity operator.
There is a fermion parity symmetry
\begin{align}
    P \gamma_i P = -\gamma_i
\end{align}

Consider a $\mathbb{Z}_2^f$ background gauge field with $s=\pm 1$.
Let there be some site-centered Majoranas $\gamma_{i,\alpha}$,
where $i$ labels the site location
and $\alpha$ labels the flavour.

In one dimension the Hamiltonian is
\begin{align}
    H &= \frac{i}{4}\sum_{i,j}
    M_{ij}^{\alpha\beta} \gamma_i^\alpha \gamma_j^\beta
\end{align}

You can introduce a $\mathbb{Z}_2$ gauge field so that whenever you hop from $i$
to $j$,
there is a $\mathbb{Z}_2$ string from $i$ to $j$:
\begin{align}
    H[s] &=
    \frac{i}{4}\sum_{i,j}M_{ij}^{\alpha\beta}
    \gamma_i^\alpha \gamma_j^\beta
    \prod_{k=i}^{j - 1} s_{k,k + 1}
\end{align}
This is the $\mathbb{Z}_2$ analogue of, say, the $U(1)$ symmetry of electrons,
where you would attach a factor $e^{i\int_i^j A\, dl}$.

\begin{question}
    Why do we have multiple fermion flavours?
\end{question}
For example we could have two fermions on each site;
and if the fermions carried spin,
the spin would be part of the flavour label.

Back to the spinless $p$-wave superconductor,
\begin{align}
    H[s] &=
    \sum_{i}\Big[
    \underbrace{
        \frac{1}{2}(1 + t)\, \gamma_{2i}\gamma_{2i + 1}\, s_{i,i+1}
    }_{\text{intersite}}
    - \underbrace{\mu\, \gamma_{2i - 1}\gamma_{2i}}_{\text{onsite}}\Big]
\end{align}
You can write
$s_{i,i+1}=e^{i\pi \sigma_{i,i+1}}$
where $\sigma\in\left\{ 0, 1 \right\}$.
There are $\mathbb{Z}_2^f$ gauge transformations labelled by
$\lambda_i\in\left\{ 0, 1 \right\}$,
under which
\begin{align}
    \sigma_{i, i + 1} &\to
    \left(
        \sigma_{i, i+1} + \lambda_{i + 1} - \lambda_i
    \right)\mod 2\\
    \gamma_{2i + 1} &\to
    (-1)^{\lambda_{i+1}}
    \gamma_{2i + 1}\\
    \gamma_{2i} &\to
    (-1)^{\lambda_i} \gamma_{2i}
\end{align}
I want to emphasize that changing the boundary conditions from periodic to
anti-periodic is equivalent to inserting $\pi$ net flux through the
ring.

\begin{example}
    Take periodic boundary conditions $c_{i + N} = c_i$,
    where $c$ is a complex fermion, and make a singular gauge transformation:
    define
    \begin{align}
        \tilde{c}_i &=
        c_i \prod_{k=1}^{i} s_{k,k+1}
    \end{align}
    where
    \begin{align}
        \prod_{k=1}^{N} s_{k,k+1} = -1
    \end{align}
    Then
    \begin{align}
        \tilde{c}_{i+N} &=
        c_{i + N} \prod_{k=1}^{i+N} s_{k, k+1}\\
        &=
        \underbrace{c_i \prod_{k = 1}^{i} s_{k, k + 1}}_{\tilde{c}_i}
        \underbrace{\prod_{k=i + 1}^{i+N} s_{k, k + 1}}_{-1}\\
        &=
        - \tilde{c}_i
    \end{align}
    So periodic boundary conditions with $\pi$ flux through the ring are
    equivalent to anti-periodic boundary conditions with no flux ($s=1$).
\end{example}

\section{Topological invariant}
Start with the non-trivial phase.
Suppose you have a ring with a small gap in it, the two ends close enough
together to allow the zero modes to interact.
The Hamiltonian is
\begin{align}
    H_{bdy} &= \tilde{t}\, i \gamma_{L}\gamma_{R}
\end{align}
Depending on the sign of $\tilde{t}$,
the ground state has either even or odd fermion parity.

Changing the sign of $\tilde{t}$ is equivalent to
inserting $\mathbb{Z}_2^f$ flux through the ring,
which is equivalent to changing the boundary conditions.
It is also equivalent to changing the fermion parity of the ground state.

Trivial phase:
no Majorana zero modes,
so changing boundary conditions
doesn't change the ground state parity.

Let us define the topological invariant
\begin{align}
    \mathcal{I} &=
    P_P P_A
\end{align}
where $P_P$ is the fermion parity of the ground state with periodic boundary
conditions and $P_A$ is the parity with anti-periodic boundary conditions.

The claim is that
\begin{align}
    \mathcal{I} &=
    \begin{cases}
        -1 & \text{topological phase}\\
        1 & \text{trivial phase}\\
    \end{cases}
\end{align}
In the spinless $p$-wave superconductor,
\begin{align}
    P_A = + 1, \qquad P_P = -1
\end{align}

\subsection{Relation to the free-fermion invariant}
For free fermions, with
\begin{align}
    H = \frac{i}{4} \sum_{i,j} A_{i,j}\, \gamma_i \gamma_j
\end{align}
the ground-state parity is given by the sign of the Pfaffian,
\begin{align}
    P(H) &= \sgn(\textrm{Pf}(A))
\end{align}
and the topological invariant is
\begin{align}
    \mathcal{I} &=
    P\left( H_P \right) P\left( H_A \right)
\end{align}
Going to $k$-space, the Hamiltonian becomes
\begin{align}
    H &=
    \frac{1}{2} \sum_{k}
    \Psi_k^\dagger H_k \Psi_k
\end{align}
where
\begin{align}
    \Psi_k &=
    \begin{pmatrix}
        \psi_k\\
        \psi_{-k}^\dagger
    \end{pmatrix}
\end{align}
For $k=0$ and $k=\pi$ the spinor is not independent of its partner at $-k$,
and $H$ only involves $\psi_k$ and $\psi_k^\dagger$:
\begin{align}
    H &=
    \frac{1}{2} \sum_{k = 0, \pi}
    \Psi_k^\dagger
    H_k\Psi_{k}
    + \frac{1}{2}\sum_{k \neq 0,\pi}\cdots
\end{align}

At the end of the day: for an odd-length ring, periodic boundary conditions
give momenta $k=2\pi n/N$, which include $k=0$ but not $k=\pi$, while
anti-periodic boundary conditions give
\begin{align}
    k &= \frac{2\pi}{N}\left( n + \frac{1}{2} \right)
\end{align}
which include $k=\pi$ but not $k=0$.
Collecting the contributions of these special momenta,
\begin{align}
    \mathcal{I} = S_0 S_\pi = \nu.
\end{align}

For an even-length ring,
both $k=0$ and $k=\pi$ are allowed by periodic boundary conditions,
and neither is allowed by anti-periodic boundary conditions.
Again we have
\begin{align}
    \mathcal{I} = S_0 S_{\pi} = \nu
\end{align}

\subsection{TQFT}
When you have fermions and want a TQFT,
you want a spin TQFT.
For every $d$-dimensional manifold,
we have a vector space.
We have two possible spin structures: $+$ (periodic) and $-$ (anti-periodic).
In both cases we know we have a unique ground state:
\begin{align}
    V(S^1, +)
    \simeq
    V(S^1,-)
    \simeq\mathbb{C}
\end{align}
Let's try to figure out the path integral on $T^2$
with our choice of spin structure $\eta$.
I can think of the two directions of the torus as time and space.
So I have $x,t$,
and the spin structure $(x,t)$
can have boundary conditions $(++)$, $(+-)$, $(-+)$ or $(--)$.

When you have a system of fermions,
you can write the path integral like a propagator,
\begin{align}
    Z &= \bra{j} e^{iHt} \ket{i}
\end{align}
using the coherent state path integral.
In the coherent state path integral for fermions,
when you do the derivation,
the fermions naturally have anti-periodic boundary conditions in time.
It's counterintuitive,
because you'd think periodic boundary conditions are what you should have,
but it turns out anti-periodic is natural.
The path integral for the TQFT is just the trace of the identity operator,
with anti-periodic boundary conditions in the time direction.
This is equal to
\begin{align}
    Z(T^2, +-) &= \Tr_{V(S^1,+)}\mathbf{1} = 1\\
    Z(T^2, -+) &= \Tr_{V(S^1,-)}\mathbf{1} = 1
\end{align}
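An editorial aside, not from the lecture: the invariant $\mathcal{I} = \sgn(\mathrm{Pf}(A_P))\,\sgn(\mathrm{Pf}(A_A))$ can be checked numerically on a small chain. The couplings below are one plausible reading of the Hamiltonian above (on-site strength $\mu$, bond strength $t$); overall scale factors do not affect the product of Pfaffian signs.

\begin{verbatim}
import numpy as np

def pfaffian(A):
    # Recursive expansion along the first row; fine for small antisymmetric A.
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(rest, rest)])
    return total

def majorana_matrix(N, t, mu, antiperiodic):
    # H = (i/4) sum_{kl} A_{kl} gamma_k gamma_l for an N-site chain:
    # on-site pairing mu, bond pairing t, and s = -1 on the last bond
    # for anti-periodic boundary conditions.
    A = np.zeros((2 * N, 2 * N))
    for i in range(N):
        a, b = 2 * i, 2 * i + 1                # the two Majoranas on site i
        A[a, b], A[b, a] = -2 * mu, 2 * mu     # on-site term
        s = -1.0 if (antiperiodic and i == N - 1) else 1.0
        c = (2 * i + 2) % (2 * N)              # first Majorana on site i+1
        A[b, c] += 2 * t * s
        A[c, b] -= 2 * t * s
    return A

# With these couplings the chain is topological for |mu| < t (here t = 1):
for mu in (0.5, 3.0):
    pf_signs = [np.sign(pfaffian(majorana_matrix(4, 1.0, mu, ap)))
                for ap in (False, True)]
    print(f"mu = {mu}:  I = {pf_signs[0] * pf_signs[1]:+.0f}")
# expected: I = -1 for mu = 0.5 (topological), I = +1 for mu = 3.0 (trivial)
\end{verbatim}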
1\n\\end{align}\nRemember when discussing TQFt,\nif you're gluing the time boundary,\nand you're twisting the boundary condition,\nthat is inserting flux,\nwhich is the fermion parity operator.\n\\begin{align}\n    Z(T^2, ++) &= \\Tr_{V(S^1,+)}P = -1\\\\\n    Z(T^2, --) &= \\Tr_{V(S^1,-)}P = +1\n\\end{align}\nYou can choose the convention the $-1$ case if periodic and the $+1$ case is\nanti-periodic.\n\nNow you can ask what is the path integral in a closed genus $g$ surface\n$\\Sigma_g$.\n\nWe have a genus $g$ surface and a spin structure $\\eta$.\nWe should get a $\\pm 1$ because there are only two phases.\nThere are canonical invariants associated with spin surfaces\nand it's callled the Arf invariant of the spin structure.\n\\begin{align}\n    Z\\left( \\Sigma_g, \\eta \\right) &= (-1)^{\\mathrm{Arf}(\\theta)}\n\\end{align}\nArf invariants are associated with quadratic forms.\nDefinition of Arf.\nSuppose $V$ is a vector space over $\\mathbb{Z}_2$.\nLet $\\phi: V\\times V\\to \\mathbb{Z}_2$ be a map\nthat is a non-degenerate symmetric bilinear form.\nThen suppose you have a quadratic form $q: V\\to \\mathbb{Z}_2$\nthat satisfies by definition\n\\begin{align}\n    q(x + y) &= q(x) + q(y) + \\phi(x, y)\n\\end{align}\nand $q\\in Q(V, \\phi)$ is an element in the space of all quadratic forms.\nThen the Arf invariant is defined as $\\mathrm{Arf}:Q\\to\\mathbb{Z}_2$\nby\n\\begin{align}\n    (-1)^{\\textrm{Arf}(q)} &=\n    \\frac{1}{\\sqrt{|V|}}\n    \\sum_{\\lambda\\in V} (-1)^{q(x)}\n    = \\pm 1\n\\end{align}\nThere is a theorem which I will not prove.\n\\begin{theorem}\n    Spin structures on genus $g$ surfaces in 1-1 correspondence with elements of\n    \\begin{align}\n        Q\\left(\n        H_1\\left( \\Sigma_g, \\mathbb{Z}_2 \\right),\n        \\underbrace{\\phi}_{\\text{intersection form}}\n        \\right)\n    \\end{align}\n\\end{theorem}\nThe intersection form takes 2 loops and tells you whether they intersect.\nGiven a spin structure,\nyou have a quadratic form like this,\nand then you can evaluate the Arf invariant.\nI can't give you an explanation of this,\nthat would be far beyond what we want.\n\n\\begin{question}\n    What is a genus $g$ surface?\n\\end{question}\nYou should have stopped me earlier!\nSurfaces are classified by their genus,\nwhich is basically the number of holes they have.\nSphere has $g=0$,\nTorus has $g=1$,\nand if you have 2 holes then $t=2$.\nThe Euler characteristic of a genus-$g$ surface is\n\\begin{align}\n    \\chi(\\Sigma_g) = 2 - 2g\n\\end{align}\n\n\\begin{question}\n    What is $Q(V,\\phi)$?\n\\end{question}\nIt is the space of all quadratic forms on $V$,\nfunctions which satisfy the definition with $\\phi$ as the form.\n\n\\begin{question}\n    What is $H_1$?\n\\end{question}\n$H_1$ is the space of formal linear combinations of loops.\nOK too complicated.\nLet's just say it's the space of loops modulo the space of contractible loops.\nI'm not going to give you a rigorous definition.\nI'll just define by example.\n\nIf I give you a surface, (genus 2)\nthere are 4 independent loops you can draw,\nwhich are called $x_1,y_1,x_2,y_2$.\nThen you can take formal linear combinations of them like\n\\begin{align}\n    C = \\alpha x_1 + \\beta x_2 + \\gamma y_1 + \\delta y_2\n\\end{align}\nwhere $\\alpha,\\beta,\\gamma,\\delta\\in \\mathbb{Z}_2$\nand then this linear combination\n$C\\in H_1(\\Sigma_g, \\mathbb{Z}_2$.\n\nThen you can have forms $a,b\\in H^1(\\Sigma_g, \\mathbb{Z}_2)$\nwhich map from $H_1\\to \\mathbb{Z}_2$.\n\nSo $H^1$ are linear 
combinations of loops (cohomology),\nbut $H_1$ are the actual loops themselves (homology).\n\nGiven a loop $X$,\nyou can define\n\\begin{align}\n    \\int_X a = \\pm 1\n\\end{align}\n\nI should have just said $x$ intersects $y$.\n\nYou can go to the torus,\nfirst enumerate all possible functions from the loops on the torus to\n$\\mathbb{Z}_2$,\nthere are 16 of them,\nthen find all possible functions like this quadratic form,\nthere are 4 of them,\nand then take the corespondence with periodioc and anti-periodic.\n\n\\begin{question}\n    What is the Arf invariant invariant under?\n\\end{question}\nI don't know.\nIt's invariant under all mapping class diffeommorphisms on the surface and the\nArf invariant will still be invariant.\n\nTo fully specify the TQFT,\nI need to give you more information.\nI need to specify the path integral on manifold with boundary.\nfor example,\nI can tell you what the path integral on a disk $D^2$ is\n(like a hemisphere with axis in the time direction).\n\\begin{align}\n    Z(D^2) \\in V(S^1, -)\n\\end{align}\nOn $S^1$,\ntwo spin structures, $\\pm 1$.\nOnly $-$ can be the boundayr of the disk.\nIn path, $-1$ is called the bounding spin structure\nand $+$ is called the non-bounding spin structure.\n\nThe way ot think about spin structure is this.\nYou QFT,\na fermion is described by a fermion,\nand you want a framing of the worldline of hte fmrion,\nwhich goes on a loop,\nand you want a frame for the world line.\nThereare two frames.\nThere's one that goes like this,\npointing orthogonal to the line.\nOr you could consider one like this, all pointing in a direction.\nThe first is no-bounding,\nthe second one is bounding.\n\nAs you go along this second line,\nand you unfold it,\nas you go along, it kind of twists $2\\pi$.\nAnd a $2\\pi$ rotation gives you a $-1$ for fermions.\n\nWhereas in the first case, there is no twisting.\nThat's why the anti-periodic boundary condition gives something that can be a\nboundary.\n\nThis comes up in a lot of sitautions with topological phases and fermions.\nIf you are confused why anit-periodic is showing up when it is,\nthis is why I think it's useful.\nI haven't given you a deep understanding of what framing is rigorously,\nbut it's whether it twists or not relative to its worldline and whether the\nboundary should be periodic or not.\n\nIn QFT fermions are defined in $SO(1,d)$.\nIf space-time is 2D,\nyou should have a 2D frame field,\nand the two components are the frame field and the tangent ot the curve.\nRelativisitc QFT somehow winds up giving the correct language for TQFT aspects\nof things.\nAnd somehow the condensed matter stuff\nends up fitting well into TQFT.\n\nRelativistic fermions are spinors of $SO(3)$,\nbut if they're relativistic Minkowski space,\nit's $SO(1,d)$.\n\nMicrosopically,\nthe fermions don't need this structure,\nbut when doing TQFT with fermions,\nyou have spin structures,\nand a way to think about spin structures is to think of relativistic\nfermions.\n\nBoundary needs\n\\begin{align}\n    Z\\left( M^{d+1}, \\eta \\right) &=\n    V\\left( \\partial M^{d + 1}, \\eta_{\\partial M^{d+1}} \\right)\n\\end{align}\n\nI will post the paper for reference.\n\nThe reason I'm telling you this,\nis I want to show you wnat structure is giving you this TQFT,\nthen I'm going ot guess what kind of structure classifies all kinds of\ntopolgoical atter.\nOne of the goals is how do we classify all phases of topological matter.\nOne of thoese asnswers is in terms of TQFT.\nWe're going to see how to think about phases 
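Here is the torus enumeration promised above, as a small illustrative script (an editorial aside; conventions as in the Arf definition):

\begin{verbatim}
from itertools import product

# Enumerate the quadratic refinements of the intersection form on
# H_1(T^2, Z_2) = Z_2 x Z_2 and compute their Arf invariants.
V = list(product((0, 1), repeat=2))
phi = lambda x, y: (x[0] * y[1] + x[1] * y[0]) % 2   # intersection form
add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def is_quadratic(q):
    # q(x + y) = q(x) + q(y) + phi(x, y) over Z_2
    return all((q[add(x, y)] - q[x] - q[y] - phi(x, y)) % 2 == 0
               for x in V for y in V)

candidates = [dict(zip(V, vals)) for vals in product((0, 1), repeat=4)]
quads = [q for q in candidates if is_quadratic(q)]

for q in quads:
    s = sum((-1) ** q[x] for x in V)   # equals +-sqrt(|V|) = +-2
    print({x: q[x] for x in V}, "Arf =", 0 if s > 0 else 1)
# Four quadratic forms survive: three with Arf = 0 and one with Arf = 1,
# matching the 3 even and 1 odd spin structures on the torus.
\end{verbatim}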
The reason I'm telling you this
is that I want to show you what structure is giving you this TQFT,
and then to guess what kind of structure classifies all kinds of
topological matter.
One of the goals is to see how we can classify all phases of topological
matter.
One of those answers is in terms of TQFT.
We're going to see how to think about phases from the perspective of TQFT,
and to know what to expect from a classification of phases.
\section{The general solution of a linear system}

\begin{outcome}
  \begin{enumerate}
  \item Use linear transformations to determine the particular
    solution and general solution to a system of equations.
  \item Find the kernel of a linear transformation.
  \end{enumerate}
\end{outcome}

Recall the definition of a linear transformation discussed above.
$T$ is a \textbf{linear transformation} if whenever $\vect{x}, \vect{y}$ are
vectors and $k,p$ are scalars,
\begin{equation*}
T(k\vect{x}+p\vect{y}) =k T (\vect{x}) +p T(\vect{y})
\end{equation*}
Thus linear transformations distribute across addition and pass scalars to
the outside.

It turns out that we can use linear transformations to solve
systems of linear equations. Indeed, given a system of linear equations of the
form $A\vect{x}=\vect{b}$, one may rephrase this as $T(\vect{x})=\vect{b}$ where $T$ is the linear
transformation $T_A$ induced by the coefficient matrix $A$. With this in mind, consider the following definition.

\begin{definition}{Particular solution of a system of equations}{particular-solutions}
Suppose a system of linear equations can be written in the form
\begin{equation*}
T(\vect{x})=\vect{b}
\end{equation*}
If $T(\vect{x}_{p})=\vect{b}$,
then $\vect{x}_{p}$ is called a \textbf{particular solution}\index{particular solution} of
the linear system.
\end{definition}

Recall that a system is called homogeneous if the right-hand side of every equation in the system is equal to $0$.
Suppose we represent a homogeneous system of equations by $T(\vect{x})=0$. It turns out
that the $\vect{x}$ for which $T (\vect{x}) = 0$ form a special set called the \textbf{null space}
of $T$. We may also refer to the null space as the \textbf{kernel} of $T$, and we write $ker(T)$.

Consider the following definition.

\begin{definition}{Null space or kernel of a linear transformation}{null-space-of-linear-transformation}
Let $T$ be a linear transformation. Define
\begin{equation*}
\ker (T) = \set{\vect{x}:T (\vect{x})= \vect{0} }
\end{equation*}
The kernel\index{kernel}, $\ker (T) $, consists of the set of all vectors $\vect{x}$ for which
$T (\vect{x}) = \vect{0}$. This is also called the
\textbf{null space}\index{null space} of $T$.
\end{definition}

We may also refer to the kernel of $T$ as the
\textbf{solution space}\index{solution space} of the equation $T (\vect{x}) = \vect{0}$.

Consider the following example.

\begin{example}{The kernel of the derivative}{kernel-derivative}
Let $\frac{d}{dx}$ denote the linear transformation defined on $f$, the functions
which are defined on $\R$ and have a continuous derivative. Find
$\ker \paren{\frac{d}{dx}}$.
\end{example}

\begin{solution} The example asks for the functions $f$ with the property that $\frac{df}{dx}=0$.
As you may know from calculus, these functions are the constant functions.
Thus $\ker \paren{\frac{d}{dx}}$ is the set of constant functions.
\end{solution}

Definition~\ref{def:null-space-of-linear-transformation} states that $\ker (T) $ is the set of
solutions to the equation,
\begin{equation*}
T(\vect{x}) = \vect{0}
\end{equation*}
Since we can write $T(\vect{x})$ as $A\vect{x}$, you have been solving such
equations for quite some time.

We have spent a lot of time finding solutions to systems of equations in general, as well as
homogeneous systems.
Suppose we look at a system given by $A\\vect{x}=\\vect{b}$, and consider the\nrelated homogeneous system. By this, we mean that we replace $\\vect{b}$ by $\\vect{0}$ and look at $A\\vect{x}=\\vect{0}$.\nIt turns out that there is a very important relationship between the solutions of the original\nsystem and the solutions of the associated homogeneous system. In the following\ntheorem, we use linear transformations to denote a system of equations. Remember that\n$T(\\vect{x}) = A\\vect{x}$.\n\n\\begin{theorem}{Particular solution and general solution}{particular-and-general-solution}\nSuppose $\\vect{x}_{p}$ is a solution to the linear system given by\n\\begin{equation*}\nT(\\vect{x}) = \\vect{b}.\n\\end{equation*}\nThen if $\\vect{y}$ is any other solution to $T(\\vect{x})=\\vect{b}$,\n\\index{general solution!solution space} there exists $\\vect{x}_0 \\in \\ker\n(T) $ such that\n\\begin{equation*}\n\\vect{y} = \\vect{x}_{p}+ \\vect{x}_0.\n\\end{equation*}\nHence, every solution to the linear system can be written as a sum of a particular solution, $\\vect{x}_p$,\n and a solution $\\vect{x}_0$ to the associated\nhomogeneous system given by $T(\\vect{x})=\\vect{0}$.\n\\end{theorem}\n\n\\begin{proof}\nConsider $\\vect{y} - \\vect{x}_{p}= \\vect{y} + (\n-1) \\vect{x}_{p}$. Then $T(\\vect{y} - \\vect{x}_{p}) =T(\\vect{y})\n-T(\\vect{x}_{p})$. Since $\\vect{y}$ and $\\vect{x}_{p}$ are both solutions to the system, it follows that $T(\\vect{y})= \\vect{b} $\nand $T(\\vect{x}_p) = \\vect{b}$.\n\nHence, $T(\\vect{y})-T(\\vect{x}_{p})\n=\\vect{b} - \\vect{b} = \\vect{0}$.  Let $\\vect{x}_0 = \\vect{y} - \\vect{x}_{p}$.\nThen, $T(\\vect{x}_0)= \\vect{0} $ so $\\vect{x}_0$ is a solution to the associated homogeneous system and so is in $\\ker (T)$.\n\\end{proof}\n\nSometimes people remember the above theorem in the following form. The\nsolutions to the system $T(\\vect{x})=\\vect{b}$ are given by\n$\\vect{x}_{p}+\\ker (T) $ where $\\vect{x}_{p}$ is a particular\nsolution to $T(\\vect{x})=\\vect{b}$.\n\nFor now, we have been speaking about the kernel or null space of a linear transformation $T$. However,\nwe know that every linear transformation $T$ is determined by some matrix $A$. Therefore,\nwe can also speak about the null space of a matrix. Consider the following example.\n\n\\begin{example}{The null space of a matrix}{matrix-null-space}\nLet\n\\begin{equation*}\nA=\\begin{mymatrix}{rrrr}\n1 & 2 & 3 & 0 \\\\\n2 & 1 & 1 & 2 \\\\\n4 & 5 & 7 & 2\n\\end{mymatrix}\n\\end{equation*}\nFind $\\nullspace(A)$. Equivalently, find the solutions to the\nsystem of equations $A\\vect{x}=\\vect{0}$.\n\\end{example}\n\n\\begin{solution}  We are asked to find $\\set{\\vect{x} : A\\vect{x} = \\vect{0}}$. In other\nwords we want to solve the system, $A\\vect{x}=\\vect{0}$. Let $\\vect{x} =\n\\begin{mymatrix}{r}\nx \\\\\ny \\\\\nz \\\\\nw\n\\end{mymatrix}$. 
Then this amounts to solving\n\\begin{equation*}\n\\begin{mymatrix}{rrrr}\n1 & 2 & 3 & 0 \\\\\n2 & 1 & 1 & 2 \\\\\n4 & 5 & 7 & 2\n\\end{mymatrix} \\begin{mymatrix}{c}\nx \\\\\ny \\\\\nz \\\\\nw\n\\end{mymatrix} =\\begin{mymatrix}{c}\n0 \\\\\n0 \\\\\n0\n\\end{mymatrix}\n\\end{equation*}\nThis is the linear system\n\\begin{equation*}\n\\begin{array}{c}\nx+2y+3z=0 \\\\\n2x+y+z+2w=0 \\\\\n4x+5y+7z+2w=0\n\\end{array}\n\\end{equation*}\nTo solve, set up the augmented matrix and row reduce to find the {\\rref}.\n\\begin{equation*}\n\\begin{mymatrix}{rrrr|r}\n1 & 2 & 3 & 0 & 0 \\\\\n2 & 1 & 1 & 2 & 0 \\\\\n4 & 5 & 7 & 2 & 0\n\\end{mymatrix}\n\\roweq\\ldots\\roweq\n\\begin{mymatrix}{rrrr|r}\n1 & 0 & -\n\\vspace{0.05in}\\frac{1}{3} & \\vspace{0.05in}\\frac{4}{3} &  0 \\\\\n0 & 1 & \\vspace{0.05in}\\frac{5}{3} & -\\vspace{0.05in}\\frac{2}{3} & 0 \\\\\n0 & 0 & 0 & 0 & 0\n\\end{mymatrix}\n\\end{equation*}\nThis yields $x=\\vspace{0.05in}\\frac{1}{3}z-\\vspace{0.05in}\\frac{4}{3}w$ and\n$y=\\vspace{0.05in}\\frac{2}{3}w-\\vspace{0.05in}\\frac{5}{3}z$.\nSince $\\nullspace(A) $ consists of the solutions to this system, it consists vectors of the form,\n\\begin{equation*}\n\\begin{mymatrix}{c}\n\\vspace{0.05in}\\frac{1}{3}z-\\vspace{0.05in}\\frac{4}{3}w \\\\\n\\vspace{0.05in}\\frac{2}{3}w-\\vspace{0.05in}\\frac{5}{3}z \\\\\nz \\\\\nw\n\\end{mymatrix} =z \\begin{mymatrix}{r}\n\\vspace{0.05in}\\frac{1}{3} \\\\\n-\\vspace{0.05in}\\frac{5}{3} \\\\\n1 \\\\\n0\n\\end{mymatrix} +w \\begin{mymatrix}{r}\n-\\vspace{0.05in}\\frac{4}{3} \\\\\n\\vspace{0.05in}\\frac{2}{3} \\\\\n0 \\\\\n1\n\\end{mymatrix}\n\\end{equation*}\n\\end{solution}\n\nConsider the following example.\n\n\\begin{example}{A general solution}{general-solution}\nThe \\textbf{general solution}\\index{general solution} of a system of linear equations\nis the set of\nall possible solutions. Find the general solution to the linear system,\n\\begin{equation*}\n\\begin{mymatrix}{rrrr}\n1 & 2 & 3 & 0 \\\\\n2 & 1 & 1 & 2 \\\\\n4 & 5 & 7 & 2\n\\end{mymatrix} \\begin{mymatrix}{r}\nx \\\\\ny \\\\\nz \\\\\nw\n\\end{mymatrix} =\\begin{mymatrix}{r}\n9 \\\\\n7 \\\\\n25\n\\end{mymatrix}\n\\end{equation*}\ngiven that $\\begin{mymatrix}{r}\nx \\\\\ny \\\\\nz \\\\\nw\n\\end{mymatrix}=\\begin{mymatrix}{r}\n1 \\\\\n1 \\\\\n2 \\\\\n1\n\\end{mymatrix}$ is one solution.\n\\end{example}\n\n\\begin{solution} Note the matrix of this system is the same as the matrix in Example~\\ref{exa:matrix-null-space}. Therefore, from Theorem~\\ref{thm:particular-and-general-solution}, you will obtain all\nsolutions to the above linear system by adding a particular solution $\\vect{x}_p$ to the solutions of the associated homogeneous\nsystem, $\\vect{x}$. 
One particular solution is given above by\n\\begin{equation*}\n\\vect{x}_p\n=\n\\begin{mymatrix}{r}\nx \\\\\ny \\\\\nz \\\\\nw\n\\end{mymatrix}=\\begin{mymatrix}{r}\n1 \\\\\n1 \\\\\n2 \\\\\n1\n\\end{mymatrix}\n\\end{equation*}\n\nUsing this particular solution along with the solutions found in Example~\\ref{exa:matrix-null-space}, we\nobtain the following solutions,\n\\begin{equation*}\nz\\begin{mymatrix}{r}\n\\frac{1}{3} \\\\\n-\\frac{5}{3} \\\\\n1 \\\\\n0\n\\end{mymatrix} +w\\begin{mymatrix}{r}\n-\\frac{4}{3} \\\\\n\\frac{2}{3} \\\\\n0 \\\\\n1\n\\end{mymatrix} +\\begin{mymatrix}{r}\n1 \\\\\n1 \\\\\n2 \\\\\n1\n\\end{mymatrix}\n\\end{equation*}\n\nHence, any solution to the above linear system is of this form.\n\\end{solution}\n
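\nAs a quick sanity check, substituting the particular solution into the system reproduces the right-hand side:\n\\begin{equation*}\n\\begin{mymatrix}{rrrr}\n1 & 2 & 3 & 0 \\\\\n2 & 1 & 1 & 2 \\\\\n4 & 5 & 7 & 2\n\\end{mymatrix} \\begin{mymatrix}{r}\n1 \\\\\n1 \\\\\n2 \\\\\n1\n\\end{mymatrix} =\\begin{mymatrix}{r}\n1+2+6+0 \\\\\n2+1+2+2 \\\\\n4+5+14+2\n\\end{mymatrix} =\\begin{mymatrix}{r}\n9 \\\\\n7 \\\\\n25\n\\end{mymatrix}\n\\end{equation*}\nApplying $A$ to either basis vector of $\\nullspace(A)$ gives $\\vect{0}$, so adding any combination of them to $\\vect{x}_p$ leaves the right-hand side unchanged.\n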
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\nThe purpose of the\n%%%%%%%%%%%%%%%%%%%%%binary search tree%%%%%%%%%%%%%%%%%%%%%%\n\\section{Binary Search Tree}\n\\label{sec_binary_search_tree}\n In computer science, a \\textbf{search tree} is a tree data structure used for locating specific keys from within a set. In order for a tree to function as a search tree, the key for each node must be greater than any keys in subtrees on the left and less than any keys in subtrees on the right. \n\nThe advantage of search trees is their efficient search time ( $O(\\log n)$) given the tree is reasonably balanced, which is to say the leaves at either end are of comparable depths as we introduced the \\textbf{balanced binary tree}. \n\nThe search tree data structure supports many dynamic-set operations, including \\textbf{Search} for a key, minimum or maximum, predecesor or successor, \\textbf{insert} and \\textbf{delete}. Thus, a search tree can be both used as a dictionary and a priority queue.\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width = 0.4\\columnwidth]{fig/Binary_search_tree.png}\n    \\caption{Example of Binary search tree of depth 3 and 8 nodes.}\n    \\label{fig:bst}\n\\end{figure}\n\n% Search trees are often used to implement an associative array. The search tree algorithm uses the key from the key-value pair to find a location, and then the application stores the entire key\u2013value pair at that location. \n\n% In this section, we will introduce the most commonly used two types of searching trees: binary searching tree (BST) and Trie where the keys are usually numeric numbers and strings respectively. \n\n% \\subsection{Binary Searching Tree}\n% \\label{concept_binary_search_tree}\nA binary search tree (BST) is a search tree with children up to two.  There are three possible ways to properly define a BST, and we use $l$ and $r$ to represent the left and right child of node $x$: 1)$l.key \\leq x.key < r.key$, 2) $l.key  < x.key \\leq r.key$, 3) $l.key < x.key < r.key$. In the first and second definition, our resulting BST allows us to have duplicates, while not in the case of the third definiton. One example of BST without duplicates is shown in Fig~\\ref{fig:bst}. \n\n% an organized searching tree structure in binary tree, as the name suggests. Binary search trees whose internal nodes each store a key (and optionally, an associated value), each node have two distinguished sub-trees (if only one sub-tree the other is None). \n\n% BST keep their keys in sorted order, so that lookup and other operations can use the \\textit{principle of binary search tree}: \n\n% \\indent Let $x$ be a node in a binary search tree, if $y$ is a node in the left subtree of x, them $y.key \\leq x.key$. If $y$ is a node in the right subtree of $x$, then $y.key \\geq x.key$. \n\n\n\n\\subsubsection{Operations}\nWhen looking for a key in a tree (or a place to insert a new key), we traverse the tree from root to leaf, making comparisons to keys stored in the nodes of the tree and deciding, on the basis of the comparison, to continue searching in the left or right subtrees. On average, this means that each comparison allows the operations to skip about half of the tree, so that each SEARCH, INSERT or DELETE takes time proportional to the logarithm of the number of items stored in the tree. This is much better than the linear time required to find items by key in an (unsorted) array, but slower than the corresponding operations on hash tables. 
\n\nIn order to build a BST, we INSERT a series of elements while maintaining the search tree property, and in order to INSERT an element, we first SEARCH for the position where it belongs. Thus, we introduce these operations in the order SEARCH, INSERT and GENERATE. \n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/bst_insertion.png}\n    \\caption{The lightly shaded nodes indicate the simple path from the root down to the position where the item is inserted. The dashed line indicates the link in the tree that is added to insert the item. }\n    \\label{fig:bst_operation}\n\\end{figure}\n\\paragraph{SEARCH}\n\nThere are two different implementations for SEARCH: recursive and iterative.\n\\begin{lstlisting}[language = Python]\n# recursive searching\ndef search(root,key):\n    # Base cases: root is None or key is present at root\n    if root is None or root.val == key:\n        return root\n \n    # Key is greater than root's key\n    if root.val < key:\n        return search(root.right,key)\n   \n    # Key is smaller than root's key\n    return search(root.left,key)\n\\end{lstlisting}\nAlso, we can write it in an iterative way, which saves the space of the recursive call stack: \n\\begin{lstlisting}[language = Python]\n# iterative searching\ndef iterative_search(root,key):\n    while root is not None and root.val != key:\n        if root.val < key:\n            root = root.right\n        else:\n            root = root.left\n    return root\n\\end{lstlisting}\n\\paragraph{INSERT}\nAssume we are inserting a node $13$ into the tree shown in Fig~\\ref{fig:bst_operation}. A new key is always inserted at a leaf (there are other ways to insert, but here we only discuss this one). We search for the key from the root until we hit an empty position. Then we create a new TreeNode and attach it as the left or right child according to the search property. Here we show both the recursive and the iterative solution.\n\\begin{lstlisting}[language = Python]\n# Recursive insertion\ndef insertion(root, key):\n    if root is None:\n        root = TreeNode(key)\n        return root\n    if root.val < key:\n        root.right = insertion(root.right, key)\n    else:\n        root.left = insertion(root.left, key)\n    return root\n\\end{lstlisting}\nThe above code returns a node and reassigns the left or right child at every level of the recursion. The following version looks more complex because of the extra conditions, but it avoids these reassignments and only attaches the new node at the end. 
\n\\begin{lstlisting}[language=Python]\n# recursive insertion without reassigning children at every level\ndef insertion(root, val):\n    if root is None:\n        # empty tree: the caller keeps the returned node as the new root\n        return TreeNode(val)\n    if val > root.val:\n        if root.right is None:\n            root.right = TreeNode(val)\n        else:\n            insertion(root.right, val)\n    else:\n        if root.left is None:\n            root.left = TreeNode(val)\n        else:\n            insertion(root.left, val)\n    return root\n\\end{lstlisting}\nWe can also search for the position iteratively while saving the previous node; the while loop stops when we hit an empty position. There are then three cases for the previous node.\n\\begin{lstlisting}[numbers=none]\n1. The previous node is None, which means the tree is empty, so we create a root node with the value.\n2. The previous node has a value larger than the key, so the key becomes its left child.\n3. The previous node has a value smaller than the key, so the key becomes its right child.\n\\end{lstlisting}\n\\begin{lstlisting}[language = Python]\n# iterative insertion\ndef iterativeInsertion(root, key):\n    pre_node = None\n    node = root\n    while node is not None:\n        pre_node = node\n        if key < node.val:\n            node = node.left\n        else:\n            node = node.right\n    # pre_node is the parent of the empty position we reached\n    if pre_node is None:\n        root = TreeNode(key)\n    elif pre_node.val > key:\n        pre_node.left = TreeNode(key)\n    else:\n        pre_node.right = TreeNode(key)\n    return root\n\\end{lstlisting}\n\\paragraph{BST Generation}\nTo generate a BST, we keep a reference to its root node and simply call INSERT for each element of a given list. The expected time complexity is $O(n\\log n)$.\n\\begin{lstlisting}[language=Python]\ndatas = [8, 3, 10, 1, 6, 14, 4, 7, 13]\nBST = None\nfor key in datas:\n    BST = iterativeInsertion(BST, key)\nprint(LevelOrder(BST))\n# output \n# [8, 3, 10, 1, 6, 14, 4, 7, 13]\n\\end{lstlisting}\n\\paragraph{DELETE} \nBefore we examine the implementation of DELETE, we suggest reading the next subsection--the Features of BST--first, and then coming back to finish this paragraph. \n\nWhen we delete a node, three possibilities arise.\n\\begin{lstlisting}[numbers=none]\n1) Node to be deleted is a leaf: simply remove it from the tree.\n\n              50                            50\n           /     \\         delete(20)      /   \\\n          30      70       --------->    30     70 \n         /  \\    /  \\                     \\    /  \\ \n       20   40  60   80                   40  60   80\n\n2) Node to be deleted has only one child: copy the child to the node and delete the child.\n\n              50                            50\n           /     \\         delete(30)      /   \\\n          30      70       --------->    40     70 \n            \\    /  \\                          /  \\ \n            40  60   80                       60   80\n\n3) Node to be deleted has two children: find the inorder successor of the node, copy its contents to the node, and delete the inorder successor. Note that the inorder predecessor can also be used.\n\n              50                            60\n           /     \\         delete(50)      /   \\\n          40      70       --------->    40    70 \n                 /  \\                            \\ \n                60   80                           80\n\nThe important thing to note is that the inorder successor is needed only when the right child is not empty. 
In this particular case, the inorder successor can be obtained by finding the minimum value in the right subtree of the node.\n\\end{lstlisting}\n\n\n\n\\subsubsection{Features of BST}\n\\label{concept_features_bst}\n\\paragraph{Minimum and Maximum} The operation is similar to SEARCH: to find the minimum, we always traverse into the left subtree; for the maximum, we just replace ``left'' with ``right''. The time complexity is the same, $O(\\log n)$ on average.\n\\begin{lstlisting}[language=Python]\n# recursive\ndef get_minimum(root):\n    if root is None:\n        return None\n    if root.left is None: # a leaf or a node with no left subtree\n        return root\n    return get_minimum(root.left)\n\n# iterative\ndef iterative_get_minimum(root):\n    while root.left is not None:\n        root = root.left\n    return root\n\\end{lstlisting}\n\nAlso, sometimes we need to search for two additional items related to a given node: its successor and its predecessor. The structure of a binary search tree allows us to determine both without ever comparing keys. \n\n\\paragraph{Successor of a Node}  A successor of node $x$ is the smallest item in the BST that is strictly greater than $x$. It is also called the in-order successor: the next node in the inorder traversal of the binary tree, which is None for the last node of the traversal. The algorithm depends on whether our TreeNode data structure has a parent pointer.\n\nUsing the parent node, the algorithm has two cases, based on the right subtree of the input node: \n\\begin{lstlisting}[numbers=none]\nFor the right subtree of the node:\n1) If it is not None, then the successor is the minimum node in the right subtree. e.g. for node 12, successor(12) = 13 = min(12.right)\n2) If it is None, then the successor is one of its ancestors. We traverse up using the parent node until we find a node which is the left child of its parent. Then the parent node here is the successor. e.g. successor(2)=5\n\\end{lstlisting}\n The Python code is provided:\n\\begin{lstlisting}[language = Python]\ndef Successor(root, n):\n    # Step 1 of the above algorithm\n    if n.right is not None:\n        return get_minimum(n.right)\n    # Step 2 of the above algorithm\n    p = n.parent\n    while p is not None:\n        if n == p.left: # n is the left child, so p is the successor\n            return p\n        n = p\n        p = p.parent\n    return p\n\\end{lstlisting}\nHowever, if your tree node has no parent pointer, you cannot traverse back up through the ancestors. Then we have only one option: follow the inorder traversal order from the root and find the element right after the node. \\begin{lstlisting}[numbers=none]\nFor the right subtree of the node:\n1) If it is not None, then the successor is the minimum node in the right subtree. e.g. for node 12, successor(12) = 13 = min(12.right)\n2) If it is None, then the successor is one of its ancestors. We traverse down from the root towards the node; the last node at which we turned left is the successor. e.g.  
successor(2)=5\n\\end{lstlisting}\n\\begin{lstlisting}[language=Python]\ndef SuccessorInorder(root, n):\n    # Step 1 of the above algorithm\n    if n.right is not None:\n        return get_minimum(n.right)\n    # Step 2 of the above algorithm\n    succ = None\n    while root is not None:\n        if n.val > root.val:\n            root = root.right\n        elif n.val < root.val:\n            succ = root\n            root = root.left\n        else: # we found the node, no need to traverse further\n            break\n    return succ\n\\end{lstlisting}\n\n\\paragraph{Predecessor of a Node}  A predecessor of node $x$, on the other hand, is the largest item in the BST that is strictly smaller than $x$. It is also called the in-order predecessor: the previous node in the inorder traversal of the BST. e.g. for node 14, predecessor(14) = 12 = max(14.left). A symmetric rule applies: if node $x$'s left subtree exists, we return the maximum of the left subtree. Otherwise we traverse up through the ancestors until we find a node that is the right child of its parent; that parent is the predecessor. \n\\begin{lstlisting}[language = Python]\ndef Predecessor(root, n):\n    # Step 1 of the above algorithm\n    if n.left is not None:\n        return get_maximum(n.left)\n    # Step 2 of the above algorithm\n    p = n.parent\n    while p is not None:\n        if n == p.right: # n is the right child, so p is smaller\n            return p\n        n = p\n        p = p.parent\n    return p\n\\end{lstlisting}\n In the worst case, finding the successor or the predecessor traverses the height of the tree: we either descend into one subtree of the current node or walk back up through its parents and grandparents. The expected time complexity is $O(\\log n)$; the worst case is a degenerate tree whose nodes line up in a single chain, which makes it $O(n)$. \n
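\nComing back to DELETE: to make the three cases described earlier concrete, here is one possible recursive implementation. This sketch is our own (it is not the only way to implement DELETE) and reuses the TreeNode class and the iterative\\_get\\_minimum helper introduced above.\n\\begin{lstlisting}[language=Python]\n# recursive deletion following the three cases:\n# leaf, one child, two children\ndef delete(root, key):\n    if root is None:\n        return None\n    if key < root.val:\n        root.left = delete(root.left, key)\n    elif key > root.val:\n        root.right = delete(root.right, key)\n    else:\n        # cases 1 and 2: at most one child, splice the node out\n        if root.left is None:\n            return root.right\n        if root.right is None:\n            return root.left\n        # case 3: two children, copy the inorder successor\n        # (the minimum of the right subtree) and delete it there\n        succ = iterative_get_minimum(root.right)\n        root.val = succ.val\n        root.right = delete(root.right, succ.val)\n    return root\n\\end{lstlisting}\n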
The following table summarizes the space and time complexity of each operation.\n\\begin{table}[h]\n\\begin{small}\n\\centering\n\\noindent\\captionof{table}{ Time complexity of operations for BST in big $O$ notation }\n \\noindent \\begin{tabular}{|p{0.33\\columnwidth}|p{0.33\\columnwidth}| p{0.33\\columnwidth}|}\n  \\hline\n Algorithm & Average & Worst Case  \\\\ \\hline\nSpace  & $O(n)$ & $O(n)$ \\\\\nSearch & $O(\\log n)$ & $O(n)$ \\\\ \\hline\nInsert & $O(\\log n)$ & $O(n)$ \\\\ \nDelete & $O(\\log n)$ & $O(n)$ \\\\ \\hline\n\\end{tabular}\n  \\label{tab:msrc_precession}\n  \\end{small}\n\\end{table}\n\n%%%%%%%%%%%%%%Segment Tree%%%%%%%%%%%%%%%\n\\section{Segment Tree}\n\\label{sec_segment_tree}\nA segment tree is a static full binary tree, similar to a heap, that is used for storing intervals or segments. ``Static'' here means that once the data structure is built, its shape cannot be modified or extended. A segment tree can efficiently answer numerous \\textit{dynamic range query} problems in logarithmic time, such as finding the minimum, maximum, sum, greatest common divisor, or least common multiple over a range of an array. ``Dynamic'' means that the values of the elements (not the tree structure) are constantly modified. For instance, consider the problem of finding the minimum/maximum/sum of all elements of an array in a given range [i, j]. 
\n\n\\paragraph{Definition} Consider an array A of size N and a corresponding segment tree T (here the range [0, N-1] in A is written A[0:N-1]):\n\\begin{enumerate}\n    \\item The root of T represents the whole array A[0:N-1]. \n    \\item Each internal node in the segment tree T represents an interval A[i:j] where $0 \\leq i < j \\leq N-1$. \n    \\item Each leaf in T represents a single element A[i], where $0 \\leq i < N$. \n    \\item If a parent node covers the range [i, j], then we split this range at the middle position $m = \\lfloor (i+j)/2 \\rfloor$: the left child takes the range [i, m] and the right child takes the interval [m+1, j].\n\\end{enumerate}\n\nBecause each step of building the segment tree divides an interval into two halves, the height of the segment tree is $O(\\log N)$. There are N leaves and N-1 internal nodes, so the total number of nodes in the segment tree is $2N-1$, which makes the segment tree a \\textit{full binary tree}.\n\nHere, we use the Range Sum Query (RSQ) problem to demonstrate how a segment tree works:\n\\begin{examples}[resume]\n\\item \\textbf{307. Range Sum Query - Mutable (medium).} Given an integer array nums, find the sum of the elements between indices i and j ($i \\leq j$), inclusive. The \\textit{update(i, val)} function modifies nums by updating the element at index i to val.\n\\begin{lstlisting}[numbers=none]\nExample:\n\nGiven nums = [1, 3, 5]\n\nsumRange(0, 2) -> 9\nupdate(1, 2)\nsumRange(0, 2) -> 8\n\\end{lstlisting}\nNote:\n\\begin{enumerate}\n    \\item The array is only modifiable by the update function.\n    \\item You may assume the number of calls to update and sumRange is distributed evenly.\n\\end{enumerate}\n\\paragraph{Solution: Brute Force.} There are several ways to solve the RSQ. The \\textbf{brute-force solution} is to simply iterate the array from index i to j, sum up the elements, and return the sum. This gives $O(n)$ per query, which may be infeasible if queries arrive constantly. Because the update and query calls are distributed evenly, the overall cost stays $O(n)$ per call with $O(n)$ space, which will get a TLE (Time Limit Exceeded) error.\n\n\\paragraph{Solution: Segment Tree.}  With a segment tree, we store in each TreeNode's val the sum of the elements of its corresponding interval. We can define a TreeNode as follows:\n\\begin{lstlisting}[language=Python]\nclass TreeNode:\n    def __init__(self, val, start, end):\n        self.val = val\n        self.start = start\n        self.end = end\n        self.left = None\n        self.right = None\n\\end{lstlisting}\nAs we will see in the process, storing the interval is actually not necessary: if we save the size of the array, we can derive the start and end index of each node on the fly, which saves space.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.8\\columnwidth]{fig/307_RSQ_SegmentTree.png}\n    \\caption{Illustration of a segment tree. }\n    \\label{fig:segment_tree}\n\\end{figure}\n\\paragraph{Build Segment Tree.} Because each leaf of the tree holds a single element, we can use divide and conquer to build the tree recursively. For a given node, we first build and return its left and right children (including calculating their sums) in the ``divide'' step; in the ``conquer'' step, we calculate this node's sum from its two children's sums and attach the children. 
Because there are $2n-1$ nodes in total, the time and space complexity of building is $O(n)$.\n\\begin{lstlisting}[language=Python]\ndef _buildSegmentTree(self, nums, s, e): # start index and end index\n    if s > e:\n        return None\n    if s == e:\n        return self.TreeNode(nums[s])\n\n    m = (s + e)//2\n    # divide \n    left = self._buildSegmentTree(nums, s, m)\n    right = self._buildSegmentTree(nums, m+1, e)\n\n    # conquer\n    node = self.TreeNode(left.val + right.val)\n    node.left = left\n    node.right = right\n    return node\n\\end{lstlisting}\n\\paragraph{Update Segment Tree.} Updating the value at index i is like searching the tree for the leaf node with range [i, i]. We just need to recalculate the values of the nodes along the search path. This operation takes $O(\\log n)$ time. \n\\begin{lstlisting}[language=Python]\ndef _updateNode(self, i, val, root, s, e):\n    if s == e:\n        root.val = val\n        return \n    m = (s + e)//2\n    if i <= m:\n        self._updateNode(i, val, root.left, s, m)\n    else:\n        self._updateNode(i, val, root.right, m+1, e)\n    root.val = root.left.val + root.right.val\n    return\n\\end{lstlisting}\n\\paragraph{Range Sum Query.} Each query range [i, j] is the combination of one or more node ranges. For instance, in the segment tree shown in Fig~\\ref{fig:segment_tree}, the range [2, 4] is the combination of [2, 3] and [4, 4]. The process is similar to updating: we start from the root and compute its middle index m. 1) If [i, j] equals [s, e], i.e. i == s and j == e, return the node's value. 2) If [i, j] lies within [s, m], i.e. j <= m, search only the left branch. 3) If [i, j] lies within [m+1, e], i.e. i > m, search only the right branch. 4) Otherwise, search both branches, with target [i, m] on the left and [m+1, j] on the right, and return the sum of both sides. The time complexity is still $O(\\log n)$. \n\\begin{lstlisting}[language=Python]\ndef _rangeQuery(self, root, i, j, s, e): \n    if s > e or i > j:\n        return 0\n    if s == i and j == e:\n        return root.val if root is not None else 0\n\n    m = (s + e)//2\n\n    if j <= m:\n        return self._rangeQuery(root.left, i, j, s, m)\n    elif i > m:\n        return self._rangeQuery(root.right, i, j, m+1, e)\n    else:\n        return self._rangeQuery(root.left, i, m, s, m) + self._rangeQuery(root.right, m+1, j, m+1, e)\n\\end{lstlisting}\nThe complete code is given: \n\\begin{lstlisting}[language=Python]\nclass NumArray:\n    class TreeNode:\n        def __init__(self, val):\n            self.val = val\n            self.left = None\n            self.right = None\n\n    def __init__(self, nums):\n        self.n = 0\n        self.st = None\n        if nums:\n            self.n = len(nums)\n            self.st = self._buildSegmentTree(nums, 0, self.n-1)\n\n    def update(self, i, val):\n        self._updateNode(i, val, self.st, 0, self.n-1)\n\n    def sumRange(self, i, j):\n        return self._rangeQuery(self.st, i, j, 0, self.n-1)\n\\end{lstlisting}\n\\end{examples}\n\nIn this way, the segment tree lowers the complexity of each query to $O(\\log n)$. \n\n%%%%%%%%%%%%%%%%%%%Trie%%%%%%%%%%%%%%%%%\n\\section{Trie for String}\n\\label{concept_trie}\n\\paragraph{Definition} Trie comes from the word re\\textbf{Trie}val. 
In computer science, a trie, also called a digital tree, radix tree or prefix tree, is, like the BST, a kind of search tree, used here for finding substrings in a text. We can solve string matching in $O(|T|)$ time, where $|T|$ is the size of our text. This purely algorithmic approach has been studied extensively in algorithms such as Knuth-Morris-Pratt, Boyer-Moore, and Rabin-Karp. However, we entertain the possibility that multiple queries will be made to the same text. This motivates the development of data structures that preprocess the text to allow for more efficient queries. One such efficient data structure is the trie, which can answer each query in $O(P)$, where P is the length of the pattern string. A trie is an ordered tree structure, mostly used for storing strings (like the words of a dictionary) in a compact way.\n\\begin{enumerate}\n    \\item In a trie, each child branch is labeled with a letter of the alphabet $\\Sigma$. Actually, it is not necessary to store the letter as the key: if we order the child branches of every node alphabetically from left to right, the position in the tree defines the key with which it is associated. \n    \\item The root node of a trie represents the empty string. \n\\end{enumerate}\n\nNow we define a trie node: it has a boolean variable denoting whether it is the end of a word, and a children list of 26 child TrieNodes. \n\\begin{lstlisting}[language= Python]\nclass TrieNode:\n    # Trie node class\n    def __init__(self):\n        self.children = [None]*26\n        # is_word is True if the node represents the end of a word\n        self.is_word = False\n\\end{lstlisting}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/trie_compact_trie.jpg}\n    \\caption{Trie VS Compact Trie}\n    \\label{fig:trie_compact_trie}\n\\end{figure}\n\n\\paragraph{Compact Trie} If we assign only one letter per edge, we are not taking full advantage of the trie's tree structure. It is more useful to consider compact or compressed tries: tries where we remove the one-letter-per-edge constraint and contract non-branching paths by concatenating the letters on these paths.\nIn this way, every node branches out, and every node traversed represents a choice between two different words.  The compressed trie that corresponds to our example trie is also shown in Figure~\\ref{fig:trie_compact_trie}. \n\n\\paragraph{Operations: INSERT, SEARCH}\nBoth INSERT and SEARCH take $O(m)$, where m is the length of the word/string we want to insert or search in the trie. Here, we use a LeetCode problem as an example showing how to implement INSERT and SEARCH. 
Constructing a trie is a series of INSERT operations, which takes $O(n*m)$, where n is the total number of words/strings and m is the average length of each item. The space complexity of the non-compact trie is $O(N*|\\Sigma|)$, where $|\\Sigma|$ is the alphabet size and N is the total number of nodes in the trie structure. The upper bound of N is $n*m$. \n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/Trie.png}\n    \\caption{Trie Structure}\n    \\label{fig:trie}\n\\end{figure}\n\\begin{examples}\n\\item \\textbf{208. Implement Trie (Prefix Tree) (medium).} Implement a trie with insert, search, and startsWith methods.\n\\begin{lstlisting}\nExample:\nTrie trie = new Trie();\ntrie.insert(\"apple\");\ntrie.search(\"apple\");   // returns true\ntrie.search(\"app\");     // returns false\ntrie.startsWith(\"app\"); // returns true\ntrie.insert(\"app\");   \ntrie.search(\"app\");     // returns true\n\\end{lstlisting}\n\\textit{Note: You may assume that all inputs consist of lowercase letters a-z. All inputs are guaranteed to be non-empty strings.}\n\n\\paragraph{INSERT} With the INSERT operation we insert a given word into the trie. We traverse the trie from the root node and, for each letter of the word, if its corresponding child node is None, we create a new node and continue. At the end, we set that node's is\\_word variable to True; thereafter, a new branch starting from that node has been constructed. For example, when we first insert ``app'' as shown in Fig~\\ref{fig:trie_compact_trie}, we end up building the branch ``app''; with ``ape'', we add the node ``e'' as demonstrated with the red arrows. \n\\begin{lstlisting}[language=Python]\ndef insert(self, word):\n    \"\"\"\n    Inserts a word into the trie.\n    :type word: str\n    :rtype: void\n    \"\"\"\n    node = self.root # start from the root node\n    for c in word:\n        loc = ord(c)-ord('a')\n        if node.children[loc] is None: # char does not exist, create a node\n            node.children[loc] = self.TrieNode()\n        # move to the next node\n        node = node.children[loc]\n    # set the flag to true\n    node.is_word = True \n\\end{lstlisting}\n\n\\paragraph{SEARCH} For SEARCH, like INSERT, we traverse the trie using the letters as pointers to the next branch. There are two cases: 1) for a word P, if some letter has no matching node (so at most a prefix of P exists in the trie), we return False; 2) if we matched all the letters of P, we check whether the last node has is\\_word set to True. STARTSWITH is just slightly different from SEARCH: it does not need this final check and returns True once all letters are matched. 
\n\\begin{lstlisting}[language=Python]\ndef search(self, word):\n    node = self.root\n    for c in word:\n        loc = ord(c)-ord('a')\n        # case 1: not all letters matched \n        if node.children[loc] is None: \n            return False          \n        node = node.children[loc]\n    # case 2: check the end-of-word flag\n    return node.is_word\n\\end{lstlisting}\n\\begin{lstlisting}[language=Python]\ndef startsWith(self, word):\n    node = self.root\n    for c in word:\n        loc = ord(c)-ord('a')\n        # not all letters matched \n        if node.children[loc] is None: \n            return False          \n        node = node.children[loc]\n    # all letters matched\n    return True\n\\end{lstlisting}\nNow we complete the given Trie class with the TrieNode and \\_\\_init\\_\\_ function.\n\\begin{lstlisting}[language=Python]\nclass Trie:\n    class TrieNode:\n        def __init__(self):\n            self.is_word = False\n            self.children = [None] * 26 # the position of a child encodes a char\n\n    def __init__(self):\n        \"\"\"\n        Initialize your data structure here.\n        \"\"\"\n        self.root = self.TrieNode() # the root represents the empty string\n\\end{lstlisting}\n\\end{examples}\n\n\\begin{examples}\n\\item \\textbf{336. Palindrome Pairs (hard).} Given a list of unique words, find all pairs of distinct indices (i, j) in the given list, so that the concatenation of the two words, i.e. words[i] + words[j], is a palindrome.\n\\begin{lstlisting}\nExample 1:\n\nInput: [\"abcd\",\"dcba\",\"lls\",\"s\",\"sssll\"]\nOutput: [[0,1],[1,0],[3,2],[2,4]] \nExplanation: The palindromes are [\"dcbaabcd\",\"abcddcba\",\"slls\",\"llssssll\"]\n\nExample 2:\n\nInput: [\"bat\",\"tab\",\"cat\"]\nOutput: [[0,1],[1,0]] \nExplanation: The palindromes are [\"battab\",\"tabbat\"]\n\\end{lstlisting}\n\\textbf{Solution: One Forward Trie and Another Backward Trie.}  We start from the naive solution: for each pair of words, we check whether their concatenation is a palindrome. From Example 1, [3,3] would form the palindrome ``ss'', but it is not one of the outputs, so only ordered pairs of distinct indices count: there are $n(n-1)$ of them, and each palindrome check costs the average string length $m$, which makes the complexity $O(mn^2)$. 
However, we can use a trie structure to do better: \n\\begin{lstlisting}[language = Python]\nfrom collections import defaultdict\n\n\nclass Trie:\n    def __init__(self):\n        self.links = defaultdict(self.__class__)\n        self.index = None\n        # holds indices which contain this prefix and whose remainder is a palindrome\n        self.pali_indices = set()\n\n    def insert(self, word, i):\n        trie = self\n        for j, ch in enumerate(word):\n            trie = trie.links[ch]\n            if word[j+1:] and is_palindrome(word[j+1:]):\n                trie.pali_indices.add(i)\n        trie.index = i\n\n\ndef is_palindrome(word):\n    i, j = 0, len(word) - 1\n    while i <= j:\n        if word[i] != word[j]:\n            return False\n        i += 1\n        j -= 1\n    return True\n\n\nclass Solution:\n    def palindromePairs(self, words):\n        '''Find pairs of palindromes in O(n*k^2) time and O(n*k) space.'''\n        root = Trie()\n        res = []\n        for i, word in enumerate(words):\n            if not word:\n                continue\n            root.insert(word[::-1], i)\n        for i, word in enumerate(words):\n            if not word:\n                continue\n            trie = root\n            for j, ch in enumerate(word):\n                if ch not in trie.links:\n                    break\n                trie = trie.links[ch]\n                if is_palindrome(word[j+1:]) and trie.index is not None and trie.index != i:\n                    # if this word completes to a palindrome and the prefix is a word, complete it\n                    res.append([i, trie.index])\n            else:\n                # this word is a reverse suffix of other words, combine with those that complete to a palindrome\n                for pali_index in trie.pali_indices:\n                    if i != pali_index:\n                        res.append([i, pali_index])\n        if '' in words:\n            j = words.index('')\n            for i, word in enumerate(words):\n                if i != j and is_palindrome(word):\n                    res.append([i, j])\n                    res.append([j, i])\n        return res\n\\end{lstlisting}\n\\textbf{Solution 2: Sort the words together with their reverses.} There are often more clever ways to solve these problems. Consider a word such as ``abcd'' and its prefixes '', 'a', 'ab', 'abc', 'abcd': if a prefix is a palindrome, then the reverse of the remaining suffix, when it appears among the words, completes a palindrome pair, and keeping the words together with their indices makes this lookup fast. Note that when considering suffixes, we explicitly leave out the empty string to avoid counting duplicates. 
That is, if a palindrome can be created by appending an entire other word to the current word, then we will already consider such a palindrome when considering the empty string as a prefix for the other word.\n\\begin{lstlisting}[language = Python]\nclass Solution(object):\n    def palindromePairs(self, words):\n        # 0 means the word is not reversed, 1 means the word is reversed\n        words, length, result = sorted([(w, 0, i, len(w)) for i, w in enumerate(words)] +\n                                   [(w[::-1], 1, i, len(w)) for i, w in enumerate(words)]), len(words) * 2, []\n\n        # after the sorting, identical strings are adjacent: one original (0), one reversed (1)\n        for i, (word1, rev1, ind1, len1) in enumerate(words):\n            for j in range(i + 1, length):\n                word2, rev2, ind2, _ = words[j]\n                if word2.startswith(word1): # word2 might be longer \n                    if ind1 != ind2 and rev1 ^ rev2: # one is reversed, one is not\n                        rest = word2[len1:]\n                        # if the leftover piece is a palindrome, record the pair\n                        if rest == rest[::-1]: result += ([ind1, ind2],) if rev2 else ([ind2, ind1],)\n                else:\n                    # the list is sorted, so once word2 no longer starts\n                    # with word1, no later word can either\n                    break\n        return result\n\\end{lstlisting}\n\\end{examples}\n
There are several other data structures, like balanced trees and hash tables, that allow us to search for a word in a dataset of strings. Then why do we need a trie? Although a hash table has $O(1)$ time complexity for looking up a key, it is not efficient for the following operations:\n\\begin{itemize}\n    \\item Finding all keys with a common prefix.\n    \\item Enumerating a dataset of strings in lexicographical order.\n\\end{itemize}\n\n\\paragraph{Sorting}\nLexicographic sorting of a set of keys can be accomplished by building a trie from them and traversing it in pre-order, printing only the leaves' values. This algorithm is a form of radix sort; this is why the trie is also called a radix tree. \n\n%%%%%%%%%%%%%%%%%%%%%%%%bonus\n\\section{Bonus}\n\\paragraph{Solve the Duplicate Problem in BST} When there are duplicates, things can be more complicated, and textbooks often do not really tell us what to do when there are duplicates.  If you use the definition ``left <= root < right'' and you have a tree like:\n\\begin{lstlisting}[numbers=none]\n      3\n    /   \\\n  2       4\n\\end{lstlisting}\n\nthen adding a ``3'' duplicate key to this tree will result in:\n\\begin{lstlisting}[numbers=none]\n      3\n    /   \\\n  2       4\n    \\\n     3\n\\end{lstlisting}\nNote that the duplicates are not in contiguous levels.\n\nThis is a big issue when allowing duplicates in a BST representation such as the one above: duplicates may be separated by any number of levels, so checking for a duplicate's existence is not as simple as checking the immediate children of a node.\n\nAn option to avoid this issue is to not represent duplicates structurally (as separate nodes) but instead use a counter that counts the number of occurrences of the key. The previous example would then have a tree like:\n\\begin{lstlisting}[numbers=none]\n      3(1)\n    /     \\\n  2(1)     4(1)\n\\end{lstlisting}\n\nand after insertion of the duplicate ``3'' key it will become:\n\\begin{lstlisting}[numbers=none]\n      3(2)\n    /     \\\n  2(1)     4(1)\n\\end{lstlisting}\n\nThis simplifies SEARCH, DELETE and INSERT operations, at the expense of some extra bytes and counter operations. In what follows, we assume definition three, so that our BSTs have no duplicates. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%Exercise%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{LeetCode Problems}\n\n\\begin{enumerate}\n    \\item 144. Binary Tree Preorder Traversal\n    \\item 94. Binary Tree Inorder Traversal\n    \\item 145. Binary Tree Postorder Traversal\n    \\item 589. N-ary Tree Preorder Traversal\n    \\item 590. N-ary Tree Postorder Traversal\n    \\item 429. N-ary Tree Level Order Traversal\n    \\item 103. Binary Tree Zigzag Level Order Traversal (medium)\n    \\item 105. Construct Binary Tree from Preorder and Inorder Traversal\n\\end{enumerate}\n\n938. 
Range Sum of BST (Medium)\n\nGiven the root node of a \\textbf{binary search tree}, return the sum of the values of all nodes with value between L and R (inclusive).\n\nThe binary search tree is guaranteed to have unique values.\n\\begin{lstlisting}\nExample 1:\n\nInput: root = [10,5,15,3,7,null,18], L = 7, R = 15\nOutput: 32\n\nExample 2:\n\nInput: root = [10,5,15,3,7,13,18,1,null,6], L = 6, R = 10\nOutput: 23\n\\end{lstlisting}\n\\textbf{Tree Traversal + Divide and Conquer.} We need at most $O(n)$ time complexity. For each node, there are three cases: 1) L <= val <= R, 2) val < L, 3) val > R. In the first case we need the results of both subtrees merged with the node's own val. In the other two cases, because of the BST property, only the result of one subtree is needed. \n\\begin{lstlisting}[language=Python]\ndef rangeSumBST(self, root, L, R):\n    if not root:\n        return 0\n    if L <= root.val <= R:\n        return self.rangeSumBST(root.left, L, R) + self.rangeSumBST(root.right, L, R) + root.val\n    elif root.val < L: # the left subtree is not needed\n        return self.rangeSumBST(root.right, L, R)\n    else: # the right subtree is not needed\n        return self.rangeSumBST(root.left, L, R)\n\\end{lstlisting}\n\n\n\\end{document}
{"text": "%----------------------------------------------------------------------------------------\n% Introduction\n%----------------------------------------------------------------------------------------\n\\setcounter{page}{1} % Sets counter of page to 1\n\n\\section{Introduction} % Add a section title\n\nIn mathematical statistics, a problem that often comes up with basic tests (t-tests, Analysis of Variance, etc) is dealing with small sample sizes. When looking specifically at the two-sample t-test, the condition that has to be met and is taught in all introductory statistics courses is normality of both samples. However, in the case where this conditions cannot be met, we hope to have a sample size greater than $30$. But this magic number $30$ can be misleading. Real world data often contains more than $30$ observations. Creating models such as linear regression models is difficult because real world data often fails to meet critical model conditions such as normality of residuals and homoescedasticity. Because of this, we cannot blindly trust parameters like a $\\hat{\\beta_i}\\text{'s}$ in linear regression models and residual deviance in logistic regression models. There are also times when real world data is too small and then we can\u2019t even use the Central Limit Theorem to assume our statistics come from an approximation of a normal distribution. However, there is a solution to both of these problems: randomization based inference. Another common solution is bootstrapping, however, that is most useful when we want to develop robust estimates for statistics and standard errors as well as counteract sampling error. We are focusing on randomization methods, which focus on randomization of units within our sample. Forgetting all model distribution assumptions, randomization-based inference only cares whether the sample that you have is typical of the population. In this paper, we will explain what randomization based inference is in detail while walking through a short example pointing out each step in the context of our paper, explain why this process works, elaborate on some of the limitations that come with this test, and explore Monte Carlo methods for re-randomization. 
\n", "meta": {"hexsha": "8f2712541f9b9bcc53d7f146d781b83b6e071d67", "size": 2198, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex", "max_stars_repo_name": "jake-caldwell/Math420Proj", "max_stars_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex", "max_issues_repo_name": "jake-caldwell/Math420Proj", "max_issues_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex", "max_forks_repo_name": "jake-caldwell/Math420Proj", "max_forks_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 244.2222222222, "max_line_length": 1906, "alphanum_fraction": 0.7447679709, "num_tokens": 406, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5816726797076197}}
{"text": "\n\\section{The \\reversecopy algorithm}\n\\Label{sec:reversecopy}\n\nThe \\reversecopy\nalgorithm of the \\cxx Standard Library \\cite[\\S 28.6.10]{cxx-17-draft} inverts the order of elements\nin a sequence.\n\\reversecopy does not change the input sequence, and\ncopies its result to the output sequence.\nFor our purposes we have modified the generic implementation\nto that of a range of type \\valuetype.\nThe signature now reads:\n\n\\begin{lstlisting}[style=acsl-block]\n\n  void reverse_copy(const value_type* a, size_type n, value_type* b);\n\\end{lstlisting}\n\n%\\subsection{The predicate \\Reverse}\n\nInformally, \\reversecopy copies the elements from the array \\inl{a} into\narray \\inl{b} such that the copy is a reverse of the original array. \nIn order to concisely formalize these conditions we define in the following\nlisting the predicate \\logicref{Reverse} (see also Figure~\\ref{fig:Reverse}).\n\n\\input{Listings/Reverse.acsl.tex}\n\nWe also define several overloaded variants of \\Reverse that \nprovide default values for some of the parameters.\nThese overloaded versions enable us to write later more concise \\acsl annotations.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.60\\textwidth]{Figures/reverse_logic.pdf}\n\\caption{\\Label{fig:Reverse} Sketch of predicate~\\Reverse}\n\\end{figure}\n\n\\FloatBarrier\n\n\\subsection{Formal specification of \\reversecopy}\n\nThe specification of \\specref{reversecopy} is shown in the following listing\nWe use the second version of predicate \\logicref{Reverse}\nin order to formulate the postcondition of \\reversecopy.\n\n\\input{Listings/reverse_copy.h.tex}\n\n\\subsection{Implementation of \\reversecopy}\n\nThe implementation of \\implref{reversecopy} is shown in the following listing.\n%\nFor the postcondition to be true, we must ensure that for\nevery element \\inl{i}, the comparison \\inl{b[i] == a[n-1-i]} holds.\nThis is formalized by the loop invariant~\\inl{reverse} where we employ\nthe first version of \\logicref{Reverse}.\n\n\\input{Listings/reverse_copy.c.tex}\n\n\\clearpage\n\n", "meta": {"hexsha": "7b297cc3f40e0bab6b02aca1952a9f63f56d3df0", "size": 1990, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/mutating/reverse_copy.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/mutating/reverse_copy.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/mutating/reverse_copy.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 32.6229508197, "max_line_length": 100, "alphanum_fraction": 0.7874371859, "num_tokens": 506, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7956581000631541, "lm_q1q2_score": 0.5816726797076196}}
{"text": "\\section{Results \\& discussion}\n\\label{sec:resul}\n\nWe have not mentioned the polynomial function due to the factor that is always performing worst in our tests.\nIn the following table you can see the result for linear and Radial Basis function kernel type and different values of C.\n\n\\begin{table}[!ht]\n\t\\begin{center}\n\t\\begin{tabular}{ccc}\n\t\t\\hline\n\t\tKernel Type  & C & Accuracy  \\\\ \n\t\t\\hline\n\t\t\\multirow{4}{*}{Radial Basis function}       & 0.5     & 75$\\%$ \\\\\n\t\t\t\t & 1        & 78 $\\%$         \\\\\n\t\t\t\t & 3        & 84  $\\%$       \\\\\n\t\t\t\t & 5        & 83 $\\%$    \t    \\\\\n\t\t\\hline\n\t\t\\multirow{4}{*}{Linear}       & 0.5     & 78$\\%$ \\\\\n\t\t\t\t & 1        & 80 $\\%$         \\\\\n\t\t\t\t & 3        & 82  $\\%$       \\\\\n\t\t\t\t & 5        & 77 $\\%$    \t    \\\\ \\hline\n\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Summary of accuracy with different kernel and different C.}\n\t\\label{tb:table2}\n\\end{table}\t\n\nAnother important parameter to tune was the minimum word frequency. The following table contains evaluation of different word frequencies.\n\n\n\\begin{table}[!ht]\n\t\\begin{center}\n\t\\begin{tabular}{cc}\n\t\t\\hline\n\t\tWord Freequency & Accuracy  \\\\ \n\t\t\\hline\n\t\t3      & 75$\\%$ \\\\\n\t\t\t\t5  & 79 $\\%$         \\\\\n\t\t\t\t   7 & 77  $\\%$       \\\\\n\t\t\t\t 10    & 76 $\\%$    \t    \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Summary of accuracy with different kernel and different C.}\n\t\\label{tb:table3}\n\\end{table}\t\n\n\n\nBased on this knowledge we chose the following hyper-parameters:\n\n\n\\begin{table}[!h]\n\t\\begin{center}\n\t\\begin{tabular}{c | c}\n\t\t\\hline\n\t\tWord Freequency & 5 \\\\ \\hline \n\t\t\n\t\t\n\t\tKernel Type & Radial Basis \\\\  \\hline \n\t\t\n\t\tC & 3 \\\\  \\hline \n\t\t\n\t\tTraining Accuracy & 95$\\%$ \\\\  \\hline \n\t\t\n\t\tTest Accuracy & 85$\\%$ \\\\ \n\t\t\\hline\n\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Summary of accuracy with different kernel and different C.}\n\t\\label{tb:table4}\n\\end{table}\t", "meta": {"hexsha": "77fa64febaae9a2786e39727efd9ee8da346e198", "size": 1839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "04_resul.tex", "max_stars_repo_name": "ai-freelancer/paper_TSA_SVC_Luxembourg", "max_stars_repo_head_hexsha": "8efb6bd87f250d794833160570ab48049d9fe2ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "04_resul.tex", "max_issues_repo_name": "ai-freelancer/paper_TSA_SVC_Luxembourg", "max_issues_repo_head_hexsha": "8efb6bd87f250d794833160570ab48049d9fe2ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-20T21:15:42.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-20T21:15:42.000Z", "max_forks_repo_path": "04_resul.tex", "max_forks_repo_name": "ai-freelancer/paper_TSA_SVC_Luxembourg", "max_forks_repo_head_hexsha": "8efb6bd87f250d794833160570ab48049d9fe2ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.5416666667, "max_line_length": 138, "alphanum_fraction": 0.5736813486, "num_tokens": 618, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.795658090372256, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.5816726632987006}}
{"text": "\\chapter{Lecture 4 May 09 2018}\n  \\label{chapter:lecture_4_may_09_2018}\n\n\\section{Groups (Continued)} % (fold)\n\\label{sec:groups_continued}\n\n\\subsection{Groups (Continued)} % (fold)\n\\label{sub:groups_continued}\n\n\\begin{propo}[Cancellation Laws]\\label{propo:cancellation_laws}\n  Let $G$ be a group and $g, h, f \\in G$. Then\n  \\begin{enumerate}\n    \\item \\begin{enumerate}\n        \\item (\\hlnoteb{Right Cancellation}) $gh = gf \\implies h = f$\n        \\item (\\hlnoteb{Left Cancellation}) $hg = fg \\implies h = f$\n      \\end{enumerate}\n    \\item The equation $ax = b$ and $ya = b$ have unique solution for $x, y \\in G$.\n  \\end{enumerate}\n\\end{propo}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \\item \\begin{enumerate}\n      \\item By left multiplication and associativity,\n        \\begin{equation*}\n          gh = gf \\iff g^{-1} gh = g^{-1} gf \\iff h = f\n        \\end{equation*}\n      \\item By right multiplication and associativity,\n        \\begin{equation*}\n          hg = fg \\iff hgg^{-1} = fgg^{-1} \\iff h = f\n        \\end{equation*}\n    \\end{enumerate}\n\n    \\item Let $x = a^{-1} b$. Then\n      \\begin{equation*}\n        a x = a (a^{-1} b) = (aa^{-1}) b = b.\n      \\end{equation*}\n      If $\\exists u \\in G$ that is another solution, then\n      \\begin{equation*}\n        au = b = ax \\implies u = x\n      \\end{equation*}\n      by Left Cancellation. The proof for $ya = b$ is similar by letting $y = ba^{-1}$.\n  \\end{enumerate}\\qed\n\\end{proof}\n\n% subsection groups_continued (end)\n\n\\subsection{Cayley Tables} % (fold)\n\\label{sub:cayley_tables}\n\nFor a finite group, defining its operation by means of a table is sometimes convenient.\n\n\\begin{defn}[Cayley Table]\\label{defn:cayley_table}\n\\index{Cayley Table}\n  Let $G$ be a group. Given $x, y \\in G$, let the product $xy$ be an entry of a table in the row corresponding to $x$ and column corresponding to $y$. Such a table is called a \\hlnoteb{Cayley Table}.\n\\end{defn}\n\n\\begin{note}\n  By \\autoref{propo:cancellation_laws}, the entries in each row (and respectively, column) of a Cayley Table are all distinct.\n\\end{note}\n\n\\begin{eg}\n  Consider the group $(\\mathbb{Z}_2, +)$. Its Cayley Table is\n  \\begin{center}\n    \\begin{tabular}{c|c|c}\n      $\\mathbb{Z}_2$ & $[0]$ & $[1]$ \\\\\n      \\hline\n      $[0]$     & $[0]$ & $[1]$ \\\\\n      $[1]$     & $[1]$ & $[0]$ \n    \\end{tabular}\n  \\end{center}\n  where note that we must have $[1] + [1] = [0]$; otherwise if $[1] + [1] = [1]$ then $[1]$ does not have its additive inverse, which contradicts the fact that it is in the group.\n\\end{eg}\n\n\\marginnote {\n  If we replace $1$ by $[0]$ and $-1$ by $[1]$, the Cayley Tables of $\\mathbb{Z}_2$ and $\\mathbb{Z}^*$ are the same. In thie case, we say that $\\mathbb{Z}_2$ and $\\mathbb{Z}^*$ are \\hlnotea{isomorphic}, which we denote by $\\mathbb{Z}_2 \\cong \\mathbb{Z}^*$.\n}\n\n\\begin{eg}\n  Consider the group $\\mathbb{Z}^* = \\{1. -1\\}$. 
Its Cayley Table (under multiplication) is\n  \\begin{center}\n    \\begin{tabular}{c|c|c}\n      $\\mathbb{Z}^*$ & $1$    & $-1$ \\\\\n      \\hline\n      $1$              & $1$  & $-1$ \\\\\n      $-1$             & $-1$ & $1$\n    \\end{tabular}\n  \\end{center}\n\\end{eg}\n\n\\begin{eg}\\label{eg:cyclic_group_cayley_table}\n  Given $n \\in \\mathbb{N}$, the \\hldefn{Cyclic Group} of order $n$ is defined by\n  \\begin{equation*}\n    C_n = \\{1, a, a^2, ..., a^{n - 1}\\} \\quad \\text{with } a^n = 1.\n  \\end{equation*}\n  We write $C_n = \\langle a : a^n = 1 \\rangle$ and $a$ is called a generator of $C_n$. The Cayley Table of $C_n$ is\n  \\begin{center}\n    \\begin{tabular}{c | c c c c c c}\n      $C_n$     & $1$       & $a$       & $a^2$  & $\\hdots$ & $a^{n - 2}$ & $a^{n - 1}$ \\\\\n      \\hline\n      $1$       & $1$       & $a$       & $a^2$  & $\\hdots$ & $a^{n - 2}$ & $a^{n - 1}$ \\\\\n      $a$       & $a$       & $a^2$     & $a^3$  & $\\hdots$ & $a^{n - 1}$ & $1$ \\\\\n      $a^2$     & $a^2$     & $a^3$     & $a^4$  & $\\hdots$ & $1$         & $a$ \\\\\n      \\vdots    & \\vdots    & \\vdots    & \\vdots &          & \\vdots      & \\vdots \\\\\n      $a^{n-2}$ & $a^{n-2}$ & $a^{n-1}$ & $1$    & $\\hdots$ & $a^{n-4}$   & $a^{n-3}$ \\\\\n      $a^{n-1}$ & $a^{n-1}$ & $1$       & $a$    & $\\hdots$ & $a^{n-3}$   & $a^{n-2}$\n    \\end{tabular}\n  \\end{center}\n\\end{eg}\n\n\\begin{propo}\\label{propo:small_groups}\n  Let $G$ be a group. Up to isomorphism, we have\n  \\begin{enumerate}\n    \\item if $\\abs{G} = 1$, then $G \\cong \\{1\\}$.\n    \\item if $\\abs{G} = 2$, then $G \\cong C_2$.\n    \\item if $\\abs{G} = 3$, then $G \\cong C_3$.\n    \\item if $\\abs{G} = 4$, then either $G \\cong C_4$ or $G \\cong K_4 \\cong C_2 \\times C_2$ \\marginnote{$K_n$ is known as the \\hldefn{Klein n-group}}.\n  \\end{enumerate}\n\\end{propo}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \\item If $\\abs{G} = 1$, then it can only be $G = \\{1\\}$ where $1$ is the identity element.\n    \\item $\\abs{G} = 2 \\implies G = \\{1, g\\}$ with $g \\neq 1$. The Cayley Table of $G$ is thus\n      \\begin{center}\n        \\begin{tabular}{c | c c}\n        $G$ & $1$ & $g$ \\\\\n        \\hline\n        $1$ & $1$ & $g$ \\\\\n        $g$ & $g$ & $1$\n        \\end{tabular}\n      \\end{center}\n      where we note that $g^2 = 1$; otherwise if $g^2 = g$, then we would have $g = 1$ by \\autoref{propo:cancellation_laws}, which contradicts the fact that $g \\neq 1$. Comparing the above Cayley Table with that of $C_2$, we see that $G = \\langle g : g^2 = 1 \\rangle \\cong C_2$.\n    \\item $\\abs{G} = 3 \\implies G = \\{1, g, h\\}$ with $g \\neq 1 \\neq h$ and $g \\neq h$. We can then start with the following Cayley Table:\n      \\begin{center}\n        \\begin{tabular}{c | c c c}\n        $G$ & $1$ & $g$ & $h$ \\\\\n        \\hline\n        $1$ & $1$ & $g$ & $h$ \\\\\n        $g$ & $g$ &     &     \\\\\n        $h$ & $h$ &     &     \n        \\end{tabular}\n      \\end{center}\n      We know that by \\autoref{propo:cancellation_laws}, $gh \\neq g$ and $gh \\neq h$. Thus $gh = 1$. Similarly, we get that $hg = 1$.\n\n      \\underline{Claim:} Entries in a row (or column) must be distinct. Suppose not. Then say $g^2 = 1$. But since $gh = 1$, by \\autoref{propo:cancellation_laws}, we have that $h = g$, which is a contradiction.\n\n      With that, we can proceed to fill in the rest of the entries: with $g^2 = h$ and $h^2 = g$. 
Therefore,\n      \\begin{center}\n        \\begin{tabular}{c | c c c}\n        $G$ & $1$ & $g$ & $h$ \\\\\n        \\hline\n        $1$ & $1$ & $g$ & $h$ \\\\\n        $g$ & $g$ & $h$ & $1$ \\\\\n        $h$ & $h$ & $1$ & $g$\n        \\end{tabular}\n      \\end{center}\n\n      Recall that the Cayley Table for $C_3$ is:\n      \\begin{center}\n        \\begin{tabular}{c | c c c}\n        $C_3$ & $1$   & $a$   & $a^2$ \\\\\n        \\hline\n        $1$   & $1$   & $a$   & $a^2$ \\\\\n        $a$   & $a$   & $a^2$ & $1$ \\\\\n        $a^2$ & $a^2$ & $1$   & $a$\n        \\end{tabular}\n      \\end{center}\n      $\\therefore G \\cong C_3$ (by identifying $g = a$ and $h = a^2$).\n\n    \\item Suppose $G$ is a group of order $4$, i.e. $\\abs{G} = 4$. Then, let $G = \\{1, g, h, f\\}$, where $1, g, h, f$ are distinct. We can then draw the following Cayley Table, wherein the blank entries will be discussed.\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g &   &   &   \\\\\n              h     & h &   &   &   \\\\\n              f     & f &   &   &  \n            \\end{tabular}\n          \\end{center}\n\n          We know that $gh \\neq h$, otherwise by Right Cancellation, we would have $g = 1$, which is not true since $1$ and $g$ are distinct elements of $G$. Similarly, $gh \\neq g$, since Left Cancellation would then give $h = 1$. Thus, $gh$ is either $f$ or $1$.\n\n          \\underline{Case 1: $g^2 = 1$} \\\\\n          If $g^2 = 1$, then $fg = h$, since if $fg = f$, we would have $g = 1$, contradicting the fact that $1$ and $g$ are distinct. Consequently, $hg = f$. Similarly, since $gf = f$ would contradict the fact that $g \\neq 1$ through Right Cancellation, we have $gh = f$ and $gf = h$. We now have the following form of the Cayley Table:\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g & 1 & f & h \\\\\n              h     & h & f &   &   \\\\\n              f     & f & h &   &  \n            \\end{tabular}\n          \\end{center}\n\n          Now there are 2 options, either $h^2 = 1$ or $h^2 = g$.\n\n          \\underline{Case 1-1: $h^2 = 1$} \\\\\n          If $h^2 = 1$, then through elimination, $hf = g$, $fh = g$ and $f^2 = 1$. We then have the following Cayley Table:\n          \n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g & 1 & f & h \\\\\n              h     & h & f & 1 & g \\\\\n              f     & f & h & g & 1\n            \\end{tabular}\n          \\end{center}\n\n          This is clearly the Cayley Table of the Klein $4$-group.\n\n          \\underline{Case 1-2: $h^2 = g$} \\\\\n          If $h^2 = g$, then through elimination, $hf = 1$, $fh = 1$ and $f^2 = g$. 
We then have the following Cayley Table:\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g & 1 & f & h \\\\\n              h     & h & f & g & 1 \\\\\n              f     & f & h & 1 & g\n            \\end{tabular}\n          \\end{center}\n\n          We can rearrange the elements and hence the Cayley Table to the following:\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & f & g & h \\\\\n              \\hline\n              1     & 1 & f & g & h \\\\\n              f     & f & g & h & 1 \\\\\n              g     & g & h & 1 & f \\\\\n              h     & h & 1 & f & g\n            \\end{tabular}\n          \\end{center}\n\n          which is the Cayley Table of $C_4$.\n\n          Note that the following case covers the remaining 2 cases, i.e. $g^2 = h$ and $g^2 = f$, since the argument proceeds without loss of generality.\n\n          \\underline{Case 2: $g^2 = f$} \\\\\n          If $g^2 = f$, we have that $hg = 1$, since we can only have distinct elements in a column and in a row. Consequently, we have $fg = h$. Similarly, we must have that $gh = 1$ and consequently $gf = h$. Thus we have the following Cayley Table:\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g & f & 1 & h \\\\\n              h     & h & 1 &   &   \\\\\n              f     & f & h &   &  \n            \\end{tabular}\n          \\end{center}\n\n          Note that $h^2 \\neq g$, because we would then have $fh = f$, which would imply $h = 1$ through Left Cancellation, a contradiction to the fact that $h \\neq 1$. Thus $h^2 = f$. Again, since we can only have distinct elements in a row (and a column), we will end up with the following Cayley Table:\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & g & h & f \\\\\n              \\hline\n              1     & 1 & g & h & f \\\\\n              g     & g & f & 1 & h \\\\\n              h     & h & 1 & f & g \\\\\n              f     & f & h & g & 1\n            \\end{tabular}\n          \\end{center}\n\n          We can rearrange the elements to get the following Cayley Table:\n\n          \\begin{center}\n            \\begin{tabular}{c|c|c|c|c}\n              $G$   & 1 & h & f & g \\\\\n              \\hline\n              1     & 1 & h & f & g \\\\\n              h     & h & f & g & 1 \\\\\n              f     & f & g & 1 & h \\\\\n              g     & g & 1 & h & f\n            \\end{tabular}\n          \\end{center}\n\n          which we observe is the Cayley Table for $C_4$.\n\n          Since we have explored all the possibilities, we have that the only possible groups of order $4$ are the cyclic group $C_4$ and the Klein $4$-group $K_4$.\n  \\end{enumerate}\\qed\n\\end{proof}\n\n% subsection cayley_tables (end)\n\n% section groups_continued (end)\n\n\\section{Subgroups}\n\\label{sec:subgroups}\n\n\\subsection{Subgroups}\n\\label{sub:subgroups}\n\n\\begin{defn}[Subgroup]\\label{defn:subgroup}\n\\index{Subgroup}\n  Let $G$ be a group and $H \\subseteq G$. 
If $H$ itself is a group under the operation of $G$, then we say that $H$ is a subgroup of $G$.\n\\end{defn}\n\n% subsection subgroups (end)\n\n% section subgroups (end)\n\n% chapter lecture_4_may_09_2018 (end)\n", "meta": {"hexsha": "56330afafb27a7ef550ef9a25723e2e1674a74d2", "size": 12373, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PMATH347S18/lectures/lec04.tex", "max_stars_repo_name": "japorized/TeX_notes", "max_stars_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-09-28T21:23:05.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-21T01:41:27.000Z", "max_issues_repo_path": "PMATH347S18/lectures/lec04.tex", "max_issues_repo_name": "japorized/TeX_notes", "max_issues_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-29T17:58:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-29T17:58:51.000Z", "max_forks_repo_path": "PMATH347S18/lectures/lec04.tex", "max_forks_repo_name": "japorized/TeX_notes", "max_forks_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-09-27T20:55:58.000Z", "max_forks_repo_forks_event_max_datetime": "2017-09-27T20:55:58.000Z", "avg_line_length": 39.4044585987, "max_line_length": 355, "alphanum_fraction": 0.490665158, "num_tokens": 4384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8479677545357568, "lm_q1q2_score": 0.5816630190883232}}
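As a worked check (our addition, not part of the lecture) of the claim $G \cong K_4 \cong C_2 \times C_2$ in \autoref{propo:small_groups}: take $C_2 \times C_2 = \{(1,1), (a,1), (1,b), (a,b)\}$ with $a^2 = b^2 = 1$ and componentwise multiplication. Identifying $g = (a,1)$, $h = (1,b)$ and $f = (a,b)$ gives
\begin{equation*}
  g^2 = h^2 = f^2 = (1,1), \qquad gh = (a,b) = f, \qquad hf = (a,1) = g, \qquad gf = (1,b) = h,
\end{equation*}
which reproduces exactly the Cayley Table of Case 1-1.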
{"text": "\\chapter{List of Symbols}\n\n\\emph{Will be updated.}\n\n\t\\begin{longtable}{c | l | c}\n\t\\textbf{Symbol(s)} & \\textbf{Reading} & \\textbf{Definition}\\\\\\hline\n\t$\\in$ & Member, element & 3.1.1\\\\\n\t$\\emptyset$ & Empty set & 3.1.5\\\\\n\t$\\{x:\\Phi(x)\\}$ & Class abstract & 3.1.6\\\\\n\t$\\subseteq$ & Subset & 3.2.1\\\\\n\t$\\subset$ & Proper subset & 3.2.3\\\\\n\t$\\cup$ & Union & 3.4.1\\\\\n\t$\\cap$ & Intersection & 3.4.2\\\\\n\t$\\setminus$ & Difference & 3.4.5\\\\\n\t$X\\times Y$ & Cartesian product & 3.5.3--4\\\\\n\t$X^n$ & Cartesian $n$-cube & 3.5.4\\\\\n\t$f,g,h, \\mathellipsis$ & Functions & 3.6.4\\\\\n\t$dom(f)$ & Domain of $f$ \\\\\n\t$rg(f)$ & Range of $f$\\\\\n\t$\\mathbb{N}$ & Natural numbers & 3.7.1\\\\\n\t$\\neg$ & Negation & 4.1.3\\\\\n\t$\\land$ & Conjunction \\\\\n\t$\\lor$ & Disjunction \\\\\n\t$\\to$ & Conditional \\\\\n\t$\\leftrightarrow$ & Biconditional \\\\\n\t$\\mathcal{P}$ & The set of sentence letters & 4.1.5\\\\\n\t$\\mathcal{L}$ & The set of formulas & 4.1.6\\\\\n\t$\\phi,\\psi,\\theta,\\mathellipsis$ & Formulas\\\\\n\t$T(\\phi)$ & The parsing tree of $\\phi$ & 4.3.5 \\\\\n\t$c(\\phi)$ & The complexity of $\\phi$ & \\\\\n\t$v,u,w,\\mathellipsis$ & Valuations & 5.1.2\\\\\n\t$\\llbracket\\phi\\rrbracket_v$ & The truth-value of $\\phi$ under $v$ & 5.1.9\\\\\n\t$\\Gamma\\vDash\\phi$ & $\\Gamma$ entails $\\phi$ & 5.2.2\\\\\n\t\\end{longtable}\n\t\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"../../logic.tex\"\n%%% End:\n", "meta": {"hexsha": "2b3776badb1b98c96756f06766c7dd2e4327438a", "size": 1321, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lib/notes/tex/appendix/table-of-symbols.tex", "max_stars_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_stars_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-09-12T17:29:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-05T08:03:21.000Z", "max_issues_repo_path": "lib/notes/tex/appendix/table-of-symbols.tex", "max_issues_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_issues_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 69, "max_issues_repo_issues_event_min_datetime": "2020-09-04T16:24:11.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-18T13:54:07.000Z", "max_forks_repo_path": "lib/notes/tex/appendix/table-of-symbols.tex", "max_forks_repo_name": "crcaret/KI1V13001-Inleiding-Logica", "max_forks_repo_head_hexsha": "6c7966886cde1c5a3622dadab3c9c903a7ac4ff7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-09-04T08:49:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-30T11:24:44.000Z", "avg_line_length": 33.025, "max_line_length": 77, "alphanum_fraction": 0.5715367146, "num_tokens": 546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624738835052, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5816497121317226}}
{"text": "\n\\subsection{The trivial group}\n\nThe trivial group is the group with just the identity member \\(I\\).\n\n", "meta": {"hexsha": "4c681912c098525f87184daeb5f19c6eef90bcdf", "size": 102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/groups/04-01-trivial.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/groups/04-01-trivial.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/groups/04-01-trivial.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.0, "max_line_length": 67, "alphanum_fraction": 0.7450980392, "num_tokens": 23, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.839733983715524, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5816350117863528}}
{"text": "\\section{Examples}\n\\label{sec:examples}\n\n\\paragraph{Decidable set.}\n\\label{sec:decidable-set}\n\nWe now consider the theory of a decidable set.\nRecall that in constructive mathematics a set~$S$ is said to be\n\\emph{decidable} if $x = y$ or $x \\neq y$ for all $x, y \\in S$.\nThe input to RZ is\n%\n{\\small\\VerbatimInput{decidable.thy}}%\n%\n\\noindent and the output is\n%\n{\\small\\VerbatimInput{decidable.mli}}\n%\n\\noindent\nThe output signature asks for \\Verb|decidable| to be a function\naccepting two realizers~$x$ and~$y$ and returning one of two tokens\n\\Verb|`or0| and \\Verb|`or1|, depending on whether~$x$ and~$y$ realize\nthe same element. (Disjunction is written \\Verb|cor| to emphasize that\nit is classical or.) This is nothing but a computable decision procedure\nfor equality on \\Verb|s| with respect to \\Verb|=s=|, as\nshould be expected. \n\nWe remark that nothing requires the partial equivalence relation\n\\Verb|=s=| to be computable, so not every modest set is decidable. In\nfact, there are many natural and important examples of non-computable partial\nequivalence relations, such as \\emph{extensional equality} of functions\nfrom natural numbers to natural numbers. (If we could computably\ndecide whether two functions always give equal results on equal\narguments, we could construct a Halting Oracle.)\n\n\\paragraph{Natural Numbers.}\n\\label{sec:natural-numb}\n\nNext we consider the theory of natural numbers. This example shows how\naxioms can be parameterized by theories. Recall that the natural\nnumbers are the initial algebra with one constant and one unary\noperation (such algebras are sometimes called ``iteration algebras''):\n%\n{\\small\\VerbatimInput{nat.thy}}%\n%\n\\noindent\nThe theory \\Verb|Iteration| is an auxiliary theory. The theory\n\\Verb|Nat| postulates the existence of a model \\Verb|N| of theory\n\\Verb|Iteration| which satisfies the initiality axiom stating that\nthere exists exactly one algebra morphism from \\Verb|N| to any other\niteration theory \\Verb|I|. The output generated by RZ is shown in\nFigure~\\ref{fig:nat}. The initiality axiom has been translated to a\nfunctor which expects an implementation \\Verb|I| of an iteration\ntheory and outputs a realizer for the axiom. A closer look at the\nassertion reveals that it essentially says that the realizer defines a\nfunction from natural numbers to \\Verb|I.s| by simple recursion.\n\n\\begin{figure*}\n  \\centering\n  {\\small\\VerbatimInput{nat.mli}}\n  \\caption{Output for theories \\texttt{Iteration} and \\texttt{Nat}}\n  \\label{fig:nat}\n\\end{figure*}\n\n\\paragraph{Axiom of Choice.}\n\\label{sec:axiom-choice}\n\nAs a third example, we look at the realizability interpretation of the\nAxiom of Choice. We work with the formulation of the axiom which\nstates that every total relation has a choice function:\n%\n\\[\n  (\\forall x \\in A.\\,\n   \\exists y \\in B.\\, R(x, y)) \\implies %\\\\\n  \\exists g \\in B^A.\\,\n  \\forall x \\in A.\\, R(x, g(x)) \\;.\n\\]\n%\nWe could write this as a theory parameterized by sets $A$, $B$ and the\nrelation~$R$, but to keep things simple, we use the following version:\n%\n{\\small\\VerbatimInput{choice.thy}}%\n%\nThe output is shown in Figure~\\ref{fig:choice}. 
The interesting bit is\nthe assertion for \\Verb|choice|, which says that \\Verb|choice| takes\nas an argument a realizer $f$ for the $\\forall\\exists$ statement and\noutputs a pair of functions, of which the first is the choice\nfunction~$g$ and the second one provides realizers witnessing that the\nchoice function does its job. However, there is a problem: the\nrealizer $f$ is not required to respect \\Verb|=a=| while the choice\nfunction $g$ is. In general there is no way for \\Verb|choice| to\ntransform $f$ into a \\Verb|=a=|-respecting function. It follows that\nin general the Axiom of Choice is \\emph{not} valid in the\nrealizability interpretation. This is another important difference\nbetween realizability and propositions-as-types.\n\n\\begin{figure*}\n  \\centering\n  {\\small \\VerbatimInput{choice.mli}}\n  \\caption{Output for theory \\texttt{Choice}}\n  \\label{fig:choice}\n\\end{figure*}\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"case\"\n%%% End: \n", "meta": {"hexsha": "b18d5a35292997985ff8c32fbdeb623ca1412983", "size": 4071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "private/clase/examples.tex", "max_stars_repo_name": "andrejbauer/rz", "max_stars_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-08-28T10:12:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-07T21:04:22.000Z", "max_issues_repo_path": "private/clase/examples.tex", "max_issues_repo_name": "andrejbauer/rz", "max_issues_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "private/clase/examples.tex", "max_forks_repo_name": "andrejbauer/rz", "max_forks_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6944444444, "max_line_length": 77, "alphanum_fraction": 0.7548513879, "num_tokens": 1119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375735, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.581544779637055}}
{"text": "\n% \\vspace{-.12in}\n\\paragraph{\\textbf{Notation: }}\nUnless otherwise specified, lower-case English alphabet characters indicate scalars $x \\in \\Real$. Bold indicates column vectors $\\mb{x} \\in \\Real^p$,\nand upper-case bold indicates matrices, $\\bX \\in \\Real^{p \\times q}$.  Parameters and constants are Greek characters.  Time is $t \\in [0,T]$, \n$i \\in [N]$ indexes the $N$ neurons, where $[N]=\\{1,2,\\ldots,N\\}$. Script denotes sets and pipes denote the cardinality of the set, e.g. $|\\mc{T}|$.  \n\\vspace{-.1in}\n\n\nOur overall model is then:\n\\begin{subequations}\n\\begin{align}\n  \\mc{T}_i\\ \\  &\\sim \\text{PP}(\\lambda_i) \\quad i \\in \\NN, \\quad &\\text{ where } \\mathsf{\\Lambda}(\\cdot)&=\\sum_{i=1}^{\\infty} \\lambda_i \\delta_{\\theta^*_i} \\sim \\Gamma \\text{P}(\\alpha, \\mathsf{H}(\\cdot| {\\phi})), \\\\ %\\mathcal{NW}(\\mu, \\Sigma)) \\\\ \\\\\n\\vspace{-.8in}\n  x_i(t) &= \\sum_{j = 1}^{|\\mc{T}_i|}  \\sum_{k = 1}^{K} y^*_{ijk} \\mathsf{d}_k(t - \\tau_{ij}), \\quad &\\text{ where }\\by^*_{ij}  &\\sim \\mathsf{N}_K(\\mb{\\mu}^*_i, \\Sigma^*_i) \\quad i,j \\in \\NN, \\\\\n  x(t)   &= \\sum_{i=1}^{\\infty} x_i(t) + \\eps_t, \\quad &\\text{ where at any time $t$, } \\eps_t &\\sim \\mathsf{N}(0,\\Sigma_x) \\text{ independently}\n\\end{align}\n\\end{subequations}\n\n\\begin{algorithm}\n\\caption{Generative mechanism for the multi-electrode, non-stationary, discrete-time process}\\label{alg:gen_proc}\n\\begin{tabular}{p{1.2cm}p{12.4cm}}\nInput:&  a) the number of bins $T$, and the bin-width $\\Delta$\\\\\n  &  b) the $K$-by-$L$ dictionary $\\bD$ of $K$ basis functions\\\\\n  &  c) the DP hyperparameters $\\alpha$ and $\\phi$.\\\\ \n  &  d) the transition matrix $\\bB$ of the neuron AR process \\\\\nOutput:& \\  An $M$-by-$T$ matrix $\\bX$ of multielectrode recordings. % defined by a set of state and time pairs.\n\\end{tabular}\n\\begin{algorithmic}[1]\n\\State Initialize the number of clusters $C_1$ to $0$.\n\\State Draw the overall spiking rate $\\Lambda \\sim \\text{Gamma}(\\alpha, 1)$.\n%\\State Set $A_{s_i}(\\tau) = \\sum_j A_{s_i,j} (\\tau)$ and define $u_{s_i}(\\tau) \\ge A_{s_i}(\\tau) \\forall \\tau$. \\label{alg:loop}\n%\\State Let $\\tau_o = (w_{i} - l_i)$. \\label{alg:smjp_loop}\n\\For{$t$ in $[T] $}\n\\State Sample $\\mt{z}_t \\sim \\text{Bernoulli}(\\Lambda \\Delta)$, with $\\mt{z}_t = 1$ indicating a spike in bin $t$.\n\\If{$\\mt{z}_t = 1$}   \\label{enum:thin}\n  \\State Sample $\\mt{\\nu}_t$, assigning the spike to a neuron, with\n%\\begin{align*}\n$  \\mathsf{P}({\\mt{\\nu}_t} = i) \\propto \n  \\begin{cases}\n   |\\mc{T}^t_i| \\quad i \\in [C] \\\\\n   \\alpha \\quad\\ i = C + 1 \\\\\n  \\end{cases}$\n%\\end{align*}\n       \\If{ $\\nu_t = C_t + 1$} \n          \\State  $C_{t+1} \\leftarrow C_t + 1$. 
\n\t\t\\State Draw $\\theta^*_{C_{t+1}} \\sim H_{\\phi}(\\cdot)$, and set $\\mc{T}_{C_{t+1}} = \\{t\\}$.\n       \\Else \\State  $\\mc{T}_{\\nu_t} \\leftarrow \\mc{T}_{\\nu_t} \\cup \\{t\\}$.\n    \\EndIf\n\\State Set $\\theta_t = \\theta^*_{\\nu_t}$, recalling that $\\theta_t \\equiv (\\mb{\\mu}_t, \\Sigma_t)$.\n\\State Sample $\\by_t = (\\by^1_t; \\cdots; \\by^M_t) %\\equiv (\\by_{t1}, \\cdots, \\by_{t\\Upsilon}) \n           \\sim \\mathsf{N}(\\mb{\\mu}_t, \\Sigma_t)$, determining the spike shape at all electrodes.\n%\\State $\\bx^h_{t:t+L} = A\\by^h$\n\\State $ x^m_t = \\sum_{h = 1}^L \\bD_{:,h}^{\\T} \\by^m_{t-h-1} + \\epsilon^m_t \\text{\\qquad where $\\epsilon^m_t \\sim \\mathsf{N}(0,\\sigma^2), m \\in [M]$.} $\n\\State Update the cluster parameters: ${\\mb{\\mu}}^*_i = \\mathbf{B} {\\mb{\\mu}}^*_i + r_i \\quad i \\in [C_{t+1}]$\n\\EndIf\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n", "meta": {"hexsha": "a92311b27bf6c2923b8962bfdbe37a2f19fd24a6", "size": 3404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/supp-text.tex", "max_stars_repo_name": "jovo/online-spike-sorting", "max_stars_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/supp-text.tex", "max_issues_repo_name": "jovo/online-spike-sorting", "max_issues_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/supp-text.tex", "max_forks_repo_name": "jovo/online-spike-sorting", "max_forks_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.8032786885, "max_line_length": 249, "alphanum_fraction": 0.6104582844, "num_tokens": 1357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.851952809486198, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.581520599711649}}
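The neuron-assignment step (lines 5--6 of the generative algorithm above) is a Chinese-restaurant-process draw. The following is a minimal C sketch of ours, not code from the paper; the function name and the uniform01 helper are hypothetical.

\begin{verbatim}
#include <stdlib.h>

/* Hypothetical helper: one uniform draw in [0,1). */
static double uniform01(void) { return rand() / (RAND_MAX + 1.0); }

/* CRP draw for the spike in bin t. counts[i] = |T_i|, the number of
   spikes already assigned to neuron i; C = current number of neurons;
   alpha = DP concentration parameter. Returns i < C with probability
   proportional to counts[i], or C ("new neuron") with probability
   proportional to alpha. */
int sample_assignment(const int *counts, int C, double alpha)
{
    double total = alpha;
    for (int i = 0; i < C; ++i) total += counts[i];

    double u = uniform01() * total, acc = 0.0;
    for (int i = 0; i < C; ++i) {
        acc += counts[i];
        if (u < acc) return i;   /* assign to existing neuron i */
    }
    return C;                    /* open a new neuron */
}
\end{verbatim}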
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{float}\n\\usepackage{amsmath}\n\n\n\\usepackage[hmargin=3cm,vmargin=6.0cm]{geometry}\n%\\topmargin=0cm\n\\topmargin=-2cm\n\\addtolength{\\textheight}{6.5cm}\n\\addtolength{\\textwidth}{2.0cm}\n%\\setlength{\\leftmargin}{-5cm}\n\\setlength{\\oddsidemargin}{0.0cm}\n\\setlength{\\evensidemargin}{0.0cm}\n\n\n\n\\begin{document}\n\n\\section*{Student Information } \n%Write your full name and id number between the colon and newline\n%Put one empty space character after colon and before newline\nFull Name : Zeynep \u00d6zalp \\\\\nId Number : 2237691 \\\\\n\n% Write your answers below the section tags\n\\section*{Answer 1}\n\\subsection*{a)}\nIf $f^{-1}$, $g^{-1}$ and $(gof)^{-1}$ do not exist, there is nothing to prove. So assuming that they exist, let $g\\circ f=z$. Then, $z^{-1}=(gof)^{-1}$. Let a, b, c such that \n$$(a \\in A_0) \\wedge (b \\in B_0 \\wedge f(a)=b) \\wedge (c \\in C_0 \\wedge g(b)=c)$$\n$$(A_0 \\subseteq A) \\wedge (B_0 \\subseteq B) \\wedge (C_0 \\subseteq C)$$ \nTherefore, \n$$z(a)=c$$\n$$z^{-1}(c)=a$$\nNote that, $g^{-1}(c)=b$ and $f^{-1}(b)=a$. Thus,\n$$f^{-1}(g^{-1}(c))=a$$ which is equal to equation $z^{-1}(c)=a$.\n$$z^{-1}(c)=f^{-1}(g^{-1}(c))$$\n$$(g\\circ f)^{-1}(c)=f^{-1}(g^{-1}(c))$$\nSince the last equation is true for some arbitrary element of $C_0$, one can conclude that the equation is valid for all elements of $C_0$. Hence,\n$$(g\\circ f)^{-1}(C_0)=f^{-1}(g^{-1}(C_0))$$\n\n\\subsection*{b)}\n\\subsubsection*{Injectivity of f}\nAssume that f is not injective.\n$$(f(a_1)=f(a_2)) \\wedge (a_1 \\neq a_2)$$\nTherefore,\n$$((g\\circ f)(a_1)=(g\\circ f)(a_2)) \\wedge (a_1 \\neq a_2)$$\n$$\\perp$$\nContradiction since $g\\circ f$ is injective. Therefore, our assumption is false and f is injective. \n\\subsubsection*{Injectivity of g}\nWhen the domain of g is the range of f, g must be injective. However, g may have other other elements in its domain such that those elements are not in the range of f and g may not be injective for those elements. Therefore, g is not necessarily injective. It can be injective or not.\n\n\\subsection*{c)}\n\\subsubsection*{Surjectivity of g}\nAssume that there exists some z such that \n$$(z \\in C) \\wedge (z \\notin g(B_0)) \\ where \\ B_0 \\subset B$$\nTherefore,\n$$(z \\in C) \\wedge (z \\notin (g\\circ f)(A_0)) \\ where \\ A_0 \\subset A \\ and\\ f(A_0)=B_0$$\n$$\\perp$$\nContradiction since $(g\\circ f)$ is surjective. Therefore, our assumption is false and f is injective.\n\\subsubsection*{Surjectivity of f}\n\\textbf{i)} Assume that f is not surjective so that its range is $B_0$.\n$$f(A)=B_0 \\subset B$$\n$$(g \\circ f)(A)=g(B_0)=C$$\nThis equation is valid and makes no contradiction since we know that $g\\circ f$ is surjective.\\\\\n\\textbf{ii)} Assume that f is surjective so that its range is $B$.\n$$f(A)=B$$\n$$(g \\circ f)(A)=g(B)=C$$\nThis equation is valid and makes no contradiction since we know that $g\\circ f$ is surjective.\\\\\nTherefore, f is not necessarily surjective. 
It can be surjective or not.\n\n\\section*{Answer 2}\n\n\\subsection*{a)}\n\\subsubsection*{Injectivity of f}\nAssume that\n$$f(x)=f(y)$$\n$$(g\\circ f)(x)=(g\\circ f)(y)$$\n$$x=y\\ since\\ (g\\circ f)(x)=x\\ and\\ (g\\circ f)(y)=y$$\nTherefore, f is injective.\n\\subsubsection*{Surjectivity of f}\nFor some $b \\in B$ the following is true.\n$$(f\\circ h)(b)=b$$\nThus, every element in of B has a pre-image in A and f is surjective.\n\n\\subsection*{b)}\nYes, a function can have more than one left and right inverses. Consider these examples:\\\\\n$f:\\{1,2\\} \\rightarrow \\{1\\}$ defined by $f(1)=f(2)=1$\\\\\\\\\n$g_1:\\{1\\} \\rightarrow \\{1,2\\}$ defined by $g_1(1)=1$ so that $(g_1\\circ f)(1)=1$ \\\\\n$g_2:\\{1\\} \\rightarrow \\{1,2\\}$ defined by $g_2(1)=2$ so that $(g_2\\circ f)(2)=2$\\\\\\\\\n$h_1:\\{1\\} \\rightarrow \\{1, 2\\}$ defined by $h_1(1)=1$ so that $(f\\circ h_1)(1)=1$ \\\\\n$h_2:\\{1\\} \\rightarrow \\{1, 2\\}$ defined by $h_2(1)=2$ so that $(f\\circ h_2)(1)=1$\\\\\\\\\nClearly, $g_1$ and $g_2$ are left inverses of f and $h_1$ and $h_2$ are right inverses of f.\n\n\\subsection*{c)}\nIf f has a left inverse $g$ then it is injective and if f has a right inverse $h$ then it is subjective. Since f has both right and left inverses, it is both injective and subjective, i.e., f is bijective.\\\\\nNow, choose an arbitrary $x\\in B$.\n$$f(h(x))=x$$\nApply g to both sides.\n$$g(f(h(x)))=g(x)$$\nHowever, for all $x_0 \\in A$, $g(f(x_0))=x_0$. Thus,\n$$g(f(h(x)))=h(x)=g(x)$$\nTherefore, $g=h=f^{-1}$.\n\n\\section*{Answer 3}\n\\subsection*{Bijectivity of f}\nChoose arbitrary $x_1,\\ x_2$ in the domain of f and $y_1,\\ y_2$ in A such that $f(x_1,y_1)=f(x_2,y_2)$. Therefore,\n$$(x_1+y_1-1, y_1)=(x_2+y_2-1, y_2)$$\nThus, $x_1+y_1-1=x_2+y_2-1$ and $y_1=y_2$. Since $x_1+y_1=x_2+y_2$ and $y_1=y_2$, clearly $x_1=x_2$. Thus, f is injective.\n\n$$f(x_1,y_1)=(x_1+y_1-1, y_1)$$\nFor all positive integers $x_1$ and $y_1$, $y_1\\le x_1+y_1-1$. Since for all of the pairs $(x_1+y_1-1, y_1)$ is in A, f is surjective.\\\\\nFunction f is bijective.\n\\subsection*{Bijectivity of g}\nLet $h,i: Z^+\\rightarrow Z^+$, $h(x)=\\dfrac{1}{2}(x^2-x)$ and $i(y)=y$.\\\\\n$$i(y_1)=i(y_2) \\wedge (y_1\\neq y_2))$$\n$$y_1=y_2$$\n$$\\perp$$\n$$h(x_1)=h(x_2) \\wedge (x_1 \\neq x_2)$$\n$$\\dfrac{1}{2}(x_1{^2}-x_1)=\\dfrac{1}{2}(x_2{^2}-x_2)$$\n$$x_1{^2}-x_1=x_2{^2}-x_2$$\n$$x_1{^2}-x_2{^2}=x_1-x_2$$\nSince $x_1 \\neq x_2$, $x_1-x_2\\neq 0$, we can divide the equation by $x_1-x_2$.\n$$x_1+x_2=1$$\n$$\\perp$$\nContradiction since $x_1$ and $x_2$ are defined as positive integers. So, $h$ and $i$ is injective.\\\\\n\nNote that the sum of injective functions may not injective since for some function $k(x)=x$, $l(x)=-x$ and then $(k+l)(x)=0$ is not injective. However, the domain of $g(x,y)$ is A such that $x,y\\in Z^+$; therefore, the sum of injective functions is injective on that domain. Thus, $g(x,y)=h(x)+i(y)$ is injective.\\\\\n\nBy Theorem.2 on p.239, $\\forall a,d \\in Z$, $d>0$, there exists unique $q,r$ such that $a=qd+r$ where $0 \\leq r <d$.\n$$q=1/2(x-1)$$\n$$d=x$$\n$$r=y$$\n$$g(x,y)=qd+r$$\nThe function g is surjective. Thus, it is bijective.\n\n\\section*{Answer 4}\n\\subsection*{a)} \nWE know that the set of positive rational numbers are countable from ex.4/p.172 in our textbook. Since the mapping $Q^{+} \\rightarrow Q^{-}$ is bijective, $Q^-$ is also countable. The union of countable sets $Q^+$, $Q^-$, $\\{ 0 \\}$ is also countable. 
So, the set of rational numbers is countable.\\\\\n\nLet \n$$P_n=\\{x^n+a_{n-1}x^{n-1}+...+a_1x+a_0\\ |\\ a_i \\in Q,\\ i=0,...,n-1\\}$$ and\n$$Q^n=\\{(a_{n-1},...,a_0)\\ |\\ a_i \\in Q,\\ i=0,...,n-1\\}$$.\nClearly, the mapping $P_n \\rightarrow Q^n$ is a bijective from $P_n $ to $Q^n$ and therefore, $P_n $ has the same cardinality with $Q^n$.\\\\\n\n$Q^n$ is the set of n-tuples where all $a_i \\in Q,\\ i=0,...,n-1$. The elements of $a_0,...,a_{n-1}$ need not to be distinct, so $Q^n$ is also countable. So is $P_n$. Since the set of all polynomials with rational coefficients is the union of all $P_i$'s where $i=0,...,n$, it is countable. Since each polynomial has finite number of roots, the set of all polynomials with countable roots are also countable. Thus, the set of algebraic real numbers is countable.\n\n\\subsection*{b)}\nLet $A$ be the set of algebraic real numbers and $T$ is the set of trancendental real numbers. Note that $A \\subset R$, $T \\subset R$, $A=R-T$ and $A$ is countable, so is $R-T$. Assume that $T$ is countable.\n$$T \\cup (R-T) = R$$\nFrom our assumption, the union of two countable sets should be countable. Hovewer, the set of real numbers are not countable and our assumption is false. Thus, the set of trancendental real numbers are not countable.\n\n\\section*{Answer 5}\nFor positive $a_1$, $a_2$ where $a_1 < a_2$ and sufficiently large $n$\n$$a_1n.ln(n) < k < a_2n.ln(n)$$\n$$ln(a_1n.ln(n)) <ln(k) <ln(a_2n.ln(n))$$\nDivide the first equation by the second one.\n$$\\dfrac{a_1n.ln(n)}{ln(a_1n.ln(n))}<\\dfrac{k}{ln(k)}<\\dfrac{a_1n.ln(n)}{ln(a_2n.ln(n))}$$\n$$a_1n.ln(n-a_1n.ln(n)) < \\dfrac{k}{ln(k)} < a_2n.ln(n-a_2n.ln(n))$$\n$$n-a_1n.ln(n)>0$$ by the definiton of $ln$. Thus,\n$$n>a_1n.ln(n)$$\nSince $n \\neq 0$, divide by $n$.\n$$a_1ln(n)<1$$\n\nAssume that $ln(n-a_1n.ln(n))>n$. We can write\n$$ln(n-a_1n.ln(n))>a_1n.ln(n)$$\nsince $n>a_1n.ln(n)$.\nDivide by $ln(n) \\neq 0$.\n$$ln(a_1n.ln(n))>a_1n$$\nLet $b_1=a_1ln(n)$ and thus $b_1<1$. Put $b_1$.\n$$ln(b_1a_1)>a_1n$$\n$$\\perp$$\nContradiction since $b_1<1$, $a_1>0$ and n is sufficiently large.\nSo, our assumption is false and $ln(n-a_1n.ln(n))<n$. The same steps is valid for $a_2$ and $b_2=a_2ln(n)$. Therefore,\n$$\\Theta (\\dfrac{k}{ln(k)}) = n$$\n\n\\section*{Answer 6}\n\\subsection*{a)}\n$$6=1+2+3$$\n$$28=1+2+4+7+14$$\n\\subsection*{b)}\nSuppose $k=2^p-1$ is prime and let $n=2^{p-1}(2^p-1)$.\\\\\nLet $f(n)$ be the sum of the positive divisors of $n$. Note that:\\\\\n1. When $n$ is perfect, $f(n)=2n$.\\\\\n2. When $n$ is prime, $f(n)=n+1$.\\\\\n3. $f(2^a)=1+2^1+2^2+...+2^{a-1}=2^a-1$\\\\\n4. $f(n)=f(p_1^{a_1})f(p_2^{a_2})...f(p_n^{a_n})$ where $p_i$'s are prime divisors of $n$ and $a_i$'s are the powers of them. So, $f(mn)=f(m)f(n)$ where $m$ and $n$ are relatively prime.\\\\\n\nSince $2^p-1$ is prime, $2^p-1$ and $2^{p-1}$ are relatively prime. Therefore,\n$$f(n)=f(2^{p-1})f(2^p-1)=(2^p-1)2^p=2n$$\nThus, $n=2^{p-1}(2^p-1)$ is perfect when $2^p-1$ is prime.\n\n\\section*{Answer 7}\n\\subsection*{a)}\nBy the definition,\n$$x=nd_2+c_2=md_1+c_1$$\n$$nd_2-md_1=c_1-c_2$$\nLet $gcd(m,n)=g$. Take $(mod\\ g)$ of both sides.\n$$0 \\equiv c_2-c_1(mod\\ g)$$\nTherefore, $gcd(m,n)\\ |\\ c_2-c_1$.\n\\subsection*{b)}\nLet $d=gcd(m,n)$. By Bezout's Theorem on p.269 in our textbook, there exist $s,t\\in Z$ such that $d=sm+tn$. Let $(q_1,r)$ and $(q_2,r)$ be the quotient and remainder of $c_1$ and $c_2$ upon division by $d$. 
Then $x=q_1sm+q_2tn+r$ is a unique modulo $lcm(m,n)$.\n\nAssume that there is two distinct $x_1>0,\\ x_2>0$ and $m,\\ n$ are relatively prime. Therefore, $lcm(m,n)=mn$.\n$$x_1-x_2=0$$\n$$x_1-x_2\\equiv 0(mod\\ m)\\equiv 0(mod\\ n)$$\n$$x_1-x_2\\equiv 0(mod\\ mn)$$\nSince $x_1-x_2$ has no remainder, $x_1-x_2\\geq lcm(mn)$. Thus, there is no distict $x_1>0,\\ x_2>0$ in the interval $[0,\\ lcm(m,n))$.\\\\\nNow, suppose that $m,\\ n$ are not relatively prime. Let $m=ay$ and $n=by$ then $lcm(m,n)=aby$. Since $m,n$ are not relatively prime, $c_1=c_2$.\n$$x_1 \\equiv c_1 (mod\\ ay)\\equiv c_1(mod\\ by)$$\n$$x_2 \\equiv c_1 (mod\\ ay)\\equiv c_1(mod\\ by)$$\nTherefore,\n$$x_1-x_2= ay(d_1-d_3)$$\n$$or$$\n$$x_1-x_2= by(d_2-d_4)$$\n\nClearly, $x_1$ and $x_2$ are divisable by both $ay$ and $by$. Thus, \n$$x_1-x_2\\equiv 0 (mod\\ aby)$$\nSince $x_1-x_2$ has no remainder, $x_1-x_2\\geq lcm(mn)$. Thus, there is no distict $x_1>0,\\ x_2>0$ in the interval $[0,\\ lcm(m,n))$.\\\\\n\n\\end{document}\n\n\u200b\n\n", "meta": {"hexsha": "0105ef663654c6a59d0b3140106944b3254d2880", "size": 10498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ceng223/hw2/hw2.tex", "max_stars_repo_name": "zeynepozalp/Coursework", "max_stars_repo_head_hexsha": "d2526229a757a926c311e49c7ffec995ebb9f365", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ceng223/hw2/hw2.tex", "max_issues_repo_name": "zeynepozalp/Coursework", "max_issues_repo_head_hexsha": "d2526229a757a926c311e49c7ffec995ebb9f365", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ceng223/hw2/hw2.tex", "max_forks_repo_name": "zeynepozalp/Coursework", "max_forks_repo_head_hexsha": "d2526229a757a926c311e49c7ffec995ebb9f365", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.6577777778, "max_line_length": 461, "alphanum_fraction": 0.6434558964, "num_tokens": 4337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8354835411997897, "lm_q1q2_score": 0.5814617153890016}}
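A quick numeric check of Answer 7 (our example, not part of the assignment): take $m=4$, $n=6$, $c_1=1$, $c_2=3$. Then $gcd(4,6)=2$ divides $c_2-c_1=2$, and
$$x=9:\quad 9 \equiv 1\ (mod\ 4),\quad 9 \equiv 3\ (mod\ 6),$$
with $x=9$ being the only such value in the interval $[0,\ lcm(4,6)) = [0,\ 12)$.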
{"text": "\\section{Stereo vision}\n\nThree-dimensional dynamic capture means in general the way of recording a sequence of movements of a real-life target or scene.\nThis section introduces the methods for seeing three-dimensional structures from stereo setups, consisting of two or more cameras.\nThe configuration of a proper setup with two is described; it can be extended to the multi-view domain.\nMost of the sophisticated algorithms able to recover a dense reconstruction base their work on the same principles, as follows.\n\n\\subsection{Coordinate systems and transforms} \\label{sec:coord}\n\nThe previous section described how to record a view of a scene with a camera. From now on, the term camera refers to a particular camera configuration, which can be a single physical camera moved to different locations.\n\nThe camera is a projective object located somewhere in the imaged scene.\nIts \\textit{intrinsic parameters} model the properties of projection, but do not take into account the camera location in any global coordinate system.\nThe \\textit{extrinsic parameters} contain the camera location and rotation in another global coordinate system, structured as a matrix.\nThis is especially advantageous when there are more than one cameras and their coordinates must be related.\n\\cite{hartley03multiview,heyden2005multiple}\nThis part quickly reviews basic transforms whose results are needed in the later reconstruction steps.\n\n%Calibration is often specified with a camera projection matrix, or several separate matrices.\n%It may be convenient to store intrinsics and extrinsics separately if the intrinsic matrix is constant for several pictures, for example.\n\n\n\\simplefig{h}{%\n\\includegraphics[width=0.7\\textwidth]{gfx/pinhole3d}\n}{fig:pinhole3d}\n{Pinhole camera geometry. Camera coordinate system origin at O, axis X3 points towards the optical axis, Y1 and Y2 point to image plane axes and R is the principal point, at the image center. The point P projects to Q, as well as everything else on the line joining them. The image plane is f units away from camera origin; f is called the focal length.}\n\nIn computer graphics and vision, points and directions are usually described in homogeneous coordinates. Translation, perspective projection, rotation, and other operations are conveniently described as matrices by using an additional dimension for points, of which usually the last element is 1: $(x, y, z, 1)$. All points $(xw, yw, zw, w)$ map to the same point $(x, y, z)$. \\cite{dubrofsky2009homography,hartley03multiview}\n\n%Homography definition (mapping of points and lines in $P^2$)\n\nThe imaging process essentially captures a projection to a flat two-dimensional plane of the camera's view, as described in section \\ref{sec:imaging}.\nWhen relating points between different cameras that view the same scene, the cameras' relational positions and rotations must be known.\nOne of the cameras is often conveniently chosen as the origin of a global coordinate frame, so that its extrinsic parameters become unity transforms (programming libraries often assume this, see e.g. \\cite{opencv}).\nEach three-dimensional point in the world is transformed to the small sensor or film inside the camera, which is then digitized to a discrete two-dimensional grid of pixels. The size of this pixel array (i.e. 
image) is referred to as the camera's resolution.\n\n%Figure \\ref{fig:TODO} illustrates this transformation chain, which is encoded as the following equations, given a homogeneous point (4-dimensional vector) $X$ representing a 3D location described in physical (e.g. metric) coordinates:\n\nThe transformation chain is encoded as follows, given a homogeneous point (4-dimensional vector) $X$ representing a 3D location described in physical (e.g. metric) coordinates:\n\n\\begin{align}\n\tx &= P X\\\\\n\t  &= M_i X_s\\\\ % X_s on the sensor\n\t  &= M T R X\\\\\n\t  &= M_i M_p T R X\\\\ % R, T camera pose, M_4 to camera sensor, M_3 to pixel coords\n\\end{align}\n\n$x$ is a 2d pixel in a discrete image, $X_s$ exists on the sensor. $R$, $T$ encode the camera rotation and translation (extrinsics); $M_p$ projects the world coordinates to the camera sensor (film) - still in world coordinates (intrinsics!), and finally the affine $M_i$ transforms the points from the sensor to pixel coordinates on the digital discretized image.\n\nThe whole projection $P = M_i M_p T R$ can be used as-is without decomposing it to separate matrices, unless the individual parameters are needed. As the chain consists of several matrices, some of them are defined only up to scale; the coordinate systems' units can be chosen freely. Software packages usually do not decompose the chain, because it is not needed and unique parameters cannot be found because of scaling.\n\n%The external camera parameters are called the extrinsics: camera coordinate system position and rotation (heading) in the global space.\n%Camera position sits at the projection center blah.\n\nThe internal parameters, intrinsics, encode how the image is formed on the sensor: they consist of focal length, sensor size and principal point: (last column left out, as it's full of zeroes in 3x4)\n\n\\begin{equation}\n\tM =\n\t\\begin{pmatrix}\n\t\tm_x & \\gamma & u_0\\\\\n\t\t0   &    m_y & v_0\\\\\n\t\t0   &        0 & 1\n\t\\end{pmatrix}\n\\cdot\n\t\\begin{pmatrix}\n\t\tf & 0 & 0\\\\\n\t\t0 & f & 0\\\\\n\t\t0 & 0 & 1\n\t\\end{pmatrix}\n\t=\n\t\\begin{pmatrix}\n\t\t\\alpha_x & \\gamma   & u_0\\\\\n\t\t0        & \\alpha_y & v_0\\\\\n\t\t0        & 0        & 1\n\t\\end{pmatrix}\n\\end{equation}\n\nFor simplicity, it is often denoted $\\alpha_x = m_x f$, $\\alpha_y = m_y f$. $R = (u_0, v_0)$ is the image center (or principal point). For square pixels, $m_x = m_y$, and for a non-skewed sensor, $\\gamma = 0$, which is often the case. \\cite{hartley03multiview,szeliski10vision,heyden2005multiple}\n\n%Le image. Horizontal planar triangle, lines between camera origins etc. lecture11.pdf.\n\n\\subsection{Camera calibration}\n\n%In order to accurately measure a scene with a camera, the camera's properties must be known.\nReconstruction algorithms need to relate points between images; the camera properties are needed.\nCalibrating a camera means to measure its intrinsics and extrinsics in order to map its data to a known coordinate frame.\nCalibration has always to be done, but it does not necessary need to be a manual step before scanning; self-calibration attempts to target this convenience problem. \\cite{pollefeys1999hand,hartley03multiview}\n\n%Projective calibration only is too general, as it leaves out some assumptions that can be done about a physical world, such as relative angles and sizes; metric calibration something something. 
\\cite{zisserman1995metric}.\n\nAutomatic calibration tools rely on an amount of feature pairs of which the best matches are found, or a known pattern, such as a planar checkerboard pattern [chuan; zhang] whose features are also distinguished with a similar algorithm but a priori knowledge of the object structure is used for precise calibration.\nThese usually need several pictures taken with the same camera from different poses.\n\nThe checkerboard calibration step can also measure optical distortion at the same time. \\cite{opencv,camcalmatlab}\n\n%TODO Figure: show extrinsic in matlab cam calibs, nice pics (both cam and world centered)\n\n%Single three-dimensional calibration object is also sufficient blbl\n\nOne possible way is direct linear transform (DLT)\n\\cite{hartley03multiview}: the whole matrix $P$ is solved from $x_i = PX_i$ by constructing a system of equations from the projections of some known points $i$, and minimizing an error metric, as the case is usually overconditioned.\n\n%Methods that dig the matrix out of a single image have certain restrictions, and won't work if e.g. seven points lie on the same plane [longuet-higgins etc.]\n\n%XXX see below. Intrinsic, extrinsic. Distortions. Projection matrices. Camera resectioning.\n\n%many single planar chessboard pics vs. a single image of an accurate 3d model.\n\n%The scale of values in the equations above affects the precision [hartley, in defense of .., h,ziss]. A similarity transform can be used to modify the values to a more consistent range; this is called normalization of the data.\n\n\n\\subsection{Binocular disparity}\n%Essential, fundamental matrices. Correspondence problem. Rectification, undistortion. Epipolar geometry.\n\n\\simplefig{h!}{\n\\begin{tikzpicture}[scale=0.3]\n\t% P, Z\n\t\\draw[fill] (0, 20) circle [radius=0.3];\n\t\\node at (0, 21) {$P$};\n\n\t\\draw [<->] (0, 0.5) -- (0, 19.5);\n\t\\node at (0.5, 10) {$Z$};\n\n\t% origins, T\n\t% TODO: circles, node ends not exactly at those points\n\t\\draw [<->] (-9.5,0) -- (9.5, 0);\n\t\\draw[fill] (-10, 0) circle [radius=0.3];\n\t\\draw[fill] ( 10, 0) circle [radius=0.3];\n\t\\node at (-10, -1) {$O_l$};\n\t\\node at (10, -1) {$O_r$};\n\t\\node at (0, -1) {$T$};\n\n\t% headings\n\t\\draw [->] (-10, 0) -- (-10, 10);\n\t\\draw [->] (10, 0) -- (10, 10);\n\n\t% image planes, at y=4\n\t\\draw[thick] (-13, 4) -- (-7, 4);\n\t\\draw[thick] (13, 4) -- (7, 4);\n\n\t\\draw [<->] (-6, 0.5) -- (-6, 3.5);\n\t\\node at (-5.5, 2) {$f$};\n\n\n\t% intersection points at principals and xs\n\t\\draw[fill] (-10, 4) circle [radius=0.3];\n\t\\draw[fill] (10, 4) circle [radius=0.3];\n\n\t\\node at (-10.5, 3) {$c_l$};\n\t\\node at (10.5, 3) {$c_r$};\n\n\t\\node at (-9, 5) {$x_l$};\n\t\\node at (9, 5) {$x_r$};\n\n\n\t% O-to-P\n\t\\draw (-10, 0) -- (0, 20);\n\t\\draw (10, 0) -- (0, 20);\n\n\n\t% p\n\t\\draw[fill] (8, 4) circle [radius=0.3];\n\t\\node at (8, 3) {$p_r$};\n\t\\draw[fill] (-8, 4) circle [radius=0.3];\n\t\\node at (-8, 3) {$p_l$};\n\\end{tikzpicture}\n}{fig:simplestereo}\n{A very simple stereo setup, picture from above. The image planes (thick lines) are actually imaginary, as a real film in a camera would exist behind the principal point and project the image upside down, as described earlier in \\ref{sec:imaging}. The coordinates exist in the world coordinate units. The symbols $O$ are the camera origins ($T$ units between each other); $c$ the principal points; $x$ the image plane coordinates of $p$ w.r.t. the principal points; and $f$ is the focal length. 
The unknown is $Z$, depth of point $P$.}\n\n%Next, the setup of binocular stereo vision is described. Common stereo vision rigs use the simplest possible case: two identical cameras with a fixed distance, both oriented to the same direction, parallel to the line connecting them, as in figure \\ref{fig:simplestereo}.\n\nAssuming known calibration with identical cameras (same focal length and sensor) in a setup described above, visualized in figure \\ref{fig:simplestereo}, points can be triangulated as follows:\n\nFrom similar triangles with a common vertex at $P$, we get (note that $x_r < 0$ as it's to the left, towards to the negative axis, from the corresponding plane's origin)\n\n\\begin{align}\n\t\\frac{Z}{T} &= \\frac{Z-f}{T - x_l + x_r} \\\\\n\t&= \\frac{Z-f}{T - d}\\\\\n\tZT - Zd &= ZT - fT\\\\\n\tZ &= \\frac{fT}{d} \\label{eq:z}\n\\end{align}\n\nThe disparity $d$ is the difference of the points in their image planes, $d = x_r - x_l$.\nIf the image planes would be fixed as being physically correct, in the back side of the camera origins, the focal length should be negated to keep the correct interpretation and sign because the projected physical image is mirrored in both axes. Image processing between the sensor and a picture file usually inverts this.\n\nAs the equation \\ref{eq:z} shows, depth is directly inversely proportional to disparity in this simple case.\nTo map the depth to correct units, only focal length $f$ and the baseline $T$ are needed additionally; when using pixel coordinates instead of physical in $d$, also the pixel size should be taken into account.\nAll of these are encoded in the camera parameters.\nAlgorithms such as those in OpenCV \\cite{opencv} can compute point clouds from disparity images.\n\n\\subsection{Epipolar geometry}\n\n\\simplefig{h!}{\n\\begin{tikzpicture}[scale=0.5]\n\t% cameras\n\t\\draw[fill] (-11,-1) circle [radius=0.2];\n\t\\draw[fill] ( 11,-1) circle [radius=0.2];\n\t\\draw (-11,-1) -- (11, -1);\n\t\\node at (0, -2) { baseline };\n\n\t\\node at (-11,-2) {$C_l$};\n\t\\node at ( 11,-2) {$C_r$};\n\n\t% planes\n\t\\draw (-10,1) -- (-10,7) -- (-5,3) -- (-5,-3) -- cycle;\n\t\\draw ( 10,1) -- ( 10,7) -- ( 5,3) -- ( 5,-3) -- cycle;\n\n\t% 3d pts\n\t\\draw[fill] ( 0,3) circle [radius=0.2];\n\t\\draw[fill] ( 0,9) circle [radius=0.2];\n\t\\node at (0,2) {$A$};\n\t\\node at (0,8) {$B$};\n\n\t% origins via pts\n\t\\draw (-11,-1) -- (0,3) -- (11,-1);\n\t\\draw (-11,-1) -- (0,9) -- (11,-1);\n\n\t% epis\n\t\\draw[fill] (-6,-1) circle [radius=0.1];\n\t\\draw[fill] (6,-1) circle [radius=0.1];\n\t\\node at (-6,-1.5) { $e_l$ };\n\t\\node at (6,-1.5) { $e_r$ };\n\n\t% projections\n\t\\draw[fill] (-7.5,0.2727) circle [radius=0.1];\n\t\\draw[fill] (-7.5,2.1818) circle [radius=0.1];\n\t\\node at (-8.5, 0.2727) {$A_l$};\n\t\\node at (-8.5, 2.1818) {$B_l$};\n\t% lines from epis\n\t\\draw (-6,-1) -- +(-2*1.5,2*1.2727);%(-7.5,0.2727);\n\t\\draw (-6,-1) -- +(-2*1.5,2*3.1818);%(-7.5,2.1818);\n\n\t\\draw[fill] (7.5,0.2727) circle [radius=0.1];\n\t\\draw[fill] (7.5,2.1818) circle [radius=0.1];\n\t\\node at (8.5, 0.2727) {$A_r$};\n\t\\node at (8.5, 2.1818) {$B_r$};\n\t\\draw (6,-1) -- +(2*1.5,2*1.2727);%(7.5,0.2727);\n\t\\draw (6,-1) -- +(2*1.5,2*3.1818);%(7.5,2.1818);\n\\end{tikzpicture}\n}{fig:epigeom}\n{Two camera views on same scene.\nWorld points $A$, $B$ project to planes of different views imaged from $C_l$ and $C_r$ on the left ($A_l$ and $B_l$), and to the right ($A_r$, $B_r$).\nWhen $A_l$ is known, its corresponding point $A_r$ (not initially known in practice) 
is found on the epipolar line joining $e_r$ and $A_r$ in the right image.\nAll epipolar lines in a view join in the same point ($e_l$ and $e_r$).}\n\nTriangulation or reconstruction of the scene structure given by image pair(s) is usually done on the basis of a known relationship between the cameras.\nSuch a relationship, determined by calibrating the cameras, can be found automatically, given corresponding points that can be distinguished in each image and matched.\n\\cite{trucco1998introductory,hartley03multiview}\n\nIn stereo vision, the same scene of interest is seen by two or more cameras at the same time.\nThe cameras are rarely aligned as perfectly as in the disparity setup described above, however.\nEpipolar geometry encodes the relations between arbitrarily positioned cameras in a standard way so that coordinates of a 3D point seen in several images can be calculated with the same triangulation.\n\nA point seen by camera $C_l$ at 3D point $A$ could be anywhere on the line through $C_l$'s origin and $A$, because a line passing through the principal point always projects to a single point.\nThis line is seen as a single point $A_l$.\nFrom another viewpoint in camera $C_r$, this line corresponds to some line on $C_r$'s image plane.\nThe real point must be on that line.\nThe converse applies for any point on $C_r$ and a line on $C_l$.\nThe lines on the image planes are called epipolar lines.\n\nThe essential matrix defines how the camera poses differ, through the corresponding points seen by both. When $A_l$, $A_r$ encode the points in figure \\ref{fig:epigeom} by the corresponding camera coordinates, and the baseline difference (vector from $C_l$ to $C_r$) is marked as $t$, it holds that $(A_l-C_l) \\cdot t \\times (A_r-C_r) = 0$, as all the vectors are coplanar; the cross product yields a normal to the plane, which is perpendicular to all the vectors, thus the dot product equals 0. \\cite{hartley03multiview}\n\nThe essential matrix is a matrix form of this relation; it includes the relative rotation and translation of the two cameras.\n\n%\\begin{align*} \\label{eq:essential}\n%\t%A_r &= R (A_l - t) \\\\\n%\t%A_r^T R T A_l &= 0 \\\\\n%\t%A_r^T E A_l &= 0\n%\tA_l \\cdot t \\times A_r = 0\\\\\n%\tA_l \\cdot t \\times R A_r = 0\\\\\n%\tA_l^T T\n%\\end{align*}\n\n%where $T$ is the cross-product form of $t$ encoded in a matrix form as below. The essential matrix is obtained as $E = R T$.\n%\n%Le image. lecture11.pdf. O->p dot (O->O' cross O'->p') = 0\n%\n%Cross product expressed in a skew-symmetric matrix form is\n%\\begin{equation}\n%\\vec a \\times \\vec b =\n%\\begin{pmatrix}\n%\t 0   & -a_z &  a_y\\\\\n%\t a_z &  0   & -a_x\\\\\n%\t-a_y &  a_x & 0\n%\\end{pmatrix}\n%\\begin{pmatrix}\n%\tb_x\\\\b_y\\\\b_z\n%\\end{pmatrix}\n%= \\vec c\n%\\end{equation}\n\nThe fundamental matrix relates the corresponding points in stereo images; it has the same meaning as the essential matrix, but it works in the pixel coordinates of the cameras, which are obtained after the projective transform that takes the intrinsics into account. 
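\nIn matrix form, writing $T$ for the skew-symmetric matrix that encodes the cross product with $t$, $R$ for the relative rotation and $M_p$ for the intrinsic camera-to-pixel transform (one common sign convention), the two constraints read\n\\begin{align*}\n\tA_r^T E A_l &= 0, & E &= R T\\\\\n\t\\hat{A}_r^T F \\hat{A}_l &= 0, & F &= M_p^{-T} E M_p^{-1}\n\\end{align*}\nwhere $\\hat{A} = M_p A$ are the pixel coordinates; $F$ thus plays the same role for pixels that $E$ plays for camera coordinates.\n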
By inverting the matrix $M_i$ (\\ref{sec:coord}) of the sensor-to-pixel coordinate transform and applying it to pixel coordinates, the world coordinates seen by the camera can be obtained.\n\n%\\[\n%\\hat pAl = M_p A_l\\\\\n%\\hat A_r = M_p A_r\n%\\]\n%\n%and using it on pixel coordinates, the world coords can be obtained, plugging in to the equation \\ref{eq:essential}\n%\n%\\[\n%A_r^T E A_l = 0\\\\\n%(M_p^-1 \\hat A_r)^T E (M_p^-1 \\hat A_l) = 0\\\\\n%\\hat A_r^T M_p^-T E M_p^-1 A_l = 0\\\\\n%\\hat A_r^T F \\hat A_l = 0\n%\\]\n%\n%the fundamental matrix\n%\n%\\[\n%F = M_p^-T E M_p^-1 = M_p^-T R T M_p^-1\n%\\]\n\nThe fundamental matrix relates the pixels and epipolar lines, and as such it is useful in image processing where the images are described as pixels in a color array (image) and not as colored physical coordinates.\n\n%Epipole can be interpreted as the location of another camera as seen by other camera.\n\n\\subsection{Point matching}\n\nPreviously, the basics of reconstructing the three-dimensional location of a point were introduced, assuming known positions for the same point in different images.\nTo reconstruct a whole scene from a full image, all pairwise points must be matched, i.e. it must be found which pixel in one view represents the same object as a given pixel in the other view.\n\nMatching is often also called correspondence searching:\nGiven a pixel in one image, what is the corresponding pixel in another image taken from the same scene?\nPixels correspond to each other if they represent the same physical point.\n\nTo describe the pixel's characteristics, its surrounding environment is encoded as a \\textit{feature}, an easily recognizable, unique property vector.\nWhen discussing features, not every pixel's neighbourhood is used; \\textit{good} features are those that have strongly distinguishable properties, such as edges and corners.\n\nEdges or corners are essentially high-frequency information in the image that can be interpreted as a 2D discrete function; thus, they can be detected by a discrete high-pass or band-pass filter, zeroing all but those pixels where a high difference is found. \\cite{marr1980theory}\n\nThe neighbourhood of a pixel where features are detected is often called a window or a patch.\n\nMatching can be done in sparse or dense mode; sparse matching finds a set of features from each image, and tries to match them. Dense matching runs through each pixel of one image and tries to find the corresponding pixel in the other one with e.g. template matching \\cite{duda1973pattern}; from the coordinate differences in image space between the two images, a disparity map is built. The disparities are then directly transformed to depth values, yielding a point cloud.\n\nScale-invariant feature transform (SIFT) \\cite{lowe1999object} is a commonly used algorithm for local feature detection. A GPU implementation is also available \\cite{changchang2007siftgpu}.  Invariance to scaling, translation and rotation makes SIFT useful in describing features that can be matched between unaligned images. 
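\nIn practice, such a sparse feature matching step is only a few lines with a library like OpenCV \\cite{opencv}. The following is a minimal Python sketch (the file names are placeholders, and the ratio-test threshold of 0.75 is a typical choice rather than a prescribed value):\n\\begin{verbatim}\nimport cv2\n\n# Load the two views as grayscale images.\nimg1 = cv2.imread(\"view1.png\", cv2.IMREAD_GRAYSCALE)\nimg2 = cv2.imread(\"view2.png\", cv2.IMREAD_GRAYSCALE)\n\n# Detect keypoints and compute SIFT descriptors.\nsift = cv2.SIFT_create()\nkp1, des1 = sift.detectAndCompute(img1, None)\nkp2, des2 = sift.detectAndCompute(img2, None)\n\n# Brute-force matching with Lowe's ratio test to\n# discard ambiguous correspondences.\nmatches = cv2.BFMatcher().knnMatch(des1, des2, k=2)\ngood = [m for m, n in matches\n        if m.distance < 0.75 * n.distance]\n\\end{verbatim}\n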
Other commonly used detectors are Speeded-Up Robust Features (SURF) \\cite{bay2006surf} and the Harris corner detector \\cite{harris1988combined}.\n\n\n\\subsection{Correspondence and rectification} \\label{sec:rectification}\n\nIn order to triangulate a real point from two or more photographs, the location of the point in all images must be known.\nRectification is a process that simplifies this search problem by restricting the search to a single dimension.\nBy aligning the cameras such that their images are coplanar, the search only has to be performed on a line that is parallel to the line connecting the camera centers.\nAfter rectification, the corresponding lines are axis-aligned (horizontal or vertical) in both images. \\cite{hartley03multiview}\n% LOL TODO guido gerig image rectification (stereo) slides\n\n\\subsection{N-view stereo}\n\nThe case of more cameras than a single pair uses the same principles of epipolar geometry.\nIt brings more restrictions and dimensions; with three views, for example, the role of the fundamental matrix is taken by the trifocal tensor. \\cite{hartley03multiview}\nIt is also more computationally heavy, as more data must be processed; if no information about camera parameters is available, pairwise checks between the images may become expensive. \\cite{wu2013towards}\n\nMultiple baseline stereo is a simple special case for many cameras. When all the cameras lie on the same baseline, calibration is easier and points can be selected by using a minimized sum of errors. \\cite{okutomi1993multiple}\n\nThe cameras that are used in capturing a scene can be fixed or positioned arbitrarily; in fact, the structure from motion technique \\cite{snavely2006photo,fitzgibbon1998automatic} makes it possible to use just one camera that is moved around.\nAccurate and fast reconstructions are still traditionally done with stereo camera pairs, though.\n\nAnother common way is to use pairwise cameras for individual reconstructions to build a point cloud for every pair, and then register them together. \\cite{bradley2010high}\n\n%\\subsection{Structure from motion}\n%\n%Structure from motion (SfM) refers usually to recovering the structure of a scene from the motion of a single camera.\n%For each view, the pose of the camera is determined and the scene structure is extended with the new information in the image.\n%(pollefeys)\n%Bundle adjustment is used to refine the camera parameters.\n\n%\\subsection{Post-processing}\n%Uv mapping. Manual work. 3d noise removal; ignore points that have no close pair in other clouds.\n%Rendering: \"as usual\".\n%Post: remodel the mesh (face), see what it would look like. Refine parameters to get a similar output as in the photos (normal map etc.), backproject. Use colors and highpass them; assume uniform lightning and locally uniform texture color (bradley). (Simply a rendering technique, that level of detail in 3D structure might not be needed). Still, structured light and/or shading assumptions [shape from single image cues/shading trucco,verri p.225] done too.\n\n\n\\subsection{Reprojection errors}\n\nThe quality of the reconstruction is measured by reprojecting the 3D points back to the cameras with the estimated parameters, and calculating the distance between the projected and original point. \\cite{hartley03multiview}\nBundle adjustment \\cite{wu2011multicore} seeks to optimize all camera parameters at once with good performance.\n\nA common way to handle feature errors is Random Sample Consensus (RANSAC). 
Random subsets of the sample space are drawn repeatedly; a model is fitted to each small subset, and samples that do not fit that model well are treated as outliers and ignored. The iteration whose model agrees with the most samples is selected. \\cite{hartley03multiview}\n\n", "meta": {"hexsha": "3f28a119d4f7fcc49372a4236ba9e5b4a140276b", "size": 22966, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "semma/tex/stereo-vision.tex", "max_stars_repo_name": "sooda/thesis", "max_stars_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2015-06-06T23:57:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T01:13:52.000Z", "max_issues_repo_path": "semma/tex/stereo-vision.tex", "max_issues_repo_name": "sooda/thesis", "max_issues_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "semma/tex/stereo-vision.tex", "max_forks_repo_name": "sooda/thesis", "max_forks_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-27T03:55:10.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-27T03:55:10.000Z", "avg_line_length": 60.4368421053, "max_line_length": 535, "alphanum_fraction": 0.7466689889, "num_tokens": 6173, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5814595639937524}}
{"text": "\\section*{Appendix B: Cross Entropy Method for Parameter Calibration}\n\nThis Appendix describes the pseudocode for the Cross Entropy Method for Normal distribution \\citep{rubinstein1999cross}. \n\n\\begin{algorithm} [H]\t\n\t%\\scriptsize\n\t\\SetAlgoLined\n\tSet $p=(\\mu_1,\\sigma_1,\\mu_2,\\sigma_2,...,\\mu_K,\\sigma_K)$ \\quad \\%Initial distribution parameters \n\t\n\tSet $M$ \\quad     \\%Number of stops\n\t\n\tSet $T$  \\quad  \\% Maximum iteration number\n\t\n\tSet $I$  \\quad  \\% Maximum iteration number \n\t\n\tSet $\\rho$  \\quad \\% Set selection ratio\n\t\n\t\\For {$t$ from 1 to $T$} { \\  \\%Main CEM loop\n\t\n\t    \\For {$i$ from 1 to $I$} {\n\t\n\tDraw $y^{(i)}$ from $\\mathcal{N}(\\mu,\\sigma)$  \\quad  \\%Draw $I$ samples\n\t\n\tCompute $f^i:=f(y^{(i)}$}\n\t\n\tSort $f^i$-values \\quad  \\%Order by decreasing magnitude\n\t\n\t$\\gamma \\leftarrow f_{\\rho.I}$ \\quad  \\%Set threshold \n\t\n\t$L_\\gamma \\leftarrow \\{ y^{(i)} | f(y^{(i)}) \\leq \\gamma$  \\quad  \\%Collect elite samples \n\t\n\t$\\mu'_j = \\frac{1}{L_\\gamma} \\sum_{i=1}^{L_\\gamma} \\mu_{i,j}$  \\quad  \\%Update $\\mu$\n\n\t$\\sigma'_j = \\frac{1}{L_\\gamma} \\sum_{i=1}^{L_\\gamma} \\sigma_{i,j}$   \\quad  \\%Update $\\sigma$\n\t\n\t$\\mu_j \\leftarrow \\alpha \\mu'_j + (1 - \\alpha)  \\mu_j$ \\quad \\%Update with step size $\\alpha$\n\t\n\t$\\sigma_j \\leftarrow \\alpha \\sigma'_j + (1 - \\alpha)  \\sigma_j$ \\quad \\%Update with step size $\\alpha$\n\t}\n\t\\caption{Cross-Entropy Method for Normal distribution}\n\t\\label{algo:CEM}\n\\end{algorithm}\n", "meta": {"hexsha": "5923f3014278fe4e699c85462bdffb5d437ca7d9", "size": 1406, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Writing/2019-BusSim-ParticleFilter-MK/AppendixB.tex", "max_stars_repo_name": "RobertClay/DUST-RC", "max_stars_repo_head_hexsha": "09f7ec9d8d093021d068dff8a7a48c15ea318b86", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2018-11-21T14:57:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T15:42:09.000Z", "max_issues_repo_path": "Writing/2019-BusSim-ParticleFilter-MK/AppendixB.tex", "max_issues_repo_name": "RobertClay/DUST-RC", "max_issues_repo_head_hexsha": "09f7ec9d8d093021d068dff8a7a48c15ea318b86", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 125, "max_issues_repo_issues_event_min_datetime": "2019-11-06T13:03:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-07T13:38:33.000Z", "max_forks_repo_path": "Writing/2019-BusSim-ParticleFilter-MK/AppendixB.tex", "max_forks_repo_name": "RobertClay/DUST-RC", "max_forks_repo_head_hexsha": "09f7ec9d8d093021d068dff8a7a48c15ea318b86", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-11-20T15:56:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-08T10:21:06.000Z", "avg_line_length": 32.6976744186, "max_line_length": 121, "alphanum_fraction": 0.63371266, "num_tokens": 492, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.721743206297598, "lm_q1q2_score": 0.5814595591718836}}
{"text": "%---------------------------Volume-----------------------------\n\\section{Wedge Volume}\n\nThe wedge volume metric is simply\n\\[\nq = V.\n\\]\n\n\\othermetrictable{volume}%\n{$L^3$}%                                      Dimension\n{$[0,DBL\\_MAX]$}%                             Acceptable range\n{$[-DBL\\_MAX,DBL\\_MAX]$}%                     Normal range\n{$[-DBL\\_MAX,DBL\\_MAX]$}%                     Full range\n{$\\frac{\\sqrt{3}}{4}$}%                       Unit element\n{--}%                                         Citation\n{v\\_wedge\\_volume}%                           Verdict function name\n", "meta": {"hexsha": "d50df778402776437d5248855d058f652f354cc1", "size": 580, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/WedVolume.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/WedVolume.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/WedVolume.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 34.1176470588, "max_line_length": 67, "alphanum_fraction": 0.3793103448, "num_tokens": 129, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321889812552, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5814595543500145}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\n\\begin{document}\n\\chapter{Limits and Continuity}\n\\section{Easy Limits}\nWith easy limits, you can get the answer simply by plugging in the \nlimiting value:\n\\[ \\lim_{x \\to 3} \\frac{x^2 + x}{x + 1} = \\frac{(3)^2 + (3)}{(3) + 1} = 3 \\]\n\\section{Continuity}\n\\begin{defn}\n    We say $f(x)$ is continuous at $x_0$ when:\n    \\[ \\lim_{x \\to x_0} f(x) = f \\left( x_0 \\right) \\]\n    \\label{def:continuity}\n\\end{defn}\n\\begin{exmp}\n    Consider the following \\emph{piecewise} function:\n    \\[\n        f(x) = \n        \\begin{cases} \n            x + 1   & :\\ x > 0\\\\\n            -x      & :\\ x \\leq 0\\\\\n        \\end{cases}\n    \\]\n    It is said to be \\emph{discontinuous} \n    (see Figure \\ref{fig:discontinuous-function}) at $x = 0$ \n    since we know $f(0) = 0$, but for $x > 0$:\n    \\[ \\lim_{x \\to 0} f(x) = 1 \\]\n    We can say $f$ is continuous from the left at $x = 0$, \n    but not the right.\n\\end{exmp}\n\\begin{defn}[Right-Hand Limit]\n    \\[ \\lim_{x \\to x_0^+} f(x) = \\lim_{x \\to x_0} f(x)\\ :\\ x > 0 \\]\n\\end{defn}\n\\begin{defn}[Left-Hand Limit]\n    \\[ \\lim_{x \\to x_0^-} f(x) = \\lim_{x \\to x_0} f(x)\\ :\\ x < 0 \\]\n\\end{defn}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{discontinuous-function.pdf}\n    \\caption{Graph of a discontinuous function}\n    \\label{fig:discontinuous-function}\n\\end{figure}\n\\subsection{Discontinuities}\n\\begin{defn}\n    A discontinuity is removable if:\n    \\[  \n        \\lim_{x \\to x_0^+} f(x) =\n        \\lim_{x \\to x_0^-} f(x) \\neq\n        f \\left( x_0 \\right)\n    \\]\n\\end{defn}\n\\begin{case*}\n    The function $\\frac{\\sin(x)}{x}$ is undefined for $x = 0$. However, \n    its limit as $x \\to 0$ still exists:\n    \\begin{figure}[h]\n        \\centering\n        \\includegraphics[width=0.5\\textwidth]{removable-discontinuity.pdf}\n        \\caption\n        {\n            A removable discontinuity --- \n            the function is continuous everywhere except one point\n        }\n        \\label{fig:removable-discontinuity}\n    \\end{figure}\n\\end{case*}\n\\begin{defn}\n    A jump discontinuity is when \n    $\\lim_{x \\to x_0^+} f(x) \\neq \\lim_{x \\to x_0^-} f(x)$\n    even if they both exist.\n\\end{defn}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{jump-discontinuity.pdf}\n    \\caption{An example of a jump discontinuity}\n    \\label{fig:jump-discontinuity}\n\\end{figure}\n\\begin{defn}\n    There is an infinite discontinuity when the right- and left-hand \n    limits are both infinite, but in opposite directions.\n\\end{defn}\n\\begin{case*}\n    $f(x) = \\frac{1}{x}$ has an infinite discontinuity\n    (\\emph{singularity}) at $x = 0$:\n    \\[\n        \\lim_{x \\to x_0^+} \\frac{1}{x} = -\\infty \\quad , \\quad\n        \\lim_{x \\to x_0^+} \\frac{1}{x} = \\infty\n    \\]\n\\end{case*}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{infinite-discontinuity.pdf}\n    \\caption{$f(x) = \\frac{1}{x}$ is an example of an infinite discontinuity}\n    \\label{fig:infinite-discontinuity}\n\\end{figure}\n\\subsubsection{Ugly Discontinuities}\nThe function shown in Figure \\ref{fig:ugly-discontinuity} doesn't even go \nto $\\pm \\infty$ --- it doesn't make sense to say it \"goes to\" anything. 
\nFor something like this, we say the limit does not exist (\\textsc{dne}).\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{ugly-discontinuity.pdf}\n    \\caption\n    {\n        An example of an ugly discontinuity: \n        a function that oscillates a lot as it approaches the origin\n    }\n    \\label{fig:ugly-discontinuity}\n\\end{figure}\n\\section*{}\n\\begin{thm}\n    If $f$ is differentiable at $x_0$, then $f$ is continuous \n    at $x_0$\n\\end{thm}\n\\begin{proof}\n    From Definition \\ref{def:continuity}, we need to show:\n    \\begin{align*}\n        &\\lim_{x \\to x_0} f(x) = f \\left( x_0 \\right)\\\\\n        \\Rightarrow &\\lim_{x \\to x_0} f(x) - f \\left( x_0 \\right) = 0\n    \\end{align*}\n    The \\textsc{lhs} can be rewritten as:\n    \\begin{align*}\n        \\lim_{x \\to x_0} f(x) - f \\left( x_0 \\right)\n        &=  \\lim_{x \\to x_0} f(x) - f \\left( x_0 \\right)\n            \\cdot \\frac{x - x_0}{x - x_0} \\quad \\because \\quad\n            \\frac{x - x_0}{x - x_0} = 1\\\\\n        &=  \\lim_{x \\to x_0} \\frac{f(x) - f \\left( x_0 \\right)}{x - x_0}\n            \\cdot \\left( x - x_0 \\right)\\\\\n        &=  \\lim_{x \\to x_0} \\frac{f(x) - f \\left( x_0 \\right)}{x - x_0}\n            \\cdot \\lim_{x \\to x_0} \\left( x - x_0 \\right)\n    \\end{align*}\n    If $f$ is differentiable, we can use Equation \\ref{eqn:derivative} \n    and rearrange to get:\n    \\begin{align*}\n        \\lim_{x \\to x_0} f(x) - f \\left( x_0 \\right)\n        &=  \\lim_{x \\to x_0} \\left( x - x_0 \\right)\n            \\cdot f' \\left( x_0 \\right)\\\\\n        &=  \\left( x_0 - x_0 \\right) \\cdot f' \\left( x_0 \\right)\\\\\n        &=  0 \\cdot f' \\left( x_0 \\right)\\\\\n        &=  0\n    \\end{align*}\n\\end{proof}\n\\begin{note}\n    You can never divide by zero! The first step was to multiply by \n    $\\frac{x - x_0}{x - x_0}$. It looks as if this is illegal because \n    we are multiplying by $\\frac{0}{0}$ when $x = x_0$. But, when computing \n    the limit as $x \\to x_0$, we always assume $x \\neq x_0$. In other words, \n    $x - x_0 \\neq 0$. Therefore, the proof is valid.\n\\end{note}\n\\end{document}", "meta": {"hexsha": "8ed15deb9584dc7f413386fb7afb9037f635f48c", "size": 5230, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/chapter2.tex", "max_stars_repo_name": "DanialHaseeb/single-variable-calculus", "max_stars_repo_head_hexsha": "4bf05b3e46010967217f2e71bb22a9de8e7fc82d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/chapter2.tex", "max_issues_repo_name": "DanialHaseeb/single-variable-calculus", "max_issues_repo_head_hexsha": "4bf05b3e46010967217f2e71bb22a9de8e7fc82d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-01-22T21:42:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-15T13:01:11.000Z", "max_forks_repo_path": "chapters/chapter2.tex", "max_forks_repo_name": "DanialHaseeb/single-variable-calculus", "max_forks_repo_head_hexsha": "4bf05b3e46010967217f2e71bb22a9de8e7fc82d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.1006711409, "max_line_length": 77, "alphanum_fraction": 0.5894837476, "num_tokens": 1904, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.8198933359135361, "lm_q1q2_score": 0.5813200729078911}}
{"text": "\\documentclass{article}\n\\usepackage{mathtools}\n\\usepackage{pgfplots}\n\\usepackage{natbib}\n\\usepackage{graphicx}\n\n\\usepackage[utf8]{inputenc}\n\n\\title{CSE546 HW0 A}\n\\author{Bobby Deng | 1663039 | dengy7 }\n\\date{March 2020}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\section{A.1}\n\n\\[ P(A|B)=\\frac{P(A)*P(B|A)}{P(B)} \\]\n\\[ P(B)=P(A)*P(B|A)+P(A^-)*P(B|A^-) \\]\n\\[ P(A^-)=1-P(A)\\]\n\n$A$: You have the disease. $A^-$: You don't have the disease.\n\n$B$: Test is positive. $B^-$: Test is negative.\n\n$P(A)=0.0.0001$, $P(A^-)=0.9999$\n\n\\[ P(B)= P(A)*P(B|A)+P(A^-)*P(B|A^-) \\]\n\\[ P(B)= 0.0001*0.99+0.9999*0.01 \\]\n\\[ P(B)= 0.010098 \\]\n\n\\[ P(A|B)=\\frac{P(A)*P(B|A)}{P(B)} \\]\n\\[ P(A|B)=\\frac{0.0001*0.99}{0.010098} \\]\n\\[ P(A|B)=0.0098 \\]\n\n\n\\section{A.2}\n\\subsection{a.}\n\\[ Cov(X,Y)=E[(X-E[X])(Y-E(Y))] \\]\n\\[ Cov(X,Y)=E[XY-XE[Y]-YE[X]+E[X]E[Y]] \\]\n\\[ Cov(X,Y)=E[XY]-E[X]E[X+Y]+E[X]E[Y] \\]\n\\[ Cov(X,Y)=E[XY]-E[X](E[X]+E[Y])+E[X]E[Y] \\]\n\\newline\n\n1. \\textbf{When $E[Y|X=x]=x$, then $E[X]=E[Y]$, so we can change all $E[Y]$ to $E[X]$.}\n\\[ Cov(X,Y)=E[XY]-E[X](E[X]+E[X])+E[X]E[Y] \\]\n\\[ Cov(X,Y)=E[XX]-E[X]E[X+X]+E[X]^2 \\]\n\\[ Cov(X,Y)=E[XX-XE[X]-XE[X]+E[X]^2] \\]\n\\[ Cov(X,Y)=E[(X-E[X])(X-E(X))] \\]\n\\[ Cov(X,Y)=E[(X-E[X])^2] \\]\n\n\\subsection{b.}\nWhen X,Y are independent:\n\\[ Cov(X,Y)=E[XY-XE[Y]-YE[X]+E[X]E[Y]] \\]\n\\[ Cov(X,Y)=E[XY]-E[X]E[Y]-E[Y]E[X]+E[X]E[Y] \\]\\newline\n\n1. We know that $E[XY]=E[X]E[Y]$, Then \n\\[ Cov(X,Y)=0 \\]\n\n\\section{A.3}\n\\subsection{a.}\nSince $Z=X+Y$, the probability function for Z should be joint probability of X and Y.\n\n\\[ h(z) = P(Z);Z=X+Y \\]\n\\[ Y=0,X=Z;Y=1,X=Z-1;Y=2,X=Z-2;....;Y=Z-1,X=1;Y=0,X=Z \\]\nSince X,Y are independent, then\n\\[h(z)=\\sum_0^Z P(X=i,Y_k=Z-i)\\]\n\\[h(z)=\\sum_0^Z P(X=i)*P(Y=Z-i)\\]\n\\[h(z)=\\sum_0^Z f(x)*g(y)\\]\nAbove is showing what happened for discrete variable. It is the same story for continuous variable, however there are infinite many of that variable X,Y combination. So, we need to use integral to represents the infinite many variables and the area under it which representing the probability of a single Z.  \n\n\\[ h(z)=\\int_{-\\infty}^{\\infty} f(x)*g(y) \\]\n\nWe know that $Z=X+Y$, so we can replace $Y=Z-X$, then it becomes:\n\\[ h(z)=\\int_{-\\infty}^{\\infty} f(x)*g(z-x) \\]\n\n\\subsection{b.}\nFirst of all, Z can not be less than 0 or bigger than 2 since X,Y are on [0,1].\n\nThen we look at Z in [1,2] and Z in [0,1] separately.\n\n$f(x)*g(z-x)$ Could either equal to 0 or 1.\n\nWhen X and Z-x are both on the designated interval, $f(x)*g(z-x)=1$, 0 otherwise.\n\nNow we only look at situations where it is 1. 
\n\nWhen $1\\ge x \\ge0$ and $1\\ge z-x \\ge 0$\n\nThen we get two intervals: $z\\ge x \\ge 0$ (when $0<z<1$) and $1\\ge x \\ge z-1$ (when $1\\le z<2$)\\newline\n\nWe get:\n\\[ h(z)=\\int_{0}^{z} 1\\,dx = z\\]\n\\[ h(z)=\\int_{z-1}^{1} 1\\,dx = 2-z\\]\n\n\n\\[ \nh(z) = \\begin{cases}\nz & \\text{for $0 < z < 1$} \\\\\n2-z & \\text{for $1 \\le z < 2$} \\\\\n0 & \\text{otherwise.}\n\\end{cases} \n\\]\n\n\\section{A.4}\nWe want $E[Y]=0$ and $var(Y)=\\sigma^2=1$\n\\[ E[Y]=aE[X]+b=0\\]\n\\[ 0=a\\mu+b\\]\n\\[ b=-a\\mu\\]\n\\[ var(Y) = var(aX+b) =1\\]\n\\[ var(Y) = a^2var(X) + var(b)=1\\]\n\\[ var(Y) = a^2\\sigma^2 + 0=1\\]\n\\[ var(Y) = a^2\\sigma^2=1\\]\n\\[ a^2 = 1/\\sigma^2\\]\n\\[a=\\pm 1/\\sigma\\]\nSo, \n\\[b=\\mp \\mu/\\sigma\\]\n\n\\section{A.5}\n\\[ E[\\hat{\\mu}_n] = E[\\frac{1}{n}\\sum_{i=1}^nX_i] = \\frac{1}{n}\\sum_{i=1}^nE[X_i] = \\frac{1}{n}\\sum_{i=1}^n\\mu = \\mu \\]\n\n\\[ var(\\hat{\\mu}_n) = var(\\frac{1}{n}\\sum_{i=1}^n X_i)=\\frac{1}{n^2}var(\\sum_{i=1}^nX_i)=\\frac{1}{n^2}\\sum_{i=1}^nvar(X_i)=\\frac{1}{n^2}\\sum_{i=1}^n\\sigma^2=\\frac{\\sigma^2}{n}\\]\nNow we know $E[\\hat{\\mu}_n]=\\mu$ and $var(\\hat{\\mu}_n)=\\frac{\\sigma^2}{n}$. We can use the above calculation for the equations below.\n\\[ E[Z] = E[\\sqrt{n}(\\hat{\\mu}_n-\\mu)] = \\sqrt{n}(E[\\hat{\\mu}_n] - \\mu) = \\sqrt{n}(\\mu - \\mu) = 0\\]\n\n\\[ var(Z)=var(\\sqrt{n}(\\hat{\\mu}_n-\\mu)) = n*var(\\hat{\\mu}_n-\\mu)\\] \\[=n*E[((\\hat{\\mu}_n-\\mu)-E[\\hat{\\mu}_n-\\mu])^2] \\]\n\\[=n*E[((\\hat{\\mu}_n-\\mu)-0)^2] \\]\n\\[=n*E[(\\hat{\\mu}_n-\\mu)^2] \\]\n\\[=n*var(\\hat{\\mu}_n)\\]\n\\[=\\sigma^2\\]\n\n\n\\section{A.6}\n\\subsection{a.}\n\\[\\hat{F_n}(x)= \\frac{1}{n}\\sum_{i=1}^n1\\{X_i\\le x\\}\\]\n\\[E[\\hat{F_n}(x)]= E[\\frac{1}{n}\\sum_{i=1}^n1\\{X_i\\le x\\}] = \\frac{1}{n}\\sum_{i=1}^nE[1\\{X_i\\le x\\}] \\]\n\nLet's work out what $E[1\\{X_i\\le x\\}]$ is:\n\\[ E[1\\{X_i\\le x\\}] = \\int_{-\\infty}^{\\infty} 1\\{t\\le x\\}*f(t) \\,dt \\]\n\n\\[ = \\int_{-\\infty}^{x}f(t) \\,dt \\]\n\n\\[ = F(x) \\]\n\nNow:\n\\[ E[\\hat{F_n}(x)]= \\frac{1}{n}\\sum_{i=1}^nE[1\\{X_i\\le x\\}] = \\frac{1}{n}\\sum_{i=1}^nF(x)=F(x)\\]\n\n\\subsection{b.}\n\\[ var(\\hat{F_n}(x))=E[(\\hat{F_n}(x) -F(x))^2] = E[(\\hat{F_n}(x) -F(x))(\\hat{F_n}(x) -F(x))] \\]\n\\[ =E[\\hat{F_n}(x)^2-2\\hat{F_n}(x)F(x)+F(x)^2] \\]\n\n\\[ var(\\hat{F_n}(x))= var(\\frac{1}{n}\\sum_{i=1}^n1\\{X_i\\le x\\}) = \\frac{1}{n^2}\\sum_{i=1}^nvar(1\\{X_i\\le x\\}) \\]\nLet's work out what $var(1\\{X_i\\le x\\})$ (let $A$ represent it) is: \n\\[ var(A) = E[(A-E[A])^2]\\]\n\\[ var(A) = E[A^2 -2AE[A]+(E[A])^2]\\]\nSince $A$ is an indicator (0 or 1), $A^2=A$:\n\\[ var(A) = E[A] -2E[A]E[A]+(E[A])^2 \\]\n\\[ var(A) = E[A] -E[A]E[A] \\]\n\\[ var(A) = F(x) - F(x)^2 \\]\n\\[ var(A) = F(x)(1-F(x)) \\]\n\n\\[var(\\hat{F_n}(x))=\\frac{1}{n^2}\\sum_{i=1}^nvar(1\\{X_i\\le x\\})=\\frac{1}{n^2}\\sum_{i=1}^nF(x)(1-F(x))\\]\n\\[ var(\\hat{F_n}(x))=\\frac{F(x)(1-F(x))}{n}\\]\n\n\\subsection{c.}\n\n\\[ var(\\hat{F_n}(x))=E[(\\hat{F_n}(x)-F(x))^2] =\\frac{F(x)(1-F(x))}{n}\\]\nWe want to show $\\frac{F(x)(1-F(x))}{n}\\le\\frac{1}{4n}$,\nwhich is the same as showing $F(x)(1-F(x))\\le\\frac{1}{4}$.\\newline\n\n\\[ F(x)(1-F(x)) = \\frac{1}{4}-\\frac{1}{4}+F(x)-F(x)^2 \\]\n\n\\[ F(x)(1-F(x)) = \\frac{1}{4}-(\\frac{1}{2}-F(x))^2 \\le \\frac{1}{4}\\]\nWhich is the same as:\n\\[\\frac{F(x)(1-F(x))}{n}\\le\\frac{1}{4n}\\]\n\n\n\n\\section{A.7}\n\\subsection{a.}\nRank for A is: 2.\\newline\nRank for B is: 2.\n\\subsection{b.}\n$A^T=\\begin{bmatrix}\n1 & 1 & 1\\\\\n2 & 0 & 1\\\\\n1 & 3 & 
2\n\\end{bmatrix}$\n=\n$\\begin{bmatrix}\n1 & 1 & 1\\\\\n2 & 0 & 1\\\\\n0 & 2 & 1\n\\end{bmatrix}$\n=\n$\\begin{bmatrix}\n1 & 0 & 1/2\\\\\n0 & 1 & 1/2\\\\\n0 & 0 & 0\n\\end{bmatrix}$ \\newline\n\n\nSo the minimal basis for A: \n$\\begin{bmatrix}\n1\\\\ \n0\\\\\n1/2\n\\end{bmatrix}$\nand\n$\\begin{bmatrix}\n0\\\\ \n1\\\\\n1/2\n\\end{bmatrix}$ \\newline\n\n\n$B^T=\\begin{bmatrix}\n1 & 1 & 1\\\\\n2 & 0 & 1\\\\\n1 & 3 & 2\n\\end{bmatrix}$\n=\n$\\begin{bmatrix}\n1 & 1 & 1\\\\\n0 & 2 & 1\\\\\n0 & 0 & 0\n\\end{bmatrix}$\n=\n$\\begin{bmatrix}\n1 & 0 & 1/2\\\\\n0 & 1 & 1/2\\\\\n0 & 0 & 0\n\\end{bmatrix}$\\newline\n\n\nSo the minimal basis for B: \n$\\begin{bmatrix}\n1\\\\ \n0\\\\\n1/2\n\\end{bmatrix}$\nand\n$\\begin{bmatrix}\n0\\\\ \n1\\\\\n1/2\n\\end{bmatrix}$\n\n\n\\section{A.8}\n\\subsection{a.}\n\n$\\begin{bmatrix}\n0 & 2 & 4\\\\\n2 & 4 & 2\\\\\n3 & 3 & 1\n\\end{bmatrix}$\n$\\begin{bmatrix}\n1 & 1 & 1\n\\end{bmatrix}^T$\n=\n$\\begin{bmatrix}\n0 & 2 & 4\\\\\n2 & 4 & 2\\\\\n3 & 3 & 1\n\\end{bmatrix}$\n$\\begin{bmatrix}\n1\\\\\n1\\\\\n1\n\\end{bmatrix}$\n=\n$\\begin{bmatrix}\n6\\\\\n8\\\\\n7\n\\end{bmatrix}$\n\n\\subsection{b.}\n$\\left[\\begin{array}{ccc|c}\n0 & 2 & 4 & -1\\\\\n2 & 4 & 2 & -2\\\\\n3 & 3 & 1 & -4\n\\end{array}\\right]$\\newline\nThe solution to this augmented system is:\n$\\begin{bmatrix}\n-15/8\\\\\n3/4\\\\\n-5/8\n\\end{bmatrix}$\n\n\\section{A.9}\n\\subsection{a.}\nBased on the relationship, we can get this:\n$\\begin{bmatrix}\n-1 & 2\n\\end{bmatrix}$\n*\n$\\begin{bmatrix}\nx_1\\\\\nx_2\n\\end{bmatrix}$\n+2=0 \\newline\n\nThen we can get the relationship $x_1 = 2x_2+2$.\n\n\\begin{tikzpicture}\n\t\\begin{axis}[\n\t\txlabel=$x_2$,\n\t\tylabel=$x_1$,\n\t\tzlabel=$x_3$,\n\t\txlabel style={sloped like x axis},\n\t\tylabel style={sloped}\n\t]\n\t\\addplot3[surf] {2*x+2};\n\t\\end{axis}\n\\end{tikzpicture}\n\n\n\\subsection{b.}\n$\\begin{bmatrix}\n1 & 1 & 1\n\\end{bmatrix}$\n*\n$\\begin{bmatrix}\nx_1\\\\\nx_2\\\\\nx_3\n\\end{bmatrix}$\n+0=0 \\newline\n\nThen we can get the relationship $x_1 = -x_2-x_3$.\n\n\\begin{tikzpicture}\n\t\\begin{axis}[\n\t\txlabel=$x_2$,\n\t\tylabel=$x_3$,\n\t\tzlabel=$x_1$,\n\t\txlabel style={sloped like x axis},\n\t\tylabel style={sloped}\n\t]\n\t\\addplot3[surf] {-x-y};\n\t\\end{axis}\n\\end{tikzpicture}\n\n\\subsection{c.}\n\nWhen $\\Tilde{x}_0$ is the minimizer, $x=\\Tilde{x}_0$: \n\\[\\min_{x}\\begin{Vmatrix}\nx_0-x\n\\end{Vmatrix}^2\n=\n\\begin{Vmatrix}\nx_0-\\Tilde{x}_0\n\\end{Vmatrix}^2\n=\n\\frac{(w^T x_0-w^T\\Tilde{x}_0)^2}{w^T w}\n\\]\nsince $x_0-\\Tilde{x}_0$ is parallel to $w$.\nSince $w^T x + b=0$, then $w^T \\Tilde{x}_0 + b=0$,   \\newline\nThen $b=-w^T \\Tilde{x}_0$.  
\\newline\n\\[\n\\frac{(w^T x_0-w^T\\Tilde{x}_0)^2}{w^T w}\n=\n\\frac{(w^T x_0+b)^2}{w^T w}\n\\]\nSo the squared distance is $\\frac{(w^T x_0+b)^2}{w^T w}$.\n\n\n\\section{A.10}\n\\subsection{a.}\n\\[x^T Ax\n=\n\\begin{bmatrix}\nx_1 & \\dots & x_n\n\\end{bmatrix}\n\\begin{bmatrix}\nA_{1,1} & \\dots & A_{1,n}\\\\\n\\vdots & \\ddots & \\vdots\\\\\nA_{n,1} & \\dots & A_{n,n}\n\\end{bmatrix}\n\\begin{bmatrix}\nx_1\\\\\n\\vdots\\\\\nx_n\n\\end{bmatrix}\\]\n\n\\[  \n=\n\\begin{bmatrix}\n\\sum_{i=1}^n x_i A_{i,1} & \\dots & \\sum_{i=1}^n x_i A_{i,n}\n\\end{bmatrix}\n\\begin{bmatrix}\nx_1\\\\\n\\vdots\\\\\nx_n\n\\end{bmatrix}\n=\n\\sum_{j=1}^n (x_j (\\sum_{i=1}^n x_i A_{i,j}))\n\\]\n\nFor $y^T Bx$, it is a similar story, \\newline\n\\[ \ny^T Bx\n=\n\\sum_{j=1}^n (x_j (\\sum_{i=1}^n y_i B_{i,j})) \\]\nSo, \n\n\\[ f(x,y)= x^T Ax + y^T Bx + c\n=\n\\sum_{j=1}^n (x_j (\\sum_{i=1}^n x_i A_{i,j}))+\\sum_{j=1}^n (x_j (\\sum_{i=1}^n y_i B_{i,j}))+c\n\\]\n\n\\[\n=\n\\sum_{j=1}^n (x_j (\\sum_{i=1}^n x_i A_{i,j})+x_j(\\sum_{i=1}^n y_i B_{i,j}))+c\n=\n\\sum_{j=1}^n (x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+c\n\\]\n\n\\subsection{b.}\n\\[\n\\nabla_z f(x,y)=\n\\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial z_1} &\n\\dots &\n\\frac{\\partial f(x,y)}{\\partial z_n}\n\\end{bmatrix}^T\n\\]\n\n\\[\n\\nabla_x f(x,y)=\n\\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial x_1} &\n\\dots &\n\\frac{\\partial f(x,y)}{\\partial x_n}\n\\end{bmatrix}^T\n\\]\n\nLet's look at each $\\frac{\\partial f(x,y)}{\\partial x_k}$ separately, and use $k$ here in order to distinguish it from the matrix indices $i$, $j$.\\newline\n\nSince $f(x,y)=\\sum_{j=1}^n (x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+c$:\n\n\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial f(x,y)}{\\partial x_k} & = \\frac{\\partial}{\\partial x_k}(\\sum_{j=1}^n (x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+c) \\\\\n & = \\sum_{j=1}^n \\frac{\\partial}{\\partial x_k}[(x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+c] \\\\\n & = \\sum_{j=1}^n \\frac{\\partial}{\\partial x_k}(x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j})) + 0 \\\\\n & = \\sum_{j=1}^n \\frac{\\partial}{\\partial x_k}(x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j})) \\\\\n & = \\sum_{j=1}^n [\\frac{\\partial}{\\partial x_k}(x_j)( \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+\\frac{\\partial}{\\partial x_k}(\\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))(x_j)] \\\\\n & = \\sum_{j=1}^n \\frac{\\partial}{\\partial x_k}(x_j)( \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+ \\sum_{j=1}^n \\frac{\\partial}{\\partial x_k}(\\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))(x_j) \\\\\n & = [0+0+\\frac{\\partial}{\\partial x_k}(x_k)( \\sum_{i=1}^n (x_i A_{i,k}+ y_i B_{i,k}))+\\dots+0+0]+[\\sum_{j=1}^n(x_j \\sum_{i=1}^n \\frac{\\partial}{\\partial x_k}(x_i A_{i,j})+0)] \\\\\n & = \\sum_{i=1}^n (x_i A_{i,k}+ y_i B_{i,k}) + \\sum_{j=1}^n x_j (0+0+\\frac{\\partial}{\\partial x_k}(x_k A_{k,j})+0+\\dots+0+0) \\\\\n & = \\sum_{i=1}^n (x_i A_{i,k}+ y_i B_{i,k}) + \\sum_{j=1}^n x_j (\\frac{\\partial}{\\partial x_k}(x_k A_{k,j})) \\\\\n & = \\sum_{i=1}^n (x_i A_{i,k}+ y_i B_{i,k}) + \\sum_{j=1}^n x_j A_{k,j} \\quad \\text{(renaming $j$ to $i$; all indices run from 1 to $n$)} \\\\\n & = \\sum_{i=1}^n (x_i A_{i,k}+ y_i B_{i,k} + x_i A_{k,i})\n\\end{split}\n\\end{equation}\n\nSo,\n\n\\begin{equation}\n\\begin{split}\n\\nabla_x f(x,y) & = \\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial x_1} &\n\\dots &\n\\frac{\\partial f(x,y)}{\\partial x_n}\n\\end{bmatrix}^T \\\\\n& = \\begin{bmatrix}\n\\sum_{i=1}^n (x_i A_{i,1}+ y_i B_{i,1} + x_i A_{1,i}) &\n\\dots &\n\\sum_{i=1}^n (x_i 
A_{i,n}+ y_i B_{i,n} + x_i A_{n,i})\n\\end{bmatrix}^T\n\\end{split}\n\\end{equation}\n\n\n\\subsection{c.}\n\\[\n\\nabla_y f(x,y)=\n\\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial y_1} &\n\\dots &\n\\frac{\\partial f(x,y)}{\\partial y_n}\n\\end{bmatrix}^T\n\\]\n\nNow, do the same thing as for $x$. \\newline\n\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial f(x,y)}{\\partial y_k} & = \\frac{\\partial}{\\partial y_k}(\\sum_{j=1}^n (x_j \\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))+c) \\\\\n& = \\sum_{j=1}^n x_j(\\frac{\\partial}{\\partial y_k}(\\sum_{i=1}^n (x_i A_{i,j}+ y_i B_{i,j}))) \\\\\n& = \\sum_{j=1}^n x_j (0+0+[0+\\frac{\\partial}{\\partial y_k}(y_k B_{k,j})]+0+\\dots + 0)\\\\\n& = \\sum_{j=1}^n x_j B_{k,j}\n\\end{split}\n\\end{equation}\n\nSo we can get, \n\\begin{equation}\n\\begin{split}\n\\nabla_y f(x,y) & = \\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial y_1} &\n\\dots &\n\\frac{\\partial f(x,y)}{\\partial y_n}\n\\end{bmatrix}^T \\\\\n& = \\begin{bmatrix}\n\\sum_{j=1}^n x_j B_{1,j} &\n\\dots &\n\\sum_{j=1}^n x_j B_{n,j}\n\\end{bmatrix}^T\n\\end{split}\n\\end{equation}\n\n\n\n\n\\section{A.11}\n\\subsection{a.}\n\\includegraphics[scale=0.7]{11a.png}\n\n\\subsection{b.}\n\\includegraphics[scale=0.7]{11b.png}\n\n\\section{A.12}\n\\subsection{a.}\nAccording to the previous answer: $var(\\hat{F_n}(x))\\le\\frac{1}{4n}$, so:\n\n\\begin{equation}\n\\begin{split}\n\\sqrt{var(\\hat{F_n}(x))}\\le 0.0025 \\\\\nvar(\\hat{F_n}(x)) \\le & (0.0025)^2 \\\\\nvar(\\hat{F_n}(x)) \\le & \\frac{1}{4n} \\\\\n(0.0025)^2 = & \\frac{1}{4n} \\\\ \nn = & 40000\n\\end{split}\n\\end{equation}\n\n\\includegraphics[scale=0.5]{12a.png}\n\n\\subsection{b.}\n\\includegraphics[scale=0.5]{12b.png}\n\n\n\\end{document}\n", "meta": {"hexsha": "986eacf70937d610cff752c884a2cc04bb85f605", "size": 12780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw0/CSE546hw1A.tex", "max_stars_repo_name": "bobbydyr/CSE546-Machine-Learning", "max_stars_repo_head_hexsha": "c3f7e487b60506acfa7886d7cc64dfa61550ee4b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw0/CSE546hw1A.tex", "max_issues_repo_name": "bobbydyr/CSE546-Machine-Learning", "max_issues_repo_head_hexsha": "c3f7e487b60506acfa7886d7cc64dfa61550ee4b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw0/CSE546hw1A.tex", "max_forks_repo_name": "bobbydyr/CSE546-Machine-Learning", "max_forks_repo_head_hexsha": "c3f7e487b60506acfa7886d7cc64dfa61550ee4b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.9032258065, "max_line_length": 309, "alphanum_fraction": 0.5556338028, "num_tokens": 6225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879991, "lm_q2_score": 0.8198933293122506, "lm_q1q2_score": 0.5813200480628427}}
{"text": "%!TEX root = ../Main.tex\nThe Green's function and self-energies play the central role when it comes to obtaining the Local Density of States (LDOS) as well as electron transport in a system. In fact, the imaginary part of the Green's function is the LDOS for a specific site in a system. What the Green's function and self-energy actually is and how they come about will here be explained formally, to motivate the practical use in the following sections.\n\\subsection{Green's functions and self-energy}\\label{greensandself}\nSome of the concepts in this section will be explained using the simple system as an example (\\cref{inlinepointplot}). Imagine a system like the one in \\cref{atomrepfig}. It contains a unit cell in the centre, marked by a black border, surrounded by repeated unit cells in all directions. The aim is to explain how electrons move through this region. Suppose all cells surrounding the centre cell are considered \"contacts\" in the sense that they represent a semi-infinite chain of molecules and that they are the source of electrons (or states) that is injected in to the centre cell. What the Green's function is doing is that it ``takes the states through'' the centre region. It propagates the states in this particular area. In other words, the Green's function is the solution to the Schr\\\"{o}dinger Equation in this area and the equation has the form\n\\begin{align}\\label{Greensunsolved}\n\t[(E+i\\eta)\\mathbf{1}-\\mathbf{H}]\\mathbf{G}(E) = \\mathbf{1}\n\\end{align}\nWhere \\(\\eta\\) is a small number ensuring that the equation does not diverge when we use the numerical recursion routine described later on. \\textit{E} is the energy for which we are probing the system.\nFrom this equation one can also get the Green's function as\n\\begin{align}\\label{Greenssolved}\n\t\\mathbf{G}(E) & = \\mathbf{1}([(E+i\\eta)\\mathbf{1}-\\mathbf{H}])^{-1} \\\\\n\t              & = [(E+i\\eta)\\mathbf{1}-\\mathbf{H}]^{-1}\n\\end{align}\nThe Green's functions in these equations are represented as matrices that contain all the individual Green's functions for the unit cell as well a the Green's functions for the rest of the chain. As seen in the equations, all that is needed to get the Green's function for a unit cell, in theory, is an energy and the Hamiltonian of the unit cell. Note that the solution to the Green's function matrix is a diagonal matrix with the two first off diagonals. This is because of rules for nearest neighbour interaction dictated by the tight-binding approximation. As the Green's functions for all unit cells in a potentially semi-infinite system are needed, in practice, one has to turn to more sophisticated methods to obtain all the Green's functions, namely recursion. More on that shortly. For now this is the introduction to the Green's function. How it relates to a unit cell in a system and that it is the source of the LDOS in a unit cell.\\\\\nAs described  one can use the Green's functions to get the propagation of states through a specific on-site Hamiltonian. However, if the system contains a range of cells, possibly infinitely many, the Hamiltonian would be of infinite size and the inversion in \\cref{Greenssolved} would be impossible to do practically. The solution to this, is to model a semi-infinite tight-binding chain of atom/molecules and then use \\textit{recursion} on this chain. The way the recursion is done is by removing every second cell in the chain. 
Because the chain is semi-infinite, the result would just be a new semi-infinite chain. Continuing this way, the system can be reduced to a finite size which can actually be inverted. Say one continues to remove every second element in the chain; then in the end, the cells would be too far apart to interact and no hopping between cells would occur. At this point the recursion should stop. More on how this is done practically later.  For now one just has to keep in mind that removing cells in the chain effectively changes the coupling between them, and this is where \\textit{self-energy} comes in. The self-energy is what describes the effective coupling between a cell and the rest of the semi-infinite chain. It can be derived by looking at a cell at the very end of the semi-infinite chain and seeing how it couples to the rest. First one needs the Green's functions. The Green's matrix for this single cell would be given by the equation in \\cref{Greenssolved}. This was the case before, when only one cell and thus one matrix had to be considered. But now, there is a semi-infinite number of cells and a semi-infinite number of matrices to consider. However, the cell at the end of the chain only interacts with the cell next to it, and so on. Considering this, one can write up an equation equivalent to that of \\cref{Greensunsolved} but as a system of matrix equations for the chain.\n\\begin{align}\\label{Greenssystem}\n\t\\begin{pmatrix}\n\t\tz\\mathbf{1}-\\mathbf{H}_c & -\\mathbf{V}^{\\dagger} \\\\ -\\mathbf{V} & (z-\\varepsilon')\\mathbf{1}\n\t\\end{pmatrix}\n\t\\begin{pmatrix}\n\t\t\\mathbf{X}      & \\mathbf{G}_{0c} \\\\\n\t\t\\mathbf{G}_{c0} & \\mathbf{G}_{00}\n\t\\end{pmatrix}\n\t=\n\t\\begin{pmatrix}\n\t\t\\mathbf{1} & \\mathbf{0} \\\\\n\t\t\\mathbf{0} & \\mathbf{1}\n\t\\end{pmatrix}\n\\end{align}\nwhere \\(\\varepsilon'\\) is the on-site Hamiltonian of the first cell, \\(z\\) is \\(E+i\\eta\\),  \\(\\mathbf{G}_{0c/c0}\\) are the Green's matrices coupling the cell to the rest of the chain and \\(\\mathbf{X}\\) contains the Green's matrices for the rest of the chain. This is also assuming one knows the Green's function within the chain \\(\\mathbf{G}_c\\) and that the chain has constant hopping and on-site elements \\(\\mathbf{H}_c,\\mathbf{V},\\mathbf{V}^{\\dagger}\\).\nSolving this system for \\(\\mathbf{G}_{00}\\) and eliminating \\(\\mathbf{G}_{0c}\\), which is unknown, one gets\n\\begin{align}\\label{greenszero}\n\t\\mathbf{G}_{00}(z) & = \\left[(z-\\varepsilon')-\\vb{V}(z\\vb{1}-\\vb{H}_c)^{-1}\\vb{V}^{\\dagger}\\right]^{-1} \\\\\n\t                   & = (z-\\varepsilon'-\\Sigma(z))^{-1}\n\\end{align}\nwhere \\(\\Sigma(z)\\) is the self-energy. One can isolate the self-energy from the equations above as\n\\begin{align}\\label{selfformal}\n\t\\mathbf{\\Sigma}(z) = \\mathbf{V}[z\\mathbf{1}-\\mathbf{H}_c]^{-1}\\mathbf{V}^{\\dagger}\n\\end{align}\nSo from \\cref{greenszero} it can be seen that the solution to the system of matrices for the Green's matrix \\(\\vb{G}_{00}\\) is the same as in \\cref{Greenssolved} but with a correction (\\cref{selfformal}), which is the self-energy. That is what describes the coupling of the first cell to the rest of the chain. This concludes the formal introduction to Green's functions and self-energy.\\subsection{Obtaining first cell self-energy and Green's matrix through programming}\\label{recursionroutinesec}\nFor simplicity and in order to check whether the routine would yield the expected results, the system in \\cref{inlinepointplot} is used as an example. 
The goal is to get the Green's functions for the centre unit cell in the semi-infinite chain and the self-energies coupling it to the rest of the chain to the right and left. Specifically for the simple system one should imagine first having one centre unit cell like \\cref{inlinepointplot} and then repeating it infinitely in the left and right direction. The fact that there is a left \\textit{and} right self-energy comes from the unit cell lying within the semi-infinite chain and not at the very end as described in \\cref{greensandself}. To be clear, this does not conflict with any of the previously mentioned formalism, and the left and right self-energies are quite easily obtained as one shall see shortly. As mentioned, the goal is to get the Green's functions of a specific unit cell and the self-energies related to it. If the Green's matrix \\(\\mathbf{G}\\) represents the whole chain, then the equation of the whole system would be equivalent to that of \\cref{Greensunsolved}. Considering the Green's functions for the specific unit cell in question, they would correspond to one column in the system of equations, say the first. One can define the on-site Hamiltonian \\(\\mathbf{h}_s\\) for the specific unit cell and its hopping matrices \\(\\mathbf{V},\\mathbf{V}^{\\dagger}\\).  The two hopping matrices correspond to hopping left or right in the chain respectively. These can be obtained using the functions already developed in \\cref{hamilsec}. Throughout this section they will be named \\(a_0 = \\mathbf{V}^{\\dagger}, \\ b_0 = \\mathbf{V}, \\ e_{s0} = \\mathbf{h}_{s}\\). The recursion is an iterative process and so the zero index indicates the starting point of the iterations and the \\textit{s} index indicates that it is the Hamiltonian of the specific wanted cell. One can also define a Green's function for a single unit cell as \\(g_0 = (z-e_{0})^{-1}\\) just like \\cref{Greensunsolved}, where \\(e_{0}=\\mathbf{h}\\) is the on-site Hamiltonian of the other cells. With these elements a system of equations, similar to \\cref{Greensunsolved}, can be set up. The first difference being that the identity matrix is replaced by its first column, because the solution of interest is the first column of the Green's matrix. The second is that the first element in the Hamiltonian matrix \\(\\mathbf{H}\\) is related to the specific single unit cell \\(\\mathbf{h}_s\\). Next a range of multiplications of the different elements stated so far will be shown, and afterwards it will be explained how these affect the system of equations to give recursion. The multiplications are:\n\\begin{align}\n\ta_1    & = a_0 \\times g_0 \\times a_0                  \\nonumber                      \\\\\n\tb_1    & = b_0\\times g_0\\times b_0                   \\nonumber                       \\\\\n\te_1    & = e_0 + a_0\\times g_0\\times b_0 + b_0\\times g_0\\times a_0 \\label{matmulrec} \\\\\n\te_{1s} & = e_{0s} + a_0\\times g_0\\times b_0          \\nonumber                      \\\\\n\tg_1    & = (z - e_1)^{-1} \\nonumber\n\\end{align}\nThese equations constitute the first iteration in the recursion and they can be repeated indefinitely. In the matrix system of equations these multiplications effectively shift all elements in the matrix containing on-site Hamiltonians and hopping matrices by one column. Because the matrix is diagonal, it will leave the first column of the matrix empty. The column can then be removed and this is exactly what corresponds to removing a cell in the semi-infinite chain. 
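\nThese update equations translate almost one-to-one into NumPy. The following is a self-contained sketch of one such decimation loop (the variable names mirror the equations, and the convergence tolerance is an illustrative choice):\n\\begin{verbatim}\nimport numpy as np\n\ndef recursion_sketch(z, h_s, h, V, tol=1e-10):\n    a, b = V.conj().T, V              # a0 = V^dagger, b0 = V\n    e_s, e = h_s, h                   # e_0s = h_s, e_0 = h\n    I = np.eye(h.shape[0])\n    while np.linalg.norm(a) > tol:    # hopping ~ 0: cells decoupled\n        g = np.linalg.inv(z * I - e)  # g_i = (z - e_i)^(-1)\n        agb = a @ g @ b\n        e_s = e_s + agb               # e_(i+1)s = e_is + a g b\n        e = e + agb + b @ g @ a       # e_(i+1) = e_i + a g b + b g a\n        a, b = a @ g @ a, b @ g @ b   # a_(i+1) = a g a, b_(i+1) = b g b\n    G00 = np.linalg.inv(z * I - e_s)  # first-cell Green's matrix\n    sigma_R = e_s - h                 # self-energies, following the\n    sigma_L = e - h - sigma_R         # output equations given below\n    return G00, sigma_L, sigma_R\n\\end{verbatim}\n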
Keeping on doing these multiplications, raising the index by +1 every time, one can move through the system as a whole, removing columns (cells) in the system of equations, thus reducing it to a finite size. The loop developed to perform the recursion is shown in \\cref{recurfunc}.\n\\im{Listings/Functions.py}{92}{104}\n\\vspace{-.5\\baselineskip}\n\\captionof{listing}{The while loop in the recursion routine. The matrix elements are overwritten with the new variables until the resulting matrix is small enough to invert\\label{recurfunc}}\\vspace{\\baselineskip}\nNote that some intermediate multiplications are made, e.g. \\textit{ag = a0 @ b0}. This is for run-time optimisation only, as these products are used multiple times per iteration.\nThe recursion is run until a threshold is met. The threshold is determined by the value of the hopping matrix \\(a_0\\). As it reaches a value close to zero, there is no longer any effective interaction (hopping) between the cells because of the removal of cells, and the recursion should stop.\nIn the end one will obtain re-normalised Hamiltonians and hopping matrices which are then used to get the Green's functions and self-energies through these simple equations:\n\\begin{align}\n\t\\mathbf{\\Sigma}_R & = e_s - h                   \\nonumber       \\\\\n\t\\mathbf{\\Sigma}_L & = e - h - \\mathbf{\\Sigma}_R \\label{outputs} \\\\\n\t\\mathbf{G}_{00}      & = (z - e_s)^{-1} \\nonumber\n\\end{align}\nThese are calculated after the while loop in the Python function, as such:\n\\im{Listings/Functions.py}{106}{109}\n\\vspace{-.5\\baselineskip}\n\\captionof{listing}{Self-energies from left and right as well as a renormalised Green's function matrix are calculated at the end of the recursion loop.\\label{RecurOutputs}}\\vspace{\\baselineskip}\nThis concludes how the recursion works and how the first cell Green's function as well as the self-energies are obtained.\n\\subsection{Plotting the real and imaginary part of the first cell Green's function}\n% \\begin{wrapfigure}[10]{r}{.45\\textwidth}\n% \t\\vspace{-2em}\n% \t\\includegraphics[width=0.45\\textwidth]{Figures/BetaimrealTE.eps}\n% \t\\caption{A plot showing the real and imaginary part of Green's function at the zeroth site resulting from the recursion routine on the simple system. Note that the yellow imaginary part is the representation of the local density of states.}\\label{LDOSsimple}\n% \\end{wrapfigure}\nOne of the results possible to obtain via the recursion routine is the Green's function of the centre unit cell in relation to the rest of the chain. As mentioned, the imaginary parts of the Green's matrix elements give the LDOS of the different sites in the unit cell. With a relatively simple approach, the Green's matrix elements can be obtained as a function of energy, using a \\textit{for loop}, looping over a range of energies which is then used as input in the \\textit{RecursionRoutine} function (\\cref{recurfunc}), see \\cref{plotcode}:\n\\im{Listings/SelfEnergyByRecursion.py}{64}{68}\n\\vspace{-1\\baselineskip}\n\\captionof{listing}{Code showing the loop which produces the complex Green's function (or y-values) for a range of energies used in the plot in \\cref{LDOSsimple}\\label{plotcode}}\\vspace{\\baselineskip}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.45\\textwidth]{Figures/BetaimrealTE.eps}\n\t\\caption{A plot showing the real and imaginary part of Green's function at the zeroth site resulting from the recursion routine on the simple system. 
Note that the yellow imaginary part is the representation of the local density of states.}\n\t\\label{LDOSsimple}\n\\end{figure}\nTaking the imaginary part of the output in \\cref{plotcode} gives information about the LDOS at a specific energy and place in space, namely a specific atom in the unit cell. The resulting plot for the simple system (at atom index 4) can be seen in \\cref{LDOSsimple}. Note that the plot only represents the LDOS for a specific site on the molecule and that they may change radically from site to site (see \\cref{appfigs}, \\cref{siteLDOSplot} for an example using the same system as \\cref{inlinepointplot}). The site can be changed by choosing another index in \\cref{plotcode} line 68, which corresponds to the atom indices in \\cref{inlinepointplot}.\n", "meta": {"hexsha": "ccb2585ad29ab7075bc38cde9ac1de9b8053d05f", "size": 14414, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/RecursionRoutine.tex", "max_stars_repo_name": "rwiuff/QuantumTransport", "max_stars_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-25T14:05:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-25T14:05:45.000Z", "max_issues_repo_path": "sections/RecursionRoutine.tex", "max_issues_repo_name": "rwiuff/QuantumTransport", "max_issues_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-03-31T03:17:38.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-31T03:17:38.000Z", "max_forks_repo_path": "sections/RecursionRoutine.tex", "max_forks_repo_name": "rwiuff/QuantumTransport", "max_forks_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-01-27T10:27:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-17T10:18:18.000Z", "avg_line_length": 173.6626506024, "max_line_length": 2707, "alphanum_fraction": 0.7599555987, "num_tokens": 3677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.855851143290548, "lm_q2_score": 0.6791787121629465, "lm_q1q2_score": 0.5812758773032597}}
{"text": "\\section*{Non-Negative Matrix Factorization}\nWant to learn words $w$ in a document $d$. Use topic latent variable: \n$\\mathbf{X} \\in \\mathbb{Z}^{N \\times M}_{\\geq 0}$, NMF: $\\mathbf{X} \\approx \\mathbf{U^\\top V}, x_{ij}=\\sum_z{u_{zi}v_{zj}}=\\langle\\mathbf{u}_i \\mathbf{v}_j\\rangle$\n$\\mathbf{U},\\mathbf{V}$ are non-neg., $L_1$ col normal.\n\\subsection*{Probabilistic LSA}\n\\textbf{Context Model:} $p(w | d) = \\sum_{i=1}^K p(w | i) p(i | d)$\\\\\n\\textbf{Conditional independence assumption ($*$):}\\\\\n$p(w|d) = \\sum_i p(w,i|d) = \\sum_i p(w|d,i)p(i|d) \\stackrel{*}{=}$\\\\$\\sum_i p(w|i)p(i|d)$ \\quad Note $p(w|i):=p(w|z=i)$\\\\\n\\textbf{Symmetric parameterization:}\\\\\n$p(w, d) = \\sum_z p(z)p(w | z) p(d | z)$\n\\subsection*{EM for MLE for pLSA (NO global opt guarantee)}\nLog-Likelihood: $L(\\mathbf{U}, \\mathbf{V}) = \\sum_{w,d} X_{w,d}\\log p(w|d)$ \\\\\n$ p(w|i) = u_{wi}$, $p(i|d) = v_{id}$, $\\sum_w u_{wi} = \\sum_i v_{di} = 1$\\\\\n$q_{iwd} \\in \\{0,1\\}: w \\text{ in } d \\text{ generated via } z=i$. \\\\\nLower bound from Jensen: $\\sum_{w,d}\\log \\sum_{z=1}^K q_{iwd} \\frac{u_{wi}v_{id}}{q_iwd} \\geq \\sum_{w,d,i} q_{iwd} [\\log u_{wi} + \\log v_{id} - \\log q_{iwd}]$\\\\\nDon't forget to add the sum over $i,j$ and $X_{ij}$ again.\nE-Step (optimal q: posterior $p(z=i|w,d)$):\\\\\n$q_{iwd} = \\frac{p(w|i)p(i|d)}{\\sum_{k=1}^K p(w|k)p(k|d)} := \\frac{u_{wi}v_{id}}{\\sum_{k=1}^K u_{wk}v_{kd}}$, $\\sum_i q_{iwd}=1$\\\\\nM-Steps:\\\\\n$p(w|i)=u_{wi} = \\frac{\\sum_d q_{iwd}X_{wd}}{\\sum_{w,d}q_{iwd}X_{wd}}, \\ \np(i|d)=v_{id} = \\frac{\\sum_w q_{iwd}X_{wd}}{\\sum_{w}X_wd}$ \\\\\n\\textbf{Derivations:} maximize lower bound w.r.t. $\\sum_w u_{wi}=1$ and $\\sum_i v_{id}=1$ (no $>=0$ constraint bc. $\\log$). $\\min_{U,V} \\max_{\\alpha, \\beta}\\mathcal{L}$ where $\\mathcal{L} = -g(X; U, V) + \\sum_i \\alpha_i(\\sum_w u_{wi} - 1) + \\sum_d \\beta_d (\\sum_i v_{id} - 1)$. Set $\\partial \\mathcal{L} / \\partial u_{wi} = 0$ and get $u_{wi} = \\sum_d X_{wd}q_{iwd}/\\alpha_i$. Setting $\\partial \\mathcal{L}/\\partial \\alpha_i$ gives $\\sum_w u_{wi} = 1 \\to \\sum_{w,d} X_{wd} q_{iwd} / \\alpha_i = 1 \\to \\alpha_i = 1 / (\\sum_{w,d} X_{wd} q_{iwd})$ then plug it in. Similar for $v_{id}$ but with extra step: $\\sum_w X_{wd} \\sum_i q_{iwd} / \\beta_d = 1 \\to \\beta_d = 1/(\\sum_w X_{wd})$ since $\\sum_i q_{iwd}=1$.\n \n\\subsection*{Latent Dirichlet Allocation}\nTo sample new $d$, need to extend $X$ and $U^T$ (in pLSA matrix dims fixed). For each $d_i$ sample topic weights $\\mathbf{u}_i$\\textasciitilde Dirichlet($\\alpha$): $p(u_i|\\alpha) = \\prod_{z=1}^K u_{zi}^{\\alpha_k-1}$, then topic $z^t$\\textasciitilde Multi($u_i$), word $w^t$\\textasciitilde Multi($v_{z^t}$)\\\\\nLDA Model: $p(\\mathbf{x}|V,u) = \\frac{l!}{\\prod_j \\mathbf{x}_j!}\\prod_j \\pi_j^{\\mathbf{x}_j}$ \nwhere $\\pi_j=\\sum_z v_{zj} u_z$, $l=\\sum_j x_j$ and $l = \\sum_j x_j$\\\\\nBayesian averaging over $\\mathbf{u}$: \\\\\n$p(\\mathbf{x}|\\mathbf{V},\\alpha)=\\int p(\\mathbf{x}|\\mathbf{V},\\mathbf{u})p(\\mathbf{u}|\\alpha)d\\mathbf{u}$\n\\subsection*{NMF Algorithm for Quadratic Cost Function}\n$\\min_{\\mathbf{U}, \\mathbf{V}} J(\\mathbf{U}, \\mathbf{V}) = \\frac{1}{2} \\|\\mathbf{X} - \\mathbf{U}^\\top\\mathbf{V}\\|_F^2$\ns.t. $\\forall i,j,z:u_{zi},v_{zj} \\geq 0 $ (non-negativity)\\\\\nComparison with pLSA: different sampling model (Gaussian not multinomial), different objective (quadratic not KL divergence), not normalized \\\\\nALS (not joint convex over (U,V)):\\\\\n1. 
init: $\\mathbf{U}, \\mathbf{V} = rand()$ 2. repeat 3\\textasciitilde4 for $\\mathit{maxIters}$:\\\\\n3. upd. $(\\mathbf{VV}^\\top)\\mathbf{U} = \\mathbf{VX}^\\top$, proj. $u_{zi} = \\max \\{ 0, u_{zi} \\}$\\\\\n4. update $(\\mathbf{UU}^\\top)\\mathbf{V} = \\mathbf{UX}$, proj. $v_{zj} = \\max \\{ 0, v_{zj} \\}$\n", "meta": {"hexsha": "2d4872bfc9f5b65d787e620010962509b4c75727", "size": 3604, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "NMF.tex", "max_stars_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_stars_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-20T20:58:16.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-20T20:58:16.000Z", "max_issues_repo_path": "NMF.tex", "max_issues_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_issues_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NMF.tex", "max_forks_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_forks_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-02-06T16:55:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-21T01:02:09.000Z", "avg_line_length": 94.8421052632, "max_line_length": 705, "alphanum_fraction": 0.6123751387, "num_tokens": 1588, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511469672594, "lm_q2_score": 0.6791787056691697, "lm_q1q2_score": 0.5812758742426976}}
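A minimal numpy sketch of the ALS loop above (a toy implementation written for this summary, not a library routine; the small ridge term is an added assumption to guard against $VV^\top$ or $UU^\top$ becoming singular after the projection):

\begin{verbatim}
import numpy as np

def nmf_als(X, K, max_iters=100, ridge=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    N, M = X.shape
    U = rng.random((K, N))  # 1. init U, V = rand()
    V = rng.random((K, M))
    I = np.eye(K)
    for _ in range(max_iters):  # 2. repeat 3-4 for maxIters
        # 3. solve (V V^T) U = V X^T, then project u_zi = max{0, u_zi}
        U = np.maximum(np.linalg.solve(V @ V.T + ridge * I, V @ X.T), 0.0)
        # 4. solve (U U^T) V = U X, then project v_zj = max{0, v_zj}
        V = np.maximum(np.linalg.solve(U @ U.T + ridge * I, U @ X), 0.0)
    return U, V

X = np.random.default_rng(1).random((20, 30))  # toy non-negative data
U, V = nmf_als(X, K=5)
print(np.linalg.norm(X - U.T @ V, "fro"))      # Frobenius error, sqrt(2J)
\end{verbatim}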
{"text": "\\documentclass[addpoints]{exam}\r\n\r\n\\usepackage{subfigure}\r\n\\usepackage{caption}\r\n\\usepackage{graphbox}\r\n\\usepackage{hyperref}\r\n\\usepackage{listings}\r\n\\usepackage{multirow}\r\n\\usepackage{tabularx}\r\n\\usepackage{graphicx}\r\n\\usepackage{xcolor}\r\n\\usepackage{amsmath}\r\n\r\n% Header and footer.\r\n\\pagestyle{headandfoot}\r\n\\runningheadrule\r\n\\runningfootrule\r\n\\runningheader{Probability and Statistics}{Final Project}{Spring 2021}\r\n\\runningfooter{}{Page \\thepage\\ of \\numpages}{}\r\n\\firstpageheader{}{}{}\r\n\r\n\r\n\\title{Final Project}\r\n\\author{MATH 310/EE 354 Probability and Statistics\\\\Habib University\\\\Spring 2021}\r\n\\date{Akeel Ather Medina}\r\n\r\n\\begin{document}\r\n\\maketitle\r\n\r\n\r\n\\section{Random Walk}\r\n\\begin{questions}\r\n\\question\r\nFor this part, we need to take input for the probability, and number of walks. The iteration count has been set to 1000, which means we will end up with 1000 expected final positions.\\\\\r\nI created an array of size 'iterations' to hold these ending positions, and now for the main calculation, we create a nested loop, of which the inside loop runs 'n' times, and the outer loop runs 'iterations' times. \\\\\r\nThe expected final position is reset to 0 after each walk, and inside the loop, based on the input probability, if we generate a random value from 0 to 1, using np.random.random(), we can determine which way to walk by comparing it to the probability input. \\\\\r\nThe calculation for bins was done, and reused several times in this assignment, using code from stackexchange which is linked in the references tab.\\\\\r\nThis bin width and count is ideal for all situations so value of bins does not need to be changed for different iterations and values of p and n.\\\\\r\n\\\\\r\nTo begin with, if we plot 100 steps, with the probability of moving along the x-axis as 0.5, given iterations is set to 1000, this is the plot we get:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_1_1.png}\r\n\\end{center}\r\nImmediately we can see that this plot follows a normal distribution. Although the expected value does not come out to be 0 every time, it is more likely for it to be nearer to 0 than to be farther away from it. \\\\\r\nNext we will keep the number of walks similar and test out different probabilities.\\\\\r\nWith $p=0.75$:\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_1_2.png}\r\n\\end{center}\r\nThe main difference we can note here is that the mean of our distribution is now around 50, instead of 0. Intuitively speaking, this is because we now have a higher probability of moving to the right, so in 100 steps, if we take 75 to the right and 25 to the left, we will end up on '50'.\\\\\r\nWith $p=0.25$:\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_1_3.png}\r\n\\end{center}\r\nThis is a similar situation to the above, but since the probability of moving right is now less likely, out of 100 steps, we will likely move left 75 times, and right 25 times, leaving with an approximate value of 50, as we can see in our plot.\\\\\r\nWith $p=0.99$:\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_1_4.png}\r\n\\end{center}\r\n\r\nFor $p=0.99$ although it does not seem obvious whether this plot follows a normal distribution, the intuitive approach that came to my mind was to increase the number of steps taken. 
The reason for this is that for such a high probability, with only 100 steps taken, a trend cannot be accurately measured because the number of times we will take a step left is very low. Changing the number of steps, although the probability remains the same, gives a chance to observe a normal distribution again because more left steps can be observed. \\\\Thus, this is the plot of $p=0.99$ with $n=1000$.\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_1_5.png}\r\n\\end{center}\r\n\r\n\\newpage\r\n\\question\r\n\r\nFor this part, the function is exactly the same except instead of an if-else condition to move left or right, it is now an if-else-if condition, so that in case we are at the point $x=1$, we cannot move left.\\\\\r\nThe bin width and count is calculated the same way as in the previous part, with a few adjustments to make the trend more obvious.\\\\\r\nSo to highlight the difference with the previous part, I will plot the values in the same order, with $n$ as 100:\\\\ \\\\\r\nWith $p=0.5$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_1.png}\r\n\\end{center}\r\nWe can immediately see this looks like the right half of the graph we got for part 1. \\\\\r\nWith $p=0.75$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_2.png}\r\n\\end{center}\r\nIncreasing the probability of moving right leads to the same normal distribution from the last part.\\\\ \\\\\r\nWith $p=0.25$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_3.png}\r\n\\end{center}\r\nIf we were to only consider the positive values from our part 1 equivalent, this plot shows the few positive results we get. In the part 1 equivalent, the mean was near -50, so that is why this graph has so few points plotted, as final values less than 0 are not included.\\\\ \\\\\r\nWith $p=0.99$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_4.png}\r\n\\end{center}\r\nThis is identical to our result for the same values in the previous part, so we will continue as before, increasing the number of steps, to see if those observations still hold.\\\\ \\\\\r\nWith $p=0.99$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_5.png}\r\n\\end{center}\r\nThis is very similar to our plot in part 1.\\\\ \\\\\r\nWith $p=0.25$:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_2_6.png}\r\n\\end{center}\r\nWe observe that there is no obvious change for $p=0.25$ even with an increase in steps. This is to be expected because the number of steps would not matter for a normal random variable; the mean and standard deviation would remain the same.\r\n\r\n\\newpage\r\n\\question\r\n\r\nFor this part, we start our code by taking all 4 inputs: the starting points of the objects and the probabilities that they move right. Proceeding with the same approach as in the previous parts, we iterate 'iterations' number of times, and in each iteration, we set the positions of the objects equal to what we previously input. We start a while loop so that only once the objects meet on the real line will the steps be recorded and the next iteration begins. Inside the loop, both objects each move once, and the number of steps taken to meet is also incremented.\\\\\r\nIt is important that our starting points have an even distance between them or else it will loop infinitely.\\\\\r\nThe process for determining a right or left step remains the same. 
First, we take a random value from 0 to 1; if it is less than the probability input, we move right, else left. Two separate random values have to be generated for each object.\\\\\r\n\\\\\r\nThe first plot I tried to compute was for $d_1=1$, $d_2=-1$, $p_1=0.5$ and $p_2=0.5$. It was taking too long to compute, but when I tried it again, it worked in just 20 seconds. The plot, however, looked like this:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_3_1.png}\r\n\\end{center}\r\nEven after manually adjusting the bins, it was difficult to see all other points, so I printed the max value of the total steps taken, to get 9398093. This value varies greatly, and since there is a much higher probability of the points converging immediately, rather than diverging and then converging again, we can only see this one line on the histogram. So I used lines 5-7 from Question 2 part 2 to observe the graph without that cluster of points in the early stage to see if there was a pattern.\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_3_2.png}\r\n\\end{center}\r\nValues less than 500 have been removed from the array from which we plot the values. This code has been changed to do the opposite of its function in Question 2 part 2. However, it should be noted that the calculation for bins needs to be commented out when running this, and the argument for the histogram can be arbitrarily assigned, such as to 100. \\\\\r\nSo we can see that for $d_1=1$, $d_2=-1$, $p_1=0.5$ and $p_2=0.5$, the steps taken can vary greatly, and the only observation we can make is that for greater total steps, similar results are more sparse. They are clustered together near the beginning. This can be imagined as a graph over time as well. More results are recorded at smaller 'times' (or step counts), and very few as the amount of time taken grows. There is a lower probability of the number of steps being high.\\\\ \\\\\r\nFor $d_1=1$, $d_2=-1$, $p_1=0.6$ and $p_2=0.6$, the code does not run, and there is a memory error. The same happens for other identical values of $p$, such as $0.4$. It is likely that some of the iterations encounter a case where the two objects are chasing after each other and this carries on for too long for the computer to handle. \\\\\r\n\\\\\r\nFor $d_1=1$, $d_2=-1$, $p_1=0.4$ and $p_2=0.6$, we get this graph:\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p1_3_3.png}\r\n\\end{center}\r\nIt is obvious that this follows a normal distribution, and so do other plots of a similar nature, where the probability of the two objects moving towards each other is higher than of them moving apart.\r\n\\end{questions}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\newpage\r\n\\section{Picking a random point correctly}\r\n\\begin{questions}\r\n\\question\r\nFor this question, I have taken one total input for the radius, which is used by all the parts. Changing the radius has no effect on the shape of the plotted graph in any part.\\\\\r\nFor the first part, since we are now doing a scatter plot, where we need to plot the x and y coordinates of our points, we need 2 different lists to store our values in. To calculate each pair of x and y coordinates by the method the question asks for, in each of 'n' iterations, we randomly draw a radius, from 0 to the radius limit input by the user, using np.random.uniform. Functionally, this is also identical to the np.random.random function. 
The angle $\\theta$ is also calculated the same way.\\\\\r\nTo convert this into a plottable set of x and y values we use the formula to convert from polar to Cartesian, so $x=r\\cos(\\theta)$ and $y=r\\sin(\\theta)$.\\\\\r\nWith iterations set to 50000 for an ideal number of points plotted, each point has a color from the map called 'twilight shifted', which highlights the uniformity of the plots, or lack thereof, as we will see in this part.\\\\\r\nThe code for the plotting of the boundary of the circle was taken from stack exchange.\\\\\r\nThe output of this function looks like:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p3_1.png}\r\n\\end{center}\r\nThe scatter plot remains similar for all radii, and the aspect ratio is maintained as a square, with boundaries 1.2 times the diameter of the outer circle. \\\\\r\nWe can see that values closer to the center are more clustered together as compared to ones farther away. The reason for this is that a circle with, say, half the radius of our outer circle does not have half the area. This is because area is determined by $\\pi r^2$ and thus is not linear in the radius. This means that even if there are an equal number of points in the inner and outer part of the circle, it will not be uniform because the outer circle has more area to fill, so points are more sparse.\\\\\r\nEven if we randomly choose radius and $\\theta$, the number of points would have to increase for a higher radius.\\\\\r\nAnother way to think of this is if we take some random angles and draw lines from the origin to the outer circle, then draw points on the intersections of the inner and outer circles, it will seem as if the inner circle has more points because the lines move farther apart the further from the origin we go.\\\\\r\n\\\\\r\nThe variance of this plot was easily calculated using the np.var function, taking in our list of X coordinates which we use to plot. For 3.1, we get a variance of 0.17 for a radius of 1, and a variance of 4.16 for radius 5.\r\n\r\n\\newpage\r\n\\question\r\nThe method for plotting remains the same; the only difference in this part lies in the method of choosing the x and y coordinates. This time, we actually calculate the x and y values in a square of side 2*'radius limit'. We randomly choose an x and y coordinate from between the positive and negative values of the input radius, separately. This is effectively choosing values from a square. We add a while loop: until the point given by these x and y coordinates lies within an imaginary circle of the specified radius, we keep on choosing new x and y coordinates. This also ensures our data has the same number of elements as compared to the other parts of this question.\\\\\r\nAs with the previous part, the plot remains the same for any input radius, so this is the output we get:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p3_2.png}\r\n\\end{center}\r\nWe can clearly see the difference from the previous part. There is no clustering in the center, and the points are distributed uniformly across the circle. Since we are now uniformly choosing x and y coordinates, it seems likely that given any space on the Cartesian plane, sampling x and y coordinates uniformly will give a uniformly filled space. In the previous part, even if we randomly chose radius and $\\theta$, the number of points would have to increase for a higher radius, but over here there is no relation between that and the x and y coordinates. 
They are independent.\\\\\r\n\\\\\r\nThe variance of this part using the same function as the previous part gives 0.25 for a radius of 1, and 6.28 for a radius of 5.\r\n\r\n\\newpage\r\n\\question\r\nFor this part, we again do the plotting the same way as in the previous parts. The difference lies with how we pick points. In the last part, we found a way to uniformly pick points from a circle using x and y coordinates. In this part, we want to alter our method in the first part, 3.1, which sampled uniformly from polar coordinates instead of Cartesian, such that our circle will have points uniformly spread, not clustered in the center. \\\\\r\nThe question first asks whether, if we plot a circle with radius 1 and a circle with radius 2, the probability of a point lying in the larger circle is 4 times that of lying in the smaller one in 3.1. This is not the case, since in that part, the radius and angle were chosen randomly. Since a point being within a circle is independent of angle, we know that approximately 50 percent of the time, the radius will be less than 1, and 100 percent of the time it would be less than 2. This shows the above statement is not true, because for a uniform sample the probability of the radius being less than 2 should be 4 times the probability of it being less than 1. \\\\ \\\\\r\nWe can prove this by using the code created in question 4 for the same reason (to show that points chosen using 3.2 and 3.3 adhere to what this question states above, while 3.1 does not). Using the approach in 3.1, the number of points randomly plotted in a circle of a certain radius is two times that of a circle with half this radius. However, our approach in 3.2 always gives an approximate value of 4, meaning that a point being in the outer circle is 4 times as likely; thus our approach in that question uniformly picks a point over the circle.\\\\\r\nBased on this logic, we can try to develop a way of picking the radius and angle such that the points will be uniformly spread over the circle.\\\\\r\nSo our main goal is to make sure that we calculate our random point in such a way that it has a greater proportion of larger values, given we are choosing it uniformly over a range.\\\\ \\\\\r\nMy first approach was to square random values and accept only if they lie within the radius of the circle, as in part 2, but this also gave a clustered center. \\\\\r\nMy next approach was to randomly pick a value from 0 to the square of the radius, and then take the square root of the answer; however, this also resulted in a graph with a clustered center.\\\\\r\n\\\\\r\nNext, I ended up using an online source, the second link in the References section, which I tried and which gave a uniform distribution. However, no explanation was given, so I tried to solve the distribution given as a bonus because I could not guess how to properly choose r and $\\theta$ to get random points uniformly, and ended up with:\\\\\r\n\\begin{center}\r\n$k = \\frac{1}{\\pi r^2} $\\\\\r\n$P(r \\leq R) = \\frac{r^2}{R^2}$\\\\\r\n$\\sqrt{P(r \\leq R)} = \\frac{r}{R}$\\\\\r\n\\end{center}\r\nWhat I figured from this is that to choose a random point in the circle, we need to take the square root of the probability of it lying inside a circle of radius $r \\leq R$, and multiply it by the radius $R$ itself. Since the probability is between 0 and 1, this makes sense as the square root of numbers less than 1 gives a value greater than itself (as shown in 4.4), or in other words, further from the center. 
Since we want a random point inside our circle, the main line of code separating this approach from 3.1 is:\\\\\r\n\\begin{center}\r\n\\texttt{np.sqrt(np.random.random()) * radiuslimit}\\\\\r\n\\end{center}\r\nThis line of code is similar to the line I saw in the online source. The graph plotted for this formula for choosing a point, similar to 3.2, looks like:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p3_3.png}\r\n\\end{center}\r\n\r\nThe variance of X coordinates for a radius of 1 is 0.25, and 6.25 for radius 5. The variance of X coordinates in 3.2 and 3.3 is approximately the same.\r\n\r\n\\end{questions}\r\n\r\n\r\n\r\n\\newpage\r\n\\section{Saying random is not enough}\r\n\\begin{questions}\r\n\\question\r\nIn this question, we need to randomly generate 2 angles between 0 and $2\\pi$, and the chord length is the distance between the two points on the circle given by these angles. If we calculate this for any two arbitrary angles, we could imagine it as a triangle being formed with two sides being of length 'radius', and the last side being the chord length. To calculate this chord length we can divide the triangle into two halves and apply basic trigonometry. That is:\\\\\r\n\\begin{equation}\r\n c = 2r\\sin{(|a_2-a_1|/2)}\r\n\\end{equation}\r\nwhere $c$ is the length of the chord, $a_1$ and $a_2$ are the angles randomly chosen each iteration, and $r$ is the radius. We divide the angle by 2 because we can only apply $\\sin(\\theta) = \\frac{\\text{opposite}}{\\text{hypotenuse}}$ if we have a right triangle, which we can do by splitting the triangle we form with angle $|a_2-a_1|$ into 2 parts. This only gives half the chord length, so multiplying by two gives our final chord length. \\\\\r\n\\\\\r\nThe plot of chord length generated by this formula looks like: \\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p4_1_1.png}\r\n\\end{center}\r\nNote that bin calculation remains the same as in previous parts. The only difference is that we have set the y axis to plot probability of occurrence. \\\\\r\nWe can see that the most probable chord length is approximately 10, with the probability greatly decreasing after that. The radius for all questions was chosen as 5. This means that the angles being 180 degrees apart is the most likely scenario, as when the angles are 180 degrees apart, the chord between them will give us a length of 10 for this circle. We can confirm this, as, for example, if angle 1 is 180 degrees and angle 2 is 0, we will compute $\\sin(90^\\circ)$, which gives 1. Thus our final answer will be 10, the maximum chord length.\r\n\\newpage\r\n\\question\r\nIn this second approach, we need to choose a random angle, and randomly pick a point on the line extended from the origin at that angle to the boundary of the circle. This line will be the perpendicular bisector of some chord, and the chosen point is where the chord crosses it. Now one thing to note is that when calculating the chord length, we do not actually need the angle. This is because the chord length can simply be calculated using the Pythagorean theorem. We know this chord meets the circle at two points, and the distance from any point on the circle to the origin is the radius. Next, we ourselves are choosing this random point from 0 to radius, thus we also know the length of the line from the origin to the point, and from the origin to the point on the circle where the bisector cuts. 
Thus, by manipulating the Pythagorean theorem we get:\\\\\r\n\\begin{equation}\r\n c = 2\\sqrt{r^2 - p^2}\r\n\\end{equation}\r\nwhere $r$ is the radius and $p$ is the distance of the random point from the origin along the line. However, as we noticed in the last question, choosing a random point from 0 to the radius (as the question asks us to), we will not get a uniform distribution of points on this circle. As proof, I assigned 2 variables as counters for the points generated: if a point (not x and y coordinates, but a polar radius) is less than the radius (as all of them must be, giving a count the same as 'iterations'), we increment the variable called outer by 1, and if it is less than $\\frac{radius}{2}$, we increment inner by 1. The reason for doing it this way is that it is the example given to us in 3.3. If we have two circles, one of double the radius, it should have 4 times the amount of area (or points plotted inside it). This code works for any radius and any number of iterations, and we always get an output of approximately 2 when we compute $\\frac{outer}{inner}$. This shows that the center of the circle is more clustered and has more points than the outer circle.\\\\\r\nBack to the question, our distribution of chord lengths looks like:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p4_2_1.png}\r\n\\end{center}\r\nWe can see the difference between this and the last part. There are very few chords with a small length, and many with a higher length. The reason could be, as shown before, that points located nearer to the origin produce a much larger chord than points near the edge.\r\n\\newpage\r\n\\question\r\nFor this question, to pick a random point uniformly from a circle I re-used my code from 3.2, in which a random x and random y value is chosen from the square of side 'radius', until it is physically within a circle of radius 'radius'.  \r\nThe code is otherwise identical to the previous part. For our calculation of the point in the circle, we need it in terms of radius so that we can apply the same Pythagorean manipulation as before. The formula for this is:\\\\\r\n\\begin{equation}\r\n p = \\sqrt{x^2 + y^2}\r\n\\end{equation}\r\nwhere $p$ is the 'radius', converted from Cartesian to polar form. \\\\\r\nI have commented out the approach used in 3.3 for a random point as an alternative approach; it also gives the same plot.\\\\\r\nThe difference between this part and the previous part can be seen in our results for inner and outer at the end of our iterations: this time $\\frac{outer}{inner}$ always gives an approximate value of 4, as compared to 2. This is how we know that our point being chosen is uniform over our circle. Our distribution looks like:\\\\\r\n\\begin{center}\r\n\\includegraphics[width=.48\\textwidth]{images/p4_3_1.png}\r\n\\end{center}\r\nThis distribution looks quite different from the previous ones. It is linearly increasing, with most chord lengths being close to the maximum length possible. I will elaborate on why I think this approach is the most accurate in the next part.\\\\\r\n\\newpage\r\n\\question\r\nOur approach in 4.3 intuitively should give the most accurate random chord length, the reason being that it is the only part where the points chosen are also uniform over a circle, not biased like in 4.2. A possible explanation for the nature of our plot in 4.3 can be seen if, for example, we consider a circle of radius 1. Now let's use our way of calculating the chord length. 
For the points 0.1, 0.2, 0.5, 0.9: \\\\\r\n\\begin{center}\r\n\r\n$2\\sqrt{1^2 - 0.1^2} = 1.990$    \\\\\r\n$2\\sqrt{1^2 - 0.2^2} = 1.960   $   \\\\\r\n$2\\sqrt{1^2 - 0.5^2} = 1.732     $   \\\\\r\n$2\\sqrt{1^2 - 0.9^2} = 0.872 $    \\\\\r\n\\end{center}\r\nThe result does not decrease linearly, and thus our final plot, although the chord lengths are random, is not greatest for values in our 'outer circle', although now the outer circle has 4 times the amount of points as the inner. We should also note that our approach in 3.3 makes sure this holds for all radii, as the distribution is between 0 and 1, multiplied by a constant, the max radius.\r\n\\end{questions}\r\n\r\n\r\n\\newpage\r\n\\section{References}\r\n\r\n\\noindent\\fbox{%\r\n    \\parbox{\\textwidth}{%\r\n\\url{https://stackoverflow.com/questions/33203645/how-to-plot-a-histogram-using-matplotlib-in-python-with-a-list-of-data}: The code for calculation of bins for all histograms was taken from here.\\\\\r\n\\url{https://blogs.sas.com/content/iml/2016/03/30/generate-uniform-2d-ball.html}: Referred to a line of code on this website for 3.3.\r\n    }%\r\n}\r\n\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "0022899e025ae914cac40bce14f4f744c63a2800", "size": 24600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project Report/Report.tex", "max_stars_repo_name": "AkeelMedina22/Probability-and-Statistics-FInal-Project", "max_stars_repo_head_hexsha": "2de96ad5d2e0cd343be56de523cd6dbe76c2c14c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project Report/Report.tex", "max_issues_repo_name": "AkeelMedina22/Probability-and-Statistics-FInal-Project", "max_issues_repo_head_hexsha": "2de96ad5d2e0cd343be56de523cd6dbe76c2c14c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project Report/Report.tex", "max_forks_repo_name": "AkeelMedina22/Probability-and-Statistics-FInal-Project", "max_forks_repo_head_hexsha": "2de96ad5d2e0cd343be56de523cd6dbe76c2c14c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.8301886792, "max_line_length": 1041, "alphanum_fraction": 0.7591869919, "num_tokens": 6165, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8558511488056151, "lm_q1q2_score": 0.5812758699335633}}
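The point-picking schemes from 3.1-3.3 and the outer/inner counter from Question 4 fit in a short numpy sketch (a paraphrase of the approach described in the report, not the submitted code; variable names are mine). The ratio comes out near 2 for the naive polar method of 3.1 and near 4 for the uniform methods of 3.2 and 3.3:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, R = 100_000, 5.0

# 3.1: uniform radius and angle -> clustered centre.
r_naive = rng.uniform(0, R, n)

# 3.3: square-root trick -> uniform over the disk.
r_sqrt = np.sqrt(rng.random(n)) * R

# 3.2: rejection sampling from the bounding square -> also uniform.
# (4n candidates is comfortably more than the ~n/0.785 needed.)
x, y = rng.uniform(-R, R, (2, 4 * n))
keep = x**2 + y**2 <= R**2
r_reject = np.hypot(x[keep], y[keep])[:n]

for name, r in [("naive", r_naive), ("sqrt", r_sqrt), ("reject", r_reject)]:
    ratio = np.sum(r <= R) / np.sum(r <= R / 2)  # outer / inner counts
    print(name, round(ratio, 2))                 # ~2.0, ~4.0, ~4.0
\end{verbatim}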
{"text": "\\documentclass{mrl}\n\n\\title{Application of Bulletproofs in MKEcoin Transactions}\n\\authors{Sarang Noether\\footnote{\\texttt{sarang.noether@protonmail.com}}}\n\\affiliations{MKEcoin Research Lab}\n\\date{\\today}\n\n\\type{TECHNICAL NOTE}\n\\ident{MRL-XXXX}\n\n\\begin{document}\n\n\\begin{abstract}\nThis technical note briefly describes the proposed application of Bulletproofs \\cite{bp} in MKEcoin. The proofs are used as a drop-in replacement of the existing Borromean bitwise non-interactive zero-knowledge range proofs used to show that a committed amount is in a specified range. Bulletproofs reduce both proof size and verification time, as well as provide a straightforward method for batch verification of proofs from multiple transactions. We describe our implementation, noting specific areas of optimization from the original paper.\n\\end{abstract}\n\n\\section{Introduction}\nThe implementation of confidential transaction amounts in MKEcoin is accomplished using homomorphic commitments. Each input and output amount, including fees, is represented by a commitment of the form $vG + \\mu H$, where $G$ and $H$ are elliptic curve generators, $v$ is the amount, and $\\mu$ is a mask. Without knowledge of the commitment opening, a third party cannot determine the amount; however, it is trivial for the third party to convince itself that a transaction balances (that is, that the difference between inputs and output amounts is zero). The homomorphic property of the commitments is such that the difference in commitments must itself be a commitment to zero.\n\nHowever, this is not sufficient to ensure a correct and safe transaction model. An adversary could easily construct a combination of positive and negative outputs such that the transaction amounts balance. A third party would still verify that the transaction balances, though the adversary has effectively printed free money in an undetected fashion. To combat this, we require that each amount commitment come equipped with a \\textit{range proof} that convinces a verifier that the corresponding output is both positive and does not risk an overflow by being too large. The range proof scheme must be non-interactive and zero-knowledge; that is, the verifier does not need to communicate with the prover once the proof is generated, and the proof itself reveals no information about the amount except that it is within the stated range.\n\nThe current range proof style used in MKEcoin confidential transactions is a \\textit{Borromean bitwise} range proof. To generate a proof that a commitment $C \\equiv vG + \\mu H$ represents an amount $v \\in [0,2^n-1]$ for some bit length $n > 0$ (in MKEcoin $n = 64$), the prover generates separate commitments for each bit. The prover then generates a Borromean ring signature showing that each commitment is to either $0$ or $2^i$ for appropriate $i$. Any third-party verifier can then convince itself that the bit commitments reconstruct the committed amount, that each commitment is to either $0$ or $2^i$, and therefore that the committed amount lies in the correct range.\n\nHowever, this comes at a cost. Borromean bitwise proofs scale linearly in size with the number of bits in the range. Further, if multiple outputs are used in a transaction, a separate proof is required for each. Each proof is large, taking up $6.2$ kB of space.\n\n\\section{Bulletproofs}\nBulletproofs are a recent general non-interactive zero-knowledge proof construction \\cite{bp}. 
Using a novel inner product argument, they can be used in a variety of applications ranging from range proofs (pun intended) to verifiable shuffles and even proofs of general arithmetic circuit evaluation. For our purposes, they can accomplish the same goal as Borromean bitwise range proofs: convincing a verifier that a committed amount is within a claimed range.\n\nThe details of Bulletproof construction, both for prover and verifier, are discussed in the paper \\cite{bp}, so we will not duplicate them here. However, several definitions are useful when discussing the scaling. A standard Bulletproof that shows an amount is within the $n$-bit range $[0,2^n-1]$ is called a \\textit{single-output proof} or a \\textit{1-proof}. However, it is possible for a prover to construct a single proof showing that $m$ separate amounts (with separate random masks) each lie within the range $[0,2^n-1]$, where $m$ is a power of two. Such a proof is called an \\textit{aggregate proof} or, more precisely, an $m$\\textit{-proof}. The scheme is constructed in such a way that a single-output proof is trivially an $m$-proof with $m=1$ (which simplifies the code). It is important to note that the construction of an aggregate proof requires that the prover know each amount and mask; this means that while it is useful for all outputs in a transaction to be contained within a single aggregate proof for space savings, it is not possible for a third party to take existing proofs and construct an aggregate proof, either within a single transaction or between different transactions.\n\nThe size scaling benefits of Bulletproofs occur at two levels:\n\\begin{enumerate}\n\\item \\textbf{Bit length of range}. The size of a Bulletproof increases logarithmically with the number of bits in the range. In bitwise range proofs, the proof size increased linearly with the number of bits.\n\\item \\textbf{Number of amounts in aggregate proof}. The size of a Bulletproof increases logarithmically with the number of amounts included in a single aggregate proof. In bitwise range proofs, the proof size increased linearly with the number of amounts (since a separate proof was needed for each amount).\n\\end{enumerate}\nWe discuss efficiency in more detail below.\n\nThere is a separate scaling argument that is useful. A new node that comes online will receive many $m$-proofs, at least one per post-Bulletproof transaction in the blockchain. Instead of verifying each of the proofs separately, the node can perform a \\textit{batch verification} of as many proofs at a time as it wishes. As described below, this process requires that certain portions of each proof be verified separately, but allows for the remaining parts of the proofs to be batched and verified together. The resulting verification time is linear in the number of proofs, but with a significantly lower time per proof. An existing node that has already verified the transactions in the blockchain can still use batch verification on new transactions it receives, but the benefits are not as great due to the lower number of transactions that must be verified in a short time.\n\n\\section{Optimizations}\nFor the most part, the proposed implementation of Bulletproofs in MKEcoin follows the Bulletproofs paper in scope and notation wherever possible. However, we include several optimizations that have also been discussed for other projects. These optimizations are algebraically equivalent to those in the paper, but reduce the time required for verification. 
The author understands that some or all of the optimizations may be included in an update to the Bulletproofs paper sometime in the future. However, we document them here for completeness and ease of code review. The reader is encouraged to refer to the paper for the complete context of our changes.\n\n\\subsection{Curve group notation}\nThe paper is written with a general group structure in mind, so scalar-group operations are written multiplicatively (\\textit{e.g.} $x = a^bc^d$). In the case of elliptic curve groups, we use additive notation instead (\\textit{e.g.} $X = bA + dC$) and use case to differentiate between curve points and scalars for clarity. This is purely a notational convenience.\n\n\\subsection{Basepoint notation}\nThroughout the paper, amount commitments are expressed as $V \\equiv vG + \\mu H$, where $G$ and $H$ are distinct (but arbitrary) fixed elliptic curve group generators. We interchange the roles of $G$ and $H$ throughout our implementation to match the use of existing base points used in commitments elsewhere in the MKEcoin codebase. Note that the indexed $\\{G_i\\}$ and $\\{H_i\\}$ curve points are not modified in this way.\n\n\\subsection{Fiat-Shamir challenges}\nTo make the Bulletproof scheme non-interactive, we follow the paper by introducing Fiat-Shamir challenges computed by hashing the proof transcript up to the point that a new challenge is needed. This is done by introducing a rolling hash that uses as input the previous challenge and any new proof elements introduced. The prover and verifier compute these challenges identically.\n\n\\subsection{Inner product argument}\nThe inner product argument in Protocol 1 of the Bulletproofs paper uses recursion to shrink the size of its input vectors down to single elements. These inputs include distinct curve group generators $\\{G_i\\}$ and $\\{H_i\\}$, which we compute using an indexed hash function. We make several optimizations to this protocol for the verifier.\n\nFirst, we observe that the curve points in Equation (10) are in fact linear combinations of $\\{G_i\\}$ and $\\{H_i\\}$ that use the scalar challenges in Equations (24)-(25). Next, we note that the point $P$ in Equation (62) is passed into Protocol 1 as described in Section 4.2 of the paper. Since this curve point contains a linear combination of the same group generators as Protocol 1, we can take advantage of this and compute a single linear combination, rather than computing Equations (62) and (10) separately.\n\nIn practice, we replace Equations (62) and (10) with the following check, where $M \\equiv |\\{L_j\\}| = |\\{R_j\\}|$:\n$$A + xS - \\mu G + \\sum_{j=0}^{M-1}(w_j^2 L_j + w_j^{-2} R_j) + (t - ab)xH - \\sum_{i=0}^{mn-1}(g_iG_i + h_iH_i) = 0$$\nThe symbols are mostly those used in the paper. However, we use $w_j$ to represent the round challenges in Lines (21)-(22), and $x$ to represent the challenge in Lines (32)-(33) to avoid reuse of symbols. The scalars $g_i$ and $h_i$ are computed in the following way. Express the index $i = b_0b_1 \\cdots b_{M-1}$ bitwise, where $b_{M-1}$ is the least-significant bit. Then\n$$g_i = a\\prod_{j=0}^{M-1} w_j^{2b_j-1} + z$$\nand\n$$h_i = \\left(by^{-i}\\prod_{j=0}^{M-1} w_j^{-2b_j+1} - zy^i + z^{2+\\lfloor i/N \\rfloor}2^{i\\operatorname{mod}N}\\right)y^{-i}$$\nThis optimization is applied only to the verifier.\n\n\\subsection{Batch verification}\nOur implementation permits the verifier to take many aggregate proofs and verify them together as a batch. 
We do not assume that the proofs each have the same number of outputs, nor make any restrictions on the maximum size of a batch. The batch verification we describe will only succeed if each proof is valid, and will fail if one or more proofs are invalid.\n\nBatch verification is split into two checks, performed after iterating over each proof in the batch. During the iteration, the verifier keeps ongoing sums of components from each proof and then performs the first-stage check for Equation (61):\n\\begin{equation}\n\\sum_l (\\beta_l\\tau_{xl}) G + \\sum_l \\beta_l\\left[ t_l - (k_l + z_l \\langle \\overline{1}^{mn},\\overline{y_l}^{mn} \\rangle) \\right] H - \\sum_l \\beta_l \\left( \\sum_j z_l^{j+2} V_{lj} - x_lT_{1l} - x_l^2T_{2l} \\right) = 0 \\nonumber\n\\end{equation}\nThe second-phase check proceeds similarly:\n\\begin{multline}\n\\sum_l \\beta_l(A_l + x_lS_l) - \\sum_l(\\beta_l\\mu_l) G + \\sum_l\\left[\\beta_l \\sum_j(w_{lj}^2 L_{lj} + w_{lj}^{-2} R_{lj})\\right] + \\sum_l \\beta_l x_l(t_l - a_lb_l) H \\\\\n- \\sum_i \\left[\\sum_l(\\beta_l g_{li})G_i + \\sum_l(\\beta_l h_{li})H_i\\right] = 0 \\nonumber\n\\end{multline}\nHere each $l$-indexed sum is over each proof in the batch, and $\\beta_l$ is a weighting factor chosen at random (not deterministically) by the verifier. This ensures that, except with negligible probability, the checks will only succeed if each proof is separately valid; an adversary cannot selectively provide a batch containing invalid proofs in an attempt to fool the verifier. The benefit to this approach is that the sums can be computed as large multi-exponentiation operations after the scalars from all proofs have been assembled.\n\nIf the batch fails either check, at least one proof in the batch is invalid. To identify which proofs are at fault, the verifier can either iterate through each proof and perform the checks separately (in linear time), or perform a binary search by successively performing the checks on half-batches until it identifies all faulty proofs (in logarithmic time).\n\n\\section{Proof size}\nIncluding the amount commitment $V$, a single Borromean bitwise range proof occupies $6.2$ kB of space; a transaction with $m$ outputs therefore requires $6.2m$ kB of space. An $m$-proof (with a $64$-bit range) requires $2\\lg m + 17$ group elements and $5$ scalars, each of which takes up $32$ bytes. 
Table \\ref{table:size} shows the space savings from Bulletproofs for several values of $m$.\n\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{r|rr|c}\n$m$ & Bulletproof & Borromean & Relative size \\\\\n\\hline\n$1$   & $704$  & $6200$   & $0.114$ \\\\\n$2$   & $768$  & $12400$  & $0.062$ \\\\\n$8$   & $896$  & $49600$  & $0.018$ \\\\\n$16$  & $960$  & $99200$  & $0.010$ \\\\\n$128$ & $1152$ & $793600$ & $0.001$\n\\end{tabular}\n\\caption{Size (bytes) of $m$ Borromean proofs versus $m$-proof}\n\\label{table:size}\n\\end{center}\n\\end{table}\n\nUsing data from the MKEcoin blockchain\\footnote{Data was taken from blocks 1400000 through 1500000} on the distribution of the number of outputs in transactions, the use of Bulletproofs would reduce the total size of range proofs by $94\\%$.\n\n\\bibliographystyle{plain}\n\\bibliography{bulletproofs}\n\n\\end{document}\n", "meta": {"hexsha": "ea626a943ecfaf82dbebe618b0f46a45238c7302", "size": 13528, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "publications/standards/bulletproofs.tex", "max_stars_repo_name": "MKEcoin/research-lab-master", "max_stars_repo_head_hexsha": "6efb96efd829ed4a0c0073b0ca2e3f6096bc8009", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-14T03:15:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-14T03:15:29.000Z", "max_issues_repo_path": "publications/standards/bulletproofs.tex", "max_issues_repo_name": "MKEcoin/research-lab-master", "max_issues_repo_head_hexsha": "6efb96efd829ed4a0c0073b0ca2e3f6096bc8009", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "publications/standards/bulletproofs.tex", "max_forks_repo_name": "MKEcoin/research-lab-master", "max_forks_repo_head_hexsha": "6efb96efd829ed4a0c0073b0ca2e3f6096bc8009", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 127.6226415094, "max_line_length": 1204, "alphanum_fraction": 0.7745416913, "num_tokens": 3310, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.855851143290548, "lm_q2_score": 0.679178699175393, "lm_q1q2_score": 0.5812758661878472}}
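The byte counts in Table \ref{table:size} follow directly from the element counts quoted above, as a short Python check confirms (illustrative arithmetic only, not part of any implementation):

\begin{verbatim}
# An m-proof is (2*lg m + 17) group elements plus 5 scalars, 32 bytes
# each, for the fixed 64-bit range; a Borromean proof is 6200 bytes.
def mproof_bytes(m):
    assert m > 0 and m & (m - 1) == 0   # m must be a power of two
    return 32 * (2 * (m.bit_length() - 1) + 17 + 5)

for m in (1, 2, 8, 16, 128):
    bp, borromean = mproof_bytes(m), 6200 * m
    print(m, bp, borromean, round(bp / borromean, 3))
# -> 1 704 6200 0.114 ... 128 1152 793600 0.001
\end{verbatim}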
{"text": "\n\\section{Barron theory of approximation}\n%\\subsection{A DL function class}\n\\subsection{Approximation result for cosine as activation function}\nFor clarity, let us give a brief introduction of the approximation\nresult based on the results in Jones \\cite{jones1992simple}, Barron \\cite{barron1993universal} and some\nmodification in Xu \\cite{xu2017approximation}.\n\nGiven $d$ and $n$, consider the following nonlinear space in $\\mathbb\nR^d$:\n\\begin{equation}\n\t\\label{Vn}\n\tV_n=\\left\\{v: v(x)=\\sum_{j=1}^n{a_j\\over n}\\cos(\\omega_j\\cdot x+b_j)+c,\n\t\\mbox{ with }\n\ta_j, b_j, c\\in\\mathbb R, \\omega_j\\in\\mathbb R^d\n\t\\right\\}.\n\\end{equation}\n\\begin{theorem} \\label{jones} Given a bounded domain $B\\subset\\mathbb R^d$, a probability measure $\\mu$ on $B$ \n\tand a function $f:\\mathbb R^d\\mapsto\\mathbb R$ whose Fourier\n\ttransform $\\hat f$ satisfying $\\|\\hat f\\|_{L^1(\\mathbb R^d)}<\\infty$.\n\tThen, for any $n\\ge 1$,  there exists $f_n\\in V_n$  such that\n\t\\begin{equation}\n\t\t\\|f-f_n\\|_{L^2(B)}\\le \\frac{2\\|\\hat f\\|_{L^1(\\mathbb R^d)}}{\\sqrt{n}},\n\t\\end{equation}\n\twhere $f_n(x)=\\sum_{j=1}^n{a_j\\over n}\\cos(\\omega_j\\cdot x+b_j)+c$ satisfying\n\t\\begin{equation}\n\t\t\\label{abc}\n\t\ta=  \\|\\hat f\\|_{L^1(\\mathbb R^d)}, \\quad |b_j|\\le \\pi, \\quad |c|\\le \\|f\\|_{0,\\infty,B}+\\|\\hat f\\|_{L^1(\\mathbb R^d)}.\n\t\\end{equation}\n\\end{theorem}\n\\begin{proof}\\small\n\tConsider the Fourier transform:\n\t\\begin{equation}\n\t  \\label{Fourier}\n\t  \\hat f(\\omega)=\\frac{1}{(2\\pi)^d}\\int_{\\mathbb{R}^d}e^{-i\\omega\\cdot x}f(x)dx\n\t  \\quad \\forall \\omega \\in \\mathbb R^d.\n\t\\end{equation}\n\tWe write  \n\t$\\hat{f}(\\omega)=e^{i\\theta(\\omega)}|\\hat{f}(\\omega)|$. 
By the Fourier inversion formula,\n\t\\begin{equation}\n\t\t\\label{eqn1}\n\t\tf(x)=\\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}\\hat{f}(\\omega)d\\omega\n\t\t=\\int_{\\mathbb{R}^d}e^{i(\\omega\\cdot x+\\theta(\\omega))}|\\hat{f}(\\omega)|d\\omega.\n\t\\end{equation}\nSince $f(x)$ is real-valued, this implies that, for $x, x_B\\in B$,\n\t  \\begin{equation}\n\t\t\\label{key}\n\t\t\\begin{aligned}\n\t\t\tf(x)-f(x_B)\n\t\t\t&={\\rm Re}\\int_{\\mathbb{R}^d}\n\t\t\t(e^{i\\omega\\cdot x}-e^{i\\omega\\cdot x_B}) \n\t\t\t\\hat{f}(\\omega)d\\omega \\\\\n\t\t\t&={\\rm Re}\\int_{\\mathbb{R}^d}\n\t\t\t(e^{i\\omega\\cdot x}-e^{i\\omega\\cdot x_B})  \n\t\t\te^{i\\theta\n\t\t\t\t(\\omega)}|\\hat{f}(\\omega)|d\\omega \\\\\n\t\t\t&=\\int_{\\mathbb{R}^d}(\\cos(\\omega\\cdot\n\t\t\tx+\\theta(\\omega))-\\cos(\\omega\\cdot x_B+\\theta(\\omega)))|\\hat{f}(\\omega)|d\\omega \\\\\n\t\t\t&=\\int_{\\mathbb{R}^d}(\\cos(\\omega\\cdot(x-x_B)+\\theta_B(\\omega))-\\cos(\\theta_B(\\omega)))|\\hat{f}(\\omega)|d\\omega \\\\\n\t\t\t&=\\int_{\\mathbb{R}^d}g(x,\\omega)|\\hat{f}(\\omega)|d\\omega \\\\\n\t\t\t&=\\|f\\|_{B^m}\\int_{\\mathbb{R}^d}|\\omega|_B^{-m}g(x,\\omega)\\lambda^m(\\omega)d\\omega \n\t\t\\end{aligned}\n\t\\end{equation}\nwhere\n$$\n\\theta_B(\\omega)=\\omega\\cdot x_B+\\theta(\\omega),\\quad|\\omega|_B:=\\sup\\limits_{x\\in B}|\\omega\\cdot(x-x_B)|\n$$\nand $g: B\\times \\mathbb{R}^d\\rightarrow \\mathbb{R}$\nis given by\n\\begin{equation}\\label{gz}\n\tg(x,\\omega):=\n\t\\cos(\\omega\\cdot (x-x_B)+\\theta_B(\\omega))  -\\cos(\\theta_B(\\omega)),\n\\end{equation}\nand\n$$\n\\|f\\|_{B^m}:=\\int_{\\mathbb R^d}|\\omega|_B^m|\\hat{f}(\\omega)|d\\omega,\\quad\\lambda^m(\\omega)=\\frac{|\\omega|_B^m|\\hat{f}(\\omega)|}{\\|f\\|_{B^m}}.\n$$\nHere $\\lambda^m$ is a probability density function. \n\n\n\tUsing $\\lambda=\\lambda^0$, we define the expectation $\\mathbb E$ and\n\t$\\mathbb{\\bar E}$ for $u: \\mathbb R^d\\mapsto \\mathbb R$ and\n\t$v: (\\mathbb R^d)^n\\equiv\\mathbb R^d\\times\\cdots\\times\\mathbb R^d\\mapsto \\mathbb R$\n\t$$\n\t\\mathbb E(u)\\equiv\\int_{\\mathbb R^d}u(\\omega)\\lambda(\\omega)d\\omega,  \\quad\n\t\\mathbb{\\bar E}(v)\\equiv\\int_{(\\mathbb\n\t\tR^d)^n}v(\\omega_1,\\cdots,\\omega_n)\n\t\\lambda(\\omega_1)\\cdots\\lambda(\\omega_n)d\\omega_1\\cdots d\\omega_n .\n\t$$\n\tDenote $\\tilde f(x)=\\frac{f(x)-f(x_B)}{\\|f\\|_{B}}$. By\n\tLemma~\\ref{MC} and a direct (and standard) calculation,\n\t$$\n\t\\mathbb{\\bar E}(\\tilde f(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j))^2 \n\t=\\mathbb{\\bar E}(\\mathbb{E}[g(x,\\cdot)]-\n\t\\frac1n\\sum_{j=1}^n g(x,\\omega_j))^2 \n\t\\le\\frac{1}{n}\\max_{x\\in B, \\omega\\in \\mathbb R^d}|g(x,\\omega)|^2\n\t\\le \\frac{4}{n}.\n\t$$\n\tIntegrating both sides over $B$, by Fubini's theorem we have\n\t$$\n\t\\mathbb{\\bar E}\\int_B(\\tilde f(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j))^2 d\\mu(x)\n\t\\le \\frac{4}{n}.\n\t$$\n\tThus there exist $\\omega_j^*\\in \\mathbb{R}^d$ such that \n\t$$\n\t\\int_B  (\\tilde f(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j^*))^2d\\mu(x)\\le \\frac{4}{n}.\n\t$$\n\tThe desired result then follows easily. 
\n\\end{proof}\nTo prove an $H^1$ estimate, we use the following density function\n$$\n\\lambda_1(\\omega)=\\lambda^1=\\frac{|\\omega|_B|\\hat f(\\omega)|}{\\int_{\\mathbb R^d}|\\omega|_B|\\hat f(\\omega)|d\\omega}.\n$$\nSimilarly, we have\n$$\n\\tilde f(x)\\equiv \\frac{f(x)-f(x_B)}{\\int_{\\mathbb R^d}|\\omega|_B|\\hat f(\\omega)|d\\omega}\n=\n\\int_{\\mathbb{R}^d}\\frac{g(x,\\omega)}{|\\omega|_B}\\lambda_1(\\omega)d\\omega.\n$$\nIt follows that\n$$\n\\partial_k\\tilde f(x)\n=\n\\int_{\\mathbb{R}^d}\\frac{\\partial_kg(x,\\omega)}{|\\omega|_B}\\lambda_1(\\omega)d\\omega.\n$$\nWe note that\n$$\n\\max_{x\\in B, \\omega\\in \\mathbb R^d}\n\\frac{|g(x,\\omega)|}{|\\omega|_B}\\le 1.\n$$\nAlso, by the definition of $|\\omega|_B$, we know there exists $x_\\omega\\in B$ such that:\n$$\n|\\omega|_B=\\sup\\limits_{x\\in B}|\\omega\\cdot(x-x_B)|\\ge|\\omega||x_\\omega-x_B|\\ge|\\omega|\\frac12\\mathrm{dist}(x_B,\\partial B).\n$$\nThus\n$$\n\\max_{x\\in B, \\omega\\in \\mathbb R^d}\n\\frac{|\\partial_kg(x,\\omega)|}{|\\omega|_B}\\le \\frac{2}{\\mathrm{dist}(x_B,\\partial B)}.\n$$\nSet\n$$\nv_n=\\frac1n\\sum_{j=1}^n\\frac{g(x,\\omega_j)}{|\\omega_j|_B}.\n$$\nBy a similar argument, we obtain\n$$\n\\mathbb{\\bar E}\n\\int_B\\left((\\tilde f(x)-v_n(x))^2\n+\\sum_{k=1}^d (\\partial_k\\tilde f(x)-\\partial_kv_n(x))^2\n\\right) d\\mu(x)\n\\le \\frac{1}{n}[C(d,B)]^2 .\n$$\nwhere\n$\nC(d,B)=[\\frac{4d}{{\\rm{dist}^2(x_B,\\partial B)}}+1]^{1/2}.\n$\nThis implies that there exist $\\omega_j^*\\in \\mathbb{R}^d$ such that  \n$$\n\\left\\|\\tilde f(\\cdot)-{1\\over n}\\sum_{j=1}^n\\frac{g(\\cdot,\\omega_j^*)}{|\\omega_j^*|_B}\\right\\|_{1,B}^2\n\\le \\frac{[C(d,B)]^2}{n}.\n$$\n\\begin{theorem}  \\label{thm:DLH1}\n\tGiven a bounded domain $B\\subset\\mathbb R^d$, a probability measure $\\mu$ on $B$\n\tand a function $f:\\mathbb R^d\\mapsto\\mathbb R$ whose Fourier\n\ttransform $\\hat f$ satisfies $\\|f\\|_{B^1}<\\infty$.\n\t%$\\|\\widehat{\\nabla f}\\|_{L^1(\\mathbb R^d)}<\\infty$.\n\tThen, for any $n\\ge 1$,  there exists $f_n\\in V_n$  such that\n\t\\begin{equation}\n\t\t\\|f-f_n\\|_{H^1(B)}\\le \\frac{C(d,B)}{\\sqrt{n}}\\|f\\|_{B^1},\n\t\\end{equation}\n\twhere \n\t\\begin{equation}\n\t\t\\label{un}\n\t\tf_n(x)=\\sum_{j=1}^na_j\\cos(\\omega_j\\cdot x+b_j)+c \\mbox{ for some }\n\t\ta_j,b_j,c\\in \\mathbb R, \\omega_j\\in\\mathbb R^d.\n\t\\end{equation}\n\\end{theorem}\n\nOne remarkable fact is that Theorem \\ref{thm:DLH1} holds in any spatial\ndimension and any bounded domain $B$ of any geometric shape.\nTheorem \\ref{thm:DLH1} is only interesting when $d\\ge 2$.  For $d=1$,\nwe can prove much stronger results easily.  For example, if $u$\nis sufficiently smooth, for any $m\\ge 1$, we can find functions $u_n$\nin the form of \\eqref{un} such that \n\\begin{equation}\n\t\\label{uun1}\n\t\\|u-u_n\\|_{1,B}\\le \\frac{C_1(m,u)}{n^{m}}.  
\n\\end{equation}\n\\newpage \n\n\n\\section{An improved analysis}\n\\subsection{Heaviside Function}\nDefine $g_i: [-1,1]\\mapsto \\mathbb R$ as\nfollows:\n\\begin{equation}\n\t\\label{psi}\n\tg_i(t)=\\frac{1}{|\\omega_i|_B}[\\cos(|\\omega_i|_Bt+\\theta_B(\\omega_i))  -\\cos(\\theta_B(\\omega_i))],\n\\end{equation}\nIn view of \\eqref{gz}, we have\n\\begin{equation}\n\t\\label{gpsi}\n\tg_i(s_i)=\\frac{g(x,\\omega_i)}{|\\omega_i|_B}, \\quad s_i=\\omega_i^B\\cdot(x-x_B),\\quad \\omega^B=\\frac{\\omega}{|\\omega|_B}\n\\end{equation}\nNow, we take an integer\n\\begin{equation}\n\t\\label{k}\n\tk\\ge \\sqrt{n}  \n\\end{equation}\nand consider a partition of $[-1,1]$ with the following grid points\n$$\nt_j=jh_k, j=-k:k\n$$\nwith \n$$\nh_k=\\frac{1}{k}\\le \\frac{1}{\\sqrt{n}}.\n$$\nWe first take a piecewise constant interpolation for $g_i$ on $[0,1]$ to get\n$$\ng_{i,k}(t)=(\\Pi_kg_i)(t)=\\sum_{j=0}^{k-1}g_i(t_j) M_j(t),   \n$$\nwhere\n$$\nM_j(t)=M_0(\\frac{t-t_j}{h_k})\n$$\nand\n\\begin{equation}\n\t\\label{cardinal}\n\tM_0(x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t0 & x\\le0 \\\\\n\t\t1 & 0< x\\le1    \\\\\n\t\t0 & x > 1    \n\t\\end{array}\n\t\\right.\n\\end{equation}\nWe note that\n$$\nM_0(x)=H(x)-H(x-1)\n$$\nwhere $H$ is the Heaviside function.\nThus\n$$\nM_0(\\frac{t-t_{j}}{h_k})\n=H(\\frac{t-t_{j}}{h_k})-H(\\frac{t-t_{j}}{h_k}-1)=H(\\frac{t-t_{j}}{h_k})-H(\\frac{t-t_{j+1}}{h_k})\n\\equiv H_{j}(t)-H_{j+1}(t).\n$$\nThus, since $g_i(t_0)=0$ and $H_k=0$, we have\n\\begin{equation}  \\label{gi0}\n\tg_{i,k}(t)=\\sum_{j=0}^{k-1}g_i(t_j) M_j(t)\n\t=\\sum_{j=1}^{k-1}(g_i(t_j) - g_i(t_{j-1})) H_{j}(t), \\quad t\\in [0,1]\n\\end{equation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\nNow we consider\n\\begin{equation}\n\th_i(t) = g_i(-t), \\quad t\\in [0,1].\n\\end{equation}\nSimilar to \\eqref{gi0}, we have\n$$\n(\\Pi_kh_i)(t)=\\sum_{j=1}^{k-1}(h_i(t_j) - h_i(t_{j-1}))H_j(t)\n=\\sum_{j=1}^{k-1}(g_i(-t_j) - g_i(-t_{j-1}))H_j(t)\n$$\nNamely\n$$\n(\\Pi_k g_i)(-t)=\\sum_{j=1}^{k-1}(g_i(-t_j) - g_i(-t_{j-1}))H_j(t), \\quad t\\in [0,1]\n$$\nor\n\\begin{equation}\\label{gi1}\n\t(\\Pi_k g_i)(t)=\\sum_{j=1}^{k-1}(g_i(-t_j) - g_i(-t_{j-1}))H_j(-t), \\quad t\\in [-1,0]\n\\end{equation}\nBy combining \\eqref{gi0} and \\eqref{gi1}, we get a piecewise constant\ninterpolation of $g_i$ on $[-1,1]$ as follows:\n\\begin{eqnarray}\n\tg_{i,k}(t)&=&\n\t\\sum_{j=1}^{k-1}(g_i(-t_j) - g_i(-t_{j-1}))H_j(-t)+\\sum_{j=1}^{k-1}(g_i(t_j) - g_i(t_{j-1})) H_{j}(t)\\nonumber \\\\ \n\t&=&\\sum_{j=1}^{k-1}[a_{ij}^-H_j(-t)+a_{ij}^+H_{j}(t)], \\label{gih}\n\t\\quad t\\in [-1,1]\n\\end{eqnarray}\nwhere \n$$\na_{ij}^{\\pm}=g_i(\\pm t_j) - g_i(\\pm t_{j-1})\n$$\nIt is easy to see that\n\\begin{equation}\n\t|g_i(t)-g_{i,k}(t)|\\le h_k, \\quad t\\in [-1,1].\n\\end{equation} \nConsequently,\n\\begin{equation}\n\t\\|\\frac1n \\sum_{i=1}^n \\frac{g(\\cdot,\\omega_i)}{|\\omega_i|_B}-f^*\\|_{L^2(\\mu,B)}\\le h_k\n\\end{equation}\nwhere\n\\begin{equation}\n\t\\label{fstar}\n\tf^*(x)=\n\t\\frac1n\\sum_{i=1}^ng_{i,k}(\\omega_i^B\\cdot (x-x_B)).\n\\end{equation}\nBy the approximation in the last section, we have \n\\begin{equation}\n\t\\|\\tilde f-f^*\\|_{L^2(\\mu,B)}\\le \\frac{2}{\\sqrt{n}}\n\\end{equation}\nLet us rewrite\n$$\nf^*(x)\n=\\sum_{i=1}^n\\sum_{j=1}^{k-1}[\\gamma_{ij}^- f_{ij}^-+ \\gamma_{ij}^+f_{ij}^+]\n$$\nwhere\n$$\n\\gamma_{ij}^{\\pm}=\\frac{|a^\\pm_{ij}|}{nd_i}, \nf^\\pm_{i,j}=d_i\\,{\\rm sign}(a^\\pm_{ij})H_j(\\pm \\omega_i^B\\cdot (x-x_B))\n$$\nand \n$$\nd_i=\\sum_{j=1}^{k-1}(|a^-_{ij}|+|a^+_{ij}|)\\le 2\n$$\nBy 
definition\n\\begin{equation}\n\t\\label{gammaij}\n\t\\sum_{i=1}^n\\sum_{j=1}^{k-1}[\\gamma_{i,j}^-+\\gamma_{i,j}^+]=1.\n\\end{equation}\nWith re-numeration as\n$$\np_\\ell=\\gamma_{ij}^{\\pm}, f_\\ell = f_{ij}^{\\pm}, 1\\le \\ell \\le N=2n(k-1),\n$$\nwe have\n$$\nf^*(x)=\\sum_{\\ell=1}^N p_\\ell f_\\ell\n$$\nConsider \n$$\n\\mathcal N=\\{1,2,\\ldots, N\\}\n$$\nand \n$$\n\\bar f: \\mathcal N\\mapsto \\mathbb R^1\n$$\nsuch that\n$$\n\\bar  f(\\ell)=f_{\\ell}, \\ell\\in \\mathcal N,\n$$\nwith the probability measure\n$$\n\\mu(\\mathcal M)=\\sum_{m\\in \\mathcal M}p_m \\quad \\mathcal M\\subset\\mathcal N.\n$$\nBy definition,\n$$\n\\mathbb E(\\bar  f) = f^*(x).\n$$\nBy the basic result on expectation in Lemma \\ref{MC}, we have\n$$\n\\sum_{\\ell_1,\\ldots \\ell_n=1}^Np_{\\ell_1}\\cdots p_{\\ell_n}\\left(f^*(x)-\n{1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\right)^2\n=\\mathbb E_{\\mathcal N^n} \\left(\\mathbb E(\\bar f)-{1\\over\n\tn}\\sum_{i=1}^n  \\bar  f(\\ell_i)\\right)^2\\\\\n\\le\\frac1n \\|\\bar  f\\|_{\\infty}^2 \\le\\frac4n.\n$$\nBy taking the $L^2(\\mu,B)$ norm on the above inequality, we get\n$$\n\\sum_{\\ell_1,\\ldots \\ell_n=1}^Np_{\\ell_1}\\cdots p_{\\ell_n}\\|f^*-{1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\|_{L^2(\\mu,B)}^2\n\\le\\frac4n.\n$$\nThus, there exist $\\ell_1^*, \\ldots, \\ell_n^*\\in \\mathcal N$ such that\n$$\n\\|f^*-{1\\over n}\\sum_{i=1}^n f_{\\ell_i^*}\\|_{L^2(\\mu,B)}^2\n\\le\\frac4n.\n$$\nSet\n$$\nf_n(x)={1\\over n}\\sum_{i=1}^n f_{\\ell_i^*}(x).\n$$\nThen we have \n\\begin{equation}\n\t\\|\\tilde f-f_n\\|^2_{L^2(\\mu,B)}\\le \\frac{9}{n}.\n\\end{equation}\nConsequently,\n\\begin{equation}\n\t\\left\\|f(x)-f(x_B)-\\|f\\|_Bf_n\\right\\|^2_{L^2(\\mu,B)}\\le \\frac{9\\|f\\|^2_B}{n}.\n\\end{equation}\n\n\\subsection{Heaviside to piecewise constant projection}\nWe first take a piecewise constant interpolation for $g$ on $[0,1]$ to get\n$$\n\\Pi_k^0 g (t)=\\sum_{j=0}^{k-1}g(t_j) M_j(t),   \n$$\nwhere \n$ \n\tM_j(t)=M_0(\\frac{t-t_j}{h_k}),\n\t\\quad \n\tM_0(x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t0 & x\\le0 \\\\\n\t\t1 & 0< x\\le1    \\\\\n\t\t0 & x > 1    \n\t\\end{array}.\n\t\\right.\n$\nWe note that\n$$\nM_0(x)=H(x)-H(x-1)\n$$\nwhere $H$ is the Heaviside function.\nThus\n$$\nM_j(t)\n=H(\\frac{t-t_{j}}{h_k})-H(\\frac{t-t_{j}}{h_k}-1)=H(\\frac{t-t_{j}}{h_k})-H(\\frac{t-t_{j+1}}{h_k})\n\\equiv H_{j}(t)-H_{j+1}(t).\n$$\nThus, since $H_k=0$, we have\n\\begin{equation}  \\label{gi0p}\n\\Pi_k^0 g(t)=\\sum_{j=0}^{k-1}g(t_j) (H_{j}(t)-H_{j+1}(t))\n\t=g(0) + \\sum_{j=1}^{k-1}(g(t_j) - g(t_{j-1})) H_{j}(t).\n\\end{equation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\nNow we consider\n$ h(t) = g(-t)$.\nSimilar to \\eqref{gi0p}, we have\n$$\n(\\Pi_k^0 h)(t)=g(0) + \\sum_{j=1}^{k-1}(h(t_j) - h(t_{j-1}))H_j(t)\n=g(0) + \\sum_{j=1}^{k-1}(g(-t_j) - g(-t_{j-1}))H_j(t),\n$$\nnamely \n\\begin{equation}\\label{gi1p}\n\t\\Pi_k^0 g(t)=g(0) + \\sum_{j=1}^{k-1}(g(-t_j) - g(-t_{j-1}))H_j(-t), \\quad t\\in [-1,0]\n\\end{equation}\nThus, the piecewise constant\ninterpolation of $g$ on $[-1,1]$ is as follows:\n\\begin{eqnarray}\n\t\\Pi_k^0 g(t)&=&\n\tg(0) + \\sum_{j=1}^{k-1}(g(-t_j) - g(-t_{j-1}))H_j(-t)+\\sum_{j=1}^{k-1}(g(t_j) - g(t_{j-1})) H_{j}(t)\\nonumber \\\\ \n\t&=&g(0) + \\sum_{j=1}^{k-1}[a_{j}^-H_j(-t)+a_{j}^+H_{j}(t)],  \\label{gih0}\n\t\\quad t\\in [-1,1]\n\\end{eqnarray}\nwhere \n$\na_{j}^{\\pm}=g(\\pm t_j) - g(\\pm t_{j-1}).\n$\n\n\n\\subsection{Heaviside Function 2}\nRecall $|\\omega|_B=\\sup_{x\\in B} |\\omega \\cdot (x-x_B)|$ and\n$$\ng(x,\\omega) = \\cos (\\omega\\cdot (x-x_B) +\\theta_B(\\omega)) - 
\\cos(\\theta_B(\\omega)).\n$$\nLet $t={\\omega\\cdot (x-x_B)\\over |\\omega|_B}\\in [-1, 1]$ and \n$$\ng(t, \\omega)= \\cos(|\\omega|_Bt+\\theta_B(\\omega))  -\\cos(\\theta_B(\\omega)).\n$$\nDefine $g_i: [-1,1]\\mapsto \\mathbb R$ as\nfollows:\n\\begin{equation}\n\\label{psi2}\ng_i(t)=\\frac{1}{|\\omega_i|_B}g(t,\\omega_i)=\\frac{1}{|\\omega_i|_B}\\big (\\cos(|\\omega_i|_Bt+\\theta_B(\\omega_i))  -\\cos(\\theta_B(\\omega_i))\\big ).\n\\end{equation}\nNote that\n$$\ng(x,\\omega_i) = |\\omega_i|_B g_i\\big ({\\omega_i\\cdot (x-x_B)\\over |\\omega_i|_B}\\big ).\n$$\nThus,\n$$\ng_i(t)=\\frac{g(x,\\omega_i)}{|\\omega_i|_B}, \\quad t={\\omega_i\\cdot (x-x_B)\\over |\\omega_i|_B}.\n$$\n\n\\noindent{\\bf Step 1: Approximate $f$ by cosine functions} \n\nSince \n$$\nf(x)=\\int_{\\mathbb R} g(x,\\omega)|\\hat f(\\omega)|d\\omega = \\int_{\\mathbb R} {g(x,\\omega)\\over |\\omega|_B} |\\omega|_B|\\hat f(\\omega)|d\\omega,\n$$\nletting $\\lambda(\\omega) = {|\\omega|_B|\\hat f(\\omega)|\\over \\int_{\\mathbb R} |\\omega|_B|\\hat f(\\omega)|d\\omega}$, we have\n$$\nf(x)=\\int_{\\mathbb R} {g(x,\\omega)\\over |\\omega|_B}\\lambda(\\omega)d\\omega.\n$$\nBy \\eqref{MC}, there exist $\\{\\omega_i^\\ast\\}_{i=1}^n$ such that\n$$\n\t\\|\\frac1n \\sum_{i=1}^n \\frac{g(x,\\omega_i^\\ast)}{|\\omega_i^\\ast|_B}-f(x)\\|_{L^2(\\mu,B)}\\le \\frac{1}{\\sqrt{n}},\n$$\nnamely,\n\\begin{equation}\\label{eqmc1}\n\t\\|\\frac1n \\sum_{i=1}^n g_i^\\ast (t)-f(x)\\|_{L^2(\\mu,B)}\\le \\frac{1}{\\sqrt{n}}\n\\end{equation}\nwith \n$$\\displaystyle g_i^\\ast (t)=\\frac{1}{|\\omega_i^\\ast |_B}g(t,\\omega_i^\\ast), \\quad t={\\omega_i^\\ast\\cdot (x-x_B)\\over |\\omega_i^\\ast|_B}.\n$$\n
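Before moving on, here is a minimal sketch of Step 1 on a toy discrete spectrum (all values assumed for illustration: $x_B=0$, $\\theta_B=0$, $B=[-1,1]$, so $|\\omega|_B=|\\omega|$): frequencies are drawn from $\\lambda$ and the sampled average of $g(x,\\omega)/|\\omega|_B$ approaches $f$ at the Monte Carlo rate $n^{-1/2}$.\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(0)\n\nomega = np.array([1.0, 3.0, 7.0])   # assumed discrete spectrum\nlam = np.array([0.5, 0.3, 0.2])     # probabilities lambda(omega)\nx = np.linspace(-1.0, 1.0, 401)\ngn = lambda w: (np.cos(np.outer(x, w)) - 1.0) / w   # columns g(x,w)/|w|_B\n\nf = gn(omega) @ lam   # f(x) as the lambda-expectation of g(x,.)/|.|_B\nfor n in [10, 100, 1000, 10000]:\n    draws = rng.choice(omega, size=n, p=lam)\n    fn = gn(draws).mean(axis=1)\n    print(n, np.sqrt(np.mean((f - fn) ** 2)))   # RMS error ~ n**-0.5\n\\end{verbatim}\n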
\\noindent{\\bf Step 2: Approximate cosine function by Heaviside functions} \n\nConsider a partition of $t\\in [-1,1]$ with the grid points\n$$\nt_j=jh_k, \\quad j=-k,\\dots,k,\n$$\nwith \n$\nh_k=\\frac{1}{k}\\le \\frac{1}{\\sqrt{n}}.\n$\nDenote the piecewise constant interpolation of $g_i$ on $[-1,1]$ by $g_{i,k}$ with\n\\begin{equation}\\label{pro:intererror}\n\t|g_i(t)-g_{i,k}(t)|\\le h_k, \\quad t\\in [-1,1].\n\\end{equation} \nBy \\eqref{gih0},\n\\begin{eqnarray}\n\tg_{i,k}(t)&=&\n\t\\sum_{j=1}^{k-1}(g_i(-t_j) - g_i(-t_{j-1}))H_j(-t)+\\sum_{j=1}^{k-1}(g_i(t_j) - g_i(t_{j-1})) H_{j}(t)\\nonumber \\\\ \n\t&=&\\sum_{j=1}^{k-1}[a_{ij}^-H_j(-t)+a_{ij}^+H_{j}(t)], \\label{gih2}\n\t\\quad t\\in [-1,1],\n\\end{eqnarray}\nwhere \n$\na_{ij}^{\\pm}=g_i(\\pm t_j) - g_i(\\pm t_{j-1}).\n$ \nBy \\eqref{eqmc1},\n\\begin{equation} \\label{eq:gast}\n\\begin{split}\n\t\\|\\frac1n  \\sum_{i=1}^n g_{i,k}^\\ast(t)-f(x)\\|_{L^2(\\mu,B)}\\le &\n\t\\|\\frac1n  \\sum_{i=1}^n (g_{i,k}^\\ast(t)-g_i^\\ast(t))\\|_{L^2(\\mu,B)} \\\\\n\t&+ \\|\\frac1n  \\sum_{i=1}^n g_{i}^\\ast(t)-f(x)\\|_{L^2(\\mu,B)}\\le\n\t\\frac{2}{\\sqrt{n}}.\n\\end{split}\n\\end{equation}\n\n\\noindent{\\bf Step 3: Approximate $f(x)$ by Heaviside functions} \n\nLet\n$$\nf^*(x) = \\frac1n  \\sum_{i=1}^n g_{i,k}^\\ast(t)\n=\\sum_{i=1}^n\\sum_{j=1}^{k-1}[\\gamma_{ij}^- f_{ij}^-+ \\gamma_{ij}^+f_{ij}^+],\n$$ \nwhere\n$\nd_i=\\sum_{j=1}^{k-1}(|a^-_{ij}|+|a^+_{ij}|)\\le 2,\n$   \n$\n\\gamma_{ij}^{\\pm}=\\frac{|a^\\pm_{ij}|}{nd_i}\n$ and\n$\nf^\\pm_{i,j}=d_i\\/{\\rm sign}(a^\\pm_{ij})H_j(\\pm {\\omega_i^\\ast\\cdot (x-x_B)\\over |\\omega_i^\\ast|_B}).\n$\nSince\n$\n\\displaystyle \\sum_{i=1}^n\\sum_{j=1}^{k-1}[\\gamma_{i,j}^-+\\gamma_{i,j}^+]=1,\n$\nwith the re-numeration\n$$\np_\\ell=\\gamma_{ij}^{\\pm}, \\quad f_\\ell = f_{ij}^{\\pm}, \\quad 1\\le \\ell \\le N=2n(k-1),\n$$\nwe have\n$$\nf^*(x)=\\sum_{\\ell=1}^N p_\\ell f_\\ell.\n$$\nThis implies that $f^\\ast (x)$ is the expectation of $\\bar f$, which is randomly chosen from $\\{f_\\ell\\}_{\\ell=1}^N$ with probability $p_\\ell$ ($x$ is fixed):\n$$\nf^\\ast(x) = \\mathbb{E} (\\bar f).\n$$\nBy \\eqref{MC}, there exist $\\{\\ell_i\\}_{i=1}^n$ such that\n$$\n\\|f^\\ast (x) - {1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\|_{L^2(\\mu,B)}\\le {1\\over \\sqrt{n}}.\n$$\nA combination of \\eqref{eq:gast} and the above inequality gives\n$$\n\\|f - {1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\|_{L^2(\\mu,B)}\\le \\|f - f^\\ast \\|_{L^2(\\mu,B)} + \\|f^\\ast  - {1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\|_{L^2(\\mu,B)}\\le{3\\over \\sqrt{n}}.\n$$\n \n\\iffalse\n{\\bf Outline of proof}\n\\begin{enumerate}\n\\item $f(x)=\\int g(x,\\omega)\\lambda(\\omega)d\\omega$\n\\item $\\displaystyle {1\\over n}\\sum_{i=1}^n g(x,\\omega_i)\\rightarrow f(x)$ with accuracy $n^{-1/2}$, with\n\\[\n\tg(x,\\omega_i)= \\cos(\\omega_i\\cdot (x-x_B)+\\theta_B(\\omega_i))  -\\cos(\\theta_B(\\omega_i)),\n\\]\n\\item $$\n\\displaystyle f^\\ast(x)={1\\over n}\\sum_{i=1}^n \\sum_{j=1}^{k-1}[a_{ij}^-H_j(-{x-x_B\\over h})+a_{ij}^+H_{j}({x-x_B\\over h})] \\rightarrow {1\\over n}\\sum_{i=1}^n g(x,\\omega_i)\n$$ \nwith accuracy $k^{-1}$. Let $k=n^{1/2}$. The subscript $i$ relates to sample $\\omega_i$ and $j$ relates to the partition of $x$.\n\\item For any function $f$, there exist $\\{\\omega_i^\\ast\\}_{i=1}^n$ which determine the value of $a_{ij}^\\pm$ such that\n$$\n|f(x) - f^\\ast(x)|\\le n^{-1/2}.\n$$\n\\item Rewrite $f^\\ast$ as\n$$\nf^\\ast (x)=\\sum_{l=1}^N p_lf_l \\quad \\mbox{ with } N=2n(k-1)\n$$\nwith\n$\n\\displaystyle p_l={|a_l|\\over nd_l},\n$\n$\n\\displaystyle f_l=d_l\\/{\\rm sign}(a_l)H_l(\\omega_l^B\\cdot (x-x_B)).\n$\n\nLet $\\bar{f}$ be a random variable, $f_l$ with probability $p_l$. Then,\n$$\nf^\\ast (x) = \\mathbb{E}(\\bar f).\n$$\n\\end{enumerate}\n\\fi
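The existence of a good selection $\\ell_1,\\dots,\\ell_n$ rests only on the expectation bound of Lemma \\ref{MC}; the sketch below checks the bound empirically for an arbitrary bounded family (all data assumed for illustration, with $\\|f_\\ell\\|_\\infty\\le 2$).\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(1)\n\nN, n, trials = 50, 25, 2000\nx = np.linspace(-1.0, 1.0, 201)\np = rng.random(N)\np /= p.sum()                                        # probabilities p_l\nF = 2.0 * np.sin(np.outer(np.arange(1, N + 1), x))  # rows f_l, |f_l| <= 2\nf_star = p @ F                                      # f* = sum_l p_l f_l\n\nerr2 = []\nfor _ in range(trials):\n    idx = rng.choice(N, size=n, p=p)                # draw l_1, ..., l_n ~ p\n    err2.append(np.mean((f_star - F[idx].mean(axis=0)) ** 2))\nprint(np.mean(err2) <= 4.0 / n)   # True, so some draw achieves the bound\n\\end{verbatim}\n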
\\subsection{Piecewise linear function and ReLU}\nThe proof here is almost the same as the proof for the Heaviside function in the last part. \n\nNow we take a piecewise linear interpolation for $g_i$ on $[0,1]$; since $g_i(t_0)=0$, we get\n$$\ng_{i,k}(t)=(\\Pi_k^1g_i)(t)=\\sum_{j=1}^{k}[g_i(t_{j})-g_i(t_{j-1})]\\sigma_{j-1}(t),   \\quad t\\in [0,1],\n$$\nwhere\n$$\n\\sigma_j(t)=M_0(\\frac{t-t_j}{h_k})\n$$\nand\n\\begin{equation}\n\t\\label{cardinalr}\n\tM_0(x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t0 & x\\le0 \\\\\n\t\tx & 0< x\\le1    \\\\\n\t\t1 & x > 1    \n\t\\end{array}\n\t\\right.\n\\end{equation}\n%We note that\n%$$\n%M_0(x)=ReLU(x)-ReLU(x-1)\n%$$\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\nConsider\n\\begin{equation}\n\th_i(t) = g_i(-t), \\quad t\\in [0,1].\n\\end{equation}\nSimilarly, we have\n$$\n(\\Pi_k^1h_i)(t)=\\sum_{j=1}^{k}(h_i(t_j) - h_i(t_{j-1}))\\sigma_{j-1}(t)\n=\\sum_{j=1}^{k}(g_i(-t_j) - g_i(-t_{j-1}))\\sigma_{j-1}(t).\n$$\nNamely\n$$\n(\\Pi_k^1 g_i)(-t)=\\sum_{j=1}^{k}(g_i(-t_j) - g_i(-t_{j-1}))\\sigma_{j-1}(t), \\quad t\\in [0,1],\n$$\nor\n\\begin{equation}\\label{gi1r}\n\t(\\Pi_k^1 g_i)(t)=\\sum_{j=1}^{k}(g_i(-t_j) - g_i(-t_{j-1}))\\sigma_{j-1}(-t), \\quad t\\in [-1,0].\n\\end{equation}\nCombining these, we get a piecewise linear\ninterpolation of $g_i$ on $[-1,1]$ as follows:\n\\begin{eqnarray}\n\tg_{i,k}(t)&=&\n\t\\sum_{j=1}^{k}(g_i(-t_j) - g_i(-t_{j-1}))\\sigma_{j-1}(-t)+\\sum_{j=1}^{k}(g_i(t_j) - g_i(t_{j-1}))\\sigma_{j-1}(t)\\nonumber \\\\ \n\t&=&\\sum_{j=1}^{k}[a_{ij}^-\\sigma_{j-1}(-t)+a_{ij}^+\\sigma_{j-1}(t)], \\label{gihr}\n\t\\quad t\\in [-1,1],\n\\end{eqnarray}\nwhere \n$$\na_{ij}^{\\pm}=g_i(\\pm t_j) - g_i(\\pm t_{j-1}).\n$$\n
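A minimal numerical sketch (arbitrary stand-in parameters) checks the identity $M_0(x)={\\rm ReLU}(x)-{\\rm ReLU}(x-1)$ used below and the piecewise linear interpolant on $[0,1]$; for the smooth $g_i$ at hand the interpolation error is in fact of second order in $h_k$.\n\\begin{verbatim}\nimport numpy as np\n\nrelu = lambda x: np.maximum(x, 0.0)\nM0 = lambda x: np.clip(x, 0.0, 1.0)   # M_0 from the display above\nx = np.linspace(-3.0, 3.0, 601)\nprint(np.allclose(M0(x), relu(x) - relu(x - 1.0)))   # prints True\n\nw, th = 3.0, 0.7   # arbitrary stand-ins for |omega_i|_B, theta_B(omega_i)\ng = lambda t: (np.cos(w * t + th) - np.cos(th)) / w\nk = 10\nh = 1.0 / k\ntj = h * np.arange(k + 1)\nt = np.linspace(0.0, 1.0, 2001)\ngk = sum((g(tj[j]) - g(tj[j - 1])) * M0((t - tj[j - 1]) / h)\n         for j in range(1, k + 1))   # piecewise linear interpolant\nprint(np.max(np.abs(g(t) - gk)) <= w * h ** 2 / 8 + 1e-12)   # prints True\n\\end{verbatim}\n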
%It is easy to see that\n%\\begin{equation}\n%|g_i(t)-g_{i,k}(t)|\\le h_k, \\quad t\\in [-1,1].\n%\\end{equation} \n%\\begin{equation}\n%\\|\\frac1n \\sum_{i=1}^n \\frac{g(\\cdot,\\omega_i)}{|\\omega_i|_B}-f^*\\|_{L^2(\\mu,B)}\\le h_k\n%\\end{equation}\n%where\n%\\begin{equation}\n%\\label{fstar}\n%f^*(x)=\n%\\frac1n\\sum_{i=1}^ng_{i,k}(\\omega_i^B\\cdot (x-x_B)).\n%\\end{equation}\n%\n%By Theorem \\ref{jones}, we have \n%\\begin{equation}\n%%\\|\\tilde f-f^*\\|_{1,B}\\le \\frac{C_1(d,B)}{\\sqrt{n}}\n%  \\|\\tilde f-f^*\\|_{L^2(\\mu,B)}\\le \\frac{2}{\\sqrt{n}}\n%\\end{equation}\n%here $C_1(d,B)=\\sqrt{\\mu(B)}[\\sqrt{(d+{\\rm diam}(B))}+{\\rm diam}(B)(1+d)]$.\n\n\n%Let us rewrite\n%$$\n%f^*(x)\n%=\\sum_{i=1}^n\\sum_{j=1}^{k}[\\gamma_{ij}^- f_{ij}^-+ \\gamma_{ij}^+f_{ij}^+]\n%$$\n%where\n%$$\n%\\gamma_{ij}^{\\pm}=\\frac{|a^\\pm_{ij}|}{nd_i}, \n%f^\\pm_{i,j}=d_i\\/{\\rm sign}(a^\\pm_{ij})\\sigma_{j-1}(\\pm \\omega_i^B\\cdot (x-x_B))\n%$$\n%and \n%$$\n%d_i=\\sum_{j=1}^{k}(|a^-_{ij}|+|a^+_{ij}|)\\le 2\n%$$\n%By definition\n%\\begin{equation}\n%\\label{gammaij}\n%\\sum_{i=1}^n\\sum_{j=1}^{k}[\\gamma_{i,j}^-+\\gamma_{i,j}^+]=1.\n%\\end{equation}\n%With re-numeration as\n%$$\n%p_\\ell=\\gamma_{ij}^{\\pm}, f_\\ell = f_{ij}^{\\pm}, 1\\le \\ell \\le N=2nk\n%$$\n%We have\n%$$\n%f^*(x)=\\sum_{\\ell=1}^N p_\\ell f_\\ell\n%$$\n%Consider \n%$$\n%\\mathcal N=\\{1,2,\\ldots, N\\}\n%$$\n%and \n%$$\n%\\bar f: \\mathcal N\\mapsto \\mathbb R^1\n%$$\n%such that\n%$$\n%\\bar f(\\ell)=f_{\\ell}, \\ell\\in \\mathcal N\n%$$\n%With the probability measure\n%$$\n%\\lambda(\\mathcal M)=\\sum_{m\\in \\mathcal M}p_m \\quad \\mathcal M\\subset\\mathcal N.\n%$$\n%By definition. \n%$$\n%\\mathbb E(\\bar f) = f^*(x).\n%$$\n%\n%By the basic result on expectation:\n%\\begin{equation}\n%\\begin{aligned}\n%&\\sum_{\\ell_1,\\ldots \\ell_n=1}^Np_{\\ell_1}\\cdots p_{\\ell_n}\\left((f^*(x)-\n%{1\\over n}\\sum_{i=1}^n f_{\\ell_i})^2\\right)\\\\\n%=&\\mathbb E_{\\mathcal N^n} \\left(\\mathbb E(\\bar f)-{1\\over\n%\tn}\\sum_{i=1}^n  \\bar f(\\ell_i)\\right)^2\n%\\le\\frac1n \\|\\bar f\\|_{\\infty}^2 \\le\\frac{4}{n}.\n%\\end{aligned}\n%\\end{equation}\n%\n%By taking the $L^2(\\mu,B)$ on the above inequality, we get\n%$$\n%\\sum_{\\ell_1,\\ldots \\ell_n=1}^Np_{\\ell_1}\\cdots p_{\\ell_n}\\|f^*-{1\\over n}\\sum_{i=1}^n f_{\\ell_i}\\|_{L^2(\\mu,B)}^2\n%\\le\\frac4n.\n%$$\n%Thus, there exisit $\\ell_1^*, \\ldots, \\ell_n^*\\in \\mathcal N$ such that\n%$$\n%\\|f^*-f_n(x)\\|_{L^2(\\mu,B)}^2\n%\\le\\frac4n.\n%$$\n%where\n%$$\n%f_n(x)={1\\over n}\\sum_{i=1}^n f_{\\ell_i^*}(x).\n%$$\n%Then we have \n%\\begin{equation}\n%\\|\\tilde f-f_n\\|^2_{L^2(\\mu,B)}\\le \\frac{9}{n}.\n%\\end{equation}\n\n%Consequently\n%\\begin{equation}\n%\\left\\|f(x)-f(x_B)-\\|f\\|_Bf_n\\right\\|^2_{L^2(\\mu,B)}\\le \\frac{9\\|f\\|^2_B}{n}.\n%\\end{equation}\n\nFollowing the procedure in the last section, and noticing that $\\sigma(x)={\\rm ReLU}(x)-{\\rm ReLU}(x-1)$, we obtain the following theorem.\n\n\n\\begin{theorem}\n\tFor a probability measure $\\mu$ on $B$ and every function $f$ with $\\|f\\|_B<\\infty$, there exist $\\omega_1,\\dots,\\omega_n\\in\\mathbb{R}^d$ such that \n\t$$\n\t\\left\\|f(x)-f_n(x)\\right\\|^2_{L^2(\\mu,B)}\\le \\frac{C\\|f\\|^2_B}{n},\n\t$$\n\twhere $f_n(x)=\\sum_{i=1}^{n}a_i\\,{\\rm ReLU}(\\omega_i\\cdot x+b_i)+c$.\n\\end{theorem}\n\n\n\n\n\n%\\section{Barron theory of approximation}\n%%\\subsection{A DL function class}\n%\\subsection{Approximation result for cosine as activation function}\n%For clarity, let us give a brief introduction of the approximation\n%result based results in Barron \\cite{barron1993universal} and some\n%modification in Xu \\cite{xu2017approximation}.\n%\n%Given $d$ and $n$, consider the following nonlinear space in $\\mathbb\n%R^d$:\n%\\begin{equation}\n%\\label{Vn}\n%V_n=\\left\\{v: v(x)={a\\over n}\\sum_{j=1}^n\\cos(\\omega_j\\cdot x+b_j)+c,\n%  \\mbox{ with }\n%a, b_j, c\\in\\mathbb R, \\omega_j\\in\\mathbb R^d\n%\\right\\}\n%\\end{equation}\n%\\begin{theorem}  Given a bounded domain $\\Omega\\subset\\mathbb R^d$ \n% and a function $u:\\mathbb R^d\\mapsto\\mathbb R$ whose Fourier\n% transform $\\hat u$ satisfying $\\|\\hat u\\|_{L^1(\\mathbb R^d)}<\\infty$.\n%Then, for any $n\\ge 1$,  there exists $u_n\\in V_n$  such that\n%\\begin{equation}\n%\\|u-u_n\\|_{L^2(\\Omega)}\\le \\frac{2|\\Omega|^{1\\over 2}\\|\\hat u\\|_{L^1(\\mathbb R^d)}}{\\sqrt{n}}.\n%\\end{equation}\n%where $u_n(x)={a\\over n}\\sum_{j=1}^n\\cos(\\omega_j\\cdot x+b_j)+c$ satisfying\n%\\begin{equation}\n%\\label{abc}\n%a=  \\|\\hat u\\|_{L^1(\\mathbb R^d)}, \\quad |b_j|\\le \\pi, \\quad |c|\\le \\|u\\|_{0,\\infty,\\Omega}+\\|\\hat u\\|_{L^1(\\mathbb R^d)}\n%\\end{equation}\n%\\end{theorem}\n%\\begin{proof}\\small\n%We write  \n%$\\hat{u}(\\omega)=e^{i\\theta(\\omega)}|\\hat{u}(\\omega)|$. 
By Fourier inversion formula,\n%\\begin{equation}\n%  \\label{eqn1}\n%  u(x)=\\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}\\hat{u}(\\omega)d\\omega\n%=\\int_{\\mathbb{R}^d}e^{i(\\omega\\cdot x+\\theta(\\omega))}|\\hat{u}(\\omega)|\n%\\end{equation}\n%Since $u(x)$ is real-valued,  the above identity implies that,  for $x, x^*\\in \\Omega$\n%\\begin{equation}    \\label{eqn4.3}\n%u(x)-u(x^*)\n%={\\rm Re}\\int_{\\mathbb{R}^d}\n%(e^{i\\omega\\cdot x}-e^{i\\omega\\cdot x^*})  \n% e^{i\\theta\n%    (\\omega)}|\\hat{u}(\\omega)|d\\omega \n%=\\|\\hat u\\|_{L^1(\\mathbb R^d)}\\int_{\\mathbb{R}^d}g(x,\\omega)\\lambda(\\omega)d\\omega \n%  \\end{equation}\n%where $g: \\Omega\\times \\mathbb{R}^d\\rightarrow \\mathbb{R}$\n%and $\\lambda:\\mathbb R^d\\mapsto\\mathbb R$ is a  probability distribution density function: \n%\\begin{equation}\\label{gz}\n%  g(x,\\omega):=\n%\\cos(\\omega\\cdot x+\\theta(\\omega))  -\n%\\cos(\\omega\\cdot x^*+\\theta(\\omega)) , \\quad\n%\\lambda(\\omega)=|\\hat{u}(\\omega)|/\\|\\hat u\\|_{L^1(\\mathbb R^d)}.\n%\\end{equation}\n%Using $\\lambda$, we define the expectation $\\mathbb E$ and\n%$\\mathbb{\\bar E}$ for $u: \\mathbb R^d\\mapsto \\mathbb R$ and\n%$v: (\\mathbb R^d)^n\\equiv\\mathbb R^d\\times\\cdots\\times\\mathbb R^d\\mapsto \\mathbb R$\n%$$\n%\\mathbb E(u)\\equiv\\int_{\\mathbb R^d}u(\\omega)\\lambda(\\omega)d\\omega,  \\quad\n%\\mathbb{\\bar E}(v)\\equiv\\int_{(\\mathbb\n%  R^d)^n}v(\\omega_1,\\cdots,\\omega_n)\n%\\lambda(\\omega_1)\\cdots\\lambda(\\omega_n)d\\omega_1\\cdots d\\omega_n  \n%$$\n%Denote$\\tilde u(x)=[u(x)-u(x^*)]/\\|\\hat u\\|_{L^1(\\mathbb R^d)}$, by\n%\\eqref{eqn4.3} and a direct (and standard) calculation\n%$$\n%\\mathbb{\\bar E}(\\tilde u(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j))^2 \n%=\\mathbb{\\bar E}(\\mathbb{E}[g(x,\\cdot)]-\n%     \\frac1n\\sum_{j=1}^n g(x,\\omega_j))^2 \n%\\le\\frac{1}{n}\\max_{x\\in \\Omega, \\omega\\in \\mathbb R^d}|g(x,\\omega)|^2\n%\\le \\frac{4}{n}\n%$$\n%By Fubini's Theorem,\n%$\\mathbb{\\bar E}\\int_\\Omega(\\tilde u(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j))^2 dx\n%\\le \\frac{4}{n}|\\Omega|.\n%$\n%Thus $\\exists$ $ \\omega_j^*\\in \\mathbb{R}^d$ such that \n%$$\n%\\int_\\Omega  (\\tilde u(x)-\\frac1n\\sum_{j=1}^ng(x,\\omega_j^*))^2dx\\le \\frac{4}{n}|\\Omega|.\n%$$\n%The desired result then follows easily. 
\n%\\end{proof}\n%To prove an $H^1$ estimate, we use the following density function\n%$$\n%\\lambda_1(\\omega)=\\frac{|\\omega||\\hat u(\\omega)|}{\\|\\widehat{\\nabla u}\\|_{L^1(\\Omega)}}\n%\\quad\n%\\|\\widehat{\\nabla u}\\|_{L^1(\\Omega)}=\\int_{\\mathbb R^d}|\\omega||\\hat u(\\omega)|.\n%$$\n%Similar to equation \\eqref{eqn4.3}, we have\n%$$\n%\\tilde f(x)\\equiv \\frac{u(x)-u(x^*)}{\\|\\widehat{\\nabla u}\\|_{L^1(\\Omega)}}\n%=\n%\\int_{\\mathbb{R}^d}\\frac{g(x,\\omega)}{|\\omega|}\\lambda_1(\\omega)d\\omega \n%$$\n%It follows that\n%$$\n%\\partial_k\\tilde f(x)\n%=\n%\\int_{\\mathbb{R}^d}\\frac{\\partial_kg(x,\\omega)}{|\\omega|}\\lambda_1(\\omega)d\\omega \n%$$\n%We note that\n%$$\n%\\max_{x\\in \\Omega, \\omega\\in \\mathbb R^d}\n%\\frac{|g(x,\\omega)|}{|\\omega|}\\le {\\rm diam}(\\Omega), \n%\\quad\n%\\max_{x\\in \\Omega, \\omega\\in \\mathbb R^d}\n%\\frac{|\\partial_kg(x,\\omega)|}{|\\omega|}\\le 1\n%$$\n%Setting\n%$$\n%v_n=\\frac1n\\sum_{j=1}^n\\frac{g(x,\\omega_j)}{|\\omega_j|}\n%$$\n%By a similar argument, we obtain\n%$$\n%\\mathbb{\\bar E}\n%\\int_\\Omega\\left((\\tilde u(x)-v_n(x))^2\n%+\\sum_{k=1}^d (\\partial_k\\tilde u(x)-\\partial_kv_n(x))^2\n%\\right) dx\n%\\le \\frac{1}{n}[C(d,\\Omega)]^2 \n%$$\n%where\n%$\n%C(d,\\Omega)=[|\\Omega| ({\\rm diam}(\\Omega)+d)]^{1/2}.\n%$\n%This implies that $\\exists$ $ \\omega_j^*\\in \\mathbb{R}^d$ such that  \n%$$\n%\\left\\|\\tilde u(\\cdot)-{1\\over n}\\sum_{j=1}^n\\frac{g(\\cdot,\\omega_j^*)}{|\\omega_j^*|}\\right\\|_{1,\\Omega}^2\n%\\le \\frac{[C(d,\\Omega)]^2}{n}\n%$$\n%\\begin{theorem}  \\label{thm:DLH1}\n%Given a bounded domain $\\Omega\\subset\\mathbb R^d$ \n% and a function $u:\\mathbb R^d\\mapsto\\mathbb R$ whose Fourier\n% transform $\\hat u$ satisfying $\\|\\widehat{\\nabla u}\\|_{L^1(\\mathbb R^d)}<\\infty$.\n%Then, for any $n\\ge 1$,  there exists $u_n\\in V_n$  such that\n%\\begin{equation}\n%\\|u-u_n\\|_{H^1(\\Omega)}\\le \\frac{C(d,\\Omega)}{\\sqrt{n}})\\|\\widehat{\\nabla u}\\|_{L^1(\\mathbb R^d)}\n%\\end{equation}\n%where \n%\\begin{equation}\n%  \\label{un}\n%u_n(x)=\\sum_{j=1}^na_j\\cos(\\omega_j\\cdot x+b_j)+c \\mbox{ for some }\n%a_j,b_j,c\\in \\mathbb R, \\omega_j\\in\\mathbb R^d.\n%\\end{equation}\n%\\end{theorem}\n% \n%One remarkable fact is that Theorem \\ref{thm:DLH1} holds in any spatial\n%dimension and any bounded domain $\\Omega$ of any geometric shape.\n%Theorem \\ref{thm:DLH1} is only interesting when $d\\ge 2$.  For $d=1$,\n%we can prove much stronger results easily.  For example, if $u$\n%is sufficiently smooth, for any $m\\ge 1$, we can find functions $u_n$\n%in the form of \\eqref{un} such that \n%\\begin{equation}\n%  \\label{uun1}\n%\\|u-u_n\\|_{1,\\Omega}\\le \\frac{C_1(m,u)}{n^{m}}.  \n%\\end{equation}\n%\\newpage \n\n\n% \\subsection{Ritz optimization problem}\n%An optimized discretization based on machine learning amounts to solve a nonlinear problem even when the partial different equation is linear.   When adaptivity is considered, it is desirable to solve an optimization problem.  
For example, for a standard linear partial differential equation such as\n%\\begin{equation}\n%  \\label{laplace}\n%-\\Delta u=f  \n%\\end{equation}\n%We will solve it by solving the optimization problem\n%\\begin{equation}\n%\\label{ritz}\n%\\min_{v\\in V}J(v)\n%\\end{equation}\n%where\n%\\begin{equation}\n%\\label{ritz}\n%J(v)=\\int_\\Omega\\left( {1\\over 2}|\\nabla v|^2 -fv\\right)  \n%\\end{equation}\n%The following identity can be easily verified:\n%\\begin{equation}\n%  \\label{JvH1}\n%|u-v|_1^2=2J(v)+|u|_1^2. \n%\\end{equation}\n%This means that \n%\\begin{equation}\n%\\label{minmin}\n%\\arg\\min_{v}|u-v|_1^2=\n%\\arg\\min_{v}J(v)\n%\\end{equation}\n%Using the approximation result, we get the following error estimate for the cosine based space:\n%\\begin{equation}\n%  \\label{cosError}\n%\\min_{v}|u-v|_1\\le \\frac{2}{\\sqrt{n}}\\int_{\\mathbb R^d}|\\omega||\\hat f(\\omega)|d\\omega.\n%\\end{equation}\n%where the Fourier transform can be computed with any extension of $f$ onto the entire space $\\mathbb R^d$. \n%\n%Similarly, following error estimate for the ReLU based space:\n%\\begin{equation}\n%  \\label{ReLUError}\n%\\min_{v}|u-v|_1\\le \\frac{2}{\\sqrt{n}}\\int_{\\mathbb R^d}|\\omega|^2|\\hat f(\\omega)|d\\omega.\n%\\end{equation}\n%it would be interesting to compare the space of the\n%approximation-class shown in \\eqref{cosError} and \\eqref{ReLUError}\n%with those used in the classic adaptive finite element methods (see\n%results by Nochetto et al).\n%\n%Recall the best error estimates from the classic adaptive finite\n%element methods is of the following form:\n%\\begin{equation}\n%  \\label{AFEM-error}\n%|u-u_n|_{1,\\Omega}\\le \\frac{c}{n^{1\\over d}}\\|u\\|_A\n%\\end{equation}\n%for some norm $\\|\\cdot\\|$ in approximation class $\\mathcal A$.\n%\n%Comparing the error estimate in \\eqref{AFEM-error} with those in\n%\\eqref{cosError} or \\eqref{ReLUError}, the ML-FEM appears to be much\n%better in terms of asymptotic order?\n%\n%In fact, according to the result by Lin, Xie and Xu, we have the\n%following lower bound for linear elements for smooth solution\n%discretized \n%$$\n%|u-u_n|_{1,\\Omega}\\ge \\frac{c(u)}{n^{1\\over d}}\n%$$\n%as long as $u$ is not a linear polynomial. \n%\n%\n%For linear model \\eqref{laplace}, the optimization problem\n%\\eqref{ritz} is a convex problem that has a unique solution.  But if\n%we use a nonlinear space $V$ the optimization problem \\eqref{ritz}\n%become non-convex.  It is well-known that non-convex optimization\n%problem is hard to solve.  
Therefore, the ML-FEM is only viable if the\n%non-convex optimization problem can be solved efficiently!\n\n", "meta": {"hexsha": "9ed21f879ff67a1ca9f6922ce2599cf66fbf466a", "size": 31310, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Barron-Approx.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Barron-Approx.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Barron-Approx.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.1852589641, "max_line_length": 300, "alphanum_fraction": 0.6154263813, "num_tokens": 13822, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5812652868643092}}
{"text": "\\title{Batch Training}\n\n\\subsection{Batch Training}\n\nRunning algorithms which require the full data set for each update\ncan be expensive when the data is large. In order to scale inferences,\nwe can do \\emph{batch training}. This trains the model using\nonly a subsample of data at a time.\n\nIn this tutorial, we extend the\n\\href{http://edwardlib.org/tutorials/supervised-regression}\n{supervised learning tutorial},\nwhere the task is to infer hidden structure from\nlabeled examples $\\{(x_n, y_n)\\}$.\nAn interactive version with Jupyter notebook is available\n\\href{http://nbviewer.jupyter.org/github/blei-lab/edward/blob/master/notebooks/batch_training.ipynb}{here}.\n\n\\subsubsection{Data}\n\nSimulate $N$ training examples and a fixed number of test examples.\nEach example is a pair of inputs $\\mathbf{x}_n\\in\\mathbb{R}^{10}$ and\noutputs $y_n\\in\\mathbb{R}$. They have a linear dependence with\nnormally distributed noise.\n\n\\begin{lstlisting}[language=Python]\ndef build_toy_dataset(N, w):\n  D = len(w)\n  x = np.random.normal(0.0, 2.0, size=(N, D))\n  y = np.dot(x, w) + np.random.normal(0.0, 0.05, size=N)\n  return x, y\n\nN = 10000  # size of training data\nM = 128    # batch size during training\nD = 10     # number of features\n\nw_true = np.ones(D) * 5\nX_train, y_train = build_toy_dataset(N, w_true)\nX_test, y_test = build_toy_dataset(235, w_true)\n\\end{lstlisting}\n\nWe also define a helper function to select the next batch of data points\nfrom the full set of examples. It keeps track of the current batch index\nusing an internal counter.\n\n\\begin{lstlisting}[language=Python]\ndef next_batch(X, y, size):\n  \"\"\"Grab a [size, :] tensor from X and [size] tensor from y.\"\"\"\n  global i\n  try:  # initialize batch index\n    i\n  except:\n    i = 0\n\n  N = X.shape[0]\n  diff = (i + 1) * size - N\n  if diff <= 0:\n    X_batch = X[(i * size):((i + 1) * size), :]\n    y_batch = y[(i * size):((i + 1) * size)]\n    i += 1\n  else:\n    X_batch = np.concatenate((X[(i * size):, :], X[:diff, :]))\n    y_batch = np.concatenate((y[(i * size):], y[:diff]))\n    i = 0\n\n  return X_batch, y_batch\n\\end{lstlisting}\n\nWe will use \\texttt{next_batch} during inference.\n\n\\subsubsection{Model}\n\nPosit the model as Bayesian linear regression \\citep{murphy2012machine}.\nFor a set of $N$ data points $(\\mathbf{X},\\mathbf{y})=\\{(\\mathbf{x}_n, y_n)\\}$,\nthe model posits the following distributions:\n\n\\begin{align*}\n  p(\\mathbf{w})\n  &=\n  \\text{Normal}(\\mathbf{w} \\mid \\mathbf{0}, \\sigma_w^2\\mathbf{I}),\n  \\\\[1.5ex]\n  p(b)\n  &=\n  \\text{Normal}(b \\mid 0, \\sigma_b^2),\n  \\\\\n  p(\\mathbf{y} \\mid \\mathbf{w}, b, \\mathbf{X})\n  &=\n  \\prod_{n=1}^N\n  \\text{Normal}(y_n \\mid \\mathbf{x}_n^\\top\\mathbf{w} + b, \\sigma_y^2).\n\\end{align*}\n\nThe latent variables are the linear model's weights $\\mathbf{w}$ and\nintercept $b$, also known as the bias.\nAssume $\\sigma_w^2,\\sigma_b^2$ are known prior variances and $\\sigma_y^2$ is a\nknown likelihood variance. 
The mean of the likelihood is given by a\nlinear transformation of the inputs $\\mathbf{x}_n$.\n\nLet's build the model in Edward, fixing $\\sigma_w,\\sigma_b,\\sigma_y=1$.\n\n\\begin{lstlisting}[language=Python]\nX = tf.placeholder(tf.float32, [None, D])\ny_ph = tf.placeholder(tf.float32, [None])\n\nw = Normal(loc=tf.zeros(D), scale=tf.ones(D))\nb = Normal(loc=tf.zeros(1), scale=tf.ones(1))\ny = Normal(loc=ed.dot(X, w) + b, scale=1.0)\n\\end{lstlisting}\n\nHere, we define a placeholder \\texttt{X}. During inference, we pass in\nthe value for this placeholder according to batches of data.\nTo enable training with batches of varying size,\nwe don't fix the number of rows for \\texttt{X} and \\texttt{y}. (Alternatively,\nwe could fix it to be the batch size if we're training and testing\nwith a fixed size.)\n\n\\subsubsection{Inference}\n\nWe now turn to inferring the posterior using variational inference.\nDefine the variational model to be a fully factorized normal across\nthe weights.\n\\begin{lstlisting}[language=Python]\nqw = Normal(loc=tf.Variable(tf.random_normal([D])),\n            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D]))))\nqb = Normal(loc=tf.Variable(tf.random_normal([1])),\n            scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))\n\\end{lstlisting}\n\nRun variational inference with the Kullback-Leibler divergence.\nWe use $5$ latent variable samples for computing\nblack box stochastic gradients in the algorithm.\n(For more details, see the\n\\href{/tutorials/klqp}{$\\text{KL}(q\\|p)$ tutorial}.)\n\nFor batch training, we iterate over the number of batches and\nfeed them to the respective placeholder. We set the number of\niterations to be the total number of batches for 5 epochs\n(full passes over the data set).\n\n\\begin{lstlisting}[language=Python]\nn_batch = int(N / M)\nn_epoch = 5\n\ninference = ed.KLqp({w: qw, b: qb}, data={y: y_ph})\ninference.initialize(n_iter=n_batch * n_epoch, n_samples=5, scale={y: N / M})\ntf.global_variables_initializer().run()\n\nfor _ in range(inference.n_iter):\n  X_batch, y_batch = next_batch(X_train, y_train, M)\n  info_dict = inference.update({X: X_batch, y_ph: y_batch})\n  inference.print_progress(info_dict)\n\\end{lstlisting}\n\n\\begin{lstlisting}\n390/390 [100%] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 Elapsed: 4s | Loss: 10481.556\n\\end{lstlisting}\n\nWhen initializing inference, note we scale $y$ by $N/M$, so it is as if the\nalgorithm had seen $N/M$ as many data points per iteration.\nAlgorithmically, this will scale all computation regarding $y$ by\n$N/M$ such as scaling the log-likelihood in a variational method's\nobjective. (Statistically, this avoids inference being dominated by the prior.)\n\nThe loop construction makes training very flexible. 
For example, we\ncan also try running many updates for each batch.\n\n\\begin{lstlisting}[language=Python]\nn_batch = int(N / M)\nn_epoch = 1\n\ninference = ed.KLqp({w: qw, b: qb}, data={y: y_ph})\ninference.initialize(n_iter=n_batch * n_epoch * 10, n_samples=5, scale={y: N / M})\ntf.global_variables_initializer().run()\n\nfor _ in range(inference.n_iter // 10):\n  X_batch, y_batch = next_batch(X_train, y_train, M)\n  for _ in range(10):\n    info_dict = inference.update({X: X_batch, y_ph: y_batch})\n\n  inference.print_progress(info_dict)\n\\end{lstlisting}\n\n\\begin{lstlisting}\n770/780 [ 98%] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588  ETA: 0s | Loss: 9760.541\n\\end{lstlisting}\n\nIn general, make sure that the total number of training iterations is\nspecified correctly when initializing \\texttt{inference}. Otherwise an incorrect\nnumber of training iterations can have unintended consequences; for example,\n\\texttt{ed.KLqp} uses an internal counter to appropriately decay its optimizer's\nlearning rate step size.\n\nNote also that the reported \\texttt{loss} value as we run the\nalgorithm corresponds to the computed objective given the current\nbatch and not the total data set. We can instead have it report\nthe loss over the total data set by summing \\texttt{info_dict['loss']}\nfor each epoch.\n\n\\subsubsection{Criticism}\n\nA standard evaluation for regression is to compare prediction accuracy on\nheld-out ``testing'' data. We do this by first forming the posterior predictive\ndistribution.\n\\begin{lstlisting}[language=Python]\ny_post = ed.copy(y, {w: qw, b: qb})\n# This is equivalent to\n# y_post = Normal(loc=ed.dot(X, qw) + qb, scale=tf.ones(N))\n\\end{lstlisting}\n\nWith this we can evaluate various quantities using predictions from\nthe model (posterior predictive).\n\\begin{lstlisting}[language=Python]\nprint(\"Mean squared error on test data:\")\nprint(ed.evaluate('mean_squared_error', data={X: X_test, y_post: y_test}))\n\nprint(\"Mean absolute error on test data:\")\nprint(ed.evaluate('mean_absolute_error', data={X: X_test, y_post: y_test}))\n\\end{lstlisting}\n\n\\begin{lstlisting}\n## Mean squared error on test data:\n## 0.00659598\n## Mean absolute error on test data:\n## 0.0705906\n\\end{lstlisting}\n\nThe trained model makes predictions with low error\n(relative to the magnitude of the output).\n\n\\subsubsection{Footnotes}\n\nOnly certain algorithms support batch training such as\n\\texttt{MAP}, \\texttt{KLqp}, and \\texttt{SGLD}. 
Also, above we\nillustrated batch training for models with only global latent variables,\nwhich are variables shared across all data points.\nFor more complex strategies, see the\n\\href{http://edwardlib.org/api/inference-data-subsampling} {inference\ndata subsampling API}.\n\n\\subsubsection{References}\\label{references}\n", "meta": {"hexsha": "895cdd37643d507d6433f24a9a47329f8b24e144", "size": 8287, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/tutorials/batch-training.tex", "max_stars_repo_name": "NunoEdgarGFlowHub/edward", "max_stars_repo_head_hexsha": "298fb539261c71e34d5e7aa5a37ed8a029df0820", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/tex/tutorials/batch-training.tex", "max_issues_repo_name": "NunoEdgarGFlowHub/edward", "max_issues_repo_head_hexsha": "298fb539261c71e34d5e7aa5a37ed8a029df0820", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tex/tutorials/batch-training.tex", "max_forks_repo_name": "NunoEdgarGFlowHub/edward", "max_forks_repo_head_hexsha": "298fb539261c71e34d5e7aa5a37ed8a029df0820", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-13T06:58:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-13T06:58:00.000Z", "avg_line_length": 34.6736401674, "max_line_length": 107, "alphanum_fraction": 0.7259563171, "num_tokens": 2333, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5812652823633354}}
{"text": "\\section{Regimes}\n\\label{s:regimes}\n\nIn this section we discuss the various limits of\nthe expression describing the Hanle precession curve.\nFirst, we show that the commonly used results for zero magnetic field\nand tunneling contacts are correctly reproduced.\nNext, we discuss regimes where appropriate scaling will give non-unique Hanle fits.\nIn the following, we consider the case $r = r_0 = r_L$ of similar contacts.\n\nIn the limit of tunneling contacts, $R_C^0, R_C^L \u226b R_F$.\nPutting $r_0, r_L \u2192 \u221e$ gives $p_1 p_2 \u2192 \\left( P_\u03a3^L \\right)^2$ and\n\\begin{equation}\n  f^\u221e = \\re{\\frac{e^{- \\left( L / \u03bb \\right) \\sqrt{1 + i \u03c9 \u03c4}}}{2 \\sqrt{1 + i \u03c9 \u03c4}}} ,\n\\end{equation}\nwhich is of the same form as found in appendix B of\n\\cite{PhysRevB.37.5312}\n(we will denote this limit with the superscript $\u221e$).\nFitting with this expression was found to give results equivalent\nto fitting with the Hanle equation\n\\begin{equation}\n  \\label{eq:hanle_integral}\n  \\rNL^\u00b1 = \u00b1 \\sNL \u222b_0^\u221e \\frac{e^{-t / \u03c4}}{\\sqrt{4 \u03c0 D t}}\n             \\exp{\\left[- \\frac{L^2}{4 D t} \\right]} \\cos{\u03c9 t} \\: dt .\n\\end{equation}\nThe agreement is expected as an explicit integration of \\cref{eq:hanle_integral}\nyields the same analytic expression with the identification\n$\\sNL = {p_1 p_2 D} / {W \u03c3_G}$.\nIn the additional limit of zero magnetic field,\n\\begin{equation}\n  \u0394 \\rNL = \\left( P_\u03a3^L \\right)^2 R_N e^{- L / \u03bb} ,\n\\end{equation}\nwhich agrees with equation (6) in\n\\cite{PhysRevB.67.052409}.\n\nLet $f_0$ denote $f$ at zero magnetic field,\n\\begin{equation}\n  f_0 = \\left[ 2 \\left( 1 + \u03bb / r \\right) e^{L / \u03bb} + \\left( \u03bb / r \\right)^2 \\sinh{L / \u03bb} \\right]^{-1} ,\n\\end{equation}\nwhich agrees with equation (3) in\n\\cite{PhysRevB.80.214427}.\n\nTo further explore the nature of the Hanle curves,\nwe exploit the fact that it only depends on\nthe dimensionless ratios $\u03bb / r$, $L / \u03bb$, and $\u03c9 \u03c4$.\nThe only other parameter of the conducting channel that enters the expression\nis the overall scale $\u03bb$ in $R_N$.\nThe expression $f$ contains three terms\nwhich are of zeroth, first, and second order in $\u03bb / r$.\nThus, as the contact resistance decreases,\none goes from a device dominated by the first term to one dominated by the last.\nBut precisely how this comes about depends on the value of $\u03c9 \u03c4$.\n\nFor infinite contact resistance, it was pointed out that any rescaling\nof $g$, $\u03c4$ and $D$ that leaves $\u03bb$ and $\u03c9 \u03c4$ unchanged\nleads to the same Hanle precession curves\n\\cite{Swartz2013}.\nOur result shows that the same is also true\nwhen the contact resistance is taken into account.\nIn numerical simulations, interesting features were observed\nwhen $L / \u03bb \u226a 1$ and $r / \u03bb \u226a 1$\n\\cite{PhysRevB.86.235408}.\n\nTo compare across regimes, we first normalize the data to its value at zero magnetic field.\nIn devices where $\u03bb / r \u226b 1$, the normalization factor is\n\\begin{equation}\n  f_0 = \\frac{2 e^{- L / \u03bb}}{\\left( \u03bb / r \\right)^2} .\n\\end{equation}\nIn this regime, if $D$ is not very different from the infinite contact resistance value,\nthen the lifetime can be large, i.e., $\u03c4 \u226b \\SI{1}{\\nano \\second}$.\nAs one tunes the magnetic field $\\sqrt{\u03c9 \u03c4} \u226b 1$, for small values of the field,\nand for much of the curve, we can 
In this regime, if $D$ is not very different from the infinite contact resistance value,\nthen the lifetime can be large, i.e., $\u03c4 \u226b \\SI{1}{\\nano \\second}$.\nAs one tunes the magnetic field, $\\sqrt{\u03c9 \u03c4} \u226b 1$ already for small values of the field,\nso for much of the curve we can approximate $1 + i \u03c9 \u03c4 \u2248 i \u03c9 \u03c4$.\nAn interesting consequence of this is that the zero of the Hanle precession curve\nbecomes independent of the scattering time.\nNote that the product\n\\begin{equation}\n   \\frac{L}{\u03bb} \\sqrt{\u03c9 \u03c4} = L \\sqrt{\\frac{\u03c9}{D}} ,\n\\end{equation}\nwhich appears in the exponential and oscillating factors below,\nis independent of the lifetime.\nAs one further tunes the magnetic field, the Hanle curve is given by\n\\begin{equation}\n  \\label{eq:regime.1.f}\n  f = \\frac{\\sqrt{\u03c9 \u03c4}}{\\left( \u03bb / r \\right)^2}\n      e^{- \\left( L / \u03bb \\right) \\sqrt{\u03c9 \u03c4 / 2}}\n      \\sin{\\left[ \\frac{L}{\u03bb} \\sqrt{\\frac{\u03c9 \u03c4}{2}} + \\frac{\u03c0}{4} \\right]} ,\n\\end{equation}\nas long as $\u03bb / r \u226b \\sqrt{\u03c9 \u03c4} \u226b 1$.\nIn this limit, the nonlocal resistance scales as\n\\begin{equation}\n  \\label{eq:regime.1.nonlocal_resistance}\n  \u0394 \\rNL \u221d \\frac{\u03bb \\sqrt{\u03c9 \u03c4}}{\\left( \u03bb / r \\right)^2} = r^2 \\sqrt{\\frac{\u03c9}{D}} ,\n\\end{equation}\nand the normalized nonlocal resistance as\n\\begin{equation}\n  f / f_0 \u221d \\sqrt{\u03c9 \u03c4} .\n\\end{equation}\n\nOn further increasing the field,\n$\\sqrt{\u03c9 \u03c4} \u226b \u03bb / r \u226b 1$, we get\n\\begin{equation}\n  \\label{eq:regime.2.f}\n  f = \\frac{1}{2 \\sqrt{\u03c9 \u03c4}}\n      e^{- \\left( L / \u03bb \\right) \\sqrt{\u03c9 \u03c4 / 2}}\n      \\cos{\\left[ \\frac{L}{\u03bb} \\sqrt{\\frac{\u03c9 \u03c4}{2}} + \\frac{\u03c0}{4} \\right]} .\n\\end{equation}\nIn this limit, the nonlocal resistance scales as\n\\begin{equation}\n  \\label{eq:regime.2.nonlocal_resistance}\n  \u0394 \\rNL \u221d \\frac{\u03bb}{\\sqrt{\u03c9 \u03c4}} = \\sqrt{\\frac{D}{\u03c9}} ,\n\\end{equation}\nand the normalized nonlocal resistance as\n\\begin{equation}\n  \\label{eq:regime.2.ratio}\n  f / f_0 \u221d \\frac{\\left( \u03bb / r \\right)^2}{\\sqrt{\u03c9 \u03c4}} = D \\sqrt{\\frac{\u03c4}{\u03c9 r^4}} .\n\\end{equation}\n\nIn the limits of \\cref{eq:regime.1.f,eq:regime.2.f},\nthe zeros of the Hanle fit are independent of the lifetime\nand are determined by $D$ through the condition\n\\begin{equation}\n  L \\sqrt{\\frac{\u03c9}{2 D}} + \\frac{\u03c0}{4} = \\frac{n \u03c0}{2} ,\n\\end{equation}\nwhere $n = 0$ for \\cref{eq:regime.1.f} and $n = 1$ for \\cref{eq:regime.2.f}.\n\n\\begin{figure}[!b]\n  \\caption{\n    Data in figure 4 (d) from \\cite{PhysRevLett.105.167202}\n    fit to \\cref{eq:nonlocal_resistance}\n    with the same values as in\n    \\cref{fig:nonlocal_resistance.fits} (d).\n    Fits with lifetimes that differ by four orders of magnitude\n    were obtained by using different starting values for $\u03c4$.\n    These fits are otherwise similar with the exception of the lifetime,\n    demonstrating the $\u03c4$-independent scaling in \\cref{eq:regime.1.nonlocal_resistance}.\n    The $\u03c7^2$ for \\cref{fig:nonlocal_resistance.fits} (d)\n    is \\SI{2}{\\percent} less than the $\u03c7^2$ for\n    \\cref{fig:nonlocal_resistance.large_lifetime}.\n  }\n  \\label{fig:nonlocal_resistance.large_lifetime}\n  \\includegraphics[width=\\columnwidth]{figures/plot_fits_large_lifetime}\n\\end{figure}\n
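Both $\u03c4$-independent combinations are easy to verify directly; in the sketch below the values of $D$, $r$, $L$, and $\u03c9$ are arbitrary and we take $\u03bb = \\sqrt{D \u03c4}$.\n\\begin{verbatim}\nimport numpy as np\n\nD, r, L, omega = 0.02, 2e-6, 5e-6, 2 * np.pi * 1e9   # arbitrary values\nfor tau in [1e-11, 1e-9, 1e-7]:\n    lam = np.sqrt(D * tau)                            # spin diffusion length\n    phase = (L / lam) * np.sqrt(omega * tau)          # = L * sqrt(omega / D)\n    amp = lam * np.sqrt(omega * tau) / (lam / r)**2   # = r**2 * sqrt(omega / D)\n    print(phase, amp)   # identical for every tau\n\\end{verbatim}\n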
Note that fitting is insensitive to $\u03c4$ in the limit of\n\\cref{eq:regime.1.nonlocal_resistance}\nor \\cref{eq:regime.2.nonlocal_resistance}.\nAs an example of this,\n\\cref{fig:nonlocal_resistance.large_lifetime} shows nearly identical fits\nwith lifetimes that differ by four orders of magnitude.\nThese fits were obtained by choosing large starting values for $\u03c4$.\nFor \\cref{fig:nonlocal_resistance.fits} (d)\nand \\cref{fig:nonlocal_resistance.large_lifetime},\n$\u03c7^2 \u223c \\SI{7e-8}{}$, but the $\u03c7^2$ for\n\\cref{fig:nonlocal_resistance.fits} (d)\nis \\SI{2}{\\percent} less than the $\u03c7^2$ for\n\\cref{fig:nonlocal_resistance.large_lifetime}.\nIn \\cref{fig:nonlocal_resistance.fits} (d),\n$\u03bb / r \u226b \\sqrt{\u03c9 \u03c4}$ and $\u03c9 \u03c4 \u223c 1$ for most of the curve,\nso the approximation $1 + i \u03c9 \u03c4 \u2248 i \u03c9 \u03c4$ does not hold.\nHowever, \\cref{fig:nonlocal_resistance.large_lifetime}\nis in the limit of \\cref{eq:regime.1.nonlocal_resistance}\nfor all points (save the origin).\nThus, in the limit of small $r$, the fitted value of $\u03c4$ is unreliable\nunless one carefully controls the fitting procedure.\n\nThe evolution of the expression for the Hanle curve\noffers an interesting insight into the behavior of the device.\nFitting data on devices with small contact resistances\nwith the functional form applicable to infinite contact resistance\nyields unreliable parameters.\nIn particular, they were numerically shown to severely underestimate the spin lifetime\n\\cite{PhysRevB.86.235408}.\n\nFurther analytic progress can be made if one assumes that\nlifetimes as estimated with infinite contact resistance are long\nenough that the approximation of $\\sqrt{\u03c9 \u03c4} \u226b \u03bb / r \u226b 1$\nis still valid for much of the data being analyzed.\nFor this case, at infinite contact resistance,\nthe normalized nonlocal resistance is given by\n\\begin{equation}\n  \\frac{f^\u221e}{f^\u221e_0} = \\frac{1}{ \\sqrt{\u03c9 \u03c4}}\n                      e^{- \\left( L / \u03bb \\right) \\sqrt{\u03c9 \u03c4 / 2}}\n                      \\cos{\\left[ \\frac{L}{\u03bb} \\sqrt{\\frac{\u03c9 \u03c4}{2}} + \\frac{\u03c0}{4} \\right]} .\n\\end{equation}\nProvided $D$ remains constant,\nthis will yield the same curve with finite contact resistance if\n\\begin{equation}\n  \\label{eq:lifetime_scale.infinity}\n  \\frac{1}{\u03c4^\u221e} = D^2 \\frac{\u03c4}{r^4} .\n\\end{equation}\nIn other words, if we fix $\u03c4$ and ask what happens to the fitted value\nassuming infinite contact resistance as a function of decreasing $r$,\n\\cref{eq:lifetime_scale.infinity} shows that it will decrease as well.\nFor $D$ fixed, $\u03c4^\u221e \u221d r^4$.\nWhile the general trend is consistent with\n\\cite{PhysRevB.86.235408},\nthe quantitative agreement is limited by\nthe approximations made for analytic convenience.\n", "meta": {"hexsha": "f5eb508f0a93cafa4bdcd2ec243bec9a8da385d4", "size": 8506, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/_regimes.tex", "max_stars_repo_name": "evansosenko/aps-spin-lifetime", "max_stars_repo_head_hexsha": "1f7a6188a0320c0bf8f5441cfcf20861ce259acb", "max_stars_repo_licenses": ["Ruby"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/_regimes.tex", "max_issues_repo_name": "evansosenko/aps-spin-lifetime", "max_issues_repo_head_hexsha": "1f7a6188a0320c0bf8f5441cfcf20861ce259acb", "max_issues_repo_licenses": ["Ruby"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/_regimes.tex", "max_forks_repo_name": "evansosenko/aps-spin-lifetime", "max_forks_repo_head_hexsha": 
"1f7a6188a0320c0bf8f5441cfcf20861ce259acb", "max_forks_repo_licenses": ["Ruby"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.743718593, "max_line_length": 104, "alphanum_fraction": 0.7079708441, "num_tokens": 2711, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085909370422, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.58126528079961}}
{"text": "\\paragraph{Binomial coefficients}\n\\[ {n \\choose k} = (-1)^k{k-n-1 \\choose k},\\ \\ \\sum_{k \\leq n}{r+k \\choose k} = {r+n+1 \\choose n} \\]\n\\[ \\sum_{k=0}^n{k \\choose m} = {n+1 \\choose m+1} \\]\n\\[ \\sqrt{1+z} = 1 + \\sum_{k=1}^{\\infty}\\frac{(-1)^{k-1}}{k\\times2^{2k-1}}{2k-2 \\choose k-1}z^k \\]\n\\[ \\sum_{k=0}^{r}{r-k \\choose m}{s+k \\choose n} = {r+s+1 \\choose m+n+1} \\]\n\\[ C_{n, m} = {n+m \\choose m} - {n+m \\choose m-1}, n \\geq m \\]\n\\[ {n \\choose k} \\equiv [n\\& k=k] \\pmod 2 \\]\n\\[ {{n_1+\\cdots+n_p}\\choose m}=\\sum_{k_1+\\cdots+k_p=m}{n_1\\choose k_1}\\cdots{n_p\\choose k_p}\\]\n\\paragraph{Fibonacci numbers}\n\\[ F(z) = \\frac{z}{1-z-z^2} \\]\n\\[ f_n = \\frac{{\\phi}^n-{\\hat{\\phi}}^n}{\\sqrt{5}}, \\phi = \\frac{1+\\sqrt{5}}{2},\n\\hat{\\phi} = \\frac{1-\\sqrt{5}}{2} \\]\n\\[ \\sum_{k=1}^nf_k = f_{n+2}-1,\\ \\ \\sum_{k=1}^nf^2_k = f_nf_{n+1} \\]\n\\[ \\sum_{k=0}^nf_kf_{n-k} = \\frac{1}{5}(n-1)f_n+\\frac{2}{5}nf_{n-1} \\]\n\\[ \\frac{f_{2n}}{f_n} = f_{n-1} + f_{n+1}\\]\n\\[ f_1+2f_2+3f_3+\\cdots+nf_n=nf_{n+2}-f_{n+3}+2]\\]\n\\[ \\gcd(f_m,f_n)=f_{\\gcd(m,n)}\\]\n\\[ f^2_n + (-1)^n = f_{n+1}f_{n-1} \\]\n\\[ f_{n+k} = f_nf_{k+1} + f_{n-1}f_k \\]\n\\[ f_{2n+1} = f^2_n+f^2_{n+1} \\]\n\\[ (-1)^kf_{n-k} = f_{n}f_{k-1} - f_{n-1}f_{k} \\]\n\\[ \\text{Modulo }f_n, f_{mn+r} \\equiv \\left\\{\n\\begin{aligned}\n&f_r,& m \\bmod 4 = 0; \\\\\n&(-1)^{r+1}f_{n-r},& m \\bmod 4 = 1; \\\\\n&(-1)^nf_r,& m \\bmod 4 = 2; \\\\\n&(-1)^{r+1+n}f_{n-r},& m \\bmod 4 = 3.\n\\end{aligned}\n\\right.\n\\]\n\nPeriod modulo a prime $p$ is a factor of $2p+2$ or $p-1$.\n\nOnly exception: $G(5)=20$.\n\nPeriod modulo the power of a prime $p^k$: $G(p^k)=G(p)p^{k-1}$.\n\nPeriod modulo $n=p_1^{k_1}...p_m^{k_m}$: $G(n)=lcm(G(p_1^{k_1}),...,G(p_m^{k_m}))$.\n\n\\paragraph{Lucas numbers}\n\\[ L_0=2,L_1=1,L_n=L_{n-1}+L_{n-2}=(\\frac{1+\\sqrt{5}}{2})^n+\\frac{1-\\sqrt{5}}{2})^n \\]\n\\[ L(x)=\\frac{2-x}{1-x-x^2} \\]\n\\paragraph{Catlan numbers}\n\\begin{align*}\nc_1=1,c_n=\\sum_{i=0}^{n-1}c_ic_{n-1-i}=c_{n-1}\\frac{4n-2}{n+1}=\\frac{\\binom{2n}{n}}{n+1}\\\\\n=\\binom{2n}{n}-\\binom{2n}{n-1}, c(x)=\\frac{1-\\sqrt{1-4x}}{2x}\n\\end{align*}\n\\paragraph{Stirling cycle numbers}\nDivide $n$ elements into $k$ non-empty cycles.\n\\[s(n,0)=0,s(n,n)=1,s(n+1,k)=s(n,k-1)-ns(n,k)\\]\n\\[s(n,k)=(-1)^{n-k}{n \\brack k}\\]\n\\[{n+1 \\brack k} = n{n \\brack k} + {n \\brack k-1},{n+1 \\brack 2} = n!H_n\\]\n \\begin{align*}\nx^{\\underline{n}} = x(x-1)...(x-n+1) &= \\sum_{k=0}^n{ {n \\brack k}(-1)^{n-k}x^k }\\\\\nx^{\\overline{n}} = x(x+1)...(x+n-1) &= \\sum_{k=0}^n{ {n \\brack k}x^k }\\\\\n \\end{align*}\n\\paragraph{Stirling subset numbers}\nDivide $n$ elements into $k$ non-empty subsets.\n\\[ {n+1 \\brace k} = k{n \\brace k} + {n \\brace k-1} \\]\n\\[ x^n = \\sum_{k=0}^n{ {n \\brace k}x^{\\underline{k}} } = \\sum_{k=0}^n{ {n \\brace k}(-1)^{n-k}x^{\\overline{k}} } \\]\n\\[ m!{n \\brace m} = \\sum_{k=0}^m{m \\choose k}k^n(-1)^{m-k} \\]\n\\[ \\sum_{k=1}^nk^p = \\sum_{k=0}^p{p \\brace k}(n+1)^{\\underline{k}} \\]\nFor a fixed $k$, generating functions :\n\\[\\sum_{n=0}^{\\infty}{n \\brace k}x^{n-k}=\\prod_{r=1}^{k}\\frac{1}{1-rx}\\]\n\\paragraph{Motzkin numbers}\nDraw non-intersecting chords between n points on a circle.\n\nPick $n$ numbers $k_1,k_2,...,k_n\\in\\{-1,0,1\\}$ so that $\\sum_i^ak_i(1\\leq a\\leq n)$ is non-negative and the sum of all numbers is $0$.\n\\[M_{n+1}=M_n+\\sum_i^{n-1}M_iM_{n-1-i}=\\frac{(2n+3)M_n+3nM_{n-1}}{n+3}\\]\n\\[M_n=\\sum_{i=0}^{\\lfloor 
\\paragraph{Motzkin numbers}\nDraw non-intersecting chords between $n$ points on a circle.\n\nPick $n$ numbers $k_1,k_2,...,k_n\\in\\{-1,0,1\\}$ so that $\\sum_{i=1}^{a}k_i\\ (1\\leq a\\leq n)$ is non-negative and the sum of all numbers is $0$.\n\\[M_{n+1}=M_n+\\sum_{i=0}^{n-1}M_iM_{n-1-i}=\\frac{(2n+3)M_n+3nM_{n-1}}{n+3}\\]\n\\[M_n=\\sum_{k=0}^{\\lfloor \\frac{n}{2}\\rfloor}\\binom{n}{2k}c_k\\]\n\\[M(x)=\\frac{1-x-\\sqrt{1-2x-3x^2}}{2x^2}\\]\n\\paragraph{Eulerian numbers}\nPermutations of the numbers $1$ to $n$ in which exactly $k$ elements are greater than the previous element.\n\\[ {n \\bangle k} = (k+1){n-1 \\bangle k} + (n-k){n-1 \\bangle k-1} \\]\n\\[ x^n = \\sum_k{ {n \\bangle k}{x+k \\choose n} } \\]\n\\[ {n \\bangle m} = \\sum_{k=0}^m{n+1 \\choose k}(m+1-k)^n(-1)^k \\]\n\\paragraph{Harmonic numbers}\nSum of the reciprocals of the first $n$ natural numbers.\n\\[ \\sum_{k=1}^nH_k = (n+1)H_n-n \\]\n\\[ \\sum_{k=1}^nkH_k = \\frac{n(n+1)}{2}H_n - \\frac{n(n-1)}{4} \\]\n\\[ \\sum_{k=1}^n{k \\choose m}H_k = {n+1 \\choose m+1}(H_{n+1} - \\frac{1}{m+1}) \\]\n\\paragraph{Pentagonal number theorem}\n\\[ \\prod_{n=1}^{\\infty}(1-x^n) = \\sum_{k=-\\infty}^{\\infty}{(-1)^kx^{k(3k-1)/2}} \\]\n\\[ p(n) = p(n-1)+p(n-2)-p(n-5)-p(n-7)+\\cdots \\]\n\\[ f(n, k) = p(n)-p(n-k)-p(n-2k)+p(n-5k)+p(n-7k)-\\cdots \\]\n
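The recurrence above computes $p(n)$ in $O(n\\sqrt{n})$ time; a minimal Python sketch:\n\\begin{verbatim}\ndef partitions(N):\n    # p(n) via the generalized pentagonal numbers k(3k -+ 1)/2\n    p = [1] + [0] * N\n    for n in range(1, N + 1):\n        k, sign = 1, 1\n        while True:\n            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):\n                if g > n:\n                    break\n                p[n] += sign * p[n - g]\n            if g > n:\n                break\n            k += 1\n            sign = -sign\n    return p\n\nprint(partitions(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]\n\\end{verbatim}\n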
\\paragraph{Bell numbers}\nDivide a set that has exactly $n$ elements.\n\\[ B_n=\\sum_{k=1}^{n}{n\\brace k},\\ \\ B_{n+1} = \\sum_{k=0}^n{n \\choose k}B_k \\]\n\\[ B_{p^m+n} \\equiv mB_n+B_{n+1} \\pmod{p} \\]\n\\[B(x)=\\sum_{n=0}^{\\infty}\\frac{B_n}{n!}x^n=\\mathrm{e}^{\\mathrm{e}^x-1}\\]\n\\paragraph{Bernoulli numbers}\n\\[ B_n = 1 - \\sum_{k=0}^{n-1}{n \\choose k}\\frac{B_k}{n-k+1} \\]\n\\[ G(x) = \\sum_{k=0}^{\\infty}\\frac{B_k}{k!}x^k\n= \\frac{1}{\\sum_{k=0}^{\\infty}\\frac{x^k}{(k+1)!}} \\]\n\\[ \\sum_{k=1}^nk^m = \\frac{1}{m+1}\\sum_{k=0}^m{m+1 \\choose k}B_kn^{m-k+1} \\]\n\\paragraph{Sum of powers}\n\\[\\sum_{i=1}^ni^2=\\frac{n(n+1)(2n+1)}{6},\\ \\ \\sum_{i=1}^ni^3=(\\frac{n(n+1)}{2})^2\\]\n\\[\\sum_{i=1}^ni^4=\\frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}\\]\n\\[\\sum_{i=1}^ni^5=\\frac{n^2(n+1)^2(2n^2+2n-1)}{12}\\]\n\\paragraph{Sum of squares}\nDenote $r_k(n)$ the ways to form $n$ with $k$ squares. If\n\\[n=2^{a_0}p_1^{2a_1}\\cdots p_r^{2a_r}q_1^{b_1}\\cdots q_s^{b_s},\\]\nwhere $p_i\\equiv 3 \\mod 4$, $q_i\\equiv 1 \\mod 4$, then\n\\[r_2(n)=\\left\\{\\begin{aligned}\n& 0 & \\text{if any }a_i\\text{ is a half-integer}\\\\\n& 4\\prod_{i=1}^s(b_i+1) & \\text{if all }a_i\\text{ are integers}\\\\\n\\end{aligned}\\right.\\]\n$r_3(n)>0$ if and only if $n$ is not of the form $4^a(8b+7)$.\n\\paragraph{Derangement}\n\\[D_1=0,D_2=1,D_n=n!(\\frac{1}{0!}-\\frac{1}{1!}+\\frac{1}{2!}-\\frac{1}{3!}+...+\\frac{(-1)^n}{n!})\\]\n\\[D_n=(n-1)(D_{n-1}+D_{n-2})\\]\n\\paragraph{Tetrahedron volume}\nIf $U$, $V$, $W$, $u$, $v$, $w$ are lengths of edges of the tetrahedron (first three form a triangle; u opposite to U and so on)\n\\[ V = \\frac{\\sqrt{ 4u^2v^2w^2 - \\sum_{cyc}{u^2(v^2+w^2-U^2)^2} + \\prod_{cyc}{(v^2+w^2-U^2)} }}{12} \\]\n", "meta": {"hexsha": "cd909a8f10f9701b9c6eab5e0cbbc5909953e834", "size": 5723, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/appendix/table-of-formulae.tex", "max_stars_repo_name": "Nisiyama-Suzune/LMR", "max_stars_repo_head_hexsha": "16325b9efcb71240111ac12ea55c0cb45b0c5834", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2018-08-15T11:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T23:38:29.000Z", "max_issues_repo_path": "src/appendix/table-of-formulae.tex", "max_issues_repo_name": "Nisiyama-Suzune/LMR", "max_issues_repo_head_hexsha": "16325b9efcb71240111ac12ea55c0cb45b0c5834", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/appendix/table-of-formulae.tex", "max_forks_repo_name": "Nisiyama-Suzune/LMR", "max_forks_repo_head_hexsha": "16325b9efcb71240111ac12ea55c0cb45b0c5834", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2019-07-18T10:27:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-08T13:03:47.000Z", "avg_line_length": 49.7652173913, "max_line_length": 135, "alphanum_fraction": 0.5502358903, "num_tokens": 2940, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122188543453, "lm_q2_score": 0.640635861701035, "lm_q1q2_score": 0.5812567451576316}}
{"text": "\\documentclass{article}\n\n\\usepackage[letterpaper, margin=1.3cm]{geometry}\n\\usepackage[utf8]{inputenc}\n\\usepackage{siunitx}\n\\usepackage[fleqn]{mathtools}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\n\\title{MATH 225 Assignment 1}\n\\author{Michael Kwok}\n\\begin{document}\n\\maketitle\n\\subsection*{1}\nLet $\\mathbf{u} \\in V$\n\nSuppose there exists $w_1, w_2 \\in V$ such that $\\mathbf{u} \\oplus \\mathbf{w_1} = \\Tilde{\\mathbf{z}}$ and $\\mathbf{u} \\oplus \\mathbf{w_2} = \\Tilde{\\mathbf{z}}$.\n\n1. $\\mathbf{u}\\oplus \\mathbf{w_1} = \\Tilde{\\mathbf{z}}$\n\n2. $\\mathbf{u}\\oplus \\mathbf{w_2} = \\Tilde{\\mathbf{z}}$\n\n3. $\\mathbf{w_1} = - \\mathbf{u}$ Statement 1 by axiom 5\n\n4. $\\mathbf{w_2} = - \\mathbf{u}$ Statement 2 by axiom 5\n\n$\\therefore \\mathbf{w_1} = \\mathbf{w_2}$ Statement 3 and 4\n\\newpage\n\\subsection*{2a}\nAxiom 1:\n\n$\\Tilde{\\mathbf{z}} = (0, 0)$\n\n$(x, y) = (0, 0)$\n\n$0^2 + 0^2 \\leq 1$\n\n\\noindent\nAxiom 2:\n\nCounter example\n\nLet $\\mathbf{u} = \\left(0.5, 0.5\\right)$\n\nLet $\\mathbf{v} = \\left(0.5, 0.4\\right)$\n\n$\\mathbf{u} + \\mathbf{v} = (1, 0.9)$\n\n$1^2 + 0.9^2 = 1.81 > 1$\n\n$(1, 0.9) \\not\\in H$\n\nIt does not fulfil this axiom\n\n\n$\\therefore$ It is not a subspace.\n\n\\subsection*{2b}\nAxiom 1:\n\nFrom $\\mathscr{P}_{n=0}, \\Tilde{\\mathbf{z}} = 0$\n\nWhen $n = 0 \\text{ and } a_0 = 0$, $0 \\in H$\n\n$\\therefore \\Tilde{\\mathbf{z}} \\in H$\n\n\\noindent\nAxiom 2:\n\nlet $\\mathbf{u} = a_0 + a_1x + \\cdots + a_nx^n$\n\nlet $\\mathbf{v} = b_0 + b_1x + \\cdots + b_nx^n$\n\\begin{align*}\n\\mathbf{u} + \\mathbf{v} &= (a_0 + a_1x + \\cdots + a_nx^n) + (b_0 + b_1x + \\cdots + b_nx^n)\\\\\n&= (a_0 + b_0) + (a_1 + b_1)x + \\cdots + (a_n + b_n)x^n\n\\end{align*}\n\\begin{align*}\n\\left(a_0 - a_1 + ... + (-1)^n a_n\\right) = 0\\\\\n\\left(b_0 - b_1 + ... 
+ (-1)^n b_n\\right) = 0\\\\\n\\end{align*}\nAdding these gives\n\\begin{align*}\n(a_0 + b_0) - (a_1 + b_1) + \\cdots + (-1)^n (a_n + b_n) = 0,\n\\end{align*}\nso $\\mathbf{u} + \\mathbf{v} \\in H$.\n\n\\noindent\nAxiom 3:\n\\begin{align*}\n\\text{Let } c\\mathbf{u} &= c\\left(a_0 + a_1x + \\cdots + a_n x^n\\right)\\\\\n&= ca_0 + ca_1x + \\cdots + ca_nx^n\n\\end{align*}\n\\begin{align*}\n\\left[ca_0 - ca_1 + \\cdots +  c(-1)^na_n\\right] &= 0\\\\\nc\\left[a_0 - a_1 + \\cdots + (-1)^n a_n\\right]&=0\\\\\n\\end{align*}\nso $c\\mathbf{u} \\in H$.\n\n$\\therefore$ It is a subspace.\n\n\\subsection*{2c}\nUse SV2 to show $W$ is a valid subspace.\n\nLet $k_1, k_2 \\in \\mathbb{R}$\n\nLet $\\mathbf{u} = \\begin{bmatrix}\na_1 + 2b_1 & b_1 + c_1\\\\\nb_1 - c_1 & a_1 + 4c_1\n\\end{bmatrix}$\n\nLet $\\mathbf{v} = \\begin{bmatrix}\na_2 + 2b_2 & b_2 + c_2\\\\\nb_2 - c_2 & a_2 + 4c_2\n\\end{bmatrix}$\n\nShow that $k_1 \\mathbf{u} + k_2 \\mathbf{v} \\in W$\n\\begin{align*}\nk_1 \\mathbf{u} + k_2 \\mathbf{v} &=\n\\begin{bmatrix}\nk_1a_1 + 2 k_1 b_1 + k_2 a_2 + 2 k_2 b_2 & k_1 b_1 + k_1 c_1 + k_2 b_2 + k_2 c_2\\\\\nk_1 b_1 - k_1 c_1 + k_2 b_2 - k_2 c_2 & k_1 a_1 + 4 k_1 c_1 + k_2 a_2 + 4 k_2 c_2\n\\end{bmatrix}\\\\\n&= \\begin{bmatrix}\n(k_1 a_1 + k_2 a_2) + 2 (k_1 b_1 + k_2 b_2) & (k_1 b_1 + k_2 b_2) + (k_1 c_1  + k_2 c_2)\\\\\n(k_1 b_1 + k_2 b_2) - (k_1 c_1 + k_2 c_2) & (k_1 a_1 + k_2 a_2) + 4 (k_1 c_1  + k_2 c_2)\n\\end{bmatrix}\n\\end{align*}\n$a= k_1 a_1 + k_2 a_2,\\text{ }b=k_1 b_1 + k_2 b_2,\\text{ }c=k_1 c_1 + k_2 c_2$\n\n$a, b, c \\in \\mathbb{R}$\n\n$k_1 \\mathbf{u} + k_2 \\mathbf{v} \\in W$\n\n$\\therefore W$ is a subspace of $M_{2,2}$\n\\newpage\n\\subsection*{3a}\n\\noindent\nLet $\\mathcal{B} = \\left\\{\n\\begin{bmatrix}\n1 & 0\\\\\n0 & 0\n\\end{bmatrix}, \n\\begin{bmatrix}\n0 & 1\\\\\n1 & 0\n\\end{bmatrix},\n\\begin{bmatrix}\n0 & 0\\\\\n0 & 1\n\\end{bmatrix}\\right\\}$\n\n\\noindent\nB1:\n\n$a \\begin{bmatrix}\n1 & 0\\\\\n0 & 0\n\\end{bmatrix} + \nb \\begin{bmatrix}\n0 & 1\\\\\n1 & 0\n\\end{bmatrix} +\nc \\begin{bmatrix}\n0 & 0\\\\\n0 & 1\n\\end{bmatrix} = \n\\begin{bmatrix}\n0 & 0\\\\\n0 & 0\n\\end{bmatrix} \\iff a, b, c = 0$\n\n\n$\\therefore \\mathcal{B}$ is linearly independent.\n\n\\noindent\nB2:\n\n$S_{2,2} = \\left\\{\n\\begin{bmatrix}\na & b\\\\\nb & c\n\\end{bmatrix}: a, b, c \\in \\mathbb{R}\n\\right\\}$\n\n$a \\begin{bmatrix}\n1 & 0\\\\\n0 & 0\n\\end{bmatrix} + \nb \\begin{bmatrix}\n0 & 1\\\\\n1 & 0\n\\end{bmatrix} +\nc \\begin{bmatrix}\n0 & 0\\\\\n0 & 1\n\\end{bmatrix} = \n\\begin{bmatrix}\na & b\\\\\nb & c\n\\end{bmatrix}$\n\n$\\therefore \\mathcal{B} \\text{ spans } V$\n\n\n\\subsection*{3b}\n\nLet $\\mathcal{B} = \\left\\{1\\right\\}$\n\n\\noindent\nB1:\n\n$a \\cdot 1 = 0 \\iff a = 0$\n\n\\noindent\nB2:\n\nLet $b \\in \\left( \\mathbb{R}_{>0}, \\times,\\wedge \\right)$\n\\begin{align*}\n    a \\cdot 1 &= b\\\\\n    a &= b\n\\end{align*}\n\nFor any value $b \\in \\left( \\mathbb{R}_{>0}, \\times,\\wedge \\right)$, the basis could be used to form a solution.\n\n\n\\end{document}", "meta": {"hexsha": "ea15c6f2898d4c676c11aa79b4ef8eb0f7c5c712", "size": 4240, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignments/MATH225/MATH225As1.tex", "max_stars_repo_name": "n30phyte/SchoolDocuments", "max_stars_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/MATH225/MATH225As1.tex", "max_issues_repo_name": "n30phyte/SchoolDocuments", "max_issues_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_issues_repo_licenses": ["Apache-2.0"], 
"max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/MATH225/MATH225As1.tex", "max_forks_repo_name": "n30phyte/SchoolDocuments", "max_forks_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.4495412844, "max_line_length": 160, "alphanum_fraction": 0.5952830189, "num_tokens": 2144, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430562234878, "lm_q2_score": 0.6992544335934765, "lm_q1q2_score": 0.5811804670147059}}
{"text": "\\chapter{Backward Funcoids}\n\nThis is a preliminary partial draft.\n\nFix a family $\\mathfrak{A}$ of posets.\n\n\\begin{defn}\n  Let $f$ be a staroid of filters $\\mathfrak{F} (\\mathfrak{A}_i)$ on boolean\n  lattices $\\mathfrak{A}_i$. \\emph{Backward funcoid} for the argument $k\n  \\in \\dom \\mathfrak{A}$ of $f$ is the funcoid $\\Back (f , k)$\n  defined by the formula (for every $X \\in \\mathfrak{A}_k$)\n  \\[ \\langle \\Back (f , k) \\rangle X = \\setcond{ L \\in \\prod_{i \\in\n     \\dom \\mathfrak{A}} \\mathfrak{F} (\\mathfrak{A}_i) }{\n     X \\in \\supfun{f}_k L } . \\]\n\\end{defn}\n\n\\begin{prop}\n  Backward funcoid is properly defined.\n\\end{prop}\n\n\\begin{proof}\n  $\\langle \\Back (f , k) \\rangle^{\\ast} (X \\sqcup Y) = \\setcond{ L \\in\n  \\prod \\mathfrak{A} }{ X \\sqcup Y \\in \\langle f\n  \\rangle_k L } = \\setcond{ L \\in \\prod \\mathfrak{A} }{\n  X \\in \\supfun{f}_k L \\vee Y \\in \\supfun{f}_k L\n  } = \\setcond{ L \\in \\prod \\mathfrak{A} }{ X\n  \\in \\supfun{f}_k L } \\cup \\setcond{ L \\in \\prod \\mathfrak{A}\n  }{ Y \\in \\supfun{f}_k L } = \\langle\n  \\Back (f , k) \\rangle^{\\ast} X \\cup \\langle \\Back (f , k)\n  \\rangle^{\\ast} Y$.\n\\end{proof}\n\n\\begin{obvious}\nBackward funcoid is co-complete.\n\\end{obvious}\n\n\\begin{prop}\n  If $f$ is a principal staroid then $\\Back (f , k)$ is a complete\n  funcoid.\n\\end{prop}\n\n\\begin{proof}\n  ??\n\\end{proof}\n\n\\begin{prop}\n  $f$ can be restored from $\\Back (f , k)$ (for every fixed $k$).\n\\end{prop}\n\n\\begin{proof}\n  ??\n\\end{proof}\n\n\\begin{prop}\n  $f \\mapsto \\Back (f , k)$ is an order isomorphism\n  $\\mathsf{Strd}^{\\mathfrak{A}} \\rightarrow \\mathsf{FCD} \\left(\n  \\mathfrak{A}_k , \\mathsf{Strd}^{(\\dom \\mathfrak{A}) \\setminus \\{ k \\}}\n  \\right)$.\n\\end{prop}\n\n\\begin{proof}\n  ??\n\\end{proof}", "meta": {"hexsha": "36138b0effdfd515b5cb22e898248fbae65c2455", "size": 1686, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-backward.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-backward.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-backward.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.7619047619, "max_line_length": 76, "alphanum_fraction": 0.6144721234, "num_tokens": 693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430520409023, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5811804588809426}}
{"text": "% !TeX spellcheck = en_US\r\n\r\n\\chapter{Trees}\r\n\r\n\\section {General Trees}\r\n\r\nA \\textbf{tree} is an abstract data type that organize that elements hierarchically.\r\n\r\nA tree is \\textbf{ordered} if there is a meaningful order among the children of each node.\r\n\r\n\\section{Binary Tree}\r\n\r\nA \\textbf{binary tree} is an ordered tree with the following properties :\r\n....... to finish\r\n\r\n\\begin{tikzpicture}[sibling distance=10em,\r\nevery node/.style = {shape=circle,\r\n\tdraw, align=center,\r\n\ttop color=white, bottom color=blue!20}]]\r\n\\node {10}\r\nchild { node {5} }\r\nchild { node {25}\r\n\tchild { node {18}\r\n\t\tchild { node {12} }\r\n\t\tchild { node {20} } }\r\n\tchild { node {30} } };\r\n\\end{tikzpicture}\r\n\r\n\\section{Tries (Prefix trees)}\r\n\r\nA \\textbf{trie} is a variant of a n-ary tree in which characters are stored at each node.  \r\n\r\n\\cite{latexcompanion}\r\n\r\n", "meta": {"hexsha": "6eecc99ebd8333ae36d49e086285ec5678305a6c", "size": 843, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/Notes on Data Structures and Algorithms/TeX_files/chapter_trees.tex", "max_stars_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_stars_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Doc/Notes on Data Structures and Algorithms/TeX_files/chapter_trees.tex", "max_issues_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_issues_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/Notes on Data Structures and Algorithms/TeX_files/chapter_trees.tex", "max_forks_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_forks_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0857142857, "max_line_length": 92, "alphanum_fraction": 0.6773428233, "num_tokens": 231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.831143054132195, "lm_q1q2_score": 0.5811804551342163}}
{"text": "\\section{Multivariate linear regression on median score}\\label{app:regression_median_score}\n\nA multivariate linear regression has also been fitted to model the relationship between\nthe features and the median score. The features included are given\nby Table~\\ref{table:linear_regression_on_median} alongside their corresponding \\(p\\) values\nin the distinct tournaments and their regression coefficients.\n\n\\newcolumntype{g}{>{\\columncolor{Gray}}c}\n\\begin{table}[h]\n    \\begin{center}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{lggccggccgg}\n    \\toprule\n    & \\multicolumn{2}{g}{Standard} & \\multicolumn{2}{c}{Noisy} & \\multicolumn{2}{g}{Probabilistic ending} & \\multicolumn{2}{c}{Noisy probabilistic ending} & \\multicolumn{2}{g}{Overall}\\\\\n    \\midrule\n    & \\multicolumn{2}{g}{\\(R\\) adjusted: 0.576} & \\multicolumn{2}{c}{\\(R\\) adjusted: 0.679} & \\multicolumn{2}{g}{\\(R\\) adjusted: 0.816} & \\multicolumn{2}{c}{\\(R\\) adjusted: 0.930} & \\multicolumn{2}{g}{\\(R\\) adjusted: 0.318} \\\\\n    {} &  Coefficient &  \\(p\\)-value &  Coefficient &  \\(p\\)-value &  Coefficient &  \\(p\\)-value &  Coefficient &  \\(p\\)-value &  Coefficient &  \\(p\\)-value \\\\\n    \\midrule\n    $CC$ to $C$ rate   &  0.043 &  0.000 & -0.380 &  0.000 &  0.224 &  0.000 &  0.078 &    0.0 &  0.308 &    0.0 \\\\\n    $CD$ to $C$ rate   & -0.325 &  0.000 &  0.124 &  0.000 &  0.060 &  0.000 &  0.073 &    0.0 & -0.014 &    0.0 \\\\\n    $C_r$ / $C_{max}$  &      - &      - & -1.044 &  0.000 &      - &      - & -1.251 &    0.0 &      - &      - \\\\\n    $C_r$ / $C_{mean}$ &  0.553 &  0.000 & -0.101 &  0.000 & -1.136 &  0.000 & -0.089 &    0.0 & -0.665 &    0.0 \\\\\n    $C_{max}$          &  0.059 &  0.000 &      - &      - & -0.044 &  0.086 & -1.396 &    0.0 &      - &      - \\\\\n    $C_{mean}$         &  1.837 &  0.000 &  3.078 &  0.000 &  1.506 &  0.000 &  3.645 &    0.0 &      - &      - \\\\\n    $C_{min}$          &  0.156 &  0.000 &  1.528 &  0.000 &  0.311 &  0.000 &      - &      - &      - &      - \\\\\n    $C_{min}$ / $C_r$  & -0.049 &  0.000 & -0.378 &  0.000 & -0.204 &  0.000 &      - &      - & -0.257 &    0.0 \\\\\n    $DC$ to $C$ rate   & -0.204 &  0.000 &  0.074 &  0.000 &  0.066 &  0.000 &  0.066 &    0.0 &  0.007 &    0.0 \\\\\n    $k$                & -0.000 &  0.853 & -0.000 &  0.987 &  0.000 &  0.008 &  0.000 &    0.0 &      - &      - \\\\\n    $n$                & -0.000 &  0.000 &      - &      - &      - &      - &      - &      - &      - &      - \\\\\n    $p_e$              &      - &      - &      - &      - &  0.025 &  0.000 & -0.095 &    0.0 &      - &      - \\\\\n    $p_n$              &      - &      - &  0.124 &  0.000 &      - &      - &      - &      - &      - &      - \\\\\n    SSE                & -0.294 &  0.000 & -0.319 &  0.000 &  0.055 &  0.000 &  0.010 &    0.0 & -0.015 &    0.0 \\\\\n    constant           &  0.925 &  0.000 &  1.536 &  0.000 &  2.466 &  0.000 &  2.299 &    0.0 &  2.924 &    0.0 \\\\\n    memory usage       &  0.010 &  0.000 & -0.004 &  0.000 &      - &      - &      - &      - &      - &      - \\\\\n    \\bottomrule\n\\end{tabular}}\n\\end{center}\n\\caption{Results of multivariate linear regressions with the median score as the dependent variable.\n\\(R\\) squared is reported for each model.}\n\\label{table:linear_regression_on_median}\n\\end{table}", "meta": {"hexsha": "880b28e012bf7057942eaf4e834f299cb0372258", "size": 3248, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"paper/regression_on_median_score.tex", "max_stars_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_stars_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/regression_on_median_score.tex", "max_issues_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_issues_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2020-03-29T14:42:49.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-08T11:23:19.000Z", "max_forks_repo_path": "paper/regression_on_median_score.tex", "max_forks_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_forks_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-03-30T08:13:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-30T08:13:32.000Z", "avg_line_length": 79.2195121951, "max_line_length": 226, "alphanum_fraction": 0.4602832512, "num_tokens": 1364, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5810603801147595}}
{"text": "% Compile with: pdflatex explanation.tex\n\n\\documentclass[11pt]{article}\n\n    \\usepackage{amsmath}\n    \\usepackage{amssymb}\n    %\\usepackage{mathrsfs} % Cursive letters\n    %\\usepackage[stable]{footmisc} % Allows footnotes in titles\n    \\usepackage[margin=2.5cm]{geometry}\n    %\\usepackage{enumerate} % Easily change enumeration symbols\n    % Doesnt run in sharelatex \\usepackage{ulsy} % Proof by contradiction symbol \\blitza\n    \n    \\newcommand{\\E}{\\mathds{E}}\n    \\newcommand{\\V}{\\mathds{V}}\n    \\newcommand{\\PR}{\\mathds{P}}\n    \\newcommand{\\QED}{$\\blacksquare$}\n    \\newcommand{\\qed}{$\\square$}\n    \\newcommand{\\QEA}{\\blitza}\n    \\newcommand{\\st}{\\;\\text{s.t.}\\;}\n    \\newcommand{\\maximize}[1] {\\underset{#1}{\\text{max}}}\n    \\def\\s{\\rule{0in}{.30in}} % Strut\n    \n    \n\\begin{document}\n    \\title{Explanation of the HDFE iteration with 3 FEs}\n  \\author{Sergio Correia}\n    %\\date{}\n    \n\\maketitle\n\n\\section{Background}\n\nOur objective is to obtain the OLS estimates for $\\beta$ in the regression $y = X\\beta + D_1\\alpha_1 + D_2\\alpha_2 + D_3\\alpha_3 + \\epsilon$, where $D_i$ represents a set of indicator or design variables, and $\\alpha_i$ is the set of associated fixed effects.\n\nLet $M$, $P$ be the usual annihilator and projection matrices in an OLS regression. When the regressors are dummy variables, $P$ and $M$ are just averages and demeaned values within each group respectively. Also, from the normal equation we know the projection matrix is orthogonal to the residuals: $Pe=0$.\n\nRecall the two-part FWL theorem. Its first part states that we can regress $y = X\\beta + D_1\\alpha_1 + D_2\\alpha_2 + D_3\\alpha_3 + \\epsilon$ by first regressing $y$ and each $X$ against the set $D_1, D_2, D_3$, and then just regressing the residuals of $y$ against the residuals of $X$. The second part states that the residuals of both regressions are also identical.\n\nThus, the strategy will be to apply FWL to obtain residuals of $(y,X)$ with respect to all the indicator variables, and then regress the residuals to obtain estimates for $\\beta$.\n\nNotice that estimates for each $\\alpha$ can still be recovered, but they may not be identified.\n\n\\section{Fixed Point Iteration}\n\nTo regress against dummies there are two usual approaches: (i) add the dummies explictly or (ii) demean all the variables. Option (ii) is much faster but can only be done \\textit{once}, so if we have two or more sets of DVs the usual approach is to apply (ii) to the fixed effect with more dimensions (i.e. categories) and (i) to all the others. However, if there are two or more sets of high-dimensional DVs, we would have to create a lot of dummies which is impractical (b/c of out of memory errors, etc.).\n\nInstead, we'll apply a fixed-point iteration over the normal equations, as suggested by Guimaraes \\& Portugal (2010) (i.e., guess a value, and iterate our estimates until the normal equations hold). Note that instead of iterating the estimates $\\hat\\alpha_1, \\hat\\alpha_2, \\hat\\alpha_3$, we will iterate $Z_2 := D_2\\hat{\\alpha_2}$ and $Z_3 := D_3\\hat{\\alpha_3}$. 
Also, note that we are not employing Guimaraes et al (2013) version for three sets of FEs, but an extension of their original work, which works significatively faster.\n\n\\section{Algorithm}\n\nWlog, start with $y$,\n\n\\begin{enumerate}\n    \\item Compute $P_1y$ and $\\tilde{y} := M_1y$\n    \\item Initialize $Z_2^{(0)} = Z_3^{(0)} = 0$\n    \\item Until $Z_2$ and $Z_3$ converge, set\n        \\begin{enumerate}\n        \\item $Z_2^{(n)} = P_2 \\left[ \\tilde{y} + P_1 \\left( Z_2^{(n-1)} + Z_3^{(n-1)} \\right) - Z_3^{(n-1)} \\right]$\n        \\item $Z_3^{(n)} = P_3 \\left[ \\tilde{y} + P_1 \\left( Z_2^{(n)} + Z_3^{(n-1)} \\right) - Z_2^{(n)} \\right]$\n        \\end{enumerate}\n    \\item Then, compute $Z_1 = P_1(y - Z_2 - Z_3)$ and with that compute $y^* = y - Z_1 - Z_2 - Z_3$, which is what we desire.\n\\end{enumerate}\n\nAfter repeating this for every variable, regress the transformed variables ($y^* = X^* \\hat{\\beta} + e$)to obtain the correct estimates $\\hat{\\beta}$. Notice that the residuals of this FWL regression are the same as those of the full regression.\n\nIf you want to obtain the fixed effects for the regression, notice that $y = X \\hat{\\beta} + Z_1 + Z_2 + Z_3 + e$ where both $\\hat{\\beta}$ and  $e$ are the same as in the transformed regression. Thus, compute $\\tilde{e} := y - X \\hat{\\beta} = e + Z_1 + Z_2 + Z_3$ , and then apply the previous step to obtain the $Z_1, Z_2, Z_3$ fixed effects for that variable.\n\nNotice that if we remove the third FE (i.e. set $Z_3=0$), the formula collapses to the usual two-FE version.\n\n\\section{Deriving the fixed point iteration with three fixed-effects}\n\n\\begin{enumerate}\n    \\item Start with the identity $y = D_1\\hat\\alpha_1 + D_2\\hat\\alpha_2 + D_3\\hat\\alpha_3 + e = Z_1 + Z_2 + Z_3 + e$, where $Z_i := D_i\\hat\\alpha_i$\n    \\item Premultiply by $M_1$ to obtain $\\tilde{y} = M_1Z_2 + M_1Z_3 + e$, since $M_1D_1 = D_1 - P_1D_1 = D_1 - D_1 = 0$ and $M_1e = e - P_1 e = 0$ (from the normal equation).\n    \\item Rearrange $Z_2 + Z_3 = \\tilde{y} + P_1 \\left( Z_2 + Z_3 \\right) - e$\n    \\item Premultiply by $P_2$ and notice that $P_2 e = 0$, and after rearranging,\n    obtain $Z_2 = P_2 \\left[ \\tilde{y} + P_1 \\left( Z_2 + Z_3 \\right) - Z_3 \\right]$\n    \\item Premultiply by $P_3$ and obtain the equivalent equation for the third fixed effect.\n\\end{enumerate}\n\nWith these formulas, and adding a little more structure (to show that the fixed point is attractive, etc.), we can turn the formulas above into a fixed point iteration that will converge linearly.\n\n\\section{Implementation details}\n\nThis fixed point iteration is very slow, but there are (at least) two venues for speeding it up.\n\n\\begin{enumerate}\n    \\item We can minimize the number of $P$ operations and speed them up. First, notice that sorting the dataset (an $O(n \\log(n))$ operation) is not needed to obtain group means. Also, since in this case $P$ just demeans a variable within a group, we can store the output of $P_i$ as a $G_1 \\times 1$ vector, instead of a $N \\times 1$ vector. Finally, we can try precompute intermediate steps such as $P_2 \\tilde{y}$ (which we only compute once) and $P_1Z_2$ (which we only compute once per iteration, instead of twice).\n    \\item We can try to extrapolate the fixed-point iteration so it converges faster. These methods are called acceleration tecniques, and of those, we use a vector-based version of Aitken's $\\Delta^2$ method. 
In particular, we are employing method 3 as described by Macleod (1986), which has a reasonable performance, and accelerating every three iterations.\n\\end{enumerate}\n\nAll in all, each iteration requires only 6 $P$ operations (2 in the case of just two fixed effects), which are implemented reasonably fast. Together with the acceleration, this method becomes both fast and conservative in terms of memory usage.\n\n\n\\section{Extension: Algorithm beyond three sets of fixed effects}\n\nThe algorithm of part 3 can be extended to an arbitrarily large number of fixed effects. This will lead to the following steps:\n\nWlog, start with $y$, and let there be $G$ sets of fixed effects.\n\n\\begin{enumerate}\n    \\item Compute $P_1y$ and $\\tilde{y} := M_1y$\n    \\item Precompute $P_g \\tilde{y} \\quad \\forall g>1$\n    \\item Initialize $Z_g^{(0)} = 0 \\quad \\forall g>1$\n    \\item Initialize $\\Sigma^{(0)} := \\sum_{g=2}^G Z_g^{(0)} = 0$\n    \\item Until all $Z_g$ converge for all $g>1$ (this condition can also be replaced by a condition on $\\Sigma$), set for $g=2, 3, \\ldots, G$:\n        \\begin{enumerate}\n        \\item $Z_g^{(n)} = Z_g^{(n-1)} + P_g \\left[ \\tilde{y} + (P_1 - 1) \\Sigma^{(n-1,g-1)} \\right]$\n        \\item $\\Sigma^{(n-1,g)} = \\Sigma^{(n-1,g)} + Z_g^{(n)} - Z_g^{(n-1)}$\n        \\end{enumerate}\n    \\item Then, compute $Z_1 = P_1(y - \\Sigma)$ and with that compute $y^* [ = M y = M_1 (y- \\Sigma)  ] = y - Z_1 - \\Sigma$.\n\\end{enumerate}\n\nNotice that step 5 could be expressed more compactly just in terms of $\\Sigma$, but that complicates obtaining the individual FEs.\n\n\\section{Extension: Interactions between FEs and continuous variables}\n\n If we interact a variable $V$ elementwise with each column of a design matrix $D$ representing a set of $K$ fixed effects, a few adjustements would be required.\n\n First, note that it's equivalent to estimating a system of bivariate regressions. For instance, if the $D$ matrix had three fixed effects (i.e. it's a $N \\times 3$ matrix), then $P_D$ could be written as the partitioned matrix:\n\n \\[ \\left( \\begin{array}{ccc}\nP_1 & 0 & 0 \\\\\n0 & P_2 & 0 \\\\\n0 & 0 & P_3 \\end{array} \\right)\\] \n\nAnd each $P_j$ would be in turn be equal to (for the case of 3 obs. in the group $j$):\n\\[ \\left( \\begin{array}{ccc}\nv_{11}^2 & v_1v_2 & v_1v_3 \\\\\nv_2v_1 & v_{22}^2 & v_2v_3 \\\\\nv_3v_1 & v_3v_2 & v_{33}^2 \\end{array} \\right)\\]\n\n(subindices are relative to each group)\n\nThen, we can show that $U := P_{D} y$  is such that $u_i = v_i \\times q_{g(i)}$, where $Q$ is a $K \\times 1$ vector s.t. $q_j = \\frac{\\sum_{i \\; \\text{s.t.} \\; D_{ij}=1} v_i y_i}{v_i^2}$.\n\nIn terms of code, before we obtained a $Q$ matrix where $v_i=1$, so we were basically computing group means ($q_j = \\frac{\\sum_{i \\; \\text{s.t.} \\; D_{ij}=1} y_i}{N_j}$) and now we are i) weighting them and ii) multiplying by $v_i$ afterwards.\n\nSome optimizations can be done, such as precomputing the denominators of $Q$, and doing scalar products of $v$ and $y$ so we can use the code of group means without many changes.\n\n\\section{Extension: Instrumental Variables}\nJust demean \\emph{all} variables w.r.t the set of fixed effects. 
Then, we just run the first stage and second stage regressions, taking care to adjust the standard errors.\n\n\n\n\\end{document}\n", "meta": {"hexsha": "fd88699a7d59cefd86f0ddd5fb607ece13add9ab", "size": 9670, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/explanation.tex", "max_stars_repo_name": "poliquin/reghdfe", "max_stars_repo_head_hexsha": "fe46317f6923f1834ddb9edfcdc8c7c5b9ca7319", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 166, "max_stars_repo_stars_event_min_datetime": "2015-04-07T17:43:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T04:23:14.000Z", "max_issues_repo_path": "docs/explanation.tex", "max_issues_repo_name": "poliquin/reghdfe", "max_issues_repo_head_hexsha": "fe46317f6923f1834ddb9edfcdc8c7c5b9ca7319", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 239, "max_issues_repo_issues_event_min_datetime": "2015-03-28T01:22:06.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T08:52:40.000Z", "max_forks_repo_path": "docs/explanation.tex", "max_forks_repo_name": "poliquin/reghdfe", "max_forks_repo_head_hexsha": "fe46317f6923f1834ddb9edfcdc8c7c5b9ca7319", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 52, "max_forks_repo_forks_event_min_datetime": "2015-05-17T03:38:32.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-07T18:28:30.000Z", "avg_line_length": 65.7823129252, "max_line_length": 530, "alphanum_fraction": 0.6981385729, "num_tokens": 3023, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5810603778358163}}
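To make the fixed-point iteration concrete, below is a minimal Python sketch of the two-fixed-effect special case mentioned at the end of the Algorithm section (set $Z_3 = 0$). It is an illustrative stand-in, not the reghdfe implementation: group labels are assumed to be non-negative integer codes, and all function names are invented.

\begin{lstlisting}[language=Python, caption={Two-FE fixed-point demeaning sketch (illustrative)}]
import numpy as np

def proj(v, groups):
    # P_g v: replace each entry of v by the mean of v over its group.
    sums = np.bincount(groups, weights=v)
    counts = np.bincount(groups)
    return (sums / counts)[groups]

def demean_two_fe(y, g1, g2, tol=1e-12, max_iter=100_000):
    y_til = y - proj(y, g1)                      # ytilde = M_1 y
    z2 = np.zeros_like(y)
    for _ in range(max_iter):
        # Z_2^(n) = P_2 [ ytilde + P_1 Z_2^(n-1) ]  (Z_3 = 0 case)
        z2_new = proj(y_til + proj(z2, g1), g2)
        done = np.max(np.abs(z2_new - z2)) < tol
        z2 = z2_new
        if done:
            break
    z1 = proj(y - z2, g1)                        # Z_1 = P_1 (y - Z_2)
    return y - z1 - z2                           # y* = y - Z_1 - Z_2
\end{lstlisting}

At the fixed point, the returned $y^*$ is the residual of regressing $y$ on both sets of dummies, which is exactly what FWL requires before the final regression of $y^*$ on $X^*$.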
{"text": "%!TEX root = ../Tibt.tex\n\n\\exercise{3.18}\n\n\\subsection*{Review of PLS}\nLet's review the PLS algorithm:\n\n\\begin{eqnarray}\n    \\mathbf{z}_m & = & \\mathbf{X}^{(m - 1)} \\mathbf{X}^{(m - 1)\\, T} \\mathbf{y} \\\\ \\label{eq:3p18_recy}\n    \\hat{\\mathbf{y}}_m & = & \\hat{\\mathbf{y}}_{m - 1} + \\frac{\\mathbf{z}_m \\, \\mathbf{z}_m^T}{||\\mathbf{z}_m||^2}\\, \\mathbf{y}\\\\\n    \\\\\n    \\mathbf{X}^{(m)} & = & \\mathbf{X}^{(m - 1)} - \\frac{\\mathbf{z}_m \\, \\mathbf{z}_m^T}{||\\mathbf{z}_m||^2} \\mathbf{X}^{(m - 1)}\n\\end{eqnarray}\nDenoting $P_m$ the euclidean projector over the subspace generated by $\\mathbf{z}_m$:\n\\begin{equation*}\nP_m  \\equiv \\frac{\\mathbf{z}_m \\, \\mathbf{z}_m^T}{||\\mathbf{z}_m||^2}: \\quad P_m^2 = P_m, \\;\\; P_m \\, \\mathbf{z}_m = \\mathbf{z}_m\n\\end{equation*}\nOne has then:\n\\begin{eqnarray}\n    \\hat{\\mathbf{y}}_m & = & \\left(P_1 + \\ldots P_m\\right) \\mathbf{y}  \\\\ \n    \\label{eq:3p18_recX}\n    \\mathbf{X}^{(m)} & = & \\left(1 - P_m\\right) \\mathbf{X}^{(m - 1)}    \n\\end{eqnarray}\n\n$\\mathbf{X}^{(m)}$ is orthogonal not only to $\\mathbf{z}_m$, but also to\nall previous $\\mathbf{z}$'s:\n\\begin{equation*}\n\\mathbf{X}^{(m)\\,T} \\mathbf{z}_n \\;\\; \\forall\\; n \\leq m\n\\end{equation*}\nWe can prove this by recursion over $m$. Observe that:\n\\begin{eqnarray*}\n    \\mathbf{X}^{(m)\\,T} \\mathbf{z}_n & = & \\mathbf{X}^{(m - 1)\\,T}\n        \\left(1 - P_m\\right) \\mathbf{z}_n\n\\end{eqnarray*}\nFor $n = m$ the rhs vanishes because $P_m \\, \\mathbf{z}_m = \\mathbf{z}_m$,\nwhile for $n \\leq m - 1$ one has:\n\\begin{equation*}\nP_m\\, \\mathbf{z_n} = \\frac{\\mathbf{z}_m}{||\\mathbf{z}_m||^2} \\mathbf{z}_m^T \\mathbf{z}_n = \\frac{\\mathbf{z}_m}{||\\mathbf{z}_m||^2} \\mathbf{y}^T \\mathbf{X}^{(m - 1)} \\mathbf{X}^{(m - 1)\\, T} \\mathbf{z}_n = 0\n\\end{equation*}\nby recursive hypothesis on the product of the last two terms.\n\nSince $\\mathbf{z}_{m + 1}$ is a linear combination of the columns of\n$\\mathbf{X}^{(m)}$, this also implies that the $\\mathbf{z}$'s are mutually\northogonal:\n\\begin{eqnarray*}\n    \\mathbf{z}_m^T \\, \\mathbf{z}_n & = & 0 \\;\\; \\forall\\; n \\neq m\\\\\n    P_m \\, P_n & = & 0  \\;\\; \\forall\\; n \\neq m    \n\\end{eqnarray*}\nThe recurrence in \\eqref{eq:3p18_recX} can therefore be 'unrolled' as:\n\n\\begin{equation}\n\\mathbf{X}^{(m)} = \\left(1 - P_1 - \\ldots - P_m\\right) \\mathbf{X}\n\\end{equation}\n\n\\subsection*{Connection with CGA: notation}\nThe connection with conjugate gradients algorithms (CGA) is at the level\nof the regression coefficients $\\beta$. As explained in the book,\nall $\\mathbf{X}^{(m)}$'s are linear combinations of the original $\\mathbf{X}$, therefore:\n\\begin{equation}\n\\hat{\\mathbf{y}}_m = \\mathbf{X}\\beta_m\n\\end{equation}\nfor some vector $\\beta_m$. 
If we compare with \\eqref{eq:3p18_recy},\nwe see that:\n\\begin{equation*}\n\\hat{\\mathbf{y}}_m - \\hat{\\mathbf{y}}_{m - 1} = \n    \\mathbf{X} \\left(\\beta_m - \\beta_{m - 1}\\right) \\propto\n    \\mathbf{z}_m\n\\end{equation*}\nIn CGA, the value of a function $L(\\beta)$ is minimized by\nsuccessively moving $\\beta$ along some directions $p_m$:\n\\begin{equation*}\n\\beta_m - \\beta_{m - 1} \\propto p_{m - 1}\n\\end{equation*}\nTherefore, in order to establish a connection, we denote:\n\\begin{equation}\\label{eq:e3p18_p}\n\\mathbf{X} p_{m - 1} \\equiv \\mathbf{z}_m\n\\end{equation}\nNotice that this equation determines $p_{m - 1}$ uniquely\nas long as the $\\mathbf{X}$ are linearly independent.\n\nFinally, we denote $l(\\beta)$ the L2 loss and $g(\\beta)$ its gradient:\n\\begin{eqnarray*}\n    l(\\beta) & \\equiv & ||\\mathbf{y} - \\mathbf{X} \\beta ||^2 \\\\\n    g(\\beta) & = & \\nabla l(\\beta) = - \\mathbf{X}^T \\left(\\mathbf{y}\n        - \\mathbf{X} \\beta \\right)\n\\end{eqnarray*}\n\n\\subsection*{Review of CGA}\n\nThe CGA algorithm tries to minimize a quadratic form:\n\\begin{equation*}\nf(\\beta) = c^T \\beta + \\frac{1}{2} x^T G x, \\;\\;x \\in \\mathbb{R}^p\\;\\; G = G^T\n\\end{equation*}\nby recursively finding the optimal update of $x$ along\nincreasingly larger subspaces of $\\mathbb{R}^p$. In practice,\none starts from a set of vectors:\n$$\np_0, p_1, \\ldots \\in \\mathbb{R}^p\n$$\nand recursively defines:\n\\begin{eqnarray*}\n    x_{m + 1} & = & x_{m} + \\,\n    \\textrm{argmin}_{w \\in \\mathcal{P}_{m}} f(x_{m} + w) \\\\\n    \\mathcal{P}_{m} & \\equiv & \\textrm{Span} \\left(p_0, \\ldots, p_m\\right)    \n\\end{eqnarray*}\nIn particular, in conjugate gradient methods one takes:\n\\begin{eqnarray}\n    \\mathcal{P}_{m} & = & \\textrm{Span} \\left(g(x_0), \\ldots, g(x_m)\\right) \\\\\n    p_0 & = & - g(x_0) \\\\ \\label{eq:3p18_3}\n    p_m & = & - g(x_m) + \\gamma_{m - 1}\\, p_{m - 1}\n\\end{eqnarray}\nwhere $g(x)$ is the gradient of $f$, in accordance with\nthe notation of the previous section, and the coefficients $\\gamma$ are chosen\nin such a way that the $p_m$ are all mutually \\textit{conjugate}:\n\\begin{equation*}\np_m^T G\\,p_n = 0 \\quad \\forall \\; m \\neq n\n\\end{equation*}\nThe fact that the second term in \\eqref{eq:3p18_3} is \nsufficient to guarantee orthogonality is not trivial,\nsee \\cite{Murray1982}, Section 4.8.3. 
Thanks to conjugacy,\nthe update $x_m \\rightarrow x_{m + 1}$ consists in moving $x$\nalong the $p_m$ direction only:\n\n\\begin{equation}\\label{eq:e3p18_update}\nx_{m + 1} = x_m + \\left(\\textrm{argmin}_\\alpha \n    f(x_m + \\alpha \\, p_{m})\\right)\\, p_m\n\\end{equation}\n\n\\subsection*{Connection between PLS and CGA}\n\nWe now show that PLS is nothing but CGA on the regression coefficients\napplied to the L2 loss as a function of the regression coefficients\n$\\beta$, with initial condition $\\beta_0 = 0$:\n\\begin{eqnarray*}\n    f(\\beta) & = & c^T \\beta + \\frac{1}{2} \\beta^T G \\beta \\\\\n    c^T & = & - \\mathbf{y}^T \\mathbf{X} \\\\\n    G & = & \\mathbf{X}^T \\mathbf{X}\n\\end{eqnarray*}\nThe update directions are specified by \\eqref{eq:e3p18_p}:\n\\begin{equation*}\n\\mathbf{X} p_m  =  \\mathbf{z}_{m + 1} = \\mathbf{X}_m \\mathbf{X}_m^T \\mathbf{y}\n\\end{equation*}\nand they are indeed mutually conjugate:\n\\begin{equation}\np_m \\, G \\, p_n = p_m \\mathbf{X}^T \\mathbf{X} \\, p_n = \n    \\mathbf{z}_{m + 1} ^T \\mathbf{z}_{n + 1} = 0, \\;\\; m \\neq n\n\\end{equation}\nMoreover, the coefficient of the $\\mathbf{z}_m$ term at iteration $m$ of PLS\nis the result of a least squares regression of $\\mathbf{y}$ against\n$\\mathbf{z}_m$ itself, which is the equivalent of \\eqref{eq:e3p18_update}.\n\nTo complete the connection, we show that \n$$p_m = -g(\\beta_m) + p_{m, \\parallel}$$\nwhere $p_{m, \\parallel} \\in \\textrm{Span} \\left(p_0, \\ldots, p_{m - 1}\\right)$,\nso that indeed $\\textrm{Span} \\left(p_0, \\ldots, p_m\\right) = \\textrm{Span} \\left(g_0, \\ldots, g_m\\right)$. We note that:\n\\begin{eqnarray}\n    - \\mathbf{X} g(\\beta_m) & = & \\mathbf{X} \\mathbf{X}^T \\left(y - \\mathbf{X} \\beta_m\\right) \\\\\n    & = &  \\mathbf{X} \\mathbf{X}^T \\left(1 - P_1 - \\ldots - P_m\\right) \\mathbf{y} \\\\\n    & = & \\mathbf{X} \\left((1 - P_1- \\ldots P_m) \\mathbf{X}\\right)^T \\mathbf{y} \\\\\n    \\label{eq:e3p18_passage}\n    & = & \\mathbf{X} \\mathbf{X}_m^T\\, \\mathbf{y}\n\\end{eqnarray}\nRemember that:\n\\begin{equation}\n\\mathbf{X}_m = \\mathbf{X} \n    - \\mathbf{z}_1 \\frac{\\mathbf{z}_1^T \\mathbf{X}}{||\\mathbf{z}_1||^2} - \\ldots\n    - \\mathbf{z}_m \\frac{\\mathbf{z}_m^T \\mathbf{X}}{||\\mathbf{z}_m||^2}\n\\end{equation}\nPlugging in the previous equation in \\eqref{eq:e3p18_passage} we obtain:\n\\begin{eqnarray*}\n    - \\mathbf{X} g(\\beta_m) & = & \\mathbf{X}_m \\mathbf{X}_m^T\\, \\mathbf{y} + \n        c_1\\, \\mathbf{z_1} + \\ldots + c_m\\, \\mathbf{z}_m \\\\\n        & = & \\mathbf{z}_{m + 1} + c_1\\, \\mathbf{z_1} + \\ldots + c_m\\, \\mathbf{z}_m\n\\end{eqnarray*}\nor, in other terms:\n\\begin{equation*}\n\\mathbf{X} p_m = - \\mathbf{X}\\, g(\\beta_m) + \\mathbf{X} p_{m, \\parallel}\n\\end{equation*}\nwhich completes our proof.\n", "meta": {"hexsha": "ddb7e64b538ac35a21e0fa2fcfe69cddfdbca5dd", "size": 7502, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/Chapter3/ex_3_18.tex", "max_stars_repo_name": "pinoeottavio/ESLEx", "max_stars_repo_head_hexsha": "9d203da5b46c8d66ade827c237c738a35928af48", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-16T22:33:48.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-16T22:33:48.000Z", "max_issues_repo_path": "notes/Chapter3/ex_3_18.tex", "max_issues_repo_name": "pinoeottavio/ESLEx", "max_issues_repo_head_hexsha": "9d203da5b46c8d66ade827c237c738a35928af48", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/Chapter3/ex_3_18.tex", "max_forks_repo_name": "pinoeottavio/ESLEx", "max_forks_repo_head_hexsha": "9d203da5b46c8d66ade827c237c738a35928af48", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6777777778, "max_line_length": 206, "alphanum_fraction": 0.6310317249, "num_tokens": 3132, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5810603778358163}}
{"text": "\\subsection{Algorithmic}\nFrom an algorithmic point of view, we are calculating each body depending on all the other bodies.\nWe can traduce that by 2 overlapping \\texttt{for-loops} from $0$ to $n$ bodies. \nIn a \\texttt{C-like} language we can express the problem this way:\n\\begin{lstlisting}[caption={$n$-body pseudo-code algorithm},label={alg:ncorps}]\nbody b1[N]; // array which contains all the bodies (at t time)\nbody b2[N]; // empty array\nfor(unsigned int iBody = 0; iBody < n; ++iBody)\n\tfor(unsigned int jBody = 0; jBody < n; ++jBody)\n\t\tif(iBody != jBody)\n\t\t\tb2[iBody] = compute(b1[jBody]);\n\\end{lstlisting}\nAlg. \\ref{alg:ncorps} shows an important characteristic of the $n$-body problem class: for $n$ given bodies, the algorithmic complexity in term of computational time is approximatively $O(n^2)$.\nIt exists other methods to approximate and to resolve the problem in $O(n \\log{n} )$ time but this is not in the range of this lab (see {\\sc Barnes-Hut} simulation).", "meta": {"hexsha": "778c6fef894650fa841c58f54c7453f927bb4948", "size": 972, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/murb/sections/implementations.tex", "max_stars_repo_name": "MisterFruits/MUrB", "max_stars_repo_head_hexsha": "b855332f3eb0fd4a8baa203c28dc0e8e5ce50538", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2017-07-08T16:45:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-23T08:45:01.000Z", "max_issues_repo_path": "doc/murb/sections/implementations.tex", "max_issues_repo_name": "MisterFruits/MUrB", "max_issues_repo_head_hexsha": "b855332f3eb0fd4a8baa203c28dc0e8e5ce50538", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-09-08T14:35:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-10T07:53:18.000Z", "max_forks_repo_path": "doc/murb/sections/implementations.tex", "max_forks_repo_name": "MisterFruits/MUrB", "max_forks_repo_head_hexsha": "b855332f3eb0fd4a8baa203c28dc0e8e5ce50538", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-09T07:08:08.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-09T07:08:08.000Z", "avg_line_length": 69.4285714286, "max_line_length": 194, "alphanum_fraction": 0.7325102881, "num_tokens": 279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5810603684145459}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\begmath 12.3 Table Look-up With Hermite Cubic Interpolation\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nGiven a table of values of a function and its first derivative, this\nsubroutine does table look-up and Hermite cubic interpolation to compute an\ninterpolated value of the function for an arbitrary argument. Optionally the\nsubroutine can compute values of the first, second, or third derivative of\nthe interpolation function.  All features here are also available,\nwith more generality, in Chapter~12.1.\n\nThe interpolated function will match the given data at the tabular points,\nand thus will have a continuous value and first derivative. In general the\nsecond and third derivatives will have jump discontinuities at the tabular\npoints. It is possible, of course, for the data to satisfy relations that\ngive rise to continuity of second or third derivatives in the interpolated\nfunction. In particular, data from the cubic spline fitting programs, DC2FIT\nor SC2FIT of Chapter~11.4, will produce continuous second derivatives.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf NDERIV, NTAB}\n\n\\item[REAL]  \\ {\\bf SHINT, X, XTAB}($\\geq $NTAB){\\bf ,\\newline\nYTAB}($\\geq $ NTAB){\\bf , YPTAB}($\\geq $NTAB){\\bf , YVAL}\n\\end{description}\n\nAssign values to X, NDERIV, NTAB, XTAB(), YTAB(), and YPTAB(). In\nparticular, NTAB, XTAB(), YTAB(), and YPTAB() may have the same values as on\nreturn from a previous call to SC2FIT.\n\n\\begin{center}\n\\fbox{\\begin{tabular}{@{\\bf }c}\nYVAL = SHINT(X, NDERIV, NTAB,\\\\\nXTAB, YTAB, YPTAB)\\\\\n\\end{tabular}}\n\\end{center}\n\nThis reference will set YVAL to the value (or a derivative value as selected\nby NDERIV) of the curve defined by NTAB, XTAB(), YTAB(), and YPTAB().\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] Abscissa at which interpolation is to be done. If X is\noutside the domain of the table given in XTAB(), extrapolation will be done\nusing the two tabular points at the nearest end of the table.\n\n\\item[NDERIV]  \\ [in] Set by the user to 0, 1, 2, or~3 to select computation\nof the value, or the first, second, or third derivative of the interpolation\nfunction.\n\n\\item[NTAB]  \\ [in] Number of values in each of the arrays, XTAB(), YTAB(),\nand YPTAB(). Require NTAB\\ $\\geq 2.$\n\n\\item[XTAB()]  \\ [in] Abscissae of the given data. These values must be\neither strictly increasing or strictly decreasing.\n\n\\item[YTAB(), YPTAB()]  \\ [in] Values and first derivatives associated with\nthe abscissae given in XTAB().\\vspace{-4pt}%\n\\begin{gather*}\n\\text{YTAB}(i)=f(\\text{XTAB}(i)),\\ i=1,...,\\text{NTAB}\\\\\n\\text{YPTAB}(i)=f^{\\prime }(\\text{XTAB}(i)),\\ i=1,...,\\text{NTAB}\n\\end{gather*}\n\\item[SHINT]  \\ [out] Returns the value or selected derivative value\ncomputed by interpolation.\n\\end{description}\n\n\\subsubsection{Modifications for double precision}\n\nFor double-precision usage change the REAL type statements to DOUBLE\nPRECISION and change the function name from SHINT to DHINT. 
Note particularly that, since this is a function subprogram, the name DHINT\nmust be typed either explicitly or via an IMPLICIT statement if it is used.\n\n\\subsection{Examples and Remarks}\n\nThe program DRSHINT illustrates the use of SHINT. This program first builds\na table of values of the exponential function and its first derivative at\nthree points, XTAB$()=0$, 0.5, and 0.75. The program DRSHINT then uses SHINT\nto interpolate for the value and first derivative at a set of 21~points, $x=0$\nto $1.0$ in steps of~0.05. Output is shown in ODSHINT. The first two columns\nshow $x$ and $\\exp (x)$. Columns headed YINT and YPINT contain the\ninterpolated value and the interpolated first derivative, respectively. The\nexponential function was chosen for convenience in this demonstration since\nat any point the true function value and all derivatives have the same\nvalue. Note that the function values are approximated more accurately than\nare the derivative values. Also note that the approximations are better on\nthe interval from~0.5 to~0.75 than on the longer interval from~0 to~0.5. For\n$x>0.75$ extrapolation is taking place, leading to much larger errors.\n\nInterpolation error bounds are given in Section D. Applying these bounds to\nthis example, ignoring the $O(h^4)$ term in the bound for the first\nderivative error, the error bounds are:\n\\begin{tabbing}\n for $f$ in $[0, 0.5]$: $(0.00260)(1.65)(0.5)^4 = 0.000268$\\\\\n for $f$ in $[0.5, 0.75]$: $(0.00260)(2.12)(0.25)^4 = 0.000022$\\\\\n for $f^{\\prime}$ in $[0, 0.5]$: $(0.00802)(1.65)(0.5)^3 = 0.00165$\\\\\n for $f^{\\prime}$ in $[0.5, 0.75]$: $(0.00802)(2.12)(0.25)^3 = 0.000266$\n\\end{tabbing}\nNote that these bounds are consistent with the results shown in ODSHINT.\n\n\\subsection{Functional Description}\n\nHermite interpolation together with the error in such interpolation is\ndescribed in \\cite[pp.~314--317]{Hildebrand:1956:INA}.\n\nSHINT maintains a saved variable, LOOK, with an initial value of~1. The\nlook-up procedure always starts with the tabular interval indexed by the\nsaved value of\\pagebreak[2] LOOK, unless the saved value is not in\n$[1, \\text{NTAB} - 1]$.\nThe look-up procedure is a linear search, either forward or backward, from the\nstarting value of LOOK. This is efficient for cases in which a succession of\nlook-ups is done for successive arguments that are not far apart relative\nto the tabular spacing.\n\nEach interpolation or extrapolation uses function values and first\nderivative values at two adjacent tabular points and is exact for\ncubic polynomial data.  
If the interpolation abscissa, $x$, is equal\nto an interior tabular abscissa, say $x_i$, then tabular data\nassociated with $x_{i-1}$ and $x_i$ will be used for the computation.\nExtrapolation uses the first or last two tabular points, as\nappropriate.\n\nIf the given data are samples from a function having at least a fourth-order\nderivative, and this derivative is bounded in magnitude by $M_4$ over the\ninterval $[x_i, x_{i+1}]$ of length $h$, then the error, $E_i$, in\ncomputing the interpolated derivative of order $i$ for any argument in this\ninterval is bounded as follows:\n\\begin{alignat*}{3}\n\\hspace{-8pt}\n|E_0| &\\leq c_0M_4h^4,&\\ \\ c_0 &= 1/384\\:&\\approx \\:&0.00260\\\\\n\\hspace{-8pt}\n|E_1| &\\leq c_1M_4h^3 + O(h^4),&\\ \\ c_1 &= \\sqrt 3/216\\:\n&\\approx \\:&0.00802\\\\\n\\hspace{-8pt}\n|E_2| &\\leq c_2M_4h^2 + O(h^3),&\\ \\ c_2&= 1/12 \\:&\\approx \\:&0.0833\\\\\n\\hspace{-8pt}\n|E_3| &\\leq c_3M_4h + O(h^2),&\\ \\ c_3 & = 0.5\n\\end{alignat*}\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nThis subprogram requires NTAB $\\geq 2$ and $0 \\leq \\text{NDERIV} \\leq 3$.\nThe values in the XTAB() array must either be strictly increasing or\nstrictly decreasing. These conditions are not checked. Violation of these\nconditions will cause unpredictable effects.\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.15in} {\\bf Required Files}\\vspace{2pt} \\\\\nDHINT & \\parbox[t]{2.7in}{\\hspace{.35in} DHINT\\rule[-5pt]{0pt}{8pt}}\\\\\nSHINT & \\parbox[t]{2.7in}{\\hspace{.35in} SHINT}\\\\\n\\end{tabular}\n\nOriginal code designed by C. L. Lawson and R. J. Hanson, JPL, 1968.\nProgrammed and modified by Lawson, Hanson, T. Lang, and D. Campbell, JPL,\n1968--1974. Adapted to Fortran~77 by Lawson and S. Y. 
Chiu, July~1987.\n\n\n\\begcode\n\n\\medskip\\\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRSHINT}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{shint}}\n\n\\vspace{30pt}\\centerline{\\bf \\large ODSHINT}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{shint}}\n\\end{document}\n", "meta": {"hexsha": "e582d0c1190ce408c575b417c3ad14142a9b9ed4", "size": 7863, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch12-03.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch12-03.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch12-03.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 41.6031746032, "max_line_length": 98, "alphanum_fraction": 0.7423375302, "num_tokens": 2413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026368, "lm_q2_score": 0.798186768138228, "lm_q1q2_score": 0.5810603614249675}}
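For readers without the Fortran sources, the following Python sketch mirrors what SHINT does (a stand-in, not the MATH77 code): the linear look-up is replaced here by bisection, followed by two-point Hermite cubic evaluation, with extrapolation from the nearest end interval. It assumes a strictly increasing XTAB.

\begin{lstlisting}[language=Python, caption={Hermite cubic table look-up sketch (illustrative)}]
import bisect, math

def hermite_interp(x, xtab, ytab, yptab):
    # Locate the interval containing x; clamp the index so that arguments
    # outside the table extrapolate from the nearest end interval.
    i = bisect.bisect_right(xtab, x) - 1
    i = max(0, min(i, len(xtab) - 2))
    h = xtab[i + 1] - xtab[i]
    t = (x - xtab[i]) / h
    # Standard two-point cubic Hermite basis functions.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return (h00 * ytab[i] + h * h10 * yptab[i]
            + h01 * ytab[i + 1] + h * h11 * yptab[i + 1])

# Same data as DRSHINT: exp(x) tabulated at 0, 0.5 and 0.75.
xtab = [0.0, 0.5, 0.75]
ytab = [math.exp(v) for v in xtab]
yptab = ytab[:]                       # d/dx exp(x) = exp(x)
print(hermite_interp(0.6, xtab, ytab, yptab), math.exp(0.6))
\end{lstlisting}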
{"text": "\\section{Codon IMM}\n\nA protein can be modelled by a HMM with states that emit amino acids. For example,\n\\Cref{fig:core-model} shows a possible HMM architecture for aligning sequences of amino acids, for\nwhich indels are modelled by insert ($\\state{I}$) and delete ($\\state{D}$) states and matched\npositions are modelled by match ($\\state{M}$) states. Each match state defines a potentially\ndistinct probability distribution over amino acids so as to model position-specific amino acid\nfrequencies. Insert states might instead define the same background probability distribution over\namino acids. Delete states are silent states and therefore do not emit any amino acid: it represents\nposition-specific missing amino acids. $\\state{B}$ and $\\state{E}$ are special states that represent\nthe end and start positions of an alignment. In particular, the probability of $\\state{B}$ being the\nfirst state is one: $p(Q_1=\\state{B})=1$.\n\n\\begin{figure}[htbp]\n  \\centering\n  \\captionsetup{width=.5\\linewidth}\n  \\includegraphics[width=.5\\linewidth]{figure/core-model}\n  \\caption{HMM architecture for aligning sequences of four positions. $\\state{B}$ and $\\state{E}$\n  are silent states denoting the start and the end positions of an aligment. States $\\state{M}$\n  represent matches while states $\\state{I}$ and $\\state{D}$ model indels.}\\label{fig:core-model}\n\\end{figure}\n\nHMMs are widely used to model proteins and are used to search for homologous subsequences in target\nsequences made of amino acids. However, the researcher ocasionally do not have a ready-to-use amido\nacid sequence but instead a sequence of nucleotides that might contain homologous subsequences that\ncode for proteins. Some nucleotides of those homologous subsquences might be missing or might have\nbeen inserted by whatever reason. Still, one should be able to uncover error-prone subsequences when\ncompared against their homologous protein HMMs.\n\nLet $(Q_t, S_t)$ be a HMM with the amino acid alphabet $\\set{A}$ representing a protein profile. We\nwant to replace it by an IMM that preserves the model architecture (i.e., same set of states, state\ntransitions and initial state probabilities) but instead have states that emit sequences of\nnucleotides instead of amino acid symbols. In particular, the new match and insert states will emit\nnucleotide sequences that varies in length: from one to five nucleotides. The emission distribution\nof silent states (e.g., delete states and special ones) will stay the same: probability one of\nemitting an empty sequence in the IMM parlance. With such model, we hope to uncover error-prone\nsequences of nucleotides that are homologous to protein profiles.\n\nLet $\\set{B}$ be a nucleotide alphabet (e.g., $\\set{B} \\eqdef \\{\\res{A}, \\res{C}, \\res{G},\n\\res{T}\\}$), and let $\\state{M}$ be a state that in the protein HMM would emit an amino acid. 
From\nits amino acid distribution together with any other relevant source of information (codon usage\nbias, for instance), one can define the probability $p(X_t=x_1x_2x_3 \\gv Q_t=\\state{M})$ of\n$\\state{M}$ emitting the codon $(x_1, x_2, x_3) \\in \\set{B}^3$.\n\n\\begin{example}\n  Suppose that a state $\\state{M}$ has the amino acid distribution defined by\n  \\begin{equation*}\n    p(S_t=\\res{Y} \\gv Q_t=\\state{M}) = 0.8 ~~\\text{and}~~ p(S_t=\\res{C} \\gv Q_t=\\state{M}) = 0.2,\n  \\end{equation*}\n  where $\\res{Y}$ and $\\res{C}$ are the Tyrosine and Cysteine amino acids from the original alphabet\n  $\\set{A}$. Based on the canonical genetic code alone, we could define the codon distribution as\n  follows:\n  \\begin{equation*}\n    \\begin{split}\n      p(X_t=\\res{TAT}\\gv Q_t=\\state{M})= 0.8/2, &\\quad p(S_t=\\res{TAC}\\gv Q_t=\\state{M})=0.8/2\\\\\n      p(X_t=\\res{TGT}\\gv Q_t=\\state{M})= 0.2/2, &\\quad p(S_t=\\res{TGC}\\gv Q_t=\\state{M})=0.2/2,\n    \\end{split}\n  \\end{equation*}\n  where $\\res{A}$, $\\res{C}$, $\\res{T}$, and $\\res{G}$ are nucleotides from $\\set{B}$.\n\\end{example}\n\nDefining $p(X_t=x_1x_2x_3 \\gv Q_t=q_t)$ as we did in the previous example for every state $q_t \\in\n\\set{Q}$ of a given protein HMM produces an IMM $(Q_t, X_t)$ that instead emits codon sequences.\nThis model could in principle be used to evaluate sequences of nucleotides against the modelled\nprotein. However, such a model would fail to match sequences that present indels at the base level:\na match state that mainly emits Tyrosine (codons $\\res{TAT}$ and $\\res{TAC}$ in the canonical\ngenetic code) should be able to consider the sequence $\\res{TAAT}$ as a possible Tyrosine codon that\nhappens to have a base insertion. We therefore propose an additional, crucial modification: the\nemitted codon has to go through two steps of base deletions and two steps of base insertions. In\neach of those steps, there will be a (small) probability $\\e$ of a modification happening.\n\\Cref{fig:hmm-to-imm} summarizes the proposed modifications.\n\n\\begin{figure}[htbp]\n  \\centering\n  \\captionsetup{width=.5\\linewidth}\n  \\includegraphics[width=.5\\linewidth]{figure/hmm-to-imm}\n  \\caption{Conversion of protein HMM into codon IMM.\\@ Amino acid distributions are first converted\n  into codon distributions. Generated codons go through four indel steps, each of which has a\n  probability $\\e$ of actually modifiying the current sequence. At the end, we have a distribution\n  of error-prone codons: variable-length sequences of one, two, three, four, or five\n  nucleotides.}\\label{fig:hmm-to-imm}\n\\end{figure}\n\nThe base-indel steps of our proposed mofication is graphically detailed in \\Cref{fig:codon-hmm-tree}\nand is further explained in the following. The deletion can happen in any of the three codon\npositions with equal probability. If a deletion has already happened (i.e., the first deletion step\nhas indeed modified the emitted codon), the next deletion can happen in any of the remaining two\npositions with equal probability. The insertion can happen between any two bases, before the first\nbase, or after the last base with equal probability. One remains to define the nucleotide\ndistribution of the insertion process, which we shall assume to have been established for now.\n\n\\subsection{Sequence distribution}\n\nLet us assume that we already have the error-free codon distribution $p(X_t=x_1x_2x_3 \\gv Q_t=q_t)$\nfrom the amino acid distribution of state $q_t$. 
Given that a base insertion occurs, let $p(x)$\ndenote the probability of $x \\in \\set{B}$ being the inserted base. This section will formally define\nthe error-prone codon distribution $p(Z_t=z_1z_2\\dots \\gv Q_t=q_t)$ from distribution\n$p(X_t=x_1x_2x_3 \\gv Q_t = q_t)$ and parameter $\\e$ based on the outlined method. We will omit the\nindex $t$ and the conditional part of the probability for simplicity. The next sections will also\nuse an underscore $\\s$ to denote a summation over the corresponding sequence positions. For example,\n$p(X=x_1\\s\\s)=\\sum_{x_2,x_3}p(X=x_1x_2x_3)$.\n\n\\subsubsection{Sequences of length 1}\n\n\\begin{equation*}\n  p(Z=z_1) = \\e^2(1-\\e)^2(p(X=z_1\\s\\s) + p(X=\\s z_1\\s) + p(X=\\s\\s z_1)) / 3\n\\end{equation*}\n\n\\subsubsection{Sequences of length 2}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2)\n        &= 2\\e(1-\\e)^3(p(X=\\s z_1z_2) + p(X=z_1\\s z_2) + p(X=z_1z_2\\s))/3\\\\\n        &+ \\e^3(1-\\e)(p(X=z_1\\s\\s) + p(X=\\s z_1\\s) + p(X=\\s\\s z_1))p(z_2)/3\\\\\n        &+ \\e^3(1-\\e)(p(X=z_2\\s\\s) + p(X=\\s z_2\\s) + p(X=\\s\\s z_2))p(z_1)/3\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 3}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3) &= (1-\\e)^4 p(X=z_1z_2z_3)\\\\\n        &+ 4\\e^2(1-\\e)^2 (p(X=\\s z_2 z_3) + p(X=z_2\\s z_3) + p(X=z_2 z_3\\s))p(z_1)/9\\\\\n        &+ 4\\e^2(1-\\e)^2 (p(X=\\s z_1 z_3) + p(X=z_1\\s z_3) + p(X=z_1 z_3\\s))p(z_2)/9\\\\\n        &+ 4\\e^2(1-\\e)^2 (p(X=\\s z_1 z_2) + p(X=z_1\\s z_2) + p(X=z_1 z_2\\s))p(z_3)/9\\\\\n        &+ \\e^4 (p(X=z_3\\s\\s) + p(X=\\s z_3\\s) + p(X=\\s\\s z_3))p(z_1)p(z_2)/9\\\\\n        &+ \\e^4 (p(X=z_2\\s\\s) + p(X=\\s z_2\\s) + p(X=\\s\\s z_2))p(z_1)p(z_3)/9\\\\\n        &+ \\e^4 (p(X=z_1\\s\\s) + p(X=\\s z_1\\s) + p(X=\\s\\s z_1))p(z_2)p(z_3)/9\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 4}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3z_4) &= \\e(1-\\e)^3 (p(X=z_2z_3z_4)p(z_1)+p(X=z_1z_3z_4)p(z_2)\\\\\n        &+p(X=z_1z_2z_4)p(z_3)+p(X=z_1z_2z_3)p(z_4))/2\\\\\n        &+\\e^3(1-\\e)(\\\\\n        &+ p(X=\\s z_3z_4)p(z_1)p(z_2) + p(X=\\s z_2z_4)p(z_1)p(z_3)\\\\\n        &+ p(X=\\s z_2z_3)p(z_1)p(z_4) + p(X=\\s z_1z_4)p(z_2)p(z_3)\\\\\n        &+ p(X=\\s z_1z_3)p(z_2)p(z_4) + p(X=\\s z_1z_2)p(z_3)p(z_4)\\\\\n        &+ p(X=z_3\\s z_4)p(z_1)p(z_2) + p(X=z_2\\s z_4)p(z_1)p(z_3)\\\\\n        &+ p(X=z_2\\s z_3)p(z_1)p(z_4) + p(X=z_1\\s z_4)p(z_2)p(z_3)\\\\\n        &+ p(X=z_1\\s z_3)p(z_2)p(z_4) + p(X=z_1\\s z_2)p(z_3)p(z_4)\\\\\n        &+ p(X=z_3z_4\\s)p(z_1)p(z_2) + p(X=z_2z_4\\s)p(z_1)p(z_3)\\\\\n        &+ p(X=z_2z_3\\s)p(z_1)p(z_4) + p(X=z_1z_4\\s)p(z_2)p(z_3)\\\\\n        &+ p(X=z_1z_3\\s)p(z_2)p(z_4) + p(X=z_1z_2\\s)p(z_3)p(z_4))/9\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 5}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3z_4z_5)\n        &= \\e^2(1-\\e)^2(\\\\\n        &+p(z_1)p(z_2)p(X=z_3z_4z_5)+p(z_1)p(z_3)p(X=z_2z_4z_5)\\\\\n        &+p(z_1)p(z_4)p(X=z_2z_3z_5)+p(z_1)p(z_5)p(X=z_2z_3z_4)\\\\\n        &+p(z_2)p(z_3)p(X=z_1z_4z_5)+p(z_2)p(z_4)p(X=z_1z_3z_5)\\\\\n        &+p(z_2)p(z_5)p(X=z_1z_3z_4)+p(z_3)p(z_4)p(X=z_1z_2z_5)\\\\\n        &+p(z_3)p(z_5)p(X=z_1z_2z_4)+p(z_4)p(z_5)p(X=z_1z_2z_3))/10\n  \\end{split}\n\\end{equation*}\n\n\\subsection{Codon decoding}\n\nThe codon $x_1x_2x_3$ that maximizes the posterior $p(X=x_1x_2x_3 \\gv Z=\\arr{z})$ is the decoded\ncodon for the given matched or inserted subsequence $\\arr{z}$. 
Note that\n\\begin{equation*}\n  p(X=x_1x_2x_3 \\gv Z=\\arr{z}) \\propto p(Z=\\arr{z} \\gv X=x_1x_2x_3) p(X=x_1x_2x_3).\n\\end{equation*}\nIt therefore remains to compute\n\\begin{equation*}\n  p(Z=\\arr{z} \\gv X=x_1x_2x_3) = \\sum_{\\pi}p(Z=\\arr{z} \\gv \\Pi=\\pi, X=x_1x_2x_3) p(\\Pi=\\pi),\n\\end{equation*}\nfor which $\\pi$ is a path of the base-indel process.\n\n\\subsubsection{Sequences of length 1}\n\n\\begin{equation*}\n  p(Z=z_1 \\gv X=x_1x_2x_3) = \\e^2(1-\\e)^2(\\I(x_1, z_1) + \\I(x_2, z_1) + \\I(x_3, z_1))/3\n\\end{equation*}\n\n\\subsubsection{Sequences of length 2}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2 \\gv X=x_1x_2x_3)\n        &= 2\\e(1-\\e)^3(\\I(x_2, z_1)\\I(x_3, z_2) + \\I(x_1, z_1)\\I(x_3, z_2) + \\I(x_1, z_1)\\I(x_2, z_2))/3\\\\\n        &+ \\e^3(1-\\e)(\\I(x_1, z_1) + \\I(x_2, z_1) + \\I(x_3, z_1))p(z_2)/3\\\\\n        &+ \\e^3(1-\\e)(\\I(x_1, z_2) + \\I(x_2, z_2) + \\I(x_3, z_2))p(z_1)/3\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 3}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3 \\gv X=x_1x_2x_3) &= (1-\\e)^4\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_3)\\\\\n        &+ 4\\e^2(1-\\e)^2 (\\I(x_2, z_2)\\I(x_3, z_3) + \\I(x_1, z_2)\\I(x_3, z_3) + \\I(x_1, z_2)\\I(x_2, z_3)) p(z_1)/9\\\\\n        &+ 4\\e^2(1-\\e)^2 (\\I(x_2, z_1)\\I(x_3, z_3) + \\I(x_1, z_1)\\I(x_3, z_3) + \\I(x_1, z_1)\\I(x_2, z_3))p(z_2)/9\\\\\n        &+ 4\\e^2(1-\\e)^2 (\\I(x_2, z_1)\\I(x_3, z_2) + \\I(x_1, z_1)\\I(x_3, z_2) + \\I(x_1, z_1)\\I(x_2, z_2))p(z_3)/9\\\\\n        &+ \\e^4 (\\I(x_1, z_3) + \\I(x_2, z_3) + \\I(x_3, z_3))p(z_1)p(z_2)/9\\\\\n        &+ \\e^4 (\\I(x_1, z_2) + \\I(x_2, z_2) + \\I(x_3, z_2))p(z_1)p(z_3)/9\\\\\n        &+ \\e^4 (\\I(x_1, z_1) + \\I(x_2, z_1) + \\I(x_3, z_1))p(z_2)p(z_3)/9\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 4}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3z_4 \\gv X=x_1x_2x_3)\n      &= \\e(1-\\e)^3 (\\I(x_1, z_2)\\I(x_2, z_3)\\I(x_3, z_4)p(z_1)\n      +\\I(x_1, z_1)\\I(x_2, z_3)\\I(x_3, z_4)p(z_2)\\\\\n      &+\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_4)p(z_3)\n      +\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_3)p(z_4))/2\\\\\n      &+\\e^3(1-\\e)(\\\\\n      &+ \\I(x_2, z_3)\\I(x_3, z_4)p(z_1)p(z_2) + \\I(x_2, z_2)\\I(x_3, z_4)p(z_1)p(z_3)\\\\\n      &+ \\I(x_2, z_2)\\I(x_3, z_3)p(z_1)p(z_4) + \\I(x_2, z_1)\\I(x_3, z_4)p(z_2)p(z_3)\\\\\n      &+ \\I(x_2, z_1)\\I(x_3, z_3)p(z_2)p(z_4) + \\I(x_2, z_1)\\I(x_3, z_2)p(z_3)p(z_4)\\\\\n      &+ \\I(x_1, z_3)\\I(x_3, z_4)p(z_1)p(z_2) + \\I(x_1, z_2)\\I(x_3, z_4)p(z_1)p(z_3)\\\\\n      &+ \\I(x_1, z_2)\\I(x_3, z_3)p(z_1)p(z_4) + \\I(x_1, z_1)\\I(x_3, z_4)p(z_2)p(z_3)\\\\\n      &+ \\I(x_1, z_1)\\I(x_3, z_3)p(z_2)p(z_4) + \\I(x_1, z_1)\\I(x_3, z_2)p(z_3)p(z_4)\\\\\n      &+ \\I(x_1, z_3)\\I(x_2, z_4)p(z_1)p(z_2) + \\I(x_1, z_2)\\I(x_2, z_4)p(z_1)p(z_3)\\\\\n      &+ \\I(x_1, z_2)\\I(x_2, z_3)p(z_1)p(z_4) + \\I(x_1, z_1)\\I(x_2, z_4)p(z_2)p(z_3)\\\\\n      &+ \\I(x_1, z_1)\\I(x_2, z_3)p(z_2)p(z_4) + \\I(x_1, z_1)\\I(x_2, z_2)p(z_3)p(z_4))/9\n  \\end{split}\n\\end{equation*}\n\n\\subsubsection{Sequences of length 5}\n\n\\begin{equation*}\n  \\begin{split}\n    p(Z=z_1z_2z_3z_4z_5 \\gv X=x_1x_2x_3)\n        &= \\e^2(1-\\e)^2(\\\\\n        &+p(z_1)p(z_2)\\I(x_1, z_3)\\I(x_2, z_4)\\I(x_3, z_5)+p(z_1)p(z_3)\\I(x_1, z_2)\\I(x_2, z_4)\\I(x_3, z_5)\\\\\n        &+p(z_1)p(z_4)\\I(x_1, z_2)\\I(x_2, z_3)\\I(x_3, z_5)+p(z_1)p(z_5)\\I(x_1, z_2)\\I(x_2, z_3)\\I(x_3, z_4)\\\\\n        &+p(z_2)p(z_3)\\I(x_1, z_1)\\I(x_2, z_4)\\I(x_3, z_5)+p(z_2)p(z_4)\\I(x_1, z_1)\\I(x_2, z_3)\\I(x_3, z_5)\\\\\n        &+p(z_2)p(z_5)\\I(x_1, z_1)\\I(x_2, z_3)\\I(x_3, z_4)+p(z_3)p(z_4)\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_5)\\\\\n        &+p(z_3)p(z_5)\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_4)+p(z_4)p(z_5)\\I(x_1, z_1)\\I(x_2, z_2)\\I(x_3, z_3))/10\n  \\end{split}\n\\end{equation*}\n\n\\subsection{Analysis}\n\nThe codon emitted at node $\\state{M}$ in \\Cref{fig:codon-hmm-tree} can go through $m\\in\\{0, 1, 2, 3,\n4\\}$ base indels during the node transitions that end at some leaf-node. The probability of\nit undergoing $m$ indels is given by\n\\begin{equation*}\n  p(M=m) = \\binom{4}{m} (1 - \\e)^{4-m} \\e^m,\n\\end{equation*}\nwhere the coefficient $\\binom{4}{m}$ counts the number of paths corresponding to $m$ base indels.\n\\Cref{fig:indel-dist} shows the base indel distributions over different values of $\\e$.\nLet $F$ be a random variable representing the final sequence length generated by the model in\n\\Cref{fig:codon-hmm-tree}.\nWe have the probabilities\n\\begin{equation*}\n  \\begin{split}\n    p(F=1) = p(F=5) &= \\e^2(1-\\e)^2, \\\\\n    p(F=2) = p(F=4) &= 2\\e^3(1-\\e) + 2\\e(1-\\e)^3,~\\text{and} \\\\\n    p(F=3)          &= \\e^4 + 4\\e^2(1-\\e)^2 + (1-\\e)^4,\n  \\end{split}\n\\end{equation*}\nillustrated in \\Cref{fig:len-dist} over different values of $\\e$.\n\n\\begin{figure}[htbp]\n  \\centering\n  \\captionsetup{width=0.85\\linewidth}\n  \\begin{subfigure}{.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{figure/indel-prob}\n    \\caption{Base indel distribution.}\\label{fig:indel-dist}\n  \\end{subfigure}%\n  \\begin{subfigure}{.5\\linewidth}\n    \\centering\n    \\includegraphics[width=.7\\linewidth]{figure/seq-len-prob}\n    \\caption{Sequence length distribution.}\\label{fig:len-dist}\n  \\end{subfigure}\n  \\caption{ Distribution of base indels and sequence length over the modification probability\n  $\\e$. It seems reasonable to choose an $\\e$ value that is smaller than $1/5$ so that\n  $p(M=m+1)<p(M=m)$, as per \\Cref{fig:indel-dist}. }\\label{fig:dist}\n\\end{figure}\n\n\\newpage\n\\begin{sidewaysfigure}[ht]\n    \\centering\n    \\includegraphics[scale=0.9]{figure/codon-hmm-tree}\n    \\caption{Matched codon HMM tree.\n        The $\\e$-transitions occur infrequently and exist to account for sequence errors.\n        The most probable path ends at the first leaf-node from left to right.}\\label{fig:codon-hmm-tree}\n\\end{sidewaysfigure}\n", "meta": {"hexsha": "6edf976b1e624d4ebb85370ad05e003eedd2abd3", "size": 15145, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/codon.tex", "max_stars_repo_name": "EBI-Metagenomics/protein-hmm", "max_stars_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/codon.tex", "max_issues_repo_name": "EBI-Metagenomics/protein-hmm", "max_issues_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/codon.tex", "max_forks_repo_name": "EBI-Metagenomics/protein-hmm", "max_forks_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.3389830508, "max_line_length": 117, "alphanum_fraction": 0.6579069, "num_tokens": 6300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.882427872638409, "lm_q2_score": 0.658417500561683, "lm_q1q2_score": 0.5810059543285444}}
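The closed forms for $p(M=m)$ and $p(F=f)$ above can be checked by enumerating the four indel steps directly. A small Python sketch (illustrative, not part of the original derivation; $\e$ is the modification probability):

\begin{lstlisting}[language=Python, caption={Indel-count and length distributions (illustrative check)}]
from itertools import product
from math import comb

eps = 0.1   # modification probability

# Closed forms from the Analysis subsection.
p_m = {m: comb(4, m) * (1 - eps)**(4 - m) * eps**m for m in range(5)}
p_f = {1: eps**2 * (1 - eps)**2,
       2: 2*eps**3 * (1 - eps) + 2*eps * (1 - eps)**3,
       3: eps**4 + 4*eps**2 * (1 - eps)**2 + (1 - eps)**4,
       4: 2*eps**3 * (1 - eps) + 2*eps * (1 - eps)**3,
       5: eps**2 * (1 - eps)**2}

# Brute force over the four steps (two deletions, then two insertions),
# each firing independently with probability eps.
brute = {f: 0.0 for f in range(1, 6)}
for steps in product([0, 1], repeat=4):
    prob = 1.0
    for s in steps:
        prob *= eps if s else 1 - eps
    length = 3 - steps[0] - steps[1] + steps[2] + steps[3]
    brute[length] += prob

print(all(abs(brute[f] - p_f[f]) < 1e-12 for f in p_f))   # True
print(abs(sum(p_m.values()) - 1) < 1e-12)                  # True
\end{lstlisting}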
{"text": "\\chapter{Methodology}\n\\label{methods}\n%promises made in other sections:\n% * description of disease enrichment method\n\n%intro to the methods\nThis chapter describes the use of two probabilistic models to construct a weighted PPI network.\nThe first of these is a supervised binary classifier, implemented from a selection of either logistic regression, \\ac{SVM} or random forest.\nAs the project progressed this method was unsuccessful and a second conditionally independent probabilistic model was used to update on prior knowledge and produce the final weightings.\nFeature extraction was the predominant component in this project and involved turning the various data sources involved into usable features in a machine learning framework. \n%FIN: what would a non-traditional way be?  \nThe chosen Community Detection method, the fast and efficient Spectral Modularity algorithm, is also described. \n%FIN:why was it chosen? \n\n\\section{Feature vectors}\n\n%Features are the processed form of the data we use to make predictions about a given interaction between pairs of proteins and is defined mathematically in the following text.\nThe set of biologically relevant information associated with a given protein interaction is referred to as a feature vector and the constituent elements features.\nThe process of retrieving these from biological databases is referred to as feature extraction. \n%FIN: typo \nThis is a task which commonly involved mapping between non-standardised protein identifiers and indexing large tables of biological data automatically.\n%FIN: mention non-standardisation/alternative identifiers used in different data sources -I think you need to make it clear how difficult the data collection was especially as a novice to the available databases and data formats\n\n% going through posing the problem in terms of probability\nIn supervised learning problems we wish to learn a mapping between input variables and output variables given a training set.\nDefining this training set rigorously, it consists of input variables $\\pmb{x}$, which are typically vectors of values known as features.\nThe output variables in a classification problems are a set of labels\\autocite[2]{murphy_machine_2012} which are denoted as $y$.\nIn the case of binary classification these are simply either 0 or 1.\nGiven $N$ training vectors, $\\pmb{x}_{i}$, corresponding to $N$ interactions in the network and training labels $y_{i}$ we can define our training set $\\mathcal{D}$ for $N$ data points as:\n\n\\begin{align}\n    \\mathcal{D} = \\left( ( \\pmb{x}_{i}, y_{i} ) \\right)_{i=1}^{N}\n\\end{align}\n\n%an example?  \n\n%pose our problem\nOur problem involves taking various types of biological data, such as entries from biological databases indicating that proteins are involved in the same part of a cell and using these as features.\nThe training labels are either an interaction (a one) or a non-interaction (a zero).\nInteractions are taken to be any interactions in the iRefIndex\\autocite{razick_irefindex:_2008} consolidated database.\nThis database was chosen for reasons described in section \\ref{gold}.\n%FIN: why, what is this db? 
consolidated database\nNon-interactions are random pairwise combinations of Entrez protein IDs, which is a method applied in other works\\autocite{qi_evaluation_2006} to create negative training examples.\n%FIN: created by you how and from what dataset, hell cite the script/ipynb you used\nThese are also checked against the iRefIndex database to ensure they are not accidentally known interactions.\nThe full training set contained 188,833 true interactions and 997,760 non-interactions, but only a sub-sample of this was used for training to ensure a ratio of 600 non-interactions per true interaction.\n%There are approximately 600 non-interactions for every true interaction.\n\nAfter training our classifier, what we would like to estimate is the posterior probability that an interaction exists given a new feature vector.\nFor any model $\\mathcal{H}$ and a new feature vector $\\pmb{x}^{*}$ we can express this using Bayes' theorem:\n\n\\begin{align}\n    p(y^{*} = 1 | \\pmb{x}^{*}, \\mathcal{D}, \\mathcal{H}) = \\frac{ p(\\pmb{x}^{*}| y^{*} = 1 , \\mathcal{D}, \\mathcal{H}) p( y^{*} = 1 | \\mathcal{D}, \\mathcal{H})}{ \\sum_{y^{*}} p( \\pmb{x}^{*} | y^{*}, \\mathcal{D}, \\mathcal{H}) p( y^{*} | \\mathcal{D}, \\mathcal{H})}\n    \\label{eq:bayes}\n\\end{align}\n\nWhere in equation \\ref{eq:bayes} the posterior probability of an interaction $y^{*} = 1$ conditioned on a novel feature vector $\\pmb{x}^{*}$ is $p(y^{*} = 1 | \\pmb{x}^{*}, \\mathcal{D}, \\mathcal{H})$ and the prior is $p( y^{*} = 1 | \\mathcal{D}, \\mathcal{H})$.\n\nWe explicitly apply a prior to the probability of interaction based on the expected ratio of interactions to non-interactions stated in \\autocite{qi_evaluation_2006}.\n%FIN: what was Qi's justification for this value? Presumably observed ratios - cite the justifying paper if it wasn't Qi - unfortunately, it was Qi - other papers sometimes choose one in 350 or one in 400 (PIPs)\nThis is described in more detail in section \\ref{bayes}.\n\n\\subsection{Protein identifier mapping}\n\nMapping from one protein identifier to another became a significant problem in this project.\n%FIN: so a few of my earlier comments will be redundant to this section so reference from those sections to here \nUnfortunately, most biological databases maintain their own indexing method to identify different genes and proteins.\nNew data sources being integrated into this project would often use a different identification scheme from the NCBI Entrez GeneID originally chosen for use in the \\ac{PPI} network.\n\n\n%what the Entrez identifier is -  as opposed to other protein identifier schemes - cite NCBI web pages\nProteins are uniquely defined by their amino acid sequence, which is a long series of letters.\nHowever, for the sake of practicality, databases containing information about genes typically apply an identifier for each gene that is much shorter than the sequence and can encode other information about the gene, typically referred to as an accession.\n%FIN: commonly referred to as accessions \nThe Entrez GeneID identifier is relatively simple, just consisting of a number generated when the gene was added to the database\\autocite{maglott_entrez_2007}.\n\n%other protein identifier schemes and mapping between them\nOther popular schemes include the Ensembl identifier from the Ensembl database\\autocite{hubbard_ensembl_2002}, Uniprot identifiers from the Uniprot database\\autocite{the_uniprot_consortium_universal_2008} and even those used only for specific databases such as \\ac{DIP} 
identifiers\\autocite{xenarios_dip_2002}.\nMapping between these different identifiers is difficult as each identifier may map to none or many in another database.\nThis happens because of protein isoforms; different amino acid sequences can code for a protein with the same name.\nThis can be due to, for example, alternative splicing\\autocite{black_mechanisms_2003} or simple submission errors\\autocite{zeeberg_mistaken_2004}.\n%FIN: alternative splices of the same gene can produce different proteins, every gene is made of introns and exons, substrings which don't code for the amino acids in the protein and those that do respectively.  Expressed proteins are generated from any of several combinations of introns and exons (see http://en.wikipedia.org/wiki/Alternative_splicing).  Alt splicing is really important in eukaryotes and is one of the big explanations between the very low number of genes we found when we sequenced the human genome compared to the predictions.\n%FIN: the other source of 'isoforms' is when a piece of DNA is duplicated and have diverged a little in sequence between copies (older literature referred to alleles as isoforms which are slightly different as these are alternative forms of a gene such as say a gene for blue eyes or brown eyes (although that may actually be a polygenetic trait))\n%FIN: you've also missed an issue with the multiple mappings of mistakes and errors, also these mistakes and errors being differentially fixed or propagated through databases.  A lot of the big source databases are public goods where many authors contribute and curation is mostly automated so have problems.  A fun and amusing example of finding accessions in databases that have been reformatted automatically by someone using excel and not noticing http://www.biomedcentral.com/1471-2105/5/80\n\n%methods used during the project, with references to appendix notebooks\nVarious tools exist to map from one protein identifier to another: Ensembl's BioMart\\autocite{smedley_biomart_2009} is a versatile web-based tool, for example.\nIn this project, simple conversion tables from NCBI's Gene\\autocite{maglott_entrez_2007} ftp server were primarily used.\nAnother tool used was the Uniprot\\autocite{the_uniprot_consortium_universal_2008} web service.\n\n% talk about the problem of canonicalisation\nUnfortunately, with any of these services there will be a number of identifiers which cannot be converted, and many IDs will map to the same Entrez identifier as different protein isoforms are picked up.\n%FIN: this isn't always the explanation - for example one database might contain say several different crystal structures done of a protein which will all theoretically map from many identifiers relating to each of these structures to one identifier of the raw sequence in genbank\nOne way to avoid this problem is to only refer to a single canonical form of any given protein and find this protein in other databases through its amino acid sequence.\n%FIN: a benefit of this method is also that it can be used to find mis-labelled proteins: if you searched just for all data related to protein X, but someone had done a load of work on something they called protein Y but was actually protein X. 
If you didn't search by sequence you wouldn't find data \nThis ensures that when referring to an interaction between two identifiers the interaction is always simply between two proteins.\n%FIN: also note why the amino acid sequence is better than using the nucleotide sequence because it is the actual form of the protein and is derived after all the alternative splicing/isoform issues \n\n% how using Entrez identifiers risks becoming gene interaction prediction\n% or \"What Entrez isn't\"\nOtherwise, as in this project, the interaction is detected between two Entrez identifiers, which corresponds to an interaction between genes - possibly only a single interaction between combinations of the isoforms of each gene.\n%FIN: exactly, also worth putting the limitation in of not directly testing co-localisation and co-expression (i.e. temporal and spatial localisation allowing interaction)\nUnfortunately, this means that this project is only concerned with gene interaction prediction unless the Entrez identifiers have been carefully canonicalised.\nThis is not really a problem, as we are only aiming to provide a weighting to a graph, rather than provide an accurate prediction of interaction between proteins.\n%FIN: to be less negative maybe say you will still identify possible interactions but may not identify all of them (as you are only checking one isoform etc).  This can still be informative if it is a new predicition which is later validated\n\n% how iRefIndex solves this problem and should have been used from the start\n% with reference to storing the sequences of each protein involved to maintain unambiguity\nA solution to this problem is provided by the iRefIndex\\autocite{razick_irefindex:_2008} database, which combines many databases and stores canonicalised entries.\nUsing this database, it would be possible to ensure that the proteins used in a future project would be reliable canonical proteins.\nAdditionally, each protein of interest should ideally be stored with reference to its amino acid sequence in, for example, FASTA format.\n\n%description of feature extraction code\n%\\subsection{Dedicated code: ocbio.extract}\n\n\\section{Supervised model}\n\\label{gold}\n%material on the problems with choosing between gold-standard datasets\n% with reference here to the section in Qi's thesis\n\n%original work on \\ac{DIP} (justified choice from previous work)\nTo begin supervised learning, the model required a set of accepted true interactions.\nThe Database of Interacting Proteins (\\ac{DIP}) is a database of interactions proven by small-scale experiments\\autocite{xenarios_dip_2002}.\nEach interaction added is hand-curated, so it was expected to be a reliable training set.\nThis database was also used as a training set in \\textcite{qi_evaluation_2006}, who used the same supervised training method.\n\n%problems with \\ac{DIP}\n%why \\ac{HIPPIE} is better suited\nUnfortunately, problems were found with \\ac{DIP} as a training set. 
\n%FIN: I don't understand - explain what these problems are/were\nFeatures derived from other interaction databases, such as STRING, would win out in importance over all indirect features, as shown in figure \\ref{fig:unbalanced}.\n\\ac{HIPPIE} was also tested as a training set, but this led to the same problem.\n%FIN: so which did you use?\n\n%example feature importance graph\n\\begin{figure}\n    \\centering\n    \\setlength\\figureheight{3in}\n    \\setlength\\figurewidth{4in}\n    \\InputIfFileExists{\\imagedir/unbalanced.weighting.tikz}{}{\\textbf{!! Missing graphics !!}}\n    \\caption{An example of an unbalanced set of feature importances plotted after fitting a Random Forest classifier to a dataset containing interaction database derived features. Feature indices are not as described in table \\ref{tab:fvectors}. Importances are dimensionless and show that only a small number of features are important to the classifier.}\n    \\label{fig:unbalanced}\n\\end{figure}\n\n%After removing interaction databases \nIt was decided that only indirect features, as described in section \\ref{back:sources}, should be used in the trained supervised classifier, with direct evidence integrated into the final weightings by an explicit Bayesian method described in section \\ref{bayes}; the results of this are described in section \\ref{bayesresults}.\nOnce these databases were removed the performance of the classifier was drastically lower.\nHowever, all of the available features had more closely distributed importances in the final classifier, as shown in section \\ref{importances}.\n%FIN: this isn't obvious to me - you need to explain why balance set of features is important \n\n\n%iRefIndex as a final solution?\nTo train the final classifier the iRefIndex\\autocite{razick_irefindex:_2008} database was used to find real interactions due to its canonicalisation of identifiers and the number of databases it includes.\nAgain, non-interactions were generated as random combinations of Entrez Gene IDs.\nThese were also checked against the positive interactions to ensure known interactions were not present.\n\n\\subsection{Labeling feature vectors}\n\n%planned features, describe the list of possible features which was created\n%put it in the appendix as a table, or otherwise somehow\n\\ac{PPI} prediction features are a set of values for each interaction considered, whether it is a real interaction or a non-interaction.\nThis arrangement is illustrated in figure \\ref{fig:fvectors}.\nThe features used are described by index in table \\ref{tab:fvectors}.\n\n% link to the notebook on using this code\n% but update it to explain what custom generator options are\nTo keep track of the various different data sources and assemble the features into vectors to be used in the classifiers, a dedicated piece of code was required.\nThe code developed is written as a Python module called ocbio.extract.\nUsage and development notes for this program are referenced in Appendix \\ref{app:ocbio}.\n\n%this could be a good place to put that diagram of exactly what a feature vector is\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{fvectors.png}\n    \\caption{A diagram showing the structure of feature vectors and their relationship to the project as a whole. 
This figure is based on a similar flow chart shown in the project proposal.}\n    \\label{fig:fvectors}\n\\end{figure}\n\n\\begin{table}\n    \\centering\n    \\begin{tabular}{l c p{0.5\\textwidth}}\n        Feature         & Indices & Description \\\\\n        \\hline\n        Gene Ontology   & 1-90    & Described in section \\ref{go}, with individual indices described in Appendix \\ref{app:go}. \\\\\n        Y2H             & 91      & Y2H experimentally derived feature \\\\\n        ENTS derived    & 92-198  & Features used by the ENTS classifier, described by index in supplementary table for \\textcite{rodgers-melnick_predicting_2013}. \\\\\n        ENTS summary    & 199     & Prediction result of the classifier in \\textcite{rodgers-melnick_predicting_2013}. \\\\\n    \\end{tabular}\n    \\caption{Each feature used in the final classifier is described by index.}\n    \\label{tab:fvectors}\n\\end{table}\n\n%details of what classifiers were chosen and about scikit-learning\n\\subsection{Model Selection}\n\n%classification as a weighting tool\nThe classification problem we are solving differs from a typical classification task in that we are really trying to obtain a realistic weighting of interactions for use in a \\ac{PPI} network.\nClassification is normally concerned with picking a decision threshold to classify examples into categories.\nHowever, in this case the output of our process is the posterior probability under the model $\\mathcal{H}$, as in equation \\ref{eq:bayes}, where $\\mathcal{H}$ is one of the classification models described below, such as logistic regression.\n\n%introduce scikit-learn\nThe posterior probability was constructed using the Python package Scikit-learn\\autocite{pedregosa_scikit-learn:_2011}.\nEach classifier implemented in this package has a similar interface allowing modular code to be written.\nIn addition, this package is actively developed, with all the required classifiers having efficient implementations.\n%FIN: also one of most widely used packages \n\n% explain why we tried the algorithms that we tried\n%The following sections describe the three classifiers chosen.\nThere were three classifiers chosen: a logistic regression model, a random forest model and an \\ac{SVM}.\n%Each of these involved tuning a number of hyper-parameters.\n\n%\\subsubsection*{Hyper-parameters}\n%describe the different parameters varied in scikit-learn by name\nEach classifier used requires the tuning of hyper-parameters: parameters of the learning algorithm itself, set before training rather than learned from the data, which will affect its overall performance.\nThese hyper-parameters are described in table \\ref{tab:hyper} for each of the models used in the project.\nThe optimal values found for each can be seen in table \\ref{tab:gridresults}.\n\n%table of the different hyper-parameters\n\\begin{table}\n    \\centering\n    \\small\n    \\begin{tabular}{p{0.2\\textwidth} c p{0.5\\textwidth}}\n        \n        Classifier                                  & Hyper-parameters     & Description \\\\\n        \\hline\n        Logistic Regression                         & C                         &  Inverse of regularisation strength, the $\\alpha$ parameter in section \\ref{logreg}. \\\\\n        \\hline \n        \\multirow{3}{*}{\\parbox{0.2\\textwidth}{Support Vector Machine}}     & kernel                    &  The kernel used. Three options: linear, \\ac{RBF} and polynomial. \\\\\n                                                    & Gamma($\\gamma$)           &  Kernel coefficient. 
\\\\\n                                                    & C                         &  As with Logistic Regression, a penalty parameter. \\\\\n        \\hline \n        \\multirow{2}{*}{\\parbox{0.2\\textwidth}{Random Forest}}              & N estimators              &  Number of trees used in the forest. \\\\\n                                                    & Max features              &  Number of features considered when looking for splits, part of the problem of finding the best decision at each node described in section \\ref{randomforest}. \\\\\n        \\hline \n        \\multirow{2}{*}{\\parbox{0.2\\textwidth}{Extremely Randomized Trees}} & N estimators              &  As with Random Forest. \\\\\n                                                    & Max features              &  As with Random Forest. \\\\\n    \\end{tabular}\n    \\caption{Summary of the hyper-parameters described in the Scikit-learn \\autocite{pedregosa_scikit-learn:_2011} documentation to be tuned for each of the models considered.}\n    \\label{tab:hyper}\n\\end{table}\n\n\n\\subsubsection*{Logistic Regression}\n\\label{logreg}\n\n%with reference to Murphy\nLogistic regression is a linear model used for binary classification.\nIt is equivalent to a linear regression model transformed through a sigmoid function\\autocite[376]{murphy_machine_2012}, denoted here by $\\sigma$:\n\n\\begin{align}\n    p(c=1|\\pmb{x}) = \\sigma(b + \\pmb{w}^{T}\\pmb{x})\n    \\label{sigmoidtransform}\n\\end{align}\n\nWhere in equation \\ref{sigmoidtransform} $\\pmb{x}$ is the vector of features and $c$ is the class label - using $1$ for a real interaction and $0$ for a non-interaction.\nThe weights and the bias are the parameters of this model, expressed in the above equation as $\\pmb{w}$ and $b$, respectively.\n\nThis divides the points (feature vectors) in the dataset by a hyperplane, classifying the points on each side of the hyperplane into different classes.\nFor data that is linearly separable, this produces a classifier that will make no mistakes on the training data.\nUnfortunately, the data we are working with is not linearly separable, as shown by visualization in section \\ref{dataviz}.\n\n%How is it trained?\nTo find the parameters in this model the log likelihood must be maximised, corresponding to the maximum likelihood solution.\nThe log likelihood, for $N$ feature vectors, is given by:\n\n\\begin{align}\n    L(\\pmb{w},b) = \\sum_{n=1}^{N} c^{n} \\log \\sigma(b + \\pmb{w}^{T}\\pmb{x}^{n}) + (1 - c^{n})\\log (1 - \\sigma(b + \\pmb{w}^{T}\\pmb{x}^{n}))\n\\end{align}\n\n%describe what the C parameter is\nRegularisation of the weights represents a prior belief that the weights should not increase without bound.\nIn a case where the data is linearly separable and where regularisation is not applied, the weights will increase without bound to produce extremely confident classifications\\autocite[381]{barber_bayesian_2012}.\nTo stop this from happening we apply a penalty term, $\\frac{1}{C}$, to the size of the weights:\n\n\\begin{align}\n    L'(\\pmb{w},b) = L(\\pmb{w},b) - \\frac{1}{C} \\pmb{w}^{T}\\pmb{w}\n\\end{align}\n\nTuning this hyper-parameter is the goal of a grid search, which is discussed in section \\ref{classifierverification}, and the optimal value found in this case was 0.0215 as shown in table \\ref{tab:gridresults}.\n%FIN: explain grid-search \n\n\\subsubsection*{Support Vector Machines}\n%FIN: you are a big fan of Murphy\n%GAV: it is kind of a joke, it's probably my least 
favourite ML textbook, but it is thorough\n%with reference to Murphy\nLogistic regression can be generalised to apply kernel functions to the input features to obtain better classifications.\nSupport Vector Machines exploit this while also applying a different objective function intended to avoid overfitting\\autocite[383]{murphy_machine_2012}.\nThe objective in placing the hyperplane for an \\ac{SVM} is ``maximum margin'': it attempts to maximise the distance between the hyperplane and the closest points of each class.\n\n%success of these models?\nThese are often successful classifiers in practice.\nApplications include text categorisation, hand-written character recognition, image classification and biosequence analysis\\autocite{cristianini_introduction_2000}.\n\n%describe the hyper-parameters?\nThe hyper-parameters for an SVM control the kernels, along with the regularisation parameter as described for logistic regression in section \\ref{logreg}.\n%FIN: explain what a kernel is and how \\ac{SVM}s can classify non-linearly seperable unlink logit by transformation to feature space\nTwo of the three hyper-parameters which can be tuned during a grid search operation are the degree of the polynomial kernel, if chosen, and the gamma coefficient of the kernels.\nIn table \\ref{tab:gridresults} it can be seen that the best performance was achieved with an \\ac{RBF} kernel, C of 10.0 and gamma of 10.0.\n\n\\subsubsection*{Random Forest}\n\\label{randomforest}\n\nA random forest is another such model.\nIt operates as a combination of many decision trees.\nDecision trees are intuitively simple in that they consist of a series of comparisons arranged in a tree, as shown in figure \\ref{fig:dectree}.\nA feature is tested at each node to come to a final conclusion about the state of the interaction.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{decisiontree.png}\n    \\caption{A simple example decision tree, illustrating the process of sequential, dependent decisions. The example feature vector f follows a path through the tree indicated by a dashed line. Based on an image obtained through Wikimedia\\autocite{wkmdacommons}.}\n    \\label{fig:dectree}\n\\end{figure}\n\nIn this way decision trees are both simple and tractable methods for using a feature vector to classify interactions as real or false, and there are many automated ways to generate effective decision trees.\nThe problem in the design of a decision tree is which comparison to choose at each node; this is beyond the scope of this report, and a full description can be found in \\textcite[544]{murphy_machine_2012}.\nOther advantages of decision trees include the ability to handle a mixture of discrete and continuous inputs, automatic variable selection, scaling to large datasets readily and the ability to handle missing inputs, with modification.\n%other strengths of decision trees?\n\n%describe split function? 
probably not worth going into this.\n\n%then explain the problems with decision trees, variance etc\nUnfortunately, despite the strengths of decision trees there are still problems with stability: small changes in the input data can produce large changes in the output\\autocite[550]{murphy_machine_2012}.\nThe random forest algorithm addresses this problem by providing redundancy; multiple trees are grown and their results averaged.\n%FIN: how many trees?\nFor $M$ trees trained on different subsets of the data\\autocite[551]{murphy_machine_2012} we obtain:\n\n\\begin{align}\n    f(x) = \\sum_{m=1}^{M} \\frac{1}{M} f_{m}(x)\n\\end{align}\n\nWhere $f_{m}(x)$ is the output of the $m$th tree. This is simply averaging the results of the trees and is known as bagging.\n\n%with reference to Qi and ENTS, showing good performance on this problem\nWith the ability to work on large datasets and to mix continuous and discrete data, these types of classifiers would appear to be theoretically well suited to this problem.\nThis is what has been observed in the literature, with these classifiers achieving the best performance despite different types of biological data being used\\autocites{qi_evaluation_2006,rodgers-melnick_predicting_2013}.\nDue to these reports in the literature it appeared that this classifier would be the best choice for our protein interaction prediction task.\n\n%hyper-parameters found\nThe two hyper-parameters varied in our search were the number of trees, or estimators, and the max features, described in table \\ref{tab:hyper}.\nIn table \\ref{tab:gridresults} it can be seen that 44 estimators and a max feature value of 25 were found to be optimal.\n\n%expand on how much better they were etc? in different places?\n\n\\subsubsection*{Other options}\n\n% what other algorithms could we have tried but didn't\nOther options considered for our classification problem, but not included in the project due to time constraints, were feedforward neural networks, Naive Bayes and beta regression.\nNaive Bayes in particular would have required modifying the code from Scikit-learn to deal with data from multiple different distributions, or implementing Weka's solution of kernel density estimated distributions for each different feature\\autocite{john_estimating_1995}, as current Python implementations are limited to either the Gaussian or Bernoulli case.\n\n%\\subsubsection*{Beta regression}\n%\n%%what is beta regression? 
quickly\n%Beta regression is a generalised linear model in which the output variable is distributed according to a Beta distribution\\autocite{smithson_better_2006}.\n%A maximum-likelihood solution to this model can be found and the resulting model is fit in the same way as a standard regression model.\n%\n%%why this would also have been a good idea, if we didn't have to implement it ourselves\n%True protein interactions are not something which can be assumed in a protein interaction prediction task.\n%Many interactions are known to be very likely, but others are less confidently classified, as reflected in the \\ac{HIPPIE} database's confidence scoring system\\autocite{schaefer_hippie:_2012}.\n%\n%To train a classifier a training set of true and false interactions is required and this cannot be supplied without thresholding the database at an arbitrary confidence value.\n%\n%Beta regression avoids this problem as it allows the confidence values to remain a part of the regression process.\n%Using this we can build a model and update our belief on the likelihood of interaction in a Bayesian framework much more easily.\n%\n%%FIN: actually this is pretty cool, note to self read up on Beta regression \n\n\\subsection{Model testing}\n\\label{classifierverification}\n\n%Tests we planned to use on each classifier:\n%  mention cross-validation\n%  learning curves\n%  simple accuracy value\n%  grid search\n%  \\ac{ROC}\n%  Precision recall\n%  test interactions\n%  \n\n% consider using paragraph headings in this section\nVarious measures, such as accuracy, \\ac{ROC} and precision-recall, were applied to each model to estimate its performance in different ways.\nThese tests included simple accuracy measures applied over learning curves and grid searches, along with plotting \\ac{ROC} and precision-recall curves.\nGrid searches of parameter values were used to find optimal hyper-parameters in all three models.\n\n% was or will be?\nCross-validation is the practice of repeatedly splitting the data into training and test sets and was applied during hyper-parameter searches\\autocite[152]{witten_data_2011}.\nIt was also applied when plotting learning curves to get a statistical estimate of the reliability of the metrics applied to the classifier.\nCross-validation guards against overfitting the hyper-parameters to a single held-out split, since performance is averaged over several train/test partitions rather than optimised directly on one test set.\nAs the training set is approximately $10^{6}$ samples, these tests were applied to random sub-samples of the full data set, stratified to maintain the same proportion of non-interactions to interactions as in the full training set.\nThis is important as it must reflect the expected ratio for real protein interactions to non-interactions.\nThe use of pipelines and learning curves in this process is described in Appendix \\ref{app:notebooks}.\n\n%\\subsubsection*{Grid search}\n% grid search\nA grid search is a cross-validated accuracy test in which the classifier is evaluated over a range of hyper-parameter values.\nClassifiers, such as a Logistic Regression model, have some number of hyper-parameters.\nA Logistic Regression model, for example, has a single hyper-parameter (the regularisation strength $C$), so the grid search for this model simply involves varying this parameter to obtain the optimum performance on the validation set.\n\n%\\subsection{Missing data}\n%how was missing data dealt with?\nBefore 
classification could be performed, the missing data in the feature vectors had to be imputed.\nThis was performed by filling the missing values with the mean value of that feature.\nThis is a common technique that is applied if it is likely the data is missing at random.\nIn this case the data that is missing is due to mismatches in protein identifier mapping dictionaries, which is likely random and independent of the interaction prediction task.\n%FIN: link back to imputer discussed earlier, if you are having a section on this you could add a section on data normalization\n\n\\subsubsection*{Accuracy}\n% accuracy value\nThe accuracy value plotted in the learning curve and used in grid searches of parameter values is simply the proportion of correctly classified instances in a training or validation set.\nTypically, the protein interaction prediction problem is sparse, in that there are very few interactions relative to the large number of non-interactions.\nThis is reflected in the accuracy in that a classifier that simply always predicts zero can still achieve a very high accuracy.\nUsing this accuracy value alone is therefore problematic, which is why the other measures, such as \\ac{ROC} and precision-recall \\ac{AUC} values, were employed.\n\n\\subsubsection*{\\ac{ROC} curve}\n% explain what those are\nA Receiver Operating Characteristic, or \\ac{ROC}, curve plots the true positive rate against the false positive rate as the threshold of classification is varied.\nThis makes it useful as an illustration of the tradeoff possible with this particular classifier.\nThe Area Under Curve, or \\ac{AUC}, value is the area under this line.\nA higher \\ac{AUC} value corresponds to a better classifier, although there is some controversy surrounding this\\autocite{hanczar_small-sample_2010}.\nThese concerns center on the problems of small sample sizes, which is not the case in this project.\n%FIN: maybe add in a wee truth table of TP, TF, FP, FF \n\n\\subsubsection*{Precision-recall curve}\n%precision recall\nA precision-recall curve plots the precision of a classifier against its recall as the threshold of classification is varied.\nThe precision is defined as the number of true positive results over the number of total positive results, $TP/(TP+FP)$.\nRecall is defined as the number of true positive results over the number of available positive examples, $TP/(TP+FN)$.\nThe area under the curve is used to gauge the classifier's effectiveness.\n%example \\ac{ROC} curve?\n%example precision recall curve?\n\n\\section{Conditionally independent probabilistic model}\n\\label{bayes}\n%description of the method used to weight interactions after the classifier was found to be insufficient\n\n%justification for using this method, the problem with supervised classification?\n%CHECK THIS ISN'T FOUND ELSEWHERE ALREADY\nWeighting interactions using the posterior distribution of a probabilistic model requires that the model accurately represents beliefs about the system being modelled.\nSupervised classification requires that true interactions are known in order to find the parameters of the model.\nUnfortunately, in this problem we cannot know for certain whether an interaction is real or false.\nTherefore, when fitting a supervised model we only obtain, at best, an accurate predictor for the interaction database used to fit the model, as shown in figure \\ref{fig:unbalanced}.\n\n%the prior on the edgelist\nAs the 
\\ac{PPI} network edges we plan to weight have already been defined through combining different interaction databases, we already have a strong prior on the existence of these interactions.\n\nUsing the classifier on its own can then, at best, reproduce one of the databases used to create this network, and many of the interactions will be incorrectly weighted much lower than expected.\nThe solution is to treat both the result of the classifier and the edgelist inclusion as observable events dependent on the latent ``interaction'' variable.\nAlong with these variables, we can also include other protein interaction prediction resources, such as \\ac{STRING}\\autocite{von_mering_string:_2005}.\n\n%naive bayes model, advantages, disadvantages\nThe model chosen to update our prior belief assumes conditional independence of the observable variables given the hidden interaction variable.\nThis equates to a Naive Bayes model with a belief network as shown in figure \\ref{fig:naive}.\nAn advantage of this model is the simplicity of its analysis.\nThe disadvantage is that our variables could in fact be dependent, as the classifier is trained on some of the same interaction databases that the \\ac{HIPPIE} database is composed of.\n\n%belief network for naive bayes\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.6\\textwidth]{naive.png}\n    \\caption{A belief network illustrating the Naive Bayes model, equivalent to that used for inference when weighting the interactions of the \\ac{PPI} network.}\n    \\label{fig:naive}\n\\end{figure}\n\n%updating on evidence? equations\n\n%the importance of conservative estimates\nThe class label, interaction, is not observed in this model so we cannot solve for the parameters directly.\n%FIN: \\ac{PPI} or not\nThe only way to proceed is to set the parameters manually, conservatively estimating the conditional true positive and false positive rates of the Bernoulli distributions.\nContinuous distributions, such as the classifier predictions or prediction databases, were estimated in a supervised way using a sample of the training set used to train the supervised classification model.\n%FIN: explain \\ac{KDE} acronym \n\n%kernel density estimation, justification from Weka\n%Using \\ac{KDE} in conjunction with Naive Bayes is the approach used by Weka to deal with arbitrary conditional distributions\\autocite{john_estimating_1995}.\n%FIN: you need to explain what Weka is \n%Unfortunately, this distribution is estimated using samples labeled using the protein interaction databases, so we cannot be fully confident in its predictions.\n%As before, a conservative estimate of its accuracy is applied through increasing the smoothing bandwidth.\n\n\\section{Comparison of weighted and unweighted \\ac{PPI} networks}\n\n%why do we want to try weighted networks? 
and intro\nIt is hoped that a weighted network will provide new insight into the interactions of proteins in the active zone described in table \\ref{tab:synsys}.\n%FIN: I think at this stage you need a refresher to the reader on what active zone you are talking about\nFor this reason, comparing the unweighted and weighted cases of the graph produced is a major goal of this project.\nThe following sections describe this process and the measures applied to both networks.\n\n\\subsection{Community detection}\n\\label{communitydetection}\n%FIN: section needs expansion in the areas you have noted \n%description of each of the algorithms available\n%three:\n%  Geodesic edge Betweenness\n%  Random edge Betweenness\n%  Spectral Betweenness\nThree algorithms were identified for community detection in this project; the first two are described in \\textcite{newman_finding_2004}.\nGeodesic edge betweenness and random edge betweenness partition the graph based on betweenness measures, making them optimisation approaches.\nThe third, spectral modularity, was the algorithm chosen for this project.\n\n%with reference to Colin's paper, why this is a good choice\nThe original advantages of spectral modularity techniques were comparable results on generated test data in less time than competing methods in \\textcite{newman_modularity_2006}.\nIn a recent paper, this method was compared favourably to other methods in terms of CPU time\\autocite{mcleanunpub}.\n%Although it detected fewer communities, it maintained a higher modularity score.\n\n%community detection code, what it does, which algorithm was used\n%pending use - will see if I have time to use more than one\n\n\\subsection{Normalised Mutual Information}\n\n%NMI theory, what is a measure of\nMutual information is, intuitively, the reduction in uncertainty about one random variable gained by observing another.\nDefined in terms of entropy it is\\autocite{mackay_information_2003}:\n\n%could move that citation next to the equation?\n\\begin{align}\n    I(X;Y) = H(X) - H(X|Y)\n\\end{align}\n\nWhere $H(X)$ is the entropy of the random variable $X$ and $H(X|Y)$ is the conditional entropy of $X$ given $Y$.\n\nIn the case of the function we are using from Scikit-learn, the mutual information is normalised by $\\sqrt{H(X)\\times H(Y)}$\\autocite{pedregosa_scikit-learn:_2011}.\nThis produces a value between zero and one which reflects the redundancy of the distributions - 1.0 being exactly the same and 0.0 being independent.\n\n\\subsection{Disease Enrichment}\n\\label{diseaseenrichment}\n\n%Primer on what this test actually does\nBy linking the proteins in a cluster to known disease annotations it is possible to estimate the likelihood that a given community is involved in a particular disease.\nUsing a list of known genes involved in the disease of interest, the test determines whether the number of these genes in the detected communities is significant\\autocites{kirov_novo_2012,lips_functional_2012}.\nThe code being used applies a hypergeometric test to find this likelihood and returns a p-value.\n\n%FIN: explain a p-val? 
GAV: no space\n\nIn the case of this project, due to the aims of the SYNSYS\\autocite{synsys} project and for simplicity, only two diseases were investigated: schizophrenia and Alzheimer's.\nThe lists of genes associated with schizophrenia and Alzheimer's are part of the internal code used.\n%interaction between significance level and p-value\n\n\n\\section*{Conclusion}\n\n%This project involved a large number of varied techniques and algorithms being used on large data sets.\n%Many problems had to be solved in order to piece together the full project.\n%As each of these steps were sequential, the results section describes the results as they were gathered, in the same structure as seen above.\nThis chapter described how the weighted network was constructed using both a supervised probabilistic model and a conditionally independent model.\nOnce the failings of the supervised method became apparent, this combined method was the best solution found to the prediction problem.\nThe next chapter will discuss the results of both of these techniques and the analysis of the weighted and unweighted networks produced.\n", "meta": {"hexsha": "8dd114ccc09ee152a55ec78ea1d1b08cfb526fb1", "size": 42220, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/methods.tex", "max_stars_repo_name": "gngdb/opencast-bio", "max_stars_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-02-24T20:44:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-06T02:44:38.000Z", "max_issues_repo_path": "report/sections/methods.tex", "max_issues_repo_name": "gngdb/opencast-bio", "max_issues_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/methods.tex", "max_forks_repo_name": "gngdb/opencast-bio", "max_forks_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.1245551601, "max_line_length": 549, "alphanum_fraction": 0.7870914259, "num_tokens": 9286, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8596637361282707, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5809303216552647}}
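To make the conditionally independent update of section \ref{bayes} concrete, here is a minimal Python sketch of the Bernoulli case: a prior probability of interaction is combined with conditionally independent binary observations via Bayes' rule. The true positive and false positive rates and the evidence values below are hypothetical placeholders, not the project's actual conservative estimates; only the prior of roughly one interaction per 600 pairs echoes the ratio stated earlier in the chapter.

\begin{verbatim}
def posterior(prior, evidence):
    """Naive Bayes update: combine a prior p(interaction) with
    conditionally independent binary observations.

    evidence: list of (observed, tpr, fpr) triples, where
      tpr = p(observation | interaction) and
      fpr = p(observation | no interaction).
    """
    p1, p0 = prior, 1.0 - prior
    for observed, tpr, fpr in evidence:
        if observed:
            p1 *= tpr
            p0 *= fpr
        else:
            p1 *= 1.0 - tpr
            p0 *= 1.0 - fpr
    return p1 / (p1 + p0)  # normalise over both hypotheses

# Hypothetical conservative rates for two sources of evidence:
# classifier above threshold, and inclusion in the edgelist.
print(posterior(1.0 / 600.0, [(True, 0.7, 0.01), (True, 0.9, 0.001)]))
\end{verbatim}

Because the observations are assumed conditionally independent given the latent interaction variable, each source of evidence simply multiplies the odds, which is exactly what makes the Naive Bayes analysis simple.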
{"text": "\\section{09/02}\nThe Ben-Or algorithm can be used to solve Binary Consensus with faults.\n\\begin{algorithm}\n    \\caption[\\AlgName{Ben-Or}]{Ben-Or Algorithm from view of processor $p$ with initial value $\\chi_p$}\n    \\label{alg:ben-or}\n    \\begin{algorithmic}[1]\n        \\Procedure{\\nameref*{alg:ben-or}}{}\n            \\State $\\texttt{decided} \\gets \\False$\n            \\State $\\texttt{round} \\gets 1$\n            \\While{not \\texttt{decided}}\n                \\State $\\Call{Broadcast}{1, \\texttt{round}, \\chi_p}$\n                \\State wait for $n - f$ messages of type $(1, \\texttt{round}, \\cdot)$\n                \\If{there are more than $\\sfrac{n}{2}$ messages of type $(1, \\texttt{round}, v)$}\n                    \\State $\\Call{Broadcast}{2, \\texttt{round}, D, v}$\n                \\Else\n                    \\State $\\Call{Broadcast}{2, \\texttt{round}, U}$\n                \\EndIf\n                \\State wait for $n - f$ messages of type $(2, \\texttt{round}, \\cdot)$\n                \\State $t \\gets$ number of messages of type $(2, \\texttt{round}, D, v)$\n                \\If{$t > 0$}\n                    \\State $\\chi_p \\gets v$\n                    \\If{$t > f$}\n                        \\State $\\texttt{decided} \\gets \\True$\n                        \\State $\\Call{Broadcast}{1, \\texttt{round}, \\chi_p}$\n                    \\EndIf\n                \\Else \\Comment{All messages are of type $(2, \\texttt{round}, U)$}\n                    \\State $\\chi_p \\gets \\Call{Sample}{\\set{0,1}}$\n                \\EndIf\n                \\State $\\texttt{round} \\gets \\texttt{round} + 1$\n            \\EndWhile\n        \\EndProcedure\n    \\end{algorithmic}\n\\end{algorithm}\n\\begin{theorem}{}{}\n    The above algorithm is correct.\n\\end{theorem}\n\n\\begin{proof}\n    Validity is clear: if all nodes start with $v$, then we skip to \n    %TODO finish proof\n\\end{proof}\n\n\\begin{theorem}{}{}\n    The expected number of rounds of \\nameref{alg:ben-or} is $2^{n-1}$.\n    Additionally, with high probability, the number of rounds is less than\n    $2^{n-1}\\ln{n}$.\n\\end{theorem}\n\n\\begin{proof}\n    If all nodes are of type $(2, \\texttt{round}, ?)$, then with probability\n    $\\sfrac{2}{2^n} = \\sfrac{1}{2^{n-1}}$ all nodes will choose the same value.\n    Thus, we can upper bound the probability of failure after $k$ rounds by\n    \\begin{align*}\\prob{\\text{failure}}\n        &\\leq \\left(1 - \\frac{1}{2^{n-1}}\\right)^k \\leq e^{\\frac{-k}{2^{n-1}}}\\\\\n        \\shortintertext{setting $k=2^{n-1}\\ln{n}$}\n        &=\\frac{1}{n}\n    \\end{align*}\n    Thus, the probability of success after $2^{n-1}\\ln{n}$ rounds is at least $1\n    - \\sfrac{1}{n}$. Additionally, the expected number of rounds is at most\n    $2^{n-1}$.\n\\end{proof}\n\n\\subsection{Graphs}\n\\begin{problem}{Min-Cut}{}\n    Given a graph $G = (V, E)$, find a minimum set of \\emph{edges} whose removal\n    disconnects the graph.\n\\end{problem}\n\n\\begin{definition}{Graph Cut}{}\n    A \\emph{cut} is a partition of the vertices into two non-empty subsets, $A$\n    and $B$.\n\\end{definition}\n\nNotice that any edge from $A$ to $B$ crosses the cut, and when all such edges\nare removed, the graph becomes disconnected. 
\n\n%TODO example figures\n\nIn general, a graph can have as many as $\\binom{n}{2}$ distinct min-cuts; the cycle on $n$ vertices attains this bound.\n\n%TODO draw cycle\n\n\\begin{algorithm}\n    \\caption{Karger's Algorithm}\n    \\label{alg:karger}\n    \\begin{algorithmic}[1]\n        \\Procedure{Karger}{}\n            \\ForRange{$i$}{1}{$n - 2$}\n                \\State Choose a random edge $e$ \\label{line:kargersample}\n                \\State Contract edge $e$ \\label{line:kargercontraction}\n            \\EndForRange\n            \\State Output remaining edges\n        \\EndProcedure\n    \\end{algorithmic}\n\\end{algorithm}\n\nObserve that \\cref{line:kargersample} requires $\\bigO{1}$ operations and\n\\cref{line:kargercontraction} requires $\\bigO{n}$ operations, for a total of\n$\\bigO{n^2}$. Additionally, a cut set in the contracted graph is a cut set in\nthe original graph, thus we can only \\emph{increase} the size of the min-cut via\n\\cref{line:kargercontraction}.\n\n\\begin{theorem}{}{}\n    \\nameref{alg:karger} outputs a min-cut with probability at least\n    $\\sfrac{1}{n^2}$.\n\\end{theorem}\n\n\\begin{proof}\n    Suppose $C$ is some min-cut in $G$ and write $\\card{C} = k$. In that case,\n    every node must have degree at least $k$, and the number of edges satisfies $m \\geq \\sfrac{nk}{2}$. \n\n    Let $E_i$ denote the event that no edge in $C$ is contracted in iteration\n    $i$. Notice that the probability that $C$ is output is simply\n    \\begin{align*}\\prob{\\bigcap E_i}\n        &=\\prob{E_1}\\prob{E_2 \\given E_1}\\cdots\\prob{E_{n-2} \\given E_1 \\cap E_2 \\cap \\dots \\cap E_{n - 3}}\\\\\n        &=\\left(1 - \\frac{2}{n}\\right)\\left(1 - \\frac{2}{n-1}\\right)\\cdots\\left(1 - \\frac{2}{3}\\right)\\\\\n        &=\\frac{2}{n(n-1)}\\\\\n        &=\\frac{1}{\\binom{n}{2}}\\\\\n        &\\geq\\frac{1}{n^2}\n    \\end{align*}\n\\end{proof}\n\nThus, we can repeat this process $n^2\\ln{n}$ times to achieve a high probability\nthat the algorithm outputs a min-cut.", "meta": {"hexsha": "72bcdc22956feab5b5ef7707446168ee888ae528", "size": 4918, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/0902.tex", "max_stars_repo_name": "khalid-salad/Randomized-Algorithms-Notes", "max_stars_repo_head_hexsha": "2556b9d8ede2f3c10960949680f077a8d7e37196", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-28T23:46:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-28T23:46:42.000Z", "max_issues_repo_path": "tex/0902.tex", "max_issues_repo_name": "khalid-salad/Randomized-Algorithms-Notes", "max_issues_repo_head_hexsha": "2556b9d8ede2f3c10960949680f077a8d7e37196", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/0902.tex", "max_forks_repo_name": "khalid-salad/Randomized-Algorithms-Notes", "max_forks_repo_head_hexsha": "2556b9d8ede2f3c10960949680f077a8d7e37196", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.6612903226, "max_line_length": 109, "alphanum_fraction": 0.5912972753, "num_tokens": 1541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.7879312006227324, "lm_q1q2_score": 0.5808299252284924}}
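A minimal Python sketch of Karger's algorithm under the assumptions above: the multigraph is kept as an edge list over vertices $0,\dots,n-1$, contraction is done by relabelling endpoints, and self-loops are discarded. The toy graph and the 100 repetitions are arbitrary choices for illustration; the analysis calls for roughly $n^2\ln{n}$ independent runs.

\begin{verbatim}
import random

def karger_min_cut(edges, n):
    """One run of Karger's algorithm on a connected multigraph with
    vertices 0..n-1. Returns the size of the cut found, which is
    always >= the true min-cut size."""
    label = list(range(n))   # current super-vertex of each vertex
    edges = list(edges)
    vertices = n
    while vertices > 2:
        u, v = random.choice(edges)                  # sample a random edge
        a, b = label[u], label[v]
        label = [a if x == b else x for x in label]  # contract b into a
        edges = [(x, y) for (x, y) in edges
                 if label[x] != label[y]]            # drop self-loops
        vertices -= 1
    return len(edges)  # edges crossing the final two super-vertices

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # min-cut here is 1: edge (2, 3)
print(min(karger_min_cut(edges, 4) for _ in range(100)))
\end{verbatim}

Each run can only overestimate the min-cut, so taking the minimum over many independent runs drives the failure probability down exponentially, exactly as in the repetition argument above.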
{"text": "\\documentclass[11pt]{amsart}\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n%\\geometry{landscape}                % Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{epstopdf}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\n\\title{Brief Article}\n\\author{The Author}\n%\\date{}                                           % Activate to display a given date or no date\n\n\\begin{document}\n%\\maketitle\n%\\section{}\n%\\subsection{}\nLet $X$ and $Y$ be two Banach spaces.\n$C(X, Y)$ is the set of all continuous mappings $f: X\\mapsto Y$.\nFor $f, g\\in C(X, Y)$, we define\n$$\\|f - g\\|= \\sup_{x\\in X} \\|f(x) - g(x)\\|.$$\n\\begin{enumerate}\n\\item\nProve that $C(X, Y)$ is a Banach space.\n\\item If $K \\subset Y$ are compact in their own spaces, is $C(X, K)$ compact?\n\\end{enumerate}\n\n\n\\end{document}  ", "meta": {"hexsha": "59b25bdecf8626d42b3b36833272f0dc2eaddd16", "size": 1098, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/200607ex.tex", "max_stars_repo_name": "songqsh/foo1", "max_stars_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-14T03:04:24.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-14T03:04:24.000Z", "max_issues_repo_path": "doc/200607ex.tex", "max_issues_repo_name": "songqsh/foo1", "max_issues_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-07-01T20:35:39.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-04T22:07:50.000Z", "max_forks_repo_path": "doc/200607ex.tex", "max_forks_repo_name": "songqsh/foo1", "max_forks_repo_head_hexsha": "536bf44cc4fb43a3ac0f2a64695f619ac7526651", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-08-25T00:50:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-25T20:06:32.000Z", "avg_line_length": 36.6, "max_line_length": 105, "alphanum_fraction": 0.6393442623, "num_tokens": 327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428946, "lm_q2_score": 0.7371581741774411, "lm_q1q2_score": 0.5808299215575643}}
{"text": "\\unnumberedchapter{Glossary} \n\\chapter*{Glossary} \n\nBecause accelerator physics is a small branch of applied physics, and because this dissertation introduces several terms that are not frequently used in the accelerator physics community, the following glossary has been included for the reader.\n\n\\textbf{Beta function} \u2014 A function that scales the amplitude of single-particle oscillations in the linear approximation.\n\n\\textbf{Beam envelope} \u2014 The root-mean-square ellipsoid defined by the covariance matrix.\n\n\\textbf{Beam perveance} \u2014 A dimensionless measure of space charge strength.\n\n\\textbf{Circular mode} \u2014 A beam with small four-dimensional emittance. It could also refer to the circular motion of the eigenvectors of a coupled transfer matrix.\n\n\\textbf{Courant-Snyder ellipse} \u2014 Particles move along this ellipse in the linear approximation. Its area is conserved.\n\n\\textbf{Danilov distribution} \u2014 A self-consistent distribution in two spatial dimensions. It is characterized by an elliptical shape, uniform charge density, and zero four-dimensional emittance.\n\n\\textbf{Effective lattice} \u2014\n\n\\textbf{Emittance} \u2014 The root-mean-square volume or area of a phase space distribution.\n\n\\textbf{Kapchinskij-Vladimirskij (KV) distribution} \u2014 A self-consistent distribution in two spatial dimensions. Its particles are uniformly distributed on an ellipsoid in four-dimensional phase space.\n\n\\textbf{Matched beam} \u2014 A beam whose envelope oscillates with the same periodicity as the external focusing.\n\n\\textbf{Ring} \u2014 A circular accelerator.\n\n\\textbf{Painting} \u2014 An beam injection method in which the relative transverse distance and angle between the circulating and injected beam is varied.  \n\n\\textbf{Phase advance} \u2014 The integral of the inverse of the beta function. \n\n\\textbf{Self-consistent distribution} \u2014 A phase space distribution which produces linear space charge forces under any linear transformation of the coordinates.\n\n\\textbf{Space charge} \u2014 The charge density of a beam in free space.\n\n\\textbf{Tune} \u2014 The number of phase space oscillations per performed by a single particle in one turn around a ring.\n\n\\textbf{Space charge tune shift} \u2014 The reduction in tune caused by the beam space charge. 
The linear space charge component results in the same reduction for every particle (tune shift), while the nonlinear component results in an amplitude-dependent reduction (tune spread).", "meta": {"hexsha": "f851f91e9716ebd57b02640ba56082074832a5e5", "size": 2398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Preamble/glossary.tex", "max_stars_repo_name": "austin-hoover/dissertation", "max_stars_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Preamble/glossary.tex", "max_issues_repo_name": "austin-hoover/dissertation", "max_issues_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Preamble/glossary.tex", "max_forks_repo_name": "austin-hoover/dissertation", "max_forks_repo_head_hexsha": "53845b2acfd6da962c19967a98987208988d841e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.1052631579, "max_line_length": 275, "alphanum_fraction": 0.8040033361, "num_tokens": 515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428946, "lm_q2_score": 0.7371581510799253, "lm_q1q2_score": 0.5808299033583111}}
{"text": "\\documentclass{article}\n\n\\usepackage[margin=1in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{titling}\n\\usepackage{amssymb}\n\\usepackage[skip=0.25em]{parskip}\n\\usepackage{amsthm}\n\n\\newtheorem{thm}{Theorem}\n\\renewcommand\\qedsymbol{Amen.}\n\n\\setlength\\droptitle{-5em}\n\n\\title{On the Possibility of Mapping the Samples to the Pitches}\n\\author{Varik Valefor}\n\\begin{document}\n\t\\maketitle\n\t\\section{Executive Summary}\n\t\tThere exists a map $f$ such that for all sequences $n$ of a set size, $f(n)$ equals the loudest pitch of $n$.\n\t\\section{Definitions}\n\t\tLet $S$ denote the set of all samples.\n\n\t\tLet $C$ equal $\\bigcup_{x=1}^\\infty \\mathbb C^x$.\n\n\t\tLet $P$ denote the set of all frequencies/pitches.\n\n\t\tLet $w$ denote the window size.\n\n\t\tLet $\\mathbb S$ denote the class of all sets.\n\n\t\tLet $F$ denote the set of all maps.\n\n\t\tLet $\\Omega$ denote the universal set.\n\n\t\tFor all subsets of $\\Omega$ $A$ and $B$, for all $f \\in F$, let $g(f,\\ A,\\ B)$ iff $d(f) = A$ and $r(f) = B$.\n\t\\section{``Proper'' Maps}\n\t\t\\[\n\t\t\t\\mathrm{fft} \\colon S^w \\mapsto \\mathbb C^w.\n\t\t\\]\n\t\tFor all $s \\in S^w$, $\\mathrm{fft}(s)$ equals the output of the FOURIER transform on $s$.\n\t\t\\[\n\t\t\t\\mathrm{pitch} \\colon C \\mapsto \\mathbb P.\n\t\t\\]\n\t\tFor all $j \\in C$, $\\mathrm{pitch}(j)$ equals the loudest pitch/frequency of the FOURIER transform output $j$.\n\t\t\\[\n\t\t\td : F \\mapsto \\mathbb S.\n\t\t\\]\n\t\tFor all $f \\in F$, $d(f)$ equals the domain of $f$.\n\t\t\\[\n\t\t\tr : F \\mapsto \\mathbb S.\n\t\t\\]\n\t\tFor all $f \\in F$, $r(f)$ equals the range of $f$.\n\t\\section{In the Pudding}\n\t\t\\begin{thm}\n\t\t\t$\\exists f \\in F \\colon g(f,\\ S^w,\\ P)$.\n\t\t\\end{thm}\n\t\t\\begin{proof}\n\t\t\t\\[\n\t\t\t\t\\forall K \\in F^2,\\ \n\t\t\t\tr(K_1) \\subseteq d(K_2) \\implies\n\t\t\t\t\\exists f \\in F \\colon g\\big(f,\\ d(K_1),\\ r(K_2)\\big).\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\t\\forall A \\in \\Omega,\\ \n\t\t\t\tA_1 = A_2 = A_3 \\implies\n\t\t\t\tA_1 = A_3.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\td(\\mathrm{fft}) = S^w.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\tr(\\mathrm{fft}) = \\mathbb C^w.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\td(\\mathrm{pitch}) = C.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\tr(\\mathrm{pitch}) = P.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\t\\mathbb C^w \\subset C.\n\t\t\t\\]\n\t\t\t\\[\n\t\t\t\t\\therefore \\exists f \\in F \\colon g(f,\\ S^w,\\ P).\\qedhere\n\t\t\t\\]\n\t\t\\end{proof}\n\\end{document}\n", "meta": {"hexsha": "edc7b7aaa77107831683eeb2176178400057d7a7", "size": 2134, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pitchvst-possible.tex", "max_stars_repo_name": "varikvalefor/tthheeppaarrttyy", "max_stars_repo_head_hexsha": "b10a833c32ddbc7c807423eb8af0d39629d42137", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pitchvst-possible.tex", "max_issues_repo_name": "varikvalefor/tthheeppaarrttyy", "max_issues_repo_head_hexsha": "b10a833c32ddbc7c807423eb8af0d39629d42137", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pitchvst-possible.tex", "max_forks_repo_name": "varikvalefor/tthheeppaarrttyy", "max_forks_repo_head_hexsha": "b10a833c32ddbc7c807423eb8af0d39629d42137", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.9775280899, "max_line_length": 112, "alphanum_fraction": 0.5965323336, "num_tokens": 856, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767746654976, "lm_q2_score": 0.6619228625116081, "lm_q1q2_score": 0.5808219384740395}}
{"text": "\\chapter{Traffic models}\n\t\\section{Introduction to traffic simulators}\n\t\tOne of the main branch of the traffic simulators are the so called macroscopic or hydrodynamic models. These deal with traffic as a fluid flow and they do not take individual driver actions into consideration, namely, they are based on the vehicle density.\n\n\t\tThe other type of simulators use the microscopic or car following models. These models take individual driver behavior into consideration. So they are simulating each cars in a particular traffic situation. Two examples of these car following models are the optimal velocity model and velocity difference model.\n\n\t\tIn this thesis the second type will be investigated since the goal is that every car should be simulated as realistic as possible which cannot be expected from the first type. Then from the experience a new model should be developed which consists the longitudinal and transversal movement models. Based on the literature review it seems that one of the most widespread microscopic model is the Intelligent Driver Model (IDM) \\cite{arne1} - \\cite{modified}. IDM is an excellent start for the longitudinal model.\n\t\\section{Intelligent Driver Model} \\label{sec:IDM}\n\t\tIDM is a time continuous car following model which belongs to the Optimal Velocity model family. IDM is designed to be accident-free. It can only represents longitudinal motions. The fundamental idea behind the model is that every car chooses its speed based on the car before and its individual parameters. In case when there is no car before then it can freely accelerate to its desired speed.\n\n\t\tIDM is defined by its acceleration function. The model is constructed by two parts. The first part represents the \\textit{free acceleration} when there is no leading car or it is far away:\n\t\t\\begin{equation}\n\t\t\t\\afree=\\amax\\left [ 1 - \\left ( \\frac{\\vt}{\\vd} \\right )^{\\delta} \\right ]\\,,\n\t\t\t\\label{eq:afree}\n\t\t\\end{equation}\n\t\twhere\n\t\t\\begin{itemize}\n\t\t\t\\item $\\amax$ $\\rm [m/s^2]$ is the maximum acceleration of the vehicle,\n\t\t\t\\item $\\vt$ $\\rm [m/s]$ is the velocity of the vehicle at $t$,\n\t\t\t\\item $\\vd$ $\\rm [m/s]$ is the desired velocity of the vehicle,\n\t\t\t\\item $\\delta$ $\\rm [-]$ is the free acceleration exponent of the vehicle.\n\t\t\\end{itemize}\n\t\tEquation~\\eqref{eq:afree} produces zero when the current speed equals to the desired speed ($\\vt=\\vd$) and it reaches its maximum value when the car is not moving ($\\vt=0$).\n\n\t\tThe second part corresponds to the \\textit{follower behavior} or \\textit{breaking strategy} of the model:\n\t\t\\begin{equation}\n\t\t\t\\afollower=-\\amax\\cdot \\left ( \\frac{\\hd}{\\h} \\right )^2\\,,\n\t\t\t\\label{eq:afollower}\n\t\t\\end{equation}\n\t\twhere\n\t\t\\begin{itemize}\n\t\t\t\\item $\\hd$ $\\rm [m]$ is the desired safe headway of the vehicle at $t$,\n\t\t\t\\item $\\h$ $\\rm [m]$ is the current headway of the vehicle at $t$.\n\t\t\\end{itemize}\n\t\tThe desired safe headway is calculated from:\n\t\t\\begin{equation}\n\t\t\t\\hd=h_0 + \\vt\\cdot T + \\frac{\\vt\\cdot(\\vt-\\vleader)}{2\\sqrt{\\amax\\cdot\\bmax}}\\,,\n\t\t\\end{equation}\n\t\twhere $h_0$ is the bumper to bumper gap or jam distance, $T$ is the desired safety time headway, $\\vleader$ is the velocity of its leader car, $\\bmax$ is the maximum value of comfortable deceleration.\n\t\tThe current headway at a given $t$ can be calculated 
\n\t\\section{MOBIL -- lane changing model} \\label{sec:MOBIL}\n\t\tAs mentioned in Section \\ref{sec:IDM}, there is also a need for a transversal motion model of the cars, namely a lane changing model. Lane changing has not been studied nearly as exhaustively as longitudinal behavior. However, it can have a great impact on the overall traffic flow, so it is worth investigating. The model should be able to decide, based on the local traffic conditions, whether changing lanes is beneficial and safe for a specific driver. If both conditions are met, the model starts to change lanes.\n\n\t\tThe first condition to satisfy is that lane changing should be safe. The model should check how a possible lane change will affect the upstream vehicles in the target lane. MOBIL \\cite{arne2} states that if the deceleration of the new follower car does not exceed a given safe limit, then changing lanes is safe. This can be expressed as an inequality:\n\t\t\\begin{equation}\n\t\t\t\\hat{a}_{n}\\geq -b_{\\rm safe}\\,,\n\t\t\\end{equation}\n\t\twhere $\\hat{a}_{n}$ is the new follower car's acceleration if car $c$ changes lanes and $b_{\\rm safe}$ is a parameter of car $n$. The mechanism is illustrated in Figure~\\ref{fig:mobil}. This criterion ensures that the model remains accident-free even in edge cases.\n\t\t\t\\begin{figure}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=.95\\textwidth]{common/mobil}\n\t\t\t\t\\caption{Based on the local traffic conditions vehicle \\textit{c} is considering changing lanes to the right. Vehicle \\textit{n} and \\textit{o} will be the new and old followers of car \\textit{c} respectively.}\n\t\t\t\t\\label{fig:mobil}\n\t\t\t\\end{figure}\n\n\t\t\tThe other condition is the incentive criterion, which states that a possible lane change should improve the local traffic situation around the vehicle. The base model is the following:\n\t\t\t\\begin{equation}\n\t\t\t\t\\hat{a}_{\\rm c} + \\hat{a}_{\\rm o} + \\hat{a}_{\\rm n} > a_{\\rm c} + a_{\\rm o} + a_{\\rm n}\\,,\n\t\t\t\t\\label{eq:mobil_first}\n\t\t\t\\end{equation}\n\t\t\twhere accelerations after a possible lane change are denoted with hats.\n\t\t\tBased on Eq.~\\eqref{eq:mobil_first}, a lane change should occur only if the sum of the accelerations after the lane change is greater than before. This inequality reflects the name of the model, which is \\textbf{m}inimizing \\textbf{o}verall \\textbf{b}raking \\textbf{i}nduced by \\textbf{l}ane change (MOBIL).\n\n\t\t\tHowever, Eq.~\\eqref{eq:mobil_first} is not as realistic as it should be. The first problem is that vehicles would change lanes even for a negligible acceleration advantage, which is not the case in a realistic traffic situation. This issue can be solved with a threshold value $\\Delta a_{\\rm th}$. After this modification, Eq.~\\eqref{eq:mobil_first} becomes:\n\t\t\t\\begin{equation}\n\t\t\t\t\\hat{a}_{\\rm c} - a_{\\rm c} + \\hat{a}_{\\rm o} - a_{\\rm o} + \\hat{a}_{\\rm n} - a_{\\rm n} > \\Delta a_{\\rm th}\\,.\n\t\t\t\t\\label{eq:mobil_with_tr}\n\t\t\t\\end{equation}\n\t\t\tWith this modification, individual drivers only change lanes if the acceleration sum is significantly greater than before the lane change. The other problem with the model discussed so far is that it cannot distinguish between driving styles. Driving styles can vary from the completely selfish to the altruistic driver. Selfish drivers would only care about their own acceleration ($\\hat{a}_{\\rm c} - a_{\\rm c} > \\Delta a_{\\rm th}$), while altruistic ones would change lanes even if that resulted in a disadvantageous position for them, provided that the local traffic situation improves sufficiently ($\\hat{a}_{\\rm o} - a_{\\rm o} + \\hat{a}_{\\rm n} - a_{\\rm n} > \\Delta a_{\\rm th}$). A politeness factor $p$ is introduced to fix this. Including $p$ in Eq.~\\eqref{eq:mobil_with_tr} results in the final form of the model:\n\t\t\t\\begin{equation}\n\t\t\t\t\\hat{a}_{\\rm c} - a_{\\rm c} + p(\\hat{a}_{\\rm o} - a_{\\rm o} + \\hat{a}_{\\rm n} - a_{\\rm n}) > \\Delta a_{\\rm th}\\,.\n\t\t\t\t\\label{eq:mobil}\n\t\t\t\\end{equation}\n\t\t\tA completely selfish driver gets the factor $p=0$, while an altruistic one gets $p>1$. In the special case $p=1$, Eq.~\\eqref{eq:mobil} simplifies back to Eq.~\\eqref{eq:mobil_with_tr}, which means a lane change is performed only if the sum of all involved vehicles' accelerations improves by at least the threshold value. Both criteria are combined in the sketch below.
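\n\n\t\t\tThe following minimal Python sketch (again illustrative only; the names and numerical values are assumptions, not the thesis implementation) combines the safety criterion and the incentive criterion of Eq.~\\eqref{eq:mobil}:\n\t\t\t\\begin{verbatim}\ndef mobil_wants_lane_change(a_hat_c, a_c, a_hat_o, a_o, a_hat_n, a_n,\n                            p=0.5, b_safe=4.0, delta_a_th=0.1):\n    # safety criterion: the new follower must not brake harder than b_safe\n    safe = a_hat_n >= -b_safe\n    # incentive criterion with politeness factor p and threshold\n    incentive = ((a_hat_c - a_c)\n                 + p * ((a_hat_o - a_o) + (a_hat_n - a_n))) > delta_a_th\n    return safe and incentive\n\t\t\t\\end{verbatim}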
\n\t\t\\section{Numerical solver}\n\t\t\tIn Sections \\ref{sec:IDM} and \\ref{sec:MOBIL}, the behavior of a vehicle in traffic has been modeled. The mathematical model of a driver is a second-order nonlinear differential equation. There is no analytical solution, so a numerical simulation has to be carried out. To be able to use one of the common numerical solvers (like explicit Euler or fourth-order Runge-Kutta), the second-order differential equation should be transformed into a system of first-order differential equations. Using the notations $v=\\dot{x}$ and $a=\\ddot{x}$, the governing equation \\eqref{eq:aidm} can be given as:\n\t\t\t\\begin{equation}\n\t\t\t\t\\ddot{x}=\\amax\\left [ 1 - \\left ( \\frac{\\dot{x}}{\\vd} \\right )^\\delta - \\left ( \\frac{h_0 + \\dot{x}\\cdot T + \\frac{\\dot{x}(\\dot{x}-v_{\\rm lead})}{2\\sqrt{\\amax\\cdot\\bmax}}}{x_{\\rm lead}-L_{\\rm lead} - x} \\right )^2 \\right ]\\,.\n\t\t\t\t\\label{eq:idm_num1}\n\t\t\t\\end{equation}\n\t\t\tLet\n\t\t\t\\begin{equation}\n\t\t\t\t\\textbf{y}=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\ty_1\\\\\n\t\t\t\t\ty_2\n\t\t\t\t\\end{pmatrix}\n\t\t\t\t=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\tx\\\\\n\t\t\t\t\t\\dot{x}\n\t\t\t\t\\end{pmatrix}\\,;\n\t\t\t\t\\label{eq:idm_num2}\n\t\t\t\\end{equation}\n\t\t\tthen the derivative of $\\textbf{y}$ is\n\t\t\t\\begin{equation}\n\t\t\t\t\\dot{\\textbf{y}}=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\t\\dot{y_1}\\\\\n\t\t\t\t\t\\dot{y_2}\n\t\t\t\t\\end{pmatrix}\n\t\t\t\t=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\t\\dot{x}\\\\\n\t\t\t\t\t\\ddot{x}\n\t\t\t\t\\end{pmatrix}\\,.\n\t\t\t\t\\label{eq:idm_num3}\n\t\t\t\\end{equation}\n\t\t\tFrom Eqs.~\\eqref{eq:idm_num1}, \\eqref{eq:idm_num2} and \\eqref{eq:idm_num3} it can be concluded that the differential equation system is the following:\n\t\t\t\\begin{equation}\n\t\t\t\t\\dot{\\textbf{y}}\n\t\t\t\t=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\tf_1(y_2)\\\\\n\t\t\t\t\tf_2(y_1,y_2,x_{\\rm lead},v_{\\rm lead})\\\\\n\t\t\t\t\\end{pmatrix}\n\t\t\t\t=\n\t\t\t\t\\begin{pmatrix}\n\t\t\t\t\ty_2\\\\\n\t\t\t\t\t\\amax\\left [ 1 - \\left ( \\frac{y_2}{\\vd} \\right )^{\\delta} - \\left ( \\frac{h_0 + y_2\\cdot T + \\frac{y_2(y_2-v_{\\rm lead})}{2\\sqrt{\\amax\\cdot\\bmax}}}{x_{\\rm lead} - L_{\\rm lead} - y_1} \\right )^2 \\right ]\n\t\t\t\t\\end{pmatrix}.\n\t\t\t\t\\label{eq:numerical_idm}\n\t\t\t\\end{equation}
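\n\t\t\tAs a minimal illustration (reusing the idm_acceleration sketch above; the step size is arbitrary), one explicit-Euler step of system \\eqref{eq:numerical_idm} can be written as:\n\t\t\t\\begin{verbatim}\ndef euler_step(y, x_lead, v_lead, l_lead, dt=0.1):\n    # y = (x, v); advance the first-order system by one step of size dt\n    x, v = y\n    a = idm_acceleration(v, v_lead, x, x_lead, l_lead)\n    return (x + dt * v, v + dt * a)\n\t\t\t\\end{verbatim}\n\t\t\tA higher-order scheme such as fourth-order Runge-Kutta evaluates the same right-hand side at several intermediate points per step.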
", "meta": {"hexsha": "3831f6b3ceac6ee467d5e0e9f8709a688ff84b7e", "size": 10572, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/Chapters/trafficmodels.tex", "max_stars_repo_name": "ngergo100/traffic-sim", "max_stars_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documentation/Chapters/trafficmodels.tex", "max_issues_repo_name": "ngergo100/traffic-sim", "max_issues_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2018-11-23T15:15:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-01T07:21:48.000Z", "max_forks_repo_path": "documentation/Chapters/trafficmodels.tex", "max_forks_repo_name": "ngergo100/traffic-sim", "max_forks_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-07T16:49:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-07T16:49:18.000Z", "avg_line_length": 75.5142857143, "max_line_length": 864, "alphanum_fraction": 0.7172720393, "num_tokens": 3139, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5807898944355108}}
{"text": "%\\begin{document}\n\\section{Tunneling Rates}%\n\\label{sec:tunneling_rates}\n% \\subsection{Tunneling Rates}%\n% \\label{subsec:tunneling_rates}\n%\nFor inference, we generate \\(M\\times N\\) configurations (batch of \\(M\\) chains in\nparallel, for \\(N\\) MH-Accept/Rejects), and drop the first \\(25\\%\\)\npercent of the data before computing observables to account for\nburn-in\\footnote{For computing statistics of observables, we use bootstrap\nresampling.}.\n%\nWe compute the topological charge, \\(\\mathcal{Q} =\n\\frac{1}{2\\pi}\\sum_{x}\\mathrm{Arg}\\left(\\phi_{\\mu\\nu}(x)\\right) \\in\n\\mathbb{Z}\\), with \\( \\mathrm{Arg}\\left(\\phi_{\\mu\\nu}(x)\\right) \\in \\left[-\\pi,\n\\pi\\right]\\) for each configuration.\n%\nFor the \\(m^{\\mathrm{th}}\\) chain in our batch, we define \n%\n\\begin{align}\n  \\delta\\mathcal{Q}_{m}(n) &\\coloneqq |\\mathcal{Q}{(n+1)} - \\mathcal{Q}{(n)}|\\\\\n  \\langle\\delta\\mathcal{Q}_{m}\\rangle &\\coloneqq\n  \\frac{1}{N}\\sum_{n=0}\\delta\\mathcal{Q}_{m}(n).\n\\end{align}\n%\nWe can then compute the average tunneling rate over all chains (i.e.\\@ the\nnumber of tunneling events per step) as\n%\n\\begin{equation}\n  \\langle \\delta\\mathcal{Q}\\rangle%\n  \\coloneqq \\frac{1}{M}\\sum_{m=1}^{M}\\langle\\delta\\mathcal{Q}_{m}\\rangle\n  =\\frac{1}{M}\\sum_{m=1}^{M}\\left[\\frac{1}{N}\\sum_{n=0}^{N} \\delta\\mathcal{Q}_{m}(n)\\right]\n\\end{equation}\n%\n\n\\begin{figure}[htpb]\n  \\centering\n  \\includegraphics[width=\\textwidth]{tunneling_rates/dq_vs_beta}\n  \\caption{Comparison of the tunneling rates,\n  \\(\\langle\\delta\\mathcal{Q}\\rangle\\) for HMC and L2HMC}\n\\end{figure}\n\n% \\begin{figure}[htpb]\n%   \\centering\n%   \\begin{subfigure}{0.48\\textwidth}\n%     % \\input{/Users/saforem2/l2hmc-qcd/doc/figures/updates_2020_10_13/dq_lf10_beta2.tex}\n%     \\includegraphics[width=\\textwidth]{updates_2020_10_13/dq_lf10_beta2.pdf}\n%   \\end{subfigure}\n%   \\begin{subfigure}{0.48\\textwidth}\n%     % \\input{/Users/saforem2/l2hmc-qcd/doc/figures/updates_2020_10_13/dq_lf10_beta3.tex}\n%     % \\input{../figures/updates_2020_10_13/dq_lf10_beta3.tex}\n%     \\includegraphics[width=\\textwidth]{updates_2020_10_13/dq_lf10_beta3.pdf}\n%   \\end{subfigure}\n%   \\begin{subfigure}{0.48\\textwidth}\n%     % \\input{/Users/saforem2/l2hmc-qcd/doc/figures/updates_2020_10_13/dq_lf10_beta3.tex}\n%     % \\input{../figures/updates_2020_10_13/dq_lf10_beta4.tex}\n%     \\includegraphics[width=\\textwidth]{updates_2020_10_13/dq_lf10_beta4.pdf}\n%   \\end{subfigure}\n%   \\begin{subfigure}{0.48\\textwidth}\n%     % \\input{/Users/saforem2/l2hmc-qcd/doc/figures/updates_2020_10_13/dq_lf10_beta5.tex}\n%     % \\input{../figures/updates_2020_10_13/dq_lf10_beta5.tex}\n%     \\includegraphics[width=\\textwidth]{updates_2020_10_13/dq_lf10_beta5.pdf}\n%   \\end{subfigure}\n%   % \\includegraphics[width=\\textwidth]{updates_2020_10_13/tunneling_rates_lf10_row1.pdf}\n%   \\caption{Comparison of the tunneling rates,\n%   \\(\\langle\\delta\\mathcal{Q}\\rangle\\) for HMC and L2HMC}\n% \\end{figure}\n\n% \\input{/Users/saforem2/l2hmc-qcd/logs/tunneling_rates/2020-10-16/figs/dq_lf10_stats_beta_2020-10-16-141354_tikz.tex}\n%\\end{document}\n", "meta": {"hexsha": "c41e75b2a7ce66bbe021855d86f1b18af65c4d27", "size": 3010, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tunneling_rates/tunneling_rates.tex", "max_stars_repo_name": "saforem2/l2hmc-qcd", "max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_stars_repo_licenses": 
["Apache-2.0"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z", "max_issues_repo_path": "doc/tunneling_rates/tunneling_rates.tex", "max_issues_repo_name": "saforem2/l2hmc-qcd", "max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z", "max_forks_repo_path": "doc/tunneling_rates/tunneling_rates.tex", "max_forks_repo_name": "saforem2/l2hmc-qcd", "max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z", "avg_line_length": 42.3943661972, "max_line_length": 118, "alphanum_fraction": 0.7189368771, "num_tokens": 1127, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5807898892866371}}
{"text": "This module implements an adapted version of the \\emph{cylindrical algebraic decomposition} (CAD) for a conjunction of constraints as described in \\cite{Article_Collins_75}.\nIt extends the original algorithm to be SMT compliant and implements the ideas from \\cite{Article_Loup_TubeCAD}.\n\nThe CAD method consists of two basic routines: the projection (or elimination) of polynomials and the lifting (or construction) of samples.\nThe projection transforms a set of polynomials over a set of variables to a new set of polynomials that do not contain some of the variables.\nThe lifting starts with a sample point of degree $k$ and constructs a sample point of degree $k+1$ using the polynomial sets from the projection.\nBoth routines work in an incremental fashion: polynomials are only projected if needed and the construction is performed as a depth-first search.\n\n\\paragraph{Efficiency} \nThe worst case complexity of this algorithm is doubly exponential in the number of variables, the base being the sum of the number of polynomials and the maximum degree of any polynomial.\nThis is due to an quadratic increase of polynomials in each projection step and a number of possible sample points that grows with the number of polynomials.\n\nThe practical performance heavily depends on the number and degree of polynomials created during the elimination.\nIt benefits greatly if the real roots of the polynomials are rational, as irrational root operation may take quite some time.\n", "meta": {"hexsha": "0d8d13fc47ba306e900f326e5d8ad508273adb16", "size": 1474, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/smtrat-modules/CADModule/CADModule.tex", "max_stars_repo_name": "minemebarsha/smtrat", "max_stars_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/smtrat-modules/CADModule/CADModule.tex", "max_issues_repo_name": "minemebarsha/smtrat", "max_issues_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/smtrat-modules/CADModule/CADModule.tex", "max_forks_repo_name": "minemebarsha/smtrat", "max_forks_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 98.2666666667, "max_line_length": 187, "alphanum_fraction": 0.8175033921, "num_tokens": 295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117855317475, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5807898832877849}}
{"text": "\\chapter{Appendix}\nTHE ELEMENTARY TRANSCENDENTAL FUNCTIONS\n\nA*l. On certain results assumed in Chapters I-IV.\n\nIt was convenient, in the first four chapters of this work, to assume\nsome of the properties of the elementary transcendental functions,\nnamely the exponential, logarithmic and circular functions; it was\nalso convenient to make use of a number of results which the reader\nwould be prepared to accept intuitively by reason of his familiarity\nwith the geometrical representation of complex numbers by means of\npoints in a plane.\n\nTo take two instances, (i) it was assumed \\hardsectionref{2}{7}) that lim (exp i) =\nexp (lim *, and (ii) the geometrical concept of an angle in the\nArgand diagram made it appear plausible that the argument of a complex\nnumber was a many-valued function, possessing the property that any\ntwo of its values diffei-ed by an integer multiple of ir.\n\nThe assumption of results of the first type was clearly illogical; it\nwjis also illogical to base ai'ithmetical results on geometrical\nreasoning. For, in order to put the foundations of geometry on a\nsatisfactory basis, it is not only desirj ble to employ the axioms of\narithmetic, but it is also necessary to utilise a further set of\naxioms of a more definitely geometrical character, concerning\nproperties of points, straight lines and planes*. And, fm-ther, the\narithmetical theory of the logarithm of a complex nimiber appears to\nbe a necessary preliminary to the development of a logical theory of\nangles.\n\nApart from this, it seems unsatisfcictory to the c esthetic taste of\nthe mathematician to employ one branch of mathematics as an essential\nconstituent in the structui-e of another; particularly when the\nformer has, to some extent, a material basis whereas the latter is of\na purely abstract nature f.\n\nThe reasons for pursuing the somewhat illogical and unaesthetic\nprocedure, adopted in the earlier part of this work, were, firstly,\nthat the properties of the elementary transcen- dental functions were\nrequired gradually in the com\"se of Chapter n, and it seemed\nimdesirable that the coiu e of a general development of the various\ninfinite processes should be frequently interrupted in oi\"der to prove\ntheoremsf (with which the reader was, in all probabihty, already\nfamihar concerning a single particuliU* function; and, secondly, that\n(in counexioji with the assumption of results based on geometrical\nconsidei-ations) a pui'ely arithmetical mode of development of\nChapters l-iv, deriving no help or illus- trations from geometrical\nprocesses, would have very greatly inci-eased the difliculties of the\nreader unacquainted with the methods and the spirit of the analyst,\n\n* It is not our object to give any account of the foundations of\ngeometry in this work. They are investigated by various writei-s. such\nas Tiitehead. Axioms of Projective Geometry (Cambridge Math. Tracts,\nno. i. 1906) and Mathews, Projectile Geometry London, 1914). A perusal\nof Chapters i, xx, xxii and xxv of the latter work will convince the\nreader that it is even more laborious to develop geometiy in a logical\nmanner, from the minimum number of axioms, than it is to evolve the\ntheory of the circular functions by purely analytical methods. A\ncomplete account of the elements both of arithmetic and of geometry\nhas been given by Whitehead and Bussell, Principia Mathematica\n(1910-1913).\n\nt Cf. Merz, History of European Thought in the Nineteenth Century, n.\n(London, 1903), pp. 
\n\n%\n% 580\n%\n\nA.11. Summary of the Appendix.\n\nThe general course of the Appendix is as follows:\n\nIn \u00a7\u00a7 A.2-A.22, the exponential function is defined by a power series. From this definition, combined with results contained in Chapter II, are derived the elementary properties (apart from the periodic properties) of this function. It is then easy to deduce corresponding properties of logarithms of positive numbers (\u00a7\u00a7 A.3-A.33).\n\nNext, the sine and cosine are defined by power series from which follows the connexion of these functions with the exponential function. A brief sketch of the manner in which the formulae of elementary trigonometry may be derived is then given (\u00a7\u00a7 A.4-A.42).\n\nThe results thus obtained render it possible to discuss the periodicity of the exponential and circular functions by purely arithmetical methods (\u00a7\u00a7 A.5, A.51).\n\nIn \u00a7\u00a7 A.52-A.522, we consider, substantially, the continuity of the inverse circular functions. When these functions have been investigated, the theory of logarithms of complex numbers (\u00a7 A.6) presents no further difficulty.\n\nFinally, in \u00a7 A.7, it is shewn that an angle, defined in a purely analytical manner, possesses properties which are consistent with the ordinary concept of an angle, based on our experience of the material world.\n\nIt will be obvious to the reader that we do not profess to give a complete account of the elementary transcendental functions, but we have confined ourselves to a brief sketch of the logical foundations of the theory*. The developments have been given by writers of various treatises, such as Hobson, Plane Trigonometry; Hardy, A Course of Pure Mathematics; and Bromwich, Theory of Infinite Series.\n\nA.12. A logical order of development of the elements of Analysis.\n\nThe reader will find it instructive to read Chapters I-IV and the Appendix a second time in the following order:\n\nChapter I (omitting\u2020 all of \u00a7 1.5 except the first two paragraphs).\n\nChapter II to the end of \\hardsubsectionref{2}{6}{1} (omitting the examples in \u00a7\u00a7 2.31-2.61).\n\nChapter III to the end of \\hardsubsectionref{3}{3}{4} and \u00a7\u00a7 3.5-3.73.\n\nThe Appendix, \u00a7\u00a7 A.2-A.6 (omitting \u00a7\u00a7 A.32, A.33).\n\nChapter II, the examples of \u00a7\u00a7 \\hardsubsectionref{2}{3}{1}-2.61.\n\nChapter III, \u00a7\u00a7 \\hardsubsubsectionref{3}{3}{4}{1}-3.4.\n\nChapter IV, inserting \u00a7\u00a7 A.32, A.33, A.7 after \\hardsubsectionref{4}{1}{3}.\n\nChapter II, \u00a7\u00a7 2.7-2.82.\n\nHe should try thus to convince himself that (in that order) it is possible to elaborate a purely arithmetical development of the subject, in which the graphic and familiar language of geometry\u2021 is to be regarded as merely conventional.\n\n* In writing the Appendix, frequent reference has been made to the article on Algebraic Analysis in the Encyklopadie der Math. Wissenschaften by Pringsheim and Faber, to the same article translated and revised by Molk for the Encyclopedie des Sciences Math., and to Tannery, Introduction a la Theorie des Fonctions d'une Variable (Paris, 1904).\n\n\u2020 The properties of the argument (or phase) of a complex number are not required in the text before Chapter V.
\n\n\u2021 E.g. 'a point' for 'an ordered number-pair', 'the circle of unit radius with centre at the origin' for 'the set of ordered number-pairs $(x, y)$ which satisfy the condition $x^2 + y^2 = 1$', 'the points of a straight line' for 'the set of ordered number-pairs $(x, y)$ which satisfy a relation of the type $Ax + By + C = 0$', and so on.\n\n%\n% 581\n%\n\nA.2. The exponential function $\\exp z$.\n\nThe exponential function, of a complex variable $z$, is defined by the series*\n\\[ \\exp z = 1 + \\frac{z}{1!} + \\frac{z^2}{2!} + \\frac{z^3}{3!} + \\dots = 1 + \\sum_{n=1}^{\\infty} \\frac{z^n}{n!}. \\]\nThis series converges absolutely for all values of $z$ (real and complex) by D'Alembert's ratio test (\\hardsubsectionref{2}{3}{6}) since $\\lim |z/n| = 0 < 1$; so the definition is valid for all values of $z$.\n\nFurther, the series converges uniformly throughout any bounded domain of values of $z$; for, if the domain be such that $|z| \\leq R$ when $z$ is in the domain, then\n\\[ |z^n/n!| \\leq R^n/n!, \\]\nand the uniformity of the convergence is a consequence of the test of Weierstrass (\\hardsubsectionref{3}{3}{4}), by reason of the convergence of the series $1 + \\sum_{n=1}^{\\infty} R^n/n!$, in which the terms are independent of $z$.\n\nMoreover, since, for any fixed value of $n$, $z^n/n!$ is a continuous function of $z$, it follows from \\hardsubsectionref{3}{3}{2} that the exponential function is continuous for all values of $z$; and hence (cf. \u00a7 3.2), if $z$ be a variable which tends to the limit $\\zeta$, we have $\\lim \\exp z = \\exp \\zeta$.\n\nA.21. The addition-theorem for the exponential function, and its consequences.\n\nFrom Cauchy's theorem on multiplication of absolutely convergent series (\\hardsubsectionref{2}{5}{3}), it follows that\u2020\n\\[ (\\exp z_1)(\\exp z_2) = \\left(1 + \\frac{z_1}{1!} + \\frac{z_1^2}{2!} + \\dots\\right)\\left(1 + \\frac{z_2}{1!} + \\frac{z_2^2}{2!} + \\dots\\right) = 1 + \\frac{z_1 + z_2}{1!} + \\frac{(z_1 + z_2)^2}{2!} + \\dots = \\exp(z_1 + z_2), \\]\nso that $\\exp(z_1 + z_2)$ can be expressed in terms of exponential functions of $z_1$ and of $z_2$ by the formula\n\\[ \\exp(z_1 + z_2) = (\\exp z_1)(\\exp z_2). \\]\nThis result is known as the addition-theorem for the exponential function. From it, we see by induction that\n\\[ (\\exp z_1)(\\exp z_2) \\dots (\\exp z_n) = \\exp(z_1 + z_2 + \\dots + z_n), \\]\nand, in particular,\n\\[ (\\exp z)(\\exp(-z)) = \\exp 0 = 1. \\]\nFrom the last equation, it is apparent that there is no value of $z$ for which $\\exp z = 0$; for, if there were such a value of $z$, since $\\exp(-z)$ would exist for this value of $z$, we should have $0 = 1$.\n\nIt also follows that, when $x$ is real, $\\exp x > 0$; for, from the series definition, $\\exp x \\geq 1$ when $x \\geq 0$; and, when $x \\leq 0$, $\\exp x = 1/\\exp(-x) > 0$.\n\n* It was formerly customary to define $\\exp z$ as $\\lim_{n \\to \\infty} \\left(1 + \\frac{z}{n}\\right)^n$, cf. Cauchy, Cours d'Analyse, I. p. 167. Cauchy (ibid. pp. 168, 309) also derived the properties of the function from the series, but his investigation when $z$ is not rational is incomplete. See also Schlomilch, Handbuch der alg. Analysis (1889), pp. 29, 178, 246. Hardy has pointed out (Math. Gazette, III. p. 284) that the limit definition has many disadvantages.\n\n\u2020 The reader will at once verify that the general term in the product series is\n\\[ (z_1^n + {}_nC_1\\, z_1^{n-1} z_2 + {}_nC_2\\, z_1^{n-2} z_2^2 + \\dots + z_2^n)/n! = (z_1 + z_2)^n/n!. \\]\n\n%\n% 582\n%\n\nFurther, $\\exp x$ is an increasing function of the real variable $x$; for, if $k > 0$,\n\\[ \\exp(x + k) - \\exp x = \\exp x \\cdot (\\exp k - 1) > 0, \\]\nbecause $\\exp x > 0$ and $\\exp k > 1$.\n\nAlso, since\n\\[ (\\exp h - 1)/h = 1 + (h/2!) + (h^2/3!) + \\dots, \\]\nand the series on the right is seen (by the methods of \u00a7 A.2) to be continuous for all values of $h$, we have $\\lim_{h \\to 0} (\\exp h - 1)/h = 1$, and so\n\\[ \\frac{d \\exp z}{dz} = \\lim_{h \\to 0} \\frac{\\exp(z + h) - \\exp z}{h} = \\exp z. \\]
\n\nA.22. Various properties of the exponential function.\n\nReturning to the formula $(\\exp z_1)(\\exp z_2) \\dots (\\exp z_n) = \\exp(z_1 + z_2 + \\dots + z_n)$, we see that, when $n$ is a positive integer,\n\\[ (\\exp z)^n = \\exp(nz), \\]\nand\n\\[ (\\exp z)^{-n} = 1/(\\exp z)^n = 1/\\exp(nz) = \\exp(-nz). \\]\nIn particular, taking $z = 1$ and writing $e$ in place of $\\exp 1 = 2.71828\\dots$, we see that, when $m$ is an integer, positive or negative,\n\\[ e^m = \\exp m = 1 + (m/1!) + (m^2/2!) + \\dots. \\]\nAlso, if $\\mu$ be any rational number ($= p/q$, where $p$ and $q$ are integers, $q$ being positive),\n\\[ (\\exp \\mu)^q = \\exp \\mu q = \\exp p = e^p, \\]\nso that the $q$th power of $\\exp \\mu$ is $e^p$; that is to say, $\\exp \\mu$ is a value of $e^{p/q} = e^{\\mu}$, and it is obviously (\u00a7 A.21) the real positive value.\n\nIf $x$ be an irrational-real number (defined by a section in which $a_1$ and $a_2$ are typical members of the $L$-class and the $R$-class respectively), the irrational power $e^x$ is most simply defined as $\\exp x$; we thus have, for all real values of $x$, rational and irrational,\n\\[ e^x = 1 + \\frac{x}{1!} + \\frac{x^2}{2!} + \\dots, \\]\nan equation first given by Newton*.\n\nIt is, therefore, legitimate to write $e^x$ for $\\exp x$ when $x$ is real, and it is customary to write $e^z$ for $\\exp z$ when $z$ is complex. The function $e^z$ (which, of course, must not be regarded as being a power of $e$), thus defined, is subject to the ordinary laws of indices, viz. $e^{z_1} e^{z_2} = e^{z_1 + z_2}$, $e^{-z} = 1/e^z$.\n\n[Note. Tannery, Lecons d'Algebre et d'Analyse (1906), I. p. 45, practically defines $e^x$, when $x$ is irrational, as the only number $X$ such that $e^{a_1} \\leq X \\leq e^{a_2}$, for every $a_1$ and $a_2$. From the definition we have given it is easily seen that such a unique number exists. For $\\exp x\\ (= X)$ satisfies the inequality, and if $X'\\ (\\neq X)$ also did so, then\n\\[ |\\exp a_2 - \\exp a_1| \\geq |X' - X|, \\]\nso that, since the exponential function is continuous, $a_2 - a_1$ cannot be chosen arbitrarily small, and so $(a_1, a_2)$ does not define a section.]\n\n* De Analysi per aequat. num. term. inf. (written before 1669, but not published till 1711); it was also given both by Newton and by Leibniz in letters to Oldenburg in 1676; it was first published by Wallis in 1685 in his Treatise on Algebra, p. 343. The equation when $x$ is irrational was explicitly stated by Schlomilch, Handbuch der alg. Analysis (1889), p. 182.\n\n%\n% 583\n%
\n\nA.3. Logarithms of positive numbers*.\n\nIt has been seen (\u00a7\u00a7 A.2, A.21) that, when $x$ is real, $\\exp x$ is a positive continuous increasing function of $x$, and obviously $\\exp x \\to +\\infty$ as $x \\to +\\infty$, while $\\exp x = 1/\\exp(-x) \\to 0$ as $x \\to -\\infty$.\n\nIf, then, $a$ be any positive number, it follows from \\hardsubsectionref{3}{6}{3} that the equation in $x$,\n\\[ \\exp x = a, \\]\nhas one real root and only one. This root (which is, of course, a function of $a$) will be written\u2020 $\\mathrm{Log}_e a$ or simply $\\mathrm{Log}\\, a$; it is called the Logarithm of the positive number $a$.\n\nSince a one-one correspondence has been established between $x$ and $a$, and since $a$ is an increasing function of $x$, $x$ must be an increasing function of $a$; that is to say, the Logarithm is an increasing function.\n\nExample. Deduce from \u00a7 A.21 that $\\mathrm{Log}\\, a + \\mathrm{Log}\\, b = \\mathrm{Log}\\, ab$.\n\nA.31. The continuity of the Logarithm.\n\nIt will now be shewn that, when $a$ is positive, $\\mathrm{Log}\\, a$ is a continuous function of $a$.\n\nLet $\\mathrm{Log}\\, a = x$, $\\mathrm{Log}(a + h) = x + k$, so that $e^x = a$, $e^{x+k} = a + h$, $1 + (h/a) = e^k$.\n\nFirst suppose that $h > 0$, so that $k > 0$, and then\n\\[ 1 + (h/a) = 1 + k + \\tfrac{1}{2!} k^2 + \\dots > 1 + k, \\]\nand so $0 < k < h/a$, that is to say\n\\[ 0 < \\mathrm{Log}(a + h) - \\mathrm{Log}\\, a < h/a. \\]\nHence, $h$ being positive, $\\mathrm{Log}(a + h) - \\mathrm{Log}\\, a$ can be made arbitrarily small by taking $h$ sufficiently small.\n\nNext, suppose that $h < 0$, so that $k < 0$, and then $a/(a + h) = e^{-k}$. Hence (taking $0 < -h < \\frac{1}{2} a$, as is obviously permissible) we get\n\\[ a/(a + h) = 1 + (-k) + \\tfrac{1}{2!} k^2 + \\dots > 1 - k, \\]\nand so\n\\[ -k < -1 + a/(a + h) = -h/(a + h) < -2h/a. \\]\nTherefore, whether $h$ be positive or negative, if $\\epsilon$ be an arbitrary positive number and if $|h|$ be taken less than both $\\frac{1}{2} a$ and $\\frac{1}{2} a \\epsilon$, we have\n\\[ |\\mathrm{Log}(a + h) - \\mathrm{Log}\\, a| < \\epsilon, \\]\nand so the condition for continuity (\\hardsectionref{3}{2}) is satisfied.\n\nA.32. Differentiation of the Logarithm.\n\nRetaining the notation of \u00a7 A.31, we see, from results there proved, that, if $h \\to 0$ ($a$ being fixed), then also $k \\to 0$. Therefore, when $a > 0$,\n\\[ \\frac{d\\, \\mathrm{Log}\\, a}{da} = \\lim_{h \\to 0} \\frac{k}{h} = \\frac{1}{a}. \\]\nSince $\\mathrm{Log}\\, 1 = 0$, we have, by \\hardsubsectionref{4}{1}{3} example 3,\n\\[ \\mathrm{Log}\\, a = \\int_1^a t^{-1}\\, dt. \\]\n\n* Many mathematicians define the Logarithm by the integral formula given in \u00a7 A.32. The reader should consult a memoir by Hurwitz (Math. Ann. LXX. (1911), pp. 33-47) on the foundations of the theory of the logarithm.\n\n\u2020 This is in agreement with the notation of most text-books, in which Log denotes the principal value (see \u00a7 A.6) of the logarithm of a complex number.\n\n%\n% 584\n%\n\nA.33. The expansion of $\\mathrm{Log}(1 + a)$ in powers of $a$.\n\nFrom \u00a7 A.32 we have\n\\[ \\mathrm{Log}(1 + a) = \\int_0^a (1 + t)^{-1}\\, dt = \\int_0^a \\{1 - t + t^2 - \\dots + (-1)^{n-1} t^{n-1} + (-1)^n t^n (1 + t)^{-1}\\}\\, dt = a - \\tfrac{1}{2} a^2 + \\tfrac{1}{3} a^3 - \\dots + (-1)^{n-1} \\frac{a^n}{n} + R_n, \\]\nwhere\n\\[ R_n = (-1)^n \\int_0^a t^n (1 + t)^{-1}\\, dt. \\]\nNow, if $-1 < a < 1$, we have\n\\[ |R_n| \\leq \\left| \\int_0^a t^n (1 - |a|)^{-1}\\, dt \\right| = \\frac{|a|^{n+1}}{(n + 1)(1 - |a|)} \\to 0 \\text{ as } n \\to \\infty. \\]\nHence, when $-1 < a < 1$, $\\mathrm{Log}(1 + a)$ can be expanded into the convergent series*\n\\[ \\mathrm{Log}(1 + a) = a - \\tfrac{1}{2} a^2 + \\tfrac{1}{3} a^3 - \\dots = \\sum_{n=1}^{\\infty} (-1)^{n-1} a^n/n. \\]\nIf $a = +1$,\n\\[ |R_n| = \\int_0^1 t^n (1 + t)^{-1}\\, dt \\leq \\int_0^1 t^n\\, dt = (n + 1)^{-1} \\to 0 \\text{ as } n \\to \\infty, \\]\nso the expansion is valid when $a = +1$; it is not valid when $a = -1$.\n\nExample. Shew that $\\lim_{n \\to \\infty} \\left(1 + \\frac{1}{n}\\right)^n = e$.\n\n[We have\n\\[ \\lim_{n \\to \\infty} n\\, \\mathrm{Log}\\left(1 + \\frac{1}{n}\\right) = \\lim_{n \\to \\infty} \\left(1 - \\frac{1}{2n} + \\frac{1}{3n^2} - \\dots\\right) = 1, \\]\nand the result required follows from the result of \u00a7 A.2 that $\\lim \\exp z = \\exp \\zeta$.]\n\n* This method of obtaining the Logarithmic expansion is, in effect, due to Wallis, Phil. Trans. II. (1668), p. 754.
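\n\nIn particular, putting $a = 1$ in this expansion gives the familiar series\n\\[ \\mathrm{Log}\\, 2 = 1 - \\tfrac{1}{2} + \\tfrac{1}{3} - \\tfrac{1}{4} + \\dots. \\]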
\n\nA.4. The definition of the sine and cosine.\n\nThe functions\u2020 $\\sin z$ and $\\cos z$ are defined analytically by means of power series, thus\n\\[ \\sin z = z - \\frac{z^3}{3!} + \\frac{z^5}{5!} - \\dots = \\sum_{n=0}^{\\infty} \\frac{(-1)^n z^{2n+1}}{(2n+1)!}, \\qquad \\cos z = 1 - \\frac{z^2}{2!} + \\frac{z^4}{4!} - \\dots = \\sum_{n=0}^{\\infty} \\frac{(-1)^n z^{2n}}{(2n)!}; \\]\nthese series converge absolutely for all values of $z$ (real and complex) by \\hardsubsectionref{2}{3}{6}, and so the definitions are valid for all values of $z$.\n\nOn comparing these series with the exponential series, it is apparent that the sine and cosine are not essentially new functions, but they can be expressed in terms of exponential functions by the equations\u2021\n\\[ 2i \\sin z = \\exp(iz) - \\exp(-iz), \\qquad 2 \\cos z = \\exp(iz) + \\exp(-iz). \\]\n\n\u2020 These series were given by Newton, De Analysi... (1711), see \u00a7 A.22 footnote. The other trigonometrical functions are defined in the manner with which the reader is familiar, as quotients and reciprocals of sines and cosines.\n\n\u2021 These equations were derived by Euler [they were given in a letter to Johann Bernoulli in 1740 and published in the Hist. Acad. Berlin, V. (1749), p. 279] from the geometrical definitions of the sine and cosine, upon which the theory of the circular functions was then universally based.\n\n%\n% 585\n%\n\nIt is obvious that $\\sin z$ and $\\cos z$ are odd and even functions of $z$ respectively; that is to say\n\\[ \\sin(-z) = -\\sin z, \\qquad \\cos(-z) = \\cos z. \\]
\n\nA.41. The fundamental properties of $\\sin z$ and $\\cos z$.\n\nIt may be proved, just as in the case of the exponential function (\u00a7 A.2), that the series for $\\sin z$ and $\\cos z$ converge uniformly in any bounded domain of values of $z$, and consequently that $\\sin z$ and $\\cos z$ are continuous functions of $z$ for all values of $z$. Further, it may be proved in a similar manner that the series\n\\[ 1 - \\frac{z^2}{3!} + \\frac{z^4}{5!} - \\dots \\]\ndefines a continuous function of $z$ for all values of $z$, and, in particular, this function is continuous at $z = 0$; and so it follows that\n\\[ \\lim_{z \\to 0} (z^{-1} \\sin z) = 1. \\]\n\nA.42. The addition-theorems for $\\sin z$ and $\\cos z$.\n\nBy using Euler's equations (\u00a7 A.4), it is easy to prove from properties of the exponential function that\n\\[ \\sin(z_1 + z_2) = \\sin z_1 \\cos z_2 + \\cos z_1 \\sin z_2 \\]\nand\n\\[ \\cos(z_1 + z_2) = \\cos z_1 \\cos z_2 - \\sin z_1 \\sin z_2; \\]\nthese results are known as the addition-theorems for $\\sin z$ and $\\cos z$.\n\nIt may also be proved, by using Euler's equations, that\n\\[ \\sin^2 z + \\cos^2 z = 1. \\]\nBy means of this result, $\\sin(z_1 + z_2)$ can be expressed as an algebraic function of $\\sin z_1$ and $\\sin z_2$, while $\\cos(z_1 + z_2)$ can similarly be expressed as an algebraic function of $\\cos z_1$ and $\\cos z_2$; so the addition-formulae may be regarded as addition-theorems in the strict sense (cf. \u00a7\u00a7 20.3, 22.732 note).\n\nBy differentiating Euler's equations, it is obvious that\n\\[ \\frac{d \\sin z}{dz} = \\cos z, \\qquad \\frac{d \\cos z}{dz} = -\\sin z. \\]\n\nExample. Shew that\n\\[ \\sin 2z = 2 \\sin z \\cos z, \\qquad \\cos 2z = 2 \\cos^2 z - 1; \\]\nthese results are known as the duplication-formulae.\n\nA.5. The periodicity of the exponential function.\n\nIf $z_1$ and $z_2$ are such that $\\exp z_1 = \\exp z_2$, then, multiplying both sides of the equation by $\\exp(-z_2)$, we get $\\exp(z_1 - z_2) = 1$; and writing $\\gamma$ for $z_1 - z_2$, we see that, for all values of $z$ and all integral values of $n$,\n\\[ \\exp(z + n\\gamma) = \\exp z \\cdot (\\exp \\gamma)^n = \\exp z. \\]\nThe exponential function is then said to have period $\\gamma$, since the effect of increasing $z$ by $\\gamma$, or by an integral multiple thereof, does not affect the value of the function.\n\nIt will now be shewn that such numbers $\\gamma$ (other than zero) actually exist, and that all the numbers $\\gamma$, possessing the property just described, are comprised in the expression\n\\[ 2n\\pi i, \\qquad (n = \\pm 1,\\ \\pm 2,\\ \\pm 3,\\ \\dots) \\]\nwhere $\\pi$ is a certain positive number* which happens to be greater than $2\\sqrt{2}$ and less than 4.\n\n* The fact that $\\pi$ is an irrational number, whose value is 3.14159..., is irrelevant to the present investigation. For an account of attempts at determining the value of $\\pi$, concluding with a proof of the theorem that it satisfies no algebraic equation with rational coefficients, see Hobson's monograph Squaring the Circle (1913).\n\n%\n% 586\n%
\n\nA.51. The solution of the equation $\\exp \\gamma = 1$.\n\nLet $\\gamma = \\alpha + i\\beta$, where $\\alpha$ and $\\beta$ are real; then the problem of solving the equation $\\exp \\gamma = 1$ is identical with that of solving the equation\n\\[ \\exp \\alpha \\cdot \\exp i\\beta = 1. \\]\nComparing the real and imaginary parts of each side of this equation, we have\n\\[ \\exp \\alpha \\cdot \\cos \\beta = 1, \\qquad \\exp \\alpha \\cdot \\sin \\beta = 0. \\]\nSquaring and adding these equations, and using the identity $\\cos^2 \\beta + \\sin^2 \\beta = 1$, we get\n\\[ \\exp 2\\alpha = 1. \\]\nNow if $\\alpha$ were positive, $\\exp 2\\alpha$ would be greater than 1, and if $\\alpha$ were negative, $\\exp 2\\alpha$ would be less than 1; and so the only possible value for $\\alpha$ is zero. It follows that\n\\[ \\cos \\beta = 1, \\qquad \\sin \\beta = 0. \\]\nNow the equation $\\sin \\beta = 0$ is a necessary consequence of the equation $\\cos \\beta = 1$, on account of the identity $\\cos^2 \\beta + \\sin^2 \\beta = 1$. It is therefore sufficient to consider solutions (if such solutions exist) of the equation $\\cos \\beta = 1$.\n\nInstead, however, of considering the equation $\\cos \\beta = 1$, it is more convenient to consider the equation* $\\cos x = 0$.\n\nIt will now be shewn that the equation $\\cos x = 0$ has one root, and only one, lying between 0 and 2, and that this root exceeds $\\sqrt{2}$; to prove these statements, we make use of the following considerations:\n\n(I) The function $\\cos x$ is certainly continuous in the range $0 \\leq x \\leq 2$.\n\n(II) When $0 \\leq x \\leq \\sqrt{2}$, we have\u2020\n\\[ 1 - \\frac{x^2}{2!} \\geq 0, \\quad \\frac{x^4}{4!} - \\frac{x^6}{6!} \\geq 0, \\quad \\frac{x^8}{8!} - \\frac{x^{10}}{10!} \\geq 0, \\ \\dots \\]\nand so, when $0 \\leq x \\leq \\sqrt{2}$, $\\cos x > 0$.\n\n(III) The value of $\\cos 2$ is\n\\[ 1 - 2 + \\frac{2}{3} - \\frac{2^6}{6!}\\left(1 - \\frac{4}{7 \\cdot 8}\\right) - \\frac{2^{10}}{10!}\\left(1 - \\frac{4}{11 \\cdot 12}\\right) - \\dots < -\\frac{1}{3}. \\]\n\n(IV) When $0 < x \\leq 2$,\n\\[ \\frac{\\sin x}{x} = \\left(1 - \\frac{x^2}{6}\\right) + \\frac{x^4}{120}\\left(1 - \\frac{x^2}{42}\\right) + \\dots > 1 - \\frac{x^2}{6} \\geq \\frac{1}{3}, \\]\nand so, when $0 < x \\leq 2$, $\\sin x \\geq \\frac{1}{3} x$.\n\nIt follows from (II) and (III) combined with the results of (I) and of \u00a7 3.63 that the equation $\\cos x = 0$ has at least one root in the range $\\sqrt{2} < x < 2$, and it has no root in the range $0 \\leq x \\leq \\sqrt{2}$.\n\nFurther, there is not more than one root in the range $\\sqrt{2} < x < 2$; for, suppose that there were two, $x_1$ and $x_2$ ($x_2 > x_1$); then $0 < x_2 - x_1 < 2 - \\sqrt{2} < 1$, and\n\\[ \\sin(x_2 - x_1) = \\sin x_2 \\cos x_1 - \\sin x_1 \\cos x_2 = 0, \\]\nand this is incompatible with (IV), which shews that $\\sin(x_2 - x_1) \\geq \\frac{1}{3}(x_2 - x_1)$.\n\nThe equation $\\cos x = 0$ therefore has one and only one root lying between 0 and 2. This root lies between $\\sqrt{2}$ and 2, and it is called $\\frac{1}{2}\\pi$; and, as stated in the footnote to \u00a7 A.5, its actual value happens to be 1.57079....\n\n* If $\\cos x = 0$, it is an immediate consequence of the duplication-formulae that $\\cos 2x = -1$ and thence that $\\cos 4x = 1$; so, if $x$ is a solution of $\\cos x = 0$, $4x$ is a solution of $\\cos \\beta = 1$.\n\n\u2020 The symbol $\\geq$ may be replaced by $>$ except when $x = \\sqrt{2}$ in the first place where it occurs, and except when $x = 0$ in the other places.\n\n%\n% 587\n%\n\nFrom the addition-formulae, it may be proved at once by induction that\n\\[ \\cos n\\pi = (-1)^n, \\qquad \\sin n\\pi = 0, \\]\nwhere $n$ is any integer. In particular, $\\cos 2n\\pi = 1$, where $n$ is any integer.\n\nMoreover, there is no value of $\\beta$, other than those values which are of the form $2n\\pi$, for which $\\cos \\beta = 1$; for if there were such a value, it must be real*, and so we can choose the integer $m$ so that\n\\[ -\\pi \\leq 2m\\pi - \\beta < \\pi. \\]\nWe then have\n\\[ \\sin|m\\pi - \\tfrac{1}{2}\\beta| = \\pm \\sin(m\\pi - \\tfrac{1}{2}\\beta) = \\pm \\sin \\tfrac{1}{2}\\beta = \\pm 2^{-\\frac{1}{2}}(1 - \\cos \\beta)^{\\frac{1}{2}} = 0, \\]\nand this is inconsistent\u2020 with $\\sin|m\\pi - \\tfrac{1}{2}\\beta| \\geq \\tfrac{1}{3}|m\\pi - \\tfrac{1}{2}\\beta|$ unless $\\beta = 2m\\pi$.\n\nConsequently the numbers $2n\\pi$, $(n = 0,\\ \\pm 1,\\ \\pm 2,\\ \\dots)$, and no others, have their cosines equal to unity.\n\nIt follows that a positive number $\\pi$ exists such that $\\exp z$ has period $2\\pi i$ and that $\\exp z$ has no period fundamentally distinct from $2\\pi i$.\n\nThe formulae of elementary trigonometry concerning the periodicity of the circular functions, with which the reader is already acquainted, can now be proved by analytical methods without any difficulty.\n\nExample 1. Shew that $\\sin \\frac{1}{2}\\pi$ is equal to 1, not to $-1$.\n\nExample 2. Shew that $\\tan x > x$ when $0 < x < \\frac{1}{2}\\pi$.\n\n[For $\\cos x > 0$ and\n\\[ \\sin x - x \\cos x = \\sum_{n=1}^{\\infty} \\left\\{ \\frac{(4n - 2)\\, x^{4n-1}}{(4n - 1)!} - \\frac{4n\\, x^{4n+1}}{(4n + 1)!} \\right\\}, \\]\nand every term in the series is positive.]\n\nExample 3. Shew that $1 - \\frac{1}{2} x^2 + \\frac{1}{24} x^4 - \\frac{1}{720} x^6$ is positive when $x = \\frac{25}{16}$, and that $1 - \\frac{1}{2} x^2 + \\frac{1}{24} x^4$ vanishes when $x = (6 - 2\\sqrt{3})^{\\frac{1}{2}} = 1.5924\\dots$; and deduce that\u2021\n\\[ 3.125 < \\pi < 3.185. \\]\n\n\u2021 See De Morgan, A Budget of Paradoxes (London, 1872), pp. 316 et seq., for reasons for proving that $\\pi > 3\\tfrac{1}{8}$.
\n\nA.52. The solution of a pair of trigonometrical equations.\n\nLet $\\lambda$, $\\mu$ be a pair of real numbers such that $\\lambda^2 + \\mu^2 = 1$. Then, if $\\lambda \\neq -1$, the equations\n\\[ \\cos x = \\lambda, \\qquad \\sin x = \\mu \\]\nhave an infinity of solutions of which one and only one lies between\u2021 $-\\pi$ and $\\pi$.\n\nFirst, let $\\lambda$ and $\\mu$ be not negative; then (\\hardsubsectionref{3}{6}{3}) the equation $\\cos x = \\lambda$ has at least one solution $x_1$ such that $0 \\leq x_1 \\leq \\frac{1}{2}\\pi$, since $\\cos 0 = 1$, $\\cos \\frac{1}{2}\\pi = 0$. The equation has not two solutions in this range, for if $x_1$ and $x_2$ were distinct solutions we could prove (cf. \u00a7 A.51) that $\\sin(x_1 - x_2) = 0$, and this would contradict \u00a7 A.51 (IV), since\n\\[ 0 < |x_2 - x_1| \\leq \\tfrac{1}{2}\\pi < 2. \\]\nFurther, $\\sin x_1 = +\\sqrt{1 - \\cos^2 x_1} = +\\sqrt{1 - \\lambda^2} = \\mu$, so $x_1$ is a solution of both equations.\n\nThe equations have no solutions in the ranges $(-\\pi,\\ 0)$ and $(\\frac{1}{2}\\pi,\\ \\pi)$ since, in these ranges, either $\\sin x$ or $\\cos x$ is negative. Thus the equations have one solution, and only one, in the range $(-\\pi,\\ \\pi)$.\n\nIf $\\lambda$ or $\\mu$ (or both) is negative, we may investigate the equations in a similar manner; the details are left to the reader.\n\nIt is obvious that, if $x_1$ is a solution of the equations, so also is $x_1 + 2n\\pi$, where $n$ is any integer, and therefore the equations have an infinity of real solutions.\n\n\u2021 If $\\lambda = -1$, $\\pm\\pi$ are solutions and there are no others in the range $(-\\pi,\\ \\pi)$.\n\n%\n% 588\n%\n\nA.521. The principal solution of the trigonometrical equations.\n\nThe unique solution of the equations $\\cos x = \\lambda$, $\\sin x = \\mu$ (where $\\lambda^2 + \\mu^2 = 1$) which lies between $-\\pi$ and $\\pi$ is called the principal solution*, and any other solution differs from it by an integer multiple of $2\\pi$.\n\nThe principal value\u2020 of the argument of a complex number $z$ ($\\neq 0$) can now be defined analytically as the principal solution $\\theta_p$ of the equations\n\\[ |z| \\cos \\theta = R(z), \\qquad |z| \\sin \\theta = I(z); \\]\nand then, if\n\\[ z = |z| \\cdot (\\cos \\theta + i \\sin \\theta), \\]\nwe must have $\\theta = \\theta_p + 2n\\pi$, and $\\theta$ is called a value of the argument of $z$, and is written $\\arg z$ (cf. \u00a7 1.5).\n\n* If $\\lambda = -1$, we take $+\\pi$ as the principal solution; cf. p. 9.\n\n\u2020 The term principal value was introduced in 1845 by Bjorling; see the Archiv der Math. und Phys. IX. (1847), p. 408.
Then the angle between the lines (\u00a7\nA\"12, footnote) PiPo and P1P3 is defined to be any value of arg (23 -\nZj) - arg (22 - Zi).\n\nIt will now be shewn f that the area (defined as an integi'al), which\nis bounded by two radii of a given circle and the arc of the circle\nterminated by the radii, is proportional to one of the values of the\nangle between the radii, so that an angle (in the analytical sense)\njjossesses the propei'ty which is given at the beginning of all\ntext-books on Trigonometry |.\n\n* If any of these numbers is zero, it is to be omitted.\n\nt The proof here given applies only to acute angles; the reader\nshould have no difficulty in extending the result to angles greater\nthan lir, and to the case when OX is not one of the bounding radii.\n\nX Euclid's definition of an angle does not, in itself, afford a\nmeasure of an angle; it is shewn in treatises on Trigonometry (cf.\nHobson, Plane Trigonometry (1918), Ch. i) that an angle is measured by\ntwice the area of the sector which the angle cuts off from a unit\ncircle whose centre is at the vertex of the angle.\n\n%\n% 590\n%\n\nLet (o-'i, yi) be any point (both of whose coordinates are positive)\nof the circle x' - y- = a? a>0). Let 6 be the principal value of arg(\ni + iyi), so that 0<6 <Itt. Then the area bounded by OX and the line\njoining (0, 0) to x\\, i/y) and the arc of the\n\ncircle joining .Vi, y ) to (o, 0) is / f x)dx, where*\n\n./ '\n\n/( )=a;tan (0 .r a cos ),\n\n/ x) = a - x )- (a cos d x a), if an area be defined as meaning a\nsuitably chosen integral (cf. p. 61).\n\nIt remains to be proved that I f(x) dx is proportional to 6.\n\nJ \"\n\nfa /\"a cose fa,\n\nNow I Z f(x) dx = I X tan 6 dx + (a - x-)i dx\n\nJ I) Jo J (I cose\n\n ha- smecose + hr L a --x ')- + x a -x ) \\ dx\n\nf\" - 1\n\n= a-j a -x-) - dx\n\nJ o COS Q\n\n= |a2 1 r (1 - -) 'id(- p' (1 - f') - - dt\\\n\non writing x = at and using the example worked out on p. 64.\n\nThat is to say, the area of the sector is proportional to the angle of\nthe sector. 
To this extent, we have shewn that the popular conception\nof an angle is consistent with the analytical definition.\n\n* The reader will easily see the geometrical interpretation of the\nintegral by drawing a figure.", "meta": {"hexsha": "89323d84e626fcfc1d3d6f81585194ec41bf97fe", "size": 31228, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/wandw-appendix.tex", "max_stars_repo_name": "CdLbB/Whittaker-and-Watson", "max_stars_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/wandw-appendix.tex", "max_issues_repo_name": "CdLbB/Whittaker-and-Watson", "max_issues_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/wandw-appendix.tex", "max_forks_repo_name": "CdLbB/Whittaker-and-Watson", "max_forks_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.2788144896, "max_line_length": 90, "alphanum_fraction": 0.7011335981, "num_tokens": 9699, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289836, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5807444354346906}}
{"text": "In this chapter, we will introduce some basic property of approximation theory and analyze the approximation property for some activation functions.\n\n \n\tThe adaptive FEM we have studied before is indeed using linear\n\tsubspaces. For a given and fixed basis, select the best n term to\n\tapproximate a function. The non-linear approximation theory (by\n\tDeVore) is to relax the smoothness of function to achieve the optimal\n\trate $n^{-1/d}$ but won't improve the rate.\n\n\n\\input{6DL/Fourier}\n\\input{6DL/PolyWeierstrass}\n\\input{6DL/DNN_Qualitative0}\n%\\input{6DL/DNN_Qualitative}  % this is actually for\n%quantitative/asymptotic analysis\n\\section{Asymptotic approximation properties}\n\\input{6DL/SampleLemma}\n\\input{6DL/Representations}\n\\input{6DL/Barron-L2}\n%\\input{6DL/Barron-Hk}\n\\chapter{Optimal approximation estimates for minimally smooth functions}\n\\input{6DL/Entropy}\n\\chapter{High order approximation rates for cosine and ReLU$^k$ networks}\nThe presentation in this section follows the paper \\cite{siegel2020high}.\n\\input{6DL/CosineApprox}\n\\input{6DL/ReLUkApprox}\n\\input{6DL/CosineLower}\n", "meta": {"hexsha": "6471b9f6443f9615040655a0739080f845c759b0", "size": 1087, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/DNN-Approx.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/DNN-Approx.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/DNN-Approx.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8214285714, "max_line_length": 148, "alphanum_fraction": 0.7994480221, "num_tokens": 296, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289835, "lm_q2_score": 0.763483758172699, "lm_q1q2_score": 0.5807444272476654}}
{"text": "\\documentclass[12pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\usepackage{graphicx}\n\\usepackage{caption}\n\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{1em}\n\n\\begin{document}\n\\title{Consensus Algorithm}\n\\author{Frederic Laudarin}\n\\date{November 1, 2020}\n\\maketitle\n\n\\section{Framework}\n\nThis document exposes an algorithm for decision making within a group. It is supported by the framework where:\n\\begin{itemize}\n\\item There is a finite set of alternatives an\n\\item The size of the group is finite\n\\item Each member of the group must rank every alternatives\n\\item Several alternatives may have the same rank\n\\end{itemize}\nOnce the ranking is carried out by every member. The algorithm must provide a set of alternative which is optimal in regard of all the individual ranks made in the group.\n\nThe base principle of the algorithm is for each member to assign payoffs to alternatives depending on their ranking and to sum the payoffs of every alternative from every player. The alternatives having the best payoff over the group constitute the optimal set of alternatives.\n\n\\section{Basic algorithm}\n\nLet $\\Omega=\\{A_i\\}_{i=1}^N$ denote the set of $N\\in\\mathbb{N}^*$ alternatives. The ranking assigned to the alternative $A_i$ by a group's member $(m)$ is noted $r_i^{(m)}$. In the context of a member $(m)$ the alternatives are classified by ranking values in classes $\\{\\mathcal{C}_k^{(m)}\\}_{k=1}^{K^{(m)}}$ with $K^{(m)}\\in\\mathbb{N}^*$. The rank of every alternative in the class $\\mathcal{C}_k^{(m)}$ is obviously $k$, the preferred rank being $k=1$. The cardinality of a class $\\mathcal{C}_k^{(m)}$ is $\\mathrm{card}(\\mathcal{C}_k^{(m)}) = N_k^{(m)}$ so that:\n\\begin{equation}\n\\sum_{k=1}^{K^{(m)}}{N_k^{(m)}} = N\n\\end{equation}\nFor the sake of readability, the upper index $(m)$ will be avoided in the following, the notations previously introduced will implicitly refer to a member $(m)$.\n\nEach class $\\mathcal{C}_k$ has a frequency $f_k$:\n\\begin{equation}\nf_k = \\frac{N_k}{N}\n\\end{equation}\nAs shown in the set of pairs $\\{(k,f_k), k=1..K\\}$ can be considered as a probability distribution as:\n\\begin{equation}\\label{eq_sum_frequencies}\n\\sum_{k=1}^K{f_k} = 1\n\\end{equation}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.6]{distribution.png}\n\\caption{Distribution of alternatives}\n\\label{fig:distribution}\n\\end{figure}\nThe payoff $v_k$ assigned to alternatives in class $\\mathcal{C}_k$ is defined as the probability of getting an alternative $A_i\\in\\Omega$ such as $A_i\\in\\mathcal{C}_l$ with $l>k$:\n\\begin{equation}\\label{eq_member_payoff}\nv_l = v\\left(A_i\\in\\mathcal{C}_l\\right)=\\mathrm{Pr}\\left(\\left\\lbrace A_i\\in\\mathcal{C}_l, l>k\\right\\rbrace\\right)=\\sum_{l>k}{f_l}\n\\end{equation}\nThis is the payoff value $v^{(m)}(A_i)$ of the member $(m)$ for the alternative $A_i$.\nAs a consequence, an alternative being in the last class $\\mathcal{C}_K$ has a payoff of 0. 
If a member of the group has no preference then they classify all the alternatives in one unique class $\\mathcal{C}_1$ and every alternative is assigned a zero payoff value.\n\nFinally, the payoff value of an alternative for the group is the sum of the values assigned by each member:\n\\begin{equation}\\label{eq_alternatives_payoff}\nv\\left(A_i\\right)=\\sum_{m}{v^{(m)}(A_i)}\n\\end{equation}\n\n\\section{Basic algorithm - Example}\n\nThe group is composed of two persons, namely Manon and Martin. They need to decide on the location of their summer holidays. They consider the following alternatives:\n\\begin{itemize}\n\\item Australia\n\\item California US\n\\item Costa Rica\n\\item Germany\n\\item Hungary\n\\item Japan\n\\item Thailand\n\\item UK\n\\item Venezuela\n\\end{itemize}\n\n\\subsection{Ranking of alternatives}\nManon makes the following ranking:\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|l|}\n\\hline\n\\textbf{Rank}&\\textbf{Alternatives}\\\\\n\\hline\n1 & Thailand, Venezuela \\\\\n2 & Australia, Costa Rica \\\\\n3 & Hungary \\\\\n4 & California US, Germany, UK \\\\\n5 & Japan \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nand Martin this one:\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|l|}\n\\hline\n\\textbf{Rank}&\\textbf{Alternatives}\\\\\n\\hline\n1 & Japan, Germany, Hungary \\\\\n2 & Australia, California US \\\\\n3 & UK, Thailand, Venezuela, Costa Rica \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Classes}\n\nThe alternatives are classified by ranking for each member of the group. The frequency of each class is calculated.\n\nFor \\textsl{Manon}:\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{Frequency} &\\textbf{Alternatives}\\\\\n$\\mathcal{C}_k$ & $f_k$ & $A_i$ \\\\\n\\hline\n$\\mathcal{C}_1$ & $2/9$ & Thailand, Venezuela \\\\\n$\\mathcal{C}_2$ & $2/9$ & Australia, Costa Rica \\\\\n$\\mathcal{C}_3$ & $1/9$ & Hungary \\\\\n$\\mathcal{C}_4$ & $1/3$ & California US, Germany, UK \\\\\n$\\mathcal{C}_5$ & $1/9$ & Japan \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\clearpage\nFor \\textsl{Martin}:\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{Frequency} &\\textbf{Alternatives}\\\\\n$\\mathcal{C}_k$ & $f_k$ & $A_i$ \\\\\n\\hline\n$\\mathcal{C}_1$ & $1/3$ & Japan, Germany, Hungary \\\\\n$\\mathcal{C}_2$ & $2/9$ & Australia, California US \\\\\n$\\mathcal{C}_3$ & $4/9$ & UK, Thailand, Venezuela, Costa Rica \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Payoff}\n\nThe following table gives the payoff values of Manon's ranking classes:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{Frequency} &\\textbf{Payoff}\\\\\n$\\mathcal{C}_k$ & $f_k$ & $v_k$ \\\\\n\\hline\n$\\mathcal{C}_1$ & $2/9$ & 7/9 \\\\\n$\\mathcal{C}_2$ & $2/9$ & 5/9 \\\\\n$\\mathcal{C}_3$ & $1/9$ & 4/9 \\\\\n$\\mathcal{C}_4$ & $1/3$ & 1/9 \\\\\n$\\mathcal{C}_5$ & $1/9$ & 0 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nAnd this one gives the payoff values of Martin's ranking classes:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{Frequency} &\\textbf{Payoff}\\\\\n$\\mathcal{C}_k$ & $f_k$ & $v_k$ \\\\\n\\hline\n$\\mathcal{C}_1$ & $1/3$ & 2/3  \\\\\n$\\mathcal{C}_2$ & $2/9$ & 4/9 \\\\\n$\\mathcal{C}_3$ & $4/9$ & 0 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\clearpage\nNow the payoff value of alternatives for Manon and Martin can be combined as in equation 
(\\ref{eq_alternatives_payoff}):\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Alternative}&\\textbf{Manon}&\\textbf{Martin}&\\textbf{Payoff}\\\\\n$A_i$ & $v^{\\mathrm{Manon}}(A_i)$ & $v^{\\mathrm{Martin}}(A_i)$ & $v(A_i)$ \\\\\n\\hline\nHungary & $4/9$ & $2/3$ & $10/9$ \\\\\nAustralia & $5/9$ & $4/9$ & $1$ \\\\\nThailand & $7/9$ & $0$ & $7/9$ \\\\\nVenezuela & $7/9$ & $0$ & $7/9$ \\\\\nGermany & $1/9$ & $2/3$ & $7/9$ \\\\\nJapan & $0$ & $2/3$ & $2/3$ \\\\\nCalifornia US & $1/9$ & $4/9$ & $5/9$ \\\\\nCosta Rica & $5/9$ & $0$ & $5/9$ \\\\\nUK & $1/9$ & $0$ & $1/9$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe payoff is maximal for \\textbf{Hungary} which achieves the consensus.\n\n\\section{Algorithm with scaling}\\label{section_algorithm_with_scaling}\n\nNow, in some cases ranking the alternatives with mere ordering is not fully representative of the perception of members in the group. A member may have a greater comparative preference between two classes of alternatives. The following scale of preference is used:\n\\begin{itemize}\n\\item moderate\n\\item strong\n\\item very strong\n\\item extreme\n\\end{itemize}\nThe \\textsl{moderate} preference is the equivalent of the hierarchy induced by the \\textsl{basic algorithm}. The \\textbf{intensity} of comparison is based on that scale with an integer value $I_c\\in[0..3]$. The moderate intensity value is $I_c=0$ and the extreme is $I_c=3$.\n\nA comparison with a higher intensity than moderate adds a dummy class between the two compared classes. The frequency of this dummy class increases with the intensity.\n\nA member of the group makes a ranking $\\left\\lbrace\\mathcal{C}_k, k=1..K\\right\\rbrace$. Equation (\\ref{eq_member_payoff}) gives the difference of payoff value between two successive classes with:\n\\begin{equation}\nv_k-v_{k+1} = f_{k+1}>0,\\ \\forall k\\in[1..K-1]\n\\end{equation}\nThis relation assumes that the group's member shows a \\textsl{moderate} preference between every consecutive pair of classes which they ranked. Now if the \\textsl{intensity} of this preference is higher between two classes $\\mathcal{C}_k$ and $\\mathcal{C}_{k+1}$, the difference expressed by the previous equation must be increased. The additional term is denoted $\\varphi_{k,k+1}>0$ so that the relation becomes:\n\\begin{equation}\nv_k-v_{k+1} = f_{k+1} + \\varphi_{k,k+1}\n\\end{equation}\nAs stated before, if the preference is \\textsl{moderate} ($I_c=0$) then $\\varphi_{k,k+1}=0$. Here, the additional term is modeled as proportional to the frequency $f_{k+1}$:\n\\begin{equation}\\label{eq_add_term}\n\\begin{split}\n&\\varphi_{k,k+1} = \\left(a\\left(I_c\\right)-1\\right)f_{k+1} \\\\\n\\mathrm{with}\\ &a\\left(I_c\\right) \\geq 1,\\ \\forall I_c\\in[0..3] \\\\\n\\mathrm{and}\\ &a(0)=1\n\\end{split}\n\\end{equation}\nAs a consequence:\n\\begin{equation}\nv_k-v_{k+1} = a\\left(I_c\\right) f_{k+1}\n\\end{equation}\nThe \\textsl{extreme preference} ratio is denoted $a_\\mathrm{ext} = a(3)$. This value is subjective and could be for instance set to 10. The intermediate values $a(1)$ and $a(2)$ are calculated from $a_\\mathrm{ext}$ assuming the following expression:\n\\begin{equation}\\label{eq_intensity_ratio}\na\\left(I_c\\right) = \\exp(\\alpha I_c)\n\\end{equation}\nUsing a power law for modeling the intensity of human feeling is not a new idea. 
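\n\nAnticipating the determination of $\\alpha$ just below (an illustrative computation added here, using the value $a_\\mathrm{ext}=10$ suggested above), equation (\\ref{eq_intensity_ratio}) gives $a(I_c)=10^{I_c/3}$, i.e.\n\\begin{equation*}\na(0)=1, \\qquad a(1)=10^{1/3}\\approx 2.15, \\qquad a(2)=10^{2/3}\\approx 4.64, \\qquad a(3)=10.\n\\end{equation*}\n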
The coefficient $\\alpha$ can be determined from $a_\\mathrm{ext} = a(3) = e^{3\\alpha}$:\n\\begin{equation}\\label{eq_intensity_exponent}\n\\alpha = \\frac13\\ln(a_\\mathrm{ext})\n\\end{equation}\nThe integration of the intensities of preferences gives the following expression of the payoff for a class $\\mathcal{C}_l$, with $l<K$:\n\\begin{equation}\nv_l = \\sum_{k>l}{f_k} + \\sum_{l\\leq k<K}{\\varphi_{k,k+1}}\n\\end{equation}\nThis expression is not valid as is because members in the group would not provide payoffs on the same scale. The payoff values would completely lose their objectivity and members providing rankings with higher intensities in comparison would see their opinion prevail over the group. This overrating stems from the fact that the sum of class frequencies is 1 (see equation (\\ref{eq_sum_frequencies})), but:\n\\begin{equation}\n S = \\sum_{k=1}^K{f_k} + \\sum_{k=1}^{K-1}{\\varphi_{k,k+1}} = 1 + \\sum_{k=1}^{K-1}{\\varphi_{k,k+1}} \\geq 1\n\\end{equation}\nOnce the values of the intensity terms $\\varphi_{k,k+1}$ are determined, the actual payoff value is computed by applying the corrective ratio $1/S$:\n\\begin{equation}\\label{eq_scaled_payoff_with_intensities}\nv_l = \\frac1S \\left( \\sum_{k>l}{f_k} + \\sum_{l\\leq k<K}{\\varphi_{k,k+1}} \\right)\n\\end{equation}\n\n\\section{Application case with scaling}\n\n\\subsection{Reference with moderate preferences}\nThere are 8 alternatives \\textit{A}, \\textit{B}, \\textit{C}, \\textit{D}, \\textit{E}, \\textit{F}, \\textit{G}, \\textit{H}. A member of the group makes the following ranking:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{Frequency} &\\textbf{Alternatives}\\\\\n$\\mathcal{C}_k$ & $f_k$ & $A_i$ \\\\\n\\hline\n$\\mathcal{C}_1$ & 1/4  & \\textit{A}, \\textit{B} \\\\\n$\\mathcal{C}_2$ & 3/8  & \\textit{C}, \\textit{D}, \\textit{E} \\\\\n$\\mathcal{C}_3$ & 1/8  & \\textit{F} \\\\\n$\\mathcal{C}_4$ & 1/4  & \\textit{G}, \\textit{H} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe payoff with the basic approach where every comparison of classes is associated with a moderate preference is:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{payoff} &\\textbf{Alternatives}\\\\\n$\\mathcal{C}_k$ & $v_k$ & $A_i$ \\\\\n\\hline\n$\\mathcal{C}_1$ & 3/4  & \\textit{A}, \\textit{B} \\\\\n$\\mathcal{C}_2$ & 3/8  & \\textit{C}, \\textit{D}, \\textit{E} \\\\\n$\\mathcal{C}_3$ & 1/4  & \\textit{F} \\\\\n$\\mathcal{C}_4$ & 0  & \\textit{G}, \\textit{H} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Preferences with variable intensities}\n\nNow, the member specifies intensities $I_c$ in the preferences between the classes of alternatives:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|l|}\n\\hline\n\\textbf{Classes}&\\textbf{Preference}\\\\\n$\\mathcal{C}_k$ / $\\mathcal{C}_{k+1}$ & $I_c(k)$ \\\\\n\\hline\n$\\mathcal{C}_1$ / $\\mathcal{C}_2$ & very strong, $I_c(1)=2$ \\\\\n$\\mathcal{C}_2$ / $\\mathcal{C}_3$ & moderate, $I_c(2)=0$ \\\\\n$\\mathcal{C}_3$ / $\\mathcal{C}_4$ & strong, $I_c(3)=1$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe payoff of class $\\mathcal{C}_4$ is unchanged and remains zero. The preference of class $\\mathcal{C}_3$ over $\\mathcal{C}_4$ is \\textsl{strong}. 
The additional term reflecting this intensity is given by equations (\\ref{eq_add_term}), (\\ref{eq_intensity_ratio}) and (\\ref{eq_intensity_exponent}) with an \\textsl{extreme} ratio $a_\\mathrm{ext}=10$:\n\\begin{equation}\n\\varphi_{3,4} = \\left(a\\left(1\\right)-1\\right)f_4 = \\left(\\exp(\\frac13\\ln(10)\\times1)-1\\right)\\frac14\\approx0.2886\n\\end{equation}\nThe preference of class $\\mathcal{C}_2$ over $\\mathcal{C}_3$ is \\textsl{moderate} so there is no additional term ($\\varphi_{2,3}=0$).\nThe preference of class $\\mathcal{C}_1$ over $\\mathcal{C}_2$ is \\textsl{very strong}:\n\\begin{equation}\n\\varphi_{1,2} = \\left(a\\left(2\\right)-1\\right)f_2 = \\left(\\exp(\\frac13\\ln(10)\\times2)-1\\right)\\frac38\\approx1.3656\n\\end{equation}\n\n\\subsection{Payoff}\n\nAs stated in section \\ref{section_algorithm_with_scaling}, the least preferred class has a payoff of 0, so $v_4=0$. For the other classes, equation (\\ref{eq_scaled_payoff_with_intensities}) applies. The scaling ratio $S$ is:\n\\begin{equation}\nS = 1 + \\varphi_{1,2} + \\varphi_{3,4} \\approx 2.6542\n\\end{equation}\nThe payoff value of class $\\mathcal{C}_3$ is:\n\\begin{equation}\nv_3 = \\frac1S \\left( f_4 + \\varphi_{3,4} \\right) \\approx 0.2029\n\\end{equation}\nThe payoff value of class $\\mathcal{C}_2$ is:\n\\begin{equation}\nv_2 = \\frac1S \\left( f_3 + f_4 + 0 + \\varphi_{3,4}  \\right)\\approx 0.2500\n\\end{equation}\nFinally, the payoff value of class $\\mathcal{C}_1$ is:\n\\begin{equation}\nv_1 = \\frac1S \\left(f_2 + f_3 + f_4 + \\varphi_{1,2} + 0 + \\varphi_{3,4}  \\right)\\approx 0.9058\n\\end{equation}\n\nThe table below compares the payoffs from the \\textsl{base algorithm} with those obtained with preferences of increased intensities:\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Class}&\\textbf{payoff (base)}&\\textbf{payoff (scaled)} &\\textbf{Alternatives}\\\\\n$\\mathcal{C}_k$ & $v_k$ & $v_k$ & $A_i$ \\\\\n\\hline\n$\\mathcal{C}_1$ & 0.75 & 0.906 & \\textit{A}, \\textit{B} \\\\\n$\\mathcal{C}_2$ & 0.375 & 0.250 & \\textit{C}, \\textit{D}, \\textit{E} \\\\\n$\\mathcal{C}_3$ & 0.25 & 0.203 & \\textit{F} \\\\\n$\\mathcal{C}_4$ & 0 & 0 
& \\textit{G}, \\textit{H} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\end{document}", "meta": {"hexsha": "f7590d3e3db3c533ad3ffee4104c056b5697f5a1", "size": 14681, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/algorithm.tex", "max_stars_repo_name": "flaudanum/consensus", "max_stars_repo_head_hexsha": "f745a7957d4b1c7e567c3a4a1ec9c04c0fb45aa0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/algorithm.tex", "max_issues_repo_name": "flaudanum/consensus", "max_issues_repo_head_hexsha": "f745a7957d4b1c7e567c3a4a1ec9c04c0fb45aa0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/algorithm.tex", "max_forks_repo_name": "flaudanum/consensus", "max_forks_repo_head_hexsha": "f745a7957d4b1c7e567c3a4a1ec9c04c0fb45aa0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.7859078591, "max_line_length": 565, "alphanum_fraction": 0.7032218514, "num_tokens": 5091, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7606506526772884, "lm_q1q2_score": 0.580744423056085}}
{"text": "%\\input{head.tex}\n\n%\\begin{document}\n\n\n\n\\section{Basic Concepts}\n\nMost of the following concepts apply to arbitrary groups.\n\n\\subsection{Groups and Subgroups}\n\n\\defn{\ngroup, Abelian, conjugate, power, finite, cyclic, subgroup, generate, order, complex product, coset, coset, transversal, p-group. \n}\n\n\\prop{\nbijective mappings, cyclic groups, subgroup, set-generate, complex product, order of c.p., Lagrange Thm, order of g and G, transversal, complement, Dedekind Identity. \n}\n\n\\subsection{Homomorphisms and Normal Subgroups}\n\n\\defn{\nhomomorphism, im, ker, epi/mono/endo/iso/automorphism, normal, simple, factor group, subnormal, section. \n}\n\n\\prop{\ninverse homo, ker is normal, normal iff, natural homo, Homo Thm, Iso Thm*2, subnormal.\n}\n\n\\subsection{Automorphisms}\n\n\\defn{\nAutG, InnG, Z(G), characteristic, X-inv, X-homo. \n}\n\n\\prop{\ncyclic G/Z(G) implies Abelian G, char is trans. \n}\n\n\\subsection{Cyclic Groups}\n\n\\defn{$C_n$}\n\n\\prop{\nsubgroups of $\\Z$, finite cyclic groups and their subgroups, char, cyclic p-groups, Abelian simple groups. \n}\n\n\\subsection{Commutators}\n\n\\defn{\ncommutator subgroup, perfect. \n}\n\n\\prop{\ncomu with homo, smallest s.t. Abelian factor, perfect factor, $[x,yz]=[x,z][x,y]^z$, [X,Y] is normal in $<X,Y>$, Three-subgroups Lem. \n}\n\n\\subsection{Products of Groups}\n\n\\defn{\ninternal/external direct product, central product, internal/external semidirect product, involutions, dihedral group. \n}\n\n\\prop{\nin/ex iso, on Z(G) G' G/N, on normal N, internal structures $G/\\bigcap_i N_i \\cong G/N_1\\times\\cdots\\times G/N_n$, prod of normal subgs of relative prime order is direct, on elements, central product on G/Z[G], factorization and product of elements in a semiderict product, dihedral groups are semidirect products. \n}\n\n\\subsection{Minimal Normal Subgroups}\n\n\\defn{\nminimal normal subgroups. \n}\n\n\\prop{\nproperties of a min-normal subg, product of min-normal subgs, factorization(structure for Abelian versus uniqueness for non-Abelian). \n}\n\n\\subsection{Composition Series}\n\n\\defn{\nrefine normal(rep. subnormal)series to chief(esp. composition) series, solvable group, X-section, X-composition series, X-simple. \n}\n\n\\prop{\nJordan-Holder Thm. 
\n}\n\n\n%%%%%%%%%% below are solutions of exercises %%%%%%%%\n\n\\begin{exerlist}\n\n\\exer{hahaha}{jajaja}\n\t\n\\end{exerlist}\n\n\n\n\n%\\end{document}", "meta": {"hexsha": "368490d9453a67022c8b8df0e61bcdd8a8c6b288", "size": 2304, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "BasicConcepts.tex", "max_stars_repo_name": "ocbaby/FiniteGroups", "max_stars_repo_head_hexsha": "698c98fa6e01ac3187a94ce6ccb6eece54979949", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "BasicConcepts.tex", "max_issues_repo_name": "ocbaby/FiniteGroups", "max_issues_repo_head_hexsha": "698c98fa6e01ac3187a94ce6ccb6eece54979949", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BasicConcepts.tex", "max_forks_repo_name": "ocbaby/FiniteGroups", "max_forks_repo_head_hexsha": "698c98fa6e01ac3187a94ce6ccb6eece54979949", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.8118811881, "max_line_length": 315, "alphanum_fraction": 0.734375, "num_tokens": 659, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7634837635542924, "lm_q1q2_score": 0.580744423056085}}
{"text": "\\graphicspath{{Pics/}}\n\\subsection{Parallel lines to the $ A $-median}\n\n\\den{Skeleton Diagram}{\n    In a triangle $ABC$, $M$ is the center of $BC$. $B', C'$ are on $\\odot\n    ABC$ such that $BB'\\parallel CC' \\parallel AM$. Let $R$ be the point on\n    $BC$ such that $\\odot B'C'R$ is tangent to $BC$. Draw the two circles\n    through $BB'$ and $CC'$ that are tangent to $BC$. Also let $S = BC \\cap\n    B'C'$, and let $G$ be such that $SG$ is tangent to $\\odot ABC$ at $G$.\n    Also let $ M_a, M_A $ be the minor and major arc midpoints of $ \\widehat{BC} $.\n}\n\n\\figdf{1}{apmo_2019_p3_and_IGO_2019_p4_0}{Skeleton}\n\n\\lem{}{\n    $M_A, R, G$ are collinear\n}\n\n\\begin{prooof}\n    Let $R' = GM_A\\cap BC$. Then do some angle chansing to show that $SR' = SG$.\n\\end{prooof}\n\n\n\n\\den{Further Construction}{ \n    Now let $ M_AR\\cap AM = H, M_AR\\cap\\odot ABC = G $.\n        Let $ \\omega_b, \\omega_c $ be the circles through $ BB' $ and $\n            CC' $ and tangent to $ BC $.\n        Let $ I $ be the intersection of the line through $ M_a $\n            parallel to $ AM $ and $ \\omega_x $.\n        Let $ J=AM\\cap\\omega_x $.\n        Let $ J\\cap\\omega_b=X, J\\cap\\omega_c=Y $.\n        Let $ AS\\cap \\odot ABC = F $.  \n}\n\n\n\\lem{}{$ SG $ is tangent to $ \\odot ABC $}\n\n\\lem{}{$ R, M, M_a, G $ is cyclic, call it $ \\omega_x $}\n\n\\lem{}{$ I, G, O $ are collinear, and $ IG=IJ $}\n\n\\lem{}{$ BX, CY, AM, RM_A$ are concurrent.}\n\n\\figdf{1}{apmo_2019_p3_and_IGO_2019_p4}{}\n\n\\prob{}{IGO 2019 Advanced P4}{H}{\n    $ XHYG $ is cyclic, and tangent to $\n    \\omega_b, \\omega_c $ and $ \\odot ABC $. And $ I $ is the center of this\n    circle.\n}\n\n\n\\figdf{1}{apmo_2019_p3_and_IGO_2019_p4_1}{}\n\n\n\\prob{https://artofproblemsolving.com/community/c6h1854153p12519663}{APMO 2019\n    P3}{MH}{\n    A variable point $P$ is selected in the line segment $AM$. The\n    circumcircles of triangles $BPM$ and $CPM$ intersect $\\Gamma$ again at points\n    $D$ and $E$, respectively. 
The lines $DP$ and $EP$ intersect (a second time)\n    the circumcircles of triangles $CPM$ and $BPM$ at $X$ and $Y$, respectively.\n    Prove that $ AFXY $ is cyclic.\n}\n", "meta": {"hexsha": "fbe91d9aac797ea8922d5af219a9c825d87455f1", "size": 2099, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geo/bigpics/1_parallels_to_median_and_stuffs.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "geo/bigpics/1_parallels_to_median_and_stuffs.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geo/bigpics/1_parallels_to_median_and_stuffs.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 32.2923076923, "max_line_length": 83, "alphanum_fraction": 0.613625536, "num_tokens": 733, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255928, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5807444106774793}}
{"text": "\\problemname{Pipe Rotation}\n\n\\noindent The four ninja turtles: Leonardo, Donatello, Michelangelo, and Raphael are seeking a new home in Manhattan, New York City. The turtles don't like sudden dead-ends in their home. Fortunately, the government recently installed a new sewage system where pipes can be rotated! The turtles needs your help finding a suitable home, so they're willing to provide you a grid of the current layout of the sewage system.\\\\\n\nThe grid consists of $N$ rows and $M$ columns. The cell $G_{i,j}$ will consist of one of four pipes, encoded as a letter between ``\\texttt{A}\" and ``\\texttt{D}\". These pipes can be rotated by any multiple of $90$ degrees:\\\\\n\n\\noindent\n\\begin{minipage}{0.25\\textwidth}\n  \\includegraphics[width=0.95\\textwidth]{pipe_rotation_1.png}\n\\end{minipage}\n\\begin{minipage}{0.75\\textwidth}\n  \\begin{itemize}\n    \\item (\\texttt{A}) Nothing\n    \\item (\\texttt{B}) Straight pipe (pipes leaving through two opposite edges)\n    \\item (\\texttt{C}) Elbow-shaped pipe (pipes leaving through two adjacent edges)\n    \\item (\\texttt{D}) Four-way pipe (pipes leaving through all four edges)\n  \\end{itemize}\n\\end{minipage}\n\\vspace{10pt}\n\nDetermine whether it's possible to rotate the cells such that the pipes all line up with one another. In particular, for each edge shared by a pair of adjacent cells, there must either be a pipe on both sides of that edge, or on neither side. And for each each of the $2 \\cdot (N + M)$ outer edges of the grid, there must be no pipe leaving through that edge. Below are examples:\n\n\\begin{figure}[h]\n  \\centering\n  \\begin{minipage}[t]{0.3\\textwidth}\\centering\n      \\includegraphics[width=0.95\\textwidth]{pipe_rotation_2}\n      Fig 1. Invalid example,\\\\ two sudden dead-ends.\n  \\end{minipage}\n  \\hspace{30pt}\n  \\begin{minipage}[t]{0.3\\textwidth}\\centering\n      \\includegraphics[width=0.95\\textwidth]{pipe_rotation_3}\n      Fig 2. 
Valid example,\\\\ no sudden dead-ends.\n  \\end{minipage}\n\\end{figure}\n\n\\section*{Input}\nThe first line of input consists of two space-separated integers, $N$ and $M$ ($1 \\leq N, M \\leq 100$).\\\\\n$N$ lines follow, the $i$th of which consists of $M$ characters $G_{i,1}, G_{i,2}, \\dots, G_{i,M}$, where $G_{i,j} \\in \\{A,B,C,D\\}$ (for $i = 1..N$).\n\n\\section*{Output}\nPrint, on a single line, a single string, either ``\\texttt{Possible}\" if it's possible to produce a valid configuration of pipes, or ``\\texttt{Impossible}\" otherwise.\\\\\n", "meta": {"hexsha": "da646915d0be4348f45eac678ea7989cb4e9be89", "size": 2432, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "piperotation/problem_statement/problem.en.tex", "max_stars_repo_name": "csecutsc/utscode2", "max_stars_repo_head_hexsha": "469367528cc5697e0c5c0ccee28420591335d899", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-09-30T15:06:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-09T06:50:25.000Z", "max_issues_repo_path": "piperotation/problem_statement/problem.en.tex", "max_issues_repo_name": "csecutsc/utscode2", "max_issues_repo_head_hexsha": "469367528cc5697e0c5c0ccee28420591335d899", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "piperotation/problem_statement/problem.en.tex", "max_forks_repo_name": "csecutsc/utscode2", "max_forks_repo_head_hexsha": "469367528cc5697e0c5c0ccee28420591335d899", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-17T04:10:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-17T04:10:51.000Z", "avg_line_length": 57.9047619048, "max_line_length": 422, "alphanum_fraction": 0.7232730263, "num_tokens": 720, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5806930094740786}}
{"text": "\\chapter{Conclusion}\nIn this thesis we have examined how Bayesian inference can be used in neural networks using Markov chain Monte Carlo sampling and how these and non-Bayesian neural networks perform regression and classification. The most essential difference between these two approaches of using neural networks is that Bayesian neural networks require a prior distribution for its weights and use sampling procedures to generate a distribution for its weights and predictions, while non-Bayesian neural networks use optimization algorithms, such as the gradient-based optimization algorithms described in section \\ref{sec:gradient_optimization}, to learn a single estimate of the weights that result in predictions that provide the least amount of loss. \n\\\\\n\\\\\nThe focal point of the examination of the Bayesian neural networks has been how to efficiently sample from the posterior distribution for the weights using Markov chain Monte Carlo methods. We analyzed this by first providing a simple example of a Bayesian neural network that took weights sampled from a prior and accepted them with a probability proportional to the likelihood of the produced results, a sort of rejections sampling illustrated by \\cite{neal2012bayesian}. This provided the motivation for examining more efficient sampling methods such as the Markov chain Monte Carlo methods. One such method was the Metropolis algorithm that has the inefficiency of exploring the distribution with a random walk behavior. For this reason we examined the Hamiltonian Monte Carlo, which combine the principles of the Metropolis algorithm with Hamiltonian dynamics for a better exploration of the distribution. This turns out to have the inefficiency of sometimes performing \"U-turns\", meaning that it has the possibility of returning to the previous sample-point of the Markov chain and thus make highly correlated samples. \nThe No-U-Turn Hamiltonian Monte Carlo is an extension to Hamiltonian Monte Carlo, which prevents such a U-turn and also make adaptive choices to the size and the number of steps taken by the Markov chain before sampling. We conclude our examination of Bayesian neural networks by covering the effect of the prior choice and by suggesting general schemes for choosing a prior distribution.\n\\\\\n\\\\\nWe end our thesis with an evaluation and illustration on how Bayesian and non-Bayesian neural networks work in practice. We do this with varying design choices to show how elements such as a hierarchical prior and different choice of regularization affects results. These different neural networks are applied to a regression task with the aim of predicting house prices and a classification task with the aim of predicting default probabilities of credit card clients. 
We furthermore show some of the benefits of the posterior predictive distributions produced by the Bayesian neural networks, and how to use these to evaluate models and make better predictions.", "meta": {"hexsha": "fb859c0bfcc3bba67a156a38350d53790af4a104", "size": 2962, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "conclusion.tex", "max_stars_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_stars_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "conclusion.tex", "max_issues_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_issues_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "conclusion.tex", "max_forks_repo_name": "mraabo/Dissertation--Bayesian-Neural-Networks", "max_forks_repo_head_hexsha": "629b1c5f4bbdb80ef1d1037b4a0a1b7f95ac710b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 329.1111111111, "max_line_length": 1125, "alphanum_fraction": 0.830519919, "num_tokens": 529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7662936377487305, "lm_q1q2_score": 0.5806930054295564}}
{"text": "\\clearpage\n\\phantomsection\n\\addcontentsline{toc}{subsection}{SRA}\n\\label{insn:sra}\n\\subsection*{SRA: Shift Right Arithmetic}\n\n\\subsubsection*{Format}\n\n\\textrm{SRA \\%rd, \\%r1, \\%r2}\n\n\\begin{center}\n\\begin{bytefield}[endianness=big,bitformatting=\\scriptsize]{32}\n\\bitheader{0,7,8,15,16,23,24,31} \\\\\n\\bitbox{8}{0x2E}\n\\bitbox{8}{r1}\n\\bitbox{8}{r2}\n\\bitbox{8}{rd}\n\\end{bytefield}\n\\end{center}\n\n\\subsubsection*{Description}\n\nThe \\instruction{sra} instruction shifts the value in \\registerop{r1} right by\nthe number of bits indicated in \\registerop{r2}, placing the results in register\n\\registerop{rd}.  This instruction only operates on \\textbf{signed} integers.\n\n\\subsubsection*{Pseudocode}\n\n\\begin{verbatim}\n%rd = %r1 >> %r2\n\\end{verbatim}\n\n\\subsubsection*{Constraints}\n\n\\registerop{r1}, \\registerop{r2}, and \\registerop{rd} must be less than\n\\nregs{}.\n\n\\medskip\n\\noindent\n\\registerop{rd} cannot be \\register{r0}.\n\n\\subsubsection*{Failure modes}\n\nThis instruction has no run-time failure modes beyond its constraints.\n", "meta": {"hexsha": "6d31e7fcc380ced85c0b0e2afb752664b627e5e8", "size": 1014, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "specification/instr/sra.tex", "max_stars_repo_name": "bsdbcr/documentation", "max_stars_repo_head_hexsha": "ae885886c9cffe356ed8c3acd34feae47a0e9c8e", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "specification/instr/sra.tex", "max_issues_repo_name": "bsdbcr/documentation", "max_issues_repo_head_hexsha": "ae885886c9cffe356ed8c3acd34feae47a0e9c8e", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "specification/instr/sra.tex", "max_forks_repo_name": "bsdbcr/documentation", "max_forks_repo_head_hexsha": "ae885886c9cffe356ed8c3acd34feae47a0e9c8e", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.5333333333, "max_line_length": 80, "alphanum_fraction": 0.7435897436, "num_tokens": 334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.580693005282468}}
{"text": "  \\documentclass{mynotes}\n\n%\\geometry{showframe}% for debugging purposes -- displays the margins\n\n\\newcommand{\\E}{\\mbox{E}}\n\\newcommand{\\MSE}{\\mbox{MSE}}\n\\newcommand{\\var}{\\mbox{var}}\n\\newcommand{\\by}{\\textbf{y}}\n\n\\usepackage{amsmath,amssymb,amsfonts,amsthm} % amsmath package, useful for mathematical formulas\n%\\usepackage[garamond]{mathdesign}\n\\usepackage{hyperref}\n\\hypersetup{colorlinks}% uncomment this line if you prefer colored hyperlinks (e.g., for onscreen viewing)\n\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}[theorem]{Lemma}\n\n% Set up the images/graphics package\n\\usepackage{graphicx}\n\\setkeys{Gin}{width=\\linewidth,totalheight=\\textheight,keepaspectratio}\n\\graphicspath{{graphics/}}\n\n\\title[Exercises 2 $\\cdot$ SDS 383D]{Exercises 2: Bayes and the Gaussian linear model}\n%\\author[ ]{ }\n\\date{}  % if the \\date{} command is left out, the current date will be used\n\n% The following package makes prettier tables.  We're all about the bling!\n\\usepackage{booktabs}\n\n% The units package provides nice, non-stacked fractions and better spacing\n% for units.\n\\usepackage{units}\n\n% The fancyvrb package lets us customize the formatting of verbatim\n% environments.  We use a slightly smaller font.\n\\usepackage{fancyvrb}\n\\fvset{fontsize=\\normalsize}\n\n% Small sections of multiple columns\n\\usepackage{multicol}\n\n% Provides paragraphs of dummy text\n\\usepackage{lipsum}\n\n% These commands are used to pretty-print LaTeX commands\n\\newcommand{\\doccmd}[1]{\\texttt{\\textbackslash#1}}% command name -- adds backslash automatically\n\\newcommand{\\docopt}[1]{\\ensuremath{\\langle}\\textrm{\\textit{#1}}\\ensuremath{\\rangle}}% optional command argument\n\\newcommand{\\docarg}[1]{\\textrm{\\textit{#1}}}% (required) command argument\n\\newenvironment{docspec}{\\begin{quote}\\noindent}{\\end{quote}}% command specification environment\n\\newcommand{\\docenv}[1]{\\textsf{#1}}% environment name\n\\newcommand{\\docpkg}[1]{\\texttt{#1}}% package name\n\\newcommand{\\doccls}[1]{\\texttt{#1}}% document class name\n\\newcommand{\\docclsopt}[1]{\\texttt{#1}}% document class option name\n\n\\newcommand{\\N}{\\mbox{N}}\n\\newcommand{\\thetahat}{\\hat{\\theta}}\n\\newcommand{\\sigmahat}{\\hat{\\sigma}}\n\\newcommand{\\betahat}{\\hat{\\beta}}\n\n\n\\begin{document}\n\n\\maketitle% this prints the handout title, author, and date\n\n\n\\section{A simple Gaussian location model}\n\nTake a simple Gaussian model with unknown mean and variance:\n\\begin{equation}\n\\label{eqn:normal_model}\n(y_i \\mid \\theta, \\sigma^2) \\sim N(\\theta, \\sigma^2) \\; , \\quad i = 1, \\ldots, n \\, .\n\\end{equation}\nLet $\\bf{y}$ be the vector of observations $\\textbf{y} = (y_1, \\ldots, y_n)^T$.\n\nSuppose we place conjugate normal and inverse-gamma priors on $\\theta$ and $\\sigma^2$, respectively:\n\\begin{align*}\n(\\theta \\mid \\sigma^2) &\\sim N(\\mu, \\tau^2 \\sigma^2) \\\\\n\\sigma^2 &\\sim \\mbox{Inv-Gamma}\\left( \\frac{d}{2}, \\frac{ \\eta}{2} \\right) \\, ,\n\\end{align*}\nwhere $\\mu$, $\\tau > 0$, $d>0$ and $\\eta > 0$ are fixed scalar hyperparameters.  Note a crucial choice here: the error variance appears in the prior for $\\theta$.  This affects the interpretation of the hyperparameter $\\tau$, which is not the prior variance of $\\theta$, but rather the prior signal-to-noise ratio.  This is pretty common thing to do in setting up priors for location parameters: to scale the prior by the error variance.  
There are a few good reasons to do this, but historically the primary one has been analytical convenience (as you'll now see).\n\nHere's a sensible way to interpret each of these four parameters:\n\\begin{itemize}\n\\item $\\mu$ is a prior guess for $\\theta$.\n\\item $\\tau$ is a prior signal-to-noise ratio---that is, how dispersed your prior is for $\\theta$, relative to the error standard deviation $\\sigma$.\n\\item $d$ is like a ``prior sample size'' for the error variance $\\sigma^2$.\n\\item $\\eta$ is like a ``prior sum of squares'' for the error variance $\\sigma^2$.  More transparently, $\\eta/d$ is like a prior guess for the error variance $\\sigma^2$.  It's not exactly the prior mean for $\\sigma^2$, but it's close to the prior mean as $d$ gets larger, since the inverse-gamma(a,b) prior has expected value\n$$\nE(\\sigma^2) = b/(a-1) =  \\frac{\\eta / 2}{d/2 - 1} = \\frac{\\eta}{d - 2} \\approx \\eta/d\n$$\nif $d$ is large.\\footnote{This expression is only valid if $d > 2$.}\n\\end{itemize}\n\nWhat is meant by the phrases ``prior sample size'' and ``prior sum of squares''?  Well, remember that conjugate priors always resemble the likelihood functions that they're intended to play nicely with.  The two relevant quantities in the likelihood function for $\\sigma^2$ are the sample size and the sums of squares.  The prior here is designed to mimic the likelihood function for $\\sigma^2$ that you'd get if you hallucinated a previous data set with sample size $d$ and sums of squares $\\eta$.\n\n\\paragraph{Precisions are easier than variances.}  It's perfectly fine to work with this form of the prior, and it's easier to interpret this way.  But it turns out that we can make the algebra a bit cleaner by working with the precisions $\\omega = 1/\\sigma^2$ and $\\kappa = 1/\\tau^2$ instead.\n\\begin{align*}\n(\\theta \\mid \\omega) &\\sim N(\\mu, (\\omega \\kappa)^{-1}) \\\\\n\\omega &\\sim \\mbox{Gamma}\\left( \\frac{d}{2}, \\frac{ \\eta}{2} \\right) \\, .\n\\end{align*}\n\nThis means that the joint prior for $(\\theta, \\omega)$ has the form\n\\begin{equation}\n\\label{eqn:normal_gamma_prior1}\np(\\theta, \\omega) \\propto \\omega^{(d+1)/{2} - 1} \\exp \\left\\{ - \\omega \\cdot \\frac{\\kappa (\\theta - \\mu)^2}{2}  \\right\\}\n\\cdot \\exp\\left\\{ -  \\omega \\cdot \\frac{\\eta}{2}  \\right\\} \n\\end{equation}\nThis is often called the \\textit{normal/gamma} prior for $(\\theta, \\omega)$ with parameters $(\\mu, \\kappa, d, \\eta)$, and it's equivalent to a normal/inverse-gamma prior for $(\\theta, \\sigma^2)$.  (The interpretation of $\\kappa$ is like a prior sample size for the mean $\\theta$.)  Note: you can obviously write this joint density in Equation \\ref{eqn:normal_gamma_prior1} in a way that combines the exponential terms, but this way keeps the bit involving $\\theta$ separate, so that you can recognize the normal kernel.\\footnote{The term ``kernel'' is heavily overloaded in statistics. \\href{http://en.wikipedia.org/wiki/Kernel_(statistics)\\#In_Bayesian_statistics}{See Wikipedia for the sense in which I mean the term.}  }\n\n\n\\begin{enumerate}[(A)]\n\n\n\\item By construction, we know that the marginal prior distribution $p(\\theta)$ is a gamma mixture of normals.  
Show that this takes the form of a centered, scaled $t$ distribution: \n$$\np(\\theta) \\propto \\left(1+ \\frac{1}{\\nu} \\cdot \\frac{(\\theta - m)^2}{s^2}  \\right)^{-\\frac{\\nu+1}{2}}\n$$\nwith center $m$,  scale $s$, and degrees of freedom $\\nu$, where you fill in the blank for $m$, $s^2$, and $\\nu$ in terms of the four parameters of the normal-gamma family.  Note: you did a problem just like this on a previous exercise!  This shouldn't be a lengthy re-derivation.\n\n\n\\item Assume the normal sampling model in Equation \\ref{eqn:normal_model} and the normal-gamma prior in Equation \\ref{eqn:normal_gamma_prior1}.  Calculate the joint posterior density $p(\\theta, \\omega \\mid \\textbf{y})$, up to constant factors not depending on $\\omega$ or $\\theta$.  Show that this is also a normal/gamma distribution of the same form as above:\n\\begin{equation}\n\\label{eqn:normal_gamma_post}\np(\\theta, \\omega \\mid \\by) \\propto \\omega^{(d^\\star+1)/2 - 1} \\exp \\left\\{ - \\omega \\cdot \\frac{\\kappa^\\star (\\theta - \\mu^\\star)^2}{2}  \\right\\}\n\\cdot \\exp\\left\\{ -  \\omega \\cdot \\frac{\\eta^\\star}{2}  \\right\\} \n\\end{equation}\nFrom this form of the posterior, you should be able to read off the new updated parameters, by pattern-matching against the functional form in Equation \\ref{eqn:normal_gamma_prior1}:\n\\begin{itemize}\n\\item $\\mu \\longrightarrow \\mu^\\star =  \\; ?$\n\\item $\\kappa \\longrightarrow \\kappa^\\star =  \\; ?$\n\\item $d \\longrightarrow d^{\\star} =  \\; ?$\n\\item $\\eta \\longrightarrow \\eta^\\star = \\; ?$\n\\end{itemize}\nYou may notice that my parameterization of the normal-gamma in Equation \\ref{eqn:normal_gamma_prior1} differs from, say, the one you might find in textbooks or on websites.  I've chosen this parameterization in order to make these four updates for the parameters, above, as simple-looking and intuitive as possible.\n\n\nTip: this one is a bit of an algebra slog, with a lot of completing the square, collecting common terms, and cancelling positives with negatives.  For example, to make the calculations go more easily, you might first show (or recall, from a previous exercise) that the likelihood can be written in the form\n$$\np(\\by \\mid \\theta, \\omega) \\propto \\omega^{n/2} \\exp \\left\\{ - \\omega \\cdot \\left( \\frac{S_y + n(\\bar{y} - \\theta)^2}{2} \\right) \\right\\} \\, ,\n$$\nwhere $S_y = \\sum_{i=1}^n (y_i - \\bar{y})^2$ is the sum of squares for the $\\by$ vector.  This expresses the likelihood in terms of the two statistics $\\bar{y}$ and $S_y$, which you may recall from your math-stat course are sufficient statistics for $(\\theta, \\sigma^2)$.\n\nTake care in ignoring constants here: some term that is constant in $\\theta$ may not be constant in $\\omega$, and vice versa.  You're focusing on the joint posterior, so you can't ignore anything that has a $\\theta$ or $\\omega$ in it.\n\n\n\\item From the joint posterior you just derived, what is the conditional posterior distribution $p(\\theta \\mid \\by, \\omega)$?  Note: this should require no calculation---you should just be able to read it off directly from the joint distribution, since you took care to set up things so that the joint posterior was in the same form as Equation \\ref{eqn:normal_gamma_prior1}.\n\n\\item From the joint posterior you calculated in (B), what is the marginal posterior distribution $p(\\omega \\mid \\by)$?  Unlike the previous question, this one doesn't come 100\\% for free---you have to integrate over $\\theta$.  
But it shouldn't be too hard, since you can ignore constants not depending on $\\omega$ in calculating this integral.\n\n\\item Show that the marginal posterior $p(\\theta \\mid \\by)$ takes the form of a centered, scaled $t$ distribution and express the parameters of this $t$ distribution in terms of the four parameters of the normal-gamma posterior for $(\\theta, \\omega)$.  Note: since you've set up the normal-gamma family in this careful conjugate form, this should require no extra work.  It's just part (A), except for the posterior rather than the prior.\n\n\\item True or false: in the limit as the prior parameters $\\kappa$, $d$, and $\\eta$ approach zero, the priors $p(\\theta)$ and $p(\\omega)$ are valid probability distributions.  (Remember that a valid probability distribution must integrate to 1 (or something finite, so that it can be normalized to integrate to 1) over its domain.)\n\n\\item True or false: in the limit as the prior parameters $\\kappa$, $d$, and $\\eta$ approach zero, the posteriors $p(\\theta \\mid \\by)$ and $p(\\omega \\mid \\by)$ are valid probability distributions.\n\n\\item Your result in (E) implies that a Bayesian credible interval for $\\theta$ takes the form\n$$\n\\theta \\in m \\pm t^{\\star} \\cdot s \\, ,\n$$\nwhere $m$ and $s$ are the posterior center and scale parameters from (E), and $t^\\star$ is the appropriate critical value of the t distribution for your coverage level and degrees of freedom (e.g.~it would be 1.96 for a 95\\% interval under the normal distribution).\n\nTrue or false: In the limit as the prior parameters $\\kappa$, $d$, and $\\eta$ approach zero, the Bayesian credible interval for $\\theta$ becomes identical to the classical (frequentist) confidence interval for $\\theta$ at the same confidence level.\n\n\\end{enumerate}\n\n\n\\newpage \n\n\\section{The conjugate Gaussian linear model}\n\nNow consider the Gaussian linear model\n$$\n(\\by \\mid \\beta, \\sigma^2) \\sim \\N(X\\beta, (\\omega \\Lambda)^{-1} ) \\, ,\n$$\nwhere $\\by$ is an $n$ vector of responses, $X$ is an $n \\times p$ matrix of features, and $\\omega = 1/\\sigma^2$ is the error precision, and $\\Lambda$ is some known matrix.  A typical setup would be $\\Lambda = I$, the $n \\times n$ identity matrix, so that the residuals of the model are i.i.d.~normal with variance $\\sigma^2$.  But we'll consider other setups as well, so we'll leave a generic $\\Lambda$ matrix in the sampling model for now.\n\nNote that when we write the model this way, we typically assume one of two things: either (1) that both the $y$ variable and all the $X$ variables have been centered to have mean zero, so that an intercept is unnecessary; or (2) that $X$ has a vector of 1's as its first column, so that the first entry in $\\beta$ is actually the intercept.\n\nWe'll again work in terms of the precision $\\omega = 1/\\sigma^2$, and consider a normal--gamma prior for $\\beta$:\n\\begin{align}\n(\\beta \\mid \\omega) &\\sim N(m, (\\omega K)^{-1}) \\\\\n\\omega &\\sim \\mbox{Gamma}(d/2, \\eta/2) \\, .\n\\end{align}\nHere $K$ is a $p \\times p$ precision matrix in the multivariate normal prior for $\\beta$, which we assume to be known.\n\nThe items below follow a parallel path to the derivations you did for the Gaussian location model---except for the multivariate case.  
Don't reinvent the wheel if you don't have to: you should be relying heavily on your previous results about the multivariate normal distribution.\\footnote{That is, if you find yourself completing the square over and over again with matrices and vectors, you should stop and revisit your Exercises 1 solutions.}\n\n\\subsection{Basics}\n\n\\begin{enumerate}[(A)]\n\n\\item Derive the conditional posterior $p(\\beta \\mid \\by, \\omega)$.\n\n\\item Derive the marginal posterior $p(\\omega \\mid \\by)$.\n\n\\item Putting these together, derive the marginal posterior $p(\\beta \\mid \\by)$.\n\n\\item Take a look at the data in ``gdpgrowth.csv'' from the class website, which has macroeconomic variables for several dozen countries.  In particular, consider a linear model (with intercept) for a country's GDP growth rate (GR6096) versus its level of defense spending as a fraction of its GDP (DEF60).\n\nFit the Bayesian linear model to this data set, choosing $\\Lambda = I$ and something diagonal and pretty vague for the prior precision matrix $K = \\mbox{diag}(\\kappa_1, \\kappa_2)$.  Inspect the fitted line (graphically).  Are you happy with the fit?  Why or why not?\n\n\\end{enumerate}\n\n\n\\subsection{A heavy-tailed error model}\n\n\nNow it's time for your first ``real'' use of the hierarchical modeling formalism to do something cool.  Here's the full model you'll be working with:\n\\begin{align}\n(\\by \\mid \\beta, \\omega, \\Lambda) &\\sim N(X \\beta, (\\omega \\Lambda)^{-1}) \\\\\n\\Lambda &= \\mbox{diag}(\\lambda_1, \\ldots, \\lambda_n) \\\\\n\\lambda_i &\\stackrel{iid}{\\sim} \\mbox{Gamma}(h/2, h/2) \\\\\n(\\beta \\mid \\omega) &\\sim N(m, (\\omega K)^{-1}) \\\\\n\\omega &\\sim \\mbox{Gamma}(d/2, \\eta/2) \\, .\n\\end{align}\nwhere $h$ is a fixed hyperparameter.\n\n\\begin{enumerate}[(A)]\n\n\n\\item Under this model, what is the implied conditional distribution $p(y_i \\mid X, \\beta, \\omega)$?  Notice that $\\lambda_i$ has been marginalized out.  This should look familiar.\n\n\\item What is the conditional posterior distribution $p(\\lambda_i \\mid \\by, \\beta, \\omega)$?\n\n\\item Combining these results with those from the ``Basics'' subsection above, code up a \\href{http://en.wikipedia.org/wiki/Gibbs_sampling}{Gibbs sampler} that repeatedly cycles through sampling the following three sets of conditional distributions.\n\\begin{itemize}\n\\item $p(\\beta \\mid \\by, \\omega, \\Lambda)$\n\\item $p(\\omega \\mid \\by, \\Lambda)$\n\\item $p(\\lambda_i \\mid \\by, \\beta, \\omega)$\n\\end{itemize}\nThe first two should follow identically from your previous results, except that we are explicitly conditioning on $\\Lambda$, which is now a random variable rather than a fixed hyperparameter.  If you cycle through these conditional posterior draws a few thousand times, you will build up a Markov-chain Monte Carlo (MCMC) sample from the joint posterior distribution $p(\\beta, \\omega, \\Lambda \\mid \\by)$.  Why this technique works for getting posterior draws is the subject of a different course, but hopefully it is reasonably intuitive.\n\nNow use your Gibbs sampler (with at least a few thousand draws) to fit this model to the GDP growth rate data for an appropriate choice of $h$.  Are you happier with the fit?  
What's going on here (i.e.~what makes the model more or less appropriate for the data)?\\footnote{An interesting plot will be the posterior mean of $1/\\lambda_i$ for each country.}\n\n\\end{enumerate}\n\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "207d98154d4dd20397eb4b2b3bd88f52676ceb54", "size": 16333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/exercises02-SDS383D.tex", "max_stars_repo_name": "jgscott/SDS383D_Spring2019", "max_stars_repo_head_hexsha": "33e313a5b4daabf5e818f3ee33d1ec068dba4f95", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/exercises02-SDS383D.tex", "max_issues_repo_name": "jgscott/SDS383D_Spring2019", "max_issues_repo_head_hexsha": "33e313a5b4daabf5e818f3ee33d1ec068dba4f95", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/exercises02-SDS383D.tex", "max_forks_repo_name": "jgscott/SDS383D_Spring2019", "max_forks_repo_head_hexsha": "33e313a5b4daabf5e818f3ee33d1ec068dba4f95", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-02-21T19:27:26.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-08T02:33:32.000Z", "avg_line_length": 67.2139917695, "max_line_length": 723, "alphanum_fraction": 0.728280169, "num_tokens": 4536, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115012, "lm_q2_score": 0.7577943603346811, "lm_q1q2_score": 0.5806929930018128}}
{"text": "\n\\subsection{Search Termination}\\label{subsubsec:SeachTerminationMethodology}\n\nAt each discrete time-step, the agent can either choose to move to a new grid location to record a sensor measurement or it can decide to terminate the search based on its estimated state of the environment. There is a trade-off in terminating the search early, which means that less time and resources are spent on continuing the search, versus the possibility of drawing misinformed conclusions from the search due to a lack of information. For example, if the agent receives a series of false positive readings at a given location, it could mistakenly choose to conclude that the target is present at a given location rather than sample further to gain confidence that it has correctly found the location of the target. Following this line of thinking, it is clear that a strategy needs to be devised to minimize the probability of drawing false conclusions, which is described in the performance measure set out in Section \\ref{sssection:PerfMeas}.\\par\n\\citeauthor{Pollock1971SearchInterfaces} outlines three commonly used criteria that can be used to make a decision whether to terminate the search or not: the \\textit{Bayes Criterion}, which minimizes the expected cost per decision, the \\textit{Minimax Criterion}, which chooses a decision which minimises the maximum expected cost and the \\textit{Neyman-Pearson Criterion}, which uses a likelihood ratio test to determine the optimal decision \\cite{Pollock1971SearchInterfaces}. These tests are proposed in the context of a target that is definitely present in the search region, but can be extended to incorporate the decision problem where the target may or may not be present. In addition, related work by \\citeauthor{Chung2007ASearch} has addressed this problem using methods that use heuristics to make a decision whether to terminate the search or not \\cite{Chung2007ASearch}. \n\nWe ultimately choose to implement the Sequential Probability Ratio Test (SPRT), which is a hypothesis-testing framework developed by \\citeauthor{Wald1950BayesProblems} to optimally deal with sequential decision problems, as opposed to traditional frameworks which assume that all the necessary data has been gathered prior to analysis \\cite{Wald1950BayesProblems}. The background knowledge behind the SPRT can be found in Section \\ref{subsec:SPRT}. An algorithm is also provided on how to perform this test in practice.\n%The details of the proof of optimality of the SPRT is given in \\cite{Wald1950BayesProblems} and we have outlined the details of how to perform hypothesis-testing using this framework in section <refer to the section>, along with the practical advantages and drawbacks of using it. 
We applied the SPRT algorithm to our problem to provide a search termination criterion using the following quantities:
\begin{gather}
H_0 : \text{the null hypothesis, that the target is not present in the search region}\nonumber\\
H_1 : \text{the alternative hypothesis, that the target is present in the search region}\nonumber\\
\alpha : \text{the maximum probability of making a Type } \Romannum{1} \text{ error}\nonumber\\
\beta : \text{the maximum probability of making a Type } \Romannum{2} \text{ error}\nonumber\\
p_{0t} : \text{the likelihood of observing the data $(e_1, \ldots, e_t)$ under the assumption of $H_0$}\nonumber\\
p_{0t} = \sum_{loc=1}^{n} p(TargetLoc_t = loc, AgentLoc_t, SearchStatus_t \mid e_{1:t}, u_{1:t})\nonumber\\
p_{1t} : \text{the likelihood of observing the data $(e_1, \ldots, e_t)$ under the assumption of $H_1$}\nonumber\\
p_{1t} = p(TargetLoc_t = n+1, AgentLoc_t, SearchStatus_t \mid e_{1:t}, u_{1:t})
\label{eqn:SPRTQuantities}
\end{gather}

$p_{0t}$ and $p_{1t}$ are calculated using the evidence likelihood algorithm, which is described in detail in Section \ref{subsubsec:EvLikelihood}. The SPRT algorithm was then used at each time-step to decide whether to
\begin{enumerate}
    \item terminate the search accepting $H_0$, that the target is not present in the search region;
    \item terminate the search accepting $H_1$, that the target is present in the search region, in which case the target location with the highest estimated probability is returned as the target location; or
    \item continue the search, using the Action Selection Strategy described in Section \ref{subsubsec:ActionSelection}.
\end{enumerate}
In practice, we often took logarithmic transforms of the values used in the SPRT, both to simplify some calculations and to obtain plots that are less heavily skewed and simpler to interpret.
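The following Python sketch makes the three-way decision rule concrete. It uses Wald's standard approximations $A \approx (1-\beta)/\alpha$ and $B \approx \beta/(1-\alpha)$ for the two thresholds; the function and variable names are illustrative rather than taken from our implementation.

\begin{verbatim}
import math

def sprt_decision(log_lr, alpha, beta):
    """One step of Wald's SPRT on the running log-likelihood ratio
    log_lr = log p(e_{1:t} | H1) - log p(e_{1:t} | H0).
    alpha and beta are the target Type I and Type II error rates."""
    log_A = math.log((1.0 - beta) / alpha)   # upper bound: accept H1
    log_B = math.log(beta / (1.0 - alpha))   # lower bound: accept H0
    if log_lr >= log_A:
        return "accept H1"   # conclude the target is present
    if log_lr <= log_B:
        return "accept H0"   # conclude the target is absent
    return "continue"        # take another sensor reading
\end{verbatim}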
\subsection{Analysis of Search Termination Criteria}
$\alpha$ and $\beta$, the two parameters that the user needs to specify to perform the SPRT, need to be chosen carefully and depend on the context of the search. As in the standard hypothesis-testing setting, it is important to consider the significance level and power of the test to ensure that they reflect the severity of drawing an incorrect conclusion \cite{IntroductionToMathematicalStatistics}. They can also help in analysing how well the agent could perform, since setting a high threshold means that the agent may have to take a minimum number of samples at the correct target location in order to choose $H_0$ or $H_1$. \par

Figure \ref{fig:SPRTCutoffFunctionOfAlphaAndBeta} shows how A and B vary as functions of the parameters $\alpha$ and $\beta$ on a log scale.
Tables of the values of A and B for varying Type \Romannum{1} ($\alpha$) and Type \Romannum{2} ($\beta$) error rates may be found in Appendix \ref{chap:AppendixOne}.
If the log-likelihood ratio of the data lies in between the red and blue surfaces on the graph, another sample is taken. A projection of this plot is shown in Figure \ref{fig:VaryingSPRTParametersProjected1}:
\begin{enumerate}[label=(\alph*)]
    \item shows how varying the Type \Romannum{1} error rate $\alpha$ for a fixed Type \Romannum{2} error rate $\beta = 0.07$ affects the decision boundaries defined by A and B.
    \item shows how varying the Type \Romannum{2} error rate $\beta$ for a fixed Type \Romannum{1} error rate $\alpha = 0.1$ affects the decision boundaries defined by A and B.
\end{enumerate}

Figures \ref{fig:SPRTCutoffFunctionOfAlphaAndBeta} and \ref{fig:VaryingSPRTParametersProjected1} reflect the intuition behind the decision boundaries. The lower the probability of making either a Type \Romannum{1} or Type \Romannum{2} error, the further apart the decision boundaries become, meaning the likelihood ratio must be very far from 1, which appears as 0 on the log scale. A likelihood ratio of 1 indicates maximum uncertainty. It is also possible to see that the surfaces meet when $\alpha=0.5$, $\beta=0.5$, which reflects the fact that immediately terminating the search will give a 0.5 probability of returning a false positive or false negative. \par

\note{Might be better off editing these images with explicit labels showing regions in which H0 and H1 will be accepted.}
Figure \ref{fig:VaryingSPRTParametersProjected2} shows the regions in which the SPRT will elect to take another sample (the green shaded region), to accept $H_0$ (the upper red shaded region), and to accept $H_1$ (the lower red shaded region). The blue line shows the log-likelihood ratio $\log\frac{p(e_{1:t} \mid H_0)}{p(e_{1:t} \mid H_1)}$ as a function of the agent's belief that the target is present in the search region.
\note{This is not really clear, should exaggerate this more}
Note that since the values of $\alpha$ and $\beta$ are significantly smaller in Figure \ref{fig:VaryingSPRTParametersProjected2} (b) than in Figure \ref{fig:VaryingSPRTParametersProjected2} (a), the acceptance regions for $H_0$ and $H_1$ are significantly smaller in (b) than in (a).
\par
Note that as the varying error rate increases on the x-axis, the decision boundaries come closer together, reflecting the intuition that we are more willing to accept a mistaken conclusion.

%Show how A and B vary for a fixed T1 or T2 error rate.
\begin{figure}[H]
    \centering
    \subfloat[Varying T\Romannum{1} error rate for a fixed T\Romannum{2} error rate]{{\includegraphics[width=11cm]{Chapters/MultiAgentTargetDetection/Figs/SearchTermination/AAndBVaryingWithAlpha.png} }}%
    \qquad
    \subfloat[Varying T\Romannum{2} error rate for a fixed T\Romannum{1} error rate]{{\includegraphics[width=11cm]{Chapters/MultiAgentTargetDetection/Figs/SearchTermination/AAndBVaryingWithBeta.png} }}%
    \caption{Upper and lower values of A and B for varying error rates. Values are shown on a log scale.}%
    \label{fig:VaryingSPRTParametersProjected1}%
\end{figure}


%3-Dimensional plot to show how A and B vary with alpha and beta
\begin{figure}[h]
    \centering
    \includegraphics[width = 0.78\linewidth]{Chapters/MultiAgentTargetDetection/Figs/SearchTermination/AandBAsFunctionOfAlphaAndBetaLogTransform.png}
    \caption{The upper and lower SPRT bounds for acceptance and rejection of $H_0$, as functions of the significance and power of the test. Values are shown on a log scale.}
    \label{fig:SPRTCutoffFunctionOfAlphaAndBeta}
\end{figure}


%showing how A and B create upper and lower limits for terminating the search
\begin{figure}[H]%
    \centering
    \subfloat[SPRT cut-off regions for $\alpha=0.05$, $\beta=0.08$]{{\includegraphics[width=11.5cm]{Chapters/MultiAgentTargetDetection/Figs/SearchTermination/CutoffRegionsAlphaPointZeroFiveBetaPointZeroEight.png} }}%
    \caption{} \label{fig:SPRTProjected1}
\end{figure}
\begin{figure}[H]\ContinuedFloat
    \centering
    \subfloat[SPRT cut-off regions for $\alpha=0.01$, $\beta=0.02$ \label{fig:SPRTProjected2}]{{\includegraphics[width=11.5cm]{Chapters/MultiAgentTargetDetection/Figs/SearchTermination/CutoffRegionsAlphaPointOneBetaPointZeroTwo.png} }}%
    \caption{The cut-off regions of the SPRT for (a) $\alpha=0.05$, $\beta=0.08$ and (b) $\alpha=0.01$, $\beta=0.02$ as a function of agent belief in whether the target is present or not.
Values are shown on a log scale.}%
    \label{fig:VaryingSPRTParametersProjected2}%
\end{figure}

\note{maybe move tables to appendix}
{"text": "\\subsection{Motivation}\n\n%\\input{sankalpmot}\n\\input{pushkarmot}\n\n\\subsubsection{Why try to make better Fourier Transforms?} Fourier transforms\nare the most important tools used in data processing. The relative amplitudes of\nthe signal at different frequencies give us a lot of characterising data of the\nsignal, and changing the signal in frequency space has a lot of applications,\nlike auto-tuning, removing noise etc. \\cite{wiki:fourier}\n\n\\begin{equation}\n    \\tilde{f}(\\omega) = \\int_{-\\infty}^{\\infty} e^{-i\\omega t}f(t) dt~.\n\\end{equation}\n\n\\subsubsection{Fourier is Overkill.}\nLooking at the structure of the Fourier transform, it is easy to see that it is\nclosely resembles the idea of a correlation, essentially extracting from the\nsignals its \\emph{similarity} to a given frequency. These coefficients, gathered\nusing all frequencies, can produce a one to one mapping to a function space in\nthe frequency realm. However, the entire function is far too much information.\nFor most tasks involving categorization and identification of sounds,\nfingerprinting is more than good enough. That is, considering the delta response\nof the system, its restriction to a tiny subset of points in frequency space.\n\nAs such, simply extracting the info for these frequencies alone is sufficient,\nand can be done whilst incorporating several statistical approximations. This is\neasily done with a correlation of the signal with a pure mode. \\cite{wiki:coranddep}\n\nHowever, when frequencies are aggregated, the phase difference is absorbed into\nthe coefficients of multiple frequencies, but in the correlation scenario, no\nsuch hope exists. This is where cross-correlation comes in. \\cite{wiki:crosscor}\n\n\\begin{equation}\n    (f \\ast g)(t) \\triangleq \\int_{-\\infty}^{+\\infty} \\overline{f(\\tau)} \\cdot g(t+\\tau) d\\tau\n\\end{equation}\n\nBy considering the similarities of shifted signals, we can infer more precise\ninformation about their similarity in space, by completely disregarding the\ntemporal dimension. 
This is immensely useful in matching an externally sourced
wave of unknown phase with a synthetic mode of known or unknown phase in order to
identify the wave.

In particular, the global maximum of the cross-correlation may be taken as the
phase-independent correlation of two signals.

\begin{equation}
    corr(f, g) \triangleq \max_t \, (f \ast g)(t)
\end{equation}
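A minimal NumPy sketch of this peak-picking follows, with an arbitrary test signal; the normalisation shown is one common choice, not the only one, and all names are illustrative.

\begin{verbatim}
import numpy as np

def phase_independent_corr(f, g):
    """Peak of the normalised cross-correlation of two real signals,
    used as a phase-independent similarity score."""
    f = (f - f.mean()) / (np.linalg.norm(f) + 1e-12)
    g = (g - g.mean()) / (np.linalg.norm(g) + 1e-12)
    return np.max(np.correlate(f, g, mode="full"))

# Example: a sine compared with a phase-shifted copy of itself.
t = np.linspace(0, 1, 1000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)
b = np.sin(2 * np.pi * 5 * t + 1.3)     # unknown phase offset
print(phase_independent_corr(a, b))     # close to 1 despite the shift
\end{verbatim}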
{"text": "% file: problems/convex-diameter.tex\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The Convex Polygon Diameter Problem}\t\\label{section:convex-diameter}\n\n%%%%%%%%%%%%%%%%%%%%\n\\subsection{Solution}\n\n%%%%%%%%%%%%%%%\n\\begin{theorem}\n  For a convex polygon, a pair of vertices determine the diameter.\n\\end{theorem}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{proof}\n\\end{proof}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{definition}[Line of Support]\n\\end{definition}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{definition}[Antipodal]\n\\end{definition}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{fact}\n  Not all vertex pairs are antipodal.\n\\end{fact}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{proof}\n\\end{proof}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{theorem}[Yaglom]\n  The diameter of a convex polygon is the greatest distance between parallel lines of support.\n\\end{theorem}\n%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%\n\\begin{proof}\n\\end{proof}\n%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "66339cf5d662014d1b63e906a503e7ce9c59d865", "size": 976, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2017/2017-2nd/2-2-efficiency/problems/convex-diameter.tex", "max_stars_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_stars_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2018-03-16T04:33:03.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-11T14:50:38.000Z", "max_issues_repo_path": "2017/2017-2nd/2-2-efficiency/problems/convex-diameter.tex", "max_issues_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_issues_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2018-03-19T10:36:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-03T04:58:39.000Z", "max_forks_repo_path": "2017/2017-2nd/2-2-efficiency/problems/convex-diameter.tex", "max_forks_repo_name": "courses-at-nju-by-junma/problem-solving-class-problems", "max_forks_repo_head_hexsha": "79de740506000972b2bec91cc6042fa639cd2e55", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2018-03-16T04:26:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-11T11:42:48.000Z", "avg_line_length": 18.7692307692, "max_line_length": 94, "alphanum_fraction": 0.4784836066, "num_tokens": 244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7185943985973773, "lm_q1q2_score": 0.5806725733492063}}
{"text": "% !TeX spellcheck = en_GB\n% !TeX root = memoco-report.tex\n\n\\section{Exact method}\n\\label{chap:cplexm}\nThe exact algorithm makes use of the IBM CPLEX C APIs to solve a problem to optimality. In order to model the TSP problem a network flow model is used, as described in the assignment text. \\\\\nThe specific linear programming model used and its decision variables are presented below. The set $A$ is the set of edges of the graph, which in this case is a complete graph.\n\\begin{itemize}\n\t\\item $x_{ij}$ is the amount of 'flow' passed from $i$ to $j,~\\forall~(i,j)\\in A$\n\t\\item $y_{ij} = 1$ if the edge $(i,j)$ ships some flow, $0$ otherwise $\\forall~(i,j)\\in A$.\n\\end{itemize}\n\\begin{align}\n\t&\\min \\sum\\limits_{i,j:(i,j)\\in A} c_{ij}y_{ij}\\\\\n\t&~s.t.~\\sum_{i:(i,k)\\in A}x_{ik} - \\sum_{j:(k,j)\\in A, j\\ne 0}x_{kj}~~~=~1~~~~~~~~~~~~~~~~~~~~~~\\forall~k \\in N \\setminus \\{0\\}\\label{eq:flow}\\\\\n\t&~~~~~~\\sum_{j:(i,j)\\in A} y_{ij}~~~~~~~~~~~~~~~~~~~~~~~~~~=~1~~~~~~~~~~~~~~~~~~~~~~\\forall~i \\in N \\label{eq:sumi}\\\\\n\t&~~~~~~\\sum_{i:(i,j)\\in A} y_{ij}~~~~~~~~~~~~~~~~~~~~~~~~~~=~1~~~~~~~~~~~~~~~~~~~~~~\\forall~j \\in N \\label{eq:sumj}\\\\ \n\t&~~~~~~~x_{ij}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\le~(|N|-1)~y_{ij}~~~~~~~\\forall~(i,j) \\in A,j\\ne 0 \\label{eq:constrxy}\\\\\n\t&~~~~~~~x_{ij} \\in \\mathbb{R}_+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\forall~(i,j) \\in A,j\\ne 0 \\label{eq:constrx}\\\\\n\t&~~~~~~~y_{ij} \\in \\{0,1\\}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\forall~(i,j) \\in A\n\\end{align}\n\nActually, when adding the variables $x$ and $y$ to the model, variables $x_{ij}~\\forall~(i,j) \\in A,j\\ne 0$, instead of being unbounded like in constraint \\ref{eq:constrx}, they are forced to be in range $[0,N-1]$. The model is still valid, since constraint \\ref{eq:constrxy} also bounds the variables in this range.\n\n\\subsection{Creation of the model}\nThe linear programming model is set up in the function \\texttt{setupLP} in file \\texttt{CPLEX.cpp}.\\\\\nDuring the creation of the model, its decision variables $x_{ij}$ and $y_{ij}$ are first added one by one (for each possible value of $i$ and $j$) through the appropriate CPLEX functions to a C array. Variables with $i = j$ are not added, since no loops on the same point is allowed. Considering that this variables have to be referenced later during the definition of the constraints, their position in the CPLEX array is recorded in two bidimensional vectors, one for the $x$, called \\texttt{xMap}, and one for $y$ variables, named \\texttt{yMap}. In this way variable $x_{ij}$ can be later referenced easily with \\texttt{xMap[i][j]} for all $i$ and $j$. After defining the variables of the problem, the constraints \\ref{eq:flow} - \\ref{eq:constrxy} are inserted. Constraint \\ref{eq:constrxy} was reformulated as $x_{ij} - (|N|-1)y_{ij} \\le 0$ because the CPLEX APIs require all the variables in the left-hand side and only constants in the right side of the equation.\\\\\nThe model can be solved calling the function \\texttt{solve}, which calls the IBM CPLEX solving routines for a minimization problem. 
\subsection{Creation of the model}
The linear programming model is set up in the function \texttt{setupLP} in file \texttt{CPLEX.cpp}.\\
During the creation of the model, its decision variables $x_{ij}$ and $y_{ij}$ are first added one by one (for each possible value of $i$ and $j$) through the appropriate CPLEX functions to a C array. Variables with $i = j$ are not added, since no loop on the same point is allowed. Since these variables have to be referenced later during the definition of the constraints, their positions in the CPLEX array are recorded in two bidimensional vectors, one for the $x$ variables, called \texttt{xMap}, and one for the $y$ variables, named \texttt{yMap}. In this way variable $x_{ij}$ can later be referenced easily as \texttt{xMap[i][j]} for all $i$ and $j$. After defining the variables of the problem, the constraints \ref{eq:flow}--\ref{eq:constrxy} are inserted. Constraint \ref{eq:constrxy} was reformulated as $x_{ij} - (|N|-1)y_{ij} \le 0$ because the CPLEX APIs require all the variables on the left-hand side and only constants on the right-hand side of the equation.\\
The model can be solved by calling the function \texttt{solve}, which calls the IBM CPLEX solving routines for a minimization problem. The solution can then be retrieved using the method \texttt{getSolution}, which returns a \texttt{TSPsolution} object that wraps useful information about the final solution and is used for both the exact method and the heuristic one.

\section{Heuristic method}
\label{chap:heuris}

The implemented heuristic is inspired by a popular local search method called the Lin-Kernighan heuristic, originally proposed in 1973 in \cite{LinK73}. The algorithm can be considered a generalization of the k-opt algorithm: one of the drawbacks of that algorithm is that the parameter $k$ must be fixed in advance. Instead, the Lin-Kernighan algorithm decides at each iteration, for ascending values of $k$, whether an interchange of $k$ edges provides a better solution. Thus, the algorithm is specified in terms of exchanges that can convert one tour into another: given a feasible interchange of $k$ edges (a \textit{k-opt move}), the algorithm tries to determine if there exists a \textit{(k+1)-opt} move that improves the tour further.
During each iteration, given a feasible tour, the algorithm repeatedly performs exchanges that reduce the length of the current tour, until a tour is reached for which no exchange yields an improvement. This process may be repeated many times from initial tours generated in some possibly randomized way \cite{Helsgaun2000}.\\
So at each iteration the algorithm tries to determine the largest sets $X=\{x_1, x_2, \ldots, x_j\}$ and $Y=\{y_1, y_2, \ldots, y_j\}$ such that if the edges in $X$ (also called the \emph{broken} edges) are removed and replaced by those in $Y$ (the \emph{joined} edges), the produced tour is a feasible improved (i.e.\ less costly) solution. Of course, a naive brute-force algorithm searching for these sets would quickly become impractical, due to its exponential running time. In order to produce a reasonably efficient local search procedure, the algorithm reduces the search space using the following criteria:

\begin{enumerate}
	\item \emph{Sequential exchange criterion}: each pair of edges $(x_i, y_i)$ and $(y_i, x_{i+1})$ must share one vertex. If $t_1$ and $t_2$ are the vertices of edge $x_1$, then in general for all $i \ge 1$ exchanges are performed this way: $x_i=(t_{2i-1}, t_{2i})$, $y_i=(t_{2i}, t_{2i+1})$ and $x_{i+1}=(t_{2i+1}, t_{2i+2})$.\\ All \emph{sequential} k-opt moves can be found by concatenating smaller sequential moves, but \emph{non-sequential} moves are never considered by this algorithm. An example of such a move is given in \cref{fig:doublebridge}.
	\item \emph{Feasibility criterion}: for every $i \ge 3$, $x_i=(t_{2i-1}, t_{2i})$ is chosen so that if $t_{2i}$ is connected to $t_1$ the resulting configuration is a tour (i.e.\ a feasible solution). It can be seen that at most one choice for $x_i$ satisfies this constraint, as shown in \cref{fig:feasibility-moves}. Exceptionally, when $i=2$, the originally proposed algorithm allowed the choice of an $x_i$ which violates this rule. According to the original paper this was done to strengthen the procedure by giving the algorithm a way to recover from previous wrong choices, but allowing this kind of backtracking at every level would significantly increase the running time;
	\item \emph{Positive gain criterion}: let $g_i=cost(x_i) - cost(y_i)$ be the gain of exchanging two edges, and let $G_i=g_1+g_2+\ldots+g_i$ be the partial sum of the gains up to the $i^{th}$ exchange. This criterion requires that each $y_j$ is chosen in a way such that $G_j$ is positive.
This choice is justified by the fact that if a sequence of numbers has a positive sum, there is a cyclic permutation of these numbers such that every partial sum is positive \cite{Helsgaun2000};
	\item \emph{Disjunctivity criterion}: the sets $X$ and $Y$ must be disjoint.
\end{enumerate}
In the implemented algorithm, the move-size threshold which allows for the exceptional violation of the \emph{feasibility criterion} (set to $2$ in the paper) is not fixed but can be modified in the configuration file \texttt{config.yml}. In this way the influence of this parameter on the solution quality could be tested and optimized.

\begin{figure}[h]
	\centering
	\includegraphics[width=6cm]{double-bridge}
	\caption{A non-sequential \emph{double bridge} move with $k=4$}
	\label{fig:doublebridge}
\end{figure}

\begin{figure}[h]
	\centering
	\begin{subfigure}[c]{0.31\textwidth}
		\includegraphics[width=\textwidth]{path-choices}
		\caption{The two possible edges to break after choosing $y_1$: $a$, $b$}
	\end{subfigure}
	\begin{subfigure}[c]{0.31\textwidth}
		\includegraphics[width=\textwidth]{path-wrong}
		\caption{Breaking edge $b$ violates the feasibility criterion}
	\end{subfigure}
	\begin{subfigure}[c]{0.31\textwidth}
		\includegraphics[width=\textwidth]{path-right}
		\caption{Breaking edge $a$ allows the tour to be closed correctly}
	\end{subfigure}
	\caption[Candidate edges satisfying and violating the feasibility criterion]{Only one candidate for $x_i$ satisfies the feasibility criterion, which is enforced for $i$ greater than a fixed threshold value. In this case breaking edge $a$ is the right choice, since removing edge $b$ and relinking with edge $r=(t_4,t_1)$ would not produce a feasible tour.}
	\label{fig:feasibility-moves}
\end{figure}

\subsection{Stopping criterion}
The algorithm uses the same stopping criterion suggested in the original paper. In particular, when the algorithm finds a feasible tour with a move of size $k$, it checks whether the total gain is the best seen so far. If it is, the algorithm saves the current solution and tries to improve it further with a ($k$+1)-opt move; if this does not improve the solution, the algorithm returns the solution it saved. On the other hand, if the size-$k$ move yields a feasible tour which is not the best seen so far, the algorithm stops, since it knows there is a better move, of size smaller than $k$, that produces a better tour.\\
By accepting only improving solutions, the algorithm is guaranteed to eventually stop, since it cannot loop over the same local optima.

\subsection{Tour structure}
A tour is maintained as a vector $v$ of pairs, where $v_i$ gives the two vertices which precede and follow $i$ in the current tour. To this purpose every vertex (point) of the instance is numbered from $0$ to $N-1$ and its number is used as an index into the tour structure. Creating this structure takes time $O(N)$ from a list of vertices, where $N$ is the problem size. It allows constant-time checks and retrieval of the two candidates for removal, as sketched below. Moreover, the distance between each pair of vertices (the cost of each edge) is kept in an $N\times N$ matrix.
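A minimal Python sketch of this structure (illustrative only; the project code is C++):

\begin{verbatim}
def build_tour(order):
    """Tour as a list of (predecessor, successor) pairs indexed by
    vertex, built from the visiting order in O(N)."""
    n = len(order)
    tour = [None] * n
    for k, v in enumerate(order):
        tour[v] = (order[(k - 1) % n], order[(k + 1) % n])
    return tour

tour = build_tour([0, 3, 1, 4, 2])
pred, succ = tour[1]    # the two edges incident to vertex 1, i.e.
print(pred, succ)       # the two candidates for removal: 3 and 4
\end{verbatim}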
\subsection{Neighbourhood function}
\label{ssec:neighbourhood}
In a local search algorithm the neighbourhood function describes which solutions are explored at every step. In this case every neighbour is generated with a $k$-opt move from the initial solution and the algorithm tries to increase $k$ at each step.\\
In the described algorithm, for a given $i$, a step consists in breaking an edge $x_i$ of the tour and replacing it with an edge $y_i$ chosen with the criteria described in \cref{chap:heuris}. Every step generates a new neighbour.
If we fix the vertex $t_{2i-1}$ belonging to the joined edge $y_{i-1}$, there are exactly two edges that can be broken. That is because the \emph{sequential exchange criterion} requires that $x_i$ must share an endpoint with $y_{i-1}$, and this endpoint is exactly $t_{2i-1}$. Additionally, by the \emph{feasibility criterion}, there is only one choice of $x_i$ that will produce a feasible tour, while breaking the other will inevitably split the graph into two connected components (as seen in \cref{fig:feasibility-moves}). \\
Starting from vertex $t_{2i-1}$ the algorithm analyses the two possibilities, giving precedence to the edge with higher cost, since it is the best one to remove. Only if the feasibility criterion is violated will the procedure try to remove the other edge.\\
In order to choose an edge $y_i$, the algorithm finds all possible edges starting in $t_{2i}$ which are
\begin{itemize}
	\setlength\itemsep{0.05em}
	\item not part of the current solution;
	\item not already broken (i.e.\ $\notin X$);
	\item not already joined (i.e.\ $\notin Y$).
\end{itemize}
A simple approach would be to choose as $y_i$ the candidate edge with the lowest cost, since we want to maximize the gain. The paper suggests a less greedy approach that also looks at the successive edge $x_{i+1}$. This has the additional benefit of avoiding useless search, as would happen by joining a promising candidate only to find out, when selecting the next edge to break, that no such operation is possible (for example because the only alternative has already been broken). \\
Thus for each candidate edge to join, the possible choices of $x_{i+1}$ are analysed, and the potential gain of adding $y_i$ and removing $x_{i+1}$ is computed. Since the algorithm does not know which choice of $x_{i+1}$ gives the feasible tour, if neither choice can be discarded for other reasons (e.g.\ already broken, intensification constraints), both choices are considered and the potential gain is the average of the two gains. So the potential gain is the expected gain that can be achieved by joining $y_i$ and breaking $x_{i+1}$. Ideally, the neighbourhood function should predict which candidate $x_{i+1}$ is the actual feasible choice that the algorithm will eventually make. Unfortunately, checking this for every possible candidate is quite costly for instances of considerable size, since checking whether a sequence of edges represents a feasible tour takes time $O(N)$ on average, and the number of candidates grows linearly with $N$ too.\\
Instead of using the average potential gain to rank the candidates for joining, two alternative approaches, using the worst gain and the best possible gain, have been tested.
Calibration tests on an instance of 90 points determined that the average-gain approach produced slightly better results than the others.

\subsection{Enhancements}
This basic algorithm scheme can be enhanced in many ways. The next sections describe some strategies implemented in this project.

\subsubsection{Search intensification}
\label{sssec:intensification}
Between iterations the algorithm may find some `good' edges which belong to the optimal solution or to some reasonably good one. Subsequently some time may be lost by repeatedly breaking and adding them, while not exploring more promising neighbours. \\
In addition, these `good' edges are likely to be present in many of the best solutions (local optima) found between iterations. In order to direct the search towards more promising solutions, the algorithm keeps track of the edges shared by the best $S$ solutions by computing their intersection.\\
On the other hand, if the top-$S$ solutions do not include the global optimum, this approach may prevent the algorithm from breaking some edges which do not belong to the optimum, thus preventing the algorithm from finding it.\\ To mitigate this problem, the intensification procedure is applied only after a specified number of improving moves (i.e.\ after a fixed $i$). This means that the algorithm has a way to escape this additional constraint for a small number of moves, allowing it to recover from previous sub-optimal choices.\\
Additionally, after a local optimum is found, its objective value is tested to see if it is better than one of the top-$S$ solutions; in this case the worst solution is replaced with the new one and the intersection is updated.\\
A max-heap data structure is used in order to efficiently replace the worst (i.e.\ highest objective value) local optimum, and the collection of edges of each tour is maintained in a \texttt{std::set} ordered container, which allows the intersection procedure to take time $O(N\cdot S)$, where $N$ is the number of edges in a feasible tour.\\
Instead of considering just the top-$S$ solutions in the intersection, the original paper proposed keeping the intersection of all tours from the beginning.
During some early tests this seemed to worsen the results with respect to runs without intensification, so this change was made, and a small reduction in running time was registered, since the algorithm excludes some choices from its neighbourhood.
In choosing the parameter $S$ it is important to keep in mind that intensification is done only after $S$ local optima are found, so one should ensure that this number is suitable for the problem dimension, since a very large number with a small problem would probably be ineffective. A sketch of this bookkeeping is given below.
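The following is a minimal Python sketch of the top-$S$ bookkeeping (illustrative only; the project code uses a C++ max-heap and \texttt{std::set}):

\begin{verbatim}
import heapq
from itertools import count

class TopSolutions:
    """Keep the S best tours seen (by cost) and the edges they share.
    A max-heap on cost makes replacing the current worst tour cheap;
    edges are stored as sorted pairs so orientation is ignored."""
    def __init__(self, s):
        self.s = s
        self._tie = count()   # tie-breaker so the heap never compares sets
        self.heap = []        # entries: (-cost, tie, edge_set)

    def offer(self, cost, edges):
        entry = (-cost, next(self._tie),
                 {tuple(sorted(e)) for e in edges})
        if len(self.heap) < self.s:
            heapq.heappush(self.heap, entry)
        elif cost < -self.heap[0][0]:      # better than the current worst
            heapq.heapreplace(self.heap, entry)

    def shared_edges(self):
        """Intersection of the stored tours' edge sets: 'good' edges."""
        if not self.heap:
            return set()
        common = set(self.heap[0][2])
        for _, _, es in self.heap[1:]:
            common &= es
        return common
\end{verbatim}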
\subsubsection{Avoiding duplicate solutions}
\label{sssec:checkout}
The authors of the algorithm stated that 30\% to 50\% of running time could be saved by keeping track of previous local optima and stopping the algorithm if the current local optimum has been seen before. This is justified by the fact that if the algorithm could not improve this solution before, it is unlikely that it can do so now. While this sounds reasonable, in the proposed implementation no particular speed-up was registered. In part, this may be due to different computing capabilities, considering that the algorithm was devised in 1973.\\
Since trying to improve a previously found local optimum will likely fail, the proposed algorithm adopts a slightly different strategy in this eventuality: instead of stopping, it restarts from a different starting edge and proceeds normally until it finds a different solution, or it determines that no better tour exists. While this does not guarantee an improvement in running time, it makes the algorithm redirect the search in a different direction, which is probably more useful than exploring the neighbourhood of a previous solution.\\
To do this, an \texttt{std::set} data structure is used, which maintains a sorted collection (ordered by tour cost) of all the feasible tours found at each iteration. This container guarantees logarithmic insertion and search operations.\\
As a possible improvement to this strategy, one could keep track of the edges modified while converging to the previously encountered solution, and make sure that at least some of them are not added again, since re-adding them can potentially lead to the same solution again, or to a very similar one.

\clearpage
\subsection{Random restart}
The described heuristic is run from a randomized initial solution, like the one in \cref{fig:randomtour}. This simple strategy allows the algorithm to escape local optima and diversify the solutions it provides. Using an initial solution generated with a fast constructive heuristic has been tried, and faster convergence was registered, even though the produced solution was often not better than the one generated from a random initial solution.
Despite this, the starting tour has a significant impact on the heuristic's performance, since it determines the edges that will be tried for the first move. Indeed the algorithm often does not explore more than the first two or three candidates to be joined. This explains why the initial tour greatly contributes to the final result.\\
By restarting the heuristic many times on the same instances of the synthetic dataset of size $> 90$, on average the heuristic will produce tours within 0.5\% of the optimal one, but the best solution found during a reasonable number of restarts is usually much closer to the optimum. This strategy has been implemented in the file \texttt{utilities.cpp}, where the procedure \texttt{runILK} is defined for this purpose. Setting the desired number of restarts in the \texttt{config.yml} configuration file (parameter \texttt{LK\_iterations}) will restart the algorithm the specified number of times, and the final result will be the best solution found.

\begin{figure}[h]
	\centering
	\includegraphics[width=12cm]{path_50_random}
	\caption{Random initial path on a problem with 50 points}
	\label{fig:randomtour}
\end{figure}

\clearpage
\subsection{Parameters}
\label{ssec:hyperpar}
Table \ref{tab:hyperparameters} includes a description of each parameter to be set in file \texttt{config.yml} before starting the heuristic. Calibration of these parameters is discussed in the next section.

\LTXtable{\linewidth}{parameters}

This heuristic may require some tuning to be usable with bigger problems, since some parameters can significantly increase the running time. To determine the next move, the heuristic considers at most \texttt{max\_neighbours} candidates for joining, and it may make up to \texttt{K} moves per iteration, so the worst-case complexity of a single search for the best possible move is $O(max\_neighbours^K)$. This process may be repeated for up to \texttt{I} iterations. Using random restarts, this entire procedure is repeated exactly \texttt{LK\_iterations} times. High \texttt{back\-track\-ing\_threshold} values allow the algorithm to violate the feasibility criterion and try more moves, but will also require more time. Low values, like 3 or 4, worked well during testing, even though 2 is more likely the best parameter for instances of size greater than 300.\\
If the algorithm spends too much time searching for a move (i.e.\ searching for an exchange), the \texttt{K} parameter should be lowered. On the other hand, if the algorithm tends to do too many iterations, this number may be limited by lowering \texttt{I}. Reducing \texttt{K} and increasing \texttt{I} allows the algorithm to make a larger number of smaller improvements; conversely, increasing \texttt{K} and reducing \texttt{I} allows for a smaller number of larger improvements.

\subsubsection{Calibration}
Some parameters in \cref{tab:hyperparameters} may affect the running time and the solution quality. In order to determine how much these parameters influence these metrics and to select the best ones for the final tests, some trials were done on two different instances, one of size 90 and one of size 198, taken from TSPLIB\footnote{\url{http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsplib.html}}. Given some sets of parameters to try, every possible combination was tested by restarting the heuristic 10 times on each combination. The results are reported in tables and discussed in the next paragraphs. Listed information includes the average percentage error, whether the optimal solution was found, the average number of iterations and the total time required for the 10 iterations. The final score takes into account the average percentage relative error (w.r.t.\ the optimal solution computed using the exact algorithm), called $ARE$, and the total execution time for the 10 iterations, called $TT$. The score is given by \cref{eq:score}.
\begin{equation}
	\text{Score} = \cfrac{1}{1 + (ARE \cdot TT)}
	\label{eq:score}
\end{equation}
These results were obtained by using all the features described before, and by compiling with the \texttt{-O3} optimization flag.
Both tests were done with \texttt{K = 100}.\\
The calibration procedure is implemented in file \texttt{calibrate.cpp}, and more tests can be done by editing the file and running the script with the commands~\texttt{make calibrate \&\& ./bin/calibrate}.

\paragraph{Synthetic instance of size 90}\mbox{}\\
Table \ref{tab:calibration} reports the results of the calibration procedure on an instance with 90 points.
Other tests, not reported here, were run using the other two ranking strategies for sorting the candidates to be joined, discussed in \cref{ssec:neighbourhood}. The strategies using the best and the average gain performed similarly, while the strategy sorting by worst gain performed poorly. Three calibrations were run for all three strategies, and the average-gain strategy was selected because it reached the optimal solution more frequently than the one using the best gain.\\
As can be seen in \cref{tab:calibration}, considering more neighbours during the selection of the edge to join (5 instead of 2) results in a lower average error and a higher running time. Additionally, with the value set to 5 the optimal value was found in all cases but one.\\
Additional tests were done with \texttt{backtracking\_depth} set to 2 and 3. Lowering this parameter reduces the running time but also the frequency of finding the optimum, so only the runs with depth 4 or 5 are listed in the table.\\

\LTXtable{\linewidth}{calibration}

\paragraph{TSPLIB d198}\mbox{}\\
In order to select good parameters for a non-trivial instance, an additional calibration was done on a drilling problem with 198 points from TSPLIB, called \texttt{d198}.\\
Since the instance is bigger, tests were done with higher values for \texttt{max\_neighbours}, with \texttt{back\-track\-ing\_depth} up to 4 (5 is too much for big instances), and with the number of tours used for intensification limited to 20 or 50. Table \ref{tab:calibration198} shows the results. This time the score is much lower because of the higher running time, so it has been multiplied by 100 to make it more readable.

\clearpage
\LTXtable{\linewidth}{calibration198}

No execution arrived at the optimal solution, but the original paper suggests that with more restarts better solutions can be found. In three runs with \texttt{max\_neighbours = 5} and \texttt{back\-track\-ing\_depth = 3}, the best error over the 10 iterations was below 1.0, in particular 0.374, 0.773 and 0.425 with \texttt{intensification\_depth} set to 5, 8 and 10 respectively. In the first case the average error was 4.911, which is the best one reported in the table (in bold).
In this case the final score strongly penalises the executions with higher backtracking depth because of the longer execution time, but the average error tends to decrease with \texttt{max\_neighbours = 5} and \texttt{back\-track\-ing\_depth = 3}.
{"text": "\\section*{Exercise 26.2-4}\r\nIn the example of Figure 26.6, the maximum flow shown has the value 23.\r\n\\\\\r\nThe minimum cut corresponding to this is\r\n$$\r\n(\\left\\{s,v_1,v_2,v_4\\right\\}, \\left\\{v_3,t\\right\\}),\r\n$$\r\nsince the capacity is\r\n$$\r\nc(v_1,v_3) + c(v_4,v_3) + c(v_4,t) = 12+7+4 = 23.\r\n$$\r\n\\\\\r\n\\\\\r\nThe augmenting path in (c) cancels flow:\r\n\\\\\r\n4 units on $(v_1, v_2)$ and $(v_2,v_3)$.", "meta": {"hexsha": "71ed4a1731da986a4cfd38bc7df1d345a8cde639", "size": 386, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Uge1/26.2-4.tex", "max_stars_repo_name": "pdebesc/AADS", "max_stars_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Uge1/26.2-4.tex", "max_issues_repo_name": "pdebesc/AADS", "max_issues_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Uge1/26.2-4.tex", "max_forks_repo_name": "pdebesc/AADS", "max_forks_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.125, "max_line_length": 72, "alphanum_fraction": 0.6139896373, "num_tokens": 161, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5806725618364281}}
{"text": "\\section{Strings}\n\\lst{Trie/Prefix Tree}{$\\mathcal{O}(n)$ for set and get}{strings/trie.cc}\n\n\\begin{code}{Prefix Function}{$\\mathcal{O}(n)$}{strings/prefixFunction.cc}\n  For a string $s$ return an array in which the $i$-th entry is the\n  length of the longest proper prefix of $s[0,\\ldots, i]$ which is also\n  a suffix.  Note that the $0$-th entry is $0$\n\\end{code}\n\n\\begin{code}{KMP}{$\\mathcal{O}(n + m)$}{strings/kmp.cc}\n  Returns a list with all the starting indeces where the pattern matches\n  the text.\n\\end{code}\n\n\\begin{code}{Manachers}{$\\mathcal{O}(n)$}{strings/manachers.cc}\n% source: https://www.hackerrank.com/topics/manachers-algorithm\nReturns array where $P[i]$ contains the length of the palindrome with\nmid-point $i$. There are extra entries for where the mid-point is\ninbetween letters. Example for $abbaac$:\n\\begin{verbatim}\n| a | b | b | a | a | a | c |\n0 1 0 1 4 1 0 1 2 3 2 1 0 1 0\n\\end{verbatim}\n\\end{code}\n\n\\begin{code}{Aho-Corasick Automaton}{build: $\\mathcal{O}(\\sum |t_j|)$, query: $\\mathcal{O}(|S|)$}{strings/AHC.cc}\n  % See for detailed explaination:\n  % http://web.stanford.edu/class/archive/cs/cs166/cs166.1166/lectures/02/Small02.pdf\n  % TODO(Alex): Test query() call extensively. Build + insert are okay for sure.\n  Data structure to search how often $n$ fixed strings $t_1, \\dots, t_n$\n  are contained in a variable string $S$. Strings $t_j$ may each have\n  values.\n\\end{code}", "meta": {"hexsha": "bce8dd03fdab7b1f11394a7c58a7509205d4a55b", "size": 1408, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/strings.tex", "max_stars_repo_name": "Zeldacrafter/CompProg", "max_stars_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-02-06T15:44:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-21T03:51:21.000Z", "max_issues_repo_path": "document/strings.tex", "max_issues_repo_name": "Zeldacrafter/CompProg", "max_issues_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "document/strings.tex", "max_forks_repo_name": "Zeldacrafter/CompProg", "max_forks_repo_head_hexsha": "5367583f45b6fe30c4c84f3ae81accf14f8f7fd3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.6666666667, "max_line_length": 113, "alphanum_fraction": 0.7002840909, "num_tokens": 464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324893520001, "lm_q2_score": 0.7122321964553657, "lm_q1q2_score": 0.5806348265129506}}
{"text": "\n\\subsection{Training decision trees with Gini impurity}\n\n\n", "meta": {"hexsha": "d0f32745d38e012d283291c4ead309e18fc87be6", "size": 59, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/trees/01-03-treeGini.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/trees/01-03-treeGini.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/trees/01-03-treeGini.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 11.8, "max_line_length": 55, "alphanum_fraction": 0.7966101695, "num_tokens": 13, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152324983301568, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.5806348229482612}}
{"text": "\\documentclass[12pt]{rudin}\n\\usepackage[]{amsmath}\n\\usepackage[]{exercise}\n\\usepackage[]{enumitem}\n\\usepackage[colorlinks=true, linkcolor=blue, urlcolor=blue]{hyperref}\n\n\\renewcommand{\\ExerciseHeader}{\\noindent\\textbf{\\ExerciseName\\ \\ExerciseHeaderNB}\\ \\ExerciseHeaderTitle\\ \\ExerciseHeaderOrigin\\smallskip}\n\\renewcommand{\\ExerciseHeaderOrigin}{(\\ExerciseOrigin)}\n\\renewcommand{\\AnswerHeader}{\\bigskip\\noindent\\textbf{Solution \\ExerciseHeaderNB}\\newline}\n\\let\\set\\mathbf\n\n\\title{Number Theory Homework I}\n\\author{RDB}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\nThis homework is meant to be a warmup for things covered in 300 along with a\npeek at some number theory. Direct proofs, inequalities, induction, the\ncontrapositive, and perhaps most importantly, \\emph{style}.\n\nHomework exists to give you practice. Try to solve the questions on your own\nbefore anything else; you will get the most out of it this way. (Plus, you\nwon't have any outside resources on quizzes and exams!) Once you've given it a\ngood try, feel free to collaborate with your classmates or ask me about it.\n\n\\paragraph{EXPECTATIONS} Write your proofs with \\emph{good style}. What is good\nstyle? Concise yet enlightening. Simple yet pretty. Practical yet fun. \nBasically, go read the first chapter of\n\\href{https://jmlr.csail.mit.edu/reviewing-papers/knuth_mathematical_writing.pdf}{Knuth's masterpiece}\nand buy yourself a copy of Strunk and White's \\emph{The Elements of Style}.\nYour efforts will be repayed tenfold.\n\nFor example, suppose you were proving ``the sum of even integers is even.''\n\\begin{proof}[Terrible proof]\n\\begin{align*}\n    &\\exists n, k \\ [2n + 2k] \\\\\n    &\\implies 2n + 2k = 2(n + k) \\\\\n    &\\exists m \\ [n + k = m] \\\\\n    &\\therefore 2n + 2k = 2m\n\\end{align*}\n\\end{proof}\n\\begin{proof}[Excellent proof]\n    If $x$ and $y$ are even integers, then there exist integers $n$ and $k$\n    such that\n    \\begin{equation*}\n        x + y = 2n + 2k = 2(n + k).\n    \\end{equation*}\n    Since $n + k$ is an integer, $x + y$ is even.\n\\end{proof}\nBoth proofs are (essentially) logically correct, yet one makes me dizzy. Bring\nyour readers the joy of discovery, not the pain of parsing notation.\n\n\\section*{Exercises}%\n\\label{sec:exercises}\n\nIn general, assume that variables like $n$, $m$, and $k$ are integers.\n\n\\begin{Exercise}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n        \\item Prove that $n^2$ is even if and only if $n$ is even.\n\n        \\item If $\\sqrt{2} = a / b$ for coprime integers $a$ and $b$, then $2\n            b^2 = a^2$. Use the previous part to derive a contradiction about\n            $a$ and $b$, and conclude that $\\sqrt{2}$ is irrational.\n    \\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n\t    \\item (3 points) If $n = 2k$, then $n^2 = 4k^2 = 2(2k^2)$. On the other hand,\n\t\t    if $n = 2k + 1$, then $n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) +\n\t\t    1$.\n\n\t    \\item (3 points) Since $2b^2 = a^2$, we see that $a^2$ is even. The previous part\n\t\timplies that $a$ is even, say $a = 2k$. Then $2b^2 = 4k^2$, and\n\t\t    dividing by $2$ yields $b^2 = 2k^2$. 
But then $b$ is even,\n\t\t    and that contradicts that $a$ and $b$ are coprime.\n    \\end{enumerate}\n\\end{Answer}\n\n\\begin{Exercise}\n    Three natural numbers $x < y < z$ are a \\emph{Pythagorean triple} provided that\n    \\begin{equation*}\n        x^2 + y^2 = z^2.\n    \\end{equation*}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n\t    \\item Determine \\emph{all} Pythagorean triples where $x$, $y$, and $z$\n            are consecutive natural numbers.\n\n        \\item Given a natural number $a$, let's say that $n$ is an\n            \\emph{$a$-Pythagorean integer} if\n            \\begin{equation*}\n                n^2 + (n + a)^2 = (n + 2a)^2.\n            \\end{equation*}\n\n            For a fixed $a$, how many $a$-Pythagorean integers are there?\n    \\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n\t    \\item (3 points) This amounts to solving the equation\n            \\begin{equation*}\n                n^2 + (n + 1)^2 = (n + 2)^2,\n            \\end{equation*}\n\t\t    which, after you expand it, is just a quadratic. The only\n\t\t    positive solution is $n = 3$, which gives the triple $(3,\n\t\t    4, 5)$.\n\n\t    \\item (3 points) The equation\n            \\begin{equation*}\n                n^2 + (n + a)^2 = (n + 2a)^2\n            \\end{equation*}\n\t    is a quadratic in $n$ and has a single positive solution for $a >\n\t\t    0$, namely $n = 3a$. So there is only \\emph{one}\n\t\t    $a$-Pythagorean integer for every $a$.\n    \\end{enumerate}\n\\end{Answer}\n\n\\begin{Exercise}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n        \\item Prove that\n            \\begin{equation*}\n                \\sum_{k = 0}^n a(k) = b(n)\n            \\end{equation*}\n            is equivalent to\n            \\begin{equation*}\n                b(n + 1) - b(n) = a(n + 1); \\quad b(0) = a(0).\n            \\end{equation*}\n\n            [Hint: Use induction to get from the difference to the sum.]\n\n        \\item Use the above technique to show that the sum of the first $n$\n            positive integers is $n(n + 1) / 2$.\n\n        \\item Use the above technique to show that\n            \\begin{equation*}\n                \\sum_{k = 1}^n k^3 = \\left( \\sum_{k = 1}^n k \\right)^2.\n            \\end{equation*}\n            [Hint: You know what the right-hand side is from the previous\n            part.]\n    \\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n\t    \\item (4 points) If\n            \\begin{equation*}\n                \\sum_{k = 0}^n a(k) = b(n),\n            \\end{equation*}\n\t    then obviously $b(n + 1) - b(n) = a(n + 1)$ and $b(0) = a(0)$.\n\n\t    On the other hand, suppose that\n            \\begin{equation*}\n                b(n + 1) - b(n) = a(n + 1); \\quad b(0) = a(0).\n            \\end{equation*}\n\t    We will prove the summation identity via induction. The\n\t    base case, $n = 0$, is assumed: $b(0) = a(0) = \\sum_{k = 0}^0\n\t    a(k)$. For the inductive step, suppose that\n            \\begin{equation*}\n                \\sum_{k = 0}^n a(k) = b(n)\n            \\end{equation*}\n\t    for some $n \\geq 0$. 
Then,\n            \\begin{align*}\n\t\t    \\sum_{k = 0}^{n + 1} a(k) &= b(n) + a(n + 1) \\\\\n\t\t    \t\t\t      &= b(n + 1).\n            \\end{align*}\n\t    Therefore the equation $b(n) = \\sum_{k = 0}^n a(k)$ holds\n\t    for \\emph{all} integers $n \\geq 0$.\n\n    \\item (2 points) Just check that $\\frac{(n + 1)(n + 2)}{2} - \\frac{n(n + 1)}{2} = n + 1$ and that $\\frac{0(0 + 1)}{2} = 0$.\n\n    \\item (2 points) Check that $\\left( \\frac{n(n + 1)}{2} \\right)^2$ satisfies the right\n\t    equations.\n    \\end{enumerate}\n\\end{Answer}\n\n\\begin{Exercise}\n    The \\emph{Fibonacci numbers} $F_n$ are defined by the following recurrence:\n    \\begin{align*}\n        F_0 &= 0 \\\\\n        F_1 &= 1 \\\\\n        F_{n + 2} &= F_{n + 1} + F_n.\n    \\end{align*}\n\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n        \\item Compute the first ten Fibonacci numbers.\n\n        \\item Prove that\n            \\begin{equation*}\n                \\sum_{k = 0}^n F_k = F_{n + 2} - 1.\n            \\end{equation*}\n            [Hint: Use the earlier exercise about sums. Or induction.]\n\n        \\item The \\emph{square} Fibonacci numbers $S_n = F_n^2$ \\emph{also}\n            satisfy a nice recurrence:\n            \\begin{equation*}\n                S_{n + 3} = 2 S_{n + 2} + 2 S_{n + 1} - S_n.\n            \\end{equation*}\n\n            Check that this is true up to $n = 6$.\n    \\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n\t    \\item (2 points) 0, 1, 1, 2, 3, 5, 8, 13, 21, 34\n\n\t    \\item (2 points) Note that $F_{0 + 2} - 1 = 0 = \\sum_{k = 0}^0 F_k$, and that\n            \\begin{equation*}\n\t\t    (F_{n + 3} - 1) - (F_{n + 2} - 1) = F_{n + 1}.\n            \\end{equation*}\n\n    \\item (2 points) The first ten squares are: 0, 1, 1, 4, 9, 25, 64, 169, 441, 1156.\n\t\tUsing the recurrence\n            \\begin{equation*}\n                S_{n + 3} = 2 S_{n + 2} + 2 S_{n + 1} - S_n\n            \\end{equation*}\n\t    along with the initial values $S_0 = 0$, $S_1 = 1$, $S_2 = 1$, we\n\t    get the same list:\n\t    \\begin{align*}\n\t\t    2 (1) + 2 (1) - 0 &= 4 \\\\\n\t\t    2 (4) + 2 (1) - 1 &= 9 \\\\\n\t\t    2 (9) + 2 (4) - 1 &= 25 \\\\\n\t\t    2 (25) + 2(9) - 4 &= 64,\n\t    \\end{align*}\n\t    and so on.\n    \\end{enumerate}\n\\end{Answer}\n\n\\begin{Exercise}\n    Prove that $n! > 2^n$ for sufficiently large $n$. [Hint: This means that\n    there exists some $N$ such that $n! > 2^n$ for $n \\geq N$. You have to find\n    that $N$ and then prove it. Induction is the way to go here.]\n\\end{Exercise}\n\n\\begin{Answer}\n\t(4 points)\n\n\tSuppose that $n! > 2^n$ for some $n \\geq N$. [We haven't yet determined\n\t$N$, but that's OK! We'll come back to it.] Then, $$(n + 1)! = (n + 1)\n\t\\cdot n! > (n + 1) 2^n.$$ So, if $n + 1 > 2$, or $n > 1$, then $(n +\n\t1)! > 2^{n + 1}$. In other words, the inductive argument will work as\n\tlong as we start at $N = 2$.\n\n\tBut we need a good base case, and $N = 2$ doesn't work. Note that $4!\n\t= 24 > 16 = 2^4$, so $N = 4$ will do the job.\n\\end{Answer}\n\n\\begin{Exercise}\n    This exercise involves programming. When you write a program, save it as a\n    ``.py'' file and submit it with your homework. 
(I'll explain how to do this\n    in class on Thursday if this doesn't make sense to you.)\n\n    Let $a(n) = 2^n + 1$.\n\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n        \\item Write a Python program to compute $[a(1), a(2), a(3), \\dots,\n            a(n)]$ for arbitrary $n$. [Hint: Look up ``python list\n            comprehension.'']\n\n        \\item Using your program, determine which values of $a(n)$ are prime\n            for $1 \\leq n \\leq 16$. Guess a pattern.\n\n        \\item Prove that $a(n)$ is \\emph{not} prime if $n$ is odd. [Hint: Prove\n            that $3$ divides $a(n)$ if $n$ is odd, probably by induction.] Does\n            this prove your pattern? Explain.\n    \\end{enumerate}\n\\end{Exercise}\n\n\\begin{Answer}\n    \\begin{enumerate}[label=(\\textbf{\\alph*})]\n        \\item (2 points) Simple Python program: \\texttt{[2**k + 1 for k in\n            range(1, 17)]}. There are infinitely many ways you could have\n            structured this.\n\n\t    \\item (2 points) In the given range, the $n$ for which $a(n)$ is prime are $1$,\n            $2$, $4$, $8$, and $16$. It \\emph{seems} like $a(n)$ might be prime\n            when $n$ is a power of $2$.\n\n    \\item (3 points) One way is induction. Our statement is ``$3$ divides $2^{2n + 1} + 1$\n\t    for all nonnegative integers $n$.'' The base case, $n = 0$, is\n\t    obvious, since $2^{2 \\cdot 0 + 1} + 1 = 3$. Suppose that $2^{2n +\n\t    1} + 1 = 3k$ for some integer $k$. Then, we have $$2^{2(n +\n\t    1) + 1} + 1 = 4 \\cdot 2^{2n + 1} + 1.$$ Our assumption then gives\n\t    \\begin{align*}\n\t\t    4 \\cdot 2^{2n + 1} + 1 &= 4(3k - 1) + 1 \\\\\n\t\t    \t\t     &= 12k - 4 + 1 \\\\\n\t\t\t\t     &= 3(4k - 1),\n\t    \\end{align*}\n\t    so $3$ divides $2^{2(n + 1) + 1} + 1$ as well.\n    \\end{enumerate}\n\\end{Answer}\n\n\\begin{Exercise}\n    What is your favorite integer? Enter it into\n    \\url{http://www.numbergossip.com/} and give some of your favorite\n    properties.\n\\end{Exercise}\n\n\\begin{Answer}\n\t(3 points)\n\tMy favorite integer is $6$. My favorite property listed on the site is\n\tsomething I don't know how to prove.\n\n\tIn binary, $6 = (110)_2$. There are an even number of 1's in the\n\texpansion.\n\n\tAlso, the proper divisors of 6 are $1$, $2$, and $3$. If you add these\n\tup, you get $6$: $$1 + 2 + 3 = 6.$$\n\n\tSupposedly, \\emph{$6$ is the only even number that does both of these\n\tthings at once.} This seems surprising, but maybe it's actually easy to\n\tprove.  
I don't know!\n\\end{Answer}\n\n\\end{document}\n", "meta": {"hexsha": "68a4cad4351a3f364ecf073292d68892f6760f79", "size": 11694, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "files/2021/summer/nt/hw/1.tex", "max_stars_repo_name": "rwbogl/rwbogl.github.io", "max_stars_repo_head_hexsha": "4bfba4b851a17652b2a9a1b25ab84f77cb41f7ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/2021/summer/nt/hw/1.tex", "max_issues_repo_name": "rwbogl/rwbogl.github.io", "max_issues_repo_head_hexsha": "4bfba4b851a17652b2a9a1b25ab84f77cb41f7ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/2021/summer/nt/hw/1.tex", "max_forks_repo_name": "rwbogl/rwbogl.github.io", "max_forks_repo_head_hexsha": "4bfba4b851a17652b2a9a1b25ab84f77cb41f7ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.4299065421, "max_line_length": 137, "alphanum_fraction": 0.5758508637, "num_tokens": 3976, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.815232489352, "lm_q1q2_score": 0.580634816553729}}
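For the programming exercise above, a complete script in the spirit of the sample answer might look like the sketch below; the helper \texttt{is\_prime} (simple trial division) is our own illustration and not part of the original solution.

\begin{verbatim}
# Sketch for the programming exercise: a(n) = 2**n + 1,
# checked for primality over 1 <= n <= 16.

def is_prime(m):
    """Trial division; fine for numbers this small."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

values = [2**k + 1 for k in range(1, 17)]  # the list from the answer
primes = [k for k in range(1, 17) if is_prime(2**k + 1)]
print(primes)  # [1, 2, 4, 8, 16], matching the pattern in part (b)
\end{verbatim}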
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{modulo}\n\\section*{\\hspace*{-1.6cm} modulo}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nCongruence of a vector.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\ny = modulo(x,N)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty modulo} gives the congruence of each element of the vector\n   {\\ty x} modulo {\\ty N}. These values are strictly positive and lower\n   equal than {\\ty N}.\\\\\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n\t{\\ty x} & vector of real values, positive or negative\\\\\n\t{\\ty N} & congruence number (not necessarily an integer)\\\\\n\\hline\t{\\ty y} & output vector of real values, $>$0 and $\\leq${\\ty N}\\\\\n\n\\hline\n\\end{tabular*}\n\n\\end{minipage}\n\\vspace*{1cm}\n\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         x=[1.3 -2.13 9.2 0 -13 2];\n         modulo(x,2)\t\n         ans = \n               1.3000    1.8700    1.2000    2.0000    1.0000    2.0000\n\\end{verbatim}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nrem.\n\\end{verbatim}\n\\end{minipage}\n\n", "meta": {"hexsha": "bda5a1b95d2f386a9e942f24fa2fa13b4b974750", "size": 1508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/modulo.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/modulo.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/modulo.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 20.6575342466, "max_line_length": 71, "alphanum_fraction": 0.6332891247, "num_tokens": 579, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7690802370707281, "lm_q1q2_score": 0.5805901970684012}}
{"text": "\\hypertarget{hoare-logic-and-its-mechanization}{%\n\\section{Hoare Logic and its\nMechanization}\\label{hoare-logic-and-its-mechanization}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  means to prove the Hoare triple \\{ P \\} C \\{ Q \\} valid\n\\item\n  Hoare logic is a logic for obtaining valid Hoare triples by purely\n  deductive reasoning, that is, by mathematical proof\n\\item\n  deductive reasoning means successively applying inference rules to\n  axioms and already obtained conclusions to obtain new conclusions\n\\item\n  such proofs usually are extremely long and boring\n\\item\n  and therefore must be performed as automatically as possible (by\n  programs that are by themselves reliable \\ldots{})\n\\item\n  but: proof problem is undecidable in the general case\n\\end{itemize}\n\n\\hypertarget{weakest-preconditions}{%\n\\subsection{Weakest Preconditions}\\label{weakest-preconditions}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  given a command C and a postcondition Q\n\\item\n  given a prestate, the following three things may happen:\n\n  \\begin{enumerate}\n  \\def\\labelenumi{\\alph{enumi})}\n  \\tightlist\n  \\item\n    execution of C terminates in a poststate that satisfies Q\n  \\item\n    execution of C does not terminate\n  \\item\n    execution of C terminates in a poststate that satisfies $\\neg$Q\n  \\end{enumerate}\n\\item\n  let w be the set of all prestates from a) and b) together\n\\item\n  a weakest precondition W for C and Q is a boolean formula that\n  describes exactly this set\n\\item\n  obviously, weakest preconditions are not unique, since with W $\\in$ wp(C,\n  Q), for instance, W $\\land$ true $\\in$ wp(C, Q)\n\\item\n  but clearly, all W $\\in$ wp(C, Q) are equivalent\n\\end{itemize}\n\n\\begin{tcolorbox}[colback=red!5!white,colframe=red!75!black]\n$\\models$ { P } C { Q } is equivalent to $\\models$ P $\\Rightarrow$ wp(C, Q)\n\\end{tcolorbox}\n\n\\hypertarget{rules-of-inference}{%\n\\subsection{Rules of Inference}\\label{rules-of-inference}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  let f, f 1 , \\ldots{}, f n be boolean formulas (here assertions or\n  Hoare triples), n $\\geqslant$ 0\n\\item\n  an inference rule is a syntactic construct of the following form:\n\\end{itemize}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=50px]{figures/inferenceRule.png}\n\\caption{Inference Rule}\n\\end{figure}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  f\\_1 , \\ldots{}, f\\_n are called premises or hypotheses\n\\item\n  f is called conclusion\n\\item\n  if there are no premises (n = 0), the rule is called an axiom\n\\item\n  the conclusion f is valid, if all hypotheses are valid\n\\end{itemize}\n\n\\hypertarget{skip-axiom}{%\n\\subsubsection{Skip Axiom}\\label{skip-axiom}}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/skipAxiom.png}\n\\caption{Skip Axiom}\n\\end{figure}\n\n\\begin{tcolorbox}[colback=red!5!white,colframe=red!75!black]\n$\\{ 6 < x \\}$ not equal $\\{ x > 6 \\}$\n\\end{tcolorbox}\n\n\\clearpage\n\\hypertarget{rule-of-consequence}{%\n\\subsection{Rule of Consequence}\\label{rule-of-consequence}}\n\nthe RoC is of utmost importance:\n\n\\begin{itemize}\n\\tightlist\n\\item\n  it provides the interface between Hoare logic, concerned with\n  programming, and \u201eordinary`` mathematics\n\\item\n  \\ldots{} or in other words, it allows to plug in ordinary mathematics\n  into Hoare 
logic\n\\end{itemize}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/rulesOfConsquence.png}\n\\caption{Rules of Consequence}\n\\end{figure}\n\nWe now write this proof of $\\models$ \\{ x \\textgreater{} 7 \\} skip \\{ x\n\\textgreater{} 5 \\} more concisely and much closer to normal programming\nas\n\n\\begin{lstlisting}\n{ x > 7 }\n{ x > 6 }\nskip\n{ x > 6 }\n{ x > 5 }\n\\end{lstlisting}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  premises:\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    we read lines 1 and 2 as implication $\\Rightarrow$ (first premise of RoC)\n  \\item\n    we read lines 2 --- 4 as HT (second premise of RoC)\n  \\item\n    we read lines 4 and 5 as implication $\\Rightarrow$ (third premise of RoC)\n  \\end{itemize}\n\\item\n  conclusion:\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    we read lines 1 --- 5 as HT (conclusion of RoC)\n  \\end{itemize}\n\\end{itemize}\n\n\\clearpage\n\\hypertarget{general-proof-procedure}{%\n\\subsection{General Proof Procedure}\\label{general-proof-procedure}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  To prove \\{P\\} C \\{Q\\}, we start with the postcondition Q, go\n  backwards over the command C to determine a precondition R such that\n  the HT \\{R\\} C \\{Q\\} is valid, and construct the implication P\n  -\\textgreater{} R.\n\\item\n  This implication is called a verification condition (VC).\n\\item\n  If this VC is valid, then the original HT \\{P\\} C \\{Q\\} is valid too.\n\\item\n  this process can be mechanized, that is, performed fully automatically\n  by means of a computer\n\\item\n  there might be many possible preconditions; which one do we choose?\n\\item\n  we always choose the syntactically weakest precondition\n\\item\n  that is, we can assume as little as possible\n\\end{itemize}\n\n\\begin{lstlisting}\nExample\n{ x > 7 } -- given precondition P\n{ x > 5 } -- compute precondition R\nskip  \n-- go backwards over the command C\n{ x > 5 } -- start here with given postcondition Q now construct VC P $\\Rightarrow$ R, \n          -- that is, \"x > 7 $\\Rightarrow$ x > 5\"\n\\end{lstlisting}\n\n\\hypertarget{computing-preconditions-and-software-engineering}{%\n\\subsubsection{Computing Preconditions and Software\nEngineering}\\label{computing-preconditions-and-software-engineering}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  looks strange in the first moment to start with the postcondition to\n  arrive at the precondition\n\\item\n  but: \\textbf{the postcondition describes the actual task of the\n  program and is thus given}\n\\item\n  the precondition will be determined to find out under which\n  circumstances the postcondition can be achieved\n\\item\n  happily, determining preconditions from postconditions is much simpler\n  than the other way round\n\\item\n  thus, determining VCs (via computing preconditions) can be done\n  automatically by a tool called a \\textbf{verification condition\n  generator}\n\\item\n  the VCs can then (hopefully automatically) be discharged (proved\n  valid) by a second tool called a theorem prover\n\\end{itemize}\n\n\\hypertarget{textual-substitution}{%\n\\subsubsection{Textual Substitution}\\label{textual-substitution}}\n\nThis is an operation that essentially can be performed by means of\nsearch and replace of a text editor, and thus can be implemented on a\nmachine.\n\n\\begin{lstlisting}\nx[x<-z+2] yields (z + 2)\n-- or\ny[x<-z+2] yields y\n-- [x<-z+2] this will replace all x with 
z+2\n\\end{lstlisting}\n\n\\hypertarget{assignment-axiom}{%\n\\subsection{Assignment Axiom}\\label{assignment-axiom}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  The assignment command can be seen as something like renaming an\n  expression in the precondition.\n\\item\n  You take the expression in the execution part and assign this definition\n  of the variable (in the execution part) to the same variable in the\n  precondition.\n\\item\n  If the new precondition can still meet the postcondition (after\n  execution), the Hoare triple is valid.\n\\end{itemize}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/assigningAxiom.png}\n\\caption{Assignment Axiom Examples}\n\\end{figure}\n\n\\hypertarget{composition-command}{%\n\\subsection{Composition Command}\\label{composition-command}}\n\nThe composition command is used to do one or more things one after the\nother.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/compositionrule.png}\n\\caption{Composition Rule}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/compositionExample.png}\n\\caption{Composition Example}\n\\end{figure}\n\n\\hypertarget{example-strange-swap}{%\n\\subsubsection{Example Strange Swap}\\label{example-strange-swap}}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figures/exampleStrangeSwap.png}\n\\caption{Strange Swap Example}\n\\end{figure}\n\n\\hypertarget{conditional-rule}{%\n\\subsection{Conditional Rule}\\label{conditional-rule}}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figures/conditionalRule.png}\n\\caption{Conditional Rule}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/exampleConditional.png}\n\\caption{Conditional Rule Example}\n\\end{figure}\n\n\\clearpage", "meta": {"hexsha": "8a71f6a7187ec507e17ac1f2bf3f9f1ffdf4796e", "size": 8264, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TSM_AdvPrPa/Summary/13_Verification02.tex", "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": ["Beerware"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TSM_AdvPrPa/Summary/13_Verification02.tex", "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_licenses": ["Beerware"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TSM_AdvPrPa/Summary/13_Verification02.tex", "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": ["Beerware"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "avg_line_length": 27.2739273927, "max_line_length": 87, "alphanum_fraction": 0.7480638916, "num_tokens": 2373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5805901970684012}}
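To make the backwards computation of the General Proof Procedure concrete, here is a toy sketch (our own illustration, not the verification condition generator mentioned above), assuming commands are represented as tuples and assertions as plain strings; real tools substitute on parsed syntax trees rather than on text.

\begin{lstlisting}
# Toy wp computation: skip, assignment and composition.
# Commands: ("skip",), ("assign", var, expr), ("seq", c1, c2).
import re

def substitute(assertion, var, expr):
    # Textual substitution Q[var <- expr] via whole-word replace
    return re.sub(r"\b%s\b" % re.escape(var), "(%s)" % expr, assertion)

def wp(command, post):
    kind = command[0]
    if kind == "skip":                 # Skip Axiom: wp(skip, Q) = Q
        return post
    if kind == "assign":               # Assignment Axiom: Q[x <- E]
        _, var, expr = command
        return substitute(post, var, expr)
    if kind == "seq":                  # Composition: wp(c1, wp(c2, Q))
        _, c1, c2 = command
        return wp(c1, wp(c2, post))
    raise ValueError("unknown command")

# VC for { x > 7 } skip { x > 5 } is "x > 7" implies wp below
print(wp(("skip",), "x > 5"))                  # x > 5
print(wp(("assign", "x", "x + y"), "x == X"))  # (x + y) == X
\end{lstlisting}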
{"text": "\\section{Introduction}\n\\label{Transport Intro}\nFor reference, some common equations and brief overviews of the three categories of particles will be given in this chapter. The Maxwell equations \\eqref{Gauss}--\\!\\,\\eqref{Ampere}, the continuity equation for charge density and current density \\eqref{continuity}, the Lorentz force equation \\eqref{Lorentz}, Newton's second law of motion \\eqref{Newton2}, and the \\ac{MHD} approximation of Ohm's Law \\eqref{Ohm} are each useful for basic plasma physics. Thorough derivations for these and related equations can be found in several textbooks, including \\citet{gombosi98} and \\citet{jackson99}.\n\n\\begin{subequations}\n \\begin{align}\n  \\nabla\\cdot\\mathbf{E}&=\\frac{\\rho_e}{\\epsilon_0}&\\quad\\text{Gauss's Law}\n  \\label{Gauss}\\\\\n  \\nabla\\times\\mathbf{E}&=-\\frac{\\partial\\mathbf{B}}{\\partial t}&\\quad\\text{Faraday's law of induction}\n  \\label{Faraday}\\\\\n  \\nabla\\cdot\\mathbf{B}&=0&\\quad\\text{Gauss's law for magnetism}\n  \\label{Gauss m}\\\\\n  \\nabla\\times\\mathbf{B}&=\\mu_0\\mathbf{J}+\\mu_0\\epsilon_0\\frac{\\partial\\mathbf{E}}{\\partial t}&\\quad\\text{Amp\\`{e}re's law}\n  \\label{Ampere}\n \\end{align}\n\\end{subequations}\n\\begin{align}\n \\nabla\\cdot\\mathbf{J}&=-\\frac{\\partial\\rho_e}{\\partial t}&\\quad\\text{continuity equation}\n \\label{continuity}\\\\\n \\mathbf{F}&=q\\left(\\mathbf{E}+\\mathbf{v}\\times\\mathbf{B}\\right)\\quad\\text{(N)}&\\quad\\text{Lorentz force}\n \\label{Lorentz}\\\\\n \\mathbf{F}&=m\\mathbf{a}\\quad\\text{(N)}&\\quad\\text{Newton's 2nd law of motion}\n \\label{Newton2}\\\\\n \\mathbf{J}&=\\sigma\\left(\\mathbf{E+v\\times B}\\right)\\quad\\text{(A m$^{-2}$)}&\\quad\\text{Ohm's law}\n \\label{Ohm}\n\\end{align}\n\nIn these equations, $\\epsilon_0$ is the electric constant (also called the permittivity of free space), $\\mu_0$ is the magnetic constant (also called the permeability of free space), $\\sigma$ is the electrical conductivity, treated here as a constant, $q$ is the charge, $\\rho_e$ is the charge density, $m$ is the mass, $\\mathbf{J}$ is the electric current density, and $\\mathbf{E}$ and $\\mathbf{B}$ are the electric and magnetic fields. $\\mathbf{F}$, $\\mathbf{v}$, and $\\mathbf{a}$ represent force, velocity, and acceleration. \n\nIn \\ac{MHD}, the fields are induced by plasma motion, so the fields vary on the same time and length scales as the plasma variables. If high frequency variations in the electric field are not included, and only the non-relativistic regime is considered, the displacement current in Amp\\`{e}re's law can be neglected, leading to Equation~\\ref{AmpereMHD}.\n\\begin{align}\n \\nabla\\times\\mathbf{B}&=\\mu_0\\mathbf{J}&&\\quad\\text{MHD Amp\\`{e}re's law}\n \\label{AmpereMHD}\n\\end{align}\n\\indent By substituting Equation~\\ref{Faraday} and the curl of Equation~\\ref{AmpereMHD} into the curl of Equation~\\ref{Ohm}, $\\mathbf{E}$ and $\\mathbf{J}$ can be eliminated to derive the magnetic induction equation \\eqref{MHDinduction}. 
The first term on the right describes the resistive diffusion of the magnetic field in the plasma while the second term describes the convection of the magnetic field by the plasma.\n\\begin{align}\n \\frac{\\partial\\mathbf{B}}{\\partial t}=\\frac{1}{\\sigma\\mu_0}\\nabla^2\\mathbf{B}+\\nabla\\times\\left(\\mathbf{v}\\times\\mathbf{B}\\right)&&\\quad\\text{magnetic induction equation}\n \\label{MHDinduction}\n\\end{align}\n\nSince Equation~\\ref{Gauss m} states that the divergence of the magnetic field vector $\\mathbf{B}$ is zero, $\\mathbf{B}$ can be written in terms of a vector potential $\\mathbf{A}$:\n\\begin{align}\n \\mathbf{B}=\\nabla\\times\\mathbf{A}\\quad\\text{(T)}.\n \\label{B field}\n\\end{align}\n\nBy substituting Equation~\\ref{B field} into Equation~\\ref{Faraday}, Faraday's law of induction can be written as a quantity with a vanishing curl. Such a quantity can be rewritten as the gradient of a scalar function, the scalar potential $\\Phi$, leading to an equation for $\\mathbf{E}$ in terms of the potentials $\\mathbf{A}$ and $\\Phi$:\n\\begin{align}\n \\mathbf{E}=-\\nabla\\Phi-\\frac{\\partial\\mathbf{A}}{\\partial t}\\quad\\text{(V m$^{-1}$)}.\n \\label{E field}\n\\end{align}\n\nFor electrostatics, all derivatives with respect to time are zero. In this case, the divergence of Equation~\\ref{E field} combined with Equation~\\ref{Gauss} will give the Poisson equation, or in the absence of charges, the Laplace equation:\n\\begin{align}\n \\nabla^2\\Phi &= -\\frac{\\rho_e}{\\epsilon_0},&\\quad\\text{Poisson's equation}\n \\label{Poisson}\\\\\n \\nabla^2\\Phi &= 0.&\\quad\\text{Laplace's equation}\n \\label{Laplace}\n\\end{align}\n\nDue to the historical precedent of the symbols used in these and other common equations, a symbol may have different meanings depending on the equation in which it is used (i.e., `E' can represent `electric field' or `energy'). Even though the meaning of the symbol can usually be discerned from the context of the equation, an attempt has been made to use distinct symbols throughout this dissertation, or use subscripts to clarify a symbol's meaning when necessary. In the specific case of `E', the bold font $\\mathbf{E}$ is used to represent the electric field vector and $\\left|\\mathbf{E}\\right|$  to represent the magnitude of the electric field. The plain font E is always used here to represent energy.\n\nParticle transport and acceleration are important topics of research in heliophysics. An understanding of the composition and nature of the gas and plasma found in space is vital to the forecasting of space weather. This research focuses on ways to investigate three categories of particles: the solar wind (\\S~\\ref{Solar Wind}), pickup ions, and energetic particles, as shown in Figure~\\ref{fig:H_Distribution}. The following is intended to provide sufficient background for the scope of this dissertation research.\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=.65\\textwidth]{Chap2/H_Distribution}\n  \\caption[Example of proton distributions for the quiet solar wind near 5 AU.]{Example of proton distributions for the quiet solar wind near 5 AU. Shown are the bulk distribution of the solar wind, the interstellar pickup ions that drop off at twice the solar wind speed, and the high-energy protons that make up the suprathermal tail. 
Figure from \\citet{gloeckler01b}.}\n  \\label{fig:H_Distribution}\n\\end{figure}\n\n\\section{The Solar Wind}\n\\label{Solar Wind}\n\n\\subsection{Current Knowledge}\n\\label{SW Current Knowledge}\nWhile he was not the first to postulate its existence, the physics of the solar wind was first explained by Eugene Parker in 1958 \\citep{parker58}. Beginning with subsonic speeds close to the Sun, plasma accelerates away from the solar surface and reaches supersonic speeds in the corona. It continues to expand in a radial direction outward until it interacts with the material in interstellar space at the edge of the heliosphere, the Sun's sphere of influence. The wind draws the solar magnetic field along with it, creating spiral-shaped field lines as the Sun rotates \\citep{parker59}. Mankind's understanding of the processes that govern the solar wind has increased as spacecraft have taken in situ measurements, but there are still some properties that remain unexplained, such as the precise origin of certain types of wind, as discussed below.\n\nThe solar wind travels a distance of one \\ac{AU} before reaching Earth's orbit, where most of the current measurements have been taken (Table~\\ref{tab:solar wind}). It is generally divided into two components, commonly referred to as the ``fast'' and ``slow'' solar wind. Originally, these terms were used to differentiate the wind by the speed with which it traveled, but more recent studies have shown that the two types of wind are more efficiently distinguished by their charge state composition (e.g., O$^{7+}$/O$^{6+}$) since the plasma can change speeds as it flows through space \\citep{geiss95b, gloeckler03a}. Rather than the terms ``fast'' and ``slow'', more appropriate labels are descriptive of the wind's origin: ``coronal hole'' and ``streamer'' wind. These two types of wind are generated by different processes and have different compositions, temperatures, speeds, and origins.\n\\begin{table}[htbp]\n\t\\centering\n\t\t\\begin{tabular}{l|c|c}\n\t\t                                                               & Coronal Hole Wind & Streamer Wind     \\\\ \\hline\n      bulk speed \\footnotesize{$\\left(\\text{km s}^{-1}\\right)$}    & 750               & 400               \\\\ \\hline\n      thermal speed \\footnotesize{$\\left(\\text{km s}^{-1}\\right)$} & 32                & 35                \\\\ \\hline\n      H$^+$ density \\footnotesize{$\\left(\\text{cm}^{-3}\\right)$}   & 2.5               & 8.7               \\\\ \\hline\n      frozen-in temperature \\footnotesize{$\\left(\\text{K}\\right)$} & 8 x 10$^5$        & 1.4--1.6 x 10$^6$ \\\\ \\hline\n\n\t\t\\end{tabular}\n\t\\caption[Average characteristics of the solar wind at 1 AU.]{Average characteristics of the solar wind at 1 AU. The temperature is derived from the freeze-in temperature of C$^{6+}$/C$^{5+}$, which freezes in near the solar wind source altitude. Data compiled from \\citet{vonsteiger95, gloeckler98a, ipavich98, mccomas00, feldman05}.}\n\t\\label{tab:solar wind}\n\\end{table}\n\nAs solar wind ions escape from the photosphere and travel up through the corona, they experience collisions with energetic electrons that ionize them to different degrees. As they travel farther through the corona, continuously accelerating, the density of coronal electrons decreases and the particles experience fewer collisions. 
When the timescale for ionization or recombination becomes longer than the timescale of the solar wind to expand through a density scale height, the charge state of the ion is said to be ``frozen in,'' branding the ion with the coronal region and electron temperature of its origin \\citep{hundhausen68}. The streamer wind has a distinct characteristic of being enriched in elements with a low ($\\le$ 10 eV) \\ac{FIP} by a factor of 3--4 over the photospheric value. The coronal hole wind does not show this density enhancement, and measurements have revealed abundances of low-\\ac{FIP} elements that match ratios in the photosphere \\citep{vonsteiger93}. The streamer wind also has a higher and more variable freeze-in temperature than the coronal hole wind. One explanation for this describes solar plasma trapped and heated in large coronal loops that are eventually opened by interchange reconnection, releasing the plasma \\citep{gosling95, fisk98, fisk99a}.\n\nThe coronal hole wind originates in the open flux regions of the Sun, which contain low-density plasma and concentrations of magnetic flux that are all the same polarity. During solar minimum these regions are clustered around the poles of the Sun, while during solar maximum they appear at all latitudes. Plasma in open flux regions is also released from flux loops, but the high concentration of open flux increases the probability that the loops will open before they can heat and fractionate the plasma. The anti-correlation between freeze-in temperature and solar wind speed shown in Table~\\ref{tab:solar wind} can be interpreted in a simplistic way as a sign of different sized loops. The long-lived loops that produce the streamer wind will expand and rise slowly into the corona, where the temperatures are hotter, before being opened \\citep{fisk98, fisk01a}. The short-lived loops that yield the coronal hole wind are opened while they are still small and close to the cooler surface \\citep{fisk99a, fisk03, wimmer03b}.", "meta": {"hexsha": "b32d9f582ddce846a94756a82c005b28f7e66dbd", "size": 11386, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Dissertation_Template/Chap2/chap2.tex", "max_stars_repo_name": "StevenHong/LaTeX-Workshop", "max_stars_repo_head_hexsha": "a3bbe3309d3d5bf87673341d045630e9d34659b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Dissertation_Template/Chap2/chap2.tex", "max_issues_repo_name": "StevenHong/LaTeX-Workshop", "max_issues_repo_head_hexsha": "a3bbe3309d3d5bf87673341d045630e9d34659b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Dissertation_Template/Chap2/chap2.tex", "max_forks_repo_name": "StevenHong/LaTeX-Workshop", "max_forks_repo_head_hexsha": "a3bbe3309d3d5bf87673341d045630e9d34659b4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 119.8526315789, "max_line_length": 1291, "alphanum_fraction": 0.753644827, "num_tokens": 3033, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5805901928277108}}
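As a supplement to the derivation sketched before Equation~\eqref{MHDinduction} above, the elimination of $\mathbf{E}$ and $\mathbf{J}$ can be written out step by step, using only the equations already quoted together with the identity $\nabla\times\left(\nabla\times\mathbf{B}\right)=\nabla\left(\nabla\cdot\mathbf{B}\right)-\nabla^2\mathbf{B}$ and Equation~\eqref{Gauss m}:
\begin{align*}
 \nabla\times\mathbf{J} &= \sigma\left(\nabla\times\mathbf{E}+\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)\right) &&\text{curl of \eqref{Ohm}}\\
 \frac{1}{\mu_0}\nabla\times\left(\nabla\times\mathbf{B}\right) &= \sigma\left(-\frac{\partial\mathbf{B}}{\partial t}+\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)\right) &&\text{insert \eqref{Faraday} and \eqref{AmpereMHD}}\\
 -\frac{1}{\mu_0}\nabla^2\mathbf{B} &= \sigma\left(-\frac{\partial\mathbf{B}}{\partial t}+\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)\right)\\
 \frac{\partial\mathbf{B}}{\partial t} &= \frac{1}{\sigma\mu_0}\nabla^2\mathbf{B}+\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)
\end{align*}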
{"text": "\n\\section{Graph Algorithms}\n\\label{appendix:AppendixA}\n\n\n\n\n\n\\subsection{The DFS Algorithm}\n\nAn Example + Complexity\n\n\n\n\n\n\\subsection{Topological Sort}\n\n\n\n\n\n", "meta": {"hexsha": "3603466ee8569cc4dc325b357af801b466f65cd3", "size": 156, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX-Thesis-Template/content/AppendixA.tex", "max_stars_repo_name": "imbur/mcgill-thesis-template", "max_stars_repo_head_hexsha": "e86fce4eb527cd1bdfa11fef0c786859a8d217ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2017-07-03T19:56:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-21T05:47:08.000Z", "max_issues_repo_path": "LaTeX-Thesis-Template/content/AppendixA.tex", "max_issues_repo_name": "ranok92/my_masters_thesis", "max_issues_repo_head_hexsha": "5a66e039b5702ff8045bd3f635572ada1d4482ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-05-05T16:49:59.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-05T16:51:29.000Z", "max_forks_repo_path": "LaTeX-Thesis-Template/content/AppendixA.tex", "max_forks_repo_name": "ranok92/my_masters_thesis", "max_forks_repo_head_hexsha": "5a66e039b5702ff8045bd3f635572ada1d4482ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2018-02-08T18:14:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T15:12:42.000Z", "avg_line_length": 6.7826086957, "max_line_length": 30, "alphanum_fraction": 0.7243589744, "num_tokens": 36, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.863391599428538, "lm_q2_score": 0.6723317123102955, "lm_q1q2_score": 0.5804855524381137}}
{"text": "\\chapter{Machine learning approach for true track tagging}\nApplying cuts to track features in order to differentiate between true and false reconstructed tracks improved their ratio. A more effective\napproach in classifying a track as true or false might be achieved with a machine learning algorithm for classification.\nUnlike one-dimensional cuts, a machine learning\nalgorithm can be implemented to benefit from correlations between the input features for a more accurate identification of true tracks. The utilised\nmachine learning library is XGBoost \\cite{xgboost}. It is used for supervised learning problems \\cite{supervised} which are explained in the following section.\n\n%\\section{Introduction to XGBoost}\n\\section{Supervised learning}\n%XGBoost is a gradient boosting machine learning algorithm \\cite{gradient}, which primarily uses decision trees as a predictive model for classification and regression analysis.\nSupervised learning is used to predict a target variable $y_i$ based on the training data $x_i$ containing multiple features. The model\nin supervised learning describes the mathematical structure, which determines the prediction from its input data.\nA linear model is a common example, where the predictions\nare linear combinations of the weighted input features:\n\\begin{align}\n  \\hat{y}_i = \\sum_j \\Theta_j x_{ij}\n\\end{align}\nThe coefficients $\\Theta_j$ are a priori undetermined parameters that need to be learned from the training data.\nTherefore, the task of training is in determining the optimal parameters\nfor the target variable $y_i$ based on the input $x_i$. In order to train a model, an objective function, which measures how well\nthe model fits the training data, needs to be defined and optimised.\nThose functions consist of the training loss function $L(\\Theta) $ and the regularisation term $\\Omega (\\Theta)$.\n\\begin{align}\n  \\text{obj}(\\Theta) = L(\\Theta) + \\Omega(\\Theta)\n\\end{align}\n\nThe training loss function is commonly defined as the mean squared error or logistic loss for logistic regression \\cite{logistic}:\n\\begin{align}\n  &L(\\Theta) = \\sum_i (y_i - \\hat{y}_i)^2 \\\\\n  &L(\\Theta) = \\sum_i [y_i \\ln{(1 + \\text{e}^{-\\hat{y}_i})} + (1 - y_i) \\ln{(1 + \\text{e}^{-\\hat{y}_i})}]\n\\end{align}\nLogistic regression refers to the statistical model that uses logistic functions to model binary dependent variables. \\\\\nThe regularisation term limits the complexity of a model in order to prevent overfitting. Overfitting occurs when the model fits the training data too well that it adjusts\nto random fluctuations with no causal relation. Predictions of new unseen data then become worse, as the learner is not able to generalise well.\n\n\\section{Decision tree ensemble}\nDecision trees are tree-structured diagrams, which classify a target variable based on a series of decisions. They consist of internal nodes, which denote\na test on an attribute, branches, which represent the outcome of the tests, and leaf nodes, which are the nodes of a decision tree that do not split the data any further. \\\\\nThe model used by the XGBoost library is the ensemble of decision trees, which consists of a set of classification and regression trees (CART) \\cite{cart}.\nA CART assigns a prediction score to each of the leaves.\nThe prediction of a single tree is usually not sufficiently accurate,\ntherefore the prediction of numerous trees is summed together. 
This method can be written mathematically as:\n\\begin{align}\n  \\hat{y}_i &= \\sum_{k=1}^K f_k(x_i)\\,, \\: f_k \\in \\mathcal{F} \\\\\n  \\text{obj}(\\Theta) &= \\sum_i l(y_i, \\hat{y}_i) + \\sum_{k=1}^K \\Omega(f_k)\n\\end{align}\nHere, $K$ is the number of trees, $l(y_i, \\hat{y}_i)$ is the loss function that measures the difference between the predictions and the\ntarget variable, $\\mathcal{F}$ the set of all possible CARTs, and $f$ a decision function of the respective CART. An example of a decision tree ensemble is shown in\nFigure \\ref{fig:random_forest}.\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{images/random_forest.png}\n  \\caption{Representation of a decision tree ensemble \\cite{random_forest}.}\n  \\label{fig:random_forest}\n\\end{figure}\n\n\\section{Boosted trees}\n%The model used for boosted trees and a random forest is both the tree ensembles with the only difference being the training method.\n%Random forests and boosted decision trees are both tree ensembles that differ only in their training method.\nUnlike random forest algorithms \\cite{random}, in which a multitude of decision trees are created independently and combined at the end,\nit is also possible to build one tree at a time and\ncombine them iteratively. This method is the gradient boosting machine learning algorithm \\cite{gradient}, which is used by XGBoost. \\\\\nOptimising the objective functions for\nboosted decision trees (BDT) is achieved by additive training. This means that previous trees are fixed and only one new tree per step $t$ is added.\nThis can be written\ndown as follows:\n\\begin{align}\n  \\hat{y}_i^{(t)} = \\sum_{k=1}^t f_k(x_i) = \\hat{y}_i^{(t-1)} + f_t(x_i)\n\\end{align}\n\nThe tree added in each step is supposed to optimise the objective, which can be written at step $t$ as:\n\n\\begin{align}\n  \\text{obj}^{(t)} &= \\sum_i \\left[g_i f_t(x_i) + \\frac{1}{2}h_i f_t^2(x_i)\\right] + \\Omega(f_t)  \\\\\n  g_i &= \\partial_{\\hat{y}_i^{(t-1)}} l(y_i, \\hat{y}_i^{(t-1)}) \\\\\n  h_i &= \\partial^2_{\\hat{y}_i^{(t-1)}} l(y_i, \\hat{y}_i^{(t-1)})\n\\end{align}\nBy using the second-order Taylor expansion of the loss function instead of only the first-order one, a more precise approximation of the loss function is determined in comparison\nto regular gradient boosting algorithms \\cite{newton_boosting}.\nFurthermore, the value of the objective function only depends on $g_i$ and $h_i$, giving XGBoost the advantage of supporting custom loss functions, including logistic regression.\nHowever, the regularisation term still needs to be defined. One definition that works well in practice and is thus used by XGBoost is:\n\\begin{align}\n  \\Omega (f) = \\gamma T + \\frac{1}{2}\\lambda \\sum_{j=1}^T \\omega_j^2\n\\end{align}\nHere, $\\omega$ is the vector of scores on leaves, $\\gamma$ is the minimum loss reduction required to make\nanother partition on a leaf, $T$ is the number of leaves, and $\\lambda$ is the Ridge regularisation term \\cite{ridge}, which shrinks the $\\omega_j$ to control the\nregularisation term. \\\\\nWith this definition, the objective function for a single tree can be compressed to:\n\\begin{equation} \\label{eqn:obj}\n  \\text{obj} = -\\frac{1}{2}\\sum_{j=1}^T \\frac{G_j^2}{H_j + \\lambda} + \\gamma T\n\\end{equation}\nHere, $G_j$ and $H_j$ are the sums of the $g_i$ and $h_i$ over the data points assigned to leaf $j$. Equation \\ref{eqn:obj} is a measure of how good a tree structure is. 
Its structure score\nis determined by the statistics $g_i$ and $h_i$ in the leaves, with a smaller score indicating a better tree structure.\nSince it is not feasible to enumerate all possible trees to find the best one, only one level at a time is optimised.\nFor each split of a leaf into a new left leaf L and right leaf R, the gained structure score is defined as:\n\\begin{equation}\n  \\text{Gain} = \\frac{1}{2}\\left[\\frac{G_{\\text{L}}^2}{H_{\\text{L}} + \\lambda} + \\frac{G_{\\text{R}}^2}{H_{\\text{R}} + \\lambda} - \\frac{(G_{\\text{L}} + G_{\\text{R}})^2}{H_{\\text{L}}+H_{\\text{R}}+\\lambda}\\right] -\\gamma\n\\end{equation}\n\nOptimal splits are then determined by calculating the structure score of all possible split solutions. \\\\\n\n\\section{Machine learning setup}\nThe data used to train the learner %is identical to the one from section \\ref{sec:feature} and\nconsist of 400000 events with ten generated $\\SI{200}{\\mega\\eV}$ protons each\ntraversing the telescope described in section \\ref{sec:setup}. The protons\nare reconstructed with Corryvreckan, with the true tracks being identified with the MC truth from Allpix$^2$. A total number of 92546 tracks\nare reconstructed, with 36340 being true tracks.\nThe learner is tested on a previously unseen dataset, simulated with the same configuration used in\n\\ref{sec:feature}, consisting of 100000 protons with ten protons per event.\nTo determine the optimal number of boosting rounds, early stopping is utilised on a validation set, consisting of 100000 generated protons. This means that\nthe model will train until the validation score stops improving for ten consecutive boosting rounds to rule out statistical fluctuations.\nThe number of boosting rounds is then determined by the highest validation score.\nIn this case, the validation score refers to the area under the ROC curve, which was explained in \\mbox{section \\ref{sec:feature}}. \\\\\nThe learner performs a logistic regression and outputs a probability for classifying a track as true or false.\nA single tree is created in each iteration, called boosting round, to minimise the loss function.\n\nTo determine whether all features are independent of each other, their correlations are calculated, as shown in Figure \\ref{fig:corr}. Here, the correlation\nrefers to the Pearson correlation coefficient \\cite{pearson}, describing the ratio of the covariance of two variables to the product of their standard deviations. \\\\\nMost features show only small positive correlations with each other. The most noticeable exceptions are the cluster positions, which correlate particularly strongly within each\ntriplet. All charge deposition features and the $\\chi^2$ value show almost no correlation with any other features, though this does not necessarily mean that they are\nuseful features for separating true and false tracks. 
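Such a correlation matrix takes only a few lines to compute; the sketch below (our own illustration, with random placeholder columns standing in for the track features) uses the Pearson coefficient, the default of \texttt{pandas.DataFrame.corr}:

\begin{verbatim}
# Sketch: Pearson correlation matrix; column names are placeholders
# for the kink angles, charge depositions and the chi2 value.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "chi2":     rng.normal(size=1000),
    "phi_x3":   rng.normal(size=1000),
    "phi_x4":   rng.normal(size=1000),
    "charge_1": rng.normal(size=1000),
})
corr = df.corr(method="pearson")   # cov(X, Y) / (std(X) * std(Y))
print(corr.round(2))
\end{verbatim}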
\\\nLess useful features\nare not expected to contribute significantly to classification, but also do not disrupt the learning process, as trees do not split as often on variables with little impact.\nHowever, problems may arise when taking too many features into account, as only limited training data is available to compensate for inefficient splits on features.\nTo prevent too many splits on less useful features,\nthe cluster positions on the six sensors in the horizontal and vertical direction are not given as input to the machine learner,\ndue to their high correlation with each other and their overall small number of splits, shown in Figure \\ref{fig:importance_global} in the appendix.\nThus, the remaining 15 features include the $\\chi^2$, the kink angles, and the charge depositions.\n\\begin{figure}\n  %\\centering\n  \\hspace{-2.5cm}\n  \\includegraphics[height=1.0\\textwidth]{plots/correlation_matrix.pdf}\n  \\vspace{-1.8cm}\n  \\caption{The Pearson correlation coefficient of the kink angles, the cluster positions, the charge depositions, and the $\\chi^2$ value.}\n  \\label{fig:corr}\n\\end{figure}\n\n%All features, including $\\chi^2$, kink angles, and charge depositions are selected as input for the classifier. Less useful features\n%are not expected to contribute significantly to classification, but also do not disrupt the learning process, as trees do not split as often on variables with little impact.\n%Thus, no problems will arise by taking all features into account, as long as the training data is sufficiently large to be able to compensate for inefficient splits on trees.\nThe parameters that are predetermined and cannot be inferred from the training of the learner are called hyperparameters. These parameters are used\nto control the learning process and need to be tuned to improve the performance of the classifier.\nTaking many hyperparameters and hyperparameter values into account comes at the\ncost of a large computation time. For this reason, four parameters are chosen to be optimised. These are firstly the maximum depth of each tree, which influences the\ncomplexity of the model and the likelihood of overfitting. With the learning rate $\\eta$, overfitting can be alleviated as it shrinks the feature weights\nby the specified factor. It takes values from zero to one, with smaller values describing smaller corrections from further trees.\nThe minimum loss reduction required for an additional partition on a leaf is referred to as $\\gamma$. The minimum sum of instance weight $\\theta$\nneeded in a child has a similar impact. Leaf nodes with a sum of weights less than the specified value will prevent further tree partition steps.\nBoth $\\theta$ and $\\gamma$ take values from [0, $\\infty$[ and are a measure of how complex the model will be.\nFor these parameters, larger values represent a slower learning algorithm.\n%more conservative algorithm, which means that the learner .\n\nA grid search from the Python \\cite{python} library Scikit-learn \\cite{scikit} is used to determine an optimal\nvalue for each of the parameters mentioned above. The values given as input into the grid search\nare shown in Table \\ref{tab:grid}.\n\n  \\begin{table}\n    \\centering\n    \\caption{Parameter values of the grid search. 
The tree depth refers to the maximum depth of a tree, $\\eta$ refers to the learning rate,\n    $\\gamma$ is a threshold for the allowed minimum loss reduction in a partition step, and $\\theta$ is the minimum sum of instance weight needed in a child.}\n    \\begin{tabular}{c c c c}\n      \\toprule\n      tree depth & $\\eta$ & $\\gamma$ & $\\theta$ \\\\\n      \\midrule\n      3 & 0.05 & 0.5 & 1  \\\\\n      4 & 0.1  & 1   & 25  \\\\\n      5 & 0.3  & 2 & 50  \\\\\n      6 & 0.7  & 5   &  \\\\\n      \\bottomrule\n    \\end{tabular}\n    \\label{tab:grid}\n  \\end{table}\nTo compare each different combination of the hyperparameter values quantitatively and obtain an estimation of the uncertainty, a 5-fold cross-validation is performed. This means that\nthe training data set is split into five equally\nlarge sets, with the learner training on four of them and evaluating on the last one. Each of the five sets serves as the test data once, which means that for each\nparameter combination five runs are performed. The area under the ROC curve serves as the scoring function to measure how good the prediction on the test data set is.\nThe mean of the five AUC values represents the score, which is compared in the grid search. Thus, cross-validation helps to compensate for variability in the simulated\ndata to derive an accurate estimate of the predictive power of the model. \\\\\nThe maximal AUC of $0.706(1)$ is achieved with a maximum tree depth of 4, $\\eta=0.7$, $\\gamma=5$, and $\\theta=25$. By setting $\\gamma=5$, the result of the grid search of the remaining\nparameters can be visualised and is shown in Figure \\ref{fig:grid}. \\\\\nVarying the values of the hyperparameters only causes small changes in the mean test AUC; in particular, the learning rate has a negligible influence on the AUC. Taking the\nstandard deviations into account shows that the variation of the hyperparameters does not have a significant impact on the performance of the learner.\n%makes it impossible to determine whether the best combination of hyperparameters has the highest AUC. However, due to the\n%small differences, no significant impact is to be expected in this case.\nThe above-mentioned hyperparameter values are chosen for training and testing the model, as they belong to the model with the highest nominal AUC.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{plots/grid_search_weights.pdf}\n  \\caption{The mean test AUC for the different hyperparameter configurations with the error bars indicating the standard deviation of the mean. 
The\n  smaller bars (red, green, pink, yellow) refer to $\\eta$, the larger bars (turquoise, orange, purple) to $\\theta$, and the point marked in red to the best model.}\n  \\label{fig:grid}\n\\end{figure}\n\n\n\n\\section{Machine learning results}\nTo compare the performance of the learner on the training and test datasets, the normalised distributions of the output probabilities of tracks being true are investigated.\nThese are shown in Figure \\ref{fig:output} alongside the corresponding differences.\n\n\\begin{figure}\n  \\hspace{-0.45cm}\n  %\\centering\n  \\begin{subfigure}{0.51\\textwidth}\n      \\centering\n      \\includegraphics[height=0.82\\textwidth]{plots/output_normed_weights.pdf}\n  \\end{subfigure}\n  \\begin{subfigure}{0.51\\textwidth}\n      %\\centering\n      \\hspace{-0.15cm}\n      \\includegraphics[height=0.82\\textwidth]{plots/output_difference_weights.pdf}\n  \\end{subfigure}\n  \\caption{Normalised probability distributions of tracks being true and false for the training and test data set shown on the left.\n  The corresponding difference between the performance of the training and test data is shown on the right.}\n  \\label{fig:output}\n\\end{figure}\n\nThe distributions of the training and test data predictions only show small differences, indicating that no significant overtraining occurs during the training process.\n%For both true and false tracks, the greatest deviations are found at their most probable value, between 0.8 and 0.9.\nBoth true and false tracks have a maximum at around 0.8 as their most probable value, though the peak for the true tracks is higher due to the larger number\nof false tracks assigned to small probabilities of being true tracks. While the number of true tracks increases with the assigned probabilities up to\napproximately 0.8, the false tracks have a second maximum for small probabilities, which is as high as their maximum around 0.8.\nThis means that\nwith the given input, many false combinations of clusters still show similar properties to true tracks.\nAlso noteworthy is that a negligible number of tracks has a probability of 0.9 or higher.\n\nThe impact of the individual features on classifying the reconstructed tracks can be quantified by the number of times the learner splits on each feature.\nIn Figure \\ref{fig:importance}, the feature importance scores of the input features are shown. 
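Reading out these split counts is straightforward in the XGBoost Python API; the sketch below (our own illustration, with placeholder data standing in for the 15 track features and the true/false labels) uses the hyperparameter values chosen by the grid search:

\begin{verbatim}
# Sketch: number of splits per feature ("weight" importance).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))          # placeholder feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder labels

clf = xgb.XGBClassifier(
    objective="binary:logistic",         # probability output, as above
    max_depth=4, learning_rate=0.7,      # grid-search values from above
    gamma=5, min_child_weight=25,
)
clf.fit(X, y)

# importance_type="weight" counts how often each feature is split on
print(clf.get_booster().get_score(importance_type="weight"))
\end{verbatim}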
All three features that proved to be useful for classifying tracks with cuts, namely the $\\chi^2$ value and the horizontal kink angles $\\phi_{x,3}$ and $\\phi_{x,4}$,\nhave a high feature score, highlighting their importance for separating true and false tracks.\nOther features with a high number of splits, like the vertical kink angle of the second sensor,\ncan still be useful for the learner in combination with other features even though their discriminating power on their own is insignificant.\n\\begin{figure}\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{plots/feature_importance_all.pdf}\n  \\caption{The number of splits of each feature that is used as input for the learner.}\n  \\label{fig:importance}\n\\end{figure}\n\nTo evaluate the predictive power of the learner, a ROC curve can be constructed for several thresholds of classifying tracks as true and false based on the\nboosted decision tree output.\nThe ROC curves for the test and training data set are shown in Figure \\ref{fig:auc_comparison} alongside\nthe ROC curves of the feature cuts from section \\ref{sec:feature}.\nWith the AUC of each ROC curve, the performances of the classifying methods can be compared. \\\\%Table \\ref{tab:AUC} lists each of the AUCs.\nThe small differences between the ROC curves of the training and the test dataset indicate that no significant overfitting occurs.\nThe ROC curves of the learner have a noticeably higher AUC than the individual feature cuts and have a better ratio of\nTPR and FPR for each possible cut, showing the effectiveness\nof using machine learning for track classification in comparison to individual feature cuts. For the best combination of cuts, the ratio of the TPR and FPR is still worse than\ncomparable points on the ROC curve of the test data. This means that for the same FPR, the learner consistently achieves a better TPR than the combined cuts.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{plots/roc_curve_all.pdf}\n    \\caption{ROC curves of the machine learner for the training and test data set. In comparison, the ROC curves for the individual feature cuts as well as the\n    TPR and FPR of the best combination of cuts for all three features are depicted. A black diagonal line, representing random classification, is shown\n    in contrast.}\n  \\label{fig:auc_comparison}\n\\end{figure}\n\n%\\begin{table}\n%  \\centering\n%  \\begin{tabular}{c | c c c c c}\n%    \\toprule\n%      & Train & Test & $\\chi^2$ & $\\phi_{x,3}$ & $\\phi_{x,4}$\\\\\n%    \\midrule\n%    AUC & 0.770 & 0.754 & 0.615 & 0.617 & 0.587 \\\\\n%  \\end{tabular}\n%  \\caption{Area Under the Curve for the feature cuts and the train and test data of the machine learner.}\n%  \\label{tab:AUC}\n%\\end{table}\nFurther optimisation of the machine learning results is possible, considering that each cluster can at most be associated with one true track.\nKeeping, for each cluster, only the track with the highest probability of being true enables further rejection of\nfalse tracks. The ROC curve of the test data with the rejection of tracks is shown in Figure \\ref{fig:rejection}. \\\\\nWith the rejection of tracks, the AUC on the test data set decreases to $0.702$. This is due to the rejection of many true negative tracks, which in turn increases\nthe false positive rate. Since the goal is to improve the quality of the pCT image, the false positive rate is not an ideal quantity, as true negative tracks\nare not taken into account for pCT anyway.\n%However, the rejection of true negative tracks does not decrease the quality of the pct image, as these tracks are not taken into account anyway.\n
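\nThe rejection step itself is straightforward to express in code. The following is a minimal sketch, assuming a hypothetical pandas DataFrame \\texttt{tracks} with one row per reconstructed track, a column \\texttt{cluster\\_id} identifying the shared cluster on the first (or last) plane, and the predicted probability \\texttt{p\\_true}:\n\\begin{verbatim}\nimport pandas as pd\n\n# For each cluster, keep only the track with the highest predicted\n# probability of being true; all other tracks sharing that cluster\n# are rejected.\nbest = tracks.loc[tracks.groupby('cluster_id')['p_true'].idxmax()]\n\\end{verbatim}\n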
\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{plots/roc_curve_learner_rejection_weights.pdf}\n  \\caption{ROC curves of the training and test data set and the test data set including the rejection of all but one track from clusters on the first and last plane.}\n  \\label{fig:rejection}\n\\end{figure}\n\nTo evaluate this method, the precision score $P=t_p/(t_p + f_p)$ and the recall score $R=t_p/(t_p + f_n)$ are calculated and compared with the baseline classifier. The precision\nscore describes the ability of the learner to not classify negative events as positive. The recall score, which is the same quantity as the true positive rate,\ndescribes the ability to classify all positive events correctly.\nFigure \\ref{fig:precision} shows the precision of the learner and the individual feature cuts as a function of the recall.\n\n%\\begin{figure}\n%  \\hspace{-0.6cm}\n%  %\\centering\n%  \\begin{subfigure}{0.51\\textwidth}\n%      \\centering\n%      \\includegraphics[height=0.82\\textwidth]{plots/roc_curve_precision_train.pdf}\n%  \\end{subfigure}\n%  \\begin{subfigure}{0.51\\textwidth}\n%      %\\centering\n%      %\\hspace{0.95cm}\n%      \\includegraphics[height=0.82\\textwidth]{plots/feature_cuts_precision.pdf}\n%  \\end{subfigure}\n%  \\caption{Precision as a function of the recall for the train and test data set including the rejection of all but one track from clusters of the first and last\n%           sensor for the test dataset. As a comparison, the precision curves are also shown for cuts on the $\\chi^2$, $\\phi_{x,3}$, and $\\phi_{x,4}$ features.}\n%  \\label{fig:precision}\n%\\end{figure}\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[height=0.6\\textwidth]{plots/roc_curve_precision_all.pdf}\n  \\caption{Precision as a function of the recall for the training and test data set including the rejection of all but one track from clusters of the first and last\n           sensor for the test dataset. As a comparison, the precision curves are also shown for cuts on the $\\chi^2$, $\\phi_{x,3}$, and $\\phi_{x,4}$ features.}\n  \\label{fig:precision}\n\\end{figure}\nThe AUC of the precision curve increases with the rejection of tracks,\nindicating a better performance with the rejection method. In comparison to the regular test data set, the precision score\nof the test data set with the rejection of tracks increases noticeably for recall scores larger than 0.1.\n%This is also reflected in the AUC values of the curves, with the precision curve of the rejection method having the highest AUC.\nFurthermore, the test data set achieves a higher precision score than the individual feature cuts for all recall scores, highlighting the advantages of the machine\nlearning approach. The precision is especially stable for the test dataset with the rejection method for recall scores larger than $0.1$,\nwhich means that high recall scores\nare achievable without decreasing the precision score significantly.\nHowever, this also means that no working point can be chosen that has an exceptionally high precision score. 
Useful choices for working points might be around\na recall score of $0.8$ with a precision score of approximately $0.636$ due to a noticeable decrease of precision for higher recall scores.\n", "meta": {"hexsha": "a0a52ca378f932cb09b1bb7421fda84f588551fc", "size": 24371, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/05_xgboost.tex", "max_stars_repo_name": "Christopherkrause1/Masterarbeit", "max_stars_repo_head_hexsha": "6611865e934853a839bb4827e9f3a89d0163d5e1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/05_xgboost.tex", "max_issues_repo_name": "Christopherkrause1/Masterarbeit", "max_issues_repo_head_hexsha": "6611865e934853a839bb4827e9f3a89d0163d5e1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/05_xgboost.tex", "max_forks_repo_name": "Christopherkrause1/Masterarbeit", "max_forks_repo_head_hexsha": "6611865e934853a839bb4827e9f3a89d0163d5e1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.9876923077, "max_line_length": 217, "alphanum_fraction": 0.7730499364, "num_tokens": 5866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.863391602943619, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.5804855434642109}}
{"text": "\\chapter{The ring of integers}\n\\section{Norms and traces}\n\\prototype{$a+b\\sqrt2$ as an element of $\\QQ(\\sqrt2)$ has norm $a^2-2b^2$ and trace $2a$.}\nRemember when you did olympiads and we had like $a^2+b^2$ was the ``norm'' of $a+bi$?\nCool, let me tell you what's actually happening.\n\nFirst, let me make precise the notion of a conjugate.\n\\begin{definition}\n\tLet $\\alpha$ be an algebraic number, and let $P(x)$ be its minimal polynomial,\n\tof degree $m$.\n\tThen the $m$ roots of $P$ are the (Galois) \\vocab{conjugates} of $\\alpha$.\n\\end{definition}\nIt's worth showing at the moment that there are no repeated conjugates.\n\\begin{lemma}[Irreducible polynomials have distinct roots]\n\tAn irreducible polynomial in $\\QQ[x]$ cannot have a complex double root.\n\t\\label{lem:irred_complex}\n\\end{lemma}\n\\begin{proof}\n\tLet $f(x) \\in \\QQ[x]$ be the irreducible polynomial and assume it has a double root $\\alpha$.\n\t\\textbf{Take the derivative $f'(x)$.}\n\tThis derivative has three interesting properties.\n\t\\begin{itemize}\n\t\t\\ii The degree of $f'$ is one less than the degree of $f$.\n\t\t\\ii The polynomials $f$ and $f'$ are not relatively prime\n\t\tbecause they share a factor $x-\\alpha$.\n\t\t\\ii The coefficients of $f'$ are also in $\\QQ$.\n\t\\end{itemize}\n\tConsider $g = \\gcd(f, f')$. We must have $g \\in \\QQ[x]$ by Euclidean algorithm.\n\tBut the first two facts about $f'$ ensure that $g$ is nonconstant\n\tand $\\deg g < \\deg f$.\n\tYet $g$ divides $f$,\n\tcontradiction to the fact that $f$ should be a minimal polynomial.\n\\end{proof}\nHence $\\alpha$ has exactly as many conjugates as the degree of $\\alpha$.\n\nNow, we would \\emph{like} to define the \\emph{norm} of an element $\\Norm(\\alpha)$\nas the product of its conjugates.\nFor example, we want $2+i$ to have norm $(2+i)(2-i) = 5$,\nand in general for $a+bi$ to have norm $a^2+b^2$.\nIt would be \\emph{really cool} if the norm was multiplicative;\nwe already know this is true for complex numbers!\n\nUnfortunately, this doesn't quite work: consider\n\\[ \\Norm(2+i) = 5 \\text{ and } \\Norm(2-i) = 5. \\]\nBut $(2+i)(2-i) = 5$, which doesn't have norm $25$ like we want,\nsince $5$ is degree $1$ and has no conjugates at all.\nThe reason this ``bad'' thing is happening is that we're\ntrying to define the norm of an \\emph{element},\nwhen we really ought to be defining the norm of an element\n\\emph{with respect to a particular $K$}.\n\nWhat I'm driving at is that the norm should have\ndifferent meanings depending on which field you're in.\nIf we think of $5$ as an element of $\\QQ$, then its norm is $5$.\nBut thought of as an element of $\\QQ(i)$, its norm really ought to be $25$.\nLet's make this happen: for $K$ a number field, we will now define $\\Norm_{K/\\QQ}(\\alpha)$\nto be the norm of $\\alpha$ \\emph{with respect to $K$} as follows.\n\\begin{definition}\n\tLet $\\alpha \\in K$ have degree $n$, so $\\QQ(\\alpha) \\subseteq K$, and set $k = (\\deg K) / n$.\n\tThe \\vocab{norm} of $\\alpha$ is defined as\n\t\\[ \\Norm_{K/\\QQ}(\\alpha) \\defeq \\left( \\prod \\text{Galois conj of $\\alpha$} \\right)^k. \\]\n\tThe \\vocab{trace} is defined as\n\t\\[ \\Tr_{K/\\QQ}(\\alpha) \\defeq k \\cdot \\left( \\sum \\text{Galois conj of $\\alpha$} \\right). 
\\]\nThe exponent of $k$ is a ``correction factor'' that makes the norm of $5$ into $5^2=25$\nwhen we view $5$ as an element of $\\QQ(i)$ rather than an element of $\\QQ$.\nFor a ``generic'' element of $K$, we expect $k = 1$.\n\\begin{exercise}\n\tUse what you know about nested vector spaces to convince\n\tyourself that $k$ is actually an integer.\n\\end{exercise}\n\\begin{example}[Norm of $a+b\\sqrt2$]\n\tLet $\\alpha = a+b\\sqrt2 \\in \\QQ(\\sqrt2)$.\n\tIf $b \\neq 0$, then $\\alpha$ and $K$ both have degree $2$.\n\tThus the only conjugates of $\\alpha$ are $a \\pm b\\sqrt2$, which gives\n\tthe norm \\[ (a+b\\sqrt2)(a-b\\sqrt2) = a^2-2b^2. \\]\n\tThe trace is $(a-b\\sqrt2) + (a+b\\sqrt2) = 2a$.\n\n\tNicely, the formulas $a^2-2b^2$ and $2a$ also work when $b=0$.\n\\end{example}\nOf importance is:\n\\begin{proposition}[Norms and traces are rational integers]\n\tIf $\\alpha$ is an algebraic integer, its norm and trace\n\tare rational integers.\n\\end{proposition}\n\\begin{ques}\n\tProve it. (Vieta formula.)\n\\end{ques}\n\nThat's great, but it leaves a question unanswered:\nwhy is the norm multiplicative?\nTo answer this, I have to give a new definition of norm and trace.\n\n\\begin{theorem}[Morally correct definition of norm and trace]\n\tLet $K$ be a number field of degree $n$, and let $\\alpha \\in K$.\n\tLet $\\mu_\\alpha \\colon K \\to K$ denote the map \\[ x \\mapsto \\alpha x \\]\n\tviewed as a linear map of $\\QQ$-vector spaces.\n\tThen,\n\t\\begin{itemize}\n\t\t\\ii the norm of $\\alpha$ equals the determinant $\\det \\mu_\\alpha$, and\n\t\t\\ii the trace of $\\alpha$ equals the trace $\\Tr \\mu_\\alpha$.\n\t\\end{itemize}\n\\end{theorem}\nSince the trace and determinant don't depend on the choice of basis,\nyou can pick whatever basis you want\nand use whatever definition you got in high school.\nFantastic, right?\n\n\\begin{example}[Explicit computation of matrices for $a+b\\sqrt2$]\n\tLet $K = \\QQ(\\sqrt2)$, and let $1$, $\\sqrt 2$ be the basis of $K$.\n\tLet \\[ \\alpha = a + b \\sqrt 2 \\] (possibly even $b = 0$), and notice that\n\t\\[ \\left( a+b\\sqrt2 \\right) \\left(x+y\\sqrt2 \\right)\n\t\t= (ax+2yb) + (bx+ay)\\sqrt2. \\]\n\tWe can rewrite this in matrix form as\n\t\\[\n\t\t\\begin{bmatrix}\n\t\t\ta & 2b \\\\\n\t\t\tb & a\n\t\t\\end{bmatrix}\n\t\t\\begin{bmatrix}\n\t\t\tx \\\\ y\n\t\t\\end{bmatrix}\n\t\t=\n\t\t\\begin{bmatrix}\n\t\t\tax+2yb \\\\ bx+ay\n\t\t\\end{bmatrix}.\n\t\\]\n\tConsequently, we can interpret $\\mu_\\alpha$ as the matrix\n\t\\[ \\mu_\\alpha =\n\t\t\\begin{bmatrix}\n\t\t\ta & 2b \\\\ b & a\n\t\t\\end{bmatrix}. \\]\n\tOf course, the matrix will change if we pick a different basis,\n\tbut the determinant and trace do not: they are always given by\n\t\\[ \\det \\mu_\\alpha = a^2-2b^2 \\text{ and }\n\t\t\\Tr \\mu_\\alpha = 2a. \\]\n\\end{example}\nThis interpretation explains why the same formula should work for $a+b\\sqrt 2$\neven in the case $b = 0$.\n\n\\begin{proof}\nI'll prove the result for just the norm; the trace falls out similarly.\nSet\n\\[ n = \\deg \\alpha, \\qquad kn = \\deg K. 
\\]\nThe proof is split into two parts, depending on whether or not $k=1$.\n\\begin{subproof}[Proof if $k=1$]\n\tSet $n = \\deg \\alpha = \\deg K$.\n\tThus the norm actually \\emph{is} the product of the Galois conjugates.\n\tAlso, \\[ \\{1, \\alpha, \\dots, \\alpha^{n-1}\\} \\]\n\tis linearly independent in $K$, and hence a basis (as $\\dim K = n$).\n\tLet's use this as the basis for $\\mu_\\alpha$.\n\n\tLet \\[ x^n+c_{n-1}x^{n-1} + \\dots + c_0  \\]be the minimal polynomial of $\\alpha$.\n\tThus $\\mu_\\alpha(1) = \\alpha$, $\\mu_\\alpha(\\alpha) = \\alpha^2$, and so on,\n\tbut $\\mu_\\alpha(\\alpha^{n-1}) = -c_{n-1}\\alpha^{n-1} - \\dots - c_0$.\n\tTherefore, $\\mu_\\alpha$ is given by the matrix\n\t\\[\n\t\tM =\n\t\t\\begin{bmatrix}\n\t\t\t0 & 0 & 0 & \\dots & 0 & -c_0 \\\\\n\t\t\t1 & 0 & 0 & \\dots & 0 & -c_1 \\\\\n\t\t\t0 & 1 & 0 & \\dots & 0 & -c_2 \\\\\n\t\t\t\\vdots & \\vdots & \\vdots & \\ddots & 0 & -c_{n-2} \\\\\n\t\t\t0 & 0 & 0 & \\dots & 1 & -c_{n-1}\n\t\t\\end{bmatrix}.\n\t\\]\n\tThus \\[ \\det M = (-1)^n c_0 \\] and we're done by Vieta's formulas.\n\\end{subproof}\n\\begin{subproof}[Proof if $k > 1$]\n\tWe have nested vector spaces\n\t\\[ \\QQ \\subseteq \\QQ(\\alpha) \\subseteq K. \\]\n\tLet $e_1$, \\dots, $e_k$ be a $\\QQ(\\alpha)$-basis for $K$\n\t(meaning: interpret $K$ as a vector space over $\\QQ(\\alpha)$, and pick that basis).\n\tSince $\\{1, \\alpha, \\dots, \\alpha^{n-1}\\}$ is a $\\QQ$ basis for $\\QQ(\\alpha)$,\n\tthe elements\n\t\\[\n\t\t\t\\begin{array}{cccc}\n\t\t\te_1, & e_1\\alpha, & \\dots, & e_1\\alpha^{n-1} \\\\\n\t\t\te_2, & e_2\\alpha, & \\dots, & e_2\\alpha^{n-1} \\\\\n\t\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\t\te_k, & e_k\\alpha, & \\dots, & e_k\\alpha^{n-1}\n\t\t\t\\end{array}\n\t\\]\n\tconstitute a $\\QQ$-basis of $K$.\n\tUsing \\emph{this} basis, the map $\\mu_\\alpha$ looks like\n\t\\[\n\t\t\t\\underbrace{\n\t\t\t\\begin{bmatrix}\n\t\t\t\t\tM & & & \\\\\n\t\t\t\t\t& M & & \\\\\n\t\t\t\t\t& & \\ddots & \\\\\n\t\t\t\t\t& & & M\n\t\t\t\\end{bmatrix}\n\t\t\t}_{\\text{$k$ times}}\n\t\\]\n\twhere $M$ is the same matrix as above:\n\twe just end up with one copy of our old matrix for each $e_i$.\n\tThus $\\det \\mu_\\alpha = (\\det M)^k$, as needed. \\qedhere\n\\end{subproof}\n\\begin{ques}\n\tVerify the result for traces as well. 
\\qedhere\n\\end{ques}\n\\end{proof}\n\nFrom this it follows immediately that\n\\[ \\Norm_{K/\\QQ}(\\alpha\\beta) = \\Norm_{K/\\QQ}(\\alpha)\\Norm_{K/\\QQ}(\\beta) \\]\nbecause by definition we have\n\\[ \\mu_{\\alpha\\beta} = \\mu_\\alpha \\circ \\mu_\\beta, \\]\nand the determinant is multiplicative.\nIn the same way, the trace is additive.\n\n\\section{The ring of integers}\n\\prototype{If $K = \\QQ(\\sqrt 2)$, then $\\OO_K = \\ZZ[\\sqrt 2]$.\n\tBut if $K = \\QQ(\\sqrt 5)$, then $\\OO_K = \\ZZ[\\frac{1+\\sqrt5}{2}]$.}\n\n$\\ZZ$ makes for better number theory than $\\QQ$.\nIn the same way, focusing on the \\emph{algebraic integers} of $K$\ngives us some really nice structure, and we'll do that here.\n\n\\begin{definition}\n\tGiven a number field $K$, we define\n\t\\[ \\OO_K \\defeq K \\cap \\ol\\ZZ \\]\n\tto be the \\vocab{ring of integers} of $K$;\n\tin other words $\\OO_K$ consists of the algebraic integers of $K$.\n\\end{definition}\n\nWe do the classical example of a quadratic field now.\nBefore proceeding, I need to write a silly number theory fact.\n\\begin{exercise}\n\t[Annoying but straightforward]\n\tLet $a$ and $b$ be rational numbers, and $d$ a squarefree positive integer.\n\t\\begin{itemize}\n\t\t\\ii If $d \\equiv 2, 3 \\pmod 4$, prove that\n\t\t$2a, a^2-db^2 \\in \\ZZ$ if and only if $a,b \\in \\ZZ$.\n\t\t\\ii For $d \\equiv 1 \\pmod 4$, prove that\n\t\t$2a, a^2-db^2 \\in \\ZZ$ if and only if $a,b \\in \\ZZ$\n\t\tOR if $a -\\half, b-\\half \\in \\ZZ$.\n\t\\end{itemize}\n\tYou'll need to take mod $4$.\n\\end{exercise}\n\n\\begin{example}\n\t[Ring of integers of $K = \\QQ(\\sqrt3)$]\n\tLet $K$ be as above.\n\tWe claim that \\[ \\OO_K = \\ZZ[\\sqrt 3] = \\left\\{ m + n\\sqrt 3 \\mid m,n \\in \\ZZ  \\right\\}. \\]\n\tWe set $\\alpha = a + b \\sqrt 3$.\n\tThen $\\alpha \\in \\OO_K$ exactly when its minimal polynomial has integer coefficients.\n\n\tIf $b = 0$, then the minimal polynomial is $x-\\alpha=x-a$,\n\tand thus $\\alpha$ works if and only if it's an integer.\n\tIf $b \\neq 0$, then the minimal polynomial is\n\t\\[ (x-a)^2 - 3b^2 = x^2 - 2a \\cdot x + (a^2-3b^2). \\]\n\tFrom the exercise, this occurs exactly for $a,b \\in \\ZZ$.\n\\end{example}\n\\begin{example}\n\t[Ring of integers of $K = \\QQ(\\sqrt 5)$]\n\tWe claim that in this case\n\t\\[ \\OO_K = \\ZZ\\left[ \\frac{1+\\sqrt5}{2} \\right]\n\t= \\left\\{ m + n \\cdot \\frac{1+\\sqrt5}{2} \\mid m,n \\in \\ZZ \\right\\}. \\]\n\tThe proof is exactly the same, except the exercise tells us instead\n\tthat for $b \\neq 0$, we have either the possibility that $a,b \\in \\ZZ$\n\tor that $a,b \\in \\ZZ - \\half$.\n\tThis reflects the fact that $\\frac{1+\\sqrt5}{2}$ is the root of $x^2-x-1 = 0$;\n\tno such thing is possible with $\\sqrt 3$.\n\\end{example}\nIn general, the ring of integers of $K = \\QQ(\\sqrt d)$ is\n\\[ \\OO_K\n\t=\n\t\\begin{cases}\n\t\t\\ZZ[\\sqrt d] & d\\equiv 2,3 \\pmod 4 \\\\[1em]\n\t\t\\ZZ\\left[ \\frac{1+\\sqrt d}{2} \\right] & d \\equiv 1 \\pmod 4.\n\t\\end{cases}\n\\]\nWhat we're going to show is that $\\OO_K$ behaves in $K$\na lot like the integers do in $\\QQ$.\nFirst we show $K$ consists of quotients of numbers in $\\OO_K$.\nIn fact, we can do better:\n\\begin{example}[Rationalizing the denominator]\n\tFor example, consider $K = \\QQ(\\sqrt3)$.\n\tThe number $x = \\frac{1}{4+\\sqrt3}$ is an element of $K$, but by\n\t``rationalizing the denominator'' we can write\n\t\\[ \\frac{1}{4+\\sqrt3} = \\frac{4-\\sqrt3}{13}. 
\\]\n\tSo we see that in fact, $x$ is $\\frac{1}{13}$ of an integer in $\\OO_K$.\n\\end{example}\n\nThe theorem holds true more generally.\n\\begin{theorem}[$K = \\QQ \\cdot \\OO_K$]\n\tLet $K$ be a number field, and let $x \\in K$ be any element.\n\tThen there exists an integer $n$ such that $nx \\in \\OO_K$;\n\tin other words, \\[ x = \\frac 1n \\alpha \\] for some $\\alpha \\in \\OO_K$.\n\\end{theorem}\n\\begin{exercise}\n\tProve this yourself.\n\t(Start by using the fact that $x$ has a minimal\n\tpolynomial with rational coefficients.\n\tAlternatively, take the norm.)\n\\end{exercise}\n\nNow we are going to show $\\OO_K$ is a ring;\nwe'll check it is closed under addition and multiplication.\nTo do so, the easiest route is:\n\\begin{lemma}[$\\alpha \\in \\ol\\ZZ$ $\\iff$ {$\\ZZ[\\alpha]$} finitely generated]\n\tLet $\\alpha \\in \\ol\\QQ$.\n\tThen $\\alpha$ is an algebraic integer if and only if\n\tthe abelian group $\\ZZ[\\alpha]$ is finitely generated.\n\\end{lemma}\n\\begin{proof}\n\tNote that $\\alpha$ is an algebraic integer if and only if it's the root\n\tof some nonzero, monic polynomial with integer coefficients.\n\tSuppose first that\n\t\\[ \\alpha^N = c_{N-1} \\alpha^{N-1} + c_{N-2} \\alpha^{N-2} + \\dots + c_0. \\]\n\tThen the set $1, \\alpha, \\dots, \\alpha^{N-1}$ generates $\\ZZ[\\alpha]$,\n\tsince we can repeatedly replace $\\alpha^N$ until all powers of $\\alpha$\n\tare less than $N$.\n\n\tConversely, suppose that $\\ZZ[\\alpha]$ is finitely generated\n\tby some $b_1, \\dots, b_m$.\n\tViewing the $b_i$ as polynomials in $\\alpha$, we can select a large integer\n\t$N$ (say $N = \\deg b_1 + \\dots + \\deg b_m + 2015$)\n\tand express $\\alpha^N$ in the $b_i$'s to get\n\t\\[ \\alpha^N = c_1b_1(\\alpha) + \\dots + c_mb_m(\\alpha). \\]\n\tThe above gives us a monic polynomial in $\\alpha$,\n\tand the choice of $N$ guarantees it is not zero.\n\tSo $\\alpha$ is an algebraic integer.\n\\end{proof}\n\\begin{example}[$\\half$ isn't an algebraic integer]\n\tWe already know $\\half$ isn't an algebraic integer.\n\tSo we expect\n\t\\[ \\ZZ \\left[ \\half \\right] = \\left\\{ \\frac{a}{2^m}\n\t\t\\mid a, m \\in \\ZZ \\text{ and } m \\ge 0 \\right\\} \\]\n\tto not be finitely generated, and this is the case.\n\\end{example}\n\\begin{ques}\n\tTo make the last example concrete:\n\tname all the elements of $\\ZZ[\\half]$\n\tthat cannot be written as an integer combination of\n\t\\[ \\left\\{ \\frac12, \\frac{7}{8}, \\frac{13}{64},\n\t\t\\frac{2015}{4096}, \\frac{1}{1048576} \\right\\} \\]\n\\end{ques}\n\nNow we can state the theorem.\n\\begin{theorem}[Algebraic integers are closed under $+$ and $\\times$]\n\tThe set $\\ol\\ZZ$ is closed under addition and multiplication;\n\ti.e.\\ it is a ring.\n\tIn particular, $\\OO_K$ is also a ring for any number field $K$.\n\\end{theorem}\n\\begin{proof}\n\tLet $\\alpha, \\beta \\in \\ol\\ZZ$.\n\tThen $\\ZZ[\\alpha]$ and $\\ZZ[\\beta]$ are finitely generated.\n\tHence so is $\\ZZ[\\alpha, \\beta]$.\n\t(Details: if $\\ZZ[\\alpha]$ has $\\ZZ$-basis $a_1, \\dots, a_m$ and\n\t$\\ZZ[\\beta]$ has $\\ZZ$-basis $b_1, \\dots, b_n$,\n\tthen take the $mn$ elements $a_ib_j$.)\n\n\tNow $\\ZZ[\\alpha \\pm \\beta]$ and $\\ZZ[\\alpha \\beta]$ are subsets of $\\ZZ[\\alpha,\\beta]$ and so they are also finitely generated.\n\tHence $\\alpha \\pm \\beta$ and $\\alpha\\beta$ are algebraic integers.\n\\end{proof}\n\nIn fact, something even better is true.\nAs you saw, for $\\QQ(\\sqrt 3)$ we had $\\OO_K = \\ZZ[\\sqrt 3]$;\nin other words, $\\OO_K$ was generated by $1$ and 
$\\sqrt 3$.\nSomething similar was true for $\\QQ(\\sqrt 5)$.\nWe claim that in fact, the general picture looks exactly like this.\n\n\\begin{theorem}[$\\OO_K$ is a free $\\ZZ$-module of rank $n$]\n\tLet $K$ be a number field of degree $n$.\n\tThen $\\OO_K$ is a free $\\ZZ$-module of rank $n$,\n\ti.e.\\ $\\OO_K \\cong \\ZZ^{\\oplus n}$ as an abelian group.\n\tIn other words, $\\OO_K$ has a $\\ZZ$-basis of $n$ elements as\n\t\\[ \\OO_K = \\left\\{ c_1\\alpha_1 + \\dots\n\t\t\t+ c_{n-1}\\alpha_{n-1} + c_n\\alpha_n \\mid c_i \\in \\ZZ \\right\\} \\]\n\twhere $\\alpha_i$ are algebraic integers in $\\OO_K$.\n\t\\label{thm:OK_free_Z_module}\n\\end{theorem}\n\n\\begin{proof}\n\tTODO: add this in.\n\t(Originally, there was an incorrect proof;\n\tthe mistake was pointed out 2020-02-12 on\n\t\\url{https://math.stackexchange.com/q/3543641/229197}\n\tand I hope to supply a correct one soon.)\n%\tThis is a kind of fun proof, so it may be worth\n%\ttrying to work out yourself before reading it.\n%\n%\tPick a $\\QQ$-basis of $\\alpha_1$, \\dots, $\\alpha_n$ of $K$ and WLOG\n%\tthe $\\alpha_i$ are in $\\OO_K$ by scaling.\n%\n%\tConsider $\\alpha \\in \\OO_K$,\n%\tand write $\\alpha = c_1\\alpha_1 + \\dots + c_n\\alpha_n$.\n%\tWe will try to bound the denominators of $c_i$.\n%\tLook at $\\Norm(\\alpha) = \\Norm(c_1\\alpha_1 + \\dots + c_n\\alpha_n)$.\n%\n%\tIf we do a giant norm computation, we find that $\\Norm(\\alpha)$\n%\tis a polynomial in the $c_i$ with fixed coefficients.\n%\t(For example, $\\Norm(c_1 + c_2\\sqrt 2) = c_1^2 - 2c_2^2$, say.)\n%\tBut $\\Norm(\\alpha)$ is an \\emph{integer}, so the denominators of the $c_i$\n%\thave to be bounded by some very large integer $N$.\n%\tThus\n%\t\\[ \\bigoplus_i \\ZZ \\cdot \\alpha_i \\subseteq \\OO_K\n%\t\t\\subseteq \\frac 1N \\bigoplus_i \\ZZ \\cdot \\alpha_i.  
\\]\n%\tThe latter inclusion shows that $\\OO_K$ is a subgroup\n%\tof a free group, and hence it is itself free.\n%\tOn the other hand, the first inclusion shows it's rank $n$.\n\\end{proof}\n\nThis last theorem shows that in many ways $\\OO_K$ is a ``lattice'' in $K$.\nThat is, for a number field $K$ we can find $\\alpha_1$, \\dots, $\\alpha_n$\nin $\\OO_K$ such that\n\\begin{align*}\n\t\\OO_K &\\cong \\alpha_1\\ZZ \\oplus \\alpha_2\\ZZ \\oplus \\dots \\oplus \\alpha_n\\ZZ \\\\\n\tK &\\cong \\alpha_1\\QQ \\oplus \\alpha_2\\QQ \\oplus \\dots \\oplus \\alpha_n\\QQ\n\\end{align*}\nas abelian groups.\n\n\\section{On monogenic extensions}\nRecall that it turned out number fields $K$ could all be\nexpressed as $\\QQ(\\alpha)$ for some $\\alpha$.\nWe might hope that something similar is true of the ring of integers:\nthat we can write \\[ \\OO_K = \\ZZ[\\theta] \\]\nin which case $\\{1, \\theta, \\dots, \\theta^{n-1}\\}$\nserves both as a basis of $K$ and as the $\\ZZ$-basis for $\\OO_K$ (here $n = [K:\\QQ]$).\nIn other words, we hope that the basis of $\\OO_K$ is actually a ``power basis''.\n\nThis is true for the most common examples we use:\n\\begin{itemize}\n\t\\ii the quadratic field, and\n\t\\ii the cyclotomic field in \\Cref{prob:ring_int_cyclotomic}.\n\\end{itemize}\nUnfortunately, it is not true in general:\nthe first counterexample is $\\QQ(\\alpha)$ for $\\alpha$ a root of $X^3-X^2-2X-8$.\n\nWe call an extension with this nice property \\vocab{monogenic}.\nAs we'll later see, monogenic extensions have a really nice factoring algorithm,\n\\Cref{thm:factor_alg}.\n\n\\section{\\problemhead}\n\n\\begin{sproblem} % trivial\n\t\\label{prob:OK_unit_norm}\n\tShow that $\\alpha$ is a unit of $\\OO_K$ (meaning $\\alpha\\inv \\in \\OO_K$)\n\tif and only if $\\Norm_{K/\\QQ}(\\alpha) = \\pm 1$.\n\t\\begin{hint}\n\t\tThe norm is multiplicative and equal to product of Galois conjugates.\n\t\\end{hint}\n\\end{sproblem}\n\n\\begin{sproblem}\n\tLet $K$ be a number field.\n\tWhat is the field of fractions of $\\OO_K$?\n\t\\begin{hint}\n\t\tIt's isomorphic to $K$.\n\t\\end{hint}\n\\end{sproblem}\n\n\\begin{problem}\n\t[Russian olympiad 1984]\n\tFind all integers $m$ and $n$ such that\n\t\\[ \\left( 5+3\\sqrt2 \\right)^m = \\left( 3+5\\sqrt2 \\right)^n. \\]\n\t\\begin{hint}\n\t\tTaking the standard norm on $\\QQ(\\sqrt2)$ will destroy it.\n\t\\end{hint}\n\\end{problem}\n\n\n\\begin{problem}\n\t[USA TST 2012]\n\tDecide whether there exist $a,b,c > 2010$ satisfying\n\t\\[ a^3+2b^3+4c^3=6abc+1. 
\\]\n\t\\begin{hint}\n\t\tNorm in $\\QQ(\\sqrt[3]2)$.\n\t\\end{hint}\n\\end{problem}\n\n\\begin{dproblem}[Cyclotomic Field]\n\t\\yod\n\t\\label{prob:ring_int_cyclotomic}\n\tLet $p$ be an odd rational prime and $\\zeta_p$ a primitive $p$th root of unity.\n\tLet $K = \\QQ(\\zeta_p)$.\n\tProve that $\\OO_K = \\ZZ[\\zeta_p]$.\n\t(In fact, the result is true even if $p$ is not a prime.)\n\t\\begin{hint}\n\t\tObviously $\\ZZ[\\zeta_p] \\subseteq \\OO_K$, so our goal is to show the reverse inclusion.\n\t\tShow that for any $\\alpha \\in \\OO_K$, the trace of $\\alpha(1-\\zeta_p)$ is divisible by $p$.\n\t\tGiven $x = a_0 + a_1\\zeta_p + \\dots + a_{p-2}\\zeta^{p-2} \\in \\OO_K$ (where $a_i \\in \\QQ$),\n\t\tconsider $(1-\\zeta_p)x$.\n\t\\end{hint}\n\\end{dproblem}\n", "meta": {"hexsha": "b409b6be1cfc31a4650159d377c369128a065979", "size": 19122, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/alg-NT/norm-trace.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/alg-NT/norm-trace.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/alg-NT/norm-trace.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1844262295, "max_line_length": 128, "alphanum_fraction": 0.6602342851, "num_tokens": 6695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8723473879530492, "lm_q1q2_score": 0.5804691744155078}}
{"text": "\\input{settings}\n\n\\begin{document}\n\n\\lhead{Shallow Neural Networks}\n\\rhead{ Deep Learning specialization \\\\ Neural Networks and Deep Learning}\n\\cfoot{\\thepage \\ of \\pageref{LastPage}}\n\n\\subsection*{Neural Network Representation}\n\n    The following diagram represents a two layer Neural Network:\n\n    \\begin{tikzpicture}[\n        % define styles \n        clear/.style={ \n            draw=none,\n            fill=none\n        },\n        net/.style={\n            matrix of nodes,\n            nodes={ draw, circle, inner sep=10pt },\n            nodes in empty cells,\n            column sep=2cm,\n            row sep=-9pt\n        },\n        >=latex\n    ]\n    % define matrix mat to hold nodes\n    % using net as default style for cells\n    \\matrix[net] (mat)\n    {\n    % Define layer headings\n    |[clear]| \\parbox{1.3cm}{\\centering Input\\\\layer} \n    & |[clear]| \\parbox{1.3cm}{\\centering Hidden\\\\layer} \n    & |[clear]| \\parbox{1.3cm}{\\centering Output\\\\layer} \\\\\n    |[clear]|  &  $a_1^{[1]}$  & |[clear]| \\\\\n    $x_1$  & |[clear]|  & |[clear]|\\\\\n    |[clear]|  &  $a_2^{[1]}$ & |[clear]|\\\\\n    $x_2$  & |[clear]|  &  $a^{[2]}$\\\\\n    |[clear]|  &  $a_3^{[1]}$ & |[clear]| \\\\\n    $x_3$  & |[clear]|  & |[clear]| \\\\\n    |[clear]|  &  $a_4^{[1]}$ & |[clear]| \\\\\n    };\n\n    %left most lines into input layers\n    \\foreach \\ai in {3,5,7}\n    \\draw[<-] (mat-\\ai-1) -- +(-2cm,0);\n\n    %lines from a_{i}^{0} to each a_{j}^{1}\n    \\foreach \\ai in {3,5,7} {\n    \\foreach \\aii in {2,4,6,8}\n        \\draw[->] (mat-\\ai-1) -- (mat-\\aii-2);\n        }\n\n    %lines from a_{i}^{1} to a_{0}^{2}\n    \\foreach \\ai in {2,4,6,8}\n    \\draw[->] (mat-\\ai-2) -- (mat-5-3);\n    \n    % right most line with Output label\n    \\draw[->] (mat-5-3) -- node[above] {$\\hat{y}$} +(2cm,0);\n    \\end{tikzpicture}\n\n    In this case, two parameters are associated with the hidden layer: \n    \\begin{align*}\n        W^{[1]} \\in \\R^{4\\times3} \\\\ \n        b^{[1]} \\in \\R^{4\\times1}\n    \\end{align*}\n    Similarly two parameters are associated with the output layer: \n    \\begin{align*}\n        W^{[2]} \\in \\R^{1\\times4} \\\\\n        b^{[2]} \\in \\R\n    \\end{align*}\n\n\n\\subsection*{Computing a Neural Network's Output (one input)}\n\n    Remember that for logistic regresion we did the following computation\n\n    \\begin{tikzpicture}[\n        % define styles \n        clear/.style={ \n            draw=none,\n            fill=none\n        },\n        net/.style={\n            matrix of nodes,\n            nodes={ draw, circle, inner sep=10pt },\n            nodes in empty cells,\n            column sep=2cm,\n            row sep=-9pt\n        },\n        >=latex\n    ]\n    % define matrix mat to hold nodes\n    % using net as default style for cells\n    \\matrix[net] (mat)\n    {\n    % Define layer headings\n    |[clear]| \\parbox{1.3cm}{\\centering Input\\\\layer} \n    & |[clear]| \\parbox{1.3cm}{\\centering Hidden\\\\layer} \n    & |[clear]| \\parbox{1.3cm}{\\centering Output\\\\layer} \\\\\n\n    $x_1$  & |[clear]|  & |[clear]| \\\\\n    |[clear]|  & |[clear]|  & |[clear]| \\\\\n    $x_2$  & $\\sigma(w^tx + b)$  & $\\Hat{y}$ \\\\\n    |[clear]|  & |[clear]|  & |[clear]| \\\\\n    $x_3$  & |[clear]|  & |[clear]| \\\\\n    };\n\n    %lines from a_{i}^{0} to each a_{j}^{1}\n    \\foreach \\ai in {2,4,6} {\n    \\foreach \\aii in {4}\n        \\draw[->] (mat-\\ai-1) -- (mat-\\aii-2);\n        }\n\n    %lines from 
a_{i}^{1} to a_{0}^{2}\n    \\foreach \\ai in {4}\n    \\draw[->] (mat-\\ai-2) -- (mat-4-3);\n    \n    \\end{tikzpicture}\n\n    Another way to think about a neural network is that each unit in a hidden layer is \n    performing a logistic regression. Each of the logistic regression units has its own\n    $w$ and $b$ parameters. So, in a neural network, each neuron does the \n    following computation:\n    \\begin{align*}\n        z_i^{[1]} &= w_i^{[1]}x + b_i^{[1]} \\\\\n        a_i^{[1]} &= \\sigma(z_i^{[1]}) \\\\\n        z_i^{[2]} &= w_i^{[2]}a^{[1]} + b_i^{[2]} \\\\\n        a_i^{[2]} &= \\sigma(z_i^{[2]})\n    \\end{align*}\n    Or in matrix notation\n    \\begin{align*}\n        z^{[1]} &= W^{[1]}x + b^{[1]} \\\\\n        a^{[1]} &= \\sigma(z^{[1]}) \\\\\n        z^{[2]} &= W^{[2]}a^{[1]} + b^{[2]}  \\\\\n        a^{[2]} &= \\sigma(z^{[2]})\n    \\end{align*}\n\n\\subsection*{Computing a Neural Network's Output (multiple inputs)}\n\n    We define the following matrices:\n    \\begin{align*}\n        X &= \\begin{bmatrix} x^{(1)} | \\cdots | x^{(m)} \\end{bmatrix} \\\\\n        Z^{[i]} &= \\begin{bmatrix} z^{[i](1)} | \\cdots | z^{[i](m)} \\end{bmatrix} \\\\\n        A^{[i]} &= \\begin{bmatrix} a^{[i](1)} | \\cdots | a^{[i](m)} \\end{bmatrix} \n    \\end{align*}\n    And then, the vectorized form of the forward propagation calculations is:\n    \\begin{align*}\n        Z^{[1]} &= W^{[1]}X + b^{[1]} \\\\\n        A^{[1]} &= \\sigma(Z^{[1]}) \\\\\n        Z^{[2]} &= W^{[2]}A^{[1]} + b^{[2]}  \\\\\n        A^{[2]} &= \\sigma(Z^{[2]})\n    \\end{align*}\n\n\\subsection*{Activation functions}\n\n    We replace the sigmoid function $\\sigma$ with a more general function $g$ \n    \\begin{align*}\n        A^{[i]} = \\cancel{\\sigma(Z^{[i]})} \\ g(Z^{[i]})\n    \\end{align*}\n\n    Common activation functions are:\n    \\begin{figure}[H]\n        \\begin{center}\n                \\includegraphics[width=0.75\\textwidth]{img/activations.jpg}\n                \\caption{Common activation functions}\n            \\end{center}\n    \\end{figure}\n    If your output is binary, the sigmoid is ideal for the output layer. In any other case, the\n    hyperbolic tangent activation function is superior to the sigmoid.\n\n    The most common activation function is the ReLU and it usually performs better because\n    for a lot of the space of $Z$, the derivative of the activation function is far from 0.\n    So, using the ReLU activation function, the neural network will often learn much faster \n    than when using the tanh or the sigmoid activation functions. \n\n    One disadvantage of ReLU is that for negative $x$ the derivative is zero; it works fine \n    in practice, but this can be fixed using the leaky ReLU.\n\n    Finally, to implement back propagation for a neural network, it's necessary to \n    compute the derivatives of the activation functions; the following figure summarises the \n    derivatives of the former activation functions.\n\n    \\begin{figure}[H]\n        \\begin{center}\n                \\includegraphics[width=0.9\\textwidth]{img/derivatives.jpg}\n                \\caption{Derivatives of activation functions}\n            \\end{center}\n    \\end{figure}\n\n\\subsection*{Importance of non-linear activation functions}\n\n    Since the composition of linear functions is linear, using linear activation functions outputs\n    a linear function of the input and you might as well not have any hidden layers. \n    The whole purpose of the neural networks is to capture non-linear relationships in the \n    input data and that's why it's important to use non-linear activation functions.\n
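\n    As a concrete illustration, the following is a minimal NumPy sketch of the vectorized forward pass above for the $3$-$4$-$1$ network of these notes, assuming a tanh hidden layer, a sigmoid output and the small random initialization discussed below:\n    \\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\nrng = np.random.default_rng(0)\nm = 5                                # number of training examples\nX = rng.normal(size=(3, m))          # columns are the examples x^(i)\nW1 = 0.01 * rng.normal(size=(4, 3))  # small random initialization\nb1 = np.zeros((4, 1))\nW2 = 0.01 * rng.normal(size=(1, 4))\nb2 = np.zeros((1, 1))\n\nZ1 = W1 @ X + b1    # shape (4, m); b1 broadcasts over the columns\nA1 = np.tanh(Z1)    # hidden layer activation g = tanh\nZ2 = W2 @ A1 + b2   # shape (1, m)\nA2 = sigmoid(Z2)    # output probabilities, one per example\n    \\end{verbatim}\n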
\n    The one place in which you can use a linear activation function is in the output layer of \n    regression problems when your output is a real number.\n\n\\subsection*{Gradient descent for Neural Networks}\n\n    Using the following cost function:\n    \\begin{align*}\n        J(W^{[1]},b^{[1]},W^{[2]},b^{[2]}) = \\frac{1}{m} \\sum_{i=1}^m L(\\Hat{y},y)\n    \\end{align*}\n\n    We can compute the derivatives using the computation graph as we did with logistic\n    regression. The derivatives are:\n    \\begin{align*}\n        dZ^{[2]} &= A^{[2]} - Y \\\\\n        dW^{[2]} &= \\frac{1}{m} dZ^{[2]} A^{[1]\\top} \\\\\n        db^{[2]} &= \\frac{1}{m} sum(dZ^{[2]}) \\\\ \\\\\n        dZ^{[1]} &= W^{[2]\\top} dZ^{[2]} * g^{[1]'}(Z^{[1]}) \\\\\n        dW^{[1]} &= \\frac{1}{m} dZ^{[1]} X^\\top \\\\\n        db^{[1]} &= \\frac{1}{m} sum(dZ^{[1]}) \n    \\end{align*}\n\\subsection*{Weight initialization}\n    If the weights are initialized to zero (as we did with logistic regression) every\n    hidden unit will be completely identical and, in fact, they will be exactly the same\n    after every iteration, so there's really no point in having several hidden units.\n\n    As you want the different hidden units to compute different functions, we usually\n    initialize every weight matrix with small random Gaussian values.\n    The bias terms can be initialized with zeros.\n\n\\end{document}", "meta": {"hexsha": "79a8ce246035755365914ca2315ac38c92ba589d", "size": 8181, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "course-1-neural-networks/notes/Note_2_shallow_neural_networks.tex", "max_stars_repo_name": "SergioArnaud/deep-learning-specialization", "max_stars_repo_head_hexsha": "6e2b7f553ad7b15f1c58d6efbce6ed8fb6fbff75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-03T02:10:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-04T00:07:13.000Z", "max_issues_repo_path": "course-1-neural-networks/notes/Note_2_shallow_neural_networks.tex", "max_issues_repo_name": "SergioArnaud/deep-learning-specialization", "max_issues_repo_head_hexsha": "6e2b7f553ad7b15f1c58d6efbce6ed8fb6fbff75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "course-1-neural-networks/notes/Note_2_shallow_neural_networks.tex", "max_forks_repo_name": "SergioArnaud/deep-learning-specialization", "max_forks_repo_head_hexsha": "6e2b7f553ad7b15f1c58d6efbce6ed8fb6fbff75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.8815789474, "max_line_length": 98, "alphanum_fraction": 0.5693680479, "num_tokens": 2559, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5804360196394981}}
{"text": "%\n% CMPT 473: Software Quality Assurance - A Course Overview\n% Section: Logic-Based Coverage\n%\n% Author: Jeffrey Leung\n%\n\n\\section{Logic-Based Coverage}\n\t\\label{sec:logic-based-coverage}\n\\begin{easylist}\n\n& \\textbf{Predicate coverage:} Each boolean expression must be tested as true and false in at least one test each\n& \\textbf{Clause coverage:} Each clause must be tested as true and false in at least one test\n& \\textbf{Combinatorial/Multiple Condition coverage:} Each possible combination of clauses must be tested\n\n& A clause determines the outcome of a predicate if changing only the value of that clause changes the outcome of the predicate\n\n& \\textbf{Modified Condition/Decision Coverage (MCDC):} Coverage demonstrating that each entry/exit is used, each decision can take every possible outcome, each clause can take every possible outcome, and each clause independently can impact the outcome\n\t&& Based on how one clause affects the entire expression\n\t&& Ensures that each clause has an impact\n\t&& Not effective to generate a test suite as the drive to create minimal tests interferes with MC/DC\n\t&& Used to check test suites generated using other strategies\n\n& Determining the impact of a clause\n\t&& Process for a given clause $a$:\n\t\t&&& Create two copies of the predicate, one which replaces $a$ with $\\#T$ (true) and the other which replaces $a$ with $\\#F$ (false)\n\t\t&&& Set these two copies as not equal to each other\n\t\t&&& Solve the equation\n\t\t&&& If the two sides are not equal, then the clause has impact\n\t\t\n\t&& Example: Given $(a \\land b) \\lor (a \\land \\neg b)$, prove whether $a$ has impact or not.\n\t\t&&& Let $a = T$ for one copy of the predicate, and $a = F$ for the other copy of the predicate.\n\n\t\t\\end{easylist}\n\t\t\\begin{align*}\n\t\t(T \\land b) \\lor (T \\land \\neg b)\n\t\t& \\stackrel{?}{=} (F \\land b) \\lor (F \\land \\neg b) \\\\\n\t\tb \\lor \\neg b\n\t\t& \\stackrel{?}{=} F \\lor F \\\\\n\t\tT\n\t\t& \\stackrel{?}{=} F\n\t\t\\end{align*}\n\t\t\\begin{easylist}\n\t\t\n\t\t&&& The expression evaluates to $T \\neq F$. Therefore $a$ has impact.\n
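\t\t\\end{easylist}\nAs an aside, this impact check can also be brute-forced by enumerating assignments. The following is a small illustrative sketch (not part of the course material), where predicates are plain boolean functions and the parameter \\texttt{i} indexes the clause under test:\n\\begin{verbatim}\nfrom itertools import product\n\ndef has_impact(pred, n_vars, i):\n    # Clause i has impact if, for some assignment of the remaining\n    # variables, flipping clause i flips the predicate.\n    for vals in product([False, True], repeat=n_vars - 1):\n        rest = list(vals)\n        hi = rest[:i] + [True] + rest[i:]\n        lo = rest[:i] + [False] + rest[i:]\n        if pred(*hi) != pred(*lo):\n            return True\n    return False\n\n# (a and b) or (a and not b): a has impact, b does not\nprint(has_impact(lambda a, b: (a and b) or (a and not b), 2, 0))  # True\nprint(has_impact(lambda a, b: (a and b) or (a and not b), 2, 1))  # False\n\\end{verbatim}\n\\begin{easylist}\n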
\n& Process of creating a minimal test suite using MC/DC:\n\t&& Change all compared expressions into clauses, so that the condition becomes a single predicate over these clauses\n\t&& Create a minimal set of logical assignments where, for each clause, there are at least two assignments where:\n\t\t&&& The values of the clause and the result both differ, and\n\t\t&&& The values of all other clauses match\n\t&& Create a test suite with the original inputs set to specific values to satisfy the clause values\n\n\t&& Example: Given $a \\lor (b \\land c)$, generate a minimal set of tests to demonstrate MCDC coverage.\n\t\t&&& See figure~\\ref{fig:mcdc-example-1}.\n\t\t&&& Test entries 1 and 2 show the impact of $a$.\n\t\t&&& Test entries 2 and 3 show the impact of $b$.\n\t\t&&& Test entries 3 and 4 show the impact of $c$.\n\t\t\n\t\t\\end{easylist}\n\t\t\\begin{figure}[!htb]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ c c c | c }\n\t\t\t\ta & b & c & Result \\\\\n\t\t\t\t\\hline\n\t\t\t\tT & F & T & T \\\\\n\t\t\t\tF & F & T & F \\\\\n\t\t\t\tF & T & T & T \\\\\n\t\t\t\tF & T & F & F\n\t\t\t\\end{tabular}\n\t\t\t\\caption{MCDC Example Test Suite}\n\t\t\t\\label{fig:mcdc-example-1}\n\t\t\\end{figure}\n\t\t\\begin{easylist}\n\n\t&& Example: Given $(a \\land b \\land c) \\lor (d \\land a)$, generate a minimal set of tests to demonstrate MCDC coverage.\n\t\t&&& See figure~\\ref{fig:mcdc-example-2}.\n\t\t&&& Test entries 1 and 2 show the impact of $a$.\n\t\t&&& Test entries 2 and 3 show the impact of $d$.\n\t\t&&& Test entries 3 and 4 show the impact of $c$.\n\t\t&&& Test entries 4 and 5 show the impact of $b$.\n\t\t\n\t\t\\end{easylist}\n\t\t\\begin{figure}[!htb]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ c c c c | c }\n\t\t\t\ta & b & c & d & Result \\\\\n\t\t\t\t\\hline\n\t\t\t\tF & T & F & T & F \\\\\n\t\t\t\tT & T & F & T & T \\\\\n\t\t\t\tT & T & F & F & F \\\\\n\t\t\t\tT & T & T & F & T \\\\\n\t\t\t\tT & F & T & F & F\n\t\t\t\\end{tabular}\n\t\t\t\\caption{MCDC Example Test Suite}\n\t\t\t\\label{fig:mcdc-example-2}\n\t\t\\end{figure}\n\t\t\\begin{easylist}\n\n\\end{easylist}\n\\clearpage\n", "meta": {"hexsha": "92ad9ac7b952cd10869a61cafe3193e52a91dc05", "size": 3892, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cmpt-473-software-testing-reliability-security/tex/logic-based-coverage.tex", "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_issues_repo_path": "cmpt-473-software-testing-reliability-security/tex/logic-based-coverage.tex", "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmpt-473-software-testing-reliability-security/tex/logic-based-coverage.tex", "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "avg_line_length": 38.1568627451, "max_line_length": 253, "alphanum_fraction": 0.6685508736, 
"num_tokens": 1194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7718435030872968, "lm_q1q2_score": 0.580436019296456}}
{"text": "First, we proceed to motivate and derive a new \\emph{expected} F1-Score that we use to optimize cluster extraction.  % in this work. \nFollowing this, we describe two efficient greedy algorithms to (approximately) optimize it.  %To this end, we describe in this section  two scalable and efficient  algorithms to optimize EF1.\n%used to build filters setting for querying large data graphs in order to retrieve relevant UI elements that are of interest to the users.  \n%Following this, we describe an optimization-based approach based on Mixed Integer Linear Programming (MILP) to benchmark the performance of the proposed greedy algorithms on moderate-sized problems.\n\n\\subsection{Deriving Expected F1-Score (EF1)}\n\nWe adopt the Boolean relevance framework in information retrieval~\\cite{Baeza-Yates2010} and thus assume that any information element $j$ has a ground truth relevance assessment $B(j)$ available at evaluation time.  \n%%However, unlike previous document-based methods for estimating relevance at retrieval-time, we do not rely on text associated with information elements and instead rely on a third-party to supply an estimated probability (score) of relevance $S(j)$. \n%Traditionally, Boolean labels indicate the relevance of documents is used to evaluate retrieval sets (e.g., precision). \n%In practice, IR models are based on probabilistic scores to measure relevance of the elements, such as TF-IDF and cosine similarity.  \n%In practice, we are able to employ third-party technique to provide probabilistic scores to measure relevance of the document in IR system, such as TF-IDF and cosine similarity. Consequently,\nBecause clustering implies a Boolean retrieval model (clusters either contain or do not contain elements) and we have a probabilistic estimate of relevance $S(j)$, we propose to evaluate\n\\emph{expected} variants of standard precision, recall, and F1-score of these clusters.\n%Boolean retrieval evaluation metrics based on the relaxation of Boolean labels to probabilistic scores resulting in expected metrics, i.e., \\emph{expected precision} (EP), \\emph{expected recall} (ER) and \\emph{expected F1-Score} (EF1).\n\nHowever, we note that, as is standard for precision and recall, each alone can be trivially optimized through pathological solutions.  That is, the cluster that selects all information elements (i.e., all time, all space, no excluded keywords) would trivially maximize (expected) recall.  Similarly, the cluster that selects the highest probability singleton information element would maximize expected precision.  \\emph{This leaves expected F1-score as the only one of these three objectives commonly used in Boolean information retrieval that does not have a pathological solution.}\n%to balance expected precision and recall. % in a Boolean retrieval framework.\n\nTo formally define expected F1-Score, we first begin with definitions of expected precision and recall.  
Recalling our previous definitions, given a set of information elements  $E$  that match a user query and a relevant set $RS$, %the global information element collection $GC$ with size $m$, \nthe precision of $E$ is defined as follows:\n\\begin{equation}\n   P(E) = \\dfrac{\\sum_{j \\in E} B(j)}{|E|} = \\dfrac{\\sum_{j=1}^m B(j)I(j)}{\\sum_{j=1}^m I(j)} \n\\end{equation}\nGiven that $B(j)$ is a Boolean random variable, we can take the expectation of $P(E)$ leading to the following definition of \n%Replacing the ground-truth Boolean relevance label $B(j)$ with the Boolean random variable $S(j)$ gives the following definition of \n\\emph{expected precision}: \n\\begin{equation}\nEP(E)=\\mathbb{E_{S}}\\left[\\dfrac{{ \\sum_{j=1}^{m}}B(j)I(j)}{{ \\sum_{j=1}^{m}}I(j)}\\right]=\\dfrac{{ \\sum_{j=1}^{m}}\\mathbb{E_{S}}[B(j)]I(j)}{{ \\sum_{j=1}^{m}}I(j)}=\\dfrac{{ \\sum_{j=1}^{m}}S(j)I(j)}{{ \\sum_{j=1}^{m}}I(j)}\n\\end{equation}\nSimilarly the recall of a retrieved set $R(E)$ is defined as:\n\\begin{equation}\n   R(E) = \\dfrac{\\sum_{j \\in RS} B(j)}{|RS|} = \\dfrac{\\sum_{j=1}^m B(j)I(j)}{ \\sum_{j=1}^m B(j)} \n\\end{equation}\nTaking a 1st order Taylor expansion, we have the following expectation approximation %$\\mathbb{E}(X/Y)\\approx\\dfrac{\\mathbb{E}(X)}{\\mathbb{E}(Y)}$ \n$\\mathbb{E}(X/Y)\\approx \\mathbb{E}(X)/ \\mathbb{E}(Y)$ for two dependent random variables $X$ and $Y$~\\cite{Kempen2000}. Hence, \nwe can now define an \\emph{approximated expected recall} as follows: \n%Given a retrieved element set $E$  and a relevant element set $RS$ among a global element collection $GC$ with size $m$, we propose the following definition of \\emph{expected precision} (EP), \\emph{expected recall} (ER) and \\emph{expected F1-Score} (EF1):\n\\begin{equation}\n   \\emph{ER(E)}=\\mathbb{E_{S}}\\left[\\dfrac{{ \\sum_{j=1}^{m}}B(j)I(j)}{|RS|}\\right]\\approx\\dfrac{{ \\sum_{j=1}^{m}}\\mathbb{E_{S}}[B(j)]I(j)}{{ \\sum_{j=1}^{m}}\\mathbb{E_{S}}[B(j)]}=\\dfrac{{ \\sum_{j=1}^{m}}S(j)I(j)}{{ \\sum_{j=1}^{m}}S(j)}\n\\end{equation}\nFinally, we define the \\emph{approximated expected F1-Score} (EF1) using the \\emph{expected precision} and the \\emph{approximated expected recall} as follows: \n\\begin{align}\n    \\emph{EF1(E)}  \\approx \\dfrac{2\\times EP\\times ER}{EP+ER} = \\dfrac{2\\times\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j) + \\sum_{j=1}^m S(j)}\n    \\label{eq:EF1}\n\\end{align}\n%Due to space limitations, we focus on F1-score in this paper, however expected F$_\\beta$ scores follow directly from the above definitions. \n  \n%Conventional metrics are based on Boolean relevance label $B(j)$ instead of probabilistic score $S(j)$. 
To link standard precision with our expected precision,  The definition of standard precision $P(RS)$ is as follows:\n\n\n%Now, we  derive expectation of precision $P(RS)$ as follows:\n%\\begin{align}\n%\t\\mathbb{E_S}[P(E)] &= \\mathbb{E_S}[\\dfrac{\\sum_{j=1}^m B(j)I(j)}{\\sum_{j=1}^m I(j)}] = \\dfrac{\\sum_{j=1}^m %\\mathbb{E_S}[B(j)]I(j)}{\\sum_{j=1}^m I(j)} \\notag \\\\\n%    &= \\dfrac{\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j)} = EP(E)\n   \t%\\mathbb{E_S}[R(RS)] &= \\dfrac{\\sum_{j=1}^m \\mathbb{E_S}[B(j)] I(j)}{\\sum_{j=1}^m \\mathbb{E_S}[B(j)]} = \\dfrac{\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m S(j)} = ER(RS)  \\\\\n    %\\mathbb{E_S}[F1(RS)] &= \\dfrac{2*\\sum_{j=1}^m \\mathbb{E_S}[B(j)]I(j)}{\\sum_{j=1}^m I(j) + \\sum_{j=1}^m \\mathbb{E_S}[B(j)]} \\notag \\\\\n    %&= \\dfrac{2*\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j) + \\sum_{j=1}^m S(j)} = EF1(RS)\n%\\end{align}\n\n\n\n\n%\\begin{equation}\n%\\emph{\\ensuremath{EP(RS)}}=\\dfrac{{\\displaystyle \\sum_{j\\in RS}S(j)}}{|RS|} =  \\dfrac{\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j)} \n%\\end{equation}\n\n%Analogously, we define the  \\emph{expected recall} (ER) as follows:\n%\\begin{equation}\n%\\emph{\\ensuremath{ER(RS)}}=\\dfrac{{\\displaystyle \\sum_{j\\in RS}S(j)}}{|RE|} = \\dfrac{\\sum_{i=1}^m S(i)I(i)}{\\sum_{i=1}^m S(i)} = \\dfrac{\\sum_{i=1}^m S(i)I(i)}{C}  \n%\\end{equation}\n\n%\\noindent where $|RE|$ is the size of the entire relevant element set. Finally, we define the  \\emph{expected F1-Score} (EF1) as follows:\n\n%\\begin{equation}\n%\\emph{\\ensuremath{EF1(RS)}}=\\dfrac{2\\times EP\\times ER}{EP+ER} = \\dfrac{2*\\sum_{i=1}^m S(i)I(i)}{\\sum_{i=1}^m I(i) + C}\n%\\end{equation}\n\n%In Figure \\ref{fig:F1_vs_EF1}, we experimentally show that while EF1 is only an approximation of the true expectation, EF1 serves as an excellent surrogate for F1, which is our main concern when considering filter optimization.  That is, maximizing EF1 score with respect to a noisy classifier (noise decreases to 0 as $\\lambda \\to 1$) is strongly correlated with maximizing F1 score evaluated on the ground truth; we will further study the effect of a noisy classifier in Section~\\ref{sec:Evaluation}.   Specifically, while the EF1 and F1 scores are not perfectly calibrated along the diagonal, there is a linear correlation in that as the EF1 score increases for a scenario, the F1-score proportionally increases on average (as shown by the best fit linear regression in the plots).\n\n\n\n%\\begin{figure}[H]\n%\\begin{centering}\n%\\par\\end{centering}\n%\\begin{centering}\n%\\includegraphics[width=8.5cm]{imgs/Enron_results/scatter_plot_EF1\\lyxdot vs\\lyxdot F1}\n%\\par\\end{centering}\n%\\caption{Scatter plot showing EF1-Score vs. F1-Score.}\n%\\label{fig:F1_vs_EF1}\n%\\end{figure}\n\n\n\n\n\\subsection{Greedy relevance-driven clustering}\n\n% Should discuss greedy algorithm generically as having a metric and at each step\n% a choice of k restrictions, which are each scored against the metric with the\n% highest score chosen at each step.  Then each individual filter only has to\n% specify what the choices are and how the choice restricts the set of emails\n% selected.  What is a good succinct notation for this?\n\nA single cluster $E^*$ is specified as all information elements $\\{j \\in E^* | I(j)=1, j \\in E \\}$ in the intersection of spatial bounding box, time interval, and keyword inclusion or exclusion constraints.  
Given an estimated probability of relevance of each information element in the cluster $S(j)$, we can then compute the EF1 of the cluster given in~\\eqref{eq:EF1}.  Through a series of transformations, this EF1 optimization problem can be reduced to a Mixed Integer Linear Program (MILP), which is NP-Hard and only practical for small element sets.  Hence, in the following, we describe how to \\emph{greedily} optimize the parameters of each constraint to approximately optimize EF1.\n\nIn greedy optimization, we would have the option to start with a singleton element cluster and expand, or start with a cluster including all elements and prune.  While the former has a non-deterministic choice of which singleton to start with, the latter has an unambiguous initial starting condition.  Thus we choose the latter pruning approach starting with initial spatial bounding box, time interval, and keyword constraints set to include \\emph{all} of $E$.  \n% Each cluster has an associated EF1 score \n% We assume that three types of clustering attributes are used to optimize clusters of relevant information according to spatial, temporal, and keyword content constraints.  A cluster is generated by \\emph{conjoining} (i.e., intersecting) these three attribute constraints.  While it is possible to convert the problem of selecting joint settings of spatial, temporal, and keyword constraints to optimize EF1 to a mixed-integer linear programming (MILP) problem, optimizing MILPs is NP-Hard \n\n\\subsubsection{Greedy Keyword Selection algorithm}\n\n% TODO: there is some confusion here between elements matching a query and displayed information elements.  To be resolved later.\nGiven a set of candidate information elements matching a query, this algorithm greedily selects a set of keywords to exclude (i.e., prune) to maximize EF1.\n\nFormally, the algorithm aims to select an\noptimal subset of $k$ terms $T_{k}^{*}\\subset V$ (where $V$ is a vocabulary of keywords for the document corpus) to exclude elements containing these keywords to optimize EF1.  This is achieved by\nbuilding $T_{k}^{*}$ in a greedy manner by choosing the next optimal\nterm $t_{k}^{*}$ given the previous set of optimal term selections\n$T_{k-1}^{*}=\\{t_{1}^{*},\\ldots,t_{k-1}^{*}\\}$ (assuming $T_{0}^{*}=\\emptyset$)\nusing the following selection criterion\n\\begin{equation}\nt_{k}^{*}=\\argmax_{t_{k}\\notin T_{k-1}^{*}} [EF1(E^{*} \\textrm{ that don't contain } \\{t_{1}^{*},\\dots t_{k}^{*}\\})] \n\\end{equation}\nthat terminates at $k$ when no $T_{k+1}^{*}$ can improve the EF1 of $E^*$. \n%In order to reduce the keyword search space, we propose to use the top 100 terms ranked using Mutual Information to identify the keywords that are predictive of the ``supervised'' relevance measure.  We remark that other metrics like frequency would be more appropriate for unsupervised tasks.  \n%We further remark that we have chosen to use term exclusion as a means to effectively prune the content as more terms are selected.  \n%mainly for technical reasons as a negation query generates queries with much fewer terms. \n%The best indexing strategy to support this greedy search is the inverted index data structure~\\cite{Zobel2006}.\n\n\\subsubsection{Greedy Time Selection algorithm}\n\nThe idea behind the time-based greedy selection algorithm is to start with the maximal time range and greedily contract it to a sub-window of time $[t_{\\tstart},t_{\\tend}]$ such that $\\{ j \\in E^* | t_{\\tstart} < t_j < t_{\\tend}, j \\in E \\}$.  
\\subsubsection{Greedy Time Selection algorithm}\n\nThe idea behind the time-based greedy selection algorithm is to start with the maximal time range and greedily contract it to a sub-window of time $[t_{\\tstart},t_{\\tend}]$, which restricts the cluster to $E^* = \\{ j \\in E \\mid t_{\\tstart} < t_j < t_{\\tend} \\}$.  In this case, we propose two different greedy approaches to iteratively contract (i.e., prune) the time window in order to maximize EF1:\n\n\\subfour{(a) Naive Greedy algorithm:} Let $t_{\\min}$ and $t_{\\max}$ respectively correspond to the current minimum and maximum timestamps in $E^*$.  If setting $t_{\\tstart} = t_{\\min}$ improves the EF1 of $E^*$ (by the strict inequality $t_{\\tstart} < t_j$, this prunes the earliest element(s)), then this lower bound contraction is accepted.  Similarly, if setting $t_{\\tend} = t_{\\max}$ improves the EF1 of $E^*$, then this upper bound contraction is accepted.  This repeats until no lower or upper bound contraction improves EF1. \n\n\\subfour{(b) Binary Partition Search (BPS) algorithm:} Large datasets will cause the previous Greedy algorithm to take a large number of iterations to terminate.  A more efficient way to address this problem is to use binary partitioning search (BPS).  Hence, instead of removing a single element $j$ at each iteration, BPS operates by selecting between the lower or upper half of the time interval. \n\n%\\begin{algorithm}[t]\n%%\\scriptsize\n%\\caption{Binary Partition Search (BPS) Algorithm}\n%\\SetAlgoLined\n%\\SetKwData{Left}{left}\\SetKwData{This}{this}\\SetKw%Data{Up}{up}\n%\\SetKwFunction{Union}{Union}\\SetKwFunction{FindCom%press}{FindCompress}\n%\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{outpu%t}\n%\\Input{A set of ordered elements $E=\\{j_{v_1} %\\dots j_{v_n}\\}$ }\n%\\Output{A timestamp $t$;}\n%\\BlankLine\n%\\label{alg:Dichotomy}\n\n%$v_{min}=v_1$; $v_{max}=v_n$;  %$v_{mid}=\\tfrac{v_1+v_n}{2}$;\n\n%\\While {$v_{min}!=v_{mid}!=v_{max}$}{\n\n%\\eIf{$[EF1(\\{j_{v_{min}} \\dots  j_{v_n}\\}) \\geq %EF1(\\{j_{v_{mid}} \\dots  j_{v_n}\\})]$}{\n%$v_{max}=v_{mid}$;\n%$v_{mid}=\\tfrac{v_{min}+v_{mid}}{2}$;\n%}{\n%$v_{min}=v_{mid}$; \n%$v_{mid}=\\tfrac{v_{min}+v_{max}}{2}$;\n%}\n%}\n%\\Return  $v_{mid}$;\n\n%\\label{alg:return}\n%\\end{algorithm} \n\n\n%Therefore, the algorithm first sets the values $t_{min}=t_1$, $t_{max}=t_n$,  and $t_{mid}=\\tfrac{t_1+t_n}{2}$. Then, for each iteration, if $[EF1(\\{d_{t_{min}}\\leq \\dots \\leq d_{t_n}\\}) \\geq EF1(\\{d_{t_{mid}}\\leq \\dots \\leq d_{t_n}\\})]$, the algorithm sets $t_{max}=t_{mid}$, $t_{mid}=\\tfrac{t_{min}+t_{mid}}{2}$ and makes a new iteration, else, the algorithm sets $t_{min}=t_{mid}$,  $t_{mid}=\\tfrac{t_{min}+t_{max}}{2}$  and makes a new iteration. The algorithm keeps iterating until $t_{min}=t_{mid}=t_{max}$, where it assigns $t_{mid}$ to the lower time bound of the time query, i.e., $t_{start}=t_{mid}$.\n\n\nAs an example of the BPS approach for time selection, the algorithm first sorts $E^*$ for the current $[t_{\\tstart},t_{\\tend}]$ according to time stamps.  From this, it is possible to extract $t_{\\min}$, $t_{\\tmedian}$, and $t_{\\max}$, respectively the minimum, median, and maximum time stamps in $E^*$.\nBPS then selects whichever of the intervals $[t_{\\tstart},t_{\\tmedian}]$, $[t_{\\tmedian},t_{\\tend}]$, or $[t_{\\tstart},t_{\\tend}]$ maximizes EF1.  If either of the first two intervals is chosen, the process is repeated with the new $t_{\\tstart}$ or $t_{\\tend}$.  Once the third interval is chosen, no further BPS contraction will improve EF1 (a sketch follows below). \n\n% in increasing order of time stamp. Then, it applies the BPS algorithm. This procedure will return  the lower time bound of the time window, i.e., $t_{start}=t_{mid}$. Next, the algorithm sorts $E$ in decreasing order of time stamp, and then, it applies again a Binary Partitioning Search to get the upper time bound of the time window, i.e., $t_{end}=t_i$. 
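\n\nA compact sketch of this contraction loop is given below (illustrative only; it assumes per-element timestamps \\texttt{t[j]} and the \\texttt{ef1} scorer defined earlier):\n\n\\begin{verbatim}\ndef bps_time_window(E_star, S, t, t_start, t_end):\n    def window(lo, hi):\n        return {j for j in E_star if lo < t[j] < hi}\n    while True:\n        times = sorted(t[j] for j in window(t_start, t_end))\n        if len(times) < 2:\n            return t_start, t_end\n        t_med = times[len(times) // 2]    # median timestamp\n        spans = [(t_start, t_med), (t_med, t_end), (t_start, t_end)]\n        lo, hi = max(spans, key=lambda s: ef1(window(*s), S))\n        if (lo, hi) == (t_start, t_end):  # full window wins: stop\n            return t_start, t_end\n        t_start, t_end = lo, hi\n\\end{verbatim}\n\n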
Lastly, the algorithm returns the time window $[t_{\\tstart},t_{\\tend}]$, whose restricted cluster satisfies $EF1(E^{*}) \\geq EF1(E)$. %Note that this algorithm proceeds in a total of $log(n)$ iterations in the best case, and $2\\times log(n)$ iterations in the worst case.\n\n%For both the naive and time-based greedy algorithms, we use the red-black tree as the indexing data structure \\cite{Guibas1978}.\n\n\n\\subsubsection{Greedy Spatial Selection algorithm}\n\nThe aim of this algorithm is to return the EF1-maximizing spatial bounding box $[(x_{min},y_{min}),(x_{max},y_{max})]$, represented by its lower and upper corner coordinates $(x_{min},y_{min})$ and $(x_{max},y_{max})$. This 2D problem is similar to the previous 1D problem of finding the best time window. Therefore, the two algorithms described above (Greedy and BPS) can be adapted for this problem by first applying each algorithm on the x-axis to determine $(x_{min},x_{max})$, then on the y-axis to determine $(y_{min},y_{max})$. \n% We need to ditch some citations and this removes the complaint that we did not provide all details -- we claim it is obvious.  -Scott\n%We omit the description of these two algorithms for lack of space, but a detailed description can be found in the technical report~\\cite{Bouadjenek2018}.\n%We use the R-tree as a data structure for indexing multi-dimensional continuous data~\\cite{Guttman1984}.\n\n\n\\subsubsection{Relevance-driven clustering algorithm}\n\\label{sec:relclustering}\n\nTo obtain a cluster $E^*$ combining the above (i) keyword, (ii) time, and (iii) spatial constraints, we propose a greedy round-robin algorithm, which at each iteration applies the selection pruning algorithms for (i), (ii), and (iii) in order (a sketch of this loop follows below).  Iterations terminate when no selection algorithm can unilaterally improve EF1, at which point the final cluster is returned.\n%to, theselects the best sub-filter to apply (according to its reduction in EF1-Score), then checks if that filter improves the EF1-Score. If so, the algorithm continues with the updated filter settings, otherwise, it terminates. \n% Don't get the following -- seems non-essential, at least for submission.  -Scott\n%Note that here, we use  $k=1$ for the keywords greedy algorithm.\n%The algorithm will then determine a sequence of filters to apply on the initial set, and the final query is then built by combining these filters by types.\n\n %such as: $\\{Keyword \\to Time -> Position \\to Time \\to Keyword\\} $\n\n% I don't follow this example, but I think the algorithm is intuitive enough and we need space.  -Scott\n% For example, let's suppose the algorithm determines the following filter sequence: $\\{Q_{k_1}=\\{ \\neg natural\\} \\to Q_{t_1}=[50,2030] \\to Q_{p_1}=[(10,60), (50,100)] \\to Q_{t_2}=[60,1230] \\to Q_{k_2}=\\{ \\neg fictive\\}  \\to  Q_{p_2}=[(10,60), (30,85)] \\to  Q_{t_3}=[60,800] \\}$. 
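\n\nThe round-robin loop can be sketched as follows (again illustrative; \\texttt{pruners} stands for the three selection algorithms above -- keyword, time, and spatial -- each returning a contracted candidate cluster):\n\n\\begin{verbatim}\ndef relevance_driven_cluster(E, S, pruners):\n    E_star = set(E)\n    best = ef1(E_star, S)\n    improved = True\n    while improved:\n        improved = False\n        for prune in pruners:        # (i) keyword, (ii) time, (iii) spatial\n            candidate = prune(E_star, S)\n            score = ef1(candidate, S)\n            if score > best:\n                E_star, best = candidate, score\n                improved = True\n    return E_star\n\\end{verbatim}\n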
\n% The final query is built by combining these filters by types as follows: $Q_k=Q_{k_1} \\cup Q_{k_2}=\\{\\neg natural,\\neg fictive\\}$, $Q_t =Q_{t_1} \\cap Q_{t_2} \\cap Q_{t_3}=[60,800]$, and  $ Q_p=Q_{p_1}  \\cap Q_{p_2}=[(10,60),$ $(30,85)]$, which gives $Q=[Q_k=\\{\\neg natural,\\neg fictive\\}\\wedge Q_t=[60,800]\\wedge Q_p = [(10,60),$ $(30,85)]]$.\n\nFinally, we note that our relevance-driven clustering algorithm can use the Greedy keyword selection algorithm with either the naive Greedy time and spatial selection algorithms, which we refer to experimentally as {\\bf Greedy}, or the BPS time and BPS spatial variants, which we refer to experimentally as {\\bf BPS}.  \n%time and position naive greedy algorithms to which we refer as Greedy Algorithm, or the keywords greedy algorithm with the time and position Binary Partition Search algorithms to which we refer as BPS Algorithm.\n\n\n\n%\\subsection{Optimal MILP Solutions for Benchmarking}\n\n%Next, we propose an exact Mixed Integer Linear Progamming (MILP) optimization-based formulation to maximize EF1 and provide a benchmark for evaluating our two relevance-driven clustering algorithms (Greedy and BPS).  %Given the trivial solution of optimizing the expected precision (singleton) and expected recall (the whole collection), we consider in the following only the optimization of EF1.  \n\n%\\subsubsection{Fractional MILP Formulation} \\hfill \\\\\n%We begin by reformulating the EF1 objective to prepare for further optimization steps by replacing the global sum of scores of all information elements with a constant $C = \\sum_{j=1}^m S(j)$:\n%\\begin{equation}\n%\\begin{aligned}\n%    \\emph{$EF1$} &= \\dfrac{2\\times \\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j) + \\sum_{j=1}^m S(j)} = \\dfrac{2 \\times \\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j) + C}\n%\\end{aligned}\n%\\end{equation}\n\n%In order to obtain the EF1-optimal cluster, we let binary variables $\\emph{I\\textsubscript{filter}}($j$) \\in \\{0, 1\\}$ to indicate whether an information element $j$ is selected by each parameter of the cluster and constrain that to be selected in the cluster (i.e., $I(j)=1$), $j$ must be selected in all selection constrains (i.e., a conjunction).  This leads to the following fractional MILP formulation with selection constraints to be defined later:\n%intend to directly optimize the EF1 metric in terms of decision indication variables  for filter setting as follows:\n% Don't use I(i)... confusing!  \n%\\begin{align}\n%\\max_{\\textit{filter vars}} \\;\\;\n%& \\dfrac{\\sum_{j=1}^m S(i)I(j)}{\\sum_{j=1}^m I(j) + C} \\nonumber \\\\\n%& \\textrm{subject to constraints between {\\it filter vars} and $I(j)$}   \n%\\end{align}\n%\\begin{equation}\n%\\begin{aligned}\n%& \\underset{I_{\\mathit{filter}}(j)}{\\text{maximize}}\n%& & \\dfrac{\\sum_{j=1}^m S(j)I(j)}{\\sum_{j=1}^m I(j) + C} \\\\\n%& s.t\n%& & I(j) = \\bigwedge I_{\\mathit{filter}}(j) \\\\\n%\\end{aligned} \\label{eq:frac_milp}\n%\\end{equation}\n\n% Not necessary.  -Scott\n%Note that our goal is to optimize the element set in terms of \\emph{filters settings}. This means elements in the interface sharing the same property needs to be simultaneously added to the selected set. 
This is a unique property of our filter-based UI problem, which is different from the independent retrieval of each document in standard IR system.\n\n%\\subsubsection{Transformation to a MILP} \\hfill \\\\\n%While there are no direct solvers for fractional MILPs, we can transform~\\eqref{eq:frac_milp} into a pure MILP form for which we have efficient and optimal solvers.  To do this, we use the Charnes-Cooper method \\cite{Charnes1962} and Glover linearization method \\cite{Glover1975} with big-M constraints, where auxiliary variables \\emph{w(j)} and \\emph{u} are  introduced\\footnote{https://optimization.mccormick.northwestern.edu/index.php/Mixed-integer\\_linear\\_fractional\\_programming\\_(MILFP)}. Here, $w(j)$ is defined as $w(j)=I(j)\\times u$ with $u$ defined as follows:\n%\\begin{equation}\n%u = \\dfrac{1}{\\sum_{j=1}^m I(j) + C}\n%\\end{equation}\n\n%Then, the EF1 optimization problem is able to be transformed into the following MILP problem:\n%\\begin{equation}\n%\\begin{aligned}\n%& \\underset{w,u}{\\text{maximize}}\n%& & \\sum_{j=1}^m S(j)w(j) \\\\\n%& s.t\n%& & \\sum_{j=1}^m w(j) + uC = 1 \\\\\n%& & & w(j) \\leqslant u, \\quad w(j) \\leqslant M\\times I(j)  \\\\\n%& & & w(j) \\geqslant u - M\\times [1-I(j)] \\\\\n%& & & u > 0,  \\quad I(j) \\in \\{0, 1\\}, \\quad w(j) \\geqslant 0 \\label{eq:milp}\n%\\end{aligned}\n%\\end{equation}\n\n%\\subsubsection{Constraints} \\hfill \\\\\n%As our goal is to select elements through three clustering parameters, we add three constraints to the above optimization.\n%\\begin{enumerate}\n%\\item {\\bf Time Parameter Constraint:} a two-element tuple ($t_{start}$, $t_{end}$) indicating respectively the start and the end of the time window.\n%\\begin{equation}\n%\\begin{aligned}\n%  I_{\\mathit{time}}(j) &=\n%   \\begin{cases}\n%     1, & \\text{if $(t_{start} \\leqslant t(j)) \\land (t(j) \\leqslant t_{end})$}  \\\\\n%     0, & \\text{otherwise}\n%  \\end{cases} \\label{eq:cons1}\n%\\end{aligned}\n%\\end{equation}\n\n%\\item {\\bf Spatial Parameter Constraint:} a four-element tuple ($x_{min}$, $y_{min}$, $x_{max}$, $y_{max}$) to create a bounding box selection in visualization interface.\n%\\begin{equation}\n%\\begin{aligned}\n%I_{\\mathit{pos}}(j) & =\\begin{cases}\n%1, & \\text{if \\ensuremath{(x_{min}\\leqslant x(j))\\land(x(j)\\leqslant x_{max})\\land}}\\\\\n% & (y_{min}\\leqslant y(j))\\land(y(j)\\leqslant %y_{max})\\\\\n%0, & \\text{otherwise}\n%\\end{cases} \\label{eq:cons2}\n%\\end{aligned}\n%\\end{equation}\n\n%\\item {\\bf Keyword Parameter Constraint:} a boolean vector of terms $t^*_k$ with size $m$ - the size of the dictionary of $GC$.\n%\\begin{equation}\n%  I_{\\mathit{term}}(j) = \\bigwedge_{t^*_k \\in j} t^*_k \\qquad \\textnormal{for s = 1, 2, $\\cdots$, m} \\label{eq:cons3}\n%\\end{equation}\n%All terms with $I_{\\mathit{term}}=0$ are included in the negation query.\n\n\n%\\item {\\bf Clustering Selection Constraints:} for information element $j$ to be selected globally, it must be simultaneously selected by the three selection constrains.\n%, all filter constraints have to be satisfied in this element. In others words, an AND operator is required between all the sub-filter constraints.\n%{\\bf TODO: mention how to apply to all filters... need to say an email j is selected if \\emph{all} filters say it is selected, so an AND constraint.  
\\textcolor{red}{[PLEASE COMPLETE YIHAO]}.\n%} \n%\\begin{equation}\n%  I(j) = I_{\\mathit{time}}(j) \\land I_{\\mathit{pos}}(j) \\land I_{\\mathit{keyword}}(j) \\label{eq:cons4}\n%\\end{equation}\n\n%\\end{enumerate}\n%We refer to the above MILP formulation in~\\eqref{eq:milp} with all selection parameter constraints~\\eqref{eq:cons1}--\\eqref{eq:cons4} as the {\\bf Optimal} cluster.\n\n\\subsection{Multiple Cluster Selection Wrapper}\n\n%The Global Filter selects a single best filter setting.  However, what if we want to display multiple possible Global Filters.  \nIn practice, a single cluster of information elements $E^*$ chosen by the previously described algorithms will provide the user with one temporally, spatially, and topically coherent cluster covering one information perspective.  However, there will likely be additional coherent clusters covering other perspectives.  \nConsider Figure~\\ref{Fig:BPS_FilteredDisplay}, which shows three different spatial bounding boxes corresponding to three topically coherent clusters.\nHere, we describe a greedy approach that produces a ranked list of such clusters and works with either of the previously defined algorithms -- Greedy or BPS.%); we leave it to future work to develop improved filtering and ranking methods for multiple filters.\n\nThe algorithm itself is a simple wrapper around the algorithm of Section~\\ref{sec:relclustering}.  \nAfter the first cluster is produced, all selected elements in that cluster have their scores $S(j)$ zeroed out.  The relevance-driven clustering algorithm is then run again, where it will inherently focus on a different content set.  As each cluster is added, coverage of high-relevance content monotonically improves.\n%This procedure is repeated until the desired number of filters is reached, or a metric / coverage score for high scoring content is reached.  \n%The user should then be able to choose among the multiple filters in the VID. \t\n\n", "meta": {"hexsha": "9403f95252136c59d73a3030bfcd607b6b171139", "size": 26748, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/UMAP2019/Algorithms.tex", "max_stars_repo_name": "D3Mlab/visir", "max_stars_repo_head_hexsha": "cd1860984dee8d7aba368857e734ad11c14124c8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-10T07:40:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-10T07:40:04.000Z", "max_issues_repo_path": "Documents/UMAP2019/Algorithms.tex", "max_issues_repo_name": "D3Mlab/viz-ir", "max_issues_repo_head_hexsha": "cd1860984dee8d7aba368857e734ad11c14124c8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documents/UMAP2019/Algorithms.tex", "max_forks_repo_name": "D3Mlab/viz-ir", "max_forks_repo_head_hexsha": "cd1860984dee8d7aba368857e734ad11c14124c8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.0683229814, "max_line_length": 785, "alphanum_fraction": 0.7287273815, "num_tokens": 7861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5804360153497271}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 14, 2014}\n\\maketitle\n\\section*{\\#10}\n$\\mathbb{Q}[x]/\\langle x^2+2\\rangle\\cong \\mathbb{Q}[x]/\\langle x^2+1\\rangle$ \n\nno, can't find $\\beta\\in\\mathbb{Q}[x]/\\langle x^2+2\\rangle$ where $\\beta=-[2]$\n\nnote that if we replace $\\mathbb{Q}$ with $\\mathbb{R}$ then we get an isomorphism.\n\n\\subsubsection*{is $\\mathbb{Z}_2[x]/\\langle x^2+x+1\\rangle$ a field?} is $x^2+x+1\\in \\mathbb{Z}_2[x]$ irreducible?\n\n$[0],[1]$ are not roots and degree is $\\le3$ and so it is a field.\n\\subsection*{question}\nif $f(x)\\in \\mathbb{Z}_p[x]$ where $p$ is prime and $f(x)$ is irreducible. how many elements does $\\mathbb{Z}_p/f(x)$ have? $\\{a_0+a_1x+\\dots+a_{n-1}x^{n-1}:a_i\\in\\mathbb{Z}_p\\}$  and so $p^n$ elements.\n\n\\section*{\\#24}\nyou can always find such an $f(x)$ (of any degree).\n\n\\section*{thm}\nassume $f(x)=g(x)h(x)$ where $f(x)\\in \\mathbb{Z}[x]$ and $h(x),g(x)\\in\\mathbb{Q}[x]$. then we can factor $f(x)$ into poly with integer coefficients of the same degree.\n\\subsubsection*{proof}\n$g(x)\\in\\mathbb{Q}[x]$ with $g(x)=\\frac{1}{b}g_1(x)$ where $g_1(x)\\in\\mathbb{Z}[x]$ and $b\\in\\mathbb{Z}$ and $g(x)=\\frac{c}{b}g_2(x)$ where $g_2(x)\\in\\mathbb{Z}[x]$ and $b,c\\in\\mathbb{Z}$ and $g_2(x)$ is a primitive polynomial\n\nsay $\\gcd(m,n)=1$\n\n$f(x)=\\frac{m}{n}\\cdot\\frac{s}{t}g_2(x)h_2(x)$.\n\nmultiplying two primitive polynomials gives a primitive polynomial\n\n\\section*{thm}\neisenstein's irreducibility criterion)\n\n\\section*{corollary 4.4.7}\n\\end{document}\n\n\n", "meta": {"hexsha": "9385ad5cb1f7380f099a572807e64c5b5640222f", "size": 1670, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-11-14.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-11-14.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-11-14.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0816326531, "max_line_length": 226, "alphanum_fraction": 0.6706586826, "num_tokens": 664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.5804360153497271}}
{"text": "%\n%   window\n%\n\\section{Window Function}\\label{app:windowfunction}\nThe observed density field of targets does not cover the full sky, due to the galactic plane obscuration. This means that the pseudo-power spectrum $\\hat{C}_{\\ell}$ obtained by the direct Spherical Harmonic Transforms of a partial sky map (eq. \\ref{eq:pusedocell}), differs from the full-sky angular spectrum $C_{\\ell}$. However, their ensemble average is related by \\citep{hivonmaster2002ApJ...567....2H, poker2011A&A...535A..90P} \n\n\\begin{equation}\n    <\\hat{C}_{\\ell}> = \\sum_{\\ell^{\\prime}} M_{\\ell \\ell^{\\prime}}<C_{\\ell^{\\prime}}>,\n\\end{equation}\nwhere $M_{\\ell \\ell^{\\prime}}$ represents the mode-mode coupling from the partial sky coverage. This is known as the Window Function effect and a proper assessment of this effect is crucial for a robust measurement of the large-scale clustering of galaxies. We follow a similar approach to that of \\citep{szapudi2001ApJ...548L.115S, chon2004MNRAS.350..914C} to model the window function effect on the theoretical power spectrum $C_{\\ell}$ rather than correcting the measured pseudo-power spectrum from data. First, we compute the two-point correlation function of the window,\n\n\\begin{equation}\n    RR(\\theta) = \\sum_{i,j>1} f_{\\rm pix, i} f_{\\rm pix, j} \\Theta_{ij}(\\theta),\n\\end{equation}\nwhere $\\Theta_{ij}(\\theta)$ is one when the pixels i and j are separated by an angle between $\\theta$ and $\\theta + \\Delta\\theta$, and zero otherwise. Next, we normalize the RR by $\\sin(\\theta)\\Delta\\theta$ to account for the area and total number of pairs such that $RR(\\theta=0)=1$. We fit a polynomial on RR to smooth out the wiggles raised by noise. Then, we multiply the theoretical correlation function $\\omega(\\theta)$ by the window paircount,\n\\begin{align}\n    \\omega^{WC}(\\theta) &= \\omega(\\theta)~RR(\\theta),\n\\end{align}\nand finally use the Gaussian-Quadrature algorithm to transform the window convolved theory correlation function $\\omega^{WC}$ to $C^{WC}_{\\ell}$,\n\\begin{equation}\n    C^{WC}_{\\ell} = 2\\pi \\int \\omega^{WC}(\\theta)P_{\\ell}(\\theta)d\\theta.\n\\end{equation}\n\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.45\\textwidth]{figures/fig25-cltheory.pdf}\n    \\caption{Window corrected theory C$_{\\ell}$ for two different models with and without Redshift Space Distortions respectively in orange and blue. The dashed curves show the theoretical models before window convolution. The effect of the window is around 5\\% in redshift and 20\\% in real space. The theory with redshift space distortions uses $galaxy\\;bias = 2$ and the surface density $n(z)$ of NGC eBOSS ELG ~\\citep[Tab. 4 of][]{Raichoor2017MNRAS.471.3955R} and assuming the fiducial cosmology of \\citet{ashley2012MNRAS,2012ApJ...761...14H}.}\n    \\label{fig:Cellwindowratio}\n\\end{figure}\n\n\nFig. \\ref{fig:Cellwindowratio} shows the DECaLS window effect on two theoretical models of $C_{\\ell}$ with and without redshift space distortions. The window effect for the model without Redshift Space Distortions (RSD) is around 20\\% but that for the model with RSD is less than 5\\% due to the flat power spectrum at the low ell limit. 
\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=0.45\\textwidth]{figures/fig25-cltheory.pdf}\n    \\caption{Window-corrected theory $C_{\\ell}$ for two different models, with and without Redshift Space Distortions in orange and blue, respectively. The dashed curves show the theoretical models before window convolution. The effect of the window is around 5\\% in redshift and 20\\% in real space. The theory with redshift space distortions uses a galaxy bias of $2$ and the surface density $n(z)$ of NGC eBOSS ELG ~\\citep[Tab. 4 of][]{Raichoor2017MNRAS.471.3955R}, and assumes the fiducial cosmology of \\citet{ashley2012MNRAS,2012ApJ...761...14H}.}\n    \\label{fig:Cellwindowratio}\n\\end{figure}\n\n\nFig. \\ref{fig:Cellwindowratio} shows the DECaLS window effect on two theoretical models of $C_{\\ell}$, with and without redshift space distortions. The window effect for the model without Redshift Space Distortions (RSD) is around 20\\%, but that for the model with RSD is less than 5\\% due to the flat power spectrum in the low-$\\ell$ limit. We find a consistent pattern in the mocks; the window effect on the clustering of the mocks is between 5\\% and 15\\%.", "meta": {"hexsha": "b78c5a84f56cee3550ba2da968c032e86fb18990", "size": 3238, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/window.tex", "max_stars_repo_name": "mehdirezaie/SYSNet", "max_stars_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-07-29T11:55:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T21:50:52.000Z", "max_issues_repo_path": "paper/sections/window.tex", "max_issues_repo_name": "mehdirezaie/SYSNet", "max_issues_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/window.tex", "max_forks_repo_name": "mehdirezaie/SYSNet", "max_forks_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-17T18:07:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-17T18:07:47.000Z", "avg_line_length": 98.1212121212, "max_line_length": 575, "alphanum_fraction": 0.7473749228, "num_tokens": 914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5804360117460399}}
{"text": "\\section*{Introduction}\n\nLearning $C^0$-continuous functions is a key problem in machine learning, as many problems\nare framed as finding good interpolations between a few data points, where we define continuity as $\\lim\\limits_{x\\to c}f(x) = f(c)$.\nThe continuity of the learned model is not guaranteed, and is enforced through regularization of the output,\nusually using something temporal consistency~\\cite{tretschk2021nonrigid}. Since consistency is imposed as some loss, it cannot be guaranteed. This leads to difficulties in interpolating between time-steps which is necessary to get higher resolution movement. In addition, there is also the possibility for sudden changes in motion, such as suddenly jumping from one frame to the next.\nTo enforce a continuity, we are interested in constructing functions with $C^0$ continuity. $C^0$ continuity is necessary for many things, such as reconstructing movement, to ensure, for example, that an object cannot warp between two points instantaneously. We are also interested in $C^1$ continuity, or that the derivative of a function is continuous on some domain. This is because in reality, it is not possible for an object to instantly change its velocity, and thus any reconstruction must have $C^1$ continuity for plausible movement.\n\nTo demonstrate our architecture, we tackle the problem of dynamic scene reconstruction using NeRF, which is a method for representing scenes. Following previous work, we define a static scene, which is referred to as the canonical scene, and a model which can produce deformations to the canonical scene. The deformation model is what we are interested for demonstrating continuity. To represent the canonical scene, we use a NeRF~\\cite{mildenhall2020nerf}, and the deformation model is our proposed Bezier network. This general approach with deformation networks has been shown to be effective at reconstructing Lambertian scenes with continuous movement as in D-NeRF~\\cite{pumarola2020dnerf} and NR(non-rigid)-NeRF~\\cite{tretschk2021nonrigid}. These works show convincing\nreconstructions of moving scenes, predicting accurate reconstructions on both real\nand synthetic scenes. These methods do not have an analytic form, relying purely on learned components but we\nare also interested in analyzing and modifying the movement itself, such as\nexaggerating movement, or clustering regions based on similar movement. These methods are capable of each of these tasks, but must be extended in order to handle each case. In contrast, classical animation tools are designed to handle all of these problems.\n\nIn animation, there are existing tools for creating realistic movement with a high-degree of\ncontrol for artists and animators. For example, there are tools such as keyframing, splines,\nand other techniques. These tools allow artists and animators to breathe life into animation,\nwith a high degree of control. 
Mathematical constructs such as splines have also been thoroughly\nstudied to understand their behaviour and how they can be manipulated.\n\nWe take the interpretable animation techniques of Bezier splines as a method to enforce continuous interpolation in Dynamic NeRF, finding that we are able to get comparable performance without additional cost in memory.\n%We also design a new architecture for long duration $C^0$-continuous signal reconstruction.\nIn summary, our contributions are as follows:\n\n\\begin{enumerate}\n    \\item A general architecture for $C^0, C^1$ continuity over a continuous domain.\n    \\item An application of this architecture combined with D-NeRF to enforce continuity of movement, a canonical NeRF, and an analytic form at no additional cost, while slightly outperforming the original.\n    \\item An evaluation of our architecture against an implementation of D-NeRF which performs comparably to the original.\n\\end{enumerate}\n", "meta": {"hexsha": "b2c481a96bcd64a38d564e8ebbb03c7c637e17df", "size": 3849, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "c0_paper/intro.tex", "max_stars_repo_name": "JulianKnodt/nerf_atlas", "max_stars_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 57, "max_stars_repo_stars_event_min_datetime": "2021-05-25T12:57:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T06:27:44.000Z", "max_issues_repo_path": "c0_paper/intro.tex", "max_issues_repo_name": "JulianKnodt/nerf_atlas", "max_issues_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-26T22:28:40.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-29T20:51:59.000Z", "max_forks_repo_path": "c0_paper/intro.tex", "max_forks_repo_name": "JulianKnodt/nerf_atlas", "max_forks_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-05-25T12:36:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-28T04:20:12.000Z", "avg_line_length": 128.3, "max_line_length": 773, "alphanum_fraction": 0.8142374643, "num_tokens": 796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.752012562644147, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.5804360067701847}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{October 3, 2014}\n\\maketitle\nif $\\liminf x_n=L$ then there exists $\\{x_{n_k}\\}$ such that $\\lim x_{n_k}=L$\n\n$l=\\liminf x_n=\\lim(inf\\{\\underbrace{x_{n_1},x_{n_2},x_{n_3},\\dots}_{c_n}\\}$\n\nwhy not just let $c_n$ be the subseuence? because $c_n$ may not be equal to any of the $x_k$ in the sequence\n\n$c_n=\\inf\\{x_{n_1},x_{n_2},\\dots\\}$ give $\\varepsilon=2^{-n}$ there exists $x_{n_k}\\in\\{x_{n_1},x_{n+1},x_{n+2},\\dots\\}$ such that $\\left\\lvert c_n-x_{n_k}\\right\\rvert<2^{-n}$ by def of infinum\n\nwe has a sequence $\\{c_n\\}$ given $\\varepsilon>0$ there exists $N$ such that $\\left\\lvert c_n-L\\right\\rvert<\\varepsilon$ if $n\\ge N$. we approximate each $c_n$ by some $x_{n_k}$ from the oricinal sequence sutch that ....\n\n\\section*{convergence test for series}\nfirst we talk about series with positive terms $\\sum\\limits_{k=1}^\\infty{a_k}$, $s_n=\\sum\\limits_{k=1}^n{a_k}$. So if $s_n$ is bounded about then the series is convergent. and if not, it is divergent.\n\ngeometric series $\\sum\\limits_{n=0}^\\infty{r^n}$ is convergent if $\\left\\lvert r\\right\\rvert<1$. $s_n=\\sum\\limits_{k=0}n{r^k}=1+r+r^2+\\dots+r^n, rs_n=r+r^2+r^3+\\dots, sn-rSn=1-r^{n+1}$\n\n$s_n=\\frac{1-r^{n+1}}{1-r}\\to\\frac{1}{1-r}$\n\n\n\\section*{comparison test}\nif $\\forall n, |a_n|\\le b_n$\n\\begin{itemize}\n\\item\n  if $\\sum\\limits_{n=1}^\\infty{b_n}$ is convergent then $\\sum\\limits_{n=1}^\\infty{a_n}$ is convergent,\n  \\item\n  if $\\sum\\limits{a_n}$ is divergent, so is $\\sum\\limits{b_n}$.\n\\end{itemize}\n\\section*{3.2.b}\nshow that if $\\left(|a_n|\\right)_{n=1}^\\infty$ is summable then so is $\\left(a_n\\right)_{n=1}^\\infty$.\n\\begin{align*}\n  \\sum\\limits_{k=n+1}^m{|a_k|}&<\\varepsilon\\text{ for all }N\\le n\\le m \\text{ because is is summable}\\\\\n  \\left\\lvert\\sum\\limits_{k=n+1}^m{a_k}\\right\\rvert&\\le\\sum\\limits_{k=n+1}^m{|a_k|}<\\varepsilon\n\\end{align*}\nso then $\\sum\\limits{a_k}$ is also cauchy and summable\n\n\\section*{cauchy-schwartz inequality}\n$\\sum\\limits_{k=1}^n{a_kb_k}\\le \\left(\\sum\\limits_{k=1}^n{a_k^2}\\right)^{1/2}\\left(\\sum\\limits_{k=1}^n{b_k^2}\\right)^{1/2}$\n\n\\section*{3.2.f}\n\n\\section*{leibniz test for alternating series}\nif $\\{a_n\\}$ is a monotone decreasing sequence of positive terms with the $\\lim a_n=0$ then $\\sum\\limits_{n=1}^\\infty{(-1)^na_n}$ is convergent\n\n\\section*{note!} a sequence my have the property $\\lim |a_n-a_{n+!}|=0$ but not be cauchy\n\\section*{3.2.h}\nShow that if $\\sum\\limits_{n=1}^\\infty{a_n}$ and $\\sum\\limits_{n=1}^\\infty{b_n}$ are series with $b_n\\ge 0$ such that $\\limsup\\limits_{n\\to\\infty}<\\infty$ and $\\sum\\limits_{n=1}^\\infty{b_n}<\\infty$, then the series $\\sum\\limits_{n=1}^\\infty{a_n}$ converges.\n\n\\begin{align*}\n  \\left\\lvert\\left(\\sup\\limits_{k\\ge n}\\frac{|a_k|}{b_k}\\right)-L\\right\\rvert<\\varepsilon\\\\\n  \\left(\\sup\\limits_{k\\ge n}\\frac{|a_k|}{b_k}\\right)<L+\\varepsilon\\\\\n  \\frac{|a_k|}{b_k}<L\\varepsilon\\\\\n  |a_k|<(L+\\varepsilon) b_k\\\\\n\\end{align*}\n\\section*{3.2.j}\n$\\liminf\\frac{a_n+1}{a_n}\\le\\liminf{a_n^{\\frac{1}{n}}}\\le\\limsup a_n^{\\frac{1}{n}}\\le \\limsup\\frac{a_n+1}{a_n}$.\n\\subsection*{step 1}\nif $x\\ge 
r$ for all $r>b$ then $x$ is a lower bound for the set $\\{r\\in \\mathbb{R}:r>b\\}$, $x\\le\\inf\\{r\\in\\mathbb{R}:r>b\\}=b$\n\nwe will show that if $\\limsup \\frac{a_{n+1}}{a_n}<r$ then $\\limsup a_n^{\\frac{1}{n}}\\le r$ and then apply step one.\n\nlet $r>\\limsup \\frac{a_{n+1}}{a_n}$ then $\\exists N$ such that $r>\\frac{a_{n+1}}{a_n}\\forall n\\ge N$\n\\begin{align*}\n  a_{N+1}<ra_N\\\\\n  a_{N+2}<ra_{N+1}\\le r^2a_{N}\\\\\n  a_{N+k}<r^{k}a_N\\\\\n  a_{N+k}^{\\frac{1}{N+k}}<(r^ka_N)^{\\frac{1}{N+k}}\n\\end{align*}\n\\section*{quiz from 10/1/2014}\n$L_k\\to L$ then $\\{x_n\\}$ such that $\\forall k, \\exists$ a subsequence of $\\{x_n\\}$ converging to $L_k$. prove that $\\{x_n\\}$ has a subsequence converging to $L$.\n\ngiven $\\varepsilon>0\\exists N_0$ such that $\\left\\lvert L_k-L\\right\\rvert<\\varepsilon$ if $k\\ge N_0$\n\n$\\left\\lvert x_{N_k}-L\\right\\rvert\\le\\left\\lvert x_{N_k}-L_k\\right\\rvert+\\left\\lvert L_k-L\\right\\rvert<2\\varepsilon$\n\n\\section*{example}\nlet $A,B\\subseteq \\mathbb{R}$, prove that $\\sup A\\le\\inf B$, if $\\forall a\\in A, b\\in B, a\\le b$\n\n\\section*{3.3.5}\nany rearrangement of an absolutely convergent series converges to the same limit\n\\subsubsection*{proof}\nlet $\\sum\\limits{a_n}=L<\\infty$. We know $\\sum\\limits{|a_n|}$ is convergent (not necessarily to $L$). by the cauchy criterion for series $\\forall\\varepsilon>0\\exists N$ such that $\\left(\\sum\\limits_{n=N+1}^\\infty{|a_n|}\\right)<\\varepsilon$\n\n$\\pi:\\mathbb{N}\\to\\mathbb{N}$ is bijective, the rearranged series is $\\sum\\limits_{n=1}^{\\infty}{a_{\\pi(n)}}$ and $\\{a_1\\dots a_N\\}\\subseteq\\{a_{\\pi(1)}\\dots a_{\\pi(M)}\\}$\n\n\\section*{3.3.7 rearrangement theorem}\nlet $\\sum\\limits{a_n}=L<\\infty$ and define $b_n=(a_n\\ge 0)?a_n:0$ and $c_n=(a_n<0)?a_n:0$\n\nconsider the series $\\sum\\limits{b_n}$ and $\\sum\\limits{|c_n|}$\n\\subsection*{case 1}\nboth convergent\n\n$\\sum\\limits{|a_n|}=\\sum\\limits{b_n}+\\sum\\limits{|c_n|}$ which is convergent, which contradicts the fact that $a_n$ is conditionally convergent\n\\subsection*{case 2}\none convergent, one divergent\n\nassume $\\sum\\limits{|c_n|}=A<\\infty$ and $\\sum\\limits{b_n}$ is divergent to $+\\infty$\n\ngiven any $R\\in\\mathbb{N}$ big, $\\exists N$ such that $\\sum\\limits_{n=1}^N{b_n}>R+A$, then we pick $M$ big enough so that $\\{b_1,\\dots,b_N\\}\\subseteq\\{a_1,a_2,\\dots,a_M\\}$ and $\\sum\\limits_{n=1}^M{a_n}\\ge\\sum\\limits_{n=1}^N{b_n}-\\sum\\limits{|c_n|}>R$ so $\\sum\\limits{a_n}$ is divergent, which is a contradiction.\n\\subsection*{case 3}\nboth divergent\n\n\\section*{chapter 4}\n$\\mathbb{R}^n=\\{(x_1,x_2,\\dots,x_n),x_i\\in\\mathbb{R}\\}$, vector space (or point in $n$-space).\n\nwith the coordinate wise sum and the product by real numbers (scalars).\n\\begin{align*}\n  (x_1,\\dots, x_n)+(y_1,\\dots,y_n)&=(x_1+y_1,\\dots,x_n+y_n)\\\\\n  \\lambda(x_1,\\dots,x_n)&=(\\lambda x_1,\\dots,\\lambda x_n)\\\\\n  x^{\\to}&=(x_1,\\dots,x_n)=x\n  \\intertext{euclidean norm}\n  ||x||=\\sqrt{x_1^2+\\dots+ x_n^2}\\\\\n  \\intertext{distance from x to y}\n  ||x-y||\n\\end{align*}\n\\subsection*{cauchy-schwarz}\n$\\left\\lvert\\sum\\limits_{j=1}^n{a_jb_j}\\right\\rvert\\le\\left(\\sum\\limits_{j=1}^n{a_j^2}\\right)^{1/2}\\left(\\sum\\limits_{j=1}^n{b_j^2}\\right)^{1/2}$\n\n$|a\\cdot b|\\le||a||||b||$\n\\subsubsection*{dot product}\n$a\\cdot b=\\sum\\limits{a_ib_i}$\n\\subsection*{triangle inequality}\n\\begin{align*}\n  
||x+y||\\le||x||+||y||\n\\end{align*}\n\\subsubsection*{proof}\n\\begin{align*}\n  ||x+y||^2&=\\sum\\limits{(x_i+y_i)^2}\\\\\n  &=(x+y)\\cdot(x+y)\\\\\n  &=x\\cdot x+2x\\cdot y+y\\cdot y\\\\\n  &=||x||^2+2x\\cdot y+||y||^2\\\\\n  &\\le ||x||^2+2||x||||y||+||y||^2\\\\\n  &=(||x||+||y||)^2\n\\end{align*}\n\n\\subsection*{standard orthogonal base of $\\mathbb{R}^n$}\n\\begin{align*}\n  e_1&=<1,0,\\dots,0>\\\\\n  e_2&=<0,1,\\dots,0>\\\\\n  &\\vdots\\\\\n  e_n&=<0,0,\\dots,1>\n\\end{align*}\n\n\\section*{4.2 convergence in $\\mathbb{R}^n$}\ndefinition:\na sequence $\\{x^i\\}$ of points in $\\mathbb{R}^n$ converges to $c\\in \\mathbb{R}^n$ if $\\forall\\varepsilon>0 \\exists N=N(\\varepsilon)\\in\\mathbb{N}$, such that $||x^i-c||<\\varepsilon$ if $i\\ge N$; we say $\\lim x^i=c$.\n\n\\subsection*{4.2.2 lemma}\n$\\lim x^i=a$ if and only if $\\lim ||x^i-a||=0$.\n\n\\subsection*{4.2.3 lemma}\n$\\lim x^i=a$ if and only if $\\forall j=1,\\dots,n, \\lim x_j^i=a_j$ \n\\end{document}\n\n", "meta": {"hexsha": "157c14892bd937ed07f29acd30012f4306ecc1cc", "size": 7362, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-10-13.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-10-13.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-10-13.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.8214285714, "max_line_length": 313, "alphanum_fraction": 0.6560717196, "num_tokens": 3112, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.580397839586811}}
{"text": "\\chapter{Affine varieties as ringed spaces}\nAs in the previous chapter, we are working only over affine varieties in $\\CC$ for simplicity.\n\n\\section{Synopsis}\nGroup theory was a strange creature in the early 19th century.\nDuring the 19th century, a group was literally defined\nas a subset of $\\GL(n)$ or of $S_n$.\nIndeed, the word ``group'' hadn't been invented yet.\nThis may sound ludicrous, but it was true -- Sylow developed his theorems without this notion.\nOnly much later was the abstract definition of a group given,\nan abstract set $G$ which was \\emph{independent} of any embedding into $S_n$,\nand an object in its own right.\n\nWe are about to make the same type of change for our affine varieties.\nRather than thinking of them as an object locked into an ambient space $\\Aff^n$\nwe are instead going to try to make them into an object in their own right.\nSpecifically, for us an affine variety will become a \\emph{topological space}\nequipped with a \\emph{ring of functions} for each of its open sets:\nthis is why we call it a \\textbf{ringed space}.\n\nThe bit about the topological space is not too drastic.\nThe key insight is the addition of the ring of functions.\nFor example, consider the double point from last chapter.\n\n\\begin{center}\n\t\\begin{asy}\n\t\timport graph;\n\t\tsize(5cm);\n\n\t\treal f(real x) { return x*x; }\n\t\tgraph.xaxis(\"$x$\", red);\n\t\tgraph.yaxis(\"$y$\");\n\t\tdraw(graph(f,-2,2,operator ..), blue, Arrows);\n\t\tlabel(\"$\\mathcal V(y-x^2)$\", (0.8, f(0.8)), dir(-45), blue);\n\t\tlabel(\"$\\mathbb A^2$\", (2,3), dir(45));\n\t\tdotfactor *= 1.5;\n\t\tdot(origin, heavygreen);\n\t\\end{asy}\n\\end{center}\n\nAs a set, it is a single point,\nand thus it can have only one possible topology.\nBut the addition of the function ring will let us tell it apart\nfrom just a single point.\n\nThis construction is quite involved, so we'll proceed as follows:\nwe'll define the structure bit by bit onto our existing affine varieties in $\\Aff^n$,\nuntil we have all the data of a ringed space.\nIn later chapters, these ideas will grow up to\nbecome the core of modern algebraic geometry: the \\emph{scheme}.\n\n\\section{The Zariski topology on $\\Aff^n$}\n\\prototype{In $\\Aff^1$, closed sets are finite collections of points.\nIn $\\Aff^2$, a nonempty open set is the whole space minus some finite collection of curves/points.}\n\nWe begin by endowing a topological structure on every variety $V$.\nSince our affine varieties (for now) all live in $\\Aff^n$, all we have to do\nis put a suitable topology on $\\Aff^n$, and then just view $V$ as a subspace.\n\nHowever, rather than putting the standard Euclidean topology on $\\Aff^n$,\nwe put a much more bizarre topology.\n\\begin{definition}\n\tIn the \\vocab{Zariski topology} on $\\Aff^n$,\n\tthe \\emph{closed sets} are those of the form \n\t\\[ \\VV(I) \\qquad\\text{where}\\quad I \\subseteq \\CC[x_1, \\dots, x_n]. 
\\]\n\tOf course, the open sets are complements of such sets.\n\\end{definition}\n\n\\begin{example}\n\t[Zariski topology on $\\Aff^1$]\n\tLet us determine the open sets of $\\Aff^1$,\n\twhich as usual we picture as a straight line\n\t(ignoring the fact that $\\CC$ is two-dimensional).\n\n\tSince $\\CC[x]$ is a principal ideal domain, rather than looking at $\\VV(I)$\n\tfor every $I \\subseteq \\CC[x]$, we just have to look at $\\VV(f)$ for a single $f$.\n\tThere are a few flavors of polynomials $f$:\n\t\\begin{itemize}\n\t\t\\ii The zero polynomial $0$ which vanishes everywhere:\n\t\tthis implies that the entire space $\\Aff^1$ is a closed set.\n\t\t\\ii The constant polynomial $1$ which vanishes nowhere.\n\t\tThis implies that $\\varnothing$ is a closed set.\n\t\t\\ii A polynomial $c(x-t_1)(x-t_2)\\dots(x-t_n)$ of degree $n$.\n\t\tIt has $n$ roots, and so $\\{t_1, \\dots, t_n\\}$ is a closed set.\n\t\\end{itemize}\n\tHence the closed sets of $\\Aff^1$ are exactly all of $\\Aff^1$\n\tand finite sets of points (including $\\varnothing$).\n\tConsequently, the \\emph{open} sets of $\\Aff^1$ are\n\t\\begin{itemize}\n\t\t\\ii $\\varnothing$, and\n\t\t\\ii $\\Aff^1$ minus a finite collection (possibly empty) of points.\n\t\\end{itemize}\n\\end{example}\n\nThus, the picture of a ``typical'' open set $\\Aff^1$ might be \n\\begin{center}\n\t\\begin{asy}\n\t\tsize(6cm);\n\t\tpair A = (-9,0); pair B = (9,0);\n\t\tpen bloo = blue+1.5;\n\t\tdraw(A--B, blue, Arrows);\n\t\tdraw(A--B, bloo);\n\t\t// label(\"$\\mathbb V()$\", (0,0), 2*dir(-90));\n\t\topendot((-3,0), bloo);\n\t\topendot((-1,0), bloo);\n\t\topendot((4,0), bloo);\n\t\tlabel(\"$\\mathbb A^1$\", B-(2,0), dir(90));\n\t\\end{asy}\n\\end{center}\nIt's everything except a few marked points!\n\n\\begin{example}[Zariski topology on $\\Aff^2$]\n\tSimilarly, in $\\Aff^2$,\n\tthe interesting closed sets are going to consist\n\tof finite unions (possibly empty) of\n\t\\begin{itemize}\n\t\t\\ii Closed curves,\n\t\tlike $\\VV(y-x^2)$ (which is a parabola), and\n\t\t\\ii Single points, like $\\VV(x-3,y-4)$\n\t\t(which is the point $(3,4)$).\n\t\\end{itemize}\n\tOf course, the entire space $\\Aff^2 = \\VV(0)$ and the empty set $\\varnothing = \\VV(1)$\n\tare closed sets.\n\n\tThus the nonempty open sets in $\\Aff^2$ consist of the \\emph{entire} plane,\n\tminus a finite collection of points and one-dimensional curves.\n\\end{example}\n\\begin{ques}\n\tDraw a picture (to the best of your artistic ability)\n\tof a ``typical'' open set in $\\Aff^2$.\n\\end{ques}\n\nAll this is to say\n\\begin{moral}\n\tThe nonempty Zariski open sets are \\emph{huge}.\n\\end{moral}\nThis is an important difference than what you're used to in topology.\nTo be very clear:\n\\begin{itemize}\n\t\\ii In the past, if I said something like\n\t``has so-and-so property in an open neighborhood of point $p$'',\n\tone thought of this as saying\n\t``is true in a small region around $p$''.\n\t\\ii In the Zariski topology,\n\t``has so-and-so property in an open neighborhood of point $p$''\n\tshould be thought of as saying ``is true for virtually all points,\n\tother than those on certain curves''.\n\\end{itemize}\nIndeed, ``open neighborhood'' is no longer really a accurate description.\nNonetheless, in many pictures to follow,\nit will still be helpful to draw open neighborhoods as circles.\n\nIt remains to verify that as I've stated it, the closed sets actually form a topology.\nThat is, I need to verify briefly that\n\\begin{itemize}\n\t\\ii $\\varnothing$ and $\\Aff^n$ are both 
closed.\n\t\\ii Intersections of closed sets (even infinite) are still closed.\n\t\\ii Finite unions of closed sets are still closed.\n\\end{itemize}\nWell, closed sets are the same as affine varieties,\nso we already know this!\n\n\\section{The Zariski topology on affine varieties}\n\\prototype{If $V = \\VV(y-x^2)$ is a parabola,\n\tthen $V$ minus $(1,1)$ is open in $V$.\n\tAlso, the plane minus the origin is $D(x) \\cup D(y)$.}\n\nAs we said before, by considering a variety $V$ as a subspace of $\\Aff^n$\nit inherits the Zariski topology.\nOne should think of an open subset of $V$ as\n``$V$ minus a few Zariski-closed sets''.\nFor example:\n\\begin{example}[Open set of a variety]\n\tLet $V = \\VV(y-x^2) \\subseteq \\Aff^2$ be a parabola,\n\tand let $U = V \\setminus \\{(1,1)\\}$. We claim $U$ is open in $V$.\n\t\\begin{center}\n\t\t\\begin{asy}\n\t\timport graph;\n\t\tsize(5cm);\n\n\t\treal f(real x) { return x*x; }\n\t\tgraph.xaxis(\"$x$\");\n\t\tgraph.yaxis(\"$y$\");\n\t\tdraw(graph(f,-2,2,operator ..), blue, Arrows);\n\t\tlabel(\"$\\mathcal V(y-x^2)$\", (0.8, f(0.8)), dir(-45));\n\n\t\topendot( (1,1), blue+1);\n\t\t\\end{asy}\n\t\\end{center}\n\tIndeed, $\\tilde U = \\Aff^2 \\setminus \\{(1,1)\\}$ is open in $\\Aff^2$\n\t(since it is the complement of the closed set $\\VV(x-1,y-1)$),\n\tso $U = \\tilde U \\cap V$ is open in $V$.\n\tNote that on the other hand the set $U$ is \\emph{not} open in $\\Aff^2$.\n\\end{example}\n\nWe will go ahead and introduce now a definition\nthat will be very useful later.\n\\begin{definition}\n\tGiven $V \\subseteq \\Aff^n$ an affine variety and $f \\in \\CC[x_1, \\dots, x_n]$,\n\twe define the \\vocab{distinguished open set}\n\t$D(f)$ to be the open set in $V$\n\tof points not vanishing on $f$:\n\t\\[ D(f) = \\left\\{ p \\in V \\mid f(p) \\neq 0 \\right\\} = V \\setminus \\VV(f). 
\\]\n\\end{definition}\nIn \\cite{ref:vakil}, Vakil suggests remembering the\nnotation $D(f)$ as ``doesn't-vanish set''.\n\\begin{example}\n\t[Examples of (unions of) distinguished open sets]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $V = \\Aff^1$ then $D(x)$ corresponds to a line minus a point.\n\t\t\\ii If $V = \\VV(y-x^2) \\subseteq \\Aff^2$,\n\t\tthen $D(x-1)$ corresponds to the parabola minus $(1,1)$.\n\t\t\\ii If $V = \\Aff^2$, then\n\t\t$D(x) \\cup D(y) = \\Aff^2 \\setminus \\{ (0,0) \\}$\n\t\tis the punctured plane.\n\t\tYou can show that this set is \\emph{not} distinguished open.\n\t\\end{enumerate}\n\\end{example}\n\n% There used to be an exercise here about D(f)\n% being a distinguished base\n\n%\\begin{ques}\n%\tGive an example of an open set of $\\Aff^2$\n%\twhich is not a distinguished open set.\n%\t(There was one above already.)\n%\\end{ques}\n\n\n\\section{Coordinate rings}\n\\prototype{If $V = \\VV(y-x^2)$ then $\\CC[V] = \\CC[x,y]/(y-x^2)$.}\n\nThe next thing we do is consider the functions from $V$ to the base field $\\CC$.\nWe restrict our attention to algebraic (polynomial) functions on a variety $V$:\nthey should take every point $(a_1, \\dots, a_n)$ on $V$ to some complex number $P(a_1, \\dots, a_n) \\in \\CC$.\nFor example, a valid function on a three-dimensional affine variety might be $(a,b,c) \\mapsto a$;\nwe just call this projection ``$x$''.\nSimilarly we have a canonical projection $y$ and $z$,\nand we can create polynomials by combining them,\nsay $x^2y + 2xyz$.\n\n\\begin{definition}\n\tThe \\vocab{coordinate ring} $\\CC[V]$ of a variety $V$\n\tis the ring of polynomial functions on $V$.\n\t(Notation explained next section.)\n\\end{definition}\n\nAt first glance, we might think this is just $\\CC[x_1, \\dots, x_n]$.\nBut on closer inspection we realize that \\emph{on a given variety},\nsome of these functions are the same.\nFor example, consider in $\\Aff^2$ the parabola $V = \\VV(y-x^2)$.\nThen the two functions\n\\begin{align*}\n\tV & \\to \\CC \\\\\n\t(x,y) & \\mapsto x^2 \\\\\n\t(x,y) & \\mapsto y\n\\end{align*}\nare actually the same function!\nWe have to ``mod out'' by the ideal $I$ which generates $V$.\nThis leads us naturally to:\n\\begin{theorem}[Coordinate rings correspond to ideal]\n\tLet $I$ be a radical ideal, and $V = \\VV(I) \\subseteq \\Aff^n$.\n\tThen \\[ \\CC[V] \\cong \\CC[x_1, \\dots, x_n] / I.  \\]\n\\end{theorem}\n\\begin{proof}\n\tThere's a natural surjection as above\n\t\\[ \\CC[x_1, \\dots, x_n] \\surjto \\CC[V] \\]\n\tand the kernel is $I$.\n\\end{proof}\nThus properties of a variety $V$ correspond to properties of the ring $\\CC[V]$.\n\n\\section{The sheaf of regular functions}\n\\prototype{Let $V = \\Aff^1$, $U = V \\setminus \\{0\\}$. 
Then $1/x \\in \\OO_V(U)$ is regular on $U$.}\n\nLet $V$ be an affine variety and let $\\CC[V]$ be its coordinate ring.\nWe want to define a notion of $\\OO_V(U)$ for any open set $U$:\nthe ``nice'' functions on any open subset.\nObviously, any function in $\\CC[V]$ will work as a function on $\\OO_V(U)$.\nHowever, to capture more of the structure we want to\nloosen our definition of ``nice'' function slightly\nby allowing \\emph{rational} functions.\n\nThe chief example is that $1/x$ should be a regular function\non $\\Aff^1 \\setminus \\{0\\}$.\nThe first natural guess is:\n\\begin{definition}\n\tLet $U \\subseteq V$ be an open set of the variety $V$.\n\tA \\vocab{rational function} on $U$\n\tis a quotient $f(x) / g(x)$ of two elements $f$ and $g$ in $\\CC[V]$,\n\twhere we require that $g(x) \\neq 0$ for $x \\in U$.\n\\end{definition}\nHowever, the definition is slightly too restrictive;\nwe have to allow for multiple representations:\n\\begin{definition}\n\tLet $U \\subseteq V$ be open.\n\tWe say a function $\\phi : U \\to \\CC$ is a \\vocab{regular function} if for\n\tevery point $p \\in U$, we can find an open set $U_p \\subseteq U$ containing $p$\n\tand a rational function $f_p/g_p$ on $U_p$ such that\n\t\\[ \\phi(x) = \\frac{f_p(x)}{g_p(x)} \\qquad \\forall x \\in U_p. \\]\n\tIn particular, we require $g_p(x) \\neq 0$ on the set $U_p$.\n\tWe denote the set of all regular functions on $U$ by $\\OO_V(U)$.\n\\end{definition}\n\nThus,\n\\begin{moral}\n\t$\\phi$ is regular on $U$ if it is locally a rational function.\n\\end{moral}\n\nThis definition is misleadingly complicated,\nand the examples should illuminate it significantly.\nFirstly, in practice, most of the time we will be able to find\na ``global'' representation of a regular function as a quotient,\nand we will not need to fuss with the $p$'s.\nFor example:\n\\begin{example}\n\t[Regular functions]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Any function in $f \\in \\CC[V]$ is clearly regular,\n\t\tsince we can take $g_p = 1$, $f_p = f$ for every $p$.\n\t\tSo $\\CC[V] \\subseteq \\OO_V(U)$ for any open set $U$.\n\t\t\\ii Let $V = \\Aff^1$, $U_0 = V \\setminus \\{0\\}$.\n\t\tThen $1/x \\in \\OO_V(U_0)$ is regular on $U$.\n\t\t\\ii Let $V = \\Aff^1$, $U_{12} = V \\setminus \\{1,2\\}$. 
Then \n\t\t\\[ \\frac{1}{(x-1)(x-2)} \\in \\OO_V(U_{12}) \\]\n\t\tis regular on $U$.\n\t\\end{enumerate}\n\\end{example}\nThe ``local'' clause with $p$'s is still necessary, though.\n\\begin{example}\n\t[Requiring local representations]\n\t\\label{ex:local_rep}\n\tConsider the variety\n\t\\[ V = \\VV(ab-cd) \\subseteq \\Aff^4 \\]\n\tand the open set $U = V \\setminus \\VV(b,d)$.\n\tThere is a regular function on $U$ given by\n\t\\[\n\t\t(a,b,c,d)\n\t\t\\mapsto\n\t\t\\begin{cases}\n\t\t\ta/d & d \\neq 0 \\\\\n\t\t\tc/b & b \\neq 0.\n\t\t\\end{cases}\n\t\\]\n\tClearly these are the ``same function'' (since $ab=cd$),\n\tbut we cannot write ``$a/d$'' or ``$c/b$''\n\tto express it because we run into divide-by-zero issues.\n\tThat's why in the definition of a regular function,\n\twe have to allow multiple representations.\n\\end{example}\n\nIn fact, we will see later on that the definition\nof a regular function is a special case of a more\ngeneral construction called \\emph{sheafification},\nin which ``sheaves of functions which are $P$'' are transformed\ninto ``sheaves of functions which are \\emph{locally} $P$''.\n\n\\section{Regular functions on distinguished open sets}\n\\prototype{Regular functions on $\\Aff^1 \\setminus \\{0\\}$ are $P(x) / x^n$.}\nThe division-by-zero, as one would expect,\nessentially prohibits regular functions on the entire space $V$;\ni.e.\\ there are no regular functions in $\\OO_V(V)$\nthat were not already in $\\CC[V]$.\nActually, we have a more general result which computes the\nregular functions on distinguished open sets:\n\\begin{theorem}\n\t[Regular functions on distinguished open sets]\n\t\\label{thm:reg_func_distinguish_open}\n\tLet $V \\subseteq \\Aff^n$ be an affine variety\n\tand $D(g)$ a distinguished open subset of it.\n\tThen\n\t\\[ \\OO_V( D(g) ) = \\left\\{ \\frac{f}{g^n}\n\t\t\\mid f \\in \\CC[V] \\text{ and } n \\in \\ZZ \\right\\}.  \\]\n\tIn particular, $\\OO_V(V) = \\OO_V(D(1)) \\cong \\CC[V]$.\n\\end{theorem}\nThe proof of this theorem requires the Nullstellensatz,\nso it relies on $\\CC$ being algebraically closed.\nIn fact, a counter-example is easy to find if we replace $\\CC$ by $\\RR$:\nconsider $\\frac{1}{x^2+1}$.\n\\begin{proof}\n\tObviously, every function of the form $f/g^n$ works,\n\tso we want the reverse direction.\n\tThis is long, and perhaps should be omitted on a first reading.\n\n\tHere's the situation.\n\tLet $U = D(g)$.\n\tWe're given a regular function $\\phi$, meaning at every point $p \\in D(g)$,\n\tthere is an open neighborhood $U_p$ on which $\\phi$ can be expressed\n\tas $f_p / g_p$ (where $f_p, g_p \\in \\CC[V]$).\n\tThen, we want to construct an $f \\in \\CC[V]$ and an integer $n$\n\tsuch that $\\phi = f/g^n$.\n\n\tFirst, look at a particular $U_p$ and $f_p / g_p$.\n\tShrink $U_p$ to a distinguished open set $D(h_p)$.\n\tThen, let $\\wt f_p = f_p h_p$ and $\\wt g_p = g_p h_p$.\n\tThus we have that\n\t\\[ \\frac{\\wt f_p}{\\wt g_p} \\text{ is correct on }\n\t\tD(h_p) \\subseteq U \\subseteq X. \\]\n\tThe upshot of using the modified $f_p$ and $g_p$ is that:\n\t\\[ \\wt f_p \\wt g_q = \\wt f_q \\wt g_p \\qquad \\forall p,q \\in U. \\]\n\tIndeed, it is correct on $D(h_p) \\cap D(h_q)$ by definition,\n\tand outside this set both the left-hand side and right-hand side are zero.\n\n\tNow, we know that $D(g) = \\bigcup_{p \\in U} D(\\wt g_p)$, i.e.\\\n\t\\[ \\VV(g) = \\bigcap_{p \\in U} \\VV(\\wt g_p). 
\\]\n\tSo by the Nullstellensatz we know that\n\t\\[ g \\in \\sqrt{(\\wt g_p : p \\in U)}\n\t\t\\implies \\exists n : g^n \\in (\\wt g_p : p \\in U). \\]\n\tIn other words, for some $n$ and $k_p \\in \\CC[V]$ we have\n\t\\[ g^n = \\sum_p k_p \\wt g_p \\]\n\twhere only finitely many $k_p$ are not zero.\n\tNow, we claim that\n\t\\[ f \\defeq \\sum_p k_p \\wt f_p \\]\n\tworks.\n\tThis just observes by noting that for any $q \\in U$, we have\n\t\\[\n\t\tf \\wt g_q - g^n \\wt f_q\n\t\t= \\sum_p k_p(\\wt f_p \\wt g_q - \\wt g_p \\wt f_q)\n\t\t= 0. \\qedhere\n\t\\]\n\\end{proof}\nThis means that the \\emph{global} regular functions\nare just the same as those in the coordinate ring:\nyou don't gain anything new by allowing it to be locally a quotient.\n(The same goes for distinguished open sets.)\n\n\\begin{example}[Regular functions on distinguished open sets]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii As said already,\n\t\ttaking $g=1$ we recover $\\OO_V(V) \\cong \\CC[V]$\n\t\tfor any affine variety $V$.\n\t\t\\ii Let $V = \\Aff^1$, $U_0 = V \\setminus \\{0\\}$. Then\n\t\t\\[ \\OO_V(U_0)\n\t\t\t= \\left\\{ \\frac{P(x)}{x^n} \\mid P \\in \\CC[x],\n\t\t\t\\quad n \\in \\ZZ \\right\\}. \\]\n\t\tSo more examples are $1/x$ and $(x+1)/x^3$.\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{ques}\n\tWhy doesn't our theorem on regular functions apply to \\Cref{ex:local_rep}?\n\\end{ques}\n\nThe regular functions will become of crucial importance\nonce we define a scheme in the next chapter.\n\n\\section{Baby ringed spaces}\nIn summary, given an affine variety $V$ we have:\n\\begin{itemize}\n\t\\ii A structure of a set of points,\n\t\\ii A structure of a topological space $V$ on these points, and\n\t\\ii For every open set $U \\subseteq V$, a ring $\\OO_V(U)$.\n\tElements of the rings are functions $U \\to \\CC$.\n\\end{itemize}\nLet us agree that:\n\\begin{definition}\n\tA \\vocab{baby ringed space} is a topological space $X$\n\tequipped with a ring $\\OO_X(U)$ for every open set $U$.\n\tIt is required that elements of the ring $\\OO_X(U)$\n\tare functions $f : U \\to \\CC$;\n\twe call these the \\emph{regular functions} of $X$ on $U$.\n\\end{definition}\nTherefore, affine varieties are baby ringed spaces.\n\\begin{remark}\n\tThis is not a standard definition. 
Hehe.\n\\end{remark}\n\nThe reason this is called a ``baby ringed space''\nis that in a \\emph{ringed space},\nthe rings $\\OO_V(U)$ can actually be \\emph{any rings},\nbut they have to satisfy a set of fairly technical conditions.\nWhen this happens, it's the $\\OO_V$ that does all the work;\nwe think of $\\OO_V$ as a type of functor called a \\emph{sheaf}.\n\nSince we are only studying affine/projective/quasi-projective varieties\nfor the next chapters, we will just refer to these as baby ringed spaces\nso that we don't have to deal with the entire definition.\nThe key concept is that we want to think of these varieties\nas \\emph{intrinsic objects}, free of any embedding.\nA baby ringed space is philosophically the correct thing to do.\n\nAnyways, affine varieties are baby ringed spaces $(V, \\OO_V)$.\nIn the next chapter we'll meet projective and quasi-projective\nvarieties, which give more such examples of (baby) ringed spaces.\nWith these examples in mind, we will finally lay down\nthe complete definition of a ringed space,\nand use this to define a scheme.\n\n\\section\\problemhead\n\n\\begin{dproblem}\n\tShow that for any $n \\ge 1$ the Zariski topology of $\\Aff^n$\n\tis \\emph{not} Hausdorff.\n\\end{dproblem}\n\n\\begin{dproblem}\n\tLet $V$ be an affine variety,\n\tand consider its Zariski topology.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Show that the Zariski topology is \\vocab{Noetherian},\n\t\tmeaning there is no infinite descending chain\n\t\t$Z_1 \\supsetneq Z_2 \\supsetneq Z_3 \\supsetneq \\dots$ of closed subsets.\n\t\t\\ii Prove that a Noetherian topological space is compact.\n\t\tHence varieties are topologically compact.\n\t\\end{enumerate}\n\\end{dproblem}\n\n\\begin{sproblem}\n\t[Punctured Plane]\n\t\\label{prob:punctured_plane}\n\tLet $V = \\Aff^2$ and let $X = \\Aff^2 \\setminus \\{(0,0)\\}$ be the punctured plane\n\t(which is an open set of $V$).\n\tCompute $\\OO_V(X)$.\n\\end{sproblem}\n", "meta": {"hexsha": "2be7bf2a5467208e14f4f5aee89468fb598564b1", "size": 19654, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/alg-geom/zariski.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/alg-geom/zariski.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/alg-geom/zariski.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.0154738878, "max_line_length": 108, "alphanum_fraction": 0.6907499746, "num_tokens": 6214, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.8006920020959544, "lm_q1q2_score": 0.5803978326918692}}
{"text": "\\section{Set theory}\\label{sec:set_theory}\n\nSets are ubiquitous in mathematics yet set theory itself is quite complicated. Attention needs to be put to define a \\hyperref[def:first_order_theory]{logical theory} of sets that is both useful and \\hyperref[def:first_order_theory_consistency]{consistent}.\n\nWe first use the simplicity of \\hyperref[def:naive_set_theory]{na\\\"ive set theory} to introduce some fundamental definitions. This theory turns out to be inconsistent due to \\fullref{thm:russels_paradox}.\n\nWe later introduce the more, sophisticated \\hyperref[def:zfc]{Zermelo-Fraenkel set theory} (\\logic{ZFC}). It is unclear whether the latter theory is consistent, however no contradictions have yet been discovered. Furthermore, the restrictions of \\logic{ZFC} (mostly the \\hyperref[def:zfc/foundation]{axiom of foundation}) actually make it possible to define a model of \\hyperref[def:peano_arithmetic]{Peano arithmetic} and extend the concept of a number to \\hyperref[def:ordinal]{ordinals} and \\hyperref[def:cardinal]{cardinals}. We then discuss \\hyperref[def:cumulative_hierarchy]{von Neumann's cumulative hierarchy}, which can be used to build models of set theory.\n\nWe also extend \\logic{ZFC} with \\hyperref[def:grothendieck_universe]{Grothendieck universes} to obtain the theory \\hyperref[def:axiom_of_universes]{\\logic{ZFC+U}}, upon which we construct the concept of a \\hyperref[def:category]{category}.\n\nBy \\enquote{set theory} we mean na\\\"ive set theory, \\logic{ZF}, \\logic{ZFC}, \\logic{ZFC+U} or further variations.\n\n\\begin{remark}\\label{rem:set_definition_recursion}\n  The relation between \\hyperref[subsec:first_order_logic]{first-order logic} and set theory is remarkably circular.\n\n  \\begin{itemize}\n    \\item Set theory is defined as a \\hyperref[def:first_order_theory]{theory} of first-order logic.\n\n    \\item First-order logic itself is defined via sets, for example via the \\hyperref[def:first_order_language]{language of first-order logic} or the definitions of a \\hyperref[def:first_order_structure]{first-order structure} or of a \\hyperref[def:deductive_system]{deductive system}.\n  \\end{itemize}\n\n  We utilize the concept of \\hyperref[rem:metalogic]{metalogic} to resolve this circularity:\n  \\begin{itemize}\n    \\item Using the metatheory where we assume the availability of first-order logic, we build in the object logic certain special sets within set theory that can be used as models of \\logic{ZFC} --- see \\fullref{thm:cumulative_hierarchy_model_of_zfc}. In particular, the theorem implies that the existence of a \\hyperref[rem:strongly_inaccessible_cardinal]{regular strong limit cardinal} in the metatheory can prove that \\logic{ZFC} is consistent.\n\n    \\item We declare such a set as \\enquote{the} model of \\logic{ZFC} we are interested in. At this point, this set that we have built in the object logic becomes the ambient universe in the metatheory.\n\n    \\item Now we can, within the metatheory, define \\hyperref[subsec:first_order_logic]{first-order logic}.\n  \\end{itemize}\n\n  An important point is that we should restrict ourselves to \\hyperref[rem:standard_model_of_set_theory]{standard} \\hyperref[rem:transitive_model_of_set_theory]{transitive} models in order to avoid very counterintuitive results.\n\n  Another important point is that it is possible that the set theory which we use within the metatheory in order to provide models of \\logic{ZFC} is itself inconsistent. 
In that case, due to \\eqref{eq:thm:minimal_propositional_negation_laws/efq}, every theorem can be derived, and a proof of the consistency of \\logic{ZFC} would be vacuous.\n\n  Note that we need separate additional assumptions for the \\hyperref[def:axiom_of_universes]{axiom of universes} because we cannot build a model for it using the cumulative hierarchy.\n\\end{remark}\n\n\\begin{remark}\\label{rem:first_order_theories_in_zfc}\n  Instead of discussing first-order theories like the \\hyperref[def:group/theory]{theory of groups}, we can reformulate the definition within set theory and add a \\hyperref[rem:predicate_formula]{predicate formula} \\( \\op{IsGroup}[\\xi] \\) which is valid only for groups. That is, \\( \\op{IsGroup}[\\xi] \\) holds if and only if the variable \\( \\xi \\) satisfies the set-based definition of a group given in \\fullref{def:group}.\n\n  This is a natural approach, and we usually assume it implicitly. Furthermore, it makes no sense to speak about concepts like the \\hyperref[thm:substructures_form_complete_lattice]{lattice of subgroups} or the \\hyperref[def:cardinal]{cardinality} of a group otherwise.\n\n  This is also a natural framework for defining \\hyperref[def:topological_space]{topological spaces} and \\hyperref[def:category]{categories} via \\hyperref[def:quiver]{quivers}. Some theories, like that of \\hyperref[def:partially_ordered_set]{partially ordered sets}, are first-order theories; \\hyperref[def:well_ordered_set]{well-ordered sets}, however, are an extension that requires an ambient set theory.\n\n  Thus, roughly, set theory allows us to use \\hyperref[rem:higher_order_logic]{higher-order relations and types} in first-order logic.\n\\end{remark}\n", "meta": {"hexsha": "613bb90a31308c65b7f246c28baf2f3167962a8e", "size": 5100, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/set_theory.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/set_theory.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/set_theory.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.5106382979, "max_line_length": 667, "alphanum_fraction": 0.7949019608, "num_tokens": 1291, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.5803978279328944}}
{"text": "\\chapter{Assignment-Hamiltonian Equation of Motion}\n\\textbf{Assignment 01}\n\\begin{enumerate}\n\t\\item A particle of inass $m$ moves inside a bowl under gravity. If the surface of the bowl is given by the equation $z=\\frac{1}{2} a\\left(x^{2}+y^{2}\\right)$, where $a$ is a constant.\\\\\n\t(A) Write down Lagrangian of the system in cylindrical co-ordinate.\\\\\n\t(a) Identified the cyclic coordinate and law of conservation of momentum.\\\\\n\t(b) Write down hamiltonion of the system in cylindrical coordinate system.\n\t \\item A bike stuntman rides inside a well of frictionless surface given by $z=a\\left(x^{2}+y^{2}\\right)$, under the action of gravity acting in the negative $z$ direction i.e $\\vec{g}=-g \\hat{z}$. What speed should be maintain to be able to ride at a constant height $z_{0}$ without falling down?\n\t \\item A block of mass $M$ is suspended vertically from the cciting by a spring of spring constant $k$. A pendulum of mass $m$ is attached to the bottom of this block by a mass less rod of length $l$ (as shown in the attached figure). Assume that the block can move only vertically and that the motion of the pendulum takes place in a fixed vertical plane.\\\\\n\t (a) Write down Lagrangian of the system in suitable generalized coordinate.\\\\\n\t (b) write down Lagranges equation of motion.\n\t \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=5cm,width=2.5cm]{Assignment-HE-01}\n\t \\end{figure}\n\t \\item A particle of mass $m$ is attached to fixed point $O$ by a weightless inextensible string of length $a$. It is rotating under the gravity as shown in the figure.\\\\\n\t (a) Write down The Lagrangian of the system in spherical co-ordinate.\\\\\n\t (b) write down Hamiltonian of the system.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=4.5cm,width=3.5cm]{Assignment-HE-02}\n\t \\end{figure}\n\t \\item Particle of mass $m$ slides under the gravity without friction along the parabotic path\n\t $y=a x^{2}$ axis shown in the figure. Here $a$ is a constant\\\\\n\t (a) Write down Lagrangian of the system .\\\\\n\t (b) Write down Lagranges equation of motion.\\\\\n\t (c) write down Hamiltonian of the system.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=3.5cm,width=4cm]{Assignment-HE-03}\n\t \\end{figure}\n\t \\item The Lagrangian of a particle of mass $m$ moving in one dimension is $L=\\exp (\\alpha t)\\left[\\frac{m \\dot{x}^{2}}{2}-\\frac{k x^{2}}{2}\\right]$, where $\\alpha$ and $k$ are positive constants.\\\\\n\t (a) Find the Lagranges equation of motion of the particle.\\\\\n\t (b) Write down Hamiltonian of the system.\n\t \\item A simple pendulum of mass $m_{2}$, with a mass $m_{1}$ at the point of support which can move on a horizontal line lying in the plane in which $m_{2}$ moves (Figure given)\\\\\n\t (a) Write down Lagrangian of system.\\\\\n\t (b) Which coordinate is cyclic co-ordinate.\\\\\n\t (c) Find the generalized momentum.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=4cm,width=5cm]{Assignment-HE-04}\n\t \\end{figure}\n\t \\item The point of support of a simple pendulum of length $b$. 
moves on a massless rim of radius $a$ rotating with constant angular velocity $\\omega$.\\\\\n\t (a) Obtain the expressions for the Cartesian components of the velocity and acceleration of the mass $m$.\\\\\n\t (b) Write down the expression for the Lagrangian of the system as a function of $\\omega$ and $\\theta$.\\\\\n\t (c) Write down the equation of motion of the system.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=5.5cm,width=5.5cm]{Assignment-HE-05}\n\t \\end{figure}\n\t \\item As shown in the figure, the particle of mass $m_{2}$ moves on a vertical axis and the whole system rotates about this axis with a constant angular velocity $\\omega$.\n\t  \\begin{tasks}(1)\n\t \t\\task[\\textbf{a.}] What is the number of degrees of freedom of the system?\n\t \t\\task[\\textbf{b.}] Write down the Lagrangian of the system in spherical polar coordinates.\n\t \t\\task[\\textbf{c.}] Write down Lagrange's equations of the system.\n\t \t\\task[\\textbf{d.}] Write down the Hamiltonian of the system.\n\t \\end{tasks}\n  \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[height=6cm,width=3.6cm]{Assignment-HE-06}\n \\end{figure}\n\t \\item A bead of mass $m$ can slide without friction along a massless rod kept at $45^{\\circ}$ with the vertical as shown in the figure. The rod is rotating about the vertical axis with a constant angular speed $\\omega$. At any instant $r$ is the distance of the bead from the origin.\\\\\n\t (a) Find the Lagrangian of the system.\\\\\n\t (b) Find the momentum conjugate to $r$.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=4.5cm,width=6.5cm]{Assignment-HE-07}\n\t \\end{figure}\n\t \\item A bead slides along a smooth wire bent in the shape of a parabola $z=c r^{2}$ (figure). The bead rotates in a circle of radius $R$ when the wire is rotating about its vertical symmetry axis with angular velocity $\\omega$. Find the value of $c$.\n\t  \\begin{figure}[H]\n\t \t\\centering\n\t \t\\includegraphics[height=6cm,width=5.2cm]{Assignment-HE-08}\n\t \\end{figure}\n\\end{enumerate}\n\\textbf{Assignment 02}\n\\begin{enumerate}\n\t\\item A system is governed by the Hamiltonian\n\t$$\n\tH=\\frac{1}{2}\\left(p_{x}-a y\\right)^{2}+\\frac{1}{2}\\left(p_{y}-b x\\right)^{2}\n\t$$\n\twhere $a$ and $b$ are constants and $p_{x}$, $p_{y}$ are momenta conjugate to $x$ and $y$ respectively. For what values of $a$ and $b$ will the quantities $\\left(p_{x}-3 y\\right)$ and $\\left(p_{y}+2 x\\right)$ be conserved?\n\t\\item A particle of mass $m$ and coordinate $q$ has the Lagrangian $L=\\frac{1}{2} m \\dot{q}^{2}-\\frac{\\lambda}{2} q \\dot{q}^{2}$, where $\\lambda$ is a constant. Find the Hamiltonian of the system.\n\t\\item The coordinates and momenta $x_{i}, \\quad p_{i}(i=1,2,3)$ of a particle satisfy the canonical Poisson bracket relations $\\left\\{x_{i}, p_{j}\\right\\}=\\delta_{i j}$. If $C_{1}=x_{2} p_{3}+x_{3} p_{2}, C_{2}=x_{1} p_{2}-x_{2} p_{1}$ and $C_{3}=x_{1} p_{3}+x_{3} p_{1}$, then\\\\\n\t(a) Find the value of $\\left\\{C_{1}, C_{2}\\right\\}$\\\\\n\t(b) Find the relation between $C_{1}, C_{2}$ and $C_{3}$.\n\t\\item If the Hamiltonian of the harmonic oscillator is given by $H=\\frac{p^{2}}{2 m}+\\frac{1}{2} m \\omega^{2} x^{2}$ and the quantity $u$ is defined by $u(x, p, t)=\\ln (p+i m \\omega x)-i \\omega t$, then check whether $u$ is conserved during the motion.\n\t\\item The Hamiltonian of a particle of unit mass moving in the $x y$-plane is given to be: $H=x p_{x}-y p_{y}-\\frac{1}{2} x^{2}+\\frac{1}{2} y^{2}$ in suitable units. 
The initial values are given to be $(x(0), y(0))=(1,1)$ and $\\left(p_{x}(0), p_{y}(0)\\right)=\\left(\\frac{1}{2}, \\frac{1}{2}\\right)$.\\\\\n\t(a) Discuss the equations of motion.\\\\\n\t(b) Plot the curves traced out by the particle in the $x y$-plane and the $p_{x} p_{y}$-plane.\n\t\\item A mechanical system is described by the Hamiltonian $H(q, p)=\\frac{p^{2}}{2 m}-\\frac{1}{2} m \\omega^{2} q^{2}$. Find the Hamiltonian in the new coordinate $Q$ and momentum $P$ that result from the canonical transformation generated by $F(q, Q)=-\\frac{Q}{q}$.\n\t\\item The Hamiltonian of the system is given by $H=\\frac{p_{x}^{2}}{2 m}+\\frac{p_{y}^{2}}{2 m}+\\frac{m \\omega^{2}\\left(x^{2}+y^{2}\\right)}{2}$. Check whether the following quantities are conserved during the motion or not.\\\\\n\t(a) $S_{1}=\\frac{1}{2}\\left(x p_{y}-y p_{x}\\right)$\\\\\n\t(b) $S_{2}=\\frac{1}{2 m \\omega}\\left(p_{x} p_{y}+m^{2} \\omega^{2} x y\\right)$\\\\\n\t(c) $S_{3}=\\frac{1}{4 m \\omega}\\left[p_{y}^{2}-p_{x}^{2}+m^{2} \\omega^{2}\\left(y^{2}-x^{2}\\right)\\right]$\\\\\n\t\\item The transformation equations between two sets of coordinates are\n\t$$\n\t\\begin{aligned}\n\t&Q=\\log \\left(1+q^{1 / 2} \\cos p\\right) \\\\\n\t&P=2\\left(1+q^{1 / 2} \\cos p\\right) q^{1 / 2} \\sin p\n\t\\end{aligned}\n\t$$\\\n\t(a) Show that $Q, P$ define a canonical transformation of $q, p$.\\\\\n\t(b) Show that the function that generates this transformation is $F_{3}=-\\left(e^{Q}-1\\right)^{2} \\tan p$\n\t\\item Prove that the following transformation is canonical:\\\\\n\t(i) $Q_{1}=q_{1}, P_{1}=p_{1}-2 p_{2}$,\\\\\n\t(ii) $Q_{2}=p_{2}, P_{2}=-2 q_{1}-q_{2}$\n\t\\item  The Hamiltonian of the harmonic oscillator is given by $H=\\frac{1}{2 m}\\left(p^{2}+m^{2} \\omega^{2} q^{2}\\right)$\\\\\n \tA. \\begin{tasks}(1)\n\t\t\\task[\\textbf{a.}] Write down the equation of motion\n\t\t\\task[\\textbf{b.}] Plot the phase space for a given energy $E$\n\t\t\\task[\\textbf{c.}] Plot $q$ vs $t$ with initial conditions $q=q_{0}$ and $\\dot{q}=0$ at $t=0$\n\t\t\\task[\\textbf{d.}] Plot $p$ vs $t$\n\t\\end{tasks}\n\tB.\t \\begin{tasks}(1)\n\t\t\\task[\\textbf{a.}] If a generating function is defined as $F_{1}=\\frac{m \\omega q^{2}}{2} \\cot Q$, then find the canonical transformation.\\\\\n\t\tThen, using this canonical transformation with $H=\\frac{1}{2 m}\\left(p^{2}+m^{2} \\omega^{2} q^{2}\\right)$:\n\t\t\\task[\\textbf{b.}] Find the new Hamiltonian $K(Q, P, t)$\n\t\t\\task[\\textbf{c.}] Plot $Q$ vs $t$\n\t\t\\task[\\textbf{d.}] Plot $P$ vs $t$\n\t\t\\task[\\textbf{e.}] Plot the phase space between $P$ and $Q$\n\t\\end{tasks}\n\\item If the generating function is given by $F(q, P)=q^{2} P$ and the Hamiltonian of the system is given by $H(q, p)=\\frac{p^{2}}{2 \\alpha q^{2}}+\\frac{\\beta q^{4}}{4}$, where $\\alpha$ and $\\beta$ are constants, then find the equations of motion in terms of $Q, P$.\n\\item If the Lagrangian of the system is given by $L=\\frac{1}{2} m \\dot{x}^{2}+m\\left(\\dot{y}^{2}+\\dot{z}^{2}\\right)-\\frac{1}{2} k x^{2}-\\frac{1}{2} k(y+z)^{2}$\n \\begin{tasks}(2)\n\t\\task[\\textbf{a.}] Write down the Hamiltonian of the system.\n\t\\task[\\textbf{b.}] If $L_{z}=x p_{y}-y p_{x}$, find $\\frac{d L_{z}}{d t}$\n\t\\task[\\textbf{c.}] If $L_{x}=y p_{z}-z p_{y}$, find $\\frac{d L_{x}}{d t}$\n\t\\task[\\textbf{d.}] If $L_{y}=z p_{x}-x p_{z}$, find $\\frac{d L_{y}}{d t}$\n\t\\task[\\textbf{e.}] Find $\\frac{d p_{x}}{d t}$ and $\\frac{d p_{y}}{d t}$\n\\end{tasks}\n\\end{enumerate}\n\n", "meta": {"hexsha": 
"68237d5e0aba6bb03c15bf4151e7800447fb1aad", "size": 9537, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Classical Mechanics  -CSIR/chapter/Assignments/Assignment-Hamilltonian Equation of Motion.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Classical Mechanics  -CSIR/chapter/Assignments/Assignment-Hamilltonian Equation of Motion.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Classical Mechanics  -CSIR/chapter/Assignments/Assignment-Hamilltonian Equation of Motion.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.7434210526, "max_line_length": 359, "alphanum_fraction": 0.6702317291, "num_tokens": 3317, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006919925839875, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.5803978210379523}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{pgfplots}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% =======================================================================================\n\\section*{ADM and BSSN variables for the Kasner metric}\n\nA standard form of the Kasner metric is given by\n\\begin{equation*}\n   ds^2 = - dt^2 + t^{2p_1} dx^2 + t^{2p_2} dy^2 + t^{2p_3} dz^2\n\\end{equation*}\nwhere $p_1$, $p_2$ and $p_3$ are constants subject to\n\\begin{align*}\n   1 &= p_1 + p_2 + p_3\\\\\n   1 &= p^1_1 + p^2_2 + p^3_2\n\\end{align*}\n\nThe following Cadabra codes compute various quantities defined in the ADM and BSSN\nformulations of the Einstein equations.\n\nAll of the results are exactly as expected (what else could it give?).\n\nNone of this is new -- the main point of this whole exercise is to use a familar metric\nto explore how standard computations can be implemented using Cadabra.\n\nNone of these results are used by the main evolution codes (in the directories {\\tt\nadm} and {\\tt bssn}) other than to set the initial data (at $t=1$).\n\nThe code that sets the initial data was written by hand (as opposed to the Cadabra\ncodes that generates the Ada procedures used in the evolution codes).\n\n\\clearpage\n\n\\begin{cadabra}\n   {t,x,y,z}::Coordinate.\n   {a,b,c,d,e,f,i,j,k,l,m,n,o,p,q,r,s,u#}::Indices(position=independent,values={t,x,y,z}).\n\n   \\partial{#}::PartialDerivative;\n\n   {p1,p2,p3}::Symbol.\n\n   p1::LaTeXForm(\"p_1\").\n   p2::LaTeXForm(\"p_2\").\n   p3::LaTeXForm(\"p_3\").\n\n   gBar{#}::LaTeXForm(\"{\\bar g}\").\n   ABar{#}::LaTeXForm(\"{\\bar A}\").\n   Aab{#}::LaTeXForm(\"{A}\").\n   phi::LaTeXForm(\"{\\phi}\").\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.\n\n   g_{a b}::Depends(\\partial{#}).\n\n   # -----------------------------------------------------------------\n   # rules used when evaluating components\n\n   DtRule := {D^{t} -> 1}.      
# components of d/dt, zero shift & unit lapse\n\n   gabRule := { g_{t t} = gtt,\n                g_{x x} = gxx,\n                g_{y y} = gyy,\n                g_{z z} = gzz }.\n\n   # -----------------------------------------------------------------\n   # the Kasner metric\n\n   gab := { gtt -> -1,\n            gxx -> t**(2*p1),\n            gyy -> t**(2*p2),\n            gzz -> t**(2*p3),\n            gxy -> 0,\n            gxz -> 0,\n            gyz -> 0,\n            gtx -> 0,\n            gty -> 0,\n            gtz -> 0}.\n\n   # -----------------------------------------------------------------\n   # standard definitions\n\n   Detg := g ->  gxx gyy gzz - gxx gyz gyz\n               - gxy gxy gzz + gxy gxz gyz\n               + gxz gxy gyz - gxz gxz gyy.\n\n   Gamma := \\Gamma^{a}_{b c} ->\n            (1/2) g^{a e} (   \\partial_{b}{g_{e c}}\n                            + \\partial_{c}{g_{b e}}\n                            - \\partial_{e}{g_{b c}}).\n\n   Rabcd := R^{a}_{b c d} ->\n              \\partial_{c}{\\Gamma^{a}_{b d}} + \\Gamma^{a}_{e c} \\Gamma^{e}_{b d}\n            - \\partial_{d}{\\Gamma^{a}_{b c}} - \\Gamma^{a}_{e d} \\Gamma^{e}_{b c}.\n\n\n   Rab := R_{a b} -> R^{c}_{a c b}.\n\n   Kab := K_{a b} -> - (1/2) D^{c} \\partial_{c}{g_{a b}} / N.\n\n   # -----------------------------------------------------------------\n   # the BSSN variables\n\n   trK   := K          -> g^{a b} K_{a b}.\n   Aab   := Aab_{a b}  -> K_{a b} - (1/3) g_{a b} K.\n   gBar  := gBar_{a b} -> g_{a b} / (g**(1/3)).\n   ABar  := ABar_{a b} -> (K_{a b} - (1/3) g_{a b} K) / (g**(1/3)).\n   phi   := phi        -> (1/12) \\log(g).\n\n   # -----------------------------------------------------------------\n   # basic objects\n\n   substitute (gabRule, gab)\n   substitute (Detg,    gab)\n\n   complete   (gabRule, $g^{a b}$)                                      # cdb(gabRule,gabRule)\n\n   substitute (Rabcd,   Gamma)\n   substitute (Rab,     Rabcd)\n\n   # -----------------------------------------------------------------\n   # convert to BSSN\n\n   substitute (gBar,  Detg)                                             # cdb (gBar.01,gBar)\n\n   substitute (Aab,   trK)                                              # cdb (Aab.01,Aab)\n   substitute (Aab,   Kab)                                              # cdb (Aab.02,Aab)\n\n   substitute (ABar,  trK)                                              # cdb (ABar.01,ABar)\n   substitute (ABar,  Kab)                                              # cdb (ABar.02,ABar)\n   substitute (ABar,  Detg)                                             # cdb (ABar.03,ABar)\n\n   substitute (phi,   Detg)                                             # cdb (phi.01,phi)\n\n   # -----------------------------------------------------------------\n   # now evaluate the components\n\n   evaluate   (gab,   gabRule+DtRule, rhsonly=True)                     # cdb (gab,gab)\n   evaluate   (Gamma, gabRule+DtRule, rhsonly=True)                     # cdb (Gamma,Gamma)\n   evaluate   (Rabcd, gabRule+DtRule, rhsonly=True)                     # cdb (Rabcd,Rabcd)\n   evaluate   (Rab,   gabRule+DtRule, rhsonly=True)                     # cdb (Rab,Rab)\n   evaluate   (Kab,   gabRule+DtRule, rhsonly=True)                     # cdb (Kab,Kab)\n   evaluate   (trK,   gabRule+DtRule, rhsonly=True)                     # cdb (trK,trK)\n\n   evaluate   (gBar,  gabRule+DtRule, rhsonly=True)                     # cdb (gBar.02,gBar)\n   evaluate   (Aab,   gabRule+DtRule, rhsonly=True)                     # cdb 
(Aab.03,Aab)\n   evaluate   (ABar,  gabRule+DtRule, rhsonly=True)                     # cdb (ABar.04,ABar)\n\n   evaluate   (phi,   gabRule+DtRule, rhsonly=True)                     # cdb (phi.02,phi)\n\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\Cdb*{gab}   \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Gamma} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Rabcd} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Rab}   \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Kab}   \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{trK}   \\end{dmath*}\n\\end{dgroup*}\n\n% \\clearpage\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\Cdb*{gBar.01} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{gBar.02} \\end{dmath*}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\Cdb*{phi.01} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{phi.02} \\end{dmath*}\n\\end{dgroup*}\n\n% \\clearpage\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\Cdb*{Aab.01}  \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Aab.02}  \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{Aab.03}  \\end{dmath*}\n\\end{dgroup*}\n\n\\begin{dgroup*}\n   \\begin{dmath*} \\Cdb*{ABar.01} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{ABar.02} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{ABar.03} \\end{dmath*}\n   \\begin{dmath*} \\Cdb*{ABar.04} \\end{dmath*}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "dc1c067d43fb6d55cb17610368bc2bfcad750942", "size": 6477, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "adm-bssn-eqtns.tex", "max_stars_repo_name": "leo-brewin/adm-bssn-numerical", "max_stars_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-25T11:36:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T11:36:06.000Z", "max_issues_repo_path": "adm-bssn-eqtns.tex", "max_issues_repo_name": "leo-brewin/adm-bssn-numerical", "max_issues_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "adm-bssn-eqtns.tex", "max_forks_repo_name": "leo-brewin/adm-bssn-numerical", "max_forks_repo_head_hexsha": "9e32c201272e9a41e7535475fe381e450b99b058", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.5595854922, "max_line_length": 94, "alphanum_fraction": 0.4696618805, "num_tokens": 2167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7905303236047049, "lm_q1q2_score": 0.5803437471649346}}
{"text": "\\subsection{Jinchao's summary}\n\nSome ideas:\n\n\\begin{enumerate}\n\\item Given structure of DNN or CNN, sampling of the parameters can be\n  viewed a dual sampling of the data variable.  This is because any\n  DNN function can be written as a combination or composition of the\n  functions of the following form\n$$\n\\sigma(\\omega\\cdot x+b)\n$$\n\n\\item CNN can be considered to be a multigrid algorithm for data\n  variables.  Given the observation above, it is also a ``dual''\n  multigrid algorithm for the parameter variables.   But somehow, in\n  CNN, only small subspace of the dual subspace are used because the\n  convolutional operation is a very special and low dimensional dual\n  operation.   In order to increase the dimension of the dual, more\n  layers and especially more channels have to be introduced. \n\n\\item Data space variables are not really independent variables.  For\n  a  2-D images, each image is a function of two variable which\n  represents the pixel locations. \n\n\\item Any 2D image function must have certain regularity. Thus a high\n  resolution image is intrinsically low dimensional.   There are some\n  underlying dimensions of the image data.  The different covolutions\n  and different channels are introduced to capture these low\n  dimensions. \n\n\\item When a random initialization, it has been observed that the\n  resulting DNN function is actually a relatively smooth function with\n  respect to data variables. \n  But the finally trained NN model has to be high frequence functions\n  for data variable as it is very sensitive to data pertubation. \n\n\\item \n\\end{enumerate}\n", "meta": {"hexsha": "e25cb1b6da176dcdc282c0053e2bef2971ce4487", "size": 1588, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/jinchaoCNN.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/jinchaoCNN.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/jinchaoCNN.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.7, "max_line_length": 70, "alphanum_fraction": 0.7770780856, "num_tokens": 366, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397348, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5803437461892763}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\nIn this chapter, we extend the data structure learned from the first part with more advanced data structures. These data structures are not as widely used as the basic data structures, however, they can be often seen to implement more advanced algorithms or they can be more efficient compared with algorithms that relies on a more basic version. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%% Monotonic Stack\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Monotone Stack}\n\\label{section_mono_stack}\n\\textit{A monotone Stack is a data structure the elements from the front to the end is strictly either increasing or decreasing. } For example, there is an line at the hair salo, and you would naturally start from the end of the line. However, if you are allowed to kick out any person that you can win at a fight, if every one follows the rule, then the line would start with the most powerful man and end up with the weakest one. This is an example of monotonic decreasing stack. \n\\begin{itemize}\n    \\item Monotonically Increasing Stack: to push an element $e$, starts from the rear element, we pop out element $r>=e$ (violation); \n    \\item Monotonically Decreasing Stack: we pop out element $r<=e$ (violation). T\n\\end{itemize}\nThe process of the monotone decresing stack is shown in Fig.~\\ref{fig:mono_stack}.\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.9\\columnwidth]{fig/monotone_stack_fig.png}\n    \\caption{The process of decreasing monotone stack}\n    \\label{fig:mono_stack}\n\\end{figure}\n\\textit{Sometimes, we can relax the strict monotonic condition, and can allow the stack or queue have repeat value. }\n\nTo get the feature of the monotonic queue, with $[5, 3, 1, 2, 4]$ as example, if it is increasing:\n\\begin{lstlisting}[numbers=none]\nindex   v   Increasing stack        Decreasing stack\n1       5   [5]                     [5]\n2       3   [3] 3 kick out 5        [5, 3] #3->5\n3       1   [1] 1 kick out 3        [5, 3, 1] #1->3\n4       2   [1, 2] #2->1            [5, 3, 2] 2 kick out 1\n5       4   [1, 2, 4] #4->2         [5,4] 4 kick out 2, 3\n\\end{lstlisting}\nBy observing the above process, what features we can get? \n\\begin{itemize}\n    \\item Pushing in to get smaller/larger item to the left: When we push an element in, if there exists one element right in front of it, 1) for increasing stack, we find the \\textbf{nearest smaller item to the left} of current item, 2) for decreasing stack, we find the \\textbf{nearest larger item} to the left instead. In this case, we get [-1, -1, -1, 1, 2], and [-1, 5, 3, 3, 5] respectively.\n    \\item Popping out to get smaller/larger item to the right: when we pop one element out, for the kicked out item, such as in step of 2, increasing stack, 3 forced 5 to be popped out, for 5, 3 is the first smaller item to the right. Therefore, if one item is popped out, for this item, the current item that is about to be push in is 1) for increasing stack, \\textbf{the nearest smaller item to its right}, 2) for decreasing stack, \\textbf{the nearest larger item to  its right}.  In this case, we get [3,1, -1, -1, -1], and [-1, 4, 2, 4, -1] respectively.\n\\end{itemize}\nThe conclusion is with monotone stack, we can search for smaller/larger items of current item either to its left/right. 
\n\n\\paragraph{Basic Implementation}\n \nA monotone stack is a data structure that needs to add/remove elements from the end. In some applications we might further need to remove elements from the front, so Deque from collections fits well to implement this data structure. Now, we set up the example data:\n\\begin{lstlisting}[language=Python]\nA = [5, 3, 1, 2, 4]\nimport collections\n\\end{lstlisting}\n\n\\paragraph{Increasing Stack} We can find the first smaller item to the left/right.\n\n\\begin{lstlisting}[language=Python]\ndef increasingStack(A):\n    stack = collections.deque()\n    firstSmallerToLeft = [-1]*len(A)\n    firstSmallerToRight = [-1]*len(A)\n    for i,v in enumerate(A):\n        while stack and A[stack[-1]] >= v: # right is from the popping out\n            firstSmallerToRight[stack.pop()] = v  # A[stack[-1]] >= v \n        if stack:  #left is from the pushing in, A[stack[-1]] < v \n            firstSmallerToLeft[i] = A[stack[-1]]\n        stack.append(i)\n    return firstSmallerToLeft, firstSmallerToRight, stack\n\\end{lstlisting}\nNow, run the above example with code:\n\\begin{lstlisting}[language=Python]\nfirstSmallerToLeft, firstSmallerToRight, stack = increasingStack(A)\nfor i in stack:\n    print(A[i], end = ' ')\nprint('\\n')\nprint(firstSmallerToLeft)\nprint(firstSmallerToRight)\n\\end{lstlisting}\nThe output is:\n\\begin{lstlisting}\n1 2 4 \n\n[-1, -1, -1, 1, 2]\n[3, 1, -1, -1, -1]\n\\end{lstlisting}\n\n\\paragraph{Decreasing Stack} We can find the first larger item to the left/right.\n\n\\begin{lstlisting}[language=Python]\ndef decreasingStack(A):\n    stack = collections.deque()\n    firstLargerToLeft = [-1]*len(A)\n    firstLargerToRight = [-1]*len(A)\n    for i,v in enumerate(A):\n        while stack and A[stack[-1]] <= v:\n            firstLargerToRight[stack.pop()] = v\n            \n        if stack:\n            firstLargerToLeft[i] = A[stack[-1]]\n        stack.append(i)\n    return firstLargerToLeft, firstLargerToRight, stack\n\\end{lstlisting}\nSimilarly, the output is:\n\\begin{lstlisting}\n5 4 \n\n[-1, 5, 3, 3, 5]\n[-1, 4, 2, 4, -1]\n\\end{lstlisting}\nFor the above task, brute force would use one for loop to point at the current element and another embedded for loop to look for the first element that is larger than the current element, which gives us $O(n^2)$ time complexity. If we think about the BCR and trade space for efficiency by using a monotone stack instead, we get $O(n)$ linear time with $O(n)$ space complexity.  \n% Let us look at an example, \n% \\begin{lstlisting}\n% Given an array [5, 3, 1, 2, 4], our target is to return an array of the same size that each element denotes the relative index we need to move to the right to find the first element that is larger than the current element, if we can not find, then we use -1. For this example the return would be  [-1, 3, 1, 1, -1].\n% \\end{lstlisting}\n\n\n% Solution: If we do it with brute force, then use one for loop to point at the current element, and another embedding for loop to look for the first element that is larger than current, which gives us $O(n^2)$ time complexity. If we think about the BCR, and try to trade space for efficiency, we can use a decreasing monotonic queue. The first elment to the right that is larger than current is equaivalent to find the first element in the left that is smaller than the element(this means we need to use decreasing queue). 
First we have $[5]$, then $[5, 3]$, $[5, 3 ,1]$, then when $2$ comes in, we need to kick out $1$, so for $1$ the first larger element to its right size is $2$, we record $index(2)-index(1) = 1$. Then we have $4$, which could kick out $2$, so $4$ is the required one, then we set $r[index(2)] = index(4)-index(2) = 1$, then $r[index(3)] = index(4)-index(3) = 3$, Finally there would only $[5, 4]$, so we set them to be $-1$.\n% \\begin{lstlisting}\n% index   v   decreasing queue\n% 1       5   [5]\n% 2       3   [5,3]\n% 3       1   [5,3,1]\n% 4       2   [5, 3, 2], kick out 1, we found the first larger number to the right of 1, which is 2\n% 5       4   [5,4], kick out 2, for 2, we found 4, kick out 3, for 3 we found 4\n% \\end{lstlisting}\n% \\begin{lstlisting}[language = Python]\n% a = [5, 3, 1, 2, 4]\n% def firstLagerNumToRight(num):\n%   if not num:\n%     return []\n%   monoStack = [] #decreasing monotonic stack\n%   rst = [-1]*len(num)\n%   for i, v in enumerate(num):\n%     while  monoStack and v >= num[monoStack[-1]]:\n%         index = monoStack.pop()\n%         rst[index] = i-index\n%     monoStack.append(i)\n%   return rst\n% print(firstLagerNumToRight(a))\n% # [-1, 3, 1, 1, -1]\n% \\end{lstlisting}\nThe monotone stack is especially useful in subarray problems where we need to find the first smaller/larger item on the left/right side of an item in the array. To better understand the features and applications of the monotone stack, let us look at some examples. First, we recommend the audience practice the more direct applications listed in the LeetCode problems section before moving on to the examples.\n\nThere is one problem that is pretty interesting:\n\n\\paragraph{Sliding Window Maximum/Minimum } Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position. Return the max sliding window. (LeetCode Problem: 239. Sliding Window Maximum (hard))\n\\begin{lstlisting}[numbers=none]\nExample:\n\nInput: nums = [1,3,-1,-3,5,3,6,7], and k = 3\nOutput: [3,3,5,5,6,7] \nExplanation: \n\nWindow position                Max\n---------------               -----\n[1  3  -1] -3  5  3  6  7       3\n 1 [3  -1  -3] 5  3  6  7       3\n 1  3 [-1  -3  5] 3  6  7       5\n 1  3  -1 [-3  5  3] 6  7       5\n 1  3  -1  -3 [5  3  6] 7       6\n 1  3  -1  -3  5 [3  6  7]      7\n\\end{lstlisting}\n\n\\textbf{Analysis:} In the process of moving the window, any item that is both older and smaller than another item in the window can never become the maximum again; therefore, we can use a decreasing stack to remove such troughs. If the window size is the same as the array's length, then the maximum value is the first (bottom) element in the stack. With the sliding window, we record the max at each iteration once the window has reached size k. At each iteration, we also need to remove from the stack the item that falls out of the window. For example, for [5, 3, 1, 2, 4] with k = 3, the maxima are [5, 3, 4]: at step 3, we get 5; at step 4, we remove 5 from the stack and get 3; at step 5, we remove 3 if it is still in the stack, and we get 4. 
With the monotone stack, we decrease the time complexity from $O(kn)$ to $O(n)$.\n\\begin{lstlisting}[language=Python]\nimport collections\n\ndef maxSlidingWindow(self, nums, k):\n    ds = collections.deque()\n    ans = []\n    for i in range(len(nums)):\n        while ds and nums[i] >= nums[ds[-1]]: ds.pop() # pop rear items that can never be the max again\n        ds.append(i)\n        if i >= k - 1: ans.append(nums[ds[0]]) #append the current maximum\n        if i - k + 1 == ds[0]: ds.popleft() #the front falls out of the window at the next step, pop it out\n    return ans\n\\end{lstlisting}\n\n\\begin{examples}[resume]\n\\item \\textbf{907. Sum of Subarray Minimums (medium).} Given an array of integers A, find the sum of min(B), where B ranges over every (contiguous) subarray of A. Since the answer may be large, return the answer modulo $10^9 + 7$. \\textit{Note: 1 <= A.length <= 30000, 1 <= A[i] <= 30000.}\n\\begin{lstlisting}[numbers=none]\nExample 1:\n\nInput: [3,1,2,4]\nOutput: 17\nExplanation: Subarrays are [3], [1], [2], [4], [3,1], [1,2], [2,4], [3,1,2], [1,2,4], [3,1,2,4]. \nMinimums are 3, 1, 2, 4, 1, 1, 2, 1, 1, 1.  Sum is 17.\n\\end{lstlisting}\n\n\\textbf{Analysis:} For this problem, the naive solution of enumerating all possible subarrays produces $O(n^2)$ subarrays; the time complexity would be $O(n^2)$, and we would receive TLE. We just need to sum the minimum over each subarray, so try to consider the problem from another angle: what if we can figure out, for each item, how many subarrays have it as their minimum? Calling this count f(i), we get res = sum(A[i]*f(i)). To get $f(i)$ when there are no duplicates in the array, we need to find:\n\\begin{itemize}\n    \\item left[i], the number of positions a subarray with minimum A[i] can extend to the left, over strictly bigger numbers (counting A[i] itself),\n    \\item right[i], the corresponding count to the right of A[i].\n\\end{itemize}\nFor the given example, if A[i] = 1, then the left item is 3 and the right item is 4, and we add 1*(left\\_len*right\\_len) to the result. However, if there are duplicates, such as in [3, 1, 4, 1], each subarray whose minimum is 1 must be counted exactly once, even when it contains both 1's. Therefore, we break the tie by letting the right extension run over >= items while the left extension runs only over strictly bigger items. Now the problem is converted to finding the first smaller-or-equal item on the left side and the first strictly smaller item on the right side. From the features we derived above, we need an increasing stack: pushing in finds the left boundary, and popping out gives, for the popped item, its right boundary. The code is as follows:\n\\begin{lstlisting}[language=Python]\ndef sumSubarrayMins(self, A):\n    n, mod = len(A), 10**9 + 7\n    left, s1 = [1] * n, []\n    right = [n-i for i in range(n)]\n    for i in range(n): # boundaries come from pushing in and popping out\n        while s1 and A[s1[-1]] > A[i]: # strict >: equal values stay on the stack\n            index = s1.pop()\n            right[index] = i-index # A[i] is the first strictly smaller item to index's right\n        if s1:\n            left[i] = i-s1[-1] # s1[-1] is the first smaller-or-equal item to the left\n        else:\n            left[i] = i+1\n        s1.append(i)\n    return sum(a * l * r for a, l, r in zip(A, left, right)) % mod\n\\end{lstlisting}
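\nAs a quick sanity check of the counting argument, here are the boundary values the code above computes for Example 1 (hand-checked; shown for illustration only):\n\\begin{lstlisting}[language=Python]\n# Hand-checked boundary values for A = [3,1,2,4]:\n# left[i]:  1 + (run of strictly bigger items to the left of A[i])\n# right[i]: 1 + (run of >= items to the right of A[i])\nA     = [3, 1, 2, 4]\nleft  = [1, 2, 1, 1]\nright = [1, 3, 2, 1]\nprint(sum(a * l * r for a, l, r in zip(A, left, right)))  # 17, matching Example 1\n\\end{lstlisting}\n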
Returning to the implementation, a simple improvement is to pad a sentinel 0 on each side of the array. Then every item originally in the array is eventually popped out (only the two sentinels remain in the stack at the end), and at each pop we can add that item's contribution to the result directly:\n\\begin{lstlisting}[language=Python]\ndef sumSubarrayMins(self, A):\n    res = 0\n    s = []\n    A = [0] + A + [0] # sentinels guarantee every original item is popped\n    for i, x in enumerate(A):\n        while s and A[s[-1]] > x:\n            j = s.pop() # A[j]'s boundaries: k on the left, i on the right\n            k = s[-1]\n            res += A[j] * (i - j) * (j - k)\n        s.append(i)\n    return res % (10**9 + 7)\n\\end{lstlisting}\n\\end{examples}\n\\section{Disjoint Set}\n\\textbf{Disjoint-set data structure} (aka union-find data structure or merge-find set) maintains a collection $S = \\{S_1, S_2, ..., S_k\\}$ of disjoint \\textit{dynamic} sets by partitioning a set of elements. We identify each set by a \\textbf{representative}, which is some member of the set. It does not matter which member is used, as long as we get the same answer both times if we ask for the representative twice without modifying the set. Choosing the smallest member of a set as its representative is an example of a prespecified rule. According to its typical applications, such as implementing Kruskal's minimum spanning tree algorithm and tracking connected components dynamically, a disjoint set should support the following operations:\n\\begin{enumerate}\n    \\item \\texttt{make\\_set(x)}: create a new set whose only member is $x$. To keep the sets disjoint, this member should not already be in some existing set.\n    \\item \\texttt{union(x, y)}: unites the two dynamic sets that contain $x$ and $y$, say $S_x$ and $S_y$, into a new set that is the union of these two sets. In practice, we merge one set into the other, say $S_y$ into $S_x$, and then remove/destroy $S_y$. This is more efficient than creating a new union set and destroying the other two. \n    \\item \\texttt{find\\_set(x)}: returns a pointer to the representative of the set that contains $x$.\n\n\\end{enumerate}\n\\paragraph{Applications} Disjoint sets are applied to implement the union-find algorithm, which performs \\texttt{find\\_set} and \\texttt{union}. Union-find can be used in some basic graph algorithms, such as cycle detection, tracking connected components in a graph dynamically~\\footnote{where new edges are added over time, and a search-based algorithm would have to be rerun each time to recompute the components}, Kruskal's MST algorithm, and Dijkstra's shortest path algorithm.\n\n\\paragraph{Connected Component} Before we move to the implementation, let us first see how a disjoint set can be applied to connected components.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.98\\columnwidth] {fig/disjoint_set.png}\n    \\caption{The connected components using disjoint set.}\n    \\label{fig:cc_undirected_disjoint_set}\n\\end{figure}\nAt first, we assign a set id to each vertex in the graph. Then we traverse the edges; if the two endpoints of an edge belong to different sets, we union the two sets. As shown in the process, vertices 0 and 1 first have different set ids, so we update 1's id to 0. For edge (1, 2), we update 2's id to 0. For edge (0, 2), the endpoints are already in the same set, so no update is needed. We apply the same process to edges (2, 4), (3, 4), and (5, 6). 
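\n\nIn code, the edge scan just described is only a few lines. The sketch below assumes a \\texttt{DisjointSet} class exposing the \\texttt{find\\_set}/\\texttt{union} interface specified above (concrete implementations follow in the rest of this section); the edge list is illustrative and mirrors the figure:\n\\begin{lstlisting}[language=Python]\n# Sketch of the edge scan: one union per edge.\n# Assumes a DisjointSet with the interface above; the edge list is illustrative.\nedges = [(0, 1), (1, 2), (0, 2), (2, 4), (3, 4), (5, 6)]\nds = DisjointSet(items=list(range(7)))\nfor u, v in edges:\n    if ds.find_set(u) != ds.find_set(v): # endpoints in different sets\n        ds.union(u, v)                   # merge the two components\n\\end{lstlisting}\nAfter the scan, two components remain: $\\{0, 1, 2, 3, 4\\}$ and $\\{5, 6\\}$.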
\n\n\\subsection{Basic Implementation with Linked-list or List}\nBefore we head off to more efficient and complex implementations, we first implement a baseline for the convenience of comparison. The key to the implementation is two dictionaries named \\texttt{item\\_set} (which maps each item to its set id, a one-to-one mapping) and \\texttt{set\\_item} (whose values are lists, because one set can contain multiple items). \n\nIf our coding is right, each item must already have a set id whenever \\texttt{find\\_set} is called; if not, we call \\texttt{make\\_set} first. Each existing \\texttt{set} has at least one item. For the \\texttt{union} function, we merge the set that has fewer items into the one with more items. \n\n\\begin{lstlisting}[language=Python]\nclass DisjointSet():\n  '''Implement a basic disjoint set'''\n  def __init__(self, items):\n    self.n = len(items)\n    self.item_set = dict(zip(items, [i for i in range(self.n)])) # item -> set id, one-to-one\n    self.set_item = dict(zip([i for i in range(self.n)], [[item] for item in items])) # set id -> list of items, one-to-many\n    \n  def make_set(self, item):\n    '''make a set for a new incoming item'''\n    if item in self.item_set:\n      return\n    \n    self.item_set[item] = self.n\n    self.set_item[self.n] = [item] # the new set starts with this single item\n    self.n += 1\n    \n  def find_set(self, item):\n    if item in self.item_set:\n      return self.item_set[item]\n    else:\n      print('not in the set yet: ', item)\n      return None\n    \n  def union(self, x, y):\n    id_x = self.find_set(x)\n    id_y = self.find_set(y)\n    if id_x == id_y:\n      return\n    \n    sid, lid = id_x, id_y\n    if len(self.set_item[id_x]) > len(self.set_item[id_y]):\n      sid, lid = id_y, id_x\n    # merge items in sid to lid \n    for item in self.set_item[sid]:\n      self.item_set[item] = lid\n    self.set_item[lid] += self.set_item[sid]\n    del self.set_item[sid]\n    return \n\\end{lstlisting}\n\n\\paragraph{Complexity} For $n$ items, we spend $O(n)$ time to initialize the two hashmaps. With the help of the hashmap, the \\texttt{find\\_set} function takes only $O(1)$ time, and accumulated over $n$ calls this gives $O(n)$. The \\texttt{union} function takes more effort to analyze. Look at it from another angle: an item $x$ only updates its set id when its set is merged into another set, say $x_1$. The first time this happens, the resulting set $x_1$ has at least two items. The second update merges $x_1$ into some $x_2$; because the set with fewer items is always merged into the larger one, $x_2$ then has at least 4 items. After $k$ such updates the containing set has at least $2^k$ items, and since a set can have at most $n$ items, each item's id is updated at most $\\log n$ times. For $n$ items, this makes the upper bound for all \\texttt{union} operations $O(n\\log n)$. \n\nHowever, our implementation has an additional cost in \\texttt{union}, where we concatenate the lists. This cost can easily be limited to constant time per merge by using a linked list. Even with \\texttt{list}, there are different ways to concatenate one list to another:\n\n\\begin{enumerate}\n\\item Use the $+$ operator: The time complexity of the concat operation for two lists, A and B, is O(|A| + |B|). This is because you aren't adding to one list, but instead are creating a whole new list and populating it with elements from both A and B, requiring you to iterate through both.\n\n\\item \\texttt{extend(lst)}: Use extend, which doesn't create a new list but adds to the original. The time complexity is only O(|lst|) (amortized), since the original list is not copied. On the other hand \\texttt{l += [i]} modifies the original list and behaves like extend.\n\\end{enumerate}
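\n\nBefore moving on, here is a quick usage sketch of the baseline class above (illustrative values only):\n\\begin{lstlisting}[language=Python]\n# Quick usage sketch of the list-based DisjointSet above.\nds = DisjointSet(items=[0, 1, 2, 3, 4])\nds.union(0, 1)\nds.union(1, 2)\nprint(ds.find_set(0) == ds.find_set(2)) # True: 0, 1, 2 now share one set id\nprint(ds.find_set(3) == ds.find_set(4)) # False: 3 and 4 are still singletons\n\\end{lstlisting}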
\n\n\\subsection{Implementation with Disjoint-set Forests}\nInstead of using a linear linked list, we use a tree structure. Unlike the trees we have introduced before, where a node points to its children, a node here points only to its parent. A tree represents a set, and the root node is the representative; it points to itself. The straightforward algorithms that use this structure are not faster than the linked-list version. By introducing two heuristics--``union by rank'' and ``path compression''--we can achieve an asymptotically optimal disjoint-set data structure.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.98\\columnwidth] {fig/disjoint_set.png}\n    \\caption{A disjoint forest}\n    \\label{fig:disjoint_forest_1}\n\\end{figure}\n\\subsubsection{Naive Version} We first need to create a \\texttt{Node} class which stores \\texttt{item} and a parent pointer \\texttt{parent}. An \\texttt{item} can be any immutable data structure carrying the information needed to represent a node.\n\\begin{lstlisting}[language=Python]\nclass Node:\n  def __init__(self, item):\n    self.item = item # save node information\n    self.parent = None\n\\end{lstlisting}\n\nWe need one dict data structure \\texttt{item\\_finder} and one set data structure \\texttt{sets} to track nodes and sets. \\texttt{item\\_finder} maps each item to its node; from the node we can then find its set representative or execute a \\texttt{union} operation. \\texttt{sets} is used to track all the representative nodes; when we union two sets, the root that is merged into the other is deleted from \\texttt{sets}. In the naive version, \\texttt{make\\_set} creates a tree with only one node. \\texttt{find\\_set} starts from the node and traverses all the way back to the final parent, namely the node with \\texttt{node.parent == node}. And a \\texttt{union} operation simply points one tree's root node to the root of the other through \\texttt{parent}. 
The code is as follows:\n\\begin{lstlisting}[language=Python]\nclass DisjointSet():\n  '''Implement with disjoint-set forest'''\n  def __init__(self, items):\n    self.n = len(items)\n    self.item_finder = dict()\n    self.sets = set() # sets holds only the root (representative) nodes\n    \n    for item in items:\n      node = Node(item)\n      node.parent = node\n      self.item_finder[item] = node # from the item we can find its node\n      self.sets.add(node)\n    \n  def make_set(self, item):\n    '''make a set for a new incoming item'''\n    if item in self.item_finder:\n      return\n    \n    node = Node(item)\n    node.parent = node\n    self.item_finder[item] = node\n    self.sets.add(node)\n    self.n += 1\n    \n  def find_set(self, item):\n    # from item->node->parent to the set representative\n    if item not in self.item_finder:\n      print('not in the set yet: ', item)\n      return None\n    node = self.item_finder[item]\n    while node.parent != node:\n      node = node.parent\n    return node\n    \n  def union(self, x, y):\n    node_x = self.find_set(x)\n    node_y = self.find_set(y)\n    if node_x.item == node_y.item:\n      return\n    \n    #point the root of one tree to the root of the other\n    # merge x to y\n    node_x.parent = node_y\n    #remove one set\n    self.sets.remove(node_x)\n    return \n  \n  def __str__(self):\n    ans = ''\n    for root in self.sets:\n      ans += 'set: '+ str(root.item) + '\\n'\n    return ans\n  \n  def print_set(self, item):\n    if item in self.item_finder:\n      node = self.item_finder[item]\n      print(node.item, '->', end='')\n      while node.parent != node:\n        node = node.parent\n        print(node.item, '->', end='')\n\\end{lstlisting}\nLet's run an example:\n\\begin{lstlisting}[language=Python]\nds = DisjointSet(items=[i for i in range(5)])\nds.union(0,1)\nds.union(1,2)\nds.union(2,3)\nds.union(3, 4)\nprint(ds)\nfor item in ds.item_finder.keys():\n  ds.print_set(item)\n  print(' ')\n\\end{lstlisting}\nThe output is:\n\\begin{lstlisting}[numbers=none]\nset: 4\n\n0 ->1 ->2 ->3 ->4 -> \n1 ->2 ->3 ->4 -> \n2 ->3 ->4 -> \n3 ->4 -> \n4 -> \n\\end{lstlisting}\nIn the above implementation, both \\texttt{make\\_set} and the linking step of \\texttt{union} take $O(1)$ time. The main cost is incurred by \\texttt{find\\_set}, which traverses a path from the node up to the root. If we assume each tree in the disjoint-set forest is balanced, the upper bound of this operation is $O(\\log n)$. However, if a tree degenerates into a linear linked list, as in the example above, the time complexity goes up to $O(n)$. This puts the total time complexity between $O(n\\log n)$ and $O(n^2)$. \n\n\\subsubsection{Heuristics}\n\\paragraph{Union by Rank}\nAs we have seen from the above example, a sequence of $n-1$ \\texttt{union} operations may create a tree that is just a linear chain of $n$ nodes. Union by rank, which is similar to the weighted-union heuristic we used with the linked-list implementation, is applied to avoid this worst case.  For each node, in addition to the parent pointer, we add \\texttt{rank} to track an upper bound on the height of the node (the number of edges in the longest simple path between the node and a descendant leaf). In union by rank, we make the root with smaller rank point to the root with larger rank. \n\nIn the initialization and in the \\texttt{make\\_set} operation, a single-node tree has an initial rank of 0. 
\n\n\\subsubsection{Heuristics}\n\\paragraph{Union by Rank}\nAs we have seen from the above example, a sequence of $n-1$ \\texttt{union} operations may create a tree that is just a linear chain of $n$ nodes. Union by rank, which is similar to the weighted-union heuristic we used with the linked-list implementation, is applied to avoid this worst case. For each node, in addition to the parent pointer, we add \\texttt{rank} to track an upper bound on the height of the associated node (the number of edges in the longest simple path between the node and a descendant leaf). In union by rank, we make the root with smaller rank point to the root with larger rank.\n\nAt initialization, and in the \\texttt{make\\_set} operation, a single-node tree gets an initial rank of 0. In \\texttt{union(x, y)} there are three cases:\n\\begin{lstlisting}[numbers=none]\nCase 1: x.rank == y.rank:\n        join x to y\n        y.rank += 1\nCase 2: x.rank < y.rank:\n        join x to y\n        ranks stay unchanged\nCase 3: x.rank > y.rank:\n        join y to x\n        ranks stay unchanged\n\\end{lstlisting}\nNow, having added \\texttt{rank} to the node, we modify the naive implementation:\n\\begin{lstlisting}[language=Python]\nclass Node:\n  def __init__(self, item):\n    self.item = item # save node information\n    self.parent = None\n    self.rank = 0 # upper bound on the height of this node\n\\end{lstlisting}\nThe updated implementation of \\texttt{union}:\n\\begin{lstlisting}[language=Python]\n  def union(self, x, y):\n    node_x = self.find_set(x)\n    node_y = self.find_set(y)\n    if node_x.item == node_y.item:\n      return\n    \n    # link: the root with smaller rank points to the root with larger rank\n    if node_x.rank > node_y.rank:\n      node_y.parent = node_x\n      # remove one set\n      self.sets.remove(node_y)\n    elif node_x.rank < node_y.rank:\n      node_x.parent = node_y\n      self.sets.remove(node_x)\n    else:\n      node_x.parent = node_y\n      node_y.rank += 1\n      self.sets.remove(node_x)\n    return \n\\end{lstlisting}\n\n\\paragraph{Path Compression}\nIn our naive implementation, \\texttt{find\\_set} took the most time. With path compression, \\texttt{find\\_set} makes the queried node point directly to the root of its tree. Path compression does not affect the rank of any node. (For simplicity, the version below redirects only the queried node; full path compression would redirect every node along the find path.) We modify the function as follows:\n\\begin{lstlisting}[language=Python]\n  def _find_parent(self, node):\n    while node.parent != node:\n      node = node.parent\n    return node\n    \n  def find_set(self, item):\n    '''modified to do path compression'''\n    # from item->node->parent up to the set representative\n    if item not in self.item_finder:\n      print('not in the set yet: ', item)\n      return None\n    node = self.item_finder[item]\n    node.parent = self._find_parent(node) # point the node directly to the root\n    return node.parent\n\\end{lstlisting}\nWith the same example, the output now becomes:\n\\begin{lstlisting}[numbers=none]\nset: 1\n\n0 ->1 -> \n1 -> \n2 ->1 -> \n3 ->1 -> \n4 ->1 ->\n\\end{lstlisting}\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{Experiment on the running time of linked list VS naive forest VS heuristic forest} We run the disjoint set with $n = 100{,}000$ items and $n$ random \\texttt{union} operations:\n\\begin{lstlisting}[language=Python]\nimport time, random\nt0 = time.time()\nn = 100000\nds = DisjointSet(items=[i for i in range(n)])\nfor _ in range(n):\n  i, j = random.randint(0, n-1), random.randint(0, n-1) # both in [0, n-1]\n  ds.union(i, j)\nprint('time: ', time.time()-t0)\n\\end{lstlisting}\nThe resulting times are 1.09s (linked list), 50.4s (naive forest), and 1.19s (heuristic forest).\n\\end{bclogo}\n\\paragraph{Note} As we can see, our implementation never removes any item from the disjoint-set structure. Also, while we can find the set of any given node, we cannot enumerate the items of a set starting from its root node. How can we further improve this?\n\\section{Fibonacci Heap}\n\\section{Exercises}\n\\subsection{Knowledge Check}\n\\subsection{Coding Practice}\n\\paragraph{Disjoint Set}\n\\begin{enumerate}\n    \\item 305. 
Number of Islands II (hard)\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "d8a8b308d5c1799676186acc441de0dcc196376a", "size": 28569, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/chapter_advanced_data_structures.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/chapter_advanced_data_structures.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/chapter_advanced_data_structures.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.6845238095, "max_line_length": 947, "alphanum_fraction": 0.6917988029, "num_tokens": 8047, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696748, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5803437435420327}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath,amssymb,siunitx,graphicx}\n\\usepackage[margin=1in]{geometry}\n\\DeclareSIUnit\\ergs{ergs}\n\\DeclareSIUnit\\yr{yr}\n\\DeclareSIUnit\\AU{AU}\n\\DeclareSIUnit\\msun{\\ensuremath{\\mathrm{M}_{\\odot}}}\n\n\\title{Why Stars are Parsecs Apart}\n\\author{Matthias J. Raives}\n\n\\begin{document}\n  \n  \\maketitle{}\n  \n  \\section{Gravitational Capture}\n  Suppose the distances between stars were set by the limits of gravitational capture; i.e., stars are as close together as they could be without forming binary systems.  In this case, we would expect the characteristic interstellar distance to be\n  \\begin{equation}\n    d_{\\ast} \\sim \\frac{2GM_{\\ast}}{\\sigma_{\\ast}^{2}} = \\SI{2.6e12}{\\cm}\\:\\left(\\frac{M_{\\ast}}{\\si{\\msun}}\\right)\\left(\\frac{\\sigma_{\\ast}}{\\SI{100}{\\km\\per\\second}}\\right)^{-2} = \\SI{0.2}{\\AU}\\:\\left(\\frac{M_{\\ast}}{\\si{\\msun}}\\right)\\left(\\frac{\\sigma_{\\ast}}{\\SI{100}{\\km\\per\\second}}\\right)^{-2},\n  \\end{equation}\n  where $M_{\\ast}$ is the characteristic stellar mass and $\\sigma_{\\ast}$ is the stellar velocity dispersion.  We see that even for massive stars (say, \\SI{100}{\\msun}) moving slowly (say, \\SI{10}{\\km\\per\\second}), we only have a characteristic separation of $d_{\\ast}\\sim\\SI{2000}{\\AU}$, still far short of the parsec-scale stellar separations we observe.  Thus, we can likely rule out the gravitational interactions of the stars as the mechanism behind the characteristic interstellar distance.\n  
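\n  As a quick arithmetic check of the coefficient: with $G \\approx \\SI{6.67e-8}{\\cm\\cubed\\per\\gram\\per\\second\\squared}$ and $\\si{\\msun} \\approx \\SI{2.0e33}{\\gram}$, we have $2GM_{\\odot} \\approx \\SI{2.7e26}{\\cm\\cubed\\per\\second\\squared}$; dividing by $\\sigma_{\\ast}^{2} = (\\SI{100}{\\km\\per\\second})^{2} = \\SI{e14}{\\cm\\squared\\per\\second\\squared}$ gives $d_{\\ast} \\approx \\SI{2.7e12}{\\cm} \\approx \\SI{0.18}{\\AU}$ (using $\\SI{1}{\\AU} \\approx \\SI{1.5e13}{\\cm}$), consistent with the coefficients quoted above.\n  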
\n  \\section{Galactic Astrophysics}\n  Let us then suppose that the distances between stars are set by the physics of their host galaxies.  It behooves us to write:\n  \\begin{equation}\n    f_{\\ast}\\rho_{disk} \\sim M_{\\ast}d^{-3}_{\\ast}\\Longrightarrow d_{\\ast} \\sim \\left(\\frac{M_{\\ast}}{f_{\\ast}\\rho_{disk}}\\right)^{1/3},\n  \\end{equation}\n  where $f_{\\ast}$ is the fraction of the disk mass in the form of stars.  The disk density can be determined from the surface density as:\n  \\begin{equation}\n    \\Sigma_{disk} \\sim \\rho_{disk}H \\sim \\rho_{disk}\\sigma_{gas}t_{dyn} \\sim \\sigma_{gas}\\left(\\frac{\\rho_{disk}}{G}\\right)^{1/2}\\Longrightarrow\\rho_{disk} \\sim G\\left(\\frac{\\Sigma_{disk}}{\\sigma_{gas}}\\right)^{2},\n  \\end{equation}\n  where $H$ is the scale height of the disk, $\\sigma_{gas}$ is the velocity dispersion of the gas, and $t_{dyn}$ is the dynamical time.  We can appeal to Toomre stability to determine $\\sigma_{gas}$:\n  \\begin{equation}\n    Q_{T} \\sim \\frac{\\sigma_{gas}v_{vir}}{G\\Sigma_{disk}R_{disk}}\\Longrightarrow \\sigma_{gas}\\sim Q_{T}G\\Sigma_{disk}R_{disk}v_{vir}^{-1},\n  \\end{equation}\n  where $v_{vir}$ is the virial velocity.  Thus we see:\n  \\begin{equation}\n    \\rho_{disk} \\sim \\frac{v_{vir}^{2}}{Q_{T}^{2}GR_{disk}^{2}}.\n  \\end{equation}\n  Using $v_{vir}^{2}=GM/R_{vir}$ and $R_{disk}=\\lambda R_{vir}$, where $\\lambda$ is the spin parameter and $M$ is the halo mass:\n  \\begin{equation}\n    \\rho_{disk} \\sim \\frac{M}{Q_{T}^{2}R_{vir}^{3}\\lambda^{2}} \\sim \\rho_{vir}Q_{T}^{-2}\\lambda^{-2}.\n  \\end{equation}\n  Thus, we have:\n  \\begin{equation}\n    d_{\\ast} \\sim M_{\\ast}^{1/3}f_{\\ast}^{-1/3}\\lambda^{2/3}Q_{T}^{2/3}\\rho_{vir}^{-1/3}.\n  \\end{equation}\n  The virial density at $z=0$ is defined as:\n  \\begin{equation}\n    \\rho_{vir}\\sim\\Delta_{c}\\rho_{c,0}\\sim 200\\rho_{c,0} = 200\\frac{3H_{0}^{2}}{8\\pi G}.\n  \\end{equation}\n  Thus, we have:\n  \\begin{equation}\n    d_{\\ast} \\sim \\SI{3.8}{pc}\\:\\left(\\frac{M_{\\ast}}{\\si{\\msun}}\\right)^{1/3}\\left(\\frac{\\lambda}{0.04}\\right)^{2/3}f_{\\ast}^{-1/3}Q_{T}^{2/3}.\n  \\end{equation}\n  Stars make up about $\\sim90\\%$ of the baryonic mass of the Milky Way, and the gas is maintained at marginal stability by stellar feedback (i.e., the gas collapses and forms stars when perturbed, but generally not spontaneously), so $Q_{T}\\sim f_{\\ast} \\sim 1$ is justified.\n  \n  \n\\end{document}\n", "meta": {"hexsha": "758ce17d24a979cc2ddd00a8b8641465835a4220", "size": 3768, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Stellar Separation/Answer.tex", "max_stars_repo_name": "osugoom/questions", "max_stars_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Stellar Separation/Answer.tex", "max_issues_repo_name": "osugoom/questions", "max_issues_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Stellar Separation/Answer.tex", "max_forks_repo_name": "osugoom/questions", "max_forks_repo_head_hexsha": "5ad4fa6de9c9a8c60a3043adacfad41aef24ed4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-01-10T21:05:11.000Z", "max_forks_repo_forks_event_max_datetime": "2018-01-10T21:05:11.000Z", "avg_line_length": 61.7704918033, "max_line_length": 488, "alphanum_fraction": 0.6825902335, "num_tokens": 1322, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5803437435420326}}
{"text": "\\chapter{Objects and morphisms}\n\\label{ch:cats}\nI can't possibly hope to do category theory any justice in these few chapters;\nthus I'll just give a very high-level overview of how many of the concepts we've\nencountered so far can be re-cast into categorical terms.\nSo I'll say what a category is, give some examples,\nthen talk about a few things that categories can do.\nFor my examples, I'll be drawing from all the previous chapters;\nfeel free to skip over the examples corresponding to things you haven't seen.\n\nIf you're interested in category theory (like I was!), perhaps in\nwhat surprising results are true for general categories, I strongly recommend \\cite{ref:msci}.\n\n\\section{Motivation: isomorphisms}\nFrom earlier chapters let's recall the definition of an \\emph{isomorphism} of two objects:\n\\begin{itemize}\n\t\\ii Two groups $G$ and $H$ are isomorphic if there was a bijective homomorphism:\n\tequivalently, we wanted homomorphisms $\\phi : G \\to H$ and $\\psi : H \\to G$\n\twhich were mutual inverses, meaning $\\phi \\circ \\psi = \\id_H$ and $\\psi \\circ \\phi = \\id_G$.\n\t\\ii Two metric (or topological) spaces $X$ and $Y$ are isomorphic\n\tif there is a continuous bijection $f : X \\to Y$ such that $f\\inv$ is also continuous.\n\t\\ii Two vector spaces $V$ and $W$ are isomorphic if there is a bijection $T : V \\to W$\n\twhich is a linear map.\n\tAgain, this can be re-cast as saying that $T$ and $T\\inv$ are linear maps.\n\t\\ii Two rings $R$ and $S$ are isomorphic if there is a bijective ring homomorphism $\\phi$;\n\tagain, we can re-cast this as two mutually inverse ring homomorphisms.\n\\end{itemize}\n\nIn each case we have some collections of objects and some maps,\nand the isomorphisms can be viewed as just maps.\nLet's use this to motivate the definition of a general \\emph{category}.\n\n\\section{Categories, and examples thereof}\n\\prototype{$\\catname{Grp}$ is possibly the most natural example.}\n\\begin{definition}\n\tA \\vocab{category} $\\AA$ consists of:\n\t\\begin{itemize}\n\t\t\\ii A class of \\vocab{objects}, denoted $\\obj(\\AA)$.\n\t\t\\ii For any two objects $A_1, A_2 \\in \\obj(\\AA)$, \n\t\ta class of \\vocab{arrows} (also called \\vocab{morphisms} or \\vocab{maps}) between them.\n\t\tWe'll denote the set of these arrows by $\\Hom_\\AA(A_1, A_2)$.\n\t\t\\ii For any $A_1, A_2, A_3 \\in \\obj(\\AA)$,\n\t\tif $f : A_1 \\to A_2$ is an arrow and $g : A_2 \\to A_3$ is an arrow, we can compose\n\t\tthese arrows to get an arrow $g \\circ f : A_1 \\to A_3$.\n\n\t\tWe can represent this in a \\vocab{commutative diagram}\n\t\t\\begin{diagram}\n\t\t\tA_1 & \\rTo^f & A_2 \\\\\n\t\t\t& \\rdDashed^h & \\dTo_g \\\\\n\t\t\t&& A_3\n\t\t\\end{diagram}\n\t\twhere $h = g \\circ f$.\n\t\tThe composition operation $\\circ$ is part of the data of $\\AA$;\n\t\tit must be associative.\n\t\tIn the diagram above we say that $h$ \\vocab{factors} through $A_2$.\n\t\t\n\t\t\\ii Finally, every object $A \\in \\obj(\\AA)$ has a special \\vocab{identity arrow} $\\id_A$;\n\t\tyou can guess what it does.\\footnote{To be painfully explicit: if $f : A' \\to A$ is an arrow then $\\id_A \\circ f = f$;\n\t\tsimilarly, if $g : A \\to A'$ is an arrow then $g \\circ \\id_A = g$.}\n\t\\end{itemize}\n\\end{definition}\n\\begin{abuse}\n\tFrom now on, by $A \\in \\AA$ we'll mean $A \\in \\obj(\\AA)$.\n\\end{abuse}\n\\begin{abuse}\n\tYou can think of ``class'' as just ``set''.\n\tThe reason we can't use the word ``set'' is\n\tbecause of some paradoxical issues with\n\tcollections 
which are too large;\n\tCantor's Paradox says there is no set of all sets.\n\tSo referring to these by ``class'' is a way of sidestepping these issues.\n\n\tNow and forever I'll be sloppy and assume all my categories\n\tare \\vocab{locally small}, meaning that $\\Hom_{\\AA} (A_1, A_2)$\n\tis a set for any $A_1, A_2 \\in \\AA$.\n\tSo elements of $\\AA$ may not form a set,\n\tbut the set of morphisms between\n\ttwo \\emph{given} objects will always be assumed to be a set.\n\\end{abuse}\n\nLet's formalize the motivation we began with.\n\\begin{example}\n\t[Basic examples of categories]\n\t\\listhack\n\t\\label{example:basic_categories}\n\t\\begin{enumerate}[(a)]\n\t\t\\ii There is a category of groups $\\catname{Grp}$. The data is\n\t\t\\begin{itemize}\n\t\t\t\\ii The objects of $\\catname{Grp}$ are the groups.\n\t\t\t\\ii The arrows of $\\catname{Grp}$ are the homomorphisms between these groups.\n\t\t\t\\ii The composition $\\circ$ in $\\catname{Grp}$ is function composition.\n\t\t\\end{itemize}\n\t\t\\ii In the same way we can conceive a category $\\catname{CRing}$ of (commutative) rings.\n\t\t\\ii Similarly, there is a category $\\catname{Top}$ of topological spaces,\n\t\twhose arrows are the continuous maps.\n\t\t\\ii There is a category $\\catname{Top}_\\ast$ of topological spaces with a \\emph{distinguished basepoint};\n\t\tthat is, a pair $(X, x_0)$ where $x_0 \\in X$.\n\t\tArrows are continuous maps $f : (X, x_0) \\to (Y, y_0)$ with $f(x_0) = y_0$.\n\t\t\\ii Similarly, there is a category $\\catname{Vect}_k$ of\n\t\tvector spaces (possibly infinite-dimensional) over a field $k$,\n\t\twhose arrows are the linear maps.\n\t\tThere is even a category $\\catname{FDVect}_k$ of\n\t\t\\emph{finite-dimensional} vector spaces.\n\t\t\\ii We have a category $\\catname{Set}$ of sets,\n\t\twhere the arrows are \\emph{any} maps.\n\t\\end{enumerate}\n\\end{example}\nAnd of course, we can now define what an isomorphism is!\n\\begin{definition}\n\tAn arrow $A_1 \\taking{f} A_2$ is an \\vocab{isomorphism}\n\tif there exists $A_2 \\taking{g} A_1$ such that $f \\circ g = \\id_{A_2}$\n\tand $g \\circ f = \\id_{A_1}$.\n\tIn that case we say $A_1$ and $A_2$ are \\vocab{isomorphic}, written $A_1 \\cong A_2$.\n\\end{definition}\n\\begin{remark}\n\tNote that in $\\catname{Set}$, $X \\cong Y\n\t\\iff \\left\\lvert X \\right\\rvert = \\left\\lvert Y \\right\\rvert$.\n\\end{remark}\n\\begin{ques}\n\tCheck that every object in a category is isomorphic to itself.\n\t(This is offensively easy.)\n\\end{ques}\nMore importantly, this definition should strike you as a little impressive.\nWe're able to define whether two groups (rings, spaces, etc.) 
are isomorphic\nsolely by the functions between the objects.\nIndeed, one of the key themes in category theory (and even algebra) is that\n\\begin{moral}\n\tOne can learn about objects by the functions between them.\n\tCategory theory takes this to the extreme by \\emph{only} looking at arrows,\n\tand ignoring what the objects themselves are.\n\\end{moral}\n\nBut there are some trickier, more interesting examples of categories.\n\\begin{example}\n\t[Posets are categories]\n\tLet $\\mathcal P$ be a partially ordered set.\n\tWe can construct a category $P$ for it as follows:\n\t\\begin{itemize}\n\t\t\\ii The objects of $P$ are going to be the elements of $\\mathcal P$.\n\t\t\\ii The arrows of $P$ are defined as follows:\n\t\t\\begin{itemize}\n\t\t\t\\ii For every object $p \\in P$, we add an identity arrow $\\id_p$, and\n\t\t\t\\ii For any pair of distinct objects $p \\le q$, we add a single arrow $p \\to q$.\n\t\t\\end{itemize}\n\t\tThere are no other arrows.\n\t\t\\ii There's only one way to do the composition. What is it?\n\t\\end{itemize}\n\\end{example}\nFor example, for the poset $\\mathcal P$ on four objects $\\{a,b,c,d\\}$ with $a \\le b$ and $a \\le c \\le d$, we get:\n\\begin{center}\n\\begin{tikzpicture}[scale=3.5]\n\t\\SetVertexMath\n\t\\Vertices{square}{d,c,a,b}\n\t\\Edge[style={->}, label={$a \\le b$}](a)(b)\n\t\\Edge[style={->}, label={$a \\le c$}](a)(c)\n\t\\Edge[style={->}, label={$a \\le d$}](a)(d)\n\t\\Edge[style={->}, label={$c \\le d$}](c)(d)\n\t\\Loop[dist=8, dir=NO, label={$\\id_a$}, labelstyle={above=1pt}](a)\n\t\\Loop[dist=8, dir=WE, label={$\\id_b$}, labelstyle={left=1pt}](b)\n\t\\Loop[dist=8, dir=EA, label={$\\id_c$}, labelstyle={right=1pt}](c)\n\t\\Loop[dist=8, dir=WE, label={$\\id_d$}, labelstyle={left=1pt}](d)\n\\end{tikzpicture}\n\\end{center}\n\nThis illustrates the point that\n\\begin{moral}\n\tThe arrows of a category can be totally different from functions.\n\\end{moral}\nIn fact, in a way that can be made precise, the term ``concrete category'' refers\nto one where the arrows really are ``structure-preserving maps between sets'',\nlike $\\catname{Grp}$, $\\catname{Top}$, or $\\catname{CRing}$.\n\n\\begin{ques}\n\tCheck that no two distinct objects of a poset are isomorphic.\n\\end{ques}\n\nHere's a second quite important example of a non-concrete category.\n\\begin{example}\n\t[Important: groups are one-object categories]\n\tA group $G$ can be interpreted as a category $\\mathcal G$ with one object $\\ast$,\n\tall of whose arrows are isomorphisms.\n\n\t\\begin{center}\n\t\\begin{tikzpicture}[scale=5.5]\n\t\t\\Vertex[x=0,y=0,L={$\\ast$}]{a}\n\t\t\\Loop[dist=8, dir=NO, label={$1 = \\id_a$}, labelstyle={above=1pt}](a)\n\t\t\\Loop[dist=7, dir=WE, label={$g_2$}, labelstyle={left=1pt}](a)\n\t\t\\Loop[dist=9, dir=SO, label={$g_3$}, labelstyle={below=1pt}](a)\n\t\t\\Loop[dist=8, dir=EA, label={$g_4$}, labelstyle={right=1pt}](a)\n\t\\end{tikzpicture}\n\t\\end{center}\n\n\tAs \\cite{ref:msci} says:\n\n\t\\begin{quote}\n\tThe first time you meet the idea that a group is a kind of category,\n\tit's tempting to dismiss it as a coincidence or a trick.\n\tIt's not: there's real content.\n\tTo see this, suppose your education had been shuffled and you took a course\n\ton category theory before ever learning what a group was.\n\tSomeone comes to you and says: \n\n\t``There are these structures called `groups', and the idea is this:\n\ta group is what you get when you collect together all the symmetries\n\tof a given thing.''\n\n\t``What do you mean by a `symmetry'?'' you 
ask.\n\n\t``Well, a symmetry of an object $X$ is a way of transforming $X$ or mapping\n\t$X$ into itself, in an invertible way.''\n\n\t``Oh,'' you reply, ``that's a special case of an idea I've met before.\n\tA category is the structure formed by \\emph{lots} of objects and mappings\n\tbetween them -- not necessarily invertible. A group's just the very special case\n\twhere you've only got one object, and all the maps happen to be invertible.''\n\t\\end{quote}\n\\end{example}\n\n\\begin{exercise}\n\tVerify the above!\n\tThat is, show that the data of a one-object category with all isomorphisms\n\tis the same as the data of a group.\n\\end{exercise}\n\nFinally, here are some examples of categories you can make from other categories.\n\\begin{example}\n\t[Deriving categories]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Given a category $\\AA$, we can construct the \\vocab{opposite category}\n\t\t$\\AA\\op$, which is the same as $\\AA$ but with all arrows reversed.\n\t\t\\ii Given categories $\\AA$ and $\\BB$, we can construct the \\vocab{product category} $\\AA \\times \\BB$\n\t\tas follows: the objects are pairs $(A, B)$ for $A \\in \\AA$ and $B \\in \\BB$,\n\t\tand the arrows from $(A_1, B_1)$ to $(A_2, B_2)$\n\t\tare pairs \\[ \\left( A_1 \\taking{f} A_2, B_1 \\taking{g} B_2 \\right). \\]\n\t\tWhat do you think the composition and identities are?\n\t\\end{enumerate}\n\\end{example}\n\n\\section{Special objects in categories}\n\\prototype{$\\catname{Set}$ has initial object $\\varnothing$ and final object $\\{\\ast\\}$. An element of $S$ corresponds to a map $\\{\\ast\\} \\to S$.}\nCertain objects in categories have special properties.\nHere are a couple examples.\n\\begin{example}\n\t[Initial object]\n\tAn \\vocab{initial object} of $\\AA$ is an object\n\t$A_{\\text{init}} \\in \\AA$ such that for any $A \\in \\AA$ (possibly $A = A_{\\text{init}}$),\n\tthere is exactly one arrow from $A_{\\text{init}}$ to $A$.\n\tFor example,\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The initial object of $\\catname{Set}$ is the empty set $\\varnothing$.\n\t\t\\ii The initial object of $\\catname{Grp}$ is the trivial group $\\{1\\}$.\n\t\t\\ii The initial object of $\\catname{CRing}$ is the ring $\\ZZ$\n\t\t(recall that ring homomorphisms $R \\to S$ map $1_R$ to $1_S$).\n\t\t\\ii The initial object of $\\catname{Top}$ is the empty space.\n\t\t\\ii The initial object of a partially ordered set is its smallest element, if one exists.\n\t\\end{enumerate}\n\\end{example}\n\nWe will usually refer to ``the'' initial object of a category, since:\n\\begin{exercise}\n\t[Important!]\n\tShow that any two initial objects $A_1$, $A_2$ of $\\AA$ are \\emph{uniquely isomorphic}\n\tmeaning there is a unique isomorphism between them.\n\\end{exercise}\n\n\\begin{remark}\n\tIn mathematics, we usually neither know nor care if two objects are actually equal\n\tor whether they are isomorphic.\n\tFor example, there are many competing ways to define $\\RR$,\n\tbut we still just refer to it as ``the'' real numbers.\n\n\tThus when we define categorical notions, we would like to check they are\n\tunique up to isomorphism.\n\tThis is really clean in the language of categories, and definitions\n\toften cause objects to be unique up to isomorphism for elegant reasons like the above.\n\\end{remark}\n\nOne can take the ``dual'' notion, a terminal object.\n\\begin{example}\n\t[Terminal object]\n\tA \\vocab{terminal object} of $\\AA$ is an object\n\t$A_{\\text{final}} \\in \\AA$ such that for any $A \\in \\AA$ (possibly $A = 
A_{\\text{final}}$),\n\tthere is exactly one arrow from $A$ to $A_{\\text{final}}$.\n\tFor example,\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The terminal object of $\\catname{Set}$ is the singleton set $\\{\\ast\\}$.\n\t\t(There are many singleton sets, of course, but \\emph{as sets} they are all isomorphic!)\n\t\t\\ii The terminal object of $\\catname{Grp}$ is the trivial group $\\{1\\}$.\n\t\t\\ii The terminal object of $\\catname{CRing}$ is the zero ring $0$.\n\t\t(Recall that ring homomorphisms $R \\to S$ must map $1_R$ to $1_S$).\n\t\t\\ii The terminal object of $\\catname{Top}$ is the single-point space.\n\t\t\\ii The terminal object of a partially ordered set is its greatest element, if one exists.\n\t\\end{enumerate}\n\\end{example}\n\nAgain, terminal objects are unique up to isomorphism.\nThe reader is invited to repeat the proof from the preceding exercise here.\nHowever, we can illustrate more strongly the notion of duality to give a short proof.\n\\begin{ques}\n\tVerify that terminal objects of $\\AA$ are equivalent to initial objects of $\\AA\\op$.\n\tThus terminal objects of $\\AA$ are unique up to isomorphism.\n\\end{ques}\nIn general, one can consider in this way the dual of \\emph{any} categorical notion:\nproperties of $\\AA$ can all be translated to dual properties of $\\AA\\op$\n(often by adding the prefix ``co'' in front).\n\nOne last neat construction: suppose we're working in a concrete category,\nmeaning (loosely) that the objects are ``sets with additional structure''.\nNow suppose you're sick of maps and just want to think about elements of these sets.\nWell, I won't let you do that since you're reading a category theory chapter,\nbut I will offer you some advice:\n\\begin{itemize}\n\t\\ii In $\\catname{Set}$, arrows from $\\{\\ast\\}$ to $S$ correspond to elements of $S$.\n\t\\ii In $\\catname{Top}$, arrows from $\\{\\ast\\}$ to $X$ correspond to points of $X$.\n\t\\ii In $\\catname{Grp}$, arrows from $\\ZZ$ to $G$ correspond to elements of $G$.\n\t\\ii In $\\catname{CRing}$, arrows from $\\ZZ[x]$ to $R$ correspond to elements of $R$.\n\\end{itemize}\nand so on.\nSo in most concrete categories, you can think of elements as functions from special sets to the set in question.\nIn each of these cases we call the object in question a \\vocab{free object}.\n\n\\section{Binary products}\n\\prototype{$X \\times Y$ in most concrete categories is the set-theoretic product.}\nThe ``universal property'' is a way of describing objects in terms of maps\nin such a way that it defines the object up to unique isomorphism\n(much the same as the initial and terminal objects).\n\nTo show how this works in general, let me give a concrete example.\nSuppose I'm in a category -- let's say $\\catname{Set}$ for now.\nI have two sets $X$ and $Y$, and I want to construct the Cartesian product $X \\times Y$ as we know it.\nThe philosophy of category theory dictates that I should talk about maps only,\nand avoid referring to anything about the sets themselves.\nHow might I do this?\n\nWell, let's think about maps into $X \\times Y$.\nThe key observation is that \n\\begin{moral}\nA function $A \\taking f X \\times Y$\namounts to a pair of functions $(A \\taking g X, A \\taking h Y)$.\n\\end{moral}\nPut another way, there are natural projection maps $X \\times Y \\surjto X$ and $X \\times Y \\surjto Y$:\n\\begin{diagram}\n\t&& X \\\\\n\tX \\times Y & \\ruSurj(2,1)^{\\pi_X} & \\\\\n\t& \\rdSurj(2,1)_{\\pi_Y} & Y\n\\end{diagram}\n(We have to do this in terms of projection maps rather than 
elements,\nbecause category theory forces us to talk about arrows.)\nNow how do I add $A$ to this diagram?\nThe point is that there is a bijection between functions $A \\taking f X \\times Y$\nand pairs $(g,h)$ of functions.\nThus for every pair $A \\taking g X$ and $A \\taking h Y$ there is a \\emph{unique} function\n$A \\taking f X \\times Y$.\n\nBut $X \\times Y$ is special in that it is ``universal'':\nfor any \\emph{other} set $A$, if you give me functions $A \\to X$ and $A \\to Y$, I can use them to\nbuild a \\emph{unique} function $A \\to X \\times Y$.\nPicture:\n\\begin{diagram}\n\t&& && X \\\\\n\tA & \\rDotted~{\\exists! f} & X \\times Y & \\ruTo(4,1)^g \\ruSurj(2,1)_{\\pi_X} & \\\\\n\t& \\rdTo(4,1)_h && \\rdProj(2,1)^{\\pi_Y} & Y\n\\end{diagram}\nWe can do this in any general category, defining a so-called product.\n\\begin{definition}\n\tLet $X$ and $Y$ be objects in any category $\\AA$.\n\tThe \\vocab{product} consists of an object $X \\times Y$\n\tand arrows $\\pi_X$, $\\pi_Y$ to $X$ and $Y$ (thought of as projection).\n\tWe require that for any object $A$ and arrows $A \\taking g X$, $A \\taking h Y$, there\n\tis a \\emph{unique} function $A \\taking f X \\times Y$ such that the diagram\n\t\\begin{diagram}\n\t\t&& && X \\\\\n\t\tA & \\rDotted~{\\exists! f} & X \\times Y & \\ruTo(4,1)^g \\ruProj(2,1)_{\\pi_X} & \\\\\n\t\t& \\rdTo(4,1)_h && \\rdProj(2,1)^{\\pi_Y} & Y\n\t\\end{diagram}\n\tcommutes.\n\\end{definition}\n\\begin{abuse}\n\tStrictly speaking, the product should consist of \\emph{both}\n\tthe object $X \\times Y$\n\tand the projection maps $\\pi_X$ and $\\pi_Y$.\n\tHowever, if $\\pi_X$ and $\\pi_Y$ are understood,\n\tthen we often use $X \\times Y$ to refer to the object,\n\tand refer to it also as the product.\n\t\\label{abuse:object}\n\\end{abuse}\n\nProducts do not always exist; for example,\ntake a category with just two objects and no non-identity morphisms.\nNonetheless:\n\\begin{proposition}[Uniqueness of products]\n\tWhen they exist, products are unique up to isomorphism:\n\tgiven two products $P_1$ and $P_2$ of $X$ and $Y$\n\tthere is an isomorphism between the two objects.\n\\end{proposition}\n\\begin{proof}\n\tThis is very similar to the proof that initial objects are unique up to unique isomorphism.\n\tConsider two such objects $P_1$ and $P_2$, and the associated projection maps.\n\tSo, we have a diagram\n\t\\begin{diagram}\n\t\t&& X && \\\\\n\t\t& \\ruProj^{\\pi_X^1} & \\uProj_{\\pi_X^2} & \\luProj^{\\pi_X^1} & \\\\\n\t\tP_1 & \\rTo~f & P_2 & \\rTo~g & P_1 \\\\\n\t\t& \\rdProj_{\\pi_Y^1} & \\dProj_{\\pi_Y^2} & \\ldProj_{\\pi_Y^1} & \\\\\n\t\t&& Y &&\n\t\\end{diagram}\n\tThere are unique morphisms $f$ and $g$ between $P_1$ and $P_2$ that\n\tmake the entire diagram commute, according to the universal property.\n\n\tOn the other hand, look at $g \\circ f$ and focus on just the outer square.\n\tObserve that $g \\circ f$ is a map which makes the outer square commute,\n\tso by the universal property of $P_1$ it is the only one.\n\tBut $\\id_{P_1}$ works as well.\n\tThus $\\id_{P_1} = g \\circ f$.\n\tSimilarly, $f \\circ g = \\id_{P_2}$ so $f$ and $g$ are isomorphisms.\n\\end{proof}\n\\begin{abuse}\n\tActually, this is not really the morally correct theorem,\n\tsince we've only shown the objects $P_1$ and $P_2$ are isomorphic\n\tand have not made any assertion about the projection maps.\n\tBut I haven't (and won't) define isomorphism of the entire product,\n\tand so in what follows if I say ``$P_1$ and $P_2$ are isomorphic''\n\tI really just mean the objects are isomorphic.\n\\end{abuse}\n
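To make the uniqueness concrete, here is the special case of $\\catname{Set}$ spelled out:\n\\begin{example}\n\t[Concretely, in $\\catname{Set}$]\n\tIn $\\catname{Set}$ we can see the uniqueness directly:\n\tcommutativity forces $\\pi_X(f(a)) = g(a)$ and $\\pi_Y(f(a)) = h(a)$\n\tfor each $a \\in A$, so the only possible map is $f(a) = \\left( g(a), h(a) \\right)$.\n\\end{example}\n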
\\begin{exercise}\n\tIn fact, show the products are unique up to \\emph{unique} isomorphism:\n\tthe $f$ and $g$ above are the only isomorphisms between\n\tthe objects $P_1$ and $P_2$ that commute with the projections.\n\\end{exercise}\n\nThe nice fact about this ``universal property'' mindset\nis that we don't have to give explicit constructions; assuming existence,\nthe universal property allows us to bypass all this work by saying\n``the object with these properties is unique up to unique isomorphism'',\nso we don't need to understand the internal workings of the object\nto use its properties.\n\nOf course, that's not to say we can't give concrete examples.\n\\begin{example}\n\t[Examples of products]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii In $\\catname{Set}$, the product of two sets\n\t\t$X$ and $Y$ is their Cartesian product $X \\times Y$.\n\t\t\\ii In $\\catname{Grp}$, the product of $G$, $H$\n\t\tis the group product $G \\times H$.\n\t\t\\ii In $\\catname{Vect}_k$, the product\n\t\tof $V$ and $W$ is $V \\oplus W$.\n\t\t\\ii In $\\catname{CRing}$, the product\n\t\tof $R$ and $S$ is appropriately the ring product $R \\times S$.\n\t\t\\ii Let $\\mathcal P$ be a poset interpreted as a category.\n\t\tThen the product of two objects $x$ and $y$\n\t\tis the \\vocab{greatest lower bound}; for example,\n\t\t\\begin{itemize}\n\t\t\t\\ii If the poset is $(\\RR, \\le)$ then it's $\\min\\{x,y\\}$.\n\t\t\t\\ii If the poset is the subsets\n\t\t\tof a finite set by inclusion,\n\t\t\tthen it's $x \\cap y$.\n\t\t\t\\ii If the poset is the positive integers ordered by division,\n\t\t\tthen it's $\\gcd(x,y)$.\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\\end{example}\n\nOf course, we can define products of more than just two objects.\nConsider a set of objects $(X_i)_{i \\in I}$ in a category $\\AA$.\nWe define a \\vocab{cone} on the $X_i$ to be an object $A$\nwith some ``projection'' maps to each $X_i$.\nThen the \\vocab{product} is a cone $P$ which is ``universal'' in the same sense as before:\ngiven any other cone $A$ there is a unique map $A \\to P$ making the diagram commute.\nIn short, a product is a ``universal cone''.\n\nThe picture of this is\n\\begin{diagram}\n\t&& A && \\\\\n\t& \\ldProj(2,4) \\ldProj(1,4) & \\dTo~{!\\exists f} & \\rdProj(2,4) \\rdProj(1,4)& \\\\\n\t&& P && \\\\\n\t& \\ldProj \\ldProj(1,2) && \\rdProj \\rdProj(1,2) & \\\\\n\tX_1 & X_2 && X_3 & X_4\n\\end{diagram}\nSee also \\Cref{prob:associative_product}.\n\nOne can also do the dual construction to get a \\vocab{coproduct}:\ngiven $X$ and $Y$, it's the object $X+Y$\ntogether with maps $X \\taking{\\iota_X} X+Y$ and $Y \\taking{\\iota_Y} X+Y$\n(that's Greek iota, think inclusion)\nsuch that for any object $A$ and maps $X \\taking g A$, $Y \\taking h A$\nthere is a unique $f$ for which\n\\begin{diagram}\n\tX &&&& \\\\\n\t& \\rdTo(1,2)_{\\iota_X} \\rdTo(3,2)^g && \\\\\n\t& X+Y & \\rTo~{!\\exists f} & A \\\\\n\t\\ruTo(1,2)^{\\iota_Y} && \\ruTo(3,2)_h & \\\\\n\tY &&&&\n\\end{diagram}\ncommutes.\nWe'll leave some of the concrete examples as an exercise this time,\nfor example:\n\\begin{exercise}\n\tDescribe the coproduct in $\\catname{Set}$.\n\\end{exercise}\nPredictable terminology: a coproduct is a universal \\vocab{cocone}.\n\nSpoiler alert:\nthis construction can be vastly generalized to so-called ``limits'',\nand we'll do so later on.\n\n\\section{Monic and epic maps}\nThe notion of ``injective'' doesn't make sense\nin an arbitrary category since arrows need not be functions.\nThe correct categorical notion 
is:\n\\begin{definition}\n\tA map $X \\taking f Y$ is \\vocab{monic}\n\t(or a monomorphism) if for any commutative diagram\n\t\\begin{diagram}\n\t\tA & \\pile{\\rTo^g \\\\ \\rTo_h} & X & \\rTo^f & Y \n\t\\end{diagram}\n\twe must have $g = h$.\n\tIn other words, $f \\circ g = f \\circ h \\implies g = h$.\n\\end{definition}\n\\begin{ques}\n\tVerify that in a \\emph{concrete} category, injective $\\implies$ monic.\n\\end{ques}\n\\begin{ques}\n\tShow that the composition of two monic maps is monic.\n\\end{ques}\n\nIn most but not all situations, the converse is also true.\n\\begin{exercise}\n\tShow that in $\\catname{Set}$, $\\catname{Grp}$, $\\catname{CRing}$,\n\tmonic implies injective. (Take $A = \\{\\ast\\}$, $A = \\ZZ$, $A = \\ZZ[x]$.)\n\\end{exercise}\nMore generally, as we said before there are many categories\nwith a ``free'' object that you can use to think about elements.\nAn element of a set is a function $1 \\to S$,\nan element of a ring is a function $\\ZZ[x] \\to R$, et cetera.\nIn all these categories,\nthe definition of monic literally reads\n``$f$ is injective on $\\Hom_\\AA(A, X)$''.\nSo in these categories, ``monic'' and ``injective'' coincide.\n\nThat said, here is the standard counterexample.\nAn additive abelian group $G = (G,+)$ is called \\emph{divisible}\nif for every $x \\in G$ and $n \\in \\ZZ$ there exists $y \\in G$ with $ny = x$.\nLet $\\catname{DivAbGrp}$ be the category of such groups.\n\\begin{exercise}\n\tShow that the projection $\\QQ \\to \\QQ/\\ZZ$ is monic but not injective.\n\\end{exercise}\n\nOf course, we can also take the dual notion.\n\\begin{definition}\n\tA map $X \\taking f Y$ is \\vocab{epic}\n\t(or an epimorphism) if for any commutative diagram\n\t\\begin{diagram}\n\t\tX & \\rTo^f & Y & \\pile{\\rTo^g \\\\ \\rTo_h} & A\n\t\\end{diagram}\n\twe must have $g = h$.\n\tIn other words, $g \\circ f = h \\circ f \\implies g = h$.\n\\end{definition}\n\nThis is kind of like surjectivity, although the correspondence is a little looser than last time.\nNote that in concrete categories, surjective $\\implies$ epic.\n\\begin{exercise}\n\tShow that in $\\catname{Set}$, $\\catname{Grp}$, $\\catname{Ab}$, $\\catname{Vect}_k$, $\\catname{Top}$,\n\tthe notions of epic and surjective coincide.\n\t(For $\\catname{Set}$, take $A = \\{0, 1\\}$.)\n\\end{exercise}\nHowever, there are more cases where the correspondence fails.\nMost notably:\n\\begin{example}\n\t[Epic but not surjective]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii In $\\catname{CRing}$, for instance, the inclusion $\\ZZ \\injto \\QQ$ is epic\n\t\t(and not surjective).\n\t\tIndeed, if two homomorphisms $\\QQ \\to A$ agree on\n\t\tevery integer then they agree everywhere (why?).\n\t\t\\ii In the category of \\emph{Hausdorff} topological spaces\n\t\t(every two points have disjoint open neighborhoods),\n\t\tin fact epic $\\iff$ dense image (like $\\QQ \\injto \\RR$).\n\t\\end{enumerate}\n\tThus failures arise when a function $f : X \\to Y$ can be determined by just some of the points of $X$.\n\\end{example}\n\n\\section\\problemhead\n\n\\begin{problem}\n\tIn the category $\\catname{Vect}_k$ of $k$-vector spaces\n\t(for a field $k$),\n\twhat are the initial and terminal objects?\n\\end{problem}\n\n\\begin{dproblem}\n\tWhat is the coproduct $X+Y$ in the categories\n\t$\\catname{Set}$, $\\catname{Vect}_k$, and a poset?\n\\end{dproblem}\n\n\\begin{problem}\n\tIn any category $\\AA$ where all products exist,\n\tshow that \\[ (X \\times Y) \\times Z \\cong X \\times (Y \\times Z) \\]\n\twhere $X$, $Y$, $Z$ are arbitrary objects.\n\t(Here 
both sides refer to the objects, as in \\Cref{abuse:object}.)\n\t\\label{prob:associative_product}\n\\end{problem}\n\n\\begin{problem}\n\t\\gim\n\tConsider a category $\\AA$ with a \\vocab{zero object},\n\tmeaning an object which is both initial and terminal. \n\tGiven objects $X$ and $Y$ in $\\AA$,\n\tprove that the projection $X \\times Y \\to X$ is epic.\n\\end{problem}\n", "meta": {"hexsha": "38f7ba104803cb3c4b4cb31f6f112499756fb559", "size": 26071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/cats/categories.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/cats/categories.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/cats/categories.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4609120521, "max_line_length": 146, "alphanum_fraction": 0.7012389245, "num_tokens": 8223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303087996143, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5803437408947885}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{xcolor,listings}\n\\usepackage[a4paper, total={7in, 10in}]{geometry}\n\\usepackage[spanish]{babel}\n\n\\lstset{\n  language=Python,\n  backgroundcolor = \\color{lightgray},\n  showspaces=false,\n  numbers=left,\n  numberstyle=\\tiny,\n  commentstyle=\\color{gray},\n  keywordstyle=\\color{red}\\ttfamily,\n  basicstyle=\\ttfamily,\n  columns=fullflexible,\n  frame=single,\n  breaklines=true,\n  postbreak=\\mbox{\\textcolor{black}{$\\hookrightarrow$}\\space},\n}\n\n\\begin{document}\n\\title{% \n\t\t\\LARGE \\textbf{388} \\\\\n        }\n\\author{\n\tNicolas Novalic\n}\n\n\\section*{A more or less formal solution}\n\nThe problem asks us to find (and count) all numbers of $x$ digits, for $x = 3, \\dots, 7$, whose last 3 digits are 388.\n\\vskip 0.2in\nFor every number $n$ it's easy to see that the result of $$n \\times 10^{(number \\ of \\ digits \\ in \\ n)} + n$$ \\vskip 0.0in ends with the digits of $n$ and is a multiple of $n$.\n\\vskip 0.2in\n\\textbf{\\textit{i.e.:}} \\\\\n\n\\noindent\\fbox{%\n    \\parbox{\\textwidth}{%\n      $$8 \\times (10 + 1) = 8 \\times 10 + 8 \\times 1 = 80 + 8 = 88$$\n      $$76 \\times 101 = 7600 + 76 = 7676$$\n\t}\n}\\vskip 0.2in\n\nIn particular: $$388 \\times 1001 = 388388$$\n\n\\vskip 0.2in\nSo we can formulate the problem like this: we need all the $k$'s with $$ 388 \\times k < 10000000 $$\n\\vskip 0.2in\nsuch that $388 \\times k$ ends in 388.\n\\vskip 0.2in\nIf we divide $10000000$ by $388$ and round down, we obtain the range of values that $k$ can take. The result of the division is 25773, so $388 \\times k < 10000000$ iff $k \\leq 25773$.\n\n\\vskip 0.2in\nNow, as the prime decomposition of 388 is $2 \\times 2 \\times 97$, we can write 388 as $4 \\times 97$.\n\\vskip 0.2in\nDecomposing 388 and 1000 in this way, we can reformulate $388 \\times 1000$ as $(4 \\times 97) \\times (4 \\times 250)$, so every 250 values of $k$ we get a multiple of 388 that ends with three zeros ($388 \\times 250 = 97000$), and if we add 388 to such a number, we have found one of the numbers we are looking for! In other words, $388 \\times k$ ends in 388 exactly when $k = 250 \\times i + 1$ for some integer $i \\geq 0$.\n\\vskip 0.2in\nNow we see how many of these numbers are in the valid range:\n$$\\frac{25773}{250} \\approx 103.09 \\implies i \\leq 103$$\n\nSo $388 \\times (250 \\times i + 1)$ for $i = 1, \\dots, 103$ are all valid numbers.\n\\vskip 0.2in\nTo end up, we need to add the solution obtained for $i = 0$: $$388 \\times (250 \\times 0 + 1) = 388$$ \\vskip 0.0in which we weren't counting yet. (Note that $388 \\times 250 = 97000$ itself ends in 000, not in 388.)\n\\vskip 0.2in\nAdding up, in total there are \\textbf{104} numbers of seven digits or fewer whose last three digits are 388 in that order. 
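\\vskip 0.2in\nAs a quick check of the endpoints: the smallest solution is $388 \\times (250 \\times 0 + 1) = 388$ and the largest one with at most seven digits is $$388 \\times (250 \\times 103 + 1) = 388 \\times 25751 = 9991388,$$ \\vskip 0.0in while the next candidate, $388 \\times (250 \\times 104 + 1) = 388 \\times 26001 = 10088388$, already has eight digits.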
\n\n\\section*{Python code to check the solution}\n\nTo check the number obtained above, I've written this piece of code, as concise as possible:\n\\vskip 0.2in\n\\begin{lstlisting}\nprint (len([ x for x in \\\n             map(lambda n: n[-3:], \\\n                 map(str, \\\n                     [y * 388 for y in range(1, 10000000 // 388 + 1)])) \\\n             if '388' in x ]))\n\\end{lstlisting}\n\\vskip 0.2in\nBasically what I'm doing is:\n\\begin{itemize}\n\\item [-] As the smallest 8 digit number is 10000000, we can find out how many multiples of 388 less than 10000000 exist by dividing the two numbers; adding one makes the range include the last such multiple (line 4).\n\\item [-] Creating a list of all multiples of 388 between 388 and 10000000 (line 4).\n\\item [-] Using the map function to convert all the elements of this list (which are integers) to strings (line 3).\n\\item [-] Defining a lambda function which, given an element, returns its last 3 characters (line 2).\n\\item [-] Using the map function with the previously mentioned lambda function on the list resulting from the first map application (line 2).\n\\item [-] Extracting the elements which satisfy that the string '388' is a part of them (line 1).\n\\item [-] We can easily note that the surviving elements are all equal to the string '388'. So we count how many elements are in this list and print the result (line 1).\n\\end{itemize}\n\\vskip 0.2in\nThis is effectively just one (logical) line of code.\n\n\n\\end{document}\n", "meta": {"hexsha": "86e1ab7df247f17e07098bfaeb4d1c35f306da44", "size": 3979, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "388/388.tex", "max_stars_repo_name": "novalic/mathProblems", "max_stars_repo_head_hexsha": "ccb21bb5fb7c4c97f3ffb113c22b25b1cee049aa", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-04-22T11:03:06.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-01T21:06:53.000Z", "max_issues_repo_path": "388/388.tex", "max_issues_repo_name": "novalic/articles", "max_issues_repo_head_hexsha": "ccb21bb5fb7c4c97f3ffb113c22b25b1cee049aa", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "388/388.tex", "max_forks_repo_name": "novalic/articles", "max_forks_repo_head_hexsha": "ccb21bb5fb7c4c97f3ffb113c22b25b1cee049aa", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4479166667, "max_line_length": 265, "alphanum_fraction": 0.6941442574, "num_tokens": 1238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.8221891305219504, "lm_q1q2_score": 0.5802887885490027}}
{"text": "\\documentclass[12pt, letterpaper]{report}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{float}\n\\usepackage{subfig}\n\\graphicspath{ {./img/} }\n\\setlength\\parindent{0pt}\n\\renewcommand\\thesection{\\Roman{section}.}\n\\renewcommand{\\thesubsection}{\\alph{subsection}.}\n\n\n\\title{CS1675 - Assignment 3}\n\\author{Zachary M. Mattis}\n\n\n\\begin{document}\n\t\n\\maketitle\n\n\\section{Problem 1 - Bernoulli Trials}\n\n% A\n\\subsection{ML Estimate}\n\n\\[ \\hat{\\theta}(x) =  0.65 \\]\n\n% B\n\\subsection{$Beta(\\theta | 1,1)$}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p1b.png}\n\t\\caption{Prior = 1,1}\n\\end{figure}\n\n% C\n\\subsection{MAP Estimate}\n\n\n\\[ \\textrm{MAP Estimate of } \\theta = 0.65 \\]\n\n\n% D\n\\subsection{$Beta(\\theta | 4,2)$}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p1d.png}\n\t\\caption{Prior = 4,2}\n\\end{figure}\n\n\\[ \\textrm{MAP Estimate of } \\theta = 0.6538 \\]\n\n\\section{Problem 2 - Multivariate Gaussian}\n\n% A\n\\subsection{Gaussian Scatter Plot}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p2a.png}\n\t\\caption{Gaussian Scatter Plot}\n\\end{figure}\n\n% B\n\\subsection{ML Estimation}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t3.6377 & 7.8506 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Mean}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t3.6414 & 1.0779 \\\\\n\t\t\\hline\n\t\t1.0779 & 3.7831 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Covariance}\n\\end{table}\n\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p2b.png}\n\t\\caption{Gaussian 3-D}\n\\end{figure}\n\n% C\n\\subsection{Individual Gaussian}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p2c.png}\n\t\\caption{Individual Gaussian}\n\\end{figure}\n\nThe first column of the data in ``gaussian.txt'' is shown in blue, while the second is in red.  The corresponding means are 3.6377 and 7.8506.  The corresponding variances are 3.6414 and 3.7831.\n\n% D\n\\subsection{Multivariate vs. Univariate}\n\nI believe a multivariate Gaussian model is better than two separate univariate models. Given a multivariate model, it is much easier to see any correlations between the variables, as they are displayed together. Univariate models need to be interpreted side by side, and any correlation must be inferred by the viewer.\n
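\nAs a cross-check of the ML estimates in part b, here is a minimal sketch in Python (the file name \\texttt{gaussian.txt} and the row-wise $n \\times 2$ layout are assumptions):\n\\begin{verbatim}\nimport numpy as np\n\n# Load n x 2 samples (assumed file name and layout).\nX = np.loadtxt('gaussian.txt')\n\nmu_ml = X.mean(axis=0)        # ML mean: the sample average\nD = X - mu_ml\ncov_ml = D.T @ D / len(X)     # ML covariance divides by n, not n-1\nprint(mu_ml)\nprint(cov_ml)\n\\end{verbatim}\n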
\n\n\\section{Problem 3 - Exponential Distribution}\n\n% A\n\\subsection{Density Function}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\columnwidth]{p3a.png}\n\t\\caption{Exponential Distribution}\n\\end{figure}\n\n\\subsection{ML Estimate}\n\n\n\\[ f(x;\\theta)=\\frac{1}{\\theta}{e}^{\\frac{-x}{\\theta}}, \\quad 0<x<\\infty, \\quad \\theta\\in(0,\\infty) \\]\n\\\\\n\\[ L(\\theta)=L\\left(\\theta;{x}_{1},{x}_{2}...{x}_{n} \\right)=\\left(\\frac{1}{\\theta}{e}^{\\frac{{-x}_{1}}{\\theta}}\\right)\\left(\\frac{1}{\\theta}{e}^{\\frac{{-x}_{2}}{\\theta}}\\right)...\\left(\\frac{1}{\\theta}{e}^{\\frac{{-x}_{n}}{\\theta}} \\right)=\\frac{1}{{\\theta}^{n}}\\exp\\left(\\frac{-\\sum_{1}^{n}{x}_{i}}{\\theta} \\right) \\]\n\\\\\n\\[\\ln L\\left(\\theta\\right)=-n\\ln\\left(\\theta\\right) -\\frac{1}{\\theta}\\sum_{1}^{n}{x}_{i}, \\quad 0<\\theta<\\infty\\]\n\\\\\n\\[\\frac{d\\left[\\ln L\\left(\\theta\\right) \\right]}{d\\theta}=\\frac{-n}{\\theta} +\\frac{1}{{\\theta}^{2}}\\sum_{1}^{n}{x}_{i}=0\\]\n\\\\\n\\[\\hat{\\theta}=\\frac{\\sum_{1}^{n}{x}_{i}}{n}\\]\n\\\\\n\\[\\Theta=\\frac{\\sum_{1}^{n}{X}_{i}}{n}\\]\n\n\n\\end{document}          ", "meta": {"hexsha": "bd05d9938cdafb79ffd24b5a667f694937b54d45", "size": 3417, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CS-1675/Homework3/homework3_analysis.tex", "max_stars_repo_name": "zmattis/University_of_Pittsburgh", "max_stars_repo_head_hexsha": "29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2017-07-21T17:56:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-05T09:25:12.000Z", "max_issues_repo_path": "CS-1675/Homework3/homework3_analysis.tex", "max_issues_repo_name": "zmattis/University_of_Pittsburgh", "max_issues_repo_head_hexsha": "29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CS-1675/Homework3/homework3_analysis.tex", "max_forks_repo_name": "zmattis/University_of_Pittsburgh", "max_forks_repo_head_hexsha": "29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-10-14T03:28:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-04T07:41:07.000Z", "avg_line_length": 23.8951048951, "max_line_length": 340, "alphanum_fraction": 0.6774948785, "num_tokens": 1263, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.8221891392358014, "lm_q1q2_score": 0.5802887845228283}}
{"text": "%%%%%%%%%%%%%%%%%%%%%definitions%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\input{../../doc/related_pages/header.tex}\n\\input{../../doc/related_pages/newcommands.tex}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%DOCUMENT%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{document}\n\n\\title{\nThe full-F electromagnetic model in toroidal geometry \\textsc{Feltor}}\n\\author{ M.~Wiesenberger and M.~Held}\n\\maketitle\n\n\\begin{abstract}\nThe purpose of this document is to describe the programs\n\\texttt{feltor\\_hpc.cu, feltor.cu, feltor\\_diag.cu} and, to an extent,\n\\texttt{geometry\\_diag.cu}. The goal is to provide\nenough information that a user can avoid looking\ninto the actual code on the one hand and can connect\nthe presented formulas to the relevant journal publications on the other.\n\nThe program \\texttt{feltor/inc/geometries/geometry\\_diag.cu}\nanalyses the magnetic field geometry.\n\\texttt{feltor\\_hpc.cu} and \\texttt{feltor.cu} are programs for global 3d isothermal electromagnetic full-F gyro-fluid simulations.\n\\texttt{feltor/diag/feltor\\_diag.cu} is a program to analyse the output\nfile(s) of \\texttt{feltor\\_hpc.cu}.\n\n\\end{abstract}\n\\tableofcontents\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The magnetic field}\\label{sec:magnetic}\nWe assume a three-dimensional flat space with arbitrary coordinate\nsystem $\\vec x :=\\{x_0, x_1, x_2\\}$, metric\ntensor $g_{ij}$ and volume element $\\sqrt{g} := \\sqrt{\\det g}$.\nGiven a vector field $\\vec B(\\vec x)$ with unit vector $\\bhat(\\vec x) := (\\vec B/B)({\\vec x})$\nwe can define various differential operations.\n%We further assume that $\\bhat$ is perturbed by the parallel\n%vector potential $A_\\parallel$ via\n%${ \\vec b }_\\perp := ({\\vn \\times A_\\parallel \\bhat)}/{B}$\n\\rowcolors{2}{gray!25}{white}\n\\begin{longtable}{lll>{\\RaggedRight}p{7cm}}\n%\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Symbol} & \\textbf{Definition} \\\\\n\\midrule\n    Perpendicular Poisson bracket&\n    $\\left[.,.\\right]_\\perp$ &\n    $\\left[f,g\\right]_\\perp := \\bhat \\cdot \\left(\\vn f \\times \\vn g\\right) =\n    b_i \\varepsilon^{ijk}\\partial_j f\\partial_k g/\\sqrt{g}$  \\\\\n    Projection Tensor&\n    $h $ & $h^{ij} := g^{ij} - b^ib^j $\\\\\n    %Alignment Tensor&\n    %$t $ & $ t^{ij} := b^ib^j$\\\\\n    Perpendicular Gradient&\n    $\\np $&\n    $ \\np f := \\bhat\\times(\\vn f\\times \\bhat ) \\equiv\n    h \\cn f$ \\\\\n    Perpendicular Laplacian&\n    $\\Delta_\\perp $&\n    $ \\Delta_\\perp f:= \\vec \\nc (\\np f)\n    = \\nc( h\\cn f)$  \\\\\n    Curl-b Curvature &\n    $\\mathcal K_{\\vn\\times\\bhat}$ &\n    $\\mathcal K_{\\vn\\times\\bhat}(f) := \\vec{ \\mathcal K_{\\vn\\times\\bhat} }\\cn f = \\frac{1}{B}(\\vn \\times \\bhat)\\cn f$ \\\\[4pt]\n    Grad-B Curvature &\n    $\\mathcal K_{\\vn B} $ &\n    $\\mathcal K_{\\vn B}(f) := \\vec{\\mathcal K_{\\vn B}} \\cn f = \\frac{1}{B}(\\bhat \\times \\vn \\ln B)\\cn f$ \\\\[4pt]\n    Curvature &\n    $\\mathcal K$ &\n    $\\mathcal{K}(f):=\\vec{\\mathcal K} \\cn f =\n     \\nc\\left(\\frac{\\bhat\\times\\vn f}{B}\\right)$ \\\\[4pt]\n    Parallel Derivative&\n    $\\npar $&\n    $ \\npar f := \\bhat\\cn f$ \\\\\n    %Perturbed parallel Derivative&\n    %$\\bar\\npar$ &\n    %$\\bar\\npar f := (\\bhat + {\\vec b }_\\perp)\\cn f = \\npar f + A_\\parallel \\mathcal K_{\\vn\\times\\bhat}(f) + \\frac{1}{B}[ f, A_\\parallel]_\\perp$ \\\\\n    Parallel Laplacian&\n    $\\Delta_\\parallel 
f:= \\nc ( \\bhat\\bhat\\cn f )$\\\\\n\\bottomrule\n\\end{longtable}\nwith $b^i$ the contra- and $b_i$ the co-variant components of $\\bhat$, and\n$\\eps^{ijk}$ the Levi-Civita symbols.\nExplicit expressions for the above operators\ndepend on the choice of the magnetic field and the underlying coordinate system.\nNote that we have\n\\begin{align}\n    \\nc \\vec{\\mathcal K_{\\vn\\times\\bhat}}\n    &= -\\nc \\vec{\\mathcal K_{\\vn B}} = -\\vec{ \\mathcal K_{\\vn\\times\\bhat}}\\cn\\ln B, \\\\\n    \\vec\\nc\\vec{ \\mathcal K} &= 0, \\\\\n    \\mathcal K(f) &=\n     \\left(\\vn\\times\\frac{\\bhat}{B}\\right)\\cn f\n    = \\mathcal K_{\\vn\\times\\bhat}(f) + \\mathcal K_{\\vn B}(f),\\\\\n    \\vec{ \\mathcal K_{\\vn\\times\\bhat}} - \\vec{ \\mathcal K_{\\vn B}} &= \\frac{1}{B^2} (\\vn \\times \\vec B), \\\\\n    \\npar \\ln B &= -\\vec\\nc\\bhat.\n    \\label{eq:curl_curvature}\n\\end{align}\nThe last equality holds if $\\vec\\nc \\vec B = 0$.\nNote that in any arbitrary coordinate system we have\n\\begin{align}\n(\\vn f)^i = g^{ij}\\partial_j f ~, \\quad\n\\nc \\vec v = \\frac{1}{\\sqrt{g}}\\partial_i \\left(\\sqrt{g} v^i\\right) ~, \\quad\n(\\vec v \\times \\vec w)^i = \\frac{1}{\\sqrt{g}}\\varepsilon^{ijk} v_jw_k ~.\n%\\label{}\n\\end{align}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Coordinate system}\\label{sec:cylmetric}\nWe employ cylindrical coordinates \\( (R,Z,\\varphi) \\), with \\(\\varphi\\) anti-directed to the geometric toroidal angle ({\\bf clockwise} when viewed from above) to\nobtain a right-handed system. The parametric representation in Cartesian \\((x,y,z)\\) coordinates is therefore simply:\n\\begin{align}\n x &= R \\hspace{1 mm} \\sin{(\\varphi)}, &\n y &= R \\hspace{1 mm} \\cos{(\\varphi)}, &\n z &= Z .\n\\end{align}\nNote here that the angle $\\varphi = 0$ corresponds to the Cartesian $y$-axis.\nThe unit\nbasis vectors and (covariant) metric tensor are:\n\\begin{align}\n \\ehat_R      &= (\\sin{(\\varphi)} ,   \\cos{(\\varphi)},0)^T, &\n \\ehat_Z      &= ( 0 ,0 ,1 )^T, &\n \\ehat_{\\varphi} &= ( \\cos{(\\varphi)} , -\\sin{(\\varphi)} , 0 )^T,\n\\\\\n    (g_{ij}) &= \\begin{pmatrix}\n  1 & 0 & 0 \\\\\n  0 & 1 & 0 \\\\\n  0 & 0 & R^2\n   \\end{pmatrix}\n% \\vn R &= (\\sin{(\\varphi)} ,   \\cos{(\\varphi)},0 )^T , &\n%  \\vnZ &= ( 0 ,0 ,1 )^T,  &\n%  \\vn{\\varphi} &= \\frac{1}{R} ( \\cos{(\\varphi)} , -\\sin{(\\varphi)} , 0 )^T .\n\\end{align}\nWith the help of the metric elements we get a well-behaved volume element \\(\\sqrt{g} = R\\). 
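For example, with this metric the general formulas of Section~\\ref{sec:magnetic} reduce to the familiar cylindrical expressions\n\\begin{align}\n (\\vn f)^R &= \\partial_R f~, &\n (\\vn f)^Z &= \\partial_Z f~, &\n (\\vn f)^\\varphi &= \\frac{1}{R^2}\\partial_\\varphi f~, &\n \\nc \\vec v &= \\frac{1}{R}\\partial_R\\left( R\\, v^R\\right) + \\partial_Z v^Z + \\partial_\\varphi v^\\varphi~.\n\\end{align}\n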
\subsection{Solov'ev equilibrium}\label{sec:solovev}
In cylindrical coordinates the general axisymmetric magnetic field can be written as (dimensionless)
\begin{align}
 \vec{B} &= \frac{R_0}{R}\left[I(\psi_p) \ehat_{\varphi} + \frac{\partial
 \psi_p}{\partial Z} \ehat_R -  \frac{\partial \psi_p}{\partial R} \ehat_Z\right] ,
\end{align}
which obviously cannot be brought into Clebsch form.
Hence we are dealing with a non-flux aligned coordinate system.
For the sake of clarity we define the poloidal magnetic field \( \vec{B}_p = \frac{R_0}{R}\left( \frac{\partial \psi_p}{\partial Z}\ehat_R - \frac{\partial \psi_p}{\partial R}\ehat_Z\right)
\) and the toroidal magnetic field \(\vec{B}_t =\frac{R_0I}{R} \ehat_{\varphi}\).
Note that with a typically convex function $\psi_p$, $I(\psi_p)>0$ and the previously defined coordinate system the field line winding is a {\bf left-handed screw} in the positive $\vec B$-direction.

We scaled $R$, $Z$ and $R_0$ with $\rho_s = \sqrt{T_e m_i}/(eB_0)$, the
magnetic field with $B_0$, the poloidal flux with $\psi_{p0} = B_0\rho_s \hat
R_0$ and the poloidal equilibrium current streamfunction with $I_0 = B_0 \hat R_0$ (with $\hat R_0 =
\rho_s R_0$ the dimensional major radius).
We have the equilibrium equations in toroidally symmetric, ideal MHD
$\vn p = \vec j\times \vec B$ and $\vn\times\vec B = \beta \vec j$ normalized with $p_0 = n_0 T_0$, and $j_0 = e n_0 c_S$, where we introduce $\beta = n_0 T_0 \mu_0 /B_0^2$.
Note that this normalization is in line with the one later chosen for the gyrofluid
equations but is unnatural for the MHD type equilibrium equations through the introduction
of $\rho_s$ and $\beta$.
\begin{align}
    \vn\times \vec B &= \frac{R_0}{R}\left[ -\Delta^*\psi_p\ehat_\varphi + I_Z \ehat_R - I_R\ehat_Z \right]\equiv \beta \vec j\\
 \beta j_\parallel &= \beta \vec j\cdot \bhat = \beta \frac{\d p}{\d\psi_p} \frac{I(\psi_p)}{B} +
 \frac{\d I}{\d\psi_p} B \quad \text{  Pfirsch-Schl\"uter \& Bootstrap current } \\
 \beta \vec j_\perp &= \beta \bhat\times\left(\vec j\times\bhat\right)=
 \beta \frac{\bhat \times \vn p}{B} \quad\quad\quad \text{ diamagnetic current} \\
 \beta \vec j\times\vec B &= \frac{R_0^2}{R^2}\left[ -\Delta^* \psi_p - I
     \frac{\d I}{\d \psi_p} \right]\vn\psi_p \equiv \beta \frac{\d p}{\d\psi_p}\vn\psi_p =\beta \vn p
\end{align}
from which we recover the Grad-Shafranov equation
\begin{align}\label{eq:GSEdimless}
    -\Delta^*_\perp  \psi_p &= \beta \frac{R^2}{R_0^2} \frac{d p}{d  \psi_p } + I \frac{d I}{d  \psi_p } \equiv \beta \frac{R}{R_0} j_{\hat\varphi}
\end{align}
with $\Delta^*_\perp \psi_p = R\partial_R (R^{-1}\psi_R) + \psi_{ZZ}$.
The Solov'ev assumptions consist of \(A/R_0 = -I \frac{d I}{d  \psi_p }\) and \((1-A)/R_0 = -\frac{d p}{d  \psi_p }\), where \(A\) is a constant~\cite{Cerfon2010,Cerfon2014}.
By integration over \(\psi_p\) we find
$
 p(\psi_p) = (A-1)\psi_p/R_0/\beta$,
 $I(\psi_p) = \sqrt{-2 A \psi_p/R_0 + 1}$,
 and
    $j_{\hat\varphi} = \left[(A-1)R^2/R_0^2 - A \right]/R/\beta $.
Note that if $\psi_p$, $I(\psi_p)$ and $p(\psi_p)$ are a solution to Eq.~\eqref{eq:GSEdimless}
then so are $\mathcal P_\psi \psi_p$, $\mathcal
P_\\psi I(\\psi_p)$ and $\\mathcal P_\\psi^2 p(\\psi_p)$.\nAlso note that for $A=0$ the constant current $I$ becomes arbitrary $\\mathcal P_I$.\n\nWe introduce \\(\\bar{R} \\equiv \\frac{R}{R_0}\\) and \\(\\bar{Z} \\equiv\\frac{Z}{R_0}\\)\nand thus represent a general solution to Equation~\\eqref{eq:GSEdimless} as~\\cite{Cerfon2010}\n\\begin{subequations}\n\\label{eq:solovev}\n\\begin{align}\n \\psi_p (R,Z) &= \\mathcal P_{\\psi} R_0 \\left[ A\\left( \\frac{1}{2} \\bar{R}^2 \\ln{\\bar{R}}\n   - \\frac{1}{8}\\bar{R}^4\\right)+ \\frac{1}{8}\\bar{R}^4\n   + \\sum_{i=1}^{12} c_{i}  \\bar{\\psi}_{pi}\\right],\\\\\n   I(\\psi_p) &= \\mathcal P_I\\sqrt{ - 2A\\frac{\\psi_p}{R_0\\mathcal P_{\\psi}} +1},\n\\end{align}\n\\end{subequations}\nwith $\\mathcal P_\\psi$ a free constant, $\\mathcal P_I = \\pm \\mathcal P_\\psi$ for $A\\neq 0$ and $\\mathcal P_I$ arbitrary for $A=0$ (purely toroidal equilibrium current).\nWe have\n\\begin{align}\n    p(\\psi_p) = \\mathcal P_\\psi \\frac{( A-1)\\psi_p}{\\beta R_0 } \\qquad\n    j_{\\hat\\varphi} = \\frac{\\mathcal P_\\psi}{\\beta } \\left[\\frac{(A-1)R}{R_0^2} - \\frac{A}{R}\\right]\n\\end{align}\n\\rowcolors{2}{gray!25}{white}\n\\begin{longtable}{>{\\RaggedRight}p{7cm}>{\\RaggedRight}p{7cm}}\n\\toprule\n  $\\bar{\\psi}_{p1}=1$\n  & $\\bar{\\psi}_{p7}=8\\bar{Z}^6 -140 \\bar{R}^2 \\bar{Z}^4\n                      + 75 \\bar{R}^4 \\bar{Z}^2 - 15\\bar{R}^6\\ln{\\bar{R}}+ 180 \\bar{R}^4 \\bar{Z}^2 \\ln{\\bar{R}} \\\n                       -120 \\bar{R}^2 \\bar{Z}^4 \\ln{\\bar{R}}$\\\\\n%\n  $\\bar{\\psi}_{p2}=\\bar{R}^2$ &\n  $\\bar{\\psi}_{p8}=\\bar{Z}$ \\\\\n%\n  $\\bar{\\psi}_{p3}=\\bar{Z}^2 - \\bar{R}^2 \\ln{\\bar{R}}$ &\n  $\\bar{\\psi}_{p9}=\\bar{Z}  \\bar{R}^2$\\\\\n%\n  $\\bar{\\psi}_{p4}=\\bar{R}^4 -4\\bar{R}^2\\bar{Z}^2$ &\n  $\\bar{\\psi}_{p10}=\\bar{Z}^3 - 3 \\bar{Z} \\bar{R}^2 \\ln{\\bar{R}}$\\\\\n  %\n  $\\bar{\\psi}_{p5}=2\\bar{Z}^4 - 9 \\bar{R}^2\\bar{Z}^2 + \\\n                     3 \\bar{R}^4 \\ln{\\bar{R}} \\\n                    -12  \\bar{R}^2\\bar{Z}^2 \\ln{\\bar{R}}$\n  &\n$\\bar{\\psi}_{p11}=3 \\bar{Z}\\bar{R}^4 - 4\\bar{Z}^3\\bar{R}^2$\\\\\n%\n  $\\bar{\\psi}_{p6}=\\bar{R}^6 -12 \\bar{R}^4 \\bar{Z}^2\n                     + 8  \\bar{R}^2 \\bar{Z}^4$ &\n  $\\bar{\\psi}_{p12}= 8 \\bar{Z}^5 -45 \\bar{Z} \\bar{R}^4 - \\\n                       80 \\bar{Z}^3 \\bar{R}^2\\ln{\\bar{R}} \\\n                       +60 \\bar{Z} \\bar{R}^4\\ln{\\bar{R}}$ \\\\\n   & \\\\\n\\bottomrule\n\\end{longtable}\nThe choice of the coefficients \\(c_{i}\\) and \\(A\\) determines the actual form\nof the magnetic field.\nEq.~\\eqref{eq:solovev} can for example represent single and asymmetric double X-point configurations, force-free states,\nfield reversed configurations and low and high beta tokamak equilibria.\n$R_0$ appears as an artificial scaling factor\n(note here that a change in $\\rho_s$ changes $R_0$ but not the form or size of\nthe dimensional equilibrium magnetic field).\nThe scaling factors $\\mathcal P_\\psi$ and $\\mathcal P_I$ are mainly introduced to maximize the flexibility e.g. 
Since Eq.~\eqref{eq:solovev} is given analytically we can numerically evaluate $\psi_p$ and $I$
and all their derivatives
at arbitrary points to machine precision, which is simple to implement and fast to execute.
This translates to an exact representation of the magnetic field and related quantities like the curvature operators in code.

Note that
\begin{align}
    B^R&=B_R = R_0\psi_Z/R \\
    B^Z&=B_Z = - R_0\psi_R/R \\
    B^\varphi &= B_\varphi/R^2 = R_0I/R^2
\end{align}
(contra- and covariant components of $\vec B$).
By construction we have $\partial_\varphi B = 0$ with
\begin{align}
  B = \frac{R_0}{R}\sqrt{ {I^2 + |\vn \psi_p|^2}}.
    \label{}
\end{align}
Furthermore, we have
\begin{align}
  \npar f(R,Z) = \frac{R_0}{RB}[f,\psi_p]_{RZ}\Rightarrow \npar \ln B = \frac{R_0}{RB^2}\left[B, \psi_p\right]_{RZ} = -\vec\nc\bhat.
\end{align}
We allow various simplifications to the curvature operators
for the Solov'ev equilibrium.

%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Toroidal (and negative toroidal) field line approximation}\label{sec:torfieldlineapprox}
The toroidal/negative toroidal field line approximation applies \(\bhat\approx \pm \ehat_\varphi\) to all perpendicular operators
(e.g.: Poisson bracket, perpendicular elliptic operator and curvature operators)
but retains the full expression for the magnetic field unit vector \(\bhat\)
for parallel operators (\(\npar\) and \(\Delta_\parallel\)).
(Note that we allow the negative sign $-\ehat_\varphi$ to enable a sign reversal of the magnetic field, see Section~\ref{sec:field_reversal}.)
In cylindrical coordinates that is
\begin{align}
[f,g]_\perp \equiv [f,g]_{RZ} &= \pm\frac{1}{R} \left(\partial_R f\partial_Z g - \partial_Z f\partial_R g\right) \\
\np f &= \partial_R f \ehat_R + \partial_Z f \ehat_Z \\
\Delta_\perp f &= \frac{1}{R}\partial_R \left( R \partial_R f\right) + \partial_Z(\partial_Z f)
\label{}
\end{align}
The curl of $\bhat$ reduces to
%\begin{align}
 $\vn\times\bhat \approx -  \frac{\pm 1}{R} \ehat_Z$.
%end{align}
This simplifies the curvature operators to:
\begin{align}
\vec{\mathcal{K}}_{{\vn\times\bhat}}  &\approx  -  \frac{\pm 1}{B R} \ehat_Z , &
\vec{ \mathcal{K} }_{\vn  B}  &\approx  -\frac{\pm 1}{B^2}\frac{\partial B}{\partial Z}\ehat_R +\frac{\pm 1}{B^2} \frac{\partial B}{\partial R}\ehat_Z &
%\ehat_\varphi \times \vn B, &
\vec{ \mathcal{K} } &\approx \vec{ \mathcal{K} }_{\vn  B}  +\vec{ \mathcal{K} }_{{\vn\times\bhat}} ,
%\\
%\mathcal{K}_{{\vn\times\bhat}}(f)   &\approx  -  \frac{1}{B R} \frac{\partial f}{\partial Z},&
%\mathcal{K}_{\vn  B} (f)  &= \frac{1}{B} \left[\ln B, f \right]_{RZ},&
%\mathcal{K} (f) &\approx\frac{1}{B} \left[\ln B, f \right]_{RZ}-  \frac{1}{B R} \frac{\partial f}{\partial Z} ,
\end{align}
and
\begin{align}
 \nc \vec{\mathcal{K}}_{{\vn\times\bhat}} &\approx \frac{\pm 1}{R B^2} \frac{\partial B}{\partial Z},
\end{align}
which results in a vanishing divergence of the curvature operators \( \nc \vec{ \mathcal{K} } = 0\).

Note that in an actual toroidal field we have
\begin{align}
  \vec B(R) := \pm \frac{R_0}{R} \ehat_\varphi
  \label{}
\end{align}
We then have $\bhat = \pm\ehat_\varphi$ and the curvature operators further
simplify
to
\begin{align}
  \vec{ \mathcal K_{\vn\times\bhat}} = \vec{ \mathcal K_{\vn B}} = -\frac{\pm 1}{R_0} \ehat_Z =
\vec{ \mathcal K}/2\\
  \nc\vec{\mathcal K_{{\vn\times\bhat}}}=
    \npar \ln B = 0
    \label{}
\end{align}

\subsubsection{Low beta approximation}\label{sec:lowbetaapprox}
In this approximation we apply the toroidal field line approximation
as in Section
\ref{sec:torfieldlineapprox}
but approximate the curvature operator $\mathcal K_{\vn\times\bhat} \approx \bhat\times\vec \kappa$
  with
  $\vec \kappa := \bhat \cn\bhat = -\bhat \times( \vn\times \bhat)$.
For an isotropic pressure plasma \(\vec{P} = \vec{I} P_\perp + \vec{b} \vec{b} P_\Delta \approx \vec{I} P_\perp\) and with the definition of the plasma beta parameter
\(\beta = \frac{P}{B^2/(2 \mu_0) } \)
we can rewrite the curvature to
\begin{align}
    \vec{\kappa} &\approx \frac{\beta}{2} \vn \ln(P) +\np \ln{B} .
\end{align}
In low beta plasmas \(\beta\ll1\) the curvature reduces to:
\begin{align}
    \vec{\kappa} & \approx \np \ln{B} .
\end{align}
This simplifies the curvature operators to:
\begin{align}
\vec{\mathcal{K}}_{{\vn\times\bhat}} \approx
\vec{ \mathcal{K} }_{\vn  B}  &\approx  -\frac{1}{B^2}\frac{\partial B}{\partial Z}\ehat_R +\frac{1}{B^2} \frac{\partial B}{\partial R}\ehat_Z &
\mathcal{K} (f) &\approx 2\mathcal{K}_{\vn  B} (f) , &
    \vn\times\bhat \cdot \vec{\mathcal{K}}_{\vn  B} &= 0.
\end{align}
The divergence of the curvature \( \nc \vec{ \mathcal{K} }\) vanishes only if \( \nc \vec{ \mathcal{K}}_{\vn  B}   = 0\); in general it vanishes only approximately, \( \nc \vec{ \mathcal{K} } \approx 0\).
\subsubsection{True perpendicular terms}

Without any approximations we have
\begin{align}
b^R = {\frac{\partial \psi}{\partial Z}}\left(I^2+|\vn\psi|^2\right)^{-1/2} \quad
b^Z = -{\frac{\partial \psi}{\partial R}}\left(I^2+|\vn\psi|^2\right)^{-1/2} \quad 
b^\varphi = \frac{I}{R}\left(I^2+|\vn\psi|^2\right)^{-1/2} \\
\vec\nc\bhat = -\npar \ln B = -\frac{R_0}{R B^2}[B,\psi_p]_{RZ} \\
\left({\vn\times\bhat}\right) \cdot\bhat =
    (I'(\vn\psi_p)^2 - I \Delta_\perp^* \psi_p)\frac{ R_0^2}{R^2B^2} \propto 1/R_0
\label{}
\end{align}
where for the last
estimate we inserted the Grad-Shafranov equation and the Solov'ev assumptions.
We can then insert $\bhat$ into the exact definitions for $[.,.]_\perp$, $\np$ and $\Delta_\perp$ from Section~\ref{sec:magnetic}.

For the curvature terms we can explicitly write
\begin{align}
K_{\vn B}^R &= -\frac{R_0 I}{B^3R}\frac{\partial B}{\partial Z} \equiv -\frac{1}{B^2}\frac{\partial B}{\partial Z}b^\varphi \\
K_{\vn B}^Z &= \frac{R_0 I}{B^3R}\frac{\partial B}{\partial R}\equiv \frac{1}{B^2}\frac{\partial B}{\partial R}b^\varphi \\
K_{\vn B}^\varphi &= \frac{R_0}{B^3R^2}\left(
      \frac{\partial \psi}{\partial Z} \frac{\partial B}{\partial Z}
    + \frac{\partial \psi}{\partial R}\frac{\partial B}{\partial R}\right)
%\equiv \frac{1}{B^2R}\left(\bhat^R \frac{\partial B}{\partial Z} - \bhat^Z \frac{\partial B}{\partial R}\right)\quad %contravariant phi component
\label{}
\end{align}
and
\begin{align}
K_{\vn\times\bhat}^R &= \frac{R_0 }{RB^3}\left( B\frac{\partial I}{\partial Z} -I\frac{\partial B}{\partial Z}\right) \\
K_{\vn\times\bhat}^Z &= \frac{R_0 }{RB^3} \left( I\frac{\partial B}{\partial R} - B\frac{\partial I}{\partial R} \right)\\
K_{\vn\times\bhat}^\varphi &= \frac{R_0}{R^2B^2}\left(
+ \frac{1}{B}\frac{\partial\psi}{\partial Z} \frac{\partial B}{\partial Z}
+ \frac{1}{B}\frac{\partial \psi}{\partial R}\frac{\partial B}{\partial R}
-R\frac{\partial}{\partial R}\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right) 
- \frac{\partial^2 \psi}{\partial Z^2}
\right) \\
\vec\nc\vec{\mathcal K_{\vn\times\bhat}} &= -\vec\nc\vec{\mathcal K_{\vn B}}=
    -\vec{\mathcal K_{\vn\times\bhat}}\cn\ln B = \frac{R_0}{RB^3}[I,B]_{RZ}
%contravariant phi component
\label{}
\end{align}
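
As a quick consistency check of the $\bhat$ components given above (a minimal sketch with ad hoc model values, not \textsc{Feltor} code), one can verify the unit length $g_{ij}b^ib^j=1$ with the cylindrical metric:
\begin{verbatim}
import numpy as np

R = 1.6                           # arbitrary test point
psi_R, psi_Z, I = 0.3, -0.1, 0.9  # model values for the derivatives
norm = np.sqrt(I**2 + psi_R**2 + psi_Z**2)  # of psi_p and for I

b = np.array([psi_Z/norm, -psi_R/norm, I/(R*norm)])  # b^R, b^Z, b^phi
g = np.diag([1.0, 1.0, R**2])     # covariant cylindrical metric
print(b @ g @ b)                  # = 1.0
\end{verbatim}
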
\subsection{Flux surface averaging, convolution, partial flux surface average and safety factor}
\subsubsection{Preliminary}
Recall that the {\bf Dirac delta-function} has the property (in any dimension):
\begin{align} \label{eq:dirac_delta}
\int_V f(\vec x) \delta(h(\vec x) - h') \dV = \int_{h=h'} \frac{f(\vec x)}{|\vn h|} \dA
\end{align}
which means that the delta-function can be used to express area integrals over the
submanifold given as a contour of the function $h(\vec x)$.
A numerically tractable approximation to the delta-function reads
\begin{align}\label{eq:delta}
\delta(h(\vec x)-h') = \frac{1}{\sqrt{2\pi}\,\epsilon}
\exp\left( - \frac{\left(h(\vec x)-h'\right)^2}{2\epsilon^2}\right)
\end{align}
where $\epsilon$ is a small, free parameter.
In the DG framework the left-hand side
of Eq.~\eqref{eq:dirac_delta} can thus readily be computed
via Gauss-Legendre quadrature, which we propose as a first method to compute area
integrals even if our coordinate system is not aligned to the area.
Note: in order for this to work the delta function needs to be numerically
resolved and cannot be made arbitrarily small.
This introduces a smoothing effect
over neighboring contour lines which is given by the grid distance.

Furthermore, recall the {\bf co-area formula}
\begin{align} \label{eq:coarea}
\int_{\Omega_0} f(\vec x) \dV =
\int_0^{h_0} \left( \int_{h=h'} \frac{f(\vec x)}{|\vn h|}  \dA  \right) \d h'
\end{align}
where $\Omega_0$ is the volume enclosed by the contour $h=h_0$.
The co-area formula can be viewed as a change of variables in the
volume integral.

We define the {\bf toroidal average} of a function $f(R,Z,\varphi)$ as
\begin{align} \label{eq:phi_average}
\PA{ f}(R,Z) := \frac{1}{2\pi}\oint f(R,Z,\varphi)\d \varphi
\end{align}

In arbitrary coordinates the area integral is defined by the pull back
of the flux 2-form and the metric
\begin{align}
\label{}
\dA^2 = i_{\hat \psi_p} \mathrm{vol}^3 \quad \hat \psi_p = \frac{\vn \psi_p}{|\vn \psi_p|}
\end{align}
to a parameterization of the flux-surface.
In a flux-aligned coordinate system $\{\zeta, \eta, \varphi\}$ the pull-back is trivial ($\zeta=\mathrm{const}$) and we have
\begin{align}
\dA &= \sqrt{g^{\zeta\zeta}} \sqrt{g} \d\eta\d\varphi = f_0|\vn\psi_p|\sqrt{g}\d\eta\d\varphi,
\\
\vec\dA &:= \hat\psi_p \dA = f_0 (\vn\psi_p) \sqrt{g}\d\eta\d\varphi,\quad
\label{}
\end{align}
where we used that $g^{\zeta\zeta} = (\vn\zeta)^2 = f_0^2(\vn\psi_p)^2$.
Notice that numerically we can integrate in flux-aligned coordinates by generating a corresponding
grid and pulling back (interpolating) the relevant fields to this grid. This is the second method
to numerically compute area integrals.
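
As an illustration of the first (delta-function) method, the following minimal sketch (a plain Riemann sum on a Cartesian grid stands in for Gauss-Legendre quadrature; flat two-dimensional geometry for transparency, so $\dV = \d x\,\d y$) computes the left-hand side of Eq.~\eqref{eq:dirac_delta} for $h=x^2+y^2$ and $f=1$, where the exact answer is $\oint \dA/|\vn h| = 2\pi/2 = \pi$:
\begin{verbatim}
import numpy as np

N, L = 400, 2.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
dV = (x[1] - x[0])**2

h, hp = X**2 + Y**2, 1.0   # integrate over the contour h = 1
eps = 0.1                  # must stay resolved by the grid

delta = np.exp(-(h - hp)**2/(2*eps**2))/(np.sqrt(2*np.pi)*eps)
print(np.sum(delta*dV), np.pi)   # ~3.1416
\end{verbatim}
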
\subsubsection{Flux surface average}


The flux surface average (as a {\bf volume average} after \cite{haeseleer}) is defined as an average over a
small volume - a shell centered around the flux-surface - defined by two neighboring flux-surfaces.
With the help of the volume
flux label (notice that both the volume $v$ as well as the poloidal flux $\psi_p$ have physical
meaning while the coordinate $\zeta(\psi_p)$ is an arbitrary choice) we define
\begin{align} \label{eq:fsa_vol}
v(\psi_p) :=& \int_{\psi_{p,\min}}^{\psi_p} \dV = \int^{\zeta(\psi_p)} \sqrt{g}\d\zeta\d\eta\d\varphi,
\\
\frac{\d v}{\d\psi_p} =& \int\dA |\vn\psi_p|^{-1} = 2\pi f_0\oint_{\zeta(\psi_p)} \sqrt{g}\d\eta \\
\RA{ f }_\psi :=& \frac{\partial}{\partial v} \int \dV f
 = \frac{1}{\int \dA |\vn\psi_p|^{-1} } \int_{\psi_p} \frac{f(\vec x)}{|\vn\psi_p|} \dA \nonumber\\
=& \frac{\int_\Omega \PA{ f}(R,Z) \delta(\psi_p(R,Z)-\psi_{p})H(Z-Z_X)\ R \d R \d Z}
{\int_\Omega \delta(\psi_p(R,Z)-\psi_{p})H(Z-Z_X)\ R \d R \d Z}\nonumber\\
 =& \left(\frac{\d v}{\d\psi_p }\right)^{-1} 2\pi f_0 \oint_0^{2\pi} \PA{ f}(\zeta,\eta) \sqrt{g}\d\eta
 = \frac{1}{\oint \sqrt{g}\d\eta } \oint_0^{2\pi} \PA{ f}(\zeta,\eta) \sqrt{g}\d\eta
\end{align}
where we used the co-area formula Eq.~\eqref{eq:coarea} for the second identity
and we use the Heaviside function $H(Z-Z_X)$ to cut away contributions from below the X-point
in our domain $\Omega$.
 We immediately see that this definition is particularly easy to compute
 in a flux-aligned coordinate system. Notice however that the volume element
 does appear (unlike e.g.
Tokam3X papers).
 We use our grid construction algorithm with constant monitor metric described in Reference~\cite{Wiesenberger2018} to construct a flux-aligned grid and interpolate
 the values of any function onto its grid points.
 Even though this grid is unusable for simulations due to the diverging metric at the X-point, the
 evaluation of integrals works well as the singularity is integrable.

The flux-surface average fulfills the basic identities
\begin{align}
\label{eq:fsa_identities}
\RA{ \mu f + \lambda g} &= \mu\RA{ f} + \lambda \RA{ g} \\
\RA{ f(\psi_p)} &= f(\psi_p)
\end{align}

The volume average is well-suited for density-like quantities
as we can see with the following identity.
Assume we have a quantity $X$ with $\partial_t X + \nc \vec j_X = \Lambda_X$.
Then we can use the volume average to write
\begin{align}
\frac{\partial}{\partial t} \RA{X } + \frac{\partial}{
  \partial v} \RA{ \vec j_X\cn v}  = \RA{ \Lambda_X}
\label{eq:fsa_balance}
\end{align}
where again $v=v(\psi_p)$ is the volume flux label.
The {\bf total flux} of a given flux density $\vec j_X$ through the
flux surface $\psi_p = \psi_{p0}$ is given by
\begin{align}
\RA{\vec j_X\cn v} &:= J_X=\oint_{\psi_p=\psi_{p0}} \vec j_X\cdot \vec{\dA} =
 \frac{\d v}{\d\psi_p} \RA{ \vec j_X\cn\psi_p }\\
 &=
   2\pi f_0 \oint_0^{2\pi} \PA{ \vec j_X\cn\psi_p}(\zeta,\eta) \sqrt{g}\d\eta
%2\pi\int_\Omega \vec \PA{ \vec j\cn\psi_p} \delta(\psi_p(R,Z)-\psi_{p0}) H(Z-Z_X)\ R \d R \d Z
\label{eq:total_flux}
\end{align}
Once we have the flux-surface averaged equation we can easily get the volume integrated version (again with the help of the co-area formula)
\begin{align}
\frac{\partial}{\partial t} \int_0^{v(\psi_p)}\RA{X} \d v 
+ \RA{ \vec j_X\cn v}(v(\psi_p))  = \int_0^{v(\psi_p)}\RA{ \Lambda_X}\d v
\label{eq:integral_balance}
\end{align}

\subsubsection{Convolution}
The goal of the convolution is to reduce the fluctuations that the discretization into a finite number of toroidal planes introduces into toroidally averaged quantities.

We define the convoluted function as:

\begin{align}\label{eq:convolution_def}
    \RA{f}_\eta (\zeta, \eta)=\oint_0^{2\pi} \frac{f(\zeta,\eta')\sqrt{g}(\zeta,\eta')}{\oint_0^{2\pi}\sqrt{g}(\zeta,\eta'') W(\eta'-\eta'')\d\eta''}\,W(\eta-\eta')\, \d\eta'
    =\left(\frac{ f\sqrt{g}}{\sqrt{g}*W }*W\right)(\zeta, \eta)
\end{align}

with the window function $W$ defined as:

\begin{align}
    W(\eta)= \begin{cases}
     1 &\text{ for } -r/2 < \eta < r/2 \\
     0 &\text{ else}
    \end{cases}
\end{align}

Here $r$ is the size of the poloidal averaging window; the convolution centres this window at $\eta$.

The convolution $*$ is defined as:

\begin{align}
    (f*g)(x)=\oint f(y)g(x-y)\d y
\end{align}
With this definition $W$ does not need to be normalized.
The following identity holds due to the commutativity of the convolution and the symmetry of the window $W$:

\begin{align}
    \oint \d \eta f(\eta) [h * W](\eta) = \oint \d \eta h(\eta) [f*W](\eta)
\end{align}

for any functions $f$ and $h$.
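
The identity is easily verified numerically; a minimal sketch on a uniform periodic grid (ad hoc test functions; circular convolution via FFT):
\begin{verbatim}
import numpy as np

N = 256
eta = np.linspace(0, 2*np.pi, N, endpoint=False)
deta = 2*np.pi/N
r = np.pi/4                        # poloidal window size
W = (np.minimum(eta, 2*np.pi - eta) < r/2).astype(float)

def conv(a, b):                    # (a*b)(eta), periodic on [0, 2 pi)
    return np.real(np.fft.ifft(np.fft.fft(a)*np.fft.fft(b)))*deta

f = 1 + 0.5*np.cos(eta) + 0.2*np.sin(3*eta)
h = np.exp(np.sin(eta))
print(np.sum(f*conv(h, W))*deta,   # both sides agree to
      np.sum(h*conv(f, W))*deta)   # machine precision
\end{verbatim}
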
With this identity we can prove that the total flux surface average of the convoluted function is equal to that of the original function:

\begin{align}
    \RA {\RA{f}_\eta}_\psi = \frac{\oint \RA{f}_\eta\sqrt{g}(\zeta,\eta)\d\eta}{\oint \sqrt{g}(\zeta,\eta)\d\eta}
    = \frac{1}{\oint \sqrt{g}(\zeta,\eta)\d\eta}\oint \left[\frac{ f\sqrt{g}}{\sqrt{g}*W } *W\right]\sqrt{g}(\zeta,\eta)\d\eta
    \nonumber\\
     = \frac{1}{\oint \sqrt{g}(\zeta,\eta)\d\eta}\oint \left[\sqrt{g}*W\right]\frac{ f\sqrt{g}}{\sqrt{g}*W }\d\eta = \frac{1}{\oint \sqrt{g}(\zeta,\eta)\d\eta}\oint f\sqrt{g}\d\eta = \RA{f}_\psi
\end{align}

Note: $f$ is usually the toroidal average defined in Eq.~\eqref{eq:phi_average}.
\subsubsection{Partial flux surface average}
The goal of the partial flux surface average is to quantify the magnitude of different quantities in different poloidal sections of the torus. For this we define the partial flux surface average as follows:

\begin{align}
\RA{f}_{part \eta}=\frac{2\pi}{r}\frac{\int_{\eta-r/2}^{\eta+r/2}f(\zeta,\eta')\sqrt{g}(\zeta,\eta') \d\eta'}{\oint_0^{2\pi}\sqrt{g}(\zeta,\eta')\d\eta'}=\frac{2\pi}{r}\frac{\oint_{0}^{2\pi}f(\zeta,\eta')\sqrt{g}(\zeta,\eta')W(\eta-\eta') \d\eta'}{\oint_0^{2\pi}\sqrt{g}(\zeta,\eta')\d\eta'}=\frac{2\pi}{r}\frac{\left[(f\sqrt{g})*W\right](\zeta,\eta)}{\oint_0^{2\pi}\sqrt{g}(\zeta,\eta)\d\eta}
\end{align}

Here, $r$ is the size of the window of the poloidal section under analysis. The definition is such that averaging the partial averages of all sections of size $r$ that fit into the full $2\pi$ poloidal circumference recovers the total flux surface average. As an example, if we choose $r=\pi/2$, i.e.\ 90 degrees, we obtain four sections with
\begin{align}
\RA{f}_\psi=\frac{\RA{f}_{part 0}+\RA{f}_{part \pi/2}+\RA{f}_{part \pi}+\RA{f}_{part 3\pi/2}}{4}
\end{align}
where the subindex denotes the centre of the respective section.


\subsubsection{The safety factor}
Assume that we pick an arbitrary field line and follow it (integrate it) for exactly one
poloidal turn. The {\bf safety factor} is defined as the ratio of
the resulting toroidal angle ($\Delta\varphi$) to the poloidal angle ($2\pi$)
\begin{align}
q := \frac{\Delta\varphi}{2\pi}
\label{}
\end{align}
Since our magnetic field is symmetric in $\varphi$ and we used one
full poloidal turn this definition is independent of which
fieldline we pick on a given flux surface.

%We define the poloidal length $s$ as the fieldline following
%parameter i.e. $\vec B\cn s \equiv B_p = R_0|\vn \psi_p|/R$
%and $\d\varphi/\d s = B^\varphi(R(s), Z(s)) / B_p(R(s),Z(s))$.
%We can then express the safety factor as the line integral
%\begin{align}
%q=\frac{1}{2\pi}\oint \frac{B^\varphi}{B_p} \d s = \frac{1}{2\pi}\oint_{\psi_p=\psi_{p0}}\frac{I(\psi_p)}{R|\vn\psi_p|} \d s
%= \frac{1}{2\pi}\int \frac{I(\psi_p)}{R}\delta(\psi_p-\psi_{p0}) H(Z-Z_X) \d R\d Z
%\end{align}
%where we made use of Eq.~\eqref{eq:dirac_delta} in two dimensions in the
%last equality and thus arrive at a numerical tractable expression
%to evaluate the safety factor.
We define the geometric poloidal angle $\Theta$ as the fieldline following
parameter, i.e. $\vec B\cn\Theta = R_0\left(\psi_R (R-R_0) + \psi_Z Z\right)/(r^2R)$ with $r^2 = (R-R_0)^2 + Z^2$.
We can then directly integrate the safety factor as
\begin{align}\label{eq:safety_factor}
\frac{\d R}{\d\Theta} = \frac{B^R}{B^\Theta}\quad
\frac{\d Z}{\d\Theta} = \frac{B^Z}{B^\Theta}\quad
\frac{\d \varphi}{\d\Theta} = \frac{B^\varphi}{B^\Theta}\\
q\equiv\frac{1}{2\pi}\oint \frac{B^\varphi}{B^\Theta} \d\Theta
\end{align}
We integrate this equation with the help of one of our ODE integrators, i.e. we use a high-order Runge-Kutta method
and refine the stepsize until machine-precision is reached.
Notice that the safety factor diverges on the last closed flux
surface whereas Eq.~\eqref{eq:total_flux}
remains finite due to the $\vn\psi$ factor.
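
The following minimal sketch integrates Eqs.~\eqref{eq:safety_factor} with an off-the-shelf Runge-Kutta integrator for a model field with circular flux surfaces, $\psi_p = \left((R-R_0)^2+Z^2\right)/2$ and constant $I=I_0$ (ad hoc values, not the Solov'ev field of Section~\ref{sec:solovev}), for which $q = I_0/\sqrt{R_0^2-r^2}$ is known analytically:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

R0, I0, r = 10.0, 5.0, 1.0      # model parameters

def rhs(theta, y):              # y = (R, Z, phi)
    R, Z, phi = y
    psi_R, psi_Z = R - R0, Z    # derivatives of the model psi_p
    r2 = (R - R0)**2 + Z**2
    bR, bZ, bphi = R0*psi_Z/R, -R0*psi_R/R, R0*I0/R**2
    btheta = R0*(psi_R*(R - R0) + psi_Z*Z)/(r2*R)
    return [bR/btheta, bZ/btheta, bphi/btheta]

sol = solve_ivp(rhs, [0.0, 2*np.pi], [R0 + r, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)
q = sol.y[2, -1]/(2*np.pi)      # q = Delta phi / (2 pi)
print(q, I0/np.sqrt(R0**2 - r**2))  # 0.50252... twice
\end{verbatim}
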
$\\vec B\\cn\\Theta = R_0(\\psi_R (R-R_0) + \\psi_Z Z)/r^2R$.\nWe can then directly integrate the safety factor as\n\\begin{align}\\label{eq:safety_factor}\n\\frac{\\d R}{\\d\\Theta} = \\frac{B^R}{B^\\Theta}\\quad\n\\frac{\\d Z}{\\d\\Theta} = \\frac{B^Z}{B^\\Theta}\\quad\n\\frac{\\d \\varphi}{\\d\\Theta} = \\frac{B^\\varphi}{B^\\Theta}\\\\\nq\\equiv\\frac{1}{2\\pi}\\oint \\frac{B^\\varphi}{B^\\Theta} \\d\\Theta\n\\end{align}\nWe integrate this equation with the help of one of our ODE integrators, i.e. we use a high-order Runge-Kutta method\nand refine the stepsize until machine-precision is reached.\nNotice that the safety factor diverges on the last closed flux\nsurface whereas Eq.~\\eqref{eq:total_flux}\nremains finite due to the $\\vn\\psi$ factor.\n\\subsection{Alternative flux labels}\nWe find the toroidal flux $\\psi_t$ by integrating the q-profile $\\psi_t = \\int^{\\psi_p} \\d\\psi_p q(\\psi_p)$. Since $q$ diverges, $\\psi_t$, in contrast to $\\psi_p$,\nis not defined outside the last closed flux-surface (but has a finite value on the last closed flux surface). We now define the normalized poloidal and toroidal flux labels $\\rho_p$ and $\\rho_t$\n\\begin{align}\n    \\rho_p&:= \\sqrt{1-\\frac{\\psi_p}{\\psi_{p,\\min}}} \\ \\leftrightarrow\\ \\psi_p = (1-\\rho_p^2)\\psi_{p,\\min} \\\\\n    \\rho_t&:= \\sqrt{\\frac{\\psi_t}{\\psi_{t,\\mathrm{sep}}}},\\\\\n    \\text{In general }\\psi_{p,\\min} &= \\psi_p(R_O, Z_O) \\neq\\psi_{p}(R_0,0)\n\\end{align}\nwhere $R_O$, $Z_O$ are the coordinates of the O-point.\nThese labels are useful because\nequidistant $\\rho_p$ and $\\rho_t$ values tend to translate to equidistant flux-surfaces\nin configuration space.\n\n\\subsection{ Modified $\\psi_p$}\nOur computational domain is a box and in particular not aligned with the\nmagnetic flux surfaces. This means that particularly in the corners of\nthe domain the field lines inside the domain are very short (in the\nsense that the distance between the entry point and leave point is short).\nIt turns out that this behaviour is numerically disadvantageous (may\nblow up the simulation in the worst case) in the\ncomputation of parallel derivatives. In order to remedy this situation\nwe propose to modify the flux surfaces $\\psi_p$ to a constant value\nif $\\psi_p$ exceeds a certain critical value. 
In this way the poloidal
field component vanishes in the corners of the domain at the cost
of introducing a strong shear layer limiting the scrape-off layer width.

We define an approximation to the step function with a transition layer of radius $a$
around the origin
\begin{align}
\Theta_a(x) := \begin{cases}
    0 & \text{ for } x \leq -a  \\
    \frac{1}{32 a^7}  \left(16 a^3-29 a^2 x+20 a x^2-5 x^3\right) (a+x)^4
    &\text{ for } -a<x\leq a \\
    1 & \text{ for } x > a
\end{cases}
    \approx H(x)
\label{eq:approx_heaviside}
\end{align}
where $H(x)$ is the Heaviside step function.
An integral of this function is
\begin{align}
\theta_a(x) := \begin{cases}
    0 &\text{ for } x \leq -a \\
    \frac{1}{256 a^7} \left(35 a^3-47 a^2 x+25 a x^2-5 x^3\right) (a+x)^5
     &\text{ for } -a<x\leq a \\
x &\text{ for } x > a
\end{cases}
    \approx x H(x)
\end{align}
Note that $\Theta_a(0) = 0.5$ and $\theta_a(0) = 35a/256$.

We now use
\begin{align}
    -\theta_{\alpha/2}\left(\psi_{p,b} + \frac{\alpha}{2} - \psi \right)+\psi_{p,b}+\frac{\alpha}{2} \approx (\psi- \psi_{p,b})H(\psi_{p,b}-\psi) + \psi_{p,b}
\label{eq:modified_psip}
\end{align}
instead of $\psi_p$ for the computation of the
magnetic field, which introduces a shear layer between $\psi_{p,b}$ and $\psi_{p,b}+\alpha$ where the
fieldlines are straightened to match $\ehat_\varphi$.
In order to simplify the setup of this region we give $\psi_{p,b}$ and $\alpha$ in terms of
$\rho_p$ and $\alpha_p$ via $\psi_{p,b} = (1-\rho_{p,b}^2)\psi_{p,\min}$ and $\alpha = -(2\rho_{p,b} \alpha_p + \alpha_p^2)\psi_{p,\min}$. In case we change the sign
of $\psi_p$ via $\mathcal P_\psi$ we need to adapt Eq.~\eqref{eq:modified_psip}
accordingly.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The model} \label{sec:model}
\subsection{Conservative form}
%MW: don't we have a momentum source in the form we currently give?
We scale all spatial lengths by $\rho_s = \sqrt{T_e m_i}/(eB_0)$ and time by the ion gyro-frequency $\Omega_0 = eB_0/m_i$.
The magnetic field is scaled with $B_0$, densities with $n_0$ and the parallel velocity is scaled with $c_s = \sqrt{T_e/m_i}$.
The potential is scaled with $\hat \phi = T_e/e$ and the vector potential with
\\frac{\\partial A_\\parallel}{\\partial t}\\right)\n    - \\eta n_e^2(U_i-u_e) + \\mu N\\Lambda_U + \\mu N S_U\n\\label{}\n\\end{align}\nwith\n\\begin{align}\n\\vec u_E := \\frac{\\bhat\\times\\vn\\psi}{B},\\quad\n\\vec u_{K} := \\tau \\left(\\vec{\\mathcal K_{\\vn B}} + \\vec{\\mathcal K_{\\vn\\times\\bhat}}\\right)=\\tau\\vec{\\mathcal K}  ,\\nonumber\\\\\n\\vec u_C := \\mu U^2\\vec{\\mathcal K_{\\vn\\times\\bhat}},\\quad\n\\vec u_{\\vn\\times\\bhat} := \\tau\\vec{\\mathcal K_{\\vn\\times\\bhat}},\\quad\n{\\vec b}_\\perp = \\frac{\\vn\\times A_\\parallel \\bhat}{B}.\n\\label{}\n\\end{align}\n\nThe electric potential \\(\\phi\\) and parallel magnetic vector potential \\(A_\\parallel\\) are\ncomputed by the polarisation and induction equations (with $q_e=-e$ and $q_i=+e$)\n\\begin{align}\n -\\nc\\left(\\frac{\\mu_iN_i}{B^2} \\np \\phi\\right) &=  \\Gamma_{1,i} N_i -n_e, \\quad \\Gamma_{1,i}^{-1} := 1-\\frac{1}{2}\\mu_i\\tau_i\\Delta_\\perp , \\\\\n  -\\frac{1}{\\beta} \\Delta_\\perp A_\\parallel &= \\left(N_i U_i-n_e u_e \\right)\n  \\label{eq:polarisation_dimensional}\n\\end{align}\nGiven $\\phi$ we define the generalised electric potential\n\\begin{align}\n    \\psi_e := \\phi,\\quad \\psi_i&:= \\Gamma_{1,i} \\phi - \\frac{\\mu_i }{2}\\left(\\frac{\\np\\phi}{B}\\right)^2\n\\end{align}\nIn total\nwe have an isothermal 3d gyro-fluid model with up to 2nd order FLR effects\non in the electric potential $\\phi$ and 0th order FLR effects in the parallel magnetic\npotential $A_\\parallel$.\nWe have the continuity equation for the electron density \\(n_e\\) and the ion gyro-centre\ndensity \\(N_i\\) and the momentum conservation equation for\nthe parallel electron velocity \\(u_e\\) and the parallel ion gyro-centre velocity \\(U_i\\)~\\cite{WiesenbergerPhD, HeldPhD}.\n\\subsection{ Scale invariance}\n\\subsubsection{Sign reversals of the magnetic field}\\label{sec:field_reversal}\nIf we change the direction of the magnetic field vector $\\bhat$, we immediately see that all perpendicular\ndrifts and $U\\bhat$ change directions. On the other side, the diffusive and resistive terms remain unchanged.\nWithout resistivity and diffusion a change in direction of the magnetic field thus corresponds to\na time reversal $t\\rightarrow t'=-t$.\n\nAlso note that changing the sign of the magnetic field only in the parallel derivatives $\\npar \\rightarrow -\\npar$ does not\nhave any effect. This can be seen by simply renormalizing $U'=-U$. This reverts the equations back to the original equations.\nIn the code this situation can happen when you change the sign of $\\bhat$ using $\\mathcal P_\\psi$  and $\\mathcal P_I$\nand use the toroidal field approximation at the same time, because $\\ehat_\\varphi$\ndoes not change sign automatically.\nTo actually change the sign of the magnetic field in code you need to use the \"toroidal\nnegative\" value for the \\textit{curvmode} input parameter, which reverts $\\ehat_\\varphi$ in the other direction.\n\\subsubsection{Scaling of density}\nIf $N, U, \\phi, A_\\parallel$ are a solution to the model equations\nthen so are $N'=\\alpha N$, $U'=U$, $\\phi'=\\phi$ and $A_\\parallel'=A_\\parallel$ with the changed parameters $S_N' = \\alpha S_N$, $\\eta' = \\eta/\\alpha$ and $ \\beta' = \\beta/\\alpha$. 
\subsection{Diffusive terms}\label{sec:dissres}
With
$\eta_\parallel := \frac{0.51 m_e \nu_{ei}}{n_e e^2}$ and $\nu_{ei} = \sqrt{2} z^2 e^4 \ln \Lambda/ (12\pi^{3/2} \sqrt{m_e} \epsilon_0^2) n_e /T_e^{3/2}$ we define
\begin{align}
  \eta:=\frac{en_0\eta_\parallel}{B_0} = 8.45\cdot 10^{-5}\ln \Lambda \left(\frac{n_0}{10^{19}\text{m}^{-3}}\right) \left(\frac{T_e}{\text{eV}}\right)^{-3/2} \left(\frac{B_0}{\text{T}}\right)^{-1},
    \label{eq:resistivity}
\end{align}
with $\ln \Lambda \approx 10$.
 The approximate Spitzer current \(J_{\parallel,s}:= n_e \left(U_i - u_e\right)\)
 determines the parallel resistive terms to $R_\parallel:= n_e\eta J_{\parallel,s}$.

The dissipative terms can be decomposed into perpendicular and parallel components
\begin{align}
 \Lambda_{n_e} &= \Lambda_{n_e,\perp}+\Lambda_{n_e,\parallel}, &
 \Lambda_{N_i} &= \Lambda_{N_i,\perp}+\Lambda_{N_i,\parallel},\\
 \Lambda_{u_e} &= \Lambda_{u_e,\perp}+\Lambda_{u_e,\parallel},&
 \Lambda_{U_i} &= \Lambda_{U_i,\perp}+\Lambda_{U_i,\parallel}.
\end{align}
For numerical stabilisation we choose:
\begin{align}
\Lambda_{n_e,\parallel} &= \nu_\parallel \Delta_\parallel n_e &
\Lambda_{N_i,\parallel} &= \nu_\parallel \Delta_\parallel N_i \\
\Lambda_{u_e,\parallel} &= \nu_\parallel \Delta_\parallel u_e &
\Lambda_{U_i,\parallel} &= \nu_\parallel \Delta_\parallel U_i
\end{align}
Similarly, for the perpendicular dissipation we apply viscous or hyperviscous terms.
\begin{align}\label{eq:perpdiffNT}
 \Lambda_{n_e,\perp} &=  \nu_\perp \Delta_\perp n_e \text{ or } -\nu_\perp \Delta_\perp^2 n_e&
 \Lambda_{N_i,\perp} &=  \nu_\perp \Delta_\perp N_i \text{ or } -\nu_\perp \Delta_\perp^2 N_i & \\
 \Lambda_{u_e,\perp} &=  \nu_\perp \Delta_\perp u_e \text{ or } -\nu_\perp \Delta_\perp^2 u_e &
 \Lambda_{U_i,\perp} &=  \nu_\perp \Delta_\perp U_i \text{ or } -\nu_\perp \Delta_\perp^2 U_i
\end{align}
Here the mass diffusion coefficient coincides with the viscous coefficient, hence we fixed the Schmidt number \(\mathit{Sc}_\parallel:= \frac{\nu_U}{\nu_N}\) to unity.
The corresponding drift-fluid diffusion gives an order-of-magnitude estimate for $\nu_\perp$.
With $\nu_{ii0} = z^4e^4\ln \Lambda/ (12\pi^{3/2} \sqrt{m_i} \epsilon_0^2) n_{i0} /T_{i0}^{3/2}$ we choose $D_i = \rho_i^2 \nu_{ii}$ and $T_{i0} = T_{e0}$.
By dividing by $\rho_s^2 \Omega_{ci}$ we arrive at $\nu_\perp = m_i \nu_{ii0}/eB_0$.
Together with neoclassical corrections we then have
\begin{align}
\nu_\perp =
5\cdot 10^{-3} \left(1+\frac{R_0}{a}q_{95}\right) \ln \Lambda
\left(\frac{n_0}{10^{19}\text{m}^{-3}}\right)
\left(\frac{T_e}{\text{eV}}\right)^{-3/2}
\left(\frac{B_0}{\text{T}}\right)^{-1}
\left(\frac{m_i}{m_H}\right)^{1/2},
\end{align}
where $R_0$ is the major radius, $a$ the minor radius and $q_{95}$ the safety factor~\cite{Madsen2016}.

\subsection{Boundary and initial conditions}
We define the simulation box as
$[ R_{\min}, R_{\max}]\times [Z_{\min}, Z_{\max}] \times [0,2\pi]$,
where we define
\begin{align} \label{eq:box}
    R_{\min}&=R_0-\varepsilon_{R-}a\quad
    &&R_{\max}=R_0+\varepsilon_{R+}a\nonumber\\
    Z_{\min}&=-\varepsilon_{Z-}ae\quad
&&Z_{\\max}=\\varepsilon_{Z+}ae\n\\end{align}\nwhere $a$ is the minor radius, $e$ is the elongation of the flux surfaces and\nthe $\\varepsilon$ are free parameters to be specified by the user.\n\nWe choose boundary conditions separately on input for the variables\n$n_e$, $u_e$ and $\\phi$. The boundary condition for $N_i$, $U_i$ and\n$\\psi$ are equal to $n_e$, $u_e$ and $\\phi$ respectively.\nWe choose $A_\\parallel$ to have equal boundary conditions as $u_e$ and $U_i$.\nThis will later enable us to treat the sum of $U$ and $A_\\parallel$\nin the same way as $U$.\nTypically,\n\\begin{align}\nn_e = n_0, \\quad u_e = \\phi = 0\n\\text{ or } \\hat n \\cn n_e = \\hat n \\cn u_e = 0\n\\end{align}\nwhere $\\hat n$ is the normal vector to the boundary.\n\nWe initialize the parallel velocity to zero\n\\begin{align}\n  u_e(R,Z,\\varphi,0) = U_i(R,Z,\\varphi,0) = 0\n  \\label{}\n\\end{align}\nwhich in turn initializes $A_\\parallel = 0$\nand initialize the electron density with\n\\begin{align} \\label{eq:initial_ne}\n    n_e(R,Z,\\varphi, 0)= n_{prof}(R,Z) + \\tilde n(R,Z,\\varphi)\n\\end{align}\nconsisting of a toroidally symmetric background profile $n_{\\text{prof}}(R,Z)$ and a perturbation\n$\\tilde n(R,Z,\\varphi)$, which breaks the toroidal symmetry.\nNote that we should take care to intitialize a smooth profile with ideally well-defined $\\Delta^2_\\perp n_e$.\n\nWe define a flux-aligned density profile as\n\\begin{align} \\label{eq:density_profile}\n  n_{\\text{prof}}(R,Z)=\n  n_0 + \\triangle n_{peak}\\frac{\\psi_p(R,Z) }{\\psi_{p,\\min}}\\Theta_{\\alpha_p/2}\\left(-\\rho_p(R, Z)-\\frac{\\alpha_p}{2}\\right) H(Z-Z_X)\n\\end{align}\nThe second Heaviside is multiplied only if the equilibrium $\\psi_p$ has an\nX-point and avoids a profile in the private flux region. The factor $\\alpha_p$ provides a smooth transition\nzone that avoids numerical oscillations.\n\n\nWe have two possibilities to initialize the ion density\n\\begin{align} \\label{eq:initphi}\n  N_i = \\Gamma_{1,i}^{-1} n_e \\text{ or } N_i = \\Gamma_{1,i}n_e\\approx \\left(1+\\frac{1}{2}\\tau_i\\mu_i\\Delta_\\perp\\right)n_e\n\\end{align}\nIn the first case the potential $\\phi= 0$ while in the second case\nthe $E\\times B$ and ion diamagnetic vorticity coincide $\\Delta_\\perp N_i \\propto \\Delta_\\perp \\phi$ in the long-wavelength limit.\nNote that $\\alpha$ must not be too small to avoid $N_i < 0$.\nWe can choose between several initial conditions for $\\tilde n$:\n\n\\subsubsection{Blob and Straight blob}\nWe initialize a blob in the R-Z plane\n\\begin{align} \\label{eq:initial_blob}\n  \\tilde n_{blob}(R,Z,0) = \\triangle n \\exp\\left( -\\frac{(R - R_0 - p_x a)^2 + (Z-p_ya)^2}{\\sigma^2} \\right)\n\\end{align}\nThen, we use fieldline integration modulated by\n\\begin{align}\n  m_{blob}(s) = \\exp\\left( -\\frac{s^2 }{\\pi^2\\sigma_z^2} \\right)\n\\end{align}\nto transform this blob to all other poloidal\nplanes.\nWe either follow fieldlines around the torus several times (``blob'') or only once\n(``straight blob'').\n\\subsubsection{Turbulent bath}\nWe can initialize the R-Z plane with a turbulent bath with a certain amplitude $A$.\nThis especially has the goal to destabilize the edge region right inside the\nlast closed flux surface. 
Notice that the core region is rather stable\nand quickly damps away fluctuations.\nAgain, we transform this to all poloidal planes along the magnetic field lines and multiply the bath with\n\\begin{align} \\label{eq:initial_turbulent}\n    \\tilde n_e(R,Z,\\varphi) = \\tilde n_{\\text{bath}}(R,Z,\\varphi)\\Theta_{\\alpha_p/2}(-\\rho_p(R, Z)-\\alpha_p/2) H(Z-Z_X)\n\\end{align}\n\\subsubsection{Zonal flows}\nWe can initialize the R-Z plane with zonal flows of amplitude $A$ and\nwavelength $k_\\psi$ aligned with the magnetic flux surfaces.\n\\begin{align} \\label{eq:initial_zonal_flow}\n    \\tilde n_{\\text{zonal}}(R,Z) &= A \\sin (2\\pi k_\\psi \\psi_p(R,Z)) \\nonumber\\\\\n\\tilde n_e(R,Z,\\varphi) &= \\tilde n_{\\text{zonal}}(R,Z)\\Theta_{\\alpha_p}\\left(-\\rho_p(R, Z)-\\frac{\\alpha_p}{2}\\right) H(Z-Z_X)\n\\end{align}\n\\subsubsection{Turbulence on Gaussian profile}\nInstead of the flux-aligned profile we can also choose a toroidally symmetric Gaussian profile\n\\begin{align} \\label{eq:profile_blob}\n  n_{prof}(R,Z) = n_0 + \\triangle n_{peak} \\exp\\left( -\\frac{(R - R_0 - p_x a)^2 + (Z-p_ya)^2}{\\sigma^2} \\right)\n\\end{align}\non top of which we can add the turbulent bath $\\tilde n_{\\text{bath}}$ and finally dampen it by\n\\begin{align}\\label{eq:turbulence_on_gaussian}\n    n_e(R,Z,\\varphi,0) = (n_{prof}(R,Z) + \\tilde n_{\\text{bath}})\\Theta_{\\alpha_p/2}\\left( 1- \\sqrt{(R-R_0)^2 + Z^2}/a\\right)\n\\end{align}\n\n\\subsection{Sinks and sources} \\label{sec:sources}\nWe can choose the source terms $S_N$ to either force a profile\n$n_{\\text{prof}}$ or provide a constant influx of particles in the\ncore of our domain, where our model does not apply.\nWe thus define a particle sink/source for electrons as\n\\begin{align} \\label{eq:electron_source}\n  S_{n_e}(R,Z,\\varphi, t) &= \\omega_s \\begin{cases}\n      (n_{prof}(R,Z) - n_e(R,Z,\\varphi, t))\\Theta_{\\alpha_p/2}\\left( \\rho_{p,s} - \\frac{\\alpha_p}{2} - \\rho_p(R,Z) \\right ) H(Z-Z_X) \\quad \\text{ forced}\\\\\n    S_{prof}(R,Z)\\quad \\text{ influx}\n    \\end{cases}\n\\end{align}\nwhere $\\omega_s$ is the source strength parameter. 
The shift of $\\Theta$ is chosen\nsuch that the source vanishes exactly outside $\\psi_{p,s}$.\nThe forced source will result in exponential adaption of the core\ndensity profile of the form $n_e \\propto n_{prof}+(n_{prof}-n_{e,0})e^{-\\omega_st}$.\n\nWe can choose the constant influx\n\\begin{align} \\label{eq:electron_source_influx}\n    S_{prof}(R,Z) &= \\Theta_{\\alpha_p/2}\\left( \\rho_{p,s} - \\frac{\\alpha_p}{2} - \\rho_p(R,Z) \\right) H(Z-Z_X)\n\\end{align}\nor a Torpex inspired source profile\n\\begin{align} \\label{eq:electron_source_torpex}\n  S_{prof}(R,Z) &=\n  \\begin{cases}\n    \\exp\\left( - \\frac{(R-R_0)^2}{a^2 }- \\frac{(Z-Z_0)^2}{b^2}\\right) \\text{ if} R > R_0 \\\\\n    \\frac{1}{2}\\exp\\left( - \\frac{(R-R_0)^2}{a^2} -2c(R-R_0)(Z-Z_0)- \\frac{(Z-Z_0)^2}{b^2} \\right) \\\\\n  +\\frac{1}{2}\\exp\\left( - \\frac{(R-R_0)^2}{a^2} +2c(R-R_0)(Z-Z_0)- \\frac{(Z-Z_0)^2}{b^2} \\right) \\text{ else}\n              \\end{cases}\n\\end{align}\nwith $a=0.0335$m, $b=0.05$m, $c=565m^{-2}$, $R_0=0.98$m and $Z_0=-0.02$m.\n\n\nIn order to not generate potential with the source term the\nion source needs to fulfill $S_{n_e} = \\Gamma_{1,i}S_{N_i} + \\nc\\left( \\frac{\\mu_i S_{N_i}}{B^2}\\np \\phi\\right)$ which in the long wavelength limit can be inverted to (the long wavelength limit should be well-fulfilled for a realistic source term since the amplitude is typically quite small)\n\\begin{align}\n    S_{N_i} = \\left(1-\\frac{1}{2}\\mu_i \\tau_i \\Delta_\\perp\\right) S_{n_e} -\\nc\\left( \\frac{\\mu_i S_{n_e}}{B^2}\\np \\phi\\right)\n  \\label{eq:ion_source}\n\\end{align}\nNote that the additional terms besides $S_{n_e}$ are total divergences which means\nthey do not change the volume integrated \"total\" particle number created by the source.\nNote that $S_{n_e}$ needs to be smooth\nso that $\\np^2 S_{n_e}$ is well defined.\nAlso note that with our definition of $\\Lambda_{n_e}$ and $\\Lambda_{N_i}$ and\nthe polarisation equation we have $\\Lambda_{n_e} = \\Gamma_{1,i}\\Lambda_{N_i} + \\nc\\left( \\frac{\\mu_i \\Lambda_{N_i}}{B^2}\\np \\phi\\right)$ in the long wavelength limit (swap the operators).\nThis means that diffusion does not generate potential either.\n\nNow, our idea is to dampen the density and velocity in the region defined by the\nmagnetic field straightening.\nThe idea for the terms $S_U$ is mainly to provide more numerical stability\nin the corner regions of the domain, where the parallel derivative may lead\nto unfavourable numerical instabilities.\nFor both electrons and ions we choose\n\\begin{subequations} \\label{eq:velocity_source}\n\\begin{align}\n    S^d_{n_e}(R,Z,\\varphi,t) &:= -\\omega_d (n_e-1)\\Theta_{\\alpha_p/2}\\left(\\rho_p(R,Z) - \\rho_{p,b} - \\frac{\\alpha_p}{2}\\right)\\\\\n    S^d_{N_i}(R,Z,\\varphi,t) &= \\left(1-\\frac{1}{2}\\mu_i \\tau_i \\Delta_\\perp\\right) S^d_{n_e} -\\nc\\left( \\frac{\\mu_i S^d_{n_e}}{B^2}\\np \\phi\\right)\\\\\nS^d_U(R,Z,\\varphi, t)& := -\\omega_d U \\Theta_{\\alpha_p/2}\\left(  \\rho_p(R,Z) - \\rho_{p,b} - \\frac{\\alpha_p}{2} \\right)\n\\end{align}\n\\end{subequations}\n\n\\subsection{Implemented form}\nThe form that we implement avoids derivatives on the product of\ntwo functions for which we have no boundary conditions\n\\begin{subequations}\n    \\begin{align}\n    \\frac{\\partial}{\\partial t} N =&\n        - \\frac{1}{B}[\\psi, N]_{\\perp}%\\nonumber\\\\\n        - \\bar \\npar \\left( NU\\right)\n        - NU\\left(\\vec \\nc\\bhat+\\vec \\nc{\\vec b}_\\perp\\right)\n        - \\tau \\mathcal K(N) 
\\nonumber \\\\&\n        - N \\mathcal K(\\psi)\n        -\\mu \\mathcal K_{\\vn\\times\\bhat}(NU^2)\n        -\\mu NU^2\\nc \\vec{ \\mathcal K_{\\vn\\times\\bhat}}\n        + \\nu_\\perp\\Delta_\\perp N + \\nu_\\parallel \\Delta_\\parallel N + S_N, \\\\\n    \\frac{\\partial}{\\partial t} W =&\n        - \\frac{1}{B}\\left[\\psi, U\\right]_{\\perp}%& \\nonumber\\\\\n        - \\frac{1}{\\mu} \\bar \\npar \\psi% \\nonumber\\\\\n        - \\frac{1}{2}\\bar \\npar U^2\n        -\\frac{\\tau}{\\mu} \\bar \\npar \\ln N\n        - U\\mathcal K_{\\vn\\times\\bhat}(\\psi)\n        - \\tau \\mathcal K(U)\n        -\\tau U\\nc\\vec{ \\mathcal K_{\\vn\\times\\bhat}}\\nonumber\\\\&\n        - \\left(2\\tau + {\\mu}U^2\\right) \\mathcal K_{\\vn\\times\\bhat} (U)\n        -2\\tau U\\mathcal K_{\\vn\\times\\bhat}(\\ln N)\n        - \\frac{\\eta}{\\mu} \\frac{n_e}{N}n_e(U_i - u_e) \\nonumber\\\\&\n        + \\nu_\\perp\\Delta_\\perp U\n        + \\nu_\\parallel \\Delta_\\parallel U\n        + S_U,\n        \\label{eq:EgyrofluidU} \\\\\n        W&:= \\left( U + \\frac{A_\\parallel}{\\mu}\\right)\n    \\end{align}\n    \\label{eq:Egyrofluid}\n\\end{subequations}\ntogether with\n$\\bar\\npar f = \\npar f + A_\\parallel \\mathcal K_{\\vn\\times\\bhat}(f) + \\frac{1}{B}[ f, A_\\parallel]_\\perp$\nand $\\nc { \\vec b}_\\perp = A_\\parallel \\vec \\nc\\vec{ \\mathcal{ K}_{\\vn\\times\\bhat}} - \\mathcal K_{\\vn B}(A_\\parallel) $\nand\n\\begin{subequations} \\label{eq:elliptic}\n  \\begin{align}\n    -\\nc\\left( \\frac{N_i}{B^2}\\np \\phi \\right) &= \\Gamma_{1,i} N_i - n_e, \\quad\\quad\n    \\Gamma_{1,i}^{-1} = 1-\\frac{1}{2}\\tau_i\\mu_i \\Delta_\\perp \\\\\n    \\psi_e = \\phi, \\quad \\psi_i &= \\Gamma_{1,i}\\phi -\\frac{\\mu_i}{2}\\frac{(\\np\\phi)^2}{B^2} \\\\\n    \\left(\\frac{\\beta}{\\mu_i}N_i - \\frac{\\beta}{\\mu_e}n_e-\\Delta_\\perp\\right)\n    A_\\parallel &= \\beta\\left(N_iW_i-n_e w_e\\right)\n  \\end{align}\n\\end{subequations}\nNote that the negative signs make the operators in Eqs.~\\eqref{eq:elliptic} positive definite.\n\nIn the output file we have\n\\begin{longtable}{llll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation} & \\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    electrons &$n_e$ &\n    ions &$N_i$ \\\\\n    Ue &$u_e$ &\n    Ui &$U_i$ \\\\\n    potential &$\\phi$ &\n    psi &$\\psi$ \\\\\n    induction &$A_\\parallel$ & \\\\\n\\bottomrule\n\\end{longtable}\n\\subsection{Conservation laws} \\label{sec:conservation}\n\\subsubsection{Mass conservation}\nThe density equation directly yields the particle conservation\n\\begin{align} \\label{eq:mass_theorem}\n  \\frac{\\partial}{\\partial t} n_e\n  + \\nc\\vec{ j_{n_e}}\n  =  \\Lambda_{n_e}+S_{n_e}\n\\end{align}\nThe terms of the particle conservation thus read\n\\begin{align}\n  n_e= & n_e,\\\\\n  \\vec j_{n_e} =& n_e\\left(\n  \\vec u_E + \\vec u_C + \\vec u_{K} +u_e\\left(\\bhat+{\\vec b}_\\perp\\right)  \\right) \\nonumber\\\\\n  =& n_e \\left(\\frac{\\bhat\\times \\vn\\phi}{B} \n  + \\tau_e \\frac{\\bhat\\times\\vn n_e}{n_eB} \n  + \\mu_e u_e^2\\vec K_{\\vn\\times\\bhat} \n  + u_e(\\bhat + {\\vec b}_\\perp) \\right), \\label{eq:particle_flux} \\\\\n  \\Lambda_{n_e} =&\n  \\nu_\\perp\\Delta_\\perp n_e + \\nu_\\parallel\\Delta_\\parallel n_e\n\\\\\n  S_{n_e} =&  S_{n_e}\n\\end{align}\nNotice that\n\\begin{align}\nn_e \\vec K = n_e\\vn\\times\\frac{\\bhat}{B} = \\vn\\times n_e\\frac{\\bhat}{B} + \\frac{\\bhat\\times\\vn n_e}{B}\n\\label{}\n\\end{align}\nsuch that we can define the diamagnetic flux in the 
particle flux since
the curl vanishes under the divergence.

Here we also derive the particle flux \eqref{eq:particle_flux} through a flux surface
\begin{align} \label{eq:radial_particle_flux}
 \vec j_{N}\cn v %=& N\left( \vec u_E + \vec u_C + \vec u_{\vn
 %B} + U \left(\bhat + {\vec b}_\perp\right)\right) \cn \psi_p \nonumber\\
 =&
  \frac{\d v}{\d \psi_p} N\left[\frac{1}{B}[\psi, \psi_p]_\perp + \left(\tau + \mu U^2\right)
   \mathcal K_{\vn\times\bhat}(\psi_p) + \tau  \mathcal K_{\vn B}(\psi_p) \right] \nonumber\\
 &+ NU\frac{\d v}{\d \psi_p}\left [\left( A_\parallel \mathcal
 K_{\vn\times\bhat}(\psi_p) + \frac{1}{B}[\psi_p, A_\parallel]_\perp\right) \right]
\end{align}

The relevant terms in the output file are
\begin{longtable}{llll}
\toprule
\rowcolor{gray!50}\textbf{Name} &  \textbf{Equation} & \textbf{Name} &  \textbf{Equation}\\
\midrule
    electrons & $n_e$ &
    jsne\_tt &$ n_e (\vec u_E + \vec u_K + \vec u_C )\cn \psi_p$ \\
    jsneA\_tt &$ n_e u_e \vec{ b}_\perp  \cn \psi_p$ &
    jsneE\_tt & $ n_e \vec u_E\cn\psi_p$ \\
    lneperp\_tt &$ \Lambda_{\perp,n_e} = \nu_\perp \Delta_\perp n_e$ or $-\nu_\perp \Delta^2_\perp n_e$ &
    lneparallel\_tt &$ \Lambda_{\parallel,n_e} = \nu_\parallel \Delta_\parallel n_e$ \\
    sne\_tt & $S_{n_e}$ & \\
\bottomrule
\end{longtable}



\subsubsection{Energy theorem}
The terms of the energy theorem are
\begin{align} \label{eq:energy_theorem}
\partial_t \mathcal E +
\nc \vec j_{\mathcal E}
= \Lambda_{\mathcal E}
+  S_{\mathcal E}
+  R_{\mathcal E}
\end{align}
with $z_e=-1$ and $z_i=+1$ and $\vec u_E := {\bhat\times \vn\phi}/{B}$
\begin{align} \label{eq:energy_conservation}
  \mathcal{E}= & z_e\tau_e n_e \ln{(n_e)} +z_i\tau_i N_i\ln{(N_i)}
  +\frac{1}{2\beta}\left(\np A_\parallel\right)^2
   +  \frac{1}{2} z_i \mu_i N_i u_E^2  \nonumber\\
   & +\frac{1}{2} z_e\mu_e  n_e u_e^2
  +\frac{1}{2} z_i\mu_i  N_i U_i^2,\\
  \vec j_{\mathcal E} =& \sum_s z\left[
  \left(\tau \ln N + \frac{1}{2}\mu U^2 + \psi \right)N\left(
  \vec u_E + \vec u_C + \vec u_{K} +U\left(\bhat+{\vec b}_\perp\right)  \right) \right]
  \nonumber\\
  &+ \sum_s z\left[\mu \tau NU^2\vec K_{\vn\times\bhat} + \tau NU \left(\bhat + {\vec b}_\perp\right)\right], \\
  \Lambda_{\mathcal E} =&  \sum_s z\left[\left( \tau\left( 1+\ln{N}\right) + \psi + \frac{1}{2} \mu U^2 \right)
  \left(\nu_\perp\Delta_\perp N + \nu_\parallel\Delta_\parallel N\right)  +  \mu NU\left(\nu_\perp\Delta_\perp U + \nu_\parallel\Delta_\parallel U\right) \right]
\nonumber \\
  S_{\mathcal E} =&  \sum_s  z\left[ \left(\tau\left( 1+\ln{N}\right) +\psi + \frac{1}{2} \mu U^2 \right)S_{N}\right]
\nonumber \\
  R_{\mathcal E} =&  -\eta_\parallel  \left[ n_e(U_i-u_e)\right]^2.
\end{align}
where in the energy flux $\vec j_{\mathcal E}$
we neglect terms containing time derivatives
of the electric and magnetic potentials and we sum over all species.
The energy density $\mathcal E$ consists of the Helmholtz free energy density for electrons and ions,
the \(\vec{E} \times \vec{B}\) energy density, the parallel energy densities for electrons and ions and the perturbed magnetic field energy density.
In \(\Lambda\) we insert the dissipative terms of Section~\ref{sec:dissres}. 
\\\\\nReplace $\\Delta_\\perp$ with $-\\Delta_\\perp^2$ when hyperviscous diffusion is chosen\nfor the diffusion terms in the above equations.\n\nWe have the energy flux through a flux surface\n\\begin{align}\n \\vec j_{\\mathcal E}\\cn v =&%\\frac{\\d v}{\\d \\psi_p} \\vec j_{\\mathcal E}\\cn \\psi_p  =\n\\frac{\\d v}{\\d \\psi_p}\\sum_s z\\left (\\tau\\ln N + \\frac{1}{2}\\mu U^2 + \\psi\\right) \\vec j_N\\cn\\psi_p\n+ z \\mu\\tau NU^2 \\mathcal K_{\\vn\\times\\bhat}(\\psi_p) \\nonumber\\\\\n&+ z \\tau NU\n \\left( A_\\parallel \\mathcal\n K_{\\vn\\times\\bhat}(\\psi_p) + \\frac{1}{B}[\\psi_p, A_\\parallel]_\\perp\\right)\n\\label{eq:energy_flux}\n\\end{align}\nThe relevant terms in the output file are\n\\begin{longtable}{ll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    nelnne &$ z_e\\tau_e n_e \\ln n_e$ \\\\\n    nilnni &$ z_i\\tau_i N_i \\ln N_i$ \\\\\n    aperp2 &$ (\\np A_\\parallel)^2/2/\\beta$ \\\\\n    ue2   &$z_i\\mu_i N_i u_E^2 /2$ \\\\\n    neue2 &$ z_e\\mu_e n_e u_e^2/2$ \\\\\n    niui2 &$ z_i\\mu_i N_i U_i^2/2$ \\\\\n    see\\_tt & $z_e(\\tau_e (1+\\ln n_e) + \\phi + \\frac{1}{2}\\mu_e u_e^2) S_{n_e} $ \\\\\n    sei\\_tt & $z_i(\\tau_i (1+\\ln N_i) + \\psi + \\frac{1}{2}\\mu_i U_i^2) S_{N_i} $ \\\\\n    resistivity\\_tt &-$\\eta_\\parallel n_e^2 (U_i-u_e)^2$ \\\\\n    jsee\\_tt &$z_e(\\tau_e \\ln n_e + \\mu_e u_e^2/2 + \\phi)n_e(\\vec u_E + \\vec u_C + \\vec u_K)\\cn \\psi_p\n        + z_e \\tau_e n_e u_e^2 \\vec K_{\\vn\\times\\bhat}\\cn \\psi_p$ \\\\\n    jsei\\_tt &$z_i(\\tau_i \\ln N_i + \\mu_i U_i^2/2 + \\psi_i)N_i(\\vec u_E^i + \\vec u_C + \\vec u_K)\\cn \\psi_p\n        + z_i \\tau_i N_i U_i^2 \\vec K_{\\vn\\times\\bhat}\\cn \\psi_p$ \\\\\n    jseea\\_tt &$z_e(\\tau_e \\ln n_e + \\mu_e u_e^2 + \\phi)n_e \\vec { b}_\\perp\\cn \\psi_p\n        + z_e \\tau_e n_e u_e \\vec{ b}_\\perp \\cn \\psi_p $ \\\\\n    jseia\\_tt &$z_i(\\tau_i \\ln N_i + \\mu_i U_i^2 + \\psi_i)N_i \\vec { b}_\\perp\\cn \\psi_p\n        + z_i \\tau_i N_i U_i \\vec{ b}_\\perp \\cn \\psi_p $ \\\\\n    leeperp\\_tt &$z_e(\\tau_e(1+\\ln n_e) + \\phi + \\mu_eu_e^2/2) \\nu_\\perp \\Delta_\\perp n_e + z_e\\mu_e n_e u_e \\nu_\\perp \\Delta_\\perp u_e$ \\\\\n    leiperp\\_tt &$z_i(\\tau_i(1+\\ln N_i) + \\psi_i + \\mu_iU_i^2/2) \\nu_\\perp \\Delta_\\perp N_i + z_i\\mu_i N_i U_i \\nu_\\perp \\Delta_\\perp U_i$ \\\\\n    leeparallel\\_tt &$z_e(\\tau_e(1+\\ln n_e) + \\phi + \\mu_eu_e^2/2) \\nu_\\parallel \\Delta_\\parallel n_e + z_e\\mu_e n_e u_e \\nu_\\parallel \\Delta_\\parallel u_e$ \\\\\n    leiparallel\\_tt &$z_i(\\tau_i(1+\\ln N_i) + \\psi_i + \\mu_iU_i^2/2) \\nu_\\parallel \\Delta_\\parallel N_i + z_i\\mu_i N_i U_i \\nu_\\parallel \\Delta_\\parallel U_i$ \\\\\n\\bottomrule\n\\end{longtable}\n\n\\subsubsection{Vorticity and toroidal ExB angular momentum equation} \\label{sec:vorticity_eq}\nWe write the vorticity equation (in the LWL)\n\\begin{align}\n    &\\partial_t \\RA{\\Omega} + \\frac{\\partial}{\\partial v}\\frac{\\d v}{\\d\\psi_p}\\RA{\\vec j_\\Omega\\cn\\psi_p} = -\\RA{F_{L,\\varphi}} + \\RA{\\mathcal S_\\Omega} \\label{eq:vorticity_average} \\\\\n\\Omega &:= \\mu_i N_i \\left(\\frac{\\vn\\psi_p\\cn\\phi}{B^2} + \\tau_i \\vn\\ln N_i\\cn\\psi_p\\right) \\equiv \\mu_i N_i(u_{E,\\varphi} + u_{D,\\varphi}) \\\\\n\\vec j_{\\Omega} &:= \\Omega \\vec u_E\n    - \\left(\\frac{1}{\\beta} \\vn\\psi_p \\cn A_\\parallel +\\frac{1}{2}\\tau_i \\vn\\psi_p\\cn  (N_iU_i)\\right)\\frac{\\bhat\\times\\vn A_\\parallel}{B} \\\\\n    F_{L,\\varphi} &:=  -(z_e \\tau_e n_e + 
z_i\\tau_i N_i)\\mathcal K(\\psi_p) - (z_e\\mu_e n_eu_e^2 + z_i\\mu_i N_iU_i^2)\\mathcal K_{\\vn\\times\\bhat}(\\psi_p) \\\\\n    \\mathcal S_\\Omega &:= \\mu_i S_{n_e} \\frac{\\vn\\psi_p\\cn \\phi}{B^2} + \\mu_i\\tau_i\\vn\\psi_p\\cn S_{n_e} \\label{eq:em_source}\n\\end{align}\nEquation~\\eqref{eq:vorticity_average} can be rewritten by inserting the continuity equation to yield an equation only for the \\ExB velocity\n\\begin{align}\n&\\partial_t \\RA{\\Omega_E} + \\frac{\\partial}{\\partial v} \\frac{\\d v}{\\d \\psi_p}\\RA{ \\vec j_{\\Omega_E}\\cn\\psi_p} = -\\RA{F_{L,\\varphi}}+ \\RA{\\mathcal S_{\\Omega_E}} + \\RA{\\Lambda_{\\Omega_E}} \\label{eq:exb_average} \\\\\n\\Omega_E &:= \\mu_i N_i \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\equiv \\mu_i N_i u_{E,\\varphi} \\\\\n\\vec j_{\\Omega_E} &:= \\Omega_E (\\vec u_E + \\vec u_D)\n    - \\frac{1}{\\beta} \\vn A_\\parallel\\cn\\psi_p \\left(\\frac{\\bhat\\times\\vn A_\\parallel}{B} +\\frac{1}{2} \\bhat \\times \\vn \\tau_i N_iU_i\\right) \\\\\n    \\mathcal S_{\\Omega_E} &:= \\mu_i S_{n_e} \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\quad\n    \\Lambda_{\\Omega_E} := \\mu_i \\Lambda_{n_e}\\frac{\\vn\\psi_p\\cn\\phi}{B^2}\n\\end{align}\nwhere here we also monitor the source and diffusion terms.\nIn the output file we have\n\\begin{longtable}{llll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation}&\n\\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    oexbe &$\\mu_i n_e \\frac{\\vn\\psi_p\\cn\\phi}{B^2}$ &\n    oexbi &$\\mu_i N_i \\frac{\\vn\\psi_p\\cn\\phi}{B^2}$ \\\\\n    odiae &$\\mu_i \\tau_i\\vn\\psi_p\\cn n_e$ &\n    odiai &$\\mu_i \\tau_i\\vn\\psi_p\\cn N_i$ \\\\\n    jsoexbi\\_tt &$\\mu_i N_i \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\frac{\\bhat\\times\\vn\\phi\\cn \\psi_p}{B}$ &\n    jsoexbe\\_tt &$\\mu_i n_e \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\frac{\\bhat\\times\\vn\\phi\\cn \\psi_p}{B}$ \\\\\n    jsodiaiUE\\_tt &$\\mu_i \\tau_i\\vn\\psi_p\\cn N_i \\frac{\\bhat\\times\\vn\\phi\\cn \\psi_p}{B}$ &\n    jsodiaeUE\\_tt &$\\mu_i \\tau_i\\vn\\psi_p\\cn n_e \\frac{\\bhat\\times\\vn\\phi\\cn \\psi_p}{B}$ \\\\\n    jsoexbiUD\\_tt &$\\mu_i\\tau_i \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\frac{\\bhat\\times\\vn N_i\\cn \\psi_p}{B}$ &\n    jsoexbeUD\\_tt &$\\mu_i\\tau_i \\frac{\\vn\\psi_p\\cn\\phi}{B^2} \\frac{\\bhat\\times\\vn n_e\\cn \\psi_p}{B}$ \\\\\n    jsoapar\\_tt &$ \\vn\\psi_p\\cn A_\\parallel \\frac{\\bhat\\times\\vn A_\\parallel\\cn \\psi_p}{B\\beta}$ &\n    socurve\\_tt &$z_e\\tau_e n_e \\mathcal K(\\psi_p)$ \\\\\n    socurvi\\_tt &$z_i\\tau_i N_i \\mathcal K(\\psi_p)$ &\n    socurvkappae\\_tt &$z_e\\mu_e n_eu_e^2 \\mathcal K_{\\vn\\times\\bhat}(\\psi_p)$ \\\\\n    socurvkappai\\_tt &$z_i\\mu_i N_iU_i^2 \\mathcal K_{\\vn\\times\\bhat}(\\psi_p)$ & \\\\\n    sosne\\_tt & $\\mu_i S_{n_e} \\vn\\psi_p\\cn\\phi/B^2$ &\n    sospi\\_tt & $\\mu_i \\tau_i \\vn\\psi_p \\cn S_{n_e}$\\\\\n    loexbe\\_tt & $ \\mu_i \\Lambda_{n_e} \\vn\\psi_p\\cn\\phi/B^2$ & \\\\\n\\bottomrule\n\\end{longtable}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Parallel momentum balance}\nThe flux surface average over the parallel momentum equation under species summation and the LWL yields\n\\begin{align}\n  \\frac{\\partial}{\\partial t}\\RA{\\mu_iN_iU_{i} }\n    % \\nonumber\\\\\n    + \\frac{\\partial}{\\partial v} \\frac{\\d v}{\\d\\psi_p} \\RA{\\mu_iN_iU_i \\frac{\\bhat\\times\\vn\\phi}{B}\\cn\\psi_p + \\sum_s (z_s\\tau_sN_s + \\mu_s N_sU_s^2) b_{\\perp}^{\\;v}  }\n    \\nonumber\\\\\n   = \\sum_s\\RA{-z_s\\tau_s N_s\\npar \\ln B} + \\mu_i \\RA{ S_{N_i} 
U_i}\n   \\label{eq:parallel_momentum}\n\\end{align}\nwhile the toroidal parallel angular momentum contribution reads\n\\begin{align}\\label{eq:parallel_momentum_direction}\n    \\frac{\\partial}{\\partial t}  \\RA{\\mu_iN_iU_i b_\\varphi}\n    + \\frac{\\partial}{\\partial v} \\frac{\\d v}{\\d\\psi_p} \\RA{\\mu_iN_iU_i b_\\varphi\\frac{\\bhat\\times\\vn\\phi}{B}\\cn\\psi_p + \\sum_s (z_s\\tau_s N_s + \\mu_sN_sU_s^2) b_\\varphi b_{\\perp}^{\\;v} }\n    \\nonumber\\\\\n   = \\RA{F_{L,\\varphi}} + \\mu_i \\RA{ S_{N_i} U_i b_\\varphi}\n\\end{align}\n\nThe relevant terms in the output file are (the Lorentz force term is described in the previous subsection \\ref{sec:vorticity_eq})\n\\begin{longtable}{llll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation} &\n\\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    neue &$n_e u_e$ &\n    niui &$\\mu_i N_i U_i$ \\\\\n    neuebphi &$n_eu_eb_\\varphi$ &\n    niuibphi &$\\mu_i N_iU_ib_\\varphi$ \\\\\n    jsparexbi\\_tt       & $\\mu_i N_iU_i(\\bhat\\times\\vn\\phi)\\cn \\psi_p/B$ &\n    jsparbphiexbi\\_tt   & $\\mu_i N_iU_ib_\\varphi(\\bhat\\times\\vn\\phi)\\cn \\psi_p/B$ \\\\\n    sparmirrore\\_tt & $-z_e\\tau_en_e\\npar \\ln B$ &\n    sparmirrori\\_tt & $-z_i\\tau_iN_i\\npar \\ln B$ \\\\\n    sparsni\\_tt & $\\mu_i S_{N_i} U_i$ &\n    sparsnibphi\\_tt & $\\mu_i S_{N_i} U_ib_\\varphi $ \\\\\n\\bottomrule\n\\end{longtable}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\subsubsection{Zonal flow energy}\n\\begin{align}\n    E_\\mathrm{zonal} = \\frac{1}{2}\\RA{\\rho_M}\\FA{ \\iota^{-2}\\mathcal I^{\\vartheta\\vartheta} + 2\\iota^{-1}\\mathcal I^{\\vartheta\\varphi} + \\mathcal I^{\\varphi\\varphi}} \\FA{u_{E,\\varphi}}^2\n    \\equiv \\frac{1}{2} \\RA{\\rho_M} \\FA{u_{E,\\varphi}}^2  \\FA{\\mathcal I_0}\n    \\label{eq:zonal_energy}\n\\end{align}\nFor symmetry flux coordinates we have $g_{\\vartheta\\vartheta} = R^2 (\\vn\\psi_p)^2/I^2\\iota^2$, $g_{\\varphi\\vartheta} =0$ and $g_{\\varphi\\varphi}=R^2$ and thus $\\mathcal I_0 = R^{-2}( 1 + I^2/|\\vn\\psi_p|^2)= B^2 / |\\vn\\psi_p|^2$.\n\\begin{align}\\label{eq:perp_kinetic}\n      \\frac{\\partial}{\\partial t}E_{\\mathrm{zonal}} +\\frac{\\partial}{\\partial v } \\left(E_{\\mathrm{zonal}}\\FA{u^v} \\right)\n  =&-\\FA{\\mathcal I_0}\\FA{u_{E,\\varphi}}\\left(\\frac{\\partial }{\\partial v}  \\Theta_{\\varphi}^{\\; v} + \\RA{(\\vec j_f\\times\\vec B)_\\varphi}\\right)\n  \\nonumber\\\\\n    &-\\frac{1}{2}\\FA{u_{E,\\varphi}}^2\\frac{\\partial}{\\partial v}\\left(\\RA{\\rho_M}\\FA{\\FF{\\mathcal I_0}\\FF{u^v}}\\right)\n     + \\mathcal S_{\\mathrm{zonal}}\n\\end{align}\nwhere we neglected the term $\\RA{n\\vec u\\cn \\mathcal I_0}$ in the continuity equation as small in our ordering\n and we have\n \\begin{align}\n \\mathcal S_{\\mathrm{zonal}} :=& \\FA{\\mathcal I_0} \\FA{u_{E,\\varphi}} \\mathcal S_{u_{E,\\varphi}} + \\frac{1}{2}\\FA{u_{E,\\varphi}}^2  \\RA{mS_n\\mathcal I_0}\n%  \\nonumber\\\\\n \\label{eq:zonal_source}\n \\end{align}\n and in the output file\n\\begin{longtable}{llll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation} &\n\\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    nei0 &$n_e \\mathcal I_0$ &\n    snei0\\_tt & $S_{n_e } \\mathcal I_0$ \\\\\n\\bottomrule\n\\end{longtable}\n\n\\subsection{Manufactured Solution}\nIn order to test the implementation we manufacture a solution to Eqs.~\\eqref{eq:Egyrofluid} and \\eqref{eq:elliptic} of the form\n\\begin{align*}\nn_e(R,Z,\\varphi, t) &:= 1 + 0.5\\sin(\\pi(R-R_0))\\sin(\\pi 
Z)\\sin(\\varphi)\\sin(\\pi t) \\\\\nN_i(R,Z,\\varphi, t) &:= n_e(R,Z,\\varphi,t) = \\gamma_{ N_i}  \\\\\nu_e(R,Z,\\varphi, t) &:= \\sin(2\\pi(R-R_0))\\sin(2\\pi Z)\\sin(2\\varphi)\\sin(2\\pi t)/(3\\sqrt{-\\mu_e}) \\\\\nU_i(R,Z,\\varphi, t) &:= \\sqrt{-\\mu_e}u_e(R,Z,\\varphi,t) \\\\\n\\phi(R,Z,\\varphi,t) &:= \\sin(3\\pi(R-R_0))\\sin(3\\pi Z)\\sin(3\\varphi)\\sin(3\\pi t)/5; \\\\\n\\psi(R,Z,\\varphi,t) &:= \\phi(R,Z,\\varphi, t) = \\gamma_{\\phi} \\\\\nA_\\parallel( R,Z,\\varphi,t) &:= \\beta\\sin(4\\pi(R-R_0))\\sin(4\\pi Z)\\sin(4\\varphi)\\sin(4\\pi t)/4;\n\\end{align*}\nWe choose circular flux surfaces of the form\n\\begin{align*}\n\\psi_p(R,Z) :=0.5((R-R_0)^2 + Z^2),\\quad\nI_p(R,Z):=I_0\n\\end{align*}\nwith $R_0=10$ and $I_0=20$ and a simulation box $[R_0-a,R_0+a]\\times[-a,a]\\times[0,2\\pi]$.\nWe then symbolically compute (with the help of Mathematica) source terms that we insert into the right hand side of\nthe corresponding equation in code (\\texttt{manufactured.h}) and simulate from $t=0\\ldots 10^{-3}$.\nBy comparing the numerical solution to the manufactured one we can observe the convergence of our numerical methods. Note that in order to better distinguish\nthe convergence of the DG discretized terms from that of our parallel derivative\nwe can selectively choose to only activate perpendicular terms (including the $A_\\parallel$ terms) or parallel terms (those that involve derivatives along $\\bhat$).\n\nUnfortunately, we were unable to find a closed-form solution for the energy integrals with the above fields.\n\n\\section{Numerical methods}\nWe use a discontinuous Galerkin discretization on a structured grid.\n\\rowcolors{2}{gray!25}{white} %%% Use this line in front of longtable\n\\begin{longtable}{p{3cm}p{3cm}p{8cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Term} &  \\textbf{Method} & \\textbf{Description}  \\\\ \\midrule\n    coordinate system & Cylindrical & equidistant discretization of $[R_{\\min},R_{\\max}] \\times [Z_{\\min},Z_{\\max}] \\times [0,2\\pi]$ (Eq.~\\eqref{eq:box}), equal number of Gaussian nodes in $R$ and $Z$, equidistant planes in $\\varphi$ with one Gaussian node \\\\\nAdvection terms & direct DG & DG approximation with centered flux of derivatives \\\\\nElliptic terms & local DG & The local DG approximation with centered flux \\\\\nHelmholtz and Elliptic matrix inversions & multigrid/ conjugate gradient & Use previous two solutions to extrapolate initial guess and $1/\\chi$ as preconditioner \\\\\nParallel derivatives & regular  FCI & cf.~\\cite{Held2016,Stegmeir2017}. The terms $\\npar N$ and $\\npar \\phi$ in the velocity equation use a forward difference, while the term $\\npar U$ in the\ndensity equation uses a backward difference. This avoids a too wide stencil for the divergence of the current and increases stability for low resistivity. \\\\\ntime & Multistep \"Karniadakis\" & \\\\\n\\qquad explicit & Multistep \"Karniadakis\" & $3$rd order explicit\\\\\n\\qquad implicit & Multistep \"Karniadakis\" & $2$nd order implicit, contains perp. Diffusion and Resistive terms. In every iteration of the implicit inversion we need to solve for $A_\\parallel$\\\\\n\\bottomrule\n\\end{longtable}\n\n
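To make the time discretization concrete, the following is a minimal Python sketch (not FELTOR code) of the third order explicit \"Karniadakis\" multistep update, applied to the toy problem $\\dot u = -u$; the first two steps are bootstrapped with explicit Euler:\n\\begin{verbatim}\n# Minimal sketch of the 3rd order explicit "Karniadakis" multistep,\n# u^{n+1} = (18 u^n - 9 u^{n-1} + 2 u^{n-2})/11\n#         + (6 dt/11)*(3 f^n - 3 f^{n-1} + f^{n-2}),\n# applied to the toy problem du/dt = -u (not FELTOR code).\ndef rhs(u):\n    return -u\n\ndt, steps = 1e-2, 1000\nus = [1.0]\nwhile len(us) < 3:            # bootstrap with explicit Euler\n    us.append(us[-1] + dt*rhs(us[-1]))\nfor _ in range(steps):\n    u0, u1, u2 = us[-1], us[-2], us[-3]\n    us.append((18*u0 - 9*u1 + 2*u2)/11.0\n              + (6*dt/11.0)*(3*rhs(u0) - 3*rhs(u1) + rhs(u2)))\nprint(us[-1])                 # compare to exp(-dt*(steps + 2))\n\\end{verbatim}\n\n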
\\section{Usage}\n\nCompilation:\\\\\n\\texttt{make feltor device=\\{gpu,omp\\}} Compile \\texttt{feltor.cu} (only shared memory)\\\\\n\\texttt{make feltor\\_hpc device=\\{gpu,omp\\}} Compile \\texttt{feltor\\_hpc.cu} for shared memory systems. Needs {\\it serial netcdf} \\\\\n\\texttt{make feltor\\_mpi device=\\{gpu,omp,skl,knl\\}} Compile \\texttt{feltor\\_hpc.cu} for distributed memory systems. Also needs {\\it serial netcdf}\\\\\nUsage:\\\\\n\\texttt{./feltor\\_hpc input.json geometry.json output.nc [initial.nc]} \\\\\n\\texttt{echo npx npy npz | mpirun -n np ./feltor\\_mpi input.json geometry.json output.nc [initial.nc]} \\\\\n\\texttt{./feltor input.json geometry.json } \\\\\n\nThe programs \\texttt{feltor\\_hpc.cu} and \\texttt{feltor.cu} expect two input\nfiles \\texttt{input.json} and \\texttt{geometry.json}, described in Sections~\\ref{sec:input_file} and \\ref{sec:geometry_file}.\nThe first contains the physical and numerical parameters of the model equations,\nwhile the latter describes the Solov'ev equilibrium.\nThe program \\texttt{feltor.cu} plots the results directly to the screen using \\texttt{glfw3}.\nThe program \\texttt{feltor\\_hpc.cu} writes results into\nthe output file \\texttt{output.nc}.\nThe output file is described in Section~\\ref{sec:output_file}.\nThe optional file \\texttt{initial.nc} can be used to initialize a simulation from an existing file.\nThis behavior is described in Section~\\ref{sec:restart_file}.\nBoth programs write unstructured human readable performance information about the running simulation\nto \\texttt{std::cout}.\n\nNote that when compiled for mpi, the program \\texttt{feltor\\_hpc.cu} expects the\npartition of the total number of processes np into the three directions x, y and z\nas an input from the command line. Make sure that \\texttt{npx*npy*npz==np} and that\nthe process numbers evenly divide the number of grid points in the respective direction! The\nnumber of stages in the multigrid algorithm and the compression parameters further\nrestrict this choice. Also note that the number of processes in a direction must\nnot equal the number of grid points in that direction!\n\n
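These partition constraints can be checked beforehand; the following is a hypothetical Python sketch of the rules stated above (not part of the code base):\n\\begin{verbatim}\n# Hypothetical check of the MPI partition constraints described above,\n# with s the number of multigrid stages.\ndef check_partition(npx, npy, npz, np, Nx, Ny, Nz, s=3):\n    assert npx*npy*npz == np, "npx*npy*npz must equal np"\n    assert Nx % 2**(s-1) == 0 and Ny % 2**(s-1) == 0\n    nx, ny = Nx//2**(s-1), Ny//2**(s-1)   # points on coarsest grid\n    assert nx % npx == 0 and nx > npx\n    assert ny % npy == 0 and ny > npy\n    assert Nz % npz == 0 and Nz > npz\n\ncheck_partition(npx=2, npy=2, npz=4, np=16, Nx=64, Ny=64, Nz=16)\n\\end{verbatim}\n\n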
\\subsection{Input file structure} \\label{sec:input_file}\nInput file format: json\n\n%%This is a booktabs table\n\\begin{longtable}{llllp{6cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Type} & \\textbf{Example} & \\textbf{Default} & \\textbf{Description}  \\\\ \\midrule\nn      & integer & 3 & - &Number of Gaussian nodes in R and Z (we practically always take 3)\n\\\\\nNx     & integer &52& - &Number of grid points in R (increase if your simulations crash)\n\\\\\nNy     & integer &52& - &Number of grid points in Z (increase if your simulations crash)\n\\\\\nNz     & integer &16& - &Number of grid points in $\\varphi$ (determines dt\nsince the parallel velocity dominates the timestep)\n\\\\\ndt     & float &1e-2& - & time stepsize in units of $\\rho_s/c_s$ \\\\\ncompression & integer[2] & [2,2] & [1,1] & Compress output file by reducing\npoints in x and y (projecting the polynomials onto a coarser grid): output\ncontains n*Nx/c[0] points in x (has to divide Nx evenly), and n*Ny/c[1] points\nin y (has to divide Ny evenly). 2 or 3 are reasonable values.\n\\\\\ninner\\_loop & integer & 2  & 1 & Number of time steps between updates to the\ntime integrated quantities. (Although the diagnostics is quite fast, sometimes\nyou need to amortize the time spent on it). Note that integrating selected\nquantities in time during the simulation is how we maintain the time-resolution\nin the file output (cf. \\ref{sec:output_file}). Choose as low as you can get\naway with (between 1 and 10).\n\\\\\nitstp       & integer & 2  & - &{ \\tt inner\\_loop*itstp} is the number of\ntimesteps between file outputs (2d and 3d quantities); Note that 1d and 0d\nquantities can only be computed post-simulation since we can't compute\nflux-integrals in parallel in MPI.\n\\\\\nmaxout      & integer & 10 & - & Total number of field outputs excluding the first\n(the total number of time steps is {\\tt maxout$\\cdot$itstp$\\cdot$inner\\_loop}).\nIf you want to let the simulation run for a certain time instead, just choose\nthis parameter very large and let the simulation hit the time-limit.\n\\\\\neps\\_time   & float & 1e-7  & - & Tolerance of the solver for the implicit part in the\ntime-stepper (if too low, you'll see oscillations in $u_e$ and/or $\\phi$)\n\\\\\nrtol  & float &1e-6   & - &Tolerance of the adaptive time-stepper. (Ignored in Multistep)\n\\\\\neps\\_pol    & float & 1e-6  & - &  Tolerance for the residual of the inversion of the polarisation and induction equations (should not be more than a factor 10 from eps\\_time for $\\beta\\neq  0$)\n\\\\\njumpfactor  & float & 1 & 1 & Jumpfactor $\\in \\left[0.01,1\\right]$ in the local DG method for the elliptic terms. (Don't touch unless you know what you're doing.)\n\\\\\neps\\_gamma  & float & 1e-6  & - & Tolerance for $\\Gamma_1$\n\\\\\nstages      & integer & 3 & 3 & number of stages in multigrid; $2^{\\text{stages}-1}$\nhas to evenly divide both $N_x$ and $N_y$\n\\\\\nFCI & dict & & & Parameters for the Flux coordinate independent approach\n\\\\\n\\qquad refine     & integer[2] & [1,1] & [1,1] & refinement factor in the FCI approach in R- and Z-direction.\nWe use [1,1]; higher values take more time.\n\\\\\n\\qquad rk4eps     & float & 1e-6 & 1e-6 & Accuracy of the fieldline integrator in FCI. The default is reasonable.\n\\\\\n\\qquad periodify & bool & true & true & Indicate if the flux function is periodified beyond the grid boundaries such that the contours are perpendicular to the boundaries. 
This is not entirely consistent but works better for small toroidal resolution\n\\\\\nmu         & float & -0.000272121& - & $\\mu_e =-m_e/m_i$.\n    One of $\\left\\{ -0.000544617, -0.000272121, -0.000181372 \\right\\}$\n\\\\\ntau        & float &1      & - & $\\tau = T_i/T_e$\n\\\\\nbeta       & float & 5e-6  & 0 & Plasma beta $5\\cdot 10^{-6}$ (TJK), $4\\cdot\n10^{-3}$ (Compass). If $0$, then the model is electrostatic\n\\\\\nnu\\_perp   & float &1e-3   & - & perpendicular viscosity $\\nu_\\perp$; increase\nthis or the resolution if you see vertical or horizontal oscillations (likely\nfrom the advection terms) in your simulation box, decrease if it dampens all\ninstabilities\n\\\\\nperp\\_diff & string & \"viscous\" & \"viscous\" & \"viscous\": $\\Lambda_\\perp\\propto\n\\nu_\\perp\\Delta_\\perp$ , \"hyperviscous\": $\\Lambda_\\perp \\propto\n-\\nu_\\perp\\Delta_\\perp^2$\n\\\\\nnu\\_parallel & float &1e-1 & - & parallel viscosity $\\nu_\\parallel$\n(dimensional analysis reveals there can be a factor $(R_0/\\rho_s)^2$ between\n$\\nu_\\perp$ and $\\nu_\\parallel$ for $\\nu_\\parallel$ to become relevant for\nthe dynamics)\n\\\\\nresistivity & float &1e-4  & - & parallel resistivity parameter Eq.~\\eqref{eq:resistivity}\n\\\\\ncurvmode  & string & \"low beta\"  & \"toroidal\" &\ncurvature mode (\n\"low beta\",\n\"true\": no approximation - requires significantly more resolution in Nz,\n\"toroidal\": toroidal field approx - elliptic equation does not need\ncommunication in z,\n\"low beta negative\": reversed sign of low beta,\n\"toroidal negative\": reversed sign of toroidal\n)\n\\\\\nsymmetric & bool & false & false & If true, initialize all quantities symmetric\nin $\\varphi$ (effectively reducing the problem to 2d). The input $N_z$ is used\nto construct the parallel derivatives and then overwritten to $N_z\\equiv 1$.\n\\\\\nbc & dict & & & Boundary conditions (note that $A_\\parallel$ has the same bc as $U$) \\ldots\\\\\n\\qquad density   & char[2] & [DIR,DIR] & -  & boundary conditions in x and y\nfor $n_e$ and $N_i$, DIR (density 1 on boundary) means both convective and\n    diffusive outflow while NEU (gradient 0) means no outflow by diffusion\n\\\\\n\\qquad velocity  & char[2] & [NEU,NEU] & - & boundary conditions in x and y for\n$u_e$, $U_i$ and $A_\\parallel$, DIR is in general not very stable, NEU works\nbetter\\\\\n\\qquad potential & char[2] & [DIR,DIR] & - & boundary conditions in x and y for\n$\\phi$ and $\\psi$, DIR means that $v_{E,\\perp}=0$ on the boundary (i.e. no\noutflow by \\ExB drift), NEU can have a detrimental effect on the timestep \\\\\nbox & dict & & & Bounding box \\\\\n    \\qquad scaleR  & float[2] & [1.1,1.1]     & [1.05,1.05] & $[\\varepsilon_{R-}, \\varepsilon_{R+}]$ scale left and right boundary in units of $a$ Eq.~\\eqref{eq:box}\\\\\n    \\qquad scaleZ  & float[2] & [1.2,1.1]     & [1.05,1.05] & $\\varepsilon_{Z-}, \\varepsilon_{Z+}$ scale lower and upper boundary in units of $ae$ Eq.~\\eqref{eq:box}\n\\\\\ninitne    & string & \"turbulence\"     & \"blob\"  & initial condition for the\nperturbation $\\tilde n$ in \\eqref{eq:initial_ne}. \"zonal\" (Eq.~\\eqref{eq:initial_zonal_flow}),\n    \"blob\" = blob simulations (several rounds fieldaligned),\n    \"straight blob\" = straight blob simulation (1 round fieldaligned),\n    \"turbulence\" = turbulence simulations (1 round fieldaligned, Eq.~\\eqref{eq:initial_turbulent}),\n    \"turbulence on gaussian\" = Gaussian bg. 
profile with turbulence perturbation Eq.~\\eqref{eq:turbulence_on_gaussian}.\n    See the file {\\tt init.h} to add your own custom condition.\n\\\\\ninitphi   & string & \"zero\"  & \"balance\" & (ignored if $\\tau_i = 0$, then $\\phi=0$) initial condition for $\\phi$ and thus $N_i$ (Eq.~\\eqref{eq:initphi}): \"zero\": $\\phi = 0$, vanishing\nelectric potential; \"balance\": ExB vorticity equals ion diamagnetic vorticity\n\\\\\namplitude  & float &0.01   & - & amplitude $A$ of the initial perturbation (blob, turbulent bath or zonal flow)  \\\\\nsigma      & float &2      & - & Gaussian variance in units of $\\rho_s$ \\\\\nposX       & float &0.3    & - & Gaussian R-position in units of $a$\\\\\nposY       & float &0.0    & - & Gaussian Z-position in units of $a$ \\\\\nsigma\\_z    & float &0.25   & - & toroidal variance in units of $\\pi$ of the fieldline-following initialization \\\\\nk\\_psi     & float &0    & - & zonal mode wave number (only for \"zonal\" initial condition)  \\\\\nprofile & dict & & & Density profile \\\\\n\\qquad amp& float &4   & 0 & Profile amplitude $\\triangle n_{peak}$ in\nEq.~\\eqref{eq:density_profile} and Eq.~\\eqref{eq:turbulence_on_gaussian}\n\\\\\n\\qquad alpha  & float & 0.2 & 0.2 & Transition width $\\alpha_p$ in the Heaviside\nat the separatrix (must not be zero - even if amp is zero - as it is also used for the perturbation)\n\\\\\nsource & dict & & & Density source, cf. the output \\texttt{sne\\_tt\\_ifs} in \\texttt{feltordiag} (or \\texttt{SourceProfile\\_ifs} in \\texttt{geometry\\_diag}) to see how much mass the source with the parameters below generates and compare to \\texttt{jsne\\_tt\\_fsa} to see how much mass is lost.  \\\\\n\\qquad rate & float & 0    & 0 & profile source rate $\\omega_s$ in Eq.~\\eqref{eq:electron_source}.\n\\\\\n\\qquad type & string & \"profile\" & \"profile\" & The type of source to use: \"profile\": the source is multiplied by $(n_{prof} - n)$ to relax to the initial profile Eq.~\\eqref{eq:electron_source};\n\"influx\": the source has a constant source rate Eq.~\\eqref{eq:electron_source_influx};\n\"torpex\": Torpex inspired source profile Eq.~\\eqref{eq:electron_source_torpex};\n\"gaussian\": Gaussian shaped source profile - uses \\texttt{posX}, \\texttt{posY} and \\texttt{sigma}.\n    See the file {\\tt init.h} to add your own custom source.\n\\\\\n\\qquad boundary & float & 0.2  & 0.5 & Source region boundary $\\rho_{p,b}$ used in Eq.~\\eqref{eq:electron_source} and Eq.~\\eqref{eq:electron_source_influx}  \\\\\n\\qquad alpha  & float & 0.2 & 0.2 & Transition width $\\alpha_p$ in the Heaviside\nin the density Eq.~\\eqref{eq:density_profile} (with $\\rho_{p,b}=0$) and in the source profiles Eq.~\\eqref{eq:electron_source} (should be\nsmall, but cannot be too small if $\\tau_i > 0$ else $\\Delta_\\perp n_e$ explodes; must not be zero)\n\\\\\ndamping & dict & & & magnetic and density damping region \\\\\n\\qquad rate & float & 0    & 0   & Friction coefficient $\\omega_d$ in the density and velocity damping Eq.~\\eqref{eq:velocity_source} \\\\\n\\qquad boundary & float & 0.2  & 1.2 & Modification region boundary $\\rho_{p,b}$: yields $\\psi_0 = (1-\\rho_{p,b}^2)\\psi_{p,\\min}$ in Eq.~\\eqref{eq:modified_psip}.\n\\\\\n\\qquad alpha   & float & 0.25 & 0 & Transition width $\\alpha_p$: yields\n$\\alpha=(-2\\rho_{p,b}\\alpha_p+\\alpha_p^2)\\psi_{p,\\min}$ for the Heaviside in the modified\n$\\psi_p$ function \\eqref{eq:modified_psip}. If zero, then we do not modify the\nmagnetic field and damping is ignored.\\\\\n\\bottomrule\n\\end{longtable}\n\n
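A minimal input file with the example values from the table above can be generated with a short Python script; this is merely a sketch, and the exact key layout should be checked against the example input files shipped with the repository:\n\\begin{verbatim}\n# Sketch: write a minimal input.json with the example values from the\n# table above; not tuned recommendations for production runs.\nimport json\n\nparams = {\n    "n": 3, "Nx": 52, "Ny": 52, "Nz": 16, "dt": 1e-2,\n    "compression": [2, 2], "inner_loop": 2, "itstp": 2, "maxout": 10,\n    "eps_time": 1e-7, "rtol": 1e-6, "eps_pol": 1e-6, "jumpfactor": 1,\n    "eps_gamma": 1e-6, "stages": 3,\n    "FCI": {"refine": [1, 1], "rk4eps": 1e-6, "periodify": True},\n    "mu": -0.000272121, "tau": 1, "beta": 5e-6, "nu_perp": 1e-3,\n    "perp_diff": "viscous", "nu_parallel": 1e-1, "resistivity": 1e-4,\n    "curvmode": "low beta", "symmetric": False,\n    "bc": {"density": ["DIR", "DIR"], "velocity": ["NEU", "NEU"],\n           "potential": ["DIR", "DIR"]},\n    "box": {"scaleR": [1.1, 1.1], "scaleZ": [1.2, 1.1]},\n    "initne": "turbulence", "initphi": "zero", "amplitude": 0.01,\n    "sigma": 2, "posX": 0.3, "posY": 0.0, "sigma_z": 0.25, "k_psi": 0,\n    "profile": {"amp": 4, "alpha": 0.2},\n    "source": {"rate": 0, "type": "profile", "boundary": 0.2,\n               "alpha": 0.2},\n    "damping": {"rate": 0, "boundary": 0.2, "alpha": 0.25},\n}\nwith open("input.json", "w") as f:\n    json.dump(params, f, indent=4)\n\\end{verbatim}\n\n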
\\subsection{Geometry file structure} \\label{sec:geometry_file}\nFile format: json\n\n%%This is a booktabs table\n\\begin{longtable}{llll>{\\RaggedRight}p{7cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Type} & \\textbf{Example} & \\textbf{Default} & \\textbf{Description}  \\\\ \\midrule\n    A      & float & 0 &  0 & Solovev parameter in Eq.~\\eqref{eq:solovev} \\\\\n    c      & float[12] &  - & - & Solovev coefficients in Eq.~\\eqref{eq:solovev} \\\\\n    PP     & float & 1 &  1 & Prefactor $\\mathcal P_\\psi$ for $\\psi_p$ in Eq.~\\eqref{eq:solovev} \\\\\n    PI     & float & 1 &  1 & Prefactor $\\mathcal P_I$ for $I$ in Eq.~\\eqref{eq:solovev} \\\\\n    R\\_0   & float & - & -  & Major radius $R_0$ in units of $\\rho_s$ in Eq.~\\eqref{eq:solovev} (This is the only geometry quantity to change if $\\rho_s$ changes)\\\\\n    elongation    & float & 1 & - & Elongation $e$, used in determining the box size Eq.~\\eqref{eq:box} and the initial guess for the location of the X-point $Z_X = -1.1 ea$ \\\\\n    triangularity & float & 0 & - & Triangularity $\\delta$, used in the initial guess for the location of the X-point $R_X = R_0-1.1\\delta a$ \\\\\n    inverseaspectratio & float & 0.16667 & - & minor to major radius $a/R_0$ (used to compute $a$ from $R_0$) \\\\\n\\bottomrule\n\\end{longtable}\n\n\\subsection{Output} \\label{sec:output_file}\nOutput file format: netcdf-4/hdf5; A \\textit{coordinate variable (Coord. Var.)} is a Dataset with the same name as a dimension.\nWe follow the \\textbf{CF Conventions CF-1.7}\n\\url{http://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/cf-conventions.html}\nand write the according attributes into the file.\nThe command \\texttt{ncdump -h output.nc} gives a full list of what a file contains.\nHere, we list the content without attributes\nsince the internal netcdf information does not display equations.\n%\n%Name | Type | Dimensionality | Description\n%---|---|---|---|\n\\begin{longtable}{lll>{\\RaggedRight}p{7cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Type} & \\textbf{Dimension} & \\textbf{Description}  \\\\ \\midrule\ninputfile  &     text attribute & - & verbose input file as a string (valid JSON, C-style comments are allowed but discarded) \\\\\ngeomfile   &     text attribute & - & verbose geometry input file as a string (valid JSON, C-style comments are allowed but discarded) \\\\\nx                & Coord. Var. & 1 (x) & $R$-coordinate (computational space, compressed size: $nN_x/c_x$)\\\\\ny                & Coord. Var. & 1 (y) & $Z$-coordinate (computational space, compressed size: $nN_y/c_y$)\\\\\nz                & Coord. Var. & 1 (z) & $\\varphi$-coordinate (computational space, size: $N_z$) \\\\\ntime             & Coord. Var. 
& 1 (time)& time at which fields are written (variable size: maxout$+1$, dimension size: unlimited) \\\\\nxc           & Dataset & 3 (z,y,x) & Cartesian x-coordinate $x=R\\sin(\\varphi)$ \\\\\nyc           & Dataset & 3 (z,y,x) & Cartesian y-coordinate $y=R\\cos(\\varphi)$\\\\\nzc           & Dataset & 3 (z,y,x) & Cartesian z-coordinate $z=Z$ \\\\\nPsip             & Dataset & 3 (z,y,x) & Flux function $\\psi_p(R,Z)$ \\\\\nNprof            & Dataset & 3 (z,y,x) & Density profile $n_\\text{prof}$ used in the forcing source \\\\\nSource           & Dataset & 3 (z,y,x) & Source profile $S_{prof}$\\\\\nBR               & Dataset & 3 (z,y,x) & Contravariant magnetic field component $B^R$ \\\\\nBZ               & Dataset & 3 (z,y,x) & Contravariant magnetic field component $B^Z$ \\\\\nBP               & Dataset & 3 (z,y,x) & Contravariant magnetic field component $B^\\varphi$ \\\\\nelectrons        & Dataset & 4 (time, z, y, x) & electron density $n_e$ \\\\\nions             & Dataset & 4 (time, z, y, x) & ion density $N_i$ \\\\\nUe               & Dataset & 4 (time, z, y, x) & electron velocity $u_e$ \\\\\nUi               & Dataset & 4 (time, z, y, x) & ion velocity $U_i$ \\\\\npotential        & Dataset & 4 (time, z, y, x) & electric potential $\\phi$ \\\\\ninduction        & Dataset & 4 (time, z, y, x) & parallel vector potential $A_\\parallel$ \\\\\nX\\_2d            & Dataset & 3 (time,y,x) & Selected plane $X(\\varphi=0)$ \\\\\nX\\_ta2d          & Dataset & 3 (time,y,x) & Toroidal average $\\PA{ X }$\nEq.~\\eqref{eq:phi_average} \\\\\nY\\_tt\\_2d        & Dataset & 3 (time,y,x) & Time integrated (between two outputs, Simpson's rule) selected plane\n$\\int_{t_0}^{t_1}\\d t Y(\\varphi=0) $\nwhere $t_1 - t_0 = ${\\tt dt*inner\\_loop*itstp} and {\\tt itstp} is the number of discretization points\\\\\nY\\_tt\\_ta2d      & Dataset & 3 (time,y,x) & Time integrated (between two outputs, Simpson's rule) toroidal average (Eq.~\\eqref{eq:phi_average})\n$\\int_{t_0}^{t_1}\\d t \\PA{ Y }$\nwhere $t_1 - t_0 = ${\\tt dt*inner\\_loop*itstp} and {\\tt itstp} is the number of discretization points\\\\\n\\bottomrule\n\\end{longtable}\nwhere\nX and Y\\_tt represent the quantities described in the tables in previous sections and the miscellaneous quantities\n\\begin{longtable}{llll}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Equation} & \\textbf{Name} &  \\textbf{Equation}\\\\\n\\midrule\n    vorticity &$-\\Delta_\\perp\\phi$ &\n    apar\\_vorticity &$-\\Delta_\\perp A_\\parallel$ \\\\\n    dssue & $\\npar^2 u_e$&\n    dppue & $\\partial_\\varphi^2 u_e$\\\\\n    dpue2 & $(\\partial_\\varphi u_e)^2$&\n    lperpinv &$L_\\perp^{-1} := |\\vec\\np n_e|/n_e$ \\\\\n    perpaligned &$(\\vec\\np n_e)^2/n_e$ &\n    lparallelinv &$L_\\parallel^{-1} := |\\npar n_e|/n_e$ \\\\\n    aligned &$ (\\npar n_e)^2/n_e$ &\n    ne2 & $n_e^2$ \\\\\n    phi2 & $\\phi^2$ &\n    nephi & $n_e\\phi$ \\\\\n\\bottomrule\n\\end{longtable}\nThe computation time spent on diagnostics is negligible if {\\tt inner\\_loop} parameter is greater than 1. Also\nremember that the X and Y fields are all two-dimensional, which takes up much less disk-space than three-dimensional fields.\n\\subsection{Restart file} \\label{sec:restart_file}\nThe program \\texttt{feltor\\_hpc.cu} has the possibility to initialize time and the fields with\nthe results of a previous simulation. In particular, this feature is motivated by chain jobs on a cluster\n(see e.g. 
the --dependency option in SLURM).\nThis behaviour is enabled by giving an additional file \\texttt{initial.nc}\non the command line. In this case the \\texttt{initne} and \\texttt{initphi} parameters of the input\nfile are ignored. Instead, the fields \\texttt{electrons, ions, Ue, Ui, induction} at the latest timestep\nare read from the given file to initialize the simulation.\nNote that to enable a loss-less continuation of the simulation we output special restart fields into the output file that, in contrast to the other fields,\nare not compressed.\nApart from that the behaviour of the program is unchanged, i.e. the magnetic field, profiles, resolutions, etc.\nare all taken from the regular input files. This means that the user must take care that these are consistent\nwith the parameters in the existing \\texttt{initial.nc} file. Also note that we discourage\nappending new results to an existing file directly,\nbecause if for some reason the cluster crashes and the file is corrupted\nthe whole simulation is lost. It is safer to just merge files afterwards with, for example,\\\\\n\\texttt{ncrcat output1.nc output2.nc output.nc}\n
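\nFor a quick check of what a previous run contains before restarting from it, a minimal Python sketch assuming the \\texttt{netCDF4} package (hypothetical, not part of the code base):\n\\begin{verbatim}\n# Sketch: inspect the latest time step of an existing output.nc,\n# assuming the Python netCDF4 package.\nfrom netCDF4 import Dataset\n\nwith Dataset("output.nc", "r") as ds:\n    print(list(ds.variables))             # all datasets in the file\n    t  = ds["time"][-1]                   # latest output time\n    ne = ds["electrons"][-1, :, :, :]     # n_e at that time\n    print("t =", t, "shape of n_e:", ne.shape)\n\\end{verbatim}\n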
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Diagnostics}\\label{sec:diagnostics}\n\\texttt{feltor/src/feltor/feltordiag.cu}\n reads one or more previously generated simulation file(s) \\texttt{input0.nc ... inputN.nc} described in Section~\\ref{sec:output_file} and writes into a single second output file \\texttt{output.nc} described as follows. \\\\\nCompilation\\\\\n\\texttt{make feltordiag device=\\{gpu,omp\\}} \\\\\nUsage \\\\\n\\texttt{./feltordiag input0.nc ... inputN.nc output.nc} \\\\\n\nOutput file format: netcdf-4/hdf5, Conventions: CF-1.7; A \\textit{coordinate variable (Coord. Var.)} is a Dataset with the same name as a dimension.\n\n\\begin{longtable}{lll>{\\RaggedRight}p{7cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Name} &  \\textbf{Type} & \\textbf{Dimension} & \\textbf{Description}  \\\\ \\midrule\ninputfile  &     text attribute & - & verbose input file as a string (valid JSON, C-style comments are allowed but discarded) \\\\\ngeomfile   &     text attribute & - & verbose geometry input file as a string (valid JSON, C-style comments are allowed but discarded) \\\\\nx                & Coord. Var. & 1 (x) & $R$-coordinate (computational space, compressed size: $nN_x/c_x$)\\\\\ny                & Coord. Var. & 1 (y) & $Z$-coordinate (computational space, compressed size: $nN_y/c_y$)\\\\\npsi              & Coord. Var. & 1 (psi) & $\\psi_p$-coordinate (default size: $3\\cdot 64$) \\\\\ntime             & Coord. Var. & 1 (time)& time at which fields are written (variable size: maxout$+1$, dimension size: unlimited) \\\\\ndvdpsip          & Dataset & 1 (psi) & $\\d v/\\d\\psi_p$ \\\\\npsi\\_vol         & Dataset & 1 (psi) & The volume enclosed by the flux surfaces $v(\\psi_p) = \\int_{\\psi_p} \\dV $ \\\\\npsi\\_area        & Dataset & 1 (psi) & The area of the flux surfaces $A(\\psi_p) = 2\\pi \\int_\\Omega |\\vn\\psi_p| \\delta(\\psi_p - \\psi_{p0}) H(Z-Z_X) R\\d R\\d Z$ \\\\\nq-profile        & Dataset & 1 (psi) & The safety factor $q(\\psi_p)$ \\eqref{eq:safety_factor} using direct integration (accurate but unavailable outside the separatrix) \\\\\npsi\\_psi         & Dataset & 1 (psi) & explicit $\\psi_p$ values; Same as psi \\\\\npsit1d           & Dataset & 1 (psi) & Toroidal flux (integrated q-profile) $\\psi_t = \\int^{\\psi_p} \\d\\psi_p q(\\psi_p)$ \\\\\nrho              & Dataset & 1 (psi) & Transformed flux label $\\rho:= 1 - \\psi_p/\\psi_{p,\\min}$ \\\\\nrho\\_p           & Dataset & 1 (psi) & Poloidal flux label $\\rho_p:= \\sqrt{1 - \\psi_p/\\psi_{p,\\min}}$ \\\\\nrho\\_t           & Dataset & 1 (psi) & Toroidal flux label $\\rho_t := \\sqrt{\\psi_t/\\psi_{t,\\mathrm{sep}}}$ (is similar to $\\rho$ in the edge but $\\rho_t$ is nicer in the core domain, because equidistant $\\rho_t$ make more equidistant flux-surfaces)\\\\\nZ\\_fluc2d        & Dataset & 3 (time,y,x) & Fluctuation level on selected plane ($\\varphi= 0$) $\\delta Z := Z(R,Z,0) - \\RA{ Z}(R,Z)$ \\\\\nZ\\_fsa2d         & Dataset & 3 (time, y,x) & Flux surface average $\\RA{ Z}$ interpolated onto 2d plane Eq.~\\eqref{eq:fsa_vol} \\\\\nZ\\_fsa           & Dataset & 2 (time, psi) & Flux surface average $\\RA{ Z}$ Eq.~\\eqref{eq:fsa_vol} \\\\\nZ\\_ifs           & Dataset & 2 (time, psi) & Volume integrated flux surface average $\\int\\d v\\RA{ Z}$ unless Z is a current, then it is the volume derived flux-surface average $\\partial_v \\RA{ Z}$ \\\\\nZ\\_ifs\\_lcfs     & Dataset & 1 (time) & Volume integrated flux surface average evaluated on the last closed flux surface $\\int_0^{v(0)}\\d v\\RA{ Z}$ unless Z is a current, then it is the fsa evaluated $\\RA{ j_v}(0)$ \\\\\nZ\\_ifs\\_norm     & Dataset & 1 (time) & Volume integrated square flux surface average $\\sqrt{\\int \\d v \\RA{Z}^2}$, unless Z is a current, then it is the square derivative of the flux surface average $\\sqrt{\\int\\d v (\\partial_v \\RA{j^v})^2}$\\\\\n\\bottomrule\n\\end{longtable}\nwhere Z $\\in$ \\{X, Y\\_tt\\}.\nNote that feltordiag converts all $jsX$ quantities into $jvX$\nby multiplying with $\\d v/\\d \\psi_p$,\nin the sense that $\\vec j\\cn v  = \\vec j \\cn \\psi_p \\d v/\\d\\psi_p$.\nThe parameters used for the X-point flux-aligned grid construction are $f_x = 1/8$, $f_y = 0$, $n_\\psi = 3$, $N_\\zeta = 64$ and $N_\\eta = 640$ and the constant monitor metric.\n\nWe also have a useful geometry diagnostic program:\n\\texttt{feltor/inc/geometries/geometry\\_diag.cu} reads either a previously\ngenerated simulation file \\texttt{input.nc} or the input json files\n\\texttt{input.json} and \\texttt{geometry.json} and writes an output file \\texttt{diag\\_geometry.nc} as follows.\\\\\nCompilation\\\\\n\\texttt{make geometry\\_diag device=\\{gpu,omp\\}} \\\\\nUsage \\\\\n\\texttt{./geometry\\_diag input.json geometry.json diag\\_geometry.nc} \\\\\nThe program outputs a host of static 1d, 2d and 3d geometric quantities.\nThe output file is for example useful in connection with the ``Group Datasets'' filter in paraview, which merges Datasets from different files into one using 
shallow copy only.\n\\section{Error conditions}\nAll previously mentioned codes can crash for various reasons. Here,\nwe list and describe situations that generally may lead to program\ntermination.\n\\begin{longtable}{p{6cm}p{8cm}}\n\\toprule\n\\rowcolor{gray!50}\\textbf{Error condition} &  \\textbf{Handling} \\\\ \\midrule\nAn input file does not exist or is otherwise invalid\n&\nProgram terminates with an error message to \\texttt{std::cerr}. \\texttt{feltordiag.cu} writes an error to \\texttt{std::cerr} and continues with the next input file.\n    \\\\\nAn input netcdf file misses a required field\n&\nProgram terminates with a NetCDF error message to \\texttt{std::cerr}\n    \\\\\nNo write permission for the output file location\n&\nProgram terminates with an error message to \\texttt{std::cerr}\n    \\\\\nAn input Json file misses a key or contains a typo in a key\n&\nThe programs \\texttt{feltor.cu} and \\texttt{feltor\\_hpc.cu}\nwill exit with an error message. (The reason why we do not\nsilently use the default value is that the danger of wasting\nvaluable computing time on the cluster due to a typo is bigger than the\nadded convenience. We want to be sure that the program\ndoes what the user wants.)\nThe other programs just issue warnings\nif a key is not found and use a default value,\nwhich is $0$ if not otherwise specified.\n    \\\\\n    An input Json file has an invalid value, e.g. a typo in a string value\n&\nInvalid values lead to termination with an error message to \\texttt{std::cerr}, once and if the program tries to use the value\n    \\\\\n    Number of processes in $x$, $y$ and $z$ direction does not match the total number of processes\n&\nProgram terminates with an error message to \\texttt{std::cerr}.\n    \\\\\n    $2^{s-1}$ or $c_x$ or $c_y$ does not evenly divide $N_x$ and $N_y$, where $s$ is the number of stages in the multigrid algorithm\n&\nProgram terminates on a thrown error. Make sure the numbers add up.\n    \\\\\n    Number of processes in $x$, $y$ and $z$ direction does not evenly divide, or is greater than or equal to, $N_x/2^{s-1}$, $N_y/2^{s-1}$ and $N_z$, where $s$ is the number of stages in the multigrid algorithm\n&\nProgram terminates on a failed assert\n    \\\\\nAn MPI error occurs\n&\nProgram crashes horribly, printing cryptic error messages (stack trace) to \\texttt{std::cerr}\n    \\\\\nA numerical instability occurs\n&\nThe program usually terminates due to a raised NaN exception. However,\nthe cause of the instability has to be determined by inspecting the\nlast output in the output file.\n    \\\\\n\\qquad large fieldaligned oscillations in $u_e$ paired with instability in the edge of the box\n&\nApply a damping region\n    \\\\\n\\qquad Perpendicular grid oscillations in $u_e$ and $\\Delta_\\perp \\phi$ in the damping region, symmetric in $\\varphi$\n&\nIncrease the damping \\texttt{alpha}, increase the damping boundary, make the box larger/smaller\n    \\\\\n\\qquad Spike in $u_e$ shortly after simulation start\n&\nIncrease $\\nu_\\perp$, increase $N_x$, $N_y$, decrease the perturbation amplitude\n    \\\\\n\\qquad Grid oscillations far away from the edge\n&\nProbably caused by the perpendicular transport going unstable. Increase $\\nu_\\perp$ and/or $N_x$, $N_y$\n\\\\\n\\qquad Oscillations where fieldlines intersect the wall\n&\nCaused by boundary conditions in the FCI method and a necessarily underresolved toroidal direction. 
Increase $N_z$, decrease $N_x$, $N_y$ or better decrease $q$ value by decreasing $\\mathcal P_\\psi$ in geometry input file\n\\\\\n\\bottomrule\n\\end{longtable}\n\n%..................................................................\n\\bibliography{../../doc/related_pages/references}\n%..................................................................\n\n\n\\end{document}\n", "meta": {"hexsha": "6ced081c16ddb68f04a078e9d795d4265bdaf34a", "size": 96116, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/feltor/feltor.tex", "max_stars_repo_name": "RaulGerru/FELTOR_Raul", "max_stars_repo_head_hexsha": "a566f8a9003ade437e093334877f839f3dfd0260", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/feltor/feltor.tex", "max_issues_repo_name": "RaulGerru/FELTOR_Raul", "max_issues_repo_head_hexsha": "a566f8a9003ade437e093334877f839f3dfd0260", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/feltor/feltor.tex", "max_forks_repo_name": "RaulGerru/FELTOR_Raul", "max_forks_repo_head_hexsha": "a566f8a9003ade437e093334877f839f3dfd0260", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.2073520965, "max_line_length": 440, "alphanum_fraction": 0.6728640393, "num_tokens": 33217, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891305219504, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5802887732845828}}
{"text": "% !TEX root = ../main.tex\n\n\\chapter{Methodology}\n\\thispagestyle{myheadings}\n\n\\section{HAR model}\n\nTo capture the long-memory behaviour of volatility movements, this research uses the HAR model suggested by \\cite{corsi_simple_2009}, with a logarithmic variation, in the form of Equation \\ref{eq:HAR_model}.\n\n\\begin{equation}\nRV_t = \\beta_0 + \\beta_1 RV_{t-1} +\\beta_2 RV_{t-1|t-7} + \\beta_3 RV_{t-1|t-{30}} + u_t\n\\label{eq:HAR_model}\n\\end{equation}\n\nFrom the PACF plot we can clearly see that, beyond the AR(1) process, lags over the following rolling periods also have an effect. Here $ RV_{t-j|t-k}=\\frac{1}{h+1-j}\\sum_{i=j}^h RV_{t-i},\\; $ with $j\\leqslant{h}$, so Realized Volatility is averaged on both a weekly and a monthly basis to capture the autoregressive effect over longer periods.\\\\\n\n\\begin{comment}\n\\begin{table}[H]\n\\setlength{\\tabcolsep}{6mm}{\n\\begin{tabular}{|c|l|c|c|c|c|}\n\\hline\n\\multicolumn{6}{|c|}{HAR Model Result} \\\\ \\hline\n\\multicolumn{2}{|c|}{Dep. Variable:} & logRV & \\multicolumn{2}{c|}{R-squared:} & 0.729 \\\\ \n\\multicolumn{2}{|c|}{Model:} & OLS & \\multicolumn{2}{c|}{Adj. R-squared:} & 0.728 \\\\ \n\\multicolumn{2}{|c|}{Method:} & LeastSquares & \\multicolumn{2}{c|}{F-statistic:} & 1157. \\\\ \n\\multicolumn{2}{|c|}{Log-Likelihood:} & -1521.0 & \\multicolumn{2}{c|}{Prob (F-statistic):} & 8.32e-276 \\\\ \n\\multicolumn{2}{|c|}{No. Observations:} & 1298 & \\multicolumn{2}{c|}{AIC:} & 3050. \\\\ \n\\multicolumn{2}{|c|}{Df Residuals:} & 1294 & \\multicolumn{2}{c|}{BIC:} & 3071. \\\\ \n\\multicolumn{2}{|c|}{Df Model:} & 3 & \\multicolumn{2}{c|}{Covariance Type:} & nonrobust \\\\ \\hline\n\\multicolumn{2}{|c|}{} & coef & std err & t & P>|t| \\\\ \n\\multicolumn{2}{|c|}{const} & -0.7749 & 0.118 & -6.545 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RV_{t-1})$} & 0.2463 & 0.029 & 8.570 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RV_{t-1|t-7})$} & 0.6804 & 0.039 & 17.571 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RV_{t-1|t-30})$} & -0.0073 & 0.031 & -0.237 & 0.813 \\\\ \\hline\n\\multicolumn{2}{|c|}{Omnibus:} & 63.912 & \\multicolumn{2}{c|}{Durbin-Watson:} & 1.614 \\\\\n\\multicolumn{2}{|c|}{Prob(Omnibus):} & 0 & \\multicolumn{2}{c|}{Jarque-Bera (JB):} & 141.160 \\\\\n\\multicolumn{2}{|c|}{Skew:} & 0.296 & \\multicolumn{2}{c|}{Prob(JB):} & 2.23e-31 \\\\\n\\multicolumn{2}{|c|}{Kurtosis:} & 4.503 & \\multicolumn{2}{c|}{Cond. No.} & 66.5 \\\\ \\hline\n\\end{tabular}}\n\\caption{HAR Model Result (under 10 min interval)} \\label{HAR_model_result}\n\\end{table}\n\\end{comment}\n\n\\begin{equation}\nRV_t = \\beta_0 + \\beta_1 RV_{t-1} +\\beta_2 RV_{t-1|t-7} + u_t\n\\label{eq:sHAR_model}\n\\end{equation}\n\n
For easier interpretation and for comparison with the models later on\\footnote{$\\log(RV_{t-1|t-30})$ is removed from most other models to avoid overfitting.}, the modified HAR model in Equation \\ref{eq:sHAR_model} removes the insignificant $\\log(RV_{t-1|t-30})$ term. Removing $\\log(RV_{t-1|t-30})$ results in no change in the adjusted R-squared, which indicates its negligible contribution to explaining $RV_t$.\n\n\\begin{comment}\n\\begin{table}[H]\n\\setlength{\\tabcolsep}{6mm}{\n\\begin{tabular}{|c|c|c|c|c|} \n\\hline\n\\multicolumn{5}{|c|}{Modified HAR Model Result} \\\\ \n\\hline\nDep. Variable: & logRV & \\multicolumn{2}{c|}{R-squared:} & 0.728 \\\\\nModel: & OLS & \\multicolumn{2}{c|}{Adj. R-squared:} & 0.728 \\\\\nMethod: & LeastSquares & \\multicolumn{2}{c|}{F-statistic:} & 1737. \\\\\nLog-Likelihood: & -1521.0 & \\multicolumn{2}{c|}{Prob (F-statistic):} & 0.00 \\\\\nNo. Observations: & 1298 & \\multicolumn{2}{c|}{AIC:} & 3048. \\\\\nDf Residuals: & 1295 & \\multicolumn{2}{c|}{BIC:} & 3064. \\\\\nDf Model: & 2 & \\multicolumn{2}{c|}{Covariance Type:} & nonrobust \\\\ \n\\hline\n & coef & std err & t & P\\textgreater{}\\textbar{}t\\textbar{} \\\\\nconst & -0.7651 & 0.111 & -6.906 & 0.000 \\\\\n$\\log RV_{t-1}$ & 0.2463 & 0.029 & 8.573 & 0.000 \\\\\n$\\log (RV_{t-1|t-7})$ & 0.6748 & 0.031 & 21.990 & 0.000 \\\\ \n\\hline\nOmnibus: & 64.455 & \\multicolumn{2}{c|}{Durbin-Watson:} & 1.615 \\\\\nProb(Omnibus): & 0.000 & \\multicolumn{2}{c|}{Jarque-Bera (JB):} & 141.803 \\\\\nSkew: & 0.301 & \\multicolumn{2}{c|}{Prob(JB):} & 1.61e-31 \\\\\nKurtosis: & 4.503 & \\multicolumn{2}{c|}{Cond. No.} & 52.0 \\\\\n\\hline\n\\end{tabular}}\n\\caption{Modified HAR Model Result (under 10 min interval)}\n\\label{mhar_model_result}\n\\end{table}\n\\end{comment}\n\n
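As a concrete illustration, Equation \\ref{eq:sHAR_model} can be estimated by OLS with a few lines of Python; the sketch below assumes a pandas Series \\texttt{rv} of daily realized volatilities (take logarithms first to reproduce the logRV specification of the result tables):\n\\begin{verbatim}\n# Sketch: build the (modified) HAR regressors and fit them by OLS,\n# assuming a pandas Series rv of daily realized volatilities.\nimport pandas as pd\nimport statsmodels.api as sm\n\ndef har_design(rv):\n    X = pd.DataFrame({\n        "rv_d": rv.shift(1),                    # RV_{t-1}\n        "rv_w": rv.shift(1).rolling(7).mean(),  # RV_{t-1|t-7}\n    })\n    return sm.add_constant(X)\n\n# res = sm.OLS(rv, har_design(rv), missing="drop").fit()\n# print(res.summary())\n\\end{verbatim}\n\n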
\\section{HARQ model}\n\nHowever, due to the existence of measurement errors in realized volatilities, the coefficients in the HAR model can be improved by splitting off a share that varies with Realized Quarticity, as presented in Equation \\ref{eq:HARQ_model}.\n%$\\log(RV_{t-1|t-30})$ is removed for insignificancy, \n\n\\begin{equation}\n\\begin{split}\n\\left\\{\n             \\begin{array}{lr}\n             RV_t=\\beta_0+\\underbrace{(\\beta_1+\\beta_{1Q} RQ_{t-1}^{1/2})}_{\\beta_{1,t}}RV_{t-1}+\\underbrace{(\\beta_2+\\beta_{2Q} RQ_{t-1|t-7}^{1/2})}_{\\beta_{2,t}}RV_{t-1|t-7}+u_t\\\\\n             RQ_t=\\frac{M}{3}\\sum_{i=1}^Mr_{t,i}^4,\\;RQ_{t-1|t-k}=\\frac{1}{k}\\sum_{j=1}^kRQ_{t-j}\n             \\end{array}\n\\right.\n\\end{split}\n\\label{eq:HARQ_model}\n\\end{equation}\n\nIn parallel to $RV_{t-j|t-k}$, $RQ_{t-1|t-k}$ is the rolling average of Realized Quarticity over the preceding period.\\\\\n\nThe joint regression on Realized Quarticity and Realized Volatility is based on the idea that Realized Volatility consists of Integrated Volatility and a Brownian Motion component, and the unwanted Brownian Motion makes an extra impact on the observed volatility coefficients. Separating $RQ_t^{1/2}\\,$\\footnote{$RQ_t$ takes the square root, to resize its unit to the same level as $RV_t$} from the HAR model quantifies the effect of the Brownian Motion. Given the positive weekly lags in Figure \\ref{PACF}, the Brownian Motion should have a negative impact on the coefficients of historical realised volatility. Intuitively, subtracting the Brownian Motion effect from the Realized Volatility coefficients constructs more accurate models. \\\\\n\n\\begin{comment}\n\\begin{table}[H]\n\\setlength{\\tabcolsep}{6mm}{\n\\begin{tabular}{|c|l|c|c|c|c|}\n\\hline\n\\multicolumn{6}{|c|}{HARQ Model Result} \\\\ \\hline\n\\multicolumn{2}{|c|}{Dep. Variable:} & logRV & \\multicolumn{2}{c|}{R-squared:} & 0.729 \\\\ \n\\multicolumn{2}{|c|}{Model:} & OLS & \\multicolumn{2}{c|}{Adj. R-squared:} & 0.728 \\\\ \n\\multicolumn{2}{|c|}{Method:} & LeastSquares & \\multicolumn{2}{c|}{F-statistic:} & 870.2 \\\\ \n\\multicolumn{2}{|c|}{Log-Likelihood:} & -1519.5 & \\multicolumn{2}{c|}{Prob (F-statistic):} & 2.69e-276 \\\\ \n\\multicolumn{2}{|c|}{No. Observations:} & 1298 & \\multicolumn{2}{c|}{AIC:} & 3049. \\\\ \n\\multicolumn{2}{|c|}{Df Residuals:} & 1293 & \\multicolumn{2}{c|}{BIC:} & 3075. 
\\\\ \n\\multicolumn{2}{|c|}{Df Model:} & 4 & \\multicolumn{2}{c|}{Covariance Type:} & nonrobust \\\\ \\hline\n\\multicolumn{2}{|c|}{} & coef & std err & t & P>|t| \\\\ \n\\multicolumn{2}{|c|}{const} & -1.2690 & 0.357 & -3.553 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RV_{t-1})$} & 0.3355 & 0.171 & 1.965 & 0.050 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RQ_{t-1}^{1/2}\\cdot RV_{t-1})$} & -0.0459 & 0.079 & -0.578 & 0.563 \\\\ \n\\multicolumn{2}{|c|}{$\\log(RV_{t-1|t-7})$} & 0.8166 & 0.127 & 6.407 & 0.000 \\\\ \n\\multicolumn{2}{|l|}{$\\log(RQ_{t-1|t-7}^{1/2}\\cdot RV_{t-1|t-7})$} & \\multicolumn{1}{|c|}{-0.0704} & \\multicolumn{1}{|c|}{0.057} & \\multicolumn{1}{|c|}{-1.237} & \\multicolumn{1}{|c|}{0.216} \\\\ \\hline\n\\multicolumn{2}{|c|}{Omnibus:} & 76.823 & \\multicolumn{2}{c|}{Durbin-Watson:} & 1.615 \\\\ \n\\multicolumn{2}{|c|}{Prob(Omnibus):} & 0.000 & \\multicolumn{2}{c|}{Jarque-Bera (JB):} & 158.995 \\\\ \n\\multicolumn{2}{|c|}{Skew:} & 0.385 & \\multicolumn{2}{c|}{Prob(JB):} & 2.98e-35 \\\\ \n\\multicolumn{2}{|c|}{Kurtosis:} & 4.532 & \\multicolumn{2}{c|}{Cond. No.} & 497. \\\\ \\hline\n\\end{tabular}}\n\\caption{HARQ Model Result (under 10 min interval)} \\label{HARQ_model_result}\n\\end{table}\n\\end{comment}\n\nGiven the difficulty of estimating the quarticity parameters accurately, we also investigate empirically a simplified version of the HARQ model, which only allows the coefficient of the daily lagged RV to vary as a function of $RQ^{1/2}$.\\\\\n\n\\begin{equation}\n\\begin{split}\n\\left\\{\n             \\begin{array}{lr}\n             RV_t=\\beta_0+\\underbrace{(\\beta_1+\\beta_{1Q} RQ_{t-1}^{1/2})}_{\\beta_{1,t}}RV_{t-1}+\\beta_2 RV_{t-1|t-7}+u_t\\\\\n             RQ_t=\\frac{M}{3}\\sum_{i=1}^Mr_{t,i}^4,\\;RQ_{t-1|t-k}=\\frac{1}{k}\\sum_{j=1}^kRQ_{t-j}\n             \\end{array}\n\\right.\n\\end{split}\n\\label{eq:sHARQ_model}\n\\end{equation}\n\nThe measurement error is most substantial for the daily ($k = 1$) RV. As is to be expected, the weekly measurement error variance is much smaller than that of the daily (normalised) RV. Therefore, the attenuation bias of the short HARQ model will not be severely affected by the absence of the weekly quarticity coefficient.\\\\\n\n
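For concreteness, a small Python sketch of how the realized measures entering Equation \\ref{eq:sHARQ_model} can be computed, assuming a pandas DataFrame \\texttt{r} of intraday (e.g.\\ 10 min) returns with one row per day and $M$ columns:\n\\begin{verbatim}\n# Sketch: realized volatility, realized quarticity and the HARQ\n# regressors from a pandas DataFrame r (one row per day, M columns).\nimport pandas as pd\n\ndef realized_measures(r):\n    M = r.shape[1]\n    rv = (r**2).sum(axis=1)              # RV_t = sum_i r_{t,i}^2\n    rq = (M/3.0)*(r**4).sum(axis=1)      # RQ_t, estimator of IQ\n    return rv, rq\n\ndef harq_design(rv, rq):\n    return pd.DataFrame({\n        "rv_d":    rv.shift(1),                      # RV_{t-1}\n        "rq_rv_d": (rq.shift(1)**0.5)*rv.shift(1),   # RQ^{1/2}RV_{t-1}\n        "rv_w":    rv.shift(1).rolling(7).mean(),    # RV_{t-1|t-7}\n    })\n\\end{verbatim}\n\n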
\\begin{comment}\n\\begin{equation}\nRV_t=-1.1799+(0.4325-0.0892RQ_{t-1}^{1/2})RV_{t-1}+0.6640RV_{t-1|t-7}+u_t\n\\label{sHARQ_equation}\n\\end{equation}\n\n\\begin{table}[H]\n\\setlength{\\tabcolsep}{6mm}{\n\\begin{tabular}{|c|l|c|c|c|} \n\\hline\n\\multicolumn{5}{|c|}{Short HARQ Model Result} \\\\ \n\\hline\nDep. Variable: & logRV & \\multicolumn{2}{c|}{R-squared:} & 0.729 \\\\\nModel: & OLS & \\multicolumn{2}{c|}{Adj. R-squared:} & 0.728 \\\\\nMethod: & LeastSquares & \\multicolumn{2}{c|}{F-statistic:} & 1159. \\\\\nLog-Likelihood: & -1520.3 & \\multicolumn{2}{c|}{Prob (F-statistic):} & 2.69e-276 \\\\\nNo. Observations: & 1298 & \\multicolumn{2}{c|}{AIC:} & 3049. \\\\\nDf Residuals: & 1294 & \\multicolumn{2}{c|}{BIC:} & 3069. \\\\\nDf Model: & 3 & \\multicolumn{2}{c|}{Covariance Type:} & nonrobust \\\\ \n\\hline\n & coef & std err & t & P\\textgreater{}\\textbar{}t\\textbar{} \\\\\nconst & -1.1799 & 0.350 & -3.372 & 0.001 \\\\\n$\\log(RV_{t-1})$ & 0.4325 & 0.152 & 2.851 & 0.004 \\\\\n$\\log(RQ_{t-1}^{1/2}\\cdot RV_{t-1})$ & -0.0892 & 0.071 & -1.250 & 0.212 \\\\\n$\\log(RV_{t-1|t-7})$ & 0.6640 & 0.032 & 20.821 & 0.000 \\\\ \n\\hline\nOmnibus: & 67.731 & \\multicolumn{2}{c|}{Durbin-Watson:} & 1.615 \\\\\nProb(Omnibus): & 0.000 & \\multicolumn{2}{c|}{Jarque-Bera (JB):} & 139.964 \\\\\nSkew: & 0.340 & \\multicolumn{2}{c|}{Prob(JB):} & 4.05e-31 \\\\\nKurtosis: & 4.458 & \\multicolumn{2}{c|}{Cond. No.} & 380. \\\\\n\\hline\n\\end{tabular}}\n\\caption{Short HARQ Model Result (under 10 min interval)}\n\\label{sHARQ_model_result}\n\\end{table}\n\n\\begin{figure}[H]\n\\centering\n\\fbox{\\includegraphics[width=0.9\\linewidth]{figures/sHARQ}}\n\\caption{Short HARQ Model Predictability (under 10min interval)}\n\\label{sHARQ_Model_Predictability}\n\\end{figure}\n\\end{comment}\n\n\\section{Metrics in Bitcoin}\n\nAs Bitcoin is a comparatively new investment, the literature on constructing price/volatility models for it is not yet substantial. Features from the blockchain are therefore selected to improve the forecasts of the existing models.\\\\\n\n
Table \\ref{tb:metrics_model_result} shows that the metrics have an impact on Realized Volatility. However, the condition number of this regression is $9.32\\times10^3$, and such a large condition number is commonly a signal of multicollinearity, which indicates that the metrics selected in this model have overlapping effects.\\\\\n\nThe regression results for all metrics are presented in Appendix Table \\ref{tb:metrics_model_result}, from which we can see that part of the metrics have a significant impact on Realized Volatility. Metrics such as adjustedTxVolume(USD), txCount, marketcap(USD), averageDifficulty, and medianTxValue(USD) are singled out in further regressions, as an extension to the other models, as X factors. To choose the lesser of two evils\\footnote{multicollinearity and overfitting}, one of each pair of similar parameters is dropped empirically, such as txVolume(USD)|adjustedTxVolume(USD), marketCap(USD)|realizedCap(USD), and paymentCount|txCount. Only metrics that have a significant result in Table \\ref{tb:metrics_model_result} are constructed into X-factors.\\\\\n\n\\section{HARQ-X model}\n\nThe HARQ-X model extends the HARQ model by including the metrics selected from Table \\ref{tb:metrics_model_result}. Here X stands for the combination of adjustedTxVolume(USD), txCount, marketcap(USD), averageDifficulty, and medianTxValue(USD).\\\\\n\n\\begin{equation}\n\\begin{split}\n\\left\\{\n             \\begin{array}{ll}\n             RV_t=&\\beta_0+\\underbrace{(\\beta_1+\\beta_{1Q} RQ_{t-1}^{1/2})}_{\\beta_{1,t}}RV_{t-1}+\\underbrace{(\\beta_2+\\beta_{2Q} RQ_{t-1|t-7}^{1/2})}_{\\beta_{2,t}}RV_{t-1|t-7}+X_{t-1}+u_t\\\\\n             X_t=&adjustedTxVolume(USD)_t+txCount_t+marketcap(USD)_t\\\\&+medianTxValue(USD)_t+averageDifficulty_t\\\\\n             RQ_t=&\\frac{M}{3}\\sum_{i=1}^Mr_{t,i}^4,\\;RQ_{t-1|t-k}=\\frac{1}{k}\\sum_{j=1}^kRQ_{t-j}\n             \\end{array}\n\\right.\n\\end{split}\n\\label{eq:HARQx_model}\n\\end{equation}\n\nWithin this HARQ-X model, all metrics inside X have been normalised via $\\frac{metric-\\mu}{\\delta}$, where $\\mu$ stands for the average of the specific metric and $\\delta$ for its standard deviation, in Equation \\ref{eq:HARQx_model}. Normalising the metrics makes their coefficients convenient to compare.\n\n
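A short Python sketch of this normalisation and of assembling the X block, assuming a pandas DataFrame \\texttt{metrics} whose columns include the five selected blockchain metrics:\n\\begin{verbatim}\n# Sketch: z-score normalisation of the selected metrics; the lagged\n# block X_{t-1} then enters the regression for RV_t.\ncols = ["adjustedTxVolume(USD)", "txCount", "marketcap(USD)",\n        "medianTxValue(USD)", "averageDifficulty"]\n\ndef x_block(metrics):\n    z = (metrics[cols] - metrics[cols].mean())/metrics[cols].std()\n    return z.shift(1)     # X_{t-1}\n\\end{verbatim}\n\n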
\\\\ \n\\multicolumn{2}{|c|}{Df Model:} & 9 & \\multicolumn{2}{c|}{Covariance Type:} & nonrobust \\\\ \\hline\n\\multicolumn{2}{|c|}{} & coef & std err & t & P>|t| \\\\ \n\\multicolumn{2}{|c|}{const} & -2.1669 & 0.565 & -3.839 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log RV_{t-1}$} & 0.2714 & 0.176 & 1.542 & 0.123 \\\\ \n\\multicolumn{2}{|c|}{$\\log RQ_{t-1}^{1/2}\\cdot RV_{t-1}$} & -0.0226 & 0.082 & -0.276 & 0.782 \\\\ \n\\multicolumn{2}{|c|}{$\\log (RV_{t-1|t-7})$} & 0.8003 & 0.134 & 5.992 & 0.000 \\\\ \n\\multicolumn{2}{|c|}{$\\log (RQ_{t-1|t-7}^{1/2}\\cdot RV_{t-1|t-7})$} & -0.0684 & 0.059 & -1.164 & 0.245 \\\\ \n\\multicolumn{2}{|c|}{adjustedTxVolume(USD)} & 0.2988 & 0.386 & 0.774 & 0.439 \\\\ \n\\multicolumn{2}{|c|}{txCount} & -0.3945 & 0.281 & -1.402 & 0.161 \\\\ \n\\multicolumn{2}{|c|}{marketcap(USD)} & -0.3222 & 0.262 & -1.231 & 0.218 \\\\ \n\\multicolumn{2}{|c|}{medianTxValue(USD)} & 1.1473 & 0.565 & 2.031 & 0.042 \\\\ \n\\multicolumn{2}{|c|}{averageDifficulty} & 0.2655 & 0.208 & 1.275 & 0.202 \\\\ \\hline\n\\multicolumn{2}{|c|}{Omnibus:} & 72.995 & \\multicolumn{2}{c|}{Durbin-Watson:} & 1.622 \\\\ \n\\multicolumn{2}{|c|}{Prob(Omnibus):} & 0.000 & \\multicolumn{2}{c|}{Jarque-Bera (JB):} & 147.482 \\\\ \n\\multicolumn{2}{|c|}{Skew:} & 0.374 & \\multicolumn{2}{c|}{Prob(JB):} & 9.43e-33 \\\\ \n\\multicolumn{2}{|c|}{Kurtosis:} & 4.473 & \\multicolumn{2}{c|}{Cond. No.} & 997. \\\\ \\hline\n\\end{tabular}}\n\\caption{HARQ-X Model Result (under 10 min interval)} \\label{harq_x_model_result}\n\\end{table}\n\\end{comment}\n\n\\section{HAR-X model}\n\nTo compare the effects of the Metrics with those of the Quarticities, the HAR-X model combines the Metrics with the plain HAR model.\n\n\\begin{equation}\n\\begin{split}\n\\left\\{\n             \\begin{array}{ll}\n             RV_t=&\\beta_0 + \\beta_1 RV_{t-1} +\\beta_2 RV_{t-1|t-7} + \\beta_3 RV_{t-1|t-{30}} + X_{t-1} + u_t\\\\\n             X_t=&adjustedTxVolume(USD)_t+txCount_t+marketcap(USD)_t\\\\&+medianTxValue(USD)_t+averageDifficulty_t\\\\\n             \\end{array}\n\\right.\n\\end{split}\n\\end{equation}\n\n\n\\begin{comment}\n\\begin{table}[H]\n\\setlength{\\tabcolsep}{5mm}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multicolumn{5}{|c|}{HAR-X Model Result} \\\\ \\hline\nDep. Variable: & logRV & \\multicolumn{2}{c|}{R-squared:} & 0.731 \\\\\nModel: & OLS & \\multicolumn{2}{c|}{Adj. 
R-squared:}     & 0.729     \\\\\nMethod:                & LeastSquares & \\multicolumn{2}{c|}{F-statistic:}        & 437.0     \\\\\nLog-Likelihood:        & -1516.0      & \\multicolumn{2}{c|}{Prob (F-statistic):} & 0.00      \\\\\nNo. Observations:      & 1298         & \\multicolumn{2}{c|}{AIC:}                & 3050.     \\\\\nDf Residuals:          & 1298         & \\multicolumn{2}{c|}{BIC:}                & 3096.     \\\\\nDf Model:              & 8            & \\multicolumn{2}{c|}{Covariance Type:}    & nonrobust \\\\ \\hline\n                       & coef         & std err              & t                 & P>|t|     \\\\\nconst                  & -1.8182      & 0.451                & -4.031            & 0.000     \\\\\n$\\log RV_{t-1}$        & 0.2312       & 0.030                & 7.818             & 0.000     \\\\\n$\\log (RV_{t-1|t-7})$  & 0.6751       & 0.039                & 17.437            & 0.000     \\\\\n$\\log (RV_{t-1|t-30})$ & -0.0304      & 0.034                & -0.885            & 0.376     \\\\\nadjustedTxVolume(USD)  & 0.3222       & 0.385                & 0.836             & 0.403     \\\\\ntxCount                & -0.5050      & 0.279                & -1.810            & 0.071     \\\\\nmarketcap(USD)         & -0.2751      & 0.262                & -1.048            & 0.295     \\\\\nmedianTxValue(USD)     & 1.0979       & 0.565                & 1.945             & 0.052     \\\\\naverageDifficulty      & 0.2657       & 0.209                & 1.274             & 0.203     \\\\ \\hline\nOmnibus:               & 61.021       & \\multicolumn{2}{c|}{Durbin-Watson:}      & 1.622     \\\\\nProb(Omnibus):         & 0.000        & \\multicolumn{2}{c|}{Jarque-Bera (JB):}   & 130.365   \\\\\nSkew:                  & 0.292        & \\multicolumn{2}{c|}{Prob(JB):}           & 4.92e-29  \\\\\nKurtosis:              & 4.438        & \\multicolumn{2}{c|}{Cond. No.}           & 413.      \\\\ \\hline\n\\end{tabular}}\n\\caption{HAR-X Model Result (under 10 min interval)} \\label{har_x_model_result}\n\\end{table}\n\\end{comment}\n", "meta": {"hexsha": "685db7b364ec0a99433a6a264a06f47933be32e0", "size": 23476, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Reports/chapters/methodology.tex", "max_stars_repo_name": "YangHongshen/7setFfFbF6TvFZjX", "max_stars_repo_head_hexsha": "e1311da7e86a0f0f20f5ef436381d4b147727f10", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Reports/chapters/methodology.tex", "max_issues_repo_name": "YangHongshen/7setFfFbF6TvFZjX", "max_issues_repo_head_hexsha": "e1311da7e86a0f0f20f5ef436381d4b147727f10", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Reports/chapters/methodology.tex", "max_forks_repo_name": "YangHongshen/7setFfFbF6TvFZjX", "max_forks_repo_head_hexsha": "e1311da7e86a0f0f20f5ef436381d4b147727f10", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.4785478548, "max_line_length": 748, "alphanum_fraction": 0.4672857386, "num_tokens": 7370, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599562, "lm_q2_score": 0.8221891370573388, "lm_q1q2_score": 0.5802887728090219}}
{"text": "\\lab{Speech Recognition using CDHMMs}{Speech Recognition using CDHMMs}\n\\objective{Understand how speech recognition via CDHMMs works, and implement a simplified speech recognition system.}\n\n\\subsection{Continuous Density Hidden Markov Models}\nSome of the most powerful applications of HMMs (speech and voice recognition) result from allowing the observation space to be continuous instead of discrete.\nThese are called Continuous Density Hidden Markov Models (CDHMMs), and they have two standard formulations: Gaussian HMMs and Gaussian Mixture Model HMMs (GMMHMMs).\nIn fact, the former is a special case of the latter, so we will just discuss GMMHMMs in this lab.\n\nIn order to understand GMMHMMs, we need to be familiar with a particular continuous, multivariate distribution called a \\emph{mixture of Gaussians}.\nA mixture of Gaussians is a distribution composed of several Gaussian (or Normal) distributions with corresponding weights.\nSuch a distribution is parameterized by the number of mixture components $M$, the dimension $N$ of the normal distributions involved, a collection of component weights\n$\\{c_1, \\ldots, c_M\\}$ that are nonnegative and sum to 1, and a collection of mean and covariance parameters $\\{(\\mu_1,\\Sigma_1), \\ldots, (\\mu_M,\\Sigma_M)\\}$ for each Gaussian\ncomponent. To sample from a mixture of Gaussians, one first chooses the mixture component $i$ according to the probability weights $\\{c_1,\\ldots,c_M\\}$, and then one\nsamples from the normal distribution $\\mathcal{N}(\\mu_i, \\Sigma_i)$. The probability density function for a mixture of Gaussians is given by\n\\[\nf(x) = \\sum_{i=1}^M c_iN(x;\\,\\mu_i,\\Sigma_i),\n\\]\nwhere $N(\\cdot;\\,\\mu_i,\\Sigma_i)$ denotes the probability density function for the normal distribution $\\mathcal{N}(\\mu_i, \\Sigma_i)$.\nSee Figure \\ref{fig:mixture} for the plot of such a density curve.\nNote that a mixture of Gaussians with just one mixture component reduces to a simple normal distribution, and so a GMMHMM with just one mixture component\nis simply a Gaussian HMM.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{mixture}\n\\caption{The probability density function of a mixture of Gaussians with four components.}\n\\label{fig:mixture}\n\\end{figure}\n\n\nIn a GMMHMM, we seek to model a hidden state sequence $\\{\\mathbf{x}_1,\\ldots,\\mathbf{x}_T\\}$ and a corresponding observation sequence $\\{O_1,\\ldots,O_T\\}$, just as with discrete HMMs.\nThe major difference, of course, is that each observation $O_t$ is a real-valued vector of length $K$ distributed according to a mixture of Gaussians with $M$ components.\nThe parameters for such a model include the initial state distribution $\\pi$ and the state transition matrix $A$ (just as with discrete HMMs).\nAdditionally, for each state $i=1,\\ldots,N$, we have component weights $\\{c_{i,1},\\ldots,c_{i,M}\\}$, component means $\\{\\mu_{i,1},\\ldots,\\mu_{i,M}\\}$, and component covariance matrices\n$\\{\\Sigma_{i,1},\\ldots,\\Sigma_{i,M}\\}$.\n\nLet's define a full GMMHMM with $N=2$ states, $K = 3$, and $M=3$ components.\n\\begin{lstlisting}\n>>> import numpy as np\n>>> A = np.array([[.65, .35], [.15, .85]])\n>>> pi = np.array([.8, .2])\n>>> weights = np.array([[.7, .2, .1], [.1, .5, .4]])\n>>> means1 = np.array([[0., 17., -4.], [5., -12., -8.], [-16., 22., 2.]])\n>>> means2 = np.array([[-5., 3., 23.], [-12., -2., 14.], [15., -32., 0.]])\n>>> means = np.array([means1, means2])\n>>> covars1 = np.array([5*np.eye(3), 7*np.eye(3), 
np.eye(3)])\n>>> covars2 = np.array([10*np.eye(3), 3*np.eye(3), 4*np.eye(3)])\n>>> covars = np.array([covars1, covars2])\n>>> gmmhmm = [A, weights, means, covars, pi]\n\\end{lstlisting}\n\nWe can draw a random sample from the GMMHMM corresponding to the second state as follows:\n\\begin{lstlisting}\n>>> sample_component = np.argmax(np.random.multinomial(1, weights[1,:]))\n>>> sample = np.random.multivariate_normal(means[1, sample_component, :], covars[1, sample_component, :, :])\n\\end{lstlisting}\n\nFigure \\ref{fig:samples} shows an observation sequence generated from a GMMHMM with one mixture component and two states.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{samples}\n\\caption{An observation sequence generated from a GMMHMM with one mixture component and two states.\nThe observations (points in the plane) are shown as solid dots, the color indicating from which\nstate they were generated. The connecting dotted lines indicate the sequential order of the observations.}\n\\label{fig:samples}\n\\end{figure}\n\n\\begin{problem}\nWrite a function which accepts a GMMHMM in the format above as well as an integer $n\\_sim$, and which simulates the GMMHMM process, generating $n\\_sim$ different observations.\nDo so by implementing the following function declaration.\n\\begin{lstlisting}\ndef sample_gmmhmm(gmmhmm, n_sim):\n    \"\"\"\n    Simulate sampling from a GMMHMM.\n\n    Returns\n    -------\n    states : ndarray of shape (n_sim,)\n        The sequence of states\n    obs : ndarray of shape (n_sim, K)\n        The generated observations (column vectors of length K)\n    \"\"\"\n    pass\n\\end{lstlisting}\n\\end{problem}\n\nThe classic problems for which we normally use discrete observation HMMs can also be solved by using CDHMMs, though with continuous observations it is much more difficult to keep things numerically stable.\nWe will not have you implement any of the three problems for CDHMMs yourself; instead, you will use a stable module we will provide for you.\nNote, however, that the techniques for solving these problems are still based on the forward-backward algorithm; the implementation may be trickier, but the mathematical\nideas are virtually the same as those for discrete HMMs.\n\n\\subsection*{Speech Recognition and Hidden Markov Models}\nHidden Markov Models are the basis of modern speech recognition systems. However, a fair amount of signal processing must precede the HMM stage, and there are other\ncomponents of speech recognition, such as language models, that we will not address in this lab.\n\nThe basic signal processing and HMM stages of the speech recognition system that we develop in this lab can be summarized as follows:\nThe audio to be processed is divided into small frames of approximately $30$ ms. These are short enough that we can treat the signal as being constant over these intervals. We can then take this framed signal and, through a series of transformations, represent it by mel-frequency cepstral coefficients (MFCCs), keeping only the first $K$ (say $K = 10$). Viewing these MFCCs as continuous observations in $\\mathbb{R}^{K}$, we can train a GMMHMM on sequences of MFCCs for a given word, spoken multiple times. Doing this for several words, we have a collection of GMMHMMs, one for each word. Given a new speech signal, after framing and decomposing it into its MFCC array, we can score the signal against each GMMHMM, returning the word whose GMMHMM scored the highest.\n\n
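In code, that final recognition step is just an arg-max over model scores. The following is a minimal sketch (not part of the lab's provided code): it assumes a dictionary \\li{models} mapping each word to its trained GMMHMM and a raw audio array \\li{sample}, both hypothetical names, and uses only the \\li{MFCC.extract} and \\li{score} interfaces introduced in this lab.\n\\begin{lstlisting}\n>>> import MFCC\n>>> # models: dict mapping word -> trained GMMHMM (hypothetical name)\n>>> # sample: 1-D audio array for the utterance to classify\n>>> obs = MFCC.extract(sample)\n>>> # keep the word whose model assigns the highest log-likelihood\n>>> prediction = max(models, key=lambda word: models[word].score(obs))\n\\end{lstlisting}\n\n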
Industrial-grade speech recognition systems do not train a GMMHMM for each word in a vocabulary (that would be ludicrous for a large vocabulary), but rather on \\emph{phonemes}, or distinct sounds. The English language has $44$ phonemes, yielding $44$ different GMMHMMs. As you could imagine, this greatly facilitates the problem of speech recognition.  Each and every word can be represented by some combination of these $44$ distinct sounds.  By correctly classifying a signal by its phonemes, we can determine what word was spoken. Doing so is beyond the scope of this lab, so we will simply train GMMHMMs on five words/phrases: biology, mathematics, political science, psychology, and statistics.\n\n\\begin{problem}\nObtain $30$ (or more) recordings for each of the words/phrases \\emph{mathematics}, \\emph{biology}, \\emph{political science}, \\emph{psychology}, and \\emph{statistics}.\nThese audio samples should be 2 seconds in duration, recorded at a rate of 44100 samples per second, with samples stored as 16-bit signed integers in WAV format.\nLoad the recordings into Python using {\\tt scipy.io.wavfile.read}.\n\nIf the audio files have two channels, average these channels to obtain an array of length $88200$ for each sample.\nExtract the MFCCs from each sample using code from the file {\\tt MFCC.py}:\n\\begin{lstlisting}\n>>> import MFCC\n>>> # assume sample is an array of length 88200\n>>> mfccs = MFCC.extract(sample)\n\\end{lstlisting}\nStore the MFCCs for each word in a separate list. You should have five lists, each containing at least $30$ MFCC arrays, corresponding to each of the five words\nunder consideration.\n\\end{problem}\n\nFor a specific word, given enough distinct samples of that word (decomposed into MFCCs), we can train a GMMHMM.\nRecall, however, that the training procedure does not always produce a very effective model, as it can get stuck in a poor local minimum. 
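A standard remedy is to train several times from different random starting parameters and keep the best run. All that requires is a helper that generates random starting points; here is a minimal sketch (assuming NumPy; the function name matches the \\li{initialize()} used below):\n\\begin{lstlisting}\nimport numpy as np\n\ndef initialize(n):\n    \"\"\"Return a random initial state distribution and a random\n    row-stochastic transition matrix for an n-state model.\"\"\"\n    startprob = np.random.dirichlet(np.ones(n))         # entries sum to 1\n    transmat = np.random.dirichlet(np.ones(n), size=n)  # each row sums to 1\n    return startprob, transmat\n\\end{lstlisting}\n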
\nFollowing this strategy, we will train 10 GMMHMMs for each word (using a different random initialization of the parameters each time)\nand keep the model with the highest log-likelihood.\n\nFor training, we will use the file we have provided called {\\tt gmmhmm.py}, as this is a stable implementation of GMMHMM algorithms.\nTo facilitate random restarts, we need a function, like the \\li{initialize()} sketched above, that provides initializations for the initial state distribution and the transition matrix.\n\nLet \\li{samples} be a list of arrays, where each array is the output of the MFCC extraction for a speech sample.\nUsing a function \\li{initialize()} that returns a random initial state distribution and (row-stochastic) transition matrix, we can train a GMMHMM with $5$ states\nand $3$ mixture components and view its log-likelihood as follows:\n\\begin{lstlisting}\n>>> import gmmhmm\n>>> startprob, transmat = initialize(5)\n>>> model = gmmhmm.GMMHMM(n_components=5, n_mix=3, transmat=transmat, startprob=startprob, cvtype='diag')\n>>> # these values for covars_prior and var should work well for this problem\n>>> model.covars_prior = 0.01\n>>> model.fit(samples, init_params='mc', var=0.1)\n>>> print(model.logprob)\n\\end{lstlisting}\n\n\\begin{problem}\nPartition each list of MFCCs into a training set of 20 samples, and a test set of the remaining 10 samples.\n\nUsing the training sets, train a GMMHMM on each of the words from the previous problem with at least $10$ random restarts, keeping the best model for each word (the one with the highest log-likelihood).\nThis process may take several minutes.  Since you will not want to run this more than once, you will want to save the best model for each word to disk using the {\\tt pickle} module so that you can use it later.\n\\end{problem}\n\nGiven a trained model, we would like to compute the log-likelihood of a new sample.\nLetting \\li{obs} be an array of MFCCs for a speech sample, we do this as follows:\n\\begin{lstlisting}\n>>> score = model.score(obs)\n\\end{lstlisting}\nWe classify a new speech sample by scoring it against each of the 5 trained GMMHMMs, and returning the word corresponding to the GMMHMM with the highest score.\n\\begin{problem}\nClassify the 10 test samples for each word. \nHow does your system perform? Which words are the hardest to correctly classify?\nMake a dictionary containing the accuracy of the classification of your five testing sets.  Specifically, the words/phrases will be the keys, and the values will be the percent accuracy. \n\\end{problem}\n\n\\begin{comment}\n\nHidden Markov Models are the basis of modern speech recognition systems. We assume that a short speech signal can be viewed as a stationary signal, and so we can divide a speech signal into small frames (approximately $30$ ms or so). We can take this framed signal and through a series of transformations represent it by mel-frequency cepstral coefficients (MFCCs), keeping only the first $K$ (say $K = 10$). Viewing these MFCCs as continuous observations in $\\mathbb{R}^{K}$, we can train a GMMHMM on sequences of MFCCs for a given word, spoken multiple times. Doing this for several words, we have a collection of GMMHMMs, one for each word. 
Given a new speech signal, after framing and decomposing it into its MFCC array, we can score the signal against each GMMHMM, returning the word whose GMMHMM scored the highest.\n\nIn practice, a GMMHMM is not trained for each word in a vocabulary (that would be ludicrous for a large vocabulary), but rather on \\emph{phonemes}, or distinct sounds. The English language has $44$ phonemes, yielding $44$ different GMMHMMs. By correctly classifying a signal by its phonemes, we can determine what word was spoken. Doing so is beyond the scope of this program, so we will simply train GMMHMMs on five words/phrases: biology, mathematics, political science, psychology, and statistics.\n\nIn a Linux environment, plug in your USB microphone headset and in the command line, enter\n\\texttt{arecord -l}\nNote under which card the USB audio device is listed, as well as the device number. This will likely be card 1, device 0.\n\nTo record audio from the command line and save it as \\texttt{test.wav} in your current working directory, enter\n\\texttt{arecord -f S16\\_LE --rate=44100 -D hw:1,0 -d 2 test.wav}\nThis will record the audio from your USB microphone for $2$ seconds, sampling at a rate of $44100$ samples per second, saving the samples as signed $16$-bit numbers at the file name \\texttt{test.wav}. We can read the wav file into python with the \\li{scipy.io.wavfile} module:\n\\begin{lstlisting}\n>>> import numpy as np\n>>> import scipy.io.wavfile as wavfile\n>>> sample = wavfile.read(\"test.wav\")[1]\n\\end{lstlisting}\n\nThe object \\li{sample} will be a vector of integers of length $88200 = 44100*2$. This in and of itself is not very useful. We must transform the data in a series of steps before it will be useful. We must first break the sample into \\emph{frames}, each of some small length ($30$ ms). We will overlap these frames to smooth out these cutoff areas. In short, each frame will overlap with the previous by $20$ ms. Doing so with a $2$ second sample yields $198$ frames, each frame containing $1323$ values from the original sample, overlapping $882$ values with each sample immediately preceding and following it. We let $f_{n}$ denote the $n^{\\text{th}}$ entry of the frame $f$.\n\nBecause we have overlapped the samples, the most significant part of a frame is the middle $441$ entries. We decrease the effects of the edges of the frame by applying a window function, making the edges nearly zero, while keeping the middle large. We use the Hamming window, defined as\n\\begin{equation*}\nw(n) = 0.54 - 0.46 \\cos \\left( \\frac{2\\pi (n + 0.5)}{N} \\right)\n\\end{equation*}\nwhere $N = 1323$ is the length of the frame. We window each frame, computing $\\tilde{f}_{n} = f_{n}w(n)$.\n\nAfter windowing each frame, we then apply a filtering process known as \\emph{preemphasis} used to improve the signal-to-noise ratio as follows:\n\\begin{equation*}\n\\widehat{f}_{n} = \\tilde{f}_{n} - 0.95 \\tilde{f}_{n-1}\n\\end{equation*}\n\nWe must next perform the discrete fourier transform of each frame, though to get a more refined view of the frequency spectrum, we must pad our vector with zeros to make it of length $2048$. We also will only keep approximately half of the returned vector, then take the square of the magnitude of each complex component.\n\nWe will ultimately be taking the logarithm of this transformed frame, so we must make sure all entries are slightly positive. 
We must then transform this into the \\emph{mel scale}, which is a scale of pitches which listeners tend to judge to be of equal distance from one another. We make this transformation by binning the frames into $40$ bins, according to the overlapping triangular binning scheme in Figure \\ref{fig:binning}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{melfilterbank.jpeg}\n\\caption{Binning scheme to transform linearly spaced frequency to mel scale.}\n\\label{fig:binning}\n\\end{figure}\n\nIf we want $40$ bins where the length of our padded frames is $2048$, then we can compute a \\emph{mel filter bank} $M$ which is a matrix used to bin our transformed frame into the mel scale. We can now bin our frame by multiplying this matrix against the transformed frame. We also take the logarithm of the binned values, after which we compute the discrete cosine transform (DCT) of the returned values via a DCT transformation matrix, and then finally truncate to only keep ten values, called the mel frequency cepstral coefficients (MFCC). Note that we do not typically keep the first MFCC.\n\nNormally we do this for all desired frames, storing each vector of MFCCs as a row in an array, after which we normalize by subtracting the mean of the MFCC rows, and divide by the standard deviations. We have provided you with code that allows you to compute these MFCCs quickly:\n\\begin{lstlisting}\n>>> import MFCC\n>>> mfccs = MFCC.extract(sample)\n\\end{lstlisting}\n\n\\begin{problem}\nRecord the word \\emph{mathematics} as a $2$ second WAV file $20$ times, and decompose each file into its MFCC array, storing these in a list. Do this also for the words/phrases \\emph{biology}, \\emph{political science}, \\emph{psychology}, and \\emph{statistics}. Be sure you complete each word/phrase in the $2$ second window.\n\\end{problem}\n\nFor a specific word, given enough samples of that word decomposed into its MFCCs, we can train a GMMHMM. For this, we will use the file \\li{gmmhmm.py} provided, as this is a stable implementation of GMMHMM algorithms. To facilitate random restarts, we need a function to provide initializations for the initial state distribution and the transition matrix.\n\n\\begin{problem}\nWrite a function that initializes the initial state distribution and transition matrix for a GMMHMM with $n$ states. You may have done this in a previous lab, so feel free to copy and paste.\n\\end{problem}\n\nLet \\li{samples} be a list of arrays, where each row in each array is a set of MFCCs corresponding to a frame in a sample for a given word. Using a function \\li{initialize()} that returns a random initial state distribution and transition matrix, we will show how to train a GMMHMM with $5$ states, each having an output distribution as a GMM with $3$ mixture components. We also look at the log-likelihood of the data, given the trained model.\n\\begin{lstlisting}\n>>> import gmmhmm\n>>> startprob, transmat = initialize(5)\n>>> model = gmmhmm.GMMHMM(n_components=5, n_mix=3, transmat=transmat, startprob=startprob, cvtype='diag')\n>>> model.covars_prior = 0.01\n>>> model.fit(samples, init_params='mc', var=0.1)\n>>> print model.logprob\n\\end{lstlisting}\n\n\\begin{problem}\nTrain a GMMHMM on each of the words you previously recorded with at least $10$ random restarts, keeping the best model for each word. You might want to do this in parallel to save time.\n\\end{problem}\n\nGiven a trained model, we would like to compute the log-likelihood of a new sample. 
Letting \\li{obs} be an array, each row of which is a set of MFCCs for a frame in a sample, we do this as follows:\n\\begin{lstlisting}\n>>> score = model.score(obs)\n\\end{lstlisting}\n\n\\begin{problem}\nWrite a function that records a sample, converts it into its MFCC array, and then scores it on each of the five trained models. Return the word corresponding to the highest scoring model. Test your speech recognition system on each of the five words multiple times. How does it perform?\n\\end{problem}\n\\end{comment} ", "meta": {"hexsha": "4ce7db554d503cef3d5e6def501fc0160df4b723", "size": 19183, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acme-material/Labs/Volume3/CDHMM/CDHMM.tex", "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_issues_repo_path": "acme-material/Labs/Volume3/CDHMM/CDHMM.tex", "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_forks_repo_path": "acme-material/Labs/Volume3/CDHMM/CDHMM.tex", "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.6628787879, "max_line_length": 821, "alphanum_fraction": 0.7646353542, "num_tokens": 4848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256591565729, "lm_q2_score": 0.6893056231680121, "lm_q1q2_score": 0.5802751605837441}}
{"text": "\\section{Cellular approximation, cellular homology, obstruction theory}\n%pset 3 is up in part. Today I'm going to tell you how to think about CW-complexes and work with them, and I won't give a lot of details. Some good sources are lecture notes by Davis-Kirk, and a beautiful book by Glen Bredon, as well as Hatcher.\nIn previous sections, we saw that homotopy groups play well with (maps between) CW-complexes.\nHere, we will study maps between CW-complexes themselves, and prove that they are, in some sense, ``cellular'' themselves.\n\\subsection{Cellular approximation}\n\\begin{definition}\\label{cellularmaps}\n    Let $X$ and $Y$ be CW-complexes, and let $A\\subseteq X$ be a subcomplex.\n    Suppose $f:X\\to Y$ is a continuous map.\n    We say that $f|_{A}$ is skeletal\\footnote{Some would say cellular.} if $f(\\Sigma_n)\\subseteq Y_n$. \n\\end{definition}\nNote that a skeletal map might not take cells in $A$ to cells in $Y$, but it takes $n$-skeleta to $n$-skeleta.\n\\begin{theorem}[Cellular approximation]\\label{cellularapprox}\n    In the setup of Definition \\ref{cellularmaps}, the map $f$ is homotopic to some other\n    continuous map $f^\\prime:X\\to Y$, relative to $A$, such that $f^\\prime$ is skeletal on all of $X$.\n\\end{theorem}\nTo prove this, we need the following lemma.\n\\begin{lemma}[Key lemma]\n    Any map $(D^n,S^{n-1})\\to (Y,Y_{n-1})$ factors as:\n    \\begin{equation*}\n\t\\xymatrix{\n\t(D^n,S^{n-1})\\ar[r]\\ar@{-->}[dr] & (Y,Y_{n-1})\\\\\n\t& (Y_n,Y_{n-1})\\ar[u]\n\t}\n    \\end{equation*}\n\\end{lemma}\n\\begin{proof}[``Proof.'']\n    Since $D^n$ is compact, we know that $f(D^n)$ must lie in some finite subcomplex $K$ of $Y$.\n    The map $D^n\\to K$ might hit some top-dimensional cell $e^m\\subseteq K$, which does not have anything attached to it;\n    hence, we can homotope this map to miss a point, so that it contracts onto a lower-dimensional cell.\n    Iterating this process gives the desired result.\n\\end{proof}\nUsing this lemma, we can conclude the cellular approximation theorem.\n\\begin{proof}[``Proof'' of Theorem \\ref{cellularapprox}]\n    We will construct the homotopy $f\\simeq f^\\prime$ one cell at a time.\n    Note that we can replace the space $A$ by the subspace to which we have extended the homotopy.\n    \n    Consider a single cell attachment $A\\to A\\cup D^m$; then, we have\n    \\begin{equation*}\n\t\\xymatrix{\n\tA\\ar[d]_{\\text{skeletal}}\\ar[r] & A\\cup D^m\\ar[dl]^{\\text{ may not be skeletal}}\\\\\n\tY &\n\t}\n    \\end{equation*}\n    Using the ``compression lemma'' from above, the rightmost map factors (up to homotopy) as the composite\n    $A\\cup D^m\\to Y_m \\to Y$.\n    Unfortunately, we have not extended this map to the whole of $X$, although we could do this\n    if we knew that the inclusion of a subcomplex is a cofibration.\n    But this is true: there is a cofibration $S^{n-1}\\to D^n$, and so any pushout of these maps is a cofibration!\n    This allows us to extend; we now win by iterating this procedure.\n\\end{proof}\nAs a corollary, we find:\n\\begin{exercise}\n    The pair $(X,X_n)$ is $n$-connected.\n\\end{exercise}\n\\subsection{Cellular homology}\nLet $(X,A)$ be a relative CW-complex with $A\\subseteq X_{n-1}\\subseteq X_n\\subseteq\\cdots\\subseteq X$.\nIn the previous part \\todo{provide a link!} that $H_\\ast(X_n,X_{n-1})\\simeq \\widetilde{H}_\\ast(X_n/X_{n-1})$.\nMore generally, if $B\\to Y$ is a cofibration, there is an isomorphism (see \\cite[p. 
Using the resulting chain complex, denoted $C_\\ast(X,A)$, one can prove that there is an isomorphism\n$$H_n(X,A)\\simeq H_n(C_\\ast(X,A)).$$\n(In the previous part\\todo{provide a link!}, we proved this for CW-pairs, but not for relative CW-complexes.)\nThe incredibly useful cellular approximation theorem therefore tells us that the effect of maps on homology can be computed.\n\nOf course, the same story runs for cohomology: one gets a chain complex which, in dimension $n$, is given by\n$$C^n(X,A;\\pi) = \\Hom(C_n(X,A),\\pi) = \\Map(\\Sigma_n,\\pi),$$\nwhere $\\pi$ is any abelian group.\n\\subsection{Obstruction theory}\nUsing the tools developed above, we can attempt to answer some concrete, and useful, questions.\n\\begin{question}\n    Let $f:A \\to Y$ be a map from a space $A$ to $Y$.\n    Suppose $(X,A)$ is a relative CW-complex.\n    When can we find an extension in the diagram below?\n    \\begin{equation*}\n\t\\xymatrix{\n\t    X\\ar@{-->}[dr] & \\\\\n\t    A\\ar@{^(->}[u]\\ar[r]_f & Y\n\t    }\n    \\end{equation*}\n\\end{question}\nThe lower level obstructions can be worked out easily:\n\\begin{equation*}\n    \\xymatrix{\n\tA\\ar[d]\\ar@{^(->}[r] & X_0\\ar@{-->}[dl]\\ar@{^(->}[r] & X_1\\ar@{-->}[dll]\\\\\n\t\\emptyset\\neq Y & &\n    }\n\\end{equation*}\nThus, for instance, if two points in $X_0$ are connected in $X_1$, we only have to check that they are also connected in $Y$.\n\nFor $n\\geq 2$, we can form the diagram:\n\\begin{equation*}\n    \\xymatrix{\n\t\\coprod_{\\alpha\\in\\Sigma_n}S^{n-1}_\\alpha\\ar[r]^f\\ar@{^(->}[d] & X_{n-1}\\ar[r]^g\\ar[d] & Y\\\\\n\t\\coprod_{\\Sigma_n}D^n_\\alpha\\ar[r] & X_n\\ar@{-->}[ur] & \n    }\n\\end{equation*}\nThe desired extension exists if each composite $S^{n-1}_\\alpha\\xar{f_\\alpha} X_{n-1}\\to Y$ is nullhomotopic.\n\nClearly, $g\\circ f_\\alpha\\in [S^{n-1},Y]$.\nTo simplify the discussion, let us assume that $Y$ is simple;\nthen, Exercise \\ref{simplequotient} says that $[S^{n-1},Y] = \\pi_{n-1}(Y)$.\nThis procedure begets a map $\\Sigma_n\\xrightarrow{\\theta}\\pi_{n-1}(Y)$, which is an $n$-cochain, i.e., an element of\n$C^n(X,A;\\pi_{n-1}(Y))$.\nIt is clear that $\\theta = 0$ if and only if the map $g$ extends to $X_n\\to Y$.\n\\begin{prop}\n    $\\theta$ is a cocycle in $C^n(X,A;\\pi_{n-1}(Y))$, called the ``obstruction cocycle''.\n\\end{prop}\n\\begin{proof}\n    $\\theta$ gives a map $H_n(X_n,X_{n-1})\\to \\pi_{n-1}(Y)$.\n    We would like to show that the composite\n    $$H_{n+1}(X_{n+1},X_n)\\xrightarrow{\\partial}H_n(X_n)\\to H_n(X_n,X_{n-1})\\xrightarrow{\\theta}\\pi_{n-1}(Y)$$\n    is trivial.\n    %OK, we know that $\\pi_n(X_n,X_{n-1})\\to H_n(X_n,X_{n-1})$ is surjective. This relative homotopy group holds the characteristic maps, doesn't it? 
So picking a representative back there is no problem; now I have the lexseq of a pair, which gives:\n    We have the long exact sequence in homotopy of a pair (see Equation \\eqref{lexseqhomotopy}):\n    \\begin{equation*}\n\t\\xymatrix{\n\t    \\pi_{n+1}(X_{n+1},X_n)\\ar[d]\\ar@{->>}[r] & H_{n+1}(X_{n+1},X_n)\\ar[d]^\\partial\\\\\n\t    \\pi_n(X_n)\\ar[r]\\ar[d] & H_n(X_n)\\ar[d]\\\\\n\t    \\pi_n(X_n,X_{n-1})\\ar@{->>}[r] \\ar[d]_\\partial & H_n(X_n,X_{n-1})\\ar[d]^\\theta\\\\\n\t    \\pi_{n-1}(X_{n-1})\\ar[r]_{g_\\ast} & \\pi_{n-1}(Y)\n\t    }\n    \\end{equation*}\n    This diagram commutes, so $\\theta$ is indeed a cocycle.\n\\end{proof}\nOur discussion above allows us to conclude:\n\\begin{theorem}\n    Let $(X,A)$ be a relative CW-complex and $Y$ a simple space.\n    Let $g:X_{n-1}\\to Y$ be a map from the $(n-1)$-skeleton of $X$.\n    Then $g|_{X_{n-2}}$ extends to $X_n$ if and only if $[\\theta(g)]\\in H^n(X,A;\\pi_{n-1}(Y))$ is zero.\n\\end{theorem}\n\\begin{corollary}\n    If $H^n(X,A;\\pi_{n-1}(Y)) = 0$ for all $n>2$, then any map $A\\to Y$ extends to a map $X\\to Y$ (up to homotopy\\footnote{In fact, this condition is unnecessary, since the inclusion\n    of a subcomplex is a cofibration.});\n    in other words, there is a dotted lift in the following diagram:\n    \\begin{equation*}\n\t\\xymatrix{\n\tA\\ar[r]\\ar[d] & Y\\\\\n\tX\\ar@{-->}[ur] & \n\t}\n    \\end{equation*}\n\\end{corollary}\nFor instance, every map $A\\to Y$ factors through the cone if $H^n(CA,A;\\pi_{n-1}(Y)) \\simeq \\widetilde{H}^{n-1}(A;\\pi_{n-1}(Y)) = 0$.\n%Of course, one still needs to know the homotopy groups of $Y$ are, but often, you can get away even with partial information.\n%\\begin{remark}\n%    Fix a prime $p$. Later we'll see (I hope) that $Y$ is simply connected (maybe even simple?), and suppose that $\\widetilde{H}_\\ast(Y;\\Z_{(p)}) = 0$ (i.e., no $\\Z$-summands and no $\\Z/p^k$-summands). Then $\\pi_\\ast(Y)\\otimes\\Z_{(p)} = 0$ too. So if $H_\\ast(A;\\Z/\\ell\\Z) = 0$ for $\\ell\\neq p$, then $[A,Y]\\simeq \\ast$.\n%\\end{remark}\n", "meta": {"hexsha": "394c4fdc554e819c54a047e8172b00b721bd61a8", "size": 8418, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "906/lec-50-cellular-approximation.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "906/lec-50-cellular-approximation.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "906/lec-50-cellular-approximation.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 53.9615384615, "max_line_length": 320, "alphanum_fraction": 0.6652411499, "num_tokens": 2958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.6893056167854461, "lm_q2_score": 0.8418256512199033, "lm_q1q2_score": 0.5802751497399452}}
{"text": "\\documentclass[9pt,twocolumn]{extarticle}\n\n\\usepackage[hmargin=0.5in,tmargin=0.5in]{geometry}\n\\usepackage{amsmath,amssymb}\n\\usepackage{times}\n\\usepackage{graphicx}\n\\usepackage{subfigure}\n\n\\usepackage{cleveref}\n\\usepackage{color}\n\\newcommand{\\TODO}[1]{\\textcolor{red}{#1}}\n\n\\newcommand{\\FPP}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\argmin}{\\operatornamewithlimits{arg\\ min}}\n\\author{Siwang Li}\n\n\\title{self-collision handling} \n\n%% document begin here\n\\begin{document}\n\\maketitle\n\n\\setlength{\\parskip}{0.5ex}\n\n\\section{Introduction}\n\n\\begin{eqnarray*}\n   x_0 &=& \\xi_1x_1+\\xi_2x_2+\\xi_3x_3+(1-\\xi_1-\\xi_2-\\xi_3)x_4 \\\\\n   G &=& (x_1-x_4, x_2-x_4, x_3-x_4)\\\\\n   x &=& (x_0^T, x_1^T, x_2^T, x_3^T)^T\\\\\n   x_0 &=& G(x)\\xi+x_4 \\\\\n   \\xi &=& G^{-1}(x)(x_0-x_4) \\\\\n   D &=& (d_1-d_4, d_2-d_4, d_3-d_4)\\\\\n   \\TODO{\\hat{c}(x)} &=& \\xi_1d_1+\\xi_2d_2+\\xi_3d_3+(1-\\xi_1-\\xi_2-\\xi_3)d_4\\\\\n   &=& DG^{-1}(x)(x_0-x_4)-d_4\n\\end{eqnarray*}\n\\begin{equation}\n  \\TODO{\\tilde{x} = \\arg \\min_{\\tilde{x}} \\frac{1}{2}\\|\\tilde{x}-x^{*}\\|_2^2 \\text{ s.t. } \\hat{c}(x) = 0}\n\\end{equation}\n\\begin{eqnarray*}\n  \\hat{c}_i(x_i) &=& \\hat{c}(\\tilde{x}) + \\nabla_{x_i} \\hat{c}(\\tilde{x})^T(x_i-\\tilde{x}_i)\\\\\n  &=& n_i^Tx_i + p_i\n\\end{eqnarray*}\n\\begin{eqnarray*}\n  \\nabla_{x} \\hat{c}(\\tilde{x}) &=& (n_0^T, n_1^T, n_2^T, n_3^T, n_4^T)^T\\\\\n  \\TODO{n_i} &=& \\nabla_{x_i} \\hat{c}(\\tilde{x})\\\\\n  \\TODO{p_i} &=& \\hat{c}(\\tilde{x}) - n_i^T\\tilde{x}_i\n\\end{eqnarray*}\n\n\\begin{eqnarray*}\n  x_{k+1}-x^* + J^T \\lambda &=& 0\\\\\n  \\hat{c}(x_k)+J(x_{k+1}-x_k) &=& 0\\\\\n  x_{k+1} + J^T \\lambda &=& x^*\\\\\n  J x_{k+1} &=& J x_k-\\hat{c}(x_k)\\\\\n  \\left( \\begin{array}{cc}\n    I & J^T\\\\\n    J & O\n  \\end{array} \\right)\n  \\left( \\begin{array}{c}\n    x_{k+1}\\\\\n    \\lambda\n  \\end{array} \\right) &=& \n\\left( \\begin{array}{c}\n  x^*\\\\\nJ x_k-\\hat{c}(x_k)  \n\\end{array} \\right)\n\\end{eqnarray*}\n\n\\end{document}\n", "meta": {"hexsha": "bcf20b674e04a7d394cc6517d4e0f6de1c8e07b6", "size": 1857, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tetrahedron_coll.tex", "max_stars_repo_name": "simba518/MPRGPSolver", "max_stars_repo_head_hexsha": "4e86d5af462eb6ca4e98397af8984239b5e5c844", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2015-07-20T21:09:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T02:48:58.000Z", "max_issues_repo_path": "doc/tetrahedron_coll.tex", "max_issues_repo_name": "simba518/MPRGPSolver", "max_issues_repo_head_hexsha": "4e86d5af462eb6ca4e98397af8984239b5e5c844", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/tetrahedron_coll.tex", "max_forks_repo_name": "simba518/MPRGPSolver", "max_forks_repo_head_hexsha": "4e86d5af462eb6ca4e98397af8984239b5e5c844", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-10-10T06:03:44.000Z", "max_forks_repo_forks_event_max_datetime": "2015-10-10T06:03:44.000Z", "avg_line_length": 26.5285714286, "max_line_length": 106, "alphanum_fraction": 0.5869682283, "num_tokens": 933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8459424334245618, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5802737566069172}}
{"text": "\\section{Historical Introduction}\r\n\\subsection{Black-body radiation}\r\nThe birth of quantum mechanics finds its way through its ability to explain phenomena that cannot be explained in classical physics.\r\nThe first example of which is the black-body radiation.\r\n\\begin{definition}\r\n    A black body is a totally absorbing heated body at thermal equilibrium $T$.\r\n\\end{definition}\r\nWhat people towards the end of 19th century intended to measure was the energy density of light emitted by a black body at this equilibrium temperature $\\rho(\\nu,T)$ where $\\nu$ is the frequency of the light.\r\nExperimentally, the graph of $\\rho$ against $\\nu$ fixing $T$ always looks like a curve that grows parabolically at the beginning near $0$ but decreases exponentially after some point.\\\\\r\nMany physicists tried to model this curve.\r\nRayleight (1900) made the first attempt in the problem.\r\nHe assumed that the black body is a cobe with side length $L$.\r\nThe radiation is taken as a superposition of plane waves in the form $e^{i\\underline{k}\\cdot\\underline{x}}$ where $\\underline{k}$ is the called the wavenumber of the plane wave.\r\nObviously, we have $|\\underline{k}|=2\\pi/\\lambda$, so $\\nu=|\\underline{k}|x/(2\\pi)$.\\\\\r\nHowever, we cannot take the wavenumbers arbitarily, as we require the wave to vanish at the boundary to make a thermal equilibrium.\r\nWith this condition, we obtain\r\n$$\\underline{k}=\\frac{2\\pi}{L}\\underline{n},\\underline{n}\\in\\mathbb Z^3\\implies \\nu=\\frac{|\\underline{n}|c}{L}$$\r\nLet $N(\\nu)$ be the number of modes (possible values of $\\underline{n}$) between $\\nu$ and $[\\nu+\\mathrm d\\nu]$.\r\nWe then obtain\r\n$$N(\\nu)\\,\\mathrm d\\nu=2\\times 4\\pi|\\underline{n}|^2\\,\\mathrm d|\\underline{n}|=8\\pi\\left(\\frac{L}{c}\\right)^3\\nu^2\\,\\mathrm d\\nu$$\r\nWe have, let $\\langle E(\\nu,T)\\rangle$ be the average energy of the modes,\r\n$$\\rho(\\nu,T)=N(\\nu)\\frac{\\langle E(\\nu,T)\\rangle}{L^3}$$\r\nIn classical statistical dynamics, $\\langle E\\rangle$ is directly proportional to $T$ and does not depend on $\\nu$.\r\nSo $\\langle E\\rangle=k_BT$ wherre $k_B$ is the Boltzmann constant.\r\nWhere does it come from?\r\nIt comes from the fact that the probability for a plane wave to have energy $E$ is\r\n$$\\mathbb P(E)\\propto e^{-E/(k_BT)}\\implies \\langle E\\rangle=\\frac{\\int_0^\\infty Ee^{-E/(k_BT)}\\,\\mathrm dE}{\\int_0^\\infty e^{-E/(k_BT)}\\,\\mathrm dE}=k_BT$$\r\nHowever, this shall yield us the energy density\r\n$$\\rho(\\nu,T)=\\frac{8\\pi k_B}{c^3}T\\nu^2$$\r\nwhich is obviously NOT compliant with experimental results except for small frequencies.\r\nThis is known as the ultraviolet catastrophe, getting its name due to the fact that this model stops working pass the ultraviolt frequencies.\\\\\r\nPlanck (1905) found a workaround of the problem.\r\nHe postulated that the energy is discrete instead of continuous, so $E=nh\\nu$ for nonnegative integer $n$ and constant $h$ (known as the planck constant).\r\nThis is known as a quantisation of the energy of radiation.\\\\\r\nIn this case, we obtain\r\n$$\\langle E\\rangle=\\frac{\\sum_{n=0}^\\infty nh\\nu e^{-nh\\nu/(k_BT)}}{\\sum_{n=0}^\\infty e^{-nh\\nu/(k_BT)}}=\\frac{h\\nu}{e^{h\\nu/(k_BT)}-1}$$\r\nwhich is fascinatingly different from the continuous case.\r\n\\footnote{Well, not really.}\r\nIn this case, we obtain\r\n$$\\rho(\\nu,T)=\\frac{8\\pi h}{c^3}\\frac{\\nu^3}{e^{h\\nu/(k_BT)}-1}$$\r\nwhich is surprisingly fitting the experimental data.\r\nThis 
We then obtain\r\n$$\\rho(\\nu,T)=\\frac{8\\pi h}{c^3}\\frac{\\nu^3}{e^{h\\nu/(k_BT)}-1}$$\r\nwhich fits the experimental data surprisingly well.\r\nThis made people start thinking about treating certain physical quantities as discrete objects.\r\n\\subsection{Photoelectric Effect}\r\nPeople had long observed the emission of electrons when light is shone on metal surfaces.\r\nFrom a classical point of view, if the intensity $I$ of the incident light is high enough, the radiation would give enough energy for electrons to be emitted.\r\nHowever, this is not what happens in experiments.\r\nIf the frequency is too low, no matter how intense the radiation is, there can be no emission of electrons.\r\nEven more puzzling, the rate of emission of the electrons does not grow with the frequency of the light, but instead with its intensity.\\\\\r\nEinstein (1905) took up the idea of Planck and came up with the idea of viewing light as light quanta (photons), which are particles.\r\nEach of those particles carries energy $E=h\\nu=\\hbar\\omega$, where $\\hbar=h/(2\\pi)$ is the reduced Planck constant and $\\omega=2\\pi\\nu$ is the angular frequency.\r\nThe photoelectric effect happens, as Einstein interpreted it, when an electron absorbs a photon with large enough energy to make it escape its original structure.\\\\\r\nSo the energy of the emitted electron is bounded by $E\\le \\hbar\\omega-\\phi$, where $\\phi$ is the ``work function'', the minimal energy needed to free an electron from the metal.\r\nWith this theory, one can easily explain the observations of the photoelectric effect.\\\\\r\nThis theory of viewing light as particles was developed further and helped explain more phenomena.\r\nIn 1923, an experiment known as Compton scattering was conducted by Compton.\r\nIn this experiment, Compton used X-Rays to scatter off free electrons.\r\nClassically, if the light is simply a wave, then there will be a re-radiation of light by electrons with a certain angular distribution, which one can obtain as $I(\\nu)\\propto(1+\\cos^2\\theta)$.\r\nSo one could expect to find a single peak at $\\nu_0$ in the graph of $I$ against $\\nu$.\\\\\r\nHowever, what was observed experimentally was a second peak $\\nu'$ such that $|\\nu_0-\\nu'|$ depends on $\\theta$.\\\\\r\nIn quantum mechanics, if we take light as particles, as we did in Dynamics \\& Relativity, it can be explained.\r\nAssume the deflected electron and photon (whose energy is $E'=\\hbar\\omega'$) make angles of $\\alpha$ and $\\theta$ with the original path; then by using energy-momentum conservation one can obtain\r\n$$\\frac{1}{\\omega'}=\\frac{1}{\\omega}+\\frac{\\hbar}{mc^2}(1-\\cos\\theta)$$\r\nwhich is consistent with the experimental result.\r\nThis also shows that $\\hbar$ actually measures the strength of quantum effects.\r\n\\subsection{Particles as Waves}\r\nIn previous sections, we found that light, which usually manifests itself as a wave, can be interpreted as particles too.\r\nA natural thought is then:\r\nCan we interpret particles as waves?\\\\\r\nIn 1923, de Broglie postulated that any particle of mass $m$ can be associated with a wave with angular frequency $\\omega=E/\\hbar$ and wavevector $\\underline{k}=\\underline{p}/\\hbar$.\r\nPeople then attempted to conduct experiments to verify this theory.\r\nFor example, one can do a diffraction experiment with particle beams.\r\nBut such experiments are usually complicated to administer, as the wavelength of particles can be very small.\r\n
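To get a feel for the scale (an added back-of-the-envelope estimate): a non-relativistic electron accelerated through $100\\,{\\rm V}$ has $\\lambda=h/\\sqrt{2m_eE}\\approx1.2\\times10^{-10}\\,{\\rm m}$, comparable to interatomic spacings in a crystal, which is why crystal diffraction is the natural test.\\\\\r\n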
But some physicists managed to get it done on small particles like electrons, which does yield a typical diffraction pattern, and the calculated wavelength is consistent with de Broglie's theory.\r\nSo we can interpret particles as waves.\\\\\r\nBut the electron can't possibly split in space, yet even if we pass the electrons through one by one, the same phenomenon happens.\r\nThis can be explained using a probabilistic interpretation of the whole thing, which we will cover later.\r\n\\subsection{Atomic Models}\r\nIn 1897, J. J. Thomson made the first atomic model by viewing atoms as a cloud of positive charge that encloses a number of electrons.\r\nTo verify (or disprove) this model, Rutherford suggested that Geiger and Marsden conduct a scattering experiment.\r\nThey took a beam of alpha particles and shone it on a piece of gold foil.\r\nIf the model of J. J. Thomson were correct, then the alpha particles should not be deflected by any large angle, but should merely pass through the foil with very small deflection.\r\nHowever, in the experiment, many deflections with very large angles were observed, and some particles were even scattered back.\\\\\r\nAfter this experiment, Rutherford came up with another model, which postulates that electrons orbit a common nucleus with positive charge.\r\nThis explains the phenomenon that was observed in the scattering experiment.\r\nBut there are several problems with it:\r\nFirst, why is there not any radiation from the electrons resulting from the centripetal acceleration?\r\nThis radiation should make the electron's energy decrease and eventually make the atom collapse, but in reality no such thing happens.\r\nSecondly, why are there discrete instead of continuous frequencies in the atomic spectra?\r\nThat is, when an excited atom de-excites, the angular frequencies of light emitted (the line spectra) are discrete and satisfy\r\n$$\\omega_{mn}=2\\pi cR_0\\left( \\frac{1}{n^2}-\\frac{1}{m^2} \\right)$$\r\nfor $m>n$.\r\nHere $R_0$ is a constant known as the Rydberg constant, which has order of magnitude $10^7{\\rm m^{-1}}$.\\\\\r\nThese questions prompted another reformulation of the atomic model by Bohr (1913).\r\nHe hypothesised that the electron orbits are quantised so that the orbital angular momentum $L$ satisfies $L=\\hbar n$ for $n\\in\\mathbb N$.\r\nIt seems to come out of nowhere, but it works.\r\nThere is an explanation of it by taking electrons as waves.\r\nThen, the wavenumber of an electron is just $\\underline{k}=\\underline{p}/\\hbar$, so $\\lambda=2\\pi\\hbar/|\\underline{p}|$.\r\nIf we imagine the electron in the orbit of radius $r$, then we can take it as a stationary wave on this circumference, so\r\n$$2\\pi r=n\\lambda=n\\frac{2\\pi\\hbar}{|\\underline{p}|}\\implies L=|\\underline{r}||\\underline{p}|=\\hbar n$$\r\nwhich is just Bohr's hypothesis.\\\\\r\nWhat are the consequences of this formulation of the atomic model?\r\nWe have, by Newton's second law,\r\n$$\\frac{e^2}{4\\pi\\epsilon_0}\\frac{1}{r^2}=m_e\\frac{v^2}{r}$$\r\nin the $\\hat{r}$ component.\r\nHence by $L=\\hbar n$, the velocity and radius are quantised as\r\n$$v_n=\\frac{n\\hbar}{m_e r_n},\\;r_n=n^2\\left( \\frac{4\\pi\\epsilon_0}{m_ee^2}\\hbar^2 \\right)$$\r\n$a_0=r_1$ is called the Bohr radius.\r\nThe energy of the electron is also quantised:\r\n$$E_n=\\frac{1}{2}m_ev_n^2-\\frac{e^2}{4\\pi\\epsilon_0}\\frac{1}{r_n}=-\\frac{e^2}{8\\pi\\epsilon_0a_0}\\frac{1}{n^2}=\\frac{E_1}{n^2}$$\r\nWe have $E_1=-13.6{\\rm eV}$ upon calculation.\r\nFor an electron with $n>1$, we say it is excited, as such states can be created by exciting (giving energy to, usually via a photon) the electron in the ground state, while an electron with $n=1$ is said to be in the ground state.\\\\\r\n
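As a quick worked example (added; the numbers follow directly from $E_n=E_1/n^2$): a de-excitation from $n=2$ to $n=1$ releases\r\n$$E_2-E_1=-13.6\\,{\\rm eV}\\left(\\frac{1}{4}-1\\right)\\approx10.2\\,{\\rm eV},$$\r\nan ultraviolet photon.\\\\\r\n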
state.\\\\\r\nIn this model, an de-excitation from state $m$ to $n$ would then give light with angular frequency\r\n$$\\omega_{mn}=(E_m-E_n)\\frac{1}{\\hbar}=2\\pi c\\frac{m_ec}{2\\hbar}\\left(\\frac{e^2}{4\\pi\\epsilon_0\\hbar c}\\right)^2\\left( \\frac{1}{n^2}-\\frac{1}{m^2} \\right)$$\r\nwhich gives a prediction of $R_0$ very close to the experiment.\r\nIn particular, calculation reveals that $\\omega_{n+1,n}\\sim v_n/r_n$ as $n\\to\\infty$, which can be viewed as a kind of classical limit.\r\n%$$\\omega_{n+1,n}=\\frac{m_e^3e^4}{64\\pi^4\\epsilon_0^2\\hbar^3}\\left( \\frac{1}{n^2}-\\frac{1}{(n+1)^2} \\right)\\sim\\frac{m_e^3e^4}{32\\pi^4\\epsilon_0^2\\hbar^3}\\frac{1}{n^3}=\\frac{v_n}{r_n}$$\r\n%as $n\\to\\infty$.\r\n%This is exactly the classical case.\r\n\\footnote{However, $\\omega_{n+2,n}$ does not tend to this limit when $n\\to\\infty$. It does not tend to any well-known classical limit.}\r\n\\subsection{Conclusion}\r\nQuantum mechanics is a new framework of explaining physical phenomena.\r\nIt may come in counter-intuitive as we always think in the classical way, but it has an undeniable predictivity.", "meta": {"hexsha": "eaace5ee8c23b2ac1cd13c8c91c3837e7508c172", "size": 11244, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "1/intro.tex", "max_stars_repo_name": "david-bai-notes/IB-Quantum-Mechanics", "max_stars_repo_head_hexsha": "8689057b154bdd3fbc6c9270e023b87583904427", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1/intro.tex", "max_issues_repo_name": "david-bai-notes/IB-Quantum-Mechanics", "max_issues_repo_head_hexsha": "8689057b154bdd3fbc6c9270e023b87583904427", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1/intro.tex", "max_forks_repo_name": "david-bai-notes/IB-Quantum-Mechanics", "max_forks_repo_head_hexsha": "8689057b154bdd3fbc6c9270e023b87583904427", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.1639344262, "max_line_length": 222, "alphanum_fraction": 0.7463536108, "num_tokens": 3055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424411924673, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5802737565042729}}
{"text": "\n%-------------------------------------------------------------------------------\n% Signatures                                                                    \n%-------------------------------------------------------------------------------\n\n\\subsection{Signatures}\n\nSignatures map constants to types and kinds.\n\n\\newcommand{\\Sig}{\\ \\mathtt{sig}}\n\\bigskip \n\\framebox{$\\Sigma\\Sig$}\n\\bigskip \n\n$$\n\\begin{array}{ccc}\n\\infer{\\cdot\\Sig}{}&\n\\infer{\\Sigma,c:A\\Sig}{\\Sigma\\Sig & \\CheckTy[\\cdot]{A}{\\Type}}&\n\\infer{\\Sigma,a:K\\Sig}{\\Sigma\\Sig & \\CheckTy[\\cdot]{K}{\\Kind}}\n\\end{array} \n$$\n", "meta": {"hexsha": "ede332eb4f906857a2f057d57dfa405214899d78", "size": 581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/inverse/tex/old/signat.tex", "max_stars_repo_name": "kryptine/twelf", "max_stars_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2015-01-24T18:10:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T12:41:05.000Z", "max_issues_repo_path": "src/inverse/tex/old/signat.tex", "max_issues_repo_name": "kryptine/twelf", "max_issues_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-27T22:17:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-27T22:17:51.000Z", "max_forks_repo_path": "src/inverse/tex/old/signat.tex", "max_forks_repo_name": "kryptine/twelf", "max_forks_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-05-06T01:32:34.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T19:33:29.000Z", "avg_line_length": 26.4090909091, "max_line_length": 80, "alphanum_fraction": 0.395869191, "num_tokens": 136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8459424373085146, "lm_q2_score": 0.6859494485880927, "lm_q1q2_score": 0.5802737484090428}}
{"text": "\n\\subsection{Vickrey auctions}\n\nVickrey are sealed-bid auctions. Participants cannot view other bids.\n\nThe price paid is the second-highest bid.\n\n", "meta": {"hexsha": "5ec5710944389f55651fe83030e23dc0edd2129a", "size": 146, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/economics/auctions/05-01-vickrey.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/economics/auctions/05-01-vickrey.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/economics/auctions/05-01-vickrey.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.25, "max_line_length": 69, "alphanum_fraction": 0.7945205479, "num_tokens": 34, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7490872187162396, "lm_q1q2_score": 0.5802305207038208}}
{"text": "\\section{Cycles}\n\n%%%%%%%%%%\n\\begin{frame}{Cycles}\n  \\begin{exampleblock}{4-Cycle in undirected graph \\pno{3.7.1}}\n    \\begin{itemize}\n      \\item undirected graph $G = (V, E)$\n      \\item simple cycle of length 4\n      \\item $O(n^{3})$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Cycles}\n  \\begin{exampleblock}{Shortest cycle in digraph \\pno{3.7.4}}\n    \n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    Floyd-Warshall: $\\min_{i}D[i][i]$\n\n    Initialization: $D^{(0)}[i][i] = \\infty$\n  \\end{block}\n  \n  \\begin{alertblock}{Remark.}\n    Does not apply to undirected graph.\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Cycles}\n  \\begin{exampleblock}{Shortest cycle in undirected graph \\pno{3.7.14}}\n    \\begin{itemize}\n      \\item $G = (V, E), w(e) = 1$\n      \\item DFS: back edge $\\iff$ cycle\n      \\item $u \\to v$: $\\text{level}[u] - \\text{level}[v] + 1$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    A counterexample here. \n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Cycles}\n  \\begin{exampleblock}{Shortest cycle containing a specific edge \\pno{3.7.5}}\n    \\begin{itemize}\n      \\item undirected graph $G = (V, E, w), w(e) > 0, e \\in E$\n      \\item shortest cycle containing $e$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    $P_{u \\leadsto v} + (u,v)$ \n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Cycles}\n  \\begin{exampleblock}{Hamiltonian path in tournament graph \\pno{3.7.18}}\n    \\begin{itemize}\n      \\item digraph $G = (V, E)$\n      \\item $\\forall u,v: (u \\to v \\lor v \\to u) \\land \\lnot (u \\to v \\land v \\to u)$\n      \\item hamiltonial path\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item existence: induction on $n$\n      \\item algorithm $1 + 2 + \\cdots + (n-1) = O(n^{2})$\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n", "meta": {"hexsha": "5b7f55ea8232d9ffbbb9ce0906c451c8aa3f3558", "size": 1947, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-shortest-path-2016-06-13/sections/cycles.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-shortest-path-2016-06-13/sections/cycles.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-shortest-path-2016-06-13/sections/cycles.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": 
"2019-06-18T07:58:43.000Z", "avg_line_length": 24.9615384615, "max_line_length": 85, "alphanum_fraction": 0.5885978428, "num_tokens": 695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5802305124670296}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\hyphenation{YPKNOT}\n\\begmath 11.4 Least-Squares Cubic Spline Fit\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nA cubic spline function with NB $-$ 1 segments is a function consisting of\nNB $-$ 1 pieces, each of which is a cubic polynomial. At the abscissae,\ncalled knots, at which adjacent segments meet, the function has $C^2$\ncontinuity, $i.e$. continuity in value, first derivative, and second\nderivative.\n\nSubroutine SC2FIT or DC2FIT will determine the (NB $-$ 1)-segment cubic spline\nfunction, with user specified knots, that best fits a set of discrete data\nin the sense of weighted least-squares, and return the values of the fitted\nspline curve and its first derivative at the knots. The user can then\nevaluate the curve, or its first or second derivative, at any argument using\nthe Hermite interpolation subroutine, SHINT or DHINT, of Chapter~12.3.\n\nThis software can be used for interpolation by setting the number of knots,\nNB, to be two less than the number of data points, NXY. Setting NB $<$\nNXY $-$ 2 gives least-squares approximation.\n\nFor spline fitting with more generality, see Chapter~11.5.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf NXY, NB, LDW, IERR1}\n\n\\item[REAL]  \\ {\\bf X}($\\geq $NXY){\\bf , Y}($\\geq $NXY){\\bf , SD}($\\geq $NXY)%\n{\\bf , \\nolinebreak B}($\\geq $NB){\\bf , W}(LDW,~5){\\bf , YKNOT}($\\geq $NB)%\n{\\bf , YPKNOT}($\\geq $NB){\\bf , SIGFAC}\n\\end{description}\nAssign values to X(), Y(), SD(), NXY, B(), NB, and LDW.\\vspace{-15pt}\n\\begin{center}\n\\fbox{\\begin{tabular}{@{\\bf }c}\nCALL SC2FIT(X, Y, SD, NXY, B, NB, W,\\\\\nLDW, YKNOT, YPKNOT, SIGFAC, IERR1)\\\\\n\\end{tabular}}\n\\end{center}\nComputed quantities are returned in YKNOT(), YPKNOT(), SIGFAC, and IERR1.\n\nFollowing the use of SC2FIT the user may use SHINT of Chapter~12.3 to compute\nvalues of the fitted curve.\n\n\\subsubsection{Argument Definitions}\n\\begin{description}\n\\item[X(), Y()]  \\ [in] Data pairs (X(I), Y(I), I = 1, ..., NXY). The\ncontents of X() must satisfy X($1)\\leq $ X(2) $\\leq $ ... $\\leq $ X(NXY).\n\n\\item[SD()]  \\ [in] If SD($1)>0.$, each SD(I) must be positive and must be\nthe user's a priori estimate of the standard deviation of the uncertainty $%\n(e.g.$, observational error) in the corresponding data value Y(I).\n\nIf SD($1)<0.$, $|$SD(1)$|$ will be used as the a priori standard deviation\nof each data value Y(I). In this case the array SD() may be dimensioned as\nSD(1).\n\n\\item[NXY]  \\ [in] Number of data points. Require NXY $\\geq 4.$\n\n\\item[B()]  \\ [in] Set by the user to specify the knot abscissae and\nendpoints for the spline curve. Must satisfy B(1) $<$ B(2)\n$<$ ... $<$ B(NB).  Also require B(1) $\\leq $\nX(1), B(NB) $\\geq $ X(NXY).\n\n\\item[NB]  \\ [in] Number of knots, including endpoints. The number of\nsegments in the spline curve will be NB $-$ 1. The number of degrees of freedom\nfor the fit will be NB + 2. Require $2\\leq \\text{NB}\\leq \\text{NXY}-2.$\n\n\\item[W(\\ ,\\ )]  \\ [scratch] Working space, dimensioned W(LDW,~5).\n\n\\item[LDW]  \\ [in] Leading dimension for the work array W(,). 
LDW must be at\nleast NB + 4, but the execution time is less for larger values of LDW, up to\nNB$+3+k$, where $k$ is the largest number of data points lying between any\nadjacent pair of knots.\n\n\\item[YKNOT(), YPKNOT()]  \\ [out] Arrays, each of length at least NB, in\nwhich the subroutine will store a definition of the fitted spline curve as a\nsequence of values of the curve and its first derivative at the knot\nabscissae B(i). Letting $f$ denote the fitted curve, the elements of these\narrays will be set to\n\nYKNOT$(i)=f(\\text{B}(i))$, $i=1$, ..., NB\n\nYPKNOT$(i)=f^{\\prime }(\\text{B}(i))$, $i=1$, ..., NB\n\n\\item[SIGFAC]  \\ [out] Set by the subroutine as a measure of the residual\nerror of the fit. See Section D.\n\n\\item[IERR1]  \\ [out] Error status indicator. Set on the basis of tests done\nin SC2FIT as well as error indicators IERR2 set by SBACC and IERR3 set by\nSBSOL. Zero indicates no errors detected. See Section E for the meaning of\nnonzero values.\n\\end{description}\n\\subsubsection{Modifications for Double Precision}\n\nFor double precision usage change the REAL statement to DOUBLE PRECISION and\nchange the subroutine name SC2FIT to DC2FIT.\n\n\\subsection{Examples and Remarks}\n\n{\\bf Example:} Given a set of 12~data pairs $(x_i,y_i)$ compute the\nuniformly weighted least-squares cubic spline fit to these data using six\nuniformly spaced breakpoints, including endpoints. After determining the\nspline function $f(x)$, compute and tabulate the quantities $x_i$, $y_i$, $%\nf(x_i)$, and $r_i = y_i - f(x_i).$\n\nThis computation is illustrated by the program DRDC2FIT and the output\nODDC2FIT. The fitted curve is determined using DC2FIT, and is evaluated\nusing DHINT of Chapter~12.3.\n\n{\\bf Interpolation:} If all of the data abscissae are distinct, and one wants\ninterpolation, rather than the smoothing effect of least-squares\napproximation, one can choose interpolation by setting NB = NXY $-$ 2. The\nNB knot abscissae can be assigned in various ways, but one reasonable way is\nto set B(1) = X(1), B(i) = X$(i+1)$, for $i = 2$, ..., NB $-$ 1, and B(NB) =\nX(NXY).\n\n\\subsection{Functional Description}\n\nLet knot abscissae $b_1 < b_2 <$ ... $< b_{NB}$ be given.  A cubic spline\nfunction defined over the interval $[b_1$, $b_{NB}]$ is a cubic polynomial\nin each subinterval $[b_i$, $b_{i+1}]$, $i = 1$, ..., NB $-$ 1, with\ncontinuity of the value, first derivative, and second derivative at each\ninternal knot, $b_2$, ..., $b_{NB-1}$.  The set of all cubic spline\nfunctions defined relative to this knot set is a linear space of dimension\n$d =$ NB $+$ 2.  If the knot spacing does not depart severely from\nuniformity a well conditioned set of basis functions for this space is\nprovided by a particular set of cubic spline functions called B-splines,\n$p_i(x)$, $i = 1$, ..., $d$, each of which is nonzero over at most four\nadjacent subintervals, \\cite{deBoor:1972:OCB}.\n\nThe problem data are $\\{(x_i$, $y_i$, $s_i)$, $i=1$, ..., NXY$\\}$, where $s_i$\nis the a priori standard deviation of the error in the value $y_i$. 
The\nweighted least-squares curve fitting problem then becomes one of determining\ncoefficients $c_j$ to minimize%\n\\begin{equation*}\n\\rho ^2({\\bf c})=\\sum_{i=1}^{\\text{NXY}}\\left[ \\frac{\ny_i-\\sum_{j=1}^dc_jp_j(x_i)}{s_i}\\right] ^2\n\\end{equation*}\nThe matrix formulation of this least-squares problem involves a matrix\nhaving a banded form in which at most four elements are nonzero in each row.\nThis least-squares problem is solved using the subroutines of Chapter~4.5.\n\nThis problem will have a unique set of solution coefficients, $c_j$, if NB\n$\\leq \\text{NXY}-2$ and the positioning of the knots is such that there\nexists an indexing of some set of NB + 2 of the distinct data abscissae,\n$x_i$ (not necessarily the indexing used in the subprogram) such that\n$b_{i-3} < x_i < b_{i+1}$ for $i = 1$, ..., NB$+2$.  Here $b_{-2}$,\n$b_{-1}$, and $b_0$ denote fictitious knots to the left of $b_1$, and\n$b_{NB+1}$, $b_{NB+2}$, and $b_{NB+3}$ denote fictitious knots to the\nright of $b_{NB}$, see \\cite{Rice:COC:1967}.  If the solution is not\nunique, no solution will be given and an error code will be returned as\ndescribed in Section E.\n\nAfter determining coefficients, $c_j$, SC2FIT uses subroutine STRC2C to\nevaluate the value and first derivative of the fitted curve at the knots.\nThese quantities are returned to the user in the arrays YKNOT() and YPKNOT()\nas the defining parameters of the fitted curve.\n\nSubroutine SC2BAS is called by both SC2FIT and STRC2C to evaluate B-spline\nbasis functions.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nSC2FIT sets IERR1 and issues error messages based on internal tests as well\nas propagating error information set in IERR2 by SBACC and in IERR3 by\nSBSOL. See Chapter~4.5 for the meaning of IERR2 and IERR3. In all cases in\nwhich IERR1 is set nonzero, no solution will be computed.\n\n\\begin{tabular}{ll}\n{\\bf IERR1} & {\\bf \\quad \\quad Meaning}\\\\\n\n\\phantom{100}0 & No errors detected.\\\\\n\n\\phantom{1}100 & NB $< 2$ or NXY $<$ NB + 2.\\\\\n\n\\phantom{1}200 & B(I$) \\geq $ B(I+1) for some I.\\\\\n\n\\phantom{1}300 & LDW $<$ NB + 4.\\\\\n\n\\phantom{1}400 & X(I $- 1) > $ X(I) for some I.\\\\\n\n\\phantom{1}500 & B($1) > $ X(1) or B(NB) $<$ X(NXY).\\\\\n\n\\phantom{1}600 & Need larger dimension LDW.\\\\\n\n\\phantom{1}700+IERR2 & SBACC set IERR2 $\\neq 0$.\\\\\n\n\\phantom{1}800+IERR2 & SBACC set IERR2 $\\neq 0$.\\\\\n\n\\phantom{1}900+IERR2 & SBACC set IERR2 $\\neq 0$.\\\\\n\n 1000+IERR3 & SBSOL set IERR3 $\\neq 0$. Indicates\\\\\n & singularity.\\\\\n\n 1100 & SD(1) = 0.0.\\\\\n\n 1200 & SD(1) $>$ 0.0, and SD($i) \\leq 0.0$ for\\\\\n & some $i \\in $ {[2, NXY]}.\n\\end{tabular}\n\\subsection{Supporting Information}\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDC2FIT & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\n AMACH, DBACC, DBSOL, DC2BAS, DC2FIT, DERM1, DERV1, DHTCC,\n DNRM2, DTRC2C, ERFIN, ERMSG, IERM1, IERV1\\rule[-5pt]{0pt}{8pt}}\\\\\nSC2FIT & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\n AMACH, ERFIN, ERMSG, IERM1, IERV1, SBACC, SBSOL, SC2BAS, SC2FIT,\n SERM1, SERV1, SHTCC, SNRM2, STRC2C}\\\\\\end{tabular}\n\nOriginal code designed by C. L. Lawson and R. J. Hanson, JPL, 1968.\nProgrammed and modified by Lawson, Hanson, T. Lang, and D. Campbell, JPL,\n1968--1974. Adapted to Fortran~77 by Lawson and S. Y. 
Chiu, July~1987.\n\n\\begcodenp\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRDC2FIT}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{dc2fit}}\n\\newpage\n\n\\vspace{30pt}\\centerline{\\bf \\large ODDC2FIT}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{dc2fit}}\n\\end{document}\n", "meta": {"hexsha": "78bfc4bac5ee60eed9aaf5f38d4a7674061274ef", "size": 9924, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch11-04.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch11-04.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch11-04.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 41.5230125523, "max_line_length": 98, "alphanum_fraction": 0.709089077, "num_tokens": 3269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872019117029, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5802305076873057}}
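For readers without access to the MATH77 library, the same kind of weighted least-squares cubic spline fit can be sketched in Python with SciPy (an added analogue for illustration, not the SC2FIT/DC2FIT interface; LSQUnivariateSpline takes only the interior knots, playing the role of B(2), ..., B(NB$-$1), and weights playing the role of 1/SD(I)):

\begin{lstlisting}[language=Python]
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Twelve noisy data pairs (made-up data, in the spirit of the
# DRDC2FIT demonstration program referenced above).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 12)
y = np.sin(x) + 0.05 * rng.standard_normal(12)

t = np.linspace(0.8, 3.2, 4)                    # interior knots only
f = LSQUnivariateSpline(x, y, t, w=np.full(12, 1 / 0.05), k=3)

resid = y - f(x)             # residuals r_i = y_i - f(x_i)
fp = f.derivative()(t)       # first derivative at the knots (cf. YPKNOT)
\end{lstlisting}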
{"text": "\\section{Details on Machine Learning Methods}\n\\subsection{Decision Tree}\n\\subsubsection{Introduction}\nDecision trees are a statistical learning method used both in the classification and regression setting. They are non-parametric models, meaning that they don't approximate a function $\\hat{f}$ to describe the relationship between the features and target, but are classifying data by inferring a hierarchical decision structure (the decision tree) \\cite{introStats}.\n\nDecision trees are based on the idea of segmenting the feature space into a number of simple regions (rectangles in 2-dimensional space, cubes in 3-dimensional space, etc.). The challenge of tree-based learning methods is to identify the regions that best separate the classes. Mathematically, a loss function, ie. cross-entropy, gini-impurity or 0-1 loss, should be minimised for each data point in the training split, st.\n\n$$\n\\text{min}\\left\\{\\sum_{j=1}^J\\sum_{i\\in{R_J}} \\text{loss(}y_i, \\hat{y}_{R_j}\\text{)}\\right\\}\n$$\n\\vspace{5pt}\n\nHowever, it is computationally infeasible to consider every possible partition of the feature space into $J$ regions and evaluate the loss. For this reason, a top-down, greedy approach, known as recursive binary splitting is used to construct the decision tree in a computationally light fashion.\n\nRecursive binary splitting is a recursive algorithm that splits a given partition of the training set (node) into the two (hence the name \\textit{binary} splitting) partitions that best separate the classes in the original partition. It thus finds the cut point $s$ for a feature $X_i$ that forms two sets $\\{X| X_i<s\\}$ and $\\{X|X_i\\ge s\\}$ that minimise the weighted loss of both regions. \nThe decision tree is then built by recursively applying this algorithm, until some stopping criterion is reached, or all regions are pure (ie. all training samples in a decision region are of a single class). The final prediction is based on identifying which region a new data point belongs to and then predicting the majority class in that region. \n\n\\subsubsection{Decision Tree Model Selection}\n%Having implemented the \\class{DecisionTreeClassifier()}, the challenge arises of which set of hyper parameters in the training phase produces a classifier, that best generalises to unseen data. Particularly, we wish to find the classifier that is neither oversimplifying (high bias, low variance model) nor overfitting (low bias, high variance model). \n\n% To find the configuration of hyper parameters that best generalise, we used \\class{GridSearchCV} from \\textit{sci-kit learn}. It allows to search a set of hyper parameter configurations for the best-generalising performance using 5-fold cross validation by choosing the model that achieves best average validation performance on some scoring-criterion. \n\nThe final decision tree classifier is a pipeline of a \\code{StandardScaler} instance, that was fitted on the training split and a \\code{DecisionTreeClassifier} instance. 
For the scope of this project, combinations of the hyper parameters \code{criterion}, \code{splitter}, \code{max_features} and \code{max_depth} of the tree were checked within a grid search.\n\\newline\n\n\\begin{comment}\n\\begin{lstlisting}[language=Python, caption=DecisionTreeClassifier Pipeline and GridSearch Setup, label=dt_pipeline]\n    # model pipeline \n    pipe = Pipeline(steps=[('scaler', scaler),\n                           ('decision_tree', DecisionTreeClassifier(random_state=1))])\n\n    # define hyper parameters to grid search\n    params = {\n            'decision_tree__criterion': ['gini', 'entropy'],\n            'decision_tree__max_depth': list(range(1, 10)),\n            'decision_tree__max_features': list(range(1, 10)),\n            'decision_tree__splitter': ['best', 'random'] \n            }\n\\end{lstlisting}\n\\end{comment}\n\nThe grid search ran over all possible combinations of the hyper parameters defined above. For each setting it fitted five models, each on four (randomised) folds, and validated the performance on the remaining fold. It then computed the average accuracy score over the validation folds and returned the model configuration that maximised it. The best-generalising decision tree uses \\textit{entropy} as its impurity criterion, is trained to a maximum depth of five and considers eight features for each split. \n\n\\subsubsection{Decision Tree Performance Evaluation}\nThe best-generalising model achieved an average accuracy score of 70\\% on the cross-validation splits, which is the estimated performance on unseen data. Because of that, it is surprising to see a test accuracy of only 62\\%. The gap between the two metrics likely occurs because both the training and the test split are rather small, making the scores prone to random variation. From the classification report it becomes clear that the decision tree struggles with separating classes 1, 2 and 3 from each other. In contrast, the model does rather well on the more separated classes 6 and 7 with an F1-Score of 80\\% each.  
\n\\newline\n\n\\begin{table}[!ht]\n\\begin{subtable}[c]{0.4\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{ c | c }\n \\toprule\n Evaluation Metric & Score  \\\\\n \\midrule \n Training Accuracy & 87\\% \\\\\n Validation Accuracy (during CV) & 70\\% \\\\\n Test Accuracy & 62\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Decision Tree Accuracy Scores on Training, Validation and Test Split}\n\\end{subtable}\n\\begin{subtable}[c]{0.6\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{c | c c c r}\nClass & Precision & Recall & F-Score & Support\\\\\n\\midrule\n1   &    0.68  &    0.62  &    0.65  &  21 \\\\\n2   &    0.64  &    0.61  &    0.62  &  23 \\\\\n3   &    0.25  &    0.20  &    0.22  &   5 \\\\\n5   &    0.33  &    1.00  &    0.50  &   4 \\\\\n6   &    1.00  &    0.67  &    0.80  &   3 \\\\\n7   &    1.00  &    0.67  &    0.80  &   9 \\\\\n\\midrule\n    accuracy   &           &         &    0.62   &   65 \\\\\n   macro avg   &    0.65   &   0.63  &    0.60   &   65 \\\\\nweighted avg   &    0.67   &   0.62  &    0.63   &   65 \\\\\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Decision Tree Classification Report on Test Split}\n\\end{subtable}\n\\caption{Decision Tree Performance Evaluation}\n\\label{dt_evaluation}\n\\end{table}\n\n\n\\subsubsection{Visualization of Decision Tree}\nFigure \\ref{dt_visualisation} shows a visualisation of the constructed decision tree. At each level of the tree the node carries information about the decision rule applied at the specific node, the loss-value (i.e.\\ Gini impurity), the number of samples considered and the majority class in the node. \n\nOne interesting observation is that the feature \\textit{barium} is used at depth 2 to split off a pure leaf of class 7 (headlamp). This was expected from inspecting the feature distribution in each class in Section 3, since the feature distribution of class 7 was distinct for the feature \\textit{barium}. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.09]{figures/graphviz_sklearn_dt.png}\n\\caption{Visualised Decision Tree}\n\\label{dt_visualisation}\n\\end{figure}\n\n\n\\subsection{Neural Network}\n\\subsubsection{Introduction}\nA feed-forward neural network is a machine learning algorithm that constructs\na nonlinear (decision) function $\\hat{f}(X)$ that tries to approximate the true relationship between some feature matrix $X$ and a target vector $y$ \\cite{introStats}. In each layer of the neural network, an activation function is applied to ensure the non-linearity of the decision function $\\hat{f}$.\n\nTo train a neural network a loss function (e.g. cross entropy) is minimised using the technique of back-propagation. The algorithm computes the gradient of the loss with respect to all the weights and biases. This gradient\nis then used to tweak the parameters to reduce the error using a gradient descent step. The process is repeated until the neural network converges to the desired solution \\cite{handsonML}.\n\nWhen dealing with a classification problem the response $y$ is a vector produced by the last layer (the output layer) with a soft-max function as its activation, ensuring that the sum of elements is one ($\\sum_{i=1}^{k}y_i = 1$) and each element lies in the range 0 to 1 ($0\\le y_i \\le 1$ for all $i$).  
\nTherefore the individual scalars of the vector $y$ can be interpreted as the probabilities that a data point fed forward through the network belongs to the respective classes. The prediction is then the class with the largest probability. \n\n\\subsubsection{Neural Network Model Selection}\nSince the custom \\class{NeuralNetworkClassifier} is not able to reproduce the exact results of other deep learning libraries, such as \\textit{Pytorch} or \\textit{Keras}, due to randomness in the initialisation of weights, different optimisation algorithms and a different technique of back-propagation, this section will be split into model selection for the custom implementation and using the Python deep learning framework \\textit{Keras} that is built on top of \\textit{Tensorflow}.\n\\newline\n\n% ---------------------------- CUSTOM --------------\n\\textbf{Custom Implementation. }\nSince the training process for the custom \\class{NeuralNetworkClassifier()} implementation is relatively slow due to the recursive back-propagation, the model selection process was limited. Overly complex network architectures and very high numbers of training iterations were not feasible, nor was an exhaustive grid search to tune hyper parameters. The model selection was therefore primarily based on heuristics and trial-and-error. Since the custom implementation does not allow evaluating the model on a validation split at each epoch to stop the training process, a suitable number of epochs to prevent overfitting was inferred from experience.\n\nThe final configuration of the model is a neural network with a single ReLU-activated hidden layer of 20 nodes that is trained for a total of 100 epochs using 10 batches, a learning rate of 0.01 and computes loss using cross-entropy.\n\\newline\n\n\\begin{comment}\n\\begin{lstlisting}[language=Python, caption=Custom Neural Network Setup]\n    # setting up neural network architecture\n    nn = NeuralNetworkClassifier(\n        layers = [DenseLayer(n_in=9, n_out=20, activation='relu', name='fc1'),\n                  DenseLayer(n_in=20, n_out=6, activation='softmax', name='output')],\n        loss='cross_entropy', \n        name='CustomModel'\n        )\n    \n    # fitting model\n    nn.fit(X_train, y_train, \n           epochs=100, \n           lr=0.01,\n           num_batches=10, \n           verbose=1)\n\\end{lstlisting}\n\\end{comment}\n\nThe training history of loss and training accuracy is depicted in Figure \\ref{custom_nn_history}. It can be seen that the loss decreases steadily while the training accuracy rises.\n\\newline\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.8]{figures/custom_nn_training.pdf}\n\\caption{Training History of Custom Neural Network Implementation}\n\\label{custom_nn_history}\n\\end{figure}\n\n\\subsubsection{Custom Neural Network Performance Evaluation}\nThe fitted model achieves a training accuracy of 72\\% and a test accuracy of 74\\% (Table \\ref{custom_nn_evaluation}). The model is best at predicting classes 1, 2 and 7, which is logical since these are the majority classes. 
Except for the minority class 3, which is very similar to classes 1 and 2 (as seen in the EDA), the model does an overall good job of classifying.\n\n\\begin{table}[!ht]\n\\begin{subtable}[c]{0.4\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{ c | c }\n \\toprule\n Evaluation Metric & Score  \\\\\n \\midrule \n Training Accuracy & 72\\% \\\\\n Test Accuracy & 74\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Custom Neural Network Accuracy Scores on Training and Test Split}\n\\end{subtable}\n\\begin{subtable}[c]{0.6\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{c | c c c r}\nClass & Precision & Recall & F-Score & Support\\\\\n\\midrule\n1   &    0.70  &    0.90  &    0.79   &     21 \\\\\n2   &    0.73  &    0.70  &    0.71   &     23 \\\\\n3   &    0.00  &    0.00  &    0.00   &      5 \\\\\n5   &    0.60  &    0.75  &    0.67   &      4 \\\\\n6   &    0.67  &    0.67  &    0.67   &      3 \\\\\n7   &    1.00  &    0.89  &    0.94   &      9 \\\\\n\\midrule\n    accuracy  &          &         &   0.74  &  65 \\\\\n   macro avg  &   0.62   &   0.65  &   0.63  &  65 \\\\\nweighted avg  &   0.69   &   0.74  &   0.71  &  65 \\\\ \n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Custom Neural Network Classification Report on Test Split}\n\\end{subtable}\n\\caption{Custom Neural Network Performance Evaluation}\n\\label{custom_nn_evaluation}\n\\end{table}\n\n\n\n% ---------------------------- KERAS --------------\n\\textbf{Keras Implementation. }\nSimilarly, the neural network architecture for the Keras neural network was mostly derived through trial-and-error. A final configuration of a simple neural network with a single ReLU-activated hidden layer of 50 neurons, an \\class{Adam()} optimiser set to a learning rate of $0.005$, and cross-entropy loss optimised on the entire batch (batch gradient descent) in each epoch was found to give good results. \nUsing the \\class{EarlyStopping()} callback, the training process was monitored, such that a continuous increase of the validation loss over $2$ epochs would stop the fitting of the model. This prevented overfitting. \n\\newline\n\n\\begin{comment}\n\\begin{lstlisting}[language=Python, caption=Keras Neural Network Setup]\n    # neural network architecture\n    nn = Sequential()\n    nn.add(Dense(50, input_dim=9, activation='relu', name='fc1'))\n    nn.add(Dense(6, activation='softmax', name='output'))\n\n    # define optimiser, loss and optimisation target\n    nn.compile(optimizer=Adam(lr=0.005),\n               loss='categorical_crossentropy', \n               metrics=['accuracy'])\n            \n    # early stopping\n    es = EarlyStopping(monitor='val_loss', patience=2, verbose=1)\n    \n    # fitting neural network\n    history = nn.fit(X_train, y_train_hot,\n                     batch_size=len(X_train), # batch gradient descent\n                     epochs=epochs,\n                     verbose=2,\n                     validation_split=0.2,\n                     callbacks=[es]) # train with validation split (here test data)\n\\end{lstlisting}\n\\end{comment}\n\nWith this configuration, the early-stopping callback was triggered after $\\sim 80$ epochs. The training history is depicted in Figure \\ref{keras_nn_history} and reveals that the loss decreased continuously for the training split, and training was stopped just as the validation loss was beginning to increase again. 
\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.8 ]{figures/keras_nn_training.pdf}\n\\caption{Training History of Keras Neural Network Implementation}\n\\label{keras_nn_history}\n\\end{figure}\n\n\\subsubsection{Keras Neural Network Performance Evaluation}\nAs can be seen from the history of accuracies, the final trained neural network achieves a training and validation accuracy of roughly 80\\%.  The validation score for this model is a good estimate for the test accuracy, which is 75\\% (Table \\ref{keras_nn_evaluation}). From the classification report (Table \\ref{keras_nn_evaluation}) it becomes clear that the model does well on the better separated classes 6 and 7. It has more difficulty separating the closely related classes 1, 2 and 3 and, in fact, never predicts the minority class 3. Nevertheless, the overall classification result is good.\n\\newline\n\n\\begin{table}[!ht]\n\\begin{subtable}[c]{0.4\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{ c | c }\n \\toprule\n Evaluation Metric & Score  \\\\\n \\midrule \n Training Accuracy & 78\\% \\\\\n Test Accuracy & 75\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Keras Neural Network Accuracy Scores on Training and Test Split}\n\\end{subtable}\n\\begin{subtable}[c]{0.6\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{c | c c c r}\nClass & Precision & Recall & F-Score & Support\\\\\n\\midrule\n1   &    0.72   &   0.86  &    0.78  &      21 \\\\\n2   &    0.77   &   0.74  &    0.76  &      23 \\\\\n3   &    0.00   &   0.00  &    0.00  &       5 \\\\\n5   &    0.75   &   0.75  &    0.75  &       4 \\\\\n6   &    0.75   &   1.00  &    0.86  &       3 \\\\\n7   &    1.00   &   0.89  &    0.94  &       9 \\\\\n\\midrule\n    accuracy   &           &         &    0.75   &     65 \\\\\n   macro avg   &    0.67   &   0.71  &    0.68   &     65 \\\\\nweighted avg   &    0.73   &   0.75  &    0.74   &     65 \\\\\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Keras Neural Network Classification Report on Test Split}\n\\end{subtable}\n\\caption{Keras Neural Network Performance Evaluation}\n\\label{keras_nn_evaluation}\n\\end{table}\n\n\\subsection{Random Forest}\n\\subsubsection{Introduction}\nThe statistical wisdom-of-the-crowd effect motivated the emergence of so-called ensemble methods in machine learning. Ensemble methods are groups of predictors that collectively construct the ensemble's decision \\cite{introStats}. \n\nOne of the most popular ensemble methods is the so-called \\class{RandomForestClassifier()}. It refers to a collection of individual decision trees independently trained on random sub-samples of the input data and randomly selected feature subsets to obtain decorrelated classifiers. The final decision is the class predicted by the largest number of individual trees (hard voting) \\cite{introStats}.\n\n\\subsubsection{Random Forest Model Selection}\nThe model selection pipeline for the \\class{RandomForestClassifier()} was similar to that of the single decision tree: again, a pipeline that scales the features was grid searched to find the configuration of hyper parameters that generalises best. 
Within training, the hyper parameters \code{n_estimators}, \code{criterion}, \code{bootstrap} (sampling from original data with or without replacement) and \code{max_depth} were searched.\n\\newline \n\n\\begin{comment}\n\\begin{lstlisting}[language=Python, caption=RandomForestClassifier Pipeline and GridSearch Setup]\n    # define pipeline\n    pipe = Pipeline(steps=[('scaler', scaler),\n                           ('random_forest', RandomForestClassifier(random_state=1))])\n\n    # define hyper parameters to grid search\n    params = {'random_forest__n_estimators': [20, 50, 100],\n              'random_forest__max_depth': list(range(5, 10)) + [None],\n              'random_forest__bootstrap': [False, True],\n              'random_forest__criterion': ['entropy', 'gini']\n             }\n\\end{lstlisting}\n\\end{comment}\n\n\\subsubsection{Random Forest Performance Evaluation}\nThe grid search found that the best-generalising random forest classifier uses 100 trees, Gini impurity as its splitting criterion and bootstrapped random sub-samples of the training data, and trains each tree to a maximum depth of 8. This fitted model has a training accuracy of 100\\%, while still having a high mean validation accuracy of 77\\% during the 5-fold cross-validation. The accuracy on the final test set is 83\\%. \n\n\\begin{table}[!ht]\n\\begin{subtable}[c]{0.4\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{ c | c }\n \\toprule\n Evaluation Metric & Score  \\\\\n \\midrule \n Training Accuracy & 100\\% \\\\\n Validation Accuracy & 77\\% \\\\\n Test Accuracy & 83\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Random Forest Accuracy Scores on Training, Validation and Test Split}\n\\end{subtable}\n\\begin{subtable}[c]{0.6\\textwidth}\n\\footnotesize\n\\centering\n\\begin{tabular}{c | c c c r}\nClass & Precision & Recall & F-Score & Support\\\\\n\\midrule\n1   &  0.95  &  0.95  &  0.95  &  21\\\\ \n2   &  0.83  &  0.83  &  0.83  &  23\\\\\n3   &  0.00  &  0.00  &  0.00  &   5\\\\\n5   &  0.50  &  1.00  &  0.67  &   4\\\\\n6   &  0.75  &  1.00  &  0.86  &   3\\\\\n7   &  1.00  &  0.89  &  0.94  &   9\\\\\n\\midrule\n    accuracy  &        &        &  0.83  &  65\\\\\n   macro avg  &  0.67  &  0.78  &  0.71  &  65\\\\\nweighted avg  &  0.80  &  0.83  &  0.81  &  65\\\\\n\\end{tabular}\n\\captionsetup{justification=centering,margin=1cm}\n\\subcaption{Random Forest Classification Report on Test Split}\n\\end{subtable}\n\\caption{Random Forest Performance Evaluation}\n\\label{random_forest_evaluation}\n\\end{table}\n\n", "meta": {"hexsha": "d0c62050065f41f604293c2833e3c752d4a56292", "size": 20495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/05_models.tex", "max_stars_repo_name": "jonas-mika/ml-project", "max_stars_repo_head_hexsha": "c052c33010033cd9fd596eb5ac4d270d1bf98ee3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/05_models.tex", "max_issues_repo_name": "jonas-mika/ml-project", "max_issues_repo_head_hexsha": "c052c33010033cd9fd596eb5ac4d270d1bf98ee3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/05_models.tex", "max_forks_repo_name": "jonas-mika/ml-project", "max_forks_repo_head_hexsha": 
"c052c33010033cd9fd596eb5ac4d270d1bf98ee3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-29T17:23:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-29T17:23:15.000Z", "avg_line_length": 58.8936781609, "max_line_length": 657, "alphanum_fraction": 0.7210051232, "num_tokens": 5373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5802305046711237}}
{"text": "\n\\subsection{Autoencoders}\n\nAutoencoders are a type of neural network.\n\nFor autoencoders the goal for the output layer is the reconstructed input layer, rather than a classification.\n\nBy including sparsity in the neural network we can reduce the dimensions. This splits the network into an encoder and a decoder.\n\nMiddle vector is called latent variables.\n\n", "meta": {"hexsha": "8c9b72abf39c0f37a8eb58ff9dbfb4d01b25adab", "size": 357, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/VAE/01-01-autoencoder.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/VAE/01-01-autoencoder.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/VAE/01-01-autoencoder.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.75, "max_line_length": 128, "alphanum_fraction": 0.8095238095, "num_tokens": 71, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7490872131147275, "lm_q1q2_score": 0.5802305007731706}}
{"text": "\r\n\\input{SingleAssignmentSetup.tex}\r\n\\input{../WeekTitles.tex}\r\n\\usepackage{bbding} % for Checkmarkbold\r\n\\begin{document}\r\n\r\n\\newcommand{\\ub}{\\underbrace}\r\n\r\n\\begin{center}\r\n\\subsection*{MNTC P01 - Week \\#5 - \\WeekTitleFive}\r\n\\end{center}\r\n\r\n\r\n\\subsection*{Substitution Integrals}\r\n\r\n\\begin{Question}\r\n  To practice computing integrals using substitutions, do as many of\r\n  the problems from this section as you feel you need. The problems\r\n  trend from simple to the more complex.\r\n  \r\n{\\bf Note}: In the solutions to these problems, we always show the\r\n  substitution used.  On a test, if you can compute the\r\n  antiderivative in your head, you do {\\em not} need to go through\r\n  all the steps shown here.  They are included in these solutions as\r\n  learning \\& comprehension aid.\r\n\\end{Question}\r\n\r\n\\begin{enumerate}[1.]\r\n\\begin{multicols}{2}\r\n% **************\r\n\\item \\begin{Question}$\\ds \\int t e^{t^2}~dt$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & = t^2, \\mbox{ so } dw=2t~dt \\mbox{ or } dt = \\frac{1}{2t}~dw\\\\\r\n\\int t e^{t^2}~dt & = \\int t e^w \\left(\\frac{1}{2t}~dw\\right) = \\int \\frac{1}{2} e^w dw = \\frac{1}{2} e^w + C = \\frac{1}{2} e^{t^2} + C \\\\\r\n\\mbox{ Check: } & \\ddt{} \\frac{1}{2} e^{t^2} + C = \\frac{1}{2} (2t) e^{t^2} = t e^{t^2} \\\\\r\n& = \\mbox{ original function in integral. \\CheckmarkBold}\r\n  \\end{align*}\r\n\\end{Solution}\r\n\\item \\begin{Question}$\\ds \\int e^{3x}~dx$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & = 3x, \\mbox{ so } dw=3~dx \\mbox{ or } dx = \\frac{dw}{3}\\\\\r\n\\int e^{3x}~dx & =  \\int e^w \\frac{dw}{3} = \\frac{1}{3} e^w + C = \\frac{1}{3} e^{3x} + C \\\\\r\n\\mbox{ Check: } & \\ddx{} \\frac{1}{3} e^{3x} + C = \\frac{1}{3} e^{3x}(3) =  e^{3x} \\\\\r\n& = \\mbox{ original function in integral. \\CheckmarkBold}\r\n\\end{align*} \r\n\\end{Solution}\r\n\r\n{\\bf NOTE: we will not show the differentiation check for\r\n  any later questions, as the process is always the same, and you\r\n  should be comfortable enough with the derivative rules to do the\r\n  checking independently.  
If you are uncertain about any problem,\r\n  contact your instructor.}\r\n\\item \\begin{Question}$\\ds \\int e^{-x}~dx$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & = -x, \\mbox{ so } dw=-1~dx \\mbox{ or } dx = (-dw) \\\\\r\n    \\int e^{-x}~dx & =  \\int e^w (-dw) =  e^w + C = - e^{-x} + C \\\\\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int 25e^{-0.2t}~dt$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & = -0.2t, \\mbox{ so } dw=-0.2~dt \\\\\r\n \\mbox{ or } dt & = \\frac{dw}{-0.2} = -5 dw \\\\\r\n    \\int 25e^{-0.2t}~dt & =  \\int 25 e^w (-5 dw) = -125 e^w + C \\\\\r\n&= -125 e^{-0.2t} + C \\\\\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int t \\cos(t^2)~dt$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & =t^2, \\mbox{ so } dw=2t~dt \\\\\r\n    \\mbox{ or } dt & = \\frac{dw}{2t} \\\\\r\n    \\int t \\cos(t^2)~dt & = \\int  t \\cos(w)~\\frac{dw}{2t} = \\frac{1}{2}\\sin(w) + C \\\\\r\n& = \\frac{1}{2} \\sin(t^2) + C\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\sin(2x)~dx$\\end{Question}\r\n\\begin{Solution}\r\n\\begin{align*}\r\n  \\text{Let }  w = 2x, & \\text{ so } dw = 2 dx, \\text{ or } dx = \\frac{1}{2} dw    \\\\\r\n  \\int \\sin(2x) dx & = \\frac{1}{2} \\int \\sin(w) dw  = - \\frac{1}{2} \\cos(w) + C \\\\ \r\n& = -\\frac{1}{2} \\cos(2x) + C\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\sin(3-t)~dt$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & =3-t, \\mbox{ so } dw=-1~dt \\mbox{ or } dt = -dw \\\\\r\n    \\int \\sin(3-t)~dt & = \\int \\sin(w) ~(-1) dw = -( -\\cos(w)) + C \\\\\r\n& = \\cos(3-t)  +C \r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int xe^{-x^2}~dx$\\end{Question}\r\n\\begin{Solution}\r\n\\begin{align*}\r\n  \\text{Let }  w = -x^2, & \\text{ so } dw = -2x ~dx, \\text{ or } dx = \\frac{-1}{2x} dw    \\\\\r\n  \\int x e^{-x^2}~dx & = \\int x e^w \\frac{-1}{2x} ~dw = \\int \\frac{-1}{2} e^w ~dw = -\\frac{1}{2} e^w + C \\\\\r\n& = -\\frac{1}{2} e^{-x^2} + C\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int (r+1)^3~dr$\\end{Question}\r\n\\begin{Solution}\r\n\\begin{align*}\r\n  \\text{Let }  w = (r+1), & \\text{ so } dw = dr \\\\\r\n  \\int (r+1)^3~dr & = \\int w^3~dw = \\frac{w^4}{4} + C = \\frac{(r+1)^4}{4} + C\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int y(y^2 + 5)^8~dy$\\end{Question}\r\n\\begin{Solution}\r\n\\begin{align*}\r\n  \\text{Let }  w = y^2+ 5, & \\text{ so } dw = 2y ~dy, \\text{ or } dy = \\frac{1}{2y} dw    \\\\\r\n  \\int y(y^2 + 5)^8~dy &  = \\int y w^8 \\frac{1}{2y}~dw = \\frac{1}{2} \\int w^8 ~dw = \\frac{1}{2} \\frac{w^9}{9} + C \\\\\r\n& = \\frac{1}{18} (y^2 + 5)^9 + C \r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int t^2(t^3-3)^{10}~dt$\\end{Question}\r\n\\begin{Solution}\r\n\\begin{align*}\r\n  \\text{Let }  w = t^3 -3, & \\text{ so } dw = 3t^2 ~dt, \\text{ or } dt = \\frac{1}{3t^2} dw    \\\\\r\n  \\int t^2 (t^3-3)^{10}~dt &  = \\int t^2 w^{10}\\frac{1}{3t^2}~dw = \\frac{1}{3} \\int w^{10}~dw  \\\\\r\n& = \\frac{1}{3} \\frac{w^{11}}{11} + C \r\n = \\frac{1}{33} (t^3-3)^{11} + C\r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item 
\begin{Question}$\ds \int x^2(1+2x^3)^2~dx$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = 1+2x^3, & \text{ so } dw = 6x^2 ~dx, \text{ or } dx = \frac{1}{6x^2} dw    \\\r\n  \int x^2(1+2x^3)^2~dx & = \int x^2 w^2 \frac{1}{6x^2}~dw = \frac{1}{6} \frac{w^3}{3} + C \\\r\n& = \frac{1}{18} (1+2x^3)^3 + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int x(x^2 + 3)^2~dx$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = x^2+3, & \text{ so } dw = 2x ~dx, \text{ or } dx = \frac{1}{2x} dw    \\\r\n  \int x(x^2 + 3)^2~dx & = \int x w^2 \frac{1}{2x}~dw = \frac{1}{2} \frac{w^3}{3} + C \\\r\n& = \frac{1}{6} (x^2 + 3)^3 + C\r\n\end{align*}\r\n\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int x(x^2-4)^{7/2}~dx$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = x^2-4, & \text{ so } dw = 2x ~dx, \text{ or } dx = \frac{1}{2x} dw    \\\r\n  \int x(x^2 - 4)^{7/2}~dx & = \int x w^{7/2} \frac{1}{2x}~dw = \frac{1}{2} \frac{w^{9/2}}{9/2} + C \\\r\n& = \frac{1}{9} (x^2 - 4)^{9/2} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int y^2(1+y)^2~dy$\end{Question}\r\n\begin{Solution}\r\n\r\n  Trick (substitution) question: substitution seems not to work well\r\n  here, because both factors have $y^2$ in them, so neither one is the\r\n  derivative of the other.  We're better off expanding the $(1+y)^2$\r\n  factor and then integrating each term separately:\r\n\begin{align*}\r\n  \int y^2(1+y)^2 dy  & = \int y^2(1 + 2y + y^2) dy   = \int y^2 + 2y^3 + y^4~dy\\\r\n&    = \frac{1}{3}y^3 + \frac{2}{4}y^4 + \frac{1}{5}y^5 + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int (2t-7)^{73}~dt$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = 2t-7, & \text{ so } dw = 2 ~dt, \text{ or } dt = \frac{1}{2} dw    \\\r\n  \int (2t-7)^{73}~dt & = \int  w^{73} \frac{1}{2}~dw = \frac{1}{2} \frac{w^{74}}{74} + C \\\r\n& = \frac{1}{148} (2t-7)^{74} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{1}{y+5}~dy$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = y+5, & \text{ so } dw = dy, \text{ making }  \\\r\n  \int \frac{1}{y+5}~dy & = \int \frac{1}{w} dw = \ln | w | + C  = \r\n  \ln|y+5| + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{1}{\sqrt{4-x}}~dx$\end{Question}\r\n\r\n\begin{Solution}\r\n    Rewrite integral: \r\n  \begin{align*}\r\n\int (4-x)^{-1/2}~dx &  \\\r\n  \text{Let }  w = 4-x, & \text{ so } dw = -1 ~dx, \text{ or } dx = -dw    \\\r\n  \int (4-x)^{-1/2}~dx & = \int w^{-1/2} (-1) dw = -\frac{w^{1/2}}{1/2} \\\r\n& = -2(4-x)^{1/2} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int (x^2 + 3)^2~dx$\end{Question}\r\n\begin{Solution}\r\n\r\nAnother non-substitution integral: since there is no $x$ term outside the $(x^2 +3)$, it is easier to expand the square in this case and integrate term by term.\r\n\begin{align*}\r\n  \int (x^2 + 3)^2~dx & = \int x^4 + 6x^2 + 9 ~dx = \frac{x^5}{5} + \frac{6x^3}{3} + 9x + C  \\\r\n& = \frac{x^5}{5} + 2x^3 + 9x + C  \r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int 
x^2e^{x^3+1}~dx$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = x^3+1, & \text{ so } dw = 3x^2 ~dx, \text{ or } dx = \frac{1}{3x^2} dw    \\\r\n  \int x^2e^{x^3 + 1}~dx & = \int x^2 e^w \frac{1}{3x^2} ~dw = \frac{1}{3} e^w + C \\\r\n& = \frac{1}{3} e^{x^3 + 1} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \sin \theta (\cos\theta + 5)^7~d\theta$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = \cos \theta + 5, & \text{ so } dw = - \sin \theta d \theta, \text{ making }  \\\r\n  \int \sin \theta (\cos \theta + 5)^7 d \theta & = -\int w^7 dw = -\frac{w^8}{8} + C  \\\r\n& = -\frac{1}{8} (\cos \theta + 5)^8 + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \sqrt{\cos(3t)} \sin(3t)~dt $\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = \cos(3t)& \text{ so } dw = -3\sin(3t)  ~dt, \\\r\n\text{ or } dt & =\frac{-1}{3\sin(3t)} dw    \\\r\n  \int \sqrt{\cos(3t)} \sin(3t) ~dt \r\n& = \int w^{1/2} \sin(3t) \frac{-1}{3\sin(3t)}~dw \\\r\n& = \frac{-1}{3} \frac{w^{3/2}}{3/2} + C \\\r\n& = \frac{-2}{9} (\cos(3t))^{3/2} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \sin^6 \theta \cos \theta ~d\theta$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = \sin(\theta)& \text{ so } dw = \cos\theta  ~d\theta, \\ \r\n\text{ or } d\theta &= \frac{1}{\cos\theta} dw    \\\r\n  \int (\sin\theta)^6 \cos\theta ~d\theta & =  \int w^6 \cos\theta \frac{1}{\cos\theta} dw = \frac{w^7}{7} + C \\ \r\n& = \frac{1}{7} \sin^7\theta  + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \sin^3\alpha \cos \alpha d\alpha$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = \sin(\alpha)& \text{ so } dw = \cos\alpha  ~d\alpha, \text{ or } d\alpha = \frac{1}{\cos\alpha} dw    \\\r\n  \int (\sin\alpha)^3 \cos\alpha ~d\alpha & =  \int w^3 \cos\alpha \frac{1}{\cos\alpha} dw = \frac{w^4}{4} + C \\\r\n& = \frac{1}{4} \sin^4\alpha  + C\r\n\end{align*}\r\n\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \sin^6(5\theta) \cos(5\theta)~d\theta$\end{Question}\r\n\begin{Solution}\r\n\begin{align*}\r\n  \text{Let }  w = \sin(5\theta)& \text{ so } dw = 5\cos(5\theta)  ~d\theta, \\\r\n \text{ or } d\theta & = \frac{1}{5\cos(5\theta)} dw    \\\r\n  \int (\sin(5\theta))^6 \cos(5\theta)~d\theta & =  \int w^6 \cos(5\theta) \frac{1}{5\cos(5\theta)} dw \\\r\n& = \frac{1}{5} \frac{w^7}{7} + C = \frac{1}{35} \sin^7(5\theta)  + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \tan(2x)~dx$\end{Question}\r\n\r\n\begin{Solution}\r\n    Rewrite integral: \r\n  \begin{align*}\r\n\int \frac{\sin(2x)}{\cos(2x)}~dx &  \\\r\n    \text{Let }  w = \cos(2x), & \text{ so } dw = -2\sin(2x) ~dx, \\\r\n\text{ or } dx & = \frac{-1}{2\sin(2x)} ~dw    \\\r\n    \int \frac{\sin(2x)}{\cos(2x)}~dx & = \int \frac{\sin(2x)}{w}\r\n    \frac{-1}{2 \sin(2x)} ~dw = \frac{-1}{2} \int w^{-1}~dw \\\r\n& = \frac{-1}{2} \ln(|w|) + C = \frac{-1}{2} \ln( |\cos(2x)|) + C\r\n  \end{align*}\r\n\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds 
\int \frac{(\ln z)^2}{z} ~dz$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = \ln(z), & \text{ so } dw = \frac{1}{z} ~dz, \text{ or } dz = z ~dw    \\\r\n\int \frac{(\ln z)^2}{z}~dz & = \int \frac{w^2}{z} (z ~dw) = \frac{w^3}{3} + C \\\r\n& = \frac{(\ln z)^3}{3} + C\r\n  \end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{e^t +1}{e^t+t}~dt$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = e^t+t, & \text{ so } dw = (e^t + 1) ~dt, \text{ or } dt = \frac{1}{e^t + 1} ~dw    \\\r\n\int \frac{e^t + 1}{e^t + t} ~dt & = \int \frac{e^t + 1 }{w} \frac{1}{e^t + 1}~dw = \int w^{-1}~dw \\ \r\n& = \ln(|w|) + C = \ln(|e^t + t|) + C\r\n  \end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{y}{y^2 + 4}~dy$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = y^2 + 4, & \text{ so } dw = 2y ~dy, \text{ or } dy = \frac{1}{2y} ~dw    \\\r\n\int \frac{y}{y^2 + 4} ~dy & = \int \frac{y}{w} \frac{1}{2y}~dw = \frac{1}{2} \int w^{-1}~dw \\\r\n& = \frac{1}{2}\ln(|w|) + C = \frac{1}{2}\ln(|y^2 + 4|) + C\r\n  \end{align*}\r\n  Note that $y^2 + 4$ is always positive, so we could remove the\r\n  absolute values if we wished, as they are redundant in this case.\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{\cos(\sqrt{x})}{\sqrt{x}}~dx$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = \sqrt{x}, & \text{ so } dw =\frac{1}{2} x^{-1/2}~dx, \text{ or } dx = 2 \sqrt{x} ~dw    \\\r\n\int \frac{\cos(\sqrt{x})}{\sqrt{x}} ~dx & = \int \frac{\cos(w)}{\sqrt{x}} (2 \sqrt{x}~dw) = 2 \int \cos(w)~dw \\\r\n& = 2 \sin(w) + C = 2 \sin(\sqrt{x}) + C \r\n  \end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{e^{\sqrt{y}}}{\sqrt{y}}~dy$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = \sqrt{y}, & \text{ so } dw =\frac{1}{2} y^{-1/2}~dy, \text{ or } dy = 2 \sqrt{y} ~dw    \\\r\n\int \frac{e^{\sqrt{y}}}{\sqrt{y}} ~dy & = \int \frac{e^w}{\sqrt{y}} (2 \sqrt{y}~dw) = 2 \int e^w~dw \\\r\n&= 2 e^w + C = 2 e^{\sqrt{y}} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{1+e^x}{\sqrt{x+e^x}}~dx$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = x + e^x, & \text{ so } dw =(1 + e^x) ~dx, \text{ or } dx = \frac{1}{1 + e^x} ~dw    \\\r\n\int \frac{1 + e^x}{\sqrt{x+e^x}} ~dx & = \int \frac{1+e^x}{\sqrt{w}} \left(\frac{1}{1+e^x} ~dw\right) = \int w^{-1/2} ~dw \\\r\n& = \frac{w^{1/2}}{1/2}+C = 2 \sqrt{x + e^x} + C\r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{e^x}{2+e^x}~dx$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = 2 + e^x, & \text{ so } dw =e^x ~dx, \text{ or } dx = \frac{1}{e^x} ~dw    \\\r\n\int \frac{e^x}{2+e^x} ~dx & = \int \frac{e^x}{w} \left(\frac{1}{e^x} ~dw\right) = \int w^{-1} ~dw \\\r\n& = \ln(|w|)+C   = \ln(|2 + e^x|)+C \r\n\end{align*}\r\n\end{Solution}\r\n%***************\r\n\item \begin{Question}$\ds \int \frac{x+1}{x^2+2x+19}~dx$\end{Question}\r\n\begin{Solution}\r\n  \begin{align*}\r\n    \text{Let }  w = x^2 + 2x + 19, & \text{ so } dw =(2x + 2) ~dx, \\\r\n\text{ or } dx & = 
\\frac{1}{2(x+1)} ~dw   \r\n\\end{align*}\r\n\\begin{align*}\r\n\\int \\frac{x+1}{x^2 + 2x + 19} ~dx & = \\int \\frac{x+1}{w} \\left(\\frac{1}{2(x+1)} ~dw\\right) \\\\\r\n& = \\frac{1}{2}\\int w^{-1} ~dw  = \\frac{1}{2} \\ln(|w|)+C \\\\\r\n& = \\frac{1}{2}\\ln(|x^2 + 2x + 19|)+C \r\n\\end{align*}\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\frac{t}{1+3t^2}~dt$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\text{Let }  w = 1 + 3t^2, & \\text{ so } dw =6t ~dt, \\text{ or } dt = \\frac{1}{6t} ~dw    \\\\\r\n\\int \\frac{t}{1+3t^2} ~dt & = \\int \\frac{t}{w} \\left(\\frac{1}{6t} ~dw\\right) =  \\frac{1}{6} \\int w^{-1} ~dw \\\\\r\n& = \\frac{1}{6} \\ln(|w|)+C = \\frac{1}{6} \\ln(|1 + 3t^2|)+C \r\n\\end{align*}\r\nIn this case we can remove the absolute values because $1 + 3t^2$\r\nwill always be positive, so the absolute values are redundant.\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\frac{e^x-e^{-x}}{e^x + e^{-x}}~dx$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\text{Let }  w = e^x + e^{-x}, & \\text{ so } dw =(e^x - e^{-x}) ~dx, \\\\\r\n\\text{ or } dx & = \\frac{1}{e^x - e^{-x}} ~dw    \\\\\r\n\\int \\frac{e^x - e^{-x}}{e^x+e^{-x}} ~dx & = \\int \\frac{e^x-e^{-x}}{w} \\left(\\frac{1}{e^x-e^{-x}} ~dw\\right) \\\\\r\n& = \\int w^{-1} ~dw = \\ln(|w|)+C \\\\\r\n& = \\ln(|e^x+e^{-x}|)+C \r\n\\end{align*}\r\nIn this case we can remove the absolute values because $e^x + e^{-x}$\r\nwill always be positive, so the absolute values are redundant.\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\frac{(t+1)^2}{t^2}~dt$ \\end{Question}\r\n\\begin{Solution}\r\n  This question is probably more easily solved by expanding than by using substitution.\r\n  \\begin{align*}\r\n    \\int\\frac{(t+1)^2}{t^2}~dt&= \\int \\frac{t^2 +2t + 1}{t^2}~dt\r\n= \\int 1 + \\frac{2}{t} + \\frac{1}{t^2} ~dt \r\n\\\\\r\n& = t + 2\\ln(|t|) -t^{-1} + C \\\\\r\n& = t + 2\\ln(|t|) -\\frac{1}{t} + C\r\n  \\end{align*}\r\n\r\n\\end{Solution}\r\n%***************\r\n\\item \\begin{Question}$\\ds \\int \\frac{x \\cos(x^2)}{\\sqrt{\\sin(x^2)}}~dx$\\end{Question}\r\n\\begin{Solution}\r\n  \\begin{align*}\r\n    \\mbox{Let }w & =\\sin(x^2), \\mbox{ so } dw=2x \\cos(x^2)~dx \\mbox{ or } dx \\\\\r\n& = \\frac{dw}{2x \\cos(x^2)} \\\\\r\n\\end{align*}\r\n\\begin{align*}\r\n    \\int \\frac{x \\cos(x^2)}{\\sqrt{\\sin(x^2)}}~dx & = \\int  x \\cos(x^2) \\, w^{-1/2} \\, \\frac{1}{2x \\cos(x^2)}~dw \\\\\r\n& = \\frac{1}{2} \\int w^{-1/2}~dw = \\frac{1}{2} \\frac{w^{1/2}}{1/2} + C \\\\\r\n& = \\sqrt{\\sin(x^2)} + C\r\n\\end{align*}\r\n\\end{Solution}\r\n\r\n\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^\\pi \\cos(x + \\pi) ~dx$    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n\\begin{align*}    \r\n\\ds \\int_0^\\pi \\cos(x + \\pi) ~dx & = \\sin(x + \\pi)\\Big|_0^\\pi = \\sin(2 \\pi) - \\sin(\\pi) \\\\\r\n& = 0 - 0 = 0\r\n\\end{align*}    \r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int_0^{1/2} \\cos(\\pi x)~dx$ \r\n  \\end{Question}\r\n  \\begin{Solution}\r\n  \\begin{align*}\r\n   \\ds \\int_0^{1/2} \\cos(\\pi x)~dx & = \\frac{1}{\\pi} \\sin(\\pi x) \\Big|_0^{1/2}  \\\\\r\n& = \\frac{1}{\\pi}\\left[ \\sin(\\pi/2) - \\sin(0)\\right] = \\frac{1}{\\pi}[1-0] = \\frac{1}{\\pi}\r\n  \\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int_0^{\\pi/2} e^{-\\cos(\\theta)} 
\\sin(\\theta)~d\\theta $\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nIn this solution, we will use the substitution just to find the antiderivative, but\r\nthen we will switch back to the original integral variable $\\theta$ to evaluate the limits.\r\n    \\begin{align*}\r\n    \\mbox{Let }w & =-\\cos(\\theta), \\mbox{ so } dw= \\sin(\\theta)~d\\theta \\mbox{ or } d\\theta \r\n = \\frac{dw}{\\sin(\\theta)} \r\n\\end{align*}\r\n\\begin{align*}\r\n  \\int_0^{\\pi/2} e^{-\\cos(\\theta)} \\sin(\\theta)~d\\theta \r\n  & = \\int_{\\theta = 0 }^{\\theta = \\pi/2} e^{w}\\sin(\\theta) \\left( \\frac{dw}{\\sin(\\theta)}\\right)  \\\\\r\n   & = \\int_{\\theta = 0 }^{\\theta = \\pi/2} e^{w} ~dw \r\n   = e^{w} \\Big|_{\\theta = 0 }^{\\theta = \\pi/2}  \\\\\r\n  & = e^{-\\cos(\\theta)} \\Big|_{\\theta = 0 }^{\\theta = \\pi/2}  \\\\\r\n& = e^0 - e^{-1} = 1 - \\frac{1}{e}\r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item \\label{q:xex}\r\n  \\begin{Question}\r\n   $\\ds \\int_1^2 2xe^{x^2}~dx$ \r\n    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    In this solution, we will convert the limits of integration to the substitution variable. \r\n    \\begin{align*}\r\n    \\mbox{Let }w & =x^2, \\mbox{ so } dw= 2x~dx \\mbox{ or } dx = (1/2x) dw \\\\\r\n\\mbox{ then also } x& =1 \\rightarrow w = 1^2 = 1,  \\\\\r\n\\mbox{and } x& =2 \\rightarrow w = 2^2 = 4. \r\n\\end{align*}\r\n    \\begin{align*}\r\n\\int_{x=1}^{x=2} 2xe^{x^2}~dx\r\n& = \\int_{w=1}^{w=4} 2xe^{w} \\left( \\frac{1}{2x} ~dw \\right) \\\\\r\n& = \\int_{w=1}^{w=4} e^{w} ~dw \r\n = e^{w} \\Big|_{w=1}^{w=4} \\\\\r\n& = e^{4} - e^{1} \r\n    \\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int _1^8 \\frac{e^{\\sqrt[3]{x}}}{\\sqrt[3]{x^2}}~dx$\r\n    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    In this solution, we will convert the limits of integration to the substitution variable. \r\n    \\begin{align*}\r\n    \\mbox{Let }w & =x^{1/3}, \\mbox{ so } dw= \\frac{1}{3}x^{-2/3}~dx \\mbox{ or } dx = 3 x^{2/3} dw \\\\\r\n\\mbox{ then also } x& =1 \\rightarrow w = (1)^{1/3} = 1,  \\\\\r\n\\mbox{and } x& =8 \\rightarrow w = 8^{1/3} = 2. 
\r\n\\end{align*}\r\n\\begin{align*}\r\n  \\ds \\int _{x=1}^{x=8} \\frac{e^{\\sqrt[3]{x}}}{\\sqrt[3]{x^2}}~dx \r\n& = \\int_{w=1}^{w=2} \\frac{e^{w}}{x^{2/3}}~\\left(  3 x^{2/3} ~dw\\right)  \\\\\r\n& = \\int_{w=1}^{w=2} 3 e^{w}~dw \r\n= 3 e^{w} \\Big|_{w=1}^{w=2} \r\n= 3 e^{2} - 3 e^{1}\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int_{-1}^{e-2} \\frac{1}{t+2}~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n    \\mbox{Let }w & =t+2, \\mbox{ so } dw= dt  \\\\\r\n\\mbox{ then also } t& =-1 \\rightarrow w = (-1) + 2 = 1 \\\\\r\n\\mbox{and } t& =e-2 \\rightarrow w = (e-2) + 2 = e \r\n\\end{align*}\r\n\\begin{align*}\r\n   \\int_{t=-1}^{t=e-2} \\frac{1}{t+2}~dt\r\n   & = \\int_{w=1}^{w=e} \\frac{1}{w}~dw \r\n= \\ln(|w|) \\Big|_{w=1}^{w=e} \\\\\r\n& = \\ln(e) - \\ln(1) = 1 - 0 = 1  \r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int_1^4 \\frac{\\cos{\\sqrt{x}}}{\\sqrt{x}}~dx$ \r\n    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n    \\mbox{Let }w & =x^{1/2} \\mbox{ so } dw= \\frac{1}{2} x^{-1/2}~dx \\mbox{ or } dx = 2 x^{1/2} ~dw  \\\\\r\n\\mbox{ then also } x& =1 \\rightarrow w = (1)^{1/2} = 1 \\\\\r\n\\mbox{and } x& =4 \\rightarrow w = (4)^{1/2} = 2 \r\n    \\end{align*}\r\n    \\begin{align*}\r\n    \\int_{x=1}^{x=4} \\frac{\\cos{\\sqrt{x}}}{\\sqrt{x}}~dx\r\n& =     \\int_{w=1}^{w=2} \\frac{\\cos(w)}{x^{1/2} } (2 x^{1/2} ~dw)  \\\\\r\n& =     \\int_{w=1}^{w=2} 2\\cos(w) ~dw \r\n= 2 \\sin(w) \\Big|_{w=1}^{w=2}  \\\\\r\n& = 2 \\sin(2) - 2 \\sin(1)\r\n    \\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int_0^2 \\frac{x}{(1+x^2)^2}~dx$\r\n    \r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n    \\mbox{Let }w & =1 + x^2 \\mbox{ so } dw= 2x ~dx \\mbox{ or } dx = \\frac{1}{2x} ~dw  \\\\\r\n\\mbox{ then also } x& =0 \\rightarrow w = 1 + 0^2 = 1 \\\\ \r\n\\mbox{and } x& =2 \\rightarrow w = 1 + 2^2 = 5 \r\n    \\end{align*}\r\n    \\begin{align*}\r\n      \\int_{x=0}^{x=2} \\frac{x}{(1+x^2)^2}~dx\r\n& = \\int_{w=1}^{w=5} \\frac{x}{w^2} \\left(\\frac{1}{2x} ~dw \\right)   \\\\\r\n& =  \\frac{1}{2} \\int_{w=1}^{w=5} \\frac{1}{w^2}  ~dw \r\n= \\frac{1}{2} \\left(\\frac{-1}{w}\\right) \\Big|_{w=1}^{w=5}   \\\\\r\n& = \\frac{1}{2} \\left[ \\frac{-1}{5} - \\frac{-1}{1} \\right] \r\n= \\frac{1}{2} \\frac{4}{5} = \\frac{2}{5} \r\n    \\end{align*}\r\n  \\end{Solution}\r\n%*********************\r\n\\item \r\n\\begin{Question}\r\n  If appropriate, evaluate the following integrals by substitution.\r\n  If substitution is not appropriate, say so, and do not evaluate.\r\n\r\n\\begin{enumerate}[(a)]\r\n\\item $\\ds \\int x \\sin(x^2)~dx$\r\n\\item $\\ds \\int x^2 \\sin(x) ~dx$\r\n\\item $\\ds \\int \\frac{x^2}{1+x^2} ~dx$\r\n\\item $\\ds \\int \\frac{x}{(1+x^2)^2} ~dx$\r\n\\item $\\ds \\int x^3 e^{x^2} ~dx$\r\n\\item $\\ds \\int \\frac{\\sin(x)}{2 + \\cos(x)}~dx$\r\n\\end{enumerate}\r\n\r\n\\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item This integral can be evaluated using integration by\r\n      substitution. 
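The signal that substitution will work is that the derivative of the inside function appears, up to a constant factor, in the integrand: here $\\frac{d}{dx}(x^2) = 2x$, and the integrand supplies the factor $x$. 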
We use $w = x^2$, $dw = 2x~dx$.\r\n      \\begin{align*}\r\n        \\int x \\sin(x^2)~ dx \r\n        & = \\frac{1}{ 2} \\int \\sin(w)~dw   \\\\\r\n        & = - \\frac{1}{ 2} \\cos(w) + C \r\n        = \\frac{-1}{ 2} \\cos(x^2) + C\r\n      \\end{align*}\r\n    \\item This integral cannot be evaluated using a simple integration by substitution.\r\n    \\item This integral cannot be evaluated using a simple integration by substitution.\r\n    \\item This integral can be evaluated using integration by substitution. We use $w = 1 + x^2$, $dw = 2x~dx$.\r\n      \\begin{align*}\r\n        \\int \\frac{x}{ (1 + x^2)^2} dx  \r\n        & = \\frac{1}{2} \\int\\frac{ 1}{ w^2}~ dw  \r\n         = \\frac{1}{2} \\left(\\frac{-1}{ w }\\right) + C  \\\\\r\n        & = \\frac{-1}{2(1 + x^2)} + C.\r\n      \\end{align*}\r\n    \\item This integral cannot be evaluated using a simple integration by substitution.\r\n    \\item This integral can be evaluated using integration by substitution. We use $w = 2 + \\cos x, dw = -\\sin x~dx.$\r\n      \\begin{align*}\r\n\\int \\frac{\\sin x}{2 + \\cos x }~dx  \r\n& = - \\int \\frac{1}{ w}~ dw  \r\n = -\\ln |w| + C  \\\\\r\n& = -\\ln |2 + \\cos x| + C   \r\n      \\end{align*}\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n\r\n%*****************\r\n\\item \\label{q:area_start} \r\n  \\begin{Question}\r\n    Find the exact area under the graph of $f(x) = xe^{x^2}$ between\r\n    $x=0$ and $x=2$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    Since $f(x)$ is always positive, the area under the graph is equal\r\n    to the integral of $f(x)$ on that interval.  Thus we need to\r\n    evaluate the definite integral\r\n    \\begin{align*}\r\n\\int^2_0 xe^{x^2}~dx.\r\n\\end{align*}\r\nUsing the same substitution $w = x^2$ as in Question \\ref{q:xex} (with different limits), we find\r\n    \\begin{align*}\r\n\\int^2_0 xe^{x^2} ~dx \r\n= \\frac{1}{2} e^{x^2} \\Big|_0^2\r\n= \\frac{1}{2} (e^4 - 1).\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    Find the exact area under the graph of $f(x) = 1/(x+1)$ between\r\n    $x=0$ and $x=2$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   Since $f(x) = 1/(x + 1)$ is positive on the interval $x = 0$ to $x = 2$, the\r\narea under the graph and the integral are equal to one another.\r\n\\begin{align*}\r\n\\int^2_0 \\frac{1}{ x + 1}~ dx \r\n& = \\ln(x + 1) \\Big|^2_0 \r\n= \\ln 3 - \\ln 1 = \\ln 3.\r\n\\end{align*}\r\n\r\nThe area is $\\ln 3 \\approx 1.0986$. \r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n Find $\\ds \\int 4x( x^2 + 1)~dx$ using two methods: \r\n \\begin{enumerate}[(a)]\r\n\\item Do the multiplication first, and then antidifferentiate. \r\n\\item Use the substitution $w = x^2 + 1$. \r\n\\item Explain how the expressions from parts (a) and (b) are different. Are they both correct?   \r\n \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item $\\ds \\int 4x(x^2 + 1)~dx = \\int (4x^3 + 4x)~dx = x^4 + 2x^2 + C.$\r\n\\item If $w = x^2 + 1$ then $dw = 2x~dx$:\r\n  \\begin{align*}\r\n\\int 4x(x^2 + 1)~dx = \\int 2w ~dw = w^2 + C = (x^2 + 1)^2 + C.\r\n  \\end{align*}\r\n\\item The expressions from parts (a) and (b) look different, but they\r\n  are both correct. Note that the answer from (b) can be expanded\r\n  as $$(x^2 + 1)^2 + C = x^4 + 2x^2 +\\underbrace{1+C}_{\\mbox{ new const.}}. 
$$In other words, the\r\n  expressions from parts (a) and (b) differ only by a constant, so\r\n  they are both correct antiderivatives.\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    \\begin{enumerate}[(a)]\r\n\\item Find $\\ds \\int \\sin\\theta \\cos \\theta ~d\\theta$\r\n\\item You probably solved part (a) by making the substitution $w = \\sin\\theta$ or $w = \\cos\\theta$. (If not, go back and do it that way.) Now find $\\ds \\int \\sin \\theta \\cos\\theta ~d\\theta$ by making the other substitution. \r\n\\item There is yet another way of finding this integral which involves\r\n  the trigonometric identities: \r\n\\begin{align*}\r\n\\sin(2\\theta) &= 2\\sin\\theta \\cos \\theta \\\\\r\n\\cos(2\\theta) &= \\cos^2 \\theta - \\sin^2 \\theta.\r\n\\end{align*} \r\nFind $\\ds \\int \\sin \\theta \\cos \\theta d\\theta$ using one of these\r\nidentities and then the substitution $w = 2\\theta$.\r\n\\item You should now have three different expressions for the\r\n  indefinite integral $\\ds \\int \\sin \\theta \\cos\\theta d\\theta$. Are they really different?\r\n  Are they all correct? Explain.\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item We first try the substitution $w = \\sin \\theta$, so $dw =\r\n      \\cos \\theta ~d\\theta$. Then\r\n    \\begin{align*}\r\n\\int \\sin \\theta \\cos \\theta ~d\\theta =\r\n\\int w ~dw = \\frac{w^2}{ 2} + C = \\frac{\\sin^2 \\theta}{ 2} + C.\r\n\\end{align*}\r\n\\item If we instead try the substitution $w = \\cos \\theta$, $dw = -\\sin \\theta ~d\\theta$, we get\r\n\\begin{align*}\r\n\\int \\sin \\theta \\cos \\theta ~d\\theta \r\n= - \\int w ~dw \r\n= - \\frac{w^2}{ 2} + C \r\n= - \\frac{\\cos^2 \\theta}{ 2} + C.\r\n\\end{align*}\r\n\\item Once we note that $\\sin(2\\theta) = 2 \\sin \\theta \\cos \\theta$ we can also say\r\n\\begin{align*}\r\n\\int \\sin \\theta \\cos \\theta ~d\\theta \r\n= \\frac{1}{ 2} \\int \\sin (2\\theta) ~d\\theta\r\n\\end{align*}\r\nSubstituting $w = 2\\theta$,  $dw = 2 ~d\\theta$, the above equals\r\n\\begin{align*}\r\n\\frac{1}{ 4} \\int \\sin w ~dw \r\n= - \\frac{\\cos w}{ 4} + C \r\n= - \\frac{\\cos 2\\theta} 4 + C.\r\n\\end{align*}\r\n\\item All these answers are correct. Although they have different\r\n  forms, they differ from each other only in terms of a constant, and\r\n  thus they are all acceptable antiderivatives.\r\n\r\nFor example, \r\n\\begin{align*}\r\n1-\\cos^2 \\theta & = \\sin^2 \\theta  \\\\\r\n\\mbox{ so } \\underbrace{\\frac{\\sin^2 \\theta}{ 2 }}_{\\mbox{Answer (a)}}\r\n&  = \\frac{1 - \\cos^2 \\theta }{2}\r\n = \\underbrace{-\\frac{\\cos^2 \\theta }{2}}_{\\mbox{Answer (b)}} + \\frac{1}{2}\r\n\\end{align*}\r\nThus the first two expressions differ only by a constant.\r\n\r\nSimilarly, $\\cos(2\\theta) = \\cos^2 \\theta - \\sin^2 \\theta = 2 \\cos^2 \\theta - 1$, so \r\n\\begin{align*}\r\n\\underbrace{-\\frac{\\cos (2\\theta)}{ 4}}_{\\mbox{Answer (c)}}\r\n= \\underbrace{\\frac{-\\cos^2 \\theta}{ 2}}_{\\mbox{Answer (b)}} +\\frac{ 1}{4}\r\n\\end{align*}\r\nand thus the second and third expressions differ only by a\r\nconstant. 
Of course, if the first two expressions and the last two\r\nexpressions differ only in the constant C, then the first and last\r\nonly differ in the constant as well, so they are all equally valid\r\nantiderivatives.\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\\end{multicols}\r\n\r\n\\hrulefill\r\n\\subsection*{Substitution Integrals - Applications}\r\n\r\n\\begin{multicols}{2}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    Let $f(t)$ be the rate of flow, in cubic meters per hour, of a\r\n    flooding river at time $t$ in hours. Give an integral for the total\r\n    flow of the river:\r\n    \\begin{enumerate}\r\n    \\item Over the 3-day period, $0 \\le t \\le 72$ (since $t$ is\r\n      measured in hours).\r\n    \\item In terms of time $T$ in {\\bf days} over the same 3-day\r\n      period.\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item A time period of $\\D t$ hours with flow rate of $f(t)$ cubic\r\n      meters per hour has a flow of $f(t) \\D t$ cubic meters.  Summing\r\n      the flows, we get total flow $\\approx \\sum f(t) \\D t$, so \r\n      \\begin{align*}\r\n\\mbox{Total flow } = \\int^{72}_0 f(t) ~dt \\mbox{ cubic meters} \r\n      \\end{align*}\r\n    \\item Since 1 day is 24 hours, $t = 24T$. The constant 24 has\r\n      units (hours/day), so 24$T$ has units (hours/day) $\\times$ day =\r\n      hours.  Applying the substitution $t = 24T$ to the integral in\r\n      part (a), we get\r\n      \\begin{align*}\r\n\\mbox{Total flow }= \\int^3_0 24 f(24T) ~dT \\mbox{ cubic meters.}\r\n      \\end{align*}\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    Oil is leaking out of a ruptured tanker at the rate of $r( t) =\r\n    50e^{- 0.02t}$ thousand liters per minute.\r\n    \\begin{enumerate}[(a)]\r\n    \\item At what rate, in liters per minute, is oil leaking out at $t\r\n      = 0$? At $t = 60$? \r\n    \\item How many liters leak out during the first hour?\r\n    \\end{enumerate}\r\n    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item At time $t = 0$, the rate of oil leakage = $r(0)$ = 50\r\n      thousand liters/minute.  \\\\\r\n      At $t = 60$, rate = $r(60)$ = 15.06 thousand liters/minute.\r\n\r\n\\item To find the amount of oil leaked during the first hour, we integrate the rate from $t = 0$ to $t = 60$:\r\n  \\begin{align*}\r\n\\mbox{Oil leaked } &=\r\n\\int^{60}_0 50e^{-0.02t} dt \r\n= \\left( \\frac{- 50}{ 0.02} e^{-0.02t} \\right) \\Big|^{60}_{0} \\\\\r\n&= -2500e^{-1.2} + 2500e^0 \\\\\r\n& \\approx 1747 \\mbox{ thousand liters}.\r\n  \\end{align*}\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    If we assume that wind resistance is proportional to velocity,\r\n    then the downward velocity, $v$, of a body of mass $m$ falling\r\n    vertically is given by $$v = \\frac{mg}{ k} (1 - e^{- kt/ m})$$\r\n    where $g$ is the acceleration due to gravity and $k$ is a\r\n    constant. Find the height of the body, $h$, above the surface of\r\n    the earth as a function of time. Assume the body starts at height\r\n    $h_0$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nSince $\\ds v = dh/dt$, it follows that $h(t) =\r\n\\int\r\nv(t)~dt$ and $h(0) = h_0$. 
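The plan is to antidifferentiate $v(t)$ term by term, and then use the initial condition $h(0) = h_0$ to determine the constant of integration. 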
Since\r\n\\begin{align*}\r\nv(t) &= \\frac{mg}{k} \\left( 1 - e^{- \\frac{k}{m} t} \\right) \\\\\r\n&= \\frac{mg}{k} - \\frac{mg}{k} e^{-\\frac{k}{m} t}\r\n\\end{align*}\r\nwe have\r\n\\begin{align*}\r\nh(t) & = \\int v(t)~dt \\\\\r\n& = \\frac{mg}{k} \\int dt - \\frac{mg}{k} \\int e^{-kt/m} ~dt.\r\n\\end{align*}\r\nThe first integral is simply $\\ds \\frac{mg}{k} t + C$. \r\n\r\nTo evaluate the second integral, make the substitution $\\ds w = -\\frac{ k}{ m}t$. Then\r\n$\\ds dw = - \\frac{k}{m}~dt,$\r\nso \r\n\\begin{align*}\r\n\\int e^{-kt/m}~dt \r\n&= \\int e^w \\left(\\frac{-m}{k} ~dw\\right) \\\\\r\n&= - \\frac{m}{k}  e^w + C  \\\\\r\n&= - \\frac{m}{k}  e^{-kt/m }+ C.\r\n\\end{align*}\r\n\r\nThus\r\n\\begin{align*}\r\nh(t) \r\n&= \\int v~dt  \\\\\r\n&= \\frac{mg}{k} t - \\frac{mg}{k} \\left( - \\frac{m}{k} e^{-kt/m} \\right) + C \\\\\r\n&= \\frac{mg}{k} t + \\frac{m^2g} {k^2} e^{-kt/m} + C.\r\n\\end{align*}\r\n Since $h(0) = h_0$,\r\n \\begin{align*}\r\nh_0 & = \\frac{mg}{k} \\cdot 0 + \\frac{m^2g}{ k^2} e^0 + C;  \\\\\r\nC & = h_0 - \\frac{m^2g}{ k^2}. \\\\\r\n\\end{align*}\r\n Thus\r\n\\begin{align*}\r\nh(t) &= \\frac{mg}{k} t + \\frac{m^2g}{k^2} e^{-kt/m}  - \\frac{m^2g}{k^2} + h_0 \\\\\r\nh(t) &= \\frac{mg}{k} t - \\frac{m^2g}{k^2} \\left( 1 - e^{-kt/m}  \\right) + h_0.\r\n \\end{align*}\r\nThis formula gives the height of the object above the surface of the earth as it falls.\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    The rate at which water is flowing into a tank is $r(t)$\r\n    gallons/minute, with $t$ in minutes.\r\n    \\begin{enumerate}[(a)]\r\n    \\item Write an expression approximating the amount of water\r\n      entering the tank during the interval from time $t$ to time $t+\r\n      \\D t$, where $\\D t$ is small.\r\n    \\item Write a Riemann sum approximating the total amount of water\r\n      entering the tank between $t = 0$ and $t = 5$. Then write an\r\n      exact expression for this amount.\r\n    \\item By how much has the amount of water in the tank changed\r\n      between $t = 0$ and $t = 5$ if $r(t) = 20e^{0.02t}$?\r\n    \\item If $r(t)$ is as in part (c), and if the tank contains 3000\r\n      gallons initially, find a formula for $Q(t)$, the amount of\r\n      water in the tank at time $t$.\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item  Amount of water entering tank in a short period of time = Rate$\\times$Time = $r(t) \\D t$.\r\n    \\item Summing the contribution from each of the small intervals $\\D t$:\r\n\\begin{align*}\r\n&       \\mbox{Amount of water entering the tank } \\\\\r\n   \\approx & \\sum_{i} r(t_i) \\D t_i \r\n\\end{align*}where $\\D t = 5/n$.\r\nTaking a limit as $\\D t \\to 0$, we get the integral form instead of the sum:\r\n\\begin{align*}\r\n&\\mbox{ Exact amount of water entering the tank} \\\\\r\n& = \\int^5_0 r(t) ~dt.\r\n\\end{align*}\r\n\\item If $Q(t)$ is the amount of water in the tank at time $t$, then\r\n  $Q'(t) = r(t)$. We want to calculate net change in volume between\r\n  $t=0$ and $t=5$, or $Q(5) - Q(0)$. 
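Since $Q'(t) = r(t)$ is known, this net change can be computed directly from the rate, without first finding a formula for $Q(t)$. 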
By the Fundamental Theorem,\r\n  \\begin{align*}\r\n&  \\mbox{ Amount which has entered tank} \\\\\r\n& = Q(5) - Q(0) \\\\\r\n&  = \\int^5_0 r(t)~dt  \\\\\r\n& = \\int^5_0 20e^{0.02t}~dt  \\\\\r\n&= \\frac{20 }{0.02} e^{0.02t} \\Big|^5_0 \\\\\r\n& = 1000(e^{(0.02)(5)} - 1) \\approx 105.17 \\mbox{ gallons}.\r\n  \\end{align*}\r\n\\item By the Fundamental Theorem again,\r\n  \\begin{align*}\r\n& \\mbox{ Amount which has entered tank} \\\\\r\n& = Q(t) - Q(0)  \\\\\r\n& = \\int^t_0 r(t)~dt  \\\\\r\nQ(t) - 3000 & = \\int^t_0 20e^{0.02t}~dt  \\\\\r\n\\mbox{ so } Q(t) & = 3000 + \\int^t_0 20e^{0.02u}~du \\\\\r\n\\end{align*}\r\nNote: $t$ is already being used, so we put $u$ inside the integral; since\r\nthis is a definite integral, the variable inside the integral will disappear when\r\nwe sub in the limits. \r\n\\begin{align*}\r\nQ(t) & = 3000 + \\frac{20}{ 0.02} e^{0.02t} \\Big|^t_0 \\\\\r\n& = 3000 + 1000(e^{0.02t} - e^0) \\\\\r\n& = 1000e^{0.02t} + 2000. \\\\\r\n  \\end{align*}\r\n  This is the quantity of water in the tank at time $t$.  Note that we\r\n  can do a basic sanity check on the answer by verifying that $Q(0) =\r\n  3000$ (given), which is true for the formula we arrived at:\r\n$$Q(0) = 1000e^{0.02\\cdot 0} + 2000 = 3000$$\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    After a spill of radioactive iodine, measurements at $t=0$ showed\r\n    the ambient radiation levels at the site of the spill to be four\r\n    times the maximum acceptable limit. The level of radiation from an\r\n    iodine source decreases according to the formula $$R(t) = R_0\r\n    e^{-0.004t}$$ where $R$ is the radiation level (in millirems/\r\n    hour) at time $t$ in hours and $R_0$ is the initial radiation\r\n    level (at $t = $0).\r\n    \\begin{enumerate}[(a)]\r\n  \\item How long will it take for the site to reach an acceptable\r\n    level of radiation?\r\n  \\item Engineers look up the safe limit of radiation and find it to\r\n    be 0.6 millirems/hour.  
How much total radiation (in millirems)\r\n    will have been emitted by the time found in part (a)?\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n\\begin{enumerate}[(a)]\r\n\\item \r\n    If the level first becomes acceptable at time $t_1$, then $R_0 = 4R(t_1)$, and\r\n    \\begin{align*}\r\n\\frac{1}{ 4} R_0 & = R_0e^{-0.004 t_1} \\\\\r\n\\frac{1}{4} & = e^{-0.004t_1}\r\n    \\end{align*}\r\nTaking natural logs on both sides yields\r\n\\begin{align*}\r\n\\ln\\left( \\frac{1}{ 4}\\right) &= -0.004t_1 \\\\\r\nt_1 & = \\frac{\\ln\\left( \\frac{1}{ 4}\\right)}{-0.004} \\\\ \r\nt_1 & \\approx 346.574 \\mbox{ hours.}\r\n\\end{align*}\r\n\\item Since the initial radiation was four times the acceptable limit of 0.6 millirems/hour, we have $R_0 = 4(0.6) = 2.4$.\r\nThe rate at which radiation is emitted is $R(t) = R_0 e^{-0.004t}$, so\r\n\r\nTotal radiation emitted = $\\ds \\int^{346.574}_0 2.4e^{-0.004t}~dt$.\r\n\r\nFinding by guess-and-check or substitution that an antiderivative of\r\n$e^{-0.004t}$ is $\\ds \\frac{e^{-0.004t}}{-0.004}$,\r\n\\begin{align*}\r\n& \\int^{346.574}_0 2.4e^{-0.004t}~dt \\\\\r\n& = 2.4 \\frac{e^{-0.004t}}{-0.004} \\Bigr|_0^{346.574}\r\n\\end{align*}\r\n\\begin{align*}\r\n& = 2.4 \\left[  \\frac{e^{-0.004 (346.574)}}{-0.004} - \\frac{e^{0}}{-0.004}\\right] \\\\\r\n& = 450.00 \\\\\r\n\\end{align*}\r\nWe find that 450 millirems were emitted during this time.\r\n\\end{enumerate}\r\n  \\end{Solution}\r\n% ***********\r\n\\item  \r\n\\begin{Question}David is learning about catalysts in his Chemistry course.  He has read the definition:\r\n\r\n%\\vspace*{.25cm}\r\n\\begin{quote}\r\nCatalyst: A substance that helps a reaction to go faster without being used up in the reaction.\r\n\\end{quote}\r\n%\\vspace*{.25cm}\r\n\r\nIn today's Chemistry lab exercise, he has to add a catalyst to a\r\nchemical mixture that produces carbon dioxide.  When there is no\r\ncatalyst, the carbon dioxide is produced at a rate of $8.37 \\times\r\n10^{-9}$ moles per second.  When $C$ moles of the catalyst are\r\npresent, the carbon dioxide is produced at a rate of $(6.15 \\times\r\n10^{-8})C + 8.37 \\times 10^{-9}$ moles per second.\r\n\r\nThe reaction begins at exactly 10:00 a.m.  One minute later, at 10:01 sharp, David starts to add the catalyst at a constant rate of $0.5$ moles per second.\r\n\r\nHow much carbon dioxide is produced between 10:00 (sharp) and 10:05? \r\n\\end{Question}\r\n\\begin{Solution}\r\n    \r\nIn the first minute of the reaction no catalyst is present\r\n  and the amount of carbon dioxide produced is\r\n\r\n  \\begin{align*}\r\n    (8.37\\times 10^{-9}\\mbox{mol}/s)(60s)=5.02\\times 10^{-7}\\mbox{mol}.\r\n  \\end{align*}\r\n\r\n  In the next four minutes, catalyst is being added. After $t$ seconds\r\n  of adding catalyst at $0.5\\mbox{mol}/s$, there is $0.5t\\mbox{mol}$\r\n  of catalyst present. Thus, at time $t$ carbon dioxide is being\r\n  produced at a rate of\r\n\r\n  \\begin{align*}\r\n&    (6.15\\times10^{-8})(0.5t)+(8.37\\times 10^{-9}\\mbox{mol}/s) \\\\ & = \r\n    (3.075\\times10^{-8})t+(8.37\\times 10^{-9}\\mbox{mol}/s).\r\n  \\end{align*}\r\n\r\n  During the four minutes when catalyst is being added, the amount of\r\n  carbon dioxide produced is:\r\n\r\n  \\begin{align*}\r\n&      \\int_{0}^{240}(3.075\\times10^{-8})t+(8.37\\times 10^{-9})dt \\\\ \r\n& = \\left.(1.5375\\times10^{-8})t^{2}+(8.37\\times 10^{-9})t\\right. 
\\Big |_{0}^{240} \\\\\r\n  & =8.88\\times10^{-4}\\mbox{mol}\r\n  \\end{align*}\r\n\r\n  The total amount of $CO_{2}$ produced in the first five minutes of\r\n  the reaction is therefore\r\n\r\n  \\begin{align*}\r\n    5.02\\times 10^{-7}+8.88\\times10^{-4}=8.88\\times10^{-4}\\mbox{mol}.\r\n  \\end{align*}\r\n\\end{Solution}\r\n\r\n\\end{multicols}\r\n\r\n\\hrulefill\r\n\r\n\\subsection*{Integration by Parts}\r\n\r\n\\begin{multicols}{2}\r\n\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    For each of the following integrals, indicate whether integration\r\n    by substitution or integration by parts is more appropriate.  Do\r\n    not evaluate the integrals.\r\n\r\n    \\begin{enumerate}[(a)]\r\n    \\item $\\ds \\int x \\sin(x)~dx$\r\n    \\item $\\ds \\int \\frac{x^2}{1+x^3}~dx$\r\n    \\item $\\ds \\int x e^{x^2}~dx$\r\n    \\item $\\ds \\int x^2 \\cos(x^3)~dx$\r\n    \\item $\\ds \\int \\frac{1}{\\sqrt{3x+1}}~dx$\r\n    \\item $\\ds \\int x^2 \\sin x~dx$\r\n    \\item $\\ds \\int \\ln x~dx$\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{enumerate}[(a)]\r\n    \\item This integral can be evaluated using integration by parts with $u = x$, $dv = \\sin x~dx$.\r\n\\item We evaluate this integral using the substitution $w = 1 + x^3$.\r\n\\item We evaluate this integral using the substitution $w = x^2$.\r\n\\item We evaluate this integral using the substitution $w = x^3$.\r\n\\item We evaluate this integral using the substitution $w = 3x + 1$.\r\n\\item This integral can be evaluated using integration by parts with $u = x^2$, $dv = \\sin x~dx$.\r\n\\item This integral can be evaluated using integration by parts with $u = \\ln x$, $dv = dx$.\r\n    \\end{enumerate}\r\n  \\end{Solution}\r\n\r\n\r\n\\begin{Question}\r\n  To practice computing integrals by parts, do as many of\r\n  the problems from this section as you feel you need. The problems\r\n  trend from simple to more complex. 
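Throughout, recall the integration by parts formula, which comes from integrating the product rule: $$\\int u~dv = uv - \\int v~du$$ 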
\r\n\r\nFor Questions \\#\\ref{q:parts1_start} to \\#\\ref{q:parts1_end}, evaluate the integral.\r\n\\end{Question}\r\n%*****************\r\n\\item  \\label{q:parts1_start}\r\n  \\begin{Question}\r\n   $\\ds \\int t \\sin t~dt$  \r\n  \\end{Question}\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = t & \\mbox{ and }dv & = \\sin t ~dt \\\\\r\n\\mbox{ so } \\frac{du}{dt} & = 1 & \\mbox{ and } v & = \\int \\sin t ~dt \\\\\r\n\\mbox{ or } du & = 1 ~dt& v & = -\\cos(t) \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int\\ub{ t}_u \\ub{\\sin t~dt}_{dv} & = \\ub{t}_u \\ub{(-\\cos t)}_v \r\n- \\int \\ub{(-\\cos t)}_v \\ub{dt}_{du} \\\\\r\n& = -t \\cos t + \\int \\cos t~dt \\\\\r\n& = -t \\cos t + \\sin t  + C\r\n\\end{align*}\r\nAs always, we can check our integral is correct by differentiating:\r\n\\begin{align*}\r\n  \\ddt{} (-t \\cos t + \\sin t + C) \r\n& = -\\cos t -t (-\\sin t) + \\cos t \\\\\r\n& = t \\sin t \\\\ \r\n& = \\mbox{ original integrand.~~\\CheckmarkBold}\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int t e^{5t}~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = t &\\mbox{ and } dv & = e^{5t} ~dt \\\\\r\n\\mbox{ so } \\frac{du}{dt} & = 1 & \\mbox{ and } v & = \\int e^{5t}~dt \\\\\r\n\\mbox{ or } du & = 1 ~dt& v & = e^{5t}/5 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int\\ub{ t}_u \\ub{e^{5t}~dt}_{dv} & = \\ub{t}_u \\ub{(e^{5t}/5)}_v \r\n- \\int \\ub{(e^{5t}/5)}_v \\ub{dt}_{du} \\\\\r\n& = \\frac{t e^{5t}}{5} - \\frac{1}{5}\\int e^{5t}~dt \\\\\r\n& = \\frac{t e^{5t}}{5} - \\frac{1}{5}\\left(e^{5t}/5\\right)  + C \\\\\r\n& = \\frac{t e^{5t}}{5} - \\frac{1}{25} e^{5t}+ C \r\n\\end{align*}\r\nAs always, we can check our integral is correct by differentiating:\r\n\\begin{align*}\r\n  \\ddt{} \\left( \\frac{t e^{5t}}{5} - \\frac{e^{5t}}{25} + C\\right)\r\n& = \\frac{1}{5} ( e^{5t} + t(5e^{5t})) - \\frac{5e^{5t}}{25} \\\\\r\n& = t e^{5t} + \\frac{e^{5t}}{5}- \\frac{e^{5t}}{5} \\\\\r\n& = t e^{5t}  \\\\\r\n& = \\mbox{ original integrand.~~\\CheckmarkBold}\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n\\begin{Solution}\r\n  From now on, for brevity, we won't show quite as many steps in the\r\n  solution, nor will we check the answer by differentiating.  
However,\r\n  you should always remember that you {\\bf can} check your\r\n  antiderivatives/integrals by differentiation.\r\n\\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int p e^{-0.1p}~dp$\r\n    \r\n  \\end{Question}\r\n\r\n\\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = p &\\mbox{ and } dv & = e^{-0.1p} ~dp \\\\\r\n\\mbox{ so } du & = dp& \\mbox{ and } v & = e^{-0.1p}/(-0.1) \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int\\ub{ p}_u \\ub{e^{-0.1p}~dp}_{dv} & = \\ub{p}_u \\ub{e^{-0.1p}/(-0.1)}_v \r\n- \\int \\ub{e^{-0.1p}/(-0.1)}_v \\ub{dp}_{du} \\\\\r\n& = \\frac{p e^{-0.1p}}{-0.1} + 10 \\int e^{-0.1p}~dp \\\\\r\n& = -10 p e^{-0.1p} + 10\\left(e^{-0.1p}/(-0.1)\\right)  + C \\\\\r\n& = -10 p e^{-0.1p} - 100 e^{-0.1p}+ C \r\n\\end{align*}\r\n\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int (z+1)e^{2z}~dz$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = (z+1)&\\mbox{ and } dv & = e^{2z} ~dz \\\\\r\n\\mbox{ so } du & = dz& \\mbox{ and } v & = e^{2z}/2 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int (z+1) e^{2z}~dz & = (z+1) e^{2z}/2\r\n- \\int e^{2z}/2 dz \\\\\r\n& = \\frac{(z+1) e^{2z}}{2} - \\frac{1}{2} \\int e^{2z}~dz \\\\\r\n& = \\frac{(z+1)e^{2z}}{2} - \\frac{1}{2} \\left(e^{2z}/2\\right)  + C \\\\\r\n& = \\frac{(z+1) e^{2z}}{2} - \\frac{1}{4} e^{2z}+ C \r\n\\end{align*}\r\nThere is no need for further simplifications.\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item \\label{q:lnx}\r\n  \\begin{Question}\r\n   $\\ds \\int \\ln x~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    In this problem, we recall that the only simple calculus formula\r\n    related to $\\ln x$ is that its {\\bf derivative} is known:\r\n    $\\frac{d}{dx} \\ln(x) = 1/x$.  This means that we have to select $u = \\ln x$ so that\r\nit will be differentiated.\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = \\ln x&\\mbox{ and } dv & = dx \\\\\r\n\\mbox{ so } du & = \\frac{1}{x} ~dx& \\mbox{ and } v & = x \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int \\ln x~dx & = x (\\ln x) \r\n- \\int x \\frac{1}{x}~ dx \\\\\r\n& = x \\ln x - \\int 1 ~dx \\\\\r\n& = x \\ln x - x + C  \r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n\\item \\label{q:xlnx}\r\n  \\begin{Question}\r\n   $\\ds \\int y \\ln y~dy$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    In this problem, we recall that the only simple calculus formula\r\n    related to $\\ln y$ is that its {\\bf derivative} is known:\r\n    $\\frac{d}{dy} \\ln(y) = 1/y$.  While it might be tempting to keep\r\n    with our earlier pattern of choosing $u = y$ and $dv = \\ln y ~dy$,\r\n    that won't work because we won't be able to integrate $dv$. 
As a\r\n    result,\r\n    \\begin{align*}\r\n\\mbox{ we choose }       u & = \\ln y&\\mbox{ and } dv & = y ~dy \\\\\r\n\\mbox{ so } du & = \\frac{1}{y} ~dy& \\mbox{ and } v & = y^2/2 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int y \\ln y~dy & = (\\ln y) (y^2/2) \r\n- \\int (y^2/2) \\frac{1}{y}~ dy \\\\\r\n& = \\frac{y^2 \\ln y}{2} - \\frac{1}{2} \\int y ~dy \\\\\r\n& = \\frac{y^2 \\ln y}{2} - \\frac{1}{2} (y^2/2)  + C \\\\\r\n& = \\frac{y^2 \\ln y}{2} - \\frac{y^2}{4}  + C \r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int x^3 \\ln x~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = \\ln x&\\mbox{ and } dv & = x^3 ~dx \\\\\r\n\\mbox{ so } du & = \\frac{1}{x} ~ dx& \\mbox{ and } v & = x^4/4 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int x^3 \\ln x~dx & = (\\ln x) x^4/4 \r\n- \\int (x^4/4) \\left(\\frac{1}{x} ~dx\\right) \\\\\r\n& = \\frac{x^4 \\ln x}{4} - \\frac{1}{4} \\int x^3~dx \\\\\r\n& = \\frac{x^4 \\ln x}{4} - \\frac{1}{4} (x^4/4) + C \\\\\r\n& = \\frac{x^4 \\ln x}{4} - \\frac{x^4}{16}  + C\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int q^5 \\ln(5q)~dq$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = \\ln(5q)&\\mbox{ and } dv & = q^5 ~dq \\\\\r\n\\mbox{ so } du & = \\frac{1}{5q} (5)~dq& \\mbox{ and } v & = q^6/6 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int q^5 \\ln(5q)~dq & = (\\ln(5q)) (q^6/6) \r\n- \\int(q^6/6) \\left(\\frac{1}{q}~ dq\\right) \\\\\r\n& = \\frac{q^6 \\ln(5q)}{6} - \\frac{1}{6} \\int q^5~dq \\\\\r\n& = \\frac{q^6 \\ln(5q)}{6} - \\frac{1}{6} (q^6/6) + C\\\\\r\n& = \\frac{q^6 \\ln(5q)}{6} - \\frac{1}{36} q^6 + C\r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int t^2 \\sin t~dt$ \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = t^2&\\mbox{ and } dv & = \\sin t ~dt \\\\\r\n\\mbox{ so } du & = 2 t~dt& \\mbox{ and } v & = -\\cos t\r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int t^2 \\sin t~dt & = -t^2 \\cos t \r\n- \\int (-\\cos t)~ (2 t~ dt) \\\\\r\n& = - t^2 \\cos t + 2 \\int t \\cos t~dt \\\\\r\n\\end{align*}\r\nWhile we have traded our original integral for a slightly simpler one,\r\nit is still not simple enough to evaluate by finding an obvious\r\nantiderivative.  
In fact, it is of the form of one of our earlier\r\nexamples of integration by parts, so here we must apply integration by\r\nparts {\\em again} to finally evaluate the original integral.\r\n\r\nWe focus on just the new integral part, $\\ds \\int t \\cos t~dt$:\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = t&\\mbox{ and } dv & = \\cos t ~dt \\\\\r\n\\mbox{ so } du & = ~dt& \\mbox{ and } v & = \\sin t\r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int t \\cos t~dt & = t \\sin t \r\n- \\int (\\sin t)~ dt \\\\\r\n& =  t \\sin t - \\int \\sin t~dt \\\\\r\n& =  t \\sin t - (-\\cos t) + C \\\\\r\n& = t \\sin t + \\cos t + C\r\n\\end{align*}\r\nSubbing this result back into the original integral, \r\n\\begin{align*}\r\n  \\int t^2 \\sin t~dt \r\n& =\\ub{ - t^2 \\cos t + 2 \\ub{\\int t \\cos t~dt}_{\\mbox{Second by parts step}}}_{\\mbox{First by parts step}} \\\\\r\n& = - t^2 \\cos t + 2 \\left( t \\sin t + \\cos t\\right) + C\r\n\\end{align*}\r\nNo further simplification is necessary for a pure `evaluate the integral' question.\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int x^2 \\cos(3x)~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = x^2&\\mbox{ and } dv & = \\cos(3x) ~dx \\\\\r\n\\mbox{ so } du & = 2 x~dx& \\mbox{ and } v & = \\sin(3x)/3 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int x^2 \\cos(3x)~dx & = x^2 \\sin(3 x)/3\r\n- \\int (\\sin(3 x)/3)~ (2 x~ dx) \\\\\r\n& = \\frac{x^2 \\sin(3x)}{3} - \\frac{2}{3} \\int x \\sin(3 x)~dx \\\\\r\n\\end{align*}\r\nWhile we have traded our original integral for a slightly simpler one,\r\nit is still not simple enough to evaluate by finding an obvious\r\nantiderivative.  
In fact, it is of the form of one of our earlier\r\nexamples of integration by parts, so here we must apply integration by\r\nparts {\\em again} to finally evaluate the original integral.\r\n\r\nWe focus on just the new integral part, $\\ds \\int x \\sin(3x)~dx$:\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = x&\\mbox{ and } dv & = \\sin(3 x) ~dx \\\\\r\n\\mbox{ so } du & = ~dx& \\mbox{ and } v & = -\\cos(3 x)/3\r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int x \\sin(3x)~dx & = -x \\cos(3x )/3\r\n- \\int (-\\cos(3 x)/3)~ dx \\\\\r\n& =  -\\frac{x \\cos(3x)}{3} + \\frac{1}{3}\\int \\cos(3 x)~dx \\\\\r\n& =  -\\frac{x \\cos(3x)}{3} + \\frac{1}{3} (\\sin(3 x)/3) + C \\\\\r\n& =  -\\frac{x \\cos(3x)}{3} + \\frac{1}{9} \\sin(3 x) + C \r\n\\end{align*}\r\nSubbing this result back into the original integral, \r\n\\begin{align*}\r\n&   \\int x^2 \\cos(3 x)~dx  \\\\\r\n =&\\ub{ \\frac{x^2 \\sin (3x)}{3} - \\frac{2}{3} \\ub{\\int x \\sin(3x) ~dx}_{\\mbox{Second by parts step}}}_{\\mbox{First by parts step}} \\\\\r\n =& \\frac{ x^2 \\sin(3x)}{3}  -\\frac{2}{3} \\left( -\\frac{x \\cos(3x)}{3} + \\frac{\\sin(3x)}{9} + C \\right) \\\\\r\n =& \\frac{ x^2 \\sin(3x)}{3}  +\\frac{2}{9} x \\cos(3x) - \\frac{2\\sin(3x)}{27} + C_2  \r\n\\end{align*}\r\nwhere $C_2 = -(2/3) C$ is a new integration constant.\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int (\\ln t)^2 ~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nWe only know how to differentiate $(\\ln t)^2$, so we have to choose it as $u$.\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = (\\ln t)^2&\\mbox{ and } dv & =  ~dt \\\\\r\n\\mbox{ so } du & = \\frac{2 \\ln t}{t}~ dt& \\mbox{ and } v & = t \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int (\\ln t)^2 ~dt & = (\\ln t)^2 t\r\n- \\int (t) \\left( \\frac{2 \\ln t}{t}\\right)~dt \\\\\r\n& = t (\\ln t)^2  - 2 \\int \\ln t~dt\r\n\\end{align*}\r\nWe are left with a simpler integral, but not an easy one (unless you\r\nlook at the examples from the course notes!)\r\n\r\nWe focus on $\\ds I_2 = \\int \\ln t~dt$, and apply integration by parts once more.\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = \\ln t&\\mbox{ and } dv & =  ~dt \\\\\r\n\\mbox{ so } du & = \\frac{1}{t}~ dt& \\mbox{ and } v & = t \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int \\ln t ~dt & = (\\ln t) t\r\n- \\int (t) \\left( \\frac{1 }{t}\\right)~dt \\\\\r\n& = t (\\ln t)  - \\int 1 ~dt \\\\ \r\n& = t \\ln t - t + C\r\n\\end{align*}\r\nThus, going back to our original integral, \r\n\\begin{align*}\r\n  \\int (\\ln t)^2 ~dt & = t (\\ln t)^2  - 2 \\ub{\\int \\ln t~dt}_{I_2} \\\\\r\n& = t (\\ln t)^2  - 2 (t \\ln t - t + C) \r\n\\end{align*}\r\n\r\n\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int t^2 e^{5t}~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    \\begin{align*}\r\n\\mbox{ We choose }       u & = t^2&\\mbox{ and } dv & = e^{5t} ~dt \\\\\r\n\\mbox{ so } du & = 2t ~dt& \\mbox{ and } v & = e^{5t}/5 \r\n    \\end{align*}\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int t^2 e^{5t}~dt & = t^2 e^{5t}/5\r\n- \\int (e^{5t}/5) (2t~dt) \\\\\r\n& = \\frac{t^2 e^{5t}}{5} - \\frac{2}{5} \\ub{\\int t e^{5t}~dt}_{I_2} \r\n\\end{align*}\r\nApplying integration by parts again to the integral marked 
$I_2$ will lead to\r\n\\begin{align*}\r\n  I_2 = \\frac{t e^{5t}}{5} - \\frac{e^{5t}}{25} + C\r\n\\end{align*}\r\nso the overall integral will be\r\n\\begin{align*}\r\n  \\int t^2 e^{5t}~dt \r\n& = \\frac{t^2 e^{5t}}{5} - \\frac{2}{5} \\ub{\\int t e^{5t}~dt}_{I_2} \\\\\r\n& = \\frac{t^2e^{5t}}{5} - \\frac{2}{5} \\left(\\frac{t e^{5t}}{5} - \\frac{e^{5t}}{25} + C\\right) \\\\\r\n& = \\frac{t^2 e^{5t}}{5} - \\frac{2 t e^{5t}}{25} + 2 \\frac{e^{5t}}{125} + C_2\r\n\\end{align*}\r\nwhere $C_2$ is a multiple of the original $C$.\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int y \\sqrt{y+3}~dy$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    We choose $u = y$ and $dv = (y + 3)^{1/2}~dy$,\\\\\r\n so $du = dy$ and\r\n    $\\ds v = \\frac{2}{3} (y + 3)^{3/2}$:\r\n    \\begin{align*}\r\n\\int y \\sqrt{ y + 3}~ dy & = \\frac{2}{ 3} y(y + 3)^{3/2} -\r\n\\int \\frac{2}{3} (y + 3)^{3/2} ~dy \\\\\r\n& = \\frac{2}{3} y(y + 3)^{3/2} - \\frac{2}{3}\\frac{ (y + 3)^{5/2}}{5/2} + C \\\\\r\n& = \\frac{2}{3} y(y + 3)^{3/2} - \\frac{4}{15} (y + 3)^{5/2} + C.\r\n    \\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int (t+2)\\sqrt{2+3t}~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    Let $u = t + 2$ and $dv = \\sqrt{2 + 3t}~dt$,\\\\\r\n so $du = dt$ and $v =\r\n    \\frac{2}{ 9} (2 + 3t)^{3/2}$.\r\n\r\nThen\r\n\\begin{align*}\r\n\\int (t + 2)\\sqrt{2 + 3t}~ dt & =\r\n\\frac{2}{9} (t + 2)(2 + 3t)^{3/2} -\r\n\\frac{2}{9} \\int\r\n(2 + 3t)^{3/2} ~dt \\\\\r\n& = \\frac{2}{9} (t + 2)(2 + 3t)^{3/2}  \\\\\r\n& - \\frac{4}{ 135} (2 + 3t)^{5/2} + C.\r\n\\end{align*}\r\nNote that you could also expand the product into\r\n$\\ds \\int t \\sqrt{2 + 3t}~dt + \\int 2 \\sqrt{2+3t}~dt$, and then use\r\nintegration by parts only on the first term.  Both approaches require\r\nsimilar amounts of work, so you can follow whichever approach you\r\nlike.\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int (p+1)\\sin(p+1)~dp$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    Let $u = p + 1$ and $dv = \\sin(p + 1)~dp$, \\\\\r\nso $du = dp$ and $v = -\\cos(p+ 1)$.\r\n    \\begin{align*}\r\n& \\int (p + 1) \\sin(p + 1) ~dp \\\\\r\n = &-(p + 1) \\cos(p + 1) +\r\n\\int \\cos(p + 1) ~dp \\\\\r\n =&  -(p + 1) \\cos(p + 1) + \\sin(p+ 1) + C\r\n    \\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item \\label{q:zexpz}\r\n  \\begin{Question}\r\n   $\\ds \\int \\frac{z}{e^z}~dz$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nRewriting the integral, \r\n   $$\\ds \\int \\frac{z}{e^z}~dz = \\ds \\int z e^{-z}~dz $$\r\nLet $u = z$, $dv = e^{-z}~dz$. \\\\\r\nThus $du = dz$ and $v = -e^{-z}$. Integration by parts gives:\r\n\\begin{align*}\r\n\\int ze^{-z} ~dz  & = -ze^{-z} - \\int (-e^{-z}) ~dz \\\\\r\n& = -ze^{-z} + \\int e^{-z} ~dz \\\\\r\n& = -ze^{-z} - e^{-z} + C \r\n\\end{align*}\r\n \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int \\frac{\\ln x}{x^2}~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    Let $u = \\ln x$, $dv = \\frac{1}{x^2}~dx$. \\\\\r\nThen $du = \\frac{1}{x}~dx$ and $v = -\\frac{1}{x}$. 
\r\n\r\n Integrating by parts, we get:\r\n \\begin{align*}\r\n\\int \\frac{\\ln x}{x^2} ~dx \r\n& = -\\frac{1}{x} \\ln x - \\int \\left(-\\frac{1}{x}\\right) \\frac{1}{x}~dx \\\\ \r\n& = -\\frac{1}{x} \\ln x + \\int \\frac{1}{x^2}~dx  \\\\\r\n& = -\\frac{1}{x} \\ln x -  \\frac{1}{x} + C  \\\\\r\n \\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int \\frac{y}{\\sqrt{5-y}}~dy$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    Let $u = y$ and $dv = \\frac{1}{ \\sqrt{5-y}}~dy$, \\\\\r\nso $du = dy$ and $v = -2(5 - y)^{1/2}$\r\n\\begin{align*}\r\n \\int \\frac{y}{\\sqrt{5 - y}} ~dy \r\n& = -2y(5 - y)^{1/2} - \\int (-2)(5 - y)^{1/2} dy \\\\\r\n& = -2y(5 - y)^{1/2} + 2\\int (5 - y)^{1/2} dy \\\\\r\n& = -2y(5 - y)^{1/2} + 2\\frac{2}{3} (5 - y)^{3/2}(-1) + C \\\\\r\n& = -2y(5 - y)^{1/2} - \\frac{4}{3} (5 - y)^{3/2}+ C \\\\\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int \\frac{t+7}{\\sqrt{5-t}}~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nSince the numerator is a sum, we can split the integral into a sum, and then\r\nevaluate each term separately.\r\n   $$\\ds \\int \\frac{t+7}{\\sqrt{5-t}}~dt = \\underbrace{\\int \\frac{t}{\\sqrt{5-t}}~dt}_{I_1} + \\underbrace{\\int \\frac{7}{\\sqrt{5-t}}~dt}_{I_2}  $$\r\n$I_1$ can be evaluated using integration by parts. \\\\\r\nLet $u = t$ and $dv = \\frac{1}{\\sqrt{5-t}}~dt$, \\\\\r\nso $du = dt$ and $v = -2(5-t)^{1/2}$.\r\n\\begin{align*}\r\n\\int \\frac{t}{\\sqrt{5 - t}} ~dt \r\n&= -2t(5 - t)^{1/2} + 2 \\int (5 - t)^{1/2} ~dt  \\\\\r\n&= -2t(5 - t)^{1/2} - \\frac{4}{3}(5 - t)^{3/2} + C.\r\n\\end{align*}\r\n$I_2$ can be integrated directly:\r\n\\begin{align*}\r\n  \\int \\frac{7}{\\sqrt{5-t}}~dt \r\n& = 7 (2) (5-t)^{1/2}(-1) + C_1 \\\\ \r\n& = -14 (5-t)^{1/2} + C_1\r\n\\end{align*}\r\nAdding the two integrals back together, we obtain\r\n\\begin{align*}\r\n \\int \\frac{t+7}{\\sqrt{5-t}}~dt \r\n& = \\ub{-2t(5 - t)^{1/2} - \\frac{4}{3}(5 - t)^{3/2} + C}_{I_1} \\\\\r\n& + \\ub{-14 (5-t)^{1/2} + C_1}_{I_2}\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int x (\\ln x)^2~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nSelect $u = (\\ln x)^2$ and $dv = x~dx$, \\\\\r\nso $\\ds du = 2 \\ln x \\left(\\frac{1}{x}\\right) ~dx$ and $v = x^2/2$. \\\\\r\nUsing the integration by parts formula, \r\n\\begin{align*}\r\n  \\int x (\\ln x)^2~dx \r\n& = (\\ln x)^2\\left(\\frac{x^2}{2} \\right)\r\n- \\int \\left(\\frac{x^2}{2}\\right) \\left(\\frac{2 \\ln x}{x}\\right)~dx  \\\\\r\n& = \\frac{1}{2} x^2 (\\ln x)^2 - \\ub{\\int x \\ln x~dx}_{I_2}\r\n\\end{align*}\r\nThis second integral, $I_2$, can be evaluated with a second\r\napplication of integration by parts. 
This\r\nwas done earlier in Question \\#\\ref{q:xlnx} (using $y$ instead of $x$ though):  \\\\\r\n\\begin{align*}\r\n \\ub{\\int x \\ln x~dx}_{I_2}\r\n& = \\frac{x^2 \\ln x}{2} - \\frac{x^2}{4}  + C  \\\\\r\n\\end{align*}\r\nGoing back to the original integral\r\n\\begin{align*}\r\n & \\int x (\\ln x)^2~dx \\\\\r\n& = \\frac{1}{2} x^2 (\\ln x)^2 - \r\n\\ub{\\left(\\frac{x^2 \\ln x}{2} - \\frac{x^2}{4}\\right)}_{I_2} + C \\\\\r\n& =\\frac{1}{2} x^2 (\\ln x)^2 - \\frac{1}{2} x^2 \\ln x + \\frac{x^2}{4}  + C\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item \\label{q:arcsinx}\r\n  \\begin{Question}\r\n   $\\ds \\int \\arcsin(w)~dw$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   We don't know the integral of $\\arcsin$, but we do know its derivative. Therefore we pick  \\\\\r\n$u = \\arcsin(w)$ and $dv = dw$, \\\\\r\nso $\\ds du = \\frac{1}{\\sqrt{1 - w^2}} ~dw$ and $v = w$. \\\\\r\nUsing the integration by parts formula,\r\n\r\n\\begin{align*}\r\n\\int \\arcsin(w)~dw & = w~\\arcsin(w) - \\ub{\\int \\frac{w}{\\sqrt{1 - w^2}}~dw}_{I_2}\r\n\\end{align*}\r\nThe new integral $I_2$ can be evaluated using a substitution. \\\\\r\nLet $z = 1 - w^2$, so $\\frac{dz}{dw} = -2w$ or $\\ds \\frac{-1}{2w} ~dz = dw$:\r\n\\begin{align*}\r\n\\ub{\\int \\frac{w}{\\sqrt{1 - w^2}}~dw}_{I_2} & = \\int \\frac{w}{\\sqrt{z}} \\left(\\frac{-1}{2w}~dz\\right) \\\\\r\n& = \\frac{-1}{2} \\int \\frac{1}{\\sqrt{z}} ~dz \\\\\r\n& =  \\frac{-1}{2} (2z^{1/2}) +C \\\\\r\n& = -\\sqrt{1 - w^2} + C  \r\n  \\end{align*}\r\nGoing back to the original integral,\r\n\\begin{align*}\r\n\\int \\arcsin(w)~dw & = w~\\arcsin(w) - \\ub{\\int \\frac{w}{\\sqrt{1 - w^2}}~dw}_{I_2} \\\\\r\n& =w~\\arcsin(w) - \\left(-\\sqrt{1 - w^2} + C\\right)   \\\\\r\n& =w~\\arcsin(w) + \\sqrt{1 - w^2} + C_2\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item \\label{q:arctan}\r\n  \\begin{Question}\r\n   $\\ds \\int \\arctan(7x) ~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   We don't know the integral of $\\arctan$, but we do know its derivative. Therefore we pick  \\\\\r\n$u = \\arctan(7x)$ and $dv = dx$, \\\\\r\nso $\\ds du = \\frac{7}{1 + (7x)^2} ~dx$ and $v = x$. \\\\\r\nUsing the integration by parts formula,\r\n\\begin{align*}\r\n\\int \\arctan(7x)~dx & = x~\\arctan(7x) - \\ub{\\int \\frac{7x}{1 + 49x^2}~dx}_{I_2}\r\n\\end{align*}\r\nThe new integral $I_2$ can be evaluated using a substitution. 
\\\\\r\nLet $w = 1 + 49x^2$, so $\\frac{dw}{dx} = 98 x$ or $\\ds \\frac{1}{98x} ~dw = dx$:\r\n\\begin{align*}\r\n\\ub{\\int \\frac{7x}{1 + 49x^2}~dx}_{I_2} & = \\int \\frac{7x}{w} \\left(\\frac{1}{98x}~dw\\right) \\\\\r\n& = \\frac{1}{14} \\int \\frac{1}{w}~dw \\\\\r\n& = \\frac{1}{14} \\ln|w| + C \\\\\r\n& = \\frac{1}{14} \\ln|1 + 49x^2| + C\r\n  \\end{align*}\r\nGoing back to the original integral,\r\n\\begin{align*}\r\n\\int \\arctan(7x)~dx & = x~\\arctan(7x) - \\ub{\\int \\frac{7x}{1 + 49x^2}~dx}_{I_2} \\\\\r\n& =x~\\arctan(7x) - \\frac{1}{14} \\ln|1 + 49x^2| + C_2\r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n   $\\ds \\int x \\arctan(x^2)~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nThis question starts off best with a substitution, due to the $x^2$ inside the $\\arctan$, and the $x$ outside: \\\\\r\n$  \\mbox{ Let } w = x^2 \\mbox{, so } dw = 2x~dx \\mbox{, or } \\frac{1}{2x} ~dw = dx $ \\\\\r\n\\begin{align*}\r\n\\int x \\arctan(x^2)~dx & = \\int x \\arctan(w) \\left(\\frac{1}{2x}~dw\\right) \\\\\r\n& = \\frac{1}{2} \\ub{\\int \\arctan(w)~dw   }_{I_2}\r\n\\end{align*}\r\nEvaluating $I_2$, we use by parts, following the same approach as Question \\#\\ref{q:arctan}, but without the `7' factor, \r\n\\begin{align*}\r\n\\ub{\\int \\arctan(w)~dw   }_{I_2} & =w~\\arctan(w) - \\frac{1}{2} \\ln|1 + w^2| + C\r\n\\end{align*}\r\nGoing back to the original integral, and using $w = x^2$, \r\n\\begin{align*}\r\n& \\int x \\arctan(x^2)~dx \\\\\r\n& = \\frac{1}{2} \\ub{\\int \\arctan(w)~dw   }_{I_2} \\\\\r\n& = \\frac{1}{2} \\left(w~\\arctan(w) - \\frac{1}{2} \\ln|1 + w^2| + C\\right) \\\\\r\n& = \\frac{1}{2} \\left(x^2~\\arctan(x^2) - \\frac{1}{2} \\ln|1 + (x^2)^2| + C\\right)\\\\\r\n& = \\frac{1}{2} x^2~\\arctan(x^2) - \\frac{1}{4} \\ln|1 + x^4| + C_2\r\n\\end{align*}\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n\\item\\label{q:parts1_end}\r\n  \\begin{Question}\r\n   $\\ds \\int x^3 e^{x^2}~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    In this problem, we note that we can't integrate $e^{x^2}$ by\r\n    itself (no closed-form anti-derivative).  However, if we package\r\n    it with one of the $x$'s from the $x^3$, we'll get $x e^{x^2}$,\r\n    and that {\\em can} be integrated, using substitution ($w = x^2$).\r\n\r\n    Let $u = x^2$ and $dv = xe^{x^2}$, \\\\\r\nso $du = 2x ~dx$ and $v = \\frac{1}{2} e^{x^2}$.  
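(Here $v$ is obtained with the substitution just described: taking $w = x^2$ gives $\\int xe^{x^2}~dx = \\frac{1}{2}\\int e^w~dw = \\frac{1}{2}e^{x^2} + C$, and we take $C = 0$ for $v$.) 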
Using that, \r\n\\begin{align*}\r\n\\int x^3e^{x^2}~dx = \\frac{1}{2} x^2e^{x^2} - \\ub{\\int xe^{x^2}~dx}_{I_2} &\r\n= \\frac{1}{2} x^2e^{x^2} - \\frac{1}{2} e^{x^2} + C\r\n\\end{align*}\r\nwhere $I_2$ is evaluated using the same substitution $w = x^2$.\r\n  \\end{Solution}\r\n%*****************\r\n%*****************\r\n%\\item \r\n%  \\begin{Question}\r\n%   $\\ds \\int x^3 \\cos(x^3)~dx$\r\n%  \\end{Question}\r\n%\r\n%  \\begin{Solution}\r\n%    \r\n%  \\end{Solution}\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_1^5 \\ln t~dt$    \r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nWe integrate by parts, as done in Question \\#\\ref{q:lnx}:\r\n    \\begin{align*}\r\n    \\int_1^5 \\ln t~dt \r\n& = \\ub{t \\ln t - t }_{\\mbox{ from \\#\\ref{q:lnx}}} \\Big|_1^5 \\\\\r\n& = \\left(5 \\ln 5 - 5\\right)   - \\left(1 \\ln 1 - 1\\right) \\\\\r\n& = 5 \\ln 5 - 4.\r\n    \\end{align*}\r\n\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_3^5 x \\cos x~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   Integrating by parts with $u = x$ and $dv = \\cos x ~dx$ gives \r\n   \\begin{align*}\r\n\\int_3^5 x \\cos x~dx & = x \\sin(x) + \\cos(x) \\Big|_3^5 \\\\\r\n& = 5 \\sin(5) + \\cos(5) - (3 \\sin(3) + \\cos(3)) \\\\\r\n& \\approx -3.944\r\n   \\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^{10} ze^{-z}~dz$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   The integral is the same as in Question \\#\\ref{q:zexpz}. We use\r\nby parts with $u = z$ and $dv = e^{-z}~dz$, giving\r\n\\begin{align*}\r\n  \\int_0^{10} ze^{-z}~dz & = \\ub{-ze^{-z} - e^{-z}}_{\\mbox{from \\#\\ref{q:zexpz}}} \\Big|_0^{10} \\\\\r\n& = -10 e^{-10} - e^{-10} - (0 - e^0) \\\\\r\n& = -11 e^{-10} + 1  \\approx 0.9995 \\\\\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_1^3 t\\ln(t)~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    The integral is the same as in Question \\#\\ref{q:xlnx}.  
We use by\r\n    parts, with $u = \\ln(t)$ and $dv = t ~dt$.\r\n    \\begin{align*}\r\n      \\int_1^3 t\\ln(t)~dt & = \\ub{ \\frac{t^2 \\ln t}{2} \r\n- \\frac{t^2}{4}}_{\\mbox{from \\#\\ref{q:xlnx}}} \\Big|_1^3 \\\\\r\n& = \\left(\\frac{9 \\ln 3}{2} - \\frac{9}{4}\\right)\r\n- \\left(\\frac{1 \\ln 1}{2} - \\frac{1}{4}\\right) \\\\\r\n& = \\frac{9 \\ln 3}{2} - 2 \\approx 2.944\\\\\r\n    \\end{align*}\r\n    \r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^1\\arctan(y)~dy$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    We follow the work from Question \\#\\ref{q:arctan}, but without the `7' factor, using \r\n$u = \\arctan(y)$ and $dv = dy$.\r\n\\begin{align*}\r\n  \\int_0^1\\arctan(y)~dy  \r\n& =y~\\arctan(y) - \\frac{1}{2} \\ln|1 + y^2| \\Big|_0^1  \\\\\r\n& = \r\n\\left(1\\cdot \\arctan(1) - \\frac{1}{2} \\ln(2)\\right)\r\n- \\left(0\\cdot \\arctan(0) - \\frac{1}{2} \\ln(1)\\right) \\\\\r\n& = \\frac{\\pi}{4} -  \\frac{1}{2} \\ln(2) \\approx 0.439.\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^5\\ln(1+t)~dt$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    We use the solution to Question \\#\\ref{q:lnx}, applying by\r\n    parts with $u = \\ln(1+t)$ and $dv = dt$.\r\n    \\begin{align*}\r\n      \\int_0^5\\ln(1+t)~dt \r\n      & = \\ub{(1+t) \\ln (1+t) - (1+t) }_{\\mbox{from \\#\\ref{q:lnx}, with } x = 1+t} \\Big|_0^5 \\\\\r\n& =\r\n \\left(6 \\cdot \\ln 6 - 6\\right) \r\n -\\left(1 \\cdot \\ln 1 - 1\\right)  \\\\\r\n& = 6 \\ln 6 - 5 \\approx 5.751\r\n    \\end{align*}\r\n\r\n    \r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^1\\arcsin z ~dz$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nUsing the solution from Question \\#\\ref{q:arcsinx}, or $u = \\arcsin z$ and $dv = dz$, \r\n\\begin{align*}\r\n  \\int_0^1\\arcsin z ~dz \r\n& =z~\\arcsin(z) + \\sqrt{1 - z^2} \\Big|_0^1  \\\\\r\n& = \r\n\\left( \\arcsin(1)  + \\sqrt{0}\\right) \r\n- \\left( 0 \\cdot \\arcsin(0)  + \\sqrt{1}\\right)  \\\\\r\n& = \\frac{\\pi }{2} - 1 \\approx 0.571\r\n\\end{align*}\r\n    \r\n  \\end{Solution}\r\n\r\n% *****************\r\n\\item\r\n  \\begin{Question}\r\n$\\ds \\int_0^1 x \\arcsin(x^2)~dx$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nWe first simplify the integral with the substitution $w = x^2$, which leads\r\nto the new limits \\\\\r\n $x = 0 \\to w = 0^2 = 0$ and \\\\\r\n $x = 1 \\to w = 1^2 = 1$.\r\n\\begin{align*}\r\n  \\int_{x=0}^{x=1} x \\arcsin(x^2)~dx & =\r\n\\ub{ \\frac{1}{2} \\int_{w=0}^{w=1} \\arcsin(w)~dw  }_{\\mbox{after substitution}}  \\\\\r\n\\end{align*}\r\nAt this point, we have returned to the integral in Question\r\n\\#\\ref{q:arcsinx}, which can be evaluated using by parts, with $u =\r\n\\arcsin(w)$ and $dv = dw$.\r\n\\begin{align*}\r\n&  \\int_{x=0}^{x=1} x \\arcsin(x^2)~dx \\\\\r\n& = \\frac{1}{2} \\int_{w=0}^{w=1} \\arcsin(w)~dw  \\\\\r\n& = \\frac{1}{2} \\ub{\\left(w~\\arcsin(w) + \\sqrt{1 - w^2}\\right)}_{\\mbox{by parts}} \\Big|_0^1  \\\\ \r\n& = \\frac{1}{2} \\left[\r\n\\left( \\arcsin(1)  + \\sqrt{0}\\right) \r\n- \\left( 0 \\cdot \\arcsin(0)  + \\sqrt{1}\\right) \r\n\\right]  \\\\\r\n& = \\frac{\\pi}{4} - \\frac{1}{2} \\approx 0.285\r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n% *****************\r\n\\item\r\n  \\begin{Question}\r\nFind the area under the curve $y = t e^{-t}$ on the interval $0 \\le t \\le 2$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\nThe function $t 
e^{-t}$ is always positive on the interval $0 \\le t \\le 2$ so\r\nthe area under the curve is equal to the integral\r\n$$\\int_0^2 t e^{-t}~dt$$\r\n   Proceeding in the same way as Question \\#\\ref{q:zexpz}, using $u = t$ and $dv = e^{-t}~dt$,\r\n   \\begin{align*}\r\n   \\int_0^2 t e^{-t}~dt \r\n& = \\ub{-te^{-t} - e^{-t} }_{\\mbox{from \\#\\ref{q:zexpz}}} \\Big|_0^2\\\\ \r\n& = \r\n\\left(-2e^{-2} - e^{-2}\\right) \r\n- \\left(0e^{0} - e^{0}\\right)  \\\\\r\n& = -3 e^{-2} + 1 \r\n   \\end{align*}\r\n  \\end{Solution}\r\n\r\n% *****************\r\n\\item\r\n  \\begin{Question}\r\nFind the area under the curve $f(z) = \\arctan z$ on the interval $[0, 2]$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    On the interval $z \\in [0, 2]$, the function $\\arctan(z)$ is\r\n    always positive, so the area equals the integral $\\ds \\int_0^2 \\arctan(z)~dz$.\r\n\r\n    To evaluate the integral, we follow the work from Question\r\n    \\#\\ref{q:arctan}, but without the `7' factor, using $u =\r\n    \\arctan(z)$ and $dv = dz$.\r\n\\begin{align*}\r\n  \\int_0^2 \\arctan(z)~dz & = z~\\arctan(z) - \\frac{1}{2} \\ln|1 + z^2|  \\Big|_0^2\\\\\r\n& = \r\n\\left(2 \\arctan(2) - \\frac{1}{2} \\ln 5\\right)\r\n- \\\\\r\n&\\left(0 \\arctan(0) - \\frac{1}{2} \\ln 1\\right) \\\\\r\n& = 2 \\arctan(2) - \\frac{\\ln 5}{2} \r\n\\end{align*}\r\n  \\end{Solution}\r\n\r\n% *****************\r\n%\\item\r\n%  \\begin{Question}\r\n%    Find the area under the curve $f(y) = \\arcsin y$ for $0 \\le y \\le 1$. \r\n%  \\end{Question}\r\n%\r\n%  \\begin{Solution}\r\n%    \r\n%  \\end{Solution}\r\n\r\n\r\n% *****************\r\n% \\item\r\n%   \\begin{Question}\r\n%     Use integration by parts twice to find $\\ds \\int  e^x \\sin(x)~dx$.\r\n%   \\end{Question}\r\n\r\n%   \\begin{Solution}\r\n% There are several ways to evaluate this integral; we'll show just one here. \\\\\r\n% Let $u = \\sin(x)$ and $dv = e^x ~dx$, \\\\\r\n% so $du = \\cos(x)~dx$ and $v = e^x$. \r\n% \\begin{align*}\r\n%   \\ub{\\int  e^x \\sin(x)~dx}_{\\mbox{ our goal}, ~I} & =\r\n% \\sin(x) e^x - \\ub{\\int \\cos(x) e^x~dx}_{I_2} \\\\\r\n% \\end{align*}\r\n% To evaluate $I_2$, we select the trig function again as $u$ and the exponential as $dv$:\r\n% Let $u = \\cos(x)$ and $dv = e^x ~dx$, \\\\\r\n% so $du = -\\sin(x)~dx$ and $v = e^x$. \r\n% \\begin{align*}\r\n% &   \\ub{\\int  e^x \\sin(x)~dx}_{\\mbox{ our goal}, ~I}  \\\\\r\n% & = \\sin(x) e^x - \\ub{\\int \\cos(x) e^x~dx}_{I_2} \\\\\r\n% & = \\sin(x) e^x - \\left( \\cos(x) e^x - \\int (-\\sin(x)) e^x~dx\\right) \\\\\r\n% \\end{align*}\r\n% Tidying, we obtain\r\n% \\begin{align*}\r\n%   \\ub{\\int  e^x \\sin(x)~dx}_{~I}  \r\n% & = \\sin(x) e^x - \\cos(x) e^x - \\ub{\\int \\sin(x) e^x~dx }_I\r\n% \\end{align*}\r\n% Grouping the integrals $I$, which are what we are looking for,\r\n% \\begin{align*}\r\n%   2 \\int e^x \\sin(x)~dx & = \r\n%  \\sin(x) e^x - \\cos(x) e^x  \\\\\r\n% \\mbox{ or } \r\n%   \\int e^x \\sin(x)~dx & = \r\n%  \\frac{1}{2} \\left(\\sin(x) e^x - \\cos(x) e^x \\right) \\\\\r\n% \\end{align*}\r\n\r\n%   \\end{Solution}\r\n% % *****************\r\n% \\item\r\n%   \\begin{Question}\r\n% Use integration by parts twice to find $\\ds \\int  e^y \\cos(y)~dy$.\r\n%   \\end{Question}\r\n\r\n%   \\begin{Solution}\r\n%     This question is done in the same manner as the previous one.  
For\r\n%     variety, and to show it works as well, we will select the\r\n%     exponential function as $u$ and the trig functions as $dv$. (Both\r\n%     choices work, so long as you are consistent in both integration by\r\n%     parts steps.)\r\n    \r\n% Let $u = e^y$ and $dv = \\cos(y) ~dy$, \\\\\r\n% so $du = e^y~dy$ and $v = \\sin(y) $. \r\n% \\begin{align*}\r\n%   \\ub{\\int  e^y \\cos(y)~dy}_{\\mbox{ our goal}, ~I} & =\r\n% \\sin(y) e^y - \\ub{\\int \\sin(y) e^y~dy}_{I_2} \\\\\r\n% \\end{align*}\r\n% To evaluate $I_2$, we select the exponential function again as $u$ and the trig as $dv$: \\\\\r\n% Let $u = e^y$ and $dv = \\sin(y) ~dy$, \\\\\r\n% so $du = e^y~dy$ and $v = -\\cos(y)$. \r\n% \\begin{align*}\r\n% &   \\ub{\\int  e^y \\cos(y)~dy}_{\\mbox{ our goal}, ~I}  \\\\\r\n% & = \\sin(y) e^y - \\ub{\\int \\sin(y) e^y~dy}_{I_2} \\\\\r\n% & = \\sin(y) e^y - \\left( (-\\cos(y)) e^y - \\int (-\\cos(y)) e^y~dy\\right) \\\\\r\n% & = \\sin(y) e^y + \\cos(y) e^y - \\int \\cos(y) e^y~dy\r\n% \\end{align*}\r\n% Tidying, we obtain\r\n% \\begin{align*}\r\n%   \\ub{\\int  e^y \\cos(y)~dy}_{~I}  \r\n% & = \\sin(y) e^y + \\cos(y) e^y - \\ub{\\int \\cos(y) e^y~dy }_I\r\n% \\end{align*}\r\n% Grouping the integrals $I$, which are what we are looking for,\r\n% \\begin{align*}\r\n%   2 \\int e^y \\cos(y)~dy & = \r\n%  \\sin(y) e^y + \\cos(y) e^y  \\\\\r\n% \\mbox{ or } \r\n%   \\int e^y \\cos(y)~dy & = \r\n%  \\frac{1}{2} \\left(\\sin(y) e^y + \\cos(y) e^y \\right) \\\\\r\n% \\end{align*}\r\n%   \\end{Solution}\r\n\r\n% *****************\r\n%\\item\r\n%  \\begin{Question}\r\n%Use integration by parts twice to find $\\ds \\int  \\sin^2x~dx$.\r\n%  \\end{Question}\r\n%\r\n%  \\begin{Solution}\r\n%    \r\n%  \\end{Solution}\r\n\r\n% *****************\r\n\\item\r\n  \\begin{Question}\r\n    Use integration by parts to show that $$\\int x^n \\cos(ax)~dx =\r\n    \\frac{1}{a} x^n \\sin(ax) - \\frac{n}{a} \\int x^{n-1} \\sin(ax) ~dx$$\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    We are simply asked to change one integral into another, which can\r\n    be done here directly with integration by parts.  The main\r\n    challenge is dealing with the parameters $a$ and $n$ instead of\r\n    explicit constants.\r\n\r\n    Let $u = x^n$ and $dv = \\cos(ax)~dx$, \\\\\r\n    so $du = n x^{n-1} ~dx$ and $\\ds v = \\frac{1}{a} \\sin(ax)$  \\\\\r\n    Applying the by parts formula,\r\n\\begin{align*}\r\n&   \\int x^n \\cos(ax)~dx \\\\\r\n& =\r\nx^n \\left(\\frac{1}{a} \\sin(ax)\\right) \r\n- \\int \\frac{1}{a} \\sin(ax) \\cdot n \\cdot x^{n-1} ~dx \\\\\r\n& = x^n \\left(\\frac{1}{a} \\sin(ax)\\right) \r\n-\\frac{n}{a} \\int x^{n-1} \\sin(ax) ~dx\r\n\\end{align*}\r\nwhich is the desired formula.\r\n
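As an added check of the formula: with $n = 1$ and $a = 1$ it gives $\\int x \\cos(x)~dx = x \\sin(x) - \\int \\sin(x)~dx = x \\sin(x) + \\cos(x) + C$, which matches the antiderivative used for $\\ds \\int_3^5 x \\cos x~dx$ earlier in this set.\r\n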
  \\end{Solution}\r\n    \r\n\r\n% *****************\r\n\\item\r\n  \\begin{Question}\r\n\r\n    The concentration, $C$, in ng/ml, of a drug in the blood as a\r\n    function of the time, $t$, in hours since the drug was administered\r\n    is given by $$C = 15te^{ - 0.2t}.$$ The area under the concentration\r\n    curve is a measure of the overall effect of the drug on the body,\r\n    called the {\\em bioavailability}. Find the bioavailability of the drug\r\n    between $t = 0$ and $t = 3$.\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n    We have\r\n$$\\mbox{Bioavailability }= \\int^3_0 15te^{-0.2t}~dt.$$\r\nWe first use integration by parts to evaluate the indefinite integral of this function. \\\\\r\nLet $u = 15t$ and $dv = e^{-0.2t}~dt$,\\\\\r\nso $du = 15~dt$ and $v = -5e^{-0.2t}$. Then,\r\n\\begin{align*}\r\n \\int 15te^{-0.2t}~dt & = (15t)(-5e^{-0.2t}) - \\int (-5e^{-0.2t})(15~dt) \\\\\r\n& = -75te^{-0.2t} + 75 \\int e^{-0.2t}~dt \\\\\r\n& = -75te^{-0.2t}- 375e^{-0.2t} + C.\r\n\\end{align*}\r\n  \r\n\\begin{align*}\r\n\\mbox{Thus, } \\int^3_0 15te^{-0.2t}~dt & = (-75te^{-0.2t} - 375e^{-0.2t}) \\Big|^3_0 \\\\\r\n& = -329.29 + 375 = 45.71. \\\\\r\n\\end{align*}\r\nThe bioavailability of the drug over this time interval is 45.71 (ng/ml)-hours.\r\n  \\end{Solution}\r\n\r\n%*****************\r\n\\item\r\n  \\begin{Question}\r\n    During a surge in the demand for electricity, the rate, $r$, at\r\n    which energy is used can be approximated by $$r = te^{ -at}$$  where $t$\r\n    is the time in hours and $a$ is a positive constant. \r\n    \\begin{enumerate}[(a)]\r\n    \\item Find the total energy, $E$, used in the first $T$ hours. Give\r\n      your answer as a function of $a$.\r\n    \\item What happens to $E$ as $T \\to \\infty$?\r\n    \\end{enumerate}\r\n  \\end{Question}\r\n\r\n  \\begin{Solution}\r\n   We know that $\\ddt{E} = r$, so the total energy $E$ used in the first $T$ hours is given by \r\n$$E = \\int^T_0 te^{-at} dt.$$ \r\nWe use integration by parts. \\\\\r\nLet $u = t$, $dv = e^{-at}~dt$. \\\\\r\nThen $du  = dt$, and $v = (-1/a) e^{-at}$.\r\n\\begin{align*}\r\nE & = \\int^T_0 te^{-at} dt  \\\\\r\n& = - (t/a) e^{-at} \\Big|^T_0 - \\int^T_0 -(1/a) e^{-at} ~dt \\\\\r\n& = - (1/a) Te^{-aT} + (1/a) \\int^T_0 e^{-at} dt \\\\\r\n& = - (1/a) Te^{-aT} + (1/a^2) (1 - e^{-aT} ).\r\n\\end{align*}\r\n  \r\n(b)\r\n \\begin{align*}\r\n \\lim_{T \\to \\infty}  E & = -(1/a) \\lim_{T\\to\\infty}  \\left( \\frac{T }{e^{aT}} \\right) + (1/a^2) \\left( 1 - \\lim_{T \\to\\infty} \\frac{1}{ e^{aT}} \\right) \r\n \\end{align*}\r\n Since $a > 0$, the second limit on the right hand side in the above\r\n expression is 0. In the first limit, although both the numerator and\r\n the denominator go to infinity, the denominator $e^{aT}$ goes to\r\n infinity more quickly than $T$ does (can verify with l'Hopital's\r\n rule). 
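Indeed, a single application of l'Hopital's rule gives $\\ds \\lim_{T\\to \\infty} \\frac{T}{e^{aT}} = \\lim_{T\\to \\infty} \\frac{1}{a e^{aT}} = 0$. 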
So in the end the denominator $e^{aT}$ is much greater than\r\n the numerator $T$. Hence\r\n$ \\ds \\lim_{T\\to \\infty} \\frac{T}{ e^{aT}} = 0$. \r\n\r\nThus $\\ds \\lim_{T\\to\\infty} E = \\frac{1}{a^2}$.  \r\n\r\nIn words this means that the total amount of energy in the surge,\r\naccumulated over all time $(T \\to \\infty)$, is $\\ds \\frac{1}{a^2}$\r\nJoules.\r\n \\end{Solution}\r\n%*****************\r\n\r\n% \\item\r\n%   \\begin{Question}\r\n%     In describing the behavior of an electron, we use wave functions\r\n%     $\\Psi_1$, $\\Psi_2$, $\\Psi_3$, $\\ldots$\\footnote{The symbol $\\Psi$ is the capital greek letter ``Psi'', pronounced the same way as `sigh'.  It is the most common symbol for probability functions and wave functions in chemistry and physics.} of the form $$\\Psi_n( x) =\r\n%     C_n \\sin( n\\pi x) \\mbox{~~~ for }n = 1, 2, 3, \\ldots$$ where $x$\r\n%     is the distance from a fixed point and $C_n$ is a positive\r\n%     constant.\r\n% \\begin{enumerate}[(a)]\r\n% \\item  Find $C_1$ so that $\\Psi_1$ satisfies \r\n% $$\\int^1_0 ( \\Psi_1( x))^2~dx = 1.$$ This is called {\\em normalizing} the wave function $\\Psi_1$. \r\n% \\item  For any integer $n$, find $C_n$ so that $\\Psi_n$ is normalized.\r\n% \\end{enumerate}\r\n%   \\end{Question}\r\n\r\n%   \\begin{Solution}\r\n    \r\n%   \\end{Solution}\r\n\\end{multicols}\r\n\r\n\\hrulefill\r\n\r\n\\end{enumerate}\r\n\\end{document}\r\n\r\n", "meta": {"hexsha": "7c35345f82509a6d9a7744dd267257f1cd4aea11", "size": 78302, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PracticeProblems/Week05.tex", "max_stars_repo_name": "aableson/MNTCP01", "max_stars_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-11-27T16:10:35.000Z", "max_stars_repo_stars_event_max_datetime": "2015-11-27T16:10:35.000Z", "max_issues_repo_path": "PracticeProblems/Week05.tex", "max_issues_repo_name": "aableson/MNTCP01", "max_issues_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PracticeProblems/Week05.tex", "max_forks_repo_name": "aableson/MNTCP01", "max_forks_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3827383642, "max_line_length": 272, "alphanum_fraction": 0.5361932007, "num_tokens": 31867, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5802304998914001}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{examples}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% ============================================================================================\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   import cdblib\n   checkpoint_file = 'tests/semantic/output/example-07.json'\n   cdblib.create (checkpoint_file)\n   checkpoint = []\n\\end{cadabra}\n\\egroup\n\n\\clearpage\n\n% ============================================================================================\n\\section*{Example 7 Export to C-code}\n\n\\begin{cadabra}\n   def write_code (obj,name,filename,rank):\n\n      import os\n\n      from sympy.printing.c import C99CodePrinter as printer\n      from sympy.codegen.ast import Assignment\n\n      idx=[]  # indices in the form [{x, x}, {x, y} ...]\n      lst=[]  # corresponding terms [termxx, termxy, ...]\n\n      for i in range( len(obj[rank]) ):                 # rank = number of free indices\n          idx.append( str(obj[rank][i][0]._sympy_()) )  # indices for this term\n          lst.append( str(obj[rank][i][1]._sympy_()) )  # the matching term\n\n      mat = sympy.Matrix([lst])                         # row vector of terms\n      sub_exprs, simplified_rhs = sympy.cse(mat)        # optimise code\n\n      with open(os.getcwd() + '/' + filename, 'w') as out:\n\n         for lhs, rhs in sub_exprs:\n            out.write(printer().doprint(Assignment(lhs, rhs))+'\\n')\n\n         for index, rhs in enumerate (simplified_rhs[0]):\n            lhs = sympy.Symbol(name+' '+(idx[index]).replace(', ',']['))\n            out.write(printer().doprint(Assignment(lhs, rhs))+'\\n')\n\\end{cadabra}\n\n\\clearpage\n\n\\begin{cadabra}\n   {\\theta, \\varphi}::Coordinate.\n   {a,b,c,d,e,f,g,h#}::Indices(values={\\theta, \\varphi}, position=independent).\n\n   \\partial{#}::PartialDerivative.\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.\n\n   Gamma := \\Gamma^{a}_{f g} -> 1/2 g^{a b} (   \\partial_{g}{g_{b f}}\n                                              + \\partial_{f}{g_{b g}}\n                                              - \\partial_{b}{g_{f g}} ).\n\n   Rabcd := R^{d}_{e f g} ->   \\partial_{f}{\\Gamma^{d}_{e g}}\n                             - \\partial_{g}{\\Gamma^{d}_{e f}}\n                             + \\Gamma^{d}_{b f} \\Gamma^{b}_{e g}\n                             - \\Gamma^{d}_{b g} \\Gamma^{b}_{e f}.\n\n   Rab := R_{a b} -> R^{c}_{a c b}.\n\n   gab := { g_{\\theta \\theta}   = r**2,\n            g_{\\varphi \\varphi} = r**2 \\sin(\\theta)**2 }.   
# cdb(ex-07.101,gab)\n\n   complete (gab, $g^{a b}$)                                # cdb(ex-07.102,gab)\n\n   substitute (Rabcd, Gamma)\n   substitute (Rab, Rabcd)\n\n   evaluate   (Gamma, gab, rhsonly=True)                    # cdb(ex-07.103,Gamma)\n   evaluate   (Rabcd, gab, rhsonly=True)                    # cdb(ex-07.104,Rabcd)\n   evaluate   (Rab,   gab, rhsonly=True)                    # cdb(ex-07.105,Rab)\n\n   write_code (Gamma[1],'myGamma','example-07-gamma.c',3)\n   write_code (Rabcd[1],'myRabcd','example-07-rabcd.c',4)\n   write_code (Rab[1],  'myRab',  'example-07-rab.c',2)\n\\end{cadabra}\n\n\\begin{align*}\n   &\\Cdb{ex-07.101}\\\\[10pt]\n   &\\Cdb{ex-07.102}\\\\[10pt]\n   &\\Cdb{ex-07.103}\\\\[10pt]\n   &\\Cdb{ex-07.104}\\\\[10pt]\n   &\\Cdb{ex-07.105}\n\\end{align*}\n\n\\clearpage\n\n% ============================================================================================\n% export to json format\n\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   for i in range( len(checkpoint) ):\n      cdblib.put ('check{:03d}'.format(i),checkpoint[i],checkpoint_file)\n\\end{cadabra}\n\\egroup\n\n\\end{document}\n", "meta": {"hexsha": "f777ca77cf90ea9347f72a20be5799238782962a", "size": 3554, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/example-07.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/example-07.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/example-07.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 31.4513274336, "max_line_length": 94, "alphanum_fraction": 0.5045019696, "num_tokens": 1052, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5802304998914001}}
{"text": "\\subsection{Goal}\n\n\\begin{frame}[t]{About this project}\n\n\\textbf{Objective:} Gain a better understanding of the arithmetic in Rijndael's finite field (underlying the AES block-cipher). \\\\\n\n\\textbf{Output:} A program able to:\n\t\\begin{itemize}\n\t\t\\item Perform the basic operations in Rijndael's finite field.\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item Sum (and subtraction).\n\t\t\t\t\\item Multiplication.\n\t\t\t\t\\item Division.\n\t\t\t\\end{itemize}\n\t\t\\item Use the extended Euclidean algorithm \\cite{Menezes2012handbook} for computing the multiplicative inverse of a polynomial.\n\t\t\n\t\t\\item Use your program to re-generate the AES S-Boxes (for both encryption and decryption).\n\t\t\n\t\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[t]{What is not}\n\t\n\tEven though the complete implementation of the AES is really interesting we will focus only on the under understanding of the arithmetic in Rijndael's finite field and the associated operations and implementations.\n\\end{frame}\n\n\\begin{frame}[t]{Bib load}\n\t\\begin{itemize}\n\t\t\\item \\cite{Menezes2012handbook}\n\t\t\\item \\cite{Venturi2012crittografia}\n\t\t\\item \\cite{Katz2014modern}\n\t\t\\item \\cite{Rijndael2020design}\n\t\\end{itemize}\n\\end{frame}", "meta": {"hexsha": "9726b930335538ab6cff3fbb9ac14f6d7c371737", "size": 1149, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "presentation/1_introduction/goal.tex", "max_stars_repo_name": "belgrades/aes", "max_stars_repo_head_hexsha": "ebd1fbf36acd8e3a787ebc0cd68f83e3784d2979", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T12:34:37.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T12:34:37.000Z", "max_issues_repo_path": "presentation/1_introduction/goal.tex", "max_issues_repo_name": "belgrades/aes", "max_issues_repo_head_hexsha": "ebd1fbf36acd8e3a787ebc0cd68f83e3784d2979", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentation/1_introduction/goal.tex", "max_forks_repo_name": "belgrades/aes", "max_forks_repo_head_hexsha": "ebd1fbf36acd8e3a787ebc0cd68f83e3784d2979", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7941176471, "max_line_length": 215, "alphanum_fraction": 0.7545691906, "num_tokens": 334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5802304920954942}}
{"text": "% Boris Jeremic (@ucdavis.edu)\n\\section{OpenSees Command to Create Elastic Isotropic/Anisotropic 3D Material }\n\n\nThere  are  two  types  of  3D  elastic  isotropic material models\n(i.e.  linear  elastic  and  nonlinear or pressure\nsensitive elastic materials)\n and one cross\nanisotropic elastic material model that  you\ncan  create.\n% Section \\ref{LinEla} discusses the\n%linear  elastic  material  command, while Section \\ref{NonlEla}\n%examines the nonlinear elastic material command.\n\n\n\\subsection{ElasticIsotropic3D command}\n\\label{LinEla}\n%\\texttt{nDMaterial ElasticIsotropic3D MatTag? $E_o$? $\\nu?$ $\\rho?$}\n\\begin{verbatim}\nnDMaterial ElasticIsotropic3D matTag? E0? nu? rho?\n\\end{verbatim}\n\nThe  \\texttt{ElasticIsotropic3D}  material  is the standard linear elastic\nisotropic  three  dimensional  material  implemented  based  on\ntensor  operation.  The arguments to construct the material are\nits tag, \\texttt{matTag}, Young's Modulus at atmospheric pressure \\texttt{E0},\nPoisson's ratio \\texttt{nu}, and mass density \\texttt{rho}.\n\n\n\n\n\\subsection{PressureDependentElastic3D command}\n\\label{NonlEla}\n%\\texttt{nDMaterial PressureDependentElastic3D MatTag? $E_o$? $\\nu?$ \n%$\\rho?$ $n?$ $p_{ref}?$ $p_{cutoff}?$}\n\\begin{verbatim}\nnDMaterial PressureDependentElastic3D matTag? E0? nu? rho? n? pr? pc? \n\\end{verbatim}\n\nThe  \\texttt{PressureDependentElastic3D}  material  is the standard\nnonlinear   elastic   isotropic   three   dimensional  material\nimplemented based on tensor operation. The first four arguments\nare  the  same  as  linear elastic command described above. \nThe pressure  dependent  elastic  modulus is to be determined using\nthe following formula \\ref{NonLineEl011}\n% ( Manzari and Dafalias\n%\\cite{Manzari97})\n.  \nThere  are  three  more  arguments for this command. \n\\texttt{n} is  the  exponent, \n\\texttt{pr} ($p_{ref}$) is  the  atmospheric pressure, \nwhile \\texttt{pc} ($p_{cut-off}$) is the cut-off pressure. \nWhen $p'$ ( the mean effective normal stress) is less than $p_{cut-off}$,\nthen $p'~=~p_{cut-off}$.\n%\n\\begin{equation}\nE = E_o \\left(\\frac{p'}{p_{ref}}\\right)^{n}\n\\label{NonLineEl011}\n\\end{equation}\n%\n\n\\subsection{Elastic Cross Anisotropic 3D command}\n\\label{ECA3D}\n\\begin{verbatim}\nnDMaterial ElasticCrossAnisotropic matTag? Eh? Ev? nuhv? nuhh? Ghv?\n\\end{verbatim}\n\nThe  \\texttt{ElasticCrossAnisotropic}  material  is the standard linear elastic\ncross anisotropic three dimensional material implemented  based  on\ntensor  operation.  
\\subsection{Elastic Cross Anisotropic 3D command}\n\\label{ECA3D}\n\\begin{verbatim}\nnDMaterial ElasticCrossAnisotropic matTag? Eh? Ev? nuhv? nuhh? Ghv?\n\\end{verbatim}\n\nThe  \\texttt{ElasticCrossAnisotropic}  material  is the standard linear elastic\ncross anisotropic three dimensional material implemented  based  on\ntensor  operation.  \nThe arguments to construct the material are \nits tag, \\texttt{matTag}, \nthe elastic modulus in the cross plane, \\texttt{Eh}, \nthe elastic modulus in the plane vertical to the cross plane, \\texttt{Ev},\nPoisson's ratio between the cross plane and its vertical plane, \\texttt{nuhv},\nPoisson's ratio within the cross plane, \\texttt{nuhh},\nand the shear modulus between the cross plane and its vertical plane, \\texttt{Ghv}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "bf3cc8d2fc5c03ba5efa9828eb365f4d7cbd581c", "size": 2911, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "OpenSees/SRC/doc/Elastic3DCommands.tex", "max_stars_repo_name": "kuanshi/ductile-fracture", "max_stars_repo_head_hexsha": "ccb350564df54f5c5ec3a079100effe261b46650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-03-05T16:25:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-17T14:12:03.000Z", "max_issues_repo_path": "SRC/doc/Elastic3DCommands.tex", "max_issues_repo_name": "steva44/OpenSees", "max_issues_repo_head_hexsha": "417c3be117992a108c6bbbcf5c9b63806b9362ab", "max_issues_repo_licenses": ["TCL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SRC/doc/Elastic3DCommands.tex", "max_forks_repo_name": "steva44/OpenSees", "max_forks_repo_head_hexsha": "417c3be117992a108c6bbbcf5c9b63806b9362ab", "max_forks_repo_licenses": ["TCL"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-09-21T03:11:11.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-19T07:29:37.000Z", "avg_line_length": 30.3229166667, "max_line_length": 83, "alphanum_fraction": 0.7540364136, "num_tokens": 860, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199714402813, "lm_q2_score": 0.6926419767901476, "lm_q1q2_score": 0.5801707528173035}}
{"text": "\\section{Triple Integrals}\r\n\\noindent\r\nTriple integrals work much the same way as single and double integrals. They are still defined by a Riemann sum, and Fubini's theorems about independent domains and the order of integration still applies.\r\n\r\n\\input{./multipleIntegrals/fubinisTheoremZSimpleRegions}\r\n\\input{./multipleIntegrals/planeLaminas_triple}", "meta": {"hexsha": "0bbbd1b60a00d683c8e08bb6b341b63c8d1bdeee", "size": 352, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/multipleIntegrals/tripleIntegras.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "multiCalc/multipleIntegrals/tripleIntegras.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "multiCalc/multipleIntegrals/tripleIntegras.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 58.6666666667, "max_line_length": 205, "alphanum_fraction": 0.8210227273, "num_tokens": 89, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8376199633332891, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5801707472020603}}
{"text": "%!TEX root = ../Thesis.tex\n\\chapter{Conclusion}\nTo make it clear, low-energy transfer orbits are not fit for manned missions. Those types is missions require optimization for flight time due to resource usage of maintaining the lives of the astronauts.\n\nWe were able to find a low-energy transfer orbit to the moon with a flight time of 194 days and a $\\Delta v$ as low as $\\SI{3795}{\\km\\per\\s}$, which is better than any of the values we compared with from the literature but still higher than the theoretical minimum of $\\Delta v = \\SI{3721}{\\km\\per\\s}$, as it should be. We ensured that these results were numerically solid by implementing an adaptive St\u00f6rmer\u2013Verlet method that maintained a constant step-error of $10^{-9}$. Furthermore the code was implemented in Python optimized for parallelization, enabling feasible brute force search for several kinds of orbits. We also found another LETO with a more moderate flight time of 41 days and $\\Delta v$ of respectable $\\SI{3896}{\\km\\per\\s}$, which is very close to value found by Tuppoto \\cite{Topputo2005}, but that had a flight time of 8 months. Furthermore a Hohmann transfer to the Moon was modeled and gave a prediction of $\\Delta v = \\SI{3.946}{\\km\\per\\s}$ with a flight time of 5.0 days. Our simulation found a Hohmann orbit of $\\Delta v = \\SI{3.912}{\\km\\per\\s}$ and flight time 4.3 days. We also found the 3-day Hohmann to be within 2.5\\% agreement with the real world 3-day trip to the moon realized by the Apollo missions, which validates our model and code implementation.", "meta": {"hexsha": "8beec6136a342f88b2f53ef86191ce2d130e6dec", "size": 1539, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/chapters/Conclusion.tex", "max_stars_repo_name": "GandalfSaxe/leto", "max_stars_repo_head_hexsha": "d27c2a4a04518f4230a80ce83d0252257247a512", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/chapters/Conclusion.tex", "max_issues_repo_name": "GandalfSaxe/leto", "max_issues_repo_head_hexsha": "d27c2a4a04518f4230a80ce83d0252257247a512", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/chapters/Conclusion.tex", "max_forks_repo_name": "GandalfSaxe/leto", "max_forks_repo_head_hexsha": "d27c2a4a04518f4230a80ce83d0252257247a512", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 307.8, "max_line_length": 1285, "alphanum_fraction": 0.7673814165, "num_tokens": 385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84997116805678, "lm_q2_score": 0.6825737473266735, "lm_q1q2_score": 0.5801680053001461}}
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n      \\subsection{getPlotDistr\\_pendubot.m}\n\n\\begin{par}\n\\textbf{Summary:} Compute means and covariances of the Cartesian coordinates of the tips both the inner and outer pendulum assuming that the joint state $x$ of the cart-double-pendulum system is Gaussian, i.e., $x\\sim N(m, s)$\n\\end{par} \\vspace{1em}\n\n\\begin{verbatim}   function [M1, S1, M2, S2] = getPlotDistr_pendubot(m, s, ell1, ell2)\\end{verbatim}\n    \\begin{par}\n\\textbf{Input arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}m       mean of full state                                    [6 x 1]\ns       covariance of full state                              [6 x 6]\nell1    length of inner pendulum\nell2    length of outer pendulum\\end{verbatim}\n\\begin{verbatim}Note: this code assumes that the following order of the state:\n       1: pend1 angular velocity,\n       2: pend2 angular velocity,\n       3: pend1 angle,\n       4: pend2 angle\\end{verbatim}\n\\begin{par}\n\\textbf{Output arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}M1      mean of tip of inner pendulum                         [2 x 1]\nS1      covariance of tip of inner pendulum                   [2 x 2]\nM2      mean of tip of outer pendulum                         [2 x 1]\nS2      covariance of tip of outer pendulum                   [2 x 2]\\end{verbatim}\n\\begin{par}\nCopyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.\n\\end{par} \\vspace{1em}\n\\begin{par}\nLast modification: 2013-03-27\n\\end{par} \\vspace{1em}\n\n\n\\subsection*{High-Level Steps} \n\n\\begin{enumerate}\n\\setlength{\\itemsep}{-1ex}\n   \\item Augment input distribution to complex angle representation\n   \\item Compute means of tips of pendulums (in Cartesian coordinates)\n   \\item Compute covariances of tips of pendulums (in Cartesian coordinates)\n\\end{enumerate}\n\n\\begin{lstlisting}\nfunction [M1, S1, M2, S2] = getPlotDistr_pendubot(m, s, ell1, ell2)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\n% 1. Augment input distribution\n[m1 s1 c1] = gTrig(m, s, [3 4], [ell1, ell2]); % map input through sin/cos\nm1 = [m; m1];        % mean of joint\nc1 = s*c1;           % cross-covariance between input and prediction\ns1 = [s c1; c1' s1]; % covariance of joint\n\n% 2. Compute means of tips of pendulums (in Cartesian coordinates)\nM1 = [-m1(5); m1(6)];                 % [-l*sin(t1), l*cos(t1)]\nM2 = [-m1(5) + m1(7); m1(6) + m1(8)]; % [-l*(sin(t1)-sin(t2)),l*(cos(t1)+cos(t2))]\n\n% 2. Put covariance matrices together (Cart. coord.)\n% first set of coordinates (tip of 1st pendulum)\ns11 = s1(5,5);\ns12 = -s1(5,6);\ns22 = s1(6,6);\nS1 = [s11 s12; s12' s22];\n\n% second set of coordinates (tip of 2nd pendulum)\ns11 = s1(5,5) + s1(7,7) - s1(5,7) - s1(7,5);    % ell1*sin(t1) + ell2*sin(t2)\ns22 = s1(6,6) + s1(8,8) + s1(6,8) + s1(8,6);    % ell1*cos(t1) + ell2*cos(t2)\ns12 = -(s1(5,6) + s1(5,8) + s1(7,6) + s1(7,8));\nS2 = [s11 s12; s12' s22];\n\n% make sure we have proper covariances (sometimes numerical problems occur)\ntry\n  chol(S1);\ncatch\n  warning('matrix S1 not pos.def. (getPlotDistr)');\n  S1 = S1 + (1e-6 - min(eig(S1)))*eye(2);\nend\n\ntry\n  chol(S2);\ncatch\n  warning('matrix S2 not pos.def. 
(getPlotDistr)');\n  S2 = S2 + (1e-6 - min(eig(S2)))*eye(2);\nend\n\\end{lstlisting}\n", "meta": {"hexsha": "d47e4cd75dfeff7785e62b9cb865981818a5f323", "size": 3347, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/getPlotDistr_pendubot.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/getPlotDistr_pendubot.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/getPlotDistr_pendubot.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 33.8080808081, "max_line_length": 226, "alphanum_fraction": 0.6342993726, "num_tokens": 1159, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8856314828740729, "lm_q2_score": 0.6548947290421275, "lm_q1q2_score": 0.5799953900079936}}
{"text": "% Chapter X\n\n\\chapter{Learning model parameters from data} % Chapter title\n\n\\label{Learning model parameters from data} % For referencing the chapter elsewhere, use \n\nFigaro provides support for learning model parameters from data. In this section, a special type of compound element will be presented which allows a distribution to be learned from observed evidence. Details are given about the algorithm Figaro provides for learning parameters. Lastly, an example using parameters and learning algorithms to determine the fairness of a set of die rolls is presented.\n\n\\section{Parameters and parameterized elements}\n\nThis section discusses elements which are learnable parameters. For clarity, a distinction should be made on the meaning of the word \\emph{parameter} in this context. This is different from a method parameter or Scala type parameter. In this section, we use \\emph{parameter} to refer to a Figaro element which can be learned from data. There are currently two such types of parameters in Figaro: (1) \\texttt{Beta}, and (2) \\texttt{Dirichlet}.\n\nA customary illustration of parameter learning is to consider the outcomes of a coin flip and determine whether or not the coin is fair. In the case of a \\texttt{Flip} element (which is a Bernoulli distribution), the conjugate prior distribution is a Beta distribution. If the coin is not fair, we would expect a prior distribution to have a higher value of alpha or beta (the shape variables of a Beta). First, we will create the conjugate prior distribution of a \\texttt{Flip}:\n\n\\begin{flushleft}\n\\texttt{val fairness = Beta(1,1)}\n\\end{flushleft}\n\nThe element \\texttt{fairness} is the parameter we will use to model the bias of our coin. The creation of the parameter is no different than  Most importantly, we later use it to create a model learned from parameterized elements. Creation of a parameterized element is accomplished in exactly the same way as creating a compound element.\n\n\\begin{flushleft}\n\\texttt{val f = Flip(fairness)}\n\\end{flushleft}\n\nThis element models a flip of a coin having the fairness specified by the beta parameter, using a value of \\texttt{true} to represent heads and \\texttt{false} to represent tails. We have actually created an instance of \\texttt{ParameterizedFlip}, which is a special type of compound element. A  \\texttt{ParameterizedFlip} is created simply by providing a \\texttt{Beta} as the argument to \\texttt{Flip}.\n\nBy using a \\texttt{ParameterizedFlip}, the evidence we observe on f can be used to learn the value of \\texttt{fairness}. Thus, the next step is to provide the data observed from flips of the coin. Values can be observed just as with other elements, by using \\texttt{f.observe(true)} or \\texttt{f.observe(false)}. We could also use conditions or constraints.\n\n\\marginpar{This example is found in FairCoin.Scala}\nAs a more detailed example, suppose we have seen 24 heads and 62 tails. One way to represent this data is in a Scala sequence. 
Note that for readability, the sequence is truncated here.\n\n\\begin{flushleft}\n\\texttt{val data = Seq('H', 'H', 'H', 'T', 'H', 'H', 'T', 'H', ...}\n\\end{flushleft}\n\nThe following block of Scala code will iterate through each of the items in the sequence, pair it with the corresponding \\texttt{Flip} element created from the parameter, and observe true or false based on the side of the coin:\n\n\\begin{flushleft}\n\\texttt{data zip model.trials foreach \\{\n\\newline \\tab (datum: (Char, Flip)) => if (datum.\\_1 == 'H') \n\\newline \\tab datum.\\_2.observe(true) else datum.\\_2.observe(false)\n\\newline \\}\n}\n\\end{flushleft}\n\nWe have created a parameter, parameterized elements and considered a set of data. Note that each time a parameterized flip is created, it is using the same \\texttt{Beta}. It is now desirable to employ a learning mechanism to determine the fairness of the coin, and to create a new element corresponding to the learned value. This is possible by using a learning algorithm.\n\n\\section{Expectation maximization}\n\nA learning algorithm can be used to determine the maximum a posteriori estimate for parameter elements. Parameter elements have a \\texttt{MAPValue} which is set when the parameter is used as a target in a learning algorithm. Presently, Figaro provides one learning algorithm, expectation maximization, which uses existing Figaro algorithms to estimate sufficient statistics. Recall that expectation maximization is an iterative algorithm consisting of an expectation step and a maximization step. During the expectation step, an estimate is produced for the sufficient statistics of the parameter. The estimates are then used in the maximization step to find the most likely value of the parameter. This continues for a set number of iterations and converges toward the true MAP value.\n\nFrom a practical standpoint, learning a parameter with expectation maximization is very simple. We need only provide the target parameter and, optionally, the number of iterations to the algorithm. The default number of iterations is 10. We can also choose an inference algorithm for estimating the sufficient statistics of the target parameters. Currently, Metropolis Hastings, importance sampling, variable elimination or belief propagation can be used for this purpose. Figaro's Generalized EM algorithm is used in the following way:\n\n\\begin{flushleft}\n\\texttt{val learningAlgorithm = EMwithMH(fairness)\n\\newline learningAlgorithm.start\n\\newline learningAlgorithm.kill\n\\newline \n\\newline val coin = Flip(fairness.MAPValue)\n\\newline println(\"The probability of a coin with this fairness showing\n'heads' is: \")\n\\newline println(coin.prob)\n}\n\\end{flushleft}\n\nThe line \\texttt{val learningAlgorithm = EMwithMH(fairness)} creates an EM algorithm which uses Metropolis Hastings to estimate sufficient statistics. We could also have used \\texttt{EMwithBP(fairness)} or \n\\texttt{EMwithImp\\-ortance(fairness)}. \n\nAfter the algorithm has finished running, we can create an element learned from the parameter by using \\texttt{Flip(fairness.MAPValue)}. The element \\texttt{coin} is a \\texttt{Flip}, where the probability of producing true is determined from the data we observed above.\n\nAfter running the program, we see:\n\n\\begin{flushleft}\n\\texttt{The probability of a coin with this fairness showing 'heads' is:\n0.7159090909090909}\n\\end{flushleft}\n\nWe may want to make further explorations about the learned model. 
For instance, if we wanted to know the probability that two flips of this coin show the same side, we could use:\n\n\\begin{flushleft}\n\\texttt{val t1 = Flip(fairness.MAPValue) \n\\newline val t2 = Flip(fairness.MAPValue) \n\\newline val equal = t1 === t2}\n\\end{flushleft}\n\nWe can then use an algorithm like variable elimination to determine the probability that the coins show the same side:\n\n\\begin{flushleft}\n\\texttt{val alg = VariableElimination(equal)\n\\newline alg.start()\n\\newline println(\"The probability of two coins which exhibit this fairness showing the same side is: \" + alg.probability(equal, \\textbf{true}))\n\\newline alg.kill()\n}\n\\end{flushleft}\n\nThis results in the following output:\n\n\\begin{flushleft}\n\\texttt{The probability of two coins which exhibit this fairness showing the same side is: 0.5932334710743803}\n\\end{flushleft}\n\n\\section{Parameter collections}\n\nIn the previous sections, parameter learning was discussed using a Beta parameter. The Beta parameters were supplied individually to the learning algorithm, and the MAP value for each parameter was retrieved individually. For more complicated models, it is often useful to define a model structure with parameters which can be learned, and then use the values of the learned parameters in the same structure. The \\texttt{ModelParameters} pattern is a simple way of accomplishing this. By using \\texttt{ParameterCollection}, the model structure only needs to be defined once. Parameters are added to the collection in a similar fashion to \\texttt{Element} collections. We specify the parameter name and add it to the collection when we create the parameters.\n\nThis section will also explain the use of Dirichlet parameters. The Dirichlet distribution is a multidimensional generalization of the Beta with a variable number of concentration parameters or alpha values. These values correspond to the weight of each possible outcome in the posterior distribution. In a Dirichlet parameter with two dimensions, the alpha values might again correspond to the outcome of heads and tails, or true and false. Using a higher number of dimensions, we can model a number of different categories or outcomes.\n\nSuppose we are given a set of data in which each record represents a roll of two dice out of three possible dice. The sum of the dice is available, as well as which dice were selected for the roll. However, the individual outcome of each die is not available. Our task is to learn the fairness of each die.\n\nThe first step is to define the possible outcomes from a dice roll. This is easily accomplished by using a Scala list:\n\n\\begin{flushleft}\n\\marginpar{This example is found in FairDice.Scala}\n\\texttt{val outcomes = List(1, 2, 3, 4, 5, 6)}\n\\end{flushleft}\n\nNext, we create a set of model parameters representing the parameters of the fair dice model. We create a parameter representing the fairness of each die and add it to the collection of model parameters.\n\n\\begin{flushleft}\n\\texttt{val params = ModelParameters()\n\\newline val fairness1 = Dirichlet(2.0, 2.0, 2.0, 2.0, 2.0, 2.0)(\"fairness1\", params)\n\\newline val fairness2 = Dirichlet(2.0, 2.0, 2.0, 2.0, 2.0, 2.0)(\"fairness2\", params) \n\\newline val fairness3 = Dirichlet(2.0, 2.0, 2.0, 2.0, 2.0, 2.0)(\"fairness3\", params) \n}\n\\end{flushleft}\n\nEach die is initially assumed to be fair. 
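(An added remark, assuming the standard Dirichlet parameterization: with all six concentration parameters equal, the prior mean probability of each face is $2.0/(6 \\times 2.0) = 1/6$, so this choice does encode a die that is fair in expectation before any data are seen.) 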
For convenience, the data which we will learn the parameters from is represented in a Scala sequence:\n\n\\begin{flushleft}\n\\texttt{val data = Seq((2, 3, 8), (1, 3, 7), (1, 2, 3), (1, 2, 3), ...}\n\\end{flushleft}\n\n\\texttt{data} is a sequence of 50 Scala tuples. The first two values in each tuple indicate which two dice were chosen to roll. The third value is the sum of the two dice.\n\nThe next step is to define a class representing the model structure.\n\n\\begin{flushleft}\n\\texttt{class DiceModel(val parameters: ParameterCollection, val data: Seq[(Int, Int, Int)], val outcomes: List[Int])}\n\\end{flushleft}\n\nThis defines a class which accepts a \\texttt{ParameterCollection}, a set of data, and a list of outcomes as its arguments. As we will see, the values which are retrieved from the \\texttt{ParameterCollection} depend on whether we are working with the prior or posterior parameters. To model the outcome of the sum, we can use an \\texttt{Apply} element with a function which sums the outcome of its arguments. We place the following loop inside the \\texttt{DiceModel} class:\n\n\\begin{flushleft}\n\\texttt{val sum = (i1: Int, i2: Int) => i1 + i2\n\t\\newline val trials = for (datum <- data) yield \\{\n    \\newline \\tab val die1 = Select(parameters.get(\"fairness\" + datum.\\_1), outcomes: \\_*)\n    \\newline \\tab val die2 = Select(parameters.get(\"fairness\" + datum.\\_2), outcomes: \\_*)\n    \\newline \\tab Apply(die1, die2, sum)\n\t\\newline \\}\n  }\n\\end{flushleft}\n\nThe code section above first defines a summing function and then builds one trial for each record in the data. \\texttt{val sum = (i1: Int, i2: Int) => i1 + i2} defines a Scala function which accepts two integer values and returns their sum.  Next, two Select elements are created and parameterized by the input parameters. We retrieve the parameters by using the \\texttt{get} method from the input \\texttt{ParameterCollection}.\n\nNote that the arguments to \\texttt{Select} are different from what has been presented previously. Instead of directly enumerating each probability and outcome, we specify a Dirichlet parameter and the list of possible outcomes. The last line of the loop applies the sum function to the two dice; the observed sums are applied as evidence below. By building one such trial for each tuple in the sequence, we can create a model learned from the data.\n\nWe can now create an instance of the \\texttt{DiceModel} class, using the prior parameters from the parameter collection. \n\n\\begin{flushleft}\n \\texttt{val model = new DiceModel(params.priorParameters, data, outcomes)\n}\n\\end{flushleft}\n\nTo apply evidence to the model, we can write another loop over the contents of the data and the trials defined inside the model class.\n\n\\begin{flushleft}\n\\texttt{for ((datum,trial) <- data zip model.trials) \\{\n    \\newline \\tab trial.observe(datum.\\_3)\n    \\}\n}\n\\end{flushleft}\n\nJust as in the fair coin example, we create an expectation maximization algorithm. 
This time, instead of passing the parameters in a list or sequence, we can simply use the collection of parameters as an input argument.\n\n\\begin{flushleft}\n\\texttt{val numberOfBPIterations = 10\n\t\\newline val numberOfEMIterations = 10\n    \\newline val algorithm = EMWithBP(numberOfEMIterations, numberOfBPIterations, params)\n    \\newline algorithm.start\n    \\newline algorithm.stop\n    \\newline val d1 = Select(params.posteriorParameters.get(\"fairness1\"), outcomes:\\_*)\n    \\newline val d2 = Select(params.posteriorParameters.get(\"fairness2\"), outcomes:\\_*)\n    \\newline val d3 = Select(params.posteriorParameters.get(\"fairness3\"), outcomes:\\_*)\n}\n\\end{flushleft}\n\nThe code block above will create \\texttt{Select} elements corresponding to the MAP value of the learned parameters. We retrieve the MAP value of the parameters by using the \\texttt{posteriorParameters.get} method of our parameter collection. If we wanted to create another set of 50 trials using the learned parameter values, we could simply use:\n\n\\begin{flushleft}\n \\texttt{val model = new DiceModel(params.posteriorParameters, data, outcomes)\n}\n\\end{flushleft}\n\nNote that for a \\texttt{Select}, a list of outcomes must be supplied as an argument along with their corresponding probabilities. This is because the number of concentration parameters may vary, and the type of the outcomes is not fixed. Running this code results in the following output, in which we see the model has estimated the probabilities of each value for each die. If one examines the full data declaration in the example code, it is quite easy to see that there are only three observed values of the sum of the dice (3, 7 and 8), so the learning algorithm has correctly inferred that the most likely values of the dice are 1, 2 and 6, respectively.\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_1 are:\n\\newline \\tab 0.906250000442371 -> 1\n\\newline \\tab 0.0 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.09374999955762903 -> 5\n\\newline \\tab 0.0 -> 6\n}\n\\end{flushleft}\n\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_2 are:\n\\newline \\tab 0.0 -> 1\n\\newline \\tab 0.9999999996067813 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.0 -> 5\n\\newline \\tab 3.9321864899990694E-10 -> 6\n}\n\\end{flushleft}\n\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_3 are:\n\\newline \\tab 0.0 -> 1\n\\newline \\tab 0.0 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.0 -> 5\n\\newline \\tab 1.0 -> 6\n}\n\\end{flushleft}\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "370c46ad67ad423b787c3b5b10821a6679573ea9", "size": 15131, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_stars_repo_name": "mikiec84/figaro", "max_stars_repo_head_hexsha": "d1f15a7e7b46c4d8f139273ef5f4ed8edb6bbbcc", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 708, "max_stars_repo_stars_event_min_datetime": "2015-01-01T03:18:31.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T02:08:48.000Z", "max_issues_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_issues_repo_name": "mikiec84/figaro", "max_issues_repo_head_hexsha": "d1f15a7e7b46c4d8f139273ef5f4ed8edb6bbbcc", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 329, "max_issues_repo_issues_event_min_datetime": 
"2015-01-02T22:57:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-15T01:39:09.000Z", "max_forks_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_forks_repo_name": "mikiec84/figaro", "max_forks_repo_head_hexsha": "d1f15a7e7b46c4d8f139273ef5f4ed8edb6bbbcc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 257, "max_forks_repo_forks_event_min_datetime": "2015-01-05T19:58:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T19:05:20.000Z", "avg_line_length": 62.2674897119, "max_line_length": 785, "alphanum_fraction": 0.7739739607, "num_tokens": 3734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5799737477862532}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[top=1.5cm,bottom=1.5cm]{geometry}\n\\usepackage[T1]{fontenc}\n\\usepackage{url}\n\\usepackage{hyperref}\n\\usepackage{xcolor}\n% \n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{ifthen}\n\\usepackage{mathpartir}\n\\usepackage{mathtools}\n\\input{macros}\n% \n\\begin{document}\n\n\\title{CO663: Language $\\lggeT$ --- summary\\footnote{Adapted from:\n    Harper, Robert. Practical foundations for programming\n    languages. Cambridge University Press, 2016.}}\n\n\\date{\\vspace{-15ex}} \n\\maketitle\n\n% \\pagenumbering{gobble}\n\n\\newcommand{\\lazyeval}[1]{\\framebox{\\parbox[c][#1]{\\textwidth}{\n      \\color{white}{h}% \n    }}}\n\n\n\\section{Syntax}\n\n\\[\n\\begin{array}{llclll}\n  \\TYPES & \\tau & \\Coloneqq & \\tynat  & \\tynat & \\text{natural}\n  \\\\\n         &&& \\tyarr{\\tau_1}{\\tau_2}  & \\tau_1 \\rightarrow \\tau_2 & \\text{function}\n  \\\\\n  \\\\ \n  \\EXPS & e & \\Coloneqq & \\var{x} & \\var{x} & \\text{variable}\n  \\\\\n         &&& \\ez &  \\ez & \\text{zero}\n  \\\\\n         &&& \\esucc{e} &  \\esucc{e} & \\text{successor}\n  \\\\\n         &&& \\elam{\\tau}{x}{e} & \\clam{\\tau}{x}{e} & \\text{abstraction}\n  \\\\\n         &&& \\eapp{e_1}{e_2} & \\capp{e_1}{e_2} & \\text{application}\n  \\\\\n         &&& \\eiter{e_0}{x}{e_1}{e} & \\multicolumn{2}{c}{\\citer{e_0}{x}{e_1}{e}}                                      \n\\end{array}\n\\]\n\n\n\\section{Typing rules / Statics}\n\n\\[\n\\tyrule\n{\\ttyrulename{var}}\n{\\,}\n{\\tyjudge{\\Gamma_1, \\var{x}: \\tau, \\Gamma_2}{\\var{x}}{\\tau} }\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\ttyrulename{nat}}\n{\\,}\n{\\tyjudge{\\Gamma}{\\ez}{\\tynat}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\ttyrulename{num}}\n{\\tyjudge{\\Gamma}{{e}}{\\tynat{}}}\n{\\tyjudge{\\Gamma}{\\esucc{e}}{\\tynat{}}}\n\\]\n% \n\\[\n\\tyrule\n{\\ttyrulename{lam}}\n{\\tyjudge{\\Gamma, \\var{x} : \\tau_1}{e}{\\tau_2}}\n{\\tyjudge{\\Gamma}{\\elam{\\tau_1}{x}{e}}{\\tyarr{\\tau_1}{\\tau_2}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\ttyrulename{ap}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\tyarr{\\tau_1}{\\tau}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\tau_1}\n}\n{\n  \\tyjudge{\\Gamma}{\\eapp{e_1}{e_2}}{\\tau}\n}\n\\]\n\\[\n\\tyrule\n{\\ttyrulename{ite}}\n{\n  \\tyjudge{\\Gamma}{e}{\\tynat}\n  \\and\n  \\tyjudge{\\Gamma}{e_0}{\\tau}\n  \\and\n  \\tyjudge{\\Gamma, \\var{x} : \\tau}{e_1}{\\tau}\n}\n{\n  \\tyjudge{\\Gamma}{\\eiter{e_0}{x}{e_1}{e}}{\\tau}\n}\n\\]\n\n\n\\section{Semantics / Dynamics}\n\n% \\subsection{Values}\n\\[\n\\semrule{\\tsemrulename{z}}\n{\\,}\n{\\valjudge{\\ez}}\n\\qquad \\qquad  \\qquad\n\\semrule{\\tsemrulename{s}}\n{\\valjudge{e}}\n{\\valjudge{\\esucc{e}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\tsemrulename{lam}}\n{\\,}\n{\\valjudge{\\elam{\\tau}{x}{e}}}\n\\]\n\n\n\n% \\subsection{Reductions}\n\n\\[\n\\semrule\n{\\tsemrulename{ss}}\n{\\jtrans{e}{e'}}\n{\n  \\jtrans{\\esucc{e}}{\\esucc{e'}}\n}\n\\qquad \\qquad  \\qquad\n% \n\\semrule\n{\\tsemrulename{ap}}\n{\\jtrans{e_1}{e'_1}}\n{\n  \\jtrans{\\eapp{e_1}{e_2}}{\\eapp{e'_1}{e_2}}\n}\n\\]\n% \n% \n\\[\n\\semrule\n{\\tsemrulename{lan}}\n{\\valjudge{e_1} \n  \\and \\jtrans{e_2}{e'_2}\n}\n{\\jtrans{\\eapp{e_1}{e_2}}{\\eapp{e_1}{e'_2}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\tsemrulename{lav}}\n{\\valjudge{e_2}}\n{\\jtrans{\\eapp{\\elam{\\tau}{x}{e_1}}{e_2}}{\\subs{e_1}{e_2}{x}}}\n\\]\n% \n\\[\n\\semrule\n{\\tsemrulename{rin}}\n{\\jtrans{e}{e'}}\n{\n  \\jtrans{\\eiter{e_0}{x}{e_1}{e}}{\\eiter{e_0}{x}{e_1}{e'}}\n}\n\\qquad 
\\qquad  \\qquad\n% \n\\semrule\n{\\tsemrulename{r0}}\n{\\,}\n{\n  \\jtrans{\\eiter{e_0}{x}{e_1}{\\ez}}{e_0}\n}\n% \n\\]\n\\[\n% \n\\semrule\n{\\tsemrulename{rs}} \n{\\valjudge{{e}}}\n{\n  \\jtrans{\\eiter{e_0}{x}{e_1}{\\esucc{e}}}{\\subs{e_1}{{\\eiter{e_0}{x}{e_1}{{e}}}}{x}}\n}\n\\]\n\n\n\\section*{\\LaTeX\\ template}\nThe \\LaTeX\\ sources of this document are available in:\n\\url{https://github.com/ukc-co663/pl-design-latex-template}.\n\n\\end{document}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: t\n%%% End:\n", "meta": {"hexsha": "db87a68122a9ee5b17cba54a1e30de64bbb2bf93", "size": 3600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lgge-t.tex", "max_stars_repo_name": "ukc-co663/pl-design-latex-template", "max_stars_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lgge-t.tex", "max_issues_repo_name": "ukc-co663/pl-design-latex-template", "max_issues_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lgge-t.tex", "max_forks_repo_name": "ukc-co663/pl-design-latex-template", "max_forks_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.9104477612, "max_line_length": 118, "alphanum_fraction": 0.5925, "num_tokens": 1576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389986757757, "lm_q2_score": 0.7772998663336158, "lm_q1q2_score": 0.5799737439369784}}
{"text": "\\section{Complex Numbers}\n\n\\subsection{Arithmetic Operations}\n\n\\subsubsection{Exercise 3}\nThis exercise is much easier if we represent the complex numbers in polar form (which has not been introduced\nyet). We see that\n\\begin{align*}\n        \\frac{-1 \\pm i \\sqrt{3}}{2} &= e^{i \\frac{2\\pi}{3}}, \\: e^{i \\frac{4 \\pi}{3}} \\\\\n        \\frac{\\pm 1 \\pm i \\sqrt{3}}{2} &= e^{i \\frac{2\\pi}{3}}, \\: e^{i \\frac{4 \\pi}{3}}, \\:  \n        e^{i \\frac{\\pi}{3}}, \\: e^{i \\frac{5 \\pi}{3}} \\\\\n\\end{align*}\nwhich gives us the desired equality since $e^{i 2\\pi} = 1$.\n\n\\subsection{Square Roots}\nOnce again, all of these computational exercises are made much easier with polar form. \nThey seem more tedious than instructive.\n\n\\subsection{Justification}\nGoing to come back to this after finishing chapter 3 of Birkhoff/MacLane (which I've been neglecting...).\n\n\\subsection{Conjugation, Absolute Value}\n\n\\subsubsection{Exercise 3}\nWe can manipulate the equality to get\n\\begin{align*}\n        \\norm{\\frac{a - b}{1 - \\bar{a}b}} &= 1 \\\\\n        \\norm{a - b}^2 &= \\norm{1 - \\bar{a}b}^2 \\\\\n        \\norm{a}^2 + \\norm{b}^2 - 2\\Re(a\\bar{b}) &= 1 + \\norm{a}^2 \\norm{b}^2 - 2\\Re(a\\bar{b}) \n\\end{align*}\nThus we see that equality holds if either $\\norm{a} = 1$ or $\\norm{b} = 1$, excepting the case where \n$a = b = 1$ as that makes the denominator in the equality 0.\n\n\\subsubsection{Exercise 4}\nLet $z = \\alpha + \\beta i$. Then we have\n\\begin{align*}\n        \\big((a + b) \\alpha + c\\big) + (a - b)\\beta i &= 0 \\\\\n        \\implies (a + b) \\alpha + c &= 0 \\\\\n        \\implies (a - b) \\beta = 0\n\\end{align*}\nIf $a - b = 0$, $\\beta$ could be anything, so we must have $a - b \\neq 0$ for the solution to be unique.\nSimilarly, if $a + b = 0$ then $\\alpha$ can either be anything or there is no solution for $\\alpha$ \n(if $c \\neq 0$). Thus, the two conditions we need are  $a + b \\neq 0$ and $a - b \\neq 0$.\n\n\\subsubsection{Exercise 5}\nWe can write $\\abs{\\sum_{i = 1}^n a_i b_i}^2$ as $(\\sum_{i = 1}^n a_i b_i) (\\sum_{j = 1}^n \\overline{a_j b_j})$\nto see that it can be expanded as a sum whose terms consist of $\\abs{a_i}^2 \\abs{b_i}^2$ and \n$a_i b_i \\overline{a_j b_j}$ over all $1 \\leq i, j \\leq n$. The $a_i b_i \\overline{a_j b_j}$ terms can be\npaired with the $a_j b_j \\overline{a_i b_i}$ terms to get that\n\\begin{align*}\n        \\abs{\\sum_{i = 1}^n a_i b_i}^2 &= \\sum_{i = 1}^n \\abs{a_i}^2 \\abs{b_i}^2 + \\sum_{1 \\leq i < j \\leq n} 2 \\Re a_i b_i \\overline{a_j b_j} \\\\\n                                       &= \\sum_{i = 1}^n \\abs{a_i}^2 \\sum_{i = 1}^n \\abs{b_i}^2 - \\sum_{1 \\leq i < j \\leq n} \\abs{a_i \\bar{b}_j - a_j \\bar{b}_i}^2\n\\end{align*}\n\n\\subsection{Inequalities}\n\n\\subsubsection{Exercise 3}\nWe apply the triangle inequality and then Cauchy-Schwarz to get\n\\begin{align*}\n        \\abs{\\sum_{i} \\lambda_i a_i} \\leq \\sum_{i} \\abs{\\lambda_i a_i} \\leq \\sum_{i} \\abs{\\lambda_i} \\abs{a_i} < \\sum_{i} \\abs{\\lambda_i} = 1\n\\end{align*}\nWhere the final strict inequality follows from $\\abs{a_i} < 1$.\n\n\\subsubsection{Exercise 4}\nWe have that $\\abs{z - a} = \\abs{z} - \\abs{a}$ when $\\frac{z}{a} \\geq 1$ (applying our criterion for equality\nto $\\abs{(z - a) + a}$). Thus, if $\\abs{a} \\leq \\abs{c}$, we can choose $z$ such that $\\frac{z}{a} \\geq 1$ \nand $\\abs{z} = \\abs{c}$, so a solution to the equation exists. 
Now suppose a solution exists to \n $\\abs{z - a} + \\abs{z + a} = 2\\abs{c}$. Then by applying the triangle inequality, we have \n \\begin{align*}\n         2\\abs{c} \\geq \\abs{z - a} + \\abs{z + a} \\geq \\abs{z - a - z - a} = 2\\abs{a}\n \\end{align*}\n so $\\abs{a} \\leq \\abs{c}$.\n\n\\subsection{Geometric Addition and Multiplication}\n\n\\subsubsection{Exercise 2}\nWe will only show one direction, since the other direction just reverses the steps. Suppose that \n$a_1, a_2, a_3$ indicate the vertices of an equilateral triangle. Then the angles between the edges\n$a_1 - a_3, a_3 - a_2, a_2 - a_1$ must be equal. Using the fact that $\\text{arg} \\frac{a_2}{a_1} = \\text{arg} a_2 - \\text{arg} a_1$, we get\n\\begin{align*}\n        \\frac{a_1 - a_3}{a_3 - a_2} &= \\frac{a_3 - a_2}{a_2 - a_1} \\\\\n        a_1 a_2 - a_1^2 - a_2 a_3 + a_3 a_1 &= a_3^2 - 2 a_2 a_3 + a_2^2 \\\\\n        a_1 a_2 + a_2 a_3 + a_3 a_1 &= a_1^2 + a_2^2 + a_3^2\n\\end{align*}\n\n\\subsubsection{Exercise 4}\nLet the center of the circle be $c$. Then we can proceed similarly to Exercise 2 by noting that since\n$\\abs{c - a_1} = \\abs{c - a_2} = \\abs{c - a_3}$, we have that the angle between $c - a_1$ and $a_1 - a_3$ \nis the same as the angle between $a_1 - a_3$ and $a_3 - c$. We can then solve for $c$ and use $c$ to compute\nthe radius. The result, though, is messy and not fun to typeset.\n\n\\subsection{The Binomial Equation}\n\n\\subsubsection{Exercise 4}\nLet $s = 1 + w^h + ... + w^{(n - 1)h}$. Then we have that $w^h s = w^h + w^{2h} + ... + w^{nh}$, which implies\nthat $w^h s = s$ since $w^{nh} = 1$. If $h$ is not a multiple of $n$, then $w^h \\neq 1$, so $s = 0$.\n\n\\subsubsection{Exercise 5}\nThe approach is the same as Exercise 4. Letting $s$ denote the sum, we multiply by $-w^h$ to get that\n$-w^h s = (-1)^n - w^h + ... + (-1)^{n - 1} w^{(n - 1)h}$. Thus, $s = 0$ if $n$ is even (even if $w^h = -1$).\nOtherwise, we solve $-w^h s = s - 2$ to get that $s = \\frac{2}{1 + w^h}$ ($w^h \\neq -1$ since $n$ is odd).\n\n\\subsection{Analytic Geometry}\n\n\\subsubsection{Exercise 1}\nWe refer back to Exercise 4 from the section ``Conjugation, Absolute Value''. If $z = \\alpha + \\beta i$, then\n\\begin{align*}\n        (a + b) \\alpha + c &= 0 \\implies \\alpha = -\\frac{c}{a + b}\\\\\n        (a - b) \\beta &= 0\n\\end{align*}\nIf $a - b = 0$, then $z = -\\frac{c}{a + b} + ti$ for all $t \\in \\mathbb{R}$, which is a line.\n\n\\subsubsection{Exercise 2}\nWe have the following:\n\\begin{itemize}\n        \\item Ellipse: Given foci $f_1, f_2$, we can write the equation as $\\abs{z - f_1} + \\abs{z - f_2} = 2a$.\n        \\item Hyperbola: Given foci $f_1, f_2$, we can write the equation as\n                $\\abs{\\abs{z - f_1} - \\abs{z - f_2}} = 2a$.\n        \\item Parabola is a little trickier, since now we're concerned with distance to a point (focus) as\n                well as the distance to a line (directrix). One can compute the minimum distance to the\n                directrix using a number of methods (calculus, complex inner product, etc.). Out of laziness\n                I'm just going to call the distance to the directrix $\\abs{z - d}$. Then we have that the\n                equation for a parabola looks like $\\abs{z - f} = \\abs{z - d}$.\n\\end{itemize}\n\n\\subsection{The Spherical Representation}\n\n\\subsubsection{Exercise 1}\nIf the stereographic projections of $z$ and $z'$ are diametrically opposite, the distance between the \nprojections is 2. 
Thus\n\\begin{align*}\n        \\frac{\\abs{z - z'}^2}{(1 + \\abs{z}^2)(1 + \\abs{z'}^2)} &= 1 \\\\\n        \\abs{z}^2 - z\\bar{z}' - \\bar{z}z' + \\abs{z'}^2 &= 1 + \\abs{z}^2 + \\abs{z'}^2 + \\abs{z}^2 \\abs{z'}^2 \\\\\n        z\\bar{z}' + \\bar{z}z' + z\\bar{z}' \\bar{z}z' &= -1 \\\\\n        (z\\bar{z}' + 1)(\\bar{z}z' + 1) &= 0 \\implies z\\bar{z}' = -1\n\\end{align*}\nThe other direction follows from reversing the steps above.\n\n\\subsubsection{Exercise 2}\nUsing the fact that the diameter of the sphere is 2, we can solve for the inscribed cube's side length by \napplying the Pythagorean theorem to the triangle whose hypotenuse joins the bottom left and top right \nvertices of the cube. This gives us that $2^2 - 2s^2 = s^2 \\implies s = \\sqrt{\\frac{4}{3}}$. Now if we \nlabel the bottom left corner of the cube as $(x_1, x_2, x_3)$ and the top right corner of the cube as\n$(x_1', x_2', x_3')$, we can use the fact that $x_i = -x_i'$ to get that $x_1 = x_2 = x_3 = \\sqrt{\\frac{1}{3}}$. \nThus, the vertices of the cube are $(\\pm \\sqrt{\\frac{1}{3}}, \\pm \\sqrt{\\frac{1}{3}}, \\pm \\sqrt{\\frac{1}{3}})$,\nfrom which it is then straightforward to compute the stereographic projections.\n", "meta": {"hexsha": "22f12be759b8b1a2c614f94ab48a99a83bcf5572", "size": 7793, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Complex_Analysis_Ahlfors/chapter_1.tex", "max_stars_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_stars_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-19T07:33:25.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-19T07:33:25.000Z", "max_issues_repo_path": "Complex_Analysis_Ahlfors/chapter_1.tex", "max_issues_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_issues_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Complex_Analysis_Ahlfors/chapter_1.tex", "max_forks_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_forks_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.0136054422, "max_line_length": 162, "alphanum_fraction": 0.6164506608, "num_tokens": 2947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5799737433983763}}
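A quick numerical check of Exercise 1 of ``The Spherical Representation'' above (an editorial addition; the point $z$ is chosen arbitrarily for illustration): take $z = 1 + i$, so that the claimed antipode is $z' = -1/\\bar{z} = -\\frac{1 + i}{2}$. Then
\\begin{align*}
        z\\bar{z}' = (1 + i)\\left(-\\frac{1 - i}{2}\\right) = -\\frac{(1 + i)(1 - i)}{2} = -\\frac{2}{2} = -1
\\end{align*}
so the stereographic projections of $z$ and $z'$ are indeed diametrically opposite.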
{"text": "% Copyright 2011-2015 David Hadka.  All Rights Reserved.\r\n%\r\n% This file is part of the MOEA Framework User Manual.\r\n%\r\n% Permission is granted to copy, distribute and/or modify this document under\r\n% the terms of the GNU Free Documentation License, Version 1.3 or any later\r\n% version published by the Free Software Foundation; with the Invariant Section\r\n% being the section entitled \"Preface\", no Front-Cover Texts, and no Back-Cover\r\n% Texts.  A copy of the license is included in the section entitled \"GNU Free\r\n% Documentation License\".\r\n\r\n\\chapter{Representing Decision Variables}\r\n\\label{chpt:representations}\r\n\r\nIn \\chptref{chpt:problems} we saw various ways to define new problems using real-valued (floating-point) decision variables.  In addition to floating-point values, the MOEA Framework allows problems to be encoded using integers, bit strings, permutations, programs (expression trees), and grammars.  This chapter details the use of each of these decision variables and their supported variation operators.  This chapter also details the use of the \\java{EncodingUtils} class, which provides many helper methods for creating, reading and modifying different types of decision variables.\r\n\r\n\\section{Floating-Point Values}\r\nFloating-point values, also known as real-valued decision variables, provide a natural way to represent numeric values.  Floating-point decision variables are represented using \\java{RealVariable} decision variables.  When creating a new real-valued decision variable, one must specify the lower and upper bounds that the value can represent.\r\n\r\nTo create real-valued decision variables, use the \\java{EncodingUtils.newReal(lowerBound, upperBound)} method.  Note how the lower and upper bounds must be defined.  The example code below demonstrates creating a solution with three different real-valued decision variables.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(3, 2);\r\n    solution.setVariable(0, EncodingUtils.newReal(-1.0, 1.0));\r\n    solution.setVariable(1, EncodingUtils.newReal(0, Math.PI));\r\n    solution.setVariable(2, EncodingUtils.newReal(10.0, 100.0));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nInside the \\java{evaluate} method, we can extract the decision variable values from the solution using the \\java{EncodingUtils.getReal(...)} method.  Continuing the previous code example, we extract the values of the three decision variables below.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    double x = EncodingUtils.getReal(solution.getVariable(0));\r\n    double y = EncodingUtils.getReal(solution.getVariable(1));\r\n    double z = EncodingUtils.getReal(solution.getVariable(2));\r\n    \r\n    // TODO: evaluate the solution given the values of x, y,\r\n    // and z\r\n}\r\n\\end{lstlisting}\r\n\r\nAlternatively, if the solution contains exclusively floating-point values, then we can read out all of the variables into an array using a single call.  
Note that we pass the entire solution to the \\java{EncodingUtils.getReal(...)} method below.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    double[] x = EncodingUtils.getReal(solution);\r\n        \r\n    // TODO: evaluate the solution given the values of x[0],\r\n    // x[1], and x[2]\r\n}\r\n\\end{lstlisting}\r\n\r\nThe \\java{EncodingUtils} class handles all the type checking and casting needed to ensure variables are read properly.  Attempting to read or write a decision variable that is not the correct type will result in an \\java{IllegalArgumentException}.  If you see this exception, check all your decision variables to ensure they are the types you expect.   \r\n\r\n\\section{Integers}\r\n\r\nInteger-valued decision variables can be constructed in a similar way as floating-point values.  For instance, below we construct the solution using calls to \\java{EncodingUtils.newInt(lowerBound, upperBound)}.  As we saw with floating-point values, we must specify the lower and upper bounds of the decision variables.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(3, 2);\r\n    solution.setVariable(0, EncodingUtils.newInt(-1, 1));\r\n    solution.setVariable(1, EncodingUtils.newInt(0, 100));\r\n    solution.setVariable(2, EncodingUtils.newInt(-10, 10));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nSimilarly, the values stored in the decision variables can be read using the \\java{EncodingUtils.getInt(...)} method, as demonstrated below.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    int x = EncodingUtils.getInt(solution.getVariable(0));\r\n    int y = EncodingUtils.getInt(solution.getVariable(1));\r\n    int z = EncodingUtils.getInt(solution.getVariable(2));\r\n    \r\n    // TODO: evaluate the solution given the values of x, y,\r\n    // and z\r\n}\r\n\\end{lstlisting}\r\n\r\nAnd as we saw with floating-point values, if the solution is exclusively represented by integer-valued decision variables, we can likewise extract all values with a single call to \\java{EncodingUtils.getInt(...)}.  Note again that this method is passed the entire solution instead of the individual decision variables as before.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    int[] x = EncodingUtils.getInt(solution);\r\n        \r\n    // TODO: evaluate the solution given the values of x[0],\r\n    // x[1], and x[2]\r\n}\r\n\\end{lstlisting}\r\n\r\nThe integer representation can be used to represent any other kind of discrete value.  For example, suppose we wanted to represent all even numbers between $0$ and $100$.  We can accomplish this using \\java{EncodingUtils.newInt(0, 50)} and reading the value with \\java{2*EncodingUtils.getInt(variable)}.  Integers are also useful for selecting a single item from a group.  In this scenario, the integer-valued decision variable represents the index of the item in an array.\r\n\r\n\\begin{tip}\r\nInternally, integers are stored as floating-point values.  This allows the same variation operators to be applied to both real-valued and integer-valued decision variables.  When working with integers, always use the \\java{EncodingUtils.newInt(...)} and \\java{EncodingUtils.getInt(...)} methods.  
This will ensure the internal floating-point representation is correctly converted into an integer.\r\n\\end{tip}\r\n\r\n\\section{Boolean Values}\r\nBoolean values represent simple binary choices, such as ``yes / no'' or ``on / off''.  Use the \\java{EncodingUtils.newBoolean()} method to create boolean decision variables, as shown below.  Note also how we can combine multiple decision variable types in a single solution.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(2, 2);\r\n    solution.setVariable(0, EncodingUtils.newBoolean());\r\n    solution.setVariable(1, EncodingUtils.newInt(0, 100));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nBoolean values can be read using \\java{EncodingUtils.getBoolean(...)}, as demonstrated below.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    boolean b = EncodingUtils.getBoolean(\r\n        solution.getVariable(0));\r\n    int x = EncodingUtils.getInt(solution.getVariable(1));\r\n    \r\n    // TODO: evaluate the solution given the values of b and x\r\n}\r\n\\end{lstlisting}\r\n\r\nThe boolean decision variable works well when the problem has a single choice.  If the problem involves more than one choice, it is more convenient and efficient to use bit strings (an array of booleans) instead.  Bit strings are introduced in the following section.\r\n\r\n\\section{Bit Strings}\r\nMany problems involve making choices.  For example, the famous knapsack problem involves choosing which items to place in a knapsack to maximize the value of the items carried without exceeding the weight capacity of the knapsack.  If $N$ items are available, we can represent the decision to include each item using a bit string with $N$ bits.  Each bit in the string corresponds to an item, and is set to \\java{1} if the item is included and \\java{0} if the item is excluded.  For instance, the bit string \\java{00110} would place items 3 and 4 inside the knapsack, excluding the rest.\r\n\r\nThe MOEA Framework supports fixed-length bit strings.  The example code below produces a solution with a single decision variable representing a bit string with length $100$.  Again, note how the entire bit string is stored within a single decision variable.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(1, 2);\r\n    solution.setVariable(0, EncodingUtils.newBinary(100));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nWhen evaluating the solution, the bit string can be read into an array of \\java{boolean} values, as demonstrated below.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    boolean[] values = EncodingUtils.getBinary(\r\n        solution.getVariable(0));\r\n\r\n    //TODO: evaluate the solution given the boolean values\r\n}\r\n\\end{lstlisting}\r\n\r\n\\section{Permutations}\r\nPermutation decision variables appear in many combinatorial and job scheduling problems.  In the famous traveling salesman problem (TSP), a salesman must travel to every city with the condition that they visit each city exactly once.  The order in which the salesman visits each city is conveniently represented as a permutation.  
For example, the permutation \\java{0,3,1,2} states that the salesman visits the first city first (0 represents the first city), travels to the fourth city (3), then travels to the second city (1), and finally arrives at the third city (2).\r\n\r\nThe code example below demonstrates the creation of a permutation of 25 elements.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(1, 2);\r\n    solution.setVariable(0, EncodingUtils.newPermutation(25));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nThe permutation is read out into an array of \\java{int} values.  If the permutation is over $N$ elements, the array length will be $N$ and the values stored will range from $0$ to $N-1$.  Each distinct value will appear only once in the array (by definition of a permutation).\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    int[] permutation = EncodingUtils.getPermutation(\r\n        solution.getVariable(0));\r\n        \r\n    //TODO: evaluate the solution given the permutation\r\n}\r\n\\end{lstlisting}\r\n\r\n\\section{Programs (Expression Trees)}\r\nThe first step towards evolving programs using evolutionary algorithms involves defining the rules for the program (i.e., the syntax and semantics).  The MOEA Framework comes enabled with over $45$ pre-defined program elements for defining constants, variables, arithmetic operators, control structures, functions, etc.  When defining the rules, two important properties should be kept in mind: \\emph{closure} and \\emph{sufficiency}.\r\n\r\nThe closure property requires all program elements to be able to accept as arguments any value and data type that could possibly be returned by any other function or terminal.  All programs generated or evolved by the MOEA Framework are strongly typed.  No program produced by the MOEA Framework will pass an argument to a function that is an incorrect type.  Furthermore, all functions guard against invalid inputs.  For example, the \\java{log} of a negative number is undefined.  Rather than causing an error, the \\java{log} method will guard itself and return \\java{0.0}.  This allows the rest of the calculation to continue unabated.  With these two behaviors built into the MOEA Framework, the closure property is guaranteed.\r\n\r\nThe sufficiency property states that the rule set must contain all the functions and terminals necessary to produce a solution to the problem.  Ensuring this property holds is more challenging as it will depend on the problem domain.  For instance, the operators \\java{And}, \\java{Or} and \\java{Not} are sufficient to produce all boolean expressions.  It may not be so obvious in other problem domains which program elements are required to ensure sufficiency.  Additionally, it is often helpful to restrict the rule set to those program elements that are sufficient, thus reducing the search space for the evolutionary algorithm.\r\n\r\nBelow, we construct a rule set using several arithmetic operators.  One terminal is included, the variable \\java{x}.  We will assign this variable later when evaluating the program.  The last setting required is the return type of the program.  
In this case, the program will return a number.\r\n\\begin{lstlisting}[language=Java]\r\n// first, establish the rules for the program\r\nRules rules = new Rules();\r\nrules.add(new Add());\r\nrules.add(new Multiply());\r\nrules.add(new Subtract());\r\nrules.add(new Divide());\r\nrules.add(new Sin());\r\nrules.add(new Cos());\r\nrules.add(new Exp());\r\nrules.add(new Log());\r\nrules.add(new Get(Number.class, \"x\"));\r\nrules.setReturnType(Number.class);\r\n\\end{lstlisting}\r\n\r\nThe second step is constructing the solution used by the evolutionary algorithm.  Here, we define one decision variable that is a program following the rule set we previously defined.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(1, 1);\r\n    solution.setVariable(0, new Program(rules));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nLastly, we evaluate the program.  The program executes inside an environment.  The environment holds all of the variables and other identifiers that the program can access throughout its execution.  Since we previously defined the variable \\java{x} (with the \\java{Get} node), we want to initialize the value of \\java{x} in the environment.  Once the environment is initialized, we evaluate the program.  Since we set the return type to be a number in the rule set, we cast the output from the program's evaluation to a number.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    Program program = (Program)solution.getVariable(0);\r\n\r\n    // initialize the variables used by the program\r\n    Environment environment = new Environment();\r\n    environment.set(\"x\", 15);\r\n    \r\n    // evaluate the program\r\n    double result = ((Number)program.evaluate(\r\n        environment)).doubleValue();\r\n        \r\n    // TODO: use the result to set the objective value\r\n}\r\n\\end{lstlisting}\r\n\r\n\\section{Grammars}\r\nGrammars are very similar to programs, but differ slightly in their definition and how the derived programs are generated.  Whereas the program required us to define a set of program elements (the rules) used for constructing the program, the grammar defines these rules using a context free grammar.  The text below shows an example grammar.  The format of this grammar is Backus-Naur form.\r\n\\begin{lstlisting}[language=Plaintext]\r\n<expr> ::= <func> | (<expr> <op> <expr>) | <value>\r\n<func> ::= <func-name> ( <expr> )\r\n<func-name> ::= Math.sin | Math.cos | Math.exp | Math.log\r\n<op> ::= + | * | - | /\r\n<value> ::= x\r\n\\end{lstlisting}\r\n\r\nYou should note that this grammar defines the same functions and terminals as the example in the previous section.  This also demonstrates an important difference between programs and grammars in the MOEA Framework.  The grammar explicitly defines where each program element can appear.  This is in contrast to programs, whose structure is determined by the type system.  As a result, grammars require more setup time but offer more control over programs.  We will now demonstrate the use of grammars in the MOEA Framework.\r\n\r\nFirst, we must parse the context free grammar.  In the example below, the grammar is read from a file.  
It is also possible to pass a string containing the grammar using a \\java{StringReader} in place of the \\java{FileReader}.\r\n\\begin{lstlisting}[language=Java]\r\n    ContextFreeGrammar grammar = Parser.load(\r\n        new FileReader(\"grammar.bnf\"));\r\n\\end{lstlisting}\r\n\r\nSecond, we construct the grammar variable that will be evolved by the evolutionary algorithm.  Note how the \\java{Grammar} object is passed an integer.  Grammatical evolution uses a novel representation of the decision variable.  Internally, it uses an integer array called a \\emph{codon}.  The codon does not define the program itself, but provides instructions for deriving the program using the grammar.  The integer argument to \\java{Grammar} specifies the length of the codon.  We defer a detailed explanation of this derivation to the grammatical evolution literature.\r\n\\begin{lstlisting}[language=Java]\r\npublic Solution newSolution() {\r\n    Solution solution = new Solution(1, 1);\r\n    solution.setVariable(0, new Grammar(10));\r\n    return solution;\r\n}\r\n\\end{lstlisting}\r\n\r\nFinally, we can evaluate a solution by first extracting the codon and deriving the program.  Unlike programs that can be evaluated directly, the grammar produces a string (the derivation).  While it is common for grammars to produce program code, this is not a requirement.  This is the second major difference between grammars and programs in the MOEA Framework --- the behavior of programs is defined explicitly, whereas the behavior of grammars depends on how the grammar is interpreted.  In this case, we are producing program code and will need a scripting language to evaluate the program.  Using Java's Scripting API and having defined the grammar so that it produces a valid Groovy program, we can evaluate the derivation using the Groovy scripting language.  In the code below, we instantiate a \\java{ScriptEngine} for Groovy, initialize the variable \\java{x}, and evaluate the program.\r\n\\begin{lstlisting}[language=Java]\r\npublic void evaluate(Solution solution) {\r\n    int[] codon = ((Grammar)solution.getVariable(0)).toArray();\r\n    \r\n    // derive the program using the codon\r\n    String program = grammar.build(codon);\r\n\r\n    if (program == null) {\r\n        // if null, the codon did not produce a valid derivation\r\n        // TODO: penalize the objective value\r\n    } else {\r\n        ScriptEngineManager sem = new ScriptEngineManager();\r\n        ScriptEngine engine = sem.getEngineByName(\"groovy\");\r\n\r\n        // initialize the variables used by the program\r\n        Bindings b = new SimpleBindings();\r\n        b.put(\"x\", 15);\r\n\r\n        double result = ((Number)engine.eval(program, b))\r\n            .doubleValue();\r\n            \r\n        // TODO: use the result to set the objective value\r\n    }\r\n}\r\n\\end{lstlisting}\r\n\r\nIn order to compile and run this example, the Groovy scripting language must be installed.  To install Groovy, download the binary release from \\webpage{http://groovy.codehaus.org/}, extract the \\file{embeddable/groovy-all-2.0.1.jar} file into the \\folder{lib} folder in your MOEA Framework installation, and add this jar file onto the Java classpath when launching this example.\r\n\r\n\\section{Variation Operators}\r\nThe MOEA Framework contains a number of variation operators (initialization, mutation, crossover, etc.) tailored for each representation type.  
This section provides a brief overview of the available operators and details their use.\r\n\r\n\\subsection{Initialization}\r\nThe start of all evolutionary algorithms is the construction of an initial population.  This population is important since, in general, all future solutions are derived from members of this initial population.  Ensuring this initial population provides a diverse and representative set of individuals is paramount.\r\n\r\nThe floating-point, integer, binary, permutation and grammar variables are all initialized uniformly at random.  This ensures the values, bit strings, etc. are distributed uniformly throughout the search space.\r\n\r\nPrograms require a slightly more complicated initialization to ensure the initial population contains a diverse sampling of potential programs.  The MOEA Framework provides the ramped half-and-half initialization method, which is one of the most popular initialization techniques for programs.  We refer readers to the genetic programming literature for a detailed description of ramped half-and-half initialization.\r\n\r\n\\subsection{Variation (Mutation \\& Crossover)}\r\nAfter the initial population is generated, an evolutionary algorithm evolves the population using individual or combinations of variation operators.  Variation operators are classified into two forms: crossover and mutation.  Crossover involves combining two or more parents to create an offspring.  Mutation involves a single parent.  Mutations generally produce only small changes, but this is not mandatory.\r\n\r\n\\tblref{tbl:operators} lists the supported variation operators in the MOEA Framework.  The table highlights the decision variable representation and name of each variation operator.\r\n\r\n\\begin{table}\r\n  \\centering\r\n  \\caption{List of Supported Variation Operators}\r\n  \\label{tbl:operators}\r\n  \\begin{tabular}{lll}\r\n    \\hline\r\n    Representation & Operator & Abbr. \\\\\r\n    \\hline\r\n    Real / Integer & Simulated Binary Crossover & SBX \\\\\r\n    Real / Integer & Polynomial Mutation & PM \\\\\r\n    Real / Integer & Differential Evolution & DE \\\\\r\n    Real / Integer & Parent-Centric Crossover & PCX \\\\\r\n    Real / Integer & Simplex Crossover & SPX \\\\\r\n    Real / Integer & Unimodal Normal Distribution Crossover & UNDX \\\\\r\n    Real / Integer & Uniform Mutation & UM \\\\\r\n    Real / Integer & Adaptive Metropolis & AM \\\\\r\n    Binary & Half-Uniform Crossover & HUX \\\\\r\n    Binary & Bit Flip Mutation & BF \\\\\r\n    Permutation & Partially-Mapped Crossover & PMX \\\\\r\n    Permutation & Element Insertion & Insertion \\\\\r\n    Permutation & Element Swap & Swap \\\\\r\n    Grammar & Single-Point Crossover for Grammars & GX \\\\\r\n    Grammar & Uniform Mutation for Grammars & GM \\\\\r\n    Program & Branch (Subtree) Crossover & BX \\\\\r\n    Program & Point Mutation & PTM \\\\\r\n    Any & Single-Point Crossover & 1X \\\\\r\n    Any & Two-Point Crossover & 2X \\\\\r\n    Any & Uniform Crossover & UX \\\\\r\n    \\hline\r\n  \\end{tabular}\r\n\\end{table}\r\n\r\nThe abbreviation column lists the keyword used in the MOEA Framework for referencing each operator.  The example code below shows how we can specify the operator used by an algorithm and also any parameters for the operator.  In this example, we are using parent-centric crossover (PCX) and setting two of its parameters, \\java{pcx.eta} and \\java{pcx.zeta}.  
Refer to the \\java{OperatorFactory} class documentation for a complete list of the operators and their parameters.\r\n\\begin{lstlisting}[language=Java]\r\nNondominatedPopulation result = new Executor()\r\n    .withProblem(\"UF1\")\r\n    .withAlgorithm(\"NSGAII\")\r\n    .withProperty(\"operator\", \"pcx\")\r\n    .withProperty(\"pcx.eta\", 0.1)\r\n    .withProperty(\"pcx.zeta\", 0.1)\r\n    .withMaxEvaluations(10000)\r\n    .run();\r\n\\end{lstlisting}\r\n\r\nIt is also possible to combine certain variation operators using the \\java{+} symbol.  In the example below, we combine differential evolution with polynomial mutation (\\java{de+pm}), and we can set the parameters for both of these operators as shown.\r\n\\begin{lstlisting}[language=Java]\r\nNondominatedPopulation result = new Executor()\r\n    .withProblem(\"UF1\")\r\n    .withAlgorithm(\"NSGAII\")\r\n    .withProperty(\"operator\", \"de+pm\")\r\n    .withProperty(\"de.rate\", 0.5)\r\n    .withProperty(\"pm.rate\", 0.1)\r\n    .withMaxEvaluations(10000)\r\n    .run();\r\n\\end{lstlisting}\r\n\r\n\\begin{important}\r\nNot all combinations of operators are supported.  In general, combining a crossover operator with a mutation operator is ok.  If you request an invalid operator combination, you will see an exception with the message \\emph{invalid number of parents}.  See the \\java{CompoundVariation} class documentation for more details on what operators can be combined.\r\n\\end{important}\r\n\r\n\\section{Conclusion}\r\nThis chapter introduced the various decision variable representations supported by the MOEA Framework.  Look at these different representations as the building blocks for your problem.  If you can construct your problem using these building blocks, the problem will seamlessly integrate with the MOEA Framework.\r\n\r\nWe close this chapter by commenting that the MOEA Framework is not limited to these representations.  New representations are periodically introduced in the literature.  This fact influenced the design of the MOEA Framework to allow new representations and variation operators.  
Interested readers should stay tuned for future updates to this user manual that will discuss such extensions in detail.", "meta": {"hexsha": "071ef8e761ae4c0cc99b3712c26bcbce49a8c300", "size": 24540, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/representations.tex", "max_stars_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_stars_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_stars_repo_licenses": ["Intel"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/representations.tex", "max_issues_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_issues_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_issues_repo_licenses": ["Intel"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/representations.tex", "max_forks_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_forks_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_forks_repo_licenses": ["Intel"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.7204610951, "max_line_length": 895, "alphanum_fraction": 0.7538304808, "num_tokens": 5361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5799737433983763}}
{"text": "% $Id$\n%\n% Author: David Fournier\n% Copyright (c) 2008 Regents of the University of California\n%\n\n\\section{Simultaneous equations models}\n\nFor each $t$, $1\\le t\\le T$ let $y_t$ be an $n$-dimensional vector\nand $x_t$ be an $n$-dimensional vector. Let $B$ and~$\\Gamma$ \nbe~$(n \\times n)$  and~$(n \\times m)$ matrices, and suppose that the relationship\n$$By_t + \\Gamma x_t=u_t$$\nholds, where the $u_t$ are $n$-dimensional random vectors of disturbances.\nThe~$y_t$ are the endogenous variables in the system. The~$x_t$ are\npredetermined variables in the sense that they are independent of~$u_t$.\nNote that for autoregressive models, the~$x_t$ may contain values \nof~$y_j$ for~$j<i$. In general, not all of the coefficients of~$B$ \nand~$\\Gamma$ are estimable. Interesting cases have special structure that\nare determined by the particular parameterization of\nof~$B$, $\\Gamma$, and~$D$. In particular, it is generally assumed \nthat~$B_{ii}=1$ for~$1 \\le i \\le n$ and that~$B^{-1}$ exists.\n\n\n\\section{Full information maximum likelihood (\\textsc{fiml})} \n\nAssume that for each~$t$, $u_t$ has a multivariate normal distribution\nwith mean~$0$ and covariance matrix~$D$.\nThe log-likelihood function for~$B,\\Gamma$, and~$D$ is given\nby\n\\begin{equation}\n  L(B,\\Gamma,D)=T/2\\log\\big(|B|^2\\big)-T/2\\log\\big(|D|\\big)-1/2\\sum_{t=1}^T\n  [By_t + \\Gamma x_t]^\\prime D^{-1} [By_t + \\Gamma x_t] \\label{se:1}\n\\end{equation}\n\n\n\\section{Concentrating out $D$ for the \\textsc{fiml}}\n\nIf there are no constraints on $D$, the value of $D$ that maximizes\n(\\ref{se:1}) can be solved for in terms of the other parameters and\nobservations. This value, $\\hat D$, is given by\n\\begin{equation}\n  \\hat D =\n  1/T\\sum_{t=1}^T [By_t + \\Gamma x_t]^\\prime [By_t + \\Gamma x_t] \\label{se:2}\n\\end{equation}\nBy\nsubstituting this value into equation~(\\ref{se:1}), %xx $(\\number\\mychapno.1)$ \nit can be shown that\n$$1/2\\sum_{t=1}^T \n  [By_t + \\Gamma x_t]^\\prime \\hat D^{-1} [By_t + \\Gamma x_t] $$\nis a constant that can be ignored for the maximization, so equation~(\\ref{se:2}), this: \n\\begin{equation}\n  \\tilde L(B,\\Gamma)=T/2\\log \\big(|B|^2 \\big)-T/2\\log \\big(|\\hat D| \\big)\\label{se:3} \n\\end{equation}\nplus the \\textsc{fiml} estimates for~$B$ and~$\\Gamma$ can be found by maximizing\n$\\tilde L(B,\\Gamma)$.\n\nWhen there are constraints on the parameters of~$D$, then~$\\tilde D$ is\nno longer the maximum likelihood estimate for~$D$.  So,\n it is necessary to \nmaximize equation~(\\ref{se:1}), which is, in general, a numerically unstable problem. \nTo successfully carry out the optimization, it is necessary to obtain\nreasonable initial estimates for the parameters of~$B$ and~$\\Gamma$,\nand to use a good method for parameterizing~$D$.\nInitial estimates for~$B$ and~$\\Gamma$ can be obtained from ordinary\nleast squares (\\textsc{ols}), that is, finding the values \nof~$B$ and~$\\Gamma$ that minimize\n$$\\sum_{t=1}^T \\|y_t - B^{-1}\\Gamma x_t\\|^2$$\n\nTo parameterize~$D$, note that $\\hat D$ is an estimate of~$D$,\nso we can parameterize~$D$ by\n$$ D = A \\hat D A^\\prime$$\nwhere~$A$ is a lower triangular matrix. If~$U$ is the Choleski\ndecomposition of~$D$ and~$\\hat U$ is the Choleski decomposition\nof~$\\hat D$, then $A=\\hat U^{-1}U$.\nIt follows that~$A$ should be close to the identity matrix, which\nis a good initial estimate for~$A$. 
\n\n\n\\section{Evaluating the model's performance} \n\nTo evaluate the model's performance, simulated data were generated.\nThe form of the model~is\n%$$\\eqalign{\n \\begin{align}\n   \\nonumber y_{t1}+y_{t4}+y_{t5}-2+0.45y_{t-1,1}&=u_{t1}\\\\\n   \\nonumber 0.1y_{t1}+y_{t2}+2.0y_{t5}-1-0.6y_{t-1,1}+0.25y_{t-1,2}&=u_{t2}\\\\\n   \\nonumber 0.3y_{t1}-0.2y_{t2}+y_{t3}+1&=u_{t3}\\\\\n   \\nonumber 1.4y_{t2}-3.1y_{t3}+y_{t4}+1&=u_{t4}\\\\\n   y_{t3}+y_{t4}+y_{t5}&=u_{t5}\n \\end{align}\n%}\n%$$\nwith a covariance matrix\n$$D=\\begin{pmatrix}\n0.512&0.32&0.256&-1.28&0\\cr\n0.32&0.328&-0.16&-0.8&0\\cr\n0.256&-0.16&1.728&0.16&0.8\\cr\n-1.28&-0.8&0.16&4.8&0.8\\cr\n0&0&0.8&0.8&0.928\n\\end{pmatrix}\n$$\n\n\\bigskip\n$$B=\\begin{pmatrix} \n    1&0&0&1&1\\cr\n    0.1&1&0&0&2\\cr\n      0.3&-0.2&1&0&0\\cr\n      0&1.4&-3.1&1&0\\cr\n      0&0&1&1&1\n  \\end{pmatrix}\n$$\n\n\\bigskip\n$$\\Gamma=\\begin{pmatrix}-2&0.45&0\\cr\n         -1&-0.6&0.25\\cr\n          1&0&0\\cr\n          1&0&0\\cr\n          0&0&0\n   \\end{pmatrix}\n$$\n\n\\bigskip\nFor this model, $n=5$, $m=3$, and $x_t=(1,y_{t-1,1},y_{t-1,2})$.\n\nThe eigenvalues of~$D$ are $(0.00661 0.13583 0.4962 2.21162 5.44573)$.\nHaving a small eigenvalue tends to produce simulated data that are\ndifficult to analyze.\n\nForty time periods of data were generated by the simulator.\nThe simulated~$y$ values are:\n\\begin{lstlisting}\n 1.63252 3.00223 1.70246 1.34813 -1.12202\n -2.87857 -7.72402 -3.88482 -1.24196 4.71024\n 1.12975 -7.92719 -3.85188 -2.33007 4.2984\n 1.48112 -2.25692 -1.15585 -2.26166 3.01061\n -2.91887 -5.65015 -2.74198 -0.695815 4.51346\n 2.29715 0.524946 0.0268777 1.07624 -0.0898846\n 1.32854 6.17993 2.51613 1.67248 -2.39914\n 0.5661 -2.53219 -0.966376 0.00820516 1.76543\n 0.353591 -3.81146 -2.04431 -1.48574 2.67208\n -2.22887 -2.33436 -1.66284 0.646399 2.26448\n 2.29896 -6.42238 -2.41106 -1.70633 3.50028\n 0.145878 1.85161 0.646578 -0.380955 0.709761\n -0.779376 -7.60611 -3.79636 -1.63017 4.02658\n -0.107371 -5.61361 -2.35816 -1.77719 3.67762\n 0.662221 -5.78832 -2.19632 -2.03071 4.37961\n -0.570661 -7.42505 -3.28544 -2.94125 5.19422\n 0.0953742 -1.80617 -1.06915 -0.0320784 2.00018\n -0.406986 -4.96143 -2.8084 -0.948902 3.1811\n 1.07219 -7.92608 -2.95484 -3.17022 5.12702\n -0.495144 1.33611 -0.357291 -0.0260083 0.360653\n -0.637878 -8.76117 -3.81638 -1.77116 4.82796\n 1.59717 -3.18571 -1.72708 -1.93975 2.79462\n -1.13013 -2.20942 -1.30198 -0.603895 2.29486\n -1.0103 -7.90106 -3.65303 -1.07367 4.66283\n -1.02985 -3.00268 -1.63388 0.309992 2.97876\n 0.176882 -7.96282 -3.60299 -1.86289 4.86943\n 1.16904 -1.07952 -0.0969977 -0.74563 2.38399\n -0.636119 -2.84841 -1.43676 -0.38474 2.51142\n -1.72929 -5.39866 -2.51289 0.0978131 3.786\n 3.56302 3.79343 2.05613 1.43836 -1.2029\n 0.15806 -0.863882 -0.302119 -1.19212 1.38518\n 1.37323 -1.94413 -0.537631 -0.751294 1.42083\n -0.404075 -8.53817 -3.58618 -3.33976 5.69071\n 0.362091 -5.78568 -2.46635 -2.33359 4.21899\n -2.26158 -12.7075 -6.07426 -3.62455 8.49292\n 1.20438 -5.44629 -2.30249 -2.02905 3.82742\n 1.41463 -1.71734 -0.788698 -1.90306 2.2595\n 0.897156 1.28039 0.693579 0.318737 0.385857\n -0.0330384 -1.55642 -0.189474 0.312385 1.57168\n 1.5747 0.827181 1.26032 0.813312 0.270432\n\\end{lstlisting}\n\nFor the~$x$ values, the first time periods data $x_0 =(1,1,2)$\nwere supplied.  
The simulated~$x$ values are:\n\\begin{lstlisting}\n 1 1 2\n 1 1.63252 3.00223\n 1 -2.87857 -7.72402\n 1 1.12975 -7.92719\n 1 1.48112 -2.25692\n 1 -2.91887 -5.65015\n 1 2.29715 0.524946\n 1 1.32854 6.17993\n 1 0.5661 -2.53219\n 1 0.353591 -3.81146\n 1 -2.22887 -2.33436\n 1 2.29896 -6.42238\n 1 0.145878 1.85161\n 1 -0.779376 -7.60611\n 1 -0.107371 -5.61361\n 1 0.662221 -5.78832\n 1 -0.570661 -7.42505\n 1 0.0953742 -1.80617\n 1 -0.406986 -4.96143\n 1 1.07219 -7.92608\n 1 -0.495144 1.33611\n 1 -0.637878 -8.76117\n 1 1.59717 -3.18571\n 1 -1.13013 -2.20942\n 1 -1.0103 -7.90106\n 1 -1.02985 -3.00268\n 1 0.176882 -7.96282\n 1 1.16904 -1.07952\n 1 -0.636119 -2.84841\n 1 -1.72929 -5.39866\n 1 3.56302 3.79343\n 1 0.15806 -0.863882\n 1 1.37323 -1.94413\n 1 -0.404075 -8.53817\n 1 0.362091 -5.78568\n 1 -2.26158 -12.7075\n 1 1.20438 -5.44629\n 1 1.41463 -1.71734\n 1 0.897156 1.28039\n 1 -0.0330384 -1.55642\n\\end{lstlisting}\n\n\n\\section{Results of (\\textsc{fiml}) for unconstrained $D$} \n\nFor the estimation process, all the elements of the matrices\n$B$ and $\\Gamma$ with value $0$ were fixed at their correct value.\nThe \\textsc{fiml} estimates for unconstrained covariance matrix $D$ are given\nbelow.\n\n$$B=\\begin{pmatrix}\n1&0&0&0.364395&0.364395\\cr\n0.238411&1&0&0&1.37593\\cr\n0.042875&-0.330484&1&0&0\\cr\n0&1.93026&-4.90367&1&0\\cr\n0&0&1.06339&1.09851&1\n \\end{pmatrix}\n$$\n\n\\bigskip\n$$\\Gamma=\\begin{pmatrix}\n-0.917978&0.431377&0\\cr\n0.473915&-0.491422&0.19981\\cr\n0.441089&0&0\\cr\n-0.126701&0&0\\cr\n0&0&0\n\\end{pmatrix}\n$$\n\n\\bigskip\n$$D=\\begin{pmatrix}\n0.938236&0.843743&0.506127&-1.38383&0.314262\\cr\n0.843743&1.55844&0.732235&-0.591771&1.37195\\cr\n0.506127&0.732235&0.430091&-0.805866&0.62244\\cr\n-1.38383&-0.591771&-0.805866&4.18591&-0.0363513\\cr\n0.314262&1.37195&0.62244&-0.0363513&1.75939\n\\end{pmatrix}\n$$\n\n\n\\section{Results of (\\textsc{fiml}) for constrained $D$} \n\nSince $D_{51}=0$ and $D_{52}=0$, these values were not well \nestimated by the unconstrained \\textsc{fiml} procedure. Suppose that\nwe know that their values should be $0$ and that the\nvalue of $D_{55}\\le1.0$. We incorporate this knowledge\ninto the model by using penalty functions. \n\n$$B=\\begin{pmatrix}\n1&0&0&0.594218&0.594218\\cr\n-0.321482&1&0&0&1.97809\\cr\n0.0416175&-0.365572&1&0&0\\cr\n0&2.48468&-6.10763&1&0\\cr\n0&0&0.968057&1.02738&1\n\\end{pmatrix}\n$$\n\n\\bigskip\n$$\\Gamma=\\begin{pmatrix}\n-1.3418&0.448342&0\\cr\n-1.14717&-0.593754&0.183788\\cr\n0.372114&0&0\\cr\n-0.0640792&0&0\\cr\n0&0&0\n\\end{pmatrix}\n$$\n\n\\bigskip\n$$D=\\begin{pmatrix}\n0.593424&-0.325467&0.298836&-1.43231&0.00401209\\cr\n-0.325467&0.650371&-0.234747&0.441841&4.48456\\e{-06}\\cr\n0.298836&-0.234747&0.258147&-0.900105&0.255262\\cr\n-1.43231&0.441841&-0.900105&6.13619&0.180376\\cr\n0.00401209&4.48456\\e{-06}&0.255262&0.180376&1.00262\n\\end{pmatrix}\n$$\n\n\n\\section{Code for (\\textsc{fiml}) for constrained $D$} \n\nThis is the code in the \\textsc{tpl} file, split up by sections and\ncommented on. The \\texttt{DATA\\_SECTION} defines the data and some size\naspects of the model structure. Objects that are prefixed by \\texttt{init\\_}\nwill be read in from the data file.\n\\begin{lstlisting}\n// This version incorporates constraints via penalty functions.\n//This is sample code to determine the parameters of a\n// simultaneous equations model. 
The notation follows\n// that of Hamilton, Times Series Analysis, chapter 9.\n// the general form of the model is\n//\n//    By_t + Gamma x_t =u_t\n//\n//for t=1,...,T. The u_t have covariance matrix D.\n\nDATA_SECTION\n  init_int T // the number of observations\n  init_int dimy // dimension of the vector of \n                // endogenous variables\n  init_int dimx // dimension of the vector of \n                // predetermined variables \n  init_int num_Bpar // the number of parameters in\n                // the elements of B to be estimated \n  init_int num_Gpar // the number of parameters in\n                // the elements of Gamma to be estimated \n  init_matrix y(1,T,1,dimy) // the y_t\n  init_matrix x(1,T,1,dimx) // the x_t\n  int dimy1\n !! dimy1=dimy*(dimy+1)/2;  // size of symmetric matrix\n\\end{lstlisting}\n\nThe \\texttt{PARAMETER\\_SECTION} describes the model's parameters.\nObjects which are prefixed by \\texttt{init\\_} are the independent\nvariables of the model.  For example, \\texttt{Bpar} is used to\nparameterize the nonzero elements of \\texttt{B}.\n\\texttt{ch\\_Dpar} is used to parameterize the lower triangular matrix\nof the correction from \\texttt{emp\\_D} to the covariance matrix\n\\texttt{D}. The minimization is done in a number of phases.\nThe parameter \\texttt{kx} is used to have a parameter that becomes active\nin phase~4, so that the minimization will take place in four stages.\nThis parameter does not enter into the ``real'' part of the model.\n\n\\begin{lstlisting}\nPARAMETER_SECTION\n  init_vector Bpar(1,num_Bpar)\n  init_vector Gpar(1,num_Gpar)\n  init_vector ch_Dpar(1,dimy1,2)\n  matrix B(1,dimy,1,dimy)\n  matrix D(1,dimy,1,dimy)  // the covariance matrix for the \n                           // disturbances u_t\n  matrix emp_D(1,dimy,1,dimy)  // the covariance matrix for the \n  matrix Gamma(1,dimy,1,dimx)\n  matrix ch_D(1,dimy,1,dimy)\n  matrix z(1,T,1,dimy);\n  objective_function_value f\n  init_number kx(4);\n\\end{lstlisting}\n\nThe \\texttt{PROCEDURE\\_SECTION} is where the model's calculations\nare carried out. It is split up into a set of functions where the\nmodel-specific pieces of code (different code for different models)\nare located. Finally, the optimization for parameter estimation is\ncalculated. This depends on the phase of the optimization procedure.\nA \\texttt{switch} statement is used to vary the form of the objective\nfunction depending on the phase. The function \\texttt{current\\_phase()}\nreturns the number of the current phase of the optimization.\nThe function \\texttt{last\\_phase()}\nreturns~1 (``true'') if the current phase is the last phase of the\noptimization. 
Quadratic penalty functions are put on the model's parameters, with these regularization weights decreased in the final phase, while the weights on the constraint penalties in \\texttt{calculate\\_constraints} are increased from phase to phase.\nThis procedure helps to stabilize the optimization when several\nmodel parameters are highly correlated.\n\n\\begin{lstlisting}\nPROCEDURE_SECTION\n  fill_B();    // this will vary from model to model\n  fill_Gamma();  // this will vary from model to model\n\n  calculate_empirical_covariance_matrix();\n\n  fill_D();  // this will vary from model to model\n\n  calculate_constraints();  // this will vary from model to model\n\n  int sgn;\n  switch (current_phase())\n  {\n  case 1:\n    {\n      f+=0.1*norm2(Bpar);\n      f+=0.1*norm2(Gpar);\n      f+=0.1*norm2(ch_Dpar);\n      dvar_matrix Binv=inv(B);\n      for (int t=1;t<=T;t++)\n      {\n        dvar_vector z=y(t)+Binv*Gamma*x(t);\n        f+=z*z;\n      }\n      break;\n    }\n  default: \n    {\n      f+= -0.5*T*log(square(det(B)))\n       +0.5*T*ln_det(D,sgn);\n  \n      dvar_matrix Dinv=inv(D);\n      dvariable f1=0.0;\n      for (int t=1;t<=T;t++)\n      {\n        dvar_vector z=B*y(t)+Gamma*x(t);\n        f1+=z*(Dinv*z);\n      }\n      f+=0.5*f1;\n      if (!last_phase())\n      {\n        f+=0.1*norm2(Bpar);\n        f+=0.1*norm2(Gpar);\n        f+=0.1*norm2(ch_Dpar);\n      }\n      else\n      {\n        f+=0.001*norm2(Bpar);\n        f+=0.001*norm2(Gpar);\n        f+=0.001*norm2(ch_Dpar);\n      }\n    }\n  }\n  \n  f+=square(kx);\n\nFUNCTION fill_B\n  B.initialize();\n  for (int i=1;i<=dimy;i++)\n    B(i,i)=1.0;\n\n  // this is part of the special structure of the model\n  int ii=1;\n  B(2,1)=Bpar(1);\n  B(3,1)=Bpar(2);\n\n  B(3,2)=Bpar(3);\n  B(4,2)=Bpar(4);\n\n  B(4,3)=Bpar(5);\n  B(5,3)=Bpar(6);\n\n  B(5,4)=Bpar(7);\n  B(1,4)=Bpar(8);\n\n  B(1,5)=Bpar(8);\n  B(2,5)=Bpar(9);\n\n\nFUNCTION fill_Gamma\n  Gamma.initialize();\n\n  // this is the part of special structure of the model\n  Gamma(1,1)=Gpar(1);\n  Gamma(2,1)=Gpar(2);\n  Gamma(3,1)=Gpar(3);\n  Gamma(4,1)=Gpar(4);\n\n  Gamma(1,2)=Gpar(5);  \n  Gamma(2,2)=Gpar(6);  \n\n\n  Gamma(2,3)=Gpar(7);  \n\n\nFUNCTION fill_D\n\n  ch_D.initialize();\n  // this is the special structure of the model\n  int ii=1;\n  for (int i=1;i<=dimy;i++)\n  {\n    for (int j=1;j<=i;j++)\n      ch_D(i,j)=ch_Dpar(ii++);\n    ch_D(i,i)+=1;\n  }\n\n  D=ch_D*emp_D*trans(ch_D); // ch_D plays the role of A in\n                      // D = A * emp_D * A'\n\nFUNCTION calculate_empirical_covariance_matrix\n\n  for (int t=1;t<=T;t++)\n    z(t)=B*y(t)+Gamma*x(t);\n\n  emp_D=empirical_covariance(z);\n  \nFUNCTION  calculate_constraints\n\n  double wt=1.0;\n  switch (current_phase())\n  {\n  case 1:\n    wt=1.0;\n    break;\n  case 2:\n    wt=10.0;\n    break;\n  case 3:\n    wt=100.0;\n    break;\n  default:\n    wt=1000.0;\n    break;\n  }\n  if (D(5,5)>1.0)\n    f+=wt*square(D(5,5)-1.00);\n\n  f+=wt*square(D(5,1));\n  f+=wt*square(D(5,2));\n\\end{lstlisting}\n\nThe \\texttt{REPORT\\_SECTION} prints out a report of the model's results.\n\n\\begin{lstlisting}\nREPORT_SECTION\n  report << \"B\" << endl;\n  report << B << endl;\n  report << \"Gamma\" << endl;\n  report << Gamma << endl;\n  report << \"D\" << endl;\n  report << D << endl;\n  report << \"eigenvalues of D\" << endl;\n  report << eigenvalues(D) << endl;\n  report << \"y\" << endl;\n  report << y << endl;\n  report << \"x\" << endl;\n  report << x << endl;\n\\end{lstlisting}\n\n", "meta": {"hexsha": "af45a05ee1e1ba7007681c4e8792580816755bf2", "size": 15495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"docs/manuals/admb/simequat.tex", "max_stars_repo_name": "johnrsibert/admb", "max_stars_repo_head_hexsha": "063ec863a9f23f6c6afbc7d481af0476b8d63645", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 79, "max_stars_repo_stars_event_min_datetime": "2015-01-16T14:14:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T06:28:15.000Z", "max_issues_repo_path": "docs/manuals/admb/simequat.tex", "max_issues_repo_name": "johnrsibert/admb", "max_issues_repo_head_hexsha": "063ec863a9f23f6c6afbc7d481af0476b8d63645", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 172, "max_issues_repo_issues_event_min_datetime": "2015-01-21T01:53:57.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T19:57:31.000Z", "max_forks_repo_path": "docs/manuals/admb/simequat.tex", "max_forks_repo_name": "johnrsibert/admb", "max_forks_repo_head_hexsha": "063ec863a9f23f6c6afbc7d481af0476b8d63645", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 22, "max_forks_repo_forks_event_min_datetime": "2015-01-15T18:11:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T21:47:51.000Z", "avg_line_length": 29.2911153119, "max_line_length": 138, "alphanum_fraction": 0.6680219426, "num_tokens": 6244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5799737390104993}}
{"text": "\\chapter*{Nomenclature}\n\n\\section*{Units}\nAll units of measurement throughout this thesis conform to the \\emph{Syst\\`{e}me Internationale}, with deviations from this rule noted where appropriate.\n\n\\section*{Notation}\nThis section describes the general form of notation for properties such as scalars, vectors and matrices and their derivatives.\n\n\\subsection*{Time Derivatives}\n\\begin{tabular}{@{}l l}\n$\\dot{x}$\t\t& first derivative of $x$ with respect to time \\\\\n$\\ddot{x}$\t\t& second derivative of $x$ with respect to time \\\\\n$x^{(n)}$\t\t& $n$th derivative of $x$ with respect to time \\\\\n\\end{tabular}\n\n\\subsection*{Scalars, Vectors and Matrices}\n\\begin{tabular}{@{}l l}\n$x$\t\t\t\t& scalar \\\\\n$\\vc{x}$\t\t& vector or matrix \\\\\n$\\vc{x}^T$\t\t& transpose of vector or matrix \\\\\n$x_i$\t\t\t& $i$th element of vector $\\vc{x}$ \\\\\n$f(x)$\t\t\t& function of scalar $x$ \\\\\n$f(\\vc{x})$\t\t& function of vector or matrix $\\vc{x}$ \\\\\n$f_{\\vc{x}}$\t\t& Jacobian of $f(\\vc{x})$ with respect to $\\vc{x}$ \\\\\n\\end{tabular}\n\n\\section*{Symbols}\nThe following symbols are used throughout this thesis.  Where a symbol is used only briefly, it is defined at the appropriate point in the text.\n\n\\subsection*{Latin}\n\\begin{tabular}{@{}l l}\n\t$ C_d $\t& aerodynamic drag coefficient [--] \\\\\n\t$ F $, $ \\vc{F} $\t& force [\\si{\\newton}] \\\\\n\t$ g $\t& acceleration due to gravity [\\si{\\metre\\per\\square\\second}] \\\\\n\t$ i $\t& current [\\si{\\ampere}] \\\\\n\t$ L $, $ M $, $ N $ & rotational forces [\\si{\\newton\\metre}] \\\\\n\t$ \\vc{M} $\t& moment [\\si{\\newton\\metre}] \\\\\n\t$ m $\t& mass [\\si{\\kilogram}] \\\\\n\t$ R $\t& resistance [\\si{\\ohm}] \\\\\n\t$ u $, $ v $, $ w $\t& surge, sway and heave velocities [\\si{\\metre\\per\\second}] \\\\\n\t$ V $\t& magnitude of velocity [\\si{\\metre\\per\\second}] \\\\\n\t$ V_a $\t& voltage applied to circuit [\\si{\\volt}] \\\\\n\t$ X $, $ Y $, $ Z $\t& linear forces [\\si{\\newton}] \\\\\n\t$x$, $y$, $z$ \t\t& components of position [\\si{\\metre}] \\\\\n\\end{tabular}\n\n\\subsection*{Greek}\n\\begin{tabular}{@{}l l}\n\t$ \\beta $\t& slip angle [\\si{\\radian}] \\\\\n\t$\\phi$, $\\theta$, $\\psi$\t& roll, pitch and yaw displacements [\\si{\\radian}] \\\\\n\t$ \\rho $\t& atmospheric density [\\si{\\kilogram\\per\\cubic\\metre}] \\\\\n\t$ \\sigma $\t& friction coefficient [--] \\\\\n\t$ \\omega $\t& rotational speed [\\si{\\radian\\per\\second}] \\\\\n\t$ \\vc{\\omega} $\t& angular velocity vector [\\si{\\radian\\per\\second}]\n\\end{tabular}\n\n\\subsection*{Subscripts}\n\\begin{tabular}{@{}l l}\n\t$ k $\t& iteration of inverse simulation \\\\\n\t$ n $\t& iteration of Newton-Raphson method \\\\\n\\end{tabular}\n\n\\section*{Abbreviations}\n\\begin{tabular}{@{}l l}\nFDIR\t& fault detection, isolation and reconfiguration \\\\\nInvSim\t& inverse simulation \\\\\nNED\t\t& north-east-down \\\\\n\\end{tabular}\n\n", "meta": {"hexsha": "97e2ae93303a8a90a1e94ec5f3b42ba1ec932497", "size": 2657, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/tex/FrontMatter/Nomenclature.tex", "max_stars_repo_name": "murrayireland/Satellite-Model", "max_stars_repo_head_hexsha": "7d6826f9675d49bac13b8d82573f2fa09c2c0af8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-07-15T07:21:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T15:28:21.000Z", "max_issues_repo_path": "Documentation/tex/FrontMatter/Nomenclature.tex", 
"max_issues_repo_name": "murrayireland/Satellite-Model", "max_issues_repo_head_hexsha": "7d6826f9675d49bac13b8d82573f2fa09c2c0af8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documentation/tex/FrontMatter/Nomenclature.tex", "max_forks_repo_name": "murrayireland/Satellite-Model", "max_forks_repo_head_hexsha": "7d6826f9675d49bac13b8d82573f2fa09c2c0af8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-15T01:16:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T12:29:53.000Z", "avg_line_length": 37.9571428571, "max_line_length": 153, "alphanum_fraction": 0.6330447874, "num_tokens": 869, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5799737356998267}}
{"text": "\n\\begin{frame}\n\t\n\t\\frametitle{Empirical evaluation}\n\t\\framesubtitle{Evaluation Metrics}\n\t\n\t\\begin{itemize}\t\n\t\t\\item<only@1>  $\\mathit{Precision} =$ $ \\mathit{\\frac{\\#\\ of\\ correct\\ predictions}{\\#\\ of\\ total\\ predictions}}$. The fraction of the predictions that are correct.     \n\t\t\\item<only@1> $\\mathit{Spread}\\ $=$\\ end(I) -start(I)$. The width of the prediction interval $I$, which represents the number of events between the start and the end of $I$.\n\t\n\t\t\\item<only@1> $\\mathit{PS-}$score = $\\alpha * precision + (1 - \\alpha ) * ( 1- \\frac{spread}{max\\  spread})$. In analogy with the $F-score$ we introduce this new score to assess the performance of the system in terms of both the $\\mathit{precision}$ and the $\\mathit{spread}$ combined.\n\t\t\\item<only@1> $\\mathit{Distance}\\ $=$\\ start(I) - now$. The distance between the start of the prediction interval $I$ and the time of prediction is produced (now). \n\t\t\\item<only@2>  $\\mathit{Cumulative\\ communication}$. The number of messages, which are required to perform the distributed online learning modes to synchronize the prediction models.\t\n\t\t\\item<only@2> $Throughput$. The number of events processed per unit time (second).\n\t\\end{itemize} \n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Empirical evaluation}\n\t\\framesubtitle{Synchronization Schemes}\n\t\n\tOur proposed system can operate in three different modes of synchronization schemes: \n\t\\begin{itemize}\t\n\t\t\\item continuous full synchronization for each incoming event (hypothetical).\n\t\t\\item static scheme based on synchronizing the prediction models periodically every $b$ of input events in each stream.\n\t\t\\item dynamic synchronization protocol based on making the predictors communicate their local prediction models periodically but only under condition that the divergence of the local models from a reference model exceeds a variance threshold $\\Delta$ (recommended).  \t   \n\t\t\n\t\\end{itemize} \n\t\n\\end{frame}\n\n\n\\subsection{Real-word Event Streams\\\\}\n\n\\begin{frame}\n\t\n\t\\frametitle{Empirical evaluation }\n\t\\framesubtitle{Over Real-word Event Streams}\n\\begin{itemize}\n\t\n\t\\item<only@1> Event streams describe critical points (i.e., synopses) of moving vessels trajectories, which are derived from raw AIS messages as described in \\cite{synopses1}.\n\t\\item<only@1> $4,684,444$ critical points  in Brest, France: October 2015 to March 2016 ($\\approx5000$ vessels). \n\t\\item<only@1>  $\\mathcal{P}_1=Sailing$ with $m=2$ over $\\Sigma_1=$$\\{$\\textit{VerySlow, Slow, Moving,  Sailing, Stopping}$\\}$.  
\n\t\\item<only@2> $\\mathcal{P}_2=$\\textit{changeInHeading; gapStart; gapEnd; changeInHeading} with $m=1$ over $\\Sigma_2=$ \n\t$\\{$\\textit{stopStart, stopEnd, changeInSpeedStart, changeInSpeedEnd,  slowMotionStart, slowMotionEnd, gapStart, gapEnd, changeInHeading}$\\}$.\n\t\\item<only@2> All experiments are performed with setting the batch size of 100  ($b=100$), the variance threshold of 2 ($\\Delta=2$), $80\\%$ as PMC prediction threshold ($\\theta_{p}=80\\%$), and 200 for the maximum spread ($\\theta_{s}=200$)\n\\end{itemize}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{Precision scores with respect to the number of input events over time for $\\mathcal{P}_1$.}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.7\\textwidth,height=.65\\textheight]{../chapters/figures/synopses/new/precision_p1.png}\\linebreak\\\\\n\tMethods of distributed learning outperform the isolated method.\n\t\t\n\t\\end{figure}\n\t\n\\end{frame}\n\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{Average spread for $\\mathcal{P}_1$.}\n\t\n\t\\begin{center}\n\t\t\n\t\t\\begin{figure}[]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=.7\\textwidth,height=.65\\textheight]{../chapters/figures/synopses/new/spread_p1.png}\n\t\t\t\n\t\t\\end{figure}\n\t\\end{center}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{$PS$-score for $\\mathcal{P}_1$ with $\\alpha = .5$.}\n\t\n\\begin{center}\n\t\\centering\n\t\\begin{figure}[]\n\t\t\n\t\t\\includegraphics[width=.7\\textwidth,height=.65\\textheight]{../chapters/figures/synopses/new/ps_score_p1.png}\n\t\t\n\t\\end{figure}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{Distance for $\\mathcal{P}_1$.}\n\t\n\t\\begin{center}\n\t\t\\centering\n\t\t\\begin{figure}[]\n\t\t\t\n\t\t\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synopses/new/distance_p1.png}\n\t\t\t\n\t\t\\end{figure}\n\t\\end{center}\n\t\n\\end{frame}\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{Cumulative communication with respect to the number of input events over time for $\\mathcal{P}_1$.}\n\t\n\t\\begin{center}\n\t\t\n\t\t\\begin{figure}[]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synopses/new/messages_p1.png}\n\t\t\t\n\t\t\\end{figure}\n\t\\end{center}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Real-word Event Streams }\n\t\\framesubtitle{Precision scores of $\\mathcal{P}_2$  for vessels of \\textit{pleasure craft} type.}\n\t\n\\begin{figure}[]\n\t\\centering\n\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synopses/new/precision_p2.png}\n\t\n\\end{figure}\n\t\n\\end{frame}\n\n\n\n\\subsection{Synthetic Event Streams}\n\\begin{frame}\n\t\n\t\\frametitle{Empirical evaluation }\n\t\\framesubtitle{Over Synthetic Event Streams}\n\t\\begin{itemize}\n\t\t\\item<1-> We generate $20$ streams of size 10,000 events from a simulated 1-order Markov process over $\\Sigma=\\{a, b, c, d\\}$.\n\t\t\n\t\t\\item<1->The used pattern $\\mathcal{P}=a ; d ; c$ with $m=1$.\n\t\t\n\t\t\\item<1-> We set the batch size to 20 ($b=20$), the variance threshold to .0001 ($\\Delta=.0001$), the  \\pmcmr prediction threshold to $50\\%$ 
($\\theta_{p}=50\\%$), and the maximum spread to 10 ($\\theta_{s}=10$).\n\t\\end{itemize}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Synthetic Event Streams }\n\t\\framesubtitle{Precision scores with respect to the number of input events over time for $\\mathcal{P}=a;d;c$.}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.7\\textwidth,height=.6\\textheight]{../chapters/figures/synthetic/new/precision_synthetic.png}\\linebreak\\\\\n\t\tMethods of distributed learning converge faster than the isolated method.\n\t\t\n\t\\end{figure}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Synthetic Event Streams }\n\t\\framesubtitle{Average spread  with respect to the number of input events over time for $\\mathcal{P}=a;d;c$.}\n\t\n\\begin{figure}[]\n\t\\centering\n\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synthetic/new/spread_synthetic.png}\n\t\n\\end{figure}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Synthetic Event Streams }\n\t\\framesubtitle{$\\mathit{PS}$-score for $\\mathcal{P}=a;d;c$ with $\\alpha = .5$.}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synthetic/new/ps_score_synthetic.png}\n\t\n\\end{figure}\n\t\n\\end{frame}\n\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Results on Synthetic Event Streams }\n\t\\framesubtitle{The error  $\\sum_{i,j} |\\hat{p}_{i,j} - {p}_{i,j}|$) of estimating the transition probabilities  for $\\mathcal{P}=a;d;c$.}\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\textwidth,height=.8\\textheight]{../chapters/figures/synthetic/new/error_synthetic.png}\n\t\t\n\t\\end{figure}\n\t\n\\end{frame}\n\n\n\\begin{frame}\n\t\n\t\\frametitle{Throughput Results}\n\t\\framesubtitle{Throughput of the system on YARN cluster in in terms of number of events processed per second with respect to the parallelism level over $\\mathcal{P}_1$ with batch size $b=100$ and divergence threshold $\\Delta=2$.}\n\t\n\t\\begin{figure}[H]\n\t\t\n\t\t\\includegraphics[width=.8\\textwidth,height=.75\\textheight]{../chapters/figures/throughput/temp.png}\n\t\t\n\t\n\t\\end{figure}\n\t\n\\end{frame}", "meta": {"hexsha": "5876feb75e5f83fab12b3569a980197fd01e4685", "size": 7752, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "presentation2/evaluation.tex", "max_stars_repo_name": "wsgan001/thesis-8", "max_stars_repo_head_hexsha": "6131d734f4cc48746575221370669da14de5024b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-25T22:44:18.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-25T22:44:18.000Z", "max_issues_repo_path": "presentation2/evaluation.tex", "max_issues_repo_name": "wsgan001/thesis-8", "max_issues_repo_head_hexsha": "6131d734f4cc48746575221370669da14de5024b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentation2/evaluation.tex", "max_forks_repo_name": "wsgan001/thesis-8", "max_forks_repo_head_hexsha": "6131d734f4cc48746575221370669da14de5024b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-09-25T22:44:17.000Z", "max_forks_repo_forks_event_max_datetime": "2018-09-25T22:44:17.000Z", "avg_line_length": 32.7088607595, "max_line_length": 287, "alphanum_fraction": 0.7232972136, 
"num_tokens": 2386, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127603871312, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5799593633541197}}
{"text": "\\subsection{y* cut Optimization}\n\\label{section: y* cut optimization}\n\n\\todo[inline]{Logical flow... Q/G selection optimisation, then y* optimisation....\\\\ Clean up formulae. }\n\nIn QCD two-to-two scattering, $t$-channel is dominant. The QCD dijet production is proportional to $(1-cos\\theta^{*})^{-2}$, while $H'$ and $String$ production is expected to be flat in $cos\\theta^{*}$.\nThis means that $y^{*}$($y^{*} = \\frac{y_{1}-y_{2}}{2}$) of QCD background will minimize at 0 and that of $H'$ and $String$ will peak at 0.\n\nFigure \\ref{fig: hprime significance as a function of y* cut} shows the significance of $H'$($Significance = \\sqrt{\\sum_{i}{2\\cdot((S_{i}+B_{i})\\cdot ln(1+\\frac{S_{i}}{B_{i}})-S_{i})}}$) for the $m_{jj}$ distribution as a function of $y^{*}$ cut. The significance peaks at around 0.6, so the optimal $y^{*}$ cut for the $H'$ search is $|y^{*}| < 0.6$.\n\nFigure \\ref{fig: string significance as a function of y* cut} shows the significance of $String$ for the $m_{jj}$ distribution as a function of $y^{*}$ cut. The significance peaks at around 0.8, so the optimal $y^{*}$ cut for the $String$ search is $|y^{*}| < 0.8$.\n\n\\begin{figure}[htbp]\n        \\centering\n        \\subfigure[$\\geq$1 g-tag]{\\includegraphics[width=0.48\\columnwidth]{figures/yStarOptimization/Significance_Hprime_gj.pdf}}\n        \\subfigure[2 g-tag]{\\includegraphics[width=0.48\\columnwidth]{figures/yStarOptimization/Significance_Hprime_gg.pdf}}\n        \\caption{$H'$ significance as a function y* cut in the case of (a) $\\geq$1 g-tag, (b) 2 g-tag.}\n        \\label{fig: hprime significance as a function of y* cut}\n\\end{figure}\n\n\n\\begin{figure}[htbp]\n        \\centering\n        \\subfigure[$\\geq$1 g-tag]{\\includegraphics[width=0.48\\columnwidth]{figures/yStarOptimization/Significance_String_gj.pdf}}\n        \\subfigure[2 g-tag]{\\includegraphics[width=0.48\\columnwidth]{figures/yStarOptimization/Significance_String_gg.pdf}}\n        \\caption{$String$ significance as a function y* cut in the case of (a) $\\geq$1 g-tag, (b) 2 g-tag.}\n        \\label{fig: string significance as a function of y* cut}\n\\end{figure}\n\n\n", "meta": {"hexsha": "d57f4f782c626d15efeaf6615df66fe185555a2a", "size": 2102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "include/oldMaterial/yStarcut.tex", "max_stars_repo_name": "krybacki/IntNote2", "max_stars_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "include/oldMaterial/yStarcut.tex", "max_issues_repo_name": "krybacki/IntNote2", "max_issues_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "include/oldMaterial/yStarcut.tex", "max_forks_repo_name": "krybacki/IntNote2", "max_forks_repo_head_hexsha": "45b1a7d88ca7b15f19ec25270b6fbbebd839fa0b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.8064516129, "max_line_length": 351, "alphanum_fraction": 0.6841103711, "num_tokens": 665, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8539127492339909, "lm_q2_score": 0.6791787121629466, "lm_q1q2_score": 0.5799593613242631}}
{"text": "\\section{Conclusion}\n\nThis paper broadly follows three parts: First a structural causal model defining the data generating process of the returns of stocks was formulated. Using daily returns for 11 stock for the last 30 years, relevant statistics was calculated, and used for tuning the parameters of the DGP. Using the structural causal model a data set was simulated. 2) An ensemble of three algorithms that predicts the Sharpe ratio of individual stocks was presented. Using these three models, I tune, train and select an algorithm on the simulated data set. I find the best performing algorithm being a model that estimates the sharp ratio for individual stocks by storing a rolling set of observations and calculating the relevant moments using this memory. 3) Using the tuned algorithm we apply it out-of-sample to the real data. Using the rolling Sharpe algorithm I find (comparing it to 4) other benchmarks, the algorithm performs extremely well, having a Sharpe ratio at $0.1256$ comparing to a tangency portfolio with a Sharpe ratio of $0.05545$. I discuss the different challenges of tuning an LSTM, and the value of using a structural causal model as test environment for trading algorithms. Finally I point out the challenges of using machine learning models in-sample, and express a word of caution of doing so.\n", "meta": {"hexsha": "1f363ece9bd2e8af4f4b71e01957e4367e0d58ab", "size": 1328, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/conclusion.tex", "max_stars_repo_name": "JakartaLaw/ACFS", "max_stars_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-04T01:20:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-04T01:20:46.000Z", "max_issues_repo_path": "chapters/conclusion.tex", "max_issues_repo_name": "JakartaLaw/ACFS", "max_issues_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:21:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T20:09:54.000Z", "max_forks_repo_path": "chapters/conclusion.tex", "max_forks_repo_name": "JakartaLaw/ACFS", "max_forks_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-07T07:34:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-07T07:34:46.000Z", "avg_line_length": 332.0, "max_line_length": 1305, "alphanum_fraction": 0.8109939759, "num_tokens": 266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8333246035907932, "lm_q2_score": 0.6959583124210896, "lm_q1q2_score": 0.5799591848140219}}
{"text": "The aim of this Section is to explain: (i) how to initialize the hypercubes set at the beginning of the analysis, (ii) how to select the interval widths in the hypercubes, and (iii) how to improve the precision of the analysis by using offsets to identify sub-volumes inside the hypercubes. \n\n\\subsection{Initialization of the analysis} \\label{sec:initialization}\nBefore starting the analysis we have to determine the hypercubes dimensionality, i.e., the number of sides each hypercube will have. To do this, we must find all the variables of the code which are not constants. %To avoid scanning the entire code before starting the analysis, \nWe require the program to initialize all non-constant variables at the beginning of the program. Note that physics simulations, like our case study, satisfy this requirement because they are made up by an initialization of all variables, followed by a \\statement{while} loop which contains the core of the program (i.e., the update of the simulated world). Otherwise, we can consider a dummy initialization (i.e., $0.0$) for all variables which are not initialized at the beginning of the program. The actual initialization of the variables will be treated as a normal assignment, without any loss of precision. \nThe initialization of the analysis is made in two steps. First, for each initialized variable, we compute its abstraction in the non-relational domain chosen to represent the single variables. The resulting set of abstract values could contain more than one element. Let us call $\\alpha(V)$ the set of abstract values associated to the initialization of the variable $V \\in Vars$. Then we compute the Cartesian product of all sets of abstracted values (one for each variable). The resulting set of tuples (where each tuple has the same cardinality as $Vars$) is the initial set of hypercubes of the analysis. \n\nFormally, $\\adomain = \\sbigtimes_{V \\in Vars} {\\alpha(V)}$.\n\nAs an example, consider the initialization in Listing \\ref{lst:casestudy} of our case study. First of all, we must identify the variables which are not constants: $dt,g,k$ are assigned only once (during the initialization), so we do not include them in $Vars$. The set of not-constant variables is then $Vars = \\{ V_1 = px, V_2 = py, V_3 = vx, V_4 = vy \\}$. The cardinality of the hypercubes will be $|Vars| = 4$. Suppose that the widths associated to the variables are  $w_1 = 10.0, w_2 = 25.0, w_3 = 30.0, w_4 = 5.0$. Then, the abstraction of each variable is $\\alpha(V_1) = \\{ 0 \\}$, $\\alpha(V_2) = \\{ 0, 1 \\}$, $\\alpha(V_3) = \\{ 0, 1 \\}$, and $\\alpha(V_4) = \\{ -6 \\}$.\nThe Cartesian product of these abstractions brings us to the following initial set of hypercubes:\n$$\\adomain = \\{ (0,0,0,-6) , (0,0,1,-6) , (0,1,0,-6) , (0,1,1,-6) \\}$$\n\n\\subsection{Width choice} \\label{sec:widths}\nThe choice of the interval widths is quite important, because it influences both the precision and efficiency of the analysis. The widths influence the granularity of the space partitioning with respect to each variable. On the one hand, if we use smaller widths we certainly obtain more precision, but the analysis risks to be too slow. On the other hand, with bigger widths the analysis will be surely faster, but we could not be able to verify the desired property. \n\nThe width selection can be automatized. 
We implemented a recursive algorithm which adjusts the widths automatically, starting with bigger ones and then decreasing them only in the portions of space where it is really needed, to avoid compromising the performance.\n\nWe start with wide intervals (i.e., coarse precision, but fast results) and we run the analysis for the first time. We track, for each hypercube of the final abstract state, the initial hypercubes (\\textit{origins}) from which it is derived.\n\nAt the end of the analysis, we check, for each hypercube of the final set, if the desired property is verified. We associate to each origin its final result by merging the results of its derived final hypercubes: some origins will certainly verify the property (i.e., they produce only final hypercubes which satisfy the property), some will not, and some will not be able to give us a definite answer. We can partition the starting hypercubes set with respect to this criterion.\n\nWe run the analysis again, but \\emph{only} on the origins which did not give a definite answer. To obtain more precise results in this specific space portion, the analysis is now run with halved widths. Note that this step is only performed until we reach a specific threshold, i.e., the \\textit{minimum width} allowed for the analysis. This parameter can be specified by the user (together with the starting widths). The smaller this threshold is, the more precise (but slower) the analysis becomes.\n\n\nAt the end of this recursive process, we obtain three partitions of the variable space: a set of starting hypercubes which certainly verify the property (\\emph{yes} set), a set of starting hypercubes which certainly do not verify the property (\\emph{no} set), and a set of starting hypercubes which, at the minimum width allowed for the analysis, still do not give a definite answer (\\emph{maybe} set). This means that the analysis tells us which initial values of the variables bring us to verify the property and which do not. Thanks to these results, the user can modify the initial values of the program, and run the analysis again, until the answer is that the property is verified for all initial values. In our case study, for example, we can adjust the possible initial positions and velocities until we are sure that the ball will exit the screen in a certain time frame. \n\nThe formalization of this recursive algorithm is presented in Algorithm \\ref{alg:widthAdjusting}.\n\n\\begin{algorithm}\n\\caption{The width adjusting recursive algorithm} \\label{alg:widthAdjusting}\n\\begin{algorithmic} \n\\Function {Analysis} {$currWidth, minWidth, startingHypercubes$}\n  \\State \\Return $(yes \\cup yes', no \\cup no', maybe')$\n  \\State \\textbf{where}\n    \\State $(yes, no, maybe) = hypercubesAnalysis(currWidth, startingHypercubes )$\n    \\If {$currWidth/2.0 \\geq minWidth$}\n      \\State $(yes', no', maybe') = Analysis(currWidth/2.0, minWidth, maybe)$\n    \\Else\n      \\State $(yes', no', maybe') = (Set.empty, Set.empty, maybe)$\n    \\EndIf\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\nThe overall analysis takes as input the starting width, the minimum width allowed and the set of starting hypercubes (obtained from the initialization of the program as described in Section \\ref{sec:initialization}). It executes the analysis on such data with the function \\statement{hypercubesAnalysis}, which returns three sets of hypercubes (\\statement{yes,no,maybe}) with the meaning explained above. 
Then, if the halved width is still greater than the minimum one allowed, the algorithm performs a recursive step by repeating the analysis function only on the \\statement{maybe} hypercubes set (with halved width). \n\nNote that %the initial hypercubes sets \nthe three final hypercubes sets (the \\emph{yes,no,maybe} partitions)\nwill contain hypercubes of different sizes: this happens because each hypercube can come from a different iteration of the analysis, and each iteration is associated to a specific hypercube size. A certain portion of the variable space could give a definite answer even at coarse precision (for example, when the horizontal velocity of the ball is sufficiently high, the values of other variables do not matter so much), while another portion could need to be split in much smaller hypercubes to give interesting results.\n\n\\subsection{Offsets}\nThere is still space for improving accuracy in the domain introduced so far. A loss of precision may occur due to the fact that hypercubes proliferate too much, even using small widths. Consider, for example, the statement \\statement{x = x + 0.01} (which is repeated at each iteration of the \\statement{while} loop) with $1.0$ as the width associated to \\statement{x}. If the initial interval associated to \\statement{x} was $[0.0,1.0]$, after the first iteration we would obtain two intervals ($[0.0,1.0]$ and $[1.0,2.0]$) because the resulting interval would be $[0.01,1.01]$, which spans over two fixed-width intervals. For the same reason, after the second iteration we would obtain three intervals ($[0.0,1.0],[1.0,2.0]$ and $[2.0,3.0]$) and so on: at each iteration we add one interval. \n\nIn order to overcome these situations, we may further improve the definition of our domain: each hypercube is augmented with additional information, which limits the admissible values inside it. In particular, each variable $v_i$ (associated to width $w_i$) is related to (other than an integer index $i$ representing the fixed-width interval $[i \\times w_i, (i+1) \\times w_i]$) a specific offset $(o_m,o_M)$ \\emph{inside} such interval. In this way, we use a sub-interval (of arbitrary width) inside the fixed-interval width, thereby restricting the possible values that the variable can assume. Both $o_m$ and $o_M$ must be smaller than $w_i$, greater than or equal to $0$ and $o_m \\leq o_M$. Then, if $i$ and $(o_m,o_M)$ are associated to $v_i$, this means that the possible values of $v_i$ belong to the interval $[(i \\times w_i) + o_m, (i \\times w_i) + o_M]$. If $o_m=0$ and $o_M=w_i$, the sub-interval corresponds to the whole fixed-width interval. \n\nAn element of our abstract domain is now stored (instead as a set of hypercubes) as a map from hypercubes to tuples of offsets. In this way, we can keep the original definition of a hypercube as a tuple of integers, but we also map each hypercube to a tuple of offsets (one for each variable). Before, an abstract state was defined by $ H = \\{ h : h \\in \\integer^{|Vars|}\\} $. Now an abstract state is defined by $M : \\funzione{\\integer^{|Vars|}}{{(\\real \\times \\real)}^{|Vars|}}$, i.e., a map where the domain is the set of hypercubes, and the codomain is the set of tuples of offsets. 
Each hypercube is associated to a tuple of offsets and each tuple of offsets has one offset for each program variable.\n\nThe least upper bound between two abstract states ($M=M_1 \\sqcup M_2$) is then defined as follows: $dom(M) = dom(M_1) \\cup dom(M_2)$, and \n$$\n\\forall h \\in dom(M) : M(h) = \\begin{cases}\nM_1(h) & \\mbox{if } h \\in dom(M_1) \\wedge h \\notin dom(M_2) \\\\\nM_2(h) & \\mbox{if } h \\in dom(M_2) \\wedge h \\notin dom(M_1) \\\\\nmerge(M_1(h),M_2(h)) & \\mbox{otherwise }\n\\end{cases}$$\nwhere $merge(o_1,o_2)$ creates a new tuple of offsets by merging the two tuples of offsets in input: for each pair of corresponding offsets (for example $(m_1,M_1)$ and $(m_2,M_2)$), the new offset is the widest combination possible (i.e., $(\\min(m_1,m_2)$ and $\\max(M_1,M_2))$). Note that this definition corresponds to the pointwise application of the least upper bound operator over intervals. The widening operator is extended in the same way: it applies the standard widening operators over intervals pointwisely to the elements of the vector representing the offsets.\n\nWe also have to modify the abstract semantics to accommodate this change. As the expression semantics $\\mathbb{I}$ returns intervals of arbitrary widths, we can use such exact result to update the offsets of the abstract state. Formally, the semantics of the assignment becomes \n\\begin{align*}\nassign(h, V_i, [a..b]) & = \\{ \\\\\n& h[i\\mapsto (m,o_m,o_M)] : [m\\times w_i .. (m+1)\\times w_i] \\cap [a..b] \\neq \\emptyset \\wedge \\\\\n& o_m = \\begin{cases} \n0 & \\mbox{if } a \\leq (m \\times w_i) \\\\\na - (m \\times w_i) & \\mbox{otherwise} \\\\\n\\end{cases} \\wedge \\\\\n& o_M = \\begin{cases} \nw_i & \\mbox{if } b \\geq ((m+1) \\times w_i) \\\\\nb - (m \\times w_i) & \\mbox{otherwise} \\\\\n\\end{cases} \\\\\n& \\}\n\\end{align*}\nwhere $h$ is a hypercube, $V_i$ is the assigned variable, and $[a..b]$ is the interval we are assigning. When we extract from a hypercube the interval associated to a variable, we use the interval delimited by the offsets, so that abstract operations can be much more precise. \n\nConsider again the previous example (\\statement{x = x + 0.01}, with $1.0$ as width of \\statement{x} and $[0..1]$ as initial value of \\statement{x}): after the first iteration we would return the two intervals $[0.0,1.0]$ and $[1.0,2.0]$ with offsets $[0.01,1.0]$ and $[1.0,1.01]$, respectively. In this way, at the following iteration we would obtain again the same two intervals (instead of three), with the offsets changed to $[0.02,1.0]$ and $[1.0,1.02]$. You can see that hypercubes proliferation is not an issue any more. When an offset moves out of the fixed interval inside which it should reside, we remove the corresponding hypercube from the set. In our example, after 100 iterations the first of the two intervals ($[0.0,1.0]$) would be removed from the hypercubes because the offset would start at $1.0$.\n\nOffsets give us much more precision in verifying properties (as we will see in the next Section), without affecting too much the performances, since we just need to store more data for each hypercube, but the cost of the single operations remains almost the same. 
Indeed, the overall performance is even improved in practice because in the previous approach the number of hypercubes increased exponentially at each iteration of the \\statement{while} loop, making the analysis unfeasible.\n\n", "meta": {"hexsha": "6f59767f85565f3a95903bd5aa57815b010c5bfb", "size": 13314, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "12. Hypercubes Domain/HC/Sections/versione SAS2013/4_tuning.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/4_tuning.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/4_tuning.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 151.2954545455, "max_line_length": 955, "alphanum_fraction": 0.7558209404, "num_tokens": 3389, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245828938678, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5799591809200167}}
{"text": "\\section{RAVEN Mathematical Background}\n\\label{sec:mathBackground}\n\\subsection{System and Control Model}\n\\label{sub:controlAndSystem}\nThe first step is the derivation of the mathematical model representing, with a high\nlevel of abstraction, the plant and control system model. Let $\\overline{\\theta}\\left ( t\n\\right )$ be a vector describing the system status in the phase space, characterized\nby the following governing equation:\n\\begin{equation}\n\\label{eq:dThetaOverDT}\n\\frac{\\partial \\overline{\\theta} }{\\partial t}=\\overline{H}\\left (  \\overline{\\theta}\\left ( t \\right ),t \\right )\n\\end{equation}\nIn Equation above, the assumption of time differentiability of the trajectory equation $\\overline{H}\\left (  \\overline{\\theta}\\left ( t \\right ),t \\right )$ in the phase space has been taken. This assumption is not fully correct and generally required and it is used here, without missing of generality, for compactness of the notation.\n\\\\It can now be performed an arbitrary decomposition of the phase space:\n\\begin{equation}\n\\label{eq:thetaDecomposition}\n  \\overline{\\theta} = \\left (\\frac{\\overline{x}}{\\overline{v}}  \\right )\n\\end{equation}\nThe decomposition is made in such a way that $\\overline{x}$ represent the unknowns\nsolved by a system code (such as RELAP5-3D~\\cite{RELAP5userManual},\nRELAP7~\\cite{relap7FY12}, etc.) while $\\overline{v}$ are the variables directly\ncontrolled by the control system (e.g., automatic mitigation systems, operator actions,\netc.).\n\\\\The governing equation can be now cast in the following system of equations:\n\\begin{equation}\n\\label{eq:governingEquations}\n\\left\\{\\begin{matrix}\n\\frac{\\partial \\overline{x} }{\\partial t} = \\overline{F}\\left (  \\overline{x}, \\overline{v}, t \\right )  \\\\\n\\frac{\\partial \\overline{v} }{\\partial t} = \\overline{V}\\left (  \\overline{x}, \\overline{v}, t \\right )\n\\end{matrix}\\right.\n\\end{equation}\nConsequentially to this splitting, $\\overline{x}$ contains the state variables of the\nphase space that are continuous while $\\overline{v}$ contains discrete state\nvariables that are usually handled by the control system (consequentially, named\n\\textbf{control variables}). It can be noticed that the\nfunction  $ \\overline{V}\\left (  \\overline{x}, \\overline{v}, t \\right )$, representing the\ncontrol system, does not depend on the  knowledge of the complete status of the\nsystem but on a restricted subset that can be named \\textbf{monitored variables} $\\overline{C}$:\n\n\\begin{equation}\n\\label{eq:controlVars}\n\\left\\{\\begin{matrix}\n\\frac{\\partial \\overline{x} }{\\partial t} = \\overline{F}\\left (  \\overline{x}, \\overline{v}, t \\right )  \\\\\n \\overline{C} =  \\overline{G}(\\overline{x},t)     \\\\\n\\frac{\\partial \\overline{v} }{\\partial t} = \\overline{V}\\left (  \\overline{x}, \\overline{v}, t \\right )\n\\end{matrix}\\right.\n\\end{equation}\nwhere $\\overline{C}$ is a vector of smaller dimensionality than $\\overline{x}$ and,\ntherefore, more convenient to handle.\n\\\\As it can be noticed, the standard nomenclature of \\textit{signals} (monitored variables) and \\textit{status} (control variables) is not adopted. Two principal reasons\njustify this decision:\n\\begin{itemize}\n  \\item The definition of signals is tight to the definition of the\n  control logic for each component and, therefore, relative rather than absolute in the\n  overall system analysis. 
For example, it is possible the the \\textit{signals} for a\n  component represent \\textit{status} of another one, determining an in-unique\n  definition.\n  \\item The standard nomenclature becomes meaningless when this derivation is\n  applied to Uncertainty Quantification (UQ).\n\\end{itemize}\n\n\\subsubsection{Splitting Approach for the Simulation of the Control System}\nEquation~\\ref{eq:controlVars} represents a fully coupled system of Partial Differential\nEquations (PDEs). To solve this system, an  \\textit{operator splitting} approach\nis employed. This method is preferable in this context for several reasons, among which the following:\n\\begin{itemize}\n  \\item In reality, the control system (automatic mitigation systems, operator actions,\n  etc.) is always characterized by an intrinsic delay\n  \\item The reaction of the control system might make the system ``move'' among\n  different discrete states; therefore, numerical errors will be always of first order\n  unless the discontinuity is explicitly treated.\n\\end{itemize}\nEmploying the \\textit{operator splitting} approach, Equation ~\\ref{eq:controlVars}  can be\ncast as follows:\n\\begin{equation}\n\\label{eq:operatorSplitting}\n\\left\\{\\begin{matrix}\n\\frac{\\partial \\overline{x} }{\\partial t} = \\overline{F}\\left (  \\overline{x},\\overline{v}_{t_{i-1}}, t \\right )  & \\\\\n\\overline{C} =  \\overline{G}(\\overline{x},t)  & t_{i-1} \\leq t \\leq  t_{i} =  t_{i-1} + \\Delta  t_{i}\\\\\n\\frac{\\partial \\overline{v} }{\\partial t} = \\overline{V}\\left (  \\overline{x}, \\overline{v}_{t_{i-1}}, t \\right ) &\n\\end{matrix}\\right.\n\\end{equation}\nHence, the system of equations in solved decomposing it into simpler sub-problems that are treated individually\nusing specialized numerical algorithms.\n\n\\subsubsection{Definition of the Monitored Variable Space}\nThe contraction of the information from the $\\overline{x}$ space to the $\\overline{C}$ space is a crucial step.\nSince $\\overline{C}$ represents an arbitrary middle step, it is needed to define a set of rules that make this\nchoice unique. $\\overline{C}$ is chosen such that:\n\\begin{itemize}\n  \\item The solution of  $ \\left.\\begin{matrix} \\frac{\\partial \\overline{v} }{\\partial t}\n  \\end{matrix}\\right| =\\overline{V}\\left (  \\overline{x},\\overline{v}_{t_{i-1}}, t \\right )$\n  can be carried along without any knowledge of the solution algorithm of\n   $ \\left.\\begin{matrix}\n  \\frac{\\partial \\overline{x} }{\\partial t} =  \\end{matrix}\\right| \\overline{F}\\left (\n  \\overline{x},\\overline{v}_{t_{i-1}}, t \\right )\n  $. This requirement determines the minimum information contraction from  $\\overline{x}$ to\n  $\\overline{C}$.\n  \\item All actions represented by $\\overline{C} = \\overline{G}(\\overline{x},t)$ require knowledge of the\n  solution algorithm of\n  $ \\left.\\begin{matrix}\n  \\frac{\\partial \\overline{x} }{\\partial t} =  \\end{matrix}\\right| \\overline{F}\\left (\n  \\overline{x},\\overline{v}_{t_{i-1}}, t \\right )  $. This requirement determines  the maximum  information contraction from  $\\overline{x}$ to  $\\overline{C}$.\n\\end{itemize}\nThe intersection of the two sub-spaces defined above create a minimal unique set.\n\\subsubsection{Definition of the Auxiliary Variable Space}\nIn the previous sections, it has been determined that the needed information to model the dynamic system\nis contained in the solution vectors $\\overline{x}$ and $\\overline{v}$. 
Even if $\\overline{x}$ and $\\overline{v}$\nare sufficient to assess the system status at every point in time, it can result in an unpractical way to model\nthe eventual control system.\nLet's suppose to model a component of a particular system that presents different behavior depending on\nother systems or operation behaviors. In order to define the status of this component in every point in time, the\nwhole history of the system needs to be tracked. In order to remove these inefficiency, a set of auxiliary variables\n$\\overline{a}$ can be introduced. These variables are the ones that in the analysis of stochastic dynamics\nare artificially added into the phase space to a non-Markovian system to obtain back a Markovian behavior. In this\nway only the previous time-step information is needed to determine the status of the system.\n\\\\ Adding this additional system of variables, Equation ~\\ref{eq:operatorSplitting} can be casted as follows:\n\n\\begin{equation}\n\\label{eq:auxiliaryVariables}\n\\left\\{\\begin{matrix}\n\\frac{\\partial \\overline{x} }{\\partial t} = \\overline{F}\\left (  \\overline{x},\\overline{v}_{t_{i-1}}, t \\right )  & \\\\\n\\overline{C} =  \\overline{G}(\\overline{x},t)  & t_{i-1} \\leq t \\leq  t_{i} =  t_{i-1} + \\Delta  t_{i}\\: \\\\\n\\frac{\\partial \\overline{a} }{\\partial t} = \\overline{A}\\left (  \\overline{x},\\overline{C},\\overline{a},\\overline{v}_{t_{i-1}}, t \\right ) \\\\\n\\frac{\\partial \\overline{v} }{\\partial t} = \\overline{V}\\left (  \\overline{C},\\overline{a}, \\overline{v}_{t_{i-1}}, t \\right )  &\n\\end{matrix}\\right.\n\\end{equation}\n\n\\subsection{Dynamic Systems Stochastic Modeling}\n%\n\\subsubsection{General system of equations and variable classification}\n\\label{subsub:generalSystemOfEqAndVarsClassification}\nIn Section ~\\ref{sub:controlAndSystem}, the derivation of the governing equations for a controllable system\nhave been reported. In this section, the mathematical framework of the modeling of dynamic stochastic systems,\nunder uncertainties, is derived.\n\\\\ Dynamic stochastic systems are the ones whose dynamic is characterized by intrinsic randomness. Random\nbehaviors, although present in nature, are often artificially introduced into physical models to account for the\nincapability of fully modeling part of the nature of the system behavior and/or of the phenomena bounding the\nphysical problem.\n\\\\The distinction between variables that are artificially considered aleatory and the ones intrinsically aleatory\ncorresponds with the classical definition of epistemic and aleatory uncertainties. 
From a\nsystem simulation point of view it is more relevant how these variables, the sources of aleatory behavior, change\nin time.\nPossible examples of random elements are:\n\\begin{itemize}\n \\item random variability of parameters (e.g., uncertainty in physical parameters)\n \\item presence of noise (background noise due to intrinsically stochastic behaviors or lack of detail in the\n simulation)\n \\item Uncertainty in the initial and boundary conditions\n \\item Random failure of components\n \\item aging effects.\n\\end{itemize}\nBefore introducing the mathematical models for uncertainty,  it can beneficial to recall\nEquation ~\\ref{eq:dThetaOverDT}, adding the initial conditions:\n\\begin{equation}\n\\label{eq:dThetaOverDTWithBoundary}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}\\left ( t \\right )}{\\partial t}=\\overline{H}\\left (  \\overline{\\theta}\\left ( t \\right ),t \\right ) \\\\\n \\overline{\\theta}\\left ( t_{0} \\right ) = \\overline{\\theta}_{0}\n\\end{matrix}\\right.\n\\end{equation}\nAt this point, each source of uncertainty or stochastic behavior is considered and progressively added in\nEquation ~\\ref{eq:dThetaOverDTWithBoundary}.\nFor the scope of this derivation, it is convenient to split the phase space into \\textit{continuous} (e.g.,temperature,\npressure, hentalpy, etc.) and discrete (e.g.,status of components, such as operational and failure states) variables\nas follows:\n\\begin{itemize}\n \\item $ \\overline{\\theta}^{c} \\in \\Phi \\subseteq \\mathbb{R}^{C}$, the set of continuous variables\n \\item $ \\overline{\\theta}^{d} \\in \\Psi \\subseteq \\mathbb{N}^{D}$, the set of discrete variables\n \\item $\\overline{\\theta}(t) = \\overline{\\theta}^{c} \\oplus \\overline{\\theta}^{d}$.\n\\end{itemize}\nConsequentially, Equation ~\\ref{eq:dThetaOverDTWithBoundary} assumes the following form:\n\\begin{equation}\n\\label{eq:systemThetaContAndDescrete}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}^{c}\\left ( t \\right )}{\\partial t}=f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right ) \\\\\n\\frac{\\partial  \\overline{\\theta}^{d}\\left ( t \\right )}{\\partial t}=g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )\\\\\n \\overline{\\theta}^{c}\\left ( t_{0} \\right ) = \\overline{\\theta}^{c}_{0}\\\\\n \\overline{\\theta}^{d}\\left ( t_{0} \\right ) = \\overline{\\theta}^{d}_{0}\n\\end{matrix}\\right.\n\\end{equation}\nNote that the time derivative operator has been also used for the time discontinuous variables, even\nif this is allowed only introducing complex extension of the time derivative operator. In this context, the $\\frac{\\partial  }{\\partial t}$ on the discontinuous space is employed for simplifying the notation only.\n\n\\subsubsection{Probabilistic Nature of the Parameters Characterizing the Equation}\nAs shown in Equation ~\\ref{eq:systemThetaContAndDescreteStaz}, The first stochastic behaviors to be introduced are the\nuncertainties associated with the:\n\\begin{itemize}\n  \\item initial conditions (i.e. 
$\\overline{\\theta}^{c}$ and $\\overline{\\theta}^{d}$ at time $t_{0}$), and\n  \\item parameters characteristic of  $f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )$ and $g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )$.\n\\end{itemize}\n\n\\begin{equation}\n\\label{eq:systemThetaContAndDescreteStaz}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}^{c}\\left ( t \\right )}{\\partial t}=f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d}, \\overline{\\alpha}_{staz} ,      t \\right ) \\\\\n\\frac{\\partial  \\overline{\\theta}^{d}\\left ( t \\right )}{\\partial t}=g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},t \\right )\\\\\n\\Pi \\left ( \\overline{\\theta}^{c},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{c}_{0}|,\\sigma_{c}^{2} \\right )\\\\\n\\Pi \\left ( \\overline{\\theta}^{d},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{d}_{0}|,\\sigma_{d}^{2} \\right ) \\\\\n\\overline{\\alpha}_{staz}\\left ( t \\right )=\\overline{\\alpha}_{staz}\\left ( t_{0} \\right ) \\sim pdf\\left ( \\overline{\\alpha}_{staz}^{0}|, \\sigma_{staz}^{2} \\right )\n\\end{matrix}\\right.\n\\end{equation}\nIn Equation ~\\ref{eq:systemThetaContAndDescreteStaz}, $\\Pi \\left ( \\overline{\\theta}^{c},t_{0} \\right )$ indicates the\nprobability distribution of $\\overline{\\theta}^{c}$ at the initial time $t=t_{0}$ while\n$pdf\\left ( \\mu|, \\sigma^{2} \\right )$ represents a generic probability distribution function having mean value\n$\\mu$ and sigma $\\sigma$.The term $\\overline{\\alpha}_{staz}$ is the vector of parameters affected by\nuncertainty but not varying over time.\n\\\\As already mentioned, Equation ~\\ref{eq:systemThetaContAndDescreteStaz} considers uncertainties whose values\ndo not change during the dynamic evolution of the system. This set of uncertainties accounts for most of the\ncommon source of aleatory behaviors. Examples of this kind of uncertainties are:\n\\begin{itemize}\n  \\item \\textit{Uncertainty associated with the heat conduction coefficient}:  This value is known (but uncertain) and has no physical reason to change during the simulation;\n  \\item \\textit{Uncertainty on failure temperature for a pipe}: This value is usually characterized by a probability distribution function but once the value has been set (like through random sampling) it will not change during the simulation.\n\\end{itemize}\nFrom a modeling perspective, all the probabilistic behaviors connected to $\\Pi \\left ( \\overline{\\theta}^{c},t_{0}\n\\right ) $, $\\Pi \\left ( \\overline{\\theta}^{d},t_{0} \\right )$ and $\\overline{\\alpha}_{staz}(t)$ can be modeled without\nchanging the dimensionality of the phase space (hence, no alteration of the solution algorithm is required), simply performing sampling of the input space. In addition, the Markovian assumption is still preserved.\n\n\\subsubsection{Variables Subject to Random Motion}\nThe next aleatory component to be accounted for is the set of parameters that continuously change over time (i.e. $\\overline{\\alpha}_{brow}$).\nIn other words,  these parameters are referred as if they behave like a \\textit{Brownian motion}.\nWhile what commonly is indicated as \\textit{Brownian motion} has not impact at the character\nthe space and time scales (characteristic of a physical system), there are parameters that have (or \\textbf{appear}\nto have) such behavior. 
The  \\textit{Brownian motion} characteristic of some variables can be completely\nsynthetic, due to the lack of modeling details in the simulation model.\n\\\\For instance, two examples of these randomly varying variables are:\n\\begin{itemize}\n  \\item \\textit{Cumulative damage growth in material}. Experimental data and models representing this\n  phenomenon show large uncertainties. There is also an intrinsic natural stochasticity driving\n  the accumulation of the damage (natural Brownian motion);\n  \\item \\textit{Heat conductivity in the fuel gap during heating of fuel}. During some transients there are\n  situations where the fuel is in contact with the clad while in others where there is the presence of a gap. While in\n  nature this is a discontinuous transition, it is not usually possible to model in such a way, especially if vibrations\n  of the fuel lead to high frequency oscillations. In this case, it would be helpful to introduce directly into the\n  simulation a random noise characterizing the thermal conductivity when these transitions occur (synthetic\n  Brownian motion).\n\\end{itemize}\nThe system of Equations~\\ref{eq:systemThetaContAndDescreteStaz} can be rewritten in the following form:\n\n\\begin{equation}\n\\label{eq:systemThetaContAndDescreteStazAndBrow}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}^{c}\\left ( t \\right )}{\\partial t}=f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d}, \\overline{\\alpha}_{staz} ,\\overline{\\alpha}_{brow},      t \\right ) \\\\\n\\frac{\\partial  \\overline{\\theta}^{d}\\left ( t \\right )}{\\partial t}=g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\\\\n\\frac{\\partial \\overline{\\alpha}_{brow} }{\\partial t}=b\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\Gamma \\left ( t \\right )\n\\\\\n\\Pi \\left ( \\overline{\\theta}^{c},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{c}_{0}|,\\sigma_{c}^{2} \\right )\\\\\n\\Pi \\left ( \\overline{\\theta}^{d},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{d}_{0}|,\\sigma_{d}^{2} \\right ) \\\\\n\\overline{\\alpha}_{staz}\\left ( t \\right )=\\overline{\\alpha}_{staz}\\left ( t_{0} \\right ) \\sim pdf\\left ( \\overline{\\alpha}_{staz}^{0}|, \\sigma_{staz}^{2} \\right ) \\\\\n\\overline{\\alpha}_{brow}\\left ( t_{0} \\right ) \\sim  \\overline{\\alpha}_{brow}^{0} \\Gamma \\left ( t_{0} \\right )\n\\end{matrix}\\right.\n\\end{equation}\nwhere $\\Gamma \\left ( t \\right )$ is 0-mean random noise and $\\overline{\\alpha}_{brow}$ is the set of parameters subject to \\textit{Brownian motion}.\n\\\\Clearly, the equation referring to the time change of the parameters subject to the \\textit{Brownian motion} should be interpreted in the \\textbf{Ito} sense [C. Gardiner, Stochastic Methods, Springer (2009)].\n\n\\subsubsection{Discontinuously and Stochastically varying variables}\nThe last and probably most difficult step is the introduction of parameters that are neither constant during the simulation nor continuously vary over time. As an example, consider a valve that, provided set of operating conditions, opens or closes. If this set of conditions is reached n times during the simulation, the probability of the valve correctly operating should be sampled n times. It is also foreseeable that the history of failure/success of the valve will impact future probability of failure/success.  
In this case the time evolution of such parameters (discontinuously stochastic changing parameters  $\\overline{\\alpha}_{DS}$) is governed by the following equation:\n\n\\begin{equation}\n\\label{eq:systemDiscAndStochVaryVars}\n\\frac{\\partial  \\overline{\\alpha }_{DS}\\left ( t \\right )}{\\partial t}=  \\overline{\\delta}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times \\overline{V}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times \\overline{p}\\left ( \\int_{t_{0}}^{t}  S\\left ( \\overline{\\theta}\\left ( t^{'} \\right ),t^{'} \\right )dt^{'} \\right )\n\\end{equation}\nwhere:\n\\begin{itemize}\n  \\item The function $\\overline{\\delta}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\n  \\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )$ is the delta of Dirac of the instant on which the\n  transition need to be evaluated (control logic signaling to the valve to open/close).\n  \\item The term $\\overline{p}\\left ( \\int_{t_{0}}^{t}  S\\left ( \\overline{\\theta}\\left ( t^{'} \\right ),t^{'} \\right )\\right )\n  = \\overline{p}\\left ( \\int_{t_{0}}^{t}  \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\n  \\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t dt\\right )$ represents the transition probability\n  between different states (in case of the valve: open/close). Note that the time integral of the\n  parameter history accounts for the memory of the component via the kernel $S\\left ( \\overline{\\theta}\\left ( t^{'}\n  \\right ),t^{'} \\right )$.\n  \\item The term $\\overline{V}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\n  \\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )$ is the rate of change of $\\overline{\\alpha }_{DS}$.\n  For a discrete parameter, it is defined as the value of the instantaneous $\\overline{\\alpha }_{DS}$ change.\n\\end{itemize}\nThe introduction of the history dependency introduced in the term $\\overline{p}$ determines that the system cannot be considered\nMarkovian if ``countermeasures'' are not taken. 
In order to make the system return to be Markovian, the phase space needs to be\nexpanded (i.e., increase its dimensionality): the time at which the\nparameters changed status and their corresponding values $\\left \\{  \\left (\\overline{\\alpha}_{DS}, t \\right )_{i} \\right \\} = \\left \\{  \\overline{\\alpha}_{DS}, t_{i} \\right \\} = \\overline{\\overline{\\alpha}}_{DS}, \\overline{t}\\, \\left ( for\\, i=1,...,n \\right )$.\n\\\\Equation~\\ref{eq:systemDiscAndStochVaryVars} now assumes the form:\n\\begin{equation}\n\\label{eq:systemDiscAndStochVaryVarsExpanded}\n\\begin{matrix}\n\\frac{\\partial  \\overline{\\alpha }_{DS}\\left ( t \\right )}{\\partial t}=  \\overline{\\delta}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times \\overline{V}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times \\overline{p}\\left ( \\overline{\\overline{\\alpha}}_{DS},\\overline{t},\\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t  \\right ) \\\\ \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\!\nfor \\, t\\geq t_{n}\n\\end{matrix}\n\\end{equation}\nThis formulation introduces a phase space that is continuously growing over time $n \\rightarrow \\infty$. In this respect, it is useful to introduce and discuss possible assumptions:\n\\begin{enumerate}\n  \\item The memory of the past is not affected by the time distance; in this case:\n  \\begin{equation}\n   \\overline{p}\\left ( \\overline{\\overline{\\alpha}}_{DS},\\overline{t},\\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t  \\right ) =  \\overline{p}\\left ( \\overline{\\overline{\\alpha}}_{DS},\\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t  \\right )\n  \\end{equation}\n  The dimensionality of the phase space is still growing during the simulation since more and more sampling is\n  performed, but the time integral is removed from the transition probability. A simple example of this situation is\n  a component activated on demand in which failure is a function of all previous sampling, but not of when the\n  component was sampled or in which sequence the outcome occurred.\n  \\item  The number of samples is determined before the simulation itself takes place (e.g.,$n$ times) In this case\n  the different $\\overline{\\alpha}_{DS_{i}}$ could be treated explicitly as $\\overline{\\alpha}_{staz}$   while\n  $\\overline{t}$ would still remain a variable to be added to the phase space (if simplification 1 is not valid) but of\n  fixed dimension. 
In this case $\\overline{t}$ still needs to be computed and its expression is:\n  \\begin{equation}\n   \\overline{t} \\left ( t \\right ) = \\int_{t_{0}}^{t} t^{'}  \\, \\overline{\\delta }\\left ( \\overline{\\alpha }_{DS},\n   \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t^{'} \\right )  dt^{'}\n  \\end{equation}\n  The transition probability becomes:\n  \\begin{equation}\n     \\overline{p}\\left ( \\int_{t_{0}}^{t} S\\left ( \\overline{t} \\right ) dt, \\overline{\\alpha}_{DS}, \\overline{\\theta}^{c},\n     \\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\n  \\end{equation}\n  For example, this is the case of a component that is sampled a fixed number of times for a given simulation,\n  while the contribution of the history to the transition probability might decay exponentially over time. This\n  approximation might eliminate the memory from the system by adding $n$ variables $t_{i} \\, \\,\n  (for \\, \\, i=1,...,n)$ to the phase space, thus restoring the Markovian character.\n  \\item Another possible approximation, alternative to the previous one, is to limit the memory of the system (here\n  explicitly represented by $ \\int_{t_{0}}^{t}  \\overline{\\alpha }_{DS} dt$)\n  to a fixed number of steps back in the past. In this case $n$ is always bounded. Therefore, adding  $\\left \\{\n  \\overline{\\alpha}_{DS_{i}},t_{i} \\right \\}, \\left ( for\\: i=1,...,n \\right )$ would preserve the\n  Markovian properties of the system. This approximation allows for eliminating the memory from the system by\n  expanding the phase space by $2n$ variables.\n\\end{enumerate}\nFrom a software implementation point of view, the fully general case is the most complex situation: without any of\nthese simplifications, we would have to deal with a system that is never reducible to a Markovian one and would\ntherefore be forced to use the whole history of the system to forecast its evolution at each time step.\nAssumption 1 limits this cost by restricting it to the set of values assumed by the\nvariable, but would still lead to a situation that is very difficult to deal with. Assumption 2 would\nrequire an expansion of the phase space to introduce the times at which the transitions\nhappen, but the value that the parameter assumes at each sampling could be\ntreated as an initial condition. 
Assumption 3 would instead require the expansion of the\nphase space for both the times and the values of the\ntransitioning variables.\n\\\\Based on these simplifications, the system of\nEquations~\\ref{eq:systemThetaContAndDescreteStazAndBrow}, accounting also for $ \\overline{\\alpha}_{DS}$, can be cast into the form:\n\\begin{equation}\n\\label{eq:fullSystem}\n\\begin{split}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}^{c}\\left ( t \\right )}{\\partial t}=f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d}, \\overline{\\alpha}_{staz} ,\\overline{\\alpha}_{brow},      t \\right ) \\\\\n\\frac{\\partial  \\overline{\\theta}^{d}\\left ( t \\right )}{\\partial t}=g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\\\\n\\frac{\\partial \\overline{\\alpha}_{brow} }{\\partial t}=b\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\Gamma \\left ( t \\right ) \\\\\n\\frac{\\partial  \\overline{\\alpha }_{DS}\\left ( t \\right )}{\\partial t}=  \\overline{\\delta}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times \\overline{V}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times\n\\\\ \\times  \\overline{p}\\left ( \\int_{t_{0}}^{t} S\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\n  \\overline{ \\alpha}_{staz},\\overline{\\alpha}_{brow},t^{'} \\right ) dt^{'} \\right )\n\\\\\n\\Pi \\left ( \\overline{\\theta}^{c},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{c}_{0},\\sigma_{c}^{2} \\right )\\\\\n\\Pi \\left ( \\overline{\\theta}^{d},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{d}_{0},\\sigma_{d}^{2} \\right ) \\\\\n\\overline{\\alpha}_{staz}\\left ( t \\right )=\\overline{\\alpha}_{staz}\\left ( t_{0} \\right ) \\sim pdf\\left ( \\overline{\\alpha}_{staz}^{0}, \\sigma_{staz}^{2} \\right ) \\\\\n\\overline{\\alpha}_{brow}\\left ( t_{0} \\right ) \\sim  \\overline{\\alpha}_{brow}^{0} \\Gamma \\left ( t_{0} \\right ) \\\\\n\\overline{\\alpha}_{DS} \\left ( t_{0} \\right ) = \\overline{\\alpha}_{DS} ^{0}\n\\end{matrix}\\right.\n\\end{split}\n\\end{equation}\nIntroducing Simplifications \\textbf{1} and \\textbf{3} (the most\nappropriate in this context), Equation~\\ref{eq:fullSystem} becomes:\n\\begin{equation}\n\\label{eq:fullSystemApprox1-3}\n\\begin{split}\n\\left\\{\\begin{matrix}\n\\frac{\\partial  \\overline{\\theta}^{c}\\left ( t \\right )}{\\partial t}=f\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d}, \\overline{\\alpha}_{staz} ,\\overline{\\alpha}_{brow},      t \\right ) \\\\\n\\frac{\\partial  \\overline{\\theta}^{d}\\left ( t \\right )}{\\partial t}=g\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\\\\n\\frac{\\partial \\overline{\\alpha}_{brow} }{\\partial t}=b\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\\Gamma \\left ( t \\right ) \\\\\n\\frac{\\partial  \\overline{\\alpha }_{DS}\\left ( t \\right )}{\\partial t}=  \\overline{\\delta}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times 
\\overline{V}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) \\times\n\\\\ \\times  \\overline{p}\\left ( \\overline{\\alpha }_{DS}, \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right )\n\\\\\n\\Pi \\left ( \\overline{\\theta}^{c},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{c}_{0},\\sigma_{c}^{2} \\right )\\\\\n\\Pi \\left ( \\overline{\\theta}^{d},t_{0} \\right ) \\sim pdf\\left ( \\overline{\\theta}^{d}_{0},\\sigma_{d}^{2} \\right ) \\\\\n\\overline{\\alpha}_{staz}\\left ( t \\right )=\\overline{\\alpha}_{staz}\\left ( t_{0} \\right ) \\sim pdf\\left ( \\overline{\\alpha}_{staz}^{0}, \\sigma_{staz}^{2} \\right ) \\\\\n\\overline{\\alpha}_{brow}\\left ( t_{0} \\right ) \\sim  \\overline{\\alpha}_{brow}^{0} \\Gamma \\left ( t_{0} \\right ) \\\\\n\\overline{\\alpha}_{DS} \\left ( t_{0} \\right ) = \\overline{\\alpha}_{DS} ^{0}\n\\end{matrix}\\right.\n\\end{split}\n\\end{equation}\nThis dissertation does not cover all the possible phenomena, but it\nprovides a sufficient mathematical framework for extrapolating toward cases that are not explicitly treated.\n\\\\ Given the presence of all these sources of stochastic behavior, every\nexploration of the uncertainties (through sampling strategies) only\nrepresents a possible trajectory of the system in the phase space. Hence,\nthe assessment of the probability of a particular response is much more\ninformative than the response itself.\n\\\\The explanation of these concepts is deferred to the next section.\n%\n%\n% Formulation of the equation set in a statistical framework\n%\n%\n\\subsection{Formulation of the equation set in a statistical framework}\nBased on the premises reported in the previous sections and assuming\nthat at least one of the simplifications mentioned in\nSection~\\ref{subsub:generalSystemOfEqAndVarsClassification} is applicable (i.e., 
the system can be\ncast as Markovian), it is necessary to investigate the system evolution\nin terms of its probability density function in the global phase space\n$\\overline{\\theta}$ via the Chapman-Kolmogorov\nequation~\\cite{ProbReactoDynamicsDevooght}.\n\\\\The integral form of the Chapman-Kolmogorov equation is the following:\n\\begin{equation}\n\\label{eq:chapKolmogIntegralForm}\n\\begin{matrix}\n\\Pi \\left (\\overline{\\theta}_{3},t_{3}|\\overline{\\theta}_{1},t_{1}  \\right ) = \\int\nd\\overline{\\theta}_{2} \\Pi\\left (\\overline{\\theta}_{2},t_{2}|\n\\overline{\\theta}_{1},t_{1}  \\right )   \\Pi\\left (\\overline{\\theta}_{3},t_{3}|\n\\overline{\\theta}_{2},t_{2}  \\right )   &\nwhere \\: \\:   t_{1} < t_{2} < t_{3}\n\\end{matrix}\n\\end{equation}\nwhile its differential form is:\n\\begin{equation}\n\\label{eq:chapKolmogDiffForm}\n\\frac{\\partial \\Pi \\left (\\overline{\\theta},t|\\overline{\\theta}_{0},t_{0}  \\right )  }{\\partial t} =\n\\mathcal{L}_{CK}\\left (   \\Pi \\left (\\overline{\\theta},t|\\overline{\\theta}_{0},t_{0}  \\right ) \\right )\n\\end{equation}\nThe transition from the integral to the differential form is possible under the following assumptions:\n\\begin{equation}\n\\label{eq:chapKolmogAssump1}\n\\lim_{\\Delta t \\to 0} \\frac{1}{\\Delta t}  \\int_{|\n\\overline{\\theta}_{2}-\\overline{\\theta}_{1}|>\\varepsilon }   \\Pi \\left\n(\\overline{\\theta}_{2},t+\\Delta t|\\overline{\\theta}_{1},t  \\right )\nd\\overline{\\theta}_{2} = 0\n\\end{equation}\n\n\\begin{equation}\n\\label{eq:chapKolmogAssump2}\n\\lim_{\\Delta t \\to 0} \\frac{1}{\\Delta t} \\Pi \\left (\\overline{\\theta}_{2},t+\\Delta t|\n\\overline{\\theta}_{1},t  \\right ) = W\\left ( \\overline{\\theta}_{2}|\n\\overline{\\theta}_{1},t \\right )\n\\end{equation}\n\n\\begin{equation}\n\\label{eq:chapKolmogAssump3}\n\\lim_{\\Delta t \\to 0} \\frac{1}{\\Delta t}  \\int_{|\n\\overline{\\theta}_{2}-\\overline{\\theta}_{1}|<\\varepsilon }\n\\left ( \\overline{\\theta}_{2,i} - \\overline{\\theta}_{1,i} \\right )\n\\Pi \\left (\\overline{\\theta}_{2},t+\\Delta t|\\overline{\\theta}_{1},t  \\right )\nd\\overline{\\theta}_{2} = A_{i}\\left ( \\overline{\\theta}_{1},t \\right ) +\n\\mathcal{O}\\left ( \\varepsilon \\right )\n\\end{equation}\n\n\\begin{equation}\n\\label{eq:chapKolmogAssump4}\n\\lim_{\\Delta t \\to 0} \\frac{1}{\\Delta t}  \\int_{|\n\\overline{\\theta}_{2}-\\overline{\\theta}_{1}|<\\varepsilon }\n\\left ( \\overline{\\theta}_{2,i} - \\overline{\\theta}_{1,i} \\right ) \\left (\n\\overline{\\theta}_{2,j} - \\overline{\\theta}_{1,j} \\right )\n\\Pi \\left (\\overline{\\theta}_{2},t+\\Delta t|\\overline{\\theta}_{1},t  \\right )\nd\\overline{\\theta}_{2} = B_{i,j}\\left ( \\overline{\\theta}_{1},t \\right ) +\n\\mathcal{O}\\left ( \\varepsilon \\right )\n\\end{equation}\n\nThe first condition guarantees the continuity of $\\Pi \\left (\\overline{\\theta},t|\\overline{\\theta}_{0},t_{0}  \\right )$, while the other three enforce the existence of the finite quantities $W\\left ( \\overline{\\theta}_{2}|\\overline{\\theta}_{1},t \\right )$, $A_{i}\\left ( \\overline{\\theta}_{1},t \\right )$, and $B_{i,j}\\left ( \\overline{\\theta}_{1},t \\right )$.\nEquation~\\ref{eq:chapKolmogIntegralForm} can furthermore be decomposed into its continuous and discrete components:\n\n\\begin{equation}\n\\label{eq:chapKolmogIntegralFormContDisct}\n\\left\\{\\begin{matrix}\n\\Pi \\left (\\overline{\\theta}_{3}^{c},t_{3}|\\overline{\\theta}_{1}^{c},t_{1}  \\right )\n= \\int \\Pi \\left (\\overline{\\theta}_{2}^{c},t_{2}|\\overline{\\theta}_{1}^{c},t_{1}\n\\right ) \\Pi \\left (\\overline{\\theta}_{3}^{c},t_{3}|\\overline{\\theta}_{2}^{c},t_{2}\n\\right ) d\\overline{\\theta}_{2}^{c}\n\\\\\n\\Pi \\left 
(\\overline{\\theta}_{3}^{d},t_{3}|\\overline{\\theta}_{1}^{d},t_{1}  \\right )\n= \\int \\Pi \\left (\\overline{\\theta}_{2}^{d},t_{2}|\\overline{\\theta}_{1}^{d},t_{1}\n\\right ) \\Pi \\left (\\overline{\\theta}_{3}^{d},t_{3}|\\overline{\\theta}_{2}^{d},t_{2}\n\\right ) d\\overline{\\theta}_{2}^{d}\n\\end{matrix}\\right.\n\\: \\: \\: where \\:\\:   t_{1}<t_{2}<t_{3}\n\\end{equation}\n\nand its differential form is as follows:\n\\begin{equation}\n\\label{eq:chapKolmogDiffFormContDisct}\n\\left\\{\\begin{matrix}\n\\frac{\\partial \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right ) }{\\partial t} = \\mathcal{L}_{CK}^{c}  \\left (     \\Pi \\left\n(\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0} \\right ),\n\\overline{\\theta}^{d},\\overline{\\alpha}_{brow},\\overline{\\alpha}_{staz},\n\\overline{\\alpha}_{DS},t  \\right )\n\\\\\n\\frac{\\partial \\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0}\n\\right ) }{\\partial t} = \\mathcal{L}_{CK}^{d}  \\left (     \\Pi \\left\n(\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0} \\right ),\n\\overline{\\theta}^{c},t  \\right )\n\\end{matrix}\\right.\n\\end{equation}\nwhere:\n\\begin{itemize}\n \\item  $\\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0} \\right\n )$ is the probability of the system being in state $\\overline{\\theta}^{c}$ at time $t$ given that\n the system was in $\\overline{\\theta}^{c}_{0}$ at time $t_{0}$;\n \\item $\\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0} \\right\n )$ is the probability of the system being in state $\\overline{\\theta}^{d}$ at time $t$ given\n that the system was in $\\overline{\\theta}^{d}_{0}$ at time $t_{0}$;\n \\item $\\mathcal{L}_{CK}^{c} \\left ( \\cdot  \\right )$ and\n $\\mathcal{L}_{CK}^{d} \\left ( \\cdot  \\right )$  are specific Chapman-Kolmogorov operators that will be described in the following section.\n\\end{itemize}\n%\n%\n% The Chapman-Kolmogorov Equation\n%\n%\n\\subsection{The Chapman-Kolmogorov Equation}\n\\label{sec:ChapmanKolmogorov}\nThe system of equations~\\ref{eq:thetaDecomposition}, written in integral\nform, can be solved in a differential form through the Chapman-Kolmogorov (C-K)\noperator~\\cite{ProbReactoDynamicsDevooght}:\n\n\\begin{equation}\n\\label{eq:CK}\n\\begin{matrix}\n\\frac{\\partial \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right ) }{\\partial t} = - \\sum_i \\frac{\\partial }{\\partial \\overline{\\theta}_{i}^{c}} \\left ( A_{i}\\left (  \\overline{\\theta}^{c}, \\overline{\\theta}^{d}, t\\right ) \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right )  \\right ) +\n\\\\\n+ \\frac{1}{2}\\sum_{i,j} \\frac{\\partial^2 }{\\partial \\overline{\\theta}^{c}_{i} \\partial \\overline{\\theta}^{c}_{j}}\\left ( B_{i,j}\\left (  \\overline{\\theta}^{c}, \\overline{\\theta}^{d}, t\\right ) \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right )  \\right ) +\n\\\\\n+ \\int \\left (  W\\left ( \\overline{\\theta}^{c}|\n\\overline{\\theta}^{'c},\\overline{\\theta}^{d},t \\right )\\Pi \\left (\\overline{\\theta}^{'c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right ) - W\\left ( \\overline{\\theta}^{'c}|\n\\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )\\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right )  \\right )d\\overline{\\theta}^{'c}\n\\end{matrix}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial \\Pi \\left 
(\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0}\n\\right ) }{\\partial t} =\n\\sum_{i} \\left ( W\\left ( \\overline{\\theta}^{d}|\n\\overline{\\theta}^{d}_{i},\\overline{\\theta}^{c},t \\right ) \\Pi \\left (\\overline{\\theta}^{d}_{i},t|\\overline{\\theta}^{d}_{0},t_{0}\n\\right ) - W\\left ( \\overline{\\theta}^{d}_{i}|\n\\overline{\\theta}^{d},\\overline{\\theta}^{c},t \\right ) \\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}^{d}_{0},t_{0}\n\\right ) \\right )\n\\end{equation}\nwhere:\n\\begin{equation}\n\\begin{matrix}\nA_{i}\\left ( \\overline{\\theta}, t \\right ) = \\left\\{\\begin{matrix}\n0 & \\: \\: if  \\: \\: \\overline{\\theta}_{i} \\in \\overline{\\theta}^{d}\n\\\\\nf\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},\\overline{\\alpha}_{staz},\\overline{\\alpha}_{brow},t \\right ) +\\frac{1}{2}\\frac{\\partial b\\left ( \\overline{\\theta}^{c},t \\right )}{\\partial \\overline{\\theta}^{c}}Qb\\left ( \\overline{\\theta}^{c},t \\right ) \\: \\: & if  \\: \\: \\overline{\\theta}_{i} \\notin \\overline{\\theta}^{d}\n\\end{matrix}\\right.\n\\\\\nB_{i,j}\\left ( \\overline{\\theta}, t \\right ) = \\left\\{\\begin{matrix}\n0 & \\: \\: if  \\: \\: \\overline{\\theta}_{i} \\: \\: or \\: \\: \\overline{\\theta}_{j}  \\in \\overline{\\theta}^{d}\n\\\\\nb\\left ( \\overline{\\theta}^{c},t \\right )Qb^{T}\\left (  \\overline{\\theta}^{c},t \\right ) \\: \\: & \\: \\: if  \\: \\: \\overline{\\theta}_{i} \\: \\: and \\: \\: \\overline{\\theta}_{j}  \\notin \\overline{\\theta}^{d}\n\\end{matrix}\\right.\n\\end{matrix}\n\\end{equation}\nThis system of equations is composed of four main terms that identify four different types of processes:\n\\begin{itemize}\n  \\item Drift process\n  \\item Diffusion process\n  \\item Jumps in continuous space\n  \\item Jumps in discrete space (component state transitions).\n\\end{itemize}\n These four processes are described in the following sub-sections.\n%\n%\n% Drift process\n%\n%\n\\subsubsection{Drift Process}\n\\label{sec:CKDrift}\nThe drift process is defined by Liouville's equation:\n\\begin{equation}\n\\label{eq:lioville}\n  \\frac{\\partial \\, \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) }{\\partial t} = -\\sum_{i}\\frac{\\partial }{\\partial \\overline{\\theta}^{c}_{i}}\\left ( A_{i}\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right ) \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) \\right )\n\\end{equation}\nIt is important to note that this equation describes a completely deterministic motion, governed by the equation:\n\\begin{equation}\n\\label{eq:determLioville}\n   \\frac{\\partial \\, \\overline{\\theta}^{c}_{i}\\left ( t \\right ) }{\\partial t} = A_{i}\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )\n\\end{equation}\nIf $\\overline{\\theta}^{c} \\left (\\overline{\\theta}^{c}_{0},\\overline{\\theta}^{d},t  \\right )$ is the solution of Equation~\\ref{eq:determLioville}, then the solution of Liouville's equation is:\n\\begin{equation}\n\\label{eq:solLioville}\n\\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) = \\delta\\left ( \\overline{\\theta}^{c} - \\overline{\\theta}^{c}\\left ( \\overline{\\theta}^{c}_{0},\\overline{\\theta}^{d},t \\right ) \\right )\n\\end{equation}\nprovided the initial condition:\n\\begin{equation}\n\\label{eq:solLiovilleInitCond}\n\\Pi \\left (\\overline{\\theta}^{c},t_{0}|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) = \\delta\\left ( \\overline{\\theta}^{c} - 
\\overline{\\theta}^{c}_{0} \\right )\n\\end{equation}\n%\n%\n% Diffusion process\n%\n%\n\\subsubsection{Diffusion Process}\n\\label{subsec:CKDiffusion}\nThis process is described by the Fokker-Planck equation:\n\\begin{equation}\n\\begin{matrix}\n\\frac{\\partial \\, \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) }{\\partial t} =\n-\\sum_{i}\\frac{\\partial }{\\partial \\overline{\\theta}^{c}_{i}}\\left ( A_{i}\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right ) \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) \\right ) +\n\\\\\n+ \\frac{1}{2}\\sum_{i,j} \\frac{\\partial^2 }{\\partial \\overline{\\theta}^{c}_{i} \\partial \\overline{\\theta}^{c}_{j}}\\left ( B_{i,j}\\left (  \\overline{\\theta}^{c}, \\overline{\\theta}^{d}, t\\right ) \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right )  \\right )\n\\end{matrix}\n\\end{equation}\nwhere $A_{i}\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )$ is the drift vector and $B_{i,j}\\left (  \\overline{\\theta}^{c}, \\overline{\\theta}^{d}, t\\right ) $  is the diffusion matrix.\n\\\\Provided the initial condition in Equation~\\ref{eq:solLiovilleInitCond}, the Fokker-Planck equation describes a system moving with drift velocity\n $A\\left ( \\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )$, on which a Gaussian fluctuation with covariance matrix $B\\left (  \\overline{\\theta}^{c}, \\overline{\\theta}^{d}, t\\right ) $ is superimposed.\n%\n%\n% Jumps in continuous space\n%\n%\n\\subsubsection{Jumps in Continuous Space}\n\\label{subsec:CKJumpsCont}\nThis process is described by the Master equation:\n\\begin{equation}\n\\frac{\\partial \\, \\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}^{c}_{0},t_{0}  \\right ) }{\\partial t} =  \\int \\left (  W\\left ( \\overline{\\theta}^{c}|\n\\overline{\\theta}^{'c},\\overline{\\theta}^{d},t \\right )\\Pi \\left (\\overline{\\theta}^{'c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right ) - W\\left ( \\overline{\\theta}^{'c}|\n\\overline{\\theta}^{c},\\overline{\\theta}^{d},t \\right )\\Pi \\left (\\overline{\\theta}^{c},t|\\overline{\\theta}_{0}^{c},t_{0}\n\\right )  \\right )d\\overline{\\theta}^{'c}\n\\end{equation}\nProvided the initial condition in Equation~\\ref{eq:solLiovilleInitCond}, it describes a process characterized by\nstraight lines interspersed with discontinuous jumps whose distribution is given by $W\\left ( \\overline{\\theta}^{c}|\n\\overline{\\theta}^{'c},\\overline{\\theta}^{d},t \\right )$.\n\n%\n%\n% Jumps in discrete space\n%\n%\n\\subsubsection{Jumps in Discrete Space}\n\\label{subsec:CKJumpsDiscrete}\nTransitions in the discrete space occur in terms of jumps; the formulation of\n\\begin{equation}\n\\frac{\\partial \\, \\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}^{d}_{0},t_{0}  \\right ) }{\\partial t} =\n\\mathcal{L}_{CK}^{d}\\left ( \\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0}\n\\right )  \\right )\n\\end{equation}\nis therefore similar to the Master equation, recast for a discrete phase space:\n\\begin{equation}\n\\frac{\\partial \\, \\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}^{d}_{0},t_{0}  \\right ) }{\\partial t} =  \\sum_{i} \\left (  W\\left ( \\overline{\\theta}^{d}|\n\\overline{\\theta}^{d}_{i},\\overline{\\theta}^{c},t \\right )\\Pi \\left (\\overline{\\theta}^{d}_{i},t|\\overline{\\theta}^{d}_{0},t_{0}\n\\right ) - W\\left ( 
\\overline{\\theta}^{d}_{i}|\n\\overline{\\theta}^{d},\\overline{\\theta}^{c},t \\right )\\Pi \\left (\\overline{\\theta}^{d},t|\\overline{\\theta}_{0}^{d},t_{0}\n\\right )  \\right )\n\\end{equation}\n\n", "meta": {"hexsha": "5fdd773bd04de19e6f1724e0602bdc4d417a9a97", "size": 44610, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/theory_manual/ravenStructure.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "doc/theory_manual/ravenStructure.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "doc/theory_manual/ravenStructure.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 68.7365177196, "max_line_length": 1071, "alphanum_fraction": 0.6965254427, "num_tokens": 14265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.880797071719777, "lm_q2_score": 0.6584175139669997, "lm_q1q2_score": 0.5799322182711486}}
{"text": "%---------------------------Shape and Size---------------------------\n\\section{Shape and Size\\label{s:hex-shape-and-size}}\n\nLet $R$ be the relative size squared as defined in \\S\\ref{s:hex-relative-size-squared}\nand $S$ be the shape as defined in \\S\\ref{s:hex-shape}.\nThe ``shape and size'' metric is the the product of these two numbers:\n\\[\n  q = RS\n\\]\n\n\\hexmetrictable{shape and size}%\n{$1$}%                                        Dimension\n{$[0.2,1]$}%                                  Acceptable range\n{$[0,1]$}%                                    Normal range\n{$[0,1]$}%                                    Full range\n{Dependent on $\\overline{V}$}%                Cube\n{\\cite{knu:03}}%                              Citation\n{v\\_hex\\_shape\\_and\\_size}%                   Verdict function name\n", "meta": {"hexsha": "5a7673391e78c18653fc330d7615e3177f6a39ef", "size": 796, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShapeAndSize.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShapeAndSize.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShapeAndSize.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 41.8947368421, "max_line_length": 86, "alphanum_fraction": 0.4648241206, "num_tokens": 190, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8902942348544448, "lm_q2_score": 0.6513548511303338, "lm_q1q2_score": 0.5798974688058113}}
{"text": "\\documentclass{article}\n%include polycode.fmt\n%\n\\begin{document}\n\nSemigroups, semigroupoids... are there more semithings?\n\n\\subsection{Group, Semigroup, Monoid}\n\nLet's start \"from the beginning\". Groups is 19th century thing.  A \\emph{group}\nis a set $G$ with a binary operation $\\bullet$, identity element $e$ and\ninverse element $a^{-1}$ for each $a \\in G$.\nWith an obvious laws you can imagine relating these.\n\nIf we remove \\emph{identity} element from a group, \"obviously\" we get a \\emph{semigroup}.\nBecause if there's \\emph{any} element $x \\in G$, we can recover the identity\nelement by $e = x \\bullet x^{-1}$.\n\nSo what's a semigroup with identity element? For sometimes it were just that,\nuntil we started call it \\emph{monoid}. Go figure, naming is hard, not only in\nprogramming, but also in mathematics.\\foonote{\\url{https://math.stackexchange.com/questions/156952/why-the-terminology-monoid}}\n\n\\subsection{Category, Groupoid, Semigroupoid}\n\nYou are hopefully familiar with a concept of a category.\nI repeat a definition here:\n\n\\textbf{Definition} (Awodey 1.1)\nA \\emph{category} consist of the following data\n\\begin{itemize}\n\\item Objects: $A, B, C, \\ldots$\n\\item Arrows: $f, g, h, \\ldots$\n\n\\item For each arrow $f$, there are given objects\n\n\\begin{equation*}\n\\mathrm{dom}(f), \\qquad \\mathrm{cod}(f)\n\\end{equation*}\n\ncalled the \\emph{domain} and \\emph{codomain} of $f$. We write\n\n\\begin{equation*}\nf : A \\to B\n\\end{equation*}\n\nto indicate that $A = \\mathrm{dom}(f)$ and $B = \\mathrm{cod}(f)$.\n\n\\item Given arrows $f : A \\to B$ and $g : B \\to C$, that is, with\n\n\\begin{equation*}\n\\mathrm{cod}(f) = \\mathrm{dom}(g)\n\\end{equation*}\n\nthere is given an arrow\n\n\\begin{equation*}\ng \\circ f : A \\to C\n\\end{equation*}\n\ncalled the \\emph{composite} of $f$ and $g$.\n\n\\item For each object $A$, there is given an arrow\n\n\\begin{equation*}\n1_A : A \\to A\n\\end{equation*}\n\ncalled the \\emph{identity arrow} of $A$.\n\n\\item Associativity:\n\n\\begin{equation*}\nh \\circ (g \\circ f) = (h \\circ g) \\circ f\n\\end{equation*}\n\nfor all $f : A \\to B$, $g : B \\to C$, $h : C \\to D$.\n\n\\item Unit:\n\n\\begin{equation*}\nf \\circ 1_A = f = 1_B \\circ f\n\\end{equation*}\n\nfor all $f : A \\to B$.\n\\end{itemize}\n\nIf you think hard (or read a book), you'll learn that a single object\ncategory is a monoid: category arrows are monoid elements, and the\nlaws work out.\n\nThe group analogue is called \\emph{groupoid}. In addition to\ncategory data, we require that for each arrow $f : A \\to B$ there is an inverse arrow $f^{-1} : B \\to A$,\nsuch that $f \\circ f^{-1} = 1_B$ and $f^{-1} \\circ f = 1_A$.\nOr more succinctly: that each arrow is an isomorphism.\n\nBut we can also remove stuff: if we remove identity arrows,\nand unit law we get \\emph{semigroupoid}.\n\n\\textbf{Definition}\nA \\emph{semigroupoid} consist of the following data\n\\begin{itemize}\n\\item Objects: $A, B, C, \\ldots$\n\\item Arrows: $f, g, h, \\ldots$\n\n\\item For each arrow $f$, there are given objects\n\n\\begin{equation*}\n\\mathrm{dom}(f), \\qquad \\mathrm{cod}(f)\n\\end{equation*}\n\ncalled the \\emph{domain} and \\emph{codomain} of $f$. 
We write\n\n\\begin{equation*}\nf : A \\to B\n\\end{equation*}\n\nto indicate that $A = \\mathrm{dom}(f)$ and $B = \\mathrm{cod}(f)$.\n\n\\item Given arrows $f : A \\to B$ and $g : B \\to C$, that is, with\n\n\\begin{equation*}\n\\mathrm{cod}(f) = \\mathrm{dom}(g)\n\\end{equation*}\n\nthere is given an arrow\n\n\\begin{equation*}\ng \\circ f : A \\to C\n\\end{equation*}\n\ncalled the \\emph{composite} of $f$ and $g$.\n\n\\item Associativity:\n\n\\begin{equation*}\nh \\circ (g \\circ f) = (h \\circ g) \\circ f\n\\end{equation*}\n\nfor all $f : A \\to B$, $g : B \\to C$, $h : C \\to D$.\n\n\\end{itemize}\n\nA reader probably asks themselves: are there interesting, non-contrived\nexamples of semigroupoids which aren't also categories?\nThere are. If a poset (a set with a partial order) is\nan example of a category, then a set with a strict order\\footnote{\\url{http://mathworld.wolfram.com/StrictOrder.html}, I dare not call a set with a strict order a soset}\nis an example of a semigroupoid.\n\nAs a concrete example, the natural numbers with a unique arrow between $n$ and $m$ when $n < m$ form a semigroupoid.\n\n\\begin{equation}\n0 < 1 < 2 < 3 < 4 < \\cdots\n\\end{equation}\n\nThere are no identity arrows, as $n \\not< n$, but associativity works out:\nif $n < m$ and $m < p$ then $n < p$. Let's call this semigroupoid $\\mathbf{LT}$.\n\nFinally, a plot twist. \\href{https://ncatlab.org/nlab/show/semicategory}{nLab} calls semigroupoids semicategories,\nand doesn't even mention semigroupoid as an alternative name!\n\n\\subsection{Functors, Semifunctors...}\n\nRecall the definition of a functor.\n\n\\textbf{Definition} (Awodey 1.2)\nA \\emph{functor}\n\n\\begin{equation*}\nF : \\mathbf{C} \\to \\mathbf{D}\n\\end{equation*}\n\nbetween categories $\\mathbf{C}$ and $\\mathbf{D}$ is a mapping of objects\nto objects and arrows to arrows, in such a way that\n\\begin{itemize}\n\\item $F (f : A \\to B) = F(f) : F(A) \\to F(B)$,\n\\item $F(1_A) = 1_{F(A)}$,\n\\item $F(g \\circ f) = F(g) \\circ F(f)$.\n\\end{itemize}\nThat is, $F$ preserves domains and codomains, identity arrows,\nand composition. A functor $F : \\mathbf{C} \\to \\mathbf{D}$ thus gives\na sort of \"picture\" -- perhaps distorted -- of $\\mathbf{C}$ in $\\mathbf{D}$.\n\nFunctors preserve identities, but semigroupoids don't have identities to be preserved.\nWe need a weaker concept:\n\n\\textbf{Definition}\nA \\emph{semifunctor}\n\n\\begin{equation*}\nF : \\mathbf{C} \\to \\mathbf{D}\n\\end{equation*}\n\nbetween semigroupoids $\\mathbf{C}$ and $\\mathbf{D}$ is a mapping of objects\nto objects and arrows to arrows, in such a way that\n\\begin{itemize}\n\\item $F (f : A \\to B) = F(f) : F(A) \\to F(B)$,\n\\item $F(g \\circ f) = F(g) \\circ F(f)$.\n\\end{itemize}\nThat is, $F$ preserves domains and codomains,\nand composition. A semifunctor $F : \\mathbf{C} \\to \\mathbf{D}$ thus gives\na sort of \"picture\" -- perhaps distorted -- of $\\mathbf{C}$ in $\\mathbf{D}$.\n\nThe identity functor is obviously a semifunctor; the successor functor $S : \\mathbf{N} \\to \\mathbf{N}$\nis also a semifunctor $\\mathbf{LT} \\to \\mathbf{LT}$.\n\nIn Haskell, it would be silly to define a class for (endo)semifunctors:\n\n\\begin{code}\n-- semimap f . semimap g = semimap (f . 
g)\nclass Semifunctor f where\n    semimap :: (a -> b) -> f a -> f b\n\\end{code}\n\nIt's the |Functor| type-class without an identity law.\nOn the other hand, something like\n\n\\begin{code}\ndata Mag a b t where\n    Map   :: (x -> t) -> Mag a b x -> Mag a b t\n    One   :: a -> Mag a b b\n\ninstance Semifunctor (Mag a b) where\n    semimap f (Map g x) = Map (f . g) x\n    semimap f x         = Map f x\n\\end{code}\n\nwould be valid.\n\n\\subsection{Semimonad?}\n\nNow that we have semifunctors, it makes sense to ask whether\nendosemifunctors can form a monad.\n\n\\textbf{Semimonad}\nA \\emph{semimonad} on a semigroupoid $C$ consists of\n\\begin{itemize}\n\\item an endosemifunctor $T : C \\to C$\n\\item a semi natural transformation $\\eta : 1_C \\to T$ (|return|)\n\\item a semi natural transformation $\\mu : T^2 \\to T$  (|join|)\n\\item Associativity (as semi natural transformations $T^3 \\to T$)\n\n\\begin{equation}\n\\mu \\circ T \\mu = \\mu \\circ \\mu T\n\\end{equation}\n\n\\item Identity (as semi natural transformations  $T \\to T$)\n\n\\begin{equation}\n\\mu \\circ T\\eta = \\mu \\circ \\eta T = 1_T\n\\end{equation}\n\\end{itemize}\n\nLooks a lot like a monad, but for semifunctors. I have an example: the $S$ semifunctor\nis a semimonad. It feels like (I didn't check) all strictly monotonic functions would fit.\nWe need to find some more structured semigroupoid than $\\mathbf{LT}$ to find more interesting semimonads, but I haven't yet.\n\nI end with a catchy phrase:\n\n\\emph{A semimonad is a monoid (!) in the category of endosemifunctors.}\n\nWhat would be a semigroup in the category of endofunctors\\footnote{in Haskell we call it |Bind|, at least now (or |Apply| for a different semigroup)}, or\na semigroup in the category of endosemifunctors?\n\nNaming is hard.\n\n\\subsection{References: More on the theory of semi-functors}\n\nThere's a paper \\emph{The theory of semi-functors} by\nRaymond Hoofman \\url{https://doi.org/10.1017/S096012950000013X}.\n\n\\end{document}\n", "meta": {"hexsha": "be0b0b1b0d80cbceb8dbad53876117b7f43b46c7", "size": 7993, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "posts/2019-11-07-semi-semi-semi-semi.tex", "max_stars_repo_name": "phadej/gists", "max_stars_repo_head_hexsha": "cefb69faa61dd5737e91532f046278b7327aee91", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-03-31T19:03:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-28T16:49:57.000Z", "max_issues_repo_path": "posts/2019-11-07-semi-semi-semi-semi.tex", "max_issues_repo_name": "phadej/gists", "max_issues_repo_head_hexsha": "cefb69faa61dd5737e91532f046278b7327aee91", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-12-19T16:33:09.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-19T16:33:09.000Z", "max_forks_repo_path": "posts/2019-11-07-semi-semi-semi-semi.tex", "max_forks_repo_name": "phadej/gists", "max_forks_repo_head_hexsha": "cefb69faa61dd5737e91532f046278b7327aee91", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-07-31T07:50:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-19T14:05:45.000Z", "avg_line_length": 29.3860294118, "max_line_length": 169, "alphanum_fraction": 0.693857125, "num_tokens": 2653, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8104789063814616, "lm_q1q2_score": 0.5798360675368538}}
{"text": "\\section{Conclusions and Discussion}\n\nIn the experiment, the moment of inertia is found by measuring the angular acceleration of the turntable in different cases. \n\nBy plotting and fitting the relation between $k\\pi$ and $ t $, we found the $\\beta$ for each situation of the instrument.\nFor we used MATLAB to do the fitting work, the uncertainty of the fitting curve can be easily seen as $\\pm 5\\%$.\n\nBy comparing the calculated value of the difference of $I_A$ and $I_B$, we can use the experiment to judge whether the parallel axis theorem holds.\n\nThe calculated value of $I$ is $md^2 =  165.8 g \\times  (4.7560 cm - 6.0120cm )^2 + 165.8 g \\times  (4.7610 cm - 6.0200cm )^2  = 0.0052436 kg\\times m^2 $ and the value derived from the experiment is $ I_{A3B4} -I_{A1B2} =0.0055  kg\\times m^2 -   0.0044  kg\\times m^2 = 0.0011 kg\\times m^2 $.\nThe relative uncertainty is $$ \\frac{ 0.0055  kg\\times m^2 - 0.0052436 kg \\times m^2}{0.0055  kg\\times m^2} = 4.66 \\% $$\n\nThus, the parallel axis theorem holds very well.\n\nThe g", "meta": {"hexsha": "97986deb68ef5dd4a07a61cfd0ce24aab8852f4d", "size": 1019, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "E1/part/conclusion.tex", "max_stars_repo_name": "iamwrm/VP141", "max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z", "max_issues_repo_path": "E1/part/conclusion.tex", "max_issues_repo_name": "iamwrm/VP141", "max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "E1/part/conclusion.tex", "max_forks_repo_name": "iamwrm/VP141", "max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.9333333333, "max_line_length": 291, "alphanum_fraction": 0.7144259078, "num_tokens": 331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7154239897159439, "lm_q1q2_score": 0.5798360544215462}}
{"text": "\n\\subsection{Bootstrapping}\n\nIf we have a sample of \\(n\\), we can create bootstrap samples by drawing with replacement for other sets with \\(n\\) members.\n\n", "meta": {"hexsha": "d68d2538d170c320e7625e997b5e3412df6c5ec7", "size": 155, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/bootstrap/01-01-bootstrap.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/bootstrap/01-01-bootstrap.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/bootstrap/01-01-bootstrap.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.8333333333, "max_line_length": 124, "alphanum_fraction": 0.7483870968, "num_tokens": 36, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104788903594355, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5798360511567073}}
{"text": "\\documentclass[11pt, letterpaper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\n\\newcommand{\\e}{\\epsilon}\n\\newcommand{\\dl}{\\delta}\n\\newcommand{\\dij}{\\delta_{ij}}\n\\newcommand{\\1}{\\bm{1}}\n\\newcommand{\\pd}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\uu}[1]{\\underline{\\underline{#1}}}\n\\newcommand{\\un}[1]{\\underline{#1}}\n\n\\title{Lecture 5: Isotropic tensors and related integrals}\n\\begin{document}\n\\maketitle\n\nOur objective here is to exploit the index notation and symmetry of the problem to solve certain integrals that would have been cumbersome otherwise. These include\n\n\\begin{itemize}\n  \\item Integrals involving normal vectors over a sphere\n  \\item Integrals in $D$ dimensions\n  \\item Attraction between induced dipoles - linear approximation\n  \\item Attraction between induced dipoles - general case\n  \\item Moment of inertia\n\\end{itemize}\n\nTo begin with, we note the isotropic tensors of various orders \n\\begin{itemize}\n  \\item Zeroth order - any scalar\n  \\item First order - Null vector\n  \\item Second order - $\\dl_{ij}$\n  \\item Third order - $\\e_{ijk}$ (pseudo tensor)\n  \\item Fourth order - constructed using $\\dl_{ij}$ \n\\end{itemize}\n\nIt is also possible to construct a fourth order tensor using $\\e_{ijk}\\e_{klm}$ where two indices have been contracted. However, it is easily shown that this can be written as a combination of $\\dl$'s. Let\n\n$$\\e_{ijm}\\e_{mkl} = c_1\\dl_{ij}\\dl_{kl} + c_2\\dl_{ik}\\dl_{jl} + c_3\\dl_{il}\\dl_{jk}$$\n\nUsing the fact that interchanging $i$ and $j$ changes the sign of the LHS, we get $c_1=0$. Exploiting the properties of cyclic permutations being equal gives us $c_2=-c_3$. Setting $i=j$, we can obtain $c_2=1, c_3=-1$ which finally gives us the important identity\n\n$$\\e_{ijk}\\e_{mkl} = \\dl_{ik}\\dl_{jl} - \\dl_{il}\\dl_{jk}$$\n\nIn fact, any isotropic tensor of order $2n$ can be expressed as an \\textit{n-tuple} of $\\dl$'s. Such a combination will have a total of $\\frac{2n!}{2^n n!}$ number of terms. All such even order tensors are true tensors while all odd order tensors are pseudo tensors because they will contain a $\\e_{ijk}$.\n\n\n\n\\section{Some integrals for the normal over a unit sphere}\n\nLet $\\un{n}$ be a unit normal and $d\\Omega$ be the solid angle given by $d\\Omega = \\sin\\theta d\\theta d\\phi$ (it is a small area element of a unit sphere). Now let's evaluate the integral\n\n$$\\int \\un{n}\\un{n}d\\Omega$$\n\nWe might represent the normal vector in spherical polar coordinates but that will lead to a long-winded integral. Instead, we note that the integral is a second order tensor with no special preference for any direction. Hence it must be proportional to $\\dl_{ij}$\n\n$$\\int n_i n_j d\\Omega = c \\dl_{ij}$$\n\nContract both sides with $\\dl_{ij}$. We get\n\n$$\\int n_i n_i d\\Omega = \\int d\\Omega = 3c$$\n\nSince $\\int d\\Omega$ is just the surface area of a unit sphere, we get $c = 4\\pi / 3$. Hence,\n\n$$\\int \\un{n}\\un{n}d\\Omega = \\frac{4\\pi}{3} \\dl_{ij}$$\n\nWhat about $\\int n_i d\\Omega$ ? Since it also doesn't't have a preferred direction, it must be isotropic. However, its a vector and we know that the only isotropic vector is the Null vector. Hence $c=0$ for this case. What about $\\int n_i n_j n_k d\\Omega$ ? This is a true third order tensor, but there is no true third order isotropic tensor and hence $c=0$ for this case as well. 
Let's evaluate\n\n$$\\int n_in_jn_kn_l d\\Omega$$\n\nSince it's a fourth order isotropic tensor, we must have\n$$\n\\int n_in_jn_kn_l d\\Omega = c_1\\dl_{ij}\\dl_{kl} + c_2\\dl_{ik}\\dl_{jl} + c_3\\dl_{il}\\dl_{jk}\n$$\n\nThe symmetry of the left hand side under index interchange dictates that $c_1=c_2=c_3=c$, and the value of $c$ can be determined by contracting both sides with $\\dl_{ij}\\dl_{kl}$. This gives us $c=4\\pi/15$ and hence\n$$\n\\int n_in_jn_kn_l d\\Omega = \\frac{4\\pi}{15}(\\dl_{ij}\\dl_{kl} + \\dl_{ik}\\dl_{jl} + \\dl_{il}\\dl_{jk})\n$$\n\n\\subsection{Integrals in D dimensions}\nWhat if we want to calculate $\\int n_in_j d\\Omega$ in $D$ dimensions? As before, we contract both sides with $\\dl_{ij}$ and obtain\n$$\n\\int d\\Omega = C\\cdot D\n$$\nwhere the LHS is the surface area of a D-dimensional hypersphere. Let's denote it as $\\Omega(D)$; then $C=\\Omega(D)/D$.\n\n\\section{Applications} \n\n\\subsection{Dipole interaction in a weak field}\nWe know that the induced dipole moment is $D(DE/kT)$ where $D$ is the intrinsic dipole moment and $(DE/kT)$ is a biasing factor that accounts for the thermal fluctuations in the orientation of the dipole. Let $\\un{p}$ be the direction of the dipole vector and $D$ be its magnitude. Let $\\un{1}_E$ be the direction of the electric field of the other dipole and $E$ be its magnitude. Then the probability density of the orientation of the second molecule as a function of $(\\theta, \\phi)$ will be\n$$\nP(\\theta, \\phi) \\propto e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}\n$$\n\nLet the proportionality constant be $N$. Then,\n$$\nP(\\theta, \\phi) = N e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}\n$$\n\nUsing the fact that the integral of the probability density over all possible orientations must be 1, we can obtain the value of $N$. All possible orientations in this case sample all points on a unit sphere. Therefore,\n$$\n\\int N e^{DE(\\un{p}\\cdot \\un{1}_E)/kT} d\\Omega = 1\n$$\nwhich gives\n$$\nN=\\frac{1}{\\int e^{DE(\\un{p}\\cdot \\un{1}_E)/kT} d\\Omega}\n$$\n\nUsing this in the expression for the probability, we get\n$$\nP(\\theta, \\phi) = \\frac{e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}}{\\int e^{DE(\\un{p}\\cdot \\un{1}_E)/kT} d\\Omega}\n$$\n\nIf $E=0$, we obtain $P=1/4\\pi$ which is a constant. Hence the dipole has equal probability of being in any possible orientation. Now, the induced dipole moment will be the integral of the intrinsic dipole moment times the probability density over all orientations - the unit sphere\n$$\n\\text{\\footnotesize{Induced dipole moment}} = \\int \\text{\\footnotesize{Intrinsic dipole moment $\\cdot$ probability density}}  \n$$\nTherefore, the induced dipole moment $\\un{P}_{ind}$ is\n$$\n\\un{P}_{ind} = \\frac{\\int D \\un{p}e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}d\\Omega}{\\int e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}d\\Omega}\n$$\n\nIf the field is weak, the argument of the exponential is small and we can use the Taylor expansion. 
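Explicitly, writing $x = DE(\\un{p}\\cdot \\un{1}_E)/kT$ as a shorthand (introduced here only for this step), the expansion being used is\n$$\ne^{x} = 1 + x + \\mathcal{O}(x^2)\n$$\nso to first order the biasing factor is $1 + DE(\\un{p}\\cdot \\un{1}_E)/kT$. 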
Considering only the linear terms, we get\n$$\n\\un{P}_{ind} = D\\frac{\\int \\un{p}(1 + DE(\\un{p}\\cdot \\un{1}_E)/kT)d\\Omega}{\\int (1+DE(\\un{p}\\cdot \\un{1}_E)/kT)d\\Omega}\n$$\n\nNow, using the fact that $\\un{E}$ is constant and $\\int n_i d\\Omega = 0$, the first term in the numerator and the second term in the denominator vanish, leaving us with\n$$\n\\left ( \\un{P}_{ind} \\right )_j = D\\frac{\\int \\frac{D E}{k T} \\un{1}_{E_i}\\, p_ip_j \\, d\\Omega}{4\\pi}\n$$\n\nwhich is a familiar integral resulting in the answer\n$$\n\\un{P}_{ind} = \\frac{1}{3}\\frac{D^2 E}{k T} \\un{1}_E\n$$\n\n\n\\subsection{Moment of Inertia}\n\nThe scalar moment of inertia is a property that depends on one's choice of the axis of rotation. The tensorial moment of inertia, however, depends only on the body (its mass and mass distribution). Let $\\un{1}_p$ be the unit vector along the axis of rotation and $\\un{X}$ be the position vector for some point within the volume occupied by the body. Then, the scalar moment of inertia can be written as\n$$\nI = \\int \\rho dV (\\un{X}-(\\un{X}\\cdot \\un{1}_p)\\un{1}_p)^2\n$$\nOr in index notation\n$$\nI = \\un{1}_{p_i}\\un{1}_{p_j} \\underbrace{\\int \\rho dV (r^2 \\dl_{ij} - x_i x_j)}_{\\text{This is the moment of inertia tensor}}\n$$\n\nAs an example, we can write the moment of inertia tensor for a sphere as\n$$\n\\int \\rho dV (r^2 \\dl_{ij} - x_i x_j) = C \\dl_{ij}\n$$\nContracting both sides with $\\dl_{ij}$, we easily get the familiar moment of inertia of the sphere $2/5 MR^2$.\n\nSuppose we have a solid of revolution, such as a spheroid. Once we fix the coordinate system, we can define the direction of anisotropy, say $q_i$. Clearly the moment of inertia tensor will not just be proportional to $\\dij$ because the body is not isotropic anymore. We must incorporate the anisotropy in our ansatz. Anisotropy is given by a direction - a vector $q_i$. To incorporate it in a second order tensor, we must construct a second order tensor using just $q_i$, for which the simplest recipe is the dyadic product $q_i q_j$. \n$$\nI_{ij} = \\int \\rho dV (r^2 \\dl_{ij} - x_i x_j) = F_1 \\dij + F_2 q_i q_j\n$$\n\nHere, $F_1$ and $F_2$ are functions of the aspect ratio such that if the aspect ratio becomes 1, $F_2$ vanishes and we get the result for a sphere. Note that the presence of $\\dij$ is necessary here. In order to solve this equation, we contract once with $\\dij$ and once with $q_iq_j$ and get two equations which can be solved for $F_1$ and $F_2$. More importantly, we can recast the above expression as\n$$\nI_{ij}= \\underbrace{F_1(\\dij - q_iq_j)}_{\\text{Equatorial M.o.I}}+\\underbrace{(F_1 + F_2)(q_iq_j)}_{\\text{Axial M.o.I}}\n$$\n\nHere $q_iq_j$ denotes the component of $I_{ij}$ associated with the axial moment of inertia while $\\dij - q_iq_j$ denotes the equatorial component. (Applied to $x_j$, $\\dij - q_iq_j$ gives $x_i - q_iq_jx_j$, which emphasizes the subtraction of the component of \\un{x} along \\un{q} from \\un{x}.)\n\nIf instead of the spheroid, we had an ellipsoid (given by $\\frac{x^2}{A^2}+\\frac{y^2}{B^2}+\\frac{z^2}{C^2} = 1$) which is aligned with the coordinate directions, we can write\n$$\nI_{ij} = \\int \\rho dV (r^2 \\dl_{ij} - x_i x_j) = F_1 q_i q_j + F_2 r_i r_j + F_3 s_is_j\n$$\n where \\un{q}, \\un{r}, \\un{s} are the unit vectors along the axes of the ellipsoid and also along the coordinate axes. In this case we did not require $\\dij$ because it comes out as a special case if $F_1=F_2=F_3=F$. 
In that case $I_{ij}=F( q_i q_j + r_i r_j + s_i s_j) = F(\\un{1}_{1_i}\\un{1}_{1_j} + \\un{1}_{2_i}\\un{1}_{2_j} + \\un{1}_{3_i}\\un{1}_{3_j}) = F\\dij$\n\\section{Appendix}\n\n\\subsection{Area of a D-dimensional hypersphere}\n\n\\subsection{Dipole interaction in a strong field in D dimensions}\nIn this case, the linearization of the exponential is not a valid simplification and in general we must consider all powers in the power series expansion of the biasing factor $e^{DE(\\un{p}\\cdot \\un{1}_E)/kT}$. To start with, we align the first dipole along a coordinate axis and consider all possible orientations of the other dipole. This eliminates the $\\phi$ dependence (azimuthal dependence) of the probability density (or energy) and allows us to easily calculate the normalization constant. Then we expand the exponential whose general term consists of an integral involving $n$ normals over the unit sphere\n$$\n\\int p_1 p_{i_1} p_{i_2} p_{i_3}...p_{i_{n}} d\\Omega\n$$\n\nwhere $p_1$ denotes the direction of the first dipole. The integral is zero if $n$ is even and hence we consider only odd $n$, i.e., we take $n = 2n'+1$. Now we get an integral of $2n+2$ normals over the unit sphere. \n$$\n\\int p_1 p_{i_1} p_{i_2} p_{i_3}...p_{i_{2n+1}} d\\Omega\n$$\nIf we can solve the general integral over the $2n$ normals\n$$\n\\int p_{i_1} p_{i_2} p_{i_3}...p_{i_{2n}} d\\Omega\n$$\nthen we can obtain the required result by replacing $n$ with $2n+1$. To attempt a solution by the method of induction, we assume a solution for $2n-1$ normals and use it to prove the case for $2n$ normals. Doing this leads us to the following integral\n$$\n\\int x_{i_1} x_{i_2} x_{i_3}...x_{i_{2n-1}} n_{i_{2n}} d\\Omega\n$$\nwhere $\\un{x}=r\\un{n}$; the two are equal on the surface of the unit sphere. Writing it in this form allows us to use the divergence theorem to convert this surface integral into a volume integral as follows\n$$\n\\int x_{i_1} x_{i_2} x_{i_3}...x_{i_{2n-1}} n_{i_{2n}} d\\Omega = \\int \\pd{(x_{i_1} x_{i_2} x_{i_3}...x_{i_{2n-1}})}{x_{i_{2n}}} dV\n$$\nFrom here, we just have to apply the product rule and use the assumed form for $2n-1$ to prove the result for $2n$ normals. One critical step here is to note the application of the general form of the divergence theorem. In general one can write\n$$\n\\int A_{i_1 i_2...i_n j}n_j dS = \\int \\pd{A_{i_1 i_2...i_n j}}{x_j}dV \n$$\n\nThis identity would have helped us convert the above surface integral into a volume integral. However, we note that the use of this form of the divergence theorem requires that one index be contracted ($j$ in this case) but we don't have any contracting index in our surface integral. To overcome this problem, consider fixing all but one index of the surface integral. The integrand then behaves as a vector. Let that vector be $u_i$. We can write $u_i = A\\dij$ where $A$ is some scalar function of the $x_i$'s. If we fix one of the indices (say i) then $A\\dij$ behaves as a vector. 
Now apply the divergence theorem to $A\\dij$:\n$$\n\\int \\pd{A\\dij}{x_i} dV = \\int \\dij \\pd{A}{x_i} dV = \\int \\pd{A}{x_j} dV = \\int A n_j dS\n$$\nIn the last step, we have used the divergence theorem for a scalar field \n$$\n\\int \\pd{f}{x_i} dV = \\int f n_i dS\n$$\nfor some scalar function $f$.\n\\end{document}\n", "meta": {"hexsha": "661c74491fab9fe057f9bb078d286331297f2cc3", "size": 12609, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex_files/lecture05.tex", "max_stars_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_stars_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-16T04:19:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-16T04:19:07.000Z", "max_issues_repo_path": "tex_files/lecture05.tex", "max_issues_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_issues_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex_files/lecture05.tex", "max_forks_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_forks_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.6465116279, "max_line_length": 615, "alphanum_fraction": 0.7125069395, "num_tokens": 4067, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484144, "lm_q2_score": 0.8104788995148791, "lm_q1q2_score": 0.5798360429539178}}
{"text": "%!TEX root = ../notes.tex\n\\section{April 14, 2022}\n\\recall Last time:\n\\begin{itemize}\n    \\item we defined fractional ideals,\n    \\item define the inverse of a nonzero integral ideal (and noted at the end of the same definition holds for fractional ideals)\\footnote{An integral domain has all of its nonzero fractional ideals invertible iff it is a Dedekind domain.}\n    \\item we briefly stated the definition of a Dedekind domain.\n\\end{itemize}\n\\subsection{Dedekind Domains}\nWe introduced last time\\dots\n\\begin{theorem*}[p.109 \\cite{stewart2015algebraic}]\n    The ring of integers $\\riO_K$:\n    \\begin{enumerate}[(a)]\n        \\item is an integral domain;\n        \\item is Noetherian, that is to mean one of the following:\n              \\begin{enumerate}[(i)]\n                  \\item every ascending chain of ideals terminates, or\n                  \\item every ideal is finitely generated;\n              \\end{enumerate}\n        \\item if $\\alpha\\in\\mathsf{Frac}(\\riO_K)=K$ satisfies a nonzero monic polynomial equation with coefficients in $\\riO_K$, then $\\alpha\\in\\riO_K$ ($\\riO_K$ is integrally closed in its field of fractions);\n        \\item every nonzero prime ideal of $\\riO_K$ is maximal.\n    \\end{enumerate}\n\\end{theorem*}\n\\begin{proof}\n    ~\\begin{enumerate}[(a)]\n        \\item \\emph{We know this.}\n        \\item\n              We know that if $[K : Q] = n$, then $\\riO_K$ is a free $\\ZZ$-module of rank $n$\\footnote{That is, a free Abelian group of rank $n$; or also to say possesses an integral basis of order $n$}.\n\n              If $\\frk{a}$ is an ideal of $\\riO_K$, then $(\\frk{a}, +)$ is a free Abelian group of rank $\\leq n$\\footnote{\\emph{Theorem 1.16} of \\cite{stewart2015algebraic} proves this fact about submodules of free modules.}. As a group, $\\frk{a}$ is finitely generated, so $\\frk{a}$ (with some glossing over) is finitely generated as an ideal.\n        \\item \\emph{Was noted in a previous lecture.}\n        \\item\n              Let $\\frk{p}$ be a nonzero prime ideal of $\\riO_K$. Let $0\\neq \\alpha\\in\\frk{p}$. Then $N:= N_{K/\\QQ}(\\alpha) = \\alpha_1\\alpha_2\\cdots \\alpha_n$\\footnote{We know this lives in $\\QQ$ since it's a term of the polynomial.} where $\\alpha_1 := \\alpha$ and $\\alpha_i$ are the conjugates of $\\alpha = \\alpha_1$.\n\n              Note that $\\alpha_2\\alpha_3\\cdots \\alpha_n\\in K$ since the whole product $\\alpha_1\\alpha_2 \\cdots \\alpha_n\\in\\QQ\\subseteq K$ and $\\alpha_1\\in K$. In fact, $\\alpha_2\\alpha_3\\cdots \\alpha_n\\in\\riO_K$. Hence $N:= N_{K/\\QQ}(\\alpha)\\in\\frk{p}$. Thus $N\\cdot \\riO_K\\subseteq \\frk{p}$. This means that, taking quotients\\footnote{If $I\\subseteq J\\subseteq R$, then $R/J$ is a quotient of $R/I$.},\n              \\[\\riO_K/\\frk{p}\\text{ is a quotient of }\\riO_K/N\\riO_K\\]\n              But $\\riO_K/N\\riO_K$ is a finitely generated Abelian group where every element has finite order, so $\\riO_K/N\\riO_K$ is finite.\n\n              Hence $\\riO_K/\\frk{p}$ is finite. So $\\riO_K/\\frk{p}$ is also an integral domain (by ring theory). Any finite integral domain is a field, so $\\riO_K/\\frk{p}$ is a field. 
So $\\frk{p}$ has to be maximal (again by ring theory).\n    \\end{enumerate}\n    This proves parts (a) through (d).\n\\end{proof}\n\\begin{proposition}[p.112 \\cite{stewart2015algebraic}]\n    Every nonzero ideal $\\frk{a}\\subseteq\\riO_K$ is a product of prime ideals.\n\\end{proposition}\n\\begin{proof}\\footnote{This is very analogous to the proof of the existence of prime factorization in $\\ZZ$. Noetherian-ness of $\\riO_K$ takes the place of well-ordering. Choosing $\\frk{a}$ maximal among ideals that are not products of prime ideals is akin to selecting $a$ to be the least element that is not a product of primes. $a$ itself isn't prime, so it factors, and so on\\dots}\n    If not, let $\\frk{a}$ be maximal subject to the condition of not being a product of prime ideals.\n    \\begin{remark*}\n        Recall Zorn's Lemma: in a partially ordered set where every chain has an upper bound, there is at least one maximal element. We apply Zorn's Lemma to ideals to find such a maximal ideal.\n    \\end{remark*}\n    Then $\\frk{a}$ is not prime, but we can apply Zorn to the poset of proper ideals containing $\\frk{a}$ to conclude that $\\frk{a}\\subseteq \\frk{p}$ for some maximal (hence prime) ideal $\\frk{p}$.\n\n    We have that $\\riO_K\\subsetneq \\frk{p}^{-1}\\subseteq \\frk{a}^{-1}$, since $\\frk{p}$ is a proper ideal of $\\riO_K$, and $\\frk{a}\\subseteq\\frk{p}$.\n\n    It follows that\\footnote{Inverses let us preserve containment of $\\subsetneq$, because if we hit both sides with $\\frk{a}^{-1}$ we get nice things.}\n    \\[\\frk{a}\\subsetneq \\frk{a}\\frk{p}^{-1}\\subseteq \\frk{a}\\frk{a}^{-1} = \\riO_K\\]\n    By the maximality of $\\frk{a}$, we have that\n    \\[\\frk{a}\\frk{p}^{-1} = \\frk{p}_2\\frk{p}_3\\cdots \\frk{p}_r\\]\n    where $\\frk{a}\\frk{p}^{-1}$ is a product of prime ideals $\\frk{p}_2, \\frk{p}_3, \\dots, \\frk{p}_r$, so\n    \\[\\frk{a} = \\frk{p}\\frk{p}_2\\frk{p}_3\\cdots \\frk{p}_r\\]\n    which is a contradiction, since this exhibits $\\frk{a}$ as a product of prime ideals.\n\\end{proof}\n\n\\begin{lemma}[p.113 \\cite{stewart2015algebraic}]\n    For ideals $\\frk{a}, \\frk{b}$ of $\\riO_K$, $\\frk{a}\\mid\\frk{b} \\iff \\frk{b}\\subseteq \\frk{a}$.\n    \\begin{ques*}\n        What does $\\frk{a}\\mid \\frk{b}$ mean? It means there exists an ideal $\\frk{c}$ such that $\\frk{b} = \\frk{c}\\cdot \\frk{a}$.\n    \\end{ques*}\n\\end{lemma}\n\\begin{proof}\n    Since $\\frk{c}\\frk{a}\\subseteq \\frk{a}$, we have that $\\frk{a}\\mid \\frk{b}$ implies that $\\frk{b}\\subseteq \\frk{a}$.\n\n    Now we prove the other direction. Suppose $\\frk{b}\\subseteq\\frk{a}$, then\n    \\[\\frk{b} = \\frk{a}(\\frk{a}^{-1}\\frk{b}),\\]\n    where $\\frk{a}^{-1}\\frk{b}$ is integral. Letting $\\frk{a}^{-1}\\frk{b} = \\frk{c}$ shows that $\\frk{a}\\mid\\frk{b}$.\n\\end{proof}\n\n\\begin{theorem}\n    Every nonzero ideal of $\\riO_K$ has a unique factorization as a product of prime ideals.\n\\end{theorem}\n\\begin{proof}\n    The lemma above tells us that for a prime ideal $\\frk{p}$, $\\frk{p}\\mid \\frk{a}\\frk{b}$ implies $\\frk{p}\\mid\\frk{a}$ or $\\frk{p}\\mid\\frk{b}$. Then we proceed as we did in the integers.\n\n    Suppose\n    \\begin{align*}\n        \\frk{a} & = \\frk{p}_1\\frk{p}_2\\cdots \\frk{p}_r \\\\\n                & = \\frk{q}_1\\frk{q}_2\\cdots \\frk{q}_s\n    \\end{align*}\n    for some prime ideals $\\frk{p}_1, \\frk{p}_2, \\cdots,  \\frk{p}_r, \\frk{q}_1, \\frk{q}_2, \\cdots, \\frk{q}_s$. Then $\\frk{p}_1$ divides $\\frk{q}_j$ for some $j$. 
Since $\\frk{q}_j$ is maximal, $\\frk{p}_1 = \\frk{q}_j$. We multiply by $\\frk{p}_1^{-1}$ and repeat the process, which concludes the proof.\n\\end{proof}", "meta": {"hexsha": "9ad92cf3fe43386fb084201613a2b763f2eb890a", "size": 6514, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-04-14.tex", "max_stars_repo_name": "jchen/math1560-notes", "max_stars_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-02T15:41:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T20:28:48.000Z", "max_issues_repo_path": "lectures/2022-04-14.tex", "max_issues_repo_name": "jchen/math1560-notes", "max_issues_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-04-14.tex", "max_forks_repo_name": "jchen/math1560-notes", "max_forks_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.3777777778, "max_line_length": 402, "alphanum_fraction": 0.652901443, "num_tokens": 2262, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110202, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5798069054438255}}
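A worked example may help here (an addition; it is the standard one). In $K = \QQ(\sqrt{-5})$ with $\riO_K = \ZZ[\sqrt{-5}]$, elements factor non-uniquely:
\[6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}),\]
but setting $\frk{p} = (2,\, 1+\sqrt{-5})$, $\frk{q} = (3,\, 1+\sqrt{-5})$, $\bar{\frk{q}} = (3,\, 1-\sqrt{-5})$, one checks that $(2) = \frk{p}^2$, $(3) = \frk{q}\bar{\frk{q}}$, $(1+\sqrt{-5}) = \frk{p}\frk{q}$, and $(1-\sqrt{-5}) = \frk{p}\bar{\frk{q}}$, so both element factorizations refine to the single ideal factorization $(6) = \frk{p}^2\frk{q}\bar{\frk{q}}$, exactly as the theorem promises.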
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{color}\n\\usepackage[breaklinks,colorlinks=true]{hyperref}\n\n\\definecolor{Blue}{RGB}{0,122,255}\n\\hypersetup{colorlinks,breaklinks,urlcolor=Blue,linkcolor=Blue,citecolor=Blue,urlcolor=Blue}\n\n\\def\\Pr{\\mathop{\\mathbb{P}}}\n\\newcommand\\Ex{\\mathop{\\mathbb{E}}}\n\\newcommand\\br[1]{\\left(#1\\right)}\n\\newcommand\\bbr[1]{\\left[#1\\right]}\n\\newcommand\\cbr[1]{\\left\\{#1\\right\\}}\n\\newcommand\\pd[2]{\\frac{\\partial #1}{\\partial #2}}\n\n\n\\begin{document}\n\n\\title{Price and greeks of American binary option}\n\\date{}\n\n\\maketitle\n\n\nConsider an asset with spot price $S = \\{S_t; 0 \\leq t \\leq T\\}$ following geometric Brownian motion of volatility $\\sigma$.\n\nAn American binary call option with strike $K$ and maturity $T$ pays off\n\\begin{align}\n    \\text{Payoff}\n        = 1_{\\max\\{S_t; 0 \\leq t \\leq T\\} \\geq K} ,\n\\end{align}\nwhere\n$1_{\\max\\{S_t; 0 \\leq t \\leq T\\} \\geq K}$ is an indicator function.\n\n\n\\section*{Price}\n\n\nLet $W = \\{W_t; 0 \\leq t \\leq T\\}$ be Brownian motion driving $S$.\nWe define the cumulative maximum of $W$ by $M_t = \\max\\{W_u; 0 \\leq u \\leq t\\}$ and\nconsider $\\hat M_t = M_t - \\frac12 \\sigma t$.\nAccording to Corollary 7.2.2 of Ref.~\\cite{shreve}, if $S_0 < K$,\n\\begin{align}\n    \\Pr[\\hat M_t \\geq m]\n        = 1 - N\\br{\\frac{m}{\\sqrt{t}} + \\frac12 \\sigma \\sqrt{t}}\n            + e^{-\\sigma m} N\\br{-\\frac{m}{\\sqrt{t}} + \\frac12 \\sigma \\sqrt{t}} ,\n\\end{align}\nwhere\n$N$ is the cumulative distribution function of the normal distribution.\nA condition to get the payoff of unity, $\\max\\{S_t; 0 \\leq t \\leq T\\} \\geq K$, is equivalent to $\\hat M_T \\geq - \\sigma^{-1} \\log(S_0 / K)$.\nTherefore, the price of the American binary call option is given by\n$1$ if $S_0 \\geq K$ and otherwise\n\\begin{align}\n    \\text{Price}\n        & = \\Ex[1_{\\max\\{S_t; 0 \\leq t \\leq T\\} \\geq K}] \\notag \\\\\n        & = \\Pr\\bbr{\\hat M_T \\geq - \\frac{1}{\\sigma}\\log\\br{\\frac{S_0}{K}}} \\notag \\\\\n        & = 1 - N\\br{- \\frac{\\log(S_0 / K)}{\\sigma \\sqrt{T}} + \\frac12 \\sigma \\sqrt{T}}\n            + N\\br{\\frac{\\log(S_0 / K)}{\\sigma \\sqrt{T}} + \\frac12 \\sigma \\sqrt{T}}\n            \\notag \\\\\n        & = N(d_2) + \\frac{S_0}{K} N(d_1) ,\n\\end{align}\nwhere\n\\begin{align}\n    d_1\n        = \\frac{\\log (S_0 / K)}{\\sigma \\sqrt{T}} + \\frac12 \\sigma \\sqrt{T} ,\n    \\quad\n    d_2\n        = \\frac{\\log (S_0 / K)}{\\sigma \\sqrt{T}} - \\frac12 \\sigma \\sqrt{T} .\n\\end{align}\n\n\n\\section*{Delta}\n\n\nDelta is given by\n$0$ if $S_0 \\geq K$ and otherwise\n\\begin{align}\n    \\text{Delta}\n        = \\frac{N^\\prime(d_2)}{S_0 \\sigma \\sqrt{T}}\n            + \\frac{N(d_1)}{K}\n            + \\frac{N^\\prime(d_1)}{K \\sigma \\sqrt{T}} ,\n\\end{align}\nwhere\nwe used a derivative $\\partial d_1 / \\partial S_0 = \\partial d_2 / \\partial S_0 = 1 / (S_0 \\sigma \\sqrt{T})$.\n\n\n\\section*{Gamma}\n\n\nGamma is given by\n$0$ if $S_0 \\geq K$ and otherwise\n\\begin{align}\n    \\text{Gamma}\n        & = - \\frac{N^\\prime(d_2)}{S_0^2 \\sigma \\sqrt{T}}\n            + \\frac{N^{\\prime\\prime}(d_2)}{S_0^2 \\sigma^2 T}\n            + \\frac{N^\\prime(d_1)}{S_0 K \\sigma \\sqrt{T}}\n            + \\frac{N^{\\prime\\prime}(d_1)}{S_0 K \\sigma^2 T} \\notag \\\\\n        & = - \\frac{N^\\prime(d_2)}{S_0^2 \\sigma \\sqrt{T}}\n            - \\frac{d_2 N^\\prime(d_2)}{S_0^2 
\\sigma^2 T}\n            + \\frac{N^\\prime(d_1)}{S_0 K \\sigma \\sqrt{T}}\n            - \\frac{N^\\prime(d_1)}{S_0 K \\sigma^2 T} ,\n    \\label{eq:gamma}\n\\end{align}\nwhere we used a relation $N^{\\prime\\prime}(x) = - x N^\\prime(x)$ to show the second equality.\n\n\n\\begin{thebibliography}{1}\n\\bibitem{shreve} Shreve, S.E., 2004. Stochastic calculus for finance II: Continuous-time models (Vol. 11). New York: springer.\n\\end{thebibliography}\n\n\n\\end{document}\n", "meta": {"hexsha": "7acdd5983c656563a9c16079fd826a0e121f11bf", "size": 3674, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/notes/american_binary.tex", "max_stars_repo_name": "YieldLabs/pfhedge", "max_stars_repo_head_hexsha": "a5ba9d054a8418cb8b27bb67d81a8fc8fb83ef57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/notes/american_binary.tex", "max_issues_repo_name": "YieldLabs/pfhedge", "max_issues_repo_head_hexsha": "a5ba9d054a8418cb8b27bb67d81a8fc8fb83ef57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/notes/american_binary.tex", "max_forks_repo_name": "YieldLabs/pfhedge", "max_forks_repo_head_hexsha": "a5ba9d054a8418cb8b27bb67d81a8fc8fb83ef57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.2280701754, "max_line_length": 140, "alphanum_fraction": 0.6007076756, "num_tokens": 1418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5796852377975696}}
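To make the formulas concrete, here is a minimal numerical sketch (not part of the original note; the function names and parameter values are illustrative), assuming SciPy's normal CDF and PDF are available. The finite difference serves as a sanity check on Delta:
\begin{verbatim}
# Sketch: price and delta of the American binary call, per the formulas above.
# Assumes zero interest rate, as in the note; parameters are illustrative.
from math import log, sqrt
from scipy.stats import norm

def price(S0, K, sigma, T):
    if S0 >= K:               # the barrier has already been hit
        return 1.0
    v = sigma * sqrt(T)
    d1 = log(S0 / K) / v + 0.5 * v
    d2 = log(S0 / K) / v - 0.5 * v
    return norm.cdf(d2) + (S0 / K) * norm.cdf(d1)

def delta(S0, K, sigma, T):
    if S0 >= K:
        return 0.0
    v = sigma * sqrt(T)
    d1 = log(S0 / K) / v + 0.5 * v
    d2 = log(S0 / K) / v - 0.5 * v
    return norm.pdf(d2) / (S0 * v) + norm.cdf(d1) / K + norm.pdf(d1) / (K * v)

S0, K, sigma, T, h = 1.0, 1.2, 0.2, 1.0, 1e-5
fd = (price(S0 + h, K, sigma, T) - price(S0 - h, K, sigma, T)) / (2 * h)
print(price(S0, K, sigma, T), delta(S0, K, sigma, T), fd)  # delta ~ fd
\end{verbatim}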
{"text": "\\documentclass{article}\n\n\\usepackage[margin=3cm]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\n\n\\title{\\Large\\bfseries CS 598 MP: Software Verification, Program Synthesis, and Interpretable AI \\\\\nFall 2021 \\\\\nHomework 1}\n\\author{Chiao Hsieh, chsieh16@illinois.edu}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Problem 1}\nModified Dafny code with verified invariant is included in the tarball.\n\n\\subsubsection*{product1.dfy}\nOuter Loop Invariant: \\verb|m1 <= m && res == m1 * n| \\\\\nInner Loop Invariant: \\verb|n1 <= n && res == m1 * n + n1|\n\n\\subsubsection*{product2.dfy}\nOuter Loop Invariant: \\verb|m1 >= 0 && res == (m - m1) * n| \\\\\nInner Loop Invariant: \\verb|n1 >= 0 && res == (m - m1) * n + (n - n1)|\n\n\\subsubsection*{Half.dfy}\nLoop Invariant: \\verb|r == i/2 && i%2 == 0|\n\n\\subsubsection*{DividesIt.dfy}\nLoop Invariant: \\verb|x1>=0 && y1>=0 && x1%d==0 && y1%d==0|\n\\medskip\n\n\\includegraphics[width=\\textwidth]{hw1-dafny-output.png}\n\n\\pagebreak\n\n\\section*{Problem 2}\n\n\\begin{verbatim}\nint pm(int x, int y)\n@requires x >= 0 & y >= 0;\n@ensures res = x * y\n{\n    a := x;\n    b := y;\n    res := 0;\n    while (a > 0) {\n        if ((a mod 2) = 1) then res := res + b;\n        a := a div 2;\n        @assert a >= 0;\n        b := b * 2;\n    }\n    return res;\n}\n\\end{verbatim}\n\n\\paragraph{Loop Invariant} \\verb|a>=0 & a*b + res == x*y|\n\n\\subsection*{Basic Blocks/Basic Paths}\n\\subsubsection*{BB1:}\n\\begin{verbatim}\n@requires x >= 0 & y >= 0;\na := x;\nb := y;\nres := 0;\n@invariant a>=0 & a*b + res = x*y;\n\\end{verbatim}\n\\textbf{VC1:}\n\\verb|(x>=0 & y>=0 & a=x & b=y & res=0) ==> (a>=0 & a*b+res=x*y)| for all x, y, a, b, and res.\n\n\\subsubsection*{BB2:}\n\\begin{verbatim}\n@invariant a>=0 & a*b + res = x*y;\n@assume a > 0;  // while entry condition\n@assume ((a mod 2) = 1);  // then-branch\nres := res + b;\na := a div 2;\n@assert a >= 0;\n\\end{verbatim}\n\\textbf{VC2:}\\\\\n\\verb|(a0>=0 & a0*b+res0=x*y & a0>0 & ((a0 mod 2)=1) & res1=res0+b & a1=(a0 div 2)) ==> a1>=0| for all x, y, a0, a1, b, res0, res1.\n\n\\subsubsection*{BB3:}\n\\begin{verbatim}\n@invariant a>=0 & a*b + res = x*y;\n@assume a > 0;  // while entry condition\n@assume !((a mod 2) = 1);  // else-branch\na := a div 2;\n@assert a >= 0;\n\\end{verbatim}\n\\textbf{VC3:}\\\\\n\\verb|(a0>=0 & a0*b+res=x*y & a0>0 & !((a0 mod 2)=1) & a1=(a0 div 2)) ==> a1>=0| for all x, y, a0, a1, b, res.\n\n\\subsubsection*{BB4:}\n\\begin{verbatim}\n@invariant a>=0 & a*b + res = x*y;\n@assume a > 0;  // while entry condition\n@assume ((a mod 2) = 1);  // then-branch\nres := res + b;\na := a div 2;\nb := b * 2;\n@invariant a>=0 & a*b + res = x*y\n\\end{verbatim}\n\\textbf{VC4:}\n\\begin{verbatim}\n(a0>=0 & a0*b0+res0=x*y & a0>0 & ((a0 mod 2)=1) \n       & res1=res0+b & a1=(a0 div 2) & b1=b0*2) ==> a1*b1+res1=x*y\n\\end{verbatim}\nfor all x, y, a0, a1, b0, b1, res0, res1.\n\n\\subsubsection*{BB5:}\n\\begin{verbatim}\n@invariant a>=0 & a*b + res = x*y;\n@assume a > 0;  // while entry condition\n@assume !((a mod 2) = 1);  // else-branch\na := a div 2;\nb := b * 2;\n@invariant a>=0 & a*b + res = x*y\n\\end{verbatim}\n\\textbf{VC5:}\n\\begin{verbatim}\n(a0>=0 & a0*b0+res=x*y & a0>0 & !((a0 mod 2)=1) \n       & a1=(a0 div 2)) & b1=b0*2) ==> a1*b1+res=x*y\n\\end{verbatim}\nfor all x, y, a0, a1, b0, b1, res.\n\n\\subsubsection*{BB6:}\n\\begin{verbatim}\n@invariant a>=0 & a*b + res = x*y\n@assume !(a > 0);  // while exit condition\nreturn 
res;\n@ensures res = x * y;\n\\end{verbatim}\n\\textbf{VC6:}\n\\verb|(a>=0 & a*b+res=x*y & !(a>0)) ==> res=x*y| for all x, y, a, b, res.\n\n\\subsection*{SMT Formula}\nTo check if the verification conditions are valid, each VC is negated and check for UNSAT.\nFor simplicity, the six negated VCs are combined with a disjunction as below.\nIf any of negated VCs is satisfiable, the program is not proven correct using the invariant.\nZ3 returns UNSAT for the combined SMT formula; hence the program is correct.\n\n\\pagebreak\n\n\\scriptsize\n\\begin{verbatim}\n; Variable declarations\n(declare-fun x () Int)\n(declare-fun y () Int)\n(declare-fun a () Int)\n(declare-fun a0 () Int)\n(declare-fun a1 () Int)\n(declare-fun b () Int)\n(declare-fun b0 () Int)\n(declare-fun b1 () Int)\n(declare-fun res () Int)\n(declare-fun res0 () Int)\n(declare-fun res1 () Int)\n\n; Constraints\n(assert (or\n    ; VC1\n    (and (>= x 0) (>= y 0) (= a x) (= b y) (= res 0)\n         (not (and (>= a 0) (= (+ (* a b) res) (* x y)))))\n    ; VC2\n    (and (>= a0 0) (= (+ (* a0 b) res0) (* x y)) (> a0 0) (= (mod a0 2) 1) (= res1 (+ res0 b)) (= a1 (div a0 2))\n         (not (>= a1 0)))\n    ; VC3\n    (and (>= a0 0) (= (+ (* a0 b) res0) (* x y)) (> a0 0) (not (= (mod a0 2) 1)) (= a1 (div a0 2))\n         (not (>= a1 0)))\n    ; VC4\n    (and (>= a0 0) (= (+ (* a0 b0) res0) (* x y)) (> a0 0) (= (mod a0 2) 1) (= res1 (+ res0 b0)) (= a1 (div a0 2)) (= b1 (* b0 2))\n         (not (= (+ (* a1 b1) res1) (* x y))))\n    ; VC5\n    (and (>= a0 0) (= (+ (* a0 b0) res) (* x y)) (> a0 0) (not (= (mod a0 2) 1)) (= a1 (div a0 2)) (= b1 (* b0 2))\n         (not (= (+ (* a1 b1) res) (* x y))))\n    ; VC6\n    (and (>= a 0) (= (+ (* a b) res) (* x y)) (not (> a 0))\n         (not (= res (* x y))))\n))\n(check-sat)\n\\end{verbatim}\n\n\\end{document}", "meta": {"hexsha": "f8372b5cb8e732727641e9b8874d33c0aa670b4c", "size": 5097, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Software_Verification_Program_Synthesis_Interpretable_AI/hw1-submission.tex", "max_stars_repo_name": "hc825b/homeworks", "max_stars_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-02T02:05:22.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-02T02:05:22.000Z", "max_issues_repo_path": "Software_Verification_Program_Synthesis_Interpretable_AI/hw1-submission.tex", "max_issues_repo_name": "hc825b/homeworks", "max_issues_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Software_Verification_Program_Synthesis_Interpretable_AI/hw1-submission.tex", "max_forks_repo_name": "hc825b/homeworks", "max_forks_repo_head_hexsha": "21d2d50d7cc0ebb05f08a5ff0bdba16f6a63cccb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-22T00:44:03.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-22T00:44:03.000Z", "avg_line_length": 27.4032258065, "max_line_length": 131, "alphanum_fraction": 0.5593486365, "num_tokens": 2065, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5796852326970254}}
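To make the check concrete, here is a minimal sketch of the same idea for VC6 alone (an addition, not part of the homework; it assumes the Z3 Python bindings rather than raw SMT-LIB). The negation being UNSAT means the VC is valid:
\begin{verbatim}
# Sketch: validity check of VC6 via its negation (expects unsat).
from z3 import Ints, Solver, And, Implies, Not, unsat

x, y, a, b, res = Ints("x y a b res")
vc6 = Implies(And(a >= 0, a * b + res == x * y, Not(a > 0)),
              res == x * y)

s = Solver()
s.add(Not(vc6))                # satisfiable iff VC6 can be violated
assert s.check() == unsat      # so unsat means VC6 is valid
print("VC6 valid")
\end{verbatim}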
{"text": "[1.5 hours late]\n\\section{Second Quantization notation}\nWe use the notation\n\\begin{align}\n    \\ket{\\varphi} = \\ket{n_1, n_2, \\ldots}\n\\end{align}\nwhere\n\\begin{align}\n    \\hat{N} &=\n    \\left( n_1 + n_2 + \\cdots \\right)\n    \\ket{n_1, n_2, \\ldots}\n\\end{align}\nis the total number operator and the Hamiltonian is\n\\begin{align}\n    \\het{H} \\ket{\\varphi} &=\n    \\left( \n    n_1 E_{n_1} + n_2 E_{n_2} + \\cdots\n    \\right)\n    \\ket{n_1, n_2, \\ldots}\n\\end{align}\n\n\\section{Bosonic Ideal Gas}\nSecond quantization notation.\nWe had a sum over all multi-particle states over all multi-particle states.\nNow we just have\n\\begin{align}\n    Z_G(T, V, \\mu) &=\n    \\sum_{n_1, n_2, \\cdots = 0}^{\\infty}\n    \\bra{n_1,n_2,\\ldots} e^{-\\beta(H - \\mu N)} \\ket{n_1,n_2,\\ldots}\\\\\n    &=\n    \\sum_{n_1=0}^{\\infty} e^{-\\beta(E_1 - \\mu) n_1}\n    \\sum_{n_2=0}^{\\infty} e^{-\\beta(E_2 - \\mu) n_2}\n    \\cdots\\\\\n    &=\n    \\frac{1}{1 - e^{-\\beta(E_1 - \\mu)}}\n    \\frac{1}{1 - e^{-\\beta(E_2 - \\mu)}}\n    \\cdots\n\\end{align}\nSo then\n\\begin{align}\n    \\Omega(T, V, \\mu) &=\n    -k_B T \\ln Z_G\\\\\n    &=\n    -k_BT \\sum_k \\ln\\left( 1 + e^{-\\beta (E_k - \\mu)} \\right)\\\\\n    &\\approx\n    k_B TV \\int \\frac{d^3k}{(2\\pi)^3}\n    \\ln\\left( 1 - e^{-\\beta(E_k - \\mu)} \\right)\n\\end{align}\nwhere each\n\\begin{align}\n    E_k &= \\frac{\\hbar^2 k^2}{2M}\n\\end{align}\n\nIt's just the same thing as the modes of a crystal vibrations.\nThey're the same as an EM field,\nbecause I can find the normal modes of th EM field.\nThis has nothing to do with Harmonic oscillators except it does.\nEach single particle state mode is like a harmonic oscillator.\nThe state is indexed by the particle number,\nand the energy levels are equally spaced just like the harmonic oscillator.\nThat's the special property.\n\n\\section{Fermion ideal gas}\nFor fermions,\njust change the limit of the sum from $\\infy$ to $1$\nand you get\n\\begin{align}\n    \\Omega(T, V, \\mu) &=\n    -k_B T V g \\int \\frac{d^3k}{(2\\pi)^3}\\,\n    \\ln\\left( \n    1 + e^{\\beta(E_k - \\mu)}\n    \\right)\n\\end{align}\nwhere $g$ is the number of internal states,\nlike the spin.\n\nThe little difference in sign changes things profoundly.\n\nJust using the general formalism of the grand canonical ensemble,\nI should be able to find the boson canonical ensemble,\netc.\n\n\nFor example,\nif I have $\\Omega$,\nI can find the average $N$.\nLet's compute the average number of particles over the volume so we get a\ndensity.\n\\begin{align}\n    \\frac{N}{V} &=\n    \\frac{1}{V}\n    \\left. 
\\frac{\\partial \\Omega}{\\partial\\mu}\\right|_{T,V}\\\\\n    &=\n    - k_B T g\n    \\int \\frac{d^3k}{(2\\pi)^2}\n    \\frac{ \\beta e^{-\\beta \\epsilon_k}}{%\n        1 \\mp e^{-\\beta \\epsilon_k}\n    }\n\\end{align}\nwhere $\\epsilon_k=E_k - \\mu$ which appears so frequently I may as well have a\nname for it,\nwhere the upper sign is for fermions and the lower sign is for bosons.\nAnd then\n\\begin{align}\n    \\frac{N}{V} &=\n    g \\int \\frac{d^3k}{(2\\pi)^2}\n    \\frac{1}{e^{\\beta\\epsilon_k}\\mp 1}\n\\end{align}\nand the only difference is the sign.\nThe integrand is interpreted as the average occupation number.\nThen I take an average over many states.\nFor a particular single energy state\nwhat is the average expectation?\nWell that is just the occupation number\n\\begin{align}\n    n(E) &=\n    \\frac{1}{e^{\\beta\\epsilon_k}\\mp 1}\n\\end{align}\nwhich is the occupation number of the density per unit of $k$.\n\nAnother thing I like to calculate is the energy density.\nHow do I find that?\nI can think of this $\\Omega$ here,\ntake a derivative in relation to $T$ to find the entropy,\nthen use the Legendre transform to find the entropy.\nThe little trick about this is the following.\nThis gives me $N$ as a function of the temperature nd the chemical potential\nbut it's hard to invert the relation to find what $\\mu$ is needed to give\na particular $N$.\nAlthough it's easy to say Legendre transform,\nit's hard to do in particle.\n\nIt doesn't matter,\nyou can manipulate the integrals back and forth,\nI won't do this in front of you or do tell you to do it in the homework.\nWhich one do you want?\nOf course it's in the homework.\n\nThe answer is something you'd expect.\nIt's as um over single particle states.\nWe're looking for the total energy.\nThat's the occupation number for one particular state.\nIf I multiply it by the energy of the single particular state I get the energy.\n\\begin{align}\n    \\frac{E}{N} &=\n    g \\int \\frac{d^3k}{(2\\pi)^3}\n    \\frac{E_k}{e^{\\beta\\left( E_k - \\mu \\right)} \\mp 1}\n\\end{align}\nwhere the  $g \\int \\frac{d^3k}{(2\\pi)^3}$ means sum over single particle states,\nthe $\\frac{1}{e^{\\beta\\left( E_k - \\mu \\right)} \\mp 1}$ is the occupation number\nfor the $k$ and $E_k$ is the energy of a single particle.\n\nIn the midterm,\nIf I ask a question in the midterm,\nI will not give you this equation,\nbecause it's so intuitive and obvious.\n\nI'll give you the $N/V$ because the sign is not obvious,\nbut this $E/N$ formula is obvious from that.\nIt's difficult to compute but it's a simple answer in the end.\n", "meta": {"hexsha": "aa7125a85df4421097ebeb128bec0efe9fb51b22", "size": 4930, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phys612/lecture25.tex", "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_issues_repo_path": "phys612/lecture25.tex", "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phys612/lecture25.tex", "max_forks_repo_name": "ehua7365/umdphysnotes", 
"max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6211180124, "max_line_length": 80, "alphanum_fraction": 0.6724137931, "num_tokens": 1601, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173791645582, "lm_q2_score": 0.7217432122827969, "lm_q1q2_score": 0.5796852324036362}}
{"text": "%%% -*- mode: LaTeX; TeX-master: \"main.tex\"; -*-\n\n\\ifx\\master\\undefined\n\\documentclass[a4paper]{easychair}\n\\usepackage{submission}\n\\begin{document}\n{\\let\\master\\relax\\input{frontmatter}}\n\\fi\n%%%% PLEASE DO NOT EDIT ABOVE THIS LINE\n\n\\section{\\tlaplus and its Proof Language}\n\\label{sec:proof-language}\n\n\\subsection{TLA}\n\\label{sec:proof-language.tla} \n\nThe \\tlaplus\\ language is based on the Temporal Logic of Actions\n(TLA)~\\cite{lamport:newtla}, a linear-time temporal logic. The rigid\nvariables of TLA are called \\emph{constants} and the flexible\nvariables are called simply \\emph{variables}.  TLA assumes an\nunderlying ordinary (non-modal) logic for constructing expressions.\nOperators of that logic are called \\emph{constant} operators.  A\n\\emph{state function} is an expression built from constant operators\nand TLA constants and variables.  The elementary (non-temporal)\nformulas of TLA are \\textit{actions}, which are formulas built with\nconstant operators, constants, variables, and expressions of the form\n$f'$, where $f$ is a state function.  (TLA also has an \\ENABLED\noperator that is used in reasoning about liveness, but we ignore it\nfor brevity.)  An action is interpreted as a predicate on pairs of\nstates that describes a set of possible state transitions, where state\nfunctions refer to the starting state and primed state functions refer\nto the ending state.  Because priming distributes over constant\noperators and because $c'$ is equal to $c$ for any constant $c$, an\naction can be reduced to a formula built from constant operators,\nconstants, variables, and primed variables.\n\nTLA is practical for describing systems because all the complexity of\na specification is in the action formulas.  Temporal operators are\nessentially used only to assert liveness properties, including\nfairness of system actions.  Most of the work in a TLA proof is in\nproving action formulas; temporal reasoning occurs only in proving\nliveness properties and is limited to propositional temporal logic and\nto applying a handful of proof rules whose main premises are action\nformulas.  Because temporal reasoning is such a small part of TLA\nproofs, we have deferred its implementation.  The \\PM\\ now handles\nonly action formulas.  We have enough experience mechanizing TLA's\ntemporal reasoning~\\cite{engberg:mechanical} to be fairly confident\nthat it will not be hard to extend the \\PM to support it.\n\nA formula built from constant operators, constants, variables, and\nprimed variables is valid iff it is a valid formula of the underlying\nlogic when constants, variables, and primed variables are treated as\ndistinct variables of the logic---that is, if $v$ and $v'$ are\nconsidered to be two distinct variables of the underlying logic, for\nany TLA variable $v$.  Since any action formula is reducible to such a\nformula, action reasoning is immediately reducible to reasoning in the\nunderlying logic.  We therefore ignore variables and priming here and\nconsider only constant formulas.\n\n\n\\subsection{\\tlaplus}\n\nThe \\tlaplus\\ language adds the following to the TLA logic:\n\\begin{icom}\n\\item An underlying logic that is essentially ZFC set theory plus\n  classical untyped first-order logic with Hilbert's\n  $\\varepsilon$~\\cite{leisenring:mathematical-logic}.\n% LL: I deleted\n% \n%    which is written \\CHOOSE\\ in \\tlaplus.  
\n% \n% to save a line and avoid unnecessarily introducing a name.\n% \nThe major difference between this underlying\n  logic and traditional ZFC is that functions are defined axiomatically \n  rather than being represented as sets of ordered pairs.\n\n\\item A mechanism for defining operators, where a user-defined\noperator is essentially a macro that is expanded syntactically.\n(\\tlaplus\\ permits recursive function definitions, but they are\ntranslated to ordinary definitions using Hilbert's $\\varepsilon$.)\n\n%% LL: changed ``A module language'' to ``Modules''\n\\item Modules, where one module can import definitions\n  and theorems from other modules.  A module is parameterized by its\n  declared variables and constants, and it may be instantiated in another\n  module by substituting expressions for its parameters. The\n  combination of substitution and the \\ENABLED\\ operator introduces\n  some complications, but space limitations prevent us from discussing\n  them, so we largely ignore modules in this paper.\n\\end{icom}\n\\tlaplus\\ has been extensively documented~\\cite{lamport03tla}.  Since\nwe are concerned only with reasoning about its underlying logic, which\nis a very familiar one, we do not bother to describe \\tlaplus\\ in any\ndetail.  All of its nonstandard notation that appears in our examples is\nexplained.\n\n\\subsection{The Proof Language}\n\\label{sec:proof-language.lang}\n\nThe major new feature of \\tlatwo\\ is its proof language.  (For reasons\nhaving nothing to do with proofs, \\tlatwo\\ also introduces recursive\noperator definitions, which we ignore here for brevity.)  We describe\nthe basic proof language, omitting a few constructs\n%%\n% LL: I don't think we omitted any ``uninteresting'' constructs,\n%     so I deleted:\n% \n% \n%    for simplicity a few constructs that are either uninteresting or\n%    \n% \nthat concern aspects such as module instantiation that we are not\ndiscussing.  \\tlatwo\\ also adds constructs for naming subexpressions\nof a definition or theorem, which is important in practice for writing\nproofs but is orthogonal to the concerns of this paper.\n\nThe goal of the language is to make proofs easy to read and write for\nsomeone with no knowledge of how the proofs are being checked.  This\nleads to a mostly declarative language, built around the uses and\nproofs of assertions rather than around the application of\nproof-search tactics.  It is therefore more akin to\nIsabelle/Isar~\\cite{isar} than to more operational interactive\nlanguages such as Coq's Vernacular~\\cite{coq}.\nNevertheless, the proof language does include a few operational\nconstructs that can eliminate the repetition of common idioms, albeit\nwith some loss of perspicuity.\n\n% at the cost of some loss of perspicuity.\n\n\nAt any point in a \\tlaplus\\ proof, there is a current obligation that\nis to be proved.  
The obligation contains a \\emph{context} of\n\\emph{known} facts, definitions, and declarations, and a\n\\emph{goal}.\n%%\n%% SM: slight rewording of the following\n%%\n% which is a proposition that the obligation claims to be entailed by\n% the context.\nThe obligation claims that the goal is logically entailed by the context.\nSome of the facts and definitions in the\ncontext are marked as \\emph{usable} for reasoning, while the remaining\nfacts and definitions are \\textit{hidden}.\n%\n%% LL: We'd better say more about this below, so it's redundant\n%% here:\n% \n% and may not be used in the\n% proof unless explicitly made usable with a \\USE declaration (see\n% below).\n\n\\ednote{SM}{I don't think we are quite consistent about what is an\n  assertion. This presentation suggests that an assertion is not just\n  an \\ASSUME{} \\PROOF{}, but also ``contains'' the context in which\n  that \\ASSUME{} \\PROOF{} appears. I think that the term ``assertion''\n  is used differently towards the end of this section, although the\n  description here matches what we say in Section 3.}\n\nProofs are structured hierarchically.\n%\n%% LL: This is deferred until the first use of PROOF.\n% and optionally begin with the token \\PROOF. \n%\n% At the lowest levels of the hierarchy are the\n% \\textit{leaf proofs}.  \nThe leaf (lowest-level) proof \\OBVIOUS\\ asserts that the\ncurrent goal follows easily from the usable facts and definitions.\nThe leaf proof\n\\begin{gather*}\n  \\BY\\ e_{1},\\ldots, e_{m} \\ \\DEFS\\ o_{1},\\ldots, o_{n}\n\\end{gather*}\nasserts that the current goal follows easily from the usable facts and\ndefinitions together with (i)~the facts $e_{i}$ that must themselves\nfollow easily from the context and (ii)~the known definitions of\n$o_{j}$.  Whether a goal follows easily from definitions and facts\ndepends on who is trying to prove it.  The \\PM\\ generates proof\nobligations for each leaf proof, so in practice ``follows easily''\nmeans that a back-end prover can prove them.\n\nA non-leaf proof is a sequence of \\textit{steps}, each consisting of a\nbegin-step token and a proof construct.  For some constructs\n(including a simple assertion of a proposition) the step takes a\nsubproof, which may be omitted.  The final step in the sequence simply\nasserts the current goal, which is represented by the token \\QED.\n%\nA begin-step token is either a \\emph{level token} of the form \\s{n} or\na \\emph{label} of the form \\s{n}\"l\", where $n$ is a level number that\nis the same for all steps of this non-leaf proof, and \"l\" is an\narbitrary name.  The hierarchical structure is deduced from the level\nnumbers of the begin-step tokens, a higher level number beginning a\nsubproof.\n\n%% LL: I changed \n%   A step that makes declarations or definitions or that changes the form\n%   of the current goal does not require a proof,\n%   but one that makes an assertion is followed by its proof \n%\n% because (a) SUFFICES changes the goal and requires a proof and \n% (b) PICK makes a definition and requires a proof\n% (c) WITNESS e \\in S makes an assertion but does not take a proof.\n%  Also assertions whose proof is omitted are not followed by their proof--\n%  at least not with the reader's current understanding of proof.\n%\nSome steps make declarations or definitions or change the current\ngoal and do not require a proof.  Other steps make assertions that\nbecome the current goals for their proofs.  
An omitted proof (or one\nconsisting of the token \\OMITTED) is considered to be a leaf proof\nthat instructs the assertion to be accepted as true.  Of course, the\nproof is then incomplete and cannot be certified.  Following a step\nthat makes an assertion (and the step's proof), until the end of the\ncurrent proof (after the \\QED\\ step), the contexts contain that\nassertion in their sets of known facts.  The assertion is marked\nusable iff the begin-step token is a level token; otherwise it can be\nreferred to by its label in a \\BY\\ proof.\n\nThe hierarchical structure of proofs not only aids in reading finished\nproofs but is also quite useful in incrementally writing proofs.  The\nsteps of a non-leaf proof are first written with all proofs but that\nof the \\QED\\ step omitted.  After checking the proof of the \\QED\nstep, the proofs omitted for other steps in this or earlier levels\nare written in any order.  When writing the proof, one may discover\nfacts that are needed in the proofs of multiple steps.\n% \n% LL: I modified the following text because we say in conclusion\n%     that adding lemmas is bad. \n%\nSuch a fact is then added to the proof as an early step, or added at a\nhigher level.  It can also be removed from the proof of the theorem\nand proved separately as a lemma.  However, the hierarchical proof\nlanguage encourages facts relevant only for a particular proof to be\nkept within the proof, making the proof's structure easier to see and\nsimplifying maintenance of the proof.  For correctness proofs of\nsystems, the first few levels of the hierarchy are generally\ndetermined by the structure of the formula to be proved---for example,\nthe proof that a formula implies a conjunction usually consists of steps\nasserting that it implies each conjunct.\n\nAs an example, we incrementally construct a hierarchical proof of\nCantor's theorem, which states that there is no surjective function\nfrom a set to its powerset. It is written in \\tlaplus as:\n%\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \n        \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\"\n  \\end{tabbing}\n\\end{quote}\n%\n%% SM: cosmetic change of the following that heeds Leslie's advice below\n%% (I think it used to be more or less like that before, why did it\n%% change?)\n%%\n%where \"f[x]\" is the application of the function \"f\" to \"x\",\nwhere function application is written using square brackets,\n\"\\SUBSET\\ S\" is the powerset of \"S\", and \"[S -> T]\" is the set of\nfunctions from $S$ to $T$.\n% \\llnote{One should not write two mathematical operators like\n% $x$ and $\\SUBSET$ separated only by punctuation, since that is hard to\n% read.  The sentence above needs to be rewritten.}\n\nThe statement of the theorem is the current goal for \nits top-level proof. A goal of the form $\\forall v:e$ is\nproved by introducing a generic constant and proving the formula\nobtained by \nsubstituting it for the bound identifier. 
We express this as follows,\nusing the \\ASSUME/\\PROVE construct of \\tlatwo:\n% \\llnote{\\ASSUME/\\PROVE should be introduced earlier, where we mention\n% assertions---probably back where we introduce \"||-\".}\n\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \n                \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\\\\n%%    \\PROOF \\\\\n    \\LSP \\= \\s11.\\ \\= \\ASSUME \\= \"\\NEW\\ S\", \\\\\n         \\>        \\>         \\> \"\\NEW\\ f \\in [S -> \\SUBSET\\ S]\"\\\\\n         \\>        \\> \\PROVE \"\\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\\\\n         \\> \\s12.  \\> \\QED \\BY \\s11\n  \\end{tabbing}\n\\end{quote}\n%\n%\nStep \\s11 asserts that for any constants \"S\" and \"f\" with \"f \\in [S\n-> \\SUBSET\\ S]\", the proposition to the right of the \\PROVE is true.\nMore precisely, the current context for the proof of \\s11 (which we\nhave yet to write) contains the declarations of $S$ and $f$ and the\nusable fact \"f \\in [S -> \\SUBSET\\ S]\", and the \\PROVE\\ assertion is\nits goal.  The \\QED step states that the original goal (the theorem)\nfollows from the assertion in step~\\s11.\n\nWe tell the \\PM to check this (incomplete) proof, which it does by\nhaving the back-end provers verify the proof obligation for the \\QED\nstep.  The verification succeeds, and we now continue by writing the\nproof of \\s11.  (Had the verification failed because \\s11 did not\nimply the current goal, we would have caught the error before\nattempting to prove \\s11, which we expect to be harder to do.)\n\n\\llnote{Check minor edit below.}\n\nWe optimistically start with the proof \\OBVIOUS, but it is too hard\nfor the back-end to prove, and the \\PM reports a timeout.  Often this\nmeans that a necessary fact or definition in the context is hidden and\nwe merely have to make it usable with a \\USE step or a \\BY proof.  In\nthis case we have no such hidden assumptions, so we must refine the\ngoal into simpler goals with a non-leaf proof.  We let this proof have\nlevel 2 (we can use any level greater than 1).  Since the goal itself\nis existentially quantified, we must supply a witness.  In this case,\nthe witness is the classic diagonal set, which we call~\"T\".\n%\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n    \\PROOF \\kill\n    \\LSP \\= \\s11.\\ \\= \\ASSUME \\= \"\\NEW\\ S\", \\\\\n         \\>        \\>         \\> \"\\NEW\\ f \\in [S -> \\SUBSET\\ S]\" \\\\\n         \\>        \\> \\PROVE \"\\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\\\\n%         \\>        \\> \\PROOF \\\\\n         \\>   \\hspace{1em}     \\= \\s21.\\ \\= \\DEFINE \"T \\DEF \\{z \\in S : z \\notin f[z]\\}\" \\\\\n         \\>        \\> \\s22.  \\> \"\\forall x \\in S : f[x] \\neq T\" \\\\\n         \\>        \\> \\s23.  \\> \\QED \\BY \\s22\n  \\end{tabbing}\n\\end{quote}\nBecause definitions made within a proof are usable by default, the\ndefinition of $T$ is usable in the proofs of \\s22\\ and \\s23.  Once\nagain, the proof of the \\QED\\ step is automatically verified, so all\nthat remains is to prove \\s22.  (The \\DEFINE\\ step requires no proof.)\n\n% \\llnote{I added to the paragraph above an explanation of why the\n%   definition of $T$ is usable.  
Stephan asks if it wouldn't it be more\n%   consistent to give the definition a level token.  In a comment that\n%   got deleted along the way, I proposed changing the language to make\n%   labeled definitions unusable and allow the user to write $\\BY\\\n%   \\DEFS\\ \\s21$ as well as $\\BY\\ \\DEFS\\ T$.  However, we had already\n%   decided that the user is most likely to want local definitions to be\n%   \\USE{}d by default, and many users might prefer to give step numbers\n%   to all their steps.  So, we decided to defer a decision on this\n%   point.  In the conclusion, I added this as the kind of fine tuning\n%   we are likely to make to the language.}\n% \n% \\ednote{SM}{Fine with me. There must be a way to hide definitions\n%   because they can get in the way. Your suggestion to make labeled\n%   definitions unusable looks clean and consistent to me, but in its\n%   absence we can at least write HIDE DEF.}\n% \n% \\ednote{KC}{In my opinion, the presence or absence of a label should\n%   not be the deciding factor on whether the contents of the step, be\n%   they definitions or facts, are usable or hidden. Instead, the label\n%   should determine whether the contents of the step are namable by the\n%   step token or not. Since definitions are not namable, it does not\n%   matter whether we number \"\\DEFINE\" steps or not. (Subexpressions of\n%   the body of the definition are reachable via the defined operator\n%   name.)}\n\n%\n% \\llnote{This example suggests that we should modify the current language\n% definition so that definitions in a labeled step are not automatically\n% usable.  There's no reason why they need to be, and it would make it\n% clear that the \\QED\\ step's proof doesn't depend on the definition.\n% We could allow BY DEF \\s21\\ to include all the definitions made\n% in the step.}\n\nThe system \n%% LL: changed ``will accept'' to ``accepts'' because it does so right now.\naccepts \\OBVIOUS\\ as the proof of \\s22 because the only\ndifficulty in the proof of \\s11 is finding the\nwitness. However, suppose we want to add another level of proof for the benefit\nof a human reader.  The universal quantification is proved as above,\nby introducing a fresh constant:\n\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n    \\PROOF \\kill\n    \\LSP \\= \\s11.\\ \\= \\ASSUME \\= \"\\NEW\\ S\", \\kill\n         \\>        \\>         \\> \"\\NEW\\ f \\in [S -> \\SUBSET\\ S]\" \\kill\n         \\>        \\> \\PROVE \"\\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n         \\>        \\> \\PROOF \\kill\n         \\>   \\hspace{1em} \\= \\s21.\\ \\= \\DEFINE \"T == \\{z \\in S : z \\notin f[z]\\}\" \\kill\n         \\>        \\> \\s22.  \\> \"\\forall x \\in S : f[x] \\neq T\" \\\\\n         \\>        \\> \\hspace{1em} \\= \\s31.\\ \\= \\ASSUME \"\\NEW\\ x \\in S\" \\PROVE \"f[x] \\neq T\" \\\\\n         \\>        \\>        \\> \\s32.\\ \\> \\QED \\BY \\s31\n  \\end{tabbing}\n\\end{quote}\n%\n% Naturally, the system verifies the proof of the \\QED\\ step.  (Remember\n% that it could verify \\s22\\ by itself.)\n%     the QED step has nothing to do with \\s22  -- KC\n\\ednote{KC}{Should be able to trim the text for this last level below}\nNaturally, the \\QED step is verified.  
Although the system accepts\n\\OBVIOUS\\ as the proof of \\s31 (remember that it could verify \\s22\\ by\nitself), we can provide more detail with yet another level\nof proof.  We write this proof the way it would seem natural to a\nperson---by breaking it into two cases:\n%\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n    \\PROOF \\kill\n    \\LSP \\= \\s11.\\ \\= \\ASSUME \\= \"\\NEW\\ S\", \\kill\n         \\>        \\>         \\> \"\\NEW\\ f \\in [S -> \\SUBSET\\ S]\" \\kill\n         \\>        \\> \\PROVE \"\\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n         \\>        \\> \\PROOF \\kill\n         \\> \\hspace{1em} \\= \\s21.\\ \\= \\DEFINE \"T == \\{z \\in S : z \\notin f[z]\\}\" \\kill\n         \\>        \\> \\s22.  \\> \"\\forall x \\in S : f[x] \\neq T\" \\kill\n         \\>        \\>  \\hspace{1em} \\= \\s31.\\ \\= \\ASSUME \"\\NEW\\ x \\in S\" \\PROVE \"f[x] \\neq T\" \\\\\n         \\>        \\>        \\> \\hspace{1em} \\= \\s41.\\ \\= \\CASE \"x \\in T\" \\\\\n         \\>        \\>        \\>        \\> \\s42.\\ \\> \\CASE \"x \\notin T\" \\\\\n         \\>        \\>        \\>        \\> \\s43.\\ \\> \\QED \\BY \\s41, \\s42\n  \\end{tabbing}\n\\end{quote}\nThe (omitted) proof of the \\CASE\\ statement \\s41\\ has as its goal\n\"f[x]\\neq T\" and has the additional usable fact $x\\in T$ in its context.\n\n\\llnote{Minor edit below.}\n\nWe continue refining the proof in this way, stopping \nwith an \\OBVIOUS or \\BY proof when a goal is\nobvious enough for the back-end prover or for a human reader,\ndepending on who the proof is being written for. \nA \\BY\\ statement can guide the prover or the human reader\nby listing helpful obvious consequences of known facts.  For example,\nthe proof of \\s41\\ might be \"\\BY\\ x \\notin f[x]\".\n%\nThe complete proof appears in Appendix~\\ref{apx:cantor}.\n\nThis example illustrates how the proof language supports a\nhierarchical, non-linear, and incremental development of proofs.  The\nproof writer can work on the most problematic unproved steps first,\nleaving the easier ones for later.  Finding that a step cannot be\nproved (for example, because it is invalid) may require changing other\nsteps, making proofs of those other steps wasted effort.  We intend to\nprovide an interface to the \\PM\\ that will make it easy for the user\nto indicate which proofs should be checked and will avoid\nunnecessarily rechecking proofs.\n\nThe example also shows how already-proved facts are generally not made\nusable, but are invoked explicitly in \\BY\\ proofs.  Global definitions\nare also hidden by default and the user must explicitly make them\nusable.  This makes proofs easier to read by telling the reader what\nfacts and definitions are being used to prove each step.  It also\nhelps constrain the search space for an automated back-end prover,\nleading to more efficient verification.  Facts and definitions can be\nswitched between usable and hidden by \\USE\\ and \\HIDE\\ steps, which\nhave the same syntax as \\BY. As noted above, omitting the label from a\nstep's starting token (for example, writing \\s4 instead of \\s42) makes\nthe fact it asserts usable.  
This might be done for compactness at\nthe lowest levels of a proof.\n\n% The Isar proof language~\\cite{isar} is similarly explicit about\n% usable assumptions, but the facility is local to a proof and does\n% not persist into sub-proofs, causing repetition in sibling\n% sub-proofs. In the \\tlatwo proof language, the visibility of an\n% assumption persists until hidden, so one \\USE declaration at a\n% suitable level is enough. (The assumption can be selectively hidden\n% at lower levels if needed for performance reasons.)\n\n% \\llnote{The preceding stuff about Isar needs to be moved to the conclusion.}\n\n\nThe example also indicates how the current proof obligation at every\nstep of the proof is clear, having been written explicitly in a parent\nassertion.  This clear structure comes at the cost of introducing many\nlevels of proof, which can be inconvenient.  One way of avoiding these\nextra levels is by using an assertion of the form \"\\SUFFICES\\ A\",\nwhich asserts that proving $A$ proves the current goal, and makes $A$\nthe new current goal in subsequent steps.  In our example proof, one\nlevel in the proof of step \\s22\\ can be eliminated by writing the\nproof as:\n\\begin{quote} \\small\n  \\begin{tabbing}\n    \\THEOREM\\ \"\\forall S : \\forall f \\in [S -> \\SUBSET\\ S] : \\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n    \\PROOF \\kill\n    \\LSP \\= \\s11.\\ \\= \\ASSUME \\= \"\\NEW\\ S\", \\kill\n         \\>        \\>         \\> \"\\NEW\\ f \\in [S -> \\SUBSET\\ S]\" \\kill\n         \\>        \\> \\PROVE \"\\exists A \\in \\SUBSET\\ S : \\forall x \\in S : f[x] \\neq A\" \\kill\n         \\>        \\> \\PROOF \\kill\n         \\> \\hspace{1em} \\= \\s21.\\ \\= \\DEFINE \"T == \\{z \\in S : z \\notin f[z]\\}\" \\kill\n         \\>        \\> \\s22.  \\> \"\\forall x \\in S : f[x] \\neq T\" \\\\\n         \\>        \\>  \\hspace{1em} \\= \\s31.\\ \\= \\SUFFICES \\ASSUME \"\\NEW\\ x \\in S\" \\PROVE \"f[x] \\neq T\" \\\\\n         \\>        \\>        \\> \\hspace{1em} \\PROOF \\OBVIOUS \\\\\n         \\>        \\>        \\> \\s32.\\ \\> \\CASE \"x \\in T\" \\\\\n         \\>        \\>        \\> \\s33.\\ \\> \\CASE \"x \\notin T\" \\\\\n         \\>        \\>        \\> \\s34.\\ \\> \\QED \\BY \\s32, \\s33\n  \\end{tabbing}\n\\end{quote}\nwhere the proofs of the \\CASE\\ steps are the same as before.  The\n\\SUFFICES\\ statement changes the current goal of the level-3 proof to\n$f[x]\\neq T$ after adding a declaration of \"x\" and the usable fact \"x\n\\in S\" to the context. This way of proving a universally quantified\nformula is sufficiently common that \\tlatwo\\ provides a \\TAKE\\\nconstruct that allows the \\SUFFICES\\ assertion \\s31 and its \\OBVIOUS\nproof to be written \\mbox{\\,$\\TAKE\\ x \\in S$\\,}. \n\nThere is a similar construct, \"\\WITNESS\\ f \\in S\" for proving an\nexistentially quantified goal $\\exists x\\in S: e$, which changes the\ngoal to \"e[x := f]\".\n% the := notation for capture-avoiding substitution is universal\n% enough, especially in the community reading this paper\nFor implicational goals \"e => f\", the construct $\\HAVE\\ e$ changes the\ngoal to $f$.  No other constructs in the \\tlatwo proof language change\nthe form of the current goal. We advise that these constructs be used\nonly at the lowest levels of the proof, since the new goal they create\nmust be derived %% SM: commented out \"algebraically\"\ninstead of being available textually in\na parent assertion.  
(As a check and an aid to the reader, one can at\nany point insert a redundant \\SUFFICES\\ step that simply asserts the\ncurrent goal.)\n\nThe final \\tlatwo\\ proof construct is $\\PICK\\ x : e$, which introduces\na new symbol $x$ that satisfies $e$.  The goal of the proof of this\n\\PICK step is $\\exists x : e$, and it changes the context of\nsubsequent steps by adding a declaration of \"x\" and the fact \"e\". \n%\nA more formal summary of the language appears in\nAppendix~\\ref{apx}.\n\nThe semantics of a \\tlatwo\\ proof is independent of any back-end\nprover. Different provers will have different notions of what\n``follows easily'', so an \\OBVIOUS\\ proof may be verified by one\nprover and not another.  In practice, many provers such as Isabelle\nmust be directed to use decision procedures or special tactics to\nprove some assertions.  For this purpose, special standard modules\nwill contain dummy theorems for giving directives to the\n\\PM.  Using such a theorem (with a \\USE\\ step or \\BY\\ proof) will\n% LL: What generated proof obligation?  And why filter something\n%     rather than not putting it into proof obligations in the \n%     first place?\ncause the \\PM\\ not to use it as a fact, \nbut instead to generate special directives for back-end\nprovers.  It could even cause the \\PM\\ to use a different back-end\nprover.  (The dummy theorem will assert a true fact that suggests the\npurpose of the directive.)\nFor instance, using the theorem \\emph{Arithmetic}\n%% LL: Engineers have never heard of Peano Arithmetic, and we don't\n%%     have so much space that we can squander it on such an example.\n%%   \\textit{PeanoArithmetic} might assert the existence of a set of\n%%   natural numbers satisfying the axioms of Peano Arithmetic; a \"\\BY\\\n%%   \\mathit{PeanoArithmetic}\" proof \n%\nmight be interpreted as an instruction to use a decision procedure for\nintegers.\n\n\\llnote{I replaced ``natural numbers'' by ``integers'' above, which seems\nto me what arithmetic reasoning is more likely to mean.}\n\nWe hope that almost all uses of this feature will leave the \\tlatwo\nproof independent of the back-end prover(s).  
The proof will not have\nto be changed if the \\PM\\ is reconfigured to replace one decision\nprocedure with a different one.\n\n%%%% PLEASE DO NOT EDIT BELOW THIS LINE\n\\ifx\\master\\undefined\n{\\let\\master\\relax\\input{rearmatter}}\n\\end{document}\n\\fi\n", "meta": {"hexsha": "36756157a453103411e8db404f78b4c6694b17b2", "size": 27541, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/keappa08/proof-language.tex", "max_stars_repo_name": "damiendoligez/tlapm", "max_stars_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2016-08-16T14:58:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-19T18:38:07.000Z", "max_issues_repo_path": "doc/keappa08/proof-language.tex", "max_issues_repo_name": "damiendoligez/tlapm", "max_issues_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 49, "max_issues_repo_issues_event_min_datetime": "2020-03-04T18:13:13.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-07T17:43:24.000Z", "max_forks_repo_path": "doc/keappa08/proof-language.tex", "max_forks_repo_name": "damiendoligez/tlapm", "max_forks_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2020-02-26T19:58:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-12T22:18:25.000Z", "avg_line_length": 49.4452423698, "max_line_length": 124, "alphanum_fraction": 0.7138448132, "num_tokens": 7496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737869342623, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5796852241961182}}
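For reference, the goal-transforming constructs described above can be summarized in one place (a recap of the text, not new material):
\begin{icom}
\item $\TAKE\ x \in S$: a goal $\forall x \in S : e$ becomes $e$, with a fresh $x$ and the usable fact $x \in S$ added to the context;
\item $\WITNESS\ f \in S$: a goal $\exists x \in S : e$ becomes $e[x := f]$;
\item $\HAVE\ e$: a goal $e \Rightarrow f$ becomes $f$;
\item $\PICK\ x : e$: has goal $\exists x : e$ and adds a fresh $x$ together with the fact $e$ to subsequent contexts.
\end{icom}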
{"text": "% Part: first-order-logic\n% Chapter: sequent-calculus\n% Section: quantifier-rules\n\n\\documentclass[../../../include/open-logic-section]{subfiles}\n\n\\begin{document}\n\n\\olfileid{fol}{seq}{qrl}\n\n\\olsection{Quantifier Rules}\n\n\\subsection{Rules for $\\lforall$}\n\n\\begin{defish}\n\\Axiom$ !A(t), \\Gamma \\fCenter \\Delta$\n\\RightLabel{\\LeftR{\\lforall}}\n\\UnaryInf$ \\lforall[x][!A(x)],\\Gamma \\fCenter \\Delta$\n\\DisplayProof\n\\hfill\n\\Axiom$ \\Gamma \\fCenter \\Delta, !A(a) $\n\\RightLabel{\\RightR{\\lforall}}\n\\UnaryInf$ \\Gamma \\fCenter \\Delta, \\lforall[x][!A(x)]$\n\\DisplayProof\n\\end{defish}\n\nIn \\LeftR{\\lforall}, $t$ is a closed term (i.e., one without\nvariables). In \\RightR{\\lforall}, $a$~is !!a{constant} which must not\noccur anywhere in the lower sequent of the \\RightR{\\lforall} rule. We\ncall $a$ the \\emph{eigenvariable} of the \\RightR{\\forall}\ninference.\\footnote{We use the term ``eigenvariable'' even though $a$\nin the above rule is !!a{constant}. This has historical reasons.}\n\n\\subsection{Rules for $\\lexists$}\n\n\\begin{defish}\n\\Axiom$ !A(a), \\Gamma \\fCenter \\Delta $\n\\RightLabel{\\LeftR{\\lexists}}\n\\UnaryInf$ \\lexists[x][!A(x)], \\Gamma \\fCenter \\Delta$\n\\DisplayProof\n\\hfill\n\\Axiom$ \\Gamma \\fCenter \\Delta, !A(t) $\n\\RightLabel{\\RightR{\\lexists}}\n\\UnaryInf$ \\Gamma \\fCenter \\Delta, \\lexists[x][!A(x)]$\n\\DisplayProof\n\\end{defish}\n\nAgain, $t$~is a closed term, and $a$~is !!a{constant} which does not\noccur in the lower sequent of the \\LeftR{\\lexists} rule. We call $a$\nthe \\emph{eigenvariable} of the \\LeftR{\\lexists} inference.\n\nThe condition that an eigenvariable not occur in the lower sequent of\nthe \\RightR{\\lforall} or \\LeftR{\\lexists} inference is called the\n\\emph{eigenvariable condition}.\n\n\\begin{explain}\nRecall the convention that when $!A$ is !!a{formula} with the\n!!{variable}~$x$ free, we indicate this by writing~$!A(x)$. In the\nsame context, $!A(t)$ then is short for~$\\Subst{!A}{t}{x}$. So we\ncould also write the \\RightR{\\lexists} rule as:\n\\begin{prooftree}\n  \\Axiom$\\Gamma \\fCenter \\Delta, \\Subst{!A}{t}{x}$\n  \\RightLabel{\\RightR{\\lexists}}\n  \\UnaryInf$\\Gamma \\fCenter \\Delta, \\lexists[x][!A]$\n\\end{prooftree}\nNote that $t$ may already occur in~$!A$, e.g., $!A$~might\nbe~$\\Atom{\\Obj P}{t,x}$. Thus, inferring $\\Gamma \\Sequent \\Delta,\n\\lexists[x][\\Atom{\\Obj P}{t,x}]$ from~$\\Gamma \\Sequent \\Delta,\n\\Atom{\\Obj P}{t,t}$ is a correct application\nof~\\RightR{\\lexists}---you may ``replace'' one or more, and not\nnecessarily all, occurrences of~$t$ in the premise by the bound\n!!{variable}~$x$. However, the eigenvariable conditions in\n\\RightR{\\lforall} and~\\LeftR{\\lexists} require that the\n!!{constant}~$a$ does not occur in~$!A$. So, you cannot correctly\ninfer $\\Gamma \\Sequent \\Delta, \\lforall[x][\\Atom{\\Obj P}{a,x}]$ from\n$\\Gamma \\Sequent \\Delta, \\Atom{\\Obj P}{a,a}$ using~$\\RightR{\\lforall}$.\n\\end{explain}\n\n\\begin{explain}\nIn \\RightR{\\lexists} and \\LeftR{\\lforall} there are no restrictions on\nthe term~$t$. On the other hand, in the \\LeftR{\\lexists} and\n\\RightR{\\lforall} rules, the eigenvariable condition requires that the\n!!{constant}~$a$ does not occur anywhere outside of~$!A(a)$ in the\nupper sequent. It is necessary to ensure that the system is sound,\ni.e., only !!{derive}s sequents that are valid. 
Without this\ncondition, the following would be allowed:\n\\begin{prooftree}\n  \\Axiom$!A(a) \\fCenter !A(a)$\n  \\RightLabel{*\\LeftR{\\lexists}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter !A(a)$\n  \\RightLabel{\\RightR{\\lforall}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter \\lforall[x][!A(x)]$\n  \\DisplayProof\\bottomAlignProof\n  \\qquad\n  \\Axiom$!A(a) \\fCenter !A(a)$\n  \\RightLabel{*\\RightR{\\lforall}}\n  \\UnaryInf$!A(a) \\fCenter \\lforall[x][!A(x)]$\n  \\RightLabel{\\LeftR{\\lexists}}\n  \\UnaryInf$\\lexists[x][!A(x)] \\fCenter \\lforall[x][!A(x)]$\n\\end{prooftree}\nHowever, $\\lexists[x][!A(x)] \\Sequent \\lforall[x][!A(x)]$ is not valid.\n\\end{explain}\n\n\\end{document}\n", "meta": {"hexsha": "8e16c4aaf507fd6c6763814d32f63104a0d07990", "size": 3852, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_stars_repo_name": "GKerfImf/OpenLogic", "max_stars_repo_head_hexsha": "5791905d3149f68e05885290f448054b98a0e51b", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_issues_repo_name": "GKerfImf/OpenLogic", "max_issues_repo_head_hexsha": "5791905d3149f68e05885290f448054b98a0e51b", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/first-order-logic/sequent-calculus/quantifier-rules.tex", "max_forks_repo_name": "GKerfImf/OpenLogic", "max_forks_repo_head_hexsha": "5791905d3149f68e05885290f448054b98a0e51b", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6857142857, "max_line_length": 71, "alphanum_fraction": 0.6923676012, "num_tokens": 1343, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.817574478416099, "lm_q1q2_score": 0.579675958545172}}
{"text": "\\section{DP on Graphs}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Minimum vertex cover on trees}\n  \\begin{exampleblock}{Minimum vertex cover on trees \\pno{2.2.18}}\n    \\begin{itemize}\n\t  \\item Undirected tree $T = (V, E)$; \\textcolor{red}{No designated root!}\n      \\item Compute (the size of) a minimum vertex cover of $T$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\fignocaption{width = 0.30\\textwidth}{figs/vertex-cover.png}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Minimum vertex cover on trees}\n  \\centerline{Rooted $T$ at any node $r$.}\n  \\pause\n  \\vspace{0.30cm}\n\n  \\begin{description}\n\t\\item[Subproblem:] $I(u)$: the size of an MVC of subtree $T_{u}$ rooted at $u$\n\t\\item[Goal:] $I(r)$\n\t  \\pause\n\t\\item[Make choice:] Is $u$ in $\\text{MVC}[u]$?\n\t\\item[Recurrence:] \n\t  \\begin{align*}\n\t\tI(u) = \\min \\{\\text{\\# children of } u &+ \\sum_{v: \\text{ grandchildren of } u} I(v), \\\\\n\t\t\t1 &+ \\sum_{v: \\text{ children of } u} I(v)\\}\n\t  \\end{align*}\n\t  \\pause\n\t\\item[Init:] $I(u) = 0$, if $u$ is a leave\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Minimum vertex cover on trees}\n  \\begin{columns}\n\t\\column{0.35\\textwidth}\n\t  DFS on $T$ from root $r$:\n\n\t  \\vspace{0.50cm}\n\t  \\begin{algorithmic}\n\t\t\\State when $u$ is ``finished'':\n\t\t\\If{$u$ is a leave}\n\t\t  \\State $I(u) \\gets 0$\n\t\t\\Else\n\t\t  \\State $I(u) \\gets \\dots$ \n\t\t\\EndIf\n\t  \\end{algorithmic}\n\t  \\pause\n\t\\column{0.60\\textwidth}\n\tGreedy algorithm (\\textcolor{red}{Rough Proof!}):\n\n\t  \\vspace{0.50cm}\n\t  \\begin{theorem}\n\t\tThere is an MVC which contains no leaves.\n\t  \\end{theorem}\n  \\end{columns}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{DP on DAG}\n  \\begin{exampleblock}{Longest path in DAG (Problem 7.17)}\n\t\\begin{itemize}\n\t  \\item Direction: $\\downarrow$ OR $\\rightarrow$\n\t  \\item Score: $>=< 0$\n\t\\end{itemize}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{enumerate}\n\t\\item digraph $G$\n\t\\item node weight $\\to$ edge weight\n\t\\item adding an extra sink $s$\n\t\\item $G \\to G^{T}$\n  \\end{enumerate}\n\n  \\pause\n  \\centerline{Compute a longest path from $s$ in DAG}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{DP on DAG}\n  \\begin{description}\n\t\\item[Subproblem:] $\\text{dist}[v]$: longest distance from $s$ to $v$ \n\t\\item[Goal:] $\\text{dist}[v], \\forall v \\in V$\n\t  \\pause\n\t\\item[Make choice:] What is the previous node before $v$ on the longest path?\n\t\\item[Recurrence:] \n\t  \\[\n\t\t\\text{dist}[v] = \\max_{u \\to v} \\left(\\text{dist}[u] + w(u \\to v)\\right) \n\t  \\]\n\t  \\pause\n\t\\item[Init:] $\\text{dist}[s] = 0$\n  \\end{description}\n\n  \\pause\n  \\vspace{0.60cm}\n  \\centerline{Compute $\\text{dist}[v]$ in topo. 
order}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Bitonic tour}\n  \\begin{exampleblock}{Bitonic tour (Problem 7.18)}\n\t\\begin{itemize}\n\t  \\item Points: $P[1 \\dots n], \\; p_i = (x_i, y_i)$\n\t  \\item $x_1 < x_2 < \\dots < x_n$\n\t  \\item Bitonic tour: $p_1 \\leadsto^{x_i < x_{i+1}} p_n \\leadsto^{x_i > x_{i+1}} p_1$\n\t  \\item Compute a shortest bitonic tour.\n\t\\end{itemize}\n  \\end{exampleblock}\n\n  \\fignocaption{width = 0.30\\textwidth}{figs/bitonic-tour-wiki.png}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Bitonic tour}\n  \\centerline{$P_{i,j} (i \\le j)$: bitonic path $p_i \\leadsto^{x_i > x_{i+1}} p_1 \\leadsto^{x_i < x_{i+1}} p_j$ includes all $p_1, p_2, \\dots, p_j$}\n\n  \\vspace{0.20cm}\n  \\begin{description}\n\t\\item[Subproblem:] $d[i,j]$: the length of a shortest bitonic path $P_{i,j}$\n\t\\item[Goal:] $d[n,n] = d[n-1, n] + l(p_{n-1}p_{n})$\n\t  \\pause\n\t\\item[Make choice:] Is $p_{j-1}$ on the increasing path or the decreasing path?\n\t\\item[Recurrence:] \n\t  \\begin{align*}\n\t\td[i,j] &=  d[i,j-1] + l(p_{j-1}p_j) \\quad \\forall i < j-1 \\\\\n\t\td[i,j] &= \\min_{1 \\le k < j-1} \\set{d[k, j-1] + l(p_kp_j)} \\quad \\forall i = j-1\n\t  \\end{align*}\n\t  \\pause\n\t\\item[Init:] $d[1,2] = l(p_1p_2)$\n\t\\item[Time:] \n\t  \\[\n\t\tO(n^2) = O(n \\log n) + O(n^2) \\cdot O(1) + O(n) \\cdot O(n)\n\t  \\]\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "4caf87863a4ce2b66e27280e242b775eaf0e34b2", "size": 3887, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/graph-dp.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/graph-dp.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/graph-dp.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 28.7925925926, "max_line_length": 148, "alphanum_fraction": 0.5924877798, "num_tokens": 1491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825007, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5796657824520052}}
{"text": "\\chapter{Applications I:  Pure Binary Arithmetic}\\label{arithchapter}\n\nThis chapter presents relations for arithmetic over the non-negative\nintegers: addition, subtraction, multiplication, division,\nexponentiation, and logarithm.  Importantly, these relations are\nrefutationally complete---if an individual arithmetic relation is\ncalled with arguments that do not satisfy the relation, the relation\nwill fail in finite time rather than diverge. The conjunction of two\nor more arithmetic relations may not fail finitely, however. This is\nbecause the conjunction of arithmetic relations can express\nDiophantine equations; were such conjunctions guaranteed to terminate,\nwe would be able to solve Hilbert's 10$^{th}$ problem, which is\nundecidable~\\cite{hilbertstenth}.  We also do not guarantee\ntermination if the goal's arguments share variables, since sharing can\nexpress the conjunction of sharing-free relations.\n\n\\citet{conf/flops/KiselyovBFS08} gives proofs of refutational\ncompleteness for these relations.  \\citet{trs} and\n\\citet{conf/flops/KiselyovBFS08} give additional examples and\nexposition of these arithmetic relations\\footnote{The definition of\n  \\scheme|logo| in the first printing of \\citet{trs} contains an\n  error, which has been corrected in the second printing and in\n  section~\\ref{arithexplog}.}.\n\nThis chapter is organized as follows.  Section~\\ref{arithrep}\ndescribes our representation of numbers.  In section~\\ref{arithnaive}\nwe present a naive implementation of addition and show its\nlimitations.  Section~\\ref{arithrevisited} presents a more\nsophisticated implementation of addition, inspired by the half-adders\nand full-adders of digital hardware.  Sections~\\ref{arithmult} and\n\\ref{arithdivision} present the multiplication and division relations,\nrespectively.  Finally in section~\\ref{arithexplog} we define\nrelations for logarithm and exponentiation.\n\n\n\\section{Representation of Numbers}\\label{arithrep}\n\nBefore we can write our arithmetic relations, we must decide how we\nwill represent numbers.  For simplicity, we restrict the domain of our\narithmetic relations to non-negative integers\\footnote{We could extend\n  our treatment to negative integers by adding a sign tag to each\n  number.}.  We might be tempted to use Scheme's built-in numbers for\nour arithmetic relations.  Unfortunately, unification cannot decompose\nScheme numbers.  Instead, we need an inductively defined\nrepresentation of numbers that can be constructed and deconstructed\nusing unification.  We will therefore represent numbers as lists.\n\nThe simplest approach would be to use a unary\nrepresentation\\footnote{Even when using unary numbers, defining\n  refutationally complete arithmetic relations is non-trivial, as\n  demonstrated by~\\citet{conf/flops/KiselyovBFS08}.}; however, for\nefficiency we will represent numbers as lists of binary digits.  Our\nlists of binary digits are \\emph{little-endian}: the \\scheme|car| of\nthe list contains the least-significant-bit, which is convenient when\nperforming arithmetic.  We can define the \\scheme|build-num| helper\nfunction, which constructs binary little-endian lists from Scheme\nnumbers.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define build-num\n  (lambda (n)\n    (cond\n      ((zero? n) '())\n      ((and (not (zero? n)) (even? n))\n       (cons 0 (build-num (quotient n 2))))\n      ((odd? 
n)\n       (cons 1 (build-num (quotient (- n 1) 2)))))))\n\\end{schemedisplay}\n\n\\noindent For example \\mbox{\\scheme|(build-num 6)|} returns\n\\mbox{\\scheme|`(0 1 1)|}, while \\mbox{\\scheme|(build-num 19)|} returns\n\\mbox{\\scheme|`(1 1 0 0 1)|}.\n\nTo ensure there is a unique representation of every number, we suppress\ntrailing $0$'s. Thus \\mbox{\\scheme|`(0 1)|} is the unique\nrepresentation of the number two; both \\mbox{\\scheme|`(0 1 0)|} and\n\\mbox{\\scheme|`(0 1 0 0)|} are illegal.  Similarly, \\scheme|'()| is\nthe unique representation of zero; \\mbox{\\scheme|`(0)|} is illegal.\nLists representing numbers may be partially instantiated:\n\\mbox{\\scheme|`(1 . ,x)|} represents any odd integer, while\n\\mbox{\\scheme|`(0 . ,y)|} represents any \\emph{positive} even number.\nWe must ensure that our relations never instantiate variables\nrepresenting numbers to illegal values---in these examples, \\scheme|x|\ncan be instantiated to any legal number, while \\scheme|y| can be\ninstantiated to any number \\emph{other} than zero to avoid creating\nthe illegal value \\mbox{\\scheme|`(0)|}.\n\nWe can now define the simplest useful arithmetic relations,\n\\scheme|poso| and \\scheme|>1o|.  The \\scheme|poso| relation is\nsatisfied if its argument represents a positive\ninteger.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define poso\n  (lambda-e (n)\n    (`((,a . ,d)))))\n\\end{schemedisplay}\n\n\\noindent The \\scheme|>1o| relation is satisfied if its argument\nrepresents an integer greater than one.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(define >1o\n  (lambda-e (n)\n    (`((,a ,b . ,d)))))\n\\end{schemedisplay}\n\n\\noindent We will use \\scheme|poso| and \\scheme|>1o| in more\nsophisticated arithmetic relations, starting with addition.\n\n\\section{Naive Addition}\\label{arithnaive}\n\nNow that we have decided on a representation for numbers, we can\ndefine the addition relation, \\scheme|pluso|.  \n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define pluso\n  (lambda-e (n m s)\n    (`(,x () ,x))\n    (`(() ,y ,y))\n    (`((0 . ,x) (,b . ,y) (,b . ,res))\n     (pluso x y res))    \n    (`((,b . ,x) (0 . ,y) (,b . ,res))\n     (pluso x y res))\n    (`((1 . ,x) (1 . ,y) (0 . ,res))\n     (exist (res-1)\n       (pluso x y res-1)\n       (pluso '(1) res-1 res)))))\n\\end{schemedisplay}\n\n\\noindent The first two clauses handle when \\scheme|n| or \\scheme|m|\nis zero.  The next two clauses handle when both \\scheme|n| and\n\\scheme|m| are positive integers, at least one of which is even.  The\nfinal clause handles when \\scheme|n| and \\scheme|m| are both positive\nodd integers.\n\nAt first glance, our definition of \\scheme|pluso| seems to work fine.\n\n\\wspace\n\n\\noindent\\scheme|(run1 (q) (pluso '(1 1) '(0 1 1) q))| $\\Rightarrow$ \\scheme|`((1 0 0 1))|\n\n\\wspace\n\n\\noindent As expected, adding three and six yields nine.  However,\nreplacing \\scheme|run1| with \\scheme|run*| results in the answer\n\\mbox{\\scheme|((1 0 0 1) (1 0 0 1))|}.  The duplicate value is due to\nthe overlapping of clauses in \\scheme|pluso|---for example, both of\nthe first two clauses succeed when \\scheme|n|, \\scheme|m|, and\n\\scheme|s| are all zero.  Even worse, \\mbox{\\scheme|(run* (q) (pluso '(0 1) q '(1 0 1)))|}\n returns \\mbox{\\scheme|`((1 1) (1 1) (1 1 0) (1 1 0))|}. 
The\nlast two values are not even legal representations of a number, since\nthe most-significant bit is zero.\n\nWe can fix these problems by making the clauses of \\scheme|pluso|\nnon-overlapping, and by adding calls to \\scheme|poso| to ensure the\nmost-significant bit of a positive number is never instantiated to\nzero.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(define pluso\n  (lambda-e (n m k)\n    (`(,x () ,x))\n    (`(() (,x . ,y) (,x . ,y)))\n    (`((0 . ,x) (0 . ,y) (0 . ,res)) (poso x) (poso y)\n     (pluso x y res))\n    (`((0 . ,x) (1 . ,y) (1 . ,res)) (poso x)\n     (pluso x y res))\n    (`((1 . ,x) (0 . ,y) (1 . ,res)) (poso y)\n     (pluso x y res))\n    (`((1 . ,x) (1 . ,y) (0 . ,res))\n     (exist (res-1)\n       (pluso x y res-1)\n       (pluso '(1) res-1 res)))))\n\\end{schemedisplay}\n\n\\enlargethispage{1em}\n\n\\noindent We separated the third clause of the original \\scheme|pluso|\ninto two clauses, so we can use \\scheme|poso| to avoid illegal\ninstantiations of numbers.\n\nThe improved definition of \\scheme|pluso| no longer produces duplicate\nor illegal values.\n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (pluso '(1 1) '(0 1 1) q))| $\\Rightarrow$ \\scheme|`((1 0 0 1))|\n\n\\noindent\\scheme|(run* (q) (pluso '(0 1) q '(1 0 1)))| $\\Rightarrow$ \\scheme|`((1 1))|\n\n\\wspace\n\nIt may appear that our new \\scheme|pluso| is refutationally complete,\nsince attempting to add eight to some number \\scheme|q| to produce six\nfails finitely:\n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (pluso '(0 0 0 1) q '(0 1 1)))| $\\Rightarrow$ \\scheme|'()|\n\n\\wspace\n\n\\noindent Unfortunately, this example is misleading---\\scheme|pluso|\nis not refutationally complete.  The expression\n\\mbox{\\scheme|(run1 (q) (pluso q '(1 0 1) '(0 0 0 1)))|} \nreturns \\mbox{\\scheme|'((1 1))|} as expected, but replacing\n\\scheme|run1| with \\scheme|run2| results in divergence.  Similarly,\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(run6 (q)\n  (exist (x y)\n    (pluso x y '(1 0 1))\n    (== `(,x ,y) q)))\n\\end{schemedisplay}\n\n\\noindent returns\n\n\\schemedisplayspace\n\\begin{schemeresponse}\n(((1 0 1) ())\n (() (1 0 1))\n ((0 0 1) (1))\n ((1) (0 0 1))\n ((0 1) (1 1))\n ((1 1) (0 1)))\n\\end{schemeresponse}\n\n\\noindent but \\scheme|run7| diverges. If we were to swap the recursive\ncalls in the last clause of \\scheme|pluso|, the previous expressions would\nconverge when using \\scheme|run*|; unfortunately, many previously\nconvergent expressions would then diverge\\footnote{These examples\n  demonstrate why an efficient implementation (or simulation) of\n  commutative conjunction would be useful.}.  If we want\n\\scheme|pluso| to be refutationally complete, we must reconsider our\napproach.\n\n\n\\section{Arithmetic Revisited}\\label{arithrevisited}\n\nIn this section we develop a refutationally complete definition of\n\\scheme|pluso|, inspired by the half-adders and full-adders of digital\nlogic\\footnote{See \\citet{hennessy-computer} for a description of\n  hardware adders.}.\n\nWe first define \\scheme|half-addero|, which, when given the binary\ndigits \\scheme|x|, \\scheme|y|, \\scheme|r|, and \\scheme|c|, satisfies\nthe equation $x + y = r + 2 \\cdot c$.  
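For instance, taking $x = y = 1$ forces $r = 0$ and $c = 1$, since\n$1 + 1 = 0 + 2 \\cdot 1$; the carry bit~$c$ is set exactly when both\ninput bits are set.\n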
\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define half-addero\n  (lambda (x y r c)\n    (exist ()\n      (bit-xoro x y r)\n      (bit-ando x y c))))\n\\end{schemedisplay}\n\n\\scheme|half-addero| is defined using bit-wise relations for logical\n{\\tt and} and {\\tt exclusive-or}.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define bit-ando\n  (lambda-e (x y r)\n    (`(0 0 0))\n    (`(1 0 0))\n    (`(0 1 0))\n    (`(1 1 1))))\n\\end{schemedisplay}\n\n\\begin{schemedisplay}\n(define bit-xoro\n  (lambda-e (x y r)\n    (`(0 0 0))\n    (`(0 1 1))\n    (`(1 0 1))\n    (`(1 1 0))))\n\\end{schemedisplay}\n\nNow that we have defined \\scheme|half-addero|, we can define\n\\scheme|full-addero|.  \\scheme|full-addero| is similar to\n\\scheme|half-addero|, but takes a carry-in bit \\scheme|b|; given bits\n\\scheme|b|, \\scheme|x|, \\scheme|y|, \\scheme|r|, and \\scheme|c|,\n\\scheme|full-addero| satisfies $b + x + y = r + 2 \\cdot c$.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define full-addero\n  (lambda (b x y r c)\n    (exist (w xy wz)\n      (half-addero x y w xy)\n      (half-addero w b r wz)\n      (bit-xoro xy wz c))))\n\\end{schemedisplay}\n\n\\scheme|half-addero| and \\scheme|full-addero| add individual bits.  We\nnow define \\scheme|addero| in terms of \\scheme|full-addero|;\n\\scheme|addero| adds a carry-in bit \\scheme|d| to arbitrarily large\nnumbers \\scheme|n| and \\scheme|m| to produce a number \\scheme|r|.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(define addero\n  (lambda (d n m r)\n    (match-e `(,d ,n ,m)\n      (`(0 __ ()) (== n r))\n      (`(0 () __) (== m r) (poso m))\n      (`(1 __ ())\n       (addero 0 n '(1) r))\n      (`(1 () __)\n       (poso m)\n       (addero 0 '(1) m r))\n      (`(__ (1) (1))\n       (exist (a c)\n         (== `(,a ,c) r)\n         (full-addero d 1 1 a c)))\n      (`(__ (1) __)\n       (gen-addero d n m r))\n      (`(__ __ (1))\n       (>1o n) (>1o r)\n       (addero d '(1) n r))\n      (`(__ __ __) \n       (>1o n) \n       (gen-addero d n m r)))))\n\\end{schemedisplay}\n\nThe last clause of \\scheme|addero| calls \\scheme|gen-addero|; given\nthe bit \\scheme|d| and numbers \\scheme|n|, \\scheme|m|, and \\scheme|r|,\n\\scheme|gen-addero| satisfies $d + n + m = r$, provided that\n\\scheme|m| and \\scheme|r| are greater than one and \\scheme|n| is\npositive.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define gen-addero\n  (lambda (d n m r)\n    (match-e `(,n ,m ,r)\n      (`((,a . ,x) (,b . ,y) (,c . ,z))\n       (exist (e)\n         (poso y) (poso z)\n         (full-addero d a b c e)\n         (addero e x y z))))))\n\\end{schemedisplay}\n\nWe are finally ready to redefine \\scheme|pluso|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define pluso (lambda (n m k) (addero 0 n m k)))\n\\end{schemedisplay}\n\n\\noindent As proved by~\\citet{conf/flops/KiselyovBFS08}, this\ndefinition of \\scheme|pluso| is refutationally complete.  Using the\nnew \\scheme|pluso| all the addition examples from the previous section\nterminate, even when using \\scheme|run*|.  We can also generate\ntriples of numbers, where the sum of the first two numbers equals the\nthird.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(run9 (q)\n  (exist (x y r)\n    (pluso x y r)\n    (== `(,x ,y ,r) q))) $\\Rightarrow$\n\\end{schemedisplay}\n\\nspace\n\\begin{schemeresponse}\n((_.0 () _.0)\n (() (_.0 . _.1) (_.0 . _.1))\n ((1) (1) (0 1))\n ((1) (0 _.0 . _.1) (1 _.0 . _.1))\n ((1) (1 1) (0 0 1))\n ((0 _.0 . _.1) (1) (1 _.0 . 
_.1))\n ((1) (1 0 _.0 . _.1) (0 1 _.0 . _.1))\n ((0 1) (0 1) (0 0 1))\n ((1) (1 1 1) (0 0 0 1)))\n\\end{schemeresponse}\n% \\begin{schemeresponse}\n% ((_.0 () _.0)\n%  (() (_.0 . _.1) (_.0 . _.1))\n%  ((1) (1) (0 1))\n%  ((1) (0 _.0 . _.1) (1 _.0 . _.1))\n%  ((1) (1 1) (0 0 1))\n%  ((0 _.0 . _.1) (1) (1 _.0 . _.1))\n%  ((1) (1 0 _.0 . _.1) (0 1 _.0 . _.1))\n%  ((0 1) (0 1) (0 0 1))\n%  ((1) (1 1 1) (0 0 0 1))\n%  ((1 1) (1) (0 0 1))\n%  ((1) (1 1 0 _.0 . _.1) (0 0 1 _.0 . _.1))\n%  ((1 1) (0 1) (1 0 1))\n%  ((1) (1 1 1 1) (0 0 0 0 1))\n%  ((1 0 _.0 . _.1) (1) (0 1 _.0 . _.1))\n%  ((1) (1 1 1 0 _.0 . _.1) (0 0 0 1 _.0 . _.1)))\n% \\end{schemeresponse}\n\nWe can take advantage of the flexibility of the relational approach by\ndefining subtraction in terms of addition.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define minuso (lambda (n m k) (pluso m k n)))\n\\end{schemedisplay}\n\n\\noindent \\scheme|minuso| works as expected: \n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (minuso '(0 0 0 1) '(1 0 1) q))| $\\Rightarrow$ \\scheme|`((1 1))|\n\n\\wspace\n\n\\noindent eight minus five is indeed three. \\scheme|minuso| is also\nrefutationally complete:\n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (minuso '(0 1 1) q '(0 0 0 1)))| $\\Rightarrow$ \\scheme|`()|\n\n\\wspace\n\n\\noindent there is no non-negative integer \\scheme|q| that, when\nsubtracted from six, produces eight.\n\n\\section{Multiplication}\\label{arithmult}\n\\enlargethispage{2em}\n\nNext we define the multiplication relation \\scheme|mulo|, which\nsatisfies $n \\cdot m = p$.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define mulo\n  (lambda (n m p)\n    (match-e `(,n ,m)\n      (`(() __) (== '() p))\n      (`(__ ()) (== '() p) (poso n))  \n      (`((1) __) (== m p) (poso m))   \n      (`(__ (1)) (== n p) (>1o n))\n      (`((0 . ,x) __)\n       (exist (z)\n         (== `(0 . ,z) p)\n         (poso x) (poso z) (>1o m)\n         (mulo x m z)))\n      (`((1 . ,x) (0 . ,y))\n       (poso x) (poso y)\n       (mulo m n p))\n      (`((1 . ,x) (1 . ,y))\n       (poso x) (poso y)\n       (odd-mulo x n m p)))))\n\\end{schemedisplay}\n\n\\noindent \\scheme|mulo| is defined in terms of the helper relation\n\\scheme|odd-mulo|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define odd-mulo\n  (lambda (x n m p)\n    (exist (q)\n      (bound-mulo q p n m)\n      (mulo x m q)\n      (pluso `(0 . ,q) m p))))\n\\end{schemedisplay}\n\n\\noindent For detailed descriptions of \\scheme|mulo| and\n\\scheme|odd-mulo|, see~\\cite{trs} and \\cite{conf/flops/KiselyovBFS08}.\nFrom a refutational-completeness perspective, the definition of\n\\scheme|bound-mulo| is most interesting.\n\n\\scheme|bound-mulo| ensures that the product of \\scheme|n| and\n\\scheme|m| is no larger than \\scheme|p| by enforcing that the\nlength\\footnote{More correctly, the length of the list representing\n  the number.}  of \\scheme|n| plus the length of \\scheme|m| is an\nupper bound for the length of \\scheme|p|.  In the process of enforcing\nthis bound, \\scheme|bound-mulo| length-instantiates \\scheme|q|---that\nis, \\scheme|q| becomes a list of fixed length containing\nuninstantiated variables representing binary digits.  The length of\n\\scheme|q|, written $\\norm{q}$, satisfies $\\norm{q} <\n\\min(\\norm{p},\\penalty\\binoppenalty \\norm{n} + \\norm{m} + 1)$.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define bound-mulo\n  (lambda (q p n m)\n    (match-e `(,q ,p)\n      (`(() (__ . __)))\n      (`((__ . ,x) (__ . 
,y))\n       (exist (a z)\n         (conde\n           ((== '() n)\n            (== `(,a . ,z) m)\n            (bound-mulo x y z '()))\n           ((== `(,a . ,z) n)\n            (bound-mulo x y z m))))))))\n\\end{schemedisplay}\n\n\\scheme|mulo| works as expected:\n\n\\wspace\n\n\\noindent\\scheme|(run* (p) (mulo '(1 0 1) '(1 1) p))| $\\Rightarrow$ \\scheme|`((1 1 1 1))|\n\n\\wspace\n\n\\noindent multiplying five by three yields fifteen. Thanks to the\nbounds on term sizes enforced by \\scheme|bound-mulo|, \\scheme|mulo| is\nrefutationally complete:\n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (mulo '(0 1) q '(1 1)))| $\\Rightarrow$ \\scheme|`()|\n\n\\wspace\n\n\\noindent there exists no non-negative integer \\scheme|q| that, when\nmultiplied by two, yields three.\n\nAs we expect of all our relations, \\scheme|mulo| is flexible---it can\neven be used to factor numbers.  For example, this \\scheme|run*|\nexpression returns all the factors of twelve.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(run* (q) \n  (exist (m)\n    (mulo q m '(0 0 1 1)))) $\\Rightarrow$\n\\end{schemedisplay}\n\\nspace\n\\begin{schemeresponse}\n((1) (0 0 1 1) (0 1) (0 0 1) (1 1) (0 1 1))\n\\end{schemeresponse}\n\n% From FLOPS paper.\n%\n% We guarantee termination only for stand-alone base arithmetic goals but\n% not their conjunctions (see \\S\\ref{s:solution-set}). This\n% non-compositionality is expected, since conjunctions of arithmetic\n% goals can express Diophantine equations; were such conjunctions\n% guaranteed to terminate, we would be able to solve Hilbert's 10th\n% problem, which is undecidable~\\cite{hilbertstenth}.  We also do not\n% guarantee termination if the goal's arguments share variables.  Such a\n% goal can be expressed by conjoining a sharing-free base goal and\n% equalities.\n\n% [TODO discuss Presberger Arithmetic versus Peano Arithmetic; Hilbert's\n% 10th problem; Diophantine equations; conjunction of arithmetic\n% relations, perhaps by sharing variables.\n\n% Should be able to lift wording for this.]\n\n\\section{Division}\\label{arithdivision}\n\nNext we define a relation that performs division with remainder.  We\nwill need additional bounds on term sizes to define division (and\nlogarithm in section~\\ref{arithexplog}).\n\nThe relation \\scheme|=lo| ensures that the lists representing the numbers\n\\scheme|n| and \\scheme|m| are the same length.  As before, we must\ntake care to avoid instantiating either number to an illegal value\nlike \\scheme|`(0)|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define =lo\n  (lambda-e (n m)\n    (`(() ()))\n    (`((1) (1)))\n    (`((,a . ,x) (,b . ,y)) (poso x) (poso y)\n     (=lo x y))))\n\\end{schemedisplay}\n\n\\scheme|<lo| ensures that the length of the list representing \\scheme|n| is\nless than that of \\scheme|m|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define <lo\n  (lambda-e (n m)\n    (`(() __) (poso m))\n    (`((1) __) (>1o m))\n    (`((,a . ,x) (,b . 
,y)) (poso x) (poso y)\n     (<lo x y))))\n\\end{schemedisplay}\n\nWe can now define \\scheme|<=lo| by combining \\scheme|=lo| and \\scheme|<lo|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define <=lo\n  (lambda (n m)\n    (conde\n      ((=lo n m))\n      ((<lo n m)))))\n\\end{schemedisplay}\n\nUsing \\scheme|<lo| and \\scheme|=lo| we can define \\scheme|<o|, which\nensures that the value of \\scheme|n| is less than that of \\scheme|m|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define <o\n  (lambda (n m)\n    (conde\n      ((<lo n m))\n      ((=lo n m)\n       (exist (x)\n         (poso x)\n         (pluso n x m))))))\n\\end{schemedisplay}\n\nCombining \\scheme|<o| and \\scheme|==| leads to the definition of\n\\scheme|<=o|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define <=o\n  (lambda (n m)\n    (conde\n      ((== n m))\n      ((<o n m)))))\n\\end{schemedisplay}\n\nWith the bounds relations in place, we can define division with\nremainder.  The \\scheme|divo| relation takes numbers \\scheme|n|,\n\\scheme|m|, \\scheme|q|, and \\scheme|r|, and satisfies $n = m \\cdot q +\nr$, with $0 \\leq r < m$; this is equivalent to the equation $\\frac{n}{m} = q$\nwith remainder $r$.\nA simple definition of \\scheme|divo| is \n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define divo\n  (lambda (n m q r)\n    (exist (mq)\n      (<o r m)\n      (<=lo mq n)\n      (mulo m q mq)\n      (pluso mq r n))))\n\\end{schemedisplay}\n\n\\noindent Unfortunately, \n\\mbox{\\scheme|(run* (m) (exist (r) (divo '(1 0 1) m '(1 1 1) r)))|}\ndiverges. Because we want refutational completeness,\nwe instead use the more sophisticated definition\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define divo\n  (lambda (n m q r)\n    (match-e q \n      (`() (== r n) (<o n m))\n      (`(1) (=lo n m) (pluso r m n) (<o r m))\n      (__ (<lo m n) (<o r m) (poso q)\n       (exist (nh nl qh ql qlm qlmr rr rh)\n         (splito n r nl nh)\n         (splito q r ql qh)\n         (conde\n           ((== '() nh)\n            (== '() qh)\n            (minuso nl r qlm)\n            (mulo ql m qlm))\n           ((poso nh)\n            (mulo ql m qlm)\n            (pluso qlm r qlmr)\n            (minuso qlmr nl rr)\n            (splito rr r '() rh)\n            (divo nh m qh rh))))))))\n\\end{schemedisplay}\n\n\\noindent The refutational completeness of \\scheme|divo| is largely due to the\nuse of \\scheme|<o|, \\scheme|<lo|, and \\scheme|=lo| to establish bounds\non term sizes. \\scheme|divo| is described in detail in~\\citet{trs}.\n\n\\scheme|divo| relies on the relation \\scheme|splito| to `split' a\nbinary numeral at a given length: \\mbox{\\scheme|(splito n r l h)|}\nholds if $n = l + 2^{s+1} \\cdot h$ where $s = \\norm{r}$ and $l < 2^{s+1}$.\n\\scheme|splito| can construct \\scheme|n| by combining the lower-order\nbits\\footnote{The lowest bit of a positive number \\mbox{\\scheme|n|} is\n  the car of \\mbox{\\scheme|n|}.} of \\scheme|l| with\nthe higher-order bits of \\scheme|h|, inserting \\emph{padding} bits as\nspecified by the length of \\scheme|r|---\\scheme|splito| is essentially\na specialized version of \\scheme|appendo|.  \\scheme|splito| ensures\nthat illegal values like \\scheme|'(0)| are not constructed by removing\nthe rightmost zeros after splitting the number \\scheme|n| into its\nlower-order bits and its higher-order bits.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define splito\n  (lambda-e (n r l h)\n    (`(() __ () ()))\n    (`((0 ,b . ,n^) () () (,b . 
,n^)))\n    (`((1 . ,n^) () (1) ,n^))\n    (`((0 ,b . ,n^) (,a . ,r^) () __)\n     (splito `(,b . ,n^) r^ '() h))\n    (`((1 . ,n^) (,a . ,r^) (1) __)\n     (splito n^ r^ '() h))\n    (`((,b . ,n^) (,a . ,r^) (,b . ,l^) __)\n     (poso l^)\n     (splito n^ r^ l^ h))))\n\\end{schemedisplay}\n\n% Second, the definition of the \\scheme|splito| helper\n% relation is so complicated because it must avoid instantiating numbers\n% to illegal values.\n\n% \\scheme|splito|'s definition is so complicated\n% because \\scheme|splito| must not allow the list \\scheme|'(0)| to\n% represent a number.  For example, \\mbox{\\scheme|(splito '(0 0 1) '()\n%   '() '(0 1))|} should succeed, but \\mbox{\\scheme|(splito '(0 0 1) '()\n%   '(0) '(0 1))|} should fail.\n\n% \\scheme|l| contains the lowest bits of \\scheme|n| (if any), while \\scheme|h|\n% contains \\scheme|n|'s highest bits.  \n\n% The \\scheme|splito| relation is similar to the ternary\n% \\scheme|appendo| relation: appending \\scheme|l|\n\n% $\\norm{r} - \\norm{l} + 1$ padding bits\n\n%  and \\scheme|h| yields \\scheme|n|, where \\scheme|l|,\n% \\scheme|h|, and \\scheme|n| represent non-negative integers using our\n% little-endian list scheme.  \\scheme|l| contains the lowest bits of\n% \\scheme|n|, while \\scheme|h| contains \\scheme|n|'s highest bits.\n\n% The call \\mbox{\\scheme|(splito n '() l h)|} \\emph{moves} the lowest\n% bit\\footnote{The lowest bit of a positive number \\mbox{\\scheme|n|} is\n%   the \\mbox{\\scheme|car|} of \\mbox{\\scheme|n|}.} of \\scheme|n|, if\n% any, into \\scheme|l|, and moves the remaining bits of \\scheme|n| into\n% \\scheme|h|; \\mbox{\\scheme|(splito n '(1) l h)|} moves the two lowest\n% bits of \\scheme|n| into \\scheme|l| and moves the remaining bits of\n% \\scheme|n| into \\scheme|h|; and \\mbox{\\scheme|(splito n '(1 1 1 1) l h)|}, \n% \\mbox{\\scheme|(splito n '(0 1 1 1) l h)|}, or\n% \\mbox{\\scheme|(splito n '(0 0 0 1) l h)|} move the five lowest bits of\n% \\scheme|n| into \\scheme|l| and move the remaining bits into\n% \\scheme|h|; and so on.\n\n% Since \\scheme|splito| is a relation, it can construct \\scheme|n| by\n% combining the lower-order bits of \\scheme|l| with the higher-order\n% bits of \\scheme|h|, inserting \\emph{padding} bits as specified by the\n% length of \\scheme|r|.\n\n% \\scheme|splito|'s definition is so complicated\n% because \\scheme|splito| must not allow the list \\scheme|'(0)| to\n% represent a number.  For example, \n% \\mbox{\\scheme|(splito '(0 0 1) '() '() '(0 1))|} should succeed, but\n% \\mbox{\\scheme|(splito '(0 0 1) '() '(0) '(0 1))|} should fail.\n% \\scheme|splito| ensures that illegal values like \\scheme|'(0)| are not\n% constructed by removing the rightmost zeros after splitting the number\n% \\scheme|n| into its lower-order bits and its higher-order bits.\n\n\n\n\\section{Logarithm and Exponentiation}\\label{arithexplog}\n\nWe end this chapter by defining relations for logarithm with remainder\nand exponentiation.\n\n\\newpage\n\n%\\schemedisplayspace\n\\begin{schemedisplay}\n(define logo\n  (lambda-e (n b q r)\n    (`((1) __ () ()) (poso b))\n    (`(__ __ () __) (<o n b) (pluso r '(1) n))\n    (`(__ __ (1) __) (>1o b) (=lo n b) (pluso r b n))\n    (`(__ (1) __ __) (poso q) (pluso r '(1) n))\n    (`(__ () __ __) (poso q) (== r n))\n    (`((,a ,b^ . ,dd) (0 1) __ __) (poso dd)\n     (exp2o n '() q)\n     (exist (s) (splito n dd r s)))\n    (`(__ __ __ __)\n     (exist (a b^ add ddd)\n       (conde\n         ((== '(1 1) b))\n         ((== `(,a ,b^ ,add . 
,ddd) b))))\n     (<lo b n)\n     (exist (bw1 bw nw nw1 ql1 ql s)\n       (exp2o b '() bw1)\n       (pluso bw1 '(1) bw)\n       (<lo q n)\n       (exist (q^ bwq1)\n         (pluso q '(1) q^)\n         (mulo bw q^ bwq1)\n         (<o nw1 bwq1))\n       (exp2o n '() nw1)\n       (pluso nw1 '(1) nw)\n       (divo nw bw ql1 s)\n       (pluso ql '(1) ql1)\n       (<=lo ql q)\n       (exist (bql qh s qdh qd)\n         (repeated-mulo b ql bql)\n         (divo nw bw1 qh s)\n         (pluso ql qdh qh)\n         (pluso ql qd q)\n         (<=o qd qdh)\n         (exist (bqd bq1 bq)\n           (repeated-mulo b qd bqd)\n           (mulo bql bqd bq)\n           (mulo b bq bq1)\n           (pluso bq r n)\n           (<o n bq1)))))))\n\\end{schemedisplay}\n\nGiven numbers \\scheme|n|, \\scheme|b|, \\scheme|q|, and \\scheme|r|,\n\\scheme|logo| satisfies $n = b^q + r$, where $0 \\leq r$ and where $q$\nis the largest number that satisfies the equation.  The \\scheme|logo|\ndefinition is similar to \\scheme|divo|, but uses exponentiation rather\nthan multiplication\\footnote{A line-by-line description of the Prolog\nversion of \\scheme|logo| and its helper relations can be found at\n\\url{http://okmij.org/ftp/Prolog/Arithm/pure-bin-arithm.prl}}.\n\n%% From Oleg:\n% When b = 2, exponentiation and discrete logarithm are easier to obtain\n% n = 2^q + r, 0<= 2*r < n\n% Here, we just relate n and q.\n%    exp2 n b q\n% holds if: n = (|b|+1)^q + r, q is the largest such number, and\n% (|b|+1) is a power of two.\n%\n% Side condition: (|b|+1) is a power of two and b is L-instantiated.\n% To obtain the binary exp/log relation, invoke the relation as\n%  (exp2 n '() q)\n% Properties: if n is L-instantiated, one solution, q is fully instantiated.\n% If q is fully instantiated: one solution, n is L-instantiated.\n% In any event, q is always fully instantiated in any solution\n% and n is L-instantiated.\n% We depend on the properties of split.\n\n\\scheme|logo| relies on helpers \\scheme|exp2o| and\n\\scheme|repeated-mulo|.  \\scheme|exp2o| is a simplified version of\nexponentiation; given our binary representation of numbers,\nexponentiation using base two is particularly simple.\n\\mbox{\\scheme|(exp2o n '() q)|} satisfies $n = 2^q$; the more general\n\\mbox{\\scheme|(exp2o n b q)|} satisfies $n = (\\norm{b}+1)^q + r$ for\nsome \\scheme|r|, where \\scheme|q| is the largest such number and $0\n\\leq 2 \\cdot r < n$, provided that \\scheme|b| is length-instantiated\nand $\\norm{b}+1$ is a power of two.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define exp2o\n  (lambda (n b q)\n    (match-e `(,n ,q)\n      (`((1) ()))\n      (`(__ (1))\n       (>1o n)\n       (exist (s)\n         (splito n b s '(1))))\n      (`(__ (0 . ,q^))\n       (exist (b^)\n         (poso q^)\n         (<lo b n)\n         (appendo b `(1 . ,b) b^)\n         (exp2o n b^ q^)))\n      (`(__ (1 . ,q^))\n       (exist (nh b^ s)\n         (poso q^)\n         (poso nh)\n         (splito n b s nh)\n         (appendo b `(1 . 
,b) b^)\n         (exp2o nh b^ q^))))))\n\\end{schemedisplay}\n\n%% From Oleg:\n% nq = n^q where n is L-instantiated and q is fully instantiated\n\n\\mbox{\\scheme|(repeated-mulo n q nq)|} satisfies $nq = n^q$ provided\n\\scheme|n| is length-instantiated and \\scheme|q| is fully\ninstantiated.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define repeated-mulo\n  (lambda (n q nq)\n    (match-e q\n      (`() (== '(1) nq) (poso n))\n      (`(1) (== n nq))\n      (__\n        (>1o q)\n        (exist (q^ nq1)\n          (pluso q^ '(1) q)\n          (repeated-mulo n q^ nq1)\n          (mulo nq1 n nq))))))\n\\end{schemedisplay}\n\nThis simple \\scheme|logo| example shows that $14 = 2^3 + 6$.\n\n\\wspace\n\n\\noindent\\scheme|(run* (q) (logo '(0 1 1 1) '(0 1) '(1 1) q))| $\\Rightarrow$ \\scheme|`((0 1 1))|\n\n\\wspace\n\nA more sophisticated example of \\scheme|logo| is\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(run9 (s)\n  (exist (b q r)\n    (logo '(0 0 1 0 0 0 1) b q r)\n    (>1o q)\n    (== `(,b ,q ,r) s))) $\\Rightarrow$\n\\end{schemedisplay}\n\\nspace\n\\begin{schemeresponse}\n((() (_.0 _.1 . _.2) (0 0 1 0 0 0 1))\n ((1) (_.0  _.1 . _.2) (1 1 0 0 0 0 1))\n ((0 1) (0 1 1) (0 0 1))\n ((1 1) (1 1) (1 0 0 1 0 1))\n ((0 0 1) (1 1) (0 0 1))\n ((0 0 0 1) (0 1) (0 0 1))\n ((1 0 1) (0 1) (1 1 0 1 0 1))\n ((0 1 1) (0 1) (0 0 0 0 0 1))\n ((1 1 1) (0 1) (1 1 0 0 1))),\n\\end{schemeresponse}\n\n\\noindent which shows that:\n\n\\noindent$68 = 0^n + 68$ where $n$ is greater than one,\n\n\\nspace\n\n\\noindent$68 = 1^n + 67$ where $n$ is greater than one,\n\n\\nspace\n\n\\noindent$68 = 2^6 + 4$,\n\n\\nspace\n\n\\noindent$68 = 3^3 + 41$,\n\n\\nspace\n\n\\noindent$68 = 4^3 + 4$,\n\n\\nspace\n\n\\noindent$68 = 8^2 + 4$, \n\n\\nspace\n\n\\noindent$68 = 5^2 + 43$, \n\n\\nspace\n\n\\noindent$68 = 6^2 + 32$, and\n\n\\nspace\n\n\\noindent$68 = 7^2 + 19$.\n\nWe can define the exponentiation relation in terms of \\scheme|logo|.\n\n\\schemedisplayspace\n\\begin{schemedisplay}\n(define expo (lambda (b q n) (logo n b q '())))\n\\end{schemedisplay}\n\n\\noindent We can use \\scheme|expo| to show that three to the fifth\npower is $243$:\n\n\\wspace\n\n\\noindent \\scheme|(run* (q) (expo '(1 1) '(1 0 1) q))| $\\Rightarrow$ \\scheme|`((1 1 0 0 1 1 1 1))|.\n\n\\wspace\n\nThe code in this chapter demonstrates the difficulty of achieving\nrefutational completeness, even for relatively simple relations.\nBounding the sizes of terms is a very powerful technique for ensuring\ntermination, but can be tricky to apply.  The definitions in this\nchapter were derived from equations defining arithmetic operators, and\nfrom the design of hardware half-adders and full-adders.  
It would\nhave been extremely difficult to write this code from first\nprinciples.\n\n", "meta": {"hexsha": "05f97d059452586350854b80fa61604331e495ab", "size": 31393, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "arith.tex", "max_stars_repo_name": "holtzermann17/dissertation-single-spaced", "max_stars_repo_head_hexsha": "aca0e56a33916596c98709308342d9ccabd4718b", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2015-01-11T21:22:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-10T12:49:11.000Z", "max_issues_repo_path": "arith.tex", "max_issues_repo_name": "holtzermann17/dissertation-single-spaced", "max_issues_repo_head_hexsha": "aca0e56a33916596c98709308342d9ccabd4718b", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-08-08T18:10:18.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-09T02:33:25.000Z", "max_forks_repo_path": "arith.tex", "max_forks_repo_name": "holtzermann17/dissertation-single-spaced", "max_forks_repo_head_hexsha": "aca0e56a33916596c98709308342d9ccabd4718b", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-07-29T13:58:01.000Z", "max_forks_repo_forks_event_max_datetime": "2018-09-14T05:01:31.000Z", "avg_line_length": 31.4874623872, "max_line_length": 97, "alphanum_fraction": 0.6406523747, "num_tokens": 10605, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.743168019989179, "lm_q1q2_score": 0.5796657792874618}}
{"text": "\\lab{Facial Recognition}{Facial Recognition}\n\\label{lab:FacialRecognition}\n\n\\objective{\nFacial recognition algorithms attempt to match a person's portrait to a database of many portraits.\nFacial recognition is becoming increasingly important in security, law enforcement, artificial intelligence, and other areas.\nThough humans can easily match pictures to people, computers are beginning to surpass humans at facial recognition.\nIn this lab, we implement a basic facial recognition system that relies on eigenvectors and the SVD to efficiently determine the difference between faces.\n}\n\n\\section*{Preparing an Image Database} % ======================================\n\nThe \\texttt{faces94} face image dataset\\footnote{See \\url{http://cswww.essex.ac.uk/mv/allfaces/faces94.html}.} contains several photographs of 153 people, organized into folders by person.\nTo perform facial recognition on this dataset, select one image per person and convert these images into a database.\nFor this particular facial recognition algorithm, the entire database can be stored in just a few NumPy arrays.\n\nDigital images are stored on computers as arrays of pixels.\nTherefore, an $m \\times n$ image can be stored in memory as an $m\\times n$ matrix or, equivalently, as an $mn$-vector by concatenating the rows of the matrix.\nThen a collection of $k$ images can be stored as a single $mn\\times k$ matrix $F$, where each column of $F$ represents a single image.\nThat is, if\n\\[\nF = \\left[\\begin{array}{c|c|c|c}\n\\arrayrulecolor{lightgray}\n& & & \\\\\n\\f_1 & \\f_2 & \\cdots & \\f_k\n\\\\\n& & &\n\\end{array}\\right],\n\\]\nthen each $\\f_i$ is a $mn$-vector representing a single image.\n\nThe following function obtains one image for each person in the \\texttt{faces94} dataset and converts the collection of images into an $mn \\times k$ matrix $F$ described above.\n\n\\begin{lstlisting}\nimport os\nimport numpy as np\nfrom imageio import imread\n\ndef get_faces(path=\"./faces94\"):\n    # Traverse the directory and get one image per subdirectory.\n    faces = []\n    for (dirpath, dirnames, filenames) in os.walk(path):\n        for fname in filenames:\n            if fname[-3:]==\"jpg\":       # Only get jpg images.\n                # Load the image, convert it to grayscale,\n                # and flatten it into a vector.\n                faces.append(np.ravel(imread(dirpath+\"/\"+fname, as_gray=True)))\n                break\n    # Put all the face vectors column-wise into a matrix.\n    return np.transpose(faces)\n\\end{lstlisting}\n\n\\begin{problem} % show_face() function.\nWrite a function that accepts an image as a flattened $mn$-vector, along with its original dimensions $m$ and $n$.\nUse \\li{np.reshape()} to convert the flattened image into its original $m\\times n$ shape and display the result with \\li{plt.imshow()}.\n\\\\ (Hint: use \\li{cmap=\"gray\"} in \\li{plt.imshow()} to display images in grayscale.)\n\nUnzip the \\texttt{faces94.zip} archive and use \\li{get_faces()} to construct $F$.\nEach \\texttt{faces94} image is $200 \\times 180$, and there are $153$ people in the dataset, so $F$ should be $36000 \\times 153$.\nUse your function to display one of the images stored in $F$.\n\\label{prob:visualize-faces}\n\\end{problem}\n\n\\section*{The Eigenfaces Method} % ============================================\n\nWith the image database $F$, we could construct a simple facial recognition system with the following strategy.\nLet $\\g$ be an $mn$-vector representing an unknown 
face that is not part of the database $F$.\nThen the $\\f_i$ that minimizes $\\|\\g - \\f_i\\|_2$ is the matching face.\nUnfortunately, computing $\\|\\g - \\f_i\\|_2$ for each $i$ is very computationally expensive, especially if the images are high-resolution and/or the database contains a large number of images.\nThe \\emph{eigenfaces method} is a way to reduce the computational cost of finding the closest matching face by focusing on only the most important features of each face.\nBecause the method ignores less significant facial features, it is also usually more accurate than the na\\\"ive method.\n\nThe first step of the algorithm is to shift the images by the \\emph{mean face}.\nShifting a set of data by the mean exaggerates the distinguishing features of each entry.\nIn the context of facial recognition, shifting by the mean accentuates the unique features of each face.\nFor the image vectors stored in $F$, the mean face $\\boldsymbol{\\mu}$ is defined to be the element-wise average of the $\\f_i$:\n\\[\n\\boldsymbol{\\mu} = \\frac{1}{k}\\sum_{i=1}^k \\f_i.\n\\]\nHence, the $i$th mean-shifted face vector $\\bar{\\f}_i$ is given by\n\\[\n\\bar{\\f}_i = \\f_i - \\boldsymbol{\\mu}.\n\\]\nNext, define $\\bar{F}$ as the $mn \\times k$ matrix whose columns are given by the mean-shifted face vectors,\n\\[\n\\bar{F} = \\left[\\begin{array}{c|c|c|c}\n\\arrayrulecolor{lightgray}\n& & & \\\\\n\\bar{\\f}_1 & \\bar{\\f}_2 & \\cdots & \\bar{\\f}_k\n\\\\\n& & &\n\\end{array}\\right].\n\\]\n\n\\begin{figure}[H] % Mean face + shifted faces.\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/meanFace.png}\n    \\caption{The mean face.}\n    \\label{fig:facerec-meanface}\n\\end{subfigure}\n%\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/originalFace0.png}\n    \\caption{An original face.}\n    \\label{fig:facerec-original}\n\\end{subfigure}\n%\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/differenceFace0.png}\n    \\caption{A mean-shifted face.}\n    \\label{fig:facerec-shifted}\n\\end{subfigure}\n\\caption{}\n\\end{figure}\n\n\\begin{problem} % Shift by the mean.\n\\label{prob:meanFace}\nWrite a class called \\li{FacialRec} whose constructor accepts a path to a directory of images.\nIn the constructor, use \\li{get_faces()} to construct $F$, then compute the mean face $\\boldsymbol\\mu$ and the shifted faces $\\bar{F}$.\nStore each array as an attribute.\n\\\\(Hint: Both $\\boldsymbol\\mu$ and $\\bar{F}$ can be computed in a single line of code by using NumPy functions and/or array broadcasting.)\n\nUse your function from Problem \\ref{prob:visualize-faces} to visualize the mean face, and compare it to Figure \\ref{fig:facerec-meanface}.\nAlso display an original face and its corresponding mean-shifted face.\nCompare your results with Figures \\ref{fig:facerec-original} and \\ref{fig:facerec-shifted}.\n\\end{problem}\n\nTo increase computational efficiency and minimize storage, the face vectors can be represented with fewer values by projecting $\\bar{F}$ onto a lower-dimensional subspace.\nLet $s$ be a natural number such that $s < r$, where $r$ is the rank of $\\bar{F}$.\nBy projecting $\\bar{F}$ onto an $s$-dimensional subspace, each face can be stored with only $s$ values.\n\nSpecifically, let $U \\Sigma V\\hrm$ be the compact SVD of $\\bar{F}$ with rank $r$, 
which can also be represented by\n \\[\n\\bar{F}= \\sum_{i=1}^r \\sigma_i\\u_i\\v_i\\hrm.\n\\]\nThe first $r$ columns of $U$ form a basis for the range of $\\bar{F}$.\nRecall that the Schmidt, Mirsky, Eckart-Young Theorem states that the matrix\n\\[\n\\bar{F}_s = \\sum_{i=1}^s \\sigma_i\\u_i\\v_i\\hrm\n\\]\nis the best rank-$s$ approximation of $\\bar{F}$ for each $s < r$.\nThis means that $\\| \\bar{F} - \\bar{F}_s \\|$ is minimal among all $\\| \\bar{F} - B \\|$ where $B$ has rank $s$.\nAs a consequence of this theorem, the first $s$ columns of $U$ form a basis that provides the ``best'' $s$-dimensional subspace for approximating $\\bar{F}$.\n\nThe $s$ basis vectors $\\u_1, \\ldots, \\u_s$ are commonly called the \\emph{eigenfaces} because they are eigenvectors of $\\bar{F}\\bar{F}\\trp$ and because they resemble face images.\nEach original face image can be efficiently represented in terms of these eigenfaces.\nSee Figure \\ref{fig:facerec-eigenfaces} for visualizations of some of the eigenfaces for the \\texttt{faces94} data set.\n\n\\begin{figure}[H] % Eigenfaces.\n\\centering\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/eigenface0.png}\n\\end{subfigure}\n%\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/eigenface49.png}\n\\end{subfigure}\n%\n\\begin{subfigure}{.32\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/eigenface99.png}\n\\end{subfigure}\n\\caption{The first, 50th, and 100th eigenfaces.}\n\\label{fig:facerec-eigenfaces}\n\\end{figure}\n\nIn general, the lower eigenfaces provide more general information about a face, and higher-order eigenfaces provide the details necessary to distinguish particular faces \\cite{muller2004singular}.\nThese eigenfaces will be used to construct the face images in the dataset.\nThe more eigenfaces used, the more detailed the resulting image will be.\n\nNext, let $U_s$ be the matrix with the first $s$ eigenfaces as columns.\nSince the eigenfaces $\\{\\u_i\\}_{i=1}^s$ form an orthonormal set, $U_s$ is an orthonormal matrix (independent of $s$) and hence $U_s\\trp U_s = I$.\nThe matrix $P_s = U_sU_s\\trp$ projects vectors in $\\mathbb{R}^{mn}$ to the subspace spanned by the orthonormal basis $\\{\\u_i\\}_{i=1}^s$, and the change of basis matrix $U_s\\trp$ puts the projection in terms of the basis of eigenfaces.\nThus the projection $\\widehat{\\f}_i$ of $\\bar{\\f}_i$ in terms of the basis of eigenfaces is given by\n%\n\\begin{equation}\n\\widehat{\\f}_i = U_s\\trp P_s\\bar{\\f}_i = U_s\\trp U_s U_s\\trp \\bar{\\f}_i = U_s\\trp\\bar{\\f}_i.\n\\label{eq:facialrec-projection}\n\\end{equation}\n%\nNote carefully that though the shifted image $\\bar{\\f}_i$ has $mn$ entries, the projection $\\widehat{\\f}_i$ has only $s$ entries since $U_s$ is $mn\\times s$.\nLikewise, the matrix $\\widehat{F}$ that has the projections $\\widehat{\\f}_i$ as columns is $s \\times k$, and\n\\begin{equation}\n\\widehat{F} = U_s\\trp \\bar{F}.\n\\label{eq:facialrec-projectall}\n\\end{equation}\n\n\\begin{problem} % Compute U, implement project().\nIn the constructor of \\li{FacialRec}, calculate the compact SVD of $\\bar{F}$ and save the matrix $U$ as an attribute.\nCompare the computed eigenfaces (the columns of $U$) to Figure \\ref{fig:facerec-eigenfaces}.\n\nAlso write a method that accepts a vector of length $mn$ or an $mn\\times \\ell$ matrix, as well as an integer $s < mn$.\nConstruct $U_s$ by taking the first $s$ 
columns of $U$, then use (\\ref{eq:facialrec-projection}) or (\\ref{eq:facialrec-projectall}) to calculate the projection of the input vector or matrix onto the span of the first $s$ eigenfaces.\n\\\\(Hint: this method should be implemented with a single line of code.)\n\\label{prob:facialrec-project}\n\\end{problem}\n\nReducing the mean-shifted face image $\\bar{\\f}_i$ to the lower-dimensional projection $\\widehat{\\f}_i$ drastically reduces the computational cost of the facial recognition algorithm, but this efficiency gain comes at a price.\nA projection image only approximates the corresponding original image, but as long as $s$ isn't too small, the approximation is usually good enough for the algorithm to work well.\nBefore completing the facial recognition system, we reconstruct some of these projections to visualize the amount of information lost.\n\nFrom (\\ref{eq:facialrec-projection}), since $U_s\\trp$ projects $\\bar{\\f}_i$ and performs a change of basis to get $\\widehat{\\f}_i$, its transpose $U_s$ puts $\\widehat{\\f}_i$ back into the original basis with as little error as possible.\nThat is,\n\\[\nU_s \\widehat{\\f}_i \\approx \\bar{\\f}_i = \\f_i - \\boldsymbol{\\mu},\n\\]\nso that we have the approximation\n\\begin{equation}\n\\widetilde{\\f}_i = U_s \\widehat{\\f}_i + \\boldsymbol{\\mu} \\approx \\f_i.\n\\label{eq:facialrec-reconstruct}\n\\end{equation}\nThis $\\widetilde{\\f}_i$ is called the \\emph{reconstruction} of $\\f_i$.\n\n\\begin{figure}[H]\n\\begin{subfigure}{0.32\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/rebuiltThirtySecond.png}\n    \\caption{A reconstruction with $s=5$.}\n\\end{subfigure}\n\\begin{subfigure}{0.32\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/rebuiltEighth.png}\n    \\caption{A reconstruction with $s=19$.}\n\\end{subfigure}\n\\begin{subfigure}{0.32\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/rebuiltHalf.png}\n    \\caption{A reconstruction with $s=75$.}\n\\end{subfigure}\n\\caption{An image rebuilt with various numbers of eigenfaces. 
The image is already recognizable when it is reconstructed with only 19 eigenfaces, less than an eighth of the 153 eigenfaces corresponding to nonzero eigenvalues of $\\bar{F}\\bar{F}\\trp$.\nNote the similarities between this method and regular image compression via the truncated SVD.}\n\\label{fig:rebuiltImage}\n\\end{figure}\n\n\\begin{problem}\nInstantiate a \\li{FacialRec} object that draws from the \\texttt{faces94} dataset.\nSelect one of the shifted images $\\bar{\\f}_i$.\nFor at least 4 values of $s$, use your method from Problem \\ref{prob:facialrec-project} to compute the corresponding $s$-projection $\\widehat{\\f}_i$, then use (\\ref{eq:facialrec-reconstruct}) to compute the reconstruction $\\widetilde{\\f}_i$.\nDisplay the various reconstructions and the original image.\nCompare your results to Figure \\ref{fig:rebuiltImage}.\n\\end{problem}\n\n\\section*{Matching Faces} % ===================================================\n\nLet $\\g$ be a vector representing an unknown face that is not part of the database.\nWe determine which image in the database is most like $\\g$ by comparing $\\widehat{\\g}$ to each of the $\\widehat{\\f}_i$.\nFirst, shift $\\g$ by the mean to obtain $\\bar{\\g}$, then project $\\bar{\\g}$ using a given number of eigenfaces:\n\\begin{equation}\n    \\widehat{\\g} = U_s\\trp\\bar{\\g} = U_s\\trp(\\g - \\boldsymbol{\\mu})\n    \\label{eq:facialrec-unknownface}\n\\end{equation}\n\nNext, determine which $\\widehat{\\f}_i$ is closest to $\\widehat{\\g}$.\nSince the columns of $U_s$ are an orthonormal basis, the computation in this basis yields the same result as computing in the standard Euclidean basis would.\nThen setting\n\\begin{equation}\nj = \\mathop{{\\text{argmin}}\\vphantom{\\sim}}\\limits_{\\displaystyle _{i}} \\|\\widehat{\\f}_i - \\widehat{\\g}\\|_2, % This makes i below the argmin\n\\label{eq:facialrec-findnearest}\n\\end{equation}\nwe have that the $j$th face image $\\f_j$ is the best match for $\\g$.\nAgain, since $\\widehat{\\f}_i$ and $\\widehat{\\g}$ only have $s$ entries, the computation in (\\ref{eq:facialrec-findnearest}) is much cheaper than comparing the raw $\\f_i$ to $\\g$.\n\n\\begin{problem} % Find nearest.\nWrite a method for the \\li{FacialRec} class that accepts an image vector $\\g$ and an integer $s$.\nUse your method from Problem \\ref{prob:facialrec-project} to compute $\\widehat{F}$ and $\\widehat{\\g}$ for the given $s$, then use (\\ref{eq:facialrec-findnearest}) to determine the best matching face in the database.\nReturn the index of the matching face.\n\\\\(Hint: \\li{scipy.linalg.norm()} and \\li{np.argmin()} may be useful.)\n\\label{prob:facialrec-nearest}\n\\end{problem}\n\n\\begin{info}\nThis facial recognition system works by solving a \\emph{nearest neighbor search}, since the goal is to find the $\\f_i$ that is ``nearest'' to the input image $\\g$.\nNearest neighbor searches can be performed more efficiently with the use of a \\emph{$k$-d tree}, a binary search tree for storing vectors.\nThe system could also be called a \\emph{$k$-neighbors classifier} with $k=1$.\n% In this context, a $k$-neighbors classifier searches for the $k$ faces that are closest to $\\g$ and use these ``neighboring'' faces to decide how to classify $\\g$.\n\\end{info}\n\n\\begin{problem}\nWrite a method for the \\li{FacialRec} class that accepts a flattened image vector $\\g$, an integer $s$, and the original dimensions of $\\g$.\nUse your method from Problem \\ref{prob:facialrec-nearest} to find the index $j$ of the best matching face, 
then display the original face $\\g$ alongside the best match $\\f_j$.\n\nThe following generator yields random faces from \\texttt{faces94} that can be used as test cases.\n%\n\\begin{lstlisting}\nimport os\n\nimport numpy as np\nfrom imageio import imread      # any image reader with a grayscale option works\n\ndef sample_faces(num_faces, path=\"./faces94\"):\n    # Get the list of possible images.\n    files = []\n    for (dirpath, dirnames, filenames) in os.walk(path):\n        for fname in filenames:\n            if fname[-3:]==\"jpg\":       # Only get jpg images.\n                files.append(dirpath+\"/\"+fname)\n\n    # Get a subset of the image names and yield the images one at a time.\n    test_files = np.random.choice(files, num_faces, replace=False)\n    for fname in test_files:\n        yield np.ravel(imread(fname, as_gray=True))\n\\end{lstlisting}\n%\nThe \\li{yield} keyword is like a \\li{return} statement, but the next time the generator is called, it will resume immediately after the last \\li{yield} statement.%\n\\footnote{See the Python Essentials lab on Profiling for more on generators.}\n\nUse \\li{sample_faces()} to get at least 5 random faces from \\texttt{faces94}, and match each random face to the database with $s=38$.\nIterate through the random faces with the following syntax.\n%\n\\begin{lstlisting}\nfor test_image in sample_faces(5):\n    # 'test_image' is now a flattened face vector.\n\\end{lstlisting}\n\\label{prob:facialrec-match}\n\\end{problem}\n\nAlthough there are other approaches to facial recognition that utilize more complex techniques, the method of eigenfaces remains a wonderfully simple and effective solution.\n\n\\newpage\n\n\\section*{Additional Material} % ==============================================\n\n\\subsection*{Improvements on the Facial Recognition System with Eigenfaces} % -\n\nThe \\li{FacialRec} class does its job well, but it could be improved in several ways.\nHere are a few ideas.\n\\begin{itemize}\n    \\item The most computationally intensive part of the algorithm is computing $\\widehat{F}$.\n    Instead of recomputing $\\widehat{F}$ every time the method from Problem \\ref{prob:facialrec-nearest} is called, store $\\widehat{F}$ and $s$ as attributes the first time the method is called.\n    In subsequent calls, only recompute $\\widehat{F}$ if the user specifies a different value for $s$.\n\n    \\item Load a \\li{scipy.spatial.KDTree} object with $\\widehat{F}$ and use its \\li{query()} method to compute (\\ref{eq:facialrec-findnearest}).\n    Building a $k$-d tree is expensive, so be sure to only build a new tree when necessary (i.e., the user specifies a new value for $s$); a short sketch combining this idea with the previous one appears after this list.\n\n    \\item Include an error tolerance $\\epsilon$ in the method for Problem \\ref{prob:facialrec-nearest}.\n    If $\\|\\f_j - \\g\\| > \\epsilon$, print a message or raise an exception to indicate that there is no suitable match for $\\g$ in the database.\n    In this case, add $\\g$ to the database for future reference.\n\n    \\item Generalize the system by turning it into a $k$-neighbors classifier.\n    In the constructor, add several faces per person to the database (this requires modifying \\li{get_faces()}).\n    Assign each individual a unique ID so that the system knows which faces correspond to the same person.\n    Modify the method from Problem \\ref{prob:facialrec-nearest} so that it also accepts an integer $k$, then use \\li{scipy.spatial.KDTree} to find the $k$ nearest images to $\\widehat{\\g}$.\n    Choose the ID that appears most often among these neighbors, then return an index that corresponds to an individual with that ID.\n\n    In other words, choose the 
$k$ faces $\\widehat{\\f}_i$ that give the smallest values of $\\|\\widehat{\\f}_i - \\widehat{\\g}\\|_2$.\n    These faces then get to vote on which person $\\g$ belongs to.\n\n    \\item Improve the user interface of the class by modifying the method from Problem \\ref{prob:facialrec-match} so that it accepts a file name to read from instead of an array.\n    A few lines of code from \\li{get_faces()} or \\li{sample_faces()} might be helpful for this.\n\\end{itemize}\n\n
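As a minimal sketch of the first two ideas, consider the following method (illustrative only; the attribute names \\li{Fbar} and \\li{mu} and the \\li{project()} helper are hypothetical stand-ins for whatever your class actually stores):\n%\n\\begin{lstlisting}\nfrom scipy import spatial\n\ndef find_nearest_cached(self, g, s):\n    # Rebuild the projection and the k-d tree only when s changes.\n    if getattr(self, \"_s\", None) != s:\n        self._s = s\n        self._Fhat = self.project(self.Fbar, s)    # s x k matrix of projections\n        self._tree = spatial.KDTree(self._Fhat.T)  # one row per face\n    ghat = self.project(g - self.mu, s)\n    _, j = self._tree.query(ghat)                  # index of the nearest face\n    return j\n\\end{lstlisting}\n%\nHere \\li{query()} performs the nearest neighbor search of (\\ref{eq:facialrec-findnearest}) without explicitly comparing $\\widehat{\\g}$ to every column of $\\widehat{F}$.\n\n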
\\subsection*{Other Methods for Facial Recognition} % --------------------------\n\nThe method of facial recognition presented here is more formally called \\emph{principal component analysis (PCA) using eigenfaces}.\nSeveral other machine learning and optimization techniques, such as linear discriminant analysis (LDA), elastic matching, dynamic link matching, and hidden Markov models (HMMs) have also been applied to the facial recognition problem.\nOther techniques focus on getting better information about the faces in the first place, the most prevalent being 3-dimensional recognition and thermal imaging.\nSee \\url{https://en.wikipedia.org/wiki/Facial_recognition_system} for a good survey of different approaches to the facial recognition problem.\n\n\\begin{comment}\n% This proof is not needed, the Schmidt-Eckart-Young-Mirsky Theorem is sufficient to show that the first s columns of U in the SVD provide the best basis for the s-dimensional subspace.\n\\subsection*{The Proof: SVD as a Least Squares Solution} % --------------------\n\n\\begin{theorem}\nLet $\\f_1, \\ldots, \\f_k$ be vectors in $\\mathbb{R}^{mn}$, and let $\\bar{F} = [\\bar{\\f}_1 \\; \\ldots \\; \\bar{\\f}_k]$. Suppose $U\\Sigma V\\trp$ is the SVD for $\\bar{F}$. Then the $s$-dimensional subspace that solves the least squares problem for $\\f_1, \\ldots, \\f_k$ is the span of the first $s$ columns of $U$. If $U_s$ is the first $s$ columns of $U$, then the matrix $U_sU_s\\trp$ is the projection onto this subspace.\n\\end{theorem}\n\\begin{proof}\nWe seek a rank-$s$ projection matrix $P_s$ so that $\\sum_{i=1}^k \\|P_s\\bar{\\f}_i - \\bar{\\f}_i\\|_2^2$ is minimized---i.e.,\n the sum of the squares of the ``errors'' is minimal when we project $\\bar{\\f}_i$ via $P_s$.\n This sum of squares is precisely the square of the Frobenius norm of $P_s\\bar{F} - \\bar{F}$, and minimizing the squared Frobenius norm is the same as minimizing the Frobenius norm itself.\nWritten mathematically,\n \\begin{align*}\n\\inf_{\\text{rank}(P_s)=s} \\sum_{i=1}^k \\|P_s\\bar{\\f}_i - \\bar{\\f}_i\\|_2^2 =  \\inf_{\\text{rank}(P_s)=s} \\| P_s\\bar{F}-\\bar{F}\\|_F^2.\n \\end{align*}\n\nNow let $U \\Sigma V\\trp$ be an SVD of $\\bar{F}$ with $\\u_i$ the columns of $U$, $\\v_i$ the columns of $V$, and $\\sigma_i$ the singular values of $\\bar{F}$.\nIf $P_s = \\sum_{i=1}^s \\u_i \\u_i\\trp$, then\n\\begin{align*}\nP_s\\bar{F} &=  \\left( \\sum_{i=1}^s \\u_i \\u_i\\trp \\right)\\left(  \\sum_{j=1}^k \\sigma_j \\u_j \\v_j\\trp \\right)\n= \\sum_{i=1}^s\\sum_{j=1}^k \\sigma_j \\u_i\\u_i\\trp\\u_j\\v_j\\trp\\\\\n&=  \\sum_{i=1}^s\\sum_{j=1}^k \\sigma_j \\u_i\\delta_{ij}\\v_j\\trp\n= \\sum_{i=1}^s \\sigma_i \\u_i\\v_i\\trp.\n\\end{align*}\n\nIn fact, the Schmidt-Eckart-Young-Mirsky Theorem tells us that $X = \\sum_{i=1}^s \\sigma_i \\u_i\\v_i\\trp$ is exactly the rank-$s$ matrix that minimizes $\\|X - \\bar{F}\\|_F$.\nSince $P_s \\bar{F}$ will always have rank $s$ or less, the projection $P_s =  \\sum_{i=1}^s \\u_i \\u_i\\trp$ is the one we seek.\nIf we let $U_s = [ \\u_1\\; \\ldots \\; \\u_s]$, then we may write $P_s = U_sU_s\\trp$. Notice that $P_s$ is the projection onto the subspace spanned by the columns of $U_s$.\n\\end{proof}\n\n\\end{comment}\n", "meta": {"hexsha": "b9f9a9f878a7139dcc1083f6cf869cb1065a42d6", "size": 22557, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Volume1/FacialRecognition/FacialRecognition.tex", "max_stars_repo_name": "chrismmuir/Labs-1", "max_stars_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 190, "max_stars_repo_stars_event_min_datetime": "2015-07-17T01:57:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T19:16:19.000Z", "max_issues_repo_path": "Volume1/FacialRecognition/FacialRecognition.tex", "max_issues_repo_name": "chrismmuir/Labs-1", "max_issues_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-07-16T17:56:06.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-06T23:47:14.000Z", "max_forks_repo_path": "Volume1/FacialRecognition/FacialRecognition.tex", "max_forks_repo_name": "chrismmuir/Labs-1", "max_forks_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 76, "max_forks_repo_forks_event_min_datetime": "2015-08-06T02:53:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T11:08:57.000Z", "avg_line_length": 57.8384615385, "max_line_length": 414, "alphanum_fraction": 0.7228798156, "num_tokens": 6398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7799928900257126, "lm_q1q2_score": 0.5796657628123029}}
{"text": "\\problemname{Interview Queue}\n\nA very popular position has just opened up at Poodle Inc. Candidates have formed\na queue while they wait for their turn to be interviewed.\n\nCompetition is fierce and each candidate knows that they will not be selected if another candidate is strictly better than them. Every minute, each candidate looks at the resum\\'{e} of candidates who are currently adjacent to them in the queue (ahead and behind). If at least one of the neighbour's resum\\'{e}'s perceived value is strictly greater than their resum\\'{e}'s perceived value, they leave the queue since they do not want to waste their time. The candidates all look at their neighbor's resum\\'{e} simultaneously, and then some candidates leave the queue simultaneously.\n\nThis process continues until no more candidates leave the queue. Determine the number of minutes that pass while this process takes place. Report the values of the candidates that leave the queue in each round. Also report the final state of the queue.\n\n\\section*{Input}\n\nThe first line of input contains a single integer $N$ ($1 \\leq N \\leq 100\\,000$), which is the number of candidates.\n\nThe second line contains $N$ integers $v_1, \\ldots, v_N$ ($0 \\leq v_i \\leq 10^9$ for each $1 \\leq i \\leq N$), where $v_i$ is the perceived value of the $i^\\textrm{th}$ candidate.\n\n\\section*{Output}\nDisplay $M$, the number of minutes taken by this process. Then display $M$ lines. The $i^\\textrm{th}$ line should contain the perceived values of the candidates who left the queue in the $i^\\textrm{th}$ minute in the same relative order that they appeared in the queue. Finally display a line indicating the final list of perceived values in the queue after candidates no longer leave it.\n", "meta": {"hexsha": "f5c5512d69ad64b72faa33f6bede04992248314d", "size": 1731, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/interviewqueue/problem_statement/problem.tex", "max_stars_repo_name": "icpc/na-rocky-mountain-2020-public", "max_stars_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-11T21:49:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-19T22:31:57.000Z", "max_issues_repo_path": "problems/interviewqueue/problem_statement/problem.tex", "max_issues_repo_name": "icpc/na-rocky-mountain-2020-public", "max_issues_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/interviewqueue/problem_statement/problem.tex", "max_forks_repo_name": "icpc/na-rocky-mountain-2020-public", "max_forks_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-03-11T18:15:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-24T00:15:32.000Z", "avg_line_length": 96.1666666667, "max_line_length": 581, "alphanum_fraction": 0.7729636049, "num_tokens": 413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7799928900257126, "lm_q1q2_score": 0.5796657539385591}}
{"text": "\n\n\\tikzstyle{e0}[0]=[dotted,bend right=#1]\n\\tikzstyle{e1}[0]=[solid, bend left =#1]\n\\defmath\\hv{{\\hat v}}\n\n\\tikzset{every node/.style={initial text={}, inner sep=2pt, outer sep=0}}\n\n\\section{The \\limdd data structure}\n\\label{sec:isomorphism-qmdd}\n\n\n\\begin{wrapfigure}{r}{7cm} \n%\\begin{figure}\n\\vspace{-2.5em}\n\\begin{empheq}[box={\\Garybox[Domains of variables]}]{align*}\n%\\begin{align*}\n%\\Aboxed{\n%\\tikzmark{A}\n    n, i , j, k                 &\\in \\mathbb N          && \\text{indices} \\\\\n    v, w,u                      &\\in \\Node     && \\text{\\limdd nodes}\\\\\n    e, f                           &\\in \\Edge     && \\text{\\limdd edges}\\\\\n    L, M                        &\\in \\textnormal{DD} && \\limdd \\text{(diagrams)} \\\\ \n    \\ket\\phi, \\ket\\psi                  &\\in \\mathbb C^{\\otimes n}  && \\text{$n$-qubit states}\\\\\n    \\alpha, \\beta, \\lambda,\\mu  &\\in \\mathbb C          && \\text{complex factors}\\\\\n    P, Q, R                     &\\in \\Pauli^{\\otimes n}&& \\text{\\pauli strings}\\\\\n    A, B                        &\\in \\mathbb C \\times  \\Pauli^{\\otimes n}\n                                      & & \\text{Pauli LIMs}~~~~~~~\n%\\tikzmark{B}\n%}\n\\end{empheq}\n%\\tikz[remember picture,overlay]\\draw(A.north west)rectangle(B.south east);\n%\\end{figure}\n\\vspace{-1em}\n\\end{wrapfigure}\n\n%The figure on the right lists the default domains of variables used in this paper.\n\nWhere \\qmdds only merge nodes representing the same state up to a constant factor, the \\limdd data structure goes further by also merging nodes that are equivalent up to local operations, called Local Invertible Maps (LIMs) (see \\autoref{def:isomorphism}).\nAs a result, \\limdds can be exponentially more succinct than \\qmdds, including some stabilizer states (see \\autoref{sec:exponential-separations}).\n    We will call nodes which are equivalent under LIMs, \\emph{(LIM-) isomorphic}.\nThis definition generalizes SLOCC equivalence; in particular, if we choose the parameter $G$ to be the linear group, then the two notions coincide (Appendix A of \\cite{dur2000three})~\\cite{bennett1996concentrating,chitambar2014everything}.\n\n\n\\begin{definition}[LIM, Isomorphism]\n\t\\label{def:isomorphism}\n    A $n$-qubit Local Invertible Map (LIM) is an operator $\\mathcal{O}$ of the form $\\mathcal{O}=\\lambda \\mathcal{O}_n\\otimes\\cdots\\otimes \\mathcal{O}_1$, where the matrices $\\mathcal{O}_i$ are invertible $2\\times 2$ matrices and $\\lambda\\in\\mathbb{C}\\setminus \\{0\\}$.\n    An \\emph{isomorphism} between two $n$-qubit quantum states $\\ket{\\phi},\\ket{\\psi}$ is a LIM $\\mathcal{O}$ such that $\\mathcal{O}\\ket{\\phi} = \\ket{\\psi}$.\n    If $G$ is a group of $2\\times 2$ invertible matrices and if all $\\mathcal{O}_i \\in G$, then we say that $\\mathcal{O}$ is a $G$-isomorphism and that  $\\ket{\\phi}$ is $G$-isomorphic to $\\ket{\\psi}$, denoted $\\ket{\\phi}\\simeq_G\\ket{\\psi}$.\n\\end{definition}\n\n\nBefore we give the formal definition of \\limdds in \\autoref{def:limdd}, we give a motivating example in Figure~\\ref{fig:qmdd-isoqmdd-exposition} which demonstrates how the use of isomorphisms can yield small diagrams for a four-qubit state.\n%Here a $4$-qubit state is represented in four different ways.\nIn the state's \\qmdd, the nodes $c_3$ and $c_4$ represent the two-qubit state vectors $\\ket{c_3}=\\left[1,1,1,-1  \\right]^\\dagger$ and $\\ket{c_4}=\\left[1,-1,\\omega,-\\omega  \\right]^\\dagger$.\nBy noticing 
that these two vectors are related via $\\ket{c_4}=(T\\otimes Z)\\ket{c_3}$, we may discard the node $c_4$ and store the isomorphism $T\\otimes Z$ instead.\n%The state $\\ket{c_4}$ can then be recovered from $\\ket{c_3}$ by computing $\\ket{c_4}=(T\\otimes Z)\\ket{c_3}$.\n% todo LV: ze hebben allemaal andere isomorphisms.\nIn fact, all the nodes on the \\qmdd's third level (i.e., the grandchildren nodes of the root node) are related to each other through such isomorphisms.\nA similar reduction in size can be achieved at the \\qmdd's second level, by noticing that nodes $c_1$ and $c_2$ are related via $\\ket{c_2}=Z\\otimes I\\otimes Z\\ket{c_1}$.\nThe resulting data structure is a \\limdd of only five nodes instead of ten.\n\\autoref{sec:exponential-separations} shows that discarding isomorphic nodes sometimes leads to exponentially smaller diagrams, while the additional cost of storing the isomorphisms is only polynomial.\n\n\\begin{figure}[bh!]\n\t\\includegraphics[width=\\textwidth]{pics/qmdd-isoqmdd-example-4-qubits-new-notation.pdf}\n\t\\caption{A four-qubit quantum state shown as: an amplitude vector (left), an \\add, a \\qmdd and a \\limdd (right). Diagram nodes are horizontally ordered in `levels' with qubit indices ${4,3,2,1}$.}\n\t\\label{fig:qmdd-isoqmdd-exposition}\n\\end{figure}\n\n%The formal definition of \\limdd is as follows.\n\n\\tikzset{every node/.style={initial text={}, inner sep=2pt, outer sep=0}}\n\n\n\\begin{definition}[$G$-\\limdd]\n\t\\label{def:limdd}\n    An $n$-\\glimdd is a rooted, directed acyclic graph (DAG) which represents an $n$-qubit quantum state (or matrix).\n    Formally, a \\glimdd is a $6$-tuple $(\\Node\\cup \\{\\leaf\\}, \\index,\\Low,\\High,\\lbl,e^r)$,\n    where:\n\\begin{itemize}\n\t\\item $\\Node$ is a set of nodes with qubit indices $\\index(v) \\in [n]$ for $v\\in \\Node$;\n\t\\item $\\Low,\\High \\colon \\Node \\to \\Node\\cup \\{\\leaf\\}$ are the low  and high edge functions;\n%\t\\raisebox{-1mm}{\\scalebox{.7}{\\tikz{\n%\t\t\\node[state,minimum size=.2cm] (1) {$v$};\n%\t\t\\node (-a1) [above left  =.1cm and .3cm of 1] {};\n%\t\t\\node (-b1) [above right  =.1cm and .3cm of 1] {};\n%\t\t\\node (1a-) [below left =.1cm and .3cm of 1] {};\n%\t\t\\node (1b-) [below right = .1cm and .3cm of 1] {};\n%\t\t\n%\t\t\\path[]\n%\t\t(-a1) edge node {} (1)\n%\t\t(-b1) edge node {} (1)\n%\t\t(1) edge node {} (1a-)\n%\t\t(1) edge node {} (1b-);\n%\t}}}.\n\t\\item $\\lbl\\colon \\Low\\cup \\High\\to G$-$\\LIM\\cup\\set 0$ is a function labeling edges with LIMs or $0$;\n%       \\raisebox{-1mm}{\\scalebox{.7}{\\tikz{\n%            \\node[] (1) {};\n%            \\node [below = .5 of 1] (2) {};\n%            \\path[] \n%                (1) edge node[right] {$ (\\lambda, A_1,\\ldots, A_n)$} (2);\n%        }}}, and\n\t\\item a root edge $e^r$ without source pointing to root node $r\\in \\Node$;\n%\t\t\\raisebox{-1mm}{\\scalebox{.7}{\\tikz{\n%\t\t\t\\node[state,initial,minimum size=.2cm] (r) {$r$}; \n%\t\t\t\\node[below left= .1cm and .3cm of r] (1) {};\n%\t\t\t\\node [below right= .1cm and .3cm of r] (2) {};\n%\t\t\t\\path[] \n%\t\t\t(r) edge [dotted] node {} (1)\n%\t\t\t(r) edge [dotted] node {} (2);\n%\t\t}}},\n    \\item a unique leaf node $\\leaf$ (a sink) with label $\\index(s) = 0$ \n    representing the number $1$;\n%        \\raisebox{-.5mm}{\\scalebox{.7}{\\tikz{\n%            \\node[draw, rectangle,minimum size=.2cm] (s) {$1$}; \n%            \\node[above left= .1cm and .3cm of s] (1) {};\n%            \\node[above= .2cm of s] (a) {};\n%            
\\node [above right= .1cm and .3cm of s] (2) {};\n%            \\path[] \n%                (1) edge [dotted] node {} (s)\n%                (a) edge [dotted] node {} (s)\n%                (2) edge [dotted] node {} (s);\n%        }}},\n%\\item If an edge has label $0$, then it points to the leaf node.\n%    Otherwise, if $v$ is a node with children $v_0, v_1$, then $\\index(v_0) = \\index(v_1) = \\index(v)-1$;\n%        \\todo[inline]{note: this used to be the Zero Edges Rule before}\n\\end{itemize}\nDepending on context, we interpret $\\lbl(\\low v)$ as a node $v_0$ or edge  $(v, v_0)$, etc.\nWe define the semantics of a (non-leaf) node $v$ and edge $e=(w,v)$  by overloading the Dirac notation:\n\\begin{align*}\n\\ket{e} & \\defn \\begin{cases}\n    \\lambda \\cdot  (\\mathcal{Q}_{\\index(v)}\\otimes \\cdots \\otimes \\mathcal{Q}_1) \\cdot \\ket v & \\text{if $\\lbl(e) = (\\lambda,  \\mathcal{Q}_1, \\ldots , \\mathcal{Q}_{\\index(v)})$} \\\\\n\t[0, \\dots,0]^\\dagger & \\text{if $\\lbl(e)=0$} \\end{cases} \\\\\n\\ket{v}     & \\defn \\ket{0}\\otimes\\ket{\\low v}+ \\ket{1}\\otimes \\ket{\\high v}\n\\end{align*}\n\\end{definition}\n\nThe coefficient $\\langle x | e \\rangle$ for bitstring $x \\in \\{0, 1\\}^n$ \nof a \\limdd with root edge $e$ representing an $n$-qubit state $\\ket e$ \nis read by traversing the \\limdd from top to bottom according to \\autoref{def:limdd}\n(i.e., pushing down the LIMs).\nIt is best illustrated by example, e.g., reading the amplitude for $1111$ in the \\limdd of\n\\autoref{fig:qmdd-isoqmdd-exposition}. The LIM on the root edge is the identity, so we can simply follow the 1-edge to get a new root edge with LIM $P_3\\otimes P_2 \\otimes P_1 =Z\\otimes \\id \\otimes Z$~to~$d_1$. \nSince the most significant operator ($P_3$) is a $Z$, we multiply the LIM on the 1-edge of $d_1$ (which has $\\index(d_1)=3$) with $-1$ and the remainder of the LIM ($\\id \\otimes Z$),\nyielding $ - \\omega T \\otimes \\id$, etc. Eventually the leaf is reached, when only a factor remains (= the sought amplitude).\nIf we instead encountered an $X$ (or $Y$) as $P_3$, we would also have to switch the high (1)\nand the low (0) edge, thus taking the 0- instead of 1-edge (and multiplying by $-i$ for $P_3 = Y$).\nFor the choice $G=\\Pauli$, this is formalized in the \\follow{}{} procedure given in \\autoref{sec:quantum-simulation}.\n\n
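To make this traversal concrete, the following Python sketch reads a single amplitude by pushing the root LIM down one level at a time. It is purely illustrative: the node and edge encoding (a \\texttt{Node} class with labels given as pairs of a scalar and a list of $2\\times 2$ Pauli matrices) is hypothetical, and the paper's actual \\follow{}{} procedure appears in \\autoref{sec:quantum-simulation}.\n\\begin{lstlisting}\nimport numpy as np\n\nI2 = np.eye(2, dtype=complex)\nX = np.array([[0, 1], [1, 0]], dtype=complex)\nY = np.array([[0, -1j], [1j, 0]])\nZ = np.diag([1.0, -1.0]).astype(complex)\n\nclass Node:\n    # Children are Node or None (None encodes the leaf). Edge labels are\n    # pairs (scalar, [2x2 Pauli matrices]), most significant qubit first.\n    def __init__(self, low, low_lim, high, high_lim):\n        self.low, self.low_lim = low, low_lim\n        self.high, self.high_lim = high, high_lim\n\ndef amplitude(lim, node, x):\n    \"\"\"Return <x| lim |node> for a bit tuple x, most significant bit first.\"\"\"\n    lam, paulis = lim\n    if lam == 0:                  # a 0-labelled edge contributes nothing\n        return 0j\n    if node is None:              # the leaf represents the scalar 1\n        return lam\n    P, b = paulis[0], x[0]\n    if P[b, b] != 0:              # I or Z: stay on branch b, pick up a sign\n        phase, branch = P[b, b], b\n    else:                         # X or Y: the low and high branches swap\n        phase, branch = P[b, 1 - b], 1 - b\n    child, (mu, qs) = ((node.low, node.low_lim) if branch == 0\n                       else (node.high, node.high_lim))\n    # Push the remaining LIM through the chosen edge label (Paulis multiply).\n    rest = [A @ B for A, B in zip(paulis[1:], qs)]\n    return amplitude((lam * phase * mu, rest), child, x[1:])\n\n# Example: |00> + |11> as a two-node diagram (the root reuses v1 via an X).\nv1 = Node(None, (1, []), None, (0, []))\nroot = Node(v1, (1, [I2]), v1, (1, [X]))\nassert amplitude((1, [I2, I2]), root, (1, 1)) == 1\n\\end{lstlisting}\n\n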
Let us now summarize the representation and manipulation capabilities of the \\limdd data structure.\nA $G$-\\limdd is exact and universal, i.e., for each $n$-qubit quantum state $\\ket\\phi$ there is a \\limdd with root edge $e$ such that $\\ket e = \\ket\\phi$, for any choice of parameter $G$.\nIn particular, a \\glimdd with $G=\\set{\\mathbb I}$ captures all \\qmdds by definition.\nAs all groups $G$ contain the identity operator $\\mathbb I$, the universality of \\limdds follows from the universality of QMDDs.\nFurthermore, for the choice $G=\\textsf{Pauli}$, the states that \\limdds can represent using polynomial space include all stabilizer states, which is a feature that \\qmdds do not possess, as shown in \\autoref{sec:exponential-separations}.\nFinally, $\\textsf{Pauli}$-\\limdds can also \\emph{manipulate} and measure quantum states, thereby enabling the simulation of quantum circuits, as shown in \\autoref{sec:simulation}. \nFor many operations, we show that the manipulation is also efficient, i.e., it takes polynomial time in the size of the \\limdd representation of the quantum state/circuit.\nSpecifically, \\limdds are often faster than \\qmdds, and never slower than a multiplicative factor $O(n^3)$.\n\nFrom here on, we will focus on the choice $G=\\Pauli$, omitting the prefix $\\Pauli$- in front of $\\limdd$ unless it is clear that we mean otherwise, and hence write $\\simeq$ to mean $\\simeq_{\\textnormal{Pauli}}$.\n\n%From here on, we will focus on the choice $G=\\Pauli$, omitting the prefix $\\Pauli$- in front of $\\limdd$ unless it is clear that we mean otherwise, and hence write $\\simeq$ to mean $\\simeq_{\\textnormal{Pauli}}$.\n%For $\\Pauli$-\\limdds representing an $n$-qubit state $\\ket{\\phi}$, reading the coefficient $\\langle x | \\phi\\rangle$ of bitstring $x \\in \\{0, 1\\}^n$ is done by traversing the \\limdd from top to bottom: at a node with incoming edge label $\\lambda P_n \\otimes \\dots \\otimes P_n$, following the $x_k$-edge  if $P_k\\ket{x_k} \\propto \\ket{x_k}$ (e.g. the $0$-edge when $x_k = 0$ and $P_k \\in \\{\\id[2], Z\\}$) and the $(1 - x_k)$-edge if $P_k \\ket{x_k} \\propto \\ket{1 - x_k}$ instead (e.g. the $1$-edge when $x_k = 0$ and $P_k \\in \\{X, Y\\}$).\n%The coefficient is the product of the factors $\\lambda \\cdot \\langle x_k | P | x_k\\rangle$ of the traversed edge labels $\\lambda P$.\n%For a general group $G$, the coefficients are not encoded in a single path from root to leaf.\n\n%\\todo[inline]{Vedran: ok. One may wonder: this is all ok, but how do i compute the amplitude now? It was easy in the QMDD case, I follow the path and multiply numbers...\n%\tFor intuition this is missing for me.\n%\n%Lieuwe: So do we add an explanation how to compute amplitudes?\n%\n%Tim: above, I added some text for the case $G=Pauli$, please check.\n%}\n\nIn general, there are many different $\\Pauli$-\\limdds which represent a given quantum state.\nBy imposing a small number of constraints on the diagram, listed in \\autoref{def:reduced-limdd} and visualized in \\autoref{fig:necessity-of-reduction-rules}, we ensure that every quantum state is represented by a unique `\\emph{reduced}' \\limdd.\nUnique representation, or canonicity, is a useful property of decision diagrams.\nIn the first place, it allows for circuit analysis and simplification~\\cite{bryant1995verification,miller2006qmdd}, by\n facilitating efficient manipulation operations.\nIn the second place, a reduced diagram is smaller than an unreduced diagram because it merges nodes with the same semantics. 
For instance, \\limdds allow all states in the same $\\simeq$ equivalence class to be merged.\nThe algorithms for quantum circuit simulation in \\autoref{sec:quantum-simulation} ensure that all intermediate \\limdds are reduced.\n\n\n\n%A reduced \\limdd representing a state $\\ket{\\phi}$ has the desirable property that it is the minimum-size diagram among all \\limdds representing the state $\\ket{\\phi}$ (\\autoref{thm:reduced-glimdd-minimum-size}).\n\n\\begin{definition}[Reduced \\limdd]\n\t\\label{def:reduced-limdd}\n%\tLet $\\highlabel,\\rootlabel$ and $\\beforeq$ be as follows.\n%\t\\begin{itemize}\n%\t\t\\item The function $\\highlabel$ takes as input an $n$-\\glimdd node $v$, and outputs an $(n-1)$-$G$-LIM $f(v)$.\n%\t\tThis function has the property that, if $v$ and $w$ are nodes which satisfy the Low Factoring, Low Precedence and Edge Rules (defined below), and if $\\ket{v}\\simeq_G\\ket{w}$, then $\\highlabel(v)=\\highlabel(w)$\n%\t\twith $\\ket v \\simeq_G \\ket{0}\\ket{v_0}+\\ket{1}\\otimes \\highlabel(v)\\ket{v_1}$.\n%\t\t% and if $\\ket{v}=\\ket{0}\\ket{v_0}+\\ket{1}\\lbl(\\high(v))\\ket{v_1}$ then $\\ket{v}$ is isomorphic to $\\ket{0}\\ket{v_0}+\\ket{1}\\otimes \\highlabel(v)\\ket{v_1}$.\n%\t\t%ALFONS: this seems a corollary of the definition\n%\t\tIn other words, $\\highlabel$ is constant within an isomorphism class.\n%\t\t\\item The function $\\rootlabel$ takes as input a \\glimdd{}'s distinguished root edge $e^r$, and outputs a LIM $\\rootlabel(e^r)$.\n%\t\tThis function satisfies the property that, if $e^{r}$ and $e^q$ are the root edges of two \\glimdds, and if the root nodes $r$ and $g$ satisfy the reduction rules below except the ``Root edge determinism rule'', and if $\\ket{e^r}=\\ket{e^q}$, then $\\rootlabel(e^r)=\\rootlabel(e^q)$.\n%\t\t\\item $\\beforeq$ is a total order on the nodes of the diagram.\n%\t\\end{itemize}\n\tA \\pauli-\\limdd is \\emph{reduced} when it satisfies the following constraints.\n\tIt is \\emph{semi-reduced} if it satisfies all constraints except high determinism.\n%\t\\footnote{}\n\t\\begin{enumerate}\n\t\t\\item \\textbf{Merge: } No two nodes are identical: We say\n            two nodes $v,w$ are identical~if $\\low v= \\low w, \\high v = \\high w,\n                \\lbl(\\low v)=\\lbl(\\low w)$, $\\lbl(\\high v)=\\lbl(\\high w)$.\n\t\t\\item \\textbf{(Zero) Edge: } Any edge $(v,w) \\in \\High \\cup \\Low $\nhas $\\index(v) = \\index(w) + 1$, and if $\\lbl(v,w)=0$, then both edges point to the same node, i.e., $\\high v = \\low v = w$.\n\t\t\\item \\textbf{Low Precedence: } For each internal node $v$, we have $\\low v \\beforeq \\high v$, where  $\\beforeq$ is a total order on the nodes of the diagram.\n\t\t\\item \\textbf{Low Factoring: } The label on every low edge to a node $v$ is the identity $\\id[2]^{\\otimes\\index(v)}$.\n\t\t\\item \\textbf{High Determinism: } The label on the high edge of any node $v$ is $\\highlim =\\highlabel(v)$, where $\\highlabel$ is a function that takes as input a semi-reduced $n$-{\\Pauli}-\\limdd node~$v$, and outputs an $(n-1)$-$\\Pauli$-LIM $\\highlim$\n        satisfying\n       $\\ket v \\simeq_{\\Pauli} \\ket{0}\\ket{\\low v}+\\ket{1}\\otimes \\highlim\\ket{\\high v}$.\n        Moreover, for any other semi-reduced node $w$ with $\\ket{v}\\simeq_{\\Pauli}\\ket{w}$,\n         it returns $\\highlabel(w) = \\highlim$.\n\t\tIn other words, $\\highlabel$ is constant within an isomorphism class.\t\t\n%\t\t\\item \\textbf{Root Edge Determinism: } The label on the root edge is 
$\\rootlabel(e^r)$.\n\t\\end{enumerate}\n\\end{definition}\n\n\n\\begin{figure}\\begin{center}\n\t\\includegraphics[width=1\\textwidth]{pics/necessity-of-reduction-rules.pdf}\n\t\\caption{Illustration of the reduction rules in \\autoref{def:reduced-limdd}.\n\t\tIn each case, the left and right \\limdds represent the same state, but the left \\limdd violates a reduction rule, while the right \\limdd satisfies that rule.\n        The Merge rule regards the merging of two identical nodes $v$ and $w$.\n\t\tLow Precedence determines which child is the low child, and which is the high child according to~$\\beforeq$.\n\t\tLow Factoring ensures that the low edge is always labeled with $\\mathbb I$.\n\t\tHigh Determinism ensures that the label on high edges is chosen canonically.\n\t\t}\n\t\\label{fig:necessity-of-reduction-rules}\n\\end{center}\n\\end{figure}\n\nA few observations can be made about the above definition:\n\t\\begin{itemize}\n            \\setlength\\itemsep{1em}\n\t\t\\item[\\namedlabel{obs:nozero}{O1}] There is no reduced \\limdd for the $0$-vector, because any low edge must be labeled with~$\\id[2]^{\\otimes n}$. This is not a problem for us, since the $0$-vector is not a quantum state.\n\t\t\\item[\\namedlabel{obs:knife}{O2}] To represent a state like $\\ket{0}\\otimes\\ket{\\phi}$, there is a choice between $\\ket{0}\\otimes\\ket{\\phi}$ and $(X\\otimes\\mathbb I)\\ket{1}\\otimes \\ket{\\phi}$.\nThe low factoring rule forces us to take the $\\mathbb I$ label on the low edge, so this gives a node of the form $\\ket{0}\\otimes \\ket{\\phi}$. Therefore, the high edge \\emph{must} be labeled with $0$. Since, by the zero edges rule, the high edge then points to the same node as the low edge, the low precedence rule vacuously holds for such states.\n\t\t\\item[\\namedlabel{obs:subgroups}{O3}] The definition of reduced \\limdd cannot always be applied to $G$-\\limdd when $G$ is a subgroup of the Pauli group; in particular, such a diagram may not be universal. This is because the low precedence rule requires that $v_0\\beforeq v_1$ for every node, so if $G$ is a group which does not contain $X$, then no reduced \\glimdd represents a state $\\ket 0 \\ket {v_0} + \\ket 1 \\ket {v_1}$ where $v_1 \\beforeq v_0$.\n            %In \\autoref{sec:exponential-separations}, we come back to this issue.\n%            \\todo[inline]{Tim: why is this an `issue'? More like a topic, isn't it? Why is it relevant to mention this?}\n\\item[\\namedlabel{obs:delete}{O4}] While the literature on other decision diagrams~\\cite{akers1978binary,bryant86,feinstein2011skipped} often considers a ``redundant test'' rule that allows edges to skip multiple levels, we omit this reduction for the sake of simplicity, because it offers only linear improvements in the best case and complicates correctness proofs (see, e.g., \\cite[Lemma 1]{andersen1997introduction}). There is however no fundamental reason which would prevent the addition of a similar redundancy reduction.\n%\t\\item[\\namedlabel{obs:root-edge}{O5}] the root edge of a diagram may be any LIM; in particular, it is not necessarily equal to $\\mathbb I_2^{\\otimes n}$.\\todo{remove; has nothing to do with reduced.}\n%\\todo[inline]{This has to be addressed for S4 to work...}\n\\end{itemize}\n\n\n\\autoref{lemma:node-canonicity-strong} shows that  \\limdds are canonical,\nin the sense that their nodes \nuniquely represent equivalence classes under $\\simeq$ as expressed in \\autoref{cor:node-canonicity-strong}. 
This does not mean that the root edge of a \\limdd canonically represents a quantum state. We instead show in \\autoref{sec:equality} how to check whether two root edges represent the same state.\n\n\n%\\begin{lemma}\n%    [Reduced \\limdds have reduced sub-\\limdds]\n%    \\label{lemma:sub}\n%    Let $L$ be a \\limdd.\n%    If the left edge of $v_L$ has label $\\unit$, its right edge is labelled $\\highlabel(v_L)$ or is labelled zero and points to the same node as its left edge, its children $v_0, v_1$ satisfy Low Precedence, and each node ``under its span'' satisfies the Merge rule, then all these properties also hold for its children.\n%\\end{lemma}\n\n\n\n\n\n\n\n\n\n\\begin{corollary}[of \\autoref{lemma:node-canonicity-strong}]\n    \\label{cor:node-canonicity-strong}\n    Each equivalence class under $\\simeq$ has a unique representative reduced \\limdd node $v$, which follows from \\autoref{lemma:node-canonicity-strong} and the transitivity of $\\simeq$.\n\\end{corollary}\n\n\\begin{lemma}[Node canonicity]\n    \\label{lemma:node-canonicity-strong}\n    For each $n$-qubit state vector $\\ket{\\phi}$, there exists a unique reduced Pauli-\\limdd $L$ with root node $v_L$\n    such that $\\ket{v_L} \\simeq \\ket{\\phi}$.\n%    Isomorphic state vectors are represented by the root node of a unique reduced Pauli-LIMDD.\n%    That is, for each state vector $\\ket{\\phi}$, the set\n%    \\[\n%        C(\\ket{\\phi}) = \\{\\textnormal{reduced \\limdd L} | \\ket{v_L} \\simeq \\ket{\\phi}\\}\n%    \\]\n%    satisfies:\n%    (a) $C(\\ket{\\phi}) > 0 $ and (b) $C(\\ket{\\phi}) \\leq 1$.\n%    (a) there exists a reduced Pauli-LIMDD $L$ such that $\\ket{v_L} \\simeq \\ket{\\phi} \\simeq \\ket{\\psi}$, and\n%    : if $\\ket{\\phi} \\simeq \\ket{\\psi}$, then:\n%\n%    (b) if there are two reduced Pauli-LIMDDs, with root nodes $v$ and $v'$, then $\\ket{v} \\simeq \\ket{v'} \\simeq \\ket{\\psi}$ implies $v=v'$.\n\\end{lemma}\n\\begin{proof}\n%\t\\todo[inline]{From now on, we will write $\\simeq$ to mean $\\simeq_{\\textnormal{Pauli}}$?}\n\t\n    We use induction on the number of qubits $n$ to show universality (the existence of an isomorphic \\limdd node) and uniqueness (canonicity).\n%    \\todo[inline]{Tim: can't we move proof to appendix?}\n\n%In the lemma's proof, we refer to \\limdd DAGs with capitals $L,M$, and their root nodes with $v_L, v_M$.\n%Edges to $n$-qubit nodes $v$ in the \\limdd carry a Pauli isomorphism $A = \\lambda, A_1, \\dots, A_n$ according to \\autoref{def:limdd}, i.e., a Pauli string interpreted as a LIM $A_n \\otimes \\dots \\otimes A_1$ multiplied by a nonzero complex factor $\\lambda$. Note that the factor $\\lambda$ can be zero.\n\n    \\textbf{Base case.}\n    If $n=0$, then $\\ket{\\phi}$ is a complex number $\\lambda$.\n    A reduced Pauli-\\limdd for this state is the leaf node representing the scalar $1$.\n    To show it is unique, consider that nodes $v$ other than the leaf have $\\index(v) > 0$,\n    by the edges rule, and hence represent multi-qubit states.\n    Since the leaf node itself is defined to be unique, the merge rule is not needed and canonicity follows. 
Finally, $\\ket{\\phi}$ is represented by root edge\n    $\\ledge \\lambda1$.\n    %\n    \n    \\textbf{Inductive case.}\n    Suppose $n>0$.\n    We first show existence, and then show uniqueness.\n    \n    We use the unique expansion of $\\ket{\\phi}$ as $\\ket{\\phi} = \\ket{0} \\otimes \\ket{\\phi_0} + \\ket{1}\\otimes \\ket{\\phi_1}$ where $\\ket{\\phi_0}$ and $\\ket{\\phi_1}$ are either $(n-1)$-qubit state vectors, or the all-zero vector.\n%    \\todo[inline]{Quantum state vectors are defined to be non-zero in \\autoref{def:state-vector}, but these vectors are allowed to be zero. -LV}\n    We distinguish three cases based on whether $\\ket{\\phi_0}, \\ket{\\phi_1} = 0$.\n\n   \\textbf{Case $\\boldsymbol{\\ket{\\phi_0}, \\ket{\\phi_1} = 0}$:}\n    This case is ruled out because $\\ket{\\phi} \\neq 0$.\n\n    \\textbf{Case $\\boldsymbol{\\ket{\\phi_0}=0}$ or $\\boldsymbol{\\ket{\\phi_1} = 0}$:}\n        In case $\\ket{\\phi_0}\\neq 0$,\n       by the induction hypothesis, there exists a Pauli-\\limdd with root node $w$ satisfying\n        $\\ket{w} \\simeq \\ket{\\phi_0}$. By definition of $\\simeq$,\n        there exists an $n$-qubit Pauli isomorphism $A$ such that \n        $\\ket{\\phi_0} = A \\ket{w}$.\n       We construct the following reduced Pauli-\\limdd for $\\ket{\\phi}$: \n        $\\lnode[v] I{w}0{w}$. \n        In case $\\ket{\\phi_1}\\neq 0$, we do the same for root\n            $\\ket{w} \\simeq \\ket{\\phi_1} = A \\ket w$.\n        In both cases, it is easy to check that the root node is reduced.\n        Also in both cases, we have $\\ket \\phi \\simeq \\ket v$ because \n        either $\\ket \\phi = \\id[2] \\otimes A \\ket v$ or \n        $\\ket \\phi = X \\otimes A \\ket v$ as illustrated in \\autoref{fig:reduced1} (left).\n\n\\begin{figure}\n\\tikz[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,\n        thick, state/.style={circle,draw,inner sep=0pt,minimum size=18pt}]{\n    \\node[state] (1) {$v'$};\n    \\node[above = .5cm of 1,xshift=1.55cm,fill=black] (x) {};%{$\\ket \\phi$};\n    \\node[state] (1a) [below = 1cm of 1, xshift=1.7cm] {$w$};\n    \n    \\path[]\n    (x) edge[bend left=-20]     node[above left,pos=.7] {$\\id[2]^{\\otimes n}$} (1)\n    (1) edge[e0,bend left=-20] node[pos=.1,left] {$A$} (1a)\n    (1) edge[e1,bend right=-20] node[pos=.1,above right] {$0$} (1a)\n    ;\n\n    \\node[state, right = 2.5cm of 1] (2) {$v$};\n%    \\node[above = .5cm of 2] (x)  {$\\ket \\phi$};%{$= \\id[2] \\otimes A \\ket{v}$};\n    \\path[]\n    (x) edge[bend left=20]     node[above right,pos=.7] {$\\id[2] \\otimes A$} (2)\n    (2) edge[e0,bend left=-20] node[pos=.1,left] {$\\id[2]^{\\otimes n}$} (1a)\n    (2) edge[e1,bend right=-20] node[pos=.2,right] {$0$} (1a)\n    (1) --  node[yshift=.1cm] {$\\rightsquigarrow$} (2)\n    ;\n    }~~~~~~~~~~~~~~~\n\\tikz[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,\n        thick, state/.style={circle,draw,inner sep=0pt,minimum size=18pt}]{\n%    \\node[above = .5cm of 1] (x) {$\\ket \\phi$};\n    \\node[state] (1) {$v''$};\n    \\node[state] (1a) [below = 1cm of 1, xshift=2.3cm] {$v_L$};\n    \\node[state] (1b) [below = 1cm of 1, xshift=3.8cm] {$v_R$};\n    \\path[]\n    (1) edge[e0] node[pos=.1,left] {$A$} (1a)\n    (1) edge[e1] node[pos=.1,above right] {$B$} (1b)\n    (1a) --  node[yshift=-.2cm] {$\\beforeq$} (1b)\n    ;\n\n    \\node[state, right = 2.5cm of 1] (2) {$v'$};\n    \\node[above = .5cm of 2,fill=black] (x)  {};%{$= \\id[2] \\otimes A \\ket{v'}$};\n    \\path[]\n    (x) edge[bend left=-20]     node[above 
left,pos=.8] {$\\id[2]^{\\otimes n}$} (1)\n    (x) edge     node[right,pos=.4] {$\\id[2] \\otimes A$} (2)\n    (2) edge[e0] node[pos=.2,left] {$\\id[2]^{\\otimes n}$} (1a)\n    (2) edge[e1] node[pos=.1,right] {$ A^{-1}B$} (1b)\n    (1) --  node[yshift=.1cm] {$\\rightsquigarrow$} (2)\n    ;\n    \n    \\node[state, right = 2.5cm of 2] (3) {$v$};\n%    \\node[above = .5cm of 3] (x)  {}; %{$ =  (\\id[2] \\otimes A)\\rootlim \\ket{v}$};\n    \\path[]\n    (x) edge[bend left=20]      node[above right,pos=.8] {$(\\id[2] \\otimes A)\\rootlim$} (3)\n    (3) edge[e0] node[pos=.1,above left] {$\\id[2]^{\\otimes n}$} (1a)\n    (3) edge[e1] node[pos=.2,below right] {$\\highlim$} (1b)\n    (2) --  node[yshift=.1cm] {$\\rightsquigarrow$} (3)\n    ;\n    }\n\t\\caption{Reduced node construction in case $\\ket{\\phi_1} = 0$ (left), and\n\t        $\\ket{\\phi_0}, \\ket{\\phi_1} \\neq 0$ and $v_L \\beforeq v_R$ (right).\n\t        For cases $\\ket{\\phi_0} = 0$ and $v_R \\beforeq v_L$, we take instead root edge $ X \\otimes A$ and swap low/high edges.\n\t        The black square (\\scalebox{.7}{$\\blacksquare$})\n\t        signifies a unique quantum state (all root edges represent this one state).\n%            \\todo[inline]{Tim: why do we need to show the state above each node? Aren't these obvious from the figure? (i.e. each state is of the form $(rootedge) \\ket{rootnode}$}\n            }\n\t\\label{fig:reduced1}\n\\end{figure}\n\n \n    \\textbf{Case $\\boldsymbol{\\ket{\\phi_0}, \\ket{\\phi_1} \\neq 0}$:}\n    By the induction hypothesis, there exist \\pauli-\\limdds $L$ and $R$ with root nodes\n    $\\ket{v_{L}} \\simeq \\ket{\\phi_0}$ and $\\ket{v_{R}} \\simeq \\ket{\\phi_1}$.\\footnote{Note that the induction hypothesis implies a `local' reduction of \\limdds $L$ and $R$, but not automatically a reduction of their union. For instance, $L$ might contain a node $v$ and $R$ a node $w$ such that $v \\simeq w$. While the other reduction rules ensure that $v$ and $w$ will be structurally the same, the induction hypothesis only applies the merge rule to $L$ and $R$ in isolation, leaving two separate identical nodes $v,w$.\nWe can solve this by applying merge on the union of the nodes in $L$ and $R$, to merge any equivalent nodes.}\n    By definition of $\\simeq$, there exist $n$-qubit Pauli isomorphisms $A$ and $B$ such that $\\ket{\\phi_0} = A \\ket{v_{L}}$ and $\\ket{\\phi_1} = B \\ket{v_{R}}$.\n    In case $v_{L} \\beforeq v_{R}$,\n       we construct the following reduced Pauli-\\limdd for $\\ket{\\phi}$: the root node is $\\lnode[v] {\\mathbb I}{v_{L}}E{v_{R}}$, where\n    $E$ is the LIM computed by $\\highlabel(\\lnode{\\mathbb I}{v_L}{A^{-1}B}{v_R})$. 
\n    Otherwise, if $v_{R}\\beforeq v_{L}$, then we construct the following reduced Pauli-\\limdd for $\\ket{\\phi}$: the root node is $\\lnode[v] I{v_{R}}F{v_{L}}$, where $F=\\highlabel(\\lnode{\\mathbb I}{v_L}{B^{-1}A}{v_R})$.\n    It is straightforward to check that, in both cases, this Pauli-\\limdd is reduced.\n    Moreover, $\\ket v$ isomorphic to $\\ket \\phi$ \n    as illustrated in \\autoref{fig:reduced1} (right).\n\n\n\n    To show uniqueness, let $L$ and $M$ be reduced \\limdds (root nodes $v_L, v_M$) such that $\\ket{v_L} \\simeq \\ket{\\phi} \\simeq \\ket{v_M}$.\n    \\def\\Ptop{P_{\\textnormal{top}}}\n    \\def\\Prest{P_{\\textnormal{rest}}}\n    Expanding the semantics of $v_L$ and $v_M$, this implies there exists a Pauli isomorphism $\\lambda \\Ptop \\otimes \\Prest \\neq 0$, where $\\Ptop$ is a single-qubit Pauli and $\\Prest$ an $(n-1)$-qubit Pauli isomorphism, such that\n        \\begin{align}\n        \\label{eq:canonicity-equation-1}\n       \\lambda \\Ptop \\otimes \\Prest (\\ket{0} \\otimes A_L \\ket{v_L^0} + \\ket{1} \\otimes B_L \\ket{v_L^1})\n        =\n        \\ket{0} \\otimes A_M \\ket{v_M^0} + \\ket{1} \\otimes B_M \\ket{v_M^1}\n        .\n        \\end{align}\n\n    \n    We distinguish two cases from here on: where $\\Ptop \\in \\{\\unit, Z\\}$ or $\\Ptop \\in \\{X,  Y\\}$.\n    \n    \\textbf{Case $\\boldsymbol{\\Ptop = I,Z}$.}\n    If $\\Ptop = \\diag z$ for $z \\in \\{1, -1\\}$,  then\n    \\autoref{eq:canonicity-equation-1} gives:\n%        \\begin{align}\n%        \\label{eq:canonicity-equation-1}\n%       \\ket{0} \\otimes \\lambda\\Prest A_L \\ket{v_L^0} + \\ket{1} \\otimes z \\lambda\\Prest B_L \\ket{v_L^1}\n%=\n%\\ket{0} \\otimes A_M \\ket{v_M^0} + \\ket{1} \\otimes B_M \\ket{v_M^1}\n%        \\end{align}\n        \\begin{align}\n        \\label{eq:canonicity-equation-2}\n            \\lambda \\Prest A_L \\ket{v_L^0} = A_M \\ket{v_M^0}\n            \\qquad\\textnormal{and}\\qquad\n            z\\lambda \\Prest B_L \\ket{v_L^1} = B_M \\ket{v_M^1}\n        \\end{align}\n            By low factoring, we have $A_L = A_M = \\unit$, so we obtain\n$ \\lambda  \\Prest \\ket{v_L^0} = \\ket{v_M^0}$.      \n        Hence $\\ket{v_L^0}$ is isomorphic with $\\ket{v_M^0}$, so by induction hypothesis and\n         \\autoref{cor:node-canonicity-strong}, we have $v_L^0 = v_M^0$. 
\n         We now show that also $v_L = v_M$ by considering two cases.\n\\begin{description}\n        \\item[$B_L \\neq 0$ and $B_M \\neq 0$:] then $z\\lambda \\Prest B_L\\ket{v_L^1}=B_M\\ket{v_M^1}$, so the nodes $v_L^1$ and $v_M^1$ represent isomorphic states, so by the induction hypothesis  and \\autoref{cor:node-canonicity-strong} we have $v_L^1=v_M^1$.\n        We already noticed by the low factoring rule that $v_L$ and $v_M$ have $\\mathbb I$ as low edge label.\n        By the high edge rule, their high edge labels are $\\highlabel(v_L)$ and $\\highlabel(v_M)$, and since the reduced \\limdds $L$ and $M$ also satisfy low precedence and edge rules and $\\ket{v_L} \\simeq \\ket{v_M}$, we have $\\highlabel(v_M) = \\highlabel(v_L)$ by definition of $\\highlabel$.\n\n        \\item[$B_L = 0$ or $B_M= 0$:] In case $B_L = 0$,\n        %, then since no state vector has norm 0, we find that $B_M = 0$.\n        we see from \\autoref{eq:canonicity-equation-2} that $0=B_M\\ket{v_M^1}$.\n        Since the state vector $\\ket{v_M^1}\\ne 0$ by \\ref{obs:nozero}, it follows that $B_M=0$.\n        Otherwise, if $B_M=0$, then \\autoref{eq:canonicity-equation-2} yields $z\\lambda\\Prest B_L\\ket{v_L^1}=0$.\n        We have $z \\lambda\\ne 0$, $\\Prest \\ne 0$ by definition, and $\\ket{v_L^1}\\ne 0$ by \\ref{obs:nozero}.\n        Therefore $B_L=0$. In both cases, $B_L=B_M$.\n%        Let's first assume $B_L \\neq \n\\end{description}\n        We conclude that in both cases $v_L$ and $v_M$ have the same children and the same edge labels, so they are identical by the merge rule.\n\n        \\textbf{Case $\\boldsymbol{\\Ptop = X, Y}$.}\n    If $\\Ptop = \\begin{smallmat}0 & z^* \\\\ z & 0\\end{smallmat}$ for $z \\in \\{1, i\\}$, then\n    \\autoref{eq:canonicity-equation-1} gives:\n%        \\[\n%        \\ket{1} \\otimes z \\Prest A_L \\ket{v_L^0} + \\ket{0} \\otimes z \\Prest B_L \\ket{v_L^1}\n%        =\n%        \\ket{0} \\otimes A_M \\ket{v_M^0} + \\ket{1} \\otimes B_M \\ket{v_M^1}\n%        \\]\n        \\[\n            \\lambda z\\Prest A_L \\ket{v_L^0} = B_M \\ket{v_M^1}\n            \\qquad\\textnormal{and}\\qquad\n            \\lambda z^*\\Prest B_L \\ket{v_L^1} = A_M \\ket{v_M^0}\n            .\n        \\]\n        By low factoring, $A_L = A_M = \\unit$, so we obtain \n        $z\\lambda  \\Prest \\ket{v_L^0} = B_M \\ket{v_M^1}$\n        and\n        $\\lambda z^*\\Prest B_L \\ket{v_L^1} = \\ket{v_M^0}$.\nTo show that $v_L = v_M$, we consider two cases.       \n\\begin{description}\n        \\item[$B_L \\neq 0$ and $B_M \\neq 0$:] we find $\\ket{v_L^0} \\simeq \\ket{v_M^1}$ and $\\ket{v_L^1} \\simeq \\ket{v_M^0}$, so by the induction hypothesis, $v_L^0= v_M^1$ and $v_L^1= v_M^0$.\n        By low precedence, it must be that $v_L^1 = v_M^1 = v_L^0 = v_M^0$.\n        Now use high determinism to infer that $B_L = B_M$ as in the ${\\Ptop = I,Z}$ case.\n%(NOTE: THIS IS THE SAME CASE AS Case-IZ \\& $B_{\\Box}$ nonzero. So this is where the root edge rule will help us choose $P=\\Ptop \\otimes \\Prest$)\\todo{I don't follow. 
Root Edge should not be relevant here.}\n\n        \\item[$B_L = 0$ or $B_M = 0$:]\n       This case leads to a contradiction and thus cannot occur.\n        $B_L$ cannot be zero, because then $\\ket{v_M^0}$ is the all-zero vector, which we excluded by \\ref{obs:nozero}.\n        Otherwise, if $B_M = 0$, then $z\\lambda\\Prest\\ket{v_L^0} = 0$; since $z\\lambda \\neq 0$ and $\\Prest$ is invertible, $\\ket{v_L^0}$ would be the all-zero vector, which is again excluded.\n\\end{description}\n We conclude that $v_L$ and $v_M$ have the same children and the same edge labels  for all choices of $\\Ptop$, so they are identical by the merge rule.\n\\end{proof}\n\n%\\begin{corollary}[Canonicity]\n%    For each $n$-qubit state vector $\\ket{\\phi}$, there exists a unique Pauli-LIMDD $L$ such that $\\ket{e^L} = \\ket{\\phi}$.\n%\\end{corollary}\n%\\begin{proof}\n%    By Lemma~\\ref{lemma:node-canonicity-strong}, there exists a unique Pauli-LIMDD $M$ such that $\\ket{\\phi} \\simeq \\ket{v_M}$.\n%    Denote by $P$ a Pauli isomorphism mapping $\\ket{v_M}$ to $\\ket{\\phi}$.\n%    As root edge label, we use $\\rootlabel(P\\cdot e^{v_M})$.\n%    By definition of $\\rootlabel$, this label choice is unique.\n%\\end{proof}\n\n\n%\\newpage\n%\\newpage\n%\\newpage\n\n\n% todo the others\n\n%\\autoref{thm:limdd-canonicity-root-edge} shows that for every quantum state $\\ket{\\phi}$, there is indeed a unique reduced \\glimdd whose root edge represents $\\ket{\\phi}$.\n%This implies canonicity, since by \\autoref{def:limdd}, every \\glimdd represents exactly one quantum state.\n%\n%\\todo[inline]{Theorem does not use low precedence, so it is either incorrect or Def 3 is.}\n%\n%\n%%\\todo[inline]{Insert figure which shows that, if the reduction rules are NOT obeyed, then we can get multiple diagrams which represent the same state.}\n%\\begin{figure}[b]\n%\t\\centering\n%\t\\includegraphics[width=.7\\textwidth]{pics/canonicity-two-diagrams.pdf}\n%\t\\caption{Two \\glimdds on $n+1$ qubits.}\n%\t\\label{fig:canonicity-two-diagrams}\n%\\end{figure}\n%\\begin{theorem}[Canonicity of \\glimdd nodes]\n%\t\\label{thm:limdd-canonicity-nodes}\n%\tIf two reduced \\glimdd nodes $v_D$ and $v_E$ represent the same state $\\ket{v_D}=\\ket{v_E}$, and if $G$ is a subgroup of the Dihedral Torus\\todo{TODO: Change to Pauli group, or explain exception}, then they are the same diagram.\n%\\end{theorem}\n%\\begin{proof}\n%\tThe proof is by induction on the number of qubits.\n%\tThe induction hypothesis is that the theorem holds for all \\glimdd nodes on $n$ qubits.\n%\t\n%\t\\textbf{Base case: $n=0$ qubits. } \n%\tThere is only one node on $0$ qubits, namely the leaf, representing the scalar $1\\in \\mathbb C$.\n%\tHence, the theorem trivially holds in this case.\n%\t\n%\t\\textbf{Induction case: $n+1$ qubits. 
}\n%\t%\tLet $\\ket{\\phi}$ be a state of $n+1$ qubits, and suppose that two \\glimdds $D$ and $E$ represent this state, with $D=(V_D,Low_D,High_D,\\textsf{label}_D,e_D^r)$ and $E=(V_E,Low_E,High_E,\\textsf{label}_E,e_E^r)$.\n%\tSuppose that nodes $v_D$ and $v_E$ represent the same $n+1$-qubit state, i.e., $\\ket{v_D}=\\ket{v_E}$.\n%\t\\alfons{Since $\\index(v_D), \\index(v_E) = n+1 > 0$, these nodes cannot be leaves.}\n%\tHence $v_D$ and $v_E$ are both nodes representing the most significant qubit $n+1$, as implied by the (zero) edges rule\\todo{actually requires a few reasoning steps, or we should make it part of the IH},\n%\tas in \\autoref{fig:canonicity-two-diagrams}.\n%\t%\tSay that the diagrams $D$ and $E$ are as in \\autoref{fig:canonicity-two-diagrams}.\n%\t%\tIn this Figure, the edges $e_D^r$ and $e_E^r$ are labeled with $A_D=\\textsf{label}_D(e_D^r)$ and $A_E=\\textsf{label}_E(e_E^r)$, and they point to the root nodes $v_D$ and $v_E$, respectively.\n%\tThe node $v_D$ has two (not necessarily distinct) children, $v_{D,0}$ (low) and $v_{D,1}$ (high), and node $v_E$ has children $v_{E,0}$ and $v_{E,1}$.\n%\tThe high edges of nodes $v_D$ and $v_E$ are labeled with $n$-$G$-LIMs $B_D$ and $B_E$, respectively.\n%\t\n%\tSince the diagrams $D$ and $E$ are reduced, they satisfy the \\emph{Low Factoring}; therefore, the labels on the low edges are $\\textsf{label}_D(v_D,v_{D,0})=\\textsf{label}_E(v_E,v_{E,0})=\\mathbb I$.\n%\tWe show that this implies that $\\ket{v_{D,0}}=\\ket{v_{E,0}}$.\n%\tNamely, consider the equation $\\ket{v_D}=\\ket{v_E}$, and expand the most significant qubit, as follows,\n%\t\\begin{align}\n%\t\\ket{v_D}=\\ket{0}\\ket{v_{D,0}}+\\ket{1}B_D\\ket{v_{D,1}} = \\ket{0}\\ket{v_{E,0}}+\\ket{1}B_E\\ket{v_{E,1}} = \\ket{v_E}\n%\t\\end{align}\n%\tSince $\\ket{0}\\ket{v_{D,0}}=\\ket{0}\\ket{v_{E,0}}$, it follows that $\\ket{v_{D,0}}=\\ket{v_{E,0}}$.\n%\tTherefore, by the induction hypothesis, the low children $v_{D,0}$ and $v_{E,0}$ are in fact the same node.\n%\t\n%\t\n%\tThe nodes $v_{D}$ and $v_E$ satisfy the \\emph{High Determinism Rule}; therefore, the labels on their high edges are equal: $B_D=B_E=\\highlabel(v_D)$, because the $\\highlabel$ function is constant for nodes representing states in the same equivalence class according to \\ref{def:reduced-limdd} and $\\ket{v_{D}} = \\ket{v_E} \\implies \\ket{v_{D}} \\simeq \\ket{v_E}$.\n%\t%In particular, if $\\ket{v}=\\ket{0}\\ket{v_0}+\\ket{1}\\lbl(\\high(v))\\ket{v_1}$ then $\\ket{v}$ is isomorphic to $\\ket{0}\\ket{v_0}+\\ket{1}\\otimes \\highlabel(v)\\ket{v_1}$.\n%\t\n%\t%\\todo{connect to new formalism. Formalize that \\highlabel is constant in this case and explain here.}\n%\t\n%\tIt remains to show that we have $v_{D,1}=v_{E,1}$.\n%\tTo this end, we distinguish two cases: a ``knife'' case, in which $\\ket{v_D}=\\ket{0}\\ket{v_{D,0}}$, and a ``fork'' case.\n%\t\n%\t\\textbf{Case 1: Knife. }\n%\tBy \\autoref{def:reduced-limdd} we have $B_D=B_E=0$.\n%\tBy the zero edge rule, we must have $v_{D,1}=v_{D,0}$ and $v_{E,1}=v_{E,0}$; therefore, we also conclude $v_{D,1}=v_{E,1}$.\n%\tFrom the merge rule, we then conclude that the \\glimdds $v_D$ and $v_D$ are the same.\n%\t\n%\t\\textbf{Case 2: Fork. 
}\n%\tSince the nodes $v_D$ and $v_E$ represent the same state, consider again the equation $\\ket{v_D}=\\ket{v_E}$, knowing that $B_D=B_E$:\n%\t\\begin{align}\n%\t\\ket{v_D}=\\ket{0}\\ket{v_{D,0}}+\\ket{1}A_D\\ket{v_{D,1}} = \\ket{0}\\ket{v_{D,0}}+\\ket{1}A_D\\ket{v_{D,1}} = \\ket{v_E}\n%\t\\end{align}\n%\tIt follows that $\\ket{1}A_D\\ket{v_{D,1}}=\\ket{1}A_D\\ket{v_{E,1}}$; therefore, in particular $A_D\\ket{v_{D,1}}=A_D\\ket{v_{E,1}}$.\n%\tMultiplying both sides with $A_D^{-1}$, we get $\\ket{v_{D,1}}=\\ket{v_{E,1}}$, i.e., the nodes $v_{D,1}$ and $v_{E,1}$ represent the same state.\n%\tTherefore, by the induction hypothesis, they must be the same node.\n%\\end{proof}\n%\\begin{corollary}\n%\t\\label{thm:limdd-canonicity-root-edge}\n%\tIf root edges $e_D^r, e_E^r$ of reduced \\glimdds represent the same state $\\ket{e_D^r}=\\ket{e_E^r}$, and if $G$ is a subgroup of the Dihedral Torus\\todo{tricky}, then they are the same diagram.\n%\\end{corollary}\n%\\begin{proof}\n%\tSince the diagrams $D$ and $E$ are reduced, they satisfy the \\emph{Root Edge Determinism Rule}.\n%\tThis means that the labels are chosen as $A_D=\\rootlabel(\\ket{e_D^r})$ and $A_E=\\rootlabel(\\ket{e_E^r})$.\n%\tHowever, because the diagrams represent the same state, i.e., $\\ket{e_D^r}=\\ket{e_E^r}$, it follows that $A_D=A_E$, i.e., these two root edges have the same label.\n%\t\n%\tIt follows that the states represented by the nodes $v_D$ and $v_E$ are the same.\n%\tNamely, since we have $\\ket{e_D^r}=\\ket{e_E^r}$, and $\\ket{e_D^r}=A_D\\ket{v_D}$, and $\\ket{e_E^r}=A_D\\ket{v_E}$, it follows that $A_D\\ket{v_D}=A_D\\ket{v_E}$, so, multiplying both sides with $A_D^{-1}$, we get $\\ket{v_D}=\\ket{v_E}$.\n%\tBy \\autoref{thm:limdd-canonicity-nodes}, since the two nodes $v_D$ and $v_E$ represent the same state, they are the same nodes.\n%\t\n%\tBecause the two \\glimdds have the same structure below the root nodes, and have the same label on the root edge, we conclude that they are the same diagram.\n%\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "897a59ae4ff061714cd0b71d99baeb7d0f34a84a", "size": 40787, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Src/CS/sections/iso_qmdd_datastructure.tex", "max_stars_repo_name": "Katafotic/latex_parsing", "max_stars_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Src/CS/sections/iso_qmdd_datastructure.tex", "max_issues_repo_name": "Katafotic/latex_parsing", "max_issues_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Src/CS/sections/iso_qmdd_datastructure.tex", "max_forks_repo_name": "Katafotic/latex_parsing", "max_forks_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.998381877, "max_line_length": 536, "alphanum_fraction": 0.6584696104, "num_tokens": 13937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577680904463333, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5796493114073367}}
{"text": "\n\n\\chapter{Analysis of Functions on Product Spaces}\n\nIn this chapter, we will describe some results concerning functions\non product probability spaces.\n\n\\section{Preliminaries}\n\nLet $(\\Omega, \\mu)$ be a discrete probability space and\n$(\\Omega^L,\\mu^{\\otimes L})$ be the corresponding product space. For a function\n$f:\\Omega^L \\rightarrow \\R$, the \\emph{Efron-Stein decomposition} of $f$ with\nrespect to the product space is given by\n$$ f(x_1,\\cdots, x_L) = \\sum_{\\beta \\subseteq [L]} f_\\beta(x),$$\nwhere $f_\\beta$ depends only on $x_i$ for $i\\in \\beta$ and \n$$ \\forall \\beta' \\not\\supseteq \\beta , a \\in \\Omega^{\\beta'}, \\E_{x \\in\n\\mu^{\\otimes L}} \\left[ f_\\beta(x) \\mid x_{\\beta'} = a \\right]=0.$$\nThe $\\ell_p$ and $\\ell_\\infty$ norms of $f$ with respect to the probability space\nare defined as\n$$\\|f\\|_p := \\E_{x\\in \\mu^{\\otimes L}}\\left[f(x)^p\\right]^{1/p},\n~~~~~~~~~~~~\\|f\\|_\\infty :=\\max_{x\\in \\Omega^{\\otimes L}}\\left|f(x)\\right|.$$ \nFor $i \\in [L]$, the \\emph{influence} of the $i$th\ncoordinate on $f$ is defined as follows.\n$$\\Inf_i[f] := \\E_{x_1,\\cdots, x_{i-1},x_{i+1},\\cdots , x_L}\\Var_{x_i}[f\n(x_1,\\cdots, x_L)]  = \\sum_{\\beta: i\\in \\beta} \\|f_\\beta\\|^2_2.$$\nFor an integer $d$, the \\emph{degree $d$ influence} is defined as\n$$\\Inf_i^{\\leq d}[f] := \\sum_{\\beta: i\\in \\beta, |\\beta| \\leq d} \\|f_\\beta\\|^2_2.$$\n\n\\section{Invariance Principle}\n%TODO: Motivate Invariance Principle More\n\nLet $(\\Omega^k, \\mu)$ be a probability space. Let $\\supp(\\mu) := \\{ x\\in \\Omega^k \n\\mid \\mu(x) > 0\\}$. \n\n\\begin{definition}[Connected Sets and Distributions] \\label{def:connected}\nWe say that  $S \\subseteq \\Omega^k$ is \n\\emph{connected} if for every $x, y\\in S$, there is a sequence of strings \nstarting with $x$ and ending with $y$ such that every element in the sequence is\nin $S$ and every two adjacent elements differ in exactly one coordinate. 
\nThe probability space is connected if $\\supp(\\mu)$ is connected.\n\\end{definition}\n\n\n\\begin{theorem}[{Mossel~\\cite[Proposition 6.4]{Mossel2008}}]\n\n\t\\label{thm:invariance}\n\tLet $(\\Omega^k, \\mu)$ be a connected probability space such that the minimum probability of\n\tevery atom in $\\supp(\\mu)$ is at least $\\alpha \\in \\left(0, \\frac{1}{2}\\right]$.\n\t Then there exist \n\tcontinuous functions $\\overline{\\Gamma} : (0,1)\\rightarrow (0,1)$ and \n\t$\\underline{\\Gamma} : (0,1)\\rightarrow (0,1)$ such that the following holds: \n\tFor every $\\epsilon>0$, there exists $\\tau > 0$ and an integer $d$ such that \n\tif a function $f : \\Omega^L \\rightarrow [0,1]$ satisfies\n\t%\n\t$$\\forall i\\in [L],  \\Inf_i^{\\leq d} (f) \\leq \\tau $$\n\t%\n\tthen \n\t%\n\t$$\\underline{\\Gamma}\\left(\\E_\\mu[f]\\right) -\\epsilon \\leq \\E_{(x_1,\\ldots, x_k) \\sim \\mu^{\\otimes L}}\\left[\n\t\\prod_{j=1}^k f(x_j)\\right] \\leq \\overline{\\Gamma}\\left(\\E_\\mu[f]\\right) + \\epsilon.$$\n\t%\n\tThere exists an absolute constant $C$ such that one can take $\\tau = \\epsilon^\n\t{C\\frac{\\log(\\nicefrac{1}{\\alpha})\\log(\\nicefrac{1}{\\epsilon})}{\\epsilon\n\t\\alpha^2}}$ and $d = \\log(\\nicefrac{1}{\\tau})\\log(\\nicefrac{1}{\\alpha})$.\n\\end{theorem}\n\nCorrelation is a measure of dependence in probability spaces where the sample space\nis a product set.\n\n\\begin{definition}[Correlated Spaces]\n\\label{def:correlation}\nLet $(\\Omega_1 \\times \\Omega_2, \\mu)$ be a finite probability space. The correlation between $\\Omega_1$ and $\\Omega_2$ with respect to $\\mu$ is defined as \n$$\\rho(\\Omega_1, \\Omega_2; \\mu) := \\mathop{\\max}_{\\substack{f : \\Omega_1 \\rightarrow \\R, \\E[f]  = 0 , \\E[f^2]\\leq 1 \\\\ g: \\Omega_2 \\rightarrow \\R, \\E[g] = 0 , \\E[g^2]\\leq 1} }  \\E_{(x,y) \\sim \\mu }[ |f(x)g(y)|] .$$\nFor a probability space $\\left(\\prod_{i=1}^k\\Omega_i, \\mu\\right)$, the correlation is given by\n$$\\rho\\left(\\prod_{i=1}^k\\Omega_i; \\mu\\right) := \\max_{i \\in [k]} \\rho\\left( \\Omega_i, \\prod_{ j \\in [k], j \\neq i} \\Omega_j; \\mu\\right).$$ \n\\end{definition}\n\nThe following result about correlated spaces adapts similar results (see Wenner\n\\cite[Theorem~3.12]{Wenner2013} and Guruswami \\& \nLee~\\cite[Lemma~A.1]{GuruswamiL2015}) to proving\nour hardness results.\n \n\\begin{theorem}\n\n  \\label{thm:inv-prin} Let $(\\Omega_1^k \\times \\Omega_2^k, \\mu)$ be a\n  correlated probability space with correlation $\\rho < 1$\n   such that the marginal of $\\mu$ on any\n  pair of coordinates, one each from $\\Omega_1$ and $\\Omega_2$, is a\n  product distribution. Let $\\mu_1 ,\\mu_2$ be the marginals of $\\mu$\n  on $\\Omega_1^k$ and $\\Omega_2^k$ respectively. Let $X, Y$ be two\n  random $k\\times L$ dimensional matrices chosen as follows:\n  independently for every $i \\in [L]$, the pair of columns $(x^i,y^i)\n  \\in \\Omega_1^k \\times \\Omega_2^k$ is chosen from $\\mu$. Let\n  $x_i,y_i$ denote the $i$\\th rows of $X$ and $Y$ respectively.  
If\n  $F: \\Omega_1^L \\rightarrow [-1,+1]$ and $G: \\Omega_2^L \\rightarrow\n  [-1,+1]$ are functions, and we define\n\t$$\\tau:= \\sqrt{\\sum_{i \\in [L]}\\Inf_i[F]\\cdot \\Inf_i[G]}  ~\\text{ and } ~\n\t\\Gamma := \\max \\left\\{ \\sqrt{\\sum_{i \\in [L]}\\Inf_i[F]} , \\sqrt{\\sum_{i \\in \n\t[L]}\\Inf_i[G]} \\right\\} \\ ,$$ then\n\t\t\\begin{equation}\n\t\t\\label{eqn:inv-eqn}\n\t\t\\abs{ \\E_{(X,Y) \\in \\mu^{\\otimes L}} \\left[\\prod_{i\\in [k]}F(x_i) G(y_i)\n\t\t\\right] - \\E_{X \\in \\mu_1^{\\otimes L}} \\left[\\prod_{i\\in [k]}F(x_i)\\right]\n\t\t\\E_{Y \\in \\mu_2^{\\otimes L}} \\left[\\prod_{i\\in [k]}G(y_i)\\right] } \\leq \n\t\t2^{O(k)} \\Gamma \\tau.\n\t\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\n\n\tWe will prove the theorem by using the hybrid argument. For $0 \\le i \\le L$, let \n\t$X^{(i)},Y^{(i)}$ be distributed according to $(\\mu_1 \\otimes \\mu_2)^{\\otimes\n\t i} \\otimes \\mu^{\\otimes L- i}$. Thus, $(X^{(0)},Y^{(0)}) =\n       (X,Y)$ is distributed according to $\\mu^{\\otimes L}$ while\n       $(X^{(L)},Y^{(L)})$ is distributed according to\n       $(\\mu_1\\otimes\\mu_2)^{\\otimes L}$. For $i \\in [L]$, define\n\t%\n\t\\begin{equation}\n\t\t\\label{eqn:err}\n\t\t\\err_i := \\abs{ \\E_{X^{(i-1)},Y^{(i-1)}} \\left[\\prod_{j=1}^kF(x^{(i-1)}_j) \n\t\tG(y^{(i-1)}_j)\\right] - \\E_{X^{(i)},Y^{(i)}} \\left[\\prod_{j=1}^kF(x^{(i)}_j) G(y^{(i)}_j)\\right] }.\n\t\\end{equation}\n%\n\tThe left-hand side of Equation \\eqref{eqn:inv-eqn} is upper bounded by $\\sum_{i\\in [L]} \n\t\\err_i$.  Now for a fixed $i$,  we will bound $\\err_i$. We use the \n\tEfron-Stein decomposition of $F,G$ to split them into two\n        parts: the part which depends on the \n\t$i$th input and the part independent of the $i$th input. \n\t%\n\t$$F= F_0 + F_1 \\text{ where } F_0 := \\sum_{\\alpha : i\\notin \\alpha} F_\\alpha \\mbox{ and } \n\tF_1 := \\sum_{\\alpha : i\\in \\alpha} F_\\alpha.$$\n\t%\n\t$$G = G_0 + G_1 \\text{ where } G_0 := \\sum_{\\beta : i\\notin \\beta} G_\\beta  \\mbox{ and } \n\tG_1 := \\sum_{\\beta : i\\in \\beta} G_\\beta.$$\n\t%\n\tNote that $\\Inf_i[F] = \\|F_1\\|^2_2$ and $\\Inf_i[G] =\n        \\|G_1\\|_2^2$. Furthermore, the functions $F_0$ and $F_1$ are\n        bounded since $F_0(x) = \\E_{x^{'}} [F(x^{'}) |\n        x^{'}_{[L]\\setminus i} = x_{[L]\\setminus i} ] \\in [-1,+1]$ and\n        $F_1(x) = F(x) - F_0(x) \\in [-2,+2]$.  For $a \\in \\{0,1\\}^k$,\n        let $F_a(X) := \\prod_{j =1}^kF_{a_j}(x_j)$.  Similarly,\n        $G_0,G_1$ are bounded and $G_b$ is defined\n        analogously. Substituting these definitions in Equation\n        \\eqref{eqn:err} and expanding the products gives\n\t% \n\t$$\\err_i = \\abs{ \\sum_{a,b \\in \\{0,1\\}^k}\\left(  \\E_{X^{(i-1)},Y^{(i-1)}} \n\t\\left[F_{a}(X^{(i-1)}) G_{b}(Y^{(i-1)})\\right]  - \\E_{X^{(i)},Y^{(i)}} \\left[\n\tF_{a}(X^{(i)}) G_{b}(Y^{(i)})\\right]  \\right) }.$$\n\t%\n\tSince both the distributions are identical on\n\t$(\\Omega_1^k)^{\\otimes L}$ and $(\\Omega_2^k)^{\\otimes L}$, all\n\tterms with $a = \\bar 0$ or $b=\\bar 0$ are zero. Because the marginal of $\\mu$\n\ton any pair of coordinates, one each from the\n\t$\\Omega_1$ and $\\Omega_2$ sides, is a product distribution, terms with $|a|=|b|=1$ also\n        evaluate to zero. Now consider the remaining terms with $ |a|,|b| \\geq1,\n\t|a|+|b| > 2$. Consider one such term where $a_1 = a_2 = 1$ and $b_1\n\t=1$. 
In this case, by the Cauchy-Schwarz inequality, we have that\n\t%\n\t\\begin{align*}\n\t\\begin{split}\n\t\t\\abs{ \\E_{X^{(i-1)},Y^{(i-1)}} \\left[F_a(X^{(i-1)}) G_{b}(Y^{(i-1)})\\right]}\n\t\t \\leq &\\sqrt{\\E F_1(x_1)^2 G_1(y_1)^2}  \\\\ &\\cdot \\|F_1\\|_2  \\cdot \\left\\| \n\t\t\\prod_{j>2} F_{a_j}\\right\\|_\\infty \\cdot \\left\\| \\prod_{j> 1} G_{b_j}\\right\n\t\t\\|_\\infty.\n\t\\end{split}\n\t\\end{align*}\n\tFrom the facts that the marginal of $\\mu$ on any pair of\n\tcoordinates, one each from the $\\Omega_1$ and $\\Omega_2$ sides, is a\n\tproduct distribution, $\\Inf_i[F] = \\|F_1\\|_2^2$ and\n\t$|F_0(x)|,|F_1(x)|,|G_0(x)|,|G_1(x)|$ are all bounded by $2$,\n\tthe right-hand side above becomes\n\t\t\\begin{align*}\n \\sqrt{\\E F_1(x_1)^2} \\sqrt{\\E G_1(y_1)^2} \\cdot \\|F_1\\|_2  \\cdot \\left\\|\n\t\t\\prod_{j>2} F_{a_j}\\right\\|_\\infty \\cdot \\left\\| \\prod_{j> 1} G_{b_j}\n\t\t\\right\\|_\\infty \\leq  \\sqrt{\\Inf_i[F]^2 \\Inf_i[G]} \\cdot 2^{2k} .\n\t\\end{align*}\n\t%\n\tAll the other terms, corresponding to the remaining $(a,b)$ and\n\tat most $2^{2k}$ in number, are bounded analogously. Hence,\n\t%\n\t\\begin{align*}\n\t\t\\sum_{i \\in [L]} \\err_i &\\leq 2^{4k} \\sum_{i \\in [L]} \\left( \\sqrt{\\Inf_i[F]\n\t\t^2\\Inf_i[G]} +\\sqrt{\\Inf_i[F]\\Inf_i[G]^2} \\right)\\\\\n\t\t &= 2^{4k} \\sum_{i \\in [L]} \\sqrt{\\Inf_i[F]\n\t\t\\Inf_i[G]}\\left( \\sqrt{\\Inf_i[F]} +\\sqrt{\\Inf_i[G]} \\right).\n\t\\end{align*}\nBy\tapplying the Cauchy-Schwarz inequality, followed by a triangle inequality, we obtain\t\n \t\\begin{align*}\n\t\t\\sum_{i \\in [L]} \\err_i&\\leq 2^{4k} \\sqrt{\\sum_{i \\in [L]} \\Inf_i[F]\\Inf_i[G]}\\left(\\sqrt{\\sum_{i \\in [L]}  \\Inf_i[F]} + \\sqrt{\\sum_{i \\in \n\t\t[L]}  \\Inf_i[G]} \\right).\n\t\\end{align*}\nThe right-hand side is at most $2^{4k+1}\\, \\Gamma \\tau = 2^{O(k)}\\, \\Gamma \\tau$, which completes the proof.\n\\end{proof}\n\n\n\n", "meta": {"hexsha": "524493361ea92107b7cb014c59da5d54f416498d", "size": 9346, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/chapter2.tex", "max_stars_repo_name": "geevi/tifr-thesis", "max_stars_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/chapter2.tex", "max_issues_repo_name": "geevi/tifr-thesis", "max_issues_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/chapter2.tex", "max_forks_repo_name": "geevi/tifr-thesis", "max_forks_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-05-17T06:38:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-23T07:07:46.000Z", "avg_line_length": 46.9648241206, "max_line_length": 214, "alphanum_fraction": 0.6185533918, "num_tokens": 3763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934765, "lm_q2_score": 0.8289388019824946, "lm_q1q2_score": 0.5796391324639242}}
{"text": "% http://www.thecartech.com/subjects/machine_elements_design/keys_and_Splines.htm\n\n\\section{Shaft Fixtures and Fittings}\n\nThis section provides descriptions of a few fixtures and methods of fitting components to a shaft. You may need to use this in your exercise as well as performing your own research on other techniques that might be suitable.\n\n\\subsection{Spline}\n\n\\begin{marginfigure}\n    \\centering\n    \\includegraphics[width=\\textwidth]{09_fixtures_and_fittings/spline-shaft.png}\n    \\caption{Spline shaft}\n\\end{marginfigure}\nSplines are ridges or teeth on a drive shaft that mesh with grooves in a mating piece. This enables the transfer of high torque loads, whilst maintaining angular correspondence. There are a number of different types including:\n\n\\begin{itemize}\n    \\item Parallel key spline\n    \\item Involute spline\n    \\item Crowned splines\n    \\item Serrations\n    \\item Helical splines\n    \\item Ball splines\n\\end{itemize}\n\nHere are some calculations often used for splines\\sidenote{\\url{http://www.thecartech.com/subjects/machine_elements_design/keys_and_Splines.htm}}.\n\nThe first is the calculation of the Torque capacity for a spline and this is given by:\\marginnote{Torque Capacity of Spline}\n\n\\begin{equation}\n    T=pAr_m\n\\end{equation}\n\n\\noindent where:\n\n\\begin{description}\n    \\item[\\( p \\)] Permissibale pressure on the splines (\\(<7\\si{\\mega\\pascal}\\), if relative axial motion)\n    \\item[\\(A\\)] Total load area of splines (\\(0.5(D-d)Ln\\), \\si{\\metre^2})\n    \\item[\\(D\\)] Outer spline shaft diameter (\\si{\\metre})\n    \\item[\\(d\\)] Inner spline shaft diameter (\\si{\\metre})\n    \\item[\\(L\\)] Length of spline (\\si{\\metre})\n    \\item[\\(n\\)] Number of splines\n    \\item[\\(r_m\\)] Mean radius (\\si{\\metre})\n\\end{description}\n\nThe shear that a spline experiences can be calculated as follows:\\marginnote{Spline Shear}\n\n\\begin{equation}\n    \\tau = \\frac{4T}{Lbn(D+d)}\n\\end{equation}\n\nwhere:\n\n\\begin{description}\n    \\item[\\( F \\)] Force acting on a spline \n    \\item[\\( b \\)] Spline width\n    \\item[\\( n \\)] Number of splines\n\\end{description}\n\n\n%\\marginnote{Spline in Bending}\n\n%\\begin{equation}\n%  \\sigma_b = \\frac{3F(D-d)}{b^2Ln}\n%\\end{equation}\n\n%\\marginnote{Pressure on groove supporting surface} \n\n%\\begin{equation}\n%  d_{\\min} = \\sqrt[3]{\\frac{16T K_a S_v}{\\pi{} \\tau K_f}}\n%\\end{equation}\n\n%where:\n\n%\\begin{description}\n%  \\item[\\( d_{\\min} \\)] Minimal shaft diameter\n%  \\item[\\(T\\)] Torque (\\si{\\newton\\metre})\n%  \\item[\\(K_a\\)] Application factor\n%  \\item[\\(K_f\\)] Fatigue-life factor\n%  \\item[\\(S_v\\)] Desired safety\n%  \\item[\\(\\tau\\)] Allowable shear stress (\\si{\\pascal})\n%\\end{description}\n\n\n\\subsection{Taper Lock} \n\n\\begin{marginfigure}\n    \\centering\n    \\includegraphics[width=\\textwidth]{09_fixtures_and_fittings/taper-lock.png}\n    \\caption{Taper lock}\n\\end{marginfigure}\nThe Taper Lock bush, also referred to as a Taper bush or Taper Fit bush, is a locking mechanism commonly used in Power Transmission Drives for locating pulleys, sprockets, and couplings to shafts. \nThe Taper Lock bush is pre-bored and keyed to match the required shaft and keyway diameters. 
\nThe outside of the bush is tapered to match the component bore that is to be located on the shaft.\n\n\\subsection{Keyway} \n\nKeyways are a common method of transmitting the torque from the shaft to an attached component such as a sprocket.\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=\\textwidth]{09_fixtures_and_fittings/keys.jpg}\n    \\caption{Types of key}\n\\end{figure}\n\nWhen\\marginnote{Determining the Minimum Key Size} a torque is applied, a key experiences two stresses: a shear stress \\(\\tau\\) and a compressive stress \\(\\sigma_c\\). These can be calculated as follows:\n\n\\marginnote{key shear (\\(\\tau\\))}\n\n\\begin{equation}\n  \\tau = \\frac{T}{wLr}\n\\end{equation}\n\n\\noindent{}Where:\n\n\\begin{description}\n    \\item[\\(\\tau\\)] Key shear (\\si{\\pascal})\n    \\item[\\(T\\)] Torque (\\si{\\newton\\metre})\n    \\item[\\(w\\)] Key width (\\si{\\metre})\n    \\item[\\(L\\)] Key length (\\si{\\metre})\n    \\item[\\(r\\)] Shaft radius (\\si{\\metre})\n\\end{description}\n\n\\marginnote{Key Compressive Stress (\\(\\sigma_c\\))}\n\n\\begin{equation}\n    \\sigma_c = \\frac{T}{0.5tLr}\n\\end{equation}\n\n\\noindent{}Where:\n\n\\begin{description}\n    \\item[\\(\\sigma_c\\)] Compressive stress (\\si{\\pascal})\n    \\item[\\(T\\)] Torque (\\si{\\newton\\metre})\n    \\item[\\(t\\)] Key height (\\si{\\metre})\n    \\item[\\(L\\)] Key length (\\si{\\metre})\n    \\item[\\(r\\)] Shaft radius (\\si{\\metre})\n\\end{description}\n\n\\marginnote{Standard Key Sizes} Although you can calculate the exact dimensions required for your key, there are standards for key sizes. Therefore, you can select any size that meets your minimum requirements.\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{09_fixtures_and_fittings/key-selection.jpg}\n    \\caption{An example of a key selection chart}\n\\end{figure}\n\n\\subsection{Circlip}\n\nCirclips are a type of fastener consisting of a semi-flexible metal ring with open ends, which can be snapped into a machined groove on a dowel pin or other part to permit rotation but to prevent lateral movement. 
\nThere are two basic types: internal and external, referring to whether they are fitted into a bore or over a shaft.\nCirclips are often used to secure pinned connections.\nThere are many vendors that supply circlips and provide information, such as the sheet in \\cref{tbl-circlip}, to aid in the selection of the appropriately sized clip for your needs.\n\n\\begin{table}[h!]\n    \\caption{An example of an external circlip information sheet}\\label{tbl-circlip}\n    \\centering\n    \\scriptsize\n    \\includegraphics[width=0.8\\textwidth]{09_fixtures_and_fittings/circlip-dimensions.png}\n    \\begin{tabular}{r r r r r r r r r r r r r r r r r}\n    \\toprule\n    \\multicolumn{1}{p{0.05\\textwidth}}{Nom size (\\si{\\milli\\metre})} & \n    \\multicolumn{9}{c}{Circlip Dimensions (\\si{\\milli\\metre})} &\n    \\multicolumn{5}{c}{Groove Dimensions (\\si{\\milli\\metre})} & \n    \\multicolumn{1}{p{0.05\\textwidth}}{Groove Strength} & \n    \\multicolumn{1}{p{0.05\\textwidth}}{Circlip Strength} \\\\\n    \\midrule\n    $d_1$ & $s$ & $s$ (tol) & $d_3$ & $d_3$ (tol) & $a_{\\max}$ & $b$ & $d_{\\min}$ & $C1$ & $C2$ & $d_2$ & $d_2$ (tol) & $m_{\\min}$ & $t$ & $n$ & $F_n$ (kN) & $F_r$ (kN) \\\\\n    \\midrule\n    \n    17 & 1,00 & 0,00 & 15,7 & +0,10 & 3,8 & 2,3 & 1,7 & 25,0 & 23,8 & 16,2 & 0,00 & 1,10 & 0,40 & 1,2 & 3,4 & 8,0 \\\\\n    & & -0,06 & & -0,43 & & & & & & & -0,11 & & & & & \\\\\n    \n    18 & 1,20 & 0,00 & 16,5 & +0,10 & 3,9 & 2,4 & 2,0 & 26,2 & 24,8 & 17,0 & 0,00 & 1,30 & 0,50 & 1,5 & 4,5 & 17,00 \\\\\n    & & -0,06 & & -0,44 & & & & & & & -0,11 & & & & & \\\\\n    \n    19 & 1,20 & 0,00 & 17,5 & +0,10 & 3,9 & 2,5 & 2,0 & 27,2 & 25,8 & 18,0 & 0,00 & 1,30 & 0,50 & 1,5 & 4,5 & 17,00 \\\\\n    & & -0,06 & & -0,45 & & & & & & & -0,11 & & & & & \\\\\n    \n    \\multicolumn{17}{c}{\\ldots} \\\\\n    \n    \\bottomrule\n    \n    \\end{tabular}\n\\end{table}\n\n", "meta": {"hexsha": "d11b2b3ae9355da023e15b8a5610a0ad83d680cf", "size": 6891, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "09_fixtures_and_fittings/section.tex", "max_stars_repo_name": "JamesGopsill/ShaftDesignCourseNotes", "max_stars_repo_head_hexsha": "0249e0804538237df58e5a7f99d039b35b30099d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "09_fixtures_and_fittings/section.tex", "max_issues_repo_name": "JamesGopsill/ShaftDesignCourseNotes", "max_issues_repo_head_hexsha": "0249e0804538237df58e5a7f99d039b35b30099d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "09_fixtures_and_fittings/section.tex", "max_forks_repo_name": "JamesGopsill/ShaftDesignCourseNotes", "max_forks_repo_head_hexsha": "0249e0804538237df58e5a7f99d039b35b30099d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.0483870968, "max_line_length": 226, "alphanum_fraction": 0.6684080685, "num_tokens": 2280, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8289388146603365, "lm_q1q2_score": 0.5796391309384471}}
{"text": "\n\\subsection{Introduction}\n\nA pivotal quantity is a statistic whose distribution does not depend on the parameters of the underlying distribution.\n\nFor example, the z statistic if the underyling distribution is a normal distribution.\n\n", "meta": {"hexsha": "0f3f9d1f23852cf9b16be3e062fc7547ef5a9bc8", "size": 235, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/pivotal/01-01-pivotal.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/pivotal/01-01-pivotal.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/pivotal/01-01-pivotal.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.375, "max_line_length": 118, "alphanum_fraction": 0.8212765957, "num_tokens": 42, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.828938799869521, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.5796391154006467}}
{"text": "\\documentclass[11pt]{amsart}\n\\usepackage{enumitem}\n\\usepackage{setspace}\n\\onehalfspacing\n\\usepackage{natbib}\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n%\\geometry{landscape}                % Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{epstopdf}\n\\usepackage[linesnumbered,vlined,algoruled]{algorithm2e}\n\\usepackage{url}\n\\usepackage{mathtools}\n\\DeclareMathOperator{\\sgn}{sgn}\n\\DeclarePairedDelimiter{\\ceil}{\\lceil}{\\rceil}\n\\DeclarePairedDelimiter{\\floor}{\\lfloor}{\\rfloor}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator{\\argmin}{arg\\,min}\n\\setlength{\\parskip}{0.5 em}\n\n\n\\title{The Size of a $t$-Digest}\n\\author{Ted Dunning}\n\\address{Ted Dunning \\\\ MapR Technologies, Inc \\\\ San Jose, CA}\n\\email{ted.dunning@gmail.com}\n\\date{}                                           % Activate to display a given date or no date\n\n\\begin{document}\n\\begin{abstract}\nA $t$-digest is a compact data structure that allows estimates of quantiles which increased accuracy near $q = 0$ or $q=1$. This is done by clustering samples from $\\mathbb R$ subject to a constraint that the number of points associated with any particular centroid is constrained so that the so-called $k$-size of the centroid is always $\\le 1$. The $k$-size is defined using a scale function that maps quantile $q$ to index $k$. This paper provides bounds on the sizes of $t$-digests created using any of four known scale functions.\n\\end{abstract}\n\\maketitle\n\\section{Introduction}\nWe examine four cases for the $t$-digest \\citep{t-digest-arxiv} scale function in order to produce estimates of the number of centroids and the maximum size of these centroids. The functions include\n\\[\n\\begin{aligned}\nk_0(q) &= \\frac \\delta 2 q \\\\\nk_1(q) &= \\frac \\delta {2\\pi}  \\sin^{-1}(2q-1)   \\\\\nk_2(q) &= \\frac \\delta {Z(n)} \\log {\\frac q {1-q}} \\\\\nk_3(q) &= \\frac \\delta {Z(n)}\\begin{cases}\n\\quad \\log 2q & \\text{if  } q \\le 1/2 \\\\\n- \\log 2(1-q) & \\text{if  } q > 1/2\n\\end{cases}\n\\end{aligned}\n\\]\nWhere $Z$ is a normalization value that may depend on the number of data points seen so far and is intended to bound the size of the $t$-digest, at least in a practical sense.\n\\section{General considerations}\nFigure \\ref{fig:k-q-plot-full} shows how a scaling function converts quantiles (the $q$ axis) into $k$ values. In this figure, values of $k$ are marked  and how unit steps in $k$ cause variable sized steps in $q$. This limit is the fundamental idea behind $t$-digests together with the idea of representing the distribution of interest by a set of centroids, each of which summarizes a localized subset of the samples. Making some subsets smaller means that the error due to summarizing the distribution is reduced near the extreme values of $q$.\n\\begin{figure}[htbp] %  figure placement: here, top, bottom, or page\n   \\centering\n   \\includegraphics[width=2.3in]{../t-digest-paper/figures/k-q-plot.pdf} \n   \\caption{The scaling function translates the quantile $q$ to the scaling factor $k$ in order to give variable size steps in $q$. 
This limits the number of samples that are included in any particular centroid, particularly near $q=0$ or $q=1$, which in turn allows more accurate estimation of the distribution near these limits. }\n   \\label{fig:k-q-plot-full}\n\\end{figure}\n\nThese boundaries in $q$ are, however, not precisely realizable in practice since the centroids are made up of an integral number of samples. As such, centroids are limited in size to be smaller than the idealized size, except in the case of single samples. Importantly, centroids with only a single point may have $k$-size larger than $1$. Such centroids do not, however, contribute to excess error since we know the exact position of such lone centroids.\n\nFigure \\ref{fig:k-q-limits} illustrates how this works out in practice. Take the case of one particular centroid $\\mathcal C$: if we assume that the samples that contribute to all of the centroids with smaller mean values are less than the samples that define $\\mathcal C$, and similarly that all of the samples for centroids with larger mean values are greater than all of the samples in $\\mathcal C$, then we can make some statements about how many samples can be in $\\mathcal C$. Depending on how the $t$-digest was constructed, this assumption may not be quite true, but it will be close to true.\n\\begin{figure}[htbp] %  figure placement: here, top, bottom, or page\n   \\centering\n   \\includegraphics[width=3.5in]{../t-digest-paper/k-q-diagram/k-q-limits.pdf} \n   \\caption{Looking in detail at a magnified part of the scaling function, taking unit steps in $k$ gives idealized boundaries for centroids, but the weights of centroids are  constrained to integral values. }\n   \\label{fig:k-q-limits}\n\\end{figure}\nFor the purpose of this illustration, there are $i$ samples in centroids with smaller means than $\\mathcal C$, and we mark this as point $q_1$. Point $q_1$ maps to the value $k$, and $k+1$ maps back to $q_2$, which is the limit on the size of the centroid. Here it is clear that $\\mathcal C$ can contain only two samples without extending past $q_2$ and thus being too large to be a valid part of a $t$-digest.\n\nAs can be seen from these figures, if the scaling curve is too steep, then $q_2-q_1 < 1/n$, so $\\mathcal C$ will have only a single sample. This observation turns out to be very important when calculating how many samples a centroid will use. Figure \\ref{fig:k-q-slope} shows the effect of this. All practically important scaling functions become very steep near the boundaries so there will be some number of unit weight centroids at the ends of any $t$-digest. \n\n\\begin{figure}[htbp] %  figure placement: here, top, bottom, or page\n   \\centering\n   \\includegraphics[width=3.5in]{../t-digest-paper/k-q-diagram/slope-limiting.pdf} \n   \\caption{When the slope becomes too steep, centroids have unit weight. The effect is that the scaling function (black and gray thick line) is effectively modified (dashed lines) so that the slope is at most $n$, the total number of samples.  }\n   \\label{fig:k-q-slope}\n\\end{figure}\nExactly how many such unit centroids there are likely to be is a very important factor in characterizing the $t$-digest and can be computed by finding points where the scaling function exceeds a critical slope. Once a step from $q=i/n$ to $q'=(i+1)/n$ results in an increase in $k$ greater than one, no centroid can have more than one sample. 
The critical slope is thus $n$, the total number of samples.\n\nIn the next sections, these ideas will be used to determine specific limits on $t$-digest size for each of the important scaling functions listed at the beginning of this paper.\n\n\\section{Size bound for $k_0$}\nThe slope of $k_0$ is constant and whenever that slope is more than $n$, all centroids will have unit weight. This happens whenever $n \\le \\delta/2 $ so for this case the number of centroids is just $n$. \n\nIn a mid-range where $\\delta/2 < n < \\delta$, the $k$-size of unit centroids is less than $1$, but the $k$-size of a centroid with weight more than $1$ is larger than $1$ so no centroids can combine without violating the $t$-digest size limit. Thus the number of centroids is $n$ in this range as well.\n\nIf we take $  n = \\mu \\delta /2$  where $\\mu \\ge 2$, all centroids will have an average weight of at least $ \\mu/2 $ but none will have a weight larger than $ \\mu $, and thus the number of centroids $m$ satisfies $\\delta/2 \\le m \\le \\delta$.\n\nIn any of these three cases, for scaling function $k_0$ the number of centroids in a  $t$-digest will be in the range $\\left[\\min( n,\\delta/2), \\min(n,\\delta)\\right]$.\n\nBased on the logic above, the upper bound for the weight of any centroid for $k_0$ is $w \\le 2 n / \\delta$. \n\\section{Size bound for $k_1$}\nWe know that $k_1(1) -  k_1(0) = \\delta/2$. Since no centroid can have $k$-weight more than $1$ and the average $k$-weight cannot be less than $1/2$, we know that the number of centroids is in $\\left[ \\delta/2 , \\delta \\right]$ if $n > \\delta$. \n\nThe slope of $k_1(q)$ is $\\delta / ( 2\\pi \\sqrt{q(1-q)} \\, )$. The minimum slope is \n$\\delta / \\pi$ at $q=1/2$ so if $n \\le \\delta/\\pi$, no centroid will have a weight of more than one.\n\nFor $\\delta/\\pi < n \\le \\delta$ the slope at $1/2$ will be such that some centroids can have a weight more than one.\n\\subsection{Maximum centroid weight}\n\nThe maximum weight of centroids can be computed by considering a centroid centered in terms of $k$ around $k(q) = \\kappa$. \n\nThis centroid can span a range of $q$ of at most\n\\[\n\\Delta q = k^{-1}(\\kappa+1/2) - k^{-1}(\\kappa-1/2)\n\\]\nBut we know that\n\\[\nk^{-1}(\\kappa) = \\frac 1 2 \\left(   \\sin \\frac {2\\pi \\kappa} \\delta + 1  \\right)\n\\]\nSo using multiple angle identities\n\\[\n\\begin{aligned}\n\\Delta q &= \\frac 1 2 \\left(\n  { \\sin \\frac {2\\pi (\\kappa+1/2)} \\delta  }  -   { \\sin \\frac {2\\pi (\\kappa-1/2)} \\delta  } \n  \\right) \\\\\n&= \\frac 1 2 \\Biggl( \n\\left( \\sin \\frac {2 \\pi} \\delta \\kappa \\cos \\frac \\pi \\delta  + \\cos \\frac {2 \\pi} \\delta \\kappa \\sin \\frac \\pi \\delta  \\right) -\n\\left( \\sin \\frac {2 \\pi} \\delta \\kappa \\cos \\frac  \\pi  \\delta  - \\cos \\frac {2 \\pi} \\delta \\kappa \\sin \\frac  \\pi  \\delta  \\right) \n\\Biggr) \\\\\n&=  \n\\sin \\frac \\pi \\delta    \\cos \\frac {2 \\pi} \\delta \\kappa =   2 \\left( \\sin \\frac \\pi \\delta \\right)    \\sqrt{q(1-q)}\\\\\n\\end{aligned}\n\\]\nThe maximum weight of this centroid is  $w_{\\max} = n \\Delta q$.\n\nNote, however, that the value of $q$ cannot be known precisely from the information we have in a $t$-digest for a centroid. 
If, indeed, we did know lower and upper bounds $q_{\\min}$ and $q_{\\max}$ for the points associated with a centroid in a $k_1$ digest, we would be able to use whichever value of $q$ is furthest from $1/2$ to get a conservative limit $w_{\\max}$.\n\\[\nq_{\\text {worst}} = \\begin{cases}\n q_{\\min} & \\text{if } | q_{\\min} - 1/2 | > | q_{\\max} - 1/2 | \\\\\n q_{\\max} & \\text {otherwise}\n\\end{cases}\n\\]\nUnfortunately, it is not possible to know the actual values of $q_{\\min}$ and $q_{\\max}$ in most practical situations such as when a $t$-digest has been created by merging digests. For a particular centroid $\\mathcal C$, if we take $\\mathcal W_{\\text {left}}$ to be the sum of the weights of centroids with means smaller than $\\mathcal C$ and $\\mathcal W$ to be the weight of $\\mathcal C$, then these estimated values are useful:\n\\[\n\\begin{aligned}\nq_{\\min} &\\approx {\\mathcal W_{\\text {left}}} / n \\\\\nq_{\\max} &\\approx (\\mathcal W + \\mathcal W_{\\text {left}}) / n  \n\\end{aligned}\n\\]\nFrom that \n\\[\nw_{\\max} \\le 2n \\left( \\sin \\frac \\pi \\delta\\right) \\sqrt{q_{\\text {worst}} (1-q_{\\text {worst}})}\n\\]\nNote that in any case, we have no solid guarantees about the distribution of the samples for a particular centroid so that this size limit remains somewhat heuristic in nature.\n\\section{Size bounds for $k_2$}\nUnlike $k_1$, both $k_2$ and $k_3$ have unbounded tails at $q=0$ and $q=1$ so we require a different argument to compute the size of a $t$-digest for $k_2$ and $k_3$. Because the smallest weight a centroid can have is $1$ corresponding to a change in $q$ of $\\Delta q = 1/n$, the effective slope of the scaling function is limited to a value of $n$ and thus the range from minimum to maximum $k$  is finite. For suitably chosen  $Z(n)$, the size is not only finite, but bounded for any value of $n$.\n\nWherever $k_2'(q) > n$, a centroid cannot have a weight greater than $1$. In particular, if the smallest value of the slope is this large, all centroids must have unit weight. By definition,\n\\[\n\\begin{aligned}\nk_2(q) &= \\frac \\delta {Z(n)} \\log {\\frac q {1-q}} \\\\\nk_2'(q) &= {\\frac  {\\delta/Z(n)} {q (1-q) }  } \\\\\n\\end{aligned}\n\\]\nThe slope is least at $q=1/2$ so if $ n < 4  \\delta / Z$ all centroids have unit weight.\n\nOn the other hand, even if $n$ is considerably larger than this limit, some centroids near $q=0$ or $q=1$ will still have unit weight because the slope of $k(q)$ increases without bound and thus will ultimately exceed any value of $n$ for $q$ sufficiently near $0$ or $1$. We can approximate the region $[q_1, q_2]$ where centroids can have weights larger than $1$ by examining where the slope of $k(q)$ crosses the critical value of $n$.\n\\[\n\\begin{aligned}\n{\\frac  {\\delta/Z(n)} {q_i (1-q_i) }  } &= n \\\\\nq_i^2 - q_i + \\frac {\\delta/Z(n)} n &= 0 \\\\\nq_i &= \\frac 1 2 \\left( { 1 \\pm \\sqrt { 1 - 4 \\frac {\\delta/Z(n)} n } } \\, \\right)\n\\end{aligned}\n\\]\nThe discriminant here will always be positive for the values $n$ we are examining so we have two solutions symmetric around $1/2$. Moreover, if $n \\gg 4\\delta / Z$\n\\[\n\\begin{aligned}\n\\sqrt { 1 - 4 \\frac {\\delta/Z(n)} n } &\\approx 1 - 2 \\frac {\\delta / Z(n)} n \\\\\nq_1 &\\approx {   \\frac {\\delta/Z(n)} n  } \\,  \\\\\nq_2 &= 1-q_1\n\\end{aligned}\n\\]\nThis means that there will be roughly $\\delta/Z(n)$ unit centroids at each extreme with the remaining centroids having a weight greater than one. 
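\n\nAs a quick numerical illustration of this approximation (a sketch only, not part of the analysis; the values of $n$, $\\delta$ and $Z$ below are arbitrary choices), the exact root $q_1$ can be compared against the first-order approximation $q_1 \\approx \\frac{\\delta/Z(n)}{n}$ in a few lines of Python:\n\\begin{verbatim}\nimport math\n\n# Exact root of q^2 - q + (delta/Z)/n = 0 versus the\n# first-order approximation q1 ~ (delta/Z)/n derived above.\ndef crossing_point(n, delta, Z):\n    c = (delta / Z) / n\n    disc = math.sqrt(1.0 - 4.0 * c)  # positive when n > 4*delta/Z\n    return 0.5 * (1.0 - disc), c     # (exact q1, approximate q1)\n\nexact, approx = crossing_point(n=10**6, delta=100, Z=1.0)\nprint(exact, approx)  # nearly identical when n >> 4*delta/Z\n\\end{verbatim}\n\n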
We can compute an approximate bound on the number of larger centroids\n\\[\n\\begin{aligned}\nm_{w>1} &\\le 2\\left( k(q_2) -k(q_1) \\right) \\\\ \n&\\le  \\frac {2\\delta} {Z(n)} \\log {\\frac {q_2 (1-q_1)} {(1-q_2) q_1}} = \\frac {2\\delta} {Z(n)} \\log { {q_2^2} / { q_1^2}} \\\\\n&\\approx  \\frac {4 \\delta} {Z(n)} \\left(\\log  n/ \\delta + \\log Z(n)\\right)\n\\end{aligned}\n\\]\nAdding back in the unit clusters\n\\[\nm \\le  \\frac {4 \\delta} {Z(n)} \\left(\\log  n/ \\delta + \\log Z(n) + 1/2\\right)\n\\]\n\nIf we pick $Z(n) = 1$, then the total number of centroids grows with increasing $n$ and is roughly bounded by\n\\[\nm \\le 4\\delta \\left( \\log n/\\delta + 1/2 \\right)\n\\]  \nOn the other hand, if we pick $Z(n)$ so that it grows with $\\log n$ then $\\log Z(n)$  will grow extremely slowly. Specifically, choosing $Z(n) = 4 \\log n/\\delta + 24$ means that \n\\[\n\\frac {\\log n / \\delta + \\log Z(n)+1/2} {Z(n)}  < 1/4\n\\]\nfor all values of $n/\\delta < 9.1 \\times 10^{23}$, which more than covers the range of practical utility.\n\nThis inequality means that the total number of centroids will be bounded \n\\[\n\\begin{aligned}\nm&\\le   \\frac {4 \\delta} {Z(n)} \\left(  \\log  n/ \\delta + \\log Z(n) + 1/2\\right) \\\\\n&\\le  \\delta\n\\end{aligned}\n\\]\n\\subsection{Maximum centroid weight}\n\nThe maximum weight for a centroid centered at $k(q) = \\kappa$ can be estimated using a Taylor expansion\n\\[\n\\begin{aligned}\nw_{\\max} &\\le n \\Delta q \\\\\n\\Delta q &= k^{-1}(\\kappa + 1/2) - k^{-1}(\\kappa-1/2) \\\\\n&\\approx \\left . \\frac {dq} {dk} \\right\\rvert_{\\kappa} = \\left[ 1 \\Bigg \\slash {\\frac {dk} {dq}} \\right]_q\\\\\n&\\approx \\frac {Z(n)} \\delta q(1-q) \\\\\n\\end{aligned}\n\\]\nAs before, in practice, we cannot know the precise value of $q = k^{-1}(\\kappa)$ so we typically use $q_{\\text {worst}}$ to get a conservative limit.\n\\section{Size bounds for $k_3$}\nThe scale function $k_3$ is\n\\[\nk_3(q) = \\frac \\delta {Z(n)}\\begin{cases}\n\\quad \\log 2q & \\text{if  } q \\le 1/2 \\\\\n- \\log 2(1-q) & \\text{if  } q > 1/2\n\\end{cases}\n\\]\nBy symmetry, we can consider just the first branch in our calculations. In this range, the slope is\n\\[\nk_3'(q) = \\frac \\delta { q Z(n)}  \n\\]\nAll centroids with $q \\le \\frac \\delta {2 n Z(n)}$ have unit weight. When $n \\le \\frac \\delta {Z(n)}$, this region extends to $q = 1/2$ forcing all centroids to have unit weight. For larger $n$, we define $q_1$ as before (and $q_2$ by symmetry, of course)\n\\[\n\\begin{aligned}\nq_1 &= \\frac \\delta { n Z(n)} \\\\\nq_2 &= 1-q_1\n\\end{aligned}\n\\]\nThe total number of centroids is thus\n\\[\n\\begin{aligned}\nm  &\\le  n q_1 + n (1-q_2) + 2(k(q_2) - k(q_1)  ) \\\\\n  &\\le 2\\left(  n q_1 + (k(q_2) - k(q_1)  ) \\right)\\\\\n&= \\frac { 2\\delta} { Z(n)} \\left(1  -2\\log 2 q_1\\right)\\\\\n&= \\frac {4\\delta} { Z(n)} \\left(1/2- \\log 2 + \\log  \\frac  { n }\\delta + \\log Z(n)\\right)\n\\end{aligned}\n\\]\nFor $Z(n)=1$, we have unbounded logarithmic growth in the number of centroids. 
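\n\nTo see this growth concretely (an illustrative sketch only; the parameter values are arbitrary and, as in the formulas above, natural logarithms are used), the $k_3$ bound just derived can be tabulated for $Z(n)=1$:\n\\begin{verbatim}\nimport math\n\n# Bound on the number of centroids for k_3:\n# m <= (4*delta/Z) * (1/2 - log 2 + log(n/delta) + log Z)\ndef m_bound(n, delta, Z=1.0):\n    return (4.0 * delta / Z) * (0.5 - math.log(2.0)\n                                + math.log(n / delta) + math.log(Z))\n\nfor n in (10**4, 10**6, 10**8):\n    print(n, m_bound(n, delta=100))  # grows logarithmically with n\n\\end{verbatim}\n\n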
Alternatively, if we choose $Z(n) = 4\\log n/\\delta + 21$  and take $n/\\delta < 10^{20}$ as a reasonable restriction on the area of applicability, we get a bound of $m \\le \\delta$.\n\\subsection{Maximum centroid weight}\nWe can follow the same method as with $k_2$ to find the maximum weight for any centroid.\n\nThe maximum weight for a centroid centered at $k(q) = \\kappa$ can be estimated using a Taylor expansion\n\\[\n\\begin{aligned}\nw_{\\max} &\\le n \\Delta q \\\\\n\\Delta q &= k^{-1}(\\kappa + 1/2) - k^{-1}(\\kappa-1/2) \\\\\n&\\approx \\left . \\frac {dq} {dk} \\right\\rvert_{\\kappa} = \\left[ 1 \\Bigg \\slash {\\frac {dk} {dq}} \\right]_q\\\\\n&\\approx \\frac {Z(n)} \\delta \\min (q, (1-q)) \\\\\n\\end{aligned}\n\\]\nAs before, $q$ is not known precisely in practice so we use $q_{\\text {worst}}$ to get a conservative limit.\n\\[\nw_{\\max}\\le \\frac {nZ(n)} \\delta \\min (q_{\\text{worst}}, (1-q_{\\text{worst}})) \n\\]\n\\bibliographystyle{chicago}\n\\bibliography{refs}{}\n\n\n\\end{document}  \n", "meta": {"hexsha": "6daa10b54f288c782b358ba5019182812255b5b3", "size": 16924, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proofs/sizing.tex", "max_stars_repo_name": "slandelle/t-digest", "max_stars_repo_head_hexsha": "8630c2b678ae5d2c055ee26bea6cf0af79c55fa6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1479, "max_stars_repo_stars_event_min_datetime": "2015-01-05T09:42:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T04:20:21.000Z", "max_issues_repo_path": "docs/proofs/sizing.tex", "max_issues_repo_name": "slandelle/t-digest", "max_issues_repo_head_hexsha": "8630c2b678ae5d2c055ee26bea6cf0af79c55fa6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 141, "max_issues_repo_issues_event_min_datetime": "2015-01-02T14:02:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-03T22:54:11.000Z", "max_forks_repo_path": "docs/proofs/sizing.tex", "max_forks_repo_name": "slandelle/t-digest", "max_forks_repo_head_hexsha": "8630c2b678ae5d2c055ee26bea6cf0af79c55fa6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 222, "max_forks_repo_forks_event_min_datetime": "2015-01-08T23:52:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T04:50:59.000Z", "avg_line_length": 63.1492537313, "max_line_length": 604, "alphanum_fraction": 0.6890215079, "num_tokens": 5295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387956435735, "lm_q2_score": 0.6992544085240401, "lm_q1q2_score": 0.5796391072503772}}
{"text": "\\section{HDIM of Feistel Networks}\n\\SecLabel{hdim-feistel}\n\nThe Feistel Network is a rather asymmetric and imbalanced structure. After any round, the left branch is a result of more computations, and the right branch is ``weaker'' in this sense. Often it may occur that, after a particular number of rounds, the left branch has full algebraic degree, while the right branch is still of incomplete degree. This can be seen as the maximum number of rounds available for an integral distinguisher, since after the next round the strong left branch is mixed into the weak right branch, and, in general, we can expect both branches to have full degree. To exploit the imbalance of Feistel Networks, we analyze the degree of the ``weak'' right branch.\n\n\\begin{definition}\nLet $\\maxdegnbij{r}{d}$ denote the maximum possible degree of the right output branch of a Feistel Network with $r$ rounds and Feistel functions of degree at most $d$. Let $\\maxdegbij{r}{d}$ denote the maximum possible degree in the case when the Feistel functions are bijective:\n\\eq{\n& \\maxdegnbij{r}{d} = \\max_{F \\in \\nbij{r}{d}} \\deg{(\\RB\\circ F)}, \\\\\n& \\maxdegbij{r}{d} = \\max_{F \\in \\bij{r}{d}} \\deg{(\\RB \\circ F)}.\n}\n\\end{definition}\n\n\\begin{remark}\nThe maximum degree of the left branch is equal to the maximum degree of the right branch in the next round, since the branch is transferred untouched.\n\\end{remark}\n\n\\begin{remark}\nThe exact values of $\\maxdegnbij{r}{d},\\maxdegbij{r}{d}$ are hard to compute. However, upper bounds can be computed using different methods. Improving further the upper bounds should lead to strengthening the results from this chapter.\n\\end{remark}\n\nThe definition of HDIM leads to an interesting insight into proving absence of particular monomials of maximum degree (i.e. $n-1$ for $n$-bit permutations), or, equivalently, zeroes in the HDIM itself. The idea is to split the computed function into two roughly equal parts. Then the algebraic/integral distinguisher exists when the \\emph{sum} of the degrees of the two parts is less than the block size $n$. In the original case, the parts are composed and the degrees are roughly \\emph{multiplied}, i.e. 
the distinguisher is found when the \\emph{product} of the degrees is less than $n-1$.\n\nIn this section the utility functions $\\thetanbij{r}{d}$ and $\\thetabij{r}{d}$ will be used to describe the conditions when the integral distinguisher exists.\n\n\\begin{definition}\nThe functions $\\thetanbij{r}{d}, \\thetabij{r}{d}$ are defined as follows:\n\\begin{align*}\n    \\thetanbij{r}{d} &= \\maxdegnbij{\\floor{r/2}}{d} + \\maxdegnbij{\\ceil{r/2}}{d},\\\\\n    \\thetabij{r}{d} &= \\maxdegbij{\\floor{r/2}}{d} + \\maxdegbij{\\ceil{r/2}}{d}.\n\\end{align*}\n\\end{definition}\n\nThe following lemma shows a simple upper bound obtained from the product bound for compositions.\n\n\\begin{lemma}\n\\Label{lem:bound}\nA Feistel Network with $r \\ge 1$ rounds and degree-$d$ round functions has degree at most $d^r$ on the left output branch and degree at most $d^{r-1}$ on the right output branch:\n$$\n\\maxdegbij{r}{d} \\le \\maxdegnbij{r}{d} \\le d^{r-1}.\n$$\nIn particular,\n$$\n\\thetabij{r}{d} \\le \\thetanbij{r}{d} \\le d^{\\floor{r/2}-1} + d^{\\ceil{r/2}-1}.\n$$\n\\end{lemma}\n\n\\begin{remark}\nIn the case of Feistel Networks, the block size is assumed to be $2n$, whereas in discussions about general permutations, the block size is $n$.\n\\end{remark}\n\nThe HDIM-based distinguishers that we exhibit in this section have the same structure: if the conditions of a distinguisher are satisfied, then the $2n\\times 2n$ HDIM has the form $\\begin{bmatrix}? & 0 \\\\ 0 & 0\\end{bmatrix}$ as a $2\\times 2$ block-matrix. Such a distinguisher automatically extends to one more round, leading to an HDIM of the form $\\begin{bmatrix}? & ? \\\\ ? & 0\\end{bmatrix}$. This is formalized in the following definition and a lemma.\n\n\\begin{definition}[Type-I, Type-II Distinguishers]\nLet $S\\colon \\field{2n} \\to \\field{2n}$. Then\n\\begin{itemize}\n    \\item $S$ is said to have the type-I distinguisher if\n    $$\\HDIM{S}[i,j] = 0 ~\\text{for}~ n < i \\le 2n ~\\text{\\bf or}~ n < j \\le 2n;$$\n    \\item $S$ is said to have the type-II distinguisher if\n    $$\\HDIM{S}[i,j] = 0 ~\\text{for}~ n < i \\le 2n ~\\text{\\bf and}~ n < j \\le 2n.$$\n\\end{itemize}\n\\end{definition}\n\n\\begin{lemma}\n\\Label{lem:type-ext}\nLet $r\\ge 1$, $S_r \\in \\nbij{r}{d}$ and $S_{r+1} \\in \\nbij{r+1}{d}$ be $2n$-bit permutations such that\n$$\nS_{r+1} = \\Swap \\circ R_{f} \\circ \\Swap \\circ S_r\n$$\nfor some function $f\\colon \\field{n} \\to \\field{n}$.\n\nIf $S_{r}$ has the type-I distinguisher, then $S_{r+1}$ has the type-II distinguisher.\n\\end{lemma}\n\\begin{proof}\nSince $\\RB \\circ S_{r+1} = \\LB \\circ S_r$, the last $n$ rows of $\\HDIM{S_{r+1}}$ are the same as the first $n$ rows of $\\HDIM{S_r}$.\n\\end{proof}\n\n% ===================================================\n\n\\subsection{General Case}\n\nThe following theorem applies the described ideas to general Feistel Networks.\n\n\\begin{theorem}\n\\Label{thm:nbij}\nAny $S \\in \\nbij{r}{d}$ has the type-I distinguisher if $$\\thetanbij{r+1}{d} < 2n.$$\n\nSimilarly, any $S \\in \\bij{r}{d}$ has the type-I distinguisher if $$\\thetabij{r+1}{d} < 2n.$$\n \n\\end{theorem}\n\\begin{proof}\nLet $(a,b) \\in \\field{2n}$ denote the intermediate state of $S$ after $\\floor{r/2}$ rounds (see \\FigRef{mitmproof}). Let $x_l(a,b), x_r(a,b)$ denote the input branches of $S$ as functions of $a,b$, and $y_l(a,b),y_r(a,b)$ the output branches of $S$ as functions of $a,b$. 
We now perform the variable replacement in the definition of HDIM:\n$$\n\\HDIM{S}[i,j] =\n\\bigoplus_{x\\in \\field{2n}} \\inprod{e_i, S(x)} \\inprod{e_j, x} =\n\\bigoplus_{(a,b)\\in \\field{2n}} \\inprod{e_i, (y_l,y_r)(a,b)} \\inprod{e_j, (x_l,x_r)(a,b)}.\n$$\nOur goal is to prove a bound on the algebraic degree of the product of the two inner products.\n\nVariables $x_l,x_r,y_l,y_r$ can be computed using a Feistel Network with $a,b$ as inputs: $(x_l,x_r) \\circ \\Swap \\in \\nbij{\\floor{r/2}}{d}$ and $(y_l,y_r) \\in \\nbij{\\ceil{r/2}}{d}$. Therefore, they have the following degree bounds:\n\\begin{itemize}\n    \\item $\\deg{x_l} \\le \\maxdegnbij{\\floor{r/2}+1}{d}$, $\\deg{x_r} \\le \\maxdegnbij{\\floor{r/2}}{d}$,\n    \\item $\\deg{y_l} \\le \\maxdegnbij{\\ceil{r/2}+1}{d}$,          $\\deg{y_r} \\le \\maxdegnbij{\\ceil{r/2}}{d}$.\n\\end{itemize}\n\nThe zeroes in the HDIM required for the type-I distinguisher correspond to products $x_r\\cdot y_r$, $x_r \\cdot y_l$ and $x_l \\cdot y_r$. It is enough to prove the case $n < i \\le 2n$, since the inverse of $S$ is also a Feistel Network and thus the transpose of $\\HDIM{S}$ will have the same zeroes. The case corresponds to the products $x_r\\cdot y_r$ and $x_l \\cdot y_r$. It follows that $\\HDIM{S}[i,j] = 0$ if $n < i \\le 2n$ and $\\maxdegnbij{\\floor{r/2}+1}{d} + \\maxdegnbij{\\ceil{r/2}}{d} < 2n$. The condition is equivalent to $\\thetanbij{r+1}{d} < 2n$.\n\\end{proof}\n\n\\FigTex{proof.tex}\n\n\\begin{corollary}\nAny $S \\in \\nbij{r}{d}$ has the type-I distinguisher if $$d^{\\floor{r/2}} + d^{\\ceil{r/2}-1} < 2n,$$\nand the type-II distinguisher if \n$$d^{\\floor{r/2}-1} + d^{\\ceil{r/2}-1} < 2n.$$\n\\end{corollary}\n\\begin{proof}\nPutting the bound from Lemma~\\Ref{lem:bound} in Theorem~\\Ref{thm:nbij} makes the proof. For the type-II distinguisher the result follows from Lemma~\\Ref{lem:type-ext}.\n\\end{proof}\n\n% ===================================================\n\n\\subsection{Bijective Feistel Functions}\n\nIn the case when the Feistel functions are bijective, an additional trick may be used. The intermediate state variables can be chosen in an alternative way by exploiting the fact that the middle Feistel function is invertible. However, in this case we need to know an upper bound on the degree of the \\emph{inverse} of the middle Feistel function.\n\nIn what follows, the upper bound on the algebraic degree of the inverse of the Feistel function is denoted by $\\dinv$.\n\n\\begin{theorem}\n\\Label{thm:bij}\nAny $S \\in \\bij{r}{d}$ has the type-I distinguisher if\n$$\n    \\max(d,\\dinv)\\cdot \\thetabij{r-2}{d} < 2n.\n$$\n\\end{theorem}\n\\begin{proof}\nThe proof is analogous to the proof of Theorem~\\Ref{thm:nbij}, except that the choice of intermediate variables differs.\nInstead of choosing both left and right branches of the input of the middle round, it is possible to choose the left branch of the input and the right branch of the output. The variables chosen are $(a,c)$ instead of $(a,b)$ (see \\FigRef{mitmproof}). In this case $b$ can be expressed as $f_{\\floor{r/2}+1}^{-1}(a \\oplus c)$, and the degree of $b$ as a function of $(a,c)$ is upper bounded by $\\dinv$.\n\nWithout loss of generality, assume $r \\ge 3$. Let $(a_x, b_x) \\in \\field{2n}$ denote the state before the $\\floor{r/2}$-th round, and $(a_y, b_y) \\in \\field{2n}$ denote the state after the $(\\floor{r/2}+2)$-th round. 
The following degree bounds hold (every variable is considered as a function of $(a,c)$):\n\\begin{itemize}\n    \\item $\\deg{a_x} \\le \\max(d,\\dinv), \\deg{b_x} = 1$,\n    \\item $\\deg{a_y} \\le \\max(d,\\dinv), \\deg{b_y} = 1$.\n\\end{itemize}\nThe input $(x_l, x_r)$ of $S$ can be computed as a $(\\floor{r/2}-1)$-round Feistel Network composed with the function $(a_x, b_x)$. Similarly, the output $(y_l, y_r)$ of $S$ can be computed as a $(\\ceil{r/2}-2)$-round Feistel Network composed with the function $(a_y, b_y)$. It follows that\n\\begin{itemize}\n    \\item $\\deg{x_l} \\le \\max(d,\\dinv)\\cdot \\maxdegbij{\\floor{r/2}}{d}$,\n    \\item $\\deg{x_r} \\le \\max(d,\\dinv)\\cdot \\maxdegbij{\\floor{r/2}-1}{d}$,\n    \\item $\\deg{y_l} \\le \\max(d,\\dinv)\\cdot \\maxdegbij{\\ceil{r/2}-1}{d}$, \n    \\item $\\deg{y_r} \\le \\max(d,\\dinv)\\cdot \\maxdegbij{\\ceil{r/2}-2}{d}$.\n\\end{itemize}\nIt is easy to verify that the degrees of the products $x_r\\cdot y_l$ and $x_r \\cdot y_r$ are upper bounded by $$\n\\max(d,\\dinv)\\cdot\\pround{\\maxdegbij{\\floor{r/2}-1}{d} + \\maxdegbij{\\ceil{r/2}-1}{d}} = \\max(d,\\dinv)\\cdot\\thetabij{r-2}{d}.$$\nSimilarly to Theorem~\\Ref{thm:nbij}, by the transpose-inverse property of the HDIM, the type-I distinguisher follows if\n$$\\max(d,\\dinv)\\cdot\\thetabij{r-2}{d} < 2n.$$\n\\end{proof}\n\\begin{remark}\nWhen $\\dinv \\le d$, Theorem~\\Ref{thm:bij} provides a type-I distinguisher for one more round compared to the general Theorem~\\Ref{thm:nbij}.\n\\end{remark}\n\n\\begin{corollary}\n\\Label{cor:bij}\nAny $S \\in \\bij{r}{d}$ has the type-I distinguisher if $$\\max(d,\\dinv)\\cdot(d^{\\floor{r/2}-2} + d^{\\ceil{r/2}-2}) < 2n,$$\nand the type-II distinguisher if \n$$\\max(d,\\dinv)\\cdot(d^{\\floor{r/2}-2} + d^{\\ceil{r/2}-3}) < 2n.$$\nIn particular, any 4-round Feistel Network with bijective round functions has the type-I distinguisher and any 5-round Feistel Network with bijective round functions has the type-II distinguisher.\n\\end{corollary}\n\n\\begin{remark}\nNote that the results for Feistel Networks with bijective functions are not very useful if a degree bound on Feistel functions is known, but a degree bound on their inverses is not known. In such case only the generic 5-round type-II distinguisher can be obtained. \n\\end{remark}\n\n% ===================================================\n\n\\subsection{Applications}\n\nAs an illustration of the theorems, consider the HDIM of random Feistel Networks with 3-bit branches. 
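\n\nSuch examples are easy to generate by brute force: the HDIM of a small random Feistel Network can be evaluated directly from its definition $\\HDIM{S}[i,j]=\\bigoplus_{x} \\inprod{e_i,S(x)}\\inprod{e_j,x}$. The following Python sketch is only an illustration (the bit ordering and the Feistel convention are arbitrary choices, and this is not the code used for the experiments in this thesis):\n\\begin{verbatim}\nimport random\n\nn = 3                        # branch width in bits; the block size is 2n\nN = 1 << n\n\ndef random_feistel(rounds):\n    # random bijective round functions, one per round\n    fs = [random.sample(range(N), N) for _ in range(rounds)]\n    def S(x):\n        l, r = x >> n, x & (N - 1)\n        for f in fs:\n            l, r = r, l ^ f[r]   # one common Feistel convention\n        return (l << n) | r\n    return S\n\ndef hdim(S):\n    # H[i][j] = XOR over all x of (bit i of S(x)) AND (bit j of x)\n    H = [[0] * (2 * n) for _ in range(2 * n)]\n    for x in range(1 << (2 * n)):\n        y = S(x)\n        for i in range(2 * n):\n            for j in range(2 * n):\n                H[i][j] ^= ((y >> i) & 1) & ((x >> j) & 1)\n    return H\n\nfor row in hdim(random_feistel(4)):\n    print(row)\n\\end{verbatim}\nFor 4 rounds the type-I zero pattern should appear, and for 5 rounds the type-II pattern (which quadrant is zero depends on the bit-ordering convention chosen above), matching the matrices shown next.\n\n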
For particular $S_4,S_5\\colon \\field{6} \\to \\field{6}$, $S_4 \\in \\bij{4}{2}$ and $S_5 \\in \\bij{5}{2}$ (the LAT of these functions was shown in \\FigRef{lat8motiv}):\n{\n\\def\\gzero{{\\color{gray}0}}\n\\begin{equation}\n  \\HDIM{S_4} = \\psquare{\\footnotesize\n    \\begin{array}{cccccccc}\n    1 & 0 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    0 & 1 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    1 & 0 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    \\gzero & \\gzero & \\gzero & \\gzero & \\gzero & \\gzero \\\\\n    \\gzero & \\gzero & \\gzero & \\gzero & \\gzero & \\gzero \\\\\n    \\gzero & \\gzero & \\gzero & \\gzero & \\gzero & \\gzero \\\\\n    \\end{array}\n  }, ~\\HDIM{S_5} = \\psquare{\\footnotesize\n    \\begin{array}{cccccccc}\n    0 & 0 & 0 & 0 & 1 & 0 \\\\\n    1 & 0 & 1 & 0 & 1 & 0 \\\\\n    0 & 0 & 0 & 0 & 1 & 0 \\\\\n    1 & 1 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    1 & 0 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    1 & 0 & 1 & \\gzero & \\gzero & \\gzero \\\\\n    \\end{array}\n  }.\n\\end{equation}\n}\n\nIt is interesting that from the HDIM we can see that all coordinates of $S_5$ have full degree $n-1$, but still the structure always has an integral distinguisher because particular monomials are always missing in the ANF.\n\n\\FigTex{distinguishers.tex}\n\n\\TabRef{distinguishers} shows the application of the theorems to several concrete parameter sets.\n\nIn~\\cite{division}, Todo proposed the \\emph{division property}, a method to find integral characteristics. I compared our results with those reported by Todo in Appendix~B of~\\cite{division}. Our HDIM-motivated results (type-II distinguishers) correspond to the maximum number of rounds for which an integral characteristic is proven.\n\\begin{itemize}\n    \\item For non-bijective cases, the division property is better in 4 targets and provides the same results elsewhere. Those targets are $(n,d)$-Feistel networks $(24, 2),(48, 2),(48, 5),(64, 5)$. Our approach proves a distinguisher for one round less.\n    \n    \\item For bijective cases, the division property results in one more round than the respective non-bijective case in a few places. Our approach does not exploit this distinction in general and thus is weaker for these cases. However, under the assumption that the degree of the inverses of Feistel functions is upper-bounded by the same value (i.e. $\\dinv \\le d$), Corollary~\\Ref{cor:bij} provides identical results and even one more round for three cases: $(n,d)$ Feistel Networks $(32,5),(32,7),(64,7)$.\n    To the best of my knowledge, no known method existed to exploit a bound on the degree of the inverse functions in the division property framework. I describe such a method in \\SecRef{improve-division}.\n\\end{itemize}\n\nAs the results show, the division property proposed by Todo allows one to obtain slightly stronger integral characteristics than the HDIM-motivated approach, except in the cases when the degree of the inverses of the Feistel functions is known. The downside of the division property is that it requires an algorithmic evaluation for each parameter set, whereas our approach provides a simple closed formula.\nFurthermore, the degree growth inside the two halves of a primitive is evaluated by a generic bound in our approach. It may be possible to combine our approach with the division property or another degree evaluation method to obtain better results. 
In particular, a recursive approach used in~\\cite{LeoSPN} for SPNs may be useful for Feistel Networks as well.\n\n\n\\SubSecDef{improve-division}{Improving Division Property Propagation}\n\nI briefly note a method to improve the division property propagation rule given a bound on the algebraic degree of the inverse function of a permutation. I describe the division property using the equivalent characterization by Boura and Canteaut~\\cite{anotherview}. Recall that the indicator of a multiset is defined as the indicator of the set containing elements from the multiset with odd multiplicities.\n\n\\begin{definition}\nA multiset $X \\subseteq \\field{n}$ is said to satisfy the division property $\\DP{n}{k}$, if\n$$\n\\deg{\\Ind_X} \\le n - k.\n$$\n\\end{definition}\n\nThe main propagation rule of the division property is as follows (given equivalently by Todo in~\\cite{division}).\n\\begin{proposition}\nLet $X \\subseteq \\field{n}$ be a multiset satisfying $\\DP{n}{k}$. Let $F$ be a permutation of $\\field{n}$. Then the multiset $Y = F(X)$ satisfies the division property\n$$\n\\DP{n}{k'}, ~\\text{for all}~k' \\le \\ceil{\\frac{k}{\\deg{F}}}.\n$$\n\\end{proposition}\n\\begin{remark}\nI write an inequality instead of the original equality to highlight all division properties that are satisfied, instead of only the strongest one.\n\\end{remark}\n\nI now show that another propagation rule can be obtained if the degree of $F^{-1}$ is known.\n\n\\begin{proposition}\nThe multiset $Y = F(X)$ satisfies the division property\n$$\n\\DP{n}{k'}, ~\\text{for all}~k' \\le n - (n-k) \\deg{F^{-1}}.\n$$\n\\end{proposition}\n\\begin{proof}\nWithout loss of generality, assume that $X$ has no elements with a multiplicity greater than 1. Note that\n$$\nx \\in X \\Leftrightarrow F(x) \\in Y.\n$$\nIt can be rewritten as \n$$\n\\Ind_X = \\Ind_Y\\circ F.\n$$\nEquivalently,\n$$\n\\Ind_X \\circ F^{-1} = \\Ind_Y.\n$$\nIt follows that\n$$\n\\deg{\\Ind_Y} \\le \\deg{\\Ind_X} \\cdot \\deg{F^{-1}} \\le (n - k)\\cdot \\deg{F^{-1}},\n$$\nand thus, $\\deg{\\Ind_Y} \\le n - k'$ for all\n$$\nk' \\le n - (n - k) \\deg{F^{-1}}.\n$$\n\\end{proof}\n\nUsing this proposition, the results from~\\cite{division} can be improved for the case of bijective Feistel functions, assuming that $\\dinv \\le d$. 
The improved division property then provides the same or slightly better results than Corollary~\\Ref{cor:bij} in all cases from~\\cite{division}.", "meta": {"hexsha": "ffbd7af5806c4c1fb469e25169b260788092d441", "size": 16634, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9strFeistel/2hdimfeistel.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9strFeistel/2hdimfeistel.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9strFeistel/2hdimfeistel.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 60.9304029304, "max_line_length": 685, "alphanum_fraction": 0.6986293135, "num_tokens": 5293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7826624738835052, "lm_q1q2_score": 0.5793066972986017}}
{"text": "% Section 5: Numerical Results\n\n\\section{Numerical Results}\n\n\n\\subsection{Datasets}\nIn orde to compare the performance of different algorithms, we used three types of datasets to test algorithms discussed above, including artificial generated datasets and real images. In short, we have three different types of datasets:\n\\begin{itemize}\n    \\item Random Generated Dataset\n    \\item Ellipse and Caffarelli Dataset [5]\n    \\item DOTmark Dataset [6]\n\\end{itemize}\n\nWe assume that the cost of transporting a unit mass from $x \\in X \\subset \\mathbb{R}^2$ to $y \\in Y \\subset \\mathbb{R}^2$ is $c_p(x,y) = \\|x-y\\|^p$ for some $p \\geq 1$. The minimum cost for transferring $\\mu$ to $\\nu$ is then given by \n\n\\begin{equation}\n    C_p (\\mu, \\nu) = \\min_{x \\in \\Pi(\\mu,\\nu)} \\|x-y\\|^p d\\pi(x,y)\n\\end{equation}\n\nFor realistic considerations, we use $\\mathbf{L}_2$ distance as our metrics.\n  \nRandom generated samples consists of uniform samples on $[-1, 1]^2$ of size $n$, weights $\\mu$ and $\\nu$ are uniformly \nsampled on $[-1,1]^2$ and normalized with $\\sum_i \\mu_i = 1$, $\\sum_j \\nu_j = 1$.\n\nThe ellipse example consists of two uniform samples (source and target data set) of size $n$ from the unit circle\nwith normal distributed noise added with zero mean and standard deviation $0.1$. The source\ndata sample is then scaled in the x-Axis by $1.3$ and in the y-Axis by $0.9$, while the target\ndata set is scaled in the x-Axis by $0.9$ and in the y-Axis by $1.1$.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\linewidth]{img/ellipse}\n    \\label{fig:ot}\n    \\caption{Illustration of Optimal Transport on Ellipse dataset with different coarsen level}\n  \\end{figure}\n\nCaffarelli's example consists of two uniform samples on $[-1, 1]^2$ of size $n$. Any points outside the unit circle are then\ndiscarded. Additionally, the target data sample is split along the x-Axis at $0$ and shifted by\n$+2$ and $-2$ for points with positive and negative x-Axis values, respectively.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.6\\linewidth]{img/ellipse_caffa}\n    \\label{fig:ot}\n    \\caption{Illustration of Optimal Transport on Ellipse dataset and Caffarelli dataset}\n  \\end{figure}\n\nThe DOT benchmark [6] consists of 10 classes of 10 different images, each of which is available at the 5 different\nresolutions from $a \\times a  = 32 \\times 32$ to $512 \\times 512$. In computation, each data point is $m = n = a^2$.  This allows\nfor a total of 45 computations of Wasserstein distances between two images for any one class at any fixed resolution. for convenience, we used the $32 \\times 32$ and $64 \\times 64$ smaples for experiments.\n\nTable 1 gives an overview of how the classes were created.\nClasses 1-7 are random simulations of scenarios based on various probability distributions.\nImages at different resolutions are generated independently from each other but according to the same laws.\nClasses 8-10 were obtained by ad-hoc choices of simple geometric shapes, classic test images and images of mitochondria acquired using STED\nsuper-resolution microscopy. For geometric shapes and classic test images\nthe various resolutions available are coarsenings of a single image. 
\n\\vspace{5ex}\n\\begin{table}[htbp]\n\t\\caption{DOTmark dataset classes}\n\t\\centering\n\t\\begin{tabular}{lll}\n\t\t\\toprule\n\t\t\\quad  & Dataset & Description\\\\\n    \\midrule\n            1  & WhiteNoise    & i.i.d. uniformly distributed values in [0, 1] at each pixel\\\\\n            2  & GRFrough      & GRF with $\\sigma = 1$, $\\nu = 0.25$, $\\gamma = 0.5$\\\\\n            3  & GRFmoderate   & GRF with $\\sigma = 1$, $\\nu = 2.5$, $\\gamma = 0.15$\\\\\n            4  & GRFsmooth     & GRF with $\\sigma = 1$, $\\nu = 2.5$, $\\gamma = 0.3$\\\\\n            5  & LogGRF        & exp-function of a GRF with $\\sigma = 1$, $\\nu = 0.5$, $\\gamma = 0.4$\\\\\n            6  & CauchyDensity & Bivariate Cauchy density with random center and a varying scale ellipse\\\\\n            7  & LogitGRF      & Logistic function of a GRF with $\\sigma = 4$, $\\nu = 4.5$, $\\gamma = 0.1$\\\\\n            8  & Shapes        & An ad-hoc choice of simple geometric shapes\\\\\n            9  & ClassicImages & Standard grayscale test images used in image processing\\\\\n            10 & Microscopy    & Clippings from STED microscopy images of mitochondria\\\\\n        \\bottomrule\n\t\\end{tabular}\n\t\\label{tab:tabledot}\n\\end{table}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.9\\linewidth]{img/dot_1to6}\n    \\caption{The images in classes 1-6 at resolution 128 $\\times$ 128}\n    \\label{fig:dot1to6}\n\\end{figure}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.9\\linewidth]{img/dot_7to10}\n    \\caption{The images in classes 7-10 at resolution 128 $\\times$ 128}\n    \\label{fig:dot7to10}\n\\end{figure}\n\n\\clearpage\n\\subsection{Numerical Results and Interpretations}\n\nWe use three main metrics to evaluate all of these algorithms:\n\n\\begin{itemize}\n    \\item Running time\n    \\item Objective value\n    \\item Constraint violations for $\\mu$ and $\\nu$\n\\end{itemize}\n\nTo measure the satisfaction of the constraints, the $\\mathbf{L}_1$ norms of $\\mathbf{\\mu} - \\mathbf{\\pi}_{i\\cdot}$ and $\\mathbf{\\nu} - \\mathbf{\\pi}_{\\cdot j}$ were introduced. In consideration of the running time, we conduct our experiments with sizes $m = n = 64, 128, 256, 512, 1024, 2048$.\n\nObviously, the running time is an important metric for comparing different algorithms on different test data. In addition, the objective value also shows the performance of the different algorithms, though for entropy-regularized methods such as the Sinkhorn algorithm the objective has been modified by the regularization term and therefore cannot be used directly for comparison.\nHowever, the distance between a suboptimal transport variable $\\pi$ and the optimal transport variable $\\tilde \\pi$ does not reflect the merits of an algorithm either, because it depends on the nature of the cost matrix $\\mathbf{C}$. Consider the following example:\n\n\\begin{equation}\n    \\begin{array}{cl}\n        {\\min } & {\\mathrm{tr}\\left(\\left(\\begin{array}{cc}\n        {1+\\epsilon} & {1 + \\frac{\\epsilon}{2}}\\\\\n        {1} & {1} \\\\\n        \\end{array}\\right)\\left(\\begin{array}{cc}\n        {\\pi_{11}} & {\\pi_{12}} \\\\\n        {\\pi_{21}} & {\\pi_{22}}\n        \\end{array}\\right)\\right)} \\\\\n        {\\text { s.t. }} & {\\pi_{11}+\\pi_{12}=1} \\\\\n        & {\\pi_{21}+\\pi_{22}=1} \\\\\n        & {\\pi_{11}+\\pi_{21}=1} \\\\\n        & {\\pi_{12}+\\pi_{22}=1}\n    \\end{array}\n\\end{equation}\n\nThen the difference between the objective value of the optimal solution $\\tilde \\pi = \\left(\\begin{array}{ll}{0} & {1} \\\\{1} & {0}\\end{array}\\right)$ and the suboptimal solution $\\pi = \\left(\\begin{array}{ll}{1} & {0} \\\\{0} & {1}\\end{array}\\right)$ is less than $\\epsilon$, but the distance between the two solutions, viewed as vectors, is significantly larger than $\\epsilon$.\n\nThe tables below show our experimental results; more detailed data can be found in the \\textbf{/code/results} folder.\n\n\\begin{table}[htbp]\n\t\\caption{Results of the raw Mosek solver and the raw Gurobi solver on the Randomly Generated Dataset}\n\t\\centering\n    \\begin{tabular}{|c|c|llllll|}\n    \\hline\n    $m = n$             &          & Mosek primal & Mosek dual  & Mosek interior & Gurobi primal & Gurobi dual & Gurobi interior\\\\\n    \\hline\n    \\multirow{4}*{64}   &err $\\mu$ & 1.73472e-17  & 1.56125e-17 & 6.99572e-11    & 1.56125e-17   & 1.90820e-17 & 1.56125e-17    \\\\   \n                        &err $\\nu$ & 1.73472e-16  & 1.68268e-16 & 6.05657e-11    & 1.56125e-16   & 1.58727e-16 & 1.54390e-16    \\\\  \n                        &fval      & 2.07168e-02  & 2.07168e-02 & 2.07168e-02    & 2.07168e-02   & 2.07168e-02 & 2.07168e-02    \\\\\n                        &time      & 3.12448e-02  & 3.47526e-02 & 3.42710e-02    & 4.78537e-02   & 4.80292e-02 & 7.72452e-02    \\\\\n    \\hline\n    \\multirow{4}*{128}  &err $\\mu$ & 7.38910e-06  & 1.04083e-17 & 6.21169e-12    & 3.46945e-18   & 3.46945e-18 & 5.20417e-18    \\\\   \n                        &err $\\nu$ & 7.38910e-06  & 1.14275e-16 & 6.21459e-12    & 1.18395e-16   & 1.18395e-16 & 1.17934e-16    \\\\  \n                        &fval      & 1.85776e-02  & 1.85775e-02 & 1.85775e-02    & 1.85775e-02   & 1.85775e-02 & 1.85775e-02    \\\\\n                        &time      & 8.82461e-02  & 1.05152e-01 & 3.42710e-02    & 2.18717e-01   & 2.67161e-01 & 3.70859e-01    \\\\\n    \\hline\n    \\multirow{4}*{256}  &err $\\mu$ & 1.75611e-05  & 3.62395e-17 & 8.84693e-11    & 1.04083e-17   & 1.12757e-17 & 1.12757e-17    \\\\   \n                        &err $\\nu$ & 1.75611e-05  & 3.43123e-16 & 8.84696e-11    & 3.23526e-16   & 3.23526e-16 & 3.23960e-16    \\\\  \n                        &fval      & 9.37655e-03  & 9.37637e-03 & 9.37637e-03    & 9.37637e-03   & 9.37637e-03 & 9.37637e-03    \\\\\n                        &time      & 3.19547e-01  & 4.38276e-01 & 5.48973e-01    & 1.01217e+00   & 1.12126e+00 & 1.07492e+00    \\\\\n    \\hline\n    \\multirow{4}*{512}  &err $\\mu$ & 4.29996e-05  & 5.06695e-17 & 1.13676e-10    & 8.48388e-18   & 3.10109e-16 & 1.12757e-17    \\\\   \n                        &err $\\nu$ & 4.29996e-05  & 3.38688e-16 & 1.13677e-10    & 8.48388e-18   & 1.25767e-17 & 3.23960e-16    \\\\  \n                        &fval      & 4.30472e-03  & 4.30454e-03 & 4.30454e-03    & 4.30454e-03   & 4.30454e-03 & 9.37637e-03    \\\\\n                        &time      & 1.42172e+00  & 2.23522e+00 & 2.73145e+00    & 3.47143e+00   & 3.77973e+00 & 1.07492e+00    \\\\\n    \\hline    \n    \\multirow{4}*{1024} &err $\\mu$ & 1.06018e-04  & 1.47138e-16 & 1.01099e-13    & 1.09504e-17   & 3.64400e-16 & 1.20346e-17    \\\\   \n                        &err $\\nu$ & 1.06018e-04  & 5.08930e-16 & 1.00871e-13    & 3.64753e-16   & 1.15196e-17 & 3.65837e-16    \\\\  \n                        
&fval      & 1.94602e-03  & 1.94578e-03 & 1.94578e-03    & 1.94578e-03   & 1.94578e-03 & 1.94578e-03   \\\\\n                        &time      & 6.45907e+00  & 1.37058e+01 & 1.33225e+01    & 1.73826e+01   & 1.31655e+01 & 1.73745e+01    \\\\\n    \\hline\n    \\multirow{4}*{2048} &err $\\mu$ & 2.65157e-04  & 1.12464e-16 & 5.64365e-11    & 1.20889e-17   & 9.78113e-16 & 1.13841e-17    \\\\   \n                        &err $\\nu$ & 2.65157e-04  & 1.08728e-15 & 5.36207e-11    & 9.76758e-16   & 9.18861e-18 & 9.75701e-16    \\\\  \n                        &fval      & 1.50347e-03  & 1.50297e-03 & 1.50297e-03    & 1.50297e-03   & 1.50297e-03 & 1.50297e-03   \\\\\n                        &time      & 3.59507e+01  & 1.28866e+02 & 6.74333e+01    & 3.16742e+01   & 5.92715e+01 & 9.67955e+01    \\\\\n    \\hline   \n    \\end{tabular}\n    \\label{tab:table1}\n\\end{table}\n\nIt seems that Gurobi can always reach the real optimal solution, and MOSEK sometimes fails. For MOSEK,\n the simplex method on primal problem is faster than the interior method and dual simplex method. For Gurobi, the simplex method is usually faster than the interior method. As one of the most famous algorithms in 20th century, simplex method was designed for solving linear programming problems. Poor performance of dual simplex may be caused by a large number of constraints. In summary, Gurobi has better performance that MOSEK.\n\n\\begin{table}[htbp]\n\t\\caption{Results of ADMM, Sinkhorn and BlockCA algorithm on Random Generated Dataset}\n\t\\centering\n    \\begin{tabular}{|c|c|lllll|}\n    \\hline\n    $m = n$             &          & Gurobi primal & ADMM primal & ADMM dual   & Sinkhorn    & BlockCA    \\\\\n    \\hline\n    \\multirow{4}*{64}   &err $\\mu$ & 1.56125e-17   & 1.40712e-05 & 6.50195e-01 & 1.00180e-16 & 5.41017e-16\\\\   \n                        &err $\\nu$ & 1.56125e-16   & 1.41582e-05 & 6.27594e-01 & 1.90874e-16 & 3.94243e-16\\\\  \n                        &fval      & 2.07168e-02   & 2.07460e-02 & 8.77846e-04 & 2.73941e-02 & 2.73941e-02\\\\\n                        &time      & 4.78537e-02   & 1.22792e+00 & 1.38214e+00 & 8.93044e-03 & 1.44658e-01\\\\\n    \\hline\n    \\multirow{4}*{128}  &err $\\mu$ & 3.46945e-18   & 4.72795e-05 & 8.45667e-01 & 1.39456e-16 & 5.54624e-16\\\\   \n                        &err $\\nu$ & 1.18395e-16   & 4.64422e-05 & 8.56649e-01 & 2.61211e-16 & 5.22084e-16\\\\  \n                        &fval      & 1.85775e-02   & 1.85941e-02 & 8.22017e-05 & 2.92237e-02 & 2.92237e-02\\\\\n                        &time      & 2.67161e-01   & 4.53466e+00 & 3.89032e+00 & 6.39679e-02 & 4.98657e-01\\\\\n    \\hline\n    \\multirow{4}*{256}  &err $\\mu$ & 1.04083e-17   & 4.96761e-05 & 9.04781e-01 & 3.11044e-16 & 6.12256e-16\\\\  \n                        &err $\\nu$ & 3.23526e-16   & 5.02865e-05 & 9.00866e-01 & 3.62530e-16 & 5.78530e-16\\\\ \n                        &fval      & 9.37637e-03   & 9.38144e-03 & 1.20866e-05 & 2.88136e-02 & 2.88136e-02\\\\\n                        &time      & 1.01217e+00   & 1.81699e+01 & 8.70793e+00 & 8.24754e-02 & 2.26562e+00\\\\\n    \\hline\n    \\multirow{4}*{512}  &err $\\mu$ & 8.48388e-18   & 1.31110e-04 & 9.53515e-01 & 3.42523e-16 & 6.55963e-16\\\\   \n                        &err $\\nu$ & 8.48388e-18   & 1.33937e-04 & 9.53115e-01 & 5.30280e-16 & 7.73707e-16\\\\  \n                        &fval      & 4.30454e-03   & 4.30575e-03 & 1.35802e-06 & 2.81078e-02 & 2.81078e-02\\\\\n                        &time      & 3.47143e+00   & 7.43271e+01 & 6.13705e+01 & 
2.81078e-01 & 1.08770e+01\\\\\n    \\hline    \n    \\multirow{4}*{1024} &err $\\mu$ & 1.09504e-17   & 2.73918e-04 & 9.72007e-01 & 3.86662e-16 & 6.89612e-16\\\\   \n                        &err $\\nu$ & 3.64753e-16   & 2.74108e-04 & 9.72809e-01 & 7.61287e-16 & 1.02598e-15\\\\ \n                        &fval      & 1.94578e-03   & 1.94916e-03 & 2.51372e-07 & 2.79386e-03 & 2.79386e-02\\\\\n                        &time      & 1.73826e+01   & 3.67489e+02 & 2.08239e+02 & 1.02975e+00 & 3.62734e+01\\\\\n    \\hline\n    \\multirow{4}*{2048} &err $\\mu$ & 1.20889e-17   & 1.01361e-03 & 9.88752e-01 & 9.68066e-16 & 1.09343e-15\\\\   \n                        &err $\\nu$ & 9.76758e-16   & 1.02573e-03 & 9.89591e-01 & 1.09554e-15 & 1.53059e-15\\\\  \n                        &fval      & 1.50297e-03   & 1.49998e-03 & 2.11058e-08 & 2.76252e-03 & 2.76252e-02\\\\\n                        &time      & 3.16742e+01   & 5.71285e+03 & 3.14300e+03 & 8.98797e+00 & 4.12741e+02\\\\\n    \\hline\n    \\end{tabular}\n    \\label{tab:table1}\n\\end{table}\n\nGenerally speaking, the ADMM method does not perform well. On the one hand, its error terms are large, because the equality constraints are difficult to satisfy exactly. On the other hand, its running time is much longer than that of the general-purpose algorithms in Mosek and Gurobi. This is because the iterates stay at $\\pi = \\frac{1}{m n}$ for a long time and do not start converging until some variables begin to increase significantly. What's more, ADMM is a first-order method, which means it has a slow convergence rate.\n\nAs mentioned before, in practice the Sinkhorn algorithm suffers from numerical overflow when the regularization parameter $\\epsilon$ is small compared to the entries of the cost matrix $\\mathbf{C}$. This concern can be alleviated to some extent by carrying out the computations in the log domain. For this reason, the Block Coordinate Ascent algorithm was introduced. 
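\n\nThe log-domain stabilization referred to above can be sketched as follows (an illustrative version; not the code used for the experiments):\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import logsumexp\n\ndef sinkhorn_log(mu, nu, C, eps=1e-2, n_iter=1000):\n    # dual potentials f, g; all exponentials stay inside logsumexp,\n    # which avoids the overflow of the u/v scaling form\n    f, g = np.zeros_like(mu), np.zeros_like(nu)\n    for _ in range(n_iter):\n        f = eps * (np.log(mu) - logsumexp((g[None, :] - C) / eps, axis=1))\n        g = eps * (np.log(nu) - logsumexp((f[:, None] - C) / eps, axis=0))\n    pi = np.exp((f[:, None] + g[None, :] - C) / eps)   # primal transport plan\n    err_mu = np.abs(pi.sum(axis=1) - mu).sum()   # L1 violation of mu marginal\n    err_nu = np.abs(pi.sum(axis=0) - nu).sum()   # L1 violation of nu marginal\n    return pi, err_mu, err_nu\n\\end{verbatim}\n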
From the experiments we can see that the entropy-regularized methods, implemented with the Sinkhorn algorithm and the Block Coordinate Ascent algorithm, converge very fast, with only the slightest violation of the constraints $\\mu$ and $\\nu$ (the err $\\mu$ and err $\\nu$ columns).\n\n \\begin{table}[htbp]\n\t\\caption{Results of the raw Mosek solver and the raw Gurobi solver on the DOTmark Dataset}\n\t\\centering\n    \\begin{tabular}{|c|c|llllll|}\n    \\hline\n    Class                   &          & Mosek primal & Mosek dual  & Mosek interior & Gurobi primal & Gurobi dual & Gurobi interior\\\\\n    \\hline\n    \\multirow{4}*{WNoise}   &err $\\mu$ & 1.04105e-04  & 1.43060e-16 & 2.60955e-11    & 0.00000e+00   & 0.00000e+00 & 0.00000e+00    \\\\   \n                            &err $\\nu$ & 1.04106e-04  & 6.93859e-10 & 7.19976e-10    & 6.93859e-10   & 6.93859e-10 & 6.93859e-10    \\\\  \n                            &fval      & 6.49762e-04  & 6.49817e-04 & 6.49817e-04    & 6.49817e-04   & 6.49817e-04 & 6.49817e-04    \\\\\n                            &time      & 6.37157e+00  & 1.00645e+01 & 1.09153e+01    & 1.96804e+01   & 1.80495e+01 & 2.56732e+01    \\\\\n    \\hline\n    \\multirow{4}*{GRFrough} &err $\\mu$ & 9.86331e-05  & 2.14455e-16 & 3.88285e-14    & 0.00000e+00   & 0.00000e+00 & 0.00000e+00    \\\\   \n                            &err $\\nu$ & 9.86322e-05  & 8.99345e-10 & 8.99386e-10    & 8.99345e-10   & 8.99345e-10 & 8.99345e-10    \\\\  \n                            &fval      & 1.24126e-03  & 1.24131e-03 & 1.24131e-03    & 1.24131e-03   & 1.24131e-03 & 1.24131e-03    \\\\\n                            &time      & 6.22112e+00  & 1.60985e+01 & 1.20426e+01    & 1.70455e+01   & 1.44343e+01 & 2.98131e+01    \\\\\n    \\hline\n    \\multirow{4}*{GRFmid}   &err $\\mu$ & 9.82285e-05  & 1.81604e-16 & 5.15640e-11    & 0.00000e+00   & 5.89424e-10 & 5.89424e-10    \\\\   \n                            &err $\\nu$ & 9.82291e-05  & 5.89424e-10 & 7.33106e-10    & 5.89424e-10   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 1.29959e-02  & 1.29962e-02 & 1.29962e-02    & 1.29962e-02   & 1.29962e-02 & 1.29962e-02    \\\\\n                            &time      & 7.08524e+00  & 3.11953e+01 & 1.08397e+01    & 1.62867e+01   & 1.74454e+01 & 2.98980e+01    \\\\\n    \\hline\n    \\multirow{4}*{GRFmid2}  &err $\\mu$ & 9.78652e-05  & 1.96051e-16 & 2.27771e-12    & 0.00000e+00   & 4.81123e-10 & 4.81123e-10    \\\\   \n                            &err $\\nu$ & 9.78657e-05  & 4.81123e-10 & 4.83410e-10    & 4.81123e-10   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 7.01347e-03  & 7.01329e-03 & 7.01329e-03    & 7.01329e-03   & 7.01329e-03 & 7.01329e-03    \\\\\n                            &time      & 8.29048e+00  & 2.58863e+01 & 1.16190e+01    & 1.54850e+01   & 1.46085e+01 & 1.82575e+01    \\\\\n    \\hline    \n    \\multirow{4}*{LogGRF}   &err $\\mu$ & 9.79684e-05  & 1.74543e-16 & 1.41349e-13    & 0.00000e+00   & 0.00000e+00 & 0.00000e+00    \\\\   \n                            &err $\\nu$ & 9.79684e-05  & 3.44473e-11 & 3.45826e-11    & 3.44471e-11   & 3.44471e-11 & 3.44471e-11    \\\\  \n                            &fval      & 1.14762e-02  & 1.14762e-02 & 1.14762e-02    & 3.44471e-11   & 1.14762e-02 & 1.14762e-02    \\\\\n                            &time      & 6.61064e+00  & 2.25776e+01 & 1.25897e+01    & 1.46135e+01   & 1.81754e+01 & 2.23398e+01    \\\\\n    \\hline\n    \\multirow{4}*{Cauchy}   &err $\\mu$ & 8.95380e-05  & 1.69054e-16 & 
4.53139e-11    & 0.00000e+00   & 6.45741e-10 & 6.45741e-10    \\\\   \n                            &err $\\nu$ & 8.95387e-05  & 6.45741e-10 & 5.36207e-11    & 6.45741e-10   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 2.70715e-02  & 2.70716e-02 & 2.70716e-02    & 2.70716e-02   & 2.70716e-02 & 2.70716e-02    \\\\\n                            &time      & 6.27543e+00  & 3.43761e+01 & 1.25938e+01    & 1.41227e+01   & 1.66956e+01 & 4.97750e+01    \\\\\n    \\hline   \n    \\multirow{4}*{LogitGRF} &err $\\mu$ & 1.04014e-04  & 1.61844e-16 & 6.22315e-10    & 0.00000e+00   & 1.23453e-09 & 1.23453e-09    \\\\   \n                            &err $\\nu$ & 1.04013e-04  & 1.23453e-09 & 1.83005e-09    & 1.23453e-09   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 1.90306e-02  & 1.90301e-02 & 1.90301e-02    & 1.90301e-02   & 1.90301e-02 & 1.90301e-02    \\\\\n                            &time      & 6.59682e+00  & 2.55476e+01 & 1.09550e+01    & 1.51339e+01   & 1.87915e+01 & 3.24828e+01    \\\\\n    \\hline\n    \\multirow{4}*{Shape}    &err $\\mu$ & 3.10808e-05  & 6.51926e-09 & 1.54226e-08    & 0.00000e+00   & 6.51926e-09 & 6.51926e-09    \\\\   \n                            &err $\\nu$ & 3.10743e-05  & 4.77049e-17 & 1.07581e-08    & 6.51926e-09   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 6.00170e-03  & 6.00148e-03 & 6.00148e-03    & 6.00148e-03   & 6.00148e-03 & 6.00148e-03    \\\\\n                            &time      & 2.45428e+00  & 4.44892e+00 & 2.97079e+00    & 1.17216e+01   & 1.16747e+01 & 1.31421e+01    \\\\\n    \\hline\n    \\multirow{4}*{Classic}  &err $\\mu$ & 9.68013e-05  & 1.20292e-16 & 8.39560e-12    & 0.00000e+00   & 2.61934e-10 & 2.61934e-10    \\\\   \n                            &err $\\nu$ & 9.68016e-05  & 2.61935e-10 & 2.70569e-10    & 2.61934e-10   & 0.00000e+00 & 0.00000e+00    \\\\  \n                            &fval      & 2.13840e-03  & 2.13855e-03 & 2.13855e-03    & 2.13855e-03   & 2.13855e-03 & 2.13855e-03    \\\\\n                            &time      & 7.26325e+00  & 1.91323e+01 & 1.30328e+01    & 1.54186e+01   & 1.40239e+01 & 2.08600e+01    \\\\\n    \\hline\n    \\multirow{4}*{Micro}    &err $\\mu$ & 9.11961e-05  & 1.09938e-16 & 1.40660e-10    & 0.00000e+00   & 0.00000e+00 & 0.00000e+00   \\\\   \n                            &err $\\nu$ & 9.11973e-05  & 1.25146e-09 & 1.24353e-09    & 1.25146e-09   & 1.25146e-09 & 1.25146e-09    \\\\  \n                            &fval      & 5.60394e-02  & 5.60383e-02 & 5.60383e-02    & 5.60383e-02   & 5.60383e-02 & 5.60383e-02    \\\\\n                            &time      & 5.27145e+00  & 2.11592e+01 & 9.32959e+00    & 1.44868e+01   & 1.58690e+01 & 1.78717e+02    \\\\\n    \\hline\n    \\end{tabular}\n    \\label{tab:table1}\n\\end{table}\n\n\n\n\\begin{table}[htbp]\n\t\\caption{Results of ADMM, Sinkhorn and BlockCA algorithm on DOTmark Dataset}\n\t\\centering\n    \\begin{tabular}{|c|c|lllll|}\n    \\hline\n    Class                   &          & Gurobi primal & ADMM primal & ADMM dual  & Sinkhorn   & BlockCA    \\\\\n    \\hline\n    \\multirow{4}*{WNoise}   &err $\\mu$ & 0.00000e+00   & 1.28341e-04  & 3.33216e-01 & 6.93859e-10 & 1.24606e-07 \\\\   \n                            &err $\\nu$ & 6.93859e-10   & 1.16042e-04  & 3.33214e-01 & 7.56495e-16 & 1.27072e-07 \\\\  \n                            &fval      & 6.49817e-04   & 6.49518e-04  & 0.00000e+00 & 2.76750e-03 & 2.76750e-02 \\\\\n                            &time      
& 1.96804e+01   & 2.05072e+02  & 1.76776e+02 & 4.96325e-01 & 4.06536e+01 \\\\\n    \\hline\n    \\multirow{4}*{GRFrough} &err $\\mu$ & 0.00000e+00   & 1.36891e-04  & 2.13545e-01 & 8.99345e-10 & 1.23457e-07 \\\\   \n                            &err $\\nu$ & 8.99345e-10   & 1.35197e-04  & 2.13544e-01 & 7.54279e-16 & 1.22399e-07 \\\\  \n                            &fval      & 1.24131e-03   & 1.24030e-03  & 0.00000e+00 & 2.77295e-02 & 2.77295e-01 \\\\ \n                            &time      & 1.70455e+01   & 3.08484e+02  & 2.03046e+02 & 5.94349e-01 & 4.29790e+01 \\\\ \n    \\hline\n    \\multirow{4}*{GRFmid}   &err $\\mu$ & 0.00000e+00   & 1.51246e-04  & 2.28186e-01 & 4.81123e-10 & 1.20229e-07 \\\\    \n                            &err $\\nu$ & 5.89424e-10   & 1.43893e-04  & 2.28171e-01 & 7.92382e-16 & 1.17900e-07 \\\\   \n                            &fval      & 1.29962e-02   & 1.29985e-02  & 0.00000e+00 & 2.67600e-02 & 2.70585e-02 \\\\ \n                            &time      & 1.62867e+01   & 3.04346e+02  & 2.04500e+02 & 6.11465e-01 & 3.94406e+01 \\\\ \n    \\hline\n    \\multirow{4}*{GRFmid2}  &err $\\mu$ & 0.00000e+00   & 1.32336e-04  & 1.90867e-01 & 5.89424e-10 & 1.25930e-07 \\\\    \n                            &err $\\nu$ & 4.81123e-10   & 1.32346e-04  & 1.90878e-01 & 7.83878e-16 & 1.22960e-07 \\\\   \n                            &fval      & 7.01329e-03   & 7.01429e-03  & 0.00000e+00 & 2.70585e-02 & 2.67600e-01 \\\\\n                            &time      & 1.54850e+01   & 2.56570e+02  & 1.77548e+02 & 5.40904e-01 & 3.97170e+01 \\\\\n    \\hline    \n    \\multirow{4}*{LogGRF}   &err $\\mu$ & 0.00000e+00   & 1.55836e-04  & 3.63980e-01 & 3.44471e-11 & 1.28669e-07 \\\\   \n                            &err $\\nu$ & 3.44471e-11   & 1.46725e-04  & 3.63993e-01 & 8.62698e-16 & 1.28669e-07 \\\\  \n                            &fval      & 3.44471e-11   & 1.14813e-02  & 0.00000e+00 & 2.25891e-02 & 2.25891e-02\\\\\n                            &time      & 1.46135e+01   & 2.46276e+02  & 1.92535e+02 & 6.65159e-01 & 6.66666e66 \\\\\n    \\hline\n    \\multirow{4}*{Cauchy}   &err $\\mu$ & 0.00000e+00   & 2.44488e-04 & 4.14467e-01 & 6.45741e-10 & 1.21786e-07 \\\\   \n                            &err $\\nu$ & 6.45741e-10   & 2.03140e-04 & 4.14312e-01 & 8.02487e-16 & 1.32475e-07 \\\\ \n                            &fval      & 2.70716e-02   & 2.70958e-02 & 0.00000e+00 & 2.07987e-02 & 2.07987e-01 \\\\\n                            &time      & 1.41227e+01   & 3.26161e+02 & 2.28633e+02 & 4.73294e-01 & 3.97828e+01 \\\\\n    \\hline   \n    \\multirow{4}*{LogitGRF} &err $\\mu$ & 0.00000e+00   & 1.52596e-04  & 3.99687e-01  & 1.23453e-09 & 1.24918e-07 \\\\   \n                            &err $\\nu$ & 1.23453e-09   & 1.43546e-04  & 3.99684e-01  & 8.23705e-16 & 1.26019e-07 \\\\ \n                            &fval      & 1.90301e-02   & 1.90381e-02  & 0.00000e+00  & 2.81800e-02 & 2.81800e-02 \\\\\n                            &time      & 1.51339e+01   & 2.39396e+02  & 1.36987e+02  & 5.37684e-01 & 3.92088e+01 \\\\\n    \\hline\n    \\multirow{4}*{Shape}    &err $\\mu$ & 0.00000e+00   & 1.85553e-04  & 3.60264e-01  & 6.51926e-09 & 4.34572e-08 \\\\   \n                            &err $\\nu$ & 6.51926e-09   & 2.04018e-04  & 3.59738e-01  & 5.09141e-16 & 4.54084e-08 \\\\  \n                            &fval      & 6.00148e-03   & 6.00026e-03  & 0.00000e+00  & 1.45999e-02 & 1.45999e-02 \\\\\n                            &time      & 1.17216e+01   & 2.02916e+02  & 1.35861e+02  & 5.54126e-01 & 3.13619e+01 \\\\\n    
\\hline\n    \\multirow{4}*{Classic}  &err $\\mu$ & 0.00000e+00   & 2.05559e-04 & 1.79544e-01 & 2.61935e-10 & 1.20315e-07 \\\\   \n                            &err $\\nu$ & 2.61934e-10   & 2.05644e-04 & 1.79543e-01 & 7.72765e-16 & 1.17474e-07 \\\\\n                            &fval      & 2.13855e-03   & 2.14597e-03 & 0.00000e+00 & 2.80976e-02 & 2.80976e-01 \\\\\n                            &time      & 1.54186e+01   & 3.57276e+02 & 2.30821e+02 & 4.58951e-01 & 3.80998e+01 \\\\\n    \\hline\n    \\multirow{4}*{Micro}    &err $\\mu$ & 0.00000e+00   & 1.49094e-04  & 5.35081e-01 & 1.25146e-09 & 1.29936e-07 \\\\  \n                            &err $\\nu$ & 1.25146e-09   & 1.22823e-04  & 5.35094e-01 & 6.58111e-16 & 1.64514e-07 \\\\  \n                            &fval      & 5.60383e-02   & 5.60339e-02  & 0.00000e+00 & 2.80043e-01 & 2.80042e-01 \\\\\n                            &time      & 1.44868e+01   & 2.01784e+02  & 1.38029e+02 & 5.59559e-01 & 3.95853e+01 \\\\\n    \\hline\n    \\end{tabular}\n    \\label{tab:table1}\n\\end{table}", "meta": {"hexsha": "3aca6743f3f9ba22e5e6e5e5951d1653b016509a", "size": 26461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/content-5.tex", "max_stars_repo_name": "CrazyIvanPro/Optimal_Transport", "max_stars_repo_head_hexsha": "aa782820a5ca5a01909ed3c32acbada43f6cfa0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-09T10:37:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-06T09:24:30.000Z", "max_issues_repo_path": "doc/content-5.tex", "max_issues_repo_name": "CrazyIvanPro/Optimal_Transport", "max_issues_repo_head_hexsha": "aa782820a5ca5a01909ed3c32acbada43f6cfa0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/content-5.tex", "max_forks_repo_name": "CrazyIvanPro/Optimal_Transport", "max_forks_repo_head_hexsha": "aa782820a5ca5a01909ed3c32acbada43f6cfa0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-03T17:07:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-03T17:07:01.000Z", "avg_line_length": 76.2564841499, "max_line_length": 545, "alphanum_fraction": 0.553078115, "num_tokens": 10943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7826624738835052, "lm_q1q2_score": 0.5793066972986017}}
{"text": "\\documentclass[]{report}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{bm}\n\\usepackage{graphicx}\n\n\\graphicspath{ {images/} }\n\n\\title{CSCI 567 HW \\# 4}\n\\author{Mohmmad Suhail Ansari \\\\ USC ID: 8518586692\\\\e-mail: mohmmada@usc.edu}\n\n\\begin{document}\n\n\\maketitle\n\n\\paragraph{Sol. 1.1}\n\tGiven, \n\t\\[ L(y_i, \\hat{y}_i) = (y_i - \\hat{y}_i)^2\\]\n\tthen, its derivative w.r.t $g_i$ will be\n\t\\[ \\frac{\\partial{L(y_i, \\hat{y}_i)}}{\\partial{g_i}} = -2(y_i - \\hat{y}_i)\\]\n\n\\paragraph{Sol. 1.2}\n\tOptimal $h^{*}$ can be found by calculating the gradient of \n\t\\[ L = \\sum_{i = 1}^{n} (-g_i - \\gamma h(x_i)) \\]\n\t w.r.t $\\gamma$ and equating it to 0, i.e.\n\t \\[ \\frac{\\partial{L}}{\\partial{\\gamma}} = \\sum_{i = 1}^{n} - 2h(x_i)(-g_i - \\gamma h(x_i)) = 0\\]\n\t \\[ = \\sum_{i = 1}^{n} h(x_i) g_i + \\gamma h^2 (x_i) = 0\\]\n\t \\[ \\gamma^{*} = \\frac{\\sum_{i = 1}^{n}  2 h(x_i)(y_i - \\hat{y}_i)}{\\sum_{i = 1}^{n} h^2 (x_i)}\\]\n\n\\paragraph{Sol. 1.3}\n\tSimilar to 1.2 we again we select $\\alpha^{*}$ which will minimize \n\t\\[ \\sum_{i = 1}^{n}  L(y_i, \\hat{y_i} + \\alpha h^{*}(x_i)) \\]\n\t$\\therefore$, we get\n\t\\[ \\frac{\\partial{\\sum_{i = 1}^{n}  L}}{\\partial{\\alpha}} = \\sum_{i = 1}^{n} -2 h^{*}(x_i) (y_i - \\hat{y_i} - \\alpha h^{*}(x_i)) = 0 \\]\n\t\\[ = \\sum_{i = 1}^{n}  -2h^{*}(x_i)(y_i - \\hat{y}_i) + 2 \\alpha h^{*2}(x)) = 0\\]\n\t\\[ \\alpha^{*} = \\frac{\\sum_{i = 1}^{n} h^{*}(x_i)(y_i - \\hat{y_i})}{\\sum_{i = 1}^{n} h^{*2}(x_i)}\\]\n\t\n\tFinally, the update rule can be given as \n\t\\[ \\hat{y_i} \\leftarrow \\hat{y_i} + \\alpha^{*} h^{*}(x_i) \\]\n\t\\[ \\hat{y_i} \\leftarrow \\hat{y_i} + \\bigg[ \\frac{\\sum_{i = 1}^{n} h^{*}(x_i)(y_i - \\hat{y_i})}{\\sum_{i = 1}^{n} h^{*2}(x_i)}  \\bigg] \\bigg[ \\frac{\\sum_{\n\ti = 1}^{n}  2 h(x_i)(y_i - \\hat{y}_i)}{\\sum_{i = 1}^{n} h^2 (x_i)} \\bigg] \\]\n\t\\[ \\hat{y_i} \\leftarrow \\hat{y_i} + 2{\\bigg[ \\frac{\\sum_{i = 1}^{n} h^{*}(x_i)(y_i - \\hat{y_i})}{\\sum_{i = 1}^{n} h^{*2}(x_i)} \\bigg]}^2 \\]\n\n\\paragraph{Sol. 2.1}\n\tLet our neural network consist of N input nodes, M linear activation hidden layers and K output nodes. Let $z_j^{L_i}$ be the $j^{th}$ node in the $L_{i}$ layer and $w_j^{L_{i}}$ be the weight, the we can write the output of the first hidden layer\n\t\\[ z_j = \\sum_{i=0}^{N} w_{ji}^{L_1} x_i \\]\n\tThen for the second hidden layer, we have\n\t\\[ z_j = \\sum_{i=0}^{L_1} w_{ji}^{L_2} z_i \\]\n\tand now, finally the output of the final output layer can be given as \n\t\\[ y_j = \\sigma (\\sum_{i=0}^{L_{M-1}} w_{ji}^{L_M} z_i) \\]\n\t\\[ = \\sigma (\\sum_{i=0}^{L_{M-1}} w_{ji}^{L_M} \\sum_{i=0}^{L_{M-2}} z_i) \\]\n\t\\[ = \\sigma (\\sum_{i=0}^{L_{M-1}} w_{ji}^{L_M} \\sum_{i=0}^{L_{M-2}} w_{ji}^{L_M-1} \\hdots \\sum_{i=0}^{N} w_j^{L_1} x_i) \\]\n\n\tWhich can be written as \n\t\\[ y_j = \\sigma (WX) \\]\n\twhere \n\t\\[ W = \\sum_{i=0}^{L_{M-1}} w_{ji}^{L_M} \\sum_{i=0}^{L_{M-2}} w_{ji}^{L_M-1} \\hdots \\sum_{i=0}^{N} w_{ji}^{L_1}\\]\n\tand \n\t\\[ X = [x_1 \\quad x_2\\quad x_3 \\hdots x_N ]\\]\n\n\tWhich is similar to Logistic Regression.\n\n\\paragraph{Sol. 
\\paragraph{Sol. 2.2}\n\tGiven \n\t\\[ L(y, \\hat{y}) = \\frac{1}{2}((y_1 - \\hat{y_1})^2 + (y_2 - \\hat{y_2})^2)  = \\frac{1}{2} \\sum_{j=1}^2 (y_j - \\hat{y}_j)^2\\]\t\n\twhere \n\t\\[ \\hat{y}_j = \\sum_{k=1}^{4} v_{jk} z_k\\] and \n\t\\[ z_k = \\tanh\\Big(\\sum_{i=1}^{3} w_{ki} x_i\\Big) \\]\n\n\tTaking the partial derivative of $L$ w.r.t.\\ $v_{jk}$, we get\n\t\\[ \\frac{\\partial{L}}{\\partial{v_{jk}}} = -z_k (y_j - \\hat{y}_j)\\]\n\tWe know that\n\t\n\t\\[ \\frac{\\partial{\\tanh(x)}}{\\partial{x}} = 1 - {\\tanh}^{2} (x) \\]\n\n\t$\\therefore$\n\n\t\\[ \\frac{\\partial{\\tanh(\\sum_{i=1}^{3} w_{ki} x_i)}}{\\partial{w_{ki}}} = \\Big(1 - {\\tanh}^{2} \\Big(\\sum_{i=1}^{3} w_{ki} x_i\\Big)\\Big)x_i \\]\n\t\n\tSince $z_k$ feeds both outputs, the partial derivative of $L$ w.r.t.\\ $w_{ki}$ sums over $j$:\n\t\\[ \\frac{\\partial{L}}{\\partial{w_{ki}}} = -\\sum_{j=1}^{2} v_{jk} (y_j - \\hat{y}_j)\\Big(1 - {\\tanh}^{2} \\Big(\\sum_{i=1}^{3} w_{ki} x_i\\Big)\\Big)x_i \\]\n\t\n\tTherefore we get the update rules\n\n\t\\[ w_{ki} = w_{ki} + \\eta \\sum_{j=1}^{2} v_{jk} (y_j - \\hat{y}_j)\\Big(1 - {\\tanh}^{2} \\Big(\\sum_{i=1}^{3} w_{ki} x_i\\Big)\\Big)x_i\\]\n\tand \n\t\\[ v_{jk} = v_{jk} + \\lambda z_k (y_j - \\hat{y}_j)\\]\n\n\n\\paragraph{Sol. 3.1}\n\tLinear Activations\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c}\n\t\t\t\\hline\n\t\t\t Architecture                        & Accuracy   &   Time(seconds) \\\\\n\t\t\t\\hline\n\t\t\t {[$d_{in}$, $d_{out}$]}                      & 83.4198\\%   &    3.0391190000 \\\\\n\t\t\t {[$d_{in}$, 50, $d_{out}$]}                  & 83.5544\\%   &    5.9478230000 \\\\\n\t\t\t {[$d_{in}$, 50, 50, $d_{out}$]}              & 84.0503\\%   &    8.1254930000 \\\\\n\t\t\t {[$d_{in}$, 50, 50, 50, $d_{out}$]}          & 84.4424\\%   &   10.2770940000 \\\\\n\t\t\t {[$d_{in}$, 50, $d_{out}$]}                  & 83.9965\\%   &    5.3710550000 \\\\\n\t\t\t {[$d_{in}$, 500, $d_{out}$]}                 & 84.1925\\%   &   22.6269160000 \\\\\n\t\t\t {[$d_{in}$, 500, 300, $d_{out}$]}            & 84.7153\\%   &   35.4776900000 \\\\\n\t\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}       & 85.1228\\%   &   66.4187950000 \\\\\n\t\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]}  & 85.1882\\%   &  103.6037080000 \\\\\n\t\t\t %{[$d_{in}$, $d_{out}$]}                      & 55.7144\\%   &         5.05344 \\\\\n\t\t\t %{[$d_{in}$, 50, $d_{out}$]}                  & 30.3887\\%   &         6.62551 \\\\\n\t\t\t %{[$d_{in}$, 50, 50, $d_{out}$]}              & 64.0218\\%   &         8.98298 \\\\\n\t\t\t %{[$d_{in}$, 50, 50, 50, $d_{out}$]}          & 50.6439\\%   &        10.83295 \\\\\n\t\t\t %{[$d_{in}$, 50, $d_{out}$]}                  & 39.4034\\%   &         6.70223 \\\\\n\t\t\t %{[$d_{in}$, 500, $d_{out}$]}                 & 51.2782\\%   &        35.09004 \\\\\n\t\t\t %{[$d_{in}$, 500, 300, $d_{out}$]}            & 57.7173\\%   &        56.51479 \\\\\n\t\t\t %{[$d_{in}$, 800, 500, 300, $d_{out}$]}       & 72.3023\\%   &       109.36959 \\\\\n\t\t\t %{[$d_{in}$, 800, 800, 500, 300, $d_{out}$]}  & 72.3369\\%   &       167.03330 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tBoth the accuracy and the training time increase with the number of hidden layers and with the number of nodes in each hidden layer. (A quick numerical check of the gradient formulas derived in Sol.~2.2 follows.)\n\n
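The following self-contained check perturbs each $w_{ki}$ and compares the finite-difference quotient of the loss with the analytic gradient from Sol.~2.2 (made-up shapes: 3 inputs, 4 tanh hidden units, 2 outputs):\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nW = rng.normal(size=(4, 3))   # hidden weights w_ki\nV = rng.normal(size=(2, 4))   # output weights v_jk\nx, y = rng.normal(size=3), rng.normal(size=2)\n\ndef loss(W, V):\n    z = np.tanh(W @ x)\n    return 0.5 * ((y - V @ z) ** 2).sum()\n\nz = np.tanh(W @ x)\nr = y - V @ z\ndV = -np.outer(r, z)                          # dL/dv_jk = -z_k (y_j - yhat_j)\ndW = -np.outer((V.T @ r) * (1 - z ** 2), x)   # dL/dw_ki, summed over j\n\neps = 1e-6\nfor k in range(4):\n    for i in range(3):\n        Wp = W.copy()\n        Wp[k, i] += eps\n        num = (loss(Wp, V) - loss(W, V)) / eps\n        assert abs(num - dW[k, i]) < 1e-4\n\\end{verbatim}\n\n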
\\paragraph{Sol. 3.2}\n\tSigmoid Activations\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c}\n\t\t\t\\hline\n\t\t\t Architecture                        & Accuracy   &   Time(seconds) \\\\\n\t\t\t\\hline\n\t\t\t {[$d_{in}$, 50, $d_{out}$]}                 & 74.8280\\%   &    9.4715190000 \\\\\n \t\t\t {[$d_{in}$, 500, $d_{out}$]}                & 76.7309\\%   &   77.9355170000 \\\\\n \t\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & 72.2639\\%   &  122.3402530000 \\\\\n \t\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & 72.2639\\%   &  241.0532810000 \\\\\n \t\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & 72.2639\\%   &  363.2464260000 \\\\\n\t\t\t %{[$d_{in}$, 50, $d_{out}$]}                 & 72.3369\\%   &         7.36070 \\\\\n\t\t\t %{[$d_{in}$, 500, $d_{out}$]}                & 72.3484\\%   &        75.58726 \\\\\n\t\t\t %{[$d_{in}$, 500, 300, $d_{out}$]}           & 72.3408\\%   &       120.31576 \\\\\n\t\t\t %{[$d_{in}$, 800, 500, 300, $d_{out}$]}      & 72.3408\\%   &       238.32516 \\\\\n\t\t\t %{[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & 72.3408\\%   &       362.21167 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tThe accuracy first increases when one hidden layer is used, but then decreases and remains stable for the rest of the models as the number of hidden layers increases.\n\n\tI believe that the lower accuracy (compared to the linear activation function) is the result of the sigmoid activation function's property of ``killing the gradient'': when a sigmoid neuron's activation saturates at either tail of 0 or 1, the gradient in these regions is almost zero. Recall that during backpropagation, this (local) gradient is multiplied by the gradient of this gate's output with respect to the whole objective.\n\n\\paragraph{Sol. 3.3}\n\tReLU Activations\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c}\n\t\t\t\\hline\n\t\t\t Architecture                        & Accuracy   &   Time(seconds) \\\\\n\t\t\t\\hline\n\t\t\t {[$d_{in}$, 50, $d_{out}$]}                 & 82.3165\\%   &         5.90191 \\\\\n\t\t\t {[$d_{in}$, 500, $d_{out}$]}                & 82.0282\\%   &        35.21581 \\\\\n\t\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & 81.8706\\%   &        56.89574 \\\\\n\t\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & 81.0326\\%   &       109.04267 \\\\\n\t\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & 80.1253\\%   &       166.93421 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tThe accuracy decreases with an increasing number of hidden layers and an increasing number of nodes per hidden layer.\n\n\tThe accuracy is still lower than with linear activations, which I believe is because ReLU neurons can ``die'': a large gradient flowing through a ReLU neuron can cause the weights to update in such a way that the neuron will never activate on any data point again. If this happens, the gradient flowing through the unit will forever be zero from that point on.\n\n\tThe training time with ReLU neurons is longer than with linear neurons but shorter than with sigmoid neurons. It is shorter than with sigmoid neurons because of ReLU's linear, non-saturating form, which can be implemented by simply thresholding a matrix of activations at zero. A small illustration of both failure modes follows.\n\n
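\\begin{verbatim}\n# saturating sigmoid gradients vs. dead ReLU gradients (illustrative)\nimport numpy as np\n\nz = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])   # pre-activations\nsig = 1.0 / (1.0 + np.exp(-z))\nprint(sig * (1.0 - sig))        # sigmoid gradient: ~4.5e-05 at |z| = 10\nprint((z > 0).astype(float))    # ReLU gradient: exactly 0 for z <= 0\n\\end{verbatim}\n\n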
\\paragraph{Sol. 3.4}\n\tL2 Regularization\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c|c}\n\t\t\t\\hline\n\t\t\t Architecture                   &       $\\lambda$ & Accuracy   &   Time(seconds) \\\\\n\t\t\t\\hline\n\t\t\t{[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-7}$   & 80.5559\\%   &  117.7750190000 \\\\\n\t\t\t{[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7} $  & 80.5059\\%   &  117.2375600000 \\\\\n\t\t\t{[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-6}$  & 81.3901\\%   &  117.9980010000 \\\\\n\t\t\t{[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-6} $  & 80.8019\\%   &  118.1240910000 \\\\\n\t\t\t{[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-5}$  & 80.6135\\%   &  118.6769920000 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tBest $\\lambda = 10^{-6}$\n\n\tThe accuracy alternately increases and decreases with increasing $\\lambda$ (although the differences are quite small). We see this trend because we are searching for the $\\lambda$ that generalizes best, and since the candidate values of $\\lambda$ are close to each other, the differences in their accuracies are correspondingly small.\n\n\\paragraph{Sol. 3.5}\n\tEarly Stopping and L2-regularization\n\t\\begin{center}\n\t\\begin{tabular}{l|c|c|c}\n\t\t\\hline\n\t\t Architecture                   &       $\\lambda$ & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-7}$ & 79.8370\\%   &  110.0074800000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7} $ & 78.7606\\%   &  112.0828270000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-6}$ & 80.5251\\%   &  111.0474780000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-6} $ & 79.7832\\%   &  110.7920260000 \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\tBest $\\lambda = 10^{-6}$\n\n\tThe trend is similar to L2 regularization without early stopping. Since the best $\\lambda$ is the same in both cases and so is the accuracy trend, I believe early stopping did help: it takes less time to achieve similar results.\n\n\\paragraph{Sol. 3.5}\n\tSGD with weight decay\n\t\\begin{center}\n\t\\begin{tabular}{l|c|c|c|c}\n\t\\hline\n\t Architecture                   & Lambda   & Decay Rate   & Accuracy   &   Time(seconds) \\\\\n\t\\hline\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $10^{-5}$      & 72.4791\\%   &  424.3529670000 \\\\\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $5^{-5}$       & 73.2941\\%   &  413.6669020000 \\\\\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $10^{-4}$      & 74.1322\\%   &  429.5304860000 \\\\\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $3^{-4}$       & 72.5445\\%   &  409.1038830000 \\\\\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $7^{-4}$       & 65.3097\\%   &  391.0522660000 \\\\\n\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $10^{-3}$      & 72.3446\\%   &  393.0722790000 \\\\\n\t\\hline\n\t\\end{tabular}\n\t\\end{center}\n\n\tBest Decay Rate $ = 10^{-4}$\n
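\nFor reference, a generic sketch of the update rule being tuned in these experiments; the $1/(1 + \\mathrm{decay} \\cdot t)$ schedule is my assumption, not necessarily the one used:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sgd_step(w, grad, vel, lr, t, lam=1e-6, decay=1e-4, momentum=0.99):\n    # decayed learning rate, L2-regularized gradient, heavy-ball momentum\n    lr_t = lr / (1.0 + decay * t)\n    g = grad + 2.0 * lam * w           # gradient of loss + lam * ||w||^2\n    vel = momentum * vel - lr_t * g\n    return w + vel, vel\n\nw, vel = np.ones(3), np.zeros(3)\nfor t in range(1000):\n    w, vel = sgd_step(w, 2.0 * w, vel, lr=0.1, t=t)   # toy loss ||w||^2\nprint(w)   # slowly approaches zero\n\\end{verbatim}\n\n\\paragraph{Sol. 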
3.6}\n\tMomentum\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c|c|c}\n\t\t\\hline\n\t\t Architecture                   &   Decay Rate &     Momentum & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-4}$ & 0.99 & 84.6384\\%   &  188.7084680000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-4}$ & 0.98 & 82.3704\\%   &  188.8004400000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-4}$ & 0.95 & 78.8606\\%   &  189.2771110000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-4}$ & 0.90 & 74.0822\\%   &  189.9112340000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-4}$ & 0.85 & 75.6929\\%   &  189.8105580000 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\end{center}\n\n\tBest Momentum $= 0.99$\n\n\\paragraph{Sol. 3.7}\n\tCombination\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c|c|c|c}\n\t\t\\hline\n\t\t Architecture                   &       Lambda &   Decay Rate &     Momentum & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]} & $10^{-6}$ & $10^{-4}$ & 0.99 & 85.6264\\%   &  366.8762140000 \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tYes, the accuracy for this combination is the best we have achieved so far.\n\n\n\\paragraph{Sol. 3.8}\n\tGrid search with Cross-Validation:\n\t\\begin{center}\n\t\t\\begin{tabular}{l|c|c|c|c|c}\n\t\t\\hline\n\t\t Architecture                        & Lambda   & Decay Rate   &     Momentum & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-7}$  & $10^{-5}$      & 0.99 & 84.6807\\%   &   20.2832450000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-7}$  & $5^{-5}$       & 0.99 & 80.1753\\%   &    3.3854140000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-7}$  & $10^{-4}$      & 0.99 & 84.3002\\%   &   20.1343490000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-7}$   & $10^{-5}$      & 0.99 & 85.0459\\%   &   20.6533200000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-7}$   & $5^{-5}$       & 0.99 & 84.2848\\%   &   20.4618000000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-7}$   & $10^{-4}$      & 0.99 & 84.0541\\%   &   20.3314530000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-6}$  & $10^{-5}$      & 0.99 & 84.2002\\%   &   19.6094510000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-6}$  & $5^{-5}$       & 0.99 & 84.0234\\%   &   19.9445860000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-6}$  & $10^{-4}$      & 0.99 & 83.3737\\%   &   19.9657800000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-6}$   & $10^{-5}$      & 0.99 & 85.3765\\%   &   19.4346490000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-6}$   & $5^{-5}$       & 0.99 & 84.1118\\%   &   19.6701670000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $5^{-6}$   & $10^{-4}$      & 0.99 & 82.9009\\%   &   19.8740320000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-5}$  & $10^{-5}$      & 0.99 & 85.2036\\%   &   19.9823050000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-5}$  & $5^{-5}$       & 0.99 & 83.9888\\%   &   20.1908340000 \\\\\n\t\t {[$d_{in}$, 50, $d_{out}$]}    & $10^{-5}$  & $10^{-4}$      & 0.99 & 83.3237\\%   &   20.2327580000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-7}$  & $10^{-5}$      & 0.99 & 85.1228\\%   &  109.6329600000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-7}$  & $5^{-5}$       & 0.99 & 84.6846\\%   &  110.6199690000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-7}$  & $10^{-4}$      & 0.99 & 84.5039\\%   &  110.8651400000 \\\\\n\t\t 
{[$d_{in}$, 500, $d_{out}$]}   & $5^{-7}$   & $10^{-5}$      & 0.99 & 81.2363\\%   &   12.2082780000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $5^{-7}$   & $5^{-5}$       & 0.99 & 84.7230\\%   &  110.2654860000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $5^{-7}$   & $10^{-4}$      & 0.99 & 81.4862\\%   &   12.9311620000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-6}$  & $10^{-5}$      & 0.99 & 85.0613\\%   &  109.4635540000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-6}$  & $5^{-5}$       & 0.99 & 81.4900\\%   &   10.8740570000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-6}$  & $10^{-4}$      & 0.99 & 84.4847\\%   &  110.2498010000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $5^{-6}$   & $10^{-5}$      & 0.99 & 85.1267\\%   &  109.9373850000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $5^{-6}$   & $5^{-5}$       & 0.99 & 84.7422\\%   &  112.1203260000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $5^{-6}$   & $10^{-4}$      & 0.99 & 80.5405\\%   &   12.0689210000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-5}$  & $10^{-5}$      & 0.99 & 85.0998\\%   &  110.8005040000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-5}$  & $5^{-5}$       & 0.99 & 84.7615\\%   &  109.8176930000 \\\\\n\t\t {[$d_{in}$, 500, $d_{out}$]}   & $10^{-5}$  & $10^{-4}$      & 0.99 & 84.5270\\%   &  110.5633620000 \\\\\n\t\t\\end{tabular}\n\n\t\t\\begin{tabular}{l|c|c|c|c|c}\n\t\t\\hline\n\t\t Architecture                        & Lambda   & Decay Rate   &     Momentum & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-7}$  & $10^{-5}$      & 0.99 & 85.6072\\%   &  182.9771570000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-7}$  & $5^{-5}$       & 0.99 & 79.6717\\%   &   19.3831510000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-7}$  & $10^{-4}$      & 0.99 & 78.8990\\%   &   19.4750520000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-7}$   & $10^{-5}$      & 0.99 & 85.8071\\%   &  182.3738550000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-7}$   & $5^{-5}$       & 0.99 & 79.4795\\%   &   19.3143720000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-7}$   & $10^{-4}$      & 0.99 & 85.1920\\%   &  182.6694480000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-6}$  & $10^{-5}$      & 0.99 & 86.4568\\%   &  181.8394820000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-6}$  & $5^{-5}$       & 0.99 & 85.3958\\%   &  182.5855900000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-6}$  & $10^{-4}$      & 0.99 & 85.2843\\%   &  182.2573850000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-6}$   & $10^{-5}$      & 0.99 & 86.1608\\%   &  183.5565350000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-6}$   & $5^{-5}$       & 0.99 & 85.5726\\%   &  184.1752500000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $5^{-6}$   & $10^{-4}$      & 0.99 & 78.7529\\%   &   18.0114930000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-5}$  & $10^{-5}$      & 0.99 & 86.1069\\%   &  181.5487010000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-5}$  & $5^{-5}$       & 0.99 & 85.4880\\%   &  181.8470030000 \\\\\n\t\t {[$d_{in}$, 500, 300, $d_{out}$]}           & $10^{-5}$  & $10^{-4}$      & 0.99 & 85.0344\\%   &  182.3495300000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-7}$  & $10^{-5}$      & 0.99 & 86.8796\\%   &  370.1537460000 \\\\\n\t\t 
{[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-7}$  & $5^{-5}$       & 0.99 & 86.3107\\%   &  372.6882040000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-7}$  & $10^{-4}$      & 0.99 & 85.9647\\%   &  375.3052920000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-7}$   & $10^{-5}$      & 0.99 & 77.2498\\%   &   31.2794710000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-7}$   & $5^{-5}$       & 0.99 & 86.3453\\%   &  367.1678460000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-7}$   & $10^{-4}$      & 0.99 & 85.9493\\%   &  371.9213370000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-6}$  & $10^{-5}$      & 0.99 & 86.5260\\%   &  369.9097790000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-6}$  & $5^{-5}$       & 0.99 & 86.3376\\%   &  369.4920700000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-6}$  & $10^{-4}$      & 0.99 & 78.1340\\%   &   35.0205110000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-6}$   & $10^{-5}$      & 0.99 & 86.9681\\%   &  373.8522750000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-6}$   & $5^{-5}$       & 0.99 & 86.5490\\%   &  381.4162190000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $5^{-6}$   & $10^{-4}$      & 0.99 & 85.8263\\%   &  394.7567470000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-5}$  & $10^{-5}$      & 0.99 & 77.4036\\%   &   32.6861520000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-5}$  & $5^{-5}$       & 0.99 & 86.3453\\%   &  391.6142450000 \\\\\n\t\t {[$d_{in}$, 800, 500, 300, $d_{out}$]}      & $10^{-5}$  & $10^{-4}$      & 0.99 & 76.4733\\%   &   33.5267370000 \\\\\n\t  \t\\end{tabular}\n\n\t\t\\begin{tabular}{l|c|c|c|c|c}\n\t\t\\hline\n\t\t Architecture                        & Lambda   & Decay Rate   &     Momentum & Accuracy   &   Time(seconds) \\\\\n\t\t\\hline\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-7}$  & $10^{-5}$      & 0.99 & 87.5716\\%   &  587.6533990000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-7}$  & $5^{-5}$       & 0.99 & 74.4282\\%   &   48.7787180000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-7}$  & $10^{-4}$      & 0.99 & 73.1865\\%   &   48.3710490000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $10^{-5}$      & 0.99 & 74.5627\\%   &   48.9536650000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $5^{-5}$       & 0.99 & 72.6444\\%   &   54.2774970000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-7}$   & $10^{-4}$      & 0.99 & 86.0800\\%   &  583.3533200000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-6}$  & $10^{-5}$      & 0.99 & 87.3409\\%   &  575.2738650000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-6}$  & $5^{-5}$       & 0.99 & 74.6896\\%   &   48.5039770000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-6}$  & $10^{-4}$      & 0.99 & 74.9779\\%   &   48.4011350000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-6}$   & $10^{-5}$      & 0.99 & 87.5293\\%   &  580.0695650000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-6}$   & $5^{-5}$       & 0.99 & 73.9938\\%   &   47.8524650000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $5^{-6}$   & $10^{-4}$      & 0.99 & 86.1761\\%   &  575.5397880000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-5}$  & $10^{-5}$      & 0.99 & 87.7100\\%   &  
568.8467470000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-5}$  & $5^{-5}$       & 0.99 & 73.8708\\%   &   47.4766680000 \\\\\n\t\t {[$d_{in}$, 800, 800, 500, 300, $d_{out}$]} & $10^{-5}$  & $10^{-4}$      & 0.99 & 86.5952\\%   &  569.0146650000 \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\n\tThe best configuration we could obtain is \n\t\\[ Architecture = [d_{in}, \\quad 800, \\quad 800, \\quad 500, \\quad 300, \\quad d_{out}] \\]\n\t\\[ \\lambda = 10^{-05} \\]\n\t\\[ Decay = 10^{-05} \\]\n\t\\[ Momentum = 0.99 \\]\n\t\\[ Accuracy  = 87.71\\% \\]\n\n\\end{document}", "meta": {"hexsha": "249a0a72bcf97b04cf9e99cc4d93fa340b0fd0b1", "size": 22270, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW2-6/HW4/hw.tex", "max_stars_repo_name": "suhail-ansari/Machine-Learning-Algortihms", "max_stars_repo_head_hexsha": "e116c28848a2cb2132a09fcfdc0301ae89ebcf8b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW2-6/HW4/hw.tex", "max_issues_repo_name": "suhail-ansari/Machine-Learning-Algortihms", "max_issues_repo_head_hexsha": "e116c28848a2cb2132a09fcfdc0301ae89ebcf8b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW2-6/HW4/hw.tex", "max_forks_repo_name": "suhail-ansari/Machine-Learning-Algortihms", "max_forks_repo_head_hexsha": "e116c28848a2cb2132a09fcfdc0301ae89ebcf8b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.0136986301, "max_line_length": 462, "alphanum_fraction": 0.4707678491, "num_tokens": 10184, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760728, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5793066958315136}}
{"text": "\\chapter{Description of Utilities in \\numbers}\n\\label{c:numerics}\n\nEach utility and the numerical algorithm used in the utility are\ndescribed in this chapter. Although most of the algorithms are\nwell-known and documented in several references, it is important to\nunderstand the actual methods used in each of the utilities to avoid\nmisinterpretation of the output and to understand the limitations and\nassumptions used to calculate the properties and values. \n\nAnother reason for describing the algorithms is that \\numbers\\ is\nwritten as a basic shell program for reading \\exo\\ files.  If additional\nutilities are developed for \\numbers, it is likely that several of the\nalgorithms already implemented can be modified to perform the new\nfunction. \n\nThis chapter is organized into several sections; each utility is\ndescribed in a separate section.  The descriptions include the function\nof the utility and the algorithm used in the utility including its\nassumptions and limitations.  The complete command syntax for each\nutility is given in Section~\\ref{sec:oper}; a summary of the command\nsyntax is given in Section~\\ref{sec:opsum}.  Examples of the use of most\nof the utilities are presented in Chapter~\\ref{c:examples}. \n\n\\section{Mass Properties Utility}\\label{sec:mass}\n\nThe mass properties utility calculates the volume, mass, mass moments of\ninertia, and centroid location of the body.  These values are often\nrequired in a finite element analysis to ensure that the finite element\nidealization correctly approximates the actual body.  This utility also\nlists the minimum, maximum, and average element volume for each material\nblock.  A summary of the equations used to calculate the mass properties\nis given in this section. \n\nThe moments of inertia are originally calculated about the origin of the\ncoordinate system and then transferred to the centroid using the\nparallel-axis theorem. The mass and the moments of inertia about the\norigin of the coordinate system are given by the following integrals: \n\\begin{eqnarray}\n M      &=& \\int_V \\rho dV                         \\label{V}\\\\\n I_x    &=& \\int_V \\rho\\left(y^2 + z^2\\right)\\,dV \\label{IX}\\\\\n I_y    &=& \\int_V \\rho\\left(x^2 + z^2\\right)\\,dV \\label{IY}\\\\\n I_z    &=& \\int_V \\rho\\left(x^2 + y^2\\right)\\,dV \\label{IZ}\\\\\n I_{xy} &=& \\int_V \\rho xy\\,dV                    \\label{IXY}\\\\\n I_{xz} &=& \\int_V \\rho xz\\,dV                    \\label{IXZ}\\\\\n I_{yz} &=& \\int_V \\rho yz\\,dV                    \\label{IYZ}\n\\end{eqnarray}\n\nwhere $M$ is the mass, $\\rho$ is the density, $I$ is the mass moment\nof inertia, and the subscripts $x$, $y$, and $z$ denote the axes about\nwhich the moment is measured.  The double subscripts indicate products\nof inertia.  Note that the product of inertia with respect to any two\northogonal axes is zero if either of the axes is an axis of symmetry. \n\n\\paragraph{Axisymmetric Two-dimensional Body:}\nFor an axisymmetric two-dimensional body, the integrals are written in\ncylindrical coordinates using the radius $r$ and the angle $\\theta$,\nwhere $x = r\\cos\\theta$ and $z = r\\sin\\theta$.  The $y$-axis is assumed\nto be the axis of revolution.  
The infinitesimal volume $dV$ is equal to\n$r\\,dA\\,d\\theta$ where $dA$ is an infinitesimal area equal to $dr\\,dy$.\nRewriting Equations \\eref{V} through~\\eref{IZ} in terms of\n$r$ and $\\theta$, and performing the theta integration from $-\\pi$ to\n$\\pi$ gives: \n\\begin{eqnarray}\nM   &=& 2\\pi\\rho\\int_A r\\,dA   \\\\\nI_y &=& 2\\pi\\rho\\int_A r^3\\,dA   \\\\\nI_x = I_z &=& 2\\pi\\rho\\int_A r y^2\\,dA + \\pi\\rho \\int_A r^3\\,dA\\nonumber \\\\\n    &=& 2\\pi\\rho\\int_A r y^2\\,dA + {I_y\\over2}  \n\\end{eqnarray}\nThe products of inertia $I_{xy}$, $I_{xz}$, and $I_{yz}$ are zero since\nall three axes are symmetry axes.  \n\n\\paragraph{Two-dimensional Planar Body:}\nFor a two-dimensional planar body, area moments of inertia are\ncalculated; the depth in the $z$-direction is ignored. The integrals\nsimplify as follows: \n\\begin{eqnarray}\n I_x    &=& \\rho\\int_A y^2\\,dA                   \\\\\n I_y    &=& \\rho\\int_A x^2\\,dA                   \\\\\n I_z    &=& \\rho\\int_A \\left(x^2 + y^2\\right)\\,dA = I_x + I_y \\\\\n I_{xy} &=& \\rho\\int_A xy\\,dA                    \n\\end{eqnarray}\nThe products of inertia $I_{xz}$ and $I_{yz}$ are both zero.\n\n\\paragraph*{Evaluation of Integrals:} The integrals are evaluated using\nthe isoparametric element formulation and Gaussian Quadrature. Details\nof this process can be found in most books on finite element\nmethodology.  In \\numbers, two options are available for integration of\nthe equations.  The two-dimensional equations can be integrated with\neither 1- or 4-point quadrature, and the three-dimensional equations\nwith either 1- or 8-point quadrature.  The mass moments of inertia for\nbodies with non-rectangular elements are calculated more accurately with\nthe higher-order integration; volumes and areas are integrated exactly\nwith either integration option.  The second-order quadrature rule is\nalso useful for calculating the section properties of a non-standard\nshape.  For this purpose, the discretization of the body is only refined\nenough to capture the essential details of the shape since a structural\nanalysis will not be performed. \n\n\\paragraph{Calculation of Centroid Location:} The location of the\ncentroid is calculated using the first mass moments about each of the\nthree coordinate axes.  The mass moments are summed in each of the three\ncoordinate directions over the range of the elements.  The centroid\nlocation is then given by the quotient of the mass moment sums divided\nby the total mass of the body.  For a two-dimensional axisymmetric body,\n$x_c$ and $z_c$ (the $x$ and $z$ coordinates of the centroid) are zero;\nfor a two-dimensional planar body, $z_c$ is zero. \n  \n\\section{Locate Utility}\\label{sec:locate}\n\nThe locate utility outputs the numbers of all nodes or elements that are\nwithin a specified distance from a user-defined point, line, or plane.\nThis information is often required for plotting results at a specified\nlocation in the model, for example, the acceleration or velocity at the\ncenterline of a body during an impact analysis.  The equations used to\ncalculate the distances from a node or element centroid to the point,\nline, or plane are defined in the following subsections.  All equations\nare defined in terms of a three-dimensional body; for a two-dimensional\nbody the $z$ coordinates are deleted. 
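\n\nBefore detailing the individual options, the following is a minimal sketch (in Python, not the actual \\numbers\\ implementation) of the filtering step that all of the location options share, assuming a list of nodal coordinates and a distance function appropriate to the chosen option:\n\\begin{verbatim}\n# Hypothetical sketch of the locate filtering step.  dist_fn\n# computes the distance from one node to the user-defined\n# point, line, or plane.\ndef locate(nodes, dist_fn, distance, tolerance):\n    found = []\n    for number, coords in enumerate(nodes, start=1):\n        d = dist_fn(coords)\n        # keep entities whose distance lies within the\n        # user-specified distance plus or minus the tolerance\n        if abs(d - distance) <= tolerance:\n            found.append((number, coords, d))\n    return found\n\\end{verbatim}\nThe same loop applies to elements if the element geometric centers are used in place of the nodal coordinates.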
\n\nNote that the point location procedure is also a circle (2D) and sphere\n(3D) location procedure and the line location procedure is also a\ncylinder (3D) location procedure since the search interval distance from\nthe point and line can be specified. \n  \nIn the location utility, the distance for an element is defined to be\nthe distance to the element geometric center. The location of the\nelement center is calculated by summing the four or eight nodal\ncoordinates in each coordinate direction and dividing by the number of\nnodes. \n\n\\subsection{Point Location}\\label{sec:plocate}\nThe point location utility outputs all nodes or elements that are a\nspecified distance from a user-specified point.  The distance $d_r$ from\nthe node or element to the point ($x_0$, $y_0$, $z_0$) is given by: \n\\begin{equation}\nd_r = \\sqrt{ (x_0-x_n)^2 + (y_0-y_n)^2 + (z_0-z_n)^2}\n\\end{equation}\nwhere $x_n$, $y_n$, and $z_n$ are the coordinates of the node or\nelement.  The distance is calculated for each node or element and\ncompared to the user-specified distance and tolerance.  If the node or\nelement is within this range, its number and coordinates are output. The\nangles $\\theta$ and $\\phi$ are also calculated.  The angle $\\theta$ is\nthe angle between the $x$ axis and the projection of the line from the\npoint to the node onto the $x$--$z$ plane.  The angle $\\phi$ is the\nangle between the $y$ axis and the line from the point to the node.  In\ntwo dimensions, $\\theta$ is the angle between the line and the $x$ axis. \nFigure~\\ref{f:theta} illustrates the definitions of $\\theta$ and $\\phi$.\n\n\\begin{figure}\n\\caption{Illustration of the Angles $\\theta$ and $\\phi$\nfor the Locate Point Utility}\\label{f:theta}\n\\end{figure}\n  \n\\subsection{Line Location}\\label{sec:llocate}\nThe line location utility outputs all nodes or elements that are within\na specified distance from a user-specified line.  The distance is\nmeasured normal to the line.   The parametric representation of the line\nfrom $P_1$($x_1$, $y_1$, $z_1$) to $P_2$($x_2$, $y_2$, $z_2$) is $P(t) =\nP_1 + (P_2 - P_1)t$.  The components of this line are: \n\\begin{equation}\n\\begin{array}{ccccc}\nx &=& x_1 + (x_2 - x_1) t &=& x_1 + at   \\\\\ny &=& y_1 + (y_2 - y_1) t &=& y_1 + bt   \\\\\nz &=& z_1 + (z_2 - z_1) t &=& z_1 + ct   \\\\\n\\end{array}\n\\end{equation}\nThe minimum distance $d_t$ from the line to the point ($x_n$, $y_n$, $z_n$) \nis\n\\begin{equation}\nd_t^2 = (x_1+at-x_n)^2 +(y_1+bt-y_n)^2 +(z_1+ct-z_n)^2 \n\\end{equation}\nwhere the parameter $t$ is \n\\begin{equation}\nt = -\\frac{a(x_1-x_n) + b(y_1-y_n) +\nc(z_1-z_n)}{a^2+b^2+c^2}\\label{parametric}\n\\end{equation}\n  \n\\subsection{Plane Location}\\label{sec:slocate}\nThe plane location utility outputs all nodes or elements that are within\na specified distance from a user-specified plane.  The distance is\nmeasured normal to the plane. A unique plane can be defined by a\nspecified point and a normal vector to the plane.  
Given the point\n($x_0$, $y_0$, $z_0$) and the unit vector $\\vec{n} = a\\vec{i} + b\\vec{j} +\nc\\vec{k}$, the equation of the plane is: \n\\begin{eqnarray}\n0 &=& ax + by + cz + d      \\label{plane} \\\\\nd &=& -( ax_0 + by_0 + cz_0 )\n\\end{eqnarray}\nThe normal distance $d_n$ from the plane to a node or\nelement center is: \n\\begin{equation}\nd_n = \\frac{\\left| a x_n + b y_n + c z_n + d\\right|}%\n                                    {\\sqrt{a^2 + b^2 + c^2}} \n\\end{equation}\nwhere the subscript $n$ refers to the coordinates of the node or\nelement.  The normal distance is calculated for each node or element and\ncompared to the user-specified distance and tolerance.  If the node or\nelement is within this range, its number, coordinates, normal distance,\nand radial distance are output.  The radial distance is the same\ndistance calculated in the point location utility. \n\n\\subsection{Sort Algorithm}\n\nAlthough sort is not a location option, it is used in the location\nutility to order the output.  \\numbers\\ uses the {\\em heapsort} routine\nwhich has been recommended in Reference~\\cite{Press:nr} as a good\nroutine for a variety of sorting applications.  It is an ``in-place''\nsort requiring no auxiliary storage.  It is an $N\\log_2N$ process, not\nonly on average, but also for worst-case order of input data.  A FORTRAN\nlisting of the sort subroutine is given in Appendix~\\ref{a:sort}.  One\ndisadvantage of the sort routine is that it is not ``stable.''  This\nmeans that sorting the data a second time on a different field will\ndestroy the order of the first sort.  A method for sorting on two or\nmore fields simultaneously is being investigated and will be implemented\nin a later version of \\numbers. \n\n\\subsection{Sort Fields}\\label{sortfields}\n\nThe output from the location utility can be sorted on any of the\ncalculated quantities or ``fields.''  The table below lists the fields\nthat are defined for each of the location options; these are defined\nfollowing the table.\n\n\\begingroup\\small\n\\tabcolsep=3pt\n\\begin{center}\n%\\begin{tabular}{|>{\\sf}l|c|  *{8}{>{\\sf}l}|  }\\hline\n\\begin{tabular}{|l|c|  *{8}{l}|  }\\hline\n\\rm Option  &2D/3D &\\multicolumn{8}{|c|}{\\rm Valid \\param{sort\\_fields}}\\\\\n\\hline\\hline\nPOINT  & 2D &  X  &  Y  &     &   & DISTANCE &            & THETA &     \\\\\nPOINT  & 3D &  X  &  Y  &  Z  &   & DISTANCE &            & THETA & PHI \\\\\nLINE   & 2D &  X  &  Y  &     & T & DISTANCE & PARAMETRIC &       &     \\\\\nLINE   & 3D &  X  &  Y  &  Z  & T & DISTANCE & PARAMETRIC &       &     \\\\\nPLANE  & 3D &  X  &  Y  &  Z  &   & DISTANCE &            & RADIUS&     \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\endgroup\n\n\\def\\descriptionlabel #1{\\sf #1:\\hfil}\n\\begin{description}\n\\item[X, Y, {\\rm or} Z] The \\cmd{x}, \\cmd{y}, or \\cmd{z} coordinate\nof the located node or element center.\n\\item[T] The parametric distance from the initial point of the line to\nthe located node or element center as defined in\nEquation~\\eref{parametric}.  The initial point of the line is located at\n\\cmd{T} = 0; the final point is located at \\cmd{T} = 1.\n\\item[PARAMETRIC] The same quantity as \\cmd{T}.\n\\item[DISTANCE] The distance from the node or element center to the\nlocation point, line, or plane.  
See the respective sections above for\nthe definition of distance for each of the location options.\n\\item[RADIUS] The shortest distance from the plane definition point (see\nSection~\\ref{sec:slocate}) to the located node or element center. \n\\item[THETA] For three-dimensional bodies, \\cmd{THETA} is the angle\nbetween the $x$ axis and the projection of the line from the point to\nthe node or element center onto the $x$--$z$ plane (see\nSection~\\ref{sec:plocate}).  For two-dimensional bodies, \\cmd{theta} is\nthe angle between the line from the point to the node or element center\nand the $x$ axis. \n\\item[PHI] The angle between the $y$ axis and the line from the point to\nthe located node or element center.\n\\end{description}\n\n\\section{Cavity Volume Utility}\\label{sec:cavity}\nThe cavity volume utility calculates the volume and change in volume of\na cavity or hole in a body.  The boundary of the cavity is defined by\nside set flags in the \\exo\\ file.  Two separate calculations are\ninvolved in this utility; the first is the calculation of the\ncavity volume, and the second is the calculation of the change in\nvolume.  \n\nThe cavity volume is calculated by forming triangles (2D) or\npentahedrons (3D) for each segment of the cavity boundary.  The apex of\neach triangle or pentahedron is at a user-specified point. The bases\nof the triangles or pentahedrons are the segments of the cavity boundary\nside set.  The segments are element faces; in two dimensions, the faces\nare lines; in three dimensions, the faces are four-node quadrilaterals.\nThe faces are assumed to be planar.  The volume of each triangle or\npentahedron is calculated and the sum of the volumes is, for certain\ngeometries, the volume of the cavity. \n\nFigure~\\ref{cvol2d} is an example of this process for a two-dimensional\ncavity with three boundary segments: 1--2, 2--3, and 3--4.  \n\\begin{figure}\n\\vspace{3.5in}\n\\caption{Illustration of Cavity Volume Determination for\na Two-Dimensional Cavity}\\label{cvol2d}\n\\end{figure}\nThe area is calculated by summing the areas of the three triangles\n{\\sf012}, {\\sf023}, and {\\sf034}, where the three numbers refer to the\npoints defining the triangle and point {\\sf0} is the apex or center\npoint. A similar process is used for three-dimensional cavities except\nthat pentahedral volumes are calculated instead of triangular areas. The\nvolume $V$ of a pentahedron with the apex at the point $(x_c,y_c,z_c)$\nand the base formed by a boundary segment is~\\cite{flanagan}: \n\\begin{eqnarray}\nV = &\\frac{1}{12}&\\{\n ((2y_c - y_3) z_{42} + y_2 (z_{c3} + z_{c4}) - y_4 (z_{c3} + z_{c2})) x_1 \\nonumber \\\\\n&+& \\phantom{\\{}((y_4 - 2y_c) z_{31} + y_3 (z_{c4} + z_{c1}) - y_1 (z_{c4} + z_{c3})) x_2 \\nonumber \\\\\n&+& \\phantom{\\{}((y_1 - 2y_c) z_{42} + y_4 (z_{c1} + z_{c2}) - y_2 (z_{c4} + z_{c1})) x_3 \\nonumber \\\\\n&+& \\phantom{\\{}((2y_c - y_2) z_{31} + y_1 (z_{c2} + z_{c3}) - y_3 (z_{c2} + z_{c1})) x_4 \\nonumber \\\\\n&+& \\phantom{\\{}(y_{24}   z_{31} + y_{31} z_{42})  2  x_c\\}\n\\end{eqnarray}\nwhere the numerical subscript refers to the sequence of the node of the\nboundary segment, the $c$ subscript refers to the center location, and\nthe double subscript $z_{ij}$ is defined as $z_i-z_j$. 
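\n\nAs a concrete illustration of the triangle-fan construction just described, the following is a minimal sketch (in Python, not the actual \\numbers\\ implementation) of the two-dimensional case, assuming the boundary is supplied as an ordered list of two-node segments; it uses the planar triangle formula given below, so the sign of the result depends on the traversal direction of the boundary:\n\\begin{verbatim}\n# Hypothetical sketch: sum the signed areas of the triangles\n# formed by the apex (xc, yc) and each boundary segment.\ndef cavity_area(segments, xc, yc):\n    area = 0.0\n    for (x1, y1), (x2, y2) in segments:\n        area += 0.5 * ((y1 - yc) * (x2 - xc)\n                       - (y2 - yc) * (x1 - xc))\n    return area\n\\end{verbatim}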
\n\nThe planar area $V_P$ of a triangle with the apex at the point\n$(x_c,y_c)$ and the base formed by a boundary segment is: \n\\begin{equation}\nV_P = \\sfrac{1}{2}[(y_1-y_c) (x_2-x_c) - (y_2-y_c) (x_1-x_c)] \n\\end{equation}\nwhere the subscripts refer to the first and second nodes of the boundary\nsegment, and $y_c$ is the approximate vertical geometric center of the\ncavity. It is calculated by summing the $y$ coordinates and dividing by\nthe total number of nodes on the boundary. \n\nIf the body is axisymmetric, the volume $V_A$ is calculated as:\n\\begin{eqnarray}\nx_c &=& \\sfrac{1}{3}(x_1 + x_2) \\nonumber \\\\\nV_A &=& 2\\pi x_c V_P\n\\end{eqnarray}\nwhere $V_P$ is the area calculated for the plane strain cavity.\n\nThe above method correctly calculates the volume of cavities defined by\na closed boundary.  However, many cavities are bounded on one or more\nsides by symmetry planes which are not included in the cavity boundary\ndefinition.  In two dimensions, the correct volume will be calculated if\nthe triangle apex is on the axis of symmetry.  In three dimensions, the\ncorrect volume will be calculated if the apex of the pentahedron is on\nthe symmetry plane, or if it is on the intersection of two or more\nsymmetry planes.  If the apex point is not specified by the user, it is\nset to the point $(0,0,0)$ for three-dimensional bodies, and the point\n$(0,y_c)$ for two-dimensional bodies, where $y_c$ is the approximate\nvertical geometric center of the cavity. \n\n\\paragraph*{Cavity Volume Change:}\nThe change in cavity volume is calculated by computing the volume of\nthe hexahedron formed by the original element face and the displaced\nelement face.  This is repeated for each element face on the cavity\nboundary, and the total volume change is the sum of the element face\nvolume changes.  Note that this calculation is correct for all cavities\neven if they are bounded by symmetry planes. \n\nFor a three-dimensional cavity, the volume change $\\Delta V$ for each\nelement on the cavity boundary is given by the following sequence of\ncalculations~\\cite{PRONTO3D}.\n\\begin{equation}\n\\Delta V = \\sfrac{1}{12}\\sum_{i=1}^8 x_i B_i\n\\end{equation}\nwhere $x_i$ for $i\\le 4$ is the $x$ coordinate of a node on the element\nface and $x_i$ for $5 \\le i\\le 8$ is the displaced coordinate of node\n$i-4$.  
The $B_i$ values are given by:\n\\begin{eqnarray*}\nB_{1} &=&   y_2 (z_{63}-z_{45}) + y_3 z_{24} + y_4 (z_{38}-z_{52})\n          + y_5 (z_{86}-z_{24}) + y_6 z_{52} + y_8 z_{45} \\\\\nB_{2} &=&   y_3 (z_{74}-z_{16}) + y_4 z_{31} + y_1 (z_{45}-z_{63})\n          + y_6 (z_{57}-z_{31}) + y_7 z_{63} + y_5 z_{16} \\\\\nB_{3} &=&   y_4 (z_{81}-z_{27}) + y_1 z_{42} + y_2 (z_{16}-z_{74})\n          + y_7 (z_{68}-z_{42}) + y_8 z_{74} + y_6 z_{27} \\\\\nB_{4} &=&   y_1 (z_{52}-z_{38}) + y_2 z_{13} + y_3 (z_{27}-z_{81})\n          + y_8 (z_{75}-z_{13}) + y_5 z_{81} + y_7 z_{38} \\\\\nB_{5} &=&   y_8 (z_{47}-z_{61}) + y_7 z_{86} + y_6 (z_{72}-z_{18})\n          + y_1 (z_{24}-z_{86}) + y_4 z_{18} + y_2 z_{61} \\\\\nB_{6} &=&   y_5 (z_{18}-z_{72}) + y_8 z_{57} + y_7 (z_{83}-z_{25})\n          + y_2 (z_{31}-z_{57}) + y_1 z_{25} + y_3 z_{72} \\\\\nB_{7} &=&   y_6 (z_{25}-z_{83}) + y_5 z_{68} + y_8 (z_{54}-z_{36})\n          + y_3 (z_{42}-z_{68}) + y_2 z_{36} + y_4 z_{83} \\\\\nB_{8} &=&   y_7 (z_{36}-z_{54}) + y_6 z_{75} + y_5 (z_{61}-z_{47})  \n          + y_4 (z_{13}-z_{75}) + y_3 z_{47} + y_1 z_{54} \n\\end{eqnarray*}\nwhere $y_i$ and $z_i$ are defined in the same way as $x_i$, and $z_{ij}$\nis equal to $z_i - z_j$.\n\nFor a two-dimensional, planar cavity, the volume change $\\Delta V_P$ for\neach element on the cavity boundary is equal to \n\\begin{equation}\n\\Delta V_P = \\sfrac{1}{2}\\left[x_{12} (\\Delta y_2 + \\Delta y_1) - \n           y_{12} (\\Delta x_2 + \\Delta x_1) +\n           \\Delta x_1 \\Delta y_2 -\n           \\Delta x_2 \\Delta y_1\\right] \n\\end{equation}\nwhere $x$ and $y$ are the original coordinates of the face nodes and \n$\\Delta x$ and $\\Delta y$ are the $x$ and $y$ displacements of the\nnodes.\n\nFor an axisymmetric cavity, the volume change $\\Delta V_A$ for each\nelement on the cavity boundary is equal to \n\\begin{equation}\n\\Delta V_A = 2\\pi x_c \\Delta V_P\n\\end{equation}\nwhere $x_c$ is the $x$ coordinate of the geometric center which is equal\nto \n\\begin{equation}\nx_c = \\frac{(\\Delta x_2 + \\Delta x_1)}{4} + \n      \\frac{(x_2 + x_1)}{2}\n\\end{equation}\n\n\\section{Overlap Utility}\\label{sec:overlap}\n\nThe overlap utility is intended to assist the analyst in the generation\nof a mesh with valid contact surfaces.  One of the most difficult tasks\nin generating a valid contact surface occurs when the contact surface is\ncurved with different discretizations on the two surfaces.  In this\ncase, a small gap must be inserted between the surfaces to prevent\npenetration. Figure~\\ref{f:overlap} illustrates the problem that can\noccur if the gap is not large enough or nonexistent. Note that the nodes\non the more refined side of the contact surface penetrate the elements\non the other side of the surface.  This is a somewhat contrived example\nto illustrate the problem; in actual finite element meshes it is very\ndifficult to notice the overlap unless the mesh is examined element by\nelement. However, if an analysis is run with overlapping contact\nsurfaces, the problem manifests itself in a sudden increase in kinetic\nenergy or by excessive deformations and velocities in the overlapped\nportion of the mesh since the code will separate the contact surface in\none timestep.  Currently, the utility only checks for penetration of the\nmaster surface by the slave surface; however, other checks for valid\ncontact surfaces, such as continuity or other requirements imposed by\nanalysis codes, can be added if there is a need. 
\n\n\\begin{figure}\n\\vspace{3.1in}\n\\caption{Illustration of Contact Surface Overlap due to\nDiscretization Mismatch on Curved Contact Surfaces}\n\\label{f:overlap}\n\\end{figure}\n\nMany times, the most efficient method to determine whether the contact\nsurfaces overlap is to run the analysis for a very short period of time,\nand then examine the results to see if the kinetic energy has increased\nsuddenly, or if the mesh has been ``blown'' apart where the analysis\ncode detects an overlapping contact surface and separates it in one\ntimestep---both symptoms indicate that a slideline overlap exists in the\noriginal mesh. \n\nThe overlap utility was written to provide an efficient means of\ndetermining whether an overlap exists prior to running an analysis. The\nalgorithm is logically broken into three separate steps. In the first\nstep, a ``bounding box'' is defined for each element on the master\nsurface.  The bounding box contains the coordinate ranges in each of the\ncoordinate directions.   Second, for each element on the master\nsurface, each node on the slave surface is tested to determine if it is\nwithin the bounding box.  This test is a simple comparison of the slave\nnode coordinates with the coordinates of the bounding box.  Finally, if\na node is within an element's bounding box, the more computationally\nexpensive calculation of determining whether the node penetrates the\nelement is performed.  To determine if the node penetrates the element,\nfour triangles (two-dimensional body) or six pentahedrons\n(three-dimensional body) are formed with the slave node as the apex and\neach element face as the base.  The volume of each triangle or\npentahedron is calculated; if all of the volumes are positive, the node\nis inside the element; if a volume is equal to zero, the node is on\nthe face. If the node is inside the element, its node number and the\nnumber and connectivity of the penetrated element are output.  At the\nend of the calculation, the total number of nodes penetrating the\nsurface and the number of nodes on the surface are output.  \n\nThe volume $V_P^i$ of the three-dimensional pentahedron formed by the\nslave node $S$ and the element face $i$ is equal\nto~\\cite{flanagan}:\n\\begin{eqnarray}\n12V_P^i &=& \\phantom{{}+{}}x_A[(2y_S - y_C)  z_{DB} + y_B  (z_{SC} + z_{SD}) -\n     y_D  (z_{SC} + z_{SB})] \\nonumber \\\\\n&\\ &  {}  + x_B[(y_D - 2y_S)  z_{CA} + y_C  (z_{SD} + z_{SA})-\n     y_A  (z_{SD} + z_{SC})] \\nonumber \\\\\n&\\ &  {}  + x_C[(y_A - 2y_S)  z_{DB} + y_D  (z_{SA} + z_{SB})-\n     y_B  (z_{SD} + z_{SA})] \\nonumber \\\\\n&\\ &  {}  + x_D[(2y_S - y_B)  z_{CA} + y_A  (z_{SB} + z_{SC})-\n     y_C (z_{SB} + z_{SA})] \\nonumber \\\\\n&\\ &  {}  + 2x_S[y_{BD} z_{CA} + y_{CA} z_{DB} ]\n\\end{eqnarray}\nwhere the subscript $S$ refers to the slave node, and the subscripts\n$A$, $B$, $C$, and $D$ refer to the nodes on an element face (assumed to\nbe planar) as given in Table~\\ref{t:nodes}. 
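\n\nThe first two steps of this algorithm are simple to state in code.  The following is a minimal sketch (in Python, not the actual \\numbers\\ implementation) for the three-dimensional case, assuming each master element is given as an identifier plus a list of nodal coordinate triples and each slave node as an identifier plus a coordinate triple; pairs that survive this prefilter still require the signed-volume penetration test described above:\n\\begin{verbatim}\n# Hypothetical sketch of the bounding-box prefilter.\ndef candidate_pairs(master_elements, slave_nodes):\n    pairs = []\n    for elem_id, corners in master_elements:\n        # bounding box: coordinate ranges of the element nodes\n        lo = [min(c[i] for c in corners) for i in range(3)]\n        hi = [max(c[i] for c in corners) for i in range(3)]\n        for node_id, p in slave_nodes:\n            if all(lo[i] <= p[i] <= hi[i] for i in range(3)):\n                # candidate only; run the penetration test next\n                pairs.append((node_id, elem_id))\n    return pairs\n\\end{verbatim}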
\n\n\\begin{table}\n\\parbox[t]{3.0in}{\n\\begin{center}\n\\begin{tabular}{l|cccccc}\n\\multicolumn{7}{c}{Hexahedron} \\\\ \n\\multicolumn{7}{c}{\\ } \\\\\n  & \\multicolumn{6}{c}{Face} \\\\\n  & 1 & 2 & 3 & 4 & 5 & 6 \\\\ \\hline\nA & 1 & 6 & 6 & 5 & 4 & 1 \\\\\nB & 2 & 7 & 5 & 1 & 3 & 5 \\\\\nC & 3 & 3 & 8 & 4 & 7 & 6 \\\\\nD & 4 & 2 & 7 & 8 & 8 & 2 \\\\ \n\\end{tabular}\n\\end{center}}\\hfil\n\\parbox[t]{3.0in}{\n\\begin{center}\n\\begin{tabular}{l|cccc}\n\\multicolumn{5}{c}{Quadrilateral} \\\\ \n\\multicolumn{5}{c}{\\ } \\\\\n  & \\multicolumn{4}{c}{Face} \\\\\n  & 1 & 2 & 3 & 4 \\\\ \\hline\nA & 1 & 2 & 3 & 4 \\\\\nB & 2 & 3 & 4 & 1 \\\\\n\\end{tabular}\n\\end{center}}\n\\caption{Numbering of Face Nodes on an Eight-Node Hexahedral Element and\na Four-Node Quadrilateral Element}\\label{t:nodes}\n\\end{table}\n\nThe volume $V_T^i$ of the two-dimensional triangle formed by the\nslave node $S$ and the element face $i$ is equal to:\n\\begin{equation}\n2V_T^i = x_A (y_B - y_S) + x_B (y_S - y_A) + x_S (y_A - y_B)\n\\end{equation}\nwhere the subscript $S$ refers to the slave node, and the subscripts\n$A$ and $B$ refer to the nodes on an element face as given in\nTable~\\ref{t:nodes}.\n\n\\section{Time Step Estimation Utility}\\label{sec:timestep}\n\nExplicit integration is used in most large-deformation, nonlinear,\ntransient dynamics finite element codes, for example\n\\code{PRONTO}~\\cite{PRONTO2D,PRONTO3D}. This utility provides an\nestimate of the time step size which will be used in the explicit\nintegration in these codes. \n\nThe stable time step $\\Delta t$ for the central difference operator\ncommonly used in transient dynamic analysis codes is given\nby~\\cite{Cook} \n\\begin{equation} \n\\Delta t \\leq {2\\over \\omega_{\\max}} \\left(\\sqrt{1+\\epsilon^2} -\n\\epsilon\\right)\n\\end{equation}\nwhere $\\omega_{\\max}$ is the maximum frequency of the mesh, and\n$\\epsilon$ is the fraction of critical damping in the highest element\nmode.  In an explicit integration finite element code, the linear bulk\nviscosity term is an estimate of $\\epsilon$~\\cite{PRONTO2D}. The default\nvalue of $\\epsilon$ is 0.06, which is the default value of the linear\nbulk viscosity used in \\code{PRONTO2D} and \\code{PRONTO3D}. \n\nFlanagan and Belytschko~\\cite{flanagan} have derived simple\nformulas for bounding the maximum eigenvalues for the uniform strain\neight-node hexahedron and four-node quadrilateral which can be used to\nprovide conservative estimates of the maximum frequency.  
The maximum\nfrequency estimate for a rectangular quadrilateral element is \n\\begin{equation}\n\\hat\\omega_{\\max}^2 = {4(\\lambda + 2\\mu)\\over\\rho}\\left({1\\over s_1^2} +\n   {1\\over s_2^2}\\right)\n\\end{equation}\nwhere $s_1$ and $s_2$ are the lengths of the sides of the rectangle,\n$\\lambda$ and $\\mu$ are Lam\\'e's constants, $\\rho$ is the density, and\n$\\hat\\omega_{\\max}$ is the predicted value for the maximum frequency.\nSimilarly, for a rectangular parallelepiped hexahedron, \n\\begin{equation}\n\\hat\\omega_{\\max}^2 = {4(\\lambda + 2\\mu)\\over\\rho}\\left({1\\over s_1^2} +\n   {1\\over s_2^2}+{1\\over s_3^2}\\right)\n\\end{equation}\n\nSubstituting the maximum frequency equation into the stable time step\nequation gives the following estimate for the stable time step size: \n\\begin{equation}\n\\Delta\\hat t\\le \\sqrt{\\frac{\\rho}{\\lambda+2\\mu}}\n      \\left(\\sum_{i=1}^{n_D}\\frac{1}{s_i^2}\\right)^{-1/2}\n      \\left(\\sqrt{1+\\epsilon^2} - \\epsilon\\right)\n\\end{equation}\nwhere $\\Delta\\hat t$ is the estimate of the stable time step, and $n_D$\nis the number of dimensions.  The first quantity on the right-hand side\nof the inequality is the inverse of the dilatational wave speed, which is\ninput by the user.  The second quantity is calculated by \\numbers\\ with\nthe assumption that the element is rectangular.  \n\nThe output from this utility includes the calculated time step, the\nelement with the minimum time step, and the number of time steps per\nmillisecond of analysis time for each element block.  \n\nThe number of time steps per millisecond can be used to estimate the\ncomputer time required to perform an analysis.  Most explicit transient\ndynamics codes output the average CPU time required to perform the\ncalculations for one element for one time step. Although this quantity\nvaries for different constitutive models and the number of contact\nsurfaces (slidelines), the average value is usually relatively constant\nand well known.  The CPU time per millisecond of analysis time can be\nestimated using the formula \n\\begin{equation}\n{CPU\\over ms} = \\left(10^{-3}\\over \\Delta\\hat t\\right) \n                ({\\sf Speed})({\\sf NUMEL})\n\\end{equation}\nwhere {\\sf NUMEL} is the number of elements, {\\sf Speed} is the CPU time\nper element per timestep, and $(10^{-3}/\\Delta\\hat t)$ is the number of\ntime steps per millisecond.  \n\n\\section{Gap Utility}\\label{sec:gap}\n\nThe gap utility is used to determine the distance between nodes on two\nsurfaces.  One surface is called the ``master'' surface and the other\nsurface is the ``slave'' surface.  For each node on the master surface,\na normal vector is calculated as shown below.  A matching process is\nthen performed to determine the ``closest'' slave node to each master\nnode, where closeness can be defined either as closest to the master\nnode normal vector, or closest in absolute distance.  Figure~\\ref{f:gap}\nillustrates the two closeness measures.  In this figure, Node~$M$ is the\nmaster node and nodes $S_1$ and~$S_2$ are two slave nodes.  If the\nabsolute distance is used, node~$S_1$ will be the matching node; if the\ndistance to the normal vector is used, node~$S_2$ will be the matching\nnode.  
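\n\nTo make the matching step concrete, the following is a minimal sketch (in Python, not the actual \\numbers\\ implementation) of selecting the closest slave node to one master node under either closeness measure, anticipating the distance formulas given below; the master node, its unit normal, and the slave nodes are assumed to be coordinate triples:\n\\begin{verbatim}\nimport math\n\n# Hypothetical sketch of the gap-utility matching step.\ndef closest_slave(m, normal, slaves, tangential=True):\n    a, b, c = normal\n    best, best_d = None, float('inf')\n    for s in slaves:\n        dx, dy, dz = s[0]-m[0], s[1]-m[1], s[2]-m[2]\n        dr = math.sqrt(dx*dx + dy*dy + dz*dz)  # radial\n        dn = a*dx + b*dy + c*dz                # along normal\n        d = math.sqrt(max(dr*dr - dn*dn, 0.0)) if tangential else dr\n        if d < best_d:\n            best, best_d = s, d\n    return best\n\\end{verbatim}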
\n\n\\begin{figure}\n\\caption{Illustration of Closeness Measures Used in the Gap Utility}\n\\label{f:gap}\n\\end{figure}\n\nAfter all of the nodes on the master surface have been matched to a node\non the slave surface, the normal and tangential distances, measured in\nthe coordinate frame of the normal vector, are determined for the\nundeformed geometry and for each of the selected timesteps in the\ndatabase.  This utility is normally used to calculate the change in\ndistance between two surfaces, for example, the closure of a drift in a\ngeomechanics problem or the slip (tangential movement) along a contact\nsurface. \n\nThe normal for a node is defined to be the average normal of the element\nfaces for all elements connected to the node.  For a three-dimensional\nbody, a technique developed by Newell~\\cite{Rogers:pefcg} gives the\nexact normal for planar faces and a ``best'' approximation for almost\nplanar faces. The coefficients $a$, $b$, and $c$ of the normal vector\nfor an element face $\\vec{n} = a\\vec{i} + b\\vec{j} + c\\vec{k}$ are given by: \n\\begin{eqnarray}\na &=& \\sum_{i=1}^n (y_i - y_j)(z_i + z_j) \\nonumber\\\\\nb &=& \\sum_{i=1}^n (z_i - z_j)(x_i + x_j) \\label{3dnorm}\\\\\nc &=& \\sum_{i=1}^n (x_i - x_j)(y_i + y_j) \\nonumber\n\\end{eqnarray}\nwhere $n$ is the number of nodes per face, and $j = 1$ if $i=n$, or\n$j=i+1$ if $i\\ne n$.  The vector is then normalized and added to the\ndirection cosine matrix for each node on the face.  After all of the\nelement normals have been computed and added to the direction cosine\nmatrix, the average direction cosine unit vector for each node is\ndetermined by normalizing each entry in the direction cosine matrix.\n\nThe procedure is similar for a two-dimensional body, except that the\nnormal vector for an element face $\\vec{n} = a\\vec{i} + b\\vec{j}$ is given by:\n\\begin{eqnarray}\n      r &=& \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \\nonumber \\\\\n      a &=& (x_i - x_j) / r                      \\label{2dnorm}\\\\\n      b &=& (y_i - y_j) / r                      \\nonumber\n\\end{eqnarray}\nwhere $x_i$ and $x_j$ are the $x$ coordinates of the two face nodes, and\n$y_i$ and $y_j$ are the $y$ coordinates of the two face nodes. \n\nAfter the average nodal normals have been calculated, the node matching\nprocess is performed.  For each node on the master surface, the {\\em\nclosest} node on the slave surface is determined.  There are two\ncriteria for determining closeness: radial distance and tangential\ndistance.  The radial distance $d_r$ between two nodes is simply \n\\begin{equation}\nd_r = \\sqrt{(x_s - x_m)^2 + (y_s - y_m)^2 + (z_s - z_m)^2}\\label{distr}\n\\end{equation}\nwhere $x$, $y$, and $z$ are the nodal coordinates and the subscripts $m$\nand $s$ refer to the master and slave nodes, respectively.\n\nThe tangential distance is the distance from the master node's normal\nvector to the slave node.  The distance is given by~\\cite{Rogers:pefcg}:\n\\begin{eqnarray}\nd_t &=& \\sqrt{d_r^2 - d_n^2}\\label{distt}  \\\\\nd_n &=& -\\left(a(x_m - x_s) + b(y_m - y_s)+c(z_m-z_s)\\right)\\label{distn}\n\\end{eqnarray}\nwhere $d_r$ is the radial distance given in Equation~\\eref{distr}, $d_n$\nis the distance to the slave node measured in the direction of the\nnormal vector, and $a$, $b$, and $c$ are the components of the unit\nnormal vector determined in Equations \\eref{3dnorm} or~\\eref{2dnorm}. 
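\n\nA minimal sketch (in Python, not the actual \\numbers\\ implementation) of this normal-averaging procedure for the three-dimensional case, assuming each face is supplied as an ordered list of (node id, coordinate triple) pairs, is:\n\\begin{verbatim}\nimport math\n\ndef face_normal(coords):\n    # Newell's formula over the ordered face nodes\n    a = b = c = 0.0\n    n = len(coords)\n    for i in range(n):\n        (xi, yi, zi) = coords[i]\n        (xj, yj, zj) = coords[(i + 1) % n]\n        a += (yi - yj) * (zi + zj)\n        b += (zi - zj) * (xi + xj)\n        c += (xi - xj) * (yi + yj)\n    r = math.sqrt(a*a + b*b + c*c)\n    return (a/r, b/r, c/r)\n\ndef nodal_normals(faces):\n    # accumulate each face's unit normal at its nodes\n    acc = {}\n    for face in faces:\n        nx, ny, nz = face_normal([p for (_, p) in face])\n        for node, _ in face:\n            ax, ay, az = acc.get(node, (0.0, 0.0, 0.0))\n            acc[node] = (ax + nx, ay + ny, az + nz)\n    # normalize the accumulated direction cosines per node\n    out = {}\n    for node, (x, y, z) in acc.items():\n        r = math.sqrt(x*x + y*y + z*z)\n        out[node] = (x/r, y/r, z/r)\n    return out\n\\end{verbatim}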
\n\n\\section{Limits Utility}  \n\nThe limits utility provides the basic function of determining the\nminimum, maximum, and range of the coordinates for each of the material\nblocks.  No special algorithms are used in this utility and there are no\ninherent limitations or assumptions in its implementation. The utility\nsimply examines the coordinates of each element in each element block\nand saves the minimum and maximum values.  After all of the elements\nhave been processed, the minimum and maximum values are subtracted to\ndetermine the coordinate range of the data. \n\n", "meta": {"hexsha": "2ec36fbcd022442c8618472e0136a9d5629565d7", "size": 33592, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/seacas/doc-source/numbers/numerics.tex", "max_stars_repo_name": "jschueller/seacas", "max_stars_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_stars_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2016-02-04T18:38:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T03:01:49.000Z", "max_issues_repo_path": "packages/seacas/doc-source/numbers/numerics.tex", "max_issues_repo_name": "jschueller/seacas", "max_issues_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_issues_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_issues_count": 206, "max_issues_repo_issues_event_min_datetime": "2015-11-20T01:57:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:12:04.000Z", "max_forks_repo_path": "packages/seacas/doc-source/numbers/numerics.tex", "max_forks_repo_name": "jschueller/seacas", "max_forks_repo_head_hexsha": "14c34ae08b757cba43a3a03ec0f129c8a168a9d3", "max_forks_repo_licenses": ["Python-2.0", "Zlib", "BSD-2-Clause", "MIT", "NetCDF", "BSL-1.0", "X11", "BSD-3-Clause"], "max_forks_count": 68, "max_forks_repo_forks_event_min_datetime": "2016-01-13T22:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T06:25:05.000Z", "avg_line_length": 49.4727540501, "max_line_length": 102, "alphanum_fraction": 0.7220171469, "num_tokens": 9996, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7826624688140726, "lm_q1q2_score": 0.5793066935463377}}
{"text": "\\newcommand{\\RR}{\\ensuremath{\\mathbb{R}}}\nThe \\texttt{scipy.optimize} subpackage provides functions for the numerical\nsolution of several classes of root finding and optimization problems.\nHere we highlight recent additions through SciPy~1.0.\n\n%\\subsubsection{Root Finding}\n%The general ``root finding'' problem is to find a root $\\mathbf{x} \\in \\RR^m$ of $\\mathbf{f}: \\RR^m \\rightarrow \\RR^m$, that is, to solve\n%\\begin{equation}\n%\\mathbf{f}(\\mathbf{x}) = \\mathbf{0}\n%\\end{equation}\n%for a solution $\\mathbf{x}$.\\footnote{Equivalently the problem is to simultaneously find the roots $x_i \\in \\RR$ of several scalar functions $f_i : \\RR \\rightarrow \\RR$, that is, to solve $f_i(x_0, x_1, \\dots, x_{m-1}) = 0$ for $x_i$, $i \\in \\{0, 1, \\dots {m-1}\\}$.} The function \\texttt{scipy.optimize.root} provides a common interface to several algorithms for solving problems of this type. For the special case\\footnote{that is, to solve a single scalar equation $f(x) = 0$ for a single scalar variable $x$} $m = 1$, any one of several specialized functions \\texttt{brentq}, \\texttt{brenth}, \\texttt{ridder}, \\texttt{bisect}, or \\texttt{newton} may provide improved performance or accuracy. (Have there been any recent improvements? Do we want to summarize the methods as @antonior92 has done for $minimize$? Do we have to explain that these methods only provide \\emph{one} solution, and that they are iterative based on a user-provided guess? Do we have to explain the notion of tolerance? Is this a good template for the beginning of the following subsections?)\n\n\n\\subsubsection*{Linear Optimization}\n\nA new interior-point optimizer for continuous linear programming problems, \\texttt{linprog} with \\texttt{method='interior-point'}, was released with SciPy~1.0. Implementing the core algorithm of the commercial solver MOSEK \\cite{andersen2000mosek}, it solves all of the 90+ NETLIB LP benchmark problems \\cite{netlib} tested. Unlike some interior point methods, this homogeneous self-dual formulation provides certificates of infeasibility or unboundedness as appropriate.\n\nA presolve routine \\cite{andersen1995presolving} solves trivial problems and otherwise performs problem simplifications, such as bound tightening and removal of fixed variables, and one of several routines for eliminating redundant equality constraints is automatically chosen to reduce the chance of numerical difficulties caused by singular matrices. Although the main solver implementation is pure Python, end-to-end sparse matrix support and heavy use of SciPy's compiled linear system solvers---often for the same system with multiple right hand sides due to the predictor-corrector approach---provide speed sufficient for problems with tens of thousands of variables and constraints.\n\nCompared to the previously implemented simplex method, the new interior-point method is faster for all but the smallest problems, and is suitable for solving medium- and large-sized problems on which the existing simplex implementation fails. However, the interior point method typically returns a solution near the center of an optimal face, yet basic solutions are often preferred for sensitivity analysis and for use in mixed integer programming algorithms. 
This preference for basic solutions motivates the need for a crossover routine or a new implementation of the simplex method for sparse problems in a future release, either of which would require an improved sparse linear system solver with efficient support for rank-one updates.\n\n\\subsubsection*{Nonlinear Optimization}\n\\paragraph{Local Minimization}\nThe \\texttt{minimize} function provides a unified interface for finding local minima of nonlinear optimization problems. Four new methods for unconstrained optimization were added to \\texttt{minimize} in recent versions of SciPy: \\texttt{dogleg}, \\texttt{trust-ncg}, \\texttt{trust-exact}, and \\texttt{trust-krylov}. All are trust-region methods that build a local model of the objective function based on first and second derivative information, approximate the best point within a local ``trust region'', and iterate until a local minimum of the original objective function is reached, but each has unique characteristics that make it appropriate for certain types of problems. For instance, \\texttt{trust-exact} achieves fast convergence by solving the trust-region subproblem almost exactly, but it requires the second derivative Hessian matrix to be stored and factored every iteration, which may preclude the solution of large problems ($\\geq 1000$ variables). On the other hand, \\texttt{trust-ncg} and \\texttt{trust-krylov} are well-suited to large-scale optimization problems because they do not need to store and factor the Hessian explicitly, instead using second derivative information in a faster, approximate way. A detailed comparison of the characteristics of all \\texttt{minimize} methods is presented in Table~\\ref{tab:minimize-si}; it illustrates the level of completeness that SciPy aims for when covering a numerical method or topic.\n\n\\paragraph{Global Minimization}\nAs \\texttt{minimize} may return any local minimum, some problems require the use of a global optimization routine. The new \\texttt{scipy.optimize.differential\\textunderscore evolution} function \\cite{Wormington1999,Storn1997} is a stochastic global optimizer that works by evolving a population of candidate solutions. In each iteration, trial candidates are generated by combining candidates from the existing population. If the trial candidates represent an improvement, then the population is updated. Most recently, the SciPy benchmark suite gained a comprehensive set of 196 global optimization problems for tracking the performance of existing solvers over time and for evaluating whether the performance of new solvers merits their inclusion in the package.\n\n\\setlength{\\tabcolsep}{3pt}\n\\begin{table}[H]\n  \\centering\n  \\caption{Optimization methods from \\texttt{minimize}, which solves problems of the form $\\min_x f(x)$, where $x \\in \\mathbb{R}^n$ and $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}$.  The field \\textit{version added} specifies the algorithm's first appearance in SciPy. Algorithms with \\textit{version added} ``0.6*'' were added in version 0.6 or before.\n    The field \\textit{wrapper} indicates whether the implementation available in SciPy wraps a function written in a compiled language\n    (e.g. C or FORTRAN). The fields \\textit{\\nth{1}} and \\textit{\\nth{2} derivatives}\n    indicate whether first- or second-order derivatives are required. 
When \\textit{\\nth{2} derivatives} is flagged\n    with $\\sim$, the algorithm does not require second-order derivatives from\n    the user; it computes an approximation internally and uses it to accelerate method convergence.\n    \\textit{Iterative Hessian factorization} denotes algorithms that factorize the Hessian in an iterative way,\n    which does not require explicit matrix factorization or storage of the Hessian.\n    \\textit{Local convergence} gives a lower bound on the rate of convergence of the iterations sequence once the\n    iterate is sufficiently close to the solution: linear (L), superlinear (S) and quadratic (Q). Convergence rates denoted S$^*$ indicate that the algorithm\n    has a superlinear rate for the parameters used in SciPy, but can achieve a quadratic convergence rate with other parameter choices.\n    \\textit{Global convergence} is marked for the algorithms with guarantees of convergence to a stationary\n    point (i.e.\\ a point $x^*$ for which $\\nabla f(x^*) = 0$); this is \\emph{not} a guarantee of convergence to a global minimum.\n\t\t\\textit{Line-search (LS) or trust-region (TR)} indicates which of the two globalization approaches is employed by the algorithm.\n\t\tThe table also indicates which algorithms\n    can deal with constraints on the variables. We distinguish among \\textit{bound constraints} (i.e.\\ $x^l \\le x \\le x^u$),\n    \\textit{equality constraints} (i.e.\\ $c_{\\text{eq}}(x) = 0$) and \\textit{inequality constraints} (i.e.\\ $c_{\\text{ineq}}(x) \\ge 0$).}\n  \\begin{tabular}{cccccccccccccc}\n      & \\rotatebox{80}{\\texttt{Nelder-Mead}} & \\rotatebox{80}{\\texttt{Powell}} & \\rotatebox{80}{\\texttt{COBYLA}} & \\rotatebox{80}{\\texttt{CG}} & \\rotatebox{80}{\\texttt{BFGS}}&  \\rotatebox{80}{\\texttt{L-BFGS-B}} & \\rotatebox{80}{\\texttt{SLSQP}} & \\rotatebox{80}{\\texttt{TNC}} & \\rotatebox{80}{\\texttt{Newton-CG}} & \\rotatebox{80}{\\texttt{dogleg}} & \\rotatebox{80}{\\texttt{trust-ncg}} & \\rotatebox{80}{\\texttt{trust-exact}} & \\rotatebox{80}{\\texttt{trust-krylov}} \\\\\n    \\hline\n    Version added &  0.6* &  0.6* &  0.6* &  0.6* &  0.6* &  0.6* &  0.9 &  0.6* &  0.6* & 0.13 & 0.13 & 0.19 & 1.0 \\\\\n    \\hline\n    Wrapper & & & \\cmark & & & \\cmark & \\cmark & \\cmark & &  & & & \\cmark \\\\\n    \\hline\n    \\nth{1} derivatives &  & & & \\cmark  & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark \\\\\n    \\hline\n    \\nth{2} derivatives &  &  &  &  & $\\sim$ & $\\sim$ & $\\sim$ & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark \\\\\n    \\hline\n    \\makecell{Iterative Hessian \\\\\n    factorization} & & & &  & & & & \\cmark & \\cmark &  & \\cmark &  & \\cmark \\\\\n    \\hline\n    Local convergence& & & & L & S &  L & S & S$^*$ & S$^*$ & Q & S$^*$ & Q & S$^*$  \\\\\n    \\hline\n    Global convergence & & &  &   & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark & \\cmark  \\\\\n    \\hline\n    \\makecell{Line-search (LS) or\\\\ trust-region (TR)} & Neither  & LS &  TR & LS & LS & LS & LS & LS & LS & TR & TR & TR & TR \\\\\n    \\hline\n    Bound constraints &&&\\cmark&&&&\\cmark&\\cmark&\\cmark&&&& \\\\\n    \\hline\n    Equality constraints &&&&&&&\\cmark&&&&& \\\\\n    \\hline\n    Inequality constraints &&&\\cmark&&&&\\cmark&&&&& \\\\\n    \\hline\n    References &\n      \\inlinecite{nelder_simplex_1965, wright_direct_1996} &\n      \\inlinecite{powell_efficient_1964} &\n      \\inlinecite{powell_direct_1994, 
powell_direct_1998, powell_view_2007} &\n      \\inlinecite{polak_note_1969, nocedal_numerical_2006} &\n      \\inlinecite{nocedal_numerical_2006} &\n      \\inlinecite{byrd_limited_1995, zhu_algorithm_1997} &\n      \\inlinecite{schittkowski_nonlinear_1982, schittkowski_nonlinear_1982-1, schittkowski_convergence_1983, kraft_software_1988} &\n      \\inlinecite{nash_newton-type_1984} &\n      \\inlinecite{nocedal_numerical_2006}  &\n      \\inlinecite{powell_new_1970, nocedal_numerical_2006} &\n      \\inlinecite{steihaug_conjugate_1983, nocedal_numerical_2006} &\n      \\inlinecite{conn_trust_2000, more_computing_1983} &\n      \\inlinecite{gould_solving_1999, doi:10.1080/10556788.2018.1449842} \\\\\n    \\hline\n  \\end{tabular}\n  \\label{tab:minimize-si}\n\\end{table}\n\n\n", "meta": {"hexsha": "2bf1331e3a9c8430f658b7ed070115bfa99bfb1f", "size": 10602, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scipy-1.0/scipy-optimize.tex", "max_stars_repo_name": "VarIr/scipy-articles", "max_stars_repo_head_hexsha": "4b0c6d7736acccf158fca4d5caae93ab6f9695f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scipy-1.0/scipy-optimize.tex", "max_issues_repo_name": "VarIr/scipy-articles", "max_issues_repo_head_hexsha": "4b0c6d7736acccf158fca4d5caae93ab6f9695f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scipy-1.0/scipy-optimize.tex", "max_forks_repo_name": "VarIr/scipy-articles", "max_forks_repo_head_hexsha": "4b0c6d7736acccf158fca4d5caae93ab6f9695f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 111.6, "max_line_length": 1452, "alphanum_fraction": 0.7437275986, "num_tokens": 2908, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8615382058759128, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5792394402079928}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{Part IB}\n\n\\def\\ntitle{Metric \\& Topological Spaces}\n\\def\\nlecturer{P.\\ M. H.\\ Wilson}\n\n\\def\\nterm{Easter}\n\\def\\nyear{2017}\n\n\\input{header}\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Metric Spaces}\n\n\\subsection{Metric}\n\nEuclidean space $\\mathbb{R}^n$ equipped with standard Euclidean inner product $(\\cdot,\\cdot)$, given $\\V x, \\V y \u2208 \\mathbb{R}^n$ with coordinates $x_i, y_i$ respectively,\n\n\\[\n  (\\V x, \\V y) = \\V x \\cdot \\V y = \\sum_{i=1}^{n}x_i y_i\n\\]\n\nEuclidean norm $\\norm{\\V x} = (\\V x, \\V x)^{1/2}$ which is the length of $\\V x$.\n\nEuclidean distance:\n\\begin{align*}\n  d_2: \\mathbb{R}^n \\times \\mathbb{R}^n &\\to \\mathbb{R} \\\\\n  d_2(\\V x, \\V y) &= \\norm{\\V x - \\V y} = (\\sum_{i=1}^n (x_i - y_i))^{1/2}\n\\end{align*}\n\nThis is an example of a metric.\n\n\\begin{definition}\n\tA metric space $(X, d)$ consists of a set $X$ and a function $d: X \\times X \\to \\mathbb{R}$ such that\n\t\\begin{enumerate}\n\t\t\\item $d(P, Q) \\geq 0$ with equality if and only if $P = Q$,\n\t\t\\item $\\forall P, Q \\in X, d(P, Q) = d(Q, P)$,\n\t\t\\item $\\forall P, Q, R \\in X, d(P, Q) + d(Q, R) \\geq d(P, R)$. (triangle inequality)\n\t\\end{enumerate}\n\\end{definition}\n\n3 says that, for the Euclidean metric, for any triangle (possibly degenerate) with vertices $P, Q, R$, the sum of lengths of two sides is larger than the length of the third side.\n\n\\begin{proposition}\n\tThe Euclidean distance function $d_2$ on $\\mathbb{R}^n$ is a metric.\n\\end{proposition}\n\n\\begin{proof}\n\t1 and 2 are obvious. For 3 we use Cauchy-Schwarz inequality\n\t\\[\n\t\t\\left(\\sum x_i y_i\\right)^2 \\leq \\left(\\sum x_i^2\\right)\\left(\\sum y_i^2\\right),\n\t\\]\n\tor in inner product notation\n\t\\[\n\t\t(\\V x, \\V y)^2 \u2264 \\norm{\\V x}^2 \\norm{\\V y}^2.\n\t\\]\n\t\n\tWe may take $P = 0 \\in \\mathbb{R}^n$ to be the origin, $Q$ has position vector $\\V x$ with respect to $P$ and $R$ has position vector $\\V y$ with respect to $Q$, and has position vector $\\V x + \\V y$ with respect to $P$ :\n\t\\begin{align*}\n      \\norm{\\V x + \\V y}^2 &= (\\V x + \\V y, \\V x + \\V y) \\\\\n                           &= \\norm{\\V x+\\V y, \\V x+\\V y} \\\\\n                           &= \\norm{\\V x}^2 + 2(\\V x,\\V y) + \\norm{\\V y}^2 \\\\\n                           &\\leq \\norm{\\V x}^2 + 2\\norm{\\V x} \\cdot \\norm{\\V y} + \\norm{\\V y}^2 \\\\\n                           &= (\\norm{\\V x} + \\norm{\\V y})^2\n\t\\end{align*}\n\\end{proof}\n\n\\begin{lemma}[Cauchy-Schwarz Inequality]\n  \\[\n    \\forall \\V x,\\V y \\in \\mathbb{R}^n, (\\V x,\\V y)^2 \\leq \\norm{\\V x}^2 \\times \\norm{\\V y}^2.\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  The quadratic polynomial on real variable $\\lambda$\n  \\[\n    (\\lambda \\V x + \\V y, \\lambda \\V x + \\V y) = \\norm{\\V x}^2 \\lambda^2 + 2(\\V x,\\V y)\\lambda + \\norm{\\V y}^2\n  \\]\n  is positive semi-definite. 
Thus its discriminant satisfies $4(\\V x,\\V y)^2 - 4\\norm{\\V x}^2\\norm{\\V y}^2 \\leq 0$, and the result follows.\n\\end{proof}\n\nMore examples of metric spaces:\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item $X = \\mathbb{R}^n, d_1(\\V x,\\V y) = \\sum_{i=1}^n |x_i-y_i|$ and $d_\\infty(\\V x,\\V y) = \\max_i |x_i - y_i|$ are both metrics.\n  \\item For any set $X$, define the \\emph{discrete metric}\n    \\[\n      d(x, y) =\n      \\begin{cases}\n        1 & \\text{if } x \\neq y \\\\\n        0 & \\text{if } x = y\n      \\end{cases}\n    \\]\n  \\item $X= C[0,1]$, the set of real continuous functions on $[0,1]$. We can define $d_1, d_2,\\ldots d_\\infty$ by\n    \\begin{align*}\n      d_1(f,g) &= \\int_0^1 |f-g| dx \\\\\n      d_2(f,g) &= \\left(\\int_0^1(f-g)^2 dx\\right)^{1/2} \\\\\n      &\\vdots \\\\\n      d_\\infty(f,g) &= \\sup|f-g|\n    \\end{align*}\n  \\item British rail metric: consider $\\mathbb{R}^n$ with Euclidean metric, and let $0$ denote the origin. Define a new metric on $\\mathbb{R}^n$ by\n    \\[\n      \\rho(P,Q) =\n      \\begin{cases}\n        d(P,0) + d(0,Q) &\\text{if } P\\neq Q \\\\\n        0 &\\text{if } P=Q\n      \\end{cases}\n    \\]\n  \\end{enumerate}\n\\end{eg}\n\nSome metrics in fact satisfy a stronger triangle inequality:\n\n\\begin{definition}[Ultra-metric]\n  A metric space $(X,d)$ is called an \\emph{ultra-metric} if for all \\(P, Q, R \\in X\\),\n  \\[\n    d(P,R) \\leq \\max\\{d(P,Q), d(Q,R)\\}.\n  \\]\n\\end{definition}\n\n\\begin{ex}\n  $X = \\mathbb{Z}$, $p$ prime. The \\emph{$p$-adic metric} is defined by\n  \\[\n    d(m,n) =\n    \\begin{cases}\n      0 & m = n \\\\\n      \\frac{1}{p^r} & m \\neq n \\: \\text{where } r = \\max\\{s\\in\\mathbb{N}: p^s \\divides (m-n)\\}\n    \\end{cases}\n  \\]\n  Claim: $d$ is an ultra-metric.\n\\end{ex}\n\nThis extends to a $p$-adic metric on $\\mathbb{Q}$. For $x\\neq y \\in \\mathbb{Q}$, write $x-y = p^r \\frac{m}{n}, r\\in\\mathbb{Z}$, $m, n$ coprime to $p$, and define $d(x,y) = \\frac{1}{p^r}$.\n\n\\begin{ex}\n  The sequence $a_n = 1+p+p^2+ \\dots + p^n$ is a convergent sequence in $(\\mathbb{Q},d_p)$ with limit $a = \\frac{1}{1-p}$, since $a_n-a = \\frac{p^{n+1}}{p-1}$ is divisible by $p^n$ for all $n$.\n\\end{ex}\n\n\\begin{definition}[Lipschitz equivalence]\n  Two metrics $\\rho_1, \\rho_2$ on a set $X$ are \\emph{Lipschitz equivalent} if there exist \\(0 < \\lambda_1 \\leq \\lambda_2\\) in \\(\\R\\) such that\n  \\[\n    \\lambda_1 \\rho_1 \\leq \\rho_2 \\leq \\lambda_2\\rho_1.\n  \\]\n\\end{definition}\n\nThis is an equivalence relation.\n\n\\begin{remark}\n  For metrics $d_1, d_2, d_\\infty$ on $\\mathbb{R}^n$, one can show that\n  \\[\n    d_1 \\geq d_2 \\geq d_\\infty \\geq \\frac{d_2}{\\sqrt n} \\geq \\frac{d_1}{n}\n  \\]\n  and so all three are Lipschitz equivalent.\n\\end{remark}\n\n\\begin{proposition}\n  $d_1$ and $d_\\infty$ on $C[0,1]$ are not Lipschitz equivalent.\n\\end{proposition}\n\n\\begin{proof}\n  For $n \\geq 2$, let $f_n \\in C[0,1]$ be given by the function shown below. 
Thus $d_1(f_n,0) = $ the area of the triangle $= \\frac{1}{\\sqrt n} \\to 0$, while $d_\\infty(f_n,0) = \\sqrt n \\to \\infty$ as $n \\to \\infty$.\n\\end{proof}\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\draw[->] (-.5,0) -- (3,0);\n    \\draw[->] (0,-.5) -- (0,3);\n    \\draw (0,0) -- (1,2) -- (2,0);\n    \\draw (1,0) node[below] {$\\frac{1}{n}$} (2,0) node[below] {$\\frac{2}{n}$} (0,2) node[left] {$\\sqrt n$};\n  \\end{tikzpicture}\n\\end{center}\n\n\\begin{ex}\n  Show that $d_2(f_n,0) = \\sqrt{\\frac{2}{3}}$ for all $n$, and deduce that $d_2$ is not Lipschitz equivalent to either $d_1$ or $d_\\infty$.\n\\end{ex}\n\n\\subsection{Open Ball and Open Set}\n\nLet $(X,d)$ be a metric space, $P\\in X, \\delta > 0$. The \\emph{open ball} is\n\\[\n  B_d(P,\\delta) = \\{Q\\in X: d(P,Q) < \\delta\\}\n\\]\nOften we omit the metric and write $B(P,\\delta)$ or $B_\\delta(P)$.\n\n\\begin{definition}[Open subset]\n  A subset $U \\subseteq X$ of a metric space $(X,d)$ is called \\emph{open} if for all \\(P \\in U\\), there exists \\(\\delta > 0\\) such that $B(P,\\delta) \\subseteq U$, i.e.\\ an open subset is just a union of open balls.\n\\end{definition}\n\n\\begin{definition}[Closed subset]\n  A subset $F\\subseteq X$ is \\emph{closed} if $X\\setminus F$ is open.\n\\end{definition}\n\nOpen balls are open and closed balls are closed.\n\n\\begin{lemma}\\leavevmode\n  \\begin{enumerate}\n  \\item $\\emptyset, X \\subseteq X$ are open subsets of $(X,d)$.\n  \\item If $\\{U_i\\}_{i\\in I}$ are open sets then so is $\\bigcup_{i\\in I} U_i$.\n    \\item If $U_1, U_2$ are open then so is $U_1 \\cap U_2$.\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{definition}[Open neighbourhood]\n  Given a point $P$ of $(X,d)$, an \\emph{open neighbourhood} of $P$ is an open set $U \\ni P$.\n\\end{definition}\n\n\\subsection{Limit and Continuity}\n\nSuppose $(x_n)$ is a sequence of points in a metric space $(X,d)$. We say $x_n \\to x\\in X$ (converges to limit $x$) if $d(x_n,x) \\to 0$ as $n \\to \\infty$. Equivalently, $\\forall \\varepsilon>0 \\; \\exists N \\; \\forall n\\geq N, \\; x_n \\in B(x, \\varepsilon)$.\n\n\\begin{remark}\n  $x_n \\to x$ in $(X,d)$ if and only if for any open neighbourhood $U \\ni x$, $\\exists N$ such that $\\forall n\\geq N$, $x_n \\in U$. 
\n\\end{remark}\n\\end{document}\n", "meta": {"hexsha": "8761c3d8e2bc735993e750f0cbce36e98a6b0a49", "size": 7520, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "IB/metric_and_topological_spaces.tex", "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_issues_repo_path": "IB/metric_and_topological_spaces.tex", "max_issues_repo_name": "b-mehta/tripos", "max_issues_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_forks_repo_path": "IB/metric_and_topological_spaces.tex", "max_forks_repo_name": "b-mehta/tripos", "max_forks_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "avg_line_length": 34.0271493213, "max_line_length": 246, "alphanum_fraction": 0.59375, "num_tokens": 2947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.7956581097540519, "lm_q1q2_score": 0.5792195790153111}}
{"text": "\\chapter{Structural Induction}\n\\label{chapter:structural-induciton}\nTo illustrate the notions we introduce in this chapter we are going to prove\nthat in the game from \\Cref{section:strong-induction-recursive} it is impossible\nto guess the number correctly using less than $10$ questions.\n\nTo prove such a statement we need a formal definition of an algorithm that\nBob may use.\n\n\\section{Recursive Definitions}\nFirst note that any question for Alice can be formulated as follows:\n``Is the value of a function $f$ at your number equal to $\\ltrue$?'', where\n$f$ is a function from $\\Z$ to $\\set{0, 1}$ (here and in the sequel\nwe interpret $1$ as ``yes'' and $0$ as ``no'').\n\nHence, there are two possible behaviours of any algorithm for Bob.\n\\begin{itemize}\n  \\item The algorithm prescribes Bob to just say that the answer is some\n    number $x \\in \\Z$, or\n  \\item The algorithm prescribes Bob to ask whether $f$ at Alice's number is\n    equal to $\\ltrue$. If the answer is yes, then the algorithm prescribes Bob\n    to proceed according to an algorithm $A_1$, otherwise the algorithm\n    prescribe Bob to proceed according to an algorithm $A_0$.\n\\end{itemize}\n\nHence, any algorithm for Bob can be described using the following object.\n\\begin{definition}\n  We say that $T$ is a $B$-decision tree if\n  \\begin{description}\n    \\item [(base case)] either $T$ is equal to $\\DTReturn{y}$, where $y \\in \\Z$;\n      or\n    \\item [(recursion step)] $T$ is equal to $\\DTIf{f}{T_0}{T_1}$,\n      where $f : \\Z \\to \\set{0, 1}$, and $T_0$ and $T_1$\n      are $B$-decision trees.\n  \\end{description}\n\\end{definition}\nNote that this definition is not quite formal since it is recursive and\nwe usually do not allow recursive definitions. So we will need to give a more\nformal way to define $B$-decision trees. 
However, this definition allows us to\nprove that \n\\[\n  T = \n  \\DTIf{f_1}{\n    \\DTIf{f_2}{\n      \\DTReturn{1}\n     }{\n      \\DTReturn{2}\n     }\n  }{\n    \\DTReturn{3}\n  }\n\\]\nis a $B$-decision tree, where\n\\begin{gather*}\n  f_1(x) =\n    \\begin{cases}\n      1 & \\text{if } x \\le 2 \\\\\n      0 & \\text{otherwise}\n    \\end{cases}, \\\\\n  \\text{and} \\\\\n  f_2(x) =\n    \\begin{cases}\n      1 & \\text{if } x = 1 \\\\\n      0 & \\text{otherwise}\n    \\end{cases}.\n\\end{gather*}\nThis can be explained as follows.\n\\begin{itemize}\n  \\item It is clear that $\\DTReturn{1}$, $\\DTReturn{2}$, and $\\DTReturn{3}$ are\n    $B$-decision trees by the base case.\n  \\item Hence, by the recursion step case,\n    $\\DTIf{f_2}{\\DTReturn{1}}{\\DTReturn{2}}$ is a $B$-decision tree.\n  \\item Finally, by the recursion step case, $T$ is a $B$-decision tree.\n\\end{itemize}\n\nIn other words, we proved that $T$ is a $B$-decision tree by providing $T_1$,\n\\dots, $T_5$ such that $T_5 = T$ and for each $i \\in [5]$,\n$T_i$ is either equal to $\\DTReturn{y}$ (for $y \\in \\Z$) or \n$T_i$ is equal to $\\DTIf{f}{T_j}{T_k}$ (for $j, k < i$ and \n$f : \\Z \\to \\set{0, 1}$).\n\nThis idea leads to a framework that allows us to give a formal\ndefinition of $B$-decision trees.\n\\begin{definition}\n  Let $U$ be a set, let $S_0 \\subseteq U$, and let\n  \\[\n    \\mathcal{F} =\n    \\set{F_1 : U^{\\ell_1} \\to U, \\dots, F_n : U^{\\ell_n} \\to U, \\dots}.\n  \\]\n  Then we say that the set $S$ is generated by $\\mathcal{F}$ from $S_0$ if \n  it is the set of all $u \\in U$ such that there is a sequence\n  $u_1$, \\dots, $u_m$ satisfying the following constraints: $u_m = u$ and\n  for each $i \\in \\range{m}$,\n  \\begin{itemize}\n    \\item either $u_i \\in S_0$, or\n    \\item $u_i = F(u_{k_1}, \\dots, u_{k_\\ell})$\n      for $F \\in \\mathcal{F}$ and $k_1, \\dots, k_\\ell < i$.\n  \\end{itemize}\n\\end{definition}\nIn the case of $B$-decision trees, $U$ is the set of all sequences of numbers,\n$\\mathbf{if}$, $\\mathbf{then}$, $\\mathbf{else}$, and functions from $\\Z$ to $\\set{0, 1}$,\n$S_0$ is the set of all sequences $\\DTReturn{y}$ for $y \\in \\Z$, and\n\\[\n  \\mathcal{F} =\n  \\set[\n    {\n      f \\text{ is a function from } \\Z\n        \\text{ to } \\set{0, 1}\n    }\n  ]{F_f},\n\\]\nwhere $F_f(T_0, T_1)$ is equal to $\\DTIf{f}{T_0}{T_1}$.\n\n\\begin{remark}\n  Let $U$ be a set, let $S_0 \\subseteq U$, and let\n  \\[\n    \\mathcal{F} =\n    \\set{F_1 : U^{\\ell_1} \\to U, \\dots, F_n : U^{\\ell_n} \\to U, \\dots}.\n  \\]\n  Let us consider the sets $S_0, \\dots, S_n, \\dots \\subseteq U$ such that \n  \\[\n    S_{i + 1} = S_i \\cup \n    \\set[u_1, \\dots, u_\\ell \\in S_i, F \\in \\mathcal{F}]{F(u_1, \\dots, u_\\ell)}.\n  \\]\n  It is clear that $\\bigcup_{i \\ge 0} S_i$ is the set generated by $\\mathcal{F}$\n  from $S_0$.\n\\end{remark}\n\nUsing similar ideas we may define some functions on the objects defined\nrecursively.\n\\begin{definition}\n  Let $T$ be a $B$-decision tree. Then the height $\\DTHeight{T}$ of $T$ can be\n  defined as follows.\n  \\begin{description}\n    \\item [(base case)] If $T$ is equal to $\\DTReturn{y}$ for $y \\in \\Z$, \n      then $\\DTHeight{T} = 0$.\n    \\item[(recursion step)] Let $T$ be equal to $\\DTIf{f}{T_0}{T_1}$, where\n      $T_0$ and $T_1$ are $B$-decision trees.
\n      Then $\\DTHeight{T} = \\max(\\DTHeight{T_0}, \\DTHeight{T_1}) + 1$.\n  \\end{description}\n\\end{definition}\nNote that $\\DTHeight{T}$ corresponds to the worst-case number of queries made by\nBob if we interpret $T$ as a description of Bob's algorithm.\n\nHowever, before we explain how to formalize such a definition, we need to note\nthat in the general case such a definition may be contradictory. Consider\n$U = \\R$, $S_0 = \\set{0}$, and $\\mathcal{F} = \\set{f, g}$, where $f(x, y) = xy$\nand $g(x) = x + 1$. We define $v : U \\to \\R$ as follows.\n\\begin{description}\n  \\item [(base case)] $v(0) = 0$.\n  \\item[(recursion step)] $v(f(x, y)) = f(v(x), v(y))$ and\n    $v(g(x)) = v(x) + 2$.\n\\end{description}\nNote that $v(f(g(0), g(0))) = f(v(g(0)), v(g(0))) = \\big(v(g(0))\\big)^2 = 4$\nand $v(g(0)) = v(0) + 2 = 2$. However, $g(0) = 1$ and $f(g(0), g(0)) = 1$,\nso $v(1)$ would have to be equal to both $2$ and $4$.\n\nTherefore, to handle such an issue, we consider sets $S$ that are \\emph{freely} generated\nfrom $S_0$ by $\\mathcal{F}$.\n\\begin{definition}\n  The set $S$ is freely generated by $\\mathcal{F}$ from $S_0$ iff it is\n  generated by $\\mathcal{F}$ from $S_0$,\n  every $F \\in \\mathcal{F}$ is injective,\n  $S_0 \\cap \\Im F = \\emptyset$ for any $F \\in \\mathcal{F}$, and\n  $\\Im F \\cap \\Im G = \\emptyset$ for any distinct\n  $F, G \\in \\mathcal{F}$.\n\\end{definition}\n\nThe following theorem claims the existence of functions defined recursively.\n\\begin{theorem}\n\\label{theorem:recursion-principle}\n  Let $S \\subseteq U$ be the set freely generated from $S_0 \\subseteq U$ by\n  $\\mathcal{F} =\n    \\set{F_1 : U^{\\ell_1} \\to U, \\dots, F_n : U^{\\ell_n} \\to U, \\dots}$.\n  In addition, let $G_0 : S_0 \\to V$ and\n  $G_1 : V^{\\ell_1} \\to V, \\dots, G_n : V^{\\ell_n} \\to V$, \\dots\n  be some functions.\n\n  Then there is a function $h : S \\to V$ such that\n  \\begin{description}\n        \\item [(base case)] $h(u) = G_0(u)$ for any $u \\in S_0$.\n        \\item[(recursion step)] $h(F_i(u_1, \\dots, u_{\\ell_i})) =\n            G_i(h(u_1), \\dots, h(u_{\\ell_i}))$ for any $i$ and\n            $u_1, \\dots, u_{\\ell_i} \\in S$.\n  \\end{description}\n\\end{theorem}\n\n\\begin{exercise}\n  Prove \\Cref{theorem:recursion-principle}.\n\\end{exercise}\n\n$B$-decision trees are used in this chapter to represent Bob's algorithms;\nhence, we need to define values of $B$-decision trees.\n\\begin{definition}\n  The value $\\DTValue{T}{n}$ of a $B$-decision tree $T$ at an integer $n$\n  can be defined as follows.\n  \\begin{description}\n      \\item [(base case)] If $T$ is equal to $\\DTReturn{y}$ for $y \\in \\Z$,\n        then $\\DTValue{T}{n} = y$.\n      \\item[(recursion step)] Let $T$ be equal to $\\DTIf{f}{T_0}{T_1}$, where\n        $T_0$ and $T_1$ are $B$-decision trees, and $f : \\Z \\to \\set{0, 1}$.\n        Then\n        \\[\n          \\DTValue{T}{n} =\n          \\begin{cases}\n            \\DTValue{T_0}{n} & \\text{if } f(n) = 0 \\\\\n            \\DTValue{T_1}{n} & \\text{otherwise}\n          \\end{cases}.\n        \\]\n  \\end{description}\n\\end{definition}\n\nUsing all these notions we can reformulate the results about Alice and Bob's\ngame.\n\\begin{theorem}\n\\label{theorem:guess-the-number}\n  \\begin{enumerate}\n    \\item There is a $B$-decision tree $T$ such that\n      \\begin{itemize}\n        \\item $\\DTHeight{T} \\le 10$ and\n        \\item $\\DTValue{T}{n} = n$ for all $n \\in \\range{1000}$.\n      \\end{itemize}\n    \\item Let $T$ be a $B$-decision tree such that $\\DTValue{T}{n} = n$ for\n      all $n \\in \\range{1000}$.
Then $\\DTHeight{T} \\ge 10$.\n  \\end{enumerate}\n\\end{theorem}\n\n\\begin{exercise}\n  Prove the first part of \\Cref{theorem:guess-the-number}.\n\\end{exercise}\n\n\\section{Structural Induction Theorem}\nTo prove the second part of \\Cref{theorem:guess-the-number} we need\nto introduce the notion of structural induction.\n\n\\begin{theorem}[The Structural Induction Principle]\n\\label{theorem:structural-induction}\n    Let $S \\subseteq U$ be the set freely generated from $S_0 \\subseteq U$ by\n    $\\mathcal{F} =\n      \\set{F_1 : U^{\\ell_1} \\to U, \\dots, F_n : U^{\\ell_n} \\to U, \\dots }$.\n\n    Assume that $S' \\subseteq U$ is a set such that the following constraints\n    are true.\n    \\begin{description}\n        \\item [(base case)] $S_0 \\subseteq S'$\n        \\item[(induction step)]\n          $F_i(u_1, \\dots, u_{\\ell_i}) \\in S'$ for any\n          $u_1, \\dots, u_{\\ell_i} \\in S'$ and $i \\in \\N$.\n    \\end{description}\n    Then $S \\subseteq S'$.\n\\end{theorem}\n\nUsing this result, we may prove \\Cref{theorem:guess-the-number}.\n\\begin{proof}[Proof of \\Cref{theorem:guess-the-number}]\n  We need to prove only the second part of the statement.\n  Let $V(T) = \\set[n \\in \\Z]{\\DTValue{T}{n}}$, where $T$ is a $B$-decision\n  tree. This proof is based on the following observation.\n  \\begin{claim}\n  \\label{claim:guess-the-number}\n    For any $B$-decision tree $T$, the size of $V(T)$ is at most $2^{\\DTHeight{T}}$.\n  \\end{claim}\n\n  Assume that $T$ is a $B$-decision tree such that $\\DTValue{T}{n} = n$ for\n  any $n \\in \\range{1000}$. Hence $\\range{1000} \\subseteq V(T)$. Therefore \n  $V(T)$ has at least $1000$ elements. As a result, $2^{\\DTHeight{T}} \\ge 1000$,\n  which implies that $\\DTHeight{T} \\ge 10$ since $\\DTHeight{T}$ is an integer.\n\n  So we just need to prove \\Cref{claim:guess-the-number}. We prove it using\n  structural induction. Let $S'$ be the set of decision trees $T$ such that the size\n  of $V(T)$ is at most $2^{\\DTHeight{T}}$.\n  \\begin{description}\n    \\item[(base case)] Let $T$ be equal to $\\DTReturn{y}$, where $y \\in \\Z$. Then \n      $\\DTValue{T}{n} = y$ for any $n \\in \\Z$. Hence, the size of $V(T)$ is\n      equal to $1 = 2^0 = 2^{\\DTHeight{T}}$.\n    \\item[(induction step)] Assume that $T$ is equal to $\\DTIf{f}{T_0}{T_1}$ for\n      some $T_0, T_1 \\in S'$. We know that the size of $V(T_0)$ is at most\n      $2^{\\DTHeight{T_0}}$ and the size of $V(T_1)$ is at most\n      $2^{\\DTHeight{T_1}}$. In addition, it is clear that \n      $V(T) \\subseteq V(T_0) \\cup V(T_1)$. Therefore,\n      the size of $V(T)$ is at most\\footnote{%\n        Formally speaking, we use \\Cref{theorem:additive-principle} to prove\n        this; i.e. we use the fact that the size of a set $A \\cup B$ is at most\n        the size of $A$ plus the size of $B$.\n      }\n      \\[\n        2^{\\DTHeight{T_0}} + 2^{\\DTHeight{T_1}} \\le \n          2^{\\max(\\DTHeight{T_0}, \\DTHeight{T_1}) + 1} = 2^{\\DTHeight{T}}.\n      \\]\n  \\end{description}\n  Hence, by \\Cref{theorem:structural-induction}, every $B$-decision tree belongs\n  to $S'$, which proves the claim.\n\\end{proof}\n\nNow we are ready to prove \\Cref{theorem:structural-induction}.\n\\begin{proof}[Proof of \\Cref{theorem:structural-induction}]\n    We prove the statement using induction.
More precisely, we prove by\n    induction on $m$ that if there is a sequence\n    $u_1$, \\dots, $u_m$ such that\n    for each $i \\in \\range{m}$, $u_i \\in S_0$ or $u_i = F(u_{k_1}, \\dots, u_{k_\\ell})$\n    for $F \\in \\mathcal{F}$ and $k_1, \\dots, k_\\ell < i$, then $u_m \\in S'$.\n\n    The case when $m = 1$ is clear since in this case $u_1 \\in S_0$, which implies\n    that it is in $S'$.\n\n    Let us now prove the induction step. Assume that the statement is true for any\n    $k \\le m$. Consider a sequence $u_1$, \\dots, $u_{m + 1}$ such that\n    for each $i \\in [m + 1]$,\n    $u_i \\in S_0$ or $u_i = F(u_{k_1}, \\dots, u_{k_\\ell})$\n    for $F \\in \\mathcal{F}$ and $k_1, \\dots, k_\\ell < i$. If $u_{m + 1} \\in S_0$,\n    then $u_{m + 1} \\in S'$ by the base case property of $S'$. Otherwise, let us consider\n    $F \\in \\mathcal{F}$ and $k_1, \\dots, k_\\ell < m + 1$ such that\n    $u_{m + 1} = F(u_{k_1}, \\dots, u_{k_\\ell})$. By the induction hypothesis,\n    $u_{k_1}, \\dots, u_{k_\\ell} \\in S'$. Therefore, by the properties of $S'$,\n    $u_{m + 1} \\in S'$.\n\\end{proof}\n\n\\begin{chapterendexercises}\n  \\exercise[recommended] Let $S \\subseteq \\Z$ be a set of size at least $2^k$,\n    and let $T$ be a decision tree such that $\\DTValue{T}{n} = n$ for any $n\n    \\in S$. Show that $\\DTHeight{T} \\ge k$.\n  \\exercise[recommended] Let $S$ be a set of integers defined recursively such\n    that\n    \\begin{itemize}\n      \\item $3 \\in S$, and \n      \\item if $x, y \\in S$, then $x + y \\in S$.\n    \\end{itemize}\n    Show that $S = \\set[k \\in \\N]{3k}$.\n  \\exercise[recommended]\n    Using a recursive definition, we can define an arithmetic formula on the\n    variables $x_1$, \\dots, $x_n$,\n    \\begin{description}\n      \\item [(base case)] $x_i$ is an arithmetic formula on the variables $x_1$,\n        \\dots, $x_n$ for all $i$; if $c$ is a real number, then $c$ is also an\n        arithmetic formula on the variables $x_1$, \\dots, $x_n$.\n      \\item[(recursion step)] If $P$ and $Q$ are arithmetic formulas on the\n        variables $x_1$, \\dots, $x_n$, then $(P + Q)$ and $(P \\cdot Q)$ are\n        arithmetic formulas on the variables $x_1$, \\dots, $x_n$.\n    \\end{description}\n\n    We can also define recursively the value of such a formula.\n    Let $v_1$, \\dots, $v_n$ be some real numbers.\n    \\begin{description}\n      \\item[(base cases)] $x_i\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} = v_i$;\n        in other words, the value of the arithmetic formula $x_i$ is equal to\n        $v_i$ when $x_1 = v_1$, \\dots, $x_n = v_n$; if $c$ is a real number,\n        then $c\\rvert_{x_1 = v_1, \\dots, x_n = v_n} = c$.\n      \\item[(recursion steps)] If $P$ and $Q$ are arithmetic formulas on the\n        variables $x_1$, \\dots, $x_n$, then\n        \\begin{gather*}\n          (P + Q)\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} =\n          P\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} +\n          Q\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} \\\\        \n        \\text{and} \\\\        \n         (P \\cdot Q)\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} =\n         P\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n} \\cdot\n         Q\\big\\rvert_{x_1 = v_1, \\dots, x_n = v_n}.\n        \\end{gather*}\n    \\end{description}\n\n    Prove that for any arithmetic formula $A$ on $x$, there is a polynomial\n    $p$ such that $p(v) = A\\big\\rvert_{x = v}$ for any $v \\in \\R$.\n  \\exercise\n    \\begin{itemize}           \n      \\item Define arithmetic formulas with division and define their value (make\n        sure that you handle division by $0$).\n      
\\item Show that for any arithmetic formula with division $A$ on $x$,\n        there are polynomials $p$ and $q$ such that $\\frac{p(v)}{q(v)} =\n        A\\big\\rvert_{x = v}$ or $A\\big\\rvert_{x = v}$ is not defined for\n        any real value $v$.\n    \\end{itemize}\n  \\exercise[recommended]\n    We say that $L$ is a $B$-decision list \n    \\begin{description}\n      \\item[(base case)] if either $L$ is equal to $\\DLReturn{y}$, where \n        $y \\in \\Z$; or\n      \\item[(recursion step)] $L$ is equal to $\\DLIf{f}{v}{L'}$, where \n        $f : \\Z \\to \\set{0, 1}$, $v \\in \\Z$, and $L'$ is a $B$-decision list.\n    \\end{description}\n\n    We can also define the value $\\DLValue{L}{x}$ of a $B$-decision list $L$ at\n    $x \\in \\mathbb{Z}$.\n    \\begin{description}\n      \\item[(base case)] Let $L$ be equal to $\\DLReturn{y}$, where $y \\in \\Z$.\n        Then $\\DLValue{L}{x} = y$.\n      \\item[(recursion step)] Let $L$ be equal to $\\DLIf{f}{v}{L'}$, then\n        \\[\n          \\DLValue{L}{x} = \n          \\begin{cases}\n            v & \\text{if } f(x) = 1 \\\\\n            \\DLValue{L'}{x} & \\text{otherwise}\n          \\end{cases}.\n        \\]\n    \\end{description}\n\n    Similarly one may define the length $\\DLLength{L}$ of a $B$-decision list $L$.\n    \\begin{description}\n      \\item[(base case)] If $L$ is equal to $\\DLReturn{y}$, then \n        $\\DLLength{L} = 1$; and\n      \\item[(recursion step)] if $L$ is equal to \n        $\\DLIf{f}{v}{L'}$, then $\\DLLength{L} = \\DLLength{L'} + 1$.\n    \\end{description}\n\n    Assume that $\\DLValue{L}{x} = x$ for any $x \\in [1000]$; show that \n    $\\DLLength{L} \\ge 1000$.\n  \\exercise\n    Let us consider the following modification of the game studied in this\n    chapter. Alice has chosen a number from $1$ to $1000$. Bob wants to\n    guess the number so he asks Alice ``yes'' or ``no'' questions.\n    However, he cannot get more than $\\ell$ ``yes'' answers; i.e., as soon as Alice\n    says the $\\ell$th ``yes'', Bob is supposed to be able to guess her number.\n    \\begin{enumerate} \n      \\item Show that if $\\ell = 1$, then Bob needs at least $1000$ questions.\n      \\item Show that if $\\ell = 2$, then Bob needs at least $\\sqrt{1000}$\n        questions.\n      \\item Show that, in general, Bob needs at least $\\sqrt[\\ell]{1000}$ questions.\n      \\item Show that if $\\ell = 2$, there is an algorithm for Bob such that he\n        is able to guess the number using $2\\ceil{\\sqrt{1000}}$ questions.\n    \\end{enumerate}\n  \\exercise You have a $100$-story building and two eggs. When you drop an egg\n    from any floor of the building, the egg will either break or survive the\n    fall. If the egg survives, then it would have survived any lesser fall. If\n    the egg breaks, then any greater fall would have broken it as well. The eggs\n    are all identical and interchangeable. You'd like to find the minimum height\n    that will break an egg. 
What is the fewest number of drops in which you are\n    guaranteed to find the right floor?\n\\end{chapterendexercises}\n", "meta": {"hexsha": "82395137072959646fb1297d6f22c92e159fe9e3", "size": 17781, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "parts/part_1/chapter_9_structural_induction.tex", "max_stars_repo_name": "alexanderknop/I2DM", "max_stars_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-01-12T05:01:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-12T11:44:11.000Z", "max_issues_repo_path": "parts/part_1/chapter_9_structural_induction.tex", "max_issues_repo_name": "aaknop/I2DM", "max_issues_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 69, "max_issues_repo_issues_event_min_datetime": "2019-01-09T00:19:58.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-04T00:27:16.000Z", "max_forks_repo_path": "parts/part_1/chapter_9_structural_induction.tex", "max_forks_repo_name": "aaknop/I2DM", "max_forks_repo_head_hexsha": "745bc4e24087c1d7abd02f39c1481bb7b7ddb796", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-01-08T23:55:41.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-12T07:14:44.000Z", "avg_line_length": 42.0354609929, "max_line_length": 89, "alphanum_fraction": 0.6145886058, "num_tokens": 6174, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.7956581049086031, "lm_q1q2_score": 0.5792195754879431}}
{"text": "\n\n\\chapter{\\projmr3 \\ Three Dimensional Data Set}\n\n\\section{\\projmr3 \\ Multiresolution}\n\n\\subsection{Introduction}\n\\subsubsection{The 3D \\`a trous Wavelet Transform}\nThe 3D \\`a trous WT of a cube produces $J$ bands, each one being\na cube with the same dimension as the input cube.\nThe 3D WT of a cube $C(*,*,*)$ is therefore a 4D data set\n$W(*,*,*,0:J-1)$, where $J$ is the number of \nresolutions used in the decomposition.\n\n\\subsubsection{The 3D bi-orthogonal Wavelet Transform}\n\nA one-level 3D sub-band decomposition transforms a cube of $N^3$ pixels into \neight sub-cubes, also called bands, of  $({N\\over 2})^3$ pixels. \nOne of these bands corresponds to the original cube at a lower resolution\n(smoothed cube), while the seven other bands are the detail \ninformation, i.e., \ninformation lost between the two resolution levels, original\nand smoothed data resolution level.\nA reconstruction of the original input cube can be performed from the \neight sub-bands. The 3D wavelet transform (WT) iterates the decomposition \nprocess on the smoothed cube. For a 3D WT with $P$ scales, \nthe final number of bands is equal to $7(P-1) + 1$, i.e., seven  \ndetail bands for the first $P-1$ scales, plus the last smoothed array.\nFor an $N^3$ pixels cube $D(0..N-1, 0..N-1, 0..N-1)$, the eight bands \nproduced by \none-level decomposition (two scales) are stored in the following way:\n\\begin{enumerate}\n\\item Band 1: $B_1({N\\over 2}:N-1,0:{N\\over 2}-1,0:{N\\over 2}-1)$ is obtained\nby 1D convolution with filters $ g_x, h_y, h_z$.\n\\item Band 2: $B_2(0:{N\\over 2}-1, {N\\over 2}:N-1,0:{N\\over 2}-1)$ is obtained\nby  1D convolution with filters $ h_x, g_y, h_z$.\n\\item Band 3: $B_3({N\\over 2}:N-1, {N\\over 2}:N-1,0:{N\\over 2}-1)$ is obtained\nby  1D convolution with filters $ g_x, g_y, h_z$.\n\\item Band 4: $B_4({N\\over 2}:N-1,0:{N\\over 2}-1, {N\\over 2}:N-1)$ is obtained\nby  1D convolution with filters $ g_x, h_y, g_z$.\n\\item Band 5: $B_5(0:{N\\over 2}-1, {N\\over 2}:N-1,{N\\over 2}:N-1)$ is obtained\nby  1D convolution with filters $ h_x, g_y, g_z$.\n\\item Band 6: $B_6({N\\over 2}:N-1, {N\\over 2}:N-1,{N\\over 2}:N-1)$ is obtained\nby  1D convolution with filters $ g_x, g_y, g_z$.\n\\item Band 7: $B_7( 0:{N\\over 2}-1, 0:{N\\over 2}-1,{N\\over 2}:N-1)$ is obtained\nby  1D convolution with filters $ h_x h_y g_z $.\n\\item Band 8: $B_8( 0:{N\\over 2}-1, 0:{N\\over 2}-1,0:{N\\over 2}-1)$ is obtained\nby  1D convolution with filters $ h_x, h_y, h_z$.\n\\end{enumerate}\nFor the three-scale decomposition, $B_8$ pixels of the 2-scale \ndecomposition are replaced by bands 8 to 15, each having a size of \n $({N\\over 4})^3$  pixels.\n\n\\subsection{Multiresolution transform of cube: mr3d\\_trans}\n\\index{mr3d\\_trans}\n\\label{sect_trans3d}\nThe program \n{\\em mr3d\\_trans} computes the multiresolution transform of a cube by\nthe \\`a trous WT or a non-redundant transform. \nThe output file which contains the transformation has a \nsuffix, .mr. If the output file name\ngiven by the user does not contain this suffix, it is automatically\nadded. 
The ``.mr'' file is a FITS format file, and can be manipulated by\nany package dealing with FITS format, or using the \n{\\em mr3d\\_extract} program.\n{\\bf \n\\begin{center}\n USAGE: mr3d\\_trans option cube\\_in multiresolution\\_transform\\_out\n\\end{center}}\nwhere options are: \n\\begin{itemize} \n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-t type\\_of\\_multiresolution\\_transform]}\n{\\small \n\\begin{enumerate}\n\\item  (bi-) orthogonal wavelet transform.  \nAntonini 7/9 filters ~\\cite{wave:antonini92} are used by default, with an \n$L_1$ normalization. The filters can be changed using the ``-T'' option, and\nan $L_2$ normalization is obtained by the ``-L'' option.\n\\item Wavelet transform via lifting scheme \n\\item A trous wavelet transform\n\\end{enumerate}}\nDefault is 1.\n\\item {\\bf [-n number\\_of\\_scales]} \\\\\n Number of scales used in the multiresolution transform. \\\\\n Default is 4.\n\\item {\\bf [-T type\\_of\\_filters]}  \n{\\small\n\\begin{enumerate}\n\\itemsep=0.1truecm\n\\item Antonini 7/9 filters. \n\\item Daubechies filter 4. \n\\item Biorthogonal 2/6 Haar filters.\n\\item Biorthogonal 2/10 Haar filters.\n\\item Odegard 7/9 filters.\n\\item User's filters.\n\\end{enumerate}}\nDefault is Antonini 7/9 filters.\n\\item {\\bf [-L]} \\\\\nUse an $L_2$ normalization. Default is $L_1$.\n\\item {\\bf [-l type\\_of\\_lifting\\_transform]}  \n{\\small \n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\item Lifting scheme: CDF WT. \n\\item Lifting scheme: median prediction.\n\\item Lifting scheme: integer Haar WT. \n\\item Lifting scheme: integer CDF WT. \n\\item Lifting scheme: integer (4,2) interpolating transform. \n\\item Lifting scheme: Antonini 7/9 filters.\n\\item Lifting scheme: integer Antonini 7/9 filters. \n\\end{enumerate}}\n Default is Lifting scheme: integer Haar WT.  \n\\item {\\bf [-i]} \\\\\nPrint statistical information about each band.\n\\end{itemize}\nThe result is stored in a file (suffix ``.mr''),\nand cubes (or scales) of the transformation can be extracted\nby using the {\\em mr3d\\_extract} program. \\\\\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\itemsep=0.1truecm\n\\item mr3d\\_trans cube.fits cube\\_trans.mr \\\\\nApply the wavelet transform to a cube, and store the\nresult in cube\\_trans.mr. \n\\item mr3d\\_trans -T 3 -i cube.fits cube\\_trans.mr \\\\\nSame as before, but use Haar filters, and\n  print statistical information.\n\\item mr3d\\_trans -t 2 -l 1 cube.fits cube\\_trans.mr \\\\\nApply a wavelet transform via the lifting scheme.\n\\end{itemize}\n\n\n\\subsection{Extraction of a scale: mr3d\\_extract}\n\\index{mr3d\\_extract}\n\\label{sect_extr3d}\n\nThe program\n{\\em mr3d\\_extract} allows the user to extract a band from\na multiresolution transform file (suffix ``.mr'').\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_extract options multiresolution\\_file  output\\_cube\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\itemsep=0.1truecm\n \\item {\\bf  [-b band\\_number] } \\\\\n Band number to extract. Default is 1.\n\\end{itemize}\n\\subsubsection*{Example:}\n\\begin{itemize}\n\\item mr3d\\_extract -b 2 mr\\_file.mr cube\\_band\\_2.fits \\\\\nExtract the second band of the wavelet transform, and write it as a FITS\nfile of name ``cube\\_band\\_2.fits''.  \n\\end{itemize}\n\n\\subsection{Insertion of an image: mr3d\\_insert}\n\\index{mr3d\\_insert}\nThe program\n{\\em mr3d\\_insert} replaces a band by some cube, by inserting it\nin the multiresolution transform file.
The band and the cube\nmust have the same size. \n{\\bf\n\\begin{center}\n USAGE: mr3d\\_insert options multiresolution\\_file input\\_cube\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\itemsep=0.1truecm\n\\item  {\\bf  [-b band\\_number] } \\\\\n Band number to insert. Default is 1.\n\\end{itemize}\n{\\em multiresolution\\_file} is the file (.mr) which contains \nthe multiresolution transformation, {\\em input\\_cube} is the cube\nwhich must replace the band cube. {\\em band\\_number} specifies\nthe band number to be replaced. Default is 1. \\\\\nThe multiresolution transform is updated. \\\\\n\\subsubsection*{Example:}\n\\begin{itemize}\n\\item mr3d\\_insert -b 3 mr\\_file.mr cube.fits \\\\\nInsert a cube at the third band of the multiresolution file. \n\\end{itemize}\n\n\\subsection{Reconstruction: mr3d\\_recons}\n\\index{mr3d\\_recons}\nThe program \n{\\em mr3d\\_recons}  reconstructs a cube from its multiresolution\ntransform.\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_recons  multiresolution\\_file  cube\\_out\n\\end{center}}\n{\\em multiresolution\\_file} is the file (.mr) which contains \nthe multiresolution transformation, {\\em cube\\_out} is the\noutput reconstructed cube.\n\n\n\\section{\\projmr3 Denoising: mr3d\\_filter}\n\\index{filtering}\n\\subsection{Gaussian noise filtering: mr3d\\_filter}\n\\index{mr3d\\_filter}\nProgram {\\em mr3d\\_filter} filters a cube. Only Gaussian noise \nis considered.\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_filter option cube\\_in cube\\_out\n\\end{center}}\nwhere options are \n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-t type\\_of\\_multiresolution\\_transform]}\n{\\small \n\\begin{enumerate}\n\\item  (bi-) orthogonal wavelet transform.  \nAntonini 7/9 filters ~\\cite{wave:antonini92} are used by default, with an \n$L_1$ normalization. The filters can be changed using the ``-T'' option, and\nan $L_2$ normalization is obtained by the ``-L'' option.\n\\item A trous wavelet transform\n\\end{enumerate}}\n\\item {\\bf [-n number\\_of\\_scales]} \\\\\nNumber of scales used in the multiresolution transform. Default is 4.\n\\item {\\bf [-T type\\_of\\_filters]}  \\\\\nsee~\\ref{sect_trans3d}.\n\\item {\\bf [-g sigma]} \\\\\n The cube contains Gaussian noise, and the standard deviation is\ngiven by {\\em sigma}. The option should be set only if the user\nknows the standard deviation of the noise. \n\\item {\\bf [-s NSigma]} \\\\\nThe detection level at each scale is determined by the product\nof the standard deviation of the noise and {\\em NSigma}.\n{\\em NSigma} fixes the confidence interval we want. By default,\n{\\em NSigma} is equal to 3.\n\\item {\\bf [-C]} \\\\\nCorrelated noise.\n\\end{itemize}\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item mr3d\\_filter cube\\_in.fits cube\\_out.fits \\\\\nFilters a cube by multiresolution thresholding, assuming\nGaussian noise (its standard deviation is automatically estimated).\n\\item mr3d\\_filter -s 4 -g 10. cube\\_in.fits cube\\_out.fits \\\\\nSame as before, but the noise standard deviation is fixed to 10, and\na $4\\sigma$ thresholding is performed. \n\\item mr3d\\_filter -C cube\\_in.fits cube\\_out.fits \\\\\nConsider correlated noise instead of Gaussian noise.
The noise standard\ndeviation is estimated at each scale using the MAD (median absolute\ndeviation) estimator.\n\\end{itemize}\n\n\n\\newpage\n\n\\section{3D Point Clouds Analysis}\nWe will call a {\\em Catalog} a set of points in the 3D space, each point \nbeing represented by its three coordinates $X,Y,Z$.\n\n\\subsubsection*{ASCII Catalog}\nThe catalog format is the following: \n\\subsubsection*{Catalogue format}\n\\begin{itemize}\n\\item the first line must contain the number of points $N$, the\n  dimension $D$ (i.e. 3), and the coordinate system $S$ ($S$ = 1 or\n  2).  The recognized coordinate systems are:\n  \\begin{enumerate}\n  \\item $X$ and/or $Y$ and/or $Z$ Euclidean system.\n  \\item Angular coordinate system (longitude, latitude and/or\n    distance; that could be the equatorial system -- right ascension\n    $\\alpha$, declination $\\delta$, the galactic coordinate system\n    with galactic longitude $l$, galactic latitude $b$, the\n    supergalactic coordinate system with $SL$ and $SB$).\n  \\end{enumerate}\n\\item the $D$ following lines must contain three values: the range of\n  variation of the $i$-th coordinate (min,max) and a flag indicating\n  the generation model for the corresponding coordinate: 0 -- uniform\n  between [min,max], 1 -- bootstrapping the coordinate and 2 -- uniform\n  on sphere between [min,max] in degrees. When the user has supplied\n  their own generated random catalogs, those three lines are\n  ignored. This last value is used only by programs which need to \n  simulate data from the input data, and can generally be set to zero.\n\\item the $N$ following lines contain the coordinates of the points.\n\\end{itemize}\nAn example of a 3D catalogue, with 10 points in the Euclidean system,\neach coordinate being defined in the interval $[0,100[$ and random\ncatalog coordinates uniform in $[0,100[$ for each axis, is:\n\\begin{verbatim}\n       10       3           1\n       0      100.000       0\n       0      100.000       0\n       0      100.000       0\n      42.1782      13.4610      73.7444\n      41.6855      9.82727      75.3605\n      42.3580      14.7867      73.1548\n      42.0255      12.3347      74.2453\n      42.7474      17.6581      71.8777\n      41.9410      11.7113      74.5226\n      65.5637      9.84036      71.3585\n      65.7140      12.6019      70.5645\n      65.5843      10.2196      71.2495\n      65.4521      7.79074      71.9479\n\\end{verbatim}\nThe file name must have the suffix ``.cat''.\n\n\\subsection{ASCII Catalog to 3D Cube in FITS Format: mr3d\\_cat2fits}\n\\index{mr3d\\_cat2fits}\n\\label{sect_cat2fits}\n\nThe program {\\em mr3d\\_cat2fits} allows the user to convert a catalog\ninto a 3D cube in the FITS format using a $B_3$ spline scaling function.\nWhen using the \\`a trous Wavelet Transform, this step corresponds to\nthe projection in the $V_0$ space, and guarantees that the obtained \nwavelet coefficients are exactly the scalar products of the input \nirregularly spaced data with the wavelet function. Assuming Poisson \nnoise, very robust threshold levels can be derived \nin the wavelet space (program {\\em mr3d\\_phisto}), which can be used for the \ndetection of clusters (program {\\em mr3d\\_psupport}) or for the\nfiltering (program {\\em mr3d\\_pfilter}). See \\cite{starck:book02} for\nmore details.\n\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_cat2fits options catalog cube\\_out\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\itemsep=0.1truecm\n\\item {\\bf  [-B Bin] } \\\\\n Bin gives the resolution.
Default is 1. \n\\item {\\bf  [-p] } \\\\\nDo not interpolate the data. By default, an interpolation by a spline\nof degree 3 is performed. It corresponds to the projection in the \nspace $V_0$ in the wavelet theory.\n\\end{itemize}\n\n\\subsubsection*{Example:}\n\\begin{itemize}\n\\item mr3d\\_cat2fits data.cat cube\\_out.fits \\\\\nConvert a catalog into a 3D fits file.\n\\item mr3d\\_cat2fits -p -B 5 data.cat cube\\_out.fits \\\\\nDitto, but do not use an interpolation, and take a pixel resolution of 5\n(units are the same as in the input catalog).\n\\item mr3d\\_trans -t 3 cube\\_out.fits trans.mr\\\\\nWavelet transform of the catalog. The wavelet coefficients in {\\em trans.mr}\nare exactly the scalar products of the irregularly spaced data with\nthe wavelet function. \n\\end{itemize}\n\n\n\\subsection{Wavelet Histogram Autoconvolution: mr3d\\_phisto}\n\\index{mr3d\\_phisto}\nProgram {\\em mr3d\\_phisto} precomputes a set of tables which \nare used by {\\em mr3d\\_psupport} and\n{\\em mr3d\\_pfilter}. The tables are saved \nin the FITS table format. They allow the estimation of the detection\nlevels for a wavelet coefficient calculated from $n$ events.\nTo derive these thresholds, the wavelet function histogram must be\nautoconvolved $n$ times.\nThe filenames are:\n\\begin{itemize}\n\\item \\_Aba\\_histo.fits: 2D array $H[n, 2049]$, which contains\n$n$ autoconvolutions.   \n\\item \\_histobin.fits: 2D array $B_1[n, 2]$, where $B_1[n,0]$ is the bin\nfor $H[n,*]$ and $B_1[n,1]$ is the number of points where $H[n,*]$ is \ndifferent from zero.\n\\item \\_histobound.fits: 2D array $B_2[n, 2]$, where \n$B_2[n,0]$ and $B_2[n,1]$ are respectively \nthe min and the max of the $n$th histogram.\n\\item \\_histodistrib.fits: 2D array $D[n, 2049]$, where $D[n,*]$ is \nthe repartition function of the histogram distribution $H[n,*]$.\n\\item \\_param.fits: 1D array $P[2]$, where $P[0]$ and $P[1]$\nare respectively the $r$ (rhythm variable) and the $n$ \n(number of calculated histograms) parameters.\n\\end{itemize}\n% h = rim('_Aba_histo.fits')\n% b = rim('_histobin.fits')\n% m = rim('_histobound.fits')\n% x = indgen(2049,/float)\n% x = x * b(10,0) + m(10,0)\n% plot,x,h(10,*)\n% d = rim('_histodistrib.fits')\n% plot, x, d(10,*)\n\n\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_phisto option  \n\\end{center}}\nwhere options are \n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-n number\\_of\\_convolution]} \\\\\nTotal number of calculated histograms. Default is 30\n(the total number of events in the cube must be lower than $2^{n}$).\n\\item {\\bf [-r rhythm]}  \\\\\n Parameter relative to the rhythm of autoconvolution (default=0).\nThe program computes the $2^n$ autoconvolutions of the wavelet histogram. \nIt also computes $2^r$ convolutions between $2^n$ and $2^{n+1}$.\n\\item {\\bf [-d]}  \\\\\nUse all default options.\n\\item {\\bf [-v]}  \\\\\nVerbose.
Default is no.\n\\end{itemize}\n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\item mr3d\\_phisto -d \\\\\nEstimates the autoconvolved histograms, and calculates the threshold levels.\n\\end{itemize}\n\n\n\\subsection{Multiresolution Support Calculation: mr3d\\_psupport}\n\\label{sect_event3D}\n\\index{mr3d\\_psupport}\nProgram \n{\\em mr3d\\_psupport} applies a wavelet transform using the \\`a trous\nalgorithm to 3D data, assuming that the noise follows a Poisson distribution,\nand in the case where we have only a few events per pixel \\cite{starck:book02}.\nData can either be given by an ASCII table of coordinates in real values, or\nby a cube in the FITS format.  \n{\\bf\n\\begin{center}\n USAGE: mr3d\\_psupport options  in\\_cube  out\\_file\n\\end{center}}\nwhere options are \n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-F first\\_detection\\_scale]} \\\\\nFirst scale used for the detection. Default is 1.\n\\item {\\bf [-e minimum\\_of\\_events]}  \\\\\nMinimum number of events for a detection. Default is 5. If a wavelet \ncoefficient has been calculated from fewer events than this minimum\nvalue, it is not considered as significant.\n% \\item {\\bf [-s SignifStructureAnalysis\\_FileName]} \\\\\n% Analyze the detected wavelet coefficients, and write in the file:\n% \\begin{itemize}\n% \\baselineskip=0.4truecm\n% \\item Number of detected structures per scale\n% \\item Percentage of significant wavelet coefficients\n% \\item Mean deviation of shape from sphericity\n% \\item For each detected structure, its volume, its surface, and\n% \\item its deviation of shape from sphericity, \n% \\item its angles, its elongation in the three axis directions\n% \\end{itemize}\n% \\item {\\bf [-t SignifStructureAnalysis\\_FileName]} \\\\\n% Same as -s option, but results are stored in an ASCII table format.\n% The table contains: scale number, structure number, \n% Max\\_x, Max\\_y, Max\\_z, Volume, Surface, Morpho, \n% Angle\\_Theta, Angle\\_Phi, Sigma\\_1, Sigma\\_2, Sigma\\_3. \n\\item {\\bf [-p]}  \\\\\nDetect only positive structures.\n% \\item {\\bf [-f filename\\_filtered\\_data]}  \\\\\n% File with filtered input data.\n\\item {\\bf [-n number\\_of\\_scales]}  \\\\\nNumber of scales used in the multiresolution transform.\nDefault is 6. The maximum number of scales depends on the \ncube size. For a $32 \\times 32 \\times 32$ cube, three scales are allowed (four\nscales for a $64 \\times 64 \\times 64$ cube, \nfive for a $128 \\times 128 \\times 128$ cube, \\dots).\n\\item {\\bf [-E epsilon]}  \\\\\nValue of epsilon used for the threshold (default=1e-3).\n\\item {\\bf [-k]}  \\\\\nSuppress isolated pixels in the support. Default is no.\n\\item {\\bf [-R]}  \\\\\n  Remove the structures near the borders.\n\\item {\\bf [-B bin]}  \\\\\n If the input data is a catalogue, it gives its bin (default=1).\n% \\item {\\bf [-w]]}  \\\\\n% Write the multiscale support to the disk.\n% \\item {\\bf [-S]}  \\\\\n% Read the support already computed.\n\\end{itemize}\n\n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item mr3d\\_phisto -d \\\\\nCompute the set of tables which will be used by {\\em mr3d\\_psupport}.\n\\item mr3d\\_psupport -v simcub32.fits w1   \\\\\nMultiresolution support estimation. \\\\\nCreates the two files\nw1\\_support\\_1.fits and w1\\_support\\_2.fits corresponding\nto the supports of two wavelet scales.
\n\\item mr3d\\_psupport -R -k -E1e-4 -v simcub32.fits w1   \\\\\nDitto, but remove isolated pixels, structures at the border,\nand increase the detection level.\n\\end{itemize}\n\n\\subsection{Data Cube Filtering: mr3d\\_pfilter}\n\\index{mr3d\\_pfilter}\nProgram {\\em mr3d\\_pfilter} filters a cube (as does {\\em mr3d\\_filter})\nassuming that the noise follows a Poisson distribution and in the case \nwhere we have only a few events per pixel.\nData can either be given by a cube or by \nan ASCII table of coordinates in real values.\nAs for {\\em mr3d\\_psupport}, the pre-computed tables must exist (i.e.\\ \nthe program {\\em mr3d\\_phisto} must have been executed).\nIf the ``-c'' option is used, the user gives a second input cube file name.\nThen the multiresolution support is estimated from the first cube\n(given by option ``-a'' or ``-I''), and the second cube is filtered\nusing the multiresolution support of the first.\n{\\bf\n\\begin{center}\n USAGE: mr3d\\_pfilter option cube\\_in cube\\_out\n\\end{center}}\nwhere options are:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-n number\\_of\\_scales]}  \\\\\nNumber of scales used in the multiresolution transform.\nDefault is 6.\n\\item {\\bf [-F first\\_detection\\_scale]} \\\\\nFirst scale used for the detection. Default is 1.\n\\item {\\bf [-E epsilon]}  \\\\\nValue of epsilon used for the threshold (default=1e-3).\n\\item {\\bf [-m minimum\\_of\\_events]}  \\\\\nMinimum number of events for a detection. Default is 5. If a wavelet \ncoefficient has been calculated from fewer events than this minimum\nvalue, it is not considered as significant.\n\\item {\\bf [-i number\\_of\\_iterations]}  \\\\\n Maximum number of iterations. Default is 0.\n If this option is set, then an iterative reconstruction method is used to\n restore the cube.\n\\item {\\bf [-B bin]}  \\\\\n If the input data is a catalogue, it gives its bin (default=1).\n\\item {\\bf [-R]}  \\\\\n  Remove the structures near the borders.\n\\item {\\bf [-C]}  \\\\\nSmoothness constraint. \n\\item {\\bf [-p]}  \\\\\nDetect only positive structures.\n\\item {\\bf [-w]}  \\\\\nWrite the multiscale support to the disk. Each scale of the multiresolution\nsupport is written separately in a FITS file, \nwith name ``x\\_support\\_j.fits'', where ``x'' is the output file name\nand $j$ is the scale number.\n\\item {\\bf [-S]}  \\\\\nRead a multiresolution support already computed. When we want to \napply the filtering several times with different options, the\nmultiresolution support can be calculated the first time and saved on disk\nwith the ``-w'' option; the other runs can then read the\npre-computed multiresolution support and do not have to recalculate it.\n\\item {\\bf [-k]}  \\\\\nSuppress isolated pixels in the support. 
Default is no.\n\\item {\\bf [-K]} \\\\       \nSuppress the last scale.\n\\end{itemize}\n\n\\noindent\n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item mr3d\\_phisto -d \\\\\nCompute the set of tables which will be used by {\\em mr3d\\_pfilter}.\n\\item mr3d\\_pfilter cube\\_in cube\\_out \\\\\nFiltering with all default options.\n\\item mr3d\\_pfilter -R -k -E1e-4 cube\\_in cube\\_out \\\\\nDitto, but remove isolated pixels and structures at the border in\nthe multiresolution support, and increase the detection level.\n\\end{itemize}\n", "meta": {"hexsha": "1d384a61da97c00c9ee0448e0055ee93d151597d", "size": 21997, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_mra/doc_mr3/data_3d.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_mra/doc_mr3/data_3d.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_mra/doc_mr3/data_3d.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6590509666, "max_line_length": 79, "alphanum_fraction": 0.7265990817, "num_tokens": 6707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581049086031, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5792195660965196}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{examples}\n\n\\begin{document}\n\n\\section*{The Riemann curvature tensor}\n\n\\begin{cadabra}\n\n   {a,b,c,d,e,f,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices(position=independent).\n\n   \\partial_{#}::PartialDerivative.\n\n   \\Gamma^{a}_{b c}::Depends(\\partial{#}).\n   \\Gamma^{a}_{b c}::TableauSymmetry(shape={2}, indices={1,2});\n\n   ;::Symbol;  # Suggsted by Kasper as a way to (possibly) make use of ; legal\n               # see https://cadabra.science/qa/473/is-this-legal-syntax?show=478\n               # this code works with and without this trick\n\n   # generic rule for first two covariant derivs of a downstairs-vector\n\n   deriv1 := A?_{a ; b} -> \\partial_{b}{A?_{a}} - \\Gamma^{c}_{a b} A?_{c}.\n   deriv2 := A?_{a ; b ; c} -> \\partial_{c}{A?_{a ; b}}\n                             - \\Gamma^{d}_{a c} A?_{d ; b}\n                             - \\Gamma^{d}_{b c} A?_{a ; d}.\n\n   substitute (deriv2,deriv1)     # cdb (ex01, deriv2)\n\n   Mabc := M_{a ; b ; c}.         # cdb (ex02, Mabc)\n\n   substitute (Mabc,deriv2)       # cdb (ex03, Mabc)\n\n   distribute   (Mabc)            # cdb (ex04, Mabc)\n   product_rule (Mabc)            # cdb (ex05, Mabc)\n\n   Macb := M_{a ; c ; b}.         # cdb (ex06, Macb)\n\n   substitute (Macb,deriv2)       # cdb (ex07, Macb)\n\n   distribute   (Macb)            # cdb (ex08, Macb)\n   product_rule (Macb)            # cdb (ex09, Macb)\n\n   diff := @(Mabc) - @(Macb).     # cdb (ex10, diff)\n\n   sort_product   (diff)          # cdb (ex11, diff)\n   rename_dummies (diff)          # cdb (ex12, diff)\n   canonicalise   (diff)          # cdb (ex13, diff)\n   sort_sum       (diff)          # cdb (ex14, diff)\n   factor_out     (diff,$M_{a?}$) # cdb (ex15, diff)\n\n\\end{cadabra}\n\n\\begin{minipage}[t]{0.65\\textwidth}\n\\begin{gather*}\n   \\cdb{ex01}\\\\\n   M_{a;bc} = \\cdb{ex02} = \\cdb{ex03}\\\\\n   M_{a;cb} = \\cdb{ex06} = \\cdb{ex07}\\\\[10pt]\n   M_{a;bc} - M_{a;cb} = \\cdb{ex15}\n\\end{gather*}\n\\end{minipage}\n\\hskip 0.5cm\n\\lower16pt\\hbox{%\n\\begin{minipage}[t]{0.35\\textwidth}\n\\begin{latex}\n   \\begin{gather*}\n      \\cdb{ex01}\\\\\n      M_{a;bc} = \\cdb{ex02} = \\cdb{ex03}\\\\\n      M_{a;cb} = \\cdb{ex06} = \\cdb{ex07}\\\\[10pt]\n      M_{a;bc} - M_{a;cb} = \\cdb{ex15}\n   \\end{gather*}\n\\end{latex}\n\\end{minipage}}\n\n\\end{document}\n", "meta": {"hexsha": "8744a67b3218711016ccda69822c30aa348b2030", "size": 2250, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cadabra/examples/example-04.tex", "max_stars_repo_name": "leo-brewin/hybrid-latex", "max_stars_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2018-10-12T06:31:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T23:16:08.000Z", "max_issues_repo_path": "cadabra/examples/example-04.tex", "max_issues_repo_name": "leo-brewin/hybrid-latex", "max_issues_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cadabra/examples/example-04.tex", "max_forks_repo_name": "leo-brewin/hybrid-latex", "max_forks_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-27T03:29:40.000Z", 
"max_forks_repo_forks_event_max_datetime": "2022-03-30T17:17:18.000Z", "avg_line_length": 29.6052631579, "max_line_length": 81, "alphanum_fraction": 0.5404444444, "num_tokens": 846, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5792195625691517}}
{"text": "\\chapter[\\texorpdfstring{UN$^\\infty$ in Weakly Orthogonal Systems}{UN\n  in Weakly Orthogonal Systems}]{\\texorpdfstring{Unique Normal Forms\n    in\\\\Weakly Orthogonal Systems}{Unique Normal Forms in Weakly\n    Orthogonal Systems}}\\label{chap:unwo}\n\nFinitary orthogonal term rewriting systems have unique normal forms.\nIn fact, weak orthogonality is enough to establish this\nproperty for finitary systems \\citep[Chapter 4]{terese-03}.\nTo what extent can these results be lifted to infinitary rewriting?\n\nIn the infinitary setting, orthogonal TRSs exhibit the infinitary\nunique normal forms (UN$^\\infty$) property\n\\citep{kennaway-95,klop-de-vrijer-05}. We might expect this property\nto generalise to weakly orthogonal systems. After all, the motivation\nfor allowing trivial critical pairs in these systems is that,\nintuitively, they are witnesses of harmless overlap. However, this\nintuition turns out to be unjust for the infinitary case.\n\n\n\\section{A Counterexample}\\label{sec:counterexample}\n\nWe describe a simple counterexample showing that weak orthogonality\ndoes not imply the UN$^\\infty$ property \\citep{endrullis-10}.\n\nWe work in a signature with unary function symbols $D$ and\n$U$.\\footnote{We can think of $D$ and $U$ as `down' and `up'. The\n  original formulation of this TRS uses $P$ and $S$ (`predecessor' and\n  `successor'), but to avoid notational conflicts with the\n  \\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n  constructor for\n  \\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\\coqdocinductive{nat}}\n  in \\Coq, we proceed with this modification.}\nIn the notation of terms, we omit the brackets around arguments and\nassume right-associativity of function symbol application,\ne.g.\\ writing $DUx$ for $D(U(x))$. A notation for finite repetitions\nof a function symbol $f$ terminated by a term $t$ is defined by\n\\begin{inparaenum}[(i)]\n\\item $f^0 t = t$ and\n\\item $f^{n+1} = ff^nt$.\n\\end{inparaenum}\nThe infinite nesting $fff \\ldots$ of $f$ is written $f^\\omega$.\n\nConsider the TRS consisting of the two left-linear rewrite rules\n$\\rho_1$ and $\\rho_2$:\n\\begin{align*}\n  \\rho_1 \\, : \\, DUx \\to x \\qquad \\qquad \\qquad\n  \\rho_2 \\, : \\, UDx \\to x\n\\end{align*}\nThis system has two critical pairs $\\langle Dx, Dx \\rangle$ and\n$\\langle Ux, Ux \\rangle$, both of which are trivial, establishing\nweak orthogonality. The infinite term $\\du = D^1 U^2 D^3 U^4 \\ldots$\nhas two normal forms. It rewrites to $U^\\omega$ in $\\omega$ many\n$\\rho_1$-steps and to $D^\\omega$ in $\\omega$ many $\\rho_2$-steps.\nThis contradicts UN$^\\rewrites$ and therefore also UN$^\\infty$.\n% give the overlap from which the critical pairs come?\n\nOther interesting properties of this TRS (e.g.\\ weak normalisation is\nnot preserved under rewriting) and a translation to the infinitary\n$\\lambda \\beta \\eta$-calculus are discussed by \\citet{endrullis-10}.\n\n\n\\subsection{\\texorpdfstring{Rewriting $\\du$ to\n    $U^\\omega$}{Rewriting DUUDDD... to\n    UUU...}}\\label{sub:counterexample}\n\nWe show briefly what rewriting $\\du$ to $U^\\omega$ amounts\nto. 
Rewriting $\\du$ to $D^\\omega$ is done in a similar way.\nAn obvious way to define $\\du$ by corecursion is via auxiliary terms\n$\\du'_n$ parameterised by $n$ as follows:\n\\begin{align*}\n  \\du = \\du'_0 \\qquad \\qquad \\qquad\n  \\du'_n = U^n D^{n + 1} \\du'_{n + 2}\n\\end{align*}\nBut a more useful definition for our present purposes, and the one we\nstick with, is the slight reformulation: % TODO: why more useful?\n\\begin{align*}\n  \\du = \\du'_0 \\qquad \\qquad \\qquad\n  \\du'_n = D^{2 n + 1} U^{2 n + 2} \\du'_{n + 1}\n\\end{align*}\nFor any term $t$ and natural numbers $n,m$ we have $U^n D^{m+1}\nU^{m+1} t \\rightarrow_{\\rho_1} U^n D^m U^m t$ and thus $U^n D^m U^m t\n\\rewrites U^n t$ by iterating $m$ such steps. Instantiating\n$m$ with $2 n + 1$ and $t$ with $U \\du'_{n + 1}$, we obtain\n%$S^n P^{2 n + 1} S^{2 n + 1} S \\du'_{n + 1} \\equiv S^n \\du'_n$\n$U^n \\du'_n \\rewrites U^{n+1} \\du'_{n + 1}$ for any $n$.\nConcatenating these sequences, iterating $n$ from $0$ onwards, we\narrive at $\\du \\rewrites U^\\omega$.\n% TODO: why not use definitions:\n% vu(n) = U^n vd(n+1)\n% vd(n) = D^n vu(n+1)\n\n\n\\section{The Counterexample in \\Coq}\n\nWe implement the counterexample from Section~\\ref{sec:counterexample}\nusing the \\Coq development described in\nChapter~\\ref{chap:implementation}.\n\nThe rewrite rules $\\rho_1$ and $\\rho_2$ are straightforwardly defined\nand shown to be left-linear. By a simple proof we obtain that all\ncritical pairs are trivial and hence that the TRS is weakly\northogonal.\n\\begin{singlespace}\n\\begin{coqdoccode}\n%\\coqdocnoindent\n%\\coqdockw{Definition}\n%\\coqdef{ExampleUNWO.UNWOtrs}{UNWO\\_trs}{\\coqdocdefinition{$\\mathcal{R}$}}\n%:=\n%\\coqdocdefinition{$\\rho_1$} ::\n%\\coqdocdefinition{$\\rho_2$} ::\n%\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nil}{\\coqdocconstructor{nil}}.\\coqdoceol\n%\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdoclemma{wo$_\\mathcal{R}$} :\n\\coqref{Rewriting.weaklyorthogonal}{\\coqdocdefinition{weakly\\_orthogonal}}\n%\\coqref{ExampleUNWO.UNWOtrs}{\\coqdocdefinition{$\\mathcal{R}$}}.\\coqdoceol\n\\coqdocdefinition{$\\mathcal{R}$}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\nWe introduce the notation \\begin{coqdoccode}\\coqdocvariable{f} @\n  \\coqdocvariable{t}\\end{coqdoccode} to mean\n\\begin{coqdoccode}\\coqref{Term.Fun}{\\coqdocconstructor{Fun}}\n  \\coqdocvariable{f} (\\coqdocdefinition{vcons} \\coqdocvariable{t}\n  (\\coqdocdefinition{vnil}\n  \\coqref{Term.term}{\\coqdocinductive{term}}))\\end{coqdoccode}.\nFor brevity, in the following discussion we focus on the function symbol $U$\nand omit analoguous definitions using the function symbol $D$.\nThe infinite term $U^\\omega$ is defined\nby corecursion and finite repetitions $U^n t$ are defined by recursion\n(and are assumed to generalise to contexts with the same notation).\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{CoFixpoint}\n\\coqdef{ExampleUNWO.repeatU}{repeat\\_U}{\\coqdocdefinition{U$^\\omega$}}\n: \\coqref{Term.term}{\\coqdocinductive{term}} :=\n\\coqdocconstructor{U} @\n\\coqref{ExampleUNWO.repeatU}{\\coqdocdefinition{U$^\\omega$}}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Fixpoint}\n\\coqdef{ExampleUNWO.Unt}{Unt}{\\coqdocdefinition{U}}$^\\coqdocvar{n}$\n\\coqdocvar{t} :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{match} \\coqdocvariable{n} 
\\coqdockw{with}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\\coqdocconstructor{O}}\n\\ensuremath{\\Rightarrow} \\coqdocvariable{t}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvar{n} \\ensuremath{\\Rightarrow}\n\\coqdocconstructor{U} @\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocvariable{t})\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{end}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\nUnfortunately, $\\du$ is not as easily defined. Although clearly\nproductive, direct translations of the corecursive definitions in\nSubsection~\\ref{sub:counterexample} do not satisfy \\Coq's guardedness\ncondition (see also Section~\\ref{sec:guardedness}). The conclusion of\na \\emph{trial and error} approach is that we must use anonymous cofix\nconstructions. The definition we proceed with is the following:\n% wording 'trial and error approach' is not so nice\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{CoFixpoint}\n\\coqdef{ExampleUNWO.psi'}{psi'}{\\coqdocdefinition{$\\du'$}} \\coqdocvar{n}\n: \\coqref{Term.term}{\\coqdocinductive{term}} :=\\coqdoceol\n\\coqdocindent{1.00em}\n(\\coqdocvar{cofix} \\coqdocvar{Ds} (\\coqdocvar{d} :\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\\coqdocinductive{nat}})\n:=\\coqdoceol\n\\coqdocindent{2.00em}\n\\coqdockw{match} \\coqdocvariable{d} \\coqdockw{with}\\coqdoceol\n\\coqdocindent{2.00em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\\coqdocconstructor{O}}\n\\ensuremath{\\Rightarrow} \\coqdocconstructor{D}\n@ (\\coqdocvar{cofix} \\coqdocvar{Us} (\\coqdocvar{u} :\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\\coqdocinductive{nat}})\n:=\\coqdoceol\n\\coqdocindent{7.50em}\n\\coqdockw{match} \\coqdocvariable{u} \\coqdockw{with}\\coqdoceol\n\\coqdocindent{7.50em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{O}{\\coqdocconstructor{O}}\n\\ensuremath{\\Rightarrow}\n\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}}\n(\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{n})\\coqdoceol\n\\coqdocindent{7.50em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvar{u} \\ensuremath{\\Rightarrow}\n\\coqdocconstructor{U} @\n\\coqdocconstructor{U} @ (\\coqdocvariable{Us}\n\\coqdocvariable{u})\\coqdoceol\n\\coqdocindent{7.50em}\n\\coqdockw{end})\n(\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{n})\\coqdoceol\n\\coqdocindent{2.00em}\n\\ensuremath{|}\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvar{d} \\ensuremath{\\Rightarrow}\n\\coqdocconstructor{D} @\n\\coqdocconstructor{D} @\n(\\coqdocvariable{Ds} \\coqdocvariable{d})\\coqdoceol\n\\coqdocindent{2.00em}\n\\coqdockw{end}) \\coqdocvariable{n}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Definition}\n\\coqdef{ExampleUNWO.psi}{psi}{\\coqdocdefinition{$\\du$}} :=\n\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}} 0.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\n\nWe now prove that $U^\\omega$ and $D^\\omega$ are (distinct) normal\nforms. 
\nWe now prove that $U^\\omega$ and $D^\\omega$ are (distinct) normal\nforms. The proof is tedious: it essentially consists of analysing\nwhere a redex could occur in these terms, and showing that no such\noccurrence is possible.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Lemma} \\coqdoclemma{nf$_{\\text{U}^\\omega}$} :\n\\coqref{Rewriting.normalform}{\\coqdocdefinition{normal\\_form}}\n(\\coqdocvar{system} :=\n\\coqdocdefinition{$\\mathcal{R}$})\n\\coqref{ExampleUNWO.repeatU}{\\coqdocdefinition{U$^\\omega$}}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma} \\coqdoclemma{nf$_{\\text{D}^\\omega}$} :\n\\coqdocdefinition{normal\\_form}\n(\\coqdocvar{system} :=\n\\coqdocdefinition{$\\mathcal{R}$})\n\\coqdocdefinition{D$^\\omega$}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdoclemma{neq$^{\\text{U}^\\omega}_{\\text{D}^\\omega}$} :\n\\ensuremath{\\lnot}\n\\coqref{ExampleUNWO.repeatU}{\\coqdocdefinition{U$^\\omega$}}\n\\coqref{TermEquality.termbis}{$\\bis$}\n\\coqdocdefinition{D$^\\omega$}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\n\nConstructing a rewrite sequence from $\\du$ to $U^\\omega$ is done in\nmuch the same way as described in\nSubsection~\\ref{sub:counterexample}. First, we define the parameterised\nstep that is used in the rewrite sequence. It eliminates one pair of $D,\nU$ constructors in a term of the form $U^n D^{m+1} U^{m+1} t$. The\nomitted argument of the \\coqref{Rewriting.Step}{\\coqdocconstructor{Step}}\nconstructor (denoted by \\coqdoclemma{\\_}) is a proof of $\\rho_1 \\in\n\\mathcal{R}$. Note that \\coqdocvariable{x} is the variable that is\nused in both rewrite rules. The two auxiliary \\coqdoclemma{fact}\nlemmas record that the source and target of such a step are bisimilar\nto $U^n D^{m+1} U^{m+1} t$ and $U^n D^m U^m t$, respectively.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Definition}\n\\coqdef{ExampleUNWO.sigma}{sigma}{\\coqdocdefinition{$\\sigma$}}\n\\coqdocvar{t} (\\coqdocvar{y} :\n\\coqdocdefinition{X}) :\n\\coqref{Term.term}{\\coqdocinductive{term}} :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{match} \\coqdocdefinition{beq\\_var} \\coqdocvariable{y} \\coqdocvariable{x} \\coqdockw{with}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|} \\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{true}{\\coqdocconstructor{true}} \\ensuremath{\\Rightarrow}\n\\coqdocvariable{t}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|} \\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{false}{\\coqdocconstructor{false}} \\ensuremath{\\Rightarrow}\n\\coqref{Term.Var}{\\coqdocconstructor{Var}}\n\\coqdocvariable{y}\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{end}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdef{ExampleUNWO.facttermbisUmDSnUSnt}{fact\\_term\\_bis\\_UmDSnUSnt}{\\coqdoclemma{fact$_\\pi^\\text{source}$}}\n:\n\\ensuremath{\\forall} (\\coqdocvar{n} \\coqdocvar{m} :\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\\coqdocinductive{nat}})\n(\\coqdocvar{t} :\n\\coqref{Term.term}{\\coqdocinductive{term}}),\\coqdoceol\n\\coqdocindent{1.00em}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$\n$\\Box$)[\\coqref{Substitution.substitute}{\\coqdocdefinition{substitute}}\n  (\\coqref{ExampleUNWO.sigma}{\\coqdocdefinition{$\\sigma$}}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$\n    \\coqdocvariable{t})) (\\coqref{Rewriting.lhs}{\\coqdocprojection{lhs}}\n\\coqdocdefinition{$\\rho_1$})]
\\coqref{TermEquality.termbis}{$\\bis$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^{\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}} \\coqdocvariable{m}}$\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^{\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{m}}$\n\\coqdocvariable{t}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdef{ExampleUNWO.facttermbisUmDnUnt}{fact\\_term\\_bis\\_UmDnUnt}{\\coqdoclemma{fact$_\\pi^\\text{target}$}}\n:\n\\ensuremath{\\forall} (\\coqdocvar{n} \\coqdocvar{m} :\n\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{nat}{\\coqdocinductive{nat}})\n(\\coqdocvar{t} :\n\\coqref{Term.term}{\\coqdocinductive{term}}),\\coqdoceol\n\\coqdocindent{1.00em}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$\n$\\Box$)[\\coqref{Substitution.substitute}{\\coqdocdefinition{substitute}}\n  (\\coqref{ExampleUNWO.sigma}{\\coqdocdefinition{$\\sigma$}}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$\n    \\coqdocvariable{t})) (\\coqref{Rewriting.rhs}{\\coqdocprojection{rhs}}\n\\coqdocdefinition{$\\rho_1$})] \\coqref{TermEquality.termbis}{$\\bis$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$\n\\coqdocvariable{t}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Definition}\n\\coqdef{ExampleUNWO.pi}{pi}{\\coqdocdefinition{$\\pi$}}\n\\coqdocvar{n} \\coqdocvar{m} \\coqdocvar{t} :\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^{\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}} \\coqdocvariable{m}}$\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^{\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{m}}$\n\\coqdocvariable{t} \\coqref{Rewriting.step}{$\\rightarrow_\\mathcal{R}$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$\n\\coqdocvariable{t} :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqref{Rewriting.Step}{\\coqdocconstructor{Step}}\n\\coqdocdefinition{$\\rho_1$}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$ $\\Box$)\n(\\coqref{ExampleUNWO.sigma}{\\coqdocdefinition{$\\sigma$}}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$ \\coqdocvariable{t}))\n\\coqdoclemma{\\_}\n(\\coqref{ExampleUNWO.facttermbisUmDSnUSnt}{\\coqdoclemma{fact$_\\pi^\\text{source}$}}\n\\coqdocvariable{n} \\coqdocvariable{m} \\coqdocvariable{t})\n(\\coqref{ExampleUNWO.facttermbisUmDnUnt}{\\coqdoclemma{fact$_\\pi^\\text{target}$}}\n\\coqdocvariable{n} \\coqdocvariable{m} \\coqdocvariable{t}).\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\nGeneralising these rewrite steps\n\\coqref{ExampleUNWO.pi}{\\coqdocdefinition{$\\pi$}}, we construct\nthe rewrite sequences\n\\coqref{ExampleUNWO.phia}{\\coqdocdefinition{$\\varphi_a$}}. 
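\nConcretely, for $m = 2$, the sequence $\\varphi_a\\;n\\;2\\;t$ consists of\nthe two steps\n\\[ U^n D^2 U^2 t \\rightarrow_{\\rho_1} U^n D U t \\rightarrow_{\\rho_1} U^n t, \\]\nwitnessed by $\\pi\\;n\\;1\\;t$ and $\\pi\\;n\\;0\\;t$ respectively.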
In their\nrecursive definition, the \\coqdocdefinition{snoc}\nfunction is used to\nprepend \\begin{coqdoccode}(\\coqref{ExampleUNWO.pi}{\\coqdocdefinition{$\\pi$}}\n\\coqdocvariable{n} \\coqdocvariable{m}\n\\coqdocvariable{t})\\end{coqdoccode} to\n\\begin{coqdoccode}(\\coqref{ExampleUNWO.phia}{\\coqdocdefinition{$\\varphi_a$}}\n\\coqdocvariable{n} \\coqdocvariable{m}\n\\coqdocvariable{t})\\end{coqdoccode}. Doing some arithmetic, we obtain\nthat these rewrite sequences can be used to define rewrite sequences\n\\coqref{ExampleUNWO.phib}{\\coqdocdefinition{$\\varphi_b$}} of a more\nuseful type.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Fixpoint}\n\\coqdef{ExampleUNWO.phia}{phia}{\\coqdocdefinition{$\\varphi_a$}}\n\\coqdocvar{n} \\coqdocvar{m} \\coqdocvar{t} :\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocdefinition{D}$^\\coqdocvariable{m}$\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{m}$\n\\coqdocvariable{t}\n\\coqref{Rewriting.sequence}{$\\rewrites_\\mathcal{R}$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocvariable{t} :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{match} \\coqdocvariable{m} \\coqdockw{with}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqdocconstructor{O}\n\\ensuremath{\\Rightarrow}\n\\coqref{Rewriting.Nil}{\\coqdocconstructor{Nil}}\n(\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n\\coqdocvariable{t})\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqdocconstructor{S}\n\\coqdocvar{m} \\ensuremath{\\Rightarrow}\n\\coqref{Rewriting.snoc}{\\coqdocdefinition{snoc}}\n(\\coqref{ExampleUNWO.pi}{\\coqdocdefinition{$\\pi$}}\n\\coqdocvariable{n} \\coqdocvariable{m} \\coqdocvariable{t})\n(\\coqref{ExampleUNWO.phia}{\\coqdocdefinition{$\\varphi_a$}}\n\\coqdocvariable{n} \\coqdocvariable{m} \\coqdocvariable{t})\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{end}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Definition}\n\\coqdef{ExampleUNWO.phib}{phib}{\\coqdocdefinition{$\\varphi_b$}}\n\\coqdocvar{n} : \\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n(\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}}\n\\coqdocvariable{n}) \\coqref{Rewriting.sequence}{$\\rewrites_\\mathcal{R}$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^{\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}} \\coqdocvariable{n}}$\n(\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}}\n(\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{n})) :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqref{ExampleUNWO.phia}{\\coqdocdefinition{$\\varphi_a$}}\n\\coqdocvariable{n} (\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n(2\n$\\times$ \\coqdocvariable{n})) (\\coqdocdefinition{U} @\n\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}} (\\coqexternalref{http://coq.inria.fr/stdlib/Coq.Init.Datatypes}{S}{\\coqdocconstructor{S}}\n\\coqdocvariable{n})).\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\nThe index $S\\ (2 \\times n)$ in the definition of $\\varphi_b$ is\ncorrect: taking $m = 2 n + 1$ and $t = U \\du'_{n + 1}$, the source\n$U^n D^m U^m t$ of $\\varphi_a$ is precisely $U^n D^{2 n + 1}\nU^{2 n + 2} \\du'_{n + 1} \\equiv U^n \\du'_n$, and its target $U^n t$ is\n$U^{n + 1} \\du'_{n + 1}$.\nWe concatenate all rewrite sequences\n\\coqref{ExampleUNWO.phib}{\\coqdocdefinition{$\\varphi_b$}} to construct\nrewrite sequences from $\\du$ to a term that is equal to $U^\\omega$ up\nto any given depth.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Fixpoint}\n\\coqdef{ExampleUNWO.phic}{phic}{\\coqdocdefinition{$\\varphi_c$}}\n\\coqdocvar{n} : \\coqref{ExampleUNWO.psi}{\\coqdocdefinition{$\\du$}}\n\\coqref{Rewriting.sequence}{$\\rewrites_\\mathcal{R}$}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n(\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}}\n\\coqdocvariable{n}) :=\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{match} \\coqdocvariable{n} \\coqdockw{with}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqdocconstructor{O}\n\\ensuremath{\\Rightarrow}\n\\coqref{Rewriting.Nil}{\\coqdocconstructor{Nil}}\n\\coqref{ExampleUNWO.psi}{\\coqdocdefinition{$\\du$}}\\coqdoceol\n\\coqdocindent{1.00em}\n\\ensuremath{|}\n\\coqdocconstructor{S}\n\\coqdocvar{n} \\ensuremath{\\Rightarrow}\n\\coqref{Rewriting.append}{\\coqdocdefinition{concat}}\n(\\coqref{ExampleUNWO.phic}{\\coqdocdefinition{$\\varphi_c$}}\n\\coqdocvariable{n})\n(\\coqref{ExampleUNWO.phib}{\\coqdocdefinition{$\\varphi_b$}}\n\\coqdocvariable{n})\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdockw{end}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\n\nThe final rewrite sequence\n\\coqref{ExampleUNWO.phi}{\\coqdocdefinition{$\\varphi$}} is obtained by combining\n\\coqref{ExampleUNWO.phic}{\\coqdocdefinition{$\\varphi_c$}} with a proof\nthat the target terms converge to $U^\\omega$.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdef{ExampleUNWO.conv}{conv}{\\coqdoclemma{conv$_{\\varphi_c}$}}\n: \\coqref{Rewriting.converges}{\\coqdocdefinition{converges}}\n(\\coqdockw{fun} \\coqdocvar{n} \\ensuremath{\\Rightarrow}\n\\coqref{ExampleUNWO.Unt}{\\coqdocdefinition{U}}$^\\coqdocvariable{n}$\n(\\coqref{ExampleUNWO.psi'}{\\coqdocdefinition{$\\du'$}}\n\\coqdocvariable{n}))\n\\coqref{ExampleUNWO.repeatU}{\\coqdocdefinition{U$^\\omega$}}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Definition}\n\\coqdef{ExampleUNWO.phi}{phi}{\\coqdocdefinition{$\\varphi$}}\n:
\\coqref{ExampleUNWO.psi}{\\coqdocdefinition{$\\du$}}\n\\coqref{Rewriting.sequence}{$\\rewrites_\\mathcal{R}$}\n\\coqref{ExampleUNWO.repeatU}{\\coqdocdefinition{U$^\\omega$}} :=\n\\coqref{Rewriting.Lim}{\\coqdocconstructor{Lim}}\n\\coqref{ExampleUNWO.phic}{\\coqdocdefinition{$\\varphi_c$}}\n\\coqref{ExampleUNWO.conv}{\\coqdoclemma{conv$_{\\varphi_c}$}}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdoclemma{wf$_\\varphi$}\n: \\coqref{Rewriting.wf}{\\coqdocdefinition{wf}}\n\\coqref{ExampleUNWO.phi}{\\coqdocdefinition{$\\varphi$}}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\n\nWe can prove $\\du \\rewrites D^\\omega$ in a similar way and\nconclude by proving our main theorem.\n\\begin{singlespace}\n\\begin{coqdoccode}\n\\coqdocnoindent\n\\coqdockw{Lemma}\n\\coqdoclemma{no\\_un$_\\mathcal{R}$}\n: \\ensuremath{\\lnot}\n\\coqref{Rewriting.uniquenormalforms}{\\coqdocdefinition{unique\\_normal\\_forms}}\n\\coqdocdefinition{$\\mathcal{R}$}.\\coqdoceol\n\\coqdocemptyline\n\\coqdocnoindent\n\\coqdockw{Theorem} \\coqdoclemma{no\\_un\\_wo}\n: \\ensuremath{\\lnot} \\ensuremath{\\forall} \\coqdocvar{F} \\coqdocvar{X}\n\\coqdocvar{$\\mathcal{R}$},\\coqdoceol\n\\coqdocindent{1.00em}\n\\coqdocdefinition{weakly\\_orthogonal}\n(\\coqdocvar{F} := \\coqdocvariable{F}) (\\coqdocvar{X} :=\n\\coqdocvariable{X}) \\coqdocvariable{$\\mathcal{R}$}\n\\ensuremath{\\rightarrow}\n\\coqdocdefinition{unique\\_normal\\_forms}\n\\coqdocvariable{$\\mathcal{R}$}.\\coqdoceol\n\\end{coqdoccode}\n\\end{singlespace}\n", "meta": {"hexsha": "27c1308d0699a54c777570a9b247a6fb599d7773", "size": 22639, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "vu/master-project/unwo.tex", "max_stars_repo_name": "martijnvermaat/documents", "max_stars_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-28T14:38:06.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-28T14:38:06.000Z", "max_issues_repo_path": "vu/master-project/unwo.tex", "max_issues_repo_name": "martijnvermaat/documents", "max_issues_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "vu/master-project/unwo.tex", "max_forks_repo_name": "martijnvermaat/documents", "max_forks_repo_head_hexsha": "42483b7c4bf94ed708e2893c3ea961d025a10b5e", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6204238921, "max_line_length": 158, "alphanum_fraction": 0.7564821768, "num_tokens": 8446, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7956580927949807, "lm_q1q2_score": 0.5792195478866763}}
{"text": "\\chapter{Vector spaces}\nThis is a pretty light chapter.\nThe point of it is to define what a vector space and a basis are.\nThese are intuitive concepts that you may already know.\n\n\\section{The definitions of a ring and field}\n\\prototype{$\\ZZ$, $\\RR$, and $\\CC$ are rings; the latter two are fields.}\n\nI'll very informally define a ring/field here,\nin case you skipped the earlier chapter.\n\\begin{itemize}\n\t\\ii A \\textbf{ring} is a structure with a \\emph{commutative}\n\taddition and multiplication, as well as subtraction, like $\\ZZ$.\n\tIt also has an additive identity $0$ and multiplicative identity $1$.\n\n\t\\ii If the multiplication is invertible like in $\\RR$ or $\\CC$,\n\t(meaning $\\frac 1x$ makes sense for any $x \\neq 0$),\n\tthen the ring is called a \\textbf{field}.\n\\end{itemize}\nIn fact, if you replace ``field'' by ``$\\RR$'' everywhere in what follows,\nyou probably won't lose much.\nIt's customary to use the letter $R$ for rings, and $k$ or $K$ for fields.\n\nFinally, in case you skipped the chapter on groups, I should also mention:\n\\begin{itemize}\n\t\\ii An \\textbf{additive abelian group} is a structure\n\twith a commutative addition, as well as subtraction,\n\tplus an additive identity $0$.\n\tIt doesn't have to have multiplication.\n\tA good example is $\\RR^3$ (with addition componentwise).\n\\end{itemize}\n\n\\section{Modules and vector spaces}\n\\prototype{Polynomials of degree at most $n$.}\nYou intuitively know already that $\\RR^n$ is a ``vector space'':\nits elements can be added together,\nand there's some scaling by real numbers.\nLet's develop this more generally.\n\nFix a commutative ring $R$.\nThen informally,\n\\begin{moral}\n\tAn $R$-module is any structure where you can add two elements\n\tand scale by elements of $R$.\n\\end{moral}\n% You can think of the $R$-module as consisting of soldiers\n% being commanded by the ring $R$.\nMoreover, a \\vocab{vector space} is just a module whose commanding ring\nis actually a field.\nI'll give you the full definition in a moment,\nbut first, examples\\dots\n\n\\begin{example}\n\t[Quadratic polynomials, aka my favorite example]\n\tMy favorite example of an $\\RR$-vector space is the\n\tset of polynomials of degree at most two, namely\n\t\\[ \\left\\{ ax^2+bx+c \\mid a,b,c \\in \\RR \\right\\}. \\]\n\tIndeed, you can add any two quadratics, and multiply by constants.\n\tYou can't multiply two quadratics to get a quadratic,\n\tbut that's irrelevant -- in a vector space there need not\n\tbe a notion of multiplying two vectors together.\n\n\tIn a sense we'll define later, this vector space\n\thas dimension $3$ (as expected!).\n\t% But I hope you can see why this is kind of true!\n\t\\label{example:quadratic_vector_space}\n\\end{example}\n\\begin{example}[All polynomials]\n\tThe set of \\emph{all} polynomials with real coefficients is an\n\t$\\RR$-vector space, because you can \\emph{add any two polynomials}\n\tand \\emph{scale by constants}.\n\\end{example}\n\n\\begin{example}\n\t[Euclidean space]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The complex numbers\n\t\t\\[ \\left\\{ a+bi \\mid a,b \\in \\RR \\right\\} \\]\n\t\tform a real vector space. As we'll see later,\n\t\tit has ``dimension $2$''.\n\t\t\\ii The real numbers $\\RR$ form a real vector space of dimension $1$.\n\t\t\\ii The set of 3D vectors\n\t\t\\[ \\left\\{ (x,y,z) \\mid x,y,z \\in \\RR \\right\\} \\]\n\t\tforms a real vector space, because you can add any two triples\n\t\tcomponent-wise. 
Again, we'll later explain\n\t\twhy it has ``dimension $3$''.\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{example}\n\t[More examples of vector spaces]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The set \\[ \\QQ[\\sqrt 2] = \\left\\{ a + b \\sqrt 2 \\mid a, b \\in \\QQ \\right\\} \\]\n\t\thas a structure of a $\\QQ$-vector space in the obvious fashion:\n\t\tone can add any two elements, and scale by rational numbers.\n\t\t(It is not a real vector space -- why?)\n\t\t\\ii The set \\[ \\left\\{ (x,y,z) \\mid x+y+z = 0 \\text{ and } x,y,z \\in \\RR \\right\\} \\]\n\t\tis a $2$-dimensional real vector space.\n\t\t\\ii The set of all functions $f : \\RR \\to \\RR$ is also a real vector space\n\t\t(since the notions $f+g$ and $c \\cdot f$ both make sense for $c \\in \\RR$).\n\t\\end{enumerate}\n\\end{example}\n\nNow let me write the actual rules for how this multiplication behaves.\n\\begin{definition}\n\tLet $R$ be a commutative ring.\n\tAn $R$-\\vocab{module} starts with an additive abelian group $M = (M,+)$\n\twhose identity is denoted $0 = 0_M$.\n\tWe additionally specify a left multiplication by elements of $R$.\n\tThis multiplication must satisfy the following properties\n\tfor $r, r_1, r_2 \\in R$ and $m, m_1, m_2 \\in M$:\n\t\\begin{enumerate}[(i)]\n\t\t\\ii $r_1 \\cdot (r_2 \\cdot m) = (r_1r_2) \\cdot m$.\n\t\t\\ii Multiplication is distributive, meaning\n\t\t\\[ (r_1+r_2) \\cdot m = r_1 \\cdot m + r_2 \\cdot m\n\t\t\t\\text{ and }\n\t\t\tr \\cdot (m_1 + m_2) = r \\cdot m_1 + r \\cdot m_2. \\]\n\t\t\\ii $1_R \\cdot m = m$.\n\t\t\\ii $0_R \\cdot m = 0_M$.\n\t\t(This is actually extraneous;\n\t\tone can deduce it from the first three.)\n\t\\end{enumerate}\n\tIf $R$ is a field we say $M$ is an $R$-\\vocab{vector space};\n\tits elements are called \\vocab{vectors}\n\tand the members of $R$ are called \\vocab{scalars}.\n\\end{definition}\n\n\\begin{abuse}\n\tIn the above, we're using the same symbol $+$ for the addition of $M$\n\tand the addition of $R$.\n\tSorry about that, but it's kind of hard to avoid, and the point\n\tof the axioms is that these additions should be related.\n\tI'll try to remember to put $r \\cdot m$ for the multiplication of the module\n\tand $r_1r_2$ for the multiplication of $R$.\n\\end{abuse}\n\n\\begin{ques}\n\tIn \\Cref{example:quadratic_vector_space},\n\tI was careful to say ``degree at most $2$'' instead of ``degree $2$''.\n\tWhat's the reason for this?\n\tIn other words, why is\n\t\\[ \\left\\{ ax^2 + bx + c \\mid a,b,c \\in \\RR, a \\neq 0  \\right\\} \\]\n\tnot an $\\RR$-vector space?\n\\end{ques}\n\nA couple less intuitive but somewhat important examples\\dots\n\\begin{example}[Abelian groups are $\\ZZ$-modules]\n\t(Skip this example if you're not comfortable with groups.)\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The example of real polynomials\n\t\t\\[ \\left\\{ ax^2+bx+c \\mid a,b,c \\in \\RR \\right\\} \\]\n\t\tis also a $\\ZZ$-module!\n\t\tIndeed, we can add any two such polynomials,\n\t\tand we can scale them by integers.\n\t\t\\ii The set of integers modulo $100$, say $\\ZZ/100\\ZZ$,\n\t\tis a $\\ZZ$-module as well. Can you see how?\n\t\t\\ii In fact, \\emph{any} abelian group $G = (G,+)$ is a $\\ZZ$-module.\n\t\tThe multiplication can be defined by\n\t\t\\[ n \\cdot g = \\underbrace{g+\\dots+g}_{\\text{$n$ times}} \n\t\t\\qquad (-n) \\cdot g = n \\cdot (-g)\\]\n\t\tfor $n \\ge 0$. 
(Here $-g$ is the additive inverse of $g$.)\n\t\\end{enumerate}\n\\end{example}\n\\begin{example}\n\t[Every ring is its own module]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\\ii $\\RR$ can be thought of as an $\\RR$-vector space over itself.\n\tCan you see why?\n\n\t\\ii By the same reasoning,\n\twe see that \\emph{any} commutative ring $R$ can be thought of\n\tas an $R$-module over itself.\n\t\\end{enumerate}\n\\end{example}\n\n\\section{Direct sums}\n\\prototype{$\\{ax^2+bx+c\\} = \\RR \\oplus x\\RR \\oplus x^2\\RR$, and\n$\\RR^3$ is the sum of its axes.}\nLet's return to \\Cref{example:quadratic_vector_space}, and consider\n\\[ V = \\left\\{ ax^2+bx+c \\mid a,b,c \\in \\RR \\right\\}.  \\]\nEven though I haven't told you what a dimension is,\nyou can probably see that this vector space ``should have'' dimension $3$.\nWe'll get to that in a moment.\n\nThe other thing you may have noticed is that somehow\nthe $x^2$, $x$ and $1$ terms don't ``talk to each other''.\nThey're totally unrelated.\nIn other words, we can consider the three sets\n\\begin{align*}\n\tx^2\\RR &\\defeq \\left\\{ ax^2 \\mid a \\in \\RR \\right\\} \\\\\n\tx\\RR &\\defeq \\left\\{ bx \\mid b \\in \\RR \\right\\} \\\\\n\t\\RR &\\defeq \\left\\{ c \\mid c \\in \\RR \\right\\}.\n\\end{align*}\nIn an obvious way, each of these can be thought of as a ``copy'' of $\\RR$.\n\nThen $V$ quite literally consists of the ``sums of these sets''.\nSpecifically, every element of $V$ can be written \\emph{uniquely}\nas the sum of one element from each of these sets.\nThis motivates us to write\n\\[ V = x^2\\RR \\oplus x\\RR \\oplus \\RR. \\]\nThe notion which captures this formally is the \\vocab{direct sum}.\n\n\\begin{definition}\n\tLet $M$ be an $R$-module.\n\tLet $M_1$ and $M_2$ be subsets of $M$ which are themselves $R$-modules.\n\tThen we write $M = M_1 \\oplus M_2$ and say $M$ is a \\vocab{direct sum}\n\tof $M_1$ and $M_2$\n\tif every element from $M$ can be written uniquely as the sum\n\tof an element from $M_1$ and $M_2$.\n\\end{definition}\n\\begin{example}[Euclidean plane]\n\tTake the vector space $\\RR^2 = \\left\\{ (x,y) \\mid x \\in \\RR, y \\in \\RR \\right\\}$.\n\tWe can consider it as a direct sum of its $x$-axis and $y$-axis:\n\t\\[ X = \\left\\{ (x,0) \\mid x \\in \\RR  \\right\\}\n\t\t\\text{ and }\n\t\tY = \\left\\{ (0,y) \\mid y \\in \\RR \\right\\}. \\]\n\tThen $\\RR^2 = X \\oplus Y$.\n\\end{example}\n\nThis gives us a ``top-down'' way to break down modules\ninto some disconnected components.\n\nBy applying this idea in reverse, we can also construct\nnew vector spaces as follows.\nIn a very unfortunate accident, the two names and notations for technically\ndistinct things are exactly the same.\n\\begin{definition}\n\tLet $M$ and $N$ be $R$-modules.\n\tWe define the \\vocab{direct sum} $M \\oplus N$\n\tto be the $R$-module whose elements are pairs $(m,n) \\in M \\times N$.\n\tThe operations are given by\n\t\\[ (m_1, n_1) + (m_2, n_2) = (m_1+m_2, n_1+n_2). \\]\n\tand\n\t\\[ r \\cdot (m, n) = (r \\cdot m, r \\cdot n). \\]\n\\end{definition}\n\nFor example, while we technically wrote $\\RR^2 = X \\oplus Y$,\nsince each of $X$ and $Y$ is a copy of $\\RR$,\nwe might as well have written $\\RR^2 \\cong \\RR \\oplus \\RR$.\n\n\\begin{abuse}\n\tThe above illustrates an abuse of notation in the way we write a direct sum. 
The symbol $\\oplus$ has two meanings.\n\t\\begin{itemize}\n\t\t\\ii If $V$ is a \\emph{given} space and $W_1$ and $W_2$ are subspaces, then $V = W_1 \\oplus W_2$ means that ``$V$ \\emph{splits} as a direct sum $W_1 \\oplus W_2$'' in the way we defined above.\n\t\t\\ii If $W_1$ and $W_2$ are two \\emph{unrelated} spaces, then $W_1 \\oplus W_2$ is \\emph{defined} as the vector space whose \\emph{elements} are pairs $(w_1, w_2) \\in W_1 \\times W_2$.\n\t\\end{itemize}\n\tYou can see that these definitions ``kind of'' coincide.\n\\end{abuse}\n\nIn this way, you can see that $V$ should be isomorphic\nto $\\RR \\oplus \\RR \\oplus \\RR$;\nwe had $V = x^2\\RR \\oplus x\\RR \\oplus \\RR$,\nbut the $1$, $x$, $x^2$ don't really talk to each other\nand each of the summands is really just a copy of $\\RR$ at heart.\n\n\\begin{definition}\n\tWe can also define, for every positive integer $n$, the module\n\t\\[ M^{\\oplus n}\n\t\t\\defeq \\underbrace{M \\oplus M \\oplus \\dots \\oplus M}_{\\text{$n$ times}}. \\]\n\\end{definition}\n\n\\section{Linear independence, spans, and basis}\n\\prototype{%\n\t$\\left\\{ 1,x,x^2 \\right\\}$ is a basis of\n\t$\\left\\{ ax^2 + bx + c \\mid a,b,c \\in \\RR \\right\\}$.}\n\nThe idea of a basis, the topic of this section,\ngives us another way to capture the notion that\n\\[ V = \\left\\{ ax^2+bx+c \\mid a,b,c \\in \\RR \\right\\} \\]\nis sums of copies of $\\{1,x,x^2\\}$.\nThis section should be very intuitive, if technical.\nIf you can't see why the theorems here ``should'' be true,\nyou're doing it wrong.\n\nLet $M$ be an $R$-module now.\nWe define three very classical notions that you are likely already familiar with.\nIf not, fall back on your notion of Euclidean space or $V$ above.\n\\begin{definition}\n\tA \\vocab{linear combination} of some vectors $v_1, \\dots, v_n$\n\tis a sum of the form $r_1 v_1 + \\dots + r_n v_n$,\n\twhere $r_1, \\dots, r_n \\in R$.\n\tThe linear combination is called \\vocab{trivial}\n\tif $r_1 = r_2 = \\dots = r_n = 0_R$, and \\vocab{nontrivial} otherwise.\n\\end{definition}\n\\begin{definition}\n\tConsider a finite set of vectors $v_1, \\dots, v_n$ in a module $M$.\n\t\\begin{itemize}\n\t\t\\ii It is called \\vocab{linearly independent} if there\n\t\tis no nontrivial linear combination with value $0_M$.\n\t\t(Observe that $0_M = 0 \\cdot v_1 + 0 \\cdot v_2 + \\dots + 0 \\cdot v_n$\n\t\tis always true -- the assertion is that there is no other\n\t\tway to express $0_M$ in this form.)\n\t\t\\ii It is called a \\vocab{generating set} if every $v \\in M$ can be written as\n\t\ta linear combination of the $\\{v_i\\}$.\n\t\tIf $M$ is a vector space we say it is \\vocab{spanning} instead.\n\t\t\\ii It is called a \\vocab{basis} (plural \\vocab{bases})\n\t\tif every $v \\in M$ can be written\n\t\t\\emph{uniquely} as a linear combination of the $\\{v_i\\}$.\n\t\\end{itemize}\n\tThe same definitions apply for an infinite set, with the proviso\n\tthat all sums must be finite.\n\\end{definition}\nSo by definition, $\\left\\{ 1,x,x^2 \\right\\}$ is a basis for $V$.\nIt's not the only one: $\\{2, x, x^2\\}$ and $\\{x+4, x-2, x^2+x\\}$\nare other examples of bases, though not as natural.\nHowever, the set $S = \\{3+x^2, x+1, 5+2x+x^2\\}$ is not a basis;\nit fails for two reasons:\n\\begin{itemize}\n\t\\ii Note that\n\t$0 = (3+x^2) + 2(x+1) - (5+2x+x^2)$.\n\tSo the set $S$ is not linearly independent.\n\t\\ii It's not possible to write $x^2$ as a linear combination of elements of $S$.\n\tSo $S$ fails to be spanning.\n\\end{itemize}\nWith these new terms, we can say
a basis is a linearly independent and spanning set.\n\n\\begin{example}[More examples of bases]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Regard $\\QQ[\\sqrt2]\n\t\t= \\left\\{ a + b \\sqrt 2 \\mid a, b \\in \\QQ \\right\\}$\n\t\tas a $\\QQ$-vector space.\n\t\tThen $\\{1, \\sqrt 2\\}$ is a basis.\n\t\t\\ii If $V$ is the set of all real polynomials,\n\t\tthere is an infinite basis $\\{1, x, x^2, \\dots\\}$.\n\t\tThe condition that we only use finitely many terms just says\n\t\tthat the polynomials must have finite degree (which is good).\n\t\t\\ii Let $V = \\{ (x,y,z) \\mid x+y+z = 0 \\text{ and } x,y,z \\in \\RR\\}$.\n\t\tThen we expect there to be a basis of size $2$, but unlike previous examples\n\t\tthere is no immediately ``obvious'' choice.\n\t\tSome working examples include:\n\t\t\\begin{itemize}\n\t\t\t\\ii $(1,-1,0)$ and $(1,0,-1)$,\n\t\t\t\\ii $(0,1,-1)$ and $(1,0,-1)$,\n\t\t\t\\ii $(5,3,-8)$ and $(2,-1,-1)$.\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{exercise}\n\tShow that a set of vectors is a basis if and only if\n\tit is linearly independent and spanning.\n\t(Think about the polynomial example if you get stuck.)\n\\end{exercise}\n\nNow we state a few results which assert\nthat bases in vector spaces behave as nicely as possible.\n\\begin{theorem}[Maximality and minimality of bases]\n\t\\label{thm:vector_best}\n\tLet $V$ be a vector space over some field $k$\n\tand take $e_1, \\dots, e_n \\in V$. The following are equivalent:\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The $e_i$ form a basis.\n\t\t\\ii The $e_i$ are spanning, but no proper subset is spanning.\n\t\t\\ii The $e_i$ are linearly independent, but adding any other\n\t\telement of $V$ makes them not linearly independent.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{remark}\n\tIf we replace $V$ by a general module $M$ over a commutative ring $R$,\n\tthen (a) $\\implies$ (b) and (a) $\\implies$ (c) but not conversely.\n\\end{remark}\n\\begin{proof}\n\tStraightforward, do it yourself if you like.\n\tThe key point to notice is that you need to divide by scalars for the converse direction,\n\thence $V$ is required to be a vector space instead of just a module\n\tfor the implications (b) $\\implies$ (a) and (c) $\\implies$ (a).\n\\end{proof}\n\n\\begin{theorem}\n\t[Dimension theorem for vector spaces]\n\tIf a vector space $V$ has a finite basis,\n\tthen every other basis has the same number of elements.\n\\end{theorem}\n\\begin{proof}\n\tWe prove something stronger:\n\tAssume $v_1, \\dots, v_n$ is a spanning set\n\twhile $w_1, \\dots, w_m$ is linearly independent. We claim that $n \\ge m$.\n\t\\begin{ques}\n\t\tShow that this claim is enough to imply the theorem.\n\t\\end{ques}\n\n\tLet $A_0 = \\{v_1, \\dots, v_n\\}$ be the spanning set.\n\tThrow in $w_1$: by the spanning condition,\n\t$w_1 = c_1 v_1 + \\dots + c_n v_n$.\n\tThere's some nonzero coefficient, say $c_n$.\n\tThus \\[ v_n = \\frac{1}{c_n} w_1 - \\frac{c_1}{c_n}v_1 - \\frac{c_2}{c_n}v_2 - \\dots.
\\]\n\tThus $A_1 = \\{v_1, \\dots, v_{n-1}, w_1\\}$ is spanning.\n\tNow do the same thing, throwing in $w_2$,\n\tand deleting some element of the $v_i$ as before to get $A_2$;\n\tthe condition that the $w_i$ are linearly independent\n\tensures that some $v_i$ coefficient\n\tmust always be nonzero.\n\tSince we can eventually get to $A_m$, we have $n \\ge m$.\n\\end{proof}\n\\begin{remark}\n\t[Generalizations]\n\t\\begin{itemize}\n\t\t\\ii The theorem is true for an infinite basis as well\n\t\tif we interpret ``the number of elements'' as ``cardinality''.\n\t\tThis is confusing on a first read through, so we won't elaborate.\n\t\t\\ii In fact, this is true for modules over any commutative ring.\n\t\tInterestingly, the proof for the general case proceeds by reducing\n\t\tto the case of a vector space.\n\t\\end{itemize}\n\\end{remark}\n\nThe dimension theorem, true to its name,\nlets us define the \\vocab{dimension} of\na vector space as the size of any finite basis, if one exists.\nWhen it does exist we say $V$ is \\vocab{finite-dimensional}.\nSo for example,\n\\[ V = \\left\\{ ax^2 + bx + c \\mid a,b,c \\in \\RR \\right\\} \\]\nhas dimension three, because $\\left\\{ 1,x,x^2 \\right\\}$ is a basis.\nThat's not the only basis: we could just as well have written\n\\[ \\left\\{ a(x^2-4x) + b(x+2) + c \\mid a,b,c \\in \\RR \\right\\} \\]\nand gotten the exact same vector space.\nBut the beauty of the theorem is that no matter how we try\nto contrive the generating set, we will always get exactly three elements.\nThat's why it makes sense to say $V$ has dimension three.\n\nOn the other hand, the set of all polynomials\n$\\RR[x]$ is \\emph{infinite-dimensional}\n(which should be intuitively clear).\n\nA basis $e_1, \\dots, e_n$ of $V$ is really cool\nbecause it means that to specify $v \\in V$,\nI only have to specify $a_1, \\dots, a_n \\in k$,\nand then let $v = a_1 e_1 + \\dots + a_n e_n$.\nYou can even think of $v$ as $\\left( a_1, \\dots, a_n \\right)$.\n% In a way I'll make precise in a moment, $V$ is actually isomorphic to just $k^{\\oplus n}$.\nTo put it another way, if $V$ is a $k$-vector space we always have\n\\[ V = e_1 k \\oplus e_2 k \\oplus \\dots \\oplus e_n k. \\]\n\n\\section{Linear maps}\n\\prototype{Evaluation of $\\{ax^2+bx+c\\}$ at $x=3$.}\nWe've seen homomorphisms and continuous maps.\nNow we're about to see linear maps, the structure-preserving maps\nbetween vector spaces.
Can you guess the definition?\n\n\\begin{definition}\n\tLet $V$ and $W$ be vector spaces over the same field $k$.\n\tA \\vocab{linear map} is a map $T \\colon V \\to W$ such that:\n\t\\begin{enumerate}[(i)]\n\t\t\\ii We have $T(v_1 + v_2) = T(v_1) + T(v_2)$\n\t\tfor any $v_1, v_2 \\in V$.\\footnote{In group language,\n\t\t\t$T$ is a homomorphism $(V,+) \\to (W,+)$.}\n\t\t\\ii For any $a \\in k$ and $v \\in V$, $T(a \\cdot v) = a \\cdot T(v)$.\n\t\\end{enumerate}\n\tIf this map is a bijection (equivalently, if it has an inverse),\n\tit is an \\vocab{isomorphism}.\n\tWe then say $V$ and $W$ are \\vocab{isomorphic}\n\tvector spaces and write $V \\cong W$.\n\\end{definition}\n\n\\begin{example}[Examples of linear maps]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii For any vector spaces $V$ and $W$ there is a trivial linear map sending everything to $0_W \\in W$.\n\t\t\\ii For any vector space $V$, there is the identity isomorphism $\\id : V \\to V$.\n\t\t\\ii The map $\\RR^3 \\to \\RR$ by $(a,b,c) \\mapsto 4a+2b+c$ is a linear map.\n\t\t\\ii Let $V$ be the set of real polynomials of degree at most $2$.\n\t\tThe map $\\RR^3 \\to V$ by $(a,b,c) \\mapsto ax^2+bx+c$ is an \\emph{isomorphism}.\n\t\t\\ii Let $V$ be the set of real polynomials of degree at most $2$.\n\t\tThe map $V \\to \\RR$ by $ax^2+bx+c \\mapsto 9a + 3b + c$\n\t\tis a linear map, which can be described as ``evaluation at $3$''.\n\t\t\\ii Let $W$ be the set of functions $\\RR \\to \\RR$.\n\t\tThe evaluation map $W \\to \\RR$ by $f \\mapsto f(0)$ is a linear map.\n\t\t\\ii There is a map of $\\QQ$-vector spaces $\\QQ[\\sqrt2] \\to \\QQ[\\sqrt2]$\n\t\tcalled ``multiply by $\\sqrt2$''; this map sends $a+b\\sqrt2 \\mapsto 2b + a\\sqrt2$.\n\t\tThis map is an isomorphism, because it has an inverse ``multiply by $1/\\sqrt2$''.\n\t\\end{enumerate}\n\\end{example}\n\nIn the expression $T(a \\cdot v) = a \\cdot T(v)$, note that the first $\\cdot$ is the multiplication of $V$ and the second $\\cdot$ is the multiplication of $W$.\nNote that this notion of isomorphism really only cares about the size of the basis:\n\\begin{proposition}[$n$-dimensional vector spaces are isomorphic]\n\tIf $V$ is an $n$-dimensional vector space, then\n\t$V \\cong k^{\\oplus n}$.\n\\end{proposition}\n\\begin{ques}\n\tLet $e_1$, \\dots, $e_n$ be a basis for $V$.\n\tWhat is the isomorphism?\n\t(Your first guess is probably right.)\n\\end{ques}\n\\begin{remark}\n\tYou could technically say that all finite-dimensional vector\n\tspaces are just $k^{\\oplus n}$ and that no other space is worth\n\tcaring about.\n\tBut this seems kind of rude.\n\tSpaces often are more than just triples: $ax^2+bx+c$ is a polynomial,\n\tand so it has some ``essence'' to it that you'd lose if you\n\tcompressed it into $(a,b,c)$.\n\n\tMoreover, a lot of spaces, like the set of vectors $(x,y,z)$ with $x+y+z=0$,\n\tdo not have an obvious choice of basis.\n\tThus to cast such a space into $k^{\\oplus n}$\n\twould require you to make arbitrary decisions.\n\\label{rem:vector_spaces_have_essence}\n\\end{remark}\n\n\\section{What is a matrix?}\nNow I get to tell you what a matrix is:\nit's a way of writing a linear map in terms of bases.\n\nSuppose we have a finite-dimensional\nvector space $V$ with basis $e_1, \\dots, e_m$\nand a vector space $W$ with basis $w_1, \\dots, w_n$.\nI also have a map $T \\colon V \\to W$ and I want to tell you what $T$ is.\nIt would be awfully inconsiderate of me to try and tell you what $T(v)$\nis at every point $v$.\nIn fact, I only have to tell you what $T(e_1)$, 
\\dots, $T(e_m)$ are,\nbecause from there you can work out\n$T(a_1 e_1 + \\dots + a_m e_m)$ for yourself:\n\\[ T(a_1 e_1 + \\dots + a_m e_m) = a_1 T(e_1) + \\dots + a_m T(e_m). \\]\nSince the $e_i$ are a basis, that tells you all you need to know about $T$.\n\\begin{example}\n\t[Extending linear maps]\n\tLet $V = \\left\\{ ax^2+bx+c \\mid a,b,c \\in \\RR \\right\\}$.\n\tThen $T(ax^2+bx+c) = aT(x^2) + bT(x) + cT(1)$.\n\\end{example}\n\nNow I can even be more concrete.\nI could tell you what $T(e_1)$ is, but seeing as I have a basis of $W$,\nI can actually just tell you what $T(e_1)$ is in terms of this basis.\nSpecifically, there are unique $a_{11}, a_{21}, \\dots, a_{n1} \\in k$ such that\n\\[ T(e_1) = a_{11} w_1 + a_{21} w_2 + \\dots + a_{n1} w_n. \\]\nSo rather than telling you the value of $T(e_1)$ in some abstract space $W$,\nI could just tell you what $a_{11}, a_{21}, \\dots, a_{n1}$ were.\nThen I'd repeat this for $T(e_2)$, $T(e_3)$, all the way up to $T(e_m)$,\nand that would tell you everything you need to know about $T$.\n\nThat's where the matrix $T$ comes from!\nIt's a concise way of writing down all $mn$ numbers\nI need to tell you.\nTo be explicit, the matrix for $T$ is defined as the array\n\\begin{align*}\n\tT &= \\underbrace{%\n\t\\begin{bmatrix}\n\t\t\\mid & \\mid & & \\mid \\\\\n\t\tT(e_1) & T(e_2) & \\dots & T(e_{m}) \\\\\n\t\t\\mid & \\mid & & \\mid \\\\\n\t\\end{bmatrix}\n\t}_{\\text{$m$ columns}} \\Bigg\\} \\text{$n$ rows} \\\\\n\t&=\n\t\\begin{bmatrix}\n\t\ta_{11} & a_{12} & \\dots & a_{1m} \\\\\n\t\ta_{21} & a_{22} & \\dots & a_{2m} \\\\\n\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\ta_{n1} & a_{n2} & \\dots & a_{nm}\n\t\\end{bmatrix}.\n\\end{align*}\n\n\\begin{example}\n\t[An example of a matrix]\n\tHere is a concrete example in terms of a basis.\n\tLet $V = \\RR^3$ with basis $e_1$, $e_2$, $e_3$\n\tand let $W = \\RR^2$ with basis $w_1$, $w_2$.\n\tIf I have $T \\colon V \\to W$,\n\tthen $T$ is uniquely determined by its three values on the basis, for example:\n\t\\begin{align*}\n\t\tT(e_1) &= 4w_1 + 7w_2 \\\\\n\t\tT(e_2) &= 2w_1 + 3w_2 \\\\\n\t\tT(e_3) &= w_1\n\t\\end{align*}\n\tThe columns then correspond to $T(e_1)$, $T(e_2)$, $T(e_3)$:\n\t\\[\n\t\tT =\n\t\t\\begin{bmatrix}\n\t\t\t4 & 2 & 1 \\\\\n\t\t\t7 & 3 & 0\n\t\t\\end{bmatrix}\n\t\\]\n\\end{example}\n\n\\begin{example}\n\t[An example of a matrix after choosing a basis]\n\tWe again let $V = \\left\\{ ax^2 + bx + c \\right\\}$\n\tbe the vector space of polynomials of degree at most $2$.\n\tWe fix the basis $1$, $x$, $x^2$ for it.\n\n\tConsider the ``evaluation at $3$'' map,\n\ta map $V \\to \\RR$.\n\tWe pick $1$ as the basis element of the RHS;\n\tthen we can write it as a $1 \\times 3$ matrix\n\t\\[ \\begin{bmatrix}\n\t\t\t1 & 3 & 9\n\t\t\\end{bmatrix} \\]\n\twith the columns corresponding to $T(1)$, $T(x)$, $T(x^2)$.\n\\end{example}\n\n\nFrom here you can actually work out for yourself\nwhat it means to multiply two matrices.\nSuppose we have picked a basis for three spaces $U$, $V$, $W$.\nGiven maps $T \\colon U \\to V$ and $S \\colon V \\to W$,\nwe can consider their composition $S \\circ T$, i.e.\n\\[ U \\taking{T} V \\taking{S} W.
\\]\nMatrix multiplication is defined exactly so that the matrix $ST$\nis the same thing we get from interpreting the composed function $S \\circ T$ as a matrix.\n\\begin{exercise}\n\tCheck this for yourself!\n\tFor a concrete example let $\\RR^2 \\taking{T} \\RR^2 \\taking{S} \\RR^2$\n\tby $T(e_1) = 2e_1+3e_2$ and $T(e_2) = 4e_1+5e_2$,\n\t$S(e_1) = 6e_1+7e_2$ and $S(e_2) = 8e_1+9e_2$.\n\tCompute $S(T(e_1))$ and $S(T(e_2))$ and see how it compares\n\tto multiplying the matrices associated to $S$ and $T$.\n\\end{exercise}\nIn particular, since function composition is associative,\nit follows that matrix multiplication is as well.\nTo drive this point home,\n\\begin{moral}\n\tA matrix is the laziest possible way to specify\n\ta linear map from $V$ to $W$.\n\\end{moral}\n\nThis means you can define concepts like the determinant or the trace of a matrix\nboth in terms of an ``intrinsic'' map $T \\colon V \\to W$\nand in terms of the entries of the matrix.\nSince the map $T$ itself doesn't refer to any basis,\nthe abstract definition will imply that the numerical definition\ndoesn't depend on the choice of a basis.\n\n\\section{Subspaces and picking convenient bases}\n\\prototype{Any two linearly independent vectors in $\\RR^3$.}\n% A submodule is exactly what you think it is.\n\\begin{definition}\n\tLet $M$ be a left $R$-module.\n\tA \\vocab{submodule} $N$ of $M$ is a module $N$\n\tsuch that every element of $N$ is also an element of $M$.\n\tIf $M$ is a vector space then $N$ is called a \\vocab{subspace}.\n\\end{definition}\n\n\\begin{example}[Kernels]\n\tThe \\vocab{kernel} of a map $T \\colon V \\to W$ (written $\\ker T$) is\n\tthe set of $v \\in V$ such that $T(v) = 0_W$.\n\tIt is a subspace of $V$, since it's closed under addition and scaling (why?).\n\\end{example}\n\\begin{example}[Spans]\n\tLet $V$ be a vector space and $v_1, \\dots, v_m$ be any vectors of $V$.\n\tThe \\vocab{span} of these vectors is defined as the set\n\t\\[ \\left\\{ a_1 v_1 + \\dots + a_m v_m \\mid a_1, \\dots, a_m \\in k \\right\\}. 
\\]\n\tNote that it is a subspace of $V$ as well!\n\\end{example}\n\\begin{ques}\n\tWhy is $0_V$ an element of each of the above examples?\n\tIn general, why must any subspace contain $0_V$?\n\\end{ques}\n\nSubspaces behave nicely with respect to bases.\n\\begin{theorem}[Basis completion]\n\t\\label{thm:basis_completion}\n\tLet $V$ be an $n$-dimensional space, and $V'$ a subspace of $V$.\n\tThen\n\t\\begin{enumerate}[(a)]\n\t\t\\ii $V'$ is also finite-dimensional.\n\t\t\\ii If $e_1, \\dots, e_m$ is a basis of $V'$, then there exist\n\t\t$e_{m+1}, \\dots, e_n$ in $V$ such that\n\t\t$e_1, \\dots, e_n$ is a basis of $V$.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\tOmitted, since it is intuitive and the proof is not that enlightening.\n\t(However, we will use this result repeatedly later on,\n\tso do take the time to internalize it now.)\n\\end{proof}\n\nA very common use case is picking a convenient basis for a map $T$.\n\\begin{theorem}[Picking a basis for linear maps]\n\t\\label{thm:linear_map_basis}\n\tLet $T \\colon V \\to W$ be a map of finite-dimensional vector spaces,\n\twith $n = \\dim V$, $m = \\dim W$.\n\tThen there exists a basis $v_1, \\dots, v_n$ of $V$\n\tand a basis $w_1, \\dots, w_m$ of $W$,\n\tas well as a nonnegative integer $k$, such that\n\t\\[\n\t\tT(v_i) =\n\t\t\\begin{cases}\n\t\t\tw_i & \\text{if $i \\le k$} \\\\\n\t\t\t0_W & \\text{if $i > k$}.\n\t\t\\end{cases}\n\t\\]\n\tMoreover $\\dim \\ker T = n-k$ and $\\dim \\img T = k$.\n\\end{theorem}\n\\begin{proof}[Sketch of Proof]\n\tYou might like to try this one yourself before reading on:\n\tit's a repeated application of \\Cref{thm:basis_completion}.\n\n\tLet $\\ker T$ have dimension $n-k$.\n\tWe can pick $v_{k+1}, \\dots, v_{n}$ a basis of $\\ker T$.\n\tThen extend it to a basis $v_1, \\dots, v_n$ of $V$.\n\tThe map $T$ is injective over the span of $v_1, \\dots, v_k$\n\t(since only $0_V$ is in the kernel) so its images in $W$ are linearly independent.\n\tSetting $w_i = T(v_i)$ for each $i \\le k$,\n\twe get some linearly independent set in $W$.\n\tThen extend it again to a basis of $W$.\n\\end{proof}\n\nThis theorem is super important,\nnot only because of applications but also\nbecause it will give you the right picture in your head\nof how a linear map is supposed to look.\nI'll even draw a cartoon of it to make sure you remember:\n\n\\begin{center}\n\\begin{asy}\n\tunitsize(0.7cm);\n\treal d = 3;\n\n\tfilldraw(box( (-3*d/2,3.5), (-d/2,-4.5) ), opacity(0.1)+lightcyan, blue);\n\tfilldraw(box( (d/2,3.5), (3*d/2,-5.5) ), opacity(0.1)+lightred, red);\n\n\tlabel(scale(1.5)*\"$V$\", (-d,4), blue);\n\tdot(\"$e_1$\", (-d,3), dir(180), blue);\n\tdot(\"$e_2$\", (-d,2), dir(180), blue);\n\tlabel(\"$\\vdots$\", (-d,1), blue);\n\tdot(\"$e_k$\", (-d,0), dir(180), blue);\n\tdot(\"$e_{k+1}$\", (-d,-1), dir(180), blue);\n\tdot(\"$e_{k+2}$\", (-d,-2), dir(180), blue);\n\tlabel(\"$\\vdots$\", (-d,-3), dir(180), blue);\n\tdot(\"$e_n$\", (-d,-4), dir(180), blue);\n\n\tlabel(scale(1.5)*\"$W$\", (d,4), red);\n\tdot(\"$w_1$\", (d,3), dir(0), red);\n\tdot(\"$w_2$\", (d,2), dir(0), red);\n\tlabel(\"$\\vdots$\", (d,1), red);\n\tdot(\"$w_k$\", (d,0), dir(0), red);\n\tdot(\"$w_{k+1}$\", (d,-1), dir(0), red);\n\tdot(\"$w_{k+2}$\", (d,-2), dir(0), red);\n\tdot(\"$w_{k+3}$\", (d,-3), dir(0), red);\n\tlabel(\"$\\vdots$\", (d,-4), dir(0), red);\n\tdot(\"$w_m$\", (d,-5), dir(0), red);\n\n\tlabel(\"$T$\", (0,3), dir(90));\n\tdraw( (-d,3)--(d,3), EndArrow, Margin(3,3) );\n\tdraw( (-d,2)--(d,2), EndArrow, Margin(3,3) );\n\tdraw( (-d,0)--(d,0),
EndArrow, Margin(3,3) );\n\tdraw( (-d,-1)--(0,-1), EndArrow, Margin(3,3) );\n\tdraw( (-d,-2)--(0,-2), EndArrow, Margin(3,3) );\n\tdraw( (-d,-4)--(0,-4), EndArrow, Margin(3,3) );\n\tlabel(\"$0$\", (0,-1));\n\tlabel(\"$0$\", (0,-2));\n\tlabel(\"$0$\", (0,-4));\n\n\tdraw( (5.5,3)--(6,3)--(6,0)--(5.5,0));\n\tlabel(\"$\\operatorname{im} T$\", (6, 1.5), dir(0));\n\tdraw( (-5.5,-1)--(-6,-1)--(-6,-4)--(-5.5,-4) );\n\tlabel(\"$\\ker T$\", (-6, -2.5), dir(180));\n\\end{asy}\n\\end{center}\n\nIn particular, for $T \\colon V \\to W$,\none can write $V = \\ker T \\oplus V'$,\nso that $T$ annihilates its kernel while sending $V'$\nto an isomorphic copy in $W$.\n\nA corollary of this (which you should have expected anyways)\nis the so called rank-nullity theorem,\nwhich is the analog of the first isomorphism theorem.\n\\begin{theorem}\n\t[Rank-nullity theorem]\n\t\\label{thm:rank_nullity}\n\tLet $V$ and $W$ be finite-dimensional vector spaces.\n\tIf $T \\colon V \\to W$, then\n\t\\[ \\dim V = \\dim \\ker T + \\dim \\img T. \\]\n\\end{theorem}\n\\begin{ques}\n\tConclude the rank-nullity theorem from \\Cref{thm:linear_map_basis}.\n\\end{ques}\n\n\\section{A cute application: Lagrange interpolation}\nHere's a cute application\\footnote{Source: Communicated to me\nby Joe Harris at the first Harvard-MIT Undergraduate Math Symposium.}\nof linear algebra to a theorem from high school.\n\\begin{theorem}\n\t[Lagrange interpolation]\n\tLet $x_1, \\dots, x_{n+1}$ be distinct real numbers\n\tand $y_1, \\dots, y_{n+1}$ any real numbers.\n\tThen there exists a \\emph{unique}\n\tpolynomial $P$ of degree at most $n$\n\tsuch that \\[ P(x_i) = y_i \\] for every $i$.\n\\end{theorem}\nWhen $n = 1$ for example, this loosely\nsays there is a unique line joining two points.\n\\begin{proof}\n\tThe idea is to consider the vector space $V$\n\tof polynomials with degree at most $n$,\n\tas well as the vector space $W = \\RR^{n+1}$.\n\t\\begin{ques}\n\t\tCheck that $\\dim V = n + 1 = \\dim W$.\n\t\tThis is easiest to do if you pick a basis for $V$,\n\t\tbut you can then immediately forget about the basis\n\t\tonce you finish this exercise.\n\t\\end{ques}\n\tThen consider the linear map $T : V \\to W$ given by\n\t\\[ P \\mapsto \\left( P(x_1), \\dots, P(x_{n+1}) \\right). \\]\n\tThis is indeed a linear map because,\n\twell, $T(P+Q) = T(P)+T(Q)$ and $T(cP) = cT(P)$.\n\tIt also happens to be injective: if $P \\in \\ker T$,\n\tthen $P(x_1) = \\dots = P(x_{n+1}) = 0$,\n\tbut $\\deg P \\le n$ and so $P$ can only be the zero polynomial.\n\t\n\tSo $T$ is an injective map between vector spaces of the same dimension.\n\tThus it is actually a bijection, which is exactly what we wanted.\n\\end{proof}\n\n\\section{(Digression) Arrays of numbers are evil}\n\\label{sec:basis_evil}\nAs I'll stress repeatedly, a matrix represents a\n\\emph{linear map between two vector spaces}.\nWriting it in the form of an $m \\times n$ matrix\nis merely a very convenient way to see the map concretely.\nBut it obfuscates the fact that this map is,\nwell, a map, not an array of numbers.\n\nIf you took high school precalculus, you'll see everything done in terms of matrices.\nTo any typical high school student, a matrix is an array of numbers.\nNo one is sure what exactly these numbers represent,\nbut they're told how to magically multiply these arrays to get more arrays. 
\nThey're told that the matrix\n\\[ \\begin{bmatrix}\n\t\t1 & 0 & \\dots & 0 \\\\\n\t\t0 & 1 & \\dots & 0 \\\\\n\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\t0 & 0 & \\dots & 1 \\\\\n\t\\end{bmatrix} \\]\nis an ``identity matrix'', because when you multiply\nby another matrix it doesn't change.\nThen they're told that the determinant is some magical combination of these\nnumbers formed by this weird multiplication rule.\nNo one knows what this determinant does,\nother than the fact that $\\det(AB) = \\det A \\det B$,\nand something about areas and row operations and Cramer's rule.\n\nThen you go into linear algebra in college, and you do more magic\nwith these arrays of numbers.\nYou're told that two matrices $T_1$ and $T_2$ are similar if\n\\[ T_2 = ST_1S\\inv \\] for some invertible matrix $S$.\nYou're told that the trace of a matrix $\\Tr T$ is the sum of the diagonal entries.\nSomehow this doesn't change if you look at a similar matrix,\nbut you're not sure why.\nThen you define the characteristic polynomial as\n\\[ p_T(X) = \\det (XI - T). \\]\nSomehow this also doesn't change if you take a similar matrix,\nbut now you really don't know why.\nAnd then you have the Cayley-Hamilton theorem in all its black magic:\n$p_T(T)$ is the zero map.  Out of curiosity you Google the proof,\nand you find some ad-hoc procedure which still leaves you\nwith no idea why it's true.\n\nThis is terrible. What's so special about $T_2 = ST_1S\\inv$?\nOnly if you know that the matrices are linear maps does this make sense:\n$T_2$ is just $T_1$ rewritten with a different choice of basis.\n\nI really want to push the opposite view.\nLinear algebra is the study of \\emph{linear maps},\nbut it is taught as the study of \\emph{arrays of numbers},\nand no one knows what these numbers mean.\nAnd for a good reason: the numbers are meaningless.\nThey are a highly convenient way of encoding the map,\nbut they are not the main objects of study,\nany more than the dates of events are the main objects of study in history.\n\nThe other huge downside is that people get the impression\nthat the only (real) vector space in existence is $\\RR^{\\oplus n}$.\nAs explained in \\Cref{rem:vector_spaces_have_essence},\nwhile you \\emph{can} work this way if you're a soulless robot,\nit's very unnatural for humans to do so.\n\nWhen I took Math 55a as a freshman at Harvard,\nI got the exact opposite treatment:\nwe did all of linear algebra without writing down a single matrix.\nDuring all this time I was quite confused.\nWhat's wrong with a basis?\nI didn't appreciate until later that this approach was the\nmorally correct way to treat the subject: it made it clear what was happening.\n\nThroughout the Napkin, I've tried to strike a balance between these\ntwo approaches, using matrices when appropriate to illustrate\nthe maps and to simplify proofs, but ultimately writing\ntheorems and definitions in their \\emph{morally correct} form.\nI hope that this has the advantage of giving the ``right'' definitions\nwhile being concrete enough to be digested.\nBut I would like to say for the record that,\nif I had to pick between the high school approach and the 55a approach,\nI would pick 55a in a heartbeat.\n\n\\section{A word on general modules}\n\\prototype{$\\ZZ[\\sqrt2]$ is a $\\ZZ$-module of rank two.}\nI focused mostly on vector spaces (aka modules over a field) in this chapter\nfor simplicity, so I want to make a few remarks about\nmodules over a general commutative ring $R$ before concluding.\n\nFirstly, recall that
for general modules,\nwe say ``generating set'' instead of ``spanning set''.\nShrug.\n\nThe main issue with rings is that our key theorem \\Cref{thm:vector_best}\nfails in spectacular ways.\nFor example, consider $\\ZZ$ as a $\\ZZ$-module over itself.\nThen $\\{2\\}$ is linearly independent, but it cannot be extended to a basis.\nSimilarly, $\\{2,3\\}$ is spanning, but one cannot cut it down to a basis.\nYou can see why defining dimension is going to be difficult.\n\nNonetheless, there are still analogs of some of the definitions above.\n\\begin{definition}\n\tAn $R$-module $M$ is called \\vocab{finitely generated} if it has a finite generating set.\n\\end{definition}\n\\begin{definition}\n\tAn $R$-module $M$ is called \\vocab{free} if it has a basis.\n\tAs said before, the analogue of the dimension theorem holds,\n\tand we use the word \\vocab{rank} to denote the size of the basis.\n\tAs before, there's an isomorphism $M \\cong R^{\\oplus n}$ where $n$ is the rank.\n\\end{definition}\n\\begin{example}[An example of a $\\ZZ$-module]\n\tThe $\\ZZ$-module\n\t\\[ \\ZZ[\\sqrt2] = \\left\\{ a + b\\sqrt 2 \\mid a,b \\in \\ZZ \\right\\} \\]\n\thas a basis $\\{1, \\sqrt 2\\}$, so we say it is\n\ta free $\\ZZ$-module of rank $2$.\n\\end{example}\n\n\\begin{abuse}\n\t[Notation for groups]\n\tRecall that an abelian group\n\tcan be viewed as a $\\ZZ$-module (and in fact vice-versa!),\n\tso we can (and will) apply these words to abelian groups.\n\tWe'll use the notation $G \\oplus H$ for two abelian groups $G$ and $H$\n\tfor their Cartesian product, emphasizing the fact that $G$ and $H$ are abelian.\n\tThis will happen when we study algebraic number theory and homology groups.\n\\end{abuse}\n\n\\section{\\problemhead}\nGeneral hint:\n\\Cref{thm:linear_map_basis} will be your best friend\nfor many of these problems.\n\n\\begin{dproblem}\n\tLet $V$ and $W$ be finite-dimensional vector spaces\n\twith nonzero dimension, and consider linear maps $T \\colon V \\to W$.\n\tComplete the following table by writing\n\t``sometimes'', ``always'', or ``never'' for each entry.\n\t\\begin{center}\n\t\\begin{tabular}[h]{c|ccc}\n\t\t& $T$ injective & $T$ surjective & $T$ isomorphism \\\\ \\hline\n\t\tIf $\\dim V > \\dim W$\\dots \\\\\n\t\tIf $\\dim V = \\dim W$\\dots \\\\\n\t\tIf $\\dim V < \\dim W$\\dots \\\\\n\t\\end{tabular}\n\t\\end{center}\n\t\\begin{hint}\n\t\tUse the rank-nullity theorem.\n\t\tAlso consider the zero map.\n\t\\end{hint}\n\t\\begin{sol}\n\t\t\\begin{center}\n\t\t\\begin{tabular}[h]{c|ccc}\n\t\t& $T$ injective & $T$ surjective & $T$ isomorphism \\\\ \\hline\n\t\tIf $\\dim V > \\dim W$\\dots & never & sometimes & never\\\\\n\t\tIf $\\dim V = \\dim W$\\dots & sometimes & sometimes & sometimes \\\\\n\t\tIf $\\dim V < \\dim W$\\dots & sometimes & never & never \\\\\n\t\t\\end{tabular}\n\t\t\\end{center}\n\t\tEach ``never'' is by the rank-nullity theorem.\n\t\tEach counterexample is obtained by the zero map\n\t\tsending every element of $V$ to zero;\n\t\tthis map is certainly neither injective nor surjective.\n\t\\end{sol}\n\\end{dproblem}\n\n\\begin{dproblem}\n\t[Equal dimension vector spaces are usually isomorphisms]\n\t\\label{prob:equal_dimension}\n\tLet $V$ and $W$ be finite-dimensional vector\n\tspaces with $\\dim V = \\dim W$.\n\tProve that for a map $T \\colon V \\to W$,\n\tthe following are equivalent:\n\t\\begin{itemize}\n\t\t\\ii $T$ is injective,\n\t\t\\ii $T$ is surjective,\n\t\t\\ii $T$ is bijective.\n\t\\end{itemize}\n\t\\begin{sol}\n\t\tIt essentially follows by
\\Cref{thm:linear_map_basis}.\n\t\\end{sol}\n\\end{dproblem}\n\n\\begin{problem}\n\t[Multiplication by $\\sqrt5$]\n\tLet $V = \\QQ[\\sqrt5] = \\left\\{ a + b \\sqrt 5 \\right\\}$\n\tbe a two-dimensional $\\QQ$-vector space,\n\tand fix the basis $\\{1, \\sqrt 5\\}$ for it.\n\tWrite down the $2 \\times 2$ matrix with rational coefficients\n\tthat corresponds to multiplication by $\\sqrt 5$.\n\t\\begin{hint}\n\t\t$a + b \\sqrt 5 \\mapsto \\sqrt 5 a + 5b$.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tSince $1 \\mapsto \\sqrt5$ and $\\sqrt5 \\mapsto 5$,\n\t\tthe matrix is\n\t\t$\\begin{bmatrix}\n\t\t\t0 & 5 \\\\\n\t\t\t1 & 0\n\t\t\\end{bmatrix}$.\n\t\\end{sol}\n\\end{problem}\n\n\\begin{problem}\n\t[Multivariable Lagrange interpolation]\n\tLet $S \\subset {\\mathbb Z}^2$ be a set of $n$ lattice points.\n\tProve that there exists a nonzero two-variable polynomial $p$\n\twith real coefficients, of degree at most $\\sqrt{2n}$,\n\tsuch that $p(x,y) = 0$ for every $(x,y) \\in S$.\n\\end{problem}\n\n\\begin{problem}\n\t[Putnam 2003]\n\tDo there exist polynomials $a(x)$, $b(x)$, $c(y)$, $d(y)$ such that\n\t\\[ 1 + xy + (xy)^2 = a(x)c(y) + b(x)d(y) \\]\n\tholds identically?\n\t\\begin{hint}\n\t\tPlug in $y = -1, 0, 1$. Use dimensions of $\\RR[x]$.\n\t\\end{hint}\n\\end{problem}\n\n\\begin{problem}[TSTST 2014]\n\t\\gim\n\tLet $P(x)$ and $Q(x)$ be arbitrary polynomials with real coefficients,\n\tand let $d$ be the degree of $P(x)$.\n\tAssume that $P(x)$ is not the zero polynomial.\n\tProve that there exist polynomials $A(x)$ and $B(x)$ such that\n\t\\begin{enumerate}[(i)]\n\t\t\\ii Both $A$ and $B$ have degree at most $d/2$,\n\t\t\\ii At most one of $A$ and $B$ is the zero polynomial,\n\t\t\\ii $P$ divides $A+Q \\cdot B$.\n\t\\end{enumerate}\n\t\\begin{hint}\n\t\tInterpret as $V \\oplus V \\to W$ for suitable $V$, $W$.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tLet $V$ be the space of real polynomials with degree at most $d/2$\n\t\t(which has dimension $1 + \\left\\lfloor d/2 \\right\\rfloor$),\n\t\tand $W$ be the space of real polynomials modulo $P$ (which has dimension $d$).\n\t\tThen $\\dim (V \\oplus V) > \\dim W$.\n\t\tSo the linear map $V \\oplus V \\to W$ by $(A,B) \\mapsto A + Q \\cdot B$\n\t\thas a kernel of positive dimension\n\t\t(by rank-nullity, for example).\n\t\\end{sol}\n\\end{problem}\n\n\\begin{sproblem}\n\t[Idempotents are projection maps]\n\t\\label{prob:idempotent}\n\tLet $P \\colon V \\to V$ be a linear map,\n\twhere $V$ is a vector space\n\t(not necessarily finite-dimensional).\n\tSuppose $P$ is \\vocab{idempotent},\n\tmeaning $P(P(v)) = P(v)$ for each $v \\in V$,\n\tor equivalently $P$ is the identity on its image.\n\tProve that \\[ V = \\ker P \\oplus \\img P. \\]\n\tThus we can think of $P$ as \\emph{projection}\n\tonto the subspace $\\img P$.\n\\end{sproblem}\n\n\\begin{sproblem}\n\t\\label{prob:endomorphism_eventual_lemma}\n\t\\gim\n\tLet $V$ be a finite dimensional vector space.\n\tLet $T \\colon V \\to V$ be a linear map,\n\tand let $T^n \\colon V \\to V$ denote $T$ applied $n$ times.\n\tProve that there exists an integer $N$ such that\n\t\\[ V = \\ker T^N \\oplus \\img T^N. 
\\]\n\t\\begin{hint}\n\t\tUse the fact that the infinite chain of subspaces\n\t\t\\[ \\ker T \\subseteq \\ker T^2 \\subseteq \\ker T^3 \\subseteq \\dots \\]\n\t\tand the similar chain for $\\img T$ must eventually stabilize\n\t\t(for dimension reasons).\n\t\\end{hint}\n\t\\begin{sol}\n\t\tConsider\n\t\t\\[\n\t\t\t\\{0\\} \\subseteq \\ker T \\subseteq \\ker T^2 \\subseteq \\ker T^3 \\subseteq \\dots\n\t\t\t\\text{  and  }\n\t\t\tV \\supseteq \\img T \\supseteq \\img T^2 \\supseteq \\img T^3 \\supseteq \\dots.\n\t\t\\]\n\t\tFor dimension reasons, these subspaces must eventually stabilize:\n\t\tfor some large integer $N$,\n\t\t$\\ker T^N = \\ker T^{N+1} = \\dots$\n\t\tand $\\img T^N = \\img T^{N+1} = \\img T^{N+2} = \\dots$.\n\t\tWhen this happens, $\\ker T^N \\cap \\img T^N = \\{0\\}$,\n\t\tsince $T^N$ is an automorphism of $\\img T^N$\n\t\t(because $\\img T^{2N} = \\img T^N$).\n\t\tOn the other hand, by rank-nullity we also have\n\t\t$\\dim \\ker T^N + \\dim \\img T^N = \\dim V$.\n\t\tThus for dimension reasons, $V = \\ker T^N \\oplus \\img T^N$.\n\t\\end{sol}\n\\end{sproblem}\n\n%Now consider the sequences\n\n", "meta": {"hexsha": "2de69cefcc0f76d2a11cb0b7e476bf8e7ee7eba5", "size": 43022, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/linalg/vector-space.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/linalg/vector-space.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/linalg/vector-space.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.0045330916, "max_line_length": 192, "alphanum_fraction": 0.6782808795, "num_tokens": 14347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300698514777, "lm_q2_score": 0.8244619263765707, "lm_q1q2_score": 0.579209294727216}}
{"text": "\\documentclass{report}\n\n\\usepackage[latin1]{inputenc}\n\\usepackage[danish]{babel}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath,amssymb,bm,mathtools}\n\\usepackage{enumitem}\n\\usepackage{forest}\n\\usepackage[margin=1in]{geometry}\n\n\\begin{document}\n\\section*{AD - assignment 1}\nDetermine the asymptotic behavior of the following three CPSBC pricing scheme using the master theorem.% and assuming $p(1)=1$, and $p(2)=2$.\n\\subsection*{Task 1}\n\\begin{enumerate}\n\t\\item $p(n) = 8 p(n/2) +n^2$\n\\end{enumerate}\nFor this recurrence, we have $a = 8, b = 2, f(n) = n^2$, and thus we have that $n^{\\log_2(8)} = n^3 = \\Theta(n^3)$. Since $f(n) = O(n^{3-\\varepsilon})$, where $\\varepsilon = 1$, we can apply case 1 of the master method. Hence \n$$\np(n) = \\Theta(n^3)\n$$\n%\n\\begin{enumerate}[resume]\n\t\\item $p(n) = 8 p(n/4) +n^3$\n\\end{enumerate}\nFor this recurrence, we have $a = 8, b = 4, f(n) = n^3$, and thus we have that $n^{\\log_4(8)} = n^{3/2} = \\Theta(n^{3/2})$. Since $f(n) = \\Omega(n^{3/2 + \\varepsilon})$, where $\\varepsilon = 3/2$ case 3 applies if we furthermore can show that $a f(n/b) \\leq c f(n), \\forall n>=n_0, c<1$. So\n\\begin{align*}\naf(n/b) &= 8f(n/4) = 8\\left(\\frac{n}{4}\\right)^3 = \\frac{8}{4^3}n^3\\\\\n\t\t\t\t&\\leq cn^3 = cf(n)\n\\end{align*}\nwhere the inequality hold for all $n>1$ and for $c > 8/4^3$. Hence we can easily pick a $c<1$ and the regularity condition holds so\n$$\np(n) = \\Theta(n^3)\n$$\n\\begin{enumerate}[resume]\n\t\\item $p(n) = 10 p(n/9) +n\\log_2n$\n\\end{enumerate}\nFor this recurrence, we have $a = 10, b = 9, f(n) = n\\log_2n$. Since $\\log(n) = o(n^k)$ for $k>0$ and $\\log_9(10)>1$ it follows that $f(n) = O(t)$ we are in case 1 of the master theorem, hence \n$$\np(n) = \\Theta(n^{\\log_9(10)})\n$$\n\\subsection*{Task 2}\nFor the following price schemes use the substitution method.\n\\begin{enumerate}\n\t\\item $p(n) = p(n/2) + p(n/3) + n$\n\\end{enumerate}\nThe recursion tree can be seen in Figure~\\ref{fig: task2} for the first few levels. The longest path from the root to a leaf is to follow the left branch. $n \\rightarrow n/2 \\rightarrow n/2^2 \\rightarrow \\cdots \\rightarrow 1$. Hence the height of the tree is a most $\\lg(n)$. The work done at level $i$ is $(1/2 + 1/3)^i n = (5/6)^i n$ which is decreasing. If the tree was completely binary there would be at most $2^{\\log_2(n)} = n$ leaves each with a cost of 1, hence all the leaves would at most be done in $O(n)$ time. We expect the solution to the recurrence to be at most the height of the tree times the work at the worst level in the tree. Hence we guess that $p(n) = O(n\\log_2(n))$. To show it we use the substitution method (without showing the initial step). So assume $p(n) \\leq c n \\log_2(n) - n$\n\\begin{align*}\np(n) &= p(n/2) + p(n/3) +n \\\\\n\t\t &=\\leq c\\frac{n}{2}\\log_2(n/2) - \\frac{n}{2} + c\\frac{n}{3}\\log_2(n/3) -\\frac{n}{3} +n\\\\\n\t\t &\\leq c\\frac{n}{2}\\log_2(n/2) - n + c\\frac{n}{3}\\log_2(n/3) \\\\\n\t\t &= (5/6)cn\\log_2(n) - n - c\\frac{n}{2}\\log_2(2) - c\\frac{n}{3}\\log_2(3)\\\\\n\t\t &\\leq cn\\log_2(n) - n\n\\end{align*}\nwhere the inequalities hold for $c>0$. 
Hence \n$$\np(n) = O(n\\log_2(n))\n$$\n\n\n\\begin{figure}[b]\n\\centering\n\\begin{forest}\n[$n$\n\t[$\\frac{n}{2}$\n\t\t[$\\frac{n}{2^2}$\n\t\t\t[$\\frac{n}{2^3}$\n\t\t\t\t[$\\frac{n}{2^4}$]\n\t\t\t\t[$\\frac{n}{2^3 3}$]\n\t\t\t]\n\t\t\t[$\\frac{n}{2^2 3}$\n\t\t\t\t[$\\frac{n}{2^3 3}$]\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t]\n\t\t]\n\t\t[$\\frac{n}{2 \\cdot 3}$\n\t\t\t[$\\frac{n}{2^2 3}$\n\t\t\t\t[$\\frac{n}{2^3 3}$]\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t]\n\t\t\t[$\\frac{n}{2 \\cdot 3^2}$\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t\t[$\\frac{n}{2 \\cdot 3^3}$]\n\t\t\t]\n\t\t]\n\t]\n\t[$\\frac{n}{3}$\n\t\t[$\\frac{n}{2 \\cdot 3}$\n\t\t\t[$\\frac{n}{2^2 3}$\n\t\t\t\t[$\\frac{n}{2^3 3}$]\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t]\n\t\t\t[$\\frac{n}{2 \\cdot 3^2}$\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t\t[$\\frac{n}{2 \\cdot 3^3}$]\n\t\t\t]\n\t\t]\n\t\t[$\\frac{n}{3^2}$\n\t\t\t[$\\frac{n}{2 \\cdot 3^2}$\n\t\t\t\t[$\\frac{n}{2^2 3^2}$]\n\t\t\t\t[$\\frac{n}{2 \\cdot 3^3}$]\n\t\t\t]\n\t\t\t[$\\frac{n}{3^3}$\n\t\t\t\t[$\\frac{n}{2 \\cdot 3^3}$]\n\t\t\t\t[$\\frac{n}{3^4}$]\n\t\t\t]\n\t\t]\n\t]\n]\n\\end{forest}\n\\caption{Tree structure for the recurrence $p(n) = p(n/2) + p(n/3) + n$. Each node indicates how much work needs to be done at that node.}\n\\label{fig: task2}\n\\end{figure}\n\n\\begin{enumerate}[resume]\n\t\\item $p(n) = \\sqrt{n}p(\\sqrt{n})+\\sqrt{n}$\n\\end{enumerate}\nWhen describing the recurrence tree that we use to create a guess for the asymptotic cost, we will neglect cases where $n$ is not of the form $2^{2^k}$. A node of size $n^{1/2^j}$ at level $j$ spawns $n^{1/2^{j+1}}$ subproblems, so the number of nodes at level $i$ is the product of these branching factors, $\\prod_{j=0}^{i-1} n^{1/2^{j+1}}$, and the cost of each node at level $i$ of the tree is $n^{1/2^{i+1}}$. So the total cost of each level $i$ in the tree is \n\\begin{align*}\n\\prod_{j=0}^{i-1} n^{1/2^{j+1}} n^{1/2^{i+1}} = \\prod_{j=0}^{i} n^{1/2^{j+1}} \n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t= \\prod_{j=1}^{i+1} n^{(1/2)^j}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t= n^{\\sum_{j=1}^{i+1} (1/2)^j} \\leq n\n\\end{align*}\nSo the cost of all nodes in a level is at most $n$. Our guess is again that the asymptotic behavior is at most the height of the tree times the cost of the worst level in the tree. So let's try to determine the height of the tree. Note that after each recursive call the square root is taken, hence after $k$ calls the size is $n^{1/2^k} = 2^{\\log_2(n)/2^k}$. Setting this size equal to $2$ yields\n\\begin{align*}\n2^{\\log_2(n)/2^k} = 2\n\\shortintertext{taking $\\log_2$}\n\\frac{\\log_2(n)}{2^k} = 1\n\\shortintertext{multiply by $2^k$}\n\\log_2(n) = 2^k\n\\shortintertext{taking $\\log_2$}\nk = \\log_2(\\log_2(n))\n\\end{align*}\nSo our guess to use in the substitution method is that $p(n) \\leq n\\log_2\\log_2(n) -d$. So\n\\begin{align*}\np(n) &= \\sqrt{n}p(\\sqrt{n})+\\sqrt{n} \\\\\n\t\t &\\leq \\sqrt{n}\\left(\\sqrt{n}\\log_2\\log_2(\\sqrt{n}) - d\\right)+\\sqrt{n}\\\\\n\t\t &\\leq n\\log_2\\log_2(n) - d\\sqrt{n} + \\sqrt{n}\n\\end{align*}\nThe last line in the equation above is at most $n\\log_2\\log_2(n) - d$ provided that $-d\\sqrt{n} + \\sqrt{n} \\leq -d$, i.e.\\ $d \\geq \\frac{\\sqrt{n}}{\\sqrt{n} - 1}$. Since $\\frac{\\sqrt{n}}{\\sqrt{n} - 1} \\leq 2$ for all $n \\geq 4$, the constant $d = 2$ works. 
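\nAgain a hedged numerical check is possible, treating the sizes as reals and stopping the recursion at $n \\leq 2$ (an arbitrary cut-off):\n\\begin{verbatim}\nimport math\n\ndef p(n):\n    # real-valued sketch of p(n) = sqrt(n) p(sqrt(n)) + sqrt(n)\n    if n <= 2:\n        return 1.0  # arbitrary base case\n    s = math.sqrt(n)\n    return s * p(s) + s\n\nfor n in [16, 256, 2**16, 2**32]:\n    print(n, p(n), n * math.log2(math.log2(n)))\n\\end{verbatim}\n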
Hence\n$$\np(n) = O(n\\log_2\\log_2(n))\n$$\n\\end{document}", "meta": {"hexsha": "9fa873556628c07007022c7c142c00b403397fd3", "size": 6057, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "AlgorithmsAndDatastructures/Opgave1/opgave1.tex", "max_stars_repo_name": "pdebesc/KU", "max_stars_repo_head_hexsha": "31a72fb179f62469404290b9e6bb8a7c70e2588e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "AlgorithmsAndDatastructures/Opgave1/opgave1.tex", "max_issues_repo_name": "pdebesc/KU", "max_issues_repo_head_hexsha": "31a72fb179f62469404290b9e6bb8a7c70e2588e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AlgorithmsAndDatastructures/Opgave1/opgave1.tex", "max_forks_repo_name": "pdebesc/KU", "max_forks_repo_head_hexsha": "31a72fb179f62469404290b9e6bb8a7c70e2588e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.0625, "max_line_length": 809, "alphanum_fraction": 0.6031038468, "num_tokens": 2467, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.8244619242200082, "lm_q1q2_score": 0.5792092829424438}}
{"text": "\\documentclass[epsfig,10pt,fullpage]{article}\n\n\\newcommand{\\LabNum}{4}\n\\newcommand{\\CommonDocsPath}{../../common/docs}\n\\input{\\CommonDocsPath/preamble.tex}\n\n\\begin{document}\n\n\\centerline{\\huge OpenCL}\n~\\\\\n\\centerline{\\huge Laboratory Exercise \\LabNum}\n~\\\\\n\\centerline{\\large Classification of Handwritten Digits using Machine Learning: Linear Classifier}\n~\\\\\n\nIn this exercise you will learn about a machine learning approach to classifying handwritten digits.\nYou will create software and hardware implementations of a linear classifier that accepts images of handwritten digits ranging 0 to 9 and determines which digit is present in each image.\n\n\\section*{The MNIST Handwritten Digits Database}\n\nIn this exercise you will use the MNIST handwritten digits database,  \nwhich is widely used by scientists around the world to train and test handwritten digit recognition machines. \nThe database provides 28x28-pixel 8-bit grayscale images, each containing a single handwritten digit ranging 0 to 9. An example of\nan image from the MNIST database is shown in Figure~\\ref{fig:mnist_image}.\n\n\\begin{figure}[H]\n   \\begin{center}\n       \\hspace*{1.5cm}\\includegraphics[scale=0.7]{figures/fig_mnist_image}\n   \\end{center}\n   \\caption{An example handwritten digit image from the MNIST database}\n\t\\label{fig:mnist_image}\n\\end{figure}\n\nThe database contains a training set of 60000 images which are used to train the classifier, and a test set of 10000 images which are used to test the classifier's accuracy. \nThe 60000 training images have already been used to train the weights for the linear classifier. \nYour task is to use the provided weights to implement the linear classifier and test its classification accuracy on the 10000 test images.\n\n\\section*{The Linear Classifier}\n\n\\begin{figure}[H]\n   \\begin{center}\n       \\includegraphics[scale=1]{figures/fig_linear_classifier_equation}\n   \\end{center}\n   \\caption{The equation for calculating the score of a class in the linear classifier}\n\t\\label{fig:linear_classifier_equation}\n\\end{figure}\n\nThe linear classifier evaluates the likelihood (or score) of the input image being each of the possible digits.\nThe score is calculated using the equation shown in Figure~\\ref{fig:linear_classifier_equation}, where j ranges 0 to 783 to span the pixels and their corresponding weights.\nEach digit has its own set of weights leading to a total of 10*784 = 7840 weights for the classifier.\nAfter calculating the scores of all the digits, the digit whose score is highest is chosen as the classification result. \nAn example of the calculations is shown in Figure~\\ref{fig:linear_classifier}, where the linear classifier receives an input image containing the digit 3 and\noutputs the correct classification result.\n\n\n\\begin{figure}[H]\n   \\begin{center}\n       \\includegraphics[scale=0.58]{figures/fig_linear_classifier}\n   \\end{center}\n   \\caption{The linear classifier operating on an input image containing the digit ``3'' and yielding the correct result}\n\t\\label{fig:linear_classifier}\n\\end{figure}\n\n\n\\section*{Part I}\n\\noindent\nImplement a program in the C++ language that performs linear classification of MNIST handwritten digits. \nYour program must use the trained weights stored in files \\textit{weights\\_0}, \\textit{weights\\_1}, ... , \\textit{weights\\_9}, in the \\textit{/design\\_files/weights\\_fp/} directory. 
\nTest your program by classifying the 10000 MNIST test images stored in the file \\textit{/design\\_files/t10k-images-idx3-ubyte}. \nCompare the results of your classifier to the correct labels stored in the file \\textit{/design\\_files/t10k-labels-idx1-ubyte} to calculate\nyour classifier's accuracy as: \\[Accuracy = (\\#\\ of\\ correct\\ results )/ 10000 * 100\\% \\]\nCompile and execute your program on a DE-Series Linux platform.\nUpon completion, your program should output the classification accuracy and the runtime in milliseconds.\n\n\\subsubsection*{Parsing the MNIST Database Files}\nThe MNIST database is provided in .idx files. The file formats of the MNIST images and labels files are shown in Figure~\\ref{fig:mnist_file_format}. \nYour program must open and parse these files to extract the image pixels and labels. Each pixel is 8 bits and represents a grayscale value.\nEach label is also 8 bits, and its value can range from 0 to 9 to indicate the correct classification of the corresponding image.\nThe 32-bit integers stored in these files are stored in most-significant byte (MSB)-first format, which\nmust first be converted to least significant byte (LSB)-first format before being used on the ARM processors of the DE-series boards.\n\n\\lstset{}\n\\begin{figure}[H]\n\\begin{center}\n\\begin{minipage}[t]{12.7 cm}\n\\begin{lstlisting}\nTEST SET IMAGE FILE (t10k-images-idx3-ubyte):\n[offset] [type]          [value]          [description] \n0000     32 bit integer  0x00000803(2051) magic number \n0004     32 bit integer  10000            number of images \n0008     32 bit integer  28               number of rows \n0012     32 bit integer  28               number of columns \n0016     unsigned byte   ??               pixel \n0017     unsigned byte   ??               pixel \n........ \nxxxx     unsigned byte   ??               pixel\n\nTEST SET LABEL FILE (t10k-labels-idx1-ubyte):\n[offset] [type]          [value]          [description] \n0000     32 bit integer  0x00000801(2049) magic number (MSB first) \n0004     32 bit integer  10000            number of items \n0008     unsigned byte   ??               label \n0009     unsigned byte   ??               label \n........ \nxxxx     unsigned byte   ??               label\nThe labels values are 0 to 9.\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\\vspace{-.23in}\\caption{File formats of the MNIST files. Source: http://yann.lecun.com/exdb/mnist/}\n\\label{fig:mnist_file_format}\n\\end{figure}\n\n\\subsubsection*{Parsing the Weights Files}\n\nTrained weights for use in this part of the exercise are provided in the directory \\textit{/design\\_files/weights\\_fp/}. There are 10 weights files named \n\\textit{weights\\_0}, \\textit{weights\\_1}, ... , \\textit{weights\\_9} that correspond to their respective digits.\nEach file contains 784 32-bit floating point weights that can be parsed using the code shown in \nFigure~\\ref{fig:parse_weights_code}. 
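Returning briefly to the MNIST files themselves: if you prototype the parsing in Python first, the MSB-first header fields are easy to sanity-check; this sketch is not part of the required C++ program and assumes the lab's file layout:\n\\lstset{language=Python}\n\\begin{lstlisting}\nimport struct\n\nwith open(\"t10k-images-idx3-ubyte\", \"rb\") as f:\n    # \">iiii\" unpacks four 32-bit integers in MSB-first (big-endian) order\n    magic, n_images, n_rows, n_cols = struct.unpack(\">iiii\", f.read(16))\n    pixels = f.read(n_images * n_rows * n_cols)  # unsigned bytes\n\nprint(magic, n_images, n_rows, n_cols)  # expect 2051, 10000, 28, 28\n\\end{lstlisting}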
\n\n\\lstset{language=C++}\n\\begin{figure}[H]\n\\begin{center}\n\\begin{minipage}[t]{16 cm}\n\\begin{lstlisting}\n#define FEATURE_COUNT 784\n\n// weights must be an array of sufficient size (>= 784)\nbool read_weights_file(char *filename, float *weights) {\n\tFILE *f = fopen(filename, \"rb\");\n\tif (f == NULL){\n\t\tprintf(\"ERROR: could not open %s\\n\",filename);\n\t\treturn false;\n\t}\n\tint read_elements = fread(weights, sizeof(float), FEATURE_COUNT, f);\n\tfclose(f);\n\t\n\tif (read_elements != FEATURE_COUNT){\n\t\tprintf(\"ERROR: read incorrect number of weights from %s\\n\", filename);\n\t\treturn false;\n\t}\n\treturn true;\n}\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\\vspace{-0.33in}\\caption{Function that parses a weights file}\n\\label{fig:parse_weights_code}\n\\end{figure}\n\n\\section*{Part II}\n\nImplement an OpenCL kernel that performs linear classification of handwritten digits. Use the function prototype shown in Figure~\\ref{fig:kernel_v1}. \nThe kernel reads the input array \\texttt{images} which contains 10000 images. Each image in the array is 784 chars long, for a total of 10000 * 784 = 7840000 chars in the array.\nThe kernel reads the input array \\texttt{weights} which contains 10 sets of 784 weights, for a total of 7840 floats in the array. \nThe kernel writes the classification results to the output array \\texttt{results} which will contain 10000 chars when the kernel finishes.\nEach char in the \\texttt{results} array will contain a value from 0 to 9 indicating the classification result.\n\n\\lstset{}\n\\begin{figure}[H]\n\\begin{center}\n\\begin{minipage}[t]{16 cm}\n\\vspace{-0.13in}\\begin{lstlisting}\n__attribute__((reqd_work_group_size(10000,1,1)))\n__kernel void linear_classifier(global const unsigned char * restrict images, \n\t\t\t\t\t\t\t\tconstant float * restrict weights,\n\t\t\t\t\t\t\t\tglobal unsigned char * restrict results)\n{\n\t... Your Code ...\n}\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\\vspace{-0.33in}\\caption{Function prototype for the linear classifier kernel}\n\\label{fig:kernel_v1}\n\\end{figure}\n\nIn previous exercises you manually created local memories to cache frequently used values and reduce memory access overhead.\nIn this exercise, you will use Intel FPGA SDK for OpenCL's automatic cache generation feature to cache the linear classifier's weights. \nThis is done by assigning the \\texttt{constant} \nkeyword to the weights array which tells the compiler that the values in the array are constant and should be cached to improve performance.\nBy default the Intel FPGA SDK for OpenCL creates a cache that is 16384 bytes large to store the values. \nThe cache size can be changed when compiling your kernel\nusing the argument \\texttt{-const-cache-bytes=<N>}, where <N> is the number of bytes desired. \nSelect a cache size that is sufficiently large to minimize cache misses.\n\n~\\\\\nMaximize the performance of your design by unrolling your loop(s) until you run out of FPGA resources.\nModify the SW implementation from Part I to invoke your OpenCL kernel. \nEnsure that your OpenCL kernel produces the same classification accuracy as your software implementation in Part I. \nDetermine the runtime of your OpenCL kernel. How does its performance compare to that of the software implementation?\n\n\\section*{Part III}\n\nOpen the ``Area Analysis of System'' page in the compilation report of your design in Part II. 
By examining the detailed breakdowns, \nyou will see that the majority of resource utilization comes from implementing floating-point arithmetic operations. \nHow many ALUTs, FFs, and DSPs does a floating-point add require? How many ALUTs, FFs, and DSPs does a floating-point multiply require?\nHow many floating-point add operations could you fit into your FPGA, assuming you are implementing nothing else? \nHow many floating-point multiply operations could you fit into your FPGA, assuming you are implementing nothing else?\nNote that you can get the total number of available ALUTs, FFs, and DSPs on the ``Summary'' page of the report. \n\n\\section*{Part IV}\n\nIn this part of the exercise, you will modify your implementation from Part II to use fixed-point arithmetic instead of\nfloating-point arithmetic. \n\n\n\\subsubsection*{Fixed-Point Arithmetic}\n\nThe fixed-point datatype stores real numbers using a fixed number of digits after the decimal point. \nFixed-point data consists of an arbitrary number of bits which are divided into two sets. \nThe set of most significant bits represents the whole number to the left of the decimal point,\nwhile the set of least significant bits represents the fractional portion to the right of the decimal point.\nAs an example, a 16-bit fixed-point number can be divided into a 12-bit set and a 4-bit set, where the most\nsignificant 12 bits represent the whole number, and the least significant 4 bits represent the fractional number. \nSuch a fixed-point representation is called ``12.4 fixed-point format''.\nEach bit contributes 2\\textsuperscript{x} to the real number value, where x is the bit's position offset from the decimal point. \nFigure~\\ref{fig:fixed_point} shows an example of a 16-bit value which is interpreted as a 12.4 fixed-point value and as an 8.8 fixed-point value.\n\n\\begin{figure}[H]\n   \\begin{center}\n       \\includegraphics[scale=0.70]{figures/fig_fixed_point}\n   \\end{center}\n   \\caption{Interpreting a 16-bit value as 12.4 and 8.8 fixed-point values}\n\t\\label{fig:fixed_point}\n\\end{figure}\n\nAn advantage of using fixed-point arithmetic is that the operations require fewer hardware resources to implement \ncompared to floating-point arithmetic. \n\n~\\\\\nThis means that a fixed-point circuit can be smaller and more power-efficient than its floating-point equivalent. \nAlternatively, you could achieve higher performance by implementing more fixed-point units with the same hardware resources.\n\n~\\\\\nFixed-point multiplication can be computed using an integer multiplier circuit, with the caveat that the circuit\ndoes not track the decimal point locations of the multiplicands and the product. \nIt is therefore the designer's responsibility to track the decimal point locations.\nThe product's decimal point location (offset from the LSB) is determined by adding the offsets in the multiplicands.\nFor example, if you multiply a 12.4 fixed-point number by an 8.8 fixed-point number, the decimal point in the product\nwill be offset from the LSB by 4 + 8 = 12 bits. In other words, the product would have a 12-bit fraction.\n\n~\\\\\nFixed-point addition can be computed using an integer adder circuit, provided that the two summands have the same decimal point location. 
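To make the decimal-point bookkeeping concrete, here is a hedged Python sketch of 16.16 conversion, multiplication by an 8.0 pixel value, and same-format addition, using pure integer arithmetic as the hardware would (the names and test values are illustrative only):\n\\lstset{language=Python}\n\\begin{lstlisting}\nFRAC_BITS = 16  # 16.16 fixed-point format\n\ndef to_fx(x):\n    # real number -> 16.16 fixed point (integer representation)\n    return int(round(x * (1 << FRAC_BITS)))\n\n# (16.16 weight) * (8.0 pixel): fraction lengths add, 16 + 0 = 16,\n# so the integer product still has a 16-bit fraction, and the two\n# products below share a format and can be added directly.\nacc = to_fx(0.25) * 200 + to_fx(-0.5) * 10\n\nprint(acc / (1 << FRAC_BITS))  # 0.25*200 - 0.5*10 = 45.0\n\\end{lstlisting}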
\nWhen adding two fixed-point values whose decimal point locations differ, one of the summands must be shifted to match the other prior to addition.\nThe resulting sum will have the same decimal point location as its summands.\n\n\n\\subsubsection*{Modifying Your Kernel to Use Fixed-Point Arithmetic}\n\nModify your host program to parse the fixed-point weights in the directory \\textit{/design\\_files/weights\\_fxp/}.\nThese fixed-point weights are stored in 16.16 fixed-point format. \nModify your kernel to accept the fixed-point weights as an \\texttt{int} array, as shown in Figure~\\ref{fig:kernel_v2}. \nWhen multiplying the 16.16 fixed-point weights by the 8-bit pixels, the pixels can be considered to be\n8.0 fixed-point numbers. \n\n\n\\lstset{language=C}\n\\begin{figure}[H]\n\\begin{center}\n\\begin{minipage}[t]{16 cm}\n\\begin{lstlisting}\n__attribute__((reqd_work_group_size(10000,1,1)))\n__kernel void linear_classifier(global const unsigned char * restrict images, \n\t\t\t\t\t\t\t\tconstant int * restrict weights,\n\t\t\t\t\t\t\t\tglobal unsigned char * restrict results)\n{\n\t... Your Code ...\n}\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\\vspace{-0.33in}\\caption{Function prototype for the linear classifier kernel that uses fixed point weights}\n\\label{fig:kernel_v2}\n\\end{figure}\n\nUse the \\texttt{aoc} flag \\texttt{-c} to obtain a compilation report for your modified kernel and examine the resource utilization of\nimplementing fixed-point addition and multiplication. Determine the per-adder and per-multiplier resource cost and compare them \nto the numbers you determined in Part III.\nUnroll your loops to fully utilize the resources of the FPGA and maximize the kernel's performance.\nHow many times can you unroll your loop before running out of resources, and how does that compare to your design from Part II?\nCompile your design and use it to classify the MNIST test images.\nWhat is the runtime of your new kernel and how does it compare to that of your kernel from Part II?\nIs your classification accuracy affected? Why or why not? 
In what situations would changing from\nfloating-point arithmetic to fixed-point arithmetic affect (and not affect) classification accuracy?\n\n\n\\input{\\CommonDocsPath/copyright.tex}\n\\end{document}\n", "meta": {"hexsha": "38e34918f1baa5f3eecf4418f5607268dc5c8e4b", "size": 14947, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lab4/doc/opencl_lab4.tex", "max_stars_repo_name": "fpgacademy/Lab_Exercises_OpenCL", "max_stars_repo_head_hexsha": "923005e0f727cfd922d3904ad21a47faa86c7a51", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab4/doc/opencl_lab4.tex", "max_issues_repo_name": "fpgacademy/Lab_Exercises_OpenCL", "max_issues_repo_head_hexsha": "923005e0f727cfd922d3904ad21a47faa86c7a51", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab4/doc/opencl_lab4.tex", "max_forks_repo_name": "fpgacademy/Lab_Exercises_OpenCL", "max_forks_repo_head_hexsha": "923005e0f727cfd922d3904ad21a47faa86c7a51", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.5413793103, "max_line_length": 186, "alphanum_fraction": 0.7681140028, "num_tokens": 3555, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8244619220634456, "lm_q1q2_score": 0.5792092814273937}}
{"text": "*  Yuyan: Deep Neural network is usually under-determined.  If there are a set of parameters will give accurate prediction under various perturbations or disturbances or distractions,  then these would be a good set of parameters. \n\nWhy a good set of parameters should be robust with respects to drop-out?\n\n*  Xiaodong:  \n\nDropping out will reduce the number of parameters, which will decrease the generalization ability (\u6cdb\u5316\u80fd\u529b\uff09, namely decrease the inaccuracy in the test data.   This has nothing to do what activations are used. \n\nTheoretical starting point:\n\\begin{quote}\n$\\#\\Theta$ is sufficiently large that the underlying DNN model is accurate.\n\\end{quote}\n\nThe starting point, or the ideal case,  is that $\\# \\Theta \\approx \\# X$. \n\n\nDropout is only efficient if $\\# \\Theta \\gg \\# X$.\n\n\n* Hongxuan:  \n\nThis gives a random perturbation in the gradient method.  This will help to jump out local minimizers ... \n\n* Juncai:  These parameters should have special structures, namely these matrices, multiplied by a random cut-matrices (consisting of $0$ and $1$) on their right lead to the final results that have some uniform properties.\n\n\n$$\nf^{j+1}=\\theta^{j+1}\\circ g \\theta^{j}\\circ g \\circ f^{j-1}\n$$\nDrop-out:  Let $D_j$ be a diagonal matrix with $0$ or $1$ as diagonal, say $25\\%$ $0$. \n$$\nf^{j+1}=\\theta^{j+1}\\circ g \\circ \\theta^{j}\\circ {\\color{red}D_j}\\circ g \\circ f^{j-1}\n$$\nThere are two conceptional understandings, one is\n$$\nf^{j+1}=\\theta^{j+1}\\circ g \\circ \\theta^{j}\\circ[ {\\color{red}D_j}\\circ g \\circ f^{j-1}]\n$$\nThis means that we drop out $25\\%$ the output values\n$$\nf^{j+1}=\\theta^{j+1}\\circ g \\circ [\\theta^{j}\\circ {\\color{red}D_j}]\\circ g \\circ f^{j-1}\n$$\nThis means that we drop out $25\\%$ neurons. \n\nThis may be related to the following fact:\n\n\\begin{quote}\\it\nGiven $Q\\in \\mathbb R^{n\\times m}$, let $D={\\rm diag}(d_i)$ with $d_i\\in {\\rm rand}(0,1)$.  then, for large singular values\n$$\n\\sigma_i(QD-Q)\\le \\epsilon \\sigma_i(Q)\n$$  \n\\end{quote}\n\n\n* Deep Learning Book:\n\nIt is important to understand that a large portion of the power of dropoutarises from the fact that the masking noise is applied to the hidden units. This can be seen as a form of highly intelligent, adaptive destruction of the information content of the input rather than destruction of the raw values of the input. For example, if the model learns a hidden unithithat detects a face by finding thenose, then dropping $h_i$ corresponds to erasing the information that there is a nose in the image. The model must learn another $h_i$, that either redundantly encodes the presence of a nose or detects the face by another feature, such as the mouth. Traditional noise injection techniques that add unstructured noise at the input are not able to randomly erase the information about a nose from an image of a face unless the magnitude of the noise is so great that nearly all the information inthe image is removed. Destroying extracted features rather than original values allows the destruction process to make use of all the knowledge about the input distribution that the model has acquired so far.\n\nAnother important aspect of dropout is that the noise is multiplicative. 
If the noise were additive with fixed scale, then a rectified linear hidden unit $h_i$ with added noise $\\epsilon$ could simply learn to have $h_i$ become very large in order to make the added noise $\\epsilon$ insignificant by comparison. Multiplicative noise does not allow such a pathological solution to the noise robustness problem.\n\n\\newpage\n\\section{Mathematical versus practical}\n\n\n\\subsection{Mathematical}\nMathematically speaking, for a given application, we have the following conclusions:\n\\begin{enumerate}\n\\item There exist a number of smooth disjoint manifolds that contain all the possible data.\n\\item If $N=\\#\\Theta$ is sufficiently large, then there exists a DNN with these parameters that can ``accurately'' classify the underlying manifolds.\n\\item If $m=\\# X$ is sufficiently large and ``well-distributed'' for training, then the aforementioned DNN admits at least one set of parameters $\\Theta^{\\rm opt}$ consistent with the training data and, as a result, there is no ``over-fitting''!\n\\end{enumerate}\n\n\\subsection{Practical}\nWe will use training data to get an approximation of $\\Theta^{\\rm opt}$.  More precisely\n$$\n\\Theta^*=\\arg\\min_{\\Theta} {1\\over m}\\sum_{i=1}^m|f(x_i,\\Theta)-y_i|^2.\n$$\nQuestion:\n$$\n\\Theta^*\\approx \\Theta^{\\rm opt}?\n$$\nNote\n\\begin{quote}\n  $\\Theta^*$ is determined by the training data!\n\\end{quote}\n\nXu:\n\n\\begin{quote}\nIf  $m\\gg 1$ and ``good'', then $\\Theta^*\\approx \\Theta^{\\rm opt}?$\n\\end{quote}\nPractically speaking,  \n\\begin{enumerate}\n\\item $N$ is unknown.\n\\item $N\\gg 1$, but $m\\ll N$\n\\end{enumerate}\n\n\\subsubsection{Case 1:  The DNN model is good enough}\n\n\\subsubsection{Case 2:  The DNN model is  NOT good enough}\n", "meta": {"hexsha": "8fca893da69fdc362e373d700a5e431d123d10a2", "size": 4803, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/RandomNotes.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/RandomNotes.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/RandomNotes.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.03, "max_line_length": 1101, "alphanum_fraction": 0.7412034145, "num_tokens": 1289, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8244619177503205, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5792092783972936}}
{"text": "\\documentclass[../PHYS306Notes.tex]{subfiles}\n\n\\begin{document}\n\\section{Lecture 28}\n\\subsection{Lecture Notes - Differential Cross Section}\n\\subsubsection{Review}\nCross section = area of ring of radius $b$ and width $db$. Particles hitting the ring between $b$ and $b + db$ are scattered by an angle between $\\theta$ and $\\theta + d\\theta$. Scattered onto a larger ring on a sphere with scallering nucleus in center. The solid angle of the entire ring is:\n\\[d\\Omega = \\frac{2\\pi R\\sin\\theta Rd\\theta}{R^2} = 2\\pi\\sin\\theta d\\theta\\]\nAnd the solid angle of a small area is given by:\n\\[d\\Omega = \\frac{d\\phi R\\sin\\theta Rd\\theta}{R^2} = \\sin\\theta d\\phi d\\theta\\]\nThis notion is useful as often our particle detectors cover a certain fraction of the area. \n\\begin{center}\n    \\includegraphics[scale=0.5]{Lecture-28/l28-img1.png}\n\\end{center}\nThe ratio between the $d\\sigma$ of the first ring and the $d\\Omega$ of the second ring is the the differential cross section, which is:\n\\[\\dod{\\sigma}{\\Omega} = \\frac{b}{\\sin\\theta}\\abs{\\dod{b}{\\theta}}\\]\nWhere $d\\sigma = 2\\pi b db$ and $d\\Omega = 2\\pi \\sin\\theta d\\theta$. The task today is then to obtain $b$ in terms of $\\theta$ ($b(\\theta)$). Today, we will consider the case where our target is fixed (i.e. a heavy target) and try to describe the trajectory of this scenario. This is just the two body central force problem (as covered in PHYS 216), with the only difference that the orbits are open and not closed.\n\n\\subsubsection{Calculation of the differential cross-section - The Kepler Approach}\nWe first remark that this chapter is interesting as not only is it highly relevant to research fields (e.g. particle physics) but also puts something that we know (the two-body problem) into a new context. We now move onto solving this two-body central force problem. The total energy of the system in polar coordinates is given by:\n\\[E = \\frac{m}{2}\\left(\\dot{r}^2 + r^2\\dot{\\phi}^2\\right) + U(r) = \\text{Const}\\]\nWhere $U(r)$ is the scattering potential. Here the energy is conserved. Another quantity that is conserved is the angular momentum:\n\\[L = mr^2\\dot{\\phi} = \\text{Const}\\]\nFrom this, we can immediately write:\n\\[r^2\\dot{\\phi}^2 = \\frac{L^2}{m^2r^2}\\]\nHence sustituting this into the energy expression to obtain everything in terms of $r$, we have:\n\\[E = \\frac{m}{2}\\left(\\dot{r}^2 + \\frac{L^2}{m^2r^2}\\right) + U(r)\\]\nWe can therefore solve for $\\abs{\\dot{r}}$:\n\\[\\abs{\\dot{r}} = \\sqrt{\\frac{2E}{m} - \\frac{2U(r)}{m} - \\frac{L^2}{m^2r^2}}\\]\nWe have a two body problem, but it is effectively a one body problem. We may introduce an \"effective potential\":\n\\[U_{eff} = U(r) + \\frac{L^2}{2mr^2}\\]\nSince we know that $\\phi(t)$ is monotonic, we may write:\n\\[\\abs{\\dot{\\phi}} = \\frac{\\abs{L}}{mr^2}\\]\nDividing the $\\dot{\\phi}$ equation by the $\\dot{r}$ equation, we get:\n\\[\\frac{\\abs{\\dot{\\phi}}}{\\abs{\\dot{r}}} = \\abs{\\dod{\\phi}{r}} = \\frac{\\frac{\\abs{L}}{mr^2}}{\\sqrt{\\frac{2E}{m} - \\frac{2U(r)}{m} - \\frac{L^2}{m^2r^2}}}\\]\nWe can from this expression obtain the trajectory in the following way. The particle is incoming with impact parameter $b$, and deflects off by angle $\\theta$. There is some closest point of approach $r_{min}$ that divides the trajectory into two pieces, that is, the trajectory is symmetric about $r_{min}$. Call the angle at this point $\\Delta \\phi/2$. 
This can be visualized as follows:\n\\begin{center}\n    \\includegraphics[scale=0.7]{Lecture-28/l28-img2.png}\n\\end{center}\nWe therefore have that:\n\\[\\Delta \\phi = 2\\int_{r_{min}}^\\infty \\frac{\\frac{L}{r^2}}{\\sqrt{2m(E-U) - \\frac{L^2}{r^2}}}\\,dr\\]\nNote that we obtain $r_{min}$ by solving for the turnaround point, i.e.\\ where the energy equals the effective potential (the kinetic energy is zero), so:\n\\[2m(E - U) = \\frac{L^2}{r_{min}^2}\\]\nHowever, all of our equations are in terms of $L$ and $E$; we should recast these in terms of physical parameters. Set:\n\\[E = T_{\\infty} = \\frac{m}{2}v_{\\infty}^2\\]\nand \n\\[L = \\abs{\\v{r} \\times \\v{p}_{\\infty}} = mbv_{\\infty}\\]\nHence we may write:\n\\[\\Delta \\phi = 2b\\int_{r_{min}}^\\infty \\frac{dr}{r^2\\sqrt{1 - \\frac{2U(r)}{mv_{\\infty}^2} - \\frac{b^2}{r^2}}}\\]\nSo we have expressed the integral purely in terms of the mass, velocity, and impact parameter, which are all measurable variables. Note that this solution isn't really scattering; this is a very general solution to the two-body problem. \nWe can now apply it to some relevant scattering scenarios to obtain the differential cross section.\n\n\\subsubsection{Example - Hard Sphere Differential Cross-Section}\nIn this case, $r_{min} = R$ (closest point of approach is just the surface of the sphere), and $U(r) = 0$ for $r > R$ (the particles don't see each other). For this case, we have:\n\\[\\Delta \\phi = 2b\\int_R^\\infty\\frac{dr}{r^2\\sqrt{1 - \\frac{b^2}{r^2}}}\\]\nThe middle term is zero as the potential is zero in the region of interest. This integral is solvable analytically, but we may very well just look this up instead of undergoing a tedious calculation:\n\\[\\Delta \\phi = 2\\arcsin(\\frac{b}{R})\\]\nRearranging, we get:\n\\[b = R\\sin(\\frac{\\Delta \\phi}{2})\\]\nAccording to the picture we have drawn, $\\Delta \\phi + \\theta = \\pi$, so we can convert this to be in terms of $\\theta$, which works out to be:\n\\[b = R\\cos(\\frac{\\theta}{2}) = b(\\theta)\\]\nNow, all that is left to do is to calculate the differential cross section:\n\\[\\dod{\\sigma}{\\Omega} = \\frac{b}{\\sin\\theta}\\abs{\\dod{b}{\\theta}} = \\frac{R\\cos(\\frac{\\theta}{2})}{\\sin\\theta}\\abs{-\\frac{R}{2}\\sin(\\frac{\\theta}{2})} = \\frac{R^2}{2\\sin\\theta}\\cos(\\frac{\\theta}{2})\\sin(\\frac{\\theta}{2}) = \\frac{R^2}{2\\sin\\theta}\\frac{1}{2}\\sin(\\theta) = \\frac{R^2}{4}\\]\nHence calculating $\\sigma_{tot}$ (total cross section) we have:\n\\[\\sigma_{tot} = \\int \\dod{\\sigma}{\\Omega}d\\Omega = \\int \\frac{R^2}{4}d\\Omega = \\pi R^2\\]\nWhich is a good sanity check, as it agrees with what we found on Monday!\n\n\\subsubsection{Example - Coulomb Potential Differential Cross-Section}\nIn this case, we have that:\n\\[U = -\\frac{\\beta}{r}\\]\nWhere $\\beta = -kqQ < 0$ (repulsive). The procedure is exactly the same, but the integral is just harder. \n\\[\\Delta \\phi = 2b\\int_{r_{min}}^\\infty \\frac{dr}{r\\sqrt{r^2 + \\frac{2\\beta r}{mv_{\\infty}^2} - b^2}}\\]\n\\[\\Delta \\phi = \\left.2\\arccos(\\frac{1 - \\frac{mv_\\infty^2 b^2}{\\beta r}}{\\sqrt{1 + \\left(\\frac{mv_{\\infty}^2b}{\\beta}\\right)^2}})\\right|_{r_{min}}^{\\infty}\\]\nEvaluating at the bounds, we get:\n\\[\\Delta \\phi = 2\\arccos(\\frac{1}{\\sqrt{1 + \\left(\\frac{mv_\\infty^2 b}{\\beta}\\right)^2}})\\]\n(this is the term at infinity; the term at $r_{min}$ contributes $2\\arccos(-1) = 2\\pi$, which we may drop since $\\phi$ is only defined modulo $2\\pi$). 
Hence solving for $b(\\theta)$ we have:\n\\[b(\\theta) = \\frac{|\\beta|}{mv_{\\infty}^2}\\sqrt{\\frac{1}{\\sin^2(\\frac{\\theta}{2})} - 1} = \\frac{|\\beta|}{mv_{\\infty}^2}\\cot(\\frac{\\theta}{2})\\]\nUsing this, we find the differential cross section:\n\\[\\frac{d\\sigma}{d\\Omega} = \\frac{k^2q_1^2q_2^2}{16E^2}\\frac{1}{\\sin^4(\\frac{\\theta}{2})}\\]\nPlotting the Rutherford cross section, we find a characteristic divergence at the origin and a rapid decrease, with the factor $1/\\sin^4(\\frac{\\theta}{2})$ falling to $1$ at $\\theta = \\pi$:\n\\begin{center}\n    \\includegraphics[scale=0.5]{Lecture-28/l28-img3.png}\n\\end{center}\nQuestion: What is the total cross section?\n\\begin{s}\n$\\sigma = \\int\\dod{\\sigma}{\\Omega}d\\Omega = \\infty$ due to the divergence at the origin. This can be traced back to the $\\frac{1}{r}$ type behavior of the Coulomb potential. \n\\end{s}\nA remark: One could state that this (classical treatment) is nonsense and we require a quantum treatment. Luckily, the classical treatment actually does work out in this case (due to the Coulomb potential in particular), but for a full description we require a quantum formulation, which is something we can expect to see in graduate school QM. Next day we will look at the center of mass and lab frames, and how we treat scattering off of nuclei that are not infinitely heavy. \n\\end{document}", "meta": {"hexsha": "c4f370a3e0af398ba3e676c4833203ff2e71ad9b", "size": 7826, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-28/Lecture-Notes-28.tex", "max_stars_repo_name": "RioWeil/PHYS306-notes", "max_stars_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture-28/Lecture-Notes-28.tex", "max_issues_repo_name": "RioWeil/PHYS306-notes", "max_issues_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture-28/Lecture-Notes-28.tex", "max_forks_repo_name": "RioWeil/PHYS306-notes", "max_forks_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 86.0, "max_line_length": 472, "alphanum_fraction": 0.6939688219, "num_tokens": 2541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5789502685087081}}
{"text": "\\chapter{Multidimensional scaling}\n\\label{mds}\n\n%\\section{Principal co-ordinates analysis - multidimensional scaling}\n\n\nWe have looked at clustering techniques which partition an $n \\times n$ matrix of dissimilarities into groups of like individuals.   Whilst it is possible to visualise differences between individuals in terms of dendrograms, it might be more useful to have a direct representation of the relationship between individuals.   We are now going to consider techniques which let us visualise the relative distances bewteen individuals in a low dimensional structure.  Some early ideas were set out by \\cite{Richardson:1938}, and the algebraic techniques required to reconstruct a configuration from distances were established by \\cite{Young+Householder:1938}.   \\cite{Torgerson:1952} set out the foundations for this work, but developments in the technique associated with the name \\emph{principal co-ordinates analysis} were given by \\cite{Gower:1966}.   Arguably, the tecnique is most commonly referred to as \\emph{scaling}, often as classical scaling.\n\n\\section{Metric Scaling}\n\nConsider (any) distance matrix $\\boldsymbol{\\Delta}$.   It is metric if elements of $\\boldsymbol{\\Delta}$ satisfy the metric (triangle) inequality $\\delta_{ij} \\leq \\delta_{ik} + \\delta_{kj}$ for all $i$, $j$, $k$.\n\nClassical metric multidimensional scaling is based on the $n \\times n$ distance or dissimilarity matrix, we will note later some connections with principal components.   Here, we will first consider a matrix $\\boldsymbol{\\Delta}$ of \\emph{Euclidean} distances between objects, such that $\\delta_{ij}$ is the distance between points $i$ and $j$.   These $n$ points can be represented in $n-1$ dimensional space.   We are looking for a configuration of $n$ points such that the distance $d_{ij}$ between points $i$ and $j$ equals the dissimilarity $\\delta_{ij}$ for all $i$, $j$.   The dimensionality of this configuration is $q$ such that we are seeking to reconstruct an $n \\times q$ matrix $\\boldsymbol{X}$.\n\nWe can very easily get from an $n \\times p$ matrix $\\boldsymbol{X}$ to a Euclidean distance matrix.   If we first form the $n \\times n$ matrix $\\boldsymbol{Q}$ then $\\boldsymbol{Q} = \\boldsymbol{X}\\boldsymbol{X}^{T}$.   Considering this one element at a time:\n\n\\begin{equation}\nq_{rs} = \\sum_{j = 1}^{p} x_{rj}x_{sj}\n\\end{equation}\n\nWe know that the Euclidean distance is given by:\n\n\\begin{eqnarray*}\n\\delta_{rs}^{2} &=& \\sum_{j=1}^{p} (x_{rj} - x_{sj})^{2} \\\\\n &=& \\sum_{j=1}^{p} (x_{rj}^{2} + \\sum_{j=1}^{p} (x_{sj}^{2} - 2 \\sum_{j=1}^{p} x_{rj}x_{sj} \\\\\n&=&  q_{rr} + q_{ss} - 2q_{rs}\n\\end{eqnarray*}\n\nSo given $\\boldsymbol{X}$ we can find $\\boldsymbol{Q} = \\boldsymbol{X}\\boldsymbol{X}^{T}$ and hence find the Euclidean distance.\n%proof pg 106\n\nWhat we want to do now is reverse this process.    We should note that our recovered $n \\times q$ matrix is not uniquely defined - it is only defined up to translation, rotation and reflection.   To fix this, we will usually assume $\\boldsymbol{X}$ has been centred (i.e. make column means zero, such that $\\sum_{i=1}^{n} y_{ij} = 0)$.    It may be noted here that we don't necessarily want to recover an $n \\times p$ matrix, we can reconstruct a matrix with up to $n-1$ columns, but as with other dimension reduction techniques we hope to find an $n \\times q$ matrix with $q < p$; clearly if $q = 2$ it will be easy to visualise the results.   
So, to recover $\\boldsymbol{Q}$ from $\\boldsymbol{D}$:\n\n\\begin{eqnarray}\n\\sum_{r=1}^{n} d_{rs}^{2} = trace(\\boldsymbol{Q}) + n q_{ss}\\\\\n\\sum_{s=1}^{n} d_{rs}^{2} = n q_{rr} + trace(\\boldsymbol{Q}) \\\\\n\\sum_{r=1}^{n} \\sum_{s=1}^{n} d_{rs}^{2} = 2 n \\, trace(\\boldsymbol{Q})\n\\end{eqnarray}\n\nBy rearranging this and manipulating the equations above which lead us to our distance matrix, we can recover elements of $\\boldsymbol{Q}$ from $\\boldsymbol{D}$ using a double centering procedure:\n\n\\begin{displaymath}\nq_{ij} = -\\frac{1}{2} ( d_{ij}^{2} - d_{i \\cdot}^{2} -  d_{\\cdot j}^{2} +  d_{\\cdot \\cdot}^{2})\n\\end{displaymath}\n\nThe dots denote means taken over the relevant indices.\n\n% mention double centering here\nIn summary, to find $\\boldsymbol{Q}$ given $\\boldsymbol{D}$:\n\n\\begin{itemize}\n\\item Square it element by element\n\\item Double centre it \n  \\begin{itemize}\n  \\item  subtract column means\n  \\item subtract row means\n  \\item add overall mean\n  \\end{itemize}\n\\item Multiply by $-\\frac{1}{2}$.\n\\end{itemize}\n\n\n%some people approximate Q with a similarity matrix\n\nHaving found $\\boldsymbol{Q}$ ($\\boldsymbol{Q} = \\boldsymbol{XX}^{T}$), all we need is to find a suitable $\\boldsymbol{X}$, which sounds like some kind of matrix square root problem.   Given Euclidean distances, $\\boldsymbol{Q}$ is symmetric and we can do this using the spectral decomposition $\\boldsymbol{Q} = \\boldsymbol{E \\Lambda E}^{T}$ where $\\boldsymbol{\\Lambda} = {\\rm diag}(\\lambda_{1}, \\lambda_{2}, \\ldots, \\lambda_{n})$, a diagonal matrix of ordered eigenvalues, and $\\boldsymbol{E}$ is the matrix whose columns are the corresponding (normalised) eigenvectors.   If  $\\boldsymbol{Q} = \\boldsymbol{E \\Lambda}^{\\frac{1}{2}} \\boldsymbol{\\Lambda^{\\frac{1}{2}} E}^{T}= \\boldsymbol{X} \\boldsymbol{X}^{T}$ then $\\boldsymbol{X} =  \\boldsymbol{E \\Lambda}^{\\frac{1}{2}}$ and we have recovered the co-ordinates from the inter-point distances.\n\n\n\\begin{displaymath}\n\\boldsymbol{X} = \\left( \\sqrt{\\lambda_{1}} \\left( \\begin{array}{r} e_{11} \\\\ e_{12} \\\\ \\vdots \\\\ e_{1n} \\end{array} \\right), \\ldots,   \\sqrt{\\lambda_{n}} \\left( \\begin{array}{r} e_{n1} \\\\ e_{n2} \\\\ \\vdots \\\\ e_{nn} \\end{array} \\right) \\right)\n\\end{displaymath}\n\nSo if we want a one dimensional representation, we just use  $\\left( \\sqrt{\\lambda_{1}} \\boldsymbol{e}_{1} \\right) $, for a two dimensional representation we would use  $\\left( \\sqrt{\\lambda_{1}} \\boldsymbol{e}_{1},   \\sqrt{\\lambda_{2}} \\boldsymbol{e}_{2} \\right)$\n\n\n\\subsection{Similarities with principal components analysis}\n\nA short diversion noting a few similarities with principal component analysis may be in order.   
Consider the centred data matrix $\\boldsymbol{X}$:\n\n\\begin{displaymath}\nC = \\frac{1}{n-1} \\boldsymbol{X}^{T} \\boldsymbol{X}\n\\end{displaymath}\n\nPrincipal components come from an eigenanalysis of $\\boldsymbol{C}$, here we denote the eigenvalues of $\\boldsymbol{C}$ by $\\mu_{i}$ and associated eigenvectors by $\\boldsymbol{a}_{i}$:\n\n\\begin{eqnarray*}\n\\boldsymbol{C} \\boldsymbol{a}_{i} &=& \\mu_{i} \\boldsymbol{a}_{i}\\\\\n\\frac{1}{n-1}\\boldsymbol{X}^{T} \\boldsymbol{X} \\boldsymbol{a}_{i} &=& \\mu_{i} \\boldsymbol{a}_{i}\\\\\n\\boldsymbol{X}^{T} \\boldsymbol{X} \\boldsymbol{a}_{i} &=& (n-1)\\mu_{i} \\boldsymbol{a}_{i}\\\\\n\\boldsymbol{X}\\boldsymbol{X}^{T} \\boldsymbol{X} \\boldsymbol{a}_{i} &=& (n-1)\\mu_{i} \\boldsymbol{X} \\boldsymbol{a}_{i}\\\\\n\\boldsymbol{Q} \\underbrace{\\boldsymbol{X} \\boldsymbol{a}_{i}}_{\\boldsymbol{z}_{i}} &=& (n-1)\\mu_{i} \\underbrace{\\boldsymbol{X} \\boldsymbol{a}_{i}}_{\\boldsymbol{z}_{i}}\n\\end{eqnarray*}\n\nSo $\\boldsymbol{X} \\boldsymbol{a}_{i} = \\boldsymbol{z}_{i}$ is an eigenvector of $\\boldsymbol{Q}$ with corresponding eigenvalue $(n-1) \\mu_{i}$.   If we want a normalised eigenvector it may be worth noting that the length can be found as follows:\n\n\\begin{eqnarray*}\n||\\boldsymbol{z}_{i}||^{2} = \\boldsymbol{z}_{i}^{T}\\boldsymbol{z}_{i} = \\boldsymbol{a}_{i}^{T} \\boldsymbol{X}^{T} \\boldsymbol{X} \\boldsymbol{a}_{i} &=& (n-1) \\boldsymbol{a}_{i}^{T} \\boldsymbol{C} \\boldsymbol{a}_{i}\\\\\n &=& (n-1) \\boldsymbol{a}_{i}^{T} \\mu_{i} \\boldsymbol{a}_{i}\\\\\n &=& (n-1) \\mu_{i} \\frac{\\boldsymbol{a}_{i}^{T} \\boldsymbol{a}_{i}}{||\\boldsymbol{a}_{i}||^{2}}\\\\\n &=& (n-1) \\mu_{i}\n\\end{eqnarray*}\n\nSo, $||\\boldsymbol{z}_{i}|| = \\sqrt{(n-1) \\mu_{i}}$.   Hence a normalised eigenvector for $\\boldsymbol{Q}$ takes the form $\\frac{1}{\\sqrt{(n-1)\\mu_{i}}} \\boldsymbol{Xa}_{i}$ with eigenvalue $(n-1) \\mu_{i}$\n\n\nTherefore, our eigenvalues and eigenvectors found from multidimensional scaling / principal co-ordinates analysis are related to those found from decomposition of the covariance of the scaled data matrix:\n\n\\begin{eqnarray*}\n\\lambda_{i} = (n-1) \\mu_{i}\\\\\ne_{i} = \\frac{1}{\\sqrt{\\lambda_{i}}} \\boldsymbol{Xa}_{i}\n\\end{eqnarray*}\n\nRemember that $\\boldsymbol{Z} = \\boldsymbol{XA}^{T}$, where:\n\n\\begin{displaymath}\n\\boldsymbol{A} = \\left( \\begin{array}{c} \\boldsymbol{a}_{1}^{T} \\\\  \\boldsymbol{a}_{2}^{T} \\\\ \\vdots \\\\ \\boldsymbol{a}_{n}^{T}   \\end{array} \\right)\n\\end{displaymath}\n\nSo the columns of $\\boldsymbol{Z}$ are the $\\boldsymbol{Xa}_{i}$:\n\n\\begin{displaymath}\n \\left( \\boldsymbol{Xa}_{1}, \\boldsymbol{Xa}_{2}, \\ldots, \\boldsymbol{Xa}_{n} \\right)\n\\end{displaymath}\nin other words, this is our matrix of principal component scores.\n\n\n%This technique uses eigen decompositions (which we can do in our sleep by now) and a similarity matrix.   The thing to emphasise is that we are decomposing an $n \\times n$ matrix of similarities between points, rather than in principal components where we had a $p \\times p$ matrix of correlations / variance-covariances.\n\n\n\n\\section{Visualising multivariate distance}\n\\label{visdist}\n\nConsider the US Arrests data (we have looked at this a few times).   
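\n\nBefore switching to R, the eigenvalue relationship $\\lambda_{i} = (n-1)\\mu_{i}$ from the previous subsection is easy to verify numerically; a NumPy sketch with simulated data, purely as a sanity check:\n\n\\singlespacing\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nX = rng.normal(size=(10, 4))\nX = X - X.mean(axis=0)\nn, p = X.shape\n\nmu = np.linalg.eigvalsh(X.T @ X / (n - 1))[::-1]  # eigenvalues of C\nlam = np.linalg.eigvalsh(X @ X.T)[::-1][:p]       # top eigenvalues of Q\nprint(np.allclose(lam, (n - 1) * mu))             # True\n\\end{verbatim}\n\\onehalfspacing\n\n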
If you still have the distance matrix \\texttt{spot} created earlier, you can run principal co-ordinates analysis quite easily:\n\n\\singlespacing\n\\begin{verbatim}\n> spot <- dist(USArrests, method = \"euclidean\")\n> what <- cmdscale(spot)\n> plot(what[,1], what[,2], \n   xlab = \"Axis 1\", ylab = \"Axis 2\", \n   main = \"US Arrests\")\n> identify(what[,1], what[,2], row.names(USArrests))\n\\end{verbatim}\n\\onehalfspacing\n\nBy default you extract two variables, and you don't get the eigenvalues.   You can alter the function call if you want to change that (see \\texttt{?cmdscale}).\n\nYou might like to use \\texttt{identify()} to check that very odd points according to principal co-ordinates are also very odd according to your cluster analysis.\n\n\n\\section{Assessing the quality of fit}\n\n\nOne measure of discrepancy is given by:\n\n\\begin{displaymath}\n\\varphi = \\sum_{i=1}^{n} \\sum_{j=1}^{n} (\\delta_{ij}^{2} - d_{ij}^{2})\n\\end{displaymath}\n\nand it can be shown \\citep{Mardia+etal:1979}%, page 406} \n that:\n\n\\begin{displaymath}\n\\varphi = 2n(\\lambda_{q+1} + \\ldots + \\lambda_{n})\n\\end{displaymath}\n\n\nSo, if we fit a model with $q = n-1 = 49$ (in order to measure the size of the discarded eigenvalues - most of these eigenvalues are zero hence warnings about eigenvalues below zero):\n\n\\singlespacing\n\\begin{verbatim}\n> what <- cmdscale(spot, eig = TRUE, k = 49)\n> 2 * dim(USArrests)[1] * sum(what$eig[3:49])\n\\end{verbatim}\n\\onehalfspacing\n\nWe should find that this gives the same value as a direct comparison of the distance matrices formed from the data and the $q=2$ dimensional representation.  (Note that we convert the distance matrices into ordinary matrices, and then we carry out vectorised operations to take squares, subtract elementwise and sum)\n\n\\singlespacing\n\\begin{verbatim}\nwhat <- cmdscale(spot, eig = TRUE, k = 2)\ndelta <- as.matrix(dist(USArrests, upper = TRUE))\nd <- as.matrix(dist(what$points, upper = TRUE))\nsum(as.vector(delta)^2 - as.vector(d)^2)\n\\end{verbatim}\n\\onehalfspacing\n\n\nThis has clear analogies with the percent trace measure used in principal components analysis:\n\n\\begin{equation}\n\\frac{\\sum_{i=1}^{q} \\lambda_{i}}{\\sum_{i=1}^{p} \\lambda_{i}}\n\\end{equation}\n\n\\singlespacing\n\\begin{verbatim}\n> what <- cmdscale(spot, eig = TRUE, k = 4)\n> what$eig / sum(what$eig)\n[1] 0.9655342206 0.0278173366 0.0057995349 0.0008489079\n\\end{verbatim}\n\\onehalfspacing\n\n\nConsiderable work has been carried out on goodness of fit measures for scaling problems.   If $\\boldsymbol{\\Delta}$ is based on a measure other than the Euclidean then our reconstructed $\\boldsymbol{Q}$ may not be positive semi-definite (in other words we find some negative eigenvalues and imaginary co-ordinates).   If $\\hat{\\boldsymbol{Q}} = \\sum_{i=1}^{q} \\lambda_{i} \\boldsymbol{e}_{i} \\boldsymbol{e}_{i}^{T}$ then \\cite{Mardia:1978} suggests the discrepancy measure:\n\n\\begin{equation}\n\\varphi' = trace\\{(\\boldsymbol{Q} - \\hat{\\boldsymbol{Q}})^{2}\\}\n\\end{equation}\n
Following \\cite{Eckart+Young:1936}, we would use:\n\n\\begin{equation}\n\\frac{\\sum_{i=1}^{q} \\lambda_{i}}{\\sum_{i=1}^{p} |\\lambda_{i}|}\n\\end{equation}\n\nor, following \\cite{Mardia:1978}, we would use:\n\n\\begin{equation}\n\\frac{\\sum_{i=1}^{q} \\lambda_{i}^{2}}{\\sum_{i=1}^{p} \\lambda_{i}^{2}}\n\\end{equation}\n\n\n\\subsection{Sammon Mapping}\n\nClassical metrical scaling works on orthogonal projections, and has the attractive property of an exact analytical solution.   This is quite restrictive; \\cite{Sammon:1969} suggested minimising the discrepancy measure:\n\n\\begin{displaymath}\n\\varphi'' =  \\sum_{i=1}^{n} \\sum_{j=1}^{n} (\\delta_{ij} - d_{ij})^{2}\n\\end{displaymath}\n\nThis has no analytical solution, and numerical methods must be used.   But it is worth noting that a set of disparities is generated:\n\n\\begin{equation}\n\\label{rsssammon}\n\\hat{d}_{ij} = a + b \\delta_{ij}\n\\end{equation}\n\nThis should look like a reasonably familiar formula, and you may have guessed that the residual sum of squares from Equation~\\ref{rsssammon} yields another discrepancy measure.   This measure can (should / must) be normalised with reference to its size $\\sum_{i=1}^{n} \\sum_{j=1}^{n} d_{ij}^{2}$, giving what is called the \\textbf{st}andardised \\textbf{re}sidual \\textbf{s}um of \\textbf{s}quares:\n\n\\begin{equation}\nSTRESS = \\left( \\frac{\\sum_{i=1}^{n} \\sum_{j=1}^{n} (d_{ij} - \\hat{d}_{ij})^{2}}{\\sum_{i=1}^{n} \\sum_{j=1}^{n} d_{ij}^{2}} \\right) ^{\\frac{1}{2}}\n\\end{equation}\n\nA variant based on squared distances is also used:\n\n\\begin{equation}\nSSTRESS = \\left( \\frac{\\sum_{i=1}^{n} \\sum_{j=1}^{n} (d_{ij}^{2} - \\hat{d}_{ij}^{2})^{2}}{\\sum_{i=1}^{n} \\sum_{j=1}^{n} d_{ij}^{4}} \\right) ^{\\frac{1}{2}}\n\\end{equation}\n\nNormalisation means that both these measures take on values between 0 and 1; lower values indicate better fit.   Values below 0.1 are usually considered adequate, but it may be worth noting that \\cite{Kruskal:1964a} suggests values below 0.2 give a poor fit, values below 0.1 a fair fit, values below 0.05 a good fit, and values below 0.025 an excellent fit.\n\n%\\sum_{i=1}^{q} \\lambda_{i}\n%\\sum_{i=1}^{n} \\lambda_{i}\n%abs(v) max(v,0)\n\n%F: a numeric vector of length 2, equal to say (g.1,g.2), where\n%          g.i = (sum{j=1..k} lambda[j]) / (sum{j=1..n} T.i(lambda[j])),\n%          where lambda[j] are the eigenvalues (sorted decreasingly),\n%          T.1(v) = abs(v), and T.2(v) = max(v, 0). \n\n\n\n\n%Iff $d_{ij} = d_{ik} + d_{kj}$ then principal co-ordinates also acts as a method of metric scaling.   
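\nAs a small illustration of the STRESS formula (a sketch only; \\texttt{d} and \\texttt{dhat} are assumed to already hold the fitted distances and the disparities as vectors of matching length):\n\n\\singlespacing\n\\begin{verbatim}\n> STRESS <- sqrt(sum((d - dhat)^2) / sum(d^2))\n\\end{verbatim}\n\\onehalfspacing\n\n% 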
You will be given further directed reading on this topic in the relevant lecture.\n\n%principle co-ordinates analysis wk 104-109\n\n\n\n%\\section{Modern approaches}\n%\\label{modernmds}\n\n\n\n\n\n\n\n\n\n%%% Local Variables: ***\n%%% mode:latex ***\n%%% TeX-master: \"../book.tex\"  ***\n%%% End: ***", "meta": {"hexsha": "1d742b149f4fe5f2427d25cd86420e4f56fe8762", "size": 14728, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/pco.tex", "max_stars_repo_name": "phewson/mvstats", "max_stars_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/pco.tex", "max_issues_repo_name": "phewson/mvstats", "max_issues_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-08-28T16:37:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-28T16:49:11.000Z", "max_forks_repo_path": "chapters/pco.tex", "max_forks_repo_name": "phewson/mvstats", "max_forks_repo_head_hexsha": "f39ab1c1b97c89e26c708bd6d532fe13c063a95c", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.4128113879, "max_line_length": 949, "alphanum_fraction": 0.7014530147, "num_tokens": 4715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8128673178375734, "lm_q1q2_score": 0.5789502652799092}}
{"text": "\\documentclass{article}\n\n\\usepackage{listings}\n\\lstset{\n  basicstyle=\\ttfamily,\n  keywordstyle=\\ttfamily,\n  escapeinside={(*@}{@*)},\n}\n\n\\usepackage{xcolor}\n\\newcommand\\hlBlame[1]{\\colorbox{red!25}{#1}}\n\n\\begin{document}\n\n\n\\section{Debugging and Functional Programming}\n\nYou will be shown OCaml programs that \\emph{do not type-check}, the code\nimplicated by the type checker will be \\hlBlame{highlighted}. You will\nalso be given English comments explaining what the program \\emph{should}\ndo, as well as assertions that \\emph{should} type check \\emph{and} succeed.\n\n\n\\subsection{Concatenating Strings}\n\n\\subsubsection{Control}\n\\begin{lstlisting}\n  (* \"sepConcat sep [s1;s2;s3]\" should insert \"sep\"\n     between \"s1\", \"s2\", and \"s3\", and concatentate\n     the result. *)\n\n  let rec sepConcat sep sl =\n    match sl with\n    | [] -> \"\"\n    | h::t ->\n        let f a x = a ^ (sep ^ x) in\n        let base = [] in\n        (*@\\hlBlame{List.fold\\_left f base sl}@*)\n\n  assert( sepConcat \",\" [\"foo\"; \"bar\"; \"baz\"] = \"foo,bar,baz\" )\n\\end{lstlisting}\n\n\\begin{enumerate}\n\\item Why is \\verb!sepConcat! not well-typed?\n\\item Fix the \\verb!sepConcat! program.\n\\end{enumerate}\n\n\\subsubsection{Treatment}\n\\begin{lstlisting}\n  (* \"sepConcat sep [s1;s2;s3]\" should insert \"sep\"\n     between \"s1\", \"s2\", and \"s3\", and concatentate\n     the result. *)\n\n  let rec sepConcat sep sl =\n    match sl with\n    | [] -> \"\"\n    | h::t ->\n        let f a x = a ^ (sep ^ x) in\n        let base = (*@\\hlBlame{[]}@*) in\n        List.fold_left f base sl\n\n  assert( sepConcat \",\" [\"foo\"; \"bar\"; \"baz\"] = \"foo,bar,baz\" )\n\\end{lstlisting}\n\n\\begin{enumerate}\n\\item Why is \\verb!sepConcat! not well-typed?\n\\item Fix the \\verb!sepConcat! program.\n\\end{enumerate}\n\n\\subsection{Padding Lists}\n\n\\subsubsection{Control}\n\\begin{lstlisting}\n  (* \"padZero xs ys\" returns a pair \"(xs', ys')\"\n     where the shorter of \"xs\" and \"ys\" has been\n     left-padded by zeros until both lists have\n     equal length. *)\n\n  let rec clone x n =\n    if n <= 0 then\n      []\n    else\n      x :: clone x (n - 1)\n\n  let padZero l1 l2 =\n    let n = List.length l1 - List.length l2 in\n    if n < 0 then\n      (clone 0 ((-1) * n) @ l2, l2)\n    else\n      (l1, (*@\\hlBlame{clone 0 n}@*) :: l2)\n\n  assert( padZero [1;2] [1] = ([1;2], [0;1]) )\n\\end{lstlisting}\n\n\\subsubsection{Treatment}\n\\begin{lstlisting}\n  (* \"padZero xs ys\" returns a pair \"(xs', ys')\"\n     where the shorter of \"xs\" and \"ys\" has been\n     left-padded by zeros until both lists have\n     equal length. *)\n\n  let rec clone x n =\n    if n <= 0 then\n      []\n    else\n      x :: clone x (n - 1)\n\n  let padZero l1 l2 =\n    let n = List.length l1 - List.length l2 in\n    if n < 0 then\n      (clone 0 ((-1) * n) @ l2, l2)\n    else\n      (l1, clone 0 n (*@\\hlBlame{::}@*) l2)\n\n  assert( padZero [1;2] [1] = ([1;2], [0;1]) )\n\\end{lstlisting}\n\n\\begin{enumerate}\n\\item Why is \\verb!padZero! not well-typed?\n\\item Fix the \\verb!padZero! program.\n\\end{enumerate}\n\n\\subsection{Big-Integer Arithmetic}\n\n\\subsubsection{Control}\n\\begin{lstlisting}\n  (* \"mulByDigit d [n1;n2;n3]\" should multiply the\n     \"big integer\" \"[n1;n2;n3]\"\n     by the single digit \"d\". 
*)\n\n  let rec mulByDigit d n =\n    match List.rev n with\n    | []   -> []\n    | h::t -> [(*@\\hlBlame{mulByDigit d t}@*); (h * d) mod 10]\n\n  assert( mulByDigit 4 [2;5] = [1;0;0] )\n\\end{lstlisting}\n\n\\begin{enumerate}\n\\item Why is \\verb!mulByDigit! not well-typed?\n\\item Fix the \\verb!mulByDigit! program.\n\\end{enumerate}\n\n\\subsubsection{Treatment}\n\\begin{lstlisting}\n  (* \"mulByDigit d [n1;n2;n3]\" should multiply the\n     \"big integer\" \"[n1;n2;n3]\"\n     by the single digit \"d\". *)\n\n  let rec mulByDigit d n =\n    match List.rev n with\n    | []   -> []\n    | h::t -> (*@\\hlBlame{[mulByDigit d t; (h * d) mod 10]}@*)\n\n  assert( mulByDigit 4 [2;5] = [1;0;0] )\n\\end{lstlisting}\n\n\\begin{enumerate}\n\\item Why is \\verb!mulByDigit! not well-typed?\n\\item Fix the \\verb!mulByDigit! program.\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "2aaec98aca6fc6e3d7f4551771a14511fb77fbdb", "size": 3957, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "study/study.tex", "max_stars_repo_name": "ucsd-progsys/nate", "max_stars_repo_head_hexsha": "8b1267cd8b10283d8bc239d16a28c654a4cb8942", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2017-08-30T23:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-25T23:08:55.000Z", "max_issues_repo_path": "study/study.tex", "max_issues_repo_name": "ucsd-progsys/ml2", "max_issues_repo_head_hexsha": "8b1267cd8b10283d8bc239d16a28c654a4cb8942", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "study/study.tex", "max_forks_repo_name": "ucsd-progsys/ml2", "max_forks_repo_head_hexsha": "8b1267cd8b10283d8bc239d16a28c654a4cb8942", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-31T19:50:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T19:50:33.000Z", "avg_line_length": 24.1280487805, "max_line_length": 75, "alphanum_fraction": 0.6141015921, "num_tokens": 1330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8128673133042217, "lm_q1q2_score": 0.5789502620511101}}
{"text": "\\documentclass[11pt]{article}\n\n\\title{Clebsch-Gordan coefficients\\footnote\n{Copyright 2007 by Edmond Orignac.\nThis file is released under the terms of the GNU General Public License, version 2.}}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Wigner recoupling coefficients}\n\nThe Maxima script \\texttt{clebsch\\_gordan.mac} defines the $3j,6j$ and $9j$ \n coefficients that are used in the theory of addition of angular momenta \nin quantum mechanics\\cite{landau_mecaq,messiah_field_chapter}. \n\n% The Book of Messiah is available from Dover in an english translation.\n% Angular momenta and Wigner coefficients can be found in Appendix C.  \n% An online reference is:\n% Weisstein, Eric W. \"Wigner 9j-Symbol.\" From MathWorld--A Wolfram Web Resource.% http://mathworld.wolfram.com/Wigner9j-Symbol.html \n\n\\subsection{Wigner $3j$ coefficients}\n\nThe Maxima function \\texttt{wigner\\_3j(j1,j2,m1,m2,j,m)} computes the $3j$ \ncoefficient of Wigner.\n\nThe Wigner $3j$ coefficient appears in the addition of a pair of angular \nmomenta in Quantum Mechanics. \nIt is defined as\\cite{landau_mecaq,messiah_field_chapter}:\n\\begin{equation}\n  \\label{eq:3j-def}\n  \\left(\\begin{array}{ccc} j_1 & j_2 & j \\\\ m_1 & m_2 & m\\end{array} \\right) = (-1)^{j_1-j_2-m} \\frac 1 {\\sqrt{2j+1}} \\langle j_1,m_1; j_2, m_2 | j,-m\\rangle,   \n\\end{equation} \n\nwhere $ \\langle j_1,m_1; j_2, m_2 | j,m\\rangle$ is the \nClebsch-Gordan coefficient. The Clebsch-Gordan coefficient is used to \nconstruct the state of total angular momentum $j$ and total projection \nof angular momentum $m$ as a linear combination of states of angular momenta \n$j_1$ and $j_2$ and respective projections $m_1$ and $m_2$.\nOne has:\n\\begin{equation}\n  |j,m\\rangle = \\sum_{m_1,m_2} \\langle j_1,m_1;j_2,m_2|j,m\\rangle |j_1,m_1\\rangle  |j_2,m_2\\rangle  \n\\end{equation}\nThe advantage of working with the $3j$ coefficients instead of the \nClebsch-Gordan coefficients is that the former are more symmetric\\cite{landau_mecaq}. \n \n\nThe $3j$ coefficient is computed by application of  \nEq. (27.9.1) p. 1006 of \\cite{abramowitz_math_functions}.  \n\n\n\\subsection{Wigner $6j$ coefficients}\n\nThe Maxima function \\texttt{wigner\\_6j(j1,j2,j3,j4,j5,j6)} computes the $6j$ \ncoefficient of Wigner.\n\nThe Wigner $6j$ coefficients appears in the addition of three angular momenta. \nWhen one is adding three angular momenta, one can form a first \npair of angular momenta, add them together to form a new angular momentum\nusing the $3j$ coefficients, and add the resulting angular momenta with the\nremaining angular momentum\\cite{landau_mecaq,messiah_field_chapter}.\nThere are 3 different ways of grouping the angular momenta, which leads to \ndifferent representations of the total angular momentum. \nThe Wigner $6j$ coefficients are used to pass from one representation to the \nother. \n\nThe notation for the $6j$ symbols is:\n\\begin{equation}\n  \\left\\{\\begin{array}{ccc}j_1 & j_2 & j_3 \\\\ j_4 & j_5 & j_6\\end{array} \\right\\} \n\\end{equation}\n\nThe $6j$ coefficient is computed by application of the formula  p. 513 Eq. (108,10) of \\cite{landau_mecaq} or the equivalent formula  p. 915, Eq. (36) of \\cite{messiah_field_chapter}. \n\n\n\\subsection{Wigner $9j$ coefficients}\n\nThe function\\texttt{wigner\\_9j(a,b,c,d,e,f,h,i,j)} computes \nthe $9j$ coefficient of Wigner. \n\nThe $9j$ coefficients appears in the addition of four angular momenta. 
\n\n\\section{Limitations}\n\nThe $3nj$ coefficients with $n\\ge 4$ (addition of $n+1$ angular momenta) are not \nimplemented. The theory of these coefficients \ncan be found in the book by Edmonds, \\emph{Angular Momentum in Quantum Mechanics} \n(Princeton University Press). \n\nVarious other coefficients can be defined that are related to the $3nj$ \ncoefficients, such as the Racah $W$ or $X$ coefficients~\\cite{messiah_field_chapter}. These coefficients are not implemented.\n\nAs the computation is done using exact formulas, it will break down if the\nangular momenta that are entered are too large. For such cases, one should\nimplement recurrence formulas or use asymptotic expansions. \n\n\n\\begin{thebibliography}{1}\n\n\\bibitem{abramowitz_math_functions}\n{\\sc Abramowitz, M., and Stegun, I.}\n\\newblock {\\em Handbook of Mathematical Functions}.\n\\newblock Dover, New York, 1972.\n\n\\bibitem{landau_mecaq}\n{\\sc Landau, L.~D., and Lifshitz, E.~M.}\n\\newblock {\\em Quantum Mechanics: Non-Relativistic Theory}.\n\\newblock Pergamon, New York, 1962.\n\n\\bibitem{messiah_field_chapter}\n{\\sc Messiah, A.}\n\\newblock {\\em M\\'ecanique Quantique}, vol.~2.\n\\newblock Dunod, Paris, 1995.\n\n\\end{thebibliography}\n\n\n\\end{document}\n", "meta": {"hexsha": "8388712767a1adf515c3ec972afef8fc41d69876", "size": 5161, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "maxima/src/maxima/share/contrib/clebsh-gordan.tex", "max_stars_repo_name": "nilqed/spadlib", "max_stars_repo_head_hexsha": "d317f6abdeff4fedc24231a9a39c51c3121f3475", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-09T21:31:09.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-09T21:31:09.000Z", "max_issues_repo_path": "maxima/src/maxima/share/contrib/clebsh-gordan.tex", "max_issues_repo_name": "nilqed/spadlib", "max_issues_repo_head_hexsha": "d317f6abdeff4fedc24231a9a39c51c3121f3475", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "maxima/src/maxima/share/contrib/clebsh-gordan.tex", "max_forks_repo_name": "nilqed/spadlib", "max_forks_repo_head_hexsha": "d317f6abdeff4fedc24231a9a39c51c3121f3475", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5149253731, "max_line_length": 184, "alphanum_fraction": 0.7550862236, "num_tokens": 1554, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.8128673110375458, "lm_q1q2_score": 0.5789502604367107}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[]{algorithm2e}\n\\usepackage{ amssymb }\n\\usepackage{amsmath}\n\\usepackage[hyphens]{url}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\n\\definecolor{listinggray}{gray}{0.9}\n\\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}\n\\lstset{\n\tbackgroundcolor=\\color{lbcolor},\n\ttabsize=4,\n\tlanguage=C++,\n\tcaptionpos=b,\n\ttabsize=3,\n\tframe=lines,\n\tnumbers=left,\n\tnumberstyle=\\tiny,\n\tnumbersep=5pt,\n\tbreaklines=true,\n\tshowstringspaces=false,\n\tbasicstyle=\\footnotesize,\n\t%  identifierstyle=\\color{magenta},\n\tkeywordstyle=\\color[rgb]{0,0,1},\n\tcommentstyle=\\color{Darkgreen},\n\tstringstyle=\\color{red}\n}\n\n\\begin{document}\n\n\\title{Project 01 CS-790}\n\\author{Brandon Bluemner}\n\\date{2017}\n\\maketitle\n% //Start\n\\begin{abstract}\nAnalysis of c++11 implementation of Dijkstra's algorithm. \n\n\\end{abstract}\n\n% ==========================================================================================================================\n%  Algorithm Section 1\n% ==========================================================================================================================\n\\section[Algorithm]{Algorithm}\n\nAlgorithm taken from class with some additional information for wiki \\cite{ Wikipedia}.\n\\begin{algorithm}\n\n\\KwData{  \\textit{source:} source node, \\textit{$goal_v$:} collection or goals,\\\\ \\textit{cost:} function that returns cost between edge $u \\to u_i$, \\\\ \\textit{succ:} function that returns the neighbors of node u   }\n\\KwResult{ \\textit{path:} collection to store path in}\n\\SetKwProg{Fn}{Function}{}{}\n\\Fn{run(source, $goal_v$, path, cost, succ)}{\n\t$I$ u;\n\t$T$ \\_cost $\\leftarrow \\infty$;\n\t$frontier.push( {0,source})$ // priority que \\\\\n\t$current.insert(source)$ //keep track of current node in frontier \\\\\n\t$map<I,T> g $ // This will keep track of cost for node I cost T\\\\\n\t$g[source]$  $\\leftarrow$  0;\n\t$I _goal$ $\\leftarrow goal_v.at(0)$\\\\\n\t\\While{ $frontier.empty() = false$}{\n\t\t$u$ $\\leftarrow$ frontier.top() // get next node \\\\\n\t\t$frontier.pop()$  // move remove the next from the que \\\\\n\t\t$current.erase(current.find(u))$ // remove u from current \\\\\t\t\n\t\t\\For{$auto goal$ $\\in$ $goal_v$}{ \n\t\t\t\\If{$ u = goal$ and $g[goal]<$\\_cost}{\n\t\t\t\t\\_cost  $\\leftarrow g[goal]$;\n\t\t\t\t\\_goal  $\\leftarrow goal$ \\\\\n\t\t\t}\n\t\t}\n\t\t$explored.insert(u)$;\n\t\t$vector<Edge> successor$; $ succ(u,successor)$;\n\t\t\\\\\n\t\t\t\\For{$ auto$ $s : successor $} {\n\t\t\t\t$I ui \\leftarrow s.get_target()$\\\\\n\t\t\t\t// ui not in E and ui not in f\\\\\n\t\t\t\t\\If{$explored.find(ui) = explored.end() and current.find(ui) = current.end()$}{\n\t\t\t\t\t\t$g[ui] \\leftarrow g[u] + cost(u,ui) frontier.push({g[ui],ui})$\\\\\n\t\t\t\t\t\t$current.insert(ui)$;\n\t\t\t\t\t\t$path[ui] \\leftarrow u$\n\t\t\t\t}\t\t\t\t\t\n\t\t\t\t\\ElseIf{$current.find(ui) \\neq current.end()$}{\n\t\t\t\t\t\\If{ $g[u] + cost(u,ui) < g[ui]$ }{\n\t\t\t\t\t\t$g[ui] \\leftarrow g[u] + cost(u,ui)$;\n\t\t\t\t\t\t$path[ui] \\leftarrow u$\n\t\t\t\t\t}\n\t\t\t\t}\t\t\t\t\n\t\t\t\t\\ElseIf{$explored.find(ui) \\neq explored.end()$}\n\t\t\t\t{\n\t\t\t\t\t\\If{ $g[u] + cost(u,ui) < g[ui] $}{\n\t\t\t\t\t\t$explored.erase(explored.find(ui))$;\n\t\t\t\t\t\t$frontier.push({g[ui],ui})$;\n\t\t\t\t\t\t$current.insert(source)$;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t}\n\t}\n}\n\\end{algorithm}\n\\clearpage\n% 
\\clearpage\n% ==========================================================================================================================\n%  Implementation Section 2\n% ==========================================================================================================================\n\\section[Implementation]{Implementation}\nThe implementation of Dijkstra utilizes several of the C++11 standard library data structures.\nBelow is how the data structures are defined, together with the running time of each operation \\cite{northwestern}.\nThe core algorithm is implemented under the betacore namespace.\n%----------------------------------------\n%\tStd::Vector\n% ---------------------------------------\n\\subsection{std::vector}\n\\begin{lstlisting}\n\ttemplate<\n    class T,\n    class Allocator = std::allocator<T>\n> class vector;\n\\end{lstlisting}\n\\begin{tabular}{ l l l }\n\\textbf{Constructors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nvector$<$T$>$ v; & Make an empty vector. & O(1)\\\\\nvector$<$T$>$ v(n); & Make a vector with n elements. & O(n)\\\\\nvector$<$T$>$ v(n, value); & Make a vector with n elements, initialized to value. & O(n)\\\\\nvector$<$T$>$ v(begin, end); & Make a vector and copy the elements from begin to end. & O(n)\\\\\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ l l l }\n\\textbf{Accessors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nv[i]; & Return (or set) the i'th element. & O(1)\\\\\nv.at(i); & Return (or set) the i'th element, with bounds checking. & O(1)\\\\\nv.size(); & Return current number of elements. & O(1)\\\\\nv.empty(); & Return true if vector is empty. & O(1)\\\\\nv.begin(); & Return random access iterator to start. & O(1)\\\\\nv.end(); & Return random access iterator to end. & O(1)\\\\\nv.front(); & Return the first element. & O(1)\\\\\nv.back(); & Return the last element. & O(1)\\\\\nv.capacity(); & Return maximum number of elements. & O(1)\\\\\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ l l l }\n\\textbf{Modifiers} & \\textbf{Description} & \\textbf{Big-O}\\\\\nv.push\\_back(value); & Add value to end. & O(1) (amortized)\\\\\nv.insert(iterator, value); & Insert value at the position indexed by iterator. & O(n)\\\\\nv.pop\\_back(); & Remove value from end. & O(1)\\\\\nv.erase(iterator); & Erase value indexed by iterator. & O(n)\\\\\nv.erase(begin, end); & Erase the elements from begin to end. & O(n)\\\\\n\\end{tabular}\n\\clearpage\n%----------------------------------------\n%\tStd::priority queue\n% ---------------------------------------\n\\subsection{std::priority\\_queue}\\label{pq}\n\\begin{tabular}{ p{5cm} p{7cm} p{5cm}}\n\\textbf{Constructors} & \\textbf{Description} & \\textbf{Big-O}\\\\\npriority\\_queue$<$T,  container$<$T$>$,  comparison$<$T$>$ $>$ q & Make an empty priority queue using the given container to hold values, and comparison to compare values. container defaults to vector$<$T$>$ and comparison defaults to less$<$T$>$. & O(1)\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ l l l }\n\\textbf{Accessors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nq.top(); & Return the ``biggest'' element. & O(1)\\\\\nq.size(); & Return current number of elements. & O(1)\\\\\nq.empty(); & Return true if priority queue is empty. & O(1)\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ l l l }\n\\textbf{Modifiers} & \\textbf{Description} & \\textbf{Big-O}\\\\\nq.push(value); & Add value to priority queue. & O(log n)\\\\\nq.pop(); & Remove biggest value. 
& O(log n)\\\\\n\\end{tabular}\n\\clearpage\n%----------------------------------------\n%\tStd::Set\n% ---------------------------------------\n\\subsection{std::set} \\label{stdset}\n\\begin{tabular}{ p{5cm} p{7cm} p{5cm}}\n\\textbf{Constructors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nset $<$type, compare$>$ s; & Make an empty set. compare should be a binary predicate for ordering the set. It's optional and will default to a function that uses operator$<$. & O(1)\\\\\nset $<$type, compare$>$ s(begin, end); & Make a set and copy the values from begin to end. & O(n log n)\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ p{5cm} p{7cm}  p{5cm}}\n\\textbf{Accessors} & \\textbf{Description} & \\textbf{Big-O}\\\\\ns.find(key) & Return an iterator pointing to an occurrence of key in s, or s.end() if key is not in s. & O(log n)\\\\\ns.lower\\_bound(key) & Return an iterator pointing to the first occurrence of an item in s not less than key, or s.end() if no such item is found. & O(log n)\\\\\ns.upper\\_bound(key) & Return an iterator pointing to the first occurrence of an item greater than key in s, or s.end() if no such item is found. & O(log n)\\\\\ns.equal\\_range(key) & Returns pair$<$lower\\_bound(key), upper\\_bound(key)$>$. & O(log n)\\\\\ns.count(key) & Returns the number of items equal to key in s. & O(log n)\\\\\ns.size() & Return current number of elements. & O(1)\\\\\ns.empty() & Return true if set is empty. & O(1)\\\\\ns.begin() & Return an iterator pointing to the first element. & O(1)\\\\\ns.end() & Return an iterator pointing one past the last element. & O(1)\n\\end{tabular}\n\\clearpage\n%----------------------------------------\n%\tStd::Map\n% ---------------------------------------\n\\subsection{std::map} \n\\begin{tabular}{ p{5cm} p{7cm}  p{5cm} }\n\\textbf{Constructors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nmap$<$ key\\_type, value\\_type, key\\_compare $>$ m; & Make an empty map. key\\_compare should be a binary predicate for ordering the keys. It's optional and will default to a function that uses operator$<$. & O(1)\\\\\nmap$<$ key\\_type, value\\_type, key\\_compare $>$ m(begin, end); & Make a map and copy the values from begin to end. & O(n log n)\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ p{5cm} p{7cm}  p{5cm} }\n\\textbf{Accessors} & \\textbf{Description} & \\textbf{Big-O}\\\\\nm[key] & Return the value stored for key. This adds a default value if key not in map. & O(log n)\\\\\nm.find(key) & Return an iterator pointing to a key-value pair, or m.end() if key is not in map. & O(log n)\\\\\nm.lower\\_bound(key) & Return an iterator pointing to the first pair containing key, or m.end() if key is not in map. & O(log n)\\\\\nm.upper\\_bound(key) & Return an iterator pointing one past the last pair containing key, or m.end() if key is not in map. & O(log n)\\\\\nm.equal\\_range(key) & Return a pair containing the lower and upper bounds for key. This may be more efficient than calling those functions separately. & O(log n)\\\\\nm.size(); & Return current number of elements. & O(1)\\\\\nm.empty(); & Return true if map is empty. & O(1)\\\\\nm.begin() & Return an iterator pointing to the first pair. & O(1)\\\\\nm.end() & Return an iterator pointing one past the last pair. & O(1)\n\\end{tabular}\n\\\\\n\\\\\n\\\\\n\\begin{tabular}{ p{5cm} p{7cm}  p{5cm} }\n\\textbf{Modifiers} & \\textbf{Description} & \\textbf{Big-O}\\\\\nm[key] = value; & Store value under key in map. 
& O(log n)\\\\\nm.insert(pair) & Inserts the $<$key, value$>$ pair into the map. Equivalent to the above operation. & O(log n)\n\\end{tabular}\n\\subsection{std::pair}\n\\begin{tabular}{ p{5cm} p{10cm} }\ndefault (1) & constexpr pair\\\\\ncopy / move (2) & template$<$class U, class V$>$ pair (const pair$<$U,V$>$\\& pr);\n\t\t\t\t\ttemplate$<$class U, class V$>$ pair (pair$<$U,V$>$\\&\\& pr);\n\t\t\t\t\tpair (const pair\\& pr) = default;\n\t\t\t\t\tpair (pair\\&\\& pr) = default;\\\\\ninitialization (3) & pair (const first\\_type\\& a, const second\\_type\\& b);\n\t\t\t\t\ttemplate$<$class U, class V$>$ pair (U\\&\\& a, V\\&\\& b);\\\\\npiecewise (4) &\n\t\t\t\t\ttemplate $<$class... Args1, class... Args2$>$\n\t\t\t\t\tpair (piecewise\\_construct\\_t pwc, tuple$<$Args1...$>$ first\\_args,\n\t\t\t\t\t\ttuple$<$Args2...$>$ second\\_args);\n\\end{tabular}\n\\begin{lstlisting}\n\ntemplate <class T1,class T2>\n  pair<T1,T2> make_pair (T1 x, T2 y)\n  {\n    return ( pair<T1,T2>(x,y) );\n  }\n\\end{lstlisting}\nA helper function to make a std::pair.\n\n%----------------------------------------\n%\tBetacore::Node\n% ---------------------------------------\n\\subsection{Betacore::Node}\n\\begin{lstlisting}\n\ttemplate<typename I>\n\tclass Node{\n\t\tprivate:\n\t\t\tI id;\n\t\t\tstd::string name;\n\t\tpublic:\n\t\t\tNode(I &id, std::string &name);\n\t\t\tstd::string get_name();\n\t\t\tvoid setName(const std::string &name);\n\t\t\tI get_id();\n\t};\n\\end{lstlisting}\nHelper class that stores minimal information about a node; \nmore information can be stored in the CSV file.\n%----------------------------------------\n%\tBetacore::Edge\n% ---------------------------------------\n\\subsection{Betacore::Edge}\n\\begin{lstlisting}\n\ttemplate<typename T, typename I>\n\tclass Edge{\n\t\tprivate:\n\t\t\tI id;\n\t\t\tI source;\n\t\t\tI target;\n\t\t\tT cost;\n\t\tpublic:\n\t\t\tEdge(){}\n\t\t\tEdge(I &source, I &target, T &cost);\n\t\t\tI get_source();\n\t\t\tI get_target();\n\t\t\tT get_cost();\n\t};\n\\end{lstlisting}\nHelper class that stores minimal information about an edge; \nmore information can be stored in the CSV file.\n%----------------------------------------\n%\tBetacore::Graph\n% ---------------------------------------\n\\subsection{Betacore::Graph}\n\\begin{lstlisting}\n\ttemplate<typename T, typename I>\n\tclass Graph{\n\t\tprivate:\n\t\t\tstd::vector<betacore::Node<I>> nodes;\n\t\t\tstd::vector<betacore::Edge<T,I>> edges;\n\t\t\tbetacore::Node<I> * source;\n\t\t\tstd::vector<betacore::Node<I>> targets;\n\t\t\tvoid parse_line(std::string &line);\n\t\tpublic:\n\t\t\tGraph();\n\t\t\tvoid successor(I &node, std::vector<Edge<T,I>> &result);\n\t\t\tvoid get_edges(std::vector<betacore::Edge<T,I>> &edges);\n\t\t\tNode<I> get_node(I id);\n\t\t\tvoid add_node(Node<I> &node);\n\t\t\tvoid remove_node(I id);\n\t\t\tvoid load_from_file(std::string file_path);\n\t\t\tT cost(I u, I ui);\n\t\t\tvoid print();\n\t};\n\\end{lstlisting}\n\nBetacore::Graph is an implementation of a directed graph that reads from a\nmodified comma-delimited file. This allows the graph to change\nwithout recompiling the code; however, this is a potential bottleneck due to\nthe read speed of a hard disk (for gigabyte-sized files this should be\nmodified to a stream, with the graph split up into regions and each region loaded\nonly when needed, thus saving memory). \n\nThe template parameters $<$typename T, typename I$>$ allow the precision of the graph to \nbe determined by the implementation: T (a float, double, or long double) is used \nfor floating point arithmetic, and I (a char, int, long, or long long int) is\nused for indexing the nodes. Depending on the size of the graph, changing the \ndata types for T and I can have a performance and precision impact on the graph.\n
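\nFor reference, the modified comma-delimited format (as seen in the sample runs of Section 3) uses one record per line: \\texttt{N,id,name} for nodes, \\texttt{E,source,target,cost} for edges, and \\texttt{S}/\\texttt{T} records naming the source and target nodes. A made-up two-node excerpt in this format:\n\\begin{lstlisting}\n\tN,1,SFO\n\tN,2,ORD\n\tE,1,2,1846\n\tE,2,1,1846\n\tS,1,SFO\n\tT,2,ORD\n\\end{lstlisting}\n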
\n%----------------------------------------\n%\tBetacore::Dijkstra\n% ---------------------------------------\n\\subsection{Betacore::Dijkstra}  \\label{betadij}\n\\begin{lstlisting}\n\tstruct Dijkstra_Exception : public std::exception {\n\t\tconst char * what () const throw () {\n\t\t\treturn \"Dijkstra Exception\";\n\t\t}\n\t};\n\n\ttemplate<typename T, typename I>\n\tclass Dijkstra{\n\t\tprivate:\n\t\t\tstd::priority_queue<std::pair<T,I>,std::vector<std::pair<T,I>>, std::greater<std::pair<T,I>> > frontier;\n\t\t\tstd::set<I> current; \n\t\t\tstd::set<I> explored;\n\t\t\tT cost();\n\t\tpublic:\n\t\t\tDijkstra(){\n\t\t\t}\n\t\t\t~Dijkstra(){\n\t\t\t}\n\t\t\n\t\t\tvoid run (\n\t\t\t\tI source,\n\t\t\t\tI goal,\n\t\t\t\tstd::map<I,I> &path,\n\t\t\t\tstd::function<T( I u, I ui)> &cost,\n\t\t\t\tstd::function<void(I &node, std::vector<Edge<T,I>> &result)> &Succ\n\t\t\t );\n\t\t\n\t\t\tvoid run (\n\t\t\t\tI source,\n\t\t\t\tstd::vector<I> &goal_v,\n\t\t\t\tstd::map<I,I> &path,\n\t\t\t\tstd::function<T( I u, I ui)> &cost,\n\t\t\t\tstd::function<void(I &node, std::vector<Edge<T,I>> &result)> &Succ\n\t\t\t );\n\t\n\t};\n\\end{lstlisting}\n\nThe Betacore::Dijkstra implementation utilizes std::priority\\_queue (see section \\ref{pq}).\nThe priority\\_queue uses the std::greater comparator, which orders the queue by the edge cost $T$; since we want the minimum, this makes the queue a min-heap, so the value in the first position is always the smallest, that is, the cheapest edge. \nA limitation of using std::priority\\_queue is that there is no contains method. That \nproblem is overcome by keeping a std::set (see section \\ref{stdset}) to keep track of the nodes in the frontier.\nSo the std::priority\\_queue frontier keeps the edges in order, and the std::set current keeps track of containment.\nThis doubles the space used, but saves having to implement a new child class of std::priority\\_queue. Nodes \nare known by their index type I, and edges by the Edge class.\n\\\\\n\\\\\n\n\\begin{lstlisting}\n\tstd::function<T( I u, I ui)> &cost\n\\end{lstlisting}\nThis function, passed into the algorithm, returns the cost from node $u \\to u_i$; it \nmust be provided by the user, which allows for the most flexibility.\n\\\\\n\\begin{lstlisting}\n\tstd::function<void(I &node, std::vector<Edge<T,I>> &result)> &Succ\n\\end{lstlisting}\nThis function, passed into the algorithm, determines the neighbour nodes of $u$ and\nstores them into the std::vector \\&result. To change the implementation, just make sure\nthat you add each Edge$<$T,I$>$ to the result vector.\n
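\nAs a usage sketch, the two hooks can be wired to a loaded graph with lambdas. This is hypothetical driver code: it assumes the betacore headers are available, and it reuses the file path and the node ids (1 = SFO, 9 = LAX, 6 = BWI) from the flight-plan run in Section 3.\n\\begin{lstlisting}\n\t// Hypothetical driver: wire the cost/succ hooks to a graph.\n\tbetacore::Graph<double, int> graph;\n\tgraph.load_from_file(\"../data/graph_gt_7_3.csv\");\n\n\tstd::function<double(int, int)> cost =\n\t\t[&graph](int u, int ui) { return graph.cost(u, ui); };\n\tstd::function<void(int&, std::vector<betacore::Edge<double, int>>&)> succ =\n\t\t[&graph](int &u, std::vector<betacore::Edge<double, int>> &result) {\n\t\t\tgraph.successor(u, result);\n\t\t};\n\n\tstd::map<int, int> path;\n\tstd::vector<int> goals;\n\tgoals.push_back(1); // SFO\n\tgoals.push_back(9); // LAX\n\tbetacore::Dijkstra<double, int> dijkstra;\n\tdijkstra.run(6 /* BWI */, goals, path, cost, succ);\n\\end{lstlisting}\n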
\n% ==========================================================================================================================\n%  Proof of correctness Section 3\n% ==========================================================================================================================\n\n% \\section[Proof of correctness]{Proof of correctness}\n\\section{Proof of correctness}\nNote: the running time reported below doesn't include loading from file; it is the running time of the algorithm alone. \nThere is an additional graph, not required by the assignment,\nthat was used to verify the algorithm.\nDue to the simplistic nature of graph\\_01, it shows that in the ideal case (no negative cycles) the algorithm takes the shortest path,\nassuming the graph is provided by the Succ function (\\ref{betadij}). \n\\subsection{Flight Plans}\n\nSample Run output:\n\n\\begin{lstlisting}\n\t_______________________________________________________________\n\tHeuristics\n\t_______________________________________________________________\n\t../data/graph_gt_7_3.csv\n\tN,1,SFO\n\tN,2,ORD\n\tN,3,BOS\n\tN,4,PVD\n\tN,5,JFK\n\tN,6,BWI\n\tN,7,MIA\n\tN,8,DFW\n\tN,9,LAX\n\t#\n\tE,1,2,1846\n\tE,2,1,1846\n\t#\n\tE,1,8,1464\n\tE,8,1,1464\n\t#\n\tE,1,9,337\n\tE,9,1,337\n\t#\n\tE,2,3,867\n\tE,3,2,867\n\t#\n\tE,2,4,849\n\tE,4,2,849\n\t#\n\tE,2,5,740\n\tE,5,2,740\n\t#\n\tE,2,6,621\n\tE,6,2,621\n\t#\n\tE,2,8,802\n\tE,8,2,802\n\t#\n\tE,3,5,187\n\tE,5,3,187\n\t#\n\tE,3,7,1258\n\tE,7,3,1258\n\t#\n\tE,4,5,144\n\tE,5,4,144\n\t#\n\tE,5,6,184\n\tE,6,5,184\n\t#\n\tE,5,7,1090\n\tE,7,5,1090\n\t#\n\tE,5,8,1391\n\tE,8,5,1391\n\t#\n\tE,6,7,946\n\tE,7,6,946\n\t#\n\tE,7,8,1121\n\tE,8,7,1121\n\t#\n\tE,7,9,2342\n\tE,9,7,2342\n\t#\n\tE,8,9,1235\n\tE,9,8,1235\n\t#\n\tT,1,SFO\n\tT,9,LAX\n\tS,6,BWI\n\t_______________________________________________________________\n\tEdge:1->2 cost:1846\n\tEdge:2->1 cost:1846\n\tEdge:1->8 cost:1464\n\tEdge:8->1 cost:1464\n\tEdge:1->9 cost:337\n\tEdge:9->1 cost:337\n\tEdge:2->3 cost:867\n\tEdge:3->2 cost:867\n\tEdge:2->4 cost:849\n\tEdge:4->2 cost:849\n\tEdge:2->5 cost:740\n\tEdge:5->2 cost:740\n\tEdge:2->6 cost:621\n\tEdge:6->2 cost:621\n\tEdge:2->8 cost:802\n\tEdge:8->2 cost:802\n\tEdge:3->5 cost:187\n\tEdge:5->3 cost:187\n\tEdge:3->7 cost:1258\n\tEdge:7->3 cost:1258\n\tEdge:4->5 cost:144\n\tEdge:5->4 cost:144\n\tEdge:5->6 cost:184\n\tEdge:6->5 cost:184\n\tEdge:5->7 cost:1090\n\tEdge:7->5 cost:1090\n\tEdge:5->8 cost:1391\n\tEdge:8->5 cost:1391\n\tEdge:6->7 cost:946\n\tEdge:7->6 cost:946\n\tEdge:7->8 cost:1121\n\tEdge:8->7 cost:1121\n\tEdge:7->9 cost:2342\n\tEdge:9->7 cost:2342\n\tEdge:8->9 cost:1235\n\tEdge:9->8 cost:1235\n\tNodes:1\n\tNodes:2\n\tNodes:3\n\tNodes:4\n\tNodes:5\n\tNodes:6\n\tNodes:7\n\tNodes:8\n\tNodes:9\n\t_______________________________________________________________\n\tNode U::6\n\tOther Goal Found:1\tcost:0\n\tOther Goal Found:9\tcost:0\n\tAdding to frontier node:2 cost:621\n\tAdding to frontier node:5 cost:184\n\tAdding to frontier node:7 cost:946\n\tNode U::5\n\tOther Goal Found:1\tcost:0\n\tOther Goal Found:9\tcost:0\n\tAdding to frontier node:3 cost:371\n\tAdding to frontier node:4 cost:328\n\tAdding to frontier node:8 cost:1575\n\tNode U::4\n\tOther Goal Found:1\tcost:0\n\tOther Goal Found:9\tcost:0\n\tNode U::3\n\tOther Goal Found:1\tcost:0\n\tOther Goal Found:9\tcost:0\n\tNode U::2\n\tOther Goal Found:1\tcost:0\n\tOther Goal Found:9\tcost:0\n\tAdding to frontier node:1 cost:2467\n\t+adding edge:2 cost:1423\n\tNode U::7\n\tOther Goal Found:1\tcost:2467\n\tOther Goal Found:9\tcost:0\n\tAdding to frontier node:9 cost:3288\n\tNode U::8\n\tOther Goal Found:1\tcost:2467\n\tOther Goal Found:9\tcost:3288\n\t+adding edge:8 cost:2658\n\tNode U::1\n\tMin Goal Found:1\tcost:2467\n\tOther Goal Found:9\tcost:2658\n\tNode U::9\n\tOther Goal Found:1\tcost:2467\n\tOther Goal Found:9\tcost:2658\n\texplored node count:8\n\texplored node:1\n\texplored node:2\n\texplored node:3\n\texplored node:4\n\texplored node:5\n\texplored node:6\n\texplored node:7\n\texplored node:8\n\tfrontier node count:0
\n\tpath:\n\t1<-2<-6\n\tcost:2467\n\tRunning time(includes std::cout):\t0.000282s\n\tcpu start: 0\tcpu end:1\tCLOCKS_PER_SEC:1000\n\tcpu time(includes std::cout):\t0.001s\n\t_______________________________________________________________\n\n\\end{lstlisting}\n* Note: the cpu run time may not appear accurate due to the running system (see System information, \\ref{sysinfo}).\n\nDoing the trace by hand yielded the same result, with the min path being BWI $\\to$ ORD $\\to$ SFO. In the output \nthe entire search state is dumped (which can hurt performance), so by looking at each choice the algorithm makes (and checking that it is being greedy) \nwe can verify the correctness for this instance (if the graph were larger, test-driven development would be advised). \n\n\\subsection{Missionaries and Cannibals}\nFor a run with $n=3$ and $m=2$:\\\\\nThe graph is in the following format.\\\\\nEach node record contains the following:\\\\\n\nN: type of the record (a node)\\\\\n\n\\#: id of the node\\\\\n\nName: LHS(Missionaries\\_Cannibals)::Boat(Missionaries\\_Cannibals)::RHS(Missionaries\\_Cannibals)\\\\\n\\\\\nEach edge record is source, target, cost.\\\\\nThe goal is to move from LHS $\\to$ RHS.\n\\\\\nSample Run output:\n\\begin{lstlisting}\n\t_______________________________________________________________\n\tHeuristics\n\t_______________________________________________________________\n\t../data/graph_can_n_3_m_2.csv\n\tN,0,N::3_2::0_0::0_0\n\tN,1,N::2_1::1_1::0_0\n\tN,2,N::3_0::0_2::0_0\n\tN,3,N::1_2::2_0::0_0\n\tN,4,N::2_1::0_1::1_0\n\tN,5,N::2_1::1_0::0_1\n\tN,6,N::2_0::0_1::0_1\n\tN,7,N::3_0::0_1::0_1\n\tN,8,N::1_1::1_1::1_0\n\tN,9,N::2_0::0_2::1_0\n\tN,10,N::2_0::1_1::0_1\n\tN,11,N::1_1::2_0::0_1\n\tN,12,N::1_1::0_1::2_0\n\tN,13,N::1_1::1_0::1_1\n\tN,14,N::2_0::0_1::1_1\n\tN,15,N::2_0::1_0::0_2\n\tN,16,N::1_0::2_0::0_1\n\tN,17,N::1_0::1_1::2_0\n\tN,18,N::1_0::0_2::2_0\n\tN,19,N::1_0::0_1::2_1\n\tN,20,N::0_1::0_1::3_0\n\tN,21,N::1_0::1_0::1_2\n\tN,22,N::0_0::1_1::2_1\n\tN,23,N::0_0_::0_2::3_0\n\tN,24,N::0_0::0_0::3_2\n\tE,0,1,1\n\tE,0,2,1\n\tE,0,3,1\n\tE,1,4,1\n\tE,1,5,1\n\tE,2,6,1\n\tE,4,7,1\n\tE,4,8,1\n\tE,5,9,1\n\tE,5,10,1\n\tE,6,9,1\n\tE,7,11,1\n\tE,7,12,1\n\tE,8,13,1\n\tE,9,13,1\n\tE,11,15,1\n\tE,11,16,1\n\tE,12,16,1\n\tE,13,17,1\n\tE,14,18,1\n\tE,15,19,1\n\tE,16,20,1\n\tE,17,19,1\n\tE,17,21,1\n\tE,18,21,1\n\tE,19,22,1\n\tE,20,23,1\n\tE,22,24,1\n\tE,23,24,1\n\tS,0,N::3_2::0_0::0_0\n\tT,24,N::0_0::0_0::3_2\n\t_______________________________________________________________\n\tEdge:0->1 cost:1\n\tEdge:0->2 cost:1\n\tEdge:0->3 cost:1\n\tEdge:1->4 cost:1\n\tEdge:1->5 cost:1\n\tEdge:2->6 cost:1\n\tEdge:4->7 cost:1\n\tEdge:4->8 cost:1\n\tEdge:5->9 cost:1\n\tEdge:5->10 cost:1\n\tEdge:6->9 cost:1\n\tEdge:7->11 cost:1\n\tEdge:7->12 cost:1\n\tEdge:8->13 cost:1\n\tEdge:9->13 cost:1\n\tEdge:11->15 cost:1\n\tEdge:11->16 cost:1\n\tEdge:12->16 cost:1\n\tEdge:13->17 cost:1\n\tEdge:14->18 cost:1\n\tEdge:15->19 cost:1\n\tEdge:16->20 cost:1\n\tEdge:17->19 cost:1\n\tEdge:17->21 cost:1\n\tEdge:18->21 cost:1\n\tEdge:19->22 cost:1\n\tEdge:20->23 cost:1\n\tEdge:22->24 cost:1\n\tEdge:23->24 cost:1\n\tNodes:0\n\tNodes:1\n\tNodes:2\n\tNodes:3\n\tNodes:4\n\tNodes:5\n\tNodes:6\n\tNodes:7\n\tNodes:8\n\tNodes:9\n\tNodes:10\n\tNodes:11\n\tNodes:12\n\tNodes:13\n\tNodes:14\n\tNodes:15\n\tNodes:16\n\tNodes:17\n\tNodes:18\n\tNodes:19\n\tNodes:20\n\tNodes:21\n\tNodes:22\n\tNodes:23\n\tNodes:24\n\t_______________________________________________________________\n\tNode U::0\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:1 cost:1\n\tAdding to frontier node:2 
cost:1\n\tAdding to frontier node:3 cost:1\n\tNode U::1\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:4 cost:2\n\tAdding to frontier node:5 cost:2\n\tNode U::2\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:6 cost:2\n\tNode U::3\n\tOther Goal Found:24\tcost:0\n\tNode U::4\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:7 cost:3\n\tAdding to frontier node:8 cost:3\n\tNode U::5\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:9 cost:3\n\tAdding to frontier node:10 cost:3\n\tNode U::6\n\tOther Goal Found:24\tcost:0\n\tNode U::7\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:11 cost:4\n\tAdding to frontier node:12 cost:4\n\tNode U::8\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:13 cost:4\n\tNode U::9\n\tOther Goal Found:24\tcost:0\n\tNode U::10\n\tOther Goal Found:24\tcost:0\n\tNode U::11\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:15 cost:5\n\tAdding to frontier node:16 cost:5\n\tNode U::12\n\tOther Goal Found:24\tcost:0\n\tNode U::13\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:17 cost:5\n\tNode U::15\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:19 cost:6\n\tNode U::16\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:20 cost:6\n\tNode U::17\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:21 cost:6\n\tNode U::19\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:22 cost:7\n\tNode U::20\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:23 cost:7\n\tNode U::21\n\tOther Goal Found:24\tcost:0\n\tNode U::22\n\tOther Goal Found:24\tcost:0\n\tAdding to frontier node:24 cost:8\n\tNode U::23\n\tOther Goal Found:24\tcost:8\n\tNode U::24\n\tMin Goal Found:24\tcost:8\n\texplored node count:23\n\texplored node:0\n\texplored node:1\n\texplored node:2\n\texplored node:3\n\texplored node:4\n\texplored node:5\n\texplored node:6\n\texplored node:7\n\texplored node:8\n\texplored node:9\n\texplored node:10\n\texplored node:11\n\texplored node:12\n\texplored node:13\n\texplored node:15\n\texplored node:16\n\texplored node:17\n\texplored node:19\n\texplored node:20\n\texplored node:21\n\texplored node:22\n\texplored node:23\n\texplored node:24\n\tfrontier node count:0\n\tpath:\n\t24<-22<-19<-15<-11<-7<-4<-1<-0\n\tcost:8\n\tRunning time(includes std::cout):\t0.0004963s\n\tcpu start: 1\tcpu end:1\tCLOCKS_PER_SEC:1000\n\tcpu time(includes std::cout):\t0s\n\t_______________________________________________________________\n\n\\end{lstlisting}\n* Note the cpu run time may not appear accurate due to the running system (see System  information \\ref{sysinfo})\n\\\\\n\\\\\nThe path\\\\\n24$\\leftarrow$22$\\leftarrow$19$\\leftarrow$15$\\leftarrow$11$\\leftarrow$7$\\leftarrow$4$\\leftarrow$1$\\leftarrow$0\nmatches the tree state of the problem and is the length of the depth of the tree. 
\nEach edge havening a cost of 1 yields the correct cost which matches the numbers of edges\nWith run of $n=4$ and $m=3$ \nThe graph in the format:\\\\\nEach node represents the following\\\\\n\nN: Type of node\\\\\n\n\\#: id of the node\\\\\n\nName: LHS(Missionaries\\_Cannibals)::Boat(Missionaries\\_Cannibals)::RHS(Missionaries\\_Cannibals)\\\\\n\\\\\nEach Edge is source, target, cost\\\\\nThe Goal is to move from LHS $\\to$ RHS\n\nSample Run output:\n\\begin{lstlisting}\n\t_______________________________________________________________\n\tHeuristics\n\t_______________________________________________________________\n\t../data/graph_can_n_4_m_3.csv\n\tN,0,4_3::0_0::0_0\n\tN,1,3_2::1_1::0_0\n\tN,2,4_1::0_2::0_0\n\tN,3,2_3::2_0::0_0\n\tN,4,3_2::1_0::0_1\n\tN,5,3_2::0_1::1_0\n\tN,6,4_1::0_1::0_1\n\tN,7,3_1::1_1::0_1\n\tN,8,2_2::2_0::0_1\n\tN,9,2_2::1_1::1_0\n\tN,10,3_1::0_2::1_0\n\tN,11,4_0::0_2::0_1\n\tN,12,3_1::1_0::0_2\n\tN,13,3_1::0_1::1_1\n\tN,14,2_2::1_0::1_1\n\tN,15,2_2::0_1::2_0\n\tN,16,4_0::0_1::0_2\n\tN,17,2_1::2_0::0_2\n\tN,18,3_0::1_1::0_2\n\tN,19,3_0::0_2::1_1\n\tN,20,2_1::1_1::1_1\n\tN,21,1_2::2_0::1_1\n\tN,22,2_1::0_2::2_0\n\tN,23,2_1::1_0::1_2\n\tN,24,3_0::1_0::0_3\n\tN,25,3_0::0_1::1_2\n\tN,26,2_1::0_1::2_1\n\tN,27,2_1::1_0::1_2\n\tN,28,2_0::2_0::0_3\n\tN,29,2_0::0_2::2_1\n\tN,30,1_1::1_1::2_1\n\tN,31,2_0::1_0::1_3\n\tN,32,2_0::0_1::2_2\n\tN,33,1_1::0_1::3_1\n\tN,34,1_1::1_0::2_2\n\tN,35,1_0::1_1::2_2\n\tN,36,1_0::0_2::3_1\n\tN,37,0_1::1_1::3_1\n\tN,38,0_1::2_0::2_2\n\tN,39,1_0::1_0::2_3\n\tN,40,1_0::0_1::3_2\n\tN,41,0_1::1_0::3_2\n\tN,42,0_1::0_1::4_1\n\tN,43,0_0::1_1::3_2\n\tN,44,0_0::0_2::4_1\n\tN,45,0_0::0_0::4_3\n\tE,0,1,1\n\tE,0,2,1\n\tE,0,3,1\n\tE,1,4,1\n\tE,1,5,1\n\tE,2,6,1\n\tE,4,7,1\n\tE,4,8,1\n\tE,5,9,1\n\tE,5,10,1\n\tE,6,11,1\n\tE,7,12,1\n\tE,7,13,1\n\tE,8,14,1\n\tE,9,14,1\n\tE,9,15,1\n\tE,10,13,1\n\tE,11,16,1\n\tE,12,17,1\n\tE,12,18,1\n\tE,13,19,1\n\tE,13,20,1\n\tE,14,20,1\n\tE,14,21,1\n\tE,15,22,1\n\tE,15,23,1\n\tE,16,18,1\n\tE,17,23,1\n\tE,18,24,1\n\tE,18,25,1\n\tE,19,25,1\n\tE,20,26,1\n\tE,20,23,1\n\tE,22,26,1\n\tE,24,28,1\n\tE,26,29,1\n\tE,26,30,1\n\tE,28,31,1\n\tE,29,32,1\n\tE,30,33,1\n\tE,30,34,1\n\tE,32,35,1\n\tE,33,36,1\n\tE,33,37,1\n\tE,34,38,1\n\tE,35,39,1\n\tE,35,40,1\n\tE,36,40,1\n\tE,37,41,1\n\tE,37,42,1\n\tE,38,41,1\n\tE,40,43,1\n\tE,41,43,1\n\tE,42,44,1\n\tE,43,45,1\n\tE,44,45,1\n\tS,0,4_3::0_0::0_0\n\tT,45,0_0::0_0::4_3\n\t_______________________________________________________________\n\tEdge:0->1 cost:1\n\tEdge:0->2 cost:1\n\tEdge:0->3 cost:1\n\tEdge:1->4 cost:1\n\tEdge:1->5 cost:1\n\tEdge:2->6 cost:1\n\tEdge:4->7 cost:1\n\tEdge:4->8 cost:1\n\tEdge:5->9 cost:1\n\tEdge:5->10 cost:1\n\tEdge:6->11 cost:1\n\tEdge:7->12 cost:1\n\tEdge:7->13 cost:1\n\tEdge:8->14 cost:1\n\tEdge:9->14 cost:1\n\tEdge:9->15 cost:1\n\tEdge:10->13 cost:1\n\tEdge:11->16 cost:1\n\tEdge:12->17 cost:1\n\tEdge:12->18 cost:1\n\tEdge:13->19 cost:1\n\tEdge:13->20 cost:1\n\tEdge:14->20 cost:1\n\tEdge:14->21 cost:1\n\tEdge:15->22 cost:1\n\tEdge:15->23 cost:1\n\tEdge:16->18 cost:1\n\tEdge:17->23 cost:1\n\tEdge:18->24 cost:1\n\tEdge:18->25 cost:1\n\tEdge:19->25 cost:1\n\tEdge:20->26 cost:1\n\tEdge:20->23 cost:1\n\tEdge:22->26 cost:1\n\tEdge:24->28 cost:1\n\tEdge:26->29 cost:1\n\tEdge:26->30 cost:1\n\tEdge:28->31 cost:1\n\tEdge:29->32 cost:1\n\tEdge:30->33 cost:1\n\tEdge:30->34 cost:1\n\tEdge:32->35 cost:1\n\tEdge:33->36 cost:1\n\tEdge:33->37 cost:1\n\tEdge:34->38 cost:1\n\tEdge:35->39 cost:1\n\tEdge:35->40 cost:1\n\tEdge:36->40 cost:1\n\tEdge:37->41 cost:1\n\tEdge:37->42 
cost:1\n\tEdge:38->41 cost:1\n\tEdge:40->43 cost:1\n\tEdge:41->43 cost:1\n\tEdge:42->44 cost:1\n\tEdge:43->45 cost:1\n\tEdge:44->45 cost:1\n\tNodes:0\n\tNodes:1\n\tNodes:2\n\tNodes:3\n\tNodes:4\n\tNodes:5\n\tNodes:6\n\tNodes:7\n\tNodes:8\n\tNodes:9\n\tNodes:10\n\tNodes:11\n\tNodes:12\n\tNodes:13\n\tNodes:14\n\tNodes:15\n\tNodes:16\n\tNodes:17\n\tNodes:18\n\tNodes:19\n\tNodes:20\n\tNodes:21\n\tNodes:22\n\tNodes:23\n\tNodes:24\n\tNodes:25\n\tNodes:26\n\tNodes:27\n\tNodes:28\n\tNodes:29\n\tNodes:30\n\tNodes:31\n\tNodes:32\n\tNodes:33\n\tNodes:34\n\tNodes:35\n\tNodes:36\n\tNodes:37\n\tNodes:38\n\tNodes:39\n\tNodes:40\n\tNodes:41\n\tNodes:42\n\tNodes:43\n\tNodes:44\n\tNodes:45\n\t_______________________________________________________________\n\tNode U::0\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:1 cost:1\n\tAdding to frontier node:2 cost:1\n\tAdding to frontier node:3 cost:1\n\tNode U::1\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:4 cost:2\n\tAdding to frontier node:5 cost:2\n\tNode U::2\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:6 cost:2\n\tNode U::3\n\tOther Goal Found:45\tcost:0\n\tNode U::4\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:7 cost:3\n\tAdding to frontier node:8 cost:3\n\tNode U::5\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:9 cost:3\n\tAdding to frontier node:10 cost:3\n\tNode U::6\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:11 cost:3\n\tNode U::7\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:12 cost:4\n\tAdding to frontier node:13 cost:4\n\tNode U::8\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:14 cost:4\n\tNode U::9\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:15 cost:4\n\tNode U::10\n\tOther Goal Found:45\tcost:0\n\tNode U::11\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:16 cost:4\n\tNode U::12\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:17 cost:5\n\tAdding to frontier node:18 cost:5\n\tNode U::13\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:19 cost:5\n\tAdding to frontier node:20 cost:5\n\tNode U::14\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:21 cost:5\n\tNode U::15\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:22 cost:5\n\tAdding to frontier node:23 cost:5\n\tNode U::16\n\tOther Goal Found:45\tcost:0\n\tNode U::17\n\tOther Goal Found:45\tcost:0\n\tNode U::18\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:24 cost:6\n\tAdding to frontier node:25 cost:6\n\tNode U::19\n\tOther Goal Found:45\tcost:0\n\tNode U::20\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:26 cost:6\n\tNode U::21\n\tOther Goal Found:45\tcost:0\n\tNode U::22\n\tOther Goal Found:45\tcost:0\n\tNode U::23\n\tOther Goal Found:45\tcost:0\n\tNode U::24\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:28 cost:7\n\tNode U::25\n\tOther Goal Found:45\tcost:0\n\tNode U::26\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:29 cost:7\n\tAdding to frontier node:30 cost:7\n\tNode U::28\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:31 cost:8\n\tNode U::29\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:32 cost:8\n\tNode U::30\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:33 cost:8\n\tAdding to frontier node:34 cost:8\n\tNode U::31\n\tOther Goal Found:45\tcost:0\n\tNode U::32\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:35 cost:9\n\tNode U::33\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:36 cost:9\n\tAdding to frontier node:37 cost:9\n\tNode 
U::34\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:38 cost:9\n\tNode U::35\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:39 cost:10\n\tAdding to frontier node:40 cost:10\n\tNode U::36\n\tOther Goal Found:45\tcost:0\n\tNode U::37\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:41 cost:10\n\tAdding to frontier node:42 cost:10\n\tNode U::38\n\tOther Goal Found:45\tcost:0\n\tNode U::39\n\tOther Goal Found:45\tcost:0\n\tNode U::40\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:43 cost:11\n\tNode U::41\n\tOther Goal Found:45\tcost:0\n\tNode U::42\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:44 cost:11\n\tNode U::43\n\tOther Goal Found:45\tcost:0\n\tAdding to frontier node:45 cost:12\n\tNode U::44\n\tOther Goal Found:45\tcost:12\n\tNode U::45\n\tMin Goal Found:45\tcost:12\n\texplored node count:45\n\texplored node:0\n\texplored node:1\n\texplored node:2\n\texplored node:3\n\texplored node:4\n\texplored node:5\n\texplored node:6\n\texplored node:7\n\texplored node:8\n\texplored node:9\n\texplored node:10\n\texplored node:11\n\texplored node:12\n\texplored node:13\n\texplored node:14\n\texplored node:15\n\texplored node:16\n\texplored node:17\n\texplored node:18\n\texplored node:19\n\texplored node:20\n\texplored node:21\n\texplored node:22\n\texplored node:23\n\texplored node:24\n\texplored node:25\n\texplored node:26\n\texplored node:28\n\texplored node:29\n\texplored node:30\n\texplored node:31\n\texplored node:32\n\texplored node:33\n\texplored node:34\n\texplored node:35\n\texplored node:36\n\texplored node:37\n\texplored node:38\n\texplored node:39\n\texplored node:40\n\texplored node:41\n\texplored node:42\n\texplored node:43\n\texplored node:44\n\texplored node:45\n\tfrontier node count:0\n\tpath:\n\t45<-43<-40<-35<-32<-29<-26<-20<-13<-7<-4<-1<-0\n\tcost:12\n\tRunning time(includes std::cout):\t0.0010232s\n\tcpu start: 1\tcpu end:2\tCLOCKS_PER_SEC:1000\n\tcpu time(includes std::cout):\t0.001s\n\t_______________________________________________________________\n\n\\end{lstlisting}\n* Note the cpu run time may not appear accurate due to the running system (see System information \\ref{sysinfo})\n\n\n45$\\leftarrow$43$\\leftarrow$40$\\leftarrow$35$\\leftarrow$32$\\leftarrow$29$\\leftarrow$26$\\leftarrow$20$\\leftarrow$13$\\leftarrow$7$\\leftarrow$4$\\leftarrow$1$\\leftarrow$0\nthe path can be trace from the source to the sink with the sink\\\\target state matching the gaol state\n\\\\\nThe output cost 12 is correct because the depth of the tree which represents\nthis problem is 12 and because each edge has a weight of 1 the cost should equal the depth of the tree.\n\n\\subsection{System information}\\label{sysinfo}\nclinfo.exe dump of system information. 
\nNote The above runs where running on cl.exe in debug mode on Windows 10 x64\n\\begin{lstlisting}\nNumber of platforms:\t\t\t\t 1\n  Platform Profile:\t\t\t\t FULL_PROFILE\n  Platform Version:\t\t\t\t OpenCL 2.0 AMD-APP (2264.11)\n  Platform Name:\t\t\t\t AMD Accelerated Parallel Processing\n  Platform Vendor:\t\t\t\t Advanced Micro Devices, Inc.\n  Platform Extensions:\t\t\t\t cl_khr_icd cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_dx9_media_sharing cl_amd_event_callback cl_amd_offline_devices \n\n\n  Platform Name:\t\t\t\t AMD Accelerated Parallel Processing\nNumber of devices:\t\t\t\t 2\n  Device Type:\t\t\t\t\t CL_DEVICE_TYPE_GPU\n  Vendor ID:\t\t\t\t\t 1002h\n  Board name:\t\t\t\t\t AMD Radeon HD 7800 Series\n  Device Topology:\t\t\t\t PCI[ B#1, D#0, F#0 ]\n  Max compute units:\t\t\t\t 16\n  Max work items dimensions:\t\t\t 3\n    Max work items[0]:\t\t\t\t 256\n    Max work items[1]:\t\t\t\t 256\n    Max work items[2]:\t\t\t\t 256\n  Max work group size:\t\t\t\t 256\n  Preferred vector width char:\t\t\t 4\n  Preferred vector width short:\t\t\t 2\n  Preferred vector width int:\t\t\t 1\n  Preferred vector width long:\t\t\t 1\n  Preferred vector width float:\t\t\t 1\n  Preferred vector width double:\t\t 1\n  Native vector width char:\t\t\t 4\n  Native vector width short:\t\t\t 2\n  Native vector width int:\t\t\t 1\n  Native vector width long:\t\t\t 1\n  Native vector width float:\t\t\t 1\n  Native vector width double:\t\t\t 1\n  Max clock frequency:\t\t\t\t 860Mhz\n  Address bits:\t\t\t\t\t 32\n  Max memory allocation:\t\t\t 1409286144\n  Image support:\t\t\t\t Yes\n  Max number of images read arguments:\t\t 128\n  Max number of images write arguments:\t\t 8\n  Max image 2D width:\t\t\t\t 16384\n  Max image 2D height:\t\t\t\t 16384\n  Max image 3D width:\t\t\t\t 2048\n  Max image 3D height:\t\t\t\t 2048\n  Max image 3D depth:\t\t\t\t 2048\n  Max samplers within kernel:\t\t\t 16\n  Max size of kernel argument:\t\t\t 1024\n  Alignment (bits) of base address:\t\t 2048\n  Minimum alignment (bytes) for any datatype:\t 128\n  Single precision floating point capability\n    Denorms:\t\t\t\t\t No\n    Quiet NaNs:\t\t\t\t\t Yes\n    Round to nearest even:\t\t\t Yes\n    Round to zero:\t\t\t\t Yes\n    Round to +ve and infinity:\t\t\t Yes\n    IEEE754-2008 fused multiply-add:\t\t Yes\n  Cache type:\t\t\t\t\t Read/Write\n  Cache line size:\t\t\t\t 64\n  Cache size:\t\t\t\t\t 16384\n  Global memory size:\t\t\t\t 2147483648\n  Constant buffer size:\t\t\t\t 65536\n  Max number of constant args:\t\t\t 8\n  Local memory type:\t\t\t\t Scratchpad\n  Local memory size:\t\t\t\t 32768\n  Max pipe arguments:\t\t\t\t 0\n  Max pipe active reservations:\t\t\t 0\n  Max pipe packet size:\t\t\t\t 0\n  Max global variable size:\t\t\t 0\n  Max global variable preferred total size:\t 0\n  Max read/write image args:\t\t\t 0\n  Max on device events:\t\t\t\t 0\n  Queue on device max size:\t\t\t 0\n  Max on device queues:\t\t\t\t 0\n  Queue on device preferred size:\t\t 0\n  SVM capabilities:\t\t\t\t \n    Coarse grain buffer:\t\t\t No\n    Fine grain buffer:\t\t\t\t No\n    Fine grain system:\t\t\t\t No\n    Atomics:\t\t\t\t\t No\n  Preferred platform atomic alignment:\t\t 0\n  Preferred global atomic alignment:\t\t 0\n  Preferred local atomic alignment:\t\t 0\n  Kernel Preferred work group size multiple:\t 64\n  Error correction support:\t\t\t 0\n  Unified memory for Host and Device:\t\t 0\n  Profiling timer resolution:\t\t\t 1\n  Device endianess:\t\t\t\t Little\n  Available:\t\t\t\t\t Yes\n  Compiler 
available:\t\t\t\t Yes\n  Execution capabilities:\t\t\t\t \n    Execute OpenCL kernels:\t\t\t Yes\n    Execute native function:\t\t\t No\n  Queue on Host properties:\t\t\t\t \n    Out-of-Order:\t\t\t\t No\n    Profiling :\t\t\t\t\t Yes\n  Queue on Device properties:\t\t\t\t \n    Out-of-Order:\t\t\t\t No\n    Profiling :\t\t\t\t\t No\n  Platform ID:\t\t\t\t\t 00007FFE8904B188\n  Name:\t\t\t\t\t\t Pitcairn\n  Vendor:\t\t\t\t\t Advanced Micro Devices, Inc.\n  Device OpenCL C version:\t\t\t OpenCL C 1.2 \n  Driver version:\t\t\t\t 2264.11\n  Profile:\t\t\t\t\t FULL_PROFILE\n  Version:\t\t\t\t\t OpenCL 1.2 AMD-APP (2264.11)\n  Extensions:\t\t\t\t\t cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_khr_dx9_media_sharing cl_khr_image2d_from_buffer cl_khr_spir cl_khr_gl_event cl_amd_liquid_flash \n\n\n  Device Type:\t\t\t\t\t CL_DEVICE_TYPE_CPU\n  Vendor ID:\t\t\t\t\t 1002h\n  Board name:\t\t\t\t\t \n  Max compute units:\t\t\t\t 4\n  Max work items dimensions:\t\t\t 3\n    Max work items[0]:\t\t\t\t 1024\n    Max work items[1]:\t\t\t\t 1024\n    Max work items[2]:\t\t\t\t 1024\n  Max work group size:\t\t\t\t 1024\n  Preferred vector width char:\t\t\t 16\n  Preferred vector width short:\t\t\t 8\n  Preferred vector width int:\t\t\t 4\n  Preferred vector width long:\t\t\t 2\n  Preferred vector width float:\t\t\t 8\n  Preferred vector width double:\t\t 4\n  Native vector width char:\t\t\t 16\n  Native vector width short:\t\t\t 8\n  Native vector width int:\t\t\t 4\n  Native vector width long:\t\t\t 2\n  Native vector width float:\t\t\t 8\n  Native vector width double:\t\t\t 4\n  Max clock frequency:\t\t\t\t 3504Mhz\n  Address bits:\t\t\t\t\t 64\n  Max memory allocation:\t\t\t 4275856384\n  Image support:\t\t\t\t Yes\n  Max number of images read arguments:\t\t 128\n  Max number of images write arguments:\t\t 64\n  Max image 2D width:\t\t\t\t 8192\n  Max image 2D height:\t\t\t\t 8192\n  Max image 3D width:\t\t\t\t 2048\n  Max image 3D height:\t\t\t\t 2048\n  Max image 3D depth:\t\t\t\t 2048\n  Max samplers within kernel:\t\t\t 16\n  Max size of kernel argument:\t\t\t 4096\n  Alignment (bits) of base address:\t\t 1024\n  Minimum alignment (bytes) for any datatype:\t 128\n  Single precision floating point capability\n    Denorms:\t\t\t\t\t Yes\n    Quiet NaNs:\t\t\t\t\t Yes\n    Round to nearest even:\t\t\t Yes\n    Round to zero:\t\t\t\t Yes\n    Round to +ve and infinity:\t\t\t Yes\n    IEEE754-2008 fused multiply-add:\t\t Yes\n  Cache type:\t\t\t\t\t Read/Write\n  Cache line size:\t\t\t\t 64\n  Cache size:\t\t\t\t\t 32768\n  Global memory size:\t\t\t\t 17103425536\n  Constant buffer size:\t\t\t\t 65536\n  Max number of constant args:\t\t\t 8\n  Local memory type:\t\t\t\t Global\n  Local memory size:\t\t\t\t 32768\n  Max pipe arguments:\t\t\t\t 16\n  Max pipe active reservations:\t\t\t 16\n  Max pipe packet size:\t\t\t\t 4275856384\n  Max global variable size:\t\t\t 1879048192\n  Max global variable preferred total size:\t 1879048192\n  Max read/write image args:\t\t\t 64\n  Max on device events:\t\t\t\t 0\n  Queue on device max size:\t\t\t 0\n  Max on device queues:\t\t\t\t 0\n  Queue on device preferred size:\t\t 0\n  
SVM capabilities:\t\t\t\t \n    Coarse grain buffer:\t\t\t No\n    Fine grain buffer:\t\t\t\t No\n    Fine grain system:\t\t\t\t No\n    Atomics:\t\t\t\t\t No\n  Preferred platform atomic alignment:\t\t 0\n  Preferred global atomic alignment:\t\t 0\n  Preferred local atomic alignment:\t\t 0\n  Kernel Preferred work group size multiple:\t 1\n  Error correction support:\t\t\t 0\n  Unified memory for Host and Device:\t\t 1\n  Profiling timer resolution:\t\t\t 292\n  Device endianess:\t\t\t\t Little\n  Available:\t\t\t\t\t Yes\n  Compiler available:\t\t\t\t Yes\n  Execution capabilities:\t\t\t\t \n    Execute OpenCL kernels:\t\t\t Yes\n    Execute native function:\t\t\t Yes\n  Queue on Host properties:\t\t\t\t \n    Out-of-Order:\t\t\t\t No\n    Profiling :\t\t\t\t\t Yes\n  Queue on Device properties:\t\t\t\t \n    Out-of-Order:\t\t\t\t No\n    Profiling :\t\t\t\t\t No\n  Platform ID:\t\t\t\t\t 00007FFE8904B188\n  Name:\t\t\t\t\t\t Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz\n  Vendor:\t\t\t\t\t GenuineIntel\n  Device OpenCL C version:\t\t\t OpenCL C 1.2 \n  Driver version:\t\t\t\t 2264.11 (sse2,avx)\n  Profile:\t\t\t\t\t FULL_PROFILE\n  Version:\t\t\t\t\t OpenCL 1.2 AMD-APP (2264.11)\n  Extensions:\t\t\t\t\t cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_d3d10_sharing cl_khr_spir cl_khr_gl_event \n\\end{lstlisting}\n% //END\n\n\\section{Sources of Error}\nPossible sources of error include mistakes in my graph CSV files, such as incorrectly added edges or a mistyped graph state for the Cannibals and Missionaries problem.\n\n\\bibliographystyle{unsrt}\n\\bibliography{bib}\n\n\n\\end{document}\n", "meta": {"hexsha": "9610ad79c4ca5c2264f990ce6d29a338ee76c742", "size": 42186, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/dijkstra.tex", "max_stars_repo_name": "bluemner/heuristics", "max_stars_repo_head_hexsha": "6ca6da99ee5864675a90f3dfa93c0ee94f43278f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/dijkstra.tex", "max_issues_repo_name": "bluemner/heuristics", "max_issues_repo_head_hexsha": "6ca6da99ee5864675a90f3dfa93c0ee94f43278f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/dijkstra.tex", "max_forks_repo_name": "bluemner/heuristics", "max_forks_repo_head_hexsha": "6ca6da99ee5864675a90f3dfa93c0ee94f43278f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4272237197, "max_line_length": 555, "alphanum_fraction": 0.6701986441, "num_tokens": 14856, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5789227764798826}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[top=2cm,bottom=2cm]{geometry}\n\\usepackage[T1]{fontenc}\n\\usepackage{url}\n\\usepackage{hyperref}\n\\usepackage{xcolor}\n% \n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{ifthen}\n\\usepackage{mathpartir}\n\\usepackage{mathtools}\n\\input{macros}\n% \n\\begin{document}\n\n\\title{CO663: Language $\\lggeEF$ --- summary\\footnote{Adapted from:\n    Harper, Robert. Practical foundations for programming\n    languages. Cambridge University Press, 2016.}}\n\n\\date{\\vspace{-7ex}} \n\\maketitle\n\n% \\pagenumbering{gobble}\n\n\\newcommand{\\lazyeval}[1]{\\framebox{\\parbox[c][#1]{\\textwidth}{\n      \\color{white}{h}% \n    }}}\n\n\\section{Syntax}\n\n\\[\n\\begin{array}{llclll} \n  & & & \\text{abstract syntax} & \\text{concrete syntax}\n  \\\\\\hline\n  \\TYPES & \\tau & \\Coloneqq & \\enum{}  & \\enum{} & \\text{numbers}\n  \\\\\n  &&& \\estr{} & \\estr{} & \\text{strings}\n  \\\\\n  &&& \\ebool{} & \\ebool{} & \\text{Booleans}\n  \\\\\n  &&& \\tyarr{\\tau_1}{\\tau_2}  & \\tau_1 \\rightarrow \\tau_2 & \\text{function}\n  \\\\ \\\\\n  % \n  \\EXPS & e & \\Coloneqq  & \\var{x} & \\var{x} & \\text{variable}\n  \\\\\n  &&& \\enum{n} & n & \\text{numeral}\n  \\\\\n  &&& \\estr{s} & \\litstr{s} & \\text{literal}\n  \\\\\n  &&&  \\ktrue & \\ktrue \n  \\\\ \n  &&&  \\kfalse & \\kfalse \n  \\\\\n  &&& \\eplus{e_1}{e_2} & e_1 {+} e_2 & \\text{addition}\n  \\\\\n  &&& \\etimes{e_1}{e_2} & e_1 {*} e_2 & \\text{multiplication}\n  \\\\\n  &&& \\ecat{e_1}{e_2} & e_1 \\concat e_2 & \\text{concatenate}\n  \\\\\n  &&& \\elen{e} & \\lvert e \\rvert & \\text{length}\n  \\\\\n  &&& \\eequal{e_1}{e_2} & e_1 \\meq e_2 & \\text{number equality}\n  \\\\\n  &&& \\site{e}{e_1}{e_2} & \\csite{e}{e_1}{e_2} & \\text{if then else}\n  \\\\\n  &&& \\elet{e_1}{x}{e_2} & \\clet{e_1}{x}{e_2} & \\text{definition}\n  \\\\\n  &&& \\elam{\\tau}{x}{e} & \\clam{\\tau}{x}{e} & \\text{abstraction}\n  \\\\\n  &&& \\eapp{e_1}{e_2} & \\capp{e_1}{e_2} & \\text{application}\n\\end{array}\n\\]\n\n\\section{Typing rules / Statics}\n\n\\[\n\\tyrule\n{\\eftyrulename{var}}\n{\\,}\n{\\tyjudge{\\Gamma_1, \\var{x}: \\tau, \\Gamma_2}{\\var{x}}{\\tau} }\n\\qquad\n\\qquad\n\\qquad\n\\tyrule\n{\\eftyrulename{let}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\tau_1}\n  \\and\n  \\tyjudge{\\Gamma, \\var{x} : \\tau_1 }{e_2}{\\tau}\n}\n{\\tyjudge{\\Gamma}{\\elet{e_1}{x}{e_2}}{\\tau}}\n\\]\n\\[\n\\tyrule\n{\\eftyrulename{str}}\n{\\,}\n{\\tyjudge{\\Gamma}{\\estr{s}}{\\estr{}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{num}}\n{\\,}\n{\\tyjudge{\\Gamma}{\\enum{n}}{\\enum{}}}\n\\]\n\\[\n\\tyrule\n{\\eftyrulename{tt}}\n{\\,}\n{\\tyjudge{\\Gamma}{\\ktrue}{\\ebool{}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{ff}}\n{\\,}\n{\\tyjudge{\\Gamma}{\\kfalse}{\\ebool{}}}\n\\]\n\\[\n\\tyrule\n{\\eftyrulename{plus}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\enum{}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\enum{}}\n}\n{\\tyjudge{\\Gamma}{\\eplus{e_1}{e_2}}{\\enum{}}}\n\\qquad\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{times}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\enum{}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\enum{}}\n}\n{\\tyjudge{\\Gamma}{\\etimes{e_1}{e_2}}{\\enum{}}}\n% \n\\]\n\\[\n\\tyrule\n{\\eftyrulename{cat}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\estr{}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\estr{}}\n}\n{\\tyjudge{\\Gamma}{\\ecat{e_1}{e_2}}{\\estr{}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{len}}\n{\n  
\\tyjudge{\\Gamma}{e}{\\estr{}}\n}\n{\\tyjudge{\\Gamma}{\\elen{e}}{\\enum{}}}\n\\]\n\\[\n\\tyrule\n{\\eftyrulename{eq}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\enum{}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\enum{}}\n}\n{\\tyjudge{\\Gamma}{\\eequal{e_1}{e_2}}{\\ebool{}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{ite}}\n{\n  \\tyjudge{\\Gamma}{e}{\\ebool{}}\n  \\and\n  \\tyjudge{\\Gamma}{e_1}{\\tau}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\tau}\n}\n{\\tyjudge{\\Gamma}{\\site{e}{e_1}{e_2}}{\\tau}}\n\\]\n% \n% \n\\[\n\\tyrule\n{\\eftyrulename{lam}}\n{\n  \\tyjudge{\\Gamma, \\var{x} : \\tau_1}{e}{\\tau_2}\n}\n{\\tyjudge{\\Gamma}{\\elam{\\tau_1}{x}{e}}{\\tyarr{\\tau_1}{\\tau_2}}}\n\\qquad\\qquad\\qquad\n\\tyrule\n{\\eftyrulename{ap}}\n{\n  \\tyjudge{\\Gamma}{e_1}{\\tyarr{\\tau'}{\\tau}}\n  \\and\n  \\tyjudge{\\Gamma}{e_2}{\\tau'}\n}\n{\n  \\tyjudge{\\Gamma}{\\eapp{e_1}{e_2}}{\\tau}\n}\n\\]\n\n\\section{Semantics / Dynamics}\n\n\\subsection{Values}\n\\[\n\\semrule{}\n{\\,}\n{\\valjudge{\\enum{n}}}\n\\qquad \\qquad  \\qquad\n\\semrule{}\n{\\,}\n{\\valjudge{\\estr{s}}}\n\\qquad \\qquad  \\qquad\n\\semrule{}\n{\\,}\n{\\valjudge{\\ktrue}}\n\\qquad \\qquad  \\qquad\n\\semrule{}\n{\\,}\n{\\valjudge{\\kfalse}}\n\\]\n\n\\[\n\\semrule\n{\\efsemrulename{lam}}\n{\\,}\n{\\valjudge{\\elam{\\tau}{x}{e}}}\n\\]\n\\subsection{Reductions}\n\nThe main semantic rules are as follows (the missing ones can easily be\nre-constructed).\n\\[\n\\semrule\n{\\efsemrulename{plval}}\n{n_1 + n_2 = n}\n{\\jtrans{\\eplus{\\enum{n_1}}{\\enum{n_2}}}{\\enum{n}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\efsemrulename{pll}}\n{\\jtrans{e_1}{e'_1}}\n{\\jtrans{\\eplus{e_1}{e_2}}{\\eplus{e'_1}{e_2}}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{plr}}\n{ \\valjudge{e_1}\n  \\and\n  \\jtrans{e_2}{e'_2}}\n{\\jtrans{\\eplus{e_1}{e_2}}{\\eplus{e_1}{e'_2}}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{tmval}}\n{n_1 \\times n_2 = n}\n{\\jtrans{\\etimes{\\enum{n_1}}{\\enum{n_2}}}{\\enum{n}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\efsemrulename{tml}}\n{\\jtrans{e_1}{e'_1}}\n{\\jtrans{\\etimes{e_1}{e_2}}{\\etimes{e'_1}{e_2}}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{tmr}}\n{ \\valjudge{e_1}\n  \\and\n  \\jtrans{e_2}{e'_2}}\n{\\jtrans{\\etimes{e_1}{e_2}}{\\etimes{e_1}{e'_2}}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{catval}}\n{s_1 \\concat s_2 = s}\n{\\jtrans{\\ecat{\\estr{s_1}}{\\estr{s_2}}}{\\estr{s}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\efsemrulename{catl}}\n{ \\jtrans{e_1}{e'_1}}\n{\\jtrans{\\ecat{e_1}{e_2}}{\\ecat{e'_1}{e_2}}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{catr}}\n{\\valjudge{e_1}\n  \\and\n  \\jtrans{e_2}{e'_2}}\n{\\jtrans{\\ecat{e_1}{e_2}}{\\ecat{e_1}{e'_2}}}\n\\]\n% \n\\[\n\\semrule\n{\\efsemrulename{eqt}}\n{n_1 == n_2}\n{\\jtrans{\\eequal{\\enum{n_1}}{\\enum{n_2}}}{\\ktrue}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\efsemrulename{eqf}}\n{n_1 /= n_2}\n{\\jtrans{\\eequal{\\enum{n_1}}{\\enum{n_2}}}{\\kfalse}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{eql}}\n{ \\jtrans{e_1}{e'_1}}\n{\\jtrans{\\eequal{e_1}{e_2}}{\\eequal{e'_1}{e_2}}}\n\\qquad \\qquad  \\qquad\n\\semrule\n{\\efsemrulename{eqr}}\n{\\valjudge{e_1}\n  \\and\n  \\jtrans{e_2}{e'_2}}\n{\\jtrans{\\eequal{e_1}{e_2}}{\\eequal{e_1}{e'_2}}}\n\\]\n% \n\\[\n\\semrule{\\efsemrulename{lenv}}\n{\\lvert s \\rvert == n}\n{\\jtrans{\\elen{\\estr{s}}}{\\enum{n}}}\n\\qquad \\qquad  \\qquad\n\\semrule{\\efsemrulename{lene}}\n{\\jtrans{e}{e'}}\n{\\jtrans{\\elen{e}}{\\elen{e'}}}\n\\]\n% \n\\[\n\\semrule\n{\\efsemrulename{itet}}\n{\\,}\n{\\jtrans{\\site{\\ktrue}{e_1}{e_2}}{e_1}}\n\\qquad \\qquad  
\\qquad\n\\semrule\n{\\efsemrulename{itef}}\n{\\,}\n{\\jtrans{\\site{\\kfalse}{e_1}{e_2}}{e_2}}\n\\]\n\\[\n\\semrule\n{\\efsemrulename{ite}}\n{\\jtrans{e}{e'}}\n{\\jtrans{\\site{e}{e_1}{e_2}}{\\site{e'}{e_1}{e_2}}}\n\\]\n\n\n\n\nTo reduce \\texttt{let} constructs, we can use either:\n\\begin{itemize}\n\\item Call-by-name (``lazy''):\n  \\[\n  \\semrule\n  {\\efsemrulename{letn}}\n  {\\,}\n  {\n    \\jtrans{\\elet{e_1}{x}{e_2}}{\\subs{e_2}{e_1}{x}}}\n  \\]\n\\item Call-by-value (``strict''):\n  {\\color{gray}\n    \\[\n    \\semrule\n    {\\efsemrulename{letm}}\n    {\\jtrans{e_1}{e'_1}}\n    {\n      \\jtrans{\\elet{e_1}{x}{e_2}}{\\elet{e'_1}{x}{e_2}}}\n    \\qquad \\qquad  \\qquad\n    \\semrule\n    {\\efsemrulename{letv}}\n    {\\valjudge{e_1}}\n    {\n      \\jtrans{\\elet{e_1}{x}{e_2}}{\\subs{e_2}{e_1}{x}}}\n    \\]\n  }\n  % \n  %\n  % \n  \\[\n  % \\qquad \\qquad  \\qquad\n  % \n  \\semrule\n  {\\efsemrulename{ap}}\n  {\\jtrans{e_1}{e'_1}}\n  {\n    \\jtrans{\\eapp{e_1}{e_2}}{\\eapp{e'_1}{e_2}}\n  }\n  \\]\n  To reduce \\texttt{ap} constructs (function applications), we can use\n  either:\n\\item Call-by-name (``lazy''):\n  \\[\n  \\semrule\n  {\\efsemrulename{apn}}\n  {\\,}\n  {\\jtrans{\\eapp{\\elam{\\tau}{x}{e_1}}{e_2}}{\\subs{e_1}{e_2}{x}}}\n  \\]\n\\item Call-by-value (``strict''):\n\n  \\[\n  \\semrule\n  {\\efsemrulename{apr}}\n  {\\valjudge{e_1} \n    \\and \\jtrans{e_2}{e'_2}\n  }\n  {\\jtrans{\\eapp{e_1}{e_2}}{\\eapp{e_1}{e'_2}}}\n  \\qquad \\qquad  \\qquad\n  % \n  \\semrule\n  {\\efsemrulename{apl}}\n  {\\valjudge{e_2}}\n  {\\jtrans{\\eapp{\\elam{\\tau}{x}{e_1}}{e_2}}{\\subs{e_1}{e_2}{x}}}\n  \\]\n\n\\end{itemize}\n\n\n\\section*{\\LaTeX\\ template}\nThe \\LaTeX\\ sources of this document are available in:\n\\url{https://github.com/ukc-co663/pl-design-latex-template}.\n\n\\end{document}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: t\n%%% End:\n", "meta": {"hexsha": "2c93d4f26c7ee09186825e8e6c91493f4cf6645d", "size": 7872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lgge-ef-summary.tex", "max_stars_repo_name": "ukc-co663/pl-design-latex-template", "max_stars_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lgge-ef-summary.tex", "max_issues_repo_name": "ukc-co663/pl-design-latex-template", "max_issues_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lgge-ef-summary.tex", "max_forks_repo_name": "ukc-co663/pl-design-latex-template", "max_forks_repo_head_hexsha": "04965c2dc2cc8c8a34abb48adb3def95d36b4e03", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.6983372922, "max_line_length": 75, "alphanum_fraction": 0.5899390244, "num_tokens": 3585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478255, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.578922766769564}}
{"text": "\\section{Action Planning and Verification (Scott)}\n\nThe action planning process involves modeling the manipulator behavior and verifying a controller with respect to the model and any safety conditions specified by the user for a given action.\nThe trajectory planner is responsible for executing a rough motion of the manipulator to some approximate pose near the final desired location of the end effector (EFF).\nIn the case of the modular robot assembly, the trajectory planner moves the modular robot component to be attached to a location near the final attachment EFF pose.\nThe task planner provides the desired translation of the EFF in the global reference frame (necessary for making the connection) to the action planner, which then executes the motion once a controller has been verified.\n\n\n\\subsection{Continuous System Verification}\nIn order to avoid undesirable robot motions, the user specifies a set of unsafe states of the robot, $\\varphi$, for a given action or motion of the manipulator.\nFor the case of modular robot assembly, the unsafe set is defined to be those poses of the end effector during an attachment motion that may result in a failed attachment or improper connection (magnet misalignment).\nA given controller may be verified to cause the system remain within the safe set (no intersection with the unsafe set) by computing the set of all reachable states, $\\textit{ReachSet}_{\\mbox{\\textit{H}}}$, from some initial condition ,$X_0 \\subseteq X$.\nHowever, determining the exact set of reachable states is difficult \\cite{Henzinger:1995:WDH:225058.225162}, and an over approximation of the reachable set is instead computed.\nThe controller is verified if the over approximated set is contained within the set of states that satisfy the safety conditions, $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}} \\subseteq \\mbox{Sat}(\\varphi)$.\nThe controller and motion are not verified if there is an intersection of the reachable states and the unsafe set, $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}} \\cap \\neg \\mbox{Sat}(\\varphi) \\neq \\emptyset$.\nIn the case where some intersection of the unsafe set and reachable set occurs, no conclusions may be drawn from the analysis as the reach set is an over approximation, and re-planning of the manipulator (by the trajectory planner) is required.\n\nThis process is similar to the framework in \\cite{6016596}, in which the control architectures of an autonomous robotic surgery manipulator are verified for a puncturing task, whereby the EFF (puncturing instrument) is guaranteed to remain within some pose bounds.\nIn this work, the reachability tool Flow* \\cite{Chen2013} is used to over approximate the system dynamics using Taylor Model flowpipes.\nFlow* is capable of handling nonlinear systems, but as manipulators are highly nonlinear, the dynamics of the Baxter arms are linearized in an effort to increase computation speed. 
\n\\subsection{Manipulator Modeling}\nThe dynamics of each of the Baxter arms can be represented as:\n\\begin{equation}\n\t\\mathbf{M}(\\underline{q})\\ddot{\\underline{q}} + \\mathbf{C}(\\underline{q},\\underline{\\dot{q}})\\dot{\\underline{q}} + \\mathbf{G}(\\underline{q}) = \\underline{T}_{EFF} + \\underline{T}_{in}\n\\end{equation}\nwhere $\\mathbf{M}$ and $\\mathbf{C}$ represent the inertial effects, $\\mathbf{G}$ represents joint torques due to gravity, $\\underline{T}_{EFF}$ represents the joint torques due to EFF load, and $\\underline{T}_{in}$ represents the input joint torques.\nIn this case, linearizing the arm dynamics involves evaluating the nonlinear matrices $\\mathbf{M}$, $\\mathbf{C}$, and $\\mathbf{G}$ for a given arm configuration.\nAs the action planner only executes arm motions for small EFF displacements (and consequently small changes in joint angle), it is assumed that these matrices remain constant for the duration of the arm motion executed by the action planner.\nIt should be noted that Coriolis effects, $\\mathbf{C}$, are ignored due to the small motions of the arm.\n\n\n\\subsection{ROS Action Server and Code Diagram}\nThe action planner, comprising the reachability analysis, controller verification, and joint torque controller execution, is implemented as an action server in ROS.\nThe code block diagram is shown in Fig. \\ref{fig:actionPlannerFlow}.\nThe action server, called by the task planner when a connection needs to be performed, polls the current state of the arm holding the module to be attached, computes the linearized matrices of the manipulator equation, generates a Flow* file, performs the reachability computation for the given feedback controller, and then executes the motion if the controller/motion is verified.\nThe server creates an instance of Matlab when launched, which is then used for the kinematics computations and controller design.\nThe kinematics computations and linear model generation are done using the Matlab Robotics Toolbox \\cite{Corke11a}.\n\nInitially, the intent was to model the manipulator and controller(s) as a hybrid system, consisting of the continuous manipulator dynamics coupled with discrete controller states for aligning and approaching a connection.\nHowever, the hybrid system model was discarded due to the complexity of developing a controller for the attachment motions.\n\n\\begin{figure}[h]\n\t\\includegraphics[width = 8 cm]{action_server_2.png}\n\t\\caption{Action Planner Module}\n\t\\label{fig:actionPlannerFlow}\n\\end{figure}\n\n\n\\subsection{Difficulties with Baxter Joint Torque Control}\nThe initial controller used in the action planner server was a linear quadratic regulator (LQR) controller computed by Matlab.\nThe LQR controller provided feedback joint torques while a static input was added to counteract gravitational effects ($\\mathbf{G}$ torques).\nHowever, this controller, when implemented on the Baxter platform, resulted in erratic arm motions despite satisfactory performance in the Baxter simulator.\nThe Baxter platform provides a ``zero-gravity'' mode of operation whereby the robot continually provides torques to each joint sufficient to counteract gravitational effects.\nThe LQR controller, when implemented in ``zero-gravity'' mode (without the added gravitational torque compensation), resulted in arm ``drift,'' whereby the arm would slowly diverge from the initial condition.\nIncreasing the control effort did not improve the ability of the LQR controller to overcome the perceived arm
``drift.''\nWhen the LQR controller was implemented with the ``zero-gravity'' mode disabled (during which the static gravitational torque inputs were included), the resulting arm motion was wildly erratic.\nOperating with the ``zero-gravity'' mode disabled is not recommended.\n\nThe controller implemented in the final version of the action planner server (torque control) is a simple PD controller.\nThe Baxter robot has pre-defined joint spring and damping coefficients that can be called for PD joint torque control.\nThis controller (when used in ``zero-gravity'' mode) results in stable arm motion, both in simulation and on the physical robot.\nHowever, the default spring and damping values result in significant arm oscillations when the robot is moving from one configuration to another.\nAs oscillations are undesirable when connecting modular components, the damping values are increased.\nIt should be noted that both the spring and damping coefficient values are passed to the action planner server when called and, as such, may be altered at runtime by the task planner.\n
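\nThe torque law used here is summarized by the sketch below; the gain values are illustrative and the gravity term is assumed to be supplied by the ``zero-gravity'' mode, so this is a sketch rather than the Baxter SDK's actual interface.\n\\begin{verbatim}\n# Per-joint PD torque law: the spring term pulls each joint toward\n# its commanded angle and the damping term opposes joint velocity.\ndef pd_torques(q, q_dot, q_des, k_spring, k_damp):\n    return [ks * (qd - qi) - kd * vi\n            for qi, vi, qd, ks, kd\n            in zip(q, q_dot, q_des, k_spring, k_damp)]\n\ntau = pd_torques(q=[0.0, 0.5], q_dot=[0.1, 0.0], q_des=[0.2, 0.5],\n                 k_spring=[30.0, 30.0], k_damp=[4.0, 4.0])\nprint(tau)  # [5.6, 0.0]\n\\end{verbatim}\nRaising the damping entries suppresses the oscillations described above, at the price of stiffer motion.\n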
\n\\subsection{Verified Motion Example}\nThe results of the execution of a simple motion command on the left arm by the action planner are shown in Figures \\ref{fig:effX}, \\ref{fig:effY}, and \\ref{fig:effZ}.\nIn this example, the input desired EFF motion is approximately 10 cm in the negative z direction (toward the ground) in the Baxter global reference frame, starting in the neutral arm position.\nThe values plotted are with respect to the starting $x/y/z$ EFF position.\n\n\\begin{figure}[h]\n\t\\includegraphics[width = 9 cm]{effx_example.png}\n\t\\caption{Example Action Planner Execution, EFF $X$ motion}\n\t\\label{fig:effX}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\includegraphics[width = 9 cm]{effy_example.png}\n\t\\caption{Example Action Planner Execution, EFF $Y$ motion}\n\t\\label{fig:effY}\n\\end{figure}\n\n\\begin{figure}[h]\n\t\\includegraphics[width = 9 cm]{effz_example.png}\n\t\\caption{Example Action Planner Execution, EFF $Z$ motion}\n\t\\label{fig:effZ}\n\\end{figure}\n\nThe Flow* approximations, shown in green, show the computed reachable set for each EFF position variable $(x,y,z)$ as a function of time.\nThe range of the reachable set is small at each time step because the initial conditions for the computation are scalar values rather than ranges.\nGiven a range of initial conditions (representing, for example, error in the measured joint angles), the computed reachable sets would show a greater range of values; the green lines would appear more like funnels showing the over-approximated upper and lower bounds for the variables.\nThe reachable sets shown here are computed with scalar initial conditions for clarity.\n\nThe reachable sets computed from the linear model predict small deviations ($<$ 2 cm) of the $x$ and $y$ EFF values during the motion and show the $z$ value converging to the desired value of $-0.1$, albeit with some oscillation.\nDespite the nearly 5 cm overshoot in the $z$ direction predicted by the linear model, the small predicted $x$ and $y$ deviations may be sufficient for successfully attaching a module.\nHowever, the EFF motions recorded in the Baxter simulator and physical robot clearly do not agree with the predicted motions.\nThe data from the simulated Baxter robot shows significantly increased oscillations and the data from the physical robot shows stiff, heavily damped motions.\nThe motions of the physical robot during the execution of the action planner were observed to be stiff, jerky, and unrepeatable, likely due to significant, asymmetric friction in the arm joints.\nIncreasing the damping control term increased the stiff, jerky arm motions, and decreasing the damping term resulted in continual oscillations; the damping parameter was ``tuned,'' but ultimately the arm behavior was too erratic to be effective.\nAs such, the output of the reachable set computation is not valid in this case for determining the safety of a given controller.\n\n\n\\subsection{First Order Joint Control for Demonstration}\nA first order (joint speed) controller was implemented in an effort to avoid the erratic behavior of the LQR and PD joint torque controllers.\nUsing the global reference frame Jacobian computed in the Matlab Robotics Toolbox, the required joint velocities are computed for a desired end effector translation.\nThis controller was used for the demonstration examples.\n\n\\subsection{Action Planner - Successes and Failures}\nThe implementation of the action planner as a ROS action server, integration of Matlab and the Robotics Toolbox, and Flow* model generation and computation were all successful during this project.\nHowever, the torque controller developed for this work was ultimately unsuccessful due to the erratic and unpredictable nature of the arm motions during joint torque control.\nFuture work on developing an effective joint torque controller for the Baxter robot should focus on characterizing each arm joint with respect to friction and damping.\nThis characterization may allow for properly determining the PD stiffness and damping terms for each joint for a given motion.\n\nIn addition, future work should focus on the validity of the linearized manipulator model.\nBounding the error of the linearization will allow the reachability computation to formally verify the controller/motion.\n\n\n\n\n\n\\begin{comment}\nThe action planning stage of the assembly process is broken down into two parts: modeling the manipulator behavior as a hybrid system and verifying safety conditions of the end effector motions (making sure that undesirable motions do not occur).\n\n\\subsection{Hybrid system modeling}\nThe trajectory planner discussed in the previous section is responsible for moving each arm (trajectory execution) to within some bound of the final desired end effector position (determined by the task planner) for either (i) grasping or (ii) component alignment. 
\nThe trajectory planner provides to the action planner the final positions of the end effectors with respect to the appropriate blocks for (i) grasping, or (ii) alignment at attachment.\nIn either case, it is beneficial to avoid damaging a robot module or failing to complete a grasp or attachment due to position and alignment errors.\nFor the proposed scenario of assembling modular robotic components, we implement a system similar to a framework \\cite{6016596} developed to address the verification of control architectures for autonomous robotic surgery manipulation tasks with respect to various safety conditions, denoted as $\\varphi$, such as avoiding end effector misalignment or excessive force application.\n\nIn the previously mentioned framework, the system is modeled as a hybrid automaton \\cite{Alur1993}, composed of discrete states (tasks/behaviors) and continuous dynamics associated with each state or behavior.\nIn our case of modular robot assembly, each state consists of \\textit{fast approach}, which is handled by the trajectory planner, and \\textit{slow approach}, \\textit{alignment}, \\textit{grasping}, and \\textit{connecting}, handled by the action planner. \nEach state also contains a controller, a set of invariants (conditions required to be true while the automaton is in that state), and a set of guards (conditions that must be true in order for a state transition to take place).\n\n\\subsection{Verification of Behaviors}\nIn order to verify that the safety conditions $\\varphi$ are satisfied for the discrete state controllers, the reachable set of the system $\\textit{ReachSet}_{\\mbox{\\textit{H}}}$, which is the set of all reachable states from some initial condition, $X_0$, can be compared to the set of states for which the safety properties are not satisfied, $\\neg \\mbox{Sat}(\\varphi)$.\nHowever, computing the exact set of reachable states is difficult \\cite{Henzinger:1995:WDH:225058.225162} and approximations may be used instead.\nIn this case, a reachability tool (e.g. 
\\textit{SpaceEx}~\\cite{rehseGDCRLRGDM11}), which computes approximate reachable sets given the system dynamics and controllers, can be used to over-approximate the reachable set of the hybrid system $\\textit{ReachSet}_{\\mbox{\\textit{H}}}$ such that $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}} \\supseteq \\mbox{Sat}(\\varphi)$, where $\\mbox{Sat}(\\varphi)$ is the set of states that satisfies $\\varphi$.\nIf $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}}$ does not intersect with the set of states for which the safety conditions $\\varphi$ is not satisfied, $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}} \\cap \\neg \\mbox{Sat}(\\varphi) = \\emptyset$, the safety of the system is verified.\n\nIn the case that $\\mbox{\\textit{ReachSet}}_{\\mbox{\\textit{H}}} \\cap \\neg \\mbox{Sat}(\\varphi) \\neq \\emptyset$, the inconclusive nature of the system is communicated to the trajectory planner and the system will either (i) re-plan the arm motions or (ii) adjust the boundaries specified in the safety conditions, $\\varphi$ and recompute the system reachability.\n\n\\end{comment}\n\n", "meta": {"hexsha": "c43c915a9d28ead02d80614df4d2f9fb687787c2", "size": 15302, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "final_report/actionPlanner.tex", "max_stars_repo_name": "jimjing/MAndM", "max_stars_repo_head_hexsha": "efd6581851814ace386cc59285d6290ea096630b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "final_report/actionPlanner.tex", "max_issues_repo_name": "jimjing/MAndM", "max_issues_repo_head_hexsha": "efd6581851814ace386cc59285d6290ea096630b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "final_report/actionPlanner.tex", "max_forks_repo_name": "jimjing/MAndM", "max_forks_repo_head_hexsha": "efd6581851814ace386cc59285d6290ea096630b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 104.8082191781, "max_line_length": 441, "alphanum_fraction": 0.7981963142, "num_tokens": 3316, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587964389112, "lm_q2_score": 0.6513548782017745, "lm_q1q2_score": 0.5788973776052228}}
{"text": "\\section{Module 70 : Static Single Assignment IR}\nConsider the following example\n\n    \\[x = y+z\\]\n    \\[x = x+1\\]\n    \\[w=y+z\\]\n    \\[z=x+3\\]\n    \nIn this example, the computation $y+z$ could be reused.\nIn SSA, we assign versions to variables(as below) and each version has only one assignment to it. SSA stands for single assignment in a static program. \n    \\[x_1 = y+z\\]\n    \\[x_2 = x_1+1\\]\n    \\[w_1=y+z\\]\n    \\[v_1=x_2+3\\]\n    \nUsing SSA, we can not optimize the code by rewriting $w_1 = y+z$ as $w_1 = x_1$.\n\n    \\[x_1 = y+z\\]\n    \\[x_2 = x_1+1\\]\n    \\[w_1=x_1\\]\n    \\[v_1=x_2+3\\]\n    \nWhy SSA? Advantages of SSA:\n\\begin{itemize}\n    \\item Optimization algorithms become simpler if each variable has only one definition.\n    \\item Unrelated uses of same variable become independent.\n    \\item More values become available at each program point.\n\\end{itemize}\nTherefore, SSA is a very popular method of IR design. LLVM IR is also an SSA IR.\n\n\\subsection{Converting to SSA}\n\\begin{itemize}\n    \\item Replace the target of each assignment with a new variable.\n    \\item Replace each use of a variable with the version of the variable reaching that point.\n\\end{itemize}\nAgain, taking an example:\n\n    \\[x = y+z\\]\n    \\[a = b+x\\]\n    \\[x=a+3\\]\n    \\[y=x-a\\]\nOn applying the two rules for conversion to SSA, these statements change to :\n     \\[x_1 = y+z\\]\n    \\[a = b+x_1\\]\n    \\[x_2=a+3\\]\n    \\[y=x_2-a\\]\n    \n\n\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example1.png}\n\\caption{Converting to SSA Example}\n\\end{figure*}\n\nConsider another example as shown in Figure 1:\n\nIn this example, there are 2 variables of interest. $x$ is assigned thrice and $y$ is assigned thrice. On applying the rules of SSA conversion, here is how the code snippet looks like.\n\nThe versioning is straightforward for variable $x$. For $y$ in this example, there are two versions reaching at the bottom block from two different paths. The question is how we version the variable $y$ at bottom block. To answer that , we need to understand about basic block\n\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example2.png}\n\\caption{Converting to SSA Example - With Versions}\n\\end{figure*}\n\n\\subsection{Basic Block}: \nA basic block is a maximal set of instructions with \n\\begin{itemize}\n    \\item no labels( except at the first instruction)\n    \\item no jumps (except in the last instruction)\n\\end{itemize}\n\nIn the example, each rectangle is a basic block. \nIdea:\n\\begin{itemize}\n    \\item Cannot jump into a basic block (except at beginning)\n    \\item cannot jump out of a basic block (except at end)\n    \\item Single-entry single-exit straight line code segment\n\\end{itemize}\n\nAt the bottom block, there are two version reaching for variable $y$. To represent that we use $\\Phi$ node or $\\Phi$ function. So the version of y can be represented as a function of $y_1$ and $y_2$ as shown in figure XXX. This is an ordered set.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example3.png}\n\\caption{Converting to SSA Example - PHI Nodes}\n\\end{figure*} \n\n\\subsection{PHI($\\Phi$) Nodes}\n\\begin{itemize}\n    \\item $\\Phi$ function chooses the version depending on the incoming edge.\n    \\item Present only at the beginning of a basic block. 
\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example1.png}\n\\caption{Converting to SSA Example}\n\\end{figure*}\n\nConsider another example as shown in Figure 1:\n\nIn this example, there are two variables of interest: $x$ is assigned three times and $y$ is assigned three times. On applying the rules of SSA conversion, here is how the code snippet looks.\n\nThe versioning is straightforward for variable $x$. For $y$ in this example, there are two versions reaching the bottom block from two different paths. The question is how we version the variable $y$ at the bottom block. To answer that, we first need to understand basic blocks.\n\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example2.png}\n\\caption{Converting to SSA Example - With Versions}\n\\end{figure*}\n\n\\subsection{Basic Block}\nA basic block is a maximal set of instructions with \n\\begin{itemize}\n    \\item no labels (except at the first instruction)\n    \\item no jumps (except in the last instruction)\n\\end{itemize}\n\nIn the example, each rectangle is a basic block. \nIdea:\n\\begin{itemize}\n    \\item Cannot jump into a basic block (except at the beginning)\n    \\item Cannot jump out of a basic block (except at the end)\n    \\item Single-entry single-exit straight-line code segment\n\\end{itemize}\n\nAt the bottom block, there are two versions of variable $y$ reaching. To represent that, we use a $\\Phi$ node or $\\Phi$ function. So the version of $y$ can be represented as a function of $y_1$ and $y_2$, as shown in Figure 3. The arguments of the $\\Phi$ function form an ordered list, one per incoming edge.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[height=8cm]{images/Example3.png}\n\\caption{Converting to SSA Example - PHI Nodes}\n\\end{figure*} \n\n\\subsection{PHI($\\Phi$) Nodes}\n\\begin{itemize}\n    \\item The $\\Phi$ function chooses the version depending on the incoming edge.\n    \\item Present only at the beginning of a basic block, since this is the only place where multiple versions can flow in through different incoming edges.\n\\end{itemize}\n\n", "meta": {"hexsha": "287be2aa5fa809555d3db535373bc1ed9a826553", "size": 3376, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "module70.tex", "max_stars_repo_name": "arpit-saxena/compiler-notes", "max_stars_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "module70.tex", "max_issues_repo_name": "arpit-saxena/compiler-notes", "max_issues_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "module70.tex", "max_forks_repo_name": "arpit-saxena/compiler-notes", "max_forks_repo_head_hexsha": "af3788cde815a5b1d19f206ec8605c0e372c1833", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-02-16T08:32:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T19:11:33.000Z", "avg_line_length": 35.1666666667, "max_line_length": 276, "alphanum_fraction": 0.7085308057, "num_tokens": 960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548782017745, "lm_q2_score": 0.8887587957022977, "lm_q1q2_score": 0.5788973771254259}}
{"text": "% Copyright 2018 Melvin Eloy Irizarry-Gelp\u00ed\n\\chapter{Hyper AS Quaternions}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nAn exo-1 split binion has the form\n\\begin{equation}\n    a_{0} + a_{1} S + a_{2} W + a_{3} SW\n\\end{equation}\nThese follow from a parabolic Cayley-Dickson construct on the split binions.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Arithmetic Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Multiplication}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Conjugate Operations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Asterisk Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Quadrance}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Cloak Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dagger Conjugation}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hodge Star}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Differential Operators}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n...", "meta": {"hexsha": "ebc83823a94d85c778e42310aa9a76f9533fca19", "size": 2102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/J.tex", "max_stars_repo_name": "meirizarrygelpi/pcdc", "max_stars_repo_head_hexsha": "d9f64322a3bc130024c3c8662cc297ba1a16f154", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/J.tex", "max_issues_repo_name": "meirizarrygelpi/pcdc", "max_issues_repo_head_hexsha": "d9f64322a3bc130024c3c8662cc297ba1a16f154", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/J.tex", "max_forks_repo_name": "meirizarrygelpi/pcdc", "max_forks_repo_head_hexsha": "d9f64322a3bc130024c3c8662cc297ba1a16f154", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.7727272727, "max_line_length": 80, "alphanum_fraction": 0.1983824929, "num_tokens": 227, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587875995482, "lm_q2_score": 0.6513548511303338, "lm_q1q2_score": 0.5788973477876797}}
{"text": "\\chapter{Introduction and History }\n\nIn mathematics, logic, and thoretical computer science, a \\textit{Type System} is any of a class of formal systems which can serve as alternative standard logic system of sets. We consider every \"\\textit{object}\" has a \"type\" and all different kinds of operations are restricted to the objects of only a particular type.\\\\\n\n\\textbf{Type Theory} is closely related to and is often served as the basis of modern computer's type systems, which are a \\textit{programming language feature} for common bug reduction. Type theory was studied to avoid paradoxes like \\underline{Kleene\u2013Rosser paradox , Rossel's Paradox} \\cite{russel} in a variety of formal logics and to rewrite systems. It describes the correctness of step-by-step working of an abstract model of machine. This along with complexity theory does constitute the system theory for complete functioning of any computational machine.\\\\\n\nOrganized research for the mathematics foundation started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which strongly links to theoretical computer science. It went through a lot of paradoxical results, until the discoveries stabilized during the 20th century with finally imparting mathematical knowledge with several new aspects and components known as  \\textit{set theory, model theory, proof theory, etc.} , whose properties are still an active research field. \\textit{Stephan Wolfram} in his book titled \\underline{New Kind of Science} \\cite{newkindofscience} explores the computational aspects of machine and tries to focus on the idea that simple systems can actually reason perfectly for complex behaviour of a large systems. Classical examples of simple Cellular Automaton* and Turing-Complete machines have been studied.\\\\\n\nType theory in a similar sense argues that complex working can be induced from logic constraints of simpler systems. A small Type system can thus account for type-safe* behaviour of a computer system. Now for better comprehension we use the concept of $(\\lambda)$-calculus to combine abstraction of type models. In mathematics, two well-known type theories that can serve as logic foundation for a system are \\textit{Alonzo Church's} \\textbf{ typed $(\\lambda)$-calculus} \\cite{typed_lambda} and \\textit{Per Martin-Lof's } \\textbf{intuitionistic type theory}\\cite{per_martin_tt}. \n\n\\section{$\\lambda$-Calculus}\n\nLambda calculus\\cite{lambda_calculus} is a formal system in mathematical logic for expressing computation based on function abstraction and its application using variable binding and substitution. It is an accepted universal model of computation based on reduction and abstraction that can be used to theoretically understand behaviour of Turing machine. It was introduced in 1930s by mathematician Alonzo Church. This deals with constructing lambda terms (variable terms which change according to conditions and boundness) and performing reduction operations on them. Reduction Operations consist of $\\alpha  and  \\beta $ transformations. The former deals with renaming bound variable's name and second with replacing the bound variable with the argument expression in the body of the abstraction.It follows left associtivity i.e \\textit{fgh} is just syntactic sugar (alternate form of represetation ) for \\textit{(fg)h}.\\\\\n\nA working thumb rule for performing reductions :\n\\[\n \\boxed {(\\lambda param . 
\\section{Intuitionistic Type Theory}\n\nIntuitionistic type theory (also known as constructive type theory, or Martin-L\u00f6f type theory) has mathematical constructs built to follow a one-to-one correspondence with logical connectives. For example, the logical connective implication ($A \\implies B$) corresponds to the type of a function ($A \\to B$). This correspondence is called the \\textit{Curry\u2013Howard isomorphism}; it is the type-theoretic counterpart of a unique mapping with an existing inverse in the theory of sets. Previous type theories also followed this isomorphism, but Martin-L\u00f6f's was the first to extend it by introducing \\textbf{dependent types}, which allow types themselves to depend on terms, and hence higher-order functions to depend on basic ones.\\\\\n\n\\hfill \\break \\textbf{Machine Assisted Proving} dates back as early as 1976, when the four color theorem was verified using a computer program. The discovery of the butterfly effect was likewise made possible by the computer simulation of a program given some finite initial states. Most computer-aided proofs to date have been implementations of large \\textit{proofs by cases} of a mathematical theorem, also called \\textit{proof by induction}, where child cases are considered first in an attempt to fully prove a theorem. Various attempts have been made in the area of \\textit{Artificial Intelligence} research to create smaller, explicit proofs of mathematical theorems using machine reasoning techniques such as \\textit{heuristic search}.\\\\\n\nSuch \\textbf{Automated Theorem Provers} have found new proofs for known theorems and have also given a number of new results. Additionally, interactive proof assistants allow mathematicians to develop acceptable human-readable proofs, which are then verified again in a similar procedure.\n\n\\section{Proof Assistants Introduction}\n\nMachine theorem proving basically involves model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much more robustness for checking every case, and does not simply reduce to brute force). There are \\textit{hybrid} theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with the assumption that if the program finishes with a certain result, then the proposed theorem is true.\\\\\n\nIn computer science and mathematical logic, a \\textbf{Proof Assistant or Interactive Theorem Prover} is a software tool to help with the development of formal proofs. This involves an interactive editor, or other interface, with which a human can guide the search for proofs. Some steps are reduced by the computer, and base cases are enumerated to be proved. We will discuss this part later. \\\\\n\n\\hfill \\break In present programming language research, the correctness of \\textit{computer programs} is proved using similar notions, considering some \"pre\" and \"post\" conditions. This idea can thus be extended to proving the correctness of a programming language's semantics. \\\\\n
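\nAs a small illustration of the \"pre\" and \"post\" idea, consider the contract-style sketch below; the function and its conditions are hypothetical and are checked at runtime here, whereas a proof assistant would discharge them statically.\n\\begin{verbatim}\n# Precondition assumed on entry, postcondition asserted on exit.\ndef integer_sqrt(n):\n    assert n >= 0                          # pre: n is non-negative\n    r = 0\n    while (r + 1) * (r + 1) <= n:\n        r += 1\n    assert r * r <= n < (r + 1) * (r + 1)  # post: r = floor(sqrt(n))\n    return r\n\nprint(integer_sqrt(10))  # 3\n\\end{verbatim}\n\n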
Current developments in quantum computing make strong use of these proof-based programming languages, mainly because of the probabilistic behaviour of the \\textit{quantum states} of \\underline{qubits}. All that we have are probabilities of the existence of a given quantum state. If we can organize the chaos to some extent by \\textit{categorizing} those probabilities into a few cases, then the later operations and control logic will have to transfer the qubits' states inductively. This is analogous to how we deal with things in type theory. Therefore, developing a strongly typed abstract model will benefit quantum computing to a much greater extent.\\\\\n\nWe will understand the working of proof assistants by first revisiting topics in abstract algebra, computer architecture and compiler technology. We will then proceed to introduce type systems and the core elements of the type theory under study, and finally combine both to reason about the correctness of a programming language and a programming stack.\\\\\n\nI have later introduced concepts of quantum computing, followed by the application of type theory to it, to shed some light on ongoing active research in that field.\\\\\n", "meta": {"hexsha": "865cc3ac674013a6b6ee19fb87d75f86ff0fe9e8", "size": 7850, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "files/history-and-introduction.tex", "max_stars_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_stars_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/history-and-introduction.tex", "max_issues_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_issues_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/history-and-introduction.tex", "max_forks_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_forks_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 174.4444444444, "max_line_length": 924, "alphanum_fraction": 0.807133758, "num_tokens": 1628, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085909370422, "lm_q2_score": 0.7371581626286833, "lm_q1q2_score": 0.5788966379916702}}
{"text": "% !TEX root = main.tex\n\n\\section{Tables}\n\n\\subsection{Inline}\nAn inline \\texttt{tabular} environment:\n\\begin{tabular}{|ccc|}\\hline\n$\\alpha$ & $\\beta$ & $\\gamma$ \\\\ \\hline\n$1/2$ & $1/3$ & $1/6$ \\\\ \\hline\n\\end{tabular}\n\n\\subsection{Display}\nA centred \\texttt{tabular} environment:\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\na & b & c \\\\\n\\hline\nd & e & f \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\n\\subsection{Floats}\n\nA {\\tt table} environment:\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{|c|c||c|c|c|} \\hline\nSet Theory \t\t& \t\t\t& Logic\t\t\t&\t\t& \\\\ \\hline\nUnion\t\t\t& $A\\cup B$\t& Disjunction \t& OR \t& $\\lor$\t\\\\\nIntersection\t\t& $A\\cap B$\t& Conjunction\t& AND \t& $\\land$\\\\\nComplement\t\t& $A^c$\t\t& Negation\t\t& NOT \t& $\\lnot$\t\\\\ \\hline\n\\end{tabular}\n\\caption{Logic and Set Theory}\n\\end{table}\n\nA {\\tt table} environment with a starred {\\tt caption}:\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\na & b & c \\\\\n\\hline\nd & e & f \\\\\n\\hline\n\\end{tabular}\n\\caption*{Another table}\n\\end{table}\n\n\n", "meta": {"hexsha": "b6f7dfaa3319075385f18f6f6fe6e728fc574f23", "size": 992, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/test_article/_tables.tex", "max_stars_repo_name": "imagingbook/latextree", "max_stars_repo_head_hexsha": "272ee1594b3bdea39a043fb2ac2b86ac9a1728e8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-16T22:41:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-16T22:41:21.000Z", "max_issues_repo_path": "tex/test_article/_tables.tex", "max_issues_repo_name": "imagingbook/latextree", "max_issues_repo_head_hexsha": "272ee1594b3bdea39a043fb2ac2b86ac9a1728e8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/test_article/_tables.tex", "max_forks_repo_name": "imagingbook/latextree", "max_forks_repo_head_hexsha": "272ee1594b3bdea39a043fb2ac2b86ac9a1728e8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-09-11T09:38:25.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-11T15:30:21.000Z", "avg_line_length": 19.0769230769, "max_line_length": 59, "alphanum_fraction": 0.6219758065, "num_tokens": 406, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7371581684030624, "lm_q1q2_score": 0.5788966351184278}}
{"text": "\\subsection{Scheduling: Preemptive, Size-Based}\n\\label{sec:Scheduling-Preemptive-Size-Based}\n\nThe most common preemptive size-based scheduling policies are\n\n\\begin{description}\n\t\n\t\\item [Preemptive Shortest-Job-First (PSJF)] the server chooses the job with the smallest size, eventually preempting the current running job.\n\t\n\t\\item [Shortest Remaining Process-Time (SRPT)] the server chooses the job with the shortest remaining process time; a new arrival preempts the current job if the new arrival has shorter remaining process time.\n\t\n\\end{description}\n\nPreemptive policies that make use of job sizes can reach high performance biasing towards small jobs.\n\nIt is convenient to evaluate PSJF as \\textit{preemptive priority queueing} where priority classes are job sizes and a job has higher priority if it has lower job size.\n\nIt is not possible to evaluate SRPT with the same model, because the remaining process time changes in time, whereas the preemptive priority model does not allow jobs to change priority.\n\n\n\n\n\\subsection{Preemptive Priority Queueing}\n\\label{sec:P-Priority}\n\nIn preemptive priority queueing (P-Priority), one can view the response time as divided into\n\n\\begin{description}\n\n\t\\item [Waiting Time $Wait(x)$] the time until a job first starts serving.\n\t\n\t\\item [Residence Time $Res(x)$] the time from when job first gets served until it leaves the system (including all interruptions due to preemption).\n\n\\end{description}\n\nIn such a model, we have that\n\n\\begin{equation}\n\\label{eqn:P-Priority-Mean-Time-Size}\n\\expected{T(x)}^{\\mathit{P-Priority}}=\n\\expected{Res(x)}^{\\mathit{P-Priority}} + \\expected{Wait(x)}^{\\mathit{P-Priority}}=\n\\frac{\\expected{S_{x}}}{1-\\sum_{i=1}^{x-1}\\varrho_{i}}+\n\\frac{\\sum_{i=1}^{x}\\frac{\\varrho_{i}\\expected{S_{i}^{2}}}{2\\expected{S_{i}}}}\n\t {\\Big(1-\\sum_{i=1}^{x-1}\\varrho_{i}\\Big)\\Big(1-\\sum_{i=1}^{x}\\varrho_{i}\\Big)}\n\\end{equation}\n\nTo determine $\\expected{T}^{\\mathit{P-Priority}}$ we need transform analysis.\n\nIn $\\mathit{P-Priority}$ model, a job of priority $x$ only sees the variability and the load created by the first $x$ classes. 
Hence, a high priority job (low $x$) wins, even under high-variability job sizes.\n\nWe will use these results to analyze PSJF.\n\n\n\n\n\\subsection{Scheduling: PSJF}\n\\label{sec:Scheduling-PSJF}\n\nWe model PSJF as P-Priority with infinitely many priority classes, where the smaller the job, the higher the priority.\n\nBy applying \\Cref{eqn:P-Priority-Mean-Time-Size}, we obtain:\n\n\\begin{equation}\n\\label{eqn:PSJF-Mean-Time-Size}\n\\expected{T(x)}^{\\mathit{PSJF}}=\n\\expected{Res(x)}^{\\mathit{PSJF}}+\\expected{Wait(x)}^{\\mathit{PSJF}}=\n\\frac{x}{1-\\varrho_{x}}+\n\\frac{\\frac{\\lambda}{2}\\int_{0}^{x}f(t)t^{2}\\,dt}\n{\\Big(1-\\varrho_{x}\\Big)^{2}}\n\\end{equation}\n\nwhere $\\varrho_{x}=\\lambda\\int_{t=0}^{x}tf(t)\\,dt$ is the load made up by jobs of size less than $x$.\n\nTo determine $\\expected{T}^{\\mathit{PSJF}}$ we need transform analysis.\n\n\n\n\n\\subsection{Scheduling: SRPT}\n\\label{sec:Scheduling-SRPT}\n\nSRPT achieves the lowest possible mean response time on every arrival sequence.\n\nWe cannot model SRPT as P-Priority, because the remaining processing time changes over time.\n\nWe obtain that\n\n\\begin{equation}\n\\label{sec:SRPT-Mean-Time-Size}\n\\expected{T(x)}^{\\mathit{SRPT}}=\n\\expected{Wait(x)}^{\\mathit{SRPT}}+\\expected{Res(x)}^{\\mathit{SRPT}}=\n\\frac{\\lambda}{2}\\frac{\\int_{t=0}^{x}t^{2}f(t)\\,dt + x^{2}(1-F(x))}{\\Big(1-\\varrho_{x}\\Big)^{2}}+\n\\int_{t=0}^{x}\\frac{dt}{1-\\varrho_{t}}\n\\end{equation}\n\nWe notice that the response time (i) is only sensitive to the variance of the job size distribution up to size $x$, and (ii) is only sensitive to the load made up of jobs of size up to $x$. This is why small jobs win under SRPT.\n\nLet us compare SRPT with PSJF.\nThe waiting time of SRPT is greater than that of PSJF, but the residence time is smaller, because a job must only wait for jobs with shorter remaining processing time, whereas under PSJF it must wait for all jobs smaller than its original size.\nOn balance, the residence time benefit dominates, making SRPT better than PSJF with respect to the mean response time.\n\nLet us compare SRPT with FB.\nSRPT and FB are complementary: in SRPT, a job gains priority as it receives more service; in FB, a job loses priority as it receives more service.\n\nWe have that, in $M/G/1$, for all $x$ and $\\varrho$\n\\begin{equation}\n\\expected{T(x)}^{\\mathit{SRPT}}\\leq\\expected{T(x)}^{\\mathit{FB}}\n\\end{equation}\n\nLet us compare SRPT with PS.\nIt may seem that large jobs should prefer PS over SRPT, because they have low priority in SRPT, while they are treated equally under PS. This is not true. Under SRPT, large jobs start with low priority, but from the time they first enter service they gain more and more priority.\n\nSRPT is fairer than PS. Even if it does not seem so, large jobs and small jobs have nearly the same slowdown, because large jobs gain priority over time.\n\n\\begin{theorem}[All-Can-Win]\n\\label{thm:All-Can-Win}\nWe have that, in $M/G/1$, for all $x$ and $\\varrho<\\frac{1}{2}$\n\\begin{equation}\n\\label{eqn:All-Can-Win}\n\\expected{T(x)}^{\\mathit{SRPT}}\\leq\\expected{T(x)}^{\\mathit{PS}}\n\\end{equation}\n\\end{theorem}\n\n\\Cref{thm:All-Can-Win} holds for every job size distribution $G$. 
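For comparison, recall the classic $M/G/1$ result that PS is size-proportional,\n\\[\n\\expected{T(x)}^{\\mathit{PS}}=\\frac{x}{1-\\varrho},\n\\]\nso \\Cref{eqn:All-Can-Win} says that, whenever $\\varrho<\\frac{1}{2}$, jobs of \\textit{every} size do at least as well under SRPT as under PS. 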
For many $G$, the restriction on $\\varrho$ is much looser\\footnote{If $G$ is a Bounded-Pareto with $\\alpha=1.1$, we have that \\Cref{thm:All-Can-Win} holds for $\\varrho<0.96$.}.\n\n\n\n\n", "meta": {"hexsha": "6b8a096267ab7c298a6ce2cff7653691cc5a6356", "size": 5367, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "performance-modeling/sec/scheduling-preemptive-size-based.tex", "max_stars_repo_name": "gmarciani/research", "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_issues_repo_path": "performance-modeling/sec/scheduling-preemptive-size-based.tex", "max_issues_repo_name": "gmarciani/research", "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "performance-modeling/sec/scheduling-preemptive-size-based.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 40.6590909091, "max_line_length": 274, "alphanum_fraction": 0.7339295696, "num_tokens": 1615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.5788966305837584}}
{"text": "\\documentclass[]{jocg}\n\n\\usepackage{amsmath,amsfonts}\n\\usepackage{amsthm}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{booktabs}\n\\usepackage{mathtools}\n\\usepackage[mathlines]{lineno}\n\\usepackage[]{graphicx}\n\\usepackage{algorithm}\n\\usepackage[noend]{algpseudocode}\n\\linenumbers\n\n\n\\setcounter{tocdepth}{1} % -- part,chapters,sections\n\n\\usepackage[\n  backend=biber,\n  style=nature,\n  sorting=ynt\n]{biblatex}\n\\addbibresource{main.bib}\n\n\n\\newcommand{\\RR}{\\mathbb{R}}\n\\newcommand{\\PP}{\\mathcal{P}}\n\\newcommand{\\EE}{\\mathcal{E}}\n\\newcommand{\\set}[1]{\\{#1\\}}\n\\newcommand{\\norm}[1]{\\|#1\\|}\n\\newcommand{\\abs}[1]{|#1|}\n\\newcommand{\\chordarc}{{s_{\\textrm{arc}}}}\n\\DeclareMathOperator{\\diam}{\\mathrm{diam}}\n\\DeclareMathOperator{\\conv}{\\mathcal{CH}}\n\n\n\n\\newtheorem{proposition}{Proposition}[section]\n\n\\theoremstyle{definition}\n\\newtheorem{definition}[proposition]{Definition}\n\n\\theoremstyle{remark}\n\\newtheorem{remark}[proposition]{Remark}\n\n\n\\title{%\n  \\MakeUppercase{Measuring polygonal niceness}%\n  \\thanks{%\n    This paper constitutes the completion of the final project for AMS 545\n    ``Computational Geometry''.\n  }\n}\n\n\\author{%\n  Sharmila~Duppala%\n  \\thanks{%\n    \\affil{Department of Computer Science}, \n    \\email{sduppala@cs.stonybrook.edu}%\n  }\\, and\n  David~Kraemer.%\n  \\thanks{%\n    \\affil{Department of Applied Mathematics},\n    \\email{david.kraemer@stonybrook.edu}%\n  }\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n  The $\\alpha$-fatness and chord-arc scores as measures of the ``niceness'' of\n  polygons are developed and explored, with initial theoretical and empirical\n  results. \n\\end{abstract}\n\n\\tableofcontents\n\n\\section{Introduction}\n\nConvexity is from many perspectives the ``gold standard'' of niceness for planar\nsubsets, because it excludes at once large classes of pathological sets\naltogether. Yet for establishing a quantitative measure of niceness that matches\nwith intuitive or legal understandings, convexity is both too restrictive and too\ngeneral. The relevant consideration at present revolves around drawing electoral\ndistricts. The immense political consequences of favorable (or unfavorable)\nmaps has resulted in the development of creative classes of polygonal regions\nutilized for this purpose. There exists a strong civic prerogative for providing\nquantitative measures of niceness. In a concurrent opinion in the 2004 suit\n\\textit{Vieth v. Jubelirer} decided by the U.S. Supreme Court, Justice Anthony\nKennedy writes \n\\begin{quote}\n  Because, in the case before us, we have no standard by which to measure the\n  burden appellants claim has been imposed on their representational rights,\n  appellants cannot establish that the alleged political classifications burden\n  those same rights.  Failing to show that the alleged classifications are\n  unrelated to the aims of apportionment, appellants\u2019 evidence at best\n  demonstrates only that the legislature adopted political classifications. That\n  describes no constitutional flaw, at least under the governing Fourteenth\n  Amendment standard. \\cite{2004vieth}\n\\end{quote}\nOught convexity be considered as such a standard? 
In practice, convexity is too\nrestrictive (e.g., for coastal or river borders) and too general (e.g., a tiny\nvertical strip) for this purpose.\n\nThe aim of this paper is to propose several alternative measures to determine\nthe niceness of polygonal regions in a quantitatively rigorous way. We introduce\ntwo classes of measures, the $\\alpha$-score and the chord-$f$ scores, develop\nsome of their basic properties, and show how they perform on simulated data. The\n$\\alpha$-fatness quantifies how much a given polygon resembles a ball (with\nrespect to some norm). When $f$ measures the perimeter of a polygon, the\nchord-$f$ score quantifies the extent to which ``local nonconvexity'' affects\nthe distribution of the total boundary. Both measures generally reward convex\npolygons, but punish certain convex ``offenders.'' Of course, there also exist\nnonconvex polygons that score well on both measures.\n\nThis paper proceeds as follows. Section 2 provides a theoretical overview for\nthe $\\alpha$-fatness and chord-$f$ scores and gives preliminary results on\nsimple examples. Section 3 outlines the algorithms used to perform\ndiscretizations and compute the $\\alpha$-fatness and chord-$f$ scores. Section 4\ngives the results of numerical simulations for these measures on randomly\ngenerated ``typical'' polygons and provides data about the success of\ndiscretization. Section 5 includes a discussion of the main results of the paper\nand its implications for the electoral districting process.\n\nSource code for this project can be found at\n\\url{https://github.com/DavidNKraemer/polygonal-niceness}. Documentation, cruft\nremoval, and user interface improvements are ongoing.\n\n\\section{Theoretical background}\n\nThe area of a subset $S \\subseteq \\RR^2$ of the plane is denoted by\n$\\lambda(S)$. We will denote the boundary of $S$ by $\\partial S$. For two points\n$x,y \\in \\RR^2$, the closed line segment with endpoints $x$ and $y$ is\ndenoted $[x,y]$. For $p \\in [1, \\infty]$, we define the $p$-ball as $B_{p}(x,\n\\varepsilon) = \\set{y \\in \\RR^2 : \\norm{x-y}_p \\leq \\varepsilon}$; we will\nonly be interested in the closed case. Unless otherwise stated, we shall operate\nin the Euclidean setting with $p = 2$. Throughout we shall assume that $P\n\\subseteq \\RR^2$ is a simple bounded closed polygon. The class of such polygons\nis denoted by $\\PP$. The perimeter of $P$ is denoted by $\\abs{\\partial P}$.\n\n\\subsection{The $\\alpha$-fatness score}\n\nOne approach to measuring the relative ``fatness'' of a polygon seeks to\nidentify regions of the shape which behave highly unlike balls.\n\n\\begin{definition}\n  The $\\alpha$-fatness score of a polygon $P$ is given by\n  \\begin{equation*}\n    \\alpha(P) = \\inf\\set{\\frac{\\lambda[P \\cap B(x,\n    \\varepsilon)]}{\\lambda[B(x,\\varepsilon)]} : \\varepsilon > 0, x \\in P}\n  \\end{equation*}\n  where we impose that the ball $B(x,\\varepsilon)$\n  does not contain $P$. \n  \\label{def:alpha}\n\\end{definition}\n\nThe constraint that $P \\not\\subseteq B(x,\\varepsilon)$ keeps the score from\nbeing trivially zero: otherwise, letting $\\varepsilon \\to \\infty$ makes the ratio $\\frac{\\lambda[P \\cap B(x,\n\\varepsilon)]}{\\lambda[B(x, \\varepsilon)]}$ approach $0$ for every polygon. 
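To see this concretely (a direct computation): once $\\varepsilon$ is so large that\n$P \\subseteq B(x,\\varepsilon)$, the ratio equals\n$\\lambda(P)/\\lambda[B(x,\\varepsilon)]$, which for $p = 2$ is\n$\\lambda(P)/(\\pi\\varepsilon^2) \\to 0$ as $\\varepsilon \\to \\infty$. 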
An intuition\nfor $\\alpha(P)$ emerges from considering how it behaves on balls.\n\n\\begin{proposition}\n  For any $P$, $\\alpha(P) \\leq \\frac{1}{4}$, and if $P = B(x_0, r)$\n  then we have equality.\n  \\label{prop:alpha}\n\\end{proposition}\n\n\\begin{proof}\n  To show that $\\alpha(P) \\leq \\frac{1}{4}$, it suffices to consider convex\n  $P$, because $P \\subseteq \\conv(P)$ implies $\\lambda[P \\cap B(x, \\varepsilon)]\n  \\leq \\lambda[\\conv(P) \\cap B(x, \\varepsilon)]$. Let $d = \\diam (P)$ and fix\n  $x,y \\in \\partial P$ such that $\\abs{[x,y]} = d$. Assume without loss of\n  generality that $x$ lies above $y$, and consider the ball $B(x, d)$. Then $P$\n  does not meet the upper half-disk of $B(x,d)$, for otherwise we could choose $z \\in\n  P$ in the upper half-disk of $B(x,d)$, and the chord $[y,z]$ of $P$ would be longer than $d$.\n  A similar argument shows that $P$ cannot occupy more than half of the lower\n  half-disk of $B(x,d)$. Hence\n  \\begin{equation*}\n    \\alpha(P) \\leq \\frac{\\lambda[P \\cap B(x,d)]}{\\lambda[B(x,d)]} \\leq\n    \\frac{1}{4},\n  \\end{equation*}\n  as needed.\n\n  The fact that $\\alpha(B(x_0, r)) = \\frac{1}{4}$ relies on a neat\n  result \\cite{10.2307/30044198} that shows that the $L_p$\n  ellipsoid $\\EE$ with radii $a, b$ has area\n  \\begin{equation*}\n    \\lambda(\\EE) = a b \\cdot 4\n    \\frac{\\Gamma(1+\\frac{1}{p})^2}{\\Gamma(1+\\frac{2}{p})}.\n  \\end{equation*}\n  In our case, $\\alpha(B(x_0, r))$ is achieved at a ball on a ``corner'' point\n  with radius $2r$.\n\\end{proof}\n\nHence, the $\\alpha$-score is greater for polygons which exhibit ``niceness''\ncharacteristics. It is not affected tremendously by local nonconvexity in $P$ so\nlong as $P$ ``snugly fits'' into a ball shape, which is a global property.\n\n\\subsection{The chord-$f$ score}\n\nA different class of ``fatness'' measures arises through partitioning $P$ into\ntwo (interior disjoint) polygons $P = P' \\cup P''$ via chords. A chord is a pair\n$x,y \\in \\partial P$ such that the segment $[x,y]$ is contained in the interior\nof $P$ except at the endpoints $x$ and $y$. Such a chord defines a partition of\n$P = P' \\cup P''$ by orientation, so that $\\partial P'$ is the arc from $x$ to\n$y$ together with $[x,y]$ and $\\partial P''$ is the arc from $y$ to $x$\ntogether with $[x,y]$.\n\n\\begin{definition}\n  Let $f : \\mathcal{P} \\to \\RR$. For a given $P \\in \\mathcal{P}$, its\n  \\emph{chord-$f$ score} is given by\n  \\begin{equation*}\n    s_f (P) = \\inf \\set{\\max(f(P'), f(P'')) : x, y \\in \\partial P}\n  \\end{equation*}\n  where the chord $[x,y]$ partitions $P = P' \\cup P''$.\n  \\label{def:chord-f}\n\\end{definition}\n\nThe intuition is that $f$ is some global measure of ``cost'' associated with\n$P$. For example, if $f(P) = \\abs{\\partial P}$, then $s_f$ identifies the\npartition (or limiting sequence of partitions) which minimizes the maximum\nsubpolygon perimeter. Indeed, measures such as perimeter or area are the typical\nchoices of $f$, but any suitable property of $P$ may be employed. \n\nThe chord-$f$ score can be understood relatively simply in the context of a\nminimax game. Max, given a polygon with a chord partitioning it, always chooses\nthe subpolygon associated with a greater $f$ (i.e., the worst cost of the\npartition). Min, who plays first, tries to find a chord with the least-bad worst\ncost with respect to $f$.\n\nThe member of this class of scores under present investigation is\n$\\chordarc$, the so-called \\emph{chord-arc} score. 
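As a quick example (a special case of the proposition below, with $f$ the\nperimeter): for the unit square, a bisecting chord of length $1$ splits the\nsquare into two $1 \\times \\frac{1}{2}$ rectangles, each of perimeter\n$\\frac{1}{2} + 1 + \\frac{1}{2} + 1 = 3$, so $\\chordarc = 3$; the chord itself\ncontributes one unit side to each piece. 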
It is important to note that\nthe length of the chord $[x,y]$ which partitions $P$ is included in the\nestimation of $\\chordarc$. One implication of this is that the chord-arc score\nneed not be bounded above by $\\frac{\\abs{\\partial P}}{2}$. We have several\nfacts about $\\chordarc$ which develop further intuition for the score.\n\\begin{proposition}\n  \\begin{enumerate}\n    \\item Let $R$ be a rectangle with height $h$ and length $\\ell$. Assume that\n      $h \\leq \\ell$. Then $\\chordarc(R) = 2h + \\ell$.\n    \\item Let $P$ be convex with perimeter $\\abs{\\partial P} = w$. Then\n      $\\chordarc(P) \\leq \\frac{w}{2} + \\diam (P)$.\n    \\item Let $C$ be a circle with ``perimeter'' $2\\pi r$. Then $\\chordarc(C) =\n      (\\pi + 2)r$.\n  \\end{enumerate}\n  \\label{rem:chordarc}\n\\end{proposition}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \\item That the optimal partition bisects $R$ is readily apparent,\n      for this minimizes the maximum perimeter length of either sub-rectangle.\n      A case analysis confirms that the preferred strategy for Min is to choose\n      a chord along the smaller dimension (i.e., $h$) to bisect $R$, and in this\n      case the maximum arc length is $2h + 2\\left( \\frac{\\ell}{2} \\right) = 2h +\n      \\ell$, as needed.\n    \\item For $P$ convex, a perimeter-bisecting chord, whose length is at most\n      $\\diam(P)$, always exists. \n    \\item This follows from part 2 by noticing that the length of \\emph{any}\n      perimeter-bisecting chord is $\\diam(C) = 2r$. \\qedhere\n  \\end{enumerate}\n\\end{proof}\n\n\\section{Overview of empirical design}\n\nIn principle, computing the $\\alpha$-fatness and chord-$f$ scores relies on sweeping\nthe continuum of points of $P$ and evaluating the aforementioned measurements.\nTo approximate this process in practice, we employ the following discretizing\nscheme. Let $P$ be determined by the vertices $[v_1, \\dots, v_n]$ given in\ncounterclockwise order. Then, given $\\delta > 0$, loop through the vertices of\n$P$, checking if $\\norm{v_{i+1} - v_i} > \\delta$. In the case where this holds,\nwe insert $v_i' = \\frac{v_i + v_{i+1}}{2}$ after $v_i$ into the polygon and\nresume the loop. This ensures that for any two adjacent vertices $v_{i}$ and\n$v_{i+1}$ in the representation of $P$ we have $\\norm{v_{i+1} - v_i} \\leq\n\\delta$. As $\\delta \\to 0$ this approximates $\\partial P$ more closely. All\ncomputations are done with respect to the $L_{\\infty}$ norm. The implementation\nof this procedure is given in Algorithm \\ref{alg:delta}.\n\nDepending on the relative coordinates of the vertex set compared to $\\delta$,\nthe worst-case runtime of refinement varies. In the case where the maximum\ndistance is $\\delta K$, the refinement adds $O(K)$ vertices between the\ntwo coordinates, since the edge must end up split into at least $K$ subsegments\nof length at most $\\delta$. 
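For example, an edge of length $8\\delta$ is halved into pieces of length\n$4\\delta$, then $2\\delta$, then $\\delta$, producing $8$ subsegments and hence $7$\ninserted vertices; in general, a power-of-two $K$ yields exactly $K - 1$\ninsertions. 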
The overall runtime is $O(nK)$ and the total quantity of\nvertices after refinement is $O(nK)$.\n\n%% Pseudocode for the delta refinement\n\n\\begin{algorithm}[h]\n  \\begin{algorithmic}[1]\n    \\Procedure{RefineBy}{$P,\\delta$}\n    \\State{$P = [v_1, v_2, v_3, \\dots, v_n]$};\n    \\State{$v = v_1$}\n    \\Repeat\n    \\If {$\\norm{\\textit{next}(v) - v} > \\delta$} \n      \\State{\\textrm{Insert $\\textit{midpoint}(v, \\textit{next}(v))$ into $P$ after $v$}} \n      \\Comment{Now $\\textit{next}(v)$ is the midpoint}\n    \\Else\n      \\State{$v = \\textit{next}(v)$}\n      \\Comment{The edge is short enough; advance}\n    \\EndIf\n    \\Until {$v = v_1$}\n    \\EndProcedure\n  \\end{algorithmic}\n  \\caption{%\n    The procedure used to refine a given polygon $P$ by a specified $\\delta >\n    0$. This modifies the existing $P$ to satisfy the regularity condition\n    that $\\norm{v_{i+1} - v_i} \\leq \\delta$ for each adjacent vertex pair\n    $v_{i+1}, v_i$ on the discretized boundary $\\partial P$.  }\n  \\label{alg:delta}\n\\end{algorithm}\n\nGiven a $\\delta$-refined polygon $P$, the computation of $\\alpha(P)$ is\nrelatively straightforward, with the simplification that the points $x$ on which the\nratio is computed lie on $\\partial P$. The computation steps through $[v_1,\n\\dots, v_n]$ and for each $v_i$ the radius $\\varepsilon$ of the\nminimum-enclosing $\\infty$-ball centered at $v_i$ is computed. Then for\n$\\varepsilon_k = \\frac{k}{100} \\varepsilon$, $k = 1, \\dots, 100$, the ratio \n\\begin{equation*}\n  \\frac{\\lambda[P \\cap B_{\\infty}(v_i, \\varepsilon_k)]}{\\lambda[B_{\\infty}(v_i,\n  \\varepsilon_k)]}\n\\end{equation*}\nis computed and the smallest such ratio is kept. The minimum over all $v_i$ is\ngiven as $\\alpha(P)$. The implementation of the $\\alpha$ score is given in\nAlgorithm \\ref{alg:alpha}. Though we use a sweep from $k = 1, \\dots, 100$, any\nsuitable range may be selected as an alternative. Unfortunately, there is no\nanalogous ``bisection'' technique for determining the minimizing $k$ because\nthis ratio need not be monotonic. Since $P$ need not be convex, the intersection\n$P \\cap B_{\\infty}(v, \\varepsilon_k)$ requires $O(n \\log n)$ steps. For $n$\nvertices, the runtime performance is $O(n^2 \\log n)$.\n\n%% Pseudocode for the alpha score\n\n\\begin{algorithm}[h]\n  \\begin{algorithmic}[1]\n    \\Function{$\\alpha$Score}{$P$}\n    \\State{$P = [v_1, v_2, v_3, \\dots, v_n]$};\n    \\State{$\\textrm{min} = \\infty$}\n    \\For {$v \\in P$}\n    \\State{$\\varepsilon = \\textrm{radius of minimum enclosing ball of $P$ at\n    $v$}$}\n    \\For {$k = 1, 2, \\dots, 100$} \\Comment{$100$ is arbitrary}\n    \\State{$\\varepsilon_k = \\frac{k}{100} \\varepsilon$}\n    \\State{$\\textrm{area} = \\frac{\\lambda[P \\cap B_{\\infty}(v, \\varepsilon_k)]}{\\lambda[B_{\\infty}(v, \\varepsilon_k)]}$}\n    \\If {$\\textrm{min} > \\textrm{area}$}\n    \\State{$\\textrm{min} = \\textrm{area}$}\n    \\EndIf\n    \\EndFor\n    \\EndFor\n    \\State{\\Return{\\textrm{min}}}\n    \\EndFunction\n  \\end{algorithmic}\n  \\caption{%\n    The $\\alpha$-fatness function.  }\n  \\label{alg:alpha}\n\\end{algorithm}\n\nSimilarly, given a $\\delta$-refined polygon $P$, the chord-$f$ score is computed\nby sweeping all possible chords of $P$. For each pair of vertices $v_i, v_j \\in \\partial P$\nwith $i < j$, if $[v_i, v_j]$ is indeed a valid chord of $P$, the partition $P =\nP' \\cup P''$ is computed and the maximum of $f(P')$ and $f(P'')$ is kept. The\nminimum such value is given as $s_f(P)$. 
The implementation of the chord-$f$\nscore is given in Algorithm \\ref{alg:chordf}. For $n$ vertices, $O(n^2)$\npotential chords are computed. Determining whether a pair of vertices forms a chord\ntakes $O(n)$ time, which leads to an overall runtime performance of $O(n^3)$.\n\n\n%% Pseudocode for the chord f score\n\n\\begin{algorithm}[h]\n  \\begin{algorithmic}[1]\n    \\Function{ChordScore}{$P,f$}\n    \\State{$P = [v_1, v_2, v_3, \\dots, v_n]$};\n    \\State{$\\textrm{min} = \\infty$}\n    \\For {$v \\in P$}\n    \\For {$w \\in [\\textit{next}(v), \\dots, v_n]$}\n    \\If {$[v,w]$ is a chord of $P$}\n      \\State{Partition $P = P' \\cup P''$ by $[v,w]$}\n      \\State{$\\textrm{submax} = \\textit{max}(f(P'), f(P''))$}\n      \\If {$\\textrm{min} > \\textrm{submax}$}\n        \\State{$\\textrm{min} = \\textrm{submax}$}\n      \\EndIf\n    \\EndIf\n    \\EndFor\n    \\EndFor\n    \\State{\\Return{\\textrm{min}}}\n    \\EndFunction\n  \\end{algorithmic}\n  \\caption{%\n    The chord-$f$ function.   }\n  \\label{alg:chordf}\n\\end{algorithm}\n\nAll of these algorithms are implemented in CGAL \\cite{cgal:eb-18a} with the\nextensive use of the 2D Polygon library \\cite{cgal:gw-p2-18a}. For our data\ngeneration, we make the minor modification that the measured ${\\chordarc}$ is the\nratio of the actual $\\chordarc$ score to the perimeter of the overall polygon.\nThis changes the interpretation of some of the theoretical results from the\nprevious section but allows for easier interpretation of the data.\n\n\\section{Empirical results}\n\nWe evaluate our measurements on randomly generated polygons with vertices in the\nunit square $[0,1] \\times [0,1]$. We consider collections of such polygons of\n10, 50, and 100 vertices. In Figure \\ref{fig:delta-alpha} and Figure\n\\ref{fig:delta-arc}, we measure the effect of sweeping $\\delta$ between $0.05$\nand $1$ on the resulting measurements for polygons of 10 vertices.\nFigure \\ref{fig:alph-inft-10} shows a comparison of\n$\\alpha$-fatness and relative ${\\chordarc}_{\\infty}$ scores for polygons on 10\nvertices. Figure \\ref{fig:alph-one-50} shows a comparison of $\\alpha$-fatness\nand relative ${\\chordarc}_1$ scores for polygons on 50 vertices. Figure\n\\ref{fig:alph-inft-100} shows a comparison of $\\alpha$-fatness and relative\n${\\chordarc}_{\\infty}$ scores for polygons on 100 vertices.\n\nOur theoretical results seem to be confirmed in that ``ball''-like polygons get\nthe highest $\\alpha$-scores, and it seems that polygons with fewer\n``nonconvexities'' score better on the relative ${\\chordarc}_{\\infty}$ measure. We\nobserve an overall decline in the relative $\\chordarc$ scores as the number of\nvertices of the polygons increases. This is attributed to the corresponding\nincrease in the size of the underlying perimeters of each $P$. 
This observation\nis not shared by the $\\alpha$ score, but this can be understood as a sort of\n``coastline'' phenomenon, in which polygonal perimeters increase without limit\nwhile the areas converge.\n\n\n\\begin{figure}[t]\n  \\centering\n  \\begin{tabular}{lrrrrrrrrrr}\n    \\toprule\n    $\\delta$ & $P_1$ & $P_2$ & $P_3$ & $P_4$ & $P_5$ & $P_6$ & $P_7$ & $P_8$ & $P_9$ & $P_{10}$ \\\\\n    \\midrule\n    0.05 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.15 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.26 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.36 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.47 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.57 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.68 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.78 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    0.89 &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    1    &  0.23 &  0.21 &  0.11 &  0.039 &  0.15 &  0.038 & 0.12 &  0.10 &  0.12 &  0.13 \\\\\n    \\bottomrule\n  \\end{tabular}\n  \\caption{%\n    Effect of sweeping $\\delta$ on measurements of $\\alpha$-score on\n    10 randomly generated polygons with 10 vertices in $[0,1]^2$.\n    Unsurprisingly, the refinement of the different polygons has no effect on\n    the $\\alpha$-fatness score, since it introduces no additional area to the\n    polygon and since the ``corners'' are already present.\n  }\n  \\label{fig:delta-alpha}\n\\end{figure}\n\n\\begin{figure}[t]\n  \\centering\n  \\begin{tabular}{lrrrrrrrrrr}\n    \\toprule\n    $\\delta$ & $P_1$ & $P_2$ & $P_3$ & $P_4$ & $P_5$ & $P_6$ & $P_7$ & $P_8$ & $P_9$ & $P_{10}$ \\\\\n    \\midrule\n    0.05  &  0.83 &  0.46 &  0.83 &  0.89 &  0.54 &  0.67 &  0.62 &  1.00 &  0.86 &  0.55 \\\\\n    0.15  &  0.79 &  0.90 &  0.72 &  0.60 &  0.90 &  0.67 &  0.99 &  0.83 &  0.66 &  0.90 \\\\\n    0.26  &  1.00 &  0.75 &  0.50 &  0.60 &  0.90 &  0.49 &  0.77 &  0.52 &  0.72 &  0.69 \\\\\n    0.36  &  0.89 &  1.00 &  0.50 &  0.60 &  0.52 &  0.49 &  1.00 &  0.92 &  0.74 &  0.51 \\\\\n    0.47  &  0.61 &  1.00 &  0.50 &  0.75 &  0.52 &  0.24 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    0.57  &  0.61 &  1.00 &  0.25 &  0.75 &  0.52 &  0.24 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    0.68  &  0.61 &  1.00 &  0.25 &  0.75 &  0.52 &  0.24 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    0.78  &  0.61 &  1.00 &  0.25 &  0.75 &  0.52 &  0.24 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    0.89  &  0.61 &  1.00 &  0.25 &  0.75 &  0.52 &  0.24 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    1     &  0.61 &  1.00 &  0.25 &  0.75 &  0.52 &  1.00 &  1.00 &  0.62 &  1.00 &  0.51 \\\\\n    \\bottomrule\n  \\end{tabular}\n  \\caption{%\n    Effect of sweeping $\\delta$ on measurements of ${\\chordarc}_\\infty$ on\n    10 randomly generated polygons with 10 vertices in $[0,1]^2$. The overall behavior is\n    difficult to summarize. Though one might hope that as $\\delta \\to 0$ we see\n    a stabilization in the chord-arc score, this does not appear to occur.\n    Moreover, there does not seem to be even a monotonic evolution in the score\n    as $\\delta \\to 0$. 
This is likely due to the fact that smaller values of\n    $\\delta$ permit stranger nonconvexities to emerge as viable chords.\n  }\n  \\label{fig:delta-arc}\n\\end{figure}\n\n\\begin{figure}[t]\n  \\centering\n  \\includegraphics[height=0.8\\textheight]{../plots/u_10_alpha_score_chord_arc_infinity_vertices_0-05_delta_ranking.jpg}\n  \\caption{$\\alpha$ scores and ${\\chordarc}_{\\infty}$ scores for randomly\n  generated polygons on 10 vertices.}\n  \\label{fig:alph-inft-10}\n\\end{figure}\n\n\\begin{figure}[t]\n  \\centering\n  \\includegraphics[height=0.8\\textheight]{../plots/u_50_alpha_score_chord_arc_one_vertices_0-05_delta_ranking.jpg}\n  \\caption{$\\alpha$ scores and ${\\chordarc}_{1}$ scores for randomly\n  generated polygons on 50 vertices.}\n  \\label{fig:alph-one-50}\n\\end{figure}\n\n\\begin{figure}[t]\n  \\centering\n  \\includegraphics[height=0.8\\textheight]{../plots/u_100_alpha_score_chord_arc_infinity_vertices_0-05_delta_ranking.jpg}\n  \\caption{$\\alpha$ scores and ${\\chordarc}_{\\infty}$ scores for randomly\n  generated polygons on 100 vertices.}\n  \\label{fig:alph-inft-100}\n\\end{figure}\n\n\\section{Discussion}\n\nWe have developed two classes of measurements for determining the niceness of\npolygonal regions in the plane and established initial results, both theoretical\nand empirical, for evaluating their effectiveness. The $\\alpha$-fatness measures\nthe extent to which the underlying polygon resembles a ball. It rewards\npolygons which fill squarely compact regions and punishes oblong and spread-out\npolygons. The chord-arc score measures the extent to which the underlying\npolygon contains significant ``local nonconvexities'' that are not well\ndistributed throughout the entire polygon. Interestingly, both measures perform\nwell (even optimally) on specific types of convex polygons but severely punish\nothers. We generally find that these measures conform in some sense to an\nintuitive understanding of niceness.\n\nExtensions of these measures can be examined. The connected $\\alpha$-fatness\nscore, for example, computes the intersection of $B(x,\\varepsilon)$ with the\nconnected component of $P$ containing $x$. This might prove a more reliable\nestimate of intuitive niceness than the conventional $\\alpha$-fatness score,\nbecause it excludes components of $P$ from consideration which are close in a\ncoarse sense, but not necessarily close in a topological sense. The myriad of\npossible choices of $f$ leaves open a wide space for\nexploration. A more suitable measure than simply the perimeter may emerge as a\nconsequence.\n\nThe next extension of this work is to apply these measures to electoral\ndistricts and evaluate their success at distinguishing between offending and\n``nice'' districts. Given an increasing corpus of election districts ruled out\nor struck down on constitutional grounds, it may be possible to learn a\nclassification model using these measures. Alternatively, it may be discovered\nthat these measures are sufficient to distinguish between offending and nice\ndistricts in an unsupervised fashion.\n\nWhether quantitative measures that conform to intuitive understandings of\nniceness are relevant to crafting electoral districts at all remains an open\nquestion. General compactness and contiguity are only part of the requirements\nwhich an electoral district must satisfy under the present regime, whereas\nothers, such as those given in the 1965 Voting Rights Act, bear less consequence\nfrom the pure shape of the district. 
Doubtless these quantitative measures could\nform part of the evaluation process for determining new districts, but the\nactual policy will almost surely involve ensembles of measures, both\nquantitative and otherwise, in idealized future redistricting.\n\n\n\\printbibliography[heading=bibintoc]\n\n\\end{document}\n", "meta": {"hexsha": "7b76b3320e4d0432bed3cf7312cf304b5a3246d3", "size": 25445, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/main.tex", "max_stars_repo_name": "DavidNKraemer/polygonal-niceness", "max_stars_repo_head_hexsha": "6bf89adeee7586ea66d96a4682ad0662c4e0ab67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/main.tex", "max_issues_repo_name": "DavidNKraemer/polygonal-niceness", "max_issues_repo_head_hexsha": "6bf89adeee7586ea66d96a4682ad0662c4e0ab67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/main.tex", "max_forks_repo_name": "DavidNKraemer/polygonal-niceness", "max_forks_repo_head_hexsha": "6bf89adeee7586ea66d96a4682ad0662c4e0ab67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.1796733212, "max_line_length": 120, "alphanum_fraction": 0.6967577127, "num_tokens": 8174, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.5788966215144196}}
{"text": "\\section{Communication}\n\\subsection{Random Access}\nBest sending rate is $p = \\frac{1}{n}$ (using maximization of\n$p \\left(1 - p\\right)^{n-1}$) leads to utilization of $\\frac{1}{\\emath}$.\n\n\\subsection{TDMA}\nStatically allocated slots, synchronization done by master.\n\n\\subsection{CSMA/CD}\nDetect collitions, do exponential backoff of $\\set{0, \\dots 2^n - 1}$ for $n$th\ncollision (possibly with upper bound on n). The wait time is twice the round\ntrip time.\n\n\\subsection{Token Ring}\nPass token around, attach to token.\n\n\\subsection{CSMA/CA}\nIMAGE HERE\n\n\\subsection{CSMA/CR}\nBefore message is sent, arbitration happens. During arbitration every client can\ndetect which one is allowed to send. During arbitration, every node sends its\nunique id and checks whether the result on the bus matches the id. Low bits\noverride high bits, hence the one with the lowest id wins.\n\n\\subsection{FlexRay}\nFlexRay uses a combination of TDMA and something close to CSMA/CA alternating,\nfirst a static segment using TDMA, then a dynamic segment\n\n\\subsection{Bluetooth}\nFrequency hopping in range $2402 + k$ MHz, $0 \\leq k \\leq 78$, 1600hops per\nsecond. Organized in piconets with 1 master and at most 7 slaves, which can be\ncombined into scatternets. A node is a master in at most one piconet and can be\na slave in multiple piconets.\n\nThe master can only send in even time slots. After each slot, the frequency is\nchanged using a pseudorandom sequence based on the master's device address\n(unique) and the phase of the master's system clock. Packets are either 1, 3, or\n5 slots long, the frequency is only changed after the full packet is sent,\nhowever it changes 1, 3, or 5 steps, depending on the packet length.\n\nThe initial setup between master and slave is done by the inquiry step, where\nthe master sends it s ID using a special channel sequence, the slave listens and\nreplies with its ID, clock and other information.\n\nThen the connection is started with in the page mode. The master sends its and\nthe slaves address using a special channel sequence, the slave listens whether\nit hears its address and answers with its own address. The master then sends a\nfrequency hop synchronization packet (FHS) to the slave which allows them to\nsynchronize and connnect.\n\nA slave can be in multiple states, active, hold (not processing date packtes),\nsniff (awaken in regular time intervals), and park (no connection but still\nsynchronized).\n\n\\subsection{Packet Details}\nTotal packet length is either 366 bits, 1626 bits or 2870 bits for 1, 3, and 5\nslot packets respectively. Packet length in bits corresponds to $\\mu$s send\ntime. 
For unprotected data, the payload is 216 bits, 1464 bits and 2712 bits.\n", "meta": {"hexsha": "7de175cdbde728f9adb5058663195ee41b1ea3b3", "size": 2678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "embedded systems/sections/communication.tex", "max_stars_repo_name": "ntruessel/eth-summaries", "max_stars_repo_head_hexsha": "dbfa4c206b441868a6ab55331c42daa96abd42bf", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "embedded systems/sections/communication.tex", "max_issues_repo_name": "ntruessel/eth-summaries", "max_issues_repo_head_hexsha": "dbfa4c206b441868a6ab55331c42daa96abd42bf", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "embedded systems/sections/communication.tex", "max_forks_repo_name": "ntruessel/eth-summaries", "max_forks_repo_head_hexsha": "dbfa4c206b441868a6ab55331c42daa96abd42bf", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6333333333, "max_line_length": 80, "alphanum_fraction": 0.7781926811, "num_tokens": 672, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950986284991, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5788693884061945}}
{"text": "% Part: model-theory\n% Chapter: models-of-arithmetic\n% Section: introduction\n\n\\documentclass[../../../include/open-logic-section]{subfiles}\n\n\\begin{document}\n\n\\olfileid{mod}{mar}{int}\n\\section{Introduction}\n\nThe \\emph{standard model} of arithmetic is the\n!!{structure}~$\\Struct{N}$ with $\\Domain{N} = \\Nat$ in which\n$\\Obj{0}$, $\\prime$, $+$, $\\times$, and $<$ are interpreted as you\nwould expect. That is, $\\Obj{0}$ is $0$, $\\prime$ is the successor\nfunction, $+$ is interpeted as addition and $\\times$ as multiplication\nof the numbers in~$\\Nat$. Specifically,\n\\begin{align*}\n  \\Assign{\\Obj{0}}{N} & = 0\\\\\n  \\Assign{\\prime}{N}(n) & = n + 1\\\\\n  \\Assign{+}{N}(n, m) & = n + m\\\\\n  \\Assign{\\times}{N}(n, m) & = nm\n\\end{align*}\nOf course, there are structures for $\\Lang{L_A}$ that have domains\nother than~$\\Nat$. For instance, we can take $\\Struct{M}$ with domain\n$\\Domain{M} = \\{a\\}^*$ (the finite sequences of the single\nsymbol~$a$, i.e., $\\emptyset$, $a$, $aa$, $aaa$, \\dots), and\ninterpretations\n\\begin{align*}\n  \\Assign{\\Obj{0}}{M} & = \\emptyset\\\\\n  \\Assign{\\prime}{M}(s) & = s \\concat a\\\\\n  \\Assign{+}{M}(n, m) & = a^{n + m}\\\\\n  \\Assign{\\times}{M}(n, m) & = a^{nm}\n\\end{align*}\nThese two structures are ``essentially the same'' in the sense that\nthe only difference is the !!{element}s of the !!{domain}s but not how\nthe !!{element}s of the !!{domain}s are related among each other by\nthe interpretation functions. We say that the two !!{structure}s are\n\\emph{isomorphic}.\n\nIt is an easy consequence of the compactness theorem that any theory\ntrue in~$\\Struct{N}$ also has models that are not isomorphic\nto~$\\Struct{N}$.  Such structures are called \\emph{non-standard}.  The\ninteresting thing about them is that while the !!{element}s of a\nstandard model (i.e., $\\Struct{N}$, but also all !!{structure}s\nisomorphic to it) are exhausted by the values of the standard\nnumerals~$\\num{n}$, i.e.,\n\\[\n\\Domain{N} = \\Setabs{\\Value{\\num{n}}{N}}{n \\in \\Nat}\n\\]\nthat isn't the case in non-standard models: if $\\Struct{M}$ is\nnon-standard, then there is at least one $x \\in \\Domain{M}$ such that\n$x \\neq \\Value{\\num{n}}{M}$ for all~$n$.\n\nThese non-standard elements are pretty neat: they are ``infinite\nnatural numbers.'' But their existence also explains, in a sense, the\nincompleteness phenomena.  Consider an example, e.g., the consistency\nstatement for Peano arithmetic, $\\OCon[\\Th{PA}]$, i.e., $\\lnot\n\\lexists[x][\\OPrf[\\Th{PA}](x, \\gn{\\lfalse})]$. Since $\\Th{PA}$ neither\nproves $\\OCon[\\Th{PA}]$ nor $\\lnot \\OCon[\\Th{PA}]$, either can be\nconsistently added to $\\Th{PA}$. Since $\\Th{PA}$ is consistent,\n$\\Sat{N}{\\OCon[\\Th{PA}]}$, and consequently $\\Sat/{N}{\\lnot\n  \\OCon[\\Th{PA}]}$.  So $\\Struct{N}$ is \\emph{not} a model of $\\Th{PA}\n\\cup \\{\\lnot \\OCon[\\Th{PA}]\\}$, and all its models must be\nnonstandard. Models of $\\Th{PA} \\cup \\{\\lnot \\OCon[\\Th{PA}]\\}$ must\ncontain some !!{element} that serves as the witness that makes\n$\\lexists[x][\\OPrf[\\Th{PA}](\\gn{\\lfalse})]$ true, i.e., a G\\\"odel\nnumber of a !!{derivation} of a contradiction from~$\\Th{PA}$.  
Such\n!!a{element} can't be standard---since $\\Th{PA} \\Proves \\lnot\n\\OPrf[\\Th{PA}](\\num{n}, \\gn{\\lfalse})$ for every~$n$.\n\n\\end{document}\n", "meta": {"hexsha": "8e56b6d446ff9e37a3affb37ac4bd9686d4434b4", "size": 3184, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/model-theory/models-of-arithmetic/introduction.tex", "max_stars_repo_name": "jeffschao/OpenLogic", "max_stars_repo_head_hexsha": "070238f28af5658c520aa9aaa952a3e21fd4d942", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-14T18:51:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-27T22:49:24.000Z", "max_issues_repo_path": "content/model-theory/models-of-arithmetic/introduction.tex", "max_issues_repo_name": "jeffschao/OpenLogic", "max_issues_repo_head_hexsha": "070238f28af5658c520aa9aaa952a3e21fd4d942", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/model-theory/models-of-arithmetic/introduction.tex", "max_forks_repo_name": "jeffschao/OpenLogic", "max_forks_repo_head_hexsha": "070238f28af5658c520aa9aaa952a3e21fd4d942", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6164383562, "max_line_length": 70, "alphanum_fraction": 0.6582914573, "num_tokens": 1076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081925, "lm_q2_score": 0.843895106480586, "lm_q1q2_score": 0.5788693883744385}}
{"text": "%!TEX root = fastZKP.tex\n\n\\section{Zero Knowledge Argument Protocols}\\label{sec:zkp}\n\nIn this section, we present the construction of our new zero-knowledge argument system. In~\\cite{zhang2017vsql}, Zhang et al. proposed to combine the GKR protocol with a verifiable polynomial delegation protocol, resulting in an argument system. Later, in~\\cite{zkvpd,hyrax}, the construction was extended to zero-knowledge, by sending all the messages in the GKR protocol in homomorphic commitments and performing all the checks by zero-knowledge equality and product testing. This incurs a high overhead for the verifier compared to the plain version without zero-knowledge, as each multiplication becomes an exponentiation and each equality check becomes a $\\Sigma$-protocol, which is around $100\\times$ slower in practice.\n\nIn this paper, we follow the same blueprint of combining GKR and VPD to obtain an argument system, but instead show how to extend it to be zero-knowledge efficiently. In particular, the prover masks the GKR protocol with special random polynomials so that the verifier runs a \"randomized\" GKR that leaks no extra information and her overhead is small. A similar approach was used by Chiesa et.al in~\\cite{zksumcheck}. In the following, we present the zero-knowledge version of each building block, followed by the whole zero-knowledge argument.\n\n\\subsection{Zero Knowledge Sumcheck}\\label{subsec:zksumcheck}\nAs a core step of the GKR protocol, $\\P$ and $\\V$ execute a sumcheck protocol on Equation~\\ref{eq:GKR}, during which $\\P$ sends $\\V$ evaluations of the polynomial at several random points chosen by $\\V$. These evaluations leak information about the values in the circuit, as they can be viewed as weighted sums of these values\n\nTo make the sumcheck protocol zero-knowledge, we take the approach proposed by Chiesa et al. in \\cite{zksumcheck}, which is masking the polynomial in the sumcheck protocol by a random polynomial. In this approach, to prove \n$$H = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}f(x_1, x_2, \\ldots, x_\\ell)$$\n, the prover generates a random polynomial $g$ with the same variables and individual degrees of $f$. She commits to the polynomial $g$, and sends the verifier a claim $G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}g(x_1, x_2, \\ldots, x_\\ell)$. The verifier picks a random number $\\rho$, and execute a sumcheck protocol with the prover on $$H + \\rho G = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\{0, 1\\}}(f(x_1, x_2, \\ldots, x_\\ell) + \\rho g(x_1, x_2, \\ldots, x_\\ell)).$$ At the last round of this sumcheck, the prover opens the commitment of $g$ at $g(r_1, \\ldots, r_\\ell)$, and the verifier computes $f(r_1, \\ldots, r_l)$ by subtracting $\\rho g(r_1, \\ldots, r_\\ell)$ from the last message, and compares it with the oracle access of $f$. It is shown that as long as the commitment and opening of $f$ are zero-knowledge, the protocol is zero-knowledge. Intuitively, this is because all the coefficients of $f$ are masked by those of $g$. The soundness still holds because of the random linear combination of $f$ and $g$. \n\nUnfortunately, the masking polynomial $g$ is as big as $f$, and opening it to a random point later is expensive. In~\\cite{zksumcheck}, the prover sends a PCP oracle of $g$, and executes a zero-knowledge sumcheck to open it to a random point, which incurs an exponential complexity for the prover. 
Even replacing it with the zkVPD protocol in~\\cite{zkvpd} leaves the prover time slow in practice.\n\nIn this paper, we show that it suffices to mask $f$ with a small polynomial to achieve zero-knowledge. In particular, we set $g(x_1, \\ldots, x_\\ell) = a_{0} + g_1(x_1) + g_2(x_2) + \\ldots + g_\\ell(x_\\ell)$, where $g_{i}(x_i) = a_{i,1}x_i + a_{i,2}x_i^2 + \\ldots + a_{i,d}x_i^d$ is a random univariate polynomial of degree $d$ ($d$ is the variable degree of $f$). Note here that the size of $g$ is only $O(d\\ell)$, while the size of $f$ is $O(2^\\ell)$.\n\nThe intuition behind our improvement is that the prover sends $O(d\\ell)$ messages in total to the verifier during the sumcheck protocol, so a polynomial $g$ with $O(d\\ell)$ random coefficients is sufficient to mask all the messages and achieve zero-knowledge. We present the full protocol in Construction~\\ref{construction::zksumcheck}.\n\n\n\\begin{figure}[t!]\n\\small{\n\\centering{\\centering\n\\framebox{\\parbox{.99\\linewidth}{\n\\begin{construction}\n\\label{construction::zksumcheck}\nWe assume the existence of a zkVPD protocol as defined in Section~\\ref{subsec::zkvpd}. For simplicity, we omit the randomness $r_f$ and the public parameters $\\pp,\\vp$ when there is no ambiguity. To prove the claim $H = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$:\n\\begin{enumerate}\n\n\\item $\\P$ selects a polynomial $g(x_1,\\ldots, x_\\ell) = a_{0} + g_1(x_1) + g_2(x_2) + \\ldots + g_\\ell(x_\\ell)$, where $g_{i}(x_i) = a_{i,1}x_i + a_{i,2}x_i^2 + \\ldots + a_{i,d}x_i^d$ and all $a_{i,j}$s are uniformly random. $\\P$ sends $H = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$, $G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} g(x_1, x_2, \\ldots, x_\\ell)$ and $\\comm_g = \\Commit(g)$ to $\\V$.\n\\item $\\V$ uniformly selects $\\rho \\in \\F^*$, computes $H+\\rho G$ and sends $\\rho$ to $\\P$.\n\\item $\\P$ and $\\V$ run the sumcheck protocol on\n$$H + \\rho G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}(f(x_1, x_2, \\ldots, x_\\ell) + \\rho g(x_1, x_2, \\ldots, x_\\ell))$$\n\\item At the last round of the sumcheck protocol, $\\V$ obtains a claim $h_\\ell(r_\\ell) = f(r_1, r_2, \\ldots, r_\\ell)+\\rho g(r_1, r_2, \\ldots, r_\\ell)$. $\\P$ and $\\V$ open the commitment of $g$ at $r = (r_1,\\ldots, r_\\ell)$ by $(g(r), \\pi)\\leftarrow\\Open(g,r), \\Verify(\\comm_g,g(r),r,\\pi)$. If $\\Verify$ outputs $\\reject$, $\\V$ aborts.\n\\item $\\V$ computes $h_\\ell(r_\\ell)-\\rho g(r_1,\\ldots,r_\\ell)$ and compares it with the oracle access of $f(r_1,\\ldots, r_\\ell)$.\n\n\\end{enumerate}\n\\end{construction}}}}}\n\\vspace{-.2in}\n\\end{figure}\nThe completeness of the protocol is straightforward. The soundness follows from the soundness of the sumcheck protocol and the random linear combination in steps 2 and 3, as proven in~\\cite{zksumcheck}. We give a proof of zero knowledge in Appendix~\\ref{app:proofsc}.\n\n\\begin{theorem}[Zero knowledge]\\label{thm:zksc}\n\tFor every verifier $\\V^*$ and every $\\ell$-variate polynomial $f:\\F^\\ell\\rightarrow\\F$ with variable degree $d$, there exists a simulator $\\S$ such that given access to $H = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$, $\\S$ is able to simulate the partial view of $\\V^*$ in steps 1-4 of Construction~\\ref{construction::zksumcheck}. 
\n\\end{theorem}\n\n\n\n\n\n\\subsection{Zero knowledge GKR}\\label{subsec::zkgkr}\n\nTo achieve zero-knowledge, we replace the sumcheck protocol in GKR with the zero-knowledge version described in the previous section. However, the protocol still leaks additional information. In particular, at the end of the zero-knowledge sumcheck, $\\V$ queries the oracle to evaluate the polynomial at a random point. When executed on Equation~\\ref{eq:GKR}, this reveals two evaluations of the polynomial $\\tV_i$ defined by the values in the $i$-th layer of the circuit: $\\tV_i(u)$ and $\\tV_i(v)$.\n\n\nTo prevent this leakage, Chiesa et al.~\\cite{zksumcheck} proposed to replace the multi-linear extension $\\tV_i$ with a low degree extension, such that learning $\\tV_i(u)$ and $\\tV_i(v)$ does not leak any information about $V_i$. Define a low degree extension of $V_i$ as \n\n\\begin{equation}\\label{eq:lde}\n\\dot{V}_{i}(z) \\overset{def}{=} \\tV_i(z)+Z_i(z)\\sum\\nolimits_{w \\in \\{0, 1\\}^\\lambda}R_i(z, w),\n\\end{equation}\nwhere $Z_i(z) = \\prod_{j=1}^{s_i} z_j(1-z_j)$, i.e., $Z_i(z)=0$ for all $z\\in\\{0, 1\\}^{s_i}$. $R_i(z,w)$ is a random low-degree polynomial and $\\lambda$ is the security parameter. With this low degree extension, Equation~\\ref{eq:GKR} becomes\n\n\\begin{align}\\label{eq:zkGKR}\n\\dot{V}_{i}(g)&=\\sum\\nolimits_{x, y\\in \\binary^{s_{i+1}}}\\tilde{mult}_{i+1}(g, x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n&+\\tilde{add}_{i+1}(g,x,y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\nonumber + Z_i(g)\\sum\\nolimits_{w \\in \\binary^\\lambda}R_i(g, w)\\nonumber\\\\\n&=\\sum\\nolimits_{x, y\\in \\binary^{s_{i+1}},w \\in \\binary^\\lambda}\\Big(I(\\vec{0},w) \\cdot \\big(\\tilde{mult}_{i+1}(g, x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n&+\\tilde{add}_{i+1}(g,x,y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\big)\\nonumber + I((x, y), \\vec{0})Z_i(g)R_i(g, w)\\Big)\n\\end{align}\nwhere $I(\\vec{a},\\vec{b})$ is an identity polynomial: on Boolean inputs, $I(\\vec{a},\\vec{b}) = 1$ iff $\\vec{a}=\\vec{b}$, and $I(\\vec{a},\\vec{b}) = 0$ otherwise. The first equation holds because $\\dot{V}_i$ agrees with $\\tV_i$ on the Boolean hyper-cube $\\binary^{s_i}$, as $Z_i(z) = 0$ for binary inputs. The second equation holds because the mask in $\\dot{V}_i$ is in the form of a \"sum\" and can be moved into the sumcheck equation. \n\nWhen executing the zero-knowledge sumcheck protocol on Equation~\\ref{eq:zkGKR}, at the end of the protocol, $\\V$ receives $\\dot{V}_{i+1}(u)$ and $\\dot{V}_{i+1}(v)$ for random points $u,v\\in\\F^{s_{i+1}}$ chosen by $\\V$. They no longer leak information about $V_{i+1}$, as they are masked by $Z_{i+1}(z)\\sum\\nolimits_{w \\in \\{0, 1\\}^\\lambda}R_{i+1}(z, w)$ for $z=u$ and $z=v$. $\\V$ computes $\\tilde{mult}_{i+1}(g,u,v)$ and $\\tilde{add}_{i+1}(g,u,v)$ as before, computes $Z_i(g), I(\\vec{0},c), I((u,v),\\vec{0})$ where $c\\in\\F^\\lambda$ is a random point chosen by $\\V$ for the variable $w$, opens $R_i(g,w)$ at $w=c$ with $\\P$ through a polynomial commitment, and checks that, together with $\\dot{V}_{i+1}(u), \\dot{V}_{i+1}(v)$ received from $\\P$, they are consistent with the last message of the sumcheck. $\\V$ then uses $\\dot{V}_{i+1}(u), \\dot{V}_{i+1}(v)$ to proceed to the next round.\n\nUnfortunately, similar to the zero-knowledge sumcheck, the masking polynomial $R_i$ is very large in~\\cite{zksumcheck}. 
Opening $R_i$ at a random point takes exponential time for $\\P$, either using a PCP oracle as in~\\cite{zksumcheck} or potentially using a zkVPD, as $R_i$ has $s_i+2s_{i+1}+\\lambda$ variables.\n\nIn this section, we show that we can set $R_i$ to be a small polynomial to achieve zero-knowledge. In particular, $R_i$ has only two variables with variable degree 2. This is because in the $(i-1)$-th round, $\\V$ receives two evaluations of $V_i$, $\\dot{V}_i(u)$ and $\\dot{V}_i(v)$, which are masked by $Z_i(u)\\sum_{w}R_i(u_1,w)$ and $Z_i(v)\\sum_{w}R_i(v_1,w)$; in the $i$-th sumcheck, $\\V$ opens $R_i$ at $R_i(u_1,c)$ and $R_i(v_1,c)$. It suffices to make these four evaluations linearly independent, assuming the commitment and opening of $R_i$ use a zkVPD. Therefore, we set the low-degree term in Equation~\\ref{eq:lde} as $Z_i(z)\\sum\\nolimits_{w \\in\\binary} R_i(z_1, w)$, i.e., $R_i$ takes only two variables, the first variable $z_1$ of $z$ and an extra variable $w\\in\\binary$ instead of $\\binary^\\lambda$, with variable degree 2. \n\nThe full protocol is presented in Construction~\\ref{construction:zkgkr}. Here we use superscripts (e.g., $u^{(i)}$) to denote random numbers or vectors for the $i$-th layer of the circuit.\n\n%\\begin{figure}[H]\n\n%{\\small\n%\\centering{\\centering\n%\\framebox{\\parbox{.99\\linewidth}{\n\\begin{mdframed}[nobreak=false]\n\t\\begin{construction}\\label{construction:zkgkr}\n\t\t\\begin{enumerate} \n\t\t\t\\item On a layered arithmetic circuit $C$ with $d$ layers and input $\\mathsf{in}$, the prover $\\P$ sends the output of the circuit $\\mathsf{out}$ to the verifier $\\V$.\n\t\t\t\n\t\t\t\\item $\\P$ randomly selects polynomials $R_1(z_1, w), \\ldots, R_d(z_1, w): \\F^2\\rightarrow \\F$ with variable degree 2. $\\P$ commits to these polynomials by sending $\\comm_i\\leftarrow\\Commit(R_i)$ to $\\V$ for $i\\in[1,d]$.\n\t\t\t\n\t\t\t\\item $\\V$ defines $\\dot{V}_0(z)= \\tV_0(z)$, where $\\tV_0(z)$ is the multilinear extension of $\\mathsf{out}$. $\\dot{V}_0(z)$ can be viewed as a special case with $R_0(z_1,w)$ being the 0 polynomial. $\\V$ evaluates it at a random point $\\dot{V}_0(g^{(0)})$ and sends $g^{(0)}$ to $\\P$.\n\t\t\t\n\t\t\t\\item $\\P$ and $\\V$ execute the zero knowledge sumcheck protocol presented in Construction~\\ref{construction::zksumcheck} on\n\t\t\t\\[\n\t\t\t\\begin{aligned}\n\t\t\t\\dot{V}_{0}(g^{(0)})=\\sum_{x, y\\in \\binary^{s_{1}}}&\\tilde{mult}_{1}(g^{(0)}, x, y)(\\dot{V}_{1}(x)\\dot{V}_{1}(y))\\\\\n\t\t\t&+\\tilde{add}_{1}(g^{(0)},x,y)(\\dot{V}_{1}(x)+\\dot{V}_{1}(y))\n\t\t    \\end{aligned}\n\t\t\t\\]\n\t\t\tIf $u_1^{(1)} = v_1^{(1)}$, $\\P$ aborts. At the end of the protocol, $\\V$ receives $\\dot{V}_1(u^{(1)})$ and $\\dot{V}_1(v^{(1)})$. $\\V$ computes $\\tmult_1(g^{(0)},u^{(1)},v^{(1)})$, $\\tadd_1(g^{(0)},u^{(1)},v^{(1)})$ and checks that $\\tmult_1(g^{(0)},u^{(1)},v^{(1)})\\dot{V}_1(u^{(1)})\\dot{V}_1(v^{(1)})+\\tadd_1(g^{(0)},u^{(1)},v^{(1)})$ $(\\dot{V}_1(u^{(1)})+\\dot{V}_1(v^{(1)}))$ equals the last message of the sumcheck (evaluation oracle).\n\t\t\t\n\t\t\t\\item For layer $i=1,\\ldots, d-1$:\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item $\\V$ randomly selects $\\alpha^{(i)}, \\beta^{(i)}\\in\\F$ and sends them to $\\P$.\n\t\t\t\t\\item Let $Mult_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, x, y)$ and $Add_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, x, y)$. 
$\\P$ and $\\V$ run the zero knowledge sumcheck on the equation\\\\\n\t\t\t\t\n\t\t\t\t$\\alpha^{(i)}\\dot{V}_i(u^{(i)})+\\beta^{(i)}\\dot{V}_i(v^{(i)})=$\n\t\t\t\t\\begin{align*}\n\t\t\t\t&\\sum_{\\substack{x, y\\in \\binary^{s_{i+1}}\\\\w \\in \\binary}}\\Big(I(\\vec{0},w) \\cdot \\big(Mult_{i+1}(x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n\t\t\t\t&+Add_{i+1}(x, y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\big)\\\\\n\t\t\t\t&+ I((x, y), \\vec{0})(\\alpha^{(i)}Z_i(u^{(i)})R_i(u_1^{(i)}, w)+\\beta^{(i)}Z_i(v^{(i)})R_i(v_1^{(i)}, w))\\Big)\n\t\t\t\t\\end{align*}\n\t\t\t\tIf $u_1^{(i+1)} = v_1^{(i+1)}$, $\\P$ aborts.\n\t\t\t\t\n\t\t\t\t\\item At the end of the zero-knowledge sumcheck protocol, $\\P$ sends $\\V$ $\\dot{V}_{i+1}(u^{(i+1)})$ and $\\dot{V}_{i+1}(v^{(i+1)})$.\n\t\t\t\t\n\t\t\t\t\\item $\\V$ computes $a_{i+1} = $\\\\$\\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$ and $b_{i+1} = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$ locally. $\\V$ also computes $Z_i(u^{(i)}),Z_i(v^{(i)}),I(\\vec{0},c^{(i)}), I((u^{(i+1)},v^{(i+1)}),\\vec{0})$ locally.\n\t\t\t\t\\item $\\P$ and $\\V$ open $R_i$ at two points $R_i(u_1^{(i)},c^{(i)})$ and $R_i(v_1^{(i)},c^{(i)})$ using $\\Open$ and $\\Verify$.\n\t\t\t\t\\item $\\V$ computes the following as the evaluation oracle and uses it to complete the last step of the zero-knowledge sumcheck.\n\t\t\t\t{\\footnotesize\n\t\t\t\t\\begin{align*}\n\t\t\t\t&I(\\vec{0},c^{(i)})(a_{i+1}(\\dot{V}_{i+1}(u^{(i+1)})\\dot{V}_{i+1}(v^{(i+1)}))+\\\\\n\t\t\t\t&b_{i+1}(\\dot{V}_{i+1}(u^{(i+1)})+\\dot{V}_{i+1}(v^{(i+1)})))+\\\\\n\t\t\t\t&I((u^{(i+1)},v^{(i+1)}),\\vec{0})(\\alpha^{(i)}Z_i(u^{(i)})R_i(u_1^{(i)}, c^{(i)})+\\beta^{(i)}Z_i(v^{(i)})R_i(v_1^{(i)}, c^{(i)}))\n\t\t\t\t\\end{align*}\n\t\t\t\t}\n\t\t\t\tIf all checks in the zero knowledge sumcheck and $\\Verify$ pass, $\\V$ uses $\\dot{V}_{i+1}(u^{(i+1)})$ and $\\dot{V}_{i+1}(v^{(i+1)})$ to proceed to the $(i+1)$-th layer. Otherwise, $\\V$ outputs $\\reject$ and aborts.\n\t\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\t\n\t\t\t\\item At the input layer $d$, $\\V$ has two claims $\\dot{V}_{d}(u^{(d)})$ and $\\dot{V}_{d}(v^{(d)})$. $\\V$ opens $R_d$ at 4 points $R_d(u_1^{(d)},0),R_d(u_1^{(d)},1),R_d(v_1^{(d)},0),R_d(v_1^{(d)},1)$ and checks that $\\dot{V}_{d}(u^{(d)}) = \\tV_d(u^{(d)})+Z_d(u^{(d)})\\sum\\limits_{w \\in \\binary}R_d(u_1^{(d)},w)$ and $\\dot{V}_{d}(v^{(d)}) = \\tV_d(v^{(d)})+Z_d(v^{(d)})\\sum\\limits_{w \\in \\binary}R_d(v_1^{(d)},w)$, given oracle access to the two evaluations of $\\tV_d$ at $u^{(d)}$ and $v^{(d)}$. If the checks pass, output $\\accept$; otherwise, output $\\reject$.\n\t\t\t\n\t\t\\end{enumerate}\n\t\\end{construction}\n\\end{mdframed}\n\n\n%}}}}\n%\\end{figure}\n\n\n\\begin{theorem}\\label{thm:zkgkr}\n\tConstruction~\\ref{construction:zkgkr} is an interactive proof protocol per Definition~\\ref{def:ip}, for a function $f$ defined by a layered arithmetic circuit $C$ such that $f(\\mathsf{in},\\mathsf{out}) = 1$ iff $C(\\mathsf{in}) = \\mathsf{out}$. In addition, for every verifier $\\V^*$ and every layered circuit $C$, there exists a simulator $\\S$ such that given oracle access to $\\mathsf{out}$, $\\S$ is able to simulate the partial view of $\\V^*$ in steps 1-5 of Construction~\\ref{construction:zkgkr}. \n\\end{theorem}\n\nThe completeness follows from the construction explained above and the completeness of the zero knowledge sumcheck. 
The soundness follows from the soundness of the GKR protocol with low degree extensions, as proven in~\\cite{GKR} and~\\cite{zksumcheck}. We give the proof of zero knowledge in Appendix~\\ref{app:proofgkr}.\n\n\n\n\n\\subsection{Zero knowledge VPD}\\label{subsec::our_zkvpd}\nIn this section, we present the instantiations of the zkVPD protocol, as defined in Definition~\\ref{def::zkvpd}. For every intermediate layer $i$, we use the same zkVPD protocol as proposed by Zhang et al. in~\\cite{zkvpd} to commit and open the masking polynomials $g_i(x), R_i(z_1, w)$. In fact, as we showed in the previous sections, these polynomials are very small ($g_i$ is the sum of univariate polynomials and $R_i$ has 2 variables with variable degree 2); hence the zkVPD protocols become very simple. The complexity of $\\KeyGen, \\Commit, \\Open, \\Verify$ and the proof size are all $O(s_i)$ for $g_i$ and are all $O(1)$ for $R_i$. We omit the full protocols due to the space limit.\n\n\n\nFor the zkVPD used for the input layer, we design a customized protocol based on the zkVPD protocol in~\\cite{zkvpd}. Recall that at the end of the GKR protocol, $\\V$ receives two evaluations of the polynomial $\\dot{V}_d(z)=\\tV_d(z)+Z_d(z)\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$ at $z=u^{(d)}$ and $z=v^{(d)}$. In our zero knowledge proof protocol, which will be presented in Section~\\ref{subsec::zkall}, $\\P$ commits to $\\dot{V}_d(z)$ using the zkVPD at the beginning, and opens it at the two points selected by $\\V$.\n\nThe protocol in~\\cite{zkvpd} works for any polynomial with $\\ell$ variables and any variable degree, and is particularly efficient for multilinear polynomials. We modify the protocol for our zero-knowledge proof scheme while preserving its efficiency. Note that though $\\dot{V}_d(z)$ is a low degree extension of the input, it can be decomposed into the sum of $\\tV_d(z)$, a multilinear polynomial, and $Z_d(z)\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$. Moreover, $Z_d(u^{(d)})$ and $Z_d(v^{(d)})$ can be computed directly by $\\V$. Therefore, in our construction, $\\P$ commits to $\\tV_d(z)$ and $\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$ separately, and later opens the sum together given $Z_d(u^{(d)})$ and $Z_d(v^{(d)})$, which is naturally supported because of the homomorphic property of the commitment. Another optimization is that unlike other layers of the circuit, $R_d(z_1,w)$ itself is not opened at two points ($\\V$ does not receive $R_d(u^{(d)},c^{(d)})$ and $R_d(v^{(d)},c^{(d)})$ in Construction~\\ref{construction:zkgkr}). Therefore, it suffices to set $\\dot{V}_d(z)=\\tV_d(z)+Z_d(z)R_d(z_1)$, where $R_d$ is a univariate linear polynomial. The full protocol is presented in Appendix~\\ref{app:zkvpdconstruction}.\n\n\n\n\n\\begin{theorem}\\label{thm:zkvpd}\n\tConstruction~\\ref{construction::zkVPD} is a zero-knowledge verifiable polynomial delegation scheme as defined by Definition~\\ref{def::zkvpd}, under Assumptions~\\ref{asp::qSDH} and~\\ref{asp::dlEPKE}.\n\\end{theorem}\n\nThe proof of completeness, soundness and zero knowledge is similar to that of the zkVPD protocol in~\\cite{zkvpd}. We only add an extra univariate linear polynomial $R(x_1)$, which does not affect the proof. We omit the proof due to the space limit. 
Using the same algorithms proposed in~\\cite{vram,zkvpd}, the running time of $\\KeyGen$, $\\Commit$ and $\\Open$ is $O(2^\\ell)$, $\\Verify$ takes $O(\\ell)$ time and the proof size is $O(\\ell)$.\n\n\n\n\n\n\n\n\\subsection{Putting Everything Together}\\label{subsec::zkall}\n\nIn this section, we present our zero knowledge argument scheme. At a high level, similar to~\\cite{zhang2017vsql,hyrax,zkvpd}, $\\V$ can use the GKR protocol to verify the correct evaluation of a circuit $C$ on input $x$ and a witness $w$, given oracle access to the evaluation of a polynomial defined by $x,w$ at a random point. We instantiate the oracle using the zkVPD protocol. Formally, we present the construction in Construction~\\ref{construction::all}, which combines our zero knowledge GKR and zkVPD protocols. Similar to the protocols in~\\cite{zkvpd,hyrax}, Steps 6 and 7 check that $\\P$ indeed uses $x$ as the input to the circuit.\n\\begin{figure}[t!]\n\\small{\n\\centering{\\centering\n\\framebox{\\parbox{.99\\linewidth}{\n\\begin{construction}\n%\\begin{protocol}\n\\label{construction::all}\n\tLet $\\lambda$ be the security parameter, $\\F$ be a prime field, $n$ be an upper bound on input size, and $S$ be an upper bound on circuit size. We use $\\VPD_1, \\VPD_2, \\VPD_3$ to denote the zkVPD protocols for the input layer and the masking polynomials $g_i$ and $R_i$ described in Construction~\\ref{construction:zkgkr}.\n\t\\begin{itemize}\n\t\t\\item $\\mathcal{G}(1^\\lambda, n, S)$: run $(\\pp_1,\\vp_1)\\leftarrow\\VPD_1.\\KeyGen(1^\\lambda, \\log n)$, $(\\pp_2,\\vp_2)\\leftarrow\\VPD_2.\\KeyGen(1^\\lambda, \\log S)$, $(\\pp_3,\\vp_3)\\leftarrow\\VPD_3.\\KeyGen(1^\\lambda)$. Output $\\pk = (\\pp_1, \\pp_2, \\pp_3)$ and $\\vk = (\\vp_1, \\vp_2, \\vp_3)$.\n\t\t\n\t\t\\item $\\langle\\P(\\pk,w), \\V(\\vk)\\rangle(x)$: Let $C$ be a layered arithmetic circuit over $\\F$ with $d$ layers, input $x$ and witness $w$ such that $|x|+|w|\\le n$, $|C|\\le S$ and $C(x;w) = 1$. Without loss of generality, assume $|w|/|x| = 2^m -1$ for some $m\\in\\N$.\n\t\t\\begin{enumerate}\n\t\t\t\\item $\\P$ selects a random bivariate polynomial $R_d$ with variable degree 2 and commits to the input of $C$ by sending $\\comm_d\\leftarrow\\VPD_1.\\Commit(\\dot{V}_d, r_V, r_R, \\pp_1)$ to $\\V$, where $\\tV_d$ is the multilinear extension of the array $(x;w)$ and $\\dot{V}_d=\\tV_d+R_d$.\n\t\t\t\\item $\\V$ runs $\\VPD_1.\\CheckComm(\\comm_d,\\vp_1)$. If it outputs $\\reject$, $\\V$ aborts and outputs $\\reject$.  \n\t\t\t\\item $\\P$ and $\\V$ execute Steps 1-5 of the zero knowledge GKR protocol in Construction~\\ref{construction:zkgkr}, with the zkVPDs instantiated with $\\VPD_2$ and $\\VPD_3$. If Construction~\\ref{construction:zkgkr} rejects, $\\V$ outputs $\\reject$ and aborts. Otherwise, by the end of this step, $\\V$ receives two claims of $\\dot{V}_d$ at $u^{(d)}$ and $v^{(d)}$.\n\t\t\t\\item $\\P$ runs $(y_1,\\pi_1)\\leftarrow\\VPD_1.\\Open(\\dot{V}_d, r_V, r_R, u^{(d)},\\pp_1)$, $(y_2,\\pi_2)\\leftarrow\\VPD_1.\\Open(\\dot{V}_d, r_V, r_R, v^{(d)},\\pp_1)$ and sends $y_1,\\pi_1,y_2,\\pi_2$ to $\\V$.\n\t\t\t\\item $\\V$ runs $\\Verify(\\comm_d,u^{(d)},y_1,\\pi_1,\\vp_1)$ and $\\Verify(\\comm_d,v^{(d)},y_2,\\pi_2,\\vp_1)$ and outputs $\\reject$ if either check fails. 
Otherwise, $\\V$ checks that $\\dot{V}_d(u^{(d)}) = y_1$ and $\\dot{V}_d(v^{(d)}) = y_2$, and rejects if either fails.\n\t\t\t\\item $\\V$ computes the multilinear extension of the input $x$ at a random point $r_x\\in\\F^{\\log |x|}$ and sends $r_x$ to $\\P$.\n\t\t\t\\item $\\P$ pads $r_x$ to $r'_x\\in\\F^{\\log |x|}\\times 0^{\\log |w|}$ with $\\log |w|$ 0s and sends $\\V$ $(y_x,\\pi_x)\\leftarrow\\VPD_1.\\Open(\\tV_d, r_V, r_R, r'_x,\\pp_1)$. $\\V$ checks that $\\Verify(\\comm_d,r'_x,y_x,\\pi_x,\\vp_1)$ passes and that $y_x$ equals the evaluation of the multilinear extension of $x$ at $r_x$. $\\V$ outputs $\\reject$ if either check fails. Otherwise, $\\V$ outputs $\\accept$.\n\t\t\\end{enumerate}\n\t\t\n\t\\end{itemize}\n%\\end{protocol}\n\\end{construction}}}}}\n\\vspace{-0.2in}\n\\end{figure}\n\\begin{theorem}\\label{theorem:main}\n\tFor an input size $n$ and a finite field $\\F$, Construction~\\ref{construction::all} is a zero knowledge argument for the relation\n\t\\[\n\t\\mathcal{R} = \\{(C,x;w): C\\in\\mathcal{C}_\\F\\wedge|x|+|w|\\le n\\wedge C(x;w) = 1\\},\n\t\\]\n\tas defined in Definition~\\ref{def::zkp}, under Assumptions~\\ref{asp::qSDH} and~\\ref{asp::dlEPKE}. Moreover, for every $(C,x;w)\\in\\mathcal{R}$, the running time of $\\P$ is $O(|C|)$ field operations and $O(n)$ multiplications in the base group of the bilinear map. The running time of $\\V$ is $O(|x|+d\\cdot\\log |C|)$ if $C$ is log-space uniform with $d$ layers. $\\P$ and $\\V$ interact for $O(d\\log |C|)$ rounds and the total communication (proof size) is $O(d\\log |C|)$. If $d$ is $\\mathsf{polylog}(|C|)$, the protocol is a succinct argument.\n\\end{theorem}\n\n\\paragraph{Proof Sketch.} The correctness and the soundness follow from those of the two building blocks, zero knowledge GKR and zkVPD, by Theorems~\\ref{thm:zkgkr} and~\\ref{thm:zkvpd}.\n\nTo prove zero knowledge, consider a simulator $\\S$ that calls the simulator $\\S_{GKR}$ of the zero knowledge GKR given in Section~\\ref{subsec::zkgkr} as a subroutine, which simulates the partial view up to the input layer. At the input layer, the major challenge is that $\\S$ committed to (a randomly chosen) $\\dot{V}^*_d$ at the beginning of the protocol, before knowing the points $u^{(d)}, v^{(d)}$ to evaluate on. If $\\S$ opens the commitment honestly, with high probability the evaluations are not consistent with the last message of the GKR (sumcheck in layer $d-1$) and a malicious $\\V^*$ can distinguish the ideal world from the real world. In our proof, we resolve this issue by using the simulator $\\S_{VPD}$ of our zkVPD protocol. Given the trapdoor $\\mathsf{trap}$ used in $\\KeyGen$, $\\S_{VPD}$ is able to open the commitment to any value in zero knowledge, and in particular it opens to those messages that are consistent with the GKR protocol in our scheme, which completes the construction of $\\S$. We defer the formal proof to the full version of the paper. \n\nThe complexity of our zero knowledge argument scheme follows from our new GKR protocol with linear prover time, and the complexity of the zkVPD protocol for the input layer analyzed in Section~\\ref{subsec::our_zkvpd}. 
The masking polynomials $g_i, R_i$ and their commitments and openings introduce no asymptotic overhead and are efficient in practice.\n\n\n\n", "meta": {"hexsha": "16abe16d29997f434af561ecb2affba93f368606", "size": 25779, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/manuscript/zkgkr.tex", "max_stars_repo_name": "niconiconi/Libra", "max_stars_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2020-01-05T12:05:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-23T16:18:40.000Z", "max_issues_repo_path": "paper/manuscript/zkgkr.tex", "max_issues_repo_name": "niconiconi/Libra", "max_issues_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-10T17:15:38.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-11T16:14:46.000Z", "max_forks_repo_path": "paper/manuscript/zkgkr.tex", "max_forks_repo_name": "niconiconi/Libra", "max_forks_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2020-01-31T05:53:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-02T14:05:43.000Z", "avg_line_length": 115.600896861, "max_line_length": 1217, "alphanum_fraction": 0.6801272353, "num_tokens": 8931, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950947024555, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5788693694594549}}
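To make the size of the masking polynomial concrete, here is a minimal Python sketch (an illustration only, not the actual zkVPD-based protocol: the real scheme commits to $R_i$ and never reveals its coefficients, and the field modulus and evaluation points below are arbitrary choices). It builds a bivariate $R_i(z_1,w)$ with variable degree 2 and computes the four evaluations discussed above: the two masks $\sum_{w\in\{0,1\}}R_i(u_1,w)$ and $\sum_{w\in\{0,1\}}R_i(v_1,w)$, and the two openings $R_i(u_1,c)$ and $R_i(v_1,c)$.

\begin{verbatim}
import random

p = 2**61 - 1        # toy prime field modulus (arbitrary choice)

# R(z1, w) = sum_{0 <= i,j <= 2} coeffs[i][j] * z1^i * w^j  (mod p)
coeffs = [[random.randrange(p) for _ in range(3)] for _ in range(3)]

def R(z1, w):
    return sum(coeffs[i][j] * pow(z1, i, p) * pow(w, j, p)
               for i in range(3) for j in range(3)) % p

u1, v1, c = (random.randrange(p) for _ in range(3))
assert u1 != v1      # mirrors the abort condition u_1 = v_1

mask_u = (R(u1, 0) + R(u1, 1)) % p   # masks sum_w R(u1, w)
mask_v = (R(v1, 0) + R(v1, 1)) % p   # masks sum_w R(v1, w)
open_u = R(u1, c)                    # opening revealed in the sumcheck
open_v = R(v1, c)
print(mask_u, mask_v, open_u, open_v)
\end{verbatim}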
{"text": "\\section{Work}{}{}\\label{sec:Work}\nA fundamental concept in classical physics is \n\\dfont{work}: If an object is moved in a straight line against\na force $F$ for a distance $d$ the work done is $W=Fd$.\n\n\\begin{example}{Constant Force}{Constant Force}\\label{Constant Force}\nHow much work is done in lifting a 10 pound weight vertically a\ndistance of 5 feet? \n\\end{example}\n\n\\begin{solution}\nThe force due to gravity on a 10 pound weight is\n10 pounds at the surface of the earth, and it does not change\nappreciably over 5 feet. The work done is $W=10\\cdot 5=50$ foot-pounds.\n\\end{solution}\n\nIn reality few situations are so simple. The force might not be\nconstant over the range of motion, as in the next example.\n\n\\begin{example}{Lifting a Weight}{Lifting a Weight}\\label{Lifting a Weight} \nHow much work is done in lifting a 10 pound weight from the\nsurface of the earth to an orbit 100 miles above the surface? \n\\end{example}\n\n\\begin{solution}\nOver 100 miles the force due to gravity does change significantly, so we need\nto take this into account. The force exerted on a 10 pound weight at a\ndistance $r$ from the center of the earth is $\\ds F=k/r^2$ and by\ndefinition it is 10 when $r$ is the radius of the earth (we assume the\nearth is a sphere). How can we approximate the work done? We divide\nthe path from the surface to orbit into $n$ small subpaths. On each\nsubpath the force due to gravity is roughly constant, with value\n$\\ds k/r_i^2$ at distance $\\ds r_i$. The work to raise the object from\n$\\ds r_i$ to $\\ds r_{i+1}$ is thus approximately $\\ds k/r_i^2\\Delta r$ and the\ntotal work is approximately\n$$\\sum_{i=0}^{n-1} {k\\over r_i^2}\\Delta r,$$\nor in the limit\n$$W=\\int_{r_0}^{r_1} {k\\over r^2}\\,dr,$$\nwhere $\\ds r_0$ is the radius of the earth and $\\ds r_1$ is $\\ds r_0$ plus 100\nmiles. The work is\n$$W=\\int_{r_0}^{r_1} {k\\over r^2}\\,dr=\n-\\left.{k\\over r}\\right|_{r_0}^{r_1}=-{k\\over r_1}+{k\\over r_0}.$$\nUsing $\\ds r_0=20925525$ feet we have $\\ds r_1=21453525$. The force on the 10\npound weight at the surface of the earth is 10 pounds, so \n$\\ds 10=k/20925525^2$, giving $k=4378775965256250$. Then\n$$-{k\\over r_1}+{k\\over r_0}={491052320000\\over 95349}\\approx 5150052\n\\quad\\hbox{foot-pounds}.$$\nNote that if we assume the force due to gravity is 10 pounds over the\nwhole distance we would calculate the work as $\\ds 10(r_1-r_0)=10\\cdot100\\cdot\n5280=5280000$, somewhat higher since we don't account for the\nweakening of the gravitational force.\n\\end{solution}\n\n\\begin{example}{Lifting an Object}{Lifting an Object}\\label{Lifting an Object} \nHow much work is done in lifting a 10 kilogram object from the\nsurface of the earth to a distance $D$ from the center of the earth?\n\\end{example}\n\n\\begin{solution}\nThis is the same problem as before in different units, and we are not\nspecifying a value for $D$. As before\n$$W=\\int_{r_0}^{D} {k\\over r^2}\\,dr= -\\left.{k\\over\n  r}\\right|_{r_0}^{D}=-{k\\over D}+{k\\over r_0}.$$ \nWhile ``weight in pounds'' is a measure of force, ``weight in\nkilograms'' is a measure of mass. To convert to force we need to use\nNewton's law $F=ma$. At the surface of the earth the acceleration due\nto gravity is approximately 9.8 meters per second squared, so the\nforce is $F=10\\cdot 9.8=98$. The units here are ``kilogram-meters per\nsecond squared'' or ``kg m/s$^2$'', also known as a\nNewton (N), so $F=98$~N.  
The radius of the earth is\napproximately 6378.1 kilometers or 6378100 meters.\nNow the problem proceeds as before. From\n$\\ds F=k/r^2$ we compute $k$:\n$\\ds 98=k/6378100^2$, $\\ds k= 3.986655642\\cdot 10^{15}$. Then the work is:\n$$W=-{k\\over D}+6.250538000\\cdot 10^8\\quad\\hbox{Newton-meters.}$$\nAs $D$ increases $W$ of course gets larger, since the quantity being\nsubtracted, $-k/D$, gets smaller. But note that the work $W$ will\nnever exceed $\\ds 6.250538000\\cdot 10^8$, and in fact will approach this\nvalue as $D$ gets larger. In short, with a finite amount of work, namely\n$\\ds 6.250538000\\cdot 10^8$ N-m, we can lift the 10 kilogram object as far\nas we wish from earth.\n%\\thmrdef{example:object to infinity}\n\\end{solution}\n\nNext is an example in which the force is constant, but there are many\nobjects moving different distances.\n\n\\begin{example}{Multiple Objects Moving}{Multiple Objects Moving}\\label{Multiple Objects Moving} \nSuppose that a water tank is shaped like a right circular\ncone with the tip at the bottom, and has height 10 meters and radius 2\nmeters at the top. If the tank is full, how much work is required\nto pump all the water out over the top? \n\\end{example}\n\n\\begin{solution}\nHere we have a large number of\natoms of water that must be lifted different distances to get to the\ntop of the tank. Fortunately, we don't really have to deal with\nindividual atoms---we can consider all the atoms at a given depth\ntogether. \n\nTo approximate the work, we can divide the water in the tank into\nhorizontal sections, approximate the volume of water in a section by a\nthin disk, and compute the amount of work required to lift each disk\nto the top of the tank. As usual, we take the limit as the sections\nget thinner and thinner to get the total work.\n\n\\figure[H]\n%\\texonly\n\\centerline{\\vbox{\\beginpicture\n\\normalgraphs\n%\\ninepoint\n\\setcoordinatesystem units <0.5truecm,0.5truecm>\n\\setplotarea x from -2 to 2, y from 0 to 10\n\\axis left shiftedto x=0 /\n\\plot -2 10 0 0 2 10 -2 10 /\n\\setdashes\n\\putrule from -1.2 6 to 1.2 6\n\\betweenarrows {$h$} from -0.4 6 to -0.4 10\n\\betweenarrows {10} from 2.3 0 to 2.3 10\n\\setplotsymbol ({\\small.})\n\\plotsymbolspacing=.2pt\n\\put {2} [b] <0pt,2pt> at 1 10\n\\arrow <2pt> [0.7, 2] from 0.5 10.4 to 0 10.4\n\\arrow <2pt> [0.7, 2] from 1.5 10.4 to 2 10.4\n\\endpicture}}\n%\\endtexonly\n%\\figrdef{fig:conical water tank}\n%\\htmlfigure{Integration_applications-Pump_out_water.html}\n\\caption{\\label{fig:conical water tank}\nCross-section of a conical water tank.}\n%\\endcaption\n\\endfigure\n\nAt depth $h$ the circular cross-section through the tank has radius\n$r=(10-h)/5$, by similar triangles,\n and area $\\ds \\pi(10-h)^2/25$. A section of the tank at depth\n$h$ thus has volume approximately $\\ds \\pi(10-h)^2/25\\Delta h$ and so\ncontains $\\ds \\sigma\\pi(10-h)^2/25\\Delta h$ kilograms of water, where\n$\\sigma$ is the density of water in kilograms per cubic meter;\n$\\sigma\\approx 1000$. The force due to gravity on this much water is\n$\\ds 9.8\\sigma\\pi(10-h)^2/25\\Delta h$, and finally, this section of water\nmust be lifted a distance $h$, which requires\n$\\ds h9.8\\sigma\\pi(10-h)^2/25\\Delta h$ Newton-meters of work. The total\nwork is therefore\n$$W={9.8\\sigma\\pi\\over 25} \\int_0^{10} h(10-h)^2\\,dh={980000\\over3}\\pi\\approx\n1026254\\quad\\hbox{Newton-meters.}$$\n\\end{solution}\n\nA spring has a ``natural length,'' its length if nothing is stretching\nor compressing it. 
If the spring is either stretched or compressed the\nspring provides an opposing force; according to \\dfont{Hooke's Law} \nthe magnitude of this force is proportional to the\ndistance the spring has been stretched or compressed: $F=kx$.\nThe constant of proportionality, $k$, of course depends on the\nspring. Note that $x$ here represents the {\\em change} in length from the\nnatural length.\n\n\\begin{example}{Compressing a Spring}{Compressing a Spring}\\label{Compressing a Spring} \nSuppose $k=5$ for a given spring that has a natural length of\n$0.1$ meters. Suppose a force is applied that compresses the spring to\nlength $0.08$. What is the magnitude of the force?\n\\end{example}\n\n\\begin{solution}\nAssuming that the\nconstant $k$ has appropriate dimensions (namely, kg/s$^2$), the force is\n$5(0.1-0.08)=5(0.02)=0.1$ Newtons.\n\\end{solution}\n\n\\begin{example}{Compressing a Spring (continued)}{Compressing a Spring (continued)}\\label{Compressing a Spring (continued)} \nHow much work is done in compressing the spring in the\nprevious example from its natural length to $0.08$ meters? \nFrom $0.08$ meters to $0.05$ meters? \nHow much work is done to stretch the spring\nfrom $0.1$ meters to $0.15$ meters?\n\\end{example}\n\n\\begin{solution}\nWe can approximate the work by\ndividing the distance that the spring is compressed (or stretched)\ninto small subintervals. Then the force exerted by the spring is\napproximately constant over the subinterval, so the work required to\ncompress the spring from $\\ds x_i$ to $\\ds x_{i+1}$ is approximately\n$\\ds 5(x_i-0.1)\\Delta x$.  The total work is approximately\n$$\\sum_{i=0}^{n-1} 5(x_i-0.1)\\Delta x$$\nand in the limit\n$$W=\\int_{0.1}^{0.08} 5(x-0.1)\\,dx=\\left.{5(x-0.1)^2\\over2}\\right|_{0.1}^{0.08}=\n{5(0.08-0.1)^2\\over2}-{5(0.1-0.1)^2\\over2}={1\\over1000}\\,\\hbox{N-m}.$$\nThe other values we seek simply use different limits. To compress the\nspring from $0.08$\nmeters to $0.05$ meters takes\n$$W=\\int_{0.08}^{0.05} 5(x-0.1)\\,dx=\\left.{5(x-0.1)^2\\over2}\\right|_{0.08}^{0.05}=\n{5(0.05-0.1)^2\\over2}-{5(0.08-0.1)^2\\over2}={21\\over4000}\\quad\\hbox{N-m}$$\nand to stretch the spring\nfrom $0.1$ meters to $0.15$ meters requires\n$$W=\\int_{0.1}^{0.15} 5(x-0.1)\\,dx=\\left.{5(x-0.1)^2\\over2}\\right|_{0.1}^{0.15}=\n{5(0.15-0.1)^2\\over2}-{5(0.1-0.1)^2\\over2}={1\\over160}\\quad\\hbox{N-m}.$$\n\\end{solution}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:Work}}\n\n\\begin{enumialphparenastyle}\n\n%%%%%%%%%%\n\\begin{ex}\n How much work is done in lifting a 100 kilogram weight from\nthe surface of the earth to an orbit 35,786 kilometers above the\nsurface of the earth?\n\\begin{sol}\n $\\approx 5,305,028,516$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n How much work is done in lifting a 100 kilogram weight from\nan orbit 1000 kilometers above the surface of the earth to an orbit\n35,786 kilometers above the surface of the earth?\n\\begin{sol}\n $\\approx 4,457,854,041$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n A water tank has the shape of an upright cylinder with radius $r=1$\nmeter and height 10 meters. 
If the depth of the water is 5 meters, how\nmuch work is required to pump all the water out the top of the tank?\n\\begin{sol}\n $367,500 \\pi$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Suppose the tank of the previous problem is lying on its\nside, so that the circular ends are vertical, and that it has the same\namount of water as before. How much work is required to pump the water\nout the top of the tank (which is now 2 meters above the bottom of the\ntank)?\n\\begin{sol}\n $49000\\pi + 196000/3$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n A water tank has the shape of the bottom half of a sphere\nwith radius $r=1$ meter. If the tank is full,\nhow much work is required to pump all the water out\nthe top of the tank?\n\\begin{sol}\n $2450\\pi$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n A spring has constant $k=10$ kg/s$^2$. How much work is done\nin compressing it $1/10$ meter from its natural length?\n\\begin{sol}\n $0.05$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n A force of 2 Newtons will compress a spring from 1 meter\n(its natural length) to\n0.8 meters. How much work is required to stretch the spring from \n1.1 meters to 1.5 meters?\n\\begin{sol}\n $6/5$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n A 20 meter long steel cable has density 2 kilograms per\nmeter, and is hanging straight down. How much work is required to lift\nthe entire cable to the height of its top end?\n\\begin{sol}\n $3920$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n The cable in the previous problem has a 100 kilogram bucket\nof concrete attached to its lower end. How much work is required to lift\nthe entire cable and bucket to the height of its top end?\n\\begin{sol}\n $23520$ N-m\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n Consider again the cable and bucket of the previous problem.\nHow much work is required to lift the bucket 10 meters by raising the\ncable 10 meters? (The top half of the cable ends up at the height of\nthe top end of the cable, while the bottom half of the cable is lifted\n10 meters.)\n\\begin{sol}\n $12740$ N-m\n\\end{sol}\n\\end{ex}\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "facb1b9c988a41aabcc9a3f388c04aa6314a1c03", "size": 11865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8-applications-of-integration/8-5-work.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8-applications-of-integration/8-5-work.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8-applications-of-integration/8-5-work.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.9073482428, "max_line_length": 124, "alphanum_fraction": 0.7184155078, "num_tokens": 3845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8397339656668286, "lm_q1q2_score": 0.5788333391396618}}
{"text": "\n\\subsection{Classifying with probabilistic decision trees}\n\nPreviously our decision tree classifier was binary.\n\nWe can instead adapt the mixed tree model and using a probit model at each leaf.\n\n\n", "meta": {"hexsha": "9be61523f4a54b6a2b99666c46a8b83e2f2aa40b", "size": 197, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/treesRegression/01-04-treeProbabilistic.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.8888888889, "max_line_length": 80, "alphanum_fraction": 0.807106599, "num_tokens": 38, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8397339676722393, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.578833335162345}}
{"text": "\n\\subsection{Probabilistic Roadmap Planner}\n\nGenerate random configuration of robot in space.\n\nFind n positions which are legal\n\nLink them using k nearest neighbours\n\nRisk: graph not fully connected.\n\nIf so, can add extra nodes between breaks\n\n", "meta": {"hexsha": "27ab1ae0dfde54c73573c42e46f71749235ffae4", "size": 244, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/ai/robotics/02-03-PRP.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/ai/robotics/02-03-PRP.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/ai/robotics/02-03-PRP.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.4285714286, "max_line_length": 48, "alphanum_fraction": 0.7991803279, "num_tokens": 50, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8479677660619633, "lm_q2_score": 0.682573740869499, "lm_q1q2_score": 0.5788005302176664}}
{"text": "\\chapter{Basic Notations}\n\t\t\\begin{itemize}\n\t\t\t\\item $\\R^+ = \\R_{\\geq 0} = [0, +\\infty[$\n\t\t\t\\item $\\N = \\{0, 1, 2, ...\\}$ \n\t\t\t\\item WLOG = Without Loss Of Generality\n\t\t\t\\item PID = Principal Ideal Domain\n\t\t\t\\item $\\delta_{i, j} = \\delta_i^j$ is the Kronecker symbol (it's $1$ if $i=j$ and $0$ otherwise)\n\t\t\t\\item If $(X, d)$ is a metric space, $B_{<r}(a) := \\{x \\in X \\mid d(a, x) < r \\}$ and $B_{\\leq r}(a) := \\{x \\in X \\mid d(a, x) \\leq r \\}$\n\t\t\t\\item If $A$ is a set, $\\#A = |A| = \\text{card}(A)$\n\t\t\\end{itemize}\n", "meta": {"hexsha": "9209879a53f643aba3c4b209b8880f47d742ebc4", "size": 516, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Backmatter/notations.tex", "max_stars_repo_name": "carlo300/BachelorThesis", "max_stars_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-21T10:59:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-29T10:11:24.000Z", "max_issues_repo_path": "Backmatter/notations.tex", "max_issues_repo_name": "carlo300/BachelorThesis", "max_issues_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Backmatter/notations.tex", "max_forks_repo_name": "carlo300/BachelorThesis", "max_forks_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9090909091, "max_line_length": 140, "alphanum_fraction": 0.5271317829, "num_tokens": 236, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677622198946, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.578800516644219}}
{"text": "\\subsection{Local Extrema}\\label{subsec:LocalExtremaSubsection}\nA \\dfont{local maximum} point on a function is a\npoint $(x,y)$ on the graph of the function whose $y$ coordinate is\nlarger than all other $y$ coordinates on the graph at points ``close\nto'' $(x,y)$. More precisely, $(x,f(x))$ is a local maximum if there\nis an interval $(a,b)$ with $a<x<b$ and $f(x)\\ge f(z)$ for every $z$\nin $(a,b)$. Similarly, $(x,y)$ is a \\dfont{local minimum} point\nif it has locally the smallest $y$ coordinate. Again\nbeing more precise: $(x,f(x))$ is a local minimum if there\nis an interval $(a,b)$ with $a<x<b$ and $f(x)\\le f(z)$ for every $z$\nin $(a,b)$. A \\dfont{local extremum} is either a local minimum or a local maximum.\n\n\\begin{definition}{Local Maxima and Minima}{LocalMaxMinDef}\nA real-valued function $f$ has a \\deffont{local maximum} at $x_0$ if $f(x_0)$\nis the largest value of $f$ near $x_0$; in other words, $f(x_0\\geq f(x)$\nwhen $x$ is near $x_0$.\n\n\\medskip\nA real-valued function $f$ has a \\deffont{local minimum} at $x_0$ if $f(x_0)$\nis the smallest value of $f$ near $x_0$; in other words, $f(x_0)\\leq f(x)$\nwhen $x$ is near $x_0$.\n\\end{definition}\n\nLocal maximum and minimum points are quite distinctive on the graph of\na function, and are therefore useful in understanding the shape of the\ngraph. In many applied problems we want to find the largest or\nsmallest value that a function achieves (for example, we might want\nto find the minimum cost at which some task can be performed) and so\nidentifying maximum and minimum points will be useful for applied\nproblems as well. Some examples of local maximum and minimum points\nare shown in Figure~\\ref{fig:max and min points}.\n\n\\begin{figure}[H]\n\t\\centerline{\\vbox{\\beginpicture\n\t\t\t\\normalgraphs\n\t\t\t%\\ninepoint\n\t\t\t\\setcoordinatesystem units <1.5truecm,1.5truecm>\n\t\t\t\\setplotarea x from -1.5 to 1.5, y from -1.5 to 1.5\n\t\t\t\\axis left shiftedto x=0 /\n\t\t\t\\axis bottom shiftedto y=0 /\n\t\t\t\\setquadratic\n\t\t\t\\plot -1.400 -1.344 -1.330 -1.023 -1.260 -0.740 -1.190 -0.495 -1.120 -0.285 \n\t\t\t-1.050 -0.108 -0.980 0.039 -0.910 0.156 -0.840 0.247 -0.770 0.313 \n\t\t\t-0.700 0.357 -0.630 0.380 -0.560 0.384 -0.490 0.372 -0.420 0.346 \n\t\t\t-0.350 0.307 -0.280 0.258 -0.210 0.201 -0.140 0.137 -0.070 0.070 \n\t\t\t0.000 0.000 0.070 -0.070 0.140 -0.137 0.210 -0.201 0.280 -0.258 \n\t\t\t0.350 -0.307 0.420 -0.346 0.490 -0.372 0.560 -0.384 0.630 -0.380 \n\t\t\t0.700 -0.357 0.770 -0.313 0.840 -0.247 0.910 -0.156 0.980 -0.039 \n\t\t\t1.050 0.108 1.120 0.285 1.190 0.495 1.260 0.740 1.330 1.023 \n\t\t\t1.400 1.344 /\n\t\t\t\\put {$\\bullet$} at -0.5512 0.3752\n\t\t\t\\put {$\\bullet$} at 0.5512 -0.3752\n\t\t\t\\put {$A$} [b] <0pt,4pt> at -0.5512 0.3752\n\t\t\t\\put {$B$} [t] <0pt,-4pt> at 0.5512 -0.3752\n\t\t\t\\setcoordinatesystem units <1.5truecm,1.5truecm> point at -4 0\n\t\t\t\\setplotarea x from -1.5 to 1.5, y from -1.5 to 1.5\n\t\t\t\\axis left shiftedto x=0 /\n\t\t\t\\axis bottom shiftedto y=0 /\n\t\t\t\\setlinear\n\t\t\t\\plot -1.5 -1 -1 0.5 -0.25 -0.25 0.5 1.4 1.5 -0.5 /\n\t\t\t\\multiput {$\\bullet$} at -1 0.5 -0.25 -0.25 0.5 1.4 /\n\t\t\t\\multiput {$A$} [b] <0pt,4pt> at -1 0.5 0.5 1.4 /\n\t\t\t\\put {$B$} [t] <0pt,-4pt> at -0.25 -0.25 \n\t\t\t\\endpicture}}\n\t\\caption{Some local maximum points ($A$) and minimum points ($B$). 
\\label{fig:max and min points}}\n\\end{figure}\n\nIf $(x,f(x))$ is a point where $f(x)$ reaches a local maximum or minimum,\nand if the derivative of $f$ exists at $x$, then the graph has a\ntangent line and the tangent line \\emph{must} be horizontal. This is\nimportant enough to state as a theorem.\n\nThe proof is simple enough and we include it here, but you may accept Fermat's\nTheorem based on its strong intuitive appeal and come back to its proof at a\nlater time.\n\n\\begin{theorem}{Fermat's Theorem}{FT} \n\tIf $f(x)$ has a local extremum at $x=a$ and $f$ is differentiable at $a$, then $f'(a)=0$.\n\\end{theorem}\n\\begin{proof}\n\tWe shall give the proof for the case where $f\\left( x\\right) $ has a\n\tlocal maximum at $x=a.$ The proof for the local minimum case is similar.\n\t\n\tSince $f\\left( x\\right) $ has a local maximum at $x=a,$ there is an open\n\tinterval $\\left( c,d\\right) $ with $c<a<d$ and $f\\left( x\\right) \\leq\n\tf\\left( a\\right) $ for every $x$ in $\\left( c,d\\right) .$ So, $f\\left(\n\tx\\right) -f\\left( a\\right) \\leq 0$ for all such $x.$ Let us now look at the\n\tsign of the difference quotient $\\dfrac{f\\left( x\\right) -f\\left( a\\right) }{%\n\t\tx-a}.$ We consider two cases, according as $x>a$ or $x<a.$\n\t\n\tIf $x>a,$ then $x-a>0$ and so, $\\dfrac{f\\left( x\\right) -f\\left( a\\right) }{%\n\t\tx-a}\\leq 0.$ Taking the limit as $x$ approaches $a$ from the right, we get%\n\t\\begin{equation*}\n\t\t\\lim_{x\\rightarrow a^{+}}\\frac{f\\left( x\\right) -f\\left( a\\right) }{x-a}\\leq\n\t\t0.\n\t\\end{equation*}\n\tOn the other hand, if $x<a,$ then $x-a<0$ and so, $\\dfrac{f\\left( x\\right)\n\t\t-f\\left( a\\right) }{x-a}\\geq 0.$ Taking the limit as $x$ approaches $a$ from the\n\tleft, we get%\n\t\\begin{equation*}\n\t\t\\lim_{x\\rightarrow a^{-}}\\frac{f\\left( x\\right) -f\\left( a\\right) }{x-a}\\geq\n\t\t0.\n\t\\end{equation*}\n\tSince $f$ is differentiable at $a,$ $f^{\\prime }\\left( a\\right)\n\t=\\lim\\limits_{x\\rightarrow a^{+}}\\dfrac{f\\left( x\\right) -f\\left( a\\right) }{%\n\t\tx-a}=\\lim\\limits_{x\\rightarrow a^{-}}\\dfrac{f\\left( x\\right) -f\\left(\n\t\ta\\right) }{x-a}.$ Therefore, we have both $f^{\\prime }\\left( a\\right) \\leq 0$\n\tand $f^{\\prime }\\left( a\\right) \\geq 0.$ So, $f^{\\prime }\\left( a\\right) =0.$\n\\end{proof}\n\nThus, the only points at which a function can have a local maximum or minimum are\npoints at which the derivative is zero, as in the left hand graph in\nFigure~\\ref{fig:max and min points},\nor the derivative is undefined, as in the right hand graph. Any value\nof $x$ in the domain of $f$ for which $f'(x)$ is zero or undefined is called a\n\\dfont{critical point} for $f$. When looking for local maximum \nand minimum points, you are likely to\nmake two sorts of mistakes: You may forget that a maximum or minimum\ncan occur where the derivative does not exist, and so forget to check\nwhether the derivative exists everywhere. You might also assume that\nany place that the derivative is zero is a local maximum or minimum\npoint, but this is not true. A portion of the graph of $\\ds f(x)=x^3$ is\nshown in Figure~\\ref{fig:non extremum}. 
The derivative of $f$ is\n$f'(x)=3x^2$, and $f'(0)=0$, but there is neither a maximum nor\nminimum at $(0,0)$.\n\n\\begin{figure}[H]\n\t\\centerline{\\vbox{\\beginpicture\n\t\t\t\\normalgraphs\n\t\t\t%\\ninepoint\n\t\t\t\\setcoordinatesystem units <1.5truecm,0.75truecm>\n\t\t\t\\setplotarea x from -1.5 to 1.5, y from -3.4 to 3.4\n\t\t\t\\axis left shiftedto x=0 /\n\t\t\t\\axis bottom shiftedto y=0 /\n\t\t\t\\setquadratic\n\t\t\t\\plot -1.500 -3.375 -1.425 -2.894 -1.350 -2.460 -1.275 -2.073 -1.200 -1.728 \n\t\t\t-1.125 -1.424 -1.050 -1.158 -0.975 -0.927 -0.900 -0.729 -0.825 -0.562 \n\t\t\t-0.750 -0.422 -0.675 -0.308 -0.600 -0.216 -0.525 -0.145 -0.450 -0.091 \n\t\t\t-0.375 -0.053 -0.300 -0.027 -0.225 -0.011 -0.150 -0.003 -0.075 -0.000 \n\t\t\t0.000 0.000 0.075 0.000 0.150 0.003 0.225 0.011 0.300 0.027 \n\t\t\t0.375 0.053 0.450 0.091 0.525 0.145 0.600 0.216 0.675 0.308 \n\t\t\t0.750 0.422 0.825 0.562 0.900 0.729 0.975 0.927 1.050 1.158 \n\t\t\t1.125 1.424 1.200 1.728 1.275 2.073 1.350 2.460 1.425 2.894 \n\t\t\t1.500 3.375 /\n\t\t\t\\endpicture}}\n\t\\caption{No maximum or minimum even though the derivative is zero. \\label{fig:non extremum}}\n\\end{figure}\n\nSince the derivative is zero or undefined at both local maximum and\nlocal minimum points, we need a way to determine which, if either,\nactually occurs. The most\nelementary approach, but one that is often tedious or difficult, is to\ntest directly whether the $y$ coordinates ``near'' the potential\nmaximum or minimum are above or below the $y$ coordinate at the point\nof interest. Of course, there are too many points ``near'' the point\nto test, but a little thought shows we need only test two provided we\nknow that $f$ is continuous (recall that this means that the graph of\n$f$ has no jumps or gaps).\n\nSuppose, for example, that we have identified three points at which\n$f'$ is zero or nonexistent: $\\ds (x_1,y_1)$, $\\ds (x_2,y_2)$, $\\ds (x_3,y_3)$,\nand $\\ds x_1<x_2<x_3$ (see Figure~\\ref{fig:testing for max and\n\tmin}). Suppose that we compute the value of $f(a)$ for $\\ds x_1<a<x_2$, and\nthat $\\ds f(a)<f(x_2)$. What can we say about the graph between $a$ and\n$\\ds x_2$? Could there be a point $\\ds (b,f(b))$, $\\ds a<b<x_2$ with\n$\\ds f(b)>f(x_2)$? No: if there were, the graph would go up from\n$(a,f(a))$ to $(b,f(b))$ then down to $\\ds (x_2,f(x_2))$ and somewhere in\nbetween would have a local maximum point. (This is not obvious; it is\na result of the Extreme Value Theorem.)\nBut at that local maximum\npoint the derivative of $f$ would be zero or nonexistent, yet we\nalready know that the derivative is zero or nonexistent only at $\\ds x_1$,\n$\\ds x_2$, and $\\ds x_3$. The upshot is that one computation tells us that\n$\\ds (x_2,f(x_2))$ has the largest $y$ coordinate of any point on the\ngraph near $\\ds x_2$ and to the left of $\\ds x_2$. We can perform the same\ntest on the right. 
If we find that on both sides of $\\ds x_2$ the values\nare smaller, then there must be a local maximum at $\\ds (x_2,f(x_2))$; if\nwe find that on both sides of $\\ds x_2$ the values are larger, then there\nmust be a local minimum at $\\ds (x_2,f(x_2))$; if we find one of each,\nthen there is neither a local maximum nor a minimum at $\\ds x_2$.\n\n\\begin{figure}[H]\n\t\\centerline{\\vbox{\\beginpicture\n\t\t\t\\normalgraphs\n\t\t\t%\\ninepoint\n\t\t\t\\setcoordinatesystem units <1.5truecm,1truecm>\n\t\t\t\\setplotarea x from -1.5 to 1.5, y from -1.5 to 1.5\n\t\t\t\\axis left shiftedto x=0 /\n\t\t\t\\axis bottom shiftedto y=0 ticks withvalues \n\t\t\t{$x_1$} {$a$} {$b$} {$x_2$} {$x_3$} / at\n\t\t\t-1.3 -1 -0.5 0.5 1 / /\n\t\t\t\\multiput {$\\bullet$} at -1.3 0.5 -1 0.8 0.5 1.3 1 0.2 /\n\t\t\t\\setquadratic\n\t\t\t\\endpicture}}\n\t\\caption{Testing for a maximum or minimum. \\label{fig:testing for max and min}}\n\\end{figure}\n\nIt is not always easy to compute the value of a function at a\nparticular point. The task is made easier by the availability of\ncalculators and computers, but they have their own drawbacks---they do\nnot always allow us to distinguish between values that are very close\ntogether. Nevertheless, because this method is conceptually simple and\nsometimes easy to perform, you should always consider it.\n\n\\begin{example}{Simple Cubic}{simple cubic}\n\tFind all local maximum and minimum points for the function \n\t$\\ds f(x)=x^3-x$. \n\\end{example}\n\n\\begin{solution} \n\tThe derivative is $\\ds f'(x)=3x^2-1$. This is defined\n\teverywhere and is zero at $\\ds x=\\pm \\sqrt{3}/3$. Looking first at\n\t$\\ds x=\\sqrt{3}/3$, we see that $\\ds f(\\sqrt{3}/3)=-2\\sqrt{3}/9$. Now we test\n\ttwo points on either side of \n\t$\\ds x=\\sqrt{3}/3$, choosing one point in the interval $(-\\sqrt{3}/3,\\sqrt{3}/3)$ and one point in the interval $(\\sqrt{3}/3,\\infty)$. Since\n\t$\\ds f(0)=0>-2\\sqrt{3}/9$\n\tand $\\ds f(1)=0>-2\\sqrt{3}/9$, there must be a local minimum at \n\t$\\ds x=\\sqrt{3}/3$. For $\\ds x=-\\sqrt{3}/3$, we see that\n\t$\\ds f(-\\sqrt{3}/3)=2\\sqrt{3}/9$. This time we can use $x=0$ and $x=-1$,\n\tand we find that $\\ds f(-1)=f(0)=0< 2\\sqrt{3}/9$, so there must be a local\n\tmaximum at $\\ds x=-\\sqrt{3}/3$.\n\\end{solution}\n\nOf course this example is made very simple by our choice of points to\ntest, namely $x=-1$, $0$, $1$. We could have used other values, say\n$-5/4$, $1/3$, and $3/4$, but this would have made the calculations\nconsiderably more tedious, and we should always choose very simple points\nto test if we can.\n\n\\begin{example}{Max and Min}{max and min}\\label{max and min}\n\tFind all local maximum and minimum points for \n\t$f(x)=\\sin x+\\cos x$. \n\\end{example}\n\n\\begin{solution} \n\tThe derivative is $f'(x)=\\cos x-\\sin x$. This is\n\talways defined and is zero whenever $\\cos x=\\sin x$. Recalling that\n\t$\\cos x$ and $\\sin x$ are the $x$ and $y$ coordinates of points on\n\ta unit circle, we see that $\\cos x=\\sin x$ when $x$ is $\\pi/4$, \n\t$\\pi/4\\pm\\pi$, $\\pi/4\\pm2\\pi$, $\\pi/4\\pm3\\pi$, etc. Since both sine\n\tand cosine have a period of $2\\pi$, we need only determine the status\n\tof $x=\\pi/4$ and $x=5\\pi/4$. We can use $0$ and $\\pi/2$ to test the\n\tcritical value $x= \\pi/4$. \n\tWe find that $\\ds f(\\pi/4)=\\sqrt{2}$, $\\ds f(0)=1<\\sqrt{2}$ and $\\ds f(\\pi/2)=1$,\n\tso there is a local maximum when $x=\\pi/4$ and also when\n\t$x=\\pi/4\\pm2\\pi$, $\\pi/4\\pm4\\pi$, etc. 
We can summarize this more\n\tneatly by saying that there are local maxima at $\\pi/4\\pm 2k\\pi$ for\n\tevery integer $k$.\n\t\n\tWe use $\\pi$ and $2\\pi$ to test the critical value $x=5\\pi/4$. The\n\trelevant values are $\\ds f(5\\pi/4)=-\\sqrt2$, $\\ds f(\\pi)=-1>-\\sqrt2$,\n\t$\\ds f(2\\pi)=1>-\\sqrt2$, so there is a local minimum at $x=5\\pi/4$,\n\t$5\\pi/4\\pm2\\pi$, $5\\pi/4\\pm4\\pi$, etc. More succinctly, there are\n\tlocal minima at $5\\pi/4\\pm 2k\\pi$ for\n\tevery integer $k$.\n\\end{solution}\n\n\\begin{example}{Max and Min}{maxminexampletwo}\n\tFind all local maximum and minimum points for $g\\left( x\\right) =x^{2/3}$.\n\\end{example}\n\\begin{solution}\n\tThe derivative is $g^{\\prime }\\left( x\\right) =\\frac{2}{3}%\n\tx^{-1/3}. $ This is undefined when $x=0$ and is not equal to zero for any $x$\n\tin the domain of $g^{\\prime }.$ Now we test two points on either side of $%\n\tx=0.$ We use $x=-1$ and $x=1.$ Since $g\\left( 0\\right) =0,$ $g\\left(\n\t-1\\right) =1>0$ and $g\\left( 1\\right) =1>0,$ there must be a local minimum\n\tat $x=0$.\n\\end{solution}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{subsec:LocalExtremaSubsection}}\n\n\\begin{enumialphparenastyle}\n\nFind all local maximum and minimum points $(x,y)$ by the method of this section.\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=x^2-x$ \n\t\\begin{sol}\n\t\tmin at $x=1/2$\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=2+3x-x^3$ \n\t\\begin{sol}\n\t\tmin at $x=-1$, max at $x=1$\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=x^3-9x^2+24x$\n\t\\begin{sol}\n\t\tmax at $x=2$, min at $x=4$\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=x^4-2x^2+3$ \n\t\\begin{sol}\n\t\tmin at $x=\\pm 1$, max at $x=0$.\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=3x^4-4x^3$\n\t\\begin{sol}\n\t\tmin at $x=1$\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=(x^2-1)/x$\n\t\\begin{sol}\n\t\tnone\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds y=3x^2-(1/x^2)$ \n\t\\begin{sol}\n\t\tnone\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$y=\\cos(2x)-x$ \n\t\\begin{sol}\n\t\tmin at $x=7\\pi/12+k\\pi$, max at $x=-\\pi/12+k\\pi$, for integer $k$.\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%%\n%\\begin{ex}\n% $\\ds f(x) =\\cases{ x-1 & $x < 2$ \\cr\n%x^2 & $x\\geq 2$\\cr}$\n%\\begin{sol}\n% none\n%\\end{sol}\n%\\end{ex}\n%\n% %%%%%%%%%%\n%\\begin{ex}\n% $\\ds f(x) =\\cases{x-3 & $x < 3$ \\cr\n%x^3  & $3\\leq x \\leq 5$\\cr\n%1/x  &$x>5$\\cr}$\n%\\begin{sol}\n% local max at $x=5$\n%\\end{sol}\n%\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\t$\\ds f(x) = x^2 - 98x + 4$\n\t%(Hint: Complete the square.)\n\t\\begin{sol}\n\t\tlocal min at $x=49$\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%%\n%\\begin{ex}\n% $\\ds f(x) =\\cases{ -2 & $x = 0$ \\cr\n%1/x^2 &$x \\neq 0$\\cr}$\n%\\begin{sol}\n% local min at $x=0$\n%\\end{sol}\n%\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tFor any real number $x$ there is a unique\n\tinteger $n$ such that $n \\leq x < n +1$, and the greatest\n\tinteger\\index{greatest integer} function is defined as $\\ds\\lfloor\n\tx\\rfloor = n$. Where\n\tare the critical values of the greatest integer function?  
Which are\n\tlocal maxima and which are local minima?\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tExplain why the function $f(x) =1/x$ has no local\n\tmaxima or minima.\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tHow many critical points can a quadratic polynomial function have?\n\t\\begin{sol}\n\t\tone\n\t\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tShow that a cubic polynomial can have at most two critical\n\tpoints. Give examples to show that a cubic polynomial can have zero,\n\tone, or two critical points.\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tExplore the family of functions $\\ds f(x) = x^3 + cx +1$ where $c$\n\tis a constant.  How many and what types of local extremes are there?\n\tYour answer should depend on the value of $c$, that is, different\n\tvalues of $c$ will give different answers.\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\n\tWe generalize the preceding two questions. Let $n$ be a\n\tpositive integer and let $f$ be a polynomial of degree $n$. How many\n\tcritical points can $f$ have? (Hint: Recall the \\dfont{Fundamental\n\t\tTheorem of Algebra}, \n\twhich says that a polynomial of degree $n$ has\n\tat most $n$ roots.)\n\\end{ex}\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "725fc17b8e23ffb39510cb8b6c4562aca565fba6", "size": 16260, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5-applications-of-derivatives/5-2-1-local-extrema.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5-applications-of-derivatives/5-2-1-local-extrema.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5-applications-of-derivatives/5-2-1-local-extrema.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8139534884, "max_line_length": 141, "alphanum_fraction": 0.6530750308, "num_tokens": 6123, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6334102775181399, "lm_q2_score": 0.9136765157744067, "lm_q1q2_score": 0.5787320954184741}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{September 12, 2014}\n\\maketitle\n\\section*{assignment}\n\\subsection*{1.4 \\#16}\n$[a]\\in\\mathbb{Z}_n$. $[a]$ is nilpotent if $[a]^k=0$ for some $k\\ge 1$. zero is always nilpotent. show that $\\mathbb{Z}_n$ has no nonzero nilpotents iff n has no factor that is a square. if n has no square factors then the prime factorization consists of distinct primes to the power of one only.\n\\subsubsection*{proof}\n$\\Rightarrow$\n\nAssume that $\\mathbb{Z}_n$ has no nonzero nilpotents. by contradiction assume that there exists some prime p such that $p^2|n$. write $n=p_1^{\\alpha_1}p_2^{\\alpha_2}\\dots p_t^{\\alpha_t}$ at least one $\\alpha_i\\ge1$. choose $a=p_1p_2\\dots p_t$. then $[a]^{\\max\\alpha}=[0]$. and $[a]\\ne0$, contradiction because $n|a$ so n is square free.\n\n$\\Leftarrow$\nassume $n=p_1p_2\\dots p_t$ $\\forall p_i$ are distinct.\ntake $[a]\\in\\mathbb{Z}_n$ andd assume $[a]^k=[0]$.\nthen $n|a^k$ and $p_1p_2\\dots p_t|a^k$. $\\forall p_i, p_i|a^k$.\nFor every $i$ $p_i|a$ therefore $p_1p_2\\dots p_t|a$ and $n|a$ so $[a]=[0]$.\n\\section*{last time}\n$[a]_n$ is invertible iff $(a,n)=1$\na non-zero element of $\\mathbb{Z}_n$ is either invertible or a zero-divisor\n\n\\subsubsection*{proof}\nlet $[a]_n\\in\\mathbb{Z}_n, n\\not\\vert a$. if $(n,a)=1$ thne $[a]_n$ is invertible. if $(n,a)=d>1$ then $[a]_n[\\frac{n}{d}]=[0]_n$ because $a\\frac{n}{d}=\\frac{a}{d}n$ so $a\\frac{n}{d}$ is a multiple of n. $d>1\\to d\\ne0$.\n\\subsection*{consequence}\nthe following are equivalent:\n\\begin{enumerate}\n\\item\nn  is prime\n\\item\n$[0]$ is the only zero divisor of $\\mathbb{Z}_n$.\n\\item\nevery  nonzero element of $\\mathbb{Z}_n$ is invertible.\n\\end{enumerate}\n\\subsubsection*{proof}\nif $n$ prime, $(n,a)=1$ for $0<a<n$\n\\section*{euler function}\nif $n\\in\\mathbb{Z}^+$ $\\mathcal{P}(n)=$ the number of positive integers in $\\{1,2,\\dots,n\\}$ that are relatively prime to $n$.\n\\subsection*{example}\n$\\mathcal{P}(6)=2$ (because 1 and 5).\n\nobserve $\\mathcal{P}(n)$ is the number of invertible elements in $\\mathbb{Z}_n$.\n\\subsubsection*{notation}\n$\\mathbb{Z}_n^*=\\{[a]_n:[a]_n\\text{ is invertible}\\}$. so $\\mathcal{P}(n)=|\\mathbb{Z}_n^*|$\n\\subsection*{proposition}\n$\\mathbb{Z}_n^*$ is closed under multiplication.\n\\subsubsection*{proof}\nlet $[a]_n,[b]_n\\in\\mathbb{Z}_n^*$ then $[a]_n[a']_n=[1]$ and similarly $[b]_n[b']_n=[1]$\n\nthen $[a]_n[b]_n[a']_n[b']_n=[1]_n\\quad\\Box$\n\\section*{exercise}\nif $n=p_1^{\\alpha1}\\dots p_t^{\\alpha t},\\alpha i\\ge1$ distinct primes\n\n\\section*{eulers thm}\nif $(a,n)=1$ then $a^{\\mathcal{P}(n)}\\equiv 1\\mod n$.\n\\subsection*{proof}\n$\\mathbb{Z}_n^*=\\{[a_1],[a_2],\\dots,[a_{\\mathcal{P}(n)}\\}$. now consider $\\{[aa_1],[aa_2],\\dots,[aa_{\\mathcal{P}(n)}\\}\\in\\mathbb{Z}_n^*$. 
These are distinct elements: if $[aa_i]=[aa_j]$, multiply both sides by the inverse of $a$:\n\\begin{align*}\n  [aa_i]&=[aa_j]\\\\\n  [a]^{-1}[aa_i]&=[a]^{-1}[aa_j]\\\\\n  [a_i]&=[a_j]\n\\end{align*}\nSo $\\{[aa_1],[aa_2],\\dots,[aa_{\\mathcal{P}(n)}]\\}=\\mathbb{Z}_n^*$.\n\nNote that the two lists are permutations of each other.\n\nThen $[a_1][a_2]\\dots[a_{\\mathcal{P}(n)}]=[aa_1][aa_2]\\dots[aa_{\\mathcal{P}(n)}]=[a]^{\\mathcal{P}(n)}[a_1][a_2]\\dots[a_{\\mathcal{P}(n)}]$, and cancelling the invertible product $[a_1][a_2]\\dots[a_{\\mathcal{P}(n)}]$ gives $[a]^{\\mathcal{P}(n)}=[1]$, i.e. $a^{\\mathcal{P}(n)}\\equiv1\\mod n$.\n\\end{document}\n\n", "meta": {"hexsha": "1ddcae4172227b96abe49ee32576a14c74b3eeff", "size": 3243, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-09-12.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-09-12.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-09-12.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.0506329114, "max_line_length": 336, "alphanum_fraction": 0.6669750231, "num_tokens": 1315, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8354835411997897, "lm_q1q2_score": 0.578690987454653}}
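Both results in these notes are easy to sanity-check by brute force. The following Python sketch (the range of moduli is an arbitrary choice) verifies Euler's theorem and the square-free criterion for small $n$, writing phi for the note's $\mathcal{P}$:

\begin{verbatim}
from math import gcd

def phi(n):        # the number of units in Z_n
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def has_nonzero_nilpotent(n):
    return any(any(pow(a, k, n) == 0 for k in range(1, n + 1))
               for a in range(1, n))

def squarefree(n):
    return all(n % (p * p) != 0 for p in range(2, int(n**0.5) + 1))

for n in range(2, 60):
    # Euler: a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1
    assert all(pow(a, phi(n), n) == 1
               for a in range(1, n) if gcd(a, n) == 1)
    # Z_n has no nonzero nilpotents iff n is square-free
    assert has_nonzero_nilpotent(n) == (not squarefree(n))
print("checks pass for all n < 60")
\end{verbatim}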
{"text": "\\section{Series}\\label{sec:series}\n\nWhile much more can be said about sequences, we now turn to our\nprincipal interest, series. Recall that a series, roughly speaking, is\nthe sum of a sequence: If $\\ds\\{a_n\\}_{n=0}^\\infty$ is a sequence then the\nassociated series is\n\\[\\sum_{i=0}^\\infty a_n=a_0+a_1+a_2+\\cdots\\]\nAssociated with a series is a second sequence, called the {\\dfont sequence of\n  partial sums} $\\ds\\{s_n\\}_{n=0}^\\infty$:\n\\[s_n=\\sum_{i=0}^n a_i.\\]\nSo\n\\[s_0=a_0,\\quad s_1=a_0+a_1,\\quad s_2=a_0+a_1+a_2,\\quad \\ldots\\]\nA series converges\\index{series!convergent}\\index{convergent series} \nif the sequence of partial sums converges, and otherwise the series \ndiverges\\index{series!divergent}\\index{divergent series}.\n\nIf $\\{kx^n\\}_{n=0}^{\\infty}$ is a geometric sequence, then the associated series $\\sum_{i=0}^{\\infty}kx^i$ is called a geometric series.\n\n\\begin{theorem}{Geometric Series Convergence}{GeoSeriesConvergence}\nIf $|x|<1$, the geometric series $\\ds\\sum_i kx^i$ converges to $\\ds\\frac{k}{1-x}$, otherwise the series diverges (unless $k=0$).\n\\end{theorem}\n\\begin{proof}\nIf $\\ds a_n=kx^n$, $\\ds\\sum_{n=0}^\\infty a_n$ is called a \n\\dfont{geometric series}. A typical partial sum is\n\\[s_n=k+kx+kx^2+kx^3+\\cdots+kx^n=k(1+x+x^2+x^3+\\cdots+x^n).\\]\nWe note that\n\\begin{align*}\n  s_n(1-x)&=k(1+x+x^2+x^3+\\cdots+x^n)(1-x) \\\\\n  &=k(1+x+x^2+x^3+\\cdots+x^n)1-k(1+x+x^2+x^3+\\cdots+x^{n-1}+x^n)x \\\\\n  &=k(1+x+x^2+x^3+\\cdots+x^n-x-x^2-x^3-\\cdots-x^n-x^{n+1}) \\\\\n  &=k(1-x^{n+1}) \\\\\n\\end{align*}\nso\n\\begin{align*}\n  s_n(1-x)&=k(1-x^{n+1}) \\\\\n  s_n&=k{1-x^{n+1}\\over 1-x}. \\\\\n\\end{align*}\nIf $|x|<1$, $\\ds\\lim_{n\\to\\infty}x^n=0$ so\n\\[\n  \\lim_{n\\to\\infty}s_n=\\lim_{n\\to\\infty}k{1-x^{n+1}\\over 1-x}=\n  k{1\\over 1-x}.\n\\]\nThus, when $|x|<1$ the geometric series converges to $k/(1-x)$.\n\\end{proof}\n\nWhen, for example, $k=1$ and $x=1/2$:\n\\[\n  s_n={1-(1/2)^{n+1}\\over 1-1/2}={2^{n+1}-1\\over 2^n}=2-{1\\over 2^n}\n  \\quad\\hbox{and}\\quad \\sum_{n=0}^\\infty {1\\over 2^n} = \n  {1\\over 1-1/2} = 2.\n\\]\nWe began the chapter with the series\n\\[\\sum_{n=1}^\\infty {1\\over 2^n},\\]\nnamely, the geometric series without the first term $1$. Each partial\nsum of this series is 1 less than the corresponding partial sum for \nthe geometric series, so of course the limit is also one less than the\nvalue of the geometric series, that is,\n\\[\\sum_{n=1}^\\infty {1\\over 2^n}=1.\\]\n\nIt is not hard to see that the following theorem follows from\nTheorem~\\ref{thm:SequenceProperties}. \n\n\\begin{theorem}{Series are Linear}{SeriesLinear}\nSuppose that $\\ds\\sum a_n$ and $\\ds\\sum b_n$ are convergent series,\nand $c$ is a constant. Then\n\n\\begin{enumerate}\n\\item $\\ds\\sum ca_n$ is convergent and $\\ds\\sum ca_n=c\\sum a_n$\n\\item $\\ds\\sum (a_n+b_n)$ is convergent and $\\ds\\sum (a_n+b_n)=\\sum a_n+\\sum b_n$.\n\\end{enumerate}\n\\end{theorem}\n\nNote that when $c$ is non-zero, the converse of the first part of this theorem is also true. That is, if $\\sum ca_n$ is convergent, then $\\sum a_n$ is also convergent; if $\\sum ca_n$ converges then $\\frac{1}{c}\\sum ca_n$ must converge.\n\nOn the other hand, the converse of the second part of the theorem is not true. For example, if $a_n=1$ and $b_n=-1$, then $\\sum a_n+\\sum b_n=\\sum 0=0$ converges, but each of $\\sum a_n$ and $\\sum b_n$ diverges.\n\n%The two parts of this theorem are subtly different. 
Suppose that $\\sum\n%a_n$ diverges; does $\\sum ca_n$ also diverge if $c$ is non-zero? Yes:\n%suppose instead that $\\sum ca_n$ converges; then by the theorem, $\\sum\n%(1/c)ca_n$ converges, but this is the same as $\\sum a_n$, which by\n%assumption diverges. Hence $\\sum ca_n$ also diverges. Note that we are\n%applying the theorem with $a_n$ replaced by $ca_n$ and $c$ replaced by\n%$(1/c)$.\n%\n%Now suppose that $\\sum a_n$ and $\\sum b_n$ diverge; does\n%$\\sum (a_n+b_n)$ also diverge? Now the answer is no: Let $a_n=1$ and\n%$b_n=-1$, so certainly $\\sum a_n$ and $\\sum b_n$ diverge. But\n%$\\sum (a_n+b_n)=\\sum(1+-1)=\\sum 0 = 0$. Of course, sometimes \n%$\\sum (a_n+b_n)$ will also diverge, for example, if $a_n=b_n=1$, then\n%$\\sum (a_n+b_n)=\\sum(1+1)=\\sum 2$ diverges.\n\nIn general, the sequence of partial sums $\\ds s_n$ is harder to understand\nand analyze than the sequence of terms $\\ds a_n$, and it is difficult\nto determine whether series converge and if so to what. The following result\nwill let us deal with some simple cases easily.\n\n\\begin{theorem}{Divergence Test}{thm:DivergenceTest}\nIf $\\ds\\sum a_n$ converges then $\\ds\\lim_{n\\to\\infty}a_n=0$.\n\\end{theorem}\n\\begin{proof}\nSince $\\sum a_n$ converges, $\\ds\\lim_{n\\to\\infty}s_n=L$ and \n$\\ds\\lim_{n\\to\\infty}s_{n-1}=L$, because this really says the same\nthing but ``renumbers'' the terms. By Theorem~\\ref{thm:SequenceProperties}, \n\\[\n  \\lim_{n\\to\\infty} (s_{n}-s_{n-1})=\n  \\lim_{n\\to\\infty} s_{n}-\\lim_{n\\to\\infty}s_{n-1}=L-L=0.\n\\]\nBut\n\\[\n  s_{n}-s_{n-1}=(a_0+a_1+a_2+\\cdots+a_n)-(a_0+a_1+a_2+\\cdots+a_{n-1})\n  =a_n,\n\\]\nso as desired $\\ds\\lim_{n\\to\\infty}a_n=0$.\n\\end{proof}\n\nThis theorem presents an easy divergence test: Given a series $\\sum\na_n$, if the limit $\\ds\\lim_{n\\to\\infty}a_n$ does not exist or has a value\nother than zero, the series diverges. Note well that the converse is\n\\emph{not} true: If $\\ds\\lim_{n\\to\\infty}a_n=0$ then the series does\nnot necessarily converge.\n\n\\begin{theorem}{The $n$-th Term Test}{nthTermTestTheorem}\nIf $\\ds\\lim_{n\\to\\infty}a_n\\neq 0$ or if the limit does not exist, then $\\ds\\sum a_n$ diverges.\n\\end{theorem}\n\\begin{proof}\nConsider the statement of the theorem in contrapositive form:\n\\[\\textbf{If }\\ds\\sum_{n=1}^{\\infty}a_n\\text{ converges, then }\\lim_{n\\to\\infty}a_n=0.\\]\nIf $s_n$ are the partial sums of the series, then the assumption that the series converges gives us\n\\[\\ds\\lim_{n\\to\\infty}s_n=s\\]\nfor some number $s$. Then\n\\[\\ds\\lim_{n\\to\\infty}a_n=\\lim_{n\\to\\infty}(s_n-s_{n-1})=\\lim_{n\\to\\infty}s_n-\\lim_{n\\to\\infty}s_{n-1}=s-s=0.\\]\n\\end{proof}\n\n\\begin{example}{}{}\nShow that $\\ds\\sum_{n=1}^\\infty {n\\over n+1}$ diverges.\n\\end{example}\\\n\\begin{solution}\nWe compute the limit:\n$$\\lim _{n\\to\\infty}{n\\over n+1}=1\\not=0.$$\nLooking at the first few terms perhaps makes it clear that the series\nhas no chance of converging:\n$${1\\over2}+{2\\over3}+{3\\over4}+{4\\over5}+\\cdots$$\nwill just get larger and larger; indeed, after a bit longer the series\nstarts to look very much like $\\cdots+1+1+1+1+\\cdots$, and of course\nif we add up enough 1's we can make the sum as large as we desire.\n\\end{solution}\n\n\\begin{example}{Harmonic Series}{HarmonicSeries}\nShow that $\\ds\\sum_{n=1}^\\infty {1\\over n}$ diverges.\n\\end{example}\n\\begin{solution}\nHere the theorem does not apply: $\\ds\\lim _{n\\to\\infty} 1/n=0$, so it\nlooks like perhaps the series converges. 
Indeed, if you have the\nfortitude (or the software) to add up the first 1000 terms you will find that\n$$\\sum_{n=1}^{1000} {1\\over n}\\approx 7.49,$$\nso it might be reasonable to speculate that the series converges to\nsomething in the neighborhood of 10. But in fact the partial sums do go\nto infinity; they just get big very, very slowly. Consider the\nfollowing:\n\n\\hbox to \\hsize{$\\ds 1+{1\\over 2}+{1\\over 3}+{1\\over 4} > \n1+{1\\over 2}+{1\\over 4}+{1\\over\n  4} = 1+{1\\over 2}+{1\\over 2}$\\hfill}\n\n\\hbox to \\hsize{$\\ds 1+{1\\over 2}+{1\\over 3}+{1\\over 4}+\n{1\\over 5}+{1\\over 6}+{1\\over\n    7}+{1\\over 8} > \n1+{1\\over 2}+{1\\over 4}+{1\\over 4}+{1\\over 8}+{1\\over 8}+{1\\over\n    8}+{1\\over 8} = 1+{1\\over 2}+{1\\over 2}+{1\\over 2}$\\hfill}\n\n\\hbox to \\hsize{$\\ds 1+{1\\over 2}+{1\\over 3}+\\cdots+{1\\over16}>\n1+{1\\over 2}+{1\\over 4}+{1\\over 4}+{1\\over 8}+\\cdots+{1\\over\n  8}+{1\\over16}+\\cdots +{1\\over16} =1+{1\\over 2}+{1\\over 2}+{1\\over\n  2}+{1\\over 2}$\\hfill}\nand so on. By swallowing up more and more terms we can always manage\nto add at least another $1/2$ to the sum, and by adding enough of\nthese we can make the partial sums as big as we like. In fact, it's\nnot hard to see from this pattern that\n$$1+{1\\over 2}+{1\\over 3}+\\cdots+{1\\over 2^n} > 1+{n\\over 2},$$\nso to make sure the sum is over 100, for example, we'd add\nup terms until we get to around $\\ds 1/2^{198}$, that is,\nabout $\\ds 4\\cdot 10^{59}$ terms. This series, $\\sum (1/n)$, is called the\n\\dfont{harmonic series}.\n\\end{solution}\n\nWe will often make use of the fact that the first few (e.g. any finite number of) terms in a series are irrelevant when determining whether it will converge. In other words, $\\sum_{n=0}^{\\infty}a_n$ converges if and only if $\\sum_{n=N}^{\\infty}a_n$ converges for some $N\\geq 1$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:series}}\n\n\\begin{enumialphparenastyle}\n\n\\begin{ex}\nExplain why $\\ds\\sum_{n=1}^\\infty {n^2\\over 2n^2+1}$\ndiverges.\n\\begin{sol}\n$\\ds\\lim_{n\\to\\infty} n^2/(2n^2+1)=1/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nExplain why $\\ds\\sum_{n=1}^\\infty {5\\over 2^{1/n}+14}$\ndiverges.\n\\begin{sol}\n$\\ds\\lim_{n\\to\\infty} 5/(2^{1/n}+14)=1/3$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nExplain why $\\ds\\sum_{n=1}^\\infty {3\\over n}$\ndiverges.\n\\begin{sol}\n$\\sum_{n=1}^\\infty {1\\over n}$ diverges, so $\\ds\\sum_{n=1}^\\infty 3{1\\over n}$ diverges\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=0}^\\infty {4\\over (-3)^n}- {3\\over 3^n}$. \n\\begin{sol}\n$-3/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=0}^\\infty {3\\over 2^n}+ {4\\over 5^n}$. 
\n\\begin{sol}\n$11$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=0}^\\infty {4^{n+1}\\over 5^n}$.\n\\begin{sol}\n$20$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=0}^\\infty {3^{n+1}\\over 7^{n+1}}$.\n\\begin{sol}\n$3/4$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=1}^\\infty \\left({3\\over 5}\\right)^n$.\n\\begin{sol}\n$3/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\sum_{n=1}^\\infty {3^n\\over 5^{n+1}}$.\n\\begin{sol}\n$3/10$\n\\end{sol}\n\\end{ex}\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "0f2b5c9271505a62a64e5a410f04ef185a1c1eb6", "size": 9688, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "9-sequences-and-series/9-2-series.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9-sequences-and-series/9-2-series.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9-sequences-and-series/9-2-series.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2615384615, "max_line_length": 278, "alphanum_fraction": 0.6699009083, "num_tokens": 3881, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6477982315512489, "lm_q2_score": 0.8933094152856197, "lm_q1q2_score": 0.5786842594501047}}
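A quick numerical illustration of the main facts in this section (a sketch; the constants are arbitrary): the geometric partial sums match the closed form $s_n=k(1-x^{n+1})/(1-x)$ and approach $k/(1-x)$, while the harmonic partial sums obey the lower bound $1+n/2$ at $N=2^n$ yet grow only logarithmically.
\begin{verbatim}
from math import log

# Geometric series: partial sums versus the closed form and the limit.
k, x = 3.0, 0.5
s = 0.0
for n in range(60):
    s += k * x**n
    assert abs(s - k * (1 - x**(n + 1)) / (1 - x)) < 1e-12
print(s, k / (1 - x))          # both are (essentially) 6.0

# Harmonic series: H_{2^n} >= 1 + n/2, yet H_N ~ ln N, so growth is slow.
H, N = 0.0, 0
for n in range(1, 21):
    while N < 2**n:
        N += 1
        H += 1.0 / N
    assert H >= 1 + n / 2
print(H, log(N))               # about 14.4 versus ln(2^20) ~ 13.86
\end{verbatim}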
{"text": "\\documentclass[acmtog]{acmart}\n\\usepackage{graphicx}\n\\usepackage{subfigure}\n\\usepackage{natbib}\n\\usepackage{listings}\n\\usepackage{bm}\n\n\\definecolor{blve}{rgb}{0.3372549 , 0.61176471, 0.83921569}\n\\definecolor{gr33n}{rgb}{0.29019608, 0.7372549, 0.64705882}\n\\makeatletter\n\\lst@InstallKeywords k{class}{classstyle}\\slshape{classstyle}{}ld\n\\makeatother\n\\lstset{language=C++,\n\tbasicstyle=\\small\\ttfamily,\n\tkeywordstyle=\\color{blve}\\ttfamily,\n\tstringstyle=\\color{red}\\ttfamily,\n\tcommentstyle=\\color{magenta}\\ttfamily,\n\tmorecomment=[l][\\color{magenta}]{\\#},\n\tclassstyle = \\bfseries\\color{gr33n},\n\tbreaklines=true, \n\ttabsize=2\n}\n\\lstset{basicstyle=\\ttfamily}\n\n% Title portion\n\\title{Assignment 3:\\\\ {Ray Tracing with Direct Lighting}} \n\n\\author{Name:\\quad Yang Hongdi  \\\\  student number:\\ 2019533234\n\\\\email:\\quad yanghd@shanghaitech.edu.cn}\n\n% Document starts\n\\begin{document}\n\\maketitle\n\n\\vspace*{2 ex}\n\n\\section{Introduction}\nIn this project, simple Ray tracing with Direct lighting is performed. Phong lighting model is used for radiance calculation. Anti-aliasing is performed with rotated grid pattern. \nSome textures have been added to the Boxes with normal mapping.\n\n\\section{Implementation Details}\n\\subsection{Pin-hole camera model}\nFirst, we construct axes for camera coordinate system by using lookat point and reference up direction.\n\\begin{lstlisting}\nvoid Camera::lookAt(const vec3 &lookAt, const vec3 &refUp) {\n\tthis->forward = (position - lookAt).normalized();\n\tthis->right = refUp.cross(forward).normalized();\n\tthis->up = forward.cross(right).normalized();\n\tthis->halfFovScale = tan(radians(verticalFov * 0.5f));\n}\n\\end{lstlisting}\nThen we can generate ray from camera for a specified pixel.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth,height=2in]{camera.png}\n\t\\caption{world space coordinate calculation}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth,height=4.5in]{transform.png}\n\t\\caption{coordinate transform}\n\\end{figure}\nAs the image shows, we already have the camera coordinate system, by simply transforming the pixel coordinate in raster space to coordinate in screen space, \nthen we can easily compute its position in world space.\\\\\nThen we shoot a ray from the camera position to pixel position, and we set its origin as the pixel position and the direction as $norm(P_{pixel} - P_{camera})$\n\\begin{lstlisting}\nRay Camera::generateRay(Float dx, Float dy) const {\n\tvec3 position;\n\tvec3 dir;\n\n\tFloat width = film.resolution.x();\n\tFloat height = film.resolution.y();\n\tFloat screen_x, screen_y;\n\tscreen_x = (2 * (dx)/width - 1) * film.getAspectRatio() * halfFovScale * focalLength;\n\tscreen_y = (2 * (dy)/height - 1) * halfFovScale * focalLength;\n\tvec3 screenCenter = this->position - focalLength * forward;\n\tposition = screenCenter + screen_x * right + screen_y * up;\n\tdir = (position - this->position).normalized();\n\nreturn Ray{position, dir};\n}\n\\end{lstlisting}\n\n\\subsection{Ray-triangle intersection test}\nTo test if the ray intersects with the triangle, we adopt Moller-Turmbore instersection algorithm.\\\\\nFirst, we know the ray's parametric equation :\n$$P = O + tD$$\nAnd we can denot any point in the triangle as :\n$$P = (1- u - v)A + uB + uC$$\nso we have :\n$$O + tD = (1- u - v)A + uB + uC$$\nby some transformation :\n$$\\left[\\begin{matrix}\n\t-D & (B-A) & 
(C-A)\n\\end{matrix}\\right]\\left[\\begin{matrix}\n\tt \\\\ u \\\\ v\n\\end{matrix}\\right] = O - A$$\nWe denote $T = O - A$, $E_1 = B - A$, and $E_2 = C - A$; applying Cramer's rule and properties of determinants, we get:\n$$\\left[\\begin{matrix}\n\tt \\\\ u \\\\ v\n\\end{matrix}\\right] = \\frac{1}{P \\cdot E_1}\\left[\\begin{matrix}\n\tQ \\cdot E_2 \\\\ P \\cdot T \\\\ Q \\cdot D\n\\end{matrix}\\right]$$\nwhere $P = D \\times E_2$ and $Q = T \\times E_1$. Now we can easily compute $t$, $u$, $v$. Here is the code.\n\\begin{lstlisting}\nbool Triangle::intersect(Interaction &interaction, const Ray &ray) const {\n\tconst vec3 &v0 = mesh->p[v[0]];\n\tconst vec3 &v1 = mesh->p[v[1]];\n\tconst vec3 &v2 = mesh->p[v[2]];\n\t\n\tvec3 E1 = v1 - v0;\n\tvec3 E2 = v2 - v0;\n\tvec3 T = ray.origin - v0;\n\n\tvec3 P = ray.direction.cross(E2);\n\tvec3 Q = T.cross(E1);\n\tFloat det = P.dot(E1);\n\tif (abs(det) < SHADOW_EPS) return false;\n\tvec3 tuv = 1/det * vec3(Q.dot(E2), P.dot(T), Q.dot(ray.direction));\n\tFloat t = tuv.x();\n\tFloat u = tuv.y();\n\tFloat v = tuv.z();\n\t...\n}\n\\end{lstlisting}\nWhen $t \\geq 0$, $u \\geq 0$, $v \\geq 0$, and $1 - u - v \\geq 0$, the ray intersects the triangle.\n\\subsection{Area Light}\nFor an area light, we uniformly sample point lights on it. For every point light, the radiance\nwill be $\\frac{L}{N}$, where $N$ is the number of samples.\n\\subsection{Phong lighting integrator}\nTo calculate the irradiance, we use the Phong lighting model. Basically, we have the following steps:\n\\begin{enumerate}\n\t\\item [1.] For each ray, we check how it intersects with the scene.\n\t\\begin{itemize}\n\t\t\\item If the ray intersects with nothing, return (0,0,0).\n\t\t\\item If the ray intersects with light, return light's color.\n\t\t\\item If the ray intersects with geometry, go to step 2.\n\t\\end{itemize}\n\t\\item [2.] We shoot another ray from the intersection point to one light sample.\n\t\\begin{itemize}\n\t\t\\item If the ray is blocked by other objects, we only add ambient light.\n\t\t\\item If the ray only intersects with the light, we calculate the Phong lighting on that point.\n\t\\end{itemize}\n\t\\item [3.] 
We repeat step 2 for every light sample on the area light.\n\\end{enumerate}\nThen we have the irradiance. Here is the code.\n\\begin{lstlisting}\nvec3 PhongLightingIntegrator::radiance(Scene &scene,\n\t\t\t\t\t\tconst Interaction &interaction,\n\t\t\t\t\t\tconst Ray &ray) const {\n\tif (interaction.type == interaction.NONE)\n\t{\n\t\treturn vec3::Zero();\n\t}\n\telse if (interaction.type == interaction.LIGHT)\n\t{\n\t\treturn scene.getLight()->getColor();\n\t}\n\telse if (interaction.type == interaction.GEOMETRY)\n\t{\n\t\tvec3 ambient = (scene.getAmbientLight().array() * interaction.lightingModel.ambient.array()).matrix();\n\t\tvec3 diffuse = vec3::Zero();\n\t\tvec3 specular = vec3::Zero();\n\t\tfor (LightSamplePair pointLight : scene.getLight()->samples())\n\t\t{\n\t\t\t///Diffuse\n\t\t\tvec3 lightDir = (pointLight.first - interaction.entryPoint).normalized();\n\t\t\tRay lightRay(interaction.entryPoint, lightDir);\n\t\t\tif (!scene.isShadowed(lightRay))\n\t\t\t{\n\t\t\t\t///Diffuse\n\t\t\t\tFloat diff = std::max(interaction.normal.dot(lightDir), 0.0f);\n\t\t\t\tEigen::Array3f temp_diff = diff * pointLight.second.array() * interaction.lightingModel.diffusion.array();\n\t\t\t\tdiffuse += temp_diff.matrix();\n\n\t\t\t\t///Specular\n\t\t\t\tvec3 reflectDir = (2 * lightDir.dot(interaction.normal) * interaction.normal - lightDir).normalized();\n\t\t\t\tFloat spec = (Float)std::pow(std::max(reflectDir.dot(-ray.direction), 0.0f), interaction.lightingModel.shininess);\n\t\t\t\tEigen::Array3f temp_spec =  spec * pointLight.second.array() * interaction.lightingModel.specular.array();\n\t\t\t\tspecular += temp_spec.matrix();\n\t\t\t}\n\t\t}\n\t\treturn ambient + diffuse + specular;\n\t}\n\treturn vec3::Zero();\n}\n\\end{lstlisting}\nTo render the scene, we just apply the function on every pixel.\n\n\\subsection{Anti-aliasing}\nFor anti-aliasing, a rotated grid is used for sampling within each pixel.\\\\\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth,height=2in]{antialiasing.png}\n\t\\caption{Rotated grid}\n\\end{figure}\nTo perform anti-aliasing while keeping render times reasonable, we sample 4 points in each pixel with a grid rotation of $26.6^{\\circ}$.\n\n\\subsection{Texture}\nTexturing with normal mapping is implemented. As in Section 2.2 (ray-triangle intersection test), we have $u, v$. So we simply load the color at pixel\n$((int)(u \\cdot width),\\ (int)(v \\cdot height))$ of the texture map.\n\\begin{lstlisting}\nvec3 Texture::getColor(vec2 uv) const\n{\n\tint x = (int) (uv.x() * width);\n\tint y = (int) (uv.y() * height);\n\tx = (x == width) ? x - 1 : x;\n\ty = (y == height) ? 
y - 1 : y;\n\treturn color_data[x + y * width];\n}\n\\end{lstlisting}\n\n\\section{Results}\n\\begin{figure}[H]\n    \\centering\n    \n    \\subfigure[Scene0 with no Msaa]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_nomsaa.png}\n    %\\caption{original image}\n    \\end{minipage}%\n    } \\quad \\quad\n    \\subfigure[Scene0]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_scene0.png}\n    %\\caption{ground truth}\n    \\end{minipage}%\n    }%\n                    \n    \\subfigure[Scene1]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_scene1.png}\n    %\\caption{depth prediction result}\n    \\end{minipage}\n    }\n\n    \\subfigure[Scene2]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_scene2.png}\n    %\\caption{original image}\n    \\end{minipage}%\n    } \\quad \\quad\n    \\subfigure[Scene0 with texture]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_scene0_texture.png}\n    %\\caption{ground truth}\n    \\end{minipage}%\n    }%\n    \\subfigure[Scene0 with normal mapping]{\n    \\begin{minipage}[t]{0.45\\linewidth}\n    \\centering\n    \\includegraphics[width=1.5in]{output_normal.png}\n    %\\caption{ground truth}\n    \\end{minipage}%\n    }%\n    \\centering\n    \\caption{Result}\n    \\end{figure}\n\\end{document}\n", "meta": {"hexsha": "2ab9cafa445e17d47243da4f3f2fa050f9b11a20", "size": 9225, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignment3/report/report.tex", "max_stars_repo_name": "Young2647/Computer-Graphics", "max_stars_repo_head_hexsha": "d52aafe16d1128dafaf349ac726c40bb6dab8758", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignment3/report/report.tex", "max_issues_repo_name": "Young2647/Computer-Graphics", "max_issues_repo_head_hexsha": "d52aafe16d1128dafaf349ac726c40bb6dab8758", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment3/report/report.tex", "max_forks_repo_name": "Young2647/Computer-Graphics", "max_forks_repo_head_hexsha": "d52aafe16d1128dafaf349ac726c40bb6dab8758", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6804511278, "max_line_length": 180, "alphanum_fraction": 0.7012466125, "num_tokens": 2786, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.7057850216484837, "lm_q1q2_score": 0.5786684420493001}}
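As a cross-check of the intersection routine in Section 2.2, the following is a minimal, standalone sketch of the same Moller--Trumbore test (hypothetical helper code, not part of the renderer; it only assumes numpy):
\begin{lstlisting}[language=Python]
import numpy as np

# Solve O + tD = (1-u-v)A + uB + vC for (t, u, v).
def intersect(O, D, A, B, C, eps=1e-9):
    E1, E2, T = B - A, C - A, O - A
    P = np.cross(D, E2)
    det = P.dot(E1)
    if abs(det) < eps:          # ray nearly parallel to the triangle
        return None
    Q = np.cross(T, E1)
    t, u, v = Q.dot(E2) / det, P.dot(T) / det, Q.dot(D) / det
    if t >= 0 and u >= 0 and v >= 0 and u + v <= 1:
        return t, u, v
    return None

A, B, C = (np.array(p, float) for p in [(0,0,0), (1,0,0), (0,1,0)])
print(intersect(np.array([.2,.2,1.]), np.array([0.,0.,-1.]), A, B, C))
# (1.0, 0.2, 0.2): hit at t = 1 with barycentric coordinates u = v = 0.2
\end{lstlisting}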
{"text": "\\section{Probabilistic Particle Tracing Model}\n\nIn order to analyze the distribution-based flow field by particle tracing, in this section we introduce our probability-based particle tracing model. First, we describe the global modeling of streamlines and how to estimate the trace distributions for a given distribution data. Then local distributions used by the global model are detailed.\n\n\\subsection{Global Modeling}\n\nSingle streamline $L$ originating from position ${x_0}$ can be modeled as $L = \\{ {x_0},{x_1},...,{x_n}\\} = {x_{0:n}}$, where $x_t$ refers to a position in $\\mathrm{R}^d$. As mentioned by Otto et al. in~\\cite{Otto10a, Otto11a}, conventional streamline integration methods such as RK4 are not well defined for uncertain vector fields, since there is no unique vector direction at a location ${x_t}$. Therefore, as with most previous methods~\\cite{Otto10a, Otto11a}, we make use of the Euler integration model in this paper:\n\\begin{equation}\n  {x_{t + 1}} = {x_t} + {v_t}\\Delta t\n\\end{equation}\nwhere ${v_t}$ and $\\Delta t$ refer to the vector direction and the step size at step $t$. Since we focus on generating streamlines from steady vector fields in this work, it is safe to ignore the magnitude of the vectors. And if we set the step size $\\Delta t$ as a constant, we can represent the streamline by a sequence of vector directions ${L = v_{0:n}}$, since the streamline trajectory only depends on the propagation directions $v_{0:n}$. Figure~\\ref{trajectory} depicts an example of the streamline trajectory model.\n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[width=2in]{../figures/trajectory.eps}\n  \\caption{An example streamline generated from a 2D distribution-based vector field is modeled as a sequence of vectors.}\n  \\label{trajectory}\n\\end{figure}\n\nFor the distribution-based vector fields, there is no unique streamline $v_{0:n}$ for a given starting point $x_0$. Let $\\Omega_{x_0}$ be the set of all possible streamlines which originate from $x_0$ given the distribution-based data $\\mathcal{H}$, we then can define a probability density function (pdf) over the path space, which is:\n\\begin{equation}\n  p(v_{0:n}|\\mathcal{H})\n\\end{equation}\nwhere $\\mathcal{H}$ is the set of observations from the distribution-based data along the streamline trajectory. Here, we denote the distribution obtained at the starting point $x_t$ of a vector $v_t$ as $\\lambda_t=\\mathcal{H}(v_t)$. By applying the Bayes theorem, the target distribution $p({v_{0:n}}|{\\lambda_{0:n}})$ can be represented by the prior density $p({v_{0:n}})$ and the conditional observation density $p({\\lambda_{0:n}}|{v_{0:n}})$, as:\n\\begin{equation}\n  p({v_{0:n}}|{\\lambda_{0:n}}) = \\frac{{p({v_{0:n}})p({\\lambda_{0:n}}|{v_{0:n}})}}{{p({\\lambda_{0:n}})}}\n\\end{equation}\nwhere ${p({\\lambda_{0:n}})}$ is a normalizing constant for a fixed data realization, which equals to $\\int {p({v_{0:n}},{\\lambda_{0:n}})} d{v_{0:n}}$.\n\nScientific simulations commonly represent physical phenomena as continuous functions. Thus, the sequence of vector directions $v_{0:n}$ along the particle trace should be correlated. This constraint can be modeled as a conditional prior density $p({v_t}|{v_{0:t - 1}})$. 
In this paper, we assume the sequence $v_{0:n}$ forms a Markov chain, which means the vector direction $v_t$ only depends on the previous direction $v_{t-1}$, but not on $v_{t-2},...,v_0$; so:\n\\begin{equation}\n  p({v_t}|{v_{0:t - 1}}) = p({v_t}|{v_{t - 1}})\n\\end{equation}\nwhere $p({v_t}|{v_{t - 1}})$ denotes the probability density associated with the transition from $v_{t - 1}$ to $v_t$. Hence, the probability density for a given streamline can be formulated as:\n\\begin{equation}\n  p({v_{0:n}}) = p({v_0})\\prod\\limits_{t = 1}^n {p({v_t}|{v_{t - 1}})}\n\\end{equation}\nwhere $p(v_0)$ can be defined by a uniform distribution, since no prior knowledge is assumed.\n\nBy measuring the observations $\\lambda_{0:n}$ along a given streamline $v_{0:n}$, we obtain the conditional observation density $p({\\lambda_{0:n}}|v_{0:n})$, which defines a measure of how the observations match the given path. In other words, the observation density gives how likely the distributions $\\lambda_{0:n}$ are to be observed if the given streamline $v_{0:n}$ actually exists in the flow field. Likewise, we assume that the observation measured at a point does not depend on any previous points in the trace, i.e.:\n\\begin{equation}\n  p(\\lambda_t|v_{0:t}) = p({\\lambda_t}|{v_t})\n\\end{equation}\nwhich defines the likelihood density:\n\\begin{equation}\n  p({\\lambda_{0:n}}|{v_{0:n}}) = \\prod\\limits_{t = 0}^n {p({\\lambda_t}|{v_t})}\n\\end{equation}\n\nBy substituting (5) and (7) into (3), the posterior density $p({v_{0:n}}|{\\lambda_{0:n}})$ can be expanded as:\n\\begin{equation}\n  p({v_{0:n}}|{\\lambda_{0:n}}) = \\frac{{p({v_0})\\prod\\limits_{t = 1}^n {p({v_t}|{v_{t - 1}})} \\prod\\limits_{t = 0}^n {p({\\lambda_t}|{v_t})} }}{{p({\\lambda_{0:n}})}}\n\\end{equation}\n\n\\subsection{Local Modeling}\n\nBased on equation (8), the key components of the posterior density $p({v_{0:n}}|{\\lambda_{0:n}})$ are the prior density ${p({v_t}|{v_{t - 1}})}$ and the observation density ${p({\\lambda_t}|{v_t})}$. In this section, we will elaborate how to model and estimate these two local densities in detail.\n\n\\subsubsection{Prior Density}\n\nThe prior density is used to characterize the correlation between two adjacent vector directions. In this work, we use a prior density that prefers to continue in the previous direction and assigns decreasing probability to sharper turns. Following Zhang et al.~\\cite{Zhang20095}, we select the von Mises-Fisher distribution~\\cite{fisher} as the prior density due to its mathematical simplicity and tractability. Some other common choices in the literature are the Watson distribution and the Kent distribution.\n\nFor a random $d$-dimensional unit vector $v$ on the $(d-1)$-dimensional sphere, with respect to the mean direction $\\mu$ and the concentration parameter $\\kappa$, the probability density function of the von Mises-Fisher distribution is given by\n\\begin{equation}\n  f_{d}(v| \\mu, \\kappa)=C_{d}(\\kappa)\\exp \\left( {\\kappa \\mu^T v } \\right)\n\\end{equation}\nwhere the normalization constant $C_{d}(\\kappa)\\,$ is\n\\begin{equation}\n  C_{d}(\\kappa)=\\frac {\\kappa^{d/2-1}} {(2\\pi)^{d/2}I_{d/2-1}(\\kappa)} \\,\n\\end{equation}\nin which $I_{d/2-1}$ denotes the modified Bessel function of the first kind and order $d/2-1$.\n\nThe parameter $\\kappa \\ge 0$ controls the concentration of the distribution around the mean direction $\\mu$. The distribution with higher concentration will have a greater $\\kappa$ value. 
For example, the distribution is focused on a point of the sphere defined by $\\mu$ for $\\kappa  = \\infty$, and is uniformly distributed on the sphere for $\\kappa=0\\,$. Figure~\\ref{fisher} gives examples of points sampled from 2-dimensional von Mises-Fisher distributions with different $\\kappa$.\n\n\\begin{figure}[htb!f]\n  \\centering\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.9in]{../figures/vf_100.eps}\n    \\caption{$\\kappa=100$}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.9in]{../figures/vf_10.eps}\n    \\caption{$\\kappa=10$}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.16\\textwidth}\n    \\centering\n    \\includegraphics[width=0.9in]{../figures/vf_1.eps}\n    \\caption{$\\kappa=1$}\n  \\end{subfigure}\n  \\caption{Points sampled from three von Mises-Fisher distributions on $2$-dimensional spheres with different values of $\\kappa$. The mean directions are shown as arrows.}\n  \\label{fisher}\n\\end{figure}\n\nIn this work, the mean direction $\\mu$ of the prior density is given by the previous vector direction ${v_{t - 1}}$. The concentration parameter $\\kappa$ is set manually to a constant value. Thus, the prior density $p({v_t}|{v_{t - 1}})$ is defined by the von Mises-Fisher distribution with the mean direction ${v_{t - 1}}$ and the concentration parameter $\\kappa$:\n\\begin{equation}\n  p({v_t}|{v_{t - 1}}) = {f_d}({v_t}|{v_{t - 1}}, \\kappa)\n\\end{equation}\n\n\\subsubsection{Observation Density}\n\nThe distribution-based vector field contains uncertainty, which is encoded in the local distributions. We make use of the observation density to characterize this uncertainty; it defines a likelihood function that measures how well the observations match the current prior model. At step $t$, for a given vector direction $v_t$, the observation density $p({\\lambda_t}|{v_t})$ is a conditional density representing how likely the observation $\\lambda_t$ measured at position $x_t$ is, given the vector $v_t$. According to the observation $\\lambda_t$, the observation density $p({\\lambda_t}|{v_t})$ is estimated based on the method presented by Friman et al. in~\\cite{frimanTMI06}. We first model the observation matching perfectly to the current state $v_t$ as a distribution $\\mu_t$ where the probability of $v_t$ equals $1$. We then treat $\\lambda_t$ as a histogram and let the probability $\\lambda_t(i)$ of bin $i$ be an uncertain observation of $\\mu_t(i)$, i.e. $\\log{\\mu_t(i)}=\\log{\\lambda_t(i)}+\\epsilon$. The noise $\\epsilon$ can be modeled as an additive Gaussian distribution~\\cite{Basser1994,Salvador05}, such as $\\epsilon \\sim N(0,{\\sigma ^2}/{\\mu _t}{(i)^2})$. 
Then the observation density, or the likelihood, can be written as\n\\begin{equation}\n  p({\\lambda_t}|{v_t}) = \\prod\\limits_{i = 0}^N {\\frac{{{\\mu _t}(i)}}{{\\sqrt {2\\pi {\\sigma ^2}} }}} {e^{ - \\frac{{{\\mu _t}{{(i)}^2}}}{{{2\\sigma ^2}}}{{(\\log{{\\lambda _t}(i)} - \\log{{\\mu _t}(i)})}^2}}}\n\\end{equation}\n", "meta": {"hexsha": "2032817665cd83d4b79d847c25e10c0080d95a08", "size": 9623, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/pvis2016/draft-method-mod.tex", "max_stars_repo_name": "hewenbin/pspf", "max_stars_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/pvis2016/draft-method-mod.tex", "max_issues_repo_name": "hewenbin/pspf", "max_issues_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/pvis2016/draft-method-mod.tex", "max_forks_repo_name": "hewenbin/pspf", "max_forks_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 90.7830188679, "max_line_length": 1252, "alphanum_fraction": 0.7225397485, "num_tokens": 2791, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933271118221, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5786684296250647}}
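As a concrete illustration of the global model, the following is a minimal sketch (not the authors' implementation) of sampling one streamline realization by the Euler step of Eq. (1) with directions drawn from the prior of Eq. (11). It is specialized to $d=2$, where the von Mises-Fisher density reduces to the circular von Mises distribution, and it samples from the prior only; weighting each sampled direction by the observation density of Eq. (12) would turn it into an importance-sampling estimate of the trace distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def euler_step(x, theta_prev, kappa=10.0, dt=0.1):
    # draw a direction concentrated around the previous one (vMF, d=2)
    theta = rng.vonmises(mu=theta_prev, kappa=kappa)
    v = np.array([np.cos(theta), np.sin(theta)])
    return x + dt * v, theta          # x_{t+1} = x_t + v_t * dt

x, theta = np.zeros(2), 0.0
for _ in range(100):
    x, theta = euler_step(x, theta)
print(x)   # endpoint of one sampled trajectory v_{0:n}
\end{verbatim}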
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CHAP2.TEX\t\t\t\t\t\t    July 1990      %\n%                                                                          %\n% This file is part of the AMS-LaTeX Version 1.0 distribution              %\n%   American Mathematical Society, Technical Support Group,                %\n%   P. O. Box 6248, Providence, RI 02940                                   %\n%   800-321-4AMS (321-4267) or 401-455-4080                                %\n%   Internet: Tech-Support@Math.AMS.com                                    %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Keldysh Pencils}\n\\section{Holomorphic operator-valued functions}\n\nThe main object of study in this book is polynomial operator pencils\n(operator polynomials). However, it is more convenient for us to give\ncertain definitions and results for the more general case of\nholomorphic operator-valued functions.\n\nLet $U$ be a domain in $\\bold C$, $\\cal{B}$ be a complex Banach space,\nand $f(\\lambda)$ a $\\cal{B}$-valued function defined in $U$. Such a\nfunction $f(\\lambda)$ is said to be {\\it strongly \\rom(weakly\\rom)\nholomorphic} in $U$ if for any $\\lambda_0\\in U$ the strong (weak)\nlimit\n\\begin{equation}\n\\lim_{\\lambda\\to\\lambda_0}\\frac{f(\\lambda)-f(\\lambda_0)}{\\lambda-\\lambda_0}\\\n(=f'(\\lambda_0))\n\\end{equation}\nexists.\n\nObviously,  $f(\\lambda)$ is weakly holomorphic if and only if any function\n$\\psi(f(\\lambda))$ is holomorphic, where $\\psi\\in\\cal{B}^*$.\n\nAn operator-valued function $A(\\lambda)$ $(\\lambda\\in U)$ with values in\n$L(\\cal H)$ is said to be {\\it uniformly \\rom(strongly, weakly\\rom) holomorphic}\n if for any $\\lambda_0\\in U$ the uniform (strong, weak) limit\n\\begin{equation}\n\\lim_{\\lambda\\to\\lambda_0}\\frac{A(\\lambda)-A(\\lambda_0)}{\\lambda-\\lambda_0}\n\\ (=A'(\\lambda_0))\n\\end{equation}\nexists.\n\n\\endinput\n", "meta": {"hexsha": "1959adced9eda7c4e246c74539ba7ece7b49e5cf", "size": 1881, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sys/lib/tex/macros/doc/ams/chap2.tex", "max_stars_repo_name": "henesy/plan9-1e", "max_stars_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sys/lib/tex/macros/doc/ams/chap2.tex", "max_issues_repo_name": "henesy/plan9-1e", "max_issues_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sys/lib/tex/macros/doc/ams/chap2.tex", "max_forks_repo_name": "henesy/plan9-1e", "max_forks_repo_head_hexsha": "47575dc4a4638a1ee0d9eed78d88a9f1720a4430", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.7857142857, "max_line_length": 80, "alphanum_fraction": 0.5757575758, "num_tokens": 498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519527906914787, "lm_q2_score": 0.6791786926816161, "lm_q1q2_score": 0.578628182608293}}
{"text": "\n\n%%%%%%%%%%%%%%%%%%%%%%\n\\section{Geometry of Level Sets}\n%\\section{Quantifying Nonconvexity}\n\\label{sec:QuanNoncon}\n\n\\subsection{The Greedy Algorithm}\n\\label{sec:GreedyAlg}\n%%%%%%%%%%%%%%%%%%%%%%\n \n The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be ``easy'' to connect two equally powerful models---i.e., two models with $F_o{\\theta^{A,B}} \\leq \\lambda$.  A sensible measure of this ease-of-connectedness is the normalized length of the geodesic connecting one model to the other: $|\\gamma_{A,B}(t)| / |\\theta_A - \\theta_B|$.  This length represents approximately how far of an excursion one must make in the space of models relative to the euclidean distance between a pair of models.  Thus, convex models have a geodesic length of $1$, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than $1$.\n \n Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling.  We comment on alternative algorithms in Appendix \\ref{sec:ConstrainedAlg}.\n \n For a pair of models with network parameters $\\theta_i$, $\\theta_j$, each with $F_e(\\theta)$ below a threshold $L_0$, we aim to efficienly generate paths in the space of weights where the empirical loss along the path remains below $L_0$.  These paths are continuous curves belonging to $\\Omega_F(\\lambda)$--that is, the level sets of the loss function of interest.\n\n\\begin{algorithm}\n\\caption{Greedy Dynamic String Sampling}\\label{euclid}\n\\begin{algorithmic}[1]\n{\\scriptsize \n\\State $\\text{$L_0$} \\gets \\text{Threshold below which path will be found}$\n\\State $\\text{$\\Phi_1$} \\gets \\text{randomly initialize } $$\\theta_1$$ \\text{, train } $$\\Phi (x_i\\;\\theta_1)$$ \\text{ to $L_0$}$\n\\State $\\text{$\\Phi_2$} \\gets \\text{randomly initialize } $$\\theta_2$$ \\text{, train } $$\\Phi (x_i\\;\\theta_2)$$ \\text{ to $L_0$}$\n\n\\State $\\text{BeadList} \\gets $$(\\Phi_1,\\Phi_2)$\n\\State $\\text{Depth} \\gets 0$ \n\n\\Procedure{FindConnection}{$\\Phi_1,\\Phi_2$}\n\\State $\\text{$t^*$} \\gets \\text{t such that } $$\\frac{d \\gamma(\\theta_1, \\theta_2, t)}{dt} \\bigg|_{t} = 0$$  \\text{ OR } $$t = 0.5$$ $\n\\State $\\text{$\\Phi_3$} \\gets \\text{train } $$\\Phi(x_i; t^*\\theta_1 + (1-t^*)\\theta_2)$$ \\text{ to $L_0$}$\n\\State $\\text{BeadList} \\gets \\text{insert}$$(\\Phi_3$$\\text{, after } $$\\Phi_1$$\\text{, BeadList)}$\n\\State $\\text{$MaxError_1$} \\gets \\text{$max_t$}$$(F_e(t\\theta_3 + (1-t)\\theta_1))$$ $\n\\State $\\text{$MaxError_2$} \\gets \\text{$max_t$}$$(F_e(t\\theta_2 + (1-t)\\theta_3))$$ $\n\\If {$\\text{$MaxError_1$} > \\text{$L_0$ }} \\text{ }\\Return \\text{ FindConnection}$$(\\Phi_1,\\Phi_3)$$ $\n\\EndIf\n\\If {$\\text{$MaxError_2$} > \\text{$L_0$ }} \\text{ }\\Return \\text{ FindConnection}$$(\\Phi_3,\\Phi_2)$$ $\n\\EndIf\n\\State $\\text{Depth} \\gets \\text{Depth$+1$}$ \n\\EndProcedure }\n\\end{algorithmic}\n\\end{algorithm}\n \n \n  The algorithm recursively builds a string of models in the space of weights which continuously connect $\\theta_i$ to $\\theta_j$.  Models are added and trained until the pairwise linearly interpolated loss, i.e. $\\rm{max}_t F_e(t\\theta_i\\ +\\ (1-t)\\theta_j)$ for $t\\in(0,1)$, is below the threshold, $L_0$, for every pair of neighboring models on the string.  
We provide a cartoon of the algorithm in Appendix \\ref{AlgCartoon}.\n \n  \n  \\subsection{Failure Conditions and Practicalities}\n  \\label{sec:Fail}\n  \n  While the algorithm presented will faithfully certify that two models are connected if it converges, it is worth emphasizing that it does not guarantee that two models are disconnected if it fails to converge.  In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm.  Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore.  We comment on diagnosing disconnections more carefully in Appendix \\ref{sec:disconnect}.\n  \n  Further, if the $\\rm{\\mathbf{MaxError}}$ exceeds $L_0$ for every new recursive branch as the algorithm progresses, the worst case runtime scales as $O(\\rm{exp}(\\rm{\\mathbf{Depth}}))$.  Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to $O(\\rm{poly}(\\rm{\\mathbf{Depth}}))$---at least up until a critical value of $L_0$.\n  \n  To aid convergence, either of the choices in line $7$ of the algorithm works in practice---choosing $t^*$ at a local maximum can provide a modest improvement in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy.  $t^*=.5$ is more stable, but slower.  Finally, we find that training $\\Phi_3$ to $\\alpha L_0$ for $\\alpha < 1$ in line $8$ of the algorithm tends to aid convergence without noticeably impacting our numerics.  We provide further implementation details in \\ref{sec:NumExp}.\n \n", "meta": {"hexsha": "9a8e7a09528f8dda978ace658b712bb22641a25b", "size": 5241, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Writeup/iclr/geometry.tex", "max_stars_repo_name": "danielfreeman11/convex-nets", "max_stars_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-08-09T00:48:46.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-03T09:04:59.000Z", "max_issues_repo_path": "Writeup/iclr/geometry.tex", "max_issues_repo_name": "danielfreeman11/convex-nets", "max_issues_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Writeup/iclr/geometry.tex", "max_forks_repo_name": "danielfreeman11/convex-nets", "max_forks_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.9473684211, "max_line_length": 767, "alphanum_fraction": 0.7267697004, "num_tokens": 1480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289836, "lm_q2_score": 0.760650658103136, "lm_q1q2_score": 0.5785894277999084}}
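The recursion at the heart of the greedy algorithm can be summarized in a few lines. The following is a sketch only: \texttt{train\_to} (train a model from an interpolated initialization until its loss is below $L_0$) and \texttt{max\_interp\_loss} (the maximum of $F_e$ along the linear interpolant between two parameter vectors) are hypothetical stand-ins for the corresponding steps of the algorithm, and this version fixes $t^*=0.5$:
\begin{verbatim}
def find_connection(th1, th2, L0, train_to, max_interp_loss):
    """Return intermediate 'beads' connecting th1 to th2 below loss L0."""
    th3 = train_to(0.5 * (th1 + th2), L0)   # new bead at the midpoint
    left, right = [], []
    if max_interp_loss(th1, th3) > L0:      # left segment still too lossy
        left = find_connection(th1, th3, L0, train_to, max_interp_loss)
    if max_interp_loss(th3, th2) > L0:      # right segment still too lossy
        right = find_connection(th3, th2, L0, train_to, max_interp_loss)
    return left + [th3] + right

# The certified path is then [th1] + find_connection(...) + [th2].
\end{verbatim}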
{"text": "\n\\section{Formulation and solver overview\\label{sec:formulation-bloodflow}}\n\n%\\subsection{notation}\n\\subsection{Problem summary}\n\n\nWe simulate the flow of $N$ cells with deformable boundary surfaces\n$\\gamma_i$, $i=1,\\ldots,N$ in a viscous Newtonian fluid in a domain\n$\\Omega\\subset\\mathbb R^3$ with a fixed boundary $\\Gamma$. The governing\npartial differential equations (PDEs) describing the conservation of\nmomentum and mass are the incompressible Stokes equations for the\nvelocity $\\vu$ and pressure $p$, combined with velocity boundary\nconditions on $\\Gamma$.  Additionally, we model cell membranes as massless, so the\nvelocity $\\vXd_t$ of\nthe points on the cell surface  coincides with the flow velocity:\n%\n\\begin{align}\n  -\\mu \\Delta \\vu(\\vx) + \\nabla p(\\vx) = \\vF(\\vx) \\quad \\mathrm{and}\\quad \\nabla \\cdot \\vu(\\vx) = 0, \\quad \\vx \\in \\Omega, \n  \\label{eq:stokes_diff1} \\\\\n  \\vu(\\vx) = \\vector g(\\vx), \\quad \\vx \\in \\Gamma,\n  \\label{eq:stokes_diff2}  \\\\\n  \\vXd_t = \\vu(\\vXd), \\quad \\vX \\in \\gamma_i(t),\n  \\label{eq:stokes_diff3}  \n\\end{align}\nwhere $\\mu$ is the viscosity of the ambient fluid; in our simulations, we use a simplified model with\nthe viscosity of the fluid inside the cells also being $\\mu$ although\nour code supports arbitrary viscosity contrast.  The right-hand side\nforce in the momentum equation is due to the sum of tension and\nbending forces $\\vfd = \\vfd_\\sigma + \\vfd_b$; it is concentrated on\nthe cell surfaces. We assume that cell surfaces are  inextensible,\nwith bending forces determined by the Canham-Helfrich model \\cite{canham1970minimum, helfrich1973elastic},\n based on the surface curvature, and surface tension determined by the surface\nincompressibility condition $\\nabla_{\\gamma_i} \\cdot\\vu = 0$ resulting in\n\\[\n\\vF(\\vx) = \\sum_i \\int_{\\gamma_i} \\vfd(\\vy) \\delta(\\vx - \\vy)d\\vy \n\\]\n(see, e.g., \\cite{rahimian2015} for the expressions for $\\vfd$).\nExcept on inflow and outflow regions of the vascular network, the boundary condition $\\vector g$ is zero, modeling no-slip boundary condition on blood vessel walls.\n%We additionally assume  no-slip boundary condition on all $\\gamma_i$.\n\n\\subsubsection{Boundary integral formulation}\nTo enforce the boundary conditions on $\\Gamma$, we use the standard approach of computing\n$\\vu$ as the sum of the solution $\\vu^{\\text{fr}}$ of the free-space equation  \n\\cref{eq:stokes_diff1} without boundary conditions but with non-zero right-hand side $\\vF(\\vx)$,\nand  the second term  $\\vu^{\\Gamma}$ obtained by solving the\nhomogeneous equation with boundary conditions on $\\Gamma$ given by\n$\\vector g-\\vu^{\\text{fr}}$. \n\nFollowing the approach of\n\\cite{Power1987,Poz92,lu2018parallel,nazockdast2015b}, we reformulate\n\\cref{eq:stokes_diff1,eq:stokes_diff2} in the integral form. 
The\nfree-space solution $\\vu^{\\text{fr}}$ can be written\ndirectly as the sum of the single-layer Stokes potentials $\\vu^{\\gamma_i}$:\n\n\\begin{equation}\n  \\vu^{\\gamma_i}(\\vx) = (S_i\\vfd)(\\vx) = \\int_{\\gamma_i} S(\\vx,\\vy)\n  \\vfd(\\vy) d\\vy, \\quad \\vx \\in \\Omega,\n\\label{eq:sl_stokes}\n\\end{equation}\nwhere $S(\\vx,\\vy) = \\frac{1}{8\\pi\\mu}\\left(\\frac{I}{|\\vr|} + \\frac{\\vr \\otimes \\vr}{|\\vr|^3}\\right)$ for viscosity $\\mu$ and $\\vr = \\vx - \\vy$.\n\n\nTo obtain $\\vu^\\Gamma$, we reformulate the homogeneous volumetric PDE with nonzero boundary conditions\nas a boundary integral equation for an unknown double-layer density $\\phi$ defined on the domain boundary $\\Gamma$: \n\\begin{equation}\n  \\left(\\frac{1}{2}I + D + N \\right)\\phi = \\tilde{D}_\\Gamma\\phi =  \\vector g-\\vu^{\\text{fr}}, \\quad \\vx \\in \\Gamma,\n  \\label{eq:double_layer_int_eq}\\\\\n\\end{equation}\nwhere the double-layer operator is $D\\phi(\\vx) = \\int_\\Gamma D(\\vx,\\vy) \\phi(\\vy) d\\vy$ with double-layer Stokes kernel $D(\\vx,\\vy) = \\frac{6}{8\\pi}\\frac{(\\vr\\cdot\\vn)\\,(\\vr \\otimes \\vr)}{|\\vr|^5}$ for outward normal $\\vn=\\vn(\\vy)$.\nThe null-space operator needed to make the equations full-rank is defined as\n$(N\\phi)(\\vx) = \\int_\\Gamma (\\vn(\\vx) \\cdot \\phi(\\vy))\\vn(\\vy) d\\vy$ (cf.\\ \\cite{lu2017}).\nThe favorable eigenspectrum of the integral operator in \\cref{eq:double_layer_int_eq} is well-known and allows \\gmres to rapidly converge to a solution.\nOne of the key differences between this work and previous free-space large-scale simulations\nis the need to solve this equation in a scalable way due to differing formulations \\cite{grinberg2011new,balogh2017direct}. \nOnce the density $\\phi$ is computed,\nthe velocity correction $\\vu^\\Gamma$ is evaluated directly as $\\vu^{\\Gamma} = D\\phi$.\n\n\n\n%= \\int_\\Gamma D(\\vx,\\vy) \\phi(\\vy) d\\vy_{\\Gamma}, \\quad \\vx \\in \\Omega\n% \\label{eq:double_layer_int}\n%]\n%$If we can solve \\cref{eq:double_layer_int_eq} for $\\phi$, we have an\n%analytic expression for $\\vu^{\\Gamma}(\\vx)$.\n\nThe equation for the total velocity $\\vu(\\vx)$ at any point $\\vx \\in \\Omega$ is then given by\n\\begin{equation}\n  \\vu = \\vu^{\\text{fr}} +\\vu^\\Gamma =  \\sum_{i=1}^N \\vu^{\\gamma_i} + \\vu^\\Gamma.\n  \\label{eq:velocity_combined}\n\\end{equation}\nIn particular, this determines the update equation for the boundary\npoints of cells; see \\cref{eq:stokes_diff3}.\n\n%where $\\vu^{\\gamma_i}$ is the velocity due to $\\gamma_i$ and\n%$\\vu^{\\Gamma}$ is the velocity due to the domain boundary $\\Gamma$.\n%The velocity $\\vu^{\\gamma_i}$ can be written as a sum of single- and double-layer integrals over $\\gamma_i$, as detailed in \\cite{Veerapaneni2011}.\n\n\n\\textbf{Contact formulation \\label{sec:contact-vol}. }\nIn theory, contacts between surfaces are prevented by the rapidly increasing fluid forces as surfaces approach each other. However, resolving these forces accurately may require prohibitively fine sampling of surfaces and very small time steps, making large-scale simulations in space and time impractical. At the same time, as shown in \\cite{lu2017}, interpenetration of surfaces results in a catastrophic loss of accuracy due to singularities in the integrals. \n\nTo guarantee that our discretized cells remain interference-free,\nwe augment \\linebreak \\cref{eq:stokes_diff1,eq:stokes_diff2} with an explicit\ninequality constraint preventing collisions.  
We define a vector\nfunction $V(t)$ with components becoming strictly negative if any cell\nsurfaces intersect each other, or intersect with the vessel boundaries\n$\\Gamma$.  More specifically, we use the \\emph{space-time interference volumes} introduced in \\cite{Harmon2011} and applied to 3D cell flows in \\cite{lu2018parallel}.\nEach component of $V$ corresponds to a single connected overlap.\nThe interference-free constraint at time $t$ is then simply $V(t) \\geq 0$. \n\nFor this constraint to be satisfied, the forces $\\vfd$ are augmented by an artificial collision\nforce, i.e.,  $\\vfd = \\vfd_b + \\vfd_\\sigma + \\vfd_c$, $\\vfd_c =\n\\nabla_u V^T \\lambda$, where $\\lambda$ is the vector of Lagrange\nmultipliers, which is determined by the additional\n\\emph{complementarity} conditions:\n\\begin{equation}\n  \\lambda(t) \\geq 0, \\quad V(t) \\geq 0, \\quad \\lambda(t) \\cdot V(t) = 0,\n%  0 \\leq \\lambda \\quad \\bot \\quad V(t) \\geq 0,\n  \\label{eq:complementarity_prob}\n\\end{equation}\n at time $t$, where all inequalities are to be understood component-wise.\n\n To summarize, the system that we solve at every time step can be\n formulated as follows, where we separate equations for different\n cells and global and local parts of the right-hand side, as it is\n important for our time discretization:\n%\n%\n\\begin{align}\n  &\\vX_t =  \\left(\\sum_{j\\neq i} S_j \\vfd_j  + D \\phi\\right) +  S_i \\vfd_i, \\quad \\mbox{for points on $\\gamma_i$},\n  \\label{eq:constrained-all1}\\\\\n  &\\nabla_{\\gamma_i} \\cdot \\vX_t = 0,\\quad    \\vfd_j = \\vfd(\\vX_j,\\sigma_j,\\lambda),\n  \\label{eq:constrained-all2} \\\\\n  &B_\\Gamma \\phi =  \\vector g-\\sum_{j} S_j \\vfd_j, \\quad \\mbox{for points on $\\Gamma$},\n  \\label{eq:constrained-all3} \\\\\n   &\\lambda(t) \\geq 0, \\quad V(t) \\geq 0, \\quad \\lambda(t) \\cdot V(t) = 0.\n  \\label{eq:constrained-all4}\n\\end{align}\n\n At every time step, \\eqref{eq:constrained-all4} results in \ncoupling of all close $\\gamma_i$'s, which requires a non-local computation. \nWe follow the approach detailed in \\cite{lu2018parallel, lu2017} to define and solve\nthe \\textit{nonlinear complementarity problem} (\\ncp) arising from cell-cell\ninteractions in parallel, and extend it to prevent intersection of cells\nwith the domain boundary $\\Gamma$, as detailed in \\cref{sec:parallel-contact}.\n\n\n\\subsection{Algorithm Overview\\label{sec:alg_overview}}\n\nNext, we summarize the algorithmic steps used to solve the constrained\nintegral equations needed to compute cell surface positions and fluid velocities\nat each time step.  In the subsequent sections, we detail the\nparallel algorithms we developed to obtain good weak and strong scalability, as shown\nin \\cref{sec:results}.\n\n\n\\textbf{Overall Discretization. } \n\\rbc surfaces are discretized using a spherical harmonic\nrepresentation, with surfaces sampled uniformly in the standard\nlatitude-longitude sphere parametrization. The blood vessel surfaces $\\Gamma$ are\ndiscretized using a collection of high-order tensor-product polynomial\npatches, each sampled at Clenshaw-Curtis quadrature points. 
The\nspace-time interference volume function $V(t)$ is computed using a\npiecewise-linear approximation as described in \\cite{lu2018parallel}.\nFor time discretization, we use a locally-implicit first-order\ntime-stepping scheme (higher-order time stepping can be easily incorporated).\nInteractions between \\rbcs and the blood vessel surfaces are computed\n\\textit{explicitly}, while the self-interaction of a single \\rbc is\ncomputed \\textit{implicitly}.\n\nThe state of the system at every time step is given by a triple of distributed\nvectors $(\\vX,\\sigma, \\lambda)$. The first two (cell surface positions and tensions)\nare defined at the discretization points of cells. The vector $\\lambda$ has variable\nlength and corresponds to connected components of collision volumes. \nWe use the subscript $i$ to denote the subvectors corresponding to the $i$-th cell.\n$\\vX$ and $\\sigma$ are solved as a single system that includes the incompressibility\nconstraint \\cref{eq:constrained-all2}.\nTo simplify exposition, we omit $\\sigma$ in our algorithm summary,  which corresponds to\ndropping $\\vfd_\\sigma$  in the Stokes equation, and dropping the surface incompressibility\nconstraint equation. \n\n\\textbf{Algorithm summary. }\n%Let $\\vX_i$ and $\\vfd_i'$ be the  positions and total forces associated\n%with the $\\gamma_i$ computed at the current time step (note at $t=0$, $\\vf_c=0$)\nAt each step $t$, we compute the new positions $\\vX^+_i$ and collision Lagrange multipliers\n$\\lambda^+$ at time $t^+=t+\\Delta t$.  We assume that in the initial configuration there are no collisions,\nso the Lagrange multiplier vector $\\lambda$ is zero.  Discretizing in\ntime,\n\\cref{eq:constrained-all1} becomes\n%\n\\[ \\vX^+_i =  \\vX_i + \\Delta t\\left(\\sum_{j\\neq i} S_j\n\\vfd_j(\\vX_j,\\lambda)  + D \\phi(\\vX_j, \\lambda)\\right) +  \\Delta t S_i \\vfd_i(\\vX^+_i, \\lambda^+).\n\\]\n\n%To compute $\\vu(\\vx_i)$, the fluid velocity on\n%$\\gamma_i$, we perform the following steps:\n\nAt each single time step, we perform the following steps to obtain $(\\vX^+, \\lambda^+)$ from $(\\vX, \\lambda)$. Below, evaluation of integrals implies using appropriate (smooth, near-singular or singular) quadrature rules on cell or blood vessel surfaces. \n\n\\begin{enumerate}\n\n\\item\n  Compute the explicit part $\\vb$ of the position update (first term in \\cref{eq:constrained-all1}).\n  \\begin{enumerate}\n  \\item \\label{step:rbc_velocity}\n    Evaluate $\\vu^\\lbl{fr}$ from $(\\vX, \\lambda)$  on $\\Gamma$  with\n    \\cref{eq:sl_stokes}.\n  \\item\\label{step:boundary_solve} Solve \\cref{eq:double_layer_int_eq} for the unknown density $\\phi$ on $\\Gamma$ using GMRES.\n  \\item\\label{step:boundary_evaluate} For each cell, evaluate  $\\vu^\\Gamma_i = D\\phi$ at all cell points $\\vX_i$.\n  \\item\\label{step:inter_rbc_evaluate} For each cell $i$, compute the contributions of\n    other cells to $\\vX_i^+$:  $\\vb^c_i = \\vu^\\lbl{fr} - \\vu^{\\gamma_i} = \\sum_{j\\neq i}S_j\\vfd_j$. 
\n  \\item Set $\\vb_i = \\vu^\\Gamma_i + \\vb^c_i$.\n  \\end{enumerate}\n  \\item \\label{step:solve_ncp} Perform the implicit part of the\n    update: solve the \\ncp obtained by treating the second\n    (self-interaction) term in \\cref{eq:constrained-all1} while\n    enforcing the complementarity constraints \\cref{eq:complementarity_prob}, i.e., solve\n    \\begin{align}\n      \\vX^+_i = \\vX_i + \\Delta t (\\vb_i + S_i \\vf_i(\\vX_i^+,\\lambda^+)),\\label{eq:ncp1}\\\\\n      \\lambda(t^+) \\geq 0, \\quad V(t^+) \\geq 0, \\quad \\lambda(t^+) \\cdot V(t^+) = 0.\\label{eq:ncp2}\n    \\end{align}\n\\end{enumerate}\n\n\\cref{step:rbc_velocity,step:boundary_solve,step:boundary_evaluate,step:inter_rbc_evaluate}\nall require evaluation of global integrals, computed as sums over quadrature points;\nwe compute these sums in parallel with \\pvfmm. In particular,\n\\cref{step:boundary_solve} uses \\pvfmm as a part of each matrix-vector product in the\nGMRES iteration. These matrix-vector products,\nas well as \\cref{step:rbc_velocity,step:inter_rbc_evaluate,step:boundary_evaluate},\nrequire near-singular integration to compute the velocity accurately\nnear \\rbc and blood vessel surfaces; this requires parallel\ncommunication to find non-local evaluation points.\nDetails of these computations are discussed\nin \\cref{sec:solver}.\n\nThe \\ncp is solved using a sequence of \\textit{linear complementarity problems} (\\lcp\\/s). Algorithmically, this requires parallel searches of collision candidate pairs and the repeated application of the distributed \\lcp matrix to distributed\nvectors. Details of these computations are provided in \\cref{sec:parallel-contact}.\n\n\\textbf{Other parallel quadrature methods. }\n%\\note[MJM]{this is about vesicle quadrature schemes, but it was in the boundary solver section. Reviewer 3 was confused by this, moved here since there is no other discussion of parallel vesicles}\nVarious other parallel algorithms are\nleveraged to perform boundary integrals for the vessel geometry and \\rbcs.\n%\nTo compute $\\vu^{\\gamma_i}(\\vX)$ for $\\vX \\in \\gamma_i$, the schemes\npresented in \\cite{Veerapaneni2011} are used to\nachieve spectral convergence for single-layer potentials\nby performing a spherical harmonic rotation and applying the quadrature rule\nof \\cite{graham2002fully}.  We use the improved algorithm in\n\\cite{Malhotra2017} to precompute the singular integration operator\nand substantially improve overall complexity.\nTo compute $\\vu^{\\gamma_i}(\\vX)$ for $\\vX$ close to, but not on $\\gamma_i$, we follow the\napproaches of \\cite{sorgentone2018highly, Malhotra2017}, which use a variation of the high-order near-singular\nevaluation scheme of \\cite{Ying2006}.  
Rather than extrapolating the\nvelocity from nearby check points as in \\cref{sec:solver}, we\nuse \\cite{Veerapaneni2011} to\ncompute the velocity on the surface, use upsampled quadrature on $\\gamma_i$ to compute\nthe velocity at check points, and interpolate the velocity between them to the desired\nlocation.\nWe mention these schemes for the sake of completeness; they are not the primary contribution of this work, but are critical components of the overall simulation.\n\n", "meta": {"hexsha": "1b43bcc6e24a05834120463b7a3f007d92d635d0", "size": 15067, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bloodflow/formulation.tex", "max_stars_repo_name": "mmorse1217/nyu-thesis-template", "max_stars_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bloodflow/formulation.tex", "max_issues_repo_name": "mmorse1217/nyu-thesis-template", "max_issues_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bloodflow/formulation.tex", "max_forks_repo_name": "mmorse1217/nyu-thesis-template", "max_forks_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.5076335878, "max_line_length": 471, "alphanum_fraction": 0.7474613394, "num_tokens": 4286, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772417253256, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5785717215578581}}
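To make the quadrature-based evaluation concrete, here is a minimal sketch (not the paper's code, which uses \pvfmm and the corrected quadratures above) of evaluating the single-layer velocity $\vu^{\gamma_i}(\vx)=(S_i\vfd)(\vx)$ by plain smooth quadrature over surface nodes $\vy_j$ with weights $w_j$ and forces $\vfd_j$; as discussed above, this loses accuracy as $\vx$ approaches the surface, which is exactly where near-singular schemes take over:
\begin{verbatim}
import numpy as np

def single_layer_velocity(x, ys, fs, ws, mu=1.0):
    """u(x) = sum_j S(x, y_j) f_j w_j with the Stokeslet kernel S."""
    u = np.zeros(3)
    for y, f, w in zip(ys, fs, ws):
        r = x - y
        rn = np.linalg.norm(r)
        S = (np.eye(3) / rn + np.outer(r, r) / rn**3) / (8 * np.pi * mu)
        u += w * (S @ f)
    return u
\end{verbatim}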
{"text": "\\section{Completeness}\r\n\\begin{definition}\r\n    A metric space is called complete if every Cauchy sequence converges.\r\n\\end{definition}\r\n\\begin{definition}\r\n    A subset $A\\subset M$ where $M$ is a metric space is called bounded if there is some $r>0$ and $z\\in M$ such that $A\\subset B_r(z)$.\r\n\\end{definition}\r\n\\begin{lemma}\\label{ccb}\r\n    Convergent $\\implies$ Cauchy $\\implies$ Bounded.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Suppose $x_n\\to x$, then $\\forall\\epsilon>0$, we can find some $N\\in\\mathbb N$ such that $\\forall n>N,d(x,x_n)<\\epsilon/2$, then $\\forall n,m>N$,\r\n    $$d(x_m,x_n)\\le d(x_m,x)+d(x_n,x)<2\\epsilon/2=\\epsilon$$\r\n    Assume that $(x_n)$ is Cauchy, we need that $\\{x_n:n\\in\\mathbb N\\}$ is contained in some ball.\r\n    We know that there is some $N\\in\\mathbb N,\\forall n,m>N, d(x_n,x_m)<\\epsilon$.\r\n    We can take $\\epsilon=1$, so $\\{x_n:n\\in\\mathbb N, n>N\\}\\subset B_1(x_N)$.\r\n    Since $(x_n)_{n<N}$ is finite, it is contained in some ball $B$.\r\n    In particular, we can take the ball $B_{\\max\\{1,d(x_N,x_i):i\\le N\\}}(x_N)$ contains the sequence.\r\n\\end{proof}\r\n\\begin{remark}\r\n    Bounded does not imply Cauchy and Cauchy does not imply Convergent.\r\n\\end{remark}\r\n\\begin{definition}\r\n    A metric space $M$ is complete if every Cauchy sequence converges.\r\n\\end{definition}\r\n\\begin{proposition}\r\n    If $M,M'$ are complete, so is $M\\oplus_pM'$.\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Let $(a_n)$ be Cauchy in the product, then say that $a_i=(x_i,x_i')$, then for all $m,n\\in\\mathbb N$, $\\max\\{d(x_m,x_n),d(x_m',x_n')\\}\\le d_p(a_m,a_n)$, so $(x_n),(x_n')$ are both Cauchy.\r\n    Since both $M,M'$ are complete, $\\exists x\\in M,x'\\in M', x_n\\to x, x_n'\\to x'$.\r\n    Hence $a_n\\to (x,x')$.\r\n\\end{proof}\r\n\\begin{example}\r\n    $\\mathbb R^n,\\mathbb C^n$ are complete (in Euclidean metric) for any $n$.\r\n\\end{example}\r\nThere is another very important example:\r\n\\begin{theorem}\r\n    Let $S$ be a non-empty set, then $\\ell_\\infty S$ is complete under the uniform metric.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Theorem \\ref{GP_UnifConv} shows that a uniformly Cauchy sequence of functions does converge to some scalar function on $S$.\r\n    To see it is bounded, choose $n\\in\\mathbb N$ such that $d(f_n,f)<1$, then since there is some $C\\ge 0$ such that $\\sup|f_n|\\le C$, we have $|f|\\le |f-f_n|+|f_n|<C+1$ so it is bounded as well.\r\n\\end{proof}\r\n\\begin{proposition}\r\n    Let $N\\subset M$ be a metric subspace.\\\\\r\n    1. If $N$ is complete, then $N$ is closed in $M$.\\\\\r\n    2. If $N$ is closed and $M$ is complete, so is $N$.\r\n\\end{proposition}\r\nSo a metric subspace of a complete space is complete if and only if it is closed.\r\n\\begin{proof}\r\n    1. If $N$ is complete, let $(x_n)$ be a sequence in $N$ such that $x_n\\to x$ in $M$, but this means that $(x_n)$ is Cauchy by Lemma \\ref{ccb}, therefore it is convergent in $N$ due to completeness, hence $x\\in N$ due to uniqueness of limit in metric space.\\\\\r\n    2. 
Choose any Cauchy sequence $(x_n)$ in $N$. We know that $x_n\to x$ for some $x\in M$ due to completeness of $M$, but since $N$ is closed, $x\in N$ as well, so $N$ is complete.
\end{proof}
\begin{theorem}
    Let $M$ be a metric space, then the space of bounded continuous scalar functions on $M$, $C_b(M)$, is complete in the uniform metric.
\end{theorem}
\begin{proof}
    $C_b(M)$ is a metric subspace of $\ell_\infty(M)$, which is complete.
    But a uniform limit of continuous functions is continuous, so $C_b(M)$ is closed.\\
    To spell out the proof, fix $x\in M$ and $\epsilon>0$; we can choose $N$ such that $D(f_n,f)<\epsilon/3$ for all $n\ge N$, where $D$ is the uniform metric.
    Fix any $n\ge N$, then since $f_n$ is continuous, $\exists \delta>0, d(x,y)<\delta\implies |f_n(x)-f_n(y)|<\epsilon/3$.
    Hence $d(x,y)<\delta\implies |f(x)-f(y)|\le |f(x)-f_n(x)|+|f(y)-f_n(y)|+|f_n(x)-f_n(y)|<3\epsilon/3=\epsilon$.
\end{proof}
Fix some $S\neq\varnothing$ and a metric space $(N,d')$.
Let $\ell_\infty (S,N)$ be the space of bounded functions $S\to N$.
Then we can define the uniform metric on $\ell_\infty(S,N)$ by $D(f,g)=\sup_{x\in S}d'(f(x),g(x))$.\\
Now given a metric space $(M,d)$, let $C_b(M,N)$ be the set of bounded continuous functions $M\to N$. Then we have
\begin{theorem}
    Let $S,M,N$ be as above and assume that $N$ is complete. Then $\ell_\infty(S,N)$ is complete under the uniform metric, and $C_b(M,N)$ is closed in $\ell_\infty(M,N)$, hence complete.
\end{theorem}
\begin{proof}
    Analogous to the case where $N=\mathbb R$ or $\mathbb C$.
\end{proof}
\begin{example}
    1. For any closed and bounded interval $[a,b]\subset\mathbb R$, every continuous function on $[a,b]$ is bounded, so $C[a,b]=C_b[a,b]$ is complete under the uniform metric.
\end{example}
\begin{definition}
    A map $f:M\to M'$ is a contraction mapping if $f$ is $L$-Lipschitz with $L<1$.
\end{definition}
\begin{theorem}[Contraction Mapping Theorem, aka Banach Fixed Point Theorem]\label{banach}
    If $f$ is a contraction mapping from a nonempty complete metric space to itself, then $f$ has a unique fixed point.
\end{theorem}
Note that each of the hypotheses is needed, as the following examples show.
\begin{example}
    1. If we remove the completeness criterion: $f:\mathbb R\setminus\{0\}\to\mathbb R\setminus\{0\}$ defined by $f(x)=x/2$ is a contraction but does not have a fixed point.\\
    2. If we remove $L<1$: $f:\mathbb R\to\mathbb R$ given by $f(x)=x+1$ is $1$-Lipschitz but does not have a fixed point.\\
    3. 
$f(x)=x+1/x$ on $[1,\infty)$: here $|f(x)-f(y)|<|x-y|$ for all $x\neq y$, yet $f$ has no fixed point, so strict contractivity of distances alone (without a uniform Lipschitz constant $L<1$) is not enough.
\end{example}
\begin{proof}
    Fix $x_0\in M$, and define a sequence $x_n$ by $x_{n+1}=f(x_n)$, so $x_n=f^n(x_0)$.
    We shall show that this sequence is Cauchy.
    For $n\ge 2$, $d(x_n,x_{n-1})\le Ld(x_{n-1},x_{n-2})\le L^{n-1}d(x_1,x_0)$ inductively.
    For $m>n$, 
    \begin{align*}
        d(x_m,x_n)&\le d(x_n,x_{n+1})+\cdots+d(x_{m-1},x_m)\\
        &\le(L^{m-1}+L^{m-2}+\cdots+L^n)d(x_1,x_0)\\
        &\le\frac{L^n}{1-L}d(x_1,x_0)
    \end{align*}
    The last term, which only depends on the smaller index $n$, can be made as small as we want when $n$ is large enough, so the sequence is Cauchy.\\
    Hence the sequence $x_n$ has a limit $x$, since $M$ is complete. But $f$ is continuous, so $f(x_n)\to f(x)$, and $f(x_n)=x_{n+1}$, so by uniqueness of limits, $f(x)=x$.\\
    Suppose $f(x)=x$ and $f(y)=y$; then if $x\neq y$, $d(x,y)=d(f(x),f(y))\le Ld(x,y)<d(x,y)$, which is a contradiction.
\end{proof}
Note that $x_n\to x$ exponentially fast, so the iteration can also be applied in numerical analysis to find an approximate solution of a fixed point equation.\\
An application of the contraction mapping theorem is to analyze the existence and uniqueness of the solution of an initial value problem.
\begin{example}
    We are interested in the IVP $f^\prime(t)=f(t^2), f(0)=y_0$ on $[0,1/2]$.
    Assume that the IVP has a solution $f$; then immediately $f$ is continuously differentiable.
    By FTC,
    $$f(t)=f(0)+\int_0^tf(x^2)\,\mathrm dx$$
    Let $M=C[0,1/2]$, which is nonempty and complete, then consider the mapping $T:M\to M$ defined by
    $$(Tg)(t)=y_0+\int_0^tg(x^2)\,\mathrm dx$$
    $T$ is well-defined since $x\in [0,1/2]\implies x^2\in[0,1/4]\subset[0,1/2]$ and $g(x^2)$ is continuous in $x$. 
Also by FTC, $(Tg)^\prime(t)=g(t^2)$, so $Tg$ is continuously differentiable, hence continuous.
    Now $f$ solves the IVP iff $f$ is a fixed point of $T$.
    Also, we can check that $T$ is a contraction.
    Indeed, take $g,h\in M$, then
    \begin{align*}
        |Tg(t)-Th(t)|&=\left|\int_0^tg(x^2)-h(x^2)\,\mathrm dx\right|\\
        &\le\int_0^t|g(x^2)-h(x^2)|\,\mathrm dx\le tD(g,h)\le D(g,h)/2
    \end{align*}
    So $T$ is a contraction mapping, hence by Theorem \ref{banach} a unique fixed point exists.
\end{example}
\begin{theorem}[Lindel\"of--Picard Theorem]
    Let $a<b, R>0$ be real numbers and $y_0\in\mathbb R^n$.
    Suppose there is a continuous $\phi:[a,b]\times \overline{B}_R(y_0)\to \mathbb R^n$, where $\overline{B}_R(y_0)$ is the closed ball.
    Assume there is $K>0$ such that $\forall x,y\in \overline{B}_R(y_0),\forall t\in [a,b],\|\phi(t,x)-\phi(t,y)\|\le K\|x-y\|$. Then $\exists\epsilon>0,\forall t_0\in [a,b]$, the IVP
    $$f^\prime(t)=\phi(t,f(t)),\quad f(t_0)=y_0$$
    has a unique solution on $[t_0-\epsilon,t_0+\epsilon]\cap [a,b]$.
\end{theorem}
\begin{proof}
    Observe that $\mathbb R^n$ is complete, so $\overline{B}_R(y_0)$ is closed hence complete; then since $\phi$ is continuous, it is bounded on the closed bounded set $[a,b]\times \overline{B}_R(y_0)\subset\mathbb R^{n+1}$.
    Let
    $$C=\sup\{|\phi(t,x)|:t\in [a,b],x\in \overline{B}_R(y_0)\},\qquad\epsilon=\min\{R/C,1/(2K)\}$$
    We want to solve the IVP on $[c,d]=[t_0-\epsilon,t_0+\epsilon]\cap[a,b]$.
    Now the set of functions
    $$M=C([c,d],\overline{B}_R(y_0))$$
    is nonempty and complete.\\
    Consider the mapping $T:M\to M$ given by
    $$(Tg)(t)=y_0+\int_{t_0}^t\phi(x, g(x))\,\mathrm dx$$
    Now $Tg$ is continuous for continuous $g$ by FTC (in fact $Tg$ is even continuously differentiable).
    Also $(Tg)^\prime(t)=\phi(t,g(t))$.
    In addition, $Tg$ takes values in $\overline{B}_R(y_0)$ since
    $$\|(Tg)(t)-y_0\|=\left\|\int_{t_0}^t\phi(x, g(x))\,\mathrm dx\right\|\le\left|\int_{t_0}^t\|\phi(x, g(x))\|\,\mathrm dx\right|\le \epsilon C\le R$$
    It remains to show that $T$ is a contraction mapping; then the theorem follows from the contraction mapping theorem, since the fixed point is continuously differentiable and solves the differential equation, and every solution is a fixed point.\\
    For $g,h\in M$, we consider the uniform distance of $Tg,Th$.
    Indeed,
    \begin{align*}
        \|Tg(t)-Th(t)\|&=\left\|\int_{t_0}^t\phi(s,g(s))-\phi(s,h(s))\,\mathrm ds\right\|\\
        &\le\left|\int_{t_0}^t\|\phi(s,g(s))-\phi(s,h(s))\|\,\mathrm ds\right|\\
        &\le\epsilon KD(g,h)\le D(g,h)/2
    \end{align*}
    for any $t\in[c,d]$ by the Lipschitz condition we assumed (note that $|t-t_0|\le\epsilon$).
    Therefore $T$ is a contraction mapping, and the proof is complete.
\end{proof}
\begin{remark}
    1. In general, however, you cannot extend the solution guaranteed above to a global solution.
    But in our previous example we can extend the solution to $[0,1)$.\\
    2. Also, we can apply the theorem to solve higher order equations by considering the vector of derivatives.\\
    3. 
If $f:[a,b]\to\mathbb R^n$ is written as $(f_1,f_2,\ldots,f_n)$, then $f^\prime=(f_1^\prime,f_2^\prime,\ldots, f_n^\prime)$, assuming each component is differentiable.
    Similarly, the integral of a vector-valued function is the vector of the integrals of the components, given that they exist.
    So we can do everything by components.\\
    4. The reason that the integral of the norm is at least the norm of the integral is the Cauchy--Schwarz inequality.\\
    5. We can show that a continuous function on a closed bounded set in $\mathbb R^n$ is bounded by Bolzano--Weierstrass.
\end{remark}
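To see Theorem \ref{banach} in action numerically, the following is a small sketch (our illustration, not part of the course) of the Picard iteration from the IVP example above: it approximates the fixed point of $(Tg)(t)=y_0+\int_0^tg(x^2)\,\mathrm dx$ on a grid. The grid size and tolerance are arbitrary choices.

\begin{lstlisting}[language=Python]
import numpy as np

# Picard iteration for the IVP f'(t) = f(t^2), f(0) = y0, on [0, 1/2].
y0 = 1.0
t = np.linspace(0.0, 0.5, 501)

def T(g):
    # evaluate g at the points t^2 (they stay inside [0, 1/4], a subset of [0, 1/2])
    integrand = np.interp(t**2, t, g)
    # cumulative trapezoid rule gives the integral from 0 to each t
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return y0 + integral

g = np.full_like(t, y0)               # any starting guess works
for k in range(40):
    g_new = T(g)
    err = np.max(np.abs(g_new - g))   # uniform (sup) distance D(Tg, g)
    g = g_new
    if err < 1e-12:
        break
# err shrinks at least by a factor of 1/2 per step, as the contraction bound predicts
\end{lstlisting}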
{"text": "\\chapter{Theoretical background}\n\\label{chap:Background}\n\\mtoc\n\nIn this chapter, we provide a brief overview of the main concepts of natural language processing required to understand the rest of this work.\n\n\\section{Language models}\n\\label{sec:Background-LanguageModels}\n\nLanguage models are probabilistic models that predict the next word in a sequence. They also assign probabilities all possible sentences and sequences of words constructed from the words of a given language (\\cite{Jura09}).\n\nWe train language models on large corpora of text written in a given language. For example, by counting occurrences and co-occurrences of words. A language model trained on a big enough English corpora should assign a higher probability to the first sentence and consider the second one very unlikely.\n\n\\begin{enumerate}\n\\item Today is a rainy day here in Paris.\n\\item Nice not such but inside in after.\n\\end{enumerate}\n\nFinding the probability distribution of words is an important part of many NLP problems, such as spellchecking, error correction, or machine translation. As we will see in Section \\ref{sec:Background-seq2seq}, state of the art recurrent neural network used for machine translation are also very advanced language models.\n\n\\section{Entropy and cross-entropy}\n\nThe amount of information produced when one message is chosen from the set of possible messages can be measured as\n\n\\[ I(x) = -log_2 P(x) \\]\n\nIt was \\cite{Shan48} who proposed to use the logarithm with base 2 and call the units of information bits (or binary digits). We can also use natural logarithm but then the units of measurement will be called nats.\n\n\\subsection{Entropy}\n\nIn information theory, a random variable is treated as a source of information. The entropy of variable X is the expectation of the amount of information in the outcome (\\cite{Mack03}).\n\n\\[ H(X) = \\mathbb{E}\\big[ I(x) \\big] = - \\mathbb{E}\\big[ log P(x) \\big] \\]\n\nIn the context of languages, we only consider discrete random variables over the finite alphabet.\n\n\\[ H(X) = - \\sum_{i=1}^N P(x_i) log P(x_i) \\]\n\nEntropy is considered the measure of uncertainty of variable X.\n\n\\subsection{Kullback-Leibler divergence}\n\\label{sec:Background-KL}\n\nThe difference between two probability distributions is measured with a Kullback-Leibler (KL) divergence:\n\n\\[ D_{KL}(P||Q) = \\mathbb{E}\\bigg[ \\log \\frac{P(x_i)}{Q(x_i)} \\bigg] = \\mathbb{E}\\big[ \\log P(x_i) - \\log Q(x_i) \\big] \\]\n\nFor the case of discrete variables\n\n\\[ D_{KL}(P||Q) = \\sum_{i=1}^N P(x_i) \\log \\frac{P(x_i)}{Q(x_i)} \\]\n\n\\subsection{Cross-entropy}\n\nCross-entropy is the average number of bits needed to encode data from the source with distribution P when we use model Q to define our codebook (\\cite{Murp13}).\n\n\\[ H(P, Q) = - \\sum_{i=1}^N P(x_i) log Q(x_i) \\]\n\nIt can be expressed as a sum of entropy and KL divergence\n\n\\begin{equation}\n    H(P, Q) = H(P) + D_{KL}(P||Q)\n    \\label{eq:KL-Entropy}\n\\end{equation}\n\nThe lower bound of cross-entropy means that even if we find the perfect model which matches the true distribution of data, cross-entropy will not be lower than the entropy of this dataset.\n\n\\subsection{Why do we minimize the cross-entropy?}\n\\label{sec:Background-Likelihood}\n\nMany problems in machine learning can be viewed as function estimation: we are trying to predict a variable $y$ given an input vector $x$ (\\cite{Good16}). 
We assume that there is a true function $f(x)$ that describes the relationship between $x$ and $y$. All other factors that influence $y$ are considered to be noise $\epsilon$ (in the real world nothing is really influenced by a single factor: be it a coin toss or a roll of dice, the outcome of any process is affected by more factors than we could possibly measure; however, the effect of most factors is so insignificant that they can be ignored).

\[ y = f(x) + \epsilon \]

We want to find a function $\hat{f}$ (model, estimate) which is as close as possible to the "true" function $f$. When we say that we are training a machine learning model, we mean that we choose a model, which is a parametrised function $\hat{f}(x;\theta)$, and iteratively move it closer to the "true" function $f$ by changing the parameter $\theta$.

In the probabilistic interpretation, the process that we are trying to model, or function $f$, is the "true" probability distribution $P_{data}(x, y)$ that generates a set of $(x, y)$ points. We observe this process by collecting the training data $D =\{(x^{(i)}, y^{(i)})|i=1,2,\dots\}$ and try to model it with a parametric family of distributions $P_{model}(x, y; \theta)$ (function $\hat{f}$) by finding the parameter $\theta^{*}$ which minimizes the difference between those two distributions. As we mentioned in Section \ref{sec:Background-KL}, the difference between two distributions is measured with the Kullback-Leibler divergence.

\[ \theta^{*} = \argmin_{\theta} D_{KL}(P_{data}||P_{model}) \]

And based on Equation \ref{eq:KL-Entropy}, this is the same as minimizing the difference between the cross-entropy of our model applied to the dataset $H(P_{data}, P_{model})$ and the entropy of this dataset $H(P_{data})$

\[ \theta^{*} = \argmin_{\theta} \bigg[H(P_{data}, P_{model}) - H(P_{data})\bigg] \]

Since $P_{data}$ does not depend on $\theta$ and cannot be controlled by us, this boils down to minimizing the cross-entropy:

\[ \theta^{*} = \argmin_{\theta} H(P_{data}, P_{model}) \]

Now we will show that this is the same as maximizing the likelihood that our model $P_{model}$ assigns to the dataset $D$. The likelihood is the measure of how likely our model is to produce this dataset. We assume that by choosing the model which has the highest likelihood of generating dataset $D$ we will get a model that behaves like the true process in other situations (this really depends on how representative $D$ is of the entire distribution $P_{data}$ and whether we are able to generalize and not overfit - simply memorize - the training data $D$). The likelihood is expressed as the probability that our model $P_{model}$ assigns to the dataset $D$. So we want to find the parameter $\theta^{*}$ which maximizes this probability:

\[ \theta^{*} = \argmax_{\theta} P_{model}(D;\theta) \]

We make an assumption that all points in $D$ are independent of each other, which allows us to express the probability of generating dataset $D$ as the product of the probabilities of generating each one of its points. If $m = |D|$ is the size of our dataset,

\[ P_{model}(D;\theta) = \prod_{i=1}^m P_{model}(x^{(i)}, y^{(i)};\theta) \]

Which means that

\[ \theta^{*} = \argmax_{\theta} \prod_{i=1}^m P_{model}(x^{(i)}, y^{(i)};\theta) \]

To simplify this task, we can use the property of logarithms which turns a product into a sum, and maximize the logarithm of the expression on the right. 
Indeed, the same parameter $\theta^{*}$ which maximizes the logarithm of an expression will also maximize that expression. Therefore,

 \[ \theta^{*} = \argmax_{\theta} \log \prod_{i=1}^m P_{model}(x^{(i)}, y^{(i)};\theta) = \argmax_{\theta} \sum_{i=1}^m \log P_{model}(x^{(i)}, y^{(i)};\theta) \]

 The logarithm of a likelihood (the right-hand side expression) is called the log-likelihood. In the same way, we can multiply this expression by any constant without changing the maximization parameter. Let's normalize it over the size of our dataset

 \[ \theta^{*} = \argmax_{\theta} \frac{1}{m}\sum_{i=1}^m \log P_{model}(x^{(i)}, y^{(i)};\theta) \]

 This, in fact, is the expectation of the log-probability of a random point with respect to the probability distribution $P_{data}$:

 \[ \theta^{*} = \argmax_{\theta} \mathbb{E}_{(x,y) \sim P_{data}} \bigg[\log P_{model}(x, y;\theta)\bigg] \]

 This is the same as minimizing the negative expectation, or cross-entropy:

 \[ \theta^{*} = \argmin_{\theta} - \mathbb{E}_{(x,y) \sim P_{data}} \bigg[\log P_{model}(x, y;\theta)\bigg] = \argmin_{\theta} H(P_{data}, P_{model}) \]

This explains why we train machine learning models by minimizing cross-entropy and why it is the same as maximizing the likelihood. Every cost function is, in fact, the cross-entropy of the empirical distribution $P_{data}$ and some distribution that we choose for our model. For example, the mean squared error (MSE) corresponds to the cross-entropy of $P_{data}$ and a normal distribution.

\section{Recurrent neural networks}
\label{sec:Background-RNN}

A big limitation of feedforward neural networks is the fact that they require a fixed-size input and always produce a fixed-size output. For example, a network that has 2 neurons in the input layer and 1 neuron in the output layer can only accept vectors of size 2 and return vectors of size 1.

Some problems, however, require processing of sequential data. This includes time series forecasting, music composition, and many tasks of natural language processing, such as machine translation, text summarization, question answering, sentiment analysis, speech recognition, text-to-speech and speech-to-text conversion, etc. The input of these problems is a sequence of variable length.

\textbf{Recurrent neural networks} (RNN) are a special kind of neural network that have cyclical connections. At every step, such networks receive a fixed-size input (for example, one word) and the information from the previous step, transmitted through the cyclical connection. This creates memory inside the network, which allows it to operate on sequences of input values.

\begin{figure}[H]
    \label{fig:rnn}
    \centering
    \includegraphics[height=4cm]{rnn}
\end{figure}

\subsection{Sequence to sequence networks}
\label{sec:Background-seq2seq}

On each step, a recurrent neural network receives an input value, produces an output value, and passes its internal state on to the next step. 
From this it follows that:

\begin{enumerate}
    \item A sequence of $N$ inputs will produce a sequence of $N$ outputs.
    \item Every output $y_k$ is only affected by the current input $x_k$ and all the previous inputs $x_1, \dots, x_{k-1}$, but not the next inputs $x_{k+1}, \dots, x_N$.
\end{enumerate}

This is useful for problems like predicting the next element in a sequence, but other tasks may require input and output sequences to be of different sizes and the output to be equally affected by all elements of the input.

Consider a question answering system. A neural network receives a sequence of words in a question: \textit{"At what temperature does water boil?"} and produces the answer: \textit{"Water boils at 100 degrees."}. Such an answer cannot be produced by a classical RNN, because the length of the question exceeds the length of the answer and it is impossible to produce the beginning of the answer \textit{"Water boils ..."} until we have seen the end of the question \textit{"... does water boil"}.

This can be done by a special combination of two recurrent neural networks called \textbf{sequence to sequence} or \textbf{encoder-decoder} networks, introduced by \cite{Suts14}.

\paragraph{Encoder} is a recurrent neural network which accepts the sequence of input values. The output of the encoder is ignored, and the state vector received on the last step encodes the input sequence (sometimes called the "thought vector").

\paragraph{Decoder} is another recurrent neural network which receives its own output from the previous step as input on the current step and generates the output sequence. On the first step, the decoder receives the encoded sequence provided by the encoder as its internal state.

Putting the encoder and decoder together, we obtain a neural network which first reads the whole input sequence of size $N$ and only then produces the output sequence of size $M$, making it possible that $N \neq M$.

Sequence to sequence networks are in fact language models that learn the probability distribution of the output sequences conditioned on the input sequences. They are widely used in machine translation, question answering, text summarization, and many other similar applications.

\subsection{LSTM and GRU cells}
\label{sec:Background-LSTMandGRU}

The major problem with recurrent neural networks is the vanishing or exploding gradient. As the network is trained on a sequence of inputs, we use an algorithm called "backpropagation through time" which propagates an error back through the cyclical connection, reversing the path made by the input sequence. This process is similar to backpropagating an error through a deep feedforward network with as many layers as the length of the input sequence. The only difference is that the weight matrix on each layer is now the same matrix corresponding to the cyclical connection. This leads to the problem described by \cite{Hoch01}: backpropagated error signals exponentially depend on the magnitude of weights. 
They tend to either explode if the weights are above 1 or vanish if they are below 1.

As a solution to this problem, \cite{Hoch97} introduced a novel method called \textbf{Long short-term memory (LSTM)}: memory cells with an internal architecture that allows bridging very long time lags and does not suffer from exploding or vanishing gradients.

\cite{Cho14} proposed a simplified version of memory cells called the \textbf{Gated recurrent unit (GRU)}, which achieves similar performance to LSTM cells but has fewer parameters.
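As a quick numerical sanity check of the identities from the entropy section (Equation \ref{eq:KL-Entropy}), the following sketch computes $H(P)$, $H(P,Q)$, and $D_{KL}(P||Q)$ for two toy discrete distributions; the distributions themselves are made-up values used only for illustration.

\begin{lstlisting}[language=Python]
import numpy as np

# Two discrete distributions over the same finite alphabet (toy values).
P = np.array([0.5, 0.25, 0.25])
Q = np.array([0.4, 0.4, 0.2])

entropy       = -np.sum(P * np.log2(P))       # H(P)
cross_entropy = -np.sum(P * np.log2(Q))       # H(P, Q)
kl            =  np.sum(P * np.log2(P / Q))   # D_KL(P || Q)

# H(P, Q) = H(P) + D_KL(P || Q), and H(P, Q) >= H(P) since D_KL >= 0
assert np.isclose(cross_entropy, entropy + kl)
\end{lstlisting}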
{"text": "\\lab{ARMA Models}{ARMA Models}\n\\label{lab:arma}\n\\objective{ARMA$(p,q)$ models combine autoregressive and moving-average models in order to forecast future observations using time-series. In this lab, we will build an ARMA$(p,q)$ model to analyze and predict future weather data and then compare this model to statsmodels built-in ARMA package. Then we will forecast the future height of the Rio Negro.}\n\n% spec file should be consistent with std and sigma...change variable to filename\n% change from op.fmin to fmin\n% change so its ARMA(p,q) in model identification\n\\section*{Time Series}\n\nA time series is any discrete-time stochastic process.  \nIn other words, it is a sequence of random variables, $\\{y_t\\}_{t=1}^n$, that are determined by their time $t$.  \nExamples of time series include heart rate readings over time, pollution readings over time, stock prices at the closing of each day, and air temperature.\nOften when analyzing time series, we want to forecast future data, such as what will the stock price of a company be in a week and what will the temperature be in 10 days.\n\n\\section*{ARMA$(p,q)$ Models}\n\nOne way to forecast a time series is using an ARMA model.\nAn $\\text{ARMA}(p,q)$ model combines an autoregressive model of order $p$ and a moving average model of order $q$ on a time series $\\{y_t\\}_{t = 1}^n$.\nThis model is a dependent model as it is non-independent of previous data.\nBecause of this, the model needs to become stationary in order to compensate for the dependency of the data.\nTo make data stationary, we look at the time series $\\{z_t\\}_{t=1}^n$ where $z_t=y_t-y_{t-1}$.\nThe model itself is a stochastic process on $z_t$, satisfying the equation\n\n\\begin{align}\n    \\label{eq:arma:def}\n    z_t =\\underbrace{\\left(\\sum_{i=1}^p \\phi_{i}z_{t - i}\\right)}_\\text{AR(p)} + \\epsilon_t + \n    \\underbrace{\\left(\\sum_{j=1}^{q} \\theta_{j}\\epsilon_{t-j} \\right)}_\\text{MA(q)}\n\\end{align}\nwhere each $\\epsilon_t$ is an identically-distributed Gaussian variable $\\mathscr{N}(\\mu,\\sigma^2),$ and $\\phi_i$ and $\\theta_j$ are constants.\n\n\\subsection*{AR($p$) Models}\n\nAn AR($p$) model works similar to a weighted random walk.\nRecall that in a random walk, the current position depends on the immediate past position.\nIn the autogregressive model, the current data point in the time series depends on the past $p$ data points.\nHowever, the importance of each of the past $p$ data points is not uniform.\nWith an error term to represent white noise and a constant term to adjust the model along the y-axis, we can model the stochastic process with the following equation:\n\n\\begin{equation}\nz_t=c + \\epsilon_t + \\sum_{i=1}^p\\phi_i z_{t-i}\n\\label{eq:AR}\n\\end{equation}\n\nIf there is a high correlation between the current and previous values of the time series, then the AR$(p)$ model is a good representation of the data, and thus the ARMA($p,q$) model will most likely be a good representation.\nThe coefficients $\\{\\phi_i\\}_{i=1}^p$ are larger when the correlation is stronger.\n\nIn this lab, we will be using weather data from Provo, Utah\\footnote{This data was taken from https://forecast.weather.gov/data/obhistory/metric/KPVU.html}.\nTo check that the data can be represented well, we need to look at the correlation between the current and previous values.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\textwidth]{figures/correlations.pdf}\n\\caption{These graphs show that the weather data is correlated to its 
previous values.
The correlation is weaker in each graph successively, showing that the further in the past the data is, the less correlated the data becomes.}
\label{fig:correlations}
\end{figure}

\subsection*{MA($q$)}

A moving average model of order $q$ is used to factor in the varying error of the time series.
This model uses the error of the current data point and the previous data points to predict the next data point.
Similar to an AR($p$) model, this model uses a linear combination (which includes a constant term $c$ to adjust the model along the y-axis):

\begin{equation}
z_t = c + \epsilon_t + \sum_{i=1}^q\theta_i\epsilon_{t-i}
\label{eq:MA}
\end{equation}

This part of the model simulates shock effects in the time series.
Examples of shock effects include volatility in the stock market or sudden cold fronts in the temperature.

Combining both the AR($p$) and MA($q$) models, we get an ARMA($p,q$) model which forecasts based on previous observations and error trends in the data.

\subsection*{Finding Parameters}

One of the most difficult parts of using an ARMA($p,q$) model is identifying the proper parameters of the model.
These parameters include $\{\phi_i\}_{i=1}^p$, $\{\theta_i\}_{i=1}^q$, $\mu$, and $\sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of the error.
Note that $\{\phi_i\}_{i=1}^p$ and $\{\theta_i\}_{i=1}^q$ determine the order of the ARMA model.

A naive way to use an ARMA model is to choose $p$ and $q$ based on intuition.
Figure \ref{fig:correlations} showed that there is a strong correlation between $z_t$ and $z_{t-1}$ and between $z_t$ and $z_{t-2}$.
The correlation is weaker between $z_t$ and $z_{t-3}$.
Intuition then suggests choosing $p=2$.
By looking at the correlations between the current noise and previous noise, similar to Figure \ref{fig:correlations}, it can also be seen that there is a weak correlation between $z_t$ and $\epsilon_t$ and between $z_t$ and $\epsilon_{t-1}$. 
Between $z_t$ and $\epsilon_{t-2}$ there is no correlation.
For more on how these error correlations were found, see Additional Materials.
Intuition from these correlations suggests choosing $q=1$.
Thus, a naive choice for our model is an ARMA($2,1$) model.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/naive.pdf}
\caption{Naive forecast on \li{weather.npy}}
\label{fig:naive}
\end{figure}

\begin{problem}
Write a function \li{arma_forecast_naive()} that builds an ARMA($p,q$) model where the values of $\phi_i=.5$ and $\theta_i=.1$ for all $i$.
Let $\epsilon_i\sim\mathscr{N}(0,1)$ for all $i$.
Use your function to predict the next $n$ values of the time series.
The function should accept parameters $p$, $q$, and $n$ (the number of observations to predict).
Plot $\{z_t\}_{t=1}^n$ followed by your predicted observations of $z_t$.

The file \li{weather.npy} contains data on the temperature in Provo, Utah from 7:56 PM May 13, 2019 to 6:56 PM May 16, 2019, taken every hour.
Use this file to test your code.
For $p=2$, $q=1$, and $n=20$, your plot should look similar to Figure \ref{fig:naive}; however, due to the variance of the error $\epsilon_t$, the plot will not look exactly like Figure \ref{fig:naive}.
The predictions may be higher or lower on the y-axis.
\label{prob:naive}
\end{problem}

Let $\Theta = \{\phi_i, \theta_j, \mu, \sigma_a^2\}$ be the set of parameters
for an $\text{ARMA}(p,q)$ model. 
Suppose we have a set of observations $\{z_t\}_{t=1}^n$.
Our goal is to find the $p,q$, and $\Theta$ that maximize the likelihood of the ARMA model given the data.
Using the chain rule, we can factorize the likelihood of the model given this data as
\begin{align}
    \label{eq:arma:factorized}
    p(\{z_t\} | \Theta) = \prod_{t=1}^{n} p(z_t | z_{t-1}, \ldots, z_{1},
    \Theta)
\end{align}

\subsubsection*{State Space Representation}

In a general $\text{ARMA}(p,q)$ model, the likelihood is a function of the
unobserved error terms $a_t$ and is not trivial to compute. 
Simple approximations can be made, but these may be inaccurate under certain
circumstances. 
Explicit derivations of the likelihood are possible, but
tedious. 
However, when the $\text{ARMA}$ model is placed in state-space form, the
Kalman filter affords a straightforward, recursive way to compute the
likelihood.

We demonstrate one possible state-space representation of an $\text{ARMA}(p,q)$ model. Let
$r = \max(p, q+1)$. Define
\begin{align}
    \hat{\textbf{x}}_{t|t-1}&=\begin{bmatrix}x_{t}&x_{t-1}&\dotsb&x_{t-(r-1)}\end{bmatrix}^T\\
    F &= \begin{bmatrix}
        \phi_1 & \phi_2 & \cdots & \phi_{r-1} & \phi_r\\
        1 & 0 & \cdots & 0 & 0\\
        0 & 1 & \cdots & 0 & 0\\
        \vdots & \vdots & \cdots & \vdots & \vdots\\
        0 & 0 & \cdots & 1 & 0
    \end{bmatrix}\\
    H &= \begin{bmatrix}
        1 & \theta_1 & \theta_2 & \cdots & \theta_{r-1}
    \end{bmatrix}\\
    Q &= \begin{bmatrix}
        \sigma_a^2 & 0 & \cdots & 0\\
        0 & 0 & \cdots & 0\\
        \vdots & \vdots & \cdots & \vdots\\
        0 & 0 & \cdots & 0
    \end{bmatrix}\\
    w_t &\sim \text{MVN}(0, Q),
    \label{eqn:error}
\end{align}
where $\phi_i = 0$ for $i>p$, and $\theta_j = 0$ for $j > q$. 
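As a concrete point of reference, the following is our own minimal sketch of how these matrices might be assembled with NumPy. It is an illustration only; the lab provides its own \li{state_space_rep()} routine later, whose interface may differ.

\begin{lstlisting}
import numpy as np

def state_space_matrices(phis, thetas, sigma2):
    """Assemble F, H, Q for the ARMA state-space form above
    (our sketch, not the lab's provided state_space_rep())."""
    p, q = len(phis), len(thetas)
    r = max(p, q + 1)
    phi = np.concatenate([phis, np.zeros(r - p)])          # phi_i = 0 for i > p
    theta = np.concatenate([thetas, np.zeros(r - 1 - q)])  # theta_j = 0 for j > q
    F = np.zeros((r, r))
    F[0, :] = phi                 # first row holds the AR coefficients
    F[1:, :-1] = np.eye(r - 1)    # subdiagonal shifts the state down
    H = np.concatenate([[1.0], theta]).reshape(1, r)
    Q = np.zeros((r, r))
    Q[0, 0] = sigma2              # only the newest state receives noise
    return F, H, Q
\end{lstlisting}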
Note that Equation \ref{eq:AR} gives
\begin{align}
    F\hat{\textbf{x}}_{t-1|t-2}+w_{t}&=\begin{bmatrix}\sum_{i=1}^r\phi_ix_{t-i}\\
                                x_{t-1}\\
                                x_{t-2}\\
                                \vdots\\
                                x_{t-(r-1)}
                  \end{bmatrix}+\begin{bmatrix}a_t\\0\\0\\\vdots\\0\end{bmatrix}\\
                &=\begin{bmatrix}x_t&x_{t-1}&\cdots&x_{t-(r-1)}\end{bmatrix}^T\\
                &=\hat{\textbf{x}}_{t|t-1}
\end{align}
We note that $z_{t|t-1}=H\hat{\textbf{x}}_{t|t-1}+\mu$.\footnote{For a proof of this fact, see Additional Materials.}

Then the linear stochastic
dynamical system
\begin{align}
    \hat{\textbf{x}}_{t+1|t} &= F\hat{\textbf{x}}_{t|t-1} + w_t\\
    z_{t|t-1} &= H\hat{\textbf{x}}_{t|t-1} + \mu
    \label{eq:update-z}
\end{align}
describes the same process as the original $\text{ARMA}$ model.

\begin{info}
Equation \ref{eq:update-z} involves a deterministic component, namely $\mu$.
The Kalman filter theory developed in the previous lab, however, assumed zero-mean noise for the observations $z_{t\mid t-1}$.
This means you should subtract off the mean
$\mu$ of the error from the time series observations $z_{t\mid t-1}$ when using them in the predict and update
steps.
\end{info}

\subsubsection*{Likelihood via Kalman Filter}

We assumed in Equation \ref{eqn:error} that the error terms of the model are Gaussian.
This means that each conditional distribution in \ref{eq:arma:factorized} is also Gaussian, and is completely characterized by its mean and variance. 
These two quantities are easily found via the Kalman filter:
\begin{align}
    \text{mean} & \quad H\hat{\textbf{x}}_{t|t-1} + \mu \\
    \text{variance} & \quad HP_{t|t-1}H^T
\end{align}
where $\hat{\textbf{x}}_{t|t-1}$ and $P_{t|t-1}$ are found during the Predict step. 
Given that each conditional distribution is Gaussian, the likelihood can then be found as follows:
\begin{align}
    \label{eq:arma:likelihood}
    p(\{z_t\} | \Theta)& = \prod_{t=1}^{n} N(z_t\mid H\hat{\textbf{x}}_{t|t-1} + \mu,\;
    HP_{t|t-1}H^T)
\end{align}

\begin{problem}
\label{prob:arma:likelihood}
Write a function \li{arma_likelihood()} that returns the log-likelihood of an ARMA
model, given a time series $\{z_t\}_{t=1}^n$.
This function should accept a \li{file} with the observations and each of the parameters in $\Theta$.
Return the log-likelihood of the $\text{ARMA}(p,q)$ model as a \li{float}.

Use the \li{state_space_rep()} function provided to create $F,Q,$ and $H$. 
A \li{kalman()} filter has been provided to calculate the means and covariances of each observation.

(Hint: Calling the function \li{kalman()} on a time series will return an array whose values are $x_{k\mid k-1}$ and an array whose values are $P_{k\mid k-1}$ for each $k\leq n$.
Remember that the time series should have $\mu$ subtracted when using \li{kalman()}.)

When done correctly, your function should match the following output:
\begin{lstlisting}
>>> arma_likelihood(file='weather.npy', phis=np.array([0.9]), thetas=np.array([0]), mu=17., std=0.4)
-1375.1805469978776
\end{lstlisting}
\end{problem}

\subsubsection*{Model Identification}

Now that we can compute the likelihood of a given ARMA model, we want to find the best choice of parameters given our time series.
In this lab, we define the model with the "best" choice of parameters as the model which minimizes the AIC.
The benefit of minimizing the AIC is that it rewards goodness of fit while penalizing overfitting.
The AIC is expressed by
\begin{align}
    2k\left(1 + \frac{k+1}{n-k}\right) - 2 \ell(\Theta)
\end{align}
where $n$ is the sample size, $k = p + q + 2$ is the number of parameters in
the model, and $\ell(\Theta)$ is the maximum log-likelihood for the model class.

To compute the maximum log-likelihood for a model class, we need to optimize
\ref{eq:arma:likelihood} over the space of parameters $\Theta$. We can do so
by using an optimization routine such as \li{scipy.optimize.fmin} on the function \li{arma_likelihood()} from Problem \ref{prob:arma:likelihood}. 
Use the following code to run this routine.

\begin{lstlisting}
>>> from scipy.optimize import fmin

>>> # assume p, q, and time_series are defined
>>> def f(x): # x contains the phis, thetas, mu, and std
>>>     return -1*arma_likelihood(time_series, phis=x[:p], thetas=x[p:p+q], mu=x[-2],std=x[-1])
>>> # create initial point
>>> x0 = np.zeros(p+q+2)
>>> x0[-2] = time_series.mean()
>>> x0[-1] = time_series.std()
>>> sol = fmin(f,x0,maxiter=10000, maxfun=10000)
\end{lstlisting}

This routine will return a vector \li{sol} where the first $p$ values are $\{\phi_i\}_{i=1}^p$, the next $q$ values are $\{\theta_i\}_{i=1}^q$, and the last two values are $\mu$ and $\sigma$, respectively.
Note the wrapper $f(x)$ returns the negative log-likelihood.
This is because \li{scipy.optimize.fmin} finds the \emph{minimizer} of $f(x)$ and we are solving for the \emph{maximum} likelihood.

To minimize the AIC, we perform \emph{model identification}.
This is choosing the order of our model, $p$ and $q$, from some admissible set.
The order of the model which minimizes the AIC is then the optimal model.

\begin{problem}
\label{prob:model-identification}
Write a function \li{model_identification()} that accepts a \li{file} containing the time series data and two integers, $i$ and $j$. 
Return each parameter in $\Theta$ that minimizes the AIC of an ARMA($p,q$) model, given that $1\leq p \leq i$ and $1\leq q \leq j$.

Your code should produce the following output (it may take about two minutes to run):
\begin{lstlisting}
>>> model_identification(filename='weather.npy',i=4,j=4)
(array([ 0.72135856]), array([-0.26246788]), 0.35980339870105321, 1.5568331253098422)
\end{lstlisting}
\end{problem}

\section*{Forecasting with Kalman Filter}
We now have identified the optimal ARMA$(p,q)$ model. 
We can use this model to predict future states.
The Kalman filter provides a straightforward way to predict future states by giving the mean and variance of the conditional distribution of future observations.
Observations can be found as follows
\begin{align}
    z_{t + k} | z_{1}, \cdots, z_{t} \sim N(z_{t+k};\; H\hat{x}_{t+k|t} + \mu,\;
    HP_{t+k|t}H^T)
\end{align}
To evolve the Kalman filter, recall the predict and update rules of a Kalman filter.
\begin{align*}
\textbf{Predict} & & \widehat{\mathbf{x}}_{k|k-1} & = F\widehat{\mathbf{x}}_{k-1|k-1} + \mathbf{u} \\
 & & P_{k|k-1} & = FP_{k-1|k-1}F^{T} + Q \\
\textbf{Update} & & \tilde{\mathbf{y}}_{k} & = \mathbf{z}_{k} - H\widehat{\mathbf{x}}_{k|k-1} \\
 & & S_{k} & = HP_{k|k-1}H^{T} + R \\
 & & K_{k} & = P_{k|k-1}H^{T}S_{k}^{-1} \\
 & & \widehat{\mathbf{x}}_{k|k} & = \widehat{\mathbf{x}}_{k|k-1} + K_{k}\tilde{\mathbf{y}}_{k} \\
 & & P_{k|k} & = (I - K_{k}H)P_{k|k-1}
\end{align*}

\begin{warn}
Recall that the values returned by \li{kalman()} are conditional on the previous observation.
To compute the mean and variance of future observations, the values $x_{n|n}$ and $P_{n|n}$ MUST be computed using the update step.
Once computed, only the predict step is needed to find the future means and covariances.
\end{warn}

\begin{problem}
\label{prob:arma:forecast}
Write a function \li{arma_forecast()} that accepts a \li{file} containing a time series, the parameters for an ARMA model, and the number $n$ of observations to forecast.
Calculate the mean and covariance of the future $n$ observations using a Kalman filter.
Plot the original observations as well as the mean for each future observation.
Plot a 95\% confidence interval (2 standard deviations away from the mean) around the means of future observations.
Return the means and covariances calculated.

(Hint: The standard deviation is the square root of the covariance calculated.)

The following code should create a plot similar to Figure \ref{fig:forecasted}. 
\begin{lstlisting}
>>> # Get optimal model
>>> phis, thetas, mu, std = model_identification(filename='weather.npy', i=4, j=4)

>>> # Forecast optimal model
>>> arma_forecast(filename='weather.npy', phis=phis, thetas=thetas, mu=mu, std=std)
\end{lstlisting}

How does this plot compare to the naive ARMA model made in Problem \ref{prob:naive}?
\end{problem}

\section*{Statsmodels ARMA}

The module \li{statsmodels} contains a package that includes an ARMA model class.
This class also uses a Kalman filter to calculate the MLE.
When creating an ARMA object, initialize the variables \li{endog} (the data) and \li{order} (the order of the model).
The object can then be fitted based on the MLE using a Kalman filter.

\begin{lstlisting}
from statsmodels.tsa.arima_model import ARMA
# Initialize the object with weather data and order (1,1)
model = ARMA(data,order=(1,1))
# Fit model using MLE and allowing for a constant if needed
model.fit(method='mle', trend='c')
\end{lstlisting}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/arma.pdf}
\caption{ARMA(1,1) forecast on \li{weather.npy}.}
\label{fig:forecasted}
\end{figure}

As in other problems, the data passed in should be the stationary time series.
The AIC of an ARMA model object is saved as the attribute \li{aic}.
Since the AIC is much faster to compute using \li{statsmodels}, model identification is much faster.
Once a model is chosen, the method \li{predict} will forecast over a specified range of indices and return the mean of each future observation.

\begin{lstlisting}
# Predict from the beginning of the model to 30 observations in the future
model.predict(start=0,end=len(data)+30)
\end{lstlisting}

\begin{problem}
Write a function \li{sm_arma()} that accepts a \li{file} containing a time series, maximum integer values for $p$ and $q$, and the number $n$ of values to predict.
Use \li{statsmodels} to perform model identification as in Problem \ref{prob:model-identification}, where the order of ARMA($i,j$) satisfies $1\leq i \leq p$ and $1 \leq j \leq q$.
Ensure the model is fit using the MLE.

Use the optimal model to predict $n$ future observations of the time series.
Plot the original observations along with the mean of each future observation given by \li{statsmodels}.
Return the AIC of the optimal model.

For $p=3, q=3$, and $n=30$, your graph should look similar to Figure \ref{fig:sm}.  
How does this graph compare to Problem \ref{prob:naive}? 
Problem \ref{prob:arma:forecast}?
\label{prob:statsmodels}
\end{problem}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/sm.pdf}
\caption{Statsmodels ARMA(3,1) forecast on \li{weather.npy}.}
\label{fig:sm}
\end{figure}

The \li{ARMA} class can also perform model identification.
The method \li{arma_order_select_ic} will find the optimal order of the ARMA model based on certain criteria.
The first parameter \li{y} is the data.
The data must be a NumPy array, not a Pandas DataFrame.
The parameter \li{ic} defines the criteria to be minimized.
The method will return a dictionary, from which the minimal order for each criterion can be accessed.
%\footnote{http://www.statsmodels.org/devel/generated/statsmodels.tsa.stattools.arma_order_select_ic.html}

\begin{lstlisting}
>>> import statsmodels.api as sm
>>> from statsmodels.tsa.stattools import arma_order_select_ic as order_select
>>> import pandas as pd

>>> # Get sunspot data and give DateTimeIndex
>>> sunspot = sm.datasets.sunspots.load_pandas().data[['SUNACTIVITY']]
>>> sunspot.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))

>>> # Find best order where p < 5 and q < 5
>>> # Use AIC and BIC as the bases for minimization
>>> order = order_select(sunspot.values,max_ar=4,max_ma=4,ic=['aic','bic'],fit_kw={'method':'mle'})
>>> print(order['aic_min_order'])
(4,2)
>>> print(order['bic_min_order'])
(4,2)
\end{lstlisting}

The method \li{plot_predict} accepts a time series and plots the ARMA model alongside the original data in a given range.
The plot of the ARMA model is the mean calculated by ARMA at each data point, both known and future.
This method works by giving a range on which to plot the ARMA model.
This range can be given by indices (as in Problem \ref{prob:statsmodels}) or by a DateTimeIndex.

\begin{lstlisting}
>>> # Fit model
>>> model = ARMA(dta, (4, 2)).fit(method='mle')

>>> # Create plot
>>> fig, ax = plt.subplots(figsize=(13,7))
>>> # Plot from 1950 to 2012. 
>>> fig = model.plot_predict(start='1950', end='2012', ax=ax)

>>> ax.set_title('Sunspot Dataset')
>>> ax.set_xlabel('Year')
>>> ax.set_ylabel('Number of Sunspots')
>>> plt.show()
\end{lstlisting}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/sunspot.pdf}
\caption{Sunspot activity data is forecasted four years in the future using \li{statsmodels}.}
\label{fig:sunspot}
\end{figure}

\begin{problem}
The dataset \li{manaus} contains data on the height of the Rio Negro for every month between January 1903 and January 1993.
Write a function \li{manaus()} that accepts the forecasting range as strings \li{start} and \li{end}, the maximum parameter for the AR model \li{p}, and the maximum parameter of the MA model \li{q}.
The parameters \li{start} and \li{end} should be strings corresponding to a DateTimeIndex in the form \li{Y\%M\%D}, where \li{D} is the last day of the month.

The function should determine the optimal order for the ARMA model based on the AIC and the BIC.
Then forecast and plot on the range given for both models and compare.
Return the order of the AIC model and the order of the BIC model, respectively.
For the range \li{'1983-01-31'} to \li{'1995-01-31'}, your plot should look like Figure \ref{fig:manaus}.

(Hint: The data passed into \li{arma_order_select_ic} must be a NumPy array. 
Use the attribute \li{values} of the Pandas DataFrame.)

To get the \li{manaus} dataset and set it with a DateTimeIndex, use the following code:
\begin{lstlisting}
>>> # Get dataset
>>> raw = pydata('manaus')
>>> # Convert to DateTimeIndex
>>> manaus = pd.DataFrame(raw.values,index=pd.date_range('1903-01','1993-01',freq='M'))
>>> manaus = manaus.drop(0,axis=1)
>>> # Set new column title
>>> manaus.columns = ['Water Level']
\end{lstlisting}
\label{prob:manaus}
\end{problem}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/manaus.pdf}
\caption{AIC and BIC based ARMA models of \li{manaus} dataset.}
\label{fig:manaus}
\end{figure}

% Additional Materials

\pagebreak

\section*{Additional Materials}

\subsection*{Finding Error Correlation}
To find the correlation of the current error with past error, the noise of the data needs to be isolated.
Each data point $y_t$ can be decomposed as

\begin{equation}
y_t=T_t+S_t+R_t,
\end{equation}

where $T_t$ is the overall trend of the data, $S_t$ is a seasonal trend, and $R_t$ is noise in the data.
The overall trend is what the data tends to do as a whole, while the seasonal trend is what the data does repeatedly.
For example, if looking at airfare prices over a decade, the overall trend of the data might be increasing due to inflation.
However, we can break this data into individual years.
We call each year a season.
The seasonal trend of the data might not be strictly increasing, but have increases during busy seasons such as Christmas and summer vacations.

To find $T_t$, we use an $M$-fold method.
In this case, $M$ is the length of our season.
We define the equation

\begin{equation}
T_t=\frac{1}{M}\sum_{-M/2<i<M/2}y_{i+t}.
\end{equation}

This means for each $t$, we take the average of the season surrounding $y_t$.

To find the seasonal trend, first subtract the overall trend from the time series.
Define $x_t=y_t-T_t$.
The value of the seasonal trend can then be found by averaging each day of the season over every season.
For example, if the season was one year, we would find the average value on the first day of the year over all seasons, then the second, and so on.
Thus,

\begin{equation}
S_t=\frac{1}{K}\sum_{i\equiv t\text{ (mod $M$)}}x_i
\end{equation}
where $K$ is the number of seasons.

With the overall and seasonal trend known, the noise of the data is simply $R_t=y_t-T_t-S_t$.
To determine the strength of correlations between the current value and the past error, plot $y_t$ vs. 
$R_{t-i}$ as in Figure \ref{fig:correlations}.

\subsection*{Proof of Equation \ref{eq:update-z}}
Assume that $z_{t-i}-\mu=H\hat{\textbf{x}}_{t-i|t-i-1}$ for the past observations, and recall that $x_t=\sum_{i=1}^r\phi_ix_{t-i}+a_t$. Then
\begin{align}
    z_t-\mu&=\sum_{i=1}^p\phi_i(z_{t-i}-\mu)+a_t+\sum_{j=1}^q\theta_ja_{t-j}\\
    &=\sum_{i=1}^p\phi_i(H\hat{\textbf{x}}_{t-i|t-i-1})+a_t+\sum_{j=1}^q\theta_ja_{t-j}\\
    &=\sum_{i=1}^r\phi_i\Bigl(x_{t-i}+\sum_{k=1}^{r-1}\theta_kx_{t-i-k}\Bigr)+a_t+\sum_{j=1}^{r-1}\theta_ja_{t-j}\\
    &=a_t+\sum_{i=1}^r\phi_ix_{t-i}+\sum_{j=1}^{r-1}\theta_j\Bigl(\sum_{i=1}^r\phi_ix_{t-j-i}+a_{t-j}\Bigr)\\
    &=a_t+\sum_{i=1}^r\phi_ix_{t-i}+\sum_{j=1}^{r-1}\theta_jx_{t-j}\\
    &=x_t+\sum_{j=1}^{r-1}\theta_jx_{t-j}\\
    &=H\hat{\textbf{x}}_{t|t-1},
\end{align}
so that $z_{t|t-1}=H\hat{\textbf{x}}_{t|t-1}+\mu$, which is Equation \ref{eq:update-z}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
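For reference, the decomposition described above can be sketched in a few lines. This is our illustration only: the helper name \li{decompose} and the edge handling (the averaging window is truncated near the ends of the series) are our own choices.

\begin{lstlisting}
import numpy as np

def decompose(y, M):
    """Split a series into overall trend T, seasonal trend S, and noise R,
    following the M-fold method above (M is the season length)."""
    n = len(y)
    # centered moving average over one season (truncated at the edges)
    T = np.array([y[max(0, t - M//2): min(n, t + M//2 + 1)].mean()
                  for t in range(n)])
    x = y - T
    # average entry k of the season over all seasons
    S = np.array([x[k::M].mean() for k in range(M)])
    S = np.tile(S, n // M + 1)[:n]
    R = y - T - S
    return T, S, R
\end{lstlisting}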
{"text": "\\section{Initialization}\n\\subsection{Subroutine deallocate\\_variables}\nThis subroutine deallocates variable-sized arrays located inside modules. It is called once at the beginning of the program to flush arrays in case the code terminated unexpectedly during the previous run, and again at the end to free the module variables.\n\n\\subsection{Subroutine assign\\_constants}\nThis subroutine assigns numeric constants like zero, one, two, $\\pi$, etc. and remembers these values in a module. This feature is useful for two main reasons:\n\\begin{enumerate}\n\\item Changing precision is easy, since these constants are not hard-coded to single or double precision in later routines.\n\\item The code looks more like natural English, and will still make sense when read by a person. (e.g. dyn\\_pressure = half * rho * V * V)\n\\end{enumerate}\n\nThis subroutine also assigns the quadrature weights and locations along the span for the rotor blade, and also the azimuthal locations where blade loads are sampled for trim. The Gauss-Legendre weights and locations are computed in the interval [0,1] by the subroutine \\textbf{gqgen2}, which is a modified expansion of a carry-over routine from Heli-UM. Quadrature locations and weights are stored in the module \\textbf{gaussian}. \n\n\\subsection{Subroutine gqgen2}\nThis subroutine generates the Gauss-Legendre quadrature points in the interval [0,1] for a user-specified number of points. This routine is called twice in assign\\_constants: once for the spatial locations, and another time for the azimuthal locations within a time element. The quadrature points are the roots of Legendre polynomials $P_n(z)$, which obey some interesting recurrence relations. We will use them to generate the roots of the Legendre polynomial. The logic is explained below.\n\n\\subsection*{\\textbf{Numerical Integration}}\n\nThe external loads on rotating and non-rotating beams are integrated numerically along the span of the blades to determine the forces and moments transmitted to the hub. These integrals are computed numerically using Gaussian Quadrature using weighted summation of the integrand values at specific points along the span. \n\\[ \\textbf{I} \\quad = \\quad \\int \\textrm{ }f(r) \\textrm{ } dr \\]\n\\textbf{I} denotes the integral, $f$ the integrand and $r$ the spanwise coordinate along the deformed elastic axis. Using numerical quadrature, \n\\[ \\textbf{I} \\quad \\approx \\quad \\sum_{i=1}^{n} w_i \\textrm{ } f(r_i) \\]\nThe points $r_i$ are the zeros of the Legendre polynomials, which obey some recurrence relations. The first relation is known as Bonnet's recursion formula, given by \n\\[ n \\textrm{ } P_n(x) \\quad = \\quad (2n-1) \\textrm{ } x \\textrm{ } P_{n-1}(x) \\quad - \\quad (n-1) \\textrm{ } P_{n-2}(x) \\]\n$P_n(x)$ is a Legendre polynomial of order ``n'', given by Rodrigues' formula\n\\[ P_n(x) \\quad = \\quad \\frac{1}{2^n \\textrm{ }n!} \\textrm{ }\\frac{d^n}{dx^n} \\textrm{ }\\left[\\left(x^2\\textrm{ }-\\textrm{ }1\\right)^n\\right] \\]\nAnother recurrence relation is \n\\[ \\frac{x^2\\textrm{ } -\\textrm{ } 1}{n} \\textrm{ } \\frac{d P_n(x)}{dx} \\quad = \\quad (2n+1) \\textrm{ } x \\textrm{ } P_n(x) \\quad - \\quad n \\textrm{ } P_{n-1}(x) \\]\n$n$ is the user-specified number of quadrature points. The Gauss-Legendre quadrature weights corresponding to each of the locations is given by (Ref. 
\\cite{Abramowitz1964})\n\\[ w_i \\quad = \\quad \\frac{2}{\\left(\\textrm{ } 1\\textrm{ } -\\textrm{ } x_i^2\\textrm{ } \\right) \\textrm{ } \\left[ P_n \\textrm{}^\\prime (x_i)\\right]^2} \\]\nUsing iterative convergence, the quadrature locations and the slope of the Legendre polynomial may be identified by using the recurrence relations, starting from an initial guess given by \n\\[  x_i \\quad = \\quad \\cos \\left[\\pi \\frac{4 i - 1}{4n + 2}\\right] \\qquad \\qquad i = 1,2,3,\\cdots,n\\]\nThese locations are generated assuming that the integration range is $[-1,1]$. The quadrature locations and weights may be transformed for use over other limits using a change of coordinates along $x$.\n\n\\subsection*{\\textbf{Intermediate Quadrature}}\nThe simulation of beam dynamics requires the computation of the accumulated external loads, from the tip to a certain location of interest. Additionally, the axial fore-shortening $u$ and the Euler rotation angle $\\theta_1$ require the accumulated values of certain integrations from the root value to the radial location at which these quantities are evaluated. These ``intermediate'' integrals (so labelled because the limits lie between nodes of finite elements) are evaluated by fitting a polynomial to the sampled values of the integrand within the finite element, and integrating the \\emph{fitted} polynomial to reduce computational cost. \n\nAssume that the integrand $f$ is sampled at $n$ points $x_1$, $x_2$, $\\cdots$, $x_n$. Let the corresponding values of $f$ at these points be $f_1$, $f_2$, $\\cdots$, $f_n$. Let the \\textit{approximate} integrand $g(x)$ be represented using a polynomial, given by \n\\[ f(x) \\quad \\approx \\quad g(x) \\quad = \\quad \\sum_{i = 1}^{n} a_{i-1} \\textrm{ } x^{i-1} \\]\nThe coefficients $a_i$, $i$ = 1, 2, $\\cdots$, $n$ are determined from the sampled values of the true integrand $f(x)$ at locations $x_i$ using polynomial interpolation. If the approximate integrand matches the true integrand at ($x_1$, $x_2$, $\\cdots$, $x_n$) then\n\\[ \\begin{Bmatrix} f(x_1) \\\\ f(x_2) \\\\ \\cdot \\\\ \\cdot \\\\ f(x_n) \\end{Bmatrix} \\quad = \\quad \\begin{bmatrix} 1 & x_1 & x_1^2 & \\cdots & x_1^{n-1} \\\\ 1 & x_2 & x_2^2 & \\cdots & x_2^{n-1} \\\\ \\cdot  & \\cdot & \\cdot & \\cdot & \\cdot \\\\ \\cdot  & \\cdot & \\cdot & \\cdot & \\cdot \\\\ 1 & x_n & x_n^2 & \\cdots & x_n^{n-1} \\end{bmatrix} \\quad  \\begin{Bmatrix} a_0 \\\\ a_1 \\\\ \\cdot \\\\ \\cdot \\\\ a_{n-1} \\end{Bmatrix} \\]\nThis system of linear equations may be written as a matrix-vector product\n\\[ \\vector{f} \\quad = \\quad \\textbf{C} \\textrm{ } \\vector{a} \\]\n\\textbf{C} is invertible as long as all quadrature locations are unique. The polynomial coefficients $\\vector{a}$ are obtained by inverting \\textbf{C} to yield \n\\[ \\vector{a} \\quad = \\quad \\textbf{C}^{-1} \\textrm{ } \\vector{f} \\]\nThe approximate integrand may be integrated along the span of the element using different limits, depending on the quantity of interest. For displacement quantities (Euler rotation $\\theta_1$ and axial fore-shortening $u$) the integration limits are from the left end of the element to the quadrature point of interest. 
The integrated value is \n\\[ \\textrm{I}_\\textrm{inboard}(x_1) \\quad = \\quad \\int_{0}^{r_{_1}} \\textrm{ } g(r_{_1}) \\textrm{ }  dr_{_1} \\quad = \\quad \\frac{dr}{dx} \\textrm{ } \\sum_{i = 1}^{n} a_{i-1} \\textrm{ } \\frac{x_1^{i}}{i} \\]\n$\\frac{dr}{dx}$ represents the scale factor between the non-dimensional coordinate $x$ and the dimensional coordinate $r$. The summation may be written as a matrix-vector product \n\n\\[ \\textrm{I}_\\textrm{inboard}(x_1) \\quad = \\quad \\frac{dr}{dx} \\quad \\begin{Bmatrix} x_1 & & \\frac{1}{2} x_1^2 & & \\cdots & &\\frac{1}{n-1} x_1^{n-1} \\end{Bmatrix} \\textrm{ } \\begin{Bmatrix} a_0 \\\\ a_1 \\\\ \\cdot \\\\ \\cdot \\\\ a_{n-1} \\end{Bmatrix} \\]\nUsing the same notation for the integrals at the other quadrature points, we obtain\n\\[ \\begin{Bmatrix} \\textrm{I}(x_1) \\\\ \\textrm{I}(x_2) \\\\ \\cdot \\\\ \\cdot \\\\ \\textrm{I}(x_n)\\end{Bmatrix}_\\textrm{inboard} = \\quad \\frac{dr}{dx} \\textrm{ } \\begin{bmatrix} x_1 & & \\frac{1}{2} x_1^2 & & \\cdots & &\\frac{1}{n-1} x_1^{n-1} \\\\ x_2 & & \\frac{1}{2} x_2^2 & & \\cdots & &\\frac{1}{n-1} x_2^{n-1} \\\\ \\cdot & & \\cdot & & \\cdot & & \\cdot \\\\ \\cdot & & \\cdot & & \\cdot & & \\cdot \\\\ x_n & & \\frac{1}{2} x_n^2 & & \\cdots & &\\frac{1}{n-1} x_n^{n-1} \\end{bmatrix} \\begin{Bmatrix} a_0 \\\\ a_1 \\\\ \\cdot \\\\ \\cdot \\\\ a_{n-1} \\end{Bmatrix}\n\\] \nThe vector of integrals $\\vector{I}$ may be written as another matrix vector product as \n\\[ \\vector{I}_\\textrm{inboard} \\quad = \\quad \\frac{dr}{dx} \\textrm{ }\\textbf{E} \\textrm{ } \\vector{a} \\quad = \\quad \\frac{dr}{dx} \\textrm{ }\\textbf{E} \\textrm{ }\\textbf{C}^{-1} \\textrm{ }\\vector{f} \\]\nSimilarly, the vector of integrals with limits from the current radial position to the outboard end of the finite element is \n\\[ \\vector{I}_\\textrm{outboard} \\quad = \\quad \\frac{dr}{dx} \\textrm{ }\\textbf{F} \\textrm{ } \\textbf{C}^{-1} \\textrm{ } \\vector{f} \\]\nThe entries in row $i$ and column $j$ of \\textbf{E} and \\textbf{F} are \n\\[ E(i,j) \\quad = \\quad E_{ij} \\quad = \\quad \\frac{1}{j} x_i^{\\textrm{ } j} \\qquad \\textrm{and} \\qquad F(i,j) \\quad = \\quad F_{ij} \\quad = \\quad \\frac{1}{j} \\textrm{ } \\left(1- x_i^{\\textrm{ } j}\\right) \\]\nThe matrices \\textbf{E} \\textbf{C}$^{-1}$ and \\textbf{F} \\textbf{C}$^{-1}$ are independent of loads and can be pre-computed. The term $\\frac{dr}{dx}$ is the dimensional length of the finite element $l_e$.\n\n\\section*{Reading Input Files}\nThis section explains the code logic governing file read operations and preprocessing steps that are performed prior to any simulation. This step includes reading input files and choosing rotor configurations, aircraft information, flight or wind-tunnel conditions, etc. The call graph is shown below, complete with all dependencies. The various subroutines that read program input files from the subfolder \\textbf{Inputs/} are listed below. For a detailed list of the individual inputs, please refer to the user manual. \n\n\\subsection{Subroutine read\\_solver\\_options}\nThis subroutine reads the program operations from the file \\textbf{Solver\\_options.input} that specifies the spatial and time discretization for the rotorcraft model, and chooses which of the operations to perform (modal reduction, trim, transfer functions and time marching). Based on the values in the input files, some input files may be omitted if they are not relevant to the configuration chosen. 
The solver options are stored in a derived type called \\textbf{SolverOptions}, in the module \\textbf{py\\_visible\\_globals}.\n\n\\begin{Figure}\n \\centering\n \\includegraphics[width=1.0\\linewidth]{images/read_pgmin_callgraph.png}\n \\captionof{figure}{Call Graph for Reading Program Input Files}\n \\label{fig:mpcg}\n\\end{Figure}\n\n\\subsection{Subroutine read\\_wake\\_inputs}\nThis subroutine reads the free wake discretization parameters from \\textbf{Free\\_wake.input}, if the flag to use the vortex wake model is enabled in \\textbf{Solver\\_options.input}. If the flag is set to \\textbf{.F.}, then this subroutine is bypassed. These parameters are stored in a derived type called \\textbf{Free\\_wake}, located in the module \\textbf{py\\_invisible\\_globals}.\n\n\\subsection{Subroutine read\\_composite\\_coupling}\nThis subroutine is not yet at the ``production stage''. It is meant to provide structural cross-couplings in the blade dynamics to represent composite material properties.\n\n\\subsection{Subroutine read\\_flight\\_condition}\nThis subroutine reads the file \\textbf{Flight\\_condition.input}, which specifies the vehicle speed, altitude, turn rate and climb angle, i.e. target trim conditions. These parameters are stored in a derived type called \\textbf{FlightParam}, in the module \\textbf{py\\_visible\\_globals}.\n\n\\subsection{Subroutine read\\_airloads}\nThis subroutine is used for CFD-CSD coupling, but may be bypassed by setting the appropriate flags to \\textbf{.F.} in \\textbf{Solver\\_options.input}. It reads airloads (section C$_\\ell$, C$_d$ and C$_m$) along the span and around the azimuth for all rotors over one revolution, and returns interpolated loads through a derived type \\textbf{Airloads} in the module \\textbf{py\\_visible\\_globals}. This subroutine calls three subroutines:\n\n\\subsubsection{Subroutine open\\_and\\_read}\nThis subroutine reads airloads distributions from disk, given a file unit identifier. The airloads must be in the OVERTURNS format. \n\n\\subsubsection{Subroutine interp\\_airloads}\nThis subroutine interpolates airloads distributions from OVERTURNS radial and azimuthal locations to those required by the CSD solver, using a custom routine written for polar coordinates. This subroutine calls a subroutine \\textbf{linterp\\_weights} to generate linear interpolation weights and indices along the span and azimuth. \n\n\\subsubsection{Subroutine test\\_write}\nThis subroutine writes the airloads from a derived type to various output files to verify that the interpolation did not fail.\n\n\\subsection{Subroutine read\\_tunnel\\_targets}\nThis subroutine reads the wind tunnel conditions from the file \\textbf{Wind\\_tunnel.input}, but is bypassed in propulsive trim, indicated by the appropriate flags in \\textbf{Solver\\_options.input}. It reads shaft tilt, target hub loads, target swashplate controls and type of wind-tunnel trim to perform.\n\n\\subsection{Subroutine write\\_postprocessor}\nThis subroutine writes the output file \\textbf{options\\_trim.dat} that can be used by postprocessors to interpret the numbers in the batch run output files \\textbf{Outputs/Trim\\_01.dat} and \\textbf{Outputs/Trim\\_02.dat}. Refer to the user manual for more details on the individual outputs.\n\n\\subsection{Subroutine read\\_databases}\nThis subroutine reads the physical properties of the rotor blades, number of rotors, airfoil tables, fuselage aerodynamic force tables, etc., and stores these physical constants in a multiply nested derived type called \\textbf{Aircraft} (name preserved from HeliUM convention) in the module \\textbf{py\\_invisible\\_globals}. The purpose of this subroutine is to read all available databases and aircraft configurations, and selectively preserve the ones required for the current simulation. After reading individual blade and airfoil properties, the required rotor structural and aerodynamic characteristics are assembled. The required airfoil tables are stored in memory, and a map is created between the airfoil identifier and its storage location in memory to enable faster access. Finally, the blade structural properties are interpolated and stored at the quadrature points at each finite element in the subroutine \\textbf{assign\\_rotor\\_properties} under the derived type \\textbf{BeamProperties}, nested under the \\textbf{Blade} type, which is nested under the \\textbf{Rotor} type, and finally under the \\textbf{Aircraft} type. These numbers do not change during a simulation, and the \\textbf{Aircraft} type may be used with the \\textbf{intent(in)} keyword to enforce coding consistency. \n\nIn addition to obtaining structural properties for the CSD solver, the subroutine \\textbf{assign\\_rotor\\_properties} also obtains blade chord and twist for the free wake solver at equi-spaced radial locations between the blade root cut-out and tip.\n\n\\subsection*{Non-Dimensionalization}\nThe analysis has been performed in a non-dimensional form to avoid overflow and underflow errors. Table \\ref{tbl:nondiml} shows the reference parameters used to nondimensionalize the relevant physical quantities. $\\Omega_0$ is initially set to the rotor speed, but any value may be chosen for the time constant. This feature is extremely useful, since it allows for changing rotor RPM during batch runs.\n\\begin{table}[ht] \n\\begin{minipage}{\\columnwidth} \n\\centering\n\\caption{Fundamental Non-Dimensionalization Constants}\n\\label{tbl:nondiml}\n\\begin{tabular}{ll} \\hline \\hline\nPhysical Quantity & Reference Parameter\t  \t\\\\\n\\hline \nMass per unit span & Average blade mass per unit span m$_0$ = $\\displaystyle \\frac{W_\\textrm{b}}{g}$\\\\\nLength \t\t\t   & Beam Length (R) \\\\\nTime\t\t\t   & $\\Omega_0^{-1}$ \\\\ \n\\hline\n\\end{tabular} \\end{minipage}\n\\end{table}
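\n\nAs a companion to the \\textbf{gqgen2} discussion above, the following is a minimal Python sketch (not the Fortran routine itself; the function name, iteration cap and tolerance are illustrative) of the same logic: Newton iteration on $P_n$ using Bonnet's recursion and the derivative relation, started from the stated initial guess, then mapped from $[-1,1]$ to $[0,1]$:\n\\begin{verbatim}\nimport numpy as np\n\ndef gauss_legendre_01(n, tol=1e-14):\n    i = np.arange(1, n + 1)\n    x = np.cos(np.pi * (4*i - 1) / (4*n + 2))  # initial guess on [-1,1]\n    for _ in range(100):\n        p0, p1 = np.ones_like(x), x            # P_0, P_1\n        for k in range(2, n + 1):              # Bonnet's recursion up to P_n\n            p0, p1 = p1, ((2*k - 1)*x*p1 - (k - 1)*p0) / k\n        dp = n * (x*p1 - p0) / (x**2 - 1)      # slope from the recurrence\n        dx = p1 / dp                           # Newton step toward P_n = 0\n        x -= dx\n        if np.max(np.abs(dx)) < tol:\n            break\n    w = 2.0 / ((1 - x**2) * dp**2)             # weights on [-1,1]\n    return (x + 1) / 2, w / 2                  # change of coordinates to [0,1]\n\nx, w = gauss_legendre_01(4)\nprint(w @ x**3)   # 0.25: integrates x^3 exactly on [0,1]\n\\end{verbatim}\nAn $n$-point rule integrates polynomials up to degree $2n-1$ exactly, which is why four points more than suffice for the cubic test above.\n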
{"text": "\\section{The \\variant{\\edgetransitions} problem}\n\\label{sec:edges}\n\nWhereas \\variant{\\nodeitems} (Problem~\\ref{problem:nodes-variant}) seeks $k$ \nnodes to optimize expected uncertainty, \\variant{\\edgetransitions} seeks\n$k$ edges.\n\\begin{problem}[{\\edgeproblem}]\nGiven $G=(V,E)$, transition matrix $\\transition$, \ninitial distribution of items to nodes\n$\\initial$ and integer $k$, find\n$S\\subseteq V$ such that $|S|=k$ such that \n$\\objective_\\shortedgetransitions\\left(S\\right)$\n(Equation~\\eqref{eq:edgetransitions}) is minimized.\n\\label{problem:edges-variant}\n\\end{problem}\nWe provide two polynomial-time algorithms to solve\nthe problem, namely \\edgeDP\\ and \\edgegreedy.\nFor the former, we can also prove that it is optimal, and thus Problem~\\ref{problem:edges-variant}\nis solvable in polynomial time.\n\n\\spara{The {\\edgeDP} algorithm:}\n{\\edgeDP} is a dynamic-programming algorithm that \nselects edges in two steps: first, it sorts the {\\it outgoing} \nedges of\neach node in decreasing order of transition probability, thus\ncreating $|V| = n$ corresponding lists; secondly, it combines top edges from\neach list to select a total of $k$ edges.\n%The algorithm is polynomial and optimal.\n\nIn more detail, let $SOL_i(k)$ be the cost\nof an optimal solution for the special case of\na budget of $k$ edges allocated among outgoing edges of nodes\n$V_{i:} = \\{i, i+1, \\ldots, n\\}$.\nAccording to this notational convention,\nthe cost of an optimal solution $D_{opt}$ for the problem is given by \n$SOL_1(k)$.\n%  -- \n% and we define $SOL_i(k) = \\infty$ for $k < 0$ and $SOL_i(k) = 0$ \n% for $i > \\|V\\| = n$.\nMoreover, considering Equation~\\eqref{eq:edgetransitions},\nlet $\\objective_i$ be the function that corresponds to\nthe (outer) summation term for node $i$\n\\begin{equation}\n\t\\objective_i(D) = \\initial^{\\prime\\prime}(i)\\sum_{v\\in V\\setminus S}\\transition^{\\prime\\prime}(i,v)\\left(1-\\transition^{\\prime\\prime}(i,v)\\right)\n\\end{equation}\n(under the auxilliary definitions of Equations \\eqref{eq:etinit}\nand \\eqref{eq:ettransit})\nand $ISOL_i(m)$ its optimal value when $D$ contains no more than\n$m\\leq k$ outgoing edges from node $i$.\nLet also $D_i^m$ be a subset of $k$ outgoing edges of $i$\nwith the highest transition\nprobabilities.\nIt can be shown that the optimal value\nfor $\\objective_i(D)$ is achieved for \nthe edges $D_i^k$ with {\\it highest} transition \nprobability\\footnote{For a proof,\nsee Supplementary Material, Lemma~\\ref{lemma:singlenodeoptimality}.}.\nHaving the outgoing edges of $i$ sorted by transition probability,\nwe can compute $ISOL_i(m)$ for all $m = 0\\ldots k$.\n\nThe dynamic programming equation is: \n\\begin{equation}\n\tSOL_i(k) = \\argmin_{0\\leq m\\leq k}\\{ISOL_i(m) + SOL_{i+1}(k - m)\\}\n\t\\label{eq:dptop}\n\\end{equation}\n\\edgeDP\\ essentially computes and keeps in memory \n$\\|V\\| \\times (k+1)$ values according to Eq.\\eqref{eq:dptop}.\n\nGiven the above, we have the following result:\n\n\\begin{lemma}\nThe {\\edgeDP} algorithm is optimal for the {\\edgeproblem} problem.\n\\end{lemma}\n\n% \\begin{algorithm}\n% \\LinesNumbered\n%  \\KwIn{k}\n%  \\KwOut{SOL: Dynamic programming array}\n%  initialize empty array $SOL_{\\|V\\| \\times (k+1)}$;\\\\\n%  \\For{i = $\\|V\\|$..1}{\n%   \\For{$k'$ = 0:k}{\n%     SOL[$i$, $k'$] := $\\argmin_{0\\leq k_i\\leq k'}\\{ISOL_i(k_i) + SOL[i+1, k' - k_i]\\}\\}$\n%    }\n%  }\n%  \\Return SOL;\n%  \\caption{Dynamic programming algorithm for the 
\\edgetransitions\\ variant.}\n%  \\label{alg:edgeDP}\n% \\end{algorithm}\n\nFor a proof sketch of this lemma see Supplementary Material, \nTheorem~\\ref{theorem:edgeDP}.\n\n\n\\emph{Running time:} \\edgeDP\\ \ncomputes $k\\times |V|$ values. For each value to be\ncomputed, up to $O(k)$ numerical operations are performed. Therefore,\n\\edgeDP\\ runs in $O(k^2|V|)$ operations. Backtracking to retrieve the\noptimal solution requires at most equal number of steps, so it does not\nincrease the asymptotic running time.\n\n\n\n\n\n\\spara{The {\\edgegreedy} algorithm:}\n\\edgegreedy\\ is a natural greedy algorithm that selects\n$k$ edges in an equal number of steps, in each step selecting\none more edge to minimize $\\objective_{\\shortedgetransitions}$.\n% \\edgegreedy\\ is described in Algorithm~\\ref{alg:edgegreedy}.\n\n\n% \\begin{algorithm}\n% \\LinesNumbered\n%  \\KwIn{k}\n%  \\KwOut{$ResultEdges$: Set of selected edges}\n%  $ResultEdges$ = $\\{\\}$ \\\\\n%  \\For{$i = 1 \\cdots k$}{\n%  \tSelect $e\\in E$ := \n%  \t\t$\\argmin \\objective_{\\shortedgetransitions}(ResultEdges \\cup \\{e\\})$ \\\\\n%  \t$ResultEdges$ := $ResultEdges$ $\\cup$ \\{e\\}; \\\\\n%  \tE := E $\\setminus$ \\{e\\}; \\\\\n%  }\n%  \\Return $ResultEdges$;\n%  \\caption{Greedy algorithm for the \\edgetransitions\\ variant.}\n%  \\label{alg:edgegreedy}\n% \\end{algorithm}\n\n\n%Unfortunately, we have not been able to prove or dispove optimality\n%for \\edgegreedy. \n\nIn all our experiments the performance of {\\edgegreedy} is the same as the performance of\nthe optimal {\\edgeDP} algorithm. However, we do not have a proof that the former is also optimal. We leave this as a problem for future work.\n\n\\emph{Running time:} Following Equation \\eqref{eq:edgetransitions},\nto select $k$ edges,\n\\edgegreedy\\ %as defined in Algorithm~\\ref{alg:edgegreedy}\ninvokes up to $k\\times O(|E|)$ evaluations of\n$\\objective_{\\shortedgetransitions}$.\n% ; and each evaluation of\n% $\\objective_{\\shortedgetransitions}$ involves $O(|E|)$ numerical operations.\nAs we discussed for \\nodegreedy, if the evaluation of the \nobjective function is naively implemented with a double summation,\nthe running time of \\edgegreedy\\ is \n$O(k|E||V|^2)$ numerical operations.\nIf the objective function is implemented as a summation over edges,\nthe running time improves to $O(k|E|^2)$.\nFurthermore, following the observations similar to those we saw for {\\nodegreedy}, the\nrunning time of \\edgegreedy\\ becomes $O(|E| + k|E|) = O(k|E|)$.\n\nWe notice that \\edgeDP\\ has better\nperformance than \\edgegreedy\\ for dense graphs ($|E|\\approxeq |V|^2$) \nand small $k$. 
Moreover, as with \\nodegreedy, \\edgegreedy\\ is \namenable to parallelization - the new value of the objective function\ncan be computed in independently for each edge that's considered for\nselection.\n\n\n\n% We have the following theorem for \\edgegreedy.\n% \\begin{theorem}\n% \\edgegreedy\\ is optimal for the \\edgetransitions\\ variant of \n% the \\mcproblem\\ problem.\n% \\label{theorem:edgegreedy}\n% \\end{theorem}\n% The proof for Theorem~\\ref{theorem:edgegreedy} mirrors the proof for \n% Theorem~\\ref{theorem:nodegreedy}, adjusted to the selection of edges instead\n% of nodes - and is omitted for brevity.\n\n\n", "meta": {"hexsha": "ee24ee301bc3236cb77d2b20aafc9e564ad1cb7a", "size": 6455, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/SDM-2018/edges.tex", "max_stars_repo_name": "chdhr-harshal/MCMonitor", "max_stars_repo_head_hexsha": "330fc1a8f8cf83620fd6b0e503707c91e97af16d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-04T20:35:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-05T09:06:43.000Z", "max_issues_repo_path": "paper/SDM-2018/edges.tex", "max_issues_repo_name": "chdhr-harshal/MCMonitor", "max_issues_repo_head_hexsha": "330fc1a8f8cf83620fd6b0e503707c91e97af16d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/SDM-2018/edges.tex", "max_forks_repo_name": "chdhr-harshal/MCMonitor", "max_forks_repo_head_hexsha": "330fc1a8f8cf83620fd6b0e503707c91e97af16d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-05T09:10:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-05T09:10:41.000Z", "avg_line_length": 37.7485380117, "max_line_length": 146, "alphanum_fraction": 0.7223857475, "num_tokens": 1959, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5784865597323094}}
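\n\nTo illustrate the recursion of Equation~\\eqref{eq:dptop}, the following is a minimal Python sketch (hypothetical, not the paper's implementation). Since the double-primed quantities of Equations~\\eqref{eq:etinit} and \\eqref{eq:ettransit} are defined elsewhere, the per-node cost $ISOL_i(m)$ is modelled here in simplified form: keeping the $m$ largest-probability outgoing edges of node $i$ removes their $p(1-p)$ terms from the cost.\n\\begin{verbatim}\nimport numpy as np\n\ndef edge_dp(P, init, k):\n    # P: n x n transition matrix, init: initial distribution over nodes.\n    n = len(P)\n    # ISOL[i][m]: cost at node i when its m largest edges are selected.\n    ISOL = np.empty((n, k + 1))\n    for i in range(n):\n        p = np.sort(P[i][P[i] > 0])[::-1]   # descending probabilities\n        terms = init[i] * p * (1 - p)\n        for m in range(k + 1):\n            ISOL[i, m] = terms.sum() - terms[:m].sum()\n    # SOL[i][b]: optimal cost for budget b spent on nodes i..n-1;\n    # SOL[n][b] = 0 is the base case.\n    SOL = np.zeros((n + 1, k + 1))\n    for i in range(n - 1, -1, -1):\n        for b in range(k + 1):\n            SOL[i, b] = min(ISOL[i, m] + SOL[i + 1, b - m]\n                            for m in range(b + 1))\n    return SOL[0, k]\n\\end{verbatim}\nThe two nested loops perform $O(k^2|V|)$ work, matching the stated running time; backtracking through the minimizing $m$ values recovers the selected edges.\n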
{"text": "%!TEX root = ../TTT4150-Summary.tex\n\\section{Coordinate frames}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Reference frames}\n\n\\subsubsection{Notation}\n\\begin{itemize}\n    \\item $\\V{v}^n$---velocity given in frame $n$\n    \\item $\\M{R}_p^n$---rotation from frame $p$ to $n$\n    \\item $\\V{\\omega}_{ip}^p$---angular rate of frame $p$ w.r.t. $i$, represented in $p$.\n\\end{itemize}\n\n\\subsubsection{Inertial frames and the ECI frame $\\{i\\}$}\nA frame in which Newton's laws of motion apply, i.e. it does not accelerate. All inertial sensors measure relative to an inertial frame. \\emph{Earth-centered intertial} (ECI) is an example of this, with origin at the Earth center, not rotating with the earth. (But as the Earth rotates around the sun, ECI is not exactly inertial.)\n\n\\subsubsection{ECEF frame $\\{e\\}$}\nEarth-centered, earth-fixed. Rotates relative to ECI with $\\omega_{ie} \\approx \\SI{7.292e-5}{\\radian\\per\\second}$.\n\n\\subsubsection{Geographic frame $\\{g\\}$}\nOrigin equal to platform origin projected onto the reference ellipsoid. $z$-axis down, normal to ellipsoid surface. $x$-axis north, $y$-axis east. Not inertial.\n\n\\subsubsection{Geocentric frame}\nSimilar to the geographic frame, but $z$-axis down toward Earth center. Also not inertial.\n\n\\subsubsection{Tangent plane}\nNorth, east, down as used in daily life. Origin at a fixed point on Earth. Coincides with the geographic frame when the system origin coincides with the tangent frame origin.\n\n\\subsubsection{Body frame $\\{b\\}$}\nAttached to the vehicle of interest, often origin at center of gravity. $x$-axis forward, $z$-axis down, and $y$-axis completes the right-handed orthogonal coordinate system (i.e. to the right).\n\n\\subsubsection{Platform frame}\nFrame centered somewhere on the sensor platform. Usually non-moving wrt. the body frame, so $\\M{R}_p^b$ is constant.\n\n\\subsubsection{Instrument frame}\nThe frame along which an instrument measures. (An IMU measures acceleration along the instrument axes, and rotation rate around the instrument frame axes.)\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Ellipsoids}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=.6\\linewidth]{img/ellipsoid}\n    \\caption{An oblate ellipsoid}\n    \\label{fig:ellipsoid}\n\\end{figure}\nAn ellipsoid is defined by its semi-major axis $a$ and semi-minor axis $b$ (see Figure \\ref{fig:ellipsoid}). Its \\emph{eccentricity} $e$ is defined by\n\\begin{equation}\n    e^2 = \\frac{a^2 - b^2}{a^2} .\n\\end{equation}\nand its flattening $f$ is\n\\begin{equation}\n    f = \\frac{a-b}{a} .\n\\end{equation}\n\n\\subsubsection{Radii of curvature}\n$M$ is the radius of curvature of a meridian at a given geodetic latitude $\\phi$. $N$ is the radius of curvature normal to the meridian at a given $\\phi$. Because the Earth is oblate, $M$ is largest at the poles, and smallest at the equator. At the poles, $N = M$, because the radius is the same in all directions.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Geodetic coordinates}\n\nDefined by\n\\begin{itemize}\n    \\item latitide $\\phi$,\n    \\item longitude $\\lambda$, and\n    \\item height $h$ above the ellipsoid.\n\\end{itemize}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Definitions of height}\n\nThree surface models are used: The ellipsoid surface, the geoid surface, and the true surface of the earth. 
The \\emph{geoid} is based on what the mean sea level would be based on local gravitation and the earth rotation.\n\n\\begin{itemize}\n    \\item Orthometric height: Height above the geoid.\n    \\item Geoid height: Height of the geoid above/below the ellipsoid.\n    \\item Ellipsoidal height: Height above the ellipsoid, measured normal to the surface. This is what GPS uses.\n\\end{itemize}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Earth-Centered, Earth-Fixed (ECEF) coordinates}\n\nDefined by Cartesian coordinates $X$, $Y$, and $Z$, along the axes in Figure \\ref{fig:ellipsoid}. The positive $x$-axis intersects the surface at $\\phi = \\lambda = 0^\\circ$. (The frame is fixed to Earth and rotates with it.)\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Conversion from geodetic to ECEF}\n\nFor a sphere of radius $R$ we have\n\\begin{equation}\n\\begin{split}\n    X &= (R+h) \\cos \\phi \\cos \\lambda \\\\\n    Y &= (R+h) \\cos \\phi \\sin \\lambda \\\\\n    Z &= (R+h) \\sin \\phi .\n\\end{split}\n\\end{equation}\n\nFor an ellipsoid with semi-minor axis $a$ and semi-major axis $b$, we have\n\\begin{equation}\n\\begin{split}\n    X &= (N + h) \\cos \\phi \\cos \\lambda \\\\\n    Y &= (N + h) \\cos \\phi \\sin \\lambda \\\\\n    Z &= \\left( N[1-e^2] + h \\right) \\sin \\phi\n\\end{split}\n\\end{equation}\nwhere the \\emph{normal radius of curvature} is\n\\begin{equation}\n    N = \\frac{a}{\\sqrt{1-e^2 \\sin^2 \\phi}} .\n\\end{equation}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Conversion from ECEF to geodetic}\n\nEasy for a sphere:\n\\begin{equation}\n\\begin{split}\n    \\phi    &= \\atan \\frac{Z^2}{\\sqrt{X^2 + Y^2}} \\\\\n    \\lambda &= \\atan \\frac{Y}{X}                  \\\\\n    h       &= \\sqrt{X^2 + Y^2} \\cos \\phi + Z \\sin \\phi - R\n\\end{split}\n\\end{equation}\n\nFor an ellipse, ECEF to geodetic conversion can be done iteratively or by one of several closed-form solutions.\n\n\\subsubsection{Iterative method}\nThe longitude $\\lambda$ is found as\n\\begin{equation}\n    \\lambda = \\atan \\frac{X}{Y} .\n\\end{equation}\n\nThe latitude $\\phi$ is found by iterating\n\\begin{equation}\n    \\phi_{k+1}\n    =\n    \\atan\n    \\left(\n        \\frac{Z}{\\rho}\n        +\n        \\frac{N_k e^2 \\sin \\phi_k}{\\rho}\n    \\right)\n\\end{equation}\nwhere\n\\begin{equation}\n    \\rho = \\sqrt{X^2 + Y^2} .\n\\end{equation}\n\nWhen $\\phi$ is sufficiently accurate, the height $h$ is found by\n\\begin{equation}\n    h = \\rho \\cos \\phi + Z \\sin \\phi - N(1 - e^2 \\sin^2 \\phi)\n\\end{equation}\n\n\n\\subsubsection{Bowring's closed-form method}\nOnly valid near the Earth surface.\n\\begin{equation}\n\\begin{split}\n    \\phi &= \\atan \\frac{Z + e^2 b \\sin^3 \\mu}{\\rho - e^2 a \\cos^3 \\mu} \\\\\n    \\mu  &= \\atan \\frac{Z a}{\\rho b}\n\\end{split}\n\\end{equation}\n\n\\subsubsection{Vermeille's closed-form method}\nClosed-form, valid quite far from Earth (including satellite orbits). Really mathy.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Direction cosine matrix}\n\nYou have a vector $\\V{v}_1$ given in frame $\\phi_1$. The unit vectors along the axes of $\\phi_1$ are $I_1$, $J_1$, and $K_1$. 
Similarly, the unit vectors along the axes of $\\phi_2$ are $I_2$, $J_2$, and $K_2$.\n\nThen, the direction cosine matrix between the frames is\n\\begin{equation}\n    \\M{R}_2^1\n    =\n    \\begin{pmatrix}\n        \\cos(I_2, I_1) & \\cos(I_2, J_1) & \\cos(I_2, K_1) \\\\\n        \\cos(J_2, I_1) & \\cos(J_2, J_1) & \\cos(J_2, K_1) \\\\\n        \\cos(K_2, I_1) & \\cos(K_2, J_1) & \\cos(K_2, K_1) \\\\\n    \\end{pmatrix}\n\\end{equation}\n", "meta": {"hexsha": "e7f70260988cd3843ae955448d8c8f14fd92c0cc", "size": 6890, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TTT4150 Navigation systems/tex/sec-coordinate-frames.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TTT4150 Navigation systems/tex/sec-coordinate-frames.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TTT4150 Navigation systems/tex/sec-coordinate-frames.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4916201117, "max_line_length": 331, "alphanum_fraction": 0.6394775036, "num_tokens": 2043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936537604181, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5784865593399677}}
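\nAs a worked illustration of the conversions above, here is a minimal Python sketch (the WGS-84 constants and the fixed iteration count are assumptions for the example; they are not given in this summary):\n\\begin{verbatim}\nimport math\n\nA = 6378137.0            # WGS-84 semi-major axis a [m] (assumed)\nF = 1/298.257223563      # WGS-84 flattening f (assumed)\nE2 = F*(2 - F)           # e^2 = (a^2 - b^2)/a^2 = f(2 - f)\n\ndef geodetic_to_ecef(phi, lam, h):\n    N = A / math.sqrt(1 - E2*math.sin(phi)**2)\n    return ((N + h)*math.cos(phi)*math.cos(lam),\n            (N + h)*math.cos(phi)*math.sin(lam),\n            (N*(1 - E2) + h)*math.sin(phi))\n\ndef ecef_to_geodetic(X, Y, Z, iters=10):\n    lam = math.atan2(Y, X)\n    rho = math.hypot(X, Y)\n    phi = math.atan2(Z, rho)   # spherical starting value\n    for _ in range(iters):     # phi <- atan((Z + N e^2 sin(phi)) / rho)\n        N = A / math.sqrt(1 - E2*math.sin(phi)**2)\n        phi = math.atan2(Z + N*E2*math.sin(phi), rho)\n    h = rho*math.cos(phi) + Z*math.sin(phi) - N*(1 - E2*math.sin(phi)**2)\n    return phi, lam, h\n\np = geodetic_to_ecef(math.radians(60.0), math.radians(10.0), 100.0)\nprint(ecef_to_geodetic(*p))    # recovers (1.0472, 0.1745, ~100.0)\n\\end{verbatim}\nUsing atan2 instead of plain atan resolves the quadrant ambiguity in the longitude formula.\n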
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 10, 2014}\n\\maketitle\n\\section*{4.3 \\#21a,c}\nFind multiplicative inverses of the given elements in the given fields\n\\subsection*{a}\n$[a+bx]$ in $\\mathbb{R}[x]/\\langle x^2+1\\rangle$\n\n\\begin{align*}\n  [a+bx][c+dx]&\\equiv 1\\mod (x^2+1)\\\\\n  (a+bx)(c+dx)-1&=x^2+1\\\\\n  x^2+1&=q(a+bx)+r\\\\\n  x^2+1&=(bx+a)(\\frac{1}{b}x-\\frac{a}{b^2})+1+\\frac{a^2}{b^2}\n\\end{align*}\n\\subsection*{c}\n$[x^2-2x+1]$ in $\\mathbb{R}[x]/\\langle x^3-2\\rangle$\n\n$x^3-2=(x^2-2x+1)(x+2)+3x-4$\n\n$x^2-2x+1=(\\frac{1}{3}x-\\frac{2}{9})(3x-4)+\\frac{1}{9}$\n\n$1=9(x^2-2x+1)-9(\\frac{1}{3}x-\\frac{2}{9})(3x-4)$\n\n$1=(x^2-2x+1)(9+(3x-2)(x+2))-(x^3-2)(3x-2)$\n\n$[1]=[x^2-2x+1][3x^2+4x+5]-[x^3-2][3x-2]$\n\n$[0]=[x^3-2]$\n\n$[x^2-2x+1]^{-1}=[3x^2+4x+5]$\n\n\\section*{htrm}\nf has no repeated factors iff $\\gcd(f(x),f^{-1}(x))=1$\n\\subsection*{example}\n$K=Z_{p}(T)=\\{\\frac{f(t)}{g(t)}:f(t),g(t)\\in \\mathbb{Z}_p[T],g(t)\\ne 0\\}$ check that $K$ is a field\n\nlet $f(x)=x^p-T\\in K[x]$. claim that $f(x)$ is irreducible.\n\n\\section*{proposition}\nlet $p(x)in K[x]\\setminus \\{0\\}$ be irreducible with $\\deg p\\ge 1$. then $K[x]/\\langle p(x)\\rangle$ is a field.\n\\section*{proof}\nsind $p(x)$ is irreducible then $\\gcd(f(x),p(x))=1$ and so $[f(x)]$ has mult inverse.\n\n\\section*{thm}\ntake $f(x)\\in K[x]$ with $\\deg f\\ge 1$ then there exists an extension of $K$ called $L$ such that $f(x)$ has a root in $L$\n\\subsection*{proof}\nlet $p(x)$ be an irreducible factor of $f(x)$. now let $L=K[x]/\\langle p(x)\\rangle$. now $K\\to \\{[a]:a\\in K\\}\\subseteq L$ with $x\\to[x]$ is an isomorphism. let $u=[x]=L$ and $f(u)=p(u)g(u)=p([x])g(u)=0g(u)=0$\n\\end{document}\n\n\n", "meta": {"hexsha": "eeff7a751aca5cdedceecad8270d09bcbb637aa8", "size": 1818, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-11-10.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-11-10.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-11-10.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.3225806452, "max_line_length": 208, "alphanum_fraction": 0.599009901, "num_tokens": 865, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.57848655147783}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n% Master Thesis in Mathematics\n% \"Immersions and Stiefel-Whitney classes of Manifolds\"\n% -- Chapter 3: The Immersion Conjecture up to Cobordism --\n% \n% Author: Gesina Schwalbe\n% Supervisor: Georgios Raptis\n% University of Regensburg 2018\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n\n\\chapter{The Immersion Conjecture up to Cobordism}\\label{chap:brown}\n% Review the necessary results about the structure of the unoriented\n% cobordism ring and explain R. L. Brown\u2019s proof of the immersion\n% conjecture up to cobordism. [immersionconj], [brown]\nThe overall goal of this chapter is to prove the following theorem of\nR.~L.~Brown following his paper \\cite{brown},\nwhich essentially states that the immersion conjecture is true up to\nthe cobordism relation.\n\\begin{Thm}[Brown]\\label{thm:brown}\n  Every closed $n$\\nbd{}manifold is cobordant to an $n$\\nbd{}manifold that immerses\n  into $\\R^{2n-\\alpha(n)}$.\n\\end{Thm}\n\nAs one easily sees that this property is stable under\nthe ring operations (Lemma~\\ref{lem:brownstableunderringops}),\nthe main idea for the proof is to find manifolds\nfulfilling the conjecture whose cobordism classes form a generating set\nof the cobordism ring.\nAs the latter has the form of a polynomial algebra\n$\\Zmod2[\\sigma_i\\mid i\\neq 2^r-1]$, a set of elements\n$([\\G i]\\mid i\\neq2^r-1)$ will already be a generating set if all of the\n$[\\G i]$ are indecomposable.\nThus, the candidate generating elements that will be constructed\nin \\autoref{sec:proofbrown} need to be tested for:\n\\begin{enumerate}\n\\item indecomposability, which is the more lengthy part as it\n  requires the preliminary results from\n  \\autoref{sec:indecomposabilitycriterion} respectively\n  \\autoref{sec:twistedprod:indecompcriterion}, and\n\\item the property to fulfill the immersion conjecture.\n\\end{enumerate}\nThe odd generators are going to be constructed using twisted\nproducts, which are introduced in \\autoref{sec:twistedproddef}.\n\nThis chapter is structured into some preliminary work on finding a\ncriterion to easily detect indecomposable elements of the cobordism\nring in \\autoref{sec:indecomposableelements},\nthe twisted product construction and its properties---especially\nconcerning indecomposability---in\n\\autoref{sec:twistedprod}, and the final proof with the construction\nof the generating set in \\autoref{sec:proofbrown}, where all\npreliminary work is merged.\n\nFor clarity of presentation, both a couple of results from Thom's\npaper \\cite{thom} within the review in\n\\autoref{sec:cobordismringstructure} will merely be referenced without\nproof.\nThe reader is assumed to be familiar with symmetric polynomials.\n\nCall a manifold \\emph{indecomposable} if it\nrepresents an indecomposable element of the cobordism ring.\n\n\\section{Detecting Indecomposable Elements of the Cobordism Ring}\n\\label{sec:indecomposableelements}\nIn order to find representatives for a set of algebraically\nindependent generators of the polynomial cobordism ring, one needs a\nway to detect indecomposable elements. Indecomposable in this \ncontext means to not be expressible as a sum of products of lower\ndegree elements.\n\nThis section will follow an approach of Thom \n\\cite[Chapters~IV.5 and~IV.6]{thom}, in which a certain characteristic\nclass serves as indicator. 
This characteristic class is constructed\nout of Stiefel-Whitney classes using certain functions on symmetric\npolynomials (see \\autoref{sec:functions}), and becomes particularly\nuseful on tangent bundles of manifolds (see\n\\autoref{sec:swnumsofproductmfds}). The final indication lemma is then\nstated and proved in \\autoref{sec:indecomposabilitycriterion}.\n\n\\subsection{Special Properties of Symmetric Polynomials}\\label{sec:functions}\nThis subsection examines a special kind of polynomials that\nobey a product rule similar to \\ref{tag:cartan} whenever evaluated on\nelements of the form of a total Stiefel-Whitney class.\n\nAs a consequence, one can express certain combinations of\nStiefel-Whitney numbers of product manifolds in terms of those of their\nfactors, which will be investigated in detail in\n\\autoref{sec:swnumsofproductmfds}.\nFrom this, \\autoref{sec:indecomposabilitycriterion} will deduce a\nsimple criterion for whether a manifold represents an indecomposable\nelement of the cobordism ring.\n\nBeforehand, mind the following notation of partitions needed for\nsymmetrising polynomials.\n\\begin{Def}\\label{def:partition}\n  Let $k,l\\in\\Nat$ be integers.\n  \\begin{compactitemize}\n  \\item\n    A \\emph{partition} $\\Part=(i_1,\\dotsc,i_l)$ of $k$ is an unordered sequence\n    of integers such that $k=\\sum_{r=1}^{l}i_r$.\n    Two partitions only differing by zeros are considered equal.\n    In other words, a partition is an equivalence class of sequences in\n    $\\bigoplus_\\infty\\Nat$ under the relation $\\Part\\sim\\sigma(\\Part)$\n    for any permutation $\\sigma$.\n  \\item\n    The notation $I^l$ for a sequence of integers will mean a sequence\n    of length $l$. Write $I^l\\in\\Part$ for a sequence of length\n    $l$ in the equivalence class of the partition $\\Part$. 
\n  \\item\n    Denote by $\\PartitionsOf{k}$ the set of partitions of $k$.\n  \\item\n    Write $\\Emptypart$ for the unique partition of $0$.\n  \\item\n    Call a sequence or partition \\emph{non-dyadic}, if\n    none of its entries is of the form $2^m-1$.\n  \\item The concatenation of sequences, and analogously partitions, will\n    be denoted by\n    \\begin{align*}\n      \\Nat^{l_1} \\times \\Nat^{l_2}\n      &\\xrightarrow{-\\concat-}\n        \\Nat^{l_1+l_2}\n      \\\\\n      \\PartitionsOf{k_1} \\times \\PartitionsOf{k_2}\n      &\\xrightarrow{-\\concat-}\n        \\PartitionsOf{k_1+k_2}\n      \\\\\n      (i_1,\\dotsc,i_r), (j_1,\\dotsc,j_s)\n      &\\longmapsto\n        (i_1,\\dotsc,i_r,j_1,\\dotsc,j_s)\n    \\end{align*}\n  \\end{compactitemize}\n\\end{Def}\n\nAlso as preparation, recall some properties of symmetric polynomials\nand agree on a shorthand for symmetrised monomials.\n\\begin{LemDef}\n  Let $n\\in\\Nat$ and $\\Zmod2[t_1,\\dotsc,t_n]$ be the graded polynomial\n  algebra in $n$ variables over the fields $\\Zmod2$, each $t_i$ of\n  degree one.\n  \\begin{enumerate}\n  \\item Let\n    $\\Symm{n}_*\n    \\coloneqq \\Zmod2[t_1,\\dotsc,t_n]^{\\Permutations{n}}\n    \\subset \\Zmod2[t_1,\\dotsc,t_n]$\n    be the graded subalgebra of symmetric polynomials in $n$ variables.\n  \\item $\\Symm{n}_*$ has a basis indexed by partitions $\\Part$\n    consisting of symmetrised monomials, \\idest elements of the form\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\symm{n} t^\\Part \\coloneqq \\sum_{I^n\\in\\Part} t^I \\in \\Symm{n}_k\n    \\end{gather*}\n    where $t^{(i_1,\\dotsc,i_n)}\\coloneqq t_1^{i_1}\\dotsm t_n^{i_n}$.\n    \\begin{proof}\n      See also \\cite[footnote~2, p.~154]{thom}.\n      Linear independence is clear as monomials $t^I$ and $t^{I'}$ are\n      linearly independent if $I\\neq I'$, and sequences belonging to\n      different partitions must be unequal.\n      For the generating property note that every symmetric polynomial\n      $p$ is of the form $\\sum_{I^n\\in A}t^I$. 
This,\n      however, can be written as a sum of symmetrised monomials by\n      descending induction:\n      For a symmetric polynomial\n      \\begin{gather*}\n        \\SwapAboveDisplaySkip\n        p\n        = \\sum_{\\Part\\in B}\\symm{n} t^{\\Part}  + \\sum_{I^n\\in A}t^I\n        \\;,\n      \\end{gather*}\n      and $I'\\in A$, one must have $s(I')\\in A$ for\n      $s\\in\\Permutations{n}$, since\n      \\begin{gather*}\n        p-\\sum_{\\Part\\in B}\\symm{n} t^{\\Part} = \\sum_{I^n\\in A}t^I\n      \\end{gather*}\n      is still symmetric.\n      Hence one can write\n      \\begin{align*}\n        p\n        &= \\sum_{\\mathclap{\\Part\\in B}}\\symm{n} t^{\\Part}\n          + \\sum_{s\\in\\Permutations{n}}t^{s(I')}\n          + \\sum_{\\mathrlap{\n          I^n\\in A\\setminus\\{s(I')\\mid s\\in\\Permutations{n}\\}\n          }} t^I\n        \\\\\n        &= \\sum_{\\mathclap{\\Part\\in B\\cup\\{[I']\\}}}\\symm{n} t^{\\Part}\n          + \\sum_{\\mathrlap{\n            I^n\\in A\\setminus\\{s(I')\\mid s\\in\\Permutations{n}\\}\n          }} t^I\n        \\;,\n      \\end{align*}\n      and thus inductively decrease $\\# A$ to zero, expressing $p$\n      as a sum of symmetrised monomials.\n    \\end{proof}\n  \\item $\\Symm{n}_*$ is generated as algebra by the algebraically\n    independent elementary symmetric polynomials in $n$ variables\n    \\begin{gather*}\n      \\el i n\\coloneqq \\symm{n} t^{\\Part_i}\n      \\in \\Symm{n}_i\n      \\qquad \\text{for }\n      \\Part_i = (1,\\dotsc,1)\n      \\in \\PartitionsOf i\n    \\end{gather*}\n    for $1\\leq i\\leq n$.\n    \\Forexample\n    $\\el 1 n=\\sum_{r=1}^{n}t_r$,\n    $\\el 2 n=\\sum_{1\\leq r<s\\leq n} t_r t_s$.\n    \\begin{proof}\n      This is the well-known fundamental theorem on symmetric\n      polynomials, see \\forexample\n      \\cite[Chap.~4.4, Satz~1]{bosch2013algebra}. \n    \\end{proof}\n  \\item A simple calculation shows that the elementary symmetric\n    polynomials in $n$ variables fulfill\n    \\begin{gather}\n      \\SwapAboveDisplaySkip\n      \\label{eq:sumelemsymmpoly}\n      1 + \\sum_{i=1}^{n} \\el i n\n      = \\prod_{r=1}^{n}(1+t_r)\n    \\end{gather}\n  \\end{enumerate}\n\\end{LemDef}\n\nNow the desired polynomials can be defined.\nThe following definition is according to \\cite[p.~90]{milnorlectures}.\n\\begin{Def}\n  Let $k\\in\\Nat$, $\\Part\\in\\PartitionsOf k$, and let\n  $\\Zmod2[\\alpha_1,\\dotsc,\\alpha_k]$ be the polynomial ring in $k$\n  variables where $\\alpha_i$ has degree $i$.\n  \\begin{enumerate}\n  \\item\n    Define the homogeneous polynomial\n    $\\s{\\Part}\\in\\Zmod2[\\alpha_1,\\dotsc,\\alpha_k]$ of degree $k$ by\n    \\begin{gather*}\n      \\s{\\Part}(\\el 1 n,\\dotsc, \\el k n) = \\symm{n} t^{\\Part} \\in \\Symm{n}_k\n    \\end{gather*}\n    for some $n\\geq k$. 
This means that $\\s{\\Part}$ gives the representation\n    of $\\symm{n}t^\\Part$ in terms of the generating set $(\\el i n)_i$.\n    Mind that this\n    \\begin{enumerate}\n    \\item is well-defined, as the elementary symmetric\n      polynomials are algebraically independent generators, and\n    \\item does not depend on $n$ as long as $k\\leq n$.\n    \\end{enumerate}\n  \\item \n    Further, for a graded, commutative ring $A_*=\\bigoplus_{i\\geq 0}\n    A_i$ write elements as\n    \\begin{gather*}\n      a=\\sum_i a_i \\coloneqq (a_0,a_1,a_2,\\dotsc)\n      \\;, \\quad\\text{and}\n    \\end{gather*}\n  \\item \n    define the evaluation of $\\s{\\Part}$ on such an element $a$ as\n    \\begin{gather*}\n      \\s{\\Part}(a) \\coloneqq \\s{\\Part}(a_1,\\dotsc,a_k)\n      \\in A_k\n      \\;,\n    \\end{gather*}\n    \\idest skip $a_0$ and all higher $a_i$.\n  \\end{enumerate}\n\\end{Def}\n\n\\begin{Ex}\n  The first such polynomials over $\\Zmod2$ are\n  (see \\cite[p.~90]{milnorlectures}):\n  \\begin{align*}\n    k&=0:\n    &\\s{\\Emptypart} &= 1\n    \\\\ k&=1:\n    &\\s{(1)} &= \\alpha_1\n    \\\\ k&=2:\n    &\\s{(2)} &= \\alpha_1^2\n        &\\s{(1,1)} &= \\alpha_2\n    \\\\ k&=3:\n    &\\s{(3)} &= \\alpha_1^3 + \\alpha_1\\alpha_2 + \\alpha_3\n        &\\s{(1,2)} &= \\alpha_1\\alpha_2 + \\alpha_3\n             &\\s{(1,1,1)} &= \\alpha_3\n  \\end{align*}\n\\end{Ex}\n\n\\begin{Rem}\\label{rem:spolyevaluation}\n  If $k\\leq r\\leq n\\in\\Nat$ and one is given an element $a=1+\\sum_{i\\geq 1} a_i$\n  in a graded commutative algebra $A_*$, which can be written as\n  \\begin{gather*}\n    a = 1 + \\sum_{i=1}^{r}\\el{i}{n}(f_1,\\dotsc,f_n)\n  \\end{gather*}\n  for some degree one elements $f_i\\in A_*$,\n  then for any partition $\\Part\\in\\PartitionsOf{k}$ one gets\n  \\begin{align*}\n    \\s{\\Part}(a)\n    &= \\s{\\Part}\\left(\n      \\el{1}{n}(f_1,\\dotsc,f_n),\\dotsc, \\el{k}{n}(f_1,\\dotsc,f_n)\n      \\right) \\\\\n    &= \\left(\n      \\s{\\Part}\n      \\left( \\el{1}{n}, \\dotsc, \\el{k}{n} \\right)\n      \\right)\n      (f_1,\\dotsc,f_n) \\\\\n    &= \\symm{n} f^\\Part \\in A_k\n      \\;.\n  \\end{align*}\n  This trick will be very useful when dealing with Stiefel-Whitney\n  classes.\n\\end{Rem}\n\nAs promised, these translation-polynomials between the basis\n$\\symm{n}t^{\\Part}$ of $\\Symm{n}_k$ and the generators $\\el i n$\nfulfill the following interesting property concerning\nmultiplication of elements that have the form of a total\nStiefel-Whitney class.\n\\begin{Lem}\\label{lem:productrule:general}\n  Let $k\\in\\Nat$, and let $A_*=\\bigoplus_{i\\geq 0} A_i$ be a graded\n  commutative ring.\n  For $a,b\\in A_*$ with $a_0=1=b_0$, and any partition\n  $\\Part\\in\\PartitionsOf k$ holds\n  \\begin{gather*}\n    \\s{\\Part}(a\\cdot b)\n    = \\sum_{\\Part_1\\concat\\Part_2=\\Part}\n    \\s{\\Part_1}(a) \\cdot \\s{\\Part_2}(b)\n    \\;.\n  \\end{gather*}\n  \\begin{proof}\n    A proof can also be found in\n    \\cite[Theorem~33, p.~91f]{milnorlectures}.\n    We will prove the lemma in a special case from which the general\n    statement follows.\n    Beforehand, recall that for $r\\geq 1$ and any\n    element $f=\\sum_{i\\geq 0}f_i$ in a commutative graded ring $A_*$\n    holds\n    \\begin{gather}\\label{eq:translationpolycut}\n      \\s{\\Part}\\left( \\sum_{i\\geq 0}f_i \\right)\n      = \\s{\\Part}\\left( \\sum_{i=0}^{k+r}f_i \\right)\n      \\;.\n    \\end{gather}\n    Now let $A_*$ be the subring of $\\Zmod2[t_1,\\dotsc,t_{2k}]$ that is\n    generated by the algebraically independent 
elements\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      a_i\\coloneqq \\el i k(x_1,\\dotsc,x_k)\n      \\qquad\\text{and}\\qquad\n      b_i\\coloneqq \\el i k(y_1,\\dotsc,y_k)\n    \\end{gather*}\n    for $1\\leq i\\leq k$ and with $x_i=t_i$ and $y_i=t_{k+i}$.\n    Then, for any other graded commutative\n    ring $\\bar A_*$, and elements\n    $\\bar a=1+\\sum_{i\\geq 1}\\bar a_i$,\n    $\\bar b=1+\\sum_{i\\geq 1}\\bar b_i\\in\\bar A_*$, the ring\n    homomorphism defined by\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\phi\\colon A_*\\longto \\bar A_*\n      \\;,\\qquad\n      a_i\\longmapsto \\bar a_i\n      \\;,\\qquad\n      b_i\\longmapsto \\bar b_i\n    \\end{gather*}\n    is both well-defined, since $a_i,b_i$ are algebraically independent\n    generators, and surjective onto the subring of $\\bar A_*$\n    generated by $\\bar a_i,\\bar b_i$ with $1\\leq i\\leq k$.\n    Hence, if we assume the statement is proven for\n    $a=1+\\sum_{i=1}^k a_i$ and $b=1+\\sum_{i=1}^k b_i$ in $A_*$, one\n    has\n    \\begin{align*}\n      \\s{\\Part}(\\bar a\\cdot \\bar b)\n      &\\clapequalsby{\\eqref{eq:translationpolycut}}\n      \\s{\\Part}\\left(\n        \\left( 1+\\sum_{i=1}^{k}\\bar a_i \\right) \\cdot\n        \\left( 1+\\sum_{i=1}^{k}\\bar b_i \\right)\n      \\right)\n      = \\s{\\Part}(\\phi(a)\\cdot \\phi(b)) \\\\\n      &= \\phi(\\s{\\Part}(a\\cdot b)) \\\\\n      &\\equalsby{Assumption} \\phi\\left(\n      \\sum_{\\mathrlap{\\Part_1\\concat\\Part_2=\\Part}}\n      \\s{\\Part_1}(a) \\cdot \\s{\\Part_2}(b)\n      \\right) \\\\\n      &= \\sum_{\\mathrlap{\\Part_1\\concat\\Part_2=\\Part}}\n        \\s{\\Part_1}(\\phi(a)) \\cdot \\s{\\Part_2}(\\phi(b)) \n      =\n        \\sum_{\\mathclap{\\Part_1\\concat\\Part_2=\\Part}}\n        \\s{\\Part_1}\\left( \\sum_{i=1}^{k}\\bar a_i \\right)\n        \\cdot \\s{\\Part_2}\\left( \\sum_{i=1}^{k}\\bar b_i \\right) \\\\\n      &\\equalsby{\\eqref{eq:translationpolycut}}\n        \\sum_{\\mathrlap{\\Part_1\\concat\\Part_2=\\Part}}\n        \\s{\\Part_1}(\\bar a) \\cdot \\s{\\Part_2}(\\bar b)\n      \\;,\n    \\end{align*}\n    which is the statement of the lemma for $\\bar a, \\bar b$.\n    Therefore, it suffices to show the lemma for $a$ and $b$ as\n    above.\n    \n    In order to do so, set $n=2k$ and observe that\n    \\begin{align*}\n      a &\\coloneqq\n          \\prod_{\\mathclap{r=1}}^k(1+t_r)\n          \\cequalsby{\\eqref{eq:sumelemsymmpoly}}\n          1 + \\sum_{i=1}^k \\el i k(x_1,\\dotsc,x_k)\n          \\;\\text{, and}\\\\\n      b &\\coloneqq\n          \\prod_{\\mathclap{r=k+1}}^n(1+t_r)\n          \\cequalsby{\\eqref{eq:sumelemsymmpoly}}\n          1 + \\sum_{i=1}^k \\el i k(y_1,\\dotsc,y_k)\n          \\;\\text{, then}\\\\\n      a\\cdot b\n        &=\n          \\prod_{r=1}^n (1+t_r)\n          \\cequalsby{\\eqref{eq:sumelemsymmpoly}}\n          1 + \\sum_{i=1}^{2k} \\el i n (t_1,\\dotsc,t_{2k})\n          \\;,\n    \\end{align*}\n    which nicely fits into the defining relation for $\\s{\\Part}$.\n    Now calculate\n    \\begin{align*}\n      \\s{\\Part}(a\\cdot b)\n      &\\coloneqq\n        \\symm{n} t^{\\Part}\n      \\\\ &\\equalsby{Def.}\n           \\sum_{I^n\\in\\Part}\n           t_1^{i_1}\\dotsm t_n^{i_n}\n      \\\\ &=\n           \\sum_{I^n\\in\\Part}\n           x^{(i_1,\\dotsc,i_k)}\\cdot y^{(i_{k+1},\\dotsc,i_n)}\n      \\\\ &=\n           \\sum_{J_1^k\\concat J_2^k\\in\\Part}\n           x^{J_1} \\cdot y^{J_2}\n      \\\\ &\\equalsby{Group by equiv.}\n           \\sum_{\\Part_1\\concat 
\\Part_2=\\Part}\n           \\sum_{\\substack{J_1^k\\in\\Part_1\\\\ J_2^k\\in\\Part_2}}\n      x^{J_1} \\cdot y^{J_2}\n      \\\\ &=\n           \\sum_{\\Part_1\\concat \\Part_2=\\Part}\n           \\left(\\sum_{J_1^k\\in\\Part_1} x^{J_1}\\right)\n           \\cdot\n           \\left(\\sum_{J_2^k\\in\\Part_2} y^{J_2}\\right)\n      \\\\ &\\equalsby{Def.}\n           \\sum_{\\Part_1\\concat \\Part_2=\\Part}\n           \\left(\\symm{k} x^{\\Part_1}\\right)\n           \\cdot\n           \\left(\\symm{k} y^{\\Part_2}\\right)\n      \\\\ &\\equalsby{Def.}\n           \\sum_{\\Part_1\\concat \\Part_2=\\Part}\n           \\s{\\Part_1}\\left( \\el 1 k(x_1,\\dotsc,x_k),\\dotsc \\right)\n           \\cdot\n           \\s{\\Part_2}\\left( \\el 1 k(y_1,\\dotsc,y_k),\\dotsc \\right)\n      \\\\ &\\equalsby{Def.}\n           \\sum_{\\Part_1\\concat \\Part_2=\\Part}\n           \\s{\\Part_1}(a) \\cdot \\s{\\Part_2}(b)\n           \\qedhere\n    \\end{align*}\n  \\end{proof}\n\\end{Lem}\n\n\\begin{Ex}\n  The most important partition of an integer $k$ will be the trivial\n  one $(k)\\in\\PartitionsOf k$. In this case\n  Lemma~\\ref{lem:productrule:general} says\n  \\begin{gather*}\n    \\s{(k)}(a\\cdot b) = \\s{(k)}(a) + \\s{(k)}(b)\n    \\;.\n  \\end{gather*}\n\\end{Ex}\n\n\n\\subsection{Stiefel-Whitney Numbers of Product Manifolds}\n\\label{sec:swnumsofproductmfds}\nIn order to apply the special polynomials from the preceding\nsection, as well as their product property from\nLemma~\\ref{lem:productrule:general}, to the Stiefel-Whitney\nnumbers of (product) manifolds, first start with Stiefel-Whitney\nclasses.\n\nSo, let $M^n=M_1^{n_1}\\times M_2^{n_2}$ all be closed manifolds of the\nnoted dimensions throughout this section.\n\nRecall that\n\\begin{enumerate}\n\\item the cohomology ring $\\H^*(M)$ is a graded ring,\n\\item the total Stiefel-Whitney class of a manifold is of the form\n  \\begin{gather*}\n    \\W{M_i} = 1 + \\w 1 {M_i} + \\dotsb + \\w n {M_i}\n    \\;, \\quad\\text{and that}\n  \\end{gather*}\n\\item by the K\u00fcnneth isomorphism, we have\n  \\begin{align*}\n    \\H^*(M_1)\\otimes \\H^*(M_2)\n    &\\overset{\\cong}\\longto \\H^*(M)\n    \\\\\n    c_1 \\otimes c_2\n    &\\longmapsto c_1\\times c_2\n      \\coloneqq \\pb\\proj_1 c_1 \\cup \\pb\\proj_2 c_2\n  \\end{align*}\n  and $\\W{M} = \\W{M_1} \\times \\W{M_2}$ by\n  \\ref{tag:swclassesmultiplicativity} of the Stiefel-Whitney classes.\n\\end{enumerate}\nThus, one can apply the multiplication\nrule~\\ref{lem:productrule:general} to $\\W{M}$, which immediately\nyields:\n\\begin{Cor}\\label{cor:productrule:swcl}\n  For $M=M_1\\times M_2$ manifolds as above one gets for any partition\n  $\\Part\\in\\PartitionsOf n$:\n  \\begin{align*}\n    \\SwapAboveDisplaySkip\n    \\s{\\Part}(\\W{M})\n    &=\n    \\s{\\Part}\\left(\\W{M_1}\\times \\W{M_2}\\right) \\\\\n    % &\\equalsby{\\ref{lem:productrule:general}}\n    %   \\sum_{\\mathrlap{\\Part_1\\concat\\Part_2=\\Part}}\n    %   \\s{\\Part_1}(\\pb\\proj_1\\W{M_1}) \\cdot \\s{\\Part_2}(\\pb\\proj_2\\W{M_2})\n    % \\\\\n    &=\n      \\sum_{\\mathrlap{\\Part_1\\concat\\Part_2=\\Part}}\n      \\s{\\Part_1}(\\W{M_1}) \\times \\s{\\Part_2}(\\W{M_2})\n    \\\\\n    &=\n         \\sum_{\\mathrlap{\\substack{\n         \\Part_1\\concat\\Part_2=\\Part\\\\\n    \\Part_1\\in\\PartitionsOf{n_1}\\\\\n    \\Part_2\\in\\PartitionsOf{n_2}\\\\\n    }}}\n    \\s{\\Part_1}(\\W{M_1}) \\times \\s{\\Part_2}(\\W{M_2})\n  \\end{align*}\n  \\begin{proof}\n    For the last equality note that by definition of 
$\\s{\\Part}$\n    for any partition $\\Part$ of some $k\\in\\Nat$, \n    the element $\\s{\\Part}(\\W{W})$ lies in $\\H^k(W)$,\n    hence must be zero if $k>\\dim W$.\n    Therefore, for any combination of partitions\n    $\\Part_1\\in\\PartitionsOf{k_1}$,\n    $\\Part_2\\in\\PartitionsOf{k_2}$\n    where\n    \\begin{gather*}\n      k_1+k_2=n=n_1+n_2\n      \\quad\\text{with $k_i\\neq n_i$,}\n    \\end{gather*}\n    the product\n    $\\s{\\Part_1}(\\W{M_1})\\cdot\\s{\\Part_2}(\\W{M_2})$ will have a zero\n    factor and can be skipped.\n  \\end{proof}\n\\end{Cor}\n\nIn order to pass to Stiefel-Whitney numbers instead of classes, use\nthe following notation.\n\\begin{Def}\n  Let $W$ be a closed manifold and $\\Part\\in\\PartitionsOf{\\dim W}$.\n  Then write\n  \\begin{align*}\n    \\s{\\Part}(W) &\\coloneqq \\s{\\Part}(\\W{W})\\\\\n    \\snum{\\Part}{W} &\\coloneqq \\capped{\\s{\\Part}(W)}{\\fundcl{W}}\n                      \\;.\n  \\end{align*}\n\\end{Def}\n\nNow the product rule from Lemma~\\ref{lem:productrule:general} translates to\n\\begin{Cor}\\label{cor:swnumdecompositionmfds}\n  For closed manifolds $M_1$ and $M_2$ with dimensions $n_1$ and\n  $n_2$ one has\n  \\begin{align*}\n    \\snum{\\Part}{M_1\\times M_2}\n    &= \\sum_{\\mathclap{\\substack{\n      \\Part_1\\concat\\Part_2=\\Part\\\\\n    \\Part_1\\in\\PartitionsOf{n_1}\\\\\n    \\Part_2\\in\\PartitionsOf{n_2}\\\\\n    }}}\n    \\snum{\\Part_1}{M_1} \\cdot \\snum{\\Part_2}{M_2}\n    \\quad \\in \\quad \\Zmod2\n  \\end{align*}\n  and as a special case\n  \\begin{align}\n    \\SwapAboveDisplaySkip\n    % \\notag\n    % \\snum{(n_1,n_2)}{M_1\\times M_2}\n    % &= \\snum{(n_1)}{M_1} \\cdot \\snum{(n_2)}{M_2}\n    % \\\\\n    \\label{eq:productrule:swnum}\n    \\snum{(n_1+n_2)}{M_1\\times M_2} &= 0\n                                      \\;.\n  \\end{align}\n  In particular, if $\\snum{(\\dim W)}{W}\\neq 0$ for a closed manifold\n  $W$, then $W$ is not a product manifold.\n  \\begin{proof}\n    The statement immediately follows from the previous\n    Corollary~\\ref{cor:productrule:swcl} with the following\n    two facts:\n    \\begin{enumerate}\n    \\item The generator $\\fundcl{M_1\\times\n        M_2}\\in\\H_{n_1+n_2}(M_1\\times M_2)$ corresponds to the generator\n      \\begin{gather*}\n        \\fundcl{M_1}\\otimes\\fundcl{M_2}\n        \\in \\H_{n_1}(M_1)\\otimes\\H_{n_2}(M_2)\n        \\cong \\H_{n_1+n_2}(M_1\\times M_2)\n      \\end{gather*}\n      under the K\u00fcnneth isomorphism.\n    \\item For cohomology classes $c_1\\in\\H^{n_1}(M_1)$, $c_2\\in\\H^{n_2}(M_2)$ holds\n      \\begin{gather*}\n        \\capped{c_1\\otimes c_2}{\\fundcl{M_1\\times M_2}}\n        = \\capped{c_1\\otimes c_2}{\\fundcl{M_1}\\otimes\\fundcl{M_2}}\n        = \\capped{c_1}{\\fundcl{M_1}} \\cdot \\capped{c_2}{\\fundcl{M_2}}\n        \\in \\Zmod2\n      \\end{gather*}\n      by the universal property of the tensor product.\n      \\qedhere\n    \\end{enumerate}    \n  \\end{proof}\n\\end{Cor}\n\n\\autoref{eq:productrule:swnum} is the desired obstruction for\na manifold to be a product or---as will be explained in the next\nsubsections---to be cobordant to a product.\nFinally, this statement will be an invaluable tool for detecting\nmanifolds that are not only not cobordant to a product manifold but\nwhose cobordism class is indecomposable.\n\n\\subsection{Review: The Cobordism Ring Structure}\n\\label{sec:cobordismringstructure}\nRecall that two closed manifolds of the same dimension $n$ are\n(unoriented) cobordant if their disjoint union is the 
boundary of a compact\n$(n+1)$\\nbd{}dimensional manifold.\nThis is an equivalence relation amongst $n$\\nbd{}manifolds, and the\nset of equivalence classes forms an Abelian group $\\c_n$, in which\nevery element has order two,\nwith the disjoint sum as addition and the $n$\\nbd{}sphere as zero element.\nThe Cartesian product turns the graded $\\Zmod2$\\nbd{}module\n$\\c_*\\coloneqq \\bigoplus_{n\\geq 0}\\c_n$ into a $\\Zmod2$\\nbd{}algebra\ncalled the (unoriented) cobordism ring.\nDenote the cobordism equivalence class of a manifold $M$ by $[M]$.\n\nMost remarkably, the cobordism relation is homotopy invariant, which\nwill become clear from the property described in\nTheorem~\\ref{thm:cobordantiffswnumscoincide}.\nFurther, the structure of this algebra is well-known to be as follows.\n\\begin{Thm}[Thom]\\label{thm:cobordismringstructure}\n  There is an isomorphism of graded $\\Zmod2$\\nbd{}algebras\n  \\begin{gather*}\n    \\c_*\n    % \\cong \\pi_*(\\MO)\n    \\cong \\Zmod2[\\sigma_i\\mid i\\neq 2^r-1]\n    \\;,\n  \\end{gather*}\n  with one polynomial generator $\\sigma_i$ in each degree $i\\neq 2^r-1$.\n  \\begin{proof}\n    Compare \\cite[Thm.~1.23]{immersionconj}, and\n    \\cite[Theorem~IV.9]{thom}.\n    See \\cite[Theorem~IV.12]{thom}, or\n    \\autoref{sec:indecomposabilitycriterion},\n    \\ref{item:generatorscobordimsring}, for a proof using\n    Theorem~\\ref{thm:basiscobordismring} below.\n    Alternatively see \\cite[Chap.~VI]{stong}.\n  \\end{proof}\n\\end{Thm}\n\nDuring the proof of the above theorem, Thom constructs special\nmanifolds that form a basis of the cobordism ring,\nand are uniquely characterized by the properties below.\nFor the formulation recall that a sequence or partition is called\n\\emph{non-dyadic} if none of its entries is of the form $2^r-1$.\n\\begin{Thm}\\label{thm:basiscobordismring}\n  There exists a basis of the cobordism ring represented by manifolds\n  $V_\\Part$ that in each degree $k$ is indexed by non-dyadic\n  partitions $\\Part$ of $k$. Further, the $V_\\Part$ are uniquely\n  characterized by\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\s{\\Part'}(\\W{V_\\Part}) = \\delta_{\\Part,\\Part'}\n    \\in \\H^k(V_{\\Part})\\cong \\Zmod2\n  \\end{gather*}\n  for any non-dyadic partitions $\\Part,\\Part'\\in\\PartitionsOf{k}$,\n  where $\\delta$ is the usual Kronecker delta.\n  \\begin{proof}\n    See \\cite[Section~IV.5, proof of Theorem~IV.9]{thom}.\n    % [Milnor] better source?\n  \\end{proof}\n\\end{Thm}\n\nIn order to relate the results on the Stiefel-Whitney numbers of\nmanifolds---respectively certain linear combinations of them---from\nbefore with cobordism classes, one needs the following more general\nconnection. Thom rather directly deduces this from the existence of\nthe above basis.\n\\begin{Thm}[Thom]\\label{thm:cobordantiffswnumscoincide}\n  Two closed manifolds are cobordant if and only if all of their\n  Stiefel-Whitney numbers coincide.\n  \\begin{proof}[Proof (sketch)]\n    The proof that manifolds with coinciding Stiefel-Whitney numbers\n    are cobordant was given by Thom \\cite[Theorem IV.10]{thom}.\n    To see that cobordant manifolds have the same Stiefel-Whitney\n    numbers, let $M^n$ be a null-bordant closed manifold, \\idest\n    assume $M^n=\\Boundary{W}$ for a compact manifold $W$. 
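This case suffices: Stiefel-Whitney numbers are additive with\n    respect to disjoint unions, so if $M_1\\amalg M_2$ is null-bordant,\n    the Stiefel-Whitney numbers of $M_1$ and $M_2$ agree in $\\Zmod2$.\n    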
Now, consider\n    any sequence $(i_1,\\dotsc,i_l)\\eqqcolon I$ with\n    $\\sum_{r=1}^li_r=n$, and the corresponding Stiefel-Whitney number\n    \\begin{gather*}\n      \\capped{\\W{M}^I}{\\fundcl M}\n      \\coloneqq \\capped{\\w{1}{M}^{i_1}\\dotsm\\w{l}{M}^{i_l}}{\\fundcl{M}}\n    \\end{gather*}\n    of $M$.\n    One calculates using the long\n    exact sequence of cohomology respectively homology of the pair\n    $i\\colon M\\immto W$ (abbreviated les) with boundary map\n    $\\partial$:\n    \\begin{align*}\n      \\W{M}\n      &= \\W{\\T M}\n        = \\W{\\T M\\oplus\\trivbdl}\n        = \\W{\\T W|_{M}}\n        \\cequalsby{Def.} \\W{\\pb i \\T W} = \\pb i \\W{W} \\\\\n      \\fundcl M\n      &\\clapequalsby{les} \\partial \\fundcl{W}\n        \\;.\n    \\end{align*}\n    With the fact $\\pf i\\circ\\partial\\overset{\\text{les}}= 0$\n    this yields for the number from above\n    \\begin{gather*}\n      \\capped{\\W{M}^I}{\\fundcl M}\n      = \\capped{\\pb i\\W{W}^I}{\\partial\\fundcl{W}}\n      = \\capped{\\W{W}^I}{\\pf i\\partial \\fundcl{W}}\n      = 0\n      \\;.\n      \\qedhere\n    \\end{gather*}\n  \\end{proof}\n\\end{Thm}\n\n\\subsection{A Criterion for Indecomposability}\n\\label{sec:indecomposabilitycriterion}\nNow, we focus on the ultimate goal of the current section, which\nis to deduce the following indecomposability criterion. This\narises as a corollary of Thom's proof of the multiplicative\nstructure of the cobordism ring (see \\cite[Section~IV.5]{thom}).\n\\begin{Thm}\\label{thm:indecomposabilitycriterion}\n  A closed $n$\\nbd{}manifold $M$ represents an indecomposable element of\n  the cobordism ring $\\c_*$ if and only if\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\snum{(n)}{M} \\neq 0 \\in \\Zmod2\n    \\;.\n  \\end{gather*}\n  % for non-dyadic decomposition: [p.~156]{thom};\n  % for summation notation: [p.154, (4)f]{thom};\n\\end{Thm}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:indecomposabilitycriterion}]\n  Using the main theorems \\ref{thm:basiscobordismring} and\n  \\ref{thm:cobordantiffswnumscoincide} from\n  \\autoref{sec:cobordismringstructure}, as well as the main\n  corollaries \\ref{cor:productrule:swcl} and\n  \\ref{cor:swnumdecompositionmfds} from\n  \\autoref{sec:swnumsofproductmfds}, one obtains the\n  desired result in the following steps.\n  \\begin{steps}\n  \\item\\label{item:manifoldbasisrepr}\n    As the classes $[V_\\Part]$ form a basis of the cobordism ring by\n    Theorem~\\ref{thm:basiscobordismring}, any manifold $M^n$ is\n    cobordant to a unique linear combination, \\idest disjoint sum,\n    \\begin{gather*}\n      [M] = \\coprod_{\\mathclap{\\substack{\n            \\Part\\in\\PartitionsOf n \\\\\\text{ non-dyadic}\n          }}} \\alpha_\\Part [V_\\Part]\n      \\;,\\quad\n      \\alpha_\\Part \\in \\Zmod2,\n    \\end{gather*}\n    of the classes $[V_\\Part]$.\n    Now the Stiefel-Whitney numbers are determined by the cobordism\n    class according to Theorem~\\ref{thm:cobordantiffswnumscoincide},\n    and are additive with respect to disjoint sums.\n    So, one gets for any Stiefel-Whitney number $\\wsnum{I}{M}$ of $M$\n    \\begin{align*}\n      \\wsnum{I}{M}\n      &= \\sum_{\\mathclap{\\substack{\n        \\Part\\in\\PartitionsOf n \\\\\\text{ non-dyadic}\n      }}} \\alpha_\\Part \\wsnum{I}{V_\\Part}\n      \\;,\n      &\\text{especially}\n      &&\\snum{\\Part'}{M}\n      &= \\sum_{\\mathclap{\\substack{\n        \\Part\\in\\PartitionsOf n \\\\\\text{ non-dyadic}\n      }}} \\alpha_\\Part 
\\snum{\\Part'}{V_\\Part}\n      \\cequalsby{Def.} \\alpha_{\\Part'}\n      \\;.\n    \\end{align*}\n    Thus, $[V_\\Part]$ is a summand of $[M]$ if and only if\n    $\\snum{\\Part}{M}$ is non-zero. In other words, $M$ is cobordant to\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\coprod_{\\mathclap{\\substack{\n            \\Part\\in\\PartitionsOf n \\text{ non-dyadic}\\\\\n            \\snum{\\Part}{M}\\neq 0\n          }}} V_\\Part\n      \\;.\n    \\end{gather*}\n  \\item\\label{item:productpartitions}\n    For partitions $\\Part_1'$ of $n_1$ and $\\Part_2'$ of $n_2$ and\n    any partition $\\Part'$ one has\n    \\begin{align*}\n      \\s{\\Part'}(V_{\\Part_1'}\\times V_{\\Part_2'})\n      &\\clapequalsby{\\ref{cor:productrule:swcl}}\n        \\sum_{\\mathclap{\\substack{\n        \\Part_1\\concat\\Part_2=\\Part'\\\\\n      \\Part_1\\in\\PartitionsOf{n_1}\\\\\n      \\Part_2\\in\\PartitionsOf{n_2}\\\\\n      }}}\n      \\s{\\Part_1}(V_{\\Part_1'}) \\cdot \\s{\\Part_2}(V_{\\Part_2'})\n      \\overset{\\text{Def.}}=\n      \\sum_{\\mathclap{\\substack{\n      \\Part_1\\concat\\Part_2=\\Part'\\\\\n      \\Part_1\\in\\PartitionsOf{n_1}\\\\\n      \\Part_2\\in\\PartitionsOf{n_2}\\\\\n      }}}\n      \\delta_{\\Part_1,\\Part_1'} \\cdot \\delta_{\\Part_2, \\Part_2'}\n      = \\delta_{\\Part', \\Part_1'\\concat\\Part_2'}\n      \\\\\n      &\\cequalsby{Def.} \\s{\\Part'}(V_{\\Part_1'\\concat\\Part_2'})\n        \\;.\n    \\end{align*}\n    Thus, by \\ref{item:manifoldbasisrepr}\n    the basis representations of\n    $[V_{\\Part_1'}]\\times[V_{\\Part_2'}]=[V_{\\Part_1'}\\times V_{\\Part_2'}]$\n    and \n    $[V_{\\Part_1'\\concat\\Part_2'}]$\n    coincide, so the two classes must be equal. \n  \\item\\label{item:generatorscobordimsring}\n    By \\ref{item:productpartitions}, all basis elements\n    $[V_\\Part]$ can be written as a product of lower degree basis\n    elements, except for those corresponding to a trivial partition\n    $(k)$, $k\\in\\Nat$.\n    Furthermore, such a basis element $[V_{(k)}]$ cannot be\n    decomposable, as otherwise $\\snum{(k)}{V_{(k)}} = 0$\n    by the additivity of Stiefel-Whitney numbers and\n    \\autoref{eq:productrule:swnum} in\n    Corollary~\\ref{cor:swnumdecompositionmfds},\n    which contradicts the definition in\n    Theorem~\\ref{thm:basiscobordismring}.\n    \n    Altogether, the basis elements $[V_{(k)}]\\in\\c_k$ represented by a\n    $k$\\nbd{}dimensional manifold $V_{(k)}$ for $k\\neq 2^m-1$ are\n    indecomposable---hence algebraically independent---generators of\n    the cobordism ring.\n    This is a proof of Theorem~\\ref{thm:cobordismringstructure} using\n    Theorem~\\ref{thm:basiscobordismring}.\n  \\item By \\ref{item:generatorscobordimsring}, the cobordism\n    class of a manifold $W$ of dimension $k$ is an indecomposable\n    element of $\\c_*$ if and only if its unique representation by\n    basis elements $[V_\\Part]$ contains as a summand the unique\n    $k$\\nbd{}dimensional indecomposable basis element $[V_{(k)}]$,\n    \\idest if and only if $\\snum{(k)}{W}\\neq 0$ by\n    \\ref{item:manifoldbasisrepr}.\n    \\qedhere\n  \\end{steps}\n\\end{proof}\n\nThis directly yields the following example which will be a key point\nin constructing a candidate generating set of the cobordism ring.\n\\begin{Ex}\\label{ex:rpnindecomposable}\n  For $k\\in\\Nat$ even, the projective space $\\RP k$ represents an\n  indecomposable element of the cobordism ring.\n  \\begin{proof}\n    If one applies Remark~\\ref{rem:spolyevaluation} to\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\W{\\RP k}\n  
    = (1+x)^{k+1}\n      = \\prod_{i=1}^{k+1}(1+x)\n      = 1+\\sum_{j=1}^{k+1} \\el{j}{k+1}(x,\\dotsc,x)\n    \\end{gather*}\n    where $x$ is the\n    generator in degree one of $\\H^*(\\RP k)\\cong\\Zmod2[x]/(x^{k+1})$,\n    one gets\n    \\begin{align*}\n      \\s{(k)}(\\RP k)\n      &\\clapequalsby{\\ref{rem:spolyevaluation}}\n        \\sum_{i=1}^{k+1} x^k\n        = (k+1)x^k \\\\\n      \\snum{(k)}{\\RP k}\n      &= \\capped{(k+1)x^k}{\\fundcl{\\RP k}}\n        = k+1\n        \\equiv 1 \\mod2\n        \\;.\n        \\qedhere\n    \\end{align*}\n  \\end{proof}\n\\end{Ex}\n\n\\section{Twisted Products}\n\\label{sec:twistedprod}\nThe candidates for a generating set needed for the proof of Brown's\nTheorem~\\ref{thm:brown} will be inductively constructed\nusing the so-called twisted product construction explained below.\nThe main advantage of this tool is---besides a number of handy\npreservation properties---the fact that a twisted product is\nindecomposable if and only if its factor is and the dimension is\nchosen correctly\n(Theorem~\\ref{thm:twistedprod:indecompcriterion}). The latter will\nbe the main result of this section, and is discussed in\n\\autoref{sec:twistedprod:indecompcriterion}.\n\n\\subsection{Definition}\\label{sec:twistedproddef}\nThe following definition follows\n\\cite[p.~83]{immersionconj}; compare also\n\\cite[\u00a74, Def.\\ $P(m;X)$]{brown}.\n\\begin{Def}\n  Let $X$ be a space and $k\\in\\Nat$.\n  Define the \\emph{twisted product of $X$ by $\\Sphere k$}, denoted\n  $\\Twistedprod{k}{X}$, to be the orbit space of the properly\n  discontinuous $\\Zmod2$\\nbd{}action on $\\Twistedprodcovspace{k}{X}$ given\n  by\n  \\begin{align*}\n    \\Zmod2 &\\leftgroupaction \\Twistedprodcovspace{k}{X}\n             \\;,\n    &[1] \\actson (s, (p_1,p_2)) &\\coloneqq (-s, (p_2, p_1))\n                                  \\;,\n  \\end{align*}\n  which combines the antipodal action $[1]\\actson s\\coloneqq -s$ on\n  $\\Sphere k$ and twisting on $X\\times X$.\n  For a map $f\\colon X\\to Y$ of spaces, define\n  \\begin{gather*}\n    \\Twistedprod{k}{f}\\coloneqq (\\Id\\times f\\times f/\\sim)\\colon\n    \\Twistedprod{k}{X}\\longto\\Twistedprod{k}{Y}\n    \\;,\\quad\n    [s,(p_1,p_2)] \\longmapsto [s,(f(p_1),f(p_2))]\n    \\;.\n  \\end{gather*}\n\\end{Def}\n\\begin{Ex}\n  Major examples needed later are\n  \\begin{itemize}\n  \\item $\\Twistedprod k {\\pt} = \\RP k$, and\n  \\item $\\Twistedprod 0 {M} = M\\times M$.\n  \\end{itemize}\n\\end{Ex}\n\nFirst, gather some rather immediate, convenient properties. 
It is\nespecially noteworthy how well the twisted product behaves concerning\nmanifolds and fiber bundles.\n\\begin{Rem}\\label{rem:twistedprodproperties}\n  Let $X$ be a space and $k\\in\\Nat$.\n  \\begin{enumerate}\n  \\item $\\Twistedprod{k}{-}$ is a functor on the category of\n    topological spaces preserving injectivity.\n  \\item\\label{item:twistedprodfiberbdl}\n    $\\Twistedprod{k}{-}$ preserves fiber bundles,\n    \\idest for a fiber bundle $\\xi\\colon\\E\\xi\\to X$ with fiber $F$\n    the twisted product $\\Twistedprod{k}{\\xi}\\colon\n    \\Twistedprod{k}{\\E\\xi}\\to \\Twistedprod{k}{X}$\n    is again a fiber bundle with fiber $F\\times F$.\n    This comes from the fiber bundle\n    \\begin{gather*}\n      F\\times F\n      \\longto \\Twistedprodcovspace{k}{\\E\\xi}\n      \\longto \\Twistedprodcovspace{k}{X}\n    \\end{gather*}\n    where all maps are maps of $\\Zmod2$\\nbd{}spaces.\n    As a special case, $\\Twistedprod{k}{X}$ admits a fiber bundle\n    \\begin{gather}\n      \\SwapAboveDisplaySkip\n      \\label{eq:twistedprodrpnfiberbdl}\n      X\\times X\n      \\longto \\Twistedprod{k}{X}\n      \\longto \\RP k = \\Sphere k/\\sim\n    \\end{gather}\n    with fiber $X\\times X$ which comes from the trivial fiber bundle\n    $X\\to\\pt$.\n    Further, let $\\eta\\colon\\E\\eta\\to X$ be\n    another fiber bundle, and $f\\colon X'\\to X$ a map.\n    One has:\n    \\begin{enumerate}\n    \\item\\label{item:twistedprod:preservespb}\n      $\\Twistedprod{k}{-}$ respects pullbacks, \\idest\n      \\begin{gather*}\n        \\Twistedprod{k}{\\pb f \\xi}\n        = \\pb{\\left(\\Twistedprod{k}{f}\\right)}\n        \\left(\\Twistedprod{k}{\\xi}\\right)\n      \\end{gather*}\n    \\item $\\Twistedprod{k}{-}$ respects Whitney sums of vector\n      bundles, \\idest \n      \\begin{gather*}\n        \\Twistedprod{k}{\\xi\\oplus\\eta\\colon \\E{(\\xi\\oplus\\eta)}\\to X}\n        = \\Twistedprod{k}{\\xi}\\oplus\\Twistedprod{k}{\\eta}\n        \\;.\n      \\end{gather*}\n      To see this, observe that the following is a well-defined\n      commutative pullback diagram of vector bundles:\n      \\begin{center}\n        \\begin{tikzcd}\n          \\Twistedprod{k}{\\E{(\\xi\\oplus\\eta)}}\n          \\ar[r]\n          \\ar[d, \"\\Twistedprod{k}{\\xi\\oplus\\eta}\"]\n          &\\Twistedprod{k}{\\E{(\\xi\\times\\eta)}}\n          \\ar[r]\n          \\ar[d, \"\\Twistedprod{k}{\\xi\\times\\eta}\"]\n          &\\Twistedprod{k}{\\E{\\xi}}\\times\\Twistedprod{k}{\\E{\\eta}}\n          \\ar[d, \"\\Twistedprod{k}{\\xi}\\times \\Twistedprod{k}{\\eta}\"]\n          \\\\\n          \\Twistedprod{k}{X}\n          \\ar[r, \"\\Twistedprod{k}{\\Delta}\"]\n          &\\Twistedprod{k}{X\\times X}\n          \\ar[r, \"\\widetilde\\Delta\"]\n          &\\Twistedprod{k}{X}\\times\\Twistedprod{k}{X}\n        \\end{tikzcd}\n      \\end{center}\n      where\n      $\\widetilde\\Delta\\colon\n      \\left[s,(x_1,y_1),(x_2,y_2)\\right]\n      \\longmapsto\n      \\left(\\left[s,x_1,x_2\\right], \\left[s,y_1,y_2\\right]\\right)$.\n    \\end{enumerate}\n  \\item\\label{item:twistedprodmanifold}\n    $\\Twistedprod{k}{-}$ preserves closed smooth manifolds, \\idest\n    for a closed smooth manifold $M^n$, $\\Twistedprod{k}{M}$ is\n    again a $(2n+k)$\\nbd{}dimensional closed smooth manifold.\n    This is because the proper discontinuity comes from the antipodal\n    $\\Zmod2$\\nbd{}action, and makes the projection\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\Twistedprodcovspace{k}{X}\n      
\\xrightarrow{\\pi}\n      \\Twistedprod{k}{X}\n      \\coloneqq\n      \\left( \\Twistedprodcovspace{k}{X} \\right)/\\sim\n    \\end{gather*}\n    a two-sheeted covering space.\n    Further:\n    \\begin{enumerate}\n    \\item\\label{item:twistedprodpreservesimmersions}\n      $\\Twistedprod{k}{-}$ preserves immersions.\n    \\item\\label{item:twistedprod:tangentspace}\n      $\\T\\Twistedprod{k}{M}\n      \\cong \\pb\\proj\\T{\\RP k} \\oplus \\Twistedprod{k}{\\T M}$,\n      \\idest the tangent space of $\\Twistedprod{k}{M}$ can be obtained\n      from $\\Twistedprod{k}{\\T M}$ by adding the missing tangent space\n      part of the sphere:\n      \\begin{alignat*}{4}\n        \\T{\\Twistedprod{k}{M}}\n        &\\cong& \\T{\\left(\\Twistedprodcovspace{k}{M}\\right)}/\\sim \\\\\n        &\\cong& \\T \\Sphere k\\times \\T M\\times\\T M/\\sim\n        &\\overset{\\cong}{\\longto}\n        \\pb\\proj \\T{\\RP k} \\oplus \\Twistedprod{k}{\\T M}\n        \\\\\n        &&[(s,v), (m_1,v_1), (m_2,v_2)]\n        &\\longmapsto\n        \\left( ([s], v), [s, (m_1,v_1), (m_2, v_2)] \\right)\n      \\end{alignat*}\n      where $\\proj\\colon\\Twistedprod{k}{M}\\to\\RP k$ is the projection.\n      The first isomorphism is due to the covering space property, and\n      the last is easily seen to be a well-defined isomorphism of\n      vector bundles.\n      Further note that for a map of manifolds $f\\colon M\\to N$,\n      the differential map $\\Diff\\Twistedprod{k}{f}$ on tangent spaces\n      will be the identity on the first summand.\n    \\end{enumerate}\n  \\end{enumerate}\n\\end{Rem}\n\nAs a last insight into the definition, have a look at the twisted\nproduct of Euclidean spaces.\n\\begin{Lem}\\label{lem:twistedprodrealspace}\n  Let $k, n\\in\\Nat$,\n  % and denote by $\\N{}$ is the normal line bundle of $\\Sphere k$,\n  and denote by $\\gamma_k$ the tautological line bundle over\n  $\\RP k$.\n  The fiber bundle $\\Twistedprod{k}{\\R^n}\\to \\RP k$ is isomorphic to\n  the vector bundle\n  \\begin{gather*}\n    (n\\cdot\\gamma_k)\\oplus\\trivbdl^n\n    = (\\gamma_k\\oplus\\dotsb\\oplus\\gamma_k)\\oplus\\trivbdl^n\n  \\end{gather*}\n  \\begin{proof}\n    Compare also \\cite[Prop.~4.3, p.~1107]{brown}.\n    A well-defined vector bundle isomorphism is for example\n    \\begin{align*}\n      \\Twistedprod{k}{\\R^n}\n      &\\overset{\\sim}\\longto\n        (\\Twistedprodcovspace{k}{\\R^n}/\\approx)\n        \\cong (\\Sphere{k}\\times\\R^n/\\approx)\\times\\R^n\n        \\cong \\E{((n\\cdot\\gamma_k)\\oplus\\trivbdl^n)}\n      \\\\\n      [s, v_1, v_2]\n      &\\longmapsto\n        [s, v_1-v_2, v_1+v_2]\n    \\end{align*}\n    where $\\approx$ is the equivalence relation identifying\n    $(s,v_1,v_2)$ and $(-s,-v_1,v_2)$ respectively $(s,v)$ and\n    $(-s,-v)$; note that the twisted coordinate $v_1-v_2$ changes sign\n    under the twist $(s,v_1,v_2)\\sim(-s,v_2,v_1)$, while $v_1+v_2$ is\n    invariant.\n    Here it was used that $\\gamma_k$ is by\n    construction\n    \\begin{gather*}\n      \\E{\\gamma_k} = (\\Sphere k\\times\\R/\\approx) \\longto \\RP k\n      \\;\\qquad\n      [s,v] \\longmapsto [s]\n      \\;,\n    \\end{gather*}\n    and hence\n    $\\E{(n\\gamma_k)}\\cong(\\Sphere k \\times\\R^n/\\approx)$.\n  \\end{proof}\n\\end{Lem}\n\n\\subsection{The Cohomology Ring of Twisted Products}\nBesides the above direct properties, there is a fairly easy\ndescription of the cohomology ring of a twisted product relating it to\nthe cohomology ring of its factor.\nThis can be revealed inductively using a diagram of long exact\nsequences which relates the cohomology of twisted products in\nconsecutive degrees to known sequences of spheres.\nUse the following 
notation.\n\n\\begin{Def}\n  Let $X$ be a space and $k\\in\\Nat$.\n  Note that by the K\u00fcnneth isomorphism\n  $\\H^*(X^2)\\cong\\H^*(X)\\otimes\\H^*(X)$, and recall that all exact\n  sequences of $\\Zmod2$\\nbd{}vector spaces split.\n  Define\n  \\begin{compactitemize}\n  \\item\n    $\\pi_k\\colon \\Sphere k\\times X^2\\to\\Twistedprod k X$\n    to be the quotient map from the definition,\n  \\item\n    $\\proj\\colon\\Twistedprod k X\\to \\RP k$\n    to be the fiber bundle map,\n  \\item\n    $T\\colon \\Sphere{k}\\times X^2\\to\\Sphere{k}\\times X^2$,\n    $(s,p,q)\\mapsto(-s,q,p)$,\n  \\item\n    $N\\coloneqq\n    \\Im(x\\otimes y\\mapsto x\\otimes y + y\\otimes x)\n    = \\left(\n        a\\otimes b + b\\otimes a\n        \\middlemid\n        a,b\\in \\H^*(X)\n      \\right)_{\\Zmod2}$,\n  \\item\n    $d\\colon \\H^*(X)\\to\\H^*(X\\times X)$,\n    $d(a)\\coloneqq a\\otimes a$, and\n    $D\\coloneqq \\Im d\n    = \\left\\{ a\\otimes a \\in \\H^*(X\\times X) \\right\\}$,\n  \\item\n    $s_k\\in\\H^k(\\Sphere k)\\cong\\Zmod2$ to be the generator, and\n  \\item\n    $c_k\\coloneqq \\pb\\proj x\\in\\H^1(\\Twistedprod{k}{X})$, where\n    $x\\in\\H^*(\\RP k)\\cong\\Zmod2[x]/(x^{k+1})$ is the unique generator.\n  \\end{compactitemize}\n  The indices $k$ are omitted if they are obvious from the context.\n\\end{Def}\n\\begin{Rem}\n  Note that $N+D$ is closed under multiplication and addition,\n  and---as a first hint on the cohomology structure---$\\proj$ admits a\n  section  $[s]\\mapsto[s,p,p]$ for any point $p\\in X$, thus making\n  $\\H^*(\\RP k)$ a direct summand of $\\H^*(\\Twistedprod k X)$ of the\n  form $\\Zmod2[c]/(c^{k+1})$.\n  From the commutative diagram factorization\n  \\begin{center}\n    \\begin{tikzcd}[column sep=large]\n      S^k\\times X^2\n      \\ar[d, \"\\pi\"]\n      \\ar[r, \"\\proj_{\\Sphere k}\"]\n      & \\Sphere k \\ar[d, \"\\pi\"]\n      \\\\\n      \\Twistedprod k X\n      \\ar[r, \"\\Twistedprod{k}{X\\to\\pt}\"]\n      &\\RP k      \n    \\end{tikzcd}\n  \\end{center}\n  and with $\\pb\\pi\\colon\\H^*(\\RP k)\\to\\H^*(\\Sphere k)$ being zero in\n  positive degrees, one gets $\\pb\\pi(c)=0$.\n\\end{Rem}\n\\begin{Thm}\\label{thm:twistedprod:cohomstructure}\n  Let $X$ be a space and $k\\in\\Nat$.\n  Then the cohomology ring of $\\Twistedprod{k}{X}$ has the form\n  \\begin{align*}\n    \\SwapAboveDisplaySkip\n    \\H^*(\\Twistedprod{k}{X})\n    &\\cong\n      \\left(\n      \\Zmod2[c,s]/(c^{k+1},s^2,cs)\n      \\right)\n      \\otimes (N+D)\n  \\end{align*}\n  with $c$ of degree 1, $s$ of degree $k$, and the additional\n  properties\n  \\begin{enumerate}\n  \\item $c\\otimes N=0=s\\otimes D$, \n  \\item $\\pb\\proj x = c\\otimes d(1)$,\n    hence\n    \\begin{gather*}\n      \\pb\\proj\\colon\\H^*(\\RP k)\\cong\n      \\Zmod2[c]/(c^{k+1})\\subset\\H^*(\\Twistedprod k X)\n      \\;,\\quad\\text{and}\n    \\end{gather*}\n  \\item\\label{item:twistedprodcohom:pi}\n    $\\pb\\pi (s\\otimes n) = s_k\\otimes n$,\n    $\\pb\\pi (1\\otimes (n+d(a))) = 1\\otimes (n+d(a))$\n    for $n\\in N$ and $a\\in\\H^*(X)$,\n    hence\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\pb\\pi\\colon (1\\otimes(N+D)) + (s\\otimes N)\n      \\cong (1\\otimes(N+D))\\oplus (s_k\\otimes N)\n      \\;.\n    \\end{gather*}\n  \\end{enumerate}\n  For readability, skip $1\\otimes-$ and $-\\otimes d(1)$ in element\n  notation wherever the meaning is clear from the context.\n  Further note that $D$ is multiplicatively, but not additively closed,\n  whereas $(N+D)\\subset\\H^*(X^2)$ is a subring via\n  
$d(a)+d(b) = d(a+b)+(a\\otimes b+ b\\otimes a)$.\n\\end{Thm}\nA proof of Theorem~\\ref{thm:twistedprod:cohomstructure} can be found\nat the end of the section.\n\n\\subsubsection[Stiefel-Whitney Classes]\n{Stiefel-Whitney Classes of Twisted Products of Vector Bundles}\nThe results of this section will yield a splitting principle\napplicable to Stiefel-Whitney classes of twisted products of vector\nbundles.\n\nSome immediate consequences of the above structure theorem are:\n\\begin{Cor}\n  Let $X$ again be a space and $k\\in\\Nat$.\n  \\begin{enumerate}\n  \\item\n    For any section $s_p\\colon\\RP k\\to\\Twistedprod{k}{X}$, $q\\mapsto[q,p,p]$,\n    of the fiber bundle described in\n    \\eqref{eq:twistedprodrpnfiberbdl}, and $n_1,n_2\\in N$ and\n    $a\\in\\H^*(X)$ of positive degree, one has\n    \\begin{gather}\\label{eq:twistedprodcohom:section}\n      \\pb s_p(c) = x\n      \\;,\\qquad\\text{and}\\qquad\n      \\pb s_p(c\\otimes d(a) + 1\\otimes n_1 + s\\otimes n_2) = 0\n      \\;.\n    \\end{gather}\n  \\item\\label{item:twistedprod:preservescohominj}\n    $\\Twistedprod{k}{-}$ preserves injectivity on cohomology.\n  \\end{enumerate}\n  \\begin{proof}\n    The section property is clear from the properties of $\\pb\\proj$.\n    For the other statement, consider a map $f\\colon X\\to Y$ which is\n    injective on cohomology and induces the map\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      F\\colon\\Sphere{k}\\times X^2 \\longto \\Sphere{k}\\times Y^2\n      \\;,\\qquad\n      (s,(x_1,x_2)) \\longmapsto (s, (f(x_1), f(x_2)))\n      \\;.\n    \\end{gather*}\n    Since every element in $\\H^*(\\RP k)\\otimes D$ can uniquely be\n    written as $\\sum_{i=0}^k c^i\\cdot d(a_i)$,\n    one only has to check injectivity of $\\pb{\\Twistedprod k f}$ on\n    the two parts\n    \\begin{gather*}\n      \\left( \\Zmod2[c]/(c^{k+1})\\otimes 1 \\right)\n      \\qquad\\text{and}\\qquad\n      \\left(\n        (1\\otimes (N+D)) + (s\\otimes N)\n      \\right)\n      \\;.\n    \\end{gather*}\n    \\begin{description}\n    \\item[First part:]\n      Since $\\Twistedprod k f$ is a morphism of fiber bundles over\n      $\\RP k$,\n      \\begin{gather*}\n        \\pb{\\Twistedprod k f}(c)\n        = \\pb{\\Twistedprod k f}(\\pb\\proj(x))\n        = \\pb\\proj(x)\n        = c\n        \\in\\H^*(\\Twistedprod k X)\n        \\;,\n      \\end{gather*}\n      so $\\pb{\\Twistedprod k f}$ is injective on\n      $\\H^*(\\RP k)\\otimes 1\\subset\\H^*(\\Twistedprod k Y)$.\n    \\item[Second part:]\n      Obviously\n      $\\pb F\\colon\\H^*(\\Sphere k\\times Y^2)\\to\\H^*(\\Sphere k\\times X^2)$ \n      will be injective. 
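Indeed, under the K\u00fcnneth isomorphism $\\pb F$ corresponds to\n      $\\Id_{\\H^*(\\Sphere k)}\\otimes\\pb f\\otimes\\pb f$, and a tensor\n      product of injective maps of $\\Zmod2$\\nbd{}vector spaces is\n      again injective. 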
The isomorphism property of $\\pb\\pi$ hence\n      implies injectivity of $\\pb{\\Twistedprod k f}$ on\n      $(1\\otimes D)+(1\\otimes N)+(s_k\\otimes N)\\subset\\H^*(\\Twistedprod k Y)$.\n      \\qedhere\n    \\end{description}\n  \\end{proof}\n\\end{Cor}\n\n\\begin{Rem}\n  Let $\\xi$ be a vector bundle over a space $X$.\n  Recall that by the splitting\n  principle~\\cite[Theorem~(19.3.9)]{tomdieck}, $\\W{\\xi}$ is\n  pulled back to the product $\\prod_i\\W{\\xi_i}$ of total\n  Stiefel-Whitney classes of some line bundles $\\xi_i$,\n  along a map $f\\colon Y\\to X$ which is injective on cohomology.\n  Since $\\Twistedprod{k}{-}$ preserves Whitney sums and injectivity on\n  cohomology, $\\W{\\Twistedprod{k}{\\xi}}$ will injectively pull\n  back along $\\Twistedprod{k}{f}$ to the product\n  $\\prod_i\\W{\\Twistedprod{k}{\\xi_i}}$.\n\\end{Rem}\n\nThus, to get from the Stiefel-Whitney classes of a vector bundle to\nthe ones of its $k$th twisted product, a good approach is to\ninvestigate how $\\Twistedprod{k}{-}$ acts on the Stiefel-Whitney\nclasses of line bundles.\n\\begin{Cor}\\label{cor:twistedprod:swlinebdl}\n  Let $\\xi\\colon E\\to X$ be a line bundle, let $\\W{\\xi}=1+\\alpha$ be\n  its total Stiefel-Whitney class, and let $k\\in\\Nat$.\n  Define\n  \\begin{gather*}\n    e\\colon\\H^*(X)\\longto N\\subset\\H^*(X\\times X)\n    \\;,\\qquad\n    e(a)\\coloneqq 1\\otimes a+a\\otimes 1\n    \\;.\n  \\end{gather*}\n  Then, along the isomorphism from\n  Theorem~\\ref{thm:twistedprod:cohomstructure},\n  \\begin{gather*}\n    \\W{\\Twistedprod{k}{\\xi}} = 1+ (c_k\\otimes d(1)+1\\otimes\n    e(\\alpha)) + 1\\otimes d(\\alpha)\n    = 1+c_k+e(\\alpha)+d(\\alpha)\n    \\;,\n  \\end{gather*}\n  respectively $\\w1{\\Twistedprod{k}{\\xi}} = c_k+e(\\alpha)$,\n  $\\w2{\\Twistedprod{k}{\\xi}}=d(\\alpha)$.\n  \\begin{proof}\n    See also \\cite[Prop.~7.4, p.~1113]{brown}.\n    For $k=0$ this is simply the product rule for the total\n    Stiefel-Whitney class because $\\Twistedprod{0}{\\xi}=\\xi\\times\\xi$\n    and $c=0$. 
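Explicitly, in this case\n    \\begin{gather*}\n      \\W{\\xi\\times\\xi}\n      = \\W{\\xi}\\times\\W{\\xi}\n      = (1+\\alpha)\\otimes(1+\\alpha)\n      = 1 + (1\\otimes\\alpha+\\alpha\\otimes1) + \\alpha\\otimes\\alpha\n      = 1 + e(\\alpha) + d(\\alpha)\n      \\;.\n    \\end{gather*}\n    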
Thus, assume $k\\geq1$.\n    With $\\deg\\w{i}{\\Twistedprod{k}{\\xi}}\\leq\\rk\\Twistedprod{k}{\\xi}=2$,\n    the total Stiefel-Whitney class of the two-dimensional vector bundle\n    $\\Twistedprod{k}{\\xi}$ must by\n    Theorem~\\ref{thm:twistedprod:cohomstructure} be of the general\n    form\n    \\begin{align*}\n      \\W{\\Twistedprod{k}{\\xi}}\n      &= 1\n        + \\sum_{i=1}^k c^i\\otimes d(a_i)\n        + s\\otimes n'\n        + 1\\otimes(n+d(a))\n         \\\\\n      &\\equalsby{}\n        1\n        + \\underbrace{\n        c\\otimes d(a') + c^2\\otimes d(a'')\n        }_{\\text{check section}}\n        + \\underbrace{\n        s\\otimes n'\n        + 1\\otimes(n+d(a))\n        }_{\\text{check $\\pi$}}\n        \\;,\n        % 1 &\\quad&&&{(=\\ws0)} \\\\\n      % &+ \\delta_1\\cdot c\\otimes d(1) + \\delta_2\\cdot c^2\\otimes d(1)\n      %     &&\\delta_1,\\delta_2\\in\\{0,1\\}\n      %           &&{(\\text{check section})}\\\\\n      % &+ 1\\otimes d(a)\n      %     &&a \\neq 1\n      %           &&{(\\text{check $\\pi$})} \\\\\n      % &+ {\\textstyle \\sum_{r\\in I} c^{i_r}\\otimes d(a_r)}\n      %     &&a_r\\neq 1,\\; i_r>1\n      %           &&{(=0\\text{ by dim.})} \\\\\n      % &+ 1\\otimes n_1 + s\\otimes n_2\n      %     &&n_1, n_2\\in N\n      %           &&{(\\text{check $\\pi$})}\n    \\end{align*}\n    for some $n,n'\\in N$ and $a,a',a''\\in\\H^*(X)$,\n    with $\\deg a'\\leq 0\\geq\\deg a''$, $\\deg n'\\leq1\\geq\\deg a$, and\n    $\\deg n\\leq 2$.\n    In order to determine the unknown elements,\n    note that by Theorem~\\ref{thm:twistedprod:cohomstructure}\n    \\begin{compactitemize}\n    \\item $\\pb\\pi$ is an isomorphism on $s\\otimes N+1\\otimes(D+N)$, and\n    \\item for any point $p\\in X$ and section $s_p\\colon\\RP\n      k\\to\\Twistedprod{k}{X}$, $\\pb s_p$ is an isomorphism on\n      $\\{c^i\\otimes d(b)\\mid b\\in\\H^0(X), 1\\leq i\\leq k\\}$.\n    \\end{compactitemize}\n    So it remains to check the following:\n    \\begin{description}\n    \\item[$\\pb\\pi\\W{\\Twistedprod k \\xi}$:]\n      $\\Twistedprod{k}{\\xi}$ is the quotient of the bundle\n      $\\trivbdl\\times\\xi\\times\\xi\\colon\n      \\Twistedprodcovspace{k}{E}\\to\\Twistedprodcovspace{k}{X}$, where\n      \\begin{gather*}\n        \\SwapAboveDisplaySkip\n        \\W{\\Id\\times\\xi\\times\\xi}\n        = 1\\cdot \\W{\\xi}\\cdot\\W{\\xi}\n        = 1 + 1\\otimes e(\\alpha) + 1\\otimes d(\\alpha)\n        \\;.\n      \\end{gather*}\n      As $\\pi$ is a covering map,\n      $\\pb\\pi\\Twistedprod{k}{\\xi}=\\trivbdl\\times\\xi\\times\\xi$,\n      and thus\n      $\\pb\\pi\\W{\\Twistedprod{k}{\\xi}}=\\W{\\Id\\times\\xi\\times\\xi}$.\n      By\n      \\itemref{thm:twistedprod:cohomstructure}{item:twistedprodcohom:pi},\n      $\\pb\\pi$ is the identity on\n      elements of this form, so this yields\n      $n'=0$, $n=e(\\alpha)$, and $a=\\alpha$.\n    \\item[$\\pb s_p \\W{\\Twistedprod{k}{\\xi}}$:]\n      Consider a section $s_p\\colon [s]\\mapsto [s,p,p]$, $p\\in X$, of\n      the bundle $\\Twistedprod k X\\to\\RP k$. 
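Note that $s_p$ is well\n      defined, as $(s,p,p)$ and $(-s,p,p)$ represent the same point of\n      $\\Twistedprod{k}{X}$. 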
Since\n      $s_p=\\Twistedprod k{(\\ast\\mapsto p\\in X)}$,\n      the pullback of $\\Twistedprod{k}{\\xi}$ along $s_p$ yields\n      \\begin{align*}\n        \\pb s_p\\Twistedprod{k}{\\xi}\n        &= \\pb{\\Twistedprod{k}{\\ast\\mapsto p}}\n          \\left(\\Twistedprod{k}{\\xi}\\right) \\\\\n        &= \\Twistedprod{k}{\\pb{(\\ast\\mapsto p)}\\xi} \\\\\n        &= \\Twistedprod{k}{\\trivbdl^1}\n          \\cequalsby{\\ref{lem:twistedprodrealspace}}\n          \\gamma_k\\oplus\\trivbdl^1\n          \\colon\n          \\Twistedprod{k}{\\R} \\to \\RP k\\;,\\quad\n          [s, v_1,v_2] \\mapsto [s]\n          \\;.\n      \\end{align*}\n      Then\n      \\begin{gather*}\n        \\SwapAboveDisplaySkip\n        \\pb{s_p}\\W{\\Twistedprod{k}{\\xi}}\n        = \\W{\\pb{s_p}\\Twistedprod{k}{\\xi}}\n        = \\W{\\gamma_k\\oplus\\trivbdl}\n        = 1+x\\in\\H^*(\\RP k)\n        \\;,\n      \\end{gather*}\n      and thus $a'=1$, $a''=0$.\n      \\qedhere\n    \\end{description}\n  \\end{proof}\n\\end{Cor}\n\n\n\\subsubsection{A Proof of the Structure Theorem}\nThe rest of this section is dedicated to the proof of\nTheorem~\\ref{thm:twistedprod:cohomstructure} on the cohomology\nstructure of twisted products.\n\nThe essential step is to relate a twisted product to\n\\begin{enumerate}[1.]\n\\item its lower dimensional counterpart (analogous to the embedding of\n  a projective space into a higher dimensional one), and\n\\item well-known sequences of spheres and disks.\n\\end{enumerate}\nThis can be done using an alternative pushout construction, which is\nexplained below. The proofs are omitted since all facts are easy to\ncheck.\n\nHaving constructed a diagram of pairs of spaces with a corresponding\none of cohomology groups, the more tedious part then is to finalize\nthe proof with an inductive diagram chase, which is given for\ncompleteness but may be skipped by the reader.\n\n\\begin{Fact}\n  Let $X$ again be a space and $k\\in\\Nat_{\\geq1}$.\n  \\begin{enumerate}\n  \\item\n    The twisted product $\\Twistedprod{k}{X}$ is the pushout\n    $(\\Disk k\\times X^2)\\cup_{T}(\\Sphere{k-1}\\times X^2)$ of\n    \\begin{center}\n      \\begin{tikzcd}\n        \\Sphere{k-1}\\times X^2\n        &\\ar[from=l,leftarrow, \"T\"]\n        \\Sphere{k-1}\\times X^2\n        \\ar[r, rightarrowtail, \"\\incl\"]\n        &\\Disk{k}\\times X^2\n      \\end{tikzcd}\n    \\end{center}\n    \\idest it is the product $\\Disk k\\times X^2$ of the closed\n    $k$\\nbd{}disk with $X^2$ with the identification\n    $(s,p,q)=T((s,p,q))\\coloneqq(-s,q,p)$ on all boundary points in\n    $\\Boundary{\\Disk k}\\times X^2=\\Sphere{k-1}\\times X^2$.\n  \\item\n    The pushouts\n    \\begin{align*}\n      \\SwapAboveDisplaySkip\n      \\Twistedprod{k}{X}\n      &=(\\Disk k\\times X^2)\\cup_{T}(\\Sphere{k-1}\\times X^2)\n        \\qquad\\text{and}\\\\\n      \\Twistedprod{k-1}{X}\n      &=(\\Sphere{k-1}\\times X^2)\\cup_{T}(\\Sphere{k-1}\\times X^2)\n    \\end{align*}\n    merge to the commutative pushout diagram\n    \\begin{center}\n      \\begin{tikzcd}\n        \\Sphere{k-1}\\times X^2\n        \\ar[r, equals]\n        \\ar[d, \"T\"]\n        &\\Sphere{k-1}\\times X^2\n        \\ar[r, rightarrowtail, \"\\incl\"]\n        \\ar[d, \"\\pi\"]\n        &\\Disk{k}\\times X^2\n        \\ar[d]\n        \\\\\n        \\Sphere{k-1}\\times X^2\n        \\ar[r, \"\\pi\"]\n        &\\Twistedprod{k-1}{X}\n        \\ar[r, rightarrowtail, \"\\incl\"]\n        &\\Twistedprod{k}{X}\n      \\end{tikzcd}\n    \\end{center}\n    making 
$(\\Twistedprod{k}{X}, \\Twistedprod{k-1}{X})$ a neighborhood\ndeformation retract pair using the stability of cofibrations under pushout.\n    Furthermore, for a smaller disk $\\Disk{k}_+\\subsetneq\\Disk k$\n    there is an induced excision\n    \\begin{gather*}\n      (\\Disk k\\times X^2, \\Sphere{k-1}\\times X^2)\n      \\rightarrowtail\n      (\\Twistedprod{k}{X}, \\overline{\\Twistedprod{k-1}{X}})\n    \\end{gather*}\n    where\n    $\\overline{\\Twistedprod{k-1}{X}}\\coloneqq\n    ((\\Disk k\\setminus\\Disk{k}_+)\\times X^2)\n    \\cup_{T}\n    (\\Sphere{k-1}\\times X^2)$\n    is the embedded\n    $\\Twistedprod{k-1}{X}$ with a collar, \\idest a neighborhood\n    of $\\Twistedprod{k-1}{X}$ in\n    $\\Twistedprod{k}{X}$ which deformation retracts onto it.\n  \\item\n    The excision\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      (\\Disk{k}_+,\\Sphere{k-1}_+) \\amalg (\\Disk{k}_-,\\Sphere{k-1}_-)\n      \\immlongto\n      (\\Sphere k, \\Sphere{k-1}\\times I)\n    \\end{gather*}\n    of an upper and a lower polar cap into the sphere relative to its\n    thickened equator, is compatible with the excision from above\n    in the sense that the following diagram commutes:\n    \\begin{center}\n      \\begin{tikzcd}[row sep=large]\n        (\\Disk{k}_+,\\Sphere{k-1}_+) \\amalg (\\Disk{k}_-,\\Sphere{k-1}_-)\n        \\ar[r, hookrightarrow, \"\\text{excis.}\"]\n        \\ar[d, \"\\pi\"]\n        &(\\Sphere k, \\Sphere{k-1}\\times I)\n        \\ar[r, dash, \"\\simeq\"]\n        \\ar[d, \"\\pi\"]\n        &(\\Sphere k, \\Sphere{k-1})\n        \\ar[d, \"\\pi\"]\n        \\\\\n        (\\Disk{k}, \\Sphere{k-1})\\times X^2\n        \\ar[r, hookrightarrow, \"\\text{excis.}\"]\n        &(\\Twistedprod{k}{X},\\overline{\\Twistedprod{k-1}{X}})\n        \\ar[r, dash, \"\\simeq\"]\n        &(\\Twistedprod{k}{X}, \\Twistedprod{k-1}{X})\n      \\end{tikzcd}\n    \\end{center}\n  \\end{enumerate}\n\\end{Fact}\n\n\n\\raggedbottom % !! 
necessary since the diagrams are HUUUUGE\nThis now nicely fits into a larger diagram of pairs of spaces, which\ninduces a diagram of long exact sequences of cohomology.\n\\begin{Fact}\n  For a space $X$ and $k\\in\\Nat_{\\geq1}$ the following diagram commutes\n  \\begin{center}\n    \\begin{tikzcd}[column sep=small, row sep=huge]\n      (\\Disk{k}_+,\\Sphere{k-1}_+)\\times X^2\n      \\ar[r,hookrightarrow,\"\\tau\"]\n      %\\ar[rr, equals, bend left=30]\n      &\\displaystyle\\coprod_{+,-} (\\Disk{k},\\Sphere{k-1})\\times X^2\n      \\ar[r,\"\\pi\"]\n      &(\\Disk{k}_+,\\Sphere{k-1}_+)\\times X^2\n      \\\\\n      &\n      \\ar[from=u, hookrightarrow, \"\\text{excision}\"]\n      (\\Sphere{k},\\Sphere{k-1}\\times I)\\times X^2\n      \\ar[r, \"\\pi\"]\n      &\n      \\ar[from=u, hookrightarrow, \"\\text{excision}\"]\n      \\left(\n        \\Twistedprod{k}{X},\n        \\overline{\\Twistedprod{k-1}{X}}\n        % \\Twistedprod{k-1}{X}\\cup_{\\Sphere{k-1}_+} (\\Sphere{k-1}\\times I)\n      \\right)\n      \\\\\n      &\n      \\ar[u, dash, \"\\simeq\"]\n      (\\Sphere{k},\\Sphere{k-1})\\times X^2\n      \\ar[r, \"\\pi_k\"]\n      &\n      \\ar[u, dash, \"\\simeq\"]\n      (\\Twistedprod{k}{X},\\Twistedprod{k-1}{X})\n      \\ar[r, \"\\proj\"]\n      &(\\RP k, \\RP{k-1})\n      \\\\\n      \\ar[uuu, hookrightarrow, \"\\tilde j\"]\n      \\Disk{k}_+\\times X^2\n      \\ar[r, hookrightarrow, \"\\iota\"]\n      &\n      \\ar[u, hookrightarrow, \"\\hat j\"]\n      (\\Sphere k\\times X^2)\n      \\ar[r, \"\\pi_k\"]\n      &\n      \\ar[u, hookrightarrow, \"j\"]\n      \\Twistedprod{k}{X}\n      \\ar[r, \"\\proj\"]\n      &\n      \\ar[u, hookrightarrow]\n      \\RP k\n      \\\\\n      \\ar[u, hookrightarrow, \"\\tilde i\"]\n      \\Sphere{k-1}\\times X^2\n      \\ar[r, equals]\n      &\n      \\ar[u, hookrightarrow, \"\\hat i\"]\n      \\Sphere{k-1}\\times X^2\n      \\ar[r, \"\\pi_{k-1}\"]\n      &\n      \\ar[u, hookrightarrow, \"i\"]\n      \\Twistedprod{k-1}{X}\n      \\ar[r, \"\\proj\"]\n      &\n      \\ar[u, hookrightarrow]\n      \\RP{k-1}\n    \\end{tikzcd}\n  \\end{center}\n  with $\\pi\\circ\\tau=\\Id$, resulting in the commutative diagram of\n  exact cohomology sequences\n  \\begin{center}\n    \\begin{tikzcd}[column sep=small, row sep=huge]\n      \\vdots\\ar[d]&\\vdots\\ar[d]&\\vdots\\ar[d]\\\\\n      \\H^l((\\Disk{k},\\Sphere{k-1})\\times X^2)\n      \\ar[from=r, \"\\pb\\tau\"]\n      &\\H^l((\\Sphere{k},\\Sphere{k-1})\\times X^2)\n      \\ar[from=r, \"\\pb{\\pi_k}\"]\n      &\\H^l((\\Disk{k},\\Sphere{k-1})\\times X^2)\n      \\\\\n      \\ar[from=u, \"\\pb{\\tilde j}\"]\n      \\H^l(\\Disk{k}\\times X^2)\n      \\ar[from=r, \"\\pb{\\iota}\"]\n      &\n      \\ar[from=u, \"\\pb{\\hat j}\"]\n      \\H^l(\\Sphere k\\times X^2)\n      \\ar[from=r, \"\\pb{\\pi_k}\"]\n      &\n      \\ar[from=u, \"\\pb j\"]\n      \\H^l(\\Twistedprod{k}{X})\n      \\\\\n      \\ar[from=u, \"\\pb{\\tilde i}\"]\n      \\H^l(\\Sphere{k-1}\\times X^2)\n      \\ar[r, equals]\n      &\n      \\ar[from=u, \"\\pb{\\hat i}\"]\n      \\H^l(\\Sphere{k-1}\\times X^2)\n      \\ar[from=r, \"\\pb{\\pi_{k-1}}\"]\n      &\n      \\ar[from=u, \"\\pb i\"]\n      \\H^l(\\Twistedprod{k-1}{X})\n      \\\\\n      \\ar[from=u, \"\\tilde{\\delta}\"]\n      \\H^{l-1}((\\Disk{k},\\Sphere{k-1})\\times X^2)\n      \\ar[from=r, \"\\pb\\tau\"]\n      &\n      \\ar[from=u, \"\\hat{\\delta}\"]\n      \\H^{l-1}((\\Sphere{k},\\Sphere{k-1})\\times X^2)\n      \\ar[from=r, \"\\pb{\\pi_k}\"]\n      &\n      \\ar[from=u, \"\\delta\"]\n  
    \\H^{l-1}((\\Disk{k},\\Sphere{k-1})\\times X^2)\n      \\\\\n      \\vdots\\ar[from=u]&\\vdots\\ar[from=u]&\\vdots\\ar[from=u]\n    \\end{tikzcd}\n  \\end{center}\n  where $\\pb\\tau\\pb\\pi$ is the identity.\n\\end{Fact}\n\n\nThe useful thing about the cohomology diagrams arising from the\nabove diagram of pairs of spaces is that most of the columns and\nhorizontal maps are well-known. Some facts are listed below.\n\\begin{Fact}\n  Let $X$ again be a space and $k\\in\\Nat$.\n  \\begin{enumerate}\n  \\item\n    The cohomology sequences of the pairs\n    $(\\Disk{k},\\Sphere{k-1})\\times X^2$ and\n    $(\\Sphere{k},\\Sphere{k-1})\\times X^2$ are well-known from the\n    sequences of the pairs $(\\Disk{k},\\Sphere{k-1})$ and $(\\Sphere{k},\\Sphere{k-1})$.\n    For $k>1$ they are given for $l\\in\\Nat$ in the following\n    commutative diagram:\n    \\begin{center}\n      \\hspace*{-2em} % !!\n      \\begin{tikzcd}[row sep=tiny, column sep=small]\n        &\\argument{(1\\otimes f,s_{k-1}\\otimes g)}\n        \\ar[r, mapsto]\n        &\\argument{(s_k\\otimes g, s_k\\otimes g)}\n        \\\\\n        \\argument{(1\\otimes f, s_k\\otimes g)}\n        \\ar[r, mapsto]\n        &\\argument{(1\\otimes f, 0)}\n        &\\argument{(s_k\\otimes f, s_k\\otimes g)}\n        \\ar[r, mapsto]\n        &\\argument{(0, s_k\\otimes (f+g))}\n        \\\\\n        \\H^{l-1}(\\Sphere k\\times X^2)\n        \\ar[r, \"\\pb{\\hat i}\"]\n        &\\H^{l-1}(\\Sphere{k-1}\\times X^2)\n        \\ar[r, \"\\hat{\\delta}\"]\n        &\\H^l((\\Sphere{k},\\Sphere{k-1})\\times X^2)\n        \\ar[r, \"\\pb{\\hat j}\"]\n        &\\H^l(\\Sphere k\\times X^2)\n        \\\\\n        \\Splitlineoplus{1\\otimes\\H^{l-1}(X^2)}{s_k\\otimes\\H^{l-k-1}(X^2)}\n        \\ar[u, equals]\n        &\n        \\Splitlineoplus{1\\otimes\\H^{l-1}(X^2)}{s_{k-1}\\otimes\\H^{l-k}(X^2)}\n        \\ar[u, equals]\n        \\ar[ddddd, equals]\n        &\n        \\Splitlineoplus{s_k\\otimes\\H^{l-k}(X^2)}{s_k\\otimes\\H^{l-k}(X^2)}\n        \\ar[u, equals]\n        \\\\\n        \\argument{(1\\otimes f, s_k\\otimes g)}\n        \\ar[dd, mapsto, \"\\pb{\\iota}\"]\n        &&\\argument{(s_k\\otimes f, s_k\\otimes g)}\n        \\ar[dd, mapsto, \"\\pb{\\tau}\"]\n        \\\\~\\\\\n        \\argument{f}\n        && \\argument{s_k\\otimes f}\n        \\\\\n        \\H^{l-1}(X^2)\n        \\ar[d, equals]\n        &\n        &\n        s_k\\otimes\\H^{l-k}(X^2)\n        \\ar[d, equals]\n        \\\\\n        \\H^{l-1}(\\Disk k\\times X^2)\n        \\ar[r, \"\\pb{\\tilde i}\"]\n        &\\H^{l-1}(\\Sphere{k-1}\\times X^2)\n        \\ar[r, \"\\tilde{\\delta}\"]\n        &\\H^l((\\Disk{k},\\Sphere{k-1})\\times X^2)\n        \\ar[r, \"\\pb{\\tilde j}\", \"=0\"{below}]\n        &\\H^l(\\Disk k\\times X^2)\n        \\\\\n        \\argument{f}\n        \\ar[r, mapsto]\n        &\\argument{(1\\otimes f, 0)}\n        \\\\\n        &\\argument{(1\\otimes f,s_{k-1}\\otimes g)}\n        \\ar[r, mapsto]\n        &\\argument{s_k\\otimes g}\n      \\end{tikzcd}\n    \\end{center}\n    For $k=1$ one has the modifications\n    \\begin{center}\n      \\begin{tikzcd}[row sep=tiny]\n        &(1\\otimes f, s_0\\otimes g)\n        \\ar[r, mapsto, \"\\hat{\\delta}\"]\n        &(s_1\\otimes(f+g), s_1\\otimes(f+g))\n        \\\\\n        (1\\otimes f, s_1\\otimes g)\n        \\ar[r, mapsto, \"\\pb{\\hat i}\"]\n        &(1\\otimes f, s_0\\otimes f)\n        \\\\\n        f\n        \\ar[r, mapsto, \"\\pb{\\tilde i}\"]\n        &(1\\otimes f, s_0\\otimes f)\n        \\\\\n        
&(1\\otimes f, s_0\\otimes g)\n        \\ar[r, mapsto, \"\\tilde{\\delta}\"]\n        &s_1\\otimes(f+g)\n      \\end{tikzcd}\n    \\end{center}\n  \\item\n    Furthermore, it is easily seen that\n    \\begin{align*}\n      \\pb\\pi\\colon\n      \\H^l((\\Disk k, \\Sphere{k-1})\\times X^2)\n      &\\to \\H^l((\\Sphere k, \\Sphere{k-1})\\times X^2)\n      \\\\\n      s_k\\otimes x\\otimes y\n      &\\mapsto\n        (s_k\\otimes x\\otimes y, s_k\\otimes y\\otimes x)\n    \\end{align*}\n  \\item \n    It is known respectively easily seen that\n    \\begin{center}\n      \\begin{tikzcd}[row sep=small, column sep=small]\n        &x^k \\ar[r, mapsto]\n        \\ar[ddl, mapsto, bend right=30]\n        &x^k,\\quad\n        x \\ar[r, mapsto]\n        &x\n        \\\\\n        &\\H^*(\\RP k, \\RP{k-1})\n        \\ar[r]\n        \\ar[d, \"\\pb\\proj\"{near start}]\n        &\\H^*(\\RP k)\n        \\ar[r]\n        &\\H^*(\\RP{k-1})\n        \\\\\n        s_k\\otimes 1\\otimes 1\n        &\\H^*((\\Disk k, \\Sphere{k-1})\\times X^2)\n      \\end{tikzcd}\n    \\end{center}\n  \\end{enumerate}\n\\end{Fact}\n\n\\flushbottom % !!\n\nSome further immediate results are:\n\\begin{Cor}\\label{cor:twistedprodcohom:prelim}\n  Let $X$ be a space, $k\\in\\Nat$,\n  $a,b\\in\\H^*(X)$, $f\\in\\H^*(X^2)$, and let\n  $u\\in\\H^*(\\Twistedprod k X)$.\n  \\begin{enumerate}\n  \\item\\label{item:twistedprodcohom:prelim:pij} \n    $\\pb\\pi\\pb j\n    = \\pb{\\hat j}\\pb\\pi\\colon\n    s_k\\otimes(a\\otimes b)\\mapsto s_k\\otimes(a\\otimes b+b\\otimes a)$\n  \\item $\\delta = (\\pb\\tau\\pb\\pi)^{-1}\\tilde\\delta\\pb\\pi = \\tilde\\delta\\pb\\pi$\n  \\item $\\pb j(s_k\\otimes 1\\otimes 1) = c^k \\coloneqq \\pb\\proj(x^k)$\n  \\item $\\pb i(c_k) = c_{k-1}$\n  \\item\\label{item:twistedprodcohom:prelim:j}\n    $\\pb j(s_k\\otimes f)\\cdot u\n    = \\pb j\\left(s_k\\otimes(f\\cdot \\pb\\iota\\pb\\pi(u))\\right)$\n  \\end{enumerate}\n  \\begin{proof}\n    For \\ref{item:twistedprodcohom:prelim:j} first observe that the\n    group\n    \\begin{gather*}\n      \\H^*(\\Twistedprod k X, \\Twistedprod{k-1} X)\n      = \\H^*((\\Disk k,\\Sphere{k-1})\\times X^2)\n      = s_k\\otimes\\H^*(X^2)\n    \\end{gather*}\n    is both a module over $\\H^*(\\Twistedprod k X)$ and over\n    $\\H^*(\\Disk k\\times X^2)=\\H^*(X^2)$. 
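Here the\n    $\\H^*(\\Twistedprod k X)$\\nbd{}module structure is given by the\n    relative cup product of the pair\n    $(\\Twistedprod k X, \\Twistedprod{k-1} X)$ transported along the\n    excision isomorphism, and the $\\H^*(X^2)$\\nbd{}module structure by\n    the relative cup product of the pair\n    $(\\Disk k\\times X^2, \\Sphere{k-1}\\times X^2)$. 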
It is known that the\n    $\\H^*(X^2)$\\nbd{}module structure looks like\n    \\begin{gather*}\n      (s_k\\otimes a_1\\otimes b_1)\\cdot (a_2\\otimes b_2)\n      = s_k\\otimes\\big((a_1\\otimes b_1)\\cdot(a_2\\otimes b_2)\\big)\n      = s_k\\otimes(a_1a_2)\\otimes(b_1b_2)\n      \\;.\n    \\end{gather*}\n    Now the base change between the two module structures is the one\n    along $\\pb\\iota\\pb\\pi$, and this means that\n    for any $u\\in\\H^*(\\Twistedprod k X)$\n    and\n    $s_k\\otimes f \\in s_k\\otimes\\H^*(X^2)$\n    \\begin{gather*}\n      (s_k\\otimes f)\\cdot u\n      = (s_k\\otimes f)\\cdot\\pb\\iota\\pb\\pi(u)\n      = s_k\\otimes(f\\cdot \\pb\\iota\\pb\\pi(u))\n      \\;.\n    \\end{gather*}\n    As a last step note that\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\pb j\\colon\n      \\H^*(\\Twistedprod k X, \\Twistedprod{k-1} X)\n      \\to \\H^*(\\Twistedprod k X)\n    \\end{gather*}\n    is a morphism of $\\H^*(\\Twistedprod k X)$\\nbd{}modules, \\idest\n    \\begin{gather*}\n      \\pb j(f\\cdot g) = \\pb j(f)\\cdot g\n      \\qquad\\text{for\n        $f\\in\\H^*(\\Twistedprod k X, \\Twistedprod{k-1} X)$ and\n        $g\\in\\H^*(\\Twistedprod k X)$.}\n      \\qedhere\n    \\end{gather*}\n  \\end{proof}\n\\end{Cor}\n\nWe are going to inductively prove the following reformulation of\nTheorem~\\ref{thm:twistedprod:cohomstructure}.\n\\begin{Thm}\\label{thm:twistedprod:cohomstructure:alt}\n  For a space $X$ and $k\\in\\Nat$, $\\H^*(\\Twistedprod k X)$ is the group\n  \\begin{align*}\n    \\H^*(\\Twistedprod{k}{X})\n    &\\cong\n      \\bc k\\otimes D\n      \\oplus \\left(\\bigoplus_{i=1}^{k-1} \\bc i\\otimes D\\right)\n      \\oplus (1\\otimes(N+D))\n      \\oplus (s\\otimes N)\n    \\\\\n    &\\cong\n      \\bc k\\otimes D\n      \\oplus \\left(\\bigoplus_{i=1}^{k-1} \\bc i\\otimes D\\right)\n      \\oplus (1\\otimes D)\n      \\oplus (1\\otimes N)\n      \\oplus (s\\otimes N)\n      % \\\\\n      % &\\cong\n      % \\left(\\H^*(\\RP k) \\otimes D \\right)\n      % \\oplus \\left(\\H^*(\\Sphere k) \\otimes N\\right)\n  \\end{align*}\n  with the additive relation\n  $\\bc i\\otimes d(a) + \\bc i\\otimes d(b)=\\bc i\\otimes d(a+b)$ for\n  $a,b\\in\\H^*(X)$ and $1\\leq i\\leq k$,\n  which is equipped with a multiplication determined by:\n  \\begin{enumerate}\n  \\item\\label{twistedprodcohom:proof:0}\n    The following restriction of $\\pb\\pi$ is a ring isomorphism onto\n    its image\n    \\begin{align*}\n      \\pb\\pi\\colon\n      (1\\otimes D)\\oplus (1\\otimes N)\\oplus(s_k \\otimes N)\n      &\\longto\n        (1\\otimes(D+N))\\oplus(s_k\\otimes N)\n        \\subset\\H^*(\\Sphere k\\times X^2)\n      \\\\\n      1\\otimes d + 1\\otimes n_1 + s\\otimes n_2\n      &\\longmapsto\n        1\\otimes(d+n_1) + s_k\\otimes n_2\n        \\;.\n    \\end{align*}\n  \\item\\label{twistedprodcohom:proof:1}\n    $(\\bc i\\otimes d(a))\\cdot (\\bc j\\otimes d(b))\n    = \\bc{i+j}\\otimes d(ab)$ for $1\\leq i,j$ and $i+j\\leq k-1$,\n    so $\\bc i\\otimes d(1) = (\\bc 1\\otimes d(1))^i$.\n  \\item\\label{twistedprodcohom:proof:2}\n    $(\\bc i\\otimes d(a))\\cdot(1\\otimes d(b))\n    = \\bc i\\otimes d(ab)$ for $1\\leq i\\leq k-1$,\n    so $\\bc i\\otimes D = (\\bc 1\\otimes 1)^i\\cdot (1\\otimes D)$.\n  \\item\\label{twistedprodcohom:proof:3}\n    $(\\bc k\\otimes d(a))\\cdot(1\\otimes d(b))\n    = \\bc k\\otimes d(ab)$,\n    so $\\bc k\\otimes D = (\\bc 1\\otimes 1)^k\\cdot (1\\otimes D)$.\n  \\item\\label{twistedprodcohom:proof:4}\n    $\\pb\\proj(x^k) = \\bc 
k\\otimes d(1)$ and\n    $\\pb\\proj(x) = \\bc 1\\otimes d(1)$,\n    so $(\\bc 1\\otimes d(1))^k = \\bc k\\otimes d(1)$ and\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\ker\\pb\\pi\n      = (\\bc k\\otimes D)\n      \\oplus \\left( \\bigoplus_{i=1}^{k-1}\\bc i\\otimes D \\right)\n      = \\sum_{i=1}^k c^i\\cdot (1\\otimes D)\n      \\;.\n    \\end{gather*}\n  \\item\\label{twistedprodcohom:proof:5}\n    $c\\cdot (1\\otimes N + s\\otimes N) = 0$,\n    so $(\\bc i \\otimes D)\\cdot (1\\otimes N + s\\otimes N) = 0$\n    for $1\\leq i\\leq k$.\n  \\end{enumerate}\n\\end{Thm}\n\\begin{Rem}\n  Note that the demanded multiplication properties of the\n  $\\Zmod2$\\nbd{}vector space in the theorem do already fully determine a\n  multiplication, as all cases of combinations of components are\n  covered ($1\\leq i,j\\leq k$):\n  \\begin{compactdescription}\n  \\item[$(1\\otimes(D+N)\\oplus s\\otimes N)\\cdot (1\\otimes(D+N)\\oplus s\\otimes N)$:]\n    \\ref{twistedprodcohom:proof:0}\n  \\item[$(\\bc i\\otimes D)\\cdot (\\bc j\\otimes D)$:]\n    By \\ref{twistedprodcohom:proof:1} and \\ref{twistedprodcohom:proof:2}\n    $(\\bc i\\otimes D)=(\\bc 1\\otimes 1)^i\\cdot (1\\otimes D)$,\n    which can be simplified with\n    $(\\bc 1\\otimes 1)=c_k\\coloneqq \\pb\\proj x$\n    from \\ref{twistedprodcohom:proof:4}\n    to $(\\bc i\\otimes D)=c^i\\cdot(1\\otimes D)$.\n    Analogously with \\ref{twistedprodcohom:proof:3}\n    and \\ref{twistedprodcohom:proof:4}\n    $(\\bc k\\otimes D) = c^k\\cdot(1\\otimes D)$.\n  \\item[$(\\bc i\\otimes D)\\cdot (1\\otimes D)$:]\n    \\ref{twistedprodcohom:proof:2} resp. \\ref{twistedprodcohom:proof:3}\n    for the case $i=k$\n  \\item[$(\\bc i\\otimes D)\\cdot (1\\otimes N) = 0$:]\n    \\ref{twistedprodcohom:proof:5}\n  \\item[$(\\bc i\\otimes D)\\cdot (s\\otimes N) = 0$:]\n    \\ref{twistedprodcohom:proof:5}\n  \\end{compactdescription}\n\\end{Rem}\n\\begin{proof}[Proof of\n  Theorem~\\ref{thm:twistedprod:cohomstructure:alt}\n  respectively Theorem~\\ref{thm:twistedprod:cohomstructure}]\n  This roughly follows the proof of \\cite[Theorem~7.1]{brown}.\n  With the above reformulation, the proof is a straightforward\n  induction on $k$.\n  \\Idest split up the $\\Zmod2$\\nbd{}vector space $\\H^*(\\Twistedprod k X)$\n  into direct summands of the form in the theorem, which are then\n  either known from the case $k-1$, or calculable, and then check the\n  needed multiplication and isomorphism properties.\n  The split looks as follows:\n  \\begin{align*}\n    \\SwapAboveDisplaySkip\n    \\Im\\pb j\n    &\\cong \\ker(\\pb\\pi|_{\\Im\\pb j})\n      \\oplus \\left(\\Im\\pb j/\\ker(\\pb\\pi|_{\\Im\\pb j})\\right) \\\\*\n    &\\cong \\pb j(\\ker(\\pb\\pi\\pb j))\n      \\oplus \\Im(\\pb\\pi|_{\\Im\\pb j}) \\\\*\n    &\\cong \\pb j(\\ker(\\pb\\pi\\pb j))\n      \\oplus \\Im(\\pb\\pi\\pb j) \n    \\displaybreak[1]\\\\\n    \\Im\\pb i\n    &= \\ker\\delta = \\ker(\\tilde\\delta\\pb\\pi_{k-1})\\\\*\n    &= \\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}})\n      \\oplus \\ker\\pb\\pi_{k-1}\n    \\displaybreak[1]\\\\\n    \\H^*(\\Twistedprod k X)\n    &\\cong \\Im\\pb j\n      \\oplus \\left( \\H^*(\\Twistedprod k X)/\\Im\\pb j \\right)\\\\*\n    &= \\Im\\pb j\n      \\oplus \\left( \\H^*(\\Twistedprod k X)/\\ker\\pb i \\right)\\\\*\n    &\\cong \\Im\\pb j \\oplus \\Im\\pb i\\\\*\n    &\\cong \\pb j(\\ker(\\pb\\pi\\pb j))\n      \\oplus \\Im(\\pb\\pi\\pb j)\n      \\oplus \\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}})\n      \\oplus \\ker\\pb\\pi_{k-1}\n  \\end{align*}\n  In 
the end this is supposed to look like\n  \\begin{align*}\n    \\pb j(\\ker(\\pb\\pi\\pb j))\n    &= c_k^k\\cdot(\\pb\\pi)^{-1}(1\\otimes D)\\cong c^k\\otimes D\n    \\\\\n    \\ker\\pb\\pi_{k-1}\n    &= {\\textstyle\\bigoplus_{i=1}^{k-1}} c_{k-1}^i\\otimes D\n    \\\\\n    \\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}})\n    &= 1\\otimes(N+D)\n    \\\\\n    \\Im(\\pb\\pi\\pb j) &= s_k\\otimes N\n  \\end{align*}\n  so the symbols from the theorem's notation correspond to\n  $\\bc i = c_{k-1}^i$ for $i\\leq k-1$, $\\bc k = c_k^k$, $s=s_k$.\n\n  \\minisec{Induction step}\n  We will start with the induction step, so assume for a space $X$ and\n  $k>1$ that Theorem~\\ref{thm:twistedprod:cohomstructure:alt} holds in\n  the case $k-1$.\n  Begin with identifying the direct summands of the vector space,\n  carefully tracking where the induction assumption enters.\n  \\begin{description}\n  \\item[$\\Im(\\pb\\pi\\pb j)=s_k\\otimes N$:]\n    Recall from\n    Corollary~\\itemref{cor:twistedprodcohom:prelim}{item:twistedprodcohom:prelim:pij}\n    that\n    \\begin{align*}\n      \\pb\\pi\\pb j\\colon\n      s_k\\otimes\\H^*(X^2)\n      &\\to 1\\otimes\\H^*(X^2)\\oplus s_k\\otimes\\H^*(X^2)\\\\\n      s_k\\otimes(a\\otimes b)\n      &\\mapsto s_k\\otimes(a\\otimes b+b\\otimes a)\n    \\end{align*}\n    and hence $\\Im(\\pb\\pi\\pb j)=s_k\\otimes N$.\n    Obviously $\\pb\\pi$ maps this part of $\\H^*(\\Twistedprod k X)$\n    isomorphically to $s_k\\otimes N\\subset\\H^*(\\Sphere k\\times X^2)$,\n    already inducing multiplication.\n  \\item[$\\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}})\\cong 1\\otimes (D+N)$:]\n    By the assumption on $k-1$\n    \\begin{gather*}\n      \\Im(\\pb\\pi_{k-1})\n      =(1\\otimes(D+N))\\oplus (s_{k-1}\\otimes N)\n      \\subset\\H^*(\\Sphere{k-1}\\times X^2)\n      \\;.\n    \\end{gather*}\n    With $\\ker(\\tilde\\delta) = 1\\otimes\\H^*(X^2)$, one gets\n    $\\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}}) = 1\\otimes (D+N)$.\n    Since $\\pb\\pi_{k-1}$ is the identity on\n    $1\\otimes(D+N)\\subset\\H^*(\\Twistedprod{k-1} X)$ and $\\pb{\\hat i}$ is\n    the identity on $1\\otimes(D+N)$ in $\\H^*(\\Sphere k\\times X^2)$,\n    $\\pb\\pi_{k}$ and $\\pb i$ are both the identity on\n    $\\ker(\\tilde\\delta|_{\\Im\\pb\\pi_{k-1}})$, inducing ring structure\n    on this subring.\n  \\item[$\\ker(\\pb\\pi_{k-1})\\cong \\bigoplus_{i=1}^{k-1} c_{k-1}^i\\otimes D$:]\n    This holds by the induction assumption.\n    Note that by construction $\\pb i$ is injective on this direct\n    summand which is a direct summand of $\\Im\\pb i$,\n    hence the ring structure is inherited.\n    In order to see that this summand lies in the kernel of\n    $\\pb\\pi_k$, one has to first see that\n    $c_k=c_{k-1}\\otimes d(1)\\in\\H^*(\\Twistedprod k X)$, respectively\n    that in the direct sum notation\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      c_k = f + 1\\otimes d(a) + 1\\otimes n + s\\otimes n'\n      + \\sum_{i=1}^{k-1} c_{k-1}^i\\otimes d(a_i)\n    \\end{gather*}\n    all summands are zero except for $c_{k-1}\\otimes d(1)$.\n    So check all possible components:\n    \\begin{itemize}\n    \\item $f\\in\\pb j(\\ker\\pb\\pi\\pb j)$ has degree greater than $k$ as will\n      be seen below, so with $k>1$ it has\n      to be zero.\n    \\item $\\pb i(c_k) = c_{k-1}$, so $1\\otimes d(a)+1\\otimes n=0$.\n    \\item $\\pb\\pi(c_k)= 0$, so $s\\otimes n'=0$.\n    \\end{itemize}\n    Thus\n    $\\ker(\\pb\\pi_{k-1})\n    \\cong \\sum_{i=1}^{k-1}c_k^i\\cdot (1\\otimes D)\n    
\\subset\\ker\\pb\\pi$.\n  \\item[$\\pb j(\\ker(\\pb\\pi\\pb j))\\cong c_k^k\\otimes D$:]\n    Again recall\n    Corollary~\\itemref{cor:twistedprodcohom:prelim}{item:twistedprodcohom:prelim:pij}\n    to directly obtain\n    \\begin{align*}\n      \\ker(\\pb\\pi\\pb j) &= s_k\\otimes(N+D)\n                          \\;.\\\\\n      \\intertext{So one has in general}\n      \\pb j(\\ker(\\pb\\pi\\pb j))\n      &= \\pb j(s_k\\otimes(N+D)) \\\\\n      &\\equalsby{Corollary~\\itemref{cor:twistedprodcohom:prelim}{item:twistedprodcohom:prelim:j}}\n        \\pb j(s_k\\otimes 1\\otimes 1) \\cdot (\\pb\\iota\\pb\\pi)^{-1}(N+D) \\\\\n      &= c^k \\cdot \\left( (\\pb\\pi)^{-1}(1\\otimes (D+N))\\right)\n        \\subset \\ker\\pb\\pi\n        \\;.\n    \\end{align*}\n    Note that this lives, as was used above, in degree greater $k-1$.\n    Since by\n    Corollary~\\itemref{cor:twistedprodcohom:prelim}{item:twistedprodcohom:prelim:pij}\n    $c^k\\cdot \\ker\\pb\\pi=\\pb\n    j(s_k\\otimes\\pb\\iota\\pb\\pi(\\ker\\pb\\pi))=\\pb j(s_k\\otimes 0)=0$\n    and the effect of $\\pb\\pi$ on all direct summands is known:\n    \\begin{align*}\n      \\pb j(\\ker(\\pb\\pi\\pb j))\n      &= c^k \\cdot \\left( (\\pb\\pi)^{-1}(1\\otimes (D+N))\\right) \\\\\n      &= c^k \\cdot \\left( \\ker\\pb\\pi + (1\\otimes (D+N))\\right) \\\\\n      &= c^k \\cdot (1\\otimes (D+N))\n    \\end{align*}\n    The next step is to use that by induction assumption and the form\n    of $\\tilde\\delta$ holds\n    \\begin{gather*}\n      \\ker\\pb j=\\Im\\delta=\\Im\\tilde\\delta\\pb\\pi = s_k\\otimes N\n    \\end{gather*}\n    hence $c^k\\cdot (\\pb\\pi)^{-1}(1\\otimes N) = \\pb j(s_k\\otimes N)=0$,\n    $c^k\\cdot -$ is injective on the set $(1\\otimes D)$, and\n    \\begin{align*}\n      c^k\\cdot(1\\otimes d(a)) + c^k \\cdot(1\\otimes d(b))\n      &= c^k\\cdot \\left( 1\\otimes (d(a)+d(b)) \\right) \\\\\n      &= c^k\\cdot \\left( 1\\otimes\n        (d(a+b) + a\\otimes b + b\\otimes a)\n        \\right) \\\\\n      &= c^k\\cdot (1\\otimes d(a+b)) + c^k\\cdot(a\\otimes b + b\\otimes a)\\\\\n      &= c^k\\cdot (1\\otimes d(a+b))\n        \\;.\n    \\end{align*}\n    Altogether, this yields the desired isomorphism\n    \\begin{gather*}\n      \\pb j(\\ker(\\pb\\pi\\pb j))\n      = c_k^k\\cdot(\\pb\\pi)^{-1}(1\\otimes D)\\cong c^k\\otimes D\n      \\;.\n    \\end{gather*}\n  \\end{description}\n  Now it remains to check the multiplication properties:\n  \\begin{description}\n  \\item[\\ref{twistedprodcohom:proof:0}:]\n    Checked during construction.\n  \\item[\\ref{twistedprodcohom:proof:1}, \\ref{twistedprodcohom:proof:2}:]\n    These are the multiplication properties induced by $\\pb i$ being\n    injective on the corresponding summands, thus inheriting the\n    multiplication properties from the induction assumption.\n  \\item[\\ref{twistedprodcohom:proof:3}:]\n    It was shown that $c^k\\otimes D\\cong c^k\\cdot (1\\otimes D)$.\n  \\item[\\ref{twistedprodcohom:proof:4}:]\n    This was shown during the identification of $\\ker(\\pb\\pi_{k-1})$\n    as part of $\\ker\\pb\\pi$.\n  \\item[\\ref{twistedprodcohom:proof:5}:]\n    $c_k\\cdot(1\\otimes N)$ is inherited via $\\pb i$ from the assumed\n    structure of $\\H^*(\\Twistedprod{k-1} X)$ using\n    $c_k = c_{k-1}\\otimes d(1)$.\n    The fact $c_k\\cdot(s_k\\otimes N)=0$ follows from\n    \\begin{gather*}\n      \\ker\\pb\\pi\\pb j\\cdot\\Im\\pb j\n      \\subset \\pb j(s_k\\otimes(\\H^*(X)\\cdot \\pb\\iota\\pb\\pi(\\ker\\pb\\pi)))\n      = \\pb j(s_k\\otimes 0) = 0\n    
\\end{gather*}\n    using $c_k\\in\\ker\\pb\\pi$, and $(s_k\\otimes N)\\subset\\Im\\pb j$.\n  \\end{description}\n  \n  \\minisec{Induction Start}\n  The case $k=0$ is a bit simpler as\n  $\\Twistedprod 0 X=X^2$. One still has\n  $\\Im\\pb\\pi\\pb j=s_1\\otimes N$, and, using the known formula\n  \\begin{gather*}\n    \\pb\\pi(a\\otimes b)=(1\\otimes a\\otimes b, s_0\\otimes b\\otimes a)\n    \\in \\H^*(S^0\\times X^2)=\\H^*(X^2)^2\n    \\;,\n  \\end{gather*}\n  one gets\n  $\\delta(a\\otimes b)\n  = \\tilde\\delta\\pb\\pi(a\\otimes b)\n  = s_1\\otimes(a\\otimes b + b\\otimes a)$, giving\n  $\\ker\\delta=N+D$, and $\\Im\\delta = \\ker\\pb j = s_1\\otimes N$.\n  Hence, analogously to above, one obtains\n  $\\pb j (\\ker(\\pb\\pi\\pb j)) \\cong c\\otimes D$ and\n  $c\\cdot (1\\otimes N + s_1\\otimes N) = 0$.\n\\end{proof}\n\n\n\\subsection{Indecomposability of Twisted Product Cobordism Classes}\n\\label{sec:twistedprod:indecompcriterion}\nNow that the cohomology ring and Stiefel-Whitney classes of line\nbundles of a twisted product are well-known, one can investigate\ngeneral Stiefel-Whitney numbers of twisted product manifolds.\nAs promised, the following result will be the cornerstone when inductively\ndefining the desired basis for the cobordism ring in the subsequent section.\n\\begin{Thm}\\label{thm:twistedprod:indecompcriterion}\n  Let $M^n$ be a manifold and $k\\in\\Nat_{\\geq1}$.\n  Then $\\Twistedprod{k}{M}$ represents an indecomposable class of the\n  cobordism ring if and only if $M$ does and $\\binom{k+n-1}{n}$ is\n  non-zero modulo two.\n\\end{Thm}\nRecall Theorem~\\ref{thm:indecomposabilitycriterion}, saying\na manifold $M^n$ represents an indecomposable element if and only if\n$\\snum{(n)}{M}$ is non-zero,\nand recall that $\\Twistedprod{k}{-}$ preserves injectivity on\ncohomology by\nTheorem~\\itemref{thm:twistedprod:cohomstructure}{item:twistedprod:preservescohominj}.\nThen Theorem~\\ref{thm:twistedprod:indecompcriterion} is a\ndirect consequence of the following Lemma.\n\\begin{Lem}\\label{lem:twistedprod:indecompcriterion}\n  Let $M$, $n$, $k$ be as in\n  Theorem~\\ref{thm:twistedprod:indecompcriterion} above.\n  Then there is a map of spaces $f\\colon X\\to M$ which is injective on\n  cohomology and fulfills\n  \\begin{gather*}\n    \\pb{\\Twistedprod{k}{f}} \\s{(2n+k)}(\\Twistedprod{k}{M})\n    = \\binom{k+n-1}{n} \\cdot c^k\n    \\cdot d\\left( \\pb f \\s{(n)}(M) \\right)\n    \\in \\H^{2n+k}(\\Twistedprod{k}{X})\n    \\;.\n  \\end{gather*}\n\\end{Lem}\n\\begin{proof}[Proof of Lemma~\\ref{lem:twistedprod:indecompcriterion}]\n  Take $f$ to be a reduction of $\\T M$ to line bundles\n  using the splitting principle~\\cite[Theorem~(19.3.9)]{tomdieck}.\n  \\Idest choose a space $X$ and a map $f\\colon X\\to M$ which is\n  injective on cohomology and fulfills\n  $\\pb f \\T M = \\xi_1\\oplus\\dotsb\\oplus\\xi_n$ for line bundles\n  $\\xi_i$ over $X$, each with total Stiefel-Whitney class\n  $\\W{\\xi_i}=1+\\alpha_i$.\n  With the fiber bundle properties from\n  Remark~\\itemref{rem:twistedprodproperties}{item:twistedprodfiberbdl}%\n  \\ref{item:twistedprod:preservespb}\n  and the tangent space structure from\n  Remark~\\itemref{rem:twistedprodproperties}{item:twistedprodmanifold}%\n  \\ref{item:twistedprod:tangentspace}\n  this yields on vector bundles:\n  \\begin{align*}\n    \\pb{\\Twistedprod{k}{f}} \\Twistedprod{k}{\\T M}\n    &\\clapequalsby{\\ref{rem:twistedprodproperties}}\n      \\Twistedprod{k}{\\pb f \\T M}\n      = \\Twistedprod{k}{\\bigoplus_{i\\leq n}\\xi_i}\n     
 = \\bigoplus_{i\\leq n}\\Twistedprod{k}{\\xi_i}\n    \\\\\n    \\pb{\\Twistedprod{k}{f}} \\T{\\Twistedprod{k}{M}}\n    &\\clapequalsby{\\ref{rem:twistedprodproperties}}\n      \\pb{\\Twistedprod{k}{f}} \\left(\n      \\T{\\RP k} \\oplus \\Twistedprod{k}{\\T M}\n      \\right)\\\\\n    &= \\pb{\\Twistedprod{k}{f}} \\T{\\RP k}\n      \\oplus\n      \\pb{\\Twistedprod{k}{f}} \\Twistedprod{k}{\\T M}\n      = \\T{\\RP k} \\oplus \\bigoplus_{i\\leq n}\\Twistedprod{k}{\\xi_i}\\\\\n    \\intertext{And on Stiefel-Whitney classes:}\n    \\pb{\\Twistedprod{k}{f}} \\W{\\Twistedprod{k}{\\T M}}\n    &= \\prod_{i\\leq n} \\W{\\Twistedprod{k}{\\xi_i}}\n      \\cequalsby{\\ref{cor:twistedprod:swlinebdl}}\n      \\prod_{i\\leq n} \\left(1 + c + e(\\alpha_i) + d(\\alpha_i)\\right)\n    \\\\\n    \\pb{\\Twistedprod{k}{f}} \\W{\\T\\Twistedprod{k}{M}}\n    &= \\pb{\\Twistedprod{k}{f}}\\W{\\T{\\RP k}}\n      \\cdot \\prod_{i\\leq n} \\W{\\Twistedprod{k}{\\xi_i}} \\\\\n    &= (c+1)^{k+1}\n      \\cdot \\prod_{i\\leq n} \\left(1+c+e(\\alpha_i)+d(\\alpha_i)\\right)\n  \\end{align*}\n  In order to work with these Stiefel-Whitney class expressions as\n  symmetric polynomials, introduce variables $u_i, v_i$ of degree\n  one such that\n  \\begin{align*}\n    \\w1{\\Twistedprod{k}{\\xi_i}} &= c+e(\\alpha_i) = \\el 1 2(u_i, v_i) = u_i+v_i \\\\\n    \\w2{\\Twistedprod{k}{\\xi_i}} &= d(\\alpha_i)   = \\el 2 2(u_i, v_i) = u_iv_i \\\\\n    \\W{\\Twistedprod{k}{\\xi_i}}  &= 1+c+e(\\alpha_i)+d(\\alpha_i)\n                 = 1+\\sum_{j\\leq2}\\el j 2(u_i,v_i) = (1+u_i)(1+v_i)\n                 \\;.\n  \\end{align*}\n  The key point now qualifying $f$ for the proof is that both $f$ and\n  thus by\n  Theorem~\\itemref{thm:twistedprod:cohomstructure}{item:twistedprod:preservescohominj}\n  also $\\Twistedprod{k}{f}$ are injective on cohomology, and fulfill\n  \\begin{enumerate}\n  \\item with $\\W{\\pb f\\T M} = \\prod_{i=1}^n (1+\\alpha_i)$ that\n    \\begin{gather}\n      \\SwapAboveDisplaySkip\n      \\label{eq:lem:twistedprod:indecompcriterion:sM}\n      \\pb f \\s{(n)}(M)\n      = \\s{(n)}(\\W{\\pb f\\T M})\n      = \\sum_{i=1}^n \\alpha_i^n\n      \\;\n    \\end{gather}\n  \\item and with\n    $\\W{\\pb f\\T{\\Twistedprod{k}{M}}}\n    = \\prod_{i=1}^{k+1}(1+c) \\cdot \\prod_{i=1}^n\n    (1+u_i)(1+v_i)$\n    that\n    \\begin{align}\\notag\n      \\pb{\\Twistedprod{k}{f}} \\s{(2n+k)}(\\Twistedprod{k}{M})\n      &= \\s{(2n+k)}(\\W{\\pb f\\T{\\Twistedprod{k}{M}}}) \\\\\n      \\label{eq:lem:twistedprod:indecompcriterion:sDkM}\n      &= (k+1)c^{2n+k} + \\sum_{i=1}^n (u_i^{2n+k} + v_i^{2n+k})\n        \\;.\n    \\end{align}\n  \\end{enumerate}\n  In order to formulate\n  $\\pb{\\Twistedprod{k}{f}}\\s{(2n+k)}(\\Twistedprod{k}{M})$ \n  in terms of $c$ and $d(\\alpha_i)$,\n  use one of the Newton-Girard formulas saying:\n  \\begin{Lem}[Newton-Girard]\n    For integers $l, m\\in\\Nat$ one has\n    \\begin{align*}\n      \\symm{l} t^m\n      &= \\sum_{\\mathclap{r_1+2r_2+\\dotsb+mr_m=m}}\n        (-1)^m \\frac{m\\cdot(r_1+\\dotsb+r_m-1)!}{(r_1)!\\dotsm(r_m)!}\n        \\cdot \\prod_{i=1}^m (-\\el i l)^{r_i}\n        \\;.\n    \\end{align*}\n    As a special case for $l=2$ this becomes modulo 2\n    \\begin{align*}\n      t_1^m + t_2^m\n      &= \\sum_{\\mathclap{r_1+2r_2=m}}\n        \\frac{m\\cdot(r_1+r_2-1)!}{(r_1)!(r_2)!}\n        \\cdot (\\el 1 2)^{r_1} \\cdot (\\el 2 2)^{r_2}\n        = \\sum_{\\mathclap{r_1+2r_2=m}}\n        \\altbinom{r_1-1}{r_2} (t_1+t_2)^{r_1}(t_1t_2)^{r_2}\n    \\end{align*}\n    where $\\frac{(r_1+2r_2)(r_1+r_2-1)!}{(r_1)!(r_2)!}\n    = \\binom{r_1+r_2-1}{r_2} + 2\\binom{r_1+r_2-1}{r_1}$\n    and the notation $\\altbinom{p}{q}\\coloneqq \\binom{p+q}{q}$\n    was used.\n    \\begin{proof}\n      A proof of the main Newton-Girard formula can be found in\n      \\cite[Theorem~10.12.2]{raymond}. This special case can\n      immediately be obtained by iterated application of the formula.\n    \\end{proof}\n  \\end{Lem}\n
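  For instance, for $m=3$ the only solutions of $r_1+2r_2=3$ are\n  $(r_1,r_2)=(3,0)$ and $(1,1)$, so the special case reads\n  \\begin{gather*}\n    t_1^3+t_2^3\n    = \\altbinom{2}{0}(t_1+t_2)^3 + \\altbinom{0}{1}(t_1+t_2)(t_1t_2)\n    = (t_1+t_2)^3 + (t_1+t_2)\\,t_1t_2\n    \\;,\n  \\end{gather*}\n  which is easily verified by expanding the right hand side modulo 2.\n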
  Before applying this to $t_1=u_i$, $t_2=v_i$, $m=2n+k$, and $l=2$,\n  recall the following special properties from\n  Theorem~\\ref{thm:twistedprod:cohomstructure} of the mentioned symbols in\n  $\\H^*(\\Twistedprod{k}{X})$:\n  \\begin{itemize}\n  \\item $c^{k+1}=0$,\n  \\item $c\\cdot e(\\alpha_i)=0$, and\n  \\item $e(\\alpha_i)^{r_1}d(\\alpha_i)^{r_2} = 0$ for $r_1+2r_2>2n$,\n    as $\\H^{*>2n}(M\\times M)=0$.\n  \\end{itemize}\n  Then simplify in terms of $c$, $e(\\alpha_i)$, $d(\\alpha_i)$:\n  \\begin{align}\\notag\n    u_i^{2n+k} + v_i^{2n+k}\n    &=\n      \\sum_{{r_1+2r_2=2n+k}}\n      \\altbinom{r_1-1}{r_2} (u_i+v_i)^{r_1}(u_iv_i)^{r_2} \\\\\\notag\n    &\\equalsby{Def.}\n      \\sum_{{r_1+2r_2=2n+k}}\n      \\altbinom{r_1-1}{r_2}\n      (c+e(\\alpha_i))^{r_1}d(\\alpha_i)^{r_2} \\\\\\notag\n    &\\equalsby{$c\\cdot e(\\alpha_i)=0$}\n      \\sum_{{r_1+2r_2=2n+k}} \\altbinom{r_1-1}{r_2}\n      \\left(c^{r_1}d(\\alpha_i)^{r_2}+e(\\alpha_i)^{r_1}d(\\alpha_i)^{r_2}\\right) \\\\\\notag\n    &\\equalsby{$c^{k+1}=0$, $d(\\alpha_i)^{r_2}=0$ for $r_2>n$}\n      \\altbinom{k-1}{n} c^k d(\\alpha_i)^n\n      + \\sum_{\\mathclap{r_1+2r_2=2n+k}} \\altbinom{r_1-1}{r_2}\n      e(\\alpha_i)^{r_1}d(\\alpha_i)^{r_2} \\\\\n    \\label{eq:lem:twistedprod:indecompcriterion:1}\n    &\\equalsby{$e(\\alpha_i)^{r_1}d(\\alpha_i)^{r_2}=0$ for $r_1+2r_2>2n$}\n    \\altbinom{k-1}{n} c^k d(\\alpha_i)^n      \n  \\end{align}\n  Altogether it turns out that\n  \\begin{align*}\n    \\pb{\\Twistedprod{k}{f}}\\s{(2n+k)}(\\Twistedprod{k}{M})\n    &\\clapequalsby{\\eqref{eq:lem:twistedprod:indecompcriterion:sDkM}}\n      (k+1)c^{2n+k} + \\sum_{i=1}^n (u_i^{2n+k} + v_i^{2n+k}) \\\\\n    &\\equalsby{$c^{k+1}=0$}\n      \\sum_{i=1}^n (u_i^{2n+k} + v_i^{2n+k}) \\\\\n    &\\equalsby{\\eqref{eq:lem:twistedprod:indecompcriterion:1}}\n      \\sum_{i=1}^n \\altbinom{k-1}{n} c^k d(\\alpha_i)^n \\\\\n    &= \\altbinom{k-1}{n} c^k d(\\sum_{i=1}^n \\alpha_i^n) \\\\\n    &\\equalsby{\\eqref{eq:lem:twistedprod:indecompcriterion:sM}}\n      \\altbinom{k-1}{n} c^k d(\\pb f \\s{(n)}(M))\n  \\end{align*}\n  as was stated.\n\\end{proof}\n\n\\begin{Rem}\\label{rem:splitdim}\n  By Theorem~\\ref{thm:twistedprod:indecompcriterion}, one has to\n  both choose\n  \\begin{compactitemize}\n  \\item the dimension combination, as well as\n  \\item the factor (see later)\n  \\end{compactitemize}\n  of a twisted product correctly, in order to obtain a representative\n  of an indecomposable cobordism class.\n  For the correct dimension choice note that any integer $m$ with\n  binary expansion\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    m=2^{i_0}+\\dotsb+2^{i_l}\n    \\;,\\qquad\n    0 \\leq i_0<\\dotsb<i_l\n  \\end{gather*}\n  can be split at any splitting point $0\\leq l_k\\leq l$ to\n  \\begin{gather*}\n    m = \\underbrace{\\left(\n        \\sum_{r=0}^{l_k} 2^{i_r}\n      \\right)}_{\\eqqcolon k}\n    + 2\\cdot\\underbrace{\\left(\n        \\sum_{r=l_k+1}^l 2^{i_r-1}\n      \\right)}_{\\eqqcolon n}\n    \\eqqcolon k+2n\n    \\;,\n  \\end{gather*}\n  where:\n  \\begin{compactitemize}\n  \\item $\\alpha(m)=\\alpha(k)+\\alpha(n)$.\n  \\item For $m$ not of the form $2^s-1$, $n$ will also not be of the\n    form $2^s-1$.\n  \\end{compactitemize}\n\\end{Rem}\n
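For example, $m=22=2^1+2^2+2^4$ can be split at $l_k=1$ into\n$k=2^1+2^2=6$ and $n=2^{4-1}=8$, so that $22=6+2\\cdot8$ with\n$\\alpha(6)+\\alpha(8)=2+1=3=\\alpha(22)$, and $n=8$ is not of the form\n$2^s-1$.\n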
Such splits as described above are desirable because the binomial\ncoefficient $\\binom{k+n-1}{n}$ will never be zero modulo two by the\nfollowing Lemma, \nas is required by the indecomposability criterion for twisted products\nin Theorem~\\ref{thm:twistedprod:indecompcriterion}.\n\\begin{Lem}\\label{lem:lucas}\n  For $a,b\\in\\Nat$, the binomial coefficient $\\binom{a+b}{b}$ will be\n  non-zero modulo 2 if and only if $\\alpha(a+b)=\\alpha(a)+\\alpha(b)$.\n  \\begin{proof}\n    This is a direct consequence of Lucas' well-known theorem\n    which states\n    \\begin{gather*}\n      \\binom{a+b}{b} \\equiv \\prod_{i=0}^s \\binom{a_i+b_i}{b_i} \\mod 2\n    \\end{gather*}\n    where $a=\\sum_{i=0}^s a_i2^{i}$, $b=\\sum_{i=0}^s b_i2^{i}$ are the\n    binary expansions of $a$ and $b$, and $\\binom{0}{1}\\coloneqq 0$.\n    This expression is non-zero if and only if $a_i$ and $b_i$ are\n    never both one, \\idest if and only if no carrying occurs when\n    adding $a$ and $b$ in binary. In other words, if and only if\n    $\\alpha(a+b)=\\sum_{i=0}^s(a_i+b_i)$.    \n  \\end{proof}\n\\end{Lem}\n\nTherefore:\n\\begin{Cor}\\label{cor:twistedprod:indecompcriterion}\n  For $m=k+2n\\in\\Nat$ as above and $M^n$ an indecomposable manifold,\n  the $m$\\nbd{}dimensional twisted product $\\Twistedprod{k}{M}$ will be\n  indecomposable.\n  \\begin{proof}\n    The previous Lemma~\\ref{lem:lucas} can be applied to the\n    combination $k-1$, $n$ from above, as one simply calculates\n    \\begin{align*}\n      \\alpha(k-1+n)\n      &=\\textstyle\n        \\alpha\\left(\n        \\left(\\sum_{r=0}^{l_k} 2^{i_r}\\right)\n        - 1\n        + \\left(\\sum_{r=l_k+1}^{l} 2^{i_r-1}\\right)\n        \\right) \\\\\n      &=\\textstyle\n        \\alpha\\left(\n        (2^{i_0}-1)\n        + \\left(\\sum_{r=1}^{l_k} 2^{i_r}\\right)\n        + \\left(\\sum_{r=l_k+1}^{l} 2^{i_r-1}\\right)\n        \\right) \\\\\n      &=\\textstyle\n        \\alpha\\left(\n        \\left(\\sum_{r=0}^{i_0-1} 2^r \\right)\n        + \\left(\\sum_{r=1}^{l_k} 2^{i_r}\\right)\n        + \\left(\\sum_{r=l_k+1}^{l} 2^{i_r-1}\\right)\n        \\right) \\\\\n      &\\textstyle\n        \\equalsby{$i_0<i_1$}\n        \\alpha\\left(\n        \\left(\\sum_{r=0}^{i_0-1} 2^r \\right)\n        + \\left(\\sum_{r=1}^{l_k} 2^{i_r}\\right)\n        \\right)\n        + \\alpha\\left(\\sum_{r=l_k+1}^{l} 2^{i_r-1}\\right)\n      \\\\\n      &= \\alpha(k-1) + \\alpha(n)\n        \\qedhere\n    \\end{align*}\n  \\end{proof}\n\\end{Cor}\n\n\n\\section\n{Brown's Theorem: Finding a Convenient Generating Set}\n\\label{sec:proofbrown}\nRecall that the goal of this chapter is to prove R.~L.~Brown's\ntheorem~\\ref{thm:brown} which states that the immersion conjecture is\ntrue up to cobordism.\nAlso recall that, if one can show that the conjecture is stable\nunder the ring operations of the cobordism ring\n$\\c_*\\cong\\Zmod2[\\sigma_i\\mid i\\neq2^r-1]$, it suffices to find a\ngenerating set $([\\G i]\\mid i\\neq2^r-1)$ of $\\c_*$ which fulfills the conjecture.\nThus, for the proof one needs to construct a set of manifolds\n$(\\G i\\mid i\\neq2^r-1)$ that both fulfill the conjecture and represent\nindecomposable cobordism classes.\n\nThe major work required to check indecomposability of manifolds in the\ncobordism ring has been done in the previous chapters.\nFor checking the immersion property of the selected candidates,\nseveral well-known results on immersions of projective spaces\nwill be used.\n\nSuch generators in even dimension can easily be constructed as\ncodimension-one submanifolds of odd-dimensional products of\nreal projective 
spaces. The immersion property can in this case be\nchecked using results of Sanderson~\\cite{sanderson} which say that\nodd dimensional real projective spaces fulfill---and partly\noverfulfill---the immersion conjecture.\n\nFor odd dimensional generators the trick will be to take a twisted\nproduct of an even dimensional one, and use an immersion theorem by\nMahowald and Milgram~\\cite{milgram} on the twisted product of a\nEuclidean space to conclude the immersion property.\n\n\\subsection{Stability Properties of the Conjecture}\nThe stability properties needed are the following.\n\\begin{Lem}\\label{lem:brownstableunderringops}\n  Let $M_i^{n_i}$ be closed manifolds immersing into\n  $\\R^{2n_i-\\alpha(n_i)}$ for $i=1,2$.\n  Then both manifolds\n  \\begin{enumerate}\n  \\item $M_1\\disjointsum M_2$ for $n_1=n_2=n$, and\n  \\item $M_1\\times M_2$ for $n_1$, $n_2$ arbitrary\n  \\end{enumerate}\n  immerse into $\\R^{2(n_1+n_2)-\\alpha(n_1+n_2)}$.\n  \\begin{proof}\n    $M_1\\times M_2$ immerses into Euclidean space of dimension\n    \\begin{align*}\n      \\left( 2n_1-\\alpha(n_1) \\right)\n      + \\left( 2n_2-\\alpha(n_2) \\right)\n      &= 2(n_1+n_2) - \\left(\\alpha(n_1)+\\alpha(n_2)\\right)\\\\\n      &\\leq 2(n_1+n_2) - \\left(\\alpha(n_1 + n_2)\\right)\n    \\end{align*}\n    where the inequality is due to the number theoretic fact\n    $\\alpha(n_1+n_2) \\leq \\alpha(n_1)+\\alpha(n_2)$.\n\n    For $n_1=n=n_2$ the images of the immersions\n    \\begin{gather*}\n      \\iota_1\\colon M_1\\to\\R^{2n-\\alpha(n)}\n      \\qquad \\text{and} \\qquad\n      \\iota_2\\colon M_2\\to\\R^{2n-\\alpha(n)}\n    \\end{gather*}\n    are compact. So, by composing with a translation, they can be\n    assumed to be disjoint, wherefore the disjoint union\n    $\\iota_1\\disjointsum\\iota_2\\colon M_1\\disjointsum M_2\\to\\R^{2n-\\alpha(n)}$\n    is again an immersion.\n  \\end{proof}\n\\end{Lem}\n\n\\subsection{Generating Set}\n\\subsubsection{Even Dimensional Generators}\nThe idea in finding even dimensional generators is to use the\nfollowing immersion properties of real projective spaces, which have\nbeen investigated in detail throughout the last decades:\n\\begin{Lem}\\label{lem:immersionrealprojspace}\n  \\begin{enumerate}\n  \\item\n    $\\RP{2^i}$ immerses into Euclidean space of dimension\n    $2\\cdot2^i-1$.\n  \\item\n    $\\RP{k}$ immerses into Euclidean space of dimension\n    $2k-3$ for $k\\geq 5$ odd, \\idest overfulfills the immersion\n    property in case $\\alpha(k)=2$.\n  \\end{enumerate}\n  \\begin{proof}\n    The first statement is Whitney's immersion\n    theorem~\\cite{whitneyimmersiontheorem}.\n    The second one is a collection of results by Sanderson\n    presented in \\cite{sanderson}, more precisely\n    Theorem~(5.3) for the case $k>8$, and\n    Theorem~(4.1) for immersions $\\RP 5\\immto\\R^7$,\n    and $\\RP 7\\immto\\R^8$.\n  \\end{proof}\n\\end{Lem}\nThe trick now will be to utilize the above nice immersion property of\nodd dimensional projective spaces. 
More precisely, the following odd\ndimensional product of projective spaces overfulfills the immersion\nproperty by at least three, wherefore all (even-dimensional)\nsubmanifolds of codimension one fulfill the immersion property.\n\\begin{Def}\n  Let $m\\in\\Nat$ be even and $\\alpha(m)>1$ (\\idest $m$ is not of the\n  form $2^{i}$) with minimal binary expansion \n  $m=2^{i_1}+\\dotsb+2^{i_l}$, $k_r\\coloneqq 2^{i_r}$.\n  Define\n  \\begin{gather*}\n    K^{m+1}\\coloneqq \\left(\\prod_{r=1}^{l-1} \\RP{k_r}\\right) \\times \\RP{k_l+1}\n  \\end{gather*}\n  with cohomology ring\n  \\begin{align*}\n    \\SwapAboveDisplaySkip\n    \\H^*(K^{m+1})\n    &= \\left(\\bigotimes_{r=1}^{l-1}\\H^*(\\RP{k_r})\\right)\n      \\otimes \\H^*(\\RP{k_l+1}) \\\\\n    &= \\Zmod2[x_{k_1},\\dotsc,x_{k_{l-1}}, x_{k_l+1}]/\n      \\left( x_{k_1}^{k_1+1},\\dotsc,x_{k_{l-1}}^{k_{l-1}+1},\n      x_{k_l+1}^{k_l+2} \\right)\n  \\end{align*}\n  given by the K\u00fcnneth isomorphism.\n\\end{Def}\n\\begin{Lem}\n  For $m$ as above $K^{m+1}$ immerses into $\\R^{2m-\\alpha(m)}$.\n  \\begin{proof}\n    Observe that\n    $m=2^{i_1}+\\dotsb +2^{i_{\\alpha(m)}}$ with $0<i_r< i_{r+1}$ and\n    $k_l=2^{i_{\\alpha(m)}}\\geq 4$ by assumption, so $k_l+1\\geq 5$ is\n    odd and $\\RP{k_l+1}$\n    immerses into Euclidean space of dimension $2(k_l+1)-3=2k_l-1$ by\n    the Lemma.\n    Altogether this yields an immersion of $K^{m+1}$ into Euclidean\n    space of dimension\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\sum_{r=1}^{l-1}(2k_r-1) + 2k_l-1\n    = 2\\left(\\sum_{r=1}^{l}k_r\\right) - l\n    = 2m - \\alpha(m)\n    \\;.\n    \\qedhere\n  \\end{gather*}\n  \\end{proof}\n\\end{Lem}\n\n\nIn order to define submanifolds of the above $K^{m+1}$ which are\nadditionally indecomposable in the cobordism ring, recall that the\nSteenrod problem on realizing homology classes by submanifolds is\nsolved for homology classes of codimension one, or more\nprecisely:\n\\begin{Lem}\n  Let $M^n$ be a manifold. Then for every homology class\n  $\\alpha\\in\\H_{n-1}(M)$ there is a submanifold $N$ and an embedding\n  $\\iota\\colon N\\immto M$ such that $\\pf\\iota(\\fundcl N) = \\alpha$.\n  \\begin{proof}\n    See \\cite[Theorem~II.26]{thom}.\n  \\end{proof}\n\\end{Lem}\n
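To illustrate the preceding construction: for $m=6=2^1+2^2$ one has\n$k_1=2$, $k_2=4$, hence $K^7=\\RP 2\\times\\RP 5$, which immerses into\n$\\R^{3+7}=\\R^{10}=\\R^{2m-\\alpha(m)}$ via the immersions\n$\\RP 2\\immto\\R^3$ and $\\RP 5\\immto\\R^7$ from\nLemma~\\ref{lem:immersionrealprojspace}.\n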
\nThus the following candidates for even-dimensional generators are\nwell-defined.\n\\begin{Def}\n  Let $m$ be even. Define for\n  \\begin{description}[labelindent=1em]\n  \\item[$m=0$:] $\\G m = \\pt$.\n  \\item[$\\alpha(m)=1$:] $\\G m = \\RP m$.\n  \\item[$\\alpha(m)>1$:]\n    $\\emb\\colon\\G m\\subset K^{m+1}$ is the submanifold of\n    codimension one realizing the Poincar\u00e9 dual of\n    $y=\\sum_{r=1}^{l-1}x_{k_r} + x_{k_l+1}\\in\\H^1(K^{m+1})$,\n    \\idest $\\pf\\emb\\fundcl{\\G m}=y\\cap\\fundcl{K^{m+1}}$.\n  \\end{description}\n\\end{Def}\n\n\n\\begin{Lem}\\label{lem:evengen}\n  Let $m=\\sum_{r=1}^l k_r$ be even as in the definition above.\n  \\begin{enumerate}\n  \\item\\label{lem:evengen:immersionprop}\n    $\\G m$ immerses into $\\R^{2m-\\alpha(m)}$, and\n  \\item\\label{lem:evengen:indecomposable}\n    $\\G m$ represents an indecomposable element of the\n    cobordism ring.\n  \\end{enumerate}\n\\end{Lem}\n\\begin{proof}[Proof of\n  Lemma~\\ref{lem:evengen}]\n  For $m=0$ both claims are trivial.\n  \n  $\\G m$ clearly immerses into $\\R^{2m-\\alpha(m)}$, either factoring\n  over $K^{m+1}$ in case $\\alpha(m)>1$ or by Whitney's immersion\n  theorem in case $\\alpha(m)=1$.\n\n  For the indecomposability we want to use the criterion from\n  Theorem~\\ref{thm:indecomposabilitycriterion}, \\idest one has to show\n  that $\\snum{m}{\\G m}\\neq 0$.\n  In the case $\\alpha(m)=1$ this is Example~\\ref{ex:rpnindecomposable}.\n  In the other case $\\alpha(m)>1$ we first need to identify the\n  Stiefel-Whitney classes of $\\G m$ in degrees greater than $0$ and\n  express them\n  as elementary symmetric polynomials in some variables.\n  As one will see the factor $\\RP{k_l+1}$ has no special role here,\n  and the following holds for any product $\\prod_{r=1}^{l}\\RP{n_r}$ of\n  real projective spaces with\n  \\begin{enumerate}\n  \\item $m+1=\\sum_{r=1}^l n_r$,\n  \\item $n_r<m-1$, and\n  \\item $\\alpha(n_i+n_j)=\\alpha(n_i)+\\alpha(n_j)$\n    for $1\\leq i<j\\leq l$,\n    hence $\\alpha(n_1+\\dotsb+n_r)=\\sum_{i=1}^{r}\\alpha(n_i)$\n    for $2\\leq r\\leq l$.\n  \\end{enumerate}\n  In our case take $n_l\\coloneqq k_l+1$ and \n  $n_r\\coloneqq k_r$ for $1\\leq r<l$.\n  Now, to determine the Stiefel-Whitney numbers, \n  denote the normal bundle of the embedding\n  $\\emb\\colon\\G m\\subset K^{m+1}$ by $\\N{}$. \n  As $\\T{\\G m}\\oplus\\N{\\emb}=\\pb\\emb\\T{K^{m+1}}$ there is the relation\n  of Stiefel-Whitney classes\n  \\begin{align*}\n    \\W{\\T{\\G m}} \\cdot \\W{\\N\\emb}\n    &= \\pb\\emb\\W{\\T{K^{m+1}}} \\\\\n    &= \\pb\\emb\\left(\n      \\W{\\RP{k_l+1}}\n      \\cdot \\prod_{r=1}^{l-1}\\W{\\RP{k_r}}\n      \\right) \\\\\n    &= \\pb\\emb\\left(\n      (1+x_{k_l+1})^{k_l+2}\n      \\cdot \\prod_{r=1}^{l-1}(1+x_{k_r})^{k_r+1}\n      \\right)\n      \\;.\n  \\end{align*}\n  Note that $\\N{}$ is a line bundle, so one only has to find out\n  $\\w1{\\N{}}$. 
For this one needs the following\ngeneralization of Lemma~\\ref{lem:thomisofundcl} which gives the\nhighest Stiefel-Whitney class of normal bundles of such embeddings.\n  \\begin{Lem}\n    Let $M^n, W^{n+\\rkk}$ be compact manifolds, $\\rkk>0$,\n    $\\emb\\colon M\\immto W$ be an embedding with corresponding normal\n    bundle $\\N{\\emb}$, $\\infty\\in W\\setminus\\emb(M)$, and the\n    Thom-Pontryagin collapse map on pairs of spaces corresponding to\n    some tubular neighborhood embedding of $\\N{\\emb}$\n    \\begin{gather*}\n      \\collapse\\colon\n      W\n      \\to W/\\left( W\\setminus \\E{\\N{\\emb}} \\right)\n      \\cong \\Discbdl{\\N{\\emb}} / \\Spherebdl{\\N{\\emb}}\n      \\cong \\Thomspace{\\N{\\emb}}\n      \\xrightarrow{\\incl} (\\Thomspace{\\N\\emb},\\infty)\n      \\;.\n    \\end{gather*}\n    Then\n    \\begin{enumerate}\n    \\item\\label{item:fundclsubmfd}\n      $\\pf\\emb\\fundcl M = \\pb\\collapse\\u{\\N{\\emb}}\\cap\\fundcl W$,\n      giving a description of the Poincar\u00e9 dual of\n      $\\pf\\emb\\fundcl M$, and\n    \\item\\label{item:highestswclasssubmfd}\n      $\\w{\\rk\\N{\\emb}}{\\N{\\emb}}\n      = \\pb\\emb\\pb\\collapse\\u{\\N{\\emb}}$, \n      so the pullback of the Poincar\u00e9 dual of $\\pf\\emb\\fundcl M$\n      along the embedding is the $\\rk\\N{}$\\nbd{}Stiefel-Whitney class.\n    \\end{enumerate}\n    \\begin{proof}\n      See also \\cite[p.~371]{bredon}.\n      To obtain \\ref{item:highestswclasssubmfd} directly from\n      \\ref{item:fundclsubmfd}, one can use the fact that\n      the Stiefel-Whitney class $\\w{\\rk\\N{\\emb}}{\\N{\\emb}}$ in degree\n      $\\rk{\\N{\\emb}}$ is the Euler class of $\\N{\\emb}$, which is\n      defined as $\\pb{(\\zerosec{\\N{\\emb}})}(\\u{\\N{\\emb}})$\n      (see \\forexample \\cite[Prop.~17.2]{bredon}), because then\n      one has with $\\zerosec{\\N{\\emb}} = \\collapse\\emb$ that\n      \\begin{gather*}\n        \\SwapAboveDisplaySkip\n        \\w{\\rk\\N{\\emb}}{\\N{\\emb}}\n        = \\pb{(\\zerosec{\\N{\\emb}})}(\\u{\\N{\\emb}})\n        = \\pb\\emb\\pb\\collapse\\u{\\N{\\emb}}\n        \\;.\n      \\end{gather*}\n      For \\ref{item:fundclsubmfd} recall from\n      Lemma~\\ref{lem:thomisofundcl} that \n      $\\fundcl M = \\pf p(\\u{\\N{\\emb}} \\cap \\pb\\collapse\\fundcl W)$,\n      so\n      \\begin{gather}\\label{eq:fundclsubmfd}\n        \\pf{\\zerosec{\\N{\\emb}}}\\fundcl M\n        = \\pf{\\zerosec{\\N{\\emb}}}\\pf p\n        (\\u{\\N{\\emb}} \\cap \\pb\\collapse\\fundcl W)\n        = \\u{\\N{\\emb}} \\cap \\pb\\collapse\\fundcl W\n        \\;.\n      \\end{gather}\n      Now observe that for the inclusion maps\n      \\begin{center}\n        \\begin{tikzcd}\n          W\n          \\ar[r, \"j\"]\n          &(W, W\\setminus M)\n          &\\ar[l,\"\\text{exc.}\"{below}, \"i\"{above}]\n          (\\Discbdl{\\N{\\emb}},\\Discbdl{\\N{\\emb}}\\setminus M)\n        \\end{tikzcd}\n      \\end{center}\n      $i$ is an excision, and \n      $\\pf \\collapse = (\\pf i)^{-1} \\pf j$, $\\pb \\collapse= \\pb j(\\pb i)^{-1}$.\n      Denote $u\\coloneqq \\u{\\N{\\emb}}$ and the preimage of\n      $\\u{\\N{\\emb}}$ under the excision by\n      $u'\\coloneqq (\\pb i)^{-1}\\u{\\N{\\emb}}\\in\\H^*(W, W\\setminus M)$.\n      Then calculate using the cap-product formula from\n      \\autoref{eq:capprod2},\n      and $\\emb=i\\circ\\zerosec{\\N{\\emb}}$:\n      \\begin{align*}\n        \\pb\\collapse u \\cap \\fundcl W\n        &=\\pb j u' \\cap \\fundcl W\n        \\cequalsby{$(*)$}\n          \\pf j(\\pb j u' \\cap 
\\fundcl W) \\\\\n        &\\equalsby{\\eqref{eq:capprod2}}\n          u' \\cap \\pf j\\fundcl W \n          = u' \\cap \\pf i\\pf \\collapse\\fundcl W \\\\\n        &\\equalsby{\\eqref{eq:capprod2}}\n          \\pf i(\\pb i u' \\cap \\pf \\collapse\\fundcl W) \\\\\n        &\\equalsby{\\eqref{eq:fundclsubmfd}}\n          \\pf i\\pf{\\zerosec{\\N{\\emb}}}\\fundcl M \\\\\n        &= \\pf\\emb \\fundcl M\n          \\;,\n      \\end{align*}\n      where equality $(*)$ uses that the map of triples of spaces\n      $j\\colon (W, \\emptyset, \\emptyset)\\to(W, W\\setminus M, \\emptyset)$\n      is the identity when restricted to $(W,\\emptyset)\\to(W, \\emptyset)$.\n    \\end{proof}\n  \\end{Lem}\n  By the above Lemma\n  \\begin{align*}\n    \\w1{\\N{}}\n    &= \\pb\\emb y \\\\\n    \\W{\\N{}}\n    &= 1 + \\w1{\\N{}}\n      = \\pb\\emb(1+y)\n      \\cequalsby{(*)} \\pb\\emb\\left((1+y)^{-(2^s-1)}\\right)\n      = \\left(\\pb\\emb(1+y)\\right)^{-(2^s-1)}\n  \\end{align*}\n  for some $2^s>m+1$, where equality (*) then comes from $y^{2^s}=0$ and\n  \\begin{gather*}\n    (1+y)^{2^s-1} \\cdot (1+y)\n    = (1+y)^{2^s}\n    = (1+y^{2^s})\n    = 1\n    \\;.\n  \\end{gather*}\n  Hence, $\\W{\\T{\\G m}}$ can be expressed as\n  \\begin{align*}\n    \\W{\\T{\\G m}}\n    &= \\pb\\emb(1+y)^{2^s-1}\n      \\cdot \\pb\\emb\\left(\n      \\prod_{r=1}^{l}(1+x_{n_r})^{n_r+1}\n      \\right) \\\\\n    &= \\pb\\emb\\Bigl(\\sum_{q\\geq0}\\el{q}{}(\n      \\underbrace{y,\\dotsc,y}_{2^s-1},\n      \\underbrace{x_{n_1},\\dotsc,x_{n_1}}_{n_1+1},\n      \\dotsc,\n      \\underbrace{x_{n_{l}},\\dotsc,x_{n_{l}}}_{n_{l}+1}\n      )\\Bigr) \\\\\n    &= \\sum_{q\\geq0}\\el{q}{}(\n      \\pb\\emb y,\\dotsc,\\pb\\emb y,\n      \\pb\\emb x_{n_1},\\dotsc, \\pb\\emb x_{n_1},\n      \\dotsc,\n      \\pb\\emb x_{n_l},\\dotsc, \\pb\\emb x_{n_l}\n      )\n  \\end{align*}\n  which is a representation as a pullback of a sum of elementary\n  symmetric polynomials in the given one dimensional variables.\n  This yields\n  \\begin{align*}\n    \\s{m}(\\G m)\n    &= (2^s-1) \\pb\\emb y^m\n      + (n_1+1) \\pb\\emb x_{n_1}^m\n      + \\dotsb\n      + (n_l+1)\\pb\\emb x_{n_l}^m\n    \\\\\n    &= \\pb\\emb\\left(\n      (2^s-1)y^m\n      + (n_1+1)x_{n_1}^m\n      + \\dotsb\n      + (n_l+1)x_{n_l}^m\n      \\right) \\\\\n    &\\equalsby{$x_{n_r}^{n_r+1}=0$, $n_r+1<m$}\n      \\pb\\emb((2^s-1)y^m) \\\\\n    &\\equalsby{$2^s=0$}\n      \\pb\\emb y^m\n  \\end{align*}\n  So\n  \\begin{align*}\n    \\SwapAboveDisplaySkip\n    \\pf\\emb\\snum{m}{\\G m}\n    &= \\pf\\emb\\capped{\\s{m}(\\G m)}{\\fundcl{\\G m}} \\\\\n    &= \\pf\\emb\\capped{\\pb\\emb y^m}{\\fundcl{\\G m}} \\\\\n    &= \\capped{y^m}{\\pf\\emb\\fundcl{\\G m}} \\\\\n    &\\equalsby{Def.} \\capped{y^m}{y\\cap\\fundcl{K^{m+1}}}\n      = \\capped{y^{m+1}}{\\fundcl{K^{m+1}}}\n      \\;.\n  \\end{align*}\n  As capping with $\\fundcl{K^{m+1}}$ is the Poincar\u00e9 isomorphism, it\n  suffices for the proof to show that $y^{m+1}$, hence the image of\n  $\\snum{m}{\\G m}$ under $\\pf\\emb$, is non-zero. But, by the relations\n  of the $x_{n_r}$, combinatorics give that the only non-zero product\n  $\\prod_{r=1}^{l}x_{n_r}^{a_r}$ of degree $m+1$ is the one with\n  $a_r=n_r$.\n  With the multinomial theorem $y^{m+1}$ reformulates to\n  \\begin{gather*}\n    y^{m+1}\n    = (x_{n_1}+\\dotsb+x_{n_l})^{m+1}\n    = \\frac{(n_1 + \\dotsb + n_l)!} {(n_1)! 
\\dotsm (n_l)!}\n    \\cdot \\prod_{r=1}^{l}x_{n_r}^{n_r}\n  \\end{gather*}\n  which is non-zero if and only if the multinomial coefficient is.\n  The latter however simplifies with descending induction using the\n  easy formula\n  \\begin{gather*}\n    \\frac{(a_1+\\dotsb+a_j)!}{(a_1)!\\dotsm(a_j)!}\n    = \\binom{a_1+\\dotsb+a_j}{a_j}\n    \\cdot \\frac{(a_1+\\dotsb+a_{j-1})!}{(a_1)!\\dotsm(a_{j-1})!}\n  \\end{gather*}\n  for $a_r\\in\\Nat$ to\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\frac{(n_1+\\dotsb+n_l)!}{(n_1)! \\dotsm (n_l)!}\n    = \\prod_{r=2}^{l}\\binom{n_1+\\dotsb+n_r}{n_r}\n  \\end{gather*}\n  all factors of which are non-zero modulo two by Lemma~\\ref{lem:lucas}\n  due to the condition on the $n_r$.\n  So altogether $\\G m$ represents an indecomposable element of\n  the cobordism ring according to the indecomposability criterion\n  in Theorem~\\ref{thm:indecomposabilitycriterion}.\n  \\qedhere\n\\end{proof}\n\n\\subsubsection{Odd Dimensional Generators}\nFor the odd dimensional generators the twisted product construction\ncomes into play.\nAs already discussed, the twisted product of a manifold which is\nindecomposable in the cobordism ring will be so, too, if the product\ndimension is chosen correctly. So the selected choice of candidates\nwill be twisted products of some $\\G n$ for $n$ even, whose\nindecomposability was proven in the previous section.\nThe trick here will be to select for a candidate in odd degree $m$ a\nsplit of $m$ of the form $m=k+2n$ with $n$ even(!).\n\\begin{Rem}\\label{rem:splitodddimension}\n  The following is a convenient split in the above sense:\n  Let $m$ be odd and write the binary expansion as \n  \\begin{gather*}\n    m\n    = \\underbrace{2^{i_0} + \\dotsb + 2^{i_{l_k}}}_{\\eqqcolon k}\n    + \\underbrace{2^{i_{l_k+1}}\n      + \\dotsb + 2^{i_{l_k+l_n}}}_{\\eqqcolon 2n}\n    = k+2n\n  \\end{gather*}\n  such that as usual $i_r<i_{r+1}$, and additionally $i_r=r$ for\n  $0\\leq r\\leq l_k$ and $i_{l_k+1} > l_k+1$.\n  In other words, split the binary expansion of $m$ at the first point\n  where it skips a power of two, and assign $k$ to the first part and\n  $2n$ to the second part. Since $m$ is odd, this trick guarantees\n  that $n$ is even. Also, by Remark~\\ref{rem:splitdim},\n  if $m$ is not of the form $2^s-1$, $n$ will not be either.\n\\end{Rem}\n
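For example, $m=11=2^0+2^1+2^3$ skips the power $2^2$, so the split\nreads $k=2^0+2^1=3$ and $2n=2^3=8$, \\idest $n=4$ is even, and with\n$11\\neq2^s-1$ also $4\\neq2^s-1$.\n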
\nFor the immersion property we will again utilize nice immersion\nproperties of projective spaces, more precisely the following one by\nMahowald and Milgram for the total space of line bundles over such:\n\\begin{Lem}[{\\cite[Theorem~4.1, p.~418]{mahowald}}]\n  \\label{lem:immersionuniversalbdl}\n  % Wrong attempt [p.~85]{immersionconj} (?):\n  % \"\"\"\n  % The total space of a vector bundle $E\\to M^n$ of rank $k$ over a\n  % closed manifold of dimension $n$ can be immersed into $\\R^{2n+k}$.\n  % \"\"\"\n  For $p,q\\in\\Nat$ odd, the total space of $(p+1)\\gamma_q$ immerses\n  into Euclidean space of dimension\n  \\begin{gather*}\n    2q+p+1-\\alpha(p+q+1)+\\alpha(p+1)\n    \\;.\n  \\end{gather*}\n\\end{Lem}\nThe trick for odd $m$ now will be to find an immersion of the\n$m$\\nbd{}candidate into some $\\Twistedprod{k}{\\R^{s}}$, which is\nthe total space of $(s\\gamma_k)\\oplus\\trivbdl^s$\nby Lemma~\\ref{lem:twistedprodrealspace}, and then apply the above\nlemma to a super-bundle $((s+s')\\gamma_k)\\oplus\\trivbdl^s$ of\nthe latter.\nAt that point it will be essential that $k$ from the above split of\n$m$ is odd in order to make the lemma applicable in the first\nplace.\n\nWith these preliminary considerations one can make a choice of\ngenerator candidates:\n\\begin{Def}\n  Let $m=k+2n$ be odd and not of the form $2^s-1\\in\\Nat$ with the\n  split from Remark~\\ref{rem:splitodddimension} above. Define\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\G m \\coloneqq \\Twistedprod{k}{\\G n}\n  \\end{gather*}\n\\end{Def}\n\\begin{Lem}\\label{lem:oddgen}\n  For $m$ odd as in the above definition, $\\G m$ fulfills:\n  \\begin{enumerate}\n  \\item\\label{item:brownimmersionproperty}\n    $\\G m$ immerses into $\\R^{2m-\\alpha(m)}$.\n  \\item\\label{item:indecomposabilityproperty}\n    The cobordism classes of the $\\G m$, $m\\neq 2^s-1$, are\n    indecomposable.\n  \\end{enumerate}\n\\end{Lem}\n\\begin{proof}[Proof of\n  Lemma~\\itemref{lem:oddgen}{item:indecomposabilityproperty}]\n  The used split of $m$ from Remark~\\ref{rem:splitodddimension} is a\n  special case of splits described in Remark~\\ref{rem:splitdim}, for\n  all of which the twisted product preserves indecomposability by\n  Corollary~\\ref{cor:twistedprod:indecompcriterion}.\n  Thus, as $n$ from the split is even and $\\G n$ is indecomposable by\n  Lemma~\\ref{lem:evengen}, $\\G m=\\Twistedprod{k}{\\G n}$ is also an\n  indecomposable element in the cobordism ring.\n\\end{proof}\n\\begin{proof}[Proof of\n  Lemma~\\itemref{lem:oddgen}{item:brownimmersionproperty}]\n  Here one needs that for $m=k+2n$ as above $\\G n$ admits by\n  Lemma~\\ref{lem:evengen} an immersion\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\iota\\colon\\G n \\immlongto \\R^{2n-\\alpha(n)}\n    \\;.\n  \\end{gather*}\n  As described in\n  Remark~\\itemref{rem:twistedprodproperties}{item:twistedprodpreservesimmersions},\n  this induces an immersion of twisted products\n  \\begin{center}\n    \\begin{tikzcd}[row sep=small, column sep=large]\n      (\\Sphere{k} \\times \\G n\\times \\G n/\\sim)\n      \\ar[d, equals]\n      \\ar[r, hookrightarrow, \"1\\times\\iota\\times\\iota/\\sim\"]\n      &\n      (\\Sphere{k} \\times \\R^{2n-\\alpha(n)} \\times \\R^{2n-\\alpha(n)}/\\sim)\n      \\;.\n      \\ar[d, equals]\n      \\\\\n      \\G m\n      &\\Twistedprod{k}{\\R^{2n-\\alpha(n)}}\n    \\end{tikzcd}\n  \\end{center}\n  The latter 
twisted product however is a vector bundle over\n  $\\RP{k}$ as in\n  Remark~\\itemref{rem:twistedprodproperties}{item:twistedprodfiberbdl}.\n  More precisely, by Lemma~\\ref{lem:twistedprodrealspace} it is the\n  well-known vector bundle \n  \\begin{gather*}\n    ((2n-\\alpha(n))\\gamma_{k})\\oplus\\trivbdl^{2n-\\alpha(n)}\n  \\end{gather*}\n  where $\\gamma_{k}$\n  is the tautological line bundle over $\\RP k$.\n  As $p=2n-1$ and $q=k$ are both odd, \n  Lemma~\\ref{lem:immersionuniversalbdl} is applicable to $p$ and $q$,\n  and yields an immersion of $(p+1)\\gamma_q = 2n\\gamma_k$ into\n  Euclidean space of dimension\n  \\begin{align*}\n    2q+p+1-\\alpha(q+p+1) + \\alpha(p+1)\n    &= 2k + 2n - 1 + 1 - \\alpha(k+2n) + \\alpha(2n) \\\\\n    &= k + m - \\alpha(m) + \\alpha(n)\n    \\;.\n  \\end{align*}\n  Hence, the total space of\n  $(2n\\gamma_k)\\oplus\\trivbdl^{2n-\\alpha(n)}$\n  immerses into Euclidean space of dimension\n  \\begin{gather*}\n    \\left(k + m - \\alpha(m) + \\alpha(n)\\right)\n    + 2n-\\alpha(n)\n    = (k+2n) + m - \\alpha(m)\n    = 2m - \\alpha(m)\n    \\;,\n  \\end{gather*}\n  altogether yielding an immersion\n  \\begin{align*}\n    \\G m = \\Twistedprod{k}{\\G n}\n    &\\immto\n    \\Twistedprod{k}{\\R^{2n-\\alpha(n)}}\n    = \\E\\left(((2n-\\alpha(n))\\gamma_k)\\oplus\\trivbdl^{2n-\\alpha(n)}\\right)\n    \\\\\n    &\\immto \n    \\E\\left((2n\\gamma_k)\\oplus\\trivbdl^{2n-\\alpha(n)}\\right)\n    \\\\\n    &\\immto\n    \\R^{2m-\\alpha(m)}\n  \\end{align*}\n  as was needed.\n\\end{proof}\n\n\nThis finally gives Brown's theorem:\n\\begin{Thm}\n  Lemmata~\\ref{lem:evengen} and \\ref{lem:oddgen} together yield a\n  complete generating set of the cobordism ring, \\idest\n  \\begin{gather*}\n    \\c_*\n    \\cong   \\left( [\\G m] \\middlemid m\\neq 2^s-1 \\right)_{\\Zmod2}\n    = \\Zmod2\\left[ [\\G m] \\middlemid m\\neq 2^s-1 \\right]\n    \\;,\n  \\end{gather*}\n  each element of which fulfills the immersion property as was\n  desired.\n\\end{Thm}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"thesis\"\n%%% ispell-local-dictionary: \"en_US\"\n%%% End:\n", "meta": {"hexsha": "c698876c989318d69659e2a0da1a2bee630b2b8b", "size": 112884, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-03.tex", "max_stars_repo_name": "gesina/master_thesis", "max_stars_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis-03.tex", "max_issues_repo_name": "gesina/master_thesis", "max_issues_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis-03.tex", "max_forks_repo_name": "gesina/master_thesis", "max_forks_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8661005879, "max_line_length": 97, "alphanum_fraction": 0.6173151199, "num_tokens": 41548, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7549149868676284, "lm_q1q2_score": 0.5784865474486756}}
{"text": "\\documentclass{amsart}\r\n\r\n\\usepackage{hyperref}\r\n\\author{Drew Johnson}\r\n\r\n\\newcommand{\\M}[2]{\\overline{\\mathcal M}_{{#1},{#2}}}\r\n\\newcommand{\\Mgn}{\\M gn}\r\n\\newcommand{\\ch}{\\text{ch}}\r\n\\newcommand{\\di}{\\delta_{\\text{irr}}}\r\n\r\n\\title{Computing Top Intersections on $\\Mgn$ in Sage}\r\n\r\n\\begin{document}\r\n\r\n\r\n\r\n\\begin{abstract}\r\nWe describe code written using the open source mathematics project Sage that computes top intersection on the moduli space of stable $n$-pointed genus $g$ curves.  Our implementation has no theoretical upper bound, is written using open source software, and can compute any intersection involving boundary divisors, $\\psi$ classes, $\\kappa$ classes, $\\lambda$ classes, and the chern character classes.  \r\n\\end{abstract}\r\n\r\n\\maketitle\r\n\\section{Purpose of Algorithm}\r\nLet $\\Mgn$ be the Deligne-Mumford compactification of the moduli space of genus $g$ curves with $n$ marked points.  There are some geometrically defined classes that are part of the so-called tautological ring.  We would like to be able to compute any top intersection of these classes, i.e. any intersection with degree equal to the dimension $3g-3+n$.  We want to consider the classes\r\n\\begin{align*}\r\n&\\psi_i,\\; i = 1, \\dots, n \\\\\r\n&\\kappa_i,\\; i = 1, \\dots, 3g-3+n\\\\\r\n&\\lambda_i,\\; i = 1, \\dots, g \\\\\r\n&\\ch_i,\\; i = 1, 3, 5, \\dots, 2g-1\r\n\\end{align*}  \r\nas well as the boundary divisors.  The $\\psi$-classes and boundary divisors have codimension 1, while the remaining classes have codimension equal to their subscript.  We refer to reader to \\cite{AC} for the definitions of the $\\psi$ and $\\kappa$-classes.  The $\\lambda$ classes are the chern classes of the Hodge bundle, and the $\\ch_i$ are the coefficients of the associated chern character.  It is a property of the Hodge bundle that $\\ch_i=0$ for $i$ even, so it suffices to consider only $i$ odd.\r\n\r\nThe boundary divisors are the image of a so called gluing morphism, either \r\n\\[\r\n \\xi_{red}: \\M{g_1}{N_1 \\cup \\{\\star\\}} \\times \\M{g_2}{N_2 \\cup \\{\\bullet\\}} \\rightarrow \\M{g}{n}\r\n\\]\r\nwhere $g = g_1 + g_2$ and $N_1 \\coprod N_2 = \\{1, \\dots, n\\}$ is a partition, or\r\n\\[\r\n\t\\xi_{irr}: \\M{g-1}{n + 2}  \\rightarrow \\M gn\r\n\\]\r\nwhere the two extra points are identified.  We take the convention as in \\cite{faber}, that is that the boundary classes are the fundamental class of this image divided by the degree of this map.  The degree is $2$ for $\\xi_{irr}$, and is 1 for $\\xi_{red}$ except when $n=0$ and $g_1=g_2$. In this case the degree is $2$.  The class associated with $\\xi_{irr}$ we denote by $\\di$.\r\n\r\nOur goal is to take an arbitrary monomial in these classes and compute the integral, or push-forward to a point, of these classes.\r\n\r\n\\section{Description of Algorithm}\r\n  In order to compute intersections with boundary divisors, it suffices to pull back the remaining classes by the associated map and compute the intersection on the new moduli space or product of moduli spaces.  Recall that the Chow ring of a product of spaces is the tensor product of the respective Chow rings.  
If $\\alpha$ is $\\kappa_i$, $\\lambda_i$ or $\\ch_i$, then it pulls back nicely:\r\n\\[\r\n\t\\xi_{red}^*(\\alpha) = \\alpha \\otimes 1 + 1 \\otimes \\alpha\r\n\\]\r\nand\r\n\\[\r\n\t\\xi_{irr}^*(\\alpha) = \\alpha\r\n\\]\r\n(notice that the $\\alpha$s on the right hand side of the equation are classes on different spaces than the $\\alpha$s on the left hand side).\r\n\r\nFor $\\psi$ classes, we have\r\n\\begin{align*}\r\n\t\\xi_{red}^{*}(\\psi_i) &= \\begin{cases} \\psi_i \\otimes 1 & \\text{if } i \\in N_1 \\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t1 \\otimes \\psi_i & \\text{if } i \\in N_2 \\end{cases} \\\\\r\n\t\\xi_{irr}^*(\\psi_i) &= \\psi_i.\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\r\n\\end{align*}\r\n\r\nFor boundary classes, the formulas are more complicated.  They were written out by Faber in \\cite{faber}, including the cases of self intersection.\r\n\r\nNow we may assume that there are no boundary divisors.  First, we would like to express the $\\lambda$ classes in terms of the Chern character.  This is accomplished via the formula\r\n\\[\r\n 1 + \\lambda_1 t + \\lambda_2 t^2 + \\dots + \\lambda_g t^g = \\exp \\left(\\sum_{i = 1}^g (2i-2)!\\ch_{2i-1}t^{2i-1}\\right).\r\n\\]\r\n\r\nNext, we can express the Chern characters in terms of $\\psi$ and $\\kappa$ classes via the formula (see \\cite{yang}, originally in \\cite{mumford})\r\n\\[\r\n \\ch_a = \\frac{B_{a+1}}{(a+1)!} \\left( \\kappa_a - \\sum_{i = 1}^n \\psi_i^a + \\frac12 \\sum_{i=0}^{a-1} (-1)^i\\sum_{\\xi} \\xi_* (\\psi_{\\star}^i  \\psi_{\\bullet}^{a-1-i}) \\right)\r\n\\]\r\nwhere the last sum is taken over all possible gluing morphisms $\\xi$.\r\n\r\nWe will often require the use of the projection formula in this step.  For example, the expansion of $\\psi_1 \\ch_3$ will contain a term of the form\r\n\\[\r\n\t\\psi_1 (\\xi_{red})_* (\\psi_{\\star}^2 \\otimes 1)\r\n\\]\r\nwhich by the projection formula is equal to\r\n\\[\r\n\t(\\xi_{red})_*( \\psi_\\star^2 \\otimes 1 \\cdot \\xi_{red}^*\\psi_1)\r\n\\]\r\nso it suffices to compute\r\n\\[\r\n\\int_{\\M{g_1}{N_1 \\cup \\{\\star\\}} \\times \\M{g_2}{N_2 \\cup \\{\\bullet\\}}}  \\psi_\\star^2 \\otimes 1 \\cdot \\xi_{red}^*\\psi_1.\r\n\\]\r\n\r\nNow, we may assume that we only have $\\psi$ and $\\kappa$ classes left.  In \\cite{AC}, it is observed that knowing the intersections of either the $\\psi$ classes or the $\\kappa$ classes determines \\emph{all} mixed intersections in $\\psi$s and $\\kappa$s.  In \\cite{yang}, a practical formula to reduce to just the $\\psi$s is (correcting a typographical error)\r\n\\[\r\n\\int_{\\Mgn} \\psi_1^{\\alpha_1} \\cdots \\psi_n^{\\alpha_n} \\kappa_{b_1} \\cdots \\kappa_{b_m} = \\int_{\\M{g}{n+1}}  \\psi_1^{\\alpha_1} \\cdots \\psi_n^{\\alpha_n}\\psi_{n+1}^{b_m+1} \\prod_{j = 1}^{m-1} (\\kappa_{b_j} - \\psi_{n+1}^{b_j})\r\n\\]\r\nwhich will eliminate one $\\kappa$ class at a time at the cost of introducing a new marked point.  \r\n
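\r\nFor example, two applications of this formula give\r\n\\[\r\n\\int_{\\M05} \\kappa_1^2 = \\int_{\\M06} \\psi_6^2(\\kappa_1 - \\psi_6) = \\int_{\\M07} \\psi_6^2 \\psi_7^2 - \\int_{\\M06} \\psi_6^3 = 6 - 1 = 5,\r\n\\]\r\nwhere the two pure $\\psi$ intersections are evaluated by the methods described next.\r\n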
\r\nThus, it suffices now to consider the case of $\\psi$ classes alone.  These can be computed by the Witten-Kontsevich conjecture.  A practical way to compute them is given by Liu and Xu \\cite{liu-xu}, who give an induction on the genus.  Translating their notation into ours, this formula is \r\n\\begin{align*}\r\n (2g +n-1) (2g+n - 2) &\\int_{\\Mgn} \\psi_1^{d_1} \\cdots \\psi_n^{d_n} = \\\\\r\n \\frac{2d_1 + 3}{12} &\\int_{\\M{g-1}{n+4}} \\psi_1^{d_1+1} \\psi_2^{d_2} \\cdots \\psi_n^{d_n} - \\frac{2g+n-1}{6} \\int_{\\M{g-1}{n+3}} \\psi_1^{d_1} \\cdots \\psi_n^{d_n} \\\\ \r\n \t+ &\\sum_{I \\coprod J = \\{2, \\dots, n\\}}  (2d_1 +3) \\sum_{g'}\\int_{\\M{g'}{|I| + 3}} \\psi_1^{d_1 + 1} \\prod_{i \\in I} \\psi_i^{d_i} \\cdot \\int_{\\M{g-g'}{|J| + 2}} \\prod_{j \\in J} \\psi_j^{d_j} \\\\\r\n \t- &\\sum_{I \\coprod J = \\{2, \\dots, n\\}}  (2g-n-1) \\sum_{g'}\\int_{\\M{g'}{|I| + 2}} \\psi_1^{d_1} \\prod_{i \\in I} \\psi_i^{d_i} \\cdot \\int_{\\M{g-g'}{|J| +2}} \\prod_{j \\in J} \\psi_j^{d_j}.\r\n\\end{align*}\r\nUsing this formula along with the string equation and the dilaton equation, one can reduce all $\\psi$ intersections to the well-known base cases\r\n\\[\r\n \\int_{\\M03} 1 = 1\r\n\\]\r\nand\r\n\\[\r\n\t\\int_{\\M11} \\psi_1 = \\frac{1}{24}.\r\n\\]\r\n\r\nComputing intersections involving large powers of $\\di$ is particularly computationally intensive since the self intersection formula involves summing over all boundary divisors, so we use a trick from \\cite{faber} to speed it up.  This trick allows us to reduce the computation of monomials involving only $\\psi$, $\\kappa$ and $\\di$ classes to the computation of intersections on $\\overline{\\mathcal M}_g$ or $\\M11$, which have far fewer boundary divisors.  We give an example to illustrate this explicitly.  Suppose we wish to compute\r\n\\begin{equation}\r\n\t\\int_{\\M23} \\di^2 \\kappa_2 \\psi_1^2. \\label{eq:compute-trick}\r\n\\end{equation}\r\nLet $\\pi$ be the forgetful map\r\n\\[\r\n\t\\pi: \\M23 \\rightarrow \\M22\r\n\\]\r\nthat forgets the third marked point.\r\n\r\nFormula (1.7) of \\cite{AC} tells us that in general\r\n\\[\r\n\t\\kappa_a = \\pi^* \\kappa_a + \\psi_{n+1}^a,\r\n\\]\r\nwhere $n+1$ is the point forgotten by $\\pi$.  In \\cite{faber}, it is noted that $\\di$ is a pullback of $\\di$ under $\\pi$, so  \\eqref{eq:compute-trick} is equal to\r\n\\[\r\n \\int_{\\M23} \\pi^* \\di^2 (\\pi^* \\kappa_2 + \\psi_3^2) \\psi_1^2 = \\int_{\\M23} \\psi_1^2 \\pi^*(\\di^2 \\kappa_2) + \\psi_1^2\\psi_3^2 \\pi^*\\di^2.\r\n\\]\r\nNow, recall that integration is really pushing down to a point.  Thus, we can apply $\\pi_*$ to the integrands to get\r\n\\[\r\n \\int_{\\M22} \\pi_*(\\psi_1^2 \\pi^*(\\di^2 \\kappa_2)) + \\pi_*(\\psi_1^2\\psi_3^2 \\pi^*\\di^2)\r\n\\]\r\nwhich by the projection formula is \r\n\\[\r\n\t\\int_{\\M22} \\pi_*(\\psi_1^2) \\di^2 \\kappa_2 + \\pi_*(\\psi_1^2\\psi_3^2)\\di^2. \r\n\\]\r\nWe can push down $\\psi$ classes using the equations (1.7) and (1.9) of \\cite{AC}, yielding\r\n\\[\r\n  \\int_{\\M22} \\psi_1 \\di^2 \\kappa_2 + \\psi_1^2 \\kappa_1 \\di^2.\r\n\\]\r\n  We have reduced the number of marked points by one.  Repeating this will eventually reduce the computation to intersections on $\\M20$.  Notice also that if no $\\kappa$ classes are involved, this can be accomplished in one step via the formula (1.12) of \\cite{AC}.\r\n  \r\n
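As a plain-Python illustration (independent of our package interface) of the genus zero ingredient of these reductions, the following snippet evaluates pure $\\psi$ intersections on $\\M0n$ via the closed formula $\\int_{\\M0n} \\psi_1^{d_1} \\cdots \\psi_n^{d_n} = \\frac{(n-3)!}{d_1! \\cdots d_n!}$, which follows from the string equation:\r\n\\begin{verbatim}\r\nfrom math import factorial\r\n\r\ndef psi_genus0(d):\r\n    # Top psi intersection on Mbar_{0,n} for exponents d = (d_1, ..., d_n).\r\n    # Returns the multinomial coefficient (n-3)!/(d_1! ... d_n!) when the\r\n    # dimension condition sum(d) = n - 3 holds, and 0 otherwise.\r\n    n = len(d)\r\n    if n < 3 or sum(d) != n - 3:\r\n        return 0\r\n    out = factorial(n - 3)\r\n    for di in d:\r\n        out //= factorial(di)\r\n    return out\r\n\r\n# <tau_0^3> = 1, and the two values used in the kappa example above:\r\nassert psi_genus0([0, 0, 0]) == 1\r\nassert psi_genus0([0, 0, 0, 0, 0, 3]) == 1\r\nassert psi_genus0([0, 0, 0, 0, 0, 2, 2]) == 6\r\n\\end{verbatim}\r\n\r\n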
\\section{User Interface}\r\nOne goal of this project is to provide a convenient user interface.  We have created a Sage notebook that demonstrates this.  It can be accessed at \\url{http://www.sagenb.org/home/pub/3057/}.\r\n\\section{Availability}\r\nThe source is available from BitBucket at \\url{https://bitbucket.org/drew_j/top-intersections-on-mbar_g-n}, or from the author by email at \\href{mailto:werd2.718@gmail.com}{\\nolinkurl{werd2.718@gmail.com}}.  Please refer to the Sage notebook mentioned in the previous section for usage instructions.  You can run this program from the online Sage notebook server \\url{www.sagenb.org}, or you can install Sage on your local machine.  Sage is open source and freely available at \\url{www.sagemath.org}.\r\n\\bibliographystyle{halpha}\r\n\r\n\\bibliography{ref}\r\n\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "a03a657a167e1b656b6734cfdc7676c9caffedc9", "size": 9645, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "topintersections/tex/Mgn.tex", "max_stars_repo_name": "fchapoton/mgn", "max_stars_repo_head_hexsha": "9b0245e1f41aa08a19f7f332ef29415e8a66bf76", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "topintersections/tex/Mgn.tex", "max_issues_repo_name": "fchapoton/mgn", "max_issues_repo_head_hexsha": "9b0245e1f41aa08a19f7f332ef29415e8a66bf76", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-03-17T18:07:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-19T12:54:49.000Z", "max_forks_repo_path": "topintersections/tex/Mgn.tex", "max_forks_repo_name": "fchapoton/mgn", "max_forks_repo_head_hexsha": "9b0245e1f41aa08a19f7f332ef29415e8a66bf76", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-20T09:28:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-20T09:28:51.000Z", "avg_line_length": 63.4539473684, "max_line_length": 538, "alphanum_fraction": 0.6800414723, "num_tokens": 3254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.578435693053731}}
{"text": "\\section{Properties and Assumptions of Consistent Update}\\label{sec:properties}\nRecall that the SIP is defined as finding a measure $\\PP_\\pspace$ such that the push-forward of it matched $\\observedP$.\nThe following assumption guarantees the existence of a solution to the SIP in the form of an update to the initial distribution.\nIt implies that any event which is assigned a positive probability by the observations must also have a positive predicted probability.\n\n\\begin{assumption}[Predictability Assumption (Theoretical Form)]\\label{as:predicted-theoretical}\n  The measure associated with $\\observed$ is absolutely continuous with respect to the measure associated with $\\observed$.\n\\end{assumption}\n\nIf this is unsatisfied, one source of information (the data) suggests certain events are probable while another source of information (the model and initial beliefs) have a priori ruled that almost surely these events should not occur.\nTherefore, either initial beliefs, the model under consideration, or the description of uncertainty encoded in $\\predicted$ should be subjected to a critical reevaluation.\n\nThe following establishes a more practical form (from the perspective of numerical implementation), of \\ref{as:predicted-theoretical} which states that the predicted measure must dominate the observed.\n\\begin{assumption}[Predictability Assumption (Practical Form)]\\label{as:predicted-practical}\nThe requirement given in Assumption~\\ref{as:predicted-theoretical} is guaranteed if the following is satisfied:\n\\begin{equation}\\label{eq:pred-pract}\n  \\exists \\; C>0 \\text{ such that } \\observed (\\q) \\leq C \\predicted(\\q) \\text{ for a.e. } d\\in \\dspace,\n\\end{equation}\nwhere it is understood that $\\q = \\qlam$ for some $\\param \\in \\pspace$.\n\\end{assumption}\n\nAssumption~\\ref{as:predicted-practical} is particularly useful in that it is the same condition required for applying rejection sampling, which we summarize in Algorithm~\\ref{alg:rejection}.\nSpecifically, this allows us to sample from the updated density using the initial density as follows:\n\n\\begin{algorithm}[hbtp]\n\\DontPrintSemicolon\nDraw $\\nsamps$ independent identically distributed (i.i.d.) 
\nNow, assuming \\eqref{eq:pred-pract} holds, we state the following theorem from \\cite{BJW18a} based upon the disintegration of measures:\n\n\\begin{thm}[Existence and Uniqueness]\n  For any set $A\\in \\pborel$, the probability measure $\\updatedP$ defined by\n  \\begin{equation}\\label{eq:dci_sol}\n    \\updatedP (A) = \\int_\\dspace \\left (  \\int_{A \\cap \\qoi^{-1}(\\q)}  \\initial\\lam \\frac{\\observed\\Q}{\\predicted\\Q} \\, d\\mu_{\\pspace, \\q} \\lam \\right ) \\, d\\dmeas(\\q), \\; \\forall \\; A \\in \\pborel\n  \\end{equation}\n  is a consistent solution to the SIP given in (\\ref{eq:inverse-problem}), and is uniquely defined up to the specification of the initial probability measure $\\initial$ on $(\\pspace, \\pborel)$.\n  Here, $\\mu_{\\pspace, d}$ denotes the disintegration of the dominating measure $\\mu_\\pspace$.\n\\end{thm}\n\nThe updated density \\eqref{eq:update} in the iterated integral in \\eqref{eq:dci_sol} has no normalization constant because it is in fact a density (i.e., it integrates to $1$), which is summarized in Corollary 3.1 in \\cite{BJW18a} and restated in simplified form below:\n\\begin{cor}\\label{cor:int}\n$\\updatedP(\\pspace) = 1$.\n\\end{cor}\n\nThese definitions are combined to identify the form of the \\emph{updated density}, originally derived in \\cite{BJW18a}:\n\n\\begin{defn}[Updated Distribution]\\label{defn:updated}\n  A solution satisfying \\eqref{eq:dci_sol} is referred to as an updated distribution, with an updated density\n  \\begin{equation}\\label{eq:update}\n    \\updated \\lam = \\initial \\lam \\frac{\\observed \\Q }{\\predicted \\Q }, \\; \\forall \\; \\param \\in \\pspace.\n  \\end{equation}\n\\end{defn}\n\n
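As a simple sanity check of \\eqref{eq:update}, consider $\\pspace=\\dspace=\\R$ with $\\qoi$ the identity map and a standard normal initial density. Then $\\predicted = \\initial$, and \\eqref{eq:update} reduces to $\\updated = \\observed$ for any observed density satisfying \\eqref{eq:pred-pract}, so that the push-forward of the updated measure is exactly $\\observedP$, as consistency requires.\n\n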
% Corollary~\\ref{cor:int} is critical to understanding some significant differences between the classical Bayesian posterior density~\\cite{Smith} and the updated density given by \\eqref{eq:update}, which we discuss in Section~\\ref{sec:othermethods} as well as other places throughout this thesis.\n% Moreover, this Corollary provides the basis for a useful numerical diagnostic that assesses both the quality of a numerical approximation of $\\updated$ based on finite sampling and density estimation as well as any potential violations of the predictability assumption.\n\n% We next turn our attention to attributes of the Data-Consistent framework which make it appealing for use in solving inverse problems.\n% Namely, we summarize the stability results first presented in \\cite{BJW18} that demonstrate that approximation errors in the updated density are well understood within the measure-theoretic foundation on which the approach is constructed.\n\n%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{Stability of the Consistent Solution}\\label{sec:stability}\nThe Total Variation (TV) metric on a space of probability measures, absolutely continuous with respect to a dominating measure $\\mu$, is defined as\n\\begin{equation}\\label{eq:tv}\nd_{\\text{TV}} (\\PP_f, \\PP_g) := \\int \\abs{\\pp_f - \\pp_g} \\, d\\mu,\n\\end{equation}\nwhere $\\pp_f,\\pp_g$ are the densities (Radon-Nikodym derivatives with respect to $\\mu$) associated with measures $\\PP_f, \\PP_g$, respectively.\nThe stability results below are all with respect to the TV metric, which is widely used in the literature and is also known as \\emph{statistical distance}~\\citep{GS02, Smith, Silverman}.\nWe first define stability with respect to perturbations in the data.\n\n\\begin{defn}[Stability of Updated Densities I]\\label{defn:stableobs}\n  Given $\\initialP$ and $\\observedP$, let $\\widehat{\\observedP}$ be any perturbation to $\\observedP$ on $(\\dspace, \\dborel)$ satisfying \\eqref{eq:pred-pract}.\n  Let $\\updatedP$ and $\\widehat{\\updatedP}$ denote the consistent solutions associated with $\\observedP$ and $\\widehat{\\observedP}$, respectively.\n  We say that $\\updatedP$ is \\emph{stable} with respect to perturbations in $\\observedP$ if for all $\\eps > 0$, there exists a $\\delta > 0$ such that\n  \\begin{equation}\n    d_{\\text{TV}} (\\observedP, \\widehat{\\observedP}) < \\delta \\implies d_{\\text{TV}} (\\updatedP, \\widehat{\\updatedP}) < \\eps.\n  \\end{equation}\n\\end{defn}\n\nIn \\cite{BJW18a}, it is shown that $d_{\\text{TV}} (\\widehat{\\updatedP}, \\updatedP) = d_{\\text{TV}} (\\widehat{\\observedP}, \\observedP)$, which immediately proves the following:\n\n\\begin{thm}\n  $\\updatedP$ is stable with respect to perturbations to $\\observedP$.\n  \\label{thm:stableobs}\n\\end{thm}\n\nThis next definition and result are useful in analyzing the sensitivity of the updated density with respect to the initial beliefs.\n\n\\begin{defn}[Stability of Updated Densities II]\\label{defn:stableinitial}\n  Given $\\initialP$ and $\\observedP$, let $\\widehat{\\initialP}$ be any perturbation to $\\initialP$ on $(\\pspace, \\pborel)$ satisfying \\eqref{eq:pred-pract}.\n  Let $\\updatedP$ and $\\widehat{\\updatedP}$ denote the consistent solutions associated with $\\initialP$ and $\\widehat{\\initialP}$, respectively.\n  Let $\\sett{\\PP_{\\pspace, d}}{d\\in\\dspace}{}$ and $\\sett{\\widehat{\\PP_{\\pspace, d}}}{d\\in\\dspace}{}$ be the conditional probabilities defined by the disintegration of $\\initialP$ and $\\widehat{\\initialP}$, respectively.\n  We say that $\\updatedP$ is \\emph{stable} with respect to perturbations in $\\initialP$ if for all $\\eps > 0$, there exists a $\\delta > 0$ such that for almost every $d\\in\\supp(\\observedP)$,\n  \\begin{equation}\\label{eq:stableinitial}\n    d_{\\text{TV}} (\\PP_{\\pspace, d}, \\widehat{\\PP_{\\pspace, d}}) < \\delta \\implies d_{\\text{TV}} (\\updatedP, \\widehat{\\updatedP}) < \\eps.\n  \\end{equation}\n\\end{defn}\n\nThe following important stability theorem is also proven in \\cite{BJW18a}:\n\n\\begin{thm}\n  $\\updatedP$ is stable with respect to perturbations to $\\initialP$.\n  \\label{thm:stableinitial}\n\\end{thm}\n\nTaken together, these stability results provide assurance that the updated density we obtain is accurate up to the level of experimental error polluting $\\observedP$ and error in incorrectly specifying initial assumptions using $\\initialP$.\nGiven that specifying the definition of a ``true'' initial density is somewhat nebulous, we are less interested in the consequences of the latter conclusion.\nHowever, generating 
samples from $\\updatedP$ generally requires a numerical approximation to the predicted distribution, which introduces additional errors in $\\updatedP$.\nIn Section~\\ref{sec:approx}, the TV metric is used to bound the error in the updated density in terms of the error in the approximation to the push-forward of the initial.\n\n\n\n\n%%%%%%%%% Section 2.3\n\\subsection{Numerical Approximation and Sampling}\\label{sec:approx}\n%Since we are given $\\initial$ and $\\observed$, the computation of $\\predicted$ is the only aspect of the Consistent Bayesian framework that needs to be approximated.\n%Since there are few restrictions on the structure of the map $\\qoi$ that defines $\\predicted$, there is in general no explicit expression from which we can generate samples, so we use a numerical approximation to the probability density function.\n%\n%For simplicity, we simply propagate Monte Carlo samples from the prior and use a kernel density estimate (usually Gaussian\\footnote{In this proposal, all results are generated using this kernel, though six kernels common to density estimation are implemented in the ConsistentBayes Python package [TK - cite Silverman and your github].}).\n%\n%We summarize this in the following algorithm:\n%\n%\\begin{algorithm}[hbtp]\n%\\DontPrintSemicolon\n%Generate a set of samples $\\sett{\\param_i}{i=1}{N}$\n%\t\\For{$i = 1, \\hdots, N$}{\n%\t\t\tPropagate sample $\\param_i$ through the QoI map. Set $d_i = \\qoi(\\param_i)$.\n%\t}\n%Use $\\sett{d_i}{i=1}{N}$ and a density estimation method to approximate $\\predicted$.\n%\t\\label{alg:sample}\n%\\caption{Numerical Approximation of the Push-forward of the Prior Density}\n%\\end{algorithm}\n%\n\n\n%The computational object associated with $\\predicted$ is stored for re-use and can be evaluated at locations in $\\dspace$ other than $\\sett{d_i}{i=1}{N}$.\n%This procedure should be thought of as a characterization of the data space given the prior assumptions encoded in $\\initial$.\n\nIf $\\widehat{\\predicted}$ denotes a computational approximation to the push-forward of the initial density, then, with $\\widehat{\\predicted}$ substituted for $\\predicted$ in \\eqref{eq:dci_sol}, the conditional densities from the Disintegration Theorem (cf. Chapter~\\ref{chapter:geometry} for more details) are given as\n\\[\n\\frac{d\\widehat{\\PP}_{\\pspace, d}}{d\\mu_{\\pspace, d}}\\lam = \\frac{\\initial\\lam}{ \\widehat{\\predicted}\\Q },\n\\]\nwhere $\\widehat{\\PP}_{\\pspace, d}$ denotes the disintegration of $\\widehat{\\updatedP}$.\n\n\nWe assume the following for the approximation of the push-forward of the initial density:\n\\begin{assumption}\\label{as:predicted-theoreticalx}\nThere exists some $C>0$ such that\n\\[\n\\observed (d) \\leq C \\widehat{\\predicted}(d) \\text{ for a.e. 
} d\\in \\dspace.\n\\]\n\\end{assumption}\n\nIf this assumption is satisfied, then from \\cite{BJW18a}, we have the following:\n\\begin{thm}\\label{thm:predicted_bound}\n  The error in the approximate updated density is bounded above:\n  \\begin{equation}\\label{eq:predicted_bound}\n    d_{\\text{TV}} (\\updatedP, \\widehat{\\updatedP}) \\leq C d_{\\text{TV}} (\\predictedP, \\widehat{\\predictedP}),\n  \\end{equation}\n  where $C$ is the constant from Assumption \\ref{as:predicted-theoreticalx}.\n\\end{thm}\n\nA straightforward approach to construct $\\widehat{\\predicted}$ is to use a forward propagation of samples from $\\pspace$ to $\\dspace$ and then apply kernel density estimation (KDE)~\\citep{BJW18a}.\nThen, we may evaluate $\\updated$ directly for any sample in $\\pspace$ at the cost of one model solve per sample.\nWhile this allows us to incorporate sophisticated sampling techniques such as Markov chain Monte Carlo (MCMC)~\\citep{Smith, Tarantola_book} to generate samples according to the updated distribution, we often opt for a simpler route based on rejection sampling by re-using the initial set of propagated samples.\nThis avoids any additional model evaluations (as would be required by techniques relying on proposal samples such as MCMC).\nWe leverage the re-use of samples in the results herein extensively.\n\nBy Theorem~\\ref{thm:predicted_bound}, the accuracy of the computed updated density relies on the accuracy of the approximation of the push-forward of the initial.\nThroughout this thesis, we utilize a KDE with a Gaussian kernel to produce the non-parametric estimates of $\\predicted$.\nSuch KDEs are known to converge at a rate of $\\mathcal{O}(N^{-4/(4+\\dimD)})$ in mean-squared error and $\\mathcal{O}(N^{-2/(4+\\dimD)})$ in $L^1$-error, where $\\dimD$ is the dimension of $\\dspace$, and $N$ is the number of samples from $\\initial$ propagated through $\\qoi$ \\citep{Silverman}.\n\nFor simplicity, we introduce the following notation to capture the role of the ratio involved in \\eqref{eq:dci_sol} and to demonstrate properties we can leverage for generating samples from $\\updated$.\nWe let\n\\[\n\\updated\\lam = \\initial \\lam r\\Q, \\text{ where } r\\Q = \\frac{\\observed\\Q}{\\predicted\\Q}.\n\\]\n\nMany standard calculations about the updated density involve integrals of functions of $r\\Q$ with respect to the prior.\nFor any measurable function $f$, we establish the connection of calculating quantities over $\\pspace$ with those over $\\dspace$ by leveraging the following identity:\n\\[\n\\int_\\pspace f\\left( r\\Q \\right ) \\, d\\initialP = \\int_\\dspace f\\left( r(\\q) \\right ) \\, d\\predictedP.\n\\]\n\nWe use several such identities throughout this thesis, including the integral of the updated density:\n\\[\nI(\\updated ) = \\int_\\pspace r\\Q \\, d\\initialP = \\int_\\dspace r(\\q) \\, d\\predictedP ,\n\\]\nwhich we can use to check numerically that $I(\\updated) = 1$, i.e., that the predictability assumption \\eqref{eq:pred-pract} was not violated.\nSince the propagated samples are i.i.d., the sample average of $r(\\q)$ provides a convenient Monte Carlo estimate of $I(\\updated)$.
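\n\nTo make the workflow concrete, the following is a minimal, self-contained C++ sketch of the propagate--estimate--reject pipeline described above; the quantity-of-interest map, densities, and sample size are illustrative assumptions, and this is not the implementation used to generate the results in this thesis.\n\\begin{lstlisting}\n// Toy setup (assumed): initial ~ Uniform(-1,1), QoI q(l) = l*l,\n// observed ~ N(0.25, 0.1^2); none of these come from the text.\n#include <cmath>\n#include <cstdio>\n#include <random>\n#include <vector>\n\nconst double PI = 3.14159265358979323846;\n\n// Gaussian KDE evaluated at x with bandwidth h.\ndouble kde(const std::vector<double>& data, double h, double x) {\n    double s = 0.0;\n    for (double d : data) { double u = (x - d) / h; s += std::exp(-0.5*u*u); }\n    return s / (data.size() * h * std::sqrt(2.0 * PI));\n}\n\nint main() {\n    std::mt19937 rng(1);\n    std::uniform_real_distribution<double> initial(-1.0, 1.0), unif(0.0, 1.0);\n    auto qoi = [](double l) { return l * l; };\n    auto observed = [](double q) {\n        double u = (q - 0.25) / 0.1;\n        return std::exp(-0.5*u*u) / (0.1 * std::sqrt(2.0 * PI));\n    };\n\n    // 1. Propagate N initial samples through the QoI map (one solve each).\n    const int N = 2000;\n    std::vector<double> lam(N), q(N);\n    for (int i = 0; i < N; i++) { lam[i] = initial(rng); q[i] = qoi(lam[i]); }\n\n    // 2. KDE of the push-forward; Silverman rule-of-thumb bandwidth.\n    double m = 0.0, v = 0.0;\n    for (double x : q) m += x;\n    m /= N;\n    for (double x : q) v += (x - m) * (x - m);\n    double h = 1.06 * std::sqrt(v / (N - 1)) * std::pow(N, -0.2);\n\n    // 3. Ratio r at each propagated sample; diagnostic I(updated) ~ 1.\n    std::vector<double> r(N);\n    double I = 0.0, rmax = 0.0;\n    for (int i = 0; i < N; i++) {\n        r[i] = observed(q[i]) / kde(q, h, q[i]);\n        I += r[i];\n        if (r[i] > rmax) rmax = r[i];\n    }\n    std::printf("I(updated) ~ %f (should be close to 1)\\n", I / N);\n\n    // 4. Rejection sampling: accept lam_i with probability r_i / max_j r_j,\n    //    re-using the propagated samples (no further model solves).\n    int kept = 0;\n    for (int i = 0; i < N; i++) if (unif(rng) * rmax < r[i]) kept++;\n    std::printf("accepted %d of %d initial samples\\n", kept, N);\n}\n\\end{lstlisting}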
\n\n% Similarly, we follow \\cite{BJW18} to write the commonly used metric for Information Gain, the Kullback-Leibler (KL) divergence:\n% \\begin{equation}\\label{eq:KLdiv}\n% \\text{KL}(\\initial : \\updated ) = \\int_\\pspace r\\Q \\log r\\Q \\, d\\initialP = \\text{KL}(\\observed : \\predicted ),\n% \\end{equation}\n% i.e., the KL-divergence between the initial and updated density is equal to the KL-divergence between the observed and the predicted densities.\n\nTaken together, the results summarized in this section demonstrate that the Data-Consistent framework and density-based solution \\eqref{eq:dci_sol} have been rigorously constructed and studied.\nThey give an experimenter assurance that small mistakes in problem formulation do not have unexpected consequences.\nIn the following section, we provide a similar sense of assurance for a central practical consideration involved in this work: that the results depend on computational implementations of the aforementioned theory, i.e., software.\nWe directly address how to establish, in the implementation of this work, a level of rigor complementary to that invested in the construction of the theory.\nWe leverage modern advances in software engineering with an emphasis on results being reproducible in an accessible manner.\n\n\\input{software/intro.tex}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Outline of Remaining Chapters}\\label{sec:outline}\nIn Chapter~\\ref{chapter:mud}, we propose a way by which parameter identification can be performed in the DCI framework by posing the problem as a SIP and maximizing $\\updated$.\nCentral to how this contribution is accomplished in practice is the definition of a data-constructed QoI map.\nThe impact of a QoI's inherent geometric properties on our ability to approximate solutions to SIPs using finite sampling is then summarized in Chapter~\\ref{chapter:geometry}.\nThe focus there is on the property called skewness, which is connected to the QoI maps introduced in Chapter~\\ref{chapter:mud} through a case study of a PDE-based example in Chapter~\\ref{chapter:vector-valued}.\nFinally, we provide some concluding remarks and directions for future research in Chapter~\\ref{chapter:future} alongside several examples demonstrating preliminary results for novel extensions of the work presented in this thesis.\n", "meta": {"hexsha": "7a05fae8d791ff8fd6c3569aaf9f919d6621846f", "size": 17058, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "intro/properties.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "intro/properties.tex", "max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "intro/properties.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.4890829694, "max_line_length": 339, "alphanum_fraction": 0.7574744988, "num_tokens": 4396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428946, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5784356765752574}}
{"text": "\\problemname{Great Expectations}\n\nA \\emph{speedrun} is a playthrough of a game with the intention to complete it as\nquickly as possible. When speedrunning, you usually follow a pre-planned\npath through the game. Along this path, there may be some places\nwhere you have to pull off a difficult technique, or \\emph{trick}, which may cause a\ndelay if you fail to pull it off successfully. Luckily you can \\textit{reset}\nthe game at any time: if you have made a few mistakes, you can start a new run,\nlosing your progress but instantaneously starting over with a clean slate. You\ncan do this as often as you like.\n\nThe game you are currently speedrunning has a record of $r$ seconds, which you\nintend to beat. You have discovered a path through the game that, in the best\ncase, takes $n < r$ seconds. There are some tricks along the way, though: you\nknow exactly where along the run they occur, what the probability is that you\nwill pull them off successfully, and how many seconds you have to spend to\nrecover if they fail.\n\nGiven this data, you want to find the optimal strategy for when to reset the\ngame to minimise the expected time to set a new record. Write a program\nto determine what this smallest possible expected time is.\n\n\\begin{Input}\nThe input consists of:\n\n\\begin{itemize}\n\\item One line with three integers $n$, $r$ and $m$ ($2 \\leq n < r \\leq 5\\,000$,\n  $1 \\le m \\le 50$), where $n$ and $r$ are as described above and $m$ is the number of tricks.\n\\item $m$ lines, each containing three numbers describing a trick:\n    \\begin{itemize}\n        \\item An integer $t$ ($1 \\le t < n$), the time in the route (assuming no failed tricks before) at which the trick occurs,\n        \\item a real number $p$ ($0 < p < 1$ and $p$ has at most $6$ digits after the decimal point), the probability that the trick succeeds, and\n        \\item an integer $d$ ($1 \\le d \\le 1\\,000$), the number of seconds required to recover in case the trick fails.\n    \\end{itemize}\n\\end{itemize}\n\nThe tricks are given in sorted order by $t$, and no two tricks occur\nat the same time $t$ in the route.\n\nYou may assume that, without resetting, a single playthrough has a probability\nof at least $1$ in $50\\,000$ to succeed at improving the record.\n\\end{Input}\n\n\n\\begin{Output}\nOutput the expected time you will have to play the game to set a new record,\nassuming an optimal strategy is used. Your answer should have an absolute\nor relative error of at most $10^{-6}$.\n\\end{Output}\n\n\\subsection*{Explanation of Sample Input 1}\n\nThe record for this game is $111$ seconds, and your route takes \n$100$ seconds if everything goes right.\n\nAfter playing for $20$ seconds, there is a trick with a $50\\%$ success rate. If it\nsucceeds, you keep playing. If it fails, you incur a $10$ second time loss: now\nthe run will take at least $110$ seconds. It is still possible to set a record,\nbut every other trick in the run has to be successful. It turns out to be faster\non average to reset after failing the first trick.\n\nThus you repeat the first $20$ seconds of the game until the trick is successful:\nwith probability $1/2$, it takes $1$ attempt; with probability $1/4$, it takes $2$\nattempts; and so on. 
On average, you spend $40$ seconds on the first $20$ seconds\nof the route.\n\nOnce you have successfully performed the first trick, you want to finish the run\nno matter the result of the other tricks: it takes $80$ seconds, plus on average\n$1$ second loss from each of the remaining $4$ tricks. So the expected time until you\nset a record is $124$ seconds.\n\n", "meta": {"hexsha": "4a051d38a2377e8286ebf439eb0f9d8702da9250", "size": 3530, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "UICPC/21/nwerc2020all/greatexpectations/problem_statement/problem.en.tex", "max_stars_repo_name": "MilladMuhammadi/Competitive-Programming", "max_stars_repo_head_hexsha": "9f84a2d2734a5efe0e1fde0062e51782cd5af2c6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "UICPC/21/nwerc2020all/greatexpectations/problem_statement/problem.en.tex", "max_issues_repo_name": "MilladMuhammadi/Competitive-Programming", "max_issues_repo_head_hexsha": "9f84a2d2734a5efe0e1fde0062e51782cd5af2c6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "UICPC/21/nwerc2020all/greatexpectations/problem_statement/problem.en.tex", "max_forks_repo_name": "MilladMuhammadi/Competitive-Programming", "max_forks_repo_head_hexsha": "9f84a2d2734a5efe0e1fde0062e51782cd5af2c6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.0277777778, "max_line_length": 146, "alphanum_fraction": 0.7422096317, "num_tokens": 917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.7879311931529758, "lm_q1q2_score": 0.5784356747473592}}
{"text": "\\section{Polyhedrons}\nIn this section, we give a brief introduction to polyhedrons, and show how to compute their surface area and their volume.\n\n\\subsection{Definition}\nA polyhedron is a region of space delimited by polygonal faces. Generally, we will describe a polyhedron by listing its faces. Some basic conditions apply:\n\\begin{itemize}\n\\item all the faces are polygons that don't self-intersect;\n\\item two faces either share a complete edge, or share a single vertex, or have no common point;\n\\item all edges are shared by exactly two faces;\n\\item if we define adjacent faces as faces that share an edge, all faces are connected together.\n\\end{itemize}\n\nHere is a polyhedron that respects those conditions:\n\\begin{center}\n    \\includeFig{polyhedron-0} \\\\[.5cm]\n    \\begin{tabu} to .6\\linewidth {X[c]|X[c]}\n        %\\toprule\n        \\emph{Face type} & \\emph{Face visibility} \\\\\n        \\midrule\n        8 triangles & 7 visible \\\\\n        4 quadrilaterals & 5 hidden \\\\\n    \\end{tabu}\n\\end{center}\n\nNote that those conditions don't exclude nonconvex polyhedrons (like the example above), but they do exclude self-crossing polyhedrons.\n\n\\subsection{Surface area}\nTo compute the surface area of a polyhedron, we need to compute the area of their faces. Like we do for polygons, we denote a face by listing its vertices in order, like $P_1P_2P_3P_4$. The vertices must all be coplanar and the edges should not intersect except at their ends.\n\n\\centerFig{polyhedron-1}\n\nHow do we find the area of a face $P_1\\cdots P_n$? First let's take the most simple case: a triangle $ABC$.\nIf we compute cross product $\\crossv{AB}{AC}$, its direction is perpendicular to the triangle and its norm is\n\\[|AB|\\,|AC| \\sin\\theta\\]\nwhere $\\theta$ is the amplitude of angle between $\\vv{AB}$ and $\\vv{AC}$.\nThis is twice the area of triangle $ABC$, as was already noted in section~\\ref{ss:polygon-area}.\nSo the area of triangle $ABC$ is $\\frac{1}{2}\\big\\|\\crossv{AB}{AC}\\big\\|$.\n\n\\centerFig{polyhedron-2}\n\nIf we rewrite the vectors we can obtain a more symmetric expression:\n\\begin{align*}\n\\crossv{AB}{AC}\n&= (B-A)\\times(C-A) \\\\\n&= B \\times C - B \\times A - A \\times C + A \\times A \\\\\n&= A \\times B + B \\times C + C \\times A\n\\end{align*}\n\nLet's extend this to a quadrilateral $ABCD$. There are two cases:\n\\begin{itemize}\n\\item Triangles $ABC$ and $ACD$ are oriented in the same way. In this case, vectors $\\crossv{AB}{AC}$ and $\\crossv{AC}{AD}$ point in the same direction, and the areas should be added together. So we take the vector sum $\\crossv{AB}{AC} + \\crossv{AC}{AD}$.\n\\item Triangles $ABC$ and $ACD$ are oriented in different ways (the angle at either $B$ or $D$ is concave). In this case, vectors $\\crossv{AB}{AC}$ and $\\crossv{AC}{AD}$ point in opposite directions, and we should take the difference of the areas. So again, taking the vector sum $\\crossv{AB}{AC} + \\crossv{AC}{AD}$ will produce the desired effect.\n\\end{itemize}\n\n\\centerFig{polyhedron-3}\n\nWe can reformulate the sum in the same way:\n\\begin{align*}\n\\crossv{AB}{AC} + \\crossv{AC}{AD}\n&= (A \\times B + B \\times C + C \\times A) \\\\\n&\\qquad + (A \\times C + C \\times D + D \\times A) \\\\\n&= A \\times B + B \\times C + C \\times D + D \\times A\n\\end{align*}\n\nIf we continue with more and more vertices, we can make the following general conclusion. 
Given a polygon $P_1\\cdots P_n$, we can compute the \\emph{vector area}\n\\[\\vv{S} = \\frac{P_1 \\times P_2 + P_2 \\times P_3 + \\cdots + P_n \\times P_1}{2}\\]\nThis vector is perpendicular to the polygon, and $\\big\\|\\vv{S}\\big\\|$ gives its area. It is oriented such that when it points towards the observer, the vertices are numbered in counter-clockwise order.\n\n\\centerFig{polyhedron-4}\n\nNote that this is very similar to the 2D formula for area we obtained in section~\\ref{ss:polygon-area}. The only thing that changes is that the result is a vector. We can implement this straightforwardly:\n\\begin{lstlisting}\np3 vectorArea2(vector<p3> p) { // vector area * 2 (to avoid divisions)\n    p3 S = zero;\n    for (int i = 0, n = p.size(); i < n; i++)\n        S = S + p[i]*p[(i+1)%n];\n    return S;\n}\ndouble area(vector<p3> p) {\n    return abs(vectorArea2(p)) / 2.0;\n}\n\\end{lstlisting}\n\n\\subsection{Face orientation}\nWhen given an arbitrary polyhedron, an important task is to orient the faces properly, so that we know which side is inside the polyhedron and which side is outside. In particular, we will try to order the vertices of the faces so that their vector areas $\\vv{S}$ all point towards the outside of the polyhedron. Note that, depending on the way the polyhedron was obtained, this might already be the case.\n\n\\begin{center}\n    \\includeFig{polyhedron-5}\n    \n    $\\vv{S}$ pointing outside (lengths not to scale)\n\\end{center}\n\nWhile it's not clear when looking at individual faces which side is inside the polyhedron, it's easy to deduce the correct orientation by looking at the orientation of an adjacent face. If two faces share an edge $[PQ]$, and the first face lists $P$ then $Q$ in this order, then the other face should list them in the other order, so that they ``rotate'' in the same direction. Note that because of circularity, in $P_1\\cdots P_n$, $P_n$ is considered to come before $P_1$, not after.\n\n\\centerFig{polyhedron-6}\n\nSo to orient the faces in a consistent way, start from an arbitrary face, and perform a graph traversal on faces. Whenever you go from a face to a neighboring face, reverse the new face's vertex order if the common edge is present in the same order on both faces. 
This will either orient all vector areas towards the outside, or all towards the inside.\n\nHere is an example implementation:\n\\begin{lstlisting}\n// Create arbitrary comparator for map<>\nbool operator<(p3 p, p3 q) {\n    return tie(p.x, p.y, p.z) < tie(q.x, q.y, q.z);\n}\nstruct edge {\n    int v;\n    bool same; // = is the common edge in the same order?\n};\n\n// Given a series of faces (lists of points), reverse some of them\n// so that their orientations are consistent\nvoid reorient(vector<vector<p3>> &fs) {\n    int n = fs.size();\n    \n    // Find the common edges and create the resulting graph\n    vector<vector<edge>> g(n);\n    map<pair<p3,p3>,int> es;\n    for (int u = 0; u < n; u++) {\n        for (int i = 0, m = fs[u].size(); i < m; i++) {\n            p3 a = fs[u][i], b = fs[u][(i+1)%m];\n            // Let's look at edge [AB]\n            if (es.count({a,b})) { // seen in same order\n                int v = es[{a,b}];\n                g[u].push_back({v,true});\n                g[v].push_back({u,true});\n            } else if (es.count({b,a})) { // seen in different order\n                int v = es[{b,a}];\n                g[u].push_back({v,false});\n                g[v].push_back({u,false});\n            } else { // not seen yet\n                es[{a,b}] = u;\n            }\n        }\n    }\n    \n    // Perform BFS to find which faces should be flipped\n    vector<bool> vis(n,false), flip(n);\n    vis[0] = true; // mark the start face so it is not re-enqueued\n    flip[0] = false;\n    queue<int> q;\n    q.push(0);\n    while (!q.empty()) {\n        int u = q.front();\n        q.pop();\n        for (edge e : g[u]) {\n            if (!vis[e.v]) {\n                vis[e.v] = true;\n                // If the edge was in the same order,\n                // exactly one of the two should be flipped\n                flip[e.v] = (flip[u] ^ e.same);\n                q.push(e.v);\n            }\n        }\n    }\n    \n    // Actually perform the flips\n    for (int u = 0; u < n; u++)\n        if (flip[u])\n            reverse(fs[u].begin(), fs[u].end());\n}\n\\end{lstlisting}\n\n\\subsection{Volume}\nSuppose that the faces are oriented correctly. We will show how to compute the volume of the polyhedron by taking the same approach as in section~\\ref{ss:polygon-area} for the 2D polygon area.\n\nLet's choose an arbitrary reference point $O$.\nWe will compute the volume of the polyhedron face by face: for a face $P_1\\cdots P_n$, if the side of the face seen from $O$ is inside the polyhedron, add the volume of pyramid $OP_1\\cdots P_n$, and otherwise subtract it. That way, by inclusion and exclusion, the final result will be the volume inside the polyhedron.\n\nSince $\\vv{S}$ always points towards the outside of the polyhedron, we just have to check whether $\\vv{S}$ points away from $O$ (add) or towards $O$ (subtract).\n\n\\centerFig{polyhedron-7}\n\nHow do we compute the volume of pyramid $OP_1\\cdots P_n$? We can use the formula\n\\[\\mathrm{volume} = \\frac{\\mathrm{area\\ of\\ base}\\times\\mathrm{height}}{3}\\]\n\nIt turns out that the dot product $\\dotv{S}{OP_1}$ computes exactly $(\\mathrm{area\\ of\\ base}\\times\\mathrm{height})$ in absolute value. Indeed, by definition\n\\[\\dotv{S}{OP_1} = \\big\\|\\vv{S}\\big\\|\\,|OP_1|\\cos\\theta\\]\nwhere $\\theta$ is the angle between $\\vv{S}$ and $\\vv{OP_1}$. 
The norm $\\big\\|\\vv{S}\\big\\|$ is the area of $P_1\\cdots P_n$, the base of the pyramid, and we can easily see that $|OP_1|\\cos\\theta$ is the height of the pyramid (up to sign).\n\n\\centerFig{polyhedron-8}\n\nSo the absolute value of $\\dotv{S}{OP_1}$ is equal to three times the volume of pyramid $OP_1\\cdots P_n$, and the dot product is positive if $\\vv{S}$ points away from $O$, and negative if $\\vv{S}$ points towards $O$. This is exactly the sign we want.\n\nFor the implementation, we take $O$ to be the origin for convenience. We divide by 6 at the end because we need to divide by 2 to get the correct area and then by 3 because of the formula for the volume of a pyramid.\n\\begin{lstlisting}\ndouble volume(vector<vector<p3>> fs) {\n    double vol6 = 0.0;\n    for (vector<p3> f : fs)\n        vol6 += (vectorArea2(f)|f[0]);\n    return abs(vol6) / 6.0;\n}\n\\end{lstlisting}\n\nIn case the vector areas $\\vv{S}$ all point towards the inside of the polyhedron (which may happen after applying the face orientation procedure in the previous section), \\lstinline|vol6| will have the correct magnitude but a negative sign. If this happens, flip all the faces so that all $\\vv{S}$ now point outside.
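\n\nAs a quick sanity check, here is a hypothetical usage sketch combining the routines above (assuming \\lstinline|p3| supports brace initialization; the shape is a unit-square-based pyramid with apex above a corner, so the volume must be $1/3$):\n\\begin{lstlisting}\n// Hypothetical usage: square base ABCD, apex E above corner A.\np3 A{0,0,0}, B{1,0,0}, C{1,1,0}, D{0,1,0}, E{0,0,1};\nvector<vector<p3>> fs = {{A,B,C,D}, {A,E,B}, {B,E,C}, {C,E,D}, {D,E,A}};\nreorient(fs);                    // make all faces consistently oriented\ndouble S = 0;\nfor (auto &f : fs) S += area(f); // total surface area\ndouble V = volume(fs);           // = 1/3 regardless of chosen orientation\n\\end{lstlisting}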
{"text": "\\documentclass[../base.tex]{subfiles}\n\n\\begin{document}\n\\section{Estimating Contagion}\n\\label{est}\n\n\\cite{pesaran2007econometric} (PP) consider a two country framework to capture the insights of \\cite{masson1999contagion}. The model can be represented by the relations:\n\n\\begin{align} \n\ts_{1,t} =&~\\boldsymbol{\\delta}_1^{\\prime} \\mathbf{x}_{1,t} + \\boldsymbol{\\phi}_1^{\\prime} \\mathbf{F}_t + \\beta_1 D_{2,t} + u_{1,t} \\label{eqn:system}\\\\\n\ts_{2,t} =&~\\boldsymbol{\\delta}_2^{\\prime} \\mathbf{x}_{2,t} + \\boldsymbol{\\phi}_2^{\\prime} \\mathbf{F_t} + \\beta_2 D_{1,t} + u_{2,t} \\label{eqn:system2}\n\\end{align}\n\nwhere $s_{i,t}$ is the spread of country $i$ bond yields over the German equivalent, $\\mathbf{x}_{i,t}$ is a vector of country-specific regressors (including any lags of the dependent variable), $\\mathbf{F}_t$ is a vector of factors common to all countries, and $D_{j, t}$ is a dummy variable indicating a crisis in country $j \\neq i$ at time $t$. $u_{i, t}$ is a serially uncorrelated error term with idiosyncratic conditional variances $\\sigma^2_{ui, t-1}, i = 1, 2$ and a non-zero correlation coefficient $\\rho$ that is assumed to be time invariant for ease of exposition. \n\nThe external crisis variable $D_{j, t}$ takes a value of either zero or unity, and the formulation of this variable is discussed in detail in Section \\ref{dating_methodology}. For now, it is sufficient to assume an indicator function is triggered if country $j$ sees spreads rise above a certain threshold $c_j$. Essentially, $D_{j,t}$ is an endogenous variable of the form $D_{j, t} = \\mathbf{I}(s_{j, t} - c_j)$ where $\\mathbf{I}(A) = 1$ if $A >0$ and zero otherwise.\n\nAfter some simplifications, PP demonstrate that the analytical solution of the model depends crucially upon the strength of fundamentals - defined as $m_{i,t} = \\boldsymbol{\\delta}_i^{\\prime} \\mathbf{x}_{i,t} + \\boldsymbol{\\phi}_i^{\\prime} \\mathbf{F}_t + u_{i,t}$ - and the characteristics of the contagion coefficients. \n\nIf one of $\\beta_1$ or $\\beta_2$ is zero, then the model possesses a simple solution. Abstracting from the possibility of time-varying thresholds, the system becomes\n\n\\begin{align*}\n\ts_{1,t} =&~ m_{1,t} + \\beta_1 I(s_{2,t} - c_2) \\\\\n\ts_{2,t} =&~ m_{2,t}\n\\end{align*}\n\nwhen $\\beta_2 = 0$. $s_{2,t}$ is determined solely by its own fundamentals, while $s_{1,t}$ is a piecewise function of the realised $s_{2,t}$:\n\\[\ns_{1,t} = \n\\begin{cases} \nm_{1,t} + \\beta_1  & \\text{if } s_{2,t} > 0 \\\\\nm_{1,t}\t\t       & \\text{if } s_{2,t} \\leq 0\n\\end{cases}\n\\]\n\nWith two positive contagion coefficients, the solution is not unique for certain values of the fundamental variables. 
Normalising by the threshold and contagion coefficients, such that the normalised variable is $A_{i,t} =~\\frac{a_{i,t} - c_i}{\\beta_i}$:\n\n\\begin{align*}\n\tS_{1,t} =&~ M_{1,t} + I(S_{2,t})\\\\\n\tS_{2,t} =&~ M_{2,t} + I(S_{1,t})\n\\end{align*}\n\nwhich gives a reduced formulation for country 1:\n\n\\begin{align}\n\tS_{1,t} =&~ M_{1,t} + I(M_{2,t} + I(S_{1,t}))\n\t\\label{eqn:model_soln}\n\\end{align}\n\nGiven the dichotomous nature of the nested indicator function, there are four possible regimes for the system at time $t$:\n\n\\begin{align}\n\t\\label{eqn:regions}\n\t\\text{regime 1:}~ s_{1,t} & \\leq\tc_1, s_{2,t} \\leq c_2; ~~~~ \\text{regime 2:}~ s_{1,t} \\leq\tc_1, s_{2,t} > c_2 \\\\\n\t\\text{regime 3:}~ s_{1,t} & >\tc_1, s_{2,t} \\leq c_2; ~~~~ \\text{regime 4:}~ s_{1,t} >\tc_1, s_{2,t} > c_2\n\\end{align}\n\nThese regimes can help to reach a solution for equation (\\ref{eqn:model_soln}). Normalisation means that:\n\n\\begin{align*}\n\t\\text{regime 1:}~ S_{1,t} & \\leq\t0, S_{2,t} \\leq 0; ~~~~ \\text{regime 2:}~ S_{1,t} \\leq\t0, S_{2,t} > 0 \\\\\n\t\\text{regime 3:}~ S_{1,t} & >\t0, S_{2,t} \\leq 0; ~~~~ \\text{regime 4:}~ S_{1,t} >\t0, S_{2,t} > 0\n\\end{align*}\n\nWhen the realisations of $S_{i,t}, i = 1,2$ are known then we can find the characteristics of the fundamentals. For example, in regime 2, $S_{1,t} = M_{1,t} + 1 \\leq 0$, i.e. $M_{1,t} \\leq -1$; $S_{2,t} = M_{2,t} > 0$. The full set of regimes is, in $(M_{1,t}, M_{2,t})$ space:\n\n\\begin{align*}\n\t\\text{regime 1:}~ M_{1,t} & \\leq\t0, M_{2,t} \\leq 0; ~~~~ \\text{regime 2:}~ M_{1,t} \\leq\t-1, M_{2,t} > 0 \\\\\n\t\\text{regime 3:}~ M_{1,t} & >\t0, M_{2,t} \\leq -1; ~~~~ \\text{regime 4:}~ M_{1,t} >\t-1, M_{2,t} > -1\n\\end{align*}\n\nA fifth region, $-1 < M_{1,t} \\leq 0; -1 < M_{2,t} \\leq 0$, in which fundamentals in both countries are weak but not so weak that the good outcome is impossible, does not directly correspond to a region in $(S_{1,t}, S_{2,t})$ space. In this region of fundamentals, $S_{i,t}$ is randomised between `good' and `bad' states according to some process. PP provide the example of $\\mathbf{M}_t = (-\\frac{1}{2}, -\\frac{1}{3})^\\prime$, for which the solutions $\\mathbf{S}_t^a = (-\\frac{1}{2}, -\\frac{1}{3})^\\prime$ and $\\mathbf{S}_t^b = (\\frac{1}{2}, \\frac{2}{3})^\\prime$ can both hold.\n\nDesignating the equilibrium choice by $d_t \\sim$ Bernoulli($\\pi^E$), where $\\pi^E$ is the probability of the `bad' outcome of higher spreads, the solution in this region is characterised as $S_{i,t}^*(d_t) =~d_tM_{i,t} + (1 - d_t)(1 + M_{i,t})$. This generates a bimodal distribution of $s_{i,t}^* = s_{i,t}(d_t)$ that Pesaran and Pick demonstrate becomes more pronounced with larger contagion coefficients. Another feature of the model made clear by the above solution is that $s_{1,t}$ and $s_{2,t}$ will be correlated through the crisis dummy even when fundamentals are uncorrelated, provided $\\beta_i \\geq 0, \\beta_{j \\neq i} > 0;~~ i,j = 1, 2$. \n\nThe solution derived in the two-country setting is extended to the N-country formulation:\n\n\\begin{align}\n\ts_{i,t} =&~\\boldsymbol{\\delta}_i^{\\prime} \\mathbf{x}_{i,t} + \\boldsymbol{\\phi}_i^{\\prime} \\mathbf{F}_t + \\sum_{j=1}^{N}\\beta_j D_{j,t} + u_{i,t}, ~~~ i = 1,2,...,N ; j \\neq i\n\t\\label{eqn:pp_multi}\t\n\\end{align}\n\nand this equation will form the foundation of the estimation. The remainder of this section describes the econometric issues with estimating the contagion coefficients, and summarises the techniques used. 
\n\nFinally, while not a primary concern of this paper, it is worth noting that the structure of the dataset has implications for identification. In the case of a sufficiently large number of countries, i.e. $N \\rightarrow \\infty$, the country-specific contagion coefficients cannot be identified, but for fixed $N$, $T \\rightarrow \\infty$, the multi-country model converges to the solution of the two-country system. As the analysis is limited to the Eurozone over several years of daily observations, it is reasonable to assume the latter setting applies. The formal result, derived in section 4 of \\cite{pesaran2007econometric}, is therefore not reproduced.\n\n\n\\subsection{Estimation Strategies}\n\\label{est_strat}\n\nIn this study, three estimators will be described. The first two, OLS and Generalised Instrumental Variables, are used to estimate the model, while a Maximum Likelihood estimator is derived in the final subsection, although it is not utilised due to complexities in implementing it for this setup. Again, for ease of notation, the two-country model will be used to describe the estimators. For the first two methods, they are subsequently extended to the N-country case using equation (\\ref{eqn:pp_multi}).\n\n\\subsubsection{OLS and GIVE}\n\nUsing the standard formulations in \\cite{hayashi2000econometrics} as a foundation, and the expositions found in \\cite{pesaran2007econometric} and \\cite{massacci2007identification}, the OLS and Generalised Instrumental Variables (GIVE) estimators are derived below.\n\nIf we treat the dates of the crisis as predetermined, the dummy variables $D_{j,t}, ~ j = 1, 2$ are exogenous and accurately capture the two regimes. OLS provides an unbiased and consistent estimator in this setting. However, as discussed in Section \\ref{dating_methodology} below, these assumptions are unlikely to hold, and $D_{j,t}$ will be endogenously determined. \n\n\\cite{pesaran2007econometric} and \\cite{metiu2012sovereign} use Generalised Instrumental Variable Estimation (GIVE) to consistently estimate the contagion coefficient. It is assumed that $D_{j,t},~ j = 1, 2$ is endogenous, while all other variables are predetermined. It is also assumed that the threshold parameters $c_1$ and $c_2$ are known. Vectors $\\boldsymbol{\\gamma}_i$, $\\mathbf{y}_i$, and $\\mathbf{h}_{i,t}$, for $i,j = 1,2 ~~ i \\neq j$ are defined as \n\n\\begin{align*}\n \\boldsymbol{\\gamma}_i \\equiv&~ (\\boldsymbol{\\phi}_i^{\\prime}, \\boldsymbol{\\delta}_{i}^{\\prime}, \\beta_i)^{\\prime} \\\\\n \\mathbf{y}_i \\equiv&~ (y_{i,1},...,y_{i,T})^{\\prime}\\\\\n \\mathbf{h}_{i,t} \\equiv&~ [\\mathbf{F}_t^{\\prime}, \\mathbf{x}_{i,t}^{\\prime}, \\mathbf{I}(y_{j,t} - c_j)]^{\\prime}\n\\end{align*}\n\nwith the matrix $\\mathbf{H}_i \\equiv (\\mathbf{h}_{i,1}^{\\prime},..., \\mathbf{h}_{i,T}^{\\prime})^{\\prime}$ representing the exogenous and endogenous regressors for country $i$ across the whole sample period $T$. \n\nDefining $\\mathbf{w}_{i,t}$ as the vector of instruments, $\\mathbf{W}_i \\equiv (\\mathbf{w}_{i,1}^{\\prime},..., \\mathbf{w}_{i,T}^{\\prime})^{\\prime}$ is the full sample matrix of instruments, with associated projection matrix $\\mathbf{P}_{\\mathbf{W}_i} \\equiv \\mathbf{W}_i (\\mathbf{W}_i^{\\prime} \\mathbf{W}_i)^{-1} \\mathbf{W}_i^{\\prime}$. 
The system in equations (\\ref{eqn:system}) and (\\ref{eqn:system2}) is expressed for the full sample as\n\n\\begin{align}\n\t\\mathbf{y}_i = \\mathbf{H}_i \\boldsymbol{\\gamma}_i + \\mathbf{u}_i \\label{eqn:matrix_system}\n\\end{align}\n\nThe OLS and GIVE estimators in this setup are:\n\n\\begin{align}\n\t\\hat{\\boldsymbol{\\gamma}}_{i, OLS} &= (\\mathbf{H}_i^{\\prime} \\mathbf{H}_i)^{-1} \\mathbf{H}_i^{\\prime} \\mathbf{y}_i\t\\label{eqn:ols}\\\\\t\n\t\\hat{\\boldsymbol{\\gamma}}_{i, GIVE} &= (\\mathbf{H}_i^{\\prime} \\mathbf{P}_{\\mathbf{W}_i} \\mathbf{H}_i)^{-1} \\mathbf{H}_i^{\\prime} \\mathbf{P}_{\\mathbf{W}_i} \\mathbf{y}_i \\label{eqn:give}\t\n\\end{align}\n\nwith the feasible covariance matrix estimators:\n\n\\begin{align*}\n\t\\hat{\\mathbf{V}}_{i, OLS} &= \\hat{\\sigma}^2_i (\\mathbf{H}_i^{\\prime} \\mathbf{H}_i)^{-1} \\\\\n\t\\hat{\\mathbf{V}}_{i, GIVE} &= \\hat{\\sigma}^2_i (\\mathbf{H}_i^{\\prime} \\mathbf{P}_{\\mathbf{W}_i} \\mathbf{H}_i)^{-1}\n\\end{align*}\n\n\nwhere $\\hat{\\mathbf{u}}_i = \\mathbf{y}_i - \\mathbf{H}_i \\hat{\\boldsymbol{\\gamma}}_i$ and $\\hat{\\sigma}^2_i = \\frac{1}{T} (\\hat{\\mathbf{u}}_i^{\\prime} \\hat{\\mathbf{u}}_i)$. 
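\n\nAs a concrete illustration of (\\ref{eqn:ols}) and (\\ref{eqn:give}), the following is a minimal sketch using the two-stage least squares form of GIVE (regress $\\mathbf{H}_i$ on $\\mathbf{W}_i$, then $\\mathbf{y}_i$ on the fitted values, since $\\mathbf{H}_i^{\\prime}\\mathbf{P}_{\\mathbf{W}_i}\\mathbf{H}_i = \\widehat{\\mathbf{H}}_i^{\\prime}\\widehat{\\mathbf{H}}_i$); the simulated data and dimensions are purely illustrative and are not taken from this paper:\n\\begin{lstlisting}\n// Illustrative OLS vs GIVE/2SLS with tiny dense-matrix helpers.\n#include <cmath>\n#include <cstdio>\n#include <random>\n#include <utility>\n#include <vector>\nusing Mat = std::vector<std::vector<double>>;\nusing Vec = std::vector<double>;\n\nMat tmul(const Mat& A, const Mat& B) {            // returns A'B\n    int T = A.size(), k = A[0].size(), n = B[0].size();\n    Mat C(k, Vec(n, 0.0));\n    for (int t = 0; t < T; t++)\n        for (int i = 0; i < k; i++)\n            for (int j = 0; j < n; j++) C[i][j] += A[t][i] * B[t][j];\n    return C;\n}\nMat mul(const Mat& A, const Mat& B) {             // returns AB\n    int T = A.size(), k = A[0].size(), n = B[0].size();\n    Mat C(T, Vec(n, 0.0));\n    for (int t = 0; t < T; t++)\n        for (int i = 0; i < k; i++)\n            for (int j = 0; j < n; j++) C[t][j] += A[t][i] * B[i][j];\n    return C;\n}\nVec solve(Mat M, Vec b) {                         // Gauss-Jordan, partial pivoting\n    int n = M.size();\n    for (int c = 0; c < n; c++) {\n        int p = c;\n        for (int r = c + 1; r < n; r++)\n            if (std::abs(M[r][c]) > std::abs(M[p][c])) p = r;\n        std::swap(M[c], M[p]); std::swap(b[c], b[p]);\n        for (int r = 0; r < n; r++) if (r != c) {\n            double f = M[r][c] / M[c][c];\n            for (int j = c; j < n; j++) M[r][j] -= f * M[c][j];\n            b[r] -= f * b[c];\n        }\n    }\n    for (int i = 0; i < n; i++) b[i] /= M[i][i];\n    return b;\n}\n\nint main() {\n    // Toy DGP: y = 1*x + 2*D + u, with dummy D correlated with u\n    // (endogenous) and z an exogenous instrument for D.\n    std::mt19937 rng(7);\n    std::normal_distribution<double> N01(0.0, 1.0);\n    const int T = 500;\n    Mat H(T, Vec(2)), W(T, Vec(2)), Y(T, Vec(1));\n    for (int t = 0; t < T; t++) {\n        double x = N01(rng), z = N01(rng), u = N01(rng);\n        double D = (z + 0.5 * u > 0) ? 1.0 : 0.0;\n        H[t] = {x, D}; W[t] = {x, z}; Y[t][0] = x + 2.0 * D + u;\n    }\n    // OLS: gamma = (H'H)^{-1} H'y -- the D coefficient is biased here.\n    Mat HtH = tmul(H, H), HtY = tmul(H, Y);\n    Vec gOLS = solve(HtH, {HtY[0][0], HtY[1][0]});\n    // GIVE: first stage F = (W'W)^{-1} W'H, fitted Hhat = W F,\n    // second stage gamma = (Hhat'Hhat)^{-1} Hhat'y.\n    Mat WtW = tmul(W, W), WtH = tmul(W, H), F(2, Vec(2));\n    for (int j = 0; j < 2; j++) {\n        Vec fj = solve(WtW, {WtH[0][j], WtH[1][j]});\n        F[0][j] = fj[0]; F[1][j] = fj[1];\n    }\n    Mat Hh = mul(W, F);\n    Mat A = tmul(Hh, Hh), By = tmul(Hh, Y);\n    Vec gGIVE = solve(A, {By[0][0], By[1][0]});\n    std::printf("OLS:  %.3f %.3f\\nGIVE: %.3f %.3f\\n",\n                gOLS[0], gOLS[1], gGIVE[0], gGIVE[1]);\n}\n\\end{lstlisting}\n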
\n\n%need to be precise here\nIf the parameter values $c_1, c_2$ are known, then the system is linear in parameters even though one of the regressors is a nonlinear function of the endogenous variables from other equations. \\cite{pesaran2007econometric} and \\cite{massacci2007identification} use the country-specific exogenous regressors as instruments for the crisis dummy associated with that country which appears in the other equations. \\cite{metiu2012sovereign}, using an alternative identification strategy to the one presented here, uses lagged terms of the dependent variables for the other countries in the system. \n\nNeither of these choices of instrument is ideal. The optimal choice of $w_{i,t}$ would be the conditional probability of a crisis occurring at time $t$, given the information set $\\boldsymbol{\\Omega}_t = (\\mathbf{F}_t^\\prime, \\mathbf{x}_{i,t}^{\\prime}, \\mathbf{x}_{j_t}^{\\prime})$. As this is a function of unknown parameters $\\boldsymbol{\\gamma}_i$, it is infeasible. The instruments chosen are likely to suffer from a weak instrument problem, in which case the distribution of the GIVE estimator will not be asymptotically normal, meaning standard statistical inference may be misleading. A \\cite{cragg1993testing} statistic is reported to indicate whether this problem is severe, with reference to the critical values given in \\cite{stock2005testing}.\n\n\\subsubsection{Full Information Maximum Likelihood}\n\\label{fiml}\n\n\\cite{massacci2007identification} takes the model in PP, and derives a Full Information Maximum Likelihood (FIML) estimator, treating the crisis dummies as missing data that are generated by latent variables which are a function of other elements of the dataset. Using Monte Carlo simulations, Massacci finds that FIML performs better than GIVE, while imposing fewer requirements on the data by not requiring equation-specific variables to identify the contagion coefficients. Instead, identification is achieved by exploiting the dichotomous outcomes of the indicator function. This means that the identification condition is that at least one observation of $s_{i,t}$ falls either side of the threshold variable $c_i$. \n\nThough this estimator could not be implemented for the system in question, a short exposition drawing on \\cite{massacci2007identification} is presented for reference. \n\nFirst, we impose the assumptions that the elements of $\\boldsymbol{\\Omega}_t$ are ergodic stationary variables and that the errors satisfy $\\mathbf{u}_t~\\sim~\\text{IID}(\\mathbf{0}, \\mathbf{\\Sigma_u})$. \n\nLet $\\boldsymbol{\\theta}_k$ denote the vector of coefficients which characterise the joint distribution of $(s_{1,t}, s_{2,t})^{\\prime}$ in regime $k$ for $k = 1, 2, 3, 4$ and the regimes correspond to those in (\\ref{eqn:regions}):\n\n\\begin{align*}\n\t\\label{eqn:fiml_coefs}\n\t&\\boldsymbol{\\theta}_1 \\equiv (\\boldsymbol{\\phi}^{\\prime}_1, \\boldsymbol{\\delta}^{\\prime}_1,  \\boldsymbol{\\phi}^{\\prime}_2, \\boldsymbol{\\delta}^{\\prime}_2), \t&\\boldsymbol{\\theta}_2 \\equiv ~~~ (\\boldsymbol{\\phi}^{\\prime}_1, \\boldsymbol{\\delta}^{\\prime}_1,\n\t\\beta_1,  \\boldsymbol{\\phi}^{\\prime}_2, \\boldsymbol{\\delta}^{\\prime}_2) \\\\\n\t&\\boldsymbol{\\theta}_3 \\equiv (\\boldsymbol{\\phi}^{\\prime}_1, \\boldsymbol{\\delta}^{\\prime}_1,  \\boldsymbol{\\phi}^{\\prime}_2, \\boldsymbol{\\delta}^{\\prime}_2, \\beta_2), \t&\\boldsymbol{\\theta}_4 \\equiv ~~~ (\\boldsymbol{\\phi}^{\\prime}_1, \\boldsymbol{\\delta}^{\\prime}_1,\n\t\\beta_1,  \\boldsymbol{\\phi}^{\\prime}_2, \\boldsymbol{\\delta}^{\\prime}_2, \\beta_2)\n\\end{align*}\n\nDenoting the joint probability density function as $f(s_{1,t}, s_{2,t}; \\boldsymbol{\\theta}_k | \\boldsymbol{\\Omega}_t)$, then the system in (\\ref{eqn:system}) and (\\ref{eqn:system2}) can be represented:\n\n\\begin{align}\t\\label{eqn:fiml_decomp}\n\tf(s_{1,t}, s_{2,t} | \\boldsymbol{\\Omega}_t) =& [1 - \\mathbf{I}(s_{1,t} - c_1)][1 - \\mathbf{I}(s_{2,t} - c_2)] f(s_{1,t}, s_{2,t}; \\boldsymbol{\\theta}_1 | \\boldsymbol{\\Omega}_t)\\\\\t\n\t&+ [1 - \\mathbf{I}(s_{1,t} - c_1)]\\mathbf{I}(s_{2,t} - c_2)f(s_{1,t}, s_{2,t}; \\boldsymbol{\\theta}_2 | \\boldsymbol{\\Omega}_t)\\\\\n\t&+ \\mathbf{I}(s_{1,t} - c_1)[1 - \\mathbf{I}(s_{2,t} - c_2)]f(s_{1,t}, s_{2,t}; \\boldsymbol{\\theta}_3 | \\boldsymbol{\\Omega}_t)\\\\\n\t&+ \\mathbf{I}(s_{1,t} - c_1)\\mathbf{I}(s_{2,t} - c_2)f(s_{1,t}, s_{2,t}; \\boldsymbol{\\theta}_4 | \\boldsymbol{\\Omega}_t) \\label{eqn:fiml_decomp2}\n\\end{align}\n\n\\cite{massacci2007identification} shows that, thanks to the presence of multiple equilibria, the sum of the above components generates a total probability that exceeds unity. He applies a normalisation factor $\\frac{1}{p_t}$, defined such that the sum of probabilities is equal to one. \n\nApplying this, he uses the $f(s_{1,t}, s_{2,t} | \\boldsymbol{\\Omega}_t)$ decomposed in (\\ref{eqn:fiml_decomp}) - (\\ref{eqn:fiml_decomp2}) to maximise the following log-likelihood function:\n\n\\begin{align}\n\tL_T = \\sum_{t=1}^{T} \\log f(s_{1,t}, s_{2,t} | \\boldsymbol{\\Omega}_t)\n\\end{align}\n\n\\cite{massacci2007identification} provides a proof of the consistency of parameter estimation using this estimator.\n\nThe estimator's major strength when estimating contagion coefficients is that it does not require country-specific explanatory variables in order to be identified. This is especially useful in the face of weak instruments which could render GIVE unable to provide reliable inference. It faces challenges regarding the assumptions it requires about the underlying distribution of the determinants of $D_{j,t}$, which in turn require consistent estimates of $c_i$ and $c_j$. \n\nAdditionally, extending the estimation technique to the multi-country setting is challenging, and estimation of country-pairs does not lend itself to the task of studying contagious systems. 
As a result, no estimations are performed using this estimator, though it would make for fruitful future work.\n\n\\subsection{Crisis Dating}\n\\label{dating_methodology}\n\nA key concern in all the methods described above is the procedure used to determine $D_{j,t}$. There is little consensus on how to solve this problem. \\cite{fry2011actually} survey the literature on five crises since the mid-1990s and find over seventy papers which only rarely agree on precise dating of trigger events. Figure 1 in \\cite{dungey2015endogenous} (DMTY) vividly illustrates the lack of consensus on dating the recent U.S. financial crisis.\n\nThough small disagreements in the dating methodology may not seem a major concern, misspecification of crisis periods will almost certainly lead to inconsistent estimation of the contagion coefficients. If the mistakes are systematic, for example due to sample selection bias, then this can have serious implications for drawing meaningful inference from the model. \n\nThe level of attention paid to this issue varies substantially in the literature. DMTY identify three features commonly used by scholars to date crises: increases in volatility above a threshold; changes in transmission mechanisms between countries; and institutional action.\n\nInstitutional action, such as a devaluation, bailout, or policy rate change, is the most common approach in early studies. This is unsurprising, as it is most closely linked to the popular understanding of crises\\footnote{For instance, discussing the Asian financial crisis, a contemporary BBC article states clearly that ``It was the devaluation of the Thai baht in July that began a chain of currency devaluations across the region.\" This is representative of the mainstream popular view.\n\t\\textit{Source:} \\texttt{http://news.bbc.co.uk/1/hi/special\\_report/1997/asian\\_economic\\_woes/34487.stm}}. However, policy action is almost certainly endogenous to underlying crisis variables, as policy often responds with a lag to developments in the macroeconomy or currency/bond markets rather than causing those developments. As it is the evolution of the underlying variable that is of interest, using such a strategy will likely misspecify crisis periods. This is true both of the start and end of episodes. \n\nInter-country transmission mechanisms are more valid in terms of econometric theory, but do not remain stable over time even when there is no crisis. Innovations or underlying structural changes to cross-border relationships can confound attempts to use the stability of these linkages as an indicator of contagion. Over longer periods of time, this requires special attention and modelling. However, sudden shifts in the mechanisms, such as those we may observe over short time-spans, hint at a move between equilibria rather than long-term trends. This can be used to help with crisis dating.\n\nFinally, volatility tends to increase sharply in crisis periods, and can almost be seen as the definition of a crisis. A range of papers (notably \\cite{eichengreen1996contagious} and \\cite{pesaran2007econometric}), seeking to avoid the sample-selection bias inherent in more subjective approaches, use an indicator function that triggers when the performance measure rises above a certain multiple of its standard deviation. 
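\n\nAs a rough illustration of this rule (the spread series and $\\kappa = 2$ below are invented for the example; this is not the dataset or estimation code used in this paper):\n\\begin{lstlisting}\n// Flag crisis days where the spread exceeds kappa standard deviations.\n// Centering at the sample mean is an assumption of this sketch.\n#include <cmath>\n#include <cstdio>\n#include <vector>\n\nstd::vector<int> crisisDummy(const std::vector<double>& s, double kappa) {\n    int T = s.size();\n    double mean = 0.0, var = 0.0;\n    for (double x : s) mean += x;\n    mean /= T;\n    for (double x : s) var += (x - mean) * (x - mean);\n    double sigma = std::sqrt(var / (T - 1)); // unconditional, full sample\n    std::vector<int> D(T);\n    for (int t = 0; t < T; t++) D[t] = (s[t] - mean > kappa * sigma) ? 1 : 0;\n    return D;\n}\n\nint main() {\n    std::vector<double> spread = {0.9, 1.0, 1.1, 1.0, 1.2, 6.0, 1.1, 1.0};\n    for (int d : crisisDummy(spread, 2.0)) std::printf("%d ", d);\n    // prints: 0 0 0 0 0 1 0 0\n}\n\\end{lstlisting}\n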
This approach attempts to allow the data to determine crisis periods, and reduce researcher discretion in the selection process.\n\nDMTY combine a smooth transition function framework with a structural GARCH model to exploit the latter two factors and endogenously determine the crisis dates. They simultaneously estimate the contagion coefficients using a VAR structure. \\cite{metiu2012sovereign} uses a one-step-ahead Value-at-Risk (VaR) model to denote periods of stress, while \\cite{massacci2007identification} utilises a grid search process. A final notable example of an attempt to solve the identification problem is found in \\cite{brutti2012transmission}, where the authors employ the `narrative approach' of \\cite{romer1989does} to select days on which news specifically relating to Greece was broken during 2009-10. By stripping out days on which potentially confounding news about other countries broke, contagion originating solely in Greece can be identified and the effect of financial linkages, the initial research question, can be estimated. \\cite{arezki2011sovereign} apply a similar approach, using sovereign credit rating news as the narrative indicator. \n\nThe above serves as a demonstration that there is relatively little consensus among scholars as to the optimum strategy for specifying crisis periods. The two most common, and also simplest to implement, are a pseudo-narrative approach, where key events are selected as start- and end-points for the crisis, and a data-driven indicator function method. These are the two approaches used in this paper, with the details described below.\n\n\\underline{Date Scheme A}\n\nAs institutional action is such a popular indicator in the literature, a form of narrative dating approach is presented first. Four Eurozone countries, Greece, Ireland, Spain, and Portugal, have received official bailouts since 2009. Table \\ref{tab:bailout_dates} outlines the dates chosen as the start and end dates of each of the associated crises, along with sources and notes on selection. These notes further emphasise the arbitrary nature of subjectively selecting crisis dates after the fact. For instance, should a bailout crisis be declared over at the reaching of an agreement with international creditors, or after that agreement passes through the national parliament? While elements of this issue could be resolved with a more systematic narrative approach, the subjective compilation of timelines - usually by newspapers - required for such an exercise is likely to still introduce selection bias. \n\nThe a priori specification of the crisis periods means that the dummies can theoretically be treated as predetermined. However, given that narratives about the Eurozone crisis revolve around bond market behaviour, endogeneity is still likely to be a problem - even if the crisis periods are well specified.\n\n\\underline{Date Scheme B}\n\nThe second method exploits the heightened volatility evident in crisis times, and uses the indicator function given in \\cite{pesaran2007econometric}:\n\n\\begin{align}\n\tD_{j, t} \\equiv \\mathbf{I}(y_{j,t} - \\kappa \\sigma_{j})\n\\end{align}\n\nIn Section 6, PP estimate the \\cite{eichengreen1996contagious} dataset using $\\kappa = 2$. It is worth noting that the use of the unconditional standard deviation can again introduce sample selection bias to the procedure. 
For instance, failure to include previous crises in the sample period will lower the estimated unconditional standard deviation and thus increase the frequency of $D_{j,t} = 1$ in future crises. Conversely, inclusion of much older crisis data may generate $D_{j,t} = 0$ observations even when the realisations of $y_{j,t}$ are abnormally high when considering only the contemporary period. \n\nIn this scheme, the PP value of $\\kappa = 2$ is retained, with the standard deviation calculated over the whole sample period. An alternative scheme, using the \\cite{eichengreen1996contagious} value of $\\kappa = 1.5$, is used for robustness checks.\n\n\n\\end{document}", "meta": {"hexsha": "d6f4701532343ce46d92fdce7a0de50ac82fd4d9", "size": 23317, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Document/tex/sections/estimation_methodology.tex", "max_stars_repo_name": "mjgcos/dissertation", "max_stars_repo_head_hexsha": "586537a2a8ac8a0e9444ea1fe6aa17959e555f6a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Document/tex/sections/estimation_methodology.tex", "max_issues_repo_name": "mjgcos/dissertation", "max_issues_repo_head_hexsha": "586537a2a8ac8a0e9444ea1fe6aa17959e555f6a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Document/tex/sections/estimation_methodology.tex", "max_forks_repo_name": "mjgcos/dissertation", "max_forks_repo_head_hexsha": "586537a2a8ac8a0e9444ea1fe6aa17959e555f6a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 106.9587155963, "max_line_length": 1041, "alphanum_fraction": 0.7458935541, "num_tokens": 6663, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311856832191, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5784356646802243}}
{"text": "\\documentstyle[note]{article}\n\\begin{document}\n\\LARGE\n\n\\subsection*{$\\mid\\;\\;?-$ example1.}\nThis problem comes from the London 1978 A level exam.\n\nWe are asked to find the value(s) of for which\n\\[\t\\log_2(x) + 4\\log_x(2) = 5\\]\n\n\nRewriting equation in terms of $log_2(x)$ gives \n\\[\\log_2(x) + 4\\log_2(x)^{-1} = 5\\]\n\nSubstituting $x_1$ for $\\log_2(x)$ gives\n\\[ 4x_1^{-1} + x_1 = 5\\]\n\nMultiply through by $x_1$ to get \n\\[x_1^2 + -5x_1 + 4 = 0\\]\n\nUsing quadratic equation formula. Solutions are $x_1 = 4$ and $x_1 = 1$\n\nApplying substitution $x_1 = \\log_2(x)$ to: $x_1 = 4\\wedge x_1 = 1$ gives: \n\\[    \\log_2(x) = 4 \\wedge \\log_2(x) = 1\\]\n\nSolving disjunct $\\log_2(x) = 4$, gives\n\\[    x = 16\\]\n\n(by Isolation)\n\nSolving disjunct $\\log_2(x) = 1$, gives\n\\[    x = 2\\]\n\n(by Isolation)\n\n\\vspace*{.25in}\nAnswer is : $(X1 \\wedge X2)$; where: \n\\begin{eqnarray*}\nX1 & = & x = 16\\\\\nX2 & = &  x = 2\n\\end{eqnarray*}\n\n\\newpage\n\\subsection*{$\\mid\\;\\; ?-$ example2}\n\nThis problem is from the A.E.B. A level exam of 1971.\n\nWe are required to find the value(s) of x such that\n\\[\t\\cos(x) + 2\\cos(2x) + \\cos(3x) = 0\\]\n\nAngles are in arithmetic progression \n\nAdding in pairs\n\\[2\\cos(2x) + 2\\cos(2x)\\cos(x) = 0\\]\n\n\\[(2 + 2\\cos(x))\\cos(2x) = 0\\]\n\nSolving factor $\\cos(2x) = 0$. Letting $n_1$ denote an arbitrary integer\n\\[x = 45 +\\frac{180n_1}{2}\\]\n\n     (by Isolation)\n\n\nSolving factor $2 + 2\\cos(x) = 0$. Letting $n_2$ denote an arbitrary \ninteger:\n\\[x = 180 + 360n_2 \\wedge x = -180 + 360n_2\\]\n\n   (by Isolation)\n\n\\vspace*{0.25in}\nAnswer is : $(X1 \\wedge (X2 \\wedge X3))$ where :\n\\begin{eqnarray*}\n    X1 & = &  x = 45 + 90n_1\\\\\n    X2 & = & x = 180 + 360n_2\\\\\n    X3 & = & x = -180 + 360n_2\n\\end{eqnarray*}\n\\newpage\n\\subsection*{$\\mid\\;\\; ?-$ example3}\n\nThis problem is from the A.E.B. 1971 A level paper.\n\nThe question asks for the value(s) of $x$ which satisfy\n\\[4^x - 2^{x+1} - 3 = 0\\]\n\nTidying to \n\\[4^x + (-1)2^{1 + x} = 3\\]\n\nRewriting equation in terms of $2^x$ gives \n\\[(2^x)^2 + (-1)(2^{1}2^x) = 3\\]\n\nSubstituting $x_2$ for $2^x$ gives\n\\[-2x_2 + x_2^2 = 3\\]\n\nUsing quadratic equation formula. Solutions are $x_2 = 3$ and $x_2 = -1$.\n\nApplying substitution $x_2 = 2^x$ to: $x_2 = 3 \\wedge x_2 = -1$ gives: \n\\[2^x = 3 \\wedge 2^x = -1\\]\n\nSolving disjunct $2^x = 3$ we have \n\\[x = \\log_2(3)\\]\n\n(by Isolation)\n\nSolving disjunct $2^x = -1$. 
$2^x = -1$ has no real roots, since $2^x$ must be greater than 0.\n\nAnswer is: $X1$, where :\n\\[X1 =  x = \\log_2(3) = 1.584962500721156E+00\\]\n\n\n\\newpage\n\\subsection*{$\\mid\\;\\; ?-$ example4}\n\nThis question demonstrates the basic methods of PRESS.\n\nThe problem is to find the value(s) of $x$ that satisfy\n\\[\\log_e(x+1) + \\log_e(x-1) = 3\\]\n\nTidying to \n\\begin{eqnarray*}\n\\log_e(1+x) + \\log_e(-1+x) & = & 3\\\\\n\\log_e((-1 + x)(1 + x)) & = & 3\\\\\n\\log_e(-1 + x^2) & = & 3\n\\end{eqnarray*}\n\nFinally\n\\[x = (1 + e^3)^{1/2} \\wedge x = (-1)(1 + e^3)^{1/2}\\]\n\n(by Isolation)\n\nAnswer is : $(X1 \\wedge X2)$ where:\n\\begin{eqnarray*}\nX1 & = & x = (1 + e^3)^{1/2} = 4.591899054115592E+00\\\\\nX2 & = & x = (-1)(1 + e^3)^{1/2} = -4.591899054115592E+00\n\\end{eqnarray*}\n\n\\end{document}\n", "meta": {"hexsha": "3df5a11988540cbeb2ba220685fc02ddc74dc5a6", "size": 3214, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pressdir/probs/exam.tex", "max_stars_repo_name": "maths/PRESS", "max_stars_repo_head_hexsha": "026ff5d5e2a572fd05e4313792a177dab6682660", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2016-10-18T12:09:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-24T10:15:59.000Z", "max_issues_repo_path": "pressdir/probs/exam.tex", "max_issues_repo_name": "maths/PRESS", "max_issues_repo_head_hexsha": "026ff5d5e2a572fd05e4313792a177dab6682660", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-11-04T09:44:02.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-04T11:38:06.000Z", "max_forks_repo_path": "pressdir/probs/exam.tex", "max_forks_repo_name": "maths/PRESS", "max_forks_repo_head_hexsha": "026ff5d5e2a572fd05e4313792a177dab6682660", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-04-04T16:58:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-15T19:17:25.000Z", "avg_line_length": 21.5704697987, "max_line_length": 75, "alphanum_fraction": 0.5846297449, "num_tokens": 1429, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5783580124875815}}
{"text": "\\mychapter{18}{Lesson 18} %181128\n\n\\subsubsection{Cramer-Shoup scheme construction}\n\nFrom the above two proof systems we can construct a \\pke{} scheme, which is attributed to Cramer and Shoup:\n\n\\todo{split definition from correctness}\n\n\\begin{itemize}\n    \\item $(\\pk, \\sk) \\pickUAR \\keygen$, where:\n    \\begin{itemize}\n        \\item $\\pk := (h_1, h_2, h_3) = (g_1^{x_1} g_2^{x_2}, g_1^{x_3} g_2^{x_4}, g_1^{x_5} g_2^{x_6})$\n        \\item $\\sk := (x_1, x_2, x_3, x_4, x5, x_6)$\n    \\end{itemize}\n    \\item Encryption procedure:\n    \\begin{itemize}\n        \\item $r \\pickUAR \\integer_q$\n        \\item $\\beta = h_s(c_1, c_2, c_3) = (g_1^r, g_2^r, h_1^r m)$\n        \\item $\\Enc(\\pk, m) = (c_1, c_2, c_3, (h_2 h_3^\\beta)^r)$\n    \\end{itemize}\n    \\item Decryption procedure:\n    \\begin{itemize}\n        \\item Check that $c_1^{x_3 + \\beta x_5} c_2^{x_4 + \\beta x_6} = c_4$. If not, output $\\perp$.\n        \\item Else, output $\\widehat{m} = c_3 c_1^{-x_1} c_2^{-x_2}$\n    \\end{itemize}\n\\end{itemize}\n\n\\todo{All the proofs here...}\n\n\\section{Digital signatures}\n\nIn this section we explore the solutions to the problem of authentication using an asymmetric key method. Some observations are in order:\n\n\\begin{figure}\n    \\centering\n\n    \\begin{tikzpicture}[node distance = 2cm, auto, >=latex']\n\n        \\node (s) {};\n        \\node (a) [box] [right of = s] {Alice};\n        \\node (k) [above of = a, node distance = 1cm] {$sk$};\n        \\node (b) [box] [right of = a, node distance = 5cm] {Bob};\n        \\node (p) [above of = b, node distance = 1cm] {$pk$};\n        \\node (e) [right of = b] {};\n\n        \\path[->] (s) edge node {$m$} (a);\n        \\path[->] (k) edge (a);\n        \\path[->] (a) edge node {$(m, \\sigma)$} (b);\n        \\path[->] (p) edge (b);\n        \\path[->] (b) edge node {$b$} (e);\n\n    \\end{tikzpicture}\n    \\caption{Asymmetric authentication}\n    \\label{fig:digisign}\n\\end{figure}\n\n\\begin{itemize}\n    \\item In a symmetric setting, a verifier routine could be banally implemented as recomputing the signature using the shared secret key and the message. Here, Bob cannot recompute $\\sigma$ as he's missing Alice's secret key (and for good reasons too...). 
Thus, the verifying routine must be defined otherwise;\n    \\item In a vaguely similar manner to how an attacker can encrypt messages on its own in the asymmetric scenario, because the public key is known to everyone, any attacker can verify any signed message, for free.\n\\end{itemize}\n\nNevertheless, security for a \\textsc{ds} scheme is defined largely in the same way as in the symmetric scenario, with the \\textsc{uf-cma} property:\n\n\\begin{cryptogame}{dsufcma}{Unforgeable digital signatures}{uf-cma}\n    \n    \\receive{$(\\pk, \\sk) \\pickUAR \\keygen(1^\\lambda)$}{$\\pk$}{}\n\n    \\cseqdelay\n\n    \\send{$m \\in M$}{$m$}{}\n    \\receive{$\\sigma \\pickUAR \\textsf{\\textup{Sign}}(sk, m)$}{$\\sigma$}{}\n\n    \\cseqdelay\n\n    \\send{$m^* \\notin M$}{$(m^*, \\sigma^*)$}{\\textsc{Output} $\\textsf{\\textup{Verify}}(\\pk, m^*, \\sigma^*)$}\n\n\\end{cryptogame}\n\n\\subsection{Public Key Infrastructure}\n\nThe problem now is that Alice has a public key, but she wants some sort of ``certificate of validity'' for it, so that whenever Bob receives Alice's public key, he can be sure it is the right one by checking such a certificate.\n\nFor certificates to be useful, the parties need a universally trusted third party, called a \\textit{Certification Authority}. It will provide a special \\textit{signature} to Alice for proving her identity to Bob, as exemplified by the sequence in figure \\ref{cryptosequence:certissue}.\n\nWhenever Bob wants to check the validity of Alice's public key, he can query the authority for the certificate, and verify the public key he just received, as shown in figure \\ref{cryptosequence:certcheck}.\n\n\\begin{cryptosequence}\n    {certissue}\n    {}\n\n    \\cseqentity{CA}{CA}\n    \\cseqentity[2.2]{A}{Alice}\n\n    \\cseqmessager{CA}{$(\\pk_C, \\sk_C) \\pickUAR \\keygen$}{$\\pk_C$}{A}{}\n\n    \\cseqdelay\n\n    \\cseqmessagel{A}{\\shortstack[l]{\n        $(\\pk_A, \\sk_A) \\pickUAR \\keygen$ \\\\\n        $\\textsc{id}_A = \\textup{``Alice''} \\| \\pk_A$\n        }}{$\\textsc{id}_A$}{CA}{}\n    \n    \\cseqdelay\n\n    \\cseqmessager{CA}{$\\textsc{cert}_A \\pickUAR \\textsf{\\textup{Sign}}(\\sk_C, \\textsc{id}_A)$}{$\\textsc{cert}_A$}{A}{}\n    \n\\end{cryptosequence}\n\n\\begin{cryptosequence}\n    {certcheck}\n    {}\n\n    \\cseqentity{CA}{CA}\n    \\cseqentity[0.8]{A}{Alice}\n    \\cseqentity[1.4]{B}{Bob}\n\n    \\cseqmessager{CA}{\\shortstack[r]{\n        \\small{$(\\pk_C, \\sk_C) \\pickUAR$} \\\\\n        \\small{$\\pickUAR \\keygen$}\n    }}{$\\pk_C$}{A}{}\n    \\cseqmessager{CA}{}{$\\pk_C$}{B}{}\n\n    \\cseqdelay\n\n    \\cseqmessagel{A}{\\shortstack[l]{\n        $(\\pk_A, \\sk_A) \\pickUAR \\keygen$ \\\\\n        $\\textsc{id}_A = \\textup{``Alice''} \\| \\pk_A$\n    }}{$\\textsc{id}_A$}{CA}{}\n\n    \\cseqdelay\n\n    \\cseqmessager{CA}{\\shortstack[r]{\n        \\small{$\\textsc{cert}_A \\pickUAR$} \\\\\n        \\small{$\\pickUAR \\textsf{\\textup{Sign}}(\\sk_C, \\textsc{id}_A)$}\n    }}{$\\textsc{cert}_A$}{A}{}\n    \n    \\cseqdelay\n\n    \\cseqmessager{A}{}{$(\\textsc{id}_A, \\textsc{cert}_A)$}{B}{\\small{$b = \\textsf{\\textup{Auth}}(\\pk_C, \\textsc{id}_A, \\textsc{cert}_A)$}}\n\n\\end{cryptosequence}\n\nHow can Bob recognize a valid certificate from an expired/invalid one? 
The infrastructure provides some servers which contain the lists of the currently valid certificates, such as $\\textsc{cert}_A$ in the case of Alice.\n\n\\begin{theorem}\n    Signatures are in \\textit{\\textbf{Minicrypt}}.\n\\end{theorem}\n\nThis is a counterintuitive result, not proven during the lesson, but very interesting because it implies that we can build secure signatures from hash functions alone, without relying on public-key encryption at all.\n\n\nUp next:\n\\begin{itemize}\n    \\item Digital Signatures from TDP*\n    \\item Digital Signatures from ID Scheme*\n    \\item Digital Signatures from CDH\n\\end{itemize}\n\nWhere a * appears, the so-called \\textit{Random Oracle Model} is used in the proof. Briefly, this model assumes the existence of an ideal hash function which behaves like a truly random function (it outputs a fresh random $y$ the first time some $x$ is queried, and returns that same $y$ on any repeated query).\n", "meta": {"hexsha": "ae1d382ba561a951be7e048418d134fb78587473", "size": 6182, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lessons/lesson_18.tex", "max_stars_repo_name": "Project2100/Cryptography-2018_19", "max_stars_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-15T09:22:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-15T09:22:45.000Z", "max_issues_repo_path": "lessons/lesson_18.tex", "max_issues_repo_name": "Project2100/cryptography_1819", "max_issues_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-18T15:45:10.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-27T20:36:12.000Z", "max_forks_repo_path": "lessons/lesson_18.tex", "max_forks_repo_name": "Project2100/cryptography_1819", "max_forks_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-07-17T14:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-03T15:23:22.000Z", "avg_line_length": 38.6375, "max_line_length": 312, "alphanum_fraction": 0.6528631511, "num_tokens": 2006, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7520125737597971, "lm_q1q2_score": 0.5783580085073533}}
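To make the Cramer-Shoup construction above concrete, here is a toy Python sketch (ours, not from the lesson). The tiny Schnorr-group parameters and the SHA-256 stand-in for the hash $h_s$ are illustrative assumptions only and give no real security:\n\\begin{verbatim}\nimport hashlib, random\n\nq = 1019              # toy subgroup order (prime); real schemes use ~256-bit q\np = 2 * q + 1         # p = 2039 is prime, so Z_p* has a subgroup of order q\ng1, g2 = 4, 9         # squares mod p, hence generators of that subgroup\n\ndef h_s(c1, c2, c3):  # stand-in hash mapping (c1, c2, c3) to beta in Z_q\n    data = f"{c1},{c2},{c3}".encode()\n    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q\n\ndef keygen():\n    x = [random.randrange(q) for _ in range(6)]            # sk = (x1..x6)\n    h = [pow(g1, x[i], p) * pow(g2, x[i + 1], p) % p for i in (0, 2, 4)]\n    return h, x                                            # pk = (h1, h2, h3)\n\ndef enc(pk, m):\n    h1, h2, h3 = pk\n    r = random.randrange(q)\n    c1, c2, c3 = pow(g1, r, p), pow(g2, r, p), pow(h1, r, p) * m % p\n    beta = h_s(c1, c2, c3)\n    return c1, c2, c3, pow(h2 * pow(h3, beta, p) % p, r, p)\n\ndef dec(sk, c):\n    c1, c2, c3, c4 = c\n    beta = h_s(c1, c2, c3)\n    ok = (pow(c1, (sk[2] + beta * sk[4]) % q, p)\n          * pow(c2, (sk[3] + beta * sk[5]) % q, p)) % p\n    if ok != c4:\n        return None                                        # reject: "bottom"\n    return c3 * pow(c1, q - sk[0], p) * pow(c2, q - sk[1], p) % p\n\npk, sk = keygen()\nassert dec(sk, enc(pk, 16)) == 16   # 16 = 4^2 lies in the order-q subgroup\n\\end{verbatim}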
{"text": "\\input{../../style/preamble}\n\\input{../../latex-math/basic-math}\n\\input{../../latex-math/basic-ml}\n\n\\newcommand{\\titlefigure}{figure_man/biasvariance_scheme.png}\n\\newcommand{\\learninggoals}{\n  \\item Understand why overfitting happens\n  \\item Know how overfitting can be avoided\n  \\item Know regularized empirical risk minimization \n}\n\n\\title{Introduction to Machine Learning}\n\\date{}\n\n\\begin{document}\n\n\\lecturechapter{Introduction to Regularization}\n\\lecture{Introduction to Machine Learning}\n\n\\section{Motivation for Regularization}\n\n\n\\begin{vbframe}{Example: Overfitting}\n\n\\begin{itemize}\n\\item Assume we want to predict the daily maximum \\textbf{ozone level} in LA given a data set containing $50$ observations.\n\\item The data set contains $12$ features describing time conditions (e.g., weekday, month),\nthe weather (e.g., temperature at different weather stations, humidity, wind speed) or geographic variables (e.g., the pressure gradient).\n\\item We fit a linear regression model using \\textbf{all} of the features\n\n$$\n\\fxt = \\thetab^T\\xv = \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2 + ... + \\theta_{12} x_{12}\n$$\n\nwith the $L2$ loss.\n\n\\item We evaluate the performance with $10$ times $10$-fold CV.\n\n\\end{itemize}\n\n\\vfill\n\n\\begin{footnotesize} \nWe use (a subset of) the \\texttt{Ozone} data set from the \\texttt{mlbench} package. This way, we artificially create a \\enquote{high-dimensional} dataset by reducing the number of observations drastically while keeping the number of features fixed. \n\\end{footnotesize}\n\n\\framebreak \n\n\nWhile our model fits the training data almost perfectly (left), it generalizes poorly\nto new test data (right). We overfitted.\n\n\\lz \n\n\\begin{figure}\n\\includegraphics[width=0.8\\textwidth]{figure_man/example01.png}\\\\\n\\end{figure}\n\n\\end{vbframe}\n\n\\begin{vbframe}{Avoid Overfitting} \n\nWhy can \\textbf{overfitting} happen? And how to avoid it?\n\n\\begin{enumerate}\n\\item Not enough data \\\\\n$\\to$ collect \\textbf{more data} \n\\item Data is noisy \\\\\n$\\to$ collect \\textbf{better data} (reduce noise) \n\\item Models are too complex \\\\\n$\\to$ use \\textbf{less complex models}\n\\item Aggressive loss optimization \\\\\n$\\to$ \\textbf{optimize less}\n\\end{enumerate}\n\n\n\\framebreak \n\n\\textbf{Approach 1: Collect more data}\n\n\\lz \n\nWe explore our results for increased dataset size by $10$ times $10$-fold CV.\nThe fit worsens slightly, but the test error decreases.\n\n\\begin{figure}\n\\includegraphics[width=0.7\\textwidth]{figure_man/avoid-overfitting01.png}\\\\\n\\end{figure}\n\nGood idea, but often not feasible in practice.\n\n\\framebreak\n\n\\textbf{Approach 3: Reduce complexity}\n\n\\lz \n\nWe try the simplest model we can think of: the constant model. For the $L2$ loss, the optimal constant model is\n\n$$\n\\fxt = \\frac{1}{n}\\sumin \\yi\n$$\n\nWe then increase the complexity of the model step-by-step by adding one feature at a time.\n\n\\framebreak \n\nWe can control the complexity of the model by including/excluding features.\nWe can try out all feature combinations and investigate the model fit.\n\n\n\\begin{figure}\n\\includegraphics[width=0.7\\textwidth]{figure_man/avoid-overfitting02.png}\\\\\n\\end{figure}\n\n\\vfill\n\n\\begin{footnotesize}\nNote: For simplicity, we added the features in one specific (clever) order, so we cheated a bit. 
Also note there are $2^{12} = 4096$ potential feature combinations.\n\\end{footnotesize}\n\n\\framebreak\n\n\\textbf{Approach 4: Optimize less}\n\n\\lz \n\nNow we use polynomial regression with temperature as the only feature to predict the ozone level, i.e.,\n\n$$\\fxt = \\sum^{d}_{i=0} \\theta_i (x_T)^{i} .$$\nWe choose $d = 15$, which gives a very flexible model that can be prone to overfitting on small data sets. \\\\\n\\medskip\nIn this example, we don't solve for $\\hat\\theta$ directly; instead, we use the gradient descent algorithm to approach $\\hat\\theta$ stepwise.\n\n\\framebreak\n\nWe want to stop the optimization early when the generalization error starts to degrade.\n\n\n\\begin{figure}\n\\includegraphics[width=0.7\\textwidth]{figure_man/mean-squ-error.png}\\\\\n\\end{figure}\n\n\\footnotesize{Note: For polynomial regression, gradient descent usually needs many iterations before it starts to overfit. Hence a very small training set was chosen to accelerate this effect.}\n\n\\framebreak \n\nWe face two contradictory goals:\n\n\\begin{itemize}\n\\item \\textbf{maximizing the fit} (minimizing the train loss)\n\\item \\textbf{minimizing the complexity} of the model.\n\\end{itemize}\n\nWe need to find the \\enquote{sweet spot}.\n\n\\begin{center}\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{figure_man/complexity-vs-fit.png}\n\\end{figure}\n\\end{center}\n\n\\framebreak \n\nUntil now, we could only include a feature entirely or not at all.\n\n\\lz \n\nInstead of controlling the complexity in a discrete way by specifying the number of features,\nwe might prefer to control the complexity \\textbf{on a continuum} from simple to complex.\n\n\\begin{center}\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{figure_man/complexity-vs-fit-continuous.png}\n\\end{figure}\n\\end{center}\n\n\\end{vbframe}\n\n\\section{Regularized Empirical Risk Minimization}\n\n\\begin{vbframe}{Regularized Empirical Risk Minimization}\n\n  \nRecall, empirical risk minimization with a complex hypothesis set tends to overfit. A major tool to handle overfitting is \\textbf{regularization}.\n  \n  \\lz\n  \nIn the broadest sense, regularization refers to any modification made to a learning algorithm that is intended to reduce its generalization error but not its training error.\n  \n  \\lz\n  \nExplicitly or implicitly, such modifications represent the preferences we have regarding the elements of the hypothesis set. \n\n  \\framebreak\n  \n  Commonly, regularization takes the following form:\n  \n  $$\n  \\riskrf = \\riskef + \\lambda \\cdot J(f) = \\sumin \\Lxyi + \\lambda \\cdot J(f)\n  $$\n\\begin{itemize}\n\n  \\item $J(f)$ is called \\textbf{complexity penalty}, \\textbf{roughness penalty} or \\textbf{regularizer}.\n  \\item $J(f)$ measures the \\enquote{complexity} of a model and penalizes it in the fit.\n  \\item $\\lambda > 0$ is called \\textbf{complexity control} parameter. \n  \\item As for $\\riske$, often $\\riskr$ and $J$ are defined on $\\thetab$ instead of $f$, so $\\riskrt = \\risket + \\lambda \\cdot J(\\thetab)$. \n\\end{itemize}\n\n\\framebreak\n\n\\textbf{Remarks:}\n\n\\begin{itemize}\n  \\item Note that we now face an optimization problem with two criteria: \n    \\begin{enumerate}\n      \\item models should fit well (low empirical risk),\n      \\item but not be too complex (low $J(f)$). 
\n    \\end{enumerate}\n  \\item We decide to combine the two in a weighted sum and to control\n  the trade-off via the complexity control parameter $\\lambda$.\n  \\item $\\lambda$ is hard to set manually and is usually selected via cross-validation (see later).\n  \\item $\\lambda = 0$: The regularized risk $\\riskrf$ reduces to the plain empirical risk $\\riskef$.\n\n  \\item If $\\lambda$ goes to infinity, we stop caring about the loss/fit and models become as \\enquote{simple} as possible.\n\\end{itemize}\n\n\\framebreak\n\n\n\\center\n\\vspace*{0.5cm}\n\\includegraphics[width=0.6\\textwidth]{figure_man/biasvariance_scheme.png} \\\\\n\\footnotesize{Hastie, The Elements of Statistical Learning, 2009 (p. 225)}\n\n\n\\end{vbframe}\n\n\n\n\\endlecture\n\\end{document}\n", "meta": {"hexsha": "c742bd845c39ca967abf74971de6eed4e359016d", "size": 7133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/regularization/slides-regu-intro.tex", "max_stars_repo_name": "jukaje/lecture_i2ml", "max_stars_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_issues_repo_path": "slides/regularization/slides-regu-intro.tex", "max_issues_repo_name": "jukaje/lecture_i2ml", "max_issues_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 323, "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_forks_repo_path": "slides/regularization/slides-regu-intro.tex", "max_forks_repo_name": "jukaje/lecture_i2ml", "max_forks_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "avg_line_length": 29.353909465, "max_line_length": 249, "alphanum_fraction": 0.7479321464, "num_tokens": 1933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7690802317779601, "lm_q1q2_score": 0.5783580045271252}}
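As a concrete illustration of regularized empirical risk minimization (ours, not part of the slides; all data and hyperparameters are made up), a minimal numpy sketch with an $L2$ penalty, minimized by gradient descent:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, p = 50, 12                    # few observations, many features\nX = rng.normal(size=(n, p))\ntheta_true = np.zeros(p)\ntheta_true[:3] = [2.0, -1.0, 0.5]\ny = X @ theta_true + rng.normal(scale=0.5, size=n)\n\ndef fit(lam, lr=0.01, steps=5000):\n    # minimize (1/n) * sum (y - X theta)^2 + lam * ||theta||^2\n    theta = np.zeros(p)\n    for _ in range(steps):\n        grad = -2 / n * X.T @ (y - X @ theta) + 2 * lam * theta\n        theta -= lr * grad\n    return theta\n\nfor lam in [0.0, 0.1, 10.0]:\n    theta = fit(lam)\n    print(lam, round(float(np.linalg.norm(theta)), 3))\n# The norm of theta shrinks as lambda grows: the model is pushed from\n# maximal fit toward maximal 'simplicity', on a continuum.\n\\end{verbatim}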
{"text": "\\section{Introduction}\n\n\n\\subsubsection{Background}\nSystems such as neurons can have different types of input. The authors\\supercite{Smyth:1996} propose dividing the input to a neuron into receptive (or driving) and contextual. They further present formulations of how contextual connections can modulate driving input. Finally, they use information theory to compare and contrast contextual modulation in these different formulations.\n\n\\subsubsection{Motivation}\nThis work was published in 1996, but with the advances in computing and information theory, and increased emphasis on the importance of context\\supercite{Larkum:2013}, new studies\\supercite{Kay:2011,Wibral:2017} have started building on its foundations. Furthermore, there are many ways to write the activation functions and only a few simple formulations were analyzed. Therefore, the significance of this work is twofold: 1. To replicate the study in a modern programming language and make it publicly accessible.   2. Enable new activation functions to be readily tested.\n\n\\subsubsection{Replication}\nA successful attempt is made to qualitatively and quantitatively replicate the results of the original paper by replicating the three figures that comprise the results. While in all cases the results are qualitatively identical, there are minor quantitative differences in some figures which could in some cases be attributed to the use of a different seed value or the differences between how small or large values are handled in this python implementation and the original authors' implementation in an old programming environment about three decades ago. \n\n\\section{Methods}\n\n\\subsubsection{Input}\nThe study is limited to information transmission from a neuron which has only one receptive (R) or contextual (C) connection as input and produces one output (X). There is input at every time step, +1 denotes firing and -1 being silent, which is then multiplied by the weight of their connection. Probability distributions were created in the same way as original authors as described below, where for all figures P(C = 1 | R = 1) was set to 0.889972. This is so that the three components had all the same maximum possible value (0.5 bits). This makes it easier to compare the absolute values. \n\n\\begin{equation}\nP(R = +1, C = +1) = 0.5P(C = 1 | R = 1)\n\\end{equation}\n\n\\begin{equation}\nP(R =+1, C = -1) = 0.5[1 - P(C = 1 | R = 1)]\n\\end{equation}\n\n\\begin{equation}\nP(R = -1, C = +1) = 0.5[1 - P(C = 1 | R = 1)]\n\\end{equation}\n\n\\begin{equation}\nP(R = -1, C = -1) = 0.5P(C = 1 | R = 1).\n\\end{equation}\n\n\n\\subsubsection{Activation Functions}\nThree activation functions are studied by the authors. Based on how receptive and contextual information are formulated to interact, they are called A\\textsubscript{a} (additive), A\\textsubscript{m} (modulatory), and A\\textsubscript{b} (both additive and modulatory). \n\n\\begin{equation}\nA_a(r,c) = r + c \n\\end{equation}\n\n\\begin{equation}\nA_m(r,c) = 0.5r[1 + e^{rc}] \n\\end{equation}\n\n\\begin{equation}\nA_b(r,c) = 0.5r[1 + e^{rc}] + c  \n\\end{equation}\n\n\nWhere r is an instance of  R, and c an instance of C, multiplied by the weight of the connection. 
The output of each activation function is passed through the sigmoid function to generate a probability of firing:\n\n\\begin{equation}\nP(X=1 | r,c) = \\frac{1}{1 + e^{-A(r,c)}} \n\\end{equation}\n\n\n\\subsubsection{Information Theory}\nThe authors use the three-way mutual information as a measure of the information shared between the output and the inputs, which is defined as follows:\n\n\\begin{equation}\nI(X;R;C) = \\sum_{x,r,c}P(x,r,c)\\log\\frac{P(x|r)P(x|c)}{P(x)P(x|r,c)}\n\\end{equation}\n\nI(X;R|C) is used by the authors as a measure of information that is shared between the R input and the X output but not the C input, and is defined as follows: \n\n\\begin{equation}\nI(X;R|C) = \\sum_{x,r,c}P(x,r,c)\\log\\frac{P(x|r,c)}{P(x|c)}\n\\end{equation}\n\nI(X;C|R) is used by the authors as a measure of information that is shared between the C input and the X output but not the R input, and is defined as follows:\n\n\\begin{equation}\nI(X;C|R) = \\sum_{x,r,c}P(x,r,c)\\log\\frac{P(x|r,c)}{P(x|r)}\n\\end{equation}\n\n\n\\subsubsection{Dependencies} The replication was done using Ubuntu 19.10 on an Intel\u00ae Core\u2122 i7-7500U CPU, with Anaconda Python-3.8.1, numpy-1.18.1, scipy-1.4.1, and matplotlib-3.1.3. No external libraries are required for calculating the information-theoretic terms, as everything is provided in the accompanying code.\n\n\\section{Results}\n\n\n\\subsubsection{Activation functions and transmitted information}\n\nFigure 1 shows the three-way mutual information and conditional mutual information terms for each of the activation functions and across a range of R input weights while C remains constant at 1. These results are produced in a similar way to the paper by using the exact probability distributions. They are qualitatively identical to the original paper. There are minor quantitative differences. For example, in (c) the maximum value of the $A_b$ function is higher in the original paper. In order to rule out the possibility of missing the higher maximum value as a result of not having that particular data point, a small step size of 0.1 was used, which leads to more data points in this figure than in the original publication. \n\n\\begin{figure}[H]\n    \\includegraphics[width=\\textwidth]{figure_1.png}\n      \\caption{Replication of the comparison of the activation functions for (a) I(X;R;C), (b) I(X;R|C) and (c) I(X;C|R). The activation functions are zero context (dashed), additive context (squares), modulatory context (circles), and both additive and modulatory combined (dotted).}\n\\end{figure}\n\n\\subsubsection{Information surface plots for different input weights}\n\nFigure 2 shows I(X;C|R) for each of the activation functions and across a range of r and c input weights. These results are produced in a similar way to the paper by using the exact probability distributions. They are qualitatively identical to the original paper. There are no notable quantitative differences. \n\n\n\\begin{figure}[H]\n    \\begin{center}\n        \\includegraphics[width=12cm,height=15cm]{figure_2.png}\n    \\end{center}\n      \\caption{Replication of the comparison of the information surfaces of I(X;C|R) for (a) additive, (b) modulatory and (c) both over all R and C input weights. 
The conditional probability P(C = 1 | R = 1) = 0.889972.}\n\\end{figure}\n\n\\subsubsection{Sampling biases and variances}\n\nFigure 3 compares the mean and standard deviation of three-way measures sampled against the analytical measures for the modulatory activation function. The mean and deviations were computed from 100 trials per sample size and compared with the analytical measure across a range of sample sizes. As the authors claim, a sample size of 200 could be sufficient, as the improvements beyond that point are small. The error for small sample sizes is higher in the original paper; this can be attributed to the use of a different seed value and to their programming environment.\n\n\n\\begin{figure}[H]\n    \\includegraphics[width=\\textwidth]{figure_3.png}\n      \\caption{Replication of comparison of the analytical (dotted) and sampled (error bars) information measures for (a) I(X;R;C), (b) I(X;R|C) and (c) I(X;C|R). The error bars represent standard deviation about the mean information computed from 100 trials for each sample size with the modulatory activation function. The R and C input weights were set to 2.}\n\\end{figure}\n\n\\section{Discussion}\n\nThe goal of the authors was to study contextual modulation in single binary neurons using information theory. To that end, they present figures 1-2 as a way to compare and contrast functions with or without forms of contextual modulation. They use figure 3 to suggest how much data is needed to reach the required accuracy for the analyses presented in figures 1-2. \\newline\n\nOne example of how these analyses can be used to study contextual modulation follows: For the modulatory activation function, it is intended that the role of R is to drive the neuron to fire and C is to amplify the firing. Figure 2 (b) shows that increasing C alone does not increase I(X;C|R), but increasing R and C together does, and a higher C with a constant R amplifies it. At low R values, C plays a larger role in the output, so I(X;C|R) begins to diminish at higher R values. This shows that the modulatory function is behaving as intended and expected. \\newline\n\nIn conclusion, this manuscript and the accompanying code present a successful replication of the authors' work in a modern programming language. Similarly, the code and information-theoretic analyses here could be used to study other activation functions and contextual modulation.\n\n\\subsubsection{Acknowledgments}\n\nI would like to thank Bill Phillips and Jim Kay for clarifying my questions about the input. 
This research was supported by the funding initiative Nieders\u00e4chsisches Vorab of the Volkswagen Foundation and the Ministry of Science and Culture of Lower Saxony.\n", "meta": {"hexsha": "83c00c4786c1b00b1859fa2232db9e1f0ed52d98", "size": 9042, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article/content.tex", "max_stars_repo_name": "oliviaguest/mahmoudian-2020-rescience", "max_stars_repo_head_hexsha": "506681ab99f3381bbcd2c9939bceec3b355f1fcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "article/content.tex", "max_issues_repo_name": "oliviaguest/mahmoudian-2020-rescience", "max_issues_repo_head_hexsha": "506681ab99f3381bbcd2c9939bceec3b355f1fcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article/content.tex", "max_forks_repo_name": "oliviaguest/mahmoudian-2020-rescience", "max_forks_repo_head_hexsha": "506681ab99f3381bbcd2c9939bceec3b355f1fcf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.336, "max_line_length": 765, "alphanum_fraction": 0.7723954877, "num_tokens": 2219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7718435083355187, "lm_q1q2_score": 0.5781781026197477}}
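As a pointer to how the measures above can be evaluated, here is a small numpy sketch (ours, not the accompanying code) that computes equations (9)-(11) for the modulatory activation function from the exact joint distribution, with both input weights set to 2 as in figure 3:\n\\begin{verbatim}\nimport numpy as np\n\npc1r1 = 0.889972\nw_r = w_c = 2.0                            # R and C input weights\nvals = [1, -1]\n\nP = np.zeros((2, 2, 2))                    # joint P(x, r, c)\nfor ri, R in enumerate(vals):\n    for ci, C in enumerate(vals):\n        prc = 0.5 * pc1r1 if R == C else 0.5 * (1 - pc1r1)    # eqs (1)-(4)\n        a = 0.5 * w_r * R * (1 + np.exp(w_r * R * w_c * C))   # A_m, eq (6)\n        pf = 1 / (1 + np.exp(-a))                             # eq (8)\n        P[0, ri, ci], P[1, ri, ci] = prc * pf, prc * (1 - pf)\n\nPx = P.sum(axis=(1, 2), keepdims=True)\nPr = P.sum(axis=(0, 2), keepdims=True)\nPc = P.sum(axis=(0, 1), keepdims=True)\nPx_r  = P.sum(axis=2, keepdims=True) / Pr   # P(x | r)\nPx_c  = P.sum(axis=1, keepdims=True) / Pc   # P(x | c)\nPx_rc = P / P.sum(axis=0, keepdims=True)    # P(x | r, c)\n\nprint(np.sum(P * np.log2(Px_r * Px_c / (Px * Px_rc))))   # I(X;R;C),  eq (9)\nprint(np.sum(P * np.log2(Px_rc / Px_c)))                 # I(X;R|C), eq (10)\nprint(np.sum(P * np.log2(Px_rc / Px_r)))                 # I(X;C|R), eq (11)\n\\end{verbatim}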
{"text": "% !TEX root = ../../main.tex\n\\subsection{Kolmogorov backward equation}\n\nSo far with the Fokker-Planck equation we have been able to ask questions about\nthe entire distribution of allele frequencies accounting for the interplay of\ndifferent evolutionary forces simultaneously. Another interesting set of\nquestions has to do with asking what is the probability of the population\nreaching a specific subset of states in a time $s > t$ given the current state\nof the population $f(t)$. To be more specific we can ask what is the\nprobability of fixing an allele in a population as a function of its selective\nadvantage. For this kind of questions we use the so-called Kolmogorov-backward\nequation. The name comes from the fact that compared to the forward equation,\ni.e. the Fokker-Planck equation we derived in the previous section, we need to\nintegrate backwards in time.\n\nThe main difference between the forward and the backward equation has to do\nwith the emphasis on either the initial condition or the current state. As we\nshowed in \\secref{sec_master_eq} the master equation is a statement about the\ntime evolution of the transition probability $P(f, t \\mid f', t')$ for $t' \\leq\nt$. Let us set $f' \\equiv f_o$, $t' \\equiv t_o$ to be the initial conditions of\nthe random walker in allele frequency space. We then ask about the probability\nof being at frequency $f$ at time $t$, i.e. $P(f, t + \\Dt \\mid f_o, t_o)$. This\nprobability is related to all of the possible paths that the variable $f$ can\ntake from the initial value $f_o$ at time $t_o$ to the value $f$ as shown\nschematically in \\fref{fig_03_05_01}.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics[width=0.5\\textwidth]\n  {../../fig/classic_diffusion/03_05_01_schematic_reverse.png}\n\t\\caption{\\textbf{Schematic depiction of paths from $f_o$ to $f$}. The\n\tdiagram shows schematically different paths that the allele frequency can\n\ttake from an initial condition $f_o$ to a final point $f$ over a time $t$.\n\tThe time is arbitrarily split into two intervals $[t_o, t_o + \\Dt]$ shaded\n\tin blue and $[t_o + \\Dt, t]$ shaded in green.}\n  \\label{fig_03_05_01}\n\\end{figure}\n\n\\subsubsection{Derivation of the Kolmogorov backward equation}\n\nTo derive the backward equation we can arbitrarily split the time interval in\ntwo intervals $[t_o, t_o + \\Dt]$ and $[t_o + \\Dt, t]$. With this we can then\nwrite down the Chapman-Kolmogorov equation that we stated in\n\\secref{sec_chapman_kolmogorov} as\n\\begin{equation}\n  P(f, t \\mid f_o, t_o) = \\int_{-f_o}^{1 - f_o} dr \\;\n  P(f, t \\mid f_o + r, t_o + \\Dt)\n  P(f_o + r, t_o + \\Dt \\mid f_o, t_o),\n\\end{equation}\nwhere the limits of the integral on the right hand side were set such that the\nquantity $f_o + r$ could cover the entire domain $[0, 1]$. 
For $\\Dt$ very\nsmall, specifically $\\Dt \\ll |t - t_o|$, we can write down the transition matrix\nas we did for \\eref{eq_transition_short_time}, obtaining\n\\begin{equation}\n\tP(f_o + r, t_o + \\Dt \\mid f_o, t_o) =\n\t\\delta(r) \\left[\n\t1 - a^{(0)}(f_o, t_o) \\Dt\n\t\\right] +\n\t\\phi_{t_o}(f_o; r) \\Dt,\n\\end{equation}\nwhere $\\phi_{t_o}(f_o; r) \\Dt$ is the probability of taking a jump of size $r$\nin a time window $\\Dt$ given the initial position $f_o$. In this case the\n$\\delta$-function $\\delta(r)$ is nonzero only when the jump $r$ is zero.\nNotice that compared to the derivation of the forward equation where we first\nwrote the corresponding master equation, here we start directly from the\ntransition distribution as a function of the jump size $r$. Substituting this\ninto the Chapman-Kolmogorov equation gives\n\\begin{equation}\n\tP(f, t \\mid f_o, t_o) = \\int_{-f_o}^{1 - f_o} dr\\;\n\tP(f, t \\mid f_o + r, t_o + \\Dt)\n\t\\left[\n\t\\delta(r) \\left( 1 - a^{(0)}(f_o, t_o) \\Dt \\right) +\n\t\\phi_{t_o}(f_o; r) \\Dt\n\t\\right].\n\\end{equation}\nIf we distribute the integral and evaluate it over the delta function we obtain\n\\begin{equation}\n\t\\begin{split}\n\t\tP(f, t \\mid f_o, t_o) = P(f, t \\mid f_o, t_o + \\Dt)\n\t\t\\left[\n\t\t1 - a^{(0)}(f_o, t_o)\\Dt\n\t\t\\right]\\\\ +\n\t\t\\int_{-f_o}^{1 - f_o} dr\\; P(f, t \\mid f_o + r, t_o + \\Dt)\n\t\t\\phi_{t_o}(f_o; r) \\Dt.\n\t\\end{split}\n\t\\label{eq_chap_kol_delta_eval_backwards}\n\\end{equation}\nIn \\eref{eq_a0} we showed that $a^{(0)}(f_o, t_o)\\Dt$ is the probability of\ntransitioning from $f_o$ to anywhere else. This is equivalent to writing\n\\begin{equation}\n\ta^{(0)}(f_o, t_o) \\Dt \\equiv \\int_{-f_o}^{1 - f_o} dr \\;\n\t\\phi_{t_o}(f_o; r) \\Dt.\n\\end{equation}\nSubstituting this result into \\eref{eq_chap_kol_delta_eval_backwards} and\nrearranging terms gives\n\\begin{equation}\n\t\\begin{split}\n\t{P(f, t \\mid f_o, t_o) - P(f, t \\mid f_o, t_o + \\Dt) \\over \\Dt} =\n\t- P(f, t \\mid f_o, t_o + \\Dt) \\int_{-f_o}^{1 - f_o} dr \\;\n\t\\phi_{t_o}(f_o; r)\\\\\n\t+ \\int_{-f_o}^{1 - f_o} dr \\;\n\tP(f, t \\mid f_o + r, t_o + \\Dt) \\phi_{t_o}(f_o; r).\n\t\\end{split}\n\t\\label{eq_almost_kolmogorov_backwards}\n\\end{equation}\nIf we assume our stochastic process is stationary (see\n\\secref{sec_stationary_process}), a perfectly reasonable assumption for a\nfixed fitness landscape, it follows that\n\\begin{equation}\n\t P(f, t + \\tau \\mid f_o, t_o + \\tau) = P(f, t \\mid f_o, t_o),\n\\end{equation}\nfor any $\\tau$. In other words, for our stationary stochastic process what\nmatters is the time interval between events rather than the absolute\ntime. 
Using this we can then write the first term on the left-hand side of\n\\eref{eq_almost_kolmogorov_backwards} as\n\\begin{equation}\n\tP(f, t \\mid f_o, t_o) = P(f, t + \\Dt \\mid f_o, t_o + \\Dt).\n\\end{equation}\nUsing this time shift and taking the limit when $\\Dt \\rightarrow 0$ gives for\nthe left-hand side of \\eref{eq_almost_kolmogorov_backwards}\n\\begin{equation}\n\t\\lim_{\\Dt \\rightarrow 0} {P(f, t + \\Dt \\mid f_o, t_o + \\Dt) -\n\tP(f, t \\mid f_o, t_o + \\Dt) \\over \\Dt} =\n\t\\ddt{P(f, t \\mid f_o, t_o)}.\n\\end{equation}\nUsing this for \\eref{eq_almost_kolmogorov_backwards} results in\n\\begin{equation}\n\t\\ddt{P(f, t \\mid f_o, t_o)} =\n\t- P(f, t \\mid f_o, t_o) \\int_{- f_o}^{1 - f_o} dr \\; \\phi_{t_o}(f_o; r)\n\t+ \\int_{- f_o}^{1 - f_o} dr \\; P(f, t \\mid f_o + r, t_o) \\phi_{t_o}(f_o; r).\n\\end{equation}\nNotice we already took the limit $\\Dt \\rightarrow 0$ on both sides. We now\nexpand the second term on the right-hand side to obtain an analogous\nKramers-Moyal expansion for the Kolmogorov backward equation. The expansion is\nanalogous even though here we are not expanding a master equation as we did for\nthe forward equation. Also, in this case we expand around the initial condition\n$f_o$ rather than the final position $f$. Since the transition probability\n$\\phi_{t_o}(f_o; r)$ does not contain the term $f_o + r$, it does not take part\nin this expansion. Therefore we obtain\n\\begin{equation}\n\t\\ddt{P(f, t \\mid f_o, t_o)} =\n\t- P(f, t \\mid f_o, t_o) \\int_{- f_o}^{1 - f_o} dr \\; \\phi_{t_o}(f_o; r)\n\t+ \\int_{-f_o}^{1 - f_o} dr\\; \\sum_{k=0}^{\\infty} {1 \\over k!}\n\t{\\partial^k \\over \\partial f_o^k} P(f, t \\mid f_o, t_o) r^k\n\t\\phi_{t_o}(f_o; r).\n\\end{equation}\nFor this expansion we do not have issues with the integration limits. This\nagain happens because we are not expanding the master equation as we did for\nthe forward equation. The zero$\\tth$ order term in the expansion cancels the\nfirst term on the right-hand side, and we obtain\n\\begin{equation}\n\t\\ddt{P(f, t \\mid f_o, t_o)} =\n\t\\sum_{k = 1}^{\\infty} {1 \\over k!} {\\partial^k \\over \\partial f_o^k}\n\tP(f, t \\mid f_o, t_o) \\int_{-f_o}^{1 - f_o} dr\\; \\phi_{t_o}(f_o; r) r^k.\n\\end{equation}\nWe can again define the moments of the jump distribution $\\phi_{t_o}(f_o; r)$\nas\n\\begin{equation}\n\ta^{(k)}(f_o, t_o) \\equiv \\int_{-f_o}^{1 - f_o} dr \\;\n\tr^k \\phi_{t_o}(f_o; r).\n\\end{equation}\nUsing this we obtain the expansion for the Kolmogorov backward equation\n\\begin{equation}\n\t\\ddt{P(f, t \\mid f_o, t_o)} =\n\t\\sum_{k = 1}^{\\infty} {1 \\over k!} {\\partial^k \\over \\partial f_o^k}\n\tP(f, t \\mid f_o, t_o) a^{(k)}(f_o, t_o).\n\\end{equation}\nJust as with the forward equation we can truncate the expansion up to second\norder. 
Furthermore, we define the first moment to be the mean jump size\n\\begin{equation}\n\tM(f_o) = \\int_{-f_o}^{1 - f_o} dr \\; r \\phi_{t_o}(f_o; r).\n\\end{equation}\nFor small $M(f_o)$ we approximate the variance by the second moment\n\\begin{equation}\n\tV(f_o) = \\int_{-f_o}^{1 - f_o} dr \\;\n\t(r - M(f_o))^2 \\phi_{t_o}(f_o; r) \\approx\n\t\\int_{-f_o}^{1 - f_o} dr \\; r^2 \\phi_{t_o}(f_o; r).\n\\end{equation}\nGiven these definitions we obtain the final form of the Kolmogorov backward\nequation\n\\begin{equation}\n\t\\ddt{P(f, t \\mid f_o, t_o)} =\n\tM(f_o) {\\partial \\over \\partial f_o}P(f, t \\mid f_o, t_o) +\n\t{V(f_o) \\over 2} {\\partial^2 \\over \\partial f_o^2} P(f, t \\mid f_o, t_o).\n\t\\label{eq_kolmogorov_backward}\n\\end{equation}\nThis equation differs from the forward equation in several points:\n\\begin{itemize}\n\t\\item The backward equation uses the conditional distribution with respect\n\tto the initial condition $P(f, t \\mid f_o, t_o)$.\n\t\\item The partial derivatives with respect to frequency are taken with\n\trespect to the initial condition $f_o$ as well.\n\t\\item The jump moments are outside of the partial derivatives just as in a\n\tregular diffusion equation.\n\t\\item There is no minus sign for the directional term.\n\\end{itemize}\nAs we will see in the following section the forward and backward equations\ncomplement each other, allowing us to tackle different questions. In particular\nthe backward equation can address the very interesting question of the\nprobability that an allele becomes fixed in the population.\n\n\\subsubsection{Equilibrium distribution of the Kolmogorov backward equation}\n\nRather than trying to solve this equation directly, we can take the same\napproach as we took for the forward equation and solve for the steady-state\ndistribution. For the backward equation, the steady-state distribution tells\nus, in the very long time limit, the probability of ending at frequency $f$\ngiven that the system started at frequency $f_o$. For example, we can ask about\nthe probability of an allele going to fixation ($f = 1$) or extinction ($f =\n0$) given some initial frequency. To solve for the steady-state distribution,\nwe first set the time derivative of \\eref{eq_kolmogorov_backward} to zero\n\\begin{equation}\n\t0 = M(f_o) {\\partial \\over \\partial f_o}P(f, t \\mid f_o, t_o) +\n\t{V(f_o) \\over 2} {\\partial^2 \\over \\partial f_o^2} P(f, t \\mid f_o, t_o).\n\t\\label{eq_kolmogorov_backward_rev}\n\\end{equation}\nWe now define\n\\begin{equation}\n\tG(f_o) \\equiv {\\partial P(f, t \\mid f_o, t_o) \\over \\partial f_o},\n\\end{equation}\nand substitute this back into \\eref{eq_kolmogorov_backward_rev}. With some\nrearrangement of terms we obtain\n\\begin{equation}\n\t2 {M(f_o) \\over V(f_o)} G(f_o) + {\\partial G(f_o) \\over \\partial f_o} = 0.\n\\end{equation}\nThis is a first-order linear ordinary differential equation that can be solved\nusing the integrating factor method just as we did for the equilibrium\ndistribution of the forward equation. For this we multiply both sides by\n\\begin{equation}\n\th(f_o) = \\exp \\left( 2 \\int_0^{f_o} df_o' {M(f_o') \\over V(f_o')} \\right),\n\\end{equation}\nthe integrating factor. 
This results in\n\\begin{equation}\n\t\\exp \\left( 2 \\int_0^{f_o} df_o'\\; {M(f_o') \\over V(f_o')} \\right)\n\t\\left[\n\t2 {M(f_o) \\over V(f_o)} G(f_o) + {\\partial G(f_o) \\over \\partial f_o}\n\t\\right] = 0.\n\\end{equation}\nThe integrating factor allows us to write this as\n\\begin{equation}\n\t{d \\over df_o} \\left[\n\tG(f_o) \\exp \\left(\n\t2 \\int_0^{f_o} df_o' {M(f_o') \\over V(f_o')}\n\t\\right)\n\t\\right] = 0.\n\\end{equation}\nWe can now integrate both sides with respect to $f_o$, obtaining\n\\begin{equation}\n\t\\int_0^{f_o} df_o' \\; {d \\over df_o'} \\left[\n\tG(f_o') \\exp \\left(\n\t2 \\int_0^{f_o'} df_o'' \\; {M(f_o'') \\over V(f_o'')}\n\t\\right)\t\\right] = \\int_0^{f_o} df_o' \\; 0.\n\\end{equation}\nEvaluating these integrals results in\n\\begin{equation}\n\tG(f_o) \\exp \\left(\n\t2 \\int_0^{f_o} df_o' \\; {M(f_o') \\over V(f_o')}\n\t\\right) = C,\n\\end{equation}\nwhere $C$ is an integration constant. If we now substitute back the definition\nof $G(f_o)$ we obtain\n\\begin{equation}\n\t{\\partial P(f, t \\mid f_o, t_o) \\over \\partial f_o}\n\t\\exp \\left(\n\t2 \\int_0^{f_o} df_o' \\; {M(f_o') \\over V(f_o')}\n\t\\right) = C.\n\\end{equation}\nWe can use this equation to find the fixation probability of an allele.\n\n\\mrm{This again will be followed by specific cases on how to use this equation\nto calculate fixation probabilities for different reproduction models.}\n", "meta": {"hexsha": "2b1ab567ee207705977f1b6585fd1b23748d0ede", "size": 12297, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/book_draft/chapters/classic_diffusion/06_kolmogorov_backwards.tex", "max_stars_repo_name": "mrazomej/stat_gen", "max_stars_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/book_draft/chapters/classic_diffusion/06_kolmogorov_backwards.tex", "max_issues_repo_name": "mrazomej/stat_gen", "max_issues_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-05T00:17:26.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-05T00:17:26.000Z", "max_forks_repo_path": "doc/book_draft/chapters/classic_diffusion/06_kolmogorov_backwards.tex", "max_forks_repo_name": "mrazomej/pop_gen", "max_forks_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.2095588235, "max_line_length": 79, "alphanum_fraction": 0.7108237782, "num_tokens": 4110, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5781780955412246}}
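As a numerical sanity check of what this machinery will deliver (ours, not part of the text): the standard result for a neutral Wright-Fisher population is that the fixation probability of an allele equals its initial frequency $f_o$, which a direct simulation with toy parameters reproduces:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN, f_o, trials = 100, 0.2, 20000    # toy population size, initial frequency\n\nfixed = 0\nfor _ in range(trials):\n    f = f_o\n    while 0.0 < f < 1.0:\n        f = rng.binomial(N, f) / N  # one generation: binomial resampling\n    fixed += (f == 1.0)\n\nprint(fixed / trials)               # close to f_o = 0.2\n\\end{verbatim}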
{"text": "\\documentclass[11pt]{article}\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{epstopdf}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\n\\textwidth = 6.5 in\n\\textheight = 9 in\n\\oddsidemargin = 0.0 in\n\\evensidemargin = 0.0 in\n\\topmargin = 0.0 in\n\\headheight = 0.0 in\n\\headsep = 0.0 in\n\\parskip = 0.2in\n\\parindent = 0.0in\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{definition}{Definition}\n\n\\newcommand{\\cut}[1]{}\n\n\\title{The Matrix Calculus You Need For Deep Learning}\n\\author{Terence Parr and Jeremy Howard}\n\\begin{document}\n\\maketitle\n\n\\allowdisplaybreaks\n\n\\section{Introduction}\n\nMost of us last saw calculus in school, but derivatives are a critical part of machine learning, particularly deep neural networks, which are trained by optimizing a loss function. Pick up a machine learning paper or the documentation of a library such as [PyTorch](link) and calculus comes screeching back into your life like distant relatives around the holidays.  And it's not just any old scalar calculus that pops up---we need differential {\\em matrix calculus}, the shotgun wedding of [linear algebra](https://en.wikipedia.org/wiki/Linear\\_algebra) and [multivariate calculus](https://en.wikipedia.org/wiki/Multivariable\\_calculus).\n\nFor example, the activation of a single computation unit in a neural network is typically calculated using the dot product (from linear algebra) of an edge weight vector $\\mathbf{w}$ with an input vector $\\mathbf{x}$ plus a scalar bias (threshold): $z(\\mathbf{x}) = \\Sigma_i^n w_i x_i + b = \\mathbf{w} \\cdot \\mathbf{x} + b$. Function $z(\\mathbf{x})$ is called the unit's {\\em affine function} and is followed by a [rectified linear unit](link), which clips negative values to zero: $max(0, z(\\mathbf{x}))$. Such a computational unit is sometimes referred to as an ``artificial neuron'' and looks like:\n\n\\begin{center}\n\t\\includegraphics[scale=.9]{neuron.png}\n\\end{center}\n\nNeural networks consist of many of these units, organized into multiple collections of neurons called {\\em layers}. The activation of one layer's units become the input to the next layer's units. The activation of the unit or units in the final layer is called the network output.\n\n{\\em Training} this neuron means choosing weights $\\mathbf{w}$ and bias $b$ so that we get the desired output for all $N$ inputs $\\mathbf{x}$.  To do that, we minimize a {\\em loss function} that compares the network's final $activation({\\mathbf{x}})$ with the $target(\\mathbf{x})$ (desired output of $\\mathbf{x}$) for all input $\\mathbf{x}$ vectors. To minimize the loss, we use some variation on gradient descent, such as plain [stochastic gradient descent](link) (SGD), SGD with momentum, or [Adam](link).   All of those require the partial derivative (the gradient) of $activation({\\mathbf{x}})$ with respect to the model parameters $\\mathbf{w}$ and $b$. 
Our goal is to gradually tweak $\\mathbf{w}$ and $b$ so that the overall loss function keeps getting smaller across all $\\mathbf{x}$ inputs.\n\nIf we're careful, we can derive the gradient by differentiating the scalar version of a common loss function (mean squared error):\n\n\\[\\frac{1}{N} \\sum_{\\mathbf{x}} (target(\\mathbf{x}) - activation(\\mathbf{x}))^2 = \\frac{1}{N} \\sum_{\\mathbf{x}} (target(\\mathbf{x}) - max(0, \\sum_i^{|x|} w_i x_i + b))^2\\]\n\nBut this is just one neuron, and neural networks must train the weights and biases of all neurons in all layers simultaneously.  The really cool part about partial derivatives is that they give us a uniform, mechanical way to compute the derivative of the loss with respect to every one of these network parameters. Because there are multiple inputs and (potentially) multiple network outputs, we really need general rules for the derivative of a function with respect to a vector and even rules for the derivative of a vector-valued function with respect to a vector.\n\nThis article walks through the derivation of some important rules for computing partial derivatives with respect to vectors, particularly those useful for training neural networks. This field is known as {\\em matrix calculus}, and the good news is, we only need a small subset of that field, which we introduce here.  While there is a lot of online material on multivariate calculus and linear algebra, they are typically taught as two separate undergraduate courses so most material treats them in isolation.  The pages that do discuss matrix calculus often are really just lists of rules with minimal explanation or are just pieces of the story. (See the annotated list of resources at the end.)\n\nIn contrast, we're going to rederive and rediscover some key matrix calculus rules in an effort to explain them. It turns out that matrix calculus is really not that hard! There aren't dozens of new rules to learn; just a couple of key concepts.  That said, our intended audience already has many of these pieces in place mentally; it's more of a core dump than a textbook. Our hope is that it will get you started quickly in the world of matrix calculus as it relates to training neural networks.\n\n\\section{Review: Scalar derivative rules}\n\nHopefully you remember some of these main scalar derivative rules. 
If your memory is a bit fuzzy on this, have a look at [this Khan Academy video](https://www.khanacademy.org/math/ap-calculus-ab/ab-derivative-rules).\n\n\\[\n\\begin{array}{lccc}\nRule & f(x) & \\text{\\parbox[t][0mm][b]{43mm}{Scalar derivative notation with respect to $x$}} & \\text{Example}\\\\\n\\\\[\\dimexpr-\\normalbaselineskip+2pt]\n\\hline\n\\\\[\\dimexpr-\\normalbaselineskip+5pt]\n\\vspace{1mm}\n\\text{Constant} & c & 0 &  \\frac{d}{dx}99 = 0\\\\\n\\vspace{1mm}\n\\text{Multiplication by constant} &\tcf\t&c \\frac{df}{dx} & \\frac{d}{dx}3x = 3\\\\\n\\vspace{1mm}\n\\text{Power Rule}\t& x^n\t& nx^{n-1} & \\frac{d}{dx}x^3 = 3x^2\\\\\n\\vspace{1mm}\n\\text{Sum Rule}\t& f + g\t& \\frac{df}{dx} + \\frac{dg}{dx} & \\frac{d}{dx} (x^2 + 3x) = 2x + 3\\\\\n\\vspace{1mm}\n\\text{Difference Rule}\t& f - g\t& \\frac{df}{dx} - \\frac{dg}{dx} & \\frac{d}{dx}(x^2 - 3x) = 2x - 3\\\\\n\\vspace{1mm}\n\\text{Product Rule}\t& fg & f \\frac{dg}{dx} + \\frac{df}{dx} g & \\frac{d}{dx}x^2x = x^2 + x \\cdot 2x = 3x^2\\\\\n\\vspace{1mm}\n\\text{Quotient Rule}\t& \\frac{f}{g} = fg^{-1} & f \\frac{d(g^{-1})}{dx} + \\frac{df}{dx} g^{-1} & \\frac{d}{dx}\\frac{x^2}{3x} = \\frac{d}{dx}\\frac{x}{3} = \\frac{1}{3}\\\\\n\\text{Chain Rule}\t & f(g(x)) &   \\frac{df(u)}{du}\\frac{du}{dx}, \\text{ let } u=g(x) & \\frac{d}{dx} \\ln(x^2) = \\frac{1}{x^2}2x = \\frac{2}{x}\\\\\n\\end{array}\n\\]\n\nThere are other rules for trigonometry, exponentials, etc., which you can find in the [Khan Academy differential calculus course](https://www.khanacademy.org/math/differential-calculus).\n\nWhen a function has a single parameter, $f(x)$, you'll often see $f'$ and $f'(x)$ used as shorthands for $\\frac{d}{dx}f(x)$. We recommend against this notation as it does not make clear the variable we're taking the derivative with respect to. \n\nYou can think of $\\frac{d}{dx}$ as an operator that maps a function of one parameter to another function.  That means that $\\frac{d}{dx} f(x)$ maps $f(x)$ to its derivative with respect to $x$, which is the same thing as $\\frac{df(x)}{dx}$. Also, if $y = f(x)$, then $\\frac{dy}{dx} = \\frac{df(x)}{dx} = \\frac{d}{dx}f(x)$. Thinking of the derivative as an operator helps to simplify complicated derivatives because the operator is distributive and lets us pull out constants. For example, in the following equation, we can pull out the constant 9 and distribute the derivative operator across the elements within the parentheses.\n\n$\\frac{d}{dx} 9(x + x^2) = 9 \\frac{d}{dx}(x + x^2) = 9 (\\frac{d}{dx}x + \\frac{d}{dx}x^2) = 9(1 + 2x) = 9 + 18x$\n\nThat procedure reduced the derivative of $9(x + x^2)$ to a bit of arithmetic and the derivatives of $x$ and $x^2$, which are much easier to solve than the original derivative.\n\n\\section{Introduction to vector calculus and partial derivatives}\n\nNeural network layers are not single functions of a single parameter, $f(x)$. So, let's move on to functions of multiple parameters such as $f(x,y)$. For example, what is the derivative of $xy$? In other words, how does the product $xy$ change when we wiggle the variables? Well, it depends on whether we are changing $x$ or $y$.  We compute derivatives with respect to one variable (parameter) at a time, giving us two different {\\em partial derivatives} for this two-parameter function (one for $x$ and one for $y$).  Instead of using operator $\\frac{d}{dx}$, the partial derivative operator is  $\\frac{\\partial}{\\partial x}$ (a stylized $d$ and not the Greek letter $\\delta$). 
So, $\\frac{\\partial }{\\partial x}xy$ and $\\frac{\\partial }{\\partial y}xy$ are the partial derivatives of $xy$; often, these are just called the {\\em partials}.  For functions of a single parameter, operator $\\frac{\\partial}{\\partial x}$ is equivalent to $\\frac{d}{dx}$ (for sufficiently smooth functions). However, it's better to use $\\frac{d}{dx}$ to make it clear you're referring to a scalar derivative.\n\nThe partial derivative with respect to $x$ is just the usual scalar derivative, simply treating any other variable in the equation as a constant.  Consider function $f(x,y) = 3x^2y$. The partial derivative with respect to $x$ is written $\\frac{\\partial}{\\partial x} 3x^2y$. There are three constants from the perspective of $\\frac{\\partial}{\\partial x}$: 3, 2, and $y$. Therefore, $\\frac{\\partial}{\\partial x} 3yx^2 = 3y\\frac{\\partial}{\\partial x} x^2 = 3y \\cdot 2x = 6yx$. The partial derivative with respect to $y$ treats $x$ like a constant: $\\frac{\\partial}{\\partial y} 3x^2y = 3x^2\\frac{\\partial}{\\partial y} y = 3x^2\\frac{\\partial y}{\\partial y} = 3x^2 \\times 1 = 3x^2$.  It's a good idea to derive these yourself before continuing; otherwise, the rest of the article won't make sense.  Here's the [Khan Academy video on partials](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives) if you need help.\n\nTo make it clear we are doing vector calculus and not just multivariate calculus, let's consider what we do with the partial derivatives $\\frac{\\partial f(x,y)}{\\partial x}$ and $\\frac{\\partial f(x,y)}{\\partial y}$ (another way to say $\\frac{\\partial}{\\partial x}f(x,y)$ and $\\frac{\\partial }{\\partial y}f(x,y)$) that we computed for $f(x,y) = 3x^2y$.  Instead of having them just floating around and not organized in any way, let's organize them into a horizontal vector. We call this vector the {\\em gradient} of $f(x,y)$ and write it as:\n\n$\\nabla f(x,y)  = [ \\frac{\\partial f(x,y)}{\\partial x}, \\frac{\\partial f(x,y)}{\\partial y}] = [6yx, 3x^2]$\n\nSo the gradient of $f(x,y)$ is simply a vector of its partials. Gradients are part of the vector calculus world, which deals with functions that map $n$ scalar parameters to a single scalar.  Now, let's get crazy and consider derivatives of multiple functions simultaneously.\n\n\\section{Matrix calculus}\n\nWhen we move from derivatives of one function to derivatives of many functions, we move from the world of vector calculus to matrix calculus. Let's compute partial derivatives for two functions, both of which take two parameters.  We can keep the same $f(x,y) = 3x^2y$ from the last section, but let's also bring in $g(x,y) = 2x + y^8$.  The gradient for $g$ has two entries, a partial derivative for each parameter:\n\n$\\frac{\\partial g(x,y)}{\\partial x} = \\frac{\\partial 2x}{\\partial x} + \\frac{\\partial y^8}{\\partial x} = 2\\frac{\\partial x}{\\partial x} + 0 = 2 \\times 1 = 2$\n\nand\n\n$\\frac{\\partial g(x,y)}{\\partial y} = \\frac{\\partial 2x}{\\partial y} + \\frac{\\partial y^8}{\\partial y} = 0 + 8y^7 = 8y^7$\n\ngiving us gradient $\\nabla g(x,y) = [2, 8y^7]$.
When we do so, we get the {\\em Jacobian matrix} (or just the {\\em Jacobian}) where the gradients are rows:\n\n$J =\n\\begin{bmatrix}\n\t\\nabla f(x,y)\\\\\n\t\\nabla g(x,y)\n\\end{bmatrix} = \\begin{bmatrix}\n\\frac{\\partial f(x,y)}{\\partial x} & \\frac{\\partial f(x,y)}{\\partial y}\\\\\n\\frac{\\partial g(x,y)}{\\partial x} & \\frac{\\partial g(x,y)}{\\partial y}\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t6yx & 3x^2\\\\\n\t2 & 8y^7\n\\end{bmatrix}\n$\n\nWelcome to matrix calculus!\n\n{\\bf Note that there are multiple ways to represent the Jacobian.} We are using the so-called [numerator layout](https://en.wikipedia.org/wiki/Matrix\\_calculus\\#Layout\\_conventions) but many papers and software will use the {\\em denominator layout}. This is just the transpose of the numerator layout Jacobian (flip it around its diagonal):\n\n$\n\\begin{bmatrix}\n\t6yx & 2\\\\\n\t3x^2 & 8y^7\n\\end{bmatrix}\n$\n\n\\subsection{Generalization of the Jacobian}\n\nSo far, we've looked at a specific example of a Jacobian matrix. To define the Jacobian matrix more generally, let's combine multiple parameters into a single vector argument: $f(x,y,z) \\Rightarrow f(\\mathbf{x})$. (You will sometimes see notation $\\vec{x}$  for vectors in the literature as well.) Lowercase letters in bold font such as $\\mathbf{x}$ are vectors and those in italics font like $x$ are scalars. $x_i$ is the $i^{th}$ element of vector $\\mathbf{x}$ and is in italics because a single vector element is a scalar. We also have to define an orientation for vector $\\mathbf{x}$. We'll assume that all vectors are vertical by default of size $n \\times 1$:\n\n$\\mathbf{x} = \\begin{bmatrix}\n           x_1\\\\\n           x_2\\\\\n           \\vdots \\\\\n           x_n\\\\\n           \\end{bmatrix}$\n\nWith multiple scalar-valued functions, we can combine them all into a vector just like we did with the parameters. Let $\\mathbf{y} = \\mathbf{f}(\\mathbf{x})$ be a vector of $m$ scalar-valued functions that each take a vector $\\mathbf{x}$ of length $n=|\\mathbf{x}|$ where $|\\mathbf{x}|$ is the cardinality (count) of elements in $\\mathbf{x}$. Each $f_i$ function within $\\mathbf{f}$ returns a scalar just as in the previous section:\n\n$\n\\begin{array}{lcl}\ny_1 & = & f_1(\\mathbf{x})\\\\\ny_2 & = & f_2(\\mathbf{x})\\\\\n & \\vdots & \\\\\ny_m & = & f_m(\\mathbf{x})\\\\\n\\end{array}\n$\n\nFor instance, we'd represent $f(x,y) = 3x^2y$ and $g(x,y) = 2x + y^8$ from the last section as\n\n$y_1 = f_1(\\mathbf{x}) = 3x_1^2x_2$ ~~~~~~~~(substituting $x_1$ for $x$, $x_2$ for $y$)\\\\\n$y_2 = f_2(\\mathbf{x}) = 2x_1 + x_2^8$\n\nIt's very often the case that $m=n$ because we will have a scalar function result for each element of the $\\mathbf{x}$ vector.  For example, consider the identity function $\\mathbf{y} = \\mathbf{f(x)} = \\mathbf{x}$:\n\n$\n\\begin{array}{lclcc}\ny_1 & = & f_1(\\mathbf{x})& = & x_1\\\\\ny_2 & = & f_2(\\mathbf{x})& = & x_2\\\\\n & \\vdots & \\\\\ny_n & = & f_n(\\mathbf{x})& = & x_n\\\\\n\\end{array}\n$\n\nSo we have $m=n$ functions and parameters, in this case. 
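Before generalizing, a quick numerical spot check of the $2 \\times 2$ Jacobian above (ours, not the article's), using central finite differences:\n\\begin{verbatim}\nimport numpy as np\n\ndef F(v):                          # stack f and g into one vector function\n    x, y = v\n    return np.array([3 * x**2 * y, 2 * x + y**8])\n\ndef jacobian_fd(F, v, h=1e-5):     # central finite differences, column by column\n    cols = []\n    for j in range(len(v)):\n        e = np.zeros(len(v)); e[j] = h\n        cols.append((F(v + e) - F(v - e)) / (2 * h))\n    return np.stack(cols, axis=1)\n\nx, y = 3.0, 2.0\nanalytic = np.array([[6 * y * x, 3 * x**2],\n                     [2.0,       8 * y**7]])\nprint(np.allclose(jacobian_fd(F, np.array([x, y])), analytic, rtol=1e-4))  # True\n\\end{verbatim}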
Generally speaking, though, the Jacobian matrix is the collection of all $m \\times n$ possible partial derivatives, which is the stack of $m$ gradients with respect to $\\mathbf{x}$:\n\n$\n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}} = \\begin{bmatrix}\n\\nabla f_1(\\mathbf{x}) \\\\\n\\nabla f_2(\\mathbf{x})\\\\\n\\ldots\\\\\n\\nabla f_m(\\mathbf{x})\n\\end{bmatrix} = \\begin{bmatrix}\n\\frac{\\partial}{\\partial \\mathbf{x}} f_1(\\mathbf{x}) \\\\\n\\frac{\\partial}{\\partial \\mathbf{x}} f_2(\\mathbf{x})\\\\\n\\ldots\\\\\n\\frac{\\partial}{\\partial \\mathbf{x}} f_m(\\mathbf{x})\n\\end{bmatrix} = \\begin{bmatrix}\n\\frac{\\partial}{\\partial {x_1}} f_1(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_1(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_1(\\mathbf{x}) \\\\\n\\frac{\\partial}{\\partial {x_1}} f_2(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_2(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_2(\\mathbf{x}) \\\\\n\\ldots\\\\\n~\\frac{\\partial}{\\partial {x_1}} f_m(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_m(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_m(\\mathbf{x}) \\\\\n\\end{bmatrix}\n$\n\nwhere each $\\frac{\\partial}{\\partial \\mathbf{x}} f_i(\\mathbf{x})$ is a horizontal $n$-vector because the partial derivative is with respect to a vector, $\\mathbf{x}$, whose length is $n = |\\mathbf{x}|$.  The width of the Jacobian is $n$ if we're taking the partial derivative with respect to $\\mathbf{x}$ because there are $n$ parameters we can wiggle, each potentially changing the function's value. Therefore, the Jacobian is always $m$ rows for $m$ equations.  It helps to think about the possible Jacobian shapes visually:\n\n\\begin{center}\n\\begin{tabular}{c|ccl}\n  & \\begin{tabular}[t]{c}\n  scalar\\\\\n  \\framebox(18,18){$x$}\\\\\n  \\end{tabular} & \\begin{tabular}{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{x}$}\n  \\end{tabular}\\\\\n\\hline\n\\\\[\\dimexpr-\\normalbaselineskip+5pt]\n\\begin{tabular}[b]{c}\n  scalar\\\\\n  \\framebox(18,18){$f$}\\\\\n  \\end{tabular} &\\framebox(18,18){$\\frac{\\partial f}{\\partial {x}}$} & \\framebox(40,18){$\\frac{\\partial f}{\\partial {\\mathbf{x}}}$}&\\\\\n\\begin{tabular}[b]{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{f}$}\\\\\n  \\end{tabular} & \\framebox(18,40){$\\frac{\\partial \\mathbf{f}}{\\partial {x}}$} & \\framebox(40,40){$\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{x}}$}\\\\\n\\end{tabular}\n\\end{center}\n\nThe Jacobian of the identity function $\\mathbf{f(x)} = \\mathbf{x}$, with $f_i(\\mathbf{x}) = x_i$, has $n$ functions and each function has $n$ parameters held in a single vector $\\mathbf{x}$. 
The Jacobian is, therefore, a square matrix since $m=n$:\n\n\\begin{eqnarray*}\n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}} = \\begin{bmatrix}\n\\frac{\\partial}{\\partial {x}} f_1(\\mathbf{x}) \\\\\n\\frac{\\partial}{\\partial {x}} f_2(\\mathbf{x})\\\\\n\\ldots\\\\\n\\frac{\\partial}{\\partial {x}} f_m(\\mathbf{x})\n\\end{bmatrix} &=& \\begin{bmatrix}\n\\frac{\\partial}{\\partial {x_1}} f_1(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_1(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_1(\\mathbf{x}) \\\\\n\\frac{\\partial}{\\partial {x_1}} f_2(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_2(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_2(\\mathbf{x}) \\\\\n\\ldots\\\\\n~\\frac{\\partial}{\\partial {x_1}} f_m(\\mathbf{x})~ \\frac{\\partial}{\\partial {x_2}} f_m(\\mathbf{x}) ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} f_m(\\mathbf{x}) \\\\\n\\end{bmatrix}\\\\\\\\\n & = & \\begin{bmatrix}\n\\frac{\\partial}{\\partial {x_1}} x_1~ \\frac{\\partial}{\\partial {x_2}} x_1 ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} x_1 \\\\\n\\frac{\\partial}{\\partial {x_1}} x_2~ \\frac{\\partial}{\\partial {x_2}} x_2 ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} x_2 \\\\\n\\ldots\\\\\n~\\frac{\\partial}{\\partial {x_1}} x_n~ \\frac{\\partial}{\\partial {x_2}} x_n ~\\ldots~ \\frac{\\partial}{\\partial {x_n}} x_n \\\\\n\\end{bmatrix}\\\\\\\\\n& & (\\text{and since } \\frac{\\partial}{\\partial {x_j}} x_i = 0 \\text{ for } j \\neq i)\\\\\\\\\n & = & \\begin{bmatrix}\n\\frac{\\partial}{\\partial {x_1}} x_1 & 0 & \\ldots& 0 \\\\\n0 & \\frac{\\partial}{\\partial {x_2}} x_2 &\\ldots & 0 \\\\\n& & \\ddots\\\\\n0 & 0 &\\ldots& \\frac{\\partial}{\\partial {x_n}} x_n \\\\\n\\end{bmatrix}\\\\\\\\\n & = & \\begin{bmatrix}\n1 & 0 & \\ldots& 0 \\\\\n0 &1 &\\ldots & 0 \\\\\n& & \\ddots\\\\\n0 & 0 & \\ldots &1 \\\\\n\\end{bmatrix}\\\\\\\\\n& = & I ~~~~(I \\text{ is the identity matrix with ones down the diagonal})\\\\\n\\end{eqnarray*}\n\n\nMake sure that you can derive each step above before moving on. If you get stuck, just consider each element of the matrix in isolation and apply the usual scalar derivative rules.   That is a generally useful trick: Reduce vector expressions down to a set of scalar expressions and then take all of the partials, combining the results appropriately into vectors and matrices at the end.\n \nAlso be careful to track whether a vector is vertical, $\\mathbf{x}$, or horizontal, $\\mathbf{x}^T$ where $\\mathbf{x}^T$ means $\\mathbf{x}$ transpose. Also make sure you pay attention to whether something is a scalar-valued function, $y = ...\\,$, or a vector of functions (or a vector-valued function), $\\mathbf{y} = ...\\,$.\n\n\n\n\\subsection{Derivatives of vector element-wise binary operators}\n\nElement-wise binary operations on vectors, such as vector addition $\\mathbf{w} + \\mathbf{x}$, are important because we can express many common vector operations, such as the multiplication of a vector by a scalar, as element-wise binary operations.   Examples that often crop up in deep learning are $max(\\mathbf{w},\\mathbf{x})$ and $\\mathbf{w} > \\mathbf{x}$ (returns a vector of ones and zeros). We can generalize the element-wise binary operations with notation $\\mathbf{y} = \\mathbf{f(w)} \\bigcirc \\mathbf{g(x)}$ where $m=n=|\\mathbf{y}|=|\\mathbf{w}|=|\\mathbf{x}|$. The $\\bigcirc$ symbol represents any element-wise operator (such as $+$) and not the $\\circ$ function composition operator.  
Here's what equation $\\mathbf{y} = \\mathbf{f(w)} \\bigcirc \\mathbf{g(x)}$ looks like when we zoom in to examine the scalar equations:\n\n$\\begin{bmatrix}\n           y_1\\\\\n           y_2\\\\\n           \\vdots \\\\\n           y_n\\\\\n           \\end{bmatrix} = \\begin{bmatrix}\n           f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x})\\\\\n           f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x})\\\\\n           \\vdots \\\\\n           f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x})\\\\\n         \\end{bmatrix}$\n\nwhere we write $n$ not $m$ equations vertically to emphasize the fact that element-wise operators give $m=n$ sized vector results.\n\nUsing the ideas from the last section, we can see that the general case for the Jacobian with respect to $\\mathbf{w}$ is the square matrix:\n\n$J_\\mathbf{w} = \n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{w}}  = \\begin{bmatrix}\n\\frac{\\partial}{\\partial w_1} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial w_2} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial w_n} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) )\\\\\\\\\n\\frac{\\partial}{\\partial w_1} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial w_2} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial w_n} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) )\\\\\\\\\n& \\ldots\\\\\\\\\n\\frac{\\partial}{\\partial w_1} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial w_2} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial w_n} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) )\\\\\n\\end{bmatrix}\n$\n\nand the Jacobian with respect to $\\mathbf{x}$ is:\n\n$J_\\mathbf{x} = \n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}}  = \\begin{bmatrix}\n\\frac{\\partial}{\\partial x_1} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial x_2} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial x_n} ( f_{1}(\\mathbf{w}) \\bigcirc g_{1}(\\mathbf{x}) )\\\\\\\\\n\\frac{\\partial}{\\partial x_1} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial x_2} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial x_n} ( f_{2}(\\mathbf{w}) \\bigcirc g_{2}(\\mathbf{x}) )\\\\\\\\\n& \\ldots\\\\\\\\\n\\frac{\\partial}{\\partial x_1} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) ) & \\frac{\\partial}{\\partial x_2} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) ) & \\ldots & \\frac{\\partial}{\\partial x_n} ( f_{n}(\\mathbf{w}) \\bigcirc g_{n}(\\mathbf{x}) )\\\\\n\\end{bmatrix}\n$\n\nThat's quite a furball, but fortunately the Jacobian is very often a diagonal matrix, a matrix that is zero everywhere but the diagonal. Because this greatly simplifies the Jacobian, let's examine in detail when the Jacobian reduces to a diagonal matrix for element-wise operations. \n\nIn a diagonal Jacobian, all elements off the diagonal are zero, $\\frac{\\partial}{\\partial w_j} ( f_i(\\mathbf{w}) \\bigcirc g_i(\\mathbf{x}) ) = 0$ where $j \\neq i$. (Notice that we are taking the partial derivative with respect to $w_j$ not $w_i$.) Under what conditions are those off-diagonal elements zero? 
Precisely when $f_i$ and $g_i$ are constants with respect to $w_j$, $\\frac{\\partial}{\\partial w_j} f_i(\\mathbf{w}) = \\frac{\\partial}{\\partial w_j} g_i(\\mathbf{x}) = 0$.  Regardless of the operator, if those partial derivatives go to zero, the operation goes to zero, $0 \\bigcirc 0 = 0$ no matter what, and the partial derivative of a constant is zero.\n\nThose partials go to zero when $f_i$ and $g_i$ are not functions of $w_j$. We know that element-wise operations imply that $f_i$ is purely a function of $w_i$ and $g_i$  is purely a function of $x_i$. For example, $\\mathbf{w}+\\mathbf{x}$ sums $w_i + x_i$. Consequently,  $f_i(\\mathbf{w}) \\bigcirc g_i(\\mathbf{x})$ reduces to $f_i(w_i) \\bigcirc g_i(x_i)$ and the goal becomes $\\frac{\\partial}{\\partial w_j} f_i(w_i) = \\frac{\\partial}{\\partial w_j} g_i(x_i) = 0$. $f_i(w_i)$ and $g_i(x_i)$ look like constants to the partial differentiation operator with respect to $w_j$ when $j \\neq i$ so the partials are zero off the diagonal. (Notation $f_i(w_i)$ is technically an abuse of our notation because $f_i$ and $g_i$ are functions of vectors not individual elements. We should really write something like $\\hat f_{i}(w_i) = f_{i}(\\mathbf{w})$, but that would muddy the equations further, and programmers are comfortable overloading functions, so we'll proceed with the notation anyway.)  \n\nWe'll take advantage of this simplification later and refer to the constraint that $f_i(\\mathbf{w})$ and $g_i(\\mathbf{x})$ access at most $w_i$ and $x_i$, respectively, as the {\\em element-wise diagonal condition}.\n\nUnder this condition, the elements along the diagonal of the Jacobian are $\\frac{\\partial}{\\partial w_i} ( f_i(w_i) \\bigcirc g_i(x_i) )$:\n\n$\n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{w}}  = \\begin{bmatrix}\n\\frac{\\partial}{\\partial w_1} ( f_{1}(w_1) \\bigcirc g_{1}(x_1) )\\\\\n& \\frac{\\partial}{\\partial w_2} (f_{2}(w_2) \\bigcirc g_{2}(x_2) ) & &\\text{\\huge0}\\\\\n& & \\ldots \\\\\n\\text{\\huge0}& & & \\frac{\\partial}{\\partial w_n} (f_{n}(w_n) \\bigcirc g_{n}(x_n) )\n\\end{bmatrix}\n$\n\n(The large ``0''s are a shorthand indicating all of the off-diagonal are 0.)\n\nMore succinctly, we can write:\n\n$\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{w}} = diag \\left( \\frac{\\partial}{\\partial w_1}(f_{1}(w_1) \\bigcirc g_{1}(x_1)),~ \\frac{\\partial}{\\partial w_2}(f_{2}(w_2) \\bigcirc g_{2}(x_2)),~ \\ldots,~ \\frac{\\partial}{\\partial w_n}(f_{n}(w_n) \\bigcirc g_{n}(x_n)) \\right)$\n\nand\n\n$\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}} = diag \\left( \\frac{\\partial}{\\partial x_1}(f_{1}(w_1) \\bigcirc g_{1}(x_1)),~ \\frac{\\partial}{\\partial x_2}(f_{2}(w_2) \\bigcirc g_{2}(x_2)),~ \\ldots,~ \\frac{\\partial}{\\partial x_n}(f_{n}(w_n) \\bigcirc g_{n}(x_n)) \\right)$\n\nwhere $diag(\\mathbf{x})$ constructs a matrix whose diagonal elements are taken from vector $\\mathbf{x}$ and that is zero everywhere else. $I$ represents the square identity matrix of appropriate dimensions, which is zero everywhere but the diagonal, which contains all ones. The $T$ exponent of $\\mathbf{x}^T$ represents the transpose of the indicated vector. In this case, it flips a vertical vector to a horizontal vector.\n\nBecause we do lots of simple vector arithmetic, the general function $\\mathbf{f(w)}$ in the binary element-wise operation is often just the vector $\\mathbf{w}$.  Any time the general function is a vector, we know that $f_i(\\mathbf{w})$ reduces to $f_i(w_i) = w_i$. 
For example, vector addition $\\mathbf{w + x}$ fits our element-wise diagonal condition because $\\mathbf{f(w)} + \\mathbf{g(x)}$ has scalar equations $y_i = f_i(\\mathbf{w}) + g_i(\\mathbf{x})$ that reduce to just $y_i = f_i(w_i) + g_i(x_i) = w_i + x_i$ with partial derivatives:\n\n$\\frac{\\partial}{\\partial w_i} ( f_{i}(w_i) + g_{i}(x_i) ) = \\frac{\\partial}{\\partial w_i}(w_i + x_i) = 1 + 0 = 1$\\\\\n$\\frac{\\partial}{\\partial x_i} ( f_{i}(w_i) + g_{i}(x_i) ) = \\frac{\\partial}{\\partial x_i}(w_i + x_i) = 0 + 1 = 1$\n\nThat gives us $\\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{w}} = \\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{x}} = I$, the identity matrix, because every element along the diagonal is 1.\n\nGiven the simplicity of this special case, $f_i(\\mathbf{w})$ reducing to $f_i(w_i)$, you should be able to derive the Jacobians for the common element-wise binary operations on vectors:\n\n$\n\\begin{array}{lllll}\n\\text{Op} & \\text{Partial with respect to } \\mathbf{w} & \\text{Partial with respect to }\\mathbf{x}\\\\\n\\hline\\\\\n\n+ & \\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{w}} = diag(\\ldots \\frac{\\partial (w_i + x_i)}{\\partial w_i} \\ldots) = diag(\\vec{1}) = I & \\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{x}} =  I\\\\\\\\\n\n- & \\frac{\\partial (\\mathbf{w-x})}{\\partial \\mathbf{w}}  =  diag(\\ldots\\frac{\\partial (w_i - x_i)}{\\partial w_i}\\ldots) =  diag(\\vec{1})  =  I & \\frac{\\partial (\\mathbf{w-x})}{\\partial \\mathbf{x}}  =  diag(\\ldots\\frac{\\partial (w_i - x_i)}{\\partial x_i}\\ldots)  =  diag(-\\vec{1})  =  -I \\\\\\\\\n\n\\otimes & \\frac{\\partial (\\mathbf{w \\otimes x})}{\\partial \\mathbf{w}}  =  diag(\\ldots\\frac{\\partial (w_i \\times x_i)}{\\partial w_i} \\ldots)  =  diag(\\mathbf{x}) & \\frac{\\partial (\\mathbf{w \\otimes x})}{\\partial \\mathbf{x}}  =  diag(\\mathbf{w})\\\\\\\\\n\n\\oslash & \\frac{\\partial (\\mathbf{w \\oslash x})}{\\partial \\mathbf{w}}  =  diag(\\ldots\\frac{\\partial (w_i / x_i)}{\\partial w_i}\\ldots)  =  diag(\\ldots \\frac{1}{x_i} \\ldots) & \\frac{\\partial (\\mathbf{w \\oslash x})}{\\partial \\mathbf{x}}  =  diag(\\ldots \\frac{-w_i}{x_i^2} \\ldots)\\\\\n\n\\end{array}\n$\n\nThe $\\otimes$ and $\\oslash$ operators are element-wise multiplication and division; $\\otimes$ is sometimes called the {\\em Hadamard product}. There isn't a standard notation for element-wise multiplication and division so we're using an approach consistent with our general binary operation notation.
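\n\nIf you'd like to check a few of these table entries mechanically, \\texttt{torch.autograd.functional.jacobian} computes full Jacobians. A minimal sketch (assuming PyTorch is available; the test vectors are arbitrary):\n\n\\begin{verbatim}\nimport torch\nfrom torch.autograd.functional import jacobian\n\nw = torch.tensor([1.0, 2.0, 3.0])\nx = torch.tensor([4.0, 5.0, 6.0])\n\n# Jacobian of element-wise multiplication with respect to w: diag(x).\nJ_w = jacobian(lambda w: w * x, w)\nprint(torch.allclose(J_w, torch.diag(x)))          # True\n\n# Jacobian of element-wise division with respect to x: diag(-w_i / x_i^2).\nJ_x = jacobian(lambda x: w / x, x)\nprint(torch.allclose(J_x, torch.diag(-w / x**2)))  # True\n\\end{verbatim}\n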
\\subsection{Derivatives involving scalar expansion}\n\nWhen we multiply or add scalars to vectors, we're implicitly expanding the scalar to a vector and then performing an element-wise binary operation. For example, adding scalar $z$  to vector $\\mathbf{x}$, $\\mathbf{y} = \\mathbf{x} + z$,\nis really\n$\\mathbf{y} = \\mathbf{f(x)} + \\mathbf{g}(z)$ where $\\mathbf{f(x)} = \\mathbf{x}$ and $\\mathbf{g}(z) = \\vec{1} z$. Notation $\\vec{1}$ represents a vector of ones of appropriate length.  $z$ is any scalar that doesn't depend on $\\mathbf{x}$, which is useful because then $\\frac{\\partial z}{\\partial x_i} = 0$ for any $x_i$ and that will simplify our partial derivative computations. (It's okay to think of variable $z$ as a constant for our discussion here.)  Similarly, multiplying by a scalar, $\\mathbf{y} = \\mathbf{x} z$, is really $\\mathbf{y} = \\mathbf{f(x)} \\otimes \\mathbf{g}(z) = \\mathbf{x} \\otimes \\vec{1}z$ where $\\otimes$ is the element-wise  multiplication (Hadamard product) of the two vectors.\n\nThe partial derivatives of vector-scalar addition and multiplication with respect to vector $\\mathbf{x}$ use our element-wise rule:\n\n$\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}} = diag \\left( \\ldots \\frac{\\partial}{\\partial x_i} ( f_i(x_i) \\bigcirc g_i(z) ) \\ldots \\right)$\n\nThis follows because functions $\\mathbf{f(x)} = \\mathbf{x}$ and $\\mathbf{g}(z) = \\vec{1} z$ clearly satisfy our element-wise diagonal condition for the Jacobian (that $f_i(\\mathbf{x})$ refers at most to $x_i$ and $g_i(z)$ refers to the $i^{th}$ value of the $\\vec{1}z$ vector). \n\nUsing the usual rules for scalar partial derivatives, we arrive at the following diagonal elements of the Jacobian for vector-scalar addition:\n \n$\\frac{\\partial}{\\partial x_i} ( f_i(x_i) + g_i(z) ) = \\frac{\\partial (x_i + z)}{\\partial x_i} = \\frac{\\partial x_i}{\\partial x_i} + \\frac{\\partial z}{\\partial x_i} = 1 + 0 = 1$\n\nSo, $\\frac{\\partial}{\\partial \\mathbf{x}} ( \\mathbf{x} + z ) = diag(\\vec{1}) = I$.\n\nComputing the partial derivative with respect to the scalar parameter $z$, however, results in a vertical vector, not a diagonal matrix. The elements of the vector are:\n \n$\\frac{\\partial}{\\partial z} ( f_i(x_i) + g_i(z) ) = \\frac{\\partial (x_i + z)}{\\partial z} = \\frac{\\partial x_i}{\\partial z} + \\frac{\\partial z}{\\partial z} = 0 + 1 = 1$\n\nTherefore, $\\frac{\\partial}{\\partial z} ( \\mathbf{x} + z ) = \\vec{1}$.\n\nThe diagonal elements of the Jacobian for vector-scalar multiplication involve the product rule for scalar derivatives:\n\n$\\frac{\\partial}{\\partial x_i} ( f_i(x_i) \\otimes g_i(z) ) = x_i  \\frac{\\partial z}{\\partial x_i} + z  \\frac{\\partial x_i}{\\partial x_i} = 0 + z = z$\n\nSo, $\\frac{\\partial}{\\partial \\mathbf{x}} ( \\mathbf{x} z ) = diag(\\vec{1}  z) = I z$. \n\nThe partial derivative with respect to scalar parameter $z$ is a vertical vector whose elements are:\n\n$\\frac{\\partial}{\\partial z} ( f_i(x_i) \\otimes g_i(z) ) = x_i \\frac{\\partial z}{\\partial z} + z \\frac{\\partial x_i}{\\partial z} = x_i + 0 = x_i$\n\nThis gives us $\\frac{\\partial}{\\partial z} ( \\mathbf{x} z ) = \\mathbf{x}$.
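\n\nThe same mechanical check works for scalar expansion. A short sketch (again assuming PyTorch):\n\n\\begin{verbatim}\nimport torch\nfrom torch.autograd.functional import jacobian\n\nx = torch.tensor([1.0, 2.0, 3.0])\nz = torch.tensor(5.0)\n\n# y = x z: the Jacobian with respect to x is Iz ...\nJ_x = jacobian(lambda x: x * z, x)\nprint(torch.allclose(J_x, z * torch.eye(3)))  # True\n\n# ... and the derivative with respect to scalar z is the vector x.\nJ_z = jacobian(lambda z: x * z, z)\nprint(torch.allclose(J_z, x))                 # True\n\\end{verbatim}\n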
\\subsection{Vector sum reduction}\n\nSumming up the elements of a vector is an important operation in deep learning, such as in the network loss function, but we can also use it as a way to simplify computing the derivative of vector dot product and other operations that reduce vectors to scalars.\n\nLet $y = sum( \\mathbf{f}(\\mathbf{x})) = \\Sigma_{i=1}^n f_i(\\mathbf{x})$.  Notice we were careful here to leave the parameter as a vector $\\mathbf{x}$ because each function $f_i$ could use all values in the vector, not just $x_i$. The sum is over the {\\bf results} of the function and not the parameter. The gradient ($1 \\times n$ Jacobian) of vector summation is:\n\n$\n\\begin{array}{lcl}\n\\frac{\\partial y}{\\partial \\mathbf{x}} & = & \\begin{bmatrix} \\frac{\\partial y}{\\partial x_1}, \\frac{\\partial y}{\\partial x_2}, \\ldots, \\frac{\\partial y}{\\partial x_n} \\end{bmatrix}\\\\\\\\\n & = & \\begin{bmatrix} \\frac{\\partial}{\\partial x_1} \\Sigma_i f_i(\\mathbf{x}),~ \\frac{\\partial}{\\partial x_2} \\Sigma_i f_i(\\mathbf{x}),~ \\ldots,~ \\frac{\\partial}{\\partial x_n} \\Sigma_i  f_i(\\mathbf{x}) \\end{bmatrix} \\\\\\\\\n & = & \\begin{bmatrix} \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_1},~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_2},~ \\ldots,~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_n}  \\end{bmatrix}~~~~~~~~~~~~~~~(\\text{move derivative inside }\\Sigma)\\\\\\\\\n\\end{array}\n$\n\n(The summation inside the gradient elements can be tricky so make sure to keep your notation consistent.)\n\nLet's look at the gradient of the simple $y = sum(\\mathbf{x})$. The function inside the summation is just $f_i(\\mathbf{x}) = x_i$ and the gradient is then:\n\n$\\nabla y = \\begin{bmatrix} \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_1},~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_2},~ \\ldots,~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_n}  \\end{bmatrix} = \\begin{bmatrix} \\Sigma_i \\frac{\\partial x_i}{\\partial x_1},~ \\Sigma_i \\frac{\\partial x_i}{\\partial x_2},~ \\ldots,~ \\Sigma_i \\frac{\\partial x_i}{\\partial x_n}  \\end{bmatrix}$\n\nBecause $\\frac{\\partial}{\\partial x_j} x_i = 0$ for $j \\neq i$, we can simplify to:\n\n$\\nabla y = \\begin{bmatrix} \\frac{\\partial x_1}{\\partial x_1},~ \\frac{\\partial x_2}{\\partial x_2},~ \\ldots,~ \\frac{\\partial x_n}{\\partial x_n}  \\end{bmatrix} = \\begin{bmatrix}1, 1, \\ldots, 1\\end{bmatrix} = \\vec{1}^T$\n\nNotice that the result is a horizontal vector full of 1s, not a vertical vector, and so the gradient is $\\vec{1}^T$.  It's very important to keep the shape of all of your vectors and matrices in order otherwise it's impossible to compute the derivatives of complex functions.\n\nAs another example, let's sum the result of multiplying a vector by a constant scalar.  If $y = sum(\\mathbf{x} z)$ then $f_i(\\mathbf{x},z) = x_i z$. The gradient is:\n\n$\n\\begin{array}{lcl}\n\\frac{\\partial y}{\\partial \\mathbf{x}} & = & \\begin{bmatrix} \\Sigma_i \\frac{\\partial}{\\partial x_1} x_i z,~ \\Sigma_i \\frac{\\partial }{\\partial x_2} x_i z,~ \\ldots,~ \\Sigma_i \\frac{\\partial}{\\partial x_n} x_i z  \\end{bmatrix}\\\\\\\\\n & = & \\begin{bmatrix} \\frac{\\partial}{\\partial x_1} x_1 z,~ \\frac{\\partial }{\\partial x_2} x_2 z,~ \\ldots,~ \\frac{\\partial}{\\partial x_n} x_n z  \\end{bmatrix}\\\\\\\\\n & = & \\begin{bmatrix} z, z, \\ldots, z \\end{bmatrix}\\\\\\\\\n\\end{array}\n$\n\nThe derivative with respect to scalar variable $z$ is $1 \\times 1$:\n\n$\n\\begin{array}{lcl}\n\\frac{\\partial y}{\\partial z} & = & \\frac{\\partial}{\\partial z} \\Sigma_{i=1}^n x_i z\\\\\\\\\n& = & \\Sigma_i \\frac{\\partial}{\\partial z} x_i z\\\\\\\\\n& = & \\Sigma_i x_i\\\\\\\\\n& = & sum(\\mathbf{x})\n\\end{array}\n$
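\n\nBoth sum-reduction results are easy to confirm with autograd, since calling \\texttt{backward} on a scalar accumulates exactly these gradients (a sketch under the same PyTorch assumption):\n\n\\begin{verbatim}\nimport torch\n\nx = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\nz = torch.tensor(5.0, requires_grad=True)\n\ny = torch.sum(x * z)   # y = sum(x z)\ny.backward()\n\nprint(x.grad)   # [z, z, z] = [5., 5., 5.]\nprint(z.grad)   # sum(x) = 6.\n\\end{verbatim}\n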
\\subsection{The Chain Rules}\n\nWe can't compute partial derivatives of very complicated functions using just the basic matrix calculus rules we've seen so far.  For example, we can't take the derivative of nested expressions like $sum(\\mathbf{w}+\\mathbf{x})$ directly without reducing it to its scalar equivalent. We need to be able to combine our basic vector rules using what we can call the {\\em vector chain rule}.   Unfortunately, there are a number of rules for differentiation that fall under the name ``chain rule'' so we have to be careful which chain rule we're talking about. Part of our goal here is to clearly define and name three different chain rules and indicate in which situation they are appropriate. To get warmed up, we'll start with what we'll call the {\\em single-variable chain rule}, where we want the derivative of a scalar function with respect to a scalar. Then we'll move on to an important concept called the {\\em total derivative} and use it to define what we'll pedantically call the {\\em single-variable total-derivative chain rule}. Then, we'll be ready for the vector chain rule in its full glory as needed for neural networks.\n\nThe chain rule is conceptually a divide and conquer strategy (like Quicksort) that breaks complicated expressions into subexpressions whose derivatives are easier to compute.  Its power derives from the fact that we can process each simple subexpression in isolation yet still combine the  intermediate results to get the correct overall result.\n\nThe chain rule comes into play when we need the derivative of an expression composed of nested subexpressions. For example, we need the chain rule when confronted with expressions like $\\frac{d}{dx} sin(x^2)$.  The outermost expression takes the $sin$ of an intermediate result, a nested subexpression that squares $x$. Specifically, we need the single-variable chain rule, so let's start by digging into that in more detail.\n\n\\subsubsection{Single-variable chain rule}\n\nLet's start with the solution to the derivative of our nested expression: $\\frac{d}{dx} sin(x^2) = 2xcos(x^2)$.  It doesn't take a mathematical genius to recognize components of the solution that smack of scalar differentiation rules, $\\frac{d}{dx}x^2 = 2x$ and $\\frac{d}{du} sin(u) = cos(u)$. It looks like the solution is to multiply the derivative of the outer expression by the derivative of the inner expression or ``chain the pieces together,'' which is exactly right. In this section, we'll explore the general principle at work and provide a process that works for highly-nested expressions of a single variable.\n\nChain rules are typically defined in terms of nested functions, such as $y = f(g(x))$ for single-variable chain rules. (You will also see the chain rule defined using function composition $(f \\circ g)(x)$, which is the same thing.)  Some sources write the derivative using shorthand notation $y' = f'(g(x))g'(x)$, but that hides the fact that we are introducing an intermediate variable: $u = g(x)$, which we'll see shortly. It's better to define the [single-variable chain rule](http://m.wolframalpha.com/input/?i=chain+rule) of $f(g(x))$ explicitly so we never take the derivative with respect to the wrong variable. Here is the formulation of the single-variable chain rule we recommend:\n\n$\\frac{dy}{dx} = \\frac{dy}{du}\\frac{du}{dx}$\n\nTo deploy the single-variable chain rule, follow these steps:\n\n\\begin{enumerate}\n\t\\item  Introduce intermediate variables for nested subexpressions and subexpressions for both binary and unary operators; e.g., $\\times$ is binary, $sin(x)$ and other trigonometric functions are usually unary because there is a single operand. 
This step normalizes all equations to single operators or function applications.\n\t\\item Compute derivatives of the intermediate variables with respect to their parameters.\n\t\\item Combine all derivatives of intermediate variables by multiplying them together to get the overall result.\n\t\\item Substitute intermediate variables back in if any are referenced in the derivative equation.\n\\end{enumerate}\n\nThe third step puts the ``chain'' in ``chain rule'' because it chains together intermediate results. Multiplying the intermediate derivatives together is the common theme among all variations of the chain rule.\n\nLet's try  this process on $y = f(g(x)) = sin(x^2)$:\n\n\\begin{enumerate}\n\t\\item Introduce intermediate variables. Let $u = x^2$ represent subexpression $x^2$ (shorthand for $u(x) = x^2$). This gives us:\\\\\n\t$u = x^2$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~(relative to definition $f(g(x))$, $g(x) = x^2$)\\\\\n\t$y = sin(u)$ ~~~~~~~~~~~~~~~~~~~~~~($y = f(u) = sin(u)$)\\\\\nThe order of these subexpressions does not affect the answer, but we recommend working in the reverse order of operations dictated by the nesting (innermost to outermost). That way, expressions and derivatives are always functions of previously-computed elements. \n\t\\item Compute derivatives.\\\\\n\t$\\frac{du}{dx} = 2x$ ~~~~~~~~~~~~~~~~~~~~~~~~~ (Take derivative with respect to $x$)\\\\\n\t$\\frac{dy}{du} = cos(u)$  ~~~~~~~~~~~~~~~~~~~~ (Take derivative with respect to $u$ not $x$)\n\t\\item Combine.\\\\\n\t$\\frac{dy}{dx} = \\frac{dy}{du} \\frac{du}{dx} = cos(u)2x$\n\t\\item Substitute.\\\\\n\t$\\frac{dy}{dx} = \\frac{dy}{du} \\frac{du}{dx} = cos(x^2)2x = 2xcos(x^2)$\t\n\\end{enumerate}\n\nNotice how easy it is to compute the derivatives of the intermediate variables in isolation! The chain rule says it's legal to do that and tells us how to combine the intermediate results to get $2xcos(x^2)$.\n\nYou can think of the combining step of the chain rule in terms of units canceling. If we let $y$ be miles, $x$ be the gallons in a gas tank, and $u$ be gallons of gas we can interpret $\\frac{dy}{dx} = \\frac{dy}{du} \\frac{du}{dx}$ as $\\frac{miles}{tank} = \\frac{miles}{gallon} \\frac{gallon}{tank}$. The $gallon$ denominator and numerator cancel. This is a convenient way to remember the chain rule but the analogy only goes so far; don't treat $dy$ and $dx$ as separate variables since they are two components in the name of a single derivative operator.\n\nAnother way to think about the single-variable chain rule is to visualize the overall expression as a dataflow diagram or chain of operations (or [abstract syntax tree](link) for compiler people):\n\n\\includegraphics[scale=.9]{sin-square.png}\n\nChanges to function parameter $x$ bubble up through a squaring operation then through a $sin$ operation to change result $y$. You can think of $\\frac{du}{dx}$ as ``getting changes from $x$ to $u$'' and $\\frac{dy}{du}$ as ``getting changes from $u$ to $y$.'' Getting from $x$ to $y$ requires an intermediate hop. The chain rule is, by convention, usually written from the output variable down to the parameter(s), $\\frac{dy}{dx} = \\frac{dy}{du} \\frac{du}{dx}$. But, the $x$-to-$y$ perspective would be clearer if we reversed the flow and used the equivalent $\\frac{dy}{dx} = \\frac{du}{dx}\\frac{dy}{du}$.\n\n{\\bf Conditions under which the single-variable chain rule applies}. Notice that there is a single dataflow path from $x$ to the root $y$.  Changes in $x$ can influence output $y$ in only one way.  
That is the condition under which we can apply the single-variable chain rule. An easier condition to remember, though one that's a bit looser, is that none of the intermediate subexpression functions, $u(x)$ and $y(u)$, have more than one parameter.  Consider $y(x) = x+x^2$, which would become $y(x,u) = x+u$ after introducing intermediate variable $u$.  As we'll see in the next section, $y(x,u)$ has multiple paths from $x$ to $y$. To handle that situation, we'll deploy the single-variable total-derivative chain rule.\n\t\n\\framebox[\\linewidth]{\\parbox{0.97\\linewidth}{As an aside for those interested in automatic differentiation, papers and library documentation use terminology {\\em forward differentiation} and {\\em backward differentiation} (for use in the [back-propagation algorithm](link)). From a dataflow perspective, we are computing a forward differentiation because it follows the normal data flow direction.  Backward differentiation, naturally, goes the other direction and we're asking how a change in the output would affect function parameter $x$. Because backward differentiation can determine changes in all function parameters at once, it turns out to be much more efficient for computing the derivative of functions with lots of parameters. Forward differentiation, on the other hand, must consider how a change in each parameter, in turn, affects the function output $y$.\\\\\n\n\\begin{tabular}{cc}\n\tForward differentiation & Backward differentiation\\\\\n\tfrom $x$ to $y$ & from $y$ to $x$\\\\\n$\\frac{dy}{dx} = \\frac{du}{dx}\\frac{dy}{du}$ & $\\frac{dy}{dx} = \\frac{dy}{du} \\frac{du}{dx}$\\\\\n\\end{tabular}\n\\vspace{3mm}\n\nAutomatic differentiation is beyond the scope of this article, but we're setting the stage for a future article.\n}}\n\nMany readers can solve $\\frac{d}{dx}sin(x^2)$ in their heads, but our goal is a process that will work even for  very complicated expressions. This process is also how [automatic differentiation](link) works in libraries like PyTorch. So, by solving derivatives manually  in this way, you're also learning how to define functions for custom neural networks in PyTorch.\n\nWith deeply nested expressions, it helps to think about deploying the chain rule the way a compiler unravels nested function calls like $f_4(f_3(f_2(f_1(x))))$ into a sequence (chain) of calls. The result of calling function $f_i$ is saved to a temporary variable called a register, which is then passed as a parameter to $f_{i+1}$.  
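\n\nIn code, that register-by-register style looks like the following minimal sketch for our earlier $sin(x^2)$ example (plain Python; the function and variable names here are ours, not from any library):\n\n\\begin{verbatim}\nimport math\n\ndef dy_dx(x):\n    u = x ** 2              # register: u = g(x)\n    y = math.sin(u)         # register: y = f(u)  (the value, if we need it)\n    du_dx = 2 * x           # derivative of each step in isolation ...\n    dy_du = math.cos(u)\n    return dy_du * du_dx    # ... then chain (multiply) them together\n\nx = 0.5\nassert abs(dy_dx(x) - 2 * x * math.cos(x ** 2)) < 1e-12\n\\end{verbatim}\n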
Let's see how that looks in practice by using our process on a highly-nested equation like $y = f(x) = ln(sin(x^3)^2)$:\n\n\\begin{enumerate}\n\t\\item Introduce intermediate variables.\\\\\n$u_1 = f_1(x) = x^3$\\\\\n$u_2 = f_2(u_1) = sin(u_1)$\\\\\n$u_3 = f_3(u_2) = u_2^2$\\\\\n$u_4 = f_4(u_3) = ln(u_3)$ ~~~~~~($y = u_4$)\n\t\\item Compute derivatives.\\\\\n$\\begin{array}{lllll}\n\\vspace{1mm}\n\\frac{d}{dx} u_1 & = & \\frac{d}{dx} x^3 & = & 3x^2\\\\\n\\vspace{1mm}\n\\frac{d}{du_1} u_2 & = & \\frac{d}{du_1} sin(u_1) & = & cos(u_1) \\\\\n\\frac{d}{du_2} u_3 & = & \\frac{d}{du_2} u_2^2 & =& 2u_2\\\\\n\\frac{d}{du_3} u_4 & = & \\frac{d}{du_3} ln(u_3) & =& \\frac{1}{u_3}\\\\\n\\end{array}$\n\t\\item Combine four intermediate values.\\\\\n$\\frac{dy}{dx} = \\frac{d u_4}{dx} = \\frac{d u_4}{du_3}\\frac{du_3}{d u_2} \\frac{du_2}{du_1} \\frac{du_1}{dx} = \\frac{1}{u_3}  2u_2  cos(u_1)  3x^2 = \\frac{6u_2x^2cos(u_1)}{u_3}$\n\t\\item Substitute.\\\\\n$\\frac{dy}{dx} = \\frac{6u_2x^2cos(u_1)}{u_3} = \\frac{6sin(u_1)x^2cos(u_1)}{sin(u_1)^2} = \\frac{6sin(x^3)x^2cos(x^3)}{sin(x^3)^2} = \\frac{6x^2cos(x^3)}{sin(x^3)}$\n\\end{enumerate}\n\n\nHere is a visualization of the data flow through the chain of operations from $x$ to $y$:\n\n\\includegraphics[scale=.9]{chain-tree.png}\n\nAt this point, we can handle derivatives of nested expressions of a single variable, $x$, using the chain rule but only if $x$ can affect $y$ through a single data flow path. To handle more complicated expressions, we need to extend our technique, which we'll do next.\n\n\\subsubsection{Single-variable total-derivative chain rule}\n\nOur single-variable chain rule has limited applicability because all intermediate variables must be functions of single variables. But, it demonstrates the core mechanism of the chain rule, that of multiplying out all derivatives of intermediate subexpressions. To handle more general expressions such as $y = f(x) = x+x^2$, however, we need to augment that basic chain rule.\n\nOf course, we immediately see $\\frac{dy}{dx} = \\frac{d}{dx}x + \\frac{d}{dx}x^2 = 1 + 2x$, but that is using the scalar  addition derivative rule, not the chain rule.  If we tried to apply the single-variable chain rule, we'd get the wrong answer. In fact, the previous chain rule is meaningless in this case because derivative operator $\\frac{d}{dx}$ does not apply to multivariate functions, such as $u_2$ among our intermediate variables:\n\n$u_1(x) = x^2$\\\\\n$u_2(x,u_1) = x + u_1$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ($y = f(x) = u_2(x,u_1)$)\n\nLet's try it anyway to see what happens. If we pretend that $\\frac{du_2}{du_1} = 0 + 1 = 1$ and $\\frac{du_1}{dx} = 2x$, then $\\frac{dy}{dx} = \\frac{du_2}{dx} = \\frac{du_2}{du_1} \\frac{du_1}{dx} = 2x$ instead of the right answer $1 + 2x$.  \n\nBecause $u_2(x,u_1) = x + u_1$ has multiple parameters, partial derivatives come into play. Let's blindly apply the partial derivative operator to all of our equations and see what we get:\n\n$\\frac{\\partial u_1(x)}{\\partial x} = 2x$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(same as $\\frac{du_1(x)}{dx}$)\\\\\n$\\frac{\\partial u_2(x,u_1)}{\\partial u_1} = \\frac{\\partial }{\\partial u_1}(x + u_1) = 0 + 1 = 1$\\\\\n$\\frac{\\partial u_2(x,u_1)}{\\partial x} \\neq \\frac{\\partial }{\\partial x}(x + u_1) = 1 + 0 = 1$ ~~~~~~~~~~(something's not quite right here!)\\\\\n\nOoops! 
The partial $\\frac{\\partial u_2(x,u_1)}{\\partial x}$ is wrong because it violates a key assumption for partial derivatives. When taking the partial derivative with respect to $x$, the other variables must not vary as $x$ varies. Otherwise, we could not act as if the other variables were constants. Clearly, though, $u_1(x)=x^2$ is a function of $x$ and therefore varies with $x$. $\\frac{\\partial u_2(x,u_1)}{\\partial x} \\neq 1 + 0$ because $\\frac{\\partial u_1(x)}{\\partial x} \\neq 0$. A quick look at the data flow diagram for $y=u_2(x,u_1)$ shows multiple paths from $x$ to $y$, thus, making it clear we need to consider direct and indirect (through $u_1(x)$) dependencies on $x$:\n\n\\includegraphics[scale=.9]{plus-square.png}\n\nA change in $x$ affects $y$ both as an operand of the addition and as the operand of the square operator. Here's an equation that describes how tweaks to $x$ affect the output:\n\n$\\hat y = (x + \\Delta x) + (x + \\Delta x)^2$\n\nThen, $\\Delta y = \\hat y - y$.\n\nIf we let $x=1$, then $y=1+1^2=2$. If we bump $x$ by 1, $\\Delta x=1$, then $\\hat y = (1+1) + (1+1)^2 = 2 + 4 = 6$. The change in $y$ is not $1$, as $\\partial u_2 / \\partial x = 1$ would lead us to believe, but $6-2 = 4$!\n\nEnter the ``law'' of [{\\em total derivatives}](https://en.wikipedia.org/wiki/Total\\_derivative), which basically says that to compute $\\frac{dy}{dx}$, we need to sum up all possible contributions from changes in $x$ to the change in $y$. The total derivative with respect to $x$ assumes all variables, such as $u_1$ in this case, are functions of $x$ and potentially vary as $x$ varies.   The total derivative of $f(x) = u_2(x,u_1)$ that depends on $x$ directly and indirectly via intermediate variable $u_1(x)$ is given by:\n\n$\\frac{dy}{dx} = \\frac{\\partial f(x)}{\\partial x} = \\frac{\\partial u_2(x,u_1)}{\\partial x} = \\frac{\\partial u_2}{\\partial x}\\frac{\\partial x}{\\partial x} + \\frac{\\partial u_2}{\\partial u_1}\\frac{\\partial u_1}{\\partial x} = \\frac{\\partial u_2}{\\partial x} + \\frac{\\partial u_2}{\\partial u_1}\\frac{\\partial u_1}{\\partial x}$\n\nUsing this formula, we get the proper answer:\n\n$\\frac{dy}{dx} = \\frac{\\partial f(x)}{\\partial x} = \\frac{\\partial u_2}{\\partial x} + \\frac{\\partial u_2}{\\partial u_1}\\frac{\\partial  u_1}{\\partial  x} = 1 + 1 \\times 2x = 1 + 2x$\n\nThat is an application of what we can call the {\\em single-variable total-derivative chain rule}:\n\n\\[\n\\frac{\\partial f(x,u_1,\\ldots,u_n)}{\\partial x} = \\frac{\\partial f}{\\partial x} + \\frac{\\partial f}{\\partial u_1}\\frac{\\partial  u_1}{\\partial  x} + \\frac{\\partial f}{\\partial u_2}\\frac{\\partial  u_2}{\\partial  x} + \\ldots + \\frac{\\partial f}{\\partial u_n}\\frac{\\partial  u_n}{\\partial x} = \\frac{\\partial f}{\\partial x} + \\sum_{i=1}^n \\frac{\\partial f}{\\partial u_i}\\frac{\\partial  u_i}{\\partial  x}\n\\]\n\nThe total derivative assumes all variables are potentially codependent whereas the partial derivative assumes all variables but $x$ are constants.\n\nThere is something subtle going on here with the notation. All of the derivatives are shown as partial derivatives because $f$ and $u_i$ are functions of multiple variables. This notation mirrors [Wolfram's notation](http://mathworld.wolfram.com/TotalDerivative.html) but differs from  [Wikipedia](https://en.wikipedia.org/wiki/Total\\_derivative), which uses ${d f(x,u_1,\\ldots,u_n)}/{d x}$ instead (possibly to emphasize the total derivative nature of the equation). 
We'll stick with the partial derivative notation so that it's consistent with our discussion of the vector chain rule in the next section.\n\nIn practice, just keep in mind that when you take the total derivative with respect to $x$, other variables might also be functions of $x$ so add in their contributions as well.  The left side of the equation looks like a typical partial derivative but the right-hand side is actually the total derivative.  It's common, however, that many temporary variables are functions of a single parameter, which means that the single-variable total-derivative chain rule degenerates to the single-variable chain rule.\n\nLet's look at a nested subexpression, such as $f(x) = sin(x + x^2)$.  We introduce three intermediate variables:\n\n$u_1(x) = x^2$\\\\\n$u_2(x,u_1) = x + u_1$\\\\\n$u_3(u_2) = sin(u_2)$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ($y = f(x) = u_3(u_2)$)\n\nand partials:\n\n$\\frac{\\partial u_1}{\\partial x} = 2x$\\\\\n$\\frac{\\partial u_2}{\\partial x} = \\frac{\\partial x}{\\partial x} + \\frac{\\partial u_2}{\\partial u_1}\\frac{\\partial u_1}{\\partial x} = 1 + 1 \\times 2x = 1+2x$\\\\\n$\\frac{\\partial f(x)}{\\partial x} = \\frac{\\partial u_3}{\\partial x} + \\frac{\\partial u_3}{\\partial u_2}\\frac{\\partial u_2}{\\partial x} = 0 + cos(u_2)\\frac{\\partial u_2}{\\partial x} = cos(x+x^2)(1+2x)$\n\nwhere both $\\frac{\\partial u_2}{\\partial x}$ and $\\frac{\\partial f(x)}{\\partial x}$ have $\\frac{\\partial u_i}{\\partial x}$ terms that take into account the total derivative.\n\nAlso notice that the total derivative formula always {\\bf sums} versus, say, multiplies terms $\\frac{\\partial f}{\\partial u_i}\\frac{\\partial  u_i}{\\partial  x}$.  It's tempting to think that summing up terms in the derivative makes sense because, for example, $y = x+x^2$ adds two terms. Nope. The total derivative is adding terms because it represents a weighted sum of all $x$ contributions to the change in $y$. For example, given $y = x \\times x^2$ instead of $y = x + x^2$, the total-derivative chain rule formula still adds partial derivative terms. ($x \\times x^2$  simplifies to $x^3$ but for this demonstration, let's not combine the terms.) Here are the intermediate variables and partial derivatives:\n\n$u_1(x) = x^2$\\\\\n$u_2(x,u_1) = x u_1$ ~~~\\,~~~~~~~~~ ($y = f(x) = u_2(x,u_1)$)\n\n$\\frac{\\partial u_1}{\\partial x} = 2x$\\\\\n$\\frac{\\partial u_2}{\\partial x} = u_1$ ~~~~~~~~~~~~~~~~~~~~~~~~~~(for $u_2 = x + u_1$, $\\frac{\\partial u_2}{\\partial x} = 1$)\\\\\n$\\frac{\\partial u_2}{\\partial u_1} = x$ \\,~~~~~~~~~~~~~~~~~~~~~~~~~~~(for $u_2 = x + u_1$, $\\frac{\\partial u_2}{\\partial u_1} = 1$)\n\nThe form of the total derivative remains the same, however:\n\n$\\frac{dy}{dx} = \\frac{\\partial u_2}{\\partial x} + \\frac{\\partial u_2}{\\partial u_1}\\frac{\\partial u_1}{\\partial x} = u_1 + x 2x = x^2 + 2x^2 = 3x^2$\n\nIt's the partials (weights) that change, not the formula, when the intermediate variable operators change.\n\nThose readers with a strong calculus background might wonder why we aggressively introduce intermediate variables even for the non-nested subexpressions such as $x^2$ in $x+x^2$. 
We use this process for three reasons: (i) computing the derivatives for the simplified subexpressions is usually trivial, (ii) we can simplify the chain rule, and (iii) the process mirrors how automatic differentiation works in neural network libraries.\n\nUsing the intermediate variables even more aggressively, let's see how we can simplify our single-variable total-derivative chain rule to its final form. The goal is to get rid of the $\\frac{\\partial f}{\\partial x}$ sticking out on the front like a sore thumb:\n\n\\[\n\\frac{\\partial f(x,u_1,\\ldots,u_n)}{\\partial x} = \\frac{\\partial f}{\\partial x} + \\sum_{i=1}^n \\frac{\\partial f}{\\partial u_i}\\frac{\\partial  u_i}{\\partial  x}\n\\]\n\nWe can achieve that by simply introducing a new temporary variable as an alias for $x$: $u_{n+1} = x$. Then, the formula reduces to our final form (implicitly bumping $n$ up by one):\n\n\\[\n\\frac{\\partial f(u_1,\\ldots,u_n)}{\\partial x} = \\sum_{i=1}^n \\frac{\\partial f}{\\partial u_i}\\frac{\\partial  u_i}{\\partial  x}\n\\]\n\nThis chain rule that takes into consideration the total derivative degenerates to the single-variable chain rule when all intermediate variables are functions of a single variable.   Consequently, you can remember this more general formula to cover both cases.  As a bit of dramatic foreshadowing, notice that the summation sure looks like a vector dot product, $\\frac{\\partial f}{\\partial \\mathbf{u}} \\cdot \\frac{\\partial \\mathbf{u}}{\\partial x}$, or  a vector multiply $\\frac{\\partial f}{\\partial \\mathbf{u}} \\frac{\\partial \\mathbf{u}}{\\partial x}$. \n \nBefore we move on, a word of caution about terminology on the web. Unfortunately, the chain rule given in this section, based upon the total derivative, is universally called ``multivariable chain rule'' in calculus discussions, which is highly misleading! Only the intermediate variables are multivariate functions. The overall function, say, $f(x) = x + x^2$, is a scalar function that accepts a single parameter $x$. The derivative and parameter are scalars, not vectors, as one would expect with a so-called multivariate chain rule.  (Within the context of a non-matrix calculus class, ``multivariate chain rule'' is likely unambiguous.) To reduce confusion, we use ``single-variable total-derivative chain rule'' to spell out the distinguishing feature between the simple single-variable chain rule, $\\frac{dy}{dx} = \\frac{dy}{du}\\frac{du}{dx}$, and this one. 
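\n\nAs a quick check that autograd really does implement this total-derivative behavior, we can run the $y = x \\times x^2$ example from above through PyTorch (a sketch, assuming \\texttt{torch} is installed):\n\n\\begin{verbatim}\nimport torch\n\nx = torch.tensor(3.0, requires_grad=True)\n\nu1 = x ** 2     # intermediate variable, exactly as in the text\nu2 = x * u1     # y = x * x^2; x contributes via two paths\nu2.backward()\n\n# Total derivative: u1 + x * 2x = 3x^2 = 27 at x = 3.\nprint(x.grad)   # tensor(27.)\n\\end{verbatim}\n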
\\subsubsection{Vector chain rule}\n\nNow that we've got a good handle on the total-derivative chain rule, we're ready to tackle the chain rule for vectors of functions and vector variables. Surprisingly, this more general chain rule is just as simple looking as the single-variable chain rule for scalars. Rather than just presenting the vector chain rule, let's rediscover it ourselves so we get a firm grip on it. We can start by computing the derivative of a sample vector function with respect to a scalar, $\\mathbf{y} = \\mathbf{f}(x)$, to see if we can abstract a general formula.\n\n$\n\\begin{bmatrix}\n\ty_1(x)\\\\\n\ty_2(x)\\\\\n\\end{bmatrix} =\n\\begin{bmatrix}\n\tf_1(x)\\\\\n\tf_2(x)\\\\\n\\end{bmatrix} = \n\\begin{bmatrix}\n\tln(x^2)\\\\\n\tsin(3x)\n\\end{bmatrix}\n$\n\nLet's introduce two intermediate variables, $g_1$ and $g_2$, one for each $f_i$ so that $\\mathbf{y}$ looks more like $\\mathbf{y} = \\mathbf{f}(\\mathbf{g}(x))$:\n\n$\n\\begin{bmatrix}\n\tg_1(x)\\\\\n\tg_2(x)\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\tx^2\\\\\n\t3x\\\\\n\\end{bmatrix}\n$\n\n$\n\\begin{bmatrix}\n\tf_1(\\mathbf{g})\\\\\n\tf_2(\\mathbf{g})\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\tln(g_1)\\\\\n\tsin(g_2)\\\\\n\\end{bmatrix}\n$\n\n\nThe derivative of vector $\\mathbf{y}$ with respect to scalar $x$ is a vertical vector with elements computed using the single-variable total-derivative chain rule:\n\n$\\frac{\\partial \\mathbf{y}}{\\partial x}  =\n\\begin{bmatrix}\n\t\\frac{\\partial f_1(\\mathbf{g})}{\\partial x}\\\\\n\t\\frac{\\partial f_2(\\mathbf{g})}{\\partial x}\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{\\partial f_1}{\\partial g_1}\\frac{\\partial g_1}{\\partial x} + \\frac{\\partial f_1}{\\partial g_2}\\frac{\\partial g_2}{\\partial x}\\\\\n\t\\frac{\\partial f_2}{\\partial g_1}\\frac{\\partial g_1}{\\partial x} + \\frac{\\partial f_2}{\\partial g_2}\\frac{\\partial g_2}{\\partial x}\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{1}{g_1}2x + 0\\\\\n\t0 + cos(g_2)3\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{2x}{x^2}\\\\\n\t3cos(3x)\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{2}{x}\\\\\n\t3cos(3x)\\\\\n\\end{bmatrix}\n$\n\nOk, so now we have the answer using just the scalar rules, albeit with the derivatives grouped into a vector. Let's try to abstract from that result what it looks like in vector form.  The goal is to convert the following vector of scalar operations to a vector operation. \n\n$\n\\begin{bmatrix}\n\t\\frac{\\partial f_1}{\\partial g_1}\\frac{\\partial g_1}{\\partial x} + \\frac{\\partial f_1}{\\partial g_2}\\frac{\\partial g_2}{\\partial x}\\\\\n\t\\frac{\\partial f_2}{\\partial g_1}\\frac{\\partial g_1}{\\partial x} + \\frac{\\partial f_2}{\\partial g_2}\\frac{\\partial g_2}{\\partial x}\\\\\n\\end{bmatrix}\n$\n\nIf we split the $\\frac{\\partial f_i}{\\partial g_j}\\frac{\\partial g_j}{\\partial x}$ terms, isolating the $\\frac{\\partial g_j}{\\partial x}$ terms into a vector, we get a matrix-by-vector multiplication:\n\n$\n\\begin{bmatrix}\n\t\\frac{\\partial f_1}{\\partial g_1} & \\frac{\\partial f_1}{\\partial g_2}\\\\\n\t\\frac{\\partial f_2}{\\partial g_1} & \\frac{\\partial f_2}{\\partial g_2}\\\\\n\\end{bmatrix}\\begin{bmatrix}\n\\frac{\\partial g_1}{\\partial x}\\\\\n\\frac{\\partial g_2}{\\partial x}\\\\\n\\end{bmatrix} = \\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial \\mathbf{g}}{\\partial x}\n$\n\nThat means that the Jacobian is the multiplication of two other Jacobians, which is kinda cool.  Let's check our results:\n\n$\n\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial \\mathbf{g}}{\\partial x} = \\begin{bmatrix}\n\t\\frac{1}{g_1} & 0\\\\\n\t0 & cos(g_2)\\\\\n\\end{bmatrix}\\begin{bmatrix}\n2x\\\\\n3\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{1}{g_1}2x + 0\\\\\n\t0 + cos(g_2)3\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\t\\frac{2}{x}\\\\\n\t3cos(3x)\\\\\n\\end{bmatrix}\n$\n\nWhew!  We get the same answer as the scalar approach. This vector chain rule for vectors of functions and a single parameter appears to be correct and, indeed, mirrors the single-variable chain rule. 
Compare the vector rule:\n\n$\\frac{\\partial}{\\partial x} \\mathbf{f}(\\mathbf{g}(x)) = \\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial x}$\n\nwith the single-variable chain rule:\n\n$\\frac{d}{dx} f(g(x)) = \\frac{df}{dg}\\frac{dg}{dx}$\n\nTo make this formula work for multiple parameters or vector $\\mathbf{x}$, we just have to change $x$ to vector $\\mathbf{x}$ in the equation.  The effect is  that $\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$ and the resulting Jacobian,  $\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{x}}$, are now matrices instead of  vertical vectors. Our complete {\\em vector chain rule} is:\n\n$\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x})) = \\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$ ~~~~~~~~~~~~~~~~~~~~~(Note: matrix multiply doesn't commute; order of $\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$ matters)\n\nThe beauty of the vector formula over the single-variable chain rule is that it automatically takes into consideration the total derivative while maintaining the same notational simplicity.  The Jacobian contains all possible combinations of $f_i$ with respect to $g_j$ and $g_i$ with respect to $x_j$. For completeness, here are the two Jacobian components in their full glory:\n\n$\n\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x})) = \\begin{bmatrix}\n\t\\frac{\\partial f_1}{\\partial g_1} & \\frac{\\partial f_1}{\\partial g_2} & \\ldots & \\frac{\\partial f_1}{\\partial g_k}\\\\\n\t\\frac{\\partial f_2}{\\partial g_1} & \\frac{\\partial f_2}{\\partial g_2} & \\ldots & \t\n\\frac{\\partial f_2}{\\partial g_k}\\\\\n\t& &\\ldots\\\\\n\t\\frac{\\partial f_m}{\\partial g_1} & \\frac{\\partial f_m}{\\partial g_2} & \\ldots & \\frac{\\partial f_m}{\\partial g_k}\\\\\n\\end{bmatrix}\\begin{bmatrix}\n\t\\frac{\\partial g_1}{\\partial x_1} & \\frac{\\partial g_1}{\\partial x_2} & \\ldots & \\frac{\\partial g_1}{\\partial x_n}\\\\\n\t\\frac{\\partial g_2}{\\partial x_1} & \\frac{\\partial g_2}{\\partial x_2} & \\ldots & \t\n\\frac{\\partial g_2}{\\partial x_n}\\\\\n\t& &\\ldots\\\\\n\t\\frac{\\partial g_k}{\\partial x_1} & \\frac{\\partial g_k}{\\partial x_2} & \\ldots & \\frac{\\partial g_k}{\\partial x_n}\\\\\n\\end{bmatrix}\n$\n\nwhere $m=|\\mathbf{f}|$, $n=|\\mathbf{x}|$, and $k=|\\mathbf{g}|$. The resulting Jacobian is $m \\times n$ (an $m \\times k$ matrix multiplied by a $k \\times n$ matrix). \n\nEven within this $\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$ formula, we can simplify further because, for many applications, the Jacobians are square ($m=n$) and the off-diagonal entries are zero.  It is the nature of neural networks that the associated mathematics deals with functions of vectors not vectors of functions. For example, the neuron affine function has term $sum(\\mathbf{w}\\otimes\\mathbf{x})$ and the activation function is $max(0,\\mathbf{x})$; we'll consider derivatives of these functions in the next section.  \n\nAs we saw in a previous section, element-wise operations on vectors $\\mathbf{w}$ and $\\mathbf{x}$ yield diagonal matrices with elements $\\frac{\\partial w_i}{\\partial x_i}$ because $w_i$ is a function purely of $x_i$ but not $x_j$ for $j \\neq i$. 
The same thing happens here when $f_i$ is purely a function of $g_i$ and $g_i$ is purely a function of $x_i$:\n\n$\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}} = diag(\\frac{\\partial f_i}{\\partial g_i})$\\\\\n$\\frac{\\partial \\mathbf{g}}{\\partial \\mathbf{x}} = diag(\\frac{\\partial g_i}{\\partial x_i})$\n\nIn this situation, the vector chain rule simplifies to:\n\n$\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x})) = diag(\\frac{\\partial f_i}{\\partial g_i})diag(\\frac{\\partial g_i}{\\partial x_i}) = diag(\\frac{\\partial f_i}{\\partial g_i}\\frac{\\partial g_i}{\\partial x_i})$\n\nTherefore, the Jacobian reduces to a diagonal matrix whose elements are the single-variable chain rule values.\n\nAfter slogging through all of that mathematics, here's the payoff. All you need is the vector chain rule because the single-variable formulas are special cases of the vector chain rule. The following table summarizes the appropriate components to multiply in order to get the Jacobian.\n\n\\begin{center}\n\\begin{tabular}[t]{c|cccc}\n  & \n\\multicolumn{2}{c}{\n  \\begin{tabular}[t]{c}\n  scalar\\\\\n  \\framebox(18,18){$x$}\\\\\n  \\end{tabular}} & &\\begin{tabular}{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{x}$}\\\\\n  \\end{tabular} \\\\\n  \n  \\begin{tabular}{c}$\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x}))$\n\t   = $\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$\n\t\t\\\\\n\t\t\\end{tabular} & \\begin{tabular}[t]{c}\n  scalar\\\\\n  \\framebox(18,18){$u$}\\\\\n  \\end{tabular} & \\begin{tabular}{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{u}$}\n  \\end{tabular}& & \\begin{tabular}{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{u}$}\\\\\n  \\end{tabular} \\\\\n\\hline\n\\\\[\\dimexpr-\\normalbaselineskip+5pt]\n\n\\begin{tabular}[b]{c}\n  scalar\\\\\n  \\framebox(18,18){$f$}\\\\\n  \\end{tabular} &\\framebox(18,18){$\\frac{\\partial f}{\\partial {u}}$} \\framebox(18,18){$\\frac{\\partial u}{\\partial {x}}$} ~~~& \\raisebox{22pt}{\\framebox(40,18){$\\frac{\\partial f}{\\partial {\\mathbf{u}}}$}} \\framebox(18,40){$\\frac{\\partial \\mathbf{u}}{\\partial x}$} & ~~~&\n\\raisebox{22pt}{\\framebox(40,18){$\\frac{\\partial f}{\\partial {\\mathbf{u}}}$}} \\framebox(40,40){$\\frac{\\partial \\mathbf{u}}{\\partial \\mathbf{x}}$}\n\\\\\n  \n\\begin{tabular}[b]{c}\n  vector\\\\\n  \\framebox(18,40){$\\mathbf{f}$}\\\\\n  \\end{tabular} & \\framebox(18,40){$\\frac{\\partial \\mathbf{f}}{\\partial {u}}$} \\raisebox{22pt}{\\framebox(18,18){$\\frac{\\partial u}{\\partial {x}}$}} & \\framebox(40,40){$\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{u}}$} \\framebox(18,40){$\\frac{\\partial \\mathbf{u}}{\\partial x}$} & & \\framebox(40,40){$\\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{u}}$} \\framebox(40,40){$\\frac{\\partial \\mathbf{u}}{\\partial \\mathbf{x}}$}\\\\\n  \n\\end{tabular}\n\\end{center}
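\n\nBefore applying all of this to a neuron, here is one last sanity check of the diagonal special case, $\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x})) = diag(\\frac{\\partial f_i}{\\partial g_i}\\frac{\\partial g_i}{\\partial x_i})$, using $g_i = x_i^2$ and $f_i = sin(g_i)$ as an illustrative choice (a sketch, assuming PyTorch):\n\n\\begin{verbatim}\nimport torch\nfrom torch.autograd.functional import jacobian\n\nx = torch.tensor([0.5, 1.0, 2.0])\n\n# f(g(x)) with f_i purely a function of g_i and g_i purely of x_i.\nJ = jacobian(lambda x: torch.sin(x ** 2), x)\n\n# Diagonal of single-variable chain rule values: cos(x_i^2) * 2x_i.\nmanual = torch.diag(torch.cos(x ** 2) * 2 * x)\nprint(torch.allclose(J, manual))  # True\n\\end{verbatim}\n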
\\section{The gradient of the neuron activation function}\n\nWe now have all of the pieces needed to compute the derivative of a typical activation function for a single neural network computation unit with respect to the model parameters, $\\mathbf{w}$ and $b$:\n\n$activation(\\mathbf{x}) = max(0, \\mathbf{w} \\cdot \\mathbf{x} + b)$\n\nLet's worry about $max$ later and focus on computing $\\frac{\\partial}{\\partial \\mathbf{w}} (\\mathbf{w} \\cdot \\mathbf{x} + b)$ and $\\frac{\\partial}{\\partial b} (\\mathbf{w} \\cdot \\mathbf{x} + b)$. (Recall that neural networks learn through optimization of their weights and biases.)  We haven't discussed the derivative of the dot product yet, $y = \\mathbf{f(w)} \\cdot \\mathbf{g(x)}$, but we can use the chain rule to avoid having to memorize yet another rule. (Note notation $y$ not $\\mathbf{y}$ as the result is a scalar not a vector.) \n\nThe dot product $\\mathbf{w} \\cdot \\mathbf{x}$ is just the summation of the element-wise multiplication of the elements: $\\Sigma_i^n (w_i x_i) = sum(\\mathbf{w} \\otimes \\mathbf{x})$. (You might also find it useful to remember the linear algebra notation $\\mathbf{w} \\cdot \\mathbf{x} = \\mathbf{w}^{T} \\mathbf{x}$.) To use the chain rule for $y = sum(\\mathbf{w} \\otimes \\mathbf{x})$, we introduce an intermediate vector variable $\\mathbf{u}$ just as we did using the single-variable chain rule:\n\n$\\mathbf{u} = \\mathbf{w} \\otimes \\mathbf{x}$\\\\\n$y = sum(\\mathbf{u})$\n\nOnce we've rephrased $y$, we recognize two subexpressions for which we already know the partial derivatives:\n\n$\\frac{\\partial  \\mathbf{u}}{\\partial \\mathbf{w}} = \\frac{\\partial }{\\partial \\mathbf{w}} (\\mathbf{w} \\otimes \\mathbf{x}) = diag(\\mathbf{x})$\\\\\n$\\frac{\\partial y}{\\partial \\mathbf{u}} = \\frac{\\partial }{\\partial \\mathbf{u}} sum(\\mathbf{u}) = \\vec{1}^T$\n\nThe vector chain rule says to multiply the partials:\n\n$\\frac{\\partial y}{\\partial \\mathbf{w}} = \\frac{\\partial y}{\\partial \\mathbf{u}} \\frac{\\partial \\mathbf{u}}{\\partial \\mathbf{w}} = \\vec{1}^T  diag(\\mathbf{x}) = \\mathbf{x}^T$\n\nTo check our results, we can grind the dot product down into a pure scalar function:\n\n$y = \\mathbf{w} \\cdot \\mathbf{x} = \\Sigma_i^n (w_i x_i)$\n\n$\\frac{\\partial y}{\\partial w_j} = \\frac{\\partial}{\\partial w_j} \\Sigma_i (w_i x_i) = \\Sigma_i \\frac{\\partial}{\\partial w_j} (w_i x_i) = \\frac{\\partial}{\\partial w_j} (w_j x_j) = x_j$\n\nThen:\n\n$\\frac{\\partial y}{\\partial \\mathbf{w}} = [ x_1, \\ldots, x_n ] = \\mathbf{x}^T$\n\nHooray! Our results match. \n\nNow, let $y = \\mathbf{w} \\cdot \\mathbf{x} + b$, the full expression within the $max$ activation function call. We have two different partials to compute but don't need the chain rule:\n\n$\\frac{\\partial y}{\\partial \\mathbf{w}} = \\frac{\\partial }{\\partial \\mathbf{w}}\\mathbf{w} \\cdot \\mathbf{x} + \\frac{\\partial }{\\partial \\mathbf{w}}b = \\mathbf{x}^T + \\vec{0}^T = \\mathbf{x}^T$\\\\\n$\\frac{\\partial y}{\\partial b} = \\frac{\\partial }{\\partial b}\\mathbf{w} \\cdot \\mathbf{x} + \\frac{\\partial }{\\partial b}b = 0 + 1 = 1$\n\nLet's tackle the partials of the activation function output, $max(0, \\mathbf{w} \\cdot \\mathbf{x} + b)$. The use of the $max(0,z)$ function call on scalar $z$ just says to treat all negative $z$ values as 0.  The derivative of the max function is a piecewise function. When $z \\leq 0$, the derivative is 0 because $z$ is a constant. When $z > 0$, the derivative of the max function is just the derivative of $z$, which is $1$:\n\n$\n\\frac{\\partial}{\\partial z}max(0,z) =\n\t\\begin{cases}\n\t0 & z \\leq 0\\\\\n\t\\frac{dz}{dz}=1 & z > 0\\\\\n\\end{cases}\n$\n\n\\framebox[\\linewidth]{\\parbox{0.97\\linewidth}{An aside on broadcasting functions across scalars. When one or both of the $max$ arguments are vectors, such as $max(0,\\mathbf{x})$, we broadcast the single-variable function $max$ across the elements. This is an example of an element-wise unary operator.  
Just to be clear:\n\n$\nmax(0,\\mathbf{x}) = \\begin{bmatrix}\nmax(0,x_1)\\\\\nmax(0,x_2)\\\\\n\\ldots\\\\\nmax(0,x_n)\\\\\n\\end{bmatrix}\n$\n\nFor the derivative of the broadcast version then, we get a vector of zeros and ones where:\n\n$\n\\frac{\\partial}{\\partial x_i}max(0,x_i) =\n\t\\begin{cases}\n\t0 & x_i \\leq 0\\\\\n\t\\frac{dx_i}{dx_i}=1 & x_i > 0\\\\\n\\end{cases}\n$\n\n$\n\\frac{\\partial}{\\partial \\mathbf{x}}max(0,\\mathbf{x}) =\n\\begin{bmatrix}\n\t\\frac{\\partial}{\\partial x_1}max(0,x_1)\\\\\n\t\\frac{\\partial}{\\partial x_2}max(0,x_2)\\\\\n\t\\ldots\\\\\n    \\frac{\\partial}{\\partial x_n}max(0,x_n)\n\\end{bmatrix}\n$\n}}\n\nTo get the derivative of the $activation(\\mathbf{x})$ function, we need the chain rule because of the nested subexpression, $\\mathbf{w} \\cdot \\mathbf{x} + b$. Following our process, let's introduce intermediate scalar variable $z$ to represent the affine function giving:\n\n$z(\\mathbf{w},b,\\mathbf{x}) = \\mathbf{w} \\cdot \\mathbf{x} + b$\\\\\n$activation(z) = max(0,z)$\n\nThe vector chain rule tells us:\n\n$\\frac{\\partial activation}{\\partial \\mathbf{w}} = \\frac{\\partial activation}{\\partial z}\\frac{\\partial z}{\\partial \\mathbf{w}}$\n\nwhich we can rewrite as follows:\n\n$\\frac{\\partial activation}{\\partial \\mathbf{w}} = \\begin{cases}\n\t0\\frac{\\partial z}{\\partial \\mathbf{w}}=\\vec{0}^T & z \\leq 0\\\\\n\t1\\frac{\\partial z}{\\partial \\mathbf{w}}=\\frac{\\partial z}{\\partial \\mathbf{w}} = \\mathbf{x}^T & z > 0 ~~~~~~~~~~~~~~~~~~(\\text{we computed }\\frac{\\partial z}{\\partial \\mathbf{w}}=\\mathbf{x}^T \\text{ previously})\\\\\n\\end{cases}$\n\nand then substitute $z = \\mathbf{w} \\cdot \\mathbf{x} + b$ back in:\n \n$\\frac{\\partial activation}{\\partial \\mathbf{w}} = \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x} + b \\leq 0\\\\\n\t\\mathbf{x}^T & \\mathbf{w} \\cdot \\mathbf{x} + b > 0\\\\\n\\end{cases}$\n\nThat equation matches our intuition.  When the activation function clips affine function output $z$ to 0, the derivative is zero with respect to any weight $w_i$. When $z > 0$, it's as if the $max$ function disappears and we get just the derivative of $z$ with respect to the weights. \n\nTurning now to the derivative of the activation function with respect to $b$, we get:\n \n$\\frac{\\partial activation}{\\partial b} = \\begin{cases}\n\t0\\frac{\\partial z}{\\partial b} = 0 & \\mathbf{w} \\cdot \\mathbf{x} + b \\leq 0\\\\\n\t1\\frac{\\partial z}{\\partial b} = 1 & \\mathbf{w} \\cdot \\mathbf{x} + b > 0\\\\\n\\end{cases}\n$\n\nLet's use these partial derivatives now to handle the entire loss function.\n\n\\section{The gradient of the neural network loss function}\n\nTraining a neuron requires that we take the derivative of our loss  or ``cost'' function with respect to the parameters of our model, $\\mathbf{w}$ and $b$. Because we train with multiple inputs and targets, we need some more notation. Let $X = [\\mathbf{x}_1, \\mathbf{x}_2, \\ldots, \\mathbf{x}_N]^T$, $N=|X|$, and $\\mathbf{y} = [target(\\mathbf{x}_1), target(\\mathbf{x}_2), \\ldots, target(\\mathbf{x}_N)]^T$ where $y_i$ is a scalar. 
Then the cost equation becomes:\n\n\\[\nC(\\mathbf{w},b,X,\\mathbf{y}) = \\frac{1}{N} \\sum_{i=1}^{N} (y_i - activation(\\mathbf{x}_i))^2 = \\frac{1}{N} \\sum_{i=1}^{N} (y_i - max(0, \\mathbf{w}\\cdot\\mathbf{x}_i+b))^2\n\\]\n\nFollowing our chain rule process introduces these intermediate variables:\n \n$u(\\mathbf{w},b,\\mathbf{x}) = max(0, \\mathbf{w}\\cdot\\mathbf{x}+b)$\\\\\n$v(y,u) = y - u$\\\\\n$C(v) = \\frac{1}{N} \\sum_{i=1}^N v^2$\n\nLet's compute the gradient with respect to $\\mathbf{w}$ first.\n\n\\subsection{The gradient with respect to the weights}\n\nFrom before, we know:\n\n$\\frac{\\partial }{\\partial \\mathbf{w}} u(\\mathbf{w},b,\\mathbf{x}) = \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x} + b \\leq 0\\\\\n\t\\mathbf{x}^T & \\mathbf{w} \\cdot \\mathbf{x} + b > 0\\\\\n\\end{cases}$\n\nand\n\n$\\frac{\\partial v(y,u)}{\\partial \\mathbf{w}} = \\frac{\\partial}{\\partial \\mathbf{w}} (y - u) = \\vec{0}^T - \\frac{\\partial u}{\\partial \\mathbf{w}} = -\\frac{\\partial u}{\\partial \\mathbf{w}} = \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x} + b \\leq 0\\\\\n\t-\\mathbf{x}^T & \\mathbf{w} \\cdot \\mathbf{x} + b > 0\\\\\n\\end{cases}$\n\nThen, for the overall gradient, we get:\n\n\\begin{eqnarray*}\n\\frac{\\partial C(v)}{\\partial \\mathbf{w}} & = & \\frac{\\partial }{\\partial \\mathbf{w}}\\frac{1}{N} \\sum_{i=1}^N v^2\\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N \\frac{\\partial}{\\partial \\mathbf{w}} v^2\\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N \\frac{\\partial v^2}{\\partial v} \\frac{\\partial v}{\\partial \\mathbf{w}} \\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N 2v \\frac{\\partial v}{\\partial \\mathbf{w}} \\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N \\begin{cases}\n\t2v\\vec{0}^T = \\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t-2v\\mathbf{x}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t-2(y_i-u)\\mathbf{x}_i^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\\\\\\\\\n & = & \\frac{1}{N} \\sum_{i=1}^N \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t-2(y_i-max(0, \\mathbf{w}\\cdot\\mathbf{x}_i+b))\\mathbf{x}_i^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\\\\\\\\\n\\phantom{\\frac{\\partial C(v)}{\\partial \\mathbf{w}}} & = & \\frac{1}{N} \\sum_{i=1}^N \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t-2(y_i-(\\mathbf{w}\\cdot\\mathbf{x}_i+b))\\mathbf{x}_i^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\\\\\\\\\n & = & \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t\\frac{-2}{N} \\sum_{i=1}^N (y_i-(\\mathbf{w}\\cdot\\mathbf{x}_i+b))\\mathbf{x}_i^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\\\\\\\\\n & = & \\begin{cases}\n\t\\vec{0}^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b \\leq 0\\\\\n\t\\frac{2}{N} \\sum_{i=1}^N (\\mathbf{w}\\cdot\\mathbf{x}_i+b-y_i)\\mathbf{x}_i^T & \\mathbf{w} \\cdot \\mathbf{x}_i + b > 0\\\\\n\\end{cases}\n\\end{eqnarray*}\n\nTo interpret that equation, we can substitute an error term $e_i = \\mathbf{w}\\cdot\\mathbf{x}_i+b-y_i$ yielding:\n\n\\[\n\\frac{\\partial C}{\\partial \\mathbf{w}} = \\frac{2}{N} \\sum_{i=1}^N e_i\\mathbf{x}_i^T ~~~~~~~~~~\\text{(for the nonzero activation case)}\n\\]\n\nFrom there, notice that this computation is a weighted average across all $\\mathbf{x}_i$ in $X$. 
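Before unpacking that interpretation, the result is easy to check numerically (a minimal sketch, assuming Python with numpy; the random data and variable names are ours):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # N = 5 input vectors x_i, as rows
y = rng.normal(size=5)             # targets
w = rng.normal(size=3); b = 0.1

def cost(w, b):
    return np.mean((y - np.maximum(0, X @ w + b))**2)

# Analytic gradient: (2/N) sum of e_i x_i^T over the active (z > 0) cases
z = X @ w + b
e = np.where(z > 0, z - y, 0.0)    # error terms, zeroed when inactive
grad_w = 2/len(y) * e @ X
grad_b = 2/len(y) * e.sum()

# Central-difference check
eps = 1e-6
num_w = [(cost(w + eps*d, b) - cost(w - eps*d, b)) / (2*eps)
         for d in np.eye(3)]
num_b = (cost(w, b + eps) - cost(w, b - eps)) / (2*eps)
print(np.allclose(grad_w, num_w), np.isclose(grad_b, num_b))
# True True (away from the z = 0 kink, where max is not differentiable)
\end{verbatim}
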
The weights are the error terms, the difference between the target output and the actual neuron output for each $\mathbf{x}_i$ input. The resulting gradient will, on average, point in the direction of higher cost or loss because large $e_i$ emphasize their associated $\mathbf{x}_i$. Imagine we only had one input vector, $N=|X|=1$; then the gradient is just $2e_1\mathbf{x}_1^T$. If the error is 0, then the gradient is zero and we have arrived at the minimum loss. If $e_1$ is some small positive difference, the gradient is a small step in the direction of $\mathbf{x}_1$. If $e_1$ is large, the gradient is a large step in that direction. If $e_1$ is negative, the gradient is reversed, meaning the highest cost is in the negative direction.

Of course, we want to reduce, not increase, the loss, which is why the gradient descent recurrence relation takes the negative of the gradient to update the current weight estimate (for scalar learning rate $\eta$):

\[
\mathbf{w}_{t+1} = \mathbf{w}_{t} - \eta \frac{\partial C}{\partial \mathbf{w}}
\]

Because the gradient indicates the direction of higher cost, we want to update $\mathbf{w}$ in the opposite direction.

\subsection{The derivative with respect to the bias}


To optimize the bias, $b$, we also need the partial with respect to $b$.  Here are the intermediate variables again:

$u(\mathbf{w},b,\mathbf{x}) = max(0, \mathbf{w}\cdot\mathbf{x}+b)$\\
$v(y,u) = y - u$\\
$C(v) = \frac{1}{N} \sum_{i=1}^N v^2$

We computed the partial with respect to the bias for equation $u(\mathbf{w},b,\mathbf{x})$ previously:

$\frac{\partial u}{\partial b} = \begin{cases}
	0 & \mathbf{w} \cdot \mathbf{x} + b \leq 0\\
	1 & \mathbf{w} \cdot \mathbf{x} + b > 0\\
\end{cases}
$

For $v$, the partial is:

$
\frac{\partial v(y,u)}{\partial b} = \frac{\partial}{\partial b} (y - u) = 0 - \frac{\partial u}{\partial b} = -\frac{\partial u}{\partial b} = \begin{cases}
	0 & \mathbf{w} \cdot \mathbf{x} + b \leq 0\\
	-1 & \mathbf{w} \cdot \mathbf{x} + b > 0\\
\end{cases}
$

And for the partial of the cost function itself we get:

\allowdisplaybreaks
\begin{eqnarray*}
\frac{\partial C(v)}{\partial b} & = & \frac{\partial }{\partial b}\frac{1}{N} \sum_{i=1}^N v^2\\\\
 & = & \frac{1}{N} \sum_{i=1}^N \frac{\partial}{\partial b} v^2\\\\
 & = & \frac{1}{N} \sum_{i=1}^N \frac{\partial v^2}{\partial v} \frac{\partial v}{\partial b} \\\\
 & = & \frac{1}{N} \sum_{i=1}^N 2v \frac{\partial v}{\partial b} \\\\
 & = & \frac{1}{N} \sum_{i=1}^N \begin{cases}
 	0 & \mathbf{w} \cdot \mathbf{x}_i + b \leq 0\\
 	-2v & \mathbf{w} \cdot \mathbf{x}_i + b > 0\\
 \end{cases}\\\\
 & = & \frac{1}{N} \sum_{i=1}^N \begin{cases}
 	0 & \mathbf{w} \cdot \mathbf{x}_i + b \leq 0\\
 	-2(y_i-max(0, \mathbf{w}\cdot\mathbf{x}_i+b)) & \mathbf{w} \cdot \mathbf{x}_i + b > 0\\
 \end{cases}\\\\
 & = & \frac{1}{N} \sum_{i=1}^N \begin{cases}
 	0 & \mathbf{w} \cdot \mathbf{x}_i + b \leq 0\\
 	2(\mathbf{w}\cdot\mathbf{x}_i+b-y_i) & \mathbf{w} \cdot \mathbf{x}_i + b > 0\\
 \end{cases}\\\\
 & = & \begin{cases}
	0 & \mathbf{w} \cdot \mathbf{x}_i + b \leq 0\\
	\frac{2}{N} \sum_{i=1}^N (\mathbf{w}\cdot\mathbf{x}_i+b-y_i) & \mathbf{w} \cdot \mathbf{x}_i + b > 0\\
\end{cases}
\end{eqnarray*}

As before, we can substitute an error term:
\[
\frac{\partial C}{\partial b} = \frac{2}{N} \sum_{i=1}^N e_i ~~~~~~~~~~\text{(for the nonzero activation case)}
\]

The partial derivative is then just the average error or zero, according to the activation level. To update the neuron bias, we nudge it in the opposite direction of increased cost:

\[
b_{t+1} = b_{t} - \eta \frac{\partial C}{\partial b}
\]
 
In practice, it is convenient to combine $\mathbf{w}$ and $b$ into a single vector parameter rather than having to deal with two different partials: $\hat{\mathbf{w}} = [\mathbf{w}^T, b]^T$. This requires a tweak to the input vector $\mathbf{x}$ as well but simplifies the activation function. By tacking a 1 onto the end of $\mathbf{x}$, $\hat{\mathbf{x}} = [\mathbf{x}^T,1]^T$, $\mathbf{w} \cdot \mathbf{x} + b$ becomes $\hat{\mathbf{w}} \cdot \hat{\mathbf{x}}$.  Also notice that $2/N$ is fixed for a fixed number of training exemplars. Because this value does not change as we update the weights and biases, we can drop it; all we need is a direction, not the actual magnitude of the gradient. (That constant could be folded into the learning rate $\eta$ anyway.) This observation simplifies our equations further for the nonzero activation case:

\[\frac{\partial C}{\partial \mathbf{w}} = \sum_{i=1}^N e_i\mathbf{x}_i^T\]
\[\frac{\partial C}{\partial b} = \sum_{i=1}^N e_i\]

This finishes off the optimization of the neural network loss function because we have the two partials necessary to perform gradient descent.

\section{Summary}

Hopefully you've made it all the way through to this point.  You're well on your way to understanding matrix calculus!  We've included a reference that summarizes all of the rules from this article in the next section. Also check out the annotated resource link below.

Your next step would be to learn about the partial derivatives of matrices not just vectors. For example, you can take a look at the matrix differentiation section of [Matrix calculus](https://atmos.washington.edu/\verb|~|dennis/MatrixCalculus.pdf).
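The $\hat{\mathbf{w}} = [\mathbf{w}^T, b]^T$ trick from the loss-function section is also worth seeing in code (a minimal sketch, assuming Python with numpy; the values are ours):

\begin{verbatim}
import numpy as np

w = np.array([0.2, -0.5]); b = 0.7
x = np.array([1.5, 3.0])

w_hat = np.append(w, b)    # w-hat = [w^T, b]^T
x_hat = np.append(x, 1.0)  # x-hat = [x^T, 1]^T

# One dot product now carries both the weights and the bias
print(np.isclose(w @ x + b, w_hat @ x_hat))  # True
\end{verbatim}
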
{\bf Acknowledgements}. We thank [Yannet Interian](https://www.usfca.edu/faculty/yannet-interian) (Faculty in MS data science program at University of San Francisco) and [David Uminsky](http://www.cs.usfca.edu/~duminsky/) (Faculty/director of MS data science) for their help with the notation presented here.


\section{Matrix Calculus Reference}

\subsection{Gradients and Jacobians}

The {\em gradient} of a function of two variables is a horizontal 2-vector:

\[
\nabla f(x,y)  = [ \frac{\partial f(x,y)}{\partial x}, \frac{\partial f(x,y)}{\partial y}]
\]

The {\em Jacobian} of a vector-valued function that is a function of a vector is an $m \times n$ ($m=|\mathbf{f}|$ and $n=|\mathbf{x}|$) matrix containing all possible scalar partial derivatives:

\[
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{bmatrix}
\nabla f_1(\mathbf{x}) \\
\nabla f_2(\mathbf{x})\\
\ldots\\
\nabla f_m(\mathbf{x})
\end{bmatrix} = \begin{bmatrix}
\frac{\partial}{\partial \mathbf{x}} f_1(\mathbf{x}) \\
\frac{\partial}{\partial \mathbf{x}} f_2(\mathbf{x})\\
\ldots\\
\frac{\partial}{\partial \mathbf{x}} f_m(\mathbf{x})
\end{bmatrix} = \begin{bmatrix}
\frac{\partial}{\partial {x_1}} f_1(\mathbf{x})~ \frac{\partial}{\partial {x_2}} f_1(\mathbf{x}) ~\ldots~ \frac{\partial}{\partial {x_n}} f_1(\mathbf{x}) \\
\frac{\partial}{\partial {x_1}} f_2(\mathbf{x})~ \frac{\partial}{\partial {x_2}} f_2(\mathbf{x}) ~\ldots~ \frac{\partial}{\partial {x_n}} f_2(\mathbf{x}) \\
\ldots\\
~\frac{\partial}{\partial {x_1}} f_m(\mathbf{x})~ \frac{\partial}{\partial {x_2}} f_m(\mathbf{x}) ~\ldots~ \frac{\partial}{\partial {x_n}} f_m(\mathbf{x}) \\
\end{bmatrix}
\]

The Jacobian of the identity function $\mathbf{f(x)} = \mathbf{x}$ is $I$.


\subsection{Element-wise operations on vectors}		 

Define generic {\em element-wise operations} on vectors $\mathbf{w}$ and $\mathbf{x}$ using operator $\bigcirc$ such as $+$:

\[
\begin{bmatrix}
           y_1\\
           y_2\\
           \vdots \\
           y_n\\
           \end{bmatrix} = \begin{bmatrix}
           f_{1}(\mathbf{w}) \bigcirc g_{1}(\mathbf{x})\\
           f_{2}(\mathbf{w}) \bigcirc g_{2}(\mathbf{x})\\
           \vdots \\
           f_{n}(\mathbf{w}) \bigcirc g_{n}(\mathbf{x})\\
         \end{bmatrix}\]

The Jacobian with respect to $\mathbf{w}$ (similar for $\mathbf{x}$) is:

\[
J_{\mathbf{w}} = \frac{\partial \mathbf{y}}{\partial \mathbf{w}}  = \begin{bmatrix}
\frac{\partial}{\partial w_1} ( f_{1}(\mathbf{w}) \bigcirc g_{1}(\mathbf{x}) ) & \frac{\partial}{\partial w_2} ( f_{1}(\mathbf{w}) \bigcirc g_{1}(\mathbf{x}) ) & \ldots & \frac{\partial}{\partial w_n} ( f_{1}(\mathbf{w}) \bigcirc g_{1}(\mathbf{x}) )\\\\
\frac{\partial}{\partial w_1} ( f_{2}(\mathbf{w}) \bigcirc g_{2}(\mathbf{x}) ) & \frac{\partial}{\partial w_2} ( f_{2}(\mathbf{w}) \bigcirc g_{2}(\mathbf{x}) ) & \ldots & \frac{\partial}{\partial w_n} ( f_{2}(\mathbf{w}) \bigcirc g_{2}(\mathbf{x}) )\\\\
& \ldots\\\\
\frac{\partial}{\partial w_1} ( f_{n}(\mathbf{w}) \bigcirc g_{n}(\mathbf{x}) ) & \frac{\partial}{\partial w_2} ( f_{n}(\mathbf{w}) \bigcirc g_{n}(\mathbf{x}) ) & \ldots & \frac{\partial}{\partial w_n} ( f_{n}(\mathbf{w}) \bigcirc g_{n}(\mathbf{x}) )\\
\end{bmatrix}
\]

Given the constraint ({\em element-wise diagonal condition}) that
$f_i(\\mathbf{w})$ and $g_i(\\mathbf{x})$ access at most $w_i$ and $x_i$, respectively, the Jacobian simplifies to a diagonal matrix:\n\n\\[\n\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{w}} = diag \\left( \\frac{\\partial}{\\partial w_1}(f_{1}(w_1) \\bigcirc g_{1}(x_1)),~ \\frac{\\partial}{\\partial w_2}(f_{2}(w_2) \\bigcirc g_{2}(x_2)),~ \\ldots,~ \\frac{\\partial}{\\partial w_n}(f_{n}(w_n) \\bigcirc g_{n}(x_n)) \\right)\n\\]\n\nHere are some sample element-wise  operators:\n\n\\[\n\\begin{array}{lll}\n\\text{Op} & \\text{Partial with respect to } \\mathbf{w} & \\text{Partial with respect to }\\mathbf{x}\\\\\n\\hline\\\\\n+ & \\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{w}} = I & \\frac{\\partial (\\mathbf{w+x})}{\\partial \\mathbf{x}} =  I\\\\\\\\\n- & \\frac{\\partial (\\mathbf{w-x})}{\\partial \\mathbf{w}}  = I & \\frac{\\partial (\\mathbf{w-x})}{\\partial \\mathbf{x}}  = -I \\\\\\\\\n\\otimes & \\frac{\\partial (\\mathbf{w \\otimes x})}{\\partial \\mathbf{w}}  =  diag(\\mathbf{x}) & \\frac{\\partial (\\mathbf{w \\otimes x})}{\\partial \\mathbf{x}}  =  diag(\\mathbf{w})\\\\\\\\\n\\oslash & \\frac{\\partial (\\mathbf{w \\oslash x})}{\\partial \\mathbf{w}}  =  diag(\\ldots \\frac{1}{x_i} \\ldots) & \\frac{\\partial (\\mathbf{w \\oslash x})}{\\partial \\mathbf{x}}  =  diag(\\ldots \\frac{-w_i}{x_i^2} \\ldots)\\\\\n\\end{array}\n\\]\n\n\\subsection{Scalar expansion}\n\nAdding scalar $z$  to vector $\\mathbf{x}$, $\\mathbf{y} = \\mathbf{x} + z$, is really $\\mathbf{y} = \\mathbf{f(x)} + \\mathbf{g}(z)$ where $\\mathbf{f(x)} = \\mathbf{x}$ and $\\mathbf{g}(z) = \\vec{1} z$.\n\n\\[\n\\frac{\\partial}{\\partial \\mathbf{x}} ( \\mathbf{x} + z ) = diag(\\vec{1}) = I\n\\]\n\\[\n\\frac{\\partial}{\\partial z} ( \\mathbf{x} + z ) = \\vec{1}\n\\]\n\nScalar multiplication yields:\n\n\\[\\frac{\\partial}{\\partial \\mathbf{x}} ( \\mathbf{x} z ) = I z\\]\n\\[\\frac{\\partial}{\\partial z} ( \\mathbf{x} z ) = \\mathbf{x}\\]\n\n\\subsection{Vector reductions}\n\nThe partial derivative of a vector sum with respect to one of the vectors is:\n\n\\[\n\\begin{array}{lcl}\n\\frac{\\partial y}{\\partial \\mathbf{x}} & = & \\begin{bmatrix} \\frac{\\partial y}{\\partial x_1}, \\frac{\\partial y}{\\partial x_2}, \\ldots, \\frac{\\partial y}{\\partial x_n} \\end{bmatrix} = \\begin{bmatrix} \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_1},~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_2},~ \\ldots,~ \\Sigma_i \\frac{\\partial f_i(\\mathbf{x})}{\\partial x_n}  \\end{bmatrix}\\\\\\\\\n\\end{array}\n\\]\n\nFor $y = sum(\\mathbf{x})$:\n\n\\[\\nabla y_\\mathbf{x} = \\vec{1}^T\\]\n\nFor $y = sum(\\mathbf{x}z)$ and $n = |x|$, we get:\n\n\\[\\nabla y_\\mathbf{x} = [z, z, \\ldots, z]\\]\n\\[\\nabla y_z = n \\]\n\nVector dot product $y = \\mathbf{f(w)} \\cdot \\mathbf{g(x)} = \\Sigma_i^n (w_i x_i) = sum(\\mathbf{w} \\otimes \\mathbf{x})$.  
Substituting $\\mathbf{u} = \\mathbf{w} \\otimes \\mathbf{x}$ and using the vector chain rule, we get:\n\n\\[\n\\begin{array}{lcl}\n\\frac{d \\mathbf{u}}{d\\mathbf{x}} = \\frac{d}{d\\mathbf{x}} (\\mathbf{w} \\otimes \\mathbf{x}) = diag(\\mathbf{w})\\\\\\\\\n\\frac{dy}{d\\mathbf{u}} = \\frac{d}{d\\mathbf{u}} sum(\\mathbf{u}) = \\vec{1}^T\\\\\\\\\n\\frac{dy}{d\\mathbf{x}} = \\frac{dy}{d\\mathbf{u}} \\times \\frac{d\\mathbf{u}}{d\\mathbf{x}} = \\vec{1}^T \\times diag(\\mathbf{w}) = \\mathbf{w}^T\n\\end{array}\n\\]\n\nSimilarly, $\\frac{dy}{d\\mathbf{w}} = \\mathbf{x}^T$.\n\n\n\\subsection{Chain rules}\n\nThe {\\em vector chain rule} is the general form as it degenerates to the others. When $f$ is a function of a single variable $x$ and all intermediate variables $u$ are functions of a single variable, the single-variable chain rule applies. When some or all of the intermediate variables are functions of multiple variables, the single-variable total-derivative chain rule applies. In all other cases, the vector chain rule applies.\n\n\\begin{tabular}{ccc}\n\\bf{Single-variable rule} & \\bf{Single-variable total-derivative rule} & \\bf{Vector  rule}\\\\\\\\\n\n$\\frac{df}{dx} = \\frac{df}{du}\\frac{du}{dx}$ &\n\n$\\frac{\\partial f(u_1,\\ldots,u_n)}{\\partial x} = \\frac{\\partial f}{\\partial \\mathbf{u}} \\frac{\\partial \\mathbf{u}}{\\partial x}$ &\n\n$\\frac{\\partial}{\\partial \\mathbf{x}} \\mathbf{f}(\\mathbf{g}(\\mathbf{x})) = \\frac{\\partial \\mathbf{f}}{\\partial \\mathbf{g}}\\frac{\\partial\\mathbf{g}}{\\partial \\mathbf{x}}$\n\\end{tabular}\n\n\n\\section{Resources}\n\nWhen looking for resources on the web, search for ``matrix calculus'' not ``vector calculus.''  Here are some comments on the top links that come up from a [Google search](https://www.google.com/search?q=matrix+calculus\\&oq=matrix+calculus):\n\nhttps://en.wikipedia.org/wiki/Matrix\\_calculus The Wikipedia entry is actually quite good and they have a good description of the different layout conventions. Recall that we use the numerator layout where the variables go horizontally and the functions go vertically in the Jacobian. Wikipedia also has a good description of [total derivatives](https://en.wikipedia.org/wiki/Total\\_derivative), but be careful that they use slightly different notation than we do. We always use the $\\partial x$ notation not $dx$.\n\nhttp://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html This page has a section on matrix differentiation with some useful identities; this person uses numerator layout. This might be a good place to start after reading this article to learn about matrix versus vector differentiation.\n\nhttps://www.colorado.edu/engineering/CAS/courses.d/IFEM.d/IFEM.AppC.d/IFEM.AppC.pdf This is part of the course notes for ``Introduction to Finite Element Methods'' I believe by [Carlos A. Felippa](https://www.colorado.edu/engineering/CAS/courses.d/IFEM.d).  His Jacobians are transposed from our notation because he uses denominator layout.\n\nhttp://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html  This page has a huge number of useful derivatives computed for a variety of vectors and matrices. A great cheat sheet. 
There is no discussion to speak of, just a set of rules.

https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf Another cheat sheet that focuses on matrix operations in general with more discussion than the previous item.

Further down in the list, you'll run into a useful set of slides:  https://www.comp.nus.edu.sg/~cs5240/lecture/matrix-differentiation.pdf

To learn more about neural networks and the mathematics behind optimization and back propagation, we highly recommend Michael Nielsen's book http://neuralnetworksanddeeplearning.com/chap1.html

We reference the law of [total derivative](https://en.wikipedia.org/wiki/Total\_derivative), which is an important concept that just means derivatives with respect to $x$ must take into consideration the derivative with respect to $x$ of all variables that are a function of $x$.

\end{document}
{"text": "\\subsection{Scheduling: Preemptive, Non-Size-Based}\n\\label{sec:Scheduling-Preemptive-Non-Size-Based}\n\nThe most common preemptive non-size-based scheduling policies are\n\n\\begin{description}\n\t\n\t\\item [Processor-Sharing (PS)] the server is time-shared among all jobs.\n\t\n\t\\item [Preemptive Last-Come First-Served (PLCFS)] a new arrival preempts the job in service; when the arrival is completed, the preempted job resumes.\n\t\n\t\\item [Generalized Foreground-Background (FB)\\footnote{also known as \\textit{Least-Attained-Service (LAS)}.}] the job with the lowest age gets served; jobs with the same lowest age time-share service.\n\t\n\\end{description}\n\n\n\n\n\\subsection{Scheduling: PS}\n\\label{sec:Scheduling-PS}\nPS is insensitive to job size variability, because when a job arrives it immediately time-shares with all other jobs.\n$M/G/1/PS$ is better in expectation than $M/G/1/FCFS$ when the squared coefficient of job size variation gets high (namely, $C_{G}^{2}>1$).\nPS is effective in CPU scheduling: it lets short jobs get out quickly, has slowdown lower than FCFS, and has high throughput in a  multi-resources context.\n\nIn $M/G/1/PS$, the following holds:\n\n\\begin{equation}\n\\label{eqn:PS-Mean-Response-Time}\n\\expected{T(x)}^{\\mathit{M/G/1/PS}}=\\frac{x}{1-\\varrho}\n\\end{equation}\n\nHence $M/G/1/PS$ is said to be fair, because every job has the same slowdown, namely:\n\n\\begin{equation}\n\\label{eqn:PS-Mean-Slowdown}\n\\expected{Slowdown}^{\\mathit{PS}}=\\frac{1}{1-\\varrho}\n\\end{equation}\n\nWe notice that $\\variance{T}^{\\mathit{PS}}$ cannot be expressed in closed form.\n\nRecall that $M/G/1/PS$ and $M/G/1/FCFS$ have the same mean system jobs\n\n\\begin{equation*}\n\\expected{N}=\\frac{\\varrho}{a-\\varrho}\n\\end{equation*}\n\nThere are many variants of PS, see \\cite{aalto2007beyond}.\n\n\n\n\n\\subsection{Scheduling: PLCFS}\n\\label{sec:Scheduling-PLCFS}\n\nIn $M/G/1/PLCFS$, the following holds:\n\n\\begin{equation}\n\\label{eqn:PLCFS-Mean-Response-Time}\n\\expected{T(x)}^{\\mathit{M/G/1/PLCFS}}=\\frac{x}{1-\\varrho}\n\\end{equation}\n\nHence $M/G/1/PLCFS$ is said to be fair, because every job has the same slowdown, namely:\n\n\\begin{equation}\n\\label{eqn:PLCFS-Mean-Slowdown}\n\\expected{Slowdown}^{\\mathit{PLCFS}}=\\frac{1}{1-\\varrho}\n\\end{equation}\n\nPLCFS has the same mean performances as PS, but with many fewer preemptions\\footnote{only 2 preemptions per job.}.\n\n\n\n\n\\subsection{Scheduling: FB}\n\\label{sec:Scheduling-FB}\n\nRecall that if job size distribution has decreasing failure rate, then the greater the job age, the greater its expected remaining service time.\nFB gives priority to jobs with low age, this giving preference to the expected shortest jobs.\n\nThe performance improvement of FB against PS depends on job size distribution: if the distribution is d.f.r. then it outperforms.\n\nIf the job size distribution is DFR, then younger jobs have lower remaining service times, hence $\\expected{T}^{\\mathit{FB}}<\\expected{T}^{\\mathit{PS}}$.\n\nIf the job size distribution is IFR, then younger jobs have higher remaining service times, hence $\\expected{T}^{\\mathit{FB}}>\\expected{T}^{\\mathit{PS}}$.\n\nIf the job size distribution is CFR, then the remaining service times are independent of jobs ages, hence $\\expected{T}^{\\mathit{FB}}=\\expected{T}^{\\mathit{PS}}<$. 
If the job size distribution is CFR, then $\expected{Slowdown}^{\mathit{FB}}<\expected{Slowdown}^{\mathit{PS}}$, because, even though expected remaining service times are independent of job ages, they are not independent of the original job sizes. Hence, FB gives slight preference to small jobs.

FB needs the job size distribution to be DFR to perform well. Recall that higher DFR is coupled with higher variability. Thus, FB does not work well under low variability.

\begin{equation}
\label{eqn:Scheduling-FB-Response-Time}
\expected{T(x)}^{\mathit{FB}}=
\frac{x(1-\varrho_{\overline{x}})+\frac{1}{2}\lambda\expected{S_{\overline{x}}^{2}}}
{\Big(1-\varrho_{\overline{x}}\Big)^{2}}
\end{equation}

where $\varrho_{\overline{x}}=\lambda\expected{S_{\overline{x}}}$ is the utilization under the transformed distribution.
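The formulas above translate directly into code (a minimal sketch in Python; the numeric values, including the transformed-distribution moments, are made-up placeholders rather than values derived in the text):

\begin{verbatim}
def ps_response_time(x, rho):
    """E[T(x)] under M/G/1/PS: x / (1 - rho)."""
    return x / (1 - rho)

def fb_response_time(x, lam, ES_x, ES2_x):
    """E[T(x)] under M/G/1/FB, given the transformed-distribution
    moments E[S_xbar] and E[S_xbar^2]."""
    rho_x = lam * ES_x  # utilization under the transformed distribution
    return (x * (1 - rho_x) + 0.5 * lam * ES2_x) / (1 - rho_x)**2

print(ps_response_time(2.0, rho=0.5))        # 4.0
print(fb_response_time(2.0, 0.5, 0.9, 1.4))  # ~4.79
\end{verbatim}
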
{"text": "\\chapter{Not implemented}\n\\label{chap:Not implemented}\n\nIn this chapter we maintain a list of\nfeatures we have chosen not to implement at\nthe moment, but may add when libzahl matures,\nbut to a separate library that could be made\nto support multiple bignum libraries. Functions\nlisted herein will only be implemented if it\nis shown that it would be overwhelmingly\nadvantageous. For each feature, a sample\nimplementation or a mathematical expression\non which you can base your implementation is\nincluded. The sample implementations create\ntemporary integer references to simplify the\nexamples. You should try to use dedicated\nvariables; in case of recursion, a robust\nprogram should store temporary variables on\na stack, so they can be cleaned up if\nsomething happens.\n\nResearch problems, like prime factorisation\nand discrete logarithms, do not fit in the\nscope of bignum libraries % Unless they are extraordinarily bloated with vague mission-scope, like systemd.\nand therefore do not fit into libzahl,\nand will not be included in this chapter.\nOperators and functions that grow so\nridiculously fast that a tiny lookup table\ncan be constructed to cover all practical\ninput will also not be included in this\nchapter, nor in libzahl.\n\n\\vspace{1cm}\n\\minitoc\n\n\n\\newpage\n\\section{Extended greatest common divisor}\n\\label{sec:Extended greatest common divisor}\n\n\\begin{alltt}\nvoid\nextgcd(z_t b\u00e9zout_coeff_1, z_t b\u00e9zout_coeff_2, z_t gcd\n       z_t quotient_1, z_t quotient_2, z_t a, z_t b)\n\\{\n#define old_r gcd\n#define old_s b\u00e9zout_coeff_1\n#define old_t b\u00e9zout_coeff_2\n#define s quotient_2\n#define t quotient_1\n    z_t r, q, qs, qt;\n    int odd = 0;\n    zinit(r), zinit(q), zinit(qs), zinit(qt);\n    zset(r, b), zset(old_r, a);\n    zseti(s, 0), zseti(old_s, 1);\n    zseti(t, 1), zseti(old_t, 0);\n    while (!zzero(r)) \\{\n        odd ^= 1;\n        zdivmod(q, old_r, old_r, r), zswap(old_r, r);\n        zmul(qs, q, s), zsub(old_s, old_s, qs);\n        zmul(qt, q, t), zsub(old_t, old_t, qt);\n        zswap(old_s, s), zswap(old_t, t);\n    \\}\n    odd ? abs(s, s) : abs(t, t);\n    zfree(r), zfree(q), zfree(qs), zfree(qt);\n\\}\n\\end{alltt}\n\nPerhaps you are asking yourself ``wait a minute,\ndoesn't the extended Euclidean algorithm only\nhave three outputs if you include the greatest\ncommon divisor, what is this shenanigans?''\nNo\\footnote{Well, technically yes, but it calculates\ntwo values for free in the same ways as division\ncalculates the remainder for free.}, it has five\noutputs, most implementations just ignore two of\nthem. If this confuses you, or you want to know\nmore about this, I refer you to Wikipeida.\n\n\n\\newpage\n\\section{Least common multiple}\n\\label{sec:Least common multiple}\n\n\\( \\displaystyle{\n    \\mbox{lcm}(a, b) = \\frac{\\lvert a \\cdot b \\rvert}{\\mbox{gcd}(a, b)}\n}\\)\n\\vspace{1em}\n\n$\\mbox{lcm}(a, b)$ is undefined when $a$ or\n$b$ is zero, because division by zero is\nundefined. 
Note however that $\mbox{gcd}(a, b)$
is only zero when both $a$ and $b$ are zero.

\newpage
\section{Modular multiplicative inverse}
\label{sec:Modular multiplicative inverse}

\begin{alltt}
int
modinv(z_t inv, z_t a, z_t m)
\{
    z_t x, _1, _2, _3, gcd, mabs, apos;
    int invertible, aneg = zsignum(a) < 0;
    zinit(x), zinit(_1), zinit(_2), zinit(_3), zinit(gcd);
    *mabs = *m;
    zabs(mabs, mabs);
    if (aneg) \{
        zinit(apos);
        zset(apos, a);
        if (zcmpmag(apos, mabs))
            zmod(apos, apos, mabs);
        zadd(apos, apos, mabs);
    \}
    extgcd(inv, _1, gcd, _2, _3, aneg ? apos : a, mabs);
    if ((invertible = !zcmpi(gcd, 1))) \{
        zset(x, inv);
        if (zsignum(x) < 0)
            (zsignum(m) < 0 ? zsub : zadd)(x, x, m);
        zswap(x, inv);
    \}
    if (aneg)
        zfree(apos);
    zfree(x), zfree(_1), zfree(_2), zfree(_3), zfree(gcd);
    return invertible;
\}
\end{alltt}


\newpage
\section{Random prime number generation}
\label{sec:Random prime number generation}

TODO


\newpage
\section{Symbols}
\label{sec:Symbols}

\subsection{Legendre symbol}
\label{sec:Legendre symbol}

\( \displaystyle{
  \left ( \frac{a}{p} \right ) \equiv a^{\frac{p - 1}{2}} ~(\text{Mod}~p),~
  \left ( \frac{a}{p} \right ) \in \{-1,~0,~1\},~
  p \in \textbf{P},~ p > 2
}\)
\vspace{1em}

\noindent
That is, unless $\displaystyle{a^{\frac{p - 1}{2}} \mod p \le 1}$,
$\displaystyle{a^{\frac{p - 1}{2}} \mod p = p - 1}$, so
$\displaystyle{\left ( \frac{a}{p} \right ) = -1}$.

It should be noted that
\( \displaystyle{
  \left ( \frac{a}{p} \right ) = 
  \left ( \frac{a ~\text{Mod}~ p}{p} \right ),
}\)
so a compressed lookup table can be used for small $p$.


\subsection{Jacobi symbol}
\label{sec:Jacobi symbol}

\( \displaystyle{
  \left ( \frac{a}{n} \right ) = 
  \prod_k \left ( \frac{a}{p_k} \right )^{n_k},
}\)
where $\displaystyle{n = \prod_k p_k^{n_k} > 0}$,
and $p_k \in \textbf{P}$.
\vspace{1em}

Like the Legendre symbol, the Jacobi symbol is $n$-periodic over $a$.
If $n$ is prime, the Jacobi symbol is the Legendre symbol, but
the Jacobi symbol is defined for all odd numbers $n$, whereas the
Legendre symbol is defined only for odd primes $n$.

Use the following algorithm to calculate the Jacobi symbol:

\vspace{1em}
\hspace{-2.8ex}
\begin{minipage}{\linewidth}
\begin{algorithmic}
    \STATE $a \gets a \mod n$
    \STATE $r \gets \mbox{lsb}~ a$
    \STATE $a \gets a \cdot 2^{-r}$
    \STATE \(\displaystyle{
      r \gets \left \lbrace \begin{array}{rl}
        1 &
          \text{if}~ n \equiv 1, 7 ~(\text{Mod}~ 8)
          ~\text{or}~ r \equiv 0 ~(\text{Mod}~ 2) \\
        -1 & \text{otherwise} \\
      \end{array} \right .
    }\)
    \IF{$a = 1$}
        \RETURN $r$
    \ELSIF{$\gcd(a, n) \neq 1$}
        \RETURN $0$
    \ENDIF
    \STATE $(a, n) = (n, a)$
    \STATE \textbf{start over}
\end{algorithmic}
\end{minipage}


\subsection{Kronecker symbol}
\label{sec:Kronecker symbol}

The Kronecker symbol
$\displaystyle{\left ( \frac{a}{n} \right )}$
is a generalisation of the Jacobi symbol,
where $n$ can be any integer. For positive
odd $n$, the Kronecker symbol is equal to
the Jacobi symbol.
For even $n$, the
Kronecker symbol is $2n$-periodic over $a$,
and it is zero for all
$(a, n)$ where both $a$ and $n$ are even.

\vspace{1em}
\noindent
\( \displaystyle{
    \left ( \frac{a}{2^k \cdot n} \right ) =
    \left ( \frac{a}{n} \right ) \cdot \left ( \frac{a}{2} \right )^k,
}\)
where
\( \displaystyle{
    \left ( \frac{a}{2} \right ) =
    \left \lbrace \begin{array}{rl}
        1  & \text{if}~ a \equiv 1, 7 ~(\text{Mod}~ 8) \\
        -1 & \text{if}~ a \equiv 3, 5 ~(\text{Mod}~ 8) \\
        0  & \text{otherwise}
    \end{array} \right .
}\)

\vspace{1em}
\noindent
\( \displaystyle{
    \left ( \frac{-a}{n} \right ) =
    \left ( \frac{a}{n} \right ) \cdot \left ( \frac{a}{-1} \right ),
}\)
where
\( \displaystyle{
    \left ( \frac{a}{-1} \right ) =
    \left \lbrace \begin{array}{rl}
        1  & \text{if}~ a \ge 0 \\
        -1 & \text{if}~ a < 0
    \end{array} \right .
}\)
\vspace{1em}

\noindent
However, for $n = 0$, the symbol is defined as

\vspace{1em}
\noindent
\( \displaystyle{
    \left ( \frac{a}{0} \right ) =
    \left \lbrace \begin{array}{rl}
        1 & \text{if}~ a = \pm 1 \\
        0 & \text{otherwise.}
    \end{array} \right .
}\)


\subsection{Power residue symbol}
\label{sec:Power residue symbol}

TODO


\subsection{Pochhammer \emph{k}-symbol}
\label{sec:Pochhammer k-symbol}

\( \displaystyle{
    (x)_{n,k} = \prod_{i = 1}^n (x + (i - 1)k)
}\)


\newpage
\section{Logarithm}
\label{sec:Logarithm}

TODO


\newpage
\section{Roots}
\label{sec:Roots}

TODO


\newpage
\section{Modular roots}
\label{sec:Modular roots}

TODO % Square: Cipolla's algorithm, Pocklington's algorithm, Tonelli–Shanks algorithm


\newpage
\section{Combinatorial}
\label{sec:Combinatorial}

\subsection{Factorial}
\label{sec:Factorial}

\( \displaystyle{
    n! = \left \lbrace \begin{array}{ll}
        \displaystyle{\prod_{i = 1}^n i} & \textrm{if}~ n \ge 0 \\
        \textrm{undefined} & \textrm{otherwise}
    \end{array} \right .
}\)
\vspace{1em}

This can be implemented much more efficiently
than using the naïve method, and is a very
important function for many combinatorial
applications, therefore it may be implemented
in the future if the demand is high enough.

An efficient, yet not optimal, implementation
of factorials that about halves the number of
required multiplications compared to the naïve
method can be derived from the observation
\vspace{1em}
\( \displaystyle{
    n! = n!! ~ \lfloor n / 2 \rfloor! ~ 2^{\lfloor n / 2 \rfloor}
}\), $n$ odd.
\vspace{1em}

\noindent
The resulting algorithm can be expressed as

\begin{alltt}
   void
   fact(z_t r, uint64_t n)
   \{
       z_t p, f, two;
       uint64_t *ns, s = 1, i = 1;
       zinit(p), zinit(f), zinit(two);
       zseti(r, 1), zseti(p, 1), zsetu(f, n), zseti(two, 2);
       ns = alloca(zbits(f) * sizeof(*ns));
       while (n > 1) \{
           if (n & 1) \{
               ns[i++] = n;
               s += n >>= 1;
           \} else \{
               zmul(r, r, (zsetu(f, n), f));
               n -= 1;
           \}
       \}
       for (zseti(f, 1); i-- > 0; zmul(r, r, p))
           for (n = ns[i]; zcmpu(f, n); zadd(f, f, two))
               zmul(p, p, f);
       zlsh(r, r, s);
       zfree(two), zfree(f), zfree(p);
   \}
\end{alltt}


\subsection{Subfactorial}
\label{sec:Subfactorial}

\( \displaystyle{
    !n = \left \lbrace \begin{array}{ll}
      n(!(n - 1)) + (-1)^n & \textrm{if}~ n > 0 \\
      1 & \textrm{if}~ n = 0 \\
      \textrm{undefined} & \textrm{otherwise}
    \end{array} \right . =
    n! \sum_{i = 0}^n \frac{(-1)^i}{i!}
}\)


\subsection{Alternating factorial}
\label{sec:Alternating factorial}

\( \displaystyle{
    \mbox{af}(n) = \sum_{i = 1}^n {(-1)^{n - i} i!}
}\)


\subsection{Multifactorial}
\label{sec:Multifactorial}

\( \displaystyle{
    n!^{(k)} = \left \lbrace \begin{array}{ll}
      1 & \textrm{if}~ n = 0 \\
      n & \textrm{if}~ 0 < n \le k \\
      n((n - k)!^{(k)}) & \textrm{if}~ n > k \\
      \textrm{undefined} & \textrm{otherwise}
    \end{array} \right .
}\)


\subsection{Quadruple factorial}
\label{sec:Quadruple factorial}

\( \displaystyle{
    (4n - 2)!^{(4)}
}\)


\subsection{Superfactorial}
\label{sec:Superfactorial}

\( \displaystyle{
    \mbox{sf}(n) = \prod_{k = 1}^n k^{1 + n - k}
}\), undefined for $n < 0$.


\subsection{Hyperfactorial}
\label{sec:Hyperfactorial}

\( \displaystyle{
    H(n) = \prod_{k = 1}^n k^k
}\), undefined for $n < 0$.


\subsection{Raising factorial}
\label{sec:Raising factorial}

\( \displaystyle{
    x^{(n)} = \frac{(x + n - 1)!}{(x - 1)!}
}\), undefined for $n < 0$.


\subsection{Falling factorial}
\label{sec:Falling factorial}

\( \displaystyle{
    (x)_n = \frac{x!}{(x - n)!}
}\), undefined for $n < 0$.


\subsection{Primorial}
\label{sec:Primorial}

\( \displaystyle{
    n\# = \prod_{\lbrace i \in \textbf{P} ~:~ i \le n \rbrace} i
}\)
\vspace{1em}

\noindent
\( \displaystyle{
    p_n\# = \prod_{i \in \textbf{P}_{\pi(n)}} i
}\)


\subsection{Gamma function}
\label{sec:Gamma function}

$\Gamma(n) = (n - 1)!$, undefined for $n \le 0$.


\subsection{K-function}
\label{sec:K-function}

\( \displaystyle{
    K(n) = \left \lbrace \begin{array}{ll}
      \displaystyle{\prod_{i = 1}^{n - 1} i^i}  & \textrm{if}~ n \ge 0 \\
      1 & \textrm{if}~ n = -1 \\
      0 & \textrm{otherwise (result is truncated)}
    \end{array} \right .
}\)


\subsection{Binomial coefficient}
\label{sec:Binomial coefficient}

\( \displaystyle{
    \binom{n}{k} = \frac{n!}{k!(n - k)!}
    = \frac{1}{(n - k)!} \prod_{i = k + 1}^n i
    = \frac{1}{k!} \prod_{i = n - k + 1}^n i
}\)


\subsection{Catalan number}
\label{sec:Catalan number}
\( \displaystyle{
    C_n = \left . \binom{2n}{n} \middle / (n + 1) \right .
}\)


\subsection{Fuss–Catalan number}
\label{sec:Fuss-Catalan number} % not en dash

\( \displaystyle{
    A_m(p, r) = \frac{r}{mp + r} \binom{mp + r}{m}
}\)


\newpage
\section{Fibonacci numbers}
\label{sec:Fibonacci numbers}

Fibonacci numbers can be computed efficiently
using the following algorithm:

\begin{alltt}
   static void
   fib_ll(z_t f, z_t g, z_t n)
   \{
       z_t a, k;
       int odd;
       if (zcmpi(n, 1) <= 0) \{
           zseti(f, !zzero(n));
           zseti(g, zzero(n));
           return;
       \}
       zinit(a), zinit(k);
       zrsh(k, n, 1);
       if (zodd(n)) \{
           odd = zodd(k);
           fib_ll(a, g, k);
           zadd(f, a, a);
           zadd(k, f, g);
           zsub(f, f, g);
           zmul(f, f, k);
           zseti(k, odd ? -2 : +2);
           zadd(f, f, k);
           zadd(g, g, g);
           zadd(g, g, a);
           zmul(g, g, a);
       \} else \{
           fib_ll(g, a, k);
           zadd(f, a, a);
           zadd(f, f, g);
           zmul(f, f, g);
           zsqr(a, a);
           zsqr(g, g);
           zadd(g, g, a);
       \}
       zfree(k), zfree(a);
   \}
\end{alltt}

\newpage
\begin{alltt}
   void
   fib(z_t f, z_t n)
   \{
       z_t tmp, k;
       zinit(tmp), zinit(k);
       zset(k, n);
       fib_ll(f, tmp, k);
       zfree(k), zfree(tmp);
   \}
\end{alltt}

\noindent
This algorithm is based on the rules

\vspace{1em}
\( \displaystyle{
    F_{2k + 1} = 4F_k^2 - F_{k - 1}^2 + 2(-1)^k = (2F_k + F_{k-1})(2F_k - F_{k-1}) + 2(-1)^k
}\)
\vspace{1em}

\( \displaystyle{
    F_{2k} = F_k \cdot (F_k + 2F_{k - 1})
}\)
\vspace{1em}

\( \displaystyle{
    F_{2k - 1} = F_k^2 + F_{k - 1}^2
}\)
\vspace{1em}

\noindent
Each call to {\tt fib\_ll} returns $F_n$ and $F_{n - 1}$
for any input $n$. $F_{k}$ is only correctly returned
for $k \ge 0$. $F_n$ and $F_{n - 1}$ are used for
calculating $F_{2n}$ or $F_{2n + 1}$. The algorithm
can be sped up with a larger lookup table than one
covering just the base cases. Alternatively, a naïve
calculation could be used for sufficiently small input.


\newpage
\section{Lucas numbers}
\label{sec:Lucas numbers}

Lucas [lyk\textscripta] numbers can be calculated by utilising
{\tt fib\_ll} from \secref{sec:Fibonacci numbers}:

\begin{alltt}
   void
   lucas(z_t l, z_t n)
   \{
       z_t k;
       int odd;
       if (zcmpi(n, 1) <= 0) \{
           zseti(l, 1 + zzero(n));
           return;
       \}
       zinit(k);
       zrsh(k, n, 1);
       if (zeven(n)) \{
           lucas(l, k);
           zsqr(l, l);
           zseti(k, zodd(k) ? +2 : -2);
           zadd(l, l, k);
       \} else \{
           odd = zodd(k);
           fib_ll(l, k, k);
           zadd(l, l, l);
           zadd(l, l, k);
           zmul(l, l, k);
           zseti(k, 5);
           zmul(l, l, k);
           zseti(k, odd ? +4 : -4);
           zadd(l, l, k);
       \}
       zfree(k);
   \}
\end{alltt}

\noindent
This algorithm is based on the rules

\vspace{1em}
\( \displaystyle{
    L_{2k} = L_k^2 - 2(-1)^k
}\)
\vspace{1ex}

\( \displaystyle{
    L_{2k + 1} = 5F_{k - 1} \cdot (2F_k + F_{k - 1}) - 4(-1)^k
}\)
\vspace{1em}

\noindent
Alternatively, the function can be implemented
trivially using the rule

\vspace{1em}
\( \displaystyle{
    L_k = F_k + 2F_{k - 1}
}\)


\newpage
\section{Bit operation}
\label{sec:Bit operation unimplemented}

\subsection{Bit scanning}
\label{sec:Bit scanning}

Scanning for the next set or unset bit can be
trivially implemented using {\tt zbtest}. A
more efficient, although not optimally efficient,
implementation would be

\begin{alltt}
   size_t
   bscan(z_t a, size_t whence, int direction, int value)
   \{
       size_t ret;
       z_t t;
       zinit(t);
       value ? zset(t, a) : znot(t, a);
       ret = direction < 0
           ? (ztrunc(t, t, whence + 1), zbits(t) - 1)
           : (zrsh(t, t, whence), zlsb(t) + whence);
       zfree(t);
       return ret;
   \}
\end{alltt}


\subsection{Population count}
\label{sec:Population count}

The following function can be used to compute
the population count, the number of set bits,
in an integer, counting the sign bit:

\begin{alltt}
   size_t
   popcount(z_t a)
   \{
       size_t i, ret = zsignum(a) < 0;
       for (i = 0; i < a->used; i++) \{
           ret += __builtin_popcountll(a->chars[i]);
       \}
       return ret;
   \}
\end{alltt}

\noindent
It requires a compiler extension; if it's not
available, there are other ways to compute the
population count for a word: manually bit-by-bit,
or with a fully unrolled version of

\begin{alltt}
   uint64_t m;
   int s;
   for (s = 1; s < 64; s <<= 1) \{
       m = ~0ULL / ((1ULL << s) + 1);
       w = (w & m) + ((w >> s) & m);
   \}
\end{alltt}


\subsection{Hamming distance}
\label{sec:Hamming distance}

A simple way to compute the Hamming distance,
the number of differing bits between two
numbers, is with the function

\begin{alltt}
   size_t
   hammdist(z_t a, z_t b)
   \{
       size_t ret;
       z_t t;
       zinit(t);
       zxor(t, a, b);
       ret = popcount(t);
       zfree(t);
       return ret;
   \}
\end{alltt}

\noindent
The performance of this function could
be improved by comparing character by
character manually using {\tt zxor}.


\newpage
\section{Miscellaneous}
\label{sec:Miscellaneous}


\subsection{Character retrieval}
\label{sec:Character retrieval}

\begin{alltt}
uint64_t
getu(z_t a)
\{
    return zzero(a) ? 0 : a->chars[0];
\}
\end{alltt}

\subsection{Fit test}
\label{sec:Fit test}

Some libraries have functions for testing
whether a big integer is small enough to
fit into an intrinsic type. Since libzahl
does not provide conversion to intrinsic
types this is irrelevant. But additionally,
it can be implemented with a single
one-line macro that does not have any
side-effects.

\begin{alltt}
   #define fits_in(a, type)  (zbits(a) <= 8 * sizeof(type))
   \textcolor{c}{/* \textrm{Just be sure the type is integral.} */}
\end{alltt}


\subsection{Reference duplication}
\label{sec:Reference duplication}

This could be useful for creating duplicates
with modified sign, but only if neither
{\tt r} nor {\tt a} will be modified whilst
both are in use.
Because it is unsafe,
fairly simple to implement with acceptable
performance ({\tt *r = *a}),
and probably seldom useful, this has not
been implemented.

\begin{alltt}
   void
   refdup(z_t r, z_t a)
   \{
       \textcolor{c}{/* \textrm{Almost fully optimised, but perfectly portable} *r = *a; */}
       r->sign    = a->sign;
       r->used    = a->used;
       r->alloced = a->alloced;
       r->chars   = a->chars;
   \}
\end{alltt}


\subsection{Variadic initialisation}
\label{sec:Variadic initialisation}

Most bignum libraries have variadic functions
for initialisation and uninitialisation. This
is not available in libzahl, because it is
not useful enough and has a performance overhead.
And what's next, support {\tt va\_list},
variadic addition, variadic multiplication,
power towers, set manipulation? Anyone can
implement a variadic wrapper for {\tt zinit} and
{\tt zfree} if they really need it. But if
you want to avoid the overhead, you can use
something like this:

\begin{alltt}
   /* \textrm{Call like this:} MANY(zinit, (a), (b), (c)) */
   #define MANY(f, ...)  (_MANY1(f, __VA_ARGS__,,,,,,,,,))
   
   #define _MANY1(f, a, ...)  (void)f a, _MANY2(f, __VA_ARGS__)
   #define _MANY2(f, a, ...)  (void)f a, _MANY3(f, __VA_ARGS__)
   #define _MANY3(f, a, ...)  (void)f a, _MANY4(f, __VA_ARGS__)
   #define _MANY4(f, a, ...)  (void)f a, _MANY5(f, __VA_ARGS__)
   #define _MANY5(f, a, ...)  (void)f a, _MANY6(f, __VA_ARGS__)
   #define _MANY6(f, a, ...)  (void)f a, _MANY7(f, __VA_ARGS__)
   #define _MANY7(f, a, ...)  (void)f a, _MANY8(f, __VA_ARGS__)
   #define _MANY8(f, a, ...)  (void)f a, _MANY9(f, __VA_ARGS__)
   #define _MANY9(f, a, ...)  (void)f a
\end{alltt}
{"text": "\\input{preamble}\n\\usepackage{wrapfig}\n\\usepackage[makeroom]{cancel}\n\n\\title{The Quantum Spring}\n\\author{\\small{Apurva Nakade}}\n\\date{}\n\\begin{document}\n\\maketitle\n\\vspace{-2em}\n\\epigraph{Not only is the universe stranger than we imagine, it is stranger than we can imagine.}{J.B.S. Haldane}\n\\vspace{-2em}\n\\tableofcontents\n\n\\section{The Mass-Spring System}\n\\subsection{The Classical Spring}\nA mass-spring system (also called a harmonic oscillator) consists of a point mass $m$ attached to a spring with spring constant $k$. We'll assume that there is no external force or friction. In Newtonian mechanics the laws of motion are given by the equations,\n\\begin{align*}\n\tF &= m \\dfrac{d^2 x }{dt^2}\\\\\n\tF &= -kx\n\\end{align*}\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[height=1.5cm]{mass-spring}\n\t\\end{wrapfigure}\nCombining these we get $\\frac {d^2 x }{dt^2} = -\\frac{k}{m}x$ which has the solutions\n\\begin{align}\n\t\\label{eq:classical_solutions}\n\tx = A \\sin (\\omega t) + B \\cos(\\omega t)\n\\end{align}\nwhere $\\omega^2=k/m$ and $A, B$ are constants which depend on the initial conditions. The constant $\\omega$ is the frequency of oscillation. We can rewrite this as a conservation of energy equation\n\\begin{align}\n\t\\label{eq:energy_mass_spring}\n\t\\dfrac{1}{2} \\left(\\dfrac{p^2}{m} + k x^2\\right) = E\n\\end{align}\nwhere $p = m \\frac{dx}{dt}$ is the momentum and $E$ is total energy of the system which is constant.\n\n\\begin{ques}\n\tShow that the solutions \\eqref{eq:classical_solutions} satisfy the energy conservation equation \\eqref{eq:energy_mass_spring}.\n\\end{ques}\n\n\\subsection{Quantizing a Classical System}\nOn the microscopic level, this system is seen in a diatomic molecule like $H_2$. If we look at the system from the frame of reference of the center of mass then up to a first approximation the atom behaves like a mass-spring system (plus some rotational motion). One can measure the energy of this mass-spring system and it turns out that the energy is quantized, more specifically the energy of this mass-spring system only takes values of the form\n\\begin{align*}\n\tc(n + 1/2)\n\\end{align*}\nwhere $c$ is a positive real constant that depends on the molecule and $n$ is a non-negative integer!\n\nThe Newtonian model on the other hand predicts that we should see a continuous spectrum of possible energy values. As the model does not agree with experiments we need to discard this model and come up with a new model i.e. quantum mechanics. The current model for a quantum mechanical analogue of a classical mechanical system is the following:\\\\\n\nThe entire information about the state of a system is contained in a \\textbf{wavefunction} which is simply a time-dependent complex valued function\n\\begin{align*}\n\t\\psi (x, t)\n\\end{align*}\nFurther, we think of these wavefunctions as living in the $\\C$-vector space of all complex valued functions.\\footnote{As it is stated this is false. We need to impose several analytic conditions on the space of functions for this to make sense. The usual candidate is the $L^2$-space of functions.}\\\\\n\nIn this model, the \\emph{observable} quantities like position, momentum, angular momentum, energy etc. are \\textbf{linear operators} on this vector space. 
We'll denote the linear operators as $\hat{-}$:
\begin{align*}
	 \hat{x} &= \mbox{position operator}\\
 \hat{p} &= \mbox{momentum operator}  \\
\hat{E} &= \mbox{energy operator}
\end{align*}

To find equations of motion we replace the observable quantities with the corresponding operators in the energy conservation equation. For the mass-spring system this is Equation \ref{eq:energy_mass_spring}, which gives us the \textbf{Schrodinger Equation}

	\begin{align}
		\label{eq:Schrodinger}
		\dfrac{1}{2} \left(\dfrac{\hat{p} \circ \hat{p}}{m} + k \hat{x} \circ \hat{x}\right)\psi = \hat{E} \psi
	\end{align}
where we're solving for $\psi$ (which is a function of $x, t$).

Finally, we need to find the \emph{right} linear operators; this is again a choice we have to make in our model. For the time independent or steady state\footnote{This is saying that no external force is being applied to the system and the system has stabilized into an equilibrium state. For example, standing waves are examples of steady states of the classical waves-on-a-string system.} mass-spring system these are
\begin{align*}
	\hat{x} &= x \cdot (-) \quad(\mbox{ multiplication by } x) \\
	\hat{p} &= -i \hbar \dfrac{d}{d x} \\
		\hat{E} &= \lambda \cdot (-) \quad(\mbox{ multiplication by a fixed positive real number})
\end{align*}
where $i = \sqrt{-1}$ and $\hbar$ is a physical constant called the \textbf{Planck's constant}.\\

\noindent \textbf{Aside: } This is only half the story. Here we're only talking about finding a mathematical model, in particular, a differential equation, for describing the quantum world. There still needs to be an interpretation of this model in terms of physically observable quantities to connect the model to experiments. This in itself is a long and beautiful story which you should explore by yourself.\\

Being mathematicians we'll assume that $m = k = \hbar = 1$ so that the time-independent Schrodinger equation \eqref{eq:Schrodinger} for a mass-spring system becomes
\begin{align*}
	\dfrac{1}{2} \left({\left(-i \dfrac{d}{d x}\right) \circ \left(-i \dfrac{d}{d x}\right)} + x^2\right)\psi = \lambda \psi \\
	 \dfrac{1}{2} \left(-\dfrac{d^2 \psi}{d x^2} + {x}^2 \psi \right) = \lambda \psi
\end{align*}
Define the \textbf{Schrodinger operator} to be
\begin{align*}
		\hat{H} := \dfrac{1}{2}	\left(- \dfrac{d^2}{d x^2} + {\hat{x}}^2 \right)
\end{align*}
so that we can rewrite the Schrodinger equation simply as
\begin{align*}
	\hat{H} \psi = \lambda \psi
\end{align*}
But notice that the solutions to this are simply the eigenvectors of $\hat{H}$!!!
\begin{prop}
		The solutions to the time-independent Schrodinger equation are precisely the eigenvalues and eigenvectors of the Schrodinger operator.
\end{prop}







\section{Linear Algebra}

\subsection{The Schrodinger Operator}
Thus we've reduced a physics problem to a linear algebra one (yay!). We'll now focus entirely on the Schrodinger operator.
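Before we start manipulating $\hat{H}$, let us verify one eigenvector by hand: take $\psi_0(x) = e^{-x^2/2}$. Then
\begin{align*}
	\dfrac{d \psi_0}{dx} = -x e^{-x^2/2}
	\qquad \mbox{and} \qquad
	\dfrac{d^2 \psi_0}{dx^2} = (x^2 - 1) e^{-x^2/2}
\end{align*}
so that
\begin{align*}
	\hat{H} \psi_0 = \dfrac{1}{2}\left(-(x^2 - 1) + x^2\right) e^{-x^2/2} = \dfrac{1}{2} \psi_0
\end{align*}
i.e. $\psi_0$ is an eigenvector of $\hat{H}$ with eigenvalue $\lambda = 1/2$.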
\n\n\\section{Linear Algebra}\n\n\\subsection{The Schrodinger Operator}\nThus we've reduced a physics problem to a linear algebra one (yay!). We'll now focus entirely on the Schrodinger operator. The operator $2\\hat{H}$ should not be very far from the operator $(-d/dx + x)(d/dx + x)$, more precisely\n\\begin{align*}\n\t&\\left(-\\dfrac{d}{dx} + \\hat{x}\\right) \\circ \\left(\\dfrac{d}{dx} + \\hat{x} \\right)\\psi \\\\\n\t&=\\left(-\\dfrac{d}{dx} + \\hat{x}\\right) \\circ \\left(\\dfrac{d \\psi}{dx} + x \\psi \\right) \\\\\n\t&= -\\dfrac{d}{dx}\\left( \\dfrac{d \\psi}{dx} + x \\psi \\right) + \\hat{x} \\left( \\dfrac{d \\psi}{dx} + x \\psi \\right) \\\\\n\t&= -\\dfrac{d^2 \\psi}{dx^2} -\\dfrac{d}{dx} \\left(x \\psi \\right) + x\\dfrac{d \\psi}{dx} + x^2 \\psi \\\\\n\t&= -\\dfrac{d^2 \\psi}{dx^2} -\\psi - \\cancel{x\\dfrac{d\\psi}{dx}} + \\cancel{x\\dfrac{d \\psi}{dx}} + x^2 \\psi \\\\\n\t&= 2\\hat{H} \\psi - \\psi\n\\end{align*}\n\n\\begin{definition}\n\tDefine the \\textbf{creation} operator $\\hat{C}$ and \\textbf{annihilation} operator $\\hat{A}$ as\n\t\\begin{align*}\n\t\t\\hat C &:= \\dfrac{1}{\\sqrt{2}}\\left(-\\dfrac{d}{dx} + \\hat{x}\\right)\n\t\t=\\dfrac{1}{\\sqrt{2}}\\left(-i \\hat{p} + \\hat{x}\\right) \\\\\n\t\t\t\\hat A &:= \\dfrac{1}{\\sqrt{2}}\\left(\\dfrac{d}{dx} + \\hat{x}\\right)\n\t\t=\\dfrac{1}{\\sqrt{2}}\\left(i \\hat{p} + \\hat{x}\\right)\n\t\\end{align*}\n\\end{definition}\n\\noindent By the above derivation we have\n\\begin{align*}\n\t\\hat{H} &= \\hat{C} \\hat{A} + \\frac{1}{2}\n\\end{align*}\nThe operators $\\hat{H}, \\hat{C}, \\hat{A}$ are the generating elements of an algebra called the \\textbf{Weyl algebra}, which we'll now analyze.
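\nThis operator identity is also easy to verify symbolically (a sketch assuming \\texttt{sympy}):\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\nf = sp.Function('f')(x)\n\nA = lambda g: (sp.diff(g, x) + x * g) / sp.sqrt(2)    # annihilation\nC = lambda g: (-sp.diff(g, x) + x * g) / sp.sqrt(2)   # creation\nH = lambda g: (-sp.diff(g, x, 2) + x**2 * g) / 2      # Schrodinger operator\n\n# H = C A + 1/2 as operators: the difference vanishes identically\nprint(sp.simplify(C(A(f)) + f / 2 - H(f)))            # prints 0\n\\end{verbatim}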
\n\n\\subsection{Lie Brackets}\nIf two operators $\\hat{M}$ and $\\hat{N}$ commute and if $\\psi$ is an eigenvector of $\\hat{M}$ with eigenvalue $\\lambda$ then\n\\begin{align*}\n\t\\hat{M} \\hat{N} \\psi\n\t&= \\hat{N} \\hat{M} \\psi \\\\\n\t&= \\lambda \\hat{N} \\psi\n\\end{align*}\n\\begin{prop}\n\tIf $\\hat{M}$ and $\\hat{N}$ commute and $\\psi$ is an eigenvector of $\\hat{M}$ then so is $\\hat{N} \\psi$ with the same eigenvalue.\n\\end{prop}\n\nWhat happens if they don't commute? The notion of Lie (pronounced \\emph{lee}) brackets allows us to analyze this systematically.\nLie brackets measure the failure of commutativity of operators.\n\\begin{definition}\n\tFor linear operators $\\hat{M}, \\hat{N}$ on a $\\C$-vector space, the \\textbf{Lie bracket} is defined as\n\t\\begin{align*}\n\t\t[\\hat{M}, \\hat{N}] := \\hat{M} \\circ \\hat{N} - \\hat{N} \\circ \\hat{M}\n\t\\end{align*}\n\\end{definition}\n\n\\begin{ques}\n\tFor linear operators $\\hat{M}, \\hat{N}, \\hat{P}$ prove the following identities.\n\t\\begin{enumerate}\n\t\t\\item $[\\hat{M}, \\hat{M}] = 0$\n\t\t\\item $[\\hat{M}, \\hat{N}] = -[\\hat{N}, \\hat{M}]$\n\t\t\\item $[\\hat{M}, c \\hat{N} + \\hat{P}] = c[\\hat{M}, \\hat{N}] + [\\hat{M}, \\hat{P}]\\quad $ where $c$ is a complex number\n\t\t\\item $[\\hat{M}, [\\hat{N}, \\hat{P}]] + [\\hat{N}, [\\hat{P}, \\hat{M}]] + [\\hat{P}, [\\hat{M}, \\hat{N}]] = 0$\n\t\\end{enumerate}\n\tThe last identity is called the \\textbf{Jacobi identity}.\n\\end{ques}\n\nSuppose $\\psi$ is an eigenvector of a linear operator $\\hat{M}$ and let $\\hat{N}$ be another linear operator. If we no longer have commutativity then\n\\begin{align}\n\t\\label{eq:commutator_eigenvalue}\n\t\\begin{split}\n\t\t\\hat{M} \\hat {N} \\psi\n\t\t&= [\\hat{M}, \\hat{N}] \\psi + \\hat{N} \\hat{M} \\psi \\\\\n\t\t&= [\\hat{M}, \\hat{N}] \\psi + \\lambda \\hat{N}  \\psi\n\t\\end{split}\n\\end{align}\nThis is as far as we can get in full generality. We'll now specialize to the operators that are relevant to us. Let us find the Lie brackets.\n\\begin{align*}\n\t[\\hat{x}, \\hat{p}] \\psi\n\t&= \\hat{x} \\circ \\hat{p} (\\psi) - \\hat{p} \\circ \\hat{x} (\\psi) \\\\\n\t&= -i x \\dfrac{d \\psi}{dx} + i \\dfrac{d}{dx}(x \\psi) \\\\\n\t&= -\\cancel{i x \\dfrac{d \\psi}{dx}} + \\cancel{i x\\dfrac{d\\psi}{dx}} + i \\psi \\\\\n\t&= i \\psi \\\\\n\t\\Rightarrow [\\hat{x}, \\hat{p}] = i\n\\end{align*}\nThis is the famous non-commutativity relation that leads to the Heisenberg Uncertainty Principle. For the creation and annihilation operators we get\n\\begin{align*}\n\t[\\hat{C}, \\hat{A}]\n\t&= \\left[\\dfrac{1}{\\sqrt{2}}\\left(-i \\hat{p} + \\hat{x}\\right), \\dfrac{1}{\\sqrt{2}}\\left(i \\hat{p} + \\hat{x}\\right)\\right] \\\\\n\t&= \\dfrac{1}{2}[-i \\hat{p}, \\hat{x}] + \\dfrac{1}{{2}}[\\hat{x},i \\hat{p}] \\\\\n\t&= \\dfrac{1}{{2}}[\\hat{x},i \\hat{p}] + \\dfrac{1}{{2}}[\\hat{x},i \\hat{p}] \\\\\n\t&= i [\\hat{x},\\hat{p}] \\\\\n\t&= -1\n\\end{align*}\n\n\\begin{ques}\n\t\\label{q:commutivity_relations}\n\tShow that (you'll need to use the Jacobi identity for this problem)\n\t\\begin{align*}\n\t\t[\\hat{H}, \\hat{C}] &= \\hat{C} \\\\\n\t\t[\\hat{H}, \\hat{A}] &= -\\hat{A}\n\\end{align*}\n\\end{ques}\n\n\\noindent If $\\psi$ is an eigenvector of $\\hat{H}$ with eigenvalue $\\lambda$ then using equation \\eqref{eq:commutator_eigenvalue} and Question \\ref{q:commutivity_relations} we get\n\\begin{align*}\n\t\\begin{aligned}\n\t\t\\hat{H} \\hat {C} \\psi\n\t\t&= [\\hat{H}, \\hat{C}] \\psi + \\lambda \\hat{C}  \\psi \\\\\n\t\t&= \\hat{C} \\psi + \\lambda \\hat{C}  \\psi \\\\\n\t\t&= (\\lambda + 1)\\hat{C} \\psi\n\t\\end{aligned} &&\n\t \\begin{aligned}\n\t\t\\hat{H} \\hat {A} \\psi\n\t\t&= [\\hat{H}, \\hat{A}] \\psi + \\lambda \\hat{A}  \\psi \\\\\n\t\t&= -\\hat{A} \\psi + \\lambda \\hat{A}  \\psi \\\\\n\t\t&= (\\lambda - 1)\\hat{A} \\psi\n\t\\end{aligned}\n\\end{align*}\nThus we have shown that\n\\begin{thm}\n\t\\label{thm:shifts}\n\tIf $\\psi$ is an eigenvector of $\\hat{H}$ with eigenvalue $\\lambda$ then so are\n\t \\footnote{There is a mistake in this theorem. Can you find it?}\n\t\\begin{align*}\n\t\t&\\hat{C} \\psi \\mbox{ with eigenvalue } \\lambda + 1  \\mbox{ and} \\\\\n\t\t&\\hat{A} \\psi \\mbox{ with eigenvalue } \\lambda - 1.\n\t\\end{align*}\n\\end{thm}\nThis is why the operators $\\hat{C}$ and $\\hat{A}$ are called creation and annihilation operators, as they create and destroy energy. When we're dealing with actual quantum systems, this increase/decrease in energy is realized by an absorption/emission of a massless particle like a photon.\n\n\\section{Back to Physics}\n\nThis is as far as we can get by analyzing just the operators $\\hat{H}, \\hat{C}, \\hat{A}$ abstractly. Going back to physics, we want $\\lambda$ to be non-negative as $\\lambda$ represents the total energy of the system. But if $\\psi$ is an eigenvector with eigenvalue $\\lambda$ then so is $\\hat{A}^k \\psi$, which has eigenvalue $\\lambda-k$, and we can make this negative by choosing a large $k$. So if everything we did above is mathematically correct then we're seeing that there are negative energy states for every quantum mass-spring system.\n\nHmmm.\n\nThis brings us to the mistake in Theorem \\ref{thm:shifts}. The vectors $\\hat{C} \\psi$ and $\\hat{A} \\psi$ can be eigenvectors only if they are \\textbf{non-zero}. Thus we can restate the theorem as\n\\begin{thm}\n\tIf $\\psi$ is an eigenvector of $\\hat{H}$ with eigenvalue $\\lambda$ then so are\n
\t\\begin{align*}\n\t\t&\\hat{C} \\psi \\mbox{ with eigenvalue } \\lambda + 1  \\mbox{ {\\bf if $\\hat{C} \\psi \\neq 0$} and} \\\\\n\t\t&\\hat{A} \\psi \\mbox{ with eigenvalue } \\lambda - 1 \\mbox{ {\\bf if $\\hat{A} \\psi \\neq 0$}}.\n\t\\end{align*}\n\\end{thm}\n\nThus the descending chain of eigenvalues must terminate: Theorem \\ref{thm:shifts} cannot produce an eigenvector with a negative eigenvalue once we reach an eigenvector $\\psi_0$ with $\\hat{A} \\psi_0 = 0$. For this eigenvector, the energy equals\n\\begin{align*}\n\t\\hat{H} \\psi_0\n\t&= (\\hat{C} \\hat{A} + 1/2) \\psi_0 \\\\\n\t&= \\hat{C} \\cancel{\\hat{A} \\psi_0} + 1/2 \\psi_0 \\\\\n\t&= \\frac{1}{2} \\psi_0\n\\end{align*}\nThis eigenvector is called the \\textbf{ground state}.\nEvery other eigenvalue is $n + 1/2$ for some positive integer $n$, which corresponds to the eigenvector $\\hat{C}^n \\psi_0$. This indeed agrees with experiments, and so we can say that this model of quantum mechanics is \\emph{not incorrect}.\n\nWe can find the ground state explicitly by solving\n\\begin{align*}\n\t&\\hat{A} \\psi = 0 \\\\\n\t\\Rightarrow\\quad &\\left( \\frac{d}{dx} + x\\right) \\psi = 0 \\\\\n\t\\Rightarrow\\quad &\\frac{d\\psi}{dx} + x\\psi = 0 \\\\\n\t\\Rightarrow\\quad &\\psi = c e^{-x^2/2}\n\\end{align*}\nwhere $c$ is a constant. This is exactly the \\textbf{Gaussian function}. The eigenvectors corresponding to higher energies can be obtained by successively applying the operator $\\sqrt{2}\\,\\hat{C} = -\\dfrac{d}{dx} + x$ (the overall constant does not affect an eigenvector). Combining everything we have so far we get\n\\begin{mdframed}\n\t\\begin{thm}\n\t\tFor the quantum mass-spring system \\begin{align*}\n\t\t\\dfrac{1}{2} \\left(-\\dfrac{d^2 \\psi}{d x^2} + {x}^2 \\psi \\right) = \\lambda \\psi\n\t\\end{align*}\n\tthe energy $\\lambda$ can take values $n + 1/2$ where $n$ is a non-negative integer. The eigenvector corresponding to $n = 0$ is\n\t\\begin{align*}\n\t\t\\psi_0 = c e^{-x^2/2}\n\t\\end{align*}\n\twhere $c$ is a constant that depends on the initial conditions. The eigenvector corresponding to $n = k$ is\n\t\\begin{align*}\n\t\t\\psi_k = \\left(-\\dfrac{d}{dx} + x\\right)^k c e^{-x^2/2}\n\t\\end{align*}\n\t\\end{thm}\n\\end{mdframed}\n\n\\begin{remark}\n\tNote that the ground state $\\psi_0$ has non-zero energy. This is saying that in quantum mechanics a mass-spring system can never be at rest; this is in stark contrast with the classical model. This is a recurring phenomenon in quantum mechanics which is forced by the Heisenberg Uncertainty Principle.\n\\end{remark}\n\n\\begin{ques}\n\tExplicitly compute the first few eigenvectors $\\psi_1, \\psi_2, \\dots$\n\\end{ques}\n
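\nA sketch of such a computation (assuming \\texttt{sympy}), which also re-checks the eigenvalues:\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\npsi = sp.exp(-x**2 / 2)        # ground state with c = 1\nH = lambda g: (-sp.diff(g, x, 2) + x**2 * g) / 2\n\nfor k in range(1, 4):\n    psi = sp.expand(-sp.diff(psi, x) + x * psi)   # apply -d/dx + x\n    residual = sp.simplify(H(psi) - (k + sp.Rational(1, 2)) * psi)\n    print(k, psi, residual)                       # residual is 0\n\\end{verbatim}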
\n\n\\begin{ques}\n\tRemove the assumption $m = k = \\hbar = 1$ and restate the above theorem.\n\\end{ques}\n\n\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=0.7\\textwidth]{wavefunctions}\n\t\t\\caption{First few eigenvectors of the quantum mass-spring system. Image from Wikipedia.}\n\t\\end{figure}\n\n\n\\subsection{To Summarize}\nThe above analysis crucially relied on the Lie brackets $[\\hat{H},\\hat{C}]$ and $[\\hat{H},\\hat{A}]$, and, as mentioned before, the operators involved generate what is called the Weyl algebra.\nThe Weyl algebra is related to another structure in mathematics called the \\textbf{Heisenberg Lie algebra}.\n\n For many other quantum mechanical systems, it turns out that the relevant operators are related to other Lie algebras which satisfy similar Lie bracket identities. And thus the analysis of quantum mechanical systems often reduces to finding the correct Lie algebra for the system, which allows us to write the quantum mechanical operators in terms of simpler operators. For example, the angular momentum operator is related to the Lie algebra $\\frak{sl}_2(\\C)$, the vector space of $2 \\times 2$ complex matrices with trace $0$.\n\nFinally, for a large class of Lie algebras it is known that the eigenvalues of the relevant operators form a discrete set, and thus the quantization of energy levels is a physical manifestation of the representation theory of Lie algebras.\n\n\\end{document}\n", "meta": {"hexsha": "4808b95591d6e9479a08da5ee53b44f10a4bbcd2", "size": 16334, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "index.tex", "max_stars_repo_name": "apurvnakade/mc2018-the-quantum-spring", "max_stars_repo_head_hexsha": "533f2f2bbac1345e7f81810b54a3153d0a8d0e8b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "index.tex", "max_issues_repo_name": "apurvnakade/mc2018-the-quantum-spring", "max_issues_repo_head_hexsha": "533f2f2bbac1345e7f81810b54a3153d0a8d0e8b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "index.tex", "max_forks_repo_name": "apurvnakade/mc2018-the-quantum-spring", "max_forks_repo_head_hexsha": "533f2f2bbac1345e7f81810b54a3153d0a8d0e8b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.5268138801, "max_line_length": 540, "alphanum_fraction": 0.6914411657, "num_tokens": 5380, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7718434873426302, "lm_q1q2_score": 0.5781780868942433}}
{"text": "\\documentclass[12pt,english]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{mathtools}\n\n% url package\n\\usepackage[colorlinks=true,\n            linkcolor=blue,\n            urlcolor=blue,\n            anchorcolor = blue,\n            citecolor=gray]{hyperref}\n\n% Titling and Author\n\\title{Latex Example, Equation Cases}\n\\author{\\href{https://fanwangecon.github.io/}{Fan Wang}\\thanks{https://fanwangecon.github.io, repository: \\href{https://fanwangecon.github.io/Tex4Econ/}{Tex4Econ}}}\n\\date{\\today}\n\\begin{document}\n\n\\maketitle\n\n\\section{Two Cases}\n\n\\begin{verbatim}\n  x =\n  \\begin{cases*}\n      -x           & if  $x < 0 $  \\\\\n      \\phantom{-}x & if  $x\\ge 0$\n  \\end{cases*}\n\\end{verbatim}\n\n$$\nx =\n\\begin{cases*}\n    -x           & if  $x < 0 $  \\\\\n    \\phantom{-}x & if  $x\\ge 0$\n\\end{cases*}\n$$\n\n\\begin{align}\nx =\n\\begin{cases*}\n-x           & if  $x < 0 $  \\\\\n\\phantom{-}x & if  $x\\ge 0$\n\\end{cases*}\n\\end{align}\n\n\\pagebreak\n\n\\section{Two Cases, Same Line}\n\n\\begin{verbatim}\n  \\begin{equation*}\n  f(x) = \\begin{cases}\n               0  & \\text{if } x < 0 \\\\\n               1  & \\text{if } x \\ge 0\n         \\end{cases} \\quad\n  g(x) = \\begin{cases}\n               f(x)+1  & \\text{if } x < 0 \\\\\n               f(x)-1  & \\text{if } x \\ge 0\n         \\end{cases}\n  \\end{equation*}\n\\end{verbatim}\n\n\\begin{equation*}\nf(x) = \\begin{cases}\n             0  & \\text{if } x < 0 \\\\\n             1  & \\text{if } x \\ge 0\n       \\end{cases} \\quad\ng(x) = \\begin{cases}\n             f(x)+1  & \\text{if } x < 0 \\\\\n             f(x)-1  & \\text{if } x \\ge 0\n       \\end{cases}\n\\end{equation*}\n\n\\begin{align}\nf(x) = \\begin{cases}\n             0  & \\text{if } x < 0 \\\\\n             1  & \\text{if } x \\ge 0\n       \\end{cases} \\quad\ng(x) = \\begin{cases}\n             f(x)+1  & \\text{if } x < 0 \\\\\n             f(x)-1  & \\text{if } x \\ge 0\n       \\end{cases}\n\\end{align}\n\ncase star\n\\begin{align}\nf(x) =\n\\begin{cases*}\n0  & if $x < 0$ \\\\\n1  & if $x \\ge 0$\n\\end{cases*}\n\\quad\ng(x) =\n\\begin{cases*}\nf(x)+1  & if $ x < 0$ \\\\\nf(x)-1  & if $ x \\ge 0$\n\\end{cases*}\n\\end{align}\n\n\\pagebreak\n\n\\pagebreak\n\n\\section{Cases with Fraction Large Using Array dcases}\n\nHere, we compare the difference between using dcases and cases with fractions.\n\n\\subsection{cases}\n\n\\begin{align}\nf(x) = \\begin{cases*}\n             \\frac{a+b}{c+d}  & \\text{if } x < 0 \\\\\n             1  & \\text{if } x \\ge 0\n       \\end{cases*}\n\\end{align}\n\n\\subsection{dcases}\nFraction show up larger\n\n\\begin{align}\nf(x) = \\begin{dcases*}\n             \\frac{a+b}{c+d}  & \\text{if } x < 0 \\\\\n             1  & \\text{if } x \\ge 0\n       \\end{dcases*}\n\\end{align}\n\n\n\\end{document}\n", "meta": {"hexsha": "19170086a6ea7371309723dd323aa127ac69a483", "size": 2631, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "_other/equation/cases.tex", "max_stars_repo_name": "guohui-jiang/Tex4Econ", "max_stars_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_other/equation/cases.tex", "max_issues_repo_name": "guohui-jiang/Tex4Econ", "max_issues_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_issues_repo_licenses": 
["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_other/equation/cases.tex", "max_forks_repo_name": "guohui-jiang/Tex4Econ", "max_forks_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.2384615385, "max_line_length": 164, "alphanum_fraction": 0.5188141391, "num_tokens": 978, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8688267745399466, "lm_q1q2_score": 0.5781265210304439}}
{"text": "\\documentstyle[11pt,reduce]{article}\n\\title{{\\bf $Z$-Transform Package for {\\tt REDUCE}}}\n\\author{Wolfram Koepf \\\\ Lisa Temme \\\\ email: {\\tt Koepf@zib.de}}\n\\date{April 1995 : ZIB Berlin}\n\\begin{document}\n\\maketitle\n\\section{$Z$-Transform}\n\n  The $Z$-Transform of a sequence $\\{f_n\\}$ is the discrete analogue\n  of the Laplace Transform, and\n  \\[{\\cal Z}\\{f_n\\} = F(z) = \\sum^\\infty_{n=0} f_nz^{-n}\\;.\\] \\\\\n  This series converges in the region outside the circle \n  $|z|=|z_0|= \\limsup\\limits_{n \\rightarrow \\infty} \\sqrt[n]{|f_n|}\\;.$\n\n\n\\begin{tabbing}\n\n{\\bf SYNTAX:}\\ \\ {\\tt ztrans($f_n$, n,  z)}\\ \\ \\ \\ \\ \\ \\ \\\n  \\=where $f_n$ is an expression, and $n$,$z$ \\\\\n  \\> are identifiers.\\\\\n\\end{tabbing}\n\n\n\\section{Inverse $Z$-Transform}\n  The calculation of the Laurent coefficients of a regular function\n  results in the following inverse formula for the $Z$-Transform:\n  \\\\\n  If $F(z)$ is a regular function in the region $|z|> \\rho$ then\n  $\\exists$ a sequence \\{$f_n$\\} with ${\\cal Z} \\{f_n\\}=F(z)$\n  given by\n  \\[f_n = \\frac{1}{2 \\pi i}\\oint F(z) z^{n-1} dz\\]\n\n\n\\begin{tabbing}\n\n{\\bf SYNTAX:}\\ \\ {\\tt invztrans($F(z)$, z,  n)}\\ \\ \\ \\ \\ \\ \\ \\\n  \\=where $F(z)$ is an expression, \\\\\n  \\> and $z$,$n$ are identifiers.\n\\end{tabbing}\n\n\n\\section{Input for the $Z$-Transform}\n\\begin{tabbing}\n  This pack\\=age can compute the \\= $Z$-Transforms of the \\=following \n  list of $f_n$, and \\\\ certain combinations thereof.\\\\ \\\\\n\n\\>$1$                             \n\\>$e^{\\alpha n}$                  \n\\>$\\frac{1}{(n+k)}$               \\\\ \\\\\n\\>$\\frac{1}{n!}$                  \n\\>$\\frac{1}{(2n)!}$               \n\\>$\\frac{1}{(2n+1)!}$             \\\\ \\\\\n\\>$\\frac{\\sin(\\beta n)}{n!}$      \n\\>$\\sin(\\alpha n+\\phi)$           \n\\>$e^{\\alpha n} \\sin(\\beta n)$    \\\\ \\\\\n\\>$\\frac{\\cos(\\beta n)}{n!}$      \n\\>$\\cos(\\alpha n+\\phi)$           \n\\>$e^{\\alpha n} \\cos(\\beta n)$    \\\\ \\\\\n\\>$\\frac{\\sin(\\beta (n+1))}{n+1}$ \n\\>$\\sinh(\\alpha n+\\phi)$          \n\\>$\\frac{\\cos(\\beta (n+1))}{n+1}$ \\\\ \\\\\n\\>$\\cosh(\\alpha n+\\phi)$          \n\\>${n+k \\choose m}$\\\\\n\\end{tabbing}\n\n\\begin{tabbing}\n\\underline {{\\bf Other Combinations}}\\= \\\\ \\\\\n\n\\underline {Linearity}\n  \\>${\\cal Z} \\{a f_n+b g_n \\} = a{\\cal Z} \\{f_n\\}+b{\\cal Z}\\{g_n\\}$\n  \\\\ \\\\\n\\underline {Multiplication by $n$}\n  \\>${\\cal Z} \\{n^k \\cdot f_n\\} = -z \\frac{d}{dz} \\left({\\cal Z}\\{n^{k-1} \\cdot f_n,n,z\\} \\right)$\n  \\\\ \\\\\n\\underline {Multiplication by $\\lambda^n$}\n  \\>${\\cal Z} \\{\\lambda^n \\cdot f_n\\}=F \\left(\\frac{z}{\\lambda}\\right)$\n  \\\\ \\\\\n\\underline {Shift Equation}\n  \\>${\\cal Z} \\{f_{n+k}\\} = \n           z^k \\left(F(z) - \\sum\\limits^{k-1}_{j=0} f_j z^{-j}\\right)$\n  \\\\ \\\\\n\\underline {Symbolic Sums}\n\n  \\> ${\\cal Z} \\left\\{ \\sum\\limits_{k=0}^{n} f_k \\right\\} =\n                       \\frac{z}{z-1} \\cdot {\\cal Z} \\{f_n\\}$ \\\\ \\\\\n\n  \\>${\\cal Z} \\left\\{ \\sum\\limits_{k=p}^{n+q} f_k \\right\\}$\n  \\ \\ \\ combination of the above \\\\ \\\\\n  where $k$,$\\lambda \\in$ {\\bf N}$- \\{0\\}$; and $a$,$b$ are variables\n  or fractions; and $p$,$q \\in$ {\\bf Z} or \\\\ \n  are functions of $n$; and $\\alpha$, $\\beta$ \\& $\\phi$ are angles \n  in radians.\n\\end{tabbing}\n\n\\section{Input for the Inverse $Z$-Transform}\n\\begin{tabbing}\n  This \\= package can 
compute the Inverse \\= Z-Transforms of any \n  rational function, \\\\ whose denominator can be factored over \n  ${\\bf Q}$, in addition to the following list \\\\ of $F(z)$.\\\\ \\\\\n\n\\> $\\sin \\left(\\frac{\\sin (\\beta)}{z} \\ \\right) \n    e^{\\left(\\frac{\\cos (\\beta)}{z} \\ \\right)}$\n\\> $\\cos \\left(\\frac{\\sin (\\beta)}{z} \\ \\right) \n    e^{\\left(\\frac{\\cos (\\beta)}{z} \\ \\right)}$ \\\\ \\\\\n\\> $\\sqrt{\\frac{z}{A}} \\sin \\left( \\sqrt{\\frac{z}{A}} \\ \\right)$\n\\> $\\cos \\left( \\sqrt{\\frac{z}{A}} \\ \\right)$ \\\\ \\\\\n\\> $\\sqrt{\\frac{z}{A}} \\sinh \\left( \\sqrt{\\frac{z}{A}} \\ \\right)$\n\\> $\\cosh \\left( \\sqrt{\\frac{z}{A}} \\ \\right)$ \\\\ \\\\\n\\> $z \\ \\log \\left(\\frac{z}{\\sqrt{z^2-A z+B}} \\ \\right)$\n\\> $z \\ \\log \\left(\\frac{\\sqrt{z^2+A z+B}}{z} \\ \\right)$ \\\\ \\\\\n\\> $\\arctan \\left(\\frac{\\sin (\\beta)}{z+\\cos (\\beta)} \\ \\right)$\n\\\\\n\\end{tabbing}\n\n  where $k$,$\\lambda \\in$ {\\bf N}$ -  \\{0\\}$ and $A$,$B$ are fractions\n  or variables ($B>0$) and $\\alpha$,$\\beta$, \\&  $\\phi$ are angles \n  in radians.\n\n\\section{Application of the $Z$-Transform}\n\\underline {{\\bf Solution of difference equations}}\\\\\n\n  In the same way that a Laplace Transform can be used to\n  solve differential equations, so $Z$-Transforms can be used\n  to solve difference equations.\\\\ \\\\\n  Given a linear difference equation of $k$-th order\n\\begin{equation}\n  f_{n+k} + a_1 f_{n+k-1}+ \\ldots + a_k f_n = g_n\n\\label{eq:1}\n\\end{equation}\n\n  with initial conditions\n  $f_0 = h_0$, $f_1 = h_1$, $\\ldots$, $f_{k-1} = h_{k-1}$ (where $h_j$\n  are given), it is possible to solve it in the following way.\n   If the coefficients $a_1, \\ldots , a_k$ are constants, then the \n  $Z$-Transform of (\\ref{eq:1}) can be calculated using the shift\n  equation, and results in a solvable linear equation for \n  ${\\cal Z} \\{f_n\\}$. Application of the Inverse $Z$-Transform\n  then results in the solution of \\ (\\ref{eq:1}).\\\\\n  If the coefficients $a_1, \\ldots , a_k$ are polynomials in $n$ then\n  the $Z$-Transform of (\\ref{eq:1}) constitutes a differential\n  equation for ${\\cal Z} \\{f_n\\}$. 
If this differential equation can\n  be solved then the Inverse $Z$-Transform once again yields the\n  solution of (\\ref{eq:1}).\n  Some examples of these methods of solution can be found in\n  $\\S$\\ref{sec:Examples}.\n\n\\section{EXAMPLES}\n\\label{sec:Examples}\n\\underline {{\\bf Here are some examples for the $Z$-Transform}}\\\\\n\\begin{verbatim}\n1: ztrans((-1)^n*n^2,n,z);\n\n    z*( - z + 1)\n---------------------\n  3      2\n z  + 3*z  + 3*z + 1\n\n2: ztrans(cos(n*omega*t),n,z);\n\n   z*(cos(omega*t) - z)\n---------------------------\n                     2\n 2*cos(omega*t)*z - z  - 1\n\n3: ztrans(cos(b*(n+2))/(n+2),n,z);\n\n                                 z\nz*( - cos(b) + log(------------------------------)*z)\n                                          2\n                    sqrt( - 2*cos(b)*z + z  + 1)\n\n4: ztrans(n*cos(b*n)/factorial(n),n,z);\n\n  cos(b)/z       sin(b)                 sin(b)\n e        *(cos(--------)*cos(b) - sin(--------)*sin(b))\n                   z                      z\n---------------------------------------------------------\n                            z\n5: ztrans(sum(1/factorial(k),k,0,n),n,z);\n\n  1/z\n e   *z\n--------\n z - 1\n\n6: operator f$\n\n7: ztrans((1+n)^2*f(n),n,z);\n\n                          2\ndf(ztrans(f(n),n,z),z,2)*z  - df(ztrans(f(n),n,z),z)*z \n+ ztrans(f(n),n,z)\n\n\\end{verbatim}\n\n\\underline {{\\bf Here are some examples for the Inverse $Z$-Transform}}\n\\begin{verbatim}\n\n8: invztrans((z^2-2*z)/(z^2-4*z+1),z,n);\n\n              n       n                n\n (sqrt(3) - 2) *( - 1)  + (sqrt(3) + 2)\n-----------------------------------------\n                    2\n\n9: invztrans(z/((z-a)*(z-b)),z,n);\n\n  n    n\n a  - b\n---------\n  a - b\n\n10: invztrans(z/((z-a)*(z-b)*(z-c)),z,n);\n\n  n      n      n      n      n      n\n a *b - a *c - b *a + b *c + c *a - c *b\n-----------------------------------------\n  2      2        2      2    2        2\n a *b - a *c - a*b  + a*c  + b *c - b*c\n\n11: invztrans(z*log(z/(z-a)),z,n);\n\n  n\n a *a\n-------\n n + 1\n\n12: invztrans(e^(1/(a*z)),z,n);\n\n        1\n-----------------\n  n\n a *factorial(n)\n\n13: invztrans(z*(z-cosh(a))/(z^2-2*z*cosh(a)+1),z,n);\n\ncosh(a*n)\n\n\n\\end{verbatim}\n\n\\underline {{\\bf Examples: Solutions of Difference Equations}}\\\\ \\\\\n\\begin{tabbing}\n{\\bf I} \\ \\ \\ \\ \\ \\ \\= \n\n  (See \\cite{BS}, p.\\ 651, Example 1).\\\\\n  \\> Consider the \\= homogeneous linear difference equation\\\\ \\\\\n  \\>\\>  $f_{n+5} - 2 f_{n+3} + 2 f_{n+2} - 3 f_{n+1} + 2 f_{n}=0$\\\\ \\\\\n\n  \\> with \\ initial conditions \\ $f_0=0$, $f_1=0$, $f_2=9$, $f_3=-2$,\n     $f_4=23$. 
\\  The\\\\\n  \\> $Z$-Transform of the left hand side can be written as\n     $F(z)=P(z)/Q(z)$ \\\\\n  \\> where \\ $P(z)=9z^3-2z^2+5z$ \\ \n     and \\ $Q(z)=z^5-2z^3+2z^2-3z+2$ \\ $=$\\\\ \n  \\> $(z-1)^2(z+2)(z^2+1)$, \\ which can be inverted to give\\\\ \\\\\n\n \\>\\>  $f_n = 2n + (-2)^n - \\cos \\frac{\\pi}{2}n\\;.$ \\\\ \\\\\n\n  \\> The following REDUCE session shows how the present package can\n\\\\ \\> be used to solve the above problem.\n\n\\end{tabbing}\n\\begin{verbatim}\n14: operator f$ f(0):=0$ f(1):=0$ f(2):=9$ f(3):=-2$ f(4):=23$\n\n\n20: equation:=ztrans(f(n+5)-2*f(n+3)+2*f(n+2)-3*f(n+1)+2*f(n),n,z);\n\n                              5                       3\nequation := ztrans(f(n),n,z)*z  - 2*ztrans(f(n),n,z)*z\n\n                                   2\n             + 2*ztrans(f(n),n,z)*z  - 3*ztrans(f(n),n,z)*z\n\n                                       3      2\n             + 2*ztrans(f(n),n,z) - 9*z  + 2*z  - 5*z\n\n\n21: ztransresult:=solve(equation,ztrans(f(n),n,z));\n\n                                             2\n                                       z*(9*z  - 2*z + 5)\nztransresult := {ztrans(f(n),n,z)=----------------------------}\n                                    5      3      2\n                                   z  - 2*z  + 2*z  - 3*z + 2\n\n22: result:=invztrans(part(first(ztransresult),2),z,n);\n\n                   n    n       n    n\n           2*( - 2)  - i *( - 1)  - i  + 4*n\nresult := -----------------------------------\n                          2\n\n\\end{verbatim}\n\n\\begin{tabbing}\n\\\\ \\\\\n{\\bf II} \\ \\ \\ \\ \\ \\ \\= \n\n  (See \\cite{BS}, p.\\ 651, Example 2).\\\\\n  \\>    Consider the \\= inhom\\=ogeneous difference equation:\\\\ \\\\\n  \\>\\>  $f_{n+2} - 4 f_{n+1} + 3 f_{n} = 1$\\\\ \\\\\n\n  \\> with initial conditions $f_0=0$, $f_1=1$. Giving  \\\\ \\\\\n\\>\\> $F(z)$\\>$ = \\frac{z}{z^2-4z+3} + {\\cal Z}\\{1\\} \\cdot \\frac{1}{z^2-4z+3}$\\\\ \\\\\n\\>\\>\\>       $ = \\frac{z}{z^2-4z+3} + \\frac{z}{z-1} \\cdot \\frac{1}{z^2-4z+3} = \\frac{z^2}{(z-1)(z^2-4z+3)}$.\n\\\\ \\\\\n  \\> The Inverse $Z$-Transform results in the solution\\\\ \\\\\n\n\n\\>\\>\n$f_n = \\frac{1}{2} \\left( \\frac{3^{n+1}-1}{2}-(n+1) \\right)$.\\\\ \\\\\n\n  \\> The following REDUCE session shows how the present package can\\\\\n  \\> be used to solve the above problem.\n\n\\end{tabbing}\n\\begin{verbatim}\n\n23: clear(f)$ operator f$ f(0):=0$ f(1):=1$\n\n\n27: equation:=ztrans(f(n+2)-4*f(n+1)+3*f(n)-1,n,z);\n\n                               3                       2\nequation := (ztrans(f(n),n,z)*z  - 5*ztrans(f(n),n,z)*z  \n\n                                                           2\n    + 7*ztrans(f(n),n,z)*z - 3*ztrans(f(n),n,z) - z )/(z - 1)\n\n28: ztransresult:=solve(equation,ztrans(f(n),n,z));\n\n                                      2\n                                     z\nresult := {ztrans(f(n),n,z)=---------------------}\n                              3      2\n                             z  - 5*z  + 7*z - 3\n\n29: result:=invztrans(part(first(ztransresult),2),z,n);\n\n              n\n           3*3  - 2*n - 3\nresult := ----------------\n                 4\n\\end{verbatim}\n
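\nAn independent cross-check of this closed form (a short Python sketch, outside REDUCE) simply iterates the recurrence:\n\\begin{verbatim}\n# f(n+2) - 4 f(n+1) + 3 f(n) = 1   with   f(0) = 0, f(1) = 1\nf = [0, 1]\nfor n in range(20):\n    f.append(1 + 4 * f[-1] - 3 * f[-2])\nclosed = [(3 * 3**n - 2 * n - 3) // 4 for n in range(len(f))]\nassert f == closed\n\\end{verbatim}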
\n\n\\begin{tabbing}\n\\\\ \\\\\n{\\bf III} \\ \\ \\ \\ \\ \\ \\= \n\n    Consider the \\=following difference equation, which leads to a\n    differential\\\\\n \\> equation for ${\\cal Z}\\{f_n\\}$.\\\\ \\\\\n\n \\>\\> $(n+1) \\cdot f_{n+1}-f_n=0$\\\\ \\\\\n\n \\> with initial conditions $f_0=1$, $f_1=1$. It can be solved in REDUCE\\\\\n \\>  using the present package in the following way.\\\\\n\n\\end{tabbing}\n\\begin{verbatim}\n30: clear(f)$ operator f$ f(0):=1$ f(1):=1$\n\n\n34: equation:=ztrans((n+1)*f(n+1)-f(n),n,z);\n\n                                        2\nequation :=  - (df(ztrans(f(n),n,z),z)*z  + ztrans(f(n),n,z))\n\n35: operator tmp;\n\n36: equation:=sub(ztrans(f(n),n,z)=tmp(z),equation);\n\n                              2\nequation :=  - (df(tmp(z),z)*z  + tmp(z))\n\n37: load(odesolve);\n\n38: ztransresult:=odesolve(equation,tmp(z),z);\n\n                         1/z\nztransresult := {tmp(z)=e   *arbconst(1)}\n\n39: preresult:=invztrans(part(first(ztransresult),2),z,n);\n\n              arbconst(1)\npreresult := --------------\n              factorial(n)\n\n40: solve({sub(n=0,preresult)=f(0),sub(n=1,preresult)=f(1)},\narbconst(1));\n\n{arbconst(1)=1}\n\n41: result:=preresult where ws;\n\n                1\nresult := --------------\n           factorial(n)\n\n\\end{verbatim}\n\n\\begin{thebibliography}{9}\n\\bibitem{BS} Bronstein, I.N. and Semendjajew, K.A.,\n{\\it Taschenbuch der Mathematik},\nVerlag Harri Deutsch, Thun und Frankfurt (Main),\n 1981.\\\\ISBN 3 87144 492 8.\n\\end{thebibliography}\n\n\\end{document}\n\n", "meta": {"hexsha": "df28a3be0db4cd37d88f73cbe572e0b611f7079c", "size": 12073, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/ztrans/ztrans.tex", "max_stars_repo_name": "arthurcnorman/general", "max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/ztrans/ztrans.tex", "max_issues_repo_name": "arthurcnorman/general", "max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/ztrans/ztrans.tex", "max_forks_repo_name": "arthurcnorman/general", "max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.2324455206, "max_line_length": 98, "alphanum_fraction": 0.4806593225, "num_tokens": 4388, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267728417086, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.5781265026059795}}
{"text": "\\SecDef{multi}{Feistel-like Decomposition based on Finite Field Multiplications}\n\nSimilarly to \\ChapRef{feistel}, this chapter illustrates the usefulness of the \"Jackson Pollock representation\" of the LAT of an S-Box. Consider a heatmap of $\\LAT{\\pi}$ shown in \\FigRef{pi_lat}. It looks rather random overall, except several vertical stripes clearly sticking out. This effect is weaker or stronger depending on the colormap chosen for plotting. By a closer inspection it can be observed that the stripes stick out because of the same color appearing mote often than in another columns. In order to strengthen the effect, I define a \\emph{column frequency table}.\n\\begin{definition}\nLet $L$ be an $n \\times m$ matrix. The \\emph{column frequency table} of $L$ is the $n\\times m$ matrix $\\CF{L}$ over $\\ZZ$ given by:\n$$\n\\CF{L}[y,x] \\eqdef \\pabs{\\pset{y' \\mid L[y',x] = L[y,x]}}.\n$$\n\\end{definition}\n\n\\FigTex{pi_lat.tex}\n\nThe column frequency table of the $\\LAT{\\pi}$ is shown in \\FigRef{cf_pi_lat}. The same columns are clearly sticking out as in the LAT of $\\pi$. Let $S$ denote their $x$-coordinates:\n\\begin{multline*}\nS = \\{\n\\hex{00},\n\\hex{1a},\n\\hex{20},\n\\hex{3a},\n\\hex{44},\n\\hex{5e},\n\\hex{64},\n\\hex{7e},\\\\\n\\hex{8a},\n\\hex{90},\n\\hex{aa},\n\\hex{b0},\n\\hex{ce},\n\\hex{d4},\n\\hex{ee},\n\\hex{f4}\n\\}\\subseteq\\field{8}.\n\\end{multline*}\nNote that $\\hex{00}$ was added in order to complete the set to a linear subspace of $\\field{8}$. It follows that we can choose 4 linearly independent coordinates and they will correspond to 4 linearly independent components of $\\pi$. By composing $\\pi$ with a linear map, the outstanding columns of the $\\LAT{\\pi}$ can be grouped together. Let $L\\in \\linbij{8}$ be such that\n\\begin{align*}\nL(\\hex{80}) &= \\hex{08},~~\nL(\\hex{40}) = \\hex{04},~~\nL(\\hex{20}) = \\hex{02},~~\nL(\\hex{10}) = \\hex{01},\\\\\nL(\\hex{08}) &= \\hex{8a},~~\nL(\\hex{04}) = \\hex{44},~~\nL(\\hex{02}) = \\hex{20},~~\nL(\\hex{01}) = \\hex{1a}.\n\\end{align*}\nLet $\\pi_1 \\eqdef L^{\\top} \\circ \\pi$. The LAT of $\\pi_1$ is shown in \\FigRef{pi1_lat}. According to Proposition~\\ref{prelim.prop:linear-lat} from \\ChapRef{prelim}, the outstanding columns are grouped on the left. Furthermore, inside these 16 columns we can now observe similarly outstanding rows. Coincidentally, their coordinates form the same linear subspace $S$. In order to group the rows on the top, let $\\pi_2 \\eqdef L^{\\top} \\circ \\pi \\circ \\invtop{L}$. The LAT of $\\pi_2$ is shown in \\FigRef{pi2_lat}.\n\n\\FigTex{pi_lat1_2.tex}\n\n\\subsection{TU-decomposition}\n\\Label{sec:tu}\n\nThe LAT of $\\pi_2$ has interesting artifacts. The special 16 columns now have a visible structure consisting of $16\\times16$ squares. More importantly, the topmost square fully consists of zeroes, i.e. $\\LAT{\\pi_2}(a,b)=0$ for $0 \\preceq a,b \\preceq \\hex{0f}$. These zeroes can be interpreted as follows: if we fix any linear combination of the 4 rightmost input bits to any constant, then any linear combination of the 4 rightmost output bits is balanced. 
\n\n\\subsection{TU-decomposition}\n\\Label{sec:tu}\n\nThe LAT of $\\pi_2$ has interesting artifacts. The special 16 columns now have a visible structure consisting of $16\\times16$ squares. More importantly, the topmost square fully consists of zeroes, i.e. $\\LAT{\\pi_2}(a,b)=0$ for $0 \\preceq a,b \\preceq \\hex{0f}$. These zeroes can be interpreted as follows: if we fix any linear combination of the 4 rightmost input bits to any constant, then any linear combination of the 4 rightmost output bits is balanced. Following this idea, the following multiset property can be verified: for any $c \\in \\field{4}$, \n$$\n\\Right \\pround{\\pi_2(X)} = \\field{4}, ~\\text{where}~X \\eqdef \\pset{(l, c) \\mid l \\in \\field{4}}.\n$$\n\\Todo{proposition}\nIn other words, there exist 16 permutations $T_{\\hex{0}},\\ldots, T_{\\hex{f}}$ of $\\field{4}$ such that for all $l,r \\in \\field{4}$\n$$\\Right(\\pi_2(l, r)) = T_{r}(l).$$ \nLet $U_{\\hex{0}},\\ldots,U_{\\hex{f}}\\colon \\field{4} \\to \\field{4}$ be such that $U_{T_r(l)}(r) \\eqdef \\Left(\\pi_2(l,r))$ for all $l,r \\in \\field{4}$. Then\n$$\n\\pi_2(l,r) = \\pround{U_{T_r(l)}(r), T_r(l)}.\n$$\nThe high-level decomposition of $\\pi_2$ into $T$ and $U$ is shown in \\FigRef{tu} and the look-up tables of $T$ and $U$ are given in \\TabRef{tu-lt}. Note that since $\\pi_2$ and all $T_i$ are permutations, all $U_i$ must be permutations as well. It can be easily verified from the look-up table of $U$. Due to this bijectivity, $T$ and $U$ can be viewed as mini-block ciphers. Such a decomposition into two mini block-ciphers shall be called a \\emph{TU-decomposition}. It will prove its usefulness again in \\ChapRef{apn}.\n\n\\FigTex{tu.tex}\n\n\\FigTex{tu-lt.tex}\n\n\\begin{remark}\nIt might seem that the TU-decomposition provides little insight into the structure. Indeed, any 8-bit function can be described by two tables of the same size as $T$ and $U$, for example by considering the left and right halves of the output separately. The only special property that TU-decomposition adds is that each $T_i$ is a permutation (and thus, each $U_i$). It is very unlikely that a random permutation has such a decomposition, even if extra linear encodings (such as $L$ in the case of $\\pi$) are allowed. This property justifies the separation of $T$ and $U$ and their independent analysis.\n\\end{remark}\n\nThe decomposition procedures for $T$ and $U$ are described in \\SecRef{t} and \\SecRef{u} respectively.\n\n\\subimport{}{11u.tex}\n\\subimport{}{11t.tex}\n\\subimport{}{12full.tex}", "meta": {"hexsha": "745c0cb72f90659721aee8b976fe491cf8802892", "size": 4935, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9strKuz/10multi.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9strKuz/10multi.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9strKuz/10multi.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 64.0909090909, "max_line_length": 611, "alphanum_fraction": 0.7088145897, "num_tokens": 1559, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8080672135527632, "lm_q1q2_score": 0.5781106747815351}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\marginpar{Monday\\\\ 2020-12-14, \\\\ compiled \\\\ \\today}\n\nA general infinitesimal coordinate transformation looks like \n%\n\\begin{align}\nx^{\\mu } \\to x^{\\prime \\mu } = x^{\\mu } - \\xi^{\\mu }(x)\n\\,,\n\\end{align}\n%\nwhere both the vector field \\(\\xi \\) and its derivative are infinitesimally small. \n\nA scalar field \\(\\phi \\) will transform trivially, meaning that \n%\n\\begin{align}\n\\phi' (x') = \\phi (x)\n\\,,\n\\end{align}\n%\nwhile a covariant and a contravariant vector will transform like \n%\n\\begin{align}\nV^{\\prime }_\\mu (x') &= \\pdv{x^{\\nu }}{x^{\\prime \\mu }} V_\\nu (x)  \\\\\nV^{\\prime \\mu } (x') &= \\pdv{x^{\\prime \\mu }}{x^{\\nu }} V^{\\nu }(x)\n\\,,\n\\end{align}\n%\nand tensors with more indices will transform analogously, with a Jacobian for each contravariant index and an inverse Jacobian for each covariant index. \n\n\\paragraph{Active approach}\n\nWe denote the unperturbed FLRW manifold as \\(\\mathcal{M}_0\\), and the perturbed one as \\(\\mathcal{M}_\\lambda \\). \nThe parameter \\(\\lambda \\) is a bookkeeping tool we can use to keep track of the perturbative order, at the end we will set \\(\\lambda = 1\\). \n\nWe denote the map \\(\\psi _\\lambda \\) as the one from \\(\\mathcal{M}_0\\) to \\(\\mathcal{M}_\\lambda \\), and \\(\\varphi _\\lambda \\) as the one going backwards.  \n\n% Also, we denote our\n\nLet us fix a coordinate system \\(x^{\\mu }\\) on \\(\\mathcal{M}_0\\), and consider a vector field \\(\\xi^{\\mu }\\): then we can define a \\textbf{congruence of curves} by \n%\n\\begin{align}\n\\dv{x^{\\mu }}{\\lambda } = \\xi^{\\mu } (\\lambda )\n\\,.\n\\end{align}\n\nThen, we can define a point \\(Q\\) which is at a parametric distance \\(\\lambda \\) from \\(P\\) along these integral curves: \n%\n\\begin{align} \\label{eq:infinitesimal-point-transformation-active-approach}\nx^{\\mu } (Q) = x^{\\mu } (P) + \\lambda \\xi ^{\\mu } (x(P)) + \\order{\\lambda^2}\n\\,.\n\\end{align}\n\nThis is called an infinitesimal point transformation in the active approach. \n\n\\paragraph{Passive approach}\n\nWe can also introduce a ``passive approach'' to gauge transformations: the same relation we used can be seen as the one we would get by introducing a new coordinate system \\(y^{\\mu } (Q)\\), such that \n%\n\\begin{align}\ny^{\\mu } (Q) &= x^{\\mu } (P)   \\\\\n&= x^{\\mu } (Q) - \\lambda \\xi^{\\mu } (x(P))\n\\,.\n\\end{align}\n\nIf \\(x(P) = x(Q) - \\lambda \\xi\\), this can be expanded as \n%\n\\begin{align}\ny^{\\mu } (Q) = x^{\\mu } (Q) - \\lambda \\xi^{\\mu } (x^{\\mu } (Q)) + \\order{\\lambda^2, \\xi^2}\n\\,.\n\\end{align}\n\nThis is a ``passive coordinate transformation'', since we are simply changing the names we give to the points, in the form \n%\n\\begin{align}\ny^{\\mu } (\\lambda ) = x^{\\mu } - \\lambda \\xi^{\\mu } + \\order{\\lambda^2}\n\\,,\n\\end{align}\n%\nwhich will become \\(y^{\\mu } = x^{\\mu } - \\xi^{\\mu }\\), a regular coordinate transformation. \n\nConsider a vector field \\(Z\\) which has components \\(Z^{\\mu }\\) in the \\(x\\) coordinate system; then we can define a new vector field \\(\\widetilde{Z}\\) which has components \\(\\widetilde{Z}^{\\mu }\\) in the \\(x\\) coordinate system, such that the components \\(\\widetilde{Z}^{\\mu }\\) evaluated at the coordinate point \\(x^{\\mu } (P)\\) are equal to the components \\(Z^{\\prime \\mu }\\) that the ``old'' vector field \\(Z\\) has in the \\(y\\) coordinates. 
\n\nThe relation is \n%\n\\begin{align}\n\\widetilde{Z}^{\\mu } (x (P)) = Z^{\\prime \\mu } (y (Q))\n= \\eval{\\pdv{y^{\\mu }}{x^{\\nu }}}_{x(Q)} Z^{\\nu } (x(Q))\n\\,.\n\\end{align}\n\nIn the active approach, instead, we would have related the first term to the third directly. \n\nThis provides a transportation law from the point \\(Q\\) to the point \\(P\\), in the \\emph{same} \\(x\\) coordinate system. \nLooking at the first and last term in the equation is the ``active approach'', looking at the last two terms is the ``passive approach''. \n\nThe Jacobian at hand is \n%\n\\begin{align}\n\\pdv{y^{\\mu }}{x^{\\nu }} = \\delta^{\\mu }_{\\nu } - \\lambda \\pdv{\\xi^{\\mu }}{x^{\\nu }} (x (Q))\n\\,,\n\\end{align}\n%\ntherefore, Taylor expanding, we get\n%\n\\begin{align}\n\\widetilde{Z}^{\\mu } (x(P)) &= Z^{\\mu } (x(Q)) - \\lambda \\pdv{\\xi^{\\mu }}{x^{\\nu }} Z^{\\nu } (x(Q))  \\\\\n&= Z^{\\mu } (x(P)) + \\pdv{Z^{\\mu }}{x^{\\nu }} \\lambda \\xi^{\\nu }(x(P))\n- \\lambda \\pdv{\\xi^{\\mu }}{x^{\\nu }} Z^{\\nu } (x(P))  \n\\marginnote{Using \\eqref{eq:infinitesimal-point-transformation-active-approach}.}\n\\\\\n&= Z^{\\mu } (x(P)) + \\lambda \\mathscr{L}_\\xi Z^{\\mu } + \\order{\\lambda^2}\n\\,.\n\\end{align}\n\nThis has given us the transformation law for the vector field, in terms of the Lie derivative: \n%\n\\begin{align}\n\\mathscr{L}_\\xi (Z^{\\mu }) &= \\pdv{Z^{\\mu }}{x^{\\nu }} \\xi^{\\nu } - \\pdv{\\xi^{\\mu }}{x^{\\nu }} Z^{\\nu }  \\\\\n&= (\\xi^{\\nu } \\partial_{\\nu }) Z^{\\mu } - (Z^{\\nu }\\partial_\\nu ) \\xi^{\\mu }\n\\,.\n\\end{align}\n\nThe new \\(Z\\) is in the \\emph{same} coordinates as before. Setting \\(\\lambda = 1\\), we get \n%\n\\begin{align}\n\\widetilde{Z}^{\\mu } = Z^{\\mu } + \\mathscr{L}_\\xi Z^{\\mu }\n\\,.\n\\end{align}\n\nThe effect of a gauge transformation is that the new tensor is equal to the old one plus the Lie derivative (at the \\emph{same} coordinate point) of the vector field corresponding to the transformation. \nFor scalars, we have \\(\\mathscr{L}_\\xi S = S_{, \\mu } \\xi^{\\mu }\\); for vectors and tensors we have \n%\n\\begin{align}\n\\mathscr{L}_\\xi V_{\\mu } &= V_{\\mu , \\lambda } \\xi^{\\lambda } + \\xi^{\\lambda }_{, \\mu } V_{\\lambda }  \n\\marginnote{We would instead have a minus on the right if the vector \\(V\\) was contravariant.}\n\\\\\n\\mathscr{L}_\\xi T_{\\mu \\nu } &= T_{\\mu \\nu , \\lambda } \\xi^{\\lambda } + \\xi^{\\lambda }_{, \\mu } T_{\\lambda \\nu } + \\xi^{\\lambda }_{, \\nu } T_{\\mu \\lambda }\n\\,.\n\\end{align}\n\nWe get the same result if we replace the partial derivatives with covariant derivatives, as can be found from a direct calculation, as long as the connection is torsion-free (which is equivalent to the Christoffel symbols' lower indices being symmetric). 
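\n\nAs a concrete illustration (a \\texttt{sympy} sketch added as an aside, not part of the derivation), one can evaluate the Lie derivative formula for explicit vector fields and check the antisymmetry \\(\\mathscr{L}_\\xi Z = - \\mathscr{L}_Z \\xi \\):\n\\begin{verbatim}\nimport sympy as sp\n\nt, x = sp.symbols('t x')\ncoords = (t, x)\nxi = [t * x, t + x]      # a toy vector field xi^mu\nZ = [x**2, sp.sin(t)]    # a toy vector field Z^mu\n\n# (L_a b)^mu = a^nu d_nu b^mu - b^nu d_nu a^mu\ndef lie(a, b):\n    return [sum(a[n] * sp.diff(b[m], coords[n])\n                - b[n] * sp.diff(a[m], coords[n]) for n in range(2))\n            for m in range(2)]\n\nprint(lie(xi, Z))\nprint([sp.simplify(u + v) for u, v in zip(lie(xi, Z), lie(Z, xi))])  # [0, 0]\n\\end{verbatim}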
\n\nIf we consider the metric tensor \\(T_{\\mu \\nu } = g_{\\mu \\nu }\\), which is covariantly constant (\\(g_{\\mu \\nu ; \\rho } = 0\\)) we get \n%\n\\begin{align}\n\\mathscr{L}_\\xi g_{\\mu \\nu } = g_{\\lambda \\nu } \\xi^{\\lambda }_{; \\mu } + \\xi^{\\lambda }_{; \\nu } g_{\\mu \\lambda } = \\xi_{\\mu ; \\nu } + \\xi_{\\nu ; \\mu }\n\\,.\n\\end{align}\n\nIn two different gauges we can find \n%\n\\begin{align}\n\\Delta T &= T - T_0  \\\\\n\\widetilde{\\Delta T} &= \\widetilde{T} - T_0 \n\\,,\n\\end{align}\n%\nbut, using the transformation law for tensors, we know that\n%\n\\begin{align}\n\\widetilde{T} = T_0 + \\widetilde{\\Delta T}  &= T + \\mathscr{L}_\\xi T = T_0 + \\Delta T + \\mathscr{L}_\\xi T  \\\\\n\\widetilde{\\Delta T} &= \\Delta T + \\mathscr{L}_\\xi T\n\\,,\n\\end{align}\n%\nat least at linear order. At this order, however, we can also substitute the Lie derivative of \\(T\\) with that of \\(T_0 \\): \n%\n\\boxalign{\n\\begin{align}\n\\widetilde{\\Delta T} = \\Delta T + \\mathscr{L}_\\xi T_0 \n\\,.\n\\end{align}}\n\nThis is the crux of the gauge problem: the quantity \\(\\Delta T\\) is gauge dependent. \n\n\\subsection{Cosmological perturbations}\n\nWe start from the background FLRW metric: \n%\n\\begin{align}\n\\dd{s^2} &= g_{\\mu \\nu }^{(0)} \\dd{x^{\\mu }} \\dd{x^{\\nu }}  \\\\\n&= a^2(\\eta ) \\qty[- \\dd{\\eta^2} + \\dd{x^2} + \\dd{y^2} + \\dd{z^2}]\n\\,,\n\\end{align}\n%\nwhere, as usual, \\(\\eta \\) is the conformal time, defined by \\(\\dd{\\eta } = \\dd{t} / a(t)\\). \n\nThe perturbed metric reads \n%\n\\begin{align}\n\\dd{s^2} &= g_{\\mu \\nu } \\dd{x^{\\mu }} \\dd{x^{\\nu }}  \\\\\ng_{00} &= - a^2(\\eta ) \\qty[1 + 2 \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\psi^{(r)} (\\vec{x}, \\eta )]  \\\\\ng_{0i } = g_{i0} &= a^2(\\eta ) \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\omega_{i}^{(r)} (\\vec{x}, \\eta )  \\\\\ng_{ij } &= a^2(\\eta ) \\qty( \\qty[1 - 2 \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\phi^{(r)} (\\vec{x}, \\eta )] \\delta_{ij} + \\sum _{r=1}^{\\infty } \\frac{1}{r! 
\n\n\\subsection{Cosmological perturbations}\n\nWe start from the background FLRW metric: \n%\n\\begin{align}\n\\dd{s^2} &= g_{\\mu \\nu }^{(0)} \\dd{x^{\\mu }} \\dd{x^{\\nu }}  \\\\\n&= a^2(\\eta ) \\qty[- \\dd{\\eta^2} + \\dd{x^2} + \\dd{y^2} + \\dd{z^2}]\n\\,,\n\\end{align}\n%\nwhere, as usual, \(\eta \) is the conformal time, defined by \(\dd{\eta } = \dd{t} / a(t)\). \n\nThe perturbed metric reads \n%\n\\begin{align}\n\\dd{s^2} &= g_{\\mu \\nu } \\dd{x^{\\mu }} \\dd{x^{\\nu }}  \\\\\ng_{00} &= - a^2(\\eta ) \\qty[1 + 2 \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\psi^{(r)} (\\vec{x}, \\eta )]  \\\\\ng_{0i } = g_{i0} &= a^2(\\eta ) \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\omega_{i}^{(r)} (\\vec{x}, \\eta )  \\\\\ng_{ij } &= a^2(\\eta ) \\qty( \\qty[1 - 2 \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\phi^{(r)} (\\vec{x}, \\eta )] \\delta_{ij} + \\sum _{r=1}^{\\infty } \\frac{1}{r!} \\chi_{ij}^{(r)} (\\vec{x}, \\eta ))\n\\,,\n\\end{align}\n%\nwhere \(\psi\) is called the \emph{lapse function}, \(\omega_{i} \) is called the \emph{shift function}, and we take \(\chi_{ij}\) to be a \emph{traceless} perturbation: \(\chi^{i}_{i} = 0\).\n\n\\end{document}", "meta": {"hexsha": "79662abb2d074422c1002979dda696e02bf36d3f", "size": 7912, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_third_semester/early_universe/dec14.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_third_semester/early_universe/dec14.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_third_semester/early_universe/dec14.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 39.1683168317, "max_line_length": 445, "alphanum_fraction": 0.6134984833, "num_tokens": 2857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8080672135527632, "lm_q1q2_score": 0.5781106747815351}}
{"text": "\\documentclass[gr-notes.tex]{subfiles}\n\n\\begin{document}\n\n\\setcounter{chapter}{2}\n\n\\chapter{Tensor analysis in special relativity}\n\n\n\n\\setcounter{section}{2}\n\n\\section{The $\\binom{0}{1}$ tensors: one-forms}\n\nThe symbol $\\tilde{}$ is used to denote a one-form, as $\\vec{}$ is used to denote a vector. So $\\tilde{p}$ is a one-form, or a type $\\binom{0}{1}$ tensor.\n\n\n\n\\subsection*{Normal one-forms}\n\nLet $\\mathcal{S}$ be some surface.\n\n$\\forall \\vec{V}$ tangent to $\\mathcal{S}$, $\\tilde{p}(\\vec{V}) = 0 \\implies \\tilde{p}$ is normal to $\\mathcal{S}$.\n\nFurthermore, if $\\mathcal{S}$ is a \\emph{closed} surface \\& $\\tilde{p}$ is normal to $\\mathcal{S}$ \\& $\\forall \\vec{U}$ pointing outwards from $\\mathcal{S}$, $\\tilde{p}(\\vec{U}) > 0 \\implies \\tilde{p}$ is an outward normal one-form.\n\n\n\n\\setcounter{section}{4}\n\n\\section{Metric as a mapping of vectors into one-forms}\n\n\\subsection*{Normal vectors and unit normal one-forms}\n\n$\\vec{V}$ is normal to a surface if $\\tilde{V}$ is normal to the surface. They are said to be \\emph{unit normal} if their magnitude is $\\pm 1$, so $\\vec{V}^2 = \\tilde{V}^2 = \\pm 1$.\n\n\\begin{itemize}\n\\item A time-like unit normal has magnitude $-1$\n\\item A space-like unit normal has magnitude $+1$\n\\item A null normal cannot be a unit normal, because $\\vec{V}^2 = \\tilde{V}^2 = 0$\n\\end{itemize}\n\n\n\n\\setcounter{section}{9}\n\n\\section{Exercises}\n\n\\textbf{3}\n\n(a)\n\\begin{align*}\n  \\tilde{p}(A^\\alpha \\vec{e}_\\alpha) &=\n  A^\\alpha \\tilde{p}(\\vec{e}_\\alpha) =\n  \\tilde{p}(A^0 \\vec{e}_0 + A^1 \\vec{e}_1 + A^2 \\vec{e}_2 + A^3 \\vec{e}_3)\n  \\\\ &=\n  A^0 \\tilde{p}(\\vec{e}_0) + A^1 \\tilde{p}(\\vec{e}_1 +\n  A^2 \\tilde{p}(\\vec{e}_2) + A^3 \\tilde{p}(\\vec{e}_3 =\n  A^\\alpha \\tilde{p}(\\vec{e}_\\alpha) = A^\\alpha p_\\alpha \\in \\mathbb{R}\n\\end{align*}\n\n(b)\n\n\\begin{align*}\n  \\tilde{p} &\\underset{\\obs}{\\to} (-1, 1, 2, 0)\n  \\\\\n  \\vec{A}   &\\underset{\\obs}{\\to} (2, 1, 0, -1)\n  \\\\\n  \\vec{B}   &\\underset{\\obs}{\\to} (0, 2, 0, 0)\n\\end{align*}\n\n\\begin{align*}\n  \\tilde{p}(\\vec{A}) &=\n  -2 + 1 + 0 + 0 = -1\n  \\\\\n  \\tilde{p}(\\vec{B}) &=\n  0 + 2 + 0 + 0 = 2\n  \\\\\n  \\tilde{p}(\\vec{A} - 3 \\vec{B}) &=\n  \\tilde{p}(\\vec{A}) - 3 \\tilde{p}(\\vec{B}) =\n  -1 - 3 \\cdot 2 = -7\n\\end{align*}\n\n\\textbf{4}\nGiven the following vectors\n\\begin{align*}\n  \\vec{A} & \\underset{\\obs}{\\to} ( 2, 1, 1, 0) &\n  \\vec{B} & \\underset{\\obs}{\\to} ( 1, 2, 0, 0) \\\\\n  \\vec{C} & \\underset{\\obs}{\\to} ( 0, 0, 1, 1) &\n  \\vec{D} & \\underset{\\obs}{\\to} (-3, 2, 0, 0)\n\\end{align*}\n\n(Note that all parts were done with the assistance of \\texttt{numpy}.)\n\n(a) Show that they are linearly independent.\n\nWe do this by constructing a matrix, $\\vb{X}$, whose columns correspond to the four vectors. 
If the determinant of $\vb{X}$ is non-zero, then that means the vectors are linearly independent.\n%\n\\begin{displaymath}\n  \\det(\\vb{X}) =\n  \\det \\mqty(2 &  1 &  0 & -3 \\\\\n             1 &  2 &  0 &  2 \\\\\n             1 &  0 &  1 &  0 \\\\\n             0 &  0 &  1 &  0) =\n  -8\n\\end{displaymath}\n\n(b) Find the components of $\\tilde{p}$ if\n%\n\\begin{displaymath}\n  \\tilde{p}(\\vec{A}) =  1, \\quad\n  \\tilde{p}(\\vec{B}) = -1, \\quad\n  \\tilde{p}(\\vec{C}) = -1, \\quad\n  \\tilde{p}(\\vec{D}) =  0\n\\end{displaymath}\n\nWe do this by observing that $\\tilde{p}(\\vec{A}) = A^\\alpha p_\\alpha$, and so we have a system of four equations, which we can write in matrix form as\n%\n\\begin{align*}\n  \\mqty( \\vec{A} \\\\ \\vec{B} \\\\ \\vec{C} \\\\ \\vec{D} ) \\tilde{p} &=\n  \\mqty( 1 \\\\ -1 \\\\ -1 \\\\ 0 )\n  \\\\ \\implies\n  \\tilde{p} &=\n  \\mqty( \\vec{A} \\\\ \\vec{B} \\\\ \\vec{C} \\\\ \\vec{D} )^{-1}\n  \\mqty( 1 \\\\ -1 \\\\ -1 \\\\ 0 )\n  \\\\ &=\n  \\mqty( -\\frac{1}{4} \\\\ -\\frac{3}{8} \\\\ +\\frac{15}{8} \\\\ -\\frac{23}{8} ).\n\\end{align*}\n\n(c) Find $\\tilde{p}(\\vec{E})$, where $\\vec{E} \\to_\\obs (1, 1, 0, 0)$.\n\n\\begin{displaymath}\n  \\tilde{p}(\\vec{E}) = p_\\alpha E^\\alpha = -\\frac{5}{8}\n\\end{displaymath}\n\n(d) Determine whether $\\tilde{p}$, $\\tilde{q}$, $\\tilde{r}$, and $\\tilde{s}$ are linearly independent.\n\nWe do this by first setting up a system of equations for each of $\\tilde{q}$, $\\tilde{r}$, and $\\tilde{s}$, as was done for $\\tilde{p}$, and solving. I will refer to the matrix whose rows are $\\vec{A}$, $\\vec{B}$, $\\vec{C}$, and $\\vec{D}$ (the transpose of the matrix used in part (a)) as $\\vb{X}$.\n%\n\\begin{align*}\n  \\vb{X} \\tilde{q} &= \\mqty( +0 \\\\ +0 \\\\ +1 \\\\ -1 ) &\n  \\vb{X} \\tilde{r} &= \\mqty( +2 \\\\ +0 \\\\ +0 \\\\ +0 ) &\n  \\vb{X} \\tilde{s} &= \\mqty( -1 \\\\ -1 \\\\ +0 \\\\ +0 )\n  \\\\\n  \\tilde{q} &=\n  \\mqty( +\\frac{1}{4} \\\\ -\\frac{1}{8} \\\\ -\\frac{3}{8} \\\\ +\\frac{11}{8} ) &\n  \\tilde{r} &=\n  \\mqty( +0 \\\\ +0 \\\\ +2 \\\\ -2 ) &\n  \\tilde{s} &=\n  \\mqty( -\\frac{1}{4} \\\\ -\\frac{3}{8} \\\\ -\\frac{1}{8} \\\\ +\\frac{1}{8} )\n\\end{align*}\n%\nNow if the matrix whose columns are comprised of $\\tilde{p}$, $\\tilde{q}$, $\\tilde{r}$, and $\\tilde{s}$ has a non-zero determinant, then the four covectors must be linearly independent.\n%\n\\begin{displaymath}\n  \\det \\mqty( \\tilde{p} & \\tilde{q} & \\tilde{r} & \\tilde{s} ) = \\frac{1}{4},\n\\end{displaymath}\n%\nand so they are indeed linearly independent.\n
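\nSince \texttt{numpy} was used throughout, here is the corresponding sketch for parts (a) and (b):\n\\begin{verbatim}\nimport numpy as np\n\n# Columns of X are the vectors A, B, C, D from Problem 4.\nX = np.array([[2., 1., 0., -3.],\n              [1., 2., 0., 2.],\n              [1., 0., 1., 0.],\n              [0., 0., 1., 0.]])\nprint(np.linalg.det(X))          # -8.0, hence linearly independent\n\n# Part (b): rows A, B, C, D (i.e. X^T) give the system for p.\nprint(np.linalg.solve(X.T, [1., -1., -1., 0.]))\n# [-0.25 -0.375 1.875 -2.875]\n\\end{verbatim}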
\n\n\\textbf{6}\n\n(a) Show that $\\tilde{p} \\neq \\tilde{p}(\\vec{e}_\\alpha) \\tilde{\\lambda}^\\alpha$ for arbitrary $\\tilde{p}$.\n\nLet us choose $\\tilde{p} \\to_\\obs (0, 1, e, \\pi)$, as a counter-example.\n%\n\\begin{align*}\n  p_\\alpha \\tilde{\\lambda}^\\alpha &\\underset{\\obs}{\\to}\n  0 \\cdot (1, 1, 0, 0) +\n  1 \\cdot (1, -1, 0, 0) +\n  e \\cdot (0, 0, 1, -1) +\n  \\pi \\cdot (0, 0, 1, 1)\n  \\\\ &\\underset{\\obs}{\\to}\n  (1, -1, e+\\pi, \\pi-e)\n  \\underset{\\obs}{\\cancel\\to}\n  \\tilde{p}\n\\end{align*}\n\n(b) $\\tilde{p} \\to_\\obs (1, 1, 1, 1)$. Find $l_\\alpha$ such that\n%\n\\begin{displaymath}\n  \\tilde{p} = l_\\alpha \\tilde{\\lambda}^\\alpha\n\\end{displaymath}\n\nWe may do this with a simple matrix inversion. We define $\\vb\\Lambda$ to be the matrix whose rows are formed by $\\tilde{\\lambda}^\\alpha$; since $p_\\beta = l_\\alpha (\\tilde{\\lambda}^\\alpha)_\\beta$, the system involves its transpose.\n%\n\\begin{displaymath}\n  \\vb\\Lambda^T l = p \\implies\n  l = (\\vb\\Lambda^T)^{-1} p =\n  \\mqty( 1 \\\\ 0 \\\\ 0 \\\\ 1 )\n\\end{displaymath}\n\n\n\n\n\\textbf{8}\nDraw the basis one-forms $\\tilde{\\dd}t$ and $\\tilde{\\dd}x$ of frame $\\obs$.\n\nThey are\n%\n\\begin{align*}\n  \\tilde{\\dd}t &\\underset{\\obs}{\\to} (1, 0, 0, 0), \\\\\n  \\tilde{\\dd}x &\\underset{\\obs}{\\to} (0, 1, 0, 0),\n\\end{align*}\n%\nand they are shown in Figure \\ref{fig:ch3-problem-8}.\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=0.75\\textwidth]{img/ch3_problem_8}\n  \\caption{Problem 8: Basis one-forms of $\\obs$. $\\tilde{\\dd}t$ is given in blue and $\\tilde{\\dd}x$ in red.}\n  \\label{fig:ch3-problem-8}\n\\end{figure}\n\n\n\\textbf{9}\nAt the points $\\mathcal{P}$ and $\\mathcal{Q}$, estimate the components of the gradient $\\tilde{\\dd}T$.\n\nRecall that $\\tilde{\\dd}T \\to_\\obs \\qty(\\pdv{T}{x}, \\pdv{T}{y})$, and so $\\Delta T = \\tilde{\\dd}T_\\alpha \\Delta x^\\alpha = \\tilde{\\dd}T_x \\Delta x + \\tilde{\\dd}T_y \\Delta y$.\n\nNow if we move only in the $x$ direction from one of the points, we move some distance $\\Delta x$, change our temperature by $\\Delta T$, and $\\Delta y = 0$. Likewise for a movement in the $y$ direction. Thus we can say\n%\n\\begin{align*}\n  \\Delta T &= \\tilde{\\dd}T_x \\Delta x &\n  \\Delta T &= \\tilde{\\dd}T_y \\Delta y\n  \\\\\n  \\tilde{\\dd}T_x &= \\frac{\\Delta T}{\\Delta x} &\n  \\tilde{\\dd}T_y &= \\frac{\\Delta T}{\\Delta y}\n\\end{align*}\n%\nIn Figure \\ref{fig:ch3-problem-9}, from $\\mathcal{P}$ I move a distance $\\Delta x = 0.5$, which causes a temperature change of $\\Delta T = -7$, giving $\\tilde{\\dd}T_x = -14$. Then I move a distance $\\Delta y = 0.5$ and get the same temperature change of $\\Delta T = -7$, and so I conclude that at point $\\mathcal{P}$, $\\tilde{\\dd}T \\to_\\obs (-14, -14)$.\n\nAt $\\mathcal{Q}$, we are in a flat region where $T = 0$. If we move any non-zero distance $\\Delta x$ or $\\Delta y$, so long as it does not cross the $T = 0$ isotherm, we have a $\\Delta T = 0$, and thus $\\tilde{\\dd}T \\to_\\obs (0, 0)$.\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=0.75\\textwidth]{img/ch3_problem_9}\n  \\caption{Problem 9: Isotherms.}\n  \\label{fig:ch3-problem-9}\n\\end{figure}\n\n\n\\textbf{13}\nProve that $\\tilde{\\dd}f$ is normal to surfaces of constant $f$.\n\nLet $\\vec{V}$ be any vector tangent to a surface of constant $f$. Moving a small distance $\\Delta x^\\alpha = \\epsilon V^\\alpha$ along the surface produces no change in the value of $f$, so\n%\n\\begin{displaymath}\n  0 = \\Delta f =\n  \\epsilon \\pdv{f}{x^\\alpha} V^\\alpha =\n  \\epsilon \\, \\tilde{\\dd}f(\\vec{V}).\n\\end{displaymath}\n%\nSince $\\tilde{\\dd}f$ is defined to be normal to a surface if it is zero on every tangent vector, we have shown that $\\tilde{\\dd}f$ is normal to any surface of constant $f$.\n\n\n\n\\textbf{14}\n\n\\begin{align*}\n  \\tilde{p} &\\underset{\\obs}{\\to} ( 1, 1, 0, 0) &\n  \\tilde{q} &\\underset{\\obs}{\\to} (-1, 0, 1, 0)\n\\end{align*}\n\nProve by giving two vectors $\\vec{A}$ and $\\vec{B}$ as arguments that $\\tilde{p} \\otimes \\tilde{q} \\neq \\tilde{q} \\otimes \\tilde{p}$. 
Then find the components of $\\tilde{p} \\otimes \\tilde{q}$.\n\n\\begin{align*}\n  (\\tilde{p} \\otimes \\tilde{q})(\\vec{A}, \\vec{B}) &=\n  \\tilde{p}(\\vec{A}) \\tilde{q}(\\vec{B}) =\n  A^\\alpha p_\\alpha B^\\beta q_\\beta =\n  (A^0 + A^1) (-B^0 + B^2)\n  \\\\ &=\n  -A^0 B^0 + A^0 B^2 - A^1 B^0 + A^1 B^2,\n  \\\\\n  (\\tilde{q} \\otimes \\tilde{p})(\\vec{A}, \\vec{B}) &=\n  \\tilde{q}(\\vec{A}) \\tilde{p}(\\vec{B}) =\n  A^\\alpha q_\\alpha B^\\beta p_\\beta =\n  (-A^0 + A^2) (B^0 + B^1)\n  \\\\ &=\n  -A^0 B^0 - A^0 B^1 + A^2 B^0 + A^2 B^1.\n\\end{align*}\n%\nThe two expansions differ, so $\\otimes$ is not commutative. For a concrete pair of arguments, take $\\vec{A} \\to_\\obs (1, 0, 0, 0)$ and $\\vec{B} \\to_\\obs (0, 0, 1, 0)$: then $(\\tilde{p} \\otimes \\tilde{q})(\\vec{A}, \\vec{B}) = 1$ while $(\\tilde{q} \\otimes \\tilde{p})(\\vec{A}, \\vec{B}) = 0$.\n\nThe components of the outer product of two tensors are given by the products of the components of the individual tensors. Thus we can write the components as a $4 \\times 4$ matrix.\n%\n\\begin{displaymath}\n  (\\tilde{p} \\otimes \\tilde{q})_{\\alpha\\beta} =\n  p_\\alpha q_\\beta =\n  \\mqty(-1 &  0 &  1 &  0 \\\\\n        -1 &  0 &  1 &  0 \\\\\n         0 &  0 &  0 &  0 \\\\\n         0 &  0 &  0 &  0)\n\\end{displaymath}\n\n\n\n\n\\textbf{18}\n\n(a)\nFind the one-forms mapped by $\\vb{g}$ from\n%\n\\begin{align*}\n  \\vec{A} &\\underset{\\obs}{\\to} ( 1,  0, -1,  0), &\n  \\vec{B} &\\underset{\\obs}{\\to} ( 0,  1,  1,  0), \\\\\n  \\vec{C} &\\underset{\\obs}{\\to} (-1,  0, -1,  0), &\n  \\vec{D} &\\underset{\\obs}{\\to} ( 0,  0,  1,  1).\n\\end{align*}\n\nIn general,\n%\n\\begin{displaymath}\n  \\vec{V} \\underset{\\obs}{\\to} (V^0, V^1, V^2, V^3) \\implies\n  \\tilde{V} =\n  \\vb{g} \\vec{V} \\underset{\\obs}{\\to} (-V^0, V^1, V^2, V^3),\n\\end{displaymath}\n%\nand so\n%\n\\begin{align*}\n  \\tilde{A} &\\underset{\\obs}{\\to} (-1,  0, -1,  0), &\n  \\tilde{B} &\\underset{\\obs}{\\to} ( 0,  1,  1,  0), \\\\\n  \\tilde{C} &\\underset{\\obs}{\\to} ( 1,  0, -1,  0), &\n  \\tilde{D} &\\underset{\\obs}{\\to} ( 0,  0,  1,  1).\n\\end{align*}\n\n(b)\nFind the vectors mapped by $\\vb{g}$ from\n%\n\\begin{align*}\n  \\tilde{p} &\\underset{\\obs}{\\to} ( 3,  0, -1, -1), &\n  \\tilde{q} &\\underset{\\obs}{\\to} ( 1, -1,  1,  1), \\\\\n  \\tilde{r} &\\underset{\\obs}{\\to} ( 0, -5, -1,  0), &\n  \\tilde{s} &\\underset{\\obs}{\\to} (-2,  1,  0,  0).\n\\end{align*}\n%\nApplying the inverse metric, with components $\\eta^{\\alpha\\beta}$, has the same effect as before of negating the time component:\n%\n\\begin{align*}\n  \\vec{p} &\\underset{\\obs}{\\to} (-3,  0, -1, -1), &\n  \\vec{q} &\\underset{\\obs}{\\to} (-1, -1,  1,  1), \\\\\n  \\vec{r} &\\underset{\\obs}{\\to} ( 0, -5, -1,  0), &\n  \\vec{s} &\\underset{\\obs}{\\to} ( 2,  1,  0,  0).\n\\end{align*}\n\n\n\\textbf{20}\n\nIn Euclidean 3-space, vectors and covectors are usually treated as the same, because they transform in the same way. 
We will now prove this.\n\n\n(a)\nShow that $A^{\\bar\\alpha} = \\tensor{\\Lambda}{^{\\bar\\alpha}_\\beta} A^\\beta$ and $P_{\\bar\\beta} = \\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}} P_\\alpha$ are the same transformations if $\\{ \\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}} \\}$ is equal to the transpose of its inverse.\n\nWe can write that last statement as\n%\n\\begin{displaymath}\n  \\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}} =\n  ((\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}})^{-1})^T,\n\\end{displaymath}\n%\nand we know that\n%\n\\begin{displaymath}\n  (\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}})^{-1} =\n  \\tensor{\\Lambda}{^{\\bar\\beta}_{\\alpha}}.\n\\end{displaymath}\n%\nNow compare the two rules. The covector rule sums over the \\emph{first} index of $\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}}$, so in matrix form the covector components transform with the transpose, $(\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}})^T$. Transposing both sides of the hypothesis gives\n%\n\\begin{displaymath}\n  (\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}})^T =\n  (\\tensor{\\Lambda}{^{\\alpha}_{\\bar\\beta}})^{-1} =\n  \\tensor{\\Lambda}{^{\\bar\\beta}_{\\alpha}},\n\\end{displaymath}\n%\nwhich is exactly the matrix appearing in the vector rule, meaning the two transformations are the same.\n\n(b)\nThe metric has components $\\{ \\delta_{ij} \\}$. Prove that transformations between Cartesian coordinate systems must satisfy\n\\begin{displaymath}\n  \\delta_{\\bar{i}\\bar{j}} =\n  \\tensor{\\Lambda}{^k_{\\bar{i}}}\n  \\tensor{\\Lambda}{^l_{\\bar{j}}}\n  \\delta_{kl},\n\\end{displaymath}\nand that this implies that $\\tensor{\\Lambda}{^k_{\\bar{i}}}$ is an orthogonal matrix.\n\n\\begin{displaymath}\n  \\delta_{\\bar{i}\\bar{j}} =\n  \\vb{g}(\\vec{e}_{\\bar{i}}, \\vec{e}_{\\bar{j}}) =\n  \\vb{g}(\\tensor{\\Lambda}{^k_{\\bar{i}}} \\vec{e}_k,\n         \\tensor{\\Lambda}{^l_{\\bar{j}}} \\vec{e}_l) =\n  \\tensor{\\Lambda}{^k_{\\bar{i}}}\n  \\tensor{\\Lambda}{^l_{\\bar{j}}}\n  \\vb{g}(\\vec{e}_k, \\vec{e}_l) =\n  \\tensor{\\Lambda}{^k_{\\bar{i}}}\n  \\tensor{\\Lambda}{^l_{\\bar{j}}}\n  \\delta_{kl}\n\\end{displaymath}\n\nIn matrix form, $\\tensor{\\Lambda}{^k_{\\bar{i}}} \\tensor{\\Lambda}{^l_{\\bar{j}}} \\delta_{kl} = \\sum_k \\tensor{\\Lambda}{^k_{\\bar{i}}} \\tensor{\\Lambda}{^k_{\\bar{j}}} = \\delta_{\\bar{i}\\bar{j}}$ reads $\\Lambda^T \\Lambda = I$, which is precisely the statement that $\\tensor{\\Lambda}{^k_{\\bar{i}}}$ is an orthogonal matrix.\n\n\n\\textbf{21}\n\n(a) A region of the $t$--$x$ plane is bounded by lines $t=0$, $t=1$, $x=0$, and $x=1$. Within the plane, find the unit outward normal 1-forms and their vectors for each boundary line.\n\nI define unit outward normals as follows:\n\nLet $\\mathcal{S}$ be a closed surface. If, for each $\\vec{V}$ tangent to $\\mathcal{S}$, we have $\\tilde{p}(\\vec{V}) = 0$, then $\\tilde{p}$ is normal to $\\mathcal{S}$.\n\nIn addition, if, for each $\\vec{U}$ which points outwards from the surface, we have $\\tilde{p}(\\vec{U}) > 0$, then $\\tilde{p}$ is an outward normal.\n\nFurthermore, if $\\tilde{p}^2 = \\pm 1$, then it is a unit outward normal.\n\nFor the problem at hand, I define the region inside the four lines to be \\emph{Inside}, and the region outside to be \\emph{Outside}. For each of the four lines, I draw a vector $\\vec{V}$ tangent (parallel) to the line, and $\\vec{U}$ pointing outwards (See Figure \\ref{fig:ch3-problem-21a}).\n\nIt helps to look at $t = 0$ and $t = 1$ together, and likewise for $x$, so I will start with $t$. We start with an arbitrary $\\tilde{p} \\to_\\obs (p_0, p_1)$, and $\\vec{V} \\to_\\obs (0, V^1)$, where $V^1 \\neq 0$.\n\\begin{displaymath}\n  \\tilde{p}(\\vec{V}) =\n  p_0 \\cdot 0 + p_1 V^1 = 0\n  \\implies\n  p_1 = 0,\n\\end{displaymath}\nso $\\tilde{p} \\to_\\obs (p_0, 0)$ is a normal 1-form to both lines. 
Now we find the corresponding \\emph{unit} normal, by taking\n\\begin{displaymath}\n  \\tilde{p}^2 = \\pm 1 = -(p_0)^2 \\implies\n  \\tilde{p}^2 = -1 \\ \\& \\ p_0 = \\pm 1.\n\\end{displaymath}\n\nWhether we choose $p_0$ to be positive or negative now depends on the line we are looking at, and which direction is outward. For $t = 0$, an outward-pointing vector has components $\\vec{U} \\to_\\obs (-U^0, U^1)$, where $U^0 > 0$, since it points toward negative $t$.\n\\begin{displaymath}\n  \\tilde{p}(\\vec{U}) =\n  p_0 (-U^0) + 0 \\cdot U^1 > 0\n  \\implies\n  -p_0 U^0 > 0\n  \\implies\n  p_0 < 0,\n\\end{displaymath}\nso for $t = 0$ we have $\\tilde{p} \\to_\\obs (-1, 0)$, and likewise for $t = 1$ we have $\\tilde{p} \\to_\\obs (1, 0)$. To get the associated \\emph{vectors}, we apply the metric $\\eta^{\\alpha\\beta}$, giving us $\\vec{p} \\to_\\obs (1, 0)$ for $t = 0$ and $\\vec{p} \\to_\\obs (-1, 0)$ for $t = 1$.\n\nFor $x = 0$ and $x = 1$, we instead have $\\vec{V} \\to_\\obs (V^0, 0)$, and following the same steps as before, we conclude that: for $x = 0$, $\\tilde{p} \\to_\\obs (0, -1)$, $\\vec{p} \\to_\\obs (0, -1)$, and for $x = 1$, $\\tilde{p} \\to_\\obs (0, 1)$, $\\vec{p} \\to_\\obs (0, 1)$.\n\n\\begin{figure}[h]\n  \\centering\n  \n  \\caption{Problem 21.a}\n  \\label{fig:ch3-problem-21a}\n\\end{figure}\n\n(b)\nLet another region be bounded by the set of points $\\{ (1,0),(1,1),(2,1) \\}$. Find an outward normal for the null boundary and the associated vector.\n\n\n\n\n\\textbf{23}\n\n(a)\nProve that the set of all $\\binom{M}{N}$ tensors forms a vector space, $V$.\n\nLet $T$ be the set of all $\\binom{M}{N}$ tensors, $\\vb{p}, \\vb{q}, \\vb{r}, \\vb{s} \\in T$, $\\vec{A}$ an arbitrary argument, and $\\alpha \\in \\mathbb{R}$. For $T$ to be a vector space, we must define the operations of addition and scalar multiplication (amongst others).\n\n\\textbf{Addition:}\n\\begin{displaymath}\n  \\vb{s} = \\vb{p} + \\vb{q} \\implies\n  \\vb{s}(\\vec{A}) = \\vb{p}(\\vec{A}) + \\vb{q}(\\vec{A})\n\\end{displaymath}\n\n\\textbf{Scalar Multiplication:}\n\\begin{displaymath}\n  \\vb{r} = \\alpha \\vb{p} \\implies\n  \\vb{r}(\\vec{A}) = \\alpha \\vb{p}(\\vec{A})\n\\end{displaymath}\n\n\n(b)\n\nProve that a basis for $T$ is\n\\begin{displaymath}\n  \\{ \\vec{e}_\\alpha \\otimes \\ldots \\otimes \\vec{e}_\\gamma\n     \\otimes\n     \\tilde{\\omega}^\\mu \\otimes \\ldots \\otimes \\tilde{\\omega}^\\lambda \\}\n\\end{displaymath}\n\n\\textbf{Still working on it}\n\n\n\\textbf{24}\nGiven:\n\\begin{displaymath}\n  M^{\\alpha\\beta} \\to\n  \\mqty(\n     0 &  1 &  0 &  0 \\\\\n     1 & -1 &  0 &  2 \\\\\n     2 &  0 &  0 &  1 \\\\\n     1 &  0 & -2 &  0\n  )\n\\end{displaymath}\n\n\n(a)\nFind:\n\n(i)\n\\begin{displaymath}\n  M^{(\\alpha\\beta)} \\to\n  \\mqty(\n    0 & 1 & 1 & \\frac{1}{2} \\\\\n    1 & -1 & 0 & 1 \\\\\n    1 & 0 & 0 & -\\frac{1}{2} \\\\\n    \\frac{1}{2} & 1 & -\\frac{1}{2} & 0\n  );\n  \\quad\n  M^{[\\alpha\\beta]} \\to\n  \\mqty(\n    0 & 0 & -1 & -\\frac{1}{2} \\\\\n    0 & 0 & 0 & 1 \\\\\n    1 & 0 & 0 & \\frac{3}{2} \\\\\n    \\frac{1}{2} & -1 & -\\frac{3}{2} & 0\n  )\n\\end{displaymath}\n\n\n(ii)\n\\begin{displaymath}\n  \\tensor{M}{^\\alpha_\\beta} =\n  \\tensor{\\eta}{_{\\beta\\mu}} \\tensor{M}{^{\\alpha\\mu}} \\to\n  \\mqty(\n    0 & 1 & 0 & 0 \\\\\n    -1 & -1 & 0 & 2 \\\\\n    -2 & 0 & 0 & 1 \\\\\n    -1 & 0 & -2 & 0\n  )\n\\end{displaymath}\n\n\n(iii)\n\\begin{displaymath}\n  \\tensor{M}{_\\alpha^\\beta} =\n  \\tensor{\\eta}{_{\\alpha\\mu}} \\tensor{M}{^{\\mu\\beta}} \\to\n  \\mqty(\n    0 & -1 & 0 & 0 \\\\\n    1 & -1 & 0 & 2 \\\\\n    2 & 0 & 0 & 1 \\\\\n 
   1 & 0 & -2 & 0\n  )\n\\end{displaymath}\n\n\n(iv)\n\\begin{displaymath}\n  \\tensor{M}{_{\\alpha\\beta}} =\n  \\tensor{\\eta}{_{\\beta\\mu}} \\tensor{M}{_\\alpha^\\mu} \\to\n  \\mqty(\n     0 & -1 &  0 &  0 \\\\\n    -1 & -1 &  0 &  2 \\\\\n    -2 &  0 &  0 &  1 \\\\\n    -1 &  0 & -2 &  0\n  )\n\\end{displaymath}\n\n(b) Does it make sense to separate the $\\binom{1}{1}$ tensor with components $\\tensor{M}{^\\alpha_\\beta}$ into symmetric and antisymmetric parts?\n\nNo, it would not make sense. For one, the notation for (anti)symmetric tensors does not even allow one to write it sensibly ($\\tensor{M}{^{(\\alpha}_{\\beta)}}$). More importantly, one slot takes a one-form and the other a vector, so it does not make sense to interchange them.\n\n(c)\n\n\\begin{displaymath}\n  \\tensor{\\eta}{^\\alpha_\\beta} =\n  \\tensor{\\eta}{^{\\alpha\\mu}} \\tensor{\\eta}{_{\\beta\\mu}} =\n  \\mqty(\\dmat[0]{-1,1,1,1}) \\mqty(\\dmat[0]{-1,1,1,1}) =\n  \\mqty(\\imat{4}) =\n  \\tensor{\\delta}{^\\alpha_\\beta}\n\\end{displaymath}\n\n\n\n\n\n\\textbf{31}\n\n\\textbf{Still working on it}\n\n\n\n(\\textbf{33})\n\n\n\\textbf{34}\nDefine double-null coordinates $u = t-x$, $v = t + x$ in Minkowski space.\n\n(a)\nLet $\\vec{e}_u$ be the vector connecting the $(u,v,y,z)$ coordinates $(0,0,0,0)$ and $(1,0,0,0)$, and let $\\vec{e}_v$ be the vector connecting $(0,0,0,0)$ and $(0,1,0,0)$. Find $\\vec{e}_u$ and $\\vec{e}_v$ in terms of $\\vec{e}_t$ and $\\vec{e}_x$, and plot the basis vectors in a spacetime diagram of the $t$--$x$ plane.\n\n\\begin{align*}\n  u &= t - x = 0 \\implies t =  +x &\n  v &= t + x = 0 \\implies t =  -x\n  \\\\\n  u &= t - x = 1 \\implies t = 1+x &\n  v &= t + x = 1 \\implies t = 1-x\n\\end{align*}\n\nWe draw the vectors $\\vec{e}_u$ and $\\vec{e}_v$ in Figure \\ref{fig:ch3-problem-34a}, such that they point from the appropriate points of intersection on these lines of constant $u$ and $v$. From this it is obvious that $\\vec{e}_v + \\vec{e}_u = \\vec{e}_t$, and that $\\vec{e}_v - \\vec{e}_u = \\vec{e}_x$, or likewise $\\vec{e}_v = \\vec{e}_t - \\vec{e}_u$ and $\\vec{e}_u = \\vec{e}_v - \\vec{e}_x$. 
This is a system of two equations in two unknowns, which we solve by substitution.\n\n\\begin{align*}\n  \\vec{e}_v &=\n  \\vec{e}_t - \\vec{e}_v + \\vec{e}_x \\implies &\n  \\vec{e}_v &=\n  \\frac{1}{2} (\\vec{e}_t + \\vec{e}_x),\n  \\\\\n  \\vec{e}_u &=\n  \\frac{1}{2} (\\vec{e}_t + \\vec{e}_x) - \\vec{e}_x \\implies &\n  \\vec{e}_u &=\n  \\frac{1}{2} (\\vec{e}_t - \\vec{e}_x).\n\\end{align*}\n\n\\begin{figure}[b]\n  \\centering\n  \\includegraphics[width=0.75\\textwidth]{img/ch3_problem_34a}\n  \\caption{Problem 34a: Spacetime diagram of double-null coordinate basis\n    vectors in $t$--$x$ plane.}\n  \\label{fig:ch3-problem-34a}\n\\end{figure}\n\n(b)\nShow that $\\vec{e}_\\alpha$, $\\alpha \\in \\{ u, v, y, z \\}$ form a basis for vectors in Minkowski space.\n\n\\begin{align*}\n  \\vec{A} = A^\\alpha \\vec{e}_\\alpha &=\n  A^u \\vec{e}_u + A^v \\vec{e}_v + A^y \\vec{e}_y + A^z \\vec{e}_z\n  \\\\ &=\n  \\frac{A^u}{2} (\\vec{e}_t - \\vec{e}_x) +\n  \\frac{A^v}{2} (\\vec{e}_t + \\vec{e}_x) +\n  A^y \\vec{e}_y + A^z \\vec{e}_z\n  \\\\ &=\n  \\frac{1}{2} (A^v + A^u) \\vec{e}_t +\n  \\frac{1}{2} (A^v - A^u) \\vec{e}_x +\n  A^y \\vec{e}_y + A^z \\vec{e}_z\n\\end{align*}\n%\nIf we let $A^t = \\frac{1}{2} (A^v + A^u)$ and $A^x = \\frac{1}{2} (A^v - A^u)$, then\n%\n\\begin{displaymath}\n  \\vec{A} =\n  A^\\alpha \\vec{e}_\\alpha =\n  A^t \\vec{e}_t + A^x \\vec{e}_x + A^y \\vec{e}_y + A^z \\vec{e}_z\n\\end{displaymath}\n\n(c)\nFind the components of the metric tensor, $\\vb{g}$ in this new basis.\n\nTo make this concise, we will begin with some definitions. Let $w \\in \\{ u, v \\}$, and $q \\in \\{ y, z \\}$. We also define\n%\n\\begin{displaymath}\n  \\lambda(w) \\equiv\n  \\begin{dcases*}\n    -1, & if $w = u$,\n    \\\\\n    +1, & if $w = v$.\n  \\end{dcases*}\n\\end{displaymath}\n%\nIt follows that\n%\n\\begin{displaymath}\n  \\vec{e}_w = \\frac{1}{2} (\\vec{e}_t + \\lambda \\vec{e}_x).\n\\end{displaymath}\n%\nNow we can show that\n%\n\\begin{align*}\n  g_{ww} =\n  \\vec{e}_w \\cdot \\vec{e}_w &=\n  \\frac{1}{2} (\\vec{e}_t + \\lambda \\vec{e}_x) \\cdot\n  \\frac{1}{2} (\\vec{e}_t + \\lambda \\vec{e}_x)\n  \\\\ &=\n  \\frac{1}{4} [\n    \\vec{e}_t \\cdot \\vec{e}_t +\n    2 \\lambda (\\vec{e}_t \\cdot \\vec{e}_x) +\n    \\lambda^2 (\\vec{e}_x \\cdot \\vec{e}_x)\n  ]\n  \\\\ &=\n  \\frac{1}{4} (\n    -1 + 2 \\lambda \\cdot 0 + 1 \\cdot 1\n  ) =\n  0,\n\\end{align*}\n%\nso $g_{uu} = g_{vv} = 0$.\n\nFor the $u$ and $v$ cross terms, we have\n%\n\\begin{align*}\n  g_{uv} = g_{vu} =\n  \\vec{e}_u \\cdot \\vec{e}_v &=\n  \\frac{1}{2} (\\vec{e}_t - \\vec{e}_x) \\cdot\n  \\frac{1}{2} (\\vec{e}_t + \\vec{e}_x)\n  \\\\ &=\n  \\frac{1}{4} [\n    \\vec{e}_t \\cdot \\vec{e}_t -\n    \\vec{e}_x \\cdot \\vec{e}_x\n  ]\n  \\\\ &=\n  \\frac{1}{4} ( -1 - 1 ) =\n  -\\frac{1}{2},\n\\end{align*}\n%\nwhere the cross terms $\\pm \\vec{e}_t \\cdot \\vec{e}_x$ cancel.\n\nFor the $w$ with $y$ and $z$ cross terms we have\n%\n\\begin{align*}\n  g_{wq} =\n  \\vec{e}_w \\cdot \\vec{e}_q &=\n  \\frac{1}{2} (\\vec{e}_t + \\lambda \\vec{e}_x) \\cdot \\vec{e}_q\n  \\\\ &=\n  \\frac{1}{2} [ \\vec{e}_t \\cdot \\vec{e}_q + \\lambda (\\vec{e}_x \\cdot \\vec{e}_q) ]\n  \\\\ &=\n  0,\n\\end{align*}\n%\nsince $\\vec{e}_q$ is orthogonal to both $\\vec{e}_t$ and $\\vec{e}_x$; thus $g_{uy} = g_{vy} = g_{uz} = g_{vz} = 0$. 
We also already know $g_{yy} = g_{zz} = 1$, and $g_{yz} = g_{zy} = 0$, so we can write the components of the metric tensor in this new coordinate system as\n%\n\\begin{displaymath}\n  g_{\\alpha\\beta} =\n  \\mqty(\n               0 & -\\frac{1}{2} & 0 & 0 \\\\\n    -\\frac{1}{2} &            0 & 0 & 0 \\\\\n               0 &            0 & 1 & 0 \\\\\n               0 &            0 & 0 & 1\n  ).\n\\end{displaymath}\n\n(d)\nShow that $\\vec{e}_u$ and $\\vec{e}_v$ are null, but not orthogonal.\n\n\\begin{align*}\n  \\vec{e}_u \\cdot \\vec{e}_u &= g_{uu} = 0\n  \\implies \\vec{e}_u \\text{ is null}\n  \\\\\n  \\vec{e}_v \\cdot \\vec{e}_v &= g_{vv} = 0\n  \\implies \\vec{e}_v \\text{ is null}\n  \\\\\n  \\vec{e}_u \\cdot \\vec{e}_v &= g_{uv} = -\\frac{1}{2} \\neq 0\n  \\implies \\vec{e}_u \\text{ and } \\vec{e}_v \\text{ are not orthogonal.}\n\\end{align*}\n\n(e)\nCompute the four one-forms $\\tilde\\dd{u}$, $\\tilde\\dd{v}$, $\\vb{g}(\\vec{e}_u,)$, and $\\vb{g}(\\vec{e}_v,)$ in terms of $\\tilde\\dd{t}$ and $\\tilde\\dd{x}$.\n\n\\begin{displaymath}\n  \\tilde\\dd\\phi \\to_\\obs\n  \\qty( \\pdv{\\phi}{t}, \\pdv{\\phi}{x}, \\pdv{\\phi}{y}, \\pdv{\\phi}{z} ),\n\\end{displaymath}\n%\nso\n\\begin{align*}\n  \\tilde\\dd{t} &\\to_\\obs (1, 0, 0, 0), &\n  \\tilde\\dd{x} &\\to_\\obs (0, 1, 0, 0),\n  \\\\\n  \\tilde\\dd{u} &\\to_\\obs (1, -1, 0, 0), &\n  \\tilde\\dd{v} &\\to_\\obs (1,  1, 0, 0),\n\\end{align*}\n%\nfrom which it is obvious that\n\\begin{align*}\n  \\tilde\\dd{u} &=\n  \\tilde\\dd{t} - \\tilde\\dd{x}, &\n  \\tilde\\dd{v} &=\n  \\tilde\\dd{t} + \\tilde\\dd{x}.\n\\end{align*}\n%\nFor the remaining two one-forms, we lower an index with $\\eta_{\\alpha\\beta}$, using $\\vec{e}_u \\to_\\obs \\frac{1}{2}(1, -1, 0, 0)$ and $\\vec{e}_v \\to_\\obs \\frac{1}{2}(1, 1, 0, 0)$:\n\\begin{align*}\n  \\vb{g}(\\vec{e}_u,) &\\to_\\obs \\frac{1}{2} (-1, -1, 0, 0) =\n  -\\frac{1}{2} (\\tilde\\dd{t} + \\tilde\\dd{x}) = -\\frac{1}{2} \\tilde\\dd{v}, &\n  \\vb{g}(\\vec{e}_v,) &\\to_\\obs \\frac{1}{2} (-1, 1, 0, 0) =\n  -\\frac{1}{2} (\\tilde\\dd{t} - \\tilde\\dd{x}) = -\\frac{1}{2} \\tilde\\dd{u}.\n\\end{align*}\n\n\n\n\n\\end{document}", "meta": {"hexsha": "39c5126edcc43d914854c12bbc1b2fb058e3567b", "size": 23869, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/textbook/tex/gr-ch3-notes.tex", "max_stars_repo_name": "dwysocki/ASTP-760", "max_stars_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/textbook/tex/gr-ch3-notes.tex", "max_issues_repo_name": "dwysocki/ASTP-760", "max_issues_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/textbook/tex/gr-ch3-notes.tex", "max_forks_repo_name": "dwysocki/ASTP-760", "max_forks_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6405648267, "max_line_length": 441, "alphanum_fraction": 0.5749717206, "num_tokens": 10121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5781106731281261}}
{"text": "% !TeX root = ../main.tex\n% This document needs to include:\n% - An introduction for the concern.\n% - Bases for section, specifically EBNF in previous section.\n% - Difference between EBNF CFG.\n% - HCL as CFL and HCL's CFG.\n% - Proof that HCL is a CFL.\n% - Proof that shows whether HCL is a regular language or not.\n% - Description of identifiers and types in HCL using computational models or Regex.\n% - Conclusion.\n\\section{Mathematical Syntax Theory}\nThis section contains the concise definitions of HCL's syntax.\nFirst the proper CFG recognizing HCL is presented.\nThe CFG is used to prove that HCL is a Context-free Language (CFL). \nAfterwards the most specific computational model for describing HCL is identified and presented. \nFinally the token categories are described in a computational model. \nThe conclusions of this section provides the basis for implementing and testing the parser of the HCL compiler. \nThe majority of this section is based on the EBNF details touched upon in appendix \\ref{AppendixEBNF}.\n\n\\subsection{HCL as an CFL}\nThis subsection investigates whether HCL is a CFL. \nTo determine this, a Pushdown Automaton (PDA) is constructed.\nThe reason being that the set of CFL's is equivalent to the set of languages recognized by a PDA.\nTo construct the PDA the following rules are used:\n\\begin{center}\n\t\\begin{itemize}\n\t\t\\item The marker symbol \\$ is first placed on the stack.\n\t\t\\item Second, the following steps are repeated until all production rules have been handled.\n\t\t\\begin{itemize}\n\t\t\t\\item If the top of the stack is a variable symbol V. \n\t\t\tThe machine will nondeterministically select one of the right production rules of V and replace V with that.   \n\t\t\t\\item If the top of stack is a terminal symbol t. \n\t\t\tThe next character on the input string is read and compared to t. \n\t\t\tIf matched, repeat. \n\t\t\tIf not, reject the branch.\n\t\t\t\\item If the top of stack is the marker symbol \\$, enter the accept state of the automaton. \n\t\t\tThis will accept the input string.\n\t\t\\end{itemize}\n\t\\end{itemize}\n\\end{center}\n\nTo create the automaton, a start state needs to be added at the beginning of the automaton. \nA transition rule on the form $\\epsilon,\\epsilon->S\\$$ needs to be added between the start state and a new state called the loop state.\nHere S is a variable.\nFrom the loop state to the accept state there needs to be a transition rule on the form $\\epsilon,\\$->\\epsilon$.\nTo convert the EBNF's production rules to a PDA, two conversion rules need to be followed.\n\nFor each production rule of the form $A->w$, where A is a variable and w is a string the transition rule $\\epsilon,A->w$ needs to be added to the automaton from the loop state back to the loop state.\nFor each terminal a transition rule $a,a->\\epsilon$ needs to be added to the automaton from the loop state back to the loop state.\nThe final non-deterministic PDA (NPDA) can be seen in Appendix \\ref{NPDA} on page \\pageref{NPDA}.\nThus concludes the proof.\n\n\\subsection{Regularity of HCL}\nHCL is as CFL recognized by the CFG described in the previous subsections.\nTherefore, it is now relevant to investigate whether HCL is a regular language. \nIf that is the case, it is possible to create a Finite State Automaton (FSA) that recognizes the language.\n\nSince the set of regular languages is a subset of the context free languages, this may be the case.\nA FSA is more trivial to implement than a NPDA. 
\nSince HCL is a relatively complex language, it would be unrealistic to work through the entire language sequentially while trying to construct an FSA that recognizes it.\n\nSince the regular languages are closed under intersection, it is sufficient to locate a subset of HCL, cut out by a regular pattern, that is not itself a regular language.\nThe production rules described in the EBNF state that a subset of HCL might consist of an arbitrary number of equal amounts of left and right parentheses.\nThis subset, referred to as $HCL_{subset}$, can be described as the set:\n\\begin{center}\n\t$HCL_{subset} = \\{a^nbc^n | a \\in \\{(\\} \\wedge b \\in P \\wedge c \\in \\{)\\}\\}$\n\\end{center}\nwhere P is an arbitrary set of elements that are legal in the context.\n\nTo prove that $HCL_{subset}$ is not a regular language, the pumping lemma is used.\nThe pumping lemma states that:\n\\begin{center}\n\tIf A is a regular language, there exists a number $p>0$ such that every string $s \\in A$ with $|s| \\geq p$ can be divided as $s = xyz$ so that the following properties hold:\n\t\\begin{itemize}\n\t\t\\item $xy^iz \\in A$ for $i \\geq 0$\n\t\t\\item $|y| > 0$\n\t\t\\item $|xy| \\leq p$\n\t\\end{itemize}\n\\end{center}\n\nFirst the string is chosen: let $s = a^pbc^p$, i.e.\\ $p$ left parentheses, followed by some legal b, followed by $p$ right parentheses, so that $|s| > p$.\n\nNow consider any division $s = xyz$ satisfying the second and third properties. Since $|xy| \\leq p$, both x and y lie within the leading left parentheses, and since $|y| > 0$, y consists of at least one left parenthesis.\n\nThe first property then fails: for $i = 0$, the string $xy^0z$ has lost at least one left parenthesis but no right parentheses, so it no longer contains an equal amount of left and right parentheses and is not in $HCL_{subset}$.\n\nThis concludes the proof by contradiction.\nSince $HCL_{subset}$ does not uphold the property required of a regular language, it is not regular, and hence neither is HCL.\nTherefore HCL cannot be described by an FSA, and the implementation will not be trivial.\n\n\\subsection{Types and Identifiers}\nSo far the syntax of HCL has been properly defined as a CFL, meaning that all variables, terminals and production rules for recognizing HCL have been described in a computational model, as seen in appendix \\ref{NPDA} on page \\pageref{NPDA}.\nIt is now relevant to define how the types and identifiers in HCL can be described, or recognized, by a computational model.\nThe goal is to formally describe the languages $L_{none}$, $L_{bool}$, $L_{number}$, $L_{text}$, which are the type languages, and the identifier language $L_{identifiers}$.\n\nAn alphabet is a finite set of symbols, a string is a finite sequence of symbols, and a language is a set of strings.\nTo describe the languages mentioned above, the alphabets used in the informal and formal notations need to be defined. 
\nThese can be seen in Table 3.1.\n\n\\begin{table}[!htb]\n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\t\\textbf{Alphabet} & \\textbf{Set}                            \\\\ \\hline\n\t\tT                 & \\{True\\}                                \\\\ \\hline\n\t\tF                 & \\{False\\}                               \\\\ \\hline\n\t\t$Z^+$             & The set of all positive integers        \\\\ \\hline\n\t\t$Z^-$             & The set of all negative integers        \\\\ \\hline\n\t\tR                 & The set of all real numbers             \\\\ \\hline\n\t\tD                 & $Z^+ \\cup Z^-$                          \\\\ \\hline\n\t\t$S_p$             & The set of all special characters      \\\\ \\hline\n\t\t$L_l$             & The set of all lower-case letters       \\\\ \\hline\n\t\t$L_u$             & The set of all upper-case letters       \\\\ \\hline\n\t\tC                 & $S_p \\cup L_l \\cup L_u \\cup D$          \\\\ \\hline\n\t\t$R_{eserved}$     & The set of all reserved keywords, special characters and types \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table containing all utility alphabets used to formally describe the type and identifier languages}\n\\end{table}\n\nBy using set-builder notation, the languages can now be defined as shown in Table 3.2.\n\n\\begin{table}[!htb]\n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\t\\textbf{Name}     & \\textbf{Set}                                    \\\\ \\hline\n\t\t$L_{none}$        & \\{w | w $\\in$ \\{None\\}\\}                        \\\\ \\hline\n\t\t$L_{bool}$        & \\{w | w $\\in$ (T $\\cup$ F)\\}                    \\\\ \\hline\n\t\t$L_{number}$      & \\{w | w $\\in$ R\\}                               \\\\ \\hline\n\t\t$L_{text}$        & \\{\"w\" | all strings from arbitrary alphabets\\}  \\\\ \\hline\n\t\t$L_{identifiers}$ & \\{w | w $\\in (L_{text} - \\{\", \"\\}) \\wedge w \\notin R_{eserved}$\\} \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Table containing the type and identifier languages described in set-builder notation}\n\\end{table}\n\nIntuitively, all of the languages defined above are regular languages, except possibly $L_{identifiers}$, since that language would need to make sure that no strings are an element of $S_p$.\nThis is less immediate because a regular language cannot keep track of more than one symbol in terms of counting and iteration.\n\nSince all of the languages in Table 3.2 (except for $L_{identifiers}$) consist only of arbitrary concatenations of elements of various alphabets, they are indeed regular. 
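\n\nTo make the counting distinction concrete, the following minimal sketch (in Python; the helper names and the simplified number pattern are our own stand-ins, not HCL's actual lexer) contrasts the two situations: membership in a regular token language is decided with constant memory, while membership in the non-regular set $\\{a^nbc^n\\}$ requires an unbounded counter.\n\\begin{verbatim}\nimport re\n\n# Simplified number-token pattern (an illustrative stand-in,\n# not HCL's actual lexer): digits with an optional sign and\n# an optional decimal part.\nNUMBER = re.compile(r'-?[0-9]+(\\.[0-9]+)?')\n\ndef is_number_token(s: str) -> bool:\n    # Regular language: constant memory, no counting needed.\n    return NUMBER.fullmatch(s) is not None\n\ndef in_paren_subset(s: str) -> bool:\n    # Membership in { (^n b )^n : n > 0, b nonempty and without\n    # parentheses }. The counter n is unbounded, which is exactly\n    # the resource a finite automaton lacks.\n    n = 0\n    while n < len(s) and s[n] == '(':\n        n += 1\n    m = 0\n    while m < len(s) - n and s[len(s) - 1 - m] == ')':\n        m += 1\n    body = s[n:len(s) - m]\n    return (n > 0 and n == m and body != ''\n            and '(' not in body and ')' not in body)\n\nassert is_number_token('42.5')\nassert in_paren_subset('((x))')\nassert not in_paren_subset('((x)')\n\\end{verbatim}\n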
\n\nNow that it has been determined that the languages are regular, a computational model can be chosen to describe them more specifically.\n\nFor the sake of convenience, $R^+$ will be used as a shorthand for $RR^*$ \\footnote{Where * is Kleene star}, meaning that $R^+ \\cup \\{\\epsilon\\} = R^*$, since $R^*$ consists of all strings that are zero or more concatenations of strings from R, and $RR^*$ consists of all strings that are one or more concatenations of strings from R.\nThe resulting regular expressions are shown in Table 3.3.\n\n\\begin{table}[!htb]\n\t\\centering\n\t\\begin{tabular}{|l|l|}\n\t\t\\hline\n\t\t\\textbf{Name}     & \\textbf{Regular Expression}         \\\\ \\hline\n\t\t$L_{none}$        & none                                \\\\ \\hline\n\t\t$L_{bool}$        & True $\\cup$ False                   \\\\ \\hline\n\t\t$L_{number}$      & $D.(D^+)^*$                       \\\\ \\hline\n\t\t$L_{text}$        & $\"C^*\"$                               \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Regular expressions describing the type languages}\n\\end{table}\n\nIn conclusion, HCL is a context-free language, meaning that it can be described by an NPDA.\nSome subsets of HCL are non-regular, meaning that HCL cannot be a regular language.\nThe token category type languages are all regular languages. \nThese can therefore be described using regular expressions as presented in this section.\n\n\n\n", "meta": {"hexsha": "082070ceeb59c0459ca6ecd1ad91e695db6cc89e", "size": 10237, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/3.Theory/SyntaxTheory.tex", "max_stars_repo_name": "C0DK/P3-HCL", "max_stars_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-02-08T12:34:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-09T11:33:53.000Z", "max_issues_repo_path": "report/3.Theory/SyntaxTheory.tex", "max_issues_repo_name": "C0DK/P3-HCL", "max_issues_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2018-02-17T14:30:17.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-13T17:37:09.000Z", "max_forks_repo_path": "report/3.Theory/SyntaxTheory.tex", "max_forks_repo_name": "C0DK/P3-HCL", "max_forks_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-07-04T13:55:53.000Z", "max_forks_repo_forks_event_max_datetime": "2018-07-04T13:55:53.000Z", "avg_line_length": 58.8333333333, "max_line_length": 318, "alphanum_fraction": 0.6937579369, "num_tokens": 2638, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971211, "lm_q2_score": 0.7154239836484143, "lm_q1q2_score": 0.5781106715892271}}
{"text": "\\chapter{Projective varieties}\nHaving studied affine varieties in $\\Aff^n$, we now consider $\\CP^n$.\nWe will also make it into a baby ringed space\nin the same way as with $\\Aff^n$.\n\n\\section{Graded rings}\n\\prototype{$\\CC[x_0, \\dots, x_n]$ is a graded ring.}\nWe first take the time to state what a graded ring is,\njust so that we have this language to use (now and later).\n\n\\begin{definition}\n\tA \\vocab{graded ring} $R$ is a ring with the following additional structure:\n\tas an abelian group, it decomposes as\n\t\\[ R = \\bigoplus_{d \\ge 0} R^d \\]\n\twhere $R^0$, $R^1$, \\dots, are abelian groups.\n\tThe ring multiplication has the property that\n\tif $r \\in R^d$ and $s \\in R^e$, we have $rs \\in R^{d+e}$.\n\tElements of an $R^d$ are called \\vocab{homogeneous elements};\n\twe write ``$d = \\deg r$'' to mean ``$r \\in R^d$''.\n\n\tWe denote by $R^+$ the ideal $R \\setminus R^0$ generated by\n\tthe homogeneous elements of nonzero degree,\n\tand call it the \\vocab{irrelevant ideal}.\n\\end{definition}\n\\begin{remark}\n\tFor experts: all our graded rings are commutative with $1$.\n\\end{remark}\n\\begin{example}\n\t[Examples of graded rings]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The ring $\\CC[x]$ is graded by degree: as abelian groups,\n\t\t$\\CC[x] \\cong \\CC \\oplus x\\CC \\oplus x^2\\CC \\oplus \\dots$.\n\t\t\\ii More generally, the polynomial ring $\\CC[x_0, \\dots, x_n]$\n\t\tis graded by degree.\n\t\\end{enumerate}\n\\end{example}\n\\begin{abuse}\n\tThe notation $\\deg r$ is abusive in the case $r = 0$;\n\tnote that $0 \\in R^d$ for every $d$.\n\tSo it makes sense to talk about ``the'' degree of $r$\n\texcept when $r = 0$.\n\\end{abuse}\n\nWe will frequently refer to homogeneous ideals:\n\\begin{definition}\n\tAn ideal $I \\subseteq \\CC[x_0, \\dots, x_n]$ is \\vocab{homogeneous}\n\tif it can be written as $I = (f_1, \\dots, f_m)$\n\twhere each $f_i$ is a homogeneous polynomial.\n\\end{definition}\n\\begin{remark}\n\tIf $I$ and $J$ are homogeneous,\n\tthen so are $I+J$, $IJ$, $I \\cap J$, $\\sqrt I$.\n\\end{remark}\n\\begin{lemma}[Graded quotients are graded too]\n\tLet $I$ be a homogeneous ideal of a graded ring $R$.\n\tThen\n\t\\[ R/I = \\bigoplus_{d \\ge 0} R^d / (R^d \\cap I) \\]\n\trealizes $R/I$ as a graded ring.\n\\end{lemma}\nSince these assertions are just algebra,\nwe omit their proofs here.\n\\begin{example}\n\t[Example of a graded quotient ring]\n\tLet $R = \\CC[x,y]$ and set $I = (x^3, y^2)$.\n\tLet $S = R/I$. Then\n\t\\begin{align*}\n\t\tS^0 &= \\CC \\\\\n\t\tS^1 &= \\CC x \\oplus \\CC y \\\\\n\t\tS^2 &= \\CC x^2 \\oplus \\CC xy \\\\\n\t\tS^3 &= \\CC x^2y \\\\\n\t\tS^d &= 0 \\qquad \\forall d \\ge 4.\n\t\\end{align*}\n\tSo in fact $S = R/I$ is graded,\n\tand is a six-dimensional $\\CC$-vector space.\n\\end{example}\n\n\n\\section{The ambient space}\n\\prototype{Perhaps $\\Vp(x^2+y^2-z^2)$.}\nThe set of points we choose to work with is $\\CP^n$ this time,\nwhich for us can be thought of as the set of $n$-tuples\n\\[ \\left( x_0 : x_1 : \\dots : x_n \\right) \\]\nnot all zero, up to scaling.\nEquivalently, it is the set of lines through the origin in $\\CC^{n+1}$.\nProjective space is defined in full in \\Cref{sec:top_spaces},\nand you should refer there if you aren't familiar with projective space.\n\nThe right way to think about it is ``$\\Aff^n$ plus points at infinity'':\n\\begin{definition}\n\tWe define the set\n\t\\[ U_i = \\left\\{ (x_0 : \\dots : x_n) \\mid x_i \\neq 0  \\right\\}\n\t\t\\subseteq \\CP^n. 
\\]\n\tThese are called the \\vocab{standard affine charts}.\n\\end{definition}\nThe name comes from:\n\\begin{exercise}\n\t[Mandatory]\n\tGive a natural bijection from $U_i$ to $\\Aff^n$.\n\tThus we can think of $\\CP^n$ as the affine set $U_i$\n\tplus ``points at infinity''.\n\\end{exercise}\n\\begin{remark}\n\tIn fact, these charts $U_i$ make $\\CP^n$ with its usual topology\n\tinto a complex manifold with holomorphic transition functions.\n\\end{remark}\n\n\\begin{example}\n\t[Colloquially, $\\CP^1 = \\Aff^1 \\cup \\{\\infty\\}$]\n\tThe space $\\CP^1$ consists of pairs $(s:t)$,\n\twhich you can think of as representing the complex number $s/t$.\n\tIn particular $U_1 = \\{ (z:1) \\}$\n\tis basically another copy of $\\Aff^1$.\n\tThere is only one new point, $(1:0)$.\n\\end{example}\n\nHowever, like before we want to impose a Zariski topology on it.\nFor concreteness, let's consider $\\CP^2 = \\left\\{ (x_0 : x_1 : x_2) \\right\\}$.\nWe wish to consider zero loci in $\\CP^2$, just like we did in affine space,\nand hence obtain a notion of a projective variety.\n\nBut this isn't so easy: for example,\nthe function ``$x_0$'' is not a well-defined function on points in $\\CP^2$ \nbecause $(x_0 : x_1 : x_2) = (5x_0 : 5x_1 : 5x_2)$!\nSo we'd love to consider these ``pseudo-functions''\nthat still have zero loci. These are just the homogeneous polynomials $f$,\nbecause $f$ is homogeneous of degree $d$ if and only if\n\\[\n\tf(\\lambda x_0, \\dots, \\lambda x_n)\n\t= \\lambda^d f(x_0, \\dots, x_n).\n\\]\nIn particular, the relation ``$f(x_0, \\dots, x_n) = 0$'' is\nwell-defined if $f$ is homogeneous. Thus, we can say:\n\\begin{definition}\n\tIf $f$ is homogeneous, we can then define its \\vocab{vanishing locus} as\n\t\\[\n\t\t\\Vp(f)\n\t\t= \\left\\{ (x_0 : \\dots : x_n) \\mid f(x_0, \\dots, x_n) = 0 \\right\\}.\n\t\\]\n\\end{definition}\n\nThe homogeneous condition is really necessary.\nFor example, to require ``$x_0 - 1 = 0$'' makes no sense,\nsince the points $(1:1:1)$ and $(2015:2015:2015)$ are the same.\n\nIt's trivial to verify that homogeneous polynomials do exactly what we want;\nhence we can now define:\n\\begin{definition}\n\tA \\vocab{projective variety} in $\\CP^n$\n\tis the common zero locus of an arbitrary\n\tcollection of homogeneous polynomials in $n+1$ variables.\n\\end{definition}\n\n\\begin{example}[A conic in $\\CP^2$, or a cone in $\\CC^3$]\n\tLet's try to picture the variety\n\t\\[ \\Vp(x^2+y^2-z^2) \\subseteq \\CP^2 \\]\n\twhich consists of the points $[x:y:z]$ such that $x^2+y^2=z^2$.\n\tIf we view this as subspace of $\\CC^3$\n\t(i.e.\\ by thinking of $\\CP^2$ as the set of lines through the origin),\n\tthen we get a ``cone'':\n\t\\begin{center}\n\t\t\\includegraphics{media/cone.pdf}\n\t\\end{center}\n\n\tIf we take the standard affine charts now, we obtain:\n\t\\begin{itemize}\n\t\t\\ii At $x=1$, we get a hyperbola $\\VV(1+y^2-z^2)$.\n\t\t\\ii At $y=1$, we get a hyperbola $\\VV(1+x^2-z^2)$.\n\t\t\\ii At $z=1$, we get a circle $\\VV(x^2+y^2-1)$.\n\t\\end{itemize}\n\tThat said, over $\\CC$ a hyperbola and circle\n\tare the same thing; I'm cheating a little by drawing $\\CC$\n\tas one-dimensional, just like last chapter.\n\\end{example}\n\\begin{ques}\n\tDraw the intersection of the cone above\n\twith the $z=1$ plane, and check that you do in fact get a circle.\n\t(This geometric picture will be crucial later.)\n\\end{ques}\n\n\\section{Homogeneous ideals}\nNow, the next thing we want to do is define $\\Vp(I)$ for an ideal $I$.\nOf course, we again run into an issue with 
things like $x_0-1$ not\nmaking sense. \n\nThe way out of this is to use only \\emph{homogeneous} ideals.\n\\begin{definition}\n\tIf $I$ is a homogeneous ideal, we define\n\t\\[ \\Vp(I) = \\{ x \\mid f(x) = 0 \\; \\forall f \\in I\\}. \\]\n\\end{definition}\n\\begin{exercise}\n\tShow that the notion ``$f(x) = 0 \\; \\forall f \\in I$''\n\tis well-defined for a homogeneous ideal $I$.\n\\end{exercise}\nSo, we would hope for a Nullstellensatz-like theorem\nwhich bijects the homogeneous radical ideals to projective varieties.\nUnfortunately:\n\\begin{example}\n\t[Irrelevant ideal]\n\tTo crush some dreams and hopes, consider the ideal\n\t\\[ I = (x_0, x_1, \\dots, x_n). \\]\n\tThis is called the \\vocab{irrelevant ideal};\n\tit is homogeneous and radical, yet $\\Vp(I) = \\varnothing$.\n\\end{example}\n\nHowever, other than the irrelevant ideal:\n\\begin{theorem}\n\t[Homogeneous Nullstellensatz]\n\tLet $I$ and $J$ be homogeneous ideals. \n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $\\Vp(I) = \\Vp(J) \\neq \\varnothing$ then $\\sqrt I = \\sqrt J$.\n\t\t\\ii If $\\Vp(I) = \\varnothing$, then either $I = (1)$\n\t\tor $\\sqrt I = (x_0, x_1, \\dots, x_n)$.\n\t\\end{enumerate}\n\tThus there is a natural bijection between:\n\t\\begin{itemize}\n\t\t\\ii projective varieties in $\\CP^n$, and\n\t\t\\ii homogeneous radical ideals of $\\CC[x_0, \\dots, x_n]$ \n\t\texcept for the irrelevant ideal. \n\t\\end{itemize}\n\\end{theorem}\n\\begin{proof}\n\tFor the first part, let $V = \\Vp(I)$ and $W = \\Vp(J)$\n\tbe projective varieties in $\\CP^n$.\n\tWe can consider them as \\emph{affine varieties} in $\\Aff^{n+1}$\n\tby using the interpretation of $\\CP^n$\n\tas lines through the origin in $\\CC^{n+1}$.\n\n\tAlgebraically, this is done by taking the homogeneous ideals\n\t$I, J \\subseteq \\CC[x_0, \\dots, x_n]$\n\tand using the same ideals to cut out \\emph{affine} varieties\n\t$V_{\\text{aff}} = \\VV(I)$ and $W_{\\text{aff}} = \\VV(J)$ in $\\Aff^{n+1}$.\n\tFor example, the cone $x^2+y^2-z^2=0$ is a conic (a one-dimensional curve)\n\tin $\\CP^2$, but can also be thought of as a cone\n\t(which is a two-dimensional surface) in $\\Aff^3$.\n\n\tThen for (a), we have $V_{\\text{aff}} = W_{\\text{aff}}$,\n\tso $\\sqrt I = \\sqrt J$.\n\n\tFor (b), either $V_{\\text{aff}}$ is empty \n\tor it is just the origin of $\\Aff^{n+1}$,\n\tso the Nullstellensatz implies either $I = (1)$\n\tor $\\sqrt I = (x_0, \\dots, x_n)$ as desired.\n\\end{proof}\nProjective analogues of \\Cref{thm:many_aff_variety}\n(on intersections and unions of varieties) hold verbatim\nfor projective varieties as well.\n\n\n\\section{As ringed spaces}\n\\prototype{The regular functions on $\\CP^1$ minus a point\nare exactly those of the form $P(s/t)$.}\nNow, let us make every projective variety $V$ into a baby ringed space.\nWe already have the set of points, a subset of $\\CP^n$.\n\nThe topology is defined as follows.\n\\begin{definition}\n\tWe endow $\\CP^n$ with the \\vocab{Zariski topology}\n\tby declaring the sets of the form $\\Vp(I)$,\n\twhere $I$ is a homogeneous ideal, to be the closed sets.\n\t\n\tEvery projective variety $V$ then inherits the Zariski\n\ttopology from its parent $\\CP^n$.\n\tThe \\vocab{distinguished open sets} $D(f)$ are $V \\setminus \\Vp(f)$.\n\\end{definition}\n\nThus every projective variety $V$ is now a topological space.\nIt remains to endow it with a sheaf of regular functions $\\OO_V$.\nTo do this we have to be a little careful.\nIn the affine case we had a nice little ring of functions,\nthe coordinate ring $\\CC[x_1,\\dots,x_n] 
/ I$,\nthat we could use to provide the numerators and denominators.\nSo, it seems natural to then define:\n\\begin{definition}\n\tThe \\vocab{homogeneous coordinate ring} of a projective variety\n\t$V = \\Vp(I) \\subseteq \\CP^n$, where $I$ is homogeneous radical,\n\tis defined as the ring\n\t\\[ \\CC[V] = \\CC[x_0, \\dots, x_n] / I. \\]\n\\end{definition}\nHowever, when we define a rational function we must impose\na new requirement that the numerator and denominator are the same degree.\n\\begin{definition}\n\tLet $U \\subseteq V$ be an open set of a projective variety $V$.\n\tA \\vocab{rational function} $\\phi$ on $U$\n\tis a quotient $f/g$, where $f,g \\in \\CC[V]$,\n\tand $f$ and $g$ are homogeneous of the same degree,\n\tand $\\Vp(g) \\cap U = \\varnothing$.\n\tIn this way we obtain a function $\\phi : U \\to \\CC$.\n\\end{definition}\n\\begin{example}\n\t[Examples of rational functions]\n\tLet $V = \\CP^1$ have coordinates $(s:t)$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $U = V$, then constant functions $c/1$\n\t\tare the only rational functions on $U$.\n\t\t\\ii Now let $U_1 = V \\setminus \\{(1:0)\\}$.\n\t\tThen, an example of a rational function is\n\t\t\\[ \\frac{s^2+9t^2}{t^2} = \\left( \\frac st \\right)^2 + 9. \\]\n\t\tIf we think of $U_1$ as $\\CC$\n\t\t(i.e.\\ $\\CP^1$ minus an infinity point, hence like $\\Aff^1$)\n\t\tthen really this is just the function $x^2+9$.\n\t\\end{enumerate}\n\\end{example}\nThen we can repeat the same definition as before:\n\\begin{definition}\n\tLet $U \\subseteq V$ be an open set of a projective variety $V$.\n\tWe say a function $\\phi : U \\to \\CC$ is a \\vocab{regular function} if\n\tfor every point $p \\in U$, we can find an open set $U_p$ containing $p$\n\tand a rational function $f_p/g_p$ on $U_p$ such that\n\t\\[ \\phi(x) = \\frac{f_p(x)}{g_p(x)} \\qquad \\forall x \\in U_p. \\]\n\tIn particular, we require $U_p \\cap \\Vp(g_p) = \\varnothing$.\n\tWe denote the set of all regular functions on $U$ by $\\OO_V(U)$.\n\\end{definition}\nOf course, the rational functions from the previous example\nare examples of regular functions as well.\nThis completes the definition of a projective variety $V$\nas a baby ringed space.\n\n\\section{Examples of regular functions}\nNaturally, I ought to tell you what the regular functions\non distinguished open sets are; this is an analog to\n\\Cref{thm:reg_func_distinguish_open} from last time.\n\\begin{theorem}\n\t[Regular functions on distinguished open sets for projective varieties]\n\t\\label{thm:proj_reg_func_dist_open}\n\tLet $V$ be a projective variety, and let $g \\in \\CC[V]$ be homogeneous\n\tof \\emph{positive degree} (thus $g$ is nonconstant).\n\tThen\n\t\\[\n\t\t\\OO_V(D(g))\n\t\t= \\left\\{ \\frac{f}{g^r} \\mid\n\t\tf \\in \\CC[V] \\text{ homogeneous of degree $r\\deg g$}\n\t\t\\right\\}.\n\t\\]\n\\end{theorem}\nWhat about the case $g = 1$?\nA similar result holds, but we need the assumption that $V$ is irreducible.\n\\begin{definition}\n\tA projective variety $V$ is irreducible\n\tif it can't be written as the union of two proper (projective) sub-varieties.\n\\end{definition}\n\\begin{theorem}\n\t[Only constant regular functions on projective space]\n\tLet $V$ be an \\emph{irreducible} projective variety.\n\tThen the only regular functions on $V$ are constant,\n\tthus we have \\[ \\OO_V(V) \\cong \\CC. 
\\]\n\tThis relies on the fact that $\\CC$ is algebraically closed.\n\\end{theorem}\nProofs of these are omitted for now.\n\\begin{example}\n\t[Irreducibility is needed above]\n\tThe reason we need $V$ irreducible is otherwise\n\twe could, for example, take $V$ to be the union of two points;\n\tin this case $\\OO_V(V) \\cong \\CC^{\\oplus 2}$.\n\\end{example}\n\n\\begin{remark}\n\tIt might seem strange that $\\OO_V(D(g))$ behaves so differently\n\twhen $g = 1$. One vague explanation is that in a projective variety,\n\ta distinguished open $D(g)$ looks much like an affine variety if $\\deg g > 0$.\n\tFor example, in $\\CP^1$ we have $\\CP^1 \\setminus \\{0\\} \\cong \\Aff^1$\n\t(where $\\cong$ is used in a sense that I haven't made precise).\n\tThus the claim becomes related to the corresponding affine result.\n\tBut if $\\deg g = 0$ and $g \\neq 0$, then $D(g)$ is the entire projective variety,\n\twhich does not look affine, and thus the analogy breaks down.\n\\end{remark}\n\n\\begin{example}[Regular functions on $\\CP^1$]\n\tLet $V = \\CP^1$, with coordinates $(s:t)$.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii By \\Cref{thm:proj_reg_func_dist_open},\n\t\tif $U_1$ is the standard affine chart\n\t\tomitting the point $(1:0)$, we have\n\t\t$ \\OO_V(U_1) = \\left\\{ \\frac{f}{t^n} \\mid \\deg f = n \\right\\} $.\n\t\tOne can write this as\n\t\t\\[ \\OO_V(U_1) \\cong \\left\\{ P(s/t) \\mid P \\in \\CC[x] \\right\\}\n\t\t\t\\cong \\OO_{\\Aff^1} (\\Aff^1). \\]\n\t\tThis conforms with our knowledge that $U_1$\n\t\t``looks very much like $\\Aff^1$''.\n\t\t\\ii As $V$ is irreducible, $\\OO_V(V) = \\CC$:\n\t\tthere are no nonconstant functions on $\\CP^1$.\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{example}\n\t[Regular functions on $\\CP^2$]\n\tLet $\\CP^2$ have coordinates $(x:y:z)$ and \n\tlet $U_0 = \\left\\{ (x:y:1) \\in \\CP^2 \\right\\}$\n\tbe the distinguished open set $D(z)$.\n\tThen in the same vein,\n\t\\[\n\t\t\\OO_{\\CP^2}(U_0)\n\t\t= \\left\\{ \\frac{P(x,y)}{z^n} \\mid \\deg P = n \\right\\}\n\t\t\\cong \\left\\{ P(x/z, y/z) \\mid P \\in \\CC[x,y] \\right\\}.\n\t\\]\n\\end{example}\n\n\\section\\problemhead\n\\todo{Problems:}\n% should be easy to come up with some explicit examples to play with\n", "meta": {"hexsha": "0538691f375b7a7a140d873a8e3fd2913701ed09", "size": 15105, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/alg-geom/proj-var.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/alg-geom/proj-var.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/alg-geom/proj-var.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6683291771, "max_line_length": 82, "alphanum_fraction": 0.682489242, "num_tokens": 4981, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.8080672227971211, "lm_q1q2_score": 0.5781106715892271}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture VII Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n% Mathematical Operations:\n\n% Sum: $$\\sum_{n=a}^{b} f(x) $$\n% Integral: $$\\int_{lower}^{upper} f(x) dx$$\n% Limit: $$\\lim_{x\\to\\infty} f(x)$$\n\n\\begin{document}\n\n\\maketitle\n\n\\subsection{Formulas From 13.3}\n\n\\begin{enumerate}\n\n  \\item $\\overrightarrow{T}(t)=\\frac{\\overrightarrow{r}'(t)}{|\\overrightarrow{r}'(t)|}$\n\n  \\item $\\overrightarrow{N}(t)=\\frac{\\overrightarrow{T}'(t)}{|\\overrightarrow{T}'(t)|}$\n\n  \\item $\\overrightarrow{B}(t)=\\overrightarrow{T}(t)\\text{ x }\\overrightarrow{N}(t)$\n\n  \\item $\\kappa(t)=\\frac{|\\overrightarrow{T}'(t)|}{|\\overrightarrow{r}'(t)|}=\\frac{|\\overrightarrow{r}'(t)\\text{ x }\\overrightarrow{r}''(t)|}{|\\overrightarrow{r}'(t)|^3}$\n\n\\end{enumerate}\n\n\\section{Formulas of Frenet-Serret}\n\n\\begin{enumerate}\n\n  \\item $\\frac{d\\overrightarrow{T}}{ds}=\\kappa\\overrightarrow{N}$\n\n  \\item $\\frac{d\\overrightarrow{N}}{ds}=-\\kappa\\overrightarrow{T}+\\tau\\overrightarrow{B}$\n\n  \\item $\\frac{d\\overrightarrow{B}}{ds}=-\\tau\\overrightarrow{N}$  \n\n\\end{enumerate}\n\n\\section{Motion in Space $-$ 13.4}\n\n\\begin{enumerate}\n \n  \\item $\\overrightarrow{v}(t)=\\overrightarrow{r}'(t)$\n\n  \\item $\\overrightarrow{a}(t)=\\overrightarrow{v}'(t)=\\overrightarrow{r}''(t)$\n\n  \\item $\\text{speed}=|\\overrightarrow{v}(t)|$\n\n  \\item The force, $\\overrightarrow{F}$ is always acting in the same direction as the acceleration, $\\overrightarrow{a}$, with proportionality constant mass, $m$\n\n  \\item In a circular path, an object's position is given by: $\\overrightarrow{r}(t) = a\\cos \\omega t \\hat{\\bold{i}} + a \\sin \\omega t \\hat{\\bold{j}}$, where $a$ is the radius of the circle, and $\\omega$ is the constant speed of the object. 
The velocity is given by: $\\overrightarrow{r}'(t)=\\overrightarrow{v}(t)=-a\\omega \\sin \\omega t \\hat{\\bold{i}} + a \\omega \\cos \\omega t \\hat{\\bold{j}}$, and the acceleration is given by: $\\overrightarrow{a}(t)=\\overrightarrow{v}'(t)=-a\\omega^2\\cos \\omega t \\hat{\\bold{i}} - a\\omega^2 \\sin \\omega t \\hat{\\bold{j}}=-\\omega^2\\overrightarrow{r}(t)$\n\n  \\item Also in circular motion, $\\overrightarrow{F}(t)=m\\overrightarrow{a}(t)=-m\\omega^2(a\\cos \\omega t \\hat{\\bold{i}} + a \\sin \\omega t \\hat{\\bold{j}})=-m\\omega^2\\overrightarrow{r}(t)$, so the centripetal force points opposite to the position vector\n\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "6fa2e951328cbcb1b06e9c3ae373d72252b1c75b", "size": 3374, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture7.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture7.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture7.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.74, "max_line_length": 553, "alphanum_fraction": 0.6087729698, "num_tokens": 1059, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8080672066194945, "lm_q1q2_score": 0.5781106698213082}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{September 29, 2014}\n\\maketitle\nif $\\liminf x_n=L$ then there exists $\\{x_{n_k}\\}$ such that $\\lim x_{n_k}=L$\n\n$l=\\liminf x_n=\\lim(inf\\{\\underbrace{x_{n_1},x_{n_2},x_{n_3},\\dots}_{c_n}\\}$\n\nwhy not just let $c_n$ be the subseuence? because $c_n$ may not be equal to any of the $x_k$ in the sequence\n\n$c_n=\\inf\\{x_{n_1},x_{n_2},\\dots\\}$ give $\\varepsilon=2^{-n}$ there exists $x_{n_k}\\in\\{x_{n_1},x_{n+1},x_{n+2},\\dots\\}$ such that $\\left\\lvert c_n-x_{n_k}\\right\\rvert<2^{-n}$ by def of infinum\n\nwe has a sequence $\\{c_n\\}$ given $\\varepsilon>0$ there exists $N$ such that $\\left\\lvert c_n-L\\right\\rvert<\\varepsilon$ if $n\\ge N$. we approximate each $c_n$ by some $x_{n_k}$ from the oricinal sequence sutch that ....\n\n\\section*{convergence test for series}\nfirst we talk about series with positive terms $\\sum\\limits_{k=1}^\\infty{a_k}$, $s_n=\\sum\\limits_{k=1}^n{a_k}$. So if $s_n$ is bounded about then the series is convergent. and if not, it is divergent.\n\ngeometric series $\\sum\\limits_{n=0}^\\infty{r^n}$ is convergent if $\\left\\lvert r\\right\\rvert<1$. $s_n=\\sum\\limits_{k=0}n{r^k}=1+r+r^2+\\dots+r^n, rs_n=r+r^2+r^3+\\dots, sn-rSn=1-r^{n+1}$\n\n$s_n=\\frac{1-r^{n+1}}{1-r}\\to\\frac{1}{1-r}$\n\n\n\\section*{comparison test}\nif $\\forall n, |a_n|\\le b_n$\n\\begin{itemize}\n\\item\n  if $\\sum\\limits_{n=1}^\\infty{b_n}$ is convergent then $\\sum\\limits_{n=1}^\\infty{a_n}$ is convergent,\n  \\item\n  if $\\sum\\limits{a_n}$ is divergent, so is $\\sum\\limits{b_n}$.\n\\end{itemize}\n\\section*{3.2.b}\nshow that if $\\left(|a_n|\\right)_{n=1}^\\infty$ is summable then so is $\\left(a_n\\right)_{n=1}^\\infty$.\n\\begin{align*}\n  \\sum\\limits_{k=n+1}^m{|a_k|}&<\\varepsilon\\text{ for all }N\\le n\\le m \\text{ because is is summable}\\\\\n  \\left\\lvert\\sum\\limits_{k=n+1}^m{a_k}\\right\\rvert&\\le\\sum\\limits_{k=n+1}^m{|a_k|}<\\varepsilon\n\\end{align*}\nso then $\\sum\\limits{a_k}$ is also cauchy and summable\n\n\\section*{cauchy-schwartz inequality}\n$\\sum\\limits_{k=1}^n{a_kb_k}\\le \\left(\\sum\\limits_{k=1}^n{a_k^2}\\right)^{1/2}\\left(\\sum\\limits_{k=1}^n{b_k^2}\\right)^{1/2}$\n\n\\section*{3.2.f}\n\n\\section*{leibniz test for alternating series}\nif $\\{a_n\\}$ is a monotone decreasing sequence of positive terms with the $\\lim a_n=0$ then $\\sum\\limits_{n=1}^\\infty{(-1)^na_n}$ is convergent\n\\end{document}\n\n", "meta": {"hexsha": "43c06adc143e641e62eeb2cea499c604b8c00acc", "size": 2508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-09-29.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-09-29.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"real analysis/analysis-notes-2014-09-29.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5084745763, "max_line_length": 220, "alphanum_fraction": 0.6834130781, "num_tokens": 1012, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8267117983401363, "lm_q1q2_score": 0.5780818799307771}}
{"text": "\\documentclass[degn-notes.tex]{subfiles}\n\n\\begin{document}\n\n\\setcounter{chapter}{1}\n\n\\chapter{Observations of Galactic Nuclei and Supermassive Black Holes}\n\n\\section{Structure of galaxies and galactic nuclei}\n\n\\subsection{Intensity profiles}\n\nIntensity profiles are functions used to describe the intensity of a galaxy as a function of distance from the center, $I(R)$.\n\nThe \\textbf{S{\\'e}rsic profile} is commonly used, as it is simple and effective. The general form is:\n%\n\\begin{displaymath}\n  \\ln I(R) = \\ln I_e - b \\, n \\qty[ \\qty(R/R_e)^{1/n} - 1 ].\n%\n  \\tag{Merritt 2.3}\n  \\label{merritt:2.3}\n\\end{displaymath}\n%\nThe \\textbf{S{\\'e}rsic index}, $n$, characterizes the shape of the function, and in practice, $n \\in \\intcc{0.5, 8}$. $b$ is typically chosen such that $R_e$, the \\textbf{effective radius}, contains half of the total light. The effective intensity, $I_e$, is thus defined as $I_e \\equiv I(R_e)$.\n\nTo make the function a little more intuitive, one may use differentiation to put it in the form:\n%\n\\begin{displaymath}\n  \\dv{\\ln I}{\\ln R} = -\\frac{b}{n} \\qty( \\frac{R}{R_e} )^{1/n}\n%\n  \\tag{Merritt 2.4}\n  \\label{merritt:2.4}\n\\end{displaymath}\n%\nThis form illustrates the fact that the slope on a $\\log$--$\\log$ plot of the intensity grows with $R$ by $(R/R_e)^{1/n}$.\n\nIn many galaxies, within a radius $R_b$, there is a break in the intensity profile (shown in Merritt Figure 2.2). For this reason, it is necessary to define a \\textbf{core-S{\\'e}rsic profile}:\n%\n\\begin{displaymath}\n  I(r) =\n  \\begin{dcases}\n    I_b \\qty( \\frac{R_b}{R} )^\\Gamma,\n    & R \\leq R_b,\n    \\\\\n    I_b \\exp(b(R_b/R_e)^{1/n}) \\exp(-b(R/R_e)^{1/n}),\n    & R > R_b.\n  \\end{dcases}\n%\n  \\tag{Merritt 2.8}\n  \\label{merritt:2.8}\n\\end{displaymath}\n\n\n\\section{Techniques for weighing black holes}\n\nIn most cases, the determination of SBH mass is done through dynamical techniques, i.e. techniques which use the motions of gas and stars to infer the black hole mass.\n\nAn important concept is the ``influence radius'', $r_m$, which is typically defined to satisfy\n%\n\\begin{displaymath}\n  M_\\star(r < r_m) = 2 M_\\bullet.\n%\n  \\tag{Merritt 2.11}\n  \\label{merritt:2.11}\n\\end{displaymath}\n%\nIn words, $r_m$ is the radius in which the enclosed mass in stars is twice that of the black hole, and the total mass is $3 M_\\bullet$. The choice of 2 is somewhat arbitrary, though it is a convenient choice because of the definition of $r_h$ below. This definition, however, is only applicable to spherical galaxies, thereby leading to the alternate definitio of influence radius, $r_h$.\n\nThe alternative influence radius, $r_h$, is determined dynamically. It is defined such that the velocity of a circular orbit about the SBH from that distance, $v_c = \\sqrt{G M_\\bullet / r_h}$, is equal to $\\sigma = v_{\\rm rms} / \\sqrt{3}$.\n%\n\\begin{displaymath}\n  r_h \\equiv\n  \\frac{G M_\\bullet}{\\sigma^2} \\approx\n  10.8 \\qty(\\frac{M_\\bullet}{10^8 \\si{\\solarmass}})\n       \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^{-2} \\si{\\parsec}\n%\n  \\tag{Merritt 2.12}\n  \\label{merritt:2.12}\n\\end{displaymath}\n%\n$\\sigma$ is typically taken to be the rms, line-of-sight velocity of stars within the aperature, which should be centered on the SBH. 
This is called the \\textbf{aperture dispersion}.\n%\n\\begin{equation}\n  \\sigma^2 = \\expval{v_{\\mathrm{los}}^2}\n%\n  \\label{eqn:aperture-dispersion}\n\\end{equation}\n\n\n\\subsection{Primary mass determination methods: Stellar and gas kinematics}\n\nA common error is determining the mass of an SBH without being able to resolve the influence radius, $r_h$. The telescope's angular resolution must satisfy\n%\n\\begin{displaymath}\n  \\theta \\lesssim\n  \\frac{r_h}{D} \\approx\n  0.02'' \\qty(\\frac{M_\\bullet}{10^8 \\si{\\solarmass}})\n         \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^{-2}\n         \\qty(\\frac{D}{\\SI{10}{\\mega\\parsec}})^{-1}\n%\n  \\tag{Merritt 2.20}\n  \\label{merritt:2.20}\n\\end{displaymath}\n%\nIf the influence radius is well resolved, one expects a \\textbf{Keplerian rise} in velocities of stars ($v^2 = G M_\\bullet / r$), leading to the definition of $r_h$ above. One should be skeptical of claims of resolving the influence radius and SBH mass in the literature. See Figure 2.5 in the text, which shows how bad some of the published data are.\n\nIn the case of galaxies with spherical symmetry, the centripetal motion of the gas near the SBH is\n%\n\\begin{displaymath}\n  v_c^2(r) = \\frac{G (M_\\bullet + M_\\star)}{r}\n%\n  \\tag{Merritt 2.22}\n  \\label{merritt:2.22}\n\\end{displaymath}\n\n\n\\setcounter{subsection}{2}\n\\subsection{Mass determination based on empirical correlations}\n\n\n\n\n\n\\section{Supermassive black holes in the Local Group}\n\n\\section{Phenomenology}\n\n\\setcounter{subsection}{1}\n\\subsection{Mass--velocity dispersion relation}\n\nThe $M_\\bullet$--$\\sigma$ relation is an empirical correlation between the mass of a galaxy's SBH and the velocity dispersion of the stars in that galaxy. Typically, $\\sigma$ is measured as the aperture dispersion.\n\nThe correlation is very tight, and can be expressed as\n%\n\\begin{displaymath}\n  \\frac{M_\\bullet}{10^8 M_\\Sun} =\n  (1.66 \\pm 0.24)\n  \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^{4.86\\pm0.43}\n%\n  \\tag{Merritt 2.33}\n  \\label{merritt:2.33}\n\\end{displaymath}\n\n\n\n\\subsection{Significance of the phenomenological relations}\n\nThe tight correlation in $M_\\bullet$--$\\sigma$ is quite puzzling, as one would think the velocities of stars far from the SBH would not be significantly affected by its mass.\n\nA loose explanation combines two other correlations:\n%\n\\begin{displaymath}\n  M_{\\mathrm{bulge}} \\propto L^{5/4}\\ \\&\\ L \\propto \\sigma^4 \\implies\n  M_{\\mathrm{bulge}} \\propto \\sigma^5.\n%\n  \\tag{Merritt 2.36}\n  \\label{merritt:2.36}\n\\end{displaymath}\n%\nThis is very close to the observed relation, but one would not expect such a tight correlation from these.\n\nAn alternative explanation involves a negative feedback mechanism. By some mechanism, a black hole stops its own growth.\n\nLet $L$ be the rate of energy release in growing an SBH with mass $M(t)$, and let $\\eta$ be the accretion efficiency ($\\approx 10^{-1}$). Then we have the expression\n%\n\\begin{displaymath}\n  L = \\eta \\dot{M} c^2.\n%\n  \\tag{Merritt 2.37}\n  \\label{merritt:2.37}\n\\end{displaymath}\n%\nNow, this is making a major assumption. It depends on almost all of the potential energy in the gas being released in the form of light. 
This would not happen if the gas fell straight in, but if it were a gradual process (as an accretion disc would be), then it works.\n\nIf we assume $\\dot{M}$ to be roughly constant, then we can integrate $L(t)$ to obtain the energy released during formation\n%\n\\begin{displaymath}\n  \\int_0^T L \\dd{t} =\n  \\eta M(T) c^2 =\n  \\eta M_\\bullet c^2.\n\\end{displaymath}\n\nConsider that the bulge has a potential energy $G M_{\\mathrm{bulge}}^2 / R_{\\mathrm{bulge}}$, which, by the virial theorem, is equal to twice the kinetic energy, or $M_{\\mathrm{bulge}} \\sigma^2$.\n\nIf we take the ratio of the energy released and the total gravitational potential energy of the bulge, we get\n%\n\\begin{displaymath}\n  \\frac{E_{\\mathrm{released}}}{E_{\\mathrm{bulge}}} \\approx\n  \\frac{\\eta M_\\bullet c^2}{G M_{\\mathrm{bulge}}^2 / R_{\\mathrm{bulge}}} \\approx\n  \\eta \\frac{M_\\bullet}{M_{\\mathrm{bulge}}} \\frac{c^2}{\\sigma^2} \\approx\n  225\n  \\qty(\\frac{\\eta}{0.1})\n  \\qty(\\frac{M_\\bullet / M_{\\mathrm{bulge}}}{10^{-3}})\n  \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^{-2} \\gg\n  1,\n%\n  \\tag{Merritt 2.38}\n  \\label{merritt:2.38}\n\\end{displaymath}\n%\nwhich tells us that the energy released is much greater than that stored in the bulge.\n\nThere is more than enough energy released in growing an SBH to unbind the entire bulge.\n\\href{http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1998A%26A...331L...1S&amp;data_type=PDF_HIGH&amp;whole_paper=YES&amp;type=PRINTER&amp;filetype=.pdf}{Silk \\& Rees (1998)}\nmade the argument that the SBH accretes at the Eddington limit:\n%\n\\begin{displaymath}\n  L_{\\mathrm{edd}} =\n  \\frac{4 \\pi G m_p M_\\bullet c}{\\sigma_e} \\approx\n  \\num{3.2e12} \\qty(\\frac{M_\\bullet}{10^8 M_\\Sun}) L_\\Sun\n%\n  \\tag{Merritt 2.39}\n  \\label{merritt:2.39}\n\\end{displaymath}\nwhere $m_p$ is the mass of a proton.\n\nIf we ask what SBH mass would be required to produce enough energy to unbind the bulge in a single crossing time, we wind up with the result\n%\n\\begin{displaymath}\n  M_\\bullet \\approx\n  \\frac{\\sigma_e \\sigma^5}{4 \\pi G^2 m_p c} \\approx\n  \\num{3e5} \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^5 M_\\Sun.\n%\n  \\tag{Merritt 2.42}\n  \\label{merritt:2.42}\n\\end{displaymath}\n%\nThis is essentially the $M_\\bullet$--$\\sigma$ relation, with one caveat: the constant of proportionality is too small, by roughly $10^3$.\n\nAndrew King proposed in 2003 that momentum is the important factor here, not energy. He proposed that the gas driven by SBH radiation will cool efficiently, and that the bulk flow is momentum driven. His proposed equation of motion is\n%\n\\begin{displaymath}\n  \\dv{t}(R \\dot{R}) + \\frac{G M_\\bullet}{R} =\n  -2 \\sigma^2 \\qty( 1 - \\frac{M_\\bullet}{M_\\sigma} ),\n%\n  \\tag{Merritt 2.46}\n  \\label{merritt:2.46}\n\\end{displaymath}\n%\nwhere\n%\n\\begin{displaymath}\n  M_\\sigma \\equiv\n  \\frac{f_g \\sigma_e}{\\pi G^2 m_p} \\sigma^4 \\approx\n  \\num{2e8}\n  \\qty(\\frac{f_g}{0.1})\n  \\qty(\\frac{\\sigma}{\\SI{200}{\\kilo\\meter\\per\\second}})^4\n  M_\\Sun,\n%\n  \\tag{Merritt 2.47}\n  \\label{merritt:2.47}\n\\end{displaymath}\n%\nand $f_g$ is the cosmic baryon fraction (the ratio of ordinary matter to all matter). The typically assumed value is $f_g \\approx 0.16$, which leaves us with a slope on the order of what we expect. 
However, no mention is made in the book of the discrepancy in the exponent here ($4$) versus what is observed ($4.86\\pm0.43$).\n\n\n\n\\section{Evidence for intermediate-mass black holes}\n\n\\begin{table}[h]\n  \\centering\n  \\begin{tabular}{l|l}\n    \\textbf{Class}    & \\textbf{Mass Range}\n    \\\\ \\hline\\hline\n    stellar-mass      & $5$--$20$ $M_\\Sun$\n    \\\\\n    intermediate-mass & $10^2$--$10^6$ $M_\\Sun$\n    \\\\\n    super-massive     & $>10^6$ $M_\\Sun$\n  \\end{tabular}\n  \\caption{Classification of black holes by mass}\n\\end{table}\n\n\n\\section{Evidence for binary and multiple supermassive black holes}\n\n\n\\section*{SBH Masses in AGN}\n\nAGN show \\emph{continuum emission} from their nuclei, i.e. emission with a spectrum characteristic of hot gas at a range of temperatures. This is believed to arise from an \\emph{accretion disc}.\n\nThe broad emission line regions (BLR), from which broad emission lines (H$\\alpha$, H$\\beta$, \\ldots) originate, are believed to consist of gas clouds orbiting well within the influence radius, but outside the accretion disc.\n\nWe could measure the mass of the SBH within some AGN using the relation\n%\n\\begin{equation}\n  G M_\\bullet = f \\times R_{\\mathrm{BLR}} \\times v_{\\mathrm{RMS}}^2,\n%\n  \\label{eqn:agn-mass}\n\\end{equation}\n%\nwhere $v_{\\mathrm{RMS}}$ could be measured by the Doppler effect. One issue is that $R_{\\mathrm{BLR}}$ is smaller than the influence radius, and cannot be resolved. A workaround is found by\n%\n\\begin{equation}\n  R_{\\mathrm{BLR}} = c \\tau,\n%\n  \\label{eqn:blr-radius}\n\\end{equation}\n%\nwhere $\\tau$ is the time lag between a variation in the continuum near the SBH, and a variation in the BLR luminosity. This works because of the finite speed of light.\n\n\n\n\\section{Gravitational waves}\n\nThe amplitude, $h$, of a gravitational wave is a dimensionless quantity given by\n%\n\\begin{displaymath}\n  h \\approx\n  \\frac{G}{c^4} \\frac{\\ddot{Q}}{D} \\approx\n  \\frac{G M_Q}{c^2 D} \\frac{v^2}{c^2},\n%\n  \\tag{Merritt 2.55}\n  \\label{merritt:2.55}\n\\end{displaymath}\n%\nwhere $v$ is the internal velocity of the source, $M_Q$ is the portion of the source's mass (a mass in physical units, not a dimensionless fraction) participating in the quadrupolar motions, and $D$ is the distance to the source.\n\nTo understand the detector size needed to observe a gravitational wave, consider the ideal case, where the entire mass is participating in quadrupolar motions ($M_Q = M_{12}$), and the binary is located at a distance $D$ from the observer, with the two components separated by $a$. 
Then we have $v^2 \\approx G M_{12} / a$, and from (\\ref{merritt:2.55}) we have\n%\n\\begin{align*}\n  h &\\approx\n  \\frac{G M_{12}}{c^2 D} \\frac{G M_{12}}{a c^2}\n  \\\\ &\\approx\n  \\frac{G^2}{c^4} M_{12}^2 a^{-1} D^{-1}\n  \\\\ &\\approx\n  \\num{2e-16}\n  \\qty(\\frac{M_{12}}{10^{8} M_{\\Sun}})^2\n  \\qty(\\frac{a}{\\si{\\milli\\parsec}})^{-1}\n  \\qty(\\frac{D}{100\\si{\\mega\\parsec}})^{-1}\n%\n  \\tag{Merritt 2.58}\n  \\label{merritt:2.58}\n\\end{align*}\n\nIf you measure the rate of change of the frequency of gravitational waves from a binary black hole, you will not get the individual masses, but instead a combination of the two, called the ``chirp mass'', which is given by\n%\n\\begin{equation}\n  M_{\\mathrm{ch}} =\n  \\sqrt[5]{\\frac{(M_1 M_2)^3}{M_1 + M_2}}.\n%\n  \\label{eqn:chirp-mass}\n\\end{equation}\n\n\n\n\\end{document}", "meta": {"hexsha": "03604386c74ebc203bb506637f4ef9c61ee6d018", "size": 12829, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/textbook/tex/degn-ch2-notes.tex", "max_stars_repo_name": "dwysocki/ASTP-617", "max_stars_repo_head_hexsha": "5359c96b9fe6533e22e69430459b96f4e40aa224", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/textbook/tex/degn-ch2-notes.tex", "max_issues_repo_name": "dwysocki/ASTP-617", "max_issues_repo_head_hexsha": "5359c96b9fe6533e22e69430459b96f4e40aa224", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/textbook/tex/degn-ch2-notes.tex", "max_forks_repo_name": "dwysocki/ASTP-617", "max_forks_repo_head_hexsha": "5359c96b9fe6533e22e69430459b96f4e40aa224", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.9711815562, "max_line_length": 388, "alphanum_fraction": 0.704965313, "num_tokens": 4155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118026095991, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.578081877734918}}
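A small sketch (added; the SI constants are standard values, not from Merritt) evaluating the influence radius (Merritt 2.12) and the chirp mass defined above:

\begin{verbatim}
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
pc    = 3.086e16    # m

def r_h(M_bh, sigma):
    """Influence radius G M / sigma^2 (Merritt 2.12), in parsecs;
    M_bh in solar masses, sigma in m/s."""
    return G * (M_bh * M_sun) / sigma**2 / pc

def chirp_mass(m1, m2):
    """((m1 m2)^3 / (m1 + m2))^(1/5), same units as the inputs."""
    return ((m1 * m2) ** 3 / (m1 + m2)) ** 0.2

print(r_h(1e8, 200e3))    # ~10.8 pc, matching the quoted normalization
print(chirp_mass(30, 30)) # ~26.1 solar masses for an equal-mass binary
\end{verbatim}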
{"text": "\\section*{Exercise 6.1}\r\n\\enum{\r\n\\item\r\nThere's no enough evidence that the success rate is greater for longer tears.\r\n\r\nLarge-sample test for differences in proportions will be used.\r\n\r\n\\spl{\r\n    H_0:\\ p_1-p_2=0\r\n}\r\nbased on the statistic\r\n\\spl{\r\n    Z=\\frac{\\hat{p_1}-\\hat{p_2}}{\\sqrt{\\frac{\\hat{p_1}(1-\\hat{p_1})}{n_1}+\\frac{\\hat{p_2}(1-\\hat{p_2})}{n_2}}}.\r\n}\r\n\r\nHere we obtain that\r\n\\spl{\r\n    \\hat{p_1}&=78\\%,\\\\\r\n    \\hat{p_2}&=73\\%,\\\\\r\n    Z&=\\frac{0.78-0.73}{\\sqrt{\\frac{0.78(1-0.78)}{18}+\\frac{0.73(1-0.73)}{30}}}=0.39.\r\n}\r\n\r\nAccording to the standard normal distribution, $z_{0.05}=1.645$.\r\nHence $Z=0.39<z_{0.05}$, which means we have no sufficient evidence to support $p_1>p_2$.\r\n\r\n\\item\r\n\r\nThe standard deviation is\r\n\\spl{\r\n    \\sqrt{\\frac{0.78(1-0.78)}{18}+\\frac{0.73(1-0.73)}{30}}=0.127.\r\n}\r\n\r\nThe 95\\% confidence interval is\r\n\\spl{\r\n    p_1-p_2>0+1.645\\times0.127=0.209.\r\n}\r\nThus\r\n\\spl{\r\n    p_1-p_2>0.209.\r\n}\r\n}\r\n\r\n\\section*{Exercise 6.2}\r\n\\enum{\r\n\\item\r\nThe standard deviation is\r\n\\spl{\r\n    \\sqrt{\\frac{\\sigma_1^2}{n_1}+\\frac{\\sigma_2^2}{n_2}}=0.95.\r\n}\r\n\r\n\\spl{\r\n    \\bar{x_1}-\\bar{x_2}-z_{0.025}\\sqrt{\\frac{\\sigma_1^2}{n_1}+\\frac{\\sigma_2^2}{n_2}}&\\leq \\mu_1-\\mu_2\\leq \\bar{x_1}-\\bar{x_2}+z_{0.025}\\sqrt{\\frac{\\sigma_1^2}{n_1}+\\frac{\\sigma_2^2}{n_2}}\\\\\r\n    -6-1.96\\times0.95&\\leq \\mu_1-\\mu_2\\leq-6+1.96\\times0.95\\\\\r\n}\r\nHence, \r\n\\spl{\r\n    \\mu_1-\\mu_2=(-6.00\\pm1.86)cm/s.    \r\n}\r\n\\item\r\n\\spl{\r\n    Z_0=\\frac{\\bar{x_1}-\\bar{x_2}}{\\sqrt{\\frac{\\sigma_1^2}{n_1}+\\frac{\\sigma_2^2}{n_2}}}=\\frac{-6}{0.95}=-6.32<-1.96=-z_{0.025}.\r\n}\r\n\r\nHence, we reject $H_0$ at a 5\\% level of significance.\r\n\r\n\\item\r\nWe know that\r\n\\spl{\r\n    \\beta=1-p=0.10,\\quad,\\alpha=0.05,\\quad \\sigma=3cm/s,\\quad\\delta=14cm/s.\r\n}\r\n\r\nHence,\r\n\\spl{\r\n    n\\approx\\frac{2(z_{\\alpha/2}+z_{\\beta})^2\\sigma^2}{\\delta^2}=\\frac{2(1.96+1.28)^2\\times3^2}{14^2}=0.96.\r\n}\r\n\r\n}\r\n\r\n\\section*{Exercise 6.3}\r\n\\spl{\r\n    H_0:\\ \\sigma_1^2=\\sigma_2^2\\quad H_1:\\ |\\sigma_1^2-\\sigma_2^2|\\geq0.5\r\n}\r\nThus,\r\n\\spl{\r\n    F_{n_1-1,n_2-1}&=F_{9,15}=\\frac{s_1^2}{s_2^2}=\\frac{4.7^2}{5.8^2}=0.66<f_{0.025,9,15}=3.1227\\\\\r\n    F_{n_2-1,n_1-1}&=F_{15,9}=\\frac{s_2^2}{s_1^2}=\\frac{5.8^2}{4.7^2}=1.52<f_{0.025,15,9}=3.7694\r\n}\r\nWe claim that $\\sigma_1^2=\\sigma_2^2$ at a 5\\% level of significance.\r\n\r\nHence, there's no sufficient evidence to conclude that the two population variances differ by at least 0.5 grams per liter.\r\n\r\n\\section*{Exercise 6.4}\r\nWe set the hypotheses\r\n\\spl{\r\n    H_0: \\sigma_1^2=\\sigma_2^2,\\quad H_1: \\sigma_1^2\\neq\\sigma_2^2\r\n}\r\nWe know that\r\n\\spl{\r\n    s_1^2=0.0015,\\quad s_2^2=0.0120.\r\n}\r\n\r\nThen using $F$ test, we obtain\r\n\\spl{\r\n    F_{n_1-1,n_2-1}=F_{9,9}=\\frac{0.0015}{0.0120}=0.1234.\r\n}\r\nHence,\r\n\\spl{\r\n    F_{9,9}=0.1234<f_{0.025,9,9}&=4.0260\\\\\r\n    F_{9,9}=0.1234<f_{0.975,9,9}&=0.2484\\\\\r\n}\r\n\r\n\r\nThus, we claim $\\sigma_1^2\\neq\\sigma_2^2$ at 5\\% level of significance.\r\n\r\nSince $0.1234\\approx f_{0.9977,9,9}$, then the P-value is $0.0023\\times2=0.0046$.\r\n\r\n\\section*{Exercise 6.5}\r\n\\enum{\r\n\\item\r\nWe set the hypothesis\r\n\\spl{\r\n    H_0:\\ \\mu_1-\\mu_2=0\r\n}\r\nWe know form the table that\r\n\\spl{\r\n    (\\bar{x}_1-\\bar{x}_2)=15.32-15.38=-0.06.\r\n}\r\nThus we 
use pooled T-Test\r\n\\spl{\r\n    S_p^2&=\\frac{(9-1)\\times0.0124+(9-1)\\times0.0323}{9+9-2}=0.0223.\\\\\r\n    T_{9+9-2}&=\\frac{(\\bar{X}_1-\\bar{X}_2)-0}{\\sqrt{0.0223(9^{-1}+9^{-1})}}=\\frac{-0.06}{0.0704}=-0.852.\\\\\r\n    |T_{16}|=0.852<t_{0.025,16}=2.1199.\r\n}\r\n\r\nHence, we claim that $\\mu_1=\\mu_2$ at 5\\% level of significance. The temperature does not affect the mean reading of concentration.\r\n\\item\r\nWe set the hypothesis\r\n\\spl{\r\n    H_0:\\ \\mu_1-\\mu_2=0\r\n}\r\nWe know from the table that\r\n\\spl{\r\n    (\\bar{x}_1-\\bar{x}_2)=0.430444-1.65844=-1.228.\r\n}\r\nThus we use pooled T-Test\r\n\\spl{\r\n    S_p^2&=\\frac{(9-1)\\times0.0420+(9-1)\\times0.0660}{9+9-2}=0.054.\\\\\r\n    T_{9+9-2}&=\\frac{(\\bar{X}_1-\\bar{X}_2)-0}{\\sqrt{0.054(9^{-1}+9^{-1})}}=\\frac{-1.228}{0.110}=-11.21.\\\\\r\n    |T_{16}|&=11.21>t_{0.025,16}=2.1199.\r\n}\r\n\r\nHence, we claim that $\\mu_1\\neq\\mu_2$ at 5\\% level of significance. The temperature affects the mean reading of concentration.\r\n}\r\n\r\n\\section*{Exercise 6.6}\r\n\\enum{\r\n\\item\r\n\\spl{\r\n    H_0:\\ \\sigma_1^2=\\sigma_2^2\r\n}\r\nWe apply $F$ test.\r\n\\spl{\r\n    F_{9,15}&=\\frac{s_1^2}{s_2^2}=\\frac{9}{7.29}=1.23\\\\\r\n    f_{0.1,9,15}&=2.0862\\\\\r\n    f_{0.9,9,15}&=0.4274\\\\\r\n}\r\n\r\nSince $F_{9,15}<f_{0.1,9,15}$ and $F_{9,15}>f_{0.9,9,15}$, we claim that $\\sigma_1^2=\\sigma_2^2$ at 0.2 level of significance.\r\n\\item\r\n\\spl{\r\n    s_p^2=\\frac{(10-1)\\times9+(16-1)\\times7.9}{10+16-2}=8.3125.\r\n}\r\n\\item\r\nThe statistic is \r\n\\spl{\r\n    T_{24}=\\frac{\\mu_1-\\mu_2-(64.95-57.06)}{\\sqrt{8.3125(10^{-1}+16^{-1})}}=\\frac{\\mu_1-\\mu_2-7.89}{1.16}\r\n}\r\nThen, the 99\\% confidence interval on $\\mu_1-\\mu_2$ is\r\n\\spl{\r\n    \\mu_1-\\mu_2&=7.89\\pm t_{0.005,24}\\cdot1.16\\\\\r\n    \\mu_1-\\mu_2&=7.89\\pm3.25.\r\n}\r\n\\item\r\nWe claim that there is a difference between $\\mu_1$ and $\\mu_2$ because we are 99\\% sure that $\\mu_1-\\mu_2>4.64$.\r\n}\r\n\r\n\\section*{Exercise 6.7}\r\nWe set the hypothesis\r\n\\spl{\r\n    H_0:\\ \\mu_1=\\mu_2 \\quad H_1:\\ \\mu_1<\\mu_2,\r\n}\r\nwhich is a one-sided hypothesis.\r\n\r\nWe know that\r\n\\spl{\r\n    &\\bar{x}_1=0.210,\\quad s_1^2=0.001116.\\\\ \r\n    &\\bar{x}_2=0.225,\\quad s_2^2=0.001099.\r\n}\r\nThen,\r\n\\spl{\r\n    T_{9+9-2}&=\\frac{(\\bar{X}_1-\\bar{X}_2)-0}{\\sqrt{s_p^2(9^{-1}+9^{-1})}}\\\\\r\n    &=\\frac{0.210-0.225}{\\sqrt{(8\\times0.001116+8\\times0.001099)/16\\times(9^{-1}+9^{-1})}}\\\\\r\n    &=\\frac{-0.015}{0.0157}\\\\\r\n    &=-0.955.\r\n}\r\n\r\nSince $t_{0.05,16}=1.7459$, we have $T_{16}>-t_{0.05,16}$.\r\n\r\nHence, we claim that we have insufficient evidence to support $\\mu_1<\\mu_2$ at 5\\% level of significance.\r\n\r\n\\section*{Exercise 6.8}\r\n\\spl{\r\n    H_0:\\ \\mu_1=\\mu_2,\\quad H_1: \\mu_1>\\mu_2\r\n}\r\nWe denote $D=X_2-X_1$, so that\r\n\\spl{\r\n    \\overline{D}=-16.93, \\quad s_D=33.7.\r\n}\r\n\\spl{\r\n    T_{15}=\\frac{-16.93}{33.7/\\sqrt{15}}=-1.945.\r\n}\r\n\r\nSince $t_{0.05,15}=1.7531$, we have $T_{15}<-t_{0.05,15}$. 
Hence we reject $\\mu_1=\\mu_2$ and claim that $\\mu_1>\\mu_2$ at 5\\% level of significance.\r\n\r\nUsing the Wilcoxon Signed-Rank Test, we set\r\n\\spl{\r\n    H_0:\\ M_D=0\\quad H_1:\\ M_D<0.\r\n}\r\n\r\n\\begin{table}[H]\r\n    \\centering\r\n    \\begin{tabular}{cc}\\hline\r\n        D & Rank\\\\\\hline\r\n        -4 & -1.5\\\\\r\n        4 & 1.5\\\\\r\n        9 & 3\\\\\r\n        -11 & -4.5\\\\\r\n        11 & 4.5\\\\\r\n        14 & 6\\\\\r\n        -15 & -7.5\\\\\r\n        15 & 7.5\\\\\r\n        -33 & -9.5\\\\\r\n        33 & 9.5\\\\\r\n        -37 & -11\\\\\r\n        -46 & -12\\\\\r\n        -48 & -13\\\\\r\n        -56 & -14\\\\\r\n        -90 & -15\\\\\\hline\r\n    \\end{tabular}\r\n\\end{table}\r\n\r\nThus, $W_+=32$, $W_-=88$, $W=\\min(W_-,W_+)=32$.\r\nHowever, for $n=15$ at the 5\\% level of significance, the critical value for a one-sided test is 30, and $W=32>30$.\r\n\r\nThus, we claim $M_D=0$ at 5\\% level of significance.", "meta": {"hexsha": "332bf3af66cf94b9d309666843df61d1061f4179", "size": 6684, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "VE401ProbStat/Assignments/Assignment6/sections/solution.tex", "max_stars_repo_name": "PANDApcd/Calculus", "max_stars_repo_head_hexsha": "2ce2283b640858f88e74f3838d48c68cfc1be82a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "VE401ProbStat/Assignments/Assignment6/sections/solution.tex", "max_issues_repo_name": "PANDApcd/Calculus", "max_issues_repo_head_hexsha": "2ce2283b640858f88e74f3838d48c68cfc1be82a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "VE401ProbStat/Assignments/Assignment6/sections/solution.tex", "max_forks_repo_name": "PANDApcd/Calculus", "max_forks_repo_head_hexsha": "2ce2283b640858f88e74f3838d48c68cfc1be82a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.2117647059, "max_line_length": 191, "alphanum_fraction": 0.5715140634, "num_tokens": 3049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7090191337850932, "lm_q1q2_score": 0.5780154334338202}}
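A minimal sketch (added; not part of the assignment solutions) reproducing the pooled two-sample $t$ statistic of Exercise 6.5(a) from the summary statistics; `scipy` is assumed to be available:

\begin{verbatim}
from scipy.stats import t

def pooled_t(xbar1, xbar2, s1sq, s2sq, n1, n2, alpha=0.05):
    # pooled variance S_p^2 and the two-sided critical value
    sp2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
    T = (xbar1 - xbar2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    crit = t.ppf(1 - alpha / 2, n1 + n2 - 2)
    return T, crit

T, crit = pooled_t(15.32, 15.38, 0.0124, 0.0323, 9, 9)
print(T, crit)   # ~-0.85 vs 2.12: fail to reject H0, as in part (a)
\end{verbatim}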
{"text": "\\documentclass[12pt]{article}\n\\usepackage{geometry}\n\\usepackage{amsmath,amssymb,amstext,amsfonts}\n\\usepackage{graphicx}\n\n\\begin{document}\n\n\\section*{Example Assignment} \n\\section*{Foundations of Computational Math}\n\\subsubsection*{Due date: 11:59PM on September 23rd 2048}\n\n\\vspace{0.5in}\n\n\\subsection*{Problem}\n\nConsider the forward, backward, and central difference approximations to the derivative of $f$ at $x$, respectively:\n\n\\begin{enumerate}\n    \\item $\\displaystyle F_h[f](x) = \\frac{f(x+h) - f(x)}{h}$\n    \\item $\\displaystyle B_h[f](x) = \\frac{f(x) - f(x-h)}{h}$\n    \\item $\\displaystyle C_h[f](x) = \\frac{f(x+h) - f(x-h)}{2h}$\n\\end{enumerate}\n\n\\noindent Analyze the absolute error in the above numerical approximations as $h \\to 0$, for the following:\n\\begin{enumerate}\n    \\item $f_1(x) = \\sin(x)$ at $x=1.0$ and $x=2.0$.\n    \\item $f_2(x) = \\exp\\left(-\\frac{x^2}{2}\\right)$ at $x=1.1$ and $x=2.2$.\n\\end{enumerate}\nConfirm numerically the order (with respect to $h$) of the respective numerical approximations.\n\n\\vspace{0.1in}\n\n% \\subsection*{Problem 2}\n% \n% Consider the interval, $[a,b]$, with $N$ equally-spaced subintervals, of spacing $h = \\frac{b-a}{N}$. A function, $f(x)$, is sampled at the grid points, $x_i = a+i \\cdot h$, for $i=0, 1, \\ldots, N$.\n% \n% \\vspace{0.2in}\n% \n% \\noindent Devise a numerical approximation to $f'(x)$ using finite difference operations that is $O(h^2)$ both in the interior ($i=1,\\ldots,N-1$), and at the boundaries ($i=0$, $i=N$).\n% \n% \\vspace{0.2in}\n% \n% \\noindent Confirm the order of your method numerically, for $f(x) = \\sin(x)$ on $[0,\\frac{\\pi}{2}]$, for increasing values of $N$, using the norm $\\displaystyle ||f'(x) - D_h[f](x)||_\\infty = \\max_i |f'(x_i) - D_h[f](x_i)|$, where $D_h$ is your difference operator.\n\n\n\\pagenumbering{gobble}\n\n\n\\end{document}\n", "meta": {"hexsha": "30a8caa1313735600828e3b5ae043bdfdc9c5c82", "size": 1813, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "teaching/acm-computing-seminar/resources/prog/example-assignment/.assignment-source/example-assignment.tex", "max_stars_repo_name": "notmatthancock/notmatthancock.github.io", "max_stars_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "teaching/acm-computing-seminar/resources/prog/example-assignment/.assignment-source/example-assignment.tex", "max_issues_repo_name": "notmatthancock/notmatthancock.github.io", "max_issues_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "teaching/acm-computing-seminar/resources/prog/example-assignment/.assignment-source/example-assignment.tex", "max_forks_repo_name": "notmatthancock/notmatthancock.github.io", "max_forks_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.26, "max_line_length": 267, "alphanum_fraction": 0.6690568119, "num_tokens": 635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8152324893519999, "lm_q1q2_score": 0.5780154334338202}}
{"text": "% 16integrationonmanifolds.tex\n% Fund Science! & Help Ernest finish his Physics Research! : quantum super-A-polynomials - a thesis by Ernest Yeung\n%                                               \n% http://igg.me/at/ernestyalumni2014                                                                             \n%                                                              \n% Facebook     : ernestyalumni  \n% github       : ernestyalumni                                                                     \n% gmail        : ernestyalumni                                                                     \n% google       : ernestyalumni                                                                                   \n% linkedin     : ernestyalumni                                                                             \n% tumblr       : ernestyalumni                                                               \n% twitter      : ernestyalumni                                                             \n% youtube      : ernestyalumni                                                                \n% indiegogo    : ernestyalumni                                                                        \n%\n% Ernest Yeung was supported by Mr. and Mrs. C.W. Yeung, Prof. Robert A. Rosenstone, Michael Drown, Arvid Kingl, Mr. and Mrs. Valerie Cheng, and the Foundation for Polish Sciences, Warsaw University.                  \n\n\n\\subsection*{Integration of Functions on Riemannian Manifolds}\n\n\\begin{proposition}[16.28]\n  $(M,g)$ oriented Riemannian manifold with or without $\\partial$ \\\\\nSuppose $f$ compactly supported cont. on $M$, $f\\in \\mathbb{R}$, $f\\geq 0$ \\\\\nThen $\\int_M f dV_g \\geq 0$ \\\\\n\\phantom{Then } $\\int_M f dV_g = 0$ iff $f=0$\n\\end{proposition}\n\n\\begin{proof}\n  If $f$ supported in \\\\\n\\quad \\, domain of single oriented smooth chart $(U,\\varphi)$  \\\\\n\nBy Prop. 15.31\n\n\n\\[\n\\int_M f dV_g = \\int_{ \\varphi{(U)}} f(x) \\sqrt{ \\text{det}{ (g_{ij} ) } } dx^1 \\dots dx^n \\geq 0\n\\]\n\n\n\n\n\\end{proof}\n\n\\exercisehead{16.29} given oriented Riemannian manifold $(M,g)$ \\\\\n\\quad compact supported cont. $f: M \\to \\mathbb{R}$ \\\\\n\nThen if $f$ supported in \\\\\n\\phantom{Then } domain of single oriented smooth chart $(U,\\varphi)$ \n\n\\[\n\\left| \\int_M f dV_g \\right| = \\left| \\int_{ \\varphi{ (U)} } f(x) \\sqrt{ \\text{det}{ (g_{ij} )} } dx^1 \\dots dx^n \\right| \\geq \\int_{\\varphi{ (U)} } |f(x) | \\sqrt{ \\text{det}{ (g_{ij})} } dx^1 \\dots dx^n = \\int_M |f| dV_g\n\\]\n\nwhere inequality above is from some thm. in calculus.  \n\n\\hrulefill\n\n\\subsection*{The Divergence Theorem}\n\n\n\\begin{equation}\n  \\begin{aligned}\n    & * : C^{\\infty}{ (M)} \\to \\Omega^n{ (N) } \\\\  \n    & * f = f dV_g \n\\end{aligned} \\quad \\quad \\quad \\, (16.10)\n\\end{equation}\nsmooth bundle isomorphism \n\nEY : 20140501 smooth bundle isomorphism? 
\n\nsmooth bundle isomorphism\n\n\\[\n\\begin{aligned}\n  & \\beta : \\mathfrak{X}{(M)} \\to \\Omega^{n-1}{ (M) } \\\\ \n  &  \\beta(X) = X \\righthalfcup dV_g\n\\end{aligned}\n\\]\n\ntechnical lemma\n\\begin{lemma}[16.30]\n  $(M,g)$ oriented Riemannian manifold with or without $\\partial$ \\\\\n  Suppose $S \\subseteq M$ immersed hypersurface with orientation by \\\\\n\\phantom{Suppose } unit normal vector field $N$ and \\\\\n\\phantom{Suppose } $\\widetilde{g}$ induced metric on $S$ \\\\\n\nIf $X$ any vector field along $S$, \n\n\\begin{equation}\n  i^*_S{ (\\beta{ (X)} )}  = \\langle X, N \\rangle_g dV_{\\widetilde{g}} \\quad \\quad \\quad \\, (16.12)\n\\end{equation}\n\n\n\n\\end{lemma}\n\n\\begin{proof}\nDefine vector fields $X^T$, $X^{\\perp}$ along $S$\n\n\\[\n\\begin{aligned}\n&  X^{\\perp} = \\langle X, N\\rangle_g N \\\\ \n&  X^T =  X - X^{\\perp}\n\\end{aligned}\n\\]\n\n\\[\n\\beta(X) = X \\righthalfcup dV_g = X^{\\perp} \\righthalfcup dV_g + X^T \\righthalfcup dV_g\n\\]\n\npull back to $S$\n\nProp. 15.32\n\n\n\\[\ni^*_S{ (X^{\\perp} \\righthalfcup dV_g) }  = \\langle X, N \\rangle_g i^*_S{ (N \\righthalfcup dV_g) } = \\langle X, N \\rangle_g dV_{ \\widetilde{g}}\n\\]\n\nIf $X_1 \\dots X_{n-1}$ any vectors tangent to $S$\n\n\\[\n(X^T \\righthalfcup dV_g){ (X_1 \\dots X_{n-1} ) } = dV_g{ (X^T, X_1 \\dots X_{n-1} ) } = 0 \n\\]\n\n\n\n\\end{proof}\n\n", "meta": {"hexsha": "6b2f9c75a2211520ecb17a71f67a03873781be3f", "size": 4053, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LeeJM/16integrationonmanifolds.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/16integrationonmanifolds.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/16integrationonmanifolds.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 31.1769230769, "max_line_length": 221, "alphanum_fraction": 0.5159141377, "num_tokens": 1239, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.8152324871074607, "lm_q1q2_score": 0.5780154318423989}}
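A worked special case of Lemma 16.30 (an added sketch, not in the original notes): take $M = \mathbb{R}^2$ with the Euclidean metric, $dV_g = dx \wedge dy$, and $S = S^1$ with outward unit normal $N = x\,\partial_x + y\,\partial_y$. For a vector field $X = X^1 \partial_x + X^2 \partial_y$ along $S$,

\[
\beta(X) = X \righthalfcup (dx \wedge dy) = X^1 \, dy - X^2 \, dx
\]

and pulling back along $i_S$ with $x = \cos\theta$, $y = \sin\theta$,

\[
i_S^*\left( X^1 \, dy - X^2 \, dx \right) = \left( X^1 \cos\theta + X^2 \sin\theta \right) d\theta = \langle X, N \rangle_g \, dV_{\widetilde{g}} ,
\]

since $dV_{\widetilde{g}} = d\theta$ is the arclength form on $S^1$; this is exactly (16.12).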
{"text": "\n\\section{Experiments}\n\\label{sec:expts}\nIt should be noted that in the original data set if every pixel grey scale value were to be used as a feature to train for the model might result in a huge set of features. Having such set of features may cause a commnon phenomena known as \"the curse of dimensionality\", which implicates that the increase in the dimensional space resulted from the increase in the number of features might dilute the statisical significance of the final result.\\cite{bellman}\nThat being said, in order to reduce the complexity of the final model in the hope of avoiding overfitting problems, a feature selection process was implemented for the experiments.\\cite{hall}\n\\subsection{Feature Selections}\n\\label{feature}\nAccording to Hall, feature selection process consists of the following steps:\n\\begin{itemize}\n\t\\item Starting point\n\t\\item Search orgnization\n\t\\item Evaluation strategy\n\t\\item Stopping criterion\nIn this section, th\n\\end{itemize}\nAs introduced in the \"Data Preparation\" section, after some inspections through the data sets, it was observed that some pixels's grey scale values stayed the same all the time. Those points generally are more likely to have no significant coreThus, those low-variance features could be the starting points.\nThe backward elemination method was adpoted, which means that only deletions were considered.\n\\subsection{Classifiers}\n\\label{class}\n\\paragraph{Super Vector Machines}\n\n", "meta": {"hexsha": "692736e2a86c0344937b9f29e3bccd11ae5b87f3", "size": 1446, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/experiments.tex", "max_stars_repo_name": "muvetian/Words-Recognition", "max_stars_repo_head_hexsha": "146e60d3c23167bb183802924f6e65c53c1c3038", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/experiments.tex", "max_issues_repo_name": "muvetian/Words-Recognition", "max_issues_repo_head_hexsha": "146e60d3c23167bb183802924f6e65c53c1c3038", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/experiments.tex", "max_forks_repo_name": "muvetian/Words-Recognition", "max_forks_repo_head_hexsha": "146e60d3c23167bb183802924f6e65c53c1c3038", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.7272727273, "max_line_length": 459, "alphanum_fraction": 0.8105117566, "num_tokens": 313, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152324848629214, "lm_q2_score": 0.7090191337850933, "lm_q1q2_score": 0.5780154302509778}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter{Non-Linear Partial Differential Equations}\n\\label{APP: NEWTON}\n\nThe solution $u_i$ is given as a solution of\nthe nonlinear equation\n\\begin{equation} \\label{APP NEWTON EQU 40}\n\\int_{\\Omega} v_{i,j} \\cdot X_{ij} + v_{i} \\cdot Y_{i} \\; dx\n+ \\int_{\\partial \\Omega}  v_{i} \\cdot y_{i} \\; ds  = 0 \n\\end{equation}\nfor all smooth $v_i$ with $v_i=0$ where $q_i>0$ and\n\\begin{equation} \\label{APP NEWTON EQU 40b}\nu_i=r_i \\mbox{ where } q_i>0\n\\end{equation}\nwhere $X_{ij}$ and $Y_i$ are non-linear functions of the solution $u_k$ and its gradient $u_{k,l}$\nand $y_i$ is a function of solution $u_k$. For further convenience we will use the \nnotation  \n\\begin{equation} \\label{APP NEWTON EQU 40c}\n<F(u),v> :=\\int_{\\Omega} v_{i,j} \\cdot X_{ij} + v_{i} \\cdot Y_{i} \\; dx\n+ \\int_{\\partial \\Omega}  v_{i} \\cdot y_{i} \\; ds\n\\end{equation}\nfor all smooth $v$ on $\\Omega$. If one interprets $F(u)$ as defined above as a functional \nover the set of admissible functions $v$  \nequation~(\\ref{APP NEWTON EQU 40}) can be written in compact formulation\n\\begin{equation} \\label{APP NEWTON EQU 40d}\nF(u)= 0 \n\\end{equation}\n\n\\section{Newton-Raphson Scheme}\nThis equation is iteratively solved by the Newton-Raphson method\\index{Newton-Raphson method}, see \\cite{Kelley2004a}.\nStarting with the initial guess\n$u^{(0)}$ the sequence\n\\begin{equation} \\label{APP NEWTON EQU 43}\n  u^{(\\nu)}= u^{(\\nu-1)} - \\delta^{(\\nu-1)}\n\\end{equation}\nfor $\\nu=1,2,\\ldots \\;$ generates the (general) Newton-Raphson iteration for the\nsolution $u$. The correction $\\delta^{(\\nu-1)}$ is the solution of the linear problem\n\\begin{equation} \\label{APP NEWTON EQU 43b0}\n< \\fracp{F}{u^{(\\nu-1)}} \\delta^{(\\nu-1)} ,v > = <F(u^{(\\nu-1)}),v>\n\\end{equation}\nfor all smooth $v$ on $\\Omega$ with $v_i=0$ where $q_i>0$. \nwhere\n\\begin{equation} \\label{APP NEWTON EQU 43b}\n< \\fracp{F}{u} \\cdot \\delta ,v > = \n\\int_{\\Omega} \\left( \\fracp{X_{ij}}{u_{k,l}} v_{i,j}\\delta_{k,l} + \n\\fracp{X_{ij}}{u_{k}} v_{i,j}\\delta_{k} + \\fracp{Y_{i}}{u_{k,l}} v_{i}\\delta_{k,l} + \n\\fracp{Y_{i}}{u_{k}} v_{i}\\delta_{k} \\right) \\; dx \n+ \\int_{\\partial \\Omega} \n\\fracp{y_{i}}{u_{k}} v_{i}\\delta_{k} \\; ds \n\\end{equation}\nIt is assumed that the initial guess $u^{(0)}$ fulfills the constraint~(\\ref{APP NEWTON EQU 40b}). \nThe $\\delta^{(\\nu-1)}$ has to fulfill the homogeneous constraint. 
\nNotice that the calculation of $\\delta^{(\\nu-1)}$ requires the solution of a linear PDE\nas presented in section~\\ref{SEC LinearPDE}.\n\nThe Newton iteration should be stopped in the $\\nu$-th step if for\nall components of the solution the error of the Newton\napproximation is lower than the given relative tolerance {\\it rtol}:\n\\begin{equation}\\label{APP NEWTON EQU 61}\n    \\| u_{i} - u_{i}^{(\\nu)} \\|_{\\infty} \\le \\mbox{rtol} \\cdot \\|u_{i} \\|_{\\infty}  \\; ,\n\\end{equation}\nfor all components $i$ \nwhere $\\|. \\|_{\\infty}$ denotes the $L^{\\infty}$ (sup) norm. To measure the quality of the solution approximation\non the level of the equation we introduce the weak norm\n\\begin{equation}\\label{APP NEWTON EQU 62}\n  \\| F(u) \\|_{i} := \\sup_{v , v=0 \\mbox{ where } q_{i}>0 } \\frac{<F(u), ve_{i}>}{\\|v\\|_1}\n\\end{equation}\nwhere $(e_{i})_{j}=\\delta_{ij}$ and $\\|v\\|_1=\\int_{\\Omega} |v| \\,dx$ is the $L^1$ norm of $v$\n\\footnote{In practice a discretization method is applied to solve the update $\\delta^{(\\nu-1)}$.\nIn this case also an approximation of $\\| F(u) \\|_{i}$ is calculated taking the maximum over all\nbase function used to represent the solution $u$.}.\n\nThe stopping criterion (\\ref{APP NEWTON EQU 61}) is changed to the level of\nequation.  We use the reasonable heuristic but mathematically \nincorrect argument that the change on the level of the solution and\nthe change on the level of the equation are proportional:\n\\begin{equation}\\label{APP NEWTON EQU 64}\n \\frac{ \\| u_i - u^{(\\nu)}_i \\|_{\\infty} }{ \\| 0 - F(u^{(\\nu)}) \\|_{i} } =\n \\frac{ \\| \\delta^{(\\nu)}_i \\|_{\\infty} }{ \\| F(u^{(\\nu)}) - F(u^{(\\nu-1)}) \\|_{i} }  \\; .\n\\end{equation}\nwhere we assume that component $i$ of $F(u)$ is mainly controlled by component $i$ of the solution.\n\n\nWe assume that the term $F(u^{(\\nu)})$ can be neglected versus\n$F(u^{(\\nu-1)})$ since $u^{(\\nu)}$ is a better approximation,\nand use the stopping criterion in the formulation:\n\\begin{equation} \\label{APP NEWTON EQU 65}\n        \\| F(u^{(\\nu)}) \\|_i \\le\n  \\frac{ \\| F(u^{(\\nu-1)})\\|_{i} \\cdot  \\|u_{i} \\|_{\\infty} }  { \\| \\delta^{(\\nu)}_i \\| _{\\infty} }\n   \\,\\mbox{\\it rtol}\\, =:\\, \\mbox{\\it qtol}_i \\; ,\n\\end{equation}\nwhich has to hold for all components $i$.\nNow {\\it qtol} defines a tolerance for the level of equation.  This stopping criterion is not free of problems, because a\ndecrease of the defect $F(u^{(\\nu)})$ coupled with a constant\ncorrection $\\delta^{(\\nu)}$ suggests a good approximation.\nBut the quality of the approximation $u^{(\\nu)}$ is ensured if the\nNewton iteration converges quadratically.  This convergence behavior\nis given by the error estimation:\n\\begin{equation}\n    \\| u - u^{(\\nu)} \\|_{\\infty} \\le C \\;\n    \\| u - u^{(\\nu-1)} \\|_{\\infty}^2\n\\end{equation}\nwith a positive value $C$, see \\cite{Kelley2004a}.  Therefore a quadratic\nconvergence of the Newton iteration can be assumed if the corrections\nof the current and the last step fulfill the following condition:\n\\begin{equation} \\label{APP NEWTON EQU 66}\n  \\max_{i}\n  \\frac{\\| \\delta^{(\\nu)}_i \\|_{\\infty} }{\\| \\delta^{(\\nu-1)}_i \\|_{\\infty} } \n                           < \\frac{1}{5} \\;,\n\\end{equation}\nwhere the limit $\\frac{1}{5}$ was found by a large number of experiments.\nThe approximation $u^{(\\nu)}$ is accepted if the conditions\n(\\ref{APP NEWTON EQU 65}) and (\\ref{APP NEWTON EQU 66}) hold.  Consequently a
Consequently a\nsafe approximation requires at least two Newton steps.\n\nTo stop a divergent iteration, which occurs for a bad initial solution,\nthe norms of the defects for the $(k-1)$-th and $k$-th\nNewton step are compared.  Here we use the estimation\n\\begin{equation} \\label{APP NEWTON EQU 67}\n  \\| F(u^{(\\nu)}) \\|_i \\le\n  \\gamma \\| F(u^{(\\nu-1)}) \\|_i\n\\end{equation}\nfor the defects.  The value $0<\\gamma<1$ depends on $F$\nand the distance of the initial guess $u^{(0)}$ to the true solution $u$.\nSince the constant $\\gamma$ is unknown and can be close to one,\nconvergence is assumed if the following (weaker) condition holds\nfor all components $i$:\n\\begin{equation} \\label{APP NEWTON EQU 68}\n  \\| F(u^{(\\nu)}) \\|_i\n  < \\| F(u^{(\\nu-1)}) \\|_i \\; .\n\\end{equation}\nIf condition (\\ref{APP NEWTON EQU 68}) fails, divergence may begin. Therefore\nunder-relaxation is started.  Beginning with $\\omega=1$\nthe Newton-Raphson iteration is computed by\n\\begin{equation} \\label{APP NEWTON EQU 69}\n  u^{(\\nu)} = u^{(\\nu-1)} - \\omega \\, \\delta^{(\\nu)}\n\\end{equation}\ninstead of (\\ref{APP NEWTON EQU 43}).  If this new iteration fulfills the\ncondition (\\ref{APP NEWTON EQU 68}) of decreasing defect, we accept it.  Otherwise\nwe put $\\omega \\rightarrow \\frac{\\omega}{2}$, recompute $u^{(\\nu)}$ from equation\n(\\ref{APP NEWTON EQU 69}) and retry condition (\\ref{APP NEWTON EQU 68}) for the\nnew $u^{(\\nu)}$, and so on until either condition (\\ref{APP NEWTON EQU 68})\nholds or $\\omega$ becomes to small ($\\omega < \\omega_{lim}=0.01$).  In the latter case the\niteration gives up. The under-relaxation\nconverges only linearly for $\\omega<1$. it is a rather robust\nprocedure.\n which switches back to $\\omega=1$ as soon as possible.\nThe price for the robustness is the additional computation of the\ndefects.\n\nDue to the quadratic convergence near the solution the error decreases\nrapidly.  Then the solution will not change much and it will not be\nnecessary to mount a new coefficient matrix in each iteration step.\nThis algorithm is called the simplified Newton method.  It converges\nlinearly by\n\\begin{equation} \n    \\| u_i - u^{(\\nu)}_i \\|_{\\infty} \\le \\gamma^{\\nu}\n    \\| u_i - u ^{(0)}_i \\|_{\\infty} \\; ,\n\\end{equation}\nwhere $\\gamma$ equals the $\\gamma$ in the estimation (\\ref{APP NEWTON EQU 67}).\nIf the general iteration converges quadratically\n(condition (\\ref{APP NEWTON EQU 66}) holds) and we have\n\\begin{equation}\n   \\| F(u^{(\\nu)}) \\|_i < 0.1\n              \\| F(u^{(\\nu-1)}) \\|_i\n\\end{equation}\nfor all components $\\nu$,\nwe can expect $\\gamma \\le 0.1$.  Then the simplified iteration produces\none digit in every step and so we change from the general to the\nsimplified method.  The `slow' convergence requires more iteration steps,\nbut the expensive mounting of the coefficient matrix is saved.\n\nThe lienar PDE~(\\ref{APP NEWTON EQU 43b}) is solved with with a certain tolerance, namely when\nthe defect of the current\napproximation of $\\delta^{(\\nu)}$ relative to the defect of\nthe current Newton iteration is lower than {LINTOL}.  To avoid\nwasting CPU time the FEMLIN iteration must be controlled by an\nefficient stopping criterion.  
We set\n\\begin{equation}\n  \\mbox{LINTOL} = 0.1 \\cdot \\max ( \\left( \\frac{\\|\\delta^{(\\nu)}_i\\|_{\\infty}}{\\|u_i^{(\\nu)}\\|_{\\infty}} \\right)^2 \n,\\min_{i} \\frac{\\mbox{qtol}_i}{\\|F(u^{(\\nu)})\\|_{i}} )\n\\end{equation}\nbut restrict {LINTOL} by\n\\begin{equation}\n       10^{-4} \\le \\mbox{LINTOL} \\le 0.1 \\quad \\mbox{ and } \\quad\n             \\mbox{LINTOL}=0.1 \\; \\mbox{ for }\\nu=0  \\;.\n\\end{equation}\nThe first term means that it would be useless to compute digits by the\nlinear solver, which are overwritten by the next Newton step.  In the\nregion of quadratic convergence the number of significant digits is\ndoubled in each Newton step, i.e.~later digits are overwritten by the\nfollowing Newton-Raphson correction, see \\cite{Schoenauer1981a}.  The second \nmeans that no digits should be computed whose significance is\nbelow the prescribed tolerance {\\it rtol}.  The number $0.1$ is a \`safety factor' to take care\nof the coarse norm estimations. Figure~\\ref{APP NEWTON PIC 61} shows the workflow of\nthe Newton-Raphson update algorithm.\n\n\\begin{figure}\n\\begin{center}\n{\n\\unitlength0.92mm\n\\begin{picture}(200,235) \\thicklines\n\n\\put(-10,-065){\\begin{picture}(170,280) \\thicklines\n\n\\newsavebox{\\BZar}\n\\savebox{\\BZar}(0,0) {\n   \\thicklines\n   \\put(0,10){\\line(-3,-1){30}}\n   \\put(0,10){\\line(3,-1){30}}\n   \\put(0,-10){\\line(-3,1){30}}\n   \\put(0,-10){\\line(3,1){30}}\n   \\put(0,20){\\vector(0,-1){10}}\n   \\put(0,-10){\\vector(0,-1){10}}\n   \\put(30,0){\\vector(1,0){25}} }\n\\newsavebox{\\BZal}\n\\savebox{\\BZal}(0,0) {\n   \\thicklines\n   \\put(0,10){\\line(-3,-1){30}}\n   \\put(0,10){\\line(3,-1){30}}\n   \\put(0,-10){\\line(-3,1){30}}\n   \\put(0,-10){\\line(3,1){30}}\n   \\put(0,20){\\vector(0,-1){10}}\n   \\put(0,-10){\\vector(0,-1){10}}\n   \\put(-30,0){\\vector(-1,0){10}} }\n\n\\put(20,285){\\framebox(60,20){\\parbox{48mm}\n           {Start: \\\\ $\\nu=0$ , $\\omega=1$ \\\\\n            calculate $F(u^{(0)})$ }} }\n\\put(50,285){\\vector(0,-1){10}}\n\\put(20,265){\\framebox(60,10){\\parbox{48mm}\n           {next iteration: $\\nu \\leftarrow \\nu+1$}} }\n\\put(50,265){\\vector(0,-1){10}}\n\\put(20,240){\\framebox(60,15){\\parbox{54mm}\n           {\\hspace*{2.00mm} Solve \\\\\n            $\\fracp{F}{ u^{(\\nu-1)}} \\delta^{(\\nu)} =\n                                 F(u^{(\\nu-1)})$}} }\n\\put(50,240){\\vector(0,-1){10}}\n\\put(20,220){\\framebox(60,10){\\parbox{48mm}\n           {$\\omega = \\min(2\\,\\omega,1)$}} }\n\\put(50,220){\\vector(0,-1){10}}\n  \\put(120,225){\\line(-3,-1){30}}\n  \\put(120,225){\\line(3,-1){30}}\n  \\put(120,205){\\line(-3,1){30}}\n  \\put(120,205){\\line(3,1){30}}\n  \\put(110,214){$\\omega \\le \\omega_{lim}$ ?}\n  \\put(83,219){no}\n  \\put(153,219){yes}\n  \\put(090,215){\\vector(-1,0){40}}\n  \\put(150,215){\\line(1,0){05}}\n\\put(20,200){\\framebox(60,10){\\parbox{48mm}\n{$u^{(\\nu)} = u^{(\\nu-1)} - \\omega\\delta^{(\\nu)}$}} }\n\\put(50,180){\\usebox{\\BZar}}\n\\put(30,179){$F(u^{(\\nu)}) < F(u^{(\\nu-1)})$ ?}\n\\put(83,184){no}\n\\put(55,165){yes}\n  \\put(105,175){\\framebox(30,10){\\parbox{23mm}\n             {$\\omega=\\omega/2$}} }\n  \\put(120,185){\\vector(0,1){20}}\n\n\\put(20,145){\\framebox(60,15){\\parbox{48mm}\n           {evaluate stopping criteria~(\\ref{APP NEWTON EQU 65})  \\\\\n           and if not simplified mode~(\\ref{APP NEWTON EQU 66}) } } }\n\\put(50,145){\\vector(0,-1){10}}\n\n\\put(50,125){\\usebox{\\BZar}}\n\\put(34,120){\\shortstack{stopping criterion \\\\ 
satisfied ?}}\n\\put(83,129){yes}\n\\put(55,110){no}\n  \\put(105,117.5){\\framebox(30,15){\\parbox{23mm}\n             {Newton ends \\\\ successfully!}} }\n  \\put(135,125){\\vector(1,0){20}}\n\\put(50,095){\\usebox{\\BZal}}\n\\put(26,093.5){$F(u^{(\\nu)}) < 0.1 F(u^{(\\nu-1)})$ ?}\n\\put(12,099){no}\n\\put(55,080){yes}\n\\put(20,065){\\framebox(60,10){\\parbox{48mm}\n           {switch to simplified Newton!}} }\n\\put(20,070){\\vector(-1,0){10}}\n\n    \\put(155,215){\\vector(0,-1){60}}\n    \\put(140,145){\\framebox(30,10){\\parbox{23mm}\n               {Newton fails!}} }\n    \\put(155,145){\\vector(0,-1){070}}\n    \\put(155,070){\\oval(40,10)\\makebox(0,0){END}}\n\n\\put(010,280){\\line(0,-1){210}}\n\\put(010,280){\\vector(1,0){40}}\n\n\n\\end{picture} }\n\\end{picture} }\n\\end{center}\n\n\\caption{\\label{APP NEWTON PIC 61}Flow diagram of the Newton-Raphson algorithm}\n\\end{figure}\n\n\\section{Local Sensitivity Analysis}\nIf the coefficients \n$X_{ij}$, $Y_i$ and $y_{i}$ in equation~(\\ref{APP NEWTON EQU 40})\ndepend on a vector of input factors $f_i$ \\index{input factor} and its gradient, one is interested in how the solution $u_i$\nchanges if the input factors are changed. This problem is called a local sensitivity \nanalysis \\index{local sensitivity analysis}. Let $u(f)$ denote the \nsolution of equation~(\\ref{APP NEWTON EQU 40}) for the input factor $f$ \nand $u(f+\\alpha \\cdot g)$ the solution \nfor a perturbed value  $f+\\alpha \\cdot g$ of input factor $f$, where\n$g$ denotes the direction of perturbation and $\\alpha$ is a small scaling factor.\nThe derivative of the solution in the direction $g$ is defined as\n\\begin{equation}\n\\fracp{u}{g} : = \\lim_{\\alpha \\rightarrow 0} \\frac{ u(f+\\alpha \\cdot g) - u(f)}{\\alpha}\n\\end{equation}\nIn practice one needs to distinguish between the cases of a spatially constant\nand a spatially variable input factor. In the first case $g$ is set to a unit vector while\nin the second case an appropriate function needs to be given for $g$. \n\nThe function  $\\fracp{u}{g}$ is calculated by solving the equation\n\\begin{align} \\label{APP NEWTON EQU 100} \n\\int_{\\Omega} \\left( \\fracp{X_{ij}}{u_{k,l}} v_{i,j} \\left(\\fracp{u_k}{g}\\right)_{,l} + \n\\fracp{X_{ij}}{u_{k}} v_{i,j}\\fracp{u_k}{g} + \\fracp{Y_{i}}{u_{k,l}} v_{i}\\left(\\fracp{u_k}{g}\\right)_{,l}  + \n\\fracp{Y_{i}}{u_{k}} v_{i}\\fracp{u_k}{g}\\right) \\; dx \n+ \\int_{\\partial \\Omega}  \n\\fracp{y_{i}}{u_{k}} v_{i}\\fracp{u_k}{g}\\; ds \\\\\n+ \\int_{\\Omega} v_{i,j}  \\left( \\fracp{X_{ij}}{f_{k,l}} g_{k,l} + \\fracp{X_{ij}}{f_{k}} g_k \\right) \n+ v_{i} \\left( \\fracp{Y_{i}}{f_{k,l}} g_{k,l} +  \\fracp{Y_{i}}{f_{k}} g_k \\right) \\; dx \n+ \\int_{\\partial \\Omega}  v_{i}\n\\fracp{y_{i}}{f_{k}}  g_k\\; ds = 0\n\\end{align}\nfor all smooth $v$ on $\\Omega$ with $v_i=0$ where $q_i>0$\nfor the unknown sensitivity $\\fracp{u}{g}$. 
\nNotice that this equation is similar to the equation which needs to be solved for\nthe Newton-Raphson correction $\\delta^{(\\nu)}$, see equation~(\\ref{APP NEWTON EQU 43b}).\n", "meta": {"hexsha": "2c44feae836b81585bd34784a36bec805d0f376a", "size": 15516, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user/nonlinearPDE.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user/nonlinearPDE.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/user/nonlinearPDE.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.8439306358, "max_line_length": 122, "alphanum_fraction": 0.6499742202, "num_tokens": 5451, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.7090191337850932, "lm_q1q2_score": 0.578015427068135}}
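A compact sketch (added; escript's actual implementation differs, and `F`, `J` are assumed user-supplied callables for the residual and its Jacobian) of the damped Newton update with the defect test (68), the halving rule (69), and the give-up limit $\omega_{lim}$:

\begin{verbatim}
import numpy as np

def damped_newton(F, J, u, rtol=1e-8, omega_lim=0.01, max_iter=50):
    defect = np.linalg.norm(F(u))
    for _ in range(max_iter):
        delta = np.linalg.solve(J(u), F(u))    # correction from (43b0)
        omega = 1.0
        while omega >= omega_lim:
            u_new = u - omega * delta          # update (69); omega = 1 is (43)
            new_defect = np.linalg.norm(F(u_new))
            if new_defect < defect:            # defect decreased: test (68)
                break
            omega /= 2.0                       # under-relaxation
        else:
            raise RuntimeError("omega below omega_lim: iteration gives up")
        u, defect = u_new, new_defect
        if np.linalg.norm(omega * delta) <= rtol * np.linalg.norm(u):
            return u                           # stop, in the spirit of (61)
    return u
\end{verbatim}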
{"text": "\\chapter{Position of starting node}\\label{ch:starting-node}\n\nIn the analysis done in \\chref{ch:analysis} we have ignored a factor that\ninfluence the performance of the network: the position in the floorplan of the\nuser that sends out the first message (the ``starting user''). To understand why\nthis factor is important, see \\figref{fig:startnodeexplain}. When the starting\nuser (the blue node) is in the center (\\figref{subfig:startnodecenter}), the\nmessage is sent radially in all the directions. In the general case, the ``area\nof reachability'' is equal to \\(\\alpha\\cdot\\pi\\cdot R^2\\) where \\(\\alpha\\) is a\nconstant that it is equal to \\(1\\) when the starting user is in the center;\n\\(\\frac{1}{2}\\) when the starting user is at the border\n(\\figref{subfig:startnodeborder}); \\(\\frac{1}{4}\\) when the starting user is in\nthe corner (\\figref{subfig:startnodecorner}). This may limit the capability of\nthe network to reach the full coverage and it may also increase the total\nbroadcast time.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{img/start-node-position-center}\n\t\t\\caption{The starting user (blue) placed in the center can reach\n\t\tall the users inside the green\n\t\tarea (\\(\\alpha = 1\\))}\\label{subfig:startnodecenter}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.35\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{img/start-node-position-border}\n\t\t\\caption{The starting user (blue) placed in the border can reach\n\t\tall the users inside the green\n\t\tarea (\\(\\alpha = \\frac{1}{2}\\))}\\label{subfig:startnodeborder}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.35\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{img/start-node-position-corner}\n\t\t\\caption{The starting user (blue) placed in the corner can reach\n\t\tall the users inside the green\n\t\tarea (\\(\\alpha = \\frac{1}{4}\\))}\\label{subfig:startnodecorner}\n\t\\end{subfigure}\n\t\\caption{The area of reachability for the first message changes based on\n\twhere the starting user is placed}\\label{fig:startnodeexplain}\n\\end{figure}\n\nOf course, the best scenario for both the coverage and the total broadcast time,\nis the one where the starting user is in the center of the floorplan; the worst\nis when the starting user is in the corner.\n\nThe Jupyter notebooks used for the analysis of this factor are placed in the\n\\code{analysis/StartNode/} folder.\n\n\\input{starting-node/coverage}\n\\input{starting-node/time}\n", "meta": {"hexsha": "fe82da23f1dbf75b28d8d0834687bd93619e4f29", "size": 2441, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/chapters/starting-node.tex", "max_stars_repo_name": "SpeedJack/pecsn", "max_stars_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/chapters/starting-node.tex", "max_issues_repo_name": "SpeedJack/pecsn", "max_issues_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/chapters/starting-node.tex", "max_forks_repo_name": "SpeedJack/pecsn", "max_forks_repo_head_hexsha": 
"40c757cddec978e06de766c9dff00abf57ccd6b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.0566037736, "max_line_length": 80, "alphanum_fraction": 0.7574764441, "num_tokens": 668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5779468740443903}}
{"text": "\\subsection{Sequential games}\n\n\\subsubsection{Introduction}\n\nA game can have multiple rounds. For example an agent could offer a trade, and the other agent could choose to accept or reject the trade. As later agents know the other choices, and the earlier agents know their choices will be observed, the games can change.\n\nThis doesn\u2019t change games with pure strategies, but does affect those with mixed strategies. For example, even if prisoners could see earch other in the prisoners dilemma we would still get the same outcome. The last agent still prefers to \u201ctell\u201d, and earlier agents know this and have no reason to not also \u201ctell\u201d.\n\nBut consider the football/opera game. Here the first mover is better off, and there is a pure strategy, while in the rock paper scissors game the first mover loses.\n\nWe can solve more complex games backwards. As the actions in the last round of a game have no impact on others, they can be solved separately. Agents then know what the outcome will be if an outcome is arrived at, and can treat that \u201csubgame\u201d as a pay-off.\n\nThis method is known as backwards induction.\n\n\n\\subsubsection{One-round sequential games}\n\n\n\\subsubsection{Backwards induction}\n\n\n\\subsubsection{Zero-sum games}\n\n\n\\subsubsection{Subgame perfect equilibrium}\n\n\n\\subsubsection{Nash equilibrium}\n\n", "meta": {"hexsha": "c0f58b7000a18aab487658f0b026f08e76e9e988", "size": 1307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/ai/sequential/01-01-gameSequential.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/ai/sequential/01-01-gameSequential.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/ai/sequential/01-01-gameSequential.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.5666666667, "max_line_length": 315, "alphanum_fraction": 0.791889824, "num_tokens": 272, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5779468662791674}}
{"text": "\\chapter{d'Alembert's Principle}\\label{c1}\n\\section{Beyond Newton's laws}\\label{c1s1}\nNewton's programme of classical mechanics has a few limitations.\n\\begin{enumerate}\n\\item It needs a frame of reference in which Newton's first law is valid.\n\\item Newton's second law\n\\begin{equation}\\label{c1s1e1}\nm\\ddot{x} = F_x, m\\ddot{y} = F_y, m\\ddot{z} = F_z\n\\end{equation}\nrequires that we use the Cartesian coordinate system. However, the symmetry of\na few problems suggests that we use other systems. If $q_i$ are the coordinates\nin the other systems then in general,\n\\begin{equation}\\label{c1s1e2}\nm\\ddot{q}_i \\ne F_i,\n\\end{equation}\nwhere $F_i$ in this case is the $q_i$ component of the force which \\emph{may\nnot} be the same as the $F_i$ in equation \\eqref{c1s1e1}.\n\\item Equation \\eqref{c1s1e1} requires that we know $F_i$. This is often not the\ncase when the motion is constrained. When a bead is constrained to move along\nthe wire in which it is woven we do not know the force that stops the bead\nfrom leaving the wire.\n\\item Newton's third law does not hold good when electromagnetic forces are\ninvolved.\n\\end{enumerate}\n\nThe first of these limitations can be overcome by introducing pseudo-forces.\nThe second one can be circumvented by carefully evaluating the components of\nthe acceleration in coordinate systems other than the Cartesian. The third one\nis can be tamed by using d'Alembert's principle which we introduce next. The\nfourth one requires a redefinition of momentum which can be done systematically\nin Lagrange's formulation of classical mechanics.\n\n\\section{d'Alembert's principle}\\label{c1s2}\nConsider a few examples of constrained motion.\n\\begin{enumerate}\n\\item A beam constrained to move along a circular hoop. If the hoop is centred\nat the origin and in the $xy$ plan then the constraint is\n\\begin{equation}\\label{c1s2e1}\nx^2 + y^2 = a^2,\n\\end{equation}\nwhere $a$ is the radius of the hoop.\n\\item A particle constrained to move along an inclined plane. If $x, y,\nz$ are the coordinates of the particle then they must be confined to\nthe plane \n\\begin{equation}\\label{c1s2e2}\nlx + my + nz = p\n\\end{equation}\nwhere $l, m, n$ are the direction cosines of the normal to the plane\nand $p$ is its distance from the origin.\n\\item Particles of a rigid body in which the distances between the particles\nis constant. If $(x_n, y_n, z_n$ denote the coordinates of the $n$th particle \nthen\n\\begin{equation}\\label{c1s2e3}\n\\sum_{i=1}^3\\sqrt{(x_n-x_m)^2 + (y_n-y_m)^2 + (z_n-z_m)^2} = \\text{constant}\n\\end{equation}\nfor all $n, m = 1, \\ldots, N$, $N$ being the number of particles of the rigid\nbody.\n\\item Molecules of a monoatomic gas confined to a box. If the box is centred\nat the origin and is a cube of side $a$ then\n\\begin{equation}\\label{c1s2e4}\n-\\frac{a}{2} \\le x_n, y_n, z_n \\le \\frac{a}{2},\n\\end{equation}\nwhere $(x_n, y_n, z_n)$ are the coordinates of the $n$th gas molecule.\n\\item A disc rolling without slipping on a plane. Let us assume that the disc\nhas a radius $a$ and it rolls on the $xy$ plane. To specify the motion we \nneed to know where on the plane the disc is, which point of the disc is in \ncontact with the plane and how is the disc oriented. 
The point of contact can \nbe specified by its coordinates $(x, y)$, the point of the disc by its \nangular separation $\\phi$ from a fixed line on the disc and the orientation \nof the disc by another angle $\\theta$ between the disc and the $x$ direction.\nThe velocity of the point of contact has a magnitude $a\\dot{\\phi}$ and a \ndirection perpendicular to the plane of the disc. The plane of the disc makes \nan angle $\\theta$ with the positive $x$ direction. Therefore, the velocity \nmakes an angle $\\pi/2 - \\theta$ with the positive $x$ direction. Thus,\n\\begin{eqnarray}\nv_x &=& a\\sin\\theta\\dot{\\phi} \\label{c1s2e5} \\\\\nv_y &=& a\\cos\\theta\\dot{\\phi} \\label{c1s2e6} \n\\end{eqnarray}\nWe can write these equations in terms of the coordinates $(x, y, \\phi, \n\\theta)$ used to specify the motion.\n\\begin{eqnarray}\ndx &=& a\\sin\\theta d\\phi \\label{c1s2e7} \\\\\ndy &=& a\\cos\\theta d\\phi \\label{c1s2e8} \n\\end{eqnarray}\n\\end{enumerate}\nRefer to section 1.2 of Rana and Joag's book \\cite{rc} for many more examples\nof constraints of various kinds. Constraints in the first three examples are\ncalled \\emph{holonomic} because they can be expressed as a (whole) function\n\\begin{equation}\\label{c1s2e9}\nf(q_1, q_2, \\ldots) = 0.\n\\end{equation}\nThe constraint of equation \\eqref{c1s2e4} is not holonomic because it involves\ninequalities. The constraints of equations \\eqref{c1s2e7} and \\eqref{c1s2e8}\nare not holonomic because the two equations cannot be integrated unless we\nfix $\\theta$ \\cite{goldstein2002classical}. We further note that for all\nholonomic constraints the force of constraint is perpendicular to the direction\nof motion of the particles. Therefore, the forces of constraint do not do any\nwork on the particles. This is not the case when the holonomic constraints \ndepend on time. For example, if the length of a pendulum's string changes with\ntime, the\nactual motion of the bob is not perpendicular to the tension in the string.\nIn order to treat such constraints like the time-independent ones, d'Alembert\nintroduced the idea of a \\emph{virtual displacement}. A virtual displacement\nhappens instantaneously without violating the constraint. In the case of the\nbob of a pendulum whose length varies with time, the virtual displacement is\nalways perpendicular to the tension in the string because it happens without\na passage of time. The work done in a virtual displacement is called the \n\\emph{virtual work}. \n\nd'Alembert proposed the principle that the infinitesimal virtual work done by\nforces of constraint is zero when the virtual displacement is reversible. The\ncondition of reversibility rules out dissipative constraints like frictional\nforces.\n\n
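To make the idea of a virtual displacement concrete: if the pendulum's length \nis a prescribed function $l(t)$, an actual displacement of the bob during $dt$ \nhas a radial component $\\dot{l}\\,dt$ along the string, so the tension does \nreal work on the bob; a virtual displacement, taken at frozen time \n($\\delta t = 0$), is tangent to the instantaneous circle of radius $l(t)$, and \nthe tension does no virtual work.\n\n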
Let us now consider Newton's equation for a single particle. We can write the \ntotal force on the particle as the sum of external forces\n$F_i$ and constraint forces $F^c_i$ so that\n\\begin{align}\n\\left.\n\\begin{array}{ll}\nm\\ddot{x} &= F_x + F^c_x \\\\\nm\\ddot{y} &= F_y + F^c_y \\\\\nm\\ddot{z} &= F_z + F^c_z \n\\end{array}\n\\right\\}\\label{c1s2e10}\n\\end{align}\nIf $\\delta x, \\delta y, \\delta z$ denote an infinitesimal virtual displacement \nin each coordinate then\n\\begin{equation}\\label{c1s2e11}\n(m\\ddot{x} - F_x)\\delta x + (m\\ddot{y} - F_y)\\delta y + (m\\ddot{z} - F_z)\n\\delta z = F^c_x\\delta x + F^c_y\\delta y + F^c_z\\delta z\n\\end{equation}\nd'Alembert's principle of virtual work states that the right hand side is zero\nso that\n\\begin{equation}\\label{c1s2e12}\n(m\\ddot{x} - F_x)\\delta x + (m\\ddot{y} - F_y)\\delta y + (m\\ddot{z} - F_z)\n\\delta z = 0.\n\\end{equation}\nThis is the mathematical form of d'Alembert's principle. Its chief advantage \nis that it does not involve the forces of constraint. However, it is not at\nall as simple as equation \\eqref{c1s1e1} because the virtual displacements\n$\\delta x, \\delta y, \\delta z$ are not independent. We can generalise it to a \nsystem of $N$ particles as\n\\begin{equation}\\label{c1s2e13}\n\\sum_{i=1}^N\\left(\n(m_i\\ddot{x}_i - X_i)\\delta x_i + (m_i\\ddot{y}_i - Y_i)\\delta y_i + \n(m_i\\ddot{z}_i - Z_i) \\delta z_i\\right) = 0,\n\\end{equation}\nwhere $X_i, Y_i, Z_i$ are the $x, y, z$ components of the external forces on\nthe $i$th particle. Once again we emphasize that the virtual displacements \n$\\delta x_i, \\delta y_i, \\delta z_i$ are not independent if the system moves \nunder constraints.  Therefore, there is no way to simplify \\eqref{c1s2e13} \nany further.\n\nExamples of d'Alembert's principle.\n\\begin{enumerate}\n\\item Consider the motion of a simple pendulum in the $xz$ plane. The motion\nof the bob is constrained by the equation\n\\begin{equation}\\label{c1s2e14}\nx^2 + z^2 = l^2,\n\\end{equation}\nwhere $l$ is the length of the string. We can express it as\n\\begin{equation}\\label{c1s2e15}\nx\\delta x + z\\delta z = 0,\n\\end{equation}\nwhere we interpret $\\delta x$ and $\\delta z$ as virtual displacements.\nFrom d'Alembert's principle of equation \\eqref{c1s2e13} we get\n\\begin{equation}\\label{c1s2e16}\n(m\\ddot{x} - F_x)\\delta x + (m\\ddot{z} - F_z)\\delta z = 0.\n\\end{equation}\n$F_x = 0$ and $F_z = -mg$ gives\n\\begin{equation}\\label{c1s2e17}\n\\ddot{x} \\delta x + (\\ddot{z} + g)\\delta z = 0.\n\\end{equation}\nFor small oscillations $z \\approx -l$ (origin being at the point the string \nis held fixed) so that equation \\eqref{c1s2e15} becomes\n$x \\delta x \\approx l \\delta z$ and equation \\eqref{c1s2e17} can\nbe written as\n\\begin{equation}\\label{c1s2e18}\n\\ddot{x} \\delta x + (0 + g)\\left(\\frac{x}{l}\\right)\\delta x = 0\n\\Rightarrow \\left(\\ddot{x} + \\frac{g}{l}x\\right)\\delta x = 0.\n\\end{equation}\nIf this equation must be true for all virtual displacements,\n\\begin{equation}\\label{c1s2e19}\n\\ddot{x} + \\frac{g}{l}x = 0.\n\\end{equation}\nNote that d'Alembert's principle of equation \\eqref{c1s2e17} could be\nsimplified only after we used the equation of constraint \\eqref{c1s2e15}. This\ntheme will recur each time we apply d'Alembert's principle for constrained \ndynamical systems.\n\n\\item Consider Atwood's machine with masses $m_1$ and $m_2$ connected with\nan ideal string wound around an ideal pulley. 
If $x_1$ and $x_2$ denote their\ndisplacements then the equation of constraint is\n\\begin{equation}\\label{c1s2e20}\nx_1 + x_2 = l,\n\\end{equation}\n$l$ being the length of the string connecting the two masses. In terms of\nvirtual displacement, the equation of constraint becomes\n\\begin{equation}\\label{c1s2e21}\n\\delta x_1 + \\delta x_2 = 0.\n\\end{equation}\nd'Alembert's principle applied to the system gives\n\\begin{equation}\\label{c1s2e22}\n(m_1\\ddot{x}_1 - F_1)\\delta x_1 + (m_2\\ddot{x}_2 - F_2)\\delta x_2\n= 0.\n\\end{equation}\nSince $F_1 = -m_1g, F_2 = -m_2g$ we have\n\\begin{equation}\\label{c1s2e23}\nm_1(\\ddot{x}_1 + g)\\delta x_1 + m_2(\\ddot{x}_2 + g)\\delta x_2 = 0.\n\\end{equation}\nFrom equation \\eqref{c1s2e20} $\\ddot{x}_1 = -\\ddot{x}_2$ and from equation\n\\eqref{c1s2e21} we have $\\delta x_1 = -\\delta x_2$ so that \\eqref{c1s2e23}\nbecomes\n\\begin{equation}\n\\left(m_1(\\ddot{x}_1 + g) - m_2(-\\ddot{x}_1 + g)\\right)\\delta x_1 = 0.\n\\end{equation}\nAs this equation is valid for all virtual displacements we have\n\\begin{equation}\\label{c1s2e25}\n\\ddot{x}_1 = -\\frac{m_1 - m_2}{m_1 + m_2}g.\n\\end{equation}\n\n\\item If a particle is constrained to move along a circle of radius $a$,\ncentred at the origin and if there are no external forces then d'Alembert's \nprinciple gives\n\\begin{equation}\\label{c1s1e26}\nm\\ddot{x}\\delta x + m\\ddot{y}\\delta y = 0.\n\\end{equation}\nSince the particle is constrained to move on the circle, $x = a\\cos\\theta$ \nand $y = a\\sin\\theta$. Therefore,\n\\begin{eqnarray}\n\\delta x &=& -a\\sin\\theta \\delta\\theta \\\\\n\\delta y &=& a\\cos\\theta  \\delta\\theta\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\dot{x} &=& -a\\sin\\theta\\dot{\\theta} \\\\\n\\ddot{x} &=& -a\\cos\\theta\\dot{\\theta}^2 - a\\sin\\theta\\ddot{\\theta} \\\\\n\\dot{y} &=& a\\cos\\theta\\dot{\\theta} \\\\\n\\ddot{y} &=& -a\\sin\\theta\\dot{\\theta}^2 + a\\cos\\theta\\ddot{\\theta}\n\\end{eqnarray}\nso that equation \\eqref{c1s1e26} becomes\n\\[\n\\ddot{x}\\delta x + \\ddot{y}\\delta y = (a^2\\sin\\theta\\cos\\theta\n\\dot{\\theta}^2 + a^2\\sin^2\\theta\\ddot{\\theta} - a^2\\sin\\theta\\cos\\theta\n\\dot{\\theta}^2 + a^2\\cos^2\\theta\\ddot{\\theta})\\delta\\theta = 0\n\\]\nor $\\ddot{\\theta}\\delta\\theta = 0$ which implies that $\\dot{\\theta} = $ a\nconstant. In this problem we used the equation of constraint to replace\nthe two coordinates $x$ and $y$ with a single one $\\theta$.\n\\end{enumerate}\n\n
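As a quick check on these examples, equation \\eqref{c1s2e19} is simple \nharmonic motion with angular frequency $\\sqrt{g/l}$, the familiar \nsmall-oscillation result, and equation \\eqref{c1s2e25} gives $\\ddot{x}_1 = 0$ \nwhen $m_1 = m_2$, as expected for a balanced Atwood machine.\n\n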
Some more examples of constraints.\n\\begin{enumerate}\n\\item A rigid rod is constrained to move within a spherical bowl so that\nit always touches the bowl's inner surface \\cite[Problem 3, chapter 2]\n{akr}. Without the constraint of the bowl, six degrees of freedom suffice to\ndescribe the motion of the rod.  If $x_1, y_1, z_1$ and $x_2, y_2, z_2$ are\nthe coordinates of the end-points of the rod then the equations of\nconstraint are\n\\begin{eqnarray}\nx_1^2 + y_1^2 + z_1^2 &=& a^2 \\\\\nx_2^2 + y_2^2 + z_2^2 &=& a^2 \\\\\n(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2 &=& l^2 \\\\\nl &\\le& a\n\\end{eqnarray}\nThus, the motion of the rod can be described by two coordinates alone. One \nof them tells the `latitude' of the centre of the rod and the other one the\n`longitude'.\n\n\\item A spherical pendulum is not restricted to move in a plane. Therefore,\nwe need three coordinates $x, y, z$ to describe its motion. They are\nconstrained to obey\n\\begin{equation}\\label{c1s2e37}\nx^2 + y^2 + z^2 = l^2.\n\\end{equation}\nExpressing the constraint in terms of virtual displacements,\n\\begin{equation}\\label{c1s2e38}\nx\\delta{x} + y\\delta{y} + z\\delta{z} = 0\n\\end{equation}\nd'Alembert's principle gives\n\\begin{equation}\\label{c1s2e39}\n\\ddot{x}\\delta{x} + \\ddot{y}\\delta{y} + (\\ddot{z} + g)\\delta{z} = 0\n\\end{equation}\n\\end{enumerate}\n\n\\section{Lagrange's equations of the first kind}\\label{c1s3}\nWe mentioned that the virtual displacements $\\delta x_i, \\delta y_i, \n\\delta z_i$ are not \nindependent in a system under constraints. If we restrict ourselves to\nholonomic constraints then each constraint is an equation of the form\n\\begin{equation}\\label{c1s3e1}\n\\varphi_i(x_1, \\ldots, z_N) = 0.\n\\end{equation}\nThe index $i$ ranges from $1$ to $m$, the number of constraints. We can\nwrite each of these equations as\n\\begin{equation}\\label{c1s3e2}\n\\sum_{j=1}^{N}\\left(\\pd{\\varphi_i}{x_j}\\delta x_j \n+ \\pd{\\varphi_i}{y_j}\\delta y_j\n+ \\pd{\\varphi_i}{z_j}\\delta z_j\\right) = 0\n\\end{equation}\nWe now multiply each of these equations with a constant $\\lambda_i$ and\nadd them to \\eqref{c1s2e13} to get\n\\begin{align}\n\\sum_{j=1}^N \\left(m_j\\ddot{x}_j - X_j + \n\\sum_{k=1}^m \\lambda_k\\pd{\\varphi_k}{x_j} \\right)\\delta x_j & + \\nonumber \\\\\n\\sum_{j=1}^N \\left(m_j\\ddot{y}_j - Y_j + \n\\sum_{k=1}^m \\lambda_k\\pd{\\varphi_k}{y_j} \\right)\\delta y_j & + \\nonumber \\\\\n\\sum_{j=1}^N \\left(m_j\\ddot{z}_j - Z_j + \n\\sum_{k=1}^m \\lambda_k\\pd{\\varphi_k}{z_j} \\right)\\delta z_j & = 0 \\label{c1s3e3}\n\\end{align}\nThis equation has $3N$ terms on the left hand side. Choose $\\lambda_k$\nsuch that for the first $m$ terms\n\\begin{equation}\\label{c1s3e4}\n\\sum_{j=1}^N \\left(m_j\\ddot{u}_j - U_j + \n\\sum_{k=1}^m \\lambda_k\\pd{\\varphi_k}{u_j} \\right)\\delta u_j = 0,\n\\end{equation}\nwhere $u$ stands for one of $x, y, z$ and $U$ for the corresponding component\namong $X, Y, Z$.\nEquation \\eqref{c1s3e3} now has only $3N - m$ terms. The virtual displacements\nin each of these can be considered to be independent. Therefore, \\eqref{c1s3e4}\ncan be considered to be true for all $3N$ terms. Equation \\eqref{c1s3e4} is\ncalled \\emph{Lagrange's equation of the first kind}. In fact,\n\\begin{equation}\nQ_i = \\left(\\sum_{k=1}^m\\lambda_k \\pd{\\varphi_k}{x_i},\n            \\sum_{k=1}^m\\lambda_k \\pd{\\varphi_k}{y_i},\n            \\sum_{k=1}^m\\lambda_k \\pd{\\varphi_k}{z_i}\\right)\n\\end{equation}\nis the force of constraint on the $i$th particle.\n\nAs an example of this approach, consider the equation of constraint of a \nspherical pendulum given in \\eqref{c1s2e38}. We can combine it with \nd'Alembert's equation \\eqref{c1s2e39} to get\n\\begin{equation}\\label{c1s3e6}\n(\\ddot{x} + \\lambda x)\\delta{x} + (\\ddot{y} + \\lambda y)\\delta{y} + \n(\\ddot{z} + g + \\lambda z)\\delta{z} = 0\n\\end{equation}\nNow we consider the amplitude of oscillation to be small, that is $z \\approx\n-l$ and $\\ddot{z} \\approx 0$ so that the third term of \\eqref{c1s3e6} is\n\\[\n0 + g - \\lambda l.\n\\]\nIt vanishes if we choose $\\lambda = g/l$ and we get\n\\begin{eqnarray}\n\\ddot{x} + \\frac{g}{l}x &=& 0 \\\\\n\\ddot{y} + \\frac{g}{l}y &=& 0 \n\\end{eqnarray}\nare the equations of motion of the spherical pendulum.\n\n
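These are two uncoupled oscillators with the same angular frequency \n$\\sqrt{g/l}$, so for small amplitudes the bob of the spherical pendulum traces \nan ellipse in the horizontal plane.\n\n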
We next use Lagrange's method to find the force of constraint in the Atwood \nmachine. Combining the equation of constraint \\eqref{c1s2e21} with \nd'Alembert's equation \\eqref{c1s2e23} we get\n\\begin{equation}\nm_1\\left(\\ddot{x}_1 + g + \\frac{\\lambda}{m_1}\\right)\\delta x_1 + \nm_2\\left(\\ddot{x}_2 + g + \\frac{\\lambda}{m_2}\\right)\\delta x_2 = 0.\n\\end{equation}\nChoose $\\lambda$ such that\n\\begin{equation}\n\\ddot{x}_1 + g + \\frac{\\lambda}{m_1} = 0\n\\end{equation}\nthat is\n\\begin{equation}\n\\lambda = -m_1(\\ddot{x}_1 + g).\n\\end{equation}\nUsing equation \\eqref{c1s2e25} we get\n\\begin{equation}\n\\lambda = -\\frac{2m_1m_2}{m_1 + m_2}g,\n\\end{equation}\nthe magnitude of which is the tension in the string.\n\nLagrange's equations of the first kind solve the problem of handling the\nforces of constraint. Yet, they are firmly rooted in the \nCartesian coordinate system. In the next chapter we will sever this connection\nwith the Cartesian coordinates.\n\nBefore we close we remark that the framework of this section can be used\nfor time-dependent constraints. If equation \\eqref{c1s3e1} is written as\n\\begin{equation}\\label{c1s3e13}\n\\varphi_i(x_1, \\ldots, z_N, t) = 0\n\\end{equation}\nthen equation \\eqref{c1s3e2} becomes\n\\begin{equation}\\label{c1s3e14}\n\\sum_{j=1}^{N}\\left(\\pd{\\varphi_i}{x_j}\\delta x_j \n+ \\pd{\\varphi_i}{y_j}\\delta y_j\n+ \\pd{\\varphi_i}{z_j}\\delta z_j + \\pd{\\varphi_i}{t}\\delta t\\right) = 0\n\\end{equation}\nHowever, when we interpret $\\delta x_j, \\delta y_j, \\delta z_j$ as virtual \ndisplacements then $\\delta t$ is zero and the rest of the analysis applies \nwithout change.\n", "meta": {"hexsha": "9a16a40ef8b99a36cc5fbe3c65b8ea4db7d21771", "size": 17023, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cm/lm/c1.tex", "max_stars_repo_name": "amey-joshi/physics", "max_stars_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cm/lm/c1.tex", "max_issues_repo_name": "amey-joshi/physics", "max_issues_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cm/lm/c1.tex", "max_forks_repo_name": "amey-joshi/physics", "max_forks_repo_head_hexsha": "66ae9bf4a363bd32b09df22a049e281953adb39b", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9870801034, "max_line_length": 80, "alphanum_fraction": 0.727486342, "num_tokens": 5848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5779468623965556}}
{"text": "% Created 2021-11-30 Tue 08:25\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\DeclareMathOperator{\\atantwo}{atan2}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Sensitivity and robustness}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Sensitivity and robustness},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Intro}\n\\label{sec:org4453306}\n\n\n\\section{The sensitivity function}\n\\label{sec:org37d7547}\n\n\n\\begin{frame}[label={sec:org140e392}]{Sensitivity and complementary sensitivity}\n\\small\n\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../../figures/2dof-block-complete}\n\\end{center}\n\\pause\n\\begin{enumerate}\n\\item Determine the closed-loop transfer function \\(G_v(s)\\) from \\(v(t)\\) to \\(y(t)\\) and the transfer function \\(G_n(t)\\) from \\(n(t)\\) to \\(y(t)\\).\n\\end{enumerate}\n\\pause\n\\begin{enumerate}\n\\setcounter{enumi}{1}\n\\item Show that if \\(F_{fb}(s)\\) and/or \\(G(s)\\) contains an integrator (pole in the origin), then \\(G_v(0) = 0\\) and \\(G_n(0) = -1\\). This means that constant disturbances are completely eliminated, but a constant measurement error (sensor bias) is passed unattenuated to the output.\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}[label={sec:org08e019f}]{The Nyquist plot and stability margins}\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{../../figures/implane-nyquist-margins}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org7251ce2}]{The sensitivity function}\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{../../figures/implane-sensitivity}\n\\end{center}\n\n\\[ S(i\\omega) = G_v(i\\omega) = \\frac{1}{ 1 + G_o(i\\omega)} = \\frac{1}{1 + G(i\\omega)F_{fb}(i\\omega)} \\]\n\\end{frame}\n\n\n\n\n\\begin{frame}[label={sec:orgeec76bd}]{The sensitivity function}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/sensitivity-example}\n\\end{center}\n\\end{frame}\n\n\n\\section{Stein}\n\\label{sec:orgace975b}\n\n\\begin{frame}[label={sec:orgb1d669d}]{Interpretations of the sensitivity function}\nSeveral important interpretations of the sensitivity function \\(S(s)\\)\n\\begin{enumerate}\n\\item \\(S(i\\omega)\\) tells us how well our closed-loop system attenuates disturbances of different frequencies\n\\item Its maximum value is a measure of how close the closed-loop system is to being unstable.\n\\item \\(S(i\\omega)\\) tells us how modelling errors or modelling variations of the plant influences the closed-loop system\n\\end{enumerate}\n\\end{frame}\n\n\n\n\\begin{frame}[label={sec:orgee41180}]{An important limitation}\n\\begin{center}\n\\includegraphics[width=0.5\\linewidth]{../../figures/Stein-title.png}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgc23a577}]{An important limitation}\n\\begin{center}\n\\includegraphics[width=0.5\\linewidth]{../../figures/stein-serious-design.png}\n\\end{center}\n\n\\[ \\int _{0}^{\\infty }\\ln |S(i\\omega )|d\\omega =\\int _{0}^{\\infty }\\ln \\left|{\\frac 
{1}{1+G_o(i\\omega )}}\\right|d\\omega =\\pi \\sum Re(p_{k}) \\]\n\\end{frame}\n\n\\begin{frame}[label={sec:org16d9c7b}]{An important limitation}\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{../../figures/sensitivity-linear.png}\n\\end{center}\n\\end{frame}\n\\section{Robustness}\n\\label{sec:orga013b17}\n\\end{document}", "meta": {"hexsha": "8f90da963e959db98ddc37f1ce542eafb93a854b", "size": 3600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "controller-design/slides/sensitivity.tex", "max_stars_repo_name": "kjartan-at-tec/mr2025", "max_stars_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "controller-design/slides/sensitivity.tex", "max_issues_repo_name": "kjartan-at-tec/mr2025", "max_issues_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "controller-design/slides/sensitivity.tex", "max_forks_repo_name": "kjartan-at-tec/mr2025", "max_forks_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0, "max_line_length": 284, "alphanum_fraction": 0.7375, "num_tokens": 1138, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389873857265, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5779468580240135}}
{"text": "\\section{Functional analysis}\\label{sec:functional_analysis}\n\n\\Fullref{sec:real_analysis} and \\fullref{sec:complex_analysis} study certain functions with values in \\hyperref[def:vector_space_dimension]{finite-dimensional} \\hyperref[def:hilbert_space]{Hilbert spaces}. Functional analysis studies spaces of functions arising from real or complex analysis. These spaces are mostly infinite-dimensional, and a lot of results hold for general infinite-dimensional \\hyperref[def:vector_space]{vector spaces}.\n\nNevertheless, we will only study vector spaces over \\hyperref[def:set_of_real_numbers]{\\( \\BbbR \\)} or \\hyperref[def:set_of_complex_numbers]{\\( \\BbbC \\)}. This is justified by \\fullref{rem:real_field_extensions}. As in \\fullref{sec:complex_analysis}, through this section, \\( \\BbbK \\) will refer to either \\( \\BbbR \\) or \\( \\BbbC \\).\n", "meta": {"hexsha": "f3c81c0260f642ffa4aee2f56929a91b03205eef", "size": 839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/functional_analysis.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/functional_analysis.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/functional_analysis.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 139.8333333333, "max_line_length": 441, "alphanum_fraction": 0.7914183552, "num_tokens": 208, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5779239757626834}}
{"text": "\\chapter{Statistical Inference} \n\\chaptermark{Inference}\n\\label{sec:inference}\n\nThe idea of extrapolating knowledge from a \\emph{sample} to a population is known as \\emph{statistical inference}.\nIt encompasses the ideas of \\emph{parameter estimation}, \\emph{confidence intervals}, and \\emph{hypothesis testing}.\nWe will assume the reader is familiar with these, but recall some required terminology.\nThe QC and SPC terminology are not always consistent with statisticians' terminology. When new names are given to old ideas, we will emphasize this in the text.\n\n\\begin{description}\n\\item [Null/Alternative Hypothesis] Some statement about the world we wish to test with data. The frequentist argument follows a Popperian philosophy\\footnote{Following Karl Popper's philosophy of science, we can never know that something is true, we can only know when it is not true. Popper philosophy was motivated by the fact that no one suspected Isaac Newton's mechanics to be wrong, until relativity theory was proposed by Einstein.}: to show the alternative hypothesis is true, we will show that the null hypothesis is not true. \nIn the context of quality control, the null hypothesis will be the process is \\emph{in statistical control}, while the alternative will be that it is \\emph{out of control}.\\marginnote{In Control}\nOther terms for the alternative hypothesis are the \\emph{research hypothesis}, or simply the \\emph{signal}.\n\\item [Statistical Test] The procedure of inferring from data on the truthfulness of the alternative hypothesis.\n\\item [Assumptions] As the name suggests, these are assumptions. We stress that unlike hypothesis, assumptions are not being tested in a statistical test. \n\\item [Test Statistic] The function of the data to be computed for the purpose of inference. As such, it is a random variable. \nMay also be though of as a \\emph{signal detector}.\n\\item [Null/Alternative Distribution] The distribution of the test statistic under the null/alternative hypothesis.\n\\item [Type I/II error] See Figure~\\ref{fig:confusion_table}.\n\\item [False/True Positive/Negative] See Figure~\\ref{fig:confusion_table}.\n\\item [Rejection Region] The collection of event that will lead us to reject the null hypothesis, and believe in the alternative hypothesis.\n\\item [p-value] A.k.a. \\emph{observed significance}. The null probability of the observed (or ``more extreme'') event.\n\\item [Significance Level] A.k.a. $\\alpha$. The probability of a false positive.\n\\item [Power] The probability of a true positive.\n\\item [i.i.d.] ``Independent and identically distributed'' (i.i.d.) is an assumption made on the sampling distribution, meaning that samples are statistically independent, and all originating from the same distribution.\n\n\\end{description}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\linewidth]{art/Beware-of-False-Positives-Chart-1}\n\\caption[Confusion Table]{Type I/II error confusion table. 
\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\linewidth]{art/Beware-of-False-Positives-Chart-1}\n\\caption[Confusion Table]{Type I/II error confusion table. \\newline \\url{https://infocus.emc.com/william_schmarzo/beware-of-false-positives/}}\n\\label{fig:confusion_table}\n\\end{figure}\n\n\n\\begin{extra}[ROC terminology]\n%TODO: ROC terminology\n\nThe engineering, statistical, and information retrieval literature define error and precision criteria\\footnote{\\url{https://en.wikipedia.org/wiki/Receiver_operating_characteristic}}:\n\\begin{description}\n\t\\item[True Positive Rate] ...\n\t\\item[Sensitivity] ...\n\t\\item[Recall] ...\n\t\\item[False Positive Rate] ...\n\t\\item[Fallout] ...\n\t\\item[Specificity]\n\t\\item[Miss Rate]\n\t\\item[Prevalence]\n\t\\item[True Negative Rate]\n\t\\item[Accuracy]\n\t\\item[Positive Predictive Value]\n\t\\item[Precision] \n\t\\item[False Omission rate]\n\t\\item[False Discovery Rate]\n\t\\item[Negative Predictive Value]\n\t\\item[Positive Likelihood Ratio]\n\t\\item[Negative Likelihood ratio]\n\t\\item[Diagnostic odds ratio]\n\\end{description}\n\\end{extra}\n\n\nThe following sections of this chapter present particular statistical inference methods we will be using in the following chapters.\n\n\n\\begin{think}\nCan you design a test with type I error larger than its power?\nWould you ever want such a test?\nThink about it using the analogy between statistical tests and criminal courts. \n\\end{think}\n\n\n", "meta": {"hexsha": "9025a09ca7c682ae0b730a71f1f94019c0a7e648", "size": 4137, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Class_notes/inference.tex", "max_stars_repo_name": "johnros/qualityEngineering", "max_stars_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Class_notes/inference.tex", "max_issues_repo_name": "johnros/qualityEngineering", "max_issues_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Class_notes/inference.tex", "max_forks_repo_name": "johnros/qualityEngineering", "max_forks_repo_head_hexsha": "4a1c0959672fb5c5a6e59829e543c95beb4e5b44", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.9054054054, "max_line_length": 537, "alphanum_fraction": 0.7851099831, "num_tokens": 998, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397349, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5779239737140631}}
{"text": "\\documentclass{subfile}\n\n\\begin{document}\n\t\\section{APC}\\label{sec:apc}\n\t\n\t\t\\begin{problem}[APC $2006$, problem $5$]\n\t\t\tProve that for all positive integer $n$ and positive real numbers $a,b,c$, the following inequality holds:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{a^{n+1}}{a^{n}+a^{n-1}b+\\ldots+b^{n}}+\\dfrac{b^{n+1}}{b^{n}+b^{n-1}c+\\ldots+c^{n-1}}+\\dfrac{c^{n+1}}{c^{n}+c^{n-1}+\\ldots+a^{n}}\n\t\t\t\t\t\t& \\geq\\dfrac{a+b+c}{n+1}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $2005$, problem $3$]\n\t\t\tLet $a_{0},\\ldots,a_{n}$ be real numbers such that\n\t\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\t\t\\item $0= a_{0}\\leq\\ldots\\leq a_{n}$\n\t\t\t\t\t\\item For $0\\leq i<j\\leq n$, $a_{j}-a_{i}\\leq j-i$\n\t\t\t\t\\end{enumerate}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=0}^{n}a_{i}^{2}\n\t\t\t\t\t\t& \\geq\\sum_{i=0}^{n}a_{i}^{3}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $2005$, problem $10$]%Radon\n\t\t\tDetermine all pairs of non-negative integers $(k,n)$ such that the following inequality holds for all positive real numbers $x,y$:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t1+\\dfrac{y^{n}}{x^{k}}\n\t\t\t\t\t\t& \\geq\\dfrac{(1+y)^{n}}{(1+x)^{k}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $2003$, problem $8$]\n\t\t\tGiven real numbers $x_{1}\\geq\\ldots\\geq x_{2003}\\geq0$. Show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{1}^{n}-x_{2}^{n}+\\ldots+x_{2003}^{n}\n\t\t\t\t\t\t& \\geq (x_{1}-x_{2}+\\ldots+x_{2003})^{n}\n\t\t\t\t\\end{align*}\n\t\t\tfor any positive integer $n$.\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $2001$, problem $3$]\n\t\t\tLet $a,b,c$ be sides of a triangle. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t2\n\t\t\t\t\t\t& \\leq \\dfrac{a+b}{c}+\\dfrac{b+c}{a}+\\dfrac{c+a}{b}-\\dfrac{a^{3}+b^{3}+c^{3}}{abc}\\leq3\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $2000$, problem $9$]\n\t\t\tLet $a,b,c$ be non-negative real numbers such that $a+b+c=1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t2\\leq(1-a^{2})^{2}+(1-b^{2})^{2}+(1-c^{2})^{2}\n\t\t\t\t\t\t& \\leq (1+a)(1+b)(1+c)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $1999$, problem $2$]\n\t\t\tFind the best possible $k,l$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tk\n\t\t\t\t\t\t& <\\dfrac{v}{v+w}+\\dfrac{w}{w+x}+\\dfrac{x}{x+y}+\\dfrac{y}{y+z}+\\dfrac{z}{z+x}<l\n\t\t\t\t\\end{align*}\n\t\t\tfor all positive real numbers $v,w,x,y,z$.\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $1998$, problem $1$]\n\t\t\tLet $x_{1},x_{2},y_{1},y_{2}$ be real numbers such that $x_{1}^{2}+x_{2}^{2}\\leq1$. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(x_{1}y_{1}+x_{2}y_{2}-1)^{2}\n\t\t\t\t\t\t& \\geq(x_{1}^{2}+x_{2}^{2}-1)(y_{1}^{2}+y_{2}^{2}-1)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $1997$, problem $7$]\n\t\t\tLet $p,q$ be arbitrary real numbers.\n\t\t\t\t\\begin{enumerate}[(a)]\n\t\t\t\t\t\\item Prove that $p^2+q^2+1>p(q+1)$.\n\t\t\t\t\t\\item Determine the largest possible $b$ such that $p^{2}+q^{2}+1>bp(q+1)$ for all $p,q$.\n\t\t\t\t\t\\item Determine the largest possible $c$ such that $p^{2}+q^{2}+1>cp(q+1)$ for all integer $p,q$.\n\t\t\t\t\\end{enumerate}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $1995$, problem $9$]\n\t\t\tProve that for all positive integers $m,n$ and all real numbers $x,y$, the following inequality holds:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(n-1)(m-1)\\left(x^{n+m}+y^{n+m}\\right)+(n+m-1)\\left(x^{n}y^{m}+x^{m}y^{n}\\right)\n\t\t\t\t\t\t& \\geq nm\\left(x^{n+m-1}y+y^{n+m-1}x\\right)\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\t\n\t\t\\begin{problem}[APC $1993$, problem $6$]\n\t\t\tIf $a,b$ are non-negative real numbers, prove the inequality\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\dfrac{\\sqrt{a}+\\sqrt{b}}{2}\\right)^{2}\n\t\t\t\t\t\t& \\leq \\dfrac{a+\\sqrt[3]{a^{2}b}+\\sqrt[3]{ab^{2}}+b}{4}\\leq\\dfrac{a+\\sqrt{ab}+b}{3}\\leq\\sqrt{\\left(\\dfrac{a^{2/3}+b^{2/3}}{2}\\right)^{3}}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "eb767627df583837406770d73e94b80a1aaaef2c", "size": 3600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "apc.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "apc.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "apc.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6435643564, "max_line_length": 143, "alphanum_fraction": 0.5644444444, "num_tokens": 1618, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5779239675227811}}
{"text": "\\section{Uniqueness of Analytic Continuation; Gluing}\r\n\\subsection{Analytic Continuation Revisited}\r\nThe space of germs contains information about the class of analytic functions that agree on a neighbourhood of some given point.\r\nSince we have seen that the space of germs admits a natural topological and analytical stucture, there should be some correlation between paths in this space and the analytic continuations along some paths in the original domain.\r\n\\begin{theorem}\r\n    Let $(f,U),(g,V)$ be function elements on a domain $D\\subset\\mathbb C$ and $\\gamma:[0,1]\\to D$ be a path starting in $U$ and ending in $V$.\r\n    Then $(f,U)\\approx_\\gamma (g,V)$ iff $\\gamma$ lifts to some $\\tilde{\\gamma}$ in (a component of) $\\mathcal G$ joining $[f]_{\\gamma(0)}$ and $[g]_{\\gamma(1)}$.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Suppose $(f,U)\\approx_\\gamma(g,V)$, then we have $(f,U)=(f_1,U_1)\\sim\\cdots\\sim (f_n,U_n)=(g,V)$ and a dissection $0=t_0<t_1<\\cdots <t_{n-1}<t_n=1$ such that $\\gamma([t_{i-1},t_i])\\subset U_i$ for all $i\\in\\{1,\\ldots,n\\}$.\r\n    Now define a lift $\\tilde{\\gamma}$ of $\\gamma$ to $\\mathcal G$ via $\\tilde{\\gamma}(t)=[f_i]_{\\gamma(t)}$ for $t\\in[t_{i-1},t_i]$.\r\n    It is well-defined since $f_i|_{U_i\\cap U_{i+1}}=f_{i+1}|_{U_i\\cap U_{i+1}}$ as $(f_i,U_i)\\sim (f_{i+1},U_{i+1})$ is a direct analytic continuation.\r\n    Also observe that on $[t_{i-1},t_i]$ we have $\\tilde{\\gamma}=(\\pi|_{[f_i]_{U_i}})^{-1}\\circ\\gamma$ which is continuous.\r\n    Therefore $\\tilde{\\gamma}|_{[t_{i-1},t_i]}$ is continuous for all $i$, hence it is continuous.\r\n    Now for each $t\\in[t_{i-1},t_i]$, $\\pi\\circ\\tilde{\\gamma}(t)=\\pi([f_i]_{\\gamma(t)})=\\gamma(t)$, so $\\tilde{\\gamma}$ does lift $\\gamma$.\r\n    Easily $\\tilde{\\gamma}(0)=[f_1]_{\\gamma(0)}=[f]_{\\gamma(0)}$ and $\\tilde{\\gamma}(1)=[f_n]_{\\gamma(1)}=[g]_{\\gamma(1)}$ by construction, as required.\\\\\r\n    Conversely, suppose such $\\tilde\\gamma$ exists, then every point $\\tilde{\\gamma}(t)$ has a neighbourhood $[f_t]_{U_t}$ where $(f_t,U_t)$ is a function element on $D$ and each $U_t$ is a disk.\r\n    Compactness of $[0,1]$ we can choose a finite collection of function elements $(f_1,U_1),\\ldots,(f_n,U_n)$ among them and a dissection $0=t_0<t_1<\\cdots <t_{n-1}<t_n=1$ such that $\\tilde{\\gamma}([t_{i-1},t_i])\\subset [f_i]_{U_i}$.\r\n    Then as $\\tilde\\gamma$ is a lift of $\\gamma$, $\\gamma([t_{i-1},t_i])\\subset U_i$ for any $i$.\r\n    Also for any $i$, $[f_{i-1}]_{\\gamma(t_{i-1})}=\\tilde{\\gamma}(t_{i-1})=[f_i]_{\\gamma(t_{i-1})}$, therefore $f_{i-1}$ and $f_i$ agrees on a neighbourhood of $\\gamma(t_{i-1})\\in U_{i-1}\\cap U_i$.\r\n    But since $U_{i-1},U_i$ are disks, $U_{i-1}\\cap U_i$ is connected and hence $f_{i-1}=f_i$ on $U_{i-1}\\cap U_i$ by the identity principle.\r\n    Therefore it indeed gives the desired analytic continuation.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    Let $\\mathcal F$ be a complete analytic function on a domain $D\\subset\\mathbb C$, then\r\n    $$\\mathcal G_{\\mathcal F}=\\bigcup_{(f,U)\\in\\mathcal F}[f]_U$$\r\n    is a path component of $\\mathcal G$.\r\n\\end{corollary}\r\nTherefore complete analytic functions on a domain $D\\subset\\mathbb C$ are equivalent to Riemann surfaces equipped with covering maps defined by the restriction of the forgetful map.\r\n\\begin{definition}\r\n    The component $\\mathcal G_{\\mathcal F}$ is the Riemann surface associated to $\\mathcal F$.\r\n\\end{definition}", "meta": 
{"hexsha": "f2be3101bb9691fa301efa10859b7463f11ba6c0", "size": 3362, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "9/anacont.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9/anacont.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9/anacont.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.0625, "max_line_length": 235, "alphanum_fraction": 0.6701368233, "num_tokens": 1172, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303087996143, "lm_q2_score": 0.7310585786300048, "lm_q1q2_score": 0.5779239639149848}}
{"text": "\\chapter{Theories and Specifications}\n\\label{cha:theor-spec}\n\nIn this chapter we review the notions of a mathematical theory on one\nside, and a program specification on the other.\n\n%----------------------------------------------------------------------\n%----------------------------------------------------------------------\n\\section{Theories}\n\\label{sec:theories}\n\nSpeaking informally, a (mathematical) theory is any mathematical topic\nthat we might want to study. For example, if we are interested in the\nproperties of a general group, our theory, call it $\\thy{Group}$ would\nconsist of an unspecified group~$G$, equipped with unit and the usual\noperations. All we know about~$G$ is that it satisfies the group\naxioms. We might also wish to think about a particular group, say the\nsymmetric group $S_6$. This is again a theory $\\thy{S6}$, but whatever\nwe prove in it applies only to one specific group. A third possibility\nis to study the theory $\\thy{Groups}$ of \\emph{all} groups, which\nwould also speak about \\emph{homomorphisms} between groups and\nconstructions of new groups from old ones.\n\n\\vspace{2cm}\n\nIn RZ the theory $\\thy{Group}$ would be written as follows:\n%\n\\sourcefile{group.thy}\n\n%----------------------------------------------------------------------\n\\subsection{Simple Theories}\n\\label{sec:simple-theories}\n\nTheories consist of sets, constants, relations and axioms.\n\nModels.\n\nThe first-order language of theories.\n\nFuture extensions.\n\n%----------------------------------------------------------------------\n\\subsection{Parametrized Theories}\n\\label{sec:param-theor}\n\nSimple examples.\n\nAdvanced examples.\n\n%----------------------------------------------------------------------\n%----------------------------------------------------------------------\n\\section{Specifications}\n\\label{sec:specifications}\n\n%----------------------------------------------------------------------\n\\subsection{ML Structures and Signatures}\n\\label{sec:ml-signatures}\n\n\n%----------------------------------------------------------------------\n\\subsection{Parametrized Signatures (Functors)}\n\\label{sec:param-sign}\n\n\n%----------------------------------------------------------------------\n\\subsection{Assertions}\n\\label{sec:assertions}\n\n\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"userman\"\n%%% End: \n", "meta": {"hexsha": "2df0db03605c5fb9e78ec6c4989292c080d1fa45", "size": 2307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "private/userman/chapter2.tex", "max_stars_repo_name": "andrejbauer/rz", "max_stars_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-08-28T10:12:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-07T21:04:22.000Z", "max_issues_repo_path": "private/userman/chapter2.tex", "max_issues_repo_name": "andrejbauer/rz", "max_issues_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "private/userman/chapter2.tex", "max_forks_repo_name": "andrejbauer/rz", "max_forks_repo_head_hexsha": "d92cacaf78fb50d61fc6712c74b8fdaf5d2c6d28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.3552631579, "max_line_length": 71, "alphanum_fraction": 0.5522323364, "num_tokens": 441, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920068519376, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5778940210744766}}
{"text": "\\documentclass[a4page, twocolumn, twoside, 11pt]{article}\n\n\\input{pre-amble.tex}\n\n\\title{Finite Difference of the 1D Explicit Heat Equation}\n\\author{Dr J.\\ A.\\ Gopsill}\n\\date{2019}\n\n\\begin{document}\n\n\\maketitle\n\n%\\section{Introduction}\n\nThe one-dimensional transient heat conduction equation without heat generating sources is given by:\n\n\\begin{equation}\n  \\rho c_p \\frac{\\partial T}{\\partial t} = \\frac{\\partial}{\\partial x}\\left(k\\frac{\\partial T}{\\partial x}\\right)\n\\end{equation}\n\n\\noindent where $\\rho$ is the density, $c_p$ heat capacity, $k$ thermal conductivity, $T$ temperature, $x$ distance, and $t$ time.\n\nIf $\\rho$, $c_p$ and $k$ are constant then the equation can be simplified to:\n\n\\begin{equation}\n  \\frac{\\partial T}{\\partial t} = \\kappa\\frac{\\partial^2 T}{\\partial x^2}\n  \\label{equ:1d-equation}\n\\end{equation}\n\n\\noindent where:\n\n\\begin{equation}\n  \\kappa = \\frac{k}{\\rho c_p}\n\\end{equation}\n\nAn example of the heat equation in practice is where a thin body with thermal conductivity $\\kappa$ (e.g.\\ rod or laminate) is at a starting temperature $T_s$ and is heated at both ends at a constant temperature $T_b$ while being insulated along its length $L$ (\\cref{fig:model}). In this situation, one might be interested in how long it will take for the body to reach $T_b$.\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}\n    \\draw[] (0,0) -- (5,0) node[pos=0.5, anchor=south] {$T_s$} -- (5,1) node[pos=0.5, anchor=west] {$T_b$} -- (0,1) -- (0,0) node[pos=0.5, anchor=east] {$T_b$};\n\n    \\draw[fill] (0,0) -- (5,0) -- (5,-0.1) -- (0,-0.1) -- (0,0);\n    \\draw[fill] (0,1) -- (5,1) -- (5,1.1) -- (0,1.1) -- (0,0);\n\n    \\draw[<->] (0,-0.3) -- (5,-0.3) node[pos=0.5, anchor=north] {$L$};\n\n    \\draw[dotted] (0.4,0) -- (0.4,1);\n    \\draw[dotted] (0.8,0) -- (0.8,1);\n    \\draw[dotted] (1.2,0) -- (1.2,1);\n    \\draw[<->] (1.2, 0.25) -- (1.6, 0.25) node[pos=0.5, anchor=south] {\\footnotesize $\\Delta x$};\n    \\draw[dotted] (1.6,0) -- (1.6,1);\n    \\draw[dotted] (2.0,0) -- (2.0,1);\n  \\end{tikzpicture}\n  \\caption{1D Heat Model}\\label{fig:model}\n\\end{figure}\n\nWe cen solve this numerically by discretising the continuous derivatives ($\\partial t, \\partial x$) in \\cref{equ:1d-equation} using finite difference methods. 
Let's first look at the time derivative $\\frac{\\partial T}{\\partial t}$, which can be approximated using the forward finite difference approximation.\n\n\\begin{equation}\n  \\frac{\\partial T}{\\partial t} \\approx \\frac{ T^{n+1}_{i}-T^{n}_{i} }{ t^{n+1}-t^{n} } = \\frac{ T^{n+1}_{i}-T^{n}_{i} }{ \\Delta t }\n  \\label{equ:forward}\n\\end{equation}\n\n\\noindent where $n$ is a step in time and $i$ is a point along the discretised domain.\n\nSecond, we turn our attention to the spatial derivative $\\frac{\\partial^2 T}{\\partial x^2}$, which can be approximated using the central difference approximation.\n\n\\begin{equation}\n  \\begin{split}\n    \\frac{\\partial^2 T}{\\partial x^2} = \\frac{\\partial}{\\partial x}\\left(\\frac{\\partial T}{\\partial x}\\right) & \\approx \\frac{ \\frac{ T^{n}_{i+1}-T^{n}_{i} }{ \\Delta x } - \\frac{ T^{n}_{i}-T^{n}_{i-1} }{ \\Delta x } }{\\Delta x} \\\\ & \\approx \\frac{T^n_{i+1}-2T^n_i+T^n_{i-1}}{\\Delta x^2}\n  \\end{split}\n  \\label{equ:central}\n\\end{equation}\n\nSubstituting \\cref{equ:forward,equ:central} into \\cref{equ:1d-equation} gives:\n\n\\begin{equation}\n  \\frac{T^{n+1}_{i}-T^{n}_{i}}{\\Delta t} = \\kappa \\left(\\frac{T^n_{i+1}-2T^n_i+T^n_{i-1}}{\\Delta x^2}\\right)\n\\end{equation}\n\nRearranging for $T^{n+1}_{i}$ gives:\n\n\\begin{equation}\n  T^{n+1}_{i} = T^{n}_{i} + \\kappa\\Delta t \\left(\\frac{T^n_{i+1}-2T^n_i+T^n_{i-1}}{\\Delta x^2}\\right)\n\\end{equation}\n\n\n\\end{document}", "meta": {"hexsha": "9a400c6b631a984e7d9a412aae86a14cd6af8be5", "size": 3595, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/main.tex", "max_stars_repo_name": "JamesGopsill/1DHT", "max_stars_repo_head_hexsha": "0fae153e9587129e72cd273ef20ebffaabdea3f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/main.tex", "max_issues_repo_name": "JamesGopsill/1DHT", "max_issues_repo_head_hexsha": "0fae153e9587129e72cd273ef20ebffaabdea3f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/main.tex", "max_forks_repo_name": "JamesGopsill/1DHT", "max_forks_repo_head_hexsha": "0fae153e9587129e72cd273ef20ebffaabdea3f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.393258427, "max_line_length": 377, "alphanum_fraction": 0.645340751, "num_tokens": 1333, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.5778940197147744}}
{"text": "\\section{Linear Classification\n\\hfill $\\color{section-text-color} y, \\bm z \\in\\brace{\\pm 1}, \\enskip \\bm z \\equiv c(\\bm x)$}\n\n% ===\n\\begin{tabular}{@{}>{\\bfseries}l @{\\enskip}l @{\\enskip+}l}\n    (1) Prob. gener. & $p(\\bm x,y)$ & outlier det. \\\\\n    (2) Prob. discr. & $p(y\\mid\\bm x)$ & deg. of belief \\\\\n    (3) Purely discr. & $c: \\bm X \\to \\bm y$ & easiest\n\\end{tabular}\n\n\\emph{Loss Functions:}\n\\enskip $\\mathcal L(y, \\bm z)$\n\\enskip{$\\color{gray} \\bm z \\coloneqq \\bm w^\\top \\bm x$}\\\\\n\\begin{tabular}{>{$\\mathcal L\\!$}l @{$\\:=\\:$}l}\n    $~\\ap{CE}$ & $- \\brack{ y' \\log\\bm z' + (1-y') \\log (1-\\bm z') }$ \\\\\n    $~\\ap{0/1}$ & $\\mathbb I \\brace{\\mathrm{sign}(\\bm z) \\neq y}$ \\\\\n    $~\\ap{hinge}$ & $\\max(0, 1-y\\bm z)$ \\enskip \\textcolor{gray}{for SVM's}\\\\\n    $~\\ap{percep}$ & $\\max(0, -y\\bm z)$ \\\\\n    $~\\ap{logistic}$ & $\\log(1 + \\exp(-y\\bm z))$ \\\\\n    $~\\ap{exp}$ & $\\exp(-y\\bm z)$ \\enskip \\textcolor{gray}{for AdaBoost} \\\\\n\\end{tabular}\nfor CE (log loss):\\; $y' {=} (1 {+} y) / 2$,\\; $\\bm z' {=} (1 {+} \\bm z) / 2$\n\n% ===\n\\subsection{Linear Discriminant Analysis \\hfill(1)}\n\nAssume $Y \\sim \\mathrm{Ber}(\\beta)$,\\; $P(X\\vert Y{=}i) = \\Gauss{\\mu_i, \\Sigma_i}$.\n\n$\\Rightarrow P(y_i\\mid \\bm x_i) = \\sigma( \\cancel{\\bm x_i^\\top \\bm W \\bm x_i} + \\bm w\\!^\\top \\bm x_i + w_0)$\n\\hfill{$\\scriptstyle\\color{gray}\\text{if }\\Sigma_0=\\Sigma_1$}\n%$\\Rightarrow P(y_i\\mid \\bm x_i) = \\sigma( \\underbracket[.7pt][.7pt]{\\cancel{\\bm x_i^\\top \\bm W \\bm x_i}}_{=0 \\text{, if } \\Sigma_0=\\Sigma_1} + \\bm w^\\top \\bm x_i + w_0)$\n\n\\textbf{Min. gener. error.:}\\enskip\n$\\min_f \\E[X,Y]{\\mathcal L(y, c(\\bm x))}$\\\\\n\\enskip $\\rightsquigarrow$\n$c^\\ast(\\bm x) = \\argmin_c \\sum_y p(y\\mid\\bm x) \\mathcal L(y, c(\\bm x))$\n\n% ===\n\\subsection{Prob. discr. approach \\hfill(2)}\n\nAssume $P(y{=}1 \\mid \\bm x_i, \\bm w) = \\sigma(\\bm w^\\top \\bm x)$,\\enskip\n$\\implies L(\\bm w)$ via\n$P(\\bm X, \\!\\bm Y \\mid \\bm w)\n= \\prod_i P(y_i \\mid \\bm x_i , \\bm w) \\color{gray} {\\underbracket[.7pt][.7pt]{\\the\\everymath P(\\bm x_i \\mid \\bm w)}}_{\\:\\mathit{const} \\text{ w.r.t. } \\bm w}$\n\n\\enskip $\\propto \\prod_i \\sigma(\\bm w^\\top \\!\\bm x_i)^{y_i} (1 {-} \\sigma(\\bm w^\\top \\!\\bm x_i))^{1-y_i}$\n\n\\textit{Note:}\\enskip\n$\\bm w^\\ast$ intractable but diff'able $\\to$ \\textbf{GD}!\n\n% ===\n\\subsection{Purely discriminative \\hfill(3)}\n\\label{purely-discriminative}\n\n\\iffalse\n\\begin{minipage}{\\linewidth}\n    \\centering\n    $\\E[X,Y]{\\mathcal L(Y, c(X))}$\n    \\enskip$\\rightsquigarrow$\\enskip\n    $\\frac1n \\sum_n \\mathcal L(y_i, c(\\bm x_i))$\n\\end{minipage}\n\\fi\n\n\\emph{Perceptron:}\n$f(x) = \\mathrm{sgn}(\\bm w^\\top \\bm x)$\\\\\n\\textbf{Loss:} $L(\\bm w) = \\sum_{i : \\text{misclass.}} (-y_i \\bm w^\\top \\bm x_i)$\n\\enskip use (S)GD\n\n\\textbf{Converges if} data is linearly separable,\\\\\\enskip\nand\\enskip $\\eta(k) {\\geq} 0$,\\; $\\sum_k \\eta(k) {\\to} \\infty$,\\; $\\sum_k \\eta^2(k) {<} \\infty$.\n\n% ===\n\\emph{Gradient Descent:}\n$N\\!L(\\bm w) \\coloneqq -L(\\bm w)$\\\\\n$\\highlight*{\\bm w^{(k+1)} \\leftarrow \\bm w^{(k)} - \\eta(k) \\cdot \\nabla_{\\bm w} N\\!L(\\bm w^{(k)})}$\n\n\\textit{Opt. 
learning rate:} $\\eta(k) = \\argmin_\\eta N\\!L(\\bm w^{(k+1)})$\\\\\\vspace{-3pt}\\enskip\n\\scalebox{.75}{$\\textrm{(Taylor \\& } \\pderiv{}{\\eta(k)} \\overset!= 0)$}\\enskip\n$= \\frac{\\norm{ \\nabla N\\!L(\\bm w^{(k)}) }^2}{ \\nabla N\\!L(\\bm w^{(k)})^\\top H_{N\\!L}(\\bm w^{(k)}) \\nabla N\\!L(\\bm w^{(k)}) }$\n\n\\emph{Newton's Method:}\n$\\bm w^{(k+1)} \\leftarrow \\argmin_{\\bm w} N\\!L(\\bm w)$\\\\\\enskip\n\\scalebox{.75}{$\\textrm{(Taylor \\& } \\pderiv{}{\\bm w} \\overset!= 0)$}\\enskip\\enskip\n$= \\bm w^{(k)} - H_{N\\!L}^{-1} (\\bm w^{(k)}) \\nabla N\\!L (\\bm w^{(k)})$\n\n% ===\n\\emph{Fisher's LDA:}\n% Find projection that max. the distance of the projections AND min. the variance\n\\\\\\vspace{-5pt}\\hfill\n$J(\\bm w) = \\frac{\\bm w\\!^\\top \\bm\\Sigma_B \\bm w}{\\bm w\\!^\\top \\bm\\Sigma_W \\bm w}$\n\\enskip$\\xrightarrow{\\color{gray}(\\ast)}$\\enskip\n$\\bm w^\\ast \\propto \\bm\\Sigma_W^{-1} (\\overline{\\bm x}_1 - \\overline{\\bm x}_2)$\n\n{\\everymath\\expandafter{\\the\\everymath\\color{gray}}\n\\enskip $\\ast :\n    {\\scriptstyle \\pderiv{J(\\bm w)}{\\bm w} \\overset!= 0}\n    \\:\\rightsquigarrow\\:\n    (\\cancel{\\color{gray} \\bm w\\!^\\top \\bm\\Sigma_B \\bm w}) \\bm\\Sigma_W \\bm w =\n    (\\cancel{\\color{gray} \\bm w\\!^\\top \\bm\\Sigma_W \\bm w}) \\bm\\Sigma_B \\bm w\n$}\n\n$\\bm\\Sigma_B = (\\overline{\\bm x}_1 - \\overline{\\bm x}_2) (\\overline{\\bm x}_1 - \\overline{\\bm x}_2)\\!^\\top$\n    \\hfill{\\scriptsize between-class covariance}\\\\\n$\\bm\\Sigma_W \\!= \\sum_k \\sum_{\\bm x \\in \\mathcal C_k} (\\bm x - \\overline{\\bm x}_k) (\\bm x - \\overline{\\bm x}_k)\\!^\\top$\\!\\!\n    \\hfill{\\scriptsize within-class covariance}\n\n\\iffalse\n    \\emph{Fisher's linear discriminant:}\n    Find projection that max. the distance of the projections \\;AND\\; min. 
the variance\n    $\\Rightarrow \\max_w \\frac{(\\bm w^\\top (\\overline{\\bm x}_0 - \\overline{\\bm x}_1))^2}{\\widetilde{\\mathrm{Var}}(\\bm w^\\top C_0) + \\widetilde{\\mathrm{Var}}(\\bm w^\\top C_1)}$\n    \\\\\\enskip\n    where $\\widetilde{\\mathrm{Var}}(\\bm w^\\top C_i) = \\sum_{i\\in C_i} (\\bm w^\\top \\bm x_i - \\bm w^\\top \\overline{\\bm x}_0)^2$.\n    \\\\\n    Solution: $\\bm w^\\ast \\propto S_{\\bm W}^{-1} (\\overline{\\bm x}_0 - \\overline{\\bm x}_1)$\n    where $S_{\\bm W} = \\widetilde{\\mathrm{Var}}(C_0) + \\widetilde{\\mathrm{Var}}(C_1)$\n\\fi\n\n% ===\n", "meta": {"hexsha": "436d6b58ec6502bc5ebd9c5260f4e21b7c64d84f", "size": 5160, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/AML20/sections/05_linear_classification.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/AML20/sections/05_linear_classification.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/AML20/sections/05_linear_classification.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.3613445378, "max_line_length": 173, "alphanum_fraction": 0.5755813953, "num_tokens": 2243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006919925839874, "lm_q2_score": 0.721743206297598, "lm_q1q2_score": 0.5778940059843797}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[a4paper,margin= 2cm]{geometry}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\n\\title{\\LARGE{\\bf{Task 4 - Quadrature}}}\n\\author{\\Large{\\bf{Kirtan Patel - AE19B038}}}\n\\date{}\n\n\\begin{document}\n\n\\maketitle \n\n\\section{Introduction}\nIn mathematics, quadrature is a historical term which means the process of determining area. This term is still used nowadays in the context of differential equations, where \"solving an equation by quadrature\" or \"reduction to quadrature\" means expressing its solution in terms of integrals.\\\\\n\nThere are various reasons as of why such approximations\ncan be useful. First, not every function can be analytically integrated. Second, even if a\nclosed integration formula exists, it might still not be the most efficient way of calculating\nthe integral. In addition, it can happen that we need to integrate an unknown function,\nin which only some samples of the function are known.\\\\\n\nThis report includes working and analysis of 4 methods of quadrature:\n\\begin{enumerate}\n\t\\item Rectangular Method\n\t\\begin{enumerate}\n\t\t\\item Left Endpoint Rule\n\t\t\\item Right Endpoint Rule\n\t\t\\item Midpoint Rule\n\t\\end{enumerate}\n\t\\item Trapezoid Method\n\\end{enumerate}\n\n\\subsection{Rectangular Method}\nIn order to gain some insight on numerical integration, we review Riemann integration, a framework that can be viewed as an approach for approximating integrals.\\\\\n\nWe assume that \\textit{f(x)} is bounded function defined on [a,b] and that {$x_0, . . . . ,x_n$} is a partition(P) of [a,b]. For each \\textit{i} we let\n\\[ M_i(f) = \\mathop{sup}_{x\\in[x_{i-1},x_i]} f(x)\\] and \\[m_i(f) = \\mathop{inf}_{x\\in[x_{i-1},x_i]} f(x)\\]\nThus, $M_i(f)$ represents the Maxima of f(x) in the ith interval [$x_{i-1},x_{i}]$ while $m_i(f)$ represents the Minima of f(x) in the ith interval [$x_{i-1},x_{i}]$.\\\\\nLetting $\\delta x_i = x_i - x_{i-1}$ , the \\textbf{upper (Darboux) sum} of f(x) with respect to the partition P is defined as : \n\\[ U(f,P) = \\sum_{i=1}^{n}M_i\\Delta x_i\\] \nwhile the \\textbf{lower (Darboux) sum} of f(x) with respect to the partition P is defined as : \n\\[ L(f,P) = \\sum_{i=1}^{n}m_i\\Delta x_i\\] \\\\\n\nThe upper bound integral of f(x) on [a,b] is defined as :\n\\[ U(f)= inf(U(f,P))\\]\nand the lower bound integral of f(x) is defined as\n\\[ L(f)= sup(L(f,P))\\] where both the infimum and the supremum are taken over all possible partitions, P, of the interval [a,b]. \\\\\n\nAs mentioned above, the bound estimation of integral $\\int_{a}^{b}f(x) \\,dx$ can be done using the extremum values attained by the function in the given range. However, these sums are not defined in the most convenient way for an approximation algorithm. This is because we need to find the extrema of the function in every subinterval. Finding the extrema of the function, may be a complicated task on its own, which we would like to avoid.\\\\\n\nA simpler approach for the approximation is to compute the product of the value of the function ar either of the end-points of the interval by the length of the interval. For an increasing function, taking the left endpoint would give a lower bound value while the right endpoint would give an upper bound value. The opposite is true if the function is decreasing.\\\\\n\nA variation on the rectangular rule is the midpoint rule. 
Similar to the rectangular rule, we approximate the value of the integral by multiplying the length of the interval by the value of the function at one point. Only this time, we use the value of the function at the center point of the interval.\\\\\n\n\\begin{figure} [h!]\n\t\\centering\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=1.1\\linewidth]{Sample1}\n\t\t\\label{fig1:sub1}\n\t\\end{subfigure}%\n\t\\hspace{1cm}\n\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{Sample2}\n\t\t\\label{fig1:sub2}\n\t\\end{subfigure}\n\t\\caption{Graphical Representation of Quadrature using the Rectangular Rule}\n\\end{figure}\n\n\\subsection{Trapezoid Method}\nThis technique is a much more accurate way to \napproximate the area beneath a curve. To construct the \ntrapezoids, we mark the height of the function at the beginning and end of each width interval, then connect the two points.\nWe get a better approximation for graphs which are closer to linear by treating the area in each sub-interval as a trapezium. Here we use both end-points, averaging the values of the function at them.\\\\\n\nOn dividing the integration interval into sub-intervals, this is termed the composite trapezoid rule, which is obtained by applying the trapezoid rule in each sub-interval and then adding the resulting areas together.\\\\\n\nThe Trapezoidal Rule overestimates a curve that is \nconcave up and underestimates functions that are \nconcave down.\n\n\\begin{figure} [h!]\n\t\\centering\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{Sample3}\n\t\t\\label{fig2:sub2}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{Sample4}\n\t\t\\label{fig2:sub3}\n\t\\end{subfigure}\n\t\\caption{Graphical Representation of Quadrature using the Trapezoid Rule}\n\\end{figure}\n\\vspace{1cm}\n\\subsection{Remarks}\nThere are several other methods which use weighted averages of the areas and give a less erroneous result. The following are a few of them:\n\\begin{itemize}\n\t\\item \\textbf{Newton-Cotes Methods}\n\tUse interpolating polynomials. The Trapezoid, Simpson\u2019s 1/3, Simpson\u2019s 3/8 and Bode\u2019s rules are the special cases in which 1st, 2nd, 3rd and 4th order polynomials are used, respectively\n\t\\item \\textbf{Romberg Integration (Richardson Extrapolation)}\n\tUses knowledge of the error estimates to build a recursive higher order scheme\n\t\\item \\textbf{Gauss Quadrature}\n\tLike Newton-Cotes, but instead of a regular grid, chooses a set of points that gives higher order accuracy\n\t\\item \\textbf{Monte Carlo Integration}\n\tUses randomly selected sample points. 
Useful for higher dimensional integrals ($d>4$)\n\\end{itemize}\n\n\\pagebreak\n\n\\section{Results and Analysis}\nTaking the number of sub-intervals from the user and finding the error in the quadrature found by each method, we get the following results.\n\\subsection{Absolute Error vs $\\Delta$x}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{abs_error}\n\t\\caption{Absolute Error vs sub-interval size $\\Delta x$}\n\t\\label{fig3}\n\\end{figure}\n\n\\subsection{log(Absolute Error) vs $\\Delta$x}\n\\begin{figure} [h!]\n\t\\centering\n\t\\begin{subfigure}{0.55\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{log_abs_error}\n\t\t\\caption{log(absolute error) vs $\\Delta$x}\n\t\t\\label{fig2:sub1}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{0.55\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{log_error_vs_delta_x_Lin_Reg}\n\t\t\\caption{log(absolute error) vs $\\Delta$x - Linear Regression}\n\t\t\\label{fig2:sub2}\n\t\\end{subfigure}\n\t\\caption{Results obtained by plotting log(Absolute Error) vs $\\Delta$x}\n\\end{figure}\nWe can see that the Midpoint and Trapezoid Rules give a far more accurate result than the Left and Right Endpoint Rules. This difference is significant for functions with large slopes and large slope changes.\\\\\n\nFurthermore, the Midpoint Rule gives a slightly better result than the Trapezoid Rule. For more linear functions, this difference is negligible, while for highly non-linear functions, the difference is large.\\\\\n\nWe can also see that the relation between the Absolute Error and the size of the sub-interval is almost linear. Naturally, if we use a finer sub-interval, we get closer and closer to the true value of the integral and hence the error tends to zero.\\\\\n\nIt is important to note that the user inputs the number of sub-intervals into which the integration interval is to be divided. The sub-interval size $\\Delta$x is given by: \\[ \\Delta x = \\frac{(upper~integration~limit - lower~integration~limit)}{number~of~sub-intervals}\\]\n\\subsection{Absolute Error vs Number of Sub-intervals}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{abs_error_vs_num_subintervals}\n\t\\caption{Result obtained by plotting Absolute Error vs number of sub-intervals}\n\t\\label{fig5}\n\\end{figure}\n\nWe can see that the Absolute Error is \\textbf{inversely proportional} to the number of sub-intervals. \\\\\n\nThis is further verified by the Absolute Error vs $\\Delta$x plot, which is linear. This, along with the relation between $\\Delta$x and the number of sub-intervals, confirms that the plot agrees with the mathematical relation y $\\propto$ $\\frac{1}{x}$.\\\\\n\nAs the number of sub-intervals tends to $\\infty$, the Absolute Error tends to 0.\n\\pagebreak\n\n\\section{Order of Accuracy}\nThe `order of accuracy' is a way to quantify the accuracy of a method, i.e.\\ how close the output of the method is to the actual value. 
Since the value we get from a numerical method is only an approximation to the actual value, there is generally an error which is greater than zero.\\\\\n\nThe degree of accuracy of a Quadrature Formula is the largest positive integer $n$ such that the formula is exact for $x^k$ for each $k = 0, 1, \\ldots, n$.\\\\\n\nThe integral is given as:\n\\[ \\int_{a}^{b}f(x) \\,dx = Numerical~Integral~+~Error\\]\nIn terms of the above definition, the order of accuracy is the highest degree of a polynomial for which the Integration Method gives no error.\n\\subsection{Rectangular Rule}\nThe Left Endpoint Rectangular Rule follows:\n\\[\\int_{a}^{b}f(x) \\,dx = (b-a)[f(a)]~+~Error\\]\nThe Right Endpoint Rectangular Rule follows:\n\\[\\int_{a}^{b}f(x) \\,dx = (b-a)[f(b)]~+~Error\\]\nThe Midpoint Rectangular Rule follows:\n\\[\\int_{a}^{b}f(x) \\,dx = (b-a)[f(\\frac{a+b}{2})]~+~Error\\] \\\\\n\nLet $f(x) = x^0$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x^0 \\,dx~=~\\int_{a}^{b} 1 \\,dx~=~ b-a$.\\\\\nAnd the Rectangular Rule (for all three sub-rules) gives value: $(b-a)\\cdot[1]~=~b-a$ \\\\\nHence there is \\textbf{No Error} for $x^0$. \\\\\n\nFor $f(x) = x^1$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x \\,dx~=~\\frac{b^2-a^2}{2}$,\\\\\nthe Left Endpoint Rectangular Rule gives value: $(b-a)\\cdot[a]~=~ab-a^2$\\\\\nthe Right Endpoint Rectangular Rule gives value: $(b-a)\\cdot[b]~=~b^2-ab$\\\\\nthe Midpoint Rectangular Rule gives value: $(b-a)\\cdot[\\frac{a+b}{2}]~=~\\frac{b^2-a^2}{2}$\\\\\nHence there is \\textbf{No Error} for $x^1$ in the Midpoint Rule, while there is \\textbf{Error} for the Left Endpoint and the Right Endpoint Rules.\\\\\n\nBut for $f(x) = x^2$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x^2 \\,dx~=~\\frac{b^3-a^3}{3}$,\\\\\nthe Midpoint Rectangular Rule gives value~= $(b-a)~\\left(\\frac{a+b}{2}\\right)^2$\\\\\nHere there is an \\textbf{Error} for $f(x) = x^2$.\\\\\n\nTherefore, the order of accuracy for the Midpoint Rectangular method is 1,\\\\\nwhile the order of accuracy for the Left and Right Endpoint Rules is 0.\n\\subsection{Trapezoid Rule}\nThe Trapezoid Rule follows:\n\\[ \\int_{a}^{b}f(x) \\,dx = \\frac{b-a}{2}[f(a)+f(b)]~+~Error\\]\n\nLet $f(x) = x^0$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x^0 \\,dx~=~\\int_{a}^{b} 1 \\,dx~=~ b-a$.\\\\\nAnd the Trapezoid Rule gives value~= ~ $\\frac{b-a}{2}~[1~+~1]~=~b-a$\\\\\nHence there is \\textbf{No Error} for $x^0$. \\\\\n\nSimilarly, let $f(x) = x^1$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x \\,dx~=~\\frac{b^2-a^2}{2}$.\\\\\nAnd the Trapezoid Rule gives value~= ~ $\\frac{b-a}{2}~[a~+~b]~=~\\frac{b^2-a^2}{2}$\\\\\nHence there is \\textbf{No Error} for $x^1$ as well.\\\\\n\nBut for $f(x) = x^2$,\\\\\nfor which the integral value is~~$\\int_{a}^{b} x^2 \\,dx~=~\\frac{b^3-a^3}{3}$.\\\\\nAnd the Trapezoid Rule gives value~= ~ $\\frac{b-a}{2}~[a^2~+~b^2]$\\\\\nHere there is an \\textbf{Error} (of third order in $(b-a)$) for $f(x) = x^2$.\\\\\n\nTherefore, the order of accuracy for the Trapezoid method is 1.\\\\\n\nAs we begin to use interpolation and other mathematical techniques, we can get a higher order of accuracy; for example, 
Simpson's 1/3 rule has an order of accuracy of 3.\\\n\nTabulating the orders of accuracy (OoA) calculated:\\\n\\begin{table} [h!]\n\t\\centering\n\\begin{tabular}{| l || l |}\n\t\\hline\n\tQuadrature Method & OoA  \\\\\n\t\\hline \\hline\n\tLeft Endpoint Rectangular Method & 0\\\\\n\t\\hline \n\tRight Endpoint Rectangular Method & 0 \\\\\n\t\\hline \n\tMidpoint Rectangular Method & 1 \\\\\n\t\\hline\n\tTrapezoid Method & 1 \\\\\n\t\\hline\n\\end{tabular}\n\\end{table}\n\nThis is evident in the Absolute Error vs $\\Delta$x plots (Figures 3 and 4).\n\n
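To make the comparison concrete, the four rules can also be written compactly in code. The following minimal Python sketch is purely illustrative (the function and variable names are assumptions, not taken from the code used to produce the plots above):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef quadrature(f, a, b, n, rule):\n    # composite rules on n equal sub-intervals of [a, b]\n    x = np.linspace(a, b, n + 1)\n    dx = (b - a) / n\n    if rule == 'left':\n        return dx * np.sum(f(x[:-1]))\n    if rule == 'right':\n        return dx * np.sum(f(x[1:]))\n    if rule == 'midpoint':\n        return dx * np.sum(f((x[:-1] + x[1:]) / 2))\n    if rule == 'trapezoid':\n        return dx * np.sum((f(x[:-1]) + f(x[1:])) / 2)\n    raise ValueError('unknown rule')\n\n# example: integrate x**2 on [0, 1]; the exact value is 1/3\nfor rule in ('left', 'right', 'midpoint', 'trapezoid'):\n    approx = quadrature(lambda x: x**2, 0.0, 1.0, 100, rule)\n    print(rule, abs(approx - 1.0 / 3.0))\n\\end{verbatim}\n\nThe midpoint and trapezoid errors printed by this sketch shrink quadratically as the number of sub-intervals grows, while the endpoint-rule errors shrink only linearly, in line with the analysis above.\n\n\\section{References}\n\\begin{enumerate}\n\t\\item https://en.wikipedia.org/wiki/Quadrature$\\_$(mathematics)\n\t\\item https://math.stackexchange.com/questions/2873291/what-is-the-intuitive-meaning-of-order-of-accuracy-and-order-of-approximation\n\t\\item http://www.math.pitt.edu/~sussmanm/2070Fall07/lab$\\_$10/index.html\n\t\\item https://www.kth.se/social/upload/5287cd2bf27654543869d6c8/Ant-OrderofAccuracy.pdf\n\t\\item https://www3.nd.edu/~zxu2/acms40390F11/sec4-3-Numerical$\\_$integration.pdf\n\\end{enumerate}\n\n\n\n\n\\end{document}", "meta": {"hexsha": "06d12496c008636c266e4e94a12b773dce5d2aa0", "size": 12762, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "AS2101_Labwork/4.Submissions/Task 4/Task4.tex", "max_stars_repo_name": "kirtan2605/Coursework_Codes", "max_stars_repo_head_hexsha": "3455496e8ec0ae3a576cb3fc3b2ed01a055149c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "AS2101_Labwork/4.Submissions/Task 4/Task4.tex", "max_issues_repo_name": "kirtan2605/Coursework_Codes", "max_issues_repo_head_hexsha": "3455496e8ec0ae3a576cb3fc3b2ed01a055149c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AS2101_Labwork/4.Submissions/Task 4/Task4.tex", "max_forks_repo_name": "kirtan2605/Coursework_Codes", "max_forks_repo_head_hexsha": "3455496e8ec0ae3a576cb3fc3b2ed01a055149c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.2741312741, "max_line_length": 443, "alphanum_fraction": 0.7338191506, "num_tokens": 3850, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548646660542, "lm_q2_score": 0.8872046041554923, "lm_q1q2_score": 0.5778850348708009}}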
{"text": "\\subsection{The continuous constant pH molecular dynamics framework}\n\nIn continuous CpHMD, each titratable residue is assigned a titration coordinate, $\\lambda_i$ \\cite{Lee_Brooks_2004_Proteins,Khandogin_Brooks_2005_Biophys.J.}. \nWe attribute to this coordinate an implicit function of an underlying coordinate $\\theta_i$ such that $\\lambda_i$ is bound between 0 and 1:\n\\begin{equation}\n    \\lambda_i = \\sin^2\\theta_i,\n\\end{equation}\nwhere 0 and 1 represent the protonated and deprotonated states, respectively. \nSince $\\lambda$ continuously evolves between 0 and 1,\nin simulation analysis we apply cutoffs to define\nprotonated ($\\lambda^P < 0.2$) and deprotonated states ($\\lambda^U > 0.8$). \nHaving the titration coordinate in place, we use an extended Hamiltonian to allow the simultaneous propagation of spatial (real) and titration (virtual) coordinates:\n\n\\begin{equation}\n\\begin{split}\n\\label{eq:Hamiltonian}\n\\mathcal{H}(\\{\\bm{r}_a\\},\\{\\theta_i\\})=\n&\\frac{1}{2}\\sum_am_a\\dot{\\bm{r}}_a^2+\\frac{1}{2}\\sum_im_i\\dot{\\theta}_i^2+U^{\\rm int}(\\{\\bm{r}_a\\})\\\\\n&+U^{\\rm hybr}(\\{\\bm{r}_a\\},\\{\\theta_i\\})+\\sum_iU^{\\ast}(\\theta_i),\n\\end{split}\n\\end{equation}\nwhere $\\bm{r}_a$ refers to the (x,y,z) coordinates\nof atom $a$, and $\\theta_i$ refers to the titration coordinate of titratable residue $i$. \nThe two first terms of Eq.~\\ref{eq:Hamiltonian} give the kinetic energies of real atoms and virtual $\\lambda$ particles. \n$U^{\\rm int}$ represent the titration-independent bonded and non-bond energies. \nThe fourth term, $U^{\\rm hybr}$, describes the \nnon-bond energies, which are dependent on both spatial and titration coordinates. \nFor the particle-mesh Ewald (PME) all-atom CpHMD\n\\cite{Huang_Shen_2016_J.Chem.TheoryComput.}\n$U^{\\rm hybr}$ includes only Coulomb and van der Waals (vdW) energies, while for\nthe implicit- and hybrid-solvent CpHMD methods $U^{\\rm hybr}$ also includes the generalized Born (GB) energy:\n \n \\begin{equation}\n\\label{eq:Uhybr}\n\\begin{split}\nU^{\\rm hybr}(\\{\\bm{r}_a\\},\\{\\theta_i\\})=U^{\\rm elec}(\\{\\bm{r}_a\\},\\{\\theta_i\\}) \n+ U^{\\rm vdW}(\\{\\bm{r}_a\\},\\{\\theta_i\\})\\\\\n+ U^{\\rm GB}(\\{\\bm{r}_a\\},\\{\\theta_i\\})\n\\end{split}\n\\end{equation}\n\nThe dependence of the electrostatic energy on the titration coordinates arises from the requirement that\nthe partial atomic charges on the titrating residue\nare linearly interpolated  \nbetween the values in the deprotonated and protonated states with respect to $\\lambda$:\n\n% Thus, we can write the charge on atom $j$, $q_j$, on the titrating residue $i$ as\n%\n\\begin{equation}\nq_j(\\lambda_i) = (1-\\lambda_i) q_j^{\\rm prot} +  \\lambda_i q_j^{\\rm unprot},\n\\label{eq:qlamb}\n\\end{equation}\nwhere $q_j$ refers to the partial charge of an atom $j$ in the titrating residue $i$.\nAs to vdW energies, the \ninteractions involving a titratable proton are linearly attenuated with respect to $\\lambda$:\n\\begin{equation}\nU^{\\rm vdW}_{ij}(\\bm r_i, \\bm r_j, \\lambda_i) = (1-\\lambda_i)U^{\\rm vdW\\ast}_{ij}(\\bm r_i, \\bm r_j),\n\\end{equation}\nwhere $i$ is the index for the titratable proton, $j$ is the index for all other atoms, and $U^{\\rm vdW \\ast}_{ij}$ refers to the protonation independent vdW interaction energy. 
If both $i$ and $j$ are titratable protons,\nthe interaction is linearly attenuated by the $\\lambda$ values of both:\n\\begin{equation}\nU^{\\rm vdW}_{ij}(\\bm r_i, \\bm r_j, \\lambda_i, \\lambda_j) = (1-\\lambda_i)(1-\\lambda_j)U^{\\rm vdW\\ast}_{ij}(\\bm r_i, \\bm r_j).\n\\end{equation}\nNote that the vdW energy correction is small and typically neglected in constant pH methods except for the CpHMD methods implemented in CHARMM and Amber \\cite{Khandogin_Brooks_2005_Biophys.J.,Wallace_Shen_2011_J.Chem.TheoryComput.,Huang_Shen_2016_J.Chem.TheoryComput.,Huang_Shen_2018_J.Chem.Inf.Model.,Harris_Shen_2019_J.Chem.Inf.Model.}.\n\nFinally, the last term in Eq. \\ref{eq:Hamiltonian} \nonly affects titratable groups and consists of \nthree biasing potentials:\n\\begin{equation}\n\\label{eq:biasingpotential}\nU^{\\ast}(\\theta_i)=-U^{\\rm mod}(\\lambda_i)+U^{\\rm barr}(\\lambda_i)+U^{\\rm pH}(\\lambda_i).\n\\end{equation}\n\nThe first biasing potential $U^{\\rm mod}$ describes a potential of mean force (PMF) of model titration along the $\\lambda$-coordinate, where the model represents a fully solvent-exposed amino acid, i.e., a blocked single amino acid or a small peptide in solution.\nAccording to linear response theory, \nthe model PMF is (in GB solvent), or can be approximated as (in explicit solvent), a quadratic function:\n\\begin{equation}\n\\label{eq:Umod_1d}\nU^{\\rm mod}(\\lambda_i)=A(\\lambda_i-B)^2.\n\\end{equation}\nHere $A$ and $B$ are parameters that can be determined through free energy calculation methods, such as thermodynamic integration (TI, see section Parameterization). \nBy analogy, in the case of coupled double-site titration \\cite{Khandogin_Brooks_2005_Biophys.J.}, e.g., histidine or carboxylic acid, the model PMF is second order in both $\\lambda$ and $x$, where \n$x$ is a coordinate that represents the tautomer interconversion.\nThe two-dimensional PMF can be written as a bivariate polynomial,\n%\n\\begin{equation}\n\\begin{split}\nU^{\\rm mod}(\\lambda_i,x_i) =\n&a_0 \\lambda_i^2 x_i^2 + a_1 \\lambda_i^2 x_i + a_2 \\lambda_i x_i^2 + a_3 \\lambda_i^2 \\\\\n&+ a_4 x_i^2 + a_5 \\lambda_i x_i + a_6 \\lambda_i + a_7 x_i + a_8.\n\\end{split}\n\\label{eq:double}\n\\end{equation}\nHere $x_{i}=\\sin^{2}\\theta^{x}_{i}$ is bounded between 0 and 1, where $\\theta^{x}_{i}$ is the underlying unbounded\nvariable. \n%\n\n$U^{\\rm barr}$, the second biasing potential in Eq. \\ref{eq:biasingpotential}, describes a barrier or penalty potential in the center of the titration coordinate, \n%\n\\begin{equation}\nU^{\\rm barr}(\\lambda_i) = -4\\beta\\left(\\lambda_i - \\frac{1}{2} \\right)^2,\n\\label{eq:Ubarr}\n\\end{equation}\nwhere $\\beta$ is a parameter that determines the height of the barrier. \nAdding the barrier potential serves to\nincrease the sampling time at the end-point states (e.g., $\\lambda<0.2$ or $\\lambda>0.8$) and thereby\nminimize the unphysical vdW and electrostatic interactions associated with mixed states \n(e.g., $0.2 \\le \\lambda \\le 0.8$). 
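\n\nAs a concrete reading of Eq.~\\ref{eq:Ubarr}: $U^{\\rm barr}$ vanishes at the mixed state $\\lambda_i=1/2$ and equals $-\\beta$ at the end points $\\lambda_i=0$ and $\\lambda_i=1$, so for $\\beta>0$ the end-point states are favored over the mixed state by an energy $\\beta$.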
\n\n$U^{\\rm pH}$, the last term of Eq.~\\ref{eq:biasingpotential}, \ndescribes the pH dependence of the deprotonation free energy \nand is given by\n%\n\\begin{equation}\nU^{\\rm pH}(\\lambda_i)=\\ln(10)k_BT({\\rm pH}-\\textrm{p}{K_a}^{\\rm mod})\\lambda_i, \n\\label{eq:UpH}\n\\end{equation}\n% \nwhere $k_B$ is the Boltzmann constant, $T$ is the system temperature, and $\\textrm{p}{K_a}^{\\rm mod}$ is the model {\\pka} for the titratable residue $i$.\n\n", "meta": {"hexsha": "c04758468476818ed938aa09068f23d638951091", "size": 6487, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "1_methods.tex", "max_stars_repo_name": "JanaShenLab/CpHMD_tutorial", "max_stars_repo_head_hexsha": "e8a8af1297d98e83e1d7f653a12718799d51659f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-21T15:05:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-21T15:05:30.000Z", "max_issues_repo_path": "1_methods.tex", "max_issues_repo_name": "JanaShenLab/CpHMD_tutorial", "max_issues_repo_head_hexsha": "e8a8af1297d98e83e1d7f653a12718799d51659f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1_methods.tex", "max_forks_repo_name": "JanaShenLab/CpHMD_tutorial", "max_forks_repo_head_hexsha": "e8a8af1297d98e83e1d7f653a12718799d51659f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.6115702479, "max_line_length": 334, "alphanum_fraction": 0.7339293973, "num_tokens": 2090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767842777551, "lm_q2_score": 0.658417487156366, "lm_q1q2_score": 0.5777460593422082}}
{"text": "\\section{Diophantine Equations}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h79786p456605}{APMO 1999 P4}{EM}{Determine all pairs $(a,b)$ of integers with the property that the numbers $a^2+4b$ and $b^2+4a$ are both perfect squares.}\n\t\n\t\t\\solu{Easy case work assuming positive or negative values for $ a, b $.}\n\t\t\n\t\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6t177f6h1437607_infinitely_many_pairs}{All Squares}{E}{Prove that there are infinitely many pairs of positive integers $(x, y)$ satisfying that $x+y, x-y, xy+1$ are all perfect squares.}\\label{problem:construction_1}\n\t\n\t\t\\solu{Just work it out.}\n\t\t\n\t\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h418641p2362006}{ISL 2010 N3}{E}{Find the smallest number $n$ such that there exist polynomials $f_1, f_2, \\ldots , f_n$ with rational coefficients satisfying \\[x^2+7 = f_1\\left(x\\right)^2 + f_2\\left(x\\right)^2 + \\ldots + f_n\\left(x\\right)^2.\\]}\n\t\t\n\t\t\\solu{Find the obvious answer, which is very small, so we can probably case work it out. We find the case for $ 3 $ a bit challenging. But we plan to show that $ 7a^2 $ can't be written as a sum of $ 3 $ squares. Which numbers can be written as the sum of $ 3 $ squares? Investigate...}\n\t\t\n\t\t\n\t\\prob{https://artofproblemsolving.com/community/c6h1113199p5083571}{ISL 2014 N5}{E}{Find all triples $(p, x, y)$ consisting of a prime number $p$ and two positive integers $x$ and $y$ such that $x^{p -1} + y$ and $x + y^ {p -1}$ are both powers of $p$.}\n\t\t\n\t\t\\solu{The old school trick, replacement. Assume $ x<y $ and replace $ y $, and then compare the power of $ p $.}", "meta": {"hexsha": "f45163ee05a9ff5faab45fcca320815fa4a8e605", "size": 1600, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "nt/sec4_dioph_eq.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "nt/sec4_dioph_eq.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nt/sec4_dioph_eq.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 69.5652173913, "max_line_length": 296, "alphanum_fraction": 0.70875, "num_tokens": 515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.743168019989179, "lm_q1q2_score": 0.577664398767059}}
{"text": "\\chapter{Probability}\n\n\\begin{ex}\n  Consider $B_i$ and $B_j$ with $i<j$. Suppose $x\\in B_i$ and note that then\n  $x\\not\\in B_j$, since by definition\n  \\[\n    B_j\n    =\\left\\{\\omega\\in\\Omega\\mid\n    \\omega\\in A_j,\\omega\\not\\in A_{j-1},\\ldots,\n    \\omega\\not\\in A_{i}\\ldots,\\omega\\not\\in A_{1}\\right\\}.\n  \\]\n  Therefore, $B_i\\cap B_j=\\emptyset$.\n\n  Since the sequence of $A_i$'s is monotone increasing, it is clear that\n  $A_n=\\bigcup_{i=1}^n A_i$, since $A_i\\subset A_n$ for all $i\\leq n$. Note that\n  $\\bigcup_{i=1}^n B_i\\subset A_n$, since $B_i\\subset A_i$\n  by definition. If $x\\in \\bigcup_{i=1}^n A_i$, then there exists a $j$ such\n  that $x\\in A_j$, but $x\\not\\in A_i$ for any $i<j$, and therefore $x\\in B_j$.\n  Hence, $\\bigcup_{i=1}^n A_i\\subset\\bigcup_{i=1}^n B_i$.\n\n  Thus, $\\bigcup_{i=1}^n A_i=\\bigcup_{i=1}^n B_i$ for all $n$, and therefore\n  $\\bigcup_{i=1}^\\infty A_i=\\bigcup_{i=1}^\\infty B_i$. This completes the proof\n  of the monotone increasing case of Theorem 1.8.\n\n  If $A_1\\supset A_2\\supset\\cdots$ is monotone decreasing, it follows that\n  $A_1^c\\subset A_2^c\\subset\\cdots$ is monotone increasing. Therefore,\n  \\[\n    \\lim_{n\\to\\infty}\\P{A_n}\n    =1-\\lim_{n\\to\\infty}\\P{A_n^c}\n    =1-\\P{A^c}\n    =\\P{A}.\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  Note that since $\\emptyset\\cap\\emptyset=\\emptyset$, the empty set is disjoint\n  to itself. Therefore,\n  \\[\n    \\P{\\emptyset}\n    =\\P{\\bigsqcup_{i=1}^\\infty \\emptyset}\n    =\\sum_{i=1}^\\infty\\P{\\emptyset}\n  \\]\n  by Axiom 3. However, this series only converges if $\\P{\\emptyset}=0$.\n\n  Suppose $A\\cap B=\\emptyset$. Let $C_1=A$, $C_2=B$ and $C_n=\\emptyset$ for\n  $n\\geq 3$, and note that then $\\bigcup_{i=1}^\\infty C_i$ is a disjoint\n  partition of $A\\cup B$. Therefore,\n  \\[\n    \\P{A\\cup B}\n    =\\P{\\bigsqcup_{i=1}^\\infty C_i}\n    =\\sum_{i=1}^\\infty \\P{C_i}\n    =\\P{A}+\\P{B}+\\sum_{i=3}^\\infty 0\n    =\\P{A}+\\P{B}.\n  \\]\n\n  If $A\\subset B$, then $B=A\\sqcup (B\\setminus A)$, and so by the preceding\n  result,\n  \\[\n    \\P{B}=\\P{A\\sqcup (B\\setminus A)}=\\P{A}+\\P{B\\setminus A}.\n  \\]\n  Therefore, since $\\P{B\\setminus A}\\geq 0$ by Axiom 1, we get\n  $\\P{A}\\leq \\P{B}$.\n\n  Note that $0\\leq \\P{A}$ is just a restatement of Axiom 1. $\\P{A}\\leq 1$\n  follows by the previous result after observing that $A\\subset \\Omega$ and\n  $\\P{\\Omega}=1$ by Axiom 2.\n\n  Finally, note that $\\Omega=A\\sqcup A^c$, and therefore\n  \\[\n    1=\\P{\\Omega}=\\P{A}+\\P{A^c}.\n  \\]\n  Hence, $\\P{A^c}=1-\\P{A}$.\n\\end{ex}\n\n\\begin{ex}\n  \\begin{enumerate}[(a)]\n    \\item Let $s,t\\in\\Z_{>0}$ such that $s<t$ and note that then\n          \\[\n            B_s\n            =\\bigcup_{i=s}^n A_i\n            =\\bigcup_{i=s}^{t-1} A_i \\cup \\bigcup_{i=t}^\\infty A_i\n            =\\bigcup_{i=s}^{t-1} A_i\\cup B_t.\n          \\]\n          Therefore, $B_t \\subset B_s$.\n\n          Now consider\n          \\[\n            C_s\n            =\\bigcap_{i=s}^n A_i\n            =\\bigcap_{i=s}^{t-1} A_i \\cap \\bigcap_{i=t}^\\infty A_i\n            =\\bigcap_{i=s}^{t-1} A_i\\cap C_t.\n          \\]\n          Therefore, $C_s\\subset C_t$.\n\n    \\item Suppose $\\omega\\in \\bigcap_{n=1}^\\infty B_n$. Then $\\omega\\in B_n$ for\n          all $n$, and therefore for any $M>0$, we can find an $m>M$ such that\n          $\\omega\\in A_m$. 
Hence, letting $M_1=1$, $m_i$ be the smallest integer\n          greater than $M_i$ such that $\\omega\\in A_{m_i}$, and $M_i=m_{i-1}$\n          for $i>1$, we get an infinite family of events $\\{A_{m_i}\\}$ such that\n          $\\omega\\in A_{m_i}$ for all $i$.\n\n          Conversely, suppose that $\\omega\\in A_n$ for infinitely many $n$. Then\n          for any $B_n$, since there are only finitely many $A_i$'s with\n          $i\\leq n$, there must exist some $m>n$ such that $\\omega\\in A_m$.\n          Hence, $\\omega\\in B_n$, and since $n$ was arbitrary,\n          $\\omega\\in \\bigcap_{n=1}^\\infty B_n$.\n\n    \\item Suppose $\\omega\\in \\bigcup_{n=1}^\\infty C_n$. Then there exists some\n          $m$ such that $\\omega\\in C_m$ and therefore\n          $\\omega\\in\\bigcap_{i=m}^\\infty A_i$. Hence, $\\omega\\in A_i$ for all\n          $i\\geq m$, and thus $\\omega$ belongs to all $A_i$'s except possibly\n          finitely many.\n\n          Conversely, suppose that $\\omega$ belongs to all the events except\n          possibly finitely many. Let $m-1$ be the largest integer such that\n          $\\omega\\not\\in A_{m-1}$ and note that then\n          $\\omega\\in \\bigcap_{i=m}^\\infty A_i=C_m$. Hence,\n          $\\omega\\in \\bigcup_{n=1}^\\infty C_n$.\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}\n  Suppose $x\\in\\bigcap_{i\\in I}A_i^c$. Then for all $i\\in I$, $x\\in A_i^c$ and\n  therefore $x\\not\\in A_i$ for all $i\\in I$. Hence,\n  $x\\not\\in \\bigcup_{i\\in I} A_i$ and $x\\in\\left(\\bigcup_{i\\in I} A_i\\right)^c$.\n  Thus, $\\bigcap_{i\\in I}A_i^c\\subset \\left(\\bigcup_{i\\in I} A_i\\right)^c$.\n\n  Suppose $x\\in\\left(\\bigcup_{i\\in I} A_i\\right)^c$. Then\n  $x\\not\\in\\bigcup_{i\\in I} A_i$, and hence $x\\not\\in A_i$ for all $i\\in I$.\n  Thus, $x\\in A_i^c$ for all $i\\in I$ and so $x\\in\\bigcap_{i\\in I}A_i^c$. Hence,\n  $\\left(\\bigcup_{i\\in I} A_i\\right)^c\\subset\\bigcap_{i\\in I}A_i^c$, and\n  therefore\n  \\[\n    \\left(\\bigcup_{i\\in I} A_i\\right)^c=\\bigcap_{i\\in I}A_i^c.\n  \\]\n\n  Replacing $A_i$ with $A_i^c$, taking the complement of both sides and using\n  the fact that the complement of a complement is the original set, it follows\n  that\n  \\[\n    \\bigcup_{i\\in I} A_i^c=\\left(\\bigcap_{i\\in I}A_i\\right)^c.\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  The sample space is\n  \\[\n    S=\\left\\{\n    \\underbrace{T\\cdots T}_{n_1\\text{ times}}\n    H\\underbrace{T\\cdots T}_{n_2\\text{ times}}H \\mid n_1,n_2\\geq0 \\right\\}.\n  \\]\n  Note that if exactly $k$ tosses are required, the final coin toss is always\n  the second heads, but the first heads can occur on any of the preceding $k-1$\n  tosses. Hence, for $k\\geq 2$ the probability of this event is\n  \\[\n    (k-1)\\left(\\frac{1}{2}\\right)^k.\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  Suppose that there exists such a distribution. Let $p=\\P{\\{0\\}}$ and note that\n  $p\\neq 0$, since otherwise $1=\\P{\\Omega}=\\sum_{i=0}^\\infty\\P{\\{i\\}}=0$, a\n  contradiction. Therefore, we can let\n  $N=\\left\\lceil\\frac{1}{p}\\right\\rceil+1$ and $A=\\{0, 1, \\ldots, N-1\\}$. Then\n  \\begin{align*}\n    \\P{A}\n    =N\\P{\\{0\\}}\n    =\\left(\\left\\lceil\\frac{1}{p}\\right\\rceil+1\\right)p\n    \\geq 1 + p,\n  \\end{align*}\n  contradicting the requirement that probability is between $0$ and $1$.\n  Therefore, no such distribution can exist.\n\\end{ex}\n\n\\begin{ex}\n  Let $B_n=A_n\\setminus \\bigcup_{i=1}^{n-1}A_i$. 
Consider $B_s$ and $B_t$ such\n  that $s<t$ and note that if $x$ is an element of $B_s$, by definition $x$ must\n  be an element of $A_s$. Hence, since $t>s$,\n  $x\\in\\bigcup_{i=1}^{t-1} A_i$, and thus\n  $x\\not\\in A_t\\setminus \\bigcup_{i=1}^{t-1}A_i=B_t$. Therefore, $B_s$ and $B_t$\n  are disjoint whenever $s\\neq t$.\n\n  Since $B_n\\subset A_n$ for all $n$, it follows that\n  $\\bigcup_{n=1}^\\infty B_n\\subset \\bigcup_{n=1}^\\infty A_n$. Moreover, if\n  $x\\in \\bigcup_{n=1}^\\infty A_n$ there exists some $m$ such that $x\\in A_m$,\n  but $x\\not\\in A_s$, for $s<m$. This implies that $x\\in B_m$ and therefore\n  $\\bigcup_{n=1}^\\infty A_n\\subset \\bigcup_{n=1}^\\infty B_n$. Hence,\n  $\\bigcup_{n=1}^\\infty A_n= \\bigcup_{n=1}^\\infty B_n$.\n\n  Thus,\n  \\[\n    \\P{\\bigcup_{n=1}^\\infty A_n}\n    =\\P{\\bigsqcup_{n=1}^\\infty B_n}\n    =\\sum_{n=1}^\\infty \\P{B_n}\n    \\leq \\sum_{n=1}^\\infty \\P{A_n},\n  \\]\n  since $B_n\\subset A_n$ for all $n$.\n\\end{ex}\n\n\\begin{ex}\n  \\begin{align*}\n    \\P{\\bigcap_{i=1}^\\infty A_i}\n     & =1-\\P{\\bigcup_{i=1}^\\infty A_i^c} &  & \\text{(by De Morgan's law)}  \\\\\n     & \\geq 1-\\sum_{i=1}^\\infty\\P{A_i^c} &  & \\text{(by previous problem)} \\\\\n     & =1.\n  \\end{align*}\n\\end{ex}\n\n\\begin{ex}\n  Note that for any event $A$, we have\n  \\[\n    \\cP{A}{B}\n    =\\frac{\\P{A\\cap B}}{\\P{B}}\\geq 0,\n  \\]\n  since $\\P{B}> 0$ and $\\P{A\\cap B}\\geq 0$ by the first axiom of probability,\n  and therefore $\\cP{\\,\\cdot}{B}$ also satisfies the first axiom.\n\n  We also have\n  \\[\n    \\cP{\\Omega}{B}=\\frac{\\P{\\Omega\\cap B}}{\\P{B}}=\\frac{\\P{B}}{\\P{B}}=1,\n  \\]\n  and therefore $\\cP{\\,\\cdot\\,}{B}$ satisfies the second axiom.\n\n  Finally, suppose that $A_1, A_2, \\ldots$ are disjoint. Then\n  \\begin{align*}\n    \\cP{\\bigcup_{i=1}^\\infty A_i}{B}\n     & =\\frac{\\P{\\bigcup_{i=1}^\\infty A_i\\cap B}}{\\P{B}} \\\\\n     & =\\frac{1}{\\P{B}}\\sum_{i=1}^\\infty \\P{A_i \\cap B}  \\\\\n     & =\\sum_{i=1}^\\infty \\frac{\\P{A_i \\cap B}}{\\P{B}}   \\\\\n     & =\\sum_{i=1}^\\infty \\cP{A_i}{B},\n  \\end{align*}\n  and therefore $\\cP{\\,\\cdot}{B}$ satisfies the third axiom.\n\\end{ex}\n\n\\begin{ex}\n  Without loss of generality, we may assume that the player initially selects\n  the first door and the host reveals the second door to be empty. 
Then, letting\n  $D_i$ be the event of the reward being behind door $i$ and $R_2$ being the\n  event of the host opening door $2$, we have\n  \\begin{align*}\n    \\cP{D_1}{R_2}\n    =\\frac{\\cP{R_2}{D_1}\\P{D_1}}{\\P{R_2}}\n    =\\frac{\\frac{1}{2}\\cdot\\frac{1}{3}}{\\frac{1}{2}}=\\frac{1}{3},\n  \\end{align*}\n  for Door 1 and\n  \\begin{align*}\n    \\cP{D_3}{R_2}\n    =\\frac{\\cP{R_2}{D_3}\\P{D_3}}{\\P{R_2}}\n    =\\frac{1\\cdot\\frac{1}{3}}{\\frac{1}{2}}=\\frac{2}{3},\n  \\end{align*}\n  for Door 3, since if the reward were behind Door 1 the host would have been\n  able to reveal either Door 2 or Door 3, but if it were behind Door 3, he\n  would be forced to reveal Door 2.\n\\end{ex}\n\n\\begin{ex}\n  \\begin{align*}\n    \\P{A^c\\cap B^c}\n     & =\\P{(A^{c^c}\\cup B^{c^c})^c}                  &  & \\text{(by De Morgan's law)} \\\\\n     & =\\P{(A \\cup B)^c}                                                              \\\\\n     & =1 - \\P{A\\cup B}                                                               \\\\\n     & =1 - \\left[\\P{A} + \\P{B} - \\P{A\\cap B}\\right]                                  \\\\\n     & =1-\\P{A}-\\P{B}+\\P{A}\\P{B}                     &  & \\text{(since $A\\amalg B$)}  \\\\\n     & =\\left[1-\\P{A}\\right]\\left[1-\\P{B}\\right]                                      \\\\\n     & =\\P{A^c}\\P{B^c}\n  \\end{align*}\n\\end{ex}\n\n\\begin{ex}\n  Let $S_g$ be the event of drawing a card showing a green side and $C_g$ be the\n  event of drawing the card that is green on both sides. Then, by Bayes'\n  theorem,\n  \\begin{align*}\n    \\cP{C_g}{S_g}\n     & =\\frac{\\cP{S_g}{C_g}\\P{C_g}}{\\P{S_g}}\n    =\\frac{1\\cdot \\frac{1}{3}}{\\frac{1}{2}}\n    =\\frac{2}{3}.\n  \\end{align*}\n\\end{ex}\n\n\\begin{ex}\n  \\begin{enumerate}[(a)]\n    \\item The sample space is given by\n          \\[\n            S=\\left\\{\n            \\underbrace{T\\cdots T}_{n\\text{ times}}H \\mid n\\geq 1\n            \\right\\} \\bigcup\n            \\left\\{\n            \\underbrace{H\\cdots H}_{n\\text{ times}}T \\mid n\\geq 1\n            \\right\\}.\n          \\]\n    \\item There are two possible ways that can happen: by throwing $HHT$ or\n          $TTH$. Therefore, the probability is\n          \\[\n            2\\cdot \\left(\\frac{1}{2}\\right)^3=\\frac{1}{4}.\n          \\]\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}\n  Let $\\P{A}=0$ and let $B$ be an arbitrary event. Then it is clear that\n  \\[\n    0\\leq \\P{A\\cap B}\\leq \\P{A}=0,\n  \\]\n  and therefore $\\P{A\\cap B}=\\P{A}\\P{B}$ trivially. Hence, $A\\amalg B$.\n\n  Otherwise, if $\\P{A}=1$, note that $\\P{A^c}=0$, and therefore $A^c\\amalg B^c$\n  for any event $B^c$. However, Exercise 11 then implies that $A\\amalg B$.\n\n  If $A$ is independent of itself, we have $\\P{A}=\\P{A\\cap A}=[\\P{A}]^2$, and\n  therefore $\\P{A}$ must equal $0$ or $1$.\n\\end{ex}\n\n\\begin{ex}\n  \\begin{enumerate}[(a)]\n    \\item Having at least two children with blue eyes corresponds to the event\n          $\\{BBO, BOB, OBB,$ $BBB\\}$. Note that by Bayes' theorem\n          \\[\n            \\cP{BBO}{B_{\\geq 1}}\n            =\\frac{\\cP{B_{\\geq 1}}{BBO}\\P{BBO}}{\\P{B_{\\geq 1}}}\n            =\\frac{1\\cdot \\frac{3}{64}}{\\frac{37}{64}}\n            =\\frac{3}{37},\n          \\]\n          and similarly $\\cP{BOB}{B_{\\geq 1}}=\\cP{OBB}{B_{\\geq 1}}=\\frac{3}{37}$\n          by independence between children. 
Meanwhile,\n          \\[\n            \\cP{BBB}{B_{\\geq 1}}\n            =\\frac{\\cP{B_{\\geq 1}}{BBB}\\P{BBB}}{\\P{B_{\\geq 1}}}\n            =\\frac{1\\cdot \\frac{1}{64}}{\\frac{37}{64}}\n            =\\frac{1}{37}.\n          \\]\n          Therefore, the probability at least two children have blue eyes given\n          that at least one child has blue eyes is $\\frac{10}{37}$.\n\n    \\item This is identical to asking what is the probability that at least one\n          of the two remaining children has blue eyes. The probability that\n          either the middle or eldest child, but not both, have blue eyes is\n          $2\\cdot \\frac{1}{4}\\cdot\\frac{3}{4}=\\frac{3}{8}$, while the\n          probability that both have blue eyes is $\\frac{1}{16}$. Therefore,\n          the probability that at least two children have blue eyes given that\n          the youngest child has blue eyes is $\\frac{7}{16}$.\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}\n  If $A$ and $B$ are independent events, then\n  \\[\n    \\cP{A}{B}=\\frac{\\P{A\\cap B}}{\\P{B}}=\\frac{\\P{A}\\P{B}}{\\P{B}}=\\P{A}.\n  \\]\n\n  For any events $A$ and $B$, the definition of conditional probability states\n  that\n  \\[\n    \\cP{A}{B}=\\frac{\\P{A\\cap B}}{\\P{B}}\\text{, and }\n    \\cP{B}{A}=\\frac{\\P{A\\cap B}}{\\P{A}}.\n  \\]\n  Solving both for $\\P{A\\cap B}$ gives us\n  \\[\n    \\P{A\\cap B}=\\cP{A}{B}\\P{B}=\\cP{B}{A}\\P{A}.\n  \\]\n\\end{ex}\n\n\\begin{ex}\n  This is immediate from applying the definition of conditional probability\n  twice, first to $A\\,|\\,B\\cap C$ and then to $B\\,|\\,C$,\n  \\begin{align*}\n    \\P{A\\cap B\\cap C}\n    =\\cP{A}{B\\cap C}\\P{B\\cap C}\n    =\\cP{A}{B\\cap C}\\cP{B}{C}\\P{C}.\n  \\end{align*}\n\\end{ex}\n\n\\begin{ex}\n  Let $\\P{A_1}-\\cP{A_1}{B}=\\varepsilon$ and suppose that\n  $\\cP{A_i}{B}\\leq \\P{A_i}$ for all $i=2,\\ldots,k$. Then\n  \\begin{align*}\n    0 & = \\sum_{i=1}^k\\P{A_i}-\\sum_{i=1}^k\\cP{A_i}{B}                      \\\\\n      & = \\P{A_1}-\\cP{A_1}{B}+\\sum_{i=2}^k\\left(\\P{A_i}-\\cP{A_i}{B}\\right) \\\\\n      & \\geq \\varepsilon,\n  \\end{align*}\n  a contradiction. 
Therefore, $\\cP{A_i}{B}> \\P{A_i}$ for some $i$.\n\\end{ex}\n\n\\begin{ex}\n  \\begin{align*}\n    \\cP{W}{+}\n     & =\\frac{\\cP{+}{W}\\P{W}}{\\P{+}}                                          &  & \\text{(by Bayes' theorem)} \\\\\n     & =\\frac{0.82 \\cdot 0.5}{0.65\\cdot 0.3 + 0.82 \\cdot 0.5 + 0.5 \\cdot 0.2}                                 \\\\\n     & =0.58.\n  \\end{align*}\n\\end{ex}\n\n\\begin{ex}\n  \\begin{enumerate}[(a)]\n    \\item Note that by Bayes' theorem,\n          \\[\n            \\cP{C_i}{H}\n            =\\frac{\\cP{H}{C_i}\\P{C_i}}{\\sum_{j=1}^5 \\cP{H}{C_j}\\P{C_j}}\n          \\]\n          where\n          \\[\n            \\sum_{j=1}^5 \\cP{H}{C_j}\\P{C_j}\n            =\\frac{1}{4}\\cdot\\frac{1}{5}+\\frac{1}{2}\\cdot\\frac{1}{5}\n            +\\frac{3}{4}\\cdot\\frac{1}{5}+1\\cdot\\frac{1}{5}\n            =\\frac{1+2+3+4}{20}=\\frac{1}{2}.\n          \\]\n\n          Hence,\n          \\[\n            \\cP{C_1}{H} = 0,\\,\n            \\cP{C_2}{H} = \\frac{1}{10},\\,\n            \\cP{C_3}{H} = \\frac{1}{5},\\,\n            \\cP{C_4}{H} = \\frac{3}{10}\\text{, and }\n            \\cP{C_5}{H} = \\frac{2}{5}.\n          \\]\n    \\item\n          \\[\n            \\cP{H_2}{H_1}\n            =\\sum_{j=1}^5\\cP{H}{C_j}\\cP{C_j}{H}\n            =0+\\frac{1}{4}\\cdot\\frac{1}{10}\n            +\\frac{1}{2}\\cdot\\frac{1}{5}\n            +\\frac{3}{4}\\cdot\\frac{3}{10}\n            +1\\cdot\\frac{2}{5}=\\frac{3}{4}.\n          \\]\n    \\item Note that\n          \\[\n            \\cP{B_4}{C_1}=0,\\,\n            \\cP{B_4}{C_2}=\\frac{27}{256},\\,\n            \\cP{B_4}{C_3}=\\frac{1}{16},\\,\n            \\cP{B_4}{C_4}=\\frac{3}{256}\\text{, and }\n            \\cP{B_4}{C_5}=0,\n          \\]\n          and,\n          \\[\n            \\sum_{j=1}^5 \\cP{B_4}{C_j}\\P{C_j}\n            =0+\\frac{1}{5}\\cdot\\frac{27}{256}+\\frac{1}{5}\\cdot\\frac{1}{16}\n            +\\frac{1}{5}\\cdot\\frac{3}{256}+0\n            =\\frac{23}{640}.\n          \\]\n          Therefore, by Bayes' theorem,\n          \\[\n            \\cP{C_1}{B_4} = 0,\\,\n            \\cP{C_2}{B_4} = \\frac{27}{46},\\,\n            \\cP{C_3}{B_4} = \\frac{8}{23},\\,\n            \\cP{C_4}{B_4} = \\frac{3}{46}\\text{, and }\n            \\cP{C_5}{B_4} = 0.\n          \\]\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{ex}~\n  \\inputminted{python}{../code/01-21.py}\n\n  \\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.8]{../images/01-21a}\n    \\caption{Proportion of heads as a function of the number of coin flips for\n      $p_\\text{heads}=0.3$.}\n  \\end{figure}\n\n  \\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.8]{../images/01-21b}\n    \\caption{Proportion of heads as a function of the number of coin flips for\n      $p_{heads}=0.03$.}\n  \\end{figure}\n\n\\end{ex}\n\n\\begin{ex}~\n  \\inputminted{python}{../code/01-22.py}\n  \\inputminted{text}{../output/01-22.txt}\n\\end{ex}\n\n\\begin{ex}~\n  \\inputminted{python}{../code/01-23.py}\n  \\inputminted{text}{../output/01-23.txt}\n\n  In the case of $A=\\{2, 4, 6\\}$ and $B=\\{1, 2, 3, 4\\}$, the theoretical values\n  are $\\P{A}=\\frac{1}{2}$, $\\P{B}=\\frac{2}{3}$ and $\\P{AB}=\\frac{1}{3}$.\n\n  For $A=\\{1, 2, 3, 4\\}$ and $B=\\{3, 4, 5, 6\\}$, the theoretical values are\n  $\\P{A}=\\frac{2}{3}$, $\\P{B}=\\frac{2}{3}$ and $\\P{AB}=\\frac{1}{3}$.\n\\end{ex}", "meta": {"hexsha": "3e2f1ea29014535f2fa6836b66c20a3500d0e25e", "size": 16900, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/ch01.tex", 
"max_stars_repo_name": "dtrifuno/all-of-stats-solutions", "max_stars_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/ch01.tex", "max_issues_repo_name": "dtrifuno/all-of-stats-solutions", "max_issues_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/ch01.tex", "max_forks_repo_name": "dtrifuno/all-of-stats-solutions", "max_forks_repo_head_hexsha": "0572cdae22b128e71c1c6c7ead2bf3b259875bc9", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.9173553719, "max_line_length": 112, "alphanum_fraction": 0.5415976331, "num_tokens": 6844, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5776643975918486}}
{"text": "\\lab{Algorithms}{Jordan Forms and Schur Decomposition}{Jordan Forms and Schur Decomposition}\n\\label{Ch:Jordan}\n\n\\objective{Learn about Jordan forms, what we would like to use them for, and why we use Schur decomposition instead.}\n\n\n\\textbf{Outline}\n\\begin{itemize}\n\t\\item Recall that Jordan forms are (blah blah blah)\n\t\\item Jordan forms are cool because it's good to know the eigenvalues and eigenvectors. \n\t\\item Unfortunately the Jordan form is really hard to compute. Abel's Impossibility Theorem suggests we might have to iterate. Most eigenvalue solvers these days are iterative algorithms that find the Schur decomposition. Like the QR algorithm, which you've already done a lab about.\n\n\\end{itemize}\n\n\\section*{Schur Decomposition}\nSchur decomposition relies on the following theorem.\n\n\n\\begin{theorem}\n{\\bf Schur's Theorem:} For any complex $n \\times n$ matrix $A$, there exists a unitary matrix $Q$ such that $A = Q^\\ast U Q$, where $U$ is upper triangular.\n\\end{theorem}\n\n%\\footnotetext{The requirement that $A$ be complex doesn't mean we're in trouble if $A$ only has real entries. It just means we have to allow complex entries in the Schur decomposition for the theorem to hold in general. If $A$ is real and has all real eigenvalues, then the Schur decomposition of $A$ will also be real.}\n\nThe eigenvalues of $A$ are the diagonal entries of $U$. (Why?) But it follows from Abel's Impossibility Theorem (Theorem \\ref{Theorem:Abel}) that no finite algorithm can compute the eigenvalues of $A$ exactly. (How does this follow?) Therefore, no finite algorithm can directly compute the Schur factorization of A.\n", "meta": {"hexsha": "9441107a7170dfdd4638abe09556fb06cdae1856", "size": 1622, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algorithms/JordanSchur/Jordan.tex", "max_stars_repo_name": "abefrandsen/numerical_computing", "max_stars_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Algorithms/JordanSchur/Jordan.tex", "max_issues_repo_name": "abefrandsen/numerical_computing", "max_issues_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Algorithms/JordanSchur/Jordan.tex", "max_forks_repo_name": "abefrandsen/numerical_computing", "max_forks_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-21T23:06:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-21T23:06:27.000Z", "avg_line_length": 62.3846153846, "max_line_length": 321, "alphanum_fraction": 0.7731196054, "num_tokens": 403, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7772998508568417, "lm_q1q2_score": 0.5776643822560578}}
{"text": "\\input{../header.tex}\n\\title{\\vspace{-2cm}INF3490/INF4490 Exercises - Simple search algorithms}\n\\author{Eivind Samuelsen\\input{../author_footnote.tex}}\n\\date{}\n\n% Removing paragraph indents is sometimes useful:\n\\setlength\\parindent{0pt}\n% ==============================================================================\n\n% ================================= DOCUMENT ===================================\n\\begin{document}\n    \\renewcommand\\marginsymbol[1][0pt]{%\n  \\tabto*{0cm}\\makebox[-1cm][c]{$\\mathbb{P}$}\\tabto*{\\TabPrevPos}}\n\n\\maketitle\n\\input{../intro.tex}\n\n\\section{Simple search algorithms}\n\nGiven the function \\(f(x) = -x^4 + 2x^3 + 2x^2 - x\\):\n\n\\subsection{Derivative}\nWhat is its derivative \\(f'(x)\\) ?\n\n\\subsection{Plotting \\marginsymbol}\nPlot the function, and its gradient(derivative) from \\(x=-2\\) to \\(x=3\\).\nUse python, wolfram alpha or another plotting tool of your choice.\n\n\\subsection{Gradient Ascent \\marginsymbol}\n\\label{subsec:grada}\nMaximize using gradient ascent.\nYou can try step size 0.1 and start somewhere in the range [-2, 3].\nHow does the choice of starting point and step size affect the algorithm's performance?\nIs there a starting point where the algorithm would not even be able to find a local maximum?\n\n\\subsection{Exhaustive Search \\marginsymbol}\n\\label{subsec:exhaust}\nAssume that we are only interested in maxima of \\(f(x)\\),\nwhere \\(-2\\leq x \\leq 3\\),\nand \\(x\\) increases in steps of length 0.5 (\\(\\Delta x = 0.5\\)).\nPerform an exhaustive search to maximize \\(f(x)\\) and plot the result.\n\n\\subsection{Greedy Search and Hill Climbing}\nIn what way would greedy search and hill climbing differ for the maximization problem in Problem \\ref{subsec:grada}?\nCan you identify a starting position where the two algorithms might give different results?\n\n\\subsection{Possible improvements}\nGradient ascent, greedy search and hill climbing are quite similar, and are all based almost exclusively on exploitation.\nCan you think of any additions to these algorithms in order to do more exploration?\n\n\\subsection{Exhaustive search vs. simulated annealing}\nWhich algorithm do you think is the most efficient at maximizing \\(f(x)\\) under the conditions in Problem \\ref{subsec:exhaust};\nexhaustive search or simulated annealing? 
Explain.\n\n\\input{../contact.tex}\n\n\\end{document}\n% ==============================================================================\n", "meta": {"hexsha": "06b8f737c3bb5f1b278d39d687f8f32ad3d990cf", "size": 2377, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "material/week1/inf3490-ex1.tex", "max_stars_repo_name": "mpambasange/MachineLearning", "max_stars_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2016-09-01T08:50:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T20:56:07.000Z", "max_issues_repo_path": "material/week1/inf3490-ex1.tex", "max_issues_repo_name": "olehermanse/INF3490-PythonAI", "max_issues_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-10-20T09:36:19.000Z", "max_issues_repo_issues_event_max_datetime": "2017-08-29T00:28:54.000Z", "max_forks_repo_path": "material/week1/inf3490-ex1.tex", "max_forks_repo_name": "olehermanse/INF3490-PythonAI", "max_forks_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2016-10-31T12:30:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-15T12:12:50.000Z", "avg_line_length": 40.2881355932, "max_line_length": 127, "alphanum_fraction": 0.6827934371, "num_tokens": 583, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8221891239865619, "lm_q1q2_score": 0.5776125724639928}}
{"text": "%\n% CMPT 310: Artificial Intelligence - A Course Overview\n% Section: Machine Learning\n%\n% Author: Jeffrey Leung\n%\n\n\\section{Machine Learning}\n\t\\label{sec:machine-learning}\n\\subsection{Introduction}\n\t\\label{subsec:ml-introduction}\n\\begin{easylist}\n\n& \\textbf{Machine learning:} Approximation of a function which detects the classification of a problem given attributes of data\n\n& \\textbf{Classification:} Category of data\n& \\textbf{Attribute:} Characteristics of a problem using a variable (boolean, discrete, continuous, etc.)\n& Epoch: Round of training data\n\n& Types of machine learning:\n\t&& Supervised learning\n\t&& \\textbf{Semi-supervised learning:} Machine learning technique which classifies data through processing a combination of pre-classified data and unclassified data\n\t&& \\textbf{Unsupervised learning:} Machine learning technique which classifies data and finds patterns through processing previously unclassified data\n\t\t&&& E.g. Clustering\n\t&& \\textbf{Reinforcement learning:} Machine learning technique which takes actions to obtain a score, and repeats to find the correct actions to obtain the optimal score\n\t\t&&& Used for robotics, games\n\n\\end{easylist}\n\\subsection{Supervised Learning}\n\t\\label{subsec:supervised-learning}\n\\begin{easylist}\n\n& \\textbf{Supervised learning:} Machine learning technique which classifies data through processing provided pre-classified data\n\n& \\textbf{Episodic problem (ML):} Situation in supervised learning where a dataset is built from discrete data points\n\t&& E.g. Spam checker\n\n& \\textbf{Inductive learning:} Approximation of a function on the attributes of a problem which produces a label\n\t&& Notation: Ideal function $f(features) \\rightarrow label$; find $h \\approx f$\n\t&& \\textbf{Label:} Resulting classification\n\t&& Highly similar to \\href{http://onlinestatbook.com/2/regression/intro.html}{linear regression}\n\t&& E.g. Finding a best-fit line to a set of datapoints\n\t&& Choose the simplest and most generalizable model $h$ (Occam's razor)\n\n& \\textbf{Hypothesis space:} Set of models $h$ which are valid\n\t&& Given $n$ boolean functions, there are $2^n$ values in the dataset and $2^{2^n}$ unique possible assignments of results to the dataset\n\t&& E.g. 
Neural networks, logistic regressions, support vector machines, random forests\n\t\n& Dataset training:\n\t&& \\textbf{Training data:} Dataset which is applied to a neural network to provide neurons with weights and biases\n\t\t&&& The greater the data for training, the better the model created\n\t&& \\textbf{Validation data:} Dataset which is applied to a neural network with known optimal results to test accuracy\n\t&& \\textbf{Testing data:} Dataset which is applied to a neural network after training and validation\n\t\t&&& Cannot be the same as training data\n\t\t&&& The greater the data for testing, the better the estimate of accuracy\n\t&& Cross-validation: Splitting (`folding') the original dataset into equal subsets, one for testing, one for validation, and the remaining for training\n\t\t&&& Strategy: Cross-validate multiple times with each fold to find the most accurate fold\n\t&& \\textbf{Overfitting:} Fitting the training data so closely that accuracy on testing data suffers\n\n\\end{easylist}\n\\subsection{Decision Trees}\n\t\\label{subsec:decision-trees}\n\\begin{easylist}\n\n& \\textbf{Decision tree:} Tree where each node represents a decision with multiple choices and each leaf represents a unique choice\n\t&& Restricting hypothesis space: Limit on depth\n\t&& Split paths on most informative features (which result in the lowest entropy after the split)\n\t\t&&& \\textbf{Entropy:} Uncertainty of information (in this context, calculated from the ratio between choices)\n\t\t\t&&&& Notation: $H$; for a binary split, $0 \\leq H \\leq 1$\n\t\t\t&&&& Unit: Bits\n\t\t\t&&&& Calculation: $H(p_1, p_2, \\dotsc, p_n) = - \\sum_i p_i \\log_2 p_i$\n\t\t\t&&&& E.g. $H(0.5, 0.5) = - \\frac{1}{2} \\log_2 \\frac{1}{2} - \\frac{1}{2} \\log_2 \\frac{1}{2} = 1$\n\t\t\t&&&& E.g. $H(0, 1) = - 0 \\cdot \\log_2 0 - 1 \\cdot \\log_2 1 = 0$, taking $0 \\log_2 0 = \\lim_{x \\rightarrow 0^+} x \\log_2 x = 0$\n\t\t\t&&&& E.g. 
$H(\\frac{1}{3}, \\frac{2}{3}) = 0.918\\dots$\n\t\t&&& To find the reduction in uncertainty, calculate:\n\n\t\t$$H(root) - \\sum_{i \\in \\textrm{children}} \\frac{\\textrm{samples of } i}{\\textrm{samples of root}} H(i)$$\n\n\t\t&&& See diagram and patrons graph\n\n\\end{easylist}\n\\subsection{Neural Networks}\n\t\\label{subsec:neural-networks}\n\\begin{easylist}\n\n& \\href{https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi}{\\textbf{Neural network:}} Combination of neuron structures through which data flows and is manipulated, to create a set of output values\n\t&& \\textbf{Bias:} Value which is added as a standalone neuron input (typically $-1$)\n\t&& \\textbf{Weight:} Value by which any neuron input is multiplied\n\t&& \\href{http://playground.tensorflow.org}{Neural network simulation}\n\n& \\textbf{Activation function:} Simple function applied to the sum of weighted neuron inputs to create a neuron output value\n\t&& Notation: $g: \\mathbb{R} \\rightarrow \\mathbb{R}$\n\t&& Types of functions:\n\t\t&&& \\textbf{Threshold function:} Activation function which has the output value -1 for negative input values, and the output value 1 for positive input values\n\t\t\\end{easylist}\n\t\t\\[\n\t\t\ta = \\begin{dcases}\n\t\t\t\t-1 & \\textrm{if } in < 0 \\\\\n\t\t\t\t1 & \\textrm{otherwise}\n\t\t\t\\end{dcases}\n\t\t\\]\n\t\t\\begin{easylist}\n\t\t\n\t\t&&& \\textbf{Sigmoid/logistic function:} Activation function which changes output value gradually from 0 to 1, and has output value 0.5 at input value 0\n\t\t\\end{easylist}\n\t\t\\[\n\t\t\ta = \\frac{1}{1+e^{-in}}\n\t\t\\]\n\t\t\\begin{easylist}\n\t\t\n\t\t&&& \\textbf{Rectified Linear Unit (ReLU) function:} Activation function which has output value 0 for negative values, and output value equal to the input value for positive values\n\t\t\\end{easylist}\n\t\t\\[\n\t\t\ta = \\max(0, in) = \\begin{dcases}\n\t\t\t\t0 & \\textrm{if } in \\leq 0 \\\\\n\t\t\t\tin & \\textrm{otherwise}\n\t\t\t\\end{dcases}\n\t\t\\]\n\t\t\\begin{easylist}\n\t\t\n& Process of training a network:\n\t&& \\textbf{Squared error:}\n\t\\end{easylist}\n\t\\begin{align*}\n\t\tErr\n\t\t&= y - h_w(x) \\\\[1em]\n\t\t\\textrm{Squared error } E\n\t\t&= \\frac{Err^2}{2} \\\\\n\t\t&= \\frac{(y - h_w(x))^2}{2} \\\\[1em]\n\t\t\\textrm{where }\n\t\tx &= \\textrm{ input value} \\\\\n\t\ty &= \\textrm{ true/ideal output value}\n\t\\end{align*}\n\t\\begin{easylist}\n\t\n\t\t&&& Using \\href{https://www.youtube.com/watch?v=IHZwWFHWa-w&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=2}{gradient descent} to find the optimal value (one update per weight):\n\t\t\\end{easylist}\n\t\t\\begin{align*}\n\t\t\t\\frac{\\partial E}{\\partial W_j}\n\t\t\t&= Err \\times \\frac{\\partial \\, Err}{\\partial W_j} \\\\\n\t\t\t&= Err \\times \\frac{\\partial}{\\partial W_j} \\left(y - g\\left(\\sum_{k=0}^n W_k x_k\\right)\\right) \\\\\n\t\t\t&= -Err \\times g'(in) \\times x_j\n\t\t\\end{align*}\n\t\t\\begin{easylist}\n\t\t&&& Given the error, update the weight by:\n\t\t\\end{easylist}\n\t\t\\begin{align*}\n\t\t\tW_j &\\leftarrow W_j + (\\alpha \\times Err \\times g'(in) \\times x_j) \\\\[1em]\n\t\t\t\\textrm{where }\n\t\t\t\\alpha\n\t\t\t&= \\textrm{ rate of learning correction}\n\t\t\\end{align*}\n\t\t\\begin{easylist}\n\t&& \\href{https://www.youtube.com/watch?v=Ilg3gGewQ5U&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=3}{\\textbf{Back propagation:}} Process of training a neural network by altering weights and biases to provide the intended results, using the outputs of training data\n\t\t&&& 
Involves \\href{https://www.youtube.com/watch?v=tIeHLnjs5U8&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=4}{calculus}\n\t\\[\n\t\t\\delta_{out} = (y - a_{out}) \\cdot g'(in_{out})\n\t\\]\n\n& \\textbf{Single-layer perceptron:} Neural network consisting of a single neuron\n\t&& \\textbf{Linearly separable:} Property of a multidimensional dataset which can be classified using a single linear function\n\t\t&&& A function can be represented by a single-layer perceptron if and only if its data is linearly separable\n\n& \\textbf{Multi-layer perceptron:} Neural network consisting of sets of multiple interconnected neurons where every node in each layer depends on at least one node in the layer immediately prior, and does not depend on nodes in other layers\n\t&& \\textbf{Hidden layer:} Layer of neurons which is neither the input layer nor the output layer\n\t&& Acyclic digraph (i.e. has a topological order)\n\t&& Multi-layer perceptron with 2 hidden layers (3 total layers of processing) can emulate any possible function\n\t&& \\textbf{Multi-task learning:} Model of multi-layer neural network which has consistent hidden layer processing but has multiple unique outputs\n\t\t&&& E.g. Image content differentiator\n\n& \\href{https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/}{\\textbf{Convolutional layer:}} Layer component of a neural network which handles prestructured input by abstracting and processing components of the input, often to fewer inputs\n\t&& E.g. Image consisting of a 2-d grid of pixels, squares of which are taken and compressed\n\t&& \\textbf{Filter:} Structured subset of input values\n\t&& \\textbf{Pooling:} Combination of multiple inputs (often the depth of a 3D matrix) into a single output (e.g. summation, multiplication, averaging, maximization)\n& \\textbf{Recurrent network:} Network containing feedback connections (cycles); unrolled over time it forms a digraph structured as a 2D matrix, with a set of input nodes flowing to a set of output nodes\n\t&& E.g. Conversational bot\n\n\\end{easylist}\n\\clearpage\n", "meta": {"hexsha": "c677a59a4d4a3ebc268fdfbfbcb64238a1b8b6bf", "size": 9215, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/machine-learning.tex", "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_issues_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/machine-learning.tex", "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmpt-310-artificial-intelligence-survey/course-overview/tex/machine-learning.tex", "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "avg_line_length": 51.1944444444, "max_line_length": 285, "alphanum_fraction": 0.731524688, "num_tokens": 2730, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.8633916099737806, "lm_q2_score": 0.6688802537704064, "lm_q1q2_score": 0.5775055991825021}}
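As a concrete illustration of the entropy and information-gain formulas in the decision-tree notes above, here is a small Python sketch (the function names and sample counts are illustrative, not from the course):

\begin{lstlisting}
import math

def entropy(probs):
    # H(p_1, ..., p_n) = -sum_i p_i log2 p_i, taking 0*log(0) = 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0
print(entropy([1/3, 2/3]))   # ~0.918
print(entropy([0.0, 1.0]))   # 0.0

def information_gain(root_counts, children_counts):
    # H(root) - sum over children of (samples of child / samples of root) * H(child)
    total = sum(root_counts)
    h_root = entropy([c / total for c in root_counts])
    h_children = sum(
        (sum(child) / total) * entropy([c / sum(child) for c in child])
        for child in children_counts
    )
    return h_root - h_children

# Splitting 12 samples (6 positive, 6 negative) into a pure and a mixed child:
print(information_gain([6, 6], [[4, 0], [2, 6]]))  # ~0.459
\end{lstlisting}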
{"text": "\\section{Model}\n\\label{sec:model}\n\n\\subsection{Distribution network}\n\nIn this section, we present the model we used to solve the problem of ANM in DSO grid.\nIn particular, we will show how this problem can be cast as a MDP and how RL can be included when the distribution of the uncertainty is unknown and we will highlight the modeling assumptions used.\n\nThe electrical network can be mathematically modeled as a graph whose edges represent the electrical lines and whose nodes are locations in the network where power can be consumed or injected.\nIn a typical distribution network, the nodes correspond to households or small factories that behave as prosumers.\nIn this work, we will make the assumption that the graph is a tree as it is the case of most distribution networks.\nLet $\\mathcal{N}$ be the set of nodes and $\\mathcal{E}$ be the set of edges.\nLet $\\mathcal{G}$ be the graph representing the network, i.e. $\\mathcal{G} = (\\mathcal{N}, \\mathcal{E})$.\n\nWe represent every node by an integer in $\\{0, \\dots, n\\}$ where $n = |\\mathcal{N}|-1$ is the cardinality of set $\\mathcal{N}$.\nThe root of the tree is the node that connects the distribution network to the transmission network and is the node $0$ in our model and we denote by $\\mathcal{N}_+ = \\mathcal{N} \\backslash \\{0\\}$.\nEach node $i \\in \\mathcal{N}_+$ has a unique ancestor denoted by $A_i$ and each node $i \\in \\mathcal{N}$ has a set of children denoted by $\\mathcal{C}_i$.\nWe choose the orientation of the lines from being from $i$ to $A_i$ so that we can unambiguously represent each line by its origin node.\nWe thus have $\\mathcal{E} = \\{1, \\dots, n\\}$.\nWe will define the following nodal and branch quantities :\n\nFor each node $i \\in \\mathcal{N}$, we define :\n\\begin{itemize}\n  \\item $v_i$ as the square of the norm of the complex voltage at that node,\n  \\item $s_i = p_i + \\mathbf{i} q_i$ as complex net power injection (the net injection being the power consumed minus the power produced)\n  \\item $d_i$ is the active net power injection demanded at node $i$, i.e. 
the power that the agent at that node wants to reinject in the network.\n  It is thus different from the power that will actually be seen by the network (that is, $p_{i}$), as some curtailment mechanisms can take place.\n  This will be further discussed later.\n  It is positive when the power is injected and negative if it is retrieved.\n\\end{itemize}\n\nFor each line $i \\in \\mathcal{E}$, we define:\n\\begin{itemize}\n  \\item $z_i = r_i + \\mathbf{i}x_i$ as the complex impedance,\n  \\item $l_i$ as the square of the norm of the complex current,\n  \\item $S_i = P_i + \\mathbf{i}Q_i$ as the sending-end complex power, where $P_i$ denotes the active power and $Q_i$ the reactive power.\n\\end{itemize}\n\nA schematic representation of a line together with the notations we use is given in Figure~\\ref{fig:line}.\n\n\\begin{figure}\n  \\begin{center}\n    \\begin{tikzpicture}[scale=8]\n        \\draw [->] (0,0.7) -- (0,0.53);\n        \\draw [->] (0.8,0.7) -- (0.8,0.53);\n        \\draw [thick] (0,0.5) -- (0.8, 0.5);\n        \\draw [<-] (0.25, 0.45) -- (0.55, 0.45);\n        \\node [left] at (0,0.7) {$s_{A_i}$};\n        \\node [right] at (0.8,0.7) {$s_i$};\n        \\node [below, left] at (0,0.5) {$v_{A_i}$};\n        \\node [below, right] at (0.8,0.5) {$v_i$};\n        \\node [above] at (0.4, 0.5) {$z_i$};\n        \\node [below] at (0.3, 0.45) {$l_i$};\n        \\node [below] at (0.5, 0.45) {$S_i$};\n    \\end{tikzpicture}\n  \\caption{Schematic representation of a line.}\n  \\label{fig:line}\n  \\end{center}\n\\end{figure}\n\nWe consider the problem in a multi-time-step framework.\nThis means that the variables are defined at each time step.\nWe consider a horizon of $T$ time steps.\nThe set of time steps is denoted by $\\mathcal{T}$ and each one is represented by a positive integer, i.e. 
$\\mathcal{T} = \\{1, \\dots, T\\}$.\n\nMoreover, we consider the context of a distribution system with a high level of renewable energy production.\nWe consider that each house has a PV installation and that the overproduction is reinjected in the network.\nTherefore, the net power injection at each node can be decomposed into a power produced $p_{i,t}^p$ and a power consumed $p_{i,t}^c$ such that\n\n\\[ p_{i,t} = p_{i,t}^p - p_{i,t}^c \\]\n\n\\subsection{Power flow equations}\n\nFor a radial distribution network, the equations that link all nodal and branch electrical quantities by taking \\emph{Kirchhoff}'s laws into account are called the \\emph{Power Flow equations}.\nThey can be written as~\\cite{Farivar_Relax1}\n\n\\begin{align}\n  & 0  =  p_{0,t} + \\sum_{j\\in \\mathcal{C}_0} \\Big(P_{j,t}-r_{j}l_{j,t} \\Big), \\hspace{0.5cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF1}\\\\\n  & 0   =  q_{0,t} + \\sum_{j\\in \\mathcal{C}_0} \\Big( Q_{j,t}-x_{j}l_{j,t} \\Big), \\hspace{0.5cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF2}\\\\\n  & v_{\\mathcal{A}_i,t}  =   v_{i,t} -2(r_iP_{i,t} + x_iQ_{i,t}) + (r_i^2 + x_i^2)l_{i,t}, \\\\\n  & \\hspace{0.5cm} \\forall i \\in \\mathcal{N}_{+}, \\hspace{0.2cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF3}\\\\\n  & P_{i,t} =  p_{i,t} + \\sum_{j\\in \\mathcal{C}_i} \\Big( P_{j,t}-r_{j}l_{j,t} \\Big), \\hspace{0.5cm} \\forall i \\in \\mathcal{N}_{+}, \\hspace{0.2cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF4}\\\\\n  & Q_{i,t}  =  q_{i,t} + \\sum_{j\\in \\mathcal{C}_i} \\Big( Q_{j,t}-x_{j}l_{j,t} \\Big), \\hspace{0.5cm} \\forall i \\in \\mathcal{N}_{+}, \\hspace{0.2cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF5}\\\\\n  & v_{i,t}l_{i,t} = P_{i,t}^2 + Q_{i,t}^2, \\hspace{0.5cm} \\forall i \\in \\mathcal{N}_{+}, \\hspace{0.2cm} \\forall{t} \\in \\mathcal{T} \\label{ACOPF:quad}\n\\end{align}\n\nWith these equations, the power injection at each node is sufficient to obtain the values of all other electrical quantities.\n\n\\subsection{Uncertainty}\n\nThese power injections, however, are subject to uncertainty.\nThey depend on the consumption and the production at each node.\nWe consider that forecast values, denoted by $\\tilde{p}_{i,t}^p$ and $\\tilde{p}_{i,t}^c$, are available to the DSO, and the uncertainty can thus be modeled as the deviation from these forecast values.\nLet us denote by $\\delta_t^p$ and $\\delta_t^c$ the deviation for, respectively, the production and the consumption at time step $t$.\nThese deviations are common to all nodes.\nThis is motivated by the fact that this uncertainty usually stems from external factors such as solar irradiance, temperature, etc.\nWhen the uncertain deviations are known, the power injections can be written as\n\\[\n   p_{i,t}^p = \\tilde{p}_{i,t}^p + \\delta_t^p\\tilde{p}_{i,t}^p\n  \\]\nand\n\\[\n   p_{i,t}^c = \\tilde{p}_{i,t}^c + \\delta_t^c\\tilde{p}_{i,t}^c\n  \\]\nWe consider the following discrete sample space $\\mathcal{P}$ for both types of deviation:\n\\[\n  \\mathcal{P} = \\{0.1, 0, -0.1\\}\n\\]\nwith an unknown probability distribution.\nThis means that we suppose that both the consumption and the production can either be $10\\%$ more than the forecast value, exactly the forecast value, or $10\\%$ less than the forecast value.\n\n\\subsection{Flexible loads}\nIn modern distribution networks, some loads are more flexible than others.\nFlexible loads can include small factories that sign contracts with the DSO allowing it to disconnect the factory from the network for some time.\nIn this work, we consider that 
there is a set $\\mathcal{F}$ of flexible loads.\nThey are either in the state \\emph{on} or \\emph{off}.\nFor these loads, we consider that there is no production, only consumption.\n\n\\subsection{Set of states}\n\nThe state of the system is made of the time step in which we are, the level of the deviation from the forecast values for both the consumption and the production, and the state of the flexible loads.\nThe state is represented by a vector of length $2+|\\mathcal{F}|$ where the first two elements correspond to the deviations of, respectively, the production and the consumption, and the $|\\mathcal{F}|$ last elements are the binary states of the flexible loads ($0$ or $1$).\nWe denote by $\\mathcal{S}$ the set of states and by $\\mathcal{S}_t$ the set of states for a fixed $t$.\nWe have that\n\\begin{align*}\n  \\mathcal{S}_t = \\{&(0.1, 0.1, 0, \\dots, 0), (0, 0.1,0, \\dots, 0), \\dots, \\\\\n                    &(-0.1, -0.1,1, \\dots, 1)\\} \\\\\n\\end{align*}\nNote that $|\\mathcal{S}_t| = 3^2 2^{|\\mathcal{F}|} = 9 \\cdot 2^{|\\mathcal{F}|}$ and that $\\mathcal{S}_t$ is independent of $t$.\n\n\\subsection{Control actions}\n\nThe actions the DSO has to control its network are the curtailment of all or part of the PV production at each node and the choice of whether or not to activate each flexible load, both at each time step.\nBy curtailing excessive solar production, the reliability of the network could be greatly increased.\nThis curtailment is also expressed as a percentage of the total injection seen by the network.\nWe denote by $\\overline{p}_{i,t}$ the curtailment decided for node $i \\in \\mathcal{N}$ and $t \\in \\mathcal{T}$ so that the total power injected after curtailment is written as\n\\[\n  p^{inj}_{i,t} = p_{i,t} - \\overline{p}_{i,t}p_{i,t}\n\\]\nWe suppose the following set of curtailment actions $\\mathcal{U}_{\\text{load}}$, independent of the state:\n\\[\n  \\mathcal{U}_{\\text{load}} = \\{0, 0.5, 1\\}\n\\]\nWe thus have $\\overline{p}_{i,t} \\in \\mathcal{U}_{\\text{load}}$ for every node and time step.\nThis means that we consider that the DSO can curtail either nothing, half of the production, or the entire production.\n\nThe set of flexible load actions, denoted by $\\mathcal{U}_{\\text{flex}}$, depends on the state of the flexible loads.\nIf the flexible load is on, we can either deactivate it or not.\nWhen it is off, we can only leave it off.\n\nFigure~\\ref{fig:model_diag} gives a simple diagram of how the states of uncertainty and the control actions should be seen.\n\n\\subsection{Cost}\nThe cost associated with an action in a certain state is made of the following three elements:\n\\begin{itemize}\n  \\item The cost of congestion. It is the cost associated with a power flow that would result in breaking the voltage and current limits.\n  \\item The cost of curtailment. 
It is the cost associated with throwing away free green energy.\n  \\item The cost of flexible load disconnection.\n\\end{itemize}\n\nWe model all three costs by piecewise-linear functions.\nLet us suppose that the following limits on voltage and current have to hold:\n\\begin{align*}\n  \\underline{v} \\le &v_{i,t} \\le \\overline{v}\\\\\n  &l_{i,t} \\le \\overline{l}\n\\end{align*}\n\nThe cost of congestion, which penalizes violations of these limits, is computed as\n\\begin{align*}\n  \\ccong &= C_c \\Big( \\sum_{i \\in \\mathcal{N}} (v_{i,t}-\\overline{v})_+ + (\\underline{v}-v_{i,t})_+  \\\\\n  &+ \\sum_{j \\in \\mathcal{N}_+} (l_{j,t}-\\overline{l})_+ \\Big)\n\\end{align*}\nwhile the cost of curtailment is computed as\n\\[\n  \\ccurt = \\sum_{i \\in \\mathcal{N},t \\in \\mathcal{T}} C_e(\\overline{p}_{i,t}p_{i,t})\n\\]\nand the cost of flexible load is\n\\[\n  \\cflex = \\sum_{d \\in \\mathcal{F},t \\in \\mathcal{T}} C_f(\\text{deact}_{d,t})\n\\]\nwhere $C_c$ is the cost of congestion (here, $8300\\si{\\euro\\per\\mega\\watt\\hour}$), $C_f$ is the cost of deactivating a flexible load (here, $1\\si{\\euro\\per\\mega\\watt\\hour}$), $C_e$ is the usual cost of energy, taken as $300\\si{\\euro\\per\\mega\\watt\\hour}$, and $(\\cdot)_+$ denotes the positive part function.\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[scale=0.43]{./img/model_diagram.png}\n  \\end{center}\n  \\caption{Schematic representation of the states and control actions.}\n  \\label{fig:model_diag}\n\\end{figure}\n", "meta": {"hexsha": "32ffd883e0be164f912c6906b17ebfaf1ffc5604", "size": 11374, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/model.tex", "max_stars_repo_name": "qlete/ANManagement", "max_stars_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/model.tex", "max_issues_repo_name": "qlete/ANManagement", "max_issues_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-05-16T10:53:59.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-21T12:13:01.000Z", "max_forks_repo_path": "report/model.tex", "max_forks_repo_name": "qlete/ANManagement", "max_forks_repo_head_hexsha": "04a6436fdd6c90bcb51c24e3e4ebdc003e450230", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.5497382199, "max_line_length": 327, "alphanum_fraction": 0.7015122209, "num_tokens": 3583, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916029436189, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.5775055944801657}}
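To make the cost structure above concrete, a minimal Python sketch of the three cost terms for a single time step (all variable names, limits and sample data are illustrative assumptions, not taken from the report):

\begin{lstlisting}
# Illustrative evaluation of the three cost terms (one time step).
C_c, C_e, C_f = 8300.0, 300.0, 1.0    # EUR/MWh: congestion, energy, flexibility
v_lo, v_hi, l_hi = 0.9, 1.1, 1.5      # assumed voltage and current limits

def pos(x):                           # positive part (.)_+
    return max(x, 0.0)

def congestion_cost(v, l):
    # Penalize voltages outside [v_lo, v_hi] and currents above l_hi.
    over_v = sum(pos(vi - v_hi) + pos(v_lo - vi) for vi in v)
    over_l = sum(pos(li - l_hi) for li in l)
    return C_c * (over_v + over_l)

def curtailment_cost(p, curt):
    # curt[i] is the curtailed fraction at node i; curtailed energy costs C_e.
    return C_e * sum(ci * pi for ci, pi in zip(curt, p))

def flexibility_cost(deactivations):
    # deactivations: number of flexible-load disconnections this step.
    return C_f * deactivations

print(congestion_cost([1.0, 1.15], [1.2, 1.6]))  # one voltage, one current violation
print(curtailment_cost([2.0, 3.0], [0.5, 0.0]))  # half of node 0's production cut
print(flexibility_cost(1))
\end{lstlisting}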
{"text": "\\chapter{Statistical Mechanics of Multi-Level Atoms and Molecules}\n\\label{app:multilevel_atoms}\n\nThis appendix provides a full mathematical treatment of the statistics of multi-level atoms and molecules out of thermodynamic equilibrium, including the effects of a background radiation field. This appendix is intended as a reference rather than a full derivation, and so at several points we assert results without proof. Full demonstrations of these results can be found in standard references such as \\citet{rybicki86a}, \\citet{shu91a}, or \\citet{draine11a}.\n\n\\section{Matter-Radiation Interaction}\n\nA general radiation field can be specified in terms of the radiation intensity $I(\\nu, \\mathbf{n})$ at any point in space $\\mathbf{x}$ and time $t$; here $\\nu$ is the frequency of radiation and $\\mathbf{n}$ is a unit vector specifying the direction of radiation propagation. The intensity specifies the amount of radiant energy per unit area per unit frequency per unit solid angle. An alternative representation of the radiation field, which is more useful when dealing with problems in statistical mechanics, is the photon occupation number, defined by\n\\begin{equation}\nn_\\gamma(\\nu, \\mathbf{n}) = \\frac{c^2}{2h\\nu^3} I(\\nu, \\mathbf{n}).\n\\end{equation}\nPhysically, the photon occupation number is the number of quanta (photons) in a particular mode, and is dimensionless. In local thermodynamic equilibrium (LTE) at temperature $T$, the radiation intensity in all directions $\\mathbf{n}$ is given by the Planck function\n\\begin{equation}\nI(\\nu,\\mathbf{n}) = B_\\nu(T) = \\frac{2h\\nu^3}{c^2}\\frac{1}{e^{h\\nu/k_B T} - 1}.\n\\end{equation}\nThe equivalent photon occupation number is\n\\begin{equation}\n\\label{eq:ngamma_LTE}\nn_{\\gamma,\\rm LTE}(\\nu,\\mathbf{n}) = \\frac{1}{e^{h\\nu/k_B T} - 1}.\n\\end{equation}\n\nFor non-relativistic problems, the rates at which photons are emitted or absorbed by atoms undergoing a particular quantum mechanical transition does not depend upon the direction of photon propagation, and thus it is convenient to average over the direction $\\mathbf{n}$. We define the directionally-averaged photon occupation number by\n\\begin{equation}\n\\langle n_\\gamma\\rangle(\\nu) = \\frac{1}{4\\pi} \\int n_\\gamma(\\nu, \\mathbf{n}) \\, d\\Omega,\n\\end{equation}\nwhere the integral is over all directions $\\mathbf{n}$.\n\nNow consider a particle of species $X$ with two quantum states that we will denote $u$ and $\\ell$, with energies $E_u$ and $E_\\ell$, ordered so that $E_u > E_\\ell$. The states have degeneracies $g_u$ and $g_\\ell$, respectively. Particles in state $u$ can spontaneously emit photons and transition to state $\\ell$ with an $e$-folding timescale $A_{u\\ell}$. Formally, if $n_u$ is the number density of particles in state $u$, then\n\\begin{equation}\n\\left(\\frac{dn_u}{dt}\\right)_{\\rm spon.~emiss.} = -n_u A_{u\\ell}.\n\\end{equation}\nParticles in state $\\ell$ can also absorb photons at frequencies $\\nu$ near $\\nu_{u\\ell} = (E_u-E_\\ell)/h$ and transition to state $u$, and the absorption rate is proportional to $\\langle n_\\gamma\\rangle(\\nu_{u\\ell})$ and $n_\\ell$, where $n_\\ell$ is the number density of particles in state $\\ell$. 
Finally, the presence of photons with frequencies near $\\nu_{u\\ell}$ can cause stimulated emission, whereby particles in state $u$ emit a photon and transition to state $\\ell$; again, the rate at which this process occurs must be proportional to both $\\langle n_\\gamma\\rangle(\\nu_{u\\ell})$ and $n_u$. We write the rates of these two processes as $(dn_u/dt)_{\\rm abs.} \\propto n_\\ell \\langle n_\\gamma\\rangle(\\nu_{u\\ell})$ and $(dn_u/dt)_{\\rm stim.~emiss.} \\propto -n_u \\langle n_\\gamma\\rangle(\\nu_{u\\ell})$. Putting these processes together, the total rate of change of $n_u$ is given by\n\\begin{eqnarray}\n\\frac{dn_u}{dt} & = & \\left(\\frac{dn_u}{dt}\\right)_{\\rm spon.~emiss.} + \\left(\\frac{dn_u}{dt}\\right)_{\\rm stim.~emiss.} + \\left(\\frac{dn_u}{dt}\\right)_{\\rm abs.} \\\\\n& = & -n_u A_{u\\ell} - C_{u\\ell} n_u \\langle n_\\gamma\\rangle(\\nu_{u\\ell}) + C_{\\ell u} n_\\ell \\langle n_\\gamma\\rangle(\\nu_{u\\ell}),\n\\label{eq:dnudt}\n\\end{eqnarray}\nwhere the two constants of proportionality $C_{u\\ell}$ and $C_{\\ell u}$ are to be determined.\n\nConsider a region where the number density of particles is so low that collisions occur negligibly often. However, the particles can still be in LTE with the radiation field. Let $n_\\ell$ be the number density of particles in state $\\ell$. In LTE the values of $n_u$ and $n_\\ell$ must be related by the usual Boltzmann factor, so\n\\begin{equation}\nn_u = \\frac{g_u}{g_\\ell} e^{-h\\nu_{u\\ell}/k_B T} n_\\ell.\n\\end{equation}\nThe directionally-averaged photon occupation number must take on its LTE value\n\\begin{equation}\n\\label{eq:ngamma_avg_LTE}\n\\langle n_\\gamma\\rangle(\\nu) = \\frac{1}{e^{h\\nu/k_B T} - 1}.\n\\end{equation}\nInserting these values of $n_u$ and $\\langle n_\\gamma\\rangle$ into equation (\\ref{eq:dnudt}), and noting that we must have $dn_u/dt = 0$ for a system in LTE, we have\n\\begin{equation}\n-\\frac{g_u}{g_\\ell} e^{-h\\nu_{u\\ell}/k_B T} \\left(A_{u\\ell} + \\frac{C_{u\\ell}}{e^{h\\nu/k_B T} - 1}\\right) + \\frac{C_{\\ell u} }{e^{h\\nu_{u\\ell}/k_B T} - 1} = 0.\n\\end{equation}\nFor temperatures $T$ such that $h\\nu_{u\\ell} \\ll k_B T$, all the exponential terms approach unity, and thus the two terms proportional to $C_{u\\ell}$ and $C_{\\ell u}$ are far larger than the term proportional to $A_{u\\ell}$. Dropping this term, we immediately see that the equation can be satisfied only if\n\\begin{equation}\nC_{\\ell u} = \\frac{g_u}{g_\\ell} C_{u\\ell}.\n\\end{equation} Conversely, for temperatures $T$ such that $h\\nu_{u\\ell} \\gg k_B T$, the terms in the exponentials are large. We can therefore drop the $-1$ terms in the denominators, and neglect $C_{u\\ell}/e^{h\\nu_{u\\ell}/k_B T}$ in comparison to $A_{u\\ell}$. Doing so, we immediately obtain\n\\begin{equation}\nC_{\\ell u} = \\frac{g_u}{g_\\ell} A_{u\\ell}.\n\\end{equation}\n\nInserting these results into our expressions for the rates of stimulated emission and absorption, we finally have\n\\begin{eqnarray}\n\\left(\\frac{dn_u}{dt}\\right)_{\\rm stim.~emiss.} & = & n_u \\langle n_\\gamma\\rangle(\\nu_{u\\ell}) A_{u\\ell} \\\\\n\\left(\\frac{dn_u}{dt}\\right)_{\\rm abs.} & = & \\frac{g_u}{g_\\ell} n_\\ell \\langle n_\\gamma\\rangle(\\nu_{u\\ell}) A_{u\\ell}.\n\\end{eqnarray}\n\n\\section{Statistical Equilibrium for Multi-Level Systems}\n\nNow let us consider some species with a series of possible quantum states. We number them $0, 1, 2, \\ldots$ in order of increasing energy, so state $0$ is the ground state. 
We denote the energy and degeneracy of state $i$ as $E_i$ and $g_i$ respectively. We write the energy difference between any two states as $E_{ij} = E_i - E_j$,  the corresponding frequency as $\\nu_{ij} = E_{ij}/h$, and we write the Einstein spontaneous emission coefficient for transitions from state $i$ to state $j$ as $A_{ij}$. The species of interest has number density $n$, and we let $n_i$ be the number density of that species in state $i$. Finally, the species of interest can undergo collisions with another species or with itself, and these can cause state transitions as well. We let $n_c$ be the number density of colliders, and we let $k_{ij}$ be the collision rate coefficient connecting any two states, so that the rate of collisionally-induced transitions from state $i$ to state $j$ is given by $n_i n_c k_{ij}$.\n\nGiven this setup, we can write out the rates of all processes that induce changes in the number density of any quantum state. Specifically, the rates of collisional transitions out of and into state $i$ are\n\\begin{eqnarray}\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm coll.~out} & = & -n_i n_c \\sum_j k_{ij} \\\\\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm coll.~in} & = & n_c \\sum_j n_j k_{ji}.\n\\end{eqnarray}\nHere the first expression is a sum over the rate of collisional transitions from state $i$ to all other states, while the second is a sum over the rate of collisional transitions from all other states to state $i$. By convention we take $k_{ii} = 0$, i.e., we set the rate of collisional transitions from a state to itself to zero. The corresponding rates of transition out of and into state $i$ via spontaneous emission are\n\\begin{eqnarray}\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm spon.~emiss.~out} & = & -n_i \\sum_j A_{ij} \\\\\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm spon.~emiss.~in} & = & \\sum_j n_j A_{ji},\n\\end{eqnarray}\nwhere we adopt the convention that $A_{ij} = 0$ for $i \\leq j$, i.e., the spontaneous transition rate from a lower energy state to a higher energy one is zero. Finally, the expressions for stimulated emission- and absorption-induced transitions are\n\\begin{eqnarray}\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm stim.~emiss.~out} & = & -n_i \\sum_j A_{ij} \\langle n_{\\gamma,ji}\\rangle \\\\\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm stim.~emiss.~in} & = & \\sum_j n_j A_{ji} \\langle n_{\\gamma,ji}\\rangle \\\\\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm abs.~out} & = & -n_i \\sum_j \\frac{g_j}{g_i} A_{ij} \\langle n_{\\gamma,ij}\\rangle \\\\\n\\left(\\frac{dn_i}{dt}\\right)_{\\rm abs.~in} & = & \\sum_j \\frac{g_i}{g_j} n_j A_{ji} \\langle n_{\\gamma,ij}\\rangle,\n\\end{eqnarray}\nwhere for convenience we have introduced the shorthand $\\langle n_{\\gamma,ij}\\rangle \\equiv \\langle n_{\\gamma}\\rangle(\\nu_{ij})$. 
Note that, per our convention that $A_{ij}$ is non-zero only for $i > j$, the terms in the sums for stimulated emission are non-zero only for states $j > i$, while the terms in the sums for absorption are non-zero only for states $j < i$.\n\nCombining all of the above expressions, we can write out the full rate of change for the number density of particles in each state $i$ as\n\\begin{eqnarray}\n\\frac{dn_i}{dt} & = & \\sum_j n_j\\left[n_c k_{ji} + \\left(1+\\langle n_{\\gamma,ji}\\rangle\\right) A_{ji}\\right] +\n \\sum_j n_j \\frac{g_i}{g_j} \\langle n_{\\gamma,ij}\\rangle A_{ij}\n\\nonumber \\\\\n& & \\qquad {} - n_i \\sum_j \\left[n_c k_{ij} + \\left(1+\\langle n_{\\gamma,ij}\\rangle\\right) A_{ij}\\right] \n\\nonumber \\\\\n& & \\qquad {} -\n n_i \\sum_j \\frac{g_j}{g_i} \\langle n_{\\gamma,ji}\\rangle A_{ji}.\n\\label{eq:stat_eq}\n\\end{eqnarray}\nIf the system is in statistical equilibrium (but not necessarily LTE), then $dn_i/dt = 0$ for all states $i$. In this case the set of equations (\\ref{eq:stat_eq}) represents a set of linear equations to be solved for the unknown number densities $n_i$. With some algebraic manipulation, one can express this system as a matrix equation\n\\begin{equation}\n\\mathbf{M} \\cdot \\mathbf{n} = \\mathbf{n},\n\\end{equation}\nwhere $\\mathbf{n} = (n_0, n_1, n_2, \\ldots)$ is the vector of number densities, and the matrix $\\mathbf{M}$ has elements\n\\begin{equation}\nM_{ij} = \\frac{n_c k_{ji} + \\left(1 + \\langle n_{\\gamma,ji}\\rangle\\right) A_{ji} + \\frac{g_i}{g_j} \\langle n_{\\gamma,ij}\\rangle A_{ij}}\n{ \\sum_\\ell \\left[n_c k_{i\\ell} + \\left(1 + \\langle n_{\\gamma,i\\ell}\\rangle\\right) A_{i\\ell} + \\frac{g_\\ell}{g_i} \\langle n_{\\gamma,\\ell i}\\rangle A_{\\ell i}\\right] }.\n\\end{equation}\nThe matrix $\\mathbf{M}$ is therefore specified entirely in terms of the known rate coefficients, degeneracies, and radiation fields, and the problem of finding the level populations $\\mathbf{n}$ therefore reduces to that of finding the eigenvector of $\\mathbf{M}$ that has an eigenvalue of unity.\n\n\\section{Critical Densities for Multi-Level Systems}\n\nChapter \\ref{ch:obscold} gives a derivation of the critical density for two-level systems. Armed with the formalism of the previous section, we can generalize this to many-level systems. Consider some level $i$ which has the property that it is populated primarily from below, meaning that transitions into the state via collisional excitation or radiative absorption from lower levels occur much more often than transitions into the state via radiative decays or collisional de-excitations of higher levels, or transitions out of the state to higher levels via collisions or absorptions. 
In this case, the time rate of change of the level population reduces to\n\\begin{eqnarray}\n\\frac{dn_i}{dt} & = & \\sum_{j<i} n_j n_c k_{ji} + \\sum_{j<i} n_j \\frac{g_i}{g_j} \\langle n_{\\gamma,ij}\\rangle A_{ij}\n\\nonumber \\\\\n& & {} - n_i \\sum_{j<i} \\left[n_c k_{ij} + \\left(1+\\langle n_{\\gamma,ij}\\rangle\\right)A_{ij}\\right].\n\\end{eqnarray}\nHere the first term describes collisional excitation into state $i$ from lower levels, the second describes the rate of radiative excitation into state $i$ from lower levels, and the final term describes depopulation of state $i$ via collisions, spontaneous emission, and stimulated emission.\n\nIf the system is in steady state, then $dn_i/dt = 0$, and we have\n\\begin{equation}\n\\label{eq:nsteady}\nn_i = \\frac{\\sum_{j<i} n_j n_c k_{ji} + \\sum_{j<i} n_j \\frac{g_i}{g_j} \\langle n_{\\gamma,ij}\\rangle A_{ij}}{\\sum_{j<i} \\left[n_c k_{ij} + \\left(1 + \\langle n_{\\gamma,ij}\\rangle\\right)A_{ij}\\right]}.\n\\end{equation}\nIn analogy with the case of a two-level system, we now define the critical density for state $i$ via\n\\begin{equation}\nn_{{\\rm crit},i} = \\frac{\\sum_{j<i} \\left(1 + \\langle n_{\\gamma,ij}\\rangle\\right) A_{ij}}{\\sum_{j<i} k_{ij}},\n\\end{equation}\ni.e., the critical density is the rate of radiative de-excitation divided by the rate of collisional de-excitation. The sole differences between this and the two-level critical density defined by equation (\\ref{eq:ncrit}) are that this expression sums over all states into which radiative and collisional de-excitation can occur, and that it contains an extra factor of $ \\left(1 + \\langle n_{\\gamma,ij}\\rangle\\right)$ in order to properly account for enhancements in the radiative de-excitation rate due to stimulated emission.\n\nSubstituting this definition of $n_{{\\rm crit},i}$ into equation (\\ref{eq:nsteady}) for the steady state population gives\n\\begin{equation}\nn_i = \\left(\\frac{n_c}{n_c+n_{{\\rm crit},i}}\\right) \\frac{\\sum_{j<i} n_j k_{ji}}{\\sum_{j<i} k_{ij}}\n+ \\left(\\frac{n_{{\\rm crit},i}}{n_c+n_{{\\rm crit},i}}\\right) \\frac{\\sum_{j<i} n_j \\frac{g_i}{g_j} \\langle n_{\\gamma,ij}\\rangle A_{ij}}{\\sum_{j<i} \\left(1+\\langle n_{\\gamma,ij}\\rangle\\right) A_{ij}}.\n\\end{equation}\nExamining this expression, one can see that the generalized $n_{{\\rm crit},i}$ plays much the same role as $n_{\\rm crit}$ for a two-level system. In the limit $n_c \\gg n_{{\\rm crit},i}$, the first term dominates and the second is negligible. In this case the level population is simply set by collisional effects, and radiative effects become irrelevant. Given the relationships between the various collision rate coefficients $k_{ij}$ (c.f. equation \\ref{eq:detailed_balance}), this implies that the level population goes to the usual Boltzmann distribution at the gas temperature $T$. Conversely, if $n_c \\ll n_{{\\rm crit},i}$, the first term is negligible and the second one dominates, so the level population is determined solely by the radiation field. In the absence of an external radiation field (i.e., $\\langle n_{\\gamma,ij}\\rangle \\rightarrow 0$), level $i$ becomes depopulated and thus the excitation is sub-thermal. 
If the radiation field follows a blackbody distribution (i.e., $\\langle n_{\\gamma,ij}\\rangle$ has the value given by equation \\ref{eq:ngamma_avg_LTE}), then one can show that the result is that the levels are populated following a Boltzmann distribution at the radiation field temperature.\n", "meta": {"hexsha": "6a97800bcd1c7a69e540f7fc6669bed8b0cc54bc", "size": 15193, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_a.tex", "max_stars_repo_name": "Open-Astrophysics-Bookshelf/star_formation_notes", "max_stars_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 67, "max_stars_repo_stars_event_min_datetime": "2015-05-05T22:43:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-02T02:02:57.000Z", "max_issues_repo_path": "chapters/appendix_a.tex", "max_issues_repo_name": "keflavich/star_formation_notes", "max_issues_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2015-05-31T17:15:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-31T02:07:47.000Z", "max_forks_repo_path": "chapters/appendix_a.tex", "max_forks_repo_name": "keflavich/star_formation_notes", "max_forks_repo_head_hexsha": "d1c8a10f84fc1676b492ddb4f3bd8b73455b5d07", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-05-22T17:47:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-08T15:58:05.000Z", "avg_line_length": 109.3021582734, "max_line_length": 1217, "alphanum_fraction": 0.731916014, "num_tokens": 4594, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.863391602943619, "lm_q2_score": 0.668880247169804, "lm_q1q2_score": 0.5775055887812611}}
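A minimal numerical sketch of the statistical-equilibrium solve described above, for a toy three-level system (all rate data are made up for illustration; the null-space formulation used here is equivalent to finding the unit-eigenvalue eigenvector of $\mathbf{M}$):

\begin{lstlisting}
import numpy as np

# Toy three-level system with illustrative (made-up) rate data.
g  = np.array([1.0, 3.0, 5.0])                 # degeneracies g_i
A  = np.array([[0.0,  0.0,  0.0],              # A[i][j]: spontaneous rate i -> j,
               [1e-4, 0.0,  0.0],              # non-zero only for i > j
               [1e-5, 1e-3, 0.0]])
k  = np.full((3, 3), 1e-11)                    # collision rate coefficients k[i][j]
np.fill_diagonal(k, 0.0)
n_c = 1e3                                      # collider number density
ng  = 0.05                                     # photon occupation (same for all lines)

# T[i][j]: total transition rate per particle from state j to state i.
T = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i == j:
            continue
        T[i, j] = n_c * k[j, i]                # collisional transitions j -> i
        if j > i:
            T[i, j] += (1.0 + ng) * A[j, i]    # spontaneous + stimulated emission
        else:
            T[i, j] += (g[i] / g[j]) * ng * A[i, j]  # absorption from lower state j

# Statistical equilibrium dn/dt = 0: (T - diag(total out-rates)) n = 0,
# with the normalization sum(n) = 1 replacing one redundant equation.
Q = T - np.diag(T.sum(axis=0))
Q[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
n = np.linalg.solve(Q, b)
print("fractional level populations:", n)
\end{lstlisting}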
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\n\\subsection{Pairwise Swap}\n\\label{q:pairswap}\n\\begin{question}\nConsider p processes (p is even), each process has 1 data element. The even processes s swap their element with their neighbour s+1 (= pair-wise swap).\n\\begin{enumerate}\n\t\\item Give a BSP algorithm.\n\t\\item What is the $h$-relation?\n\t\\item Suppose (non-BSP) 2-sided send and receive. When send transmits, hold execution until remote process receives. Then execute the communication and both processes continue. This is blocking pair-wise communication (as featured in MPI). Give the algorithm.\n\t\\item Executing a send-receive communication has start-up time $s$, and data is transfered with bandwidth $b$. Is $s < l$? Will the cost be higher than the BSP algorithm from 1.1?\n\\end{enumerate}\n\\end{question}\n\\begin{solution} The solution to the subquestions are given below.\n\\begin{enumerate}\n\t\\item The BSP program for one-to-all communication is given below.\n\\begin{lstlisting}\nvoid pairwiseSwap(){\n\tbsp_begin(P);\n\tbsp_push_reg(element);\n\t// Important: The effect of bsp_push_reg only takes effect at the next superstep\n\tbsp_sync(); \n\tint s = bsp_pid();\n\tif(s % 2 == 0){\n\t\tbsp_put(s+1, data, element, 0, sizeof(data));\n\t}else if{\n\t\tbsp_put(s-1, data, element, 0, sizeof(data));\n\t}\n\tbsp_sync();\n\tbsp_end();\n}\n\\end{lstlisting}\n\t\\item The $h$-relation is given by $h = max\\{h_s,h_r\\}$ \\cite[p.~5]{bisseling04}. In the previous case, a processor sends one element and receives one. Therefore, $h = 1$. The total cost is $T(h) = g + l$.\n\t\\item The non-BSP program for one-to-all communication is given below. We assume primitives \\texttt{send(message, dest)} and \\texttt{receive(message, sender)}. The data is not buffered.\n\\begin{lstlisting}\nvoid pairwiseSwap(){\n\tinit_comm(s,P);\n\n\tif(s % 2 == 0){\n\t\tsend(s+1, data);\n\t\treceive(data, sendproc);\n\t\tif(sendproc != s + 1)\n\t\t\tERROR\n\t}else if{\n\t\treceive(data, sendproc);\n\t\tsend(s-1, data;\n\t\tif(sendproc != s - 1)\n\t\t\tERROR\n\t}\n}\n\\end{lstlisting}\n\\item The message-passing cost is modeled as $T(n) = s + bn$. Because each processor sends one element and receives another, the total cost is given by $T = 2(s+b)$. Assuming the same cost per word ($g=b$), the BSP algorithm is faster unless $2(s+g) < g + l$ or $2s < l - g$.\n\\end{enumerate}\n\\end{solution}\n\n\\subsection{SpMV}\n\\label{q:spmv}\n\\begin{question}\n$A$ is a sparse $M \\times N$ matrix, $x$ and $y$ are vectors.\n\\begin{enumerate}\n\t\\item What is the sequential cost of $y=Ax$?\n\t\\item Assume $A$ is distributed row-wise in a 1D fashion. Which phase, if any, of the classic SpMV phases (fan-out, SpMV, fan-in) will disappear? What will be the new total cost? What will be the parallel overhead?\n\t\\item $H=(V,N)$ is a hypergraph of A. What hypergraph model would you use to distribute A using the previous 1D distribution type? Give definitions of $V$ and $N$ in that model.\n\t\\item Assume SpMV algorithm 4.5 as in the book. Use your definition of $H$ from the previous question to write a cost function that measures the number of gets and puts.\n\\end{enumerate}\n\\end{question}\n\\begin{solution} The solution to the subquestions are given below.\n\\begin{enumerate}\n\t\\item The sequential cost of $y=Ax$ is $T_{seq} = 2cm$ flops \\cite[p.~166]{bisseling04} where $c$ is the fixed number of nonzeroes in a row of A and $m$ the amount of rows. 
This holds because the data structures used make it possible to take only multiplications with nonzero entries into account.\n\t\\item The fan-in step will disappear, because if a processor holds an entire row it can calculate the partial sum $y_i$ of that row easily, without relying on any other partial computations \\cite[p.~176]{bisseling04}.  The total (worst case) cost of the unchanged algorithm is\n\t\\begin{equation}\n\t\tT_{MV} \\leq \\frac{2cn}{p} + n + 2 \\Big( 1 - \\frac{1}{p} \\Big) ng + 4l\n\t\\end{equation}\n\t$T_{MV}$ is the sum of the worst case costs of the supersteps:\n\t\\begin{equation}\n\t\tT_{(0)} = \\left(1 - \\frac{1}{p} \\right) ng + l \n\t\\end{equation}\n\t\\begin{equation}\n\t\tT_{(1)} = 2cn/p + l \n\t\\end{equation}\n\t\\begin{equation}\n\t\tT_{(2)} = \\left( 1-\\frac{1}{p} \\right)ng + l\n\t\\end{equation}\n\t\\begin{equation}\n\t\tT_{(3)} = n + l\n\t\\end{equation}\n\tThese equations are for an $n \\times n$ matrix \\cite[p.~178]{bisseling04}. Because there is only one partial sum per row with the given distribution, $T_{(3)}' = \\frac{n}{p} + l $. Omitting the fan-in step means that $T_{(2)}$ is not applicable. In the rest of the costs we also have to take into account that we are working with an $m \\times n$ matrix, which gives us:\n\t\t\\begin{equation}\n\t\t\tT_{(0)} = T_{(0)}' = \\left(1 - \\frac{1}{p} \\right) ng + l \n\t\t\\end{equation}\n\t\t\\begin{equation}\n\t\t\tT_{(1)}' = \\frac{2c\\mathbf{m}}{p} + l \n\t\t\\end{equation}\n\t\t\\begin{equation}\n\t\t\tT_{(3)}' = \\frac{n}{p} + l\n\t\t\\end{equation}\n\t\t\\begin{equation}\n\t\t\tT_{MV}' \\leq \\frac{2c\\mathbf{m}}{p} + \\frac{n}{p} + \\Big(1 - \\frac{1}{p} \\Big) ng + 3l\n\t\t\\end{equation}\n\t\t\tThe parallel overhead is defined as the difference between the normalized cost and the ideal value of 1. So the overhead here is \\cite[p.~141]{bisseling04}:\n\t\t\t\\begin{equation}\n\t\t\t\tO = \\frac{T_{MV}'}{T_{seq}/p} - 1 \\leq \\frac{\\frac{2cm}{p} + \\frac{n}{p} + \\Big(1 - \\frac{1}{p} \\Big) ng + 3l}{2cm/p} - 1\n\t\t\t\\end{equation}\n\t\\item ``A partitioning of a column-net model of A corresponds to a partitioning of the rows of A (a 1D row-wise distribution)'' \\cite{slides6}. 
The column-net model $H=(V,N)$ is defined as follows: let $I = \\{0,1,\\ldots,m-1\\}$ and $J = \\{0,1,\\ldots,n-1\\}$; then $I$ represents the rows, thus $V = I$, and $\\forall j \\in J$ we define a net $n_j \\in N$ with\n\t\\begin{equation}\n\t\tn_j = \\{i \\in I ~|~ a_{ij} \\neq 0\\}\n\t\\end{equation}\n\tThe hypergraph of the example $5 \\times 4$ matrix is $\\mathcal{H}=(\\mathcal{V} =\\{0,\\ldots,4\\},\\mathcal{N} = \\{\\{4\\},\\{0, 3\\},\\{1,4\\},\\{2\\}\\})$ and its graphical representation can be found in \\autoref{fig:hypergraph}.\n\n\t\\begin{equation}\n\t\tA_{5\\times4} = \\begin{bmatrix}\n\t\t0 & 3 & 0 & 0 \\\\\n\t\t0 & 0 & 1 & 4 \\\\\n\t\t0 & 0 & 0 & 0 \\\\\n\t\t0 & 5 & 0 & 0 \\\\\n\t\t9 & 0 & 2 & 0\n\t\t\\end{bmatrix}\n\t\\end{equation}\n\n\t\\begin{figure}[H]\n\t\\begin{tikzpicture}\n\t\t\\node[vertex,label=above:\\(v_0\\)] (v0) {};\n\t\t\\node[vertex,below of=v0,label=above:\\(v_1\\)] (v1) {};\n\t\t\\node[vertex,right of=v0,label=above:\\(v_2\\)] (v2) {};\n\t\t\\node[vertex,right of=v1,label=above:\\(v_3\\)] (v3) {};\n\t\t\\node[vertex,right of=v2,label=above:\\(v_4\\)] (v4) {};\n\t\t\\begin{pgfonlayer}{background}\n\t\t\t\\draw[edge,color=red] (v4) -- (v4);\n\t\t\t\\draw[edge,opacity=1,color=green] (v0) -- (v3);\n\t\t\t\\draw[edge,color=yellow] (v1) -- (v4);\n\t\t\t\\draw[edge,color=blue] (v2) -- (v2);\n\t\t\\end{pgfonlayer}\n\n\t\t\\node[elabel,color=red,label=right:\\(n_0\\)]  (n0) at (-3,0) {};\n\t\t\\node[elabel,below of=n0,color=green,label=right:\\(n_1\\)]  (n1) {};\n\t\t\\node[elabel,below of=n1,color=yellow,label=right:\\(n_2\\)]  (n2) {};\n\t\t\\node[elabel,below of=n2,color=blue,label=right:\\(n_3\\)]  (n3) {};\n\t\\end{tikzpicture}\n\t\\centering\n\t\\caption{The hypergraph of the $A_{5\\times4}$ matrix.}\n\t\\label{fig:hypergraph}\n\t\\end{figure}\n\tTo find the distribution model we still have to partition $\\mathcal{H}$ over the available processors. Since we are working with a column-net model, coarsening means combining similar rows. 
For this example we assume that there are 2 processors available, so we execute a 2-way multi-level partition.\n\t\\begin{equation}\n\t\\begin{multlined}\n\t\tA_{5\\times4} =\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t\\cdot & \\cdot & 1 & 1 \\\\\n\t\t\\cdot & \\cdot & \\cdot & \\cdot \\\\\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t1 & \\cdot & 1 & \\cdot\n\t\t\\end{bmatrix}\n\t\t\\xrightarrow[(r_0,r_3),(r_1,r_4)]{coarsen}\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t1 & \\cdot & 1 & 1 \\\\\n\t\t\\cdot & \\cdot & \\cdot & \\cdot \\\\\n\t\t\\end{bmatrix}\n\t\t\\xrightarrow[(r_0,r_2)]{coarsen}\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t1 & \\cdot & 1 & 1 \\\\\n\t\t\\end{bmatrix}\n\t\t\\\\\n\t\t\\xrightarrow[]{partition}\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t\\mathbf{1} & \\cdot & \\mathbf{1} & \\mathbf{1} \\\\\n\t\t\\end{bmatrix}\n\t\t\\xrightarrow[(r_0,r_2)]{uncoarsen}\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t\\mathbf{1} & \\cdot & \\mathbf{1} & \\mathbf{1} \\\\\n\t\t\\cdot & \\cdot & \\cdot & \\cdot \\\\\n\t\t\\end{bmatrix}\n\t\t\\xrightarrow[(r_0,r_3),(r_1,r_4)]{uncoarsen}\n\t\t\\begin{bmatrix}\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t\\cdot & \\cdot & \\mathbf{1} & \\mathbf{1} \\\\\n\t\t\\cdot & \\cdot & \\cdot & \\cdot \\\\\n\t\t\\cdot & 1 & \\cdot & \\cdot \\\\\n\t\t\\mathbf{1} & \\cdot & \\mathbf{1} & \\cdot\n\t\t\\end{bmatrix}\n\t\\end{multlined}\n\t\\end{equation}\n\tWe found the following 2-way row partition: $\\mathcal{P}_2 = \\{\\mathcal{V}_0 = \\{0,2,3\\}, \\mathcal{V}_1 = \\{1,4\\}\\}$. No refinements were needed.\n\t\\item\n\t\tThe connectivity $\\lambda_j$ of a net is given by the following formula.\n\t\t\\begin{equation}\n\t\t\t\\lambda_j = | \\{\\mathcal{V}_i \\in \\mathcal{P}_k |~ n_j \\cap \\mathcal{V}_i \\neq \\emptyset \\} | ~\\text{with}~n_j \\in \\mathcal{N}\n\t\t\\end{equation}\n\t\tHere the hypergraph $\\mathcal{H}=(\\mathcal{V},\\mathcal{N})$ is partitioned according to $\\mathcal{P}_k = \\{\\mathcal{V}_0, \\ldots, \\mathcal{V}_{k-1}\\}$, and $\\lambda_j$ counts the number of parts of the partition that net $n_j$ touches. If we apply this to our example we find\n\t\t\\begin{equation}\n\t\t\\begin{array}{llll}\n\t\t\t\\lambda_0  & = | \\{ \\mathcal{V}_i \\in \\mathcal{P}_k |~ n_0 = \\{4\\} \\cap \\mathcal{V}_i \\neq \\emptyset \\} | & = | \\{ \\mathcal{V}_1 \\} | & = 1 \\\\\n\t\t\t\\lambda_1  & = | \\{ \\mathcal{V}_i \\in \\mathcal{P}_k |~ n_1 = \\{0,3\\} \\cap \\mathcal{V}_i \\neq \\emptyset \\} | & = | \\{ \\mathcal{V}_0 \\} | & = 1 \\\\\n\t\t\t\\lambda_2  & = | \\{ \\mathcal{V}_i \\in \\mathcal{P}_k |~ n_2 = \\{1,4\\} \\cap \\mathcal{V}_i \\neq \\emptyset \\} | & = | \\{ \\mathcal{V}_1 \\} | & = 1 \\\\\n\t\t\t\\lambda_3  & = | \\{ \\mathcal{V}_i \\in \\mathcal{P}_k |~ n_3 = \\{2\\} \\cap \\mathcal{V}_i \\neq \\emptyset \\} | & = | \\{ \\mathcal{V}_0 \\} | & = 1 \\\\\n\t\t\\end{array}\n\t\t\\end{equation}\n\t\tThe connectivity however does not model the communication volume. 
To do so, the $(\\lambda-1)$-metric is introduced, which is used in the cost formula below.\n\t\t\\begin{equation}\n\t\t\tC = \\sum_{n_j \\in \\mathcal{N}} (\\lambda_j - 1) = 0\n\t\t\\end{equation}\n\t\tThis metric models the communication volume of the fan-out in the column-net model exactly.\n\\end{enumerate}\n\n\\end{solution}\n\n\\subsection{Odd-Even Transposition}\n\\label{q:oddeven}\n\\begin{question}\nArray $x$ has $n$ elements with $n$ a multiple of $p$. In odd-even transposition sort, the fundamental operation is compare-exchange (when each process has 1 element, $n = p$) or compare-split (each process has $n/p$ elements). Give for both cases:\n\\begin{enumerate}\n\t\\item A BSP algorithm.\n\t\\item Parallel execution time.\n\t\\item Parallel efficiency.\n\t\\item For $n/p$ elements: if $n$ increases (with a fixed $p$), what will happen to the parallel efficiency?\n\\end{enumerate}\n\n\\end{question}\n\\begin{solution} The solutions to the subquestions are given below.\n\\begin{enumerate}\n\t\\item A BSP implementation for the odd-even $n/p$ block transposition sort can be found in \\autoref{lst:oddevenblock}. For the special case in which each processor should have only 1 element we assume that there are at least as many processors as there are data elements.\n\t\t\\lstinputlisting[caption=\"BSP odd-even n/p block transposition sort.\",label=lst:oddevenblock]{\\codeSrc/oddevenblockbsp.c}\n\t\\item We calculate the cost for the different supersteps.\n\t\\begin{itemize}\n\t\t\\item \\textbf{Superstep (0)}: This superstep initializes and sorts the local array, which contains $n/p$ elements.\n\t\t\\begin{equation}\n\t\t\tT_0 = \\frac{n}{p} + \\frac{n}{p} \\cdot \\log(\\frac{n}{p}) + l\n\t\t\\end{equation}\n\t\t\\item \\textbf{Superstep (1)}: Each processor exchanges its local array with that of its neighbor.\n\t\t\\begin{equation}\n\t\t\tT_1 = 2 g + l\n\t\t\\end{equation}\n\t\t\\item \\textbf{Superstep (2)}: This superstep performs the compare-split operation. Here, two blocks of $n/p$ items are compared against each other.\n\t\t\\begin{equation}\n\t\t\tT_2 = \\frac{n}{p} + l\n\t\t\\end{equation}\n\t\\end{itemize}\n\tTherefore, the total cost is given by the sum of the superstep costs.\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\tT_p(n) & = T_0 + n \\cdot (T_1 + T_2) \\\\\n\t\t\t\t& = \\frac{n}{p} + \\frac{n}{p} \\cdot \\log(\\frac{n}{p}) +l + n \\cdot (2 g + l + \\frac{n}{p} + l) \\\\\n\t\t\t\t& = \\frac{n}{p} + \\frac{n}{p} \\cdot \\log(\\frac{n}{p}) +l + 2gn + \\frac{n}{p}n + 2ln \\\\\n\t\t\t\t& = n \\cdot \\left( \\frac{1}{p} + \\frac{1}{p} \\log(\\frac{n}{p}) + \\frac{n}{p} + 2g \\right) + \\left( 1 + 2n \\right) l\n\t\t\\end{split}\n\t\\end{equation}\n\t\\item The parallel efficiency is defined as\n\t\\begin{equation}\n\t\tE_p (n) = \\frac{T_\\text{seq}(n)}{p T_p(n)} = \\frac{S_p(n)}{p}\n\t\\end{equation}\n\tThe sequential time $T_\\text{seq}$ is given by the sequential odd-even transposition algorithm, which is $n^2$. 
The parallel time is given by the answer to the previous question.\n\t\\begin{equation}\n\t\tE_p(n) = \\frac{n^2}{p \\cdot T_p} = \\frac{n^2}{p \\cdot \\left( n \\cdot \\left( \\frac{1}{p} + \\frac{1}{p} \\log(\\frac{n}{p}) + \\frac{n}{p} + 2g \\right) + \\left( 1 + 2n \\right) l \\right) } \n\t\\end{equation}\n\t\\item Looking at the complexity of $E_p(n)$ for $n \\to \\infty$ we find: \n\t\t\\begin{equation}\n\t\t\tE_p(n) = \\frac{\\mathcal{O}(n^2)}{\\mathcal{O}(n^2 + n \\log(n))} \n\t\t\\end{equation}\n\tBecause the overhead terms in the denominator grow more slowly than the $n^2$ term, the parallel efficiency increases towards 1 as the problem size grows (for fixed $p$). \n\n\\end{enumerate}\n\\end{solution}\n\n\\subsection{Shared Memory}\n\\label{q:sm1}\n\\begin{question}\nWe have p threads running the same SPMD program on a shared memory computer. Will the following algorithm work? Describe any possible problems. Supply a working parallel algorithm.\n\\begin{lstlisting}[caption={SPMD program on a shared memory computer},label=lst:spmd]\ndouble a;\ndouble x[1000];\ndouble y[1000];\n\nvoid spmd(){\n\tfor (i=s; i<1000; i+=p)\n\t\ta = a + x[i]*y[i];\n}\n\\end{lstlisting}\n\n\\end{question}\n\\begin{solution}\nThere are three common pitfalls with non-BSP shared memory implementations: \\textbf{data races}, \\textbf{false sharing} and \\textbf{inefficient cache use}.\nThe given problem is an illustration of the first.\nA data race occurs when multiple processors access the same memory location concurrently and at least one of them is writing.\nThis leads to inconsistencies, and the resulting value is most likely incorrect.\n\\\\\\\\\nWe briefly discuss the other two pitfalls while giving the correct implementation.\nOne solution is storing the partial sums separately.\nUsing an array of $p$ elements however would lead to \\textbf{false sharing}.\nOn every iteration, the array would be marked dirty in the cache, even though the value change at index $i$ (by processor $i$) is logically independent of the change of value at index $j$ (by processor $j$).\nThe last pitfall occurs when we choose to extend the array so that each cache line is only accessed by a single processor.\nThis works, but leads to inefficient cache use because all processors would be accessing all cache lines.\nThe correct solution, which corresponds to the pseudo code in \\cite{slides7}, is given below.\n\n\\begin{lstlisting}[caption={SPMD program on a shared memory computer},label=lst:solspmd]\ndouble a[8*p];\ndouble x[1000];\ndouble y[1000];\nint n = 1000;\nvoid spmd(){\n\t// s is this thread's id, 0 <= s < p; thread s handles one contiguous block\n\tfor (i = s*ceil(n/p); i < (s+1)*ceil(n/p); i++)\n\t\ta[8*s] += x[i]*y[i];\n}\n\\end{lstlisting}\n\\end{solution}\n\\end{document}\n", "meta": {"hexsha": "56de1a618431c7b149fc7640a76ab439865fc006", "size": 15158, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "subfiles/20140117.tex", "max_stars_repo_name": "KULeuven-CS/Parallel-Computing", "max_stars_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-01-12T14:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-15T19:12:42.000Z", "max_issues_repo_path": "subfiles/20140117.tex", "max_issues_repo_name": "KULeuven-CS/Parallel-Computing", "max_issues_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"subfiles/20140117.tex", "max_forks_repo_name": "KULeuven-CS/Parallel-Computing", "max_forks_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2015-12-15T20:37:02.000Z", "max_forks_repo_forks_event_max_datetime": "2018-12-29T14:10:47.000Z", "avg_line_length": 48.7395498392, "max_line_length": 363, "alphanum_fraction": 0.6715265866, "num_tokens": 5269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8418256432832333, "lm_q1q2_score": 0.5774498520312641}}
{"text": "% ----------------------------------------------------------------------\n\\section{Poroelasticity with Infinitesimal Strain and No Faults}\n\nWe base this formulation for poroelsticity on Zheng et al. and\nDetournay and Cheng (1993). We assume a slightly compressible fluid\nthat completely saturates a porous solid, undergoing infinitesimal\nstrain.\n\nWe begin with the conservation of linear momentum, including inertia,\nborrowed from linear elasticity:\n\\begin{equation}\n    \\rho_s\\frac{\\partial^2 \\vec{u}}{\\partial t^2} = \\vec{f}(t) + \\nabla \\cdot \\tensor{\\sigma}(\\vec{u},p).\n\\end{equation}\nEnforcing mass balance of the fluid gives\n\\begin{gather}\n  \\frac{\\partial \\zeta(\\vec{u},p)}{\\partial t} + \\nabla \\cdot \\vec{q}(p) =\n  \\gamma(\\vec{x},t) \\text{ in } \\Omega, \\\\\n%\n  \\vec{q} \\cdot \\vec{n} = q_0(\\vec{x},t) \\text{ on }\\Gamma_q, \\\\\n%\n  p = p_0(\\vec{x},t) \\text{ on }\\Gamma_p,\n\\end{gather}\nwhere $\\zeta$ is the variation in fluid content, $\\vec{q}$ is the rate\nof fluid volume crossing a unit area of the porous solid, $\\gamma$ is\nthe rate of injected fluid per unit volume of the porous solid, $q_0$\nis the outward fluid velocity normal to the boundary $\\Gamma_q$, and\n$p_0$ is the fluid pressure on boundary $\\Gamma_p$.\n\nWe require the fluid flow to follow Darcy's law (Navier-Stokes\nequation neglecting inertial effects),\n\\begin{equation}\n  \\vec{q}(p) = -\\frac{\\tensor{k}}{\\mu_{f}}(\\nabla p - \\vec{f}_f),\n\\end{equation}\nwhere $\\tensor{k}$ is the intrinsic permeability, $\\mu_f$ is the viscosity of the\nfluid, $p$ is the fluid pressure, and $\\vec{f}_f$ is the body force\nin the fluid. If gravity is included in a problem, then usually\n$\\vec{f}_f = \\rho_f \\vec{g}$, where $\\rho_f$ is the density of the\nfluid and $\\vec{g}$ is the gravitational acceleration vector.\n\n\\subsection{Constitutive Behavior}\n\nWe assume linear elasticity for the solid phase, so the constitutive behavior can be expressed\nas\n\\begin{equation}\n  \\tensor{\\sigma}(\\vec{u},p) = \\tensor{C} : \\tensor{\\epsilon} - \\alpha p \\tensor{I},\n\\end{equation}\nwhere $\\tensor{\\sigma}$ is the stress tensor, $\\tensor{C}$ is the\ntensor of elasticity constants, $\\alpha$ is the Biot coefficient\n(effective stress coefficient), $\\tensor{\\epsilon}$ is the strain\ntensor, and $\\tensor{I}$ is the identity tensor.  For this case, we\nwill assume that the material properties are isotropic, resulting in\nthe following formulation for the stress tensor:\n\\begin{equation}\n    \\tensor{\\sigma}(\\vec{u},p) = \\tensor{C}:\\tensor{\\epsilon} - \\alpha p \\tensor{I}\n                                           = \\lambda \\tensor{I} \\epsilon_{v} + 2 \\mu - \\alpha \\tensor{I} p\n\\end{equation}\nwhere $\\lambda$ and $\\mu$ are Lam\\'e's parameters,\n$\\lambda = K_{d} - \\frac{2 \\mu}{3}$, $\\mu$ is the shear modulus, and\nthe volumetric strain is defined as\n$\\epsilon_{v} = \\nabla \\cdot \\vec{u}$.\n\nFor the constitutive behavior of the fluid, we use the volumetric\nstrain to couple the fluid-solid behavior,\n\\begin{gather}\n  \\zeta(\\vec{u},p) = \\alpha \\Tr({\\tensor{\\epsilon}}) + \\frac{p}{M}, \\\\\n%\n  \\frac{1}{M} = \\frac{\\alpha-\\phi}{K_s} + \\frac{\\phi}{K_f},\n\\end{gather}\nwhere $1/M$ is the specific storage coefficient at constant strain,\n$K_s$ is the bulk modulus of the solid, and $K_f$ is the bulk modulus\nof the fluid. 
\n\\begin{table}[htbp]\n  \\caption{Mathematical notation for poroelasticity with\n    infinitesimal strain.}\n  \\label{tab:notation:poroelasticity}\n  \\begin{tabular}{lcp{3.5in}}\n    \\toprule\n    {\\bf Category} & {\\bf Symbol} & {\\bf Description} \\\\\n    \\midrule\n    Unknowns           & $\\vec{u}$ & Displacement field \\\\\n                       & $\\vec{v}$ & Velocity field \\\\\n                       & $p$       & Pressure field (corresponds to pore fluid pressure) \\\\\n                       & $\\epsilon_{v}$ & Volumetric (trace) strain \\\\\n    \\hline\n    Derived quantities & $\\tensor{\\sigma}$ & Cauchy stress tensor \\\\\n                       & $\\tensor{\\epsilon}$ & Cauchy strain tensor \\\\\n                       & $\\zeta$ & Variation of fluid content (variation of fluid vol. per unit vol. of the porous medium), $\\alpha \\epsilon_{v} + \\frac{p}{M}$ \\\\\n                       & $\\rho_{b}$ & Bulk density, $\\left(1 - \\phi\\right) \\rho_{s} + \\phi \\rho_{f}$ \\\\\n                       & $\\vec{q}$ & Darcy flux, $-\\frac{\\tensor{k}}{\\mu_{f}} \\cdot \\left(\\nabla p - \\vec{f}_{f}\\right)$ \\\\\n                       & $M$ & Biot modulus, $\\left(\\frac{\\alpha - \\phi}{K_{s}} + \\frac{\\phi}{K_{f}}\\right)^{-1}$ \\\\\n    \\hline\n    Common constitutive parameters & $\\rho_{f}$ & Fluid density \\\\\n                       & $\\rho_{s}$ & Solid (matrix) density \\\\\n                       & $\\phi$ & Porosity \\\\\n                       & $\\tensor{k}$ & Permeability \\\\\n                       & $\\mu_{f}$ & Fluid viscosity \\\\\n                       & $K_{s}$ & Solid grain bulk modulus \\\\\n                       & $K_{f}$ & Fluid bulk modulus \\\\\n                       & $K_{d}$ & Drained bulk modulus \\\\\n                       & $\\alpha$ & Biot coefficient, $1 - \\frac{K_{d}}{K_{s}}$ \\\\\n    \\hline\n    Source terms       & $\\vec{f}$ & Body force per unit volume, for example: $\\rho_{b} \\vec{g}$ \\\\\n                       & $\\vec{f}_{f}$ & Fluid body force, for example: $\\rho_{f} \\vec{g}$ \\\\\n                       & $\\gamma$ & Source density; rate of injected fluid per unit volume of the porous solid \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\n\\subsection{Quasistatic}\n\nFor ease of solution in the quasistatic case, we introduce a third\nvariable in the form of volumetric strain ($\\epsilon_v$).  The\nstrong form of the problem may be expressed as\n\\begin{gather}\n% Solution\n\\vec{s}^{T} = \\left(\\vec{u} \\quad p \\quad \\epsilon_v\\right), \\\\\n% Elasticity\n\\vec{f}(\\vec{x},t) + \\nabla \\cdot \\tensor{\\sigma}(\\vec{u},p) = \\vec{0} \\text{ in } \\Omega, \\\\\n% Pressure\n\\frac{\\partial \\zeta(\\vec{u},p)}{\\partial t} - \\gamma(\\vec{x},t) + \\nabla \\cdot \\vec{q}(p) = 0 \\text{ in } \\Omega, \\\\\n% Vol. Strain\n\\nabla \\cdot \\vec{u} - \\epsilon_{v} = 0 \\text{ in } \\Omega, \\\\\n% Neumann traction\n\\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau}(\\vec{x},t) \\text{ on } \\Gamma_{\\tau}, \\\\\n% Dirichlet displacement\n\\vec{u} = \\vec{u}_0(\\vec{x}, t) \\text{ on } \\Gamma_{u}, \\\\\n% Neumann flow\n\\vec{q} \\cdot \\vec{n} = q_0(\\vec{x}, t) \\text{ on } \\Gamma_{q}, \\text{ and } \\\\\n% Dirichlet pressure\np = p_0(\\vec{x},t) \\text{ on } \\Gamma_{p}.\n\\end{gather}\nWe place all terms for the elasticity, pressure, and volumetric strain\nequations on the left-hand side, consistent with PETSc TS implicit\ntime stepping.\n\n%\nWe create the weak form by taking the dot product with the trial\nfunctions $\\trialvec[u]$, $\\trialscalar[p]$, and\n$\\trialscalar[\\epsilon_{v}]$ and\nintegrating over the domain:\n\\begin{gather}\n% Weak conservation of momentum\n\\int_\\Omega \\trialvec[u] \\cdot \\left( \\vec{f}(\\vec{x},t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma} (\\vec{u},p) \\right) \\, d\\Omega = 0, \\\\\n% Weak conservation of mass\n\\int_\\Omega  \\trialscalar[p] \\left( \\frac{\\partial \\zeta(\\vec{u},p)}{\\partial t} - \\gamma(\\vec{x},t) + \\nabla \\cdot \\vec{q}(p)\\right) \\, d\\Omega = 0,\\\\\n% Weak vol. strain\n\\int_{\\Omega} \\trialscalar[\\epsilon_{v}] \\cdot \\left( \\nabla \\cdot \\vec{u} - \\epsilon_v \\right) \\, d\\Omega = 0.\n\\end{gather}\n%\nApplying the divergence theorem to the first two equations and\nincorporating the Neumann boundary conditions yields\n\\begin{gather}\n% Weak conservation of momentum\n\\int_\\Omega \\trialvec[u] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[u] : -\\tensor{\\sigma}(\\vec{u},p_f) \\,\nd\\Omega + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma = 0, \\\\\n% Weak conservation of mass\n\\int_\\Omega  \\trialscalar[p] \\left( \\frac{\\partial \\zeta(\\vec{u},p_f)}{\\partial t} - \\gamma(\\vec{x},t)\\right)\n+ \\nabla \\trialscalar[p] \\cdot \\left(-\\vec{q}(p_f)\\right) \\, d\\Omega + \\int_{\\Gamma_q} \\trialscalar[p] q_0(\\vec{x},t)\\, d\\Gamma = 0, \\text{ and } \\\\\n% Weak vol. strain\n\\int_{\\Omega} \\trialscalar[\\epsilon_{v}] \\cdot \\left(\\nabla \\cdot \\vec{u} - \\epsilon_{v} \\right) d\\Omega = 0.\n\\end{gather}\n\n\\subsubsection{Residual Pointwise Functions}\n\nIdentifying $F(t,s,\\dot{s})$ and $G(t,s)$ we have\n\\begin{align}\n    % Displacement\n  F^u(t,s,\\dot{s}) &= \\int_\\Omega \\trialvec[u] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{f}^u_0}\n                     + \\nabla \\trialvec[u] : \\eqnannotate{-\\tensor{\\sigma}(\\vec{u},p_f)}{\\tensor{f}^u_1} \\, d\\Omega\n                     + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{f}^u_0} \\, d\\Gamma, \\\\\n% Pressure\n  F^p(t,s,\\dot{s}) &= \\int_\\Omega  \\trialscalar[p] \\left[\\eqnannotate{\\frac{\\partial \\zeta(\\vec{u},p_f)}{\\partial t} - \\gamma(\\vec{x},t)} {f^p_0}\\right]\n                     + \\nabla \\trialscalar[p] \\cdot \\eqnannotate{-\\vec{q}(p_f)}{\\vec{f}^p_1} \\, d\\Omega\n                     + \\int_{\\Gamma_q} \\trialscalar[p] (\\eqnannotate{q_0(\\vec{x},t)}{f^p_0}) \\, d\\Gamma, \\\\\n % Volumetric Strain\n  F^{\\epsilon_{v}}(t,s,\\dot{s}) &= \\int_{\\Omega} \\trialscalar[\\epsilon_{v}] \\cdot \\eqnannotate{\\left(\\nabla \\cdot \\vec{u} - \\epsilon_{v} \\right)}{f^{\\epsilon_{v}}_{0}} \\, d\\Omega. 
\\\\\n  G^u(t,s) &= 0, \\\\\n             G^p(t,s) &= 0, \\\\\n G^{\\epsilon_v} &= 0.\n\\end{align}\n\n\\subsubsection{Jacobian Pointwise Functions}\n\nThree field results in a potential nine Jacobian pointwise functions for the LHS:\n\n\\begin{align}\n%\n% JF_UU\n% Jf3uu\nJ_F^{uu} &= \\frac{\\partial F^u}{\\partial u} + t_{shift} \\frac{\\partial F^u}{\\partial \\dot{u}} = \\int_{\\Omega} \\nabla \\trialvec[u] : \\frac{\\partial}{\\partial u} (- \\sigma(\\vec{u},p,\\epsilon_{v})) \\\nd\\Omega = \\int_{\\Omega} \\nabla \\trialvec[u] : \\frac{\\partial}{\\partial u} (-(\\tensor{C}:\\tensor{\\varepsilon} -\\alpha p \\tensor{I})) \\ d\\Omega \\\\\n&= \\int_{\\Omega} \\nabla \\trialvec[u] : -\\tensor{C}: \\frac{1}{2} (\\nabla + \\nabla^T) \\basisvec[u] \\ d\\Omega = \\int_{\\Omega} \\trialscalar[u]_{i,k}\n\\eqnannotate{\\left(-C_{ikjl}\\right)}{J_{f3}^{uu}} \\basisscalar[u]_{j,l} \\ d\\Omega \\\\\n%\n% JF_UP\n% Jf2up\nJ_F^{up} &= \\frac{\\partial F^u}{\\partial p} + t_{shift} \\frac{\\partial F^u}{\\partial \\dot{p}} = \\int_{\\Omega} \\nabla \\trialvec[u] : \\frac{\\partial}{\\partial p}(-(\\tensor{C}:\\tensor{\\varepsilon} -\\alpha p \\tensor{I})) \\ d\\Omega =\n\\int_{\\Omega} \\trialscalar[u]_{i,j} \\eqnannotate{\\left(\\alpha \\delta_{ij}\\right)}{J_{f2}^{up}} \\basisscalar[p] \\ d\\Omega \\\\\n%\n% JF_UE\n% Jf2ue\nJ_F^{u \\epsilon_{v}} &= \\frac{\\partial F^u}{\\partial \\epsilon_{v}} + t_{shift} \\frac{\\partial F^u}{\\partial \\dot{\\epsilon_{v}}} = \\int_{\\Omega} \\nabla \\trialvec[u] : \\frac{\\partial}{\\partial \\epsilon_{v}}\n(-\\sigma(\\vec{u},p,\\epsilon_{v})) \\ d\\Omega = \\int_{\\Omega} \\nabla \\trialvec[u] :\n\\frac{\\partial}{\\partial \\epsilon_{v}} (-(\\tensor{C}:\\tensor{\\varepsilon} -\\alpha p \\tensor{I})) \\ d\\Omega \\\\\n&= \\int_{\\Omega} \\nabla \\trialvec[u] : \\frac{\\partial}{\\partial \\epsilon_{v}} \\\n\\left[-\\left(2 \\mu \\tensor{\\epsilon} + \\lambda \\tensor{I} \\epsilon_{v} - \\alpha \\tensor{I} p \\right) \\right] d\\Omega =\n\\int_{\\Omega} \\trialscalar[u]_{i,j} \\eqnannotate{\\left(-\\lambda \\delta_{ij} \\right)}{J_{f2}^{u \\epsilon_{v}}} \\basisscalar[\\epsilon_{v}] d\\Omega  \\\\\n%\n% JF_PU\n%\nJ_F^{pu} &= \\frac{\\partial F^p}{\\partial u} + t_{shift} \\frac{\\partial F^p}{\\partial \\dot{u}} = 0 \\\\\n%\n% JF_PP\n% Jf0pp\nJ_F^{pp} &= \\frac{\\partial F^p}{\\partial p} + t_{shift} \\frac{\\partial F^p}{\\partial \\dot{p}} =\n\\int_{\\Omega} \\nabla \\trialscalar[p] \\cdot \\frac{\\partial}{\\partial p} -\\left[-\\frac{\\tensor{k}}{\\mu_{f}} \\left(\\nabla p - \\vec{f} \\right) \\right] \\ d\\Omega  +\nt_{shift}\\int_{\\Omega} \\trialscalar[p] \\frac{\\partial}{\\partial \\dot{p}} \\left[\\alpha\\dot{\\epsilon}_{v} + \\frac{\\dot{p}}{M} - \\gamma\\left(\\vec{x},t\\right)\\right] \\ d\\Omega \\\\\n&= \\int_{\\Omega} \\nabla \\psi_{trial}^ p \\left(\\frac{\\tensor{k}}{\\mu_{f}} \\nabla \\cdot \\psi_{basis}^p \\right) \\ d\\Omega +\n \\int_{\\Omega} \\trialscalar[p] \\left(t_{shift} \\frac{1}{M}\\right) \\basisscalar[p] \\ d\\Omega \\\\\n&= \\int_{\\Omega} \\psi_{trial,k}^p \\eqnannotate{\\left(\\frac{\\tensor{k}}{\\mu_{f}} \\delta_{kl}\\right)}{J_{f3}^{pp}} \\psi_{basis,l}^p \\ d\\Omega +\n\\int_{\\Omega} \\trialscalar[p] \\eqnannotate{\\left(t_{shift} \\frac{1}{M}\\right)}{J_{f0}^{pp}} \\basisscalar[p] \\ d\\Omega \\\\\n%\n% JF_PE\n% Jf0pe\nJ_F^{p\\epsilon_{v}} &= \\frac{\\partial F^p}{\\partial \\epsilon_{v}} + t_{shift} \\frac{\\partial\nF^p}{\\partial \\dot{\\epsilon_{v}}} = \\int_{\\Omega} \\trialscalar[p] \\eqnannotate{\\left(t_{shift} 
\\alpha \\right)}{J_{f0}^{p\\epsilon_{v}}}\n\\basisscalar[\\epsilon_{v}] \\ d\\Omega \\\\\n%\n% JF_EU\n% Jf1eu\nJ_F^{\\epsilon_{v}u} &= \\frac{\\partial F^{\\epsilon_{v}}}{\\partial u} + t_{shift} \\frac{\\partial F^{\\epsilon_{v}}}{\\partial \\dot{u}} =\n\\int_{\\Omega} \\psi_{trial}^{\\epsilon_{v}} \\nabla \\cdot \\vec{\\psi}_{basis}^u \\ d\\Omega = \\int_{\\Omega}\n\\basisscalar[\\epsilon_{v}] \\eqnannotate{\\left(\\delta_{ij}\\right)}{J_{f1}^{\\epsilon_{v}u}}\n\\basisscalar[u]_{i,j} \\ d\\Omega\\\\\n%\n% JF_EP\n%\nJ_F^{\\epsilon_{v}p} &= \\frac{\\partial F^{\\epsilon_{v}}}{\\partial p} + t_{shift} \\frac{\\partial F^{\\epsilon_{v}}}{\\partial \\dot{p}} = 0 \\\\\n%\n% JF_EE\n%\nJ_F^{\\epsilon_{v}\\epsilon_{v}} &= \\frac{\\partial F^\\epsilon_{v}}{\\epsilon_{v}} + t_{shift} \\frac{\\partial F^{\\epsilon_{v}}}{\\partial \\dot{\\epsilon_{v}}} =\n\\int_{\\Omega} \\basisscalar[\\epsilon_{v}] \\eqnannotate{\\left(-1\\right)}{J_{f0}^{\\epsilon_{v}\\epsilon_{v}}} \\basisscalar[\\epsilon_{v}] \\ d\\Omega\n\\end{align}\n\n\\subsection{Dynamic}\n\nFor compatibility with PETSc TS algorithms, we want to turn the second\norder elasticity equation into two first order equations. We introduce\nvelocity as a unknown, $\\vec{v}=\\frac{\\partial u}{\\partial t}$, which\nleads to a slightly different three field problem,\n\\begin{gather}\n% Solution\n\\vec{s}^{T} = \\left(\\vec{u} \\quad p \\quad \\vec{v}\\right) \\\\\n% Displacement\n\\frac{\\partial \\vec{u}}{\\partial t} = \\vec{v} \\text{ in } \\Omega \\\\\n% Pressure\n\\frac{\\partial \\zeta(\\vec{u},p)}{\\partial t } - \\gamma(\\vec{x},t) + \\nabla \\cdot \\vec{q}(p) = 0 \\text{ in } \\Omega \\\\\n% Velocity\n\\rho_{b} \\frac{\\partial \\vec{v}}{\\partial t} = \\vec{f}(\\vec{x},t) + \\nabla \\cdot \\tensor{\\sigma}(\\vec{u},p) \\text{ in } \\Omega \\\\\n% Neumann traction\n\\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau}(\\vec{x},t) \\text{ on } \\Gamma_{\\tau} \\\\\n% Dirichlet displacement\n\\vec{u} = \\vec{u}_{0}(\\vec{x}, t) \\text{ on } \\Gamma_{u} \\\\\n% Neumann flow\n\\vec{q} \\cdot \\vec{n} = q_{0}(\\vec{x}, t) \\text{ on } \\Gamma_{q} \\\\\n% Dirichlet pressure\np = p_{0}(\\vec{x},t) \\text{ on } \\Gamma_{p}\n\\end{gather}\n\nFor compatibility with PETSc TS explicit time stepping algorithms, we\nneed the left hand side to be $F = (t,s,\\dot{s}) = \\dot{s}$. 
We\nreplace the variation of fluid content variable, $\\zeta$, with its\ndefinition in the conservation of fluid mass equation and solve for\nthe rate of change of pressure,\n\\begin{gather}\n    \\frac{\\partial}{\\partial t}\\left[\\alpha \\epsilon_{v} + \\frac{p}{M}\\right] - \\gamma\\left(\\vec{x},t\\right) + \\nabla \\cdot \\vec{q} = 0 \\\\\n    \\alpha \\dot{\\epsilon}_{v} + \\frac{\\dot{p}}{M} - \\gamma \\left(\\vec{x},t\\right) + \\nabla \\cdot \\vec{q} = 0 \\\\\n    \\frac{\\dot{p}}{M} = \\gamma \\left(\\vec{x},t \\right) - \\alpha \\dot{\\epsilon}_{v} -\\nabla \\cdot \\vec{q} \\\\\n    \\frac{\\dot{p}}{M} = \\gamma \\left(\\vec{x},t \\right) - \\alpha \\left( \\nabla \\cdot \\dot{\\vec{u}} \\right) -\\nabla \\cdot \\vec{q}.\n\\end{gather}\nWe write the volumetric strain in terms of displacement, because this\ndynamic formulation does not include the volumetric strain as an\nunknown.\n\nUsing trial functions $\\trialvec[u]$, $\\trialscalar[p]$, and $\\trialvec[v]$, and incorporating the\nNeumann boundary conditions, the weak form may be written as:\n\\begin{align}\n    % Displacement\n    \\int_{\\Omega} \\trialvec[u] \\cdot \\left( \\frac{\\partial \\vec{u}}{\\partial t} \\right)d \\Omega &= \\int_{\\Omega} \\trialvec[u] \\cdot \\left( \\vec{v} \\right) d \\Omega \\\\\n    % Pressure\n    \\int_{\\Omega} \\trialscalar[p] \\left( \\frac{1}{M}\\frac{\\partial p}{\\partial t} \\right) d\\Omega &=\n    \\int_{\\Omega} \\trialscalar[p] \\left[\\gamma(\\vec{x},t) - \\alpha \\left(\\nabla \\cdot \\dot{\\vec{u}}\\right) \\right]  + \\nabla \\trialscalar[p] \\cdot \\vec{q}(p) \\ d\\Omega +\n    \\int_{\\Gamma_q} \\trialscalar[p] (-q_0(\\vec{x},t)) \\ d\\Gamma, \\\\\n   % Velocity\n   \\int_\\Omega \\trialvec[v] \\cdot \\left( \\rho_{b} \\frac{\\partial\n   \\vec{v}}{\\partial t} \\right) \\,\n   d\\Omega &= \\int_\\Omega \\trialvec[v] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[v] :\n   -\\tensor{\\sigma} (\\vec{u},p_f) \\, d\\Omega + \\int_{\\Gamma_\\tau} \\trialvec[u]\n   \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma.\n\\end{align}\n\n\n\n\\subsubsection{Residual Pointwise Functions}\n\nWith explicit time stepping the PETSc TS assumes the LHS is $\\dot{s}$ , so we only need the RHS residual functions:\n\n\\begin{align}\n% Displacement\n  G^u(t,s) &= \\int_{\\Omega} \\trialvec[u] \\cdot \\eqnannotate{\\vec{v}}{\\vec{g}_0^u} d \\Omega, \\\\\n% Pressure\n  G^p(t,s) &= \\int_\\Omega \\trialscalar[p] \\eqnannotate{\\left(\\gamma(\\vec{x},t) - \\alpha (\\nabla \\cdot \\dot{\\vec{u}})\\right)}{g^p_0} + \\nabla \\trialscalar[p] \\cdot \\eqnannotate{\\vec{q}(p_f)}{\\vec{g}^p_1} \\, d\\Omega\n + \\int_{\\Gamma_q} \\trialscalar[p] (\\eqnannotate{-q_0(\\vec{x},t)}{g^p_0}) \\, d\\Gamma, \\\\\n % Velocity\n G^v(t,s) &= \\int_\\Omega \\trialvec[v] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{g}^v_0} + \\nabla \\trialvec[v] :\\eqnannotate{-\\tensor{\\sigma}(\\vec{u},p_f)}{\\tensor{g}^v_1} \\, d\\Omega + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{g}^v_0} \\, d\\Gamma.\n\\end{align}\n\n\\subsubsection{Jacobians Pointwise Functions}\n\nThese are the pointwise functions associated with $M_{u}$, $M_{p}$,\nand $M_{v}$ for computing the lumped LHS Jacobian. We premultiply the\nRHS residual function by the inverse of the lumped LHS Jacobian while\n$s_\\mathit{tshift}$ remains on the LHS with $\\dot{s}$. As a result,\nwe use LHS Jacobian pointwise functions, but set $s_\\mathit{tshift} = 1$. 
The\nLHS Jacobians are:\n\\begin{align}\n% Displacement\nM_{u} &= J_F^{uu} = \\frac{\\partial F^u}{\\partial u} + s_{tshift} \\frac{\\partial F^u}{\\partial \\dot{u}} =\n\\int_{\\Omega} \\trialscalar[u]_{i} \\eqnannotate{s_{tshift} \\delta_{ij}}{J^{uu}_{f0}} \\basisscalar[u]_{j} \\, d \\Omega \\\\\n% Pressure\nM_{p} &= J_F^{pp} = \\frac{\\partial F^p}{\\partial p} + s_{tshift} \\frac{\\partial F^p}{\\partial \\dot{p}} =\n\\int_{\\Omega} \\trialscalar[p] \\eqnannotate{\\left(s_{tshift} \\frac{1}{M}\\right)}{J_{f0}^{pp}} \\basisscalar[p] \\ d\\Omega \\\\\n% Velocity\nM_{v} &= J_F^{vv} = \\frac{\\partial F^v}{\\partial v} + s_{tshift} \\frac{\\partial F^v}{\\partial \\dot{v}} =\n\\int_{\\Omega} \\trialscalar[v]_{i}\\eqnannotate{\\rho_{b}(\\vec{x}) s_{tshift} \\delta_{ij}}{J^{vv}_{f0}} \\basisscalar[v]_{j} \\  d \\Omega\n\\end{align}\n", "meta": {"hexsha": "3cbd1c79066f183de28895eb2ce86efff6f2be81", "size": 18467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/userguide/governingeqns/poroelasticity.tex", "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z", "max_issues_repo_path": "doc/userguide/governingeqns/poroelasticity.tex", "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 277, "max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z", "max_forks_repo_path": "doc/userguide/governingeqns/poroelasticity.tex", "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z", "avg_line_length": 53.6831395349, "max_line_length": 284, "alphanum_fraction": 0.6253316727, "num_tokens": 6562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.841825635346563, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5774498411825049}}
{"text": "\\section{Transformations and Compositions}\\label{sec:Transformations}\r\n\r\n\\input{2-functions/2-2-1-transformations}\r\n\\input{2-functions/2-2-2-compositions}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for \\ref{sec:Transformations}}\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\nStarting with the graph of $\\ds y=\\sqrt{x}$, the graph of $\\ds y=1/x$, and the\r\ngraph of $\\ds y=\\sqrt{1-x^2}$ (the upper unit semicircle), sketch the\r\ngraph of each of the following functions:\r\n\\begin{multicols}{2}\r\n\\begin{enumerate}\r\n\t\\item\t$\\ds f(x)=\\sqrt{x-2}$\r\n\t\\item\t$\\ds f(x)=-1-1/(x+2)$\r\n\t\\item\t$\\ds f(x)=4+\\sqrt{x+2}$\r\n\t\\item\t$\\ds y=f(x)=x/(1-x)$\r\n\t\\item\t$\\ds y=f(x)=-\\sqrt{-x}$\r\n\t\\item\t$\\ds f(x)=2+\\sqrt{1-(x-1)^2}$\r\n\t\\item\t$\\ds f(x)=-4+\\sqrt{-(x-2)}$\r\n\t\\item\t$\\ds f(x)=2\\sqrt{1-(x/3)^2}$\r\n\t\\item\t$\\ds f(x)=1/(x+1)$\r\n\t\\item\t$\\ds f(x)=4+2\\sqrt{1-(x-5)^2/9}$\r\n\t\\item\t$\\ds f(x)=1+1/(x-1)$\r\n\t\\item\t$\\ds f(x)=\\sqrt{100-25(x-1)^2}+2$\r\n\\end{enumerate}\r\n\\end{multicols}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\nThe graph of $f(x)$ is shown below.\r\nSketch the graphs of the following functions.\r\n\\vadjust{\\rightline{%\r\n\\vbox to 0pt{\\vskip-5pt\\beginpicture\r\n\\normalgraphs\r\n\\setcoordinatesystem units <1truecm,1truecm>\r\n\\setplotarea x from 0 to 3.25, y from -1.5 to 2\r\n\\axis bottom shiftedto y=0 ticks numbered from 1 to 3 by 1 /\r\n\\axis left ticks numbered from -1 to 2 by 1 /\r\n\\setquadratic\r\n\\plot \r\n0.000 -1.285 0.054 -0.643 0.108 -0.088 0.162 0.387 0.217 0.788 \r\n0.271 1.120 0.325 1.391 0.379 1.604 0.433 1.766 0.488 1.882 \r\n0.542 1.956 0.596 1.993 0.650 1.998 0.704 1.976 0.758 1.929 \r\n0.812 1.862 0.867 1.779 0.921 1.684 0.975 1.578 1.029 1.467 \r\n1.083 1.351 1.138 1.235 1.192 1.120 1.246 1.008 1.300 0.903 \r\n1.354 0.805 1.408 0.716 1.462 0.637 1.517 0.571 1.571 0.517 \r\n1.625 0.476 1.679 0.450 1.733 0.438 1.788 0.441 1.842 0.458 \r\n1.896 0.490 1.950 0.536 2.004 0.595 2.058 0.666 2.112 0.749 \r\n2.167 0.841 2.221 0.943 2.275 1.051 2.329 1.164 2.383 1.279 \r\n2.438 1.396 2.492 1.510 2.546 1.620 2.600 1.722 2.654 1.813 \r\n2.708 1.890 2.762 1.949 2.817 1.987 2.871 2.000 2.925 1.983 \r\n2.979 1.932 3.033 1.842 3.088 1.710 3.142 1.528 3.196 1.294 \r\n3.250 1.000 /\r\n\\endpicture\\vskip0pt\\vss}\\hskip8cm}}\r\n\\vspace{1.5in}\r\n\r\n\\begin{multicols}{2}\r\n\\begin{enumerate}\r\n\t\\item\t$\\ds y=f(x-1)$\r\n\t\\item\t$\\ds y=1+f(x+2)$\r\n\t\\item\t$\\ds y=1+2f(x)$\r\n\t\\item\t$\\ds y=2f(3x)$\r\n\t\\item\t$\\ds y=2f(3(x-2))+1$\r\n\t\\item\t$\\ds y=(1/2)f(3x-3)$\r\n\t\\item\t$\\ds y=f(1+x/3)+2$\r\n\t\\item\t$\\ds y=|f(x)-2|$\r\n\\end{enumerate}\r\n\\end{multicols}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\nSuppose $f(x) = 3x-9$ and $\\ds g(x) = \\sqrt{x}$.  
What is the\r\ndomain of the composition $(g\\circ f)(x)$?\r\n\\begin{sol}\r\n$\\ds \\{x\\mid x\\ge3\\}$, $\\{x\\mid x\\ge0\\}$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n\\end{enumialphparenastyle}", "meta": {"hexsha": "f6a75f107eedc2095007c43457b4d076c3101002", "size": 2794, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2-functions/2-2-0-transformations.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2-functions/2-2-0-transformations.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2-functions/2-2-0-transformations.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.1149425287, "max_line_length": 79, "alphanum_fraction": 0.6034359341, "num_tokens": 1322, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228758499942, "lm_q2_score": 0.8723473846343394, "lm_q1q2_score": 0.577426689577383}}
{"text": "%---------------------------Jacobian-----------------------------\n\\section{Jacobian}\n\nThe minimum Jacobian computed at each vertex is used:\n\\[\nq = \\min_{i\\in\\{0,1,2,3\\}} \\left\\{ \\alpha_i \\right\\}\n\\]\n\n\\quadmetrictable{Jacobian}%\n{$L^2$}%                                    Dimension\n{$[0.DBL\\_MAX]$}%                           Acceptable range\n{$[0,DBL\\_MAX]$}%                           Normal range\n{$[-DBL\\_MAX,DBL\\_MAX]$}%                   Full range\n{$1$}%                                      Unit square\n{\\cite{knu:00}}%                            Citation\n{v\\_quad\\_jacobian}%                        Verdict function name\n\n", "meta": {"hexsha": "ff95657170b0d9f096dcef3a84279610f217d34d", "size": 631, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/QuadJacobian.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/QuadJacobian.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/QuadJacobian.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 35.0555555556, "max_line_length": 65, "alphanum_fraction": 0.4057052298, "num_tokens": 154, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8459424373085146, "lm_q2_score": 0.6825737473266735, "lm_q1q2_score": 0.5774180994563324}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\begin{document}\n    \\begin{itemize}\n        \\item input: words list (of a word sense)\n        \\item output: partition of the list (one or two)\n    \\end{itemize}\n    \\paragraph{MCMC sampling}~\n    \n    The optimal partition is: ($G_i$ can be empty)\n    $$\\{G\\} = \\text{argmax}_{G_1,G_2}\\sum_{i=1,2}\\sum_{a,b\\in G_i, a<b}\\text{cosSim}(a,b)-\\sum_{a\\in G_1, b\\in G_2}\\text{cosSim}(a,b)$$\n    \\begin{enumerate}\n        \\item Start with random partition.\n        \\item Do 3-4 until convergence.\n        \\item Create a new partition $\\{G\\}'$ by moving a random selected word $l$ from $G_i$ to $G_j$.\n        \\item Accept this with probability $\\min(1, p)$, where\n        $$\\log p=\\gamma \\left(\\sum_{m\\in G_j,m\\neq l} \\text{cosSim}(l,m)-\\sum_{n\\in G_i,n\\neq l}\\text{cosSim}(l,n)\\right)$$\n    \\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "1701c01cc013dc097a5ef2e009a1ff781ec4a885", "size": 867, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/word_sense_clustering.tex", "max_stars_repo_name": "antonyms/AntonymPipeline", "max_stars_repo_head_hexsha": "050f963b9dbefeb5772be665d2d69e500764d4cd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-08-24T20:41:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-26T19:16:36.000Z", "max_issues_repo_path": "doc/word_sense_clustering.tex", "max_issues_repo_name": "antonyms/AntonymPipeline", "max_issues_repo_head_hexsha": "050f963b9dbefeb5772be665d2d69e500764d4cd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/word_sense_clustering.tex", "max_forks_repo_name": "antonyms/AntonymPipeline", "max_forks_repo_head_hexsha": "050f963b9dbefeb5772be665d2d69e500764d4cd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.35, "max_line_length": 135, "alphanum_fraction": 0.6216839677, "num_tokens": 308, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84594244507642, "lm_q2_score": 0.6825737408694988, "lm_q1q2_score": 0.5774180992961025}}
{"text": "%!TEX root = ../../dissertation.tex\n\n\\section{Comparison using a neoclassical growth model}\n\nThe neoclassical growth model is often used as a benchmark to measure the\neffectiveness of a solution method. We keep with this tradition and analyze the\nstochastic neoclassical growth model with seven alternative solution methods,\nwhich are outlined in \\cite{AMMT2016}. Each method is implemented in Julia,\nMatlab, and Python which allows us to see how the performance of different\nsolution methods may vary by programming language.\n\n\\subsection{The model}\n\nWe consider a dynamic programming problem of finding the value function,\n$V$, that solves the Bellman equation,\n\n\\begin{align}\n  V \\left(k, z\\right) &= \\max_{c, k^{\\prime }} u\\left(c\\right) +\\beta E\\left[ V\\left( k^{\\prime },z^{\\prime }\\right) \\right] \\label{vf} \\\\\n  \\text{s.t.\\quad}k^{\\prime } &= \\left( 1-\\delta \\right) k + z f\\left( k\\right) - c \\label{bc} \\\\\n  \\ln z^{\\prime} &= \\rho \\ln z + \\varepsilon^{\\prime },\\qquad \\varepsilon^{\\prime } \\sim \\mathcal{N}\\left( 0,\\sigma ^{2}\\right), \\label{ts}\n\\end{align}\n\nwhere $k$, $c$ and $z$ are capital, consumption and productivity level,\nrespectively; $\\beta \\in \\left( 0,1\\right) $; $\\delta \\in \\left( 0,1\\right] $ ;\n$\\rho \\in \\left( -1,1\\right) $; $\\sigma \\geq 0$; the utility and production\nfunctions, $u$ and $f$, respectively, are strictly increasing, continuously\ndifferentiable and strictly concave. The primes on variables denote next-period\nvalues, and $E\\left[ V\\left( k^{\\prime },z^{\\prime }\\right) \\right] $ is an\nexpectation conditional on state $\\left( k,z\\right) $.\n\n\\paragraph{Optimality conditions.}\n\nThe first order condition (FOC) and envelope condition (EC) of the problem\n(\\ref{vf})-(\\ref{ts}), respectively, are\n\n\\begin{equation}\n  u^{\\prime}\\left(c\\right) = \\beta E \\left[ V_{1}\\left(k^{\\prime }, z^{\\prime}\\right) \\right] \\label{f2}\n\\end{equation}\n\n\\begin{equation}\n  V_{1}\\left(k, z\\right) = u^{\\prime }\\left( c\\right) \\left[ 1-\\delta + zf^{\\prime }\\left( k\\right) \\right]. \\label{f3}\n\\end{equation}%\n\nBy combining (\\ref{f2}) and (\\ref{f3}), we obtain the Euler equation%\n\n\\begin{equation}\n  u^{\\prime }\\left(c\\right) = \\beta E\\left[ u^{\\prime }\\left( c^{\\prime}\\right) \\left[ 1-\\delta +z^{\\prime }f^{\\prime }\\left( k^{\\prime} \\right) \\right] \\right]. \\label{f23}\n\\end{equation}\n\n\\paragraph{Parameterization and implementation details.}\n\nWe parameterize the model (\\ref{vf})-(\\ref{ts}) by using a Constant Relative\nRisk Aversion (CRRA) utility function, $u\\left( c\\right) =\\frac{c^{1-\\gamma\n}-1}{1-\\gamma }$, and a Cobb-Douglas production function, $f\\left( k\\right)\n=Ak^{\\alpha }$. We choose parameters to be set at $A=\\frac{1/\\beta -(1-\\delta\n)}{\\alpha }$, $\\alpha =1/3$, $\\beta =0.99$, $\\delta =0.025$, $ \\rho =0.95$ and\n$\\sigma =0.01$. As a solution domain, we use a rectangular, uniformly spaced\ngrid of $10\\times 10$ points for capital and productivity between 0.9 and 1.1\nfor both variables.\n\nWe integrate by using a $10$-node Gauss-Hermite quadrature rule and approximate\nthe policy and value functions by using complete ordinary polynomials up to\ndegree 5. As an initial guess, we use a linear approximation to the capital\npolicy function. 
To solve for polynomial coefficients, we use fixed-point\niteration.\n\nAll computations are performed using Julia v0.6.0, Matlab version 9.2.0\n(R2017a), and Python 3.6 on a MacBook Pro with a 2.7 GHz Intel Core i7\nprocessor and 16 GB of RAM. For Julia and Python, the particular package\nversions can be found in the corresponding notebooks submitted on QuantEcon's Notebook site\\footnote{The\n\\href{http://notes.quantecon.org/submission/5b5f71779cd7f00015be6350}{Matlab notebook},\n\\href{http://notes.quantecon.org/submission/5b5f70db9cd7f00015be634e}{Python notebook},\n\\href{http://notes.quantecon.org/submission/5b5f711d9cd7f00015be634f}{Julia notebook}}.\n\n\\subsection{Value iterative methods}\n\nWe analyze three value iterative algorithms for solving the model: (1)\nconventional value function iteration method analyzed in e.g., \\cite{AF2014} ;\n(2) a variant of envelope condition method (ECM) of \\cite{MM2013} that finds a\nsolution to Bellman equation by iterating on value function; and (3) endogenous\ngrid method (EGM) of \\cite{Carroll2006}. We present a brief description of each\nalgorithm below. A detailed description of each algorithm is included in\nAppendix A.\n\n\\subsubsection{Conventional Value Function Iteration}\n\nConventional VFI constructs the policy rule for consumption by combining FOC\n(\\ref{f2}) and budget constraint (\\ref{bc}):\n\n\\qquad \\newline\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 1. Conventional VFI} \\\\ \\hline\nGiven $V$, for each point $\\left( k,z\\right) $, define the following recursion: \\\\\n\\quad i). Numerically solve for $c$ satisfying $u^{\\prime }\\left(c\\right) =\\beta E \\left[ V_{1}\\left( \\left( 1-\\delta \\right) k+zf\\left( k\\right) -c,z^{\\prime }\\right) \\right] $. \\\\\n\\quad ii). Find $k^{\\prime }=\\left( 1-\\delta \\right) k+zf\\left( k\\right) -c$. \\\\\n\\quad iii). Find $\\widehat{V}\\left( k,z\\right) =u\\left( c\\right) +\\beta E \\left[ V\\left( k^{\\prime },z^{\\prime }\\right) \\right] $. \\\\\nIterate on i)-iii) until convergence $\\widehat{V}=V$. \\\\ \\hline \\hline\n\\end{tabular}\n}\n\n\\qquad \\newline\n\n\\subsubsection{Envelope Condition Method}\n\nECM, proposed in \\cite{MM2013}, finds consumption from envelope\ncondition (\\ref{f3}) and budget constraint (\\ref{bc}). In this specific model,\nthe consumption function can be constructed analytically:\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 2. ECM-VF} \\\\ \\hline\nGiven $V$, for each point $\\left( k,z\\right) $, define the following recursion: \\\\\n\\quad i). Find $c=u^{\\prime -1}\\left[ \\frac{V_{1}\\left( k,z\\right) }{1-\\delta +zf^{\\prime }\\left( k\\right) }\\right]$. \\\\\n\\quad ii). Find $k^{\\prime }=\\left( 1-\\delta \\right) k+zf\\left( k\\right) -c$. \\\\\n\\quad iii). Find $\\widehat{V}\\left( k,z\\right) =u\\left( c\\right) +\\beta E \\left[ V\\left( k^{\\prime },z^{\\prime }\\right) \\right]$. \\\\\nIterate on i)-iii) until convergence $\\widehat{V}=V$. \\\\ \\hline \\hline\n\\end{tabular}\n}\n\n\\qquad \\newline\n\n\\subsubsection{Endogenous Grid Method}\n\nFinally, EGM of \\cite{Carroll2006} constructs a grid on $\\left( k^{\\prime\n},z\\right) $ by fixing the future endogenous state variable $k^{\\prime }$ and\nby treating the current endogenous state variable $k$ as unknown; see also\n\\cite{BF2007} for a discussion of the EGM method. 
Since $k^{\\prime }$ is fixed,\nEGM computes $E\\left[ V_{1}\\left( k^{\\prime },z^{\\prime }\\right) \\right] $\nup-front and thus can avoid costly interpolation and approximation of\nexpectation in a root finding procedure.\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 3. EGM of Carroll (2005)} \\\\ \\hline\nGiven $V$, for each point $\\left( k^{\\prime },z\\right) $, define the\nfollowing recursion: \\\\\n\\quad i). Find $c=u^{\\prime -1}\\left \\{ \\beta E\\left[ V_{1}\\left( k^{\\prime}, z^{\\prime }\\right) \\right] \\right \\}$. \\\\\n\\quad ii). Solve for $k$ satisfying $k^{\\prime }=\\left( 1-\\delta \\right)k + zf\\left(k\\right) - c$. \\\\\n\\quad iii). Find $\\widehat{V}\\left( k,z\\right) =u\\left( c\\right) +\\beta E \\left[ V\\left( k^{\\prime },z^{\\prime }\\right) \\right]$. \\\\\nIterate on i)-iii) until convergence $\\widehat{V}=V$. \\\\ \\hline \\hline\n\\end{tabular}%\n}\n\n\\qquad \\newline\n\nIn Step ii) of EGM, we still need to find $k$ numerically. However, for the\nstudied model, \\cite{Carroll2006} shows a change of variables that makes it\npossible to avoid finding $k$ numerically on each iteration (except of the\nvery last iteration). In order to streamline our code, we do not use this\nchange of variables so the running time of our version of the EGM method\nwill be considerably larger than for the version of the method in \\cite\n{Carroll2006}. However, this fact does not distort our comparison analysis\nsince we use the same implementation in all three languages.\n\n\\subsubsection{Comparison results of Matlab, Python and Julia for value iterative methods}\n\nThe results from our comparison of the three iterative methods are in Table\n\\ref{clmm:Table1}. Conventional VFI is the most time consuming in all three\nlanguages because it requires us to find the root of (\\ref{f2}) for each $(k,\nz)$. Given $(k, z)$, each evaluation of equation (\\ref{f2}) requires the computer\nto compute conditional expectation by interpolating over the $t+1$ values.\nNumerical root finders must do this repeatedly. The reason EGM performs better\nthan VFI is that it is easier to solve equation (\\ref{f2}) with respect to $c$\ngiven $\\left(k^{\\prime}, z\\right)$ than to solve it with respect to $c$ given\n$(k, z)$ because we only need to evaluate conditional expectation once if\nwe know $k^{\\prime}$. The ECM method requires no numerical solver which is why\nit is so efficient relative to the other two methods.\\footnote{A version of the\nenvelope condition argument is used in \\cite{AHLLM2017} to construct a new\nclass of fast and efficient numerical methods for solving dynamic economic\nmodels in continuous time.}\n\nJulia produces solutions between 5 and 50 times faster than Matlab and Python\nfor VFI and EGM. The reasons Julia performs so much better on these two\nalgorithms is because the entire language is built on JIT compilation. This\nmeans that the repeated calls to the Julia objective\nfunction are cheap (relative to an interpreted language like Matlab or Python)\nbecause these subsequent function calls execute already compiled machine code.\n\nOur results for the ECM show similar performance for all three languages. The\nmain reason for this is that the method does not require a non-linear solver to\ncompute the policy for consumption and computations can be vectorized. 
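As a rough illustration (a hypothetical sketch, not the implementation benchmarked here), one vectorized ECM-VF sweep can be written in a few lines of Python/NumPy if we specialize the model to log utility, full depreciation ($\\delta = 1$), and a frozen productivity level $z = 1$, so that the conditional expectation is trivial:\n{\\small\n\\begin{verbatim}\nimport numpy as np\n\nalpha, beta = 1/3, 0.99\nA = 1 / (alpha * beta)          # normalizes steady-state capital to 1\nkgrid = np.linspace(0.9, 1.1, 10)\n\ndef ecm_vf_step(vc):\n    # i) envelope condition: c = u'^{-1}[ V_1(k) / (z f'(k)) ], delta = 1\n    V1 = np.polyval(np.polyder(vc), kgrid - 1.0)\n    c = alpha * A * kgrid**(alpha - 1) / V1\n    # ii) budget constraint: k' = z f(k) - c\n    kp = A * kgrid**alpha - c\n    # iii) Bellman update, then refit the degree-5 polynomial for V\n    vnew = np.log(c) + beta * np.polyval(vc, kp - 1.0)\n    return np.polyfit(kgrid - 1.0, vnew, 5)\n\nvc = np.polyfit(kgrid - 1.0, np.log(kgrid), 5)   # initial guess\nfor _ in range(1000):\n    vc = ecm_vf_step(vc)\n\\end{verbatim}\n}\nUnder this calibration the model has the closed form $c = (1-\\alpha\\beta) z A k^{\\alpha}$, which the converged fit reproduces; the point is that each sweep consists only of array operations and one least-squares fit, with no root finding. 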
The vectorization puts the languages on more similar footing because they all end\nup calling out to the same high-performance BLAS routines implemented in C or\nFortran.\n\n\n\\subsection{Policy iterating methods}\n\nWe analyze four algorithms that construct policy functions for the standard\nneoclassical stochastic growth model: Algorithm 4 is a conventional policy iteration (PI)\nmethod, e.g., \\cite{SR2004}; Algorithm 5 is a variant of ECM that implements policy\niteration; Algorithm 6 is a version of ECM that solves for the derivative of the value function\ninstead of the value function itself; Algorithm 7 is an Euler equation algorithm. Again we\ngive a brief description of each algorithm, but a more detailed description of\nthese algorithms is provided in Appendix A.\n\n\\subsubsection{Conventional Policy Iteration}\n\nConventional policy iteration constructs a solution to the Bellman equation by\niterating on the consumption function using FOC (\\ref{f2}).\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 4. Conventional PI} \\\\ \\hline\nGiven $C$, for each point $\\left( k,z\\right) $, define the following\nrecursion: \\\\\n\\quad i). Find $K\\left(k, z\\right) = \\left( 1 - \\delta \\right)k + z f\\left(k\\right) - C\\left( k,z\\right)$ \\\\\n\\quad ii). Solve for $V$ satisfying $V\\left(k, z\\right) = u\\left(C\\left(k, z\\right) \\right) + \\beta E\\left[V\\left(K\\left(k, z\\right), z^{\\prime} \\right) \\right]$ \\\\\n\\quad iii). Find $\\widehat{C}$ satisfying $u^{\\prime }\\left( \\widehat{C}\\left( k,z\\right) \\right) =\\beta E\\left[ V_{1}\\left( \\left( 1-\\delta \\right)k + z f\\left(k\\right) - \\widehat{C}\\left(k, z\\right), z^{\\prime}\\right) \\right]\n$ \\\\\nIterate on i)-iii) until convergence $\\widehat{C}=C$. \\\\ \\hline \\hline\n\\end{tabular}%\n}\n\n\\qquad\n\n\\subsubsection{ECM Policy Iteration}\n\nECM-PI is the variant of ECM that performs PI instead of VFI. It constructs a\nsolution to the Bellman equation by iterating on the consumption function using\nEC (\\ref{f3}).\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 5. ECM-PI} \\\\ \\hline\nGiven $C$, for each point $\\left( k,z\\right) $, define the following\nrecursion: \\\\\n\\quad i). Find $K\\left(k, z\\right) = \\left( 1 - \\delta \\right) k+zf\\left(k\\right) - C\\left(k, z\\right)$ \\\\\n\\quad ii). Solve for $V$ satisfying $V\\left(k, z\\right) = u\\left( C\\left(k, z\\right) \\right) + \\beta E\\left[ V\\left(K\\left(k, z\\right), z^{\\prime}\\right) \\right]$ \\\\\n\\quad iii). Find $\\widehat{C}\\left(k, z\\right) = u^{\\prime -1}\\left[\\frac{V_{1}\\left(k, z\\right)}{1 - \\delta + z f^{\\prime }\\left(k\\right)}\\right]$ \\\\\nIterate on i)-iii) until convergence $\\widehat{C}=C$. \\\\ \\hline \\hline\n\\end{tabular}%\n}\n\n\\qquad\n\n\\subsubsection{Derivative Policy Iteration}\n\nThe ECM analysis also suggests a useful recursion for the derivative of the value\nfunction. We first construct the consumption function $C\\left(k, z\\right)$\nsatisfying FOC (\\ref{f2}) under the current value function\n\n\\begin{align}\n\\beta E\\left[V_{1}\\left( k^{\\prime },z^{\\prime }\\right) \\right] = u^{\\prime }\\left(C\\left(k, z\\right) \\right), \\label{vk4}\n\\end{align}\n\nand we then use (\\ref{f3}) to obtain the derivative of the value function for the next\niteration. 
This leads to a solution method that we call ECM-DVF.\n\nThe main difference between ECM-DVF and the previously studied ECM-VF is\nthat we iterate on $V_{1}$ without computing $V$ on each iteration. We only\ncompute $V$ at the very end, when both $V_{1}$ and the optimal policy functions\nare constructed. Again, neither numerical maximization nor a numerical solver\nis necessary under ECM-DVF but only direct calculations. Similarly to PI\nmethods, Euler equation methods do not solve for the value function but only for\ndecision (policy) functions. One possible decision function is the derivative of\nthe value function. Thus, the ECM-DVF recursion (\\ref{vk4}) can also be viewed as\nan Euler equation written in terms of the derivative of the value function.\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 6. ECM-DVF} \\\\ \\hline\nGiven $V_{1}$ for each point $\\left( k,z\\right) $, define the following recursion: \\\\\n\\quad i). Find $c = u^{\\prime -1}\\left[ \\frac{V_{1}\\left(k, z\\right)}{1 - \\delta +z f^{\\prime}\\left(k\\right) }\\right]$ \\\\\n\\quad ii). Find $k^{\\prime }=\\left( 1-\\delta \\right) k+zf\\left( k\\right) -c$ \\\\\n\\quad iii). Find $\\widehat{V}_{1}\\left(k, z\\right) =\\beta \\left[ 1-\\delta +z f^{\\prime }\\left(k\\right) \\right] E \\left[V_{1}\\left(k^{\\prime}, z^{\\prime }\\right) \\right]$ \\\\\nIterate on i)-iii) until convergence $\\widehat{V}_{1}=V_{1}$ \\\\\nGiven the converged policy functions, find $V$ satisfying $V\\left(k, z\\right) = u\\left(c\\right) + \\beta E \\left[V\\left(k^{\\prime }, z^{\\prime}\\right) \\right]$ \\\\ \\hline \\hline\n\\end{tabular}\n}\n\n\\subsubsection{Euler Equation Methods}\n\nPolicy iteration has similarity to Euler equation methods; see \\cite\n{Judd1998}, \\cite{Santos1999} for a general discussion of such methods. Euler\nequation methods approximate policy functions for consumption $ c=C\\left(\nk,z\\right) $, capital $k^{\\prime }=K\\left( k,z\\right) $ (or other policy\nfunctions) to satisfy (\\ref{bc}), (\\ref{ts}) and (\\ref{f23}). Below, we provide\nan example of an Euler equation method (many other recursions for such methods are\npossible).\n\n\\qquad \\newline\n\n{\\small\n\\begin{tabular}{l}\n\\hline \\hline\n\\textbf{Algorithm 7. Euler equation algorithm} \\\\ \\hline\nGiven $C\\left( k,z\\right) $, for each point $\\left( k,z\\right) $, define the\nfollowing recursion: \\\\\n\\quad i). Find $K\\left(k, z\\right) =\\left(1 - \\delta\\right) k + z f\\left(k\\right) -C\\left( k,z\\right)$ \\\\\n\\quad ii). Find $c^{\\prime} = C\\left(K\\left(k, z\\right), z^{\\prime}\\right)$ \\\\\n\\quad iii). Find $\\widehat{C}\\left(k, z\\right) = u^{\\prime -1}\\left \\{\\beta E \\left[ u^{\\prime}\\left(c^{\\prime}\\right) \\left(1 - \\delta + z^{\\prime} f^{\\prime}\\left(k^{\\prime}\\right) \\right) \\right] \\right \\}$ \\\\\nIterate on i)-iii) until convergence $\\widehat{C}=C$. \\\\ \\hline \\hline\n\\end{tabular}\n}\n\n\\subsubsection{Comparison results of Matlab, Python and Julia for policy iterative methods}\n\nIn Table \\ref{clmm:Table2}, we see that conventional PI performs the worst because it relies on a\nnumerical solver, but, as with the value iterative methods, this reliance is less problematic for\nJulia than for Matlab or Python.\n\nOverall, we found that the speed of most algorithms was comparable across all three languages. 
The\nexception was that Julia performed significantly better than the other programs whenever there was\nany numerical optimization or root-finding involved. However, these speedups came at the cost of\nspending a (relatively small) amount of extra time being careful about how the code was written to\nensure that the Julia compiler is able to infer the type of each variable in the most time-intensive\nparts of the program.\n", "meta": {"hexsha": "0b70f8e4b59878da3fcc7ce2dfc9a23a073778b8", "size": 16410, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ms/sections/CLMM/growthmodel.tex", "max_stars_repo_name": "cc7768/Dissertation", "max_stars_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ms/sections/CLMM/growthmodel.tex", "max_issues_repo_name": "cc7768/Dissertation", "max_issues_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ms/sections/CLMM/growthmodel.tex", "max_forks_repo_name": "cc7768/Dissertation", "max_forks_repo_head_hexsha": "813210c2f92122bb0c05f6ad7f5a9ede04993781", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-31T22:54:14.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-03T18:48:22.000Z", "avg_line_length": 50.3374233129, "max_line_length": 227, "alphanum_fraction": 0.7213284583, "num_tokens": 4901, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056295505783, "lm_q2_score": 0.8376199633332891, "lm_q1q2_score": 0.5773761561495852}}
{"text": "\\section{Propositional Logic}\n\nPropositional logic is about arguments.\nAn argument contains two things:\n\n\\begin{itemize}\n\t\\item\n\t      One or more premises.\n\t\\item\n\t      One conclusion.\n\\end{itemize}\n\nBoth of these things must be propositions.\nThis means they must evaluate to either true ({\\bf 1}) or false ({\\bf 0}).\nTake the following example:\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{l}\n\t\tAll humans are mortal \\\\\n\t\tSocrates is human     \\\\\n\t\t\\hline\n\t\t$\\therefore$ Socrates is mortal\n\t\\end{tabular}\n\t\\caption{An example of an argument in English}\n\\end{table}\n\nIn this example a premise ``All humans are mortal'' is given.\nThis is something which can be either true or false.\nGiven that both premises are true, we must accept the conclusion to also be true.\nIn this case that means Socrates is a mortal.\nPropositional logic is not a one-size-fit-all solution.\nIt does not have {\\bf quantifiers}, which means you cannot specify an amount of something.\nPropositional statements are generally not in english.\nThe are replaces with {\\bf propositional variables} like \\(p\\), \\(q\\) and \\(r\\).\n\n\\subsection{Valid $\\neq$ Sound}\nThere is a difference in {\\bf sound} and {\\bf valid} arguments.\nA {\\bf valid} argument is an argument which is `syntactically' valid.\nThis means that the argument holds if all permises are true, but it does not mean the permises hold up.\nWhen the arguments hold up in the real world, then the argument is also {\\bf sound}.\nThe given example is both valid as well as sound.\nIf you change both occurances of `mortal' to `immortal'\nthen the example is still valid, but no longer sound as the argument no longer holds up in the real world.\n\n\\subsection{Logical connectives}\n{\\bf Logical connectives} or {\\bf logical operators} is the `glue' between propositions.\nA list of possible logical connectives are shown in table \\ref{tab:logicalConnectives}.\nLogical connectives can be used to compare or combine multiple propositional variables.\nTake for example the following formula: \\(q \\land p\\).\nThis takes two propositional variables \\(p\\) and \\(q\\) and compares that.\nThis returns another boolean, namely true when both \\(p\\) and \\(q\\) are true, or false in every other case.\nUsing this knowledge you can create something called a {\\bf compound proposition}.\nThis means it combines multiple proposition to a single, larger one.\nAn example of a compound proposition is \\((q \\land p) \\lor \\neg r\\).\nBoth \\(q \\land p\\) and \\(\\neg r\\) are propositions, and combining gives a new one.\nIf you execlude the parenthesis then the formula is parsed from left to right.\nSo \\(q \\land p \\land r\\) is equal to \\((q \\land p) \\land r\\) but not to \\(q \\land (p \\land r)\\).\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{c | l | l}\n\t\tSymbol     & Name         & Effect                                                \\\\\n\t\t\\hline\n\t\t$\\land$    & conjunction  & True when both propositions are true                  \\\\\n\t\t$\\lor$     & disjunction  & True when one or both propositions are true           \\\\\n\t\t$\\oplus$   & exclusive or & True when one of the propositions is true             \\\\\n\t\t$\\neg$     & negation     & True when the proposition is false                    \\\\\n\t\t$\\implies$ & implication  & True if both sides are true or the left side is false \\\\\n\t\t$\\iff$     & iff          & True if both sides are true                    
\\\\\n\t\\end{tabular}\n\t\\caption{A list of logical connectives}\n\t\\label{tab:logicalConnectives}\n\\end{table}\n\n\\subsection{Logical equivalence}\nIf you want to check whether two propositions are equivalent\n(i.e., have the same value in all cases), then you can use a {\\bf truth table}.\nA truth table is a table with all possible outcomes of a proposition.\nIf two propositions have the same outcomes in the truth table, then the two propositions are equivalent.\nTake this truth table for example:\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{c | c | c | c}\n\t\t\\(q\\) & \\(p\\) & \\(q \\land p\\) & \\(\\neg(\\neg q \\lor \\neg p)\\) \\\\\n\t\t\\hline\n\t\t0     & 0     & 0             & 0                            \\\\\n\t\t0     & 1     & 0             & 0                            \\\\\n\t\t1     & 0     & 0             & 0                            \\\\\n\t\t1     & 1     & 1             & 1                            \\\\\n\t\\end{tabular}\n\\end{table}\n\nAs you can see, the two statements have the same values in the truth table, and are logically equivalent.
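\n\nThis check can also be automated. As a small illustration (a hypothetical Python helper, not part of the original summary), the following enumerates the truth table of two compound propositions and compares them:\n\n\\begin{verbatim}\nfrom itertools import product\n\ndef equivalent(f, g, nvars):\n    # True iff f and g agree on every row of the truth table.\n    return all(f(*row) == g(*row)\n               for row in product((False, True), repeat=nvars))\n\n# The example above: q AND p  versus  NOT(NOT q OR NOT p).\nprint(equivalent(lambda q, p: q and p,\n                 lambda q, p: not (not q or not p), 2))  # True\n\\end{verbatim}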
\n\n\\subsection{Sufficient or Necessary}\nGiven is the following implication \\(p \\implies q\\).\nIn this statement, \\(p\\) is the {\\bf hypothesis} and \\(q\\) is the {\\bf conclusion}.\nThe hypothesis being true is sufficient for the conclusion to be true; conversely, the conclusion being true is necessary for the hypothesis to be true.\nIt's impossible for the conclusion to be false if the hypothesis is true.\nBut when the hypothesis is false, the conclusion can be either true or false.\nSo if the hypothesis is false, the implication itself still holds.\n\n\\subsection{Contrapositive, Converse and Inverse}\nEvery implication has a contrapositive, a converse and an inverse.\nThe contrapositive is always logically equivalent to the original implication; the converse and the inverse generally are not.\nGiven the implication `If this is Tuesday, then we are in Belgium' or in logic form \\(p \\implies q\\)\nwe can get the contrapositive, converse and inverse of it.\nThe {\\bf contrapositive} is \\(\\neg q \\implies \\neg p\\), so `If we aren't in Belgium, then this isn't Tuesday'.\nThe {\\bf converse} is \\(q \\implies p\\), so `If we are in Belgium, then this is Tuesday'.\nThe {\\bf inverse } is \\(\\neg p \\implies \\neg q\\), so `If this isn't Tuesday, then we aren't in Belgium'.\n\n\\subsection{Tautology, Contradiction and Contingency}\nThere are more categories into which propositions can fall; these are three of them.\nA tautology is a proposition which always evaluates to true. An example is the proposition \\(q \\lor \\neg q\\).\nA contradiction is the opposite: it will always evaluate to false. An example is \\(q \\land \\neg q\\).\nA contingency is everything else, which can be either true or false. For example \\(q\\).\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{c | c | c | c}\n\t\t\\(q\\) & \\(q \\lor \\neg q\\) & \\(q \\land \\neg q\\) & \\(q\\) \\\\\n\t\t\\hline\n\t\t0     & 1                 & 0                  & 0     \\\\\n\t\t1     & 1                 & 0                  & 1     \\\\\n\t\\end{tabular}\n\\end{table}", "meta": {"hexsha": "fdaca9a5fc5dafea477a1dd37ef41be2b6bf5864", "size": 6396, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Reasoning & Logic/chapters/propositional-logic.tex", "max_stars_repo_name": "dsluijk/TUD-CSE-summaries", "max_stars_repo_head_hexsha": "9157650c7a6af2e1c3072f6e66a6fd1e001460ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Reasoning & Logic/chapters/propositional-logic.tex", "max_issues_repo_name": "dsluijk/TUD-CSE-summaries", "max_issues_repo_head_hexsha": "9157650c7a6af2e1c3072f6e66a6fd1e001460ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Reasoning & Logic/chapters/propositional-logic.tex", "max_forks_repo_name": "dsluijk/TUD-CSE-summaries", "max_forks_repo_head_hexsha": "9157650c7a6af2e1c3072f6e66a6fd1e001460ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.8244274809, "max_line_length": 110, "alphanum_fraction": 0.656504065, "num_tokens": 1643, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059560743422, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5773616690838735}}
{"text": "\\subsection*{ Orientations of Vector Spaces }\n\n\\exercisehead{13.1} \n\\[\n\\begin{gathered}\n  (E_1 \\dots E_n) \\sim (E_1 \\dots E_n) \\\\ \n  E_i = \\delta_i^{\\, \\, j } E_j \\\\ \n  \\text{det}{ (\\delta_i^{\\, \\, j} )} = 1 \n\\end{gathered}\n\\]\n\nIf $E_i = B_i^{\\, \\, j} \\widetilde{E}_j$, \\, $\\text{det}{B_i^{\\, \\, j }} >0$,  \n% \\[\n%(B^{-1})_i^{\\, \\, k} E_k = (B^{-1})_i^{\\, \\, k } B_k^{\\, \\, j} \\widetilde{E}_j = E_\n% #\\]\n\n$\\text{det}{(BB^{-1})} = \\text{det}{B} \\text{det}{B^{-1}} = 1$.  $\\text{det}{B^{-1}} >0$ since $\\text{det}{B} >0$.  \n\nIf $E_i = B_i^{\\, \\, j} \\widetilde{E}_j$  \n\\[\n\\begin{aligned}\n  & \\widetilde{ \\widetilde{E}}_i = A_i^{\\, \\, j} E_j \\\\ \n  & \\widetilde{ \\widetilde{E}}_i = A_i^{\\, \\, k } B_k^{\\, \\, j} \\widetilde{E}_j \\quad \\quad \\text{det}{AB} = \\text{det}{A} \\text{det}{B} >0\n\\end{aligned}\n\\]\nSo it's an equivalence relation and since $\\text{det}{B} \\neq 0$, $\\text{det}{B} >0$ or $\\text{det}{B} <0$ and as above there are only 2 equivalence classes; $\\text{det}{A} \\gtrless 0$\n\n\\subsection*{ Orientations of Manifolds }\n\nlcaol frame $(E_i)$ for $M$ is (positively) oriented if $(\\left. E_1 \\right|_p \\dots \\left. E_n \\right|_p)$ positively oriented basis for $T_pM$.  $\\forall \\, p \\in U$  \n\ncont. if $\\forall \\, p \\in M$, $p\\in $ domain of oriented local frame.  \n\norientation of $M$ is cont., pointwise orientation.  \n\n$M$ orinetable if $\\exists \\, $ orientation to $M$\n\n\\exercisehead{13.2}  connected domain $U$.  Consider some $\\Omega \\in \\Lambda^m(V)$,  consider \n\\[\n\\begin{aligned}\n  & U \\to \\mathbb{R} \\\\ \n  & p \\mapsto ( \\left. E_1 \\right|_p \\dots \\left. E_n \\right|_p ) \\mapsto \\Omega( \\left. E_1 \\right|_p \\dots \\left. E_n \\right|_p ) = \\text{det}{ ( \\left.  \\epsilon^i(E_j) \\right|_p ) }\n\\end{aligned}\n\\]\n\n$\\text{det}$ cont. function s.t. $\\text{det} = \\begin{cases} +1 \\\\ -1 \\end{cases}$ on $U$.  $U$ connected so $\\forall \\, $ cont. functions from $X$ to $\\lbrace 0, 1 \\rbrace$ or, the same, $\\lbrace 1 , -1 \\rbrace$, constant.  \\\\\nSo $\\Omega( \\left. E_1 \\right|_p \\dots \\left. E_n \\right|_p ) = \\text{det}{ ( \\left. \\epsilon^i (E_j) \\right|_p )}$ constant on $U$, otherwise $U$ separated.  \n\n\\begin{proposition}[13.4] \n$\\text{dim}{M} = m \\geq 1$ \n\n$\\forall \\, m$-form $\\Omega$ on $M$, $\\Omega\\neq 0$ determines a unique orientation of $M$ s.t. $\\Omega$ positively oriented $\\forall \\, p \\in M$, \\\\\nConversely, if $M$ given orientation, then $\\exists \\, $ smooth $m$-form $\\Omega$ on $M$ that's positively oriented $\\forall \\, p \\in M$\n\\end{proposition}\n\n$F:M \\to N$ local diffeomorphism.  \n\n$\\forall \\, p \\in M$, $\\exists \\, (U, \\varphi)$ chart and consider $U_F \\subset U$ s.t. $F(U_F)$ open, $\\left. F\\right|_{U_F} : U_F \\to F(U_F)$ diffeomorphism.  \n\nFor $F(p) \\in N$, $\\exists \\, (V, \\psi)$ chart and consider $F(U_F) \\cap V$\n\n$\\psi F\\varphi^{-1}(x^1 \\dots x^m) = F^j(x^1 \\dots x^m)$ \\quad \\quad $\\text{det}{(\\partial_i F^j)} >0$ suppose.  
\n\n\\subsection*{The Orientation Covering}\n\n\\subsection*{Orientations of Hypersurfaces}\n\ninterior multiplication or contraction with $X$\n\\[\ni_X\\omega(Y_1 \\dots Y_{k-1}) = \\omega(X,Y_1 \\dots Y_{k-1})\n\\]\n$i_X \\omega$ obtained by inserting $X$ into the first slot.\n\nNotation $X \\righthalfcup \\omega = i_X \\omega$\n\nSuppose $M$ smooth manifold \\\\\n\\phantom{Suppose} $S\\subset M$ submanifold (immersed or embedded)\n\nvector field along $S$ is cont. $N: S \\to TM$ s.t. $N_p \\in T_pM$ \\quad $\\forall \\, p \\in S$\n\n(Note difference between vector field along $S$ and vector field on $S$, s.t. $N_p \\in T_pS \\, \\forall \\, p$)\n\n$N_p \\in T_pM$, $p\\in S$ transverse to $S$ if $T_pM$ spanned by $N_p, T_pS$\n\nSimilarly, vector field $N$ along $S$ transverse to $S$ if $N_p$ transverse to $S$, \\, $\\forall \\, p \\in S$\n\n\\begin{proposition}[15.21, 13.12 in previous version] Suppose $M$ oriented smooth $m$-manifold \\\\\n\\phantom{Proposition 13.12 Suppose  } $S$ immersed hypersurface in $M$.  \\\\\n\\phantom{Proposition 13.12 Suppose  } $N$ transverse vector field along $S$.  \n\nThen $S$ has unique orientation s.t. $\\forall \\, p \\in S$, $(E_1 \\dots E_{m-1})$ oriented basis for $T_pS$ iff $(N_p, E_1 \\dots E_{m-1})$ oriented basis for $T_pM$\n\nIf $\\Omega$ orientation form for $M$, \\\\\nthen $\\left. (N \\righthalfcup \\Omega) \\right|_S \\equiv \\left. i_N\\Omega \\right|_S $ orientation form for $S$ with respect to this orientation.  \n\\end{proposition}\n\nRecall that smooth hypersurface $S$ is $S\\subseteq M$ equipped with $i: S \\hookrightarrow M$, smooth immersion, i.e. $Di$ injective.  \n\nNow orientation form $\\omega = dN_p \\wedge dE_1 \\wedge \\dots \\wedge dE_{m-1}$, then \\\\\n$i_{N_p} \\omega  =dE_1 \\wedge \\dots \\wedge dE_{m-1}$\n\\[\ni^*_S (i_{N_p}\\omega) = i^*_S (dE_1 \\wedge \\dots \\wedge dE_{m-1}) = i^*_S(dE_1) \\wedge \\dots \\wedge i^*_S(dE_{m-1}) = d(i^*_SE_1) \\wedge \\dots \\wedge d(i_S^*E_{m-1})\n\\]\n\n\\begin{proof} Let $\\Omega$ \\\\\n\\phantom{proof Let } $\\omega  = \\left. (N \\righthalfcup \\Omega) \\right|_S$ $(m-1)$-form.  \n\n$(E_1 \\dots E_{m-1}) $ basis for $T_pS$ \\\\\n$N$ transverse to $S$ implies $(N_p, E_1 \\dots E_{m-1})$ basis for $T_pM$.  \n\n$\\Omega$ orientation form so $\\Omega$ nonvanishing.  \n\\[\n\\omega_p(E_1 \\dots E_{m-1}) = \\Omega_p(N_p, E_1 \\dots E_{m-1}) \\neq 0\n\\]\nsince $\\omega_p(E_1 \\dots E_{m-1}) >0$ iff $\\Omega_p(N_p, E_1 \\dots E_{m-1}) >0$,  \\\\\norientation determined by $\\omega$ is the one defined in the statement of the proposition.  \n\\end{proof}
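\n\n% Added example (not in these notes): Prop 15.21 on the circle.\nE.g. $M = \\mathbb{R}^2$, $S = S^1$, outward $N = x\\partial_x + y\\partial_y$ (transverse to $S^1$), $\\Omega = dx \\wedge dy$.  Then\n\\[\ni_N\\Omega = x \\, dy - y \\, dx\n\\]\nwhich restricts on $S^1$ to $d\\theta$, the counterclockwise orientation form.  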
\n", "meta": {"hexsha": "7d5e88a58ffa8328c99a924c2a125bdc5384d9bc", "size": 5309, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LeeJM/15orientations.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/15orientations.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/15orientations.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 39.9172932331, "max_line_length": 227, "alphanum_fraction": 0.618195517, "num_tokens": 2090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5773616644032238}}
{"text": "% Created 2021-09-28 Tue 10:05\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\DeclareMathOperator{\\atantwo}{atan2}\n\\def\\ucolor{blue!80!black}\n\\def\\ycolor{green!60!black}\n\\newcommand*{\\incolor}[1]{\\textcolor{\\ucolor}{#1}}\n\\newcommand*{\\outcolor}[1]{\\textcolor{\\ycolor}{#1}}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Frequency response}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Frequency response},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Bode diagram}\n\\label{sec:org30472a2}\n\n\\begin{frame}[label={sec:org2c8c6d5}]{Response of LTI systems to sinusoids}\n\\begin{center}\n  \\begin{tikzpicture}[scale = 0.8, node distance=20mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n  \\node[coordinate] (refinput) {};\n  \\node[block, right of=refinput] (motor) {$G(s)$};\n  \\node[coordinate, right of=motor, node distance=20mm] (output) {};\n\n  \\draw[\\ucolor, ->] (refinput) -- node[above, pos=0.3] (voltsignal) {$u$} (motor);\n  \\draw[\\ycolor, ->] (motor) -- node[above, pos=0.5] (velsignal) {$y$} (output);\n  \\end{tikzpicture}\n\\end{center}\n\nLet \\(\\incolor{u(t) = \\sin\\omega_1 t}\\). Then, after transients have died out,\n\\[ \\outcolor{y(t)}= \\outcolor{|G(\\omega_1)| \\sin \\big( \\omega_1 t + \\arg G(i\\omega_1)\\big)}. \\]\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgd7e24dc}]{The Bode diagram}\n\\[ y(t) = \\underbrace{|G(i\\omega_1)|}_{\\text{amplification}} \\sin \\big( \\omega_1 t + \\underbrace{\\arg G(i\\omega_1)}_{\\text{phase shift}} \\big) \\]\n\nThe Bode diagram shows the \\alert{magnitude} and \\alert{phase} of the transfer function evaluated on the positive imaginary axis. 
\n\n\\begin{frame}[label={sec:orgd7e24dc}]{The Bode diagram}\n\\[ y(t) = \\underbrace{|G(i\\omega_1)|}_{\\text{amplification}} \\sin \\big( \\omega_1 t + \\underbrace{\\arg G(i\\omega_1)}_{\\text{phase shift}} \\big) \\]\n\nThe Bode diagram shows the \\alert{magnitude} and \\alert{phase} of the transfer function evaluated on the positive imaginary axis. It thus contains all information about the steady-state response of the system to input signals of different frequency.\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org8f5caea}]{Specifications on the frequency properties of the closed-loop system}\n\\begin{center}\n\\includegraphics[width=0.899\\linewidth]{../../figures/spec-bode-closed-loop-new}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org9073d5b}]{Exercise: Reading the Bode diagram}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{../../figures/alias-example-bode-GC}\n\\end{center}\nWhich of the responses below \\alert{is not} compatible with the Bode diagram?\n\n\\begin{center}\n\\includegraphics[width=\\linewidth]{../../figures/example-bode-GC-timeseries}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgcb17626}]{From loop gain to closed-loop gain}\n \\begin{center}\n \\begin{tikzpicture}\n\\tikzset{node distance=2cm, \n    block/.style={rectangle, draw, minimum height=12mm, minimum width=14mm},\n    sumnode/.style={circle, draw, inner sep=2pt}        \n}\n\n  \\node[coordinate] (input) {};\n  \\node[sumnode, right of=input, node distance=20mm] (sum) {\\tiny $\\sum$};\n  \\node[block,right of=sum, node distance=30mm] (fb) {$F(s)$};\n  \\node[block,right of=fb, node distance=30mm] (plant) {$G(s)$};\n  \\node[coordinate, right of=plant, node distance=30mm] (output) {};\n  \\node[coordinate, right of=plant, node distance=22mm] (measure) {};\n  \\draw[->] (input) -- node[above, pos=0.2] {$y_{ref}(t)$} (sum);\n  \\draw[->] (sum) -- node[above] {$e(t)$} (fb);\n  \\draw[->] (fb) -- node[above] {$u(t)$} (plant);\n  \\draw[->] (plant) -- node[at end, above] {$y(t)$} (output);\n  \\draw[->] (measure) -- ++(0, -18mm) -| (sum) node[left, pos=0.96] {$-$};\n  \\draw[red] (3.8, -1) rectangle (9.4, 1.7);\n  \\node[red] at (8, 1.4) {$G_o(s)$};\n  \\end{tikzpicture}\n\\end{center}\n\n\\[ G_c(i\\omega) = \\frac{G(i\\omega)F(i\\omega)}{1 + G(i\\omega)F(i\\omega)} = \\frac{G_o(i\\omega)}{1 + G_o(i\\omega)} \\]\n\\[ |G_c(i\\omega)| = \\frac{|G_o(i\\omega)|}{|1 + G_o(i\\omega)|} = \\frac{|G_o(i\\omega)|}{|G_o(i\\omega) - (-1)|} \\]\n\n\\pause\n\n\\alert{Keep the loop gain \\(G_o(i\\omega)\\) away from -1!} \n\\end{frame}\n\n\n\n\n\n\\begin{frame}[label={sec:orgd4c2941}]{If the phase shift is \\(\\pi\\)}\n\\(G_o(i\\omega_1) = -1\\), \\(|G_o(i\\omega_1)| = 1\\), \\(\\arg G_o(i\\omega_1) = -\\pi\\)\n\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[circle, fill, inner sep=1pt, right of=input, node distance=24mm] (sum) {};\n    \\node[circle, fill, inner sep=1pt, below of=sum, node distance=5mm] (sum2) {};\n    \\node[coordinate, below of=sum, node distance=2.5mm] (summid) {};\n    \\node[circle, fill, inner sep=1pt, right of=summid, node distance=5mm] (sum3) {};\n    \\node[block, right of=sum3, node distance=20mm] (plant)  {$G_o(s)$};\n    \\node[coordinate, right of=plant, node distance=40mm] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.1, color=blue!80!black] {$u(t)=\\sin(\\omega_1 t)$} (sum);\n    \\draw[->] (plant) -- node[coordinate, pos=0.5] (measure) {} node[above, pos=0.3, anchor=south west, color=orange!80!red] {$y(t)=\\sin\\big(\\omega_1 t -\\pi\\big) = -\\sin(\\omega_1 t)$} (output);\n    \\draw[->] (sum3) -- node[above] {} (plant);\n    \\draw[->] (measure) -- ++(0,-16mm) -| node[pos=0.95, left] {$-$} (sum2);\n    \\draw (sum) to (sum3);\n  
\\end{tikzpicture}\n\\end{center}\n\\pause\nClosed-loop transfer function: \\(G_c(s) = \\frac{G_o(s)}{1 + G_o(s)}\\)\n\\begin{tcolorbox}\nWe want \\[ 1 + G_o(i\\omega) \\neq 0, \\quad \\forall \\omega \\]\nIf not, then the closed-loop system will have poles on the imaginary axis (in the s-domain). \n\\end{tcolorbox}\n\\end{frame}\n\n\\begin{frame}[label={sec:org72d5978}]{The simplified Nyquist criterion in the s-plane}\n\\begin{center}\n\\includegraphics[width=0.65\\linewidth]{../../figures/implane-nyquist-contour-map}\n\\end{center}\n\\begin{tcolorbox}\nIf the open-loop system (the loop gain) is not unstable, i.e. $G_o(s)$ has no poles in the right-half plane, then the closed-loop system will be stable if the Nyquist curve \\textbf{does not encircle the point \\(s=-1\\)}. The point $s=-1$ should stay on the left side of the Nyquist curve when we go along the curve from low to high frequencies.\n\\end{tcolorbox}\n\\end{frame}\n\n\\begin{frame}[label={sec:org6de397c}]{Stability margins}\n\\begin{center}\n\\includegraphics[width=0.38\\linewidth]{../../figures/implane-nyquist-margins}\n\\end{center}\n\\begin{itemize}\n\\item Cross-over frequency: The frequency \\(\\omega_c\\) for which \\(|G_o(i\\omega)| = 1\\).\n\\item Phase margin: The angle \\(\\varphi_m\\) to the negative real axis for the point where the Nyquist curve intersects the unit circle. \\[\\varphi_m = \\arg G_o(i\\omega_c) - (-180\\degree) = \\arg G_o(i\\omega_c) + 180\\degree\\]\n\\end{itemize}\n\\end{frame}\n\\begin{frame}[label={sec:org3d52280}]{Stability margins}\n\\begin{center}\n\\includegraphics[width=0.38\\linewidth]{../../figures/implane-nyquist-margins}\n\\end{center}\n\\begin{itemize}\n\\item Phase-cross-over frequency: The frequency \\(\\omega_p\\) for which \\(\\arg G_o(i\\omega) = -180\\degree\\).\n\\item Gain margin: The gain \\(K=A\\) that would make the Nyquist curve of \\(KG_o(i\\omega)\\) go through the point \\(-1 + i0\\). This means that \\[ |G_o(i\\omega_p)| = \\frac{1}{A}. \\]\n\\end{itemize}\n\\end{frame}\n\n\n\n\n\\section{Freq-domain specs}\n\\label{sec:org1202c00}\n\\begin{frame}[label={sec:org9c619a7}]{How to achieve the frequency-domain specifications}\n\\[G_c(i\\omega) = \\frac{ G_o(i\\omega)}{1 + G_o(i\\omega)}\\]\n\n\\alert{Activity}\n\\begin{enumerate}\n\\item If \\(G_o(i\\omega_1) = -0.5\\) what is \\(|G_c(i\\omega_1)|\\)?\n\\item If \\(G_o(i\\omega_1) = -i\\) what is \\(|G_c(i\\omega_1)|\\)?\n\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org7b0788d}]{How to achieve the frequency-domain specifications}\n\\begin{columns}\n\\begin{column}{0.28\\columnwidth}\n\\[G_c(i\\omega) = \\frac{ G_o(i\\omega)}{1 + G_o(i\\omega)}\\]\n\n\\includegraphics[width=1.1\\linewidth]{../../figures/spec-bode-closed-loop-new}\n\nWhich of the Bode plots to the right shows the correct loop gain \\(G_o(i\\omega)\\)?\n\\end{column}\n\n\\begin{column}{0.72\\columnwidth}\n\\begin{center}\n\\includegraphics[width=1.02\\linewidth]{../../figures/spec-bode-open-loop-new}\n\\end{center}\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\n\n\n\n\\begin{frame}[label={sec:org8d81bce}]{Stability margins exercise}\n\\begin{center}\n  \\includegraphics[width=.6\\linewidth]{../../figures/bode-example-margin2.pdf}\n\\end{center}\n\n\\alert{Activity} Determine the cross-over frequency \\(\\omega_c\\), the phase cross-over frequency \\(\\omega_p\\), the phase margin and the amplitude margin. \n\\end{frame}
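\n\n% Added worked example (not in the original deck), using a generic third-order loop gain:\n\\begin{frame}{Worked example: margins of \\(G_o(s) = \\frac{1}{s(s+1)^2}\\)}\n\\[ \\arg G_o(i\\omega) = -90\\degree - 2\\arctan\\omega = -180\\degree \\quad \\Rightarrow \\quad \\omega_p = 1 \\; \\mathrm{rad/s} \\]\nAt this frequency \\(|G_o(i\\omega_p)| = \\frac{1}{1 \\cdot (\\sqrt{2})^2} = \\frac{1}{2}\\), so the gain margin is \\(A = 2\\).\n\\end{frame}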
\n\\end{document}", "meta": {"hexsha": "5dee9d92499488ad56f3191184b0193eeecb0984", "size": 8688, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classic-control/slides/frequency-response.tex", "max_stars_repo_name": "kjartan-at-tec/mr2025", "max_stars_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classic-control/slides/frequency-response.tex", "max_issues_repo_name": "kjartan-at-tec/mr2025", "max_issues_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classic-control/slides/frequency-response.tex", "max_forks_repo_name": "kjartan-at-tec/mr2025", "max_forks_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3122171946, "max_line_length": 341, "alphanum_fraction": 0.6832412523, "num_tokens": 3000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.7931059414036511, "lm_q1q2_score": 0.5773616630846202}}
{"text": "% $Id: ScriptedMath.tex,v 1.1 2008/01/31 18:04:17 dconway Exp $\n\\chapter{\\label{chapter:InlineMath}Inline Mathematics in GMAT}\n\\chapauthor{Darrel J. Conway}{Thinking Systems, Inc.}\n\nGMAT provides a flexible mechanism that lets users place both scalar and matrix computations into\nthe command sequence for a mission.  This mechanism is implemented in a set of classes described in\nthis chapter.\n\n\\section{Scripting GMAT Mathematics}\n\nMathematics in GMAT scripts follow the conventions established in MATLAB; an equation consists of an\nobject on the left side of an equals sign, with an equation on the right.  Equations can be entered\neither in script files, or using a panel on the graphical user interface.  Parentheses are used to\nset the precedence of operations when the normal precedence rules are not valid.\nTable~\\ref{table:operators} lists the operators implemented in GMAT. The table is arranged in order\nof operator precedence; operators higher in the table are evaluated before operators that appear\nlower in the table. Users can override this order through selective use of parentheses.\n\n\\begin{table}[tb]\n\n\\caption{\\label{table:operators}Operators and Operator Precedence in GMAT}\n\\begin{tabular}{|>{\\raggedright\\hspace{0pt}}p{1.15in}%\n                |>{\\raggedright\\hspace{0pt}}p{1.15in}%\n                |>{\\raggedright\\hspace{0pt}}p{1.5in}%\n                |>{\\raggedright\\hspace{0pt}}p{1.5in}|}\n\\hline\n  \\textbf{Operator or Function}& \\textbf{Implemented Cases}&\\textbf{Comments}& \\textbf{Example}\n\\tabularnewline\n\\hline \\hline\n%  Evaluate Parameters and object& Any GMAT parameter& & sat.Earth.SMA\n%\\tabularnewline \\hline\n  Evaluate Conversion Functions& DegToRad, RadToDeg& Converts between radians and\ndegrees&DegToRad(sat.RAAN)\n\\tabularnewline \\hline\n  Evaluate Matrix Operations& transpose and ', det, inv and \\textasciicircum{}(-1), norm& & mat',\ndet(mat)\n\\tabularnewline \\hline\n  Evaluate Math Functions& sin, cos, tan, asin, acos, atan, atan2, log, log10, exp, sqrt& Angles in\nthe trig functions are in radians& sin(DegToRad(sat.TA))\n\\tabularnewline \\hline\n  Exponentiation&\\textasciicircum{}& Powers are any real number& sin(radTA)\\textasciicircum{}0.5\n\\tabularnewline \\hline\n  Multiplication and Division& {*} /& & sat.RMAG / sat.SMA\n\\tabularnewline \\hline\n  Addition and Subtraction& + -& & sat.RAAN + sat.AOP\n\\tabularnewline \\hline\n\\end{tabular}\n\\end{table}\n\nMathematics in GMAT are scripted using the same syntax as assignments.  Three\nsamples of the scripting for the operations in Table~\\ref{table:operators} are\nprovided here to and discussed in the design presentation to help explain how\nGMAT manipulates its internal data structures to perform scripted mathematics.\n\n\\subsection*{Example 1: Basic Arithmetic}\nIn this simplest example, a user needs to write script to perform the\ncalculation of the longitude of periapsis,\n\n\\begin{equation}\\label{eq:mathLongPeri}\n\\Pi=\\Omega+\\omega\n\\end{equation}\n\n\\noindent for the spacecraft named sat.  
The scripting for this calculation is\nstraightforward:\n\n\\begin{quote}\n\\begin{verbatim}\nCreate Spacecraft sat;\nCreate Variable arg\nGMAT arg = sat.RAAN + sat.AOP\n\\end{verbatim}\n\\end{quote}\n\n\\subsection*{Example 2: More Complicated Expressions}\n\nThis snippet calculates the separation between two spacecraft, using the\nPythagorean theorem:\n\n\\begin{equation}\\label{eq:mathSatSep}\n\\Delta R = \\sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2 + (Z_1 - Z_2)^2}\n\\end{equation}\n\n\\noindent This is a useful example because, as we will see, it exercises the\nparser to ensure that operations are performed in the correct order.  The\nscript for this example is, again, pretty simple:\n\n\\begin{quote}\n\\begin{verbatim}\nCreate Spacecraft sat1, sat2;\nCreate Variable sep\nGMAT sep = sqrt((sat1.X-sat2.X)^2 + (sat1.Y-sat2.Y)^2 + (sat1.Z-sat2.Z)^2)\n\\end{verbatim}\n\\end{quote}\n\n\\subsection*{Example 3: Matrix Computations}\n\nThis final example is more complex, and exercises both operator ordering and\nmatrix computations to calculate a component of the analytic gradient of a\nfunction used in optimization.  This script snippet assumes that GMAT can\ncalculate the State Transition Matrix and provide users with access to the\ncorresponding 3x3 submatrices of it.  The scripting for that calculation is:\n\n\\begin{quote}\n\\begin{verbatim}\n% This script snippet uses the following definitions for pieces of the\n% State Transition Matrix (STM):\n%    Sat.Phi is a 6x6 matrix that is the spacecraft STM\n%    Sat.PhiA is the upper left 3x3 portion of the STM\n%    Sat.PhiB is the upper right 3x3 portion of the STM\n%    Sat.PhiC is the lower left 3x3 portion of the STM\n%    Sat.PhiD is the lower right 3x3 portion of the STM\n\nCreate Spacecraft Sat1, Sat2\nCreate Array Svec[3,1] Svecdot[3,1] S[1,1] dSdotdR[1,3]\n\nFor I = 1: 100\n   % Step the spacecraft\n   Propagate LowEarthProp(Sat1,Sat2);\n\n   % Calculate the relative position and velocity vectors\n   GMAT Svec(1,1) = Sat2.X - Sat1.X;\n   GMAT Svec(2,1) = Sat2.Y - Sat1.Y;\n   GMAT Svec(3,1) = Sat2.Z - Sat1.Z;\n   GMAT Svecdot(1,1) = Sat2.VX - Sat1.VX;\n   GMAT Svecdot(2,1) = Sat2.VY - Sat1.VY;\n   GMAT Svecdot(3,1) = Sat2.VZ - Sat1.VZ;\n\n   % Calculate range\n   GMAT S =  norm(Svec);\n\n   % Calculate the change in the range rate due to a change in the\n   % initial position of sat1\n   GMAT dSdotdR = 1/S*( Svecdot' - Svec'*Svecdot*Svec'/S^2 )*(- Sat1.PhiA )...\n                  + Svec'/S*(-Sat1.PhiC);\nEndFor;\n\\end{verbatim}\n\\end{quote}\n\n\\noindent The last expression here, dSdotdR, will be used in the design\ndiscussion.\n\n\\section{Design Overview}\n\nWhen GMAT encounters the last line of the first script snippet:\n\n\\begin{quote}\\begin{verbatim}\nGMAT arg = sat.RAAN + sat.AOP\n\\end{verbatim}\\end{quote}\n\n\\noindent it creates an assignment command that assigns the results of a calculation to the variable\nnamed arg. The right side of this expression -- the equation -- is converted into GMAT objects using\nan internal class in GMAT called the MathParser.  The MathParser sets up custom calculations by\nbreaking expressions -- like the ones scripted in the preceding section -- into a tree structure\nusing a recursive descent algorithm.  This decomposition is performed during script parsing when the\nuser is running from a script file, and during application of user interface updates if the user is\nconstructing the mathematics from the GMAT graphical user interface.  
GMAT stores the tree\nrepresentation of the mathematics in an internal object called the MathTree.  The MathTree is\npopulated with the objects used in the calculation during mission\ninitialization in the Sandbox.  The equation is evaluated when the associated Assignment command is\nexecuted by performing a depth-first traversal of the tree to obtain the desired results.  The\nalgorithms implemented here are extensions of the approach presented in chapter 40 of\n\\cite{schildt}.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/nodePlusAop.eps}\n\\caption{\\label{figure:longPeriapseTree}Tree View of the Longitude of Periapsis Calculation}\n\\end{center}\n\\end{figure}\n\nThe tree-based structure of the computations enforces the operator precedence rules tabulated above.\nIn this section the construction and evaluation of the trees for the examples are presented, and the\nclasses used in this process are introduced.  The sections that follow this overview present the\nclasses in a more systematic manner, discuss how the scripting is parsed to create the GMAT objects\nused in evaluation, and then tie these pieces together by discussing how the constructed objects\ninteract as a program executes.\n\nFigure~\\ref{figure:longPeriapseTree}\\footnote{In this figure and those that follow, the components\nthat can be evaluated into Real numbers are drawn on elongated octagons, and the operators are drawn\nin a circle or ellipse.  Matrices are denoted by a three-dimensional box.  Empty nodes are denoted\nby black circles, and numbers, by orange squares with rounded corners.} shows the tree generated for\nthe longitude of periapsis calculation scripted above.  This simplest example illustrates the layout\nof the tree in memory that results from a simple arithmetic expression. The GMAT MathParser class is\nfed the right side of the expression from the script -- in this case, that is the string \"sat.RAAN +\nsat.AOP\".  This string is passed to the recursive descent code, which breaks it into three pieces --\ntwo expressions that can be evaluated directly, and an operator that combines these expressions. \nThese pieces are stored in an internal class in GMAT called the MathTree.  The expressions\n\"sat.RAAN\" and \"sat.AOP\" are placed into the \"leaves\" of the tree, while the addition operator is\nplaced in the top, \"internal\" node.  The leaf nodes are all instances of a class named\n\"MathElement\", and the internal nodes, of classes derived from a class named \"MathFunction\".  When\nthe assignment command containing this construct is executed, each of the leaves of the tree is\nevaluated, and then combined using the code for the addition operator.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/satSep.eps}\n\\caption{\\label{figure:satelliteSeparationTree}Tree View of the Satellite\nSeparation Calculation}\n\\end{center}\n\\end{figure}\n\nThe second example, illustrated in Figure~\\ref{figure:satelliteSeparationTree},\nprovides a more illustrative example of the parsing and evaluation algorithms\nimplemented in GMAT.  
This tree illustrates the equation encoded in example 2:\n\n\\begin{quote}\\begin{verbatim}\nGMAT sep = sqrt((sat1.X-sat2.X)^2 + (sat1.Y-sat2.Y)^2 + (sat1.Z-sat2.Z)^2)\n\\end{verbatim}\\end{quote}\n\n\\noindent Each node in the MathTree can be one of three types: a\nfunction node, an operator node (both of these types are embodied in the\nMathFunction class), or an element node (in the MathElement class).  The\nelement nodes are restricted to being the leaf nodes of the tree; the internal\nnodes are all either function nodes or operator nodes.\n\nEach MathElement node consists of two separate pieces: a string containing the\ntext of the expression represented by the node, and either a pointer to the\nobject that embodies that expression or, for constants, a local member\ncontaining the value of the expression.  The pointer member is initially set to NULL when the\nMathElement node is constructed during script parsing.  When the\nscript is initialized in the GMAT Sandbox, these pointers are set to the\ncorresponding objects in the Sandbox's configuration.  Each time the assignment command associated\nwith the MathTree executes, an Evaluate() method is called\non the MathTree, as described below.\n\nThe function and operator nodes consist of several pieces as well.  Each of\nthese nodes contains subnode pointers that identify the input value or values\nneeded for the node evaluation, and a method that performs the actual\nmathematics involved in the evaluation.  The mathematical operations for each\nof these nodes are coded to work on either a scalar value or a matrix; the\nspecific rules of implementation are operator specific.\n\nThe Evaluate() method for the MathTree calls the Evaluate() method for the\ntopmost node of the tree.  This method call is evaluated recursively for all of the subnodes of the\ntree, starting at the top node.  The method checks to see\nif the node is a leaf node or an internal node.  If it is a leaf node, it is\nevaluated and the resulting value is returned to the object that called it.  If it is an internal\nnode, it evaluates its subnodes by calling Evaluate() first\non the left node, then on the right node.  Once these results are obtained,\nthey are combined using the mathematical algorithm coded for the node, and the\nresulting value is then returned to the calling object.
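\n\nThe control flow just described can be sketched in a few lines of C++.  This is\nonly an illustration of the recursive evaluation pattern -- the types below are\nhypothetical stand-ins, not GMAT source code:\n\n\\begin{quote}\\begin{verbatim}\n// Illustrative sketch of depth-first MathTree evaluation\nstruct Node {\n   virtual ~Node() {}\n   virtual double Evaluate() = 0;      // pure virtual, as in MathNode\n};\n\nstruct Element : Node {                // leaf node: a constant or object value\n   double value;\n   Element(double v) : value(v) {}\n   double Evaluate() { return value; }\n};\n\nstruct Add : Node {                    // internal node: an operator\n   Node *left, *right;\n   Add(Node *l, Node *r) : left(l), right(r) {}\n   double Evaluate() {\n      // evaluate the subnodes first (left, then right), then combine\n      return left->Evaluate() + right->Evaluate();\n   }\n};\n\n// sat.RAAN + sat.AOP with RAAN = 30.0 and AOP = 45.0 evaluates to 75.0:\n//    Node *top = new Add(new Element(30.0), new Element(45.0));\n//    double arg = top->Evaluate();\n\\end{verbatim}\\end{quote}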
\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/MatrixExample.eps}\n\\caption{\\label{figure:matrixMathTree}Tree View of the Matrix Calculation in\nExample 3}\n\\end{center}\n\\end{figure}\n\nFinally, the gradient component scripted in the third example:\n\n\\begin{quote}\\begin{verbatim}\nGMAT dSdotdR = 1/S*( Svecdot' - Svec'*Svecdot*Svec'/S^2 )*(- Sat1.PhiA )...\n               + Svec'/S*(-Sat1.PhiC);\n\\end{verbatim}\\end{quote}\n\n\\noindent produces Figure~\\ref{figure:matrixMathTree}.  Evaluation for this tree proceeds as\noutlined above, with a few variations.  Instead of calling the Evaluate() method for the nodes in\nthe tree, expressions that use matrices call the MatrixEvaluate method.  Another wrinkle introduced\nby the matrix nature of this example is that the internal nodes now have an additional requirement;\neach node needs to determine that the dimensionality of the subnodes is consistent with the\nrequested operations.  This consistency check is performed during initialization in the Sandbox,\nusing the ValidateInputs() method.\nMatrixEvaluate may perform additional checks during execution, so that\nsingularities in the computation can be flagged and brought to the attention of the user.\n\n\\section{Core Classes}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/MathClasses.eps}\n\\caption{\\label{figure:MathClasses}Classes Used to Implement GMAT Mathematics}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{figure:MathClasses} shows the class hierarchy implemented to\nperform the operations described above, along with some of the core members of\nthese classes.  The core classes used in GMAT to perform mathematical\noperations are shown in green in this figure, while the helper classes used to\nset up the binary tree structure are shown in orange.  The MathTree and its\nnodes are all owned by instances of the Assignment command, shown in yellow in\nthe figure.  Core GMAT classes are shaded in blue.  The main features of these\nclasses are shown here, and discussed in the following paragraphs.  At the end\nof this section, the principal elements of the base classes are collected for\nreference.\n\nThe MathTree class is the container for the tree describing the equation.  It\ncontains a pointer to the topmost node of the tree, along with methods used to\nmanipulate the tree during initialization and execution.  This class is used to provide the\ninterface between the tree and the Assignment command.\n\nEach node in a MathTree is derived from the MathNode class.  That base class\nprovides the structures and methods required by the MathTree to perform its\nfunctions.  There are two classes derived from the MathNode base: MathElement\nand MathFunction.  The MathElement class is used for leaf nodes, and can store\neither a numerical value, a matrix, or a GMAT object that evaluates to a\nfloating point number -- for example, a Parameter, or a real member of a core\nGMAT object.  MathFunction instances are used to implement mathematical\noperators and functions.  The left and right subnodes of these nodes contain\nthe function or operator operands.  Subnodes are evaluated before the operator\nis evaluated, producing results that are used when evaluating the function.\n\nThe MathNode base class contains two members that are used to check the\ncompatibility of operands during initialization.  The EvaluateInputs() method\nchecks the return dimensions of the subnodes of the node, and returns true if\neither the node is a MathElement or if the subnodes are compatible with the\ncurrent node's Evaluate() and MatrixEvaluate() methods.  The ReportOutputs()\nmethod is called on subnodes to obtain the dimensions of matrices returned from calls to\nMatrixEvaluate().  That method provides an interface used by the\nEvaluateInputs() method to perform its evaluation.\n\nOne additional item worth mentioning in the MathNode base class is the\nimplementation of the MatrixEvaluate() method.  The Evaluate() method is pure\nvirtual, and therefore not implemented in the base class.  MatrixEvaluate(), on the other hand, is\nimplemented to apply the Evaluate() method element by\nelement to the matrix members.  
In other words, the default MatrixEvaluate()\nmethod implements the algorithm\n\n\\begin{equation}\\label{eq:mathDefaultMatrixCalc}\nM_{ij} = Op(L_{ij},R_{ij})\n\\end{equation}\n\n\\noindent where $M_{ij}$ is the [i,j] element of the resultant, $L_{ij}$ is the\n[i,j] element of the left operand, and $R_{ij}$ is the [i,j] element of the\nright operand.  Most classes derived from the MathFunction class will override\nthis implementation.\n\nThe classes implementing mathematical operations are derived from the MathFunction class.\nFigure~\\ref{figure:MathClasses} shows some (but not all) of these derived classes.  Operators that\nhave a one-to-one functional correspondence with MATLAB operations are named identically to the\nMATLAB function.  That means that operators like the transpose operator will violate the GMAT naming\nconventions, at least for the string name assigned to the class, because the MATLAB operator is\nlowercase, ``transpose'', while the GMAT naming convention specifies that class names start with an\nupper case letter.\n\nOperations that can rely on the algorithm presented in equation~\\ref{eq:mathDefaultMatrixCalc} do\nnot need to implement the MatrixEvaluate() method; for the classes shown here, that means that Add,\nSubtract, sin, cos, and asin only need to implement the Evaluate() method, while Multiply, Divide,\ntranspose, norm and Invert need to implement both the Evaluate() and MatrixEvaluate() methods.\n\n\\subsection{MathTree and MathNode Class Hierarchy Summary}\n\nThis section describes the top-level classes in the MathTree subsystem, summarizing key features and\nproviding additional information about the class members.\n\n\\subsubsection{MathTree}\n\nA MathTree object is a container class used to help initialize and manage the tree representing an\nequation.  It standardizes the interface with the Assignment command and acts as the entry point for\nthe evaluation of an equation.  It is also instrumental in setting the object pointers on the tree\nduring initialization in the Sandbox.  Key members of this class are described below.\n\n\\subparagraph{\\textit{Class Attributes}}\n\\begin{itemize}\n\\item \\textbf{topNode}: A pointer to the topmost node in the MathTree.\n\\end{itemize}\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{Evaluate()}: Calls the Evaluate() method on the topNode and returns the value obtained\nfrom that call.\n\\item \\textbf{MatrixEvaluate()}: Calls the MatrixEvaluate() method on the topNode and returns the\nmatrix obtained from that call.\n\\item \\textbf{ReportOutputs(Integer \\&type, Integer \\&rowCount, Integer \\&colCount)}:  Calls\nReportOutputs(...) on the topNode and returns the data obtained in that call, so that the Assignment\ncommand can validate that the returned data is compatible with the object that receives the\ncalculated data (i.e. the object on the left side of the equation).\n\\item \\textbf{Initialize(std::map<std::string,GmatBase*> *objectMap)}: Initializes the data members\nin the MathTree by walking through the tree and setting all of the object pointers in the\nMathElement nodes.\n\\end{itemize}\n\n\\subsubsection{MathNode}\n\nMathNode is the base class for the nodes in a MathTree.  Each MathNode supports methods used to\ndetermine the return value from the node, either as a single Real number or as a matrix.  The\nMathNodes also provide methods used to test the validity of the calculation contained in the node\nand any subnodes that may exist.  
The core MathNode members are listed below.\n\n\\subparagraph{\\textit{Class Attributes}}\n\\begin{itemize}\n\\item \\textbf{realValue}: Used to store the most recent value calculated for the node.\n\\item \\textbf{matrix}: Used to store the most recent matrix data calculated for the node, when the\nnode is used for matrix calculations.\n\\end{itemize}\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{Evaluate()}: An abstract method that returns the value of the node.  For MathElements,\nthis method returns the current value of the element, either by evaluating a Parameter and returning\nthe value, accessing and returning an object's internal data, or returning a constant.  For\nMathFunctions, the Evaluate() method applies the function and returns the result.  If the encoded\nfunction cannot return a Real number, Evaluate() throws an exception.\n\\item \\textbf{MatrixEvaluate()}: Fills in a matrix with the requested data.  For MathFunction\nobjects, this method performs the calculation of the operation and fills in the matrix with the\nresults.  The default implementation uses equation~\\ref{eq:mathDefaultMatrixCalc} to fill in the\nmatrix element by element.  Operations that do not return matrix values, like norm and determinant,\nthrow exceptions when this method is called.  MathElements simply return the matrix associated with\nthe node.\n\\item \\textbf{EvaluateInputs()}: Checks the inputs to the node to be sure that they are compatible\nwith the calculation that is being performed.  For MathElement nodes, this method always returns\ntrue if the node was successfully initialized.  For MathFunction nodes, this method calls its\nsubnodes and checks to be sure that the subnodes return compatible data for the function.\n\\item \\textbf{ReportOutputs(Integer \\&type, Integer \\&rowCount, Integer \\&colCount)}:  This method\ntells the calling object the type and size of the calculation that is going to be performed by\nsetting values of the parameters used in the call.  The first parameter, `type', is set to indicate\nwhether the return value will be a matrix or a Real number.  `rowCount' and `colCount' are set to\nthe dimensions of the matrix if the return value is a matrix, or to 0 if the return value is scalar.\nThis method is used in the EvaluateInputs() method to determine the suitability of subnodes for a\ngiven calculation, and by the MathTree class to obtain the size of the answer returned from a\ncomplete calculation.\n\\end{itemize}\n\n\\subsubsection{MathElements}\n\nThe leaf nodes of a MathTree are all instances of the MathElement class.  The MathElement class acts\nas a wrapper for GMAT objects, using the methods defined in the GmatBase base class to set these\nreferenced objects up for the MathElement's use.  The GmatBase methods SetRefObject(),\nSetRefObjectName(), GetRefObject(), and GetRefObjectName() are overridden to set the internal data\nstructures in the node.  The other relevant members of this class are listed below.\n\n\\subparagraph{\\textit{Class Attributes}}\n\\begin{itemize}\n\\item \\textbf{refObjectName}: Holds the name of the GMAT object that is accessed by this node.\n\\item \\textbf{refObject}: A pointer to the referenced object.  
This pointer is set when the MathTree\nis initialized in the Sandbox.\n\\end{itemize}\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{SetRealValue(Real value)}: Sets the value of the node when it contains a constant.\n\\end{itemize}\n\n\\subsubsection{MathFunctions}\n\nThe internal nodes of a MathTree are all instances of classes derived from MathFunction.  This class\ncontains pointers to subnodes in the tree which are used to walk through the tree structure during\ninitialization and evaluation.  The relevant members are described below.\n\n\\subparagraph{\\textit{Class Attributes}}\n\\begin{itemize}\n\\item \\textbf{left}: A pointer to the left subnode used in the calculation.  MathFunctions that only\nrequire a right subnode leave this pointer in its default, NULL setting.\n\\item \\textbf{right}: A pointer to the right subnode used in the calculation.  MathFunctions that\nonly require a left subnode leave this pointer in its default, NULL setting.\n\\end{itemize}\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{SetChildren(MathNode *leftChild, MathNode *rightChild)}: Sets the pointers for the\nleft and right child nodes.  If a node is not going to be set, the corresponding parameter in the\ncall is set to NULL.\n\\item \\textbf{GetLeft()}: Returns the pointer to the left node.\n\\item \\textbf{GetRight()}: Returns the pointer to the right node.\n\\item \\textbf{Evaluate()}: In derived classes, this method is overridden to perform the mathematical\noperation represented by this node.\n\\item \\textbf{MatrixEvaluate()}: In derived classes that do not use the default matrix calculations\n(equation~\\ref{eq:mathDefaultMatrixCalc}), this method is overridden to perform the mathematical\noperation represented by this node.\n\\end{itemize}\n\n\\subsection{Helper Classes}\n\nThere are two classes that help configure a MathTree: MathParser and MathFactory.  In addition, the\nAssignment command acts as the interface between a MathTree and other objects in GMAT, and the\nModerator provides the object interfaces used to configure the tree.  This section sketches the\nactions taken by these components.\n\n\\subsubsection{MathParser}\n\nThe Interpreter subsystem (see Section~\\ref{section:InterpreterOverview}) in GMAT includes an\ninterface that can be used to obtain a MathParser object.  This object takes the right side of an\nequation, obtained from either the GMAT GUI or the ScriptInterpreter, and breaks it into a tree\nthat, when evaluated depth first, implements the equation represented by that string.  The\nMathParser uses the methods described below to perform this task.\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{Parse(const std::string \\&theEquation)}: Breaks apart the text representation of an\nequation and uses the component pieces to construct the MathTree.\n\\item \\textbf{CreateNode(const std::string \\&genString)}: Uses the generating string ``genString''\nto create a node for insertion into the MathTree.\n\\item \\textbf{Decompose(const std::string \\&composite)}: This method is the entry point to the\nrecursive descent algorithm.  It uses internal methods to take a string representing the right side\nof the equation and break it into the constituent nodes in the MathTree.  
The method returns the\ntopmost node of the MathTree, configured with all of the derived subnodes.\n\\end{itemize}\n\n\\subsubsection{MathFactory}\n\nThe MathFactory is a GMAT factory (see Chapter~\\ref{chapter:Factories}) that is used to construct\nMathNodes.  It has one method of interest here:\n\n\\subparagraph{\\textit{Methods}}\n\\begin{itemize}\n\\item \\textbf{CreateNode(const std::string \\&ofType)}: Creates a MathNode that implements the\noperation contained in the string.  If no such operator exists, the MathFactory creates a\nMathElement node and sets the reference object name on that node to the text of the `ofType' string.\n\\end{itemize}\n\n\\subsubsection{The Assignment Command and the Moderator}\n\nThe Assignment command is the container for the MathTree described in this chapter.  All GMAT\nequations are formatted with a receiving object on the left side of an equals sign, then the equals\nsign, and then the equation on the right.  When the interpreter system is configuring an Assignment\ncommand, it detects when the right side is an equation, and passes the string describing the\nequation into a MathParser.  That MathParser proceeds to parse the equation, making calls into the\nModerator when a new MathNode is required.  The Moderator accesses the MathFactories through the\nFactoryManager, and obtains MathNodes as required.  These nodes are not added to the Configuration\nManager, but they are returned to the MathParser for insertion into the current MathTree.  Once the\ntree is fully populated, it is returned to the Assignment command, completing the parsing of the\nexpression.\n\nWhen the Moderator is instructed to run a mission, it passes the configured objects into the\nSandbox, and then initializes the Sandbox.  The last step in Sandbox initialization is to initialize\nall of the commands in the mission sequence.  When one of these commands is an Assignment command\nthat includes a MathTree, that command initializes the MathTree after initializing all of its other\nelements, and then validates that the MathTree is compatible with the object on the left side of the\nequation.  If an error is encountered at this phase, the Assignment command throws an exception that\ndescribes the error and includes the text of the command that failed initialization.  If\ninitialization succeeds, the Moderator then tells the Sandbox to run the mission.  The Sandbox\nstarts at the first command in the mission sequence, and executes the command stream as described in\nChapter~\\ref{chapter:Commands}.\n\n\\section{Building the MathTree}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/MathParserTop.eps}\n\\caption{\\label{figure:MathParserTop}Control Flow for Parsing an Equation}\n\\end{center}\n\\end{figure}\n\nScripted mathematics are constructed using the MathParser class, which builds\nthe binary tree representing the equation that is evaluated by constructing\nnodes for the tree and placing these nodes into the tree one at a time.\nFigure~\\ref{figure:MathParserTop} shows the high-level control flow used to\ncreate the MathTree.  An empty MathTree is created, and then that tree is\npassed into the MathParser along with the string representation of the\nequation.  The MathParser takes the MathTree and populates it with MathNodes\nbased on the equation string.  
The top node of this completed tree is then\nreturned from the parser, and set on the assignment command for use during\nexecution of the mission.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.4]{Images/ParseRecursion.eps}\n\\caption{\\label{figure:ParserRecursion}Parser Recursion Sequence}\n\\end{center}\n\\end{figure}\n\nThe middle step in the process outlined in Figure~\\ref{figure:MathParserTop}\nencapsulates the recursive descent decomposition of the equation.\nFigure~\\ref{figure:ParserRecursion} provides a more detailed view of this\nalgorithm.  The InterpretAction method of the Assignment command determines\nthat the right side of the assignment is an equation, and then creates a\nMathTree and a MathParser to break this equation into the components needed for evaluation during\nexecution.  The MathTree and the equation string are passed\ninto the MathParser.\n\nThe MathParser takes the input string, and attempts to break it into three\npieces: an operator, a left element, and a right element.  Any of these three\npieces can be the empty string; if the operator string is empty, only the left\nstring contains data, denoting that the string is used to build a MathElement\nnode on one of the leaves of the MathTree.\n\nIf the operator string is not empty, the operator string is used to build a\nMathFunction node.  MathFunction nodes are used to perform all mathematical\noperations: basic math like addition, subtraction, multiplication, division,\nand exponentiation, along with unary negation and mathematical functions.  The\narguments of the MathFunction are contained in the left and right strings.\nThese strings are passed into the MathParser's Parse method for further\ndecomposition, and the process repeats until all of the strings have been\ndecomposed into operators and the MathElement leaf nodes.  If either string is\nempty, the corresponding child node on the MathFunction is set to NULL.\n\nOnce a leaf node has been constructed, that node is set as the left or right\nnode on the operator above it.  Once the left and right nodes are set on a\nMathFunction, that node is returned as a completed node to the calling method,\nterminating that branch of the recursion.  When the topmost node has its child\nnodes filled in, the MathParser returns from the recursion with the completed\nMathTree.
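\n\nThe decomposition into an operator string, a left string, and a right string can\nbe sketched as follows.  This is an illustration of the recursive descent\npattern only -- it handles just addition and subtraction at the top level, and\nthe helper functions are hypothetical, not GMAT source code:\n\n\\begin{quote}\\begin{verbatim}\n// Illustrative sketch of the recursive descent decomposition\n#include <string>\n\nstruct Node;\nNode *MakeElementNode(const std::string &text);             // hypothetical\nNode *MakeOperatorNode(char op, Node *left, Node *right);   // hypothetical\n\nNode *Parse(const std::string &expr)\n{\n   int depth = 0;\n   // scan for a low-precedence operator outside all parentheses\n   for (int i = (int)expr.size() - 1; i > 0; --i)\n   {\n      char c = expr[i];\n      if (c == ')') ++depth;\n      else if (c == '(') --depth;\n      else if (depth == 0 && (c == '+' || c == '-'))\n      {\n         // split the string and recurse on the two substrings\n         Node *left  = Parse(expr.substr(0, i));\n         Node *right = Parse(expr.substr(i + 1));\n         return MakeOperatorNode(c, left, right);\n      }\n   }\n   // no top-level operator: a leaf node (a full parser would also\n   // strip enclosing parentheses and handle the other operators here)\n   return MakeElementNode(expr);\n}\n\\end{verbatim}\\end{quote}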
\n\n\\section{Program Flow and Class Interactions}\n\nThe preceding section describes the construction of the MathTree that\nrepresents an equation. The parsing described above places the instances of the\nMathFunction nodes into the MathTree, along with the string names of the\nMathElement nodes.  The objects evaluated in the MathElement nodes are not\nplaced into the MathTree, because those elements depend on local objects in the\nGMAT Sandbox when a script is executed.  This section explains how those\nobjects are placed into the MathTree in the Sandbox, and then evaluated to\ncomplete a calculation for an Assignment command.\n\n\\subsection{Initialization}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/MathTreeInitialization.eps}\n\\caption{\\label{figure:MathInitialization}MathTree Initialization in the\nSandbox}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{figure:MathInitialization} shows the process of initialization of\nthe Command Sequence in the Sandbox, with a focus on the MathTree\ninitialization.  Section \\ref{section:SandboxInitialization} describes the\ngeneral initialization process in the Sandbox.  Sandbox initialization proceeds\nas described there, initializing the objects and then the command sequence.\nWhen the command in the sequence is an Assignment command containing in-line\nmathematics, the Assignment command performs the details shown here to\ninitialize the MathTree.  The command first accesses the top node of the\nMathTree.  If that node has subnodes, those subnodes are initialized\niteratively until a MathElement node is encountered.\n\nWhen a MathElement node is encountered, that node is queried for its referenced\nobject's name.  If the node returns a name, that object's pointer is accessed\nin the local object map owned by the Sandbox and set on the node using the\nSetRefObject() method.  If the reference object name is empty, the node is a\nnumerical constant, and no further initialization is required.\n\nWhen all of the subnodes of a MathFunction node have been initialized, that\nnode validates that the dimensionality of the operands is compatible with the\nmathematical operation represented by the node.  This validation is done by\ncalling the ReportOutputs() method on the child nodes and ensuring that the results\nare consistent with the requirements of the operation.  If the results\nare consistent, local variables are used to save data so that parent nodes to\nthe current node can obtain consistency data without recursing through the\nMathTree.  When the results are inconsistent with the operation, a warning\nmessage (which indicates the inconsistency of the calculation and the text of\nthe line that generated the MathTree) is posted to the user, and an internal\nflag is set to false, indicating that the calculation cannot be performed.\nThat flag is returned when the EvaluateInputs() method is called on the node.\nThis completes the initialization of the MathFunction node, and control is\nreturned to the node above the current node.\n\nWhen the topmost node in the MathTree finishes initialization, the MathTree\ncalls the EvaluateInputs() method for the top node.  If that call returns a\nfalse value, an exception is thrown and initialization terminates for the\nAssignment command.  When the call to EvaluateInputs() succeeds, the MathTree\nreports successful initialization to the Assignment command, which validates\nthat the result of the calculation is consistent with the object that will be\nreceiving the result, and, if so, returns a flag indicating that the\ncalculation initialized successfully.  If the resultant of the MathTree\ncalculation is determined to be inconsistent with the receiving object, an\nexception is thrown that contains the text of the line that generated the\nAssignment command, along with information about the error encountered.\n\n\\subsection{Execution}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{Images/MathTreeExecution.eps}\n\\caption{\\label{figure:MathTreeExecution}Evaluation of a MathTree Assignment}\n\\end{center}\n\\end{figure}\n\nThe task of evaluating a calculation is shown in Figure~\\ref{figure:MathTreeExecution}.  The\nAssignment command determines if a MathTree calculation is being performed by determining if the\nright side of the assignment (denoted RHS in the figure) is a MathTree.  If it is, the Assignment\ncommand checks to see if the result of the calculation should be a scalar value or a matrix by\ncalling ReportOutputs() on the MathTree.  If the result of this call indicates that the output is\none row by one column, the output from the calculation is scalar; otherwise, it is a matrix.  
The\ncorresponding Evaluate() method is called on the MathTree.\n\nThe MathTree Evaluate() methods behave identically in control flow; the\ndifference between Evaluate() and MatrixEvaluate() is in the return value of\nthe call.  Similarly, the MathNode Evaluate() and MatrixEvaluate() methods\nfollow identical control flow, differing only in return types.  When the\ncorrect Evaluate() method is called on the MathTree, the MathTree calls the\ncorresponding Evaluate() method on the topmost MathNode in the tree.\nEvaluation is then performed recursively on the nodes of the tree, as described\nhere.\n\nWhen an Evaluate() method is called on a node, the evaluation process proceeds\nbased on the type of node that owns the method.  If the node is a MathFunction\nnode, then it calls the corresponding Evaluate() method on each of its child\nnodes, evaluating the left node first, then the right node.  If one of those\nnodes is NULL, that phase of the evaluation is skipped.  This can occur when the\nmathematical operation only requires one operand -- for example, for most of\nthe trigonometric functions, or for unary matrix operations like the\ntranspose operation.  When the child node evaluation is complete, the returned\ndata from that evaluation are used as the operands for the mathematical\noperation.  The operation is performed, and the resulting data are passed to\nthe calling method.\n\nMathElement nodes are evaluated directly when encountered, and can return\neither a real number or a matrix of real numbers based on which method is\ncalled -- either Evaluate() for a Real, or MatrixEvaluate() for a matrix.  The\nresult of this evaluation is passed to the calling method.  Since all of the\nleaf nodes on a MathTree are MathElement nodes, these nodes terminate the\niteration through the tree.\n\nWhen the calculation iteration reaches the topmost node in the MathTree, the\noperation for that node is performed and the resulting data are returned to the Assignment command. \nThe Assignment command then sets the data on the GMAT\nobject designated on the left side of the statement, designated the LHS in the\nfigure.  
This completes the evaluation of the Assignment command.\n", "meta": {"hexsha": "e81d804577d27c47403b290d474559ebc365540b", "size": 37985, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/SystemDocs/ArchitecturalSpecification/ScriptedMath.tex", "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_issues_repo_path": "doc/SystemDocs/ArchitecturalSpecification/ScriptedMath.tex", "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_licenses": ["NASA-1.3"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_forks_repo_path": "doc/SystemDocs/ArchitecturalSpecification/ScriptedMath.tex", "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": ["NASA-1.3"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "avg_line_length": 54.2642857143, "max_line_length": 100, "alphanum_fraction": 0.7953402659, "num_tokens": 8830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5773616608432561}}
{"text": " \\usepackage{amsmath}       % I think this gives me some symbols\n \\usepackage{amsthm}        % Does theorem stuff\n \\usepackage{amssymb}       % more symbols and fonts\n \\usepackage{mathtools}     % More math macros\n \\usepackage{empheq}        % Some more extensible arrows, like \\xmapsto\n  \\usepackage{bbm}\n %\\usepackage{mbboard} % greek bb math characters, etc.\n% \\usepackage{mathabx}\n \\usepackage{mathrsfs}      % Sheafy font \\mathscr{}\n% \\usepackage{picinpar}      % for pictures in paragraphs\n\n\\usepackage{enumerate} %% To customize enumeration.\n%\\usepackage{pdfsync}\n\\usepackage{bbold}\n\\usepackage[symbol]{footmisc}\n\n%%% Tikz\n\\usepackage{tikz}\n%\\usetikzlibrary[snakes]\n\\usetikzlibrary[shapes]\n\\usepgflibrary[arrows]\n\\usetikzlibrary{matrix}\n\\usepackage{tikz-cd}\n\n\\usepackage{mdframed}\n\n% Stuff to import svg graphics\n\\usepackage{import}\n\\usepackage{xifthen}\n\\usepackage{pdfpages}\n\\usepackage{transparent}\n\\usepackage{calc}\n\n\\newcommand*{\\incfig}[2][1]{%\n    \\def\\svgscale{#1}\n    \\import{./images/}{#2.pdf_tex}\n}\n\n% \\usepackage{pstricks}      % PSTricks!\n% \\usepackage[active]{srcltx}\n% \\usepackage{xcolor}        % for colors (links)\n% \\usepackage[all]{xy}       % Include XY-pic\n    %\\SelectTips{cm}{10}     % Use the nicer arrowheads\n    %\\everyxy={<2.5em,0em>:} % Sets the scale I like\n \\usepackage[colorlinks,\n             linkcolor=black,\n             bookmarksnumbered=true]{hyperref}\n\n%%%%%%% Pagestyle stuff %%%%%%%%%%%%%%%%%%%\n \\usepackage{fancyhdr}                   %%\n   \\pagestyle{fancy}                     %%\n   \\fancyhf{} %delete the current section for header and footer\n \\usepackage[paperheight=11in,%          %%\n             paperwidth=8.5in,%          %%\n             outer=1.2in,%               %%\n             inner=1.2in,%               %%\n             bottom=.7in,%               %%\n             top=.7in,%                  %%\n             includeheadfoot]{geometry}  %%\n   \\addtolength{\\headwidth}{.75in}       %%\n   \\fancyhead[RO,LE]{\\thepage}           %%\n   \\fancyhead[RE,LO]{\\sectionname}       %%\n   \\setlength{\\headheight}{15.8pt}       %%\n   \\raggedbottom                         %%\n%%%%%%% End Pagestyle stuff %%%%%%%%%%%%%%%\n\n% Bibliography related\n\\usepackage[\n            hyperref=true,\n            url=false,\n            isbn=false,\n            backref=true,\n            style=alphabetic,\n            block=none\n]{biblatex}\n\\addbibresource{references.bib}\n\n\n%% Include git info\n\\usepackage[mark]{gitinfo2}\n\\renewcommand{\\gitMark}{Commit hash: \\gitAbbrevHash{} (\\gitAuthorDate)}\n\n%%%%%%% Stuff for keeping track of sections %%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \\newcommand{\\sektion}[1]{\\newpage\\section{#1}%                       %%\n                          \\gdef\\sectionname{\\thesection\\quad #1}}     %%\n \\newcommand{\\subsektion}[1]{\\subsection*{#1}%                        %%\n                         \\addcontentsline{toc}{subsection}{#1}}       %%\n %This is the empty section title, before any section title is set    %%\n \\newcommand\\sectionname{}                                            %%\n%%%%%%% End stuff for keeping track of sections %%%%%%%%%%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%% Theorem Styles and Counters %%%%%%%%%%%%%%%%%%%%%%%%%%\n\\newcounter{homework}\n\\newcounter{project}\n\n \\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}         %%\n  \\renewcommand{\\thehomework}{\\thesection.\\arabic{homework}}\n   
\\renewcommand{\\theproject}{\\thesection.\\arabic{project}}\n \\makeatletter                                                      %%\n    \\@addtoreset{equation}{section} % Make the equation counter reset each section\n    \\@addtoreset{footnote}{section} % Make the footnote counter reset each section\n    \\@addtoreset{homework}{section} % Make the homework counter reset each section\n    \\@addtoreset{project}{section} % Make the project counter reset each section\n                                                                    %%\n \\newenvironment{warning}[1][]{%                                    %%\n    \\begin{trivlist} \\item[] \\noindent%                             %%\n    \\begingroup\\hangindent=2pc\\hangafter=-2                         %%\n    \\clubpenalty=10000%                                             %%\n    \\hbox to0pt{\\hskip-\\hangindent\\manfntsymbol{127}\\hfill}\\ignorespaces%\n    \\refstepcounter{equation}\\textbf{Warning~\\theequation}%         %%\n    \\@ifnotempty{#1}{\\the\\thm@notefont \\ (#1)}\\textbf{.}            %%\n    \\let\\p@@r=\\par \\def\\p@r{\\p@@r \\hangindent=0pc} \\let\\par=\\p@r}%  %%\n    {\\hspace*{\\fill}$\\lrcorner$\\endgraf\\endgroup\\end{trivlist}}     %%\n\n\n                                                                    %%\n\\newenvironment{exercise}{\n        \\refstepcounter{homework}\n        \\begin{trivlist}%                        %%\n    \\item{\\bf Exercise~\\thehomework.}}{\\end{trivlist}}\n\n \\newenvironment{solution}{\\begin{trivlist}%                        %%\n    \\item{\\it Solution:}}{\\end{trivlist}}                           %%\n                                                                    %%\n    \\newenvironment{project}{\n        \\refstepcounter{project}\n        \\begin{trivlist}%                        %%\n    \\item{\\bf Project~\\thehomework.}}{\\end{trivlist}}\n\n \\def\\newprooflikeenvironment#1#2#3#4{%                             %%\n      \\newenvironment{#1}[1][]{%                                    %%\n          \\refstepcounter{equation}                                 %%\n          \\begin{proof}[{\\rm\\csname#4\\endcsname{#2~\\theequation}%   %%\n          \\@ifnotempty{##1}{\\the\\thm@notefont \\ (##1)}\\csname#4\\endcsname{.}}]%%\n          \\def\\qedsymbol{#3}}%                                      %%\n         {\\end{proof}}}                                             %%\n \\makeatother                                                       %%\n                                                                    %%\n\n\n                       %%\n \\theoremstyle{plain}               %%%%% Theorem-like commands\n \\newtheorem{theorem}[equation]{Theorem}                            %%\n% \\newtheorem*{claim}{Claim}                                         %%\n \\newtheorem*{lemma*}{Lemma}                                        %%\n \\newtheorem*{theorem*}{Theorem}                                    %%\n \\newtheorem{lemma}[equation]{Lemma}                                %%\n\\newtheorem{corollary}[equation]{Corollary}                        %%\n \\newtheorem{proposition}[equation]{Proposition}                    %%\n  \\newtheorem{conjecture}[equation]{Conjecture}                    %%\n  \\newtheorem{property}[equation]{Property}                                %%\n\n \\theoremstyle{definition} %%%% Definition-like Commands\n  
\\newtheorem{fact}[equation]{Fact}\n%\\newtheorem{definition}[equation]{Definition}\n%\\newtheorem{example}[equation]{Example}\n%\\newtheorem{excercise}[equation]{Exercise}\n\n\n \\newprooflikeenvironment{definition}{Definition}{$\\diamond$}{textbf}%\n \\newprooflikeenvironment{example}{Example}{$\\diamond$}{textbf}     %%\n \\newprooflikeenvironment{remark}{Remark}{$\\diamond$}{textbf}       %%\n  %\\newprooflikeenvironment{homework}{Homework}{$\\diamond$}{textbf}\n \\newprooflikeenvironment{digression}{Digression}{$\\diamond$}{textbf}       %%\n \\newprooflikeenvironment{claim}{Claim}{$\\diamond$}{textbf}%\n\n\n\\theoremstyle{remark} %%%%%% Remark-Like Commands\n% \\newtheorem{remark}[equation]{Remark}\n \\newtheorem{question}[equation]{Question}\n  \\newtheorem{idea}[equation]{Idea}\n  %\\newtheorem{solution}[equation]{Solution}\n\n\n%  \\newtheorem{digression}[equation]{Digression}\n %\\newtheorem{warning}[equation]{Warning}\n\n%%%%%%%%%%% End Theorem Styles and Counters %%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n%%% These three lines load and resize a caligraphic font %%%%%%%%%\n%%% which I use whenever I want lowercase \\mathcal %%%%%%%%%%%%%%%\n \\DeclareFontFamily{OT1}{pzc}{}                                 %%\n \\DeclareFontShape{OT1}{pzc}{m}{it}{<-> s * [1.100] pzcmi7t}{}  %%\n \\DeclareMathAlphabet{\\mathpzc}{OT1}{pzc}{m}{it}                %%\n                                                                %%\n%%% and this is manfnt; used to produce the warning symbol %%%%%%%\n \\DeclareFontFamily{U}{manual}{}                                %%\n \\DeclareFontShape{U}{manual}{m}{n}{ <->  manfnt }{}            %%\n \\newcommand{\\manfntsymbol}[1]{%                                %%\n    {\\fontencoding{U}\\fontfamily{manual}\\selectfont\\symbol{#1}}}%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n%%%%%%% TikZ Commands and Macros %%%%%%%%%%%%%\n\n%%%% These draw triple or quadruple set of arrows of length 0.5 cm\n\\DeclareMathOperator{\\righttriplearrows} {{\\; \\tikz{ \\foreach \\y in {0, 0.1, 0.2} { \\draw [-stealth] (0, \\y) -- +(0.5, 0);}} \\; }}\n\\DeclareMathOperator{\\lefttriplearrows} {{\\; \\tikz{ \\foreach \\y in {0, 0.1, 0.2} { \\draw [stealth-] (0, \\y) -- +(0.5, 0);}} \\; }}\n\\DeclareMathOperator{\\rightquadarrows} {{\\; \\tikz{ \\foreach \\y in {0, 0.1, 0.2, 0.3} { \\draw [-stealth] (0, \\y) -- +(0.5, 0);}} \\; }}\n\\DeclareMathOperator{\\leftquadarrows} {{\\; \\tikz{ \\foreach \\y in {0, 0.1, 0.2, 0.3} { \\draw [stealth-] (0, \\y) -- +(0.5, 0);}} \\; }}\n\n\n\n%%%%%%% End TikZ Commands and Macros %%%%%%%%%%%%%\n\n\n\n%%%%%%%%%%%%%%%% Macros %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\newcommand{\\confused}[1]{[[\\ensuremath{\\bigstar\\bigstar\\bigstar} #1]]}  %%% Three Eye-Catching Stars\n\\renewcommand{\\labelitemi}{--}  % changes the default bullet in itemize\n\n\\newcommand{\\mcg}[1][g]{\\mathrm{Mod}(#1)}\n\\newcommand{\\mcgb}[1][1]{\\mathrm{Mod}^{#1}}\n\n\\newcommand{\\twobar}{/\\kern-0.2em/}\n\n\\DeclareMathOperator{\\RR}{\\mathbb{R}}\n\\newcommand{\\dd}{\\mathrm{\\,d}}\n% \\DeclareMathOperator{\\dim}{\\mathrm{dim}}\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\n\\newcommand{\\wt}[1]{\\widetilde{#1}}\n\n% \\newcommand{\\tg}[1][g]{\\mathcal{T}_{#1}}\n% \\newcommand{\\mg}[1][g]{\\mathcal{M}_{#1}}\n% \\newcommand{\\sg}[1][g]{S_{#1}}\n% \\DeclareMathOperator{\\out}{\\mathrm{Out}}\n% \\DeclareMathOperator{\\fg}{\\pi_1}\n% \\DeclareMathOperator{\\Sp}{\\mathrm{Sp}}\n% \\DeclareMathOperator{\\pgl}{\\mathrm{PGL}}\n\n% \\let\\hom\\relax % 
kills the old hom, which is lowercase\n% \\DeclareMathOperator{\\hom}{\\mathrm{Hom}}\n\n% \\DeclareMathOperator{\\diff}{\\mathrm{Diff}}\n% \\DeclareMathOperator{\\homeo}{\\mathrm{Homeo}}\n% \\DeclareMathOperator{\\cp}{\\mathbb{CP}}\n% \\DeclareMathOperator{\\gr}{\\mathrm{Gr}}\n% \\DeclareMathOperator{\\hilb}{{\\sf Hilb}}\n% \\DeclareMathOperator{\\arc}{\\mathcal{A}}\n% \\DeclareMathOperator{\\curve}{\\mathcal{Z}}\n% \\DeclareMathOperator{\\Proj}{\\mathcal{P}\\kern-.125em\\mathpzc{roj}}\n%  \\DeclareMathOperator{\\ad}{ad}\n%  \\DeclareMathOperator{\\ann}{ann}\n%  \\DeclareMathOperator{\\aut}{\\underline{Aut}}\n%  \\DeclareMathOperator{\\Aut}{Aut}\n%  \\newcommand{\\bbar}[1]{\\setbox0=\\hbox{$#1$}\\dimen0=.2\\ht0 \\kern\\dimen0 \\overline{\\kern-\\dimen0 #1}}\n%   \\DeclareMathOperator{\\ber}{\\textrm{Ber}}\n%  \\DeclareMathOperator{\\coker}{coker}\n%   \\DeclareMathOperator*{\\colim}{colim}\n%   \\def\\dbar{{\\slash\\mkern-12muD}}  % This makes a Dirac \"D\" with a slash through it.\n%   \\DeclareMathOperator{\\dash}{{\\textrm{-}}}  % shortcut for a normal text dash mark -\n%  \\DeclareMathOperator{\\diff}{Diff}\n%  \\newcommand{\\e}{\\varepsilon}\n%   \\DeclareMathOperator{\\ext}{Ext}\n%  \\DeclareMathOperator{\\End}{\\ensuremath{\\mathcal{E}\\kern-.125em\\mathpzc{nd}}}\n%  \\DeclareMathOperator{\\fun}{Fun}\n%  \\newcommand{\\Ga}{\\Gamma}\n%   \\DeclareMathOperator{\\hocolim}{hocolim}\n%   \\DeclareMathOperator{\\holim}{holim}\n%  \\let\\hom\\relax % kills the old hom, which is lowercase\n%  \\DeclareMathOperator{\\hom}{Hom}\n%  \\DeclareMathOperator{\\Hom}{\\mathcal{H}\\kern-.125em\\mathpzc{om}}\n%  \\DeclareMathOperator{\\HOM}{HOM}\n\n%  \\DeclareMathOperator{\\id}{id}\n%  \\DeclareMathOperator{\\im}{im}\n%  \\DeclareMathOperator{\\isom}{\\underline{Isom}}\n%  \\newcommand{\\liset}{\\text{\\it lis-et}}\n%  \\newcommand{\\Liset}{\\text{\\it Lis-Et}}\n%   \\DeclareMathOperator{\\map}{Map}\n%  \\DeclareMathOperator{\\mfg}{{\\mathcal{M}_{FG}}}\n%  \\DeclareMathOperator{\\mfgs}{{\\mathcal{M}_{FG}^s}}\n%  \\renewcommand\\mod{\\textrm{\\sf Mod}}\n%  \\DeclareMathOperator{\\nat}{Nat}\n%  \\DeclareMathOperator{\\Nat}{NAT}\n%  \\newcommand{\\Om}{\\Omega}\n%  \\newcommand{\\pb}{\\rule{.4pt}{5.4pt}\\rule[5pt]{5pt}{.4pt}\\llap{$\\cdot$\\hspace{1pt}}}\n%  \\newcommand{\\po}{\\rule{5pt}{.4pt}\\rule{.4pt}{5.4pt}\\llap{$\\cdot$\\hspace{1pt}}}\n%  \\DeclareMathOperator{\\proj}{Proj}\n%  \\DeclareMathOperator{\\quot}{Quot}\n%   \\DeclareMathOperator{\\RP}{{\\mathbb{RP}}}\n%     \\DeclareMathOperator{\\sdim}{\\textrm{sdim}}\n%  \\renewcommand{\\setminus}{\\smallsetminus}\n%  \\DeclareMathOperator{\\spec}{Spec}\n%   \\DeclareMathOperator{\\spf}{Spf}\n\n%  \\DeclareMathOperator{\\Spec}{\\mathcal{S}\\!\\mathpzc{pec}}\n%   \\DeclareMathOperator{\\str}{\\textrm{str}}\n%  \\DeclareMathOperator{\\supp}{Supp}\n%   \\DeclareMathOperator{\\sym}{Sym}\n%  \\DeclareMathOperator{\\tot}{Tot}\n%  \\DeclareMathOperator{\\Tot}{\\ensuremath{\\mathpzc{Tot}}}\n%   \\DeclareMathOperator{\\tr}{tr}\n%  \\newcommand{\\udot}{\\ensuremath{{\\lower .183333em \\hbox{\\LARGE \\kern -.05em$\\cdot$}}}}\n%  \\newcommand{\\uudot}{{\\ensuremath{\\!\\mbox{\\large $\\cdot$}\\!}}}\n%  \\DeclareMathOperator{\\uhom}{\\underline{Hom}}\n\n\n% % Categories\n%  \\DeclareMathOperator{\\ab}\t{\\sf Ab}\n%  \\newcommand{\\Ab}{\\text{\\sf{Ab}}}\n%   \\DeclareMathOperator{\\alg}\t{{\\sf Alg}}\n%  \\DeclareMathOperator{\\algsp}\t{\\sf AlgSp}\n%  \\DeclareMathOperator{\\aff}{\t{\\sf Aff}}\n%  \\DeclareMathOperator{\\bibun}\t{{\\sf Bibun}}\n%   
\\DeclareMathOperator{\\bimod}\t{{\\sf Bimod}}\n\n%     \\DeclareMathOperator{\\bord}{{\\mathcal{B}ord}}\n\n\n%  \\DeclareMathOperator{\\cat}\t{\\sf Cat}\n%  \\newcommand{\\Cat}{\\text{\\sf{Cat}}}\n%  \\DeclareMathOperator{\\coh}\t{\\sf Coh}\n%   \\DeclareMathOperator{\\comm}{{\\sf Comm}}\n%   \\DeclareMathOperator{\\CG}\t{{\\sf CptGen}}\n% \\newcommand{\\CP}{\\mathbb{CP}}\n\n\n%  \\DeclareMathOperator{\\daff}\t{{\\sf dAff}}\n%  \\DeclareMathOperator{\\dset}\t{{\\sf dSet}}\n%  \\newcommand{\\Fun}{\\text{\\sf{Fun}}}\n%   \\DeclareMathOperator{\\gpd}\t{{\\sf Gpd}}\n% \\DeclareMathOperator{\\gpoid}\t{\\sf Gpoid}\n% \\DeclareMathOperator{\\group}\t{\\sf Group}\n% \\newcommand{\\Grpd}{\\text{\\sf{Grpd}}}\n% \\DeclareMathOperator{\\Haus}\t{{\\sf Haus}}\n%   \\DeclareMathOperator{\\hoKan}{\\sf{ hoKan}}\n%     \\DeclareMathOperator{\\hotop}{\\sf{ hoTop}}\n\n%  \\DeclareMathOperator{\\Kan}\t{{\\sf Kan}}\n%   \\DeclareMathOperator{\\Lie}\t{{\\sf Lie}}\n%   \\DeclareMathOperator{\\man}\t{\\sf Man}\n%     \\DeclareMathOperator{\\Mod}\t{{\\text{-}\\sf Mod}}\n\n%      \\DeclareMathOperator{\\nbord}{{n\\mathcal{B}ord}}\n\n%  \\DeclareMathOperator{\\oper}\t{{\\sf Operad}}\n%  \\DeclareMathOperator{\\pdset}\t{{\\sf pdSet}}\n%   \\DeclareMathOperator{\\poset}\t{{\\sf Poset}}\n%  \\newcommand{\\Pre}{\\text{\\sf{Pre}}}\n%   \\DeclareMathOperator{\\qcat}\t{{\\sf QCat}}\n%  \\DeclareMathOperator{\\qco}\t{\\sf Qcoh}\n%  \\DeclareMathOperator{\\salg}\t{\\sf SAlg}\n%  \\DeclareMathOperator{\\sch}\t{\\sf Sch}\n%  \\DeclareMathOperator{\\scomm}{{\\sf sComm}}\n%  \\newcommand{\\ssD}{\\text{\\sf{sD}}}\n%  \\DeclareMathOperator{\\set}\t{\\sf Set}\n%   \\DeclareMathOperator{\\sh}\t{\\sf Sh}\n%     \\DeclareMathOperator{\\sKan}{\\sf{ sKan}}\n%   \\DeclareMathOperator{\\slie}\t{\\sf SLieGroup}\n%  \\DeclareMathOperator{\\sman}\t{\\sf SMan}\n%  \\newcommand{\\Spaces}{\\text{\\sf{Spaces}}}\n%  \\DeclareMathOperator{\\Sp}\t{{\\sf Spectra}}\n%    \\DeclareMathOperator{\\sset}\t{{\\sf sSet}}\n%      \\DeclareMathOperator{\\scat}{{\\sf sSet-Cat}}\n%         \\DeclareMathOperator{\\sTop}\t{{\\sf sTop}}\n%      \\DeclareMathOperator{\\stopab}\t{{\\sf sTopAb}}\n%       \\DeclareMathOperator{\\strat} {{\\sf Strat}}\n%  \\DeclareMathOperator{\\strattop} {{\\sf StrTop}}\n% \\DeclareMathOperator{\\svect}\t{\\sf SVect}\n%  \\DeclareMathOperator{\\Top}\t{\\sf Top}\n%  \\DeclareMathOperator{\\topab}\t{\\sf TopAb}\n%  \\DeclareMathOperator{\\topoi}\t{\\sf Topoi}\n%  \\DeclareMathOperator{\\tors}\t{\\sf Tors}\n%   \\DeclareMathOperator{\\uTop}\t{{\\sf \\underline{To}p}}\n%  \\DeclareMathOperator{\\vect}\t{\\sf Vect}\n%  \\DeclareMathOperator{\\Vect}\t{{\\sf Vect}}\n\n\n% Letter Short Cuts\n\n \\newcommand{\\cA}{\\mathcal{A}}\n \\newcommand{\\cB}{\\mathcal{B}}\n \\newcommand{\\cC}{\\mathcal{C}}\n \\newcommand{\\cD}{\\mathcal{D}}\n \\newcommand{\\cE}{\\mathcal{E}}\n \\newcommand{\\cF}{\\mathcal{F}}\n \\newcommand{\\cG}{\\mathcal{G}}\n \\newcommand{\\cH}{\\mathcal{H}}\n \\newcommand{\\cI}{\\mathcal{I}}\n \\newcommand{\\cJ}{\\mathcal{J}}\n \\newcommand{\\cK}{\\mathcal{K}}\n \\newcommand{\\cL}{\\mathcal{L}}\n \\newcommand{\\cM}{\\mathcal{M}}\n \\newcommand{\\cN}{\\mathcal{N}}\n \\newcommand{\\cO}{\\mathcal{O}}\n\\newcommand{\\cP}{\\mathcal{P}}\n\\newcommand{\\cQ}{\\mathcal{Q}}\n\\newcommand{\\cR}{\\mathcal{R}}\n\\newcommand{\\cS}{\\mathcal{S}}\n \\newcommand{\\cT}{\\mathcal{T}}\n \\newcommand{\\cU}{\\mathcal{U}}\n \\newcommand{\\cV}{\\mathcal{V}}\n \\newcommand{\\cW}{\\mathcal{W}}\n \\newcommand{\\cX}{\\mathcal{X}}\n 
\\newcommand{\\cY}{\\mathcal{Y}}\n \\newcommand{\\cZ}{\\mathcal{Z}}\n\n\n \\newcommand{\\A}{\\mathbb{A}}\n \\newcommand{\\B}{\\mathbb{B}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\D}{\\mathbb{D}}\n \\newcommand{\\E}{\\mathbb{E}}\n \\newcommand{\\F}{\\mathbb{F}}\n\\newcommand{\\G}{\\mathbb{G}}\n \\renewcommand{\\H}{\\mathbb{H}} % old \\H{x} is an x with a weird umlaut in text mode\n\\newcommand{\\I}{\\mathbb{I}}\n \\newcommand{\\J}{\\mathbb{J}}\n \\newcommand{\\K}{\\mathbb{K}}\n \\renewcommand{\\L}{\\mathbb{L}}\n \\newcommand{\\M}{\\mathbb{M}}\n \\newcommand{\\N}{\\mathbb{N}}\n \\renewcommand{\\O}{\\mathbb{O}}\n \\renewcommand{\\P}{\\mathbb{P}}\n \\newcommand{\\Q}{\\mathbb{Q}}\n \\newcommand{\\R}{\\mathbb{R}}\n\\renewcommand{\\S}{\\mathbb{S}}\n \\newcommand{\\T}{\\mathbb{T}}\n \\newcommand{\\U}{\\mathbb{U}}\n \\newcommand{\\V}{\\mathbb{V}}\n \\newcommand{\\W}{\\mathbb{W}}\n\\newcommand{\\X}{\\mathbb{X}}\n\\newcommand{\\Y}{\\mathbb{Y}}\n  \\newcommand{\\Z}{\\mathbb{Z}}\n\n\n  \\newcommand{\\sA}{\\mathsf{A}}\n \\newcommand{\\sB}{\\mathsf{B}}\n \\newcommand{\\sC}{\\mathsf{C}}\n \\newcommand{\\sD}{\\mathsf{D}}\n \\newcommand{\\sE}{\\mathsf{E}}\n \\newcommand{\\sF}{\\mathsf{F}}\n \\newcommand{\\sG}{\\mathsf{G}}\n \\newcommand{\\sH}{\\mathsf{H}}\n \\newcommand{\\sI}{\\mathsf{I}}\n \\newcommand{\\sJ}{\\mathsf{J}}\n \\newcommand{\\sK}{\\mathsf{K}}\n \\newcommand{\\sL}{\\mathsf{L}}\n \\newcommand{\\sM}{\\mathsf{M}}\n \\newcommand{\\sN}{\\mathsf{N}}\n \\newcommand{\\sO}{\\mathsf{O}}\n\\newcommand{\\sP}{\\mathsf{P}}\n\\newcommand{\\sQ}{\\mathsf{Q}}\n\\newcommand{\\sR}{\\mathsf{R}}\n\\newcommand{\\sS}{\\mathsf{S}}\n \\newcommand{\\sT}{\\mathsf{T}}\n \\newcommand{\\sU}{\\mathsf{U}}\n \\newcommand{\\sV}{\\mathsf{V}}\n \\newcommand{\\sW}{\\mathsf{W}}\n \\newcommand{\\sX}{\\mathsf{X}}\n \\newcommand{\\sY}{\\mathsf{Y}}\n \\newcommand{\\sZ}{\\mathsf{Z}}\n\n\n   \\newcommand{\\cl}{\\mathfrak{cl}}\n  \\newcommand{\\g}{\\mathfrak{g}}\n  \\newcommand{\\h}{\\mathfrak{h}}\n%    \\newcommand{\\l}{\\mathfrak{l}}\n \\newcommand{\\m}{\\mathfrak{m}}\n \\newcommand{\\n}{\\mathfrak{n}}\n  \\newcommand{\\p}{\\mathfrak{p}}\n  \\newcommand{\\q}{\\mathfrak{q}}\n \\renewcommand{\\r}{\\mathfrak{r}} % old \\r{x} is a little circle over x in text mode\n\n\n\n\n \\newcommand{\\w}{\\omega}\n \\newcommand{\\Dx}{\\; \\mathcal{D}x}\n\n\n\n\n\n\n\n\n\n\\setlength{\\marginparwidth}{.8in}\n\\definecolor{MyBlue}{rgb}{.1,0.7,1.3}\n%\\begin{center}\n%{\\color{MyDarkBlue}This color is MyDarkBlue}\n%\\end{center}\n\\newcommand{\\notetoself}[1]{\\marginpar{\\tiny\\color{MyBlue}{ #1}}}\n\n\n%%%%%%%%%%%% End Macros %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n \\openout0=lastupdated.html\n \\write0{\\today}\n \\closeout0\n", "meta": {"hexsha": "33a95aaaf6d805d6f4de342740b4dae155925792", "size": 18745, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CourseNotesPreamble.tex", "max_stars_repo_name": "sayantangkhan/hdp-course-notes", "max_stars_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CourseNotesPreamble.tex", "max_issues_repo_name": "sayantangkhan/hdp-course-notes", "max_issues_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"CourseNotesPreamble.tex", "max_forks_repo_name": "sayantangkhan/hdp-course-notes", "max_forks_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7923387097, "max_line_length": 133, "alphanum_fraction": 0.5891704455, "num_tokens": 6055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5773616572832885}}
{"text": "\\documentclass[bigger]{beamer}\n\n\\input{header-beam} % change to header-handout for handouts\n\n% ====================\n\\title[Lecture 20]{Logic I F13 Lecture 20}\n\\date{November 26, 2013}\n% ====================\n\n\\include{header}\n\n\\setlength{\\fitchprfwidth}{5em}\n\n\\section{Truth Functions}\n\n\\subsec{Truth Functions}{\n\n\\begin{block}{Definition}\n\nAn ($n$-place) \\emph{truth function} $t$ is a mapping of $n$-tuples of \\T{} and \\F\\ to either \\T\\ or \\F.\n\\medskip\n\n$n$-place truth functions correspond to truth tables of sentence with $n$ atomic sentences.\n\\end{block}\n\n\\begin{tabular}{cc@{\\qquad}cc}\n$\\begin{array}{cc|c}\n&& t_\\land\\\\ \\hline \n\\T & \\T & \\T \\\\\n\\T & \\F & \\F \\\\\n\\F & \\T & \\F \\\\\n\\F & \\F & \\F\n\\end{array}$\n&\n$A_1 \\land A_2$ &\n$\\begin{array}{cc|c}\n&& t_\\lor \\\\ \\hline\n\\T & \\T & \\T \\\\\n\\T & \\F & \\T \\\\\n\\F & \\T & \\T \\\\\n\\F & \\F & \\F\n\\end{array}$\n& $A_1 \\lor A_2$\n\\end{tabular}\n\n}\n\n\n\\subsec{Truth Functions}{\n\n\\begin{block}{Definition}\nA sentence $S$ containing the atomic sentences $A_1$, \\dots, $A_n$\n\\emph{expresses} the truth function $t$ iff the truth value of $S$ on\nthe truth-value assignment which assigns $v_i$ to $A_i$ is $t(v_1,\n\\dots, v_n)$.\n\\medskip\n\nAn $n$-place truth function is \\emph{expressible} if there is a\nsentence containing atomic sentences $A_1$, \\dots, $A_n$ that\nexpresses it.\n\n\\end{block}\n}\n\n\n\\subsec{Examples}{\n\n\\begin{tabular}{cc}\n$\\begin{array}{cc|c}\n& & t_1\\\\ \\hline\n\\T & \\T & \\T \\\\\n\\T & \\F & \\T \\\\\n\\F & \\T & \\F \\\\\n\\F & \\F & \\F\n\\end{array}$\n& $A_1 \\quad\\text{or:}\\quad A_1 \\land (A_2 \\lor \\lnot A_2)$ \\\\ \\ \\\\\n$\\begin{array}{cc|c}\n& & t_{XOR}\\\\ \\hline\n\\T & \\T & \\F \\\\\n\\T & \\F & \\T \\\\\n\\F & \\T & \\T \\\\\n\\F & \\F & \\F\n\\end{array}$\n&\n$(A_1 \\lor A_2) \\land \\lnot(A_1 \\land A_2)$ \n\\end{tabular}\n}\n\n\\section{Truth-Functional Completeness}\n\n\\subsec{Truth-Functional Completeness}{\n\n\\begin{block}{Definition}\n\nA set of connectives is \\emph{truth-functionally complete} if every\ntruth function is expressible by a sentence containing only those\nconnectives.\n\n\\end{block}\n\n}\n\n\\subsec{$\\{\\land, \\lor\\}$ Not Truth-Functionally Complete}{\n\n\\bit\n\\item $\\{\\land, \\lor\\}$ is not truth-functionally complete\n\\item Remember: To be truth functionally complete, \\emph{every} truth\n  function would have to be expressible using only $\\land$ and $\\lor$\n\\item Which 2-place truth-functions can be expressed using $\\land$ and\n  $\\lor$?\n\\item Not this one:\n\\[\n\\begin{array}{cc|c}\n\\T & \\T & \\F \\\\\n\\T & \\F & \\T \\\\\n\\F & \\T & \\T \\\\\n\\F & \\F & \\F\n\\end{array}\n\\]\n  \\eit\n\n}\n\n\\subsec{Proof by Induction}{\n\n\\bit\n\\item Sometimes need to prove something for all sentences\n\\item E.g., ``every sentence containing only $\\land$ and $\\lor$ expresses a truth function \\emph{other than} $t_\\mathit{XOR}$''\n\\item Proof by Induction\n\\bit\n\\item Show that it holds for atomic sentences\n\\item Suppose sentences $\\sf P$, $\\sf Q$ have the property and\n\\item Show that it then also holds for $\\sf\\lnot P$, $\\sf P \\land Q$, $\\sf P \\lor Q$, etc.\n\\eit\n\\eit\n\n}\n\n\\subsec{Example: Proof by Induction}{\n\n\\bit\n\n\\item Claim: every sentence contains an even number of parentheses\n\\item Proof strategy:\n\\bit\n\\item Show that every atomic sentence contains an even number of parentheses\n\\bit\n\\item $\\sf B(a, b, 
c)$, $\\bot$\n\\eit\n\\item Show that if $\\sf P$ contains an even number of parentheses, so does $\\sf\\lnot P$\n\\item Show that if $\\sf P$ and $\\sf Q$ contain an even number of parentheses, so do \n\\[\\sf (P \\land Q), (P \\lor Q), (P \\to Q), (P \\leftrightarrow Q) \\]\n\\eit\n\\eit\n\n}\n\n\\subsec{$\\{\\land, \\lor\\}$ Not Truth-Functionally Complete}{\n\n\\bits\n\\item Any sentence containing only $A_1$, $A_2$, $\\land$, $\\lor$\n  expresses a truth function $t$ with $t(\\T, \\T) = \\T$.\n\n\\item Proof. \n\n\\bits\n\\item Atomic sentences: $A_1$, $A_2$: express $t_1$, $t_2$.\n\\item Suppose $P$, $Q$ are sentences which \n  contain only $A_1$, $A_2$, $\\land$, $\\lor$ and express truth\n  functions $t$, $t'$ with $t(\\T, \\T) = t'(\\T, \\T) = \\T$\n\\item $P \\land Q$ expresses truth function $s$ with $s(\\T, \\T) = t_\\land(t(\\T, \\T), t'(\\T, \\T)) = \\T$\n\\item $P \\lor Q$ expresses truth function $s$ with $s(\\T, \\T) = t_\\lor(t(\\T, \\T), t'(\\T, \\T)) = \\T$\n\\eit\n\\item Hence, no sentence containing only $\\land$, $\\lor$ can express a\n  truth function $t$ with $t(\\T, \\T) = \\F$.\n  \\eit \n}\n\n\\subsec{$\\{\\land, \\lor, \\lnot\\}$ is Truth-functionally Complete}{\n\\[\n\\begin{array}{lll|l@{\\qquad}l}\nA_1 & A_2 & A_3 & t_\\mathit{ODD} & S \\\\\n\\hline\n\\T & \\T & \\T & \\T & \\uncovers{2-}{(A_1 \\land A_2 \\land A_3)} \\uncovers{6-}{{}\\lor{}}\\\\\n\\T & \\T & \\F & \\F \\\\\n\\T & \\F & \\T & \\F \\\\\n\\T & \\F & \\F & \\T & \\uncovers{3-}{(A_1 \\land \\lnot A_2 \\land \\lnot A_3)} \\uncovers{6-}{{}\\lor{}}\\\\\n\\F & \\T & \\T & \\F \\\\\n\\F & \\T & \\F & \\T & \\uncovers{4-}{(\\lnot A_1 \\land A_2 \\land \\lnot A_3)} \\uncovers{6-}{{}\\lor{}}\\\\\n\\F & \\F & \\T & \\T & \\uncovers{5-}{(\\lnot A_1 \\land \\lnot A_2 \\land A_3)} \\uncovers{6-}{{}\\lor{}}\\\\\n\\F & \\F & \\F & \\F\n\\end{array}\\]\n}\n\n\\subsec{$\\{\\land, \\lor, \\lnot\\}$ is Truth-Functionally Complete}{\n\n\\bit\n\\item Each line makes one, and only one, conjunction true, e.g.,\n\\item $\\lnot A_1 \\land A_2 \\land \\lnot A_3$ is true in, and only in, line \\F\\ \\T\\ \\F\n\\item Combine using $\\lor$: make $S$ true in all (and only) the lines where it is supposed to be true\n\\eit\n\n}\n\n\\subsec{The ``neither\\dots nor \\dots'' connective: $\\downarrow$}{\n\n\\[\n\\begin{array}{cc|c}\nP & Q & (P \\downarrow Q)\\\\\n\\hline\n\\T & \\T & \\F\\\\\n\\T & \\F & \\F\\\\\n\\F & \\T & \\F\\\\\n\\F & \\F & \\T\n\\end{array}\n\\]\n\n}\n\n\\subsec{$\\{\\downarrow\\}$ is Truth-Functionally Complete}{\n\n\\bit\n\\item Already know that $\\{\\lnot, \\land, \\lor\\}$ is truth-functionally\n  complete, i.e.,\n\\item Every truth-function can be expressed using only $\\lor$, $\\land$, $\\lnot$\n\\item To show $\\downarrow$ is truth-functionally complete, suffices to\n  show that \\emph{every sentence containing only $\\lnot$, $\\lor$, $\\land$ is\n  tautologically equivalent to one containing only $\\downarrow$}\n\\item For that, it suffices to show that any negated sentence,\n  conjunction, disjunction, can be expressed using only $\\downarrow$\n\\eit\n\n}\n\n\\subsec{Expressing $\\lnot$ Using $\\downarrow$}{\n\n\\begin{columns}\n\\begin{column}{3cm}\n\\[\\begin{array}{cc|c}\nP & Q & (P \\downarrow Q)\\\\\n\\hline\n\\T & \\T & \\F\\\\\n\\T & \\F & \\F\\\\\n\\F & \\T & \\F\\\\\n\\F & \\F & \\T\n\\end{array}\\]\n\\end{column}\n\\begin{column}{7cm}\n\\bits\n\\item Note how $P \\downarrow Q$ is \\F{} in the first line and \\T{} in the last (when $P$ and $Q$ have same truth value)\n\\item So $P 
\\downarrow P$ is \\F{} if $P$ is \\T, and \\T{} if $P$ is \\F, i.e., \\[\n\\lnot P \\Leftrightarrow (P \\downarrow P)\n\\]\n\\eit\n\\end{column}\n\\end{columns}\n\n}\n\n\\subsec{Expressing $\\lor$ Using $\\downarrow$}{\n\n\\begin{columns}\n\\begin{column}{3cm}\n\\[\\begin{array}{cc|c}\nP & Q & (P \\downarrow Q)\\\\\n\\hline\n\\T & \\T & \\F\\\\\n\\T & \\F & \\F\\\\\n\\F & \\T & \\F\\\\\n\\F & \\F & \\T\n\\end{array}\\]\n\\end{column}\n\\begin{column}{7cm}\n\\bits\n\\item $P \\downarrow Q$ is the ``neither \\dots nor'' connective, which can also\nbe expressed as $\\lnot(P \\lor Q)$, i.e.,\n\\[ \\lnot(P \\lor Q) \\Leftrightarrow P \\downarrow Q \\] \n\\item Negate both sides:\n\\[\nP \\lor Q \\Leftrightarrow \\lnot(P \\downarrow Q)\n\\]\n\\item Apply what we figured out in last slide:\n\\[\nP \\lor Q \\Leftrightarrow (P \\downarrow Q)\\downarrow(P \\downarrow Q)\n\\]\n\\eit\n\\end{column}\n\\end{columns}\n\n}\n\n\\subsec{Expressing $\\land$ Using $\\downarrow$}{\n\n\\begin{columns}\n\\begin{column}{3cm}\n\\[\\begin{array}{cc|c}\nP & Q & (P \\downarrow Q)\\\\\n\\hline\n\\T & \\T & \\F\\\\\n\\T & \\F & \\F\\\\\n\\F & \\T & \\F\\\\\n\\F & \\F & \\T\n\\end{array}\\]\n\\end{column}\n\\begin{column}{7cm}\n\\bits\n\\item $P \\downarrow Q$ is the ``neither \\dots nor'' connective, which can also\nbe expressed as $\\lnot P \\land \\lnot Q$, i.e.,\n\\[ (\\lnot P \\land \\lnot Q) \\Leftrightarrow P \\downarrow Q \\] \n\\item Equivalence holds for \\emph{all sentences} $P$, $Q$, so also if we replace $P$ by $\\lnot R$ and $Q$ by $\\lnot S$:\n\\[\n\\lnot\\lnot R \\land \\lnot\\lnot S \\Leftrightarrow (\\lnot R \\downarrow \\lnot S)\n\\]\n\\item Delete $\\lnot\\lnot$'s, and express $\\lnot$ using $\\downarrow$:\n\\[\nR \\land S \\Leftrightarrow (R \\downarrow R)\\downarrow(S \\downarrow S)\n\\]\n\\eit\n\\end{column}\n\\end{columns}\n\n}\n\n\\subsec{Truth-Functionally Complete Sets of Connectives}{\n\n\\bit\n\\item De Morgan's Law: $\\land$ can be expressed by $\\lor$ and $\\lnot$\n\\item Similarly: $\\lor$ can be expressed by $\\land$, $\\lnot$\n\\item So $\\{\\lor, \\lnot\\}$ and $\\{\\land, \\lnot\\}$ are\n  truth-functionally complete\n\\item $\\{\\to, \\bot\\}$ is truth-functionally complete (HW)\n\\item $\\{\\to, \\lnot\\}$ is truth-functionally complete \n\\item No other sets of connectives that don't contain one of these sets are complete \n\\item ``Neither \\dots nor'' is truth-functionally complete by itself\n\\item ``Not both'' connective is truth-functionally complete by itself (HW)\n\\item No other 2-place connectives are complete by themselves\n\\eit\n\n}\n\n\\end{document}\n\n\n\n\n", "meta": {"hexsha": "189b2cbd23e71ff0ac85418ae9ed22fd62ea0a26", "size": 8686, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "279-lec20.tex", "max_stars_repo_name": "rzach/phil279", "max_stars_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-09-23T13:42:54.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-21T10:48:55.000Z", "max_issues_repo_path": "279-lec20.tex", "max_issues_repo_name": "rzach/phil279", "max_issues_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "279-lec20.tex", "max_forks_repo_name": "rzach/phil279", "max_forks_repo_head_hexsha": 
"722ec82ae7a4593d40c72083d830c4e3e4864dc0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1040462428, "max_line_length": 127, "alphanum_fraction": 0.6267556988, "num_tokens": 3214, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059462938815, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5773616572832885}}
{"text": "\\lab{Algorithm}{Kruskal's and Prim's Algorithm}{Kruskal's and Prim's Algorithm}\n\\label{Ch:Kruskal}\n\n\\objective{This section teaches about how to find minimum spanning tree for a connected weighted graph using Kruskal's and Prim's Algorithm.}\n\n\\section*{Weight Graphs and Spanning trees}\n\n%Lab \\ref{Markov}\n\nRemember that a graph is composed of two sets: a set of nodes and a set of edges that connect these nodes. \n\n\\begin{figure}[H]\n\\includegraphics[width = .4\\textwidth]{graph1.pdf}\n\\caption{An example of a graph}\n\\end{figure}\n\nA graph is directed if connections are uni-directional, and undirected if they are bi-directional. The above graphic shows an undirected graph. A weighted graph is a graph where each edge has a value associated with it. A connected graph is there is a path,or a set of edges, that connect every two nodes together. We can write a matrix that describes this type of graph. We let each row of our matrix represent our starting point and each column represent our destination. We put the value of the edge if there is a path and a 0 if there is not. For the above graph we generate the following matrix:\n\n\\[\nA = \\begin{pmatrix}\n0 & 1 & 0 & 0 & 0 & 0\\\\\n1 & 0 & 1 & 0 & 0 & 1\\\\\n0 & 1 & 0 & 1 & 1 & 1\\\\\n0 & 0 & 1 & 0 & 1 & 0\\\\\n0 & 0 & 1 & 1 & 0 & 1\\\\\n0 & 1 & 1 & 0 & 1 & 0\n\\end{pmatrix}\n\\]\n\nThis matrix is called an adjacency matrix. Note that this matrix is symmetric, since the graph is undirected. \n\nAnother way to store the graph is as a list of the edges and their weights that are connected (for an unweighted graph you can just store the edges). It would look like this:\n\n$[('A', 'B'),\n ('B', 'C'),\n ('B', 'E'),\n ('B', 'F'),\n ('C', 'D'),\\\\\n ('C', 'E'),\n ('C', 'F'),\n ('D', 'E'),\n ('E', 'F')]$\\\\\nA third entry would store the wieght.\n\nFor this lab, we will be forcusing on undirecting weighted graphs. \n\nA spanning tree of a connected graph G is is an undirected graph that contains all the nodes of G, a subset of the edges, and contains no cycles. A cycle, for undirected graphs, is a path where you start and end on the same node without crossing any edge more than once.\n\n\\begin{figure}[H]\n\\includegraphics[width = .4\\textwidth]{graph2.pdf}\n\\caption{An example of an minimal spaning tree for the previous graph}\n\\end{figure}\nThe minimun spanning tree (MST) of a wieghted, undirected graph is the spanning tree where the sum of the weights of the subset of edges is less than or equal to the sum of the weights of every other spanning tree. Both Kruskal's and Prim's Algorithm are methods that find the minimal spanning tree of a weighted directed graph.\nA spanning tree of a connected graph G is is an undirected graph that contains all the nodes of G, a subset of the edges, and contains no cycles. A cycle, for undirected graphs, is a path where you start and and on the same node without using an edge more than once. \n\\begin{figure}[H]\n\\includegraphics[width = .4\\textwidth]{graph3.pdf}\n\\caption{The path is red marks a cycle}\n\\end{figure}\nThe minimum spanning tree of a weighted, undirected graph is the spanning tree where the sum of the weights of the subset of edges is less than or equal to the sum of the weights of every other spanning tree. 
Both Kruskal's and Prim's Algorithm for the minimal spanning tree of a weighted directed graph.\n\\begin{figure}[H]\n\\includegraphics[width = .4\\textwidth]{graph4.pdf}\n\\caption{A weighted directed graph}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width = .4\\textwidth]{graph5.pdf}\n\\caption{The MST of that graph}\n\\end{figure}\n\n\\section*{Kruskal's algorithm}\n\nGiven a weighted directed graph G with $n$ nodes, Kruskal's algorithm finds the minimun by: first sorting the edges from smallest to largest. Then starting from the smallest, the alogrithm adds the edge to the tree as long as the addition of the new edge does not create a cycle. When the $n-1$ edges have been added the algorithm stops. (put in an example)\nGiven a weighted directed graph G with $n$ nodes, Kruskal's algorithm finds the minimum by sorted the edges from smallest to largest. Starting from the smallest it adds the edges in order to a tree as long as the addition of a new edge does not create a cycle. When the $n-1$ edges have been added the algorithm stops. The fastest way to do this is as follows:\n\\flushleft\n\\begin{itemize}\n\\item Sort the edges by weight in descending order.\n\\item Initialize pointers to trees\n\\item For each edge $e=\\{x, y\\}$, until $n\uf02d-1$ edges added  \n\\begin{itemize}\n\\item if $x$ and $y$ are not in the same tree (i.e. don\u2019t have the same root), add $e$\n\\item merge smaller tree (of $x$ or $y$) into larger tree \n\\end{itemize}\n\\end{itemize}\n\n\n\\begin{problem}\nWrite Kruskal's algorithm. Test your algorithm on random symmetric matrices. Use the graph above and the data from MSTdata.npy to test your tree. Use np.load(\"MSTdata.npy\") to get it. Use the formChanger function to put it in the right form.\n\\end{problem}\n\n\\section*{Prim's algorithm}\n\nPrim's algorithm accomplishes the same thing as Kruskal's algorithm, but with a different approach. Starting from any node it adds the edge with the least weight. Those two nodes form a tree. It than adds edges with the least weight to any node in the tree to any node not in the tree until all the nodes have been added. The fastest way to do this is as follows:\n\\flushleft\n\\begin{itemize}\n\\item Start with the smallest-weight edge $e = \\{x, y\\}$ in $S$\n\\begin{itemize}\n\\item discard $x$ and $y$ from the list of nodes to consider\n\\item for both $x$ and $y$, find the closest node, if any (among the non-discarded nodes), and keep them and their edge weights\n\\end{itemize}\n\\item Until $n-1$ edges have been added\n\\begin{itemize}\n\\item find the edge $e = \\{x, y\\}$ such that\n\\begin{itemize}\n\\item $x$ is in $S$ and $y$ is not in $S$\n\\item $e$ is the smallest weighted edge left\n\\end{itemize}\n\\item add $e$ and $y$ to $S$\n\\item discard y from the list of nodes to consider, and for both $x$ and $y$, find the closest node, if any (among the non-discarded nodes), and keep them and their edge weights\n\\end{itemize}\n\\end{itemize}\n\n\\begin{problem}\nWrite Prim's algorithm. Test you tree with the same data as the previous problem. Compare the speed of Prim's algorithm with Kruskal's algorithm. \n\\end{problem}\n\n\\section*{Complexity}\nLet $n$ be the number of nodes and $m$ be the number of edges. Kruskal's Algorithm is O($m\\log{n}$) which in the worst case is O($n^2\\log{n}$). Prims's Algorithm is O($n^2$). So when the adjacent martix is sparse Kruskal's is better and when it is dense Prim's is better. 
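For reference, here is a minimal Python sketch of both algorithms; it is one possible solution outline, not the lab's official solution. The \texttt{(weight, u, v)} edge format and the small test graph are illustrative choices, and the ``merge smaller tree into larger'' step is simplified to plain union with path compression.

\begin{verbatim}
import heapq

def kruskal(n, edges):
    """edges: list of (weight, u, v), nodes 0..n-1; returns MST edges."""
    parent = list(range(n))            # union-find forest

    def find(x):                       # root of x, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different roots: no cycle created
            parent[ru] = rv
            mst.append((w, u, v))
            if len(mst) == n - 1:
                break
    return mst

def prim(n, edges):
    """Same input format; grows the tree from node 0 using a heap."""
    adj = [[] for _ in range(n)]
    for w, u, v in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    in_tree, mst, heap = {0}, [], list(adj[0])
    heapq.heapify(heap)
    while heap and len(mst) < n - 1:
        w, u, v = heapq.heappop(heap)  # cheapest edge leaving the tree
        if v in in_tree:
            continue
        in_tree.add(v)
        mst.append((w, u, v))
        for e in adj[v]:
            heapq.heappush(heap, e)
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
assert sum(w for w, _, _ in kruskal(4, edges)) == 6
assert sum(w for w, _, _ in prim(4, edges)) == 6
\end{verbatim}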
", "meta": {"hexsha": "495f56c3b266731c7862ef0cba410767f6609874", "size": 6537, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algorithms/MST/Kruskals.tex", "max_stars_repo_name": "abefrandsen/numerical_computing", "max_stars_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Algorithms/MST/Kruskals.tex", "max_issues_repo_name": "abefrandsen/numerical_computing", "max_issues_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Algorithms/MST/Kruskals.tex", "max_forks_repo_name": "abefrandsen/numerical_computing", "max_forks_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.3534482759, "max_line_length": 600, "alphanum_fraction": 0.7374942634, "num_tokens": 1800, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5773307562815599}}
{"text": "Although the traditional IT security layer provided to Industrial Control Systems offers a great deal of protection, security managers often overlook the possibility of a potential stealthy attacks at the process level. In the paper\\supercite{Pasad:2018}, 'Truth Will Out: Departure-Based Process-Level Detection of Stealthy Attacks on Control Systems' the authors bring out a novel approach to detecting stealthy attacks on the control system through a process-aware methodology which they call - PASAD. It enables early detection in the subtle variation of process behavior, thus averting strategic adversaries from maliciously manipulating the industrial process within the noise level. The original implementation for the PASAD algorithm was in Matlab. In this replication, I use Python, and popular visualization library, Matplotlib, to obtain the results claimed.\n\n\\section*{Introduction}\n\nA typical control system utilizes sensors, actuators, and controllers to regulate some controlled process. Sensor devices measure some physical property and communicate the measurements to a controller (e.g., a PLC), which based on a control algorithm, correspondingly issues commands to actuators\n(e.g., control valves) that directly manipulate the physical process based on the received commands. A physical process may exhibit a noisy behavior by nature. A strategic adversary takes advantage of this fact and manipulates the process to stay within the noise level to remain undetected, thus leading to a potential degradation or suboptimal performance or, in the worst scenario, a complete failure of control system through cascading effect between various control loops. PASAD is capable of finding such attacks by utilizing a technique called 'singular spectrum analysis,' a model-free time-series analysis tool.\n\n\\section*{Four Steps of PASAD}\nConsider a univariate real-valued time series of sensor measurements\n\n\\begin{center}\n$\\mathcal{T}=x_1, x_2,...,x_N, x_N+1$\n\\end{center}\n\n\\subsection*{Step 1: Embedding timeseries data into a trajectory matrix $X$}\n\nLet $L$ be an integer referred to as lag parameter, which is typically close to N/2. Initial $L$ points of $\\mathcal{T}$ form the first column vector of Trajectory Matrix $X$. Then the lag vector moves to the next point, similar to the concept of a `moving window' until it covers all the points, forming corresponding column vectors for each. Final trajectory matrix of time series data is as follows:-\n\n$$\nX = \\begin{bmatrix}\n\tx_1 & x_2 & \\dots  & x_K \\\\\n\tx_2 & x_3 & \\dots  & x_{K+1} \\\\\n\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\tx_L & x_{L+1} & \\dots  & x_N\n\\end{bmatrix}\n$$\n\n\\subsection*{Step 2: Singular Value Decomposition}\n\n\nTo obtain the noise-reduced version of the above trajectory matrix, we perform singular value decomposition(SVD) on the matrix and obtain the L eigenvectors $u_1, u_2,...,u_L$ and their corresponding eigenvalues. From a scree plot, we determine the cumulative contribution of each eigenvalue to determine the top $r$ contributors to the time series. $r$ represents the statistical dimension, which is the number of degrees of freedom that account for the deterministic variability.\n\n\\subsection*{Step 3: Creating a projection matrix}\n\n\nAfter obtaining the statistical dimension $r$, we proceed to create the projection matrix. By performing SVD, we have effectively obtained the elementary matrices of $X$ that represents the optimal separation of the component in the trajectory space. 
The projection matrix $P$ is obtained by reconstructing, i.e., summing up, the top $r$ elementary matrices\\supercite{kaggle}. In this step, we also calculate the centroid of this 'reduced' Euclidean space and the distance between the centroid and the farthest lag vector. This distance is the threshold for determining an anomaly in future lag vectors.\n\n\\subsection*{Step 4: Distance Tracking and Anomaly Prediction}\n\nAfter the threshold distance has been determined, the lag vector keeps moving forward; we continue projecting lag vectors onto the new subspace and check whether each is within the threshold distance from the centroid. If it is, the behavior is normal; otherwise, we raise an alarm.\n\n\\section*{The Isometry Trick}\n\nThe essence of the original paper\\supercite{Pasad:2018} is what is called \\textbf{the isometry trick}.  In a nutshell, for an arbitrary vector $x$ in $\\mathbf{R}^L$, computing the norm of the vector $U^Tx$ has the effect of implicitly projecting $x$ onto the subspace $\\mathcal{L}^r$ and computing its norm there. Further reading on the isometry trick is in Para 2.6 of the original article. By this method, the authors achieve close to linear complexity in the computation, which is ideal for implementation in low-end devices such as PLCs. \n\n\\section*{Implementation}\n\nThe original implementation of the PASAD algorithm is in Matlab, and the source code is available \\href{https://github.com/mikeliturbe/pasad}{here}. The authors validated the algorithm with data from three different scenarios, namely the Tennessee-Eastman process, the SWaT dataset, and a water-distribution plant. This reproduction is for the first scenario. Reproduced plots are given below for easy reference.\n\n\\subsection*{Scenario 1. (Stealthy Attack)}\n\nStealthy attack SA1 compromising the control variable XMV(9), detected in sensor variable XMEAS(5). \n\n\\begin{figure}[H]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa1orig.png}\n\t\t\\caption{Original Plot}\\label{fig:1a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa1re.png}\n\t\t\\caption{Reproduced plot}\\label{fig:1b}\n\t\\end{subfigure}\n\t\\caption{Departure score plot for stealthy attack 1 scenario}\\label{fig:1}\n\\end{figure}\n\n\\subsection*{Scenario 2. (Stealthy Attack)}\n\nStealthy attack SA2 compromising the control variable XMV(6), detected in sensor variable XMEAS(10). \n\n\\begin{figure}[H]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa2orig.png}\n\t\t\\caption{Original Plot}\\label{fig:2a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa2re.png}\n\t\t\\caption{Reproduced plot}\\label{fig:2b}\n\t\\end{subfigure}\n\t\\caption{Departure score plot for stealthy attack 2 scenario}\\label{fig:2}\n\\end{figure}\n\n\\subsection*{Scenario 3. 
(Stealthy Attack)}\n\nStealthy attack SA3 compromising the control variable XMEAS(10), detected in sensor variable XMEAS(9).\n\n\\begin{figure}[H]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa3orig.png}\n\t\t\\caption{Original Plot}\\label{fig:3a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/sa3re.png}\n\t\t\\caption{Reproduced plot}\\label{fig:3b}\n\t\\end{subfigure}\n\t\\caption{Departure score plot for stealthy attack 3 scenario}\\label{fig:3}\n\\end{figure}\n\n\\subsection*{Scenario 4. (Direct Attack)}\n\nDirect damage attack DA1 compromising the control variable XMV(10), detected in sensor variable XMEAS(15).\n\n\\begin{figure}[H]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/da1orig.png}\n\t\t\\caption{Original Plot}\\label{fig:4a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/da1re.png}\n\t\t\\caption{Reproduced plot}\\label{fig:4b}\n\t\\end{subfigure}\n\t\\caption{Departure score plot for direct attack 1 scenario}\\label{fig:4}\n\\end{figure}\n\n\\subsection*{Scenario 5. (Direct Attack)}\n\nDirect damage attack DA2 compromising the control variable XMEAS(7), detected in sensor variable XMEAS(5). \n\n\\begin{figure}[H]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/da2orig.png}\n\t\t\\caption{Original Plot}\\label{fig:5a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/da2re.png}\n\t\t\\caption{Reproduced plot}\\label{fig:5b}\n\t\\end{subfigure}\n\t\\caption{Departure score plot for direct attack 2 scenario}\\label{fig:5}\n\\end{figure}\n\n\n\\section*{Results and Observations}\nWith respect to detecting process manipulations in the 'Tennessee-Eastman Process' dataset, the PASAD algorithm was successfully validated. A minor variation in the detection plot is attributable to the selection of the statistical dimension from the scree plot. During reproduction, the first eigenvalue was observed to be disproportionately high, so the statistical dimension was set to 1 to obtain a successful detection phase. Such behavior is attributable to an inherent characteristic of the trajectory matrix, as per the clarification by the author \\href{https://github.com/mikeliturbe/pasad/issues/1}{here}. The variation in the scree plot with and without the top eigenvalue is shown in Figure \\ref{fig:scree}.\n\n\\begin{figure}[h]\t\n\t\\centering\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/scree1.png}\n\t\t\\caption{All eigenvalues}\\label{fig:scree-a}\t\t\n\t\\end{subfigure}\n\t\\qquad\n\t\\begin{subfigure}[t]{0.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{imgs/scree2.png}\n\t\t\\caption{Eigenvalues excluding first}\\label{fig:scree-b}\n\t\\end{subfigure}\n\t\\caption{Scree plot of top eigenvalues}\\label{fig:scree}\n\\end{figure}\n\n\\section*{Acknowledgement}\n\nThis work was undertaken as part of the course CS631A-Cybersecurity for Critical Infrastructure at IIT Kanpur, taught by \\href{https://security.cse.iitk.ac.in/node/96}{Prof. Sandeep Shukla}. 
I would also like to thank \\href{https://www.labri.fr/perso/nrougier/}{Nicolas P. Rougier} for letting me know about \\href{https://rescience.github.io}{ReScience C} and \\href{https://www.cse.iitk.ac.in/users/pvcharan/} {P.V. Sai Charan} for collaborating.\n", "meta": {"hexsha": "eabf5f4290b1a0de94eaa5d5c9b5f9586d6e5bc8", "size": 9841, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Rescience Submission/content.tex", "max_stars_repo_name": "rahulrajpl/PASAD", "max_stars_repo_head_hexsha": "37de6e79431637cdb1fc045b2ca0015d230d29ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Rescience Submission/content.tex", "max_issues_repo_name": "rahulrajpl/PASAD", "max_issues_repo_head_hexsha": "37de6e79431637cdb1fc045b2ca0015d230d29ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-09-17T07:41:06.000Z", "max_issues_repo_issues_event_max_datetime": "2019-09-17T07:41:06.000Z", "max_forks_repo_path": "Rescience Submission/content.tex", "max_forks_repo_name": "rahulrajpl/PASAD", "max_forks_repo_head_hexsha": "37de6e79431637cdb1fc045b2ca0015d230d29ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.8843930636, "max_line_length": 869, "alphanum_fraction": 0.7777664871, "num_tokens": 2640, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5773307518110528}}
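To accompany the replication above, here is a compact end-to-end sketch of PASAD's four steps in NumPy on a synthetic signal. The variable names, the synthetic data, and the 95\% scree cutoff are illustrative choices made here, not taken from the paper or its Matlab code:

\begin{verbatim}
import numpy as np

# Sketch of PASAD's four steps on a synthetic signal (illustrative only).
rng = np.random.default_rng(0)
train = np.sin(np.linspace(0, 40, 1000)) + 0.1 * rng.standard_normal(1000)
stream = np.concatenate([train, train[-400:] + 0.5])  # crude "attack" tail

N = len(train)
L = N // 2                            # lag parameter, typically ~N/2
K = N - L + 1

# Step 1: embed the training series into the L x K trajectory matrix.
X = np.column_stack([train[i:i + L] for i in range(K)])

# Step 2: SVD; pick the statistical dimension r from the scree plot
# (here: the smallest r explaining 95% of the squared singular values).
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1
U_r = U[:, :r]                        # basis of the signal subspace

# Step 3: centroid of the projected training lag vectors and threshold.
# Isometry trick: distances are computed in R^r via U_r^T x.
proj = U_r.T @ X
centroid = proj.mean(axis=1)
theta = np.linalg.norm(proj - centroid[:, None], axis=0).max()

# Step 4: departure score of each new lag vector; alarm when > theta.
departures = np.array([np.linalg.norm(U_r.T @ stream[j:j + L] - centroid)
                       for j in range(len(stream) - L + 1)])
print("alarms raised:", int((departures > theta).sum()))
\end{verbatim}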
{"text": "\\documentclass{article}\n\\usepackage{hyperref}\n\\usepackage{amsmath}\n\n\\title{Partial least squares}\n\\author{John Reid}\n\n\\begin{document}\n\n\\section{Partial least squares model}\n\nThe partial least\nsquares model is presented as a latent variable model in Kevin Murphy's 2012\nbook \\href{https://probml.github.io/pml-book/}{Machine Learning: A\nProbabilistic Perspective}.\n\n\\begin{eqnarray}\n  p(z) &=& \\mathcal{N}(z|0, I) \\\\\n  p(v|z, W, \\mu, \\sigma) &=& \\mathcal{N}(v|W z + \\mu, \\sigma^2 I)\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n  W &=& \\begin{pmatrix}\n      W_y & 0 \\\\\n      W_x & B_x \\\\\n    \\end{pmatrix} \\\\\n  z &=& (z^s; z^x) \\\\\n  v &=& (y; x) \\\\\n  \\mu &=& (\\mu_y; \\mu_x).\n\\end{eqnarray}\nMarginalising $z$ gives\n\\begin{eqnarray}\n  p(v|W, \\mu, \\sigma)\n  &=& \\int \\mathcal{N}(v|W z + \\mu, \\sigma^2 I) \\mathcal{N}(z|0, I)\\,dz \\\\\n  \\label{eqn:joint_xy}\n  &=& \\mathcal{N}(v|\\mu, W W^T + \\sigma^2 I)\n\\end{eqnarray}\nConditioning on $x$ gives\n\\begin{eqnarray}\n  p(y|x) &=& \\mathcal{N}(y|m_{y|x}, S_{y|x})\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n  C &=& {(B_x B_x^T + W_x W_x^T + \\sigma^2 I)}^{-1} \\\\\n  m_{y|x} &=& \\mu_y + W_y W_x^T C (x - \\mu_x) \\\\\n  S_{y|x} &=& \\sigma^2 I + W_y W_y^T - W_y W_x^T C W_x W_y^T\n\\end{eqnarray}\n\nSuppose we now obtain $N$ independent observations from the model\n\\begin{eqnarray}\n  v_n = (y_n; x_n),\\quad 1 \\le n \\le N.\n\\end{eqnarray}\nWe wish to estimate $W, \\mu \\text{ and } \\sigma$. We can do this by maximising the likelihood of the data\n$v = x, y$~(\\ref{eqn:joint_xy}) using stochastic gradient descent.\n\nOne way to validate that our implementation works is to sample data from the model and compare the estimated\nparameters to those that we used to sample with. Suppose we sample $v_n$ from our model parameterised by\n$\\sigma^*, \\mu^*_x, \\mu^*_y, W^*_y, W^*_x, B^*_x$.\nWe can estimate $\\hat{\\sigma}, \\hat{\\mu}_x, \\hat{\\mu}_y, \\hat{W}_y, \\hat{W}_x, \\hat{B}_x$ that maximise\n\\begin{eqnarray}\n  \\label{eqn:log_joint}\n  \\sum_n \\log p(v|\\sigma, \\mu, W_y, W_x, B_x)\n\\end{eqnarray}\nWe would love to compare the estimated parameters to the underlying parameters but unfortunately (\\ref{eqn:log_joint})\nis invariant to orthonormal transformations of the $W_y, W_x, B_x$. That is\n\\begin{eqnarray}\n  \\sum_n \\log p(v_n|\\sigma, \\mu, W_y U_s, W_x U_s, B_x U_x)\n  &=&\n  \\sum_n \\log p(v_n|\\sigma, \\mu, W_y, W_x, B_x)\n\\end{eqnarray}\nfor any orthonormal square matrices (of suitable dimension) $U_s, U_x$. 
We can see this as $W$ only appears\nas $W W^T$ in the likelihood and $W W^T = W U U^T W^T$ for any orthonormal $U$.\n\\end{document}\n", "meta": {"hexsha": "c367de64dca69b36b54cf038b6879de6d7beb347", "size": 2538, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/pls.tex", "max_stars_repo_name": "JohnReid/pls-tf", "max_stars_repo_head_hexsha": "2db021ea14ad0779c988949b8fa354c70ac85385", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/pls.tex", "max_issues_repo_name": "JohnReid/pls-tf", "max_issues_repo_head_hexsha": "2db021ea14ad0779c988949b8fa354c70ac85385", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/pls.tex", "max_forks_repo_name": "JohnReid/pls-tf", "max_forks_repo_head_hexsha": "2db021ea14ad0779c988949b8fa354c70ac85385", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.2972972973, "max_line_length": 118, "alphanum_fraction": 0.658392435, "num_tokens": 967, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.740174350576073, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5773307383995312}}
{"text": "\n\\section{The \\merge algorithm}\n\\label{sec:merge}\n\nOur version of the \\merge algorithm from the \\cxx standard library\\cite[28.7.5]{cxx-17-draft}\nhas the following signature.\n\n\\begin{lstlisting}[style = acsl-block]\n\nvoid\nmerge(const value_type* a, size_type n,\n      const value_type* b, size_type m,\n      value_type* result);\n\\end{lstlisting}\n\nThe \\merge algorithm is a part of the \\emph{merge sort} algorithm.\nIt operates on the second step to merge two increasingly ordered sub-arrays into a new one.\nThe algorithm merges two increasingly ordered arrays\n\\inl{a[0..n-1]} and \\inl{b[0..m-1]}, respectively.\nThe merged values are stored in the output array that starts at\n\\inl{result} which must be able to hold $m+n$ values of both input arrays.\n\n\n\\subsection{Formal specification of \\merge}\n\nThe following listing~\\ref{lst:merge:spec} shows the specification of \\merge.\nThe specification expects the input arrays of the proper size and in increasing order\nand the output array of enough size to contain all the input elements.\nThe input arrays should not overlap with the output array.\n%\nIn the current edition of this guide, we prove only that the\nresulting array is in increasing order.\nFuture editions will contain additional postconditions stating that the\nresult array consists of reordered input elements and the stability of the algorithm,\ni.e., the same elements of the input arrays preserve their order in the output array.\n\n\\input{Listings/merge.h.tex}\n\n%\\clearpage\n\n\\subsection{More Lemmas on \\WeaklyIncreasing}\n\\label{sec:WeaklyIncreasingLemmas}\n\nWe introduce in the following listing several lemmas about \\logicref{WeaklyIncreasing}\nthat are helpful for the verification of \\merge.\n\n\\begin{itemize}\n\\item\nLemma \\logicref{WeaklyIncreasingShrink} allows to restrict the property\n\\emph{weakly increasing} onto a sub-array.\n\n\\item\nLemma \\logicref{WeaklyIncreasingAddElement} defines the way a weakly\nincreasing array can be constructed.\n\n\\item\nLemma \\logicref{WeaklyIncreasingShift} is used to handle\npointer arithmetic with respect to the \\WeaklyIncreasing property.\n\n\\item\nLemmas  \\logicref{WeaklyIncreasingUnchanged} and \\logicref{WeaklyIncreasingEqual} state\nthat if an array is weakly increasing,\nthen another array (or the same array at another program point),\nwhose elements are in a one-to-one correspondence with the\noriginal array, is also weakly increasing.\n\n\\item\nLemma \\logicref{WeaklyIncreasingJoin} defines the conditions that two\nconsequent weakly increasing ranges\ncan be viewed as merged weakly increasing range.\n\\end{itemize}\n\n\\input{Listings/WeaklyIncreasingLemmas.acsl.tex}\n\n%\\clearpage\n\n\\subsection{Implementation of \\merge}\n\nThe implementation of \\merge, shown in the next listings is straightforward.\nThe algorithm operates by traversing both input arrays.\nOn each iteration it writes the smaller of both elements into the result array,\nthus constructing an increasingly ordered array.\nIf the algorithm reaches the end of one of the input arrays,\nit just copies the rest elements of the other array to the result array.\nThe listing contains a number of assertions to trigger an application of lemmas by \nthe provers.\nThe \\inl{while} loop traverses the input arrays and constructs, in accordance with\n\\logicref{WeaklyIncreasingAddElement}, the resulting weakly increasing array.\nAfter the loop, the algorithm copies the remaining elements to the resulting 
array.\n\n\n\\begin{listing}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={1-33}, style=acsl-block, frame=single]{Source/merge.c}\n\\end{minipage}\n\\caption{\\label{lst:merge-impl1}Implementation of \\merge (1)}\n\\end{listing}\n\n\\index[examples]{merge@\\texttt{merge}}\n\n%\\clearpage\n\nWe also use the following lemmas to support the verification of several properties.\n\n\\begin{itemize}\n\\item\nLemma \\logicref{WeaklyIncreasingEqual} is used to show that the\ncopied elements from one of the input arrays preserve the \\WeaklyIncreasing property.\n\n\\item\nLemma \\logicref{WeaklyIncreasingJoin} is used\nto\nextend the \\WeaklyIncreasing \nproperty of the two sub-ranges of the resulting array over the whole range.\nIn order to deal with pointer arithmetic we employ Lemma \\WeaklyIncreasingShift.\n\n\\item \nFinally, Lemma \\logicref{WeaklyIncreasingIncreasing}\nis used to prove the output array is in increasing order.\n\\end{itemize}\n\n\\begin{listing}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={34-100}, style=acsl-block, frame=single]{Source/merge.c}\n\\end{minipage}\n\\caption{\\label{lst:merge-impl2}The Implementation of \\merge (2)}\n\\end{listing}\n\n\\index[examples]{merge@\\texttt{merge}}\n\n\n\\clearpage\n\n", "meta": {"hexsha": "0eca5f9f447d284d2701726b53cb0b54d795b52c", "size": 4595, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/sorting/merge.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/sorting/merge.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/sorting/merge.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 34.2910447761, "max_line_length": 93, "alphanum_fraction": 0.7956474429, "num_tokens": 1127, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.8104789086703225, "lm_q1q2_score": 0.5772491733030115}}
{"text": "%---------------------------Volume-----------------------------\n\\section{Pyramid Volume}\n\nThe pyramid volume metric is simply\n\\[\nq = V.\n\\]\n\n\\othermetrictable{volume}%\n{$L^3$}%                                      Dimension\n{$[0,DBL\\_MAX]$}%                             Acceptable range\n{$[-DBL\\_MAX,DBL\\_MAX]$}%                     Normal range\n{$[-DBL\\_MAX,DBL\\_MAX]$}%                     Full range\n{$\\frac{1}{3\\sqrt{2}}$}%                      Unit element\n{--}%                                         Citation\n{v\\_pyramid\\_volume}%                         Verdict function name\n", "meta": {"hexsha": "9496349083fc77fbab02a15c50aa94638aa9a65b", "size": 584, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/PyrVolume.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/PyrVolume.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/PyrVolume.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 34.3529411765, "max_line_length": 67, "alphanum_fraction": 0.3886986301, "num_tokens": 131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789040926008, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5772491650920359}}
{"text": "% Copyright 2020 Markus J. Pflaum, licensed under CC BY-NC-ND 4.0\n% main author: \n%   Markus J. Pflaum \n%\n\\section{Infinite tensor products}\n\\label{sec:infinite-tensor-products}\n\n\\para\nInfinite tensor products of Hilbert spaces were introduced by  \\cite{vNeuIDP}. They were motivated by mathematical physics\nwhere one needs to describe quantum systems with infinitely many degrees of freedom, see e.g.~\\cite{EmcAMSMQFT,BraRobOAQSM2}.\nThe original construction of infinite tensor products was generalized to von Neumann  and $C^*$-algebras\nby \\cite{GuiPTIRRA}, \\cite{BlaITPC*A}, and others. Meanwhile, the topic has been studied in quite some\ndetail in the operator algebra literature, see e.g.~\\cite{NakITPvNAI,NakITPvNAII,StoITPvNA}. \nA purely algebraic or better categorical approach allowing the construction of infinite tensor products of modules over\na given commutative ring has been given in \\cite[Sec.~III.10]{CheFCA}. The work \\cite{NgGIATP} is also in that spirit.\nWe will essentially follow \\cite{CheFCA} and construct the infinite tensor product as a module\nuniversal with respect to multilinear maps. First we present the main algebraic construction, then we explain some of\nthe subtleties which distinguish infinite from finite tensor products, and finally we construct infinite Hilbert\ntensor products and infinite tensor products of $C^*$-algebras. \n\n\\para\\label{para:canonical-projections-embeddings-multilinear-maps}\nLet $R$ be a commutative ring and $(M_i)_{i\\in I}$ a possibly infinite family of $R$-modules.\nConsider $\\prod_{i\\in I} M_i$, the product of the family $(M_i)_{i\\in I}$  within the category\nof $R$-modules. For each $j\\in I$ let $\\pi_j : \\prod_{i\\in I} M_i \\to M_j $ denote the natural\nprojection onto the $j$-th factor and $\\iota_j :M _j \\hookrightarrow  \\prod_{i\\in I}M_i$ the\nuniquely determined natural embedding such that \n\\[\n   \\pi_j\\circ \\iota_i =  \n   \\begin{cases}\n       \\id_{M_i} & \\text{for}\\enspace i=j \\enspace\\text{and} \\\\\n       0 & \\text{else} .  \n   \\end{cases}\n\\]\nGiven an $R$-module $N$  one then understands by a \\emph{multilinear map} from $ \\prod_{i\\in I}M_i$\nto $N$ a map $f:  \\prod_{i\\in I}M_i \\to N$\nsuch that for each $j\\in I$ and $x\\in \\prod_{i\\in I} M_i$ with $\\pi_j (x)=0$ the map $M_j\\to N$, $m\\mapsto f(\\iota_j(m)+x)$ is linear.\nThe set of multilinear maps from $\\prod_{i\\in I} M_i$ to $N$ will be denoted by\n$\\mlinOps \\big(\\prod_{i\\in I} M_i,N\\big)$. It carries a natural structure of an $R$-module\ngiven by pointwise addition of multilinear maps and pointwise action of a scalar on a\nmultilinear map that is by  \n\\[\n  f+ g =  \\left( \\prod_{i\\in I} M_i \\ni x \\mapsto f(x) + g(x) \\in N \\right) \\quad\\text{and}\\quad\n  r f =  \\left( \\prod_{i\\in I} M_i \\ni x \\mapsto rf(x)\\in N \\right) \n\\]\nfor all $f,g \\in \\mlinOps \\big(\\prod_{i\\in I} M_i,N\\big)$ and $r\\in R$.\nSince for $j\\in I$ and $x\\in \\prod_{i\\in I} M_i$ with $\\pi_j(x)= 0$ the maps\n$M_j\\to N$, $m \\mapsto (f+g) (\\iota_j(m) + x) = f (\\iota_j(m) + x) + g (\\iota_j(m) + x)$\nand $M_j\\to N$, $m \\mapsto rf (\\iota_j(m) + x) $ are linear by assumption on $f$ and $g$,\nthe maps $f+g$ and $rf$ are multilinear again, so $ \\mlinOps\\big(\\prod_{i\\in I} M_i,N\\big)$ is an\n$R$-module indeed with zero element the constant function mapping to $0\\in N$. 
\n\n\\begin{remarks}\n  Before proceeding further let us make several explanations concerning the notation used.\n  \\begin{environmentlist}\n  \\item\n    The space of multilinear maps $ \\mlinOps \\big(\\prod_{i\\in I} M_i,N\\big)$ actually depends\n    on the family $(M_i)_{i\\in I}$ and the $R$-module $N$, so in principle one should  write\n    $ \\mlinOps \\big((M_i)_{i\\in I},N\\big)$ instead of $ \\mlinOps \\big(\\prod_{i\\in I} M_i,N\\big)$.\n    Nevertheless we stick to the latter notation since it is closer to standard notation for\n    linear maps and since it will not lead  to any confusion.\n  \\item\n    In case the index set $I$ has just two elements $i_1,i_2$, one calls a multilinear map\n    $\\prod_{i\\in I}M_i = M_{i_1}\\times M_{i_2} \\to N$ a \\emph{bilinear map}. If the cardinality of\n    $I$ is $3$, one sometimes calls a multilinear map $\\prod_{i\\in I}M_i \\to N$ a\n    \\emph{trilinear map}. \n    \n  \\item\n     In the following, when saying that $(I_a)_{a\\in A}$ is a partition of the set $I$ we mean that\n  each $I_a$ is a non-empty subset of $I$, that $I_a   \\cap I_b =\\emptyset $\n  for $a \\neq b$ and that $\\bigcup_{a \\in A} I_a = I$. The empty family is regarded as\n  a partition of the empty set.\n    \n  \\item We will frequently use in this section the same symbol for \n  maps with the same ``universal'' properties, even though those maps might, strictly speaking, be different. \n  For example, $\\pi_k$ will stand for the canonical projections\n  $\\prod_{i\\in I} M_i \\to M_k$ and $\\prod_{j\\in J} M_j \\to M_k$ whenever $k\\in J \\subset I$. \n  Likewise we use the same notation for the two canonical embeddings \n  $M_k \\hookrightarrow \\prod_{i\\in I} M_i $ and $M_k \\hookrightarrow \\prod_{j\\in J} M_j $\n  defined in \\ref{para:canonical-projections-embeddings-multilinear-maps} and denote them both by $\\iota_k$.\n\n  \\end{environmentlist}\n\\end{remarks}\n\n\\begin{lemma}[cf.~{\\cite[Sec.~III.10, Lemma 1 \\& 2]{CheFCA}}]\\label{thm:construction-multilinear-maps-composition}\n  Assume that $(M_i)_{i\\in I}$ is a family of $R$-modules, $N$ an $R$-module, and\n  $f : \\prod_{i\\in I}M_i \\to N$ a multilinear map.\n  \\begin{romanlist}\n  \\item\n    If $g :N \\to N^\\prime$ is an $R$-module map, then\n    $g\\circ f :  \\prod_{i\\in I}M_i \\to N^\\prime$ is multilinear.\n  \\item\n    Let $J\\subset I$ be non-empty, $y= (y_i)_{i\\in I\\setminus J}$ an element of the product\n    $\\prod_{i\\in I\\setminus J} M_i$, and $ \\iota_{J,y} :  \\prod_{j\\in J} M_j \\to \\prod_{i\\in I} M_i$ the unique map\n    such that for all $x = (x_j)_{j\\in J} \\in \\prod_{j\\in J} M_j$ and $k\\in I$\n    \\[\n      \\pi_k \\circ \\iota_{J,y}\\, (x) = \n      \\begin{cases}\n        x_k & \\text{for } k\\in J , \\\\\n        y_k & \\text{for } k \\in I\\setminus J .\n      \\end{cases}\n    \\]\n    Then the composition $f\\circ \\iota_{J,y} : \\prod_{j\\in J} M_j \\to N$ is multilinear.\n \\item\\label{ite:multilinearity-composition-multilinear-map-product-multilinear-map}\n   Let $(I_a)_{a \\in A}$ be a partition of the index set $I$ which is assumed to be non-empty.\n   Let $(N_a)_{a \\in A}$ be a family of $R$-modules, $(g_a)_{a \\in A}$\n   a family of multilinear maps $g_a : \\prod_{i \\in I_a} M_i \\to N_a$, and\n   $h :  \\prod_{a \\in A}N_a \\to N$ multilinear. 
Define\n   $g: \\prod_{i\\in I} M_i \\to  \\prod_{a \\in A} N_a $ as the unique map\n   such that \n   \\[\n      \\pi_b \\circ g = g_b \\circ \\pi_{I_b} \\quad \n      \\text{for } b \\in A  ,\n    \\]\n    where $\\pi_J$ for $J\\subset I$ as on the right side stands for the\n    projection $\\pi_J : \\prod_{i\\in I} M_i \\to  \\prod_{j \\in J} M_j$\n    uniquely determined by $\\pi_j \\circ \\pi_J =\\pi_j$ for all $j\\in J$. \n   Then the  composition $h \\circ g : \\prod_{i\\in I} M_i \\to N$ is multilinear.\n  \\end{romanlist}\n\\end{lemma}\n\n\\begin{proof}\n  \\begin{adromanlist}\n  \\item\n    Let $j\\in I$ and $x\\in \\prod_{i\\in I} M_i$ with $\\pi_j (x)=0$.\n    By multilinearity of $f$ and linearity of $g$, the map $M_j\\to N^\\prime$,\n    $m\\mapsto g f (\\iota_j(m)+x)$ then has to be linear, hence $g\\circ f$ is multilinear.\n  \\item\n    Let $j \\in J$ and $x\\in \\prod_{i\\in J} M_i$ with $\\pi_j (x)= 0$.\n    % Define $\\widetilde{x}$ as the unique element of $\\prod_{i\\in I}M_i$ such that\n    % \\[\n    %    \\pi_k (\\widetilde{x}) =\n    %    \\begin{cases}\n    %      \\pi_k(x)  &\\text{for } k \\in J,\\\\ \n    %      y_k &\\text{for } k \\in I\\setminus J . \n    %    \\end{cases}\n    % \\]\n    Then $\\pi_j(\\iota_{J,y}(x))=0$ and $f \\iota_{J,y} (\\iota_j(m)+x) =\n    f(\\iota_j(m) + \\iota_{J,y}(x))$ for all $m\\in M_j$ by  construction of $\\iota_{J,y}$.\n    Hence the map $M_j\\to N$, $m\\mapsto f \\iota_{J,y} (\\iota_j(m)+x)$ is linear by\n    multilinearity of $f$. This proves that $f\\circ \\iota_{J,y}$ is multilinear.\n  \\item\n    Given $j\\in I$ let $b$ be the unique element of $A$  such that $j\\in I_b$.\n    Assume  that $x\\in \\prod_{i\\in I} M_i$ with $\\pi_j (x)=0$.\n    By construction one has $\\pi_j(\\pi_{I_b} (x))=0$.\n    Now let $y \\in \\prod_{a \\in A}N_a$ such that\n    \\[\n       \\pi_a (y) =\n       \\begin{cases}\n         0 & \\text{for } a = b , \\\\\n         g_a \\pi_{I_a} (x) & \\text{for } a  \\neq b.\n       \\end{cases}\n    \\]\n    One then obtains for $m\\in M_j$\n    \\[\n    \\pi_a g ( \\iota_j(m) + x)= \n    \\begin{cases}\n      g_b \\pi_{I_b} (\\iota_j(m)+x) = g_b (\\iota_j (m) + \\pi_{I_b} (x)) &\\text{for }a = b, \\\\\n      g_a \\pi_{I_a} (x) =  \\pi_a (y)  & \\text{for } a  \\neq b .\n    \\end{cases}\n    \\]\n    Hence \n    \\[\n      h g (\\iota_j(m)+ x) =\n      h \\big( \\iota_b \\big( g_b ( \\iota_j(m) + \\pi_{I_b} (x)) \\big) + y \\big) \\ ,\n    \\]\n    and the map $M_j\\to N$, $m\\mapsto h g (\\iota_j(m)+ x)$ is linear as the composition of two linear maps. \n  \\end{adromanlist}\n\\end{proof}\n\n\\begin{lemma}\\label{thm:associator-cartesian-product}\n  Assume to be given a non-empty family of $R$-modules $(M_i)_{i\\in I}$ and a partition \n  $(I_a)_{a \\in A}$ of the index set $I$. 
Then there exists a natural isomorphism\n  \\[\n    \\kappa_{I,A}:  \\prod_{i \\in I} M_i \\to \\prod_{a \\in A}  \\prod_{i \\in I_a} M_i\n  \\]\n  uniquely determined by the condition that $\\pi_a \\circ \\kappa_{I,A} = \\pi_{I_a}$ for all\n  $a\\in A$.\n  % where $\\pi_a$ on the right hand side stands for the unique projection\n  % $\\pi_a : \\prod_{i \\in I} M_i \\to \\prod_{i \\in I_a} M_i$ which satisfies\n  % $\\pi_k \\circ \\pi_a = \\pi_k$ for all $k\\in I_a$.\n\\end{lemma}\n\\begin{proof}\n  By the universal property of the product the $R$-module map \n  $ \\kappa = \\kappa_{I,A}:  \\prod_{i \\in I} M_i \\to \\prod_{a \\in A}  \\prod_{i \\in I_a} M_i$\n  exists and is uniquely determined by the requirement  that $\\pi_a \\circ \\kappa_{I,A} = \\pi_{I_a}$\n  for all $a\\in A$. Naturality also follows from the universal property of the product.\n  It remains to show that $\\kappa$ is an isomorphism. To verify injectivity, assume that $\\kappa(x) = 0$. By construction,\n  $\\pi_i(x) = \\pi_i \\pi_{a(i)} \\kappa (x) =0$ for all $i\\in I$, where $a(i)\\in A$ denotes the unique index such that $i\\in I_{a(i)}$,\n  hence $x=0$.\n  So $\\kappa$ is injective. It is also surjective. To see this pick $x_a \\in \\prod_{i \\in I_a} M_i$\n  for each $a\\in A$. With $a(i)$ for $i\\in I$ defined as before put\n  $x = \\big( \\pi_i (x_{a(i)} )\\big)_{i\\in I}$. Then, by construction,\n  $\\pi_i \\pi_{a} \\kappa (x) = \\pi_i \\pi_{I_a} (x) = \\pi_i (x) = \\pi_i (x_{a})$ for all $a\\in A$ and\n  $i\\in I_a$,\n  hence $\\big(\\pi_a \\kappa (x) \\big)_{a \\in A} = (x_a)_{a \\in A}$\n  and $\\kappa$ is surjective. \n\\end{proof}\n\n\\begin{proposition}[Exponential law for multilinear maps]\n\\label{thm:exponential-law-multilinear-maps}\n  Let $(M_i)_{i\\in I}$ be a family of $R$-modules over a commutative ring $R$,\n  $N$ an $R$-module, \n  and assume that $J \\subset I$ is a non-empty subset such that the complement\n  $K = I\\setminus J$ is also non-empty. Then the map\n  \\begin{equation*}\n  \\begin{split}\n    \\eta_{I,J}: \\:&\n    \\mlinOps \\left( \\prod_{j\\in J} M_j , \\mlinOps \\left(\\prod_{k\\in K} M_k ,N \\right)\\right)\n    \\rightarrow \\mlinOps \\left( \\prod_{i\\in I} M_i, N \\right), \\\\\n    & \\hspace{1em} f \\mapsto \\left( \\prod_{i\\in I} M_i\\ni (x_i)_{i\\in I} \\mapsto\n    f\\big( (x_j)_{j\\in J} \\big)\\left( (x_k)_{k\\in K} \\right)  \\in N \\right) \n  \\end{split}\n  \\end{equation*}\n  is an isomorphism which is natural in $(M_i)_{i\\in I}$ and $N$.\n\\end{proposition}\n\n\\begin{proof}\n  \n  We first show that $\\eta= \\eta_{I,J} $ is linear. 
To this end let\\\\\n  $f,g \\in \\mlinOps \\left( \\prod_{j\\in J} M_j , \\mlinOps \\left(\\prod_{k\\in K} M_k ,N \\right)\\right)$ and $r\\in R$.\n  Then, for all $x = (x_i)_{i\\in I} \\in \\prod_{i\\in I} M_i$,\n  \\begin{equation*}\n  \\begin{split}\n    \\big( \\eta (f & +g)\\big) (x)  = \\big(f+g\\big) \\big( (x_j)_{j\\in J} \\big)\\left( (x_k)_{k\\in K} \\right)\n    = \\big( f((x_j)_{j\\in J}) + g ((x_j)_{j\\in J})\\big) \\left( (x_k)_{k\\in K} \\right) = \\\\\n    & = f((x_j)_{j\\in J})  \\left( (x_k)_{k\\in K} \\right) +  g ((x_j)_{j\\in J}) \\left( (x_k)_{k\\in K} \\right) =\n     \\big( \\eta f \\big) (x) + \\big( \\eta g \\big) (x) =  \\big( \\eta f + \\eta g \\big) (x) \n  \\end{split}\n  \\end{equation*}\n  and\n  \\begin{equation*}\n  \\begin{split}\n    \\big( \\eta (rf)\\big) (x) & = (rf) ((x_j)_{j\\in J})  \\left( (x_k)_{k\\in K} \\right) =\n    \\big( r f((x_j)_{j\\in J})\\big) \\left( (x_k)_{k\\in K} \\right) =\n    r \\big( f((x_j)_{j\\in J})\\left( (x_k)_{k\\in K} \\right)\\big) = \\\\\n    & = r \\big( \\eta f (x)\\big) =  \\big( r (\\eta f ) \\big) (x) \\ .\n  \\end{split}\n  \\end{equation*}\n  Hence $\\eta$ is an $R$-module map.\n\n  Next we show that $\\eta$ is an isomorphism by constructing an inverse.\n  Given $f \\in \\mlinOps \\big( \\prod_{i\\in I} M_i, N \\big)$ we define\n  $f^\\sharp : \\prod_{j\\in J} M_j \\to \\mlinOps \\big(\\prod_{k\\in K} M_k ,N \\big)$ by the requirement that \n  \\[\n      f^\\sharp (y) (z) = f (x_{y,z}) \\quad \\text{for all}\\enspace y = (y_j)_{j\\in J}\\enspace \\text{and} \\enspace z = (z_k)_{k \\in K} \\ ,\n  \\]\n  where $x_{y,z}$ is the  element of $\\prod_{i\\in I}M_i$ uniquely determined by\n  \\[\n    \\pi_i (x_{y,z}) =\n    \\begin{cases}\n      y_i & \\text{for}\\enspace i \\in J , \\\\\n      z_i & \\text{for}\\enspace i \\in K .   \n    \\end{cases}\n  \\] \n  One thus obtains an $R$-module map \n  \\[\n    (-)^\\sharp_{I,J} :\n    \\mlinOps \\left( \\prod_{i\\in I} M_i, N \\right) \\to\n    \\mlinOps \\left( \\prod_{j\\in J} M_j , \\mlinOps \\left(\\prod_{k\\in K} M_k ,N \\right)\\right),\\quad f \\mapsto f^\\sharp\n  \\]\n  which by construction is inverse to $\\eta_{I,J}$. \n\n  Naturality of $\\eta_{I,J}$ in $(M_i)_{i\\in I}$ and $N$ is clear by definition. \n\\end{proof}\n\n\\begin{definition}\n  Let $(M_i)_{i\\in I}$ be a family of $R$-modules over a commutative ring $R$. 
By a\n  \\emph{tensor product} of  $(M_i)_{i\\in I}$ one understands an $R$-module $\\bigotimes_{i\\in I}M_i$\n  together with a multilinear map\n  $\\tau : \\prod_{i\\in I}M_i \\to \\bigotimes_{i\\in I}M_i$ such that the following universal property is fulfilled:\n  \\begin{axiomlist}[\\hspace{2.5em}]\n  \\item[\\textup{\\sffamily (ITensor)}]\n   \\label{axiom:hilbert-tensor-product}\n   For every $R$-module $N$ and every multilinear map \n   $f : \\prod_{i\\in I}M_i \\to N$ there exists a unique $R$-module map\n   $\\overline{f}: \\bigotimes_{i\\in I}M_i \\to N$ \n   such that the diagram \n   \\begin{displaymath}\n   \\begin{tikzcd}\n       \\prod\\limits_{i\\in I}M_i  \\ar[d,\"\\tau\",swap] \\ar[r,\"f\"]  & N \\\\\n       \\bigotimes\\limits_{i\\in I}M_i \\ar[ru,\"\\overline{f}\",swap]\n   \\end{tikzcd}\n   \\end{displaymath}\n   commutes.\n \\end{axiomlist} \n\n The linear map $\\overline{f}$ making the diagram commute will sometimes be called the \\emph{linearization}\n of the multilinear map $f$.\n   \n Given a tensor product\n $\\big( \\bigotimes_{i\\in I}M_i,\\tau\\big)$, we will usually denote the image of an element \n $(x_i)_{i\\in I} \\in \\prod_{i\\in I}M_i$\n under the map $\\tau$ by $\\otimes_{i\\in I} x_i$. \n\\end{definition}\n\n\\begin{remarks}\n  \\begin{environmentlist}\n  \\item\n    Strictly speaking, a tensor product of a family $(M_i)_{i\\in I}$ of $R$-modules is a pair\n    $\\big( \\bigotimes_{i\\in I}M_i,\\tau\\big)$ having the above properties. By slight abuse of language, one\n    usually denotes a tensor product just by its first component, the $R$-module $\\bigotimes_{i\\in I}M_i$.\n    When helpful for clarity, the associated map $\\tau: \\prod_{i\\in I}M_i \\to \\bigotimes_{i\\in I}M_i$\n    will be denoted by $\\tau_{(M_i)_{i\\in I}}$ or by $\\tau_I$. \n  \\item   \n    In the case where the index set $I$  of the family $(M_i)_{i\\in I}$ is infinite, one\n    sometimes calls $\\bigotimes_{i\\in I}M_i$ an \\emph{infinite tensor product}. \n  \\end{environmentlist}\n\\end{remarks}\n\n\n\n\\begin{theorem}\\label{thm:construction-fundamental-properties-infinite-tensor-product}\n  Let $(M_i)_{i\\in I}$ be a family of $R$-modules over a commutative ring $R$. Then the following\n  holds true.\n  \\begin{romanlist}\n  \\item\n    A tensor product $\\bigotimes_{i\\in I}M_i$ of the family $(M_i)_{i\\in I}$ exists and is\n    unique up to isomorphism. \n    If $I$ is the empty set, then $\\bigotimes_{i\\in I}M_i = R$, if $I$ contains a single element $i_\\circ$,\n    then  $\\bigotimes_{i\\in I}M_i =M_{i_\\circ}$.   
\n  \\item\n    If $(N_i)_{i\\in I}$ is a second family of $R$-modules and $(f_i)_{i\\in I}$ a family of\n    $R$-module maps $f_i :M_i \\to N_i$, then there exists a unique linear map \n    $\\bigotimes_{i\\in I}f_i: \\bigotimes_{i\\in I}M_i \\to \\bigotimes_{i\\in I}N_i$ making\n    the  diagram\n    \\begin{displaymath}\n    \\begin{tikzcd}\n       \\prod\\limits_{i\\in I}M_i  \\ar[d,\"\\tau\",swap] \\ar[r,\"f\"] & \\bigotimes\\limits_{i\\in I}N_i   \\\\\n       \\bigotimes\\limits_{i\\in I}M_i \\ar[ru,\"\\bigotimes\\limits_{i\\in I}f_i\",swap]\n   \\end{tikzcd}\n   \\end{displaymath}\n   commute, where $f:\\prod_{i\\in I}M_i\\to \\bigotimes_{i\\in I}N_i$ is the multilinear map\n   $(x_i)_{i\\in I}\\mapsto \\otimes_{i\\in I} f_i(x_i)$.\n \\item\n   Let $J\\subset I$ be a finite non-empty subset such that $M_j$ is isomorphic  to $R$ for all $j\\in J$.\n   Denote for each $j\\in J$ by $1_j$ the image of the unit $1\\in R$ under the isomorphism $R\\cong M_j$\n   and by $1_J$ the family $(1_j)_{j\\in J}$. Moreover, for every family $y =(y_j)_{j\\in J}$\n   let $\\iota_{J,y} : \\prod_{i\\in I\\setminus J} M_i \\to \\prod_{i\\in I}M_i$ be the map which associates\n   to $x\\in\\prod_{i\\in I\\setminus J} M_i$ the family $ (x_i)_{i\\in I}$ such that $x_i =\\pi_i (x)$ for\n   $i \\in I\\setminus J$ and  $x_i = y_i$ for $i \\in J$. Then the linearization\n   $\\overline{\\iota}_{J,1_J}: \\bigotimes_{i\\in I\\setminus J}M_i \\to  \\bigotimes_{i\\in I}M_i $ of the multilinear map\n   $\\tau_I \\circ \\iota_{J,1_J}: \\prod_{i\\in I\\setminus J}M_i \\to  \\bigotimes_{i\\in I}M_i$ is an isomorphism. \n  \\end{romanlist}\n\\end{theorem} \n\n\\begin{proof}\n\\begin{adromanlist}\n\\item\n  By its universal property, the tensor product of the family $(M_i)_{i\\in I}$ is uniquely determined\n  up to isomorphism. Hence it remains to show the existence of the tensor product.\n  To this end consider the free $R$-module  % $R^{(\\prod_{i\\in I}M_i)}$\n  over the set $\\prod_{i\\in I}M_i$ and denote it by $F$. Let  $\\delta: \\prod_{i\\in I}M_i \\hookrightarrow F$ be\n  the canonical injection  \n  %Identify an element $(x_i)_{i\\in I}\\in\\prod_{i\\in I}M_i$\n  %with its image in $F$ under the canonical injection $\\iota : \\prod_{i\\in I}M_i \\hookrightarrow F$\n  and $U$ be the submodule of $F$ spanned by the elements\n  \\[\n    \\delta \\big( \\iota_j (r y_j + z_j) + (x_i)_{i\\in I} \\big) - r \\delta \\big( \\iota_j (y_j) + (x_i)_{i\\in I}\\big)\n    - \\delta \\big( \\iota_j (z_j) + (x_i)_{i\\in I}\\big) \\ ,\n  \\]\n  where $j\\in I$, $y_j , z_j \\in M_j$, $r \\in R$,  and $(x_i)_{i\\in I} \\in \\pi_j^{-1} (0)$. Then put\n  $\\bigotimes_{i\\in I}M_i = F/U$ and define $\\tau$ as the composition of the canonical projection\n  $\\pi : F \\to \\bigotimes_{i\\in I}M_i$  with $\\delta : \\prod_{i\\in I}M_i \\to F$. By construction, $\\tau$ is multilinear.\n  Assume that $N$ is an $R$-module and $f : \\prod_{i\\in I}M_i \\to N$ is a multilinear map. 
By the universal property of\n  free $R$-modules, $f$ lifts to a unique $R$-linear map $f^\\prime : F \\to N$ such that $f = f^\\prime \\circ \\delta$.\n  By multilinearity of $f$, the map $f^\\prime$ vanishes on the submodule $U$, hence descends to an $R$-linear map\n  $\\overline{f}: \\bigotimes_{i\\in I}M_i \\to N$ such that $f^\\prime = \\overline{f} \\circ \\pi$.\n  Hence  $f = f^\\prime \\circ \\delta = \\overline{f} \\circ \\pi  \\circ \\delta  = \\overline{f} \\circ \\tau$.\n  By surjectivity of $\\delta$ and uniqueness of $f^\\prime$, $\\overline{f}$ is the unique $R$-linear map satisfying\n  $f = \\overline{f} \\circ \\tau$. Hence $\\big( \\bigotimes_{i\\in I}M_i,\\tau\\big)$ is a tensor product of the family\n  $(M_i)_{i\\in I}$.\n  \n  In case $I=\\emptyset$, the cartesian product $\\prod_{i\\in I}M_i$ is final in the category of sets, hence \n  consists of only one element $\\star$. This means in particular that \n  for an $R$-module $N$  any map $f: \\prod_{i\\in I}M_i = \\{ \\star\\} \\to N$ is multilinear. Put \n  $\\bigotimes_{i\\in I}M_i = R$ and let $\\tau : \\{ \\star\\} \\to R$ be the map $\\star \\mapsto 1$. \n  Now let $\\overline{f}: R \\to N$ be the unique linear map such that $\\overline{f}(1)= f(\\star)$.\n  Then $f = \\overline{f}\\circ \\tau$ \n  and the pair $(R,\\tau)$ fulfills the universal property of the tensor product. \n  \n  If $I$ is a singleton with unique element $i_\\circ$, then $\\prod_{i\\in I} M_i = M_{i_\\circ}$ \n  and a map $f: \\prod_{i\\in I} M_i \\to N$ is multilinear if and only if $f$ as a map from $M_{i_\\circ}$ to $N$ \n  is linear. This implies that the pair $(M_{i_\\circ},\\id_{M_{i_\\circ}})$ then is a tensor product \n  for the family $(M_i)_{i\\in I}$. \n\\item This is an immediate consequence of the universal property of the tensor product.\n\\item\n  We construct an inverse to\n  $\\overline{\\iota}_{J,1_J}: \\bigotimes_{i\\in I\\setminus J}M_i \\to  \\bigotimes_{i\\in I}M_i $.\n  Let $x = (x_i)_{i\\in I}$ be an element of $\\prod_{i\\in I}M_i$ and put\n  \\[\n    \\lambda (x) =  \\left( \\prod_{j\\in J} x_j \\right)\\cdot \\otimes_{i\\in I\\setminus J} x_i\n    = \\left( \\prod_{j\\in J} x_j \\right) \\cdot \\tau_{I\\setminus J} ((x_i)_{i\\in I\\setminus J}) \\ .\n  \\]\n  Then $\\lambda : \\prod_{i\\in I}M_i \\to \\bigotimes_{i\\in I\\setminus J} M_i$\n  is multilinear by construction, hence factors through a linear map\n  $\\overline{\\lambda} : \\bigotimes_{i\\in I}M_i \\to \\bigotimes_{i\\in I \\setminus J} M_i$.\n  By definition, $\\overline{\\lambda}$ is a left inverse of $\\overline{\\iota}_{J,1_J}$. 
It is also a\n  right inverse since for all $(x_i)_{i\\in I} \\in \\prod_{i\\in I}M_i$  by multilinearity of $\\tau_I$  \n  \\begin{equation*}\n  \\begin{split}\n     \\overline{\\iota}_{J,1_J} \\circ  \\overline{\\lambda}\\circ  \\tau_I  \\left( (x_i)_{i\\in I} \\right) &  =\n    \\overline{\\iota}_{J,1_J}\n    \\left( \\left( \\prod_{j\\in J} x_j \\right) \\cdot  \\otimes_{i\\in I\\setminus J} x_i \\right)\n    =\\left( \\prod_{j\\in J} x_j\\right)\\cdot\\left( \\overline{\\iota}_{J,1_J} \\circ \\tau_{I\\setminus J}\n    \\left((x_i)_{i\\in I\\setminus J}\\right)\\right) = \\\\ & \\hspace{-5em}\n    =\\left( \\prod_{j\\in J} x_j\\right)\\cdot\\left( \\tau_I \\circ \n    \\iota_{J,1_J}\\left( (x_i)_{i\\in I\\setminus J}\\right)\\right) =\n    \\tau_I \\circ \\iota_{J,(x_j)_{j\\in J}}\\left( (x_i)_{i\\in I\\setminus J}\\right) =\n    \\tau_I \\left( (x_i)_{i\\in I}\\right) % = \\otimes_{i\\in I} x_i \n  \\end{split}\n  \\end{equation*}\n  and since by construction of the tensor product the image of $\\tau_I$ is a generating system for\n  the $R$-module $\\bigotimes_{i\\in I}M_i$.\n\\end{adromanlist}\n\\end{proof}\n\n\\begin{lemma}\\label{thm:image-generating-system-canoncial-map-finite-tensor-product-generating-system} \n  Assume that $(M_i)_{i\\in I}$ is a finite family of $R$-modules such that for every $i\\in I$ a\n  generating set $S_i$ of the $R$-module $M_i$ has been given. Then the set\n  $S = \\tau \\left( \\prod_{i\\in I}S_i \\right)$ is a generating set of the tensor product\n  $\\bigotimes_{i\\in I} M_i$. \n\\end{lemma}\n\n\\begin{proof}\n  By construction of the tensor product in the proof of\n  \\Cref{thm:construction-fundamental-properties-infinite-tensor-product} it is clear that\n  a generating set of $\\bigotimes_{i\\in I} M_i$ is given by the set\n  of elements of the form $\\otimes_{i\\in I}x_i$ where $(x_i)_{i\\in I}\\in \\prod_{i\\in I}M_i$.\n  Each of the $x_i$ can now be represented in the form\n  \\[\n    x_i = \\sum_{k=1}^{n_i} r_{i,k} s_{i,k} \\quad\\text{with}\\enspace r_{i,1},\\ldots ,r_{i,n_i}\\in R,\\enspace\n    s_{i,1},\\ldots ,s_{i,n_i}\\in S_i \\ .\n  \\]\n  Hence, by multilinearity of $\\tau$ and with $I=\\{ i_1,\\ldots ,i_d\\}$,\n  \\[\n    \\otimes_{i\\in I}x_i = \\tau \\left( (x_i)_{i\\in I} \\right) =\n    \\sum_{k_{i_1} =1}^{n_{i_1}}\\cdots  \\sum_{k_{i_d} =1}^{n_{i_d}} r_{i_1,k_{i_1}} \\cdot \\ldots \\cdot r_{i_d,k_{i_d}}\n    \\cdot \\tau \\left( (s_{i,k_i})_{i\\in I} \\right)  \\ ,\n  \\]\n  so $\\otimes_{i\\in I}x_i$ is a linear combination of elements of $S$ and the claim is proved. \n\\end{proof}\n\n\\begin{lemma}\\label{thm:componentwise-multilinear-maps-factorization}\n  Let $(M_i)_{i\\in I}$ be a family of $R$-modules, $(I_a)_{a\\in A}$ a finite partition\n  of the index set $I$, and $N$ an $R$-module. For $a\\in A$ put\n  $N_a = \\bigotimes_{i\\in I_a} M_i$ and let $\\tau_a:  \\prod_{i\\in I_a} M_i \\to  N_a$\n  denote the canonical map. Assume that\n  $f : \\prod_{a\\in A}\\prod_{i\\in I_a} M_i \\to N$ is a map which is\n  \\emph{componentwise multilinear} in the following sense. \n  \\begin{itemize}\n  \\item[$(\\mathsf{CM})$\\hspace{-1mm}]\n    For all $b\\in A$, all families $y=(y_a)_{a\\in A}   \\in \\prod_{a\\in A}\\prod_{i\\in I_a} M_i $\n    with $y_b = 0$, 
all $j\\in I_b$, and all families\n    $x=(x_i)_{i\\in I_b}\\in \\prod_{i\\in I_b} M_i$ with $x_j  =0$, the  map\n    \\[\n       M_j \\to N , \\enspace m \\mapsto f( \\iota_b (\\iota_j (m) + x) + y)\n    \\]\n    is linear.\n  \\end{itemize}\n  Then $f$ factors through\n  $(\\tau_a)_{a\\in A}: \\prod_{a\\in A}\\prod_{i\\in I_a} M_i \\to\n  \\prod_{a\\in A}N_a $. More precisely, there exists\n  a unique multilinear map $\\overline{f} : \\prod_{a\\in A}N_a \\to N$ such that\n  \\[\n     f = \\overline{f} \\circ (\\tau_a)_{a\\in A} \\ .\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  We prove the claim by induction on the cardinality of $A$. \n  If $A$ is a singleton, then $\\prod_{a \\in A} \\prod_{i\\in I_a} M_i$ canonically\n  coincides with $\\prod_{i\\in I} M_i$ and $f: \\prod_{i\\in I_a} M_i \\to N$\n  is multilinear, hence by the universal property of the tensor product there\n  exists a unique linear map $\\overline{f}: N_a \\to N$ such that\n  $f = \\overline{f} \\circ \\tau_a$. \n\n  Now assume that the claim holds whenever the cardinality of the index set $A$ is\n  $\\leq n$ for some $n \\in \\N^*$.\n  Assume to be given initial data $(M_i)_{i\\in I}$ and $N$, a partition\n  $(I_a)_{a\\in A}$ of $I$ with $|A| = n+1$ and a componentwise multilinear map\n  $f: \\prod_{a\\in A}\\prod_{i\\in I_a} M_i \\to N$. Fix $a \\in A$ and put\n  $B =A \\setminus \\{ a\\}$. Let\n  $x = (x_i)_{i\\in I_a} \\in \\prod_{i\\in I_a}M_i$\n  and $\\widetilde{x}$ be the element of $ \\prod_{d\\in A}\\prod_{i\\in I_d} M_i $ such that\n  \\[\n    \\pi_d (\\widetilde{x}) =\n    \\begin{cases}\n      x &\\text{for} \\enspace d = a \\ , \\\\\n      0 &\\text{else} \\ .\n    \\end{cases}\n  \\]\n  The map\n  \\[\n    f_x: \\prod_{b\\in B}\\prod_{i\\in I_b} M_i \\to N , \\enspace\n         y  \\mapsto f (\\iota_B (y) + \\widetilde{x}) \n  \\]\n  then is componentwise multilinear. Hence by inductive assumption there  exists\n  a unique multilinear map $\\overline{f_x} : \\prod_{b\\in B}N_b \\to N$ such that\n  $f_x = \\overline{f_x}\\circ (\\tau_b)_{b\\in B}$. By assumption on $f$\n  the map\n  $\\prod_{i\\in I_a} M_i \\to \\Map \\left( \\prod_{b\\in B} \\prod_{i\\in I_b} M_i,N\\right) $,\n  $x \\mapsto f_x$ is multilinear, which implies multilinearity of\n  \\[\n    \\overline{f_\\bullet} :\n    \\prod_{i\\in I_a} M_i \\to \\mlinOps \\left( \\prod_{b\\in B} N_b ,N \\right),\n    \\enspace x \\mapsto \\overline{f_x} \\ .\n  \\]\n  Let $F: N_a \\to \\mlinOps \\left( \\prod_{b\\in B} N_b,N \\right)$ be its\n  linearization. Application of the exponential law for multilinear maps,\n  \\Cref{thm:exponential-law-multilinear-maps}, now gives a\n  multilinear map $\\eta (F) : \\prod_{d \\in A} N_d \\to N$ which we denote \n  by $\\overline{f}$. 
Given a family $(x_d)_{d\\in A}$ of families\n  $x_d =(x_i)_{i\\in I_d}$ one checks\n  \\[\n    \\overline{f} \\left( \\big(\\tau_d (x_d)\\big)_{d\\in A} \\right)\n    = F \\big( \\tau_a(x_a) \\big) \\left( \\big(\\tau_b (x_b)\\big)_{b\\in B} \\right)\n    = \\overline{f}_{x_a} \\left( \\big(\\tau_b (x_b)\\big)_{b\\in B} \\right)\n    = f_{x_a} \\left( (x_b)_{b\\in B} \\right) = f  \\left( (x_d)_{d\\in A} \\right) \\ .\n  \\]\n  Hence $\\overline{f} \\circ (\\tau_d)_{d\\in A} =f$.\n  To finish the induction step it remains to prove uniqueness.\n  So let $\\overline{g} :  \\prod_{d \\in A} N_d \\to N$ be another multilinear map\n  such that $\\overline{g} \\circ (\\tau_d)_{d\\in A} =f$  and consider\n  the induced linear map\n  $\\overline{g}^\\sharp =\\eta^{-1} (\\overline{g}) : N_a \\to \\mlinOps (\\prod_{b\\in B}N_b, N)$.\n  Then for every $x\\in \\prod_{i\\in I_a}M_i$ the relation \n  \\[\n    \\overline{g}^\\sharp (\\tau_a(x)) \\circ (\\tau_b)_{b\\in B} = f_x =\n    \\overline{f}_x \\circ (\\tau_b)_{b\\in B} \n  \\]\n  is satisfied. \n  Hence, by the uniqueness part of the inductive assumption, $\\overline{g}^\\sharp (\\tau_a(x)) = \\overline{f}_x$ for all $x\\in \\prod_{i\\in I_a}M_i$, which entails\n  that $\\overline{g}^\\sharp$ coincides with $F$, since both maps are linear and the image of $\\tau_a$ generates $N_a$. By\n  \\Cref{thm:exponential-law-multilinear-maps} one obtains\n  $\\overline{g} = \\overline{f}$. This finishes the induction step and the lemma\n  is proved. \n\\end{proof}\n\n\\begin{proposition}\n  Let $(M_i)_{i\\in I}$ be a family of $R$-modules and $(I_a)_{a\\in A}$ a finite partition of the index set $I$.\n  Then there exists a natural isomorphism\n  \\[\n    \\alpha_{I,A}: \\bigotimes_{i \\in I} M_i \\to \\bigotimes_{a \\in A}  \\bigotimes_{i \\in I_a} M_i .\n  \\]  \n\\end{proposition}\n\n\\begin{proof}\n  Put $N_a = \\bigotimes_{i \\in I_a} M_i$  for  $a \\in A$ and let\n  $\\tau_a :  \\prod_{i \\in I_a} M_i \\to  N_a$ be the canonical map to the tensor product.\n  Let $\\tau_A :\\prod_{a \\in A}N_a \\to \\bigotimes_{a \\in A} N_a$ be the canonical map to\n  the tensor product of the modules $N_a$.\n  Define $\\tau_{I,A}: \\prod_{i \\in I} M_i\\to \\prod_{a \\in A} N_a$ as the unique map so that\n  $\\pi_a  \\circ \\tau_{I,A} = \\tau_a \\circ \\pi_{I_a}$  for all $a\\in A$.\n  % As before, the map $\\pi_a$ on the right hand side is hereby the unique map\n  % $\\pi_{I_a} : \\prod_{i\\in I} M_i\\to\\prod_{i\\in I_a} M_i $\n  % such that $\\pi_k \\circ \\pi_{I_a} =\\pi_k$ for all $k\\in I_a$.\n  By construction $\\tau_{I,A} = (\\tau_a)_{a\\in A} \\circ \\kappa_{I,A}$,\n  where $\\kappa_{I,A} :  \\prod_{i\\in I} M_i \\to \\prod_{a \\in A} \\prod_{i \\in I_a} M_i$ is the natural isomorphism from\n  \\Cref{thm:associator-cartesian-product}.  \n  The composition $\\tau_A \\circ \\tau_{I,A}$ then is multilinear by \\Cref{thm:construction-multilinear-maps-composition}\n  \\ref{ite:multilinearity-composition-multilinear-map-product-multilinear-map}, hence factors through a linear map\n  $\\alpha_{I,A} : \\bigotimes_{i\\in I} M_i \\to \\bigotimes_{a \\in A} N_a$ \n  that is\n  \\begin{equation}\\label{eq:defining-equation-associator-map-tensor-product}\n    \\tau_A \\circ (\\tau_a)_{a\\in A} \\circ \\kappa_{I,A} =\n    \\alpha_{I,A} \\circ \\tau_I \\ .\n  \\end{equation}  \n     \n  Naturality of $\\alpha_{I,A}$ in  $(M_i)_{i\\in I}$ is clear by definition so it remains to\n  construct an inverse to $\\alpha_{I,A}$.  Consider the composition\n  $\\tau_I \\circ \\kappa^{-1}: \\prod_{a \\in A} \\prod_{i \\in I_a} M_i\n   \\to \\bigotimes_{i\\in I} M_i$. 
Assume that $a \\in A$ and\n  $(y_b)_{b\\in A\\setminus\\{a\\}} \\in \\prod_{b \\in A\\setminus\\{a\\}} \\prod_{i \\in I_b} M_i$\n  have been chosen. Let $y_a\\in \\prod_{i \\in I_a} M_i$ be $0$, put\n  $\\widetilde{y} =(y_d)_{d\\in A} \\in \\prod_{d \\in A}  \\prod_{i \\in I_d} M_i$, and let\n  $y\\in  \\prod_{i \\in I} M_i$ be the family such that $\\pi_i(y) = \\pi_i (y_{a(i)})$ for\n  all $i\\in I$, where $a(i)$ denotes the unique element of $A$  such that\n  $i\\in I_{a(i)}$. In other words let $y =\\kappa^{-1} (\\widetilde{y})$.\n  For every $j\\in I_a$ and $x= (x_i)_{i\\in I_a}\\in \\prod_{i\\in I_a}M_i$\n  with $\\pi_j (x)=0$ the map\n  \\[\n    M_j \\to \\bigotimes_{i\\in I} M_i,\\enspace m \\mapsto \\tau_I \\circ \\kappa^{-1} \\left( \\iota_a (\\iota_j(m)+x)+\\widetilde{y}\\right)\n    = \\tau_I \\left( \\iota_j (m) + \\iota_{I_a}(x) + y \\right)\n  \\]\n  then is linear since $\\tau_I$ is multilinear and $\\pi_j(\\iota_{I_a}(x) + y)=\\pi_j(x) +\\pi_j(y_a) = 0$.\n  Hence $\\tau_I \\circ \\kappa^{-1}$ is componentwise multilinear and therefore,\n  by \\Cref{thm:componentwise-multilinear-maps-factorization}, factors\n  through the map\n  $(\\tau_a)_{a\\in A} : \\prod_{a\\in A} \\prod_{i\\in I_a} M_i\\to \\prod_{a\\in A} N_a$\n  which means that\n  \\begin{equation}\\label{eq:defining-equation-inverse-associator-map-tensor-product}\n    \\tau_I \\circ \\kappa^{-1} = \\lambda_{I,A}\\circ (\\tau_a)_{a\\in A}\n  \\end{equation}\n  for some uniquely defined multilinear map\n  $\\lambda_{I,A} : \\prod_{a\\in A} N_a\\to\\bigotimes_{i\\in I} M_i$.\n  Let\n  \\[ \\overline{\\lambda}_{I,A} : \\bigotimes_{a\\in A} N_a\\to\\bigotimes_{i\\in I} M_i\\]\n  be the linearization of $\\lambda_{I,A}$.\n  We claim that $\\overline{\\lambda}_{I,A}$ is inverse to $\\alpha_{I,A}$.\n  By definition of $\\overline{\\lambda}_{I,A}$ and \n  Eqs.~\\eqref{eq:defining-equation-associator-map-tensor-product} and\n  \\eqref{eq:defining-equation-inverse-associator-map-tensor-product} one concludes\n  %\n  \\[\n    \\overline{\\lambda}_{I,A} \\circ \\alpha_{I,A} \\circ \\tau_I =\n    \\overline{\\lambda}_{I,A} \\circ \\tau_A \\circ (\\tau_a)_{a\\in A} \\circ \\kappa_{I,A} =\n    \\lambda_{I,A} \\circ (\\tau_a)_{a\\in A} \\circ \\kappa_{I,A} = \\tau_I \\ . \n  \\]\n  Since the image of $\\tau_I$ generates $\\bigotimes_{i\\in I} M_i$ as an $R$-module,\n  $\\overline{\\lambda}_{I,A}$ has to be left inverse to $\\alpha_{I,A}$.\n  Using Eqs.~\\eqref{eq:defining-equation-associator-map-tensor-product} and\n  \\eqref{eq:defining-equation-inverse-associator-map-tensor-product} again compute\n  \\[\n    \\alpha_{I,A} \\circ\\overline{\\lambda}_{I,A} \\circ \\tau_A \\circ (\\tau_a)_{a\\in A} =\n    \\alpha_{I,A} \\circ \\lambda_{I,A} \\circ (\\tau_a)_{a\\in A} = \\alpha_{I,A}\\circ\\tau_I\\circ \\kappa_{I,A}^{-1}\n    =  \\tau_A \\circ (\\tau_a)_{a\\in A} \\ .    \n  \\]\n  Since by \\Cref{thm:image-generating-system-canoncial-map-finite-tensor-product-generating-system}\n  the image of\n  $\\tau_A \\circ (\\tau_a)_{a\\in A}$ generates $\\bigotimes_{a \\in A}  \\bigotimes_{i \\in I_a} M_i$,  the equality\n  \\[ \\alpha_{I,A} \\circ\\overline{\\lambda}_{I,A}=\\id_{\\bigotimes_{a \\in A}  \\bigotimes_{i \\in I_a} M_i} \\]\n  follows and the proposition is proved.\n\\end{proof}\n\n\\begin{propanddef}\n  Let $(A_i)_{i\\in I}$ be a family of $R$-algebras. 
Then the tensor product\n  $A = \\bigotimes_{i\\in I} A_i$ carries in a natural way the structure of an\n  $R$-algebra where the product map is defined by\n  \\[\n    \\cdot :  A \\times A \\to A , \\enspace\n    (\\otimes_{i\\in I} a_i , \\otimes_{i\\in I} b_i)\\mapsto\n    \\otimes_{i\\in I} (a_i\\cdot b_i) \\ .\n  \\]\n  If each of the algebras $A_i$ is commutative, then $A$ is commutative\n  as well.\n  Likewise, if each $A_i$ is unital and $1_i$ denotes the unit element of\n  $A_i$, then $A$ is unital with unit given by $1 = \\otimes_{i\\in I} 1_i$.\n  One calls $A$ the \\emph{tensor product algebra} of the family of algebras\n  $(A_i)_{i\\in I}$.\n\\end{propanddef}\n\n\\begin{proof}\n  The map\n  \\[\n    \\prod_{(i,k)\\in I \\times \\{ 1,2\\}} A_i \\to A,\\enspace\n    (a_{i,k})_{(i,k) \\in I\\times \\{1,2\\}} \\mapsto\\otimes_{i\\in I}(a_{i,1}\\cdot a_{i,2})\n  \\]\n  is multilinear by bilinearity of the product maps on the $A_i$ and multilinearity of $\\tau_I$,\n  so factors through a linear map\n  $\\mu: A \\otimes A \\cong \\bigotimes_{(i,k)\\in I\\times \\{1,2\\}} A_i \\to A$. Composition of\n  $\\mu$ with the canonical bilinear map $A \\times A \\to A\\otimes A$ gives the product map\n  $\\cdot : A \\times A\\to A$\n  and shows that the product on $A$ is well-defined. By construction, the product map $\\cdot$\n  is bilinear. Given $\\otimes_{i\\in I} a_i, \\otimes_{i\\in I} b_i, \\otimes_{i\\in I} c_i \\in A$ one\n  computes\n  \\[\n    \\big( \\otimes_{i\\in I} a_i \\cdot \\otimes_{i\\in I} b_i \\big) \\cdot \\otimes_{i\\in I} c_i\n    = \\otimes_{i\\in I} ((a_i\\cdot b_i)\\cdot c_i) =\n    \\otimes_{i\\in I} (a_i\\cdot( b_i\\cdot c_i)) =\n    \\otimes_{i\\in I} a_i \\cdot \\big( \\otimes_{i\\in I} b_i  \\cdot \\otimes_{i\\in I} c_i\\big) \\ .\n  \\]\n  This entails that the product on $A$ is associative. In the same way one shows\n  that $A$ is commutative respectively unital if each of the $A_i$ is. \n\\end{proof}\n\n\\para\nAs we have seen, the infinite tensor product construction works well for objects of\nalgebraic categories like $R$-modules, vector spaces or $R$-algebras. 
As soon as\ntopologies compatible with the algebraic structure come in, it becomes difficult and\nsometimes even impossible to construct or even define \n", "meta": {"hexsha": "405bc015b9789f051a5c73c547db35d838b9bfd5", "size": 35403, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Example/sections/infinite-tensor-products.tex", "max_stars_repo_name": "martinpflaum/latex_to_html", "max_stars_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-11-13T15:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T14:08:26.000Z", "max_issues_repo_path": "Example/sections/infinite-tensor-products.tex", "max_issues_repo_name": "martinpflaum/latex_to_html", "max_issues_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-11T13:18:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T22:02:11.000Z", "max_forks_repo_path": "Example/sections/infinite-tensor-products.tex", "max_forks_repo_name": "martinpflaum/latex_to_html", "max_forks_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-13T15:22:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-13T15:22:47.000Z", "avg_line_length": 52.8402985075, "max_line_length": 136, "alphanum_fraction": 0.6391266277, "num_tokens": 13669, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8104788995148791, "lm_q1q2_score": 0.5772491568810604}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{tcolorbox}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{float}\n\\usepackage[\ntop    = 2.50cm,\nbottom = 2.50cm,\nleft   = 2.75cm,\nright  = 2.75cm]{geometry}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{Linear algebra basics}\n\\rhead{EPFL/Alp Ozen}\n\n\\newtheorem{thm}{Theorem}[subsection]\n\\newtheorem{property}{Property}\n\\newtheorem{cor}{Corollary}[subsection]\n\\newtheorem{prop}{[Proposition]}\n\\newtheorem{rem}{Remark}[subsection]\n\\newtheorem{definition}{Definition}[subsection]\n\\numberwithin{equation}{subsection}\n\n\\title{Linear Algebra 1}\n\\author{alp.ozen}\n\\date{September 2019}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Linear equations}\n\\subsection{Basics}\n\n\\begin{tcolorbox}\nA linear equation is any equation of form:\n$a_{1}x_{1}+ ..... + a_{n}x_{n} = b$\nwhere the a are 'scalars' that belong to a field and the x belong to the vector set. \n\\end{tcolorbox}\n\nA \\textit{system of linear equation}s is simply a collection of linear equations. The solution of this system, if there is any, is an ordered list $(s_{1},...,s_{n})$ where each $s_{i}$ is the value of each $x_{i}$.\n\\\\\n\nA system consisting of 2 unknowns and two linear equations is generally the intersection of two lines on a cartesian plane. Note that the lines may be parallel or even colinear. \n\\\\\n\nIn general, a system will have:\n\\begin{itemize}\n    \\item no solution\n    \\item unique solution\n    \\item infinite solutions \n\\end{itemize}\n\\\\\n\nWe may choose to represent a system of linear equations as an \\textit{augmented matrix}. A matrix is called n$\\times$m if it is of form:\n\\begin{equation*}\n    \\begin{bmatrix}\n    a_{11} && a{12} &&  ... \\\\\n    a_{21} && a {22}  && ... \\\\\n    \\end{bmatrix}\n\\end{equation*}\n\n\\subsection{Solving a linear system}\n\\subsubsection{Basics}\n\nWhen solving a system, our goal is to replace each linear equation with an equivalent set(one that has the same solution) and obtain single linear equations which are trivial to solve. \n\\\\\n\nIn solving a system, we use the elementary row operations which are: \n\n\\begin{tcolorbox}\n\\begin{itemize}\n    \\item Interchange two rows (or columns).\n    \\item Multiply each element in a row (or column) by a non-zero number.\n    \\item Multiply a row (or column) by a non-zero number and add the result to another row (or column).\n\\end{itemize}\n\\end{tcolorbox}\n\nOur goal is to transform our matrix into echelon or row reduced echelon form. A matrix is in echelon form if it looks like this: \\begin{equation*}\n\\begin{bmatrix}\n    \\blacktriangledown &&  \\blacktriangle && \\blacktriangle \\\\\n    0 && \\blacktriangledown && \\blacktriangle \\\\\n    0 && 0 && \\blacktriangledown\n\n\\end{bmatrix}\n    \n\\end{equation*}\n\nHere, each $\\blacktriangledown$ and $\\blacktriangle$ may take on any value from the set the vector space is defined on. \n\\\\\n\nTo obtain this form, we first arrange our matrix into a form where the row with the least amount of trailing zeros is placed uptop. Then we ensure the \\textit{pivot position}(meaning first non-zero entry) has only 0 in its own column. When done, we move on the second row, find the pivot position and repeat. We repeat this process for all rows. \n\\\\\n\nIf we have a system where the number of unknowns exceeds the number of equations, we obtain a parametric solution. 
Consider this: \n\n\\begin{example}\n\\begin{equation*}\n  \\begin{bmatrix}\n    1 && 0 &&  -5 && 1 \\\\\n    0 && 1 && 1 && 4 \\\\\n    0 && 0  && 0 && 0 \\\\\n    \\end{bmatrix}\n\\end{equation*}\n\nWhich means: \n$$ x_{1} - 5x_{3} = 1$$\n$$ x_{2} +  x_{3} = 4$$\n\nNow, we can express both $x_{1}$ and $x_{2}$ in terms of $x_{3}$. We call $x_{1}$ and $x_{2}$ basic variables and $x_{3}$ the free variable. We call $x_{3}$ a free variable as we are free to choose any value for it. \n\\end{example}\n\nMost importantly, these are the conditions for row echelon and reduced row echelon form. \n\n\\begin{tcolorbox}\n \\textbf{Row echelon form if:}\n \\begin{itemize}\n     \\item all nonzero rows are above zero rows\n     \\item each leading entry is in a column strictly to the right of the leading entry of the row above it\n     \\item all entries in a column below a leading entry are 0\n \\end{itemize}\n \\textbf{Reduced row echelon form if, in addition:}\n \\begin{itemize}\n     \\item leading entry in each nonzero row is 1\n     \\item all other entries in the column of a leading entry are 0\n \\end{itemize}\n\\end{tcolorbox}\n\\\\ \n\nTo obtain reduced row echelon form, starting from the lowest nonzero row, the other terms in the column of the leading term are made zero and the leading term is scaled to 1. We repeat this process for each row. \n\\\\\n\nAs a general remark, we obtain the following.\n\n\\begin{thm}\nA system is consistent iff the rightmost column has no pivot. More simply, iff the lowest row is not of the form: \n\\begin{bmatrix}\n    0 && 0 \\dots && b\n\\end{bmatrix}\nwhere $b$ is nonzero. \n\\end{thm}\n\\\\\n\nA common confusion when solving a linear system is why the Gaussian algorithm works. To help answer this, the idea is to visualize a Cartesian plane with two random lines intersecting. Now, there can be multiple lines that intersect at the same point. But the intersection of something like $x=2$ and $y=5$ is obviously $(2,5)$ and requires less effort to solve. Thus, we try to replace our equations with those that are easier to solve and have the same solution set. \n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.5]{epflSemesterOne/linearAlgebra/figures/linearGauss.png}\n    \\caption{Equivalent systems}\n    \\label{fig1}\n\\end{figure}\n\n\\subsubsection{Reducing a system to reduced row echelon form}\n\n\\begin{tcolorbox}\n    \\begin{align*}\n     \\text{Given} \\left(\\begin{matrix}0&3&-6&6&4&-5\\\\3&-7&8&-5&8&9\\\\3&-9&12&-9&6&15\\end{matrix}\\right)\\\\\n     \\text{We first realign to get} \\left(\\begin{matrix}3&-7&8&-5&8&9\\\\3&-9&12&-9&6&15\\\\0&3&-6&6&4&-5\\end{matrix}\\right) \\\\\n     \\text{After applying elementary row operations, we get the reduced row echelon form of}\\\\\n     \\left(\\begin{matrix}1&0&-2&3&0&-24\\\\0&1&-2&2&0&-7\\\\0&0&0&0&1&4\\end{matrix}\\right)\\\\ \n     \\text{Any variable whose column is not a pivot column is called a free variable.}\\\\\n     x_{5} = 4\\\\\n     x_{2} = 2x_{3} - 2x_{4} - 7\\\\\n     x_{1} = 2x_{3} - 3x_{4} - 24\n     \\end{align*}\n\\end{tcolorbox}\n\n\\subsubsection{High school basics}\n\nWhen solving systems of 2 unknowns or 3 unknowns, we are simply considering the system of lines or planes. Sticking to 3 dimensions, let's quickly recall how we obtain the formulas of a line and plane in 3-d. 
\n\\\\\nA line in 3-d may be described in vector form as:\n\\begin{equation*}\n    l_{1} : \\begin{bmatrix}\n        A\\\\\n        B\\\\\n        C\n    \\end{bmatrix}\n    + \n    \\lambda\\begin{bmatrix}\n        D\\\\\n        E\\\\\n        F\n    \\end{bmatrix}\n\\end{equation*}\n\nor may be simplified to cartesian form as: \n\\begin{equation*}\n    k = \\frac{x - A}{D} = \\frac{y - B}{E} = \\frac{z - C}{F}\n\\end{equation*}\n\nNow, the equation of a plane is most commonly written in the form: \n\n\\begin{equation*}\n    Ax + By + Cz = D, \\ \\text{where } (A,B,C) \\text{ is the normal vector to the plane.}\n\\end{equation*}\n\n\\section{Introducing vector spaces}\n\\subsection{Basics}\nA vector space is an algebraic structure consisting of two sets, a scalar set and the vector set. The scalar set happens to be a field and the vector set an additive abelian group. We also define a 'scalar multiplication' between the scalars and vectors to satisfy the following:\n\n\\begin{itemize}\n    \\item $r_{1}(r_{2}v_{1}) = (r_{1}r_{2})v_{1}$  \\textbf{associativity of scalar multiplication}\n    \\item $r_{1}(v_{1}+v_{2}) = r_{1}v_{1}+ r_{1}v_{2}$ and the opposite as well, thus multiplication distributes over addition.\n    \\item $1_{r}v_{1} = v_{1}$\n\\end{itemize}\n\n\\begin{property}\nFrom the above properties, we can neatly deduce also that $0_{r}v_{1} = 0$:\n\\begin{align*}\n    0_{r}v_{1} = (1-1)v_{1}\\\\\n    = 1v_{1} - 1v_{1} = 0\n\\end{align*}\n\\end{property}\n\\\\\nThe most common vector space is that of $\\mathbb{R}^{n}$. Its scalar set is $\\mathbb{R}$ and its vector set is $\\mathbb{R}^{n}$. \n\\subsection{Linear combinations, span and matrix theorems}\n\n\\begin{tcolorbox}\n  A \\textbf{linear combination} of vectors is simply a sum of vectors multiplied by scalars. \n  \\begin{equation*}\n      v = a_{1}v_{1} + \\ldots + a_{k}v_{k}\n  \\end{equation*}\n  A \\textbf{span} for a given vector set $\\{v_{1}, \\ldots, v_{k}\\}$ is simply a set that consists of all the possible outputs of $\\sum_{i=1}^{k} a_{i}v_{i}$.\n\\end{tcolorbox}\n\nA common idea in linear algebra is to view a linear combination as the product of two matrices. We can't help but notice that:\n\n\\begin{equation*}\n    a_{1}v_{1} + \\ldots + a_{k}v_{k} = \\begin{bmatrix}\n        v_{1}  \\ldots v_{k}\n    \\end{bmatrix} \\begin{bmatrix}\n        a_{1} \\\\\n        \\vdots \\\\\n        a_{k}\n    \\end{bmatrix}\n\\end{equation*}\n\nAn important theorem now follows, given as a set of equivalent statements (statements that always have the same truth value). \n\n\\begin{thm}\n\\text{The following are equivalent:}\n\\begin{enumerate}\n    \\item $\\forall b \\in \\mathbb{R}^{m}, Ax=b$ \\text{ has a solution}\n    \\item $\\forall b \\in \\mathbb{R}^{m}$, \\text{$b$ is a linear combination of the columns of $A$.}\n    \\item \\text{columns of $A$ span} $\\mathbb{R}^{m}$\n    \\item \\text{$A$ (the coefficient matrix!) has a pivot position in each row.}\n\\end{enumerate}\n\\end{thm}\n\n\\subsection{Homogeneous system of equations}\n\nAny system of linear equations that can be written in the form $Ax = 0$ is called \\textbf{Homogeneous}. Clearly, this system has at least one solution, namely when $x$ is equal to the $0$ vector. A non-trivial solution is one where $x$ is not equal to $0$, in which case the solution set is described in terms of parameters. 
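\nFor a quick concrete illustration (an example made up for these notes), consider the homogeneous system with coefficient matrix\n\\begin{equation*}\n    A = \\begin{bmatrix}\n    1 && 0 && -2 \\\\\n    0 && 1 && 1 \\\\\n    \\end{bmatrix}\n\\end{equation*}\nThe system $Ax = 0$ reads $x_{1} - 2x_{3} = 0$ and $x_{2} + x_{3} = 0$, so $x_{3}$ is a free variable and\n$$ x_{1} = 2x_{3}, \\quad x_{2} = -x_{3}$$\nThe solution set is therefore the set of all scalar multiples of the vector $(2,-1,1)$.\n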
Now, a theorem that follows is: for a non-homogeneous system $Ax=b$ with solution $p$, the solution set of $Ax=b$ is the set of vectors of the form $w = p + v_{h}$, where $v_{h}$ is any solution to $Ax = 0$.\n\\ We now list some useful theorems. \n\begin{thm}\nA homogeneous equation $Ax = 0$ has a nontrivial solution iff it has at least one free variable.\n\end{thm}\n\n\begin{thm}\nColumns of A are linearly dependent iff $Ax=0$ has a nontrivial solution.\n\end{thm}\n\n\begin{thm}\nSuppose the equation $Ax=b$ has a solution p. Then the set of possible solutions to $Ax=b$ are of the form $w = p + v_{h}$ where $v_{h}$ is a solution of $Ax = 0$. (Of course, this assumes $Ax=b$ is consistent.) That is to say, if we knew two solutions to $Ax=b$, we could find a solution to $Ax=0$. \n\end{thm}\n\n\n\subsection{Linear Independence}\nA set of vectors is linearly independent if the solution to:\n\n\begin{equation*}\n    a_{1}v_{1} + \ldots + a_{k}v_{k} = 0\n\end{equation*}\n\nis only the trivial solution, that is, the coefficient vector $(a_{1},\ldots,a_{k})$ is $0 \in \mathbb{R}^{k}$.\n\\\nWe now present some common statements concerning linear independence. \n\begin{tcolorbox}\n  \begin{thm}\n    Whenever $Ax=0$ has a nontrivial solution, the columns of $A$ are linearly dependent. \n    \end{thm}\n    \n    \begin{thm}\n    A set of vectors is linearly dependent if at least one of the vectors is a scalar multiple of another.\n    \end{thm}\n    \n    \begin{thm}\n    If a set contains the zero vector, then it is linearly dependent.\n    \end{thm}\n    \n    \begin{thm}\n    If a set of vectors contains more vectors than each vector has entries, the set is linearly dependent.\n    \end{thm}\n    \n    \begin{thm}\n    A set of vectors is linearly dependent if the row echelon form of $A$ in $Ax=0$ has at least one free variable.\n    \end{thm}\n    \n\begin{thm}\n\label{comb}\nA set of vectors is linearly dependent iff at least one of the vectors can be written as a linear combination of the others.\n\end{thm}\n\n\begin{cor}\nTwo vectors are linearly dependent iff one is a scalar multiple of the other.\n\end{cor}\n\n\end{tcolorbox}\nLet's now prove the most useful out of these theorems. \n\n\begin{proof}\nLet's first prove \ref{comb}.\n\\\n($\rightarrow$)\n   Suppose that for $S = \{ v_{1}, \ldots , v_{k}\}$ we have $v_{1} = a_{2}v_{2} + \ldots + a_{k}v_{k}$. Then:\n   $$ 0 = -v_{1} + a_{2}v_{2} + \ldots + a_{k}v_{k},$$\n   in which case $v_{1}$ has the nonzero coefficient $-1$, giving a nontrivial solution. \n   \\\n($\leftarrow$)\nSuppose $a_{1}v_{1} + \ldots + a_{k}v_{k} = 0$ has a nontrivial solution, so some coefficient is nonzero; WLOG say $a_{1} \neq 0$. Then we may solve for $v_{1}$:\n\n$$ v_{1} = - \frac{a_{2}}{a_{1}}v_{2} - \ldots - \frac{a_{k}}{a_{1}}v_{k}.$$\n    \tag*{\qedhere}\n\end{proof}\n
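\nA quick concrete instance of Theorem \ref{comb}: take $v_{1} = (1,0)$, $v_{2} = (0,1)$ and $v_{3} = (2,3)$ in $\mathbb{R}^{2}$. Since $v_{3} = 2v_{1} + 3v_{2}$, the relation $2v_{1} + 3v_{2} - v_{3} = 0$ is a nontrivial dependence, so the set is linearly dependent (as the theorem on more vectors than entries also predicts).\n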
\n\subsection{Lec 05 notes}\n\n\begin{property}\nA linear $T$ maps line segments in $\mathbb{R}^{n}$ to line segments in $\mathbb{R}^{m}$. \n\end{property}\n\n\subsubsection{Matrix transformations}\n\n\begin{tcolorbox}\n  In general, the number of columns of a matrix is the dimension of the domain and the number of rows is the dimension of the codomain. \n\end{tcolorbox}\n\n\begin{thm}\nLet $T$ be a square matrix, that is $T:\mathbb{R}^n \to \mathbb{R}^n$. Then, T is surjective iff it is injective. \n\end{thm}\n\n\begin{proof} Proving both directions:\n\\\n($\rightarrow$) (T surjective implies T injective)\n\\\nNow whenever T is surjective, we have that there are no rows of 0 in its reduced form. It also means that each row has a pivot. Given this, we check the condition that $T(v_{1}) = T(v_{2}) \rightarrow v_{1} = v_{2}$. Now we would essentially obtain this in trying to solve:\n\begin{equation*}\n    T \times v = a\n\end{equation*}\nwhere $a \in \mathbb{R}^n$. Now because the reduced matrix has no 0 row and as many pivots as the number of variables, we get that there is only one such $v$, resulting in $v_{1} = v_{2}$.\n\\\n($\leftarrow$) (T injective implies T surjective)\nFrom $T$ is injective, we get that there is only one solution to $T \times v = a$, and considering the homogeneous $Tv=0$ we get that only 0 would be the solution, implying the columns of T are linearly independent; $n$ independent columns form a basis, hence they must span $\mathbb{R}^n$.\n \tag*{\qedhere}\n\n\end{proof}\n\n\subsection{Linear transformations}\n\n\begin{definition}\nA linear map is a function $T: \mathbb{R}^n \to \mathbb{R}^m$ such that:\n\begin{align*}\n    i) \ cT(v) = T(cv) \ \forall c \in \mathbb{R}, \forall v \in \mathbb{R}^n\\\n    ii) \ T(v + u) = T(v) + T(u) \ \forall v,u \in \mathbb{R}^n\n\end{align*}\n\end{definition}\n\nA condition equivalent to a linear map is:\n\begin{prop}\nWhenever we can show that for some $T$, $T(cv_{1} + dv_{2}) = cT(v_{1}) + dT(v_{2})$, we have that $T$ is a linear map.\n\end{prop}\n\n\begin{proof}(We must show both conditions i) and ii) defined above)\n\\\n\nNow showing i) is simple. Simply fix $d=0$. Then we get that $T(cv_{1} + 0) = cT(v_{1})$.\n\\\n\nTo show ii), fix $c,d$ to be 1 and there we have it!\n\n\end{proof}\n\n\n\begin{thm}\nFor a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$, there exists a unique $m\times n$ matrix $A$ such that $T(x) = Ax$\n\end{thm}\n\n\begin{proof}\nNow, we can write $x$ as $I_{n}x$ from which we get $\begin{bmatrix}\n    e_{1} \ldots e_{n}\n\end{bmatrix}x = x_{1}e_{1} + \ldots + x_{n}e_{n}$. Taking $T(x)$ we obtain $x_{1}T(e_{1}) + \ldots + x_{n}T(e_{n})$ which is equal to $$\begin{bmatrix}\n    T(e_{1}) \ldots T(e_{n})\n\end{bmatrix} \begin{bmatrix}\n    x_{1} \\\n    \vdots \\\n    x_{n} \n\end{bmatrix}$$\nUniqueness holds because the $i$th column of any such matrix must equal $T(e_{i})$.\n\end{proof}\n\nAnd here are some examples of capturing linear transforms as matrices, mainly in $\mathbb{R}^2$:\n\begin{example}\n\begin{enumerate}\n    \item Anti-clockwise rotation by an angle $\theta$. $A = \begin{bmatrix}\n        \cos{\theta} & -\sin{\theta}\\\n        \sin{\theta} & \cos{\theta}\n    \end{bmatrix}$ \n    \item $T(x) = 3x$ for $x \in \mathbb{R} ^ 2$, then $A = \begin{bmatrix}\n        3 & 0 \\\n        0 & 3\n    \end{bmatrix}$\n    \item Reflection through the y-axis $A = \begin{bmatrix}\n        -1 & 0\\\n        0 & 1\n    \end{bmatrix}$\n    \item Contraction or an expansion depending on $k$: $ A = \begin{bmatrix}\n        k & 0\\\n        0 & 1\n    \end{bmatrix}$ \n    \item Shear $\begin{bmatrix}\n        1 & k\\\n        0 & 1\n    \end{bmatrix}$\n    \item Projection $\begin{bmatrix}\n        0 & 0\\\n        0 & 1\n    \end{bmatrix}$\n    \n\end{enumerate}\n\end{example}\n\n\nAnd now we present some theorems relating to surjectivity and injectivity:\n\n\begin{thm}\nTaking $T: \mathbb{R}^n \to \mathbb{R}^m$ we have that $T$ is injective iff the homogeneous $T(x) = 0$ has only the trivial solution. 
\n\end{thm}\n\n\begin{proof}(To show this, we show that the two statements always have the same truth value, that they are equivalent.)\n\nNow when $T$ is injective, $T(x) = 0$ can have only one solution, namely $x = 0$ (since $T(0) = 0$), by definition of injectivity. Now when $T$ is not injective, it means that $\exists b$ s.t. $T(u) = b$ and $T(v) = b$ where $u \not = v$. Now $T(u) - T(v) = b-b = 0 = T(u-v)$ and since $u\not=v$ we have that $0$ is not the only solution.\n\end{proof}\n\n\section{Matrix algebra}\n\n\subsection{Basics}\n\n\begin{property}\nProperties of the matrix transpose:\n\begin{align*}\n    (A^T)^T = A\\\n    (A + B)^T = A^T + B^T\\\n    (rA)^T = rA^T\\\n    (AB)^T = B^TA^T\n\end{align*}\n\end{property}\n\n\n\n\begin{property}\n\begin{align*}\n    A + B = B + A\\\n    (A + B) + C = A + (B + C)\\\n    A + 0 = A\\\n    r(A + B) = rA + rB\\\n    (r+s)A = rA + sA\\\n    (rs)A = r(sA)\n\end{align*}\n\end{property}\n\nAnd here is the general formula for the $(i,j)$th entry of $AB$ whenever defined:\n\n$$(AB)_{ij} = a_{i1}b_{1j} + \ldots + a_{in}b_{nj}$$\n\n\begin{remark}\nWe note that matrices whose entries belong to $\mathbb{R}$ form a vector space.\n\end{remark}\n\nAnd now some warnings about matrix multiplication. We note that in general if $AB = AC$ then it does not hold that $B=C$. And if $AB=0$ we cannot conclude that $A$ or $B$ is $0$. We also note that whenever a matrix has entries only along its main diagonal, then to take any power, we simply take the power of each diagonal entry. \n\\\nThe transpose of a matrix $A$ is obtained by swapping the rows of $A$ with its columns. \\\n\begin{thm}\nA matrix $A$ is invertible if $\exists C$ s.t. $AC = I , CA=I$; such a $C$, when it exists, is unique. Non-invertible matrices are called \textbf{singular}. And note that only square matrices are invertible. The reason for this is that for a matrix to be invertible, it must be bijective, which can only be the case if it is square. \n\end{thm}\n\n\begin{proof}(Proving that the inverse matrix is unique)\nSuppose $\exists B,C$ s.t. $BA = I, \ AC=I$. We want to show that $B = C$.\n\\\n$B = BI = B(AC) = (BA)C = IC = C$\n\end{proof}\n\n\n\subsection{Computing inverses}\n\nInverses are best computed by Gaussian elimination. We are simply looking for a solution to the matrix equation $AX=I$, hence we write this as an augmented matrix $\begin{bmatrix}\n    A | I\n\end{bmatrix}$ and solve. \n\n\begin{thm}\n\textbf{Elementary} matrices are obtained by performing one elementary row operation on an identity matrix; since every row operation can be undone by another row operation, all elementary matrices are invertible.\n\end{thm}\n\n\begin{thm}\nBecause inverses are only defined on bijective mappings, whenever a square matrix is invertible, we have that $Ax=b$ has a unique solution $x = A^{-1}b$\n\end{thm}\n\n\subsection{Row operations as multiplication by elementary matrices}\n\nConsider the matrix $E_{1}=\begin{bmatrix}\n    1 & 0 & 0\\\n    0 & 1 & 0\\\n    -4 & 0 & 1\n\end{bmatrix}$ and $A=\begin{bmatrix}\n    a & b & c\\\n    d & e & f\\\n    g & h & i\n\end{bmatrix}$. Then notice that $E_{1}A=\begin{bmatrix}\n    a & b & c\\\n    d & e & f\\\n    g -4a & h -4b & i -4c\n\end{bmatrix}$\n\nwhich goes on to show that every elementary row operation can be represented as left multiplication by an elementary matrix. 
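\nTo make the invertibility of elementary matrices concrete: the row operation performed by $E_{1}$ (subtracting $4$ times row $1$ from row $3$) is undone by adding $4$ times row $1$ back to row $3$, and the elementary matrix of that reverse operation is precisely the inverse:\n\begin{equation*}\nE_{1}^{-1} = \begin{bmatrix}\n    1 & 0 & 0\\\n    0 & 1 & 0\\\n    4 & 0 & 1\n\end{bmatrix}, \qquad E_{1}^{-1}E_{1} = I.\n\end{equation*}\nMultiplying the two matrices out row by row is a quick check of this claim.\n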
\n\n\subsection{Characterizing inverse matrices}\n\n\n \begin{tcolorbox}[drop shadow, title=(Theorem on inverse matrices),lower separated=true]\n    \centering\n        \includegraphics[scale = 0.9,valign=t]{epflSemesterOne/linearAlgebra/figures/invertible.JPG}\n\end{tcolorbox}\n\n\clearpage\n\n\begin{figure}[H]\n    \centering\n    \includegraphics{epflSemesterOne/linearAlgebra/figures/proof.JPG}\n\n\end{figure}\n\n\section{Determinants}\n\subsection{Basics}\n\nFirst of all, the determinant is only defined on an $n \times n$ matrix and is denoted $det(A)$ or simply $|A|$.\n\\\nThe most useful formula for a determinant is \n\n$$ det(A) = \sum_{j=1}^{n} a_{1j}C_{1j}$$ or equivalently \n$$ det(A) = \sum_{i=1}^{n} a_{i1}C_{i1}$$\nwhere $C_{ij}$, which is known as a \textit{cofactor} of matrix A, is defined as:\n\n$$ C_{ij} = (-1)^{i+j}det(A_{ij})$$ wherein $A_{ij}$ denotes the submatrix of A obtained by cancelling out the ith row and jth column of A. \n\\\nNow the above formula is one way to calculate the determinant. An alternative method comes from the realization that whenever a matrix is triangular, its determinant is the product of the terms along its diagonal. Hence given a matrix $A$ we may row reduce it to its echelon form and simply find the product of all pivots, keeping track of how each row operation affects the determinant. Indeed, for invertible $A$, row reduction expresses it as a product of elementary matrices with the identity,\n\n$$ A = E_{1}\ldots E_{n}I,$$ so that\n\n$$det(A) = det(E_{1})\ldots det(E_{n}) \cdot det(I),$$\n\nwhere each $det(E_{i})$ records the effect of one row operation (this is the same factorization that underlies the $[A | I] \to [I | A^{-1}]$ computation).\n\nThis brings us to the following properties:\n\n\begin{enumerate}\n    \item If a multiple of one row is added to another to obtain matrix $B$, then $det(A) = det(B)$\n    \item If a row is scaled by $k$ to obtain matrix $B$, then $det(B) = k \cdot det(A)$\n    \item If two rows are interchanged, then $det(B) = -det(A)$\n\end{enumerate}\n\n\clearpage\n\nAnd now a very important theorem is:\n\n\begin{thm}\nA matrix $A$ is invertible iff $det(A) \not = 0$ \n\end{thm}\n\n\begin{proof}\nWhen a matrix is invertible, we have that the solution to $Ax = 0$ is unique, implying its columns are linearly independent, and linear independence implies that $det(A) \not = 0$. Running the argument backwards gives the converse. \n\end{proof}\n\n\begin{thm}\nIf $A$ is $n \times n$, then $det(A^{T}) = det(A)$\n\end{thm}\n\n\begin{proof}\nThis is obvious. We have that the cofactors along the columns of $A^{T}$ are equal to the cofactors of $A$ along rows, and by definition of the transpose we have that $(A^{T})_{1j} = a_{j1}$, which implies the coefficients are also equal. This also suggests why the earlier formula for $det(A)$ comes in two forms. \n\end{proof}\n\nAnd now yet another theorem:\n\n\begin{thm}\n$det(AB) = det(A)det(B)$\n\end{thm}\n
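\nA quick numerical check of the product rule: with\n$$A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}, \quad AB = \begin{bmatrix} 2 & 1\\ 4 & 3 \end{bmatrix},$$\nwe have $det(A) = -2$, $det(B) = -1$ and $det(AB) = 2 \cdot 3 - 1 \cdot 4 = 2 = (-2)(-1)$, as the theorem demands.\n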
\n\nAnd now we come to Cramer's rule. It provides us with an alternative method for solving a system of equations using determinants. It states the following:\n\n\begin{thm}(\textbf{Cramer's Rule})\n\\\nWhere $A$ is an invertible $n \times n$ matrix, the solution to $Ax=b$ is of form \n$$x_{i} = \frac{|A_{i}(b)|}{|A|}, \ i=1,\ldots,n$$ \nwhere $A_{i}(b)$ is obtained by replacing the ith column of A with the vector b.\n\end{thm}\n\n\begin{proof}\nAgain quite simple. Let $x$ be the solution of $Ax = b$, and let $I_{i}(x)$ denote the identity matrix with its ith column replaced by $x$. Notice that \n\n$$A\cdot I_{i}(x) = A[e_{1} \ldots x \dots e_{n}] = [a_{1} \ldots \underbrace{b}_{\text{since } Ax = b} \ldots a_{n}] = A_{i}(b)$$\nTaking determinants we get $$det(A)\underbrace{det(I_{i}(x))}_{=x_{i}} = det(A_{i}(b))$$ to give $$ x_{i} = \frac{det(A_{i}(b))}{det(A)}$$\n\end{proof}\nAnd now an important theorem is the following:\n\n\begin{thm}\nThe area of the parallelogram determined by $a_{1}$ and $a_{2}$ is the same as that determined by $a_{1}$ and $a_{2} + ca_{1}$\n\end{thm}\n\nAnd furthermore:\n\n\begin{thm}\nFor a linear transformation $T: \mathbb{R}^{n} \to  \mathbb{R}^{n}$ with standard matrix $A$, whenever $S$ is a region in this vector space, the area (or volume) of its image under $T$ is given by:\n\n$$ |T(S)| = |\det{A}| \cdot |S|$$\n\end{thm}\n\n\clearpage\n\n\section{More on vector spaces}\n\subsection{Basics}\n\n\begin{tcolorbox}\n\begin{definition}\nA \textbf{Vector space} is an algebraic structure satisfying:\n\n\begin{enumerate}\n    \item For some vector set $V$ we have that $V$ is an abelian group under addition.\n    \item Scalar multiplication between members of a scalar field and vector elements is defined and closure holds.\n    \item Scalar multiplication of a vector associates\n    \item $\forall v \in V$ we have that $1\cdot v = v$\n    \item Multiplication of a scalar with two vectors distributes and multiplication of a vector with two scalars distributes. \n\end{enumerate}\n\end{definition}\n\end{tcolorbox}\n\nNow three important facts that follow from these axioms are:\n\n\begin{align}\n    \label{1}0v = 0 \ \forall v \in V\\\n    \label{2}c0 = 0 \ \forall c \in F\\\n    \label{3}-u = (-1)u \ \forall u \in V\n\end{align}\n\nLet's prove these solely using the axioms. \n\begin{proof}\n\\\n\n\textbf{\ref{1}}\n$$ 0v = (a-a)v = \underbrace{av - av}_{\text{\makebox[0pt]{additive inverse} }} = 0 $$\n\n\textbf{\ref{2}}\n\n$$ c0 = c(0-0) = c0 - c0 = 0$$\n\n\textbf{\ref{3}}\n\begin{align*}\n    0u = (1-1)u\\\n    0 = u + (-1)u\\\n    -u = \underbrace{-u + u + (-1)u}_{\text{\makebox[0pt]{right additive inverse} }} = (-1)u\n\end{align*}\n\end{proof}\n\n\n\begin{definition}\nFor a vector space $V$, $S$ is said to be a subspace if:\n\n\begin{enumerate}\n    \item $0_{v} \in S$\n    \item $S$ is closed under scalar multiplication and vector addition\n\end{enumerate}\n\end{definition}\n\n\begin{definition}(Null space)\n\\\nFor some matrix $A$, the \textbf{Null space} is such that:\n$$Nul A = \{v\in V| Av = 0_{v}\} $$\n\end{definition}\n\n\begin{prop}\n$Nul A$ is a subspace of $V$.\nNow clearly $0 \in Nul A$ as $0$ is the trivial solution to $Ax = 0$.\n\\\nWe now need that $\forall u,v \in Nul A$, $u+v \in Nul A$.\n$$A(u+v) = \underbrace{Au + Av}_{\text{\makebox[0pt]{matrices are linear maps}}} = 0 + 0 = 0$$\n\\\nWe now also need $\forall u \in Nul A, \ cu \in Nul A$\n\n$$ A(cu) = cA(u) = c0 = 0$$\n\end{prop}\n\n\n\begin{thm}\nThe span of any single vector space element is a subspace. \n\end{thm}\n\n\begin{definition}(Column space)\n\\\nThe column space $Col A$ is the set of all possible linear combinations of the columns of a matrix A and is also a subspace.\n\end{definition}\n\n\nAnd now some examples of vector spaces:\n\n\begin{example}\n\begin{itemize}\n    \item $\mathbb{R}^{n}$ is the most common example equipped with $\mathbb{R}$ as scalars.\n    \item Set of polynomials of degree at most $n$ denoted $P_{n}$\n    \item Set of all real valued functions. 
This one is a little more subtle. Our 'vectors' are functions $f(x)$ and for instance the 0 vector is the map $g(x) = 0$. We define addition pointwise as $(f+g)(x) = f(x) + g(x)$ and scalar multiplication as $(cf)(x) = c \cdot f(x)$.\n\end{itemize}\n\end{example}\n\nIn light of vector spaces, a \textbf{linear transformation} is now much clearer: it is a mapping that respects the two operations of vector spaces. \n\n\end{document}\n", "meta": {"hexsha": "b64a30b20489a103ea3b96cd6bf51a2eb32e4f01", "size": 26096, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "epflSemesterOne/linearAlgebra/ch1.tex", "max_stars_repo_name": "alptheexplorer/generalPhysics", "max_stars_repo_head_hexsha": "e8f9e90f2c945eab400c19dd586d94df358d747d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-15T21:04:20.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-06T10:10:19.000Z", "max_issues_repo_path": "epflSemesterOne/linearAlgebra/ch1.tex", "max_issues_repo_name": "alptheexplorer/generalPhysics", "max_issues_repo_head_hexsha": "e8f9e90f2c945eab400c19dd586d94df358d747d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-10-20T12:17:24.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T19:24:36.000Z", "max_forks_repo_path": "epflSemesterOne/linearAlgebra/ch1.tex", "max_forks_repo_name": "alptheexplorer/generalPhysics", "max_forks_repo_head_hexsha": "e8f9e90f2c945eab400c19dd586d94df358d747d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-10-17T14:28:42.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-17T14:28:42.000Z", "avg_line_length": 34.6100795756, "max_line_length": 517, "alphanum_fraction": 0.6761189454, "num_tokens": 8428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8333245891029456, "lm_q1q2_score": 0.5771956065654066}}
{"text": "\\documentclass[a4paper,12pt]{article}\n%\\documentclass[a4paper,12pt]{scrartcl}\n\n\\usepackage{xltxtra}\n\n\\input{../preamble.tex}\n\n% \\usepackage[spanish]{babel}\n\n% \\setromanfont[Mapping=tex-text]{Linux Libertine O}\n% \\setsansfont[Mapping=tex-text]{DejaVu Sans}\n% \\setmonofont[Mapping=tex-text]{DejaVu Sans Mono}\n\n\\title{Homework \\#03: Independent events}\n\\author{Isaac Ayala Lozano}\n\\date{2020-01-20}\n\n\\begin{document}\n\\maketitle\n\n\\section{Problem}\nLet A and B be two events in the sample space S. \nShow that if $P(A) $ and $P(B)$ are\nnon-zero, then A and B cannot be mutually exclusive and independent.\n\n\\section{Proof}\n\nFrom \\cite{stewart2009probability} we observe that for events A and B to be independent then\n\n\\begin{equation}\n P(A \\cap B) = P(A) P(B)\n\\end{equation}\n\nRecall that for two events to be mutually exclusive then\n\n\\begin{equation}\n P(A \\cap B) = 0\n\\end{equation}\n\nThus for two events to be independent and mutually exclusive at the same time then $P(A) P(B) = 0$.\nThe problem states that both $P(A)$ and $P(B)$ are non-zero, hence we conclude that A and B \\emph{cannot} be mutually exclusive and independent at the same time because $P(A\\cap B)  \\neq 0$.\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "5815769de413403f38c483f004ebd6a1c4787731", "size": 1204, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw03_IsaacAyala.tex", "max_stars_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_stars_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw03_IsaacAyala.tex", "max_issues_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_issues_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw03_IsaacAyala.tex", "max_forks_repo_name": "der-coder/CINVESTAV-Mathematics-II-2020", "max_forks_repo_head_hexsha": "ccd3364818c673f7a6bf13d495004034d2c6ecc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1739130435, "max_line_length": 190, "alphanum_fraction": 0.7350498339, "num_tokens": 373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8333245994514082, "lm_q1q2_score": 0.577195603158983}}
{"text": "\\chapter{Supplemental Proofs}\n\n\\section{For Chapter~\\ref{chap:solution-transfer}}\n\n\\subsection{Allowing Tessellation with Inverted Triangles}\n\n\\begin{lemma}\\label{lemma:bad-triangle}\nConsider three smooth curves\n\\(b_0, b_1, b_2\\) that form a closed loop: \\(b_0(1) = b_1(0)\\),\n\\(b_1(1) = b_2(0)\\) and \\(b_2(1) = b_0(0)\\).\nTake \\emph{any} smooth map \\(\\varphi(s, t)\\) on \\(\\utri\\) that\nsends the edges to the three curves:\n\\begin{equation}\n\\varphi(r, 0) = b_0(r), \\quad \\varphi(1 - r, r) = b_1(r),\n  \\quad \\varphi(0, 1 - r) = b_2(r) \\quad \\text{for } r \\in \\left[0, 1\\right].\n\\end{equation}\nThen we must have\n\\begin{equation}\n2 \\int_{\\utri} \\det(D\\varphi) \\left[F \\circ \\varphi\\right] \\, dt \\, ds =\n\\oint_{b_0 \\cup b_1 \\cup b_2} H \\, dy - V \\, dx\n\\end{equation}\nfor antiderivatives that satisfy \\(H_x = V_y = F\\).\n\nWhen \\(\\det(D\\varphi) > 0\\), this is just the change of variables\nformula combined with Green's theorem.\n\\end{lemma}\n\n\\begin{proof}\nLet \\(x(s, t)\\) and \\(y(s, t)\\) be the components of \\(\\varphi\\). Define\n\\begin{equation}\n\\Delta S = H(x, y) y_s - V(x, y) x_s \\quad \\text{and} \\quad\n\\Delta T = H(x, y) y_t - V(x, y) x_t.\n\\end{equation}\nOn the unit triangle \\(\\utri\\), Green's theorem gives\n\\begin{equation}\\label{eq:basic-greens}\n\\int_{\\mathcal{U}} \\left[\\partial_s \\Delta T -\n  \\partial_t \\Delta S\\right] \\, dV =\n\\oint_{\\partial \\mathcal{U}} \\Delta S \\, ds + \\Delta T \\, dt.\n\\end{equation}\nThe boundary \\(\\partial \\utri\\) splits into the bottom edge \\(E_0\\),\nhypotenuse \\(E_1\\) and left edge \\(E_2\\).\n\nSince\n\\begin{equation}\nE_0 = \\left\\{ \\left[ \\begin{array}{c} r \\\\ 0 \\end{array}\\right] \\mid\n  r \\in \\left[0, 1\\right] \\right\\}\n\\end{equation}\nwe take \\(\\varphi(r, 0) = b_0(r)\\) hence\n\\begin{equation}\ndx = x_s \\, dr, dy = y_s \\, dr \\Longrightarrow\nH dx - V dy = \\Delta S \\, dr.\n\\end{equation}\nWe also have \\(ds = dr\\) and \\(dt = 0\\) due to the\nparameterization, thus\n\\begin{equation}\n\\int_{E_0} \\Delta S \\, ds + \\Delta T \\, dt =\n  \\int_{r = 0}^{r = 1} \\Delta S \\, dr = \\int_{b_0} H \\, dx - V \\, dy.\n\\end{equation}\nWe can similarly verify that\n\\(\\int_{E_j} \\Delta S \\, ds + \\Delta T \\, dt = \\int_{b_j} H \\, dx - V \\, dy\\)\nfor the other two edges. Combining this with~\\eqref{eq:basic-greens}\nwe have\n\\begin{equation}\n\\int_{\\mathcal{U}} \\left[\\partial_s \\Delta T -\n  \\partial_t \\Delta S\\right] \\, dV =\n\\oint_{b_0 \\cup b_1 \\cup b_2} H \\, dx - V \\, dy.\n\\end{equation}\nTo complete the proof, we need\n\\begin{equation}\n\\int_{\\mathcal{U}} \\left[\\partial_s \\Delta T -\n  \\partial_t \\Delta S\\right] \\, dV =\n2 \\int_{\\mathcal{U}} \\det(D\\varphi) \\left[F \\circ \\varphi\\right] \\, dV\n\\end{equation}\nbut one can show directly that\n\\begin{equation}\n\\partial_s \\Delta T - \\partial_t \\Delta S =\n  2 \\left(x_s y_t - x_t y_s\\right) F(x, y) =\n  2 \\det(D\\varphi) \\left[F \\circ \\varphi\\right]. \\tag*{\\qedhere}\n\\end{equation}\n\\end{proof}\n\n\\section{For Chapter~\\ref{chap:k-compensated}}\n\n\\subsection{Proof of Lemma~\\ref{lemma:ell-tilde}}\\label{proof:ell-tilde}\n\n\\begin{proof}\nWe'll start with the \\(F = 1\\) case. 
Recall where the terms originate:\n\\begin{align}\n\\left[P_1, e_1\\right] &= \\mathtt{TwoProd}\\left(\\widehat{r},\n  \\widehat{b}_j^{(k + 1)}\\right) \\\\\n\\left[P_2, e_2\\right] &= \\mathtt{TwoProd}\\left(s,\n  \\widehat{b}_{j + 1}^{(k + 1)}\\right) \\\\\n\\left[\\widehat{b}_j^{(k)}, e_3\\right] &= \\mathtt{TwoSum}\\left(P_1, P_2\\right).\n\\end{align}\nHence Theorem~\\ref{thm:eft} tells us that\n\\begin{align}\n\\left|P_1\\right| &\\leq (1 + \\mach)\\left|\\widehat{r} \\cdot\n  \\widehat{b}_j^{(k + 1)}\\right| \\leq (1 + \\mach)^2 (1 - s)\n  \\left|\\widehat{b}_j^{(k + 1)}\\right| \\\\\n\\left|e_1\\right| &\\leq \\mach \\left|\\widehat{r} \\cdot\n  \\widehat{b}_j^{(k + 1)}\\right| \\leq \\mach(1 + \\mach)(1 - s) \\left|\n  \\widehat{b}_j^{(k + 1)}\\right| \\\\\n\\left|P_2\\right| &\\leq (1 + \\mach) s\n  \\left|\\widehat{b}_{j + 1}^{(k + 1)}\\right| \\\\\n\\left|e_2\\right| &\\leq \\mach s \\left|\\widehat{b}_{j + 1}^{(k + 1)}\\right| \\\\\n\\left|e_3\\right| &\\leq \\mach \\left|P_1\\right| + \\mach\\left|P_2\\right| \\\\\n\\left|\\rho \\cdot \\widehat{b}_j^{(k + 1)}\\right| &\\leq\n(1 + \\mach)(1 - s) \\left|\\widehat{b}_j^{(k + 1)}\\right|.\n\\end{align}\nIn general, we can swap \\(\\mach\\left|P_j\\right|\\) for\n\\((1 + \\mach)\\left|e_j\\right|\\) based on how closely related the bound\non the result and the bound on the error are. Thus\n\\begin{align}\n\\widetilde{\\ell}_{1, j}^{(k)} &= \\left|e_1\\right| + \\left|e_2\\right| +\n  \\left|e_3\\right| + \\left|\\rho \\cdot \\widehat{b}_j^{(k + 1)}\\right| \\\\\n&\\leq (2 + \\mach)\\left(\\left|e_1\\right| + \\left|e_2\\right|\\right) +\n  (1 + \\mach)(1 - s) \\left|\\widehat{b}_j^{(k + 1)}\\right| \\\\\n&\\leq \\left[(1 + \\mach)^3 - 1\\right] (1 - s) \\left|\n  \\widehat{b}_j^{(k + 1)}\\right| + \\left[(1 + \\mach)^2 - 1\\right] s \\left|\n  \\widehat{b}_{j + 1}^{(k + 1)}\\right| \\\\\n&\\leq \\gamma_3 \\left((1 - s) \\left|\\widehat{b}_j^{(k + 1)}\\right| +\n  s \\left|\\widehat{b}_{j + 1}^{(k + 1)}\\right|\\right).\n\\end{align}\nFor \\(\\widetilde{\\ell}_{F + 1}\\), we want to relate the ``current'' errors\n\\(e_1, \\ldots, e_{5F + 3}\\) to the ``previous'' errors \\(e_1',\n\\ldots, e_{5F - 2}'\\) that show up in \\(\\widetilde{\\ell}_F\\). 
In the same\nfashion as above, we track where the current errors come from:\n\\begin{align}\n\\left[S_1, e_1\\right] &= \\mathtt{TwoSum}\\left(e_1', e_2'\\right) \\\\\n\\left[S_2, e_2\\right] &= \\mathtt{TwoSum}\\left(S_1, e_3'\\right) \\\\\n&\\mathrel{\\makebox[\\widthof{=}]{\\vdots}} \\nonumber \\\\\n\\left[S_{5F - 3}, e_{5F - 3}\\right] &=\n  \\mathtt{TwoSum}\\left(S_{5F - 4}, e_{5F - 2}'\\right) \\\\\n\\left[P_{5F - 2}, e_{5F - 2}\\right] &= \\mathtt{TwoProd}\\left(\\rho,\n  \\cdb{F - 1}_j^{(k + 1)}\\right) \\\\\n\\left[\\widehat{\\ell}_{F, j}^{(k)}, e_{5F - 1}\\right] &=\n  \\mathtt{TwoSum}\\left(S_{5F - 3}, P_{5F - 2}\\right) \\\\\n\\left[P_{5F}, e_{5F}\\right] &= \\mathtt{TwoProd}\\left(s,\n  \\cdb{F}_{j + 1}^{(k + 1)}\\right) \\\\\n\\left[S_{5F + 1}, e_{5F + 1}\\right] &=\n  \\mathtt{TwoSum}\\left(\\widehat{\\ell}_{F, j}^{(k)}, P_{5F}\\right) \\\\\n\\left[P_{5F + 2}, e_{5F + 2}\\right] &= \\mathtt{TwoProd}\\left(\\rho,\n  \\cdb{F}_j^{(k + 1)}\\right) \\\\\n\\left[\\cdb{F}_j^{(k)}, e_{5F + 3}\\right] &= \\mathtt{TwoSum}\\left(\n  S_{5F + 1}, P_{5F + 2}\\right).\n\\end{align}\nArguing as we did above, we start with\n\\(\\left|e_1\\right| \\leq \\mach \\left|e_1'\\right| + \\mach \\left|e_2'\\right|\\)\nand build each bound recursively based on the previous, e.g.\n\\(\\left|e_2\\right| \\leq \\mach \\left|S_1\\right| + \\mach \\left|e_3'\\right| \\leq\n(1 + \\mach) \\mach \\left|e_1'\\right| + (1 + \\mach) \\mach \\left|e_2'\\right| +\n\\mach \\left|e_3'\\right|\\). Proceeding in this fashion, we find\n\\begin{align}\n\\widetilde{\\ell}_{F + 1, j}^{(k)} &= \\left|e_1\\right| + \\cdots +\n  \\left|e_{5F + 3}\\right| + \\left|\\rho \\cdot \\cdb{F}_j^{(k + 1)}\\right| \\\\\n&\\leq \\gamma_{5F} \\left|e_1'\\right| + \\gamma_{5F} \\left|e_2'\\right| +\n  \\gamma_{5F - 1} \\left|e_3'\\right| + \\cdots +\n  \\gamma_4 \\left|e_{5F - 2}'\\right| +\n  \\gamma_4 \\left|\\rho \\cdot \\cdb{F - 1}_j^{(k + 1)}\\right| \\\\\n&\\qquad + \\gamma_3 (1 - s) \\left|\n  \\cdb{F}_j^{(k + 1)}\\right| + \\gamma_3 s \\left|\n  \\cdb{F}_{j + 1}^{(k + 1)}\\right| \\\\\n&\\leq \\gamma_3 \\left(\n  (1 - s) \\left|\\cdb{F}_j^{(k + 1)}\\right| +\n  s \\left|\\cdb{F}_{j + 1}^{(k + 1)}\\right|\\right) +\n  \\gamma_{5F} \\cdot \\widetilde{\\ell}_{F, j}^{(k)}\n\\end{align}\nas desired.\n\\end{proof}\n\n\\subsection{Proof of Lemma~\\ref{lemma:L-and-D-bounds}}\n\\label{proof:L-and-D-bounds}\n\n\\begin{proof}\nFirst, note that for \\emph{any} sequence \\(v_0, \\ldots, v_{k + 1}\\) we\nmust have\n\\begin{equation}\n\\sum_{j = 0}^k \\left[(1 - s) v_j + s v_{j + 1}\\right] B_{j, k}(s) =\n\\sum_{j = 0}^{k + 1} v_j B_{j, k + 1}(s).\n\\end{equation}\nFor example of this in use, via \\eqref{eq:ell-tilde-1}, we have\n\\begin{equation}\n  L_{1, k} \\leq \\gamma_3 \\sum_{j = 0}^{k + 1} \\left|\n  \\widehat{b}_j^{(k + 1)}\\right| B_{j, k + 1}(s).\n\\end{equation}\nIn order to work with sums of this form, we define Bernstein-type\nsums related to \\(L_{F, k}\\):\n\\begin{align}\nD_{0, k} &\\coloneqq \\sum_{j = 0}^k \\left|\\widehat{b}_j^{(k)}\\right|\nB_{j, k}(s) \\\\\nD_{F, k} &\\coloneqq \\sum_{j = 0}^k \\left|\\cdb{F}_j^{(k)}\\right| B_{j, k}(s).\n\\end{align}\nHence Lemma~\\ref{lemma:ell-tilde} gives\n\\begin{align}\nL_{1, k} &\\leq \\gamma_3 D_{0, k + 1} \\label{eq:ell-1-k} \\\\\nL_{F + 1, k} &\\leq \\gamma_3 D_{F, k + 1} + \\gamma_{5F} L_{F, k}\n\\label{eq:ell-F-k}\n\\end{align}\nIn addition, for \\(F \\geq 1\\) since\n\\begin{align}\n\\cdb{F}_j^{(k)} &= \\widehat{\\ell}_{F, j}^{(k)} \\oplus \\left(\n  s \\otimes \\cdb{F}_{j + 
1}^{(k + 1)}\\right) \\oplus \\left((1 \\ominus s) \\otimes\n  \\cdb{F}_{j}^{(k + 1)}\\right) \\\\\n&= (1 - s) \\cdot \\cdb{F}_j^{(k + 1)}(1 + \\theta_3) +\n  s \\cdot \\cdb{F}_{j + 1}^{(k + 1)}(1 + \\theta_3) +\n  \\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_2)\n\\end{align}\nwe have\n\\begin{equation}\\label{eq:df-first}\nD_{F, k} \\leq (1 + \\gamma_3) D_{F, k + 1} + (1 + \\gamma_2) \\sum_{j = 0}^k\n\\left|\\widehat{\\ell}_{F, j}^{(k)}\\right| B_{j, k}(s).\n\\end{equation}\nSince \\(\\ell_{F, j}^{(k)}\\) has \\(5F - 1\\) terms (only the last of which\ninvolves a product), the terms in the computed value will be involved in\nat most \\(5F - 2\\) flops, hence\n\\(\\left|\\widehat{\\ell}_{F, j}^{(k)}\\right| \\leq\n\\left(1 + \\gamma_{5F - 2}\\right) \\widetilde{\\ell}_{F, j}^{(k)}.\\)\nCombined with \\eqref{eq:df-first} and the fact that there is no local error\nwhen \\(F = 0\\), this means\n\\begin{align}\nD_{0, k} &\\leq (1 + \\gamma_3) D_{0, k + 1} \\label{eq:d-0-k} \\\\\nD_{F, k} &\\leq (1 + \\gamma_3) D_{F, k + 1} + (1 + \\gamma_{5F}) L_{F, k}.\n\\label{eq:d-F-k}\n\\end{align}\nThe four inequalities \\eqref{eq:ell-1-k}, \\eqref{eq:ell-F-k}, \\eqref{eq:d-0-k}\nand \\eqref{eq:d-F-k} allow us to write all bounds in terms of\n\\(D_{0, n} = \\widetilde{p}(s)\\) and \\(D_{F, n} = 0\\). From \\eqref{eq:d-0-k}\nwe can conclude that \\(D_{0, n - k} \\leq \\left(1 + \\gamma_{3k}\\right) \\cdot\n\\widetilde{p}(s)\\) and from \\eqref{eq:ell-1-k} that \\(L_{1, n - k} \\leq\n\\gamma_3 \\left(1 + \\gamma_{3(k - 1)}\\right) \\cdot \\widetilde{p}(s)\\).\n\nTo show the bounds for higher values of \\(F\\), we'll assume we have\nbounds of the form\n\\(D_{F, n - k} \\leq \\left(q_F(k) \\mach^F + \\bigO{\\mach^{F + 1}}\\right) \\cdot\n\\widetilde{p}(s)\\) and\n\\(L_{F, n - k} \\leq \\left(r_F(k) \\mach^F + \\bigO{\\mach^{F + 1}}\\right) \\cdot\n\\widetilde{p}(s)\\) for two families of polynomials \\(q_F(k), r_F(k)\\). We\nhave \\(q_0(k) = 1\\) and \\(r_1(k) = 3\\) as our base cases and can build from\nthere. To satisfy \\eqref{eq:d-F-k}, we'd like\n\\(q_F(k) = q_F(k - 1) + r_F(k)\\)\nand for \\eqref{eq:ell-F-k}\n\\(r_{F + 1}(k) = 3 q_F(k - 1) + 5 F r_F(k)\\).\nSince the forward difference \\(\\Delta q_F(k) = r_F(k + 1)\\) is known,\nwe can inductively solve for \\(q_F\\) in terms of \\(q_F(0)\\). But\n\\(D_{F, n} = 0\\) gives \\(q_F(0) = 0\\).\n\nFor example, since we have \\(r_1(k) = 3 \\binom{k}{0}\\) we'll have\n\\(q_1(k) = 3 \\binom{k}{1}\\). Once this is known\n\\begin{equation}\nr_2(k) = 3 q_1(k - 1) + 5 r_1(k) = 3 \\cdot 3 \\binom{k - 1}{1} +\n5 \\cdot 3 \\binom{k}{0} = 9 \\binom{k}{1} + 6 \\binom{k}{0}.\n\\end{equation}\nIf we write these polynomials in the ``falling factorial'' basis of\nforward differences, then we can show that\n\\begin{equation}\nr_F(k) = 3^F \\binom{k}{F} + \\cdots\n\\end{equation}\nwhich will complete the proof of the first inequality. 
To see this, first\nnote that for a polynomial in this basis\n\\(f(k) = A \\binom{k}{d} + B \\binom{k}{d - 1} + C \\binom{k}{d - 2} +\nD \\binom{k}{d - 3} + \\cdots\\) we have\n\\begin{align}\nf(k + 1) &= A \\binom{k}{d} + (A + B) \\binom{k}{d - 1} +\n  (B + C) \\binom{k}{d - 2} + (C + D) \\binom{k}{d - 3} + \\cdots \\\\\nf(k - 1) &= A \\binom{k}{d} + (B - A) \\binom{k}{d - 1} +\n  (C - B + A) \\binom{k}{d - 2} + (D - C + B - A) \\binom{k}{d - 3} + \\cdots\n\\end{align}\nUsing these, we can show that if\n\\(r_F(k) = \\sum_{j = 0}^{F - 1} c_j \\binom{k}{j}\\) then\n\\begin{align}\nq_F(k) &= c_{F - 1} \\binom{k}{F} + \\sum_{j = 1}^{F - 1}\n  (c_j + c_{j - 1}) \\binom{k}{j} \\\\\nr_{F + 1}(k) &= 3 \\left[-c_0 \\binom{k}{0} +\n  \\sum_{j = 1}^F c_{j - 1} \\binom{k}{j}\\right] +\n5F \\left[\\sum_{j = 0}^{F - 1} c_j \\binom{k}{j}\\right] =\n3 c_{F - 1} \\binom{k}{F} + \\cdots\n\\end{align}\nUnder the inductive hypothesis \\(c_{F - 1} = 3^F\\) so that\nthe lead term in \\(r_{F + 1}(k)\\) is \\(3 c_{F - 1} \\binom{k}{F}\n= 3^{F + 1} \\binom{k}{F}\\).\n\nFor the second inequality, we'll show that\n\\begin{equation}\n\\sum_{k = 0}^{n - 1} \\gamma_{3k + 5F} L_{F, k} \\leq\n  \\left[q_{F + 1}(n) \\mach^{F + 1} +\n  \\bigO{\\mach^{F + 2}}\\right] \\cdot \\widetilde{p}(s)\n\\end{equation}\nand then we'll have our result since we showed above that\n\\(q_{F + 1}(n) = 3^{F + 1} \\binom{n}{F + 1} + \\bigO{n^F}\\). Since\n\\(\\gamma_{3k + 5F} L_{F, k} \\leq (3k + 5F) L_{F, k} \\mach +\n\\bigO{\\mach^{F + 2}} \\widetilde{p}(s)\\) it's enough to consider\n\\begin{equation}\n\\sum_{k = 0}^{n - 1} (3k + 5F) r_F(n - k) =\n\\sum_{k = 1}^n (3(n - k) + 5F) r_F(k).\n\\end{equation}\nSince \\(q_F(k) = q_F(k - 1) + r_F(k)\\) and \\(q_F(0) = 0\\) we have\n\\(q_{F}(n) = \\sum_{k = 1}^n r_{F}(k)\\) thus\n\\begin{equation}\nq_{F + 1}(n) = \\sum_{k = 1}^n r_{F + 1}(k)\n= \\sum_{k = 1}^n 3 q_F(k - 1) + 5 F r_F(k)\n= \\sum_{k = 1}^n 3 \\left[\\sum_{j = 1}^{k - 1} r_F(j)\\right] + 5 F r_F(k).\n\\end{equation}\nSwapping the order of summation and grouping like terms, we have our\nresult.\n\\end{proof}\n\n\\subsection{Proof of Lemma~\\ref{lemma:k-order}}\\label{proof:k-order}\n\n\\begin{proof}\nAs in \\eqref{eq:matrix-de-casteljau}, we can express the compensated\nde Casteljau algorithm as\n\\begin{equation}\n\\db{F}^{(k)} = U_{k + 1} \\db{F}^{(k + 1)} + \\ell_{F}^{(k)}\n\\Longrightarrow \\db{F}^{(0)} = \\sum_{k = 0}^{n - 1}\nU_1 \\cdots U_k \\ell_F^{(k)} = \\sum_{k = 0}^{n - 1}\n\\left[\\sum_{j = 0}^k \\ell_{F, j}^{(k)} B_{j, k}(s)\\right].\n\\end{equation}\nFor the inexact equivalent of these things, first note that\n\\(\\widehat{r} = (1 - s)(1 + \\delta)\\). 
Due to this,\nwe put the \\(\\widehat{r}\\) term at the end of each update step to reduce\nthe amount of round-off:\n\\begin{align}\n  \\cdb{F}_j^{(k)} &=\n  \\widehat{\\ell}_{F, j}^{(k)} \\oplus\n  \\left(s \\otimes \\cdb{F}_{j + 1}^{(k + 1)}\\right) \\oplus\n  \\left(\\widehat{r} \\otimes \\cdb{F}_j^{(k + 1)}\\right) \\\\\n&= (1 - s) \\cdot \\cdb{F}_j^{(k + 1)}(1 + \\theta_3) +\n  s \\cdot \\cdb{F}_{j + 1}^{(k + 1)}(1 + \\theta_3) +\n  \\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_2) \\\\\n\\Longrightarrow \\cdb{F}^{(k)} &=\n  U_{k + 1} \\cdb{F}^{(k + 1)}(1 + \\theta_3) +\n  \\widehat{\\ell}_{F}^{(k)} (1 + \\theta_2) \\\\\n\\Longrightarrow \\cdb{F}^{(0)} &=\n  \\sum_{k = 0}^{n - 1}\n  U_1 \\cdots U_k \\widehat{\\ell}_F^{(k)} (1 + \\theta_{3k + 2})\n  = \\sum_{k = 0}^{n - 1}\n  \\left[\\sum_{j = 0}^k \\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_{3k + 2})\n    B_{j, k}(s)\\right].\n\\end{align}\nSince\n\\begin{equation}\n\\db{F + 1}_0^{(0)} = \\db{F}_0^{(0)} - \\cdb{F}_0^{(0)} = \\sum_{k = 0}^{n - 1}\n\\sum_{j = 0}^k \\left(\\ell_{F, j}^{(k)} -\n\\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_{3k + 2})\\right) B_{j, k}(s)\n\\end{equation}\nit's useful to put a bound on \\(\\ell_{F, j}^{(k)} -\n\\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_{3k + 2})\\). Via\n\\begin{align}\n\\widehat{\\ell}_{F, j}^{(k)} &= e_1 \\oplus \\cdots \\oplus e_{5F - 2} \\oplus\n\\left(\\rho \\otimes \\cdb{F - 1}_j^{(k + 1)}\\right) \\\\\n&= e_1\\left(1 + \\theta_{5F - 2}\\right) + \\cdots +\ne_{5F - 2}\\left(1 + \\theta_2\\right) +\n\\rho \\cdot \\cdb{F - 1}_j^{(k + 1)} \\left(1 + \\theta_2\\right)\n\\end{align}\nwe see that\n\\begin{equation}\n\\left|\\ell_{F, j}^{(k)} -\n\\widehat{\\ell}_{F, j}^{(k)} (1 + \\theta_{3k + 2})\\right| \\leq\n\\gamma_{3k + 5F} \\cdot \\widetilde{\\ell}_{F, j}^{(k)}\n\\Longrightarrow\n\\left|\\db{F + 1}_0^{(0)}\\right| \\leq \\sum_{k = 0}^{n - 1}\n\\gamma_{3k + 5F} \\sum_{j = 0}^k \\widetilde{\\ell}_{F, j}^{(k)} B_{j, k}(s).\n\\end{equation}\nApplying \\eqref{eq:L-sum-bound} directly gives\n\\begin{equation}\n\\left|\\db{F + 1}_0^{(0)}\\right| \\leq\n  \\left[\\left(3^{F + 1} \\binom{n}{F + 1} + \\bigO{n^F}\\right)\n  \\mach^{F + 1} + \\bigO{\\mach^{F + 2}}\\right] \\cdot \\widetilde{p}(s).\n\\end{equation}\nLetting \\(K = F + 1\\) we have our result.\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{thm:k-comp-result}}\n\\label{proof:k-comp-result}\n\n\\begin{proof}\nSince\n\\begin{equation}\n\\mathtt{CompDeCasteljau}(p, s, K) = \\mathtt{SumK}\\left(\\left[\n  \\widehat{b}_0^{(0)}, \\ldots, \\cdb{K - 1}_0^{(0)}\\right], K\\right),\n\\end{equation}\napplying Theorem~\\ref{thm:sum-k} tells us that\n\\begin{multline}\\label{eq:sum-k-applied}\n\\left|\\mathtt{CompDeCasteljau}(p, s, K) - \\sum_{F = 0}^{K - 1}\n\\cdb{F}_0^{(0)}\\right| \\leq \\\\\n\\left(\\mach + 3 \\gamma_{n - 1}^2\\right) \\left|\\sum_{F = 0}^{K - 1}\n\\cdb{F}_0^{(0)}\\right| +\n\\gamma_{2n - 2}^K \\sum_{F = 0}^{K - 1} \\left|\\cdb{F}_0^{(0)}\\right|.\n\\end{multline}\nSince\n\\begin{equation}\np(s) = b_0^{(0)} = \\widehat{b}_0^{(0)} + \\db{1}_0^{(0)}\n= \\cdots\n= \\widehat{b}_0^{(0)} + \\cdb{1}_0^{(0)} + \\cdots\n+ \\cdb{K - 1}_0^{(0)} + \\db{K}_0^{(0)}\n\\end{equation}\nwe have\n\\begin{gather}\n\\left|\\sum_{F = 0}^{K - 1} \\cdb{F}_0^{(0)}\\right|\n\\leq \\left|p(s)\\right| + \\left|\\db{K}_0^{(0)}\\right| \\quad \\text{and} \\\\\n\\begin{multlined}\n\\left|\\mathtt{CompDeCasteljau}(p, s, K) - p(s)\\right| \\leq \\\\\n\\left|\\mathtt{CompDeCasteljau}(p, s, K) - \\sum_{F = 0}^{K - 1}\n\\cdb{F}_0^{(0)}\\right| 
+\n\\left|\\db{K}_0^{(0)}\\right| \\label{eq:triangle-ps}.\n\\end{multlined}\n\\end{gather}\nDue to Lemma~\\ref{lemma:k-order}, \\(\\db{F}_0^{(0)} =\n\\bigO{\\mach^F} \\widetilde{p}(s)\\), hence\n\\begin{align}\n\\left(\\mach + 3 \\gamma_{n - 1}^2\\right) \\left|\\sum_{F = 0}^{K - 1}\n\\cdb{F}_0^{(0)}\\right| &\\leq\n\\left[\\mach + \\bigO{\\mach^2}\\right] \\left|p(s)\\right| +\n\\bigO{\\mach^{K + 1}} \\widetilde{p}(s) \\\\\n\\gamma_{2n - 2}^K \\sum_{F = 0}^{K - 1} \\left|\\cdb{F}_0^{(0)}\\right| &\\leq\n\\gamma_{2n - 2}^K \\left|\\widehat{b}_0^{(0)}\\right| +\n\\bigO{\\mach^{K + 1}} \\widetilde{p}(s) \\\\\n&\\leq\n\\gamma_{2n - 2}^K \\left[\\left|p(s)\\right| +\n  \\bigO{\\mach} \\widetilde{p}(s)\\right] +\n\\bigO{\\mach^{K + 1}} \\widetilde{p}(s).\n\\end{align}\nCombining this with \\eqref{eq:sum-k-applied} and \\eqref{eq:triangle-ps}, we\nsee\n\\begin{align}\n& \\left|\\mathtt{CompDeCasteljau}(p, s, K) - p(s)\\right| \\\\\n\\leq &\n\\left[\\mach + \\bigO{\\mach^2}\\right] \\left|p(s)\\right| +\n\\left|\\db{K}_0^{(0)}\\right| +\n\\bigO{\\mach^{K + 1}} \\widetilde{p}(s) \\\\\n\\leq &\n\\left[\\mach + \\bigO{\\mach^2}\\right] \\left|p(s)\\right| +\n\\left[\\left(3^{K} \\binom{n}{K} + \\bigO{n^{K - 1}}\\right) \\mach^K +\n\\bigO{\\mach^{K + 1}} \\right]\n\\widetilde{p}(s).\n\\end{align}\nDividing this by \\(\\left|p(s)\\right|\\), we have our result.\n\\end{proof}\n", "meta": {"hexsha": "ef5eee4f05b0befaa0a6b9e7a2bc95bf245f0b3b", "size": 17909, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/proofs.tex", "max_stars_repo_name": "dhermes/phd-thesis", "max_stars_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-08-24T15:36:28.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-13T01:38:19.000Z", "max_issues_repo_path": "doc/proofs.tex", "max_issues_repo_name": "dhermes/phd-thesis", "max_issues_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-21T05:57:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-16T16:43:00.000Z", "max_forks_repo_path": "doc/proofs.tex", "max_forks_repo_name": "dhermes/phd-thesis", "max_forks_repo_head_hexsha": "732c75b4258e6f41b2dafb2929f0e3dbd380239b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7949886105, "max_line_length": 79, "alphanum_fraction": 0.5833379865, "num_tokens": 8258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.803173791645582, "lm_q1q2_score": 0.5771561829363884}}
{"text": "%!TEX root = uber.tex\n\n\\section{Modeling a Strategic Driver}\n\\begin{frame}{Modeling a City}\n\\begin{columns}\n\\begin{column}{0.70\\textwidth}\n\\begin{itemize}\n\t\\item<1->\\textcolor{BlueGreen}{\\bf Discrete model:}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<1-> {\\bf Nodes}: Set of $n$ city-zones ($\\mathcal{X}$).\n\t\t\t\\item[--]<1-> {\\bf Edges}: Rides between each pair of zones.\n\t\t\t\\item[--]<1-> {\\bf Time}: advances in discrete time steps.\n\t\t\\end{itemize}\n\t\\item<2->\\textcolor{BlueGreen}{\\bf Empirical Transition Matrix ($F$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<2-> $f(i,j)$: probability of passenger in zone $i$\\\\ traveling to zone $j$.\n\t\t\\end{itemize}\n\t\\item<2->\\textcolor{BlueGreen}{\\bf Rewards Matrix ($R$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<2-> $r(i,j)$: net reward from zone $i$ to zone $j$.\n\t\t\t\\item[--]<2-> $r(i,j)$ = earnings$(i,j)$ - cost$(i,j)$.\n\t\t\\end{itemize}\n\t\\item<2->\\textcolor{BlueGreen}{\\bf Travel Time Matrix ($T$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<2-> $\\tau(i,j)$: travel time from zone $i$ to zone $j$.\n\t\t\\end{itemize}\n\\end{itemize}\n\\end{column}\n\\begin{column}{0.30\\textwidth}\n\\only<1>{\n\\begin{figure}[tbh]\n\\includegraphics[scale=1.8]{figures/nyc3.pdf}\n\\end{figure}\n}\n\\only<2->{\n\\begin{figure}[tbh]\n\\includegraphics[scale=1.8]{figures/nyc4.pdf}\n\\end{figure}\n}\n\\end{column}\n\\end{columns}\n\\vspace{0.5cm}\n\\uncover<3->{\\alert{\\textbf{In general, all these matrices are time-dependent, their entries change throughout the day.}}}\n\\end{frame}\n\n\\begin{frame}{Modeling a Driver}\n\\begin{itemize}\n\t\\item<1->\\textcolor{BlueGreen}{\\bf Home Zone:}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<1-> The zone from which driver starts the day.\n\t\t\\end{itemize}\n\t\t\\vspace{0.25cm}\n\t\\item<2->\\textcolor{BlueGreen}{\\bf Finite Horizon ($N$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<2->Total time steps in optimization period.\n\t\t\\end{itemize}\n\t\t\\vspace{0.25cm}\n\t\\item<3->\\textcolor{BlueGreen}{\\bf Driving Budget ($B$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<3->Maximum time steps a driver is willing to drive. 
Obviously, $B \\leq N$.\n\t\t\\end{itemize}\n\t\t\\vspace{0.25cm}\n\t\\item<4->\\textcolor{BlueGreen}{\\bf Actions ($\\mathcal{A}$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<4-|alert@5,6,7,8> \\textbf{Get Passenger} \\tikzmark{start}\n\t\t\t\\item[--]<4-|alert@6,8> \\textbf{Go Home}\n\t\t\t\\item[--]<4-|alert@7,8> \\textbf{Relocate} \\tikzmark{end}\n\t\t\\end{itemize}\n\t\t\\uncover<5>{\\naive}\n\t\t\\uncover<6>{\\flexible}\n\t\t\\uncover<7>{\\relocation}\n\t\t\\uncover<8>{\\flexiblerelocation}\n\t\\item<9->\\textcolor{BlueGreen}{\\bf Driver Policy ($\\pi$):}\n\t\t\\begin{itemize}\n\t\t\t\\item[--]<9->Time- and location-dependent actions taken by the driver.\n\t\t\t\\item[--]<9->Let $\\Pi$ denote the entire policy space, of size $\\big|n \\times N \\times B \\times \\mathcal{A}\\big|$.\n\t\t\\end{itemize}\n\t\t\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Problem Definition}\n\\begin{itemize}\n\t\\item[] \\textbf{\\textsc{MaxEarnings Problem}:} Given a set of time-evolving $F, T$ and $R$ matrices, as well as the driver's budget $B$, find a policy $\\pi^\\ast$ such that:\n\t\\begin{equation*}\n\t\\pi^\\ast = \\argmax_{\\pi \\in \\Pi} \\mathcal{E}(\\pi, F, T, R, B) \n\t\\end{equation*}\n\twhere $\\mathcal{E}(.)$ denotes total expected earnings.% and $\\Pi$ is entire policy space $\\bigg(\\big||\\mathcal{X}| \\times N \\times \\mathcal{A}\\big|\\bigg)$.\n\\end{itemize}\n\\pause\n\t\\only<2>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix.pdf}\n\t\\end{figure}\n\t}\n\t\\only<3>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix0.pdf}\n\t\\end{figure}\n\t}\n\t\\only<4>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix1.pdf}\n\t\\end{figure}\n\t}\n\t\\only<5>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix2.pdf}\n\t\\end{figure}\n\t}\n\t\\only<6>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix3.pdf}\n\t\\end{figure}\n\t}\n\t\\only<7>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix4.pdf}\n\t\\end{figure}\n\t}\\only<8>{\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.45]{figures/matrix5.pdf}\n\t\\end{figure}\n\t}\n\t\\pause\n\t\\alert{Dynamic Program solvable in $O(n^2NB)$.}\n\\end{frame}", "meta": {"hexsha": "519576849ccc80a6fa5b983b958aa94c3b698315", "size": 4013, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "talk/modeling.tex", "max_stars_repo_name": "chdhr-harshal/uber-driver-strategy", "max_stars_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-04-14T22:30:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-05T17:54:25.000Z", "max_issues_repo_path": "talk/modeling.tex", "max_issues_repo_name": "chdhr-harshal/uber-driver-strategy", "max_issues_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-02-17T10:36:43.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-17T10:46:33.000Z", "max_forks_repo_path": "talk/modeling.tex", "max_forks_repo_name": "chdhr-harshal/uber_driver_strategy", "max_forks_repo_head_hexsha": "f21f968e7aa04d8105bf42e046ab120f813aa12f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 29.947761194, "max_line_length": 173, "alphanum_fraction": 0.6615998006, "num_tokens": 1528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127455162773, "lm_q2_score": 0.6757645879592642, "lm_q1q2_score": 0.5770439946269711}}
{"text": "%\\setcounter{page}{1100}\n\\chapter{Series of Constants\\label{FirstSeriesChapter}}\nIn this chapter we consider ``infinite sums,'' such as\n\\begin{equation}\n\\sum_{n=1}^\\infty a_n=\\underbrace{a_1}_{n=1}+\\underbrace{a_2}_{n=1}\n  +\\underbrace{a_3}_{n=3}+\\cdots.\n\\label{SeriesFirstIntroducedEquation}\\end{equation}\nThe ``sum'' above begins with $a_1$, but we will often begin\nwith a term $a_0$, or $a_2$, etc.  It is not the beginning\nterms which determine if we can in fact compute such a\nsum, but rather it is the infinite ``tail'' of the series.\nThis is reasonable because we can always, in principle, \nadd as many terms together as we like, so long as there are\nfinitely many of them.  As with other calculus concepts,\nthe tool which breaks the finite/infinite barrier is limit.\nIndeed, to make sense of a sum such as (\\ref{SeriesFirstIntroducedEquation}),\nwe consider the $N$th {\\it partial sum},\n\\begin{equation}\nS_N=\\sum_{n=1}^na_n=a_1+a_2+\\cdots+a_{N-1}+a_N,\n\\label{PartialSumFirstIntroducedEquation}\\end{equation}\nand then look at the {\\it sequence} $S_1,S_2,S_3,\\cdots$,\ni.e.,\n\\begin{align*}\nS_1&=a_1,\\\\\nS_2&=a_1+a_2,\\\\\nS_3&=a_1+a_2+a_3,\\\\\nS_4&=a_1+a_2+a_3+a_4,\\end{align*}\nand so on.  To determine if (\\ref{SeriesFirstIntroducedEquation})\nmakes sense is then considered (by definition) equivalent\nto determining the behavior of the sequence $\\left\\{S_N\\right\\}_{N=1}^\\infty$.\nWe say the series (\\ref{SeriesFirstIntroducedEquation})\n{\\it converges} to $L\\in\\Re$ if and only if $S_N\\longrightarrow L$\nas $N\\to\\infty$.\n\nIn a few cases we will actually be able to compute a simple\nformula for $S_N$, and thus be able to compute the \nseries by taking $N\\to\\infty$.  However, in many cases\nwe cannot find a compact formula for $S_N$.  In those cases\nwe have to develop other methods for determining if\nthe series converges at all, and if so, how to approximate\nthe value of the series with as much precision as we require,\nby determining how large we require $N$ to be so that\nwe can approximate the full series by $S_N$.\n\nIn this text we will take more steps than most other texts in developing\nthe theory of series, since this topic is the source of much\nconfusion for students.  Indeed we devote this entire chapter\nto the topic of series of constant terms, leaving nonconstant\nterms for their own chapter.  The concepts are \nintuitive---perhaps deceptively so---but require practice so that,\nfor example, one can recognize when and where to apply a particular\ntest of convergence or divergence.\n\nIn the next chapter\nwe will look at functions defined by series\n$$f(x)=\\sum_{n=0}^\\infty a_n(x-a)^n\n  =a_0+a_1(x-a)+a_2(x-a)^2+\\cdots.$$\nIn fact most functions we have dealt  with in this text can be written\nin the form above, at least on some open intervals, so such functions are\nvery important theoretically. In fact\nthese functions also have use in applications, when we do not know\nany other method of representing a particular function arising\nfrom a particular application except by such a series.\n\n\n\\newpage\n\\section{Series and Partial Sums}\nAs mentioned in the introduction to this chapter, the convergence\nof a series is defined  as equivalent to the convergence of its \npartial sums.  \nFor convenience, we will define the $N$th partial sum\nto be the sum of all terms of the underlying sequence up to\nthe term whose subscript is $N$.  
Thus if the sequence is\nseries is $\\sum_{n=k}^\\infty a_n$, with\nunderlying sequence $\\left\\{a_n\\right\\}_{n=k}^\\infty$, then\n\\begin{equation}\nS_N=\\sum_{n=k}^N a_n=a_k+a_{k+1}+\\cdots+a_N.\\end{equation}\nSo for a series $a_0+a_1+a_2+\\cdots$, the partial\nsum $S_N=a_0+a_1+\\cdots+a_N$ would actually have $N+1$ terms,\nthough we will still call it the $N$th partial sum. (Of course\nif $N<k$ we do not define an $N$th partial sum.)\n\n\\bex Consider the series\n$$\\sum_{n=1}^\\infty \\frac{(-1)^n}{n^2+1}.$$\nFind the first five partial sums.\n\n\\underline{Solution}: We do this directly:\n\\begin{alignat*}{3}\nS_1&=\\sum_{n=1}^1\\frac{(-1)^n}{n^2+1}\n  &&=\\frac{(-1)^1}{1^2+1}&&=\\frac{-1}2\\\\\nS_2&=\\sum_{n=1}^2\\frac{(-1)^n}{n^2+1}\n  &&=\\frac{(-1)^1}{1^2+1}+\\frac{(-1)^2}{2^2+1}\n  &&=\\frac{-1}2+\\frac1{5}=-\\frac{3}{10}=0.3\\\\\nS_3&=\\sum_{n=1}^3\\frac{(-1)^n}{n^2+1}\n  &&=\\frac{(-1)^1}{1^2+1}+\\frac{(-1)^2}{2^2+1}+\\frac{(-1)^3}{3^2+1}\n    &&=-\\frac{3}{10}+\\frac{-1}{10}=\\frac2{10}=0.2\\\\\nS_4&=\\sum_{n=1}^4\\frac{(-1)^n}{n^2+1}\n  &&=\\frac{(-1)^1}{1^2+1}+\\frac{(-1)^2}{2^2+1}+\\frac{(-1)^3}{3^2+1}\n         +\\frac{(-1)^4}{4^2+1}\n  &&=S_3+\\frac{1}{17}=\\frac{44}{170}\\approx0.2588235294\\\\\nS_5&=\\sum_{n=1}^5\\frac{(-1)^n}{n^2+1}&&\n    =S_4+\\frac{(-1)^5}{5^2+1}\n   &&=\\frac{44}{170}+\\frac{-1}{26}\\approx0.220361991.\n\\end{alignat*}\n\\eex\nNote we used the simple recursion relationship for partial sums\nof a series: given a series\n$\\ds{\\sum_{n=k}^\\infty a_n}$, and $N\\ge k$ we have\n\\begin{equation}\nS_{N+1}=S_N+a_{N+1},\\label{RecursionSn+1=Sn+An+1}\n\\end{equation}\nthat is,\n\\begin{align*}\nS_{N+1}&=\\sum_{n=k}^{N+1}a_n=\\underbrace{a_k+a_{k+1}+\\cdots+a_N}_{S_N}\n                +a_{N+1}=\\sum_{n=k}^Na_n+a_{N+1}\\\\\n       &=S_N+a_{N+1},\\text{ q.e.d.}\\end{align*}\nIn a later section \nwe will see that the series in the above example does in fact\nconverge, though we can only approximate its exact value\nby computing $S_N$ for large values of $N$.\n\n\\subsection{Telescoping Series}\nTelescoping series do occur on occasion, but the main reason\nthey are included in most calculus textbooks is that their\npartial sums simplify in nice ways, leaving us able to compute\ntheir limits and thus the whole series.  Indeed, the behavior\nof telescoping series is unusually ``nice''---rivaled only by\nthat of the much more important\ngeometric series we will see later in this section---and therefore\nwell-suited for early examples of the general notion of series.\n\n\nThe simplest type of telescoping series is one in which\nthe terms added are themselves sums of two terms, constructed\nin such a way that there is cancellation such as the following:\n\\begin{align}\n\\sum_{n=1}^\\infty a_n&=\\sum_{n=1}^\\infty\n\\left[b_n-b_{n-1}\\right]\\label{SimpleAbstractTelescopingSeries}\\\\\n&=\\left(b_1-b_0\\right)+\\left(b_2-b_1\\right)+\\left(b_3-b_2\\right)\n+\\left(b_4-b_3\\right)+\\cdots.\\notag\\end{align}\nAfter a careful examination of the terms which appear in \n(\\ref{SimpleAbstractTelescopingSeries}), it seems that\nall cancel except for $-b_0$. However we must be even more\ncareful since there are infinitely many terms we are claiming\nwe can cancel.  
\n\\subsection{Telescoping Series}\nTelescoping series do occur on occasion, but the main reason\nthey are included in most calculus textbooks is that their\npartial sums simplify in nice ways, leaving us able to compute\ntheir limits and thus the whole series.  Indeed, the behavior\nof telescoping series is unusually ``nice''---rivaled only by\nthat of the much more important\ngeometric series we will see later in this section---and therefore\nwell-suited for early examples of the general notion of series.\n\n\nThe simplest type of telescoping series is one in which\nthe terms added are themselves sums of two terms, constructed\nin such a way that there is cancellation such as the following:\n\\begin{align}\n\\sum_{n=1}^\\infty a_n&=\\sum_{n=1}^\\infty\n\\left[b_n-b_{n-1}\\right]\\label{SimpleAbstractTelescopingSeries}\\\\\n&=\\left(b_1-b_0\\right)+\\left(b_2-b_1\\right)+\\left(b_3-b_2\\right)\n+\\left(b_4-b_3\\right)+\\cdots.\\notag\\end{align}\nAfter a careful examination of the terms which appear in \n(\\ref{SimpleAbstractTelescopingSeries}), it seems that\nall cancel except for $-b_0$. However we must be even more\ncareful since there are infinitely many terms we are claiming\nwe can cancel.  The correct approach is to carefully examine the partial \nsums:\n\\begin{align*}\nS_1&=b_1-b_0,\\\\\nS_2&=\\not{b_1}-b_0+b_2-\\not{b_1}=b_2-b_0,\\\\\nS_3&=\\not{b_1}-b_0+\\not{b_2}-\\not{b_1}+b_3-\\not{b_2}=b_3-b_0,\\\\\nS_4&=\\not{b_1}-b_0+\\not{b_2}-\\not{b_1}+\\not{b_3}-\\not{b_2}\n   +b_4-\\not{b_3}=b_4-b_0,\\end{align*}\nand so on, whereby we can conclude that, for this simplest type\nof example (\\ref{SimpleAbstractTelescopingSeries}), we have\n\\begin{equation}\nS_n=b_n-b_0.\\label{SnForSimpleAbstractTelescopingSeries}\n\\end{equation}\nNow such a series will therefore converge if and \nonly if $\\left\\{b_n\\right\\}_{n=1}^\\infty$ converges.\nIf $b_n\\longrightarrow B\\in\\Re$ as $n\\to\\infty$, then\nby (\\ref{SnForSimpleAbstractTelescopingSeries}) we have\n$S_n\\longrightarrow B-b_0$, whence\n$\\sum_{n=1}^\\infty\\left[b_n-b_{n-1}\\right]=B-b_0$.\n\nMore complicated telescoping series also occur, though the\nbasic idea is that the partial sums can be written\nin such a way that all but a few terms found in the partial sums\neventually cancel,\nand where we can compute the limits of those terms which do not.\\footnote{%%%\n%%% FOOTNOTE\nIt is interesting to visualize why the term {\\it telescoping}\nis used to describe such a series.  \nOne of the {\\it Webster's} dictionaries defines \nthe intransitive verb form of {\\it telescope} as follows:\n\\begin{quote}\n{\\it to slide together, or into something else, in the manner of the\ntubes of a jointed telescope.}\\end{quote}\nFor another example, a ``telescoping antenna'' comes to mind.\nThe reader should keep such images in mind as we consider \nso-called telescoping series.\n%%% END FOOTNOTE\n} Rather than memorizing the sample telescoping\nforms (\\ref{SimpleAbstractTelescopingSeries})\nand (\\ref{SnForSimpleAbstractTelescopingSeries}), it is better to \nconsider each example separately, writing out the terms of $S_N$\nfor enough values of $N$ that the pattern emerges.\n\\bex Consider the series $\\ds{\\sum_{n=1}^\\infty\\left[\\frac1{n+1}-\\frac1n\\right]\n}$.  Compute the form of each partial sum $S_N$ (as a function of $N$), and\nthe value of the series if it converges.\n\n\\underline{Solution}: We will write out a few partial sums longhand, \nfrom which the pattern will emerge.\nIndeed, all but two terms will cancel in each of the following.\n\\begin{alignat*}{2}\nS_1&=\\left[\\frac12-1\\right]&&=\\frac12-1,\\\\\nS_2&=\\left[\\frac12-1\\right]+\\left[\\frac13-\\frac12\\right]\n     &&=\\frac13-1,\\\\\nS_3&=\\left[\\frac12-1\\right]+\\left[\\frac13-\\frac12\\right]\n   +\\left[\\frac14-\\frac13\\right]&&=\\frac14-1,\\\\\nS_4&=\\left[\\frac12-1\\right]+\\left[\\frac13-\\frac12\\right]\n   +\\left[\\frac14-\\frac13\\right]+\\left[\\frac15-\\frac14\\right]\n   &&=\\frac15-1.\n\\end{alignat*}\nFrom this we do indeed see a pattern in which \n$$S_N=\\frac1{N+1}-1.$$\nTaking $N\\to\\infty$, we see $S_N=\\frac1{N+1}-1\\longrightarrow0-1=-1$,\nand so we conclude that the series converges to $-1$, i.e.,\n$$\\sum_{n=1}^\\infty\\left[\\frac1{n+1}-\\frac1n\\right]=-1.$$\n\\eex
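\nAnother classic example of the same cancellation: since\n$$\\frac1{n(n+1)}=\\frac1n-\\frac1{n+1},$$\nthe series $\\ds{\\sum_{n=1}^\\infty\\frac1{n(n+1)}}$ telescopes with\n$$S_N=1-\\frac1{N+1}\\longrightarrow1\\text{ as }N\\to\\infty,$$\nso the series converges to $1$.\n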
\n\n\\bex Consider the series $\\ds{\\sum_{n=2}^\\infty\\frac1{n^2-1}}$.\nCompute a general formula for the $N$th partial sum $S_N$,\nand compute its limit, if $S_N$ converges, thereby computing the series.\n\n\\underline{Solution}: Note first that there is no $S_1$ here.\nThat said, the technique which we will use for this\nis to first look at the partial fraction decomposition\nfor $\\frac1{n^2-1}$.  Of course we need the denominator factored,\ngiving us the form\n$$\\frac1{n^2-1}=\\frac1{(n+1)(n-1)}=\\frac{A}{n+1}+\\frac{B}{n-1}.$$\nMultiplying the second equation by $(n+1)(n-1)$ then gives us\n$$1=A(n-1)+B(n+1).$$\nNow we use the usual methods for computing the coefficients $A$ and $B$:\n\\begin{alignat*}{2}\n\\underline{n=1}:&\\qquad &1&=B(2)\\implies\\boxed{B=\\frac12}\\\\\n\\underline{n=-1}:&\\qquad&1&=A(-2)\\implies\\boxed{A=-\\frac12}.\n\\end{alignat*}\nFrom this we can rewrite our series\n$$\\sum_{n=2}^\\infty\n \\left[\\frac{-1/2}{n+1}+\\frac{1/2}{n-1}\\right]\n =\\sum_{n=2}^\\infty\\left[\\frac12\\left(\\frac{-1}{n+1}+\\frac1{n-1}\\right)\\right].\n$$\nThere is no $S_1$, so we begin with $S_2$.\n\\begin{alignat*}{2}\nS_2&=\\frac12\\left(\\frac{-1}{3}+1\\right)\n       &&=\\frac12\\left(\\frac{-1}{3}+1\\right),\\\\\nS_3&=\\frac12\\left(\\frac{-1}{3}+1\\right)\n       +\\frac12\\left(\\frac{-1}4+\\frac12\\right)\n       &&=\\frac12\\left(\\frac{-1}{3}+1+\\frac{-1}4+\\frac12\\right),\\\\\nS_4&=\\frac12\\left(\\frac{-1}{3}+1\\right)\n       +\\frac12\\left(\\frac{-1}4+\\frac12\\right)\n       +\\frac12\\left(\\frac{-1}5+\\frac13\\right)\n       &&=\\frac12\\left(1+\\frac12-\\frac14-\\frac15\\right),\\\\\nS_5&=S_4+\\frac12\\left(\\frac{-1}6+\\frac14\\right)\n       &&=\\frac12\\left(1+\\frac12-\\frac15-\\frac16\\right),\\\\\nS_6&=S_5+\\frac12\\left(\\frac{-1}7+\\frac15\\right)\n       &&=\\frac12\\left(1+\\frac12-\\frac16-\\frac17\\right),\\\\\nS_7&=S_6+\\frac12\\left(\\frac{-1}8+\\frac16\\right)\n       &&=\\frac12\\left(1+\\frac12-\\frac17-\\frac18\\right).\n\\end{alignat*}\nBy this point a pattern has clearly emerged:\n$$S_N=\\frac12\\left(1+\\frac12-\\frac1N-\\frac1{N+1}\\right),$$\nand so\n$S_N\\longrightarrow \\frac12\\left[1+\\frac12-0-0\\right]=\\frac12\\cdot\\frac32\n         =\\frac34$ as $N\\to\\infty$.\nWe can thus conclude that\n$$\\sum_{n=2}^\\infty\\frac1{n^2-1}\n  =\\sum_{n=2}^\\infty\n         \\left[\\frac12\\left(\\frac{-1}{n+1}+\\frac1{n-1}\\right)\\right]\n  =\\frac34.$$\n\\eex\n\\bex Find $S_N$ and discuss the convergence (or divergence)\n     of the series\n$$\\sum_{n=0}^\\infty\\left[\\sqrt{n+1}-\\sqrt{n}\\right].$$\n\n\\underline{Solution}:\n\\begin{alignat*}{2}\nS_0&=\\left[\\sqrt1-\\sqrt0\\right]&&=\\sqrt1-\\sqrt0,\\\\\nS_1&=\\left[\\sqrt1-\\sqrt0\\right]+\\left[\\sqrt2-\\sqrt1\\right]&&=\\sqrt2-\\sqrt0,\\\\\nS_2&=\\left[\\sqrt1-\\sqrt0\\right]+\\left[\\sqrt2-\\sqrt1\\right]\n   +\\left[\\sqrt3-\\sqrt2\\right]&&=\\sqrt3-\\sqrt0,\\\\\nS_3&=\\left[\\sqrt1-\\sqrt0\\right]+\\left[\\sqrt2-\\sqrt1\\right]\n   +\\left[\\sqrt3-\\sqrt2\\right]+\\left[\\sqrt4-\\sqrt3\\right]\n   &&=\\sqrt4-\\sqrt0,\n\\end{alignat*}\nand so on, so that\n$$S_N=\\sqrt{N+1}-\\sqrt0=\\sqrt{N+1}\\longrightarrow\\infty\n\\text{ as }N\\to\\infty.$$\nThus the series diverges (to infinity, to be more descriptive).\n\\eex\nNote that we could simplify our earlier expressions for $S_N$,\nsince for instance $\\sqrt0=0$, $\\sqrt1=1$ and $\\sqrt4=2$, but\nto do so would more likely obscure the pattern of cancellation.
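\nBefore leaving telescoping series, note that the convergent example\n$\\sum_{n=2}^\\infty\\frac1{n^2-1}=\\frac34$ is easy to test numerically.\nThe following minimal Python sketch (an illustration only; nothing\nbeyond the built-ins is assumed) compares direct summation with the\ntelescoped closed form for $S_N$:\n\\begin{verbatim}\n# Direct partial sums of sum_{n=2}^infty 1/(n^2-1) versus the\n# closed form S_N = (1/2)(1 + 1/2 - 1/N - 1/(N+1)).\nfor N in (10, 100, 1000):\n    direct = sum(1.0/(n*n - 1) for n in range(2, N + 1))\n    closed = 0.5*(1 + 0.5 - 1.0/N - 1.0/(N + 1))\n    print(N, direct, closed)\n\\end{verbatim}\nBoth columns agree for each $N$ and creep toward $0.75$, as the\ntelescoping analysis predicts.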
\n\\subsection{Geometric Series}\nThe class of series considered here is arguably the most important\nwe will encounter.  Many important series analyses\ndepend upon how a particular series compares to, or mimics the\nbehavior of, an appropriately chosen geometric series.\nAs with the telescoping series, the geometric series\nis one for which we can actually compute a general formula\nfor $S_N$, from which we can tell if the series converges,\nand if so compute its sum.\n\nWhat makes a series $\\sum a_n$ geometric is that there exists a constant\n$r\\in\\Re-\\{0\\}$ such that\n\\begin{equation}\n(\\forall n)\\left[\\frac{a_{n+1}}{a_n}=r\\right].\\label{RForGeometricSeries}\n\\end{equation}\nIn other words, such a series can be defined recursively\nby $a_{n+1}=r\\cdot a_n$.  (Note that this is equivalent to\n$a_n=r\\cdot a_{n-1}$, so long as $a_{n-1}$ is defined.)\nPut more colloquially, a geometric series is one in which\nthe next term added is a constant multiple of the\nterm immediately prior to it.  Examples of geometric series follow:\n\\begin{itemize}\n\\item $\\ds{\\sum_{n=0}^\\infty\\left(\\frac12\\right)^n\n   =1+\\frac12+\\frac14+\\frac18+\\cdots}$ \\qquad($r=1/2$),\n\\item $\\ds{\\sum_{n=2}^\\infty \\frac6{5^n}=\\frac6{25}\n          +\\frac6{125}+\\frac6{625}+\\frac6{3125}+\n       \\cdots}$ \\qquad($r=1/5$),\n\\item $\\ds{\\sum_{n=1}^\\infty\\frac{2(-1)^n}{3^n}\n   =-\\frac23+\\frac29-\\frac2{27}+\\frac2{81}-\\cdots}$\\qquad\n     ($r=-1/3$),\n\\item $\\ds{\\sum_{n=1}^\\infty\\frac1{3^{2n}}\n   =\\frac19+\\frac1{81}+\\frac1{729}+\\frac1{6561}+\\cdots}$\n   \\qquad($r=1/9$).\n\\end{itemize}\nNote that this last series can be\nrewritten $\\sum_{n=1}^\\infty\\frac1{9^n}$,\nor even $\\sum_{n=0}^\\infty\\left[\\frac19\\cdot\\left(\\frac19\\right)^n\\right]$.\nIn fact, unlike the\ntelescoping series, every geometric series can be written in the\nsame form, namely\\footnote{%%\n%%% FOOTNOTE\nWith geometric series, it is understood that ``$r^0$'' represents\n$1$, even though technically this is only correct if $r>0$.\nIn each general setting in which we follow the convention\nthat $r^0$ is defined to be $1$ (regardless of the sign of $r$),\nwe will remark on this point.\n%%% END FOOTNOTE\n}\n\\begin{equation}\n\\sum_{n=0}^\\infty \\alpha r^n\n  =\\alpha+\\alpha r+\\alpha r^2+\\alpha r^3+\\cdots,\n\\label{GeneralGeometricSeries}\n\\end{equation}\nwhere\n\\begin{align}\n&\\alpha\\text{ is the {\\it first term} of the series, and}\\\\\n&r\\text{ is the {\\it constant ratio}, } a_{n+1}/a_n.\n\\end{align}\nIn the examples above, the first terms\nare $\\alpha=1,6/25,-2/3,1/9$ respectively.\nEach of the series above can be rewritten\nin $\\Sigma$-notation in the form (\\ref{GeneralGeometricSeries}),\nstarting with $n=0$.\nFor instance, the third series above can be rewritten,\nusing $\\alpha=-2/3$ and $r=-1/3$, as\n$$\n\\sum_{n=1}^\\infty\\frac{2(-1)^n}{3^n}\n   =-\\frac23+\\frac29-\\frac2{27}+\\frac2{81}-\\cdots\n   =\\sum_{n=0}^\\infty \\frac{-2}3\\left(\\frac{-1}3\\right)^n.\n$$\nIn fact, once we know a series is geometric (that is,\nthat $a_{n+1}=r\\cdot a_n$ for each $n$), all we need\nto do is to identify $\\alpha$ and $r$, and we can write\nthe series in the exact $\\Sigma$-notation form\n(\\ref{GeneralGeometricSeries}).
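\nIn practice one detects $r$ by computing several successive quotients\n$a_{n+1}/a_n$ and checking that they agree.  As a quick machine\nillustration (a minimal Python sketch, assuming only the standard\nlibrary), the quotients for the third series above all come out to\n$-\\frac13$:\n\\begin{verbatim}\n# Successive quotients a_{n+1}/a_n reveal the common ratio r.\nfrom fractions import Fraction\n\nterms = [Fraction(-2, 3), Fraction(2, 9),\n         Fraction(-2, 27), Fraction(2, 81)]\nprint([terms[i + 1] / terms[i] for i in range(len(terms) - 1)])\n# [Fraction(-1, 3), Fraction(-1, 3), Fraction(-1, 3)]\n\\end{verbatim}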
\n\\bex Write the series $4+\\frac23+\\frac19+\\frac1{54}+\\frac1{324}+\\cdots$\nin the form (\\ref{GeneralGeometricSeries}).\n\n\\underline{Solution}: Though perhaps not immediately obvious,\nin fact each successive term is $\\frac16$ times its immediate\npredecessor.  The first term is $4$.  We translate these two facts as\n$\\alpha=4$ and $r=\\frac16$, and so\nthis series is the same as the series\n$$\\sum_{n=0}^\\infty 4\\cdot\\left(\\frac16\\right)^n.$$\n\\eex\n\nAs with telescoping series, a geometric series\nallows for a simple formula for $S_N$.\nTo use the formula, however, we need to make two assumptions:\n\\begin{enumerate}\n\\item that the series is already written in the form\n      $\\ds{\\sum_{n=0}^\\infty\\alpha r^n=\\alpha+\\alpha r+\\alpha r^2+\n                 \\alpha r^3+\\cdots}$, and\n\\item that $r\\ne1$.\n\\end{enumerate}\nAs we have seen, the first requirement is easy enough to accomplish: we\nneed only identify $\\alpha$ (the first term in the geometric\nseries) and $r$.  The second requirement is for technical reasons\nwe will encounter momentarily.  We do not lose much\nin assuming $r\\ne1$, since in the case $r=1$\nthe series is simply $\\alpha+\\alpha+\\alpha+\\cdots$,\nwhich is clearly a divergent series if $\\alpha\\ne0$, and trivial if\n$\\alpha=0$.\\footnote{%%%\n%%% FOOTNOTE\nWe will not generally consider the case $\\alpha=0$ because\nit is trivial, and because we cannot identify a unique $r$.\nIndeed, if $\\alpha=0$, then any geometric recursion is valid,\nbut our original method of defining $r$, namely (\\ref{RForGeometricSeries})\non page~\\pageref{RForGeometricSeries}, is undefined if\n$\\alpha=0$.\n%%% END FOOTNOTE\n}\nNow we state our theorem.\n\\begin{theorem} For a geometric series $\\ds{\\sum_{n=0}^\\infty\\alpha r^n\n=\\alpha+\\alpha r+\\alpha r^2+\\alpha r^3+\\cdots}$,\nassuming $r\\ne1$, we have\n\\begin{equation}\nS_N=\\frac{\\alpha\\left(1-r^{N+1}\\right)}{1-r}.\\label{SnForGeometric}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe usual method of proof of (\\ref{SnForGeometric}) is to exploit\nthe geometric nature of the series in the following way:\n$$\\begin{array}{lccccccccccc}\n&S_N&=&\\alpha&+&\\alpha r&+&\\alpha r^2&+&\\cdots&+&\\alpha r^N\\\\\n\\implies\n&r\\cdot S_N&=&\\alpha r&+&\\alpha r^2&+&\\alpha r^3&+&\\cdots&+&\\alpha r^{N+1}\\\\\n\\hline\n\\implies&(1-r)S_N&=&\\alpha&+&0&+&0&+&0&-&\\alpha r^{N+1}\\end{array}$$\nIn the first line we wrote the definition of $S_N$.  In the next\nline we multiplied that equation by $r$.  In the third line,\nthe second line is subtracted from the first. In doing so,\nthe terms $\\alpha r,\\alpha r^2,\\cdots,\\alpha r^{N}$ cancel,\nleaving only $\\alpha-\\alpha r^{N+1}$ on the right-hand side.\nThis gives us\n$$(1-r)S_N=\\alpha\\left(1-r^{N+1}\\right).$$\nSince we are assuming $r\\ne1$, we can divide by $1-r$ and get\n(\\ref{SnForGeometric}), as desired.\n\\end{proof}\n\nTo utilize (\\ref{SnForGeometric}), one needs to know $\\alpha$, $r$ and $N$.\nNote that $N$ is not the number of terms, but the highest power\nof $r$ which occurs.  In fact there are $N+1$ terms added\nto arrive at $S_N$, since the first is $\\alpha r^0$.
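\nThe formula is also easy to sanity-check by machine.  Here is a\nminimal Python sketch (an illustration with arbitrarily chosen\n$\\alpha$ and $r$, assuming only built-ins) comparing the closed form\nagainst brute-force addition of the $N+1$ terms:\n\\begin{verbatim}\n# Closed form S_N = alpha*(1 - r**(N+1))/(1 - r) versus the\n# brute-force sum of the N+1 terms alpha*r**0, ..., alpha*r**N.\nalpha, r = 3.0, -0.5\nfor N in (1, 4, 9):\n    direct = sum(alpha * r**n for n in range(N + 1))\n    closed = alpha * (1 - r**(N + 1)) / (1 - r)\n    print(N, direct, closed)   # the two columns agree\n\\end{verbatim}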
\n\\bex Consider the series $1+\\frac12+\\frac14+\\frac18+\\cdots$.\n     Find the sum of the first 9 terms.\n\n\\underline{Solution}: What we are seeking here is\n$S_8=\\frac{\\alpha\\left(1-r^{8+1}\\right)}{1-r}$,\nwhere $\\alpha=1$ and $r=\\frac12$.  Thus\n$$S_8=\\frac{1\\left[1-\\left(\\frac12\\right)^9\\right]}{1-\\frac12}\n     =\\frac{1-\\frac1{512}}{\\frac12}\n     =\\frac{1-\\frac1{512}}{\\frac12}\\cdot\\frac{512}{512}\n     =\\frac{512-1}{256}=\\frac{511}{256}=1.99609375.$$\n\\eex\n\nThe formula (\\ref{SnForGeometric}) also works when $r<0$.\n\\bex Consider the series $1-\\frac12+\\frac14-\\frac18+\\cdots$.\nFind the sum of the first 9 terms.\n\n\\underline{Solution}: Again we want $S_8$, but while $\\alpha=1$ as before,\nhere we have $r=-1/2$.\n\n$$S_8=\\frac{1\\left[1-\\left(\\frac{-1}2\\right)^9\\right]}{1-\\left(-\\frac12\\right)}\n     =\\frac{1-\\frac{-1}{512}}{\\frac32}\n     =\\frac{1+\\frac1{512}}{\\frac32}\\cdot\\frac{512}{512}\n     =\\frac{512+1}{256\\cdot3}\n     =\\frac{513}{256\\cdot3}\n     =\\frac{171}{256}=0.66796875.$$\n\\eex\n\n\\bex Suppose one deposits into an account (without interest)\n     one penny (\\$0.01) on the first\n     day of a month, then deposits two pennies (\\$0.02)\n     the next day, four pennies the next, and so on,\n     each day depositing twice what was deposited the\n     day before.  How much money is in the account\n     after the first week (7 payments),\n     second week, third week, and thirty-first\n     day?\n\n\\underline{Solution}: This is the same as asking for\n     partial sums of the series $0.01+0.02+0.04+0.08+\\cdots$.\n     This is a geometric series (\\ref{GeneralGeometricSeries}) with\n     $\\alpha=0.01$ and $r=2$.  Here we have to be careful\n     about $N$, since after the first day $N=0$, after the\n     second $N=1$, etc.  Now we compute the total deposit after\n     \\begin{itemize}\n     \\item 1 week, i.e., 7 days, we have $N=6$ and\n        $$S_6=\\frac{0.01\\left[1-2^7\\right]}{1-2}\n             =\\frac{0.01\\left[1-2^7\\right]}{-1}\n             =0.01\\left(2^7-1\\right)=0.01(127)=1.27.$$\n     \\item 2 weeks, i.e., 14 days, we have $N=13$ and (continuing the\n           pattern above)\n         $$S_{13}=\\frac{0.01\\left[1-2^{14}\\right]}{1-2}\n                 =0.01\\left(2^{14}-1\\right)=0.01(16383)=163.83.$$\n     \\item 3 weeks, i.e., 21 days, we have $N=20$ and\n         $$S_{20}=\\cdots=0.01\\left(2^{21}-1\\right)=0.01(2,097,151)\n                 =20,971.51$$\n     \\item 31 days, so we have $N=30$, and\n         $$S_{30}=\\cdots=0.01\\left(2^{31}-1\\right)\n                 =0.01(2,147,483,647)=21,474,836.47.$$\n      \\end{itemize}\n\\eex\nThis latest example illustrates that, when $r>1$, the\nfunction $N\\mapsto S_N$ is essentially exponential.\nIndeed, {\\it as a function of} $N$,\n$$S_N=\\frac{\\alpha}{1-r}\\left[1-r^{N+1}\\right]\n=\\frac{\\alpha}{1-r}+\\frac{-\\alpha\\cdot r^{N+1}}{1-r}\n=\\frac{\\alpha}{1-r}+\\left[\\frac{\\alpha}{r-1}\\cdot r\\right]\n\\cdot r^{N}=A+Br^N,$$\nwhere $A=\\frac{\\alpha}{1-r}$ and $B=\\frac{\\alpha r}{r-1}$.\nThus as a function of $N$, $S_N$ is basically\na vertical translation of an\nexponential growth $Br^N$, assuming again that $r>1$.\\footnote{%%%\n%%% FOOTNOTE\nIf $r\\in(0,1)$ we get a translation\nof exponential decay; if $r\\in(-1,0)$ we get a kind of\n``damped oscillation''; if $r=-1$ we get steady oscillation;\nand if $r<-1$ we get a growing oscillation.  Details are left to\nthe reader.\n%%% END FOOTNOTE\n}\nThis partially explains why some use the term ``geometric growth''\nwhen referring to exponential growth.
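\nThe four balances above can be reproduced in a few lines of code.  The\nfollowing minimal Python sketch (an illustration only) works in whole\ncents, so the integer arithmetic is exact:\n\\begin{verbatim}\n# Daily-doubling deposits, in cents: day n+1 contributes 2**n cents,\n# so after N+1 days the balance is 1 + 2 + ... + 2**N = 2**(N+1) - 1.\nfor days in (7, 14, 21, 31):\n    N = days - 1\n    cents = sum(2**n for n in range(N + 1))\n    assert cents == 2**(N + 1) - 1     # the geometric sum with r = 2\n    print(days, cents / 100)           # 1.27, 163.83, 20971.51, 21474836.47\n\\end{verbatim}\nThe roughly $2^7=128$-fold growth from week to week is the exponential\nbehavior just described.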
\n\\subsection{Convergence/Divergence in Geometric Series}\nNow we look at necessary and sufficient conditions for a\ngeometric series to converge.  If a given geometric\nseries does converge, we compute its sum.  Our result\nis the following:\n\n\\begin{theorem}\nFor a geometric series $\\ds{\\sum_{n=0}^\\infty \\alpha r^n\n   =\\alpha+\\alpha r+\\alpha r^2+\\alpha r^3+\\cdots}$, where $\\alpha\\ne0$,\n\\begin{enumerate}\n\\item the series {\\bf converges} if and only if $|r|<1$, i.e., $r\\in(-1,1)$;\\\\\n      equivalently, the series {\\bf diverges} if and only if $|r|\\ge 1$, i.e.,\n      $r\\in(-\\infty,-1]\\cup[1,\\infty)$.\n\\item if $|r|<1$, then the series converges to $\\ds{\\frac{\\alpha}{1-r}}$.\n\\end{enumerate}\n\\end{theorem}\nRestated, the geometric series converges to $\\frac{\\alpha}{1-r}$ if $|r|<1$,\nand diverges otherwise.\n\n\\begin{proof} The proof requires some care, as the various cases\ncontain their own technicalities.\n\\begin{itemize}\n\\item Case $r=1$. In such a case, it is not difficult to see (we just\n      count the terms!) that\n$$S_N=\\sum_{n=0}^N\\alpha=(N+1)\\alpha\\longrightarrow\\pm\\infty\n       \\qquad\\text{as }N\\to\\infty,$$\nthe sign being that of $\\alpha$.\nThus $r=1$ gives a divergent series.\n\\item Case $r=-1$. In such a case, we have\n$$\\sum_{n=0}^\\infty\\alpha(-1)^n=\\alpha-\\alpha+\\alpha-\\alpha+\\cdots,$$\nand so\n$$S_N=\\left\\{\\begin{array}{rl}\n\\alpha,&\\qquad\\text{if }N\\text{ is even,}\\\\\n0,&\\qquad\\text{if }N\\text{ is odd.}\\\\\n\\end{array}\\right.$$\nIn other words, $\\left\\{S_N\\right\\}_{N=0}^\\infty=\\alpha,0,\\alpha,0,\\alpha,0,\n   \\cdots$, which is clearly a divergent sequence, i.e.,\n   the series itself is divergent (by definition).\\footnote{%%%\n%%% FOOTNOTE\nRecall that the convergence of the series is defined by the convergence\nof the (sequence of) partial sums.\n%%% END FOOTNOTE\n}\n\\item Case $|r|>1$. Here we can use the formula for the partial sums:\n$$S_N=\\frac{\\alpha\\left(1-r^{N+1}\\right)}{1-r}.$$\nNow there is only one term which is not a fixed constant, and so\nthe convergence of this expression depends only upon the\nconvergence of the $r^{N+1}$ term.  Clearly if $r>1$,\nthis is an exponential growth, and diverges.  For the general case\n$|r|>1$, we get that\\footnote{%%%\n%%% FOOTNOTE\nThis follows from\ncontinuity of the function $x\\mapsto|x|$ giving us the ``$\\implies$.''\nSee Theorem~\\ref{ContFunctCarriesSequenceToF(Limit)},\npage~\\pageref{ContFunctCarriesSequenceToF(Limit)}.\n%%% END FOOTNOTE\n}\n\\begin{equation}\nr^{N+1}\\text{ converges }\\implies \\left|r^{N+1}\\right|\\text{ converges }\n\\iff|r|^{N+1}\\text{ converges.}\\label{AConvergenceArgument1}\\end{equation}\nBut for $|r|>1$, we have $|r|^{N+1}$ diverges, so with\nthe contrapositive of (\\ref{AConvergenceArgument1}) we have\n\\begin{align*}\n|r|>1&\\implies |r|^{N+1}\\text{ diverges }\\implies\nr^{N+1}\\text{ diverges } \\\\\n&\\implies S_N=\\frac{\\alpha\\left(1-r^{N+1}\\right)}{1-r} \\text{ diverges }\n\\iff \\sum_{n=0}^\\infty\\alpha r^n\\text{ diverges.}\\end{align*}\n\\item Case $|r|<1$.\nAgain we look at the variable part of the formula for $S_N$.\nIt is enough to show that $|r|<1\\implies r^{N+1}\\longrightarrow0$.\nOne method is to use the sandwich theorem. In the\nargument below, note that $|r|<1\\implies|r|\\in(0,1)\n\\implies |r|^{N+1}\\longrightarrow0$.
The relevant\nsandwich theorem application is then (as $N\\to\\infty$):\\footnote{%\n%%% FOOTNOTE\nRecall that for any $x\\in\\Re$, we have $-|x|\\le x\\le |x|$.\n%%% END FOOTNOTE\n}\n\\begin{center}\n\\begin{pspicture}(0,2)(8.7,4.5)\n\\rput[l](0,4){$\\underbrace{-|r|^{N+1}}=-\\left|r^{N+1}\\right|$}\n  \\rput[l](3.8,4){$\\le$}\n\\rput[l](4.6,4){$r^{N+1}$}\n  \\rput[l](6.2,4){$\\le$}\n\\rput[l](7,4){$\\left|r^{N+1}\\right|=\\underbrace{|r|^{N+1}}$}\n\\psline{->}(.63,3.5)(.63,2.3)\n\\psline{->}(8.99,3.5)(8.99,2.3)\n \\rput(.63,2){0}\n \\rput(9,2){0}\n\\end{pspicture}\n\\end{center}\nThus $|r|<1\\implies r^{N+1}\\longrightarrow 0$ as $N\\to\\infty$.\nWe can conclude that\n$$|r|<1\\implies\nS_N=\\frac{\\alpha\\left(1-r^{N+1}\\right)}{1-r}\n   \\longrightarrow \\frac{\\alpha(1-0)}{1-r}=\\frac{\\alpha}{1-r}\n\\text{ (as }N\\to\\infty).$$\n\\end{itemize}\nThis completes the proof.\n\\end{proof}\nThe implication above is worth repeating in a summarized form:\n\\begin{equation}\n|r|<1\\implies\\sum_{n=0}^\\infty\\alpha r^n=\\frac{\\alpha}{1-r}.\n\\label{|r|<1ImpliesSumOfGeometricSeries}\\end{equation}\n\\bex Here are some series computations using\n   the theorem and (\\ref{|r|<1ImpliesSumOfGeometricSeries}).\n\\begin{itemize}\n\\item $\\ds{\\sum_{n=0}^\\infty 2\\left(\\frac13\\right)^n\n   =\\frac2{1-\\frac13}=\\frac2{\\frac23}=2\\cdot\\frac32=3}$.\n   \\qquad($\\alpha=2$, $r=\\frac13$.)\n\\item $\\ds{\\sum_{n=0}^\\infty0.99^n=\\frac1{1-0.99}=\\frac1{.01}=100}$.\n   \\qquad($\\alpha=1$, $r=0.99$.)\n\\item $\\ds{\\sum_{n=0}^\\infty 1.01^n}$ diverges.\\qquad\n   ($\\alpha=1$, $r=1.01$ so $|r|>1$, and the series diverges.)\n\\item $\\ds{\\sum_{n=2}^\\infty\\frac1{3^n}=\\frac{\\frac19}{1-\\frac13}\n    =\\frac{\\frac19}{\\frac23}=\\frac19\\cdot\\frac32=\\frac16}$.\n    \\qquad(First term is $\\alpha=\\frac19$, $r=\\frac13$.)\n\\item $\\ds{1-\\frac12+\\frac14-\\frac18+\\frac{1}{16}-\\cdots\n       =\\frac1{1-\\left(\\frac{-1}2\\right)}=\\frac1{\\frac32}=\\frac23}$.\n     \\qquad($\\alpha=1$, $r=-\\frac12$.)\n\\item $\\ds{\\sum_{n=1}^\\infty e^{-n}=\\frac{\\frac1e}{1-\\frac1e}\n           =\\frac{\\frac1e}{1-\\frac1e}\\cdot\\frac{e}e=\\frac1{e-1}}$.\n     \\qquad(First term is $\\alpha=\\frac1e$, $r=\\frac1e$.)\n\\item $\\ds{\\sum_{n=1}^\\infty\\frac5{3^{2n}}\n       =\\sum_{n=1}^\\infty\\frac5{9^n}=\\frac{5/9}{1-\\frac19}\n       =\\frac{5/9}{8/9}=\\frac58.}$\n       \\qquad($\\alpha=\\frac59$, $r=\\frac19.$)\n\\end{itemize}\n\\eex
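\nEach of these values is easy to corroborate numerically.  A minimal\nPython sketch (an illustration only; the standard library suffices)\nevaluates the partial-sum formula for large $N$ and compares it with\nthe predicted sum for two of the series above:\n\\begin{verbatim}\n# S_N = alpha*(1 - r**(N+1))/(1 - r) should approach alpha/(1 - r).\nimport math\n\ndef S(alpha, r, N):\n    return alpha * (1 - r**(N + 1)) / (1 - r)\n\nprint(S(2.0, 1/3, 60), 3.0)                       # sum of 2*(1/3)**n\nprint(S(1/math.e, 1/math.e, 60), 1/(math.e - 1))  # sum of e**(-n), n >= 1\n\\end{verbatim}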
\n\n\\newpage\n\\begin{center}\n\\underline{\\Large{\\bf Exercises}}\\end{center}\n\n\\begin{multicols}{2}\n\n\\begin{enumerate}\n\\item Show that the following series can be\n      written as a telescoping series, and discuss\n      its convergence: $$\\sum_{n=1}^{\\infty}\\ln\\left(\\frac{n}{n+1}\\right).$$\n\\item Give an alternative proof of the formula (\\ref{SnForGeometric})\n      for the partial sums of geometric series.\n      For this new proof, begin with the formula for $S_N$ as\n      in the original proof (page~\\pageref{SnForGeometric}),\n      and then multiply by $(1-r)$, noting how the right-hand side\n      simplifies.  (See also page~\\pageref{aN-bN}.)\n\\end{enumerate}\n\\vfill\n\\end{multicols}\n\\eject\n\n\n\\section{Elementary Test for Divergence}\nBecause it is the exceptional case (e.g., geometric, telescoping)\nthat we can actually find a compact\nformula for $S_N$, we have to develop other tests for the convergence\nor divergence of series.  There will be several such tests, and\nwhich particular test or tests are expeditious and conclusive\nwill vary from series to series.  We explore the first of those\ntests in this section.  We start with a theorem.\n\n\\begin{theorem} Given a series $\\ds{\\sum_{n=k}^\\infty a_n}$.\nIf the series converges, then $a_n\\longrightarrow0$ as $n\\to\\infty$:\n\\begin{equation}\n\\left[\\sum_{n=k}^\\infty a_n\\text{ converges }\\right]\\implies\n\\left[\\lim_{n\\to\\infty}a_n=0\\right].\n\\label{SeriesConvergesImpliesTermsShrinkToZero}\n\\end{equation}\n\\label{SeriesConvergesImpliesTermsShrinkToZeroTheorem}\\end{theorem}\nIt should be intuitively clear that, if we are going to ``add up''\ninfinitely many terms, and have the sums approach a finite number,\nthen the\nterms we are adding are going to have to shrink to zero.\nThe proof uses the fact\nthat $S_n\\to L\\implies S_{n-1}\\to L$, the latter limit occurring\n``one step behind'' the former, but occurring nonetheless\nsince $n\\to\\infty\\implies n-1\\to\\infty\\implies S_{n-1}\\to L$.\nNow to the proof.\n\n\\begin{proof}\nSuppose $\\ds{\\sum_{n=k}^\\infty a_n}$ converges, i.e.,\n$\\ds{\\sum_{n=k}^\\infty a_n=L}$ for some $L\\in\\Re$\n(and in particular $L$ is finite).\nThen by definition\n$$S_N\\longrightarrow L\\text{ as }N\\to\\infty.$$\nNow recall $S_{n}=a_n+S_{n-1}$, so that $a_n=S_n-S_{n-1}$.\nTaking $n\\to\\infty$ we get\n$$a_n=S_n-S_{n-1}\\longrightarrow L-L=0,\\qquad\\text{q.e.d.}$$\n\\end{proof}\nNote that it was important that $L$ be finite in the limit\ncomputation above.  For instance, if $S_n\\to\\infty$, we\nwould have $a_n=S_n-S_{n-1}$ giving $\\infty-\\infty$-form\n(which is indeterminate) as\n$n\\to\\infty$.\n\nAgain, the intuition behind the theorem is that, in order to\nbe able to add infinitely many terms---one at a time in the\nsense that we compute $S_k$, $S_{k+1}$, $S_{k+2}$, etc.,\nand look for a trend towards $L$---the terms that we add,\ni.e., $a_k$, $a_{k+1}$, $a_{k+2}$, etc., have to shrink\nif the partial sums are to approach a finite number $L$.
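\nThe identity $a_n=S_n-S_{n-1}$ at the heart of the proof is easy to\nwatch in action.  A minimal Python sketch (an illustration using a\nconvergent geometric series, nothing more) prints the successive\ndifferences of the partial sums and shows them dying out:\n\\begin{verbatim}\n# For sum (1/2)**n we have S_N = 2 - (1/2)**N -> 2, so the\n# differences a_n = S_n - S_{n-1} must shrink to 0.\nS = lambda N: 2 - 0.5**N\nfor n in (1, 5, 10, 20):\n    print(n, S(n) - S(n - 1))   # equals (1/2)**n, tending to 0\n\\end{verbatim}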
\nIn fact, the form of the theorem which we use is the\ncontrapositive.  Recall\nthe logical equivalence $P\\longrightarrow Q\n\\iff(\\sim Q)\\longrightarrow(\\sim P)$.\\footnote{%%%\n%%% FOOTNOTE\nTo be sure, here $P\\longrightarrow Q$ is read,\n``$P$ implies $Q$.''  The symbol ``$\\sim$'' is still\nthe ``not,'' or logical negation, operator.\nThe symbol ``$\\iff$'' stands in for logical equivalence.\n%%% END FOOTNOTE\n}\nIn this case, $P$ is the statement that the series converges (to\na finite number $L$), while $Q$ is the statement that $a_n\\to0$.\nThe contrapositive form of\nTheorem~\\ref{SeriesConvergesImpliesTermsShrinkToZeroTheorem}\nis our main result in this section, and we dub that result\n{\\it Elementary Test for Divergence}, or ETFD:\n\\begin{theorem} {\\bf (ETFD)}\nIf it is {\\bf not} the case that $a_n\\to0$, then $\\ds{\\sum_{n=k}^\\infty a_n}$\ndiverges.\nPut symbolically,\n\\begin{equation}\na_n\\not{\\!\\!\\to0}\\implies\\sum_{n=k}^\\infty a_n\\text{ diverges.}\n\\label{ETFDequation}\n\\end{equation}\n\\label{ETFD}\\end{theorem}\n\n\\begin{proof} It is enough to say that this is the contrapositive of\nTheorem~\\ref{SeriesConvergesImpliesTermsShrinkToZeroTheorem},\nand therefore also true.\nOne way to write this symbolically is the following.\n$$\\underbrace{\\sum_{n=k}^\\infty a_n\\text{ converges }\n\\longrightarrow \\left(a_n\\to 0\\right)}_{\\text{always true by\nTheorem~\\ref{SeriesConvergesImpliesTermsShrinkToZeroTheorem}}}\n\\iff\n\\underbrace{\n\\left[\\sim\\left(a_n\\to 0\\right)\\right]\n\\longrightarrow\n\\sum_{n=k}^\\infty a_n\\text{ diverges}}_{\\text{statement of\nTheorem~\\ref{ETFD}}}.$$\nThe statement on the right must always be true (a tautology),\nsince it is equivalent to the statement on the left,\nwhich---being the statement of\nTheorem~\\ref{SeriesConvergesImpliesTermsShrinkToZeroTheorem}---is\nitself a tautology.  Since the statement on the right is a\ntautology, it stands alone as a theorem, with ``$[\\ ]\\implies\\sum$''\nreplacing ``$[\\ ]\\longrightarrow\\sum$,'' q.e.d.\n\\end{proof}\nThis theorem is undoubtedly one of\nthe most misunderstood and misapplied results\nin all of Calculus I and II.  It is as important to understand\nwhat it does not say, as it is to understand what it says.\nThe theorem says that if the terms of a series do not\nshrink to zero, then the series must diverge.  This is\n``elementary,'' hence the name ``Elementary Test for Divergence.\\footnotemark''\n\\footnotetext{%%\n%%% FOOTNOTE\nOther texts call this the {\\it $n$th term test}, or\nthe {\\it $n$th term test for divergence}, for reasons which should be\nclear.  The author is not aware of any other text that calls\nit the elementary test for divergence.  However, both names\nare descriptive enough that one is not likely to be misunderstood\nin conversation or written reports if using either name for\nthis result.}\n%%% END FOOTNOTE\n\nBut it is not as comprehensive as one might think.  After all,\nif the terms of a series {\\it do} shrink to zero, the\ntheorem is silent!
(Therein lies the unfortunately\nvery common mistake made by calculus students.)\nTo emphasize this we look at the following examples.\n\n\\bex Discuss what Theorem~\\ref{ETFD} has to say about\nthe series \\newline\n(a) $\\ds{\\sum_{n=1}^\\infty\\frac{n}{n+1}}$;\\hfill\n(b) $\\ds{\\sum_{n=1}^\\infty\\frac1{n+1}}$;\\hfill\n(c) $\\ds{\\sum_{n=1}^\\infty\\cos\\frac1{n}}$;\\hfill\n(d) $\\ds{\\sum_{n=1}^\\infty\\sin\\frac1{n}}$;\\hfill\n(e) $\\ds{\\sum_{n=1}^\\infty\\sin\\frac1{n^2}}$.\n\n\\underline{Solution}:\n\\begin{enumerate}[(a)]\n\\item $\\ds{\\frac{n}{n+1}\\longrightarrow1\\ne0\n        \\overset{\\text{\\rm ETFD}}{\\implies}\n           \\sum_{n=1}^\\infty \\frac{n}{n+1}}$ diverges.\n\\item $\\ds{\\frac1{n+1}\\longrightarrow0}$.  The\n      ETFD is inconclusive.\n\\item $\\ds{\\cos\\frac1n\\longrightarrow\\cos0=1\\ne0\n        \\overset{\\text{\\rm ETFD}}{\\implies}\n        \\sum_{n=1}^\\infty\\cos\\frac1{n}}$ diverges.\n\\item $\\ds{\\sin\\frac1n\\longrightarrow\\sin0=0}$.\n       The ETFD is inconclusive.\n\\item $\\ds{\\sin\\frac1{n^2}\\longrightarrow\\sin0=0}$.\n       The ETFD is inconclusive.\n\\end{enumerate}\n\\eex\n\nLooking closely at the symbolic statement of ETFD given\nin (\\ref{ETFDequation}), we see that there is never an\nimplication of convergence.  Indeed, the test either concludes\ndivergence, or is inconclusive.  This is a very quick but incomplete test,\nwhich can only detect divergence in certain (still common) circumstances,\nnamely that $a_n\\not{\\!\\!\\to}0$.\n\nIndeed, in (a) and (c) above, ETFD gave us divergence.\nHowever, it said nothing in (b), (d) and (e), as the\n``if'' part of the theorem was not true.  In fact,\nof these three in which ETFD is silent---(b), (d) and (e)---it\nturns out that $(b)$ and $(d)$ are divergent, while\n$(e)$ is convergent.  The methods to see this are introduced\nin later sections.  An example from the exercises\nin the last section shows that it is indeed possible\nthat $a_n\\to0$ while the series nonetheless diverges:\n\\bex Consider the series\n$\\ds{\\sum_{n=1}^\\infty\\ln\\left(\\frac{n}{n+1}\\right)}$.\nDetermine if it converges or diverges.\n\n\\underline{Solution}: First we note ${\\ln\\left(\\frac{n}{n+1}\\right)\n\\longrightarrow\\ln1=0}$ as $n\\to\\infty$.  Thus ETFD is inconclusive.\\footnote{\n%%% FOOTNOTE\nSome textbooks would write that the test ``fails.''\nThat seems a bit strong.  It is merely inconclusive, so we need\nto look deeper at the particular series and perhaps employ some other test\nwhich will be conclusive.}\n%%% END FOOTNOTE\n\nLooking closer at this series, we should eventually notice that\nit is telescoping.  This becomes clear\nif we rewrite it:\n$$\\sum_{n=1}^\\infty\\left[\\ln(n)-\\ln(n+1)\\right]\n  =\\left[\\ln 1-\\ln 2\\right]+\\left[\\ln 2-\\ln 3\\right]\n    +\\left[\\ln 3-\\ln 4\\right]+\\cdots.$$\nIt is not difficult to see that $S_N=\\ln1-\\ln(N+1)=-\\ln(N+1)$,\nand thus\n$$S_N=-\\ln(N+1)\\longrightarrow-\\infty\\text{ as }N\\to\\infty.$$\nSince the partial sums diverge, by definition so does the series.\n\\eex
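\nThis example rewards a quick numerical look.  The following minimal\nPython sketch (an illustration only, using just the standard library)\nshows the terms dying out to $0$ even as the partial sums march off\nto $-\\infty$:\n\\begin{verbatim}\n# a_n = ln(n/(n+1)) -> 0, and yet S_N = -ln(N+1) -> -infinity.\nimport math\n\nfor N in (10, 1000, 100000):\n    a_N = math.log(N / (N + 1))\n    S_N = sum(math.log(n / (n + 1)) for n in range(1, N + 1))\n    print(N, a_N, S_N, -math.log(N + 1))   # S_N matches -ln(N+1)\n\\end{verbatim}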
\nOn the other hand, many series do converge; the ETFD, however, is\nnever powerful enough to prove convergence.  Rather than risk confusion\nby elaborating this point further, we will end the discussion here.\n\n\\newpage\\section{The Integral Test for Convergence/Divergence}\n\n\n\n", "meta": {"hexsha": "bfac844c5912936aa6954713a6852097965c8bcd", "size": 36852, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "michael.dougherty/chapter10.tex", "max_stars_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_stars_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "michael.dougherty/chapter10.tex", "max_issues_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_issues_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "michael.dougherty/chapter10.tex", "max_forks_repo_name": "UNDL-edu/Calculo-Infinitesimal", "max_forks_repo_head_hexsha": "2ad971127ae31b88de02b5e85fb8ba2249278e2e", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.5467869222, "max_line_length": 79, "alphanum_fraction": 0.6927168132, "num_tokens": 12711, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.5769460439821373}}
{"text": "\\documentclass[a4paper,11pt]{amsbook}\r\n\\usepackage{amsmath}\r\n\\usepackage{../HBSuerDemir}\r\n\\usepackage{fullpage}\r\n\\pagestyle{headings}\r\n\\usepackage{fancyhdr}\r\n\r\n\r\n\\begin{document}\r\n\\hPage{b1p2/250}\r\n\r\n\\vspace*{10mm}\r\n$$\r\n\\begin{vmatrix} \r\n11-x & -6&2 \\\\\r\n-6 & 10-x & -4\\\\\r\n2& -4 & 6-x\r\n\\end{vmatrix}\r\n=0\r\n$$\r\n\\\\\r\n\r\n\r\nis 6,and find the other two roots.\r\n\r\nis 6,and find the other two roots.\\\\\r\n\\\\\r\n17. If n$\\varepsilon$N, show that (b-c)(c-a)(a-b) is a factor of\\\\\r\n$$\r\n\\begin{vmatrix}\r\n1&1&1\\\\\r\na&b&c\\\\\r\na^{n+2}&b^{n+2}&c^{n+2}\r\n\\end{vmatrix}\r\n$$\r\n\\\\\r\n18.If, $\\alpha+\\beta+\\gamma$=2s, prove\\\\\r\n\r\n$\r\n\\begin{vmatrix}\r\n1&\\cos\\gamma&\\cos\\beta\\\\\r\n\\cos\\gamma&1&\\cos\\alpha\\\\\r\n\\cos\\beta&\\cos\\alpha&1\r\n\\end{vmatrix}\r\n=4\\sin{s}\\sin{(s-\\alpha)}\\sin{(s-\\beta)}\\sin{(s-\\gamma)}\r\n$\r\n\\\\\r\n\\\\\r\n\\\\\r\n19.Show that\r\n\\\\\r\n\r\n$\r\n\\begin{vmatrix}\r\n1&\\cos{x}-\\sin{x}&\\cos{x}+\\sin{x}\\\\\r\n1&\\cos{y}-\\sin{y}&\\cos{y}+\\sin{y}\\\\\r\n1&\\cos{z}-\\sin{z}&\\cos{z}+\\sin{z}\r\n\\end{vmatrix}\r\n=2\r\n\\begin{vmatrix}\r\n1&\\cos{x}&\\sin{x}\\\\\r\n1&\\cos{y}&\\sin{y}\\\\\r\n1&\\cos{z}&\\sin{z}\r\n\\end{vmatrix}\r\n$\r\n\\\\\r\n\\\\\r\n\\\\\r\n20.Evaluate the following determinants:\r\n\\\\\r\n\r\na)\r\n$\r\n\\begin{vmatrix}\r\n0&0&1&2&3\\\\\r\n0&0&4&5&6\\\\\r\n-1&-4&0&7&8\\\\\r\n-2&-5&-7&0&9\\\\\r\n-3&-6&-8&-9&0\r\n\\end{vmatrix}\r\n$\r\n\\hspace{30mm} b)\r\n$\r\n\\begin{vmatrix}\r\n0&0&a&b\\\\\r\n0&0&c&d\\\\\r\n-a&-c&0&0\\\\\r\n-b&-d&0&0\r\n\\end{vmatrix}\r\n$\r\n\\\\\r\n\r\n(See Example 7 and 8.)\\\\ \\\\ \r\n\\section*{ANSWERS TO EVEN NUMBERED EXERCISES}\r\n\\mbox{ }\\\\\r\n2. a) $adf+2bce-ae^2-dc^2-fb$ \\hspace{20mm} b)-2\\\\ \\\\\r\n4. a)$(a^3-1)^2$, \\hspace{20mm} b)-575\\\\ \\\\\r\n8. 0\\\\ \\\\\r\n10. ${(a-b)^2}(2x-a-b)(a+b+2x)$\\\\\r\n\\end{document}\r\n\r\n", "meta": {"hexsha": "704143cad58c4497ce0c4cbc7afcd4bcf2f8758c", "size": 1578, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw1/non-merged/MEHMET YUSUF BAYAM_35959_assignsubmission_file_/b1p2-250/SuerDemirb1p2-250/b1p2-250.tex", "max_stars_repo_name": "yildirimyigit/cmpe220_2016_3", "max_stars_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-05-15T22:03:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-15T22:03:34.000Z", "max_issues_repo_path": "hw1/non-merged/MEHMET YUSUF BAYAM_35959_assignsubmission_file_/b1p2-250/SuerDemirb1p2-250/b1p2-250.tex", "max_issues_repo_name": "yildirimyigit/cmpe220_2016_3", "max_issues_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw1/non-merged/MEHMET YUSUF BAYAM_35959_assignsubmission_file_/b1p2-250/SuerDemirb1p2-250/b1p2-250.tex", "max_forks_repo_name": "yildirimyigit/cmpe220_2016_3", "max_forks_repo_head_hexsha": "4e71a0ed20d76b93c144c2f9c0fbbd52c04b5ae3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 15.4705882353, "max_line_length": 67, "alphanum_fraction": 0.536121673, "num_tokens": 750, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5769460424173047}}
{"text": "\\documentclass{article}  %Need this.\n\n\\usepackage{amsmath,amsthm,amssymb}\n\n%\\usepackage[margin=1in]{geometry}\n\n\n\\newtheorem*{thm}{Theorem}\n\\newtheorem*{cnj}{Conjecture}\n\\newtheorem*{lem}{Lemma}\n\\newtheorem*{cor}{Corollary}\n\\newtheorem*{prop}{Proposition}\n\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\R}{\\mathbb{R}}\n\n\n\n\n\n\\title{Reproducing a \\LaTeX Document}\n\\author{Put your name here}\n\\date{}\n\n\n\\begin{document}\n\\maketitle\n\n\\begin{thm}\nIf $x$ and $y$ are odd integers then $x+y$ is an even integer.\n\\end{thm}\n\n\\begin{proof}\nWe assume that $x$ and $y$ are odd integers and will prove that $x+y$ is an even integer. Since $x$ and $y$ are odd, there exist integers $m$ and $n$ such that $x=2m+1$ and $y=2n+1$.  By substitution and algebra we obtain\n\\begin{align*}\nx+y &= 2m+1 + 2n + 1\\\\\n&= 2m+2n+2\\\\\n&=2(m+n+1).\n\\end{align*}\nDefine $q=m+n+1$.  Since $m$ and $n$ are integers and the integers are closed under addition, we conclude that $q$ is an integer. Since $x+y=2q$ for the integer $q$ we conclude that $x+y$ is an even integer.\n\\end{proof}\n\n\n\\vspace{.15in}\n\n\\section*{Challenge Typing}\n\n\nSuppose that $f:(-1,1)\\to \\R$ and $f$ is differentiable at $0$. Let sequences $(\\alpha_n)_{n\\geq1}$ and $(\\beta_n)_{n\\geq1}$ satisify $-1<\\alpha_n<\\beta_n<1$ for all $n\\geq 1$ and $\\displaystyle{\\lim_{n\\to\\infty}} \\alpha_n = \\displaystyle{\\lim_{n\\to\\infty}} \\beta_n = 0$. Set \n\t\n\t\t$$\\lambda_n = \\frac{f(\\beta_n) -f(\\alpha_n)}{\\beta_n-\\alpha_n} . $$\n\n\n\n\\end{document}", "meta": {"hexsha": "726cf5689f340833e2571682828bf2a0d7eb1e60", "size": 1482, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "from LDK/LaTeX/Reproduce.tex", "max_stars_repo_name": "mkjanssen/discrete", "max_stars_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "from LDK/LaTeX/Reproduce.tex", "max_issues_repo_name": "mkjanssen/discrete", "max_issues_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "from LDK/LaTeX/Reproduce.tex", "max_forks_repo_name": "mkjanssen/discrete", "max_forks_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4642857143, "max_line_length": 276, "alphanum_fraction": 0.6740890688, "num_tokens": 562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5769460424173047}}
{"text": "\\title{Teaching the Geometry of Schemes}\n\\titlerunning{Teaching the Geometry of Schemes}\n\\toctitle{Teaching the Geometry of Schemes}\n\n\\author{Gregory G.~Smith \\and Bernd Sturmfels}\n\\authorrunning{G. G. Smith and B. Sturmfels}\n% \\institute{Department of Mathematics, University of California,\n% Berkeley, California 94720, USA}\n\n\\maketitle\n\n\n%%----------------------------------------------------------\n\\newtheorem*{problem*}{Problem}{\\bfseries\\upshape}{\\itshape}\n\\newtheorem*{solution*}{Solution}{\\itshape}{\\rmfamily}\n\n\\newcommand{\\Spec}{\\operatorname{Spec}}\n\\newcommand{\\Proj}{\\operatorname{Proj}}\n\\newcommand{\\codim}{\\operatorname{codim}}\n%%----------------------------------------------------------\n\n\n\\begin{abstract}\nThis chapter presents a collection of graduate level problems in\nalgebraic geometry illustrating the power of \\Mtwo as an educational\ntool.\n\\end{abstract}\n\nWhen teaching an advanced subject, like the language of schemes, we\nthink it is important to provide plenty of concrete instances of the\ntheory.  Computer algebra systems, such as \\Mtwo, provide students\nwith an invaluable tool for studying complicated examples.\nFurthermore, we believe that the explicit nature of a computational\napproach leads to a better understanding of the objects being\nexamined.  This chapter presents some problems which we feel\nillustrate this point of view.\n\nOur examples are selected from the homework of an algebraic geometry\nclass given at the University of California at Berkeley in the fall of 1999.\nThis graduate course was taught by the second author with assistance from the\nfirst author.  Our choice of problems, as the title suggests, follows the\nmaterial in David Eisenbud and Joe Harris' textbook {\\em The Geometry of\n  Schemes} \\cite{SC:EH}.\n\n%%----------------------------------------------------------\n\\section{Distinguished Open Sets}\n\nWe begin with a simple example involving the Zariski topology of an affine\nscheme\\index{scheme!affine}. This example also indicates some of the\nsubtleties involved in working with arithmetic\nschemes\\index{scheme!arithmetic}.\n\n\\begin{problem*}\nLet $S = \\bbbz[x,y,z]$ and $X = \\Spec(S)$.  If $f = x$ and $X_{f}$ is\nthe corresponding basic open subset in $X$, then establish the\nfollowing:\n\\begin{enumerate}\n\\item[$(1)$] If $e_{1} = x+y+z$, $e_{2} = xy+xz+yz$ and $e_{3} = xyz$\nare the elementary symmetric functions then the set $\\{X_{e_{i}}\\}_{1\n\\leq i \\leq 3}$ is an open cover of $X_{f}$.\n\\item[$(2)$] If $p_{1} = x+y+z$, $p_{2} = x^{2}+y^{2}+z^{2}$ and $p_{3}\n= x^{3}+y^{3}+z^{3}$ are the power sum symmetric functions then\n$\\{X_{p_{i}}\\}_{1 \\leq i \\leq 3}$ is {\\em not} an open cover of\n$X_{f}$.\n\\end{enumerate}\n\\end{problem*}\n\n\\begin{solution*}\n$(1)$ To prove that $\\{X_{e_{i}}\\}_{1 \\leq i \\leq 3}$ is an open cover\nof $X_{f}$, it suffices to show that $e_{1}$, $e_{2}$ and $e_{3}$\ngenerate the unit ideal in $S_{f}$; see Lemma I-16 in Eisenbud and\nHarris~\\cite{SC:EH}.  This is equivalent to showing that $x^{m}$\nbelongs to the $S$-ideal $\\langle e_{1}, e_{2}, e_{3} \\rangle$ for\nsome $m \\in \\bbbn$.  In other words, the saturation\\index{saturation}\n$\\big( \\langle e_{1}, e_{2}, e_{3} \\rangle : x^{\\infty} \\big)$ is the\nunit ideal if and only if $\\{X_{e_{i}}\\}_{1 \\leq i \\leq 3}$ is an open\ncover of $X_{f}$.  
We verify this in \\Mtwo as follows:\n\\beginOutput\ni1 : S = ZZ[x, y, z];\\\\\n\\endOutput\n\\beginOutput\ni2 : elementaryBasis = ideal(x+y+z, x*y+x*z+y*z, x*y*z);\\\\\n\\emptyLine\no2 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni3 : saturate(elementaryBasis, x)\\\\\n\\emptyLine\no3 = ideal 1\\\\\n\\emptyLine\no3 : Ideal of S\\\\\n\\endOutput\n$(2)$ Similarly, to show that $\\{X_{p_{i}}\\}_{1 \\leq i \\leq 3}$ is not\nan open cover of $X_{f}$, we prove that $\\big( \\langle p_{1}, p_{2},\np_{3} \\rangle : x^{\\infty} \\big)$ is not the unit ideal.  Calculating\nthis saturation, we find\n\\beginOutput\ni4 : powerSumBasis = ideal(x+y+z, x^2+y^2+z^2, x^3+y^3+z^3);\\\\\n\\emptyLine\no4 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni5 : saturate(powerSumBasis, x)\\\\\n\\emptyLine\n\\                            2            2\\\\\no5 = ideal (6, x + y + z, 2y  + 2y*z + 2z , 3y*z)\\\\\n\\emptyLine\no5 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni6 : clearAll\\\\\n\\endOutput\nwhich is not the unit ideal.\\qed\n\\end{solution*}\n\nThe fact that $6$ is a generator of the ideal $\\big( \\langle p_{1},\np_{2}, p_{3} \\rangle : x^{\\infty} \\big)$ indicates that\n$\\{X_{p_{i}}\\}_{1 \\leq i \\leq 3}$ does not contain the points in $X$\nlying over the points $\\langle 2 \\rangle$ and $\\langle 3 \\rangle$ in\n$\\Spec(\\bbbz)$.  If we work over a base ring in which $6$ is a unit,\nthen $\\{X_{p_{i}}\\}_{1 \\leq i \\leq 3}$ would, in fact, be an open\ncover of $X_{f}$.\n\n\n%%----------------------------------------------------------\n\\section{Irreducibility}\n\nThe study of complex semisimple Lie algebras gives rise to an\nimportant family of algebraic varieties called nilpotent\norbits\\index{nilpotent orbits}.  The next problem examines the\nirreducibility\\index{scheme!irreducible} of a particular nilpotent\norbit.\n\n\\begin{problem*} \nLet $X$ be the set of nilpotent complex $3 \\times 3$ matrices.  Show\nthat $X$ is an irreducible algebraic variety.\n\\end{problem*}\n\n\\begin{solution*}\nA $3 \\times 3$ matrix $M$ is nilpotent if and only if its minimal\npolynomial $p(\\sf T)$ equals ${\\sf T}^{k}$, for some $k \\in \\bbbn$.\nSince each irreducible factor of the characteristic polynomial of $M$\nis also a factor of $p(\\sf T)$, it follows that the characteristic\npolynomial of $M$ is ${\\sf T}^{3}$.  We conclude that the coefficients\nof the characteristic polynomial of a generic $3 \\times 3$ matrix\ndefine the algebraic variety $X$.\n\nTo prove that $X$ is irreducible over $\\bbbc$, we construct a rational\nparameterization\\index{rational parameterization}.  First, observe\nthat ${\\rm GL}_{3}(\\bbbc)$ acts on $X$ by conjugation.  Jordan's\ncanonical form theorem implies that there are exactly three orbits;\none for each of the following matrices:\n\\[\nN_{(1,1,1)} =\\left[ \\begin{smallmatrix} 0 & 0 & 0 \\\\ 0 & 0 & 0 \\\\ 0 &\n0 & 0 \\end{smallmatrix} \\right], \\quad\nN_{(2,1)} = \\left[ \\begin{smallmatrix} 0 & 1 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0\n& 0 \\end{smallmatrix} \\right] \\text{ and }\nN_{(3)} = \\left[ \\begin{smallmatrix} 0 & 1 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 &\n0 \\end{smallmatrix} \\right] \\enspace .\n\\]\nEach orbit is defined by a rational parameterization, so it suffices\nto show that the closure of the orbit containing $N_{(3)}$ is the\nentire variety $X$.  We demonstrate this as follows:\n\\beginOutput\ni7 : S = QQ[t, y_0 .. 
y_8, a..i, MonomialOrder => Eliminate 10];\\\\\n\\endOutput\n\\beginOutput\ni8 : N3 = (matrix \\{\\{0,1,0\\},\\{0,0,1\\},\\{0,0,0\\}\\}) ** S\\\\\n\\emptyLine\no8 = | 0 1 0 |\\\\\n\\     | 0 0 1 |\\\\\n\\     | 0 0 0 |\\\\\n\\emptyLine\n\\             3       3\\\\\no8 : Matrix S  <--- S\\\\\n\\endOutput\n\\beginOutput\ni9 : G = genericMatrix(S, y_0, 3, 3)\\\\\n\\emptyLine\no9 = | y_0 y_3 y_6 |\\\\\n\\     | y_1 y_4 y_7 |\\\\\n\\     | y_2 y_5 y_8 |\\\\\n\\emptyLine\n\\             3       3\\\\\no9 : Matrix S  <--- S\\\\\n\\endOutput\nTo determine the entries in $G \\cdot N_{(3)} \\cdot G^{-1}$, we use the\nclassical adjoint\\index{classical adjoint} to construct the matrix\n$\\det(G) \\cdot G^{-1}$.\n\\beginOutput\ni10 : classicalAdjoint = (G) -> (\\\\\n\\           n := degree target G;\\\\\n\\           m := degree source G;\\\\\n\\           matrix table(n, n, (i, j) -> (-1)^(i+j) * det(\\\\\n\\                     submatrix(G, \\{0..j-1, j+1..n-1\\}, \\\\\n\\                          \\{0..i-1, i+1..m-1\\}))));\\\\\n\\endOutput\n\\beginOutput\ni11 : num = G * N3 * classicalAdjoint(G);\\\\\n\\emptyLine\n\\              3       3\\\\\no11 : Matrix S  <--- S\\\\\n\\endOutput\n\\beginOutput\ni12 : D = det(G);\\\\\n\\endOutput\n\\beginOutput\ni13 : M = genericMatrix(S, a, 3, 3);\\\\\n\\emptyLine\n\\              3       3\\\\\no13 : Matrix S  <--- S\\\\\n\\endOutput\nThe entries in $G \\cdot N_{(3)} \\cdot G^{-1}$ give a rational\nparameterization of the orbit generated by $N_{(3)}$.  Using\nelimination theory\\index{elimination theory} --- see section~3.3 in\nCox, Little and O`Shea~\\cite{SC:CLO} --- we give an ``implicit\nrepresentation'' of this variety.\n\\beginOutput\ni14 : elimIdeal = minors(1, (D*id_(S^3))*M - num) + ideal(1-D*t);\\\\\n\\emptyLine\no14 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni15 : closureOfOrbit = ideal selectInSubring(1, gens gb elimIdeal);\\\\\n\\emptyLine\no15 : Ideal of S\\\\\n\\endOutput\n\nFinally, we verify that this orbit closure equals $X$\nscheme-theoretically.  Recall that $X$ is defined by the coefficients\nof the characteristic polynomial of a generic $3 \\times 3$ matrix {\\tt\nM}.\n%% was X = ideal submatrix( (coeff-icients({0}, det(M - t*id_(S^3))))_1,\n%% {1,2,3} ), but 'coeff-icients' is to be redesigned, and 'contract' is\n%% more self-explanatory, anyway.\n\\beginOutput\ni16 : X = ideal substitute(\\\\\n\\              contract(matrix\\{\\{t^2,t,1\\}\\}, det(t-M)),\\\\\n\\              \\{t => 0_S\\})\\\\\n\\emptyLine\no16 = ideal (- a - e - i, - b*d + a*e - c*g - f*h + a*i + e*i, c*e*g - $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no16 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni17 : closureOfOrbit == X\\\\\n\\emptyLine\no17 = true\\\\\n\\endOutput\n\\beginOutput\ni18 : clearAll\\\\\n\\endOutput\nThis completes our solution.\\qed\n\\end{solution*}\n\nMore generally, Kostant shows that the set of all nilpotent elements\nin a complex semisimple Lie algebra\\index{Lie algebra} form an\nirreducible variety.  
We refer the reader to Chriss and\nGinzburg~\\cite{SC:CV} for a proof of this result (Corollary~3.2.8) and\na discussion of its applications in representation theory.\n\n\n%%----------------------------------------------------------\n\\section{Singular Points}\n\nIn our third question, we study the singular locus\\index{singular\nlocus} of a family of elliptic curves\\index{elliptic curve}.\n\n\\begin{problem*} \nConsider a general form of degree $3$ in $\\bbbq[x,y,z]$:\n\\[\nF = ax^{3} + bx^{2}y + cx^{2}z + dxy^{2} + exyz + fxz^{2} + gy^{3} +\nhy^{2}z + iyz^{2} + jz^{3} \\enspace .\n\\]\nGive necessary and sufficient conditions in terms of $a, \\ldots, j$\nfor the cubic curve $\\Proj\\big( \\bbbq[x,y,z] / \\langle F \\rangle\n\\big)$ to have a singular point.\n\\end{problem*}\n\n\\begin{solution*}\nThe singular locus of $F$ is defined by a polynomial of degree $12$ in\nthe $10$ variables $a, \\dotsc, j$.  We calculate this polynomial in two\ndifferent ways.\n\nOur first method is an elementary but time consuming elimination.\nCarrying it out in \\Mtwo, we have\n\\beginOutput\ni19 : S = QQ[x, y, z, a..j, MonomialOrder => Eliminate 2];\\\\\n\\endOutput\n\\beginOutput\ni20 : F = a*x^3+b*x^2*y+c*x^2*z+d*x*y^2+e*x*y*z+f*x*z^2+g*y^3+h*y^2*z+\\\\\n\\                   i*y*z^2+j*z^3;\\\\\n\\endOutput\n\\beginOutput\ni21 : partials = submatrix(jacobian matrix\\{\\{F\\}\\}, \\{0..2\\}, \\{0\\})\\\\\n\\emptyLine\no21 = \\{1\\} | 3x2a+2xyb+y2d+2xzc+yze+z2f |\\\\\n\\      \\{1\\} | x2b+2xyd+3y2g+xze+2yzh+z2i |\\\\\n\\      \\{1\\} | x2c+xye+y2h+2xzf+2yzi+3z2j |\\\\\n\\emptyLine\n\\              3       1\\\\\no21 : Matrix S  <--- S\\\\\n\\endOutput\n\\beginOutput\ni22 : singularities = ideal(partials) + ideal(F);\\\\\n\\emptyLine\no22 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni23 : elimDiscr = time ideal selectInSubring(1,gens gb singularities);\\\\\n\\     -- used 64.27 seconds\\\\\n\\emptyLine\no23 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni24 : elimDiscr = substitute(elimDiscr, \\{z => 1\\});\\\\\n\\emptyLine\no24 : Ideal of S\\\\\n\\endOutput\nOn the other hand, there is also an elegant and more useful\ndeterminantal formula for this discriminant\\index{discriminant}; it is\na specialization of the formula (2.8) in section~3.2 of Cox, Little\nand O`Shea~\\cite{SC:CLO2}.  
To apply this determinantal formula, we\nfirst create the coefficient matrix {\\tt A} of the partial derivatives\nof $F$.\n%% was A = (coeff-icients({0,1,2}, submatrix(jacobian matrix{{F}}, {0..2}, {0})))_1;\n%% but 'coeff-icients' is deprecated.\n\\beginOutput\ni25 : A = contract(matrix\\{\\{x^2,x*y,y^2,x*z,y*z,z^2\\}\\},\\\\\n\\              diff(transpose matrix\\{\\{x,y,z\\}\\},F))\\\\\n\\emptyLine\no25 = \\{1\\} | 3a 2b d  2c e  f  |\\\\\n\\      \\{1\\} | b  2d 3g e  2h i  |\\\\\n\\      \\{1\\} | c  e  h  2f 2i 3j |\\\\\n\\emptyLine\n\\              3       6\\\\\no25 : Matrix S  <--- S\\\\\n\\endOutput\nWe also construct the coefficient matrix {\\tt B} of the partial\nderivatives of the Hessian\\index{hessian} of $F$.\n\\beginOutput\ni26 : hess = det submatrix(jacobian ideal partials, \\{0..2\\}, \\{0..2\\});\\\\\n\\endOutput\n%% was B = (coeff-icients({0,1,2}, submatrix(jacobian matrix{{hess}}, {0..2}, {0})))_1;\n%% but 'coeff-icients' is deprecated.\n\\beginOutput\ni27 : B = contract(matrix\\{\\{x^2,x*y,y^2,x*z,y*z,z^2\\}\\},\\\\\n\\              diff(transpose matrix\\{\\{x,y,z\\}\\},hess))\\\\\n\\emptyLine\no27 = \\{1\\} | -24c2d+24bce-18ae2-24b2f+72adf               4be2-16bdf-48 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{1\\} | 2be2-8bdf-24c2g+72afg+16bch-24aeh-8b2i+24adi 4de2-16d2f-48 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{1\\} | 2ce2-8cdf-8c2h+24afh+16bci-24aei-24b2j+72adj 2e3-8def-24cf $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              3       6\\\\\no27 : Matrix S  <--- S\\\\\n\\endOutput\nTo obtain the discriminant, we combine these two matrices and take the\ndeterminant.\n\\beginOutput\ni28 : detDiscr = ideal det (A || B);\\\\\n\\emptyLine\no28 : Ideal of S\\\\\n\\endOutput\nFinally, we check that our two discriminants are equal\n\\beginOutput\ni29 : detDiscr == elimDiscr\\\\\n\\emptyLine\no29 = true\\\\\n\\endOutput\nand examine the generator.\n\\beginOutput\ni30 : detDiscr_0\\\\\n\\emptyLine\n\\            2   4 3 2             5 3 2           6 3 2          2 2 2 $\\cdot\\cdot\\cdot$\\\\\no30 = 13824c d*e f g  - 13824b*c*e f g  + 13824a*e f g  - 110592c d e  $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no30 : S\\\\\n\\endOutput\n\\beginOutput\ni31 : numgens detDiscr\\\\\n\\emptyLine\no31 = 1\\\\\n\\endOutput\n\\beginOutput\ni32 : # terms detDiscr_0\\\\\n\\emptyLine\no32 = 2040\\\\\n\\endOutput\n\\beginOutput\ni33 : clearAll\\\\\n\\endOutput\nHence, the singular locus is given by a single polynomial of degree\n$12$ with $2040$ terms.\\qed\n\\end{solution*}\n\nFor a further discussion of singularities and discriminants see\nSection~V.3 in Eisenbud and Harris~\\cite{SC:EH}.  For information on\nresultants and discriminants see Chapter~2 in Cox, Little and\nO`Shea~\\cite{SC:CLO2}.\n\n\n%%----------------------------------------------------------\n\\section{Fields of Definition}\n\nSchemes\\index{scheme!over a number field} over non-algebraically\nclosed fields arise in number theory.  Our fourth problem looks at one\ntechnique for working with number fields in \\Mtwo.\n\n\\begin{problem*}[Exercise~II-6 in  \\cite{SC:EH}]\nAn inclusion of fields $K \\hookrightarrow L$ induces a map\n$\\mathbb{A}_{L}^{n} \\to \\mathbb{A}_{K}^{n}$.  
Find the images in\n$\\mathbb{A}_{\\bbbq}^{2}$ of the following points of\n$\\mathbb{A}_{\\overline{\\bbbq}}^{2}$ under this map.\n\\begin{enumerate}\n\\item[$(1)$] $\\langle x - \\sqrt{2}, y - \\sqrt{2} \\rangle ;$\n\\item[$(2)$] $\\langle x - \\sqrt{2}, y - \\sqrt{3} \\rangle ;$\n\\item[$(3)$] $\\langle x - \\zeta, y - \\zeta^{-1} \\rangle$ where $\\zeta$\nis a $5$-th root of unity $;$\n\\item[$(4)$] $\\langle \\sqrt{2}x- \\sqrt{3}y \\rangle ;$\n\\item[$(5)$] $\\langle \\sqrt{2}x- \\sqrt{3}y-1 \\rangle$.\n\\end{enumerate}\n\\end{problem*}\n\n\\begin{solution*}\nThe images can be determined by using the following three step\nalgorithm: (1) replace the coefficients not contained in $K$ with\nindeterminates, (2) add the minimal polynomials of these coefficients\nto the given ideal in $\\mathbb{A}_{L}^{2}$, and (3) eliminate the new\nindeterminates.  Here are the five examples:\n\\beginOutput\ni34 : S = QQ[a,b,x,y, MonomialOrder => Eliminate 2];\\\\\n\\endOutput\n\\beginOutput\ni35 : I1 = ideal(x-a, y-a, a^2-2);\\\\\n\\emptyLine\no35 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni36 : ideal selectInSubring(1, gens gb I1)\\\\\n\\emptyLine\n\\                     2\\\\\no36 = ideal (x - y, y  - 2)\\\\\n\\emptyLine\no36 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni37 : I2 = ideal(x-a, y-b, a^2-2, b^2-3);\\\\\n\\emptyLine\no37 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni38 : ideal selectInSubring(1, gens gb I2)\\\\\n\\emptyLine\n\\              2       2\\\\\no38 = ideal (y  - 3, x  - 2)\\\\\n\\emptyLine\no38 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni39 : I3 = ideal(x-a, y-a^4, a^4+a^3+a^2+a+1);\\\\\n\\emptyLine\no39 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni40 : ideal selectInSubring(1, gens gb I3)\\\\\n\\emptyLine\n\\                       2    2               3    2\\\\\no40 = ideal (x*y - 1, x  + y  + x + y + 1, y  + y  + x + y + 1)\\\\\n\\emptyLine\no40 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni41 : I4 = ideal(a*x+b*y, a^2-2, b^2-3);\\\\\n\\emptyLine\no41 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni42 : ideal selectInSubring(1, gens gb I4)\\\\\n\\emptyLine\n\\             2   3  2\\\\\no42 = ideal(x  - -*y )\\\\\n\\                 2\\\\\n\\emptyLine\no42 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni43 : I5 = ideal(a*x+b*y-1, a^2-2, b^2-3);\\\\\n\\emptyLine\no43 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni44 : ideal selectInSubring(1, gens gb I5)\\\\\n\\emptyLine\n\\             4     2 2   9  4    2   3  2   1\\\\\no44 = ideal(x  - 3x y  + -*y  - x  - -*y  + -)\\\\\n\\                         4           2      4\\\\\n\\emptyLine\no44 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni45 : clearAll\\\\\n\\endOutput\n\\qed\n\\end{solution*}\n\nIt is worth noting that the points in $\\mathbb{A}_{\\bbbq}^{n}$ correspond\nto orbits of the action of ${\\rm Gal}(\\overline{\\bbbq}/\\bbbq)$ on the\npoints of $\\mathbb{A}_{\\overline{\\bbbq}}^{n}$.  For more examples and\ninformation, see section~II.2 in Eisenbud and Harris~\\cite{SC:EH}.\n\n\n%%----------------------------------------------------------\n\\section{Multiplicity}\n\nThe multiplicity\\index{multiplicity} of a zero-dimensional scheme $X$\nat a point $p \\in X$ is defined to be the length of the local ring\n$\\mathcal{O}_{X,p}$.  Unfortunately, we cannot work directly in the\nlocal ring in \\Mtwo.  
What we can do, however, is to compute the\nmultiplicity by computing the degree of the component of $X$ supported\nat $p$; see page 66 in Eisenbud and Harris~\\cite{SC:EH}.\n\n\\begin{problem*}\nWhat is the multiplicity of the origin as a zero of the polynomial\nequations $x^{5}+y^{3}+z^{3} = x^{3}+y^{5}+z^{3} = x^{3}+y^{3}+z^{5} =\n0$?\n\\end{problem*}\n\n\\begin{solution*}\nIf $I$ is the ideal generated by $x^{5}+y^{3}+z^{3}$,\n$x^{3}+y^{5}+z^{3}$ and $x^{3}+y^{3}+z^{5}$ in $\\bbbq[x,y,z]$, then\nthe multiplicity of the origin is\n\\[\n\\dim_{\\bbbq} \\frac{\\bbbq[x,y,z]_{\\langle x,y,z \\rangle}}\n{I \\bbbq[x,y,z]_{\\langle x,y,z \\rangle}} \\, .\n\\]\nIt follows that the multiplicity is the vector space dimension of the\nring $\\bbbq[x,y,z] / \\varphi^{-1}(I \\bbbq[x,y,z]_{\\langle x,y,z\n\\rangle})$ where $\\varphi \\colon \\bbbq[x,y,z] \\to\n\\bbbq[x,y,z]_{\\langle x,y,z \\rangle}$ is the natural map.  Moreover,\nwe can express this using ideal quotients:\n\\[\n\\varphi^{-1}(I \\bbbq[x,y,z]_{\\langle x,y,z \\rangle}) \\,\\,= \\,\\,\n\\big(I : (I : \\langle x,y,z \\rangle^{\\infty})\\big) \\, .\n\\]\nCarrying out this calculation in \\Mtwo, we obtain:\n\\beginOutput\ni46 : S = QQ[x, y, z];\\\\\n\\endOutput\n\\beginOutput\ni47 : I = ideal(x^5+y^3+z^3, x^3+y^5+z^3, x^3+y^3+z^5);\\\\\n\\emptyLine\no47 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni48 : multiplicity = degree(I : saturate(I))\\\\\n\\emptyLine\no48 = 27\\\\\n\\endOutput\n\\beginOutput\ni49 : clearAll\\\\\n\\endOutput\nThus, we conclude that the multiplicity is $27$.\\qed\n\\end{solution*}\n\nThere are algorithms (not yet implemented in \\Mtwo) for working\ndirectly in the local ring $\\bbbq[x,y,z]_{\\langle x,y,z \\rangle}$.  We\nrefer the interested reader to Chapter~4 in Cox, Little and\nO`Shea~\\cite{SC:CLO2}.\n\n\n%%----------------------------------------------------------\n\\section{Flat Families}\n\nNon-reduced schemes\\index{scheme!non-reduced} arise naturally as flat\nlimits\\index{flat limit} of a family of reduced\nschemes\\index{scheme!reduced}. Our next problem illustrates how a\nfamily of skew lines in $\\bbbp^{3}$ gives rise to a double line with\nan embedded point\\index{embedded point}.\n\n\\begin{problem*}[Exercise~III-68 in \\cite{SC:EH}]\nLet $L$ and $M $ be the lines in $\\bbbp^{3}_{k[t]}$ given by $x=y=0$\nand $x-tz = y+t^{2}w =0$ respectively.  
Show that the flat limit as $t\n\\to 0$ of the union $L \\cup M$ is the double line $x^{2} = y = 0$ with\nan embedded point of degree $1$ located at the point $(0:0:0:1)$.\n\\end{problem*}\n\n\\begin{solution*}\nWe first find the flat limit by saturating\\index{saturation} the\nintersection ideal and setting $t = 0$.\n\\beginOutput\ni50 : PP3 = QQ[t, x, y, z, w];\\\\\n\\endOutput\n\\beginOutput\ni51 : L = ideal(x, y);\\\\\n\\emptyLine\no51 : Ideal of PP3\\\\\n\\endOutput\n\\beginOutput\ni52 : M = ideal(x-t*z, y+t^2*w);\\\\\n\\emptyLine\no52 : Ideal of PP3\\\\\n\\endOutput\n\\beginOutput\ni53 : X = intersect(L, M);\\\\\n\\emptyLine\no53 : Ideal of PP3\\\\\n\\endOutput\n\\beginOutput\ni54 : Xzero = trim substitute(saturate(X, t), \\{t => 0\\})\\\\\n\\emptyLine\n\\                   2        2\\\\\no54 = ideal (y*z, y , x*y, x )\\\\\n\\emptyLine\no54 : Ideal of PP3\\\\\n\\endOutput\nSecondly, we verify that this is the union of a double line and an\nembedded point of degree $1$.\n\\beginOutput\ni55 : Xzero == intersect(ideal(x^2, y), ideal(x, y^2, z))\\\\\n\\emptyLine\no55 = true\\\\\n\\endOutput\n\\beginOutput\ni56 : degree(ideal(x^2, y ) / ideal(x, y^2, z))\\\\\n\\emptyLine\no56 = 1\\\\\n\\endOutput\n\\beginOutput\ni57 : clearAll\\\\\n\\endOutput\n\\qed\n\\end{solution*}\n\nSection~III.3.4 in Eisenbud and Harris~\\cite{SC:EH} contains several\nother interesting limits of various flat families.\n\n\n%%----------------------------------------------------------\n\\section{B\\'{e}zout's Theorem}\n\nB\\'{e}zout's Theorem\\index{Bezout's Theorem@B\\'ezout's Theorem} --- Theorem~III-78 in\nEisenbud and Harris~\\cite{SC:EH} --- may fail without the\nCohen-Macaulay\\index{Cohen-Macaulay} hypothesis.  Our seventh problem\nis to demonstrate this.\n\n\\begin{problem*}[Exercise~III-81 in \\cite{SC:EH}]\nFind irreducible closed subvarieties $X$ and $Y$ in $\\bbbp^{4}$ such\nthat \n\\begin{align*}\n\\codim(X \\cap Y) &= \\codim(X) + \\codim(Y) \\\\\n\\deg(X \\cap Y) &> \\deg(X) \\cdot \\deg(Y) \\, .\n\\end{align*}\n\\end{problem*}\n\n\\begin{solution*}\nWe show that the assertion holds when $X$ is the cone over the\nnonsingular rational quartic curve\\index{rational quartic curve} in\n$\\bbbp^{3}$ and $Y$ is a two-plane passing through the vertex of the\ncone.  First, recall that the rational quartic curve is given by the\n$2 \\times 2$ minors of the matrix $\\left[ \\begin{smallmatrix} a &\nb^{2} & bd & c \\\\ b & ac & c^2 & d \\end{smallmatrix} \\right]$; see\nExercise~18.8 in Eisenbud~\\cite{SC:E}.  Thus, we have\n\\beginOutput\ni58 : S = QQ[a, b, c, d, e];\\\\\n\\endOutput\n\\beginOutput\ni59 : IX = trim minors(2, matrix\\{\\{a, b^2, b*d, c\\},\\{b, a*c, c^2, d\\}\\})\\\\\n\\emptyLine\n\\                         3      2     2    2    3    2\\\\\no59 = ideal (b*c - a*d, c  - b*d , a*c  - b d, b  - a c)\\\\\n\\emptyLine\no59 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni60 : IY = ideal(a, d);\\\\\n\\emptyLine\no60 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni61 : codim IX + codim IY == codim (IX + IY)\\\\\n\\emptyLine\no61 = true\\\\\n\\endOutput\n\\beginOutput\ni62 : (degree IX) * (degree IY)\\\\\n\\emptyLine\no62 = 4\\\\\n\\endOutput\n\\beginOutput\ni63 : degree (IX + IY)\\\\\n\\emptyLine\no63 = 5\\\\\n\\endOutput\nwhich establishes the assertion.\\qed\n\\end{solution*}\n\nTo understand how this example works, it is enlightening to express\n$Y$ as the intersection of two hyperplanes; one given by $a = 0$ and\nthe other given by $d = 0$.  
Intersecting $X$ with the first\nhyperplane yields\n\\beginOutput\ni64 : J = ideal mingens (IX + ideal(a))\\\\\n\\emptyLine\n\\                      3      2   2    3\\\\\no64 = ideal (a, b*c, c  - b*d , b d, b )\\\\\n\\emptyLine\no64 : Ideal of S\\\\\n\\endOutput\nHowever, this first intersection has an embedded point;\n\\beginOutput\ni65 : J == intersect(ideal(a, b*c, b^2, c^3-b*d^2), \\\\\n\\           ideal(a, d, b*c, c^3, b^3)) -- embedded point\\\\\n\\emptyLine\no65 = true\\\\\n\\endOutput\n\\beginOutput\ni66 : clearAll\\\\\n\\endOutput\nThe second hyperplane passes through this embedded\npoint\\index{embedded point} which explains the extra intersection.\n\n\n%%----------------------------------------------------------\n\\section{Constructing Blow-ups}\n\nThe blow-up\\index{blow-up} of a scheme $X$ along a subscheme $Y$ can\nbe constructed from the Rees algebra\\index{Rees algebra} associated to\nthe ideal sheaf of $Y$ in $X$; see Theorem~IV-22 in Eisenbud and\nHarris~\\cite{SC:EH}.  Gr\\\"{o}bner basis techniques allow one to\nexpress the Rees algebra in terms of generators and relations.  We\nillustrate this method in the next solution.\n\n\\begin{problem*}[Exercises~IV-43 \\& IV-44 in \\cite{SC:EH}]\nFind the blow-up $X$ of the affine plane\\index{scheme!affine}\n$\\mathbb{A}^{2} = \\Spec\\big( \\bbbq[x, y] \\big)$ along the subscheme\ndefined by $\\langle x^{3}, xy, y^{2} \\rangle$.  Show that $X$ is\nnonsingular and its fiber over the origin is the union of two copies\nof $\\bbbp^{1}$ meeting at a point.\n\\end{problem*}\n\n\\begin{solution*}\nWe first provide a general function which returns the ideal of\nrelations for the Rees algebra.\n\\beginOutput\ni67 : blowUpIdeal = (I) -> (\\\\\n\\           r := numgens I;\\\\\n\\           S := ring I;\\\\\n\\           n := numgens S;\\\\\n\\           K := coefficientRing S;\\\\\n\\           tR := K[t, gens S, vars(0..r-1), \\\\\n\\                     MonomialOrder => Eliminate 1];\\\\\n\\           f := map(tR, S, submatrix(vars tR, \\{1..n\\}));\\\\\n\\           F := f(gens I);\\\\\n\\           J := ideal apply(1..r, j -> (gens tR)_(n+j)-t*F_(0,(j-1)));\\\\\n\\           L := ideal selectInSubring(1, gens gb J);\\\\\n\\           R := K[gens S, vars(0..r-1)];\\\\\n\\           g := map(R, tR, 0 | vars R);\\\\\n\\           trim g(L));\\\\\n\\endOutput\nNow, applying the function to our specific case yields: \n\\beginOutput\ni68 : S = QQ[x, y];\\\\\n\\endOutput\n\\beginOutput\ni69 : I = ideal(x^3, x*y, y^2);\\\\\n\\emptyLine\no69 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni70 : J = blowUpIdeal(I)\\\\\n\\emptyLine\n\\                           2         2          3     2\\\\\no70 = ideal (y*b - x*c, x*b  - a*c, x b - y*a, x c - y a)\\\\\n\\emptyLine\no70 : Ideal of QQ [x, y, a, b, c]\\\\\n\\endOutput\nTherefore, the blow-up of the affine plane along the given subscheme\nis\n\\[\nX = \\Proj\\left( \\frac{(\\bbbq[x,y])[a,b,c]}{\\langle yb-xc, xb^{2}-ac,\nx^{2}b-ya, x^{3}c-y^{2}a \\rangle} \\right) \\, .\n\\]\nUsing \\Mtwo, we can also verify that the scheme $X$ is\nnonsingular\\index{singular locus};\n\\beginOutput\ni71 : J + ideal jacobian J == ideal gens ring J\\\\\n\\emptyLine\no71 = true\\\\\n\\endOutput\n\\beginOutput\ni72 : clearAll\\\\\n\\endOutput\nSince we have\n\\[\n\\frac{(\\bbbq[x,y])[a,b,c]}{\\langle yb-xc, xb^{2}-ac, x^{2}b-ya,\nx^{3}c-y^{2}a \\rangle} \\otimes \\frac{\\bbbq[x,y]}{\\langle x, y \\rangle}\n\\cong \\frac{\\bbbq[a,b,c]}{\\langle ac \\rangle} \\, ,\n\\]\nthe fiber over the origin $\\langle x,y 
\\rangle$ in $\\mathbb{A}^{2}$ is\nclearly a union of two copies of $\\bbbp^{1}$ meeting at one point.  In\nparticular, the exceptional fiber is not a projective space.\\qed\n\\end{solution*}\n\nMany other interesting blow-ups can be found in section~II.2 in\nEisenbud and Harris~\\cite{SC:EH}.\n\n\n%%----------------------------------------------------------\n\\section{A Classic Blow-up}\n\nWe consider the blow-up\\index{blow-up} of the projective plane\n$\\bbbp^{2}$ at a point.\n\n\\vbox{\n\\begin{problem*}\nShow that the following varieties are isomorphic.\n\\begin{enumerate}\n\\item[$(a)$] the image of the rational map from $\\bbbp^{2}$ to\n$\\bbbp^{4}$ given by\n\\[\n(r:s:t) \\mapsto (r^{2}:s^{2}:rs:rt:st) \\, ;\n\\]\n\\item[$(b)$] the blow-up of the plane $\\bbbp^{2}$ at the point\n$(0:0:1)$;\n\\item[$(c)$] the determinantal variety\\index{determinantal variety}\ndefined by the $2 \\times 2$ minors of the matrix $\\left[\n\\begin{smallmatrix} a & c & d \\\\ b & d & e \\end{smallmatrix} \\right]$\nwhere $\\bbbp^{4} = \\Proj\\big( k[a,b,c,d,e] \\big)$.\n\\end{enumerate}\nThis surface is called the {\\em cubic scroll}\\index{cubic scroll} in\n$\\bbbp^{4}$.\n\\end{problem*}\n}\n\n\\begin{solution*}\nWe find the ideal in part~$(a)$ by elimination\ntheory\\index{elimination theory}.\n\\beginOutput\ni73 : PP4 = QQ[a..e];\\\\\n\\endOutput\n\\beginOutput\ni74 : S = QQ[r..t, A..E, MonomialOrder => Eliminate 3];\\\\\n\\endOutput\n\\beginOutput\ni75 : I = ideal(A - r^2, B - s^2, C - r*s, D - r*t, E - s*t);\\\\\n\\emptyLine\no75 : Ideal of S\\\\\n\\endOutput\n\\beginOutput\ni76 : phi = map(PP4, S, matrix\\{\\{0_PP4, 0_PP4, 0_PP4\\}\\} | vars PP4)\\\\\n\\emptyLine\no76 = map(PP4,S,\\{0, 0, 0, a, b, c, d, e\\})\\\\\n\\emptyLine\no76 : RingMap PP4 <--- S\\\\\n\\endOutput\n\\beginOutput\ni77 : surfaceA = phi ideal selectInSubring(1, gens gb I)\\\\\n\\emptyLine\n\\                                          2\\\\\no77 = ideal (c*d - a*e, b*d - c*e, a*b - c )\\\\\n\\emptyLine\no77 : Ideal of PP4\\\\\n\\endOutput\nNext, we determine the surface in part~$(b)$.  We construct the ideal\ndefining the blow-up of $\\bbbp^{2}$ \n\\beginOutput\ni78 : R = QQ[t, x, y, z, u, v, MonomialOrder => Eliminate 1];\\\\\n\\endOutput\n\\beginOutput\ni79 : blowUpIdeal = ideal selectInSubring(1, gens gb ideal(u-t*x, \\\\\n\\           v-t*y))\\\\\n\\emptyLine\no79 = ideal(y*u - x*v)\\\\\n\\emptyLine\no79 : Ideal of R\\\\\n\\endOutput\nand embed it in $\\bbbp^{2} \\times \\bbbp^{1}$.\n\\beginOutput\ni80 : PP2xPP1 = QQ[x, y, z, u, v];\\\\\n\\endOutput\n\\beginOutput\ni81 : embed = map(PP2xPP1, R, 0 | vars PP2xPP1);\\\\\n\\emptyLine\no81 : RingMap PP2xPP1 <--- R\\\\\n\\endOutput\n\\beginOutput\ni82 : blowUp = PP2xPP1 / embed(blowUpIdeal);\\\\\n\\endOutput\nWe then map this surface into $\\bbbp^{5}$ using the Segre\nembedding\\index{Segre embedding}.\n\\beginOutput\ni83 : PP5 = QQ[A .. F];\\\\\n\\endOutput\n\\beginOutput\ni84 : segre = map(blowUp, PP5, matrix\\{\\{x*u,y*u,z*u,x*v,y*v,z*v\\}\\});\\\\\n\\emptyLine\no84 : RingMap blowUp <--- PP5\\\\\n\\endOutput\n\\beginOutput\ni85 : ker segre\\\\\n\\emptyLine\n\\                                2\\\\\no85 = ideal (B - D, C*E - D*F, D  - A*E, C*D - A*F)\\\\\n\\emptyLine\no85 : Ideal of PP5\\\\\n\\endOutput\nNote that the image under the Segre map lies on a hyperplane in\n$\\bbbp^{5}$.  
To get the desired surface in $\\bbbp^{4}$, we project\n\\beginOutput\ni86 : projection = map(PP4, PP5, matrix\\{\\{a, c, d, c, b, e\\}\\})\\\\\n\\emptyLine\no86 = map(PP4,PP5,\\{a, c, d, c, b, e\\})\\\\\n\\emptyLine\no86 : RingMap PP4 <--- PP5\\\\\n\\endOutput\n\\beginOutput\ni87 : surfaceB = trim projection ker segre\\\\\n\\emptyLine\n\\                                          2\\\\\no87 = ideal (c*d - a*e, b*d - c*e, a*b - c )\\\\\n\\emptyLine\no87 : Ideal of PP4\\\\\n\\endOutput\nFinally, we compute the surface in part~$(c)$.\n\\beginOutput\ni88 : determinantal = minors(2, matrix\\{\\{a, c, d\\}, \\{b, d, e\\}\\})\\\\\n\\emptyLine\n\\                                          2\\\\\no88 = ideal (- b*c + a*d, - b*d + a*e, - d  + c*e)\\\\\n\\emptyLine\no88 : Ideal of PP4\\\\\n\\endOutput\n\\beginOutput\ni89 : sigma = map( PP4, PP4, matrix\\{\\{d, e, a, c, b\\}\\});\\\\\n\\emptyLine\no89 : RingMap PP4 <--- PP4\\\\\n\\endOutput\n\\beginOutput\ni90 : surfaceC = sigma determinantal\\\\\n\\emptyLine\n\\                                          2\\\\\no90 = ideal (c*d - a*e, b*d - c*e, a*b - c )\\\\\n\\emptyLine\no90 : Ideal of PP4\\\\\n\\endOutput\nBy incorporating a permutation of the variables into definition of\n{\\tt surfaceC}, we obtain the desired isomorphisms\n\\beginOutput\ni91 : surfaceA == surfaceB\\\\\n\\emptyLine\no91 = true\\\\\n\\endOutput\n\\beginOutput\ni92 : surfaceB == surfaceC\\\\\n\\emptyLine\no92 = true\\\\\n\\endOutput\n\\beginOutput\ni93 : clearAll\\\\\n\\endOutput\nwhich completes the solution.\\qed\n\\end{solution*}\n\nFor more information of the geometry of rational normal scrolls, see\nLecture~8 in Harris~\\cite{SC:H}.\n\n\n%%----------------------------------------------------------\n\\section{Fano Schemes}\n\nOur final example concerns the family of Fano schemes\\index{Fano\nscheme} associated to a flat family of quadrics.\nRecall that the $k$-th Fano scheme $F_{k}(X)$ of a\nscheme $X \\subseteq \\bbbp^{n}$ is the subscheme of\nthe Grassmannian parametrizing $k$-planes\ncontained in $X$.\n\n\\begin{problem*}[Exercise~IV-69 in \\cite{SC:EH}]\nConsider the one-parameter family\\index{one-parameter family} of\nquadrics tending to a double plane with equation\n\\[ \nQ = V(tx^{2}+ty^{2}+tz^{2}+w^{2}) \\subseteq \\bbbp^{3}_{\\bbbq[t]} =\n\\Proj\\big(\\bbbq[t][x,y,z,w]\\big) \\enspace .\n\\]\nWhat is the flat limit\\index{flat limit} of the Fano schemes\n$F_{1}(Q_{t})$?\n\\end{problem*}\n\n\\begin{solution*}\nWe first compute the ideal defining $F_{1}(Q_{t})$, the scheme\nparametrizing lines in $Q$.\n\\beginOutput\ni94 : PP3 = QQ[t, x, y, z, w];\\\\\n\\endOutput\n\\beginOutput\ni95 : Q = ideal( t*x^2+t*y^2+t*z^2+w^2 );\\\\\n\\emptyLine\no95 : Ideal of PP3\\\\\n\\endOutput\nTo parametrize a line in our projective space, we introduce\nindeterminates $u, v$ and $A, \\dotsc, H$.\n\\beginOutput\ni96 : R = QQ[t, u, v, A .. H];\\\\\n\\endOutput\nWe then make a map {\\tt phi} from {\\tt PP3} to {\\tt R} sending the\nvariables to the coordinates of the general point on a line.\n\\beginOutput\ni97 : phi = map(R, PP3, matrix\\{\\{t\\}\\} | \\\\\n\\              u*matrix\\{\\{A, B, C, D\\}\\} + v*matrix\\{\\{E, F, G, H\\}\\});\\\\\n\\emptyLine\no97 : RingMap R <--- PP3\\\\\n\\endOutput\n\\beginOutput\ni98 : imageFamily = phi Q;\\\\\n\\emptyLine\no98 : Ideal of R\\\\\n\\endOutput\nFor a line to belong to $Q$, the {\\tt imageFamily} must vanish\nidentically.  
In other words, $F_{1}(Q)$ is defined by the\ncoefficients of the generators of {\\tt imageFamily}.\n%% removing a final use of 'coefficients'\n%% coeffOfFamily = (coefficients ({1,2}, gens imageFamily))_1;\n\\beginOutput\ni99 : coeffOfFamily = contract(matrix\\{\\{u^2,u*v,v^2\\}\\}, gens imageFamily)\\\\\n\\emptyLine\no99 = | tA2+tB2+tC2+D2 2tAE+2tBF+2tCG+2DH tE2+tF2+tG2+H2 |\\\\\n\\emptyLine\n\\              1       3\\\\\no99 : Matrix R  <--- R\\\\\n\\endOutput\nSince we don't need the variables $u$ and $v$, we get rid of them.\n\\beginOutput\ni100 : S = QQ[t, A..H];\\\\\n\\endOutput\n\\beginOutput\ni101 : coeffOfFamily = substitute(coeffOfFamily, S);\\\\\n\\emptyLine\n\\               1       3\\\\\no101 : Matrix S  <--- S\\\\\n\\endOutput\n\\beginOutput\ni102 : Sbar = S / (ideal coeffOfFamily);\\\\\n\\endOutput\nNext, we move to the Grassmannian\\index{Grassmannian} $\\mathbb{G}(1,3)\n\\subset \\bbbp^{5}$.  Recall the homogeneous coordinates on\n$\\bbbp^{5}$ correspond to the $2 \\times 2$ minors of a $2 \\times 4$\nmatrix.  We obtain these minors using the {\\tt exteriorPower} function\nin \\Mtwo.\n\\beginOutput\ni103 : psi = matrix\\{\\{t\\}\\} | exteriorPower(2, \\\\\n\\                   matrix\\{\\{A, B, C, D\\}, \\{E, F, G, H\\}\\})\\\\\n\\emptyLine\no103 = | t -BE+AF -CE+AG -CF+BG -DE+AH -DF+BH -DG+CH |\\\\\n\\emptyLine\n\\                  1          7\\\\\no103 : Matrix Sbar  <--- Sbar\\\\\n\\endOutput\n\\beginOutput\ni104 : PP5 = QQ[t, a..f];\\\\\n\\endOutput\n\\beginOutput\ni105 : fanoOfFamily = trim ker map(Sbar, PP5, psi);\\\\\n\\emptyLine\no105 : Ideal of PP5\\\\\n\\endOutput\nNow, to answer the question, we determine the limit as $t$ tends to $0$.\n\\beginOutput\ni106 : zeroFibre = trim substitute(saturate(fanoOfFamily, t), \\{t=>0\\})\\\\\n\\emptyLine\n\\                         2   2                   2                     $\\cdot\\cdot\\cdot$\\\\\no106 = ideal (e*f, d*f, e , f , d*e, a*e + b*f, d , c*d - b*e + a*f, b $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no106 : Ideal of PP5\\\\\n\\endOutput\nLet's transpose the matrix of generators so all of its elements are visible\non the printed page.\n\\beginOutput\ni107 : transpose gens zeroFibre\\\\\n\\emptyLine\no107 = \\{-2\\} | ef       |\\\\\n\\       \\{-2\\} | df       |\\\\\n\\       \\{-2\\} | e2       |\\\\\n\\       \\{-2\\} | f2       |\\\\\n\\       \\{-2\\} | de       |\\\\\n\\       \\{-2\\} | ae+bf    |\\\\\n\\       \\{-2\\} | d2       |\\\\\n\\       \\{-2\\} | cd-be+af |\\\\\n\\       \\{-2\\} | bd+ce    |\\\\\n\\       \\{-2\\} | ad-cf    |\\\\\n\\       \\{-2\\} | a2+b2+c2 |\\\\\n\\emptyLine\n\\                 11         1\\\\\no107 : Matrix PP5   <--- PP5\\\\\n\\endOutput\nWe see that $F_{1}(Q_{0})$ is supported on the plane conic $\\langle d,\ne, f, a^{2}+b^{2}+c^{2} \\rangle$.  However, $F_{1}(Q_{0})$ is not\nreduced\\index{scheme!non-reduced}; it has\nmultiplicity\\index{multiplicity} two.  
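(One quick consistency check, not part of the original transcript, is to compare degrees in the same session: the input {\\tt degree zeroFibre == 2 * degree radical zeroFibre} should return {\\tt true}, since the double conic has twice the degree of its reduced support.)  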
On the other hand, the generic\nfiber is\n\\beginOutput\ni108 : oneFibre = trim substitute(saturate(fanoOfFamily, t), \\{t => 1\\})\\\\\n\\emptyLine\n\\                          2    2    2                                  $\\cdot\\cdot\\cdot$\\\\\no108 = ideal (a*e + b*f, d  + e  + f , c*d - b*e + a*f, b*d + c*e, a*d $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no108 : Ideal of PP5\\\\\n\\endOutput\n\\beginOutput\ni109 : oneFibre == intersect(ideal(c-d, b+e, a-f, d^2+e^2+f^2), \\\\\n\\            ideal(c+d, b-e, a+f, d^2+e^2+f^2))\\\\\n\\emptyLine\no109 = true\\\\\n\\endOutput\nHence, for $t \\neq 0$, $F_{1}(Q_{t})$ is the union of two conics lying\nin complementary planes and $F_{1}(Q_{0})$ is the double conic\nobtained when the two conics move together.\\qed\n\\end{solution*}\n\n% Local Variables:\n% mode: latex\n% mode: reftex\n% tex-main-file: \"chapter-wrapper.tex\"\n% reftex-keep-temporary-buffers: t\n% reftex-use-external-file-finders: t\n% reftex-external-file-finders: ((\"tex\" . \"make FILE=%f find-tex\") (\"bib\" . \"make FILE=%f find-bib\"))\n% End:\n", "meta": {"hexsha": "95cac99a6152c464239162e35f4c0bd984f4b1a3", "size": 36049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/ComputationsBook/chapters/schemes/chapter-m2.tex", "max_stars_repo_name": "d-torrance/Macaulay2-web-site", "max_stars_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-27T08:01:17.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-27T08:01:17.000Z", "max_issues_repo_path": "Book/ComputationsBook/chapters/schemes/chapter-m2.tex", "max_issues_repo_name": "d-torrance/Macaulay2-web-site", "max_issues_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2018-04-17T19:52:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-07T01:08:10.000Z", "max_forks_repo_path": "Book/ComputationsBook/chapters/schemes/chapter-m2.tex", "max_forks_repo_name": "d-torrance/Macaulay2-web-site", "max_forks_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-01-08T16:48:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-10T21:19:02.000Z", "avg_line_length": 31.9866903283, "max_line_length": 101, "alphanum_fraction": 0.6255929429, "num_tokens": 12913, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5769460349433579}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{II}\n\n\\def\\ntitle{Principles of Quantum Mechanics Summary}\n\\def\\nlecturer{D.\\ B.\\ Skinner}\n\n\\def\\nterm{Michaelmas}\n\\def\\nyear{2017}\n\n\\input{header}\n\n\\renewcommand*{\\H}{\\mathcal{H}}\n\n\\newcommand*{\\bk}{\\braket}\n\\newcommand*{\\bkt}{\\braketthree}\n\n%parity operator\n\\newcommand*{\\parity}{\\mathcal{P}}\n\\theoremstyle{definition}\n\\newtheorem*{postulate}{Postulate}\n\n\n%stretch table row height\n\\renewcommand{\\arraystretch}{1.5}\n\n\n\n\\usepackage{pdflscape}\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\section{Introduction}\n\n\\begin{definition}[Hilbert space]\n  A complete inner product space.\n\\end{definition}\n\n\\begin{definition}[Linear Operator]\n  A linear map \\(A: \\H\\to \\H\\).\n\\end{definition}\n\n\\begin{definition}[(Continuous) dual space]\n  \\(\\H^* = \\mathcal{B}(\\H,\\C)\\).\n\\end{definition}\n\nNote \\(\\H \\cong \\H^*\\).\n\n\\begin{notation}[Dirac notation]\n  Write an element of \\(\\H\\) as \\(|\\psi\\rangle\\), a \\emph{ket} and an element of \\(\\H^*\\) as \\(\\langle \\phi|\\), a \\emph{bra}.\n\\end{notation}\n\nGiven an orthonormal basis \\(\\{|e_1\\rangle,\\dots,|e_n\\rangle\\}\\) of \\(\\H\\), any vector can be expressed as\n\\[\n  |u\\rangle = \\sum_{i=1}^{n}u_i|e_i\\rangle.\n\\]\nThe inner product is then\n\\[\n  \\langle v| u\\rangle = \\sum_{i,j=1}^{n}\\conj v_j u_i \\langle e_j|e_i\\rangle = \\sum_{i=0}^{n}\\conj v_a u_a.\n\\]\nIf the basis elements are indexed by a continuous family then we simply change summation above to integration:\n\\[\n  |\\psi\\rangle = \\int_{ }^{ } \\psi(a) |a\\rangle da \n\\]\nwhere an basis element \\(|a\\rangle\\) is normalised using Dirac \\(\\delta\\)-function: \\(\\langle a'|a\\rangle = \\delta(a'-a)\\).\n\nFor example, if we use the \\emph{position basis} \\(\\{|x\\rangle\\}_{x\\in\\R}\\) then a state can be expressed as\n\\[\n  |\\psi\\rangle = \\int_{\\R}^{ } \\psi(x')|x'\\rangle dx'\n\\]\nwhere\n\\[\n  \\psi(x) = \\langle x| \\psi\\rangle \n\\]\nis the \\emph{position space wavefunction}.\n\n\\begin{definition}[Commutator]\n  Given two operators \\(A\\) and \\(B\\), the \\emph{commutator} \\([A,B] = AB- BA\\) measures the degree to which they are incompatible.\n\\end{definition}\n\n\\begin{definition}\n  \\(|\\psi\\rangle \\in \\H\\) is an \\emph{eigenstate} of an operator \\(A\\) if \\(A|\\psi\\rangle = a_\\psi|\\psi\\rangle\\). 
\\(a_\\psi\\in\\C\\) is the \\emph{eigenvalue}.\n\n  The set of all eigenvalues of \\(A\\) is called the \\emph{spectrum} of \\(A\\).\n\n  The number of linearly independent eigenstates having the same eigenvalue is call the \\emph{degeneracy} of the eigenvalue.\n\\end{definition}\n\n\\begin{definition}[Adjoint]\n  The adjoint \\(A^\\dag\\) of an operator \\(A\\) is such that\n  \\[\n    \\langle\\phi|A^\\dag|\\psi\\rangle = \\conj{\\langle\\psi|A|\\phi\\rangle} \n  \\] \n  for all \\(|\\phi\\rangle, |\\psi\\rangle \\in \\H\\).\n\\end{definition}\n\n\\begin{definition}[Self-adjoint operator]\n  If \\(Q = Q^\\dag\\) then \\(Q\\) is \\emph{self-adjoint}, or \\emph{Hermitian}.\n\\end{definition}\n\nFor a self-adjoint operator \\(Q\\), the set of eigenstates \\(\\{|n\\rangle\\}\\) with eigenvales \\(\\{q_n\\}\\) form an orthonormal basis so we can write\n\\[\n  Q = \\sum_{n}^{ }q_n |n\\rangle \\langle n|.\n\\]\nFrom this we can define functions of operators:\n\\[\n  f(Q) := \\sum_{n}^{ }f(q_n) |n\\rangle \\langle n|.\n\\]\n\n\\begin{postulate}[Postulates of Quantum Mechanics]\\leavevmode\n  \\begin{enumerate}\n  \\item Measurement: if a state is prepared to be in some general state\\\n    \\[\n      \\ket \\psi = \\sum_ac_a\\ket \\phi_a\n    \\]\n    then the \\emph{probability} that the measurement will yield an outcome corresponding to some state \\(\\ket \\phi_b\\) is\n    \\[\n      \\P(\\ket \\psi\\to \\ket\\phi_b) = \\frac{|\\bk{\\phi_b}{\\psi}|^2}{\\bk{\\psi}{\\psi}\\bk{\\phi_b}{\\phi_b}} = |c_b|^2 \\frac{\\bk{\\phi_b}{\\phi_b}}{\\bk{\\psi}{\\psi}}.\n    \\]\n    Note that sum of probabilities of all outcomes is \\(1\\):\n    \\[\n      \\sum_b\\P(\\ket\\psi \\to \\ket\\phi_b) = \\sum_b\\frac{|c_b|^2\\bk{\\phi_b}{\\phi_b}}{\\bk{\\psi}{\\psi}} = \\frac{\\bk{\\psi}{\\psi}}{\\bk{\\psi}{\\psi}} = 1.\n    \\]\n    In the case where the states are \\emph{normalised}, i.e. \\(\\bk{\\psi}{\\psi} = 1\\) then this simplifies to\n    \\[\n      \\P(\\ket\\psi \\to \\ket\\phi_b) = |\\bk{\\phi_b}{\\psi}|^2\n    \\]\n    where \\(\\bk{\\phi_b}{\\psi} \\in\\C\\) is the \\emph{probability amplitude}.\n\n    The states are represented by elements of the projective Hilbert space \\(\\mathbb{P}\\H\\), i.e. two vectors in \\(\\H\\) differing by a constant factor (known as phase factor) represent the same state.\n  \\item Observable: observable quantities are represented by self-adjoint operators. 
Upon measurement by an operator \\(A\\), a state is certain to return the definite value \\(a\\) if and only if it is an eigenstate of \\(A\\) with eigenvalue \\(a\\).\n  \\end{enumerate}\n\\end{postulate}\n\n\\begin{proposition}[Generalised uncertainty relation]\n  Given two Hermitian operators \\(A\\) and \\(B\\),\n  \\[\n    (\\Delta A)_\\psi^2(\\Delta B)_\\psi^2 \\geq \\frac{1}{4}|\\langle[A,B]\\rangle_\\psi|^2\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  For any Hermitian operators \\(A\\) and \\(B\\), for all \\(\\lambda \\in \\R\\), by positive-definiteness of the inner product we have\n  \\[\n    0 \\leq \\|(A+i\\lambda B)|\\psi\\rangle\\|^2 = \\langle A^2\\rangle_\\psi + i\\lambda\\langle[A,B]\\rangle_\\psi + \\lambda^2\\langle B^2\\rangle_\\psi.\n  \\]\n  The cross term is real because \\([A,B]\\) is anti-Hermitian, so \\(i\\langle[A,B]\\rangle_\\psi \\in \\R\\). Treat this as a quadratic in \\(\\lambda\\); since it is non-negative for all \\(\\lambda\\), its discriminant must be non-positive, which gives\n  \\[\n    \\langle A^2\\rangle_\\psi \\langle B^2\\rangle_\\psi \\geq \\frac{1}{4}|\\langle[A,B]\\rangle_\\psi|^2.\n  \\]\n  Finally use the fact that \\(A'=A-\\langle A\\rangle_\\psi\\) is also Hermitian and that \\(\\langle A'^2\\rangle_\\psi = (\\Delta A)_\\psi^2\\).\n\\end{proof}\n\n\\section{Transformations and Symmetries}\n\nA transformation is an operator \\(U:\\H\\to \\H, |\\psi\\rangle\\mapsto |\\psi'\\rangle = U|\\psi\\rangle\\).\n\n\\begin{proposition}\n  \\(U\\) as above is unitary, i.e. \\(U^{-1} = U^\\dag\\).\n\\end{proposition}\n\n\\begin{proof}\n  After the transformation we must still find the state somewhere, so the norm is preserved. Thus\n  \\[\n    1 = \\langle \\psi|\\psi\\rangle = \\langle \\psi'|\\psi' \\rangle = \\langle \\psi |U^\\dag U|\\psi \\rangle \\text{ for all } |\\psi\\rangle \\in \\H.\n  \\]\n\n  The polarisation identity then extends this diagonal statement to \\(\\langle \\phi|U^\\dag U|\\psi\\rangle = \\langle \\phi|\\psi\\rangle\\) for all \\(|\\phi\\rangle, |\\psi\\rangle \\in \\H\\), so \\(U^\\dag U = 1\\).\n\\end{proof}\n\nSuch transformations form a group, and \\(U\\) gives a homomorphism from this group to the group of unitary operators on \\(\\H\\), i.e.\\ \\(U: g\\mapsto U(g)\\) with \\(U(g_2)\\compose U(g_1) = U(g_2\\cdot g_1)\\).\n\n\\begin{definition}[Generator]\n  Given a \\emph{continuous} transformation \\(U(\\theta)\\), write\n  \\[\n    U(\\delta\\theta) = 1 - i \\delta\\theta T + \\bigO(\\delta\\theta^2)\\footnotemark\n  \\]\n  \\footnotetext{The \\(-i\\) is a historical convention.}\n\\end{definition}\n\nBy unitarity of \\(U\\), \\(T=T^\\dag\\). A transformation can be seen as the limit of\n\\[\n  U(\\theta) = \\lim_{N\\to \\infty}\\Big( 1- i \\frac{\\theta}{N}T \\Big)^N = e^{-i\\theta T}.\n\\]\nIt follows that\n\\[\n  i \\frac{\\partial |\\psi\\rangle}{\\partial \\theta} = T |\\psi\\rangle.\n\\]\n\nInstead of viewing a transformation as ``changing'' the state, we can also see it as ``changing'' the operator by a \\emph{similarity transform}:\n\\[\n  A \\mapsto A' = U^\\dag A U = U^{-1} A U.\n\\]\nNote that\n\\[\n  [A',B'] = U^{-1}[A,B]U\n\\]\nand for a continuous transformation,\n\\[\n  U^{-1}(\\delta\\theta) A U(\\delta\\theta) = A + i\\delta\\theta [T,A] + \\bigO(\\delta\\theta^2)\n\\]\nso the rate of change of a state is given by the generator, while the rate of change of an operator is given by the commutator of the generator with the operator.\n\n\\subsection{Continuous Transformations}\n\nTranslation and rotation are summarised in Table~\\ref{tab:continuous}. Here is some additional discussion.\n\n\\subsubsection{Translation}\n\nSuppose \\(\\ket{\\V x}\\) is a position eigenstate with eigenvalue \\(\\V x\\), i.e.\n\\[\n  \\V X\\ket{\\V x} = \\V x\\ket{\\V x},\n\\]\nrepresenting a particle definitely located at \\(\\V x\\). 
Then\n\\begin{align*}\n  \\V X U(\\V a)\\ket{\\V x} &= ([\\V X,U(\\V a)]+U(\\V a) \\V X)\\ket{\\V x} \\\\\n                         &= (U(\\V a)\\V a + U(\\V a)\\V X)\\ket{\\V x} \\\\\n                         &= (\\V x+ \\V a)U(\\V a)\\ket{\\V x}\n\\end{align*}\nThus \\(U(\\V a)\\ket{\\V x}\\) is an eigenstate of \\(\\V X\\) with eigenvalue \\(\\V x+ \\V a\\). Write\n\\[\n  U(\\V a)\\ket{\\V x} = c\\ket{\\V x+ \\V a}\n\\]\nsince they represent the same state. Take inner product with \\(\\bra{\\V x'}\\),\n\\begin{align*}\n  \\bk{\\V x'}{\\V x} &= c \\delta^3(\\V x'-\\V x-\\V a) = c \\bk{\\V x'}{\\V x+ \\V a} = \\bkt{\\V x'}{U(\\V a)}{\\V x} \\\\\n                  &= \\big( U(\\V a)^{-1}\\ket{\\V x'} \\big)^\\dag \\ket{\\V x} = \\Big( \\frac{1}{c}\\ket{\\V x'-\\V a} \\Big)^\\dag \\ket{\\V x} = \\frac{1}{\\conj c}\\delta^3(\\V x'- \\V a - \\V x)\n\\end{align*}\nso \\(|c|^2=1\\).\n\nSimilarly for a state in position basis\n\\[\n  \\ket{\\psi} = \\int_{\\R^3}^{ } \\bk{\\V x}{\\psi}\\ket{\\V x} d^3x = \\int_{\\R^3}^{ } \\psi(\\V x)\\ket{\\V x} d^3x,\n\\]\nthe wavefunction of the translated space \\(\\psi'(\\V x)\\) is\n\n\\begin{align*}\n  \\psi'(\\V x) &= \\bkt{\\V x}{U(\\V a)}{\\psi} = \\bkt*{\\V x}{U(\\V a) \\int_{\\R^3}^{} \\psi(\\V x') d^3x'}{\\V x'} \\\\\n              &= \\big(U^{-1}(\\V a)\\ket{\\V x} \\big)^\\dag \\int_{\\R^3}^{ } \\psi(\\V x') \\ket{\\V x} d^3 x' \\\\\n              &= \\bkt*{\\V x- \\V a}{\\int_{\\R^3}^{ } \\psi(\\V x') d^3x}{\\V x'} \\\\\n              &= \\int_{\\R^3}^{ } \\psi(\\V x')\\bk{\\V x - \\V a}{\\V x'} dx' \\\\\n              &= \\psi(\\V x- \\V a)\n\\end{align*}\nwhere we used the unitarity \\(U^{-1} = U^\\dag\\) in \\(U^\\dag(\\V a)\\ket{\\V x} = \\ket{\\V x- \\V a}\\).\n\nIn particular, for an infinitesimal translation \\(\\delta\\V a\\), we have\n\\[\n  \\psi'(\\V x) - \\psi(\\V x) = -\\delta\\V a\\cdot \\nabla \\psi(\\V x)\n\\]\nas well as\n\\[\n  \\psi'(\\V x) - \\psi(\\V x) = \\bkt*{\\V x}{-\\frac{i}{\\hbar}\\delta\\V \\alpha\\cdot \\V P}{\\psi} = -\\frac{i}{\\hbar}\\delta\\V a\\cdot \\bkt{\\V x}{\\V P}{\\psi}\n\\]\nso the momentum operator \\(\\V P\\) acts in the position representation as\n\\[\n  \\bkt{\\V x}{\\V P}{\\psi} = -i\\hbar\\nabla\\psi(\\V x).\n\\]\n\nFor example, for a state \\(\\ket{\\V p}\\) satisfying \\(\\V P \\ket{\\V p} = \\V p \\ket{\\V p}\\),\n\\[\n  \\psi_{\\V p}(\\V x- \\V a) = \\bk{\\V x- \\V a}{\\V p} = \\bkt{\\V x}{U(\\V a)}{\\V p} = \\bkt{\\V x}{e^{-i\\V a\\cdot\\V P/\\hbar}}{\\V p} =  e^{-i\\V a\\cdot \\V p/\\hbar}\\bk{\\V x}{\\V p} = e^{-i\\V a\\cdot \\V p/\\hbar}\\psi_{\\V p}(\\V x).\n\\]\nThus the position wavefunction for momentum eigenstates is\n\\[\n  \\psi_{\\V p}(\\V x) = \\frac{1}{(2\\pi \\hbar)^{3/2}} e^{i\\V p\\cdot \\V x/\\hbar}.\n\\]\n\n\\subsubsection{Rotation}\n\nA anticlockwise rotation around the axis \\(\\hat \\V{\\alpha}\\) by an amount \\(|\\V \\alpha\\) is a linear transformation\n\\begin{align*}\n  \\V R(\\V \\alpha): \\R^3 &\\to \\R^3 \\\\\n  \\V v &\\mapsto \\V v' = \\V R(\\V \\alpha)\\V v\n\\end{align*}\nthat obeys\n\\[\n  \\norm{\\V v} = \\norm{\\V v'}, \\, \\det{\\V R(\\V \\alpha)} = 1.\n\\]\n\nFor an infinitesimal rotation,\n\\[\n  \\V v' = \\V v + \\delta\\V \\alpha\\times \\V v + \\bigO(\\delta\\V\\alpha^2).\n\\]\n\n\\subsubsection{Time translation}\n\nSince \\(U(t)\\) represents time translation, if a particle is in some state \\(\\ket{\\psi(0)}\\) before translation, translating forward to time \\(t\\) will put the particle in state\n\\[\n  \\ket{\\psi(t)} = U(t) \\ket{\\psi(0)} = 
e^{-iHt/\\hbar}\\ket{\\psi(0)}.\n\\]\nDifferentiating, we get\n\\[\n  i\\hbar \\frac{\\partial}{\\partial t} \\ket{\\psi(t)} = H\\ket{\\psi(t)},\n\\]\nthe \\emph{Time Dependent Schr\u00f6dinger Equation}\n\n\\subsection{Discrete Transformation}\n\n\\subsubsection{Parity}\n\nParity transformation is an example of discrete transformation, acting as\n\\begin{align*}\n  \\parity : \\V x &\\mapsto - \\V x \\\\\n  t &\\mapsto t\n\\end{align*}\n\nIt satisfies the \\emph{anti-commutator} relation\n\\pagestyle{empty}\n\\begin{landscape}\n  \n\\begin{table}[htbp]\n  \\centering\n  \\begin{tabular}{|c|p{8cm}|p{8cm}|}\n    \\hline\n    & Translation & Rotation \\\\ \\hline\n    Governing equation & \\(U^{-1}(\\V a)\\V X U(\\V a) = \\V X + \\V a\\) where \\(\\V X\\) is position operator & \\(U^{-1}(\\V \\alpha)\\V V U(\\V\\alpha) = \\V R(\\V\\alpha)\\V V\\) for any vector \\(\\V V\\) \\\\ \\hline\n    Generator & \\(U(\\delta\\V a) = 1- \\frac{i}{\\hbar}\\delta\\V a \\cdot \\V P + \\bigO(\\delta a^2)\\) & \\(U(\\delta\\V\\alpha) = 1- \\frac{i}{\\hbar}\\delta\\V \\alpha \\cdot \\V J + \\bigO(\\delta \\alpha^2)\\) \\\\ \\hline\n    Formula & \\(U(\\V a) = \\exp(-\\frac{i}{\\hbar}\\V a\\cdot \\V P)\\) & \\(U(\\V \\alpha) = \\exp(-\\frac{i}{\\hbar}\\V \\alpha\\cdot \\V J)\\) \\\\ \\hline\n    Commutator relation & \\(\\begin{array}{cc} & \\frac{i}{\\hbar}[\\delta\\V a\\cdot\\V P, \\V X] = \\delta\\V a \\\\ \\Rightarrow & [X_i,P_j] = i\\hbar\\delta_{i,j} \\end{array}\\) & \\(\\begin{array}{cc} & \\frac{i}{\\hbar}[\\delta\\V\\alpha\\cdot \\V J, \\V V] = \\delta\\V\\alpha\\times \\V V \\\\ \\Rightarrow & [J_i,V_j] = i\\hbar \\epsilon_{ijk}V_k \\end{array}\\) \\\\ \\hline\n    Transformation group & \\(\\R^3\\) & \\(SO(3)\\) \\\\ \\hline\n    & abelian & non-abelian \\\\ \\hline\n    & \\(U(\\V a)U(\\V b) = U(\\V b)U(\\V a)\\) & \\([\\V R(\\delta\\V \\alpha), \\V R(\\delta\\V \\beta)]\\V x = \\V R(\\delta\\V \\alpha \\times \\delta\\V \\beta)\\V x\\) \\\\\n    & & applying homomorphism, \\([U(\\delta\\V\\alpha), U(\\delta\\V\\beta)] = U(\\delta\\V\\alpha \\times \\delta\\V\\beta)\\) \\\\\n    Property derived from abelianess & \\([P_i,P_j] = 0\\) & \\([J_i,J_j] = i\\hbar \\epsilon_{ijk}V_k\\) (can also be derived by viewing \\(\\V J\\) as a vector) \\\\ \\hline\n  \\end{tabular}\n  \\caption{Translation and rotation}\n  \\label{tab:continuous}\n\\end{table}\n\n\n\\begin{table}[htbp]\n  \\centering\n  \\begin{tabular}{|c|p{8cm}|p{8cm}|}\n    \\hline\n    & Parity & Time translation \\\\\n    Governing equation & \\(\\parity^{-1}\\V X \\parity = -\\V X\\) & \\(\\ket{\\psi(t)} = U(t) \\ket{\\psi(0)}\\) \\\\ \\hline\n    Generator & None (discrete) & \\(U(\\delta t) = 1- \\frac{i}{\\hbar}H\\delta t + \\bigO(\\delta t^2\\) \\\\ \\hline\n    Formula & & \\(U(t) = \\exp(-\\frac{i}{h}Ht)\\) \\\\ \\hline\n    Commutator relation & \\(\\{, \\parity, \\V X \\} = 0\\)& \\\\ \\hline\n    Transformation group & \\(\\Z/2\\Z\\) & \\\\ \\hline\n    & abelian & abelian \\\\ \\hline\n  \\end{tabular}\n  \\caption{Parity and time translation}\n  \\label{tab:}\n\\end{table}\n\\end{landscape}\n\\pagestyle{plain}\n\n\\end{document}\n", "meta": {"hexsha": "9b2998a2604088251bbd10f74b51e3a329c8174f", "size": 13441, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "II/pqm_summary.tex", "max_stars_repo_name": "zulysi/tripos", "max_stars_repo_head_hexsha": "5fc4f6a14ac30c3e4fb0e49553c5b5e9ee598690", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-27T11:16:41.000Z", 
"max_stars_repo_stars_event_max_datetime": "2020-07-27T11:16:41.000Z", "max_issues_repo_path": "II/pqm_summary.tex", "max_issues_repo_name": "b-mehta/tripos", "max_issues_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "II/pqm_summary.tex", "max_forks_repo_name": "b-mehta/tripos", "max_forks_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.1846590909, "max_line_length": 343, "alphanum_fraction": 0.6074696823, "num_tokens": 4977, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5769460304239682}}
{"text": "%---------------------------------------------------\n% Integration\n%---------------------------------------------------\n\\chapter[Integration]{Integration $\\,\\int$}\nIn mathematics we are often given a function $f$ and asked to find a function $F$ whose derivative is $f$. $\\;F$ in this situation is called the integral of $f$. It is usual to develop a list of integrals by differentiating a range of functions then using those to work backwards. The terms integration and anti-differentiation are synonymous. Generally we will use the term integration, however, both are acceptable.\n\nThe process of `reversing' or `undoing' a derivative has its own symbol, the integrand: $\\displaystyle\\int$\n\\begin{tcolorbox}\n\t\\[\\int f'(x) dx = f(x)+C\\]\n\\end{tcolorbox}\n\n%---------------------------------------------------\n% Standard Integrals\n%---------------------------------------------------\n\\section{Standard Integrals}\nIt is not the intention here to list all of the rules that are required however at this stage let us explore the Power Rule to establish a rule for integrating an expression of the form $y =x^{n}$ \\\\\n\n\\begin{center}\n\\begin{tabular}[c]{p{1.5cm}p{1.5cm}}\\toprule\nfunction $f(x)$  & derivative $f'(x)$  \\\\\n\\midrule\n$x$  & $1$  \\\\\n$x^{2}$  & $2 x$  \\\\\n$x^{3}$  & $3 x^{2}$  \\\\\n$x^{4}$  & $4 x^{3}$  \\\\\n\\bottomrule\n\\end{tabular}\\ or\n\\\n\\begin{tabular}[c]{ll}\\toprule\n $f$  & $f'$  \\\\\n &\\\\\n\\midrule\n$x$  & $1$  \\\\\n$\\frac{1}{2} x^{2}$  & $x$  \\\\\n$\\frac{1}{3} x^{3}$  & $x^{2}$  \\\\\n$\\frac{1}{4} x^{4}$  & $x^{3}$  \\\\\n\\bottomrule\n\\end{tabular}\\ now, in reverse \\\n\\begin{tabular}[c]{p{2cm}p{2.5cm}}\\toprule\nderivative $f'(x)$  &  integral $\\int f'(x)=f(x)$  \\\\\\midrule\n$1$  & $x +C$  \\\\\n$x$  & $\\frac{1}{2} x^{2} +C$  \\\\\n$x^{2}$  & $\\frac{1}{3} x^{3} +C$  \\\\\n$x^{3}$  & $\\frac{1}{4} x^{4} +C$  \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\bigskip This establishes the pattern and if you think about the rule for differentiating $y =x^{n}$ you can soon establish the rule for integrating $x^{n}$.\n\\begin{tcolorbox}\n\tThe \\textbf{Power Rule} for integrating polynomials\n\\[\\int x^n dx = \\frac{x^{n +1}}{n +1} +c, \\text{ where }n \\neq  -1\n\\]\n\\end{tcolorbox}\n\n\\begin{equation*}\\text{ If }f (x) =x^{ -1}\\text{ then the integral of }f\\text{ is }\\ln  \\left \\vert x\\right \\vert  +c\\text{ or }\\ln  \\left \\vert k x\\right \\vert\n\\end{equation*}\nAll of the differentiation rules we have met so far lead to integration rules. 
For instance we can establish standards for $\\sin  x$, $\\cos  x$, and $\\sec ^{2} x\\text{.}$ The standard integrals are summarized in the table below.\n\\renewcommand\\arraystretch{1.2}\n\\begin{center}\n\t\\begin{tabular}{llcl}\n\t\t\\toprule\n\t\tFunction&Integral&&Notes\\\\\n\t\t$f(x)$  &  $\\int f(x)dx$ \\\\ \\midrule\n\t\t$1$&$x+C$&&constant\\\\\n\t\t$A$&$Ax+C$&&$A$ is constant\\\\ \\midrule\n\t\t$x^n, n \\neq -1$ & $\\displaystyle\\frac{x^{n+1}}{n+1}+C$&&power rule general form\\\\ \n\t\t$e^x$ & $e^x+C$&&exponential\\\\ \n\t\t$\\frac{1}{x}$ & $\\ln|x|+C$&&special case: $x^{-1}$\\\\\n\t\t$\\ln x$&$x\\ln |x|-x+C$&&\\\\\\midrule\n\t\t$\\sin(x)$ & $-\\cos(x)+C$&& trigonometric \\\\ \n\t\t$\\cos(x)$ & $\\sin(x)+C$\\\\ \n\t\t$\\tan (x)$&$\\ln |\\sec x|+C$\\\\\n\t\t$\\sec^2(x)$ & $\\tan(x)+C$ \\\\ \\bottomrule\n\t\\end{tabular}\n\\end{center}\n\nTo allow us to combine these integrals and thus extend the range of questions we can tackle we use two important rules for integrals \n\\begin{tcolorbox}\n\\textbf{Sum Rule} The integral of the sum of two functions is the sum of the integrals of the functions.\n\\[\\int \\left [f (x) +g (x)\\right ]\\; d x =\\int f (x)\\; d x +\\int g (x)\\; d x\\]\n\\end{tcolorbox}\nThis is easily extended to the sum or difference of a number of functions. \n\\begin{tcolorbox}\n\\textbf{Constant Multiple Rule} The integral of a constant times a function is the constant times the integral of the function.\n\\[\\int c f (x)\\; d x =c \\int f (x)\\; d x\n\\]\\end{tcolorbox}\n\n\\subsection*{Indefinite Integrals}\nAlthough $\\int f (x) d x$ looks very similar to $\\int _{a}^{b}f (x)\\; d x$ they are quite different and must not be confused or used in place of each other. $\\int f (x)\\; d x$ is a function of $x$ or a family of functions of $x$ and $\\int _{a}^{b}f (x)\\; d x$ is a number. They are connected of course, provided $f (x)$ is a continuous function of $x$ on $\\left [a ,b\\right ]$. In this case the Evaluation Theorem gives the connection between them.\n\\begin{equation*}\\int _{a}^{b}f (x)\\; d x =\\left .\\int f (x)\\; d x\\right ]_{a}^{b}\n\\end{equation*}\n\nThe indefinite integral represents either a particular integral or a family of integrals. These will use a constant $C$ where $C$ takes a different value for each member of the family. $C$ is called the \\emph{constant of integration}. \n\\clearpage\n\\example Integrate the following functions; find $\\displaystyle\\int f(x)$.\n\\begin{tasks}(2)\n\t\\task $f(x) =3x^{2}$ \\medskip\\\\\n\t\\solution Applying the power rule:\\\\\n\\begin{align*}\\int 3x^{2}\\,dx&=\\frac{3x^{2+1}}{2+1}+C\\\\\n\t&=x^3+C\\end{align*}\n\\task $f (x) =7$ \\medskip \\\\\n\\solution Here we are integrating a constant:\\\\\n\\[\\int 7\\,dx=7x+C\\]\n\\task $f(x) =x^{\\frac{2}{3}}$ \\medskip\\\\\n\\solution The power rule still applies to fractional indices:\\\\\n\\begin{align*}\\int x^{\\frac{2}{3}}\\,dx&=\\frac{x^{\\frac{2}{3}+1}}{\\frac{2}{3}+1}+C\\\\\n&=\\frac{3 x^{5/3}}{5}+C \\end{align*}\n\\task $f (x) =\\frac{1}{2 \\sqrt{x}} +\\frac{1}{\\sqrt{2}}$ \\medskip\\\\\n\\solution Here we need to combine the sum rule and the power rule:\\\\\n\\begin{align*}\n\\int f(x)\\,dx &= \\int(\\frac{1}{2 \\sqrt{x}}) \\,dx+\\int(\\frac{1}{\\sqrt{2}}) \\,dx\\\\\n&=\\frac{1}{2}\\int x^{-\\frac{1}{2}}+\\frac{1}{\\sqrt{2}}x+C\\\\\n&=\\sqrt{x}+\\frac{x}{\\sqrt{2}}+C\n\\end{align*}\n\\end{tasks}\nThese four examples are all \\textit{indefinite} integrals and have an unknown constant in the answer. 
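Each of these answers can be checked by differentiating it. For instance, in the last example\n\\[\\frac{d}{dx}\\left(\\sqrt{x}+\\frac{x}{\\sqrt{2}}+C\\right)=\\frac{1}{2\\sqrt{x}}+\\frac{1}{\\sqrt{2}},\\]\nwhich is the original integrand.\n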
The following section will introduce definite integrals.\\vspace{1cm}\\\\\n\\rule{6.8cm}{0.5pt}\\\\\n\\example Find $f (x)$ given $f^{ \\prime  \\prime } (x) =6$.\\medskip\\\\\n\\solution Here we have a \\textit{second} derivative, indicated by the double-prime symbol, $f''$. Knowing that $\\int f'(x)\\,dx=f(x)$ we can safely assume that \n\\begin{tcolorbox}\n\t\\[\\int f''(x) \\,dx = f'(x).\\]\n\\end{tcolorbox}\nSo $f'(x)=\\int f''(x)\\,dx = \\int 6 \\,dx = 6x+C$. Now we need to integrate a second time to get $f$.\n\\[\\int (6x+C)\\,dx=3x^2+Cx+D\\]\nWe end up with two unknown values, $C$ and $D$, as opposed to just a single value.\n\nThe previous two examples are equations involving derivatives. Any equation involving derivatives of a function is called a \\emph{differential equation}. We will look into this subject in the next chapter. \n\n\\clearpage\n%---------------------------------------------------\n% Area\n%---------------------------------------------------\n\\section{Area}\\begin{multicols}{2}\nIn this section we attempt to find the area under a curve. That is, the area that lies between a curve and the $x$-axis from $x =a$ to $x =b$. The area is bounded by the $x$-axis, a continuous curve $y =f (x)$, and the two vertical lines $x =a$ and $x =b$. This is shown in the figure with the area shaded in. Note the area stops at the axis. Area as calculated by integration is always in reference to the axis.\\\\\n\\includegraphics[width=8cm]{area1}\n\\end{multicols}\n\nPreviously, when we wanted to find the slope of a tangent line we found the slope of a secant line and applied the limiting process: $\\Lim{h\\to 0}$. A similar procedure will be used to find the area. We first approximate the area with rectangular strips, then we take the limit of the areas of these rectangular strips by making the strips narrower and narrower and thus the number of strips between $x =a$ and $x =b$ greater and greater. \n\n\\subsection*{Area with Riemann Sums}\n\\example Given the parabola $y =x^{2}$, use rectangles to find the area under this curve between $0$ and $1$. \n\n\\solution \n\\begin{multicols}{2}\n\\includegraphics[width=8cm]{Area4}\\label{fig:riemann}\nConsider $4$ strips by constructing vertical lines at $x =\\frac{1}{4}, \\frac{1}{2}, \\frac{3}{4}, \\text{and }1$ as shown in the diagram. Rectangles are constructed using the right-hand boundary, and we know from inspection this will be larger than the actual area.\n\\end{multicols}\n\\begin{align*}\n\\text{Right sum} &  = \\frac{1}{4} \\genfrac{(}{)}{}{}{1}{4}^{2} +\\frac{1}{4} \\genfrac{(}{)}{}{}{1}{2}^{2} +\\frac{1}{4} \\genfrac{(}{)}{}{}{3}{4}^{2} +\\frac{1}{4} \\left (1\\right )^{2} \\\\\n &  = \\frac{1}{64} +\\frac{1}{16} +\\frac{9}{64} +\\frac{1}{4} = \\frac{15}{32} =0.46875\\end{align*}\n\nRepeating this process with a larger number of rectangles will improve the accuracy of the method. Using a spreadsheet like Excel shows a convergence on a value of $\\frac{1}{3}$ as $n$ rectangles increase. 
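In fact, for this parabola the right sum has a closed form. Using $\\sum_{i=1}^{n}i^{2}=\\frac{n(n+1)(2n+1)}{6}$,\n\\[\\text{Right sum}=\\sum_{i=1}^{n}\\frac{1}{n}\\left(\\frac{i}{n}\\right)^{2}=\\frac{(n+1)(2n+1)}{6n^{2}}\\rightarrow \\frac{1}{3}\\text{ as }n\\rightarrow \\infty.\\]\n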
The sum using the left-hand boundary is included for comparison.\n\\begin{center}\n\t\\begin{tabular}[c]{ccc}\\toprule\n\t\t$n$  & Left sum  & Right sum  \\\\\\midrule\n\t\t10\t\t& 0.2850000  & 0.3850000  \\\\\\midrule\n\t\t20\t\t& 0.3087500  & 0.3587500  \\\\\\midrule\n\t\t30\t\t& 0.3168519  & 0.3501852  \\\\\\midrule\n\t\t100\t\t& 0.3283500  & 0.3383500  \\\\\\midrule\n\t\t1000\t& 0.3328335  & 0.3338335  \\\\\\midrule\n\t\t$\\infty$& $\\frac{1}{3}$&$\\frac{1}{3}$\\\\\\bottomrule\n\\end{tabular}\\end{center}\nIt can be seen that a very accurate approximation to the area can be obtained as the number of rectangles increases. It should be clear that as $n \\rightarrow \\infty $ both the left sum and the right sum approach the area under the curve, so we write\n\\begin{equation*}A =\\Lim{n \\to \\infty}\\text{ Left Sum } =\\Lim{n \\to \\infty}\\text{ Right Sum }\n\\end{equation*}\nThis process can be generalised by selecting any height within each rectangular strip\nand finding the area of each strip using this height. Let there be $n$ strips and consider the $i^{\\text{th}}$ strip. Select a value of $x$ in the $i^{\\text{th}}$ strip and call it $x_{i}$. The height of this rectangle will be $f (x_{i})$. Consider the situation described above where the area is bounded by the $x$-axis, a continuous curve $y =f (x)$, and the two vertical lines $x =a$ and $x =b$. \n\nWith $n$ rectangles the length of the base of each rectangle is $ \\Delta x =\\frac{b -a}{n}$. The area of the $i^{\\text{th}}$ rectangle is $f (x_{i})  \\Delta x$.\n\nThe sum of all the rectangles is\n\\begin{equation*}\\underset{i =1}{\\sum ^{n}} f (x_{i})  \\Delta x =f \\left (x_{1}\\right )  \\Delta x +f \\left (x_{2}\\right )  \\Delta x +\\ldots  +f \\left (x_{n}\\right )  \\Delta x\n\\end{equation*}\n\nAnd\n\\begin{equation*}A =\\underset{n \\rightarrow \\infty }{\\lim }\\left [\\underset{i =1}{\\sum ^{n}} f (x_{i})  \\Delta x\\right ]\n\\end{equation*}\n\nIf $f$ is a continuous function defined on the interval $\\left [a ,b\\right ]$ then as $n \\rightarrow \\infty $ the number represented by $\\;\\underset{i =1}{\\sum ^{n}} f (x_{i})  \\Delta x$ approaches $A$, the area under the curve $y =f (x)$. This number is called the definite integral of $f$ from $a$ to $b$ and is denoted by $\\int _{a}^{b}f$ or $\\int _{a}^{b}f (x) d x$:\n\\begin{equation*}\\int _{a}^{b}f (x) d x =\\underset{n \\rightarrow \\infty }{\\lim }\\left [\\underset{i =1}{\\sum ^{n}} f (x_{i})  \\Delta x\\right ]\n\\end{equation*}\n\nThis process is called a \\emph{Riemann sum} after the German mathematician Bernhard Riemann (1826-1866) who defined the integral in this way. The symbol $\\int $ was introduced by Leibniz and is called the \\emph{integral sign}. \n\n\\subsection*{Definite Integrals}\nThe method of computing Riemann sums is often long, and achieving an accurate enough result generally requires a computer. Both Isaac Newton and Gottfried Leibniz discovered a much simpler way based on the integral. This discovery is called \\emph{The Evaluation Theorem}. \n\n\\begin{tcolorbox}\n\\textbf{Evaluating Definite Integrals}\\\\Given $F$ is an integral of $f$, i.e.\\ $F^{ \\prime } =f$, and provided $f$ is continuous on the interval $[a ,b]$, then \n\\[\\int _{a}^{b}f (x) d x =F (b) -F (a)\\]\n\\end{tcolorbox}\n\nThis is an amazing result in view of the fact that it replaces such a complex procedure as finding\nRiemann sums over greater and greater numbers of elementary rectangles. \n\n\\example Evaluate $\\int _{0}^{1}x^{2} d x$. 
\n\n\\solution Because we know a particular integral of $f (x) =x^{2}$ is $F (x) =\\frac{1}{3} x^{3}$ We have from the Evaluation Theorem\n\\begin{equation*}\\int _{0}^{1}x^{2} d x =F (1) -F (0) =\\frac{1}{3} \\cdot 1^{3} -\\frac{1}{3} \\cdot 0^{3} =\\frac{1}{3}\n\\end{equation*}\nLooking back at the calculation of left sum and right sum above we can now see that the actual area\nthat we were endeavouring to calculate was in fact $1/3$ or $0. \\dot{3}$. \n\nThese are some of the different notations for using the Evaluation Theorem \n\\begin{equation*}F (b) -F (a) =F (x)\\vert _{a}^{b} =\\left [F (x)\\right ]_{a}^{b} =\\left .F (x)\\right ]_{a}^{b}\n\\end{equation*}\n\n\\subsection*{Area with Definite Integrals}\nAreas above the $x$-axis have \\emph{positive} definite integrals and areas below the $x$-axis have \\emph{negative} definite integrals. If the context of the question is to evaluate a definite integral then use the definition. If the question is about \\textbf{area} you must find the parts of the question that have areas above the $x$-axis and those parts that have areas below the $x$-axis and evaluate them separately. The definite integral calculates the result as the net sum of the positive and negative areas. To find the total area take the absolute value of the individual parts.\n\n\\example Find the area under the curve $y =x^{3} -x$ between $x = -1$ and $x =1$\n\\begin{SCfigure}[1][h!]\n\t\\includegraphics[width=0.6\\textwidth]{area3}\n\t\\caption*{Figure: A cubic showing how area `under' the curve is evaluated. The area for $-1\\leq x\\leq 0$ is positive (above the axis), and the area for $0\\leq x\\leq 1$ is negative.}\n\\end{SCfigure}\n\n\\solution This can be factored to give $y =x (x +1) (x -1)$. This cubic curve crosses the x-axis at $ -1$, $0$, and $1$. Here, this area must be found in two parts.\n\n\\begin{tasks}[column-sep=3ex,label-offset=-5em](3)\n\t\\task[]This is expected to be negative because it is below the axis:\\\\ \n\t$\\displaystyle\\int _{0}^{1}(x^{3} -x) dx$\\\\\n\t$\\displaystyle=\\frac{x^4}{4}-\\frac{x^2}{2}\\Big\\vert_{0}^1$\\\\\n\t$\\displaystyle=[\\frac{1}{4}-\\frac{1}{2}]-0$\\\\\n\t$\\displaystyle=-\\frac{1}{4}$\n\t\\task[]This is expected to be positive:\\\\ \n\t$\\displaystyle\\int _{ -1}^{0}(x^{3} -x) d x$ \\\\\n\t$\\displaystyle=\\frac{x^4}{4}-\\frac{x^2}{2}\\Big\\vert_{-1}^0$\\\\\n\t$\\displaystyle=0-[\\frac{1}{4}-\\frac{1}{2}]$\\\\\n\t$\\displaystyle=+\\frac{1}{4}$\n\t\\task[]Therefore the total area is:\\medskip\\\\\n\t $\\displaystyle\\Big\\vert-\\frac{1}{4}\\Big\\vert+\\frac{1}{4}=\\frac{1}{2}\\text{ units}^2$\n\\end{tasks}\nWe will compare with a single integral from $-1$ to $1$.\t\n\\[\\int _{ -1}^{1}(x^{3} -x) d x=\\frac{x^4}{4}-\\frac{x^2}{2}\\vert_{-1}^1 =-\\frac{1}{4}-(-\\frac{1}{4})=0\\]\nArea must be non-negative, and so this result is nonsensical given the context.\\\\  \n\\rule{6.8cm}{0.5pt}\\\\\n\n\\begin{multicols}{2}\n\\example The energy, or electrical charge, that a capacitor can discharge is found by taking the integral of the voltage-time function. This can neatly be represented as the area under the voltage-time curve. Find the total discharge from the capacitor after 5 seconds. 
The units for charge are coulombs.\\medskip\\\\\n\\solution Integrate the $V(t)$ function to find the area:\\\\\n\t\\begin{align*}\n\\mathrm{area}&=\\int_{a}^{b}V(t)dt\\\\\n&=8\\int_{0}^{5}\\left(e^{-t}\\right)dt\\\\\n&=8\\left[\\left(-1e^{-t}\\right)\\Big|_{0}^5\\right]\\\\\n&=8\\left[-e^{-5}-(-e^{0})\\right]=8\\left(1-e^{-5}\\right)\\approx 7.95 \\,\\, \\text{coulombs}\\\\\n\\end{align*}\n%\\columnbreak\n\\begin{center}\n\t\\begin{tikzpicture}\n\t\\begin{axis}[\n\twidth=8cm,height=10cm,\n\taxis lines=center,\n\taxis on top=true,\n\tymax=9,ymin=0,\n\txmax=6,xmin=-.025,\n\txlabel=time($s$),\n\tylabel=voltage($V$),\n\t]\n\t\\addplot [name path =F,->,domain=0:6,thick, samples=100, black] {8*exp(-x)};\n\t\\addplot [name path =G,domain=0:5,thick, samples=100] {0};\n\t\\addplot[pattern=north west lines, pattern color=black!75]fill between[of=F and G, soft clip={domain=0:5}];\n\t\\addplot[mark=]coordinates{(0.5,4.85)} node[pin=45:{$V(t)=8e^{-t}$}]{};\n\t\\end{axis}\n\t\\end{tikzpicture}\n\\end{center}\n\\end{multicols}\nThe standard integral to calculate area is bounded by the axis as seen in the previous examples. In the following example we see that when finding area between intersecting curves, a new strategy can be applied. \\medskip\\\\\n\\clearpage\n\\example Find the area between the two curves: $f(x)=x$ and $g(x)=x^2-2x$. They intersect as shown at $(0,0)$ and $(3,3)$.\n\\begin{center}\\includegraphics[width=0.6\\textwidth]{area5}\\end{center}\n\\solution When inspecting the shaded region, the straight-line function $y=x$ is above the parabola. We could handle this with two separate integrals: find $\\int f(x)\\,dx$ and then subtract $\\int g(x)\\,dx$, or we can simplify and find $\\int [f(x)-g(x)]\\,dx$:\n\\begin{tasks}[label-offset=-5em](2)\n\\task[]\\begin{align*}\n\\text{Area}&=\\int_a^b[f(x)-g(x)]dx\\\\\n&=\\int_0^3[x-(x^2-2x)]dx\\\\\n&=\\int_0^3[-x^2+3x]dx\\\\\n\\end{align*}\n\\task[]\\begin{align*}\n&=-\\frac{x^3}{3}+\\frac{3x^2}{2}\\Big\\vert_0^3\\\\\n&=\\left(-9+\\frac{27}{2}\\right)-0=\\frac{9}{2}\\text{ units}^2\\\\\n\\end{align*}\n\\end{tasks}\n\\begin{tcolorbox}\n\\textbf{Area between functions}\\\\\nLet $f(x)$ be the upper function and $g(x)$ be the lower function, then\\\\\n\\[\\text{Area }=\\int_a^b[f(x)-g(x)]dx\\]\n\\end{tcolorbox}\n\n\\clearpage\n\\example A logo is formed by the shaded area between the cubic function  $ f(x)=4x -x^3$ and a parabola $g(x)=2x-x^2$. The two curves intersect at $x=0$ and $x=2$. 
Find the shaded area.\\\\\n\\begin{multicols}{2}\n\t\\begin{tikzpicture}\n\t\\begin{axis}[\n\taxis lines=center,\n\theight=8cm,\n\taxis on top=true,\n\tymax=4,ymin=-4,\n\txmax=3,xmin=0,\n\txlabel=$x$,\n\t]\n\t\\addplot [name path =F,<->,domain=-2.3:2.3,thick, samples=200, black] {-x^3+4*x};\n\t\\addplot [name path = G,<->,domain=-2.3:2.3,thick, samples=200, red] {-x^2+2*x};\n\t\\addplot[pattern=north west lines, pattern color=blue!50]fill between[of=F and G, soft clip={domain=0:2}];\n\t\\end{axis}\n\t\\end{tikzpicture}\n\t\\columnbreak\n\\\\\t\\solution Identify the top curve $f(x)$ and subtract the bottom $g(x)$:\n\t\\begin{align*}\n\t\\mathrm{Shaded \\hspace{0.2cm}area}&=\\int_{a}^{b}(f(x)-g(x))dx\\\\\n\t&=\\int_{0}^{2}\\left((4x-x^3)-(2x-x^2)\\right)dx\\\\\n\t&=\\int_{0}^{2}\\left(2x-x^3+x^2\\right)dx\\\\\n\t&=\\left[\\left(2\\frac{x^2}{2}-\\frac{x^4}{4}+\\frac{x^3}{3} \\right)\\Big|_{0}^2\\right]\\\\\n\t&=\\left[\\left(4-\\frac{16}{4}+\\frac{8}{3}\\right)-(0)\\right]\\\\\n\t&=\\frac{8}{3} \\,\\, \\mathrm{units}^2\n\t\\end{align*}\n\\end{multicols}\n\n%---------------------------------------------------\n% Volume\n%---------------------------------------------------\n\\section{Volume}\nIf a function is revolved around an axis it creates a volume between the axis and the function. Just as integrating a function produces an area, integrating an \\textit{area} produces a volume.\n\n\\begin{multicols}{2}\n\t\\example A connector was obtained by revolving the function $f(x)=0.3x+1$ around the $x$-axis for $0\\leq x \\leq 5$. Calculate the volume of the connector.\n\\begin{center}\\includegraphics[width=8cm]{solids1}\\end{center}\n\\columnbreak\n\\solution\\\\ Given the function of the outer boundary of the shape, we must square the function and integrate this result.\\\\\n\\begin{align*}\n\\mathrm{volume}&=\\int_{a}^{b}\\pi[f(x)]^2 dx\\\\\n&=\\pi\\int_0^5\\left(0.3x+1\\right)^2 dx\\end{align*}\nUse the power rule and the reverse of the chain rule to integrate:\\\\\n\\[=\\pi \\frac{ (0.3x+1)^3}{0.3(3)} \\Big|_0^5\\]\nEvaluate at the boundary points:\\\\\n\\begin{align*}\n&=\\frac{10\\pi}{9}\\left[\\left(\\frac{5}{2}\\right)^3-1\\right]=\\frac{65\\pi}{4}\\\\\n&\\approx 51.05 \\text{ units}^3\n\\end{align*}\n\\end{multicols}\n\n\\clearpage\nThe previous example used the disc method to calculate a volume. We begin with a rectangular section of width $\\Delta x$ and length $f(x)$, just like the Riemann sum for calculating area on page~\\pageref{fig:riemann}. This height becomes a radius when rotated around an axis of revolution, labelled $R$ below. The \\textit{volume} of a single disc is $\\pi R^2\\Delta x$. \n\\begin{center}\\includegraphics[width=\\textwidth]{discMethod}\\end{center}\nFinding the volume of all the discs involves integrating over the length of the axis from $a$ to $b$.\\[\\text{Volume}=\\int \\pi R^2 \\Delta x\\] The radius is the function evaluation $f(x)$. Letting the discs become infinitesimally thin as their number goes to infinity, $n\\rightarrow\\infty$, we can generalise the formula for volume:\n\\begin{tcolorbox}\n\t\\textbf{Volume rotated around the $x$-axis}\n\t\\[ \\text{Volume} = \\int_{a}^{b} \\pi y^2 dx = \\int_{a}^{b} \\pi (f(x))^2 dx\\]\n\t\\textbf{Volume rotated around the $y$-axis}\n\t\\[ \\text{Volume} = \\int_{c}^{d} \\pi x^2 dy =\\int_{c}^{d} \\pi (f(y))^2 dy  \\]\n\\end{tcolorbox}\n\n\n\\example Fluorescent and incandescent light bulbs are often filled with the inert gas krypton. 
\example Fluorescent and incandescent light bulbs are often filled with the inert gas krypton. Find the volume of krypton gas required to fill the bulb shown.\vspace{0.4cm}\\ 
\includegraphics[width=15cm]{bulb}\\
The function has been estimated to be:\\
\[f(x)=1-\frac{x^2}{5}\qquad\text{ for } -1.5\leq x \leq 2\text{ cm}\] 
\solution You will have to expand $\left(1-\frac{x^2}{5}\right)^2$ before integrating:\\
	\begin{align*}
	\text{volume}&=\int_{a}^{b}\pi[f(x)]^2 dx\\
	&=\pi\int_{-1.5}^{2}\left(1-\frac{x^2}{5}\right)^2 dx\\
	&=\pi\int_{-1.5}^{2}\left(\frac{x^4}{25}-\frac{2x^2}{5}+1\right)dx\\
	&=\pi\left(\frac{x^5}{125}-\frac{2x^3}{15}+x\right)  \Bigg|_{-1.5}^2\\
	&=\pi\left[\left(\frac{32}{125}-\frac{16}{15}+2\right)-\left(-0.06075+0.45-1.5\right)\right]=2.3\pi\approx 7.23\, \text{ cm}^3\end{align*}
\\	
\begin{multicols}{2}
\examq Calculate the volume of the container found by rotating the curve $y=\sqrt{x^3}$ around the $y$-axis for $0\leq y \leq 5$.\\

Here the volume is created by rotation about the $y$-axis, and therefore we need to adjust our formula. First we will solve the equation $y=\sqrt{x^3}$ for $x$, and then integrate.
\columnbreak
\begin{center}
		\includegraphics[width=0.5\textwidth]{funnel.png}
\end{center}
\end{multicols}

\solution Volume $=\int_{a}^{b} \pi f(y)^2 dy$\\
\begin{multicols}{2}
Rearrange the function to isolate $x$:
	\begin{align*}
	y&=\sqrt{x^3}\\
	y^2&=x^3\\
	\sqrt[3]{y^2}&=x\end{align*}
Integrate to find the volume:\\
	\begin{align*}V&=\pi\int_{0}^{5} (y^{\frac{2}{3}})^2\,dy\\
	&=\pi\int_{0}^{5} y^{\frac{4}{3}}\,dy\\
	&=\pi\frac{y^{\frac{7}{3}}}{\frac{7}{3}}\bigg\vert_{0}^{5}\\
	&=\frac{3\pi}{7}5^{\frac{7}{3}}-0\\
	&\approx 57.6 \,\,\mathrm{units}^3
	\end{align*}
\end{multicols}
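The rotation about the $y$-axis can be checked the same way (a sketch, again assuming \texttt{sympy}):
\begin{verbatim}
import sympy as sp

y = sp.symbols('y')
x = y**sp.Rational(2, 3)                  # x = y^(2/3) after rearranging
V = sp.integrate(sp.pi * x**2, (y, 0, 5)) # disc method about the y-axis
print(V, float(V))                        # 3*pi*5**(7/3)/7, approx 57.6
\end{verbatim}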
%---------------------------------------------------
% Integration by Substitution
%---------------------------------------------------
\section{Integration by Substitution}
Earlier we stated that every differentiation rule leads to an integration rule. The chain rule for differentiation leads to the \emph{substitution rule for integration}. 

A series of examples will illustrate how integration by substitution works.

\example Find the integral: $\int 2 x \sqrt{1 +x^{2}}\; d x$.\medskip\\
\solution This cannot be integrated using the techniques discussed so far. This is an example of a function that can be integrated using the substitution rule. Such functions are often recognised by noting the presence of a composite function; the $\sqrt{1 +x^{2}}$ can be seen as $\sqrt{u}$ where the $1+x^2$ under the root is replaced by a new variable.

Let $u=1+x^2$. The integral now becomes $\int 2x\sqrt{u}\; dx$. This is not yet ready because there are two different variables: $u$ \textbf{and} $x$. Let's differentiate the new equation:
\begin{align*}
u&=1+x^2\\
\frac{du}{dx}&=2x
\end{align*}
And we can isolate the $dx$ with the intention of replacing it in the original integral, so $\displaystyle dx=\frac{du}{2x}$.

If we now look back at the original question we are ready to substitute expressions containing $u$ for expressions containing $x$.
\begin{align*}
\int 2 x \sqrt{1 +x^{2}}\; d x &  = \int \sqrt{u} \cdot 2 x\; d x \\
 &  = \int \sqrt{u}\cdot \cancel{2x}\; \frac{du}{\cancel{2x}}\\
 &  = \int u^{\frac{1}{2}}\; d u \\
 &  = \frac{u^{\frac{1}{2} +1}}{\frac{1}{2} +1}= \frac{2}{3}\sqrt{u^{3}}+C
 \end{align*}
 One last step is to switch back to the original variable $x$. Make the final substitution $u=1+x^2$.
 \[ = \frac{2 \sqrt{\left (1 +x^{2}\right )^{3}}}{3} +C \]
 
The Substitution Rule can be stated formally as follows. If $u =g (x)$ is a differentiable function whose range is an interval $I$ and $f$ is continuous on $I$, then $\int f \left (g \left (x\right )\right ) g^{ \prime } \left (x\right )\; d x =\int f \left (u\right )\; d u$. Here $d x$ and $d u$ are known as differentials. The Substitution Rule permits us to replace $g^{ \prime } \left (x\right )\; d x$ with $d u$. 

\example Find $\int x \sin  \left (x^{2}\right )\; dx$\medskip\\
\solution Using the procedure from the previous example, try $u =x^{2}$. This gives $d u =2 x\; d x\text{.}$
\begin{align*}\int x \sin  \left (x^{2}\right )\; d x &  = \int \sin  \left (x^{2}\right ) \cdot x\; d x \\
 &  = \int \sin(u) \cdot \cancel{x}\cdot\frac{du}{2\cancel{x}} \\
 &  = \frac{1}{2} \int \sin(u)\; d u \\
 &  =  -\frac{1}{2} \cos  u +C \\
 &  =  -\frac{1}{2} \cos  \left (x^{2}\right ) +C\end{align*}

It is clear the challenge of the Substitution Rule is to come up with a suitable substitution. Earlier examples often have fairly obvious substitutions, but they can quickly become quite complicated and result in many false starts. \\
\rule{6.8cm}{0.5pt}\\
\example Find $\int \frac{x}{\sqrt{1 -x^{2}}}\; d x$\medskip\\
\solution Let $u =1 -x^{2}$. Then $d u = -2 x\; d x$. So $x\; d x = -\frac{1}{2} d u$. These are now substituted into the original expression
$$\int \frac{x}{\sqrt{1 -x^{2}}}\; d x = -\frac{1}{2} \int \frac{1}{\sqrt{u}}\; d u = -\frac{1}{2} \int u^{ -1/2}\; d u = -\frac{1}{2} \left (2 u^{1/2}\right ) +C = -\sqrt{1 -x^{2}} +C$$

In each of the above three examples you could use Desmos to compare the original function and the integral to see if the result is reasonable. For Example 3, try graphing the original function $y =\frac{x}{\sqrt{1 -x^{2}}}$ and the integral $y = -\sqrt{1 -x^{2}}$ on the same axes and check that the original function represents the slope of the tangent lines to the curve $y = -\sqrt{1 -x^{2}}$. 

\subsection*{Evaluating Definite Integrals by Substitution}
We will look at two ways to evaluate a definite integral.

\textbf{Method 1} Find the indefinite integral then use the Evaluation Theorem. 

\textbf{Method 2} Make the substitution to the integrand and differential (remembering to change the limits to the new variable). 

\example Evaluate $\int _{0}^{8}\sqrt{x +1}\; d x$. Graphing provides us with the following:
\begin{SCfigure}[1][h!]
	\includegraphics[width=0.65\textwidth]{L4SZ283J}
	\caption*{Figure: The curve is clearly continuous. 
If we let $u =x +1$ then $u^{ \\prime } =1$, this is also continuous.}\n\\end{SCfigure}\n\n\\solution\n\\textbf{Method 1} \nSubstitute $u =x +1$ then $d u =d x$. Substituting these values in the indefinite integral we get\n\\begin{equation*}\\int \\sqrt{x +1}\\; d x =\\int \\sqrt{u}\\; d u =\\frac{2}{3} u^{3/2} +C =\\frac{2}{3} \\left (x +1\\right )^{3/2} +C\n\\end{equation*}So\n\\begin{align*}\\int _{0}^{8}\\sqrt{x +1}\\; d x &  = \\left .\\int \\sqrt{x +1}\\; d x\\right ]_{0}^{8} \\\\\n &  = \\left .\\frac{2}{3} \\left (x +1\\right )^{3/2}\\right ]_{0}^{8} \\\\\n &  = \\frac{2}{3} \\left (9\\right )^{3/2} -\\frac{2}{3} \\left (1\\right )^{3/2} \\\\\n &  = \\frac{2}{3} 27 -\\frac{2}{3} 1 \\\\\n &  = \\frac{2}{3} \\left (27 -1\\right ) =\\frac{2}{3} 26 =\\frac{52}{3} =17\\frac{1}{3}\\end{align*}\n\n\\textbf{Method 2} \nAgain we let $u =x +1$ so $d u =d x$. Also we calculate the new limits for $u$ using $u =x +1$.\n\\begin{align*}x &  = 0\\text{ gives }u =0 +1 =1 \\\\\nx &  = 8\\text{ gives }u =8 +1 =9\\end{align*}\n\nNow the definite integral is transformed into a definite integral in $u$. \n\n\\begin{align*}\\int _{0}^{8}\\sqrt{x +1}\\; d x &  = \\int _{1}^{9}\\sqrt{u}\\; d u \\\\\n &  = \\left .\\frac{2}{3} u^{3/2}\\right ]_{1}^{9} \\\\\n &  = \\frac{2}{3} 9^{3/2} -\\frac{2}{3} 1^{3/2} \\\\\n &  = \\frac{2}{3} \\left (27 -1\\right ) =17\\frac{1}{3}\\end{align*}\n\nA check on the graph will show that an area of $17\\frac{1}{3}$ appears to be reasonable. \n\nMethod 2 is usually preferred as the step where the indefinite integral is first calculated has been neatly incorporated into the method. The difficulty with method 2 is that once the values for $u$ have been calculated the integral is completely transformed and we never return to the original question. We answer a different question that, because of the transformation, has the same answer. This concept might cause confusion. However a graph of the situation shows what has happened.   \n\\begin{center}\\includegraphics[ width=5.3195in, height=1.9579in]{L4SZ283K}\\end{center}\nIt is clear that the area under the first graph between $0$ and $8$ is the same as the area under the second graph between $1$ and $9$. \n\n%---------------------------------------------------\n% Integration by Parts\n%---------------------------------------------------\n\\section{Integration by Parts}\nRecall the rule for the differentiation of a product\n\\begin{equation*}\\frac{d}{d x} \\left (f (x) \\cdot g (x)\\right ) =f (x) \\cdot g^{ \\prime } (x) +g (x) \\cdot f^{ \\prime } (x)\n\\end{equation*}\n\nWe can integrate each side and write the process as follows\n\\begin{align*}f (x) \\cdot g (x) &  = \\int \\left [f (x) \\cdot g^{ \\prime } (x) +g (x) \\cdot f^{ \\prime } (x)\\right ]\\; d x \\\\\n &  = \\int f (x) \\cdot g^{ \\prime } (x)\\; d x +\\int g (x) \\cdot f^{ \\prime } (x)\\; d x\\end{align*}\n\nThis is rewritten in a particular way to become the formula for \\emph{integration by parts}.\n\\begin{align}\\int f (x) \\cdot g^{ \\prime } (x)\\; d x &  = f (x) \\cdot g (x) -\\int g (x) \\cdot f^{ \\prime } (x)\\; d x \\tag{1} \\\\\n\\text{or}\\int f (x) \\cdot g^{ \\prime } (x)\\; d x &  = f (x) \\cdot g (x) -\\int f^{ \\prime } (x) \\cdot g (x)\\; d x \\tag{2}\\end{align}\n\nThis formula is written in a number of different ways in textbooks. 
Here are two ways.

Let $u =f (x)$ and $v =g (x)$ then $d u =f^{ \prime } (x)\; d x$ and $d v =g^{ \prime } (x)\; d x$ so using the substitution rule equation (1) becomes
\begin{equation}\int u\; d v =u v -\int v\; d u\tag{3}
\end{equation}

Let $\int f (x) \cdot g^{ \prime } (x)\; d x$ be regarded as the integral of the product of two functions; then $g =\int g^{ \prime }$. We can write equation (2) as
\begin{tcolorbox}
	\[\int fg'=fg-\int f'g \tag{4}\]
\end{tcolorbox}
Equations (3) and (4) are the common forms that are best to remember.

The success of this method depends on the discovery that a simpler integral results from this process. Sometimes the process results in a worse situation than you started with and should be abandoned. Sometimes you produce a pattern which leads to a solution after 2 or more applications of the integration by parts rule. The patterns that produce solvable problems can be discovered as different questions are tried. 

\example Find $\int x \cos  x\; d x$. This can be seen as the integral of a product. The two functions
are $f (x) =x$ and $g (x) =\cos  x$. So $f^{ \prime } (x) =1$ and $\int g (x) =\sin  x$ are easy to find. Notice though that $f^{ \prime } (x) =1$ gives us a clue that \emph{integration by parts} is going to be a fruitful method. \\
\solution
Using equation (2)
\begin{align*}\int x \cos  x\; d x &  = f (x) \cdot g (x) -\int f^{ \prime } (x) \cdot g (x)\; d x \\
 &  = x \cdot \sin  x -\int 1 \cdot \sin  x\; d x \\
 &  = x \cdot \sin  x -\int \sin  x\; d x \\
 &  = x \cdot \sin  x -\left ( -\cos  x\right ) +C \\
 &  = x\; \sin  x +\cos  x +C\end{align*}
\rule{6.8cm}{0.5pt}\\
\example Use integration by parts to find $\int \ln  x\; d x$. This is a function that has arisen in the course as the integral of $\frac{1}{x}$; however, we are now able to use this fact to help with this question. \\
\solution Notice there is only one function here so we have to create two functions by stating $\ln  x =1 \cdot \ln  x$. The two functions are therefore $f (x) =\ln  x$ and $g (x) =1$. Can we find $f^{ \prime }$ and $\int g$? Yes!
\begin{align*}\int \ln  x\; d x &  = \int \ln  x \cdot 1\; d x \\
 &  = \ln  x \cdot x -\int \frac{1}{x} \cdot x\; d x \\
 &  = x\; \ln  x -\int 1\; d x \\
 &  = x\; \ln  x -x +C\end{align*}
\rule{6.8cm}{0.5pt}\\
\example Find $\int e^{x} \cos  x\; d x$. This is an example where a pattern is established and perseverance leads to the solution. \\
\solution Let $f (x) =\cos  x$ and $g (x) =e^{x}$. Can we find $f^{ \prime }$ and $\int g$? Yes!
\begin{align*}\int e^{x} \cos  x\; d x &  = \cos  x \cdot e^{x} -\int  -\sin  x \cdot e^{x}\; d x \\
 &  = e^{x} \cos  x +\int e^{x} \sin  x\; d x\end{align*}

This is an integral that is very similar in appearance to the original question, with the $\cos  x$ transformed into $\sin  x$. We continue by repeating the integration by parts process with the new integral
$\int e^{x} \sin  x\; d x$
\begin{align*}\int e^{x} \cos  x\; d x &  = e^{x} \cos  x +\overset{\int e^{x} \sin  x\; d x}{\overbrace{\sin  x \cdot e^{x} -\int \cos  x \cdot e^{x} d x}} \\
 &  = e^{x} \cos  x +e^{x} \sin  x -\int e^{x} \cos  x\; d x\end{align*}

Notice the original integral has now appeared on the right hand side of the equation. 
We\nnow simply solve the equation for $\\int e^{x} \\cos  x\\; d x$ and add the constant of integration.\n\\begin{align*}2 \\int e^{x} \\cos  x\\; d x &  = e^{x} \\cos  x +e^{x} \\sin  x \\\\\n\\int e^{x} \\cos  x\\; d x &  = \\frac{e^{x}}{2} \\left (\\cos  x +\\sin  x\\right ) +C\\end{align*}\n\nThis can easily be verified by differentiating. \n\n\\subsection*{Definite Integrals}\nIntegration by parts can be combined with the Evaluation Theorem to evaluate definite integrals. If\nwe assume $f^{ \\prime }$ and $g^{ \\prime }$ are continuous then we can use the Evaluation Theorem and write equation (1) as follows\n\\begin{equation}\\int _{a}^{b}f (x) \\cdot g^{ \\prime } (x)\\; d x =\\left .f (x) \\cdot g (x)\\right ]_{a}^{b} -\\int _{a}^{b}g (x) \\cdot f^{ \\prime } (x)\\; d x\\tag{5}\n\\end{equation}\n\n\\example Evaluate $\\int _{0}^{1}x e^{x}\\; d x$. A graph of $y =x e^{x}$ shows the area required and gives us an idea of the answer to expect. \n\\begin{center}\n\\includegraphics[ width=4.7374in, height=2.6126in,]{L4SZ283M}\n\\end{center}\n\\solution Notice $x$ becomes simpler when differentiated and $e^{x}$ is unchanged when it is integrated\n\\begin{align*}\\int _{0}^{1}x e^{x}\\; d x &  = \\left .x e^{x}\\right ]_{0}^{1} -\\int _{0}^{1}1 \\cdot e^{x}\\; d x \\\\\n &  = \\left .x e^{x}\\right ]_{0}^{1} -\\left .e^{x}\\right ]_{0}^{1} \\\\\n &  = \\left (\\left (1 e^{1}\\right ) -\\left (0 e^{0}\\right )\\right ) -\\left (e^{1} -e^{0}\\right ) \\\\\n &  = e^{1} -e^{1} +e^{0} \\\\\n &  = e^{0} =1\\end{align*}\n\n\n%---------------------------------------------------\n% Applications of Integration\n%---------------------------------------------------\n\\section{Applications of Integration}\n\n\\subsection*{Rectilinear Motion}\nWe will use integration to analyse the motion of an object moving in a straight line. Let\nthe position function for the object be $s =f (t)$ where $t$ is the time. The velocity function is $v (t) =s^{ \\prime } (t)$. Therefore the position function is the integral of the velocity function. Also\nthe acceleration function is $a (t) =v^{ \\prime } (t)$ so the velocity function is the integral of the acceleration function. We can obtain\nthe position function from the acceleration function by integrating twice. This process will generate\ntwo constants of integration so we need two additional pieces of information to find the particular solution. Usually\n$s (0)$ and $v (0)$ are given. \n\n\\example A particle moves in a straight line with an acceleration of $a (t) =4 t +2$. If the initial velocity is $ -4 \\mbox{cm}$/$\\mbox{s}$ and the initial displacement is $5 \\mbox{cm}$, find the position function. \n\n\\solution \\begin{multicols}{2}\nAs $v^{ \\prime } (t) =a (t) =4 t +2$ we can integrate $a (t)$ to obtain $v (t)$\n\\begin{align*}v(t)&=\\int a(t)dt\\\\\n&=\\int (4t+2)dt\\\\\n&=4 \\frac{t^{2}}{2} +2 t +C_{1} \\\\\n&=2 t^{2} +2 t +C_{1}\n\\end{align*}\n\nSubstitute $t =0$ since we know the initial velocity $v(0)=-4$:\n\\begin{align*}v (0) &  = 2 \\cdot 0^{2} +2 \\cdot 0 +C_{1}\\\\\nC_{1} &  =  -4\\text{ and therefore} \\\\\nv (t) &  = 2 t^{2} +2 t -4\\end{align*}\n\nIntegrate $v (t)$ to obtain $s (t)$:\n\\begin{align*}s(t)&=\\int v(t)dt\\\\\n&=2 \\frac{t^{3}}{3} +2 \\frac{t^{2}}{2} -4 t +C_{2} \\\\\n &  = \\frac{2}{3} t^{3} +t^{2} -4 t +C_{2}\\end{align*}\n\nSubstitute $t =0$ because we are given the initial displacement i.e. 
$s (0) =5$
\begin{align*}s (0) &  = \frac{2}{3} \cdot 0^{3} +0^{2} -4 \cdot 0 +C_{2}\\
C_{2} &  = 5\text{ and therefore} \\
s (t) &  = \frac{2}{3} t^{3} +t^{2} -4 t +5\end{align*}
\end{multicols}

\rule{6.8cm}{0.5pt}\\

The gravitational force near the Earth's surface produces a downwards acceleration of about $-9.8$ m/s$^2$, represented by the vector quantity $\vec{g}$.\\
\example A ball is thrown vertically upwards with a speed of $24.5 \mbox{m}$/$\mbox{s}$ from the edge of a cliff that is $147 \mbox{m}$ above the ground. 
\begin{tasks}[column-sep=30pt](2)
\task Find the position function. 
\task Find the time when the ball reaches its maximum height. 
\task Find the maximum height above the ground. 
\task When does the ball hit the ground? \end{tasks}

\clearpage\solution
\begin{multicols}{2} \textbf{(a)} It is usual to choose the positive direction to be upwards; this means that $\vec{g}=-9.8$ m/s$^2$. Integrating the acceleration function yields the velocity function.
\begin{align*}v(t)&=\int a(t)dt\\
&  =\int -9.8dt\\
&=-9.8t +C_{1}\end{align*}
Substituting $v (0) =24.5$ we get that $C_{1} =24.5$
\begin{equation*}v (t) = -9.8 t +24.5
\end{equation*}
Because $s^{ \prime } (t) =v (t)$ we integrate $v (t)$
\begin{equation*}\int v(t)=s(t)=-9.8 \frac{t^{2}}{2} +24.5 t +C_{2}
\end{equation*}
Substitute $s (0) =147$. We get $C_{2} =147$.
\begin{equation*}\therefore s(t) = -4.9 t^{2} +24.5 t +147
\end{equation*}
This expression will hold true until the ball is acted on by an external force (hits the ground).

\textbf{(b)} The maximum height is reached when\\ $v (t) =0$; for a moment the vertical velocity will be zero before the ball begins to fall.
\begin{align*} -9.8 t +24.5 &  = 0 \\
24.5 &  = 9.8 t \\
t &  = \frac{24.5}{9.8}=2.5\text{ s}\end{align*}

\textbf{(c)} The maximum height is found by substituting $t =2.5$ into $s(t)$
\begin{align*}s (2.5) &  =  -4.9 \cdot 2.5^{2} +24.5 \cdot 2.5 +147 \\
 &=177.6 \text{ m}\end{align*}

\textbf{(d)} The ball hits the ground when the displacement $s(t) =0$.
\begin{align*}-4.9 t^{2} +24.5 t +147 &  = 0 \\
t^{2} -5 t -30 &  = 0\end{align*}
Solve for $t$ using the quadratic formula: 
\begin{align*}t &  = \frac{5 \pm \sqrt{25 -4 \times 1 \times  -30}}{2} \\
 &  = \frac{5 \pm \sqrt{145}}{2} \\
 &  =8.5\text{ s, or }-3.5\text{ s}\end{align*}
The negative result is discarded; it does, however, represent a solution of the quadratic in an algebraic sense. We did not test whether the value of $t$ when $v (t) =0$ gives a maximum or a minimum. A maximum can be assumed because the coefficient of $t^{2}$ in $s(t)$ is negative (the acceleration $-9.8$ m/s$^2$ acts downwards), so the parabola opens downward. 
\end{multicols}
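A small numerical cross-check of parts (b)--(d) (an illustrative sketch, not part of the original solution; plain Python):
\begin{verbatim}
import math

def s(t):                       # position function from part (a)
    return -4.9 * t**2 + 24.5 * t + 147

t_max = 24.5 / 9.8              # where v(t) = -9.8t + 24.5 = 0
print(t_max, s(t_max))          # 2.5 s, 177.6 m

# landing time: -4.9 t^2 + 24.5 t + 147 = 0
disc = 24.5**2 - 4 * (-4.9) * 147
t_land = (-24.5 - math.sqrt(disc)) / (2 * -4.9)
print(t_land)                   # approx 8.5 s
\end{verbatim}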
\subsection*{Work}
The strategy we use to allow us to apply calculus to a problem in engineering is the same as we used to evaluate areas. The physical quantity is divided up into a large number of small parts, each one approximately equal to the theoretical quantity it represents. These are then summed and a limit taken as the number of small parts, $n \rightarrow \infty $. This process allows us to evaluate the resulting integral. 

You will recall from Newton's Second Law of Motion that
\begin{equation}F =m a =m \frac{d^{2} s}{d t^{2}}\tag{1}
\end{equation}

where $s (t)$ is the position function, $m$ is the mass of an object and $F$ is the force required to produce an acceleration of $a$ (where $a =\frac{d^{2} s}{d t^{2}}$). 

We usually measure mass in kilograms ($\mbox{kg}$), distance in metres ($\mbox{m}$) and force in newtons ($\mbox{N}$) \linebreak\relax ($\mbox{N} =\mbox{kg} \cdot \mbox{m/s}^{2}$). If the acceleration is constant then the force to produce that acceleration will also be constant. 

Work = force $ \times $ distance, or
\begin{equation}W =F d\tag{2}
\end{equation}If $F$ is in newtons and $d$ is in metres then $W$ is in newton-metres. One newton-metre is called a\linebreak\relax joule ($\mbox{J}$).

\example A mass of 3.5 kg lifted 0.5 m requires a force of $\displaystyle F =m a =3.5 \times 9.8 =34.3 \mbox{N}$. This is the force required to counter the force exerted by gravity. Calculate the work done.\\
\solution The work done is calculated using equation (2)
\begin{equation*}W =F d =34.3 \times 0.5 =17.15 \text{ J}
\end{equation*}

If the force is variable this formula can no longer be applied. Let the force acting on an object as it moves along the $x$-axis in a positive direction from $a$ to $b$ be $f (x)$, where $f$ is a continuous function of $x$. Divide the interval from $a$ to $b$ into $n$ subintervals of width $ \Delta x$ where $ \Delta x =\frac{b -a}{n}$. For simplicity we let the end points of these subintervals be $x_{0}$, $x_{1}$, $x_{2}$, \dots, $x_{n}$. We select the $i^{\text{th}}$ subinterval and select a representative $x$-value in this interval, $x_{i}^{ \ast }$. The work done when we move the object from $x_{i -1}$ to $x_{i}$ is
\begin{equation*}W_{i} \approx f (x_{i}^{ \ast })  \Delta x
\end{equation*}

The total work done is approximately
\begin{equation}W \approx\sum _{i =1}^{n}f (x_{i}^{ \ast })  \Delta x\tag{3}
\end{equation}

As we did with area, we find the limit as $n \rightarrow \infty $ of this sum. As this is a Riemann sum, the limit is a definite integral
\begin{equation}W =\lim _{n \rightarrow \infty}\sum _{i =1}^{n}f (x_{i}^{ \ast })  \Delta x =\int _{a}^{b}f (x)\; d x\tag{4}
\end{equation}
\rule{6.8cm}{0.5pt}\\
\example If the force on a particle is given by the equation $f (x) =3 x^{2} -2 x \text{ N}$, how much work is done moving the particle from $x =2$ to $x =3$?

\solution The graph of $f (x) =3 x^{2} -2 x$ shows that for the interval $\left [2 ,3\right ]$ the area is above the $x$-axis.
\begin{align*}W &  = \int _{a}^{b}f (x)\; d x \\
 &  = \int _{2}^{3}\left (3 x^{2} -2 x\right )\; d x \\
 &  = \left .3 \frac{x^{3}}{3} -2 \frac{x^{2}}{2}\right ]_{2}^{3} \\
 &  = \left .x^{3} -x^{2}\right ]_{2}^{3} \\
 &  = \left (3^{3} -3^{2}\right ) -\left (2^{3} -2^{2}\right ) \\
 &  = 18 -4 =14\text{ J}\end{align*}
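Equation (4) can be illustrated numerically for this example (a sketch, assuming \texttt{numpy}): the Riemann sum of equation (3) approaches the exact $14$ J as $n$ grows.
\begin{verbatim}
import numpy as np

a, b, n = 2.0, 3.0, 100000
dx = (b - a) / n
x = a + (np.arange(n) + 0.5) * dx   # representative x in each subinterval
f = 3 * x**2 - 2 * x                # force in newtons

W = np.sum(f * dx)                  # Riemann sum, equation (3)
print(W)                            # approx 14.0 J
\end{verbatim}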
\\\\\n\\solution\\medskip\\\\\nFirst you will need to find the spring constant $k$:\n\\begin{align*}f(x)&=kx\\\\\n4\\,\\text{N}&=k(0.12-0.10)\\\\\nk&=\\frac{4}{0.02}=200\\,\\,\\frac{\\text{newtons}}{\\text{meter}}\n\\end{align*}\nIntegrate the force function for find the work done in stretching the spring:\n\\begin{align*}\nW&=\\int_{0.12}^{0.17}200x\\,dx\\\\\n&=100x^2\\Big\\vert_{0.12}^{0.17}\\\\\n&=100(0.17^2-0.12^2)\\\\\n&=1.45 \\text{ J}\n\\end{align*}\n\\clearpage\n%---------------------------------------------------\n% Chapter Exercises in a separate file\n%---------------------------------------------------\n\\section{Chapter Exercises}\n\\subimport{}{5IntegrationExercises}\n\n\n \n\n\n\n", "meta": {"hexsha": "5c64dada851186ed4e9f065804ba3fbbd36781ab", "size": 43142, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5Integration.tex", "max_stars_repo_name": "millecodex/ENGE401", "max_stars_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5Integration.tex", "max_issues_repo_name": "millecodex/ENGE401", "max_issues_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5Integration.tex", "max_forks_repo_name": "millecodex/ENGE401", "max_forks_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.5883977901, "max_line_length": 616, "alphanum_fraction": 0.6462148255, "num_tokens": 15139, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764746, "lm_q2_score": 0.867035763237924, "lm_q1q2_score": 0.5769347401637585}}
{"text": "\t\\documentclass[a4paper]{article}\n\\newcommand{\\dd}[1]{\\mathrm{d}#1}\n%% Language and font encodings\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n\\usepackage[T1]{fontenc}\n\n%% Sets page size and margins\n\\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}\n\n%% Useful packages\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage[colorlinks=true, allcolors=blue]{hyperref}\n\\usepackage{braket}\n\\usepackage{amssymb}\n\\usepackage{subcaption}\n\\usepackage[section]{placeins}\n\\usepackage{float}\n\\usepackage{color}\n\\restylefloat{table}\n\n\\title{Bayesian Classification Model}\n\\author{Shreyas Bapat, Bhavya Bhatt, GaganDeep Tomar}\n\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\nThe following discussion revolves around the modelling of data classifier in accordance with Bayesian Decision Theory. We then apply the model on three differnt kinds of dataset which are precisly - linearly separable, non-linearly separable and actual data which can be of random nature. The goal is to compare the performance of the model with the above datasets under different conditions on covariance matrix.   \n\\end{abstract}\n\n\\section{Introduction}\n\nThe Bayesian Decision Theory is a probablistic theory for classifying the data points on the basics of pre-known prior and class conditional probabilities(which in real scenario is not known in any closed form expression). We give below the basics of Bayes rule in the context of pattern recognition.\n\\subsection{Bayes Rule}\nThe bayes rule states that if we have prior probabilities of a class and we know class conditional probability for each class then given a sample data point we can find the probability that it belongs to some particular class $i$ as follows\n\\begin{equation}\\label{bayes equation}\nP(C_{i}|\\bar{x}) = \\frac{P(\\bar{x}|C_{i})P(C_{i})}{\\sum_{j=1}^{m}P(\\bar{x}|C_{j})P(C_{j})}\n\\end{equation} Where $P(C_{i})$ is prior probablity of class $i$ and $P(\\bar{x}|C_{i})$ is class conditional probablity. Then we can find to which class the data point belong to as follows\n\\begin{equation}\nC = \\{C_{i}, max\\{P(C_{i}|\\bar{x}), i=1\\dots m\\}\\}\n\\end{equation} where $m$ is the number of classes.\n\\subsection{Assumptions}\nIn the above equation $\\ref{bayes equation}$ we have to have $P(C_{i})$ and $P(\\bar{x}|C_{i})$ for the classification. We can obtain prior probablity by observing the training dataset. The ad-hoc assumption in our model is that class conditional probablity is considered to be normal distributed with parameters $\\mu_{i}$ and $\\sigma_{i}^{2}$ which are mean and variance of the dataset belonging to that particular class $i$. Note that the above parameters are scalar in univariate case(dimension of data point is one) but they would be a vector (mean vector)and a matrix(covariance matrix) respectively in bivariate and multivariate case. 
\section{Contour Curves and Covariance Matrix}\label{appendix}
In this section\footnote{for a complete discussion refer to the appendix} we discuss the relation between the covariance matrix and the shape of the cross-section produced by slicing the bivariate Gaussian distribution with a hyperplane parallel to the 2D feature plane. The bivariate Gaussian distribution is as follows
\begin{equation}
P(\bar{x}|C_{i}) = \frac{1}{\sqrt{\det(2\pi\mathbf{\Sigma_{i}})}}\exp\{\frac{-1}{2}(\bar{x}-\bar{\mu_{i}})^{\intercal}\mathbf{\Sigma_{i}}^{-1}(\bar{x}-\bar{\mu_{i}})\}
\end{equation}with $\mu_{i}$ a $2\times1$ mean column vector and $\Sigma_{i}$ a $2\times2$ covariance matrix. So we write the covariance matrix in its expanded form as
\[
\mathbf{\Sigma} = \left[ {\begin{array}{cc}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{12} & \Sigma_{22} \\
\end{array}} \right]
\]
where the diagonal terms are the variances of the features and the off-diagonal terms are the covariance between feature-1 and feature-2. The above matrix is symmetric precisely because $cov(x_{i}, x_{j})=cov(x_{j}, x_{i})$. Now we set the above distribution function equal to some constant $k$ and find the resultant curve projected on the feature space, which is called a constant contour curve.
\[
\frac{1}{\sqrt{\det(2\pi\mathbf{\Sigma_{i}})}}\exp\{\frac{-1}{2}(\bar{x}-\bar{\mu_{i}})^{\intercal}\mathbf{\Sigma_{i}}^{-1}(\bar{x}-\bar{\mu_{i}})\} = k
\]After some manipulation and taking the logarithm of both sides we get
\[
(\bar{x}-\bar{\mu})^{\intercal}\mathbf{\Sigma}^{-1}(\bar{x}-\bar{\mu})=-2\ln(\sqrt{2\pi\left|\mathbf{\Sigma}\right|}k)
\]where we have dropped the index $i$ for simplicity; the whole analysis can be done without loss of generality. Now writing the matrix in full and evaluating the required operations on the column vector we finally get
\begin{equation}\label{cov}
\Sigma_{22}X_{1}^{2} + \Sigma_{11}X_{2}^{2} - 2\Sigma_{12}X_{1}X_{2} + 2\ln(\sqrt{2\pi\left|\mathbf{\Sigma}\right|}k)=0
\end{equation}where $\bar{x}=\left[x_{1}\;x_{2}\right]^{\intercal}$, $\bar{\mu}=\left[\mu_{1}\; \mu_{2}\right]^{\intercal}$, $X_{1}=x_{1}-\mu_{1}$ and $X_{2}=x_{2}-\mu_{2}$. This equation is in the form of the general equation for a conic section
\begin{equation}
ax^{2}+by^{2}+cxy+d=0
\end{equation}In our case the coefficients $a$ and $b$ are $\Sigma_{22}$ and $\Sigma_{11}$ respectively. The above equation thus represents an ellipse in our case, as variances are always positive values. Now, from the elementary analysis of conics we know that the coefficient of $xy$ represents the extent to which the ellipse is tilted with respect to the axes, and the coefficients of $x^{2}$ and $y^{2}$ determine the lengths of the major and minor axes. Now we consider $\Sigma_{12}=0$ (the covariance matrix is diagonal); then we recover the familiar equation of an ellipse with major and minor axes parallel to the $x$--$y$ axes. The equation is
\begin{equation}
\frac{X_{1}^{2}}{\left(\frac{-2\ln(\sqrt{2\pi\left|\mathbf{\Sigma}\right|}k)}{\Sigma_{22}}\right)}+\frac{X_{2}^{2}}{\left(\frac{-2\ln(\sqrt{2\pi\left|\mathbf{\Sigma}\right|}k)}{\Sigma_{11}}\right)}=1
\end{equation}which is of the form
\[
\frac{x^{2}}{A^{2}}+\frac{y^{2}}{B^{2}}=1
\]This is the equation of the contour curve projected in the feature space. Now we consider the following three cases
\[
\Sigma_{22} < \Sigma_{11}\rightarrow A > B \hspace{0.5cm} \text{ellipse}
\]
\[
\Sigma_{22} = \Sigma_{11}\rightarrow A=B \hspace{0.5cm} \text{circle}
\]
\[
\Sigma_{22} > \Sigma_{11}\rightarrow A < B \hspace{0.5cm} \text{ellipse}
\]
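For a diagonal covariance matrix these three cases are easy to check numerically (a sketch assuming \texttt{numpy}; the values of $\Sigma_{11}$, $\Sigma_{22}$ and $k$ are hypothetical):
\begin{verbatim}
import numpy as np

S11, S22 = 2.0, 0.5              # hypothetical variances, Sigma diagonal
Sigma = np.diag([S11, S22])
k = 0.01                         # contour level (small enough that c > 0)

c = -2 * np.log(np.sqrt(2 * np.pi * np.linalg.det(Sigma)) * k)
A, B = np.sqrt(c / S22), np.sqrt(c / S11)   # semi-axes of the contour
print('circle' if np.isclose(A, B) else 'ellipse', A, B)
\end{verbatim}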
\section{Training and Testing}
In the presented work we take a test size of $0.25$. We use the training data points to train our classifier and the testing data points to analyse the performance of the trained model (by performance we mean the ability of the model to correctly classify unseen data points, i.e.\ test examples). Training the model has a different meaning for different models; here training means calculating the covariance matrix using the training dataset and obtaining the required decision boundary.
\subsection{Discriminant function}
The decision boundary is represented mathematically using a discriminant function. For the Bayes Classifier, the discriminant function is as follows
\begin{equation}
f(\bar{x}) = g_{i}(\bar{x}) - g_{j}(\bar{x})
\end{equation}where $g_{i}(\bar{x})$ is
\begin{equation}
g_{i}(\bar{x}) = \ln(P(C_{i}|\bar{x}))
\end{equation}We observe that $f(\bar{x})=0$ is the equation of the decision boundary; for $f(\bar{x})>0$ the data point belongs to class $i$ and for $f(\bar{x})<0$ the data point belongs to class $j$. Now we know that $P(C_{i}|\bar{x})\propto P(\bar{x}|C_{i})P(C_{i})$, and taking the logarithm simplifies the expression for the discriminant function\footnote{for more insight you can refer to the textbook, R.O.Duda, D.G.Stork Pattern Classification-wiley}. We have omitted the total probability term in the denominator because it is the same for all the classes and does not contribute to the classification process.
\section{Performance of the model}
In this section\footnote{Note that in the further discussion class-1 is red, class-2 green, class-3 blue} we start our actual task of checking the performance on the three different datasets. The analysis is as follows: for each dataset we consider four cases for the covariance matrix,
\begin{itemize}
\item Covariance matrix for all the classes is the same and is $\sigma^{2}\mathbf{I}$.
\item Full covariance matrix for all the classes is the same and is $\mathbf{\Sigma}$.
\item Covariance matrix is diagonal and is different for each class
\item Full covariance matrix for each class is different
\end{itemize}
and for each such case we calculate the following (a short sketch of computing the first group of these quantities from a confusion matrix is given after the list)
\begin{itemize}
\item Classification accuracy, precision for every class, mean precision, recall for every class, mean recall, F-measure for every class and mean F-measure on test data
\item Confusion matrix\footnote{Note that entry $C_{ij}$ of the confusion matrix represents the number of points known to be in class $i$ but classified as class $j$} based on the performance for test data
\item Constant density contour plot for all the classes together with the training data superposed
\item Decision region plot for every pair of classes together with the training data superposed
\item Decision region plot for all the classes together with the training data superposed
\end{itemize}
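A minimal sketch (assuming \texttt{numpy}) of how the accuracy, precision, recall and F-measure are obtained from a confusion matrix $C$ with the convention above:
\begin{verbatim}
import numpy as np

def metrics(C):
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)
    recall = tp / C.sum(axis=1)       # row sums: actual class totals
    precision = tp / C.sum(axis=0)    # column sums: predicted totals
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / C.sum()
    # zero denominators give nan, as in some of the tables below
    return accuracy, precision, recall, f1

print(metrics([[375, 0, 0], [0, 375, 0], [0, 0, 375]]))  # hypothetical
\end{verbatim}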
\newpage
\subsection{Dataset-1}
This dataset contains two types of two-dimensional data points, linearly separable and non-linearly separable. Both have classification among three classes.
\subsubsection{Linearly Separable}
First we show the scatter plot of the data points in the training dataset.
\begin{figure}[h!]
  \includegraphics[width=0.7\linewidth]{1_12.png}
  \caption{Dataset-1: Linearly Separable}
  \label{fig:Linearly Separable}
\end{figure}

As we can see from figure \ref{fig:Linearly Separable}, the data points are linearly separable. Now we consider the four stated cases on this dataset.
\newpage
\paragraph{Case-1: $\mathbf{\Sigma_{i}}=\sigma^{2}\mathbf{1}$}
According to the theory considered for this model we get a linear\footnote{linear means that if the feature space is two-dimensional the boundary is a line, if three-dimensional a plane, and similarly a hyperplane for higher dimensions} discriminant function of the form
\begin{equation}
\mathbf{w^{\intercal}}(\bar{x}-\bar{x_{0}})=0
\end{equation}
\[
\mathbf{w} = \bar{\mu_{i}}-\bar{\mu_{j}}
\]
\[
\bar{x_{0}} = \frac{1}{2}(\bar{\mu_{i}}+\bar{\mu_{j}})-\frac{\sigma^{2}}{\mid\mid \bar{\mu_{i}}-\bar{\mu_{j}}\mid\mid^{2}}\ln\frac{P(C_{i})}{P(C_{j})}(\bar{\mu_{i}}-\bar{\mu_{j}})
\]
where $\mathbf{w}$ is the weight vector and $\bar{x_{0}}$ is a point through which the (linear) decision boundary passes. The decision surfaces between each pair of the three classes are shown in figure \ref{fig:1_1}. Now we state the performance parameters obtained by testing the classifier on the test dataset (0.25).  
\n\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_12_1.png}\n     \\caption{case-1: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_23_1.png}\n    \\caption{case-1: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_31_1.png}\n    \\caption{case-1: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Linearly Seperable Dataset-1, Case-1}\n  \\label{fig:1_1}\n\\end{figure}\n\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n375 & 0 & 0\\\\\n0 & 375 & 0\\\\\n0 & 0 & 375\\\\\n\\end{array}} \\right]\n\\]We observe here is that even though the diagonal terms (correctly classified) are appreciable but the cross diagonal terms suggest that this case of covariance matrix cannot be a pratical approximation for the dataset.\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[H]\n  \\begin{center}\n    \\caption{Performance parameters for Linearly Seperable Dataset-1, Case -1}\n    \\label{tab:table1}\n    \\begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.6639344262 & 0.3875598086 & 0.4894259819\\\\\n      class-2 & 0.72 & 0.4812834225 & 0.5769230769\\\\\n      class-3 & 0.184 & 0.5187165775 & 0.271642518\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n$\\mathbf{Accuracy}$ 43.66$ \\%,$ $\\mathbf{Mean- Precision}$ 0.4625$ ,$ $\\mathbf{Mean- Recall}$ 0.522$.$\n\\paragraph{Case-2: $\\mathbf{\\Sigma_{i}}=\\mathbf{\\Sigma}$}\nAccording to the theory considered for this model we get a linear\\footnote{linear means that if feature is two dimensional then it is a line, if three dimensional it is a hyperplane and similarly for higher dimensions} discriminant function of the form\n\\begin{equation}\n\\mathbf{w^{\\intercal}}(\\bar{x}-\\bar{x_{0}})=0\n\\end{equation}\n\\[\n\\mathbf{w} = \\mathbf{\\Sigma}^{-1}(\\bar{\\mu_{i}}-\\bar{\\mu_{j}})\n\\]\n\\[\n\\bar{x_{0}} = \\frac{1}{2}(\\bar{\\mu_{i}}+\\bar{\\mu_{j}})-\\frac{\\ln(P(C_{i}))-\\ln(P(C_{j}))}{(\\bar{\\mu_{i}}-\\bar{\\mu_{j}})^{\\intercal}\\mathbf{\\Sigma}^{-1}(\\bar{\\mu_{i}}-\\bar{\\mu_{j}})}(\\bar{\\mu_{i}}-\\bar{\\mu_{j}})\n\\]\nwhere $\\mathbf{w}$ is weight matrix and $\\bar{x_{0}}$ through which the decision boundary(linear) passes. The decision surface between each pair of three classes are shown in figure \\ref{fig:1_2}. 
\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_12_2.png}\n     \\caption{case-2: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_23_2.png}\n    \\caption{case-2: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{1_31_2.png}\n    \\caption{case-2: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Linearly Seperable Dataset-1, Case-2}\n  \\label{fig:1_2}\n\\end{figure}\nNow we state the performance parameters obtained by testing the classifier on the test dataset(0.25).\n\\subparagraph{Confusion Matrix}\\label{reason}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n375 & 0 & 0\\\\\n0 & 375 & 0\\\\\n0 & 0 & 375\\\\\n\\end{array}} \\right]\n\\]We can observe that this is performing even worst as it has two diagonal terms equal to zero which means it does not correctly classify any point in class $1$ and class $2$. So this also is not a realisitic approximation of the covariance matrix.\n\n\\subparagraph{Precision, Recall and F-measure}\\textcolor{white}{:}\n\\begin{table}[H]\n  \\begin{center}\n    \\caption{Performance parameters for Linearly Seperable Dataset-1, Case -2}\n    \\label{tab:table1}\n    \\begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0 & 0 & NaN\\\\\n      class-2 & 0 & 0 & NaN\\\\\n      class-3 & 0.816 & 0.449339207 & 0.5795454545\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n$\\mathbf{Accuracy}$ 40.8$\\%,$ $\\mathbf{Mean -Precision}$ 0.149$,$ $\\mathbf{Mean- Recall}$ 0.272$.$\n\\paragraph{Case-3: $\\mathbf{\\Sigma_{i}}$ is diagonal and different}\nAccording to the theory considered for this model we get a non-linear discriminant function of the form\n\\begin{equation}\\label{eq:non-linear}\ng_{i}(\\bar{x}) = \\bar{x}^{\\intercal}\\mathbf{W_{i}}\\bar{x}+\\bar{w_{i}}^{\\intercal}\\bar{x}+\\omega_{i0}\n\\end{equation}\n\\[\n\\mathbf{W_{i}} = \\frac{-1}{2}\\mathbf{\\Sigma_{i}}^{\\intercal}\n\\]\n\\[\nw_{i} = \\mathbf{\\Sigma_{i}}^{-1}\\bar{\\mu_{i}}\n\\]\n\\[\n\\omega_{i0} = \\frac{-1}{2}\\bar{\\mu_{i}}^{\\intercal}\\mathbf{\\Sigma_{i}}^{-1}\\bar{\\mu_{i}}-\\frac{1}{2}\\ln(\\left|\\mathbf{\\Sigma_{i}}\\right|)+\\ln(P(C_{i}))\n\\]. 
From equation \ref{eq:non-linear}, the decision surfaces between each pair of the three classes are non-linear, as shown in figure \ref{fig:1_3}
\begin{figure}[h!]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_12_3.png}
     \caption{case-3: class-1 class-2}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_23_3.png}
    \caption{case-3: class-2 class-3}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_31_3.png}
    \caption{case-3: class-3 class-1}
  \end{subfigure}
  \caption{Decision Boundary and Training Data points for Linearly Separable Dataset-1, Case-3}
  \label{fig:1_3}
\end{figure}
\subparagraph{Confusion Matrix}
The confusion matrix obtained is
\[
\mathbf{M} = \left[ {\begin{array}{ccc}
375 & 0 & 0\\
0 & 375 & 0\\
0 & 0 & 375\\
\end{array}} \right]
\]This shows that the off-diagonal terms are appreciably decreased, which makes this approximation a reasonable one for this dataset.

\subparagraph{Precision, Recall and F-measure}\textcolor{white}{:}
\begin{table}[h!]
  \begin{center}
    \caption{Performance parameters for Linearly Separable Dataset-1, Case-3}
    \label{tab:table1}
    \begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between
      \textbf{class} & \textbf{Recall} & \textbf{Precision} & \textbf{F-Measure}\\
      \hline
      class-1 & 0.96 & 0.9230769231 & 0.9411764706\\
      class-2 & 0.936 & 0.9590163934 & 0.9473684211\\
      class-3 & 0.9959677419 & 0.9959677419 & 0.9959677419\\
    \end{tabular}
  \end{center}
\end{table}
\\$\mathbf{Accuracy}$ 96.8$\%,$ $\mathbf{Mean\ Precision}$ 0.959$,$ $\mathbf{Mean\ Recall}$ 0.963$.$
\newpage
\paragraph{Case-4: $\mathbf{\Sigma_{i}}$ is arbitrary}
In this case we can also use equation \ref{eq:non-linear}. 
The decision surfaces between each pair of the three classes would again be non-linear, as shown in figure \ref{fig:1_4}
\begin{figure}[h!]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_12_4.png}
     \caption{case-4: class-1 class-2}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_23_4.png}
    \caption{case-4: class-2 class-3}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{1_31_4.png}
    \caption{case-4: class-3 class-1}
  \end{subfigure}
  \caption{Decision Boundary and Training Data points for Linearly Separable Dataset-1, Case-4}
  \label{fig:1_4}
\end{figure}
\subparagraph{Confusion Matrix}
The confusion matrix obtained is
\[
\mathbf{M} = \left[ {\begin{array}{ccc}
375 & 0 & 0\\
0 & 375 & 0\\
0 & 0 & 375\\
\end{array}} \right]
\]This case performs similarly to the previous one.

\subparagraph{Precision, Recall and F-measure}\textcolor{white}{:}
\footnote{The discriminant functions discussed above for all the different cases are the same for all the subsequent datasets}
\begin{table}[h!]
  \begin{center}
    \caption{Performance parameters for Linearly Separable Dataset-1, Case-4}
    \label{tab:table1}
    \begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between
      \textbf{class} & \textbf{Recall} & \textbf{Precision} & \textbf{F-Measure}\\
      \hline
      class-1 & 0.952 & 0.9224806202 & 0.937007874\\
      class-2 & 0.936 & 0.9512195122 & 0.9435483871\\
      class-3 & 0.9959677419 & 0.9959677419 & 0.9959677419\\
    \end{tabular}
  \end{center}
\end{table}
\\$\mathbf{Accuracy}$ 96.6$\%,$ $\mathbf{Mean\ Precision}$ 0.956$,$ $\mathbf{Mean\ Recall}$ 0.961$.$
\newpage
\subparagraph{Conclusion}
The best performance is given by case-$3$ and the worst performance by case-$2$. 
Now we show the final decision boundaries simultaneously for all three classes and for all four cases, along with their contour plots superimposed, in figure \ref{fig:cont_1_4}
\begin{figure}[h!]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_1_123_1.png}
     \caption{case-1: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_1_123_2.png}
    \caption{case-2: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_1_123_3.png}
    \caption{case-3: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_1_123_4.png}
    \caption{case-4: interclass decision surface and contour plot}
  \end{subfigure}
  \caption{decision boundary for all three classes and contour plots}
  \label{fig:cont_1_4}
\end{figure}
\goodbreak \newpage
\subsubsection{Non-Linearly Separable}
First we show the scatter plot of the data points in the training dataset.
\begin{figure}[!h]
  \includegraphics[width=0.7\linewidth]{2.png}
  \caption{Dataset-1: Non-Linearly Separable}
  \label{fig:Non Linearly Separable}
\end{figure}
As we can see from figure \ref{fig:Non Linearly Separable}, the data points are not linearly separable. Now we consider the four stated cases on this dataset.
\newpage
\paragraph{Case-1: $\mathbf{\Sigma_{i}}=\sigma^{2}\mathbf{1}$} 
The decision boundary between each pair of classes is shown in figure \ref{fig:2_1}
\begin{figure}[h]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_12_1.png}
     \caption{case-1: class-1 class-2}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_23_1.png}
    \caption{case-1: class-2 class-3}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_31_1.png}
    \caption{case-1: class-3 class-1}
  \end{subfigure}
  \caption{Decision Boundary and Training Data points for Non-Linearly Separable Dataset-1, Case-1}
  \label{fig:2_1}
\end{figure}
\subparagraph{Confusion Matrix}
The confusion matrix obtained is
\[
\mathbf{M} = \left[ {\begin{array}{ccc}
272 & 0 & 103\\
67 & 263 & 43\\
308 & 300 & 142\\
\end{array}} \right]
\]We can observe here that the off-diagonal terms are considerably larger than the diagonal terms, so this is not a good approximation for the dataset.

\subparagraph{Precision, Recall and F-measure}:
\begin{table}[h!]
  \begin{center}
    \caption{Performance parameters for Non-Linearly Separable Dataset-1, Case-1}
    \label{tab:table1}
    \begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between
      \textbf{class} & \textbf{Recall} & \textbf{Precision} & \textbf{F-Measure}\\
      \hline
      class-1 & 0.6639344262 & 0.4358974359 & 0.526275559\\
      class-2 & 0.768 & 0.4571428571 & 0.5731343284\\
      class-3 & 0.192 & 0.5052631579 & 0.2782608696\\
    \end{tabular}
  \end{center}
\end{table}
\\
$\mathbf{Accuracy}$ 45.8$\%,$ $\mathbf{Mean\ Precision}$ 0.466$,$ $\mathbf{Mean\ Recall}$ 0.541$.$
\newpage
\paragraph{Case-2: 
$\\mathbf{\\Sigma_{i}}=\\mathbf{\\Sigma}$}\n\nThe decision boundary between each pair of classes is shown in figure \\ref{fig:2_2}\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_12_2.png}\n     \\caption{case-2: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_23_2.png}\n    \\caption{case-2: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_31_2.png}\n    \\caption{case-2: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Non-Linearly Seperable Dataset-1, Case-2}\n  \\label{fig:2_2}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n0 & 0 & 375\\\\\n0 & 0 & 375\\\\\n114 & 5 & 631\\\\\n\\end{array}} \\right]\n\\]This is performing worst\\footnote{same reason as given in section \\ref{reason}}.\\\\\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Non-Linearly Seperable Dataset-1, Case -2}\n    \\label{tab:table1}\n    \\begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0 & 0 & NaN\\\\\n      class-2 & 0 & 0 & NaN\\\\\n      class-3 & 0.796 & 0.4432071269 & 0.5693848355\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\\\\\n$\\mathbf{Accuracy}$ 39.8$\\%,$ $\\mathbf{Mean- Precision}$ 0.147$,$ $\\mathbf{Mean- Recall}$ 0.265$.$\n\\newpage\n\\paragraph{Case-3: $\\mathbf{\\Sigma_{i}}$ is diagonal and different} \nThe decision boundary is shown in figure \\ref{fig:2_3}\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_12_3.png}\n     \\caption{case-3: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_23_3.png}\n    \\caption{case-3: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{2_31_3.png}\n    \\caption{case-3: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Non-Linearly Seperable Dataset-1, Case-3}\n  \\label{fig:2_3}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n348 & 24 & 3\\\\\n24 & 342 & 9\\\\\n7 & 0 & 743\\\\\n\\end{array}} \\right]\n\\]This approximation on covariance matrix is performing fairly good.\\\\\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Non-Linearly Seperable Dataset-1, Case -3}\n    \\label{tab:table1}\n    \\begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.872 & 0.9396551724 & 0.9045643154\\\\\n      class-2 & 0.936 & 0.9285714286 & 0.9322709163\\\\\n      class-3 & 0.992 & 0.9612403101 & 0.9763779528\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\\\\$\\mathbf{Accuracy}$ 94.8$\\%,$ 
$\mathbf{Mean\ Precision}$ 0.943$,$ $\mathbf{Mean\ Recall}$ 0.933$.$
\newpage
\paragraph{Case-4: $\mathbf{\Sigma_{i}}$ is arbitrary}
The decision boundary is shown in figure \ref{fig:2_4}
\begin{figure}[h!]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_12_4.png}
     \caption{case-4: class-1 class-2}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_23_4.png}
    \caption{case-4: class-2 class-3}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{2_31_4.png}
    \caption{case-4: class-3 class-1}
  \end{subfigure}
  \caption{Decision Boundary and Training Data points for Non-Linearly Separable Dataset-1, Case-4}
  \label{fig:2_4}
\end{figure}
\subparagraph{Confusion Matrix}
The confusion matrix obtained is
\[
\mathbf{M} = \left[ {\begin{array}{ccc}
349 & 26 & 0\\
23 & 342 & 10\\
6 & 0 & 744\\
\end{array}} \right]
\]This also performs fairly well, similar to the previous case.\\

\subparagraph{Precision, Recall and F-measure} \textcolor{white}{:}
\begin{table}[h!]
  \begin{center}
    \caption{Performance parameters for Non-Linearly Separable Dataset-1, Case-4}
    \label{tab:table1}
    \begin{tabular}{l|c|c|r} % <-- Alignments: 1st column left, 2nd middle and 3rd right, with vertical lines in between
      \textbf{class} & \textbf{Recall} & \textbf{Precision} & \textbf{F-Measure}\\
      \hline
      class-1 & 0.904 & 0.9416666667 & 0.9224489796\\
      class-2 & 0.936 & 0.9285714286 & 0.9322709163\\
      class-3 & 0.992 & 0.9763779528 & 0.9841269841\\
    \end{tabular}
  \end{center}
\end{table}
\\$\mathbf{Accuracy}$ 95.6$\%,$ $\mathbf{Mean\ Precision}$ 0.948$,$ $\mathbf{Mean\ Recall}$ 0.944$.$
\newpage
\subparagraph{Conclusion}
By the figures above, the best performance is given by case-$4$ (with case-$3$ close behind) and the worst performance by case-$2$. Now we show the final decision boundaries simultaneously for all three classes and for all four cases, along with their contour plots superimposed, in figure \ref{fig:cont_2_4}
\begin{figure}[h!]
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_2_123_1.png}
     \caption{case-1: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_2_123_2.png}
    \caption{case-2: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_2_123_3.png}
    \caption{case-3: interclass decision surface and contour plot}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{cont_2_123_4.png}
    \caption{case-4: interclass decision surface and contour plot}
  \end{subfigure}
  \caption{decision boundary for all three classes and contour plots}
  \label{fig:cont_2_4}
\end{figure}
\newpage

\subsection{Dataset-2}
This is a real dataset and is much more random\footnote{randomness here means that classes can also overlap} than Dataset-1 considered above. We carry out a similar analysis for this dataset. 
\\subsection{Dataset-2}\nThis is a real dataset and is much more random\\footnote{randomness here means that the classes may also overlap} than Dataset-1 considered above. We carry out a similar analysis for this dataset. First we show the scatter plots of the data points in the training dataset in figure \\ref{fig:3}.\n\\begin{figure}[!h]\n  \\includegraphics[width=0.7\\linewidth]{3.png}\n  \\caption{Dataset-2: real dataset}\n  \\label{fig:3}\n\\end{figure}\n\\newpage\n\\paragraph{Case-1: $\\mathbf{\\Sigma_{i}}=\\sigma^{2}\\mathbf{1}$}\nThe decision boundary is shown in figure \\ref{fig:3_1}.\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_12_1.png}\n     \\caption{case-1: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_23_1.png}\n    \\caption{case-1: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_31_1.png}\n    \\caption{case-1: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Dataset-2, Case-1}\n  \\label{fig:3_1}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n1629 & 211 & 0\\\\\n298 & 1568 & 0\\\\\n9 & 8 & 1701\\\\\n\\end{array}} \\right]\n\\]We observe that the classifier performs well even with this approximated covariance matrix.\\\\\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Dataset-2, Case-1}\n    \\label{tab:3_1}\n    \\begin{tabular}{l|c|c|c} % class labels left-aligned, metric columns centered, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.8713355049 & 0.8546325879 & 0.8629032258\\\\\n      class-2 & 0.8569131833 & 0.8666666667 & 0.8617623282\\\\\n      class-3 & 0.9912739965 & 1 & 0.9956178791\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\\\\$\\mathbf{Accuracy}$ 90.4$\\%$, $\\mathbf{Mean\\ Precision}$ 0.907, $\\mathbf{Mean\\ Recall}$ 0.906.\n\\newpage\n\\paragraph{Case-2: $\\mathbf{\\Sigma_{i}}=\\mathbf{\\Sigma}$}\nThe decision boundary is shown in figure \\ref{fig:3_2}.\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_12_2.png}\n     \\caption{case-2: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_23_2.png}\n    \\caption{case-2: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_31_2.png}\n    \\caption{case-2: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Dataset-2, Case-2}\n  \\label{fig:3_2}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n1620 & 220 & 0\\\\\n158 & 1708 & 0\\\\\n7 & 11 & 1700\\\\\n\\end{array}} \\right]\n\\]\n\\\\\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Dataset-2, Case-2}\n    \\label{tab:3_2}\n    \\begin{tabular}{l|c|c|c} % class labels left-aligned, metric columns centered, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.8680781759 & 0.9126712329 & 0.8898163606\\\\\n      class-2 & 0.922829582 & 0.896875 & 
0.9096671949\\\\\n      class-3 & 0.9895287958 & 1 & 0.9947368421\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\\\\$\\mathbf{Accuracy}$ 92.5$\\%$, $\\mathbf{Mean\\ Precision}$ 0.936, $\\mathbf{Mean\\ Recall}$ 0.926.\n\\newpage\n\\paragraph{Case-3: $\\mathbf{\\Sigma_{i}}$ is diagonal and different}\nThe decision boundary is shown in figure \\ref{fig:3_3}.\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_12_3.png}\n     \\caption{case-3: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_23_3.png}\n    \\caption{case-3: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_31_3.png}\n    \\caption{case-3: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Dataset-2, Case-3}\n  \\label{fig:3_3}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n1611 & 229 & 0\\\\\n215 & 1651 & 0\\\\\n11 & 3 & 1704\\\\\n\\end{array}} \\right]\n\\]\n\\\\\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Dataset-2, Case-3}\n    \\label{tab:3_3}\n    \\begin{tabular}{l|c|c|c} % class labels left-aligned, metric columns centered, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.8631921824 & 0.8833333333 & 0.8731466227\\\\\n      class-2 & 0.8906752412 & 0.865625 & 0.8779714739\\\\\n      class-3 & 0.9930191972 & 1 & 0.996497373\\\\\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n\\\\$\\mathbf{Accuracy}$ 91.3$\\%$, $\\mathbf{Mean\\ Precision}$ 0.916, $\\mathbf{Mean\\ Recall}$ 0.915.\n\\newpage\n\\paragraph{Case-4: $\\mathbf{\\Sigma_{i}}$ is arbitrary}\nThe decision boundary is shown in figure \\ref{fig:3_4}.\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_12_4.png}\n     \\caption{case-4: class-1 class-2}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_23_4.png}\n    \\caption{case-4: class-2 class-3}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{3_31_4.png}\n    \\caption{case-4: class-3 class-1}\n  \\end{subfigure}\n  \\caption{Decision Boundary and Training Data points for Dataset-2, Case-4}\n  \\label{fig:3_4}\n\\end{figure}\n\\subparagraph{Confusion Matrix}\nThe confusion matrix obtained is\n\\[\n\\mathbf{M} = \\left[ {\\begin{array}{ccc}\n1571 & 263 & 6\\\\\n183 & 1683 & 0\\\\\n14 & 3 & 1701\\\\\n\\end{array}} \\right]\n\\]\n\\\\\n\n\n\\subparagraph{Precision, Recall and F-measure} \\textcolor{white}{:}\n\\begin{table}[h!]\n  \\begin{center}\n    \\caption{Performance parameters for Dataset-2, Case-4}\n    \\label{tab:3_4}\n    \\begin{tabular}{l|c|c|c} % class labels left-aligned, metric columns centered, with vertical lines in between\n      \\textbf{class} & \\textbf{Recall} & \\textbf{Precision} & \\textbf{F-Measure}\\\\\n      \\hline\n      class-1 & 0.8575667656 & 0.898911353 & 0.8777524677\\\\\n      class-2 & 0.9019292605 & 0.8538812785 & 0.8772478499\\\\\n      class-3 & 0.9895287958 & 0.9964850615 & 0.9929947461\\\\\n    \\end{tabular}\n  
\\end{center}\n\\end{table}\n\\\\$\\mathbf{Accuracy}$ 94.3$\\%$, $\\mathbf{Mean\\ Precision}$ 0.916, $\\mathbf{Mean\\ Recall}$ 0.916.\n\\newpage\n\\subparagraph{Conclusion}\nWe can see that all of the above approximations of the covariance matrix work well for this dataset, so any of the approximate forms can be used for classification. We now show the final decision boundary simultaneously for all three classes and for all four cases, along with their contour plots superimposed on them, in figure \\ref{fig:cont_3_4}.\n\\begin{figure}[h!]\n  \\centering\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{cont_3_123_1.png}\n     \\caption{case-1: interclass decision surface and contour plot}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{cont_3_123_2.png}\n    \\caption{case-2: interclass decision surface and contour plot}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{cont_3_123_3.png}\n    \\caption{case-3: interclass decision surface and contour plot}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.4\\linewidth}\n    \\includegraphics[width=\\linewidth]{cont_3_123_4.png}\n    \\caption{case-4: interclass decision surface and contour plot}\n  \\end{subfigure}\n  \\caption{Decision boundaries for all three classes and contour plots, Dataset-2}\n  \\label{fig:cont_3_4}\n\\end{figure}\n\\newpage\n\\section{Appendix}\nThis section includes the derivations relating the angle by which the ellipse (the constant-density contour curves) is tilted with respect to the axes to the elements of the covariance matrix. This section can be skipped without any loss. The derivation covers all the points discussed in section \\ref{appendix}. We first assume the general form of an elliptical curve in two dimensions\\footnote{Note that we do not consider the terms arising from a shift of origin, since such a shift affects neither the shape nor the tilt angle.}\n\\begin{equation}\\label{original}\nax^{2}+by^{2}+cxy+d=0\n\\end{equation}Now we rotate the axes (an active rotation) using the rotation matrix for two-dimensional space as follows\n\n\\[\n\\left( \\begin{array}{c}\nx^{\\prime} \\\\\ny^{\\prime}\n\\end{array} \\right) = \n\\left( \\begin{array}{cc}\n\\cos\\theta & -\\sin\\theta \\\\\n\\sin\\theta & \\cos\\theta\n\\end{array} \\right)\n%\n\\left( \\begin{array}{c}\nx \\\\\ny\n\\end{array} \\right)\n\\]\nNow we express $x$ and $y$ in terms of the primed coordinates $x^{\\prime}$ and $y^{\\prime}$. After simple algebraic manipulations we obtain the following equation in the $x^{\\prime}$-$y^{\\prime}$ coordinate system\n\\begin{equation}\\label{main}\n\\begin{split}\n(a\\cos^{2}\\theta+b\\sin^{2}\\theta-c\\cos\\theta\\sin\\theta)x^{\\prime 2} + (a\\sin^{2}\\theta+b\\cos^{2}\\theta+c\\cos\\theta\\sin\\theta)y^{\\prime 2}&\\\\\n+(2a\\cos\\theta\\sin\\theta-2b\\sin\\theta\\cos\\theta+c(\\cos^{2}\\theta-\\sin^{2}\\theta))x^{\\prime}y^{\\prime}+d=0\n\\end{split}\n\\end{equation}Here $\\theta$ is the angle by which the coordinate axes are rotated (which, as we argue below, is also the angle that the major and minor axes of the tilted ellipse make with the $x$-$y$ axes). 
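This substitution is straightforward to verify mechanically; a minimal SymPy sketch (the symbol names are illustrative):\n\\begin{verbatim}\nimport sympy as sp\n\na, b, c, d, th = sp.symbols('a b c d theta', real=True)\nxp, yp = sp.symbols('xp yp', real=True)\n\n# Active rotation by theta, so x and y in terms of the primed coordinates:\nx = sp.cos(th) * xp + sp.sin(th) * yp\ny = -sp.sin(th) * xp + sp.cos(th) * yp\n\nellipse = sp.expand(a * x**2 + b * y**2 + c * x * y + d)\ncross = sp.Poly(ellipse, xp, yp).coeff_monomial(xp * yp)\n\n# The x'y' coefficient equals (a - b) sin(2 theta) + c cos(2 theta):\nprint(sp.simplify(cross - ((a - b) * sp.sin(2 * th) + c * sp.cos(2 * th))))  # 0\n\\end{verbatim}\nSetting this cross-term coefficient to zero is exactly the condition we impose next. 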
Now we impose the condition that in this coordinate system the above ellipse should be aligned with the axes and of the standard form\n\\begin{equation}\n\\frac{x^{\\prime 2}}{A^{2}}+\\frac{y^{\\prime 2}}{B^{2}}=1\n\\end{equation}For this, the coefficient of $x^{\\prime}y^{\\prime}$ in equation \\ref{main} must be zero, which is equivalent to\n\\[\n\\begin{split}\n2a\\cos\\theta\\sin\\theta-2b\\sin\\theta\\cos\\theta+c(\\cos^{2}\\theta-\\sin^{2}\\theta) &= 0 \\\\\n(a-b)\\sin2\\theta+c\\cos2\\theta &= 0\\\\\n\\end{split}\n\\]\n\\begin{equation}\\label{thet}\n\\tan2\\theta = \\frac{c}{b-a}\n\\end{equation}\nThis gives precisely the dependence of the angle on the coefficients in equation\n\\ref{original}. It is easy to show that if $c=0$ (the angle is then zero according to \\ref{thet}), the ellipse given by equation \\ref{original} is aligned with the $x$-$y$ axes. Now, for this ellipse to be a circle, we equate the coefficients of $x^{\\prime 2}$ and $y^{\\prime 2}$ in equation \\ref{main}\n\\[\n\\begin{split}\na\\cos^{2}\\theta+b\\sin^{2}\\theta-c\\cos\\theta\\sin\\theta &= a\\sin^{2}\\theta+b\\cos^{2}\\theta+c\\cos\\theta\\sin\\theta \\\\\na\\cos^{2}\\theta-a\\sin^{2}\\theta+b\\sin^{2}\\theta-b\\cos^{2}\\theta &=2c\\cos\\theta\\sin\\theta \\\\\n(a-b)\\cos2\\theta &=c\\sin2\\theta \\\\\n\\tan2\\theta &= \\frac{a-b}{c}\n\\end{split}\n\\]but from equation \\ref{thet} we can finally write the above condition as\n\\begin{equation}\n-(a-b)^{2}=c^{2}\n\\end{equation}which holds only when $a=b$ and $c=0$ (this condition is not ad hoc and makes sense: there is no such thing as a rotated circle, so we can safely take $c=0$ without loss of generality). So for the ellipse to be a circle, the coefficients of $x^{2}$ and $y^{2}$ must be equal, and we recover the standard equation of a circle\n\\begin{equation}\nx^{2}+y^{2}=r^{2}\n\\end{equation}This whole analysis can now be used to relate the coefficients of equation \\ref{original} to those of section \\ref{appendix}; comparison with equation \\ref{cov} gives\n\\[\na = \\Sigma_{22}\n\\]\n\\[\nb = \\Sigma_{11}\n\\]\n\\[\nc = -2\\Sigma_{12}\n\\]so the conclusions given in section \\ref{appendix} can be correlated with the above calculations.\n\\bibliographystyle{alpha}\n\\bibliography{sample}\n\n\\end{document}", "meta": {"hexsha": "be218cc91cb08f503031a14df6d8d1c6ea789816", "size": 41619, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment-1/final_report.tex", "max_stars_repo_name": "spino17/Pattern-Recognition-Course", "max_stars_repo_head_hexsha": "8e600ea0ce66391df5564237bb9bf1203f2c82ab", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment-1/final_report.tex", "max_issues_repo_name": "spino17/Pattern-Recognition-Course", "max_issues_repo_head_hexsha": "8e600ea0ce66391df5564237bb9bf1203f2c82ab", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment-1/final_report.tex", "max_forks_repo_name": "spino17/Pattern-Recognition-Course", "max_forks_repo_head_hexsha": "8e600ea0ce66391df5564237bb9bf1203f2c82ab", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
47.7830080367, "max_line_length": 700, "alphanum_fraction": 0.7093875393, "num_tokens": 13691, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8289388104343893, "lm_q1q2_score": 0.5769068660651403}}
{"text": " \n\\section{Description Logics}\n\\label{app:dl}\nIn this section, we recall the expressive description logic $\\mathcal{ALC}$ \\cite{DBLP:journals/ai/Schmidt-SchaussS91}. We refer to \n\\cite{DBLP:journals/ws/LukasiewiczS08} for a detailed description of $\\mathcal{SHOIN}(\\mathbf{D})$ DL, that is at the basis of OWL DL.\n\n%%ALC\nLet $\\mathbf{A}$, $\\mathbf{R}$ and $\\mathbf{I}$ be sets of \\emph{atomic concepts}, \\emph{roles} and \\emph{individuals}.\nA \\emph{role} is an atomic role $R \\in \\mathbf{R}$. \n\\emph{Concepts} are defined by induction as follows. Each $C \\in \\mathbf{A}$, $\\bot$ and $\\top$\nare concepts.\nIf $C$, $C_1$ and $C_2$ are concepts and $R \\in \\mathbf{R}$, then $(C_1\\sqcap C_2)$, $(C_1\\sqcup C_2 )$, $\\neg C$,\n$\\exists R.C$, and $\\forall R.C$ are concepts. \nLet $C$, $D$ be concepts,  $R \\in \\mathbf{R}$ and $a, b \\in \\mathbf{I}$. \nAn \\emph{ABox} $\\cA$ is a finite set of \\textit{concept membership axioms} $a : C$ and \\textit{role membership\n\taxioms} $(a, b) : R$, while \na \\emph{TBox} $\\cT$ is a finite set of \\textit{concept inclusion axioms} $C\\sqsubseteq D$. $C \\equiv D$ abbreviates $C \\sqsubseteq D$ and $D\\sqsubseteq  C$.\n\nA \\emph{knowledge base} $\\cK = (\\cT, \\cA)$ consists of a TBox $\\cT$ and an ABox $\\cA$.\nA KB $\\cK$ is assigned a semantics in terms of set-theoretic interpretations $\\cI = (\\Delta^\\cI , \\cdot^\\cI )$, where $\\Delta^\\cI$ is a non-empty \\textit{domain} and $\\cdot^\\cI$ is the \\textit{interpretation function} that assigns  an element in $\\Delta ^\\cI$ to each $a \\in \\mathbf{I}$, a subset of $\\Delta^\\cI$ to each $C \\in \\mathbf{A}$ and a subset of $\\Delta^\\cI \\times \\Delta^\\cI$ to each $R \\in \\mathbf{R}$.\n\n\\section{DISPONTE}\n\\label{app:disponte}\nIn the field of Probabilistic Logic Programming (PLP for short) many proposals have been presented. \nAn effective and popular approach is the Distribution Semantics \\cite{DBLP:conf/iclp/Sato95}, which underlies many PLP languages such as\nPRISM~\\cite{DBLP:conf/iclp/Sato95,DBLP:journals/jair/SatoK01},\nIndependent Choice Logic  \\cite{Poo97-ArtInt-IJ}, Logic Programs with Annotated Disjunctions \\cite{VenVer04-ICLP04-IC} and ProbLog \\cite{DBLP:conf/ijcai/RaedtKT07}.\nAlong this line, many reserchers proposed to combine probability theory with Description Logics (DLs for short) \\cite{DBLP:journals/ws/LukasiewiczS08,DBLP:conf/rweb/Straccia08}.\nDLs are at the basis of the Web Ontology Language (OWL for short), a family of knowledge representation formalisms used for modeling information\nof the Semantic Web\n\nTRILL follows the DISPONTE \\cite{RigBelLamZese12-URSW12,Zese17-SSW-BK} semantics to compute the probability of queries.\nDISPONTE applies the distribution semantics \\cite{DBLP:conf/iclp/Sato95} of probabilistic logic programming to DLs. \nA program following this semantics defines a probability distribution over normal logic programs\ncalled \\emph{worlds}. Then the distribution is extended to queries and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs.\n\nIn DISPONTE, a \\emph{probabilistic knowledge base} $\\cK$ is a set of \\emph{certain axioms} or \\emph{probabilistic axioms} in which each axiom is independent evidence.\nCertain axioms take the form of regular DL axioms while probabilistic axioms are\n$p::E$\nwhere $p$ is a real number in $[0,1]$ and $E$ is a DL axiom. 
\n\nThe idea of DISPONTE is to associate independent Boolean random variables to the probabilistic axioms. \nTo obtain a \\emph{world} $w$, we include every formula obtained from a certain axiom. \nFor each probabilistic axiom, we decide whether or not to include it in $w$.\nA world is therefore a non-probabilistic KB that can be assigned a semantics in the usual way.\nA query is entailed by a world if it is true in every model of the world.\n\nThe probability $p$ can be interpreted as an \\emph{epistemic probability}, i.e., as the degree of our belief in axiom $E$. \nFor example, a probabilistic concept membership axiom\n$\np::a:C\n$\nmeans that we have degree of belief $p$ in $C(a)$.\nA probabilistic concept inclusion axiom of the form\n$\np::C\\sqsubseteq D\n$\nrepresents our belief in the truth of $C \\sqsubseteq D$ with probability $p$. \n\nFormally, an \\emph{atomic choice} is a couple $(E_i,k)$ where $E_i$ is the $i$th probabilistic axiom and $k\\in \\{0,1\\}$. \n$k$ indicates whether $E_i$ is chosen to be included in a world ($k$ = 1) or not ($k$ = 0). \nA \\emph{composite choice} $\\kappa$ is a consistent set of atomic choices, i.e., $(E_i,k)\\in\\kappa, (E_i,m)\\in \\kappa$ implies $k=m$ (only one decision is taken for each formula). \nThe probability of a composite choice $\\kappa$ is \n$P(\\kappa)=\\prod_{(E_i,1)\\in \\kappa}p_i\\prod_{(E_i, 0)\\in \\kappa} (1-p_i)$, where $p_i$ is the probability associated with axiom $E_i$.\nA \\emph{selection} $\\sigma$ is a total composite choice, i.e., it contains an atomic choice $(E_i,k)$ for every \nprobabilistic axiom of the probabilistic KB. \nA selection $\\sigma$ identifies a theory $w_\\sigma$ called a \\emph{world} in this way:\n$w_\\sigma=\\cC\\cup\\{E_i|(E_i,1)\\in \\sigma\\}$ where $\\cC$ is the set of certain axioms. Let us indicate with $\\mathcal{S}_\\cK$ the set of all selections and with $\\mathcal{W}_\\cK$ the set of all worlds.\nThe probability of a world $w_\\sigma$ is \n$P(w_\\sigma)=P(\\sigma)=\\prod_{(E_i,1)\\in \\sigma}p_i\\prod_{(E_i, 0)\\in \\sigma} (1-p_i)$.\n$P(w_\\sigma)$ is a probability distribution over worlds, i.e., $\\sum_{w\\in \\mathcal{W}_\\cK}P(w)=1$.\n\nWe can now assign probabilities to queries. \nGiven a world $w$, the probability of a query $Q$ is defined as $P(Q|w)=1$ if $w\\models Q$ and 0 otherwise.\nThe probability of a query can be defined by marginalizing the joint probability of the query and the worlds, i.e.\n$P(Q)=\\sum_{w\\in \\mathcal{W}_\\cK}P(Q,w)=\\sum_{w\\in \\mathcal{W}_\\cK} P(Q|w)P(w)=\\sum_{w\\in \\mathcal{W}_\\cK: w\\models Q}P(w)$.\n\n\\begin{example}\n\t\\label{people+petsxy}\n\t\\begin{small}\n\t\tConsider the following KB, inspired by the \\texttt{people+pets} ontology \\cite{ISWC03-tut}:\n\t\t{\\center $0.5\\ \\ ::\\ \\ \\exists hasAnimal.Pet \\sqsubseteq NatureLover\\ \\ \\ \\ \\ 0.6\\ \\ ::\\ \\ Cat\\sqsubseteq Pet$\\\\\n\t\t\t$(kevin,tom):hasAnimal\\ \\ \\ \\ \\ (kevin,\\fluffy):hasAnimal\\ \\ \\ \\ \\ tom: Cat\\ \\ \\ \\ \\ \\fluffy: Cat$\\\\}\n\t\t\\noindent The KB indicates that the individuals that own an animal which is a pet are nature lovers with a 50\\% probability and that $kevin$ has the animals \n\t\t$\\fluffy$ and $tom$. $\\fluffy$ and $tom$ are cats and cats are pets with probability 60\\%.\n\t\tWe associate a Boolean variable to each axiom as follows:\n\t\t$F_1 = \\exists hasAnimal.Pet \\sqsubseteq NatureLover$, $F_2=(kevin,\\fluffy):hasAnimal$, $F_3=(kevin,tom):hasAnimal$, $F_4=\\fluffy: Cat$, $F_5=tom: Cat$ and $F_6= Cat\\sqsubseteq Pet$.\n\t\t\n\t\tThe KB has four worlds and the query axiom $Q=kevin:NatureLover$ is true in one of them, the one corresponding to the selection \n\t\t$\n\t\t\\{(F_1,1),(F_6,1)\\}\n\t\t$ (only $F_1$ and $F_6$ are probabilistic, so only they generate choices).\n\t\tThe probability of the query is $P(Q)=0.5\\cdot 0.6=0.3$.\n\t\\end{small}\n\\end{example}\n\n
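For such a small KB, the semantics can be checked by brute-force enumeration of the worlds. The following minimal Python sketch recomputes $P(Q)$ for Example \\ref{people+petsxy}; the entailment test is hard-coded here for brevity, whereas a reasoner such as TRILL derives it automatically:\n\\begin{verbatim}\nfrom itertools import product\n\nprob_axioms = {'F1': 0.5, 'F6': 0.6}    # the two probabilistic axioms\n\ndef entails_query(world):\n    # kevin:NatureLover holds exactly when F1 and F6 are both included\n    return world['F1'] and world['F6']\n\np_query = 0.0\nfor bits in product([0, 1], repeat=len(prob_axioms)):\n    world = dict(zip(prob_axioms, bits))\n    p_world = 1.0\n    for name, included in world.items():\n        p_world *= prob_axioms[name] if included else 1 - prob_axioms[name]\n    if entails_query(world):\n        p_query += p_world\nprint(p_query)    # 0.3\n\\end{verbatim}\n\n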
\\begin{example}\n\t\\label{people+pets_comb}\n\t\\begin{small}\n\t\tSometimes we have to combine knowledge from multiple, untrusted sources, each one with a different reliability. \n\t\tConsider a KB similar to the one of Example \\ref{people+petsxy} but where we have a single cat, $\\fluffy$.\n\t\t{\\center $\\exists hasAnimal.Pet \\sqsubseteq NatureLover\\ \\ \\ \\ \\ (kevin,\\fluffy):hasAnimal\\ \\ \\ \\ \\ Cat\\sqsubseteq Pet$\\\\}\n\t\t\n\t\t\\noindent and there are two sources of information with different reliability that provide the information that $\\fluffy$ is a cat. \n\t\tFor one source the user has a degree of belief of 0.4, i.e., he thinks it is correct with a 40\\% probability, \n\t\twhile for the other source he has a degree of belief of 0.3. \n\t\tThe user can reason on this knowledge by adding the following statements to his KB:\n\t\t{\\center$0.4\\ \\ ::\\ \\ \\fluffy: Cat\\ \\ \\ \\ \\ 0.3\\ \\ ::\\ \\ \\fluffy: Cat$\\\\}\n\t\tThe two statements represent independent evidence on $\\fluffy$ being a cat. We associate $F_1$ ($F_2$) with the first (second) probabilistic axiom.\n\t\t\n\t\tThe query axiom $Q=kevin:NatureLover$ is true in 3 out of the 4 worlds, those corresponding to the selections \n\t\t$\n\t\t\\{ \\{(F_1,1),(F_2,1)\\},\n\t\t\\{(F_1,1),(F_2,0)\\},\n\t\t\\{(F_1,0),(F_2,1)\\}\\}\n\t\t$. \n\t\tSo \n\t\t$P(Q)=0.4\\cdot 0.3+0.4\\cdot 0.7+ 0.6\\cdot 0.3=0.58.$\n\t\tThis is reasonable if the two sources can be considered independent. In fact, the probability comes from the disjunction of two\n\t\tindependent Boolean random variables with probabilities 0.4 and 0.3 respectively: \n\t\t$\n\t\tP(Q) = P(X_1\\vee X_2) = P(X_1)+P(X_2)-P(X_1\\wedge X_2)\n\t\t= P(X_1)+P(X_2)-P(X_1)P(X_2)\n\t\t= 0.4+0.3-0.4\\cdot 0.3=0.58\n\t\t$\n\t\\end{small}\n\\end{example}\n\n\\section{Inference}\n\\label{app:inf}\nTraditionally, a reasoning algorithm decides whether or not an axiom is entailed by a KB by refutation: the axiom $E$ is entailed if the KB extended with $\\neg E$ has no model.\nBesides deciding whether an axiom is entailed by a KB, we also want to find explanations for the axiom, in order to compute its probability.\n\n\\subsection{Computing the Probability of Queries}\nThe problem of finding explanations for a query\nhas been investigated by various authors \\cite{DBLP:conf/ijcai/SchlobachC03,DBLP:journals/ws/KalyanpurPSH05,DBLP:conf/semweb/KalyanpurPHS07,Kalyanpurphd,extended_tracing,Zese17-SSW-BK}.\nIt was called \\emph{axiom pinpointing} in \n\\cite{DBLP:conf/ijcai/SchlobachC03} and considered as a non-standard reasoning service useful for tracing derivations and debugging ontologies. 
\nIn particular, in \\cite{DBLP:conf/ijcai/SchlobachC03} the authors define \\emph{minimal axiom sets} (\\emph{MinAs} for short).\n\\begin{definition}[MinA]\n\tLet $\\cK$ be a knowledge base and $Q$ an\n\taxiom that follows from it, i.e., \n\t$\\cK \\models Q$. We call a set \n\t$M\\subseteq \\cK$ a\n\t\\emph{minimal axiom set} or \\emph{MinA} for $Q$ in $\\cK$ if \n\t$M \\models Q$ and it is minimal\n\tw.r.t. set inclusion.\n\\end{definition}  \n\\noindent The problem of enumerating all MinAs is called \\textsc{min-a-enum}.\n\\textsc{All-MinAs($Q,\\cK$)} is the set of all MinAs for query $Q$ in knowledge base $\\cK$.\n\nA \\emph{tableau} is a graph where each node represents an\nindividual $a$ and is labeled with the set of concepts $\\cL(a)$ it belongs to. Each\nedge $\\langle a, b\\rangle$ in the graph is labeled with the set of roles to which the couple\n$(a, b)$ belongs. Then, a set of consistency-preserving tableau\nexpansion rules is repeatedly applied until a clash (i.e., a contradiction) is detected or a clash-free\ngraph is found to which no more rules are applicable. A clash is, for example, a\ncouple $(C, a)$ where $C$ and $\\neg C$ are present in the label of a node, i.e., $\\{C, \\neg C\\} \\subseteq \\cL(a)$.\n\nSome expansion rules are non-deterministic, i.e., they generate\na finite set of tableaux. Thus the algorithm keeps a set $T$ of tableaux that is\nconsistent if there is any tableau in it that is consistent, i.e., clash-free.\nEach time a clash is detected in a tableau $G$, the algorithm stops applying rules\nto $G$. Once every tableau in $T$ contains a clash or no more expansion rules\ncan be applied to it, the algorithm terminates. If all the tableaux in the final\nset $T$ contain a clash, the algorithm returns unsatisfiable as no model can be\nfound. Otherwise, any one clash-free completion graph in $T$ represents a possible\nmodel for the concept and the algorithm returns satisfiable.\n\nTo compute the probability of a query, the explanations must be made mutually exclusive, so\nthat the probability of each individual explanation is computed and summed\nwith the others. To do that, we assign independent Boolean random variables to the axioms contained in the explanations and define \nthe Disjunctive Normal Form (DNF) Boolean formula $f_K$, which models the set of explanations. Thus\n$\nf_K(\\mathbf{X})=\\bigvee_{\\kappa\\in K}\\bigwedge_{(E_i,1)\\in\\kappa}X_{i}\\bigwedge_{(E_i,0)\\in\\kappa}\\overline{X_{i}}\n$\nwhere $\\mathbf{X}=\\{X_{i}|(E_i,k)\\in\\kappa,\\kappa\\in K\\}$ is the set of Boolean random variables.\nWe can now translate $f_K$ to a Binary Decision Diagram (BDD), from which we can compute the probability of the query with a dynamic programming algorithm that is linear in the size of the BDD.\n\n\nIn \\cite{DBLP:journals/jar/BaaderP10,DBLP:journals/logcom/BaaderP10} the authors consider the problem of finding a \\emph{pinpointing formula} instead of \n\\textsc{All-MinAs($Q,\\cK$)}.\nThe pinpointing formula is a monotone Boolean formula in which each Boolean variable corresponds to an axiom of the KB. This formula is built using \nthe variables and the conjunction and disjunction connectives. It compactly encodes the set of all MinAs.\nLet us assume that each axiom $E$ of a KB $\\cK$ is associated with a propositional variable, indicated with $var(E)$. The set of all propositional variables is indicated with $var(\\cK)$.\nA valuation $\\nu$ of a monotone Boolean formula is the set of propositional variables that are true. 
For a valuation $\\nu \\subseteq var(\\cK)$, let $\\cK_{\\nu} := \\{t \\in \\cK |var(t)\\in\\nu\\}$.\n\\begin{definition}[Pinpointing formula]\n\tGiven a query $Q$ and a KB $\\cK$, a monotone Boolean formula $\\phi$ over $var(\\cK)$ is called a \\emph{pinpointing formula} for $Q$ if for every valuation $\\nu \\subseteq var(\\cK)$ it holds that $\\cK_{\\nu} \\models Q$ iff\n\t$\\nu$ satisfies $\\phi$.\n\t\n\\end{definition}\nIn Lemma 2.4 of \\cite{DBLP:journals/logcom/BaaderP10} the authors proved that the set of all MinAs can be obtained by transforming the pinpointing formula into a Disjunctive Normal Form Boolean formula~(DNF) and removing disjuncts implying other disjuncts. \n\nIrrespective of which representation of the explanations we choose, a DNF or a general pinpointing formula, we can apply knowledge compilation and \\textit{transform it into a Binary Decision Diagram (BDD)}, \nfrom which we can compute the probability of the query with a dynamic programming algorithm that is linear in the size of the BDD.\n\nWe refer to \\cite{Zese17-SSW-BK,ZesBelRig16-AMAI-IJ} for a detailed description of the two methods.", "meta": {"hexsha": "03f3425cc04640b93d43871dbb0a289b928f5c77", "size": 14005, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/appendix.tex", "max_stars_repo_name": "rzese/trill", "max_stars_repo_head_hexsha": "145781966a4203e43c3cfe18e62435557e764297", "max_stars_repo_licenses": ["Artistic-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-02-02T07:36:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-17T01:23:49.000Z", "max_issues_repo_path": "doc/appendix.tex", "max_issues_repo_name": "rzese/trill", "max_issues_repo_head_hexsha": "145781966a4203e43c3cfe18e62435557e764297", "max_issues_repo_licenses": ["Artistic-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-10-19T12:53:55.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-29T15:14:51.000Z", "max_forks_repo_path": "doc/appendix.tex", "max_forks_repo_name": "rzese/trill", "max_forks_repo_head_hexsha": "145781966a4203e43c3cfe18e62435557e764297", "max_forks_repo_licenses": ["Artistic-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2016-04-05T10:34:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-30T10:46:54.000Z", "avg_line_length": 69.6766169154, "max_line_length": 414, "alphanum_fraction": 0.7306676187, "num_tokens": 4368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8856314738181875, "lm_q2_score": 0.6513548714339144, "lm_q1q2_score": 0.5768603747666736}}
{"text": "\\documentclass{beamer}\n\\usetheme[sectionpage=none]{metropolis}\n\\usepackage[utf8]{inputenc} %needed for Umlaute\n\\usepackage{amsmath, amsfonts, amssymb, amsthm, dsfont}\n\\usepackage{csquotes}\n\\usepackage{cleveref}\n\\usepackage{hyperref}\n\\usepackage[T1]{fontenc} %8bit font encoding instead of 7bit\n\n\\title{Integer programming formulations for pigment sequencing}\n\n\\begin{document}\n\\begin{frame}\n  \\tableofcontents\n\\end{frame}\n\n\\section{Preliminaries}\n\\begin{frame}{Preliminaries}\n    \\begin{itemize}\n      \\item a set $T = \\{0,\\ldots,m-1\\}$ representing $m$ types\n      \\item a set $P = \\{1,\\ldots,n\\}$ representing $n$ time periods\n  \\end{itemize}\n  \\end{frame}\n\n\\section{Simple formulation}\n\\subsection{Variables}\n\\begin{frame}{Production variables}\n  \\begin{itemize}\n    \\item $x^t_p \\in \\{0,1\\}$ for $t \\in T,~p \\in P$ with $x^t_p = 1\n      \\Leftrightarrow$ an item of type $t$ is produced in time period\n      $p$\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Configuration variables}\n    \\begin{itemize}\n      \\item $y^t_p \\in \\{0,1\\}$ for $t \\in T,~p \\in \\{0\\} \\cup P$ with\n        $y^t_p = 1 \\Leftrightarrow$ the machine is configured for type\n        $t$ in time period $p$\n    \\end{itemize}\n  \\end{frame}\n\n  \\begin{frame}{Stock variables}\n    \\begin{itemize}\n      \\item $s^t_p \\in \\mathbb{R}_{\\geq0}$ for $t \\in T,~p \\in \\{0\\}\n        \\cup P$ with $s^t_p = \\ell$ representing that $\\ell$ items of\n        type $t$ are kept in stock in time period $p$\n    \\end{itemize}\n  \\end{frame}\n\n  \\begin{frame}{Transition variables}\n    \\begin{itemize}\n      \\item $u^{ij}_p \\in \\{0,1\\}$ for $i \\in T,~j \\in T,~p \\in P$\n        with $u^{ij}_p = 1$ $\\Leftrightarrow$ the production changes\n        from type $i$ to type $j$ in time period $p$\n    \\end{itemize}\n  \\end{frame}\n\n  \\subsection{Constraints}\n  \\begin{frame}{Initial stock constraints}\n    \\begin{itemize}\n    \\item ensure an empty stock for each type before production begins\n    \\item[]\n    \\end{itemize}\n    \\begin{equation}s^{t}_{0} = 0~\\text{for}~t \\in T\\label{stock-cons}\\end{equation}\n  \\end{frame}\n\n  \\begin{frame}{Demand constraints}\n    \\begin{itemize}\n      \\item ensure that the demand for each type is met by production\n        and stock\n      \\item let $d^t_p \\in \\{0,1\\}$ represent whether an item of type\n        $t$ is demanded in time period $p$\n      \\item[]\n    \\end{itemize}\n    \\begin{equation}s^t_{p-1} + x^t_p = d^t_p + s^t_p~\\text{for}~t \\in T,~p \\in P\\label{demand-cons}\\end{equation}\n  \\end{frame}\n\n  \\begin{frame}{Configuration constraints}\n    \\begin{itemize}\n      \\item ensure that the machine is configured correctly when\n        producing type $t$ in period $p$\n        \\item[]\n    \\end{itemize}\n    \\begin{equation}x^t_p \\leq y^t_p~\\text{for}~t \\in T,~p \\in P\\label{state-cons}\\end{equation}\n  \\end{frame}\n\n  \\begin{frame}{State constraints}\n    \\begin{itemize}\n    \\item ensure that the machine is in exactly one configuration at any time\n      \\item[]\n    \\end{itemize}\n    \\begin{equation}\\sum\\limits_{t \\in T} y^t_p = 1~\\text{for}~p \\in P\\label{config-cons}\\end{equation}\n  \\end{frame}\n\n  \\begin{frame}{Transition constraints}\n    \\begin{itemize}\n    \\item ensure that the values of the transition variables are set\n      correctly\n    \\item[]\n    \\end{itemize}\n    \\begin{equation}u^{ij}_p \\geq y^i_{p-1} + 
\\section{Extended formulation}\n\\subsection{Additional variables}\n\n\\begin{frame}{Predecessor state variables}\n  \\begin{itemize}\n  \\item $v^t_p \\in \\{0,1\\}$ for $t \\in T,~p \\in P \\setminus \\{0\\}$\n    with $v^t_p = 1 \\Leftrightarrow$ the machine is configured for\n    type $t$ in time period $p$ but not in time period $p-1$\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Successor state variables}\n  \\begin{itemize}\n  \\item $w^t_p \\in \\{0,1\\}$ for $t \\in T,~p \\in \\{0\\} \\cup P$ with\n    $w^t_p = 1 \\Leftrightarrow$ the machine is configured for type $t$\n    in time period $p$ but not in time period $p+1$\n  \\end{itemize}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "58ce95edb4cb48ab3251836d9a88a3955e81772d", "size": 4759, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/mip_formulations.tex", "max_stars_repo_name": "asbestian/lot_sizing", "max_stars_repo_head_hexsha": "265a4083b54cafa8357d402091dcedbd5c6e7f14", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-27T18:07:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-02T13:59:28.000Z", "max_issues_repo_path": "doc/mip_formulations.tex", "max_issues_repo_name": "asbestian/lot_sizing", "max_issues_repo_head_hexsha": "265a4083b54cafa8357d402091dcedbd5c6e7f14", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-03T03:57:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-03T04:05:57.000Z", "max_forks_repo_path": "doc/mip_formulations.tex", "max_forks_repo_name": "asbestian/lot_sizing", "max_forks_repo_head_hexsha": "265a4083b54cafa8357d402091dcedbd5c6e7f14", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-10-02T01:58:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-02T01:58:41.000Z", "avg_line_length": 33.7517730496, "max_line_length": 161, "alphanum_fraction": 0.643832738, "num_tokens": 1657, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5767489184888055}}
{"text": "%!TEX root = paper.tex\r\n\\subsection{Solitary wave on a simple beach}\r\nThis is an analytic testcase and a laboratory experiment conducted at the California Institute of Technology, Pasadena, California. A linear solitary (better called single) wave propagates over a  constant bathymetry, until it reaches a linear sloping beach, s.t. the wave inundates the dry area and the offtake takes place after the wave has reached the maximum runup point. \r\nThe experimental data include the surface elevation measured at different points in time for a non-breaking and a breaking wave. The analytic testcase considers only a non-breaking wave, but yields comparison data in terms of time series as well as data at two locations.\r\nThe experimental data serve for validation of the models concerning their ability to model inundation. Additionally to the experimental data, a linear analytic solution is provided, s.t. verification of our models is also tested.\r\n\r\nThe initial condition is prescribed as a linear analytic solitary wave solution (similar to the one in section \\ref{sec:B_compositebeach})\r\n\\begin{align}\r\n\\xi(\\bx,t)&=a \\ \\text{cosh}^{-2}(K(x-ct-x_0)), \\\\\r\nu(\\bx,t)&=-c\\frac{\\xi(\\bx,t)}{d},\r\n\\end{align}\r\nwith the initial actual amplitude $a$, propagation velocity $c=\\csw$ with scale factor $K=\\sqrt{\\left(\\frac{3a}{4d^3}\\right)}$ and displacement $x_0=X_0+L=d \\, \\text{cot}(\\beta)+\\text{arccosh}(\\sqrt{20})\\frac{1}{Kd}$, with $\\beta=\\text{arccot}(19.85)$. The entire domain length is $L=48 \\, \\text{m}$. The simulation time is $40$ seconds. \r\n\r\nIn the analytic testcase, the ratio between amplitude and depth is $\\frac{a}{d}=0.019$, whereas the laboratory experiment uses  $\\frac{a}{d}=0.0185$. \r\nFor breaking waves in the laboratory experiment, the ratio between amplitude and depth is $\\frac{a}{d}=0.3$. We choose $d=1$m and $a$ accordingly.\r\nWe impose reflecting boundary conditions at the boundary in x-direction and periodic boundary conditions in y-direction. For the setup see figure \\ref{fig:simplebeach_setup}. \r\n\r\n\\begin{figure}[htbp]\r\n\\includegraphics[width=\\textwidth]{simplebeach_setup}\r\n\\caption{Setup of the testcase solitary wave on a simple beach. (Letters do not fit to text decription.)}\r\n\\label{fig:simplebeach_setup}\r\n\\end{figure}\r\n\r\n\\subsubsection{Results of \\nh\\ model}\r\nFigures \\eqref{fig:nh_simplebeach_ana_nh_Nhy} display the comparison of the nonlinear SWE with the linear analytic solution. The first five figures present simulation results at required time steps, whereas the sixth figure shows the comparison at two positions over time.\r\nThe models results compared to the analytic solution are prone to oszillations, especially close to the shoreline. 
\r\n\r\n(TODO: needs improvement, and so far tested only with Lhy.\r\nAdd maximum runup and a convergence test.)\r\n\\begin{figure}[htbp]\r\n\\begin{minipage}{\\textwidth}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_t=25tau_nh_Nhy}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_t=35tau_nh_Nhy}\r\n\\end{minipage} \\\\\r\n\\begin{minipage}{\\textwidth}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_t=45tau_nh_Nhy}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_t=55tau_nh_Nhy}\r\n\\end{minipage} \\\\\r\n\\begin{minipage}{\\textwidth}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_t=65tau_nh_Nhy}\r\n\\includegraphics[width=0.48\\textwidth]{simplebeach_ana_x_nh_Nhy}\r\n\\end{minipage}\r\n\\caption{Comparison of the linear analytical (red) sea surface height of the\r\nsolitary wave with the simulation results of the \\nh\\ model in its version of the nonlinear shallow water equations (blue). (The $x$-axis values should be multiplied by $-1$, and the colors changed.)}\r\n\\label{fig:nh_simplebeach_ana_nh_Nhy}\r\n\\end{figure}\r\n", "meta": {"hexsha": "a468474de2bd944a6e229cf31a0319522a1f606a", "size": 3697, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/papers/theoretical_1d/B_simplebeach.tex", "max_stars_repo_name": "mandli/coastal", "max_stars_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/papers/theoretical_1d/B_simplebeach.tex", "max_issues_repo_name": "mandli/coastal", "max_issues_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/papers/theoretical_1d/B_simplebeach.tex", "max_forks_repo_name": "mandli/coastal", "max_forks_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.6595744681, "max_line_length": 377, "alphanum_fraction": 0.773870706, "num_tokens": 981, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631541, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5767489161932168}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\t\t\\lstset{\n\t\t\tlanguage=bash,\n\t\t\tbasicstyle=\\ttfamily\n\t\t}\n\n\\begin{document}\nThis is the last part of this project which covers an open-ended optimization of MNIST dataset classification on an FPGA. \nFor better comparison, the \\texttt{Batch size} ($\\approx 8k$) of the processed subset is the same as for previous design optimiztations in part 1 and part 2.\n\n\\paragraph{Learning}\nFirst, the learning algorithm will be further optimized by introducing the \\textit{cross-entropy} as new error function and the \\textit{SELU}(Self-normalizing Linear Unit} and \\textit{softmax} functions as chained transfer function. The goal is to increase classifiction accuracy to at least $85\\si{\\percent}$ without increasing the overall hardware usage.\nConsequently, the network cannot be extended by another layer of neurons. The output has to be one hot encoded, such that the number of neurons can not be changed either.\n\nInitially the given learning algorithm uses a quadratic error function to calculate the cost after each prediction. But for this particular classification problem, we have two more contraints on the prediction outcome which are ignored, by the current error function:\n\\begin{itemize}\n\t\\item the prediction outcome should follow the one hot encoding scheme.\n\t\\item the prediction outcome should be a valid probability distribution.\n\\end{itemize}\nTherefore the \\textit{cross-entropy} will be used as new error function in combination with the \\textit{softmax} function as output transfer function.\nThe \\textit{SELU} function and helps to keep the weigths show a self-normalizing behavior. The two distribution parameters $\\lambda$ and $\\alpha$ are determined in order to gurarantee this behavior for normalized input values. Therefore the inputs now must be scaled to $[0,1]$.\nTo improve accuracy and to better fit the weights, the training algorithm is repeated \\texttt{ITERATION} times with random permutations of the training set.\nThe file \\texttt{mnist.py} had to be slightly changed to integrate the new learning algorithm. Furthermore extra functionality to plot weight representation and error rate over the training period was added for debugging purposes.\nThe optimized learning algorithm can be found in the file \\texttt{CrossSLP.py}.\n\n\\paragraph{Prediction acceleration}\nSince softmax and \\textit{SELU} only scale the denditric potential of the networks neurons, the following equation still holds:\n\\begin{align*}\n\t\\text{argmax}(x) = \\text{argmax}(\\text{softmax}(\\text{SELU}(x)))\n\\end{align*}\nAs we see from the equation above, the transferfunctions are only required for learning, not for prediction.\nIn previous implementations in part 1 and 2, we just computed the $\\text{argmax}(x)$ of the denditric potentials, which gave us the correct prediction output. \nThis means even after changing the learning algorithm and its transfer functions, the hardware does not necessarily have to change, because the network structure remains the same. \n\nNevertheless the harware is also optimized in part 3. \nIn the previous implementations, the output of the accellerator transmitted back to the cpu was always the whole output vector of the network. \nWe still had to compute the $\\text{argmax}(x)$  on the cpu. First this produces unnecessary transmission overhead, which can be avoided, second the computation of the \\textit{argmax} function can also be offloaded. 
So the hardware design was changed accordingly, and now the output is only one byte long, indicating the predicted number between 0 and 9. Because the output buffer size is reduced by $90\\si{\\percent}$, more memory can be used for the input buffer, such that the \\texttt{TILING} size can be increased to $128$ samples. This also gives a small latency speedup.\n\n\\begin{lstlisting}\nFPGA accuracy: 8.15% validation error\nCPU accuracy:  8.06% validation error\nFPGA time:  0.004762924000033308\nCPU time:  0.2810866919999171\nFPGA has a 59.02x speedup\n(* real speedup is slightly higher, because of argmax offloading)\n\\end{lstlisting}\n\nThe simulation predicts a lower latency than what is achieved on the FPGA.\n\n\\end{document}\n", "meta": {"hexsha": "39756ffb454ac7410f7c20168d34b72e105617de", "size": 4140, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/subfiles/part3.tex", "max_stars_repo_name": "swappad/cs5222-lab-fpga-mlp", "max_stars_repo_head_hexsha": "46fb9cb798460474c16f2f60a89a0807307fb971", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-08T08:38:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T08:38:36.000Z", "max_issues_repo_path": "report/subfiles/part3.tex", "max_issues_repo_name": "swappad/cs5222-lab-fpga-mlp", "max_issues_repo_head_hexsha": "46fb9cb798460474c16f2f60a89a0807307fb971", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/subfiles/part3.tex", "max_forks_repo_name": "swappad/cs5222-lab-fpga-mlp", "max_forks_repo_head_hexsha": "46fb9cb798460474c16f2f60a89a0807307fb971", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.6315789474, "max_line_length": 569, "alphanum_fraction": 0.7949275362, "num_tokens": 954, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.795658104908603, "lm_q1q2_score": 0.576748910247428}}
{"text": "\\section{Enhancing Forecasting Models with External Data}\n\\label{enhanced_feats}\n\nIn this appendix, we show how the feature matrix in Sub-section\n    \\ref{ml_models} can be extended with features other than historical order\n    data.\nThen, we provide an overview of what external data we tried out as predictors\n    in our empirical study.\n\n\\subsection{Enhanced Feature Matrices}\n\nFeature matrices can naturally be extended by appending new feature columns\n    $x_{t,f}$ or $x_f$ on the right where the former represent predictors\n    changing throughout a day and the latter being static either within a\n    pixel or across a city.\n$f$ refers to an external predictor variable, such as one of the examples\n    listed below.\nIn the SVR case, the columns should be standardized before fitting as external\n    predictors are most likely on a different scale than the historic order\n    data.\nThus, for a matrix with seasonally-adjusted order data $a_t$ in it, an\n    enhanced matrix looks as follows:\n\n$$\n\\vec{y}\n=\n\\begin{pmatrix}\n    a_T \\\\\n    a_{T-1} \\\\\n    \\dots \\\\\n    a_{H+1}\n\\end{pmatrix}\n~~~~~\n\\mat{X}\n=\n\\begin{bmatrix}\n    a_{T-1}         & a_{T-2} & \\dots & a_{T-H}     & ~~~\n        & x_{T,A}   & \\dots   & x_{B} & \\dots \\\\\n    a_{T-2}         & a_{T-3} & \\dots & a_{T-(H+1)} & ~~~\n        & x_{T-1,A} & \\dots   & x_{B} & \\dots \\\\\n    \\dots           & \\dots   & \\dots & \\dots       & ~~~\n        & \\dots     & \\dots   & \\dots & \\dots \\\\\n    a_H             & a_{H-1} & \\dots & a_1         & ~~~\n        & x_{H+1,A} & \\dots   & x_{B} & \\dots\n\\end{bmatrix}\n$$\n\\\n\nSimilarly, we can also enhance the tabular matrices from\n    \\ref{tabular_ml_models}.\nThe same comments as for their pure equivalents in Sub-section \\ref{ml_models}\n    apply, in particular, that ML models trained with an enhanced matrix can\n    process real-time data without being retrained.\n    \n\\subsection{External Data in the Empirical Study}\n\\label{external_data}\n\nIn the empirical study, we tested four groups of external features that we\n    briefly describe here.\n\n\\vskip 0.1in\n\n\\textbf{Calendar Features}:\n\\begin{itemize}\n    \\item Time of day (as synthesized integers: e.g., 1,050 for 10:30 am,\n                       or 1,600 for 4 pm)\n    \\item Day of week (as one-hot encoded booleans)\n    \\item Work day or not (as booleans)\n\\end{itemize}\n\n\\vskip 0.1in\n\n\\textbf{Features derived from the historical Order Data}:\n\\begin{itemize}\n    \\item Number of pre-orders for a time step (as integers)\n    \\item 7-day SMA of the percentages of discounted orders (as percentages):\n          The platform is known for running marketing campaigns aimed at\n          first-time customers at irregular intervals. 
Consequently, the\n          order data show a wave-like pattern of coupons redeemed when looking\n          at the relative share of discounted orders per day.\n\\end{itemize}\n\n\\vskip 0.1in\n\n\\textbf{Neighborhood Features}:\n\\begin{itemize}\n    \\item Ambient population (as integers) as obtained from the ORNL LandScan\n          database\n    \\item Number of active platform restaurants (as integers)\n    \\item Number of overall restaurants, food outlets, retailers, and other\n          businesses (as integers) as obtained from the Google Maps and Yelp\n          web services\n\\end{itemize}\n\n\\vskip 0.1in\n\n\\textbf{Real-time Weather} (raw data obtained from IBM's\n                            Wunderground database):\n\\begin{itemize}\n    \\item Absolute temperature, wind speed, and humidity\n          (as decimals and percentages)\n    \\item Relative temperature with respect to 3-day and 7-day historical\n          means (as decimals)\n    \\item Day vs. night defined by sunset (as booleans)\n    \\item Summarized description (as indicators $-1$, $0$, and $+1$)\n    \\item Lags of the absolute temperature and the summaries covering the\n          previous three hours\n\\end{itemize}\n\n\\vskip 0.1in\n\nUnfortunately, we must report that none of the mentioned external data\n    improved the accuracy of the forecasts.\nSome led to models overfitting the data, which could not be regulated.\nManual tests revealed that real-time weather data are the most promising\n    external source.\nNevertheless, the data provided by IBM's Wunderground database originate from\n    weather stations close to airports, which implies that we only have the\n    same aggregate weather data for the entire city.\nIf weather data is available on a more granular basis in the future, we see\n    some potential for exploitation.\n", "meta": {"hexsha": "c6ef0f449454475a462159326ecf516c08a01113", "size": 4467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/apx/enhanced_feats.tex", "max_stars_repo_name": "webartifex/urban-meal-delivery-paper-demand-forecasting", "max_stars_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-25T19:40:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T19:40:56.000Z", "max_issues_repo_path": "tex/apx/enhanced_feats.tex", "max_issues_repo_name": "webartifex/urban-meal-delivery-demand-forecasting", "max_issues_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/apx/enhanced_feats.tex", "max_forks_repo_name": "webartifex/urban-meal-delivery-demand-forecasting", "max_forks_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6147540984, "max_line_length": 78, "alphanum_fraction": 0.6865905529, "num_tokens": 1140, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540518, "lm_q2_score": 0.72487026428967, "lm_q1q2_score": 0.5767489043016388}}
{"text": "\\documentclass[letterpaper,12pt,preprint]{hack_aastex}\n\n\\input{dfm_stylez}\n\\pagestyle{myheadings}\n\\markright{\\textsf{\\footnotesize %\n                   Astro Hack Week /\n                   2018 /\n                   MCMC exercises by Dan Foreman-Mackey}}\n\n% Single-spacing.\n\\def\\baselinestretch{1.1}\n\n\\include{figures/numbers-mh}\n\\include{figures/numbers-emcee}\n\n\\newcommand{\\question}{\\emph}\n\n\\begin{document}\n\nIn this exercise session you will implement a simple Metropolis MCMC sampler\nin your programming language of choice.\nYou will test this sampler on a simple toy problem~--~drawing samples from a\ntwo-dimensional Gaussian~--~and then on two more realistic problems.\nThe exercises are designed to take longer than the allotted 45~minutes so\ndon't worry if you don't finish.\nHopefully people with all levels of experience will learn something!\nIf you have previous experience with MCMC and if you have already implemented\na Metropolis sampler, why don't you take this time to implement a more\nsophisticated method (ensemble, Hamiltonian, nested, or otherwise), try out a\nnew programming language, or help out other students with less experience.\n\n\\paragraph{Toy problem: a two-dimensional Gaussian}\n\nIn the following section you will implement your own Metropolis MCMC sampler\nand test it by drawing samples from a two-dimensional Gaussian but the first\nstep is to implement the probability distribution as a function that takes in\na two-dimensional vector $\\theta$ and returns:\n\\begin{eqnarray}\n\\ln p(\\theta) &=& -\\frac{1}{2}\\,\\theta^\\mathrm{T}\\,\\Sigma^{-1}\\,\\theta\n    + \\mathrm{constant}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\Sigma &=& \\left(\\begin{array}{cc}\n1.0 & -0.08 \\\\\n-0.08 & 0.01 \\\\\n\\end{array}\\right)\n\\end{eqnarray}\nNote: you definitely want to compute $\\ln p$ (not just $p$).\n\\question{Why?}\n\n\\question{To test your probability function, ensure that the difference\nbetween the function evaluated at two (random) points in parameter space is\nthe same as what I get.}\nFor example, make sure that you find the following:\n\\begin{eqnarray}\n\\ln p([0.5, -0.1]^\\mathrm{T}) - \\ln p([-0.01, 0.3]^\\mathrm{T}) &\\equiv&\n    11.80847\\ldots \\quad.\n\\end{eqnarray}\nIf you don't get this result on the first try, debug your function until you\ndo.\n\n\\paragraph{A homegrown Metropolis sampler}\n\nIn this section, you will implement a Metropolis MCMC sampler to draw samples\nfrom the distribution defined in the previous section.\nAs a reminder, the steps in a Metropolis MCMC are described here in words:\n\\begin{enumerate}\n\n\\item Initialize the parameters as $\\theta(t=0)$.\n\n\\item Propose an update $q = \\theta(t) + \\sigma\\,\\delta$ where $\\sigma$ is a\ntuning parameter and $\\delta$ is drawn from a zero mean, unit variance\nGaussian. Alternatively, you can try updating $\\theta$ only one dimension.\n\n\\item Compute the acceptance probability $r =\n\\mathrm{min}\\left(1,\\,\\frac{p(q)}{p(\\theta(t))}\\right)$.\n\n\\item Draw $u$ from a uniform distribution between 0 and 1. If $u < r$, accept\nthe proposal and set $\\theta(t+1) = q$ to the chain. Otherwise, reject the\nproposal and set $\\theta(t+1) = \\theta(t)$. 
Append $\\theta(t+1)$ to the chain.\n\\emph{Note: in every iteration of this procedure a new entry is added to the\nchain~--~even if the sample isn't accepted!}\n\n\\item Go to step 2 and repeat.\n\n\\end{enumerate}\nHere are some suggestions to keep in mind for your implementation:\n\\begin{itemize}\n\n\\item In this function, you should track the acceptance fraction, i.e., keep\ntrack of how many proposals you accept.\n\n\\item As mentioned above, don't compute or use $p(\\theta)$ directly~--~you\nshould only ever use the logarithm of this function.\n\n\\item Your implementation must support parameter \\emph{vectors}.\nFor this problem, we're working in 2-D, but don't special-case to two\ndimensions because you might need to sample more parameters later.\n\n\\item Similarly, you should implement the sampler in such a way that it is\neasy to swap out a different probability function. Most programming languages\nallow you to pass function pointers, so taking the log-probability function as\nan argument is probably the best interface.\n\n\\item Finally, to start the sampler, you must choose an initial location in\nparameter space. Don't hard-code this location~--~take it as an\nargument~--~because we'll test different initialization methods later.\n\n\\end{itemize}\n\\question{Implement the Metropolis MCMC method as a function with a calling\nsequence something like}\n\\begin{center}\n\\texttt{samples, acc\\_frac = metropolis(log\\_prob\\_func, sigma, initial\\_theta,\nnum\\_steps)}\n\\end{center}\nThis function should return the chain of samples and the overall\nacceptance fraction.\n\n\n\\paragraph{Testing, tuning, and convergence (aka time to break your sampler)}\n\nMCMC methods are notoriously hard to test rigorously, so instead you can just\ntest your implementation qualitatively for today.\nTo start, initialize at a reasonable location (\\question{What is a good way to\nchoose this in general?}) and \\question{run some chains and look at a few\ndiagnostics}:\n\\begin{itemize}\n\n\\item What is the acceptance fraction? Is this unreasonably high or low?\nNote: this is a very easy problem, so the acceptance fraction will\nprobably be higher than you would get in any problem in The Real\nWorld\\textsuperscript{\\textsf{TM}}.\n\n\\item Compute the sample mean and covariance matrix of the chain. Is this\nclose to what you would expect? What do you expect and why? 
What happens as\nyou run the chain for longer?\n\n\\item Plot the trace (value as a function of step number) for each parameter.\nSee Figure~\\ref{fig:traces} for an example of what to expect.\n\n\\item Plot the scatterplot matrix or corner plot to visualize the covariances\nin the problem.\nIn Python you can use\n\\textsf{corner.py}\\footnote{\\url{http://corner.readthedocs.io}} and other\nplotting libraries probably have similar functionality but, otherwise, you can\njust make a scatterplot of the chain values.\nFigure~\\ref{fig:corner} shows an example using \\textsf{corner.py}.\n\n\\item Try changing $\\sigma$ and your initialization and remake these plots to\nsee what happens.\nWhat happens as you make $\\sigma$ extremely large $10^4$ or small $10^{-4}$?\nWhat happens if you move the initial guess far away from the peak of the\ndistribution?\n\n\\end{itemize}\n\nAs discussed in lecture, the best quantification of sampler performance is the\nintegrated autocorrelation time $\\tau_\\mathrm{int}$.\\footnote{Note: there isn't\njust \\emph{one} autocorrelation time.\n$\\tau_\\mathrm{int}$ will, in general, be different for every different\n$f(\\theta)$ in Equation~\\ref{eq:integral}.}\nThis provides an estimate of the number of steps of MCMC required to obtain an\nindependent sample.\nSince MCMC is used to compute integrals, an estimate of $\\tau_\\mathrm{int}$ is\nrequired to compute the sampling uncertainty on the approximation\n\\begin{eqnarray}\\label{eq:integral}\n\\int f(\\theta)\\,p(\\theta)\\,\\mathrm{d}\\theta &\\approx&\n    \\frac{1}{N} \\sum_{n=1}^N f(\\theta^{(n)}) \\quad\n    \\mathrm{where}\\,\\theta^{(n)} \\sim p(\\theta)\n\\end{eqnarray}\nand the sampler that produces a chain with a smallest value of\n$\\tau_\\mathrm{int}$ at fixed computational cost is the best.\n\n\\question{Write or find a function that computes the autocorrelation\nfunction\\footnote{If you write your own function, don't directly sum the\nvariance at each lag~--~this will be way too slow. Instead, you could try\nusing an FFT.} of a one-dimensional chain and compute this for a few choices\nof $f(\\theta)$.} Figure~\\ref{fig:autocorr} gives a few suggestions for\nfunctions to consider.\n\\question{Then, find or implement a function that estimates the integrated\nautocorrelation time}:\n\\begin{eqnarray}\n\\tau_\\mathrm{int,est} &\\approx& C(0) + 2\\,\\sum_{\\tau=1}^W C(\\tau)\n\\end{eqnarray}\nwhere $C(\\tau)$ is the autocorrelation function and $W$ is a tuning parameter.\nNote: autocorrelation time estimates are always very noisy and you need to run\na long chain to get a reliable estimate.\nFurthermore, the choice of $W$ can be subtle. Try several values of $W$ or\nread A.~Sokal's notes\\footnote{See pages 15--16 of this note:\n\\url{http://www.stat.unc.edu/faculty/cji/Sokal.pdf}} on how to choose $W$ more\nrobustly.\nFor more information about autocorrelation analysis, including Python code,\ntake a look at this blog post: \\url{https://dfm.io/posts/autocorr/}.\n\nFor $\\sigma = 0.1$, I find integrated autocorrelation times of \\taua~steps and\n\\taub~steps for $f(\\theta)=\\theta_1$ and $f(\\theta)=\\theta_2$ respectively.\n\\question{Do you find the same? If not, why not?\nTry changing $\\sigma$ to see how the autocorrelation function and times\nchange.\nCan you find a better choice of $\\sigma$? What if you use different values of\n$\\sigma$ for each parameter? 
How many tuning parameters are there in this\nmethod?}\n\n\\textbf{Bonus:} Try running a different sampling algorithm on the same problem\nand compare performance using an estimate of the autocorrelation time.\nFor example, without any tuning,\n\\textsf{emcee}\\footnote{\\url{http://emcee.readthedocs.io}} gets\nautocorrelation times of about \\etaua~steps for both $f(\\theta)=\\theta_1$ and\n$f(\\theta)=\\theta_2$.\nCan you match that performance using Metropolis? What changes did you have to\nmake?\n\n\\ssfigure{figures/traces.pdf}{0.5}{%\nThe parameter traces for Metropolis MCMC sampling of a two-dimensional\nGaussian.\nThe plots show the parameter values as a function of step number in the chain.\nTo show the autocorrelation on this plot, I zoomed in to only show the first\nfew steps, but the full chain used for the following figures had $4\\times 10^5$\nsteps.\n\\label{fig:traces}}\n\n\\ssfigure{figures/corner.pdf}{0.5}{%\nA scatterplot matrix or corner plot of the chain from Figure~\\ref{fig:traces}.\nThe central panel shows the scatterplot of the parameter values from the chain\nand the histograms show the corresponding projections.\nThe scatterplot shows the Monte Carlo estimate of the joint probability\ndensity $p(\\theta_1,\\,\\theta_2\\,|\\,\\mathrm{data})$ and the histograms are\nestimates of the marginalized probability densities\n$p(\\theta_1\\,|\\,\\mathrm{data})$ and $p(\\theta_2\\,|\\,\\mathrm{data})$.\n\\label{fig:corner}}\n\n\\ssfigure{figures/autocorr.pdf}{0.5}{%\nThe autocorrelation function for chains of $\\{f_k(\\theta^{(n)})\\}_{n=1}^N$ for\ndifferent choices of $f_k$.\nCompare this to Figure~\\ref{fig:traces} to see how the autocorrelation relates\nto the chain behavior.\nWhy do the autocorrelation functions have different scales?\nIs the sampler well tuned for $f(\\theta) = \\theta_1$?\n\\label{fig:autocorr}}\n\n% \\ssfigure{figures/ensemble.pdf}{0.5}{%\n% The autocorrelation function.\n% \\label{fig:ensemble}}\n\n\\clearpage\n\\paragraph{A more realistic problem: fitting a line to data}\n\nIn this section, you will fit a line to the small data set plotted in\nFigure~\\ref{fig:data} and available online at\n\\url{https://github.com/dfm/ahw2018/blob/master/data.txt}\\footnote{In this\nfile, the columns are $x$, $y$, $\\sigma_y$, and $\\sigma_x$.}.\nIf you just have data points with Gaussian uncertainties and Gaussian or broad\npriors, you probably shouldn't use MCMC (because the posterior can be sampled\nanalytically), so let's make it a little more interesting: there is an\nintrinsic scatter in the relationship and there are uncertainties on the $x$\nvalues.\n\nWe'll start by ignoring the $x$ uncertainties but including the intrinsic\nscatter parameterized by the logarithm of the scatter $\\ln s$.\nIn this case, the likelihood is:\n\\begin{eqnarray}\n\\ln p (y\\,|\\,m,\\,b,\\,\\ln s) &=& -\\frac{1}{2} \\sum_{n=1}^N\\left[\n\\frac{(y_n-m\\,x_n-b)^2}{\\sigma_n^2 + s^2} + \\ln(2\\,\\pi\\,(\\sigma_n^2 +\ns^2))\\right]\n\\end{eqnarray}\n(this derivation is left as an exercise)\nand a reasonable choice of ``uninformative'' prior is\\footnote{See\nJ.~VanderPlas's blog for a discussion of this choice:\n\\url{http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/}.}:\n\\begin{eqnarray}\n\\ln p(m,\\,b,\\,\\ln s) &\\propto& -\\frac{3}{2}\\ln(1+m^2) + B(m,\\,b,\\,\\ln s)\n\\end{eqnarray}\nwhere $B(m,\\,b,\\,\\ln s)$ is a finite constant when $m$, $b$, and $\\ln s$ take\nvalues in a ``reasonable range'' (How should you choose this in 
general?) and\n$-\\infty$ otherwise.\n\nImplement this model in code and try sampling from it using the sampler that\nyou implemented previously or another package.\nMake all the diagnostic plots from above, but include one new plot: the\nposterior predictive distribution (see Figure~\\ref{fig:line1}).\nTo achieve this, plot the line predicted for a few samples randomly selected\nfrom the chain on top of the observed data.\n\nFinally, implement the full model where uncertainties in $x$ are also\nincluded.\nTo do this, you will need to choose a prior on $x_\\mathrm{true}$~--~maybe\nsomething like $\\mathcal{U}(0,\\,10)$~--~and sample the 13-dimensional problem\ninstead of the 3-dimensional case from before.\nBefore starting on this, you should draw the graphical model and consult with\nany other students who are working on this part.\n\n\\newpage\n\n\\ssfigure{figures/data.pdf}{0.43}{%\nThe black points with error bars show the simulated data used for this\nexperiment.\nThe blue line is the true underlying model that generated the data and the\norange line is what you get if you run linear least squares and ignore the\nintrinsic scatter in the relation.\n\\label{fig:data}}\n\n\\ssfigure{figures/line1.pdf}{0.43}{%\nLike Figure~\\ref{fig:data}, but the gray lines show the prediction for 100\nrandom samples from the chain.\nThe uncertainty is larger than for the least squares result but the true model\nis within the credible region.\n\\label{fig:line1}}\n\n\\ssfigure{figures/line2.pdf}{0.43}{%\nLike Figure~\\ref{fig:line1}, but taking the $x$ uncertainties into account.\n\\label{fig:line2}}\n\n\\newpage\n\n% \\begin{multicols}{2}\n% {\\centering\\bf REFERENCES\\par}\n% \\vspace{0.2em}\n% \\begin{thebibliography}{}%\n% \\raggedright\\raggedbottom\\scriptsize\\setlength{\\parskip}{-0.5em}%\n\n\n% \\end{thebibliography}\n% \\end{multicols}\n\n\\end{document}\n", "meta": {"hexsha": "f8456ae5cac92475b63375214ba645829eb10031", "size": 13888, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mcmc.tex", "max_stars_repo_name": "dfm/imprs", "max_stars_repo_head_hexsha": "e1dfaa7881d2f39efcb91b7cda752f8cdc3dfdac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-09-11T16:00:43.000Z", "max_stars_repo_stars_event_max_datetime": "2016-09-12T16:10:12.000Z", "max_issues_repo_path": "mcmc.tex", "max_issues_repo_name": "dfm/imprs", "max_issues_repo_head_hexsha": "e1dfaa7881d2f39efcb91b7cda752f8cdc3dfdac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mcmc.tex", "max_forks_repo_name": "dfm/imprs", "max_forks_repo_head_hexsha": "e1dfaa7881d2f39efcb91b7cda752f8cdc3dfdac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-08-07T09:27:31.000Z", "max_forks_repo_forks_event_max_datetime": "2018-08-07T09:27:31.000Z", "avg_line_length": 42.4709480122, "max_line_length": 99, "alphanum_fraction": 0.7563364055, "num_tokens": 3756, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.5767489032227843}}
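For reference, the following is one possible Python sketch of the sampler with the calling sequence suggested above; it is a minimal implementation of the listed Metropolis steps, not the only valid one. Evaluating \\texttt{log\\_prob} at the two test points of the toy problem reproduces the quoted difference of $11.80847\\ldots$:

\\begin{verbatim}
import numpy as np

def log_prob(theta):
    # Log-density (up to a constant) of the zero-mean 2-D Gaussian
    # with the covariance matrix Sigma given in the text.
    cov = np.array([[1.0, -0.08], [-0.08, 0.01]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def metropolis(log_prob_func, sigma, initial_theta, num_steps):
    theta = np.atleast_1d(np.asarray(initial_theta, dtype=float))
    chain = np.empty((num_steps, theta.size))
    log_p = log_prob_func(theta)
    accepted = 0
    for i in range(num_steps):
        proposal = theta + sigma * np.random.randn(theta.size)
        log_p_new = log_prob_func(proposal)
        # Accept with probability min(1, p(q)/p(theta)), in log space.
        if np.log(np.random.rand()) < log_p_new - log_p:
            theta, log_p = proposal, log_p_new
            accepted += 1
        chain[i] = theta  # a new entry is appended even on rejection
    return chain, accepted / num_steps

samples, acc_frac = metropolis(log_prob, 0.1, np.zeros(2), 400000)
\\end{verbatim}

An FFT-based estimate of the autocorrelation function and of $\\tau_\\mathrm{int,est}$ for a fixed window $W$, as suggested in the exercise, might look as follows (again just a sketch; robust choices of $W$ are discussed in the references above):

\\begin{verbatim}
def autocorr_function(x):
    """Normalized autocorrelation of a one-dimensional chain via FFT."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, n=2 * n)  # zero-pad to avoid circular correlation
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    return acf / acf[0]

def integrated_autocorr_time(x, window):
    """tau_int estimate: C(0) + 2 * sum_{tau=1}^{W} C(tau)."""
    c = autocorr_function(x)
    return 1.0 + 2.0 * np.sum(c[1:window + 1])
\\end{verbatim}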
{"text": "% Copyright (C) He Guanyuming 2020\n% The file is licensed under the MIT license.\n\n\\subsection{General Principles}\nThis section describes the overall principles of the document. It illuminates how notations are explained, \nin what structure this document is written and so forth. This section should be read and understood \ncomprehensively prior to reading the main content of the document.\n\n\\subsection{Definitions}\n\\begin{description}\n\\item[The document] The phrase \\emph{the document} means this document (what you are reading) itself.\n\\item[The book] The phrase \\emph{the book} represents Tao's \\emph{Analysis} (both volume I and II).\n\\end{description}\n\n\\subsection{Indices}\nThe book has two volumes: \n\\emph{Analysis I} and \\emph{Analysis II}. We may notice that the indices of the two volumes both start \nfrom 1. It may lead to some confusions. So in the document, the indices are organized in such a way that:\nIf the content comes from \\emph{Analysis I}, the corresponding index is the same as the book's. \nOtherwise, the corresponding index is prefixed with ``2.''.\n\nFor example, Exercise 3.1.3 in \\emph{Analysis I} is indexed as Exercise 3.1.3 in the document, but \nExercise 3.1.3 in \\emph{Analysis II} is indexed as Exercise 2.3.1.3.\n\n\\subsection{Notations}\nIn the answers to some exercises, you may notice that the content are divided by numbers enclosed with \nparentheses (e.g. \\textbf{(1)}, \\textbf{(2)}). Tao often puts multiple questions into a single exercise, \nso these numbers indicates the number of the sub-questions.\n\nFor example, Exercise 3.5.4 is \n\\begin{quotation}\nExercise 3.5.4. Let $A,B,C$ be sets. Show that $A\\times(B\\cup C) = (A\\times B)\\cup(A\\times C)$,\nthat $A\\times(B\\cap C) = (A\\times B)\\cap(A\\times C)$, and that \n$ A\\times(B\\setminus C) = (A\\times B)\\setminus(A\\times C)$.\n\\end{quotation}\nThen (1) indicates the question ``Show that $A\\times(B\\cup C) = (A\\times B)\\cup(A\\times C)$.'', \n(2) indicates the question ``Show that $A\\times(B\\cap C) = (A\\times B)\\cap(A\\times C)$.'', and \n(3) indicates the question ``Show that $A\\times(B\\setminus C) = (A\\times B)\\setminus(A\\times C)$''.\n\nIn logical contents, \n\\[\n\\Longrightarrow, \\Rightarrow, \\longrightarrow, \\rightarrow, \n\\]\nhave the same meaning ``implies''. And \n\\[\n\\Longleftarrow, \\Leftarrow, \\longleftarrow, \\leftarrow,\n\\]\nalso have the same meaning ($P \\leftarrow Q$ means that $Q$ implies $P$).\nFinally, these following symbols all indicate logical equality.\n\\[\n\\leftrightarrow, \\longleftrightarrow, \\Leftrightarrow, \\Longleftrightarrow, \\equiv\n\\].\n\nFor nested quantifiers, their order is ``from left to right''. For example, the following statement \n\\[\n\\forall x \\exists y (P(x,y))\n\\]\nmeans that for all object $x$, their exists a object $y$ such that $P(x,y)$ is true.\nThat is,\n\\[\n\\forall x(\\exists y(P(x,y)))\n\\]\n\nTao uses $++$ to denote the successor of a natural number. However, in the document, it is denoted by \n$S(n)$ most of the times.\n\n\\[\n\\bigvee_{i=1}^{n} P(i), \\bigwedge_{i=1}^{n} P(i)\n\\]\nmean that for $1\\leq i \\leq n$, at least one $P(i)$ is true; and for all $1\\leq i \\leq n$, $P(i)$ is \ntrue, respectively.\n\nSome sets that have special meanings (e.g. the set of all natural numbers, the set of all real numbers) \nare denoted in whiteboard font (e.g. 
$\\mathbb{N}, \\mathbb{R}$).\n\nWithout special interpretation, the notation\n\\[\n(\\forall x P(x))(Q(x))\n\\]\nis interpreted as \n\\[\n\\forall x(P(x) \\Longrightarrow Q(x))\n\\]\nFor example, \n\\[\n(\\forall x \\in X)(Q(x)) \\equiv \\forall x(x \\in X \\Longrightarrow Q(x))\n\\]\nWhile\n\\[\n(\\exists x P(x))(Q(x))\n\\]\nis interpreted as \n\\[\n\\exists x(P(x) \\wedge Q(x))\n\\].\nFor example,\n\\[\n(\\exists x \\in X)(Q(x)) \\equiv \\exists x(x \\in X \\wedge Q(x))\n\\]\n\n\\subsection{Abbreviations}\nWe often leave off some descriptions for a number's properties. For example, we may refer a \n\\emph{positive natural number} $n$ as a \\emph{positive number} when we haven't learned the rationals and \nthe reals.", "meta": {"hexsha": "26e2a0936ecd00898c0450b24c75ce681565599e", "size": 3915, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "General Principles.tex", "max_stars_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_stars_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "General Principles.tex", "max_issues_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_issues_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "General Principles.tex", "max_forks_repo_name": "Little-He-Guan/Notebook-for-Analysis-of-Tao", "max_forks_repo_head_hexsha": "e040260e4346ae65ce28af11dbd2bb5d9d5ac96b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6442307692, "max_line_length": 107, "alphanum_fraction": 0.7108556833, "num_tokens": 1162, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.7956580927949806, "lm_q1q2_score": 0.5767489014666233}}
{"text": "\\chapter{The Collatz Tree}\n\n\\section{The Connection between Groups and Graphs}\n\\label{sec:groups_graphs}\nLet $(a_k)$ be a numerical sequence with $a_k=g^{(k)}(m)$, then a reversion produces an infinite number of sequences of reversely-written Collatz members \\cite{Ref_Klisse_2010}.\n\nLet $S$ be a set containing two elements $q$ and $r$, which are bijective functions over $\\mathbb{Q}$:\n\\begin{equation}\n\\begin{array}{l}\nq(x)=2x \\\\ \nr(x)=\\frac{1}{3}(x-1)\n\\end{array}\n\\end{equation}\n\nLet a binary operation be the right-to-left composition of functions $q\\circ r$, where $q\\circ r(x)=q(r(x))$. Composing functions is an associative operation. All compositions of the bijections $q$ and $r$ and their inverses $q^{-1}$ and $r^{-1}$ are again bijective. The set, whose elements are all these compositions, is closed under that operation. It forms a free group $F$ of rank 2 with respect to the free generating set $S$, where the group's binary operation $\\circ$ is the function composition and the group's identity element is the identity function $id_{\\mathbb{Q}}=e$. We call $e$ an \\textit{empty string}. $F$ consists of all expressions (strings) that can be concatenated from the generators $q$ and $r$. The corresponding Cayley graph $Cay(F,S)=G$ is a regular tree whose vertices have four neighbors \\cite[p.~66]{Ref_Loeh}. A tree is called \\textit{regular} or \\textit{homogeneous} when every vertex has the same degree, in this case, $d(v)=4$ for every vertex $v$ in $G$. The Cayley graph's set of vertices is $V(G)=F$, and its set of edges is $E(G)=\\left\\{\\left\\{f,f\\circ s\\right\\}\\mid f\\in F,s\\in\\left(S\\cup S^{-1}\\right)\\setminus\\left\\{e\\right\\}\\right\\}$ \\cite[p.~57]{Ref_Loeh}. More precisely, the vertices are \\textit{labeled} by the elements (strings) of $F$.\n\nIn conformance with graph-theoretical precepts \\cite{Ref_Bondy_Murty},\n\\cite{Ref_Bonnington_Little}, \\cite{Ref_Bender_Williamson}\nwe specify a subgraph $H$ of $G$ as a triple $\\left(V(H),E(H),\\psi_{H}\\right)$ consisting of a set $V(H)$ of vertices, a set $E(H)$ of edges and an incidence function $\\psi_{H}$. The latter is, in our case, the restriction $\\psi_{G}\\vert_{E(H)}$ of the Cayley graph's incidence function to the set of edges that only join vertices, which are labeled by a string over alphabet $\\{r,q\\}$ without the inverses: $E(H)=\\left\\{\\left\\{f,f\\circ s\\right\\}\\mid f\\in F,s\\in S\\setminus\\left\\{e\n\\right\\}\\right\\}$.\n\nThis subgraph corresponds to the monoid $S^\\ast$, which is freely generated by $S$ follows related thoughts \\cite{Ref_Truemper_2014} that examine the Collatz problem in terms of a free semigroup on the set $S^{-1}$ of inverse generators. Note that this semigroup is not to be confused with an \\textit{inverse semigroup} \"in which every element has a unique inverse\" \\cite[p.~26]{Ref_Almeida}, \\cite[p.~22]{Ref_Loeh}.\n\nLet $Y^X=\\{f\\mid f\\text{ is a map }X\\rightarrow Y\\}$ be the set of functions, which in category theory is referred to as the \\textit{exponential object} for any sets $X$, $Y$. The evaluation function $ev:Y^X\\times X\\to Y$ sends the pair $(f,x)$ to $f(x)$. For a detailed description of this concept, see \\cite[p.~127]{Ref_Johnsonbaugh}, \\cite[p.~155]{Ref_MacLane_Birkhoff}, \\cite[p.~54]{Ref_Novak_etal} and \\cite[p.~188]{Ref_Pellissier}. 
We define the evaluation function $ev_{S^\\ast}:S^\\ast\\times\\{1\\}\\rightarrow\\mathbb{Q}$ that assesses an element of $S^\\ast$, id est a composition of $q$ and $r$, for the given input value $1$. Furthermore we define the corestriction ${ev^0_{S^\\ast}}$ of $ev_{S^\\ast}$ to $\\mathbb{N}$. Since a corestriction of a function restricts the function's codomain \\cite[p.~3]{Ref_Helemskii}, the function $ev^0_{S^\\ast}$ operates on a subset $T\\subset S^\\ast$ that contains only those compositions of $q$ and $r$ which return a natural number when inputting the value $1$.\n\nThe set $T$ does not form a monoid under function composition: for example $ev_{S^\\ast}(qrq^4,1)=10$ and $ev_{S^\\ast}(rq^6,1)=21$, but the composition $qrq^4rq^6$ does not lie in $T$, because the evaluation $ev_{S^\\ast}(qrq^4rq^6,1)$ yields a value outside the codomain $\\mathbb{N}$. However, each element of this set labels a vertex of a tree $H_{T}\\subset H$, which is a proper subtree of $H$.\n\nLet $U\\subset T$ be the subset of $T$ that does not contain a reduced word with two or more successive characters $r$. The corresponding tree $H_{U}\\subset H_{T}$ reflects Collatz sequences as demonstrated in figure~\\ref{fig:1}.\n\n\\begin{remark}\nWhen talking about trees having a root (\"rooted trees\"), another important concept should be explained: the \\textbf{depth of a vertex} in a tree is the length of the path from the root to this vertex \\cite[p.~226]{Ref_Sedgewick_2011}. In other words, it is the vertex's distance (the number of edges in the path) from the root. The root node has a depth of zero. The \\textbf{level of a vertex} is its depth plus one, $level(v)=depth(v)+1$. The level of a tree's root is always one. The \\textbf{height of a vertex} is the number of edges on the longest downward path between that vertex and a leaf. Any leaf node has a height of zero. The \\textbf{height of a tree} is the height of its root, or equivalently, the depth of its deepest vertex \\cite[p.~226]{Ref_Sedgewick_2011}. The \\textbf{size of a tree} is its number of nodes. One has to be careful, because the literature provides varying definitions. According to Rosen \\cite[p.~804]{Ref_Rosen} for example, the level of a vertex is the length of the path from the root to this vertex; and the height of a tree is the largest vertex level in this tree.\n\\end{remark}\n\n% trim=left bottom right top\n\\begin{figure}\n\t\\includegraphics[trim=2.3cm 5.8cm 5.9cm 4.8cm, \n\twidth=1.00\\textwidth,page=1]{figures/caytree.pdf}\n\t\\caption{Small section of $H_T$ with darkly highlighted subtree $H_U$}\n\t\\label{fig:1}\n\\end{figure}\n\n\\section{Defining the Tree}\nThe starting point for specifying our tree is $H_U$. Due to its significance, we first concretize $H_U$ by definition~\\ref{def:H_U} below, which establishes four essential characteristics.\n\n\\begin{definition}\nThe graph $H_U$ possesses the following key properties:\n\\begin{itemize}\n\t\\item \\mbox{\\boldmath$H_U$} \\textbf{is a directed graph (digraph):} Fundamentally, when we consider the more general case of an undirected graph as a triple $(V,E,\\psi)$, the incidence function maps an edge to an arbitrary vertex pair $\\psi : E\\rightarrow\\{X\\subseteq V:\\left|X\\right|=2\\}$. In a digraph, the set $V\\times V$ represents ordered vertex pairs. 
Accordingly the incidence function is more specifically defined, namely as a mapping of the edges to that set $\\psi : E\\rightarrow\\{(v,w)\\in V\\times V:v\\neq w\\}$, see \\cite[p.~15]{Ref_Korte_Vygen}.\n\t\\item \\mbox{\\boldmath$H_U$} \\textbf{is a rooted tree:} According to Rosen \\cite[p.~747]{Ref_Rosen}, a rooted tree is, \"a tree in which one vertex has been designated as the root and every edge is directed away from the root.\" Peculiarly, this definition considers the directionality as an inherent part of rooted trees. Unlike Mehlhorn and Sanders \\cite[p.~52]{Ref_Mehlhorn_Sanders}, for example, who distinguish between an undirected and directed rooted tree.\n\t\\par\\smallskip\n\t\\textit{Note: As long as we do not stipulate that vertices may collapse, it is absolutely guaranteed that the graph is a tree.}\n\t\\item \\mbox{\\boldmath$H_U$} \\textbf{is an out-tree:} There is exactly one\tpath from the root to every other node \\cite[p.~52]{Ref_Mehlhorn_Sanders}, which means that edge directions go from parents to children \\cite[p.~108]{Ref_Du_Ko_Hu}. This property is implied in Rosen's definition for a rooted tree as well by saying \"every edge is directed away from the root.\" An out-tree is sometimes designated as \\textit{out-arborescence} \\cite[p.~108]{Ref_Du_Ko_Hu}.\n\t\\item \\mbox{\\boldmath$H_U$} \\textbf{is a labeled tree:} For defining a labeled graph, Ehrig et al. \\cite[p.~23]{Ref_Ehrig_etal} use a label alphabet consisting of a vertex label set and an edge label set. Since we only label the vertices, in our case the specification of a vertex label set $L_V$ together with the vertex label function $l_V:V\\rightarrow L_V$ is sufficient. Originally, we said vertex labels are strings over the alphabet $S=\\{q,r\\}$, through which the free monoid $S^\\ast$ is generated. We illustrate labeling $H_U$ by defining $l_{V(H_U)}(v)=ev^0_{S^\\ast}(l_{V(G)}(\\iota(v)),1)$, where $\\iota:V(H_U)\\hookrightarrow V(G)$ is the inclusion map \\cite[p.~142]{Ref_Childs} from the set of vertices of $H_U$ to the set of vertices from the previously defined Cayley graph $G$.\n\\end{itemize}\n\\label{def:H_U}\n\\end{definition}\n\nWe define a tree $H_{C,3}$ by taking the tree $H_U$ as a basis and for every vertex $v\\in V(H_U)$ satisfying $2\\mid l_{V(H_U)}(v)$, we contract the incoming edge. We attach the label of the parent of $v$ to the new vertex, which results by replacing (merging) the two overlapping vertices that the contracted edge used to connect with. Visually, we obtain $H_{C,3}$ by contracting all edges in $H_U$ that have an even-labeled target vertex, which (due to contraction) becomes \"merged into its parent.\" Edge contraction is occasionally referred to as \\textit{collapsing an edge}. For more details and examples on edge contraction, one can see Voloshin \\cite[p.~27]{Ref_Voloshin} and Loehr \\cite{Ref_Loehr}.\n\nThe tree $H_{C,3}$ is well known as the so-called \\textit{Syracuse tree}, see for example \\cite{Ref_Kleinnijenhuis_2020a}, \\cite{Ref_Aberkane_2017}, and \\cite{Ref_Aberkane_2020}. It is a \\textit{minor of $H_U$}, since it can be obtained from $H_U$ \"by a sequence of any vertex deletions, edge deletions and edge contractions\" \\cite[p.~32]{Ref_Voloshin}. The sequence of contracting the edges between adjacent (in our case even-labeled) vertices is called \\textit{path contraction}.\n\nA small section of the tree $H_{C,3}$, the Syracuse tree, is displayed in figure~\\ref{fig:2}. 
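The children of a given odd vertex in $H_{C,3}$ can be read off directly from this construction: $v$ is a child of $w$ exactly if $v=(w\\cdot2^{a}-1)/3$ is an odd natural number for some $a\\ge1$. The following minimal Python sketch (the function name and the exponent cutoff are ours, chosen for illustration) enumerates the children visible in figure~\\ref{fig:2}:

\\begin{verbatim}
def syracuse_children(w, max_exponent=12):
    """Children of an odd vertex w in H_{C,3}: odd v with (3v+1)/2^a == w."""
    children = []
    for a in range(1, max_exponent + 1):
        numerator = w * 2**a - 1
        if numerator % 3 == 0 and (numerator // 3) % 2 == 1:
            children.append(numerator // 3)
    return children

print(syracuse_children(5))  # [3, 13, 53, 213, 853, ...]
\\end{verbatim}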
Other definitions of the same tree exist, see for example Conrow \\cite{Ref_Conrow}, Bauer \\cite[p.~379]{Ref_Bauer}, Batang \\cite{Ref_Batang} or Jan Kleinnijenhuis and Alissa M. Kleinnijenhuis \\cite{Ref_Kleinnijenhuis_2020a}, \\cite{Ref_Kleinnijenhuis_2020b}.\n\n\\begin{figure}\n\t\\includegraphics[width=1.00\\textwidth]{figures/h_c3.png}\n\t\\caption{Section of the Syracuse tree $H_{C,3}$ (displaying the trivial cycle is waived)}\n\t\\label{fig:2}\n\\end{figure}\n\n\\section{\\texorpdfstring{Relationship of successive nodes in $H_{C,3}$}{Relationship of successive nodes in HC3}}\n\nLet $v_1$ and $v_{n+1}$ be two vertices of $H_{C,3}$, where $v_1$ is reachable from $v_{n+1}$ with $depth(v_1)-depth(v_{n+1})=n$. Hence, a path $(v_{n+1},\\ldots,v_1)$ exists between these two vertices. Theorem~\\ref{theo:1} specifies the following relationship between $v_1$ and $v_{n+1}$, empirically identified by Koch \\cite{Ref_Koch_Github}.\n\n\\begin{theorem}\n\t\\label{theo:1}\n\t$l_{V(H_{C,3})}(v_{n+1})=3^nl_{V(H_{C,3})}(v_1)\\prod_{i=1}^{n}\\left(1+\\frac{1}{3l_{V(H_{C,3})}(v_{i})}\\right)2^{-\\alpha_i}$.\n\tIn order to simplify readability, we waive writing down the vertex label function and put it shortly:\\\\\n\t$v_{n+1}=3^nv_1\\prod_{i=1}^{n}\\left(1+\\frac{1}{3v_{i}}\\right)2^{-\\alpha_i}$.\n\tThe value $\\alpha_i\\in\\mathbb{N}$ is the number of edges which have been contracted between $v_i$ and $v_{i+1}$ in $H_U$.\n\\end{theorem}\n\nIn order to demonstrate the construction produced by theorem~\\ref{theo:1} in an illustrative fashion, example~\\ref{ex:vertices} runs through a concrete path in $H_{C,3}$.\n\n\\begin{example}\n\t\\label{ex:vertices}\n\tFor example, the two vertices $v_1=45$ and $v_{1+3}=v_4=5$ are \n\tconnected\n\tvia the path $(5,13,17,45)$, see figure~\\ref{fig:2}. Furthermore, one\n\tcan retrace in figure~\\ref{fig:3} the uncontracted path between these\n\ttwo nodes within $H_U$. When applied to this example,\n\ttheorem~\\ref{theo:1} produces the following:\t\n\t\\begin{center}\n\t\t$5=v_{1+3}=3^3\\cdot45\\cdot\\left(1+\\frac{1}{3\\cdot45}\\right)\\cdot2^{-3}\n\t\t\\cdot\\left(1+\\frac{1}{3\\cdot17}\\right)\\cdot2^{-2}\n\t\t\\cdot\\left(1+\\frac{1}{3\\cdot13}\\right)\\cdot2^{-3}$\n\t\\end{center} \n\\end{example}\n\n\\begin{proof}\n\t\\label{proof:1}\n\tThis relationship of successive nodes can simply be proven inductively. For the base case, we set $n=1$ and retrieve\n\t\\begin{center}\n\t\t$v_{1+1}=3v_1\\left(1+\\frac{1}{3v_1}\\right)2^{-\\alpha_1}\n\t\t=\\left(3v_1+1\\right)2^{-\\alpha_1}=v_2$\n\t\\end{center}\n\tThe path from $v_2$ to $v_1$ can uniformly be expressed by a string $rq\\cdots q$ of $S^\\ast$, because of $v_1=r\\circ q^{\\alpha_1}\\left(v_2\\right)$. 
We set $n=n+1$ for the step case, which leads to\n\t\\begin{equation*}\n\t\\begin{array}{cl}\n\tv_{n+2} &\n\t=3^{n+1}v_1\\prod_{i=1}^{n+1}\\left(1+\\frac{1}{3v_i}\\right)2^{-\\alpha_i}\\\\\n\t&\n\t=3^{n+1}v_1\\left(1+\\frac{1}{3v_{n+1}}\\right)2^{-\\alpha_{n+1}}\\prod_{i=1}^{n}\\left(1+\\frac{1}{3v_i}\\right)2^{-\\alpha_i}\\\\\n\t&\n\t=3\\left(1+\\frac{1}{3v_{n+1}}\\right)2^{-\\alpha_{n+1}}3^nv_1\\prod_{i=1}^{n}\\left(1+\\frac{1}{3v_i}\\right)2^{-\\alpha_i}\\\\\n\t&\n\t=3\\left(1+\\frac{1}{3v_{n+1}}\\right)2^{-\\alpha_{n+1}}v_{n+1}\\\\\n\t&\n\t=\\left(3v_{n+1}+1\\right)2^{-\\alpha_{n+1}}\n\t\\end{array}\n\t\\end{equation*}\n\tIn this case the path from $v_{n+2}$ to $v_{n+1}$ is correspondingly \n\texpressed by a string $rq\\cdots q$ of $S^\\ast$ too, since\n\t$v_{n+1}=r\\circ q^{\\alpha_{n+1}}\\left(v_{n+2}\\right)$.\n\\end{proof}\n\nEven though the tree may theoretically contain two or more identically labeled vertices, it is essential to emphasize that we only consider such paths $(v_{n+1},\\ldots,v_1)$ whose vertices are all labeled differently. Later in section~\\ref{sec:cycles}, we even require that identically labeled nodes are one and the same. In order to correctly determine successive nodes using theorem~\\ref{theo:1}, we must consider the halting conditions. These are specified in definition~\\ref{def:halting_conditions}, which was introduced by Koch et al. \\cite{Ref_Koch_2020}.\n\n\\begin{definition}\n\t\\label{def:halting_conditions}\n\tBeing compliant with the Collatz conjecture, the algorithms (that calculate successive nodes for a given one) in theorem~\\ref{theo:1} and equation~\\ref{eq:generalized_reachability} halt if at least one of the following conditions is fulfilled:\n\t\\begin{enumerate}\n\t\t\\item $v_{n+1}=1$\n\t\t\\item $v_{n+1}\\in\\{v_1,v_2,\\ldots,v_n\\}$\n\t\\end{enumerate}\n\tWhen the first condition applies, the Collatz conjecture is true for a specific sequence. If the second condition is fulfilled, the sequence has led to a cycle. For every starting value, except $v_1=1$, the Collatz conjecture is therefore falsified. Let us consider the example $v_1=13$ and $n=2$. Inserting these values into theorem~\\ref{theo:1} yields:\n\t\n\t\\[\n\tv_{n+1}=3^2\\cdot\\left(1+\\frac{1}{3\\cdot13}\\right)\\left(1+\\frac{1}{3\\cdot5}\\right)\\cdot2^{-7}=1\n\t\\]\n\t\n\tIn the above example the algorithm halts after two iterations because the first condition is fulfilled. If we examine the case $v_1=1$, we realize that the algorithm finishes after the first iteration, since both halting conditions are true:\n\t\n\t\\[\n\tv_{n+1}=v_1=3^1\\cdot1\\cdot\\left(1+\\frac{1}{3\\cdot1}\\right)2^{-2}=1\n\t\\]\n\t\n\tThe sequence stops in the example above due to the result being one. Apart from that, the sequence has led to a cycle.\n\\end{definition}\n\n\\noindent\nTheorem~\\ref{theo:1} can be used for specifying the condition of a cycle as follows:\n\n\\begin{equation}\n\\label{eq:func_cycle}\n\\begin{array}{l}\nv_{1}=3^nv_1\\prod_{i=1}^{n}\\left(1+\\frac{1}{3v_i}\\right)2^{-\\alpha_i}\n\\\\[\\medskipamount]\n2^{\\alpha_1+\\cdots+\\alpha_n}=\\prod_{i=1}^{n}\\left(3+\\frac{1}{v_i}\\right)\n\\end{array}\n\\end{equation}\n\nA similar condition has been formulated by Hercher \\cite{Ref_Hercher} and Eric Roosendaal \\cite{Ref_Roosendaal_2020}. Taking a first look at equation~\\ref{eq:func_cycle}, we are able to recognize the trivial cycle for $n=1$. 
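Equation~\\ref{eq:func_cycle} lends itself to a direct numerical check with exact rational arithmetic. The following minimal Python sketch (the helper name is ours) evaluates the product $\\prod_{i=1}^{n}\\left(k+\\nicefrac{1}{v_i}\\right)$ for a candidate cycle of odd values; a cycle is present exactly if the result is a power of two:

\\begin{verbatim}
from fractions import Fraction

def cycle_condition_product(odd_values, k=3):
    """Evaluate prod(k + 1/v_i) exactly; cf. the cycle condition above."""
    product = Fraction(1)
    for v in odd_values:
        product *= Fraction(k * v + 1, v)
    return product

print(cycle_condition_product([1]))              # 4 = 2^2, the trivial cycle
print(cycle_condition_product([13, 33, 83], 5))  # 128 = 2^7, a 5x+1 cycle
\\end{verbatim}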
One might easily come to the false conclusion that the term only results in a natural number for this trivial cycle, since we are multiplying fractions. The following counterexample, starting at $v_1=31$, disproves this assumption:\n\\begin{equation*}\n20480=\\left(3+\\frac{1}{31}\\right)\\left(3+\\frac{1}{47}\\right)\n\\left(3+\\frac{1}{71}\\right)\\left(3+\\frac{1}{107}\\right)\\left(3+\\frac{1}{161}\\right)\\left(3+\\frac{1}{121}\\right)\\left(3+\\frac{1}{91}\\right)\\left(3+\\frac{1}{137}\\right)\\left(3+\\frac{1}{103}\\right)\n\\end{equation*}\n\nAccording to the OEIS \\cite{Ref_OESIS_A005184}, the integer $v_1=31$ is called \\textit{self-contained}. The term self-contained is based on the fact that the node $v_{n+1}=v_{10}=155$ is divisible by the starting node $v_1=31$. Moreover, $v_{10}$ results from applying one and the same function (in this case the Collatz function) using $v_1$ as input, see also Guy \\cite[p.~332]{Ref_Guy}. For such a case, equation~\\ref{eq:func_cycle} leads to a natural number, but not necessarily to a cycle. A cycle only occurs if the term results in a power of two. One such example is the trivial cycle. We find another case when we choose the factor $5$ instead of $3$:\n\\begin{center}\n\t$128=2^7=\\left(5+\\frac{1}{13}\\right)\\left(5+\\frac{1}{33}\\right)\n\t\\left(5+\\frac{1}{83}\\right)$\n\\end{center}\n\nThe above example demonstrates that non-trivial cycles can be found if we generalize the Collatz conjecture by replacing the factor $3$ with the variable $k$. We study this generalized form and the occurrence of cycles in section~\\ref{sec:cycles}. A detailed elaboration of this divisibility and a deeper understanding of the tree $H_{C,3}$ are needed in order to achieve a proof of the Collatz conjecture.\n\nGenerally, for any $kx+1$ variant, if $v_1\\mid v_{n+1}$, then the product $\\prod_{i=1}^n\\left(k+\\nicefrac{1}{v_i}\\right)$ is a natural number.\n\n\\newpage\n\n\\begin{figure}[H]\n\t\\includegraphics[width=1.00\\textwidth]{figures/h_u.png}\n\t\\caption{Section of $H_U$ containing the path from $5$ to $45$}\n\t\\label{fig:3}\n\\end{figure}\n\n\\section{A note on the functions left-child and right-sibling}\nReferring to the \"left-child, right-sibling representation\" of rooted trees \\cite[p.~246]{Ref_Cormen_Leiserson_Rivest_Stein}, the function $\\textit{left-child}:V\\rightarrow V$ returns the leftmost child of a vertex $v$. Nesting this function $n$ times leads to the definition of a vertex's $n$-fold left-child, which is given by $\\textit{left-child}^n(v)$. As delineated in figure~\\ref{fig:2}, for example $\\textit{left-child}^3(13)=7$.\n\nThe function $\\textit{right-sibling}:V\\rightarrow V$ points to the sibling of a vertex $v$ immediately to its right \\cite[p.~246]{Ref_Cormen_Leiserson_Rivest_Stein}. If this function is nested $n$ times, we get a vertex's $n$-fold right-sibling defined by $\\textit{right-sibling}^n(v)$. One example is $\\textit{right-sibling}^2(113)=1813$, which is likewise shown in figure~\\ref{fig:2}.\n\n\\section{\\texorpdfstring{Relationship of sibling nodes in $H_{C,3}$}{Relationship of sibling nodes in HC3}}\n\\label{sec:relationship_sibling_nodes_k3}\n\nIn a rooted tree, vertices which have the same parent are called \"siblings\" \\cite[p.~702]{Ref_Johnsonbaugh}, \\cite[p.~747]{Ref_Rosen}. Sibling vertices accordingly have the same depth and thus the same level.\n\nLet $w$ be a vertex from which a path exists to the vertex $v_1$. 
Let $v_2$ be the immediate right-sibling of $v_1$, then $l_{V\\left(H_{C,3}\\right)}\\left(v_2\\right)=4\\cdot l_{V\\left(H_{C,3}\\right)}\\left(v_1\\right)+1$. This fact has been expressed differently by Kak \\cite{Ref_Kak_2014} as follows: \"If an odd number $a$ leads to another odd number (after several applications of the Collatz transformation) $b$, then $4a+1$ also leads to $b$.\"\n\nApplied to our approach, consider $w$ as the parent of $v_1$ and $v_2$. Suppose, in $H_U$, a path consisting of $n+1$ edges goes from $w$ to $v_1$. Then we can straightforwardly show that $n$ edges in $H_U$ have been contracted between both nodes $w$ and $v_1$ and $n+2$ edges between $w$ and $v_2$ (for simplicity we again omit writing the label function):\n\\begin{equation}\n\\label{eq:next_sibling_k3}\n\\begin{array}{l}\n\t\tv_1=\\frac{w\\cdot2^n-1}{3}\n\t\t\\\\[\\medskipamount]\n\t\tv_2=\\frac{w\\cdot2^{n+2}-1}{3}=4\\cdot v_1+1\n\\end{array}\n\\end{equation}\n\nFor example, $n=3$ edges in $H_U$ have been contracted between $w=5$ and $v_1=13$ and $n+2=5$ edges between $w$ and $v_2=53$, whereby in $H_{C,3}$, the vertex $v_2$ is the right-sibling of $v_1$ and these two sibling vertices are immediate children of $w$.\n\nBatang \\cite{Ref_Batang} demonstrated that using the geometric series $s_n=1+4+4^2+\\ldots+4^{n-1}=\\nicefrac{4^n-1}{3}$ we are able to directly calculate the sibling nodes (see \\cite[p.~191-192]{Ref_Teschl_2013} for more details on geometric series). Let us consider the sibling nodes $\\{5,21,85,341\\}$. The first sibling of $5$ is calculated by $s_1+4^1\\cdot5=21$, the second sibling is $s_2+4^2\\cdot5=85$, and the third is $s_3+4^3\\cdot5=341$.\n\nThe same principle applies to the siblings $\\{13,53,213,853\\}$. The first sibling of $13$ is calculated by $s_1+4^1\\cdot13=53$, the next one is $s_2+4^2\\cdot13=213$, and the third is $s_3+4^3\\cdot13=853$.\n\n\\section{\\texorpdfstring{A vertex's left-child, $n$-fold right-sibling in $H_{C,3}$}{A vertex's left-child, n-fold right-sibling in HC3}}\n\\label{sec:left_child_right_sibling_3}\n\nLet $w$ be a vertex in $H_{C,3}$ and $v_0$ the left-child of $w$. Using techniques of data science \\cite{Ref_Koch_Github}, we have found out empirically that the $n$-fold right-sibling of $v_0$ can be calculated as follows:\n\\begin{equation}\n\\label{eq:nfold_right_sibling_3}\n\tv_n=\\textit{right-sibling}^n(v_0)=\\frac{1}{3}\\left(w\\cdot2^{2n+\\pi_3(w\\bmod 3)}-1\\right)\n\\end{equation}\n\nHere the function $\\pi_3$, which appears in the exponent, is the self-inverse permutation (involution):\n\\begin{equation}\n\\label{eq:pi_3}\n\t\\pi_3=\\left(\\begin{array}{cc}\n\t1 & 2\\\\\n\t2 & 1\n\t\\end{array}\\right)\n\\end{equation}\n\nWe consider permutations of the set $\\{1,2\\}$ and not of $\\{0,1,2\\}$, due to the fact that $w\\bmod 3$ cannot be zero. A node $w$ in $H_{C,3}$, which is labeled by an integer divisible by $3$ is a leaf; and therefore such node has no left-child. More specifically, it has no children at all. When setting $n=0$, we trivially retrieve the vertex's $w$ left-child:\n\\begin{center}\n\t$v_0=\\textit{left-child}(w)=\\frac{1}{3}\\left(w\\cdot2^{\\pi_3(w\\bmod 3)}-1\\right)$\n\\end{center}\n\n\\begin{example}\n\\label{ex:siblings}\nLet us refer to figure~\\ref{fig:2} again and pick out $w=5$. 
Then the\tvertex's $w$ left-child is $v_0=3$ and the threefold right-sibling\n$v_3=213$:\n\n\\begin{equation*}\n\\begin{array}{l}\n\tv_0=\\frac{1}{3}\\left(5\\cdot2^{\\pi_3(5\\bmod 3)}-1\\right)=3\n\t\\\\[\\medskipamount]\n\tv_3=\\frac{1}{3}\\left(5\\cdot2^{2\\cdot3+\\pi_3(5\\bmod 3)}-1\\right)=213\n\\end{array}\n\\end{equation*}\n\\end{example}\n\nWe will now explain the reasons that are underlying this behavior. Essentially, two integers $a$ and $b$ are congruent modulo $n$ if their difference $a-b$ is divisible by $n$ or, to put it differently, if $a$ and $b$ have the same remainder when divided by $n$ \\cite[p.~15]{Ref_Wolfart_2011}, \\cite[p.~44]{Ref_Forster_2015}, \\cite[p.~19]{Ref_Mueller-Stach_2011}, \\cite[p.~142]{Ref_Iwanowski_Lang_2014}:\n\\begin{equation}\n\\label{eq:congruence}\nn|(a-b)\\leftrightarrow a\\equiv b\\pmod n\n\\end{equation}\n\nIn modular arithmetic we are allowed to interpret integers as names, or to be more specific as \\textit{representatives}, for their equivalence class and therefore reduce (or expand) a congruence as follows:\n\\begin{equation}\n\\label{eq:congruence_reduction}\n(a+n)\\equiv b\\pmod n\\leftrightarrow a\\equiv b\\pmod n\n\\end{equation}\n\nThis means, that in modular arithmetic both operations -- addition and multiplication -- are independent from the choice of representatives in the residue classes \\cite[p.~16]{Ref_Wolfart_2011}.\n\nThe residue class (also termed congruence class) of the integers for a modulus $n$ is the set $[a]_n=\\{a+k\\cdot n|k\\in\\mathbb{Z}\\}$ and sometimes denoted by $\\bar a_n$ or by $a+n\\mathbb{Z}$, see \\cite[p.~15]{Ref_Wolfart_2011}, \\cite[p.~122]{Ref_Schubert_2012}, \\cite[p.~25]{Ref_Mueller-Stach_2011}, \\cite[p.~141]{Ref_Iwanowski_Lang_2014}. Let us put all possible remainders that arise from the division modulo $n$ together into a new set -- the set of all residue classes $[a]_n$. This set is known as the ring of integers modulo $n$ and denoted by $\\mathbb{Z}/n\\mathbb{Z}=\\{[a]_n|a\\in\\mathbb{Z}\\}$ and trivially $\\mathbb{Z}/0\\mathbb{Z}=\\mathbb{Z}$ and for all $n\\ne0$ we have $\\mathbb{Z}/n\\mathbb{Z}=\\{[0],[1],\\ldots,[n-1]\\}$, see \\cite[p.~15]{Ref_Wolfart_2011}, \\cite[p.~25]{Ref_Mueller-Stach_2011}, \\cite[p.~81]{Ref_Teschl_2013}.\n\nNow there is one more tool that we will make use of later, and that is the \\textit{Congruence Power Rule (CPR)}. It states that we are allowed to raise both sides of a congruence to the $m$-th power \\cite[p.~19]{Ref_Mueller-Stach_2011}, \\cite[p.~117]{Ref_Benjamin_2009}:\n\\begin{equation}\n\\label{eq:congruence_power_rule}\na\\equiv b\\pmod n\\leftrightarrow a^m\\equiv b^m\\pmod n\n\\end{equation}\n\nLet $G$ be a group and $a\\in G$. If there exists an integer $d>0$ with $a^d=e$, then the smallest such $d$ is called the \\textit{order} of $a$, written $d=\\ord(a)$ \\cite[p.~35]{Ref_Wolfart_2011}, \\cite[p.~50]{Ref_Forster_2015}, \\cite[p.~240]{Ref_Modler_Kreh_2012}. If no such $d$ exists, we formally write $\\ord(a)=\\infty$. The number of elements of $G$ is called the \\textit{order} of $G$, written $\\ord(G)$ \\cite[p.~26]{Ref_Wolfart_2011}, \\cite[p.~50]{Ref_Forster_2015}.\n\nLet $G$ be a group and $a\\in G$ an element of $G$. We consider the set of all elements $a^n\\in G$ with $n\\in\\mathbb{Z}$. Since $a^na^m=a^{n+m}=a^ma^n$ and $(a^n)^{-1}=a^{-n}$, this set forms an abelian subgroup $H_a$ of $G$. This subgroup $H_a$ is also written $\\left<a\\right>$ and called the subgroup of $G$ \\textit{generated} by $a$ \\cite[p.~50]{Ref_Forster_2015}. 
A group $G$ is called \\textit{cyclic}, if an $a\\in G$ exists so that $G$ consists only of powers of $a$ (with exponents in $\\mathbb{Z}$), thus if $G=\\left<a\\right>$ \\cite[p.~34]{Ref_Wolfart_2011}, \\cite[p.~50]{Ref_Forster_2015}, \\cite[p.~240]{Ref_Modler_Kreh_2012}. In this case $\\ord(a)=\\ord(G)$, id est the order of an element $a\\in G$ is equal to the order of the cyclic subgroup $\\left<a\\right>$ \\cite[p.~50]{Ref_Forster_2015}.\n\nLet us consider the cyclic group $\\left<a\\right>=\\{e,a,a^2,\\ldots,a^{n-1}\\}$ and an element $b\\in\\left<a\\right>$. Now let us face the question, \"How do we find the unique integer $0\\le j\\le n-1$, such that $a^j=b$?\" This integer $j$ is denoted by $j=log_ab$ and it is called the \\textit{discrete logarithm} of $b$ \\cite[p.~255-256]{Ref_Stinson_Paterson_2019}. To make it more clear that we are talking about the discrete logarithm we write $j=\\dlog_ab$ or more specifically $j=\\dlog_{a,k}b$ if we solve the congruence $a^j\\equiv b\\pmod k$ which means we solve the equation $a^j\\bmod k=b$, see \\cite{Ref_Jain_2011}.\n\nThe multiplicative group of integers modulo $n$, denoted as $(\\mathbb{Z}/n\\mathbb{Z})^\\times$ or briefly as $\\mathbb{Z}_n^\\ast$ contains non-negative elements from $\\mathbb{Z}/n\\mathbb{Z}$ whose representatives are coprime to $n$ \\cite[p.~87]{Ref_Teschl_2013}:\n\\begin{equation}\n\\label{eq:multiplicative_group_mod_n}\nZ_n^\\ast=\\{a\\in\\mathbb{Z}/n\\mathbb{Z}|\\gcd(a,n)=1\\}\n\\end{equation}\n\nThis group $\\mathbb{Z}_n^\\ast$ is a finite abelian group, which contains only the elements from the ring $\\mathbb{Z}/n\\mathbb{Z}$ that are invertible with respect to multiplication -- the units of $\\mathbb{Z}/n\\mathbb{Z}$. That is why the group $\\mathbb{Z}_n^\\ast$ is often denoted by $U(n)$, where $U$ stands for units. Within the ring $\\mathbb{Z}/n\\mathbb{Z}$ an element $a$ is invertible exactly if there exists an element $b$ such that $a*b\\equiv1\\pmod n$. An element inverse to $a$ exists exactly if $\\gcd(a,n)=1$.\n\nThe \\textit{multiplicative order} $\\ord(a)$ of an element $a\\in\\mathbb{Z}_n^\\ast$ is the smallest natural exponent $d$ which satisfies $a^d=1$. In other words, for a positive integer $n$ we say that an integer $a$ has multiplicative order $d$ modulo $n$ if $a^d\\equiv1\\pmod n$ where again $d$ is the smallest possible exponent \\cite[p.~76]{Ref_Hutz_2018}, \\cite[p.~32]{Ref_Shoup_2008}. To indicate that the order of $a$ refers to the modulus $n$, it is also often written $d=\\ord_n(a)$. Recall that $\\gcd(a,n)=1$, since $a\\in\\mathbb{Z}_n^\\ast$\n\nWe remember that groups also have an order. The order of a multiplicative group of integers modulo $n$ is given precisely by \\textit{Euler's totient function}, see \\cite[p.~27]{Ref_Wolfart_2011}:\n\\begin{equation}\n\\label{eq:multiplicative_group_order}\n\\ord(\\mathbb{Z}_n^\\ast)=\\phi(n)\n\\end{equation}\n\nEuler's totient function $\\phi(n)$ counts the positive integers up to a given integer $n$ that are coprime to $n$ \\cite[p.~49]{Ref_Forster_2015}. The fact that equation~\\ref{eq:multiplicative_group_order} is correct follows directly from the definition of $\\phi(n)$ -- we include into $\\mathbb{Z}_n^\\ast$ exactly those integers (representatives) from $\\mathbb{Z}/n\\mathbb{Z}$ that are coprime to $n$.\n\n\\begin{remark}\nThe elements of the ring of integers modulo $n$ do not form a group with respect to multiplication, because the element $0$ can not be inverted. 
But also $\\mathbb{Z}/n\\mathbb{Z}\\setminus\\{0\\}$ does not form a group for a composite $n$, since there are always products of elements $a\\ne0,b\\ne0$ with $a*b=0$, which means that the \"closure\" property is not given \\cite{Ref_Schwalen_2014}.\n\\end{remark}\n\nAn important theorem related to Euler's totient function, which we will use at a later stage, is Euler's theorem. Euler's theorem states that for an integer $a\\ge2$ coprime to the modulus $n$ the following congruence holds \\cite[p.~37]{Ref_Mueller-Stach_2011}, \\cite[p.~56]{Ref_Forster_2015}, \\cite[p.~104]{Ref_Teschl_2013}:\n\\begin{equation}\n\\label{eq:eulers_theorem}\na^{\\phi(n)}\\equiv1\\pmod n\n\\end{equation}\n\nThis means that given $\\left<a\\right>=\\mathbb{Z}_n^\\ast$, for any generator $a$ coprime to the modulus $n$ the congruence $a^{\\phi(n)}\\equiv1\\pmod n$ becomes true, where again $\\phi(n)$ is the order of $\\mathbb{Z}_n^\\ast$ and thus the order of $a$ (see \\ref{eq:multiplicative_group_order}). If $\\phi(n)\\equiv2\\pmod4$ then the group $\\mathbb{Z}_n^\\ast$ is cyclic. Consequently the multiplicative group of integers modulo $n$ is cyclic for $n\\in\\{1,2,4,p^j,2p^j\\}$, where $p$ being an odd prime and $j\\ge1$ \\cite{Ref_Schwalen_2014}, \\cite[p.~172]{Ref_Gallian}, \\cite{Ref_Guichard}.\n\nAt this point it is appropriate that we explain the mapping (permutation) given by \\ref{eq:pi_3} in more detail. A helpful tool that we can use as a point of departure is the multiplicative group of integers modulo $3$. The element $2$ is a generator $\\left<2\\right>=\\{1,2\\}=\\mathbb{Z}_3^\\ast$, since $2\\equiv\\boldsymbol{2}\\pmod3$ and $2^2\\equiv4\\equiv\\boldsymbol{1}\\pmod3$. The order of $2$ is $2$, since $2^2\\equiv1\\pmod3$. Now we use the CPR given by \\ref{eq:congruence_power_rule} and obtain $(2^2)^{n+1}\\equiv1^{n+1}\\pmod3$ and the following generic congruence:\n\\begin{equation}\n\\label{eq:congruence_k3}\n2^j2^{2n+2-j}\\equiv1\\pmod3\n\\end{equation}\n\nSetting $j=0,1$ leads to the following behavior, which explains the formula~\\ref{eq:nfold_right_sibling_3} and the mapping~\\ref{eq:pi_3}:\n{\\renewcommand{\\arraystretch}{1.8}\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|L|L|L|L|L|}\n\t\t\\hline\n\t\t\\thead{\\boldsymbol{j}} &\n\t\t\\thead{\\textbf{congruence}~\\ref{eq:congruence_k3}} &\n\t\t\\thead{\\textbf{node}\\ \\boldsymbol{w}} &\n\t\t\\thead{\\textbf{setting}\\ \\boldsymbol{w}\\ \\textbf{as per}\\ \\ref{eq:congruence_reduction}} &\n\t\t\\thead{\\textbf{divisibility as per}\\ \\ref{eq:congruence}}\\\\\n\t\t\\hline\n\t\t0\n\t\t& 1\\cdot2^{2n+2}\\equiv1\n\t\t& w\\in[1]_3\n\t\t& \\multirowcell{2}{w\\cdot2^{2n+\\pi_3(w\\bmod3)}\\equiv1}\n\t\t& \\multirowcell{2}{3|(w\\cdot2^{2n+\\pi_3(w\\bmod3)}-1)}\n\t\t\\\\ \\cline{1-3}\n\t\t1\n\t\t& 2\\cdot2^{2n+1}\\equiv1\n\t\t& w\\in[2]_3\n\t\t& \n\t\t& \n\t\t\\\\ \\hline\n\t\\end{tabular}\n\\end{table}}\n\n\\begin{remark}\nIf addition is the group operation, as it is the case for example with the additive group of integers modulo $3$, denoted as $(\\mathbb{Z}/3\\mathbb{Z},+)$ or as $(\\mathbb{Z}_3,+)$, then for an element $a$ the term $a^n$ means add (and not multiply) $a$ to itself $n$ times. In this specific case the group contains three elements $\\mathbb{Z}_3=\\{0,1,2\\}$ and the identity element is $e=0$. The element $2$ is a generator $\\left<2\\right>=\\{0,1,2\\}=\\mathbb{Z}_3$, since $2\\equiv\\boldsymbol{2}\\pmod3$ and $2+2\\equiv4\\equiv\\boldsymbol{1}\\pmod3$ and $2+2+2\\equiv6\\equiv\\boldsymbol{0}\\pmod3$. 
The order of $2$ is $3$, because $2^3\\equiv0\\pmod3$.\n\\end{remark}\n\n\\section{\\texorpdfstring{A vertex's left-child, $n$-fold right-sibling in $H_{C,5}$}{A vertex's left-child, n-fold right-sibling in HC5}}\n\\label{sec:left_child_right_sibling_5}\n\nIn the following we take a look at the graph $H_{C,5}$ -- the $5x+1$ variant of $H_C$. We must note that it is not a tree, and moreover that not all of its vertices are reachable from the root, which makes it particularly interesting as a counterexample. We define the permutation $\\pi_5$ as follows:  \n\\begin{center}\n\\label{eq:pi_5}\n    $\\pi_5=\\left(\\begin{array}{cccc}\n    \t1 & 2 & 3 & 4\\\\\n    \t4 & 3 & 1 & 2\n    \\end{array}\\right)$\t\n\\end{center}\n\nNext, by letting $w$ be a vertex in $H_{C,5}$ and $v_0$ the left-child of $w$,\nwe obtain the $n$-fold right-sibling of $v_0$ by a function that\nis slightly different from the one defined by \\ref{eq:nfold_right_sibling_3}:\n\\begin{equation}\n\\label{eq:nfold_right_sibling_5}\n    v_n=\\textit{right-sibling}^n(v_0)=\\frac{1}{5}\\left(w\\cdot2^{4n+\\pi_5(w\\bmod 5)}-1\\right)\n\\end{equation}\n\nAnalogously to \\ref{eq:pi_3}, only permutations of the set $\\{1,2,3,4\\}$ without zero\nneed to be considered, since $w\\bmod 5$ cannot be zero.\nOtherwise, if $w\\equiv0\\pmod5$, which means that $w$ is labeled\nby an integer divisible by $5$, then the node $w$ has no children in $H_{C,5}$.\n\n\\noindent\nBy setting $n=0$, the function (above given by \\ref{eq:nfold_right_sibling_5}) returns the left-child of $w$:\n\\begin{center}\n\t$v_0=\\textit{left-child}(w)=\\frac{1}{5}\\left(w\\cdot2^{\\pi_5(w\\bmod 5)}-1\\right)$\n\\end{center}\n\nEquation~\\ref{eq:nfold_right_sibling_5} has been identified empirically as well and can be explained using the cyclic group $\\left<2\\right>=\\{1,2,3,4\\}=\\mathbb{Z}_5^\\ast$. First of all, it is obvious that $2$ generates this group, since $2\\equiv\\boldsymbol{2}\\pmod5$ and $2^2\\equiv\\boldsymbol{4}\\pmod5$ and $2^3\\equiv8\\equiv\\boldsymbol{3}\\pmod5$ and $2^4\\equiv16\\equiv\\boldsymbol{1}\\pmod5$. The order of $2$ is $4$, and according to \\ref{eq:eulers_theorem} and \\ref{eq:multiplicative_group_order} we have $2^{\\ord(\\mathbb{Z}_5^\\ast)}\\equiv2^{\\phi(5)}\\equiv2^4\\equiv1\\pmod5$. 
Again we use the CPR given by \\ref{eq:congruence_power_rule} and obtain $(2^4)^{n+1}\\equiv1^{n+1}\\pmod5$ and the following generic congruence:\n\\begin{equation}\n\\label{eq:congruence_k5}\n2^j2^{4n+4-j}\\equiv1\\pmod5\n\\end{equation}\n\nSetting $j=0,1,2,3$ leads to the following behavior, which explains the formula~\\ref{eq:nfold_right_sibling_5} and the mapping~\\ref{eq:pi_5}:\n{\\renewcommand{\\arraystretch}{1.8}\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|L|L|L|L|L|}\n\t\t\\hline\n\t\t\\thead{\\boldsymbol{j}} &\n\t\t\\thead{\\textbf{congruence}~\\ref{eq:congruence_k5}} &\n\t\t\\thead{\\textbf{node}\\ \\boldsymbol{w}} &\n\t\t\\thead{\\textbf{setting}\\ \\boldsymbol{w}\\ \\textbf{as per}\\ \\ref{eq:congruence_reduction}} &\n\t\t\\thead{\\textbf{divisibility as per}\\ \\ref{eq:congruence}}\\\\\n\t\t\\hline\n\t\t0\n\t\t& 1\\cdot2^{4n+4}\\equiv1\n\t\t& w\\in[1]_5\n\t\t& \\multirowcell{4}{w\\cdot2^{4n+\\pi_5(w\\bmod5)}\\equiv1}\n\t\t& \\multirowcell{4}{5|(w\\cdot2^{4n+\\pi_5(w\\bmod5)}-1)}\n\t\t\\\\ \\cline{1-3}\n\t\t1\n\t\t& 2\\cdot2^{4n+3}\\equiv1\n\t\t& w\\in[2]_5\n\t\t& \n\t\t&\n\t\t\\\\ \\cline{1-3}\n\t\t2\n\t\t& 4\\cdot2^{4n+2}\\equiv1\n\t\t& w\\in[4]_5\n\t\t& \n\t\t&\n\t\t\\\\ \\cline{1-3}\n\t\t3\n\t\t& 8\\cdot2^{4n+1}\\equiv1\n\t\t& w\\in[3]_5\n\t\t& \n\t\t&\n\t\t\\\\ \\hline\n\t\\end{tabular}\n\\end{table}}\n\nFigure~\\ref{fig:hc5} illustrates a small section of $H_{C,5}$ starting at its root. The particularly interesting thing about the graph $H_{C,5}$ is that it contains three cycles, the trivial cycle starting from the root $(1,3)$ and two non-trivial cycles $(43,17,27)$ and $(83,33,13)$. To be precise, three cycles are known (as it will become apparent later in section~\\ref{sec:non_trivial_cycles}), and on the basis of present knowledge it cannot be ruled out with any certainty that other cycles exist.\n\n\\begin{figure}[H]\n\t\\includegraphics[width=1.00\\textwidth]{figures/h_c5b.png}\n\t\\caption{Section of the graph $H_{C,5}$ starting at its root (without branches that reflect a subsequence containing the trivial cycle)}\n\t\\label{fig:hc5}\n\\end{figure}\n\n\\section{\\texorpdfstring{A vertex's left-child, $n$-fold right-sibling in $H_{C,7}$}{A vertex's left-child, n-fold right-sibling in HC7}}\n\\label{sec:left_child_right_sibling_7}\n\nNow we are able to develop the formula deductively that calculates for a given node $w$ the left-child and right-sibling in $H_{C,7}$. We refer to the cyclic group $\\mathbb{Z}_7^\\ast=\\{1,2,3,4,5,6\\}$. Note that in this case $2$ is not a generator of this group. But nevertheless $\\mathbb{Z}_7^\\ast$ is cyclic and $\\ord(2)=3$ which gives $2^3\\equiv1\\pmod7$. 
Again we use the CPR given by \\ref{eq:congruence_power_rule} and obtain $(2^3)^{n+1}\\equiv1^{n+1}\\pmod7$ and the following generic congruence:\n\\begin{equation}\n\\label{eq:congruence_k7}\n2^j2^{3n+3-j}\\equiv1\\pmod7\n\\end{equation}\n\nSetting $j=0,1,2$ leads to the following behavior, which produces formula~\\ref{eq:nfold_right_sibling_7} and the mapping~\\ref{eq:pi_7}:\n\n{\\renewcommand{\\arraystretch}{1.8}\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|L|L|L|L|L|}\n\t\t\\hline\n\t\t\\thead{\\boldsymbol{j}} &\n\t\t\\thead{\\textbf{congruence}~\\ref{eq:congruence_k7}} &\n\t\t\\thead{\\textbf{node}\\ \\boldsymbol{w}} &\n\t\t\\thead{\\textbf{setting}\\ \\boldsymbol{w}\\ \\textbf{as per}\\ \\ref{eq:congruence_reduction}} &\n\t\t\\thead{\\textbf{divisibility as per}\\ \\ref{eq:congruence}}\\\\\n\t\t\\hline\n\t\t0\n\t\t& 1\\cdot2^{3n+3}\\equiv1\n\t\t& w\\in[1]_7\n\t\t& \\multirowcell{3}{w\\cdot2^{3n+\\pi_7(w\\bmod7)}\\equiv1}\n\t\t& \\multirowcell{3}{7|(w\\cdot2^{3n+\\pi_7(w\\bmod7)}-1)}\n\t\t\\\\ \\cline{1-3}\n\t\t1\n\t\t& 2\\cdot2^{3n+2}\\equiv1\n\t\t& w\\in[2]_7\n\t\t& \n\t\t&\n\t\t\\\\ \\cline{1-3}\n\t\t2\n\t\t& 4\\cdot2^{3n+1}\\equiv1\n\t\t& w\\in[4]_7\n\t\t& \n\t\t&\n\t\t\\\\ \\hline\n\t\\end{tabular}\n\\end{table}}\n\n\\begin{equation}\n\\label{eq:nfold_right_sibling_7}\nv_n=\\textit{right-sibling}^n(v_0)=\\frac{1}{7}\\left(w\\cdot2^{3n+\\pi_7(w\\bmod 7)}-1\\right)\n\\end{equation}\n\nThe mapping \\ref{eq:pi_7} is not a permutation as in the case of $\\pi_3$ and $\\pi_5$, it is defined as follows:\n\\begin{equation}\n\\label{eq:pi_7}\n\\pi_7(n)=\\begin{cases}\n3\t&\tn=1\\\\\n2\t&\tn=2\\\\\n1   &   n=4\n\\end{cases}\t\n\\end{equation}\n\n\\begin{figure}[H]\n\t\\includegraphics[width=1.00\\textwidth]{figures/h_c7.png}\n\t\\caption{Section of the graph $H_{C,7}$ starting at its root (without branches that reflect a subsequence containing the trivial cycle)}\n\t\\label{fig:hc7}\n\\end{figure}\n\n\\section{\\texorpdfstring{Generalizing the relationship of successive nodes for $H_{C,k}$}{Generalizing the relationship of successive nodes for HCk}}\nLet us refer to $H_{C,k}$. By having introduced and proven theorem~\\ref{theo:1} we already started an assertion about the reachability of successive nodes in $H_{C,3}$. This reachability relationship can be generalized for any graph $H_{C,k}$ as follows:\n\\begin{equation}\n\t\\label{eq:generalized_reachability}\n\tv_{n+1}=k^nv_1\\prod_{i=1}^{n}\\left(1+\\frac{1}{kv_{i}}\\right)2^{-\\alpha_i}\n\\end{equation}\n\nThis generalization will be later utilized in chapter~\\ref{ch:cycles} for closer observations of cycles in various $kx+1$ variants of the graph $H_C$.\n\n\\section{\\texorpdfstring{Generalizing the relationship of sibling nodes for $H_{C,k}$}{Generalizing the relationship of sibling nodes for HCk}}\nIn section~\\ref{sec:relationship_sibling_nodes_k3} we have taken a closer look at the relationship of sibling nodes in $H_{C,3}$. But what is the formula for calculating the $n$-fold right-sibling of a given node $v_0$ generalized to the $kx+1$ variant of $H_C$? We remember that the multiplicative order $d=\\ord_k(2)$ is the smallest natural exponent $d$ such that $2^d\\equiv 1\\pmod k$. In the case $k=3$ we can calculate the next sibling $v_1$ of a given node $v_0$ as follows: $v_1=v_0\\cdot4+1$, see \\ref{eq:next_sibling_k3}. Within $H_{C,7}$, the next sibling of $v_0$ is given by $v_1=v_0\\cdot8+1$. 
In the case of $k=5$, we calculate the next sibling $v_1=v_0\\cdot16+3$, and for $k=9$ we obtain the next sibling $v_1$ of a given node $v_0$ by $v_1=v_0\\cdot64+7$. The general formula for calculating the immediate right sibling of a given node $v_0$ is:\n\n\\begin{equation}\n\\label{eq:right_sibling_k}\nv_1=v_0\\cdot2^{\\ord_k(2)}+\\frac{1}{k}\\left(2^{\\ord_k(2)}-1\\right)\n\\end{equation}\n\nFor example, the node $v_0=243$ in $H_{C,5}$ has the right sibling $v_1=243\\cdot2^4+(2^4-1)/5=3891$ (see figure~\\ref{fig:hc5}). In order to calculate the $n$-fold right sibling of a given node $v_0$, we need to nest the (linear) function~\\ref{eq:right_sibling_k} $n$ times. For the sake of simplicity, let us substitute $a=2^{\\ord_k(2)}$ and $b=\\frac{1}{k}(2^{\\ord_k(2)}-1)$. Then the $n$-fold right sibling of $v_0$ is obtained by the following nested structure:\n\n\\begin{flalign*}\nv_n=\\textit{right-sibling}^n(v_0)&=\\left(\\left(\\left(v_0\\cdot a+b\\right)\\cdot a+b\\right)\\cdot a+b\\right)\\cdots\\\\\n&=v_0\\cdot a^n+b(a^{n-1}+\\ldots+a^2+a+1)=v_0\\cdot a^n+b\\frac{a^n-1}{a-1}\n\\end{flalign*}\n\nNote that the term $b(a^{n-1}+\\ldots+a^2+a+1)$ can be simplified using the $n$-th partial sum of a geometric series (\\cite[p.~192]{Ref_Teschl_2013}). The resubstitution of both coefficients $a$ and $b$ leads us to the final generalized formula that calculates the $n$-fold right sibling of a node $v_0$ in $H_{C,k}$ for a natural $n\\ge0$:\n\\begin{equation}\n\\label{eq:n_fold_right_sibling_k}\nv_n=\\textit{right-sibling}^n(v_0)=v_0\\cdot2^{n\\cdot\\ord_k(2)}+\\frac{1}{k}\\left(2^{\\ord_k(2)}-1\\right)\\cdot\\frac{2^{n\\cdot\\ord_k(2)}-1}{2^{\\ord_k(2)}-1}\n\\end{equation}\n\nThis can be verified by inserting $n=0$ and $n=1$ into formula~\\ref{eq:left_child_n_fold_right_sibling_k}, which calculates a vertex's left-child, $n$-fold right-sibling in $H_{C,k}$:\n\n\\[\\arraycolsep=1.6em\n\\begin{array}{ll}\nv_0=\\frac{1}{k}\\left(w\\cdot2^{\\ord_k(2)-\\dlog_{2,k}w}-1\\right) & kv_0+1=w\\cdot2^{\\ord_k(2)-\\dlog_{2,k}w}\\\\\nv_1=\\frac{1}{k}\\left(w\\cdot2^{2\\ord_k(2)-\\dlog_{2,k}w}-1\\right) & kv_1+1=w\\cdot2^{2\\ord_k(2)-\\dlog_{2,k}w}\n\\end{array}\n\\]\n\nThis brings us to the following quotient, which leads to the basic relationship between two sibling nodes given by equation~\\ref{eq:right_sibling_k} in the form $v_1=v_0\\cdot a+b$:\n\n\\[\n\\frac{kv_1+1}{kv_0+1}=\\frac{2^{2\\ord_k(2)-\\dlog_{2,k}w}}{2^{\\ord_k(2)-\\dlog_{2,k}w}}=2^{\\ord_k(2)}\n\\]\n\nHere we point out that equation~\\ref{eq:n_fold_right_sibling_k} of course only works for those $k$ for which the order of two is finite, i.e. $\\ord_k(2)\\ne\\infty$. This means that, for instance, it does not work for $k=1$, i.e. for the $1x+1$ variant of $H_C$. This variant is very instructive due to its simple nature, and for the sake of completeness we have added a picture and a few words about $H_{C,1}$ in appendix~\\ref{appx:x_plus_1_variant}.\n\n\\section{\\texorpdfstring{Generalizing a vertex's left-child, $n$-fold right-sibling for $H_{C,k}$}{Generalizing vertex's left-child, n-fold right-sibling for HCk}}\nLet $n\\ge0$ be an integer. 
We generalize the formulas developed in sections~\\ref{sec:left_child_right_sibling_3}~--~\\ref{sec:left_child_right_sibling_7} to calculate the left-child, $n$-fold right-sibling for a given node $w$, the direct parent node of $v_0$, as follows:\n\\begin{equation}\n\\label{eq:left_child_n_fold_right_sibling_k}\nv_n=\\textit{right-sibling}^n(v_0)=\\frac{1}{k}\\left(w\\cdot2^{\\ord_k(2)\\cdot n+\\ord_k(2)-\\dlog_{2,k}w}-1\\right)\n\\end{equation}\n\nTo describe the computation more simply: we start at an arbitrary parent node $w$, calculate its left-child $v_0$, and then determine the $n$-fold right-sibling (the $n$-th neighbor to the right) of $v_0$. Let us choose, for example, the node $w=2633$ from $H_{C,7}$, whose left child is $v_0=3009$. The fourth right-sibling of this left-child is $v_4=(2633\\cdot2^{3\\cdot4+3-0}-1)/7=12325449$ (see figure~\\ref{fig:hc7}).\n\nRecall that the discrete logarithm $\\dlog_{2,k}w=j$ finds the smallest exponent $j$ such that $2^j\\equiv w\\pmod k$, i.e. it solves the equation $2^j\\bmod k=w\\bmod k$.\n", "meta": {"hexsha": "3286098920078aed9778e4738017796c321610ba", "size": 44130, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "01 Graph Theory/TeX/v6.0/chapter/02_collatz_tree_COX_EDIT.tex", "max_stars_repo_name": "Sultanow/collatz", "max_stars_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-01T15:12:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T15:54:55.000Z", "max_issues_repo_path": "01 Graph Theory/TeX/v6.0/chapter/02_collatz_tree_COX_EDIT.tex", "max_issues_repo_name": "Sultanow/collatz", "max_issues_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01 Graph Theory/TeX/v6.0/chapter/02_collatz_tree_COX_EDIT.tex", "max_forks_repo_name": "Sultanow/collatz", "max_forks_repo_head_hexsha": "d8a5137af508be19da371fff787c114f1b5185c3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-05-06T20:44:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-06T20:44:07.000Z", "avg_line_length": 80.0907441016, "max_line_length": 1284, "alphanum_fraction": 0.7219578518, "num_tokens": 15142, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681122619883, "lm_q2_score": 0.672331705744791, "lm_q1q2_score": 0.576704698050592}}
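Both generalized sibling formulas lend themselves to a direct numerical check. The sketch below (our addition; it assumes $w\bmod k$ lies in the cyclic subgroup generated by $2$, so that the discrete logarithm terminates) implements equations \ref{eq:n_fold_right_sibling_k} and \ref{eq:left_child_n_fold_right_sibling_k} and reproduces the worked examples $243\mapsto3891$ in $H_{C,5}$ and $w=2633\mapsto v_0=3009$, $v_4=12325449$ in $H_{C,7}$:
\begin{verbatim}
def ord2(k):
    # multiplicative order of 2 modulo an odd k > 1
    d, r = 1, 2 % k
    while r != 1:
        r, d = (2 * r) % k, d + 1
    return d

def dlog2(k, w):
    # discrete logarithm: smallest j >= 0 with 2^j = w (mod k);
    # assumes w mod k is a power of 2 modulo k
    j, r = 0, 1
    while r != w % k:
        r, j = (2 * r) % k, j + 1
    return j

def nfold_right_sibling(k, v0, n):
    # v_n = v0 * a^n + b * (a^n - 1)/(a - 1), eq. (n_fold_right_sibling_k)
    a = 2 ** ord2(k)          # a = 2^(ord_k(2))
    b = (a - 1) // k          # b = (2^(ord_k(2)) - 1)/k, an integer
    return v0 * a ** n + b * (a ** n - 1) // (a - 1)

def left_child_nfold_right_sibling(k, w, n):
    # v_n = (w * 2^(ord_k(2)*n + ord_k(2) - dlog_{2,k} w) - 1)/k
    d = ord2(k)
    return (w * 2 ** (d * n + d - dlog2(k, w)) - 1) // k

assert nfold_right_sibling(5, 243, 1) == 3891
assert left_child_nfold_right_sibling(7, 2633, 0) == 3009
assert left_child_nfold_right_sibling(7, 2633, 4) == 12325449
\end{verbatim}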
{"text": "\\chapter{Single Mauthner Cell Model - Theory}\n\tIn this chapter we will explain the theoretical aspects of the neuronal model for a single \n\tMauthner Cell.\n\tBy 'single' Mauthner cell we only mean that we are considering the mechanisms of the \n\tsurrounding circuit involving one of the two existing Mauthner cells instead of both.\n\tWe will start with the description of the full model and continue with two reductions that \n\tassume a separation of timescales and thus provide stationary approximations of the model.\n\t\\section{Full neuronal model}\n\tThe full neuronal model of a single Mauthner cell consists of a rate-based model for the \n\tpopulation of inhibitory interneurons that provide the feed-forward inhibition and a LIF model \n\tfor the M-cell itself.\n\tBoth the inhibitory population and the M-cell get their input from a single source.\n\tIn our case this input will represent the visual information coming from the optic tectum which \n\twill be described in more detail in the next chapter.\n\tThe time evolution of the activity $\\rho$ of the inhibitory population is described by the \n\tfollowing equation:\n\t\\begin{equation}\n\t\\tau _{\\rho} \\frac{d\\rho}{dt} = - (\\rho(t) - \\rho_{0}) + c_{\\rho} I(t) + \n\t\\eta _{\\rho},\n\t\\label{eq:inhib}\n\t\\end{equation}\n\twhere $\\tau _{_\\rho}$ is the time constant, $\\rho _{0}$ is the resting activity of the \n\tpopulation, $c_{\\rho}$ is a scaling factor, $I(t)$ is the time dependent input and $\\eta \n\t_{\\rho}$ is a Gaussian noise term.\n    While we assume that the resting activity $\\rho_{0}$ is constant during a single trial of an experiment, we sample its value during a single trial from a random distribution that we further specify in the next chapter.\\\\\n\tFor the M-cell we use a LIF model where the time evolution of the membrane potential $V_m$ is \n\tdescribed by the following equation:\n\t\\begin{equation}\n\t\\tau _m \\frac{dV_m}{dt} = - (V(t) - E_{L}) + R_{m} I(t) - \\rho (t) +  \\eta \n\t_m,\n\t\\label{eq:mcell}\n\t\\end{equation}\n\twhere $\\tau_{m}$ is the membrane time constant, $E_L$ is the resting potential, $R_m$ is the \n\tmembrane resistance and $\\eta_{m}$ is again a Gaussian noise term.\n\tThe M-cell thus gets the direct visual input $I(t)$ and is inhibited by $\\rho(t)$.\n\tIf the membrane potential $V_m$ crosses a threshold $V_t$ an action potential is artificially \n\tproduced and the membrane potential is reset to the resting potential $E_L$.\n\tAdditional to the noise terms in equations \\ref{eq:inhib} and \\ref{eq:mcell} we will also \n\tconsider fluctuations of the firing threshold $V_t$:\n\t\\begin{equation}\n\tV_t (t) = V_t + \\eta_t(t),\n\t\\label{eq:thrs}\n\t\\end{equation}\n\twhere $\\eta_t$ is a Gaussian noise term.\\\\\n\tThe basic parameters of the LIF model, i.e. 
$E_L$, $R_m$, $\\tau_m$ and $V_t$, have been fitted to experimental data in a previous study by \\cite{Koyama2016} using recordings from four larval zebrafish at four days post-fertilization (dpf).\n\tFor the details of the fitting procedure see their methods section.\\\\\n\tOne important property of this dynamical system is the set of time scales on which the described activity takes place.\n\tSince we know that the synapses at the inhibitory interneurons are electrical, at least for the auditory input, the time constant, and therefore the relevant time scale, of $\\rho$ is on the order of milliseconds.\n\tAs we will see later on, in the experiments that we want to reproduce, the input changes on much longer time scales of at least hundreds of milliseconds.\n\tThis fact motivates the reduction in the next section, where we approximate the activity of the inhibitory population by an adiabatic ansatz assuming a separation of time scales.\n\t\\section{Stationary Approximation of Inhibitory Population}\\label{approx inhibition}\n\tHere we reduce the model by approximating the activity of the inhibitory population by its \n\tstationary solution.\n\tThis approximation becomes more accurate as the separation grows between the time scale of the dynamics of the inhibitory population and the time scale of the input.\n\tIf we use $\\tau_{\\rho}$ as the time scale of the inhibitory population and denote $\\tau_{in}$ as the time scale of the input, the approximation becomes exact in the limit $\\tau_{\\rho}/ \\tau_{in} \\rightarrow 0$.\n\tIn the model, this means that equation \\ref{eq:inhib} becomes:\n\t\\begin{equation}\n\t\\hat{\\rho} (t) = \\rho_{0} + c_{\\rho} I(t) + \\eta_{\\rho}.\n\t\\label{eq:inhib_approx}\n\t\\end{equation}\n\tNow we can replace $\\rho (t)$ in equation \\ref{eq:mcell} and get:\n\t\\begin{equation}\n\t\\tau _m \\frac{dV_m}{dt} = - (V_m(t) - E_{L}) + I(t)(R_{m} - c_{\\rho}) - \\rho_{0} - \n\t\\eta_{\\rho} +  \\eta _m.\n\t\\label{eq:mcell_approx1}\n\t\\end{equation}\n\tIn the resulting LIF model the input is now weighted by the difference between the membrane \n\tresistance $R_m$ and the scaling factor $c_{\\rho}$.\n\tIf we ignore the noise terms for a moment and assume that $\\rho_{0}=0$, this means that the \n\tinput can only excite the M-cell and therefore evoke an action potential if $c_{\\rho} < R_m$.\n\tIncreasing $\\rho_{0}$ would effectively increase the firing threshold $V_t$.\n\t\\section{Stationary Approximation of Full Model}\\label{approx full model}\n\tAs a next step we can further approximate the LIF model in equation \\ref{eq:mcell_approx1} by \n\tits stationary solution:\n\t\\begin{equation}\n\t\\hat{V}_m(t) = E_{L} + I(t)(R_{m} - c_{\\rho}) - \\rho_{0} - \n\t\\eta_{\\rho} +  \\eta _m.\n\t\\end{equation}\n\tIf we set all noise to zero we can derive an expression for the input at which the membrane \n\tpotential reaches the threshold $V_{t}$:\n\t\\begin{equation}\n\t\\hat{V}_m(t) \\overset{!}{=} V_t\n\t\\end{equation}\n\t\\begin{equation}\n\t\\Leftrightarrow E_{L} + I(t)(R_{m} - c_{\\rho}) - \\rho_{0} \n\t\\overset{!}{=} V_t\n\t\\end{equation}\n\t\\begin{equation}\n\t\\Leftrightarrow I(t)\n\t\\overset{!}{=} \\frac{V_t - E_{L} + \\rho_{0}}{(R_{m} - c_{\\rho})}\n\t\\label{eq:crit_input}\n\t%TODO: look up solution for simple LIF equation even if it's only for linear input\n\t%TODO: say that this is comparable to first-passage time problems such as in the \n\t%drift-diffusion model for decision making(maybe cite ratcliff2002 or
so)\n\t\\end{equation}\n%----------------------------------------------------------------------------------------\n", "meta": {"hexsha": "c9aeccfe1c4fa5e689e2e526a484b52440940849", "size": 6170, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/Chapters/2-Theory.tex", "max_stars_repo_name": "awakenting/master-thesis", "max_stars_repo_head_hexsha": "d612c3b240b2d7325cac6fbf9c85c3d81250b558", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-07-04T17:51:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T00:18:22.000Z", "max_issues_repo_path": "manuscript/Chapters/2-Theory.tex", "max_issues_repo_name": "awakenting/master-thesis", "max_issues_repo_head_hexsha": "d612c3b240b2d7325cac6fbf9c85c3d81250b558", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuscript/Chapters/2-Theory.tex", "max_forks_repo_name": "awakenting/master-thesis", "max_forks_repo_head_hexsha": "d612c3b240b2d7325cac6fbf9c85c3d81250b558", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.6082474227, "max_line_length": 239, "alphanum_fraction": 0.7272285251, "num_tokens": 1713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681049901037, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.5767046875297928}}
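As an illustration of how the full model can be integrated numerically, here is a minimal Euler--Maruyama sketch of equations \ref{eq:inhib} and \ref{eq:mcell} (our addition; all parameter values are arbitrary placeholders and not the fitted values from \cite{Koyama2016}; the threshold fluctuations of equation \ref{eq:thrs} are omitted for brevity):
\begin{verbatim}
import numpy as np

# placeholder parameters (illustrative only)
dt, T = 0.1, 1000.0                              # ms
tau_rho, rho_0, c_rho = 2.0, 0.0, 0.2            # inhibitory population
tau_m, E_L, R_m, V_t = 10.0, -70.0, 1.0, -55.0   # LIF M-cell
sig_rho, sig_m = 0.1, 0.2                        # noise amplitudes

def I(t):                                        # step input as a stand-in
    return 20.0 if t > 200.0 else 0.0

rng = np.random.default_rng(0)
rho, V, spikes = rho_0, E_L, []
for i in range(int(T / dt)):
    t = i * dt
    # rate equation of the inhibitory population, cf. eq:inhib
    rho += (dt / tau_rho) * (-(rho - rho_0) + c_rho * I(t)) \
           + sig_rho * np.sqrt(dt) * rng.standard_normal()
    # LIF equation of the M-cell, cf. eq:mcell
    V += (dt / tau_m) * (-(V - E_L) + R_m * I(t) - rho) \
         + sig_m * np.sqrt(dt) * rng.standard_normal()
    if V >= V_t:                                 # threshold: spike and reset
        spikes.append(t)
        V = E_L
print(len(spikes), "spikes")
\end{verbatim}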
{"text": "%% CHAPTER 2\n% Initialization of the system\n%  - Analysis of the image in the HSV space\n%  - Writing down the classifier. Implementing it in class ParkSpot\n\n\\section{Initialization of the System}\n\n\t\\subsection{Classification with Histograms}\n\t\n\tHere is presented the algorithm used to initialize the system. The algorithm is\n\ta classifier developed upon the classification of two main parameters that\n\ttends to separate busy parking spot, from free ones. \n\t\n\tTaken an image converted in HSV color space, we could extract the mean and the\n\tstandard deviation for each component. The plot of the standard\n\tdeviation of the Saturation components against the mean of Value component, for\n\ta single frame, will give us this single situation. Parking spot status is\n\tknown, and we can see a strong separation (see figure \\ref{fig:separate}, on\n\tthe left).\n\t\n\tThe two different status could be separated with two degrees of freedom of a\n\tline. The inclination and offset of the line could be used to derive\n\trotation plus translation equation that will help us to discriminate between\n\tthe busy and the free parking spot. Given a point ${\\sigma_{2},\\mu_{3}}^{T}$, and a\n\tseparation line in the form $\\mu_{3} = \\alpha \\sigma_{2}+\\eta$, we could derive\n\tthis transformation:\n\t\\begin{equation}\n\t\t\\left\\{ \\begin{array}{c}\n\\xi_{1}\\\\\n\\xi_{2}\n\\end{array}\\right\\} =\\left[\\begin{array}{cc}\n\\cos(\\alpha) & \\sin(\\alpha)\\\\\n-\\sin(\\alpha) & \\cos(\\alpha)\n\\end{array}\\right]\\left\\{ \\begin{array}{c}\n\\sigma_{2}\\\\\n\\mu_{3}\n\\end{array}\\right\\} -\\left\\{ \\begin{array}{c}\n1\\\\\n1\n\\end{array}\\right\\} \\eta\n\t\\end{equation}\n\tthe algorithm has only to check:\n\t\\begin{equation}\n\t\t\\xi_{2} \\geq 0\n\t\\end{equation}\n\tif this condition is true, than the park spot is busy, else the parking spot is\n\tfree.\n\t% TODO Figura separazione singola e rispetto al tempo label: fig:separate\n\t\\begin{figure}[H] \\label{fig:separate}\n\t\t\\centering\n\t\t\t\\includegraphics[keepaspectratio, scale=0.4]{img/img1.pdf}\n\t\t\\caption{Classifier data in 2D representation}\n\t\\end{figure}\n\t\n\tReferring to the image on the right, in figure \\ref{fig:separate}, it is easy\n\tto understand that this classification is not robust in time, if not\n\texpanded with tuning of the parameters each frame (learning\n\talgorithm). Means tend to remain almost equal, but standard deviations tend to\n\tchange in time. The learning method should change the angle of the separation\n\tline (and also the discrimination algorithm) to get a good classification. We\n\thave decided to leave this method, for something that is little more \n\tsophisticated, and to discover more about the \\verb+openCV+ libraries.\n\t\n\tAs a drawback, we can also consider the fact that there will be no control on\n\tthe evolution of the classifier discrimination parameters.\n\t\n\t\\subsection{Diving in the Code}\n\tIn the code, this initialization script is called when a new object\n\t\\verb+ParkSpotObj+ is created, as \\verb+int ParkSpotObj::initialStatus()+\n\tmethod. The projection of the two characteristics is made by the single object,\n\tto follow the self-containment philosophy. 
The parameters that drive the\n\talgorithm are numbers 3 and 4 in the \\verb+param+ element of the\n\tconfiguration script.\n\t\n\tThe code is reasonably well optimized, because the histograms use only 32 bins\n\tand the area over which the status is evaluated is relatively small.\n", "meta": {"hexsha": "371c50b4f8392931341bc73d23ada9d53903dd66", "size": 3364, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/src/ch2.tex", "max_stars_repo_name": "MatteoRagni/ParkAssistant", "max_stars_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-04-30T13:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2018-10-11T21:00:44.000Z", "max_issues_repo_path": "doc/src/ch2.tex", "max_issues_repo_name": "MatteoRagni/ParkAssistant", "max_issues_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-04-11T03:51:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-04-11T06:58:32.000Z", "max_forks_repo_path": "doc/src/ch2.tex", "max_forks_repo_name": "MatteoRagni/ParkAssistant", "max_forks_repo_head_hexsha": "5d3941515ce2666d188564f8a65e0684d6979586", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2015-04-30T13:10:17.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-10T20:39:53.000Z", "avg_line_length": 43.1282051282, "max_line_length": 84, "alphanum_fraction": 0.7580261593, "num_tokens": 875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681013541611, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5767046794535526}}
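For illustration, here is a compact Python/OpenCV sketch of the classification rule described above (our reconstruction, not the actual \verb+ParkSpotObj+ implementation; \verb+alpha+ and \verb+eta+ stand in for parameters 3 and 4 of the configuration script):
\begin{verbatim}
import cv2
import numpy as np

def initial_status(bgr_roi, alpha, eta):
    # Return 1 if the parking-spot ROI looks busy, 0 if free.
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    sigma2 = float(np.std(s))    # std-dev of the Saturation channel
    mu3 = float(np.mean(v))      # mean of the Value channel
    # second row of the rotation-plus-translation transformation
    xi2 = -np.sin(alpha) * sigma2 + np.cos(alpha) * mu3 - eta
    return 1 if xi2 >= 0 else 0
\end{verbatim}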
{"text": "\\subsection{Electrostatic interaction energy}\n\\label{sec:distributed_multipoles}\n\\index{distributed multipoles}\n\\index{site energy!distributed multipoles}\n\nWe represent the molecular charge density by choosing multiple expansion sites (``polar sites'') per molecule in such a way as to accurately reproduce the molecular electrostatic potential (ESP), with a set of suitably chosen multipole moments $\\{Q_{lk}^a\\}$ (in spherical-tensor notation) allocated to each site. The expression for the electrostatic interaction energy between two molecules $A$ and $B$ in the multi-point expansion includes an implicit sum over expansion sites $a\\epsilon A$ and $b\\epsilon B$,\n\\begin{align}\n U_{AB} & = \\sum_{a\\epsilon A} \\sum_{b\\epsilon B} \\hat{Q}_{l_1k_1}^a T_{l_1k_1l_2k_2}^{a,b} \\hat{Q}_{l_2k_2}^b \\equiv  \\hat{Q}_{l_1k_1}^a T_{l_1k_1l_2k_2}^{a,b} \\hat{Q}_{l_2k_2}^b,\n \\label{equ:mol_distributed_U}\n\\end{align}\nwhere we have used the Einstein sum convention for the site indices $a$ and $b$ on the right-hand side of the equation, in addition to the sum convention that is in place for the multipole-moment components $t\\equiv l_1 k_1$ and $u\\equiv l_2 k_2$. The $T_{l_1k_1l_2k_2}^{a,b}$ are tensors that mediate the interaction between a multipole component $l_1 k_1$ on site $a$ with the moment $l_2 k_2$ on site $b$. If we include the molecular environment into a perturbative term $W$ to enter in the single-molecule Hamiltonian, the above expression is exactly the first-order correction to the energy where the quantum-mechanical detail has been absorbed in classical multipole moments.\n\nThe are a number of strategies how to arrive at such a collection of {\\em distributed multipoles}. They can be classified according to whether the multipoles are derived (a) from the electrostatic potential generated by the SCF charge density or (b) from a decomposition of the wavefunction itself. Here, we will only draft two of those approaches, CHELPG~\\cite{breneman_determining_1990} from category (a) and DMA~\\cite{stone_distributed_1985} from category (b).\n\nThe CHELPG (CHarges from ELectrostatic Potentials, Grid-based) method relies on performing a least-squares fit of atom-placed charges to reproduce the electrostatic potential as evaluated from the SCF density on a regularly spaced grid~\\cite{breneman_determining_1990}. The fitted charges result from minimizing the Lagrangian function~\\cite{chirlian_atomic_1987}\n\\begin{align}\n z(\\{q_i\\}) = \\sum_{k=1}^M \\left( \\phi(\\vec{r}_k) - \\sum_{i=1}^N \\frac{1}{4\\pi\\varepsilon_0} \\frac{q_i}{|\\vec{r}_i-\\vec{r}_k|} \\right) + \\lambda \\left( q_\\textrm{mol} - \\sum_{i=1}^N q_i \\right),\n\\end{align}\nwith $M$ grid points, $N$ atomic sites, the set of atomic partial charges $\\{q_i\\}$ and the SCF potential $\\phi$. The Lagrange multiplier $\\lambda$ constrains the sum of the fitted charges to the molecular charge $q_\\textrm{mol}$. The main difference from other fitting schemes~\\cite{singh_approach_1984} is the algorithm that selects the positions at which the potential is evaluated (we note that the choice of grid points can have substantial effects especially for bulky molecules).\nClearly, the CHELPG method can be (and has been) extended to include higher atomic multipoles. 
It should be noted, however, that even the inclusion of atomic dipoles hardly improves the parametrization, and can in fact be harmful to its conformational stability.\n\nThe Distributed-Multipole-Analysis (DMA) approach~\\cite{stone_distributed_1985, stone_distributed_2005}, developed by A. Stone, operates directly on the quantum-mechanical density matrix, expanded in terms of atom- and bond-centered Gaussian functions $\\chi_\\alpha = R_{LK}(\\vec{x}-\\vec{s}_\\alpha) \\exp[-\\zeta(\\vec{x}-\\vec{s}_\\alpha)^2]$,\n\\begin{align}\n \\rho(\\vec{x}) = \\sum_{\\alpha,\\beta} \\rho_{\\alpha\\beta} \\chi_\\alpha(\\vec{x}-\\vec{s}_\\alpha) \\chi_\\beta(\\vec{x}-\\vec{s}_\\beta). \n\\end{align}\nThe aim is to compute multipole moments in a distributed fashion: Using the fact that the overlap product $\\chi_\\alpha \\chi_\\beta$ of two Gaussian basis functions is itself a Gaussian centered at $\\vec{P} = (\\zeta_\\alpha \\vec{s}_\\alpha + \\zeta_\\beta \\vec{s}_\\beta) / (\\zeta_\\alpha + \\zeta_\\beta)$, it is possible to proceed in two steps: First, we compute the multipole moments associated with a specific summand in the density matrix, referred to the overlap center $\\vec{P}$:\n\\begin{align}\n Q_{LK}[\\vec{P}] = - \\int R_{LK}(\\vec{x}-\\vec{P}) \\rho_{\\alpha\\beta} \\chi_\\alpha \\chi_\\beta d^3\\!x.\n\\end{align}\nSecond, we transfer the resulting $Q_{lk}[\\vec{P}]$ to the position $\\vec{S}$ of a polar site according to the rule~\\cite{stone_distributed_1985}\n\\begin{align}\n Q_{nm}[\\vec{S}] = \\sum_{l=0}^L \\sum_{k=-l}^l \\left[ \\left(\\begin{array}{c} n+m \\\\ l+k \\end{array}\\right)\\left(\\begin{array}{c} n-m \\\\ l-k \\end{array}\\right) \\right]^{1/2} R_{n-l,m-k}(\\vec{S}-\\vec{P})\\cdot Q_{lk}[\\vec{P}].\n\\end{align}\nNote how this requires a rule for the choice of the expansion site to which the multipole moment should be transferred. More recently~\\cite{stone_distributed_2005}, the nearest-site algorithm, which allocates the multipole moments to the site closest to the overlap center, was replaced for diffuse functions by an algorithm based on a smooth weighting function in conjunction with grid-based integration methods in order to decrease the basis-set dependence of the resulting set of distributed multipoles.\n\nOne important advantage of the DMA approach over fitting algorithms such as CHELPG or Merz-Kollman (MK) is that higher-order moments can also be derived without too large an ambiguity.\n\nThe `mps' file format used by VOTCA for the definition of distributed multipoles (as well as point polarizabilities, see subsequent section) is based on the GDMA punch format of A. Stone's GDMA program~\\cite{stone_distributed_2005} (the punch output file can be immediately plugged into VOTCA without any conversion). 
Furthermore, the log-files of different QM packages (currently \\gaussian, \\turbomole and \\nwchem) may be fed into the \\toolref{log2mps} \\tool, which will subsequently generate the appropriate mps-file.\n\n\\votcacommand{Read in ESP charges from a QM log file}{\\cmdlogmps}", "meta": {"hexsha": "aec80f15cc4437c87601f8003c6255b133f99386", "size": 6189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/theory/gdma.tex", "max_stars_repo_name": "mbarbry/ctp", "max_stars_repo_head_hexsha": "8461ba9d012c7e171a05e0b114b59d0523fc9a56", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/theory/gdma.tex", "max_issues_repo_name": "mbarbry/ctp", "max_issues_repo_head_hexsha": "8461ba9d012c7e171a05e0b114b59d0523fc9a56", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/theory/gdma.tex", "max_forks_repo_name": "mbarbry/ctp", "max_forks_repo_head_hexsha": "8461ba9d012c7e171a05e0b114b59d0523fc9a56", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 154.725, "max_line_length": 681, "alphanum_fraction": 0.7666828244, "num_tokens": 1755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.875787001374006, "lm_q2_score": 0.6584175139669997, "lm_q1q2_score": 0.5766335002092864}}
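To make the CHELPG fitting step concrete, the following Python sketch (our illustration, unrelated to the VOTCA code base; units are chosen with $1/4\pi\varepsilon_0=1$ for brevity) solves the constrained least-squares problem above via the corresponding KKT linear system:
\begin{verbatim}
import numpy as np

def fit_charges(r_atoms, r_grid, phi, q_mol=0.0):
    # ESP design matrix: A[k, i] = 1/|r_i - r_k|
    A = 1.0 / np.linalg.norm(
        r_grid[:, None, :] - r_atoms[None, :, :], axis=2)
    N = r_atoms.shape[0]
    # normal equations plus one Lagrange multiplier that
    # constrains the total charge to q_mol
    M = np.zeros((N + 1, N + 1))
    M[:N, :N] = A.T @ A
    M[:N, N] = M[N, :N] = 1.0
    rhs = np.concatenate([A.T @ phi, [q_mol]])
    return np.linalg.solve(M, rhs)[:N]

# sanity check: recover the charges of a point-charge "molecule"
atoms = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q_true = np.array([0.3, -0.3])
grid = np.random.default_rng(0).uniform(-4, 4, (500, 3)) + 8.0
phi = (1.0 / np.linalg.norm(grid[:, None] - atoms[None], axis=2)) @ q_true
print(fit_charges(atoms, grid, phi))   # ~ [0.3, -0.3]
\end{verbatim}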
{"text": "\n\\chapter{Data Restoration on the Sphere}\n\\label{ch_restore}\n\\index{Denoising}\n\\index{Filtering}\n\n\\label{sect_exp}\n\\section{Introduction}\n\\index{wavelet!denoising}\n\\index{curvelet!denoising}\n\nWavelets and Curvelets have been used successfully for image denoising \\emph{via} non-linear filtering or \nthresholding methods \\cite{starck:book02,starck:sta01_3}. Hard thresholding, for instance, consists in \nsetting all insignificant coefficients (\\emph{i.e.} coefficients with an absolute value below a given \nthreshold) to zero. In practice, we need to estimate the noise standard deviation $\\sigma_j$ in each band \n$j$ and a wavelet (or curvelet) coefficient $w_j$ is significant if $\\mid w_j \\mid > k \\sigma_j$, where $k$ \nis a user-defined parameter, typically chosen between 3 and 5. The $\\sigma_j$ estimation in band $j$ can be \nderived from simulations \\cite{starck:book02}. Denoting $D$ the noisy data and $\\delta$ the thresholding \noperator, the filtered data $\\tilde D$ are obtained by : \n\\begin{eqnarray}\n {\\tilde D} =    {\\cal R} \\delta( {\\cal T} D)\n\\end{eqnarray}\nwhere ${\\cal T}$ is the wavelet (resp. curvelet) transform operator and ${\\cal R}$ is the wavelet (resp. curvelet) reconstruction operator. \n\n\n\\section{Significant Wavelet Coefficients}\n\\label{ch_noise}\n\\subsection{Definition}\n\\index{noise}\n\\index{wavelet!significant coefficient}\n\nIn most applications, it is necessary to know if a wavelet coefficient is due to signal (i.e.\\ it is significant) or to noise. \n\nThe wavelet (resp. curvelet) transform yields a set of resolution-related views of the input image. \nA wavelet (resp. curvelet) band at level $j$ has coefficients given by $w_{j,k}$. If we obtain the \ndistribution of the coefficient $w_{j,k}$ for each band of the decomposition, based on the noise, \nwe can introduce a statistical significance test for this coefficient. This procedure is the classical \nsignificance-testing one. Let ${\\cal H}_0$ be the hypothesis that the image is locally constant at scale $j$.  \nRejection of hypothesis ${\\cal H}_0$ depends (for positive coefficient values) on:\n\\begin{eqnarray}\nP = Prob(\\mid w_{j,k} \\mid \\ < \\ \\tau \\mid {\\cal H}_0)  \n\\end{eqnarray}\nThe detection threshold, $\\tau$, is defined for each scale. Given an estimation threshold, $\\epsilon$, \nif $P = P(\\tau) > \\epsilon$ the null hypothesis is not excluded. Although non-null, the value of the \ncoefficient could be due to noise. On the other hand, if $P < \\epsilon$, the coefficient value cannot be due to \nthe noise alone, and so the null hypothesis is rejected. 
In this case, a significant coefficient has been detected.\n\n\\subsection{Noise Modeling}\n\\index{noise}\n\\index{noise!Gaussian}\nIf the distribution of $w_{j,l}$ is Gaussian, with zero mean and standard deviation $\\sigma_j$, we have the probability density\n\\begin{eqnarray}\np(w_{j,l}) = \\frac{1}{\\sqrt{2\\pi} \\sigma_j} e^{{- w_{j,l}^2}/2\\sigma^2_j} \n\\end{eqnarray}\nRejection of hypothesis ${\\cal H}_0$ depends (for a positive coefficient value) on:\n\\begin{eqnarray}\nP = Prob( w_{j,l} > W) = \\frac{1}{\\sqrt{2\\pi} \\sigma_j} \\int^{+\\infty}_{w_{j,l}} e^{-W^2/2\\sigma^2_j} dW \n\\end{eqnarray}\nand if the coefficient value is negative, it depends on \n\\begin{eqnarray}\nP = Prob( w_{j,l} < W) = \\frac{1}{\\sqrt{2\\pi} \\sigma_j} \\int^{w_{j,l}}_{-\\infty} e^{-W^2/2\\sigma^2_j} dW \n\\end{eqnarray}\n\nGiven stationary Gaussian noise, it suffices to compare $w_{j,l}$ to \n\\index{stationary signal}\n$k \\sigma_j$.  Often $k $ is chosen as 3, which corresponds approximately to $\\epsilon = 0.002$.  \nIf $w_{j,l}$ is small, it is not significant and could be due to noise. If $w_{j,l}$ is large, it is significant:\n\\begin{eqnarray}\n\\begin{array}{l}\n\\mbox{ if }  \\mid  w_{j,l} \\mid \\ \\geq \\ k \\sigma_j \\ \\ \\mbox{ then } w_{j,l}   \\mbox{ is significant } \\\\ \n\\mbox{ if }  \\mid  w_{j,l} \\mid \\ < \\ k \\sigma_j \\ \\ \\mbox{ then }  w_{j,l} \\mbox{ is not significant }\n\\end{array}\n\\end{eqnarray}\n\nSo we need to estimate, in the case of Gaussian noise models, the noise standard deviation at each scale. \nThese standard deviations can be determined analytically, but the calculations can become complicated.  \n\nThe appropriate value of $\\sigma_j$ in the succession of wavelet planes is assessed from the standard deviation \nof the noise $\\sigma_N$ in the original data $D$, and from study of the noise in the wavelet space. This study \nconsists of simulating a data set containing Gaussian noise with a standard deviation equal to 1, and taking the \nwavelet transform of this data set. Then we compute the standard deviation $\\sigma^e_j$ at each scale. We get a curve \n$\\sigma^e_j$ as a function of $j$, giving the behavior of the noise in the wavelet space (Note that if we had used \nan orthogonal wavelet transform, this curve would be linear). Due to the properties of the wavelet (resp. curvelet) \ntransform, we have $ \\sigma_j = \\sigma_N \\sigma^e_j $. The noise standard deviation at scale $j$ of the data is equal \nto the noise standard deviation $\\sigma_N$ multiplied by the noise standard deviation at scale $j$ of the simulated data.\n\n\\subsection{Automatic Estimation of Gaussian Noise}\n\\subsubsection{$k$-sigma clipping}\n\\index{sigma clipping}\n\\index{noise!sigma clipping}\n\\index{noise}\nThe Gaussian noise $\\sigma_N$ can be estimated automatically in a data set $D$. This estimation is particularly important, \nbecause all the noise standard deviations $\\sigma_j$ in the scales $j$ are derived from $\\sigma_N$. Thus an error associated \nwith $\\sigma_N$ will introduce an error on all $\\sigma_j$. Noise is therefore more usefully estimated in the high frequencies, \nwhere it dominates the signal. The resulting method consists first of filtering the data $D$ with an average filter or the \nmedian filter and subtracting from $D$ the filtered signal $F$: $S = D - F $. In our case, we replace $S$ by the first scale \nof the wavelet transform ($S = w_1$), which is more convenient from the computation time point of view. 
The histogram of $S$ \nshows a Gaussian peak around 0. A k-sigma clipping is then used to reject pixels where the signal is significantly large. \nWe denote by $S^{(1)}$ the subset of $S$ which contains only the pixels such that $\\mid S_l \\mid \\ < k \\sigma_S$, where $\\sigma_S$ \nis the standard deviation of $S$, and $k$ is a constant generally chosen equal to 3. By iterating, we obtain the subset $S^{(n+1)}$ \nsatisfying $\\mid S^{(n)}_l \\mid \\ < k \\sigma_{S^{(n)}}$, where $\\sigma_{S^{(n)}}$ is the noise standard deviation of $S^{(n)}$. \nRobust estimation of the noise $\\sigma_1$ in $w_1$ (as $S = w_1$) is now obtained by calculation of the standard deviation of \n$S^{(n)}$ ($\\sigma_1 = \\sigma_{S^{(n)}}$). In practice, three iterations are enough, and accuracy is generally better than $5$\\%.\n$\\sigma_N$ is finally calculated by: \n\\be\n\\sigma_N = \\frac{\\sigma_1}{\\sigma^e_1} = \\frac{\\sigma_{S^{(n)}} }{\\sigma^e_1}\n\\ee\n\n\n\\subsection{Correlated Noise}\n\\index{median!median absolute deviation}\n\\index{MAD}\n\\index{noise!median absolute deviation}\n\\index{noise}\nIn this case, the data can be treated as for the Gaussian case, but the noise standard deviation $\\sigma_j$ at scale $j$ \nis calculated independently at each scale. Two methods can be used: \n\\begin{enumerate}\n\\item $\\sigma_j$ can be derived from a k-sigma clipping method applied at scale $j$.\n\\item The median absolute deviation, MAD, can be used as an estimator of the noise standard deviation:\n\\begin{eqnarray}\n\\sigma_j = \\mbox{median}( \\mid w_j \\mid ) / 0.6745\n\\end{eqnarray}\n\\end{enumerate}\n\n\\section{Thresholding}\nMany filtering methods have been proposed in the last ten years. {\\em Hard thresholding} consists of setting to 0 all \nwavelet coefficients which have an absolute value lower than a threshold $T_j$ (non-significant wavelet coefficients):\n\\begin{eqnarray}  \\tilde w_{j,k} = \n\\left\\{ \\begin{array}{ll} w_{j,k} &  \\mbox{ if } \\mid w_{j,k} \\mid \\geq T_j  \\nonumber  \\\\ \n0 &  \\mbox{ otherwise}  \\end{array} \\right. \n\\end{eqnarray}\nwhere $w_{j,k}$ is a wavelet coefficient at scale $j$ and at spatial position $k$. \n\n{\\em Soft thresholding} consists of replacing each wavelet coefficient by the value $\\tilde w$ where\n\\begin{eqnarray}  \\tilde w_{j,k} = \n\\left\\{ \\begin{array}{ll} sgn(w_{j,k}) ( \\mid w_{j,k} \\mid - T_j)    &  \\mbox{ if } \\mid w_{j,k} \\mid \\geq T_j \\nonumber  \\\\ \n0 &  \\mbox{ otherwise}  \\end{array} \\right. \n\\end{eqnarray} \nThis operation is generally written as:\n\\begin{eqnarray} \n \\tilde w_{j,k} = \\mathrm{soft}( w_{j,k})  = sgn(w_{j,k}) ( \\mid w_{j,k} \\mid - T_j)_{+}\n\\end{eqnarray} \nwhere $(x)_{+} = MAX(0,x)$.\n\nWhen the discrete orthogonal wavelet transform is used instead of the \\`a trous algorithm, it is interesting to note\nthat the hard and soft thresholded estimators are solutions of the following minimization problems:\n\\begin{eqnarray*}\n  \\tilde w  =   \\mathrm{arg}_w \\min {1 \\over 2} \\parallel D - {\\cal W}^{-1} w \\parallel^2_{l^2} + \n \\lambda \\parallel w \\parallel_{l^0} & & \\mbox{\\bf   hard threshold} \\nonumber \\\\\n  \\tilde w   =   \\mathrm{arg}_w \\min {1 \\over 2} \\parallel D - {\\cal W}^{-1} w \\parallel^2_{l^2} + \n \\lambda \\parallel w \\parallel_{l^1} & & \\mbox{\\bf   soft threshold}  \n\\end{eqnarray*}\nwhere $D$ is the input data, ${\\cal W}$ the wavelet transform operator, and $l^0$ indicates the limit of $l^\\delta$ \nwhen $\\delta \\rightarrow 0$. 
In fact, this counts the number of non-zero elements in the sequence.\n\\index{thresholding!hard}\n\\index{thresholding!soft}\n\\index{wavelet!hard threshold}\n\\index{wavelet!soft threshold}\n\nAs described before, in the case of Gaussian noise, $T_j = K \\sigma_j$, where $j$ is the scale of the wavelet coefficient, \n$\\sigma_j$ is the noise standard deviation at the scale $j$, and $K$ is a constant generally chosen equal to 3.\n\nOther threshold methods have been proposed, like the {\\em universal threshold} \n\\index{universal threshold}\n\\index{SURE}\n\\index{thresholding!universal threshold}\n\\index{thresholding!SURE}\n\\cite{rest:donoho93_1,rest:donoho93_2}, or the SURE (Stein Unbiased Risk Estimate) method \\cite{rest:donoho95},\nbut they generally do not yield as good results as the hard thresholding method based on the significant coefficients.  \nFor astronomical data, soft thresholding should never be used because it leads to a photometry loss associated with all \nobjects, which can easily be verified by looking at the residual map (i.e.\\ data $-$ filtered data). Concerning the \nthreshold level, the universal threshold corresponds to a minimum risk. The larger the number of pixels, the larger \nthe risk, and it is normal that the threshold $T$ depends on the number of pixels ($T = \\sqrt{2\\log n} \\sigma_j$, \n$n$ being the number of pixels). The $K\\sigma$ threshold corresponds to a false detection probability, the probability \nof detecting a coefficient as significant when it is due to the noise. The $3\\sigma$ value corresponds to 0.27 \\% false detection.\n \n\\begin{figure*}\n% \\includegraphics[height=8truecm,width=6truecm]{fig_uwt_sphere.pdf}\n% \\includegraphics[height = 5 in]{fig_back_cur_sphere.pdf}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_synchrotron_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n\\psfig{figure=fig_synchrotron_noise5_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n}}\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_pwt5_synchrotron_noise5_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n\\psfig{figure=fig_resi_pwt5_synchrotron_noise5_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n}}\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_pcur5_synchrotron_noise5_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n\\psfig{figure=fig_resi_pcur5_synchrotron_noise5_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n}}\n}\n\\caption{\\textbf{Denoising.} Upper left and right: simulated synchrotron image and the same image with additive \nGaussian noise (\\emph{i.e.} the simulated data). Middle: pyramidal wavelet filtering and residual. Bottom: pyramidal \ncurvelet filtering and residual.{ On such data, presenting very anisotropic features, the residual with a curvelet \ndenoising is cleaner than with the wavelet denoising.}}\n\\label{Figure:sync_filter}\n\\end{figure*}\n\nFigure~\\ref{Figure:sync_filter} describes the setting and the results of a simulated denoising experiment: \nupper left, the original simulated map of the synchrotron emission (renormalized between 0 and 255); upper right, \nthe same image plus additive Gaussian noise ($\\sigma=5$); middle, the pyramidal wavelet filtered image and the \nresidual (i.e. noisy data minus filtered data); bottom, the pyramidal curvelet transform filtered image and the \nresidual. 
A $5 \\sigma_j$ detection threshold was used in both cases. On such data, presenting very anisotropic \nfeatures, the curvelets produce better results than the wavelets.\n\n\n\\section{The Combined Filtering Method on the Sphere}\n\\index{wavelet!combined filtering}\n\\index{curvelet!combined filtering}\n\\index{combined filtering method}\n\n%\\voffset -1truecm\n{\\small\n\\begin{table*}[htb]\n\\baselineskip=0.4cm\n\\begin{center}\n\\begin{tabular}{lccccc} \\hline \\hline\nMethod                          &  Error Standard Deviation     &  SNR (dB)    \\\\ \\hline \\hline\nNoisy map                       & 5.  &      13.65  \\\\\nWavelet                         & 1.30  &    25.29  \\\\\nCurvelet                        & 1.01  &    27.60  \\\\\nCFM                             & 0.86  &    28.99  \\\\ \\hline\n\\hline\n\\end{tabular}\n\\caption{Table of error standard deviations and SNR values after filtering the synchrotron noisy map (Gaussian white noise, sigma = 5) \nby the wavelet, the curvelet and the combined filtering method. Images are available at \"http://jstarck.free.fr/mrs.html\".}\n% \\vspace{0.5cm}aa_sphere05\n\\label{comptab_sync}\n\\end{center}\n\\end{table*}\n}\n\n\\begin{figure}\n% \\includegraphics[height=8truecm,width=6truecm]{fig_uwt_sphere.pdf}\n% \\includegraphics[height = 5 in]{fig_back_cur_sphere.pdf}\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_cbf5_synchrotron_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n\\psfig{figure=fig_resi_cbf5_synchrotron_bw.pdf,bbllx=0.5cm,bblly=9.5cm,bburx=20cm,bbury=20cm,width=8cm,height=4.5cm,clip=}\n}}\n\\caption{Denoising. Combined Filtering Method (pyramidal wavelet and pyramidal curvelet) and residual.}\n\\label{Figure:sync_cbf_filter}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\n\\hbox{\n% \\psfig{figure=fig_result_cbf_face5_bw.ps,bbllx=1.5cm,bblly=4.5cm,bburx=20cm,bbury=23cm,height=10cm,width=10cm,clip=}\n\\psfig{figure=fig_cmp_fil_synchrotron_face6_bw.pdf,bbllx=1.5cm,bblly=12.5cm,bburx=10.5cm,bbury=25.5cm,height=19.5cm,width=13.5cm,clip=}\n}}\n\\caption{{ Combined Filtering Method, face 6 in the Healpix representation of the image shown in figure~\\ref{Figure:sync_cbf_filter}. \nFrom top to bottom and left to right, respectively the a) original image face, b) the noisy image, c) the combined filtered image, \nd) the combined filtering residual, e) the wavelet filtering residual and f) the curvelet filtering residual.}}\n\\label{Figure:sync_face_cbf_filter}\n\\end{figure}\n\nAlthough the results obtained by simply thresholding the curvelet expansion are encouraging, there is of course ample room for further\nimprovement. A quick inspection of the residual images for both the wavelet and curvelet transforms shown in Figure~\\ref{Figure:sync_filter}\nreveals the existence of very different features. For instance, wavelets do not restore long features with high fidelity, while curvelets\nare seriously challenged by isotropic or small features. Each transform has its own area of expertise, and this complementarity is of great \npotential. The Combined Filtering Method (CFM) \\cite{starck:spie01a} allows us to benefit from the advantages of both transforms. This iterative \nmethod detects the significant coefficients in both the wavelet domain and the curvelet domain and guarantees that the reconstructed map will \ntake into account any pattern which is detected as significant by either of the transforms. 
A full description of the algorithm is given in Appendix B.\nFigure~\\ref{Figure:sync_cbf_filter} shows the CFM denoised image and its residual. { Figure~\\ref{Figure:sync_face_cbf_filter} shows one face \n(face 6) of the following Healpix images: upper left, original image; upper right, noisy image; middle left, restored image after denoising \nby the combined transform; middle right, the residual; bottom left and right, the residual using respectively the curvelet and the wavelet \ndenoising method. } The results are reported in Table~\\ref{comptab_sync}. The residual is much better when the combined filtering is applied, \nand no feature can be detected by eye any more in the residual. This was not the case for either the wavelet or the curvelet filtering.\n\n% \\section{Deconvolution ?}\n", "meta": {"hexsha": "7ce334f670aec35ab1579f6797c4b422fa612fe7", "size": 16868, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_isap/archive_tex/restore.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_isap/archive_tex/restore.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_isap/archive_tex/restore.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.6762589928, "max_line_length": 151, "alphanum_fraction": 0.744427318, "num_tokens": 4945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.5765103766970326}}
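As a minimal numerical illustration of the noise-estimation and thresholding building blocks described in this chapter (our sketch; it operates on a flat image with a simple high-pass band instead of the actual wavelet or curvelet transforms on the sphere):
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def ksigma_clip(s, k=3.0, n_iter=3):
    # iterative k-sigma clipping estimate of the noise std-dev
    for _ in range(n_iter):
        s = s[np.abs(s) < k * s.std()]
    return s.std()

def mad_sigma(w):
    # median-absolute-deviation estimator, sigma = MAD/0.6745
    return np.median(np.abs(w)) / 0.6745

def hard_threshold(w, t):
    return np.where(np.abs(w) >= t, w, 0.0)

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# toy high-frequency band S = D - F from a noisy image D
rng = np.random.default_rng(1)
D = rng.normal(0.0, 5.0, (256, 256))
S = D - median_filter(D, size=3)
# per-band noise estimates; the text then rescales such estimates
# by sigma^e_j, obtained from a unit-noise simulation, to get sigma_N
print(ksigma_clip(S.ravel()), mad_sigma(S))
\end{verbatim}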
{"text": "\\documentclass{ximera}\n\\input{../preamble}\n\\title{Applications of ODEs}\n%%%%%\\author{Philip T. Gressman}\n\n\\begin{document}\n\\begin{abstract}\nWe study some sample applications of ODEs.\n\\end{abstract}\n\\maketitle\n\n\\section*{(Videos) Calculus: Single Variable}\n\\youtube{54HrJmeON24}\n\\youtube{xZ3CxLxljLA}\n\n\n\\section*{Examples}\n\n\\begin{example}\nNewton's law of cooling states that the rate of change of the temperature of an object is proportional to the \\textit{difference} between the object's temperature and the ambient temperature.\nFor example, if the ambient temperature is $20$ degrees Celsius, then\n\\[ \\frac{dT}{dt} = -k \\left( T(t) - \\answer{20} \\right). \\]\n(Here $k$ is a positive number is called the \\textit{heat transfer coefficient}; it depends on the physical properties of the object and the surrounding environment).\n\\begin{itemize}\n\\item The equation is linear:\n\\[ \\frac{dT}{dt} + k T = \\answer{20} k. \\]\n\\item In this case, $P(t) = \\answer{k}$ and the integrating factor is $\\answer{e^{k t}}$ (since $k$ is a constant with respect to $t$).  The general solution is\n\\[ T(t) = \\answer{20} + C \\answer{e^{-k t}}. \\]\n\\item If we subtract ambient temperature from both sides, this tells us that the temperature difference decays exponentially at some rate determined by the unknown constant $k$.\n\\item We can generally solve for $k$ if we have enough information.  For example, if $T(0) = 100$, then $C = \\answer{80}$.  If we also know that $T(10) = 60$, then $C e^{-10 k} = 40$ (because $40$ is the difference of $60$ and $20$), which means $k = \\answer{(\\ln 2)/10}$. Another way of understanding this exponential decay is to observe that the temperature difference for this particular object will be cut in half every ten minutes.\n\\item Now that we know $k$, we can compute $T(t)$ for all time. For example, $T(20) = \\answer{40}$, which we could see by plugging in directly to the solution or by merely observing that the temperature difference will be cut in half $\\answer{2}$ times in the course of 20 minutes.\n\\end{itemize}\n\\end{example}\n\n\n\\begin{example}\nFor the family of curves in the plane given by \n\\[ x^4 + y^2 = C \\]\n(illustrated by the purple oval-shaped curves in the image below), \nfind a formula for the orthogonal trajectories (shown in red).\n\\begin{center}\n\\begin{image}\n\\includegraphics[width=4in]{images/orthoX01.png}\n\\end{image}\n\\end{center}\n\\begin{itemize}\n\\item The first step is to find a first-order ODE that is satisfied by the family. This involves differentiating with respect to $x$. We get\n\\[ 4 x^3 + 2y y' = 0, \\ \\text{ meaning } y' = \\answer{-\\frac{2x^3}{y}}. \\]\n\\item If $m$ and $m'$ are slopes of orthogonal lines, then $m m' = -1$. So the orthogonal trajectories satisfy an ODE similar to the one above except that the formula for $y'$ is replaced by its negative reciprocal, i.e.,\n\\[ y' = \\answer{\\frac{y}{2x^3}}. \\]\n\\item This is a separable ODE. It separates to\n\\[ \\frac{dy}{\\answer{y}} = \\frac{1}{2} \\frac{dx}{\\answer{x^3}}. \\]\nIntegrating both sides gives\n\\[ \\ln |y| = - \\frac{1}{4} x^{-2} + C. 
\\]\nIf we exponentiate both sides and reexpress the arbitrary constant, we get\n\\[ y = C \\answer{e^{-\\frac{1}{4x^2}}} \\]\n\\end{itemize}\n\\end{example}\n\n\n\\begin{example}\nFor the family of curves in the plane given by \n\\[ y = \\frac{1}{C + x^2} \\]\n(shown in purple)\nfind a formula for the orthogonal trajectories (shown in red).\n\\begin{center}\n\\begin{image}\n\\includegraphics[width=4in]{images/orthoX02.png}\n\\end{image}\n\\end{center}\n\\begin{itemize}\n\\item We first differentiate with respect to $x$:\n\\[ y' = \\answer{ - \\frac{2x}{(x^2+C)^2}}. \\]\n\\item Next we must eliminate $C$ from the equation for $y'$ by using the formula $y = 1/ (x^2+C)$. (For example, solve for $C$ in terms of $x$ and $y$ and then substitute this in to your expression for $y'$.)\n\\[ y' = \\answer{ - 2 x y^2} \\]\n(your answer here will depend on the variables $x$ and $y$ but should not explicitly depend on $C$).\n\\item Then we write a new ODE whose slope is the negative reciprocal of the expression we just found:\n\\[ y' = \\answer{\\frac{1}{2xy^2}}. \\]\nThis is a separable ODE:\n\\[ \\answer{y^2} dy = \\frac{1}{2} \\frac{dx}{\\answer{x}}. \\]\n\\item Integrate both sides:\n\\[ \\answer{\\frac{y^3}{3}} = \\frac{1}{2} \\answer{\\ln |x|} + C. \\]\n(Don't forget absolute values on logarithms.)\n\\end{itemize}\n\\end{example}\n\n\\begin{example}\nA large 100 liter tank is initially filled with fresh water. At time $t=0$, a technician begins pouring in $2$ liters per minute of sugar water which is at a concentration of 4 grams per liter. At the same time, another technician opens a valve at the bottom of the tank and begins to drain the tank at a rate of $1$ liter per minute. Assuming the whole tank is kept well-mixed at all times, what is the concentration of the liquid in the tank after 100 minutes?\n\\begin{itemize}\n\\item It is always easiest to write an ODE for the total amount $A$ of sugar in the tank (in grams). The concentration can be deduced later by dividing amount by volume.\n\\item The first step is to find the function $V(t)$ for the total volume of solution in the tank.  The rate of fresh solution coming in is $\\answer{2}$ liters per minute and the rate going out is $\\answer{1}$ liter(s) per minute. So the net flow in of all liquid is $\\answer{1}$ liter(s) per minute. This means that at time $t$, $V(t) = \\answer{100 + t}$.\n\\item Fill in the blanks: $R_{\\mathrm{in}}$ is the rate of sugar coming in (measured in grams per minute), $F_{\\mathrm{out}}$ is the \\textit{volume} rate of flow out from the bottom of the tank. Then\n\\[ \\frac{dA}{dt} = R_{\\mathrm{in}} - F_{\\mathrm{out}} \\frac{A(t)}{V(t)} = \\answer{8} - \\answer{1} \\frac{A(t)}{\\answer{t+100}}.\\]\n\\item The equation is linear:\n\\[ \\frac{dA}{dt} + \\answer{ \\frac{1}{100 + t}} A(t) = \\answer{8}. \\]\n\\item We compute the integrating factor:\n\\[ I(t) = e^{\\int P(t) dt} = e^{\\int \\answer{(t+100)^{-1}} dt} = \\answer{t+100}. \\]\n(Note: logarithms don't need absolute values here because $t  > 0$.)\n\\item After multiplying both sides by $I(t)$, we have\n\\[ \\left( \\answer{(t+100)} A(t) \\right)' = \\answer{8 (t+100)}. \\]\n\\item Integrating both sides gives\n\\[ (t+100) A(t) = 4 (t+100)^2 + C. \\]\nAt $t = 0$, $A(0)$ is given to be zero, so $C = \\answer{-40000}$. The full solution of the initial value problem is then\n\\[ A(t) = \\answer{4 (t + 100) - \\frac{40000}{t+100}}. \\]\n\\item $A(100) = \\answer{600}$ grams. 
Furthermore $V(100) = \\answer{200}$ liters, so the concentration at time $t = 100$ minutes is $\\answer{3}$ grams per liter.\n\\end{itemize}\n\\end{example}\n\n\n\\end{document}\n", "meta": {"hexsha": "7ad9a2892a9112e8a4dbefb66b5a773b3de5749a", "size": 6529, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "odes/31applywarm.tex", "max_stars_repo_name": "ptgressman/math104", "max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "odes/31applywarm.tex", "max_issues_repo_name": "ptgressman/math104", "max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "odes/31applywarm.tex", "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.8198198198, "max_line_length": 462, "alphanum_fraction": 0.6910706081, "num_tokens": 2022, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.5765103766970326}}
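The mixing-tank example can also be double-checked symbolically; here is a short SymPy sketch (ours, not part of the course materials):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.Function('A')

# dA/dt = 8 - A/(100 + t), with A(0) = 0
ode = sp.Eq(A(t).diff(t), 8 - A(t) / (100 + t))
sol = sp.dsolve(ode, A(t), ics={A(0): 0}).rhs

# matches A(t) = 4(t + 100) - 40000/(t + 100)
assert sp.simplify(sol - (4*(t + 100) - 40000/(t + 100))) == 0
# A(100) = 600 g in V(100) = 200 L, i.e. 3 g/L
assert sp.simplify(sol.subs(t, 100) - 600) == 0
\end{verbatim}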
{"text": "\\documentclass[]{AVSSimReportMemo}\n\\usepackage{AVS}\n\n\\newcommand{\\ModuleName}{attTrackingError}\n\\newcommand{\\subject}{Module to Evaluate the Attitude Tracking Error Relative to a Time Varying Reference Frame}\n\\newcommand{\\status}{Initial Version}\n\\newcommand{\\preparer}{H. Schaub}\n\\newcommand{\\summary}{This module is intended to be the last module in the guidance module chain.  It's input is the reference motion message generated by a prior module.  It's output is at the guidance attitude tracking errors relative to a moving reference frame.  This module applies the body to corrected body attitude correction.     }\n\n\n\\begin{document}\n\n\n\\makeCover\n\n\n%\n%\tenter the revision documentation here\n%\tto add more lines, copy the table entry and the \\hline, and paste after the current entry.\n%\n\\pagestyle{empty}\n{\\renewcommand{\\arraystretch}{1.1}\n\\noindent\n\\begin{longtable}{|p{0.5in}|p{4.5in}|p{1.14in}|}\n\\hline\n{\\bfseries Rev}: & {\\bfseries Change Description} & {\\bfseries By} \\\\\n\\hline\nDraft & initial copy & H. Schaub \\\\\n\\hline\n\n\\end{longtable}\n}\n\n\\newpage\n\\setcounter{page}{1}\n\\pagestyle{fancy}\n\n\\tableofcontents\n~\\\\ \\hrule ~\\\\\n\n\n\\section{Introduction}\nThis technical note outlines how the attitude tracking errors are evaluated relative to a given reference frame. The reference frame from the chain of guidance modules is called $\\mathcal{R}_{0}$, while the body corrected reference frame orientation is $\\mathcal{R}$.  \n\n\\section{Reference Frame Definitions}\nLet the primary body-fixed coordinate frame be \\frameDefinition{B}. However, instead of aligning this frame with a reference, a corrected body frame $\\mathcal{B}_{c}$ is to be aligned with a reference frame.   Let the uncorrected reference orientation be given by $\\mathcal{R}_{0}$.  Thus, the guidance goal is to drive $\\mathcal{B}_{c} \\rightarrow \\mathcal{R}_{0}$, which yields\n\\begin{equation}\n\t[R_{0} N] = [B_{c} B] [BN]\n\\end{equation}\nwhere $\\mathcal{N}$ is an inertial reference frame.  Rearranging this relationship, with perfect attitude tracking the inertial body frame orientation should be\n\\begin{equation}\n\t [BN] = [B_{c} B]^{T} [R_{0}N]  = [RN]\n\\end{equation}\nwhere $\\mathcal{R}$ is a corrected reference frame.  Note that $[B_{c} B] = [R_{0}R]$.  Thus, the corrected reference orientation is computed using\n\\begin{equation}\n\t [RN] = [R_{0} R]^{T} [R_{0}N] \n\\end{equation}\nwhere the body-frame correction is subtracted from the original reference orientation.  \n\nThe benefit of of driving $\\mathcal{B} \\rightarrow \\mathcal{R}$ instead of $\\mathcal{B}_{c} \\rightarrow \\mathcal{R}_{0}$ is that the body frame, along with the many device position and orientation vectors expressed in  body-frame components, don't have to be rotated for each control evaluation.  In simple terms, if the corrected body frame is a 60\\dg rotation from the body frame, then the 60\\dg is subtracted from the original reference orientation.  This allows all body inertia tensor and reaction wheel heading vector descriptions to remain in the primary body frame $\\mathcal{B}$.  
\n\nAssume the initial uncorrected reference frame $\\mathcal{R}_{0}$ is given through the MRP set $\\bm\\sigma_{R_{0}/N}$\n\\begin{equation}\n\t[R_{0}N(\t\\bm\\sigma_{R_{0}/N})]\n\\end{equation}\nThe relative orientation of the corrected body frame relative to the primary body frame is a constant MRP set\n\\begin{equation}\n\t[B_{c}B(\\bm\\sigma_{B_{c}/B})] = [R_{0}R(\\bm\\sigma_{R_{0}/R})]\n\\end{equation}\nTo apply this correction to the original reference frame, using the Direction Cosine Matrix (DCM) description, this is determined through\n\\begin{equation}\n\t[RN(\\bm\\sigma_{R/N})] = [R_{0}R(\\bm\\sigma_{R_{0}/R})]^{T} [R_{0}N(\\bm\\sigma_{R_{0}/N})] = \n\t[R_{0}R(-\\bm\\sigma_{R_{0}/R})] [R_{0}N(\\bm\\sigma_{R_{0}/N})]\n\\end{equation}\nwhere the convenient MRP identity\n\\begin{equation}\n\t [R_{0}R(\\bm\\sigma_{R_{0}/R})]^{T} = [R_{0}R(-\\bm\\sigma_{R_{0}/R})] \n\\end{equation}\n\nNote the following MRP addition property developed in Reference~\\citenum{schaub}.  If\n\\begin{equation}\n\t[BN(\\bm\\sigma)] = [FB(\\bm\\sigma '')] [ BN(\\bm\\sigma ')]\n\\end{equation}\nthen\n\\begin{equation}\n\t\\bm\\sigma = \\frac{\n\t\t(1-|\\bm\\sigma'|^{2})\\bm\\sigma '' + (1-|\\bm\\sigma ''|^{2}) \\bm\\sigma ' - 2 \\bm\\sigma '' \\times \\bm\\sigma '\n\t}{\n\t\t1 + |\\bm\\sigma '|^{2} |\\bm\\sigma''|^{2} - 2 \\bm\\sigma' \\cdot \\bm\\sigma''\n\t}\n\\end{equation}\nIn the RigidBodyKinematics software library of Reference~\\citenum{schaub}, this MRP evaluation is achieved with \n$$\n\t\\bm\\sigma = {\\tt addMRP}(\\bm\\sigma ', \\bm\\sigma'')\n$$\nThus, to properly apply the body frame orientation correction to the original reference frame, this function should be used with\n$$\n\t\\bm\\sigma_{R/N} = {\\tt addMRP}(\\bm\\sigma_{R_{0}/N}, -\\bm\\sigma_{R_{0}/R})\n$$\n\nThe attitude tracking error of $\\mathcal{B}$ relative to $\\mathcal{R}$ is\n$$\n\t\\bm\\sigma_{B/R} = {\\tt subMRP}(\\bm\\sigma_{B/N}, -\\bm\\sigma_{R/N})\n$$\n\n\n\n\n\\section{Reference Frame Angular Velocity Vector}\nThe angular velocity of the original reference frame $\\mathcal{R}_{0}$ is\n\\begin{equation}\n\t\\bm\\omega_{R_{0}/N}\n\\end{equation}\nThe angular velocity tracking error is defined as\n\\begin{equation}\n\t\\delta\\bm\\omega = \\bm\\omega_{B/N} - \\bm\\omega_{R/N}\n\\end{equation}\nThe correct reference frame angular velocity is\n\\begin{equation}\n\t\\bm\\omega_{R/N} = \\bm\\omega_{R/R_{0}} + \\bm\\omega_{R_{0}/N} =  \\bm\\omega_{R_{0}/N} \n\\end{equation}\nbecause the body frame correction $[B_{c} B] = [R_{0}R]$ is a constant angular offset.  
\n\nThe required inertial reference frame rate vector, in body frame components, is then given by\n\\begin{equation}\n\t\\leftexp{B}{\\bm\\omega}_{R/N} = [BN] \\leftexp{N}{\\bm\\omega}_{R/N}\n\\end{equation}\n\n\n\\section{Reference Frame Angular Acceleration Vector}\nWith $\\dot{\\bm \\omega}_{R/N}$ given in the inertial frame, in the body frame this vector is expressed as\n\\begin{equation}\n\t\\leftexp{B}{\\dot{\\bm\\omega}}_{R/N} = [BN] \\leftexp{N}{\\dot{\\bm\\omega}}_{R/N}\n\\end{equation}\n\n\n\\section{Angular Velocity Tracking Error}\nFinally, the angular velocity tracking error is expressed in body frame components as\n\\begin{equation}\n\t\\leftexp{B}{\\delta\\bm\\omega} = \\leftexp{B}{\\bm\\omega_{B/R}} = \\leftexp{B}{\\bm\\omega}_{B/N} - \\leftexp{B}{\\bm\\omega}_{R/N}\n\\end{equation}\n\n\n\n\n\\bibliographystyle{unsrt}\n\\bibliography{references}\n\n\\end{document}\n", "meta": {"hexsha": "5b0e2b604fe72b830bc82da22633cd258af16d20", "size": 6222, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/fswAlgorithms/attGuidance/attTrackingError/_Documentation/AVS-Sim-attTrackingError-2016-01-15.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/fswAlgorithms/attGuidance/attTrackingError/_Documentation/AVS-Sim-attTrackingError-2016-01-15.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/fswAlgorithms/attGuidance/attTrackingError/_Documentation/AVS-Sim-attTrackingError-2016-01-15.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.48, "max_line_length": 589, "alphanum_fraction": 0.7176149148, "num_tokens": 1948, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5765103730083447}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{listings}\n\\usepackage{hyperref}\n\\usepackage{float}\n\n\\title{Assignment for CS 696-04 \\\\ (Deep Learning)}\n\\author{Diwas Sharma}\n\\date{\\today}\n\\begin{document}\n\n\\maketitle\n\\newpage\n\n\\section{Design}\nThe implementation of convolution and max pooling layer is generalized enough\nso it can be used with other programs as well. Furthermore, the code for the\nCNN layers have been merge with the implementation of the Dense layers.\nTogether the dense layers and the CNN layers can be stacked to any depth required\nand can be used to describe any valid neural network with dense and CNN layers.\n\nAn example of convolution neural network that can be construct is given below.\n\n\\begin{lstlisting}[language=python]\nfrom depth.models import Sequential\nfrom depth.layers import Convolution2D, Flatten, DenseLayer, MaxPooling\nfrom depth.optimizers import ADAM\n\noptimizer = ADAM(lr=0.01)\nnn = Sequential()\n\nself.nn = Sequential()\nself.nn.add_layer(Convolution2D(5, (3, 3), input_shape=(3, 32, 32)))\nself.nn.add_layer(MaxPooling(pool_size=(2, 2)))\nself.nn.add_layer(Convolution2D(10, (3, 3)))\nself.nn.add_layer(MaxPooling(pool_size=(2, 2)))\nself.nn.add_layer(Flatten())\nself.nn.add_layer(DenseLayer(units=32))\nself.nn.add_layer(DenseLayer(units=10, activation=\"softmax\"))\nself.nn.compile(loss=\"cross_entropy\", error_threshold=0.01,\n                optimizer=optimizer)\n\n\\end{lstlisting}\n\n\\section{CIFAR 10}\n\\subsection{Network}\nThe network that is used to classify the CIFAR 10 dataset is follows:\n\n\\begin{enumerate}\n    \\item{Convolution layer}\n        \\begin{itemize}\n            \\item{Input: $N * 3 * 32 * 32$ tensor}\n            \\item{Filters: 4 filters with size $3*3$}\n            \\item{Padding: 0 padded with pad width of 1}\n            \\item{Stride: 1 on each dimension}\n            \\item{Regularizer: $L_2$ Regularizer}\n        \\end{itemize}\n    \\item{Max pooling layer}\n        \\begin{itemize}\n            \\item{Input: $N * 4 * 32 * 32$ tensor}\n            \\item{Pool size: $2*2$}\n            \\item{Stride: 2 on each dimension}\n        \\end{itemize}\n    \\item{Convolution layer}\n        \\begin{itemize}\n            \\item{Input: $N * 4 * 16 * 16$ tensor}\n            \\item{Filters: 8 filters with size $3*3$}\n            \\item{Padding: 0 padded with pad width of 1}\n            \\item{Stride: 1 on each dimension}\n            \\item{Regularizer: $L_2$ Regularizer}\n        \\end{itemize}\n    \\item{Max pooling layer}\n        \\begin{itemize}\n            \\item{Input: $N * 8 * 16 * 16$ tensor}\n            \\item{Pool size: $2*2$}\n            \\item{Stride: 2 on each dimension}\n        \\end{itemize}\n    \\item{Convolution layer}\n        \\begin{itemize}\n            \\item{Input: $N * 8 * 8 * 8$ tensor}\n            \\item{Filters: 16 filters with size $3*3$}\n            \\item{Padding: 0 padded with pad width of 1}\n            \\item{Stride: 1 on each dimension}\n            \\item{Regularizer: $L_2$ Regularizer}\n        \\end{itemize}\n    \\item{Softmax layer}\n        \\begin{itemize}\n            \\item{Input: $512 * N$ tensor}\n            \\item{Units: 10}\n        \\end{itemize}\n\\end{enumerate}\n\n\\subsubsection{Source Code}\n\\begin{lstlisting}[language=python]\noptimizer = ADAM(lr=0.001)\nregularizer = L2Regularizer(0.01)\n\nself.nn = Sequential()\nself.nn.add_layer(Convolution2D(\n    4, (3, 3), input_shape=(3, 32, 32), 
regularizer=regularizer))\nself.nn.add_layer(MaxPooling(pool_size=(2, 2)))\nself.nn.add_layer(Convolution2D(8, (3, 3), regularizer=regularizer))\nself.nn.add_layer(MaxPooling(pool_size=(2, 2)))\nself.nn.add_layer(Convolution2D(16, (3, 3), regularizer=regularizer))\nself.nn.add_layer(Flatten())\nself.nn.add_layer(DenseLayer(units=10, activation=\"softmax\"))\nself.nn.compile(loss=\"cross_entropy\", error_threshold=0.01,\n                optimizer=optimizer, mini_batch_size=1024)\n\\end{lstlisting}\n\n\\subsection{Training}\nThe network was trained using ADAM optimizer with mini batch size of 1024. The parameters\nused for the optimizer were $\\eta$ = 0.001, $\\beta_1$ = 0.9 and \n$\\beta_2$ = 0.999.\n\nThe plot for training loss per mini batch is shown in figure {\\ref{fig:training_loss}}\nand the plot of training accuracy per mini batch is shown in figure {\\ref{fig:training_accuracy}}.\n\n\\begin{figure}[!ht]\n  \\includegraphics[width=\\textwidth,height=0.35\\textheight,keepaspectratio]{cifar10_training_loss.png}\n  \\caption{Training Loss}\n  \\label{fig:training_loss}\n\\end{figure}\n\n\\begin{figure}[!ht]\n  \\includegraphics[width=\\textwidth,height=0.35\\textheight,keepaspectratio]{cifar10_training_accuracy.png}\n  \\caption{Training accuracy}\n  \\label{fig:training_accuracy}\n\\end{figure}\n\n\\subsection{Testing}\nThe accuracy obtained on the test data is 0.536.\n\n\\end{document}\n", "meta": {"hexsha": "b7c787c864168c121ee31f5c17c70e13c3a96291", "size": 4711, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment_2/report/report.tex", "max_stars_repo_name": "diwasblack/deep_learning", "max_stars_repo_head_hexsha": "96a7a534ba557e06643de6b63d12af3a129d04f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment_2/report/report.tex", "max_issues_repo_name": "diwasblack/deep_learning", "max_issues_repo_head_hexsha": "96a7a534ba557e06643de6b63d12af3a129d04f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment_2/report/report.tex", "max_forks_repo_name": "diwasblack/deep_learning", "max_forks_repo_head_hexsha": "96a7a534ba557e06643de6b63d12af3a129d04f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.8962962963, "max_line_length": 106, "alphanum_fraction": 0.6845680323, "num_tokens": 1335, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085708384736, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5765103656309686}}
{"text": "\\subsection{Clause Learning}\n\n\\begin{frame}\n  \\frametitle{From DPLL to CDCL}\n\n  \\scriptsize\n\n  Consider the following scenario before and after BCP\n  \\vfill\n  \\begin{tabular}{ccc}\n  \\begin{minipage}{.4\\textwidth}\n  $$\n  \\begin{array}{l}\n    \\ldots \\\\\n    (\\colone{\\neg a_{10}} \\vee \\colone{\\neg a_1} \\vee a_4) \\\\\n    (\\colone{a_3} \\vee \\colone{\\neg a_1} \\vee a_5) \\\\\n    (\\neg a_4 \\vee a_6) \\\\\n    (\\neg a_5 \\vee \\neg a_6) \\\\\n    \\ldots \\\\\n    \\\\\n    \\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4} \\}\n  \\end{array}\n  $$\n  \\end{minipage}\n  & ~~~~ &\n  \\begin{minipage}{.5\\textwidth}\n  $$\n  \\begin{array}{l}\n    \\ldots \\\\\n    (\\colone{\\neg a_{10}} \\vee \\colone{\\neg a_1} \\vee \\coltwo{a_4}) \\\\\n    (\\colone{a_3} \\vee \\colone{\\neg a_1} \\vee \\coltwo{a_5}) \\\\\n    (\\colone{\\neg a_4} \\vee \\coltwo{a_6}) \\\\\n    (\\colone{\\neg a_5} \\vee \\colone{\\neg a_6}) \\\\\n    \\ldots \\\\\n    \\\\\n    \\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4}, \\dec{a_4}{4}, \\dec{a_5}{4}, \\dec{a_6}{4}\\}\n  \\end{array}\n  $$\n  \\end{minipage}\n  \\end{tabular}\n  \\vfill\n  \\pause\n  BCP leads to a conflict, but, what is the {\\bf reason} for it ?\n  Do $a_7$ and $a_2$ play any role ?\n  \\vfill\n  \\pause\n  Does it make sense to consider the assignments (which backtracking would produce) ?\\\\\n  $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{\\neg a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4} \\}$ ---\n  $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4} \\}$ ---\n  $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{a_2}{3}, \\dec{a_1}{4} \\}$ \\\\\n  \\vfill\n  \\pause\n  No, because {\\em whenever $a_{10}$, $\\neg a_3$ is assigned, then $a_1$ must not be set to $\\top$}. 
\\pause\n  This translates to an additional clause, which can be {\\em learnt}, i.e., it can be added to the formula \n  $$(\\neg a_{10} \\vee a_3 \\vee \\neg a_1)$$\n   \n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{From DPLL to CDCL}\n\n  Search\n  \\vfill\n\n  \\begin{tabular}{ccc}\n    Without learnt clause & & With learnt clause \\\\ \\\\\n    \\begin{minipage}{.4\\textwidth}\n      \\scalebox{.3}{\\input{search_5.pdf_t}}\n    \\end{minipage}\n    & ~~~~ & \n    \\begin{minipage}{.4\\textwidth}\n      \\scalebox{.3}{\\input{search_6.pdf_t}}\n    \\end{minipage}\n  \\end{tabular}\n\n\\end{frame}\n\n\\subsection{Conflict Analysis}\n\n\\begin{frame}\n  \\frametitle{Deriving the clause to be learnt}\n\n  \\scriptsize\n\n  The clause to be learnt is derived from the conflict ``caused'' by BCP\n  (conflict is never caused by a Decision)\n  \\vfill\n  \\pause\n  To better {\\bf analyze} a conflict we need to look at the {\\bf implication graph}\n  \\vfill\n  \\begin{tabular}{ccc}\n  \\begin{minipage}{.4\\textwidth}\n  $$\n  \\begin{array}{l}\n    \\ldots \\\\\n    (\\colone{\\neg a_{10}} \\vee \\colone{\\neg a_1} \\vee \\coltwo{a_4}) \\\\\n    (\\colone{a_3} \\vee \\colone{\\neg a_1} \\vee \\coltwo{a_5}) \\\\\n    (\\colone{\\neg a_4} \\vee \\coltwo{a_6}) \\\\\n    (\\colone{\\neg a_5} \\vee \\colone{\\neg a_6}) \\\\\n    \\ldots \\\\  \\\\\n    \\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4}, \\dec{a_4}{4}, \\dec{a_5}{4}, \\dec{a_6}{4}\\}\n  \\end{array}\n  $$\n  \\end{minipage}\n  & ~~~ &\n  \\begin{minipage}{.4\\textwidth}\n  \\begin{overlayarea}{.4\\textwidth}{4cm}\n    \\only<2|handout:0>{\\scalebox{.4}{\\input{impl_graph_1.pdf_t}}}\n    \\only<3|handout:0>{\\scalebox{.4}{\\input{impl_graph_2.pdf_t}}}\n    \\only<4|handout:0>{\\scalebox{.4}{\\input{impl_graph_3.pdf_t}}}\n    \\only<5->{\\scalebox{.4}{\\input{impl_graph_4.pdf_t}}}\n  \\end{overlayarea}\n  \\end{minipage}\n  \\end{tabular}\n  \\vfill\n  \\pause\\pause\n  Any cut of the graph that separates the conflict from the decision variables\n  represents a possible learnt clause\n  \\vfill\n  \\pause\n  In this course we take the clause that contains \n  {\\bf only one variable of the current decision level}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Deriving the clause to be learnt}\n\n  \\scriptsize\n\n  In practice, we use {\\bf resolution} steps to compute the learnt clause:\n  \\begin{itemize}\n    \\item we start from the clause with all literals to $\\bot$ (conflicting clause)\n    \\item iteratively, we take the clause that propagated the last literal on the trail\n\t  and we apply resolution\n    \\item we stop when only one literal from the current decision level is left in the clause \n  \\end{itemize}\n  \\vfill\n  \\pause\n  \\begin{tabular}{ccc}\n    \\begin{minipage}{.4\\textwidth}\n      \\begin{overlayarea}{.4\\textwidth}{3cm}\n      \\only<2|handout:0>{\\scalebox{.4}{\\input{ca_1.pdf_t}}}\n      \\only<3|handout:0>{\\scalebox{.4}{\\input{ca_2.pdf_t}}}\n      \\only<4|handout:0>{\\scalebox{.4}{\\input{ca_3.pdf_t}}}\n      \\only<5|handout:0>{\\scalebox{.4}{\\input{ca_4.pdf_t}}}\n      \\only<6|handout:0>{\\scalebox{.4}{\\input{ca_5.pdf_t}}}\n      \\only<7|handout:0>{\\scalebox{.4}{\\input{ca_6.pdf_t}}}\n      \\only<8->{\\scalebox{.4}{\\input{ca_7.pdf_t}}}\n      \\end{overlayarea}\n    \\end{minipage}\n    & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ &\n    \\begin{minipage}{.4\\textwidth}\n      \\begin{tabular}{ccl}\n\t\\hline\n\tTrail & dl & Reason 
\\\\\n\t\\hline\n\t$a_{10}$   & 0 & $( a_{10} )$ \\\\\n\t$\\neg a_3$ & 1 & Decision \\\\\n\t$a_7$      & 2 & Decision \\\\\n\t$\\neg a_2$ & 3 & Decision \\\\\n\t$a_1$      & 4 & Decision \\\\\n\t$a_4$      & 4 & $(\\neg a_{10} \\vee \\neg a_1 \\vee a_4)$ \\\\\n\t$a_5$      & 4 & $(a_3 \\vee \\neg a_1 \\vee a_5)$ \\\\\n\t$a_6$      & 4 & $(\\neg a_4 \\vee a_6)$ \\\\\n\t\\hline\n      \\end{tabular}\n      \\medskip \\\\\n      $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4}, \\dec{a_4}{4}, \\dec{a_5}{4}, \\dec{a_6}{4}\\}$\n    \\end{minipage}\n  \\end{tabular}\n  \\vfill\n  \\onslide<9>{We say that $(\\neg a_{10} \\vee a_3 \\vee \\neg a_1)$ is the {\\bf conflict clause} and it is the one we learn}\n\n\\end{frame}\n\n\\subsection{Non-Chronological Bactracking}\n\n\\begin{frame}\n  \\frametitle{Non-chronological backtracking}\n\n  So far we have been backtracking {\\bf chronologically}, but\n  \\vfill\n  given our (wrong) assignment $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1}, \\dec{a_7}{2}, \\dec{\\neg a_2}{3}, \\dec{a_1}{4}, \\dec{a_4}{4}, \\dec{a_5}{4}, \\dec{a_6}{4}\\}$\n  and the computed conflict clause $(\\neg a_{10} \\vee a_3 \\vee \\neg a_1)$, is it really clever to backtrack to level 3 ?\n  \\vfill\n  \\pause\n  The correct level to backtrack to is: \n  \\begin{center}\n  ``the level that would have propagated $\\neg a_1$ if we had the clause\n  $(\\neg a_{10} \\vee a_3 \\vee \\neg a_1)$ as part of the original formula''\n  \\end{center}\n  \\vfill\n  \\pause\n  In our case, it is level 1, because assignment $\\{ \\dec{a_{10}}{0}, \\dec{\\neg a_3}{1} \\}$ is\n  sufficient to propagate $\\neg a_1$ in $(\\colone{\\neg a_{10}} \\vee \\colone{a_3} \\vee \\neg a_1)$\n  \\vfill\n  \\pause\n  We say that $\\neg a_1$ is the {\\bf asserting literal} as it becomes true by BCP once \n  we have backtracked to the correct decision level \n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{CDCL: Conflict Driven Clause Learning}\n\n  Search\n  \\vfill\n\n  \\begin{tabular}{ccc}\n    DPLL                        & & CDCL  \\\\ \n    no learning                 & & conflict-driven learning \\\\\n    chronological backtracking  & & non-chronological backtracking \\\\ \\\\\n    \\begin{minipage}{.4\\textwidth}\n      \\scalebox{.3}{\\input{search_7.pdf_t}}\n    \\end{minipage}\n    & ~~~~ & \n    \\begin{minipage}{.4\\textwidth}\n      \\scalebox{.3}{\\input{search_8.pdf_t}}\n    \\end{minipage}\n  \\end{tabular}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{The CDCL Procedure}\n\n  \\scriptsize\n\n  \\vfill\n\n  \\begin{tabbing}\n  as \\= a \\= a \\= a \\= as \\= asdfsdfasdfasdfasdfasdfasdfasdfasdfasdf \\= \\kill\n  \\> $dl = 0$; $trail = \\{\\ \\}$;                              \\> \\> \\> \\> \\> // Decision level, assignment\\\\ \\\\\n  \\> while ( $true$ ) \\\\ \\\\\n  \\> \\> if ( BCP( ) == conflict )                                \\> \\> \\> \\> // Do BCP until possible \\\\\n  \\> \\> \\> if ( $dl == 0$ ) return $unsat$;                         \\> \\> \\> // Unresolvable conflict \\\\\n  \\> \\> \\> $C,dl =$ {\\sc AnalyzeConflict}( );                           \\> \\> \\> // Compute conf. clause, and dec. 
level \\\\\n  \\> \\> \\> {\\sc AddClause}( $C$ );                                      \\> \\> \\> // Add $C$ to clause database \\\\\n  \\> \\> \\> {\\sc BacktrackTo}( $dl$ );                                   \\> \\> \\> // Backtracking (shrinks $trail$) \\\\\n  \\> \\> else \\\\\n  \\> \\> \\> if ( ``all variables assigned'' ) return $sat$;          \\> \\> \\> // $trail$ holds satisfying assignment \\\\\n  \\> \\> \\> $l = $ {\\sc Decision}( );                                      \\> \\> \\> // Do another decision \\\\\n  \\> \\> \\> $trail = trail \\cup \\{ l \\}$ \\\\                          \n  \\> \\> \\> $dl = dl + 1$;                                           \\> \\> \\> // Increase decision level \\\\\n  \\end{tabbing}\n\n  \\vfill\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{The CDCL Procedure}\n\n  \\begin{center}\n  \\scalebox{.45}{\\input{cdcl.pdf_t}}\n  \\end{center}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Other things which we did not see}\n\n  {\\bf Watched Literals}: technique to efficiently discover which clauses\n                          become unsat\n  \\vfill\n  {\\bf Decision heuristics}: how do we choose the right variable ?\n  And which polarity ? ($\\top, \\bot$). Landmark strategy is VSIDS heuristic\n  \\vfill\n  {\\bf Clause removal}: adding too many clauses negatively impacts \n\t\t\tperformance, need mechanisms to remove learnts\n  \\vfill\n  {\\bf Restarts}: start the search from scratch, but retaining learnts\n  \\vfill\n  many many other heuristics discovered every year\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Exercise}\n\n  \\scriptsize\n\n  Consider the following clause set and assignment\n  $$\n  \\begin{array}{l}\n  (\\neg a_1 \\vee a_2) \\\\\n  (\\neg a_1 \\vee a_3 \\vee a_9) \\\\\n  (\\neg a_2 \\vee \\neg a_3 \\vee a_4) \\\\\n  (\\neg a_4 \\vee a_5 \\vee a_{10}) \\\\\n  (\\neg a_4 \\vee a_6 \\vee a_{11}) \\\\\n  (\\neg a_5 \\vee \\neg a_6) \\\\\n  (a_1 \\vee a_7 \\vee \\neg a_{12}) \\\\\n  (a_1 \\vee a_8) \\\\\n  (\\neg a_7 \\vee \\neg a_8 \\vee \\neg a_{13}) \\\\\n  \\ldots \\\\ \\\\\n  \\{ \\dec{\\neg a_9}{1}, \\dec{a_{12}}{2}, \\dec{a_{13}}{2}, \\dec{\\neg a_{10}}{3}, \\dec{\\neg a_{11}}{3}, \\ldots, \\dec{a_1}{6}\\}\n  \\end{array}\n  $$\n  \\vfill\n  \\begin{itemize}\n    \\item Find the correct conflict clause\n    \\item Find the correct decision level to backtrack\n  \\end{itemize}\n\n\\end{frame}\n", "meta": {"hexsha": "6f3f29f9b3063b609787868d8c727f163abb41fb", "size": 10112, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture3/cdcl.tex", "max_stars_repo_name": "formalmethods/smtlectures", "max_stars_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-11-07T19:34:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-24T08:05:50.000Z", "max_issues_repo_path": "lecture3/cdcl.tex", "max_issues_repo_name": "formalmethods/smtlectures", "max_issues_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture3/cdcl.tex", "max_forks_repo_name": "formalmethods/smtlectures", "max_forks_repo_head_hexsha": "d4ec5f7eb377d26427ecc34c72906c85eafe8631", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-06T00:40:41.000Z", 
"max_forks_repo_forks_event_max_datetime": "2020-05-06T00:40:41.000Z", "avg_line_length": 33.045751634, "max_line_length": 161, "alphanum_fraction": 0.5774327532, "num_tokens": 3849, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660688, "lm_q2_score": 0.7853085708384736, "lm_q1q2_score": 0.5765103473582295}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{makeidx}\n\\usepackage{graphicx}\n\\newtheorem{defi}{Definition}\n\\newcommand{\\bsp}{\\paragraph{Example}}\n\\author{moi}\n\\title{Function spaces}\n\\begin{document}\n\\section{Introduction}\nGiven two sets $X, Y \\neq \\emptyset$, we want to discuss the relation\n$$\\mathrm{map}(X, Y) \\simeq Y^X$$\nWe are going to apply a categorical and set-theoretic approach in order to explain this equivalence.\n\\subsection{Category theory}\nThis theory dates back to the 40ies of the last century.\\\\\n/*\\\\\n ~* Proper historic review (morphs, objects and compos; co/contravar. funct, ~* special categories: grp, abel, rng, ...)\\\\\n */\n\\subsection{Set theory}\n//sets blabla\n\\newpage\n\\section{Function spaces}\nThe term \\textit{space} is to be understood as follows\n\\begin{defi}\nLet $\\mathrm{Set}$ denote the category of sets, with maps as morphisms. There exists a subcategory called the category of pointed spaces, denoted by $\\mathrm{PSpc}$, with:\n\\begin{enumerate}\n\\item $\\mathrm{Obj}(\\mathrm{PSpc})$ all sets/classes $X$ with a designated element $x \\in X$, called the basepoint.\n\\item $\\mathrm{Mor}(\\mathrm{PSpc})$ all basepoint preserving maps in $\\mathrm{Set}(A,B)$ for all $A, B \\in \\mathrm{Obj}(\\mathrm{PSps})$.\n\\end{enumerate}\nIf $A \\in \\mathrm{Obj}(\\mathrm{PSpc})$ then we simply denote it by $(A, a)$ with basepoint $a \\in A$.\n\\end{defi}\n\\begin{description}\n\\item[Counter] A prominent counterexample is the empty set $\\emptyset$. Since it has no element we cannot choose a designated element $x$. Hence, $\\mathrm{Set}$ is a proper supercategory (borrowing from set theory).\n\\item[Example] Any non-empty set $X$ with a fixed element $x_0 \\in X$ is a pointed space $(X,x_0)$. Furthermore, due to non-emptyness of any monoid $G \\in \\mathrm{Obj}(\\mathrm{Grp})$, with neutral element $e \\in G$, we have that\n$$(G, e)$$ is a pointed space. In turn, if $(G, e)$ and $(G', e')$ are two monoids and $\\varphi : G \\longrightarrow G'$ is a monoid homomorphism (a morphism in $\\mathrm{Mon}$) then\n$$\\varphi(e) = e' \\Leftarrow \\varphi \\in \\mathrm{PSpc}(G,G'),$$\ni.e. 
each monoid homomorphism is also a morphism of pointed spaces.\n\\end{description}\n\\begin{defi}\nA pointed $(X,x_0)$ is called a \\textit{singleton} if $X\\backslash\\{x_0\\} = \\emptyset$.\n\\index{singleton}\n\\end{defi}\nWe claim\n\\begin{prop}\nThe singleton is terminal object in the category of pointed spaces.\n\\end{prop}\n\\end{document}", "meta": {"hexsha": "3a98fd3c73a1c3e949c2117fa194113a75084fa9", "size": 2498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "func_space/func_space.tex", "max_stars_repo_name": "gmuel/texlib", "max_stars_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "func_space/func_space.tex", "max_issues_repo_name": "gmuel/texlib", "max_issues_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "func_space/func_space.tex", "max_forks_repo_name": "gmuel/texlib", "max_forks_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.96, "max_line_length": 228, "alphanum_fraction": 0.7253803042, "num_tokens": 771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619350028204, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5765086530453809}}
{"text": "\\addtocontents{toc}{\\protect\\newpage}\n\\chapter{Differential Equations}\nAny equation involving derivatives of a function can be called a differential equation.  For example, $ \\dydx=2x$ says ``the derivative equals 2$x$.'' As the derivative of $x^2$, this simple equation also represents a rate-of-change as a function of $x$, or, a differential equation. In this chapter we explore this idea further and look at some mathematical models that take the form of differential equations.\n\n\\begin{multicols}{2}\nThe trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law. The relationship between the displacement $x$ and the time $t$ of an object under the force $F$, is given by the differential equation: \\[ m\\ddxdt=F(x(t)) \\]\n\t \\includegraphics[width=8cm]{DE1}\n\\end{multicols}\nChange is a familiar concept from the slope of the tangent line that we are familiar with. If we have the function, then we can calculate the slope of the tangent at any point on the axis. Now, if the $x$-axis represents time, $t$, this calculation has the possibility of predicting the future! Differential equations can arise when we formulate mathematical models.  We can develop our understanding of this process by considering the mathematical models of some physical phenomena. \n\n\n\n%---------------------------------------------------\n% Population Growth\n%---------------------------------------------------\n\\section{Basic Differential Equations}\nOne model of population growth arises from the assumption that the rate at which the population grows is proportional to the size of the population.\nLet $N$ be the size of the population at time $t$ then the rate of change of population with respect to time is $\\frac{\\mathrm{d} N}{\\mathrm{d} t}\\text{.}$  So the model can be expressed as a differential equation\n\\begin{equation*}\\frac{\\mathrm{d} N}{\\mathrm{d} t} =k N\n\\end{equation*}\nwhere $k$ is the constant of proportionality. \n\nTo make any sense of this model we need to explore the range of values of $N$ and $k$.  $N$ cannot be zero (otherwise there would be nothing to change).  We assume that $N$ is a function of $t$ and that $N (t) >0$.  Similarly for us to have population ``growth''\\  $k >0$\n\\begin{equation*} \\therefore \\frac{\\mathrm{d} N (t)}{\\mathrm{d} t} >0\n\\end{equation*}\n\nWe have already shown the properties of the exponential function.  The general exponential function is\n\\begin{equation*}N (t) =N_{0} e^{k t}\n\\end{equation*}\n\nLet $N (t)$ be $N_{0}$ when $t =0$.  i.e. $N (0) =N_{0}\\text{.}$  $N_{0}$ is called the \\emph{initial value} of $N (t)$. \n\nNow\n\\begin{align*}\\frac{\\mathrm{d} N (t)}{\\mathrm{d} t} &    = N_{0} e^{k t} \\times k \\\\\n &    = k N (t)\\end{align*}\n\nThus we have shown that $N (t) =N_{0} e^{k t}$ is a solution of the differential equation $\\frac{\\mathrm{d} N (t)}{\\mathrm{d} t} =k N (t)$ \n\nThis solution arises because we are familiar with the behaviour of the exponential function.  Our ability to ``guess''\\ the answer is limited so the subject of differential equations involves developing techniques to handle physical situations that are more and more realistic and more and more complex.  The answer to this problem came so simply to us we have to wonder if there are other equations for $N (t)$ that give the same answer. 
\n\n%---------------------------------------------------\n% The Order of a Differential Equation\n%---------------------------------------------------\n\\section*{The Order of a Differential Equation}\nThe differential equation $\\frac{\\mathrm{d} N (t)}{\\mathrm{d} t} =k N (t)$ is referred to as a first order differential equation because the order of the highest derivative is \\emph{one}.\n\nHere is an example of a second order differential equation\n\\begin{equation*}m \\ddxdt= -k x\\end{equation*}\n\nThis is the differential equation that arises from Hooke's Law. The second derivative is written $\\ddxdt$. Another application from mechanics is found in construction.\n\\begin{multicols}{2} The elastic curve of a beam under a uniform distributed load will be deflected according to:\n\t\\[M=EI\\ddydx\\]\nWhere $M$ is the bending moment acting at $O$. $E$ is Young\u2019s modulus of elasticity of the material of the beam, and $I$ is the moment of inertia of the beam section.\\\\\n\\includegraphics[width=8cm]{DE2}\n\\end{multicols} \n\n%---------------------------------------------------\n% The Solution of a Differential Equation\n%---------------------------------------------------\n\\section*{The Solution of a Differential Equation}\nWhen you are asked to ``solve''\\ a differential equation you are expected to find all possible solutions of the differential equation. Thus, the solution is another function.\n\n\\example Find all possible solutions of the differential equation $\\dydx =2 x$.  This differential equation could also be expressed as $y^{ \\prime } =2 x\\text{.}$\\medskip\\\\\n\\solution By integrating we obtain\n\\begin{equation*}y =x^{2} +C\n\\end{equation*}\nWhere $C$ is the constant of integration.  This is an arbitrary constant and gives us a family of functions all of which are solutions of the differential equation $y^{ \\prime } =2 x$.  This family of solutions is often referred to as the \\emph{general solution}.\n\nIn a physical situation we are often provided with additional information and this will allow us to find a \\emph{particular solution}.  This is easy to visualise when considering $y^{ \\prime } =2 x\\text{.}$  The general solution is $y =x^{2} +C$ and if you are also told that the curve passes through $\\left (2 ,6\\right )$ you can use this information to evaluate $C\\text{.}$\n\\begin{align*}6 &    = 2^{2} +C \\\\\nC &    = 2\\end{align*}\n\nSo the particular solution is $y =x^{2} +2$.  To appreciate the concept of this particular solution you only need to use \\desmos to\ndraw $y =x^{2} +C$ for various values of $C$. \n\n\\subsection*{Initial-Value Problems}\nIn a physical problem when you are given the conditions for the particular solution in the form $y (t_{0}) =y_{0}$ where $t_{0}$ is the initial value of $t$ and $y_{0}$ is the initial value of $y (t)$, the point $\\left (t_{0} ,y_{0}\\right )$ is called an \\emph{initial condition} and the\nproblem of finding the particular solution given the differential equation is referred to as an \\emph{initial-value problem}. 
\n\n\\example For the differential equation $y^{ \\prime } = -y^{2}$ \\medskip\\\\\n(a) Verify that $y =\\frac{1}{t +C}$ is the general solution \\\\\n(b) Find the solution of the initial-value problem $y^{ \\prime } = -y^{2}$ and $y \\left (0\\right ) =0.5$ \\\\\n%\\clearpage\n\\solution\n\\begin{tasks}(2)\n\\task Given $y =\\frac{1}{t +C} =\\left (t +C\\right )^{ -1}$\n\\begin{align*}y^{ \\prime } &    =  -\\left (t +C\\right )^{ -2} \\\\\n &    = \\frac{ -1}{\\left (t +C\\right )^{2}} \\\\\n &    =  -\\left(\\frac{1}{t +C}\\right)^{2} = -y^{2}\\end{align*}\n\nSo $y =\\frac{1}{t +C}$ is the general solution of $y^{ \\prime } = -y^{2}$ \n\n\\task Substitute $\\left (0 ,0.5\\right )$ in $y =\\frac{1}{t +C}$\n\\begin{align*}0.5 &    = \\frac{1}{0 +C} \\\\\n\\frac{1}{2} &    = \\frac{1}{C} \\\\\nC &    = 2\\end{align*}\nThe particular solution is $y =\\frac{1}{t +2}$\n\\end{tasks}\n\\rule{6.8cm}{0.5pt}\\\\\n\\examq\\\\ Find the particular solution to the differential equation $1+x^2\\dydx=x^3$ given that $y(1)=\\frac{5}{2}$.\\\\\n\\solution Isolate the derivative, $\\dydx$, and integrate to find the function $y$:\\\\\n\\begin{align*}\n\\dydx&=\\frac{x^3-1}{x^2}\\\\\n\\int\\dydx&=\\int xdx-\\int\\frac{1}{x^2}dx\\\\\t\ny&=\\frac{x^2}{2}+\\frac{1}{x}+C\n\\end{align*}\nUse the initial condition $y(1)=\\frac{5}{2}$ to solve for the unknown constant:\\\\\n\t\\begin{align*}\\frac{5}{2}&=\\frac{1^2}{2}+\\frac{1}{1}+C\\\\\n\tC&=1\n\t\\end{align*}\nTherefore the particular solution is $\\qquad\\displaystyle y=\\frac{x^2}{2}+\\frac{1}{x}+1$\n\n%---------------------------------------------------\n% Separable Equations\n%---------------------------------------------------\n\\section{Separable Equations}\nIn some special cases we can find explicit solutions of differential equations. One type of equation can be written in the form\n\\begin{equation*}\\dydx =\\frac{f (x)}{g (y)}\n\\end{equation*}\n\nExpressed in this form we simply have to recognise that $f (x)$ is a function without any $y$'s in it and $g (y)$ is a function without any $x$'s in it. The variables can be \\textit{separated} by cross-multiplying such that:\n\n\\begin{align*}\ng (y)\\, d y &=f (x)\\, d x\n\\end{align*}\nThen we integrate both sides\n\\begin{equation*}\\int g (y) \\mathrm{d} y =\\int f (x) \\mathrm{d} x\n\\end{equation*}This procedure can be verified by differentiating both sides with respect to $x$\n\\begin{equation*}\\frac{\\mathrm{d}}{\\mathrm{d} x} \\int g (y) \\mathrm{d} y =\\frac{\\mathrm{d}}{\\mathrm{d} x} \\int f (x) \\mathrm{d} x\n\\end{equation*}By the chain rule the left hand side becomes\n\\begin{equation*}\\frac{\\mathrm{d}}{\\mathrm{d} y} \\int g (y) \\,\\mathrm{d} y \\times \\dydx\n\\end{equation*}So\n\\begin{align*}\\frac{\\mathrm{d}}{\\mathrm{d} y} \\int g (y) \\mathrm{d} y \\times \\dydx &    = \\frac{\\mathrm{d}}{\\mathrm{d} x} \\int f (x) \\mathrm{d} x \\\\\ng (y) \\times \\dydx &    = f (x) \\\\\n\\text{or\\  \\  \\ }\\dydx &    = \\frac{f (x)}{g (y)}\\end{align*}\n\\rule{6.8cm}{0.5pt}\\\\\n\\example Solve the differential equation by the method of separating variables \\[ \\dydx =\\frac{x^{2}}{y^{2}}\\]\n\\solution The solution to a differential equation is another function. 
Separate the variables and integrate both sides.\n\\begin{align*}y^{2} \\mathrm{d} y &    = x^{2} \\mathrm{d} x \\\\\n\\int y^{2} \\mathrm{d} y &    = \\int x^{2} \\mathrm{d} x \\\\\n\\frac{y^{3}}{3} &    = \\frac{x^{3}}{3} +C \\\\\ny^{3} &    = x^{3} +3 C \\\\\n &    = x^{3} +C_{1}\\end{align*}Where $C_{1}$ is a new arbitrary constant. Isolate $y$ to find a function of the form $y=f(x)$:\n\\begin{equation*}y =\\sqrt[{3}]{x^{3} +C_{1}}\n\\end{equation*}Find $C_{1}$ given $y (0) =2$\n\\begin{align*}2 &    = \\sqrt[{3}]{0^{3} +C_{1}} \\\\\nC_{1} &    = 8 \\\\\n\\text{therefore }y &    = \\sqrt[{3}]{x^{3} +8}\\end{align*}\n\n\\rule{6.8cm}{0.5pt}\\\\\n%\\clearpage\n\\example Solve the differential equation: \\[y'=3x^2 y\\] \n\\solution First write $y^{ \\prime } =\\dydx$ \n\\begin{equation*}\\dydx =3 x^{2} y\n\\end{equation*}Separate the variables\n\\begin{equation*}\\frac{1}{y} \\mathrm{d} y =3 x^{2} \\mathrm{d} x\n\\end{equation*}Integrate\n\\begin{align}\\int \\frac{1}{y} \\mathrm{d} y &    = \\int 3 x^{2} \\mathrm{d} x \\nonumber  \\\\\n\\ln  \\left \\vert y\\right \\vert  &    = x^{3} +C \\tag{1}\\end{align}We usually write this by writing in exponential form\n\\begin{align*}\\left \\vert y\\right \\vert  &    = e^{x^{3} +C} \\\\\n &    = e^{x^{3}} e^{C} \\\\\ny &    =  \\pm e^{C} e^{x^{3}}\\end{align*}And write $A = \\pm e^{C}$ where the value of $A$ is used that satisfies the particular problem\n\\begin{equation}y =A e^{x^{3}}\\tag{2}\n\\end{equation}\n\nIn practice we usually jump from line (1) to line (2) and leave out the intermediate steps. \n\n%---------------------------------------------------\n% Chapter Exercises in a separate file\n%---------------------------------------------------\n\\section{Chapter Exercises}\n\\subimport{}{6DiffEqnExercises}\n\n", "meta": {"hexsha": "318d5b36b3a39f1b6016db1c0ca0225cee2880a3", "size": 10978, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DiffEqns.tex", "max_stars_repo_name": "millecodex/ENGE401", "max_stars_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DiffEqns.tex", "max_issues_repo_name": "millecodex/ENGE401", "max_issues_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DiffEqns.tex", "max_forks_repo_name": "millecodex/ENGE401", "max_forks_repo_head_hexsha": "ecb6fddf196353bac375c2c2f585d2e02d87605f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.091954023, "max_line_length": 484, "alphanum_fraction": 0.6538531609, "num_tokens": 3372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8244619220634457, "lm_q1q2_score": 0.5765086439974659}}
{"text": "\\section{Proofs for Chapter 2}\n\\label{sec:a11chapter2}\n\n\\disableornamentsfornextheadingtrue\n\\subsection{Proof of the Size of the Regular Sparse Grid with Coarse Boundaries}\n\\label{sec:a111proofGridSizeCoarseBoundary}\n\n\\propGridSizeCoarseBoundary*\n\n\\begin{proof}\n  Note that the outer union in the definition of $\\coarselevelset{n}{d}{b}$ in\n  \\eqref{eq:coarseBoundary2} is indeed disjoint.\n  Therefore,\n  \\begin{equation}\n    \\setsize{\\coarseregsgset{n}{d}{b}}\n    = \\sum_{\\substack{\\*l \\in \\nat^d\\\\\\normone{\\*l} \\le n}}\n    \\setsize{\\hiset{\\*l}} +\n    \\hspace*{-5mm}\\sum_{\n      \\substack{\n        \\*l \\in \\natz^d \\setminus \\nat^d\\\\\n        (\\normone{\\vecmax(\\*l, \\*1)} \\le n - b + 1) \\lor\n        (\\*l = \\*0)\n      }\n    }\\hspace*{-10mm} \\setsize{\\hiset{\\*l}}.\n  \\end{equation}\n  The first sum is the number $\\setsize{\\interiorregsgset{n}{d}}$\n  of interior grid points in $\\regsgset{n}{d}$.\n  The second sum can be split into summands\n  with the same number $q$ of zero entries,\n  which we count with\n  $N_\\*l \\ceq \\setsize{\\{t \\mid l_t = 0\\}}\n  = \\normone{\\vecmax(\\*l, \\*1)} - \\normone{\\*l}$,\n  and the same level sum $m = \\normone{\\*l}$:\n  \\begin{equation}\n    \\setsize{\\coarseregsgset{n}{d}{b}}\n    = \\setsize{\\interiorregsgset{n}{d}} + 2^d +\n    \\sum_{q=1}^{d-1} \\sum_{m=d-q}^{n-b-q+1}\n    \\sum_{\n      \\substack{\n        \\*l \\in \\natz^d\\\\\n        N_\\*l = q,\\,\\normone{\\*l} = m}\n    }\\hspace*{-4mm} \\setsize{\\hiset{\\*l}},\n  \\end{equation}\n  where $2^d$ is the summand for $\\*l = \\*0$\n  (number $\\setsize{\\hiset{\\*0}}$ of corners of $\\clint{\\*0, \\*1}$).\n  The limits of the sum over $m$ are $d-q$,\n  since there are $d-q$ entries $\\ge 1$ in a level vector\n  with $q$ zero entries, and $n-b-q+1$,\n  since $m = \\normone{\\*l}\n  = \\normone{\\vecmax(\\*l, \\*1)} - N_\\*l\n  \\le n-b+1 - N_\\*l\n  = n-b-q+1$.\n  \n  \\vspace*{2em}\n  \n  In general, the innermost summand $\\setsize{\\hiset{\\*l}}$ equals\n  $\\setsize{\\hiset{\\*l}}\n  = \\prod_{\\{t \\mid l_t \\ge 1\\}} 2^{l_t-1} \\cdot \\prod_{\\{t \\mid l_t = 0\\}} 2\n  = 2^{\\normone{\\*l} - d + 2N_\\*l}$.\n  The number of innermost summands is given by\n  \\begin{equation}\n    \\setsize{\\{\\*l \\in \\natz^d \\mid N_\\*l = q,\\; \\normone{\\*l} = m\\}}\n    = \\binom{d}{q} \\setsize{\\{\\*l \\in \\nat^{d-q} \\mid \\normone{\\*l} = m\\}}\n    = \\binom{d}{q} \\binom{m-1}{d-q-1}.\n  \\end{equation}\n  This can be seen by first putting $q$ zeros in $d$ places,\n  for which there are $\\binom{d}{q}$ possibilities, and then\n  counting all positive vectors of length $d - q$ with level sum $m$,\n  which can be done in $\\binom{m-1}{d-q-1}$ ways.\n  Thus,\n  \\begin{subequations}\n    \\begin{align}\n      \\setsize{\\coarseregsgset{n}{d}{b}}\n      &= \\setsize{\\interiorregsgset{n}{d}} + 2^d +\n      \\sum_{q=1}^{d-1} \\binom{d}{q} \\sum_{m=d-q}^{n-b-q+1}\n      2^{m - d + 2q} \\binom{m-1}{d-q-1}.\\\\[1em]\n      \\intertext{%\n        After shifting the index $m \\to (m + d - q)$ and slightly\n        rearranging the terms, we obtain%\n      }\n      \\cdots\n      &= \\setsize{\\interiorregsgset{n}{d}} + 2^d +\n      \\sum_{q=1}^{d-1} 2^q \\binom{d}{q} \\sum_{m=0}^{n-d-b+1}\n      2^m \\binom{(d-q)-1+m}{(d-q)-1}.\\\\[1em]\n      \\intertext{%\n        We can now use \\thmref{lemma:numberOfGridPointsInterior} to\n        conclude that%\n      }\n      \\cdots\n      &= \\setsize{\\interiorregsgset{n}{d}} +\n    
  \\sum_{q=1}^d 2^q \\binom{d}{q}\n      \\setsize{\\interiorregsgset{n-q-b+1}{d-q}}\n    \\end{align}\n  \\end{subequations}\n  as desired.\n\\end{proof}\n\n\\vspace*{2em}\n\n\n\n\\breakpagebeforenextheadingtrue\n\\subsection{%\n  Correctness Proof of the Construction of the Regular Sparse Grid\n  with Coarse Boundaries%\n}\n\\label{sec:a112proofInvariantCoarseBoundary}\n\n\\propInvariantCoarseBoundary*\n\n\\begin{proof}\n  First, we show that every inserted level $\\*l' \\in \\natz^t$ in the inner loop\n  can be found on the right-hand side of \\eqref{eq:coarseInvariant}.\n  If $\\*l' \\ceq (\\*l, 0)$\n  is inserted for some $\\*l \\in \\levelset^{(t-1)}$,\n  then we have $\\normone{\\vecmax(\\*l, \\*1)} \\le n-d+t-b$ or\n  $\\*l = \\*0$ by \\cref{line:algCoarseBoundary1} of\n  \\cref{alg:coarseBoundary}.\n  In the first case, we have\n  \\begin{equation}\n    \\normone{\\vecmax(\\*l', \\*1)}\n    = \\normone{\\vecmax(\\*l, \\*1)} + 1\n    \\le n - d + t - b + 1,\n  \\end{equation}\n  and in the second case $\\*l' = \\vec{0}$.\n  In either case, $\\*l'$ is contained in the \\rhs of\n  \\eqref{eq:coarseInvariant}.\n  \n  If $\\*l' \\ceq (\\*l, l_t)$ is inserted\n  for some $\\*l \\in \\levelset^{(t-1)}$ and\n  $l_t \\in \\{1, \\dotsc, l^\\ast\\}$, then there are,\n  depending on whether $\\*l \\in \\nat^{t-1}$, two cases:\n  \\begin{itemize}\n    \\item\n    \\mbox{If $\\*l \\in \\nat^{t-1}$, then $\\*l' \\in \\nat^t$ and}\n    $\\normone{\\*l'} \\le \\normone{\\*l} + l^\\ast = n - d + t$\n    due to \\cref{line:algCoarseBoundary2},\n    i.e., $\\*l'$ is contained in the first set of the \\rhs of\n    \\eqref{eq:coarseInvariant}.\n    \n    \\item\n    \\mbox{If $\\*l \\notin \\nat^{t-1}$, then $\\*l' \\notin \\nat^t$ and}\n    $\\normone{\\vecmax(\\*l', \\*1)}\n    \\le \\normone{\\vecmax(\\*l, \\*1)} + l^\\ast\n    = n - d + t - b + 1$\n    due to \\cref{line:algCoarseBoundary3},\n    i.e., $\\*l$ is contained in the second set of the \\rhs of\n    \\eqref{eq:coarseInvariant}.\n  \\end{itemize}\n  Thus, all levels that the algorithm inserts into $\\levelset^{(t)}$\n  can be found on the \\rhs of \\eqref{eq:coarseInvariant}.\n  \n  It remains to prove that all levels on the \\rhs of\n  \\eqref{eq:coarseInvariant}\n  are eventually inserted by the algorithm into $\\levelset^{(t)}$.\n  We prove this by induction over $t = 1, \\dotsc, d$.\n  For $t = 1$, the \\rhs of \\eqref{eq:coarseInvariant} equals\n  $\\{l \\in \\natz \\mid l \\le n-d+1\\}$, which is just $\\levelset^{(1)}$\n  (see \\cref{line:algCoarseBoundary6} of \\cref{alg:coarseBoundary}).\n  For the induction step $(t - 1) \\to t$, we assume\n  the validity of the induction hypothesis\n  \\begin{equation}\n    \\label{eq:proofCoarseInductionHypothesis}\n    \\begin{aligned}\n      \\levelset^{(t-1)}\n      &= \\{\\*l \\in \\nat^{t-1} \\mid\n      \\normone{\\*l} \\le n-d+t-1\\} \\dotcup {}\\\\\n      &\\hphantom{{}={}}\n      \\paren*{\n        \\{\\*l \\in \\natz^{t-1} \\setminus \\nat^{t-1} \\mid\n        \\normone{\\vecmax(\\*l, \\*1)} \\le n-d+t-b\\} \\cup\n        \\{\\vec{0}\\}\n      }.\n    \\end{aligned}\n  \\end{equation}\n  The \\rhs of \\eqref{eq:coarseInvariant} has three parts,\n  so we check for elements $\\*l' \\in \\natz^t$\n  of each of the three sets that they are appended to $\\levelset^{(t)}$\n  eventually.\n  \n  First, let $\\*l' = (\\*l, l_t)$ be in the first set of the \\rhs,\n  i.e., $\\*l' \\in \\nat^t$ (in particular $l_t \\ge 1$) and\n  $\\normone{\\*l'} \\le n - d + t$.\n  Note that $\\*l$ will be 
encountered in the inner loop, as\n  $\\*l \\in \\nat^{t-1}$ and\n  $\\normone{\\*l} = \\normone{\\*l'} - l_t \\le n - d + t - 1$,\n  which implies $\\*l \\in \\levelset^{(t-1)}$ by the induction\n  hypothesis \\eqref{eq:proofCoarseInductionHypothesis}.\n  Since $1 \\le l_t \\le l^\\ast$\n  (due to\n  $l_t = \\normone{\\*l'} - \\normone{\\*l} \\le n-d+t - \\normone{\\*l} = l^\\ast$),\n  the level $\\*l'$ is inserted into $\\levelset^{(t)}$ during the innermost loop\n  in \\cref{line:algCoarseBoundary4} of \\cref{alg:coarseBoundary}.\n  \n  Second, let $\\*l' = (\\*l, l_t)$\n  be in the second set of the \\rhs, i.e., we have\n  $\\*l' \\notin \\nat^t$ and\n  $\\normone{\\vecmax(\\*l', \\*1)} \\le n-d+t-b+1$.\n  Here, there are three cases:\n  \\begin{enumerate}\n    \\item\n    $l_t \\ge 1$:\n    This implies $\\*l \\notin \\nat^{t-1}$ and \n    $\\normone{\\vecmax(\\*l, \\*1)}\n    = \\normone{\\vecmax(\\*l', \\*1)} - l_t\n    \\le n-d+t-b$.\n    Consequently, $\\*l \\in \\levelset^{(t-1)}$ by the induction hypothesis\n    \\eqref{eq:proofCoarseInductionHypothesis}.\n    As $1 \\le l_t \\le l^\\ast$\n    (due to $l_t \\le n-d+t-b+1 -\n    \\normone{\\vecmax(\\*l, \\*1)} = l^\\ast$),\n    $\\*l$ is added to $\\levelset^{(t)}$ in \\cref{line:algCoarseBoundary4}.\n    \n    \\item\n    $l_t = 0$ and $\\*l \\in \\nat^{t-1}$:\n    This implies $\\normone{\\*l} = \\normone{\\*l'}\n    = \\normone{\\vecmax(\\*l', \\*1)} - 1\n    \\le n - d + t - b\n    \\le n - d + t - 1$ since $b \\ge 1$.\n    Again, by the induction hypothesis\n    \\eqref{eq:proofCoarseInductionHypothesis},\n    $\\*l$ is added to $\\levelset^{(t)}$ in \\cref{line:algCoarseBoundary5}\n    due to\n    $\\normone{\\vecmax(\\*l, \\*1)}\n    = \\normone{\\*l} \\le n - d + t - b$.\n    \n    \\item\n    $l_t = 0$ and $\\*l \\notin \\nat^{t-1}$:\n    This implies $\\normone{\\vecmax(\\*l, \\*1)}\n    = \\normone{\\vecmax(\\*l', \\*1)} - 1\n    \\le n - d + t - b$.\n    Again, by the induction hypothesis\n    \\eqref{eq:proofCoarseInductionHypothesis},\n    $\\*l$ is added to $\\levelset^{(t)}$ in \\cref{line:algCoarseBoundary5}.\n  \\end{enumerate}\n  \n  \\noindent\n  Third, let $\\*l = (\\vec{0}, 0) \\in \\natz^t$\n  be in the third set of the \\rhs.\n  This level is appended in \\cref{line:algCoarseBoundary5}\n  to $\\levelset^{(t)}$, since $\\*l' = \\vec{0} \\in \\natz^{t-1}$\n  is in $\\levelset^{(t-1)}$ by the\n  induction hypothesis~\\eqref{eq:proofCoarseInductionHypothesis}.\n\\end{proof}\n", "meta": {"hexsha": "0727d32714d3eb067de3fb93d0cb6f89e8509361", "size": 8865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/document/a11chapter2.tex", "max_stars_repo_name": "valentjn/thesis", "max_stars_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-01-15T19:50:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-15T20:16:10.000Z", "max_issues_repo_path": "tex/document/a11chapter2.tex", "max_issues_repo_name": "valentjn/thesis", "max_issues_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/document/a11chapter2.tex", "max_forks_repo_name": "valentjn/thesis", "max_forks_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_forks_repo_licenses": ["CC0-1.0"], 
"max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.1836734694, "max_line_length": 80, "alphanum_fraction": 0.5828539199, "num_tokens": 3514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746407, "lm_q2_score": 0.760650658103136, "lm_q1q2_score": 0.5764167872169076}}
{"text": "\\section{Networks of Queues}\n\\label{sec:Networks-Of-Queues}\n\nA network of queues is an architecture where multiple queue nodes are connected with probabilistic routing. Such a network can be \n(i) open, if it receives external arrivals and departures exit the system, or\n(ii) closed, if the number of jobs is fixed to a multiprogramming level and it does not have external arrival neither departures exits the system.\n\nJackson Networks are the simplest network of queues, and they are very useful in modeling packet-routing computer networks.\n\nWe define Open Jackson Networks, Closed Jackson Networks and their respective closed-form limiting probabilities.\n\nFurthermore, we present the Local Balance Approach to solve Open Jackson Network, and the Mean Value Analysis to solve Closed Jackson Networks.", "meta": {"hexsha": "e8f41ac6f7b0b3807149fa7e71508a2bf7a25c11", "size": 801, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "performance-modeling/sec/networks-of-queues.tex", "max_stars_repo_name": "gmarciani/research", "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_issues_repo_path": "performance-modeling/sec/networks-of-queues.tex", "max_issues_repo_name": "gmarciani/research", "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "performance-modeling/sec/networks-of-queues.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 66.75, "max_line_length": 146, "alphanum_fraction": 0.8152309613, "num_tokens": 158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7577943712746406, "lm_q1q2_score": 0.5764167831052307}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsthm}\n\\usepackage{amsmath}    % need for subequations\n\\usepackage{amsfonts}\n\\usepackage[ocgcolorlinks]{hyperref}\n\n\\setlength{\\baselineskip}{16.0pt}    % 16 pt usual spacing between lines\n\n\\setlength{\\parskip}{3pt plus 2pt}\n\\setlength{\\parindent}{20pt}\n\\setlength{\\oddsidemargin}{0.5cm}\n\\setlength{\\evensidemargin}{0.5cm}\n\\setlength{\\marginparsep}{0.75cm}\n\\setlength{\\marginparwidth}{2.0cm}\n\\setlength{\\marginparpush}{1.0cm}\n\\setlength{\\textwidth}{150mm}\n\n\\DeclareMathOperator{\\sinc}{sinc}\n\\DeclareMathOperator{\\sgn}{sgn}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{corollary}[theorem]{Corollary}\n\n\\newenvironment{definition}[1][Definition]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\n\\newenvironment{example}[1][Example]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\n\\newenvironment{remark}[1][Remark]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\n\n\\begin{document}\n\n\\section{Example: Counting change}\n\n\\begin{lemma}\n    $\\forall n, d, k \\in \\mathbb{N} : d, k \\ge 1 \\implies n^{k-1} + (n-d)^k \\le n^k$\n\\end{lemma}\n\n\\begin{proof}\n    $$d \\ge 1 \\implies n - d \\le n - 1 < n.$$\n    $$k \\ge 1 \\implies (n-d)^{k-1} < n^{k-1}.$$\n    $$(n-d)^k = (n-d)(n-d)^{k-1} \\le (n-1)n^{k-1} = n^k - n^{k-1}.$$\n\\end{proof}\n\n\\begin{definition}\n    $T(n,k)$ is the number of ways to change amount $n$ using $k$ kinds of coins. $n, k \\in \\mathbb{N}, n, k \\ge 1$.\n\\end{definition}\n\n\\begin{theorem}\n    $T(n,k) = O(n^k).$\n\\end{theorem}\n\n\\begin{proof}\n    To prove the theorem we have to show that $\\exists c>0$ such that $T(n,k) \\le cn^k$ for sufficiently large $n$.\n    Let's prove it using substitution method (CLRS, p.83).\n\n    Assume that the inequality is true.\n    The recurrence formula for $T(n,k)$ is\n    $$T(n,k) = T(n,k-1) + T(n-d,k),$$\n    where $d$ is denomination of the first coin. $d \\ge 1$. Therefore by Lemma\n    $$T(n,k) \\le cn^{k-1} + c(n-d)^k = c\\left[n^{k-1} + (n-d)^k\\right] \\le cn^k.$$\n\n    To finish the proof, we just need to find $c$. 
Since the boundary condition $T(1,1) = 1$, it's sufficient to put $c \\ge 1$.\n\\end{proof}\n\n\\end{document}\n", "meta": {"hexsha": "35e229f23b7e7c0ea37c0788baa2a51c497e86a9", "size": 2261, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sicp/v1/chapter-1.2/ndpar-1.2.tex", "max_stars_repo_name": "CompSciCabal/SMRTYPRTY", "max_stars_repo_head_hexsha": "a8e2c5049199635fecce7b7f70a2225cda6558d8", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 60, "max_stars_repo_stars_event_min_datetime": "2015-02-04T13:02:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-16T12:54:44.000Z", "max_issues_repo_path": "sicp/v1/chapter-1.2/ndpar-1.2.tex", "max_issues_repo_name": "CompSciCabal/SMRTYPRTY", "max_issues_repo_head_hexsha": "a8e2c5049199635fecce7b7f70a2225cda6558d8", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 80, "max_issues_repo_issues_event_min_datetime": "2015-02-20T07:23:41.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T02:30:42.000Z", "max_forks_repo_path": "sicp/v1/chapter-1.2/ndpar-1.2.tex", "max_forks_repo_name": "CompSciCabal/SMRTYPRTY", "max_forks_repo_head_hexsha": "a8e2c5049199635fecce7b7f70a2225cda6558d8", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2015-02-20T04:48:06.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-17T03:19:36.000Z", "avg_line_length": 32.3, "max_line_length": 127, "alphanum_fraction": 0.660769571, "num_tokens": 837, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746407, "lm_q2_score": 0.7606506526772883, "lm_q1q2_score": 0.5764167831052307}}
{"text": "\\chapter{Primitive groups, pairs and amalgams}\r\n\\section {Primitive Groups}\r\n{\\bf Definition 1:} $M<G$ is \\emph{primitive} if $M=N_G(A), \\forall A \\lhd M$. $1 \\ne M <G$ is\r\n\\emph{strongly $p$-embedded} if $|M \\cap M^g|_p = 1$ for $g \\in G \\setminus M$.\r\n$H<G$ is a $p$-local subgroup of $G$ if $M= N_G(P), P \\in p(G)$.\r\n\\\\\r\n\\\\\r\n{\\bf Comment:} Bender classified groups with strongly embedded $2$-subgroups.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 1:}\r\nLet $N \\lhd G$, be abelian.  If $G/N$ is perfect then $G'$ is perfect.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$(G/N)= (G/N)'$ so $G= G'N$ and \r\n$G/N \\approx G'/(G' \\cap N)$.  So\r\n$G'/(G' \\cap N)=\r\n(G'/(G' \\cap N))'$ and thus $G=G''N$ so $G/G'' \\approx N/(G'' \\cap N)$.  Since this is abelian,\r\n$G' \\subseteq G''$.\r\n\\end{quote}\r\n{\\bf Theorem 2:}\r\n$C_M(O_p(M)) \\subseteq O_p(M)$ is equivalent to $F^*(M)=O_p(M)$ and $O_{p'}(M)=1$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nClear.\r\n\\end{quote}\r\n{\\bf Theorem 3:}\r\nIf $M<G$ is maximal and $N \\lhd G$ then $M/N$ is a maximal subgroup of $G/N$.\r\n\\begin{quote}\r\n\\emph{Proof:}  Clear.\r\n\\end{quote}\r\n{\\bf Comment:} The forgoing theorem let's us assume a maximal subgroup contains no normal subgroup of\r\n$G$.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 4:}  Let $L \\lhd \\lhd G$.  If $L \\leq F^*(G)$ then (a)\r\n$L= (L \\cap F(G))(L \\cap E(G))$; (b) $F^*(L) = F^*(G) \\cap L$; (c)\r\n$E(L) C_{E(G)}(L) = E(G)$.  $E(L) \\lhd E(G)$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nEvery component of $L$ is a component of $G$ and $F(L) \\leq F(G)$.  Since $[F(G), E(G)]=1$.\r\n\\end{quote}\r\n{\\bf Theorem 5:}  The action of $G$ by right conjugation on cosets of primitive groups is primitive.\r\n\\begin{quote}\r\n\\emph{Proof:}  The action is obviously transitive and \r\n$Mg=M \\rightarrow g \\in M$ which is maximal so the action is primitive.\r\n\\end{quote}\r\n{\\bf Theorem 6:}\r\nIf $M<G$ is primitive, $N \\lhd G$ and $M \\cap N \\neq 1$ then $C_G(N) =1$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$C_G(N) \\lhd G$ and $C_G(N) < M \\cap N$ then $C_G(N)=1$.\r\n\\end{quote}\r\n{\\bf Theorem 7:}\r\nLet $M$ be a primitive subgroup of $G$.  No non-trivial subnormal subgroup of $G$ is contained in $M$.\r\n$F(G) \\cap M = 1$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSuppose $L \\lhd \\lhd G, 1 \\neq L \\leq M$.  Pick a minimal one, $L \\leq F^*(G)$.  By the previous result,\r\n${\\mathbb Z}(F^*(G)) = 1$.  So $F(G)=1$ and $L$ is a component of $G$.\r\n$\\langle L^M \\rangle \\lhd E(G)$ and by primitivity, $E(G) \\leq M$.  So $E(G)=1$, which is a contradiction.\r\n\\end{quote}\r\n{\\bf Theorem 8:}\r\nIf $M<G$ is a primitive subgroup, $p \\in \\pi(M)$, $N \\lhd G$.  Suppose $M \\cap N =1$ and $O_p(M) \\neq 1$.\r\n(a) $p \\notin \\pi(N)$, (b) $\\forall q \\in \\pi(N), \\exists$ an $M$-invariant subgroup Sylow $q$-subgroup\r\nof $N$; (c) if $|\\pi(N)| \\geq 2$ then $M$ is a maximal subgroup of $G$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(a) Put $P= O_p(M)$ then $M= N_G(P)$ and $P \\in S_p(NP)$ since $N \\cap P = 1$ thus $p \\notin \\pi(N)$.\r\nFor (b), $PN$ acts on $\\Omega= S_q(N)$. by conjugation and $N$ is a transitive normal subgroup of $PN$ so\r\n$C_{\\Omega}(P) \\neq \\emptyset$ and $C_N(P)$ is transitive on $C_{\\Omega}(P)$.  Now $C_N(P) \\leq M \\cap N =1$ gives\r\n$|C_{\\Omega}(P)|=1$ and $C_{\\Omega}(P)= C_{\\Omega}(M)$ since $P \\lhd M$.\r\nFor (c), $\\exists M$-invariant $Q \\in S_q(N)$.  
Since $Q < N$, $M < QM < NM \\leq G$.\r\n\\end{quote}\r\n{\\bf Theorem 9:}\r\nLet $M<G$ be a primitive subgroup, $N \\lhd G$: $M \\cap F^*(N) \\neq 1$ then\r\n$F(G)=1$, $F^*(N)=F^*(G)= E(G)$. Every minimal normal subgroup of $G$ is contained in $M$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$F^*(N) \\leq F^*(G)$ and by a previous result, ${\\mathbb Z}(R(G)) =1$ and so $F(G)=1$.$F^*(N)=E(N)$ and\r\n$F^*(G)= C_{F^*(G)}(E(N))E(N)$.  Applying the result again,\r\n$F^*(G)= E(N)= F^*(N)$.\r\n\\end{quote}\r\n{\\bf Theorem 10:}\r\nSuppose $G$ contains a primitive maximal subgroup $M$ then one of the following holds:\r\n(F1) $F(G)=F^*(G)$ and $M$ is a complement of $F(G)$ in $G$;\r\n(F2) $G$ contains exactly two minimal normal subgroups $N_1, N_2$ which are non-abelian\r\n$F^*(G)= N_1 \\times N_2= E(G)$; (F3) $F^*(G)$ is a minimal non-abelian subgroup of $G$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSee Stellmacher, 6.6.4.\r\n\\end{quote}\r\n{\\bf Theorem 11:}\r\nIf F1 holds and $p \\in \\pi(M), O_p(M) \\neq 1$ then all primitive maximal subgroups of $G$ are\r\nconjugate.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nPut $P= O_p(M), F= F^*(G)$ then $M= N_G(P)$, $FP \\lhd G$ and $S_p(M) \\subseteq S_p(G)$.  Let\r\n$H$ be another primitive maximal subgroup.  $H$ is also a complement of $F$ so $|H|=|M|$.\r\n$\\exists g: P \\leq P^g$.  $P= H^g \\cap FP \\lhd H^g$ so $H^g= N_G(P)= M$.\r\n\\end{quote}\r\n{\\bf Theorem 12:}\r\nSuppose F2 holds.   There is an $M$-isomorphism $\\alpha: N_1 \\rightarrow N_2$ such that\r\n$M \\cap F^*G) = \\{ x x^{\\alpha}: x \\in N_1 \\}$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $D= M \\cap F^*(G)$ then by a previous result ($C_G(N)=1$), $ D \\cap N_1 = 1 = D \\cap N_2$. \r\nSince $G= N_iM$, $F^*(G)= N_iD$ and $\\forall x_1 \\in N_1, \\exists ! x_2 \\in N_2: x_1 x_2 \\in D$.\r\nThe mapping $\\alpha: N_1 \\rightarrow N_2, x_1 \\mapsto x_2$ is an isomorphism.  The isomorphism\r\ncommutes with  elements of $M$ since $N_1, N_2, D$ are all $M$-invariant.\r\n\\end{quote}\r\n{\\bf Theorem 13:}\r\nLet $F$ be a minimal normal subgroup of $G$, $N<G$, $G= FM$.\r\n(a) Suppose $U \\subseteq F$ and $U^M=U$, then $UM<G$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(b) follows from (a).  For (a), assume the contrary, $G=UM$.  Then $U \\lhd G$ and $F$ is not a minimal\r\nnormal subgroups of $G$.\r\n\\end{quote}\r\n\\section {Primitive pairs}\r\n{\\bf Definition 2:} $M<G$ is \\emph{primitive} if $M=N_G(A), \\forall A \\lhd M$. $1 \\ne M <G$ is\r\n\\emph{strongly $p$-embedded} if $|M \\cap M^g|_p = 1$ for $g \\in G \\setminus M$.\r\nA group $M$ is of characteristic $p$ if $C_M(O_p(M)) \\leq O_p(M)$ (If\r\n$M$ is $p$-separable then $O_{p'}(M)=1$).\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 14:}\r\nLet $M$ be a group of characteristic $p$.\r\nSuppose $U \\le M$ and $U \\lhd \\lhd M$ or $O_p(M) \\le U$ then\r\n$U$ has characteristic $p$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nThe case $O_p(M) \\leq $ is clear.  In the other case, apply $F^*(L) =F^*(G) \\cap L$.\r\n\\end{quote}\r\n{\\bf Definition 3:}  $M_1, M_2$ is called a \\emph{primitive pair} if $1 \\ne A \\lhd M_i$,\r\n$A \\le M_1 \\cap M_2 \\rightarrow N_{M_j}(A)= M_1 \\cap M_2$.  The pair is respectively\r\n(solvable or characteristic $p$ if each are.  
The property ${\\cal P}(M_1 , M_2, A)$ is\r\n$1 \\neq A \\lhd M_i$, $A \\leq M_1 \\cap M_2$.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 15:}\r\nLet $M_1, M_2$ be two different maximal $p$-local subgroups of $G$ that both have characteristic $p$.\r\nSuppose $M_1$ and $M_2$ have a common Sylow $p$-subgroup, then $(M_1 , M_2)$ is a primitive pair of \r\ncharacteristic $p$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $1 \\neq A \\lhd M_i, A \\leq M_i \\cap M_j, i \\neq j$.  By previous result,\r\n$1 \\neq O_p(A) \\lhd M_i$ and the maximality of $M_i$ gives $M_i= N_G(O_p(A))$.  So\r\n$N_{M_j}(A)= M_i \\cap M_j$ has property ${\\cal P}$.  Let $S$ be a common Sylow subgroup of\r\n$M_1$ and $M_2$ then $O_p(M_1 ) O_p(M_2 ) \\leq S \\leq M_1 \\cap M_2$.\r\n\\end{quote}\r\n{\\bf Theorem 16:}\r\nSuppose $p \\in \\pi(G)$, every $p$-local has characteristic $p$ and $O_p(G)=1$ then either\r\n(a) there is a primitive pair of characteristic $p$ or (b)\r\nevery maximal $p$-local subgroup of $G$ is strongly embedded in $G$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $M$ be a maximal $p$-local of $G$, then $O_p(M) \\neq 1$ and\r\n$N_G(M) \\leq N_G(O_p(M))=M < G$ so $M^g \\neq M, g \\in G-M$.  $M^g$ is a maximal $p$-local.\r\nAmong all such, choose one, $L \\neq M$ with $|M \\cap L|_p$ maximal.\r\nIf $|M \\cap L|_p>1$ then $T \\in S_p(M \\cap L)$ and $U= N_G(T)$.\r\nSince $U$ is a $p$-local, there is a maximal $p$-local $H \\subseteq G$, maximal with $U \\subseteq H$.\r\nIf $H \\neq M$, let $T \\in S_p(M)$.  If $T<S$, $T<N_S(T) \\leq H \\cap M$ which contradicts the maximality of\r\n$|M \\cap L|_p$.  This $T \\in S_p(M)$.  $\\exists S_1 \\in S_p(L), g \\in S_1 - T$ such that $T^g=T$ and\r\n$M \\neq M^g$.  By the previous result, $(M, M^g)$ is a primitive pair of characteristic $p$.  We've shown (a)\r\nholds if $|M \\cap L|_p = 1$ when $M, L$ are two different $p$-locals.   If $|M \\cap M^g|_p=1, \\forall g \\in G - M$,\r\n$M$ is strongly $p$-embedded.\r\n\\end{quote}\r\n{\\bf Bender's Little Theorem:}\r\nLet $(M_1, M_2)$ be a primitive pair of $G$.  Suppose \r\n$F^*(M_1) \\le M_2$ and\r\n$F^*(M_2) \\le M_1$ then $\\exists p: (M_1, M_2)$ has characteristic $p$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSee Stellmacher, 10.1.4.\r\n\\end{quote}\r\n{\\bf Theorem 17:}\r\nLet $(M_1, M_2)$ be a primitive pair of $G$ of characteristic $p$ then $\\exists i \\in \\{1 , 2 \\}$\r\nsuch that either \r\n(1) The action of $M_i$ on $O_p(M_i/ \\Phi(M_i))$ is not $p$-solvable or\r\n(2) $W_i$ is elementary abelian and the action of $M_i$ on $W_i$ is not $p$-stable.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSee Stellmacher, 10.1.5.\r\n\\end{quote}\r\n{\\bf Theorem 18:}\r\nLet $(M_1, M_2)$ be a primitive pair of characteristic $p$, then $M_1$ or\r\n$M_2$ has a non-abelian Sylow $2$-subgroup.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nIf $p \\neq 2$, it follows from the previous result and $p$-stability for groups with abelian $2$-syslow subgroups.\r\nFor $p= 2$, if the Sylow $2$-groups are abelian, then $O_2(M_1)= O_2(M_2)$ and $(M_1, M_2)$ is not primitive.\r\n\\end{quote}\r\n{\\bf Theorem 19:}\r\nLet $M$ be $p$-seperable and $A$ a $p$-subgroup of $M$ satisfying\r\n$Phi(A) \\leq O_p(M)$ and $A \\nleq O_p(M)$ then $\\exists x \\in O_{p,p'}(M)$\r\nsuch that for $L= \\langle A, A^x \\rangle$, (a) $x \\in O^p(L) \\leq O_{p,p'}(M)$,\r\n(b) $[O^P(L),A]= O^p(L)$, and (c) $|A/(A \\cap O_p(L))|=p$ and $[A \\cap O_p(L), L] \\leq O_p(M)$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n$\\exists L$ with property (a).  
Choose $L$ maximal among all such groups then (b) follows.\r\n${\\overline L}= L/O_p(L)$, ${\\overline Q}= O_{p'}({\\overline L})$ so ${\\overline L} = {\\overline {AQ}}$.\r\n$A$ is an elementary abelian $p$-group and\r\n$\\Phi(A) \\leq O_p(M) \\cap L \\leq O_p(L)$.  Let ${\\cal B}$ be the set of maximal subgroups of $A$.\r\n${\\overline Q}=  \\langle C_{\\overline Q}({\\overline U}): U \\in {\\cal B} \\rangle$ so\r\n$[C_{\\overline Q}({\\overline U}), A] \\neq 1$ for some $U \\in {\\cal B}$ since $A$ acts non-trivially\r\non ${\\overline Q}$.  This implies $U= A \\cap O_p(L)$ and\r\n$[U, O^p(L)] \\leq O_p(L) \\cap O_{pp'}(M) \\leq O_p(M)$ and (c) follows.\r\n\\end{quote}\r\n{\\bf Theorem 20:}\r\nLet $M$ be a group of characteristic $2$ that \r\npossesses a section isomorphic to $S_4$  then $M$\r\npossesses a section isomorphic to $S_4$ .\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $M$ be a minimal counterexample.  $O_2(S_3)=1$ and $M/O_2(M) \\leq N \\lhd X < M$ and $X/N \\approx S_3$.\r\n$X=M$ by minimality.  Let ${\\overline M}= M/N$ and $D \\in S_3(M)$, ${\\overline D} \\approx C_3 \\lhd M$.\r\nBy Frattini, $M= N_M(D) N$.  There are $2$-elements that act nontrivially on the $3$ group $D$.  Let $t \\in N_M(D)$\r\nof minimal order.  $\\exists d \\in D: |d|=3, d^t= d^{-1}$, $\\langle d, t \\rangle/\\langle t^2 \\rangle \\approx S_3$.\r\nMinimality of $M$ shows  $M= O_2(M) \\langle d,t \\rangle$ and $t^2 \\in _2(M)$.\r\n$\\Phi(O_2(M))= 1$ and $C_{O_2(M)}(d)=1$.  Thus  $t^2=1$ and $\\exists 1 \\neq Z \\in C_{O_2(M)}(t)$.\r\n$V= \\langle z, z^d, Z^{d^2} \\rangle$ and $|V| \\leq 8$ and $V \\lhd M$.  Hence $V= C_2 \\times C_2$ and\r\n$V \\langle d, t \\rangle \\approx S_4$ so $M$ is not a minimal counterexample.\r\n\\end{quote}\r\n{\\bf Notation:}\r\n${\\cal Q}(Z, X)= \\{ A \\leq X: [Z,A,A]=1 \\neq [Z,A] \\}$.\r\n$q(Z,X)= 0$ if ${\\cal Q}(Z,X)= \\emptyset$;\r\n$q(Z,X)= min \\thinspace \\{e \\in {\\mathbb R}: |A/C_A(Z)|^e= |Z/C_Z(A)|, A \\in {\\cal Q}(Z,X) \\}$, otherwise.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 21:}\r\nLet $M$ act faithfully on \r\nan elementary abelian $2$-group, $V$ and let\r\n$A$ be an elementary abelian $2$-subgroup of $M$.  Suppose\r\n$C_M(O_{2'}(M)) \\leq O_{2'}(M)$ and $|V/C_V(A)| < |A|^2$.  Then\r\n$M$ posseses a section isomorphic to $S_3$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nAmong all the elementary abelian $2$-subgroups that satisfy the condition, choose $A$ of minimal order.\r\nAssume $|A|=2$, then $A \\in {\\cal A}_V(M)$ and the result follows from Glauberman's theorem.\r\nIf $|A|>2$,$C_M(O_{2'}(M) \\leq O_{2'}(M)$ means $A$ acts non-trivially on $O_{2'}(M)$.  Let $Q \\subseteq O_{2'}(M)$\r\nbe minimal.  $Q^A=Q$ and $[Q,A] \\neq 1$ so $A_0= C_A(Q)$ is a maximal subgroup of $A$ and\r\n$QA/A_0$ acts faithfully on $C_V(A_0)$.  The conclusion follows from the $|A|=2$ case if\r\n$|C_V(A_0)/C_V(A)| \\leq |A/A_0|^2=4$ which in turn follows from the minimality of $A$ since\r\n$|V/C_V(A)| < |A|^2 \\leq 4 |V/C_V(A_0)|$.\r\n\\end{quote}\r\n{\\bf Theorem 22:}\r\nLet $M$ be a group and $V$ an elementary abelian normal $p$-subgroup of $M$; let $Z \\leq V$ with\r\n$V= \\langle Z^M \\rangle$ and $Z \\lhd O_p(M)$.  
Suppose $\\exists A \\leq O_p(M)$ with $[V,A,A]=1$.\r\nThen $|A/C_A(V)|^q \\leq |V/C_V(A)|$ where $q= q(Z,O_p(M))$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSee Stellmacher, 10.1.10.\r\n\\end{quote}\r\n{\\bf Theorem 23:}\r\nLet $(M_1, M_2)$ be a solvable primitive pair of characteristic $2$, then $M_1$ or\r\n$M_2$ has a section isomorphic to $S_4$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nSee Stellmacher, 10.1.11.\r\n\\end{quote}\r\n{\\bf Theorem 24:}\r\nLet $G$ be a group of even order, $O_2(G)=1$.  Suppose that for every $2$-local $M$ of $G$,\r\n(1) $M$ has characteristic $2$ and is solvable and (2) $M$ does not possess a section\r\nisomorphic to $S_4$ then every maximal $2$ local of $G$ is strongly $2$-embedded in $G$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nThis is a direct consequence of the result preceeding Bender's theorem and the previous result.\r\n\\end{quote}\r\n\\section {Amalgam Graphs}\r\n{\\bf Definition 4:}\r\n$P_1, P_2 \\le G$, $|P_i|< \\infty$.  Construct a graph $\\Gamma(G, P_1, P_2)=\\Gamma$\r\nas follows: \r\n$\\Gamma$ has verticies consisting of right cosets of $P_1$ and $P_2$; the verticies\r\n$P_i g_j$ and $P_n g_m$ are\r\njoined by an edge if \r\n$P_i g_j \\ne P_n g_m$ and\r\n$P_i g_j \\cap P_n g_m \\ne \\emptyset$.  $\\Delta(\\alpha)$ denotes the verticies\r\nadjacent to $\\alpha$.  $G$ acts on graph by right multiplication on cosets.  \r\n$\\Delta(\\alpha)$ is a set of vertices adjacent to $\\alpha$.\r\n$G \\rightarrow Aut(\\Gamma)$.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 25:}\r\nSuppose $G$ has two orbits and $P_1, P_2$ are representatives.  Every vertex stabilizer\r\n$G_{\\alpha}$ is a $G$-conjugate of $P_1$ or $P_2$ then\r\n(a) $G$ acts transitively on $\\Gamma$ and every edge stabilizer in\r\n$G$ is a $G$-conjugate of $P_1 \\cap P_2$, (b) $G_{\\alpha}$ acts transitively\r\non $\\Delta(\\alpha)$, (c) $|\\Delta(\\alpha)|= |G_{\\alpha}:G_{\\alpha, \\beta}|$,\r\n(d) $P_1 \\cap P_2$ is the kernel of the action on $\\Gamma$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(a) For $P_ix \\in \\Gamma$ and $g \\in G$, $P_ixg=P_ix$ is equivalent to\r\n$P_i g^{x^{-1}} = P_i$ is equivalent $g \\in P_i^x$.\r\n(b) Let $\\langle P_1x, P_2y \\rangle$ be an edge.  $\\exists z \\in P_1x \\cap P_2y$.  Hence\r\n$P_1x = P_1z$ and $P_1x = P_1z$ so $z^{-1}$ conjugates $(P_1x, P_2y)$ to $(P_1, P_2)$.  The\r\nstabilizer of $(P_1z, P_2z)$ is in $P_1^z \\cap P_2^z= (P_1 \\cap P_2)^z$.\r\n(c) By (a), we can assume $\\alpha= P_1$ then $\\Delta( \\alpha ) = \\{ P_2y : P_2 \\cap P_1 \\neq \\emptyset \\}$\r\n$= \\{ P_2y : y \\in P_1 \\}$ thus $P_1$ is transitive on $\\Delta( \\alpha )$.\r\n(d) By (a), any normal subgroup of $G$ is contained in $P_1 \\cap P_2$ fixes every vertex of $\\Gamma$.\r\n\\end{quote}\r\n{\\bf Theorem 26:}\r\n$\\Gamma$ is connected iff $G= \\langle P_1 , P_2 \\rangle $.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nAssume $G= \\langle P_1 , P_2 \\rangle$ and let $\\Delta$ be a connected component of $\\Gamma$ that contains\r\n$P_1$.  Since $P_1$ and $P_2$ are adjacent, $P_2 \\in \\Delta$.  Since different components are disjoint,\r\n$\\Delta = \\Delta^{\\langle P_1, P_2 \\rangle}= \\Delta^G$ and thus $\\Delta= \\Gamma$ by (a) above.\r\nNow assume $\\Gamma$ is connected and let $G_0= \\langle P_1, P_2 \\rangle$ and $\\Gamma_0= \r\n\\{ P_1x: x \\in G_0 \\} \\cup \\{ P_xx: x \\in G_0 \\} $ be the coset graph of $G_0$ with respect to\r\n$P_1, P_2$.  $\\Gamma_0$ is connected.  If $\\Gamma= \\Gamma_0$ then $G= G_0$.  
Assume $\\Gamma \\neq \\Gamma_0$.\r\nSince $\\Gamma$ is connected, $\\exists \\alpha, \\beta \\in \\Gamma$ such that $\\alpha \\in \\Gamma_0, \\beta \\in \\Gamma\r\n- \\Gamma_0$.  By (c) above, $\\beta$ and all other elements of $\\Delta( \\alpha )$ are in $\\Gamma - \\Gamma_0$.\r\nHence $\\Gamma$ is not connected.  Contradiction.\r\n\\end{quote}\r\n{\\bf Theorem 27:}\r\nLet $G= \\langle P_1, P_2 \\rangle $, $U \\le G_{\\alpha} \\cap G_{\\beta}$ and\r\n$(\\alpha, \\beta)$ is an edge.  The (1) $N_{G_{\\delta}}(U)$ acts transitively on\r\n$\\Delta(\\delta), \\delta \\in \\{ \\alpha , \\beta \\}$ and (2) if $U \\le G_{\\alpha} \\cap G_{\\beta}$ then\r\n$U$ acts trivially on $\\Gamma$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(2) together with (c) implies (1) so we may assume (1) holds.\r\nLet $\\Gamma_0= \\alpha^{N_G(U)} \\cup \\beta^{N_G(U)}$.  $U$ fixes $\\Gamma_0$.\r\nIf $\\gamma \\in \\Gamma_0$, $\\exists g \\in N_G(U)$ and $\\delta \\in \\{ \\alpha, \\beta \\}$ such that\r\n$\\gamma= \\delta^g$.  Then $\\Delta( \\delta^g ) = \\Delta ( \\gamma )$ and \r\n$N_{G_{\\gamma}}(U)= N_{G_{\\delta}}(U)^g $.  By (1), $N_G(U)$ is transitive on $\\Delta( \\delta^g ) = \\Delta( \\gamma )$.\r\nOne of $\\alpha^g, \\beta^g$ is adjacent to $\\gamma$ and $\\{ \\alpha^g , \\beta^g \\} \\subseteq \\Gamma_0$.\r\nIt follows that $\\Delta( \\gamma ) \\subseteq \\Gamma_0$.  By the previous result, $\\Gamma$ is connected\r\nand we must have $\\Gamma= \\Gamma_0$ so $U$ stabilizes every vertex in $\\Gamma$.\r\n\\end{quote}\r\n{\\bf Condition} ${\\cal A}$:  Let $G$ be a\r\nfinite group generated by $P_1, P_2$,\r\n$T= P_1 \\cap P_2$ satisfying: $C_{P_i}(O_2(P_i)) \\le O_2(P_i)$, $T \\in S_2(P_i)$,\r\n$T_G=1$, $P_i/O_2(P_i) \\approx S_3$ and $[\\Omega(Z(T)), P_i] \\ne 1$.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 28:}\r\nLet ${\\cal A}$ holds and $(\\alpha, \\beta)$ be an edge of $\\Gamma$.\r\n(a) $|G_{\\alpha}:G_{\\alpha, \\beta}|=3$ and is a Sylow $2$ subgroup of $G_{\\alpha}$.\r\n$G_{\\alpha} = \\langle t, G_{\\alpha, \\beta} \\rangle$ for $t \\in G_{\\alpha} - G_{\\beta}$.\r\n(b) $|\\Delta( \\alpha )| =3$ and $O_2(G_{\\alpha})= \\bigcap_{\\delta \\in \\Delta( \\alpha )} (G_{\\alpha} \\cap G_{\\beta})$,\r\n(c) $G_{\\alpha}$ acts $2$-transitively on $\\Delta( \\alpha )$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(a) follows from $P_1/O_2(P_i) \\approx S_3$ and (b) and (c) follows from a previous result.\r\n\\end{quote}\r\n{\\bf Notation:}\r\nFor the remainder of the section, $Q_{\\alpha}= O_2(G_{\\alpha})$ and\r\n$Z_{\\alpha}= \\langle \\Omega(Z(T)): T \\in S_2(G_{\\alpha} \\rangle$.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 29:}\r\nSuppose ${\\cal A}$ holds and $\\alpha \\in \\Gamma$, $V \\lhd G_{\\alpha}$, $T \\in S_2(G_{\\alpha})$ and\r\n$\\Omega(Z(T)) \\leq V \\leq \\Omega(Z(Q_{\\alpha}))$ and $|V:\\Omega(Z(T))|= 2$\r\nthen\r\n$V= C_V(G_{\\alpha}) \\times W$, $W= [V, G_{\\alpha}]$. $W= C_2 \\times C_2$,\r\n$C_{G_{\\alpha}}(W)= Q_{\\alpha}$.  \r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $D \\in S_3(G_{\\alpha})$.  $V= C_V(D) \\times W$, $W= [V,D]$.  By ${\\cal A}$, since\r\n$G_{\\alpha}= DT$, $W \\neq 1$ and thus $|W| \\geq 4$.   Let $d \\in D^{\\#}$,\r\n$|V/ \\Omega(Z(T))|=2=|V/ \\Omega(Z(T^d))|$.  $G_{\\alpha}= \\langle T, T^d \\rangle$ means\r\n$|V/C_V(G_{\\alpha})| \\leq 4$ so $C_V( G_{\\alpha}) = C_V(D)$ and $|W|=4$.  
The remainder follows from\r\n${\\cal A}$.\r\n\\end{quote}\r\n{\\bf Theorem 30:}\r\nLet ${\\cal A}$ hold and $(\\alpha, \\beta)$ be an edge of $\\Gamma$;\r\n(a) $Z_{\\alpha} \\leq \\Omega(Z(Q_{\\alpha}))$;\r\n(b) $Q_{\\alpha} Q_{\\beta} = G_{\\alpha} \\cap G_{\\beta} \\in S_2(G)$;\r\n(c) $C_{G_{\\alpha}}(Z_{\\alpha}) = Q_{\\alpha}$;\r\n(d) $Z_{\\alpha}Z_{\\eta} \\lhd G_{\\alpha}$ iff $\\exists \\gamma \\in \\Delta( \\alpha) - \\{ \\beta \\}$ such that\r\n$Z_{\\alpha}Z_{\\beta} = Z_{\\alpha}Z_{\\gamma} $.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\n(a) Let $T \\in S_2(G_{\\alpha})$ then $Q_{\\alpha} \\leq T$ and ${\\cal A}$ implies \r\n$\\Omega(Z(Y)) \\leq Z(Q_{\\alpha})$.\r\n(b) By ${\\cal A}$ and a previous result, \r\n$Q_{\\alpha}$ and $Q_{\\beta}$ have index $2$ in $G_{\\alpha} \\cap G_{\\beta}$ so it STS $\r\nQ_{\\alpha} \\neq Q_{\\beta} $.\r\nIf $Q_{\\alpha} = Q_{\\beta} $, \r\nnow two previous results show\r\n$G$ acts failthfully on $\\Gamma$, and this contradicts ${\\cal A}$.\r\n\\end{quote}\r\n{\\bf Theorem 31:}\r\nLet ${\\cal A}$ hold and $(\\alpha, \\beta)$ be an edge of $\\Gamma$.  The following are equivalent:\r\n(1) the conclusion of Goldschmidt's Theorem holds; (2) $Z \\leq Q_{\\beta}$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nAssume the conclusion of Goldschmidt's theorem holds.  For $\\delta \\in \\{ \\alpha \\beta \\}$,\r\neither \r\n(i) $G_{\\delta} = S_4$ and $Q_{\\delta} \\approx S_4 \\times C_2$, or \r\n(ii) $G_{\\delta} = S_4 \\times C_2$ and $Q_{\\delta} \\approx C_2 \\times C_2 \\times C_2$.  In either\r\ncase, $Z_{\\delta} - Q_{\\delta}$ and by the previous result $Z_{\\alpha} \\nleq Q_{\\beta}$.\r\nSet $T= Q_{\\alpha} Q_{\\beta}$ and $E= Q_{\\alpha} \\cap Q_{\\beta}$.  The previous result gives\r\n$T \\in S_2(G_{\\delta})$ and $|T/Q_{\\delta}|=2$.  Thus \r\n$ |Q_{\\alpha}:E|=2= |Q_{\\beta}:E| $ and\r\n$T= Q_{\\beta}Z_{\\alpha}$ and\r\n$Q_{\\alpha}= E Z_{\\alpha}$.\r\nTodo, more.\r\n\\end{quote}\r\n{\\bf Definition 5:} \r\n$\\alpha, \\alpha'$ is a \\emph{critical pair} if $Z_{\\alpha} \\nleq Q_{\\alpha'}$ and\r\n$b=d(\\alpha, \\alpha')$.  Let $b$ be minimal.\r\n\\\\\r\n\\\\\r\n{\\bf Theorem 32:}\r\nSuppose ${\\cal A}$ holds and (a) ($\\alpha, \\alpha')$ is a critical pair;\r\n(b) $G_{\\alpha} \\cap G_{\\alpha+1} = Z_{\\alpha'}Q_{\\alpha}$, $G_{\\alpha'-1} \\cap G_{\\alpha'} = Z_{\\alpha}Q_{\\alpha'}$;\r\n(c) $R \\leq Z(G_{\\alpha}) \\cap Z(G_{\\alpha+1}) \\cap Z(G_{\\alpha'}) \\cap Z(G_{\\alpha'-1})$ and\r\n$R = [Z(G_{\\alpha}), G_{\\alpha+1}) \\cap G_{\\alpha}]= [Z(G_{\\alpha'}), G_{\\alpha'-1}) \\cap G_{\\alpha'}] $\r\n(d) $ Z_{\\alpha} = [ Z_{\\alpha} , G_{\\alpha} ] \\times \\Omega( Z(G_{\\alpha} ))] $ and\r\n$[ Z_{\\alpha} , G_{\\alpha} ] = C_2 \\times C_2$;\r\n(e) $|Z_{\\alpha}:\\Omega(Z(Y))|=2$ for $Y \\in S_2(G_{\\alpha})$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nMinimality of $(b)$ implies \r\n$Z_{\\alpha} \\leq Q_{\\alpha'-1} \\leq G_{\\alpha'-1} \\cap G_{\\alpha'} $ and\r\n$Z_{\\alpha'} \\leq Q_{\\alpha+1} \\leq G_{\\alpha+1} \\cap G_{\\alpha} $.\r\n$Z_{\\alpha} \\nleq Q_{\\alpha'}$ shows that\r\n$G_{\\alpha'-1} \\cap G_{\\alpha'} = Z_{\\alpha} Q_{\\alpha} $ since $Q_{\\alpha'}$ has index $2$ in\r\n$G_{\\alpha'-1} \\cap G_{\\alpha'}$.\r\n$Z_{\\alpha} \\lhd G_{\\alpha}$ and\r\n$Z_{\\alpha'} \\lhd G_{\\alpha'}$  so\r\n$R \\leq Z_{\\alpha} \\cap Z_{\\alpha'} $.  By a previous result, $R \\neq 1$ and so $Z_{\\alpha'} \\nleq Q_{\\alpha}$\r\nand $G_{\\alpha+1} \\cap G_{\\alpha}= Z_{\\alpha'} Q_{\\alpha}$.  
(a) and (b) follow and (c) follows from\r\n$R \\leq Z_{\\alpha} \\cap Z_{\\alpha'} $ and a previous result.   These also show\r\n$ |Z_{\\alpha} / C_{\\alpha}(Z_{\\alpha'}(Z_{\\alpha})= |Z_{\\alpha'} / Z_{\\alpha'}Z(Z_{\\alpha}) =2 $\r\nand\r\n$C_{Z_{\\alpha}}(Z_{\\alpha'})= \\Omega( Z(\r\nG_{\\alpha+1} \\cap G_{\\alpha}\r\n)$ which gives (d) and (f) and, with a previous result (e).\r\n\\end{quote}\r\n{\\bf Theorem 33:}\r\nLet $\\alpha - 1 \\in \\Delta(\\alpha) - \\{ \\alpha+1 \\}$.  Suppose $(\\alpha-1, \\alpha'-1)$ is not a critical pair.\r\nThen\r\n(1) $ Z_{\\alpha} Z_{\\alpha+1} = Z_{\\alpha} Z_{\\alpha-1} $;\r\n(2) $Q_{\\alpha} \\cap Q_{\\beta} \\lhd G_{\\alpha}, \\beta \\in \\Delta( \\alpha)$;\r\n(3) $\\alpha$ and\r\n$\\alpha'$ are conjugate and $b$ is even.\r\n\\begin{quote}\r\n\\emph{Proof:}   Since $(\\alpha-1, \\alpha'-1)$ is not critical, $Z_{\\alpha-1}, Z_{\\alpha'}] \\leq R \\leq Z_{\\alpha}$,\r\nso $\\langle Z_{\\alpha'}, G_{\\alpha} \\cap G_{\\alpha-1} \\rangle \\subseteq N(Z_{\\alpha-1}Z_{\\alpha})$.  The previous result\r\ngives (a). (b) follows by a previous result and the fact that $G_{\\alpha}$ is transisitve on $\\Delta( \\alpha )$.\r\nEither \r\n$\\alpha \\in (\\alpha')^G$ or\r\n$\\alpha \\in (\\alpha'-1)^G$ , so the former holds and $b$ is even.\r\nFor (c), note $\\alpha$ and $\\alpha'-1$ are conjugate so\r\n$G_{\\alpha}$ and $G_{\\alpha'-1}$ are conjugate.  Then (b) gives $Z_{\\alpha} \\leq\r\nQ_{\\alpha'-2} \\cap Q_{\\alpha'-1} =\r\nQ_{\\alpha'-1} \\cap Q_{\\alpha'} $.  This contradicts $Z_{\\alpha} \\nleq Q_{\\alpha'}$.\r\n\\end{quote}\r\n{\\bf Theorem 34:}\r\nSuppose $\\alpha - 1 \\in \\Delta(\\alpha) - \\{ \\alpha+1 \\}$ such that  $(\\alpha-1, \\alpha'-1)$ is a critical pair\r\nthen $b=1$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nPut $R_1= [Z_{\\alpha - 1}, Z_{\\alpha' - 1}]$.  Assume $b > 1$.\r\n$Z_{\\alpha} \\leq Q_{\\alpha+1}$ and $Z_{\\alpha'} \\leq Q_{\\alpha'-1}$.  $(\\alpha - 1, \\alpha' - 1)$ is a \r\ncritical pair so $|R_1'| =2$.  \r\n$R_1 = [Z_{\\alpha - 1}, G_{\\alpha - 1} \\cap G_{\\alpha}] \\leq (Z(G_{\\alpha - 1} \\cap  Z(G_{\\alpha})) \r\n\\cap (Z(G_{\\alpha' - 2} \\cap  Z(G_{\\alpha' - 1}))$.  $R_1 \\leq [Z_{\\alpha - 1}, Z_{\\alpha' - 1}]$, so $[R_1, Z_{\\alpha'}] = 1$.\r\n$\\langle Z_{\\alpha'} , G_{\\alpha - 1} \\cap G_{\\alpha} \\rangle = G_{\\alpha}$.\r\n\\\\\r\n\\\\\r\n(1) $R_1 \\leq Z(G_{\\alpha})$.\r\n\\\\\r\nLet $\\alpha - 2 \\in \\Delta(\\alpha - 1) \\setminus \\{ \\alpha \\}$.  Now we show \\\\\r\n\\\\\r\n(2) $(\\alpha - 2, \\alpha' - 2)$ is a critical pair.\\\\\r\nIf not, $Z_{\\alpha + 1} Z_{\\alpha} = Z_{\\alpha - 1} Z_{\\delta}, \\delta \\in \\Delta(\\alpha - 1)$\r\n$Z_{\\alpha + 1} Z_{\\alpha} = Z_{\\alpha + 1} Z_{\\alpha + 2}$.  Minimality of $b$ gives\r\n$Z_{\\alpha + 1} Z_{\\alpha + 2} \\leq Q_{\\alpha'}$ and $Z_{\\alpha} \\leq Q_{\\alpha'}$, so $(\\alpha, \\alpha')$ is not a critical pair.\r\nNow put $R_2= [Z_{\\alpha - 2}, Z_{\\alpha' - 2}]$.  
$\\alpha - 2$ and $(\\alpha, \\alpha')$ satisfy the hypothesis,\r\nhence $|R_2| = 2$.\\\\\r\n\\\\\r\n(3) $R_2 = [Z_{\\alpha - 2},  G_{\\alpha - 2} \\cap G_{\\alpha - 1}] \\leq Z(G_{\\alpha - 1})$.\\\\\r\n$\\exists y \\in G_{\\alpha - 1}, x \\in G_{\\alpha} : (\\alpha - 2)^y = \\alpha$ and $(\\alpha + 1)^x = \\alpha - 1$.\r\n$[Z_{\\alpha}, G_{\\alpha - 1} \\cap G_{\\alpha}] = [Z_{\\alpha - 2}, G_{\\alpha - 2} \\cap G_{\\alpha - 1}] = (R_1)^y \\leq Z_{\\alpha - 1}$.\r\n$R^x = [Z_{\\alpha}, G_{\\alpha + 1} \\cap G_{\\alpha}]^x = [Z_{\\alpha}, G_{\\alpha} \\cap G_{\\alpha - 1}] = (R_2)^y \\leq Z(G_{\\alpha - 1})$.\r\nIt follows that\\\\\r\n\\\\\r\n(4) $ R \\leq Z(G_{\\alpha + 1})$, so \\\\\r\n\\\\\r\n(5) $ R \\cap R_1 =1$.\\\\\r\nIf not, $Z_{\\alpha'} \\leq Q_{\\alpha' - 2}$, $[R_2, Z_{\\alpha'}] = 1= [R_2, G_{\\alpha - 1}]$, so $R_2 =1$.\r\nThis contradicts $|R_2|=2$.\\\\\r\n\\\\\r\n(6) $b = 2$\\\\\r\nIf $b>2$, \r\n$V_{\\alpha} = \\langle Z_{\\beta}, \\beta \\in \\Delta(\\alpha) \\rangle \\lhd G_{\\alpha}$.\r\n$V_{\\alpha + 1} = \\langle Z_{\\beta}, \\beta \\in \\Delta(\\alpha + 1) \\rangle \\lhd G_{\\alpha + 1}$.\r\n$V_{\\alpha} \\leq Q_{\\alpha}$,\r\n$V_{\\alpha + 1} \\leq Q_{\\alpha + 1}$, and\r\n$Z_{\\alpha} = \\langle \\Omega(Z(G_{\\alpha + 1} \\cap G_{\\alpha})^{G_{\\alpha}} \\rangle \\leq V_{\\alpha}$.\r\n$Z_{\\alpha + 1} \\leq Q_{\\alpha + 1}$, so \\\\\r\n\\\\\r\n(7) $Z_{\\alpha} Z_{\\alpha + 1} \\leq V_{\\alpha} \\cap V_{\\alpha + 1}$.\\\\\r\nSince $R_1 \\leq Z(G_{\\alpha})$ is a $2$-transitive action of $G_{\\alpha}$ on $\\Delta(\\alpha )$.\r\n${V_{\\alpha}}' = R_1 \\leq Z(G_{\\alpha})$.\r\nWe show $V_{\\alpha}$ is abelian and $V_{\\alpha}$ is generated by involutions.\r\n$V_{\\alpha} / R_1$ is elementary abelian so $R_1 = \\Phi(V_{\\alpha})$ and $R = \\Phi(V_{\\alpha + 1})$, put\r\n${\\overline {V_{\\alpha}}} = V_{\\alpha} / Z_{\\alpha}$.  We get $Z_{\\beta}/(Z_{\\alpha} \\cap Z_{\\beta})= 2, \\forall \\beta \\in \\Delta(\\alpha)$,\r\nso $|{\\overline {Z_{\\beta}}}| = 2$.  ${\\overline {V_{\\alpha}}} = \\langle Z_{\\beta}, \\beta \\in \\Delta(\\alpha)\\rangle$, so \r\n$|{\\overline {V_{\\alpha}}}| \\leq 8$.  Set $W= V_{\\alpha} \\cap V_{\\alpha + 1} \\lhd G_{\\alpha} \\cap G_{\\alpha + 1}$ then\r\n$Z_{\\alpha}Z_{\\alpha + 1} \\leq W$ and\\\\\r\n\\\\\r\n(8)  $V_{\\alpha} = \\langle W^{G_{\\alpha}} \\rangle$.\\\\\r\n$\\Phi(W) \\leq \\Phi(V_{\\alpha}) \\cap \\Phi(V_{\\alpha + 1}) =R_1 \\cap R =1$ and $W$ is elementary abelian\r\n${V_{\\alpha}}' \\ne 1$ and $|V_{\\alpha} / W| \\geq 2$.\r\nThe kernel of the action of $G_{\\alpha}$ on ${\\overline {V_{\\alpha}}}$ contains $Q_{\\alpha}$ since\r\n$[Z_{\\alpha - 1} , G_{\\alpha} \\cap G_{\\alpha - 1}] \\leq R_1 \\leq Z_{\\alpha}$.\r\n${\\overline {V_0}} = [{\\overline {V_{\\alpha}}}, O^2(G_{\\alpha})]$.\r\nIf ${\\overline {V_0}} = 1$ $W \\lhd G_{\\alpha}$ and ${V_{\\alpha}}' =1$ but this contradicts ${V_{\\alpha}}' = R_1$ and\r\n$|R_1| = 2$.\r\nNow suppose ${\\overline {V_0}} \\ne 1$ since $|{\\overline {V_{\\alpha}}}| \\leq 8$, \\\\\r\n\\\\\r\n(9) $|{\\overline {V_0}}| = 4$.\\\\\r\nAssume $|V_{\\alpha} / W| = 2$.  Let $x \\in G_{\\alpha}$ st $W^x \\ne W$\r\nthen $V_{\\alpha} = W W^x$.\r\n$W  \\cap W^x = Z(V_{\\alpha})$ and $|V_{\\alpha} (W \\cap W^x)| = 4$.  
Let $D \\in S_3(G_{\\alpha})$, $D$ acts non-trivially on\r\n$V_{\\alpha} / (W \\cap W^x)$ so all maximal subgroups of $V_{\\alpha}$ that contain $W \\cap W^x$ are $D$-conjugate.\r\nEvery $x \\in V^{\\#}$ is an involution, $V_{\\alpha}$ is elementary abelian contradicts ${V_{\\alpha}}' = R_1$.\r\nWe have shown $|V_{\\alpha}/ W| \\geq 4$ so  $|{\\overline {V_{\\alpha}}}| \\leq 8$. \\\\\r\n\\\\\r\n(10) $|V_{\\alpha}|=8$, $W = Z_{\\alpha}Z_{\\alpha + 1}$ and $|{\\overline W}|=2$.\\\\\r\n$Z_{\\alpha'} \\leq G_{\\alpha}$, $Z_{\\alpha'} \\nleq Q_{\\alpha}$,\r\nwe get $[{\\overline {V_0}}, Z_{\\alpha}'] \\ne 1$.  But $b = 2$,\r\nso $[V_{\\alpha}, Z_{\\alpha}] \\leq [V_{\\alpha}, V_{\\alpha + 1}] \\leq W$.\r\n${\\overline W} = [{\\overline {V_0}}, Z_{\\alpha'}], \\langle {\\overline W}^{G_{\\alpha}} \\rangle = {\\overline {V_0}}$, which\r\ncontradicts (8), (9) and (10).\r\n\\end{quote}\r\n{\\bf Goldschmidt's Theorem:}\r\nIf ${\\cal A}$ holds either (i) $P_1 \\approx P_2 \\approx S_4$ or \r\n(ii) $P_1 \\approx P_2 \\approx C_2 \\times S_4$.\r\n\\begin{quote}\r\n\\emph{Proof:}  \r\nLet $G$ be a counter example and choose $(G, P_1, P_2, T)$ with $|T|$ minimal.\r\n$b > 1$ and $(\\alpha - 1, \\alpha' - 1)$ is not a critical pair $\\forall \\alpha - 1 \\in \\Delta(\\alpha) \\setminus \\{\\alpha + 1 \\}$.\r\n\\\\\r\n\\\\\r\n(1) $b = 0 \\jmod{2}$, $X = Q_{\\alpha} \\cap Q_{\\alpha + 1} \\lhd G_{\\alpha}$.\r\n\\\\\r\n$|Q_{\\alpha} : X| = |Q_{\\alpha + 1}| = 2$.  Let $D \\in S_3(G_{\\alpha})$, ${\\overline {G_{\\alpha}}} = G_{\\alpha} / X$.\r\n$|{\\overline {G_{\\alpha}}}|= 12$, ${\\overline {Q_{\\alpha}}} \\leq {\\overline {G_{\\alpha}}}$ and $|{\\overline {Q_{\\alpha}}}|=2$ so\r\n${\\overline D} \\lhd {\\overline {G_{\\alpha}}}$.\\\\\r\n\\\\\r\nWe get (2a) $L \\lhd G_{\\alpha}$, $|G_{\\alpha}:L| = 2$,\\\\\r\n(2b) ${\\overline L} = S_3$, \\\\\r\n(2c) $S_2(L) = \\langle Q_{\\beta}: \\beta \\in \\Delta(\\alpha)\\rangle$,\\\\\r\n(2d) $O_2(L) = X = Q_{\\alpha} \\cap Q_{\\beta}, \\forall \\beta \\in \\Delta(\\alpha)$,\\\\\r\n(2e) $Q_{\\alpha +1} = Z_{\\alpha'} O_2(L))$, (f) $C_L(O_2(L)) \\leq O_2(L)$.\\\\\r\n$Z_{\\alpha} \\leq G_{\\alpha}$, $Z_{\\alpha} \\leq Q_{\\alpha + 1} \\leq O_2(L)$.\r\n$C_L(O_2(L)) \\leq C_L(Z_{\\alpha}) \\leq Q_{\\alpha} \\cap L \\leq O_2(L)$.\r\nBy $A2$, $\\exists t \\in G_{\\alpha + 1} \\setminus Q_{\\alpha + 1}, \\alpha^t = \\alpha + 2$ and $t^2 \\in Q_{\\alpha + 1}$\r\n$Q_{\\alpha + 1}, Q_{\\alpha}^t \\in S_2(L) \\leq G_{\\alpha}$, $L^t \\leq G_{\\alpha + 2}$. \\\\\r\n\\\\\r\n(3) $O_2(L)$ is not elementary abelian.\r\n\\\\\r\n$A_1 = O_2(L)$, $A_2=O_2(L^t)$ are elementary abelian of index $2$ in $Q_{\\alpha + 1}$.\r\nIf $A_1 = A_2$, $A_1 \\lhd \\langle G_{\\alpha}, G_{\\alpha + 2}\\rangle$.\r\n$\\langle G_{\\alpha}, G_{\\alpha} \\cap  G_{\\alpha + 1}, G_{\\alpha + 1} \\cap  G_{\\alpha + 2}\\rangle = \\langle G_{\\alpha}, G_{\\alpha + 1}\\rangle=G$\r\nsuch that $A_1 \\ne A_2$, $Q_{\\alpha + 1}$ is non-abelian.\r\n$A = A_1 \\cap A_2 = Z(Q_{\\alpha + 1})$, $|(Q_{\\alpha + 1}/A)| = 4$.\r\n$\\langle G_{\\alpha}, O^2(G_{\\alpha + 1} \\rangle = N_G(A_1)$, which is a contradiction.\r\n$O^2(G_{\\alpha + 1})$ acts transitively on $(Q_{\\alpha + 1}/A)^{\\#}$, its elements are involutions and $Q_{\\alpha + 1}$ is elementary\r\nabelian.\r\n$G_0 = \\langle L, L^t \\rangle$.  Denote the largest normal subgroup of $G_0$ in $Q_{\\alpha +1}$ by $Q$.\r\n${G_0}^t=G_0, Q^t=Q$.  
\\\\\r\n\\\\\r\n(4) We show $[Q, D] \\ne 1$.\\\\\r\nSuppose not put ${\\tilde {G_0}} = G_0/Q$.\r\n$Q_{\\alpha + 1} \\in S_2(L) \\cap S_2(L^t)$.\r\n$({\\tilde {G_0}}, {\\tilde L}, {\\tilde {L^t}}, {\\tilde {Q_{\\alpha + 1}}})$ satisfy ${\\cal {A_2}}$, ${\\cal {A_3}}$, ${\\cal {A_4}}$.\r\n${\\tilde W}= [{\\tilde {Z_{\\alpha}}}, {\\tilde D}] \\ne 1$.\r\n$C_{\\tilde L}(O_2({\\tilde L})) \\leq O_2({\\tilde L})$, so  ${\\cal {A_1}}$ and ${\\cal {A_5}}$ hold.\r\nBecause of the minimality of $|T|$ and $|Q_{\\alpha + 1}| < |T|$, ${\\tilde L} = S_4$ or ${\\tilde L} = C_2 \\times S_4$.\r\n${\\tilde W} = [O_2(({\\tilde L}), O^2({\\tilde L})]$ is not contained in $O_2({\\tilde L}^t)$ and ${\\tilde W} \\leq {\\tilde {Z_{\\alpha}}}$\r\nand so $O_2(L) =(O_2(L) \\cap O_2(L^t))Z_{\\alpha}$.\r\nSince $Z_{\\alpha} \\leq \\Omega(Z(O_2(L)))$, $\\Phi(O_2(L))= \\Phi(O_2(L) \\cap O_2(L^t))$ and thus\r\n$\\Phi(O_2(L)) = \\Phi(O_2(L^t))$. $\\Phi(O_2(L)) \\lhd \\langle G_{\\alpha}, G_{\\alpha + 2} \\rangle = G$ and ${\\cal {A_3}}$ gives\r\n$\\Phi(O_2(L))=1$ which contradicts (3) and (4).\\\\\r\n\\\\\r\n(5) Let $\\beta \\in \\Delta(\\alpha) \\setminus \\{\\alpha \\}$, then $\\langle Z_{\\alpha}, Z_{\\gamma} \\rangle$ is not normal in $L$.\r\n\\\\\r\nPut $\\Delta(\\beta) = \\{ \\alpha, \\delta, \\gamma \\}$ and $V_{\\beta}= \\langle Z_{\\alpha}, Z_{\\beta}, Z_{\\gamma} \\rangle \\lhd G_{\\beta}$.\r\nIf $x \\in Q_{\\alpha} \\setminus Q_{\\beta}$ interchanges $\\gamma$ and $\\delta$ and normalized $L$.\r\nIf $\\langle Z_{\\alpha} , Z_{\\gamma} \\rangle$ is normal in $L$ and also \r\n$\\langle Z_{\\alpha}, Z_{\\delta} \\rangle = \\langle Z_{\\alpha}, Z_{\\gamma}^x \\rangle$ is normal in $L$.  Thus $V_{\\beta}$ is normal\r\nin $L$ (which is not contained in $G_{\\alpha} \\cap G_{\\beta}$) which contradiction!\\\\\r\n\\\\\r\n(6) Let $b \\geq 4, \\alpha - 1 \\in \\Delta(\\alpha) \\setminus \\{ \\alpha + 1 \\}$ and\r\n$\\alpha - 2 \\in \\Delta(\\alpha - 1) \\setminus \\{\\alpha \\}$ then $(\\alpha - 2, \\alpha' - 2)$ is a critical pair.\r\n\\\\\r\nAssume not.  $Z_{\\alpha - 2} \\leq Q_{\\alpha' - 3} \\cap Q_{\\alpha' - 2}$.  Since\r\n$\\alpha' - 2$ is conjugate to $\\alpha$,\r\n$Z_{\\alpha -2} \\leq Q_{\\alpha' -3} \\cap Q_{\\alpha' - 2} = Q_{\\alpha' - 2} \\cap Q_{\\alpha' - 1} \\leq\r\nG_{\\alpha' - 1} \\cap G_{\\alpha'} = Z_{\\alpha}Q_{\\alpha'}$.  So\r\n$[Z_{\\alpha - 2}, Z_{\\alpha'}] \\leq [Z_{\\alpha}, Z_{\\alpha'}] \\leq Z_{\\alpha}$ and so $Z_{\\alpha - 2} Z_{\\alpha} \\leq Q_{\\alpha} \\cap Q_{\\alpha - 1}$ is normalized by $Z_{\\alpha'}$ is normal in $L$ which contradicts (5).\\\\\r\n\\\\\r\n$\\alpha - 1 \\in \\Delta(\\alpha) \\setminus \\{ \\alpha + 1 \\}, x \\in L \\leq G_{\\alpha}$ with $(\\alpha - 1 = (\\alpha + 1)^x$.  Thus\r\n$\\alpha - 2 = (\\alpha + 2)^x$, which is adjacent to $\\alpha - 1$.  If $b \\geq 4$, $(\\alpha - 2, \\alpha' - 2)$ is critical.  Hence\r\n$R_2= [Z_{\\alpha - 2}, Z_{\\alpha' - 2}] \\leq Z(G_{\\alpha - 2} \\cap G_{\\alpha - 1}) \\cap Z_{\\alpha' - 2}$.  Also,\r\n$b \\geq 4$ also implies $Z_{\\alpha'} \\leq Q_{\\alpha' - 2}$ and $[R_2, Z_{\\alpha'}] = 1$ and so $[R_2 , L] = 1$ and\r\n$R_2 \\leq Z(G_{\\alpha + 2} \\cap G_{\\alpha + 1}$ since $x \\in L$.\\\\\r\nNow $(\\alpha', \\alpha)$ is also a critical pair so $\\exists \\alpha' +2$ such that $d(\\alpha', \\alpha' + 2) = 2$ and\r\n$(\\alpha' +2, \\alpha + 2)$ is also critical. So $Z_{\\alpha' + 2} \\leq Q_{\\alpha' - 2}$ and so\r\n$[R_2, Z_{\\alpha' +2}] = 1$ since $R_2 \\leq Z_{\\alpha' - 2}$.  
Hence $G_{\\alpha + 2} \\cap G_{\\alpha + 3} = Q_{\\alpha +2}Z_{\\alpha'+2}$\r\nis centralized by $R_2$.  But then $R_2 \\leq Z(G_{\\alpha + 2})$ and  $R_2 \\leq Z(G_{\\alpha - 2})$ after conjugation.  This contradicts\r\nthe action of $Z_{\\alpha' - 2}$ on  $Z_{\\alpha + 2}$.\r\nThis proves:\\\\\r\n(7) $b \\leq 4$\r\n\\\\\r\nFinally, $Q \\leq O_2(L^t) \\leq Q_{\\alpha +2}] =  1$.\\\\\r\n\\\\\r\nCase a: $Z_{\\alpha + 2}$ is not contained in $O_2(L)$.\\\\\r\n$Q_{\\alpha + 1} \\leq O_2(L) Z_{\\alpha + 2}$ and $L= \\langle (Z_{\\alpha + 2})^L \\rangle O_2(L) = C_L(O_2(L)) O_2(L)$.\r\nSo $O^2(L) \\leq C_L(Q)$ since $Q \\lhd L$ but then $[Q, D] = 1$ which contradicts (4).\r\n\\\\\r\n\\\\\r\nCase b: $Z_{\\alpha + 2} \\leq O_2(L)$\r\n\\\\\r\n$Z_{\\alpha + 2} \\leq Q_{\\alpha}$ and (7) implies $b=4$.  (6) then implies\r\n$Z_{\\alpha + 2}$ is not contained in $Q_{\\alpha - 2}= Q_{(\\alpha + 2)^x}$ and $L^{tx}$ is normal of index $2$ in $G_{\\alpha - 2}$.\r\n$\\langle (Z_{\\alpha + 2})^{L^{tx}} \\rangle \\leq G_0$ has a Sylow $3$ subgroup, $D_2$ of $G_{\\alpha - 2}$. But then\r\n$Q \\lhd G_0$ shows $[Q, D] = 1$ which contradicts (4).\r\nThis contradicts (4) since $D_2$ is a $G_0$ conjugate of $D \\in S_3(G_{\\alpha})$.\r\nAssume $(\\alpha - 2, \\alpha' - 2)$ is not normal in $L$.\r\n\\end{quote}\r\n{\\bf Amalgam example 1:} $G= S_6$, $a=(12)$, $b=(12)(34)(56)$.  $P_1 = C_G(a)$, $P_2 = C_G(b)$.  \r\n$P_1 = \\langle a \\rangle \\times \\langle (34)(56) \\rangle \\times \\langle (35)(46) \\rangle$.\r\n\\\\\r\n\\\\\r\n{\\bf Amalgam example 2:} $G= SL_3(2)$, $|G|=168$.\r\n$P_1 = \\{\\left(\r\n\\begin{array}{ccc}\r\na & b & c \\\\\r\n0 & d & e \\\\\r\n0 & 0 & f \\\\\r\n\\end{array}\r\n\\right) \\}$.\r\n$P_2 = \\{\\left(\r\n\\begin{array}{ccc}\r\na & b & c \\\\\r\nd & e & f \\\\\r\n0 & 0 & g \\\\\r\n\\end{array}\r\n\\right) \\}$. $G=\\langle P_1, P_2\\rangle$. $P_1 \\cong P_2 \\cong S_4$.\r\n$P_1 \\cap P_2 = \\{\\left(\r\n\\begin{array}{ccc}\r\n1 & a & b \\\\\r\n0 & 1 & c \\\\\r\n0 & 0 & 1 \\\\\r\n\\end{array}\r\n\\right) \\}$.  
If $M_1, M_2$ are a solvable primitive pair oc characteristic $2$, either $M_1$ or $M_2$ possesses a section isomorphic\r\nto $S_4$.\r\n\\section {ONan}\r\n{\\bf Theorem:} Let $M$ be a primitive maximal subgroup of $G$ then either (1) $F^*(G)=F(G)$ [Example: $G=S_4$, $M=S_3$], (2)\r\n$F(G)= 1$ and $F^*(G) = N_1 \\times N_2$ where $N_1$ and $N_2$ are the only minimal normal subgroups\r\n[$G$ involves $A_5$] or (3) $F(G)=1$ and $F^*(G)$ is the unique minimal subgroup pf $G$.\r\n\r\n", "meta": {"hexsha": "45aa039fa39bfdfbd5a4de8b5c2ad25b28872659", "size": 36229, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "groups/gtAmalgam.tex", "max_stars_repo_name": "jlmucb/class_notes", "max_stars_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "groups/gtAmalgam.tex", "max_issues_repo_name": "jlmucb/class_notes", "max_issues_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "groups/gtAmalgam.tex", "max_forks_repo_name": "jlmucb/class_notes", "max_forks_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.7265861027, "max_line_length": 223, "alphanum_fraction": 0.5798669574, "num_tokens": 15581, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7577943712746406, "lm_q1q2_score": 0.5764167831052307}}
{"text": "\\documentclass{article}\n\\usepackage[a4paper, total={6in, 9in}]{geometry}\n\\usepackage{fancyhdr}\n\\usepackage{amsmath}\n\\usepackage{algorithm}\n\\usepackage{algpseudocode}\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{proof}{Proof}[section]\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{DanDoge}\n\\lhead{Notes on solving recurrence equations}\n\\rfoot{Page \\thepage}\n\\cfoot{latest version: 2018/02/26}\n\n\\title{Notes on solving recurrence equations}\n\\date{2018-02-26}\n\\author{DanDoge}\n\n\\begin{document}\n\\section{definition}\n  \\paragraph{}\n    if the $n$-th term of a sequence can be expressed as a function of previous terms\n    \\begin{equation}\n      x_n = F(x_{n - k}, ..., x_{n - 1}) + g_n,\\ n > k,\n    \\end{equation}\n    then this equation is called a $k$-th order \\textit{recurrence relation}, the values $x_1, ..., x_k$ are called \\textit{initial conditions}, if $g_n \\equiv 0$, the recurrence equation is called \\textit{homogeneous}, otherwise it's called \\textit{non-homogeneous}\n\\section{solving first order recurrences}\n  \\paragraph{}\n    this class of recurrences can be solved by \\textit{iteration}, which is, i think, taught in high school, thus i omit it.\n\\section{solving second order recurrences}\n  \\paragraph{}\n    this class of recurrences can be solved using \\textit{a characterstic equation}, when we have constant coefficients.\n  \\paragraph{example:Fibonacci numbers}\n    the Fibonacci sequence is defined by:\n    \\begin{equation}\n      a_n = a_{n - 1} + a_{n - 2}, a_0 = 0, a_1 = 1\n    \\end{equation}\n    we obtain a polynomial equation, which is called \\textit{a characterstic equation}\n    \\begin{equation}\n      \\lambda^2 = \\lambda + 1\n    \\end{equation}\n    which has two roots, $\\lambda_1 = \\frac{1 - \\sqrt{5}}{2}$ and $\\lambda_2 = \\frac{1 + \\sqrt{5}}{2}$, therefore, $\\lambda_1^n$ and $\\lambda_2^n$ and their linear combination $c_1\\lambda_1^n + c_2\\lambda_2^n$ are solutions to this recurrence, then, from initia conditions, we find $c_1$ and $c_2$.\n\\section{multiple roots}\n  \\paragraph{}\n    what if some of the roots of the characterstic equation are the same?\n  \\paragraph{example}\n  \\begin{equation}\n    a_n = 2a_{n - 1} - a_{n - 2}\n  \\end{equation}\n  the characterstic equation has two identical roots $\\lambda_1 = \\lambda_2 = 1$, so the first solution is $1^n$, to get the second solution, we consider a new equation\n  \\begin{equation}\n    b_n = (2 + \\epsilon)b_{n - 1} - (1 + \\epsilon)b_{n - 2}\n  \\end{equation},\n  if $\\epsilon \\to 0$, then $b_n$ approaches $a_n$, the characterstic equation for $b_n$ has two roots $1, 1 + \\epsilon$, and $c_1 = 1 - \\frac{1}{\\epsilon}, c_2 = \\frac{1}{\\epsilon}$,\n  thus $b_n = (1 - \\frac{1}{\\epsilon} + \\frac{1}{\\epsilon}*(1 + \\epsilon)^n)$, consider when $\\epsilon \\to 0$, we have $a_n = 1 + n$.\n  \\paragraph{Theorom}\n    let $\\lambda$ be a root of a multiplicity p of the characterstic equation, then $\\lambda^n, n\\lambda^n, ..., n^{p - 1}\\lambda^n$ are all solutions to the recurrence.\n\\section{non-homogeneous equations}\n  \\paragraph{Theorom}\n    a recurrence of the form\n    \\begin{equation}\n      x_n + c_1x_{n - 1} + ..+c_kx_{n - k} = b^nP(n)\n    \\end{equation}\n    where $c_k$ and $b$ are all constants, and $P(n)$ is a polynomial of the order $d$ can be transformed into the characterstic equation\n    \\begin{equation}\n      (r^k + c_1r^{k - 1} + ... 
+ c_k)(r - b)^{d + 1} = 0\n    \\end{equation}\n\\section{generating functions}\n  \\paragraph{example}\n    we are going to derive a generating function for this sequence\n    \\begin{equation}\n      a_n - 3a_{n - 1} + 2a_{n - 2} = 0, a_0 = 0, a_1 = 1\n    \\end{equation}\n    first, we define\n    \\begin{equation}\n      f(x) = \\sum_{k = 0}^\\infty a_kx^k\n    \\end{equation}\n    then we have\n    \\begin{equation}\n      f(x) - 3xf(x) + 2x^2f(x) =a_0 + a_1x - 3a_0x = x\n    \\end{equation}\n    thus,\n    \\begin{equation}\n      f(x) = \\frac{1}{1 - 2x} - \\frac{1}{1 - x}\n    \\end{equation}\n    by means of a geometric series, we have\n    \\begin{equation}\n      f(x) = (2 - 1)x + (2^2 - 1)x^2 + ...\n    \\end{equation}\n\\end{document}\n", "meta": {"hexsha": "e27058b020704e1e57a529fec48413bcbee003d2", "size": 4051, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/recurrence_equations.tex", "max_stars_repo_name": "DanDoge/notes-on-algorithms", "max_stars_repo_head_hexsha": "8822e4ed1a60a4cf516f65e31637f8e3e65fd19e", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/recurrence_equations.tex", "max_issues_repo_name": "DanDoge/notes-on-algorithms", "max_issues_repo_head_hexsha": "8822e4ed1a60a4cf516f65e31637f8e3e65fd19e", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/recurrence_equations.tex", "max_forks_repo_name": "DanDoge/notes-on-algorithms", "max_forks_repo_head_hexsha": "8822e4ed1a60a4cf516f65e31637f8e3e65fd19e", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.6421052632, "max_line_length": 298, "alphanum_fraction": 0.6608244878, "num_tokens": 1389, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943603346811, "lm_q2_score": 0.7606506635289835, "lm_q1q2_score": 0.5764167830070968}}
{"text": "\\section{Notation}\\label{sec:notation}\n\n\\begin{description}\n\\item[Natural Numbers] The set $\\mathbb{N}$ refers to the set of all natural\n  numbers $\\{0, 1, 2, \\ldots\\}$. The set $\\mathbb{Q}$ refers to the set of\n  rational numbers.\n\\item[Booleans] The set $\\mathbb{B}$ denotes the set of booleans\n  $\\{\\var{True}, \\var{False}\\}$.\n\\item[Powerset] Given a set $\\type{X}$, $\\powerset{\\type{X}}$ is the set of all\n  the subsets of $X$.\n\\item[Sequences] Given a set $\\type{X}$, $\\seqof{\\type{X}}$ is the set of\n  sequences having elements taken from $\\type{X}$. The empty sequence is\n  denoted by $\\epsilon$, and given a sequence $\\Lambda$, $\\Lambda; \\type{x}$ is\n  the sequence that results from appending $\\type{x} \\in \\type{X}$ to\n  $\\Lambda$.\n\\item[Functions] $A \\to B$ denotes a \\textbf{total function} from $A$ to $B$.\n  Given a function $f$ we write $f~a$ for the application of $f$ to argument\n  $a$.\n\\item[Inverse Image] Given a function $f: A \\to B$ and $b\\in B$, we write\n  $f^{-1}~b$ for the \\textbf{inverse image} of $f$ at $b$, which is defined by\n  $\\{a \\mid\\ f a =  b\\}$.\n\\item[Maps and partial functions] $A \\mapsto B$ denotes a \\textbf{partial\n    function} from $A$ to $B$, which can be seen as a map (dictionary) with\n  keys in $A$ and values in $B$. Given a map $m \\in A \\mapsto B$, notation\n  $a \\mapsto b \\in m$ is equivalent to both $m~ a = b$ and $\\mathsf{a}~m = b$.\n  Given a set $A$, $A \\mapsto A$ represents the identity map on $A$:\n  $\\{a \\mapsto a \\mid a \\in A\\}$. The $\\emptyset$ symbol is also used to\n  represent the empty map as well.\n\\item[Domain and range] Given a relation $R \\in \\powerset{(A \\times B)}$,\n  $\\dom~R \\in \\powerset{A}$ refers to the domain of $R$, and\n  $\\range~R \\in \\powerset{B}$ refers to the range of $R$. Note that (partial)\n  functions (and hence maps) are also relations, so we will be using $\\dom$ and\n  $\\range$ on functions.\n\\item[Domain and range operations] Given a relation\n  $R \\in \\powerset{(A \\times B)}$ we make use of the \\textit{domain-restriction},\n  \\textit{domain-exclusion}, and \\textit{range-restriction} operators, which\n  are defined in \\cref{fig:domain-and-range-ops}. Note that a map $A \\mapsto B$\n  can be seen as a relation, which means that these operators can be\n  applied to maps as well.\n\\item[Integer ranges] Given $a, b \\in \\mathbb{Z}$, $[a, b]$ denotes the\n  sequence $[i \\mid a \\leq i \\leq b]$ . Ranges can have open ends: $[.., b]$\n  denotes sequence $[i \\mid i \\leq b]$, whereas $[a, ..]$ denotes sequence\n  $[i \\mid a \\leq i]$. Furthermore, sometimes we use $[a, b]$ to denote a set\n  instead of a sequence. The context in which it is used should provide enough\n  information about the specific type.\n\\item[Domain and range operations on sequences] We overload the $\\restrictdom$,\n  $\\subtractdom$, and $\\restrictrange$ to operate over sequences. 
So for\n  instance given $S \\in \\seqof{A}$, and $R \\in \\seqof{(A \\times B)}$:\n  $S \\restrictdom R$ denotes the sequence\n  $[ (a, b) \\mid (a, b) \\in R, a \\in S]$.\n\\item[Wildcard variables] When a variable is not needed in a term, we replace\n  it by $\\wcard$ to make it explicit that we do not use this variable in the\n  scope of the given term.\n\\item[Implicit existential quantifications] Given a predicate\n  $P \\in X \\to \\mathbb{B}$, we use $P \\wcard$ as a shorthand notation for\n  $\\exists x \\cdot P~x$.\n\n\\item[Pattern matching in premises] In the inference-rules premises use\n  $\\var{patt} \\leteq \\var{exp}$ to pattern-match an expression $\\var{exp} $\n  with a certain pattern $\\var{patt}$. For instance, we use\n  $\\Lambda'; x \\leteq \\Lambda$ to be able to deconstruct a sequence $\\Lambda$\n  in its last element, and prefix. If an expression does not match the given\n  pattern, then the premise does not hold, and the rule cannot trigger.\n\n\\item[Ceiling] Given a number $n \\in \\mathbb{R}$, $\\ceil{n}$ represents the\n  ceiling of $n$, and $\\floor{n}$ represents its floor.\n\\end{description}\n\n\\begin{figure}[htb]\n  \\begin{align*}\n    \\var{S} \\restrictdom \\var{R}\n    & = \\{ (a, b) \\mid (a, b) \\in R, ~ a \\in S \\}\n    & \\text{domain restriction}\n    \\\\\n    S \\subtractdom R\n    & = \\{ (a, b) \\mid (a, b) \\in R, ~ a \\notin S \\}\n    & \\text{domain exclusion}\n    \\\\\n    R \\restrictrange S\n    & = \\{ (a, b) \\mid (a, b) \\in R, ~ b \\in S \\}\n    & \\text{range restriction}\n  \\end{align*}\n  \\caption{Domain and range operations}\n  \\label{fig:domain-and-range-ops}\n\\end{figure}", "meta": {"hexsha": "32201690045310e2211cb5aee17641cafb702272", "size": 4403, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cole/ledger/formal-spec/notation.tex", "max_stars_repo_name": "Quantum-One-DLT/bcc-ledger-specs", "max_stars_repo_head_hexsha": "e1109f35aee321bbf899a5e2cc4de3eec583f9b7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 108, "max_stars_repo_stars_event_min_datetime": "2019-03-24T02:26:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-30T05:27:16.000Z", "max_issues_repo_path": "cole/ledger/formal-spec/notation.tex", "max_issues_repo_name": "Quantum-One-DLT/bcc-ledger-specs", "max_issues_repo_head_hexsha": "e1109f35aee321bbf899a5e2cc4de3eec583f9b7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1266, "max_issues_repo_issues_event_min_datetime": "2019-03-18T20:23:28.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-04T12:50:51.000Z", "max_forks_repo_path": "cole/ledger/formal-spec/notation.tex", "max_forks_repo_name": "Quantum-One-DLT/bcc-ledger-specs", "max_forks_repo_head_hexsha": "e1109f35aee321bbf899a5e2cc4de3eec583f9b7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 86, "max_forks_repo_forks_event_min_datetime": "2019-03-29T06:53:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T17:17:15.000Z", "avg_line_length": 51.8, "max_line_length": 81, "alphanum_fraction": 0.6629570747, "num_tokens": 1451, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5764167747837432}}
{"text": "% --------------------------------------------------------------\n% This is all preamble stuff that you don't have to worry about.\n% Head down to where it says \"Start here\"\n% --------------------------------------------------------------\n \n\\documentclass[12pt]{article}\n \n\\usepackage[margin=1in]{geometry} \n\\usepackage{amsmath,amsthm,amssymb}\n\\usepackage{graphicx}\n\\usepackage{epstopdf}\n\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n \n\\newenvironment{theorem}[2][Theorem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{lemma}[2][Lemma]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{exercise}[2][Exercise]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{reflection}[2][Reflection]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{proposition}[2][Proposition]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{corollary}[2][Corollary]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n \n\\begin{document}\n \n% --------------------------------------------------------------\n%                         Start here\n% --------------------------------------------------------------\n \n%\\renewcommand{\\qedsymbol}{\\filledbox}\n \n\\title{Experiments of different directions in optimization for small neural networks}%replace X with the appropriate number\n% \\author{Chenxin} %if necessary, replace with your course title\n\\date{\\vspace{-10ex}}\n\\maketitle\n \n\n\\section{The problem and the data}\n\n\nI only focus on two simple networks and two sets of one dimensional data.\n\n\nNetwork 1 (N1) has the $1-2-1$ structure, thus has 7 parameters in total. Network 2 (N2) has the $1-1-1$ structure, thus has $4$ parameters in total. I tried linear, Relu, sigmoid as three activation functions and square loss as the loss function. However, for simplicity, here all the results are based on sigmoid activation function only. Figure~\\ref{fig:n} shows the network structure.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=6cm]{n1}\n\\includegraphics[width=6cm]{n2}\n\\caption{N1 (left) and N2 (right).}\n\\label{fig:n}\n\\end{figure}\n\nDataset one (D1) contains $4$ samples, two in each class. Dataset two (D2) contains $20$ samples, $10$ in each class. Figure~\\ref{fig:d}  shows the distribution of both of them.\n\\begin{figure}[h]\n\\includegraphics[width=8cm]{data1.eps}\n\\includegraphics[width=8cm]{data2.eps}\n\\caption{D1 (left) and D2 (right). Red samples have $1$-label, while green samples have $0$-label.}\n\\label{fig:d}\n\\end{figure}\n\nNotice that for D1, N1 should be able to achieve $100\\%$ accuracy after training, but D2 is a rather ``difficult'' dataset for both these two networks.\n\n\\section{What directions that CG gives us?}\n\nHere we focus on N1 only. We choose $200$ random points in ${\\bf R}^7$ for initializing parameters. For each point, the gradient $d_{gd}$ and the results of CG steps $d_{cg}$ are computed. After line search, we are able to obtain two updated points and their objective values, i.e., $f_{gd}$ and $f_{cg}$. 
We record the ratio\n$$ r:=\\frac{f_{gd} - f_0}{f_{cg} - f_0 } $$\nto measure the ``quality'' of two updates, where $f_0$ is the original objective value before taking any updates. Thus, $r>1$ means negative gradient could lead to more decreasing on objective value than a CG step, and vice versa.\n\nThe only difference between CG implemented here and traditional CG is, here if $\\alpha<0$, we will terminate CG loops and return the negative curvature direction as output. \n\nThe number of maximal CG iterations is tuned from $1$ to $5$.\n\nWe apply backtrack line search in both gradient steps and CG steps. The initial step size is $10$ and multiplier on it is $0.5$ if the iteration has to go on. However, for CG steps, we need to flip the sign of step size in each line search iterations since the direction may not be a descent direction. \n \n\\begin{figure}[h]\n\\includegraphics[width=8cm]{CGD1.eps}\n\\includegraphics[width=8cm]{CGD2.eps}\n\\caption{Comparing negative gradient directions and CG outputs as search directions for D1 (left) and D2 (right). $r>1$ means negative gradient directions provides larger decrease on objective.}\n\\label{fig:CGvsGD}\n\\end{figure}\n\nThe figures above are two bar plots showing, by setting different maximal iterations for CG, the percentages of $r>1$ and $r<1$. The observation is that, running more than $3$ iterations of CG does not change the results at all. The reason is that CG will find a negative curvature direction in first $3$ iterations in all cases. \n\nAnother observation is, by allowing CG to take more iterations, the outputs of CG have lower chance to beat gradient direction. Therefore, I guess the reason here is those negative curvature directions cannot provide much decrease on objective.\n\n\n\\section{How is leftmost eigenvector of Hessian?}\n\nHere we consider to use leftmost eigenvector as the direction to make updates. In particular, at any point, if the leftmost eigenvalue of Hessian is less than $0$, then we use left most eigenvector to do line search. If it is greater than $0$, then we use the output of CG with $5$ as maximal number of iterations. (It is worthwhile to note that for N1 and both datasets, as far as I see, all the random points have negative leftmost eigenvalue in the Hessian.)\n\nLet $f_{le}$ denote the objective obtained from doing line search on leftmost eigenvector as a direction. After defining\n$$ r:=\\frac{f_{gd} - f_0}{f_{nc} - f_0 }.$$\nand running the experiments, I found there are $\\mathbf{26\\%}$ cases which lead to $r>1$ for D1 and $\\mathbf{19\\%}$ cases leading to $r>1$ for D2. The results indicate that leftmost eigenvector of Hessian could be a direction that has larger chance to lead to a larger decrease in objective than gradient. This also suggest that the negative curvature directions returned by CG is not ``negative enough'' since the product $p^TA p$ (even though it is negative) can be rather close to $0$.\n\n\\begin{figure}[h]\n\\includegraphics[width=8cm]{ncd1.eps}\n\\includegraphics[width=8cm]{ncd2.eps}\n\\caption{Comparing negative gradient directions and leftmost eigenvectors as search directions for D1 (left) and D2 (right).  $r>1$ means negative gradient directions provides larger decrease on objective.}\n\\label{fig:CGvsGD}\n\\end{figure}\n\n\n\\section{Using three directions as three algorithms for training.}\n\nIn the above experiments, we compare reduction of three kinds of directions on random initial points. 
Here, we run each of them as an algorithm to see how they perform on N1 and D1. In particular, each algorithm is run for $100$ iterations to see what result it can reach.\n\n\nFigure~\\ref{fig:3m} shows the result of running each algorithm for $100$ iterations, starting from $10$ different initial points as $10$ sets of experiments. The observation is that CG can reach a loss close to $0$ in most cases, while the other two cannot. This is opposite to the observation in the previous experiments, which showed that CG steps have a lower chance of providing large decreases in the objective compared with gradient steps. This is a surprising result, and I would like to explore the reason behind it.\n\n\n\n\\begin{figure}[h]\n\\includegraphics[width=16cm]{threeM}\n\\caption{Objective values reached by the three algorithms after $100$ iterations, for $10$ different initial points.}\n\\label{fig:3m}\n\\end{figure}\n\n\n\n\\section{Visualization of these directions on N2}\n\nIn all the experiments above, we only use N1 as the network structure. Here we consider using N2 to build visualizations that help in understanding the different kinds of directions. Note that we fix the values of the two biases so that there are only $2$ weights as parameters. Figure~\\ref{fig:landscape} shows an example of such a visualization. The axis with the range from $0.25$ to $0.5$ represents the objective value, while the other two represent the values of the two weights. The black, red and blue lines represent the leftmost eigenvector, CG outputs and negative gradient directions, respectively. I hope visualizations like this will be helpful in future exploration.\n\n\\begin{figure}[t]\n\\includegraphics[width=15cm]{2w.eps}\n\\caption{Landscape of network N2 with the two biases fixed, and the three kinds of directions at $100$ random points.}\n\\label{fig:landscape}\n\\end{figure}\n\n\\section{Next steps}\n\\begin{itemize}\n\\item Figure out the reason why CG performs better in Section 4 but worse in Section 2.\n\\item Improve the current line search strategies.\n\\item Read the paper \\cite{rr} and think about the landscape of N1.\n\\end{itemize}\n\n\\begin{thebibliography}{9}\n\\bibitem{rr} \nLi, Hao, Zheng Xu, Gavin Taylor, and Tom Goldstein. \n``Visualizing the Loss Landscape of Neural Nets.'' arXiv preprint arXiv:1712.09913 (2017). \n\\end{thebibliography}\n\n\\end{document}\t\n\n\n\n", "meta": {"hexsha": "dd43822b71e10d01b217754fa61c7dcae679f08c", "size": 8933, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report1/2d.tex", "max_stars_repo_name": "schemmy/symmetry-dnn", "max_stars_repo_head_hexsha": "afb1c97b742ab9f9d497bc0431373a7cccad7d6f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report1/2d.tex", "max_issues_repo_name": "schemmy/symmetry-dnn", "max_issues_repo_head_hexsha": "afb1c97b742ab9f9d497bc0431373a7cccad7d6f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report1/2d.tex", "max_forks_repo_name": "schemmy/symmetry-dnn", "max_forks_repo_head_hexsha": "afb1c97b742ab9f9d497bc0431373a7cccad7d6f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.7697368421, "max_line_length": 664, "alphanum_fraction": 0.7360349267, "num_tokens": 2351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.757794360334681, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5764167706720664}}
{"text": "\\subsection{Backpropagation}\n\nBack propagation is a learning algorithm which aims to minimize the errors/cost function of the NN. Through this learning algorithm, the random weights and biases which were initially given to the network will be optimized to give the best output.\n\nOutput of each node is the sum of the multiplications of the output of previous nodes by certain weights. Therefore we can associate how much error is coming with every weight and how much error has been brought from each particular node from the previous layer.\n\nTo understand this better it is worth imagining the following example:\n\\begin{itemize}\n   \\item  node 1 in the output layer of the NN should be equal to 0.01\n   \\item  instead the NN is providing us with 0.8\n\\end{itemize}\n\nIn this case we should do the following:\n\\begin{enumerate}\n   \\item Calculate the error of the node (-0.79 in our example)\n   \\item Calculate how much error has been brought by every link to this node\n\\end{enumerate}\n\nFor instance if weight \\(w_{11}\\) is 0.6 and \\(w_{21}\\) is 0.4 then they are associated with an error of -0.79 multiplied by 0.6 and -0.79 multiplied by 0.4 respectively (see Figure \\ref{fig:bp}).\n\n\\begin{figure}[H]\n    \\includegraphics[width=\\linewidth]{pics/bp.jpg}\n    \\caption{\\label{fig:bp} Backpropagation}\n\\end{figure}\n\nAfter calculation of how much error is associated with every weight we can obtain the errors for the nodes in the proceeding layer.\n \nFor instance error term for node 1 in the hidden layer will be equal to:\n\\begin{itemize}\n   \\item the sum of errors associated with all the weights (\\(w_{11}\\) and \\(w_{12}\\) in our case) that link this node with the next layer (see Figure \\ref{fig:bp}).\n\\end{itemize}\n\n\nOnce we repeat this procedure for all the nodes in all layers we can find out how much every node should be changed.\n\nTo do so in Python we just need to make multiplication of vector that contain errors by corresponding matrix of weights.\n\n\\begin{lstlisting}[language=Python]\n    Find the errors associated with hidden layer output:\n    h_errors = np.dot(w_h_o.T, o_errors),\n    h_errors[0:10] # errors in the hidden layer - show the first 10 nodes out of 90.\n\\end{lstlisting}\n\n\\begin{lstlisting}\n    array([[ 0.39443768],\n    [-0.16865836],\n    [ 0.0304721 ],\n    [-0.85442941],\n    [-0.19828127],\n    [-0.53651297],\n    [ 0.52033741],\n    [-0.2781908 ],\n    [-0.07071894],\n    [-1.63579796]])\n\\end{lstlisting}\n\n\\subsection{Gradient Descent}\n\nGradient descent is one the most popular algorithms to optimize the neural networks. The name gradient descent is rooted in the procedure where the gradient is repeatedly evaluated to update the parameters. The objective of the gradient descent is to find weight parameters that will minimize the cost function.\n\nTo understand the concept of gradient descent we should ask ourselves the following question: What can be done to improve the weights we have assigned randomly at the beginning, so that the overall result improves?\n\nTo change the output of any node we should change the weights that connect it with the previous layer. Basically we want to find out how much error in every node changes once we change associated weights. Next we want to select the weights that would lead to a minimal error in the output layer. 
Selecting such weights can be achieved by differentiating the cost function and searching for its minimum.\n\nGiven the multidimensionality of the function we need to differentiate, the search for its minimum can be a complicated task. This task is similar, to some extent, to searching in the darkness for a path from the top of a mountain down to its valley. Because it is dark, it is almost impossible to reach the valley immediately. The only way to achieve the goal is by exploring the neighbourhood (the radius you are able to see), taking small steps in the direction that leads downhill, and constantly updating the path for the next steps. This process is illustrated in Figure \\ref{fig:gradient-descent}:\n\n\\begin{figure}[H]\n    \\includegraphics[width=\\linewidth]{pics/gradient_descent.png}\n    \\caption{\\label{fig:gradient-descent} How Gradient Descent works. Source: Giphy.com}\n\\end{figure}\n\nMathematically, the differentiation process can be illustrated with the example of the weights between the hidden and output layers, \\(w_{ho}\\). The same process, but with the corresponding values, should be applied for the weights between the input and hidden layers, \\(w_{ih}\\).\n\nAs can be seen from the formulas below, the error we want to minimize, \\(E\\), can be defined as the sum of squared differences between the target \\(t_n\\) and output \\(o_n\\) values of the NN. The sum runs over all the nodes in the layer, but when doing the calculation for a particular node this sum can be omitted - only the difference between the particular output \\(o_o\\) and target \\(t_o\\) matters.\n\nThe target value is constant. The output value depends on the weights and is obtained after applying the sigmoid function to the sum of the inputs (the outputs of the previous layer, \\(o_h\\), multiplied by the corresponding weights \\(w_{ho}\\)).\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial w_{ho}}=\\frac{\\partial}{\\partial w_{ho}}\\displaystyle\\sum_{n} (t_n-o_n)^2=\\frac{\\partial}{\\partial w_{ho}}(t_o-o_o)^2\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial w_{ho}}=\\frac{\\partial E}{\\partial o_{o}}\\frac{\\partial o_o}{\\partial w_{ho}}=-2(t_o-o_o)\\frac{\\partial o_o}{\\partial w_{ho}}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial w_{ho}}=-2(t_o-o_o)\\frac{\\partial}{\\partial w_{ho}}sigmoid\\Big(\\sum_{h} w_{ho}o_h\\Big)\n\\end{equation}\n\nThe formula for the derivative of the sigmoid function is formula 4. It is necessary to keep in mind that the sum to which we apply the sigmoid function also depends on the change of the weights \\(w_{ho}\\). Therefore one should follow the chain rule for the derivative.\n\n\\begin{equation}\n\\frac{\\partial}{\\partial x}sigmoid(x)=sigmoid(x)(1-sigmoid(x))\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial w_{ho}}=-2(t_o-o_o)\\,sigmoid\\Big(\\sum_{h} w_{ho}o_h\\Big)\\Big(1-sigmoid\\Big(\\sum_{h} w_{ho}o_h\\Big)\\Big)\\frac{\\partial \\big(\\sum_{h} w_{ho}o_h\\big)}{\\partial w_{ho}}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial w_{ho}}=-2(t_o-o_o)\\,sigmoid\\Big(\\sum_{h} w_{ho}o_h\\Big)\\Big(1-sigmoid\\Big(\\sum_{h} w_{ho}o_h\\Big)\\Big)o_h\n\\end{equation}\n\nThe formula we derived is for one particular node. We can, however, apply it to all the nodes in the layer. In order to do so, the only thing we need is to write this formula in matrix notation. 
Thus, the necessary update of the weights linked to all the nodes in a layer will be calculated.\n\n\\begin{equation}\n\\frac{\\partial E}{\\partial W_{ho}}=-2*O_{error}*O_{output}*(1-O_{output})*O_h^T\n\\end{equation}\n\nAfter solving the minimization problem we can update the weights we have assigned before.\n\n\\begin{equation}\nW_{ho}^{new}=W_{ho}^{old}-\\frac{\\partial E}{\\partial W_{ho}}\n\\end{equation}\n\n\\begin{equation}\n\\Delta W_{ho}=-2*(T_o-O_o)*O_o*(1-O_o)*O_h^T\n\\end{equation}\n\nIn code this can be represented as follows:\n\n\\begin{lstlisting}[language=Python]\n    # Update the matrix for weights between hidden and output layers:\n    w_h_o += np.dot((o_errors * o_output * (1.0 - o_output)), np.transpose(h_output)) # 2 can be omitted as being related to the learning rate.\n    # Update the matrix for weights between input and hidden layers:\n    w_i_h += np.dot((h_errors * h_output * (1.0 - h_output)), np.transpose(input))\n\\end{lstlisting}\n\n\\subsection{Learning Rate}\n\nNow there is something else we should add to the weight-updating procedure. If we completely change our weights with every new observation, our model learns to predict only the last input. Instead of updating the weights 100\\% every time, we can change them only partially - this way every new observation brings some new knowledge, while the previous knowledge stays in memory, even though updated to a certain extent. The bigger the learning rate, the more importance the last observation has; the smaller it is, the more important all the previous knowledge is. The smaller the steps, the more accurate the prediction will be. At the same time it might take more time to learn.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.5]{pics/learning_rate.png}\n    \\caption{\\label{fig:lr} Learning Rate. Source: Business Analytics and Data Science Course by Professor S. Lessmann, Chapter 5: Artificial Neural Networks}\n\\end{figure}\n\nBelow is the code for the weight update procedure with the learning rate included.\n\n\\begin{lstlisting}[language=Python]\n    # Define the learning rate:\n    l_r = 0.3\n\n    # Update the weights for the links between the hidden and output layers:\n    w_h_o += l_r * np.dot((o_errors * o_output * (1.0 - o_output)), np.transpose(h_output))\n    # Update the weights for the links between the input and hidden layers:\n    w_i_h += l_r * np.dot((h_errors * h_output * (1.0 - h_output)), np.transpose(input))\n\\end{lstlisting}\n\n\\subsection{Training}\n\nSo far we have been working with one particular observation. Let's put all the steps done before into a for-loop, so that we can perform them for all observations in our training set. More observations will allow the NN to learn from more information. 
Every time a new observation is fed forward, the error term backpropagated, and the cost function minimized, the matrices of weights become more capable of labeling yet unknown observations.\n\n\\begin{lstlisting}[language=Python]\n    for i in data:\n        observation = i.split(',')\n        input = np.array((np.asfarray(observation[1:])/255.0*0.99) + 0.01, ndmin=2).T\n        target = np.array(np.zeros(o_n) + 0.01, ndmin=2).T\n        target[int(observation[0])] = 0.99\n    \n        h_input = np.dot(w_i_h, input)\n        h_output = sigmoid(h_input)\n        o_input = np.dot(w_h_o, h_output)\n        o_output = sigmoid(o_input)\n    \n        o_errors = target - o_output\n        h_errors = np.dot(w_h_o.T, o_errors)\n        \n        w_h_o += l_r * np.dot((o_errors * o_output * (1.0 - o_output)), np.transpose(h_output))\n        w_i_h += l_r * np.dot((h_errors * h_output * (1.0 - h_output)), np.transpose(input))\n    \n        pass\n\\end{lstlisting}\n\n\\subsection{Second evaluation of the results}\n   \nOnce we have trained the model with 100 observations, we can test it with new data it has never seen. After loading the test set we can first work with a particular observation to get an intuition about how well our NN can solve the considered classification problem.\n\n\\begin{lstlisting}[language=Python]\n    # Load the mnist test data CSV file:\n    raw_data_test = open(\"data/mnist_test.csv\", 'r')\n    data_test = raw_data_test.readlines()\n    raw_data_test.close()\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Python]\n    # Check a particular observation:\n    observation = data_test[0].split(',')\n    # Print the label:\n    print(observation[0])\n    # Plot the number:\n    image = np.asfarray(observation[1:]).reshape((28,28))\n    mpp.imshow(image, cmap='Blues', interpolation='None')\n\\end{lstlisting}\n\n\\begin{lstlisting}\n    7\n\\end{lstlisting}\n\n\\begin{figure}[H]\n    \\centering\n   \\includegraphics[scale=0.5]{pics/7.png}\n   \\caption{\\label{fig:number7} Plot of the handwritten number}\n\\end{figure}\n\n\\begin{lstlisting}[language=Python]\n    # Use this observation as an input and run the NN with it:\n    input = np.array((np.asfarray(observation[1:])/255.0*0.99) + 0.01, ndmin=2).T\n    h_input = np.dot(w_i_h, input)\n    h_output = sigmoid(h_input)\n    o_input = np.dot(w_h_o, h_output)\n    o_output = sigmoid(o_input)\n    \n    o_output\n\\end{lstlisting}\n\n\\begin{lstlisting}\n   array([[ 0.05086044],\n          [ 0.01692228],\n          [ 0.03306648],\n          [ 0.01855151],\n          [ 0.17733202],\n          [ 0.01942656],\n          [ 0.01083799],\n          [ 0.33279056],\n          [ 0.13214786],\n          [ 0.05988345]])\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Python]\n    # Get the prediction of the NN for this test observation:\n    label = np.argmax(o_output)\n    label\n\\end{lstlisting}\n\n\\begin{lstlisting}\n    7\n\\end{lstlisting}\n\nAfter working with a particular observation from the test set, we can label all of them and evaluate the accuracy of our NN.\n\n\\begin{lstlisting}[language=Python]   \n    # Test the neural network using the whole test dataset:\n    \n    score = [] # create a list in which the predictions of the network will be saved.\n    \n    # Go through all the observations in the test data set:\n    for i in data_test:\n        observation = i.split(',')\n        expected = int(observation[0])\n        input = np.array((np.asfarray(observation[1:])/255.0*0.99) + 0.01, ndmin=2).T\n    \n        h_input = 
np.dot(w_i_h, input)\n        h_output = sigmoid(h_input)\n        o_input = np.dot(w_h_o, h_output)\n        o_output = sigmoid(o_input)\n    \n        label = np.argmax(o_output)\n    \n        if (label == expected):\n            score.append(1)\n        else:\n            score.append(0)\n            pass\n        \n        pass\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Python]   \n    # Calculate the performance score, the fraction of correct answers:\n    score_array = np.asarray(score)\n    print (\"performance = \", score_array.sum() / score_array.size)\n    \\end{lstlisting}\n\n\\begin{lstlisting}\n    performance =  0.3959\n\\end{lstlisting}\n\n  It is several times better than naive, which would be 0.1 (given that we have 10 levels of the categorical variable we have to classify). Can we do better?\n  \n\\subsection{Further Improvements}\n\n\\subsubsection{Training with several epochs}\n   \nOne way to improve the results of the NN is to train it more. For instance we can feedforward the same 100 observations more than once. Despite the fact that these are the same observations, longer training allows NN to accumulate more knowledge. Keep in mind that due to the presence of a learning rate NN receives only part of the information that is available and useful to predict particular observation. Seeing the same observations several times leads to smaller loss of the data.\n   \nSo let's introduce one extra parameter called epochs and create a loop around the number of epochs. The rest of the code we see below is the same as before.\n   \n\\begin{lstlisting}[language=Python]   \n    epochs = 5\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Python]   \n    # The \"big loop\" with epochs:\n    for e in range(epochs):\n        for i in data:\n            observation = i.split(',')\n            input = np.array((np.asfarray(observation[1:])/255.0*0.99) + 0.01, ndmin=2).T\n            target = np.array(np.zeros(o_n) + 0.01, ndmin=2).T\n            target[int(observation[0])] = 0.99\n    \n            h_input = np.dot(w_i_h, input)\n            h_output = sigmoid(h_input)\n            o_input = np.dot(w_h_o, h_output)\n            o_output = sigmoid(o_input)\n    \n            o_errors = target - o_output\n            h_errors = np.dot(w_h_o.T, o_errors)\n            w_h_o += l_r * np.dot((o_errors * o_output * (1.0 - o_output)), np.transpose(h_output))\n            w_i_h += l_r * np.dot((h_errors * h_output * (1.0 - h_output)), np.transpose(input))\n    \n            pass\n        pass\n    \n    \n    # test\n    score = []\n    \n    for i in data_test:\n        observation = i.split(',')\n        correct_label = int(observation[0])\n        input = np.array((np.asfarray(observation[1:])/255.0*0.99) + 0.01, ndmin=2).T\n    \n        h_input = np.dot(w_i_h, input)\n        h_output = sigmoid(h_input)\n        o_input = np.dot(w_h_o, h_output)\n        o_output = sigmoid(o_input)\n    \n        label = np.argmax(o_output)\n        if (label == correct_label):\n            score.append(1)\n        else:\n            score.append(0)\n            pass\n        \n        pass\n    \n    \n    # calculate accuracy\n    score_array = np.asarray(score)\n    print (\"performance = \", score_array.sum() / score_array.size)\n\\end{lstlisting}\n\n\\begin{lstlisting}\n    performance =  0.959\n\\end{lstlisting}\n\n\n\\subsubsection{Training with other Learning Rate}\n   \nThe smaller the learning rate the more capable the network to optimize the weights in a more accurate way. 
At the same time one should keep in mind that a small l\\_r also means an additional loss of the information extracted from each particular observation. Hence, there should be many training observations available in order to make the trade-off between accuracy and usage of the available data reasonable. Given that we have more epochs now, it is interesting to try a smaller learning rate.\n\n\\begin{lstlisting}[language=Python]   \n    l_r = 0.1\n\n    # run the \"big loop\" with epochs again to measure the accuracy for the new settings.\n\\end{lstlisting}\n\n\\subsubsection{A more complicated structure}\n\nAs you may remember, in the beginning we assigned the number of nodes in the hidden layer based on some rule-of-thumb assumptions. Now we can test if the NN performs better if we increase the number of hidden nodes.\n\n\\begin{lstlisting}[language=Python]   \n    h_n = 150\n    \n    # Determine the weights for the bigger matrices:\n    w_i_h = np.random.normal(0.0, pow(h_n, -0.5), (h_n, i_n))\n    w_h_o = np.random.normal(0.0, pow(o_n, -0.5), (o_n, h_n))\n    \n    # run the \"big loop\" with epochs again to measure the accuracy for the new settings.\n\\end{lstlisting}\n\nIt is always possible to train neural networks with a larger number of neurons. But with a smaller number of neurons the neural network has much better generalization abilities. \n\n\\textit{Overfitting.} Too many nodes is one of the reasons that leads to an overtrained neural network, which means that it will fail to recognize patterns that were never used in the training.\n\nWith a smaller number of neurons, it is more complicated to train the network to very small errors, but it may produce much better approximations for new patterns. The most common mistake made by many researchers is that, in order to speed up the training process and to reduce the training errors, they use neural networks with a larger number of neurons than required. Such networks could perform poorly for new patterns not seen previously by the NN.\n\n\\subsubsection{Other training set}\n\nOne other source of improvement is providing the NN with a relatively big dataset for training. Everything that was done before was implemented with just 100 observations. Let's see if our results improve if we increase our training dataset to 60,000 observations. As we have more data now, we will reduce the number of epochs and keep the learning rate low.\n\n\\begin{lstlisting}[language=Python]   \n    # Load the data\n    raw_data = open(\"data/mnist_train.csv\", 'r')\n    data = raw_data.readlines()\n    raw_data.close()\n    \n    # Settings\n    epochs = 2\n    l_r = 0.1\n    h_n = 90\n    w_i_h = np.random.normal(0.0, pow(h_n, -0.5), (h_n, i_n))\n    w_h_o = np.random.normal(0.0, pow(o_n, -0.5), (o_n, h_n))\n    \n    # run the \"big loop\" with epochs again to measure the accuracy for the new settings.\n\\end{lstlisting}\n\nThe result we achieve with a big training set is already pretty impressive. In more than 90\\% of cases our NN is able to solve the classification problem properly. And we should remember that it was implemented from scratch using only basic linear algebra packages. 
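\n\nTo avoid re-running the notebook by hand for every such setting, one could wrap the ``big loop'' and the evaluation into a single function and sweep the settings programmatically. A hedged sketch (the helper \\texttt{train\\_and\\_score} is hypothetical and stands for the training and test loops shown above):\n\\begin{lstlisting}[language=Python]\n    def train_and_score(epochs, l_r, h_n):\n        # Hypothetical wrapper: rebuild w_i_h and w_h_o for h_n hidden\n        # nodes, run the \"big loop\" for the given epochs and l_r, and\n        # return the performance score on the test set.\n        ...\n\n    # Compare a few combinations of the settings discussed above:\n    for epochs in (2, 5):\n        for l_r in (0.1, 0.3):\n            for h_n in (90, 150):\n                print(epochs, l_r, h_n, train_and_score(epochs, l_r, h_n))\n\\end{lstlisting}\n\n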
Let's see in the following section if we can do better or if we can simplify the process using specialized packages to build neural networks.", "meta": {"hexsha": "103a0d3b72284c400cbe31814926c8b222ef279e", "size": 19288, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Neural_Network_Fundamentals/content/nn2.tex", "max_stars_repo_name": "romanarion/InformationSystemsWS1718", "max_stars_repo_head_hexsha": "82a89f8288de9f2b5ef0bad8e9e5bbe2dfee317a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Neural_Network_Fundamentals/content/nn2.tex", "max_issues_repo_name": "romanarion/InformationSystemsWS1718", "max_issues_repo_head_hexsha": "82a89f8288de9f2b5ef0bad8e9e5bbe2dfee317a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Neural_Network_Fundamentals/content/nn2.tex", "max_forks_repo_name": "romanarion/InformationSystemsWS1718", "max_forks_repo_head_hexsha": "82a89f8288de9f2b5ef0bad8e9e5bbe2dfee317a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.22, "max_line_length": 681, "alphanum_fraction": 0.7138116964, "num_tokens": 4876, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478256, "lm_q2_score": 0.7154240079185319, "lm_q1q2_score": 0.5763686028717901}}
{"text": "\\section{Analysis of Bubble Sort}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Time Complexity of Bubble Sort}\n  \\begin{itemize}\n\t\\setlength{\\itemsep}{10pt}\n\t\\item Finiteness is NOT enough $\\implies$ Quantifying finiteness\n\t  \\pause\n\t\\item Time on real computers varies $\\implies$ \\#Ops on our model:\n\t  \\vspace{5pt}\n\t  \\pause\n\t  \\begin{description}\n\t\t\\setlength{\\itemsep}{5pt}\n\t\t% \\item[$|P|:$] \\#Passes \\hfill (the ``{\\bf for}'' loops)\n\t\t\\item[$|C|:$] \\#Comparisons \\hfill ({\\bf if} $a_{i} > a_{i+1}$)\n\t\t\\item[$|S|:$] \\#Swaps  \\hfill ($\\textsc{Swap}(a_{i}, a_{i+1})$)\n\t  \\end{description}\n\t  % \\pause\n\t  % \\[\n\t  %   |C| \\ge |S|\n\t  % \\]\n\t  % \\vspace{-0.50cm}\n\t  \\pause\n\t\\item Different inputs $\\implies$ $|C|$ and $|S|$ vary:\n\t  \\vspace{5pt}\n\t  \\pause\n\t  \\begin{itemize}\n\t\t\\item Best-case, worst-case, and average-case analysis\n\t  \\end{itemize}\n\\end{itemize}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Best-case and Worst-case Analysis}\n  \\uncover<2->{\n  \\[\n\t\\text{\\textcolor{blue}{Best-case:} } 1\\quad 2\\; \\cdots\\; n\\quad\n  \\]}\n  \n  \\begin{displaymath}\n\t\\begin{array}{lll}\n\t  & \\hfill \\textcolor{blue}{\\frac{\\text{Best-case:}}{\\uncover<2->{\\text{non-decreasingly sorted}}}} \\hfill \n\t  & \\hfill \\textcolor{red}{\\frac{\\text{Worst-case:}}{\\uncover<4->{\\text{non-increasingly sorted}}}} \\\\[10pt]\n\t  % |P| & = (\\uncover<3->{\\min: 1,} & \\uncover<5->{\\max: n}); \\\\[6pt]\n\t  |C| & = (\\uncover<3->{\\min: n-1,} & \\uncover<5->{\\max: \\frac{n^2 - n}{2}}); \\\\[6pt]\n\t  |S| & = (\\uncover<3->{\\min: 0,} & \\uncover<5->{\\max: \\frac{n^2 - n}{2}}).\n\t\\end{array}\n  \\end{displaymath}\n\n  \\uncover<4->{\n\t\\[\n\t  \\text{\\textcolor{red}{Worst-case: }} n\\quad n-1\\; \\cdots\\; 1\n\t\\]}\n  % \\uncover<5->{\n  %   \\[\n  %     \\#\\text{inversions} = (n-1) + (n-2) + \\cdots + 1 = \\frac{n^2 - n}{2}.\n  %   \\]\n  % }\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{$|S|:$ \\#Swaps (Average-case Analysis)\\footnote{An exercise: what is $|C|$ (\\#Comparisions) in average?}}\n  Assumptions on inputs:\n  \\begin{enumerate}\n\t\\item The input is a random permutation (``average input'')\n\t  \\pause\n\t\\item All numbers are different (for simplicity)\n  \\end{enumerate}\n\n  \\pause\n\n  \\begin{center}\n\t\\fbox{\\textcolor{blue}{Lemma:} $\\textsc{Swap}(a_{i}, a_{i+1}) \\implies -1 \\text{ inversion}$}\n  \\end{center}\n\n  \\[\n\t|S| = \\textcolor{red}{\\mathbb{E}(\\#\\text{inversions})}\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{$|S|:$ \\#Swaps (Average Analysis)}\n  \\begin{align*}\n\t% \\onslide<1->{I_{ij} &: \\text{indicator of inversion } (a_i, a_j) \\\\[3pt]}\n\t\\onslide<1->{I_{ij} &= \\left\\{\\begin{array}{lr}\n\t\t1 & (a_i, a_j) \\text{ is an inversion}\\\\\n\t\t0 & \\text{ o.w.} \n\t  \\end{array}\\right.\\\\[3pt]}\n\t\\onslide<1->{X &= \\sum_{1 \\le i < n} \\sum_{i < j \\le n} I_{ij}\\qquad\\quad (\\text{\\#inversions}) \\\\[3pt]}\n\t\\onslide<2->{\\mathbb{E}(X) &= \\mathbb{E}(\\sum_{i} \\sum_{j>i} I_{ij})}\\onslide<3->{= \\sum_{i} \\sum_{j>i} \\mathbb{E}(I_{ij})\\quad (\\text{linearity of expectation})\\\\[3pt]}\n\t\\onslide<4->{\\mathbb{E}(I_{ij}) &= \\mathbb{P}\\set{I_{ij} = 1}} \\onslide<5->{{}={} \\frac{1}{2}\\qquad (a_i \\neq a_j; \\text{half: } a_i < a_j, \\text{half: } a_i > a_j) \\\\[3pt]}\n\t\\onslide<6->{\\mathbb{E}(X) &= \\sum_{i} \\sum_{i<j} \\frac{1}{2} = \\binom{n}{2} \\cdot \\frac{1}{2} = \\frac{n(n-1)}{4} 
\\textcolor{red}{{}={} O(n^2)}}\n  \\end{align*}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Faster Algorithms}\n  \\begin{quote}\n\tIt took a good deal of work to analyze the bubble sort;\n\tand although [\\dots], \n\tthe results are disappointing \n\tsince they tell us that \\textcolor{red}{the bubble sort isn't really very good at all}.\\\\\n\t\\hfill --- Donald E. Knuth\n  \\end{quote}\n\n  \\pause\n\n  \\begin{center}\n\tfaster: $O(n^2) \\to O(n \\lg n)$?\\\\[5pt] \\pause\n\t\\dots and faster: $O(n \\lg n) \\to O(n)$?\n  \\end{center}\n\n  \\pause\n  \\fignocaption{width = 0.18\\textwidth}{figs/stay-tuned.jpg}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}[noframenumbering]\n  \\fignocaption{width = 0.20\\textwidth}{figs/qa.png}\n  \\vspace{-0.8cm}\n  \\begin{center}\n    \\textcolor{blue}{\\bf \\large hengxin0912@gmail.com}\n  \\end{center}\n  \\vspace{-0.5cm}\n  \\fignocaption{width = 0.50\\textwidth}{figs/thankyou.jpg}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "c14fb5a9314ff3602f0f937439e9e5ba02eee6dd", "size": 4105, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/bubble-sort-analysis.tex", "max_stars_repo_name": "hengxin/algorithm-lectures", "max_stars_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-04-20T06:57:57.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-12T19:07:16.000Z", "max_issues_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/bubble-sort-analysis.tex", "max_issues_repo_name": "hengxin/algorithm-lectures", "max_issues_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/bubble-sort-analysis.tex", "max_forks_repo_name": "hengxin/algorithm-lectures", "max_forks_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-12T10:36:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-12T10:36:11.000Z", "avg_line_length": 33.3739837398, "max_line_length": 174, "alphanum_fraction": 0.5758830694, "num_tokens": 1648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834732, "lm_q2_score": 0.8056321913146127, "lm_q1q2_score": 0.5763686014420958}}
{"text": "\\section{Path Tracer}\n\\label{path_tracer}\nThis section presents a closer look at the general path tracing process and highlights key implementation details of the evaluated CPU path tracer.\n\\subsection{Pipeline}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=400pt]{images/ray_trcing_pipeline.pdf}\n    \\caption{Visualization of path tracing process as a pipeline.}\n    \\label{fig:ray_pipeline}\n\\end{figure}\nThe graphics rendering process if often organized into a graphics pipeline\\cite{sugerman09gramps}, so I generalized the path tracing process in a similar fashion (figure \\ref{fig:ray_pipeline}). In the first stage of that pipeline the acceleration structure is created, which will be discussed further in section \\ref{phr}. Given a static scene, this acceleration structure can be reused for multiple frames. Dynamic scenes require additional care though, as the acceleration structure either needs to be refit or rebuild on scene changes. The implemented uses a hybrid approach further elaborated in section \\ref{phr_in_interactive}. The next stages are responsible for ray traversal and intersection testing. First, primary rays are generated from the camera's origin towards each pixel. Then, each ray has to traverse the acceleration structure to find the closest hit. Depending on whether or not this intersection has been found, either the closest hit shader, or the miss shader is called. Each shader can contribute to the color of the pixel, but only the closest hit shader spawns secondary rays. Note that my implementation uses implicit light sources instead of explicitly casting shadow rays. Secondary rays are fed back to the ray traversal stage and after reaching the maximum depth, the closest hit shader exits without creating any additional rays. The steps in these stages are embarrassingly parallel, however, because every ray is independent of the others, \\acrshort{simd} instructions cannot be utilized. Consequently, each pixel is processed concurrently using multithreading.\n\\subsection{Intersection Tests}\nA ray can be expressed using the parametric form \n\\begin{equation} \\label{eq:1}\n    R(t)=O+t\\textbf{d}\n\\end{equation}\nwhere point $O$ defines the origin of the ray and vector \\textbf{d} defines the direction along which the ray travels in a straight line. The path tracer supports two types of primitives, spheres and triangles. \n\\subsubsection{Sphere}\nSpheres are commonly used in ray tracing because of their simple and efficient intersection algorithm\\cite{haines2019}. Given a sphere with center $C$ and radius $r$, all points $P$ at the surface of the sphere can be described by the equation:\n\\begin{equation} \\label{eq:2}\n    (P-G)\\cdot(P-G)=r^2\n\\end{equation}\nBy substituting point $P$ in equation \\ref{eq:2} by the equation of a ray (equation \\ref{eq:1}) the intersection points between sphere and ray can be calculated. Simplifying the resulting equation leads to \n\\[\n    (\\textbf{d}\\cdot\\textbf{d})t^2+2(\\textbf{f}\\cdot\\textbf{d})t+\\textbf{f}\\cdot\\textbf{f}-r^2 = at^2+b^t+c = 0\n\\]\nwhich is a quadratic function. 
Consequently, it can be solved using \n\\[\n    t_{0,1}=\\frac{-b\\pm\\sqrt{b^2-4ac}}{2a}\n\\]\nwith discriminant $\\Delta=b^2-4ac$.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=150pt]{images/ray_sphere_intersection.pdf}\n    \\caption{Ray-sphere intersection test showing all three cases for discriminant $\\Delta$.}\n    \\label{fig:sphere_intersec}\n\\end{figure}\nAs seen in figure \\ref{fig:sphere_intersec}, if $\\Delta<0$, the ray misses the sphere. If $\\Delta=0$, the ray touches the sphere in one point, and otherwise there are two values for $t$ that correspond to different intersection points. Inserting these $t$-values into equation \\ref{eq:1} allows the calculation of the intersection points $P_{0,1} = R(t_{0,1}) = O+t_{0,1}\\textbf{d}$. The normal at a given intersection point is simply the vector from the sphere's center $C$ to the intersection point $P$, i.e.\\ the normalized normal is calculated using $n=\\frac{P-C}{||P-C||}$.\n\n\\subsubsection{Triangle}\nTriangles have a long tradition in computer graphics, mainly because most geometry can be represented, or at least approximated, using them. Additionally, with $3$ vertices they can never be non-planar. While many ray-triangle intersection algorithms exist, the M{\\\"o}ller-Trumbore algorithm\\cite{moeller97triangle} is still considered to be relatively fast and is often used as a comparison for other algorithms. Consequently, it is also used in this thesis. \n\nUsing barycentric coordinates, a point $P$ on a triangle can be parameterized and expressed through two scalar values:\n\\[\n    P(u,v) = (1-u-v)V_0 + uV_1 + vV_2\n\\]\nwhere $V_0$, $V_1$ and $V_2$ are the vertices of the given triangle and $(u,v)$ are the barycentric coordinates with $u\\geq0$, $v\\geq0$ and $u+v\\leq1$. Barycentric coordinates do not change when the triangle is transformed, which the M{\\\"o}ller-Trumbore algorithm exploits. Geometrically speaking, the algorithm translates the triangle to the origin and transforms it into a unit triangle in $y$ and $z$, with the ray direction aligned with $x$. This can be expressed as a system of linear equations:\n\\[\n    \\left[ \\begin{array}{rrr}-\\textbf{d}&(V_1-V_0)&(V_2-V_0)\\end{array} \\right]\n    \\left[ \\begin{array}{r}t\\\\u\\\\v\\end{array} \\right] = O - V_0\n\\]\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=400pt]{images/ray_triangle_intersec.pdf}\n    \\caption{Geometric representation of the transformations utilized in the M{\\\"o}ller-Trumbore algorithm.}\n    \\label{fig:triangle_intersec}\n\\end{figure}\nUsing Cramer's rule\\cite{brunetti14cramersRule}, this system can be solved to obtain the distance $t$ from the ray origin to the intersection point and the corresponding barycentric coordinates $(u,v)$. Distance $t$ is again plugged into the ray equation to find the world coordinates of the intersection point $P$, and the barycentric coordinates are used to interpolate the normal at $P$, given the vertex normals at $V_0$, $V_1$ and $V_2$.\n\n\\subsubsection{Axis-aligned bounding box}\nFinally, bounding volume hierarchy traversal depends on a ray-bounding box intersection test. Axis-aligned bounding boxes are represented by the bounds $B^{min}$ and $B^{max}$, which define two planes for each dimension. 
The intersection distances $t$ between a given ray $R(t) = O+t\\textbf{d}$ and a dimension $k$ can be computed fairly simply:\n\\[\n    t^{min}_k = \\frac{B_{k}^{min} - O_{k}}{\\textbf{d}_{k}}\n\\]\n\\[\n    t^{max}_k = \\frac{B_{k}^{max} - O_{k}}{\\textbf{d}_{k}}    \n\\]\nWhether or not a ray intersects a box can be determined by a few simple value comparisons, as seen in figure \\ref{fig:aabb_intersec}. The algorithm is optimized further by pre-computing the sign and inverse direction for all rays and adding an early exit once it is clear the ray missed. \n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=400pt]{images/aabb_intersec.pdf}\n    \\caption{2D example of ray-AABB intersection. The top ray misses, while the bottom one hits the box.}\n    \\label{fig:aabb_intersec}\n\\end{figure}\n\\clearpage\n\\subsection{Materials}\nEach primitive has an associated material. A material has an albedo specifying how reflective it is, i.e. how much color is contributed to the traced path, and an amount of emitted light. The scatter function differs between materials and is used to generate secondary rays. Note that it is also possible for materials to not scatter at all, which is used for light sources. \n\nDiffuse materials scatter rays in random directions. To achieve true Lambertian reflectance\\cite{weik01lambert}, random points are picked on the surface of a unit sphere and added to the normal at the intersection point. This results in a distribution of $\\cos(\\Phi)$, where $\\Phi$ is the angle from the normal. \n\nA smooth reflective material\\cite{Greve2004ReflectionsAR} does not scatter rays in a random direction, so the resulting rays point purely in the reflection direction $\\textbf{d}_r$ (figure \\ref{fig:reflect_refract}):\n\\[\n    \\textbf{d}_r = \\textbf{d} - 2\\textbf{n}(\\textbf{d}\\cdot \\textbf{n})\n\\]\nwhere $\\textbf{n}$ is the normalized normal at the intersection point and $\\textbf{d}$ is the direction of the incoming ray. For diffuse reflection, a random vector is added to the reflected ray, similar to the above-mentioned diffuse scattering.\n\nRefractive materials\\cite{Greve2004ReflectionsAR} utilize Snell's law to represent dielectrics like glass objects. Snell's law can be written as \n\\[\n    \\sin \\theta' = \\frac{\\eta}{\\eta'}\\sin \\theta\n\\]\nwhere $\\theta$ and $\\theta'$ are the angles from the normal and $\\eta$ and $\\eta'$ are the refractive indices. The refracted direction $\\textbf{d}_t$, as shown in figure \\ref{fig:reflect_refract}, can be split up into a parallel and a perpendicular part:\n\\[\n    \\textbf{d}_t=\\textbf{d}_{\\bot}'+\\textbf{d}_{\\parallel}'\n\\]\nSolving for $\\textbf{d}_{\\bot}'$ and $\\textbf{d}_{\\parallel}'$ leads to:\n\\[\n     \\textbf{d}_{\\bot}' = \\frac{\\eta}{\\eta'}(\\textbf{d}+(-\\textbf{d}\\cdot\\textbf{n}) \\textbf{n})\n\\]\n\\[\n    \\textbf{d}_{\\parallel}' = - \\sqrt{1-|\\textbf{d}_{\\bot}'|^{2}}\\,\\textbf{n}\n\\]\nAdding both parts together leads to the refracted direction $\\textbf{d}_t$.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=200pt]{images/reflection_refraction.pdf}\n    \\caption{Figure showing how a ray's direction (green) is reflected (red) and refracted (blue).}\n    \\label{fig:reflect_refract}\n\\end{figure}\n\\subsection{Traversal}\n\\label{traversal}\nTo find the nearest intersection, bounding volume hierarchies are traversed in a top-down manner. Usually, this is done using a stack\\cite{meister21survey} to store nodes that might contain an intersection. 
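\n\nBefore walking through the traversal loop, the following is a compact sketch of the slab test described above (written in Python purely for illustration; the variable names and the use of a precomputed inverse direction are assumptions about the implementation, not quoted thesis code):\n\\begin{verbatim}\ndef hit_aabb(bmin, bmax, origin, inv_dir, t_max):\n    # Slab test: intersect the ray with the two planes of each\n    # dimension and shrink the [t_near, t_far] interval.\n    t_near, t_far = 0.0, t_max\n    for k in range(3):\n        t0 = (bmin[k] - origin[k]) * inv_dir[k]\n        t1 = (bmax[k] - origin[k]) * inv_dir[k]\n        if t0 > t1:              # handle negative direction signs\n            t0, t1 = t1, t0\n        t_near = max(t_near, t0)\n        t_far = min(t_far, t1)\n        if t_near > t_far:       # early exit: the slabs do not overlap\n            return False\n    return True\n\\end{verbatim}\n\n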
First, the root is pushed to the stack. While the stack is not empty, nodes are popped and checked for an intersection with their bounding box. In case the bounding volume is hit, either the nodes children are pushed onto the stack in the case of an interior node, or all primitives are tested when dealing with a leaf node. If no intersection with the bounding box is found or the distance to the intersection is bigger than previously found intersections, then the node can be discarded. Once the stack is empty, the closest intersection found is returned.\n\nHowever, this method proved to be less efficient than using a recursive procedure as described in algorithm \\ref{alg:traversal}. Given that the implementation is written for the CPU, this implicitly uses the CPU stack, which results in an equivalent execution as described above, albeit without explicitly managing a stack data structure.\n\\begin{algorithm}\n\\caption{Pseudocode of recursive BVH traversal}\n\\label{alg:traversal}\n    \\tcp{Start traversal at root}\n    \n    \\SetKwFunction{FTraverse}{Traverse}\n    \\FTraverse{root}\\;\n    \\;\n    \\SetKwProg{Fn}{Function}{:}{}\n    \\Fn{\\FTraverse{node}}{\n        \\If{node.aabb not intersected} {\n            \\Return\n        }\n        \\If{node is leaf} {\n            \\ForEach{primitive}{\n                test for ray-primitive intersection\n            }\n        }\n        \\Else{\n            \\ForEach{child}{\n                \\FTraverse{child}\n            }\n        }\n    }\n\\end{algorithm}\n\\cleardoublepage", "meta": {"hexsha": "62ccc156f8dc5cff199bbff91989becbca71f3a4", "size": 11138, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/ch04_1-pathtracer.tex", "max_stars_repo_name": "ChSchmidt99/bachelorthesis", "max_stars_repo_head_hexsha": "4c427317c0334186eea8c587d56d0c176a4b04fb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/ch04_1-pathtracer.tex", "max_issues_repo_name": "ChSchmidt99/bachelorthesis", "max_issues_repo_head_hexsha": "4c427317c0334186eea8c587d56d0c176a4b04fb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/ch04_1-pathtracer.tex", "max_forks_repo_name": "ChSchmidt99/bachelorthesis", "max_forks_repo_head_hexsha": "4c427317c0334186eea8c587d56d0c176a4b04fb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.1294964029, "max_line_length": 1597, "alphanum_fraction": 0.7507631532, "num_tokens": 2865, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5763685997727559}}
{"text": "%!TEX root = ../main.tex\n\n%=======================================================================  numbers\n\\section{Numbers}\n\\label{sec:numbers}\n\n\tIn the beginning, we must define the main players in the world of math: numbers.\n\n\t\\subsection{Definitions}\n\t\\label{numbers:definitions}\n\t\t\n\t\tNumbers are the basic objects we use to count, measure, quantify, and calculate things.\n\t\tMathematicians like to classify the different kinds of number-like objects into categories called \\emph{sets}:\t\t\t\t\t\\index{set}\n\t\t\\begin{itemize}\n\t\t\t\\item  The natural numbers: $\\mathbb{N} = \\{0,1,2,3,4,5,6,7, \\ldots \\, \\}$\n\t\t\t\\item  The integers: $\\mathbb{Z} = \\{\\ldots, -3,-2,-1,0,1,2,3 , \\ldots  \\, \\}$\n\t\t\t\\item  The rational numbers: $\\mathbb{Q} = \\{\\frac{5}{3}, \\frac{22}{7}, 1.5, 0.125,  -7, \\, \\ldots \\, \\}$\n\t\t\t\\item  The real numbers: $\\mathbb{R} = \\{-1,0,1, \\sqrt{2}, e,\\pi, \\;  4.94\\ldots, \\; \\ldots \\, \\}$\n\t\t\t\\item  The complex numbers: $\\mathbb{C} = \\{ -1, 0, 1, i,  1+i, 2+3i,  \\ldots \\, \\}$\n\t\t\\end{itemize}\n\t\t%\n\t\tThese categories of numbers should be somewhat familiar to you.\n\t\tThink of them as neat classification labels for everything that you would normally call a number. \n\t \tEach group in the above list is a \\emph{set}.\n\t\tA set is a collection of items of the same kind. \n\t\tEach collection has a name and a precise definition for which items belong in that collection.\n\t\tNote also that each of the sets in the list contains all the sets above it,\t\t\t\t\t\t\t\t\t\t\t\t\\index{set!subset}\n\t\tas illustrated in Figure~\\ref{fig:nested_sets}.\n\t\tFor now, we don't need to go into the details of sets and set notation,\n\t\tbut we do need to be aware of the different sets of numbers.\n\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=0.87\\textwidth]{figures/math/nested_sets.pdf}\n\t\t\t\\caption{\tAn illustration of the nested containment structure of the different number sets.\n\t\t\t\t\tThe set of natural numbers is contained in the set of integers,\n\t\t\t\t\twhich in turn is contained in the set of rational numbers.\n\t\t\t\t\tThe set of rational numbers is contained in the set of real numbers,\n\t\t\t\t\twhich is contained in the set of complex numbers.}\n\t\t\t\\label{fig:nested_sets}\n\t\t\\end{figure}\n\n\t\t\\noindent\n\t\tWhy do we need so many different sets of numbers?\n\t\tEach set of numbers is associated with more and more advanced mathematical problems.\n\n\t\tThe simplest numbers are the natural numbers $\\mathbb{N}$,\n\t\twhich are sufficient for all your math needs if all you're going to do is \\emph{count} things.\n\t\tHow many goats? Five goats here and six goats there so the total is 11 goats. 
\n\t\tThe sum of any two natural numbers is also a natural number.\n\n\t\tAs soon as you start using \\emph{subtraction} (the inverse operation of addition),\n\t\tyou start running into negative numbers,\n\t\twhich are numbers outside the set of natural numbers.\n\t\tIf the only mathematical operations you will ever use are \\emph{addition} and \\emph{subtraction},\n\t\tthen the set of integers $\\mathbb{Z} = \\{ \\ldots, -2, -1, 0, 1, 2, \\ldots \\}$ will be sufficient.\n\t\tThink about it.\n\t\tAny integer plus or minus any other integer is still an integer.\n\n\t\tYou can do a lot of interesting math with integers.\n\t\tThere is an entire field in math called \\emph{number theory} that deals with integers.\n\t\tHowever, to restrict yourself solely to integers is somewhat limiting---a rotisserie\n\t\tmenu that offers $\\frac{1}{2}$ of a chicken would be totally confusing.\n\n\t\tIf you want to use division in your mathematical calculations,\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\index{rational}\n\t\tyou'll need the rationals~$\\mathbb{Q}$.\n\t\tThe set of rational numbers corresponds to all numbers that can be expressed as \\emph{fractions} of the form $\\frac{m}{n}$\t\t\\index{fraction}\n\t\twhere $m$ and $n$ are integers, and $n \\neq 0$.\n\t\tYou can add, subtract, multiply, and divide rational numbers, and the result will always be a rational number.\n\t\tHowever, even the rationals are not enough for all of math!\n\n\t\tIn geometry, we can obtain \\emph{irrational} quantities like $\\sqrt{2}$ (the diagonal of a square with side 1)\n\t\tand $\\pi$ (the ratio between a circle's circumference and its diameter).\n\t\tThere are no integers $x$ and $y$ such that $\\sqrt{2}=\\frac{x}{y}$,\n\t\ttherefore we say that $\\sqrt{2}$ is \\emph{irrational} (not in the set $\\mathbb{Q}$).\n\t\tAn irrational number has an infinitely long decimal expansion that doesn't repeat.\n\t\tFor example, $\\pi = 3.141592653589793\\ldots$ where the dots indicate\n\t\tthat the decimal expansion of $\\pi$ continues all the way to infinity.\n\n\t\tCombining the irrational numbers with the rationals gives us all the useful numbers,\n\t\twhich we call the set of real numbers $\\mathbb{R}$.\n\t\tThe set $\\mathbb{R}$ contains the integers,\n\t\tthe rational numbers $\\mathbb{Q}$,\n\t\tas well as irrational numbers like $\\sqrt{2}=1.4142135\\ldots$.\n\t\tBy using the reals you can compute pretty much anything you want.\n\t\tFrom here on in the text, when I say \\emph{number},\n\t\tI mean an element of the set of real numbers $\\mathbb{R}$.\n\n\t\tThe only thing you can't do with the reals is to take the square root of a negative number---you \t\t\t\t\t\t\t\\index{complex number}\n\t\tneed the complex numbers $\\mathbb{C}$ for that.\n\t\tWe defer the discussion on $\\mathbb{C}$ until the end of Chapter~\\ref{chapter:vectors}.\n\n\n\t\\subsection{Exercises}\n\t\\label{numbers:exercises}\n\t\n\t\t\\input{problems/chapter1_numbers_exercises.tex}\n\t\n\n", "meta": {"hexsha": "3dcb3c76bf0a340bfa221c8db92c7d91b86b0266", "size": 5333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sources/original/01_math/02.numbers.tex", "max_stars_repo_name": "minireference/sample-book", "max_stars_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2020-10-19T21:21:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T16:42:13.000Z", "max_issues_repo_path": "sources/original/01_math/02.numbers.tex", 
"max_issues_repo_name": "minireference/sample-book", "max_issues_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sources/original/01_math/02.numbers.tex", "max_forks_repo_name": "minireference/sample-book", "max_forks_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-12T19:03:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-12T19:03:04.000Z", "avg_line_length": 53.33, "max_line_length": 142, "alphanum_fraction": 0.7050440653, "num_tokens": 1524, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484144, "lm_q2_score": 0.8056321866478979, "lm_q1q2_score": 0.5763685883270221}}
{"text": "%!TEX root = ../notes.tex\n\n\\section{February 7, 2022}\n\\subsection{Orders mod \\texorpdfstring{$p$}{p}}\n\\recall If $a\\not\\equiv 0\\pmod{p}$, then we have $a^{p-1}\\equiv 1\\pmod{p}$, which was \\cref{theorem:flt}, Fermat's Little Theorem.\n\n\\begin{definition}[Order of $a$ mod $p$]\n    The \\ul{order} of $a \\pmod{p}$ is the smallest positive $k$ such that\n    \\begin{equation*}\n        a^k\\equiv 1\\pmod{p}\n    \\end{equation*}\n    This is not to be confused with \\cref{defn:order-of-prime-factor} which is the power of $p$ in the prime factorization of $a$. This is the order of $a$ in the multiplicative group $\\ZZ/p\\ZZ$.\n\\end{definition}\n\n\\begin{proposition}\n    let $a\\in (\\ZZ/p\\ZZ)^\\times$ be of order $k$. If $a^n\\equiv 1\\pmod{p}$, then $k\\mid n$.\n\n    In particular, $k\\mid p-1$ by \\cref{theorem:flt}, Fermat's Little Theorem.\n\\end{proposition}\n\\begin{proof}\n    We write $n = k\\cdot q + r$ such that $0\\leq r < k$ ($\\ZZ$ is a Euclidean domain)\n    \\begin{align*}\n        1 & \\equiv a^n\\equiv a^{kq+r} \\equiv (a^k)^q\\cdot a^r\\equiv a^r\n    \\end{align*}\n    Since $k$ is the minimal positive number such that $a^k\\equiv 1$, then this forces $r = 0$. Then $k\\mid n$.\n\\end{proof}\n\n\\begin{theorem}[Primitive Root Theorem]\n    Let $p$ be prime. Then there is a $g$ such that\n    \\[(\\ZZ/p\\ZZ)^\\times = \\{1, g, g^2, \\dots, g^{p-2}\\}.\\]\n    We call $g$ a \\ul{primitive root} or \\ul{generator}.\n\\end{theorem}\n\\begin{example}\n    $p = 5$, $(\\ZZ/5\\ZZ)^\\times = \\{1, 2, 3, 4\\}$.\n\n    1? No: $\\{1, 1^2, 1^3\\} = \\{1\\}$\n\n    2? Yes: $\\{1, 2, 2^2, 2^3\\} = \\{1, 2, 4, 3\\}$\n\n    3? Yes: $\\{1, 3, 3^2, 3^3\\} = \\{1, 3, 4, 2\\}$\n\n    4? No: $\\{1, 4, 4^2, 4^3\\} = \\{1, 4\\}$\n\\end{example}\n\n\\begin{remark}\n    In general, the number of primitive roots is $\\varphi(p-1)$. (Take the group of exponents and solve for power).\n\\end{remark}\n\n\\subsection{Discrete Logarithm Problem}\n\nWe go on to discuss a fundamental property about exponentiation mod $p$. Let's fix some $p$ and primitive root $g$.\n\nGiven some $a$, we can compute $g^a$ efficiently\n\\begin{align*}\n    a & \\longrightarrow g^a \\qquad \\text{This is \\ul{easy}}             \\\\\n    a & \\overset{?}{\\longleftarrow} g^a \\qquad \\text{This is \\ul{hard}}\n\\end{align*}\n\nNote that\n\\[g^a\\equiv g^b \\Leftrightarrow g^{a-b}\\equiv 1 \\Leftrightarrow p-1\\mid a-b\\]\nso $a$ is determined mod $p-1$.\n\n\\begin{definition}[Discrete Logarithm]\n    The \\ul{discrete logarithm} of $g^a$ is $a$.\n\\end{definition}\n\nThis is known as the ``Discrete Logarithm Problem'' (DLP), which is concerned with how we can compute discrete logarithms.\n\nThis idea is fundamental to computer security! The real-world analogue is if you go to the bank after hours and deposit a check or cash into the deposit slot. It is relatively easy for one to deposit an item but hard for someone who doesn't work at the bank\\footnote{Say, possessing a \\emph{key} or \\emph{password}.} to access that item.\n\n\\subsection{Cryptographic Systems}\n\\subsubsection{Symmetric Cryptography}\nWe have $3$ people, \\emph{Alice}, \\emph{Bob}, and \\emph{Eve}.\n\nBob has a message $m$ which he wants to send to Alice. However, everything he sends to Alice can (and is) intercepted by Eve. He wants to encrypt this message $m$ he sends to Alice.\n\nWe say that a message $m\\in \\mathcal{M}$ in the space of possible messages. 
We have secret key $k\\in \\mathcal{K}$ that can encrypt $m$ into ciphertext $c\\in \\mathcal{C}$ in the space of ciphertexts.\n\\[\\left\\{\\begin{array}{c}\n        \\text{Message $m\\in \\mathcal{M}$} \\\\\n        \\text{Secret key $k\\in\\mathcal{K}$}\n    \\end{array}\\right\\} \\rightsquigarrow \\text{Ciphertext $c\\in\\mathcal{C}$} \\longrightarrow \\text{Alice} \\rightsquigarrow m\\]\n\nIf we fix $k$, we have\n\\begin{align*}\n    e_k(m) & = e(k, m) \\\\\n    d_k(c) & = d(k, c)\n\\end{align*}\nbe our encryption and decryption functions. We usually take $m$ to be a number, and we can encode letters to numbers (0-255) using ASCII.\n\nIn Python, this is implemented using functions like \\textsf{ord} (character to encoding) and \\textsf{chr} (encoding to character).\n\nWe'll just talk about transmitting numbers since we can convert freely between them and text.\n\n\\textbf{Q: What do we want out of our cryptosystem?}\n\\begin{enumerate}\n    \\setcounter{enumi}{-1}\n    \\item The system is secure even if Eve knows the design. (Assume Eve knows the encryption and decryption functions, but so long as she doesn't know the key).\n    \\item $e$, the encryption function, is easy to compute.\n    \\item $d$, the decryption function, is similarly easy to compute.\n    \\item Given $c_1, c_2, \\dots$ a collection of ciphertexts, encrypted with the \\emph{same} key $k$, it's hard to compute any message $m_i$.\n    \\item Given $(m_1, c_1), \\dots (m_n, c_n)$ some collection of messages and their encryptions, it remains difficult to compute $d_k(c)$ for $c\\not\\in \\{c_1, \\dots, c_n\\}$. This is called a \\textit{``\\ul{chosen plaintext attack}''}. \n\\end{enumerate}", "meta": {"hexsha": "3847a65aa53f79881271da5e81870da19d8347d8", "size": 4894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-02-07.tex", "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_issues_repo_path": "lectures/2022-02-07.tex", "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-02-07.tex", "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9803921569, "max_line_length": 337, "alphanum_fraction": 0.6681651001, "num_tokens": 1670, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.805632181981183, "lm_q1q2_score": 0.5763685849883422}}
{"text": "\\section{Computing moments from the master equation}\\label{supp_moments}\n\nIn this section we will compute the moment equations for the distribution  $P(m,\np)$. Without lost of generality here we will focus on the three-state\nregulated promoter. The computation of the moments for the two-state promoter\nfollows the exact same procedure, changing only the definition of the matrices \nin the master equation.\n\n\\subsection{Computing moments of a distribution}\n\n(Note: The Python code used for the calculations presented in this section can\nbe found in the\n\\href{https://www.rpgroup.caltech.edu//chann_cap/software/moment_dynamics_system.html}{following\nlink} as an annotated Jupyter notebook)\n\nTo compute any moment of our chemical master equation (\\eref{eq_cme_matrix})\nlet us define a vector\n\\begin{equation}\n  \\ee{\\bb{m^x p^y}} \\equiv (\\ee{m^x p^y}_A, \\ee{m^x p^y}_I, \\ee{m^x p^y}_R)^T,\n\\end{equation}\nwhere $\\ee{m^x p^y}_S$ is the expected value of $m^x p^y$ in state $S \\in \\{A,\nI, R\\}$ with $x, y \\in \\mathbb{N}$. In other words, just as we defined the\nvector $\\PP(m, p)$, here we define a vector to collect the expected value of\neach of the promoter states. By definition, these moments $\\ee{m^x p^y}_S$ are\ncomputed as\n\\begin{equation}\n  \\ee{m^x p^y}_S \\equiv \\sum_{m=0}^\\infty \\sum_{p=0}^\\infty m^x p^y P_S(m, p).\n  \\label{seq_mom_def}\n\\end{equation}\nTo simplify the notation, let $\\sum_x \\equiv \\sum_{x=0}^\\infty$. Since we are\nworking with a system of three ODEs, one for each state, let us define the\nfollowing operation:\n\\begin{equation}\n  \\ee{\\bb{m^x p^y}} =\n  \\smp m^x p^y \\PP(m, p) \\equiv\n  \\begin{bmatrix}\n    \\smp m^x p^y P_A(m, p)\\\\\n    \\smp m^x p^y P_I(m, p)\\\\\n    \\smp m^x p^y P_R(m, p)\\\\\n  \\end{bmatrix}.\n\\end{equation}\n\nWith this in hand we can then apply this sum over $m$ and $p$ to\n\\eref{eq_cme_matrix}. For the left-hand side we have\n\\begin{equation}\n  \\smp m^x p^y \\dt{\\PP(m, p)} = \\dt{}\\left[ \\smp m^x p^y \\PP(m, p) \\right],\n  \\label{seq_sum_mom}\n\\end{equation}\nwhere we made use of the linearity property of the derivative to switch the\norder between the sum and the derivative. Notice that the right-hand side of\n\\eref{seq_sum_mom} contains the definition of a moment from \\eref{seq_mom_def}.\nThat means that we can rewrite it as\n\\begin{equation}\n  \\dt{}\\left[ \\smp m^x p^y \\PP(m, p) \\right] = \\dt{\\bb{\\ee{m^x p^y}}}.\n\\end{equation}\n\nDistributing the sum on the right-hand side of \\eref{eq_cme_matrix} gives\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{\\bb{\\ee{m^x p^y}}} &=\n    \\Km \\smp m^x p^y \\PP(m, p)\\\\\n    &- \\Rm \\smp m^x p^y \\PP(m, p) + \\Rm \\smp m^x p^y \\PP(m-1, p)\\\\\n    &- \\Gm \\smp (m) m^x p^y \\PP(m, p) + \\Gm \\smp (m + 1) m^x p^y \\PP(m + 1, p)\\\\\n    &- \\Rp \\smp (m) m^x p^y \\PP(m, p) + \\Rp \\smp (m) m^x p^y \\PP(m, p - 1)\\\\\n    &- \\Gp \\smp (p) m^x p^y \\PP(m, p) + \\Gp \\smp (p + 1) m^x p^y \\PP(m, p + 1).\n  \\end{aligned}\n  \\label{seq_master_sum}\n\\end{equation}\n\nLet's look at each term on the right-hand side individually. For the terms in\n\\eref{seq_master_sum} involving $\\PP(m, p)$ we can again use \\eref{seq_mom_def}\nto rewrite them in a more compact form. 
This means that we can rewrite the\nstate transition term as\n\\begin{equation}\n  \\Km \\smp m^x p^y \\PP(m, p) = \\Km \\bb{\\ee{m^x p^y}}.\n\\end{equation}\nThe mRNA production term involving $\\PP(m, p)$ can be rewritten as\n\\begin{equation}\n  \\Rm \\smp m^x p^y \\PP(m, p) = \\Rm \\bb{\\ee{m^x p^y}}.\n\\end{equation}\nIn the same way the mRNA degradation term gives\n\\begin{equation}\n  \\Gm \\smp (m) m^x p^y \\PP(m, p) = \\Gm \\bb{\\ee{m^{(x + 1)} p^y}}.\n\\end{equation}\nFor the protein production and degradation terms involving $\\PP(m, p)$ we have\n\\begin{equation}\n  \\Rp \\smp (m) m^x p^y \\PP(m, p) = \\Rp \\bb{\\ee{m^{(x + 1)} p^y}},\n\\end{equation}\nand\n\\begin{equation}\n  \\Gp \\smp (p) m^x p^y \\PP(m, p) = \\Gp \\bb{\\ee{m^x p^{(y + 1)}}},\n\\end{equation}\nrespectively.\n\nFor the sum terms in \\eref{seq_master_sum} involving $\\PP(m \\pm 1, p)$ or\n$\\PP(m, p \\pm 1)$ we can reindex the sum to work around this mismatch. To be\nmore specific, let's again look at each term case by case. For the mRNA\nproduction term involving $\\PP(m-1, p)$ we define $m' \\equiv m - 1$. Using this\nwe write\n\\begin{equation}\n  \\Rm \\smp m^x p^y \\PP(m-1, p) =\n  \\Rm \\sum_{m' = -1}^\\infty \\sum_p (m' + 1)^x p^y \\PP(m', p).\n\\end{equation}\nSince having negative numbers of mRNA or protein doesn't make physical sense\nwe have that $\\PP(-1, p) = 0$. Therefore we can rewrite the sum starting from 0\nrather than from -1, obtaining\n\\begin{equation}\n  \\Rm \\sum_{m' = -1}^\\infty \\sum_p (m' + 1)^x p^y \\PP(m', p) =\n  \\Rm \\sum_{m'=0}^\\infty \\sum_p (m' + 1)^x p^y \\PP(m', p).\n  \\label{seq_reindex}\n\\end{equation}\nRecall that our distribution $\\PP(m, p)$ takes $m$ and $p$ as numerical inputs \nand returns a probability associated with such a molecule count. Nevertheless,\n$m$ and $p$ themselves are dimensionless quantities that serve as indices of how\nmany molecules are in the cell. The distribution is the same whether the\nvariable is called $m$ or $m'$; for a specific number, let's say $m = 5$, or $m'\n= 5$, $\\PP(5, p)$ will return the same result. This means that the variable name\nis arbitrary, and the right-hand side of\n\\eref{seq_reindex} can be written as\n\\begin{equation}\n  \\Rm \\sum_{m'=0}^\\infty \\sum_p (m' + 1)^x p^y \\PP(m', p) =\n  \\Rm \\bb{\\ee{(m+1)^x p^y}},\n\\end{equation}\nsince the left-hand side corresponds to the definition of a moment.\n\nFor the mRNA degradation term involving $\\PP(m + 1, p)$ we follow a similar\nprocedure in which we define $m' = m + 1$ to obtain\n\\begin{equation}\n  \\Gm \\smp (m + 1) m^x p^y \\PP(m + 1, p) =\n  \\Gm \\sum_{m' = 1}^\\infty \\sum_p m' (m' - 1)^x p^y \\PP(m', p).\n\\end{equation}\nIn this case, since the term on the right-hand side of the equation is multiplied\nby $m'$, starting the sum over $m'$ from 0 rather than from 1 will not affect\nthe result since this factor will not contribute to the total sum. Nevertheless\nthis is useful since our definition of a moment from \\eref{seq_mom_def} requires\nthe sum to start at zero. 
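\n\nSince reindexing arguments are easy to get wrong, a quick numerical sanity check may be reassuring. The snippet below (an illustrative sketch with an arbitrary truncated array, not taken from the linked notebook) verifies that $\\smp m^x p^y \\PP(m-1, p) = \\smp (m+1)^x p^y \\PP(m, p)$ once $\\PP(-1, p) = 0$ is enforced:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nP = rng.random((3, 30, 30))    # P[S, m, p] for states A, I, R (truncated)\nP[:, -1, :] = 0                # empty last m-slice keeps the truncation exact\nP /= P.sum()\nm = np.arange(30)[None, :, None]\np = np.arange(30)[None, None, :]\nx, y = 2, 1\n\nPm1 = np.zeros_like(P)         # P(m - 1, p), with P(-1, p) = 0\nPm1[:, 1:, :] = P[:, :-1, :]\nlhs = (m**x * p**y * Pm1).sum(axis=(1, 2))\nrhs = ((m + 1)**x * p**y * P).sum(axis=(1, 2))\nassert np.allclose(lhs, rhs)   # one entry per promoter state\n\\end{verbatim}\n\n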
This means that we can rewrite the mRNA degradation term as\n\\begin{equation}\n  \\Gm \\sum_{m' = 1}^\\infty m' \\sum_p (m' - 1)^x p^y \\PP(m', p) =\n  \\Gm \\sum_{m' = 0}^\\infty m' \\sum_p (m' - 1)^x p^y \\PP(m', p).\n\\end{equation}\nHere again we can change the arbitrary label $m'$ back to $m$ obtaining\n\\begin{equation}\n  \\Gm \\sum_{m' = 0}^\\infty m' \\sum_p (m' - 1)^x p^y \\PP(m', p) =\n  \\Gm \\bb{\\ee{m (m - 1)^x p^y}}.\n\\end{equation}\n\nThe protein production term involving $\\PP(m, p - 1)$ can be reindexed by \ndefining $p' \\equiv p - 1$. This gives\n\\begin{equation}\n  \\Rp \\smp (m) m^x p^y \\PP(m, p - 1) =\n  \\Rp \\sum_m \\sum_{p'=-1}^\\infty m^{(x + 1)} (p' + 1)^y \\PP(m, p').\n\\end{equation}\nWe again use the fact that negative molecule copy numbers are assigned\nprobability zero to begin the sum from 0 rather than -1 and the arbitrary nature\nof the label $p'$ to write\n\\begin{equation}\n  \\Rp \\sum_m \\sum_{p'=0}^\\infty m^{(x + 1)} (p' + 1)^y \\PP(m, p') =\n  \\Rp \\bb{\\ee{m^{(x + 1)} (p + 1)^y}}.\n\\end{equation}\nFinally, we take care of the protein degradation term involving $\\PP(m, p + 1)$.\nAs before, we define $p' = p + 1$ and substitute this to obtain\n\\begin{equation}\n  \\Gp \\smp (p + 1) m^x p^y \\PP(m, p + 1) =\n  \\Gp \\sum_m \\sum_{p'=1}^\\infty (p') m^x (p' - 1)^y \\PP(m, p').\n\\end{equation}\nJust as with the mRNA degradation term, having a term $p'$ inside the sum\nallows us to start the sum over $p'$ from 0 rather than 1. We can therefore\nwrite\n\\begin{equation}\n  \\Gp \\sum_m \\sum_{p'=0}^\\infty (p') m^x (p' - 1)^y \\PP(m, p') =\n  \\Gp \\bb{\\ee{m^x p (p - 1)^y}}.\n\\end{equation}\n\nPutting all these terms together we can write the general moment ODE. This is\nof the form\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{\\bb{\\ee{m^x p^y}}} &=\n    \\Km \\bb{\\ee{m^x p^y}}\n    \\text{  (promoter state transition)}\\\\\n    &- \\Rm \\bb{\\ee{m^x p^y}} + \\Rm \\bb{\\ee{(m+1)^x p^y}}\n    \\text{  (mRNA production)}\\\\\n    &- \\Gm \\bb{\\ee{m^{(x + 1)} p^y}} + \\Gm \\bb{\\ee{m (m - 1)^x p^y}}\n    \\text{  (mRNA degradation)}\\\\\n    &- \\Rp \\bb{\\ee{m^{(x + 1)} p^y}} + \\Rp \\bb{\\ee{m^{(x + 1)} (p + 1)^y}}\n    \\text{  (protein production)}\\\\\n    &- \\Gp \\bb{\\ee{m^x p^{(y + 1)}}} + \\Gp \\bb{\\ee{m^x p (p - 1)^y}}\n    \\text{  (protein degradation)}.\n  \\end{aligned}\n  \\label{seq_mom_ode}\n\\end{equation}\n\n\\subsection{Moment closure of the simple-repression distribution}\n\nA very interesting and useful feature of \\eref{seq_mom_ode} is that for a given\nvalue of $x$ and $y$ the moment $\\bb{\\ee{m^x p^y}}$ is only a function of lower\nmoments. Specifically $\\bb{\\ee{m^x p^y}}$ is a function of moments\n$\\bb{\\ee{m^{x'} p^{y'}}}$ that satisfy two conditions:\n\\begin{equation}\n  \\begin{aligned}\n    &1) y' \\leq y,\\\\\n  &2) x' + y' \\leq x + y.\n  \\end{aligned}\n  \\label{seq_mom_conditions}\n\\end{equation}\n\nTo prove this we rewrite \\eref{seq_mom_ode} as\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{\\bb{\\ee{m^x p^y}}} &=\n    \\Km \\bb{\\ee{m^x p^y}}\\\\\n    &+ \\Rm \\bb{\\ee{p^y \\left[ (m + 1)^x -m^x \\right]}}\\\\\n    &+ \\Gm \\bb{\\ee{m p^y \\left[ (m - 1)^x - m^x \\right]}}\\\\\n    &+ \\Rp \\bb{\\ee{m^{(x + 1)} \\left[ (p + 1)^y - p^y \\right]}}\\\\\n    &+ \\Gp \\bb{\\ee{m^x p \\left[ (p - 1)^y - p^y \\right]}},\n    \\label{seq_mom_ode_factorized}\n  \\end{aligned}\n\\end{equation}\nwhere the factorization is valid given the linearity of expected values. 
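\n\nBefore identifying the highest moment in each term, the closure conditions can be checked symbolically. The following sketch (illustrative, not part of the linked notebooks) expands the bracketed factors of \\eref{seq_mom_ode_factorized} for one choice of $x$ and $y$ and verifies that every monomial $m^{x'} p^{y'}$ that appears satisfies both conditions of \\eref{seq_mom_conditions}:\n\\begin{verbatim}\nimport sympy as sp\n\nm, p = sp.symbols('m p')\nx, y = 3, 2  # arbitrary example orders\nterms = {\n    'mRNA production':     p**y * ((m + 1)**x - m**x),\n    'mRNA degradation':    m * p**y * ((m - 1)**x - m**x),\n    'protein production':  m**(x + 1) * ((p + 1)**y - p**y),\n    'protein degradation': m**x * p * ((p - 1)**y - p**y),\n}\nfor name, expr in terms.items():\n    # each monomial m^a p^b corresponds to a moment <m^a p^b>\n    for a, b in sp.Poly(sp.expand(expr), m, p).monoms():\n        assert b <= y and a + b <= x + y, (name, a, b)\n\\end{verbatim}\n\n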
Now the\nobjective is to find the highest moment for each term once the relevant\nbinomial, such as $(m-1)^x$, is expanded. Take, for example, a simple case in\nwhich we want to find the second moment of the mRNA distribution. We then set $x\n= 2$ and $y = 0$. \\eref{seq_mom_ode_factorized} then becomes\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{\\bb{\\ee{m^2 p^0}}} &=\n    \\Km \\bb{\\ee{m^2 p^0}}\\\\\n    &+ \\Rm \\bb{\\ee{p^0 \\left[ (m + 1)^2 - m^2 \\right]}}\\\\\n    &+ \\Gm \\bb{\\ee{m p^0 \\left[ (m - 1)^2 - m^2 \\right]}}\\\\\n    &+ \\Rp \\bb{\\ee{m^{(2 + 1)} \\left[ (p + 1)^0 - p^0 \\right]}}\\\\\n    &+ \\Gp \\bb{\\ee{m^2 p \\left[ (p - 1)^0 - p^0 \\right]}}.\n  \\end{aligned}\n\\end{equation}\nSimplifying this equation gives\n\\begin{equation}\n    \\dt{\\bb{\\ee{m^2}}} =\n    \\Km \\bb{\\ee{m^2}}\n    + \\Rm \\bb{\\ee{\\left[ 2m + 1 \\right]}}\n    + \\Gm \\bb{\\ee{\\left[- 2m^2 + m \\right]}}.\n    \\label{seq_second_mom_mRNA}\n\\end{equation}\n\n\\eref{seq_second_mom_mRNA} satisfies both of our conditions. Since we set $y$ to\nbe zero, none of the terms depend on any moment that involves the protein number,\ntherefore $y' \\leq y$ is satisfied. The highest moment in\n\\eref{seq_second_mom_mRNA} also satisfies $x' + y' \\leq x + y$ since the second\nmoment of mRNA doesn't depend on any moment higher than $\\bb{\\ee{m^2}}$. To\ndemonstrate that this is true for any $x$ and $y$ we now rewrite\n\\eref{seq_mom_ode_factorized}, making use of the binomial expansion\n\\begin{equation}\n  (z \\pm 1)^n = \\sum_{k=0}^n {n \\choose k} (\\pm 1)^{k} z^{n-k}.\n\\end{equation}\nJust as before let's look at each term individually. For the mRNA production\nterm we have\n\\begin{equation}\n  \\Rm \\bb{\\ee{p^y \\left[ (m + 1)^x -m^x \\right]}} =\n  \\Rm \\bb{\\ee{p^y \\left[ \\sum_{k=0}^x {x \\choose k} m^{x-k} - m^x \\right]}}.\n\\end{equation}\nWhen $k = 0$, the term inside the sum on the right-hand side cancels with the\nother $m^x$, so we can simplify to\n\\begin{equation}\n  \\Rm \\bb{\\ee{p^y \\left[ (m + 1)^x -m^x \\right]}} =\n  \\Rm \\bb{\\ee{p^y \\left[ \\sum_{k=1}^x {x \\choose k} m^{x-k} \\right]}}.\n\\end{equation}\nOnce the sum is expanded we can see that the highest moment in this sum is given\nby $\\bb{\\ee{m^{(x-1)} p^y}}$ which satisfies both of the conditions on\n\\eref{seq_mom_conditions}.\n\nFor the mRNA degradation term we similarly have\n\\begin{equation}\n  \\Gm \\bb{\\ee{m p^y \\left[ (m - 1)^x - m^x \\right]}} =\n  \\Gm \\bb{\\ee{m p^y \\left[ \\sum_{k=0}^x {x \\choose k}(-1)^k m^{x-k} -\n                          m^x \\right]}}.\n\\end{equation}\nSimplifying terms we obtain\n\\begin{equation}\n  \\Gm \\bb{\\ee{m p^y \\left[ \\sum_{k=0}^x {x \\choose k}(-1)^k m^{x-k} -\n                          m^x \\right]}} =\n  \\Gm \\bb{\\ee{p^y \\left[ \\sum_{k=1}^x {x \\choose k}(-1)^k m^{x+1-k} \\right]}}.\n\\end{equation}\nThe largest moment in this case is $\\bb{\\ee{m^x p^y }}$, which again satisfies\nthe conditions on \\eref{seq_mom_conditions}.\n\nThe protein production term gives\n\\begin{equation}\n  \\Rp \\bb{\\ee{m^{(x + 1)} \\left[ (p + 1)^y - p^y \\right]}} =\n  \\Rp \\bb{\\ee{m^{(x + 1)} \\left[ \\sum_{k=0}^y {y \\choose k} p^{y-k}\n                                - p^y \\right]}}.\n\\end{equation}\nUpon simplification we obtain\n\\begin{equation}\n  \\Rp \\bb{\\ee{m^{(x + 1)} \\left[ \\sum_{k=0}^y {y \\choose k} p^{y-k}\n                                - p^y \\right]}} =\n  \\Rp \\bb{\\ee{m^{(x + 1)} \\left[ \\sum_{k=1}^y {y \\choose 
k} p^{y-k}\n  \\right]}}.\n\\end{equation}\nHere the largest moment is given by $\\bb{\\ee{m^{x+1} p^{y-1}}}$, which again\nsatisfies both of our conditions. Finally, for the protein degradation term we\nhave\n\\begin{equation}\n  \\Gp \\bb{\\ee{m^x p \\left[ (p - 1)^y - p^y \\right]}} =\n  \\Gp \\bb{\\ee{m^x \\left[ \\sum_{k=1}^y {y \\choose k} (-1)^k p^{y + 1 - k}\n  \\right]}}.\n\\end{equation}\nThe largest moment involved in this term is therefore $\\bb{\\ee{m^x p^{y}}}$.\nWith this, we have shown that the four terms involved in our general moment equation\ndepend only on lower moments that satisfy \\eref{seq_mom_conditions}.\n\nAs a reminder, what we showed in this section is that the kinetic model\nintroduced in \\fref{fig2_minimal_model}(A) has no moment-closure problem. In\nother words, moments of the joint mRNA and protein distribution can be computed\njust from knowledge of lower moments. This allows us to cleanly integrate the\nmoments of the distribution dynamics as cells progress through the cell cycle.\n\n\\subsection{Computing single promoter steady-state moments}\n\n(Note: The Python code used for the calculations presented in this section can\nbe found in the\n\\href{https://www.rpgroup.caltech.edu//chann_cap/software/chemical_master_steady_state_moments_general.html}{following\nlink} as an annotated Jupyter notebook)\n\nAs discussed in \\secref{sec_cell_cycle}, one of the main factors contributing to\ncell-to-cell variability in gene expression is the change in gene copy number\nduring the cell cycle as cells replicate their genome before cell division. Our\nminimal model accounts for this variability by considering the time trajectory\nof the distribution moments as given by \\eref{seq_mom_ode_factorized}. These\npredictions will be contrasted with the predictions from a kinetic model that\ndoesn't account for changes in gene copy number during the cell cycle in\n\\siref{supp_multi_gene}.\n\nIf we do not account for the change in gene copy number during the cell cycle or for\nthe partition of proteins during division, the dynamics of the moments of the\ndistribution described in this section will reach a steady state. In order to\ncompute the steady-state moments of the kinetic model with a single gene across\nthe cell cycle, we use the moment closure property of our master equation. By\nequating \\eref{seq_mom_ode_factorized} to zero for a given $x$ and\n$y$, we can solve the resulting linear system and obtain a solution for\n$\\bb{\\ee{m^x p^y}}$ at steady state as a function of moments $\\bb{\\ee{m^{x'}\np^{y'}}}$ that satisfy \\eref{seq_mom_conditions}. Then, by solving for the\nzero$\\th$ moment $\\bb{\\ee{m^0 p^0}}$ subject to the constraint that the\nprobability of the promoter being in any state should add up to one, we can\nsubstitute back all of the solutions in terms of moments $\\bb{\\ee{m^{x'}\np^{y'}}}$ with solutions in terms of the rates shown in\n\\fref{fig2_minimal_model}. In other words, through an iterative process, we can\nget at the value of any moment of the distribution. We start by solving for the\nzero$\\th$ moment. Since all higher moments depend on lower moments, we can use\nthe solution of the zero$\\th$ moment to compute the first mRNA moment. 
This\nsolution is then used for higher moments in a hierarchical iterative process.\n", "meta": {"hexsha": "121fc52faa251495eea088e26fa3edd814478203", "size": 15814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/appendix_moment_cme.tex", "max_stars_repo_name": "RPGroup-PBoC/chann_cap", "max_stars_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-08-21T04:06:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T07:36:58.000Z", "max_issues_repo_path": "doc/appendix_moment_cme.tex", "max_issues_repo_name": "RPGroup-PBoC/chann_cap", "max_issues_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/appendix_moment_cme.tex", "max_forks_repo_name": "RPGroup-PBoC/chann_cap", "max_forks_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-04-29T17:43:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-09T00:20:16.000Z", "avg_line_length": 45.3123209169, "max_line_length": 118, "alphanum_fraction": 0.6575186544, "num_tokens": 5728, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5763653393998814}}
{"text": "\\section{Introduction}\n\nThe equivalent layer is a well-known technique for processing potential-field data in applied geophysics \nsince the 1960's. \nIt comes from potential theory as a mathematical solution of Laplace's equation, in the region above the\nsources, by using the Dirichlet boundary condition \\citep{kellogg1929}.\nThis theory states that any potential-field data produced by an arbitrary 3D physical-property distribution can \nbe exactly reproduced by a fictitious layer located at any depth and having a continuous 2D physical-property  \ndistribution. In practical situations, the layer is approximated by a finite set of sources (e.g., point masses \nor dipoles) and their physical properties are estimated by solving a linear system of equations that yields an \nacceptable potential-field data fit. These fictitious sources are called equivalent sources.\n\nMany previous works have used the equivalent layer to perform different potential-field data \ntransformations such as gridding \\citep[e.g.,][]{dampney1969, cordell1992, mendonca-silva1994},\nupward/downward continuation \\citep[e.g.,][]{emilia1973, hansen-miyazaki1984, \nli-oldenburg2010}, reduction to the pole \\citep[e.g.,][]{silva1986, leao-silva1989, guspi-novara2009, \noliveirajr-etal2013}, combining multiple data sets \\citep[e.g.,][]{boggs-dransfield2004} and \ngradient data processing \\citep[e.g.,][]{barnes-lumley2011}.\n\nAlthough the use of the equivalent-layer technique increased over the last decades, one of the biggest \nproblems is still its high computational cost for processing large-data sets. \nThis problem propelled several studies aimed at improving the computational efficiency of the \nequivalent layer technique. \n\\citet{leao-silva1989} develop a fast method for processing a regular grid of potential-field data.\nThe method consists in estimating an equivalent layer which exactly reproduces the potential-field data \nwithin a small data window. The data window is shifted over the whole gridded data in a procedure similar \nto a discrete convolution. \nThe equivalent layer extends beyond the moving-data window and is located at a depth between two and six \ntimes the grid spacing of the observations. For each data window, the equivalent layer is estimated by \nsolving an underdetermined linear system.\nAfter estimating an equivalent layer, the transformed-potential field is computed only at the center of the \nmoving-data window. The use of a small moving-data window greatly reduces the total number of floating-point \noperations (\\textit{flops}) and RAM memory storage. The computational efficiency of this method relies on the \nstrategy of constructing the equivalent layer by successively solving small linear systems instead of solving \njust one large linear system for the entire equivalent layer. \n\\citet{mendonca-silva1994} also follow the strategy of solving successive small linear systems for \nconstructing an equivalent layer. Their method is based on the  equivalent-data concept, which \nconsists in determining a subset of all potential-field data (named equivalent-data set), such that the \ninterpolating surface that fits the chosen subset also automatically fits all remaining data. \nThe equivalent data set is obtained by iteratively introducing the potential-field observation with the \ngreatest residual in the preceding iteration. 
When applied to the interpolation problem, the method is \noptimized by approximating dot products with the discrete form of an analytic integration that can be \nevaluated with less computational effort. According to the authors, the equivalent-data set is usually \nsmaller than the total number of potential-field observations, leading to computational savings. \nThe authors also pointed out that the computational efficiency of the method depends on the number of \nequivalent data. If the potential-field anomaly is nonsmooth, the number of equivalent data can be large and \nthe method will be less efficient than the classical approach.\n\nBy following a different strategy, \\citet{li-oldenburg2010} develop a rapid method that transforms the \ndense sensitivity matrix associated with the linear system into a sparse one by using a wavelet technique. \nAfter obtaining a sparse representation of the sensitivity matrix, those authors estimate the \nphysical-property distribution within the equivalent layer by using an overdetermined formulation. \nThose authors pointed out that, given the sparse representation, their method reduces the computational time \nrequired for solving the linear system by as many as two orders of magnitude if compared with the same \nformulation using a dense matrix. \n\\citet{barnes-lumley2011} follow a similar strategy and transform the dense sensitivity matrix \ninto a sparse one. However, differently from \\citet{li-oldenburg2010}, their method operates in the space \ndomain by grouping equivalent sources far from an observation point into blocks with average physical \nproperty. This procedure aims at reducing the memory storage and achieving computational efficiency \nby solving the transformed linear system with a weighted-least-squares conjugate-gradient algorithm.\nNote that, instead of constructing the equivalent layer by solving successive small linear systems, \nthese last two methods first transform the large linear system into a sparse one and then take advantage of \nthis sparseness.\n\n\\citet{oliveirajr-etal2013} develop a fast method based on the reparameterization of the physical-property \ndistribution within the equivalent layer. Those authors divided the equivalent layer into a regular grid of \nequivalent-source windows inside which the physical-property distribution is described by bivariate \npolynomial functions. By using this polynomial representation, the inverse problem for estimating the \nequivalent layer is posed in the space of the total number of polynomial coefficients within all \nequivalent-source windows instead of in the space of the total number of equivalent sources. According to \n\\citet{oliveirajr-etal2013}, the computational efficiency of their method relies on the fact that the total \nnumber of polynomial coefficients needed to describe the physical-property distribution within the \nequivalent layer is generally much smaller than the number of equivalent sources, leading to a much smaller \nlinear system to be solved. 
Those authors could verify that the total number of \\textit{flops} needed for \nbuilding and solving the linear inverse problem of estimating the total number of polynomial coefficients can \nbe reduced by as many as three and four orders of magnitude, respectively, if compared with the same inverse \nproblem of estimating the physical property of each equivalent source via Cholesky decomposition.\n\nThere is another class of methods that iteratively estimates the physical-property distribution within \nthe equivalent layer without solving linear systems.\nThe method presented by \\citet{cordell1992}, and later generalized by \\citet{guspi-novara2009}, updates \nthe physical property of the sources, which are located below each potential-field observation, using a \nprocedure that removes the maximum residual between the observed and predicted data.\n\\citet{xia-sprowl1991} and \\citet{xia-etal1993} develop \nfast iterative schemes for updating the physical-property distribution\nwithin the equivalent layer in the wavenumber and space domains, respectively.\nGrounded on the excess-mass constraint, \\cite{siqueira-etal2017} develop an iterative scheme \nstarting with a mass distribution within the equivalent layer that is proportional to observed gravity data.\nThen, their method iteratively adds mass corrections that are proportional to the gravity residuals.\nThe total number of \\textit{flops} required by these iterative methods for estimating the physical-property \ndistribution within the equivalent layer depends on the total number of iterations; however, it is \ngenerally much smaller than the total number of \\textit{flops} required to solve a large-scale linear system. \nGenerally, the most computationally expensive step in each iteration of these methods is the forward problem \nof calculating the potential-field data produced by the equivalent layer.\n\nIn the present work, we show that the sensitivity matrix associated with a planar equivalent layer \nof point masses has a very well-defined structure called Block-Toeplitz Toeplitz-Block (BTTB) for \nthe case in which (1) the observed gravity data is located on a regularly spaced grid at constant \nheight and (2) there is one point mass directly beneath each observation point.\nThis structure has been successfully exploited in potential-field methods for \n3D gravity inversion \\citep{zhang-wong2015}, downward continuation \n\\citep{zhang-etal2016} and 3D magnetic modeling \\citep{qiang_etal2019}.\nBy using this property, we propose an efficient algorithm based on FFT convolution \n\\citep[e.g.,][ p. 
207]{vanloan1992} for computing the forward problem at each iteration of \nthe fast equivalent-layer technique proposed by \\citet{siqueira-etal2017}.\nOur method uses the gravitational effect produced by a single point mass to compute the \neffect produced by the whole equivalent layer, which results in a drastic reduction \nnot only in the number of flops, but also in the RAM memory usage of the fast equivalent-layer technique.\nTests with synthetic and field data illustrate the good performance of our method in processing \nlarge gravity data sets.\n\n\n\n", "meta": {"hexsha": "af7a2e3204ba7bc43d34aabec1f2963e2dedccaf", "size": 9463, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/introduction.tex", "max_stars_repo_name": "pinga-lab/Eq_Layer-Toeplitz", "max_stars_repo_head_hexsha": "d56b40f99e9059c07a504efe3ad53aff8462622f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manuscript/introduction.tex", "max_issues_repo_name": "pinga-lab/Eq_Layer-Toeplitz", "max_issues_repo_head_hexsha": "d56b40f99e9059c07a504efe3ad53aff8462622f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-19T22:42:42.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-19T23:32:26.000Z", "max_forks_repo_path": "manuscript/introduction.tex", "max_forks_repo_name": "pinga-lab/Eq_Layer-Toeplitz", "max_forks_repo_head_hexsha": "d56b40f99e9059c07a504efe3ad53aff8462622f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.0087719298, "max_line_length": 112, "alphanum_fraction": 0.8211983515, "num_tokens": 1927, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5763653311274045}}
{"text": "\\section{Policy Gradient Methods}\n\nIn this section we take an approach that is different to the action-value methods that we have considered previously. We continue the function approximation scheme, but attempt to learn a \\emph{parameterised policy} $\\pi(a \\vert{} s, \\vec{\\theta})$ where $\\vec{\\theta} \\in \\R{}^{d'}$ is the policy's \\emph{parameter vector}. Our methods might also learn a value function, but the policy will provide a probability distribution of possible actions without directly consulting the value function as we did previously. \\\\\n\nWe will learn the policy parameter by \\emph{policy gradient methods}. These are gradient methods based on some scalar performance measure $J(\\vec{\\theta})$. In particular, performance is maximised by \\emph{gradient ascent} using some stochastic estimate of $J$, $\\widehat{\\grad J(\\vec{\\theta}_t)}$, whose expectation approximates $\\E{}[\\grad_{\\vec{\\theta}} J(\\vec{\\theta}_t)]$\n\\[\n    \\vec{\\theta}_{t+1} = \\vec{\\theta} + \\alpha \\widehat{\\grad J(\\vec{\\theta}_t)}.\n\\]\nMethods that also learn a value function are called \\emph{actor-critic} methods. Actor is in reference to the learn policy, while critic is in reference to the learned (usually state-) value function.\n\n\n\\subsection{Policy Approximation and its Advantages}\n\nIn policy gradient methods, the policy can be parameterised by any differentiable function of the parameter $\\vec{\\theta}$. We generally require $\\pi \\in (0, 1)$ to be defined on the open interval to ensure exploration. \\\\\n\nFor action-spaces that are discrete and not too large, it is common to learn a preference function $h(s, a, \\vec{\\theta}) \\in R{}$ and then take a soft-max to get the policy\n\\[\n    \\pi(a \\vert{} s, \\vec{\\theta}) = \\frac{e^{h(s, a, \\vec{\\theta})}}{\\sum_b e^{h(s, b, \\vec{\\theta})}}.\n\\]\nWe call this type of parameterisation \\emph{soft-max in action preferences}. (Note the homomorphism: preferences add, while probabilities multiply.) We can learn the preferences any way we like, be it linear or using a deep learning.\\\\\n\nSome advantages of policy parameterisation:\n\\begin{itemize}\n    \\item Action-value methods, such as $\\epsilon$-greedy action selection, can result give situations in which an arbitrarily small change in the action-values completely changes the policy.\n    \\item The soft-max method will approach a deterministic policy over time. If we used action-values then these would approach their (finite) true values, leading to finite probabilities (with the soft-max). Action preferences do not necessarily converge, but instead are driven to produce an optimal stochastic policy.\n    \\item In some problems, the best policy may be stochastic. Action-value methods have no natural way of approximating this, whereas it is embedded in this scheme.\n    \\item Often the most important reason for choosing a policy based learning method is that policy parameterisation provides a good way to inject prior knowledge into the system.\n\\end{itemize}\n\n\\subsection{The Policy Gradient Theorem}\nThe episodic and continuing cases of the policy gradient theorem have different proofs (since they use different formulations of the expected reward). 
Here we will first focus on the episodic case, but the results carry over unchanged.\\\\\n\nIn the episodic case the performance function is the true value of the start state under the current policy\n\\[\n    J(\\vec{\\theta}) = v_{\\pi_{\\vec{\\theta}}}(s_0).\n\\]\nIn the following we assume no discounting ($\\gamma = 1$), but this can be inserted by making the requisite changes (see exercises).\\\\\n\nThe success of the \\emph{policy gradient theorem} is that it gives a gradient of the performance function that does not include derivatives of the state distribution. The result for the episodic case is as follows and is derived in the box shown below\n\\[\n    \\grad_{\\vec{\\theta}} J(\\vec{\\theta}) \\propto \\sum_s \\mu(s) \\sum_a q_\\pi(s, a) \\grad_{\\vec{\\theta}} \\pi(a \\vert{} s, \\vec{\\theta}).\n\\]\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/proof_policy_gradient_episodic.png}\\\\\n\n\\subsection{REINFORCE: Monte Carlo Policy Gradient}\nWe now attempt to learn a policy by stochastic gradient ascent on the performance function. To begin, the policy gradient theorem can be stated as \n\\[\n    \\grad_{\\vec{\\theta}}J(\\vec{\\theta}) = \\Epi \\left[\\sum_a q_\\pi(S_t, a) \\grad_{\\vec{\\theta}}\\pi(a \\vert{} S_t, \\vec{\\theta}) \\right].\n\\]\nThe \\emph{all-actions} method simply samples this expectation to give the update rule\n\\[\n    \\vec{\\theta}_{t+1} = \\vec{\\theta}_t + \\alpha \\sum_a \\hat{q}(S_t, a, \\vec{w})\\grad_{\\vec{\\theta}}\\pi(a\\vert{}S_t, \\vec{\\theta}).\n\\]\nThe classical REINFORCE algorithm involves only $A_t$, rather than a sum over all actions. We proceed\n\\begin{align}\n    \\grad_{\\vec{\\theta}} J(\\vec{\\theta}) &= \\Epi\\left[ \\sum_a \\pi(a\\vert{}S_t, \\vec{\\theta}) q_\\pi(S_t, a)\\frac{\\grad_{\\vec{\\theta}}\\pi(a \\vert{} S_t, \\vec{\\theta})}{\\pi(a \\vert{} S_t, \\vec{\\theta})} \\right] \\\\\n  &= \\Epi\\left[q_\\pi(S_t, A_t)\\frac{\\grad_{\\vec{\\theta}}\\pi(A_t \\vert{} S_t, \\vec{\\theta})}{\\pi(A_t \\vert{} S_t, \\vec{\\theta})} \\right] \\\\\n  &= \\Epi\\left[G_t\\frac{\\grad_{\\vec{\\theta}}\\pi(A_t \\vert{} S_t, \\vec{\\theta})}{\\pi(A_t \\vert{} S_t, \\vec{\\theta})} \\right],\n\\end{align}\nwhere the last step uses $\\E{}[G_t \\vert{} S_t, A_t] = q_\\pi(S_t, A_t)$. This yields the REINFORCE update\n\\begin{equation}\n    \\vec{\\theta}_{t+1} = \\vec{\\theta}_t + \\alpha G_t \\grad_{\\vec{\\theta}} \\log\\pi(A_t\\vert{} S_t, \\vec{\\theta}).\n\\end{equation}\nThis update moves the parameter vector in the direction of increasing the probability of the action taken proportional to the return and inversely proportional to the probability of the action. It uses the complete return from time $t$, so in this sense it is a Monte Carlo algorithm. We refer to the quantity\n\\[\n    \\grad_{\\vec{\\theta}} \\log\\pi(A_t\\vert{} S_t, \\vec{\\theta})\n\\]\nas the \\emph{eligibility vector}. Pseudocode is given in the box below (complete with discounting). Convergence to a local optimum is guaranteed under the standard stochastic approximation conditions for decreasing $\\alpha$. 
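\n\nIn addition to the pseudocode box below, the REINFORCE update itself is only a couple of lines. A minimal sketch for the soft-max/linear parameterisation (an illustration with a made-up one-step episode, not the book's code):\n\\begin{verbatim}\nimport numpy as np\n\ndef grad_log_pi(theta, feats, a):\n    # Eligibility vector for soft-max in linear preferences:\n    # grad log pi(a|s) = x(s, a) - sum_b pi(b|s) x(s, b).\n    h = feats @ theta\n    pi = np.exp(h - h.max()); pi /= pi.sum()\n    return feats[a] - pi @ feats\n\nalpha, theta = 0.1, np.zeros(4)\nfeats = np.eye(4)                 # one-hot features for 4 actions\nG, a = 1.0, 2                     # return and action from a fictitious episode\ntheta += alpha * G * grad_log_pi(theta, feats, a)   # theta_{t+1}\n\\end{verbatim}\n\n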
However, since it is a Monte Carlo method, it will likely have high variance, which will slow learning.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/REINFORCE_monte_carlo_algorithm}\\\\\n\n\\subsection{REINFORCE with Baseline}\nThe policy gradient theorem can be generalised to incorporate a comparison to a \\emph{baseline} value $b(s)$ for each state\n\\begin{equation}\n    \\grad_{\\vec{\\theta}} J(\\vec{\\theta}) \\propto \\sum_s \\mu(s) \\sum_a \\left( q_\\pi(s, a) - b(s) \\right) \\grad_{\\vec{\\theta}} \\pi(a \\vert{} s, \\vec{\\theta}).\n\\end{equation}\nThe baseline can be a random variable, as long as it doesn't depend on $a$. The update rule then becomes\n\\begin{equation}\n    \\vec{\\theta}_{t+1} = \\vec{\\theta}_t + \\alpha \\left( G_t - b(S_t) \\right) \\grad_{\\vec{\\theta}} \\log\\pi(A_t\\vert{} S_t, \\vec{\\theta}).\n\\end{equation}\nThe idea of the baseline is to reduce variance -- by construction it has no impact on the expected update.\\\\\n\nA natural choice for the baseline is a learned state-value function $\\hat{v}(S_t, \\vec{w})$. Pseudocode for Monte Carlo REINFORCE with this baseline (also learned by MC estimation) is given in the box below.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/REINFORCE_monte_carlo_baseline_algorithm}\\\\\n\nThis algorithm has two step sizes $\\alpha^{\\vec{\\theta}}$ and $\\alpha^{\\vec{w}}$. Choosing the step size for the value estimates is relatively easy, for instance in the linear case we have the rule of thumb $\\alpha^{\\vec{w}} = 1/ \\E{}\\left[ || \\grad_{\\vec{w}} \\hat{v}(S_t, \\vec{w}) ||_\\mu^2 \\right]$. It is much less clear how to set the step size for the policy parameters.\n\n\\subsection{Actor-Critic Methods}\nAlthough REINFORCE with baseline can use an estimated value function, it is not an actor-critic method because it does not incorporate value estimates through bootstrapping.\\\\\n\nWe present here a one-step actor-critic method that is an analog of TD(0), Sarsa(0) and Q-learning. We replace the full return of REINFORCE with a bootstrapped one-step return:\n\\begin{align}\n    \\vec{\\theta}_{t+1} &= \\vec{\\theta}_t + \\alpha \\left(  G_{t:t+1} - \\hat{v}(S_t, \\vec{w})\\right) \\frac{\\grad_{\\vec{\\theta}} \\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}{\\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}\\\\\n    &= \\vec{\\theta}_t + \\alpha \\left(  R_{t+1} + \\gamma\\hat{v}(S_{t+1}, \\vec{w}) - \\hat{v}(S_{t}, \\vec{w})\\right) \\frac{\\grad_{\\vec{\\theta}} \\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}{\\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}\\\\\n    &= \\vec{\\theta}_t + \\alpha \\delta_t \\frac{\\grad_{\\vec{\\theta}} \\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}{\\pi(A_t \\vert{} S_t, \\vec{\\theta}_t)}\n\\end{align}\nwith $\\delta_t$ as the one-step TD error. The natural method to learn the state-value function in this case would be semi-gradient TD(0). Pseudocode is given in the boxes below for this algorithm and a sister algorithm using eligibility traces.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/one_step_actor_critic_algorithm.png}\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/actor_critic_eligibility_traces_algorithm.png}\\\\\n\n\n\\subsection{Policy Gradient for Continuing Problems}\nFor continuing problems we need a different formulation. 
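\n\n(Before setting that up, a short computational aside: the one-step actor-critic update above, sketched with linear value features and the soft-max/linear policy. This is an illustration under those assumptions, not the book's pseudocode.)\n\\begin{verbatim}\nimport numpy as np\n\ndef actor_critic_step(theta, w, s_x, a, r, s2_x, done, feats,\n                      alpha_theta=0.1, alpha_w=0.1, gamma=0.99):\n    # One-step TD error: delta = R + gamma * v(S') - v(S).\n    delta = r + (0.0 if done else gamma * (w @ s2_x)) - w @ s_x\n    w = w + alpha_w * delta * s_x            # critic: semi-gradient TD(0)\n    h = feats @ theta                        # actor: soft-max over preferences\n    pi = np.exp(h - h.max()); pi /= pi.sum()\n    theta = theta + alpha_theta * delta * (feats[a] - pi @ feats)\n    return theta, w\n\\end{verbatim}\nReturning to the continuing case:\n\n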
We choose as our performance measure the average rate of reward per time step:\n\\begin{align}\n    J(\\vec{\\theta}) = r(\\pi) &= \\lim_{h \\to \\infty} \\frac{1}{h} \\sum_{t=1}^h \\E{}[R_t \\vert{} S_0, A_{0:t-1} \\sim \\pi] \\\\\n    &= \\lim_{t \\to \\infty} \\E{}[R_t \\vert{} S_0, A_{0:t-1} \\sim \\pi] \\\\\n    &= \\sum_s \\mu(s) \\sum_a \\pi(a \\vert{} s) \\sum_{s', r} p(s', r \\vert{} s, a)r,\n\\end{align}\nwhere $\\mu$ is the steady distribution under $\\pi$, $\\mu(s) = \\lim_{t \\to \\infty} P{}(S_t = s \\vert{} A_{0:t} \\sim \\pi)$ which we assume to exist and be independent of $S_0$ (ergodicity). Recall that this is the distribution that is invariant under action selections according to $\\pi$:\n\\[\n    \\sum_s \\mu(s) \\sum_a \\pi(a \\vert{} s, \\vec{\\theta}) p(s' \\vert{} s, a) = \\mu(s').\n\\]\nWe also define the values with respect to the differential return:\n\\[\n    G_t \\doteq R_{t+1} - r(\\pi) + R_{t+2} - r(\\pi) + R_{t+3} - r(\\pi) + \\cdots.\n\\]\nWith these changes the policy gradient theorem remains true (proof given in the book). The forward and backward view equations also remain the same. Pseudocode for the actor-critic algorithm in the continuing case is given below.\\\\\n\n\\includegraphics[width=\\textwidth]{\\NotesImages/actor_critic_eligibility_traces_continuing.png}\n\n\\subsection{Policy Parameterisation for Continuous Actions}\nWe simply define a continuous (parameterised) probability distribution over the actions, then do the gradient ascent as above.\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "ae3679757f1d6aaf75d966ef00bf379a14c97206", "size": 10517, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/chapters/chapter13/chapter13_content.tex", "max_stars_repo_name": "ElliotMunro200/reinforcement_learning_an_introduction", "max_stars_repo_head_hexsha": "c4fccb46a4bb00955549be3505144ec49f0132e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 234, "max_stars_repo_stars_event_min_datetime": "2018-09-01T00:26:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:55:50.000Z", "max_issues_repo_path": "notes/chapters/chapter13/chapter13_content.tex", "max_issues_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_issues_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-11-29T21:04:36.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T17:11:50.000Z", "max_forks_repo_path": "notes/chapters/chapter13/chapter13_content.tex", "max_forks_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_forks_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 63, "max_forks_repo_forks_event_min_datetime": "2018-07-31T04:53:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T04:03:43.000Z", "avg_line_length": 77.9037037037, "max_line_length": 518, "alphanum_fraction": 0.7210231054, "num_tokens": 2984, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5763384865733612}}
{"text": "\\problemname{Logland}\nTwo thieves (who strongly ``recommended'' us not to reveal their names) has broken into the central bank of Logland, where the country's cash reserve is stored.\nIn Logland, the currency has the $k$ denominations $1, 2, 4, 8, \\ldots, 2^{k - 1}$, and the bank currently stores $x_i$ coins of denominations $2^i$.\n\nThieves live by a very strict honor code, which stipulates that any loot grabbed must be divided evenly between the theives.\nUnfortunately (for the thieves), this may mean that some loot must be left behind.\nFor example, if the bank contained a single coin of denomination $8$, and two coins of denomination $2$, there is no way they could bring the $8$ coin.\nInstead, they would have to take one coin of value $2$ each, leaving more than half of the possible loot!\n\nSince neither leaving loot nor solving NP-hard problems are things thieves like to do, they have asked you to compute the minimum value of the loot they must leave behind, in order to be able to\nsplit the remaining loot evenly.\n\n\\section*{Input}\nThe first line of input contains the integer $1 \\le k \\le 10^6$ -- the number of denominations in Logland.\nThe next line contains the $k$ integers $0 \\le x_0, x_1, \\dots, x_{k-1} \\le 2^{30}$, the number of coins of the denominations $2^0, 2^1, \\dots, 2^{k - 1}$.\n\n\\section*{Output}\nOutput a single line, the minimum amount of money the thieves must leave behind.\nSince this number may be very large, output it modulo the prime number $10^9 + 7$.\n", "meta": {"hexsha": "48c9f066364f1350cbe89df0f1cd8c3cc01f852f", "size": 1493, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "logland/problem_statement/problem.en.tex", "max_stars_repo_name": "jsannemo/hiq-challenge-2017", "max_stars_repo_head_hexsha": "8271c716fe249674d585731f472b64616370300a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "logland/problem_statement/problem.en.tex", "max_issues_repo_name": "jsannemo/hiq-challenge-2017", "max_issues_repo_head_hexsha": "8271c716fe249674d585731f472b64616370300a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "logland/problem_statement/problem.en.tex", "max_forks_repo_name": "jsannemo/hiq-challenge-2017", "max_forks_repo_head_hexsha": "8271c716fe249674d585731f472b64616370300a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.65, "max_line_length": 194, "alphanum_fraction": 0.7494976557, "num_tokens": 397, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673269042767, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5763384830059214}}
{"text": "\\subsection{Cantilever beam}\n\\paragraph{}\nA two-dimensional cantilever beam subjected to a parabolic shear load at the free end is examined as shown in Fig.~\\ref{qdt_fig:ex_cantilever_beam_geo_bc}.\n    \\begin{figure}[h!]\n    \\centering\n        \\scalebox{0.6}{\\includegraphics{isogeometric_sbfem/images/cantilever_beam_geo_bc.eps}}\n        \\caption{ Cantilever beam: Geometry and boundary conditions.}\n        \\label{qdt_fig:ex_cantilever_beam_geo_bc}\n    \\end{figure}\n%\nThe geometry is: length $L=\\SI{8}{\\meter}$, height $D=\\SI{4}{\\meter}$.\nThe material properties are: Young\u2019s modulus $E = \\SI{3e7}{\\newton \\per \\square \\meter}$ , Poisson\u2019s ratio $ \\nu =0.25$.\nThe parabolic shear force is $P = \\SI{250}{\\newton}$.\nThe exact solutions for the displacements are given by \\citep{Aug2008} as Eq.~\\ref{iso_eq:cantilever_beam_displacement_solution}.\nwhere $I=D^3/12$ is the moment of inertia, $\\mean{E}=E$, $\\mean{\\nu}=\\nu$ and $\\mean{E}=E/(1-\\nu^2)$, $\\mean{\\nu}=nu/(1-nu)$ for plane stress and plane strain condition respectively.\nThe stress $\\sigma$ can be expressed as \\citep{Aug2008} as Eq.~\\ref{iso_eq:cantilever_beam_stress_solution}.\nThe strain energy can be derived from Eq.~\\ref{iso_eq:cantilever_beam_stress_solution} and Eq.~\\ref{iso_eq:cantilever_beam_displacement_solution} as Eq.~\\ref{iso_eq:cantilever_beam_energy_solution}.\n\n\\paragraph{}\nIn this example, rigid body motion is constrained by fixing 3 DOF on the left edge of the beam.\n$u_x=0$ for points at $(0,-D/2)$ and $(0,D/2)$ and $u_y =0$ for point at $(0,0)$.\nStress from analytical solution in Eq.~\\ref{iso_eq:cantilever_beam_stress_solution} are applied on the boundaries.\n\n\\paragraph{}\nDue to the fact that the geometry of the cantilever beam can be described by four points and four straight lines, drawing in AutoCAD may not be necessary.\nAs a result, the input geometry is defined manually.\nGenerated background mesh, coloring and the final result with $res=32$, $s_{max}=16$ and $s_{min}=1$ are shown in Fig.~\\ref{qdt_fig:ex_cantilever_beam_background_mesh}, Fig.~\\ref{qdt_fig:ex_cantilever_beam_mesh_coloring} and Fig.~\\ref{qdt_fig:ex_cantilever_beam_mesh_final}.\n%\n    \\begin{figure}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/cantilever_background_mesh.eps}\n        }\n        \\caption[Background mesh of cantilever beam]{Background mesh of cantilever beam : Bold lines represents the input geometry}\n        \\label{qdt_fig:ex_cantilever_beam_background_mesh}\n    \\end{figure}\n%\n    \\begin{figure}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/cantilever_coloring.eps}\n        }\n        \\caption[Mesh coloring of cantilever beam]{Mesh coloring of cantilever beam : Grey area represents the cantilever beam}\n        \\label{qdt_fig:ex_cantilever_beam_mesh_coloring}\n    \\end{figure}\n%\n    \\begin{figure}\n        \\centering\n        \\scalebox{0.3}{\n            \\includegraphics{quadtree/ex_images/cantilever_final_mesh.eps}\n        }\n        \\caption[Final mesh of cantilever beam]{Final mesh of cantilever beam}\n        \\label{qdt_fig:ex_cantilever_beam_mesh_final}\n    \\end{figure}\n% DOF err\n% 146 - 0.019587 (32,16)\n% 178 - 0.0082845 (32,2)\n% 438 - 0.0027044 (64,2)\n% 1658- 0.00066929 (128,2)\nMesh with different parameters are plotted in Fig.~\\ref{qdt_fig:ex_cantilever_mesh_all} and the convergence study is plotted in 
Fig.~\\ref{qdt_fig:ex_cantilever_mesh_conv}\n%\n    \\begin{figure}[h!]\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{quadtree/ex_images/qdt_cantilever_mesh_178.eps}\n            }\n            \\caption{Mesh with $res=32$, $s_{max}=2$, 178 DOFs}\n        \\end{subfigure}\n        \\\\\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{quadtree/ex_images/qdt_cantilever_mesh_438.eps}\n            }\n            \\caption{Mesh with $res=64$, $s_{max}=2$, 438 DOFs}\n        \\end{subfigure}\n        \\\\\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{quadtree/ex_images/qdt_cantilever_mesh_1658.eps}\n            }\n            \\caption{Mesh with $res=128$, $s_{max}=2$, 1658 DOFs}\n        \\end{subfigure}\n        \\caption[Mesh of the cantilever beam]{Mesh of the cantilever beam}\n        \\label{qdt_fig:ex_cantilever_mesh_all}\n    \\end{figure}\n%\n%\n    \\begin{figure}[H]\n        \\centering\n        \\scalebox{0.6}{\n            \\includegraphics{quadtree/ex_images/qdt_cantilever_conv.eps}\n        }\n        \\caption[Convergence of the cantilever beam]{Convergence of the cantilever beam}\n        \\label{qdt_fig:ex_cantilever_mesh_conv}\n    \\end{figure}", "meta": {"hexsha": "fa43dd1b7fec8fcba5a01aa9fcaa3dddfcdd47ab", "size": 4756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quadtree/ex_cantilever.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "quadtree/ex_cantilever.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quadtree/ex_cantilever.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.5306122449, "max_line_length": 274, "alphanum_fraction": 0.6818755257, "num_tokens": 1418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879992, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5763384747937355}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\newcommand{\\ZZ}{\\mathbf{Z}}\n\\newcommand{\\Z}{\\mathbf{Z}}\n\\newcommand{\\FF}{\\mathbf{F}}\n\n\\begin{document}\n\n\\title{Some notes about an algorithm for computing Bernoulli numbers mod $p$}\n\\author{David Harvey}\n\\maketitle\n\nConceptually the algorithm has two parts. The first part involves an expression for the Bernoulli numbers in terms of distributions on $\\Z_p$ from Lang's ``Cyclotomic Fields'' (chapter 2, Theorem 2.3), which in down-to-earth terms says that\n    $$ B_k/k = \\frac1{1 - g^k} \\sum_{x \\in \\Z/p\\Z} x^{k-1} h(x)  \\pmod p $$\nwhere $g$ is a generator of $(\\Z/p\\Z)^*$, and where $h$ is\n    $$ h(x) = \\left\\{ \\frac xp \\right\\} - g \\left\\{ \\frac{g^{-1}x}p \\right\\} + \\frac{g-1}2. $$\n($\\{ \\cdot \\}$ denotes fractional part.) Substituting $x = g^j$, using the fact that $h(g^j)/g^j$ has period $(p-1)/2$ as a function of $j$, and since we are only interested in even $k$, we have\n    $$ B_{2k} = \\frac{4k}{1 - g^{2k}} \\sum_{j=0}^{(p-3)/2} g^{2jk}\\frac{h(g^j)}{g^j} \\pmod p. $$\nThe sum on the right is a \\emph{number-theoretic transform} of length $(p-1)/2$. The second part of the algorithm involves evaluating this transform by using \\emph{Bluestein's trick}: any transform of the form\n   $$ b_k = \\sum_{j=0}^{(p-3)/2} g^{2jk} a_j $$\ncan be rewritten using the identity $2jk = k^2 + j^2 - (k-j)^2$ as\n   $$ b_k = g^{k^2} \\sum_{j=0}^{(p-3)/2} c_{k-j} d_j,$$\nwhere $c_j = g^{-j^2}$ and $d_j = g^{j^2} a_j$. This last sum is a convolution, and so is tantamount to computing the product of the polynomials\n        $$ F(X) = \\sum_{j=-(p-3)/2}^{(p-3)/2} c_j X^j \\quad \\text{and} \\quad  G(X) = \\sum_{j=0}^{(p-3)/2} d_j X^j. $$\nIn fact one checks that $c_{j+(p-1)/2} = (-1)^{(p-1)/2} c_j$, so\n   $$ F(X) = 1 + \\left(1 + (-1)^{(p-1)/2}X^{-(p-1)/2}\\right) \\sum_{j=1}^{(p-3)/2} c_j X^j; $$\nthis observation reduces the problem to multiplying two polynomials of length $(p-1)/2$, which we do using NTL.\n\nThis algorithm seems closely related to the one described in ``Irregular primes and cyclotomic invariants to 12 million'' (Buhler et al). What they do can be interpreted as evaluating the same number-theoretic transform that we do here, but they evaluate it using completely different methods. In particular they take advantage of the factorisation of $p-1$ to improve the running time and reduce memory requirements.\n\n\n\\section*{Further improvements?}\n\nHere's an interesting question, which someone (else) might want to think about someday. In the paper by Buhler et al where they do the power series inversion method, they explain how to use something called ``multisectioning'' to reduce to inversion of a series of length $p/8$ (plus a few multiplications), instead of inversion of length $p/2$ which my first piece of code does. The idea is to use some slightly more tricky series, kind of analogous to using $x^2/(\\cosh x - 1)$ instead of $x/(e^x - 1)$ to remove all the even index terms, but even trickier. (That should be implemented in SAGE sometime too.) Now this *suggests* that there might be a way to improve my second algorithm to a few multiplications of length $p/8$, by rewriting the number-theoretic transform in some clever way. 
I haven't thought about how this might work, but I wouldn't be at all surprised if such an improvement was possible, and I bet it would be faster than their power series method.\n\n\n\n\\end{document}\n", "meta": {"hexsha": "035a09f132491712d0f18508306e5085c4067629", "size": 3429, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/sage/misc/notes/bernoulli_mod_p.tex", "max_stars_repo_name": "bopopescu/sage", "max_stars_repo_head_hexsha": "2d495be78e0bdc7a0a635454290b27bb4f5f70f0", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-01-04T07:15:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T15:15:18.000Z", "max_issues_repo_path": "src/sage/misc/notes/bernoulli_mod_p.tex", "max_issues_repo_name": "Ivo-Maffei/sage", "max_issues_repo_head_hexsha": "467fbc70a08b552b3de33d9065204ee9cbfb02c7", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-10-30T13:40:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-23T12:13:30.000Z", "max_forks_repo_path": "src/sage/misc/notes/bernoulli_mod_p.tex", "max_forks_repo_name": "dimpase/sage", "max_forks_repo_head_hexsha": "468f23815ade42a2192b0a9cd378de8fdc594dcd", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-09-28T13:12:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T09:28:34.000Z", "avg_line_length": 81.6428571429, "max_line_length": 971, "alphanum_fraction": 0.6923301254, "num_tokens": 1108, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5763384701489892}}
{"text": "\\subsection*{FGSM}\n\\paragraph{Targeted:} \n$x' := x - \\epsilon \\cdot \\sign(\\nabla_x \\loss_\\text{target}(x))$\n\n\\paragraph{Untargeted:}\n$x' := x + \\epsilon \\cdot \\sign(\\nabla_x \\loss_\\text{label}(x))$\n\n\\subsection*{Carlini-Wagner (Minimize Perturbation)}\n\\paragraph{Opt. Prob.:} \\texttt{find} $\\eta$ \\texttt{minimize} $\\|\\eta\\|_p$ \\texttt{s.t.} $f(x+\\eta) =t, x+\\eta \\in [0,1]^n$\n\\paragraph{Relaxed:} \\texttt{find} $\\eta$ \\texttt{minimize} $\\|\\eta\\|_p + c \\cdot \\obj_t(x+\\eta)$ \\texttt{s.t.} $x+\\eta \\in [0,1]^n$\n\nWith $\\obj_t(x+\\eta) \\le 0 \\Rightarrow f(x+\\eta) = t$ (e.g., $\\loss_t(x) - 1 = -\\log_C(p(x)_t) - 1$ or $\\max(0, 0.5- p(x)_t)$)\n\nWhen using $L_\\infty$, gradient of $\\|\\eta\\|_\\infty$ is zero at all non-max entries $\\rightarrow$ use $L(\\eta) = \\sum_i \\max(0, \\lvert \\eta_i \\rvert - \\tau)$ instead; Start with $\\tau=1$, update $\\eta$ $K$ times, if $L(\\eta) = 0$ decrease $\\tau$ and repeat, otherwise stop and return previous $\\eta$. \n\n\\subsection*{PGD}\n\\algrenewcommand{\\algorithmicfunction}{\\textbf{def}}\n\n\\begin{algorithmic}\n\\Function{PGD}{$x, y, k, \\epsilon_\\text{step}, \\epsilon$}\n\\State $x' \\gets x + \\eta$ for random $\\eta$ with $\\|\\eta\\|_\\infty \\le \\epsilon$\n\\For{$i = 1, \\ldots, k$}\n    \\State $g \\gets \\nabla_{x'} \\loss(f(x'), y)$ \\Comment{uFGSM($x'$, $y$)}\n    \\State $x' \\gets x' +  \\epsilon_\\text{step} \\cdot \\sign(g)$ \\Comment{uFGSM($x'$, $y$)}\n    \\State $x' \\gets x + \\max(\\min(x'-x, \\epsilon), -\\epsilon)$\n    \\State Clip $x'$ to input domain \\Comment{e.g., $[0,1]^n$}\n\\EndFor\n\\State \\Return $x'$\n\\EndFunction\n\\end{algorithmic}\n\nFor general norm $\\|\\cdot\\|$, use\n\\begin{algorithmic}\n\\State $x' \\gets x' + \\epsilon_\\text{step} \\cdot \\frac{g}{\\|g\\|}$ \\Comment{dir of $g$, not sign}\n\\If{$\\|x'-x\\|_p > \\epsilon$} \\State $x' \\gets x + \\epsilon \\frac{x'-x}{\\|x'-x\\|}$ \\EndIf\n\\end{algorithmic}\n\n\\subsection*{Diffing Networks}\n$\\obj_t(x) := f(x)_t - g(x)_t$ (or abs. diff. 
\n\n\\subsection*{Diffing Networks}\n$\\obj_t(x) := f(x)_t - g(x)_t$ (or abs. diff. of prob.)\n\n\\begin{algorithmic}\n\\Function{Diff\\_nets}{$f, g, \\epsilon$}\n\\State Select $x$ classified as $t$ by both $f$ and $g$\n\\While{$\\class(f(x)) = \\class(g(x))$}\n    \\State $x \\gets x + \\epsilon \\cdot \\nabla_x \\obj_t(x)$ \\Comment{Make $f$ more confident about $t$ while making $g$ less confident}\n\\EndWhile\n\\State \\Return $x$\n\\EndFunction\n\\end{algorithmic}", "meta": {"hexsha": "f2fe31c415f6233cba51d72660d2c5d4173a6eae", "size": 2247, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "adversarial-attacks.tex", "max_stars_repo_name": "cknabs/RIAI-summary-HS2020", "max_stars_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-20T21:27:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-24T20:28:56.000Z", "max_issues_repo_path": "adversarial-attacks.tex", "max_issues_repo_name": "cknabs/RIAI-summary-HS2020", "max_issues_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-25T09:29:16.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-25T10:50:09.000Z", "max_forks_repo_path": "adversarial-attacks.tex", "max_forks_repo_name": "cknabs/RIAI-summary-HS2020", "max_forks_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.8571428571, "max_line_length": 301, "alphanum_fraction": 0.6186025812, "num_tokens": 887, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.870597265050901, "lm_q2_score": 0.6619228825191871, "lm_q1q2_score": 0.576268251195813}}
{"text": "\\documentclass[12pt]{article}\r\n\\usepackage[english]{babel}\r\n%\\usepackage{subcaption}\r\n\\usepackage{hyperref}\r\n\\usepackage{graphicx}\r\n\\usepackage{amsmath}\r\n\\graphicspath{{images/}}\r\n\\usepackage{geometry}\r\n \\geometry{\r\n a4paper,\r\n total={170mm,257mm},\r\n left=20mm,\r\n top=20mm,\r\n }\r\n\\begin{document}\r\n\\section{Implementation of Lucas-Kanade (LK) template tracker} \r\n\\subsection{Introduction}\r\nIn this project we tracked an object or a human being through out the video using Lucas-Kanade (LK) algorithm.\r\n\r\n\\subsection{Lucas Kanade Algorithm}\r\nThe goal of the Lucas-Kanade algorithm is to minimize the sum of squared error between two\r\nimages, the template T and the image I warped back onto the coordinate frame of the template.\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=4cm]{error1}\r\n    \\caption{Mean Square error}\r\n    \\label{fig:Mean Square error}\r\n\\end{figure}\r\nWarping I back to compute I (W(x; p)) requires interpolating the image I at the sub-pixel locations W(x; p). The minimization in Equation (3) is performed with respect to p and the sum is\r\nperformed over all of the pixels x in the template image T (x). Minimizing the expression in Equation (1) is a non-linear optimization task even if W(x; p) is linear in p because the pixel values\r\nI (x) are, in general, non-linear in x. In fact, the pixel values I (x) are essentially un-related to\r\nthe pixel coordinates x. To optimize the expression in Equation (3), the Lucas-Kanade algorithm\r\nassumes that a current estimate of p is known and then iteratively solves for increments to the\r\nparameters $\\Delta$p; i.e. the following expression is (approximately) minimized:\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=5cm]{error2}\r\n    \\caption{Warpped Mean Square error}\r\n    \\label{fig:Warpped Mean Square error}\r\n\\end{figure}\r\n\r\nwith respect to $\\Delta$p, and then the parameters are update\r\n\\begin{equation}\r\np = p + \\Delta p\r\n\\end{equation}\r\n\r\nThese two steps are iterated until the estimates of the parameters p converge. Typically the test for\r\nconvergence is whether some norm of the vector $\\Delta$p is below a threshold $\\varepsilon$; i.e. $||\\Delta p|| \\leq \\varepsilon $ \r\n\\subsection{Pipeline followed}\r\n\\begin{enumerate}\r\n\\item First step is to crop the template out of the video which we want to track through out the video. The template ($T(x)$) is extracted from the first frame of every video. For this project the following are the templates considered.\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=2cm]{babytemplate}\r\n    \\caption{Dragon Baby template}\r\n    \\label{fig:Dragon Baby template}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=6cm]{cartemplate}\r\n    \\caption{Car template}\r\n    \\label{fig:Car template}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=1cm]{bolttemplate}\r\n    \\caption{Bolt template}\r\n    \\label{fig: Bolt template}\r\n\\end{figure}\r\n\r\n\\item The next step is to perfrom affine transform that warps the current frame so that the template in the first frame is aligned with the warped current frame. The affine transform takes care of the change in scale of the template in the current frame. 
The obtained image is denoted by $I(W(x;p))$.\r\n\r\n\\item The next step is to compute the error between the warped image and the template.\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=4cm]{error}\r\n    \\caption{Error}\r\n    \\label{fig: Error}\r\n\\end{figure}\r\n\r\n\\item The next step is computing the gradient $\\nabla$I of the warped image $I(W(x;p))$. The gradient of the image is computed w.r.t. both x and y using the Sobel filter.\r\n\\begin{equation}\r\n \\nabla I = \\left(\\frac{\\partial I}{\\partial x}, \\frac{\\partial I}{\\partial y}\\right)\r\n\\end{equation}\r\n\\item Then the Jacobian is computed at $(x;p)$. The Jacobian is computed for every point in the warped image. The Jacobian is given by:\r\n\\begin{equation}\r\n\\frac{\\partial W}{\\partial p} = \r\n\\begin{pmatrix}\r\nx & 0 & y & 0 & 1 & 0 \\\\\r\n0 & x & 0 & y & 0 & 1\r\n\\end{pmatrix}\r\n\\end{equation}\r\n\\item The next step is to compute the steepest descent images, which are given by $\\nabla I*\\frac{\\partial W}{\\partial p}$.\r\n\r\n\\item Then we need to compute the Hessian matrix. The Hessian matrix is given by\r\n\\begin{equation}\r\nH = \\displaystyle\\sum_{x}\\left[\\nabla I*\\frac{\\partial W}{\\partial p}\\right]^T*\\left[\\nabla I*\\frac{\\partial W}{\\partial p}\\right]\r\n\\end{equation}\r\n\r\n\\item Finally we compute $\\Delta p$, the update to the parameters of the affine transformation matrix, which shifts the bounding box of the template to bound the object in the current frame.\r\n\\begin{equation}\r\n\\Delta p = H^{-1}\\displaystyle\\sum_{x}\\left[\\nabla I*\\frac{\\partial W}{\\partial p}\\right]^T*\\left[T(x) - I(W(x;p))\\right]\r\n\\end{equation}\r\n \\item We need to iterate this process until $\\Delta p$ converges below a small threshold value; a code sketch of one such iteration follows this list.\r\n\\end{enumerate}
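\r\nAs an illustration of steps 3 to 8, the following is a minimal NumPy sketch of a single LK update for an affine warp. The warped image and its Sobel gradients are assumed to be computed elsewhere (e.g., with OpenCV), and the parameter ordering follows the Jacobian above; this is a sketch, not the exact project code.\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef lk_affine_step(T, I_w, Ix, Iy, xs, ys):\r\n    # T: template; I_w: I(W(x;p)); Ix, Iy: gradients of I_w;\r\n    # xs, ys: template pixel coordinate grids (same shape as T)\r\n    error = (T - I_w).ravel()\r\n    # steepest descent images, grad(I) * dW/dp, one row per pixel\r\n    sd = np.stack([Ix*xs, Iy*xs, Ix*ys, Iy*ys, Ix, Iy],\r\n                  axis=-1).reshape(-1, 6)\r\n    H = sd.T @ sd                          # Hessian\r\n    dp = np.linalg.solve(H, sd.T @ error)  # parameter update\r\n    return dp                              # then p <- p + dp\r\n\\end{verbatim}\r\n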
\r\nThe figure below summarizes the algorithm:\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=13cm]{lucas}\r\n    \\caption{Lucas Kanade Algorithm}\r\n    \\label{fig: Lucas Kanade Algorithm}\r\n\\end{figure}\r\n\r\n\\section{Output images by implementing Lucas Kanade}\r\n\\subsection{Tracking Bolt}\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbolt1}\r\n    \\caption{Tracking Bolt}\r\n    \\label{fig:Tracking Bolt}\r\n\\end{figure}\r\n\\newpage\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbolt2}\r\n    \\caption{Tracking Bolt}\r\n    \\label{fig:Tracking Bolt 2}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackcar1}\r\n    \\caption{Tracking Car}\r\n    \\label{fig:Tracking Car}\r\n\\end{figure}\r\n\\newpage\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackcar2}\r\n    \\caption{Tracking Car}\r\n    \\label{fig:Tracking Car 2}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbaby1}\r\n    \\caption{Tracking Baby}\r\n    \\label{fig:Tracking Baby}\r\n\\end{figure}\r\n\\newpage\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbaby2}\r\n    \\caption{Tracking Baby}\r\n    \\label{fig:Tracking Baby 2}\r\n\\end{figure}\r\n\\section{Evaluation of the tracker}\r\nSince a Taylor approximation is being used, the tracker will work best when \r\n\\begin{enumerate}\r\n  \\item The approximation made is close to the template. This means that when the object moves very fast between successive frames, the errors induced in the Taylor approximation become huge and the tracker breaks down.\r\n  \\item The brightness of the tracked object remains the same as that of the template. \r\n\\end{enumerate}\r\n\r\n\\textbf{Car tracker} -\r\nThe tracker breaks down in the case of the car video because the intensity of the frames changes continuously. When the car goes into the shadow, the tracker loses the car because the intensity level changes drastically between frames. \r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackcar3}\r\n    \\caption{Tracking error}\r\n    \\label{fig:Tracking error}\r\n\\end{figure}\r\n\r\n\\textbf{Baby tracker} -\r\nThe tracker breaks down for the baby as the change between successive frames is fairly large. The baby is just as fast as, or probably \r\nfaster than, Usain Bolt. \\\\\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbaby4}\r\n    \\caption{Tracking error}\r\n    \\label{fig:Tracking error 2}\r\n\\end{figure}\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbaby5}\r\n    \\caption{Tracking error}\r\n    \\label{fig:Tracking error 3}\r\n\\end{figure}\r\n\r\n\\textbf{Bolt tracker} -\r\nThe tracker breaks for Usain Bolt as the template taken from the first frame (\\textbf{Bolt in a crouched position}) does not match his appearance in later frames (\\textbf{Bolt upright}). Template correction (that is, updating the template with the next frame) can be applied, but the downside is that if an error is made, it propagates and affects the remaining frames.\r\n\r\nAll in all, the Usain Bolt tracking is the best of the three, as the shift between successive frames is fairly small and the brightness remains constant.\r\n\\newpage\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbolt3}\r\n    \\caption{Bolt in crouched position}\r\n    \\label{fig:Bolt in crouched position}\r\n\\end{figure}\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=12cm]{trackbolt4}\r\n    \\caption{Tracking error in upright position}\r\n    \\label{fig:Tracking error in upright position}\r\n\\end{figure}\r\n\r\n\\section{Robustness to Illumination}\r\nIn order to increase the robustness of the tracker, we need to scale the brightness of pixels in each frame so that the average brightness of pixels in the tracked region stays the same as the average brightness of pixels in the template. \r\n\r\n\\textbf{Car tracker}: For tracking the car, gamma correction did not give better results, so we instead employed the z-score method. With z-scores, the means of the template image and the current frame are compared, and pixel values in the current frame are shifted by a certain z-score relative to the mean of the template. \r\n\\begin{enumerate}\r\n\\item First the standard deviation of all the frames of the video is computed w.r.t. the mean of the frames.\r\n\\newpage\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=6cm]{zscore}\r\n    \\caption{Standard Deviation}\r\n    \\label{fig:Standard Deviation}\r\n\\end{figure}\r\n\\item Then the z-scores are computed. 
The z-score measures how far a pixel value is from the mean of the template, in units of the standard deviation of the frames.\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=8cm]{zscore1}\r\n    \\caption{Z-score formula}\r\n    \\label{fig:Z- score formula}\r\n\\end{figure}\r\n\\item Finally, we shift every pixel in the frame based on its z-score.\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=6cm]{zscore2}\r\n    \\caption{Updated pixels formula}\r\n    \\label{fig:Updated pixels formula}\r\n\\end{figure}\r\n\\end{enumerate}\r\nThe following are the improved outputs after employing z-scores:\r\n\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=11cm]{trackcar4}\r\n    \\caption{Tracking with z-scores}\r\n    \\label{fig:Tracking with z-scores}\r\n\\end{figure}\r\n\\begin{figure}[h]\r\n    \\centering\r\n    \\includegraphics[width=11cm]{trackcar5}\r\n    \\caption{Tracking with z-scores}\r\n    \\label{fig:Tracking with z-scores 2}\r\n\\end{figure}\r\n\\newpage\r\n\\section{Team Members:}\r\n1. Eashwar Sathyamurthy \\\\\r\n2. Akwasi A Obeng \\\\\r\n3. Achal P Vyas\r\n\\section{Output Videos:}\r\nTo access the output videos please use this \\href{https://drive.google.com/drive/folders/1nbAUx-p-eyOWts9neJgAPA8YATY4ekmI?usp=sharing}{\\underline{link}}.\r\n\r\n\\section{References}\r\n\\href{https://www.statisticshowto.com/probability-and-statistics/z-score/}{\\underline{Z-score}}.\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "05e1f2ea6363567114be55327019f0442ae1f080", "size": 10836, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/report.tex", "max_stars_repo_name": "mesneym/Lucas-Kanade-Tracker", "max_stars_repo_head_hexsha": "e3c4d3a7224c37b0ac3aea6eca4c7a8a84e99ace", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/report.tex", "max_issues_repo_name": "mesneym/Lucas-Kanade-Tracker", "max_issues_repo_head_hexsha": "e3c4d3a7224c37b0ac3aea6eca4c7a8a84e99ace", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/report.tex", "max_forks_repo_name": "mesneym/Lucas-Kanade-Tracker", "max_forks_repo_head_hexsha": "e3c4d3a7224c37b0ac3aea6eca4c7a8a84e99ace", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.282527881, "max_line_length": 379, "alphanum_fraction": 0.7259136213, "num_tokens": 2889, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5762624670790969}}
{"text": "\\documentclass{report}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{listings}\n\\begin{document}\n\\section{PCA}\n\\subsection{Abstract}\nIn this issue, we begin to learn the algorithm of dimension reduction.\\\\\\\\\nAs we know, dimension reduction is the best way to solve the problem of over fitting, in addition to increasing data and regularization.\\\\\\\\\nIn fact, the predecessors had encountered dimensional disasters earlier. We know that the volume of the $n $ dimensional sphere is $CR^n$\\\\\\\\\nTherefore, the ratio of the volume of the sphere to the $n$ dimensional hypercube is\n$$\n\\lim_{n \\to +\\infty}=\\frac{CR^n}{2^n R^n}=0\n$$\nFrom the formula, we can see that in high-dimensional data, the distribution of samples is quite sparse, and the interior of hypercube is almost hollow, so it is more difficult to model the data. This is the so-called dimensional disaster.\n\nThe dimensionality reduction methods are divided into:\n\\begin{itemize}\n\t\\item Direct dimensionality reduction, feature selection\n\t\\item Linear dimensionality reduction, PCA, MDS etc.\n\t\\item Piecewise linear, manifolds include Isomap, LLE etc.\n\\end{itemize}\n\\subsection{Idea}\nAs for the core idea of PCA, the teacher summarized a line: one center, two basic points\n\\begin{itemize}\n\t\\item one center:\n\t\\begin{itemize}\n\t\\item to transform each feature that may be linearly related into a set of linearly independent features via an orthogonal transformation.\n\t\\item That is, to reconstruct the original feature space.\n\t\\end{itemize}\n\t\\item Two basic points:\n\t\\begin{itemize}\n\t\\item Maximum Projection Variance\n\t\\begin{itemize}\n\t\\item Make the data more dispersed in the reconstructed feature space (because the original data is clustered together and scattered in corners)\n\t\\end{itemize}\n\t\\item Minimum Reconfiguration Distance\n\t\\begin{itemize}\n\t\\item to minimize the loss of information (i.e., fewer components of the complementary space) after the data has been reconstructed\n\t\\end{itemize}\n\t\\end{itemize}\n\\end{itemize}\n\\subsection{Algorithm}\nwe'll mainly talk about the first basic point: maximum projection variance. In fact, the two basic points mean the same thing, but interpret a center from different angles.\\\\\\\\\nFirstly,we are to review the projection. We have talked about projection before, and the same is true here. Let's assume sample $x_ i $, a base vector $u_ i $, assuming $u_ i^Tu_ i = 1 $, so you can get the projection of the sample in  this dimension $u_ i $ is\n$$\nproject_i=x_i^Tu_i\n$$\nAfter orthogonal transformation, the sample originally has $p $ feature dimensions. Because we need to reduce the dimension, we only take the first $q $ features, and these $q $ features are linearly independent. 
\nThe dimensionality reduction methods are divided into:\n\\begin{itemize}\n\t\\item Direct dimensionality reduction, e.g. feature selection\n\t\\item Linear dimensionality reduction, e.g. PCA, MDS etc.\n\t\\item Piecewise linear (manifold) methods, e.g. Isomap, LLE etc.\n\\end{itemize}\n\\subsection{Idea}\nAs for the core idea of PCA, the teacher summarized it in one line: one center, two basic points.\n\\begin{itemize}\n\t\\item One center:\n\t\\begin{itemize}\n\t\\item to transform features that may be linearly related into a set of linearly independent features via an orthogonal transformation.\n\t\\item That is, to reconstruct the original feature space.\n\t\\end{itemize}\n\t\\item Two basic points:\n\t\\begin{itemize}\n\t\\item Maximum projection variance\n\t\\begin{itemize}\n\t\\item Make the data as dispersed as possible in the reconstructed feature space (the original data is clustered together, tucked into a corner of the space).\n\t\\end{itemize}\n\t\\item Minimum reconstruction distance\n\t\\begin{itemize}\n\t\\item Minimize the loss of information (i.e., the components in the complementary space) after the data has been reconstructed.\n\t\\end{itemize}\n\t\\end{itemize}\n\\end{itemize}\n\\subsection{Algorithm}\nWe will mainly talk about the first basic point: maximum projection variance. In fact, the two basic points say the same thing; they just interpret the one center from different angles.\\\\\\\\\nFirst, let us review projections. Take a sample $x_i$ and a basis vector $u_j$ with $u_j^T u_j = 1$; the projection of the sample onto this direction is\n$$\nproject_{ij}=x_i^Tu_j\n$$\nAfter the orthogonal transformation the sample still has $p$ feature dimensions. Because we need to reduce the dimension, we only keep the first $q$ directions, and these $q$ directions are linearly independent. Therefore, the squared projections onto these directions can simply be summed to measure how dispersed the sample is in the new feature space.\\\\\\\\\nNote that the data is centered before the projection, so the mean of the data becomes zero and the variance of the projections can be written directly as a sum of squares.\\\\\nTo sum up, we get the objective function:\n$$\nJ=\\frac{1}{N} \\sum_{i=1}^{N} \\sum_{j=1}^{q}\\left(\\left(x_{i}-\\bar{x}\\right)^{T} u_{j}\\right)^{2}\n$$\nThe objective function is derived below:\\\\\nsince the shape of $((x_i-\\bar{x})^Tu_j)$ is $(1,p)\\times(p,1)=(1,1)$, it can be transposed:\n$$\n\\begin{aligned}\nJ&=\\frac{1}{N} \\sum_{i=1}^{N} \\sum_{j=1}^{q}\\left(\\left(x_{i}-\\bar{x}\\right)^{T} u_{j}\\right)^{2}\\\\\n&=\\frac{1}{N} \\sum_{i=1}^{N} \\sum_{j=1}^{q}(u_{j}^T(x_{i}-\\bar{x}))^{2}\\\\\n&=\\frac{1}{N} \\sum_{i=1}^{N} \\sum_{j=1}^{q}u_{j}^T(x_{i}-\\bar{x})(x_{i}-\\bar{x})^T u_j\\\\\n&=\\sum_{j=1}^{q} u_{j}^T\\left(\\frac{1}{N} \\sum_{i=1}^{N} (x_{i}-\\bar{x})(x_{i}-\\bar{x})^T\\right) u_j\\\\\n&=\\sum_{j=1}^q u_j^T S u_j\\\\\n\\end{aligned}\n$$\nDon't forget that we have another condition: $s.t.\\ u_j^T u_j=1$\\\\\\\\\nSo we can use a Lagrange multiplier:\n$$\n\\underset{u_{j}}{\\operatorname{argmax}} L\\left(u_{j}, \\lambda\\right)=\\underset{u_{j}}{\\operatorname{argmax}} u_{j}^{T} S u_{j}+\\lambda\\left(1-u_{j}^{T} u_{j}\\right)\n$$\nDifferentiating the above:\n$$\n\\frac{\\partial L}{\\partial u_j}=2S u_j -2\\lambda u_j=0\n$$\nwe get:\n$$\nS u_j = \\lambda u_j\n$$\nYou can see that the transformed basis vectors are in fact eigenvectors of the covariance matrix $S$, and $\\lambda$ is the corresponding eigenvalue of $S$.\\\\\nIn fact, the computation of the covariance matrix can also be simplified:\n$$\n\\begin{aligned}\nS &=\\frac{1}{N} \\sum_{i=1}^{N}\\left(x_{i}-\\bar{x}\\right)\\left(x_{i}-\\bar{x}\\right)^{T} \\\\\n&=\\frac{1}{N}\\left(x_{1}-\\bar{x}, x_{2}-\\bar{x}, \\cdots, x_{N}-\\bar{x}\\right)\\left(x_{1}-\\bar{x}, x_{2}-\\bar{x}, \\cdots, x_{N}-\\bar{x}\\right)^{T} \\\\\n&=\\frac{1}{N}\\left(X^{T}-\\frac{1}{N} X^{T} I_{N} I_{N}^{T}\\right)\\left(X^{T}-\\frac{1}{N} X^{T} I_{N} I_{N}^{T}\\right)^{T} \\\\\n&=\\frac{1}{N} X^{T}\\left(E_{N}-\\frac{1}{N} I_{N} I_{N}^T\\right)\\left(E_{N}-\\frac{1}{N} I_{N} I_{N}^T\\right)^{T} X \\\\\n&=\\frac{1}{N} X^{T} H_{N} H_{N}^{T} X \\\\\n&=\\frac{1}{N} X^{T} H_{N} H_{N} X=\\frac{1}{N} X^{T} H X\n\\end{aligned}\n$$\n(the last steps use that $H$ is symmetric and idempotent, $H^T=H$ and $H^2=H$). Here, $H$ is a special matrix called the centering matrix:\n$$\nH=E_N - \\frac{1}{N}I_N I_N^T\n$$\nTherefore, in practice, we only need to build the covariance matrix using the above formula and then orthogonally decompose it to get the eigenvalues and eigenvectors.
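\nBefore the repository's own implementation below, the recipe above (center the data, build $S$, take the top-$q$ eigenvectors) can be sketched in a self-contained way with plain NumPy; the function name and test data are illustrative assumptions:\n\\begin{lstlisting}[language={python}]\nimport numpy as np\n\ndef pca(X, q):\n    # center the data, build S = (1/N) X^T H X, then take\n    # the top-q eigenvectors of S as the new basis u_1..u_q\n    Xc = X - X.mean(axis=0)\n    S = Xc.T @ Xc / X.shape[0]\n    vals, vecs = np.linalg.eigh(S)   # eigh: ascending eigenvalues\n    U = vecs[:, ::-1][:, :q]         # reorder to keep the largest q\n    return Xc @ U                    # coordinates x_i^T u_j\n\n# quick check on noisy samples along a line: one component suffices\nx = np.linspace(0, 10, 100)\ndata = np.c_[x + np.random.normal(scale=0.3, size=x.shape), 3 * x + 4]\nprint(pca(data, 1).shape)            # (100, 1)\n\\end{lstlisting}\n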
"2022-03-04T07:36:27.000Z", "max_issues_repo_path": "EN-TeX_files/DimensionReduction/11_dimension_reduction_PCA.tex", "max_issues_repo_name": "btobab/Machine-Learning-notes", "max_issues_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EN-TeX_files/DimensionReduction/11_dimension_reduction_PCA.tex", "max_forks_repo_name": "btobab/Machine-Learning-notes", "max_forks_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T18:47:22.000Z", "avg_line_length": 48.5350877193, "max_line_length": 332, "alphanum_fraction": 0.6998011928, "num_tokens": 1817, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7520125848754472, "lm_q1q2_score": 0.576262463310696}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Survival Analysis}\n\\label{chap:survival}\n\nSurvival analysis is a subfield of statistics which examines problems involving the timing of events\nsuch as death, component failure, or exiting a line of therapy after an adverse effect, in a population of subjects under study.\nUsing the appropriate models, quantities such as the lifetime, or failure rate, of a subject\ncan be estimated, as well as their dependence on other independent variables.\nA variety of common models are available,\neach with their own assumptions, capabilities, and limitations,\nwhich make them best suited for certain applications.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Nomenclature}\n\\label{survival:nomenclature}\n\n\\begin{symbollist}\n  \\item[Event] The event of interest in a study, \\eg death, component failure\\ldots\n  \\item[$t$] Time from the start of observation to an event, the conclusion of the study, or the withdrawal of a subject.\n  \\item[$T$] Time that an event occurred.\n  \\item[$x$] Independent variable(s) under consideration.\n  \\item[Censoring] Right\\footnote{Left censored subjects enter the study after the event of interest has already occurred at an unknown $t < 0$.} censored subjects have no events during the observation period, either due to early withdrawal or the conclusion of the study. The true lifetime of these subjects is unavailable for analysis, \\ie they have been censored.\n  \\item[$S\\left(t\\right)$] The survival function $S\\left(t\\right)$ is the probability that a subject survives longer than $t$, \\ie $S\\left(t\\right) = P\\left(T > t\\right)$.\n  \\item[$F\\left(t\\right)$] The lifetime distribution function $F\\left(t\\right)$ is the complement of the survival function, \\ie $F\\left(t\\right) = 1 - S\\left(t\\right) = P\\left(T \\leq t\\right)$.\n  \\item[$f\\left(t\\right)$] The event density function $f\\left(t\\right)$ is the time derivative of the lifetime distribution function $F\\left(t\\right)$, $f\\left(t\\right) = \\dv{F}{t} = -\\dv{S}{t}$, if it exists.\n  \\item[$\\lambda\\left(t\\right)$] The hazard function $\\lambda\\left(t\\right)$ is the event rate at $t$ conditional on survival to time $t$, \\ie $T > t$, see \\cref{eq:survival:hazard_def}. Any $\\lambda\\left(t\\right)$ can be a hazard function, provided it satisfies \\cref{eq:survival:hazard_cond}.\n  \\item[$\\Lambda\\left(t\\right)$] The cumulative hazard function $\\Lambda\\left(t\\right)$ is the integral of $\\lambda\\left(t\\right)$ with respect to time \\cref{eq:survival:cum_hazard:def}. Also see the relations in \\cref{eq:survival:cum_hazard:to_lambda,eq:survival:cum_hazard:to_S}.\n  \\item[HR] The hazard ratio (HR) compares the hazards of two subsets of subjects partitioned by $x$ at time $t$, $\\text{HR} = \\lambda\\left(x = 1\\right) / \\lambda\\left(x=0\\right)$. 
Also known as the relative risk.\n\\end{symbollist}\n\n\\begin{subequations}\\label{eq:survival:hazard_def}\n\\begin{align}\n\\lambda\\left(t\\right) \\dd{t} &= P\\left(T \\leq t + \\dd{t} \\mid T > t\\right),\\,\\text{as} \\,\\, \\dd{t} \\to 0 \\label{eq:survival:hazard_def:a} \\\\\n\\lambda\\left(t\\right) &= \\lim_{\\dd{t} \\to 0} \\frac{P\\left(T \\leq t + \\dd{t} \\cap T > t\\right)}{P\\left(T > t\\right)\\,\\dd{t}} \\label{eq:survival:hazard_def:b} \\\\\n&= \\frac{1}{S\\left(t\\right)} \\lim_{\\dd{t} \\to 0} \\frac{P\\left(t < T \\leq t + \\dd{t}\\right)}{\\dd{t}} \\label{eq:survival:hazard_def:c} \\\\\n&= \\frac{1}{S\\left(t\\right)} \\lim_{\\dd{t} \\to 0} \\frac{F\\left(t + \\dd{t}\\right) - F\\left(t\\right)}{\\dd{t}} \\label{eq:survival:hazard_def:d} \\\\\n&= \\frac{f\\left(t\\right)}{S\\left(t\\right)} = -\\frac{1}{S} \\dv{S}{t} \\label{eq:survival:hazard_def:e}\n\\end{align}\n\\end{subequations}\n\n\\begin{subequations}\\label{eq:survival:hazard_cond}\n\\begin{gather}\n\\forall t \\geq 0, \\, \\lambda\\left(t\\right) \\geq 0 \\label{eq:survival:hazard_cond:a} \\\\\n\\int_{0}^{\\infty} \\lambda\\left(t\\right) \\, \\dd{t} = \\infty \\label{eq:survival:hazard_cond:b}\n\\end{gather}\n\\end{subequations}\n\n\\begin{subequations}\\label{eq:survival:cum_hazard}\n\\begin{gather}\n\\Lambda\\left(t\\right) = \\int_{0}^{t} \\lambda\\left(u\\right) \\, \\dd{u} = - \\ln\\left(S\\left(t\\right)\\right) \\label{eq:survival:cum_hazard:def} \\\\\n\\lambda\\left(t\\right) = \\dv{\\Lambda}{t} = -\\frac{1}{S} \\dv{S}{t} \\label{eq:survival:cum_hazard:to_lambda} \\\\\nS\\left(t\\right) = \\exp\\left(-\\Lambda\\left(t\\right)\\right) \\label{eq:survival:cum_hazard:to_S}\n\\end{gather}\n\\end{subequations}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Kaplan-Meier Model}\n\\label{survival:km}\n\nThe Kaplan-Meier model \\cite{km} $\\hat{S}_{\\text{KM}}\\left(t\\right)$ is a non-parametric\nestimate of $S\\left(t\\right)$ computed from empirical data.\n\n\\begin{equation}\\label{eq:survival:km}\n\\hat{S}_{\\text{KM}}\\left(t\\right) = \\prod_{i:\\,t_{i} < t} \\left(1 - \\frac{d_{i}}{n_{i}}\\right)\n\\end{equation}\n\n\\noindent Here the product is over all times $t_{i}$ at which $d_{i}$ events occurred,\nand $n_{i}$ is the number of subjects still under study at $t_{i}$,\n\\ie subjects who have not had an event or been censored.\nDue to its construction, $\\hat{S}_{\\text{KM}}\\left(t\\right)$ remains steady\nbetween $t_{i}$ but drops vertically at each data point.\nTherefore the derivative does not exist and we can not estimate $f\\left(t\\right)$ or $\\lambda\\left(t\\right)$.\nSee \\cref{fig:stanford_km:km} for one example of a Kaplan-Meier curve.\n\nWe can apply the Kaplan-Meier estimator to different subsets of subjects partitioned by a categorical variable $x$\nin order to understand the dependence of $S\\left(t\\right)$ on $x$.\nNote that $x$ can not be a continuous variable, and must be binned to integer classes if that is the case.\n
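\nAs a concrete companion to \\cref{eq:survival:km}, here is a minimal \\python sketch of the estimator; the input arrays are assumed to hold each subject's follow-up time and an event indicator (1 = event, 0 = censored):\n\n\\begin{lstlisting}[language=python]\nimport numpy as np\n\ndef kaplan_meier(time, event):\n    # product-limit estimate: S(t) = prod over event times t_i\n    # of (1 - d_i / n_i), with n_i subjects still at risk at t_i\n    time, event = np.asarray(time), np.asarray(event)\n    ts = np.unique(time[event == 1])\n    s, surv = 1.0, []\n    for t in ts:\n        n_i = np.sum(time >= t)                   # at risk at t_i\n        d_i = np.sum((time == t) & (event == 1))  # events at t_i\n        s *= 1.0 - d_i / n_i\n        surv.append(s)\n    return ts, np.array(surv)\n\\end{lstlisting}\n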
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Exponential Model}\n\\label{survival:exp}\n\nThe exponential model is a parametric estimate of $S\\left(t\\right)$\nvalid when the hazard is expected to be constant with respect to $t$.\nIn nuclear physics\\footnote{$\\dv{N}{t} = -\\lambda N$, $N\\left(t\\right) = N_{0} e^{-\\lambda t}$,\nhalf-life $t_{1/2} = \\ln\\left(2\\right) / \\lambda = \\tau \\ln\\left(2\\right)$, where $\\tau$ is the time constant.} the decay rate $\\lambda\\left(t\\right) = \\lambda$,\nin decays per unit time, really is a constant.\nHowever, in most situations $\\lambda = c$ is unrealistic over longer time scales,\nas illustrated in \\cref{survival:additional:bathtub}.\nThe exponential model can accommodate both categorical and continuous $x$ independent variables.\n\nThe survival function for the exponential model is simply\n\n\\begin{equation}\\label{eq:survival:exp}\n\\hat{S}_{\\text{Exp}}\\left(t\\right) = e^{-\\lambda t}\n\\end{equation}\n\n\\noindent where the hazard can be expanded in terms of $\\vb{x}$, $\\vb*{\\beta}$ as in a regression analysis:\n\n\\begin{equation}\\label{eq:survival:exp_lambda}\n\\begin{aligned}\n\\lambda\\left(x\\right) &= \\exp\\left(\\beta_{0} + \\sum_{j=1}^{n}\\, \\beta_{j} x_{j}\\right) \\\\\n\\log\\left(\\lambda\\right) &= \\innerproduct{\\vb{x}}{\\vb*{\\beta}}\n\\end{aligned}\n\\end{equation}\n\n\\noindent Here we are making the {\\em proportional hazards assumption},\n\\ie the differences in $\\lambda$ between subgroups in $x_{j}$\nare proportional\\footnote{Really $\\log\\left(\\lambda\\right) \\propto \\beta$.} to $\\beta_{j}$\nand constant over $t$.\nTo find the best estimate $\\hat{\\vb*{\\beta}}$,\n$\\lambda$ is used to derive a likelihood function which is\nthen optimized\\footnote{The exact details of this optimization\nare handled by common software libraries and are omitted here.\nSimilar approaches are also used to fit the later Weibull and Cox proportional-hazards models.}.\n\nThe hazard ratio for $x_{j}$ is:\n\n\\begin{equation}\\label{eq:survival:exp_HR}\n\\begin{aligned}\n\\text{HR}_{\\text{Exp}} &= e^{\\ldots+\\beta_{j} 1+\\ldots} / e^{\\ldots+\\beta_{j} 0+\\ldots} \\\\\n&= e^{\\beta_{j}}\n\\end{aligned}\n\\end{equation}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Weibull Model}\n\\label{survival:weibull}\n\nThe Weibull model expands on the standard exponential model\nby introducing a shape parameter $k$ to adjust the time dependence of the hazard:\n\n\\begin{equation}\\label{eq:survival:weibull}\n\\begin{aligned}\n\\hat{S}_{\\text{Weibull}}\\left(t\\right) &= e^{-\\lambda_{\\text{Weibull}} t} \\\\\n\\lambda_{\\text{Weibull}} &= \\exp\\left(\\beta_{0} t^{k} + \\sum_{j=1}^{n}\\, \\beta_{j} x_{j}\\right) \\\\\n\\text{HR}_{\\text{Weibull}} &= e^{\\beta_{j}}\n\\end{aligned}\n\\end{equation}\n\n\\noindent Here $\\beta_{0}$ is the scale parameter of the hazard with respect to $t$,\nwhile the other $\\beta_{j}$ are scale parameters for the independent $x_{j}$.\nFor $k<0$ ($k>0$) the hazard monotonically decreases (increases) over time,\nwhile $k=0$ returns the exponential model.\nThis allows phenomena such as burn in and compounding component failures to be modeled.\nNumerous parameterizations and expansions of the Weibull model are available,\nhowever they all share the property that\n$\\lambda = \\exp\\left(\\beta_{0} f\\left(t\\right) + \\sum_{j=1}^{n}\\, \\beta_{j} x_{j}\\right)$\nwhere $f\\left(t\\right)$ is a {\\em known}, almost always monotonic, function of $t$.\n
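\nA quick numerical illustration of the constant-hazard case: with right censoring, the maximum likelihood estimate is $\\hat{\\lambda} = \\left(\\text{number of events}\\right) / \\left(\\text{total time at risk}\\right)$, a standard result. A \\python sketch with the same (time, event) encoding as above, using made-up toy data:\n\n\\begin{lstlisting}[language=python]\nimport numpy as np\n\ntime = np.array([2.0, 5.0, 8.0, 12.0])  # follow-up times\nevent = np.array([1, 0, 1, 1])          # 1 = event, 0 = censored\n\nlam = event.sum() / time.sum()          # MLE of the constant hazard\nS_5 = np.exp(-lam * 5.0)                # S(5) = exp(-lambda * 5)\nprint(lam, S_5)\n\\end{lstlisting}\n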
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Cox Proportional-Hazards Model}\n\\label{survival:cox}\n\nThe Cox proportional-hazards model \\cite{cox} is a\nsemiparametric\\footnote{Semiparametric as $\\lambda_{0}\\left(t\\right)$ is unspecified, but $\\vb*{\\beta}$ are still parameters of $\\lambda$.} model\nwhich further expands on the Weibull model by allowing\nthe time dependence of $\\lambda$ to be an unknown function.\nAs the name suggests, the proportional hazards assumption is retained\nallowing us to compute useful HR values\nwithout even knowing $\\lambda\\left(t\\right)$,\nor $S\\left(t\\right)$\\footnote{We can\nplot Kaplan-Meier $S\\left(t\\right)$ curves of predictions from a Cox model,\nafter fitting the baseline hazard empirically on a reference subset,\nbut only at discrete values of $\\vb{x}$.\nSee \\cref{fig:cox:cox_age_baseline_survival} for one example.}.\n\nIn the Cox model we assume the hazard has the form of\n\n\\begin{equation}\\label{eq:survival:cox_lambda}\n\\lambda_{\\text{Cox}} = \\lambda_{0}\\left(t\\right) \\exp\\left(\\sum_{j=1}^{n}\\, \\beta_{j} x_{j}\\right)\n\\end{equation}\n\n\\noindent where $\\lambda_{0}\\left(t\\right)$ is the baseline hazard, an unspecified function of $t$.\nWhen computing hazard ratios $\\lambda_{0}\\left(t\\right)$ then drops out:\n\n\\begin{equation}\\label{eq:survival:cox_HR}\n\\text{HR}_{\\text{Cox}} = e^{\\beta_{j}}\n\\end{equation}\n
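\nSince an equivalent \\python analysis with the \\texttt{lifelines} package is referenced later in these notes, here is a minimal sketch of fitting \\cref{eq:survival:cox_lambda} with it; the toy data frame and column names are illustrative assumptions:\n\n\\begin{lstlisting}[language=python]\nimport pandas as pd\nfrom lifelines import CoxPHFitter\n\n# toy data; 'time' is follow-up, 'status' the event flag\ndf = pd.DataFrame({\n    'time':   [5.0, 8.0, 12.0, 3.0, 9.0, 11.0],\n    'status': [1, 0, 1, 1, 0, 1],\n    'age':    [52, 40, 61, 45, 38, 66],\n})\n\ncph = CoxPHFitter()\ncph.fit(df, duration_col='time', event_col='status')\nprint(cph.hazard_ratios_)  # exp(beta_j) for each covariate\n\\end{lstlisting}\n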
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Method Comparison}\n\\label{survival:comp}\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{l|l|l}\n\\multicolumn{1}{c|}{Kaplan-Meier} & \\multicolumn{1}{c|}{Exponential / Weibull} & \\multicolumn{1}{c}{Cox} \\\\\n\\cline{1-3}\n\\begin{tabular}[c]{p{0.3\\textwidth}}\nPro\n\\begin{itemize}\n  \\item Simple to compute and understand\n  \\item Can estimate $S$\n\\end{itemize}\n\\\\\nCon\n\\begin{itemize}\n  \\item No functional form\n  \\item Can \\textbf{not} estimate HR\n  \\item Only can handle a few categorical $x$\n\\end{itemize}\n\\end{tabular}\n\n&\n\n\\begin{tabular}[c]{p{0.3\\textwidth}}\nPro\n\\begin{itemize}\n  \\item Can estimate $S$ and HR\n\\end{itemize}\n\\\\\nCon\n\\begin{itemize}\n  \\item $\\lambda\\left(t\\right) = c$ can be unrealistic\n  \\item Weibull: $\\lambda = f\\left(t\\right)$ of a known form\n\\end{itemize}\n\\end{tabular}\n\n&\n\n\\begin{tabular}[c]{p{0.3\\textwidth}}\nPro\n\\begin{itemize}\n  \\item $\\lambda$ can be an unknown function of $t$\n  \\item Can estimate HR\n\\end{itemize}\n\\\\\nCon\n\\begin{itemize}\n  \\item Can \\textbf{not} estimate $S$\n\\end{itemize}\n\\end{tabular}\n\n\\\\\n\\end{tabular}\n\\end{table}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Model Assumptions}\n\\label{survival:assumptions}\n\n\\begin{enumerate}[noitemsep]\n  \\item Any censoring is non-informative, \\ie censoring is uncorrelated with the probabilities of different event outcomes.\\label{item:survival:assumptions:censoring}\n  \\item Survival times $t$ are uncorrelated across subjects.\\label{item:survival:assumptions:t_uncorr}\n  \\item $\\vb{x}$ is constant over time, per subject.\\label{item:survival:assumptions:X_constant}\n  \\item Hazards are proportional, \\ie hazard ratios are constant over time.\\label{item:survival:assumptions:prop_hazard}\n  \\item $\\log\\left(\\lambda\\right)$ is linear with respect to the continuous $x_{j}$ independent variables.\\label{item:survival:assumptions:X_linearity}\n\\end{enumerate}\n\n\\Cref{item:survival:assumptions:censoring,item:survival:assumptions:t_uncorr,item:survival:assumptions:X_constant}\nare assumptions for all survival models, while\n\\cref{item:survival:assumptions:prop_hazard,item:survival:assumptions:X_linearity}\napply to the exponential, Weibull and Cox models.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Tests and Corrections}\n\\label{survival:assumptions:tests_and_corrections}\n\n\\begin{itemize}[noitemsep]\n  \\item[\\cref{item:survival:assumptions:censoring}.] Exploratory data analysis to detect differences in $\\vb{x}$ distributions between censored and non-censored subjects. If violated, better input data is needed, redesign study.\n\n  \\item[\\cref{item:survival:assumptions:t_uncorr}.] Exploratory data analysis, study design question. If violated, redesign study.\n\n  \\item[\\cref{item:survival:assumptions:X_constant}.] Exploratory data analysis, study design question. If unavoidable, can try a more complex time dependent covariate model.\n\n  \\item[\\cref{item:survival:assumptions:prop_hazard}.] Check with a Schoenfeld test or complementary log-log plot. If violated, stratify problematic $x_{j}$ variables into separate models, or use more complex models with time dependent $\\beta$ coefficients.\n\n  \\item[\\cref{item:survival:assumptions:X_linearity}.] Check the appropriate residual plots. If violated, try to linearize the offending continuous $x_{j}$ with a transformation or bin to a categorical variable.\n\\end{itemize}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Schoenfeld Test}\n\\label{survival:assumptions:schoenfeld}\n\nSchoenfeld residuals are similar in concept to normal residuals in regression analyses,\nbut instead of looking at the difference between the true and estimated values of $y$,\n$\\hat{\\vb*{\\epsilon}} = \\vb{y} - \\mathbf{X} \\hat{\\vb*{\\beta}}$,\nwe look at the difference in $\\vb*{\\beta}$ and $\\hat{\\vb*{\\beta}}$ for a particular $y$ value.\nIn this way, Schoenfeld residuals represent the difference between the\ncoefficient $\\beta_{j}$ fit from all data points and the fit from a single point.\nTo validate the proportional hazards assumption\nthe Schoenfeld residuals \\cite{schoenfeld} must not change over time.\nAn example Schoenfeld residual plot is provided in\n\\cref{fig:cox:schoenfeld_residuals}.\nBesides graphically looking at the residuals,\nwe can use {\\chiSqtest}s to compute {\\pvalue}s\nto test the null hypothesis that the Schoenfeld residuals and time are independent.\nIn this case, the proportional hazards assumption is supported\nif we find large, non-significant {\\pvalue}s.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Complementary Log-Log Plot}\n\\label{survival:assumptions:cloglog}\n\nFor categorical variables we can look at the\ncomplementary log-log plot\nof $\\log\\left(-\\log\\left(S\\left(t\\right)\\right)\\right)$ versus $\\log\\left(t\\right)$.\nAn example complementary log-log plot is provided in \\cref{fig:stanford_cloglog}.\nIf the different categories have parallel curves\nthe proportional hazards assumption is\nvalid for the variable in question.\n
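\nIn \\python, the corresponding Schoenfeld-residual based check is available in \\texttt{lifelines}; a sketch, continuing the hypothetical \\texttt{df} and \\texttt{cph} from the earlier sketch:\n\n\\begin{lstlisting}[language=python]\nfrom lifelines.statistics import proportional_hazard_test\n\n# tests the null hypothesis that the Schoenfeld residuals are\n# independent of time; large p-values support proportional hazards\nresults = proportional_hazard_test(cph, df, time_transform='rank')\nresults.print_summary()\n\\end{lstlisting}\n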
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Other Residual Plots}\n\\label{survival:assumptions:linearity}\n\nTo test the linearity of $\\log\\left(\\lambda\\right)$ with respect to the continuous $x_{j}$\nwe may plot the Martingale residuals.\nMartingale residuals range from $-\\infty$ to $1$\nand measure the difference between the observed and predicted survival for a subject.\nSubjects who have an event ``too early'' will have residuals near one,\nwhereas subjects who have an event ``too late'' will have residuals near $-\\infty$.\nThe mean of the residuals should be $0$, see \\cref{fig:cox:martingale_residuals:prediction} for an example.\nWe can also plot the Martingale residuals versus any particular $x_{j}$\nto judge the linearity along that specific dimension, see \\cref{fig:cox:martingale_residuals:age}.\n\nThe Martingale residuals can be symmetrized into deviance residuals,\nwhich have a mean of $0$ and standard deviation of $1$,\nfor easier interpretation, see \\cref{fig:cox:outliers:deviance}.\n\nLastly, we can recompute the model fit leaving out event $i$\nand measure the resulting change in the $\\beta_{j}$ coefficients,\n$\\hat{\\Delta}_{ij} = \\hat{\\beta}_{j} - \\hat{\\beta}_{j}^{\\left(i\\right)}$, \\ie ``dfbeta''.\nPlotting $\\hat{\\Delta}_{ij}$ versus event number $i$\ncan then point to overly influential events, see \\cref{fig:cox:outliers:dfbeta}.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Additional Concepts}\n\\label{survival:additional}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Logrank Test}\n\\label{survival:additional:logrank}\n\nThe logrank test is a method for comparing the survival functions of two different groups.\nSurvival status versus group membership $2\\times2$ tables are computed at each distinct time $t_{i}$,\nbefore being combined into one $\\chi_{\\text{logrank}}^{2}$ test statistic\nwith the Cochran-Mantel-Haenszel test\\footnote{Sometimes\nthe logrank test is itself referred to as the Mantel-Haenszel test.}.\n$\\chi_{\\text{logrank}}^{2}$ approximately has the \\chiSqdist with 1 degree of freedom,\nallowing the production of {\\pvalue}s to test the null hypothesis\nthat there is {\\em no} difference in the groups' survival functions.\n\nThe logrank test is most accurate when the proportional hazards assumption is satisfied,\nbut is still somewhat usable if the difference in hazard ratios has only one sign,\n\\ie the $S\\left(t\\right)$ curves do not cross as can be checked with a complementary log-log plot.\nIn \\R the logrank test can be computed with the \\texttt{survdiff} function,\nwith the parameter $\\rho$ left at its default value of $0$.\n
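\nA matching \\python sketch of the logrank test via \\texttt{lifelines} (the group arrays are illustrative assumptions):\n\n\\begin{lstlisting}[language=python]\nfrom lifelines.statistics import logrank_test\n\nres = logrank_test(durations_A=[5, 8, 12, 3], durations_B=[9, 11, 2, 7],\n                   event_observed_A=[1, 0, 1, 1],\n                   event_observed_B=[1, 1, 0, 1])\nprint(res.p_value)  # small p rejects 'no difference in S(t)'\n\\end{lstlisting}\n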
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Odds, Log Odds, and the Odds Ratio in Relation to the Hazard Ratio}\n\\label{survival:additional:odds}\n\nTo begin, note that the odds of an event occurring are:\n\n\\begin{equation}\\label{eq:survival:odds}\n\\text{Odds} = \\frac{P\\left(\\text{Occurring}\\right)}{P\\left(\\text{Not Occurring}\\right)} = \\frac{p}{1-p}\n\\end{equation}\n\n\\noindent For example, when rolling a fair die, $\\text{odds}\\left(3\\right) = (1/6) / (5/6) = 0.2$, or 1:5.\nOdds may vary from $0$ to $\\infty$, with $1$ representing fair odds.\nAs such, they are not symmetric and are somewhat unwieldy.\nFor this reason it is common to take the log of the odds,\n\\ie log odds, which varies symmetrically between $-\\infty$ and $\\infty$, with 0 being fair.\n\nAn odds ratio is simply the ratio of two odds.\nIf we take a biased coin with $\\text{odds}\\left(\\text{heads}\\right) = 2$, or 2:1 in favor of heads,\nthe odds ratio between the biased coin and a fair coin is $2/1 = 2$,\n\\ie the odds of getting heads with the biased coin are 2 times that of the fair coin.\nIf desired, we can similarly take the ratio of log odds.\n\nIn logistic regression models, see \\cref{class:logistic},\nit is possible to compute an odds ratio for the input feature $x_{j}$,\nrepresenting the change in odds due to a one unit increase in $x_{j}$.\nThis is very similar in concept to the HR,\nwhich itself is a ratio of two hazard rates from different sets of subjects separated by $x_{j}$.\nHowever, as odds and probability are different quantities,\nrelative risks, including the HR, are not equivalent to odds ratios.\nIn some cases the relative risk and odds ratio may have similar values,\nsee Table 3 of \\cite{pmid26623395},\nbut it is important to remember they are ultimately different concepts.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Expected Future Lifetime}\n\\label{survival::additional:efl}\n\nThe expected future lifetime $\\expval{T-t_{0}}$ is the\nexpected additional time of survival for a subject having already survived to $t_{0}$.\nWe can derive $\\expval{T-t_{0}}$ \\cref{eq:survival:efl_der:efl}\nfrom the probability of an event occurring between $t_{0}$ and $t_{0} + t$ \\cref{eq:survival:efl_der:P}\nusing the prior work of \\cref{eq:survival:hazard_def} and integration by parts:\n\n\\begin{subequations}\\label{eq:survival:efl_der}\n\\begin{gather}\nP\\left(T \\leq t_{0} + t \\mid T > t_{0}\\right)\n= \\frac{F\\left(t_{0} + t\\right) - F\\left(t_{0}\\right)}{S\\left(t_{0}\\right)} \\label{eq:survival:efl_der:P} \\\\\n\\text{PDF}\n= \\dv{}{t} P\\left(T \\leq t_{0} + t \\mid T > t_{0}\\right)\n= \\frac{f\\left(t_{0} + t\\right)}{S\\left(t_{0}\\right)} \\label{eq:survival:efl_der:pdf} \\\\\n\\expval{T-t_{0}}\n= \\int_{0}^{\\infty} t \\, \\text{PDF} \\, \\dd{t}\n= \\frac{1}{S\\left(t_{0}\\right)} \\int_{t_{0}}^{\\infty} S\\left(t\\right) \\, \\dd{t} \\label{eq:survival:efl_der:efl}\n\\end{gather}\n\\end{subequations}\n\n\\noindent For example, for the exponential model $S\\left(t\\right) = e^{-\\lambda t}$ this gives $\\expval{T-t_{0}} = 1/\\lambda$ for any $t_{0}$, as expected from the memoryless property.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Bathtub Curve}\n\\label{survival:additional:bathtub}\n\nIn reliability engineering the hazard function of many components\ncan be estimated as a three part ``bathtub'' curve.\nInitially the failure, \\ie event, rate decreases as defective components fail early,\nbefore reaching a constant plateau where failures are essentially random events.\nAs components wear out and reach the end of their useful lifetimes, the failure rate increases again.\nCombining these three effects produces a bathtub-shaped curve, as seen in \\cref{fig:bathtub_curve}.\nWe can model these situations with the Cox proportional-hazards model,\nor a Weibull model with an appropriately shaped $\\lambda\\left(t\\right)$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/survival/bathtub_curve}\n\\vspace{0.2cm}\n\\caption{\nIllustration of a bathtub curve hazard function, by \\href{https://en.wikipedia.org/wiki/File:Bathtub_curve.svg}{Wyatts}.\n}\n\\label{fig:bathtub_curve}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Example \\R Code}\n\\label{survival:Rcode}\n\nA simple example survival analysis in \\R utilizing the built-in\n\\texttt{stanford2} Stanford heart transplant dataset is provided in this section.\nThe examples here have been adapted from Mike Marin's\n\\href{https://www.youtube.com/playlist?list=PLqzoL9-eJTNDdnKvep_YHIwk2AMqHhuJ0}{series of lectures},\nand \\href{http://www.sthda.com/english/wiki/cox-model-assumptions}{notes} available from the STHDA.\nThe full \\R code can also be found in\n\\href{https://github.com/mepland/data_science_notes/blob/main/sections/appendixes/additional/example_survival.R}{\\texttt{example\\_survival.R}}.\nAn equivalent analysis has also been performed in \\python using 
the\n\\href{https://lifelines.readthedocs.io/en/latest}{\\texttt{lifelines}} package,\nsee\n\\href{https://github.com/mepland/data_science_notes/blob/main/sections/appendixes/additional/example_survival.ipynb}{\\texttt{example\\_survival.ipynb}}.\n\nThe status variable represents the event, death of a patient, as $\\text{status}=1$,\nand the censoring of a patient as $\\text{status}=0$.\nThree independent variables $x$ are available,\nage in years,\nage categories - under or over 40,\nand the numeric T5 mismatch score between donor and recipient.\n\n\\begin{figure}[H]\n\\centering\n  \\begin{subfigure}[c]{0.48\\textwidth}\\centering\n  \\includegraphics[width=\\textwidth]{figures/survival/stanford_km}\n  \\caption{Kaplan-Meier}\n  \\label{fig:stanford_km:km}\n  \\end{subfigure}\n  ~\n  \\begin{subfigure}[c]{0.48\\textwidth}\\centering\n  \\includegraphics[width=\\textwidth]{figures/survival/stanford_km_annotated}\n  \\caption{Annotated}\n  \\label{fig:stanford_km:annotated}\n  \\end{subfigure}\n\\caption{\nKaplan-Meier plot of the Stanford heart transplant dataset. Censored patients are displayed with marks.\nThe version on the right adds additional annotations, such as\nsubplots for patients at risk and censoring events,\nthe \\pvalue from the logrank test, and a median reference line.\n}\n\\label{fig:stanford_km}\n\\end{figure}\n\n\\subsubsection{Initialization}\n\\label{survival:Rcode:init}\n\nHere we load the necessary packages and data,\nand create the categorical age variable \\texttt{agecat}.\n\n\\begin{lstlisting}[language=R]\n> install.packages(c(\"survival\", \"survminer\"))\n> library(\"survival\")\n> library(\"survminer\")\n\n> df <- stanford2\n> df$agecat <- cut(df$age, breaks=c(0,40, Inf), labels=c('Under 40', 'Over 40'), right=FALSE)\n> df <- df[with(df, order(time)),]\n\n> df[1:5,c('time', 'status', 'age', 'agecat', 't5')]\n    time status age   agecat   t5\n21   0.5      1  41  Over 40 0.87\n133  1.0      1  21 Under 40 0.47\n184  1.0      0  27 Under 40   NA\n16   1.0      1  54  Over 40 0.47\n183  2.0      0  39 Under 40   NA\n> summary(df[c('time', 'age', 'agecat', 't5')])\n      time              age             agecat          t5\n Min.   :   0.50   Min.   :12.00   Under 40: 65   Min.   :0.000\n 1st Qu.:  64.75   1st Qu.:35.00   Over 40 :119   1st Qu.:0.690\n Median : 351.00   Median :44.00                  Median :1.040\n Mean   : 696.94   Mean   :41.09                  Mean   :1.117\n 3rd Qu.:1160.75   3rd Qu.:49.00                  3rd Qu.:1.460\n Max.   :3695.00   Max.   :64.00                  Max.   
:3.050\n                                                  NA's   :27\n\\end{lstlisting}\n\n\\subsubsection{Kaplan-Meier Model}\n\\label{survival:Rcode:km}\n\nThe Kaplan-Meier model for this data partitioned by \\texttt{agecat}\nshows a median survival time of \\num{1271} (\\num{431}) for patients Under 40 (Over 40).\nThe logrank test, \\texttt{survdiff}, returned a \\pvalue of \\num{0.07},\nthus while the survival curves appear visually different we {\\em can not} reject the null hypothesis\nthat the survival curves are the same, at the standard significance level of \\num{0.05}.\n\n\\begin{lstlisting}[language=R]\n> km.model <- survfit(Surv(time, status) ~ agecat, data=df, type='kaplan-meier')\n> summary(km.model)$table\n                records n.max n.start events   *rmean *se(rmean) median 0.95LCL 0.95UCL\nagecat=Under 40      65    65      65     32 1520.783   220.1031   1271     731      NA\nagecat=Over 40      119   119     119     81 1123.229   143.3516    431     202     897\n\n> survdiff(Surv(time, status) ~ agecat, data=df)\nCall:\nsurvdiff(formula = Surv(time, status) ~ agecat, data = df)\n\n                  N Observed Expected (O-E)^2/E (O-E)^2/V\nagecat=Under 40  65       32     41.3      2.10      3.32\nagecat=Over 40  119       81     71.7      1.21      3.32\n\n Chisq= 3.3  on 1 degrees of freedom, p= 0.07\n\n> pdf('~/stanford_km.pdf')\n> ggsurvplot(km.model, xlab='Time', ylab='S(t)', size = 1, linetype = 'strata', palette=c('#4e79a7', '#f28e2b'), conf.int = TRUE, legend = c(0.85, 0.85), legend.y = 1, legend.title = '', legend.labs = c('Under 40', 'Over 40'))\n> dev.off()\n\n> pdf('~/stanford_km_annotated.pdf')\n> ggsurvplot(km.model, xlab='Time', ylab='S(t)', size = 1, linetype = 'strata', palette=c('#4e79a7', '#f28e2b'), conf.int = TRUE, legend = c(0.85, 0.85), legend.y = 1, legend.title = '', legend.labs = c('Under 40', 'Over 40'),\npval = TRUE, # Add survdiff p-value\nrisk.table = TRUE, # Absolute number at risk\nrisk.table.y.text.col = FALSE, risk.table.col = \"strata\",\nncensor.plot = TRUE, # plot censored patients vs time\nsurv.median.line = \"h\", # add horizontal median\n)\n> dev.off()\n\\end{lstlisting}\n\n\\subsubsection{Cox Proportional-Hazards Model}\n\\label{survival:Rcode:cox}\n\nTwo Cox proportional-hazards models are demonstrated here,\nthe first using age alone and the second adding the T5 score.\nTo compare apples-to-apples, subjects with null T5 values are excluded from both models.\n\nIn the age alone model, the coefficient for age is\n$\\beta_{\\text{age}} = \\num{0.02990}$, giving a HR of\n$\\exp\\left(\\beta_{\\text{age}}\\right) = \\num{1.03035}$.\nThe \\SI{95}{\\percent} confidence interval for the HR is \\num{1.008} to \\num{1.054}.\nThe \\pvalue for $\\beta_{\\text{age}}$ is \\num{0.00846},\nthus we can reject the null hypothesis that $\\beta_{\\text{age}} = 0$.\nWe are also provided with three {\\pvalue}s for the model as a whole:\n\\num{0.006} for the likelihood-ratio test,\n\\num{0.008} for the Wald test,\nand \\num{0.008} for the logrank test.\nThese test the null hypothesis that all $\\beta_{i} = 0$.\nHere all three tests reject the null hypothesis,\nas could be expected since the single $\\beta_{\\text{age}}$ was significant,\nbut would be more interesting for higher dimensional models.\nThe concordance statistic, as explained in \\cref{ml_general:eval:concordance},\nof \\num{0.595} is fairly poor, but is better than random guessing.\n\nIn the extended model the age HR is very similar, \\num{1.03006},\nwhile the T5 HR is \\num{1.18579}.\n
We can interpret the second model's T5 HR as follows:\nat a given instant in time, an individual with a T5 mismatch score of $s+1$ is\n\\num{1.18579} times, \\ie \\SI{18.579}{\\percent} more, likely to die than an individual with a score of $s$, when controlling for age.\nThe inverse HR, $\\exp\\left(-\\beta_{j}\\right)$, where the reference group has been switched, is also provided for convenience.\n\nWe can compare the two nested models using a likelihood-ratio test.\nThe resulting large \\pvalue of \\num{0.3563} supports the null hypothesis,\nthus we conclude there is not a statistically significant difference between the two models.\nWe can remove T5 from the model without a loss of predictive power in this case.\n\nThe baseline survival function for the age only model\nis plotted in \\cref{fig:cox:cox_age_baseline_survival}.\nA more complete plot including a range of ages can be found in the\n\\href{https://github.com/mepland/data_science_notes/blob/main/sections/appendixes/additional/example_survival.ipynb}{\\texttt{example\\_survival.ipynb}} implementation.\n\n\\begin{lstlisting}[language=R]\n> cox.model_age <- coxph(Surv(time, status) ~ age, data=df[!is.na(df$t5), ])\n> summary(cox.model_age)\nCall:\ncoxph(formula = Surv(time, status) ~ age, data = df[!is.na(df$t5),\n    ])\n\n  n= 157, number of events= 102\n\n       coef exp(coef) se(coef)     z Pr(>|z|)\nage 0.02990   1.03035  0.01136 2.633  0.00846 **\n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\n    exp(coef) exp(-coef) lower .95 upper .95\nage      1.03     0.9705     1.008     1.054\n\nConcordance= 0.595  (se = 0.034 )\nLikelihood ratio test= 7.62  on 1 df,   p=0.006\nWald test            = 6.93  on 1 df,   p=0.008\nScore (logrank) test = 7.01  on 1 df,   p=0.008\n\n> pdf('~/stanford_cox_age_baseline_survival.pdf')\n> ggsurvplot(survfit(cox.model_age, data=df[!is.na(df$t5), ]), xlab='Time', ylab='S(t)', size = 1, linetype = 'strata', palette=c('#4e79a7'), conf.int = TRUE, legend = c(0.85, 0.85), legend.y = 1, legend.title = '', legend.labs = c('Baseline Survival'))\n> dev.off()\n\n> cox.model_age_t5 <- coxph(Surv(time, status) ~ age + t5, data=df[!is.na(df$t5), ])\n> summary(cox.model_age_t5)\nCall:\ncoxph(formula = Surv(time, status) ~ age + t5, data = df[!is.na(df$t5),\n    ])\n\n  n= 157, number of events= 102\n\n       coef exp(coef) se(coef)     z Pr(>|z|)\nage 0.02961   1.03006  0.01136 2.608  0.00911 **\nt5  0.17041   1.18579  0.18326 0.930  0.35243\n---\nSignif. 
codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\n    exp(coef) exp(-coef) lower .95 upper .95\nage     1.030     0.9708     1.007     1.053\nt5      1.186     0.8433     0.828     1.698\n\nConcordance= 0.59  (se = 0.034 )\nLikelihood ratio test= 8.47  on 2 df,   p=0.01\nWald test            = 7.81  on 2 df,   p=0.02\nScore (logrank) test = 7.87  on 2 df,   p=0.02\n\n> anova(cox.model_age, cox.model_age_t5, test='LRT')\nAnalysis of Deviance Table\n Cox model: response is  Surv(time, status)\n Model 1: ~ age\n Model 2: ~ age + t5\n   loglik Chisq Df P(>|Chi|)\n1 -447.29\n2 -446.86 0.851  1    0.3563\n\\end{lstlisting}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/survival/stanford_cox_age_baseline_survival}\n\\vspace{0.2cm}\n\\caption{\nBaseline survival function for\na univariate Cox model fit to the Stanford heart transplant dataset.\nThe documentation describing what reference value of age is utilized by \\R here is unclear.\nBy comparing to the equivalent plot produced by \\texttt{lifelines} in\n\\href{https://github.com/mepland/data_science_notes/blob/main/sections/appendixes/additional/example_survival.ipynb}{\\texttt{example\\_survival.ipynb}}\nthe age used appears to also be the mean.\n}\n\\label{fig:cox:cox_age_baseline_survival}\n\\end{figure}\n\n\\subsubsection{Checking Assumptions}\n\\label{survival:Rcode:assumptions}\n\nIn this section we check the assumptions made by the Cox and Kaplan-Meier models.\nThe Schoenfeld test of the proportional hazards assumption\nis implemented via the \\texttt{cox.zph} function\nfor both example Cox models: age alone and age plus T5 score.\nThe large global {\\pvalue}s, \\num{0.38} and \\num{0.25} respectively,\nfail to reject the null hypothesis and show that the\nproportional hazards assumption is valid here.\nAdditionally, we are provided with {\\pvalue}s for each individual variable.\n\nWe can also check the proportional hazards assumption graphically\nfor categorical variables, see \\cref{fig:stanford_cloglog}.\nNote that the complementary log-log plot applies to the example Kaplan-Meier model,\nwhile the rest of the plots apply to the age alone Cox model.\nPlease see the captions of\n\\cref{fig:cox:schoenfeld_residuals,fig:stanford_cloglog,fig:cox:martingale_residuals,fig:cox:outliers}\nfor further discussion of these diagnostic plots.\n\n\\begin{lstlisting}[language=R]\n> cox.model_age.ph <- cox.zph(cox.model_age)\n> cox.model_age.ph\n       chisq df    p\nage     0.76  1 0.38\nGLOBAL  0.76  1 0.38\n\n> cox.model_age_t5.ph <- cox.zph(cox.model_age_t5)\n> cox.model_age_t5.ph\n       chisq df    p\nage     0.83  1 0.36\nt5      2.06  1 0.15\nGLOBAL  2.77  2 0.25\n\n> pdf('~/stanford_cox_age_schoenfeld_residuals.pdf')\n> ggcoxzph(cox.model_age.ph)\n> dev.off()\n\n> pdf('~/stanford_cloglog.pdf')\n> plot(km.model, fun='cloglog', xlab='log(t)', ylab='log(-log(S(t)))', col=c('#4e79a7', '#f28e2b'))\n> legend('bottomright', inset=.02, legend=c('Under 40', 'Over 40'), col=c('#4e79a7', '#f28e2b'), lty=1:2, box.lty=0)\n> dev.off()\n\n> pdf('~/stanford_cox_age_martingale_residuals.pdf')\n> ggcoxdiagnostics(cox.model_age, type = \"martingale\", ox.scale='linear.predictions')\n> dev.off()\n\n> pdf('~/stanford_cox_age_martingale_residuals_age.pdf')\n> ggcoxfunctional(Surv(time, status) ~ age, data = df)\n> dev.off()\n\n> pdf('~/stanford_cox_age_deviance_residuals.pdf')\n> ggcoxdiagnostics(cox.model_age, type = \"deviance\", ox.scale='linear.predictions')\n> 
\begin{figure}[H]\n\centering\n\includegraphics[width=0.7\textwidth]{figures/survival/stanford_cox_age_schoenfeld_residuals}\n\vspace{0.2cm}\n\caption{\nSchoenfeld residual plot for\na univariate Cox model fit to the Stanford heart transplant dataset.\nThe residuals do not appear dependent on time,\nwhich is supported by the non-significant \pvalue,\nand shows that the proportional hazards assumption\nis satisfied for this model.\nNote that in a multivariate model we\nwould have a Schoenfeld residual plot for\neach covariate.\n}\n\label{fig:cox:schoenfeld_residuals}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n\includegraphics[width=0.7\textwidth]{figures/survival/stanford_cloglog}\n\vspace{0.2cm}\n\caption{\nComplementary log-log plot for categorized patient age\nin the Stanford heart transplant dataset.\nThese curves are not parallel,\nso the proportional hazards assumption is violated for this variable.\n}\n\label{fig:stanford_cloglog}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n  \begin{subfigure}[c]{0.48\textwidth}\centering\n  \includegraphics[width=\textwidth]{figures/survival/stanford_cox_age_martingale_residuals}\n  \caption{vs Log Partial Hazard, \newline\ie Linear Predictions}\n  \label{fig:cox:martingale_residuals:prediction}\n  \end{subfigure}\n  ~\n  \begin{subfigure}[c]{0.48\textwidth}\centering\n  \includegraphics[width=\textwidth]{figures/survival/stanford_cox_age_martingale_residuals_age}\n  \caption{vs Age}\n  \label{fig:cox:martingale_residuals:age}\n  \end{subfigure}\n\caption{\nPlots of Martingale residuals for\na univariate Cox model fit to the Stanford heart transplant dataset.\nOn the left, the residuals are plotted versus the\nlog of the partial hazard, \ie the log hazard excluding the baseline hazard.\nIn \R this is known as \texttt{linear.predictors}, while in \texttt{lifelines} it is\n\href{https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html\#lifelines.fitters.coxph_fitter.SemiParametricPHFitter.predict_log_partial_hazard}{log partial hazard}.\nAs can be seen in the blue fit, there is a slight nonlinearity.\nOn the right, residuals of the null model are plotted against the age variable alone.\nFor ages $\lesssim 40$, age appears to have a non-linear relationship with $\log\left(\lambda\right)$.\n}\n\label{fig:cox:martingale_residuals}\n\end{figure}\n\n\begin{figure}[H]\n\centering\n  \begin{subfigure}[c]{0.48\textwidth}\centering\n  \includegraphics[width=\textwidth]{figures/survival/stanford_cox_age_deviance_residuals}\n  \caption{Deviance}\n  \label{fig:cox:outliers:deviance}\n  \end{subfigure}\n  ~\n  \begin{subfigure}[c]{0.48\textwidth}\centering\n  \includegraphics[width=\textwidth]{figures/survival/stanford_cox_age_dfbeta}\n  \caption{dfbeta}\n  \label{fig:cox:outliers:dfbeta}\n  \end{subfigure}\n\caption{\nAdditional residual plots for\na univariate Cox model fit to the Stanford heart transplant dataset.\nOn the left, the deviance residuals are plotted versus the\nlog of the partial hazard, \ie the log hazard excluding the baseline hazard.\nIn \R this is known as \texttt{linear.predictors}, while in \texttt{lifelines} it is\n\href{https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html\#lifelines.fitters.coxph_fitter.SemiParametricPHFitter.predict_log_partial_hazard}{log partial hazard}.\nAs can be seen in the blue fit, there is a slight nonlinearity.\nOn the right, the $\hat{\Delta}_{ij}$, \ie ``dfbeta'', residuals for age\nare plotted against event number.\nNo event is particularly influential, as $\abs{\hat{\Delta}_{ij}} \leq \num{0.004}$\nfor $\hat{\beta}_{\text{age}} = \num{0.02990}$.\n}\n\label{fig:cox:outliers}\n\end{figure}\n\n
is\n\\href{https://lifelines.readthedocs.io/en/latest/fitters/regression/CoxPHFitter.html\\#lifelines.fitters.coxph_fitter.SemiParametricPHFitter.predict_log_partial_hazard}{log partial hazard}.\nAs can be seen in the blue fit, there is a slight nonlinearity.\nOn the right, the $\\hat{\\Delta}_{ij}$, \\ie ``dfbeta'', residuals for age\nare plotted against event number.\nNo event is particularly influential as $\\abs{\\hat{\\Delta}_{ij}} \\leq \\num{0.004}$\nfor $\\hat{\\beta}_{\\text{age}} = \\num{0.02990}$.\n}\n\\label{fig:cox:outliers}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Random Survival Forest (RSF)}\n\\label{survival:RSF}\n% TODO\n", "meta": {"hexsha": "a3addfa81ae661113391f666c247c1e5508c949c", "size": 38898, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/survival.tex", "max_stars_repo_name": "mepland/data_science_notes", "max_stars_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-05-30T15:15:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-23T01:01:08.000Z", "max_issues_repo_path": "sections/survival.tex", "max_issues_repo_name": "mepland/data_science_notes", "max_issues_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/survival.tex", "max_forks_repo_name": "mepland/data_science_notes", "max_forks_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9215922799, "max_line_length": 366, "alphanum_fraction": 0.6990590776, "num_tokens": 11630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.576262454792844}}
{"text": "\section{Secure Multi-party Computation (SMPC)}\label{s:smpc}\nSecure multi-party computation (SMPC)\footnote{Both SMPC and MPC abbreviations are used for secure multi-party computation; in this thesis we use them interchangeably.} or Secure Function Evaluation (SFE) (in the two-party setting) is a field of cryptography aiming to create methods that enable distinct parties to jointly compute a function over their private inputs.\nOnly the outcome of that function is made public, and the parties learn nothing more than their own inputs and whatever can be inferred from the output of the function.\n\nSecure multi-party computation was introduced in 1982 by Andrew Yao.\nIts first form was that of secure two-party computation (2PC) with the so-called Millionaire's Problem \cite{yao1982protocols}.\nThe problem states that there are two millionaires wishing to know who is richer.\nHowever, they should not find out any additional information about each other\u2019s wealth.\n\nIn the general case we have $N$ parties $P_1, P_2, \dots, P_N$ with private inputs $x_1, x_2, \dots, x_N$ respectively.\nThe goal is to compute a function $f(x_1, x_2, \dots, x_N)$ and learn nothing more than what they would have if a separate trusted party had collected their inputs, computed the function $f$ for them, and returned the result to all parties.\n\nThere are two important requirements on any protocol for secure computation, namely \textit{privacy} and \textit{correctness} \cite{lindell2009secure}.\nThe privacy requirement demands that nothing but what is absolutely essential be learned.\nMore specifically, all parties should learn nothing more than the computation output.\nThe correctness requirement states that every party should receive the correct computation output; that is, an adversary should not be able to cause the result of the computation to be different from the outcome of the function the parties agreed to compute.\n\nPlenty of tasks can be modeled using SMPC, from simple ones such as coin tossing to far more complex ones such as electronic voting, electronic auctions, anonymous transactions, anonymous chatting, and private information retrieval systems.\n\nTake electronic voting as an example.\nThe privacy requirement ensures that no party can learn the individual votes of other parties, while still learning the election outcome.\nThe correctness requirement ensures that no party can affect the outcome with means other than casting their one individual vote.\nSimilarly, in the electronic auction example, the privacy requirement ensures that no party learns the bids of other parties while still learning the winning bid. 
The correctness requirement ensures that no party can bias the auction outcome and that the winning party has in fact placed the highest bid.\n\n\subsection{Millionaire's Problem}\label{ss:millionaire}\nThe first problem that was modeled using SMPC is the Millionaire's Problem introduced by Yao, and it is a basic building block for such secure computations.\n\nHere the two parties $P_1$, $P_2$ are the two millionaires.\nTheir private inputs are each one\u2019s wealth, $x_1$ and $x_2$ respectively.\nThe function which they wish to jointly compute can be formulated as\n\n\begin{equation}\n  f(x_1, x_2)=\begin{cases}\n    x_1, & \text{if $x_1 > x_2$}.\\\n    x_2, & \text{otherwise}.\n  \end{cases}\n\end{equation}\n\nMany solutions have been proposed for the Millionaire's Problem, starting with the one from Yao himself \cite{yao1982protocols}, as well as multiple others including \cite{ioannidis2003efficient, lin2005efficient}.\n\n\subsection{Oblivious Transfer}\label{ss:oblivious-transfer}\nAnother example for the two-party case and a basic building block of SMPC systems is Oblivious Transfer (OT), first introduced in 1981 by Michael O. Rabin \cite{rabin2005exchange}.\nIts simplest flavor is 1-2 oblivious transfer, or ``1 out of 2 oblivious transfer''.\n\n\nIn the 1-2 oblivious transfer case we have two parties, the sender $S$ and the receiver $R$.\nThe sender has two messages, $m_0$ and $m_1$, and the receiver has a bit $b$.\nThe receiver wishes to learn $m_b$, without learning anything about $m_{1-b}$ and without the sender learning $b$.\n\nA solution for the above problem has been implemented using the protocol of Even, Goldreich, and Lempel in \cite{even1985randomized}.\nBelow we describe the otherwise generic protocol using RSA as an encryption scheme.\n\n\begin{itemize}\n  \item $S$ (who holds messages $m_0$ and $m_1$) generates an RSA key pair. Recall from section \ref{ss:rsa} that the public key is $(n,e)$ and the private key is $d$.\n  \item $S$ randomly selects two values $x_0, x_1$.\n  \item $S$ sends $x_0, x_1, n, e$ to $R$.\n  \item $R$ picks $b \in \{0,1\}$.\n  \item $R$ randomly selects a value $k$ and computes $v = x_b + k^e \pmod{n}$.\n  \item $R$ sends $v$ to $S$.\n  \item $S$ computes $k_0 = (v-x_0)^d \pmod{n}$ and $k_1 = (v-x_1)^d \pmod{n}$. One of $k_0, k_1$ will be equal to the $k$ randomly selected by $R$.\n  \item $S$ sends $m_0' = m_0 + k_0$ and $m_1' = m_1 + k_1$ to $R$.\n  \item $R$ can compute exactly one of the messages, as $m_b = m_b' - k$.\n\end{itemize}\n\n
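To make the message flow concrete, below is a minimal, insecure toy sketch of this RSA-based protocol in Python; the textbook-sized key and the integer messages are illustrative assumptions only, not a real implementation.\n\n\begin{verbatim}\nimport random\n\n# S: toy RSA key pair (textbook sizes; real OT needs large primes)\np, q, e = 61, 53, 17\nn = p * q                          # public modulus\nd = pow(e, -1, (p - 1) * (q - 1))  # private exponent\n\nm0, m1 = 1234, 2718  # S's two messages, encoded as integers < n\nx0, x1 = random.randrange(n), random.randrange(n)  # sent to R with (n, e)\n\n# R: choose the bit b and a random k, and blind x_b with k^e\nb = 1\nk = random.randrange(n)\nv = ([x0, x1][b] + pow(k, e, n)) % n  # sent to S\n\n# S: unblind v against both x values; exactly one of these equals R's k\nk0 = pow((v - x0) % n, d, n)\nk1 = pow((v - x1) % n, d, n)\nm0_prime, m1_prime = (m0 + k0) % n, (m1 + k1) % n  # sent to R\n\n# R: recover only m_b\nassert ([m0_prime, m1_prime][b] - k) % n == [m0, m1][b]\n\end{verbatim}\n\n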
\subsection{Garbled Circuits}\label{ss:garbled-circuits}\nThere are also generic constructions for building SMPC systems.\nGeneric protocols implement secure computation for any probabilistic polynomial time function.\nOne of the most widely known such generic protocols is Garbled (or Encrypted) Circuits (GC).\nThe simplest version of SMPC using Garbled Circuits can be found in the two-party case.\nThe first protocol for such computations was introduced by Yao in \cite{yao1986generate}.\n\nAssume that we have two parties, Alice and Bob, with private inputs $x$ and $y$, respectively.\nThey wish to compute function $f$ over their private inputs, and nothing more than $f(x,y)$ should be learned.\n\nThe GC protocol works by expressing $f$ as a combinatorial circuit.\nThe circuit contains logical gates that implement any function $g : \{0,1\} \times \{0,1\} \rightarrow \{0,1\}$.\nThese include for instance simple \texttt{AND, OR, XOR} and \texttt{NOT} gates.\nThis circuit takes as inputs the bitwise representation of $x$ and $y$ and produces as output the bitwise representation of the value $f(x,y)$.\nThe protocol is based on evaluating an encrypted version of this circuit.\nA simple description of Yao's protocol can be seen below. A complete description as well as a security proof of this protocol can be found in \cite{lindell2009proof}.\n\nLet's assume without loss of generality that Alice is the so-called \textit{garbled circuit generator}, or \textit{garbler}, and Bob the \textit{evaluator}. The circuit is known to both parties.\n\n\subsubsection{Circuit Encoding}\label{sss:circuit-encoding}\n\n\begin{itemize}\n  \item Alice ``hardwires'' her input into the circuit. Thus the circuit now computes $f(x, \cdot)$.\n  \item Alice assigns to each wire $i$ of the circuit two random string values $(W_i^0, W_i^1)$, which are called \textit{labels} and correspond to the Boolean values $0 /$ \texttt{false} and $1 /$ \texttt{true} respectively. These labels should be of adequate length (usually 128 bits) as they will be used as symmetric keys in an encryption scheme.\n  \item For each gate $g$ in the circuit, Alice prepares the truth table of $g$. In that truth table the values $0,1$ are replaced with the corresponding labels that were generated in the previous step. 
The output column is then encrypted\footnote{The notation $E_{K}(M)$ means an encryption of the value $M$ with key $K$.} using as keys the labels from the two input columns.\n  For example, the transformation of the truth table of an \texttt{AND} gate with input wires $A,B$ and output wire $C$ can be seen below.\n\n  \begin{minipage}{0.2\textwidth}\n    \begin{center}\n      \begin{tabular}{ c c | c }\n       $A$ & $B$ & $C$ \\\n       \hline\n       $0$ & $0$ & $0$ \\\n       $0$ & $1$ & $0$ \\\n       $1$ & $0$ & $0$ \\\n       $1$ & $1$ & $1$ \\\n      \end{tabular}\n    \end{center}\n  \end{minipage}\n  $\xrightarrow[]{\text{becomes}}$\n  \begin{minipage}{0.2\textwidth}\n    \begin{center}\n      \begin{tabular}{ c c | c }\n       $A$ & $B$ & $C$ \\\n       \hline\n       $W_A^0$ & $W_B^0$ & $W_C^0$ \\\n       $W_A^0$ & $W_B^1$ & $W_C^0$ \\\n       $W_A^1$ & $W_B^0$ & $W_C^0$ \\\n       $W_A^1$ & $W_B^1$ & $W_C^1$ \\\n      \end{tabular}\n    \end{center}\n  \end{minipage}\n  $\xrightarrow[]{\text{becomes}}$\n  \begin{minipage}{0.2\textwidth}\n    \begin{center}\n      \begin{tabular}{ c c | c }\n       $A$ & $B$ & $C$ \\\n       \hline\n       $W_A^0$ & $W_B^0$ & $E_{W_A^0}(E_{W_B^0}(W_C^0))$ \\\n       $W_A^0$ & $W_B^1$ & $E_{W_A^0}(E_{W_B^1}(W_C^0))$ \\\n       $W_A^1$ & $W_B^0$ & $E_{W_A^1}(E_{W_B^0}(W_C^0))$ \\\n       $W_A^1$ & $W_B^1$ & $E_{W_A^1}(E_{W_B^1}(W_C^1))$ \\\n      \end{tabular}\n    \end{center}\n  \end{minipage}\n\n\item As a final step, Alice generates a random permutation of the truth table's rows, and the result is a \textit{garbled} table.\n\n\end{itemize}\n\n\subsubsection{Data Transfer}\label{sss:data-transfer}\n\begin{itemize}\n  \item Alice sends the garbled truth table for every gate $g$ of the circuit to Bob.\n  \item Alice also sends the randomly generated labels corresponding to her input.\n  For example, if Alice's input $a$ is represented by the bits $a_4a_3a_2a_1a_0 = 01101$ then she will send the labels $W_{a_4}^0$, $W_{a_3}^1$, $W_{a_2}^1$, $W_{a_1}^0$, $W_{a_0}^1$ to Bob.\n  \item Bob also needs the labels corresponding to his input bits. For example, if Bob's input $b$ is represented by the bits $b_4b_3b_2b_1b_0 = 10100$ then he will need the labels $W_{b_4}^1$, $W_{b_3}^0$, $W_{b_2}^1$, $W_{b_1}^0$, $W_{b_0}^0$.\n  To obtain these labels, Alice and Bob run a 1 out of 2 oblivious transfer protocol for each bit (each input wire) of Bob's input $b$.\n  Using this 1-2 OT protocol, Bob learns only the labels corresponding to his input bits, and Alice learns nothing about Bob's input.\n  If Bob wants to obtain the label for input bit $b_4 = 1$, he will run the OT with Alice on the pair $(W_{b_4}^0, W_{b_4}^1)$.\n  Bob will learn only $W_{b_4}^1$ and not $W_{b_4}^0$, while Alice will not learn the value of $b_4$.\n\end{itemize}\n\n\subsubsection{Circuit Evaluation}\label{sss:circuit-evaluation}\n\begin{itemize}\n  \item Bob now has the garbled truth tables for each gate of the circuit as well as all input labels. Having one garbled value\myslash label per input wire and the truth table, Bob will try to decrypt every value in the output column of the table, but will succeed only once.\n  The decryption will succeed only in the row for which Bob has the input labels, so he can use them as keys in the decryption.\n  The decryption result will be the output label of that gate (the garbled version of the gate's output value).\n  This label will be used as an input for the next gate in the combinatorial circuit.\n  \item Bob repeats the process for each gate $g$ of the circuit, until he reaches the output gate(s) and acquires the output label(s).\n\end{itemize}\n\n
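As a toy illustration of the encoding and evaluation steps, the Python sketch below garbles and evaluates a single \texttt{AND} gate; using Fernet tokens as the label-keyed encryption and relying on failed decryptions to find the right row are illustrative assumptions, not a production garbling scheme.\n\n\begin{verbatim}\nimport random\nfrom cryptography.fernet import Fernet, InvalidToken\n\n# One random label per wire and Boolean value; labels double as Fernet keys\nlabels = {w: {v: Fernet.generate_key() for v in (0, 1)} for w in 'ABC'}\n\n# Garbling: doubly encrypt each output label under the two input labels\ntable = [Fernet(labels['A'][a]).encrypt(\n             Fernet(labels['B'][b]).encrypt(labels['C'][a & b]))\n         for a in (0, 1) for b in (0, 1)]\nrandom.shuffle(table)  # random permutation of the rows\n\n# Evaluation: holding one label per input wire, trial-decrypt every row;\n# exactly one decryption succeeds\nwa, wb = labels['A'][1], labels['B'][1]\nfor row in table:\n    try:\n        out = Fernet(wb).decrypt(Fernet(wa).decrypt(row))\n        break\n    except InvalidToken:\n        pass\nassert out == labels['C'][1]  # the garbled output of 1 AND 1\n\end{verbatim}\n\n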
\subsubsection{Output Revealing}\label{ss:output-revealing}\n\begin{itemize}\n  \item Alice knows the Boolean value corresponding to the output label(s), so Alice and Bob communicate in order to learn the computation result.\n  \item Either Alice will share her information about the original values of the output label(s) with Bob, or Bob will share the output label(s) with Alice and she can respond accordingly so that one or both of them learn the output.\n\end{itemize}\n\n\subsubsection{Overhead}\label{ss:overhead}\nThis protocol inevitably introduces a notable overhead. An overview of the most substantial factors, as described in \cite{lindell2009secure}, can be found below.\n\n\begin{itemize}\n  \item Alice and Bob engage in a 1 out of 2 oblivious transfer protocol for each input wire related to Bob's input. These oblivious transfers can dominate the computation overhead -- at least for relatively small circuits -- since they require modular exponentiations.\n  \item Alice sends to Bob a garbled truth table for every gate $g$ of the circuit. Thus, this operation's cost is proportional to the circuit size.\n  \item Bob decrypts a constant number of encrypted values for each gate $g$ of the circuit. This is also linear with respect to the size of the circuit.\n\end{itemize}\n\nMultiple attempts have been made to improve the performance of Yao's protocol. 
Many optimizations have been introduced, such as the ones found in \cite{beaver1990round, naor1999privacy, kolesnikov2008improved, bellare2013efficient, zahur2015two}.\n\nAnother generic protocol for secure multi\hyp party computation can be constructed using \textit{Secret Sharing}, a technique which we will describe in section \ref{s:secret-sharing}.\n", "meta": {"hexsha": "06f660081fde994de2ee2e079cf376535facbe79", "size": 12725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "smpc.tex", "max_stars_repo_name": "jimouris/master-thesis", "max_stars_repo_head_hexsha": "e424cdd458cb7ff964bebcaaecfb7cad5b3ea525", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-08-29T07:51:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-09T12:09:24.000Z", "max_issues_repo_path": "smpc.tex", "max_issues_repo_name": "jimouris/master-thesis", "max_issues_repo_head_hexsha": "e424cdd458cb7ff964bebcaaecfb7cad5b3ea525", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "smpc.tex", "max_forks_repo_name": "jimouris/master-thesis", "max_forks_repo_head_hexsha": "e424cdd458cb7ff964bebcaaecfb7cad5b3ea525", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-08-28T14:33:15.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-28T17:09:24.000Z", "avg_line_length": 72.7142857143, "max_line_length": 376, "alphanum_fraction": 0.7400392927, "num_tokens": 3507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.576262450533918}}
{"text": "\subsection{The reduce operator}\n\label{reduce_operator}\nConsider the case where we want to perform many different reductions (e.g. min,\nmax, sum, \ldots) over the same vector. The simplest way to do this is\n\begin{verbatim}\nx = Q.sum(w); y = Q.min(w); z = Q.max(w)\n\end{verbatim}\nHowever, this necessitates several scans over the vector \(w\). It is more\nefficient to evaluate the vector \(w\) a chunk at a time, perform all the\nreductions on the chunk, store the partial results and then repeat over\nsuccessive chunks. The reduce operator takes a Vector as input and produces one\nor more Scalars as output. It does so by\n(i) creating a Reducer for each of the desired Scalars,\n(ii) repeatedly invoking {\tt next()} on them until no more data remains, and\n(iii) invoking {\tt value()} on each Reducer and returning the results.\n\nWe try to avoid memo-izing vectors when we don't have to because\nof the cost of flushing to disk. In our experiments, for \(n > 2^{25}\),\nmemo-izing the input \(w\) and computing min/max/sum sequentially\nis {\em six} times slower than using the reduce operator.\n", "meta": {"hexsha": "1265ebddab441692dcfd6a57134cdd74031162f5", "size": 1071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DOC/Q_PAPER/CIDR2019/reduce_operator.tex", "max_stars_repo_name": "subramon/qlu", "max_stars_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DOC/Q_PAPER/CIDR2019/reduce_operator.tex", "max_issues_repo_name": "subramon/qlu", "max_issues_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-07-29T16:48:25.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-26T23:47:22.000Z", "max_forks_repo_path": "DOC/Q_PAPER/CIDR2019/reduce_operator.tex", "max_forks_repo_name": "subramon/qlu", "max_forks_repo_head_hexsha": "2fb8a2b3636dd11e2dfeae2a6477bd130316da47", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-14T22:34:13.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-14T22:34:13.000Z", "avg_line_length": 51.0, "max_line_length": 79, "alphanum_fraction": 0.7441643324, "num_tokens": 271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.576108175070403}}
{"text": "\\documentclass[a4paper, 12pt]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\n\\usepackage[]{amsfonts}\n\\usepackage[]{graphicx}\n\n\\title{CS231A Course Notes 4: Stereo Systems and Structure from Motion}\n\\author{Kenji Hata and Silvio Savarese}\n\\date{}\n\n\\renewcommand\\emph{\\textbf}\n\n\\numberwithin{equation}{section}\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nIn the previous notes, we covered how adding additional viewpoints of a scene can greatly enhance our knowledge of the said scene. We focused on the epipolar geometry setup in order to relate points of one image plane to points in the other without extracting any information about the 3D scene. In these lecture notes, we will discuss how to recover information about the 3D scene from multiple 2D images.\n\n\\section{Triangulation}\nOne of the most fundamental problems in multiple view geometry is the problem of \\emph{triangulation}, the process of determining the location of a 3D point given its projections into two or more images. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/triangulation.png}\n\\caption{The setup of the triangulation problem when given two views.}\n\\label{fig:triangulation}\n\\end{figure}\n\nIn the triangulation problem with two views, we have two cameras with known camera intrinsic parameters $K$ and $K'$ respectively. We also know the relative orientations and offsets $R,T$ of these cameras with respect to each other. Suppose that we have a point $P$ in 3D, which can be found in the images of the two cameras at $p$ and $p'$ respectively. Although the location of $P$ is currently unknown, we can measure the exact locations of $p$ and $p'$ in the image. Because $K, K', R, T$ are known, we can compute the two lines of sight $\\ell$ and $\\ell'$, which are defined by the camera centers $O_1, O_2$ and the image locations $p, p'$. Therefore, $P$ can be computed as the intersection of $\\ell$ and $\\ell'$. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures/real_triangulation.png}\n\\caption{The triangulation problem in real-world scenarios often involves minimizing the reprojection error.}\n\\label{fig:real_triangulation}\n\\end{figure}\n\nAlthough this process appears both straightforward and mathematically sound, it does not work very well in practice. In the real world, because the observations $p$ and $p'$ are noisy and the camera calibration parameters are not precise, finding the intersection point of $\\ell$ and $\\ell'$ may be problematic. In most cases, it will not exist at all, as the two lines may never intersect. \n\n\\subsection{A linear method for triangulation}\nIn this section, we describe a simple linear triangulation method that solves the lack of an intersection point between rays. We are given two points in the images that correspond to each other $p = MP = (x,y,1)$ and $p'=M'P = (x', y', 1)$. By the definition of the cross product, $p\\times(MP) = 0$. We can explicitly use the equalities generated by the cross product to form three constraints:\n\\begin{equation}\n    \\begin{split}\n        x(M_3P) - (M_1P) = 0\\\\\n        y(M_3P) - (M_2P) = 0\\\\\n        x(M_2P) - y(M_1P) = 0\n    \\end{split}\n\\end{equation}\nwhere $M_i$ is the $i$-th row of the matrix $M$. Similar constraints can be formulated for $p'$ and $M'$. 
Using the constraints from both images, we can formulate a linear equation of the form $AP = 0$ where\n\begin{equation}\n    A = \begin{bmatrix}\n        xM_3 - M_1 \\ \n        y M_3 - M_2 \\\n        x'M'_3 - M'_1 \\\n        y'M'_3 - M'_2\n    \end{bmatrix}\n\end{equation}\nThis equation can be solved using SVD to find the best linear estimate of the point $P$. Another interesting aspect of this method is that it can actually handle triangulating from multiple views as well. To do so, one simply appends additional rows to $A$ corresponding to the constraints added by the new views.\n\n
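As a concrete sketch, the linear method fits in a few lines of NumPy; the function below is an illustrative two-view implementation, assuming $3\times4$ camera matrices and inhomogeneous pixel coordinates.\n\n\begin{verbatim}\nimport numpy as np\n\ndef triangulate_linear(M1, M2, p1, p2):\n    # Stack the four cross-product constraints from the two views\n    A = np.array([p1[0] * M1[2] - M1[0],\n                  p1[1] * M1[2] - M1[1],\n                  p2[0] * M2[2] - M2[0],\n                  p2[1] * M2[2] - M2[1]])\n    # The least-squares solution of AP = 0 with ||P|| = 1 is the\n    # right singular vector with the smallest singular value\n    _, _, Vt = np.linalg.svd(A)\n    P = Vt[-1]\n    return P[:3] / P[3]  # back to inhomogeneous 3D coordinates\n\end{verbatim}\n\n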
This method, however, is not suitable for projective reconstruction, as it is not projective-invariant. For example, suppose we replace the camera matrices $M, M'$ with ones affected by a projective transformation $MH^{-1}, M'H^{-1}$. The matrix of linear equations $A$ then becomes $AH^{-1}$. Therefore, a solution $P$ to the previous estimation of $AP=0$ will correspond to a solution $HP$ for the transformed problem $(AH^{-1})(HP) = 0$. Recall that SVD solves for the constraint that $\|P\| = 1$, which is not invariant under a projective transformation $H$. Therefore, this method, although simple, is often not the optimal solution to the triangulation problem.\n\n\subsection{A nonlinear method for triangulation}\nInstead, the triangulation problem for real-world scenarios is often mathematically characterized as solving a minimization problem:\n\begin{equation}\n    \min_{\hat{P}} \|M\hat{P}-p\|^2 + \|M'\hat{P}-p'\|^2\n    \label{eq:reprojection_error}\n\end{equation}\nIn the above equation, we seek to find a $\hat{P}$ in 3D that best approximates $P$ by finding the best least-squares estimate of the \emph{reprojection error} of $\hat{P}$ in both images. The reprojection error for a 3D point in an image is the distance between the projection of that point in the image and the corresponding observed point in the image plane. In the case of our example in Figure~\ref{fig:real_triangulation}, since $M$ is the projective transformation from 3D space to image 1, the projected point of $\hat{P}$ in image 1 is $M\hat{P}$. The matching observation of $\hat{P}$ in image 1 is $p$. Thus, the reprojection error for point $P$ in image 1 is the distance $\|M\hat{P} - p\|$. The overall reprojection error found in Equation~\ref{eq:reprojection_error} is the sum of the reprojection errors across all the points in the image. For cases with more than two images, we would simply add more distance terms to the objective function.\n\begin{equation}\n    \min_{\hat{P}} \sum_i \|M_i\hat{P}-p_i\|^2\n    \label{eq:reprojection_error_multi_camera}\n\end{equation}\n\nIn practice, there exists a variety of very sophisticated optimization techniques that result in good approximations to the problem. However, for the scope of the class, we will focus on only one of these techniques, which is the Gauss-Newton algorithm for nonlinear least squares. The general nonlinear least squares problem is to find an $x\in \mathbb{R}^n$ that minimizes\n\begin{equation}\n    \|r(x)\|^2 = \sum_{i=1}^m r_i(x)^2\n\end{equation}\nwhere $r$ is any residual function $r:\mathbb{R}^n\rightarrow \mathbb{R}^m$ such that $r(x) = f(x) - y$ for some function $f$, input $x$, and observation $y$. The nonlinear least squares problem reduces to the regular, linear least squares problem when the function $f$ is linear. However, recall that, in general, our camera matrices are not affine. Because the projection into the image plane often involves a division by the homogeneous coordinate, the projection into the image is generally nonlinear.\n\nNotice that if we set $e_i$ to be the $2\times1$ vector $e_i = M_i\hat{P} - p_i$, then we can reformulate our optimization problem to be:\n\begin{equation}\n    \min_{\hat{P}} \sum_i e_{i}(\hat{P})^2 \n\end{equation}\nwhich can be perfectly represented as a nonlinear least squares problem.\n\nIn these notes, we will cover how we can use the popular Gauss-Newton algorithm to find an approximate solution to this nonlinear least squares problem. First, let us assume that we have a somewhat reasonable estimate of the 3D point $\hat{P}$, which we can compute by the previous linear method. The idea is then to iteratively update our estimate, correcting it towards an even better estimate that minimizes the reprojection error. At each step we want to update our estimate $\hat{P}$ by some $\delta_P$: $\hat{P} = \hat{P} + \delta_P$.\n\nBut how do we choose the update parameter $\delta_P$? The key insight of the Gauss-Newton algorithm is to linearize the residual function near the current estimate $\hat{P}$. In the case of our problem, this means that the residual error $e$ of a point $P$ can be thought of as:\n\begin{equation}\n    e(\hat{P} + \delta_P) \approx e(\hat{P}) + \frac{\partial e}{\partial P}\delta_P\n\end{equation}\nSubsequently, the minimization problem transforms into\n\begin{equation}\n    \min_{\delta_P} \| \frac{\partial e}{\partial P}\delta_P - (-e(\hat{P})) \|^2\n\end{equation}\nWhen we formulate the residual like this, we can see that it takes the format of the standard linear least squares problem. For the triangulation problem with $N$ images, the linear least squares solution is\n\begin{equation}\n    \delta_P = -(J^TJ)^{-1} J^Te\n\end{equation}\nwhere \n\begin{equation}\n    e = \begin{bmatrix} e_1 \\ \vdots \\ e_N\end{bmatrix} = \begin{bmatrix}p_1 - M_1\hat{P} \\ \vdots \\ p_N - M_N \hat{P} \end{bmatrix}\n\end{equation}\nand\n\begin{equation}\n    J = \begin{bmatrix}\n    \dfrac{\partial e_1}{\partial \hat{P}_1} & \dfrac{\partial e_1}{\partial \hat{P}_2}& \dfrac{\partial e_1}{\partial \hat{P}_3}\\\n    \vdots & \vdots & \vdots\\\n    \dfrac{\partial e_N}{\partial \hat{P}_1} & \dfrac{\partial e_N}{\partial \hat{P}_2} & \dfrac{\partial e_N}{\partial \hat{P}_3} \end{bmatrix}\n\end{equation}\n \nRecall that the residual error vector of a particular image $e_i$ is a $2\times 1$ vector because there are two dimensions in the image plane. Consequently, in the simplest two camera case ($N=2$) of triangulation, this results in the residual vector $e$ being a $2N\times1 = 4\times1$ vector and the Jacobian $J$ being a $2N\times3 = 4\times3$ matrix. Notice how this method handles multiple views seamlessly, as additional images are accounted for by adding the corresponding rows to the $e$ vector and $J$ matrix. After computing the update $\delta_P$, we can simply repeat the process for a fixed number of steps or until it numerically converges. One important caveat is that the linearization of the residual function near our estimate gives us no guarantee of convergence. Thus, it is always useful in practice to put an upper bound on the number of updates made to the estimate.\n\n
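The update rule is compact enough to sketch directly in NumPy; the function below is an illustrative refinement of a single point over any number of views, assuming $3\times4$ camera matrices, and it uses a finite-difference Jacobian in place of the analytic $\partial e/\partial P$ for brevity.\n\n\begin{verbatim}\nimport numpy as np\n\ndef project(M, P):\n    q = M @ np.append(P, 1.0)\n    return q[:2] / q[2]  # nonlinear perspective division\n\ndef residual(P, cameras, points):\n    # Stacked 2N residual vector e = [p_i - project(M_i, P)]\n    return np.concatenate([p - project(M, P)\n                           for M, p in zip(cameras, points)])\n\ndef gauss_newton(P, cameras, points, n_iters=10, h=1e-6):\n    for _ in range(n_iters):  # fixed cap: convergence is not guaranteed\n        e = residual(P, cameras, points)\n        # Finite-difference Jacobian of e, one column per coordinate of P\n        J = np.column_stack([(residual(P + h * d, cameras, points) - e) / h\n                             for d in np.eye(3)])\n        P = P - np.linalg.solve(J.T @ J, J.T @ e)  # delta_P update\n    return P\n\end{verbatim}\n\nA reasonable initial $P$ is the output of the linear method above.\n\n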
\section{Affine structure from motion}\nAt the end of the previous section, we hinted at how we can go beyond two views of a scene to gain information about the 3D scene. We will now explore the extension of the geometry of two cameras to multiple cameras. By combining observations of points from multiple views, we will be able to simultaneously determine both the 3D structure of the scene and the parameters of the camera in what is known as \emph{structure from motion}.\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.8\textwidth]{figures/sfm_setup.png}\n\caption{The setup of the general structure from motion problem.}\n\label{fig:sfm_setup}\n\end{figure}\n\nHere, we formally introduce the structure from motion problem. Suppose we have $m$ cameras with camera transformations $M_i$ encoding both the intrinsic and extrinsic parameters for the cameras. Let $X_j$ be one of the $n$ 3D points in the scene. Each 3D point may be visible in multiple cameras at the location $x_{ij}$, which is the projection of $X_j$ into the image of camera $i$ using the projective transformation $M_i$. The aim of structure from motion is to recover both the structure of the scene (the $n$ 3D points $X_j$) and the motion of the cameras (the $m$ projection matrices $M_i$) from all the observations $x_{ij}$.\n\n\subsection{The affine structure from motion problem}\nBefore tackling the general structure from motion problem, we will first start with a simpler problem, which assumes the cameras are affine or weak perspective. Ultimately, the lack of the perspective scaling operation makes the mathematical derivation easier for this problem.\n\nPreviously, we derived the projection equations for the perspective and weak perspective cases. Remember that in the full perspective model, the camera matrix is defined as \n\begin{equation}\nM = \begin{bmatrix}\n    A & b \\ v & 1\n\end{bmatrix}    \n\end{equation}\nwhere $v$ is some non-zero $1\times3$ vector. On the other hand, for the weak perspective model, $v=0$. We find that this property makes the homogeneous coordinate of $MX$ equal to $1$: \n\begin{equation}\n    x = MX = \begin{bmatrix}m_1 \\ m_2 \\ 0 ~~ 0 ~~ 0 ~~ 1\end{bmatrix} \begin{bmatrix}X_1\\X_2\\X_3 \\ 1\end{bmatrix} = \begin{bmatrix}m_1X \\ m_2 X \\ 1\end{bmatrix}\n\end{equation}\n\nConsequently, the nonlinearity of the projective transformation disappears as we move from homogeneous to Euclidean coordinates, and the weak perspective transformation acts as a mere magnifier. We can more compactly represent the projection as:\n\begin{equation}\n    \begin{bmatrix} m_1X \\ m_2X \end{bmatrix} = \begin{bmatrix}A & b \end{bmatrix} X = AX + b\n\end{equation}\nand represent any camera matrix in the format $M_\mathrm{affine} = \begin{bmatrix}A &b \end{bmatrix}$. Thus, we now use the affine camera model to express the relationship between a point $X_j$ in 3D and the corresponding observations in each affine camera (for instance, $x_{ij}$ in camera $i$).\n\nReturning to the structure from motion problem, we need to estimate the $m$ matrices $M_i$ and the $n$ world coordinate vectors $X_j$, for a total of $8m+3n$ unknowns, from $mn$ observations. Each observation provides 2 constraints, so there are $2mn$ equations in $8m+3n$ unknowns. 
We can use this count to derive a lower bound on the number of corresponding observations we need in each of the images: we require $2mn \geq 8m+3n$. For example, if we have $m=2$ cameras, then we need to have at least $n=16$ points in 3D. However, once we do have enough corresponding points labeled in each image, how do we solve this problem?  \n\n\subsection{The Tomasi and Kanade factorization method} \nIn this part, we outline Tomasi and Kanade\u2019s \emph{factorization method} for solving the affine structure from motion problem. This method consists of two major steps: the data centering step and the actual factorization step. \n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.8\textwidth]{figures/factorization_centering.png}\n\caption{When applying the centering step, we translate all of the image points such that their centroid (denoted as the lower left red cross) is located at the origin in the image plane. Similarly, we place the world coordinate system such that the origin is at the centroid of the 3D points (denoted as the upper right red cross).}\n\label{fig:factorization_centering}\n\end{figure}\n\nLet\u2019s begin with the data centering step. In this step, the main idea is to center the data at the origin. To do so, for each image $i$, we redefine new coordinates $\hat{x}_{ij}$ for each image point $x_{ij}$ by subtracting out their centroid $\bar{x}_i$:\n\begin{equation}\n    \hat{x}_{ij} = x_{ij} - \bar{x}_{i} = x_{ij} - \frac{1}{n}\sum_{k=1}^{n}{x_{ik}}\n    \label{eq:mean}\n\end{equation}\nRecall that the affine structure from motion problem allows us to define the relationship between image points $x_{ij}$, the camera matrix variables $A_i$ and $b_i$, and the 3D points $X_j$ as:\n\begin{equation}\n    x_{ij}= A_i X_j + b_i\n    \label{eq:affine_relationship}\n\end{equation}\nAfter this centering step, we can combine the definition of the centered image points $\hat{x}_{ij}$ in Equation~\ref{eq:mean} and the affine expression in Equation~\ref{eq:affine_relationship}:\n\begin{equation}\n\begin{split}\n    \hat{x}_{ij} &= x_{ij} -  \frac{1}{n}\sum_{k=1}^{n}{x_{ik}}\\\n    & = A_iX_j - \frac{1}{n}\sum_{k=1}^{n}{A_i X_k}\\\n    & = A_i(X_j - \frac{1}{n}\sum_{k=1}^{n} X_k) \\ \n    & = A_i(X_j - \bar{X})\\\n    & = A_i\hat{X}_j\n\end{split}\n\label{eq:centered_relationship}\n\end{equation}\n\nAs we see from Equation~\ref{eq:centered_relationship}, if we translate the origin of the world reference system to the centroid $\bar{X}$, then the centered coordinates of the image points $\hat{x}_{ij}$ and the centered coordinates of the 3D points $\hat{X}_{j}$ are related only by a single $2\times 3$ matrix $A_i$. Ultimately, the centering step of the factorization method allows us to create a compact matrix product representation to relate the 3D structure with the observed points in multiple images.\n\nHowever, notice that in the matrix product $\hat{x}_{ij} = A_i\hat{X}_j$, we only have access to the values on the left hand side of the equation. Thus, we must somehow factor out the motion matrices $A_i$ and structure $X_j$. 
Using all the observations for all the cameras, we can build a measurement matrix $D$, made up of the $n$ observations in the $m$ cameras (remember that each $\hat{x}_{ij}$ entry is a $2\times1$ vector):\n\begin{equation}\n    D = \begin{bmatrix}\n        \hat{x}_{11} & \hat{x}_{12} & \hdots & \hat{x}_{1n}\\\n        \hat{x}_{21} & \hat{x}_{22} & \hdots & \hat{x}_{2n}\\\n        & & \ddots & \\\n        \hat{x}_{m1} & \hat{x}_{m2} & \hdots & \hat{x}_{mn}\\\n    \end{bmatrix}\n\end{equation}\n\nNow recall that because of our affine assumption, $D$ can be expressed as the product of the $2m\times3$ motion matrix $M$ (which comprises the camera matrices $A_1, \ldots A_m$) and the $3\times n$ structure matrix $S$ (which comprises the 3D points $X_1, \ldots X_n$). An important fact that we will use is that $\mathrm{rank}(D) = 3$, since $D$ is the product of two matrices that each have rank at most 3.\n\nTo factorize $D$ into $M$ and $S$, we will use the singular value decomposition, $D = U \Sigma V^T$. Since $\mathrm{rank}(D) = 3$, there will be only 3 non-zero singular values $\sigma_1 , \sigma_2$, and $\sigma_3$ in $\Sigma$. Thus, we can further reduce the expression and obtain the following decomposition:\n\begin{equation}\n    \begin{split}\n        D &= U \Sigma V^T\\ \n        &= \begin{bmatrix} u_1 & \hdots & u_n \end{bmatrix} \n        \begin{bmatrix}\n            \sigma_1 & 0 & 0 & 0 & \hdots & 0\\\n            0 & \sigma_2 & 0 & 0 & \hdots & 0\\ \n            0 & 0 & \sigma_3 & 0 & \hdots & 0\\ \n            0 & 0 & 0 & 0 & \hdots & 0\\ \n             &  &  & & \ddots & \\ \n            0 & 0 & 0 & 0 & \hdots & 0\\ \n        \end{bmatrix}\n        \begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}\\\n        & = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \n        \begin{bmatrix}\n            \sigma_1 & 0 & 0\\\n            0 & \sigma_2 & 0\\ \n            0 & 0 & \sigma_3\n        \end{bmatrix}\n        \begin{bmatrix} v_1^T \\ v_2^T \\ v_3^T \end{bmatrix}\\\n        & = U_3 \Sigma_3 V^T_3\n    \end{split}\n    \label{eq:svd_factorization}\n\end{equation}\n\nIn this decomposition, $\Sigma_3$ is defined as the diagonal matrix formed by the non-zero singular values, while $U_3$ and $V^T_3$ are obtained by taking the corresponding three columns of $U$ and rows of $V^T$ respectively. Unfortunately, in practice, $\mathrm{rank}(D) > 3$ because of measurement noise and the affine camera approximation. However, recall that when $\mathrm{rank}(D) > 3$, $U_3 \Sigma_3 V_3^T$ is still the best possible rank-3 approximation of $D$ in the sense of the Frobenius norm.\n\nUpon close inspection, we see that the matrix product $\Sigma_3V_3^T$ forms a $3\times n$ matrix, which is exactly the same size as the structure matrix $S$. Similarly, $U_3$ is a $2m\times 3$ matrix, which is the same size as the motion matrix $M$. While this way of associating the components of the SVD decomposition to $M$ and $S$ leads to a physically and geometrically plausible solution of the affine structure from motion problem, this choice is not a unique solution. For example, we could also set the motion matrix to $M = U_3\Sigma_3$ and the structure matrix to $S = V_3^T$, since in either case the observation matrix $D$ is the same. So what factorization do we choose? In their paper, Tomasi and Kanade concluded that a robust choice of the factorization is $M = U_3\sqrt{\Sigma_3}$ and $S = \sqrt{\Sigma_3}V_3^T$.\n\n
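The whole pipeline, centering followed by a rank-3 truncated SVD, fits in a short NumPy sketch; the $(m, n, 2)$ packing of the observations below is an illustrative convention, and the recovered motion and structure are only defined up to the affine ambiguity discussed next.\n\n\begin{verbatim}\nimport numpy as np\n\ndef tomasi_kanade(x):\n    # x has shape (m, n, 2): n points observed in m affine cameras\n    m, n, _ = x.shape\n    x_hat = x - x.mean(axis=1, keepdims=True)  # centering, per camera\n    D = x_hat.transpose(0, 2, 1).reshape(2 * m, n)  # 2m x n matrix\n    U, s, Vt = np.linalg.svd(D, full_matrices=False)\n    sqrt_S3 = np.diag(np.sqrt(s[:3]))  # top three singular values\n    M = U[:, :3] @ sqrt_S3  # 2m x 3 motion matrix, stacking the A_i\n    S = sqrt_S3 @ Vt[:3]    # 3 x n structure matrix, columns are X_j\n    return M, S\n\end{verbatim}\n\n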
\subsection{Ambiguity in reconstruction}\nNevertheless, we find inherent ambiguity in any choice of the factorization $D=MS$, as any arbitrary, invertible $3 \times 3$ matrix $A$ may be inserted into the decomposition:\n\begin{equation}\n    D = MAA^{-1}S = (MA)(A^{-1}S)\n\end{equation}\nThis means that the camera matrices obtained from the motion $M$ and the 3D points obtained from the structure $S$ are determined only up to a multiplication by a common matrix $A$. Therefore, our solution is underdetermined, and requires extra constraints to resolve this affine ambiguity. When a reconstruction has affine ambiguity, it means that parallelism is preserved, but the metric scale is unknown.\n\nAnother important class of ambiguities for reconstruction is the similarity ambiguity, which occurs when a reconstruction is correct up to a similarity transform (rotation, translation and scaling). A reconstruction with only similarity ambiguity is known as a metric reconstruction. This ambiguity exists even when the cameras are intrinsically calibrated. The good news is that for calibrated cameras, the similarity ambiguity is the only ambiguity\footnote{See [Longuet-Higgins \u201981] for more details.}. \n\nThe fact that there is no way to recover the absolute scale of a scene from images is fairly intuitive. An object's scale, absolute position and canonical orientation will always be unknown unless we make further assumptions (e.g., we know the height of the house in the figure) or incorporate more data. This is because some attributes may compensate for others. For instance, to get the same image, we can simply move the object backwards and scale it accordingly. One such example of removing similarity ambiguity occurred during the camera calibration procedure, where we made the assumption that we know the location of the calibration points with respect to the world reference system. Knowing the size of the squares of the checkerboard enabled us to learn a metric scale of the 3D structure.\n\n\section{Perspective structure from motion}\nAfter studying the simplified affine structure from motion problem, let us now consider the general case for projective cameras $M_i$. In the general case with projective cameras, each camera matrix $M_i$ contains 11 degrees of freedom, as it is defined up to scale:\n\begin{equation}\nM_i=\n\begin{bmatrix}\n    a_{11}       & a_{12} & a_{13} &  b_{1} \\\n    a_{21}       & a_{22} & a_{23} &  b_{2} \\\n    a_{31}       & a_{32} & a_{33} &  1\n\end{bmatrix}\n\end{equation}\n\nMoreover, similar to the affine case where the solution can be found up to an affine transformation, solutions for structure and motion can be determined only up to a projective transformation in the general case: we can always arbitrarily apply a $4\times 4$ projective transformation $H$ to the motion matrix, as long as we also transform the structure matrix by the inverse transformation $H^{-1}$. The resulting observations in the image plane will still be the same.\n\nSimilar to the affine case, we can set up the general structure from motion problem as estimating both the $m$ motion matrices $M_i$ and the $n$ 3D points $X_j$ from the $mn$ observations $x_{ij}$. Because cameras and points can only be recovered up to a $4\times 4$ projective transformation up to scale (15 parameters), we have $11m+3n-15$ unknowns in $2mn$ equations. 
From these facts, we can determine the number of views and observations that are required to solve for the unknowns. \n\n\subsection{The algebraic approach}\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.5\textwidth]{figures/algebraic_setup.png}\n\caption{In the algebraic approach, we consider sequential camera pairs to determine camera matrices $M_1$ and $M_2$ up to a perspective transformation. We then find a perspective transformation $H$ such that $M_1H^{-1} = [I~~0]$ and $M_2H^{-1} = [A~~b]$.}\n\label{fig:algebraic_setup}\n\end{figure}\n\nWe will now cover the \emph{algebraic approach}, which leverages the concept of the fundamental matrix $F$ for solving the structure from motion problem for two cameras. As shown in Figure~\ref{fig:algebraic_setup}, the main idea of the algebraic approach is to compute two camera matrices $M_1$ and $M_2$, which can only be computed up to a perspective transformation $H$. Since each $M_i$ can only be computed up to a perspective transformation $H$, we can always consider an $H$ such that the first camera projection matrix $M_1 H^{-1}$ is canonical. Of course, the same transformation must also be applied to the second camera, which leads to the form shown:\n\n\begin{equation}\n    M_1 H^{-1} = [I ~~ 0] ~~~~~~~ M_2 H^{-1} = [A ~~ b]\n    \label{eq:mh}\n\end{equation}\n\nIn order to accomplish this task, we must first compute the fundamental matrix $F$ using the eight-point algorithm covered in the previous course notes. We now will use $F$ to estimate the projective camera matrices $M_1$ and $M_2$. In order to do this estimation, we define $P$ to be the 3D point corresponding to the observations $p$ and $p'$ in the two images. Since we have applied $H^{-1}$ to both camera projection matrices, we must also apply $H$ to the structure, giving us $\widetilde{P} = HP$. Therefore, we can relate the pixel coordinates $p$ and $p'$ to the transformed structure as follows:\n\n\begin{equation}\n    \begin{split}\n        p = M_1 P = M_1 H^{-1} H P = [I ~ | ~ 0] \widetilde{P}\\\n        p' = M_2 P = M_2 H^{-1} H P = [A ~ | ~ b] \widetilde{P}\n    \end{split}\n    \label{eq:x}\n\end{equation}\nAn interesting relationship between the two image correspondences $p$ and $p'$ arises from some creative substitutions:\n\n\begin{equation}\n    \begin{split}\n        p'&= [A|b] \widetilde{P}\\\n        &= A[I|0] \widetilde{P} +b \\\n        &= Ap+b\n    \end{split}\n    \label{eq:xpx}\n\end{equation}\nUsing Equation~\ref{eq:xpx}, we can write the cross product between $p'$ and $b$ as:\n\n\begin{equation}\n    p' \times b = (Ap+b) \times b = Ap \times b \n    \label{eq:xptimesb}\n\end{equation}\nBy the definition of the cross product, $p' \times b$ is perpendicular to $p'$. Therefore, we can write:\n\n\begin{equation}\n    \begin{split}\n        0 &= p'^T  ( p' \times b )  \\\n        &= p'^T  (Ap \times b )\\\n        &= p'^T \cdot (b \times Ap) \\\n        & = p'^T[b]_\times Ap\n    \end{split}\n    \label{eq:xpdot1}\n\end{equation}\nLooking at this constraint, it should remind you of the general definition of the Fundamental matrix, $p'^T Fp = 0$. If we set $F=[b]_\times A$, then extracting $A$ and $b$ simply breaks down to a decomposition problem. \n\nLet us begin by determining $b$. By the definition of the cross product, $b^T [b]_\times = 0$, so $b$ lies in the left null space of $F$:\n\begin{equation}\n    b^T F = b^T [b]_\times A = 0\n    \label{eq:cross_b}\n\end{equation}\n
Since $F$ is singular, $b$ can be computed as a least-squares solution of $F^T b = 0$, with $\|b\|=1$, using SVD. \n\nOnce $b$ is known, we can compute $A$. If we set $A=-[b]_\times F$, then we can verify that this definition satisfies $F = [b]_\times A$:\n\begin{equation}\n    \begin{split}\n        [b]_\times A &= -[b]_\times[b]_\times F \\\n        & =(|b|^2I - bb^T)F \\\n        &= |b|^2 F - b(b^TF) \\\n        &= 1\cdot F - 0 \\\n        &=F\n    \end{split}\n    \label{eq:bap}\n\end{equation}\nwhere we have used $\|b\| = 1$ and $b^T F = 0$.\nConsequently, we determine the two expressions for our camera matrices $M_1H^{-1}$ and $M_2H^{-1}$:\n\begin{equation}\n    \tilde{M}_1 = [I ~~ 0] ~~~~~~~~ \tilde{M}_2 = [- [b]_\times F ~~ b]\n    \label{eq:mm}\n\end{equation}\n\nBefore we conclude this section, we want to give a geometrical interpretation for $b$. We know $b$ satisfies $F^T b=0$. Remember the epipolar constraints we derived in the previous course notes, which found that the epipoles in an image are the points that map to zero when transformed by the Fundamental matrix (i.e. $Fe_2=0$ and $F^T e_1=0$). We can see, therefore, that $b$ is an epipole. This provides a new set of equations for the camera projection matrices (Eqs. \ref{eq:mm2}). \n\n\begin{equation}\n    \tilde{M}_1 = [I ~~ 0] ~~~~~~~~ \tilde{M}_2 = [- [e]_\times F ~~ e]\n    \label{eq:mm2}\n\end{equation}\n\n\subsection{Determining motion from the Essential matrix}\nOne useful way of improving the reconstruction obtained by the algebraic approach is to use calibrated cameras. We can extract a more accurate, initial estimate of the camera matrices by using the Essential matrix, which is a special case of the Fundamental matrix for normalized coordinates. Recall that, by using the Essential matrix $E$, we make the assumption that we have calibrated the camera and thus know the intrinsic camera matrix $K$. We can compute the Essential matrix $E$ either from the normalized image coordinates directly or from its relationship with the Fundamental matrix $F$ and intrinsic matrix $K$:\n\n\begin{equation}\n    E = K^TFK\n\end{equation}\n\nBecause the Essential matrix assumes that we have calibrated cameras, we should remember that it only has five degrees of freedom, as it only encodes the extrinsic parameters: the rotation $R$ and translation $t$ between the cameras. Luckily, this is exactly the information that we want to extract to create our motion matrix. First, recall that the Essential matrix $E$ can be represented as\n\begin{equation}\n    E = [t]_\times R\n\end{equation}\nAs such, perhaps we can find a strategy to factor $E$ into its two components. To do so, we should notice that the cross product matrix $[t]_\times$ is skew-symmetric. We define two matrices that we will use in the decomposition:\n\begin{equation}\n     W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}, ~~~ Z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0\end{bmatrix}\n\end{equation}\nOne important property we will use later is that $Z = \mathrm{diag}(1,1,0)W$ up to a sign. Similarly, we will also use the fact that $ZW = ZW^T = \mathrm{diag}(1,1,0)$ up to a sign. \n\nAs a result of the eigenvalue decomposition, we can create a block decomposition of a general skew-symmetric matrix, known up to scale. Thus, we can write $[t]_\times$ as \n\begin{equation}\n    [t]_\times  = UZU^T\n\end{equation}\nwhere $U$ is some orthogonal matrix. 
Therefore, we can rewrite the decomposition as:\n\begin{equation}\n    E = U\mathrm{diag}(1,1,0)  (WU^TR)\n\end{equation}\n\nLooking at this expression carefully, we see that it closely resembles the singular value decomposition $E = U\Sigma V^T$, where $\Sigma$ contains two equal singular values. If we know $E$ up to scale and we assume that it takes the form $E = U \mathrm{diag}(1,1,0)V^T$, then we arrive at the following factorizations of $E$:\n\begin{equation}\n    [t]_\times  = UZU^T, ~~~~ R= UWV^T ~ \mathrm{or} ~ UW^TV^T\n\end{equation}\nWe can prove that the given factorizations are valid by inspection. We can also prove that there are no other factorizations. The form of $[t]_\times$ is determined by the fact that its left null space must be the same as the null space of $E$. Given unitary matrices $U$ and $V$, any rotation $R$ can be decomposed into $UXV^T$ where $X$ is another rotation matrix. After substituting these values in, we get $ZX = \mathrm{diag}(1,1,0)$ up to scale. Thus, $X$ must be equal to $W$ or $W^T$. \n\nNote that this factorization of $E$ only guarantees that the matrices $UWV^T$ and $UW^TV^T$ are orthogonal. To ensure that $R$ is a valid rotation, we simply make sure that the determinant of $R$ is positive:\n\begin{equation}\n    R= (\det UWV^T)UWV^T ~ \mathrm{or} ~ (\det UW^TV^T)UW^TV^T\n\end{equation}\n\nSimilar to how the rotation $R$ can take on two potential values, the translation vector $t$ can also take on several values. From the definition of the cross product, we know that\n\begin{equation}\n    t \times t = [t]_\times t = UZU^T t = 0\n\end{equation}\nKnowing that $U$ is unitary, we can find that $\|[t]_\times\|_F = \sqrt{2}$. Therefore, our estimate of $t$ from this factorization will come from the above equation and the fact that $E$ is known up to scale. This means that \n\begin{equation}\n    t = \pm U\begin{bmatrix}0\\0\\1\end{bmatrix} = \pm u_3\n\end{equation}\nwhere $u_3$ is the third column of $U$. By inspection, we can also verify that we get the same results by reformatting $[t]_\times = UZU^T$ into the vector $t$ known up to a sign.\n\n\begin{figure}[h!]\n\centering\n\includegraphics[width=0.8\textwidth]{figures/four_rt.png}\n\caption{There are four possible solutions for extracting the relative camera rotation $R$ and translation $t$ from the Essential matrix. However, only in (a) is the reconstructed point in front of both of the cameras. (Figure taken from Hartley and Zisserman textbook page 260)}\n\label{fig:four_rt}\n\end{figure}\n\nAs illustrated in Figure~\ref{fig:four_rt}, there are four potential $R,t$ pairings, since there exist two options for both $R$ and $t$. Intuitively, these combine the two possible rotations with the two possible directions of translation. Therefore, under ideal conditions, we would only need to triangulate one point to determine the correct $R,t$ pair. For the correct $R,t$ pair, the triangulated point $\hat{P}$ exists in front of both cameras, which means that it has a positive $z$-coordinate with respect to both camera reference systems. Due to measurement noise, we often do not rely on triangulating only one point, but will instead triangulate many points and determine the correct $R,t$ pair as the one that places the most of these points in front of both cameras.\n\n
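The four candidate decompositions are straightforward to enumerate in NumPy; the sketch below is illustrative, and selecting among the four still requires the cheirality check described above (triangulating points and keeping the pair that places them in front of both cameras).\n\n\begin{verbatim}\nimport numpy as np\n\ndef decompose_essential(E):\n    # Enumerate the four candidate (R, t) pairs encoded by E\n    U, _, Vt = np.linalg.svd(E)\n    W = np.array([[0., -1., 0.],\n                  [1.,  0., 0.],\n                  [0.,  0., 1.]])\n    t = U[:, 2]  # +/- u_3, the last column of U\n    Rs = [U @ W @ Vt, U @ W.T @ Vt]\n    Rs = [np.linalg.det(R) * R for R in Rs]  # force det(R) = +1\n    return [(R, s * t) for R in Rs for s in (1.0, -1.0)]\n\end{verbatim}\n\n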
\section{An example structure from motion pipeline}\nAfter finding the relative motion matrices $M_i$, we can use them to determine the world coordinates of the points $X_j$. In the case of the algebraic method, the estimates of such points will be correct up to the perspective transformation $H$. When extracting the camera matrices from the Essential matrix, the estimates are instead known up to scale. In both cases, the 3D points can be computed from the estimated camera matrices via the triangulation methods described earlier.\n\nThe extension to the multi-view case can be done by chaining pairwise cameras. We can use the algebraic approach or the Essential matrix to obtain solutions for the camera matrices and the 3D points for any pair of cameras, provided that there are enough point correspondences. The reconstructed 3D points are associated with the point correspondences available between the camera pair. Those pairwise solutions may then be combined and optimized in an approach called bundle adjustment, as we will see next.\n\n\subsection{Bundle adjustment}\nThe previous methods we have discussed for solving the structure from motion problem have some major limitations. The factorization method assumes that all points are visible in every image. This is very unlikely to hold because of occlusions and failures to find correspondences, especially when we have many images or some of the images were taken far apart. Finally, the algebraic approach produces pairwise solutions that can be combined into a camera chain, but does not solve for a coherent, optimized reconstruction using all the cameras and 3D points.\n\nTo address these limitations, we introduce \emph{bundle adjustment}, which is a nonlinear method for solving the structure from motion problem. In the optimization, we aim to minimize the reprojection error, which is the pixel distance between the projection of each reconstructed point into the estimated cameras and its corresponding observations, for all the cameras and for all the points. Previously, when discussing nonlinear optimization methods for triangulation, we focused primarily on the two camera case, in which we naturally assumed that each camera saw all the correspondences between the two. However, since bundle adjustment handles several cameras, it calculates the reprojection error only for the observations that are actually visible in each camera. Ultimately though, this optimization problem is very similar to the one we introduced when talking about nonlinear methods for triangulation. \n\nTwo common approaches for solving bundle adjustment's nonlinear optimization include the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. You can refer to the previous section for details on the Gauss-Newton algorithm and to the Hartley and Zisserman textbook for more details on the Levenberg-Marquardt algorithm.\n\nIn conclusion, bundle adjustment has some important advantages and limitations when compared to the other methods we have surveyed. It is particularly useful because it can handle a large number of views smoothly and also handle cases when particular points are not observable by every image. 
However, the main limitation is that it is a particularly large minimization problem, as the number of parameters grows with the number of views. Additionally, it requires a good initial condition since it relies on nonlinear optimization techniques. For this reason, bundle adjustment is often used as the final step of most structure from motion implementations (i.e., after the factorization or algebraic approach), since those methods may provide a good initial solution for the optimization problem. \n\\end{document}\n", "meta": {"hexsha": "a0d49fc591be8ee3ba5643322faf77b0d4f58093", "size": 36922, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "04-stereo-systems/04-stereo-systems.tex", "max_stars_repo_name": "zishanqin/cs231a-notes", "max_stars_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 287, "max_stars_repo_stars_event_min_datetime": "2017-04-03T00:30:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T03:52:04.000Z", "max_issues_repo_path": "04-stereo-systems/04-stereo-systems.tex", "max_issues_repo_name": "zishanqin/cs231a-notes", "max_issues_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2019-06-26T11:23:10.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-16T09:00:43.000Z", "max_forks_repo_path": "04-stereo-systems/04-stereo-systems.tex", "max_forks_repo_name": "zishanqin/cs231a-notes", "max_forks_repo_head_hexsha": "b864cfb7c472573ec1fb7da780748348fc320dab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 112, "max_forks_repo_forks_event_min_datetime": "2017-04-09T10:44:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T09:19:59.000Z", "avg_line_length": 90.2738386308, "max_line_length": 957, "alphanum_fraction": 0.7377173501, "num_tokens": 9958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7690802370707283, "lm_q1q2_score": 0.5761081714489258}}
{"text": "\\subsection{Comparison of Mechanisms}\nIn the previous section, we studied several different algorithmic approaches that guarantee different optimality criteria. To better evaluate the algorithms against the optimality criteria we defined in Section \\ref{sec:optimality}, we will now summarize and compare the properties of the algorithms. Afterwards, we will look at practical results that were obtained by performing experiments of matching mechanisms with one- and two-sided preferences by Diebold and Bichler \\cite{DieboldBenchmark}.\n\n\\subsubsection{Theoretical Results}\nReferring back to the list of desirable properties defined in Section \\ref{criteria-application}, let us now recap and compare the aforementioned algorithms to evaluate which one could be applicable for the problem of matching students to seminars. Unfortunately, none of the algorithms guarantee all of the optimality criteria at the same time, which makes the choice of an algorithm non-trivial. Table \\ref{tab:algorithm-comparison} gives an overview of the presented algorithms and their properties. Each of the algorithms mentioned in this section is listed, and for each optimality criteria, a yes/no encoding is used to make a statement about which properties an algorithm guarantees. It is important to note here that a \"no\" in a column does not strictly mean that the given optimality criteria cannot be fulfilled by the algorithm, but rather that the algorithm does not guarantee it. For instance, a matching computed with the greedy algorithm can be of maximum cardinality or be popular. Only the results for strategy-proofness are a strict yes or no, since fulfilling strategy-proofness does not depend on the instance of the problem, but only of the mechanism being used. \n\n\\begin{table}[h!]\n    \\begin{tabular}{l|llll}\n    \\hline\n                        & RSD    & Max-PaCHA    & Assignment & Popular-CHA           \\\\ \\hline\n    Maximum Cardinality & no     & yes          & yes        & yes               \\\\\n    Pareto-Optimal      & yes    & yes          & yes        & yes               \\\\\n    Popular             & no     & no           & no         & yes               \\\\\n    Rank Maximal        & no     & no           & yes        & no                \\\\\n    Always Exists       & yes    & yes          & yes        & no                \\\\\n    Strategy Proof      & yes    & no           & no         & yes               \\\\ \\hline\n    Time Complexity     & $\\mathcal{O}(n)$   & $\\mathcal{O}(\\sqrt{n} * m)$ & $\\approx\\mathcal{O}(n^3)$    & $\\mathcal{O}(\\sqrt{C} * n_1 + m)$ \\\\ \\hline\n    \\end{tabular}\n    \\caption{Comparison of Different Algorithmic Approaches}\n    \\label{tab:algorithm-comparison}\n\\end{table}\n\nTo summarize the results, we can see that all of the algorithms guarantee pareto-optimality, however only Popular-CHA guarantees popularity. At the same time, only RSD and Popular-CHA also guarantee strategy-proofness, which makes Popular-CHA particularly interesting for the student-seminar problem. \n\n\\paragraph{Strategy-proofness and Maximum Cardinality:}\nOne interesting observation is that fulfilling maximum cardinality comes at the cost of either not being strategy-proof, or not guaranteeing that a matching exists at all. Indeed, only RSD and Popular-CHA algorithm guarantee strategy-proofness. However, ensuring strategy-proofness and maximum cardinality at the same time comes at the cost of not always finding a matching (Popular CHA). 
If we look back at the Popular-CHA algorithm, we remember that a maximum cardinality matching $M'$ is computed on the reduced graph $G'$. We saw that a maximum popular matching does not exist iff the matching is not agent-complete, meaning that one of the agents is matched to their last-resort house. While this mechanism ensures strategy-proofness, it is also not always possible to find such a maximum cardinality matching using the Popular-CHA algorithm. Therefore, it remains an open question whether or not a mechanism exists that both is strategy-proof and always produces maximum-cardinality matchings.\n\n\\paragraph{Max-PaCHA and the Assignment Problem:}\nAnother important thing to notice is the similarity of the properties between Max-PaCHA and the assignment problem algorithm. Except for the fact that the assignment algorithm guarantees rank maximality, the two algorithms produce matchings with very similar characteristics, which raises the question of why one should use the Max-PaCHA algorithm. But looking at the runtime complexity of the algorithms, we see that, while both algorithms run in polynomial time, the assignment problem takes longer to be solved.\n\n\\subsubsection{Practical Results in the Literature}\\label{sec:practical-results-lit}\nDiebold and Bichler have published results for extensive experiments on matching mechanisms with both one- and two-sided preferences \\cite{DieboldBenchmark}. They used real course registration data from TUM for investigating properties, including size, rank and popularity, of matchings produced by several mechanisms. The mechanisms are the same as described in Section \\ref{chapter:algorithms}, with one exception being that the \\emph{ProB-CHAT} algorithm \\cite{DieboldBenchmark} is used in place of the Hungarian algorithm for finding rank-maximal matchings.\n\nThe authors found that all algorithms' matchings achieve an average cardinality of at least 97.48\\%. For one of the 9 instances, Popular-CHA failed to find a matching, but on the instances where it did find one, its matchings' average rank of 1.33 came close to the 1.26 produced by ProB-CHAT. Unsurprisingly, ProB-CHAT performed best on the rank metrics and also produced more popular matchings than all other algorithms except for Popular-CHA. However, its maximum runtime was the worst at 33.852s, compared to the second-highest 2.458s of Popular-CHA, on a dataset with 915 students and 51 courses.\n\nAnother surprising finding is that RSD, with an average rank of 1.41, performed better on rank metrics than Max-PaCHA, with an average rank of 1.51. Generally, the average ranks of all algorithms are somewhat close, lying in a range from 1.26 (ProB-CHAT) to 1.51 (Max-PaCHA). Unfortunately, the authors do not disclose more detailed information about their datasets and the structure of the matchings. \n\nWhile these results confirm some of the theoretical observations, it will be interesting to get more insights from differently structured datasets. Additionally, most of the data contains ties and we do not get any meaningful insights on the distribution of preferences. 
\n", "meta": {"hexsha": "9a8c518aca8d191249f9763e5875a8da8c4e858e", "size": 6536, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/4_algorithm_comparison.tex", "max_stars_repo_name": "aaronoe/bachelorarbeit", "max_stars_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/4_algorithm_comparison.tex", "max_issues_repo_name": "aaronoe/bachelorarbeit", "max_issues_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/4_algorithm_comparison.tex", "max_forks_repo_name": "aaronoe/bachelorarbeit", "max_forks_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 167.5897435897, "max_line_length": 1184, "alphanum_fraction": 0.7582619339, "num_tokens": 1391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5761081628329009}}
{"text": "\n\\vspace{-5mm}\nThis chapter gives routines for computing translation matrices that operate on the spherical wave function expansions given in Chapter \\ref{chap:wavefunctions}. Translation matrices, which are derived from addition theorems, allow us to represent a scalar or vector field in two different but parallel reference frames. The frame of reference is being translated, not the field. The addition theorems and matrix diagonalization are first outlined. Next, algorithms and routines for the scalar axial translation matrices (i.e., translation along the $z$-axis) are given which are computed on full or sparse matrices. Last, algorithms and routines for full and sparse vector axial translation matrices are derived from the scalar versions.  \n\n\\vspace{-2mm}\n\\section{Translation Addition Theorem}\n\nThe translation addition theorems for spherical wave functions allow the same field to be expanded in two different, but parallel, frames. These are given for scalar waves as \\cite{chew1995waves}, \n\\begin{eqnarray}\n\\psi_{l'm'}(k,\\br_i) &=& \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} \\alpha^{ji}_{lm,l'm'} \\textit{Rg}\\psi_{lm}(k,\\br_j), \\quad \\quad r_j < r_{ji} \\label{adthmscal1} \\\\\n\\psi_{l'm'}(k,\\br_i) &=& \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} \\beta^{ji}_{lm,l'm'}\\psi_{lm}(k,\\br_j), \\quad \\quad r_j > r_{ji} \\label{adthmscal2}\\\\\n\\textit{Rg}\\psi_{l'm'}(k,\\br_i) &=& \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} \\beta^{ji}_{lm,l'm'}\\textit{Rg}\\psi_{lm}(k,\\br_j), \\quad \\quad \\forall r_j \\label{adthmscal3}  \n\\end{eqnarray}\n\n\\noindent where $\\beta^{ji}_{lm,l'm'} = \\textit{Rg} \\alpha^{ji}_{lm,l'm'}$ are translation matrices and $\\textit{Rg}$ is the regular part. The position vectors, $\\br_i$ and $\\br_j$, are measured from the origins of each frame. The first relation expands radiating waves from frame $i$ as regular or incoming waves in frame $j$. The second expands radiating waves in frame $i$ as radiating waves in frame $j$. The last expresses regular waves in frame $i$ and regular waves in frame $j$. Each addition theorem has a different region of validity: \\eqref{adthmscal1} is valid within a sphere of radius $r_{ji} = \\vert \\bb{r}_{ji} \\vert = \\vert \\br_j- \\br_i\\vert$ centered on frame $j$; \\eqref{adthmscal2} is valid outside a radius $r_{ji}$ centered on frame $j$; \\eqref{adthmscal3}  is valid everywhere. \n\nTranslation addition theorems for vector spherical wave functions have similar structures, \\cite{chew1995waves}. The first is \n\\begin{eqnarray}\n\\bb{M}_{l'm'}(k,\\br_i) &=& \\sum_{l=1}^{\\infty}\\sum_{m=-l}^{l} A^{ji}_{lm,l'm'} \\textit{Rg}\\bb{M}_{lm}(k,\\br_j),  + B^{ji}_{lm,l'm'} \\textit{Rg}\\bb{N}_{lm}(\\br_j) \\\\\n\\bb{N}_{l'm'}(k,\\br_i) &=& \\sum_{l=1}^{\\infty}\\sum_{m=-l}^{l} B^{ji}_{lm,l'm'} \\textit{Rg}\\bb{M}_{lm}(k,\\br_j),  + A^{ji}_{lm,l'm'} \\textit{Rg}\\bb{N}_{lm}(\\br_j)\n\\end{eqnarray}\n\n\\noindent where $A^{ji}_{lm,l'm'}$ and $B^{ji}_{lm,l'm'}$ are vector translation matrices and is valid for $r_j < r_{ji}$. The remaining two relations are analogous to the scalar case and can be obtained with appropriate Hankel or Bessel functions in $A^{ji}_{lm,l'm'}$ and $B^{ji}_{lm,l'm'}$. These show that vector modes mix when translated. The spherical vector components are also transformed and are expressed relative to the origin of the new frame. 
This means that the vector components in each frame need to be converted to Cartesian components before comparing.\n\n\n\\begin{figure}[H] \n   \\centering\n   \\includegraphics[width=3.5in]{Translation/Figures/transdiagram} \n   \\caption{Coordinate frames $i$ and $j$.}\n   \\label{translationadd}\n\\end{figure}\n\n\nTranslation matrices allow us to manipulate fields using only the expansion coefficients.  For example, a radiating scalar field is expanded in frame $i$ as \n\\begin{equation}\n\\phi(\\br_i) = \\sum_{l'=0}^{\\infty}\\sum_{m'=-l'}^{l'} a_{l'm'} \\psi_{l'm'}(\\br_i)\n\\end{equation}\n\nThe same field expressed as incoming waves in frame $j$ comes from substituting \\eqref{adthmscal1} and regrouping terms as\n\\begin{eqnarray}\n\\phi(\\br_j) &=& \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} b_{lm}  \\textit{Rg}\\psi_{lm}(k,\\br_j) \\\\\nb_{lm} &=& \\sum_{l'=0}^{\\infty}\\sum_{m'=-l'}^{l'} \\alpha^{ji}_{lm,l'm'}a_{l'm'}\n\\end{eqnarray}\n%\n%\\begin{eqnarray}\n%\\phi(\\br_j) &=& \\sum_{l'=0}^{\\infty}\\sum_{m'=-l'}^{l'} a_{l'm'} \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} \\alpha^{ji}_{lm,l'm'} \\textit{Rg}\\psi_{lm}(k,\\br_j) \\nonumber \\\\\n%\\ &=& \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l} b_{lm}  \\textit{Rg}\\psi_{lm}(k,\\br_j) \\\\\n%b_{lm} &=& \\sum_{l'=0}^{\\infty}\\sum_{m'=-l'}^{l'} \\alpha^{ji}_{lm,l'm'}a_{l'm'}\n%\\end{eqnarray}\n\nLikewise for the other relations. In matrix notation this is \n\\begin{eqnarray}\n\\phi(k,\\br_i) &=&  \\boldsymbol{\\psi}^t(k,\\br_i)\\cdot \\bb{a} \\\\\n\\phi(k,\\br_j) &=&  \\textit{Rg}\\boldsymbol{\\psi}^t(k,\\br_j)\\cdot \\bb{b} \\\\\n\\bb{b} &=& \\boldsymbol{\\alpha}_{ji} \\bb{a}\n\\end{eqnarray}\n\nThe analogous matrix notation for vector waves is\n\\begin{eqnarray}\n\\bb{E}(k,\\br_i) &=& \\rvectwo{\\bb{M}^t(k,\\br_i)}{\\bb{N}^t(k,\\br_i)}\\cvectwo{\\bb{a}}{\\bb{b}} \\\\\n\\bb{E}(k,\\br_j)&=& \\rvectwo{\\Re\\bb{M}^t(k,\\br_j)}{\\Re\\bb{N}^t(k,\\br_j)}\\cvectwo{\\bb{c}}{\\bb{d}} \\\\\n\\cvectwo{\\bb{c}}{\\bb{d}}  &=&\\tbt{\\bb{A}_{ji}}{\\bb{B}_{ji}}{\\bb{B}_{ji}}{\\bb{A}_{ji}}\\cvectwo{\\bb{a}}{\\bb{b}}\n\\end{eqnarray}\n\nIn both scalar and vector cases the transpose, $^t$, means aligning the harmonic indices along columns. Technically the sums are infinite, but the matrices are always truncated in practice.\n\n\\section{Diagonalization}\n\nExplicit expressions for the matrix elements of the scalar and vector translation matrices are given in \\cite{chew1995waves}, but in terms of Wigner 3-j symbols, which are slow and inaccurate to compute. Fast recursions for computing the scalar and vector translation matrices were developed in \\cite{mackowski1991analysis, chew1992recurrence, chew1993efficient}.  \n\nThe standard way to compute the translation matrix is to break it into three matrices. The local frame, $i$, is first rotated to point its $z$-axis toward the destination frame, $j$. The local frame is then axially translated. Finally, the frame is rotated back. This effectively diagonalizes the translation matrix. 
Specifically, if the vector $\\br_{ji}$ points from the origin of frame $i$ to the origin of frame $j$, with magnitude $r_{ji}$ and spherical angles $\\theta_{ji}$, $\\phi_{ji}$, shown in Figure \\ref{translationadd}, then the scalar translation matrix is expanded (diagonalized) as\n\\begin{equation}\n\\boldsymbol{\\alpha}_{ji} = \\bb{D}^*(\\phi_{ji}+\\pi/2,\\theta_{ji},0) \\cdot \\boldsymbol{\\alpha}_z(r_{ji}) \\cdot \\bb{D}(\\phi_{ji}+\\pi/2,\\theta_{ji},0)\n\\label{alphazdiag}\n\\end{equation}\n\n\\noindent where $\\bb{D}$ is the rotation matrix with ZXZ convention, and $\\boldsymbol{\\alpha}_z(r_{ji})$ is the axial translation matrix in the $+z$ direction. The factor of $\\pi/2$ ensures that $\\theta_{ji}$ tips the $z$-axis toward the destination frame with a right-handed X rotation. The sequence starts with an inverse rotation on the right side because we first view the field, which is fixed in the global frame, from the point of view of the rotated frame. Next, this point of view is translated along the $z$-axis of the rotated frame. Finally, the reverse rotation (in this case a forward rotation) is applied to bring the frame back to be parallel with the global frame. This procedure also applies to the vector translation matrices $\\bb{A}_{ji}$ and $\\bb{B}_{ji}$.  \n\n\\begin{figure}[H] \n   \\centering\n   \\includegraphics[width=5in]{Translation/Figures/alphaz} \n   \\caption{Diagonalization of the scalar translation matrix.  L = 6.  Left is the full translation matrix.  Right shows the two rotation matrices bookending the z-axial translation matrix.}\n   \\label{figdiag}\n\\end{figure}\n\n\nWhile one can reduce the recurrence relations for the full translation matrices given in \\cite{chew1992recurrence, chew1993efficient} to the axial case, equations for the axial scalar translation are derived directly in \\cite{mackowski1991analysis}, from which the axial vector translations follow. These are given succinctly in \\cite{duan2015experimental}.  \n\nThe reason for the dramatic speed-up of the matrix-vector multiplication in the case of diagonalization over the full matrix is that the rotation matrix is block diagonal and the $z$-axial translation requires the fewest non-zero terms of all possible translation directions. Together these yield many fewer overall multiplications than when using the full matrix, and the computational advantage grows as the matrix grows. In addition, the rotation matrix only needs to be computed once (the inverse is just the conjugate). If the translation matrix is non-square, as will be the case when the number of harmonics differs between the two frames, then the rotation matrix can be computed once for the larger of $L$ and $L'$, which are the largest degrees along rows and columns, respectively, and then truncated.  Figure \\ref{figdiag} illustrates the diagonalization of the translation matrix. The matrices must be applied individually to the expansion coefficients from right to left, otherwise the benefits of diagonalization are lost.\n\n\n%[10] W. C. Chew, ``Recurrence Relations for Three-Dimensional Scalar Addition Theorem'', Journal of Electromagnetic Waves and Applications, Vol. 6, No. 2, pp. 133-142, 1992.\n%[11] Daniel W. Mackowski, ``Analysis of Radiative Scattering for Multiple Sphere Configurations'', Proceedings: Mathematical and Physical Sciences, Vol. 433, No. 1889, pp. 599-614, Jun. 1991.\n\n\\clearpage\n\\newpage\n\\section{Scalar Translation Matrix}\n\nIn this section we give a routine for the scalar axial translation matrix. 
We then derive a routine for indexing and computing the inherently sparse scalar axial translation matrix on a 1D array. We also give a routine that accomplishes the diagonalized matrix-vector multiplication and operates on the expansion coefficients directly. Finally, we provide a routine that computes the full scalar translation matrix directly. \n\n\\subsection{Scalar Axial Translation Matrix}\n\nThe recurrence equations for the scalar axial translation matrix, \\eqref{alphazdiag}, follow \\cite{mackowski1991analysis}.  The matrix elements are zero except for $m = m'$.  Let $(l,m)$ and $(l',m')$ correspond to the rows and columns, respectively.  While the sums are technically infinite, they are always truncated at maximum orders $L$ and $L'$, respectively.\n\nThe calculation is initialized with \n\\begin{equation}\n\\alpha_{l,0,0,0} = (-1)^l\\sqrt{2l+1}h_l^{(1)}(kr_{ji})\n\\end{equation}\n\nNext, $\\alpha_{l,m,l',\\vert l'\\vert}$ is computed from \n\\eq{\\alpha_{l,l'+1, l'+1,l'+1} = \\sqrt{\\dfrac{2l'+3}{2(l'+1)}} \\left[\\sqrt{\\dfrac{(l+l'+1)(l+l')}{(2l-1)(2l+1)}} \\alpha_{l-1,l',l',l'}  +   \\sqrt{\\dfrac{(l-l'+1)(l-l')}{(2l+3)(2l+1)}} \\alpha_{l+1,l',l',l'}    \\right]\n}\n\n%\\begin{eqnarray}\n%\\alpha_{l,l'+1, l'+1,l'+1} &=& \\sqrt{\\dfrac{2l'+3}{2(l'+1)}} \\left[\\sqrt{\\dfrac{(l+l'+1)(l+l')}{(2l-1)(2l+1)}} \\alpha_{l-1,l',l'l'} \\right. \\nonumber \\\\\n%\\ & \\ & \\left. +   \\sqrt{\\dfrac{(l-l'+1)(l-l')}{(2l+3)(2l+1)}} \\alpha_{l+1,l',l'l'}    \\right]\n%\\end{eqnarray}\n\nand $\\alpha_{l,-l', l',-l'} = \\alpha_{l,l',l',l'}$.  The remaining coefficients are obtained for $m = \\pm l'$ using\n\\eq{\\alpha_{l,m,l'+1,m} = \\sqrt{2l'+3}\\left[\\sqrt{\\dfrac{(l+l')(l-l')}{(2l-1)(2l+1)}} \\alpha_{l-1,m,l',m}  -  \\sqrt{\\dfrac{(l+l'+1)(l-l'+1)}{(2l+3)(2l+1)}} \\alpha_{l+1,m,l',m}    \\right] \n}\n\n%\n%\\begin{eqnarray}\n%\\alpha_{lm,l'+1,m} &=& \\sqrt{2l'+3}\\left[\\sqrt{\\dfrac{(l+l')(l-l')}{(2l-1)(2l+1)}} \\alpha_{l-1,m,l'm}  \\right.\\nonumber \\\\\n%\\ & \\ & - \\left.  \\sqrt{\\dfrac{(l+l'+1)(l-l'+1)}{(2l+3)(2l+1)}} \\alpha_{l+1,m,l'm}    \\right] \\nonumber \\\\\n%\\end{eqnarray}\n\nand $m \\ne \\pm l'$ using\n\\begin{eqnarray}\n\\alpha_{l,m,l'+1,m} &=& \\sqrt{\\dfrac{(2l'+3)(2l'+1)}{(l'+m+1)(l'-m+1)}}\\left[  \\sqrt{\\dfrac{(l'+m)(l'-m)}{(2l'+1)(2l'-1)}} \\alpha_{l,m,l'-1,m}   \\right. \\nonumber \\\\\n\\ & \\ & \\left. + \\sqrt{\\dfrac{(l+m)(l-m)}{(2l+1)(2l-1)}}\\alpha_{l-1,m,l',m} - \\sqrt{\\dfrac{(l+m+1)(l-m+1)}{(2l+3)(2l+1)}}\\alpha_{l+1,m,l',m} \\right] \\nonumber \\\\\n\\end{eqnarray}\n\nThat's about as confusing as it gets.  Some comments on computation:\n\n\\begin{enumerate}\n\\item The equations in \\cite{mackowski1991analysis} allow for translation in $\\pm z$ directions.  We only translate in $+z$.  \n\n\\item The algorithm iterates on $l+1$ for an element at $l'+1$. This has a cascading effect that, in order to compute the lower right block at $l = L$ and $l' = L'$, an additional element at $L+1$ is required, and so on.  For example, if the translation matrix is square, we must fill a triangular region to $l = L + L'$ at $l'=0$.  This is explained in detail in \\cite{chew1992recurrence}.  Once the computation is finished, the matrix is cropped to the intended size.  \n\n\\item We need to accommodate non-square matrices for translation of expansions of different harmonic content.  Because of the filling requirement above, it is advantageous to create the fill region in the direction of the larger of $L$ and $L'$.  
Rather than rewrite the routine to fill along rows or columns, we can make use of the following transpose relation \n\\begin{equation}\n\\alpha^{ij}_{lm,l'm'} = (-1)^{l+l'}\\alpha^{ji}_{lm,l'm'} \\label{azflip}\n\\end{equation}\n\nThe triangular fill region is appended to the bottom of the matrix, along the rows up to $\\textrm{max}(L,L')$. The matrix is then cropped and transposed if $L' > L$.  \n\n\\item The computation is the same for $\\alpha$ or $\\beta$, only the Bessel function changes at the initialization.  \n\n\\end{enumerate}\n\n\n\\begin{figure}[H] \n   \\centering\n   \\includegraphics[width=3.5in]{Translation/Figures/trans1} \n   \\caption{Scalar axial translation matrix, $L = L' = 8$, $k = 1$, $r_{ji} = 100$. }\n   \\label{fig4}\n\\end{figure}\n\n\n\nThe routine \\texttt{alphaz} takes as input the largest row degree, $L$, the largest column degree $L'$, wavenumber $k$, axial translation distance $r_{ji}$, and returns the translation matrix in full, including zeros.  Use string switch \\texttt{'rg'} for the regular form. This routine is a good starting point for visualization, but the output matrix should not be used as is for matrix-vector multiplication due to the zeros; rather, it should be converted to a sparse matrix. The routine is written with the variable $n$ in place of $l'$ for clarity. \n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/alphaz.m}\n}\n\n\n\\clearpage\n\\subsection{Sparse Scalar Axial Translation Matrix}\n\nWe want to compute the inherently sparse axial translation matrix on a 1D array, to avoid preallocating a potentially massive, mostly zero, matrix. Indexing is more complicated than for the sparse rotation matrix due to differently sized diagonal subblocks as well as the triangular fill region requirement. Analytic formulas for sparse indexing are derived and used below. As implemented, these add a little computation time relative to indexing on the full preallocated matrix. \n\n\\begin{figure}[H] \n  \\centering\n \\includegraphics[width=5.5in]{Translation/Figures/sparseindex_2} \n   \\caption{Indexing the diagonal subblocks of the sparse scalar axial translation matrix. Blue arrows indicate the direction of the linear indexing.  }\n   \\label{fig2translation}\n\\end{figure}\n\n\nThe total number of non-zero matrix elements in the scalar axial translation matrix, but not including the required triangular fill region, is counted as \n\\begin{eqnarray}\nN(L,L') &=& \\sum_{l' = 0}^{L'} \\sum_{l=0}^{L} \\sum_{m = -\\textrm{min}(l,l')}^{\\textrm{min}(l,l')} 1 \\label{eq:NLL}\\\\\n\\ & = & \\sum_{l' = 0}^{L'} \\sum_{l=0}^{L} \\left( \\textrm{2 min}(l,l') + 1 \\right) \\label{crazysparse}\n\\end{eqnarray}  \n\nThis counts the total number of elements in a matrix that has $L \\times L'$ subblocks. The number of non-zero elements along the diagonal of a subblock is determined by the smaller of $l$ and $l'$. See Figure \\ref{fig2translation}.  
Plugging \\eqref{crazysparse} into Wolfram Alpha, we get\n\\begin{equation}\nN(L,L') \n= \\left\\{\n\\begin{array}{ccc}\n1 & \\ & L =  L' = 0 \\\\\nL'+1 &\\ & L = 0, L' >0\\\\\nL + 1 & \\ & L' = 0 , L>0\\\\\n-L'^3/3+L'^2 L+L' (2 L+4/3)+L+1 & \\ & L'<L, L>0,L'>0 \\\\\n-L^3/3+L^2 L'+L (2 L'+4/3)+L'+1 & \\ & L'>L, L>0,L'>0 \\\\\n2/3L^3 + 2L^2 + 7/3L + 1 & \\ & L' = L, L>0,L'>0 \\\\\n\\end{array}\n\\right.\n \\label{eq:a1}\n\\end{equation}\n\nNext, the total number of nonzero elements in the triangular fill region, see Figure \\ref{fig2translation2}, assuming $L \\ge L'$, and assuming that the original matrix is extended along rows, is given by \n\\begin{equation}\nN_{\\textrm{fill}}(L,L') = \\sum_{l' = 0}^{L'-1} \\sum_{l=L+1}^{L + L' - l'} \\left( 2 l' + 1 \\right) = L'(2L'^2 + 3L' + 1)/6  \\label{eq:a2}\\end{equation}\n\n\n\n\\begin{figure}[H] \n   \\centering\n      \\includegraphics[width=3.5in]{Translation/Figures/sparseindex_3} \n   \\caption{Indexing the diagonal subblocks of the triangular fill region of the sparse scalar axial translation matrix. Blue arrows indicate the direction of the linear indexing. }\n   \\label{fig2translation2}\n\\end{figure}\n\n\nBecause we have defined the fill region as additional matrix rows, the 1D crop is easiest if we index the diagonal $(l,l')$ subblocks as \"row\"-major (left-right, top-down) and index the diagonals of the subblocks from upper left to lower right. See again Figure \\ref{fig2translation}. Once the computation is complete we keep the first $N(L,L')$ elements and apply the transpose if $L' > L$.  \n\nThe \"row\"-major 1D linear index for the axial translation matrix, but not including the triangular fill region, is given by  \n\\begin{equation}\nI(l,l',m,L') = N(l-1,L') + \\sum_{i = 0}^{l'-1}  (2\\textrm{min}(l,i) + 1)  + (\\textrm{min}(l,l') + m + 1)   \\label{eq:a3}\n\\end{equation}\n\n\\noindent where\n\\begin{equation}\n\\sum_{i = 0}^{l'-1}  (2\\textrm{min}(l,i) + 1) = \n\\left\\{\\begin{array}{cc} \nl'^2, & l' = 1 \\lor ( l+1>l' \\land l' > 1) \\\\\n-l^2 - l + 2ll' + l' & \\textrm{otherwise} \\\\\n\\end{array}\n\\right\\}\n\\end{equation}  \n\n$N(l-1,L')$ counts the submatrix containing all diagonal subblocks up to but not including the desired $l$ row of diagonal blocks. The second term counts the completed column blocks in the current row block, up to but not including the desired $l'$. The last term counts the desired diagonal. This counting requires knowledge of $L'$.\n\nFinally, the linear index of an element in the triangular fill region under the conditions that $L \\ge L'$, $l > L$, and $0 \\le l' \\le L'-1$ is\n\\begin{eqnarray}\nI_{\\textrm{fill}}(l,l',m,L,L') &=& N(L,L') + \\sum_{i= L+1}^{l-1} \\left(\\sum_{j=0}^{L'-1-(i-L-1)}(2j+1)\\right) \\nonumber \\\\\n\\ & \\ & + \\left(\\sum_{j=0}^{l'-1} (2j + 1)\\right) + (l' + m + 1)  \\label{eq:a4} \\\\\n\\ & =& N(L,L') + \\left( 1/6(l-L-1)(2l^2-l(4L+6L'+7)+2L^2\\right. \\nonumber \\\\\n\\ & \\ & \\left. +L(6L'+7)+6(L'+1)^2)\\right) + l'^2 + (l' + m + 1) \n\\end{eqnarray}\n\nThe first term counts the number of elements in the actual matrix.  The second term counts the completed row blocks up to but not including the current row block in the fill triangle.  The third term counts the column blocks in the current row block, up to but not including the active diagonal (this is made simpler by the fact that $l' < l$ in the fill region).  The last term indexes the active diagonal.  
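\n\nThe closed forms above are easy to sanity check by brute force. The following is a small sketch (in Python; it is not part of the toolbox) that counts the nonzero elements and enumerates the row-major linear index directly from the definitions, against which \\eqref{eq:a1} and \\eqref{eq:a3} can be compared:\n\\begin{verbatim}\ndef N_alphaz(L, Lp):\n    # brute-force count of nonzeros in the L x L' sparse axial\n    # matrix, i.e. the double sum defining N(L,L')\n    return sum(2 * min(l, lp) + 1\n               for lp in range(Lp + 1) for l in range(L + 1))\n\ndef index_alphaz(L, Lp):\n    # enumerate (l, m, l') row-major, each diagonal subblock from\n    # upper left to lower right; the running count reproduces the\n    # linear index I(l, l', m, L')\n    index, count = {}, 0\n    for l in range(L + 1):           # row blocks\n        for lp in range(Lp + 1):     # column blocks\n            for m in range(-min(l, lp), min(l, lp) + 1):\n                count += 1\n                index[(l, m, lp)] = count\n    return index\n\\end{verbatim}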
\n\n\n\nThe routine \\texttt{alphazSparse} returns three column arrays of the row index, column index, and matrix entries of the sparse axial translation matrix.  It is the same as \\texttt{alphaz} but computed and output on a 1D array. Helper functions are given in Table \\ref{sparsetranshelp}. \\texttt{Nalphaz} and \\texttt{Nalphazfill} count the number of nonzero elements in the matrix and fill region, respectively. \\texttt{indAlphazFill} is used to index the sparse matrix including the fill regions; it calls \\texttt{indAlphaz} which indexes the primary sparse matrix.  When $L' > L$, the transpose is accomplished by applying \\eqref{azflip} and reordering row, column, and array elements to be consistent with the row-major indexing of \\texttt{indAlphaz}.  \n\n\n\\begin{table}[htbp]\n\\caption{\\texttt{alphazSparse} Helper Functions}\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\nRoutine & Equation & Description \\\\\n\\hline\n\\texttt{Nalphaz} &   \\eqref{eq:a1} & Number of non-zero elements in $L \\times L'$ sparse $\\alpha_{lm,l'm'}$ \\\\\n\\hline\n\\texttt{Nalphazfill} &  \\eqref{eq:a2} & Number of non-zero elements in fill region \\\\\n\\hline\n\\texttt{indAlphaz} &   \\eqref{eq:a3} & Row-major 1D linear index in sparse $\\alpha_{lm,l'm'}$ \\\\\n\\hline\n\\texttt{indAlphazFill} &  \\eqref{eq:a4} & Row-major 1D linear index in sparse fill region \\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\label{sparsetranshelp}\n\\end{table}%\n\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/alphazSparse.m}\n}\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/Nalphaz.m}\n}\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/Nalphazfill.m}\n}\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/indAlphaz.m}\n}\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/indAlphazFill.m}\n}\n\n\n\\clearpage\n\\subsection{Diagonalized Scalar Translation Matrix}\n\nAfter creating sparse rotation and axial translation matrices, the matrix-vector multiplication should be carried out right to left in sequence, otherwise the benefit of diagonalization is lost. For example, if \\texttt{a} is a vector of expansion coefficients, and the matrices are in Matlab's sparse matrix format, then the multiplication should be carried out as follows \\texttt{b = conj(D)*(Alphaz*(D*a))}.\n\nThe routine \\texttt{scalarTranslation} takes as input a vector of expansion coefficients in frame $i$, $\\bb{a}$, the maximum degrees $L$ and $L'$ of the translation matrix, and the translation vector $\\br_{ji}$, and returns expansion coefficients in frame $j$, $\\bb{b}$. The length of $\\bb{a}$ should be $L'^2 + 2L' + 1$.  Use \\texttt{'rg'} for the regular form of the translation.  This uses the routines for the sparse forms of the rotation matrix and axial translation matrix to save memory. If the entire matrix is desired then include the optional second output. \n\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/scalarTranslation.m}\n}\n\n\\clearpage\n\\subsection{Full Scalar Translation Matrix}\n\nRecursion relations to compute the full scalar translation matrix were derived in \\cite{chew1992recurrence}. While diagonalization is efficient for large matrices, using the full-matrix recursion is more efficient when the number of rows or columns in the matrix is small. The sums of the addition theorem are technically infinite; however, if the field being translated is known to have low harmonic content, then there is no need to compute the matrix beyond the maximum harmonic. 
For example, the field expansion of a pure scalar dipole only contains harmonics up to degree $l = 1$ (i.e., $(l,m)$ = (0,0),(1,-1),(1,0),(1,1)). Translating this field as incoming waves in the frame of a large scattering object requires only four matrix columns, but a large number of rows. The same computation done with diagonalization needs a square rotation matrix to match the number of harmonics of the rows, which is more computation than necessary. Finally, being able to compute the full matrix is useful for cross checking. \n\n\n\\begin{figure}[H] \n\\centering\n   \\includegraphics[width=3.5in]{Translation/Figures/transfull} \n   \\caption{Scalar translation matrix, $L = L' = 8$, $k = 1$, $r_{ji} = [12, 5, 15]$.} \n\\end{figure}\n\n\nAdopting the notation from \\cite{chew1992recurrence}, which uses the pairings $(\\nu\\mu,nm)$ for our $(lm,l'm')$, the recurrence relations for the full scalar translation matrix are initialized with\n\\eq{\\alpha_{\\nu\\mu,00} = \\sqrt{4\\pi} (-1)^{\\nu}Y^*_{\\nu\\mu}(\\theta,\\phi)h_{\\nu}^{(1)}(k r)}\n\n\\noindent where $(r,\\theta,\\phi)$ are the spherical coordinates of the vector, $\\br_{ji}$, that points from the originating frame to the translated frame. The first recursion relation is\n\\eq{b_{nn}^{+} \\alpha_{\\nu\\mu,n+1,n+1} = b_{\\nu-1,\\mu-1}^{+} \\alpha_{\\nu-1,\\mu-1,nn}  + b_{\\nu+1,\\mu-1}^{-} \\alpha_{\\nu+1,\\mu-1,nn}}\n\nThis is also used to iterate on $b_{n,-n}^{+} \\alpha_{\\nu\\mu,n+1,-(n+1)} $. The second relation is \n\\eq{a_{nm}^{+} \\alpha_{\\nu\\mu,n+1,m}  = -a_{nm}^{-} \\alpha_{\\nu\\mu,n-1,m} + a_{\\nu-1,\\mu}^{+} \\alpha_{\\nu-1,\\mu,nm} + a_{\\nu+1,\\mu}^{-} \\alpha_{\\nu+1,\\mu,nm} }\n\nThe coefficients are \n\\ea{a_{nm}^{+} &=& -\\sqrt{ \\dfrac{(n+m+1)(n-m+1)}{(2 n+1)(2 n+3)}} \\\\\na_{nm}^{-} &=& \\sqrt{\\dfrac{(n+m)(n-m)}{(2n+1)(2n-1)}} }\n\\ea{b_{nm}^{+} &=& \\sqrt{\\dfrac{(n+m+2)(n+m+1)}{(2n+1)(2n+3)}} \\\\\nb_{nm}^{-} &=& \\sqrt{\\dfrac{(n-m)(n-m-1)}{(2n+1)(2n-1)} }}\nLike the axial translation matrix, the computation requires a triangular fill region. As before, the fill region is placed along the dimension of the matrix with the highest degree harmonic, because we only need to fill to the smaller of $L$ and $L'$. We always fill along rows, so if the matrix needs to be flipped, the following transpose relation is used\n\\eq{\\alpha_{mn,\\nu\\mu} = (-1)^{\\nu+\\mu+m+n} \\alpha_{\\nu,-\\mu,n,-m}}\n   \nThe routine \\texttt{alpha} computes the full scalar translation matrix using the recursion relations above.  It takes as input the largest row degree, $L$, the largest column degree $L'$, wavenumber $k$, translation vector $\\br_{ji}$, and returns the translation matrix linearly indexed along rows and columns. Use string switch \\texttt{'rg'} for the regular form. The computation fills along rows for the larger of $L$ and $L'$, then transposes the matrix if necessary. This routine matches the outputs of the diagonalized version to machine precision. \n\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/alpha.m}\n}\n\n\n\n\\clearpage\n\\newpage\n\n\\section{Vector Translation Matrix}\n\nThis section gives routines for the vector axial translation matrix, sparse vector axial translation matrix, and an implementation of the diagonalized matrix-vector multiplication.  
\n\\vspace{-1mm}\n\\subsection{Vector Axial Translation Matrix}\n\nThe vector axial translation matrices are found directly from the scalar axial translation matrix with the following relations:\n\\begin{eqnarray}\nA_{l,m,l',m} &=& k r_{ji}\\left[\\dfrac{1}{(l+1)}\\sqrt{\\dfrac{(l+m+1)(l-m+1)}{(2l+3)(2l+1)}} \\alpha_{l+1,m,l',m} \\right. \\nonumber \\\\\n\\ & \\ & + \\left. \\dfrac{1}{l}\\sqrt{\\dfrac{(l+m)(l-m)}{(2l+1)(2l-1)}}\\alpha_{l-1,m,l',m} \\right] + \\alpha_{l,m,l',m} \\\\\nB_{l,m,l',m} &=& \\dfrac{imkr_{ji}}{l(l+1)}\\alpha_{l,m,l',m} \n\\end{eqnarray}\n\\begin{equation}\nA_{l,m,l',m'} = B_{l,m,l',m'} = \\alpha_{l,m,l',m'} = 0, \\quad \\quad m \\ne m'\n\\end{equation}\n\nBecause the first equation uses $l+1$, the scalar axial matrix needs one extra degree $l = L+1$.  The equations in \\cite{mackowski1991analysis} appear to be missing several scaling factors, but \\cite{chew1993efficient} gives correct equations.  Both \\cite{mackowski1991analysis, chew1993efficient} derived translation matrices for partially normalized vector spherical wave functions.  For fully normalized vector spherical wave functions, the vector translation matrices need to be modified as \n\\begin{eqnarray}\n\\widetilde{A}_{l,m,l',m'} &=& \\sqrt{\\dfrac{l(l+1)}{l'(l'+1)}} A_{l,m,l',m'}\\\\\n\\widetilde{B}_{l,m,l',m'} &=& \\sqrt{\\dfrac{l(l+1)}{l'(l'+1)}}B_{l,m,l',m'}\n\\end{eqnarray}\n\nThe routine \\texttt{AzBz} takes as input the maximum harmonic degrees $L$ and $L'$, wavenumber $k$, and magnitude of the $z$-axis translation, $r_{ji}$, and returns vector translation matrices $A_{l,m,l',m}$ and $B_{l,m,l',m}$. Use string switch \\texttt{'rg'} for the regular form of the matrices, and \\texttt{'norm'} for fully normalized vector wave functions. \n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/AzBz.m}\n}\n\n\n\n\n\\subsection{Sparse Vector Axial Translation Matrix}\n\nBecause the vector axial translation matrices are computed directly from the scalar functions, they are sparse and can be computed on a 1D array.  We only need to count the number of elements in the vector axial translation matrix and determine its sparse linear index. The number of elements in the sparse vector axial translation matrix is counted the same as the scalar case starting at $l=1$ and $l'=1$.  
\n\\begin{eqnarray}\nN(L,L') &=& \\sum_{l' = 1}^{L'} \\sum_{l=1}^{L} \\sum_{m = -\\textrm{min}(l,l')}^{\\textrm{min}(l,l')} 1 \\label{eq:NLL}\\\\\n\\ & = & \\sum_{l' = 1}^{L'} \\sum_{l=1}^{L} \\left[ 2 \\textrm{min}(l,l') + 1 \\right]\n\\end{eqnarray}  \n\nPlugging this into Wolfram Alpha\n\\begin{equation}\nN(L,L') \n= \\left\\{\n\\begin{array}{ccc}\n3 & \\ & L =  L' = 1 \\\\\n3L &\\ & L' = 1, L >1\\\\\n3L' & \\ & L = 1 , L' >1\\\\\n- 1/3L'^3  + L L'^2 + 2 LL' + 1/3L' & \\ & L > 1, L' > 1, L > L'\\\\\n-1/3L^3  + L'L^2 + 2L'L + 1/3L& \\ &  L > 1, L' > 1, L'>L \\\\\nL^2 + 5L + 2/3L'^3 + L'^2 - 14/3L' & \\ & L > 1, L' > 1, L = L' \\\\\n\\end{array}\n\\right.\n\\end{equation}\n\nCounting row-major, the sparse linear index is \n\\begin{equation}\nI(l,l',m,L') = N(l-1,L') + \\sum_{i = 1}^{l'-1}  (2\\textrm{min}(l,i) + 1)  + (\\textrm{min}(l,l') + m + 1)  \n\\end{equation}\n\n\\noindent where\n\\begin{equation}\n\\sum_{i = 1}^{l'-1}  (2\\textrm{min}(l,i) + 1) = \n\\left\\{\\begin{array}{ccc} \n3 & \\ & l + 1 \\ge l' \\land ((l=1 \\land l' \\ge 2) \\lor (l' = 2 \\land l > 1 )) \\\\\n2l(l'-2) + l' + 1 & \\ & l + 1 < l' \\land ((l=1 \\land l' \\ge 2) \\lor (l' = 2 \\land l > 1 )) \\\\\nl'^2 - 1  & \\ & l > 1 \\land l+1 \\ge l' \\land l > 2 \\\\\n-l^2 + l + (2l + 1)(l'-1)& \\ & \\textrm{otherwise} \\\\\n\\end{array}\n\\right\\}\n\\end{equation}  \n\n\nThe routine \\texttt{AzBzSparse} returns four column arrays of the row index, column index, and matrix entries of the sparse axial translation matrices.  It is the same as \\texttt{AzBz} but computed and output on a 1D array.  The helper functions are \\texttt{NAzBz.m} and \\texttt{indAzBz.m}.\n\n\n{\\scriptsize\n\\VerbatimInput{\\code/Translation/AzBzSparse.m}\n}\n\n{\\scriptsize\n\\VerbatimInput{\\code/Translation/NAzBz.m}\n}\n\n{\\scriptsize\n\\VerbatimInput{\\code/Translation/indAzBz.m}\n}\n\n\\clearpage\n\\subsection{Diagonalized Vector Translation Matrix}\n\n\nThe routine \\texttt{vectorTranslation} takes as input a vector of expansion coefficients in frame $i$, $\\bb{a}$ and $\\bb{b}$, the maximum degrees $L$ and $L'$ of the translation matrix and the translation vector $\\br_{ji}$ in Cartesian coordinates and returns expansion coefficients in frame $j$, $\\bb{c}$ and $\\bb{d}$. The lengths of $\\bb{a}$ and $\\bb{b}$ should be $L'^2 + 2L' $.  Use \\texttt{'rg'} for the regular form and \\texttt{'norm'} for fully normalized vector wave functions. This uses the routines for the sparse versions of the rotation matrix and axial translation matrices to save memory.  If the pair of matrices is desired, then include the optional third and fourth outputs. \n\n\n{\\scriptsize\n\\VerbatimInput{\\code/Translation/vectorTranslation.m}\n}\n\\clearpage\n\n\\subsection{Full Vector Translation Matrix}\n\nLike the full scalar translation matrix, recursion relations to compute the full vector translation matrices were derived in \\cite{chew1993efficient}. Two versions are available: the first iterates on the vector matrices from scratch, the second iterates on the full scalar translation matrix. We implement the second version, which requires computing the scalar translation matrix at a maximum harmonic degree one higher than that of the vector matrices. 
\n\n\n\\begin{figure}[H] \n   \\centering\n   \\subfigure{\\includegraphics[width=3in]{Translation/Figures/transfullA1}}\n   \\subfigure{\\includegraphics[width=3in]{Translation/Figures/transfullB1}} \\\\\n   \\subfigure{\\includegraphics[width=3in]{Translation/Figures/transfullA2}}\n   \\subfigure{\\includegraphics[width=3in]{Translation/Figures/transfullB2}}\n   \\caption{Vector translation matrices for $L = L' = 8$, $k = 1$, $r_{ji} = [12, 5, 15]$. Top row: matrices for partially normalized vector wave functions. Bottom row: matrices for fully normalized vector wave functions.}\n   \\label{transfullAB}\n\\end{figure}\n\n\nAdopting the notation from \\cite{chew1993efficient}, which uses the pairings $(\\nu\\mu,nm)$ for our $(lm,l'm')$, the recurrence relations for the two vector translation matrices derived from the scalar translation matrix are\n\\ea{A_{\\nu\\mu,nm} &=& \\alpha_{\\nu\\mu,nm} + \nc_{1,\\nu\\mu} \\alpha_{\\nu+1,\\mu-1,nm} + \nc_{2,\\nu\\mu} \\alpha_{\\nu-1,\\mu-1,nm} + \nc_{3,\\nu\\mu} \\alpha_{\\nu+1,\\mu+1,nm} + \\nonumber \\\\\n\\ & \\ & \nc_{4,\\nu\\mu} \\alpha_{\\nu-1,\\mu+1,nm} + \nc_{5,\\nu\\mu} \\alpha_{\\nu+1,\\mu,nm} + \nc_{6,\\nu\\mu} \\alpha_{\\nu-1,\\mu,nm} \n\\label{Anumu} \\\\\nB_{\\nu\\mu,nm} &=& c_{7,\\nu\\mu} \\alpha_{\\nu\\mu,nm} + \nc_{8,\\nu\\mu} \\alpha_{\\nu,\\mu+1,nm} + \nc_{9,\\nu\\mu} \\alpha_{\\nu,\\mu-1,nm} \\label{Bnumu} } \n\nNote, \\eqref{Anumu} and \\eqref{Bnumu} only operate on the row indices $(\\nu,\\mu)$. The coefficients are \n\\ea{\nc_{1,\\nu\\mu} &=& \\dfrac{kr\\sin\\theta e^{-i\\phi}}{2(\\nu + 1)}\\sqrt{\\dfrac{(\\nu-\\mu+2)(\\nu-\\mu+1)}{(2\\nu + 1)(2\\nu + 3)}} \\\\\nc_{2,\\nu\\mu} &=&  -\\dfrac{kr\\sin\\theta e^{-i\\phi}}{2\\nu}\\sqrt{\\dfrac{(\\nu+\\mu-1)(\\nu+\\mu)}{(2\\nu - 1)(2\\nu + 1)}} \\\\\nc_{3,\\nu\\mu} &=& -\\dfrac{kr\\sin\\theta e^{i\\phi}}{2(\\nu + 1)}\\sqrt{\\dfrac{(\\nu+\\mu+2)(\\nu+\\mu+1)}{(2\\nu + 1)(2\\nu + 3)}} \\\\\nc_{4,\\nu\\mu} &=&  \\dfrac{kr\\sin\\theta e^{i\\phi}}{2\\nu}\\sqrt{\\dfrac{(\\nu-\\mu)(\\nu-\\mu-1)}{(2\\nu - 1)(2\\nu + 1)}}\\\\\nc_{5,\\nu\\mu} &=& \\dfrac{kr\\cos\\theta}{\\nu+1}\\sqrt{\\dfrac{(\\nu+\\mu+1)(\\nu-\\mu+1)}{(2\\nu + 1)(2\\nu + 3)}} \\\\\nc_{6,\\nu\\mu} &=&  \\dfrac{kr\\cos\\theta}{\\nu}\\sqrt{\\dfrac{(\\nu+\\mu)(\\nu-\\mu)}{(2\\nu - 1)(2\\nu + 1)}}  \\\\\nc_{7,\\nu\\mu} &=& \\dfrac{i\\mu kr\\cos\\theta }{\\nu(\\nu+1)} \\\\\nc_{8,\\nu\\mu} &=& \\dfrac{i kr\\sin\\theta e^{i\\phi} }{2\\nu(\\nu+1)} \\sqrt{(\\nu-\\mu)(\\nu+\\mu+1)} \\\\\nc_{9,\\nu\\mu} &=& \\dfrac{i kr\\sin\\theta e^{-i\\phi} }{2\\nu(\\nu+1)} \\sqrt{(\\nu+\\mu)(\\nu-\\mu+1)}  } \n\n\\noindent where $(r,\\theta,\\phi)$ are the spherical coordinates of the vector $\\br_{ji}$ that points from the originating frame to the translated frame.\n\n\nThe routine \\texttt{AB} takes as input the maximum harmonic degrees $L$ and $L'$, wavenumber $k$, and translation vector, $r_{ji}$, and returns full vector translation matrices $A$ and $B$. It calls \\texttt{alpha} to compute the full scalar translation matrix. Use string switch \\texttt{'rg'} for the regular form of the matrices, and \\texttt{'norm'} for fully normalized vector wave functions. 
\n\n\n\n{\\footnotesize\n\\VerbatimInput{\\code/Translation/AB.m}\n}\n\n\n\n\n", "meta": {"hexsha": "765afda69a290c60369bd99dd50a1d12c9ac38d6", "size": 33858, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/Translation/Translation.tex", "max_stars_repo_name": "nasa-jpl/Waveport", "max_stars_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-08-29T13:29:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T20:09:47.000Z", "max_issues_repo_path": "Tex/Translation/Translation.tex", "max_issues_repo_name": "ruzakb/Waveport", "max_issues_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/Translation/Translation.tex", "max_forks_repo_name": "ruzakb/Waveport", "max_forks_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-08-29T13:28:44.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-08T19:58:04.000Z", "avg_line_length": 67.1785714286, "max_line_length": 1034, "alphanum_fraction": 0.6975603993, "num_tokens": 10958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5761081628329009}}
{"text": "% declare document class and geometry\n%\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\input{../plaheader.tex}\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n\n\n\n\n\\title{Phys 220A -- Classical Mechanics -- Lec06}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{21}{10}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\setlength{\\unitlength}{1mm}\n\\maketitle\n\n\n\\section{Small oscillations}\n\nGenerally, in classical mechanics, the dynamics may often by too complicated to solve exactly, or even numerically. Often we will have a Lagrangian of the form\n\\begin{equation}\nL = \\frac{1}{2} \\sum_{ij} M_{ij}(q) \\dot{q}_i \\dot{q}_j + \\sum_i N_i (q) \\dot{q}_i - V(q),\n\\end{equation}\nor we will be able to approximate it in this form to order $\\dot{q}^2$, assuming no time dependence. This gives us stability around equilibrium, which is of course important physically. An example where $N_i$ comes into play would be the potential for a particle coupled to the EM field. For driven systems we will have different stability properties. In condensed matter, small energy excitations around equilibrium are called phonons. \n\nEquilibrium points occur at the extrema of the potential, that is at solutions $q_i^{(0)}$ to $\\pd{V}{q_i} = 0$. The Euler-Lagrange equations are then solved by $q_i(t) = q_i^{(0)}$ for all $t$, i.e. $\\dot{q}_i = 0$. Under small oscillations we will have\n\\begin{equation}\nq_i = q_i^{(0)} + \\eta_i(t) + \\bigo(\\eta^2)\n\\end{equation}\nand we must expand the Lagrangian to quadratic order (i.e. linearize),\n\\begin{align}\nL &= \\frac{1}{2} \\sum_{ij} M_{ij} (q^{0}) \\dot{\\eta}_i \\dot{\\eta}_j + \\sum_i N_i (q^{0}) \\dot{\\eta}_i + \\sum_{ij} \\eval{\\pd{N_i}{q_j}}_{q=q^0} \\eta_j \\dot{\\eta}_i - V(q^0) \\\\\n\t& \\qquad - \\sum_i \\eval{\\pd{V}{q_i}}_{q = q^0} \\eta_i - \\frac{1}{2} \\eval{\\md{V}{2}{q_i}{}{q_j}{}}_{q = q^0} \\eta_i \\eta_j + \\bigo(n^3). \\\\\n\t&\\sim \\frac{1}{2} \\sum_{ij} m_{ij} \\dot \\eta_i \\dot \\eta_j - \\frac{1}{2} \\sum_{ij} v_{ij} \\eta_i \\eta_j + \\sum_{ij} p_{ij} \\eta_i \\dot \\eta_j + \\bigo(\\eta^3),\n\\end{align}\nwhere in the second expression we've thrown out the $\\sum_i N_i (q^0) \\dot{\\eta}_i$ and $V(q^0)$ terms since they are just a total derivative and an overall constant, and \n\\begin{equation}\nm_{ij} = M_{ij}(q^0), \\qquad p_{ij} = \\eval{\\pd{N_i}{q_j}}_{q = q^0}, \\text{and} \\qquad v_{ij} = \\eval{\\md{V}{2}{q_i}{}{q_j}{}}_{q = q^0}\n\\end{equation}\nare constant matrices. Now, if $p_{ij}$ has a symmetric part $p_{(ij)}$ we can throw it out and just consider it antisymmetric since\n\\begin{equation}\np_{(ij)} \\eta_i \\dot{\\eta}_j = \\frac{1}{2} (p_{ij} + p_{ji}) \\eta_i \\dot{\\eta}_j = p_{ij} (\\eta_i \\dot{\\eta}_j + \\eta_j \\dot{\\eta}_i) = \\od{}{t} (\\frac{1}{2} p_{ij} \\eta_i \\eta_j)\n\\end{equation}\nis just a total derivative. \n\n\\subsection{First analysis}\n\nAs a first analysis, let's set $p_{ij} = 0$. 
Then we can write\n\\begin{equation}\nL = \\frac{1}{2} \\dot{\\eta}^\\top m \\dot{\\eta} - \\frac{1}{2} \\eta^\\top v \\eta + \\bigo(\\eta^3),\n\\end{equation}\nfor which we can write the equation of motion in terms of the vector $\\bar{\\eta} = (\\eta_1, \\dots, \\eta_n)$,\n\\begin{equation}\nm \\ddot{\\bar{\\eta}} + v \\bar{\\eta} = 0.\n\\end{equation}\nThis has ansatz $\\bar{\\eta} = \\bar{\\gamma} e^{i \\omega t}$; plugging it back into the equation of motion, we have\n\\begin{equation}\n(-\\omega^2 m + v) \\bar{\\gamma} = 0.\n\\end{equation}\nThus $\\omega$ is determined by the vanishing of the determinant\n\\begin{equation}\n\\det (-\\omega^2 m + v) = 0.\n\\end{equation}\n\nHowever, are all directions stable? What do the eigenvectors look like? Recall some facts about linear algebra. Both matrices $m_{ij}$ and $v_{ij}$ are symmetric and can hence be diagonalized by orthogonal transformations. For the mass matrix we have\n\\begin{equation}\nO^T m O = \\mathrm{diag}(m_1, m_2, \\dots, m_n) = \\bar{m}\n\\end{equation}\nWe'll assume that all $\\bar{m}_i$ are positive (i.e. all masses are in general positive). Then we have a well-defined square root of $\\bar{m}$. We can write\n\\begin{equation}\n\\bar{m}^{-1/2} O^T m O \\bar{m}^{-1/2} = 1\n\\end{equation}\nDefining new coordinates $\\bar{n}$ by\n\\begin{equation}\n\\eta = O \\bar{m}^{-1/2} \\bar{n}\n\\end{equation}\nwe have \n\\begin{equation}\nL = \\frac{1}{2} \\dot{\\bar{n}}^T \\dot{\\bar{n}} - \\frac{1}{2} \\bar{n}^T \\bar{v} \\bar{n}\n\\end{equation}\nwhere \n\\begin{equation}\n\\bar{m}^{-1/2} O^T v O \\bar{m}^{-1/2} = \\bar{v}\n\\end{equation}\nis still a symmetric matrix.\n\nThe equations of motion then become\n\\begin{equation}\n\\ddot{\\bar{n}} + \\bar{v} \\bar{n} = 0\n\\end{equation}\nAn ansatz of \n\\begin{equation}\n\\bar{n} = e^{i\\omega t} \\gamma\n\\end{equation}\ngives \n\\begin{equation}\n(-\\omega^2 1 + \\bar{v})\\gamma = 0\n\\end{equation}\nThe eigenvalues of $\\bar{v}$ are real since it is symmetric; if they are all positive then the oscillations are stable. Individual directions can be stable or unstable depending on the sign of the corresponding eigenvalue. Given the eigenvalues, the equations of motion are oscillatory with frequencies $\\omega_i =  \\sqrt{\\lambda_i}$.\n\n\nThus our equations of motion are given by\n\\begin{equation}\n\\ddot{\\bbar{\\eta}}_i + \\lambda_i \\bbar{\\eta}_i = 0, \\qquad i = 1, \\dots, N.\n\\end{equation}\nwhere in the stable case we have $\\lambda_i = \\omega_i^2$, the characteristic frequencies. So the normal modes are decoupled 1-dimensional oscillators along $\\bbar{\\eta}_i$ with coordinate relationship\n\\begin{equation}\n\\eta = O \\bar{m}^{-1/2} \\bar{O} \\bar{\\bar{\\eta}}. \n\\end{equation}\n\nIn the more general case with $p_{ij} \\neq 0$, we have\n\\begin{equation}\nL = \\frac{1}{2} \\dot{\\bar{\\eta}}^\\top \\dot{\\bar{\\eta}} + \\bar{\\eta}^\\top \\bar{p} \\dot{\\bar{\\eta}} - \\frac{1}{2} \\bar{\\eta}^\\top \\bar{v} \\bar{\\eta}\n\\end{equation}\nwhere\n\\begin{equation}\n\\bar{p} = \\bar{m}^{-1/2} O^\\top p O \\bar{m}^{-1/2}.\n\\end{equation}\nSo the Euler-Lagrange equations are\n\\begin{equation}\n\\ddot{\\bar{\\eta}} + \\bar{v} \\bar{\\eta} - 2 \\bar{p} \\dot{\\bar{\\eta}} = 0\n\\end{equation}\nwhich has a new characteristic equation for the ansatz $\\bar{\\eta} = \\gamma e^{i\\omega t}$,\n\\begin{equation}\n\\det (-\\omega^2 + \\bar{v} - 2i \\omega \\bar{p}) = 0\n\\end{equation}\nwhich determines $\\omega$ and $\\gamma$ in general. These cases are important for a particle in an EM field and motion in non-inertial frames. 
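\n\nNumerically, solving $\\det(-\\omega^2 m + v) = 0$ is just a generalized symmetric eigenvalue problem, which standard libraries handle directly. Here is a minimal sketch (in Python with \\texttt{scipy}; the numerical matrices are those of the double pendulum treated in the next subsection, after dividing out $ml^2$, with $g = 9.81$ and $l = 1$):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ng, l = 9.81, 1.0\nm = np.array([[2.0, 1.0],          # mass matrix (units of m l^2)\n              [1.0, 1.0]])\nv = np.array([[2.0 * g / l, 0.0],  # potential matrix\n              [0.0, g / l]])\n\n# eigh solves v @ gamma = lam * m @ gamma, with lam = omega^2\nomega2, modes = eigh(v, m)\nprint(np.sqrt(omega2))  # frequencies; columns of modes are the\n                        # corresponding normal-mode eigenvectors\n\\end{verbatim}\nThe printed values match $\\omega^2_\\pm = \\frac{2g}{l}(1 \\pm \\frac{1}{\\sqrt{2}})$ derived below.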
\n\n\n\\subsection{Double pendulum}\n\nLet's take the simple case that\n\\begin{equation}\nm_1 = m_2 = m\n\\end{equation}\n\\begin{equation}\nl_1 = l_2 = l\n\\end{equation}\nOur Lagrangian in this case is\n\\begin{equation}\nL = ml^2 \\dot{\\theta_1}^2 + \\frac{1}{2}ml^2 \\dot{\\theta_2}^2 + ml^2 \\dot{\\theta_1}\\dot{\\theta_2} \\cos(\\theta_1 - \\theta_2) + 2mgl \\cos \\theta_1 + mgl\\cos\\theta_2\n\\end{equation}\nIn equilibrium\n\\begin{equation}\n\\theta_1 = \\theta_2 = 0\n\\end{equation}\nLet's expand around  this equilibrium and drop the small terms and constants\n\\begin{equation}\nL = ml^2 \\dot{\\theta_1}^2 + \\frac{1}{2} ml^2 \\dot{\\theta_2}^2  + ml^2 \\dot{\\theta_1} \\dot{\\theta_2} - mgl \\theta_1^2 - \\frac{1}{2} mgl\\theta_2^2\n\\end{equation}\n\n\\begin{equation}\nm_ij = \\left( \\begin{array}{cc} 2ml^2 & ml^2 \\\\ ml^2 & ml^2 \\end{array}\\right)\n\\end{equation}\n\\begin{equation}\nv_{ij} = \\left(\\begin{array}{cc} 2mgl & 0\\\\ 0 & mgl \\end{array}\\right)\n\\end{equation}\nOur Euler-Lagrange equations are then\n\\begin{equation}\n\\left(\\begin{array}{cc} 2ml^2 & ml^2 \\\\ ml^2 & ml^2 \\end{array}\\right) \\left( \\begin{array}{c} \\ddot{\\theta_1} \\\\ \\ddot{\\theta_2} \\end{array}\\right) + \\left( \\begin{array}{cc} 2mgl & 0 \\\\ 0 & mgl \\end{array}\\right) \\left(\\begin{array}{c} \\theta_1 \\\\ \\theta_2\\end{array}\\right) = 0\n\\end{equation}\nDividing by $ml^2$\n\\begin{equation}\n\\left(\\begin{array}{cc} 2 & 1 \\\\ 1 & 1 \\end{array}\\right) \\left( \\begin{array}{c} \\ddot{\\theta_1} \\\\ \\ddot{\\theta_2} \\end{array}\\right) + \\left( \\begin{array}{cc} \\frac{2g}{l} & 0 \\\\ 0 & \\frac{g}{l} \\end{array}\\right) \\left(\\begin{array}{c} \\theta_1 \\\\ \\theta_2\\end{array}\\right) = 0\n\\end{equation}\nMultiplying on both sides by the inverse of the first term\n\\begin{equation}\n \\left( \\begin{array}{c} \\ddot{\\theta_1} \\\\ \\ddot{\\theta_2} \\end{array}\\right) + \\frac{g}{l}\\left( \\begin{array}{cc} 2 & -1 \\\\ -2 & 2 \\end{array}\\right) \\left(\\begin{array}{c} \\theta_1 \\\\ \\theta_2\\end{array}\\right) = 0\n\\end{equation}\nWe then obtain the frequencies from the eigenvalue equation\n\\begin{equation}\ndet \\left( -\\omega^2 + \\frac{g}{l} \\left( \\begin{array}{cc} 2 & -1 \\\\ -2 & 2\\end{array} \\right)\\right ) = 0\n\\end{equation}\nThis results in the equation\n\\begin{equation}\n\\omega^4 - \\frac{4g}{l} \\omega^2 + \\frac{2g^2}{l^2} = 0 \n\\end{equation}\n\\begin{equation}\n\\omega^2_\\pm = \\frac{2g}{l} \\left( 1 \\pm \\frac{1}{\\sqrt{2}}\\right)\n\\end{equation}\nThe eigenvector for the positive frequency is\n\\begin{equation}\nv_+ = \\left(\\begin{array}{c} -\\frac{1}{\\sqrt{2}} \\\\ 1 \\end{array} \\right)\n\\end{equation}\nThis corresponds to the segments oscillating in opposite directions. For the negative frequencies\n\\begin{equation}\nv_- = \\left(\\begin{array}{c} \\frac{1}{\\sqrt{2}} \\\\ 1 \\end{array} \\right)\n\\end{equation}\nwhich corresponds to the segments oscillating in the same direction\n\n\n\\section{Lagrange Points}\nConsider a two-body planet system. The two masses move in circular\norbits about their common center of mass, with equal velocities. We\ncan transform into this frame by rotating our frame by the angular\nvelocity. The Lagrange points are points where you can put a test\nmass and it will be stationary \\textbf{in the rotating frame}. In the\nrest frame, it has the same angular velocity as the other\nmasses. 
From the inertial frame the points have positions\n\\begin{equation}\n\\v{x_1} = \\left(\\begin{array}{c} r_1 \\cos\\omega t \\\\ r_1 \\sin \\omega t \\end{array}\\right)\n\\end{equation}\n\\begin{equation}\n\\v{x_2} = \\left(\\begin{array}{c} -r_2 \\cos\\omega t \\\\ -r_2 \\sin \\omega t \\end{array}\\right)\n\\end{equation}\nwhere $d$ is the separation between the two, and $r_1$ and $r_2$ are the distances of each of the particles from the center of mass, such that \n\\begin{equation}\nd = r_1 + r_2\n\\end{equation}\n\\begin{equation}\nM = m_1 + m_2\n\\end{equation}\n\\begin{equation}\nm_1 r_1 = m_2 r_2\n\\end{equation}\nThis results in the expressions\n\\begin{equation}\nr_1 = \\frac{m_2}{m_1 + m_2} d\n\\end{equation}\nand\n\\begin{equation}\nr_2 = \\frac{m_1}{m_1 + m_2} d\n\\end{equation}\nThis frame rotates with a frequency determined by Kepler's law and is unaffected by the third (test) mass:\n\\begin{equation}\nG (m_1 + m_2) = d^3 \\omega^2\n\\end{equation}\nIn the inertial frame we thus have the Lagrangian\n\\begin{equation}\nL = \\frac{1}{2} m \\dot{\\v{x}}^2 + \\frac{Gm m_1}{|\\v{x} - \\v{x_1}|}\n+ \\frac{Gmm_2}{|\\v{x} - \\v{x_2}|}\n\\end{equation}\nIn general this is difficult since everything depends on $t$. Let's transform into a rotating frame with angular velocity $\\omega$ such that\n\\begin{equation}\n\\v{x} = \\left( \\begin{array}{c} x\\cos\\omega t - y\\sin \\omega t \\\\\nx \\sin \\omega t + y \\cos \\omega t \\\\ z\\end{array} \\right)\n\\end{equation}\n\n\\begin{equation}\n\\v{x} - \\v{x_1} =\\left( \\begin{array}{c} x\\cos\\omega t - y\\sin \\omega\nt - r_1 \\cos\\omega t \\\\\nx \\sin \\omega t + y \\cos \\omega t - r_1 \\sin\\omega t \\\\z\\end{array} \\right)\n\\end{equation}\nMultiplying out and skipping the gross algebra, we find the magnitude\n\\begin{equation}\n|\\v{x} - \\v{x_1}| = \\sqrt{(x - r_1)^2 + y^2 + z^2}\n\\end{equation}\nThe distance to the second mass is more or less the same except we flip the sign on $r_2$:\n\\begin{equation}\n|\\v{x} - \\v{x_2}| = \\sqrt{(x + r_2)^2 + y^2 + z^2}\n\\end{equation}\nNow, when we take all of this into our Lagrangian, we get the\nkinetic energy from the usual expression for motion in a rotating frame. We have a\ncentrifugal term (the $\\omega^2$ term) and a Coriolis term (the $2\\omega$ cross term):\n\\begin{equation}\nL = \\frac{1}{2} m [ \\dot{x}^2 + \\dot{y}^2 + \\dot{z}^2 + \\omega^2 (x^2 +\ny^2)+ 2\\omega (x\\dot{y} - y\\dot{x})] + Gm \\left[\\frac{m_1}{\\sqrt{(x -\nr_1)^2 + y^2 + z^2}} + \\frac{m_2}{\\sqrt{(x + r_2)^2 + y^2 + z^2}}\\right]\n\\end{equation}\nOkay, so from common sense, we know that the stationary points cannot\nbe anywhere except in the $z = 0$ plane; there are no forces to hold\nthem at any other $z$. The name of the game now is to balance\nthese forces, i.e. to find the stationary points of the effective\npotential. Our first Lagrange point lies between the two masses, where\nthe gravitational pulls and the centrifugal force balance. A test mass\nthere is at rest in the rotating frame, so there is no Coriolis force\nto consider.\n\nNow let's consider the next two points. These move\nin circular orbits in the inertial frame; at these points the combined\ngravitational pull of the two masses supplies exactly the centripetal\nforce the circular orbit requires. From the\nmath, these points form equilateral triangles with the two masses. \n\nThe remaining points lie behind each of the masses. There\nboth gravitational forces point inward, but in the rotating frame the\ncentrifugal force points outward and cancels them. 
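\n\nFor later use, the equilateral-triangle condition pins down the coordinates of the two triangular points (a small step that the stability analysis below takes for granted): each point is at distance $d$ from both masses, which sit at $(r_1, 0)$ and $(-r_2, 0)$ in the rotating frame, so\n\\begin{equation}\nx_L = \\frac{1}{2}(r_1 - r_2), \\qquad y_L = \\pm\\frac{\\sqrt{3}}{2}(r_1 + r_2) = \\pm\\frac{\\sqrt{3}}{2}\\,d,\n\\end{equation}\nthe $x$-coordinate being the midpoint between the two masses and the $y$-coordinate the height of an equilateral triangle of side $d$.\n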
\n\n\nLet's consider the stability of the noncolinear points.\nThe effective potential (per unit test mass, with $\\omega^2 = G(m_1 + m_2)/d^3$ as above) is\n\\begin{align}\n\\phi_{eff} &= -\\frac{\\omega^2}{2}(x^2 + y^2) - \\frac{Gm_1}{\\sqrt{(r_1 - x)^2 + y^2}} - \\frac{Gm_2}{\\sqrt{(r_2 + x)^2 + y^2}}\n\\end{align}\nNow we take the first derivative. For our Lagrange points this derivative is equal to zero\n\\begin{align}\n\\d{\\phi_{eff}}{x} &= -\\omega^2 x - \\frac{Gm_1 (r_1-x)}{((r_1 - x)^2 + y^2)^{3/2}} + \\frac{Gm_2(r_2 + x)}{((r_2 + x)^2 + y^2)^{3/2}}\n\\end{align}\nSince $r_1 - x = r_2 + x$ at the triangular points (both equal $d/2$), the two denominators agree and, using $m_1 r_1 = m_2 r_2$,\n\\begin{align}\n\\d{\\phi_{eff}}{x} &= -\\omega^2 x - G\\frac{m_1  r_1- m_1x - m_2 r_2 - m_2 x}{((r_1 - x)^2 + y^2)^{3/2}} \\\\ \n&= -\\omega^2 x + G\\frac{ x(m_1 + m_2)}{((r_1 - x)^2 + y^2)^{3/2}} = 0,\n\\end{align}\nwhich indeed vanishes because $(r_1 - x)^2 + y^2 = d^2$ there. For y we have\n\\begin{align}\n\\d{\\phi_{eff}}{y} &= -\\omega^2 y + \\frac{Gm_1y}{((r_1 -x)^2 + y^2)^{3/2}} + \\frac{Gm_2 y }{((r_2 + x)^2 + y^2)^{3/2}}\\\\\n&= -\\omega^2 y + \\frac{G(m_1 + m_2)y}{((r_1 -x)^2 + y^2)^{3/2}} = 0\n\\end{align}\nfor the same reason.\n\nOkay, now to do the stability analysis, let's perturb the Lagrangian by \n\\begin{equation}\nx = x_L + \\epsilon_x\n\\end{equation}\nand\n\\begin{equation}\ny = y_L + \\epsilon_y\n\\end{equation}\n\\begin{equation}\n\\d{}{t} \\pd{L}{\\dot{\\epsilon}} = \\pd{L}{\\epsilon}\n\\end{equation}\nPer unit test mass this gives\n\\begin{equation}\n\\ddot{\\epsilon_x} - 2\\omega \\dot{\\epsilon_y} = -\\epsilon_x \\pdd{\\phi_{eff}}{x} - \\epsilon_y \\frac{\\partial^2 \\phi_{eff}}{\\partial x\\partial y}\n\\end{equation}\nwhere the partial derivatives are all evaluated at the Lagrange points, and\n\\begin{equation}\n\\ddot{\\epsilon_y} + 2\\omega\\dot{\\epsilon_x} = -\\epsilon_y \\pdd{\\phi_{eff}}{y} - \\epsilon_x \\frac{\\partial^2 \\phi_{eff}}{\\partial x\\partial y} \n\\end{equation}\nLet's take these partial derivatives, evaluating at the triangular point where both distances equal $d$. (Note the combined forms above hold only at the point itself, so we must differentiate term by term first.)\n\\begin{equation}\n\\pdd{\\phi_{eff}}{x} = -\\omega^2 + \\frac{G(m_1 + m_2)}{d^3} - \\frac{3G\\left[m_1 (r_1 - x)^2 + m_2 (r_2 + x)^2\\right]}{d^5}\n\\end{equation}\nThe first and second terms cancel, since $\\omega^2 = G(m_1 + m_2)/d^3$.\nWe found above that\n\\begin{equation}\nx = \\frac{1}{2}(r_1 - r_2)\n\\end{equation}\n\\begin{equation}\ny = \\frac{\\sqrt{3}}{2} (r_1 + r_2)\n\\end{equation}\nand \n\\begin{equation}\n(r_1 - x)^2 + y^2 = d^2,\n\\end{equation}\nwith $r_1 - x = r_2 + x = d/2$, so\n\\begin{equation}\n\\pdd{\\phi_{eff}}{x} = -\\frac{3G(m_1 + m_2)}{d^5}\\cdot\\frac{d^2}{4}\n\\end{equation}\n\\begin{equation}\n\\pdd{\\phi_{eff}}{x} = -\\frac{3}{4} \\frac{G(m_1 + m_2)}{d^3} = -\\frac{3}{4} \\omega^2\n\\end{equation}\nYou get the idea. 
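\n\nAs a cross-check on these Hessian entries (a sketch, not part of the original notes; it assumes SymPy and the conventions used above, with the masses at $(r_1, 0)$ and $(-r_2, 0)$):\n\\begin{verbatim}\n# Sketch: Hessian of phi_eff at the triangular Lagrange point.\n# Conventions: d = r1 + r2, m1*r1 = m2*r2, omega^2 = G*(m1+m2)/d^3.\nimport sympy as sp\n\nG, m1, m2, d = sp.symbols('G m1 m2 d', positive=True)\nx, y = sp.symbols('x y', real=True)\nr1 = m2/(m1 + m2)*d\nr2 = m1/(m1 + m2)*d\nw2 = G*(m1 + m2)/d**3                     # omega^2 from Kepler's law\n\nrho1 = sp.sqrt((x - r1)**2 + y**2)        # distance to m1\nrho2 = sp.sqrt((x + r2)**2 + y**2)        # distance to m2\nphi = -w2/2*(x**2 + y**2) - G*m1/rho1 - G*m2/rho2\n\npt = {x: (r1 - r2)/2, y: sp.sqrt(3)/2*d}  # triangular point\n\nprint(sp.simplify(sp.diff(phi, x, 2).subs(pt)/w2))  # -3/4\nprint(sp.simplify(sp.diff(phi, y, 2).subs(pt)/w2))  # -9/4\nprint(sp.simplify(sp.diff(phi, x, y).subs(pt)/w2))  # 3*sqrt(3)*(m1-m2)/(4*(m1+m2))\n\\end{verbatim}\nThe mixed derivative is what produces the $\\frac{3\\sqrt{3}}{4}\\frac{m_1 - m_2}{m_1 + m_2}\\omega^2$ terms in the perturbation equations below.\n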
The rest are derived in the textbook so I'll skip to the punchline\n\\begin{equation}\n\\ddot{\\epsilon_x} - 2\\omega \\dot{\\epsilon_y} - \\frac{3}{4} \\omega^2 \\epsilon_x + \\frac{3\\sqrt{3}}{4} \\frac{m_1 - m_2}{m_1 + m_2} \\omega^2 \\epsilon_y = 0\n\\end{equation}\n\\begin{equation}\n\\ddot{\\epsilon_y} + 2\\omega \\dot{\\epsilon_x} - \\frac{9}{4} \\omega^2 \\epsilon_y + \\frac{3\\sqrt{3}}{4}\\frac{m_1 - m_2}{m_1 + m_2}\\omega^2 \\epsilon_x = 0\n\\end{equation}\nWe make the ansatz\n\\begin{equation}\n\\epsilon_x = c_1 e^{i\\omega_s t}\n\\end{equation}\n\\begin{equation}\n\\epsilon_y = c_2 e^{i\\omega_s t}\n\\end{equation}\nwhich gives us two equations\n\\begin{equation}\n-\\omega_s^2 c_1 - 2i\\omega \\omega_s c_2 - \\frac{3}{4} \\omega^2 c_1 + \\alpha \\omega^2 c_2 = 0\n\\end{equation}\nwhere we have defined\n\\begin{equation}\n\\alpha = \\frac{3\\sqrt{3}}{4}\\frac{m_1 - m_2}{m_1 + m_2}\n\\end{equation}\nand\n\\begin{equation}\n-c_2 \\omega_s^2 + 2i\\omega \\omega_sc_1 - \\frac{9}{4} \\omega^2 c_2 + \\alpha \\omega^2 c_1 = 0\n\\end{equation}\nEliminating the constants (setting the determinant of the coefficient matrix to zero) we get\n\\begin{equation}\n16\\omega_s^4 - 16 \\omega^2\\omega_s^2 + (27 - 16\\alpha^2)\\omega^4  = 0\n\\end{equation}\nUsing the quadratic formula\n\\begin{equation}\n\\omega_s^2 = \\frac{1}{32}\\left[ 16\\omega^2 \\pm \\sqrt{16^2 \\omega^4 - 64 (27 - 16 \\alpha^2) \\omega^4}\\right] \n\\end{equation}\n\\begin{equation}\n\\omega_s^2 = \\omega^2\\left[ \\frac{1}{2} \\pm \\sqrt{\\frac{1}{4}  - \\frac{1}{16} \\left(27 - 27 \\left(\\frac{m_1 - m_2}{m_1 + m_2}\\right)^2\\right)}\\right] = \\omega^2\\left[ \\frac{1}{2} \\pm \\sqrt{-\\frac{23}{16} + \\frac{27}{16} \\left(\\frac{m_1 - m_2}{m_1 + m_2}\\right)^2}\\right] \n\\end{equation}\nFor stability $\\omega_s$ must be real, so both roots $\\omega_s^2$ must be real and non-negative. Since the square root above is at most $\\frac{1}{2}$ whenever it is real, the binding condition is that its argument be non-negative:\n\\begin{equation}\n\\frac{27}{16}\\left(\\frac{m_1 - m_2}{m_1 + m_2}\\right)^2 \\geq \\frac{23}{16}\n\\end{equation}\nSolving for the mass fraction $\\mu = m_2/(m_1 + m_2)$ of the lighter mass, we get\n\\begin{equation}\n\\mu < \\frac{27 - \\sqrt{621}}{54} \\approx 0.0385\n\\end{equation}\n\n\n\n\\end{document}\n", "meta": {"hexsha": "e7f83c364c439fb9ed38c757527e2ffb74359c64", "size": 17161, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classical/lec06.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classical/lec06.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classical/lec06.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6666666667, "max_line_length": 437, "alphanum_fraction": 0.6720470835, "num_tokens": 6607, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.798186787341014, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5760858958971891}}
{"text": "\\chapter{Outline of lectures in PKU summer school}\n\nLecture 1:  \n\nLecturer: Jinchao\nTA:\t  Qingguo\n\nIntroduction to finite element spaces\nInterpolation error estimates for 1D\n\nLecture 2:  Jun Hu\n\nN-term approximation [for Haar basis: Lin]\nInterpolation error estimates for adaptive grids [Juncai discuss with Chen Long]\n\nLecture 3:  \n\nError estimates for sparse grids  \n \nIntroduction to DNN\nApproximation property for general activation function (non-polynomial)\nJones's approximation result for cosine activation function\nBarron's result in L2 (homework) \n(Notes: Chunyue)\n \nLecture 4:\n\nDNN with ReLU (paper)\nApproximation of CNN (for 1d data: curves:  Juncai) \nSprecher representation theorem (no proof, Xuhao)\n\nLecture 5: Spectral approximation for ReLU DNN\n1. $x^2$ approximation and $x*y$\n2. abstract function class properties\n3. spectral approximation properties\n\n\n\n \n", "meta": {"hexsha": "cbaaa347086a454e259b31ada23385a560599e1c", "size": 874, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/PKU-Lectures.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/PKU-Lectures.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/PKU-Lectures.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.85, "max_line_length": 80, "alphanum_fraction": 0.7826086957, "num_tokens": 228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7981867777396211, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.576085888967449}}
{"text": "% !TeX root = ./apxthy.tex\n\n\\section{Tensor Products and Sparse Grids}\n%\n\\label{sec:sparse}\n\n\\subsection{Introduction and Motivation}\n%\n\\label{sec:sparse:intro}\n%\nWe now consider the approximation of functions $f : [a,b]^d \\to \\R$. We will\nexplore (1) how to construct ``good'' approximations and (2) what effect the\ndimension $d$ has on the rates of approximation.\n\nWe will build multi-variate approximations from tensor products of uni-variate\nbasis functions. For example, a polynomial in two dimensions can be written\nas\n\\[\n  p(x_1, x_2) = \\sum_{k_1 = 0}^N \\sum_{k_2 = 0}^N c_{k_1k_2} x_1^{k_1} x_2^{k_2}.\n\\]\nThe function $(x_1, x_2) \\mapsto x_1^{k_1}x_2^{k_2}$ is called the tensor\nproduct between the two functions $x_j \\mapsto x_j^{k_j}$.\n\nWe know of course from our univariate theory that the Chebyshev basis has\nmany advantageous algorithmic and approximation properties, thus we may\nprefer to write $p(x_1, x_2)$ in the form\n\\[\n    p(x_1, x_2) =\n    \\sum_{k_1 = 0}^N \\sum_{k_2 = 0}^N\n    \\tilde{c}_{k_1k_2} T_{k_1}(x_1) T_{k_2}(x_2).\n\\]\nIn a general dimension $d > 1$ we will write\n\\begin{align*}\n    \\bx &= (x_1, \\dots, x_d), \\\\\n    \\bk &= (k_1, \\dots, k_d), \\\\\n    T_{\\bk}(\\bx) &= \\prod_{\\alpha = 1}^d T_{k_\\alpha}(x_\\alpha),\n\\end{align*}\nand consider polynomials\n\\[\n    p(\\bx) = \\sum_{\\bk \\in \\mathcal{K}} c_{\\bk} T_{\\bk}(\\bx),\n\\]\nwhere $\\calK \\subset \\N^d$ is a suitable index set.\n\nThe definition of $T_{\\bk}$ can equivalently be written as\n\\[\n    T_{\\bk} = T_{k_1} \\otimes T_{k_2} \\otimes \\cdots \\otimes T_{k_d}.\n\\]\n\nHowever we write these multi-variate polynomials we can see the ``curse of\ndimensionality'' creep in already: the number of coefficients to represent a\n$d$-variate polynomial of degree $N$ is clearly $(1+N)^d$. Suppose our\ncomputational budget is one million coefficients (quite a lot!) and we are\nworking in 10 dimensions. What is the maximal degree we may choose? (From\n$(1+N)^{10} \\leq 10^6$ we get $1+N \\leq 10^{0.6} \\approx 3.98$, i.e.\\ only\ndegree $N = 2$.) But we will\nsee that the notion of degree has no unique extension to higher dimensions and\nthis will sometimes save us.\n\nBefore we proceed, we need to introduce a minimal amount of additional\nnotation.\n\n\n\\subsection{The Curse of Dimensionality}\n%\n\\label{sec:sparse:curse}\n%\nAs a first attempt at constructing high-dimensional approximations we\napply the one-dimensional techniques in each coordinate direction. 
For\nexample, let us assume that $f : [-1,1]^2 \\to \\R$ has any regularity\nwe may later need, then we can first apply a Chebyshev interpolant\nin the $x_1$-direction, which we denote by $I_N^{(1)}$;\nthe result is a polynomial in $x_1$ but a general function in $x_2$, i.e.,\n\\[\n    I^{(1)}_N f(x_1, x_2) =\n    \\sum_{k_1 = 0}^N c_{k_1}(x_2) T_{k_1}(x_1).\n\\]\nWe then apply an interpolant in the $x_2$-direction to obtain\n\\[\n    I^{(2)}_N I^{(1)}_N f(x_1, x_2)\n    = \\sum_{k_1 = 0}^N I^{(2)}_N c_{k_1}(x_2) T_{k_1}(x_1)\n    = \\sum_{k_1, k_2 = 0}^N c_{k_1k_2} T_{k_1}(x_1) T_{k_2}(x_2)\n    = \\sum_{\\bk \\in \\{0\\dots N\\}^2} c_\\bk T_\\bk(\\bx).\n\\]\nMore generally, we define the $d$-dimensional Chebyshev interpolation\noperator to be\n\\[\n    I^{(1..d)}_N :=  I^{(d)}_N I^{(d-1)}_N \\cdots I^{(1)}_N.\n\\]\n\nTo estimate the error committed, let us assume that $x_\\alpha \\mapsto f(\\bx) \\in\nC^j([-1,1])$ for all $\\alpha$, then we can bound\n\\begin{align*}\n    \\b\\|f -  I^{(d)}_N I^{(d-1)}_N \\cdots I^{(1)}_N f\\b\\|_\\infty\n    &\\leq\n    \\b\\|f -  I^{(d)}_N f\\b\\|_\\infty\n    + \\b\\| I^{(d)}_N f - I^{(d)}_N I^{(d-1)}_N \\cdots I^{(1)}_N f \\b\\|_\\infty  \\\\\n    &\\leq\n    C (\\log N) N^{-j} \\| \\partial_{x_d}^j f \\|_\\infty\n    + \\b\\| I^{(d)}_N \\big[ f - I^{(d-1)}_N \\cdots I^{(1)}_N f\\big] \\b\\|_\\infty\n    \\\\\n    &\\leq\n    C (\\log N) \\Big\\{ N^{-j} \\| \\partial_{x_d}^j f \\|_\\infty\n    + \\b\\| f - I^{(d-1)}_N \\cdots I^{(1)}_N f \\b\\|_\\infty \\Big\\}.\n\\end{align*}\nArguing by induction we obtain the following result.\n\n\\begin{theorem} \\label{th:sparse:curse}\n    (1) Let $x_\\alpha \\mapsto f(\\bx) \\in C^j([-1,1])$ for $\\alpha = 1, \\dots,\n    d$, then\n    \\[\n        \\b\\|f -  I^{(1..d)}_N f\\b\\|_\\infty\n        \\leq C (\\log N)^d N^{-j} \\sum_{\\alpha = 1}^d \\| \\partial_{x_\\alpha}^j f \\|_\\infty\n    \\]\n    (2) Let $x_\\alpha \\mapsto f(\\bx) \\in A(E_\\rho)$ for some $\\rho > 1$ and for\n    all remaining coordinates in $[-1,1]^{d-1}$, then\n    \\[\n        \\b\\|f -  I^{(1..d)}_N f\\b\\|_\\infty\n        \\leq\n        C  M_f (\\rho-1)^{-d} (\\log N)^d \\rho^{-N} \\sum_{\\alpha = 1}^d M^{(\\alpha)},\n    \\]\n    where $M^{(\\alpha)} = \\|f\\|_{L^\\infty(E_\\rho^{(\\alpha)})}$, with\n    $E_\\rho^{(\\alpha)} = [-1,1] \\times \\cdots \\times [-1,1] \\times E_\\rho\n    \\times [-1,1] \\times \\cdots \\times [-1,1]$, the Bernstein ellipse\n    appearing in the $\\alpha$th coordinate.\n\\end{theorem}\n\nLet us translate these estimates into cost-error relations. The number of basis\nfunctions for a $d$-dimensional tensor product basis of degree $N$ is $(1+N)^d$\nand this is directly proportional to the associated computational cost. Further,\neven though the terms $(\\log N)^d$ are significant they arise from sub-optimal\nestimates of the interpolation error (cf. 
\\cite{Trefethen2013-rg} for sharp\nestimates), hence we ignore them.\nLetting $\\epsilon := \\b\\|f -  I^{(1..d)}_N f\\b\\|_\\infty$, we therefore obtain\n\\[\n    {\\rm Cost} \\approx N^d \\lesssim\n        \\left\\{\\begin{array}{ll}\n            \\epsilon^{-d/j}, & \\text{case (1)}; \\\\\n            |\\log \\epsilon|^{d}, & \\text{case (2)}.\n        \\end{array}\\right.\n\\]\nThe exponential dependence of these estimates on $d$ is what we call the ``curse\nof dimensionality'': without new ideas and new information it becomes\nexponentially harder to approximate functions in high dimension.\n\n\n\\subsection{Chebyshev series and greedy approximation}\n%\n\\label{sec:sparse:chebseries}\nWe begin by making precise the notion of a multi-variate Chebyshev series,\nwhich has already been at the back of our mind since the beginning of\n\\S~\\ref{sec:sparse}. To that end, we define the $d$-dimensional Chebyshev\nspace\n\\begin{align*}\n    L^2_{\\rm C}([-1,1]^d) &:= \\b\\{ f : [-1,1]^d \\to \\R \\text{ measurable, and }\n                                   \\| f\\|_{L^2_{\\rm C}} < \\infty \\b\\},\n                                        \\qquad \\text{where} \\\\\n    &\\|f\\|_{L^2_{\\rm C}}^2\n        := \\int_{[-1,1]^d}  |f(x)|^2 \\, \\prod_{\\alpha = 1}^d (1 - x_\\alpha^2)^{-1/2} \\, d\\bx.\n\\end{align*}\n\nNote how the weight in this integral is simply the tensor product of the\nunivariate weights. Because of this, we have the orthogonality (Exercise: check\nthis!)\n\\begin{equation}\n    \\label{eq:sparse:dDorth}\n    \\b\\< T_\\bk, T_{\\bk'} \\b\\>_{L^2_{\\rm C}} = \\delta_{\\bk\\bk'}.\n\\end{equation}\nThat is, $\\{ T_\\bk \\}_{\\bk \\in \\N^d}$ is an orthonormal subset of $L^2_{\\rm C}$.\nThe result from the previous section moreover indicates density of\ntheir linear combinations (polynomials) and after making this precise we will\nobtain the following result:\n\n\\begin{theorem} \\label{th:sparse:mvchebseries}\n    (i) Let $f \\in L^2_{\\rm C}([-1,1]^d)$, then there exist coefficients\n    $\\tilde{f}_\\bk, \\bk \\in \\N^d$ such that\n    \\[\n        f = \\sum_{\\bk \\in \\N^d} \\tilde{f}_\\bk T_\\bk,\n    \\]\n    where the convergence of the series is in the $L^2_{\\rm C}$-norm.\n\n    (ii) Moreover, we have a multi-variate Plancherel Theorem\n    \\begin{equation}\n        \\label{eq:sparse:plancherel}\n        \\|f\\|_{L^2_{\\rm C}}^2 = \\sum_{\\bk \\in \\N^d}\n            \\b| \\tilde{f}_\\bk \\b|^2.\n    \\end{equation}\n\n    (iii) If $(\\tilde{f}_\\bk)_{\\bk \\in \\N^d} \\in \\ell^1(\\N^d)$, then $f$ is continuous\n    and the convergence is in the max-norm (i.e., uniform in $[-1,1]^d$).\n\\end{theorem}\n\nWe can now translate the Hilbert-space best approximation results\nto the multivariate setting. 
For every finite set $\\mathcal{K} \\subset \\N^d$\nwe have a Chebyshev series truncation\n\\[\n    p_\\calK := \\tilde\\Pi_{\\calK} f := \\sum_{\\bk \\in \\calK} \\tilde{f}_\\bk T_\\bk.\n\\]\nThe resulting error is\n\\[\n    \\| f - p_\\calK \\|_{L^2_{\\rm C}}^2\n        = \\sum_{\\bk \\in \\N^d \\setminus \\calK}\n        |\\tilde{f}_\\bk|^2,\n\\]\nwhich we can minimise as follows:\n\\begin{enumerate}\n\\item Compute the coefficients $\\tilde{f}_\\bk$ and order them by magnitude,\n\\[\n    |\\tilde{f}_{\\bk_1}| \\geq |\\tilde{f}_{\\bk_2}| \\geq \\dots\n\\]\n\\item Given a ``budget'' $M$, let $\\calK := \\{\\bk_1, \\dots, \\bk_M\\}$\n\\item Then, $p_\\calK$ is the {\\em best $M$-term approximation} to $f$\n        in the $L^2_{\\rm C}$-norm.\n\\end{enumerate}\n\n\\begin{remark}\n    It is not clear at all that minimising the number of terms in $\\calK$\n    optimises the computational cost to evaluate $p_\\calK$ for a given target\n    error, due to the fact that the basis functions are computed via a recursion\n    formula, but there are far more significant issues with computing with best\n    $M$-term approximations, hence we will not explore this any further.\n\\end{remark}\n\nBy a similar mechanism we can also get an $L^\\infty$-error bound,\n\\[\n    \\|f-p_\\calK\\|_{\\infty}\n    \\leq \\sum_{\\bk \\in \\N^d \\setminus \\calK}\n    \\b|\\tilde{f}_\\bk\\b|,\n\\]\nusing the fact that $|T_\\bk(\\bx)| \\leq 1$. However, it is not at all obvious yet\nwhether this bound is close to optimal.\n\n\\begin{proof}[Proof of Theorem~\\ref{th:sparse:mvchebseries}]\n    We will use, without proof, the fact that continuously differentiable\n    functions are dense in $L^2_{\\rm C}$. This can be proven e.g. by using a\n    multi-variate Jackson-type approximation. Then Theorem~\\ref{th:sparse:curse}\n    implies that polynomials are dense in $L^2_{\\rm C}$. The remaining\n    statements are straightforward.\n\\end{proof}\n\n\n\n\\subsection{Sparse Grids}\n%\n\\label{sec:sparse:sparse}\n%\nIn practice it is rare (though not impossible) to have the explicit coefficients\n$\\tilde{f}_\\bk$ available, which makes it exceedingly difficult to develop\n``greedy algorithms''; see \\cite{DeVore1998-do} for an extensive review\narticle. However, one may have some more generic qualitative information,\nsuch as genuine multi-variate versions of $C^j$ regularity or analyticity.\n\nBefore we motivate the next definition let us assume that $f$ has not only\nunivariate partial derivatives $\\partial_{x_\\alpha}^j f$ but also mixed\nderivatives, e.g., $\\partial_{x_1} \\partial_{x_2} f$. Unfortunately, this\nidea is difficult to motivate in the Chebyshev basis, so we temporarily\nswitch to multi-variate trigonometric polynomials. That is,\n\\[\n    f \\in L^2(\\TT^d)\n\\]\nand the multi-variate Fourier series is given by\n\\[\n    f(\\bx) = \\sum_{\\bk \\in \\Z^d} \\hat{f}_\\bk\n        e^{i \\bk\\cdot \\bx},\n\\]\nwhere we note that\n\\[\n    e^{i \\bk\\cdot \\bx} =\n    e^{i \\sum_{\\alpha = 1}^d k_\\alpha x_\\alpha} =\n    \\prod_{\\alpha = 1}^d e^{i k_\\alpha x_\\alpha},\n\\]\ni.e. 
this is again precisely the same setting as before.\n\nAssuming that $f : \\TT^2 \\to \\R$ has two mixed derivatives, and just\ncalculating formally,\n\\begin{align*}\n    \\partial_{x_1} \\partial_{x_2} f\n    &=\n    \\sum_{\\bk \\in \\Z^2} \\hat{f}_\\bk\n    \\partial_{x_1} e^{ik_1x_1} \\partial_{x_2} e^{ik_2x_2} \\\\\n    &=\n    \\sum_{\\bk \\in \\Z^2} - k_1 k_2 \\hat{f}_\\bk\n    e^{i \\bk \\cdot \\bx}.\n\\end{align*}\nThus, if $\\partial_{x_1} \\partial_{x_2} f  \\in L^2(\\TT^2)$, then we have\nthat\n\\[\n    \\sum_{\\bk \\in \\Z^2} |k_1|^2 |k_2|^2 |\\hat{f}_{\\bk}|^2 < \\infty.\n\\]\nThis gives us some first information about the decay of $\\hat{f}$ that we\ncan exploit. Specifically, the best information we have is that we should choose\nindex sets of the form\n\\[\n    \\mathcal{K}_N^{\\rm hc} =  \\b\\{\n            \\bk \\in \\Z^2 : |k_1| |k_2| \\leq N \\b\\}.\n\\]\nThis is an example of the ``hyperbolic cross'' approximation. We will\nreturn to this idea again in a moment.\n\nThe foregoing example shows how genuine multi-variate regularity\nrelates directly to decay of Fourier coefficients, and one can show similarly\nfor $f: [-1,1]^d \\to \\R$ that it relates to the decay of Chebyshev\ncoefficients. It is therefore expedient to define the following function classes:\n\\begin{align*}\n    \\mathcal{A}_\\omega^{(2)}\n    &:=\n        \\B\\{ f \\in L^2_{\\rm C} \\Bsep\n             \\sum_{\\bk \\in \\N^d} \\omega(\\bk)^2 |\\tilde{f}_\\bk|^2 < \\infty \\B\\} \\\\\n             %\n    \\mathcal{A}_\\omega^{(\\infty)}\n    &:=\n    \\B\\{ f \\in L^2_{\\rm C} \\Bsep\n    \\sum_{\\bk \\in \\N^d} \\omega(\\bk) |\\tilde{f}_\\bk| < \\infty \\B\\}.\n\\end{align*}\nInformally, the ``faster'' $\\omega(\\bk)$ grows as $|\\bk| \\to \\infty$ the\n``smoother'' is $f$. In the following we will focus on\n$\\mathcal{A}_\\omega^{(\\infty)}$ but analogous results also hold for\n$\\mathcal{A}_\\omega^{(2)}$.\n\nThere are of course limitless possibilities; we will consider two specific\ncases:\n\\[\n    (1) \\quad \\omega(\\bk) = \\prod_{\\alpha = 1}^d (1+k_\\alpha)^j,\n    \\qquad \\text{and} \\qquad\n    (2) \\quad \\omega(\\bk) = \\prod_{\\alpha = 1}^d \\rho^{k_\\alpha} = \\rho^{\\sum_\\alpha k_\\alpha},\n\\]\nwhere $j \\geq 1$ and $\\rho > 1$. The case (1) is related to mixed $C^j$\nregularity, while the case (2) corresponds to analyticity of $f$ in\n$E_{\\rho}^d$. Both cases can be extended in obvious ways to more anisotropic\nconditions, e.g., $\\omega(\\bk) = \\prod_{\\alpha = 1}^d \\rho_\\alpha^{k_\\alpha}$\nand similar.\n\n{\\bf Warning: } Case (1) is in fact difficult to obtain in practice except with very strongly dimension-dependent constants. In particular the hyperbolic cross approximation we derive from this below is not how it is used in practice.\nHowever, (1) is similar in spirit to $\\omega(\\bk) = \\prod_\\alpha \\max(1, k_\\alpha)^j$, which leads to the {\\em true} hyperbolic cross approximation.\n\nAs Chebyshev index sets we choose the sublevel sets of $\\omega$, i.e.,\n\\begin{align*}\n    \\calK_N^{\\rm hc} &:= \\B\\{ \\bk \\in \\N^d \\Bsep \\prod_{\\alpha=1}^d (1+k_\\alpha) \\leq N \\B\\},\n    \\qquad \\text{in Case (1);} \\\\\n    \\calK_N^{\\rm tot} &:= \\B\\{ \\bk  \\in \\N^d \\Bsep \\sum_{\\alpha = 1}^d k_\\alpha \\leq N \\B\\},\n    \\qquad \\text{in Case (2).}\n\\end{align*}\nCase (1) is called the {\\em hyperbolic cross approximation} scheme, while Case\n(2) is called the {\\em sparse grid approximation}. 
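\n\nTo get a feel for the sizes involved, here is a small enumeration sketch (not part of the notes; plain Python, with $d$ and $N$ chosen arbitrarily) comparing the full tensor-product count $(1+N)^d$ with $\\#\\calK_N^{\\rm hc}$ and $\\#\\calK_N^{\\rm tot}$:\n\\begin{verbatim}\n# Sketch: count the multi-indices in the index sets defined above.\nfrom itertools import product\nfrom math import prod, comb\n\ndef count(d, N, rule):\n    # brute-force enumeration of {0,...,N}^d filtered by `rule`\n    return sum(1 for k in product(range(N + 1), repeat=d) if rule(k))\n\nd, N = 3, 8\ntensor = count(d, N, lambda k: True)\nhyper  = count(d, N, lambda k: prod(1 + ki for ki in k) <= N)\ntotal  = count(d, N, lambda k: sum(k) <= N)\n\nassert tensor == (1 + N)**d\nassert total == comb(N + d, d)  # the binomial count proven below\nprint(tensor, hyper, total)     # the sparse sets are far smaller\n\\end{verbatim}\n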
$\\sum_\\alpha k_\\alpha$ is also called the {\\em total degree} of the polynomial $T_{\\bk}$.\n\n\\begin{theorem} \\label{th:sparse:grids}\n    (1) Suppose that $f \\in \\mathcal{A}_\\omega^{(\\infty)}$ with\n    $\\omega(\\bk) = \\prod_\\alpha (1+k_\\alpha)^j$, then\n    \\begin{align}\n        \\label{eq:sparse:hc_error}\n        \\| f - \\tilde\\Pi_{\\calK_N^{\\rm hc}} f \\|_\\infty\n            &\\leq M_f N^{-j}, \\qquad \\text{while} \\\\\n        \\label{eq:sparse:hc_nbasis}\n        \\#\\calK_N^{\\rm hc} &\\lesssim N \\log^{d-1} N,\n    \\end{align}\n    where $M_f := \\sum_{\\bk \\in \\N^d} \\omega(\\bk) |\\tilde{f}_\\bk|$.\n\n    (2) Suppose that $f \\in \\mathcal{A}_\\omega^{(\\infty)}$ with\n    $\\omega(\\bk) = \\rho^{|\\bk|_1}$ where $\\rho > 1$, then\n    \\begin{align}\n        \\label{eq:sparse:sp_error}\n        \\| f - \\tilde\\Pi_{\\calK_N^{\\rm tot}} f \\|_\\infty\n            &\\leq M_f \\rho^{-N}, \\qquad \\text{while} \\\\\n        \\label{eq:sparse:sp_nbasis}\n        \\#\\calK_N^{\\rm tot} &= {N+d \\choose d},\n    \\end{align}\n    where $M_f := \\sum_{\\bk \\in \\N^d} \\omega(\\bk) |\\tilde{f}_\\bk|$.\n\\end{theorem}\n\nLet $\\epsilon := \\| f - \\tilde\\Pi_{\\calK_N^{*}} f \\|_\\infty$, then  in\ncase (1), hyperbolic cross, we obtain\n\\[\n    {\\rm Cost} \\approx \\#\\calK_N^{\\rm hc}\n    \\lesssim \\epsilon^{-1/j} \\log^{d-1} \\epsilon^{-1/j}.\n\\]\n\nIn case (2), sparse grid, the cost-error estimate is a little more involved.\nFirst, we use Stirling's formula to estimate\n\\begin{align*}\n    {N+d \\choose d} &\\approx \\sqrt{\\frac{2\\pi (N+d)}{2\\pi N 2\\pi d}}\n            \\frac{ ((N+d)/e)^{N+d}}{(N/e)^N (d/e)^d} \\\\\n    &\\lesssim\n    \\B(1+\\frac{d}{N}\\B)^N \\B(1+\\frac{N}{d}\\B)^d.\n\\end{align*}\nWe distinguish two cases, $d \\ll N$ and $N \\ll d$:\n\\[\n    \\#\\calK_N^{\\rm tot} \\lesssim\n    \\left\\{\\begin{array}{rl}\n        e^d (1+N/d)^d, & N \\gg d, \\\\\n        e^N (1+d/N)^N, & N \\ll d.\n    \\end{array} \\right.\n\\]\nIn the $N \\gg d$ case, which is the one more relevant for moderately high\ndimension, we can now readily obtain\n\\[\n    {\\rm Cost} \\approx \\#\\calK_N^{\\rm tot}  \\lesssim\n    \\B( \\frac{c |\\log \\epsilon|}{d}\\B)^d,\n\\]\nwhich does not entirely remove the curse of dimensionality, but it substantially\nameliorates it. Contrast this with the estimate for a tensor product\nbasis, ${\\rm Cost} \\approx |\\log \\epsilon|^d$.\n\n% The case $d \\gg N$ is more difficult.\n\nIt is somewhat striking that the ``good'' case when $f$ is analytic uses less\naggressive sparsification and also ameliorates the curse of dimensionality to a\nlesser degree. That said, the rate of approximation is still better of course,\nand this can be seen in the occurrence of $|\\log \\epsilon|$ instead of\n$\\epsilon^{-1/j}$ in the estimate.\n\n\\subsubsection{Proofs}\n\nThroughout the following proofs we write $p_N := \\tilde\\Pi_{\\calK}f$ where\n$\\calK = \\calK_N^{\\rm tot}$ or $\\calK = \\calK_N^{\\rm hc}$. Further, we let\n$M_{N,d} := \\#\\calK$ in dimension $d$.\n\n\\begin{proof}[Proof of \\eqref{eq:sparse:hc_error}]\n    \\begin{align*}\n        \\|f - p_N\\|_\\infty\n        &\\leq\n        \\sum_{\\prod (1+k_\\alpha)>N} |\\tilde{f}_\\bk| \\\\\n        &\\leq\n        N^{-j}\n        \\sum_{\\prod (1+k_\\alpha)>N}  \\prod_\\alpha (1+k_\\alpha)^j |\\tilde{f}_\\bk| \\\\\n        &\\leq\n        M_f N^{-j}. 
\\qedhere\n    \\end{align*}\n\\end{proof}\n\n\\begin{proof}[Proof of~\\eqref{eq:sparse:hc_nbasis}]\n    For $d = 1$ we have $M_{N,1} = N+1$. For $d > 1$ we can create a\n    recursion\n    \\begin{align*}\n        M_{N,d}\n        &= \\sum_{k_{d} = 0}^N M_{\\lfloor N/(1+k_d) \\rfloor, d-1} \\\\\n        &\\leq \\sum_{k_d = 0}^N \\log^{d-2} \\b[ (N+2)/(1+k_d) \\b] \\frac{N+1}{1+k_d} \\\\\n        &\\leq (N+1) \\log^{d-2} (N+2) \\sum_{k_d = 0}^N \\frac{1}{1+k_d} \\\\\n        &\\leq (N+1) \\log^{d-1} (N+2).  \\qedhere\n    \\end{align*}\n\\end{proof}\n\n\\begin{proof}[Proof of~\\eqref{eq:sparse:sp_error}]\n    This is analogous to \\eqref{eq:sparse:hc_error}:\n    \\begin{align*}\n        \\|f - p_N\\|_\\infty\n        &\\leq \\sum_{\\sum k_\\alpha > N} |\\tilde{f}_\\bk| \\\\\n        &\\leq \\rho^{-N} \\sum_{\\sum k_\\alpha > N} \\rho^{|\\bk|_1} |\\tilde{f}_\\bk| \\\\\n        &\\leq M_f \\rho^{-N}. \\qedhere\n    \\end{align*}\n\\end{proof}\n\\begin{proof}[Proof of~\\eqref{eq:sparse:sp_nbasis}]\n    This is a simple combinatorics problem: The set $\\calK_N^{\\rm tot}$ can be\n    interpreted as the set of all $d$-element multi-sets from\n    $\\{0, \\dots, N\\}$, which gives the stated expression.\n    % or it can be proven by induction:\n    % for $d = 1$ the result is obvious. For the induction step, we have\n    % \\begin{align*}\n    %     M_{N,d+1} &= \\sum_{k_{d+1} = 0}^N M_{N-k_{d+1}, d} \\\\\n    %     &= \\sum_{k_{d+1} = 0}^N \\frac{(N-k_{d+1}+1)\\cdots(N-k_{d+1}+d)}{d!} \\\\\n    %     &= \\sum_{k = 0}^N \\frac{(k+1)\\cdots(k+d)}{d!} \\\\\n    %     &= \\frac{(N+1) \\cdots (N+d+1)}{(d+1)!}.\n    % \\end{align*}\n    % The final step needs further comment: we are claiming that\n    % \\[\n    %     {{N+d+1} \\choose {d+1}} = \\sum_{k = 0}^N {k+d \\choose d}\n    % \\]\n\\end{proof}\n", "meta": {"hexsha": "8a7be5335baf3ea7bbe5568082f008850b0486cd", "size": 18552, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/sparse.tex", "max_stars_repo_name": "cortner/MA3J8ApxThyApp", "max_stars_repo_head_hexsha": "9400c557187dbd82468df2dbd0a7da99d7f08f8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-05-22T05:11:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-03T02:47:25.000Z", "max_issues_repo_path": "tex/sparse.tex", "max_issues_repo_name": "cortner/MA3J8ApxThyApp", "max_issues_repo_head_hexsha": "9400c557187dbd82468df2dbd0a7da99d7f08f8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-03T22:23:55.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T01:58:58.000Z", "max_forks_repo_path": "tex/sparse.tex", "max_forks_repo_name": "cortner/ApxThyApp", "max_forks_repo_head_hexsha": "0b28c5c4370eb4d9c5a9063c2c5c1b938aa54a3d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-02T02:44:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-02T02:44:56.000Z", "avg_line_length": 39.8111587983, "max_line_length": 231, "alphanum_fraction": 0.6237602415, "num_tokens": 6789, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.7981867705385762, "lm_q1q2_score": 0.5760858885474501}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{graphics}\n\\begin{document}\n\\newcounter{problem}\n\\thispagestyle{empty}\n\n\\section*{random extra problems}\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nConsider a slab of air with density $\\rho$ of height $\\Delta y$ and\nperpendicular (that is, horizontal) area $A$, with volume $A\\,\\Delta\ny$.  If this air is in equilibrium with its surroundings, it neither\nrises nor falls, so its weight is supported by a pressure difference\n$\\Delta P$.\n\n\\textsl{(a)}~Draw a free-body diagram for the slab and find an\nexpression for the pressure difference $\\Delta P$ in terms of $\\Delta\ny$ and whatever else you need.\n\n\\textsl{(b)}~In the ideal gas law, there is a relationship between\n$P$, $V$, $N$, and $T$.  Convert this into a relationship between $P$\nand $\\rho$.  \\emph{Hint:} Recall that $\\rho$ is a mass per unit\nvolume; you might need to use the average mass of an air molecule.\n\n\\textsl{(c)}~Rearrange your result from part (a) using your result\nfrom part (b) to get a differential equation that relates\n$\\mathrm{d}\\rho/\\mathrm{d}y$ to $\\rho$.  Integrate this equation to\nget the density $\\rho$ as a function of $y$ for the atmosphere,\nassuming that it is \\emph{isothermal} (it isn't).\n\n\\textsl{(d)}~Comment on the relationship between your answer and the\nBoltzmann factor discussed in your textbook.\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nIn practice problem 14.58, you will show that in one period of\noscillation, a weakly damped harmonic oscillator loses energy $\\Delta\nE = 2\\pi\\,E/Q$, where $E$ is the mechanical energy in the oscillator\nand $Q$ is the quality factor, defined to be the ratio\n$Q\\equiv\\omega_0/\\gamma$.  If this oscillator is driven by a driving\nforce so that instead of decaying, its amplitude is held constant, how\nmuch average mechanical power does the driving force have to supply to\nthe oscillator?  A typical mechanical clock has a pendulum with a mass\nof 2~kg, a period of 1~s, and a quality factor of 20.  What is the\npower in W required to keep this pendulum swinging at an amplitude of\n4~cm?\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nConsider a one-dimensional wave on a string of the form\n\\begin{equation}\ny = y_0 \\cos(k\\,x + \\omega\\,t)\n\\end{equation}\nwith $y_0= 1.0\\times 10^{-2}~\\mathrm{m}$, $k= 1.0~\\mathrm{m^{-1}}$,\nand $\\omega= 2.0~\\mathrm{s^{-1}}$.  What is the velocity $\\vec{v}$\n(speed and direction) of the wave?  Draw three quantitative diagrams\nshowing (a)~the displacement $y$ as a function of time $t$ for a fixed\npoint at $x= +1.047~\\mathrm{m}$, (b)~the displacement $y$ as a\nfunction of position $x$ for a fixed time $t=0$, and (c)~the\ndisplacement $y$ as a function of position $x$ for a fixed time $t=\n1.047~\\mathrm{s}$.\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nTwo wave pulses, labeled A and B, are approaching each other on a rope\nas shown.  
The rope has mass per unit length $\\mu= 0.1~{\\rm\nkg\\,m^{-1}}$ and is held under constant tension $T= 10~{\\rm N}$.\nRecall that although the waves travel in the $+x$ and $-x$ directions,\neach piece of the rope is only moving in the $y$ direction.\n\\\\ \\rule{0.1\\textwidth}{0pt}\n\\resizebox{0.8\\textwidth}{!}{\\includegraphics{../pro/twopulse.eps}}\n\n(a) What is the wave speed $c$ in the rope?\n\n(b) Draw a graph of the instantaneous $y$-direction speed $v_y$ of the\nrope as a function of $x$ for the instant shown above.  Be\nquantitative; carefully label all interesting positions and speeds.\nAlso, pay attention to the {\\em sign} (positive or negative) of the\nspeed.\n\n(c) On your graph, point out the locations where the rope feels a $y$\ncomponent of force and indicate the sign (positive or negative) of the\nforce at each location.\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nA continuation of Problem 3: Draw a graph of the rope configuration\nat the time at which the two wave pulses are exactly coincident.  In\nwhat form is the ``energy'' of the wave pulses at this moment?\n\n\\paragraph{Problem~\\theproblem}\\refstepcounter{problem}%\nIn a mechanical watch, the oscillator is a metal wheel with moment of\ninertia $I$ attached to a torsional spring.  Model the torsional\nspring as exerting a restoring torque $\\tau$ which is proportional to\nits angular displacement $\\theta$ from its equilibrium orientation by\nthe equation\n\\begin{equation}\n\\tau = -\\kappa\\,\\theta \\nonumber\n\\end{equation}\nDerive the differential equation relating $\\theta$ to its second\nderivative (with respect to time).  What is the period $T$ of\noscillation?  As the watch gets warmer, the metal wheel expands.  If\nit expands (in radius) by a factor of $1.0001$ (i.e., $1+10^{-4}$), does\nthe period go up or down?  If the watch originally kept perfect time,\nhow many seconds per day does it gain or lose after the expansion?\nAssume (incorrectly) that the torsional spring is unaffected by the\nthermal expansion.\n\n\\end{document}\n", "meta": {"hexsha": "3838d67d53b39143b07709330229c835aa98fd14", "size": 4892, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/ps_random.tex", "max_stars_repo_name": "davidwhogg/Physics1", "max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z", "max_issues_repo_path": "tex/ps_random.tex", "max_issues_repo_name": "davidwhogg/Physics1", "max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 29, "max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z", "max_forks_repo_path": "tex/ps_random.tex", "max_forks_repo_name": "davidwhogg/Physics1", "max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.1509433962, "max_line_length": 70, "alphanum_fraction": 0.7448896157, "num_tokens": 1398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7981867825403176, "lm_q1q2_score": 0.5760858876550123}}
{"text": "\\documentclass[journal,onecolumn]{IEEEtran}\n%\\usepackage[left=2.2cm,right=2.2cm,top=2.5cm,bottom=2.5cm]{geometry}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{minted}\n\\usepackage{booktabs}\n%\\usepackage{commath}\n\\usepackage{float}\n\\usepackage{mathtools}\n\\usepackage{color}\n\\usepackage{amsthm}\n\\usepackage{parskip}\n\\usepackage{bm}\n\n\n\\usepackage[binary-units=true]{siunitx}\n\n\\newcommand{\\py}[1]{\\mintinline{python}{#1}}\n\n\\title{Artificial Intelligence (\\texttt{LINGI2261}) \\\\ Assignment 4 --- Group 13}\n\\author{Martin Braquet, Gilles Peiffer}\n\\date{December 11, 2019}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{The Bin Packing Problem}\n\\begin{enumerate}\n\t\\item The bin packing problem can be formulated as a local search problem as follows:\n\t\\begin{itemize}\n\t\t\\item The \\emph{problem} is to find a partition of a finite set \\(U\\) of items of rational size \\(0 \\le s(u) \\le 1\\) for each \\(u \\in U\\) into disjoint subsets \\(U_1, \\ldots, U_k\\) such that the sum of the item sizes in each \\(U_i\\) is no more than 1, and such that \\(k\\) is as small as possible.\n\t\t\n\t\tThis is equivalent to the problem of packing a finite set of items of integer size (\\(s(u) \\in [0, C]\\)) into bins of capacity \\(C\\), while minimizing the number of bins.\n\t\t\n\t\tA state is represented by a list of bins, each containing the indexes of the blocks inside this bin.\n\t\t\\item The \\emph{cost function}, which we want to maximize, is\n\t\t\\[\n\t\tf(k, \\bm{\\mathrm{fullness}}) = -\\mathrm{Fitness} =  \\left(\\frac{\\sum_{i=1}^{k} \\left(\\frac{\\mathrm{fullness}_i}{C}\\right)^2}{k}\\right) - 1,\n\t\t\\]\n\t\twhere \\(k\\) is the number of bins, \\(\\bm{\\mathrm{fullness}}\\) is a vector of size \\(k\\) containing the total weight of each bin and \\(C\\) is the capacity of the bins.\n\t\tThis is the negative of the Fitness coefficient; the sign originates from a desire to have a maximization problem.\n\t\t\\item A solution is feasible if every item is in a bin and\n\t\t\\[\n\t\t\\mathrm{fullness}_i \\le C \\quad \\textnormal{for} \\quad i = 1, \\ldots, k.\n\t\t\\]\n\t\tWe call the set of feasible bin packings \\(\\mathcal{S}\\).\n\t\t\\item A solution \\(\\sigma\\) is optimal if\n\t\t\\[\n\t\ts \\in \\mathcal{S} \\implies f(\\sigma) \\ge f(s).\n\t\t\\]\n\t\tEquivalently, one can write the set of optimal solutions as \\(\\{s \\in \\mathcal{S} : f(s) = \\max_{p \\in \\mathcal{S}} f(p)\\}\\).\n\t\\end{itemize}\n\t\\item The initial solution is constructed by taking the various items in the order in which they are given, and trying, for each one, to add it to the current bin.\n\tIf this is possible (i.e., the bin's capacity is not exceeded), then add it and move on to the next item.\n\tIf not, create a new bin and add the item to that bin, then move on to the next item.\n\tThis is the strategy which was initially implemented in the code template.\n\t\n\tThe successor function works by generating the two kinds of swap moves, and yielding the resulting states.\n\tThat is, given an input state, it tries to find all possible item-item and item-blank space swaps which do not violate the feasibility constraints and yields a successor state where the bins have been updated to reflect the change.\n\t\n\tBoth functions have been submitted and verified on INGInious, obtaining a score of 15/15.\n\t\\item  The results for the three strategies on each of the ten given instances are detailed in 
Table~\\ref{time1}.\\footnote{Experiments were run on an Early 2015 MacBook Pro, running macOS Sierra 10.12.6, using a \\SI{2.9}{\\giga\\hertz} Intel Core i5 processor, with \\SI{8}{\\giga\\byte} of \\SI{1867}{\\mega\\hertz} DDR3 RAM and an Intel Iris Graphics 6100 GPU.}\n\tWhen applicable, instances were tested multiple times and the average values were taken.\n\tThe optimality value is given using the cost function detailed higher up in this document.\n\tWhen the sign is reversed, this becomes the fitness measure defined in the assignment's statement.\n\t\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\begin{tabular}{c@{\\hspace{0.7cm}}ccc|ccc|ccc} \n\t\t\t\\toprule\n\t\t\tInst. & \\multicolumn{3}{c}{\\py{maxvalue}} & \\multicolumn{3}{c}{\\py{randomized_maxvalue}} & \\multicolumn{3}{c}{\\py{random_walk}}  \\\\\n\t\t\t\\midrule\n\t\t\t& Time (ms) & Opt. & Steps & Time (ms) & Opt. & Steps & Time (ms) & Opt. & Steps \\\\\n\t\t\t\\midrule\n\t\t\t1 & 213 & \\(-0.0256572\\) & 9 & 214 & \\(-0.0487638\\) & 13.1 & 401 & \\(-0.314751\\) & 12.7 \\\\ \n\t\t\t2 & 329 & \\(-0.0316317\\) & 10 & 242 & \\(-0.0316621\\) & 32.6 & 430 & \\(-0.296054\\) & 2.3 \\\\ \n\t\t\t3 & 252 & \\(-0.231149\\) & 5 & 261 & \\(-0.231456\\) & 15.6 & 409 & \\(-0.291955\\) & 4.8 \\\\ \n\t\t\t4 & 251 & \\(-0.233704\\) & 6 & 260 & \\(-0.234356\\) & 31.5 & 450 & \\(-0.295092\\) & 1.6 \\\\ \n\t\t\t5 & 546 & \\(-0.247307\\) & 2 & 357 & \\(-0.247385\\) & 2.7 & 386 & \\(-0.247858\\) & 0 \\\\ \n\t\t\t6 & 183 & \\(-0.0278449\\) & 6 & 219 & \\(-0.0278968\\) & 28 & 375 & \\(-0.259277\\) & 2.2 \\\\ \n\t\t\t7 & 197 & \\(-0.0417206\\) & 4 & 223 & \\(-0.0417037\\) & 27.4 & 389 & \\(-0.272147\\) & 1.3 \\\\ \n\t\t\t8 & 240 & \\(-0.0370099\\) & 4 & 232 & \\(-0.037048\\) & 5.7 & 385 & \\(-0.267996\\) & 2.6 \\\\ \n\t\t\t9 & 177 & \\(-0.0143129\\) & 5 & 186 & \\(-0.0143188\\) & 14.2 & 398 & \\(-0.252326\\) & 0.6 \\\\ \n\t\t\t10 & 237 & \\(-0.0432446\\) & 5 & 222 & \\(-0.0432216\\) & 25.3 & 434 & \\(-0.2732\\) & 1.9 \\\\\n\t\t\t\\bottomrule\n\t\t\t\\\\\n\t\t\\end{tabular}\n\t\t\\caption{Comparison of execution time, optimal value and number of steps for the three local search strategies.}\n\t\t\\label{time1}\n\t\\end{table}\n\t\\item \\begin{enumerate}\n\t\t\\item It is hard to say which strategy is best, since the results for the \\py{maxvalue} and \\py{randomized_maxvalue} strategies are very similar, with \\py{maxvalue} obtaining its optimal value after fewer steps.\n\t\tThis is slightly counterintuitive, as one expects that the \\py{randomized_maxvalue} strategy would outperform the \\py{maxvalue} strategy, which is likely to get stuck in local maxima.\n\t\tThis is probably a byproduct of the low limit on the number of steps.\n\t\t\n\t\tThere is however no doubt about the fact that both strategies outperform the \\py{random_walk} strategy.\n\t\t\\item The \\py{maxvalue} strategy is a simple hill-climbing algorithm, and chooses the best neighbour at each step.\n\t\tThe other strategies, while less likely to get stuck in local optima, do not necessarily go for the neighbours with the highest value, and thus often end up with worse solutions.\n\t\t\\item The \\py{maxvalue} strategy focuses entirely on intensification, that is, searching for the neighbour with the highest value, but does not take into account diversification (deviating from optimality in order to avoid getting stuck in local optima).\n\t\t\n\t\tThe \\py{random_walk} strategy, on the other hand, focuses entirely on diversification, while not paying attention to intensification: the 
next neighbour is chosen randomly, regardless of its value.\n\t\t\n\t\t\\item Finally, the \\py{randomized_maxvalue} strategy tries to find a balance between looking at the values of the neighbours (by taking the best five) and diversifying between those neighbours (by randomizing its choice).\n\t\t\\item Since the \\py{maxvalue} strategy does not diversify its choices, it has little to no chance of escaping local maxima.\n\t\tUnless the best neighbour of the local maximum has another neighbour which has an even higher value, the algorithm is going to start jumping back and forth between the maximum and its best neighbour, until the step limit is reached.\n\t\t\n\t\tOn the other hand, the \\py{random_walk} strategy is more likely to escape local maxima since it has a nonzero probability of ``randomly walking'' down the hill it climbed, finding another hill to climb.\n\t\t\n\t\tFinally, the \\py{randomized_maxvalue} strategy is somewhere in between.\n\t\tIt has a chance of escaping local maxima, given that the local maxima are not too isolated (so that the random part of the algorithm has a chance of jumping to a new hill).\n\t\\end{enumerate}\n\\end{enumerate}\n\n\\section{Propositional Logic}\n\\subsection{Models and Logical Connectives}\n\\begin{enumerate}\n\t% TODO perhaps we should use a slightly less exhaustive algorithm to answer this question.\n\t\\item The first sentence's truth table is given in Table~\\ref{tab:tt1}.\n\tSince it does not appear in the expression, we omit \\(D\\).\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\(\n\t\t\\begin{array}{ccc@{\\quad}ccc}\n\t\t\\toprule\n\t\tA & B & C & \\lnot A  \\lor C & \\lnot B \\lor C & (\\lnot A  \\lor C) \\land (\\lnot B \\lor C)\\\\\n\t\t\\midrule\n\t\t0 & 0 & 0 & 1 & 1 & 1 \\\\\n\t\t0 & 0 & 1 & 1 & 1 & 1 \\\\\n\t\t0 & 1 & 0 & 1 & 0 & 0 \\\\\n\t\t0 & 1 & 1 & 1 & 1 & 1 \\\\\n\t\t1 & 0 & 0 & 0 & 1 & 0 \\\\\n\t\t1 & 0 & 1 & 1 & 1 & 1 \\\\\n\t\t1 & 1 & 0 & 0 & 0 & 0 \\\\\n\t\t1 & 1 & 1 & 1 & 1 & 1 \\\\\n\t\t\\bottomrule\\\\\n\t\t\\end{array}\n\t\t\\)\n\t\\caption{Truth table for \\((\\lnot A  \\lor C) \\land (\\lnot B \\lor C)\\).}\n\t\\label{tab:tt1}\n\t\\end{table}\n\tAs one can see, there are ten valid interpretations (five for each value of \\(D\\)).\n\t\n\tThe second sentence's truth table is given in Table~\\ref{tab:tt2}.\n\tSince it does not appear in the expression, we omit \\(D\\).\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\(\n\t\t\\begin{array}{ccc@{\\quad}ccc}\n\t\t\\toprule\n\t\tA & B & C & C \\implies \\lnot A & \\lnot(B \\lor C) & (C \\implies \\lnot A) \\land \\lnot(B \\lor C)\\\\\n\t\t\\midrule\n\t\t0 & 0 & 0 & 1 & 1 & 1 \\\\\n\t\t0 & 0 & 1 & 1 & 0 & 0 \\\\\n\t\t0 & 1 & 0 & 1 & 0 & 0 \\\\\n\t\t0 & 1 & 1 & 1 & 0 & 0 \\\\\n\t\t1 & 0 & 0 & 1 & 1 & 1 \\\\\n\t\t1 & 0 & 1 & 0 & 0 & 0 \\\\\n\t\t1 & 1 & 0 & 1 & 0 & 0 \\\\\n\t\t1 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t\\bottomrule\\\\\n\t\t\\end{array}\n\t\t\\)\n\t\t\\caption{Truth table for \\((C \\implies \\lnot A) \\land \\lnot(B \\lor C)\\).}\n\t\t\\label{tab:tt2}\n\t\\end{table}\n\tAs one can see, there are four valid interpretations (two for each value of \\(D\\)).\n\t\n\tThe third sentence's truth table is given in Table~\\ref{tab:tt3}.\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\(\n\t\t\\begin{array}{cccc@{\\quad}cccc}\n\t\t\\toprule\n\t\tA & B & C & D & \\lnot A \\lor B & \\lnot(B \\implies \\lnot C) & \\lnot(\\lnot D \\implies A) & (\\lnot A \\lor B) \\land \\lnot(B \\implies \\lnot C) \\land \\lnot(\\lnot D \\implies A)\\\\\n\t\t\\midrule\n\t\t0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 
\\\\\n\t\t0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\\\\n\t\t0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\\\\n\t\t0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 \\\\\n\t\t0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\\\\n\t\t1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n\t\t1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n\t\t1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n\t\t1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\\\\n\t\t1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n\t\t1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\\\\n\t\t1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 \\\\\n\t\t1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\\\\n\t\t\\bottomrule\\\\\n\t\t\\end{array}\n\t\t\\)\n\t\t\\caption{Truth table for \\((\\lnot A \\lor B) \\land \\lnot(B \\implies \\lnot C) \\land \\lnot(\\lnot D \\implies A)\\).}\n\t\t\\label{tab:tt3}\n\t\\end{table}\n\tAs one can see, there is only one valid interpretation.\n\\end{enumerate}\n\n\\subsection{Color Grid Problem}\n\\begin{enumerate}\n\t\\item Using propositional logic, one can express the three types of constraints fairly easily.\n\tWe write \\(n\\) to denote the number of rows/columns/colors.\n\t\n\tTo express the fact that a color can only appear once in a given row and column, we write\n\t\\[\n\tC_{i, j, k} \\implies \\Bigg(\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne j} \\lnot C_{i, \\alpha, k}\\Bigg) \\land \\Bigg(\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne i} \\lnot C_{\\alpha, j, k}\\Bigg).\n\t\\]\n\t\n\tFor the diagonal constraints, one should notice that diagonals can be defined by the properties of the indices of the elements contained in them.\n\tThe first type, which we call ``constant-difference'' diagonals, has the property that the difference between the row and column indices of its cells is a constant value.\n\tThe second type, which we call ``constant-sum'' diagonals, has the property that the sum of the row and column indices of its cells is a constant value.\n\t\n\tTo express that a color can only appear once in a given constant-difference diagonal, we write\n\t\\[\n\tC_{i, j, k} \\implies \\bigwedge_{0 \\le i+\\alpha, j+\\alpha < n, \\alpha \\ne 0} \\lnot C_{i+\\alpha, j+\\alpha, k}.\n\t\\]\n\t\n\tTo express that a color can only appear once in a given constant-sum diagonal, we write\n\t\\[\n\tC_{i, j, k} \\implies \\bigwedge_{0 \\le i+\\alpha, j-\\alpha < n, \\alpha \\ne 0} \\lnot C_{i+\\alpha, j-\\alpha, k}.\n\t\\]\n\t\n\tOne must also assert that for a given square, at least one color must be used:\n\t\\[\n\t\\bigwedge_{\\substack{0 \\le i < n \\\\ 0 \\le j < n}} \\Bigg(\\bigvee_{0 \\le k < n} C_{i, j, k}\\Bigg).\n\t\\]\n\t\n\tAll of these constraints must be repeated for all possible values of \\((i, j, k)\\).\n\t\n\tAdditionally, all input values must be asserted as well:\n\t\\[\n\t\\bigwedge_{(i, j, k) \\in \\textnormal{Inputs}} C_{i, j, k}.\n\t\\]\n\t\n\tSince the number of rows, columns and colors is identical, it is worth noting that the uniqueness of the color of each cell is automatically enforced by the row, column and color conditions stated above. 
Indeed, it is impossible for a cell to have 2 different colors if all the $n$ cells in a row (or column) need to pick a different color among the $n$ colors available.\n\t\\item To put the constraints given in the previous part into conjunctive normal form, one proceeds as follows:\n\t\\begin{align*}\n\t&\\bigwedge_{\\substack{0 \\le i < n \\\\ 0 \\le j < n \\\\ 0 \\le k < n}} \\left(C_{i, j, k} \\implies {\\underbrace{\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne j} \\lnot C_{i, \\alpha, k}}_{\\textnormal{row}}} \\land {\\underbrace{\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne i} \\lnot C_{\\alpha, j, k}}_{\\textnormal{column}}}\\right. \\\\\n\t&\\left. \\qquad \\qquad \\! \\! {} \\land {\\underbrace{\\bigwedge_{0 \\le i+\\alpha, j+\\alpha < n, \\alpha \\ne 0} \\lnot C_{i+\\alpha, j+\\alpha, k}}_{\\textnormal{constant-difference diagonal}}} \\land {\\underbrace{\\bigwedge_{0 \\le i+\\alpha, j-\\alpha < n, \\alpha \\ne 0} \\lnot C_{i+\\alpha, j-\\alpha, k}}_{\\textnormal{constant-sum diagonal}}}\\right) \\\\\n\t&\\qquad \\qquad \\! \\! {} \\land {\\underbrace{\\bigwedge_{\\substack{0 \\le i < n \\\\ 0 \\le j < n}} \\Bigg(\\bigvee_{0 \\le k < n} C_{i, j, k}\\Bigg)}_{\\textnormal{color}}} \\land {\\underbrace{\\bigwedge_{(i, j, k) \\in \\textnormal{Inputs}} C_{i, j, k} \\vphantom{\\bigwedge_{\\substack{0 \\le j < n \\\\ 0 \\le k < n}}}}_{\\textnormal{inputs}}} \\\\\n\t=&\\bigwedge_{\\substack{0 \\le i < n \\\\ 0 \\le j < n \\\\ 0 \\le k < n}} \\left({\\underbrace{\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne j} \\Big(\\lnot C_{i, j, k} \\lor \\lnot C_{i, \\alpha, k} \\Big)}_{\\textnormal{row}}} \\land {\\underbrace{\\bigwedge_{0 \\le \\alpha < n, \\alpha \\ne i} \\Big(\\lnot C_{i, j, k} \\lor \\lnot C_{\\alpha, j, k}\\Big)}_{\\textnormal{column}}} \\right.\\\\\n\t&\\left. \\qquad \\qquad \\! \\! {}  \\land {\\underbrace{\\bigwedge_{0 \\le i+\\alpha, j+\\alpha < n, \\alpha \\ne 0} \\Big( \\lnot C_{i, j, k} \\lor \\lnot C_{i + \\alpha, j+\\alpha, k}\\Big)}_{\\textnormal{constant-difference diagonal}}} \\land {\\underbrace{\\bigwedge_{0 \\le i+\\alpha, j-\\alpha < n, \\alpha \\ne 0} \\Big( \\lnot C_{i, j, k} \\lor \\lnot C_{i+\\alpha, j-\\alpha, k} \\Big)}_{\\textnormal{constant-sum diagonal}}}\\right)\\\\\n\t&\\qquad \\qquad \\! \\! 
{} \\land {\\underbrace{\\bigwedge_{\\substack{0 \\le i < n \\\\ 0 \\le j < n}} \\Bigg(\\bigvee_{0 \\le k < n} C_{i, j, k}\\Bigg)}_{\\textnormal{color}}} \\land {\\underbrace{\\bigwedge_{(i, j, k) \\in \\textnormal{Inputs}} C_{i, j, k} \\vphantom{\\bigwedge_{\\substack{0 \\le j < n \\\\ 0 \\le k < n}}}}_{\\textnormal{inputs}}},\n\t\\end{align*}\n\twhere the last line is obtained using the distributivity and implication properties.\n\tAs required, the final form is a conjunction of disjunctions of literals, i.e., a conjunctive normal form.\n\t\\item The \\py{cgp_solver.py} program has been submitted and verified on INGInious, obtaining a score of 10/10.\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "f97a36e8c996a0003d1db3847b1203be183c00f5", "size": 15263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment4/report/report_A4_group_13.tex", "max_stars_repo_name": "Peiffap/lingi2261-assignments", "max_stars_repo_head_hexsha": "63413c0731617bc9865fba4d6d65a53550d0b684", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment4/report/report_A4_group_13.tex", "max_issues_repo_name": "Peiffap/lingi2261-assignments", "max_issues_repo_head_hexsha": "63413c0731617bc9865fba4d6d65a53550d0b684", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment4/report/report_A4_group_13.tex", "max_forks_repo_name": "Peiffap/lingi2261-assignments", "max_forks_repo_head_hexsha": "63413c0731617bc9865fba4d6d65a53550d0b684", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.0905511811, "max_line_length": 409, "alphanum_fraction": 0.6547205661, "num_tokens": 5447, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7981867705385762, "lm_q1q2_score": 0.576085878992837}}
{"text": "\\section{Front Trees}\n\\label{section:front-trees}\n\\par\nTo illustrate the different types of front trees, and their\ntransformations we do for the sake of efficiency,\nwe will use an an example the matrix R2D100, a matrix generated by\nfirst randomly triangulating the unit square with 100 grid points.\nThe resulting matrix has 100 rows and columns.\nWe ordered the matrix using a generalized nested dissection\nalgorithm from the {\\bf SPOOLES} library. \nOn the left in Figure~\\ref{fig:R2D100} is the triangulation.\nOn the right we have labeled the grid points with their place in\nthe nested dissection ordering.\nNote that vertices 90 through 99 form a separator of the graph.\nVertices 0 through 47 are found on the right of the separator, \nvertices 48 through 89 are found on the left\n\\par\n\\begin{figure}[htbp]\n\\caption{R2D100: randomly triangulated, 100 grid points}\n\\label{fig:R2D100}\n\\begin{center}\n\\mbox{\n\\psfig{file=R2D100.eps,width=3.0in,height=3.00in}\n}\n\\mbox{\n\\psfig{file=R2D100perm.eps,width=3.0in,height=3.00in}\n}\n\\end{center}\n\\end{figure}\n\\par\n\\subsection{Vertex elimination trees}\n\\label{subsection:vtx-elim}\n\\par\nRecall that the four ordering methods from\nSection~\\ref{section:ordering} return an {\\tt ETree} object.\n% In addition to the tree structure, an {\\tt  ETree} object contains \n% other information: the number of internal and external rows \n% and columns to a\n% front, and a map from each row and column to the front it belongs in.\nThere is another way to construct a tree using the {\\tt Graph} object\nand the permutation vectors.\nThe following code fragment shows how to do this.\n\\begin{verbatim}\nETree   *vetree ;\nint     *newToOld, *oldToNew ;\nGraph   *graph ;\n\nvetree = ETree_new() ;\nETree_initFromGraphWithPerms(vetree, graph, newToOld, oldToNew) ;\n\\end{verbatim}\nThe {\\tt vetree} object in the code fragment above is a\n{\\it vertex elimination tree} \n\\cite{liu90-etree}, \\cite{sch82-etree},\nwhere each front contains one vertex.\n\\par\nFigure~\\ref{fig:R2D100-tree-vtx} contains the vertex elimination tree \nfor this ordering.\nThe vertex elimination tree is\na representation of the partial order by which \nthe vertices in the graph may be eliminated.\\footnote{\nVertex $j$ is the parent of $i$ if $j$ is the first vertex greater\nthan $i$ such that $L_{j,i} \\ne 0$.\n}\nThe dependencies of the rows and columns form a tree structure.  \nThe leaves of the tree (our trees hang \nupside down with the leaves at the bottom and the root at the top)\nrepresent vertices which can be eliminated first.  
\nThe parents of those leaf nodes can be eliminated next, \nand so on, until finally the vertices represented \nby the root of the tree will be eliminated last.\n\\par\n\\begin{figure}[htbp]\n\\caption{Vertex elimination tree for R2D100, 100 rows and columns}\n\\label{fig:R2D100-tree-vtx}\n\\begin{center}\n\\mbox{\n\\psfig{file=vtree.eps,width=5.0in,height=5.00in}\n}\n\\end{center}\n\\end{figure}\n\\par\nThe elimination tree illustrates the dependence of the vertices.\nThe basic  rule is that a vertex {\\it depends} only on its descendents\nand will {\\it affect} only its ancestors.\nIt should be clear that the tree allows us to identify independent,\nparallel computation.\nFor example, the computation of the factor entries in the \nsubtree rooted at vertex 47 is completely independent of the\nsubtree rooted at vertex 89, so we could identify one process to\ncompute the left subtree and another to compute the right subtree.\n\\par\n\\subsection{Fundamental supernode trees}\n\\label{subsection:fs-tree}\n\\par\nWhile the vertex elimination tree is useful to communicate the data\ndependencies, it is not a good granularity on which to\nbase a factorization or solve, in serial or in parallel.\nIt is important to group vertices together in some meaningful way\nto create larger data structures that will be more efficient with\nrespect to storage and computation.\n% The first step in this direction is to group together vertices\n% that form a chain with no branches in the tree.\nAny grouping of vertices imposes a block structure on the matrix.\nThe {\\it fundamental supernode tree} \n\\cite{ash89-relaxed}\nhas these property:\nany node in the tree is \n\\begin{itemize}\n\\item either a leaf,\n\\item or has two or more children,\n\\item or its nonzero structure \n      is not contained in that of its one child.\n\\end{itemize}\nThe top tree in Figure~\\ref{fig:fs-trees}\nshows the vertex elimination tree with the ``front'' number of each\nvertex superimposed on the vertex.\nThe bottom tree is the fundamental supernode tree.\nFigure~\\ref{fig:R2D100-fs-mtx} shows the block partition superimposed on\nthe structure of the factor $L$.\nNote this one important property: \nwithin any block column and below the diagonal block,\na row is either zero or dense.\n\\par\n\\begin{figure}[htbp]\n\\caption{Top: vertex elimination tree with the vertices mapped to\nthe fundamental supernode that contains them. 
\nBottom: fundamental supernode tree.}\n\\label{fig:fs-trees}\n\\begin{center}\n\\mbox{\n\\psfig{file=fsvtree.eps,width=5.0in,height=5.00in}\n}\n\\par\n\\mbox{\n\\psfig{file=fstree.eps,width=4.71in,height=2.63in}\n}\n\\end{center}\n\\end{figure}\n\\par\n\\begin{figure}[htbp]\n\\caption{Block structure of $L$ with the fundamental supernode\npartition.}\n\\label{fig:R2D100-fs-mtx}\n\\begin{center}\n\\mbox{\n\\psfig{file=fsmtx.eps,width=5.0in,height=5.00in}\n}\n\\end{center}\n\\end{figure}\n\\par\nThe code fragment to convert a tree into a fundamental supernode tree is\ngiven below.\n\\begin{verbatim}\nETree   *fsetree, *vetree ;\nint     maxzeros ;\nIV      *nzerosIV ;\n\nnzerosIV = IV_new() ;\nIV_init(nzerosIV, vetree->nfront, NULL) ;\nIV_fill(nzerosIV, 0) ;\nmaxzeros = 0 ;\nfsetree = ETree_mergeFrontsOne(vetree, maxzeros, nzerosIV) ;\n\\end{verbatim}\nThe {\\tt ETree\\_mergeFrontsOne()} method constructs a new {\\tt ETree}\nobject from the {\\tt vetree} object.\nWhen a node $J$ has a single child $I$, it looks to see whether merging\n$I$ and $J$ together will add more than a given number of zeroes into\nthe block columns of $I$ and $J$.\n(The nonzero rows of the block of $I$ and $J$ together is the union of\nthe nonzero rows of blocks $I$ and $J$ separately, and all nonzero rows\nare stored as dense rows.)\nTo create a fundamental supernode tree, the number of\nzeros allowed into a block column is zero, i.e., the nonzero structure \nof the fundamental supernode tree contains no zeros.\nThe {\\tt nzerosIV} object contains a running count of the number of zero\nentries present in the factor storage.\nIt will be used in later calls to other transformation methods.\n\\par\n\\subsection{Amalgamated or relaxed supernode trees}\n\\label{subsection:am-tree}\n\\par\nA factorization based on the fundamental supernode tree requires\nno more operations than one based on the vertex elimination tree.\nThere are many small supernodes at the lower levels of the tree. \nBy {\\it amalgamating} small but connected sets of supernodes together \ninto larger supernodes we can reduce the overhead of the processing \nall of the small supernodes at the expense of adding\nentries to the factors and operations to compute the factorization.\nThis amalgamation of supernodes generally leads to an overall \nincrease in efficiency\n\\cite{ash89-relaxed},\n\\cite{duf83-multifrontal}.\nWe call the result the {\\it amalgamated} \nor {\\it relaxed} supernode tree.\n\\par\nThe top tree in Figure~\\ref{fig:am-trees}\nshows the vertex elimination tree with the ``front'' number of each\nvertex superimposed on the vertex.\nThe bottom tree is the amalgamated supernode tree.\nFigure~\\ref{fig:R2D100-am-mtx} shows the block partition superimposed on\nthe structure of the factor $L$.\n\\par\n\\begin{figure}[htbp]\n\\caption{Top: fundamental supernode tree with the supernodes mapped to\nthe amalgamated supernode that contains them. 
\nBottom: amalgamated supernode tree.}\n\\label{fig:am-trees}\n\\begin{center}\n\\mbox{\n\\psfig{file=amvtree.eps,width=5.0in,height=5.00in}\n}\n\\par\n\\mbox{\n\\psfig{file=amtree.eps,width=3.25in,height=1.38in}\n}\n\\end{center}\n\\end{figure}\n\\par\n\\begin{figure}[htbp]\n\\caption{Block structure of $L$ with the amalgamated supernode\npartition.}\n\\label{fig:R2D100-am-mtx}\n\\begin{center}\n\\mbox{\n\\psfig{file=ammtx.eps,width=5.0in,height=5.00in}\n}\n\\end{center}\n\\end{figure}\n\\par\nThe code fragment to create this amalgamated tree is found below.\n\\begin{verbatim}\nETree   *ametree ;\n\nmaxzeros = 20 ;\nametree = ETree_mergeFrontsAll(fsetree, maxzeros, nzerosIV) ;\n\\end{verbatim}\nThis method will merge a node with {\\it all} of its children if it will\nnot result in more than {\\tt maxzeros} zeros inside the new block.\nOn input, {\\tt nzerosIV} object keeps count of the number of zeroes \nalready in the blocks of {\\tt fsetree}, and on return it will\ncontain the number of zeros in the blocks of {\\tt ametree}.\n\\par\n\\subsection{Splitting large fronts}\n\\label{subsection:sp-tree}\n\\par\nThere is one final step to constructing the tree that governs the\nfactorization and solve.\nLarge matrices will generate large supernodes at the topmost levels\nof the tree.\nFor example, a $k \\times k \\times k$ grid with a 27 point finite\ndifference operator, when ordered by nested dissection, has a root\nsupernode with $k^2$ rows and columns.\nThe data structure for a top level supernode can be very large,\ntoo large to fit into memory.\nIn a parallel environment, we follow the convention that each node\nin the tree is handled by one process.\nHaving a very large node at the top levels of the tree will\nseverely decrease the parallelism available to the computations.\n\\par\nThe solution to both problems, large data structures and limited\nparallelism, is to split large supernodes into pieces.\nWe can specify a maximum size for the nodes in the tree, and split\nthe large supernode into pieces no larger than this maximum size.\nThis will keep the data structures to a manageable size and increase\nthe available parallelism.  
We call the resulting tree the {\\it front}\ntree because it represents the final computational unit for the\nfactorization, the frontal matrix.\n\\par\nThe amalgamated supernode tree has been transformed so that except for\nthe leaf nodes, which are not changed, no node in the tree has more \nthan four vertices.\nThe top tree in Figure~\\ref{fig:sp-trees}\nshows the vertex elimination tree with the ``front'' number of each\nvertex superimposed on the vertex.\nThe bottom tree is the amalgamated and split supernode tree.\nFigure~\\ref{fig:sp-mtx} shows the block partition superimposed on\nthe structure of the factor $L$.\nSplitting large nodes into smaller nodes will not increase the\nfactor storage or operation counts, in fact, as we shall soon see,\nit is possible to decrease them slightly when compared to the\namalgamated tree before splitting.\n\\par\n\\begin{figure}[htbp]\n\\caption{Left: tree after the large supernodes have been split.\nRight: tree with nodes mapped back to their amalgamated supernode.}\n\\label{fig:sp-trees}\n\\begin{center}\n\\mbox{\n\\psfig{file=spvtree.eps,width=5.00in,height=5.000in}\n}\n\\quad\n\\mbox{\n\\psfig{file=sptree.eps,width=3.25in,height=2.21in}\n}\n\\end{center}\n\\end{figure}\n\\par\n\\begin{figure}[htbp]\n\\caption{Block structure of $L$ with the amalgamated and split\n         supernode partition.}\n\\label{fig:sp-mtx}\n\\begin{center}\n\\mbox{\n\\psfig{file=spmtx.eps,width=5.0in,height=5.00in}\n}\n\\end{center}\n\\end{figure}\n\\par\nThe code fragment to split the large fronts is found below.\n\\begin{verbatim}\nETree   *spetree ;\nint     maxsize, seed ;\n\nmaxsize = 4 ;\nspetree = ETree_splitFronts(ametree, NULL, maxsize, seed) ;\n\\end{verbatim}\nThis method creates and returns an {\\tt ETree} object where each front\nhas {\\tt maxsize} or fewer internal rows and columns, except for the\nfronts that are leaves in the tree.\nHere we imposed the condition that no non-leaf front has more than four\nvertices.\nThe second parameter in the calling sequence is non-{\\tt NULL} \nif the graph has nonunit vertex weights.\nThe last parameter is a seed for a random number generator.\nWhen we identify a front with more than {\\tt maxsize} internal rows and\ncolumns, there are many ways to split the front into smaller fronts.\nWe try to keep the sizes of the fronts roughly equal, but which vertices\nto play into which fronts is not specified.\nWe shuffle the vertices using a random number generator and assign\nvertices to smaller fronts in a block manner.\n\\par\n\\subsection{Results}\n\\label{subsection:tree-results}\n\\par\nThis front tree is now the defining\nstructure for the numerical factorization and solve steps.  \nThe structure of the front tree defines the order of the computations \nthat will be carried out in the factorization and the solve.  
\nThe composition of the front tree can have a profound effect on storage\nand performance of the factorization and solves.\n\\par\nOur {\\tt R2D100} matrix was small enough to illustrate the steps in the\ntransformation of the front tree, but is not large enough to \nrealistically display how the front tree influences the differences \nin storage and speed of the computations.\nNow we look at the {\\tt R3D13824} matrix.\nTable~\\ref{table:R3D13824-tree-stats} contains some statistics for a\nsequence of front trees.\nThe original front tree came from our nested dissection ordering.\n\\par\n\\begin{table}[htbp]\n\\label{table:R3D13824-tree-stats}\n\\caption{R3D13824: front tree transformations}\n\\begin{center}\n\\begin{tabular}{l|rrrrr}\n& CPU & \\# fronts & \\# indices & \\# entries & \\# operations \\\\ \\hline\n original  &        &  6001 & 326858 & 3459359 & 1981403337 \\\\\n fs tree   & 0.040  &  6000 & 326103 & 3459359 & 1981403337 \\\\\n merge one & 0.032  &  3477 & 158834 & 3497139 & 2000297117 \\\\\n merge all & 0.020  &   748 &  95306 & 3690546 & 2021347776 \\\\\n merge any & 0.012  &   597 &  85366 & 3753241 & 2035158539 \\\\\n split     & 0.043  &   643 & 115139 & 3753241 & 2035158539 \\\\\n final     & 0.423  &   643 & 115128 & 3752694 & 2034396840\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\par\nThere are 13824 rows and columns in the matrix, and 6001 fronts in the\nnested dissection tree.\nWhile there is an average of two rows and columns per front,\nmost of the fronts are singleton fronts at the lower levels of the tree.\nThe top level front has 750 internal rows and columns.\n\\par\n\\begin{itemize}\n\\item\nIn the first step we create an fundamental supernode tree with a call to\n{\\tt ETree\\_mergeFrontsOne()} with {\\tt maxzeros = 0}.\nWe see that the number of fronts decreases by one and the number of\nentries does not change.\n\\item\nThe second step is also a call to\n{\\tt ETree\\_mergeFrontsOne()}, this time with {\\tt maxzeros = 1000}.\nHere we merge fronts with only one child with that child,\nin other words, only chains of nodes can merge together.\nNote how the number of fronts is decreased by almost one half, \nand the number of factor entries and operations increase by 1\\%.\n\\item\nThe third step is a call to {\\tt ETree\\_mergeFrontsAll()}\nwith {\\tt maxzeros = 1000}, where we try\nto merge a node with all of its children if possible.\nThe number of fronts decreases again by a factor of five, while the\nnumber of factor entries and operations increases by 7\\% and 2\\%,\nrespectively, when compared with the original factor matrices.\n\\item\nThe fourth step \nis a call to {\\tt ETree\\_mergeFrontsAny()}\nwith {\\tt maxzeros = 1000}, where we try\nto merge a front with any subset of its children.\nThe number of fronts decreases further, and the factor entries and\noperations increase by 8\\% and 3\\%, respectively.\n\\item\nIn the fifth step \nis a call to {\\tt ETree\\_splitFronts()}\nwith {\\tt maxsize = 64}, where we try\nsplit the large fronts into smaller fronts.\nNote that the number of factor entries and operations do not seem to\nincrease, while the number of fronts increases by about 8\\%.\nIn reality, a large front that is split into smaller fronts may have a\nnon-dense block column structure, a one of its smaller fronts may have \nrows in its block column of $L$ that are zero, whereas that same row in\nthe larger front is nonzero.\n\\end{itemize}\nMerging fronts and splitting fronts can have a large effect on the\ncomputational performance 
of a factor and solve.\nTable~\\ref{table:R3D13824-comp-stats} contains some results for\nsolving linear systems of equations for the {\\tt R3D13824} matrix\nusing the five different front trees.\n\\par\n\\begin{table}[htbp]\n\\label{table:R3D13824-comp-stats}\n\\caption{R3D13824: \n         factor and solve timings for five different front trees.}\n\\begin{center}\n\\begin{tabular}{l|ccccccc}\n& & \\multicolumn{2}{c}{factor} & & \\multicolumn{2}{c}{solve} & total \\\\\n& init & CPU & mflops & postprocess & CPU & mflops & CPU \\\\ \\hline\n original  & 4.0 & 131.7 & 15.0 & 5.0 & 7.3 & ~7.6 & 148.0 \\\\\n fs tree   & 3.3 & 130.4 & 15.2 & 5.4 & 7.8 & ~7.1 & 146.9 \\\\\n merge one & 3.1 & 119.9 & 16.7 & 2.7 & 4.6 & 12.1 & 130.3 \\\\\n merge all & 3.0 & 120.7 & 16.7 & 1.4 & 3.6 & 16.2 & 128.7 \\\\\n merge any & 3.0 & 121.6 & 16.7 & 1.4 & 3.5 & 16.9 & 129.5 \\\\\n split     & 3.0 & ~84.9 & 24.0 & 1.9 & 3.5 & 17.1 & ~93.3 \n\\end{tabular}\n\\end{center}\n\\end{table}\n\\par\nThe first thing to notice is that factorization performance improves\nslightly as small fronts are merged together.\nThe large improvement comes when the fronts are split.\nThe explanation of this behavior is that all {\\it inter-front}\ncomputation is done using BLAS3 kernels for the operation\n$Y := Y - L * D * U$, where $L$ and $U$ are dense matrices,\n$D$ is diagonal or block diagonal with $1 \\times 1$ and $2 \\times 2$\npivots, and $Y$ is dense.\nThe {\\it intra-front} computations, done entirely within the block\ncolumns of $L$ and block rows of $U$, are done using BLAS1 kernels.\nThis is necessary when pivoting for stability.\nHad we chosen to write BLAS3 kernels for the intra-front\ncomputations when pivoting is not enabled, the factorization timings for\nthe first five front trees would have been higher.\nBut splitting fronts into smaller fronts is necessary for parallel\ncomputations, so it made sense to make it the recommended route for\nserial computations as well.\nThere would be very little difference in speed had the intra-front\ncomputations been done with BLAS3 kernels compared with using the \nfinal front tree, for the intra-front computations are a small fraction\nof the total number of operations.\n\\par\nThe solve time improves dramatically when small fronts are merged\ntogether into larger fronts.\nOur solves are submatrix algorithms, where the fundamental kernel is an\noperation \n$Y_J := B_J - L_{J,I} X_I$\nand\n$X_J := Y_J - U_{I,J} Y_J$,\nand is designed to be a BLAS2 kernel (when $X$ and $Y$ have a single\ncolumn) or BLAS3 kernel (when $X$ and $Y$ are matrices).\nWhen fronts are small, particularly with one internal row and column,\nthe submatrices that take part are very small.\nThe overhead for the computations takes far more time than the\ncomputations themselves.\n\\par\nThis multistep process of merging, merging again, etc, and finally \nsplitting the front trees is tedious.\nThere are simple methods that do the process in one step.\n\\begin{verbatim}\nETree   *etree, *etree2, *etree3 ;\nint     maxfrontsize, maxzeros, seed ;\n\netree2 = ETree_transform(etree, NULL, maxzeros, maxfrontsize, seed) ;\netree3 = ETree_transform2(etree, NULL, maxzeros, maxfrontsize, seed) ;\n\\end{verbatim}\nInside The {\\tt ETree\\_transform()} method \nis a sequence of four transformations:\n\\begin{itemize}\n\\item\nMerge small fronts into larger fronts\nusing the {\\tt ETree\\_mergeFrontsOne()} method.\n\\item\nThen merge small fronts into larger fronts\nusing the {\\tt ETree\\_mergeFrontsAll()} 
method.\n\\item\nThen merge small fronts into larger fronts\nusing the {\\tt ETree\\_mergeFrontsAny()} method.\n\\item\nThen merge a large front into a chain of smaller fronts\nusing the {\\tt ETree\\_splitFronts()} method.\n\\end{itemize}\nThe {\\tt ETree\\_transform2()} method differs from the \n{\\tt ETree\\_transform()} method in that it omits the\nsetp with {\\tt ETree\\_mergeFrontsAny()}.\nEither method will be suitable in most cases.\n\\par\nHowever, there are some times one method is to be preferred over\nthe other.\nIf we look again at the vertex elimination tree\nin Figure~\\ref{fig:R2D100-tree-vtx}, we see the top level separator\nwith nodes $\\{90,\\cdots,99\\}$, and the two second level separators\nwith nodes $\\{45,\\cdots,47\\}$ and $\\{87,\\cdots,89\\}$.\nIf one looks at their block columns in Figure~\\ref{fig:R2D100-fs-mtx}\nwe see that either of the two second level separators could be merged\nwith the top level separator without introducing any zero entries\ninto the factor.\nUsing the {\\tt ETree\\_mergeFrontsAny()} method could merge the top\nlevel separator with one of its two children, and produce an\nimbalanced tree, not as well suited for parallel computation had\nthe two separators not been merged.\n\\par\nIn a parallel environment, it is much more efficient to not merge \nthe top level separator with either of its second level separators.\nThe transformation methods in {\\bf SPOOLES 1.0} created front trees that\nwere not as efficient for parallel processing, precisely because of\nthe use of the ``merge-with-any'' step.\nThis led us to write three separate merging methods to replace the\nsingle method from the 1.0 release, and thus give us the ability to\navoid the trees unsuitable for parallel computation.\n\\par\nThe values of {\\tt maxzeros} and {\\tt maxsize} will have a fair\namount of influence on the efficiency of the factor and solves.\nThis is illustrated in Table~\\ref{table:R3D13824-maxzero-maxsize}\nfor the {\\tt R3D13824} matrix and a number of different combinations \nof {\\tt maxzeros} and {\\tt maxsize}.\n\\par\n\\begin{table}[htbp]\n\\label{table:R3D13824-maxzero-maxsize}\n\\caption{R3D13824: the influence of {\\tt maxzeros} and {\\tt maxsize}.}\n\\begin{center}\n\\begin{tabular}{cc|ccccccc}\n& & & \\multicolumn{2}{c}{factor} & & \\multicolumn{2}{c}{solve} & total\\\\\n{\\tt maxzeros} & {\\tt maxsize} \n   & init & CPU & mflops & postprocess & CPU & mflops & CPU \\\\ \\hline\n$~~~~~0$ & $\\infty$ & 3.3 & 129.8 & 15.3 & 5.3 & 7.8 & ~7.1 & 146.2 \\\\\n$~~~~10$ & $\\infty$ & 3.5 & 129.2 & 15.3 & 3.3 & 5.3 & 10.5 & 141.3 \\\\\n$~~~100$ & $\\infty$ & 3.0 & 119.3 & 16.7 & 2.0 & 3.9 & 14.4 & 128.2 \\\\\n$~~1000$ & $\\infty$ & 3.0 & 121.8 & 16.7 & 1.4 & 3.5 & 17.0 & 129.7 \\\\\n$~10000$ & $\\infty$ & 3.5 & 138.1 & 16.8 & 1.5 & 4.0 & 17.8 & 147.1 \\\\\n$~~1000$ &    32    & 3.3 & ~89.8 & 22.7 & 2.6 & 4.1 & 14.7 & ~99.8 \\\\\n$~~1000$ &    48    & 3.1 & ~85.8 & 23.7 & 2.1 & 3.6 & 16.5 & ~94.6 \\\\\n$~~1000$ &    64    & 3.1 & ~85.2 & 23.9 & 1.9 & 3.5 & 17.1 & ~93.7 \\\\\n$~~1000$ &    80    & 3.0 & ~85.9 & 23.7 & 1.8 & 3.4 & 17.4 & ~94.1 \\\\\n$~~1000$ &    96    & 3.0 & ~86.1 & 23.6 & 1.8 & 3.4 & 17.6 & ~94.3 \\\\\n$~~~~~0$ &    32    & 3.3 & 100.3 & 19.8 & 6.6 & 7.9 & ~7.0 & 118.1 \\\\\n$~~~~~0$ &    48    & 3.2 & ~97.0 & 20.4 & 6.1 & 7.6 & ~7.3 & 113.9 \\\\\n$~~~~~0$ &    64    & 3.2 & ~95.8 & 20.7 & 5.4 & 6.9 & ~8.0 & 111.3 \\\\\n$~~~~~0$ &    80    & 3.2 & ~96.7 & 20.5 & 5.5 & 7.1 & ~7.8 & 112.5 \\\\\n$~~~~~0$ &    96    & 3.2 & ~97.2 & 20.4 & 5.2 & 
6.7 & ~8.3 & 112.3 \n\\end{tabular}\n\\end{center}\n\\end{table}\n\\par\n\\par\nAs the matrix size grows, the number of zero entries we can allow\ninto a front can also grow.\nWe recommend using {\\tt maxzeros} somewhere between \n{\\tt 0.01*neqns} and {\\tt 0.1*neqns}.\nThe exact value isn't crucial, what is important is to have the\nsmaller subtrees at the lower levels of the tree merged together.\nThe {\\tt maxsize} parameter specifies the ``panel'' size of the\nlarge fronts, and so influences the granularity of the BLAS3\ncomputations in the factorization and solve.\nIf {\\tt maxsize} is too large, then too much of the computations\nin the factorization is done inside a front, which uses a slow\nkernel.\nIf {\\tt maxsize} is too small, then the fronts are too small to get\nmuch computational efficiency.\nWe recommend using a value between {\\tt 32} and {\\tt 96}.\nLuckily, the factor and solve times are fairly flat within this\nrange.\nA value of {\\tt 64} is what we customarily use.\n", "meta": {"hexsha": "a9e31ba81a188f9ecc2674ced34058abba90f4b8", "size": 23674, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/fronttrees.tex", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/fronttrees.tex", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/fronttrees.tex", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "avg_line_length": 40.8172413793, "max_line_length": 72, "alphanum_fraction": 0.7433893723, "num_tokens": 6895, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.787931190663057, "lm_q1q2_score": 0.5760238563043818}}
{"text": "\\documentclass[]{article}\n\\usepackage{amssymb,amsmath}\n\\usepackage{hyperref}\n\\usepackage{graphicx}\n\\usepackage{natbib}\n\\bibliographystyle{plainnat}\n\n\\date{}\n\\title{MgSi Paper Supplement}\n\n\\begin{document}\n\t\n\t\\maketitle\n\t\\section{Methods}\\label{methods}\n\t\n\t\n\t\\subsection{Equilibrium Reactions}\\label{equilibrium-reactions}\n\tWe cast the equilibrium reaction equations in terms of total moles for ease of computation. We assume that all activities are equal to one in our formation, which gives\n\t\\begin{align}\n\tK_{1} &= \\frac{a_{Mg}a_{O}}{ a_{MgO}} = \\frac{X_{Mg} X_O }{X_{MgO}} = \\frac{M_{Mg}M_{O}M_m}{M_{MgO}M_c^2}\\\\\n\tK_{2} &= \\frac{a_{Si}a_{O}^2}{ a_{SiO_2}} = \\frac{X_{Si} X_O^2 }{X_{SiO_2}} = \\frac{M_{Si}M_{O}^2M_m}{M_{SiO_2}M_c^3}\\\\\n\tK_{3} &= \\frac{a_{Fe}a_{O}}{ a_{FeO}} = \\frac{X_{Fe} X_O }{X_{FeO}} = \\frac{M_{Fe}M_{O}M_m}{M_{FeO}M_c^2}\\\\\n\tK_{4} &= \\frac{a_{MgO}a_{SiO_2}}{ a_{MgSiO_3}} = \\frac{X_{MgO} X_{SiO_2} }{X_{MgSiO_3}} = \\frac{M_{MgO}M_{SiO_2}}{M_{MgSiO_3}M_m}\u200b\\\\\n\tK_{5} &= \\frac{a_{FeO}a_{SiO_2}}{ a_{FeSiO_3}} = \\frac{X_{FeO} X_{SiO_2} }{X_{FeSiO_3}} = \\frac{M_{FeO}M_{SiO_2}}{M_{FeSiO_3}M_m}\n\t\\end{align}\n\twhere $M_c$ and $M_m$ are the total moles in the core and mantle interaction layer. \n\t\n\tThen, to compute the change in equilibrium, we take the derivative of each equation with respect to temperature, which gives\n\t\\begin{align}\n\t\\frac{(\\partial_T K_1)}{K_1} &= \\frac{(\\partial_T M_{Mg})}{M_{Mg}} + \\frac{(\\partial_T M_{O})}{M_{O}} + \\frac{(\\partial_T M_{m})}{M_{m}} - \\frac{2(\\partial_T M_{c})}{M_{c}} - \\frac{(\\partial_T M_{MgO})}{M_{MgO}}\n\t\\\\\n\t\\frac{1}{K_2}\\partial_T K_2 &= \\frac{1}{M_{Si}}\\partial_T M_{Si} +\n\t\\frac{2}{M_{O}}\\partial_T M_{O} + \\frac{1}{M_{m}}\\partial_T M_{m} -\n\t\\frac{3}{M_{c}}\\partial_T M_{c} -\n\t\\frac{1}{M_{SiO_2}}\\partial_T M_{SiO_2} \\\\\n\t\\frac{1}{K_3}\\partial_T K_3 &= \\frac{1}{M_{Fe}}\\partial_T M_{Fe} +\\frac{1}{M_{O}}\\partial_T M_{O} +\\frac{1}{M_{m}}\\partial_T M_{m}-\\frac{2}{M_{c}}\\partial_T M_{c} - \\frac{1}{M_{FeO}}\\partial_T M_{FeO}\\\\\n\t\\frac{1}{K_4}\\partial_T K_4 &= \\frac{1}{M_{MgO}}\\partial_T M_{MgO} +\\frac{1}{M_{SiO_2}}\\partial_T M_{SiO_2} -\\frac{1}{M_{MgSiO_3}}\\partial_T M_{MgSiO_3} -\\frac{1}{M_{m}}\\partial_T M_{m} \\\\\n\t\\frac{1}{K_5}\\partial_T K_5 &= \\frac{1}{M_{FeO}}\\partial_T M_{FeO} +\n\t\\frac{1}{M_{SiO_2}}\\partial_T M_{SiO_2} -\n\t\\frac{1}{M_{FeSiO_3}}\\partial_T M_{FeSiO_3} -\n\t\\frac{1}{M_{m}}\\partial_T M_{m} \\, .\n\t\\end{align}\n\tWe have analytical expressions for how the equilibrium constants $K_i$ vary with temperature, so the independent variables in this system consist of the nine $M_i$ species and two total values $M_c$ and $M_m$. We wish to solve this system to obtain a set of first-order coupled ODEs. Therefore, with eleven variables and five equations, we need six more constraints. 
Four constraints are provided by maintaining the total number of moles of each atomic species between the core, interaction layer, and exchange with the background mantle:\n\t\\begin{align}\n\t0 &= \\partial_T M_{Mg} + \\partial_T M_{MgO} + \\partial_T M_{MgSiO_3} + \\left(\\partial_T M_{MgO}\\right)_{erosion} + \\left(\\partial_T M_{MgSiO_3}\\right)_{erosion} \\\\\n\t0 &= \\partial_T M_{Si} + \\partial_T M_{SiO_2} + \\partial_T M_{MgSiO_3} + \\partial_T M_{FeSiO_3} + \\left(\\partial_T M_{SiO_2}\\right)_{erosion} +\\left(\\partial_T M_{MgSiO_3}\\right)_{erosion} +\\left(\\partial_T M_{FeSiO_3}\\right)_{erosion}\\\\\n\t0 &= \\partial_T M_{Fe} + \\partial_T M_{FeO} + \\partial_T M_{FeSiO_3} + \\left(\\partial_T M_{FeO}\\right)_{erosion} + \\left(\\partial_T M_{FeSiO_3}\\right)_{erosion}\\\\\n\t0 &= \\partial_T M_{O}+\\partial_T M_{MgO}+ \\partial_T M_{FeO} +2\\partial_T M_{SiO_2} + 3\\partial_T M_{MgSiO_3} + 3\\partial_T M_{FeSiO_3} +\\left(\\partial_T M_{MgO}\\right)_{erosion} +\\left(\\partial_T M_{FeO}\\right)_{erosion} +2\\left(\\partial_T M_{SiO2}\\right)_{erosion} +3\\left(\\partial_T M_{MgSiO_3}\\right)_{erosion} +3\\left(\\partial_T M_{FeSiO_3}\\right)_{erosion} \\,.\n\t\\end{align}\n\tThen, the final two constraints are provided by ensuring mass continuity in the core and mantle\n\t\\begin{align}\n\t0 &=\\partial_T M_{c}+ \\partial_T M_{Mg}+ \\partial_T M_{Fe} +\\partial_T M_{Si} + \\partial_T M_{O}\\\\\n\t0 &= \\partial_T M_{m}+ \\partial_T M_{MgO}+ \\partial_T M_{FeO} +\\partial_T M_{SiO_2} + \\partial_T M_{MgSiO_3} + \\partial_T M_{FeSiO_3} +\\left(\\partial_T M_{MgO}\\right)_{erosion} +\\left(\\partial_T M_{FeO}\\right)_{erosion} +\\left(\\partial_T M_{SiO2}\\right)_{erosion} +\\left(\\partial_T M_{MgSiO_3}\\right)_{erosion} +\\left(\\partial_T M_{FeSiO_3}\\right)_{erosion} \\,.\n\t\\end{align}\n\n\t\\subsection{Interaction Layer Erosion}\n\tAs detailed in the main text, this layer is removed and replaced with fresh background mantle on a timescale $\\tau$ governed by mantle convection. To model this, we create an empirical function that pushes the mantle interaction layer composition towards the background mantle composition on a timescale governed by $\\tau$:\n\t\\begin{equation}\n\t\\left(\\partial_t M_i\\right)_{erosion}=\\frac{sgn(M_{i,b}-M_i)}{\\tau}[(|M_i / M_{i,b}-1|+1)^2-1] \\,.\n\t\\end{equation}\n\tIn the absence of continued exsolution, this expression returns the layer composition to the background mantle composition on the timescale $\\tau$ of mantle convection. For incorporation into the above equations, the derivative with respect to time must be converted to a derivative with respect to temperature\n\t\\begin{equation}\n\t\\left(\\partial_T M_i\\right)_{erosion} = \\frac{\\left(\\partial_t M_i\\right)_{erosion}}{ \\tfrac{dT_c}{dt}} \n\t\\end{equation}\n\twhere $\\tfrac{dT_c}{dt}$ is simply the change in temperature at the CMB.\n\t\n\t\\subsection{Equilibrium Constants}\n\tWe use data from \\citet{Badro2016}, and \\citet{Hirose2017} to compute equilibrium constants in our model. For eq (1-3) we compute equilibrium constants using the form\n\t\\begin{equation}\\label{eq:K}\n\tK_i(T) = 10^{a+b/T} ,\\qquad \\partial_T K_i(T) = \\tfrac{b}{T} 10^{a+b/T}\n\t\\end{equation}\n\tTUSHAR TODO\n\tWe find that computing the full expression given in \\citet{Hirose2017} introduces numerical instabilities in our model due to the changing silicon and oxygen content in our core. 
Therefore, we choose to use the functional form \\eqref{eq:K} while modifying the reported fit values to more accurately fit the reported experimental outcomes. \n\t\n\t\\subsection{First-Order ODE System} \\label{solution}\n\t\n\tWith total moles in the mantle \\(M_m\\) and core \\(M_c\\), There are\n\televen components \\(M_i\\) in the system and eleven equations. This\n\tsystem is solved to obtain a system of first-order nonlinear ODEs of the\n\tform \n\t$$\\partial_T M_i = f(M_i, K_j, \\partial_T K_j)$$\n\twhere \\(M_i\\) represents each molar species in the mantle and core and \\(K_j\\) is each equilibrium constant. Due to the highly non-linear nature of this set of equations, each equation has several thousand terms. Further simplification of the equations may be possible to reduce the number of terms, but there is no significant performance penalty when computing the system so it was not considered necessary. \n\t\n\t\\section{Coupled Thermo-Chemical Evolution} \\label{coupled-thermo-chemical-evolution}\n\t\n\tThe chemical reactions are incorporated into the core-mantle thermal\n\tevolution model. With two temperatures \\(T_{CMB}\\) and \\(T_{UM}\\), and\n\tnine molar species, this creates and eleven-component state vector\n\t\\(\\mathbf{x}\\). At each timestep, the mantle uses \\(T_{CMB}\\) and\n\t\\(T_{UM}\\) to compute a heat flux at the surface and core-mantle\n\tboundary and uses this to compute \\(\\partial_t T_{UM}\\). Then, the\n\tsystem of chemical reaction equations uses the current \\(T_{CMB}\\) and\n\tmolar concentrations \\(M_i\\) to compute \\(\\partial_T M_i\\). The core\n\tuses \\(Q_{CMB}\\) from the mantle and \\(\\partial_T M_i\\) from exsolution\n\tof light elements to compute \\(\\partial_t T_{CMB}\\). Finally, this\n\tchange in temperature is used to compute the change in molar\n\tconcentrations with time by\n\t\\(\\partial_t M_i = \\partial_T M_i \\partial_t T_{CMB}\\). This allows\n\tfor the computation of \\(\\partial_t \\mathbf{x}\\) at any state.\n\t\n\tWe compute the heat release with change in CMB temperature using\n\t$$\\tilde{Q}_{g,i} \\partial_t T_c = Q_{g,i}$$ \n\twhere \\(\\tilde{Q}_{g,i}\\) depends upon the change in wt\\% of the light element with temperature (see Nimmo 2015 eq. 73). This is easily computed given \\(\\partial_T M_i\\). Then, we include these in addition to the inner-core and secular cooling terms to obtain\n\t$$\\tilde{Q}_T = \\tilde{Q}_s+\\tilde{Q}_g+\\tilde{Q}_L +\\tilde{Q}_{g,MgO}+\\tilde{Q}_{g,FeO}+\\tilde{Q}_{g,SiO2}+\\tilde{Q}_{L,MgO}+\\tilde{Q}_{L,FeO}+\\tilde{Q}_{L,SiO2}$$\n\twhich is used the the calculation of the core energy budget (see Nimmo\n\t2015 eq. 73). We modify the expression for entropy in the core in a similar manner, including terms for exsolution of light elements\n\t$$\\tilde{E}_T = \\tilde{E}_s+\\tilde{E}_g+\\tilde{E}_L+\\tilde{E}_H +\\tilde{E}_{g,MgO}+\\tilde{E}_{g,FeO}+\\tilde{E}_{g,SiO2}$$\n\t(see Nimmo 2015, eq. 72). Note that this expression does not include contributions from latent heat release from exsolution of light elements because heat produced at the CMB does not contribute to thermal convection.\n\t\n\tAs noted in the main text, we do not allow light elements to diffuse into the core because they would form a thin stratified layer and fail to mix into the bulk core. We implement this by requiring that \\(\\partial_t M_i \\le 0\\) for i=Mg,Si,O. 
We do allow Fe to diffuse into the core, but this happens only rarely as a result of changing mantle composition due to mantle overturn in our models and must be matched by exsolution of light elements. Oxygen cannot diffuse into the core and therefore the O concentration cannot increase, which requires that any FeO entering the core be matched by MgO or SiO2 pulling Oxygen out at the same or faster rate.\n\t\n\t\t\\bibliography{citations}\n\\end{document}\n", "meta": {"hexsha": "a293f03c39dad6325395808e814a8688c57c8098", "size": 9723, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Paper Draft/supplement.tex", "max_stars_repo_name": "nknezek/MgSi-Exsolution", "max_stars_repo_head_hexsha": "925d3e2f47256a4618fd77c39f23e233d3e9bdcd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Paper Draft/supplement.tex", "max_issues_repo_name": "nknezek/MgSi-Exsolution", "max_issues_repo_head_hexsha": "925d3e2f47256a4618fd77c39f23e233d3e9bdcd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Paper Draft/supplement.tex", "max_forks_repo_name": "nknezek/MgSi-Exsolution", "max_forks_repo_head_hexsha": "925d3e2f47256a4618fd77c39f23e233d3e9bdcd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.2894736842, "max_line_length": 652, "alphanum_fraction": 0.7248791525, "num_tokens": 3368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950907764118, "lm_q2_score": 0.6825737408694988, "lm_q1q2_score": 0.5760206290126607}}
{"text": "\\Lecture{Jayalal Sarma}{Oct 5, 2020}{14}{Mobius Inversion and applications}{K Sampreeth Prem}{$\\alpha$}{JS}\n\n\\section{Introduction}\nThe focus on this lecture will be if we have two funtions $f$ and $g$ such that g can be expressed as a function of $f$ by a peculiar relation then how can we express $f$ in terms of $g$. We have already seen one such inversion during the discussion of stronger version of PIE where we have two fuctions mapping from sets to numbers. In this lecture we will look at a different setup where we have two functions $f:\\{\\mathbb{N} \\} \\to \\{\\mathbb{N}\\}$ and $g:\\{\\mathbb{N} \\} \\to \\{\\mathbb{N} \\}$ and if $g$ can be expressed in a peculiar way w.r.t $f$ then we want to invert to express $f$ in terms of $g$.This is called \\textbf{M\u00f6bius Inversion}.\nBefore we formally state the theorem and prove it, we will look at two functions namely Euler's function and M\u00f6bius function and derive some of their properties.We will then establish a relation between these two functions to prove the M\u00f6bius Inversion and then look at an application of the theorem. \n\n\\section{Euler's Function} \\label{sec:Euler's Function}\nEuler's function, also known as phi-function $\\Phi(n)$, counts the number of integers between 1 and n inclusive, which are coprime to n. \n$$\n\\Phi(n) = |\\{k \\in \\mathbb{N} | 1\\leq k \\leq n, gcd(n,k)=1\\}| \\label{fun:euler's}\n$$ \nWe have already derived the formula for $\\Phi(n)$ in \\ref{subsec:euler's function application} which is \n\\begin{align}\n\\Phi(n) = n\\prod_{i=1}^{k}(1-\\frac{1}{p_i}) \\textrm{ ~~~where~~  $n = \\prod_{i=1}^{k}(p_i)^{a_i} $}\\label{for:Euler's formula} \n\\end{align}\n\\subsection{Properties of $\\Phi(n)$} \\label{subsec:Properties of Euler function}\n\n\\begin{property} \nIf $gcd(n1,n2)=1 \\implies \\Phi(n_1)\\Phi(n_2) = \\Phi(n_1n_2) $  \n\\end{property} \n\n\\begin{proof}\nWe have derived this in \\ref{subsec:euler's function application}\n\\end{proof}\n                \n\\begin{property}\n$ \\sum_{d|n}^{} \\Phi(d) = n $\n\\end{property}\n\n\\begin{proof}\nFix $d$ as a divisor of $n$. Consider the following sets $ A = \\{x\\in\\{1,2,..n\\} | gcd(x,n)=d\\}$ and $B = \\{y\\in [\\frac{n}{d}] | gcd(y,\\frac{n}{d})=1\\}$. Now we will establish a bijection between these two sets. The cardinality of the second set is $\\Phi(\\frac{n}{d})$ (by definition of $\\Phi$) but instead of counting explicitly we will establish a bijection between A and B. Consider the mapping $x \\mapsto \\frac{x}{d}$ ,this mapping is well defined because $gcd(x,n) = d$ so $d|x$ and $\\frac{x}{d}$ is an integer whose value is at most $\\frac{n}{d}$ so it is in the range of B.The mapping is injective because division is injective. Now take any element from $1$ to $\\frac{n}{d}$, multiply it with $d$ then we will get a number $p$ such that $gcd(p,n) = d$ because $d$ is the divisor of both $n$ and $p$. So the mapping is also surjective and hence a bijection. 
\nObserve that the property can be rewritten as, \n$$\n\\sum_{k = \\frac{n}{d}}\\Phi(\\frac{n}{k})\n$$ \nFrom the bijection established,$$\n\\sum_{d}|\\{x\\in\\{1,2,..n\\}|gcd(x,n)=d\\}| = \\sum_{k}\\Phi(\\frac{n}{k})\n$$\nNotice that the summation on L.H.S will be equal to $n$ that's because every number $l \\in [1,n]$ will be counted exactly once in the summation when the variable of summation $d=gcd(x,n)$.\\\\\nSo, \n\\begin{align*}\nn =  \\sum_{k}\\Phi(\\frac{n}{k})\\\\\n\\sum_{k}\\Phi(\\frac{n}{k}) = n\n\\end{align*}\nHence, the proof.\n\\end{proof} \n\n\\section{M\u00f6bius Function} \\label{sec:M\u00f6bius Function}\nFor any positive integer $n$, M\u00f6bius Function $\\mu(n)$ is defined as, \n53\n \n\t\\[\n\t\\mu(n) = \\begin{cases} \n\t+1 , & ~if~ ~n~ ~is~ ~a~ ~\\href{https://en.wikipedia.org/wiki/Square-free_integer}{square-free} ~positive~ ~integer~ ~with~ ~an~ ~even~ ~number~ ~of~ ~prime~ ~factors~~\\\\\n \t-1 , & ~if~ ~n~ ~is~ ~a~ ~square-free~ ~positive~ ~integer~ ~with~ ~an~ ~odd~ ~number~ ~of~ ~prime~ ~factors~~\\\\\n\t0 , & otherwise\n\t\\end{cases}\n\t\\]\n\\noindent\nFor example,$15=3*5$ has even number of prime factors which are square-free, hence $\\mu(15)=1$. Similarly, $30=2*3*5$ and $20={2^2}*5$ has $\\mu$ values equal to $-1$ and $0$. \\\\\nNow, consider the factors of $30$ and their $\\mu$ values\n\\begin{center}\n\\begin{tabular}{c c c c c c c c c }\n Factors & 30 & 15 & 10 & 6 & 5 & 3 & 2 & 1 \\\\ \n $\\mu$ & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 \\\\  \n\\end{tabular}\n\\end{center}\n\\noindent\nWe Observe the sum of $\\mu$ values of the factors of $30$ are equal to $0$. This isn't a coincidence and we will formally define this property and prove it.\\\\\n\n\\begin{property} \\label{prp:2}\n$\\sum_{d|n}\\mu(d) = \\begin{cases} \\label{subsec: Properties of M\u00f6bius Function }\n\t1 , & ~if~ n=1\\\\\n\t0 , & otherwise\n\t\\end{cases}\n\t$\n\\end{property}\n\n\\begin{proof}\n\n$$\\mu(1) = 1 ~~(~by~ ~definition~)$$\nTake $n= \\prod_{i=1}^{k}(p_i)^{a_i}$, we will consider only those factors of $n$ whose prime factors have multiplicity $1$ other factors of $n$ which have prime factors multiplicity greater than 1 have $\\mu$ value $0$ and does not contribute to the sum.     \n\\begin{align}\n\\sum_{d|n}\\mu(d) &= \\sum_{d|p_1p_2...p_k}\\mu(d) \\label{pf:pt1} \\\\\n&= \\sum_{I\\subseteq[k]}{(-1)}^I \\label{pf:pt2} \\\\ \n&= \\sum_{i=0}^{k} {k \\choose i}{(-1)}^i\\nonumber\\\\\n&= 0 \\nonumber\n\\end{align}\nNotice that in $\\ref{pf:pt1}$ the $\\mu$ value depends only on whether we are picking odd number of primes or an even number and not on actual primes themselves so we could rewrite the summation to\n$\\ref{pf:pt2}$.\n\\end{proof}\n\\noindent\nNow, we will see a corollary of the above propery that connects the two functions $\\mu$ and $\\Phi$. 
\\\\ \n\\begin{corollary} \\label{subsec:relation between euler function and mobius function}\n$\\frac{\\Phi(n)}{n} = \\sum_{d|n}\\frac{\\mu(d)}{d} $\n\\begin{proof}\nFrom \\ref{subsec:euler's function application}\n\\begin{align}\n    \\Phi(n) &= n\\prod_{i=1}^{k}(1-\\frac{1}{p_i}) \\label{cor:pt1}\\\\\n    &= n\\sum_{I \\subseteq [k]} (-1)^{|I|}\\frac{1}{(\\prod_{i \\in I}p_i)}\\label{cor:pt2}\\\\\n    &= n\\sum_{d|p_1...p_k}\\frac{\\mu(d)}{d} \\label{cor:pt3} \\\\\n    &= n\\sum_{d|n}\\frac{\\mu(d)}{d} \\label{cor:pt4}\n\\end{align}\nIn step \\ref{cor:pt2}, $I\\subseteq[k]$ can be thought of as those prime indices that are picked which contribute to $d$ so the ${(\\prod_{i \\in I}p_i)}$ becomes equal to $d$  and the numerator $(-1)^{|I|}$ is exactly $\\mu(d)$. The generalisation of step \\ref{cor:pt3} to \\ref{cor:pt4} is done following the observation that if $d|n$ but $d\\nmid\\prod p_1..p_k$  then $d$ must have a prime factor whose multiplicity is greater than 1. Therefore, $\\mu(d)$ becomes $0$ for such $d$'s.\n\\end{proof}\n\\end{corollary}\n\n\\section{M\u00f6bius Inversion} \\label{sec:M\u00f6bius Inversion}\n$f,g:\\mathbb{N} \\to \\mathbb{R} ~satisfying,$ $\\forall n ~g(n) = \\sum_{d|n} f(d)  ~~then,$  \n$$\\forall n ~f(n) = \\sum_{d|n} \\mu(d)d(\\frac{n}{d})$$\n\n\\begin{proof}\nConsider,\n\\begin{align}\n    \\sum_{d|n}\\mu(d)g(\\frac{n}{d}) &= \\sum_{k=\\frac{n}{d}}\\mu(\\frac{n}{d})g(d)  \\label{pf:pt1}\\\\\n    &= \\sum_{d|n}\\mu(\\frac{n}{d})g(d) \\label{pf:pt2} \\\\\n    &=\\sum_{d|n}\\mu(\\frac{n}{d})(\\sum_{d'|d}f(d')\\label{pf:pt5} \\\\\n    &=\\sum_{d'|n}C_{d'}f(d')  \\label{pf:pt3}\\\\\n    &=f(n) \\label{pf:pt4}\n\\end{align}\nAt \\ref{pf:pt3} $C_{d'}$ represents the constant that tells us how many times $f(d')$ is going to appear in the summation. Note that, if $d|n$ then $(\\frac{n}{d})|n$ so using a change of variable we were able to reach \\ref{pf:pt2} from \\ref{pf:pt2}. \\\\\nNow, we have two tasks, first calculate the value of $C_{d'}$ and next to show the summation value of \\ref{pf:pt3} is $f(n)$.\\\\\n\\item {\\textbf{Plan:} Evaluate $(C_{d'})$ for differnet $d'$}\n\\item \\underline{\\textbf{Case1:} }\\text{~~If $d'=n$ then, $f(n)$ occurs only once in the summation \\ref{pf:pt5}}. So, $C_n=1.$ \\\\\n\n\\item \\underline{\\textbf{Case2:}} $~~d'<n ~~and~~ d'|n.$\n\\begin{align} \n    C_{d'} &= \\sum_{d'|d|n} \\mu(\\frac{n}{d}) &\\label{cs2:p1}\\\\\n    &= \\sum_{d|n}\\mu(\\frac{n}{d}) &\\label{cs2:p2}\\\\\n    &= \\sum_{d|n}\\mu(d) &\\label{cs2:p3}\\\\\n    &= 0 \\nonumber\n\\end{align}\nIn step  $d \\neq 1$ so by Property 14.3.1 (\\ref{prp:2}) we get the value 0.\nObserve that in step \\ref{cs2:p1} the summation value doesn't depend on $d'$ so by change of variable we reach step \\ref{cs2:p2}.Similarly, we reached step \\ref{cs2:p3}  from \\ref{cs2:p2} using change of variable that we have seen earlier.\\\\\nSince we have calculated the value of $C_{d'}$ for different $d'$s  we will now show the value of summation in \\ref{pf:pt3} is $f(n)$.\n\\begin{align}\n\\sum_{d'|n}C_{d'}f(d') &= C_nf(n)~~+~~\\sum_{(d'~<~n)|n}C_{d'}f(d')\\nonumber\\\\\n&= 1*f(n) + 0 \\nonumber\\\\\n&= f(n) \\nonumber\n\\end{align}\nHence, proved. 
\n\\end{proof}\n\\subsection{Application of M\u00f6bius Inversion} \\label{Application of M\u00f6bius Inversion}\n\\textbf{Problem Statement:} Count the number of circular sequences/strings of $0$'s and $1$'s of length $n$.\\\\\n\\textbf{Solution:} We know that there are $2^n$ possible strings of $0$'s and $1$'s but some of them are clockwise rotations of each other and we consider them equal.So the circular sequences which are clockwise rotations of one another are equivalent. \\\\\nConsider the following example of circular sequences of $0$'s and $1$'s of length 9. In order to represent a circular string in a normal form we fix a starting point and iterate it until it's length.    \\\\\nA = 001011010\\\\\nB = 110100010\\\\\nC = 100100100\\\\\nBy our definition of equivalence $A \\equiv B \\not\\equiv C$. \\\\\nObserve that sequence C is periodic, it has a repeating subsequence 100 whereas sequences A and B do not contain any such repeating sub sequence.We call them aperiodic. In general a sequence is said to be aperiodic if it cannot be written as several times of a shorter sequence. \\\\\nFor every circular sequence there is a unique aperiodic circular sequence that we can associate with.\\\\\nInstead of proving this claim we will write it as an example \\\\\nFor $n=6$,\\\\\n$\n000000  \\Rightarrow  0 \\\\\n001001  \\Rightarrow 001 \\\\\n111111  \\Rightarrow  1 \\\\\n010010  \\Rightarrow 010 \\\\\n001001  \\Rightarrow 001 \\\\\n$\nObserve that the last two examples are equivalent and their circular subsequences are also equivalent. So their is an unique association (bijection) of circular sequences and their circular subsequences. \\\\\nAnother obvious observation is that the length of the circular subsequence must divide the original sequence. \\\\\nLet, \n\\begin{align}\nM(d) ~=~ ~no.~ ~of~ ~periodic~ ~circular~ ~sequences~ ~of~ ~length~ ~d~ \\nonumber\n\\end{align}  \nDenote, $N_n$ as the number of circular sequences of length n, then \\\\ \n\\begin{equation} \\label{eq:circular seq}\n    N_n = \\sum_{d|n}M(d) \n\\end{equation}\nSince we don't know the values of $d$ we are summing up over all values of $d$ that divides $n$. \\\\\nNow we need to compute $M(d)$ as a function of $d$.\n\\textbf{Aim:} Compute $M(d)$ as a function of $d$.\n\\textbf{Solution:} Consider,the query how many aperiodic sequences of length d. Observe, that we have removed circular so equivalent ones are counted different. So, for every aperiodic circular sequence of length $d$, every shift of it is counted different hence, there are $dM(d)$ aperiodic sequences of length d. \\\\\nWe know that there are $2^n$ sequences(not circular) of length $n$ each one uniquely corresponds to an aperiodic sequence.\\\\\nHence, \n\\begin{align}\n    2^n &= \\sum_{d|n}dM(d) \n\\end{align}\n\nNow, this is of the form $g(x) = \\sum_{d|n}f(d)$ where, $g(x) = 2^n$ and $f(d)=d.M(d)$. 
\\\\\nUsing M\u00f6bius Inversion, \n\n\\begin{align}\n    f(n) &= nM(n) \\nonumber \\\\\n    &= \\sum_{d|n}\\mu(d)g(\\frac{n}{d}) \\nonumber\\\\\n    &= \\sum_{d|n}\\mu(d)2^{\\frac{n}{d}}  \\nonumber\\\\\nFrom~ \\ref{eq:circular seq},  \\nonumber\\\\\n    N_n &= \\sum_{d|n}M(d)  \\nonumber\\\\\n    &= \\sum_{d|n} \\frac{1}{d}(\\sum_{l|d}\\mu(l)~2^{\\frac{d}{l}}) \\nonumber\\\\\n    &= \\sum_{d|n} \\frac{1}{d}~\\sum_{l|d}\\mu(\\frac{d}{l})~2^l \\nonumber\\\\\n    &= \\sum_{l|n}(\\sum_{l|d|n}\\frac{1}{d}\\mu(\\frac{d}{l})2^l \\nonumber\\\\\n     &= \\sum_{l|n}\\sum_{1|{\\frac{d}{l}}|{\\frac{n}{l}}}\\frac{1}{d}~\\mu(\\frac{d}{l})~2^l\\nonumber \\nonumber\\\\\n    Substituting ~d~=~k~l, \\nonumber \\\\\n     &= \\sum_{l|n}\\sum_{k|{\\frac{n}{l}}}\\frac{2^l}{kl}~\\mu(k)\\nonumber\\\\\n    &= \\sum_{l|n}\\frac{2^l}{l}\\sum_{k|{\\frac{n}{l}}}\\frac{\\mu(k)}{k} \\nonumber\\\\\n    &= \\sum_{l|n}\\frac{2^l}{l}~\\frac{\\Phi(\\frac{n}{l})}{\\frac{n}{l}}\\nonumber\\\\\n    &= \\sum_{l|n}\\frac{2^l~\\Phi(\\frac{n}{l})}{n} \\nonumber\\\\\n    N_n &= \\frac{1}{n}\\sum_{l|n}2^l~\\Phi(\\frac{n}{l}) \\nonumber\n\\end{align}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\Lecture{Jayalal Sarma}{Oct 5, 2020}{15}{Generating Functions}{Simran}{$\\alpha$}{JS}\n\\section{Introduction}\nIn this lecture, we will see a new tool for solving counting problems, namely generating functions method. To get an intuition of this method, let's think about a counting problem with parameter $n$ and say, we are interested in counting the number of objects of size $n$, denoted  by $a_n$. We can view $a_n$ as an infinite sequence of integers $(a_0,a_1,a_2,\\dots)$, where we write down  the first few terms of sequence explicitly . Next we try to match it up with some known integer sequence, say Maclaurin sequence or fabonacci sequence . \\\\\nIf we find some initial matching, say first 4 or 5 terms, then we try for a bijection between matching sequence and the corresponding problem to our setting.\\\\\nThe overall idea of this method is to view the counting problem as infinite sequences and then encode these sequences using the algebraic expression (namely power series) and then do some power series manipulation to recover some meaningful things corresponding to the counting problem we are interested in .\n\\section{Generating Functions}\n\\begin{definition}\\textbf{Generating Functions}\nThe generating function for a sequence of non-negative integers $(a_n)_{n\\in \\mathrm{N}}=(a_0,a_1,a_2,\\dots)$ is given by $$F(x)=\\sum_{n=0}^{\\infty}a_nx^n$$\n\\end{definition}\n\\textbf{Remarks:}\n\\begin{itemize}\n    \\item[1] We are not going to evaluate the power series for any random value of $x$, say $F(100)$, as the series may not even be convergent . \n\\item[2] We think of $x$ in the expression $F(x)$ as a formal variable\n\\end{itemize}\nWe will indirectly use these expression in order to achieve our combinatorial goals without actually worrying about the convergence of the expression.\n\n\\noindent Let's look at an example to get more flavour of the idea introduced.\n\nConsider a sequence with the first few terms as $a_0=1,a_1=42,a_2=23,\\dots$.\nThe series corresponding to it is given by $A(x)=1+42x+23x^2+\\dots$.\n\\\\The advantage with this translation is that we can manipulate sequences in useful ways with simple algebra. 
\n \n\\noindent For the given $A(x)$, let's look at what happens when we multiply it by $x$.\n\nWe have $xA(x)=x+42x^2+23x^3+\\dots$.\nSo we get, $a_0=0, a_2=42, a_3=23, \\dots$ as the coefficient of the series $xA(x)$.\nHence we see that multiplying by $x$ corresponds to shifting the sequence to the right by 1. \n\n\\subsection{Operations on Generating Functions}\n\\noindent Let us look at other algebraic operations that can be easily and more naturally performed on the given generating functions, $F(x)=\\sum_{n=0}^{\\infty}a_nx^n$ and $G(x)=\\sum_{n=0}^{\\infty}b_nx^n$ \n\\begin{itemize}\n    \\item[1] \\textbf{Addition:} $$F(x)+G(x)=\\sum_{n=0}^{\\infty}(a_n+b_n)x^n$$\n    \\item[2] \\textbf{Multiplication by a scalar $\\lambda \\in \\mathbb{R}$} $$\\lambda F(x)=\\sum_{n=0}^{\\infty}\\lambda a_nx^n$$\n    \\item[3] \\textbf{Differentiation } $$\\frac{d}{d(x)}(F(x))= F'(x)=\\sum_{n=1}^{\\infty}(nx^{n-1})a_n$$\n    Substituting $n$ with $n+1$, we get $$F'(x)=\\sum_{n=0}^{\\infty}(n+1)a_{n+1}x^n$$\n   Suppose the sequence corresponding to  $F(x)$ is $(a_0,a_1,a_2,\\dots)$ , then the sequence corresponding to  $F'(x)$ is clearly $(a_1,2a_2,3a_3,\\dots)$\n   \\item[4] \\textbf{Multiplication}\n   $$F(x)G(x)=\\sum_{n=0}^{\\infty}(\\sum_{k=0}^n a_nb_{n-k})x^n$$\n\\end{itemize}\n\\paragraph{Remark:} Note that the coefficients resulting after applying differentiation or multiplication to the generating function may sometimes not always be meaningful, but as we proceed we will look at few examples where the coefficients from multiplication will help us in countings.\n\\subsection{A Quick Example: Maclaurin Series}\nThe Maclaurin series is given by  $1+x+x^2+\\dots$ which corresponds to the sequence $(1,1,1,\\dots)$. The shorthand algebraic expression corresponding to this series is given by  $$F(x)=\\frac{1}{1-x}$$  To see this note that \n $$(1-x)F(x)=F(x)-xF(x)=(1+x+x^2+\\dots)-(x+x^2+x^3+\\dots)=1$$\n  Hence we have a neat shorthand algebraic expression for the Maclaurin Series . We will always think of this expression  as a formal power series and never going to substitute the value for $x$.\nNow, let us demonstrate few of the discussed algebraic operations on the generating function of Maclaurin Series and use it or manipulate it to reach some new generating functions and the corresponding sequence\n\\begin{itemize}\n    \\item Differentiating $F(x)=\\frac{1}{1-x}$ with respect to $x$,\n    $$F'(x)=\\sum_{n=0}^{\\infty}(n+1)a_{n+1}x^n=\\frac{1}{(1-x)^2}$$\n    So, we can say that $F'(x)$ is the generating function for the sequence $b_n=(n+1)=(1,2,3,\\dots)$\n    \\item Substituting $x=-y$ in the generating function $F(x)$ gives us a new generating function $G(y)$ corresponding to the alternating sequence $(1,-1,1,-1,\\dots)$.\\\\ Formally, we have $G(y)=F(-y)=\\frac{1}{1+y}$.\n\\end{itemize}\n\\section{Applying Generating Functions To Counting Problems}\nIn this section we will look at  the ways to use generating functions to solve the counting problems.\n\n\\paragraph{Example 1:} Consider the situation where we need to distribute $n$ votes to $k$ candidates such that every candidate gets atleast one vote. 
\n\n\\noindent \\textbf{Note:}  In this example, we will see the correspondence of the multiplication of two generating functions to our counting problem.\n\n\\noindent Let $a_n^{(k)}$ denote the number of ways of distributing $n$ votes to $k$ candidates with each candidate getting atleast one vote and Let $B^{(k)}=\\sum_{n=0}^{\\infty}a_n^{(k)}x^n$, be the generating function corresponding to $a_n^{(k)}$.\\\\\nLet us look at the case where we have only candidate, i.e $k=1$, we have:\n\\[\n  a_n^{(1)} = \n  \\begin{cases}\n   0, & \\text{for } n=0 \\\\\n    1, & \\text{for }n\\ge 1\n  \\end{cases}\n\\]\nSo, $a_n^{(1)}=(0,1,1,\\dots)$, which is shifted version of Maclaurin Series. So the corresponding generating function is given by $B^{(1)}(x)=x\\frac{1}{1-x}$\n\n\\noindent \\textbf{Observation:} Let there be $s$ male candidates and $t$ female candidates. So $a_n^{(s+t)}$ can be thought as distributing $n$ votes to $s+t$ candidates. Suppose male candidates got $l$ votes and female candidates got $n-l$ votes. Also, each candidate gets 1 vote. So,\n$$a_n^{(s+t)}=\\sum_{l=0}^n a_l^{(s)}a_{n-l}^{(t)}$$\n\nUsing the observation, we get $$B^{(s+t)}(x)= B^{(s)}B^{(t)}$$\nNote that $$a_n^{(s+t)}x^{s+t}=\\left(\\sum_{l=0}^n a_l^{(s)}a_{n-l}^{(t)}\\right)x^{s+t}$$\nSo, the product of two generating function has a meaning in this context, hence we can use it. We are interested in $B^{(k)}(x)$ and we have $B^{(1)}(x)$. Hence, $$B^{(k)}(x)=\\left(B^{(1)}(x)\\right)^k=\\left(\\frac{x}{1-x}\\right)^k$$\nNow, we have a generating function for the count we wanted. Our next step is to recover the count from this generating function.\\\\\n\n\\noindent \\textbf{Aim:} To write down the Generating Function in the form of individual coefficient and then read off $a_n^{(k)}$\\\\\nWe have $B^{(k)}(x)=\\frac{x^k}{(1-x)^k}$\nObserve that, $$\\frac{d^{k-1}}{dx^{k-1}}\\left(\\frac{1}{1-x}\\right)=(k-1)!\\left(\\frac{1}{(1-x)^k}\\right)$$\nUsing the observation, we have \n\n\\begin{align*}\n    B^{(k)}(x) & = \\frac{x^k}{(k-1)!}\\left(\\frac{d^{k-1}}{dx^{k-1}}\\left(\\frac{1}{1-x}\\right)\\right)\\\\\n    & = \\frac{x^k}{(k-1)!}\\left(\\frac{d^{k-1}}{dx^{k-1}}(1+x+x^2+\\dots)\\right)\\\\\n    &= \\frac{x^k}{(k-1)!}\\left(\\sum_{n=k-1}^\\infty n(n-1)\\dots (n=k+2)x^{n-k+1}\\right) \\\\\n    & =\\sum_{n=k-1}^\\infty \\frac{n(n-1)\\dots (n=k+2)}{(k-1)!} x^{n+1}\n\\end{align*}\nUsing the expansion of ${n \\choose k-1}$, we get\n\\begin{align*}\n     B^{(k)}(x) & =\\sum_{n=k-1}^\\infty {n \\choose k-1} x^{n+1}\\\\\n     & =\\sum_{n=k}^\\infty {n-1 \\choose k-1} x^{n}\\\\\n     &= \\sum_{n=0}^\\infty {n-1 \\choose k-1} x^{n}\n\\end{align*}\nHence by reading off the coefficient of $x^n$, we have $a_n^{(k)}={n-1 \\choose k-1}$.\n\n\\paragraph{} To summarise the above example,\n we had a Counting problem, from the counting world we went to generating functions world and then manipulated it to get the power series of the required count and then we recovered $a_n^{(k)}$.\n\n\\paragraph{Example 2:} Suppose  Count the number of non negative solutions to $a+b+c=n$, such that, \\begin{itemize}\n    \\item $a$ is an even integer\n\\item $b$ is a non negative integer\n\\item $c\\in \\{0,1,2\\}$\n\\end{itemize}\nIntuitively, we can think of this as having  $n$ votes and 3 candidates where the first candidate, $a$, should receive an even number of votes and $b,c$ receives at \nmost 2 votes.\\\\\nThe idea here is to use generating functions to split the problem nicely.\\\\\nLet us consider three simpler settings ,\n\\begin{itemize}\n    
\paragraph{Example 2:} Count the number of non-negative integer solutions to $a+b+c=n$ such that
\begin{itemize}
    \item $a$ is an even integer
\item $b$ is a non-negative integer
\item $c\in \{0,1,2\}$
\end{itemize}
Intuitively, we can think of this as having $n$ votes and 3 candidates, where the first candidate $a$ should receive an even number of votes, $b$ may receive any number of votes, and $c$ receives at most 2 votes.\\
The idea here is to use generating functions to split the problem nicely.\\
Let us consider three simpler settings:
\begin{itemize}
    \item[1] Suppose we have only one candidate, $a$.\\ So our equation reduces to $a=n$, and $a$ should get an even number of votes. So we have:
\begin{itemize}
        \item if $n$ is odd, there is no solution
        \item if $n$ is even, there is exactly one solution
    \end{itemize}
    Hence the corresponding sequence for this case looks like $(1,0,1,0,\dots)$ and the generating function is given by
    $$A(x)=\sum_{n=0}^\infty x^{2n}=\sum_{n=0}^\infty (x^2)^n=\frac{1}{1-x^2}$$
    \item[2] Suppose we have $b$ as the only candidate.\\ So our equation reduces to $b=n$, and $b$ should get a non-negative number of votes. So, whatever the value of $n$, there exists a unique solution to the equation $b=n$.\\ Hence the corresponding sequence for this case looks like $(1,1,1,\dots)$ and the generating function is given by
    $$B(x)=\sum_{n=0}^\infty x^n=\frac{1}{1-x}$$
    \item[3] Suppose we have $c$ as the only candidate.\\ So our equation reduces to $c=n$ with $c\in \{0,1,2\}$. Clearly,
    \begin{itemize}
        \item there is exactly one solution for $n \in \{0,1,2\}$
        \item there are no solutions for $n>2$.
    \end{itemize} Hence the generating function for this case is given by $$C(x)=1+x+x^2$$
\end{itemize}
\textbf{Observation:} We are looking for non-negative solutions of $a+b+c=n$ satisfying the three conditions discussed above. In the product $A(x)B(x)C(x)$, a term $x^a$ from $A(x)$ (where $a$ is even), a term $x^b$ from $B(x)$ (where $b$ is a non-negative integer) and a term $x^c$ from $C(x)$ (where $c\in\{0,1,2\}$) together contribute $x^{a+b+c}$. Hence the coefficient of $x^n$ in the product counts exactly the admissible solutions of $a+b+c=n$.

\noindent Using this, the generating function of the count we are interested in is given by multiplying the generating functions of the cases discussed. Hence we have $$F(x)=A(x)B(x)C(x)=\frac{1+x+x^2}{(1-x^2)(1-x)}$$
Now we want to write $F(x)$ as a sum of terms built from the well-known Maclaurin series, to simplify our work.\\
Since $(1-x^2)(1-x)=(1+x)(1-x)^2$, the method of partial fractions gives
\begin{align}
    \frac{1+x+x^2}{(1-x^2)(1-x)}=\frac{R}{1+x}+\frac{S}{1-x}+\frac{T}{(1-x)^2}
\end{align}
Multiplying both sides by $(1-x^2)(1-x)$, we get
\begin{align*}
    1+x+x^2 &=R(1-x)^2+S(1-x)(1+x)+T(1+x)\\
    &= (R+S+T)+(T-2R)x+(R-S)x^2
\end{align*}
Equating the coefficients and solving, we get
$$R=1/4,\quad S=-3/4,\quad T=3/2$$
Finally, substituting the values of $R,S$ and $T$, we have

$$F(x)=\frac{1}{4}\left(\sum_{n=0}^\infty (-1)^nx^n\right)-\frac{3}{4}\left(\sum_{n=0}^\infty x^n\right)+\frac{3}{2}\left(\sum_{n=0}^\infty (n+1)x^n\right)$$
Reading off the coefficients of $x^n$, the number of solutions of $a+b+c=n$ satisfying the specified properties is given by $$ a_n=\frac{(-1)^n}{4}-\frac{3}{4}+\frac{3(n+1)}{2}$$
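\noindent As a quick sanity check (ours), take $n=2$: the formula gives
$$a_2=\frac{(-1)^2}{4}-\frac{3}{4}+\frac{3(2+1)}{2}=\frac{1}{4}-\frac{3}{4}+\frac{9}{2}=4,$$
which matches the four admissible triples $(a,b,c)$: $(0,0,2)$, $(0,1,1)$, $(0,2,0)$ and $(2,0,0)$.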
\section{Catalan numbers using generating functions}
 We have already come across Catalan numbers in the Week 3 lectures and seen that $$C_n=\frac{1}{n+1}{2n \choose n }$$
In this section we will derive this count using generating functions. Here we interpret the Catalan number $C_n$ as the number of ways of forming a well formed parenthesised expression with $n$ pairs of brackets. In this example we will also see the notion of a recurrence relation.
\paragraph{Recurrence Relation for Catalan numbers}
Basically, a recurrence relation for a sequence $(a_0,a_1,a_2,\dots,a_n,\dots)$ expresses $a_n$ in terms of the previous $a_i$'s.\\
Let $(a_0,a_1,a_2,\dots)$ denote the sequence for the given counting problem, where we use $a_n$ to denote the number of ways of forming a well formed parenthesised expression with $n$ pairs of brackets.\\
To count $a_n$, we build an expression in the following steps:
\begin{itemize}
    \item[1] Place one pair of brackets $($ and $)$, namely the pair that opens at the first position
    \item[2] Place $k$ pairs of brackets inside this placed pair
    \item[3] Place the remaining $n-k-1$ pairs of brackets adjacent to (after) the placed pair
\end{itemize}
Every well formed expression decomposes uniquely in this way, so conditioning on $k$ we can write
$$a_n=\sum_{k=0}^{n-1}a_k a_{n-k-1}$$
So the generating function for $a_n$ is
$$F(x)=\sum_{n=0}^\infty a_n x^n=a_0+\sum_{n=1}^\infty a_n x^n$$
Here $a_0$ is the number of well formed parenthesised expressions with 0 pairs of brackets, of which there is exactly one (the empty expression).
\\ So, $$F(x)=1+\sum_{n=1}^\infty a_n x^n$$
Substituting for $a_n$ using the recurrence relation obtained, and then reindexing $n \to n+1$, we get
\begin{align*}
    F(x) & = 1+\sum_{n=1}^\infty\left(\sum_{k=0}^{n-1} a_k a_{n-k-1}\right)x^n\\
    &= 1+x\sum_{n=0}^\infty\left(\sum_{k=0}^{n} a_k a_{n-k}\right)x^n\\
    & =1+x\left(F(x)F(x)\right)
\end{align*}
where the last step uses the multiplication rule for generating functions. So, we get
\begin{align}
    F(x)&=1+x(F(x)^2) \nonumber \\
    & \implies x(F(x))^2-F(x)+1=0
\end{align}

Solving the quadratic equation for $F(x)$ gives $F(x)=\frac{1\pm\sqrt{1-4x}}{2x}$; we take the negative sign, since with the positive sign $F(x)$ would blow up as $x\to 0$, while our formal power series has constant term $F(0)=a_0=1$. Hence
$$F(x)=\frac{1-\sqrt{1-4x}}{2x}$$
Now, to rewrite $F(x)$ in some known power series form, we need to simplify the $\sqrt{1-4x}$ term in the function. For this we need a separate tool: informally, a generalisation of the binomial theorem.

\paragraph{Binomial Theorem:} For every $n \in \mathbb{N}$, we have
$$(1+x)^n=\sum_{k=0}^n {n \choose k}x^k= \sum_{k=0}^\infty {n \choose k}x^k $$
where the second equality holds since ${n \choose k}=0$ for $k>n$.

Now we will see the extended version of the binomial theorem, known as Newton's Binomial Theorem, where we allow any $n \in \mathbb{R}$.
\begin{theorem}[Newton's Binomial Theorem] For all non-zero $n \in \mathbb{R}$, we have $$(1+x)^n= \sum_{k=0}^\infty {n \choose k}x^k$$
Proof omitted.
\end{theorem}

\noindent Here, for any $n \in \mathbb{R}$, $n \ne 0$, and $k \in \mathbb{N}$, we compute the value of ${n \choose k}$ using the expression $${n \choose k}=\frac{n(n-1)\dots (n-k+1)}{k!}$$

\paragraph{Example:} Take $n=-7/2$ and $k=5$, so we have
$${-7/2 \choose 5}=\frac{(-7/2)(-9/2)(-11/2)(-13/2)(-15/2)}{5!}=-\frac{9009}{256}$$

\noindent Now, getting back to the generating function for the Catalan numbers,
\begin{equation} \label{gen fn}
    F(x)=\frac{1-\sqrt{1-4x}}{2x}=\frac{1-(1-4x)^{1/2}}{2x}
\end{equation}
Let us look at a lemma which will help us simplify the square root term.
\begin{lemma} \label{Lemma1}
For $ n \in \mathbb{N}$, we have $${1/2 \choose n}=(-1)^{n+1}{2n-2 \choose n-1}\frac{1}{2^{2n-1}}\cdot \frac{1}{n}$$
\end{lemma}
\begin{proof}
Left as an exercise to the reader. \textit{Hint}: Use induction on $n$.
\end{proof}
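\noindent As a quick sanity check of Lemma \ref{Lemma1} (a verification we add here), take $n=2$:
$${1/2 \choose 2}=\frac{(1/2)(1/2-1)}{2!}=-\frac{1}{8},\qquad (-1)^{3}{2 \choose 1}\frac{1}{2^{3}}\cdot\frac{1}{2}=-\frac{1}{8},$$
so both sides agree.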
\noindent Applying Lemma \ref{Lemma1} to $\sqrt{1+x}=(1+x)^{1/2}$, we have
\begin{align} \label{Simplifying Sqrt}
    \sqrt{1+x} &=\sum_{n=0}^\infty {1/2 \choose n}x^n \nonumber \\
    &= 1+ \sum_{n=1}^\infty (-1)^{n+1}{2n-2 \choose n-1}\frac{1}{2^{2n-1}}\cdot \frac{1}{n}\,x^n
\end{align}

Substituting $-4x$ for $x$ in \ref{Simplifying Sqrt} and using it in \ref{gen fn} (note that $(-1)^n(-4x)^n=(4x)^n=2^{2n}x^n$), we get
\begin{align*}
    F(x) &= \frac{1}{2x}\sum_{n=1}^\infty 2{2n-2 \choose n-1}\frac{(-1)^n(-4x)^n}{2^{2n}n} \\
    & = \frac{1}{x}\sum_{n=1}^\infty {2n-2 \choose n-1}\frac{1}{n}x^n\\
    & = \sum_{n=0}^\infty {2n \choose n}\frac{1}{n+1}x^n
\end{align*}
Hence the generating function for the Catalan numbers is
$$F(x)=\sum_{n=0}^\infty {2n \choose n}\frac{1}{n+1}x^n$$
and the coefficient $a_n={2n \choose n}\frac{1}{n+1}$ is exactly the $n^{\text{th}}$ Catalan number.
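\noindent The recurrence and the closed form can be checked against each other numerically; a minimal Python sketch (ours):
\begin{verbatim}
from math import comb

def catalan_by_recurrence(N):
    # a_0 = 1;  a_n = sum_{k=0}^{n-1} a_k * a_{n-k-1}
    a = [1]
    for n in range(1, N):
        a.append(sum(a[k] * a[n - k - 1] for k in range(n)))
    return a

N = 10
closed_form = [comb(2 * n, n) // (n + 1) for n in range(N)]
assert catalan_by_recurrence(N) == closed_form
print(closed_form)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
\end{verbatim}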
\section{Live Discussion Session, Oct 8}

Consider a standard die with $(1,2,3,4,5,6)$ as the numbers on its faces. Let us look at the sum values that can be obtained when two standard dice are rolled together, and the number of different ways in which the pair of dice can produce each sum value. Note that the minimum sum that can be obtained is $2$ (when $1$ appears on each die) and the maximum sum that a pair of standard dice can yield is $12$ (when $6$ appears on each die).

\begin{center}
\begin{tabular}{c c c c c c}
 Sum Value & 2 & 3 & 4 & \dots & 12   \\
 \# of pairs & 1 & 2 & 3 & \dots & 1  \\
\end{tabular}
\end{center}

Now, let's compute the value $\text{Pr[Sum value} = 8]$. We know that there are $36$ different pairs as outcomes when two standard dice are rolled together. The pairs that result in $8$ are $(2,6),(3,5),(4,4),(5,3),(6,2)$. So we get
$$\text{Pr[Sum value} = 8]=\frac{5}{36}$$
\subsection{Crazy Dice Problem and Generating Functions}
The question of interest here is: can we find a pair of dice, different from the standard ones, with
$(a_1,a_2,a_3,a_4,a_5,a_6)$ and $(b_1,b_2,b_3,b_4,b_5,b_6)$ as their faces, such that throwing the two dice in sequence gives us the same sum distribution as a pair of standard dice?
Let's try to figure it out using the generating functions method.
First, we relate a standard die with $(1,2,3,4,5,6)$ as the numbers on its faces to a sequence, and then from the sequence we can figure out its generating function.
\paragraph{Generating Function for a Standard Die} Let $a_n$ denote the number of ways in which a die, when rolled, can produce the number $n$. For the standard die we know there is a unique way to get each face value $n\in\{1,2,\dots,6\}$. So the infinite sequence representing it is $(0,1,1,1,1,1,1,0,0,\dots)$ and the corresponding generating function is $$F(x)=x+x^2+x^3+x^4+x^5+x^6$$
The generating function for the sum value when two dice are thrown in sequence is given by multiplying the respective generating functions of the two dice.

So, for two standard dice thrown in sequence, the generating function for the sum of the values appearing on the two dice is given by
\begin{align} \label{Gen fnc: sum standard dice}
    f(x)=(x+x^2+x^3+x^4+x^5+x^6)^2
\end{align}
\paragraph{Observation:} For two dice with $(a_1,a_2,a_3,a_4,a_5,a_6)$ and $(b_1,b_2,b_3,b_4,b_5,b_6)$ as their faces that fit the requirement of our question, the product of their generating functions must be equal to \ref{Gen fnc: sum standard dice}.

Let $p(x)=x^{a_1}+x^{a_2}+\dots+x^{a_6}$ and $q(x)=x^{b_1}+x^{b_2}+\dots+x^{b_6}$ denote the generating functions of the two dice respectively.

So, from the observation we have that
\begin{align} \label{Eq1}
    p(x)q(x)=(x+x^2+x^3+x^4+x^5+x^6)^2
\end{align}
Using a few algebraic manipulations we get $$x+x^2+x^3+x^4+x^5+x^6 = x(1+x)(1+x+x^2)(1-x+x^2)$$ Substituting this in \ref{Eq1}, we have
\begin{align} \label{Eq2}
    p(x)q(x)&=(x(1+x)(1+x+x^2)(1-x+x^2))^2\nonumber\\
    & = x^2(1+x)^2(1+x+x^2)^2(1-x+x^2)^2
\end{align}
An additional constraint on $p(x)$ and $q(x)$ is that $p(1)=q(1)=6$: each die has exactly $6$ faces, so the coefficients of each polynomial must sum to $6$. (Each polynomial should also have zero constant term, so that every face carries a positive number.)

Now our task is to distribute the factors of $x^2(1+x)^2(1+x+x^2)^2(1-x+x^2)^2$ between $p(x)$ and $q(x)$ such that \ref{Eq2} and the constraint $p(1)=q(1)=6$ are satisfied.
One such valid distribution of the factors is $p(x)=x(1+x)(1+x+x^2)$
and $q(x)=x(1+x)(1+x+x^2)(1-x+x^2)^2$. Simplifying these gives us
\begin{align}\label{eqn 3}
    p(x)=x + 2x^2 + 2x^3 + x^4
\end{align}
\begin{align}\label{eqn 4}
    q(x)=x + x^3 + x^4 + x^5 + x^6 + x^8
\end{align}
Using \ref{eqn 3}, we get $(a_1,a_2,a_3,a_4,a_5,a_6)=(1,2,2,3,3,4)$ and from \ref{eqn 4}, we get $(b_1,b_2,b_3,b_4,b_5,b_6)=(1,3,4,5,6,8)$.

So we were able to get two dice with faces $(1,2,2,3,3,4)$ and $(1,3,4,5,6,8)$ such that throwing these two dice in sequence gives us the same sum distribution as throwing a pair of standard dice.
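\noindent The claim is quick to verify exhaustively, since there are only $36$ outcomes; a minimal Python sketch (ours):
\begin{verbatim}
from collections import Counter

standard = [1, 2, 3, 4, 5, 6]
die_p    = [1, 2, 2, 3, 3, 4]   # faces read off from p(x)
die_q    = [1, 3, 4, 5, 6, 8]   # faces read off from q(x)

def sum_distribution(d1, d2):
    return Counter(a + b for a in d1 for b in d2)

assert sum_distribution(die_p, die_q) == sum_distribution(standard, standard)
print(sorted(sum_distribution(die_p, die_q).items()))
# [(2, 1), (3, 2), (4, 3), ..., (11, 2), (12, 1)]
\end{verbatim}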
\Lecture{Jayalal Sarma}{Oct 5, 2020}{16}{Recurrence relation for Derangements}{K Sampreeth Prem}{$\alpha$}{JS}

\section{Introduction}
We were looking at the tool called generating functions, and towards the end of the last lecture we saw a recurrence relation for the Catalan numbers. The idea was to express the $n^{\text{th}}$ term of the sequence as a function of the previous terms, use the recurrence together with the generating function to obtain a closed form expression for the generating function, and then use the series expansion to read off the $n^{\text{th}}$ term of the infinite sequence.
In this lecture we will focus on getting recurrence relations for derangements.

\section{Derangements}\label{Derangements}
A derangement is a permutation in which none of the objects appear in their ``natural'' (i.e., ordered) place.
$$
D_n = | \{\sigma ~\in~ S_n ~|~ \forall i ~\sigma(i)\neq i\}|
$$
We have already seen the expression for $D_n$ using P.I.E.\ in \ref{subsec:derangements application},
which is
$$
D_n=\left(\sum_{k=0}^{n} \frac{(-1)^k}{k!}\right)n!
$$
Now we will try to obtain this expression using recurrence relations.
\subsection{Recurrence relations for Derangements}
Before we state the recurrence relations, let's fix the initial conditions.
\\
$D_0 = 1$, because the empty permutation vacuously has no element out of its natural position, and $D_1=0$, since we cannot have a derangement on one element.
\begin{recurrence relation} \label{rel:1}
\begin{align}
    D_n ~=~ (n-1)~(D_{n-1}~+~D_{n-2})
\end{align}
\end{recurrence relation}
\begin{proof}
We will use a double counting argument to prove this.\\
The \textbf{L.H.S.} is the number of derangements on $n$ elements. \\
For the \textbf{R.H.S.}, let $\sigma\in S_n$ denote a derangement, so $\sigma(n)~\neq~n$.\\
Let $\sigma(n)=k$; now there are two cases. \\
\textbf{Case 1:} $\sigma(k)=n$ \\
We can remove $n$ and $k$ from the set $\{1,...,n\}$, and by appropriate renaming the restricted permutation (say $\sigma'$) corresponds to a derangement in $S_{n-2}$. For each fixed $k$, this case contributes $D_{n-2}$ derangements.\\
\textbf{Case 2:} $\sigma(k)\neq n$ \\
Consider the following bijection to the derangements in $S_{n-1}$:\\
swap $\sigma(k)$ and $\sigma(n)$ in the permutation. In the resulting permutation (say $\sigma'$) we have $\sigma'(n)=\sigma(k)\neq n$ and $\sigma'(k)=k$. Because $\sigma'(k)=k$, this is not a derangement on all of $\{1,...,n\}$; but if we remove $k$, then by appropriate renaming $\sigma'$ gives a derangement in $S_{n-1}$. For each fixed $k$, this case contributes $D_{n-1}$ derangements.\\
So from \textbf{Case 1} and \textbf{Case 2} we conclude that for a fixed $k$ there are $(D_{n-1}~+~D_{n-2})$ derangements possible. Since $k$ is arbitrary and there are $n-1$ ways of choosing $k$, we get the number of derangements as $(n-1)~(D_{n-1}~+~D_{n-2})$.
\end{proof}
\begin{recurrence relation} \label{rel:2}
\begin{align}
    D_n ~=~ nD_{n-1}~+~(-1)^n
\end{align}
\end{recurrence relation}
\begin{proof}
From \ref{rel:1},
\begin{align}
     D_n ~&=~ (n-1)~(D_{n-1}~+~D_{n-2}) \nonumber \\
     D_n~-~nD_{n-1}~&=~-(D_{n-1}~-~(n-1)D_{n-2}) \nonumber
     \end{align}
     Substitute $A_n = D_n - nD_{n-1}$:
    \begin{align}
     A_n~&=~-A_{n-1} \nonumber\\
     A_n~&=~(-1)^{n-1}A_1~=~(-1)^n \nonumber\\
     D_n~-~nD_{n-1}~&=~(-1)^n \nonumber
\end{align}
where we used the base case $A_1 = D_1 - D_0 = -1$.
\end{proof}
\begin{recurrence relation} \label{rel:3}
\begin{align}
    D_n ~=~ n! - \sum_{k=0}^{n-1} {n \choose k}~D_k
\end{align}
\end{recurrence relation}
\begin{proof}
Rearranging, the claim is equivalent to
\begin{align}
     n! ~&=~ D_n + \sum_{k=0}^{n-1} {n \choose k}~D_k ~=~ \sum_{k=0}^{n} {n \choose k}~D_k \label{rpf:1}
\end{align}
Now we will argue \ref{rpf:1} using a double counting argument.\\
The \textbf{L.H.S.} is the number of permutations of $n$ elements. \\
For the \textbf{R.H.S.}, observe that each permutation has some points which are in their natural position, i.e. $\sigma(i)~=~i$, and forms a derangement on the remaining elements. Now we condition on the number $k$ of non-fixed points: there are ${n\choose k }$ ways of choosing which $k$ elements are moved (equivalently, which $n-k$ points are fixed), and the moved elements form one of $D_k$ derangements. Notice that we are not over counting across the sum, because a permutation has exactly one number of fixed points.
\end{proof}
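\noindent All three recurrences can be checked against a brute-force count for small $n$; a minimal Python sketch (ours, with a hypothetical helper name):
\begin{verbatim}
from itertools import permutations
from math import comb, factorial

def brute_force_D(n):
    # count permutations of {0,...,n-1} with no fixed point
    return sum(all(s[i] != i for i in range(n))
               for s in permutations(range(n)))

D = [brute_force_D(n) for n in range(8)]   # [1, 0, 1, 2, 9, 44, 265, 1854]
for n in range(2, 8):
    assert D[n] == (n - 1) * (D[n - 1] + D[n - 2])                    # rel. 1
    assert D[n] == n * D[n - 1] + (-1) ** n                           # rel. 2
    assert D[n] == factorial(n) - sum(comb(n, k) * D[k]
                                      for k in range(n))              # rel. 3
\end{verbatim}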
\noindent
We have stated and proved the recurrence relations associated with derangements.
Now we will derive the formula for derangements using these recurrence relations.\\
Consider \ref{rel:2}:
\begin{align}
    D_n ~&=~ nD_{n-1}~+~(-1)^n \nonumber\\
    D_n ~-~ nD_{n-1} ~&=~ (-1)^n \nonumber
    \end{align}
    Dividing throughout by $n!$,
    \begin{align}
    \frac{D_n}{n!} ~-~ \frac{D_{n-1}}{(n-1)!}  ~&=~ \frac{(-1)^n}{n!} \nonumber
    \end{align}
    Substituting $B_n=\frac{D_n}{n!}$,
    \begin{align}
    B_{n}~-~B_{n-1} ~&=~ \frac{(-1)^n}{n!} \nonumber
    \end{align}
    \begin{align}
    B_{n} ~&=~ B_{n-1}~+~\frac{(-1)^n}{n!} \nonumber \\
     B_{n} ~&=~ B_{n-2}~+~\frac{(-1)^{n-1}}{(n-1)!}~+~\frac{(-1)^n}{n!} \nonumber
\end{align}
By repeatedly expanding the terms on the R.H.S. down to $B_0=\frac{D_0}{0!}=1$, we get
\begin{align}
    B_{n} ~&=~ \sum_{k=0}^{n}\frac{(-1)^k}{k!} \nonumber \\
    D_{n} ~&=~ n!\left(\sum_{k=0}^{n} \frac{(-1)^k}{k!}\right) \nonumber
\end{align}
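\noindent As a quick numerical check of this formula (our addition): for $n=4$,
$$D_4 = 4!\left(1-1+\frac{1}{2}-\frac{1}{6}+\frac{1}{24}\right)=24\cdot\frac{9}{24}=9,$$
which matches the brute-force count above. Since the partial sums here are truncations of the series for $e^{-1}$, $D_n$ is the integer nearest to $n!/e$ for every $n\ge 1$.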
\subsection*{MILP (complete for ReLU, NP-complete)}
\begin{tabular}{|l@{~~}l}
\textbf{Objective} & $\min_{x_1, \ldots, x_n} c_1 x_1 + \ldots + c_n x_n$ \\
\textbf{Constraints} & $a_{i,1} x_1 + \ldots + a_{i,n} x_n \le b_i$, $1 \le i \le m$ \\
\textbf{Bounds} & $x_j \in [l_j, u_j]$ or $x_j \in \mathbb{Z}$, $1 \le j \le n$
\end{tabular}
\paragraph{Affine layer} $y=Wx+b$
\paragraph{ReLU layer}
\begin{tabular}[t]{l@{}l@{}r}
$y \le x - l(1-a)$ & \rdelim{\}}{5}{*} & $x \in [l, u]$\\
$y \le u a$ & & $a = 0 \Rightarrow (y=0$ \\
$y \ge x$ & & $\land~ x \in [l, 0])$ \\
$y \ge 0$ & & $a=1 \Rightarrow (y = x$\\
$a \in \{0, 1\}$ & & $\land~ x \in [0, u])$
\end{tabular}
\paragraph{Bounds}
$x_i \in[l_i, u_i]$ (precomputed box bounds for neurons) or $x_i' \in [x_i - \epsilon, x_i + \epsilon]$ (inputs)
\paragraph{Objective} $\min o_\text{label} - o_i$ (verification succeeds iff $o_\text{label} > o_i$)
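\noindent A minimal sketch of the big-$M$ ReLU encoding above, written as plain Python that merely emits the constraints as strings (no particular solver is assumed; the function name is ours):
\begin{verbatim}
def relu_constraints(x, y, a, l, u):
    # MILP encoding of y = max(0, x) for x in [l, u] with l < 0 < u;
    # a is the binary indicator (a = 1  <=>  ReLU active).
    return [f"{y} <= {x} - ({l})*(1 - {a})",
            f"{y} <= ({u})*{a}",
            f"{y} >= {x}",
            f"{y} >= 0",
            f"{a} in {{0, 1}}"]

for c in relu_constraints("x3", "y3", "a3", l=-2.0, u=5.0):
    print(c)
\end{verbatim}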
\section*{Exercise 20.3-4}
\subsection*{What happens if you call vEB-TREE-INSERT with an element that is already in the vEB tree?}

The procedure will terminate without inserting anything:
\\
Lines 1--2 are not possible, since there is at least one element in the tree. We know this since $x$ is a duplicate of an element in the tree.
\\
Lines 3--4 are not possible either, since $x$ is a duplicate of an element in the tree, so the lowest possible value for $x$ is $x=V.min$.
\\
Lines 6--8 are not possible either, so eventually we will hit line 9, where the recursive call brings us to a smaller cluster. This will keep happening until we hit a base-case vEB tree, meaning we no longer fulfill line 5, and we move on to lines 10--11.
\\
Since lines 10--11 are not possible either (again, $x$ is a duplicate of an element in the tree, so the highest possible value for $x$ is $x=V.max$), the procedure terminates without doing anything.

\subsection*{What happens if you call vEB-TREE-DELETE with an element that is not in the vEB tree?}

A wrong value will be deleted:
\\
Lines 1--3 simply check if the tree has only one element, and if so, delete it, regardless of the value of $x$.
\\
Lines 4--8 check if we are in a base-case vEB tree, and if so, update $V.min$ and $V.max$ based on the value of $x$, which is assumed to be either 0 or 1 (the only two possible values in the base-case vEB tree). Since $x$ is none of these values, this update will end up deleting $V.max$ and leaving $V.min$ unchanged.
\\
Lines 9--11 are not possible, since we know that $x$ is not in the tree, and so we will hit line 12, where the recursive call brings us to a smaller cluster. This will keep happening until we hit a base-case vEB tree, which brings us to lines 4--8, which end up deleting $V.max$.
\\
Lines 14--22 are not possible to reach.

\subsection*{Explain why the procedures exhibit the behavior that they do}

Recursive calls will always bring us down to a smaller tree, until we hit a base case.

\subsection*{Show how to modify vEB trees and their operations so that we can check in constant time whether an element is present}

We introduce an extra array of $u$ entries, each of which is a bit.
For each element of the universe, the value of the corresponding entry in the array indicates whether the element is present in the tree (1) or not (0).
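\noindent A minimal sketch of this modification (ours; the wrapped \verb|tree| below stands for any vEB implementation with the usual insert/delete operations):
\begin{verbatim}
class MemberTrackingVEB:
    """Wrap a vEB tree with a bit array of size u: O(1) membership,
    and guards against duplicate inserts / deletes of absent keys."""
    def __init__(self, tree, u):
        self.tree = tree          # a hypothetical vEB tree instance
        self.present = [0] * u

    def member(self, x):          # O(1)
        return self.present[x] == 1

    def insert(self, x):
        if not self.member(x):
            self.tree.insert(x)
            self.present[x] = 1

    def delete(self, x):
        if self.member(x):
            self.tree.delete(x)
            self.present[x] = 0
\end{verbatim}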
\subsection{part c}
For the transfer functions we use the common architecture shown below.
\begin{figure}[H]
    \caption{Architecture}
    \centering
    \includegraphics[width=12cm]{../Figure/Q1/Q1_c/architecture.png}
\end{figure}
\begin{itemize}
    \item $r$ to $y$ (reference)
    $$
    \dfrac{y}{r} = \dfrac{C(s)G(s)}{1+C(s)G(s)}
    $$
    \begin{figure}[H]
        \caption{r to y bode magnitude}
        \centering
        \includegraphics[width=12cm]{../Figure/Q1/Q1_c/bode_r2y.png}
    \end{figure}
    The system has good performance at high frequencies but not at low frequencies.
    \item $du$ to $y$ (input disturbance)
    $$
    \dfrac{y}{du} = \dfrac{G(s)}{1+C(s)G(s)}
    $$
    \begin{figure}[H]
        \caption{du to y bode magnitude}
        \centering
        \includegraphics[width=12cm]{../Figure/Q1/Q1_c/bode_du2y.png}
    \end{figure}
    The system has better performance at high frequencies and fairly good performance at low frequencies.
    \item $dy$ to $y$ (output disturbance)
    $$
    \dfrac{y}{dy} = \dfrac{1}{1+C(s)G(s)}
    $$
    \begin{figure}[H]
        \caption{dy to y bode magnitude}
        \centering
        \includegraphics[width=12cm]{../Figure/Q1/Q1_c/bode_dy2y.png}
    \end{figure}
    The system has good performance at high frequencies but very bad performance at low frequencies.
    \item $n$ to $y$ (noise)
    $$
    \dfrac{y}{n} = \dfrac{-C(s)G(s)}{1+C(s)G(s)}
    $$
    \begin{figure}[H]
        \caption{n to y bode magnitude}
        \centering
        \includegraphics[width=12cm]{../Figure/Q1/Q1_c/bode_n2y.png}
    \end{figure}
    The system has good performance at high frequencies but not at low frequencies.
\end{itemize}
\subsection{Time--discretization of the linear case~(\ref{first-DS3})}
Let us now proceed with the time discretization of~(\ref{eq:toto1}) with FirstOrderLinearR~(\ref{first-DS3}) by a fully implicit scheme:
\begin{equation}
  \begin{array}{l}
    \label{eq:toto1-DS3}
     M x^{\alpha+1}_{k+1} = M x_{k} +h\theta A x^{\alpha+1}_{k+1}+h(1-\theta) A x_k + h \gamma r^{\alpha+1}_{k+1}+ h(1-\gamma)r(t_k)  +hb\\[2mm]
     y^{\alpha+1}_{k+1} =  C x^{\alpha+1}_{k+1} + D \lambda ^{\alpha+1}_{k+1} +Fz +e\\[2mm]
     r^{\alpha+1}_{k+1} = B \lambda ^{\alpha+1}_{k+1} \\[2mm]
  \end{array}
\end{equation}

\[R_{\free} = M(x^{\alpha}_{k+1} - x_{k}) -h\theta A x^{\alpha}_{k+1} - h(1-\theta) A x_k -hb_{k+1} \]
\[R_{\free} = W(x^{\alpha}_{k+1} - x_{k}) -h A x_{k} -hb_{k+1} \]

\paragraph{Resulting Newton step (only one step)}
For the sake of simplicity, let us assume that $\gamma =1$:
\begin{equation}
  \begin{array}{l}
     (M -h\theta A)x^{\alpha+1}_{k+1} = M x_{k} +h(1-\theta) A x_k + hr^{\alpha+1}_{k+1} + hb\\[2mm]
     y^{\alpha+1}_{k+1} =  C x^{\alpha+1}_{k+1} + D \lambda ^{\alpha+1}_{k+1} +Fz + e \\[2mm]
     r^{\alpha+1}_{k+1} = B \lambda ^{\alpha+1}_{k+1}\\[2mm]
  \end{array}
\end{equation}
which leads to, with $(M -h\theta A) = W$,
\begin{equation}
  \begin{array}{l}
     x^{\alpha+1}_{k+1} = W^{-1}(M x_{k} +h(1-\theta) A x_k + hr^{\alpha+1}_{k+1} +hb) = x_{\free} + hW^{-1}(r^{\alpha+1}_{k+1})\\[2mm]
     y^{\alpha+1}_{k+1} =  ( D+hCW^{-1}B) \lambda ^{\alpha+1}_{k+1} +Fz + CW^{-1}(M
     x_k+h(1-\theta)Ax_k + hb) +e \\[2mm]
  \end{array}
\end{equation}
with $x_{\free} = x^{\alpha}_{k+1} + W^{-1}(-R_{\free})= x^{\alpha}_{k+1} - W^{-1}(W(x^{\alpha}_{k+1}
- x_k) -hAx_k-hb_{k+1} )= W^{-1}(Mx_k +h(1-\theta)Ax_k +h b_{k+1})$
\begin{equation}
  \begin{array}{l}
     y^{\alpha+1}_{k+1} =  ( D+hCW^{-1}B) \lambda ^{\alpha+1}_{k+1} +Fz + Cx_{\free}+e\\[2mm]
     r^{\alpha+1}_{k+1} = B \lambda ^{\alpha+1}_{k+1}\\[2mm]
  \end{array}
\end{equation}

\paragraph{Coherence with previous formulation}
\[y_p = y^{\alpha}_{k+1} -\mathcal R^{\alpha}_{yk+1} + C^{\alpha}_{k+1}(x_p -x^{\alpha}_{k+1}) -
D^{\alpha}_{k+1} \lambda^{\alpha}_{k+1} \]
\[y_p = Cx_k + D \lambda _k  + C(\tilde x_{\free}) -D \lambda_k +Fz + e\]
\[y_p = Cx_k   + C(\tilde x_{\free})  +Fz + e\]
\[y_p = C(x_{\free})  +Fz + e\]

%In the case of the system~(\ref{eq:deux}) with an affine function $f$ or $\theta =0$, the MLCP matrix $W$ can be computed before the beginning of the time loop, saving a lot of computing effort.
In the case of the system (\ref{eq:trois}) with $\theta=\gamma=0$, the MLCP matrix $W$ can be computed before the beginning of the Newton loop.
\clearpage


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "DevNotes"
%%% End:
\hypertarget{group__numpp__structures__matrices}{}\section{Matrices}
\label{group__numpp__structures__matrices}\index{Matrices@{Matrices}}


Module containing different matrix implementations.


\subsection*{Modules}
\begin{DoxyCompactItemize}
\item
\hyperlink{group__numpp__structures__matrices__dense}{Dense}
\begin{DoxyCompactList}\small\item\em Module containing a constexpr dense matrix implementation. \end{DoxyCompactList}\item
\hyperlink{group__numpp__structures__matrices__sparse}{Sparse}
\begin{DoxyCompactList}\small\item\em Module containing sparse matrix implementations. \end{DoxyCompactList}\end{DoxyCompactItemize}


\subsection{Detailed Description}
Module containing different matrix implementations.

\begin{DoxyWarning}{Warning}
{\bfseries The compile-\/time implementation of the dense matrix will be inefficient compile-\/wise for bigger matrices, but the runtime is much faster.~\newline
 The sparse vector isn\textquotesingle{}t affected by this way of implementation.}
\end{DoxyWarning}
This is a summary header including every matrix structure provided by numpp.

{\bfseries \begin{DoxyWarning}{Warning}
If you are looking for more compile-\/runtime balanced implementations check Eigen, Armadillo or other libraries
\end{DoxyWarning}

\begin{DoxyCode}
\textcolor{preprocessor}{#include"numpp/structures/matrices.hpp"}
\end{DoxyCode}
 }
%% State Space Modelling of Dynamic Systems
%% Lecture 20: State Feedback Control
\def\FileDate{10/02/02}
\def\FileVersion{1.0}
% ----------------------------------------------------------------
% Notes pages *********************************************************
% ----------------------------------------------------------------

\begin{slide}
   \heading{State Feedback Control}
One of the advantages of state space models is that it is possible to apply state feedback to place the closed-loop poles at any desired positions.

\textbf{State Space Design Methodology}

\begin{enumerate}
	\item Design a control law to place the closed-loop poles where desired
	\item If the full state is not available for feedback, then design an \emph{Observer} to compute the states from the system output
	\item Combine the \emph{Observer} and \emph{Controller} -- this takes the place of the \emph{Classical Compensator}
	\item Introduce the \emph{Reference Input} -- this affects the closed-loop zeros but not the poles, making it possible to improve the transient response and tracking accuracy
\end{enumerate}
\end{slide}


\begin{slide}
   \heading{State Feedback Compensator}
   \begin{center}
   	\resizebox{280pt}{!}{\includegraphics{pictures/statefb.pdf}}
   \end{center}
\end{slide}

\ifslidesonly
\begin{slide}
   \heading{This Lecture}
   \begin{itemize}
   	\item Finding the control law
   	\item State feedback for controller canonical form
   	\item Transfer function model
   	\item Ackermann's formula
   	\item Effect of state feedback on closed-loop zeros
   	\item Effect of plant zeros on the feedback gains
   	\item Pole-selection for good design
   \end{itemize}
\end{slide}
\fi
\section*{Finding the Control Law} % (fold)
\label{sec:finding_the_control_law}

We shall only consider SISO systems here.

\input{frag1}
\ifslidesonly
\begin{slide}
   \heading{Finding the Control Law (1)}
   \input{frag1}
\end{slide}
\fi

\input{frag2}
\ifslidesonly
\begin{slide}
	\heading{Finding the Control Law (2)}
   \input{frag2}
\end{slide}
\fi


\input{frag3}
\ifslidesonly
\begin{slide}
	\heading{Finding the Control Law (3)}
   \input{frag3}
\end{slide}
\fi

\input{frag4}
\ifslidesonly
\begin{slide}
	\heading{Finding the Control Law (4)}
   \input{frag4}
\end{slide}
\fi



\subsection*{Example 1} % (fold)
\label{sub:example_1}

\textbf{Problem}: Given
\[
{\bf{\dot x}} = \left[ {\begin{array}{*{20}c}
   { - 4} & 0  \\
   0 & { - 11}  \\
\end{array}} \right]{\bf{x}} + \left[ {\begin{array}{*{20}c}
   1  \\
   { - 1}  \\
\end{array}} \right]u
\]
find the feedback control law which places the closed-loop poles at $-10\pm j10$.

\textbf{SOLUTION}:
\begin{eqnarray*}
	0 & = & \det \left[ {s{\bf{I}} - {\bf{A}} + {\bf{BK}}} \right] = \det \left\{ \left[ {\begin{array}{*{20}c}
	   {s + 4} & 0  \\
	   0 & {s + 11}  \\
	\end{array}} \right] + \left[ {\begin{array}{*{20}c}
	   1  \\
	   { - 1}  \\
	\end{array}} \right]\left[ {\begin{array}{*{20}c}
	   {k_1 } & {k_2 }  \\
	\end{array}} \right] \right\}
\\
	0 & = & \det \left[ {\begin{array}{*{20}c}
	   {s + 4 + k_1 } & {k_2 }  \\
	   { - k_1 } & {s + 11 - k_2 }  \\
	\end{array}} \right] \\
	0 & = & (s + 4 + k_1 )(s + 11 - k_2 ) - (k_2 )( - k_1 ) \\
	0 & = & (s+4+k_1)(s+11-k_2)+k_1k_2
\end{eqnarray*}
\begin{equation}
	\label{eq:3}
	s^2+(15+k_1-k_2)s+(44+11k_1-4k_2)=0
\end{equation}

Now the desired CE is:
\[
\alpha_c(s)=(s+10-j10)(s+10+j10) = 0
\]
\begin{equation}\label{eq:4}
	s^2+20s+200=0
\end{equation}

Therefore, matching coefficients in Eqs. (\ref{eq:3}) and (\ref{eq:4}):
\[
\begin{array}{c}
 s^2 :1 = 1 \to {\rm{OK}} \\
 s^1 :15 + k_1  - k_2  = 20 \to k_1  - k_2  = 5 \\
 s^0 :44 + 11k_1  - 4k_2  = 200 \to 11k_1  - 4k_2  = 156 \\
 \end{array}
\]

Solving for the $k$'s:
\[
\left[ {\begin{array}{*{20}c}
   1 & { - 1}  \\
   {11} & { - 4}  \\
\end{array}} \right]\left[ {\begin{array}{*{20}c}
   {k_1 }  \\
   {k_2 }  \\
\end{array}} \right] = \left[ {\begin{array}{*{20}c}
   5  \\
   {156}  \\
\end{array}} \right]
\]
\[
\left[ {\begin{array}{*{20}c}
   {k_1 }  \\
   {k_2 }  \\
\end{array}} \right] = \frac{1}{{ - 4 + 11}}\left[ {\begin{array}{*{20}c}
   { - 4} & 1  \\
   { - 11} & 1  \\
\end{array}} \right]\left[ {\begin{array}{*{20}c}
   5  \\
   {156}  \\
\end{array}} \right] = \frac{1}{7}\left[ {\begin{array}{*{20}c}
   {136}  \\
   {101}  \\
\end{array}} \right] = \left[ {\begin{array}{*{20}c}
   {19.429}  \\
   {14.429}  \\
\end{array}} \right]
\]

Therefore the required feedback control law is:
\[
u = r - \left[ {\begin{array}{*{20}c}
   {19.429} & {14.429}  \\
\end{array}} \right]{\bf{x}}
\]

\textbf{COMMENT}
This matching of coefficients can always be done, though it is tedious for $n>3$, \textbf{EXCEPT} in the case of the
\emph{Control Canonical Form}.

% subsection example_1 (end)

% section finding_the_control_law (end)

\section*{State Feedback in the Case of the Control Canonical Form} % (fold)
\label{sec:state_feedback_in_the_case_of_the_control_canonical_form}


\input{frag5}
\ifslidesonly
\begin{slide}
	\heading{Control Canonical Form Simplifies Calculation (1)}
   \input{frag5}
\end{slide}
\fi


\input{frag6}
\input{frag7}
\ifslidesonly
\begin{slide}
	\heading{Control Canonical Form (2)}
   \input{frag6}
\end{slide}
\begin{slide}
	\heading{Control Canonical Form (3)}
   \input{frag7}
\end{slide}
\fi


\input{frag8}
\ifslidesonly
\begin{slide}
	\heading{Control Canonical Form (4)}
   \input{frag8}
\end{slide}
\fi


\subsection*{Example 2} % (fold)
\label{sub:example_2}

\textbf{Problem}: Given the system TF:
\[
G(s) = \frac{7}{(s+4)(s+11)}
\]
find the control law for the control canonical form which places the closed-loop poles at $s=-10\pm j10$.

\textbf{SOLUTION}:
\[
G(s) = \frac{7}{(s+4)(s+11)} = \frac{7}{s^2+15s+44}
\]

The control canonical form has matrices:
\[
{\bf{A}} = \left[ {\begin{array}{*{20}c}
   { - 15} & { - 44}  \\
   1 & 0  \\
\end{array}} \right];\quad {\bf{B}} = \left[ {\begin{array}{*{20}c}
   1  \\
   0  \\
\end{array}} \right];\quad {\bf{C}} = \left[ {\begin{array}{*{20}c}
   0 & 7  \\
\end{array}} \right];\quad {\bf{D}} = 0
\]

\textbf{NB}: $\mathbf{C}$ is obtained from the TF numerator $(0s+7)$.
So:
\[
{\bf{A}} - {\bf{BK}} = \left[ {\begin{array}{*{20}c}
   { - 15 - k_1 } & { - 44 - k_2 }  \\
   1 & 0  \\
\end{array}} \right]
\]
and the closed-loop CE is:
\begin{equation}
	s^2+(15+k_1)s+(44+k_2)=0 \label{eq:7}
\end{equation}
The desired CE is:
\[
\alpha_c(s)=(s+10-j10)(s+10+j10) = 0
\]
\begin{equation}\label{eq:8}
	s^2+20s+200=0
\end{equation}

Comparing Eqs.
(\ref{eq:7}) and (\ref{eq:8}) gives:
\[
15 + k_1  = 20 \to k_1  = 5
\]
and
\[
44 + k_2  = 200 \to k_2  = 156
\]
giving the control law as:
\[
u = r - \left[ {\begin{array}{*{20}c}
   {5} & {156}  \\
\end{array}} \right]{\bf{x}}
\]

% subsection example_2 (end)


% section state_feedback_in_the_case_of_the_control_canonical_form (end)

\section*{A Transfer Function Model of State Feedback} % (fold)
\label{sec:a_transfer_function_model_of_state_feedback}

\input{frag9}
\ifslidesonly
\begin{slide}
	\heading{Transfer Function Model of State Feedback (1)}
   \input{frag9}
\end{slide}
\fi

\input{frag10}
\ifslidesonly
\begin{slide}
	\heading{Transfer Function Model of State Feedback (2)}
   \input{frag10}
\end{slide}
\fi



% section a_transfer_function_model_of_state_feedback (end)

\section*{Ackermann's Formula} % (fold)
\label{sec:ackermann_s_formula}

\subsection*{State Feedback Design for any Form of State Space Model} % (fold)
As stated previously, the derivation of the feedback law is tedious for systems of order $n>3$ except in the case of the controller canonical form. One approach to the problem is to transform the given model to controller canonical form, derive the control law in terms of these states and then transform back to the original system.
Ackermann derived the following formula by this method.
\ifslidesonly
\begin{slide}
   \heading{State Feedback Design for any Form of State Space Model}
\begin{itemize}
	\item As stated previously, the derivation of the feedback law is tedious for systems of order $n>3$ except in the case of the controller canonical form.
	\item One approach to the problem is to transform the given model to controller canonical form, derive the control law in terms of these states and then transform back to the original system.
	\item Ackermann derived the following formula by this method.
\end{itemize}
\end{slide}
\fi

\subsection*{The formula} % (fold)
\label{sub:the_formula}

\input{frag11}
\ifslidesonly
\begin{slide}
	\heading{Ackermann's Formula}
   \input{frag11}
\end{slide}
\fi

\input{frag12}
\ifslidesonly
\begin{slide}
	\heading{Explanation of the Terms}
   \input{frag12}
\end{slide}
\fi

\input{frag13}
\ifslidesonly
\begin{slide}
	\heading{Caveats}
   \input{frag13}
\end{slide}
\fi

\subsection*{Matlab Function}
\input{frag14}
\ifslidesonly
\begin{slide}
	\heading{Matlab Function}
   \input{frag14}
\end{slide}
\fi

% subsection the_formula (end)

\subsection*{Example 3} % (fold)
\label{ssec:example_2}

\textbf{Problem}:
Given:
\[
{\bf{A}} = \left[ {\begin{array}{*{20}c}
   1 & 2  \\
   { - 1} & 1  \\
\end{array}} \right]\quad {\rm{and}}\quad {\bf{B}} = \left[ {\begin{array}{*{20}c}
   1  \\
   { - 2}  \\
\end{array}} \right]
\]
find the feedback vector $\mathbf{K}$ to place the closed-loop poles at $s = -1,\ -1$ using Ackermann's formula.

\textbf{SOLUTION}:
\[
\alpha_c(s) = (s + 1)(s + 1) = s^2 + 2s + 1
\]
therefore
\[
\alpha_c(\mathbf{A}) = \mathbf{A}^2 + 2\mathbf{A} + \mathbf{I}
\]
\[
\alpha _c ({\bf{A}}) = \left[ {\begin{array}{*{20}c}
   { - 1} & 4  \\
   { - 2} & { - 1}  \\
\end{array}} \right] + 2\left[ {\begin{array}{*{20}c}
   1 & 2  \\
   { - 1} & 1  \\
\end{array}} \right] + \left[ {\begin{array}{*{20}c}
   1 & 0  \\
   0 & 1  \\
\end{array}} \right] = \left[ {\begin{array}{*{20}c}
   2 & 8  \\
   { - 4} & 2  \\
\end{array}} \right]
\]

\[
{\bf{AB}} = \left[ {\begin{array}{*{20}c}
   { - 3}  \\
   { - 3}  \\
\end{array}} \right];\quad \mathcal{C} = \left[ {{\bf{B}} \vdots {\bf{AB}}} \right] = \left[ {\begin{array}{*{20}c}
   1 & { - 3}  \\
   { - 2} & { - 3}  \\
\end{array}} \right]
\]

\begin{eqnarray*}
	{\bf{K}} & = & \left[ {\begin{array}{*{20}c}
	   0 &  \cdots  & 0 & 1  \\
	\end{array}} \right]\mathcal{C}^{ - 1} \alpha _c ({\bf{A}}) \\
	& = & \left[ {\begin{array}{*{20}c}
	   0 & 1  \\
	\end{array}} \right]\left[ {\begin{array}{*{20}c}
	   1 & { - 3}  \\
	   { - 2} & { - 3}  \\
	\end{array}} \right]^{ - 1} \left[ {\begin{array}{*{20}c}
	   2 & 8  \\
	   { - 4} & 2  \\
	\end{array}} \right] \\
	& = & \left[ {\begin{array}{*{20}c}
	   0 & 1  \\
	\end{array}} \right]\frac{\left[ {\begin{array}{*{20}c}
	   { -3 } & { 3 }  \\
	   { 2 } & { 1 }  \\
	\end{array}} \right]}{-3-(+6)} \left[ {\begin{array}{*{20}c}
	   2 & 8
\\
	   { - 4} & 2  \\
	\end{array}} \right] \\
	& = & \left[ {\begin{array}{*{20}c}
	   0 & 1  \\
	\end{array}} \right]\frac{1}{-9}\left[ \begin{array}{*{20}c}
	   { -18 } & { -18 }  \\
	   { 0 } & { 18 }  \\
	\end{array} \right] \\
	& = & \left[ {\begin{array}{*{20}c}
	   0 & 1  \\
	\end{array}} \right]\left[ \begin{array}{*{20}c}
	   { 2 } & { 2 }  \\
	   { 0 } & { -2 }  \\
	\end{array} \right] \\
	& = & \left[ {\begin{array}{*{20}c}
	   0 & -2  \\
	\end{array}} \right]
\end{eqnarray*}
% subsection example_3 (end)

\subsection*{Solution in Matlab} % (fold)
\label{sub:solution_in_matlab}
\begin{verbatim}
% Using the formula
A = [1 2; -1 1]; B = [1; -2];
alpha_c = A * A + 2 * A + eye(2);
K = [0 1] * inv(ctrb(A, B)) * alpha_c

% Using the function Acker
P = [-1, -1] % vector of desired pole locations
Ka = acker(A, B, P)
\end{verbatim}
\ifslidesonly
\begin{slide}
   \heading{Solution in Matlab (1)}
\begin{verbatim}
A = [1 2; -1 1]; B = [1; -2];
alpha_c = A * A + 2 * A + eye(2);
K = [0 1] * inv(ctrb(A, B)) * alpha_c
\end{verbatim}
\end{slide}
\begin{slide}
   \heading{Solution in Matlab (2)}
\begin{verbatim}
% Using the function Acker
P = [-1, -1] % vector of desired pole locations
Ka = acker(A, B, P)
\end{verbatim}
\end{slide}
\fi
% subsection solution_in_matlab (end)

% section ackermann_s_formula (end)

\section*{Effect of State Feedback on the Closed Loop Zeros} % (fold)
\label{sec:effect_of_state_feedback_on_the_closed_loop_zeros}

\input{frag15}
\ifslidesonly
\begin{slide}
	\heading{Closed-Loop System}
   \input{frag15}
\end{slide}
\fi



\input{frag16}
\ifslidesonly
\begin{slide}
	\heading{Closed-loop Transfer Function}
   \input{frag16}
\end{slide}
\fi

\input{frag17}
\ifslidesonly
\begin{slide}
	\heading{Effect of State-Feedback on Closed-Loop Zeros}
   \input{frag17}
\end{slide}
\fi


% section effect_of_state_feedback_on_the_closed_loop_zeros (end)

\section*{Effect of Zero Locations on the Feedback Gains} % (fold)
\label{sec:effect_of_zero_locations_on_the_feedback_gains}


\input{frag18}
\ifslidesonly
\begin{slide}
	\heading{Effect of Zero Locations on the Feedback Gains -- Example}
   \input{frag18}
\end{slide}
\fi

\input{frag19}
\ifslidesonly
\begin{slide}
	\heading{Solution}
   \input{frag19}
\end{slide}
\fi

\ifslidesonly
\begin{slide}
   \heading{Comments on Solution}
\[
	k_1 =  \frac{p-p_c}{p-z}
\]
\input{frag20}
\end{slide}
\fi
\input{frag20}

\input{frag21}
\ifslidesonly
\begin{slide}
	\heading{Effect of Zero Locations on the Feedback Gains}
   \input{frag21}
\end{slide}
\fi


% section effect_of_zero_locations_on_the_feedback_gains (end)

\section*{Pole Selection for Good Design} % (fold)
\label{sec:pole_selection_for_good_design}

\input{frag22}
\ifslidesonly
\begin{slide}
	\heading{Pole Selection for Good Design (1)}
   \input{frag22}
\end{slide}
\fi


\input{frag23}
\ifslidesonly
\begin{slide}
	\heading{Pole Selection for Good Design (2)}
   \input{frag23}
\end{slide}
\fi

\subsubsection*{Comments} % (fold)
\label{ssub:comments}

\input{frag24}
\ifslidesonly
\begin{slide}
	\heading{Comments}
   \input{frag24}
\end{slide}
\fi


\input{frag25}
\ifslidesonly
\begin{slide}
	\heading{Optimal State Feedback}
   \input{frag25}
\end{slide}
\fi


% subsubsection comments (end)



% section pole_selection_for_good_design (end)


\ifslidesonly
\begin{slide}
   \heading{Summary of this Lecture}
   \begin{itemize}
   	\item Finding the control law
   	\item State feedback for controller canonical form
   	\item Transfer function model
   	\item Ackermann's formula
   	\item Effect of state feedback on closed-loop zeros
   	\item Effect of plant zeros on the feedback gains
   	\item Pole-selection for good design
   \end{itemize}
\end{slide}
\fi


%----------------------------------------------------------------
% The end of notes
% ----------------------------------------------------------------
\endinput

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
{"text": "\\section{Introduction}%\n\\label{sec:l2hmc_intro}\nWe describe a new technique for performing Hamiltonian Monte-Carlo\n(HMC) simulations called: `Learning to Hamiltonian Monte Carlo'\n(L2HMC)~\\cite{2017arXiv171109268L} which expands upon the traditional HMC by\nusing a generalized version of the leapfrog integrator that is parameterized by\nweights in a neural network.\n%\nHamiltonian Monte-Carlo improves upon the random-walk guess and check strategy\nof generic MCMC by integrating Hamilton's equations along approximate\niso-probability contours of phase space.\n%\nIn doing so, we are able to explore\nthe phase space more efficiently by taking larger steps between proposed\nconfigurations while maintaining high acceptance rates.\n%\nIn order to demonstrate the usefulness of this new approach, we use various\nmetrics for measuring the performance of the trained (L2HMC) sampler vs.\\ the\ngeneric HMC sampler.\n\nFirst, we will look at applying this algorithm to a two-dimensional Gaussian\nMixture Model (GMM).\n%\nThe GMM is a notoriously difficult distribution for HMC due to the vanishingly\nsmall likelihood of the leapfrog integrator traversing the space between the\ntwo modes.\n%\nConversely, we see that through the use of a carefully chosen training\nprocedure, the trained L2HMC sampler is able to successfully discover the\nexistence of both modes, and mixes (`tunnels') between the two with ease. \n%\nAdditionally, we will observe that the trained L2HMC sampler mixes much faster\nthan the generic HMC sampler, as evidenced through their respective\nautocorrelation spectra.\n\nThis ability to reduce autocorrelations is an important metric for measuring\nthe efficiency of a general MCMC algorithm, and is of great importance for\nsimulations in lattice gauge theory and lattice QCD.\\@\n%\nFollowing this, we introduce the two-dimensional $U(1)$ lattice gauge theory\nand describe important modifications to the algorithm that are of particular\nrelevance for lattice models.\n%\nOngoing issues and potential areas for improvement are also discussed,\nparticularly within the context of high-performance computing and long-term\ngoals of the lattice QCD community.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "21bea8d888c328c7ce1a902345b31bfc1b22fd0d", "size": 2224, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/intro/introduction.tex", "max_stars_repo_name": "saforem2/l2hmc-qcd", "max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z", "max_issues_repo_path": "doc/intro/introduction.tex", "max_issues_repo_name": "saforem2/l2hmc-qcd", "max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z", "max_forks_repo_path": "doc/intro/introduction.tex", "max_forks_repo_name": "saforem2/l2hmc-qcd", "max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z", 
"avg_line_length": 46.3333333333, "max_line_length": 79, "alphanum_fraction": 0.7895683453, "num_tokens": 468, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5759025258979041}}
{"text": "%% -*- coding:utf-8 -*-\n\\section{Monoidal object}\n\nMonoid plays very important role in category theory\n\n\\section{Monoid in \\textbf{Set}}\nLets consider \\mynameref{def:monoid} in the terms of Set theory and\nwill try to give the definition that is based rather on morphisms then\non internal set structure. Consider a set $M$ and by the definition\n$\\forall m_1, m_2 \\in M$ we can define a new element of the set\n$\\mu(m_1, m_2) \\in M$. Later we will use the following notation for\nthe $\\mu$:\n\\[\n\\mu(m_1, m_2) \\equiv m_1 \\cdot m_2.\n\\]\nIf the $(M, \\cdot)$ is monoid then the following 2 conditions have to\nbe satisfied. The first one (associativity) declares that $\\forall\nm_1, m_2, m_3 \\in M$ \n\\[\nm_1 \\cdot ( m_2 \\cdot m_3) = ( m_1 \\cdot\nm_2 ) \\cdot m_3.\n\\]\nThe second one (identity presence) says that\n\\(\n\\exists e \\in M\n\\) such that $\\forall m \\in M$:\n\\begin{equation}\nm \\cdot e = e \\cdot m = m.\n\\label{eq:monoid2}\n\\end{equation}\n\nLets start with the first one we can define $\\mu$ as\n\\mynameref{def:morphism} in the following way $\\mu: M\\times M \\to M$\nwhere $M \\times M$ is \\mynameref{ex:set_product} in\n\\mynameref{def:setcategory}. I.e. $M, M \\times M \\in \\catob{Set}$ and\n$\\mu \\in \\cathom{Set}$.  Consider another objects of $\\cat{Set}$: $A =\nM \\times \\left( M \\times M \\right)$ and $A' = \\left( M \\times M \\right)\n\\times M$. They are not the same but there is a trivial\n\\mynameref{def:isomorphism} between them $A \\cong_\\alpha A'$, where\n\\[\n\\alpha(x,(y,z)) = ((x,y),z).\n\\]\nConsider the action of \\mynameref{def:product_of_morphisms} \n$\\idm{M} \\times \\mu$ on $A$:\n\\[\n\\idm{M} \\times \\mu \\left(x,\\left(y,z\\right)\\right) = \n\\left(\\idm{M}(x),\\mu\\left(y,z\\right)\\right) = \n\\left(x, y \\cdot z\\right) \\in M \\times M\n\\]\ni.e. $\\idm{M} \\times \\mu: M \\times \\left( M \\times M \\right) \\to M\n\\times M$. If we act $\\mu$ on the result we will get:\n\\begin{eqnarray}\n\\mu \\left(\\idm{M} \\times \\mu \\left(x,\\left(y,z\\right)\\right)\\right) = \n\\left(\\idm{M}(x),\\mu\\left(y,z\\right)\\right) = \n\\nonumber \\\\\n=\n\\mu\\left(x, y \\cdot z\\right) = x \\cdot (y\\cdot z) \\in M,\n\\nonumber\n\\end{eqnarray}\ni.e. \n$\\mu \\circ \\left(\\idm{M} \\times \\mu\\right): M \\times \\left( M \\times M\n\\right) \\to M$.\n\nFor $A'$ we have the following one:\n\\begin{eqnarray}\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right)\\left(\\left(x,y\\right),z\\right)\n= \\mu\\left(x \\cdot y, z\\right) = (x \\cdot y) \\cdot z.\n\\nonumber\n\\end{eqnarray}\nMonoid associativity requires \n\\[\nx \\cdot (y\\cdot z) = \n(x \\cdot y) \\cdot z\n\\]\ni.e. 
the corresponding morphisms must commute:\n\\[\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right) \\circ \\alpha =\n\\mu \\circ \\left(\\idm{M} \\times \\mu\\right).\n\\]\nThis corresponds to the diagram in \\cref{fig:monoid_mu_alpha}.\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M\\times\\left(M \\times M\\right)$] (M31) at (0,3) {};    \n    \\node[ele,label=above:$\\left(M \\times M\\right)\\times M$] (M32) at (6,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M21) at (0,0) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (6,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M31) to\n    node[sloped,above]{$\\alpha$} (M32);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M31) to\n    node[sloped,below]{$\\idm{M} \\times \\mu$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M32) to\n    node[sloped,below]{$\\mu \\times \\idm{M}$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu\\circ\\left(\\mu \\times\n    \\idm{M}\\right) \\circ \\alpha = \\mu \\circ \\left(\\idm{M} \\times \\mu\\right)$.} \n  \\label{fig:monoid_mu_alpha}\n\\end{figure}\nVery often the isomorphism $\\alpha$ is omitted, i.e. \n\\[\nM\\times\\left(M \\times M\\right)\n= \\left(M \\times M\\right)\\times M = M^3\n\\]\nand the morphism\nequality is written as follows:\n\\[\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right) =\n\\mu \\circ \\left(\\idm{M} \\times \\mu\\right).\n\\]\nThe corresponding commutative diagram is shown in\n\\cref{fig:monoid_mu}.\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M^3$] (M3) at (0,3) {};    \n    \\node[ele,label=above:$M \\times M$] (M21) at (3,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (0,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M3) to\n    node[sloped,below]{$\\idm{M} \\times \\mu$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M3) to\n    node[sloped,above]{$\\mu \\times \\idm{M}$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu\\circ\\left(\\mu \\times\n    \\idm{M}\\right) = \\mu \\circ \\left(\\idm{M} \\times \\mu\\right)$.}\n  \\label{fig:monoid_mu}\n\\end{figure}\n\nFor \\eqref{eq:monoid2} consider a morphism $\\eta$ from\na one-point set $I = \\{0\\}$ to $M$ that picks out a special element $e \\in M$ such that\n$\\forall m \\in M: e \\cdot m = m \\cdot e = m$. I.e. $\\eta: I \\to M$ and\n$e = \\eta(0)$. Consider two sets $B = I \\times M$ and $B' = M \\times I$. 
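As a concrete illustration, for the additive monoid $(\\mathbb{N}, +)$ the identity is $e = 0$,\nso $\\eta : I \\to \\mathbb{N}$ is the map $0 \\mapsto 0$, and the constructions\nbelow simply restate the familiar identities\n\\[\n0 + m = m + 0 = m\n\\]\nthrough morphisms rather than elements.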
\nWe have two \\mynameref{def:isomorphism}s: $B \\cong_\\lambda M$ and $B'\n\\cong_\\rho M$ where the isomorphisms are defined as follows:\n\\[\n\\lambda(0, m) = m\n\\] \nand\n\\[\n\\rho(m, 0) = m.\n\\] \n\nIf we apply the \\mynameref{def:product_of_morphisms} $\\eta \\times \\idm{M}$ and\n$\\idm{M} \\times \\eta$ on $B$ and $B'$ respectively, then we get\n\\begin{eqnarray}\n\\eta \\times \\idm{M} \\left(0 \\times m\\right) = e \\times m,\n\\nonumber \\\\\n\\idm{M} \\times \\eta \\left(m \\times 0\\right) = m \\times e.\n\\nonumber\n\\end{eqnarray}\nIf we apply $\\mu$ on the result then we get\n\\begin{eqnarray}\n\\mu \\left(\\eta \\times \\idm{M} \\left(0 \\times m\\right) \\right) = e \\cdot m,\n\\nonumber \\\\\n\\mu \\left(\\idm{M} \\times \\eta \\left(m \\times 0\\right) \\right) = m \\cdot e.\n\\nonumber\n\\end{eqnarray}\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M$] (M') at (0,3) {};    \n    \\node[ele,label=above:$M \\times M$] (M21) at (3,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (0,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw [double equal sign distance] (M') to (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M') to\n    node[sloped,below]{$(\\eta \\times \\idm{M}) \\circ \\lambda^{-1}$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M') to\n    node[sloped,above]{$(\\idm{M} \\times \\eta) \\circ \\rho^{-1}$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu \\circ (\\eta \\times \\idm{M})\n    \\circ \\lambda^{-1} = \\mu \\circ (\\idm{M} \\times \\eta) \\circ \\rho^{-1} =\n    \\idm{M}$.} \n  \\label{fig:monoid_eta_lambda_rho}\n\\end{figure}\nEquation \\eqref{eq:monoid2} leads to the following equation for morphisms\n\\[\n\\mu \\circ (\\eta \\times \\idm{M}) \\circ \\lambda^{-1} = \n\\mu \\circ (\\idm{M} \\times \\eta) \\circ \\rho^{-1} = \n\\idm{M}\n\\]\nor the commutative diagram shown in \\cref{fig:monoid_eta_lambda_rho}.\n\n\\section{Monoidal object}\nIf we take into consideration that a one-point set is a\n\\mynameref{ex:set_terminal_object} then we can conclude that a\nmonoid can be defined, for instance, in a \\mynameref{def:cartesian_closed_category}\nas follows.\n\\begin{definition}[Monoidal object]\n\\label{def:monoidal_object}\nConsider a \\mynameref{def:category} $\\cat{C}$ with a\n\\mynameref{def:terminal_object} $t \\in \\catob{C}$. 
The object $m \\in\n\\catob{C}$ is called a \\textit{monoidal object} if the following\nconditions are satisfied:\n\\begin{enumerate}\n\\item the \\mynameref{def:product}s $m \\times m, m \\times t, t \\times\n  m$ exist \n\\item there is a \\mynameref{def:morphism} $\\mu: m \\times m \\to m$ in\n  the category\n\\item there is another morphism $\\eta: t \\to m$\n\\item the morphisms satisfy the following conditions:\n\\begin{eqnarray}\n\\mu\\circ\\left(\\mu \\times\n    \\idm{m}\\right) \\circ \\alpha = \\mu \\circ \\left(\\idm{m} \\times \\mu\\right),\n\\nonumber \\\\\n\\mu \\circ (\\eta \\times \\idm{m})\n\\circ \\lambda^{-1} = \\mu \\circ (\\idm{m} \\times \\eta) \\circ \\rho^{-1} =\n\\idm{m}\n\\nonumber\n\\end{eqnarray}\nwhere $\\alpha$ (the associator) is an isomorphism between $m \\times (m\n\\times m)$ and $(m \\times m) \\times m$, and $\\lambda, \\rho$ are two further\nisomorphisms: \n\\[\nt \\times m \\cong_\\lambda m\n\\]\nand\n\\[\nm \\times t \\cong_\\rho m\n\\]\n\\end{enumerate}\n\\end{definition}\n\n", "meta": {"hexsha": "2ca850ec0408ba2a24be9dd2fdfdd32a77075a4f", "size": 8882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cattheory/monoidalobject.tex", "max_stars_repo_name": "ivanmurashko/articles", "max_stars_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-27T08:59:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-27T08:59:55.000Z", "max_issues_repo_path": "cattheory/monoidalobject.tex", "max_issues_repo_name": "ivanmurashko/articles", "max_issues_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cattheory/monoidalobject.tex", "max_forks_repo_name": "ivanmurashko/articles", "max_forks_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.4263565891, "max_line_length": 83, "alphanum_fraction": 0.6354424679, "num_tokens": 3459, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7461389817407016, "lm_q1q2_score": 0.5759025215408285}}
{"text": "%---------------------------Shear and Size---------------------------\n\\section{Shear and Size\\label{s:hex-shear-and-size}}\n\nLet $R$ be the relative size squared as defined in \\S\\ref{s:hex-relative-size-squared}\nand $H$ be the shear as defined in \\S\\ref{s:hex-shear}.\nThe ``shear and size'' metric is the the product of these two numbers:\n\\[\n  q = RH\n\\]\n\n\\hexmetrictable{shear and size}%\n{$1$}%                                        Dimension\n{$[0.2,1]$}%                                  Acceptable range\n{$[0,1]$}%                                    Normal range\n{$[0,1]$}%                                    Full range\n{Dependent on $\\overline{V}$}%                Cube\n{\\cite{knu:03}}%                              Citation\n{v\\_hex\\_shear\\_and\\_size}%                   Verdict function name\n", "meta": {"hexsha": "429e3c77b9aa38588b6dba1b186f98e3080321c7", "size": 796, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShearAndSize.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShearAndSize.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexShearAndSize.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 41.8947368421, "max_line_length": 86, "alphanum_fraction": 0.4648241206, "num_tokens": 195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8840392939666336, "lm_q2_score": 0.6513548782017745, "lm_q1q2_score": 0.5758233066472194}}
{"text": "% !TEX encoding = UTF-8 Unicode\n% !TEX spellcheck = en_US\n% !TEX root = ../../../ICMA2020.tex\n\n\\subsection{Model Reduction Algorithm}\nA parameter identification in a process not optimized for this purpose leads to a significantly worse identifiability compared to an optimized excitation trajectory due to the sub-optimal excitation of the individual parameters.\nIn order to still perform a parameter estimation directly on a process the robot model has to be reduced.\nAccording to \\cite{Brun2001} two factors can be distinguished that lead to poor identifiability and a high condition number: sensitivity problems and collinearity problems. \nSensitivity can differ between parameters so that some parameters affect the output only negligibly while others dominate. Collinearity means that the effect of one parameter is compensated by the interplay of other parameters.\n\nMany model order reduction techniques strive to establish a new basis without multi-collinearities, which is possibly even orthogonal. \nHere these approaches are prohibited by the fact that the interpretability of the base parameters should be preserved in the sense of the linear combination of physical parameters in (\\ref{eq:Parametersatz}). \nA different approach is to delete parameters that are involved in collinearity problems \\cite{Akinniyi2017}, but this could change the prediction of the model considerably if the respective parameter is important. \nTherefore, only the sensitivity problem is addressed here. As the model can be written linear in the parameters, see \\eqref{eq:model_regressor}, it is possible to remove those parameters with a low sensitivity on a per-parameter basis. The parameter sensitivities depend on the excitation but not on the parameter values. For a particular excitation the sensitivity matrix is given by the design matrix $\\boldsymbol{C}$.\nTo characterize the sensitivity of a parameter $i$ with a single value the mean of the respective column $\\overline{c}_i$ of $\\boldsymbol{C}$ over all joints is used:\n%\\begin{equation}\\label{eq:mean_Sensitivity}\n%\t\\overline{c}_i = \\frac{1}{6N} \\sum_{i = 1}^{6N} \\left| \\boldsymbol{c}_i \\right| \\text{ with } \\boldsymbol{C} = (\\boldsymbol{c}_1, \\boldsymbol{c}_2, %\\hdots, \\boldsymbol{c}_i).\n%\\end{equation}\n\\begin{equation}\\label{eq:mean_Sensitivity}\n\t\\overline{c}_i = \\frac{1}{6N} \\| \\boldsymbol{c}_i \\|_1 \\text{ with } \\boldsymbol{C} = (\\boldsymbol{c}_1, \\boldsymbol{c}_2, \\hdots, \\boldsymbol{c}_i).\n\\end{equation}\n\nThe model error that is introduced by removing a particular parameter is evaluated using the error between the measured data $\\boldsymbol{\\tau}$ and the model prediction $\\hat{\\boldsymbol{\\tau}}$. The mean absolute error \n\\begin{equation}\\label{eq:absError}\n\te_{k,j} = \\frac{1}{N}  \\sum\\limits_{p=1}^{N} \\left| \\tau_j(t_p) - \\hat{\\tau}_{k,j}(t_p) \\right|,\n\\end{equation}\nand the relative error\n\\begin{equation}\\label{eq:relError}\n\te^*_{k,j} = \\frac{e_{k,j}}{\\frac{1}{N} \\sum\\limits_{p=1}^{N} \\left| \\tau_j(t_p) \\right|}\n\\end{equation}\nare calculated in every step $k$ for all joints $j$. If removing a parameter would lead to a large model error, this parameter is retained.\n\nThe following is a detailed description of the proposed approach, explained along the pseudo code in algorithm\\,\\ref{alg:ModelReductionAlgorithm}. 
The algorithm begins with calculating the initial design matrix $\\boldsymbol{C}_0$ for the full model (line 1). It is checked if the design matrix is of full rank, so that all parameters are identifiable. Given that the parameter estimation is to be performed directly on a process and not with an optimized trajectory, some parameters may not be identifiable. In such a case the parameters in question must be removed prior to the model reduction.\n\nIf the design matrix is of full rank, a first parameter estimation with the full model needs to be carried out (line 2). Based on that the absolute and relative model errors $e_{0,j}$ and $e^*_{0,j}$ are calculated (line 3). These values will be the reference to determine if a parameter is negligible for the model.\n\n\\begin{algorithm}[ht]\n\\caption{Pseudo code of the model reduction algorithm.}\\label{alg:ModelReductionAlgorithm}\n\t\\begin{algorithmic}[1]\n\t%\\State Load data\n\t%\\State Calculate design matrix $\\boldsymbol{C}$\n\t\\State Examine rank of design matrix $\\boldsymbol{C}_k$ (initially $\\boldsymbol{C}_0$)\n\t\\State Calculate $\\hat{\\boldsymbol{\\theta}}_{0}$ using WLS\n\t\\State Calculate $e_{0,j}$, $e^*_{0,j}$\n\t\\State Initialize $I$ and $E$\n\t\\For {$k =1,2,\\ldots,n_i$}\n\t\t%\\State Construct design matrix with parameters $i \\in I$\n\t\t%\\State Calculate $\\overline{c}_i$ for $i \\in I$\n\t\t\\State Calculate $\\arg \\min_{i} (\\overline{c}_i)$ with $i \\in I \\wedge i \\notin E$\n\t\t\\State Remove parameter $i$\n\t\t\\State Calculate $\\hat{\\boldsymbol{\\theta}}_{k}$ using WLS\n\t\t\\State Calculate $e_{k,j}$, $e^*_{k,j}$\n\t\t\\If {$\\frac{e^*_{k,j}}{e^*_{0,j}} \\geq e_{\\text{tol}} \\text{ for some } j$}\n\t\t\t\\State Add parameter $i$ to $E$\n\t\t\\Else\n\t\t\t\\State Remove parameter $i$ from $I$\n\t\t\\EndIf\n\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\nSets $I$ (included) and $E$ (essential) are initialized (line 4): Set $I$ represents the parameters which are still included in the model in a given iteration and is initialized to contain all parameters. Set $E$ contains all parameters which are indispensable for the model. The friction parameters may be indispensable for the model if all joints perform sufficient motion during the observed process. In this case $E$ will include the friction parameters, otherwise set $E$ is initialized empty.\n\nThe model reduction is then performed in a loop which iterates over all $n_i=35$ parameters in the model. In each iteration the design matrix is constructed with the parameters defined by $I$ and $E$; the parameter with the lowest sensitivity is then determined using $\\overline{c}_i$ from \\eqref{eq:mean_Sensitivity} and removed from the model (lines 6 and 7). With the resulting reduced model a parameter estimation is carried out using the WLS method and the absolute and relative model errors $e_{k,j}$ and $e^*_{k,j}$ are calculated (lines 8 and 9).\n\nIn a final step it needs to be determined if the removed parameter is negligible for the model (line 10). The parameter can be considered negligible if the relative model error $e^*_{k,j}$ in step $k$ is within a pre-defined tolerance margin $e_{\\mathrm{tol}}$ with respect to the reference value $e^*_{0,j}$ for all joints $j$. A compact sketch of the complete loop is given below.
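This minimal sketch mirrors the loop under simplifying assumptions: an ordinary least-squares fit stands in for the WLS estimator, the per-joint error check is collapsed onto the stacked measurement vector, and all names are illustrative rather than taken from the original implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef rel_err(C, params, tau):\n    # Relative prediction error of a least-squares fit (stand-in\n    # for WLS) restricted to the parameter subset params.\n    idx = sorted(params)\n    theta = np.linalg.lstsq(C[:, idx], tau, rcond=None)[0]\n    return np.mean(np.abs(tau - C[:, idx] @ theta)) / np.mean(np.abs(tau))\n\ndef reduce_model(C, tau, e_tol, essential=()):\n    essential = set(essential)                  # e.g. friction parameters\n    included = set(range(C.shape[1]))\n    cbar = np.abs(C).sum(axis=0) / C.shape[0]   # mean sensitivities\n    e0 = rel_err(C, included, tau)              # full-model reference error\n    for _ in range(C.shape[1]):\n        candidates = included - essential\n        if not candidates:\n            break\n        i = min(candidates, key=lambda j: cbar[j])  # least sensitive\n        if rel_err(C, included - {i}, tau) / e0 >= e_tol:\n            essential.add(i)     # error grows too much: keep the parameter\n        else:\n            included.discard(i)  # negligible: remove it for good\n    return included\n\\end{verbatim}\n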
% Removing the friction parameters may result in a reduced model which is within the error tolerance, but the robustness of the parameter estimation is negatively impacted. This can be seen in a high condition number of the reduced model which means the parameters are not equally excited and therefore poorly identified. \n\nThe advantage of the algorithm is that it removes unimportant parameters so that the model error stays low, but since the condition number is not optimized directly, its improvement may be small. Nevertheless, it has been found in several experiments that the condition number is also improved.", "meta": {"hexsha": "2cb6207405febe528862ecbf58e67db5447dd5c1", "size": 7212, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/Chapters/Experiments/Model_Reduction_Algorithm/Model_Reduction_Algorithm.tex", "max_stars_repo_name": "SchapplM/robotics-paper_icma2020", "max_stars_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/Chapters/Experiments/Model_Reduction_Algorithm/Model_Reduction_Algorithm.tex", "max_issues_repo_name": "SchapplM/robotics-paper_icma2020", "max_issues_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/Chapters/Experiments/Model_Reduction_Algorithm/Model_Reduction_Algorithm.tex", "max_forks_repo_name": "SchapplM/robotics-paper_icma2020", "max_forks_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 103.0285714286, "max_line_length": 650, "alphanum_fraction": 0.7649750416, "num_tokens": 1841, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8652240860523328, "lm_q2_score": 0.665410572017153, "lm_q1q2_score": 0.5757292540231013}}
{"text": "\\section{Converge to Equilibrium}\r\nTheorem \\ref{power_inv} tells us that we might be able to obtain an invariant measure of a Markov chain by looking at the limit $p_{ij}^{(n)}$ as $n\\to\\infty$.\r\nOf course, this does not always work.\r\n\\begin{example}[Non-example]\r\n    Take\r\n    $$P=\\begin{pmatrix}\r\n        0&1\\\\\r\n        1&0\r\n    \\end{pmatrix}$$\r\n    then $P^n$ does not converge as $n\\to\\infty$ but it has an invariant distribution $(1/2,1/2)$.\r\n    This is an example of what is called a periodic Markov chain.\r\n\\end{example}\r\nBut we want to find cases where it does work.\r\nThe previous example motivates the following definition.\r\n\\begin{definition}\r\n    A state $i\\in I$ is aperiodic if $p_{ii}^{(n)}>0$ for sufficiently large $n$.\r\n    $P$ is periodic if all $i\\in I$ are periodic.\r\n\\end{definition}\r\n\\begin{lemma}\r\n    Let $P$ be irreducible and some $i\\in I$ is aperiodic, then for all $j,k\\in I$, $p_{jk}^{(n)}>0$ for sufficiently large $n$.\r\n\\end{lemma}\r\nIn particular, in an irreducible Markov chain, one state being aperiodic implies all states being aperiodic.\r\n\\begin{proof}\r\n    Choose $r,s$ such that $p_{ji}^{(r)}>0$ and $p_{ik}^{(s)}>0$.\r\n    Then\r\n    $$p_{jk}^{(r+n+s)}\\ge p_{ji}^{(r)}p_{ii}^{(n)}p_{ik}^{(s)}>0$$\r\n    for sufficiently large $n$.\r\n\\end{proof}\r\nHere we are ready to state our main theorem.\r\n\\begin{theorem}\\label{aperiodic_conv}\r\n    Let $P$ be irreducible and aperiodic and suppose $\\pi$ is an invariant distribution for $P$.\r\n    Let $\\lambda$ be any distribution and suppose $(X_n)\\sim\\operatorname{Markov}(\\lambda,P)$, then for all $j\\in I$ we have $\\mathbb P[X_n=j]\\to\\pi_j$ as $n\\to\\infty$.\r\n\\end{theorem}\r\nIn particular, $p_{ij}^{(n)}\\to\\pi_j$ as $n\\to\\infty$ for any $i,j\\in I$.\r\nThe proof uses a very interesting trick called coupling.\r\nBehold.\r\n\\begin{proof}\r\n    Let $(Y_n)\\sim\\operatorname{Markov}(\\pi,P)$ and independent of $(X_n)$.\r\n    Fix a reference state $b\\in I$ and set $T=\\inf\\{n\\ge 1:X_n=Y_n=b\\}$.\r\n    We claim that $\\mathbb P[T<\\infty]=1$.\r\n    Now $W_n=(X_n,Y_n)$ is a Markov chain $\\tilde{P}$ on the state space $I\\times I$ and the transition probability is obviously\r\n    $$\\tilde{p}_{(i,k)(j,l)}=p_{ij}p_{kl}$$\r\n    with initial distribution $\\tilde{\\lambda}_{(i,k)}=\\lambda_i\\pi_k$.\r\n    As $P$ is irreducible and aperiodic, the preceding lemma implies that $\\tilde{p}_{(i,k)(j,l)}^{(n)}>0$ for sufficiently large $n$.\r\n    In particular, $\\tilde{P}$ is irreducible.\r\n    Also, it has invariant distribution $\\tilde{\\pi}_{(i,k)}=\\pi_i\\pi_k$, which implies that $\\tilde{P}$ is positive recurrent.\r\n    In addition $T$ is the first passage time of $(W_n)$ to $(b,b)$.\r\n    Since $P$ is irreducible and recurrent, $\\mathbb P[T<\\infty]=1$ as desired.\r\n    It follows that, by Theorem \\ref{strong_markov},\r\n    \\begin{align*}\r\n        \\mathbb P[X_n=i]&=\\mathbb P[X_n=i,n<T]+\\mathbb P[X_n=i,n\\ge T]\\\\\r\n        &=\\mathbb P[X_n=i,n<T]+\\mathbb P[Y_n=i,n\\ge T]\\\\\r\n        &=\\mathbb P[X_n=i,n<T]+\\pi_i-\\mathbb P[Y_n=i,n< T]\r\n    \\end{align*}\r\n    Therefore by rearranging,\r\n    \\begin{align*}\r\n        |\\mathbb P[X_n=i]-\\pi_i|&=|\\mathbb P[X_n=i,n<T]-\\mathbb P[Y_n=i,n< T]|\\\\\r\n        &\\le\\mathbb P[n<T]\\to 0\r\n    \\end{align*}\r\n    as $n\\to\\infty$ since $\\mathbb P[T<\\infty]=1$.\r\n\\end{proof}\r\nWhat would go wrong if the Markov chain is not 
What would go wrong if the Markov chain is not aperiodic?\r\n\\begin{example}\r\n    Take as usual\r\n    $$P=\\begin{pmatrix}\r\n        0&1\\\\\r\n        1&0\r\n    \\end{pmatrix}$$\r\n    We know that $\\pi=(1/2,1/2)$ is an invariant distribution.\r\n    If $X\\sim\\operatorname{Markov}(\\delta_0,P)$ and $Y\\sim\\operatorname{Markov}(\\pi,P)$, then on the event $\\{Y_0=1\\}$, which has probability $1/2$, $X_n$ and $Y_n$ are permanently out of phase and never meet.\r\n\\end{example}\r\nWe want to get a grip on what happens when $X_n$ is periodic (i.e. not aperiodic).\r\nWe will not include proofs as they are not in the scope of this course, but we will state some results; the interested reader can find more in J. R. Norris's book.\r\n\\begin{lemma}\r\n    Let $P$ be irreducible.\r\n    There exists an integer $d\\ge 1$ and a partition $I=C_0\\cup\\cdots\\cup C_{d-1}$ such that:\\\\\r\n    1. $p_{ij}^{(n)}>0$ only if $i\\in C_r$ and $j\\in C_{r+n}$ for some $r$ where the indices of $C_k$ are taken modulo $d$.\\\\\r\n    2. For any $r$ and $i,j\\in C_r$ we have $p_{ij}^{(nd)}>0$ for sufficiently large $n$.\r\n\\end{lemma}\r\nSuch a $d$ is called the period of a periodic Markov chain.\r\nThe $d=1$ case corresponds to the aperiodic case.\r\n\\begin{theorem}\r\n    Let $P$ be irreducible of period $d$ with $C_0,\\ldots, C_{d-1}$ as in the preceding lemma.\r\n    Let $\\lambda$ be a distribution with $\\sum_{i\\in C_0}\\lambda_i=1$.\r\n    Suppose $(X_n)\\sim\\operatorname{Markov}(\\lambda,P)$, then for $r=0,\\ldots,d-1$ and $j\\in C_r$,\r\n    $$\\mathbb P[X_{nd+r}=j]\\to\\frac{d}{m_j}$$\r\n    as $n\\to\\infty$ where $m_j$ is the expected return time to $j$.\r\n\\end{theorem}\r\nThis is a pretty intuitive generalisation of Theorem \\ref{aperiodic_conv}, which is recovered by setting $d=1$. For the two-state chain above we have $d=2$, $C_0=\\{0\\}$, $C_1=\\{1\\}$ and $m_0=m_1=2$, so starting from state $0$ we get $\\mathbb P[X_{2n}=0]\\to 2/2=1$, matching the purely periodic behaviour observed earlier.", "meta": {"hexsha": "81be36573baf0f257789e669cbbfee42ca970389", "size": 4873, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8/converge.tex", "max_stars_repo_name": "david-bai-notes/IB-Markov-Chains", "max_stars_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8/converge.tex", "max_issues_repo_name": "david-bai-notes/IB-Markov-Chains", "max_issues_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8/converge.tex", "max_forks_repo_name": "david-bai-notes/IB-Markov-Chains", "max_forks_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.7528089888, "max_line_length": 180, "alphanum_fraction": 0.6437512826, "num_tokens": 1665, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8652240773641087, "lm_q1q2_score": 0.5757292367600482}}
{"text": "\\subsection{Helper methods}\nThere are 4 helper methods to handle our abstract map:\\\\\n\n$\\begin{array}{rl}\n\\wsf{contains} & \\in \\AMap \\times \\AKey \\rightarrow \\wbf{Boolean} \\\\\n\\wsf{lookup} & \\in \\AMap \\times \\AKey \\rightarrow \\AVal \\times \\wbf{Boolean} \\\\\n\\wsf{update} & \\in \\AMap \\times \\AKey \\times \\AVal \\rightarrow \\AMap \\\\\n\\wsf{delete} & \\in \\AMap \\times \\AKey \\rightarrow \\AMap \\\\\n\\end{array}$\\\\\\\\\n\n\\textbf{Monotonicity } ??\\\\\n\n\\textbf{Soundness }\nWe proved the soundness of those methods at section \\ref{sec:soundness}.\\\\\n\n\\textbf{Termination }\nWe call a abstract map \\emph{infinite} if $\\Dom$ of the abstract map is infinite,\nand \\emph{finite} otherwise.\nEven though the definition of $\\AMap$ does not guarantee a abstract map to be finite,\nthe inputs of the helper methods will never be infinite during the analysis.\nBecause every abstract map created by alpha function during the analysis is finite\nas the program code is finite, and if the input abstract map is finite, \nthe resulted abstract map of helper methods is also finite by the definition of the methods.\nSo we assume that every abstract map used as inputs of helper method are finite.\nThus the helper methods always terminates.\\\\\n\n\\subsubsection{contains}\n$\\wsf{contains}$ is used to check whether the given abstract key\nhas a mapping in given abstract map.\\\\\n\n$\\begin{array}{l}\n\\wsf{contains} \\in \\AMap \\times \\AKey \\rightarrow \\wbf{Boolean} \\\\\\\\\n\\wsf{contains}(\\bot_m,\\ \\hat{k}) = \\bot_b \\\\\n\\wsf{contains}(\\hat{m},\\ \\bot_k) = \\bot_b \\\\\n\\wsf{contains}(\\hat{m},\\ \\hat{k}) = \\hat{b} \\\\\n\\quad \\begin{array}{ll} \\textrm{where }\n& \\hat{b} = \\left \\{ \\begin{array}{ll}\n\\top_b & \\textrm{if } \\emph{domIn?} \\land \\gamma_k(\\hat{k}) \\not\\subseteq \\Defset(\\hat{m}) \\\\\n\\hat{\\texttt{true}} & \\textrm{if } \\emph{domIn?} \\land \\gamma_k(\\hat{k}) \\subseteq \\Defset(\\hat{m}) \\\\\n\\hat{\\texttt{false}} & \\textrm{if } \\neg \\emph{domIn?} \\\\\n\\end{array} \\right.\\\\\n& \\emph{domIn?} = \\exists \\hat{k}' \\in \\Dom(\\hat{m}) \\cdot\n\\wsf{isRelated}(\\hat{k},\\ \\hat{k}')\\\\\n\\end{array}\\\\\n\\end{array}$\n\n\\subsubsection{lookup}\n$\\wsf{lookup}$ gets the abstract map and abstract key,\nand returns the abstract value mapped to the keys related to given abstract key.\nThe abstract boolean value returned with the abstract value indicates\nwhether the returned value is definite or not.\\\\\n\n$\\begin{array}{l}\n\\wsf{lookup} \\in \\AMap \\times \\AKey \\rightarrow \\AVal \\times \\wbf{Boolean} \\\\\\\\\n\n\\wsf{lookup}(\\bot_m,\\ \\hat{k}) = (\\bot_v,\\ \\bot_b) \\\\\n\\wsf{lookup}(\\hat{m},\\ \\bot_{k}) = (\\bot_v,\\ \\bot_b) \\\\\n\n\\wsf{lookup}(\\hat{m},\\ \\hat{k}) = (\\emph{local},\\ \\hat{b}) \\\\\n\\quad \\begin{array}{rl} \\textrm{where}\n& \\emph{local} = \\bigsqcup_v \\left \\{ \n\\Map(\\hat{m})(\\hat{k}') \\mid \\hat{k}' \\in \\emph{S} \\right \\} \\vspace{1mm}\\\\\n& \\hat{b} = \\left \\{ \\begin{array}{ll}\n\\top_b & \\textrm{if } \\emph{S} \\neq \\varnothing \\land \n\\gamma_k(\\hat{k}) \\not\\subseteq \\Defset(\\hat{m}) \\\\\n\\hat{\\texttt{true}} & \\textrm{if } \\emph{S} \\neq \\varnothing \\land \n\\gamma_k(\\hat{k}) \\subseteq \\Defset(\\hat{m}) \\\\\n\\hat{\\texttt{false}} & \\textrm{if } \\emph{S} = \\varnothing \\\\\n\\end{array} \\right. 
\\vspace{1mm} \\\\\n& \\emph{S} = \\{ \\hat{k}' \\mid \\hat{k}' \\in \\Dom(\\hat{m})\n\\land \\wsf{isRelated}(\\hat{k},\\ \\hat{k}') \\}\\\\\n\\end{array} \\\\\n\\end{array}$\n\n\\subsubsection{update}\n$\\wsf{update}$ calculate updated abstract map of given map with given key and value.\nIf the given abstract key is \\emph{exact}, ignore existing mapped value to the key.\nOtherwise, join the existing mapped value with the new value.\nIt is important to determine whether the given key is \\emph{exact} or not,\nin order to the result of the $\\wsf{update}$ methods be precise.\\\\\n\n$\\begin{array}{l}\n\\wsf{update} \\in \\AMap \\times \\AKey \\times \\AVal \\rightarrow \\AMap \\\\\\\\\n\n\\wsf{update}(\\bot_m,\\ \\hat{k},\\ \\hat{v}) = \\bot_m \\\\\n\\wsf{update}(\\hat{m},\\ \\bot_k,\\ \\hat{v}) = \\bot_m \\\\\n\\wsf{update}(\\hat{m},\\ \\hat{k},\\ \\bot_{v}) = \\hat{m} \\\\\n\n\\wsf{update}(\\hat{m},\\ \\hat{k},\\ \\hat{v}) = \\langle \\emph{map},\\ \\emph{defset} \\rangle \\vspace{1mm}\\\\\n\\quad \\begin{array}{rl} \\textrm{where}\n& \\emph{map} = \\left \\{ \\begin{array}{ll}\n\\Map(\\hat{m}) \\ast [ \\hat{k} \\mapsto \\hat{v} ]\n& \\textrm{if } \\hat{k} \\not\\in \\Dom(\\hat{m}) \\\\\n\n(\\Map(\\hat{m}) - \\hat{k}) \\ast [ \\hat{k} \\mapsto \\hat{v}]\n& \\textrm{if } \\emph{exact?} \\land \\hat{k}\\in \\Dom(\\hat{m}) \\\\\n\n(\\Map(\\hat{m}) - \\hat{k})\n\\ast [ \\hat{k} \\mapsto \\hat{v} \\sqcup_v \\Map(\\hat{m})(\\hat{k}) ]\n& \\textrm{if } \\neg \\emph{exact?} \\land \\hat{k}\\in \\Dom(\\hat{m}) \\\\\n\\end{array} \\right. \\vspace{1mm}\\\\\n\n& \\emph{defset} = \\left \\{ \\begin{array}{ll}\n\\Defset(\\hat{m}) \\cup \\gamma_k(\\hat{k}) & \\textrm{if} ~ \\emph{exact?} \\\\\n\\Defset(\\hat{m}) & \\textrm{otherwise}\\\\\n\\end{array} \\right. \\vspace{1mm}\\\\\n\n& \\emph{exact?} = \\mid \\gamma_k(\\hat{k}) \\mid = 1 \\\\\n\\end{array} \\\\\n\\end{array} $\n\n\\subsubsection{delete}\nFor given abstract map and abstract key, \n$\\wsf{delete}$ returns the new abstract map \nwhich have no value mapped to the given key.\nSimilar to $\\wsf{update}$ method, \nif the given key is not \\emph{exact},\n$\\wsf{delete}$ cannot change the input map.\nThus it is important to determine \\emph{exact}ness of the given key.\\\\\n\n$\\begin{array}{l}\n\\wsf{delete} \\in \\AMap \\times \\AKey \\rightarrow \\AMap \\\\\\\\\n\n\\wsf{delete}(\\bot_m,\\ \\hat{k}) = \\bot_m \\\\\n\\wsf{delete}(\\hat{m},\\ \\bot_k) = \\bot_m \\\\\n\n\\wsf{delete}(\\hat{m},\\ \\hat{k}) =\n\\langle \\emph{map},\\ \\Defset(\\hat{m}) \\setminus \\gamma_k(\\hat{k}) \\rangle \\vspace{1mm}\\\\\n\\quad \\begin{array}{rl} \\textrm{where}\n& \\emph{map} = \\left \\{ \\begin{array}{ll}\n(\\Map(\\hat{m}) - \\hat{k})\n& \\textrm{if } \\emph{exact?} \\land \\hat{k} \\in \\Dom(\\hat{m}) \\\\\n\\Map(\\hat{m})\n& \\textrm{otherwise} \\\\\n\\end{array} \\right. 
\\vspace{1mm}\\\\\n\n& \\emph{exact?} = \\mid \\gamma_k(\\hat{k}) \\mid = 1 \\\\\n\\end{array} \\\\\n\\end{array}$\n\n\\subsection{Soundness} \\label{sec:soundness}\n\\newtheorem{thm}{Theorem}\n\\begin{thm} \\normalfont\n(\\textit{Soundness of} $\\wsf{contains}$)\n$\\forall m_1 \\in \\CMap, k_1 \\in \\CKey :$\\\\\nIf $\\exists \\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$\nand $\\exists \\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$,\nthen $\\textsf{contains}(m_1,\\ k_1) \\in \\gamma_b(\\wsf{contains}(\\hat{m}_2,\\ \\hat{k}_2))$.\n\\end{thm}\n\\textbf{Proof } $\\forall m_1 \\in \\CMap, k_1 \\in \\CKey$, \nlet $\\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$ \nand $\\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$.\\\\\nLet $\\hat{m}_1 = \\alpha_m(\\{ m_1 \\})$ and $\\hat{k}_1 = \\alpha_k(\\{ k_1 \\})$,\nthen $\\hat{m}_1 \\po_m \\hat{m}_2$ and $\\hat{k}_1 \\po_k \\hat{k}_2$.\n\\begin{itemize}\n\\item If $\\textsf{contains}(m_1, k_1) = \\texttt{true}$, then $k_1 \\in \\Dom(m_1)$.\\\\\n$\\Dom(\\hat{m}_1)  = \\{ \\alpha_k(\\{k\\}) \\mid k \\in \\Dom(m_1) \\}$\nby definition of $\\alpha_m$, thus $\\hat{k}_1 \\in \\Dom(\\hat{m}_1)$.\\\\\nAlso, $\\exists \\hat{k} \\in \\Dom(\\hat{m}_2) \n\\cdot \\hat{k}_1 \\po_k \\hat{k} \\land \\Map(\\hat{m}_1)(\\hat{k}_1) \\po_v \\Map(\\hat{m}_2)(\\hat{k})$,\nby definition of $\\po_m$.\\\\\n$k_1 \\in \\gamma_k(\\hat{k}_1) \\subseteq \\gamma_k(\\hat{k})$ by monotonicity of $\\gamma_k$.\\\\ \n$\\wsf{isRelated}(\\hat{k}_2, \\hat{k})$ must be \\texttt{true},\nsince $k_1 \\in \\gamma_k(\\hat{k}) \\land k_1 \\in \\gamma_k(\\hat{k}_2)$. \\\\\nTherefore $\\wsf{contains}(\\hat{m}_2, \\hat{k}_2)$ should be either $\\top_b$ or $\\hat{\\texttt{true}}$.\\\\\nThus $\\textsf{contains}(m_1, k_1) \\in \\gamma_b(\\wsf{contains}(\\hat{m}_2, \\hat{k}_2))$.\n\\item If $\\textsf{contains}(m_1, k_1) = \\texttt{false}$, then $k_1 \\not\\in \\Dom(m_1)$.\\\\\n$\\Defset(\\hat{m}_2) \\subseteq \\Defset(\\hat{m}_1)$ by definition of $\\po_m$,\nand $\\Defset(\\hat{m}_1) = \\Dom(m_1)$ by definition of $\\alpha_m$.\\\\\n$k_1 \\not\\in \\Defset(\\hat{m}_2)$ because $k_1 \\not\\in \\Dom(m_1) = \\Defset(\\hat{m}_1)$. 
\\\\\n$\\gamma_k(\\hat{k}_2) \\not\\subseteq \\Defset(\\hat{m}_2)$, since\n$k_1 \\not\\in \\Defset(\\hat{m}_2)$ but $k_1 \\in \\gamma_k(\\hat{k}_2)$.\\\\\nTherefore $\\wsf{contains}(\\hat{m}_2, \\hat{k}_2)$ should be either $\\top_b$ or $\\hat{\\texttt{false}}$.\\\\\nThus $\\textsf{contains}(m_1, k_1) \\in \\gamma_b(\\wsf{contains}(\\hat{m}_2, \\hat{k}_2))$.\n\\end{itemize}\n\n\n\\begin{thm} \\normalfont\n(\\textit{Soundness of} $\\wsf{lookup}$)\n$\\forall m_1 \\in \\CMap, k_1 \\in \\CKey :$\\\\\nIf $\\exists \\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$,\n$\\exists \\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$,\nand $(\\hat{v}, \\hat{b}) = \\wsf{lookup}(\\hat{m}_2,\\ \\hat{k}_2)$ \nfor $\\hat{v} \\in \\AVal, \\hat{b} \\in \\wbf{Boolean}$,\nthen $\\textsf{lookup}(m_1,\\ k_1) \\in \\gamma_v(\\hat{v}) \\cup \\gamma_b(\\hat{b})$.\n\\end{thm}\n\\textbf{Proof } $\\forall m_1 \\in \\CMap, k_1 \\in \\CKey$, \nlet $\\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$,\n$\\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$,\nand $(\\hat{v},\\ \\hat{b}) = \\wsf{lookup}(\\hat{m}_2,\\ \\hat{k}_2)$.\\\\\nLet $\\hat{m}_1 = \\alpha_m(\\{ m_1 \\})$ and $\\hat{k}_1 = \\alpha_k(\\{ k_1 \\})$,\nthen $\\hat{m}_1 \\po_m \\hat{m}_2$ and $\\hat{k}_1 \\po_k \\hat{k}_2$.\n\\begin{itemize}\n\\item If $\\textsf{lookup}(m_1, k_1) = m_1(k_1) \\in \\CVal$, then $k_1 \\in \\Dom(m_1)$.\\\\\n$\\hat{v} = \\bigsqcup_v \\emph{V}$\nwhere $\\emph{V} = \\{ \\Map(\\hat{m}_2)(\\hat{k}) \\mid \\hat{k} \\in \\emph{S} \\}$\nand $\\emph{S} = \\{ \\hat{k} \\mid \\hat{k} \\in \\Dom(\\hat{m}_2) \\land \\wsf{isRelated}(\\hat{k}, \\hat{k}_2) \\}$\nby definition of $\\wsf{lookup}$. \\vspace{1mm} \\\\\n$\\Dom(\\hat{m}_1)  = \\{ \\alpha_k(\\{k\\}) \\mid k \\in \\Dom(m_1) \\}$\nby definition of $\\alpha_m$, thus $\\hat{k}_1 \\in \\Dom(\\hat{m}_1)$.\\\\\nAlso, $\\exists \\hat{k} \\in \\Dom(\\hat{m}_2) \n\\cdot \\hat{k}_1 \\po_k \\hat{k} \\land \\Map(\\hat{m}_1)(\\hat{k}_1) \\po_v \\Map(\\hat{m}_2)(\\hat{k})$,\nby definition of $\\po_m$.\\\\\n$k_1 \\in \\gamma_k(\\hat{k}_1) \\subseteq \\gamma_k(\\hat{k})$ by monotonicity of $\\gamma_k$.\\\\ \n$\\wsf{isRelated}(\\hat{k}_2, \\hat{k})$ must be \\texttt{true},\nsince $k_1 \\in \\gamma_k(\\hat{k}) \\land k_1 \\in \\gamma_k(\\hat{k}_2)$,\nthus $\\hat{k} \\in \\emph{S} $.\\\\\nIt means $\\Map(\\hat{m}_2)(\\hat{k}) \\in \\emph{V}$,\nso that $\\Map(\\hat{m}_2)(\\hat{k}) \\po_v \\hat{v}$. \\vspace{1mm} \\\\\n$m_1(k_1) \\in \\gamma_v (\\Map(\\hat{m}_1)(\\hat{k}_1))$ by definition of $\\alpha_m$.\\\\\n$m_1(k_1) \\in \\gamma_v (\\Map(\\hat{m}_1)(\\hat{k}_1))\n\\subseteq \\gamma_v (\\Map(\\hat{m}_2)(\\hat{k})) \\subseteq \\gamma_v (\\hat{v})$,\nby monotonicity of $\\gamma_v$.\\\\\nTherefore, $\\textsf{lookup}(m_1, k_1) \\in \\gamma_v(\\hat{v}) \\cup \\gamma_b(\\hat{b})$.\n\\item If $\\textsf{lookup}(m_1, k_1) = \\texttt{false}$, then $k_1 \\not\\in \\Dom(m_1)$.\\\\\n$\\Defset(\\hat{m}_2) \\subseteq \\Defset(\\hat{m}_1)$ by definition of $\\po_m$,\nand $\\Defset(\\hat{m}_1) = \\Dom(m_1)$ by definition of $\\alpha_m$.\\\\\n$k_1 \\not\\in \\Defset(\\hat{m}_2)$ because $k_1 \\not\\in \\Dom(m_1) = \\Defset(\\hat{m}_1)$. \\\\\n$\\gamma_k(\\hat{k}_2) \\not\\subseteq \\Defset(\\hat{m}_2)$, since\n$k_1 \\not\\in \\Defset(\\hat{m}_2)$ but $k_1 \\in \\gamma_k(\\hat{k}_2)$.\\\\\nThen $\\hat{b}$ should be either $\\top_b$ or $\\hat{\\texttt{false}}$,\nso that $\\texttt{false} \\in \\gamma_b(\\hat{b})$. 
\\\\\nThus $\\textsf{lookup}(m_1, k_1) \\in \\gamma_v(\\hat{v}) \\cup \\gamma_b(\\hat{b})$.\n\\end{itemize}\n\n\n\\begin{thm} \\normalfont\n(\\textit{Soundness of} $\\wsf{update}$) \n$\\forall m_1 \\in \\CMap, k_1 \\in \\CKey, v_1 \\in \\CVal :$\\\\\nIf $\\exists \\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$,\n$\\exists \\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$,\nand $\\exists \\hat{v}_2 \\in \\AVal \\cdot v_1 \\in \\gamma_v(\\hat{v}_2)$,\nthen $\\textsf{update}(m_1,\\ k_1,\\ v_1) \\in \\gamma_m(\\wsf{update}(\\hat{m}_2,\\ \\hat{k}_2,\\ \\hat{v}_2))$.\n\\end{thm}\n\\textbf{Proof } $\\forall m_1 \\in \\CMap, k_1 \\in \\CKey$, \nlet $\\hat{m}_2 \\in \\AMap \\cdot m_1 \\in \\gamma_m(\\hat{m}_2)$,\n$\\hat{k}_2 \\in \\AKey \\cdot k_1 \\in \\gamma_k(\\hat{k}_2)$,\nand $\\hat{v}_2 \\in \\AVal \\cdot v_1 \\in \\gamma_v(\\hat{v}_2)$.\\\\\nLet $\\hat{m}_1 = \\alpha_m(\\{ m_1 \\})$,\n$\\hat{k}_1 = \\alpha_k(\\{ k_1 \\})$,\n$\\hat{v}_1 = \\alpha_v(\\{ v_1 \\})$,\nthen $\\hat{m}_1 \\po_m \\hat{m}_2$,\n$\\hat{k}_1 \\po_k \\hat{k}_2$,\nand $\\hat{v}_1 \\po_v \\hat{v}_2$.\\\\\nFor $\\hat{m}_2' = \\wsf{update}(\\hat{m}_2, \\hat{k}_2, \\hat{v}_2)$,\nand $\\emph{mset} = \\{ \\textsf{update}(m_1, k_1, v_1) \\}$, \\\\\nif $\\alpha_m(\\emph{mset}) \\po_m \\hat{m}_2'$\nthen $\\textsf{update}(m_1, k_1, v_1) \\in \\gamma_m(\\hat{m}_2')$\nby definition of $\\gamma_m$.\\\\\nTo show $\\alpha_m(\\emph{mset}) \\po_m \\hat{m}_2'$, we need to prove 2 things:\n\\begin{enumerate}[label=({\\arabic*})]\n\\item $\\forall \\hat{k} \\in \\Map(\\alpha_m(\\emph{mset})) \\cdot\n\\exists \\hat{k}' \\in \\Dom(\\hat{m}_2'):\n\\hat{k} \\po_k \\hat{k}' \\land \\Map(\\alpha_m(\\emph{mset}))(\\hat{k}) \\po_v \\Map(\\hat{m}_2')(\\hat{k}')$\n\\item $\\Defset(\\hat{m}_2') \\subseteq \\Defset(\\alpha_m(\\emph{mset}))$\n\\end{enumerate}\n\\begin{itemize}\n\\item If $k_1 \\not\\in \\Dom(m_1)$, \nthen $\\emph{mset} = \\{ m_1 \\cup \\{[k_1 \\mapsto v_1]\\} \\}$.\\\\\n$\\alpha_m(\\emph{mset}) = \n\\langle \\{ [\\alpha_k(\\{ k \\}) \\mapsto \\alpha_v(\\{ m_1(k) \\} )] \\mid k \\in \\Dom(m_1) \\}\n\\cup \\{ [ \\hat{k}_1 \\mapsto \\hat{v}_1 ] \\},\\\n\\Dom(m_1) \\cup \\{ k_1 \\} \\rangle$\\\\\n$= \\langle \\{ [ \\hat{k} \\mapsto \\Map(\\hat{m}_1)(\\hat{k}) ] \\mid \\hat{k} \\in \\Dom(\\hat{m}_1) \\}\n\\cup \\{ [ \\hat{k}_1 \\mapsto \\hat{v}_1 ] \\},\\\n\\Dom(m_1) \\cup \\{ k_1 \\} \\rangle$\nby definition of $\\alpha_m$.\n\nProve (1). \\\\\n$\\forall \\hat{k} \\in \\Dom(\\hat{m}_1) \\cdot \\exists \\hat{k}' \\in \\Dom(\\hat{m}_2):\n\\hat{k} \\po_k \\hat{k}' \\land \\Map(\\hat{m}_1)(\\hat{k}) \\po_v \\Map(\\hat{m}_2)(\\hat{k}')$\nsince $\\hat{m}_1 \\po_m \\hat{m}_2$.\n\n$\\Dom(\\alpha_m(\\emph{mset})) = \\Dom(\\hat{m}_1) \\cup \\{ \\hat{k}_1 \\} $ \nby definition of $\\alpha_m$. For same reason, \\\\\n$\\forall \\hat{k} \\in \\Dom(\\hat{m}_1) \\cdot\n\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}) = \\Map(\\hat{m}_1)(\\hat{k})$,\nand $\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}_1) = \\hat{v}_1$.\n\nAlso, $\\Dom(\\hat{m}_2') = \\Dom(\\hat{m}_2) \\cup \\{ \\hat{k}_2 \\}$\nby definition of $\\wsf{update}$. 
For same reason, \\\\\n$\\forall \\hat{k} \\in (\\Dom(\\hat{m}_2) \\setminus \\{ \\hat{k}_2 \\}) \\cdot\n\\Map(\\hat{m}_2)(\\hat{k}) \\po_v \\Map(\\hat{m}_2')(\\hat{k})$,\nand $\\hat{v}_2 \\po_v \\Map(\\hat{m}_2')(\\hat{k}_2)$.\n\nFor $\\hat{k}_1$, $\\exists \\hat{k}_2 \\in \\Dom(\\hat{m}_2')$\nsuch that $\\hat{k}_1 \\po_k \\hat{k}_2 \\land \n\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}_1) \\po_v \\hat{v}_2 \\po_v \\Map(\\hat{m}_2')(\\hat{k}_2)$.\\\\\nFor $\\hat{k} \\in \\Dom(\\hat{m}_1)$,\nthere exists $\\hat{k}' \\in \\Dom(\\hat{m}_2') \\cdot \n\\hat{k} \\po_k \\hat{k}' \\land \\Map(\\hat{m}_1)(\\hat{k}) \\po_v \\Map(\\hat{m}_2')(\\hat{k}')$.\n\nProve (2).\\\\\n\\textbf{Case} if $\\mid \\gamma_k(\\hat{k}_2) \\mid = 1$, then $\\gamma_k(\\hat{k}_2) = \\{ k_1 \\}$.\\\\\n$\\Defset(\\hat{m}_2') = \\Defset(\\hat{m}_2) \\cup \\gamma_k(\\hat{k}_2)\n= \\Defset(\\hat{m}_2) \\cup \\{ k_1 \\}$ by definition of $\\wsf{update}$.\\\\\nHowever, $\\Defset(\\hat{m}_2) \\subseteq \\Defset(\\hat{m}_1)$ as $\\hat{m}_1 \\po_m \\hat{m}_2$.\\\\\nAlso, $\\Defset(\\hat{m}_1) = \\Dom(m_1)$ by definition of $\\alpha_m$.\\\\\nTherefore, $\\Defset(\\hat{m}_2') = \n\\Defset(\\hat{m}_2) \\cup \\gamma_k(\\hat{k}_2) \\subseteq \\Dom(m_1) \\cup \\{ k_1 \\}\n= \\Defset(\\alpha_m(\\emph{mset}))$.\n\n\\textbf{Case} if $\\mid \\gamma_k(\\hat{k}_2) \\mid > 1$.\\\\\n$\\Defset(\\hat{m}_2') = \\Defset(\\hat{m}_2)$ by definition of $\\wsf{update}$.\\\\\nHowever, $\\Defset(\\hat{m}_2) \\subseteq \\Dom(m_1)$,\nthus $\\Defset(\\hat{m}_2) \\subseteq \\Dom(m_1) \\cup \\{ k_1 \\}\n= \\Defset(\\alpha_m(\\emph{mset}))$.\nTherefore $\\Defset(\\hat{m}_2') \\subseteq \\Defset(\\alpha_m(\\emph{mset}))$.\n\n\\item If $k_1 \\in \\Dom(m_1)$, \nthen $\\emph{mset} = \\{ (m_1 \\setminus \\{ [ k_1 \\mapsto m_1(k_1)] \\}) \\cup \\{ [k_1 \\mapsto v_1]\\} \\}$.\\\\\n$\\alpha_m(\\emph{mset}) = \n\\langle \\{ [\\alpha_k(\\{ k \\}) \\mapsto \\alpha_v(\\{ m_1(k) \\} )] \\mid \nk \\in (\\Dom(m_1) \\setminus \\{ k_1 \\})\\}\n\\cup \\{ [ \\hat{k}_1 \\mapsto \\hat{v}_1 ] \\},\\ \\Dom(m_1) \\rangle$\\\\\n$= \\langle \\{ [ \\hat{k} \\mapsto \\Map(\\hat{m}_1)(\\hat{k}) ] \\mid \n\\hat{k} \\in (\\Dom(\\hat{m}_1) \\setminus \\{ \\hat{k}_1 \\}) \\}\n\\cup \\{ [ \\hat{k}_1 \\mapsto \\hat{v}_1 ] \\},\\ \\Dom(m_1) \\rangle$\nby definition of $\\alpha_m$.\n\nProve (1). \\\\\n$\\forall \\hat{k} \\in \\Dom(\\hat{m}_1) \\cdot \\exists \\hat{k}' \\in \\Dom(\\hat{m}_2):\n\\hat{k} \\po_k \\hat{k}' \\land \\Map(\\hat{m}_1)(\\hat{k}) \\po_v \\Map(\\hat{m}_2)(\\hat{k}')$\nsince $\\hat{m}_1 \\po_m \\hat{m}_2$.\n\n$\\Dom(\\alpha_m(\\emph{mset})) = \\Dom(\\hat{m}_1)$ \nby definition of $\\alpha_m$. For same reason, \\\\\n$\\forall \\hat{k} \\in (\\Dom(\\hat{m}_1) \\setminus \\{ \\hat{k}_1 \\}) \\cdot\n\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}) = \\Map(\\hat{m}_1)(\\hat{k})$,\nand $\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}_1) = \\hat{v}_1$.\n\nAlso, $\\Dom(\\hat{m}_2') = \\Dom(\\hat{m}_2) \\cup \\{ \\hat{k}_2 \\}$\nby definition of $\\wsf{update}$. 
For same reason, \\\\\n$\\forall \\hat{k} \\in (\\Dom(\\hat{m}_2) \\setminus \\{ \\hat{k}_2 \\}) \\cdot\n\\Map(\\hat{m}_2)(\\hat{k}) \\po_v \\Map(\\hat{m}_2')(\\hat{k})$,\nand $\\hat{v}_2 \\po_v \\Map(\\hat{m}_2')(\\hat{k}_2)$.\n\nFor $\\hat{k}_1$, $\\exists \\hat{k}_2 \\in \\Dom(\\hat{m}_2')$\nsuch that $\\hat{k}_1 \\po_k \\hat{k}_2 \\land \n\\Map(\\alpha_m(\\emph{mset}))(\\hat{k}_1) \\po_v \\hat{v}_2 \\po_v \\Map(\\hat{m}_2')(\\hat{k}_2)$.\\\\\nFor $\\hat{k} \\in \\Dom(\\hat{m}_1)$,\nthere exists $\\hat{k}' \\in \\Dom(\\hat{m}_2') \\cdot \n\\hat{k} \\po_k \\hat{k}' \\land \\Map(\\hat{m}_1)(\\hat{k}) \\po_v \\Map(\\hat{m}_2')(\\hat{k}')$.\n\nProve (2).\\\\\n\\textbf{Case} if $\\mid \\gamma_k(\\hat{k}_2) \\mid = 1$, then $\\gamma_k(\\hat{k}_2) = \\{ k_1 \\}$.\\\\\n$\\Defset(\\hat{m}_2') = \\Defset(\\hat{m}_2) \\cup \\gamma_k(\\hat{k}_2)\n= \\Defset(\\hat{m}_2) \\cup \\{ k_1 \\}$ by definition of $\\wsf{update}$.\\\\\nHowever, $\\Defset(\\hat{m}_2) \\subseteq \\Defset(\\hat{m}_1)$ as $\\hat{m}_1 \\po_m \\hat{m}_2$.\\\\\nAlso, $\\Defset(\\hat{m}_1) = \\Dom(m_1)$ by definition of $\\alpha_m$.\\\\\nMoreover, $\\Dom(m_1) \\cup \\{ k_1 \\} = \\Dom(m_1)$ as $k_1 \\in \\Dom(m_1)$ \\\\\nTherefore $\\Defset(\\hat{m}_2') \\subseteq \\Dom(m_1) = \\Defset(\\alpha_m(\\emph{mset}))$.\n\n\\textbf{Case} if $\\mid \\gamma_k(\\hat{k}_2) \\mid > 1$.\\\\\n$\\Defset(\\hat{m}_2') = \\Defset(\\hat{m}_2)$ by definition of $\\wsf{update}$.\\\\\nHowever, $\\Defset(\\hat{m}_2) \\subseteq \\Dom(m_1) = \\Defset(\\alpha_m(\\emph{mset}))$.\nTherefore $\\Defset(\\hat{m}_2') \\subseteq \\Defset(\\alpha_m(\\emph{mset}))$.\n\\end{itemize}\n\n\n\\begin{thm} \\normalfont\n(\\textit{Soundness of} $\\wsf{delete}$) \n$\\forall m \\in \\CMap, k \\in \\CKey :$\\\\\nIf $\\exists \\hat{m} \\in \\AMap \\cdot m \\in \\gamma_m(\\hat{m})$\nand $\\exists \\hat{k} \\in \\AKey \\cdot k \\in \\gamma_k(\\hat{k})$,\nthen $\\textsf{delete}(m,\\ k) \\in \\gamma_m(\\wsf{delete}(\\hat{m},\\ \\hat{k}))$.\n\\end{thm}\n", "meta": {"hexsha": "d3370fe0654a736374125bdb4328b1d1c7060410", "size": 17878, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/obj/map-helpers.tex", "max_stars_repo_name": "aliahsan07/safe-development", "max_stars_repo_head_hexsha": "542e4ebe5142912ad212a8517051462633bede2c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-25T12:53:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T23:29:49.000Z", "max_issues_repo_path": "doc/obj/map-helpers.tex", "max_issues_repo_name": "aliahsan07/safe-development", "max_issues_repo_head_hexsha": "542e4ebe5142912ad212a8517051462633bede2c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/obj/map-helpers.tex", "max_forks_repo_name": "aliahsan07/safe-development", "max_forks_repo_head_hexsha": "542e4ebe5142912ad212a8517051462633bede2c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-03T21:38:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-03T21:38:53.000Z", "avg_line_length": 48.5815217391, "max_line_length": 105, "alphanum_fraction": 0.6037588097, "num_tokens": 7706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8519528132451417, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5757195529296584}}
{"text": "\\documentclass{ximera}\n\\input{../preamble}\n\\title{Exercises: Cumulative}\n%%%%%\\author{Philip T. Gressman}\n\n\\begin{document}\n\\begin{abstract}\nExercises relating to various topics we have studied.\n\\end{abstract}\n\\maketitle\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n}{n - e^{-n}} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice[correct]{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n e^{-n}}{n - e^{-n}} \\]\n\\begin{multipleChoice}\n\\choice[correct]{Absolute}\n\\choice{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nTake absolute values and try direct comparison test.\n\\end{hint}\n\\begin{hint}\nTry direct comparison to $e^{-n}$.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n n}{n - e^{-n}} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice{Conditional}\n\\choice[correct]{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nWhat is the dominant term in the denominator as $n \\rightarrow \\infty$?\n\\end{hint}\n\\begin{hint}\nThe dominant term in the denominator is $n$.\n\\end{hint}\n\\begin{hint}\nThis means that the terms do not go to zero.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{1}{n^2 + (-1)^n n} \\]\n\\begin{multipleChoice}\n\\choice[correct]{Absolute}\n\\choice{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nWhat is the dominant term in the denominator as $n \\rightarrow \\infty$?\n\\end{hint}\n\\begin{hint}\nThe dominant term in the denominator is $n^2$.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n}{n - e^{-n}} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice[correct]{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n \\sqrt{n}}{\\ln (n+1)} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice{Conditional}\n\\choice[correct]{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nWhich term dominates as $n \\rightarrow \\infty$: $\\sqrt{n}$ or $\\ln (n+1)$?\n\\end{hint}\n\\begin{hint}\nAnswer: $\\sqrt{n}$ dominates $\\ln (n+1)$ as $n \\rightarrow \\infty$.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{(-1)^n}{\\sqrt{n} + \\ln (n+1)} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice[correct]{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nTo show that convergence is not absolute, try limit comparison.\n\\end{hint}\n\\begin{hint}\nThe comparison series can be taken to be $1/\\sqrt{n}$ in this case.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{1}{n} \\ln \\left(2 + \\frac{1}{n} \\right) 
\\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice{Conditional}\n\\choice[correct]{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nTry limit comparison with the harmonic series.\n\\end{hint}\n\\end{exercise}\n\n\\begin{exercise}\nDetermine whether the series below converges absolutely, conditionally, or diverges.\n\\[ \\sum_{n=1}^\\infty \\frac{n \\cos n \\pi}{n^2 - 1} \\]\n\\begin{multipleChoice}\n\\choice{Absolute}\n\\choice[correct]{Conditional}\n\\choice{Diverge}\n\\end{multipleChoice}\n\\begin{hint}\nWith absolute values, compare to a harmonic series.\n\\end{hint}\n\\begin{hint}\nHow do we know that the alternating series test applies? \n\\end{hint}\n\\end{exercise}\n\n\n\\section*{Sample Exam Questions}\n\n\\begin{question}%%%%%[2015C.04.alt]\n\nDetermine whether the following series converge or diverge.\n\\[ \\text{I: } \\sum_{n=1}^\\infty \\frac{n^3}{n^4+4} \\ \\ \\\n\\text{II: } \\sum_{n=1}^\\infty \\frac{3^n}{n!} \\ \\ \\ \n\\text{III: } \\sum_{n=2}^\\infty \\frac{ \\ln \\ln n}{\\ln n} \\ \\ \\\n\\text{IV: } \\sum_{n=1}^\\infty \\frac{3n^2}{(n!)^2} \\]\n\\begin{multiplechoice}\n\\choice{I \\& II converge; III \\& IV diverge}\n\\choice{I \\& III converge; II \\& IV diverge}\n\\choice{I \\& IV converge; II \\& III diverge}\n\\choice{II \\& III converge; I \\& IV diverge}\n\\choice[correct]{II \\& IV converge; I \\& III diverge}\n\\choice{III \\& IV converge; I \\& II diverge}\n\\end{multiplechoice}\n\n\\end{question}\n\n\\begin{question}%%%%%[2016C.09]\n\nDetermine whether the following series are convergent or divergent. Justify your answers.\n\\[ \\text{I: } \\sum_{n=1}^\\infty \\frac{n^2-3n}{\\sqrt[3]{n^{10}-4n^2}} \\ \\ \\ \\ \\text{II: } \\sum_{n=1}^\\infty \\frac{(-n)^n}{5^{2n+3}} \\]\n\\begin{multiplechoice}\n\\choice{I \\& II divergent}\n\\choice[correct]{I convergent, II divergent} \n\\choice{I divergent, II convergent}\n\\choice{I \\& II convergent}\n\\end{multiplechoice}\n\n\\end{question}\n\n\\begin{question}%%%%%[2016C.10]\n\nDetermine whether the following series are convergent or divergent. Justify your answers.\n\\[ \\text{I: } \\sum_{n=1}^\\infty \\frac{\\arctan n}{n^4} \\ \\ \\ \\ \\text{II: } \\sum_{n=1}^\\infty \\frac{\\sin \\frac{1}{n}}{n^2} \\]\n\\begin{multiplechoice}\n\\choice{I \\& II divergent}\n\\choice{I convergent, II divergent} \n\\choice{I divergent, II convergent}\n\\choice[correct]{I \\& II convergent}\n\\end{multiplechoice}\n\n\\end{question}\n\n\\begin{question}%%%%%[2017C.11]\n\nDetermine which of the following series are convergent. 
\\offline{For full credit, be sure to explain your reasoning and specify which tests were used.}\n\\[ \\text{I: } \\sum_{n=2}^\\infty 2ne^{-n^2} \\ \\ \\ \\ \\text{II: } \\sum_{n=2}^\\infty \\frac{n + 2 \\ln n}{2 n^4} \\ \\ \\ \\ \\text{III: } \\sum_{n=2}^\\infty \\frac{n^n}{n!} \\]\n\\begin{multiplechoice}\n\\choice{only I}\n\\choice[correct]{only I and II}\n\\choice{only I and III}\n\\choice{only II}\n\\choice{only II and III}\n\\choice{only III}\n\\end{multiplechoice}\n\n\\end{question}\n\n\n\\end{document}\n", "meta": {"hexsha": "806b4e5c360b2af2f57c8343663e549e44f70dfc", "size": 6054, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "series/24finalpractice.tex", "max_stars_repo_name": "ptgressman/math104", "max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "series/24finalpractice.tex", "max_issues_repo_name": "ptgressman/math104", "max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "series/24finalpractice.tex", "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.1057692308, "max_line_length": 163, "alphanum_fraction": 0.7043277172, "num_tokens": 2004, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190477, "lm_q2_score": 0.8519528094861981, "lm_q1q2_score": 0.5757195503894974}}
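As a worked reference for the first exercise above (an editorial sketch, not part of the original exercise set): for $\sum_{n=1}^\infty \frac{(-1)^n}{n - e^{-n}}$, write $b_n = \frac{1}{n - e^{-n}}$, and note that $n - e^{-n}$ is positive and increasing for $n \geq 1$, so the $b_n$ are positive and decreasing. Then

\begin{align*}
b_n > 0,\quad b_{n+1} < b_n,\quad \lim_{n\to\infty} b_n &= 0
  &&\Rightarrow\ \sum_{n=1}^\infty (-1)^n b_n \text{ converges by the alternating series test,}\\
\lim_{n\to\infty} \frac{b_n}{1/n} = \lim_{n\to\infty} \frac{n}{n - e^{-n}} &= 1
  &&\Rightarrow\ \sum_{n=1}^\infty b_n \text{ diverges by limit comparison with } \textstyle\sum \frac{1}{n},
\end{align*}

so the series converges conditionally, matching the marked choice.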
{"text": "\\section{Discriminant Analysis II}\n\n\\subsection{Rank Reduced Linear Discriminant Analysis}\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis}\n\n\t\\begin{center}\n\t\t\\begin{minipage}{0.8\\textwidth}\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item[{\\bf Problem}:] How to choose an $L$-dimensional subspace with $L=K-1$ that is good for  LDA? \\\\[.5cm]\n\t\t\t\t\\item[{\\bf Idea}:] Maximize the spread of the $L$-dimensional projection of centroids. \\\\[.5cm]\n\t\t\t\t\\item[{\\bf Solution}:] Principal component analysis, i.\\,e.\\ \\\\\n\t\t\t\t      we compute the principal components of the \\\\\n\t\t\t\t      covariance matrix of the mean vectors\n\t\t\t\t      $$\\vec{\\mu}'_y = \\phi(\\vec{\\mu}_y) \\in \\real^{K-1},$$\n\t\t\t\t      where $y=1,2,\\dots, K$.\n\t\t\t\\end{itemize}\n\t\t\\end{minipage}\n\t\\end{center}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis \\cont}\n\n\tIn \\structure{Principle Component Analysis (PCA)} we compute a linear mapping $\\mat\\Phi \\in \\real^{L \\times (K-1)}$ that\n\tresults in the highest spread of projected features:\n\t\\pause\n\n\t\\begin{displaymath}\n\t\t\\mat{\\Phi}^* = \\argmax_{\\mat{\\Phi}} \\left(\n\t\t\\frac{1}{K}\n\t\t\\sum_{y=1}^K (\\mat{\\Phi}\\vec {\\mu}'_y - \\mat {\\Phi}\\bar{\\mat \\mu}' )^T\n\t\t(\\mat{\\Phi}\\vec {\\mu}'_y - \\mat {\\Phi}\\bar{\\mat \\mu}' ) +\n\t\t\\sum_{i=1}^{L} \\lambda_i (\\| \\mat{\\Phi}_i \\|_2^2 - 1)\n\t\t\\right)\n\t\\end{displaymath}\n\t% \n\twhere we applied the \\structure{Lagrange multiplier} method to allow for the maximization of the spread subject to\n\t%\n\t\\begin{displaymath}\n\t\t\\| \\mat {\\Phi}_i \\|_2^2 = 1 ~, \\quad i = 1, \\ldots, K-1 ~.\n\t\\end{displaymath}\n\t%\n\tHere $\\| \\mat{\\Phi}_i \\|_2^2$ denotes the $L_2$ norm of the $i$-th row vector of $\\mat{\\Phi}$.\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Excursus: Optimization with Constraints}\n\n\t\\structure{Lagrange Multipliers:} simple example\n\n\t\\begin{columns}\n\t\t\\column{.5\\linewidth}\n\n\t\t\\begin{itemize}\n\t\t\t\\item Find the maximum of \\\\\n\t\t\t      $f(x,y) = x +y$ \\\\[.25cm]\n\t\t\t      \\onslide<2->{\n\t\t\t\\item constraint: $x^2 + y^2 = 1$ \\\\[.25cm]\n\t\t\t      }\n\t\t\t      \\onslide<4->{\n\t\t\t\\item Lagrange function: \\\\\n\t\t\t      $L(x,y,\\lambda) = x + y + \\lambda(x^2 + y^2 - 1)$ \\\\[0.25cm]\n\t\t\t      }\n\t\t\t      \\onslide<5->{\n\t\t\t\\item Set the partial derivatives to zero: \\\\\n\t\t\t      $\\frac{\\partial L}{\\partial x} =\n\t\t\t\t      \\frac{\\partial L}{\\partial y} =\n\t\t\t\t      \\frac{\\partial L}{\\partial \\lambda}\n\t\t\t\t      \\stackrel{!}{=} 0$\n\t\t\t      }\n\t\t\\end{itemize}\n\n\t\t\\column{.5\\linewidth}\n\n\t\t\\begin{center}\n\t\t\t\\resizebox{\\linewidth}{!}{\n\t\t\t\t\\alt<3->{\n\t\t\t\t\t\\input{\\texfigdir/lagrange_multiplier3.pstex_t}\n\t\t\t\t}{\\alt<2>{\n\t\t\t\t\t\t\\input{\\texfigdir/lagrange_multiplier2.pstex_t}\n\t\t\t\t\t}{\n\t\t\t\t\t\t\\input{\\texfigdir/lagrange_multiplier1.pstex_t}\n\t\t\t\t\t}}\n\t\t\t}\n\t\t\\end{center}\n\t\\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis \\cont}\n\n\tWe need some facts from matrix calculus: \\\\\n\t\\point \\structure{\\url{www.matrixcookbook.com}}%\n\t\\footnote{website currently off-line -- you can still download it \\point\\href{http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=3274}{\\structure{here}}} \\\\\n\t%  
\n\t\\spread\n\n\t\\begin{enumerate}\n\t\t\\item Let $\\vec\\mu$ denote the mean and $\\mat \\Sigma$ the covariance matrix of a \\\\\n\t\t      random vector $\\vec x$, then we get:\n\t\t      \\begin{displaymath}\n\t\t\t      E[(\\mat A \\vec x)^T(\\mat A \\vec x)] =\n\t\t\t      {\\mbox{tr}}(\\mat A \\mat \\Sigma \\mat A^T) + (\\mat A \\vec \\mu)^T(\\mat A\\vec \\mu)\n\t\t      \\end{displaymath}\n\t\t      \\pause\n\t\t\\item The matrix derivative is:\n\t\t      \\begin{displaymath}\n\t\t\t      \\frac{\\partial \\mbox{tr}( \\mat X \\mat B \\mat X^T)}{\\partial \\mat X} = \\mat X\\mat B^T +\\mat X\\mat B\n\t\t      \\end{displaymath}\n\t\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis \\cont}\n\n\tFor our \\structure{optimization problem} this implies:\n\n\t\\begin{eqnarray*}\n\t\t& & \\frac{\\partial}{\\partial{\\mat{\\Phi}}}\n\t\t\\left\\{\n\t\t\\frac{1}{K} \\sum_{y=1}^K (\\mat{\\Phi} \\vec{\\mu}'_y - \\mat{\\Phi} \\bar{\\vec{\\mu}}' )^T\n\t\t(\\mat{\\Phi} \\vec{\\mu}'_y - \\mat{\\Phi} \\bar{\\vec{\\mu}}' )  +\n\t\t\\sum_{i=1}^{L} \\lambda_i (\\| \\mat{\\Phi}_i \\|^2_2 - 1)\n\t\t\\right\\} \\\\[.5cm] \\pause\n\t\t& & = \\quad \\frac{\\partial}{\\partial{\\mat{\\Phi}}}\n\t\t\\left\\{\n\t\t\\frac{1}{K} \\sum_{y=1}^K \\left( \\mat\\Phi (\\vec{\\mu}'_y - \\bar{\\vec{\\mu}}') \\right)^T\n\t\t\\left( \\mat\\Phi (\\vec{\\mu}'_y - \\bar{\\vec{\\mu}}') \\right) +\n\t\t\\sum_{i=1}^{L} \\lambda_i (\\| \\mat {\\Phi}_i \\|^2_2 - 1)\n\t\t\\right\\} \\\\[.5cm] \\pause\n\t\t& & = \\quad \\frac{\\partial}{\\partial{\\mat{\\Phi}}}\n\t\t\\left\\{\n\t\t\\mbox{tr} (\\mat\\Phi \\ \\mat \\Sigma_{\\mbox{inter}} \\ \\mat\\Phi^T) +\n\t\t\\sum_{i=1}^{L} \\lambda_i (\\| \\mat {\\Phi}_i \\|_2^2 - 1)\n\t\t\\right\\}\n\t\t\\stackrel{!}{=} \\mat 0.\n\t\\end{eqnarray*}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis \\cont}\n\n\tNow we compute the \\structure{partial derivatives}:\n\n\t\\begin{displaymath}\n\t\t\\frac{\\partial}{\\partial{\\mat{\\Phi}}}\n\t\t\\left\\{\n\t\t\\mbox{tr} (\\mat \\Phi \\ \\mat \\Sigma_{\\mbox{inter}}  \\ \\mat \\Phi^T) + \\sum_{i=1}^{L} \\lambda_i (\\| \\mat {\\Phi}_i \\|^2_2 - 1)  \\right\\} =\n\t\t2 \\mat \\Phi\\mat  \\Sigma_{\\mbox{inter}} +2 \\mat{\\lambda} \\mat \\Phi = \\mat 0\n\t\\end{displaymath}\n\t\\pspread\n\n\tThis results in the eigenvalue and eigenvector problem:\n\t\\begin{displaymath}\n\t\t\\mat{\\Sigma}_{\\mbox{inter}} \\mat \\Phi^T = \\mat{\\lambda}' \\mat \\Phi^T\n\t\\end{displaymath}\n\t\\pspread\n\n\t\\structure{Note:} \\\\\n\tIn original PCA, the transform $\\mat \\Phi$ maximizes the overall spread using the covariance matrix of all features:\n\n\t\\begin{displaymath}\n\t\t\\mat{\\Sigma} \\mat{\\Phi}^T = \\mat{\\lambda}' \\mat \\Phi^T\n\t\\end{displaymath}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Rank Reduced Linear Discriminant Analysis \\cont}\n\n\t\\begin{algorithmic}\n\t\t\\STATE  \\structure{Input:} training data:  $S = \\{ (\\vec x_1, y_1), (\\vec x_2, y_2), (\\vec x_3, y_3), \\dots, (\\vec x_m, y_m) \\}$ \\\\[.3cm]\n\t\t\\STATE 1. Compute the covariance matrix of transformed mean vectors\n\t\t$$\\widehat{\\mat{\\Sigma}}_{\\mbox{inter}} =\n\t\t\t\\frac{1}{K} \\sum_{y=1}^K (\\vec \\mu'_y - \\bar{\\vec \\mu}')(\\vec \\mu'_y - \\bar{\\vec \\mu}')^T,$$\n\t\t\\quad\\,where $\\bar{\\vec \\mu}' = \\frac{1}{K} \\cdot \\sum_{y=1}^{K}\\vec\\mu'_y$. \\\\[.3cm]\n\t\t\\pause\n\t\t\\STATE 2. 
Compute the $L$ eigenvectors of the covariance matrix belonging \\\\\n\t\t\\quad\\,to the largest eigenvalues. \\\\[.3cm]\n\t\t\\pause\n\t\t\\STATE 3. The eigenvectors are the rows of the mapping $\\mat\\Phi$ from the \\\\\n\t\t\\quad\\,$(K-1)$- to the $L$-dimensional feature space. \\\\[.3cm]\n\t\t\\STATE  \\structure{Output:} matrix $\\mat\\Phi$\n\t\\end{algorithmic}\n\\end{frame}\n\n\\input{nextTime.tex}\n\n\\subsection{Fisher Transform}\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform}\n\n\tThe described method to compute the LDA mapping is not the original derivation.\\\\[.5cm]\n\n\t\\structure{Original method}\n\n\t\\begin{center}\n\t\t\\resizebox{0.6\\linewidth}{!}{\n\t\t\t\\input{\\texfigdir/lda.pstex_t}\n\t\t}\n\t\\end{center}\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform}\n\n\tThe described method to compute the LDA mapping is not the original derivation.\\\\[.5cm]\n\n\t\\structure{Original method}\n\n\t\\begin{columns}\n\t\t\\column{.65\\linewidth}\n\n\t\t\\small \\vspace{-.5cm}\n\t\t\\begin{itemize}\n\t\t\t\\item Project samples $\\vec{x}_i$ onto a straight line with direction $\\vec{r}$, $\\lVert \\vec{r} \\rVert_2 = 1$:\n\t\t\t      \\begin{displaymath}\n\t\t\t\t      \\tilde{x}_i = \\vec{x}_i^T \\vec{r}\n\t\t\t      \\end{displaymath}\n\t\t\t      \\onslide<2->{\n\t\t\t      \\vspace{-.5cm}\n\t\t\t\\item Maximize the ratio of the between-class scatter and the within-class scatter:\n\t\t\t      \\begin{displaymath}\n\t\t\t\t      \\vec{r}^* =\n\t\t\t\t      \\argmax_{\\vec{r}} J(\\vec{r}) =\n\t\t\t\t      \\argmax_{\\vec{r}} \\frac{|\\tilde{\\mu}_1 - \\tilde{\\mu}_2|^2}{\\tilde{s}_1^2 + \\tilde{s}_2^2}\n\t\t\t      \\end{displaymath}\n\t\t\t      }\n\t\t\t      \\onslide<3->{\n\t\t\t      \\vspace{-.5cm}\n\t\t\t\\item Classify by applying a threshold to $\\tilde{x}_i$\n\t\t\t      }\n\t\t\\end{itemize}\n\n\t\t\\column{.35\\linewidth}\n\n\t\t\\onslide<1->{\n\t\t\t\\resizebox{\\linewidth}{!}{\n\t\t\t\t\\input{\\texfigdir/lda.pstex_t}\n\t\t\t}\n\t\t}\n\t\\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform \\cont}\n\n\t\\structure{Finding $\\vec{r}^*$}\n\n\t\\begin{enumerate}\n\t\t\\item Mean and scatter matrix for each class:\n\t\t      {\\small\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\vec{\\mu}_k &=& \\frac{1}{m_k} \\sum_{\\substack{~i = 1\\\\ y_i = k}}^{m_k} \\vec{x}_i \\\\\n\t\t\t      \\mat{S}_k   &=& \\sum_{\\substack{~i = 1\\\\ y_i = k}}^{m_k} (\\vec{x}_i - \\vec{\\mu}_k) (\\vec{x}_i - \\vec{\\mu}_k)^T\n\t\t      \\end{eqnarray*}\n\t\t      }\n\t\t\\item \\structure{Within-class scatter} matrix:\n\t\t      {\\small\n\t\t      \\begin{displaymath}\n\t\t\t      \\structure{\\mat{S}_\\mathsf{W}} = \\mat{S}_1 + \\mat{S}_2\n\t\t      \\end{displaymath}\n\t\t      }\n\t\t\\item \\structure{Between-class scatter} matrix:\n\t\t      {\\small\n\t\t      \\begin{displaymath}\n\t\t\t      \\structure{\\mat{S}_\\mathsf{B}} = (\\vec{\\mu}_1 - \\vec{\\mu}_2) (\\vec{\\mu}_1 - \\vec{\\mu}_2)^T\n\t\t      \\end{displaymath}\n\t\t      }\n\t\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform \\cont}\n\n\t\\structure{Finding $\\vec{r}^*$}\n\n\t\\begin{enumerate}\n\t\t\\setcounter{enumi}{3}\n\t\t\\item Expressing $\\tilde{\\mu}_k$ and $\\tilde{s}_k^2$ of the projected samples in terms of $\\vec{\\mu}_k$ and $\\mat{S}_k$:\n\t\t      {\\small\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\onslide<1->{\n\t\t\t\t      \\tilde{\\mu}_k &=&\n\t\t\t\t      \\onslide<1->{ \\frac{1}{m_k} \\sum_{\\substack{~i = 1\\\\ y_i 
= k}}^{m_k} \\tilde{x}_i }\n\t\t\t\t      \\onslide<2->{ = \\frac{1}{m_k} \\sum_{\\substack{~i = 1\\\\ y_i = k}}^{m_k} \\vec{r}^T  \\vec{x}_i }\n\t\t\t\t      \\onslide<3->{ = \\vec{r}^T \\vec{\\mu}_k } \\\\\n\t\t\t      }\n\t\t\t      \\onslide<1->{\n\t\t\t\t      \\tilde{s}_k^2 &=&\n\t\t\t\t      \\onslide<1->{ \\sum_{\\substack{~i = 1\\\\ y_i = k}}^{m_k} (\\tilde{x}_i - \\tilde{\\mu}_k)^2 }\n\t\t\t\t      \\onslide<4->{ = \\sum_{\\substack{~i = 1\\\\ y_i = k}}^{m_k} (\\vec{r}^T \\vec{x}_i - \\vec{r}^T \\vec{\\mu}_k)^2 }\n\t\t\t\t      \\onslide<5->{ = \\vec{r}^T \\mat{S}_k \\vec{r} }\n\t\t\t      }\n\t\t      \\end{eqnarray*}\n\t\t      }\n\t\t      \\onslide<6->{\n\t\t\\item Plug it into $J(\\vec{r})$:\n\t\t      {\\small\n\t\t      \\begin{displaymath}\n\t\t\t      J(\\vec{r})\n\t\t\t      = \\frac{|\\tilde{\\mu}_1 - \\tilde{\\mu}_2|^2}{\\tilde{s}_1^2 + \\tilde{s}_2^2}\n\t\t\t      \\onslide<7->{ = \\frac{\\vec{r}^T \\mat{S}_B \\vec{r}}{\\vec{r}^T \\mat{S}_W \\vec{r}}}\n\t\t      \\end{displaymath}\n\t\t      }\n\t\t      }\n\t\t      \\onslide<7->{\n\t\t\\item[] This is known as the \\structure{Generalized Rayleigh Quotient}.\n\t\t      }\n\t\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform \\cont}\n\n\t\\structure{Finding $\\vec{r}^*$}\n\n\t\\begin{enumerate}\n\t\t\\setcounter{enumi}{5}\n\t\t\\item Maximizing the Generalized Rayleigh Quotient is equivalent to solving the following \\structure{generalized eigenvalue problem}:\n\t\t      {\\small\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\mat{S}_\\mathsf{B} \\vec{r}^* &=& \\lambda \\mat{S}_\\mathsf{W} \\vec{r}^* \\\\\n\t\t\t      \\mat{S}_\\mathsf{W}^{-1} \\mat{S}_\\mathsf{B} \\vec{r}^* &=& \\lambda \\vec{r}^* \\\\\n\t\t      \\end{eqnarray*}\n\t\t      }\n\t\t\\item \\structure{Note}: $\\mat{S}_\\mathsf{B} \\vec{r}^*$ is always in the direction of $\\vec{\\mu}_1 - \\vec{\\mu}_2$; \\\\\n\t\t      no need to compute the eigenvalues and eigenvectors of $\\mat{S}_\\mathsf{W}^{-1} \\mat{S}_\\mathsf{B}$! 
\\\\[.25cm]\n\t\t      The direction of $\\vec{r}^*$ is:\n\t\t      {\\small\n\t\t      \\begin{displaymath}\n\t\t\t      \\vec{r}^* = \\mat{S}_\\mathsf{W}^{-1} (\\vec{\\mu}_1 - \\vec{\\mu}_2)\n\t\t      \\end{displaymath}\n\t\t      }\n\t\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform \\cont}\n\n\t\\begin{itemize}\n\t\t\\item Usually the total linear mapping for LDA is computed dimension by dimension through the maximization of the\n\t\t      Rayleigh ratio for each projection axis $\\vec a^*$:\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\vec a^* &=& \\argmax_{\\vec a}\\ \\frac{\\vec a^T \\mat \\Sigma_{\\mbox{inter}} \\vec a}{\\vec a^T \\mat \\Sigma_{\\mbox{intra}} \\vec a}\n\t\t      \\end{eqnarray*}\n\t\t      \\pause\n\t\t\\item The solution is a generalized eigenvalue problem: $\\vec a$ is the eigenvector of\n\t\t      $$\\mat \\Sigma_{\\mbox{intra}} ^{-1} \\mat \\Sigma_{\\mbox{inter}}$$\n\t\t      that belongs to the largest eigenvalue.\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Fisher Transform \\cont}\n\n\tIn the literature the optimization problem is mostly rewritten:\n\t\\spread\n\n\t\\begin{itemize}\n\t\t\\item Equivalent constrained optimization problem\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\mbox{maximize:}   & & \\vec r^T \\mat \\Sigma_{\\mbox{inter}}\\vec r\\\\\n\t\t\t      \\mbox{subject to:} & & \\vec r^T \\mat \\Sigma_{\\mbox{intra}} \\vec r =1\n\t\t      \\end{eqnarray*}\n\t\t      \\spread\n\t\t\\item Lagrange multiplier method ($\\lambda>0$):\n\t\t      \\begin{displaymath}\n\t\t\t      \\vec{r}^* = \\argmax_{\\vec r}\n\t\t\t      \\left\\{\n\t\t\t      \\vec r^T \\mat \\Sigma_{\\mbox{inter}} \\vec r\n\t\t\t      - \\lambda{\\vec r^T \\mat \\Sigma_{\\mbox{intra}} \\vec r}\n\t\t\t      \\right\\}\n\t\t      \\end{displaymath}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsection{Dimensionality Reduction}\n\n\\begin{frame}\n\n\t\\frametitle{Dimensionality Reduction}\n\n\tA few comments on dimensionality reduction:\n\n\t\\begin{itemize}\n\t\t\\item PCA does not require a classified set of feature vectors \\\\\n\t\t      (in contrast to LDA). \\\\[.15cm]\n\t\t\\item PCA-transformed features are approximately normally distributed (central limit theorem). \\\\[.15cm]\n\t\t\\item Components of PCA-transformed features are mutually uncorrelated. \\\\[.15cm]\n\t\t\\item There exist many other methods for dimensionality reduction, e.\\,g.,\n\t\t      Sammon transform, independent component analysis. \\\\[.15cm]\n\t\t\\item Usually the estimation of such transforms is computationally prohibitive. 
\\\\[.15cm]\n\t\t\\item \\structure{Johnson-Lindenstrauss lemma}: If vectors are projected onto a randomly selected subspace of suitably high dimension, then the distances between the vectors are approximately preserved.\n\t\\end{itemize}\n\\end{frame}\n\n\\input{nextTime.tex}\n\n\\subsection{LDA application: adidas\\_1 }\n\n\\begin{frame}\n\t\\frametitle{The adidas\\_1: A Digital Revolution in Sports}\n\n\t\\vspace{.5cm}\n\t\\begin{columns}[c,onlytextwidth]\n\t\t\\begin{column}{0.4\\textwidth}\n\t\t\t\\includegraphics[width=1.25\\linewidth]{\\jpgdir/adidas_1_components.\\jpg}\n\t\t\\end{column}\\begin{column}{0.6\\textwidth}\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item For the first time ever, sport specific information can be processed with a running shoe\n\t\t\t\t\\item A built-in microprocessor permits an adaptation of the shoe to the prevailing run situation\n\t\t\t\t      \\begin{itemize}\n\t\t\t\t\t      \\item Running speed\n\t\t\t\t\t      \\item Runner fatigue\n\t\t\t\t\t      \\item Running surface\n\t\t\t\t\t      \\item \\ldots\n\t\t\t\t      \\end{itemize}\n\t\t\t\t\\item Pattern Recognition at the LME provides the algorithms used for recognition\n\t\t\t\\end{itemize}\n\t\t\\end{column}\n\t\\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{The adidas\\_1: System Overview}\n\n\t\\vspace{.5cm}\n\n\t\\begin{columns}[c, onlytextwidth]\n\t\t\\begin{column}{0.5\\textwidth}\n\t\t\t\\structure{Important parts of the adidas\\_1:}\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item A cushioning element (01) with a magnetic system for compression measurement \\\\\n\t\t\t\t      \\begin{itemize}\n\t\t\t\t\t      \\item $f_\\mathsf{sample} = 1\\mbox{kHz}$\n\t\t\t\t\t      \\item resolution $\\Delta d = 0.1\\,\\mbox{mm}$\n\t\t\t\t      \\end{itemize}\n\t\t\t\t\\item A microcontroller and user interface (02)\n\t\t\t\t      \\begin{itemize}\n\t\t\t\t\t      \\item $f_\\mathsf{clock} = 24\\,\\mbox{MHz}$\n\t\t\t\t\t      \\item 8 kB program memory\n\t\t\t\t      \\end{itemize}\n\t\t\t\t\\item A motor for cushioning adaptation \\\\\n\t\t\t\t      using a cable system (03)\n\t\t\t\\end{itemize}\n\t\t\\end{column}\\begin{column}{0.3\\textwidth}\n\t\t\t\\includegraphics[width=\\linewidth]{\\jpgdir/adidas_shoe_exploded.\\jpg}\n\t\t\\end{column}\n\t\\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{The adidas\\_1: Classification Framework Requirements}\n\n\t\\begin{itemize}\n\t\t\\item Only a few, simple features can be calculated in real time \\\\[.3cm]\n\t\t\\item The classification system has to be efficient, \\\\\n\t\t      but computationally undemanding \\\\[.3cm]\n\t\t\\item LDA classifier yields a linear decision boundary and can be implemented using a polynomial of order one with weights $\\alpha_i$ and features $x_i$ \\\\[.3cm]\n\t\t\\item In the two class case:\n\t\t      \\begin{displaymath}\n\t\t\t      \\mbox{sgn}(\\vec\\alpha^T\\vec x+\\alpha_0) =\n\t\t\t      \\mbox{sgn} \\left ( \\alpha_1 x_1 + \\alpha_2 x_2 + \\ldots + \\alpha_d x_d + \\alpha_0 \\right )\n\t\t      \\end{displaymath}\n\t\t      yields decision for either class.\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Classification System: Computed Features}\n\n\t\\begin{itemize}\n\t\t\\item 19 features initially computed for classification experiments\n\t\t\\item Feature selection: 3 features selected for 
implementation\n\t\\end{itemize}\n\n\t\\begin{center}\n\t\t\\includegraphics[width=.7\\linewidth]{\\jpgdir/adidas_features.\\jpg}\n\t\\end{center}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Classification System: LDA Classifier Visualization}\n\n\t\\begin{itemize}\n\t\t\\item Visualization of the decision region for hard/soft surface classification \\\\\n\t\t      in 3D feature space\n\t\\end{itemize}\n\n\t\\begin{center}\n\t\t\\includegraphics[width=.45\\linewidth]{\\jpgdir/adidas_decision_region.\\jpg}\n\t\\end{center}\n\\end{frame}\n\n\n\\iftrue\n\n\t\\subsection{PCA application: Shape Modeling}\n\n\t\\begin{frame}\n\t\t\\frametitle{Shape Modeling}\n\n\t\t\\begin{itemize}\n\t\t\t\\item Each shape is represented by $n$ sampled surface points.\n\t\t\t\\item Surface points are denoted by $\\vec p_k\\in \\real^3$, $k=1,2, \\dots, n$.\n\t\t\t\\item The set of surface points is encoded in a single vector (shape vector):\n\t\t\t      \\begin{displaymath}\n\t\t\t\t      \\vec x = \\left(\n\t\t\t\t      \\begin{array}{c}\n\t\t\t\t\t\t      \\vec p_{1} \\\\\n\t\t\t\t\t\t      \\vec p_{2} \\\\\n\t\t\t\t\t\t      \\vdots     \\\\\n\t\t\t\t\t\t      \\vec p_{n}\n\t\t\t\t\t      \\end{array}\n\t\t\t\t      \\right)\n\t\t\t\t      = \\left(\n\t\t\t\t      \\begin{array}{c}\n\t\t\t\t\t\t      p_{1,1} \\\\ p_{1,2} \\\\ p_{1,3} \\\\ p_{2,1}\\\\ \\vdots \\\\ p_{n,3}\n\t\t\t\t\t      \\end{array}\n\t\t\t\t      \\right)\n\t\t\t\t      \\in \\real^{3n}\n\t\t\t      \\end{displaymath}\n\t\t\t      with $\\vec p_k=(p_{k,1}, p_{k,2},  p_{k,3})^T$.\n\t\t\\end{itemize}\n\t\\end{frame}\n\n\n\t\\begin{frame}\n\t\t\\frametitle{Shape Modeling \\cont}\n\n\t\tWe have $m$ shapes, thus $m$ shape vectors, and can generate the landmark configuration matrix:\n\n\t\t\\begin{displaymath}\n\t\t\t\\mat L =  [\\vec x_1, \\vec x_2, \\dots, \\vec x_m]\n\t\t\\end{displaymath}\n\t\t\\pause\n\n\t\tNow we can compute the PCA of the columns of $\\mat L$ and get the spectral decomposition of the associated covariance matrix\n\n\t\t\\begin{displaymath}\n\t\t\t\\mat \\Sigma_L = \\sum_i \\lambda_i \\vec e_i \\vec e_i^T\n\t\t\\end{displaymath}\n\n\t\twhere $\\lambda_i$ denote the eigenvalues and $\\vec e_i$ the eigenvectors.\n\t\\end{frame}\n\n\n\t\\begin{frame}\n\t\t\\frametitle{Shape Modeling \\cont}\n\n\t\tShape vectors $\\vec x^*$ within the eigenvector space can be computed using linear combinations of $l$ eigenvectors:\n\n\t\t\\begin{displaymath}\n\t\t\t\\vec x^* = \\bar{\\vec x} + \\sum_{i=1}^l a_i \\vec e_i\n\t\t\\end{displaymath}\n\n\t\twhere $\\bar{\\vec x}$ denotes just the mean of the column vectors of $\\mat L$ and $a_i\\in \\real$ are the shape parameters.\n\t\\end{frame}\n\n\n\t\\begin{frame}\n\t\t\\frametitle{Application of PCA: Segmentation}\n\n\t\t\\begin{itemize}\n\t\t\t\\item Lung, liver or kidneys\n\t\t\t\\item Generate and train an Active Shape Model (ASM) for such organs; \\\\\n\t\t\t      requires training data and ``gold standard'' segmentation\n\t\t\t\\item Once \\structure{point correspondences} are found, the different variations within the training data can be easily approximated by its Eigenvectors\n\t\t\\end{itemize}\n\n\\begin{figure}\n  \\copyrightbox[b]{\n      \\makebox[.3\\linewidth]{\n        \\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006}{\n          \\includegraphics[height=.2\\textwidth]{\\psdir/ASMFirstEigenMode.\\ps}\\hspace{2cm}\n        }\n      }\n      \\makebox[.3\\linewidth]{\n        
\\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006}{\n          \\includegraphics[height=.2\\textwidth]{\\psdir/ASMSecondEigenMode.\\ps}\n        }\n      }\n  }{M. Spiegel, D. Hahn, V. Daum, J. Wasza, J. Hornegger. \\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006} {``Segmentation of kidneys using a new active shape model generation technique based on non-rigid image registration'', Computerized Medical Imaging and Graphics 2009}}\n\\caption{Variation of the mean kidney shape along the first and second Eigenvector.}\n  \\end{figure}\n\n\t\\end{frame}\n\n\n\t\\begin{frame}\n\t\t\\frametitle{Application of PCA: Segmentation \\cont}\n\t\t\n\\begin{figure}\n  \\copyrightbox[b]{\n      \\makebox[.5\\linewidth]{\n        \\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006}{\n          \\includegraphics[height=.45\\textwidth]{\\psdir/ASMSegmentationExample.\\ps}\n        }\n      }\n  }{M. Spiegel, D. Hahn, V. Daum, J. Wasza, J. Hornegger. \\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006} {``Segmentation of kidneys using a new active shape model generation technique based on non-rigid image registration'', Computerized Medical Imaging and Graphics 2009}}\n\\caption{Iterative segmentation progress of a right kidney using an ASM.}\n  \\end{figure}\n\n\t\\end{frame}\n\n\n\t\\begin{frame}\n\t\t\\frametitle{Application of PCA: Segmentation \\cont}\n\n\n\\begin{figure}\n  \\copyrightbox[b]{\n      \\makebox[.5\\linewidth]{\n        \\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006}{\n          \\includegraphics[height=.45\\textwidth]{\\jpgdir/ASM3DResults.\\jpg}\n        }\n      }\n  }{M. Spiegel, D. Hahn, V. Daum, J. Wasza, J. Hornegger. 
\\href{https://www.sciencedirect.com/science/article/abs/pii/S0895611108001006} {``Segmentation of kidneys using a new active shape model generation technique based on non-rigid image registration'', Computerized Medical Imaging and Graphics 2009}}\n\\caption{3-D view of the segmentation result.}\n  \\end{figure}\n\t\\end{frame}\n\n\\fi\n\n\\input{nextTime.tex}\n\n\\subsection{Notes on Regression}\n\n\\subsubsection{Linear Regression}\n\n\\begin{frame}\n\t\\frametitle{Notes on Regression}\n\n\tIn the two class situation, we set $y\\in \\{-1,+1\\}$ and use the decision rule:\n\n\t\\begin{displaymath}\n\t\ty^* = \\mbox{sgn}(\\vec \\alpha^T\\vec x + \\alpha_0).\n\t\\end{displaymath}\n\n\tWe can compute the linear decision boundary simply by least-square estimation.\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Linear Regression \\cont}\n\n\tFor a given set of learning data we use matrix notation:\n\n\t\\begin{displaymath}\n\t\t\\mat{X} = \\left(\n\t\t\\begin{array}{cc}\n\t\t\t\t\\vec x_1^T & 1      \\\\\n\t\t\t\t\\vec x_2^T & 1      \\\\\n\t\t\t\t\\vdots     & \\vdots \\\\\n\t\t\t\t\\vec x_m^T & 1\n\t\t\t\\end{array}\n\t\t\\right) \\in \\real^{m\\times (d+1)} \\qquad \\vec y \\quad = \\quad\n\t\t\\left(\n\t\t\\begin{array}{c}\n\t\t\t\ty_1    \\\\\n\t\t\t\ty_2    \\\\\n\t\t\t\t\\vdots \\\\\n\t\t\t\ty_m    \\\\\n\t\t\t\\end{array}\n\t\t\\right) \\in \\real^m\n\t\\end{displaymath}\n\n\tand define\n\n\t\\begin{displaymath}\n\t\t\\vec \\theta = { \\vec \\alpha\\choose \\alpha_0}\n\t\\end{displaymath}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Linear Regression \\cont}\n\n\tOne option to estimate $\\vec \\theta$ is to solve the linear regression problem:\n\n\t\\begin{displaymath}\n\t\t\\hat{\\vec \\theta} = \\argmin_{\\vec \\theta}\\| \\mat X \\vec \\theta - \\vec y\\|_{2}^2\n\t\\end{displaymath}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Linear Regression \\cont}\n\n\tThe \\structure{least-square estimator} for the $L_2$-norm:\n\n\t\\begin{eqnarray*}\n\t\t\\widehat{\\vec \\theta} &=& \\argmin_{\\vec\\theta} \\ \\sum_{i=1}^m (\\vec \\theta^T\\vec x_i-y_i)^2\\\\\n\t\t&=& \\argmin_{\\vec\\theta} \\ (\\mat{X}\\vec \\theta-\\vec y)^T(\\mat{X}\\vec \\theta-\\vec y)\n\t\\end{eqnarray*}\n\t\\pause\n\n\tand thus we get\n\n\t\\begin{displaymath}\n\t\t\\widehat{\\vec \\theta} = (\\mat{X}^T\\mat{X})^{-1}\\mat{X}^T\\vec y\n\t\\end{displaymath}\n\n\tif the column vectors of $\\mat X$ are linearly independent.\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Linear Regression \\cont}\n\n\t\\structure{A few obvious questions:} \\\\[.5cm]\n\n\t\\begin{itemize}\n\t\t\\item Why should we prefer the Euclidean norm ($L_2$-norm)? \\\\[.5cm]\n\t\t\\item Will different norms lead to different results? \\\\[.5cm]\n\t\t\\item Which norm and decision boundary is the best one? 
\\\\[.5cm]\n\t\t\\item Can we incorporate prior knowledge in linear regression?\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsubsection{Ridge Regression}\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression}\n\n\tIn \\structure{\\emph{ridge regression}} (also called \\emph{regularized regression}) we extend the \\\\\n\tobjective function by an additional term constraining the Euclidean length \\\\\n\tof the parameter vector $\\vec{\\theta}$:\n\n\t\\begin{itemize}\n\t\t\\item It is linear regression with the log-likelihood penalized by $-\\lambda\\vec \\theta^T \\vec \\theta$ \\\\\n\t\t      where $\\lambda > 0$, or alternatively \\\\[.5cm]\n\t\t\\item It is extended by a prior distribution on the parameter vector $\\vec\\theta$\n\t\t      \\begin{displaymath}\n\t\t\t      \\vec \\theta = {\\cal N}(\\vec 0,\\mbox{diag}( \\tau^2) )\n\t\t      \\end{displaymath}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression \\cont}\n\n\t\\structure{Regularized regression:}\n\n\t\\begin{eqnarray*}\n\t\t\\widehat{\\vec\\theta}\n\t\t&=& \\argmin_{\\vec\\theta} \\| \\mat{X} \\vec\\theta - \\vec y \\|^2_2 + \\lambda \\|\\vec\\theta\\|^2_2 \\\\ \\pause\n\t\t&=& \\argmin_{\\vec\\theta} \\ (\\mat{X} \\vec\\theta - \\vec y)^T (\\mat{X} \\vec\\theta-\\vec y) +\n\t\t\\lambda \\cdot \\vec\\theta^T\\vec \\theta\n\t\\end{eqnarray*}\n\t\\pause\n\n\tand thus we get the estimator:\n\n\t\\begin{displaymath}\n\t\t\\widehat{\\vec\\theta} = (\\mat{X}^T\\mat{X}+\\lambda \\mat{I})^{-1}\\mat{X}^T\\vec y\n\t\\end{displaymath}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression \\cont}\n\n\t\\structure{Notes:}\n\n\t\\begin{itemize}\n\t\t\\item The term $\\lambda \\mat{I}$ adds a positive constant $\\lambda$ to the diagonal elements.\n\t\t\\item The problem is non-singular even if $\\mat{X}^T \\mat{X}$ is not of full rank. \\\\\n\t\t\\item This was the main motivation of ridge regression when it was first introduced in statistics in 1970. \\\\[.25cm] \\pause\n\t\t\\item The ridge solutions are not equivariant under scaling of the inputs: \\\\\n\t\t      \\structure{standardize} the input before solving the regression problem! \\\\[.25cm] \\pause\n\t\t\\item The intercept $\\alpha_0$ should not be penalized:\n\t\t      \\begin{itemize}\n\t\t\t      \\item \\structure{Center} the input $\\vec{x}_i$.\n\t\t\t      \\item Estimate $\\alpha_0$ by $\\bar{y} = \\frac{1}{m} \\sum_{i=1}^m y_i$.\n\t\t\t      \\item Estimate the remaining coefficients by a ridge regression without  intercept. 
\\\\\n\t\t\t            Matrix $\\vec{X}$ has $d$ columns (instead of $d+1$).\n\t\t      \\end{itemize}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression \\cont}\n\n\t\\structure{Statistical approach:} parameters $\\alpha_j$ are random variables \\\\[.5cm]\n\n\t\\begin{itemize}\n\t\t\\item Suppose\n\t\t      \\begin{displaymath}\n\t\t\t      \\forall 1 \\le i \\le m: \\quad y_i\n\t\t\t      \\sim \\mathcal{N}(\\underbrace{\\vec{\\alpha}^T \\vec{x}_i + \\alpha_0}_\\text{mean}, \\underbrace{\\sigma^2}_\\text{variance})\n\t\t\t      \\onslide<2->{\n\t\t\t\t      = \\frac{1}{\\sqrt{2 \\pi} \\cdot \\sigma} e^{-\\frac{1}{2} \\cdot \\frac{(y_i - \\vec{\\alpha}^T \\vec{x}_i - \\alpha_0)^2}{\\sigma^2}}\n\t\t\t      }\n\t\t      \\end{displaymath}\n\t\t\\item Parameters $\\alpha_j$ are assumed to be independent of each other.\n\t\t\\item Prior distribution of $\\alpha_j$:\n\t\t      \\begin{displaymath}\n\t\t\t      \\forall 1 \\le j \\le d: \\quad \\alpha_j\n\t\t\t      \\sim \\mathcal{N}(0, \\tau^2)\n\t\t\t      \\onslide<3->{\n\t\t\t\t      = \\frac{1}{\\sqrt{2 \\pi} \\cdot \\tau} e^{-\\frac{1}{2} \\cdot \\frac{(\\alpha_j - 0)^2}{\\tau^2}}\n\t\t\t      }\n\t\t      \\end{displaymath}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression \\cont}\n\n\t\\begin{itemize}\n\t\t\\item \\structure{Maximizing the posterior probability of $\\vec{\\alpha}$ for given $\\sigma^2$ and $\\tau^2$:}\n\t\t      \\footnotesize\n\t\t      \\begin{eqnarray*}\n\t\t\t      \\argmax_{\\vec{\\alpha}} \\prod_{i=1}^m p(\\vec{\\alpha}|y_i) \\pause\n\t\t\t      &=& \\argmax_{\\vec{\\alpha}} \\Bigg\\{ \\prod_{i=1}^m p(\\vec{\\alpha}) \\cdot p(y_i | \\vec{\\alpha}) \\Bigg\\} \\\\ \\pause\n\t\t\t      &=& \\argmax_{\\vec{\\alpha}} \\Bigg\\{ \\prod_{j=1}^d p(\\alpha_j) \\cdot \\prod_{i=1}^m p(y_i | \\vec{\\alpha}) \\Bigg\\} \\\\ \\pause\n\t\t\t      &=& \\argmax_{\\vec{\\alpha}} \\Bigg\\{ \\sum_{j=1}^d \\log p(\\alpha_j) + \\sum_{i=1}^m \\log p(y_i | \\vec{\\alpha}) \\Bigg\\} \\\\ \\pause\n\t\t\t      &=& \\argmax_{\\vec{\\alpha}} \\Bigg\\{ - \\frac{1}{2 \\tau^2} \\sum_{j=1}^d \\alpha_j^2 - \\frac{1}{2 \\sigma^2} \\sum_{i=1}^m (y_i - \\vec{\\alpha}^T \\vec{x} - \\alpha_0)^2 \\Bigg\\} \\\\ \\pause\n\t\t\t      &=& \\argmin_{\\vec{\\alpha}} \\Bigg\\{ \\frac{\\sigma^2}{\\tau^2} \\sum_{j=1}^d \\alpha_j^2 + \\sum_{i=1}^m (y_i - \\vec{\\alpha}^T \\vec{x} - \\alpha_0)^2 \\Bigg\\}\n\t\t      \\end{eqnarray*}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Ridge Regression \\cont}\n\n\t\\begin{itemize}\n\t\t\\item \\structure{Maximizing the posterior probability of $\\vec{\\alpha}$ for given $\\sigma^2$ and $\\tau^2$:} \\\\[.25cm]\n\t\t      {\\footnotesize\n\t\t      \\begin{displaymath}\n\t\t\t      \\argmax_{\\vec{\\alpha}} \\prod_{i=1}^m p(\\vec{\\alpha}|y_i)\n\t\t\t      = \\argmin_{\\vec{\\alpha}} \\Bigg\\{ \\lambda \\vec{\\alpha}^T \\vec{\\alpha} + (\\mat{X}\\vec{\\alpha} - \\vec{y})^T (\\mat{X}\\vec{\\alpha} - \\vec{y}) \\Bigg\\}\n\t\t\t      \\quad \\text{with} \\quad\n\t\t\t      \\lambda = \\frac{\\sigma^2}{\\tau^2}\n\t\t      \\end{displaymath}\n\t\t      }\n\t\t      \\vspace{.25cm} \\pause\n\t\t\\item \\structure{The ridge estimate is the mode of the posterior pdf!}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsubsection{Lasso}\n\n\\begin{frame}\n\t\\frametitle{Lasso}\n\tRegularized regression using a mixture of $L_2$- and $L_1$-norm, \\\\\n\twhere the residual is penalized using the $L_2$-norm and \\\\\n\tthe regularizer uses the 
$L_1$-norm:\n\n\t\\begin{displaymath}\n\t\t\\widehat{\\vec \\theta} = \\argmin_{\\vec\\theta} \\|\\mat{X}\\vec \\theta-\\vec y\\|^2_2 + \\lambda \\cdot \\|\\vec \\theta\\|_1\n\t\\end{displaymath}\n\t\\pause\n\n\tThe lasso is used to compute a sparse solution of the system of \\\\\n\tlinear equations, i.\\,e.\\ the number of non-zero elements in $\\vec \\theta$ shall be small.\n\\end{frame}\n\n\n\\subsection{Lessons Learned}\n\n\\begin{frame}\n\t\\frametitle{Lessons Learned}\n\n\t\\begin{itemize}\n\t\t\\item Principal component analysis \\\\[.5cm]\n\t\t\\item Linear discriminant analysis with and without dimension reduction \\\\[.5cm]\n\t\t\\item Both PCA and LDA relate to an eigenvalue/eigenvector problem \\\\[.5cm]\n\t\t\\item Alternative formulation of LDA using the Fisher transform \\\\[.5cm]\n\t\t\\item Linear and ridge regression for classification\n\t\\end{itemize}\n\\end{frame}\n\n\\input{nextTime.tex}\n\n\\subsection{Further Readings}\n\n\\begin{frame}\n\t\\frametitle{Further Readings}\n\n\tYou are required to be familiar with \\structure{linear algebra} and \\structure{matrix calculus}:\n\n\t\\begin{itemize}\n\t\t\\item SIAM's best-selling book of the last decade:\\\\[.15cm]\n\t\t      Lloyd N. Trefethen, David Bau III: \\\\\n\t\t      \\structure{Numerical Linear Algebra}, \\\\\n\t\t      SIAM, Philadelphia, 1997. \\\\[0.15cm]\n\t\t\\item Everything about matrix derivatives and related problems is described in the Matrix Cookbook:\n\t\t      \\structure{\\url{http://www.matrixcookbook.com}} \\\\[.3cm]\n\t\\end{itemize}\n\n\tBasics on \\structure{discriminant analysis} can be found in\n\n\t\\begin{itemize}\n\t\t\\item T. Hastie, R. Tibshirani, and J. Friedman: \\\\\n\t\t      \\structure{The Elements of Statistical Learning --}\\\\\n\t\t      \\structure{ Data Mining, Inference, and Prediction},\\\\\n\t\t      2nd edition, Springer, New York, 2009.\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Further Readings \\cont}\n\n\t\\structure{Details on the adidas\\_1 shoe and the implemented classifier:}\n\n\t\\begin{itemize}\n\t\t\\item B.~Eskofier, F.~H\\\"onig, P.~K\\\"uhner: \\\\\n\t\t      \\structure{Classification of Perceived Running Fatigue in Digital Sports}, \\\\\n\t\t      Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, Florida, U.\\,S.\\,A., 2008\n\t\\end{itemize}\n\t\\spread\n\n\t\\structure{Details on the shape modeling of kidneys and its application to segmentation:}\n\n\t\\begin{itemize}\n\t\t\\item M.~Spiegel, D.~Hahn, V.~Daum, J.~Wasza, J.~Hornegger: \\\\\n\t\t      \\structure{Segmentation of kidneys using a new active shape model generation technique based on non-rigid image registration}, \\\\\n\t\t      Computerized Medical Imaging and Graphics 2009 33(1):29-39\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsection{Comprehensive Questions}\n\n\\begin{frame}\n\t\\frametitle{Comprehensive Questions}\n\n\t\\begin{itemize}\n\t\t\\item What is the difference between PCA and LDA? \\\\[1cm]\n\t\t\\item How can PCA and LDA be combined to achieve a high rank reduction? \\\\[1cm]\n\t\t\\item Write down a straightforward objective function for linear regression! 
\\\\[1cm]\n\t\t\\item What happens if we replace the $L_2$-norm by another norm?\n\t\\end{itemize}\n\\end{frame}\n\n", "meta": {"hexsha": "1f9e4f14e64cbcaa84a837b7f070073ac23d6aaa", "size": 31899, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "07_discriminant_analysis_2.tex", "max_stars_repo_name": "akmaier/pr-slides", "max_stars_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2021-01-11T07:27:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-31T19:21:31.000Z", "max_issues_repo_path": "07_discriminant_analysis_2.tex", "max_issues_repo_name": "akmaier/pr-slides", "max_issues_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_discriminant_analysis_2.tex", "max_forks_repo_name": "akmaier/pr-slides", "max_forks_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-21T06:06:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-18T18:47:28.000Z", "avg_line_length": 33.0902489627, "max_line_length": 305, "alphanum_fraction": 0.6374181009, "num_tokens": 10805, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519528170040852, "lm_q2_score": 0.6757645944891559, "lm_q1q2_score": 0.5757195499066597}}
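The slides above reduce both classifiers to two closed-form computations: the two-class Fisher direction $\vec r^* = \mat S_\mathsf{W}^{-1}(\vec\mu_1 - \vec\mu_2)$ and the ridge estimator $\widehat{\vec\theta} = (\mat X^T\mat X + \lambda\mat I)^{-1}\mat X^T\vec y$. Below is a minimal NumPy sketch of both; the synthetic Gaussian data, the value $\lambda = 0.1$, and all variable names are our own assumptions — only the two formulas themselves come from the slides.

```python
# Minimal NumPy sketch (editorial, not from the slides) of two formulas
# derived above: the two-class Fisher direction r* = S_W^{-1} (mu1 - mu2)
# and the ridge estimator theta = (X^T X + lambda*I)^{-1} X^T y.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class Gaussian data (our own choice of means).
X1 = rng.normal(loc=[0.0, 0.0], size=(100, 2))   # class 1
X2 = rng.normal(loc=[3.0, 1.0], size=(100, 2))   # class 2

# Per-class means and within-class scatter S_W = S_1 + S_2.
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S_W = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

# Fisher direction; samples are classified by thresholding x^T r.
r = np.linalg.solve(S_W, mu1 - mu2)

# Ridge regression on labels y in {-1, +1}; inputs are centered so the
# intercept alpha_0 can be estimated separately as mean(y), as the
# ridge notes above recommend.
X = np.vstack([X1, X2])
y = np.concatenate([np.full(100, -1.0), np.full(100, 1.0)])
Xc = X - X.mean(axis=0)
lam = 0.1
theta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(2), Xc.T @ y)
alpha0 = y.mean()

# Decision rule sgn(alpha^T x + alpha_0), as in the adidas_1 frames.
pred = np.sign(Xc @ theta + alpha0)
print("training accuracy:", (pred == y).mean())
```

Note the design point the slides make implicitly: neither formula needs an explicit matrix inverse; `np.linalg.solve` on the corresponding linear system is the numerically preferable route.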
{"text": "\\documentclass[aps,pra,notitlepage,amsmath,amssymb,letterpaper,12pt]{revtex4-1}\n\\usepackage{amsthm}\n\\usepackage{graphicx}\n%  Above uses the Americal Physical Society template for Physical Review A\n%  as a reasonable and fully-featured default template\n \n%  Below define helpful commands to set up problem environments easily\n\\newenvironment{problem}[2][Problem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{solution}{\\begin{proof}[Solution]}{\\end{proof}}\n \n% --------------------------------------------------------------\n%                   Document Begins Here\n% --------------------------------------------------------------\n\n% In what follows, you can easily change text to see what happens to the document\n% For example, replacing the text \"Document X\" inside the \"\\title{}\" command will\n% change the document title\n \n\\begin{document}\n \n\\title{Double Well Potential}\n\\author{Gabriella Nutt}\n\\affiliation{PHYS 220, Schmid College of Science and Technology, Chapman University}\n\\date{\\today}\n\n\\maketitle\n\n\\section{Double Well Potential} % Specify main sections this way\n\n% x.yz is the problem number\n\\begin{problem}{1} \nConsider a ball of mass $m$ with horizontal coordinate $x$ rolling in a double-well potential $V(x) = x^4/4 - x^2/2$. (This is sometimes called the \"sombrero\" potential. Plot it to see why, for $x\\in[-1.5,1.5]$.) This potential produces a force $f_{\\text{hat}}(x) = -V'(x) = -x^3 + x$ on the rolling ball. Suppose the ball also has slight friction, so experiences a drag force $f_{\\text{drag}}(\\dot{x}) = -\\nu \\dot{x}$. With these forces we thus expect the ball to roll down the sides of the sombrero potential and settle in one of the two stable wells. However, this is boring, so instead we are going to shake the hat back and force periodically with a driving force $f_{\\text{drive}}(t) = F\\cos(\\omega t)$. For small driving forces $F$, this should simply jiggle the ball back and forth at the bottom of one of the stable wells. Our task will be to explore what happens for larger driving forces $F$.\nNote that according to Newton's second law, the ball must satisfy the equation of motion: $$m\\ddot{x} = f_{\\text{hat}}(x) + f_{\\text{drag}}(\\dot{x}) + f_{\\text{drive}}(t) = x - x^3 - \\nu \\dot{x} + F\\cos(\\omega t)$$ This system is known as a periodically driven nonlinear \"Duffing oscillator,\" and can be split into a set of two coupled first-order ODEs: $$\\dot{x}(t) = y(t)$$ $$m\\dot{y}(t) = -\\nu y(t) + x(t) - x^3(t) + F\\cos(\\omega t)$$ Your task will be to solve these equations numerically, for $m=1$, $\\nu = 0.25$, and $\\omega = 1$. Use a time-step size of $\\Delta t = 0.001$ with the 4th-order Runge-Kutta integration method to keep sufficient numerical precision. Implement your code in a python file sombrero.py with suitable test functions in testsombrero.py.\n\\end{problem}\n \n\\begin{solution} %You can also use proof in place of solution\nIn this we solved the differential equations above and observed their behavior on different graphs as their we change the values of F, x(0), and y(0).\nFrom look at all of these graphs it can be found that changing the initial conditions drastically change the behavior. At first it seems like when you change the conditions that the oscillating motion becomes more inconsistent. To further explore the behavior we can strobe the graph at different instances and turn it into a gif. 
This gives us better insight into the motion of a point. It's very fun. When doing this you actually find the oscillating motion isn't as inconsistent as as it seems when you only observe one moment.\n% Use align environments for equations. The \\\\ is a newline character. The & is the alignment character.\n% Using align* or \\nonumber on each line removes equation numbers\n\\end{solution}\n\n\\subsection{Here is a scatter plot!} % Specify subsections and subsubsections this way\n\nFigures can be included easily.\n\n\\begin{figure}[h!] % h forces the figure to be placed here, in the text\n  \\includegraphics[width=0.4\\textwidth]{frame99.png}  % if pdflatex is used, jpg, pdf, and png are permitted\n  \\caption{Strobe of duffing oscillator.}\n  \\label{fig:figlabel}\n\\end{figure}\n\nThis text should be below the figure unless \\LaTeX  decides that a different layout works better.\n \n% Repeat as needed\n \n\\end{document}\n", "meta": {"hexsha": "380998a7860e60990b8398a5a289b06f073620ad", "size": 4335, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX/sombrero.tex", "max_stars_repo_name": "chapman-phys220-2018f/cw13-coolkidzzz", "max_stars_repo_head_hexsha": "f783e5ea60e8651f52fc78014a9625f61b1d66b0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeX/sombrero.tex", "max_issues_repo_name": "chapman-phys220-2018f/cw13-coolkidzzz", "max_issues_repo_head_hexsha": "f783e5ea60e8651f52fc78014a9625f61b1d66b0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeX/sombrero.tex", "max_forks_repo_name": "chapman-phys220-2018f/cw13-coolkidzzz", "max_forks_repo_head_hexsha": "f783e5ea60e8651f52fc78014a9625f61b1d66b0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.4745762712, "max_line_length": 903, "alphanum_fraction": 0.7213379469, "num_tokens": 1136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8519528076067262, "lm_q1q2_score": 0.5757195491194167}}
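Since the preceding problem explicitly asks for a Python file `sombrero.py` using 4th-order Runge-Kutta with $\Delta t = 0.001$, here is a minimal sketch of what that integrator might look like. The function names, the driving amplitude $F = 0.4$, and the initial conditions are our own assumptions; only $m$, $\nu$, $\omega$, $\Delta t$, and the RK4 scheme are fixed by the problem statement.

```python
# A minimal editorial sketch of an RK4 integrator for the driven Duffing
# oscillator: x' = y, m*y' = -nu*y + x - x**3 + F*cos(omega*t).
import numpy as np

def duffing(t, u, F, m=1.0, nu=0.25, omega=1.0):
    """Right-hand side of the coupled first-order system, u = (x, y)."""
    x, y = u
    return np.array([y, (-nu * y + x - x**3 + F * np.cos(omega * t)) / m])

def rk4(f, u0, t0, t1, dt=0.001, **params):
    """Classic 4th-order Runge-Kutta on a fixed time grid."""
    ts = np.arange(t0, t1 + dt, dt)
    us = np.empty((len(ts), len(u0)))
    us[0] = u0
    for i in range(len(ts) - 1):
        t, u = ts[i], us[i]
        k1 = f(t, u, **params)
        k2 = f(t + dt / 2, u + dt * k1 / 2, **params)
        k3 = f(t + dt / 2, u + dt * k2 / 2, **params)
        k4 = f(t + dt, u + dt * k3, **params)
        us[i + 1] = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return ts, us

if __name__ == "__main__":
    # Release the ball at rest from x = 0.5 and drive with F = 0.4.
    ts, us = rk4(duffing, np.array([0.5, 0.0]), 0.0, 50.0, F=0.4)
    print(us[-1])  # final (x, y) after 50 s of driving
```

Strobing the trajectory, as the solution describes, then amounts to sampling `us` once per driving period $2\pi/\omega$ and plotting each sampled $(x, y)$ pair as one animation frame.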
{"text": "In this appendix some \\gls{ir} background is presented, as core data storage and querying of our prototype\\index{prototype} is based on.\n\n\\subsubsection{TF-IDF \\& Apache Lucene}\n\nThe equations~\\ref{eq:ir-tf-1} to~\\ref{eq:ir-tf-idf} lay out some of the most fundamental concepts behind \\gls{ir}.\n\n\\begin{equation}\n\\text{tf}(t, d) = f_{t,d}\n\\label{eq:ir-tf-1}\n\\end{equation}\n\n\\begin{equation}\nf_{t,d} =\n\\begin{cases}\n  1 \\quad \\text{if } t \\text{ occurs in } d \\\\\n  0 \\quad \\text{otherwise} \\\\\n\\end{cases}\n\\label{eq:ir-tf-2}\n\\end{equation}\n\n\\begin{equation}\n\\text{idf}(t, D) = \\log\\frac{N}{|\\{d \\in D : t \\in d\\}|}\n\\label{eq:ir-idf}\n\\end{equation}\n\n\\begin{equation}\n\\text{tf-idf}(t, d, D) = \\text{tf}(t, d) \\times \\text{idf}(t, D)\n\\label{eq:ir-tf-idf}\n\\end{equation}\n\nIt is \\emph{\\gls{tf-idf}}\\footnote{\\textcolor{blue}{\\href{https://en.wikipedia.org/wiki/Tf-idf}{en.wikipedia.org/wiki/Tf-idf}}}.\nBasically, this is an effective as well as efficient way to score term query search hits based on term occurrences in a corpus of documents.\nThe illustrated formula set uses a simple boolean frequency.\n\nA popular and high-quality search engine implementation in Java is \\emph{Apache Lucene\\footnote{\\textcolor{blue}{\\href{https://lucene.apache.org/}{lucene.apache.org}}}}.\nIt is also the foundation on which Elasticsearch is built upon~\\cite{Gormley2015}.\n", "meta": {"hexsha": "be0d27f344cb7e90d525b2055c0c7b938dd634c6", "size": 1351, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/source/thesis/chapters/appendix-b.tex", "max_stars_repo_name": "robi42/tm", "max_stars_repo_head_hexsha": "cfd135a9b59d9aacbfe62cfa7638302663f2103b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-06-18T21:09:46.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-02T16:47:32.000Z", "max_issues_repo_path": "doc/source/thesis/chapters/appendix-b.tex", "max_issues_repo_name": "robi42/tm", "max_issues_repo_head_hexsha": "cfd135a9b59d9aacbfe62cfa7638302663f2103b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/source/thesis/chapters/appendix-b.tex", "max_forks_repo_name": "robi42/tm", "max_forks_repo_head_hexsha": "cfd135a9b59d9aacbfe62cfa7638302663f2103b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5135135135, "max_line_length": 169, "alphanum_fraction": 0.7120651369, "num_tokens": 453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8519527944504228, "lm_q2_score": 0.6757645879592642, "lm_q1q2_score": 0.5757195291025337}}
{"text": "\\documentclass[revision-guide.tex]{subfiles}\n\n\n%% Current Author: SEQ\n\\setcounter{chapter}{10}\n\\begin{document}\n\\tikzset{\n  every pin/.style={\n      font=\\scriptsize,\n      pin distance=4ex},\n  small dot/.style={\n      fill=gray,\n      circle,\n      scale=0.1}\n}\n\\chapter{Oscillations}\n\\section{Simple Harmonic Motion}\n\\spec{Recall the condition for simple harmonic motion and hence identify situations in which simple harmonic motion will occur}\n\n\nAny oscillation where the acceleration is proportional to the displacement from an equilibrium position and in the opposite direction to the displacement is described as Simple Harmonic Motion (SHM).\n\nThese two conditions can be expressed in equation form as:\n\\[ a \\propto -x \\]\nNotice the minus sign to signify that the acceleration is in the opposite direction to the displacement.\n\nWe will mainly be looking at idealised springs and pendulums to understand the maths behind it but the real reason for studying SHM is that it is an excellent approximation for many of the oscillations that we come across in the natural world.\n\nSo while the topic is introduced with some rather prosaic examples this provides the building blocks to understanding earthquakes or how atoms vibrate in lattices as well as any musical instrument you can think of.\n\n%\\pagebreak\n\nSo without further ado let's look at our first and perhaps simplest example.\n\nA mass on a spring on a smooth horizontal surface.\n\\vspace{0.2in}\n\n\n\\begin{tikzpicture}\n\\node[rectangle,scale = 1.8, fill=blue,inner sep=2.5mm, text = white] (a) at (3.75,3) {$m$};\n\n\n\\draw[decoration={aspect=0.3, segment length=2mm, amplitude=3mm,coil},decorate] (0,3) -- (a); \n\n\\fill [pattern = north east lines] (0,2) rectangle (6,2.4);\n\\fill [pattern = north east lines] (-.5,2) rectangle (0,4);\n\\draw[thick] (0,2.4) -- (6,2.4);\n\\draw[thick] (0,2.4) -- (0,4);\n\n\\end{tikzpicture}\n\nWhat happens if we move the mass by a distance x and then let it go?\n\n\\begin{tikzpicture}\n\\node[rectangle,scale = 1.8, fill=blue,inner sep=2.5mm, text = white] (a) at (5,3) {$m$};\n\n\n\n\n\\draw[decoration={aspect=0.3, segment length=3mm, amplitude=3mm,coil},decorate] (0,3) -- (a); \n\n\\fill [pattern = north east lines] (0,2) rectangle (6,2.4);\n\\fill [pattern = north east lines] (-.5,2) rectangle (0,4);\n\\draw[thick] (0,2.4) -- (6,2.4);\n\\draw[thick] (0,2.4) -- (0,4);\n\\draw[thick, |-latex] (3,3.5) -- (4.25,3.5);\n\\node at (3.5,3.7) {$x$};\n\\end{tikzpicture}\n\n\nBefore going into any mathematical detail we can think about what will happen to the mass.\n\\begin{itemize}\n\\item We know that there is now a force from the stretched string which is pulling the mass back to its original position. \n\\item This will cause the mass to accelerate to the left. 
\n\\item Once the mass reaches its starting point it will overshoot and start compressing the spring.\n\\item This creates a resultant force which will again restore the mass to its original position.\n\\item This will cause the mass to accelerate to the right.\n\\item The mass will overshoot again and the cycle will continue.\n\\end{itemize}\nNote that the direction of the acceleration is always opposite to the displacement.\n\nThe mathematical treatment for this starts very simply with a basic knowledge of Hooke's law.\n\nThe force on a stretched or compressed spring is given by \n\\[\nF=-kx\n\\]\nwhere k is the spring constant.\n\nNewton's second law tells us that \n\\[\nF=ma\n\\]\n\nPutting these together we get\n\\[\na = \\dfrac{F}{m}=-\\dfrac{k}{m}x\n\\]\nBecause k and m are constant this satisfies the condition for SHM,\n\\[\na \\propto -x\n\\]\n\n\\begin{example}\nA  2kg mass attached to a horizontal spring of spring constant 0.3Nm$^{-1}$ is stretched by 10cm and then released.\n\nFind the maximum acceleration of the mass\n\n\n\t\\vspace{1cm}\n\n\t\t\n\n\t\t\\textbf{Answer}\n\nUsing the formula $a = -\\dfrac{k}{m}x$\nThe maximum acceleration will occur when x is a maximum so \n\\[\nacceleration_{max} = - \\dfrac{0.3}{2} \\times 0.1\n\\]\n\\[\nacceleration_{max} = -0.015 ms^{-2}\n\\]\n\n\\end{example}\n\n\n\n\n\n\n\\spec{* show that the condition for simple harmonic motion leads to a differential equation of the form\n\\[ \n\\dfrac{d^2x}{dt^2} = - \\omega^2 x\n\\]\nand that \n\\[ \nx = A cos \\omega t \n\\] \nis a solution to this equation}\n\n\nAcceleration is the rate of change of velocity \n\n\\[\na = \\dfrac{dv}{dt} \n\\]\n\nand velocity is the rate of change of displacement\n\n\\[\nv = \\dfrac{dx}{dt} \n\\]\n\nPutting these two together gives the condition for simple harmonic motion as.\n\n\\[\n\\dfrac{d^2x}{dt^2}=-\\omega ^2 x\n\\]\n\nwhere $\\omega ^2$ is a (strange choice of) constant.\n\nThis is a second order differential equation and to solve it we need to find a function which when differentiated twice gives us the negative of the original function.\n\nBy inspection we can see that functions with $\\sin \\omega t$ and $\\cos \\omega t$ are both possible solutions.\n\ne.g.\n\n\\begin{align*} \nx &= A cos \\omega t \\\\\n\\dfrac{dx}{dt} &= - A\\omega sin \\omega t \\\\\n\\dfrac{d^2x}{dt^2} &= -A\\omega^2 cos \\omega t = -\\omega ^2 x \\\\\n\\end{align*}\n\nSo $x= Acos \\omega t$ is a solution (using sin is equivalent but with a phase offset.)\n\nLooking at this function we can see that it will give us a cosine wave with an Amplitude of A and a time period of $ \\dfrac{2 \\pi}{\\omega}$.\n\nThis gives us a frequency on $\\dfrac{1}{T} = \\dfrac{\\omega}{2 \\pi}$\n\nSo $\\omega$ is the angular frequency (which explains our strange choice of constant).\n\n\\begin{tikzpicture}\n\t%curve\n\t\t\\draw (0.5,1) cos (1.5,0) sin (2.5,-1) cos (3.5,0) sin (4.5,1) cos\n\t\t\t(5.5,0) sin (6.5,-1) cos (7.5,0) sin (8.5,1) cos (9.5,0)\n\t\t\tsin (10.5,-1) cos (11.5,0);\n\t% zero crossing\n\t\t\\draw[dotted] (0.5,0) -- (11.5,0);\n\t%axis + labels\n\t\t\\draw [<->] (0.5,1) -- (0.5,-1);\n\t\t\\draw [->] (0.5,0) -- (11.5,0);\n\t\t\\node at (11.7,0) {$t$};\n\t\t\\node at (0.5,1.2) {$+A$};\n\t\t\\node at (0.5,-1.2) {$-A$};\n\t\t\\node at (0.3,0) {$0$};\n\t%Time period description\n\t\t\\draw [<->] (4.5,1.15) -- (8.5,1.15);\n\t\t\\node at (6.5,1.6) {Time period, $\\dfrac{2 \\pi}{\\omega}$};\n\t%Amplitude description\n\t\t\\draw [<->] (6.5,0) -- (6.5,-1);\n\t\t%to stop line overlap\n\t\t\\draw 
[fill=white,ultra thick,white] (7,-0.3) rectangle (9,-0.7); \n\t\t\\node at (8, -0.5) {Amplitude, $A$};\n\\end{tikzpicture}\n\nWe now have a general expression for the displacement $x$ of the object after a time $t$.\n\nThe value of $\\omega$ will be given by the physical properties of the system.\n\ne.g. for the mass on a spring we looked at earlier:\n\n\\begin{align*} \nF &= -kx && \\text {Force on mass by Hooke's Law}\\\\\na &= -\\dfrac{k}{m} x  && \\text{gives acceleration}\\\\\n\\dfrac{d^2x}{dt^2} &= -\\omega ^2 x && \\text{comparing with condition for SHM} \\\\\n\\omega ^2 &= \\dfrac{k}{m} && \\text{gives value for constant } \\omega \\\\\n\\omega &= \\sqrt{\\dfrac{k}{m}}\n\\end{align*}\n\n\\spec{* use differential calculus to derive the expressions\n\\[\nv=-A \\omega \\sin \\omega t \n\\] and \n\\[a = -A\\omega^2 \\cos \\omega t \n\\] \nfor simple harmonic motion.}\n\n\n\nThis is achieved very simply by differentiating our expression for displacement with respect to time, as velocity $= \\dfrac{dx}{dt}$ and acceleration $= \\dfrac{dv}{dt}$.\n\nRemember that the derivative of $\\cos$ is $-\\sin$.\n\nSo \n\\[\nx = A \\cos \\omega t\n\\]\n\\[\nv = \\dfrac{dx}{dt}= -A \\omega \\sin \\omega t\n\\]\n\\[\na = \\dfrac{dv}{dt}= -A \\omega^2 \\cos \\omega t\n\\]\n\n\n\\spec{ *recall and use the expressions $x = A \\cos\\omega t, v = -A \\omega \\sin \\omega t, a = -A \\omega^2 \\cos \\omega t$ and $F = -m \\omega^2x$ to solve problems}\n\nWe can use these equations to solve problems by identifying the variables and substituting.\n\nThe last equation is a combination of $a = -\\omega^2 x$ and $F = ma$.\n\n\\begin{example}\n\n\nA mass attached to a spring is set into motion on a smooth horizontal surface. The amplitude of oscillation is 15 mm and it takes 5 seconds to perform 20 oscillations.\n\nCalculate the time period and frequency.\n\nHence calculate the velocity and acceleration after 9.3 seconds.\n\n\t\\vspace{1cm}\n    \n\\textbf{Answer}\n\nThe first part doesn't require any knowledge of SHM. 
If there are 20 oscillations in 5 seconds this means $\\dfrac{20}{5} = 4$ oscillations in one second.\n\nSo $f=4Hz$\n\nand $T = \\dfrac{1}{f} = 0.25s$\n\nNow that we have the frequency we can calculate the angular frequency $\\omega = 2 \\pi f = 8 \\pi$\n\nWe are given the amplitude $A$ as 15 mm and the time is 9.3 seconds so we can now use our SHM formulae (remembering to work in radians).\n\nSo for the velocity\n\n\\[\nv=-A \\omega \\sin \\omega t \n\\]\n\n\\[\nv = -15 \\times 8 \\pi \\sin (8 \\pi \\times 9.3)\n\\]\n\\[\nv = -360 mms^{-1} = -0.36ms^{-1}\n\\]\n\nand for acceleration\n\n\\[\na = \\dfrac{dv}{dt}= -A \\omega^2 \\cos \\omega t\n\\]\n\n\\[\na = -15 (8 \\pi)^2 \\cos (8 \\pi \\times 9.3)\n\\]\n\n\\[\na = -2900 mms^{-2} = -2.9ms^{-2}\n\\]\n\n\n\\end{example}\n\n\n\n\\spec{ recall and use $T = \\dfrac{2 \\pi}{\\omega}$\nas applied to a simple harmonic oscillator}\n\nAngular frequency $\\omega$ is how many radians per second the oscillator goes through with one complete cycle being $2 \\pi$ radians.\n\nSo $\\dfrac{1}{\\omega}$ is how many seconds it takes to complete one radian.\n\nand so the time for one complete cycle is $T = \\dfrac{2 \\pi}{\\omega}$.\n\n\n\\spec{ understand the phase differences between displacement, velocity and acceleration in simple harmonic\nmotion}\n\nPlotting the equations \\[x = A \\cos \\omega t\n\\]\n\\[\nv =  -A \\omega \\sin \\omega t\n\\]\n\\[\na = -A \\omega^2 \\cos \\omega t\n\\]\nonto a graph gives the following.\n\n\n\n\n\\begin{tikzpicture}\n\t%curve 1\n\t\t\\draw[blue] (0.5,0) sin (1.5,-1) cos (2.5,0) sin (3.5,1) cos (4.5,0) \n\t\t\tsin (5.5,-1) cos (6.5,0) sin (7.5,1) cos (8.5,0) sin (9.5,-1) \n\t\t\tcos (10.5,0) sin (11.5,1);\n\t%curve 2\n    \\draw[red] (0.5,-1) cos (1.5,0) sin (2.5,1) cos (3.5,0) sin (4.5,-1) cos\n\t\t\t(5.5,0) sin (6.5,1) cos (7.5,0) sin (8.5,-1) cos (9.5,0)\n\t\t\tsin (10.5,1) cos (11.5,0);\n    %curve 3\n        \t\\draw (0.5,1) cos (1.5,0) sin (2.5,-1) cos (3.5,0) sin (4.5,1) cos\n\t\t\t(5.5,0) sin (6.5,-1) cos (7.5,0) sin (8.5,1) cos (9.5,0)\n\t\t\tsin (10.5,-1) cos (11.5,0);\n\t% zero crossing\n\t\t\\draw[dotted] (0.5,0) -- (11.5,0);\n\t%axis + labels\n\t\t\\draw [<->] (0.5,1) -- (0.5,-1);\n\t\t\\draw [->] (0.5,0) -- (11.5,0);\n\t\t\\node at (11.7,0) {$t$};\n\t\t\\node[rotate=90] at (0.2,0) {displacement};\n        \\node[rotate=90, blue] at (-0.12,0) {velocity};\n        \\node[rotate=90, red] at (-0.5,0) {acceleration};\n\t%Phase difference description\n\t\t\\draw [<->] (0.5,1.2) -- (2.5,1.2);\n\t\t\\node at (1.5,1.5) {$180^\\circ$};\n\\end{tikzpicture}\n\n\n\nNote the phase shift between displacement, velocity and acceleration.\n\nVelocity is $\\dfrac{\\pi}{2}$ or $90^\\circ$ out of phase with displacement.\n\nAcceleration is $\\pi$ or $180^\\circ$ out of phase with displacement.\n\nSo when acceleration or displacement has a maximum magnitude, the velocity is zero.\n\n\n\n\n\\spec{ *show that the total energy of an undamped simple harmonic system is given by $E = \\dfrac{1}{2}mA^2\\omega^2$ and\nrecognise that this is a constant}\n\nFor an undamped system no energy is lost to the surroundings so the total energy is conserved.\n\nThe total energy will take the form of potential energy (P.E.) and kinetic energy (K.E.).\n\nConsidering our mass on a spring at maximum displacement the spring will be at maximum extension and the mass will be stationary. So the total energy will be in the form of P.E.\n\nSimilarly when the displacement is zero all the energy will be K.E.\n\nAt any point in between the total energy will be a mixture of K.E. and P.E. 
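\nIn fact we can check this directly (a supplementary calculation, not required by the syllabus): substituting $x = A \\cos \\omega t$ and $v = -A \\omega \\sin \\omega t$, and using $k = m\\omega^2$ for the mass-spring system,\n\\[\nP.E. + K.E. = \\dfrac{1}{2}m\\omega^2A^2\\cos^2 \\omega t + \\dfrac{1}{2}mA^2\\omega^2\\sin^2 \\omega t = \\dfrac{1}{2}mA^2\\omega^2\n\\]\nsince $\\sin^2 \\omega t + \\cos^2 \\omega t = 1$.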
\n\nAs the total energy is the same at any point we can pick what's easiest to derive an equation for it.\n\nSo at zero displacement:\n\n\\begin{align*} \nT.E. &= K.E. \\\\\n&= \\dfrac{1}{2}mv_{max}^2 \\\\\n&= \\dfrac{1}{2}mA^2\\omega^2\n\\end{align*}\n\nsince the maximum speed, from $v = -A \\omega \\sin \\omega t$, is $v_{max} = A\\omega$.\n\n\\spec{ recall and use $E = \\dfrac{1}{2}mA^2\\omega^2$ to solve problems}\n\nWe can look at an example problem to see how this works.\n\n\n\\begin{example}\n\nA mass of 8 kg is attached to a spring with a spring constant of $5 Nm^{-1}$ on a smooth horizontal surface.\nIt is pulled back a distance of 20 cm and then released. \n\nCalculate the total energy of the system, the time period of oscillation and the velocity of the mass when the displacement is 10 cm.\n\n\n\\end{example}\n\n\n\n\n\n    \n\\textbf{Answer}\n\nOne way to approach this question is by using our equation for the energy stored in a spring.\n\n\\[\nP.E. = \\dfrac{1}{2}kx^2\n\\]\n\n\\[\nP.E. = \\dfrac{1}{2} \\times 5 \\times 0.2^2 = 0.1J\n\\]\n\nWe also know that all the energy at this point is in the form of potential energy as the mass is not moving, so this is the total energy of the system and will not change.\n\n\n\nFor the time period we can first calculate the angular frequency using our energy equation.\n\n\\begin{align*} \nE = \\dfrac{1}{2}mA^2\\omega^2 &= 0.1 J\\\\\n\\therefore \\omega &= \\sqrt{\\dfrac{2 E}{m A^2}}\\\\\n&= \\sqrt{\\dfrac{2 \\times 0.1}{8 \\times 0.2^2}}\\\\\n&= 0.79 rads^{-1}\\\\\n\\omega &= \\dfrac{2 \\pi}{T} \\\\\n\\therefore T &= \\dfrac{2 \\pi}{\\omega} = \\dfrac{2 \\pi}{0.79} = 8.0 \\text{ seconds}\n\\end{align*}\n\n\nWhen the mass has a displacement of 10 cm, the total energy will be shared between kinetic and potential energy.\n\n\n\\begin{align*} \nT.E. = P.E. + K.E. &= 0.1J \\\\\n \\dfrac{1}{2} \\times 5 \\times 0.1^2 + K.E. &= 0.1J\\\\\n 0.025 + K.E. &= 0.1J  \\\\\n \\implies K.E. &= 0.075J \\\\\n \\dfrac{1}{2}mv^2 &= 0.075J \\\\\n v &= 0.14 ms^{-1}\n \\end{align*}\n\n\n\n\\pagebreak\n\n\\spec{ distinguish between free, damped and forced oscillations}\n\nA free oscillator is one which is set in motion and left to oscillate without any external forces or damping.\n\ne.g. an ideal pendulum with no friction.\n\nA forced oscillation is one where an external periodic force is applied. \n\ne.g. pushing a child on a swing.\n\nDamping is where frictional forces remove energy from the system and the oscillations die down.\n\nThis can be increased intentionally, for example by adding thick oil to car suspension to prevent you bouncing around every time the car hits a bump.\n\nIf a system is lightly damped it will gradually come to rest after a number of cycles. \n\nAn important case is critical damping where the system comes to rest without overshooting and in the shortest possible time. \n\n \n\n\\spec{ recall how the amplitude of a forced oscillation changes at and around the natural frequency of a system\nand describe, qualitatively, how damping affects resonance.}\n\nEvery system has a natural frequency of oscillation which will depend on its physical properties. In the case of a mass on a spring it will depend on the mass and spring constant but more complex examples, e.g. washing machines or the Millennium Bridge, will also have natural frequencies.\n\nIf we apply a driving force to an oscillator, the effect it has will depend on the amplitude of the driving force and its frequency compared to the natural frequency of the system. 
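\nTo put a number on this (a quick illustrative calculation of mine, reusing the values from the energy example above): the 8 kg mass on the $5 Nm^{-1}$ spring has a natural angular frequency of\n\\[\n\\omega_0 = \\sqrt{\\dfrac{k}{m}} = \\sqrt{\\dfrac{5}{8}} = 0.79 rads^{-1}\n\\]\nso its natural frequency is $f_0 = \\dfrac{\\omega_0}{2 \\pi} = 0.13 Hz$, and a driving force at around this frequency would produce the largest response.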
\n\nFor a simple example try holding a spring with a mass on the end of it and move your hand up and down rhythmically.\n\nIf you move your hand very slowly the spring won't stretch or contract and the mass will simply follow the movement of your hand. So the amplitude will be the same as the amplitude of the driver.\n\nAs you start to increase the frequency of your hand movement the spring will start to stretch and contract and the mass will move more than your hand. So the amplitude of oscillation increases.\n\nOnce the frequency of your hand is close to the natural frequency you are adding energy at just the right point in the cycle to increase the amplitude with each cycle and the amplitude gets very large. We call this resonance.\n\nIf we now move our hand with a very high frequency, the spring stretches and compresses but the mass doesn't have much time to accelerate, hence the amplitude starts to get very small.\n\n\n\\begin{itemize}\n\\item Low frequency driving force, amplitude of oscillator = amplitude of driver.\n\\item Driving frequency around natural frequency, resonance: very high amplitude oscillations.\n\\item High frequency driving force, low amplitude oscillations.\n\\end{itemize}\n\n\n\nIn the real world we need to consider damping which will affect all systems to some degree. Damping will come from frictional forces and will remove energy from the system every cycle. If the energy added by the driving force is more than is removed by damping then the amplitude will increase. As the amplitude increases, the energy removed each cycle will also increase until a maximum amplitude is reached.\n\nFrom this it is clear that the more damping there is, the lower the amplitude. \n\nAnother feature is that as we have more damping, the frequency at which resonance occurs is lower. You can think of this as damping slowing down the system and so reducing the natural frequency (although strictly speaking the natural frequency refers to an undamped system).\n\n\nThis is all expressed in the following graph where I have used the damping ratio $\\zeta$ as a measure of how much damping there is. It's not on the syllabus; you will only need to know qualitatively how damping affects resonance e.g. 
light damping and heavy damping but I thought it would be good to introduce you to another Greek letter.\n\n\n\\begin{tikzpicture}\n  \\begin{axis}[\n    % labels\n    tick label style={font=\\scriptsize},\n    xlabel={Driving Frequency},\n    ylabel={Amplitude / Driving Amplitude},\n    ytick={0,1,2,3,4,5},\n    xtick={0,1,2,3},\n    xticklabels={0,$\\omega_\\mathrm{f}$,,},\n    %\n    % plot lines property\n    no markers,\n    line width=0.3pt,\n    cycle list={{black,solid}},\n    %\n    % dominio 2D\n    samples=200,\n    smooth,\n    domain=0:2.5,\n    xmin=0, xmax=2.5,\n    ymin=0, ymax=5.0,\n    %\n    % canvas dimensions\n    width=12cm, height=12cm\n    ]\n\n    % horizontal help line\n    \\draw[help lines] (axis cs:0,1) -- (axis cs:2.5,1);\n    % vertical help line\n    \\draw[help lines] (axis cs:1,0) -- (axis cs:1,5);\n\n    % draw curves\n    \\addplot {1/sqrt((1-x^2)^2+4*0.05^2*x^2)};\n    \\addplot {1/sqrt((1-x^2)^2+4*0.10^2*x^2)};\n    \\addplot {1/sqrt((1-x^2)^2+4*0.20^2*x^2)};\n    \\addplot {1/sqrt((1-x^2)^2+4*0.30^2*x^2)};\n    \\addplot {1/sqrt((1-x^2)^2+4*0.40^2*x^2)};\n    \\addplot {1/sqrt((1-x^2)^2+4*0.50^2*x^2)};\n\n    % draw maximum curve\n    \\addplot[dashed,domain=0:0.99] {1/sqrt(1-x^4)};\n\n    %  curve labels\n    \\node[small dot,pin=30:{$\\zeta=0.05$}] at\n      (axis cs:1.10,4.22) {};\n    \\node[small dot,pin=30:{$\\zeta=0.10$}] at\n      (axis cs:1.10,3.29) {};\n    \\node[small dot,pin=30:{$\\zeta=0.20$}] at\n      (axis cs:1.10,2.05) {};\n    \\node[small dot,pin=30:{$\\zeta=0.30$}] at\n      (axis cs:1.20,1.19) {};\n    \\node[small dot,pin=30:{$\\zeta=0.40$}] at\n      (axis cs:1.28,0.83) {};\n    \\node[small dot,pin=30:{$\\zeta=0.50$}] at\n      (axis cs:1.50,0.51) {};\n  \\end{axis}\n\\end{tikzpicture}\n\n\n\\begin{itemize}\n\\item At very low driving frequencies the amplitude is the same as the driving amplitude.\n\\item With low damping we get a sharp resonance peak.\n\\item More damping creates a broader resonance peak.\n\n\\item Increasing damping reduces the amplitude at all frequencies.\n\n\\item The resonance peak shifts lower with increased damping.\n\n\\item As the frequency gets very high the amplitude will tend to zero regardless of damping or driving amplitude.\n\n\\end{itemize}\n\n\n\\end{document}\n\n\n\n%sagemathcloud={\"latex_command\":\"latexmk -xelatex -f -g -bibtex -synctex=1 -interaction=nonstopmode '11-oscillations.tex'\"}\n", "meta": {"hexsha": "4e75bfe77afba261375b312bd7dc9f1b550923f6", "size": 19192, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "11-oscillations.tex", "max_stars_repo_name": "dhruvrattan/physicsrevision", "max_stars_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2017-03-13T19:37:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-08T21:47:07.000Z", "max_issues_repo_path": "11-oscillations.tex", "max_issues_repo_name": "dhruvrattan/physicsrevision", "max_issues_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-12-19T16:46:07.000Z", 
"max_issues_repo_issues_event_max_datetime": "2021-06-24T08:14:03.000Z", "max_forks_repo_path": "11-oscillations.tex", "max_forks_repo_name": "dhruvrattan/physicsrevision", "max_forks_repo_head_hexsha": "99bf9346cd40ebe6f312f164d731b7010b86534a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 22, "max_forks_repo_forks_event_min_datetime": "2016-12-19T16:16:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-09T13:48:59.000Z", "avg_line_length": 31.7748344371, "max_line_length": 409, "alphanum_fraction": 0.6865881617, "num_tokens": 6069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8311430415844384, "lm_q1q2_score": 0.5756845645916818}}
{"text": "% ******************************* Thesis Appendix A ****************************\n\\chapter{Appendix B: An introduction to neural networks}\\label{sec:primer-nn}\n\n\\ifpdf\n    \\graphicspath{{Appendix2/Figs/Raster/}{Appendix2/Figs/PDF/}{Appendix2/Figs/}}\n\\else\n    \\graphicspath{{Appendix2/Figs/Vector/}{Appendix2/Figs/}}\n\\fi\n\nThis appendix chapter provides background at a more elementary level than\n\\vref{sec:bg-rnn}. Its goal is to sufficiently educate readers unfamiliar with\nrecurrent neural networks such that the remainder of our work can be\nunderstood.\n\n\\section{Neurons: the basic computation unit}\n\nNeurons are the basic abstraction which are combined together to form\nneural networks. A \\emph{neuron} is a parametric model of a function $f : \\RR^D \\to\n\\RR$ from its $D$-dimensional input $\\x$ to its output $y$. Our neurons will be\ndefined as\n\\begin{equation}\n    f(\\x) \\coloneqq \\sigma( \\langle \\vec{w}, \\x \\rangle)\n\\end{equation}\nwhich can be viewed as an inner product with \\emph{weights} $\\vec{w}$ to\nproduce an \\emph{activation} $z \\coloneqq \\langle \\vec{w}, \\x \\rangle\n\\in \\RR$ which is then squashed to a bounded domain by a non-linear\n\\textbf{activation function} $\\sigma : \\RR \\to [L, U]$. This is visually\ndepicted in \\cref{fig:nn-single}, which also makes apparent the\ninterpretation of weight $w_i$ as the sensitivity of the output $y$ to the\ninput $x_i$.\n\n\\begin{figure}[tb]\n    \\centering\n    \\input{Appendix2/Figs/nn-single.pdf_tex}\n    \\caption{A single neuron first computes an activation $z$ and then passes it through an activation function $\\sigma(\\cdot)$}\n    \\label{fig:nn-single}\n\\end{figure}\n\n\\section{Feedforward neural networks}\n\nMultiple neurons may share inputs and have their outputs concatenated together\nto form a \\emph{layer} modelling a multivariate functions $f :\n\\RR^{D_\\text{in}} \\to \\RR^{D_\\text{out}}$. Multiple layers can then\nbe composed together to form a \\emph{feedforwd neural network}.\n\n\\begin{figure}[tb]\n    \\centering\n    \\input{Appendix2/Figs/nn-ffw.pdf_tex}\n    \\caption{Graph depiction of a feedforward neural network with $2$ hidden layers}\n    \\label{fig:nn-ffw}\n\\end{figure}\n\nAlthough a single hidden layer is theoretically sufficient for a universal\nfunction approximator \\citep{Cybenko1993}, the number of hidden units to\nguarantee reported theoretical bounds are usually infeasibly large. Instead,\nrecent work in \\emph{deep learning} has shown that deep models which contain\nmany hidden layers can achieve strong performance across a variety of\ntasks \\citep{Bengio2011}.\n\nThe improved modeling capacity gained by composing multiple layers is due to\nthe composition of multiple non-linear activation functions.\nIn fact, it is easy to show that removing activation functions would make\na deep network equivalent to a single matrix transform: let $\\W_{l,l+1}$\ndenote the weights between layers $l$ and $l+1$. 
The original neural network\ncomputes the function\n\\begin{equation}\n    \\sigma\\left(\n        \\W_{L,L-1} \\sigma \\left(\n            \\W_{L-1,L-2}\\cdots \\sigma \\left(\n                \\W_{2,1} \\x\n            \\right) \\cdots\n        \\right)\n    \\right)\n\\end{equation}\nAfter removing the activation functions $\\sigma$, we are left with\n\\begin{equation}\n    \\W_{L,L-1} \\W_{L-1,L-2}\\cdots \\W_{2,1} \\x\n    = \\tilde{\\W} \\x\n\\end{equation}\nwhere $\\tilde{\\W} = \\W_{L,L-1} \\W_{L-1,L-2} \\cdots \\W_{2,1}$\nis a matrix transform computing the same function as the neural network with\nactivation functions removed.\n\n\\section{Recurrent neural networks}\n\nWhile feedforward neural networks provide a flexible model for approximating\narbitrary functions, they require a fixed-dimension input $\\x$ and hence\ncannot be directly applied to sequential data $\\x = (\\x_t)_{t=1}^T$ where $T$ may\nvary.\n\nA naive method for extending feedforward networks would be to independently\napply a feedforward network to compute $\\y_t = f(\\x_t; \\vec{\\theta})$ at each timestep\n$1 \\leq t \\leq T$. However, this approach is only correct when each output\n$\\y_t$ depends only on the input at the current time $\\x_t$ and is independent of\nall prior inputs $\\{\\x_k\\}_{k < t}$. This assumption is false in musical data:\nthe current musical note is usually highly dependent on the sequence of notes\nleading up to it.\n\nThis shortcoming motivates \\emph{recurrent neural networks} (RNNs), which\ngeneralize feedforward networks by introducing time-delayed recurrent\nconnections between hidden layers (Elman networks \\citep{elman1990finding}) or\nfrom the output layers to the hidden layers (Jordan networks\n\\citep{jordan1997serial}). Mathematically, a linear Elman-type RNN is a discrete-time\ndynamical system commonly parameterized as:\n\\begin{equation}\n \\left.\\begin{aligned}\n          \\h_t &= \\W_{xh} \\sigma_{xh} \\left( \\x_t \\right) + \\W_{hh} \\sigma_{hh} \\left( \\h_{t-1} \\right)\\\\\n          \\y_t &= \\W_{hy} \\sigma_{hy} \\left( \\h_t \\right)\n       \\end{aligned}\n \\right\\}\n \\qquad \\text{Linear Elman-Type RNN Dynamics}\n\\end{equation}\nwhere $\\sigma_{\\cdot \\cdot}(\\cdot)$ are activation functions acting\nelement-wise and $\\theta = \\{ \\W_{xh}, \\W_{hh}, \\W_{hy}\\}$ are the learned\nparameters. \\cref{fig:nn-rnn} provides a graphical illustration of such a\nnetwork. Notice that apart from the edges between hidden nodes, the network is\nidentical to a regular feedforward network (\\cref{fig:nn-ffw}).\n\n\\begin{figure}[tb]\n    \\centering\n    \\input{Appendix2/Figs/nn-rnn.pdf_tex}\n    \\caption{Graph representation of an Elman-type RNN.}\n    \\label{fig:nn-rnn}\n\\end{figure}\n\nTo apply the RNN over an input sequence $\\x$, the activations of the hidden\nstates are first initialized to an initial value $\\h_0 \\in \\RR^{D_{h}}$. 
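To see how the hidden state carries history, it may help to unroll the first two timesteps explicitly (a supplementary sketch, using the dynamics above with initial value $\\h_0$):\n\\begin{align*}\n\\h_1 &= \\W_{xh} \\sigma_{xh}(\\x_1) + \\W_{hh} \\sigma_{hh}(\\h_0), & \\y_1 &= \\W_{hy} \\sigma_{hy}(\\h_1), \\\\\n\\h_2 &= \\W_{xh} \\sigma_{xh}(\\x_2) + \\W_{hh} \\sigma_{hh}(\\h_1), & \\y_2 &= \\W_{hy} \\sigma_{hy}(\\h_2),\n\\end{align*}\nso $\\y_2$ depends on $\\x_2$ and, through $\\h_1$, on $\\x_1$ as well: exactly the dependence on prior inputs that the naive feedforward approach lacks.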
Next,\nfor each timestep $t$ the hidden layer activations are computed using the\ncurrent input $\\x_t$ and the previous hidden state activations $\\h_{t-1}$.\nThis motivates an alternative perspective on RNNs as a template consisting\nof a feedforward network with inputs $\\{\\x_t, \\h_{t-1}\\}$ (see\n\\vref{fig:rnn-elman}) replicated across time $t$.\n", "meta": {"hexsha": "6f6a428df4daa25341a3c8d86d5b84bc730611d2", "size": 6003, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendix2/appendix2.tex", "max_stars_repo_name": "feynmanliang/bachbot-thesis", "max_stars_repo_head_hexsha": "08abadd3d5f4960d8c40f48c0e46622aa507bf4b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Appendix2/appendix2.tex", "max_issues_repo_name": "feynmanliang/bachbot-thesis", "max_issues_repo_head_hexsha": "08abadd3d5f4960d8c40f48c0e46622aa507bf4b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendix2/appendix2.tex", "max_forks_repo_name": "feynmanliang/bachbot-thesis", "max_forks_repo_head_hexsha": "08abadd3d5f4960d8c40f48c0e46622aa507bf4b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1397058824, "max_line_length": 128, "alphanum_fraction": 0.7283025154, "num_tokens": 1686, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7431680199891789, "lm_q1q2_score": 0.5756456076606671}}
{"text": "\nHere we are going to talk about the \\textbf{Dropout} method for training DL model. This is a kind of training algorithm but also a kind of parameter regularization method in some sense.\n\n\n\\section{Bagging and Dropout}\n\\begin{itemize}\n\t\\item \\textbf{Bootstrap} comes from a common saying \"pull up by your own bootstraps\", which means use your won resource to solve problem.\n\t\\item \\textbf{Bagging} (short for \\textbf{bootstrap aggregating}) is a technique for reducing generalization error by combining several models. The idea is to train several different models separately, then have all of the models vote on the\toutput for test examples.This is an example of a general strategy in machine learning called\tmodel averaging. Techniques employing this strategy are known as ensemble methods.\n\t\\item The reason that model averaging works is that different models will usually\n\tnot make all the same errors on the test set.\n\\end{itemize}\nNeural networks reach a wide enough variety of solution points that they can often benefit from model averaging even if all of the models are trained on the same dataset. Differences in random initialization, random selection of minibatches,\ndifferences in hyperparameters, or different outcomes of non-deterministic implementations of neural networks are often enough to cause different members of the ensemble to make partially independent errors.\n\n\\textbf{Dropout} provides a computationally inexpensive but\tpowerful method of regularizing a broad family of models. To a first approximation, dropout can be thought of as a method of making bagging practical for ensembles of very many large neural networks. Specifically, dropout trains the ensemble consisting of all sub-networks that can be formed by removing non-output units from an underlying base network.\n\n\\newpage\n\\begin{figure}\n\t\\includegraphics[width=\\linewidth]{figures/dropout}\n\t\\caption{Dropout trains an ensemble consisting of all sub-networks that can be\n\t\tconstructed by removing non-output units from an underlying base network.}\n\t\\label{fig:dropout}\n\\end{figure}\n\n\\begin{wrapfigure}{r}{0.4\\textwidth}\n\t\\vspace{-20pt}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.38\\textwidth]{figures/dropoutexample}\\label{dropoutexample}\n\t\\end{center}\n\t\\vspace{-20pt}\n\t\\vspace{-10pt}\n\\end{wrapfigure}\n\nTo perform forward propagation with dropout, we randomly sample a vector $\\bm \\mu$ with one entry for each input\nor hidden unit in the network. The entries of $\\bm \\mu$ are binary and are sampled independently from each other. The probability of each entry being 1 is a hyperparameter, usually $0.5$ for the hidden layers and $0.8$ for the input. Each unit in the network is multiplied by the corresponding mask, and then forward propagation continues through the rest of the\tnetwork as usual. This is equivalent to randomly selecting one of the sub-networks from figure and running forward propagation through it.\n\nRecall that to learn with bagging, we define $k$ different models, construct $k$ different datasets by sampling from the training set with replacement, and then train model $i$ on dataset $i$. Dropout aims to approximate this process, but with an exponentially large number of neural networks. Specifically, to train with dropout, we use a minibatch-based learning algorithm that makes small steps, such as stochastic gradient descent. 
Each time we load an example into a minibatch, we randomly sample a different binary mask to apply to all of the input and hidden units in the network.\n\nDropout training is not quite the same as bagging training. In the case of\nbagging, the models are all independent. In the case of dropout, the models share\nparameters, with each model inheriting a different subset of parameters from the\nparent neural network. This parameter sharing makes it possible to represent an\nexponential number of models with a tractable amount of memory. In the case of\nbagging, each model is trained to convergence on its respective training set. In the\ncase of dropout, typically most models are not explicitly trained at all; usually,\nthe model is large enough that it would be infeasible to sample all possible\nsub-networks within the lifetime of the universe. Instead, a tiny fraction of the possible\nsub-networks are each trained for a single step, and the parameter sharing causes\nthe remaining sub-networks to arrive at good settings of the parameters. These\nare the only differences. Beyond these, dropout follows the bagging algorithm.\n\n\n\n\n\\section{Mathematical form}\n\nSuppose $\\mathbf X=(x_1,...,x_n)$ is the dataset and $\\mathbf Y=(y_1,...,y_n)$ is the label set. Our goal is to find the parameters $\\Theta=(\\theta_1,...,\\theta_j)$ of a neural network $f$ minimizing\n\\begin{equation}\\label{1}\n\\mathcal L(\\Theta)=\\sum_{i=1}^n L(f(x_i;\\Theta),y_i)\n\\end{equation}\nwhere $L(\\cdot, \\cdot)$ is a loss function like the $L^2$ norm or cross-entropy, and $f=f^j\\circ\\cdots\\circ f^1$ is the neural network, with $f^s(\\mathbf x)=g(\\theta_s \\begin{pmatrix}\n\\mathbf x\\\\ 1\n\\end{pmatrix})$.\n\nRecall the traditional Mini-Batch SGD training algorithm: for every step, we choose a subset $B_t\\subset \\{1,...,n\\}$ and update the parameters $\\Theta$ by:\n\\begin{equation}\n\\Theta^{t+1} = \\Theta^t - \\eta_t\\nabla_\\Theta \\sum_{i \\in B_t} L(f(x_i;\\Theta),y_i).\n\\end{equation}\n\nDropout means changing the model at each such step, giving:\n\\begin{equation}\n\\tilde{f}^{j} = \\theta^j \\circ P^j \\circ g^j \\circ \\tilde f^{j-1},\n\\end{equation}\nwhere $P^j$ is a random diagonal matrix with\n\\begin{equation}\nP^j_{ii} \\sim P,\n\\end{equation}\nwhere\n\\begin{equation}\nP\\{x = 0 \\} = 1-\\gamma_j, \\quad P\\{x = 1 \\} = \\gamma_j,\n\\end{equation}\nand then the update becomes\n\\begin{equation}\n\\Theta^{t+1} = \\Theta^{t} - \\eta \\nabla_{\\Theta} \\sum_{i \\in B_t} L(\\tilde f^j(x_i; \\Theta), y_i).\n\\end{equation}\n\n\\subsection{Prediction}\nWhen we finish training, we get a solution $\\Theta^\\#=(\\theta_1^\\#,...,\\theta_j^\\#)$,\nand we can simply take the solution of \\eqref{1} to be $\\Theta = (\\gamma_1 \\theta_1^\\#,...,\\gamma_j \\theta_j^\\# )$. 
This is called the \\textbf{weight scaling inference rule}.\nWe use a linear problem to explain this.\nSuppose our problem is to minimize\n\\begin{equation}\\label{7}\n\\mathcal{L}(\\mathbf{W})=\\|\\mathbf{Y}-\\mathbf W\\mathbf X\\|_F^2\n\\end{equation}\nso the solution $\\mathbf W^*$ of \\eqref{7} satisfies the vanishing gradient condition, i.e.\n\\begin{equation}\\label{gradient}\n(\\mathbf Y-\\mathbf W^*\\mathbf X)\\mathbf{X}^T=0\n\\end{equation}\n%If $\\mathbf X$ is full rank, we can get\n%\\begin{equation}\n%\\mathbf Y-\\mathbf W^*\\mathbf X=0\n%\\end{equation}\n\n\nWhen we use dropout, we actually solve a series of problems of the form\n\\begin{equation}\\label{10}\n\\mathcal{L}(\\mathbf{W})=\\|\\mathbf{Y}-\\mathbf W(\\mathbf D\\mathbf X)\\|_F^2\n\\end{equation}\nwhere $\\mathbf D=\\text{diag}(\\mathbf d)$ and $\\mathbf d$ is a random vector with $P(d_{i}=1)=\\gamma,P(d_{i}=0)=1-\\gamma$.\n\nIts gradient is\n\\begin{equation}\ng_t=g(\\mathbf D_t)=(\\mathbf Y-\\mathbf W(\\mathbf D_t\\mathbf X))(\\mathbf D_t\\mathbf X)^T\n\\end{equation}\nSo the iteration step is\n\\begin{equation}\\label{iter}\n\\mathbf W^{t+1}=\\mathbf W^t-\\eta_t g_t\n\\end{equation}\n%We also want\n%\\begin{equation}\n%g(\\mathbf D)=\\mathbf Y-\\mathbf W(\\mathbf D\\otimes\\mathbf X)=0\n%\\end{equation}\nIf \\eqref{iter} is convergent, the expectation of $g$ should equal zero, so we have\n\\begin{align}\n0&=\\mathbb E_{\\mathbf D}(\\mathbf Y-\\mathbf W(\\mathbf D\\mathbf X))(\\mathbf D\\mathbf X)^T\\\\\n&=\\gamma\\mathbf Y\\mathbf X^T-\\mathbf W \\gamma^2\\mathbf{X}\\mathbf X^T+(\\gamma^2-\\gamma)\\mathbf W\\text{diag}(\\mathbf X\\mathbf X^T)\\\\\n&\\approx\\gamma\\mathbf Y\\mathbf X^T-\\mathbf W \\gamma^2\\mathbf{X}\\mathbf X^T\n\\end{align}\nSo the solution $\\mathbf W^\\#$ of \\eqref{10} satisfies\n\\begin{equation}\n(\\mathbf Y-\\mathbf W^\\#(\\gamma\\mathbf X))\\mathbf X^T\\approx 0\n\\end{equation}\n\nCombining this with \\eqref{gradient}, we can set $\\mathbf W^*=\\gamma\\mathbf W^\\#$.\n\nFrom another point of view,\n\\begin{align}\n&\\mathbb E_{\\mathbf D}[\\|\\mathbf Y-\\mathbf W(\\mathbf D\\mathbf X) \\|^2_F]\\\\\n=&\\mathbb E_{\\mathbf D}\\text{tr}((\\mathbf Y-\\mathbf W \\mathbf D \\mathbf X)(\\mathbf Y-\\mathbf W \\mathbf D \\mathbf X)^T)\\\\\n=&\\mathbb E_{\\mathbf D}\\text{tr}(\\mathbf Y \\mathbf Y^T-\\mathbf W \\mathbf D \\mathbf X \\mathbf Y^T-\\mathbf Y(\\mathbf W \\mathbf D \\mathbf X)^T+(\\mathbf W \\mathbf D \\mathbf X)(\\mathbf W \\mathbf D \\mathbf X)^T)\\\\\n=&\\text{tr}(\\mathbf Y \\mathbf Y^T-\\gamma\\mathbf W \\mathbf X \\mathbf Y^T-\\gamma \\mathbf Y(\\mathbf W  \\mathbf X)^T+\\gamma^2(\\mathbf W \\mathbf X)(\\mathbf W  \\mathbf X)^T+(\\gamma-\\gamma^2)\\mathbf W \\text{diag}(\\mathbf X\\mathbf X^T)\\mathbf W^T )\\\\\n=&\\text{tr}((\\mathbf Y-\\gamma \\mathbf W  \\mathbf X)(\\mathbf Y-\\gamma \\mathbf W \\mathbf X)^T)+(\\gamma-\\gamma^2)\\text{tr}(\\mathbf W \\text{diag}(\\mathbf X\\mathbf X^T)\\mathbf W^T )\\\\\n=&\\|\\mathbf Y-\\gamma \\mathbf W  \\mathbf X\\|^2_F+(\\gamma-\\gamma^2)\\text{tr}( \\text{diag}(\\mathbf X\\mathbf X^T)\\mathbf W^T\\mathbf W )\\\\\n=&\\|\\mathbf Y-\\gamma \\mathbf W  \\mathbf X\\|^2_F+(\\gamma-\\gamma^2)\\|\\mathbf W\\,\\text{diag}(\\mathbf X\\mathbf X^T)^{\\frac 12}\\|_F^2\n\\end{align}\nLetting $ \\tilde{\\mathbf W}=\\gamma \\mathbf W$, the loss function becomes \n\\[\n\\|\\mathbf Y-\\tilde{\\mathbf W} \\mathbf X\\|^2_F+\\dfrac {(1-\\gamma)}\\gamma\\|\\tilde{\\mathbf W}\\text{diag}(\\mathbf X\\mathbf X^T)^{\\frac 12}\\|_F^2\n\\]\nThus, you can think of dropout for the linear problem as adding a regularization term $\\dfrac {(1-\\gamma)}\\gamma\\|\\tilde{\\mathbf W}\\text{diag}(\\mathbf X\\mathbf X^T)^{\\frac 12}\\|_F^2$.\n\n\n\n\\begin{remark}\n\tThis analysis of the linear problem only tries to explain why the \\textbf{weight scaling inference rule} works. It does not mean that dropout can solve the linear problem better than not using dropout. In my (Yuyan's) opinion, dropout gains the advantages of the bagging method at the cost of solving the problem inexactly. In most cases the benefit of the bagging method outweighs the inexact solution. This means that dropout has a larger loss on the training dataset but higher accuracy on the test dataset.\n\\end{remark}\n\n\n\\subsection{Inverted Dropout}\n\n\nWhen we implement dropout, usually we use inverted dropout instead.\n\nIn TensorFlow, the documentation of tf.nn.dropout says: ``With probability $\\gamma$, outputs the input element scaled up by $\\dfrac{1}{\\gamma}$, otherwise outputs 0. The scaling is so that the expected sum is unchanged.''\n\n\n\\subsection{Numerical Experiment for Linear System}\nIf we consider the linear problem as a machine learning problem, we say a method works if we get higher accuracy on the test data by using it. Meanwhile, if we consider the linear problem as an optimization problem, we say a method works if we get a lower value of the loss function on the training dataset.\n\n\n\\begin{table}\n\t\\begin{tabular}{c|c|c}\n\t\t&with dropout& without dropout\\\\\\hline\n\t\taccuracy&86.3\\%&86.3\\%\\\\\\hline\n\t\tloss &0.407&0.385\\\\\\hline\n\t\\end{tabular}\n\\end{table}\n", "meta": {"hexsha": "c5921e25e7b16ce2aa329048cad9c14379f2c2bb", "size": 10810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Dropout.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Dropout.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Dropout.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.3910614525, "max_line_length": 587, "alphanum_fraction": 0.7422756707, "num_tokens": 3243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5756455872470108}}
{"text": "\\subsection{Free groups}\\label{subsec:free_groups}\n\n\\begin{definition}\\label{def:free_monoid}\n  We associate with every set \\( A \\) its \\term{free monoid} \\( F(A) \\coloneqq (A^*, \\cdot) \\), where \\( A^* \\) is the \\hyperref[def:formal_language/kleene_star]{Kleene star} and \\( \\cdot \\) is \\hyperref[def:formal_language/concatenation]{concatenation}.\n\n  Denote by \\( \\iota_A: A \\to F(A) \\) the canonical inclusion function, which sends every member of \\( A \\) into the corresponding single-symbol word in the \\hyperref[def:free_monoid]{free monoid} \\( F(A) \\).\n\\end{definition}\n\\begin{proof}\n  Concatenation is clearly associative and the empty word \\( \\varepsilon \\) is an \\hyperref[def:monoid]{identity} under concatenation.\n\\end{proof}\n\n\\begin{theorem}[Free monoid universal property]\\label{thm:free_monoid_universal_property}\n  We associate with every set \\( A \\) its \\hyperref[def:formal_language/kleene_star]{Kleene star} \\( A^* \\). Denote by \\( \\iota_A: A \\to F(A) \\) the canonical inclusion function, which sends every member of \\( A \\) into the corresponding single-symbol word in \\( A^* \\).\n\n  The Kleene star \\( A^* \\) with \\hyperref[def:formal_language/concatenation]{concatenation} is the unique up to an isomorphism monoid that satisfies the following \\hyperref[rem:universal_mapping_property]{universal mapping property}:\n  \\begin{displayquote}\n    For every monoid \\( M \\) and every function \\( f: A \\to M \\), there exists a unique monoid homomorphism \\( \\widetilde{f}: A^* \\to M \\) such that the following diagram commutes:\n    \\begin{equation}\\label{eq:thm:free_monoid_universal_property/diagram}\n      \\begin{aligned}\n        \\includegraphics[page=1]{output/thm__free_monoid_universal_property.pdf}\n      \\end{aligned}\n    \\end{equation}\n  \\end{displayquote}\n\n  Via \\fullref{rem:universal_mapping_property}, \\( (\\Anon*)^* \\) becomes \\hyperref[def:category_adjunction]{left adjoint} to the \\hyperref[def:concrete_category]{forgetful functor}\n  \\begin{equation*}\n    U: \\cat{Mon} \\to \\cat{Set}.\n  \\end{equation*}\n\\end{theorem}\n\\begin{proof}\n  For every function \\( f: A \\to M \\), define the monoid homomorphism\n  \\begin{equation*}\n    \\begin{aligned}\n      &\\widetilde{f}: A^* \\to M, \\\\\n      &\\widetilde{f}(x_1 x_2 \\ldots x_n) \\coloneqq f(x_1) \\cdot f(x_2) \\cdot \\ldots \\cdot f(x_n)\n    \\end{aligned}\n  \\end{equation*}\n  obtained by applying the monoid operation \\( \\cdot \\) recursively to the pointwise image\n  \\begin{equation*}\n    f(x_1) f(x_2) \\ldots f(x_n)\n  \\end{equation*}\n  of the word\n  \\begin{equation*}\n    x_1 x_2 \\ldots x_n.\n  \\end{equation*}\n\n  The homomorphism \\( \\widetilde{f} \\) is uniquely determined by the action of \\( f \\) on single-symbol words.\n\\end{proof}\n\n\\begin{definition}\\label{def:abstract_reduction_system}\\mcite[def. 1.1.2]{Book1993}\n  Fix an arbitrary \\hyperref[def:set]{set} \\( A \\) and an \\hyperref[rem:first_order_formula_conventions/infix]{infix} \\hyperref[def:binary_relation]{binary relation} \\( \\to \\) on \\( A \\). 
We call the operation \\( \\to \\) a \\term{reduction relation}, and the pair \\( (A, \\to) \\) --- an \\term{abstract reduction system}.\n\n  In the case where \\( A \\) is the \\hyperref[def:formal_language/kleene_star]{Kleene star} of some set, we also call \\( (A, \\to) \\) a \\term{string rewriting system}.\n\n  \\begin{thmenum}\n    \\thmitem{def:abstract_reduction_system/relations}\\mcite[not. 1.1.1]{Book1993} We also introduce the following auxiliary relations:\n    \\begin{itemize}\n      \\item \\( \\reloset n \\to \\) is the \\( n \\)-th \\hyperref[def:binary_relation/composition]{iterated composition} of \\( \\to \\), where \\( n \\) is a nonnegative integer.\n\n      \\item \\( \\reloset + \\to \\) is the \\hyperref[def:relation_closures/transitive]{transitive closure} of \\( \\to \\).\n\n      \\item \\( \\reloset {*} \\to \\) is the \\hyperref[def:relation_closures/reflexive]{reflexive closure} of \\( \\reloset + \\to \\).\n\n      \\item \\( {\\leftrightarrow} \\) is the \\hyperref[def:relation_closures/symmetric]{symmetric closure} of \\( \\to \\).\n\n      \\item \\( \\reloset + \\leftrightarrow \\) is the \\hyperref[def:relation_closures/transitive]{transitive closure} of \\( \\leftrightarrow \\).\n\n      \\item \\( \\reloset {*} \\leftrightarrow \\) is the \\hyperref[def:relation_closures/reflexive]{reflexive closure} of \\( \\reloset + \\leftrightarrow \\), the smallest equivalence relation containing \\( \\to \\).\n    \\end{itemize}\n\n    \\thmitem{def:abstract_reduction_system/hierarchy}\\mcite[def. 1.1.3]{Book1993} In analogy with \\fullref{def:arborescence/ancestry}, if \\( x \\reloset {*} \\to y \\), we say that \\( y \\) is a \\term{descendant} of \\( x \\) and that \\( x \\) is an \\term{ancestor} of \\( y \\). If \\( x \\to y \\), we say that \\( y \\) is an \\term{immediate descendant}, or a \\term{successor}, especially when considering \\( (A, \\to) \\) as a \\hyperref[def:quiver]{quiver}.\n\n    We say that an element is \\term{reducible} if it has an immediate descendant and \\term{irreducible} otherwise.\n\n    \\thmitem{def:abstract_reduction_system/equivalent}\\mcite[def. 1.1.3]{Book1993} We say that \\( x \\) and \\( y \\) are \\term{equivalent} if \\( x \\reloset {*} \\leftrightarrow y \\). This is a weaker condition than having both \\( x \\reloset {*} \\to y \\) and \\( y \\reloset {*} \\to x \\). Indeed, the latter corresponds to \\hyperref[def:quiver_connectedness/strong]{strong connectedness} of quivers, while equivalence corresponds to \\hyperref[def:quiver_connectedness/weak]{weak connectedness}.\n\n    \\thmitem{def:abstract_reduction_system/normal_form}\\mcite[def. 1.1.5]{Book1993} Given an \\hyperref[def:equivalence_relation]{equivalence relation} \\( \\cong \\), if \\( x \\cong y \\) and \\( y \\) is irreducible, we say that \\( y \\) is a \\term{normal form} of \\( x \\) modulo \\( \\cong \\), and that \\( x \\) is \\term{normalizable} modulo \\( \\cong \\).\n\n    Without further context, we assume that \\( \\cong \\) is the equivalence relation \\( \\reloset {*} \\leftrightarrow \\).\n\n    If \\( x \\) has a unique normal form, we denote it by \\( \\op{nf}(x) \\).\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{definition}\\label{def:abstract_rewriting_convergence}\n  Let \\( (A, \\to) \\) be an \\hyperref[def:abstract_reduction_system]{abstract reduction system}. 
We will list several conditions ensuring existence and uniqueness of \\hyperref[def:abstract_reduction_system/normal_form]{normal forms}.\n\n  \\begin{thmenum}\n    \\thmitem{def:abstract_rewriting_convergence/confluence}\\mcite[def. 1.1.6]{Book1993} We call the system \\term{confluent} if, whenever \\( x \\) and \\( y \\) are descendants of \\( w \\), they have a common descendant \\( z \\). That is,\n    \\begin{equation}\\label{eq:def:abstract_rewriting_convergence/confluence/diagram}\n      \\begin{aligned}\n        \\includegraphics[page=1]{output/def__abstract_rewriting_convergence.pdf}\n      \\end{aligned}\n    \\end{equation}\n\n    If this is only required whenever \\( x \\) and \\( y \\) are \\hi{immediate} descendants of \\( w \\), we say that the system is \\term{locally confluent}.\n\n    \\thmitem{def:abstract_rewriting_convergence/noetherian}\\mcite[def. 1.1.9]{Book1993} We call the system \\term{noetherian} or \\term{strongly normalizable} if there exists no \\term{infinite ascending sequence}\n    \\begin{equation*}\n      x_1 \\to x_2 \\to x_3 \\to \\cdots.\n    \\end{equation*}\n\n    This is dual to \\hyperref[def:well_founded_relation]{well-foundedness}.\n\n    \\thmitem{def:abstract_rewriting_convergence/convergent}\\mcite[def. 1.1.11]{Book1993} Finally, we call the system \\term{convergent} if it is both confluent and noetherian.\n\n    By \\fullref{thm:noetherian_rewriting_system_local_confluence}, this is equivalent to the system being \\hi{locally} confluent and noetherian.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:noetherian_rewriting_system_local_confluence}\\mcite[thm. 1.1.13]{Book1993}\n  A \\hyperref[def:abstract_rewriting_convergence/noetherian]{noetherian}  \\hyperref[def:abstract_reduction_system]{abstract reduction system} is \\hyperref[def:abstract_rewriting_convergence/confluence]{confluent} if and only if it is \\hyperref[def:abstract_rewriting_convergence/confluence]{locally confluent}.\n\\end{proposition}\n\n\\begin{proposition}\\label{thm:convergent_reduction_system_normal_forms}\n  In a \\hyperref[def:abstract_rewriting_convergence/convergent]{convergent} \\hyperref[def:abstract_reduction_system]{abstract reduction system}, every element has a unique \\hyperref[def:abstract_reduction_system/normal_form]{normal form}.\n\\end{proposition}\n\\begin{proof}\n  Let \\( (A, \\to) \\) be a convergent abstract reduction system.\n\n  \\SubProof{Proof of existence} Suppose that \\( x \\) has no normal form. Hence, it is reducible and has an immediate descendant \\( x_1 \\). Proceeding by \\hyperref[rem:natural_number_recursion]{natural number recursion}, we can build a sequence\n  \\begin{equation*}\n    x \\to x_1 \\to x_2 \\to \\cdots.\n  \\end{equation*}\n\n  Such a sequence cannot exist, however, since the system is \\hyperref[def:abstract_rewriting_convergence/noetherian]{noetherian}.\n\n  \\SubProof{Proof of uniqueness} Suppose that both \\( x \\) and \\( y \\) are normal forms of \\( w \\).\n\n  Since the system is confluent, there exists an element \\( z \\) that is a descendant of both \\( x \\) and \\( y \\). But \\( x \\) and \\( y \\) are irreducible, therefore \\( x = z = y \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:free_group}\n  Let \\( A \\) be an arbitrary set. We will now construct the \\term{free group} \\( F(A) \\) of \\( A \\).\n\n  
Consider the \\hyperref[def:disjoint_union]{disjoint union} \\( U \\coloneqq A \\times \\set{ +, - } \\), whose members we will denote by \\( a^+ \\) and \\( a^- \\).\n\n  On the \\hyperref[def:formal_language/kleene_star]{Kleene star} \\( U^* \\), define the relation \\( \\to \\) to hold for \\( x \\to y \\) if there exists \\( a \\in A \\) and words \\( p \\) and \\( s \\) such that \\( y = ps \\) and either \\( x = p a^+ a^- s \\) or \\( x = p a^- a^+ s \\).\n\n  Then \\( (U^*, \\to) \\) is a \\hyperref[def:abstract_rewriting_convergence/convergent]{convergent} \\hyperref[def:abstract_reduction_system]{abstract reduction system}. The \\term{free group} \\( F(A) \\) is the set of \\hyperref[def:abstract_reduction_system/hierarchy]{irreducible elements} in \\( U^* \\), with the operation \\( v \\star w \\coloneqq \\op{nf}(vw) \\) that gives the \\hyperref[def:abstract_reduction_system/normal_form]{normal form} of the concatenated string \\( vw \\).\n\n  The identity is the empty word and the inverse \\( v^{-1} \\) of \\( v \\) can be characterized recursively as\n  \\begin{equation*}\n    v^{-1} \\coloneqq \\begin{cases}\n      \\varepsilon &v = \\varepsilon \\\\\n      u^{-1} a^-  &v = u a^+ \\\\\n      u^{-1} a^+  &v = u a^-\n    \\end{cases}\n  \\end{equation*}\n\n  The canonical inclusion is\n  \\begin{equation*}\n    \\begin{aligned}\n      &\\iota_A: A \\to F(A) \\\\\n      &\\iota_A(a) \\coloneqq a^+.\n    \\end{aligned}\n  \\end{equation*}\n\n  Compare this definition to free abelian groups defined in \\fullref{def:free_semimodule}.\n\\end{definition}\n\\begin{defproof}\n  We need to prove that the system \\( (U^*, \\to) \\) is convergent. Then from \\fullref{thm:convergent_reduction_system_normal_forms} it will follow that every word has a unique normal form.\n\n  \\SubProofOf[def:abstract_rewriting_convergence/confluence]{local confluence} Fix two reductions \\( w \\to x \\) and \\( w \\to y \\).\n\n  If \\( x = y \\), define \\( z \\coloneqq x \\). Otherwise, let \\( p_x \\) and \\( s_x \\) be words such that \\( w = p_x a^+ a^- s_x \\) and \\( x = p_x s_x \\) (the case where \\( w = p_x a^- a^+ s_x \\) is analogous). Similarly, let \\( p_y \\) and \\( s_y \\) be words such that \\( w = p_y b^+ b^- s_y \\) and \\( y = p_y s_y \\).\n\n  Without loss of generality, suppose that \\( p_x \\) is a subword of \\( p_y \\). If the two removed pairs overlap in a symbol, a direct check shows that \\( x = y \\), which was already handled, so assume they are disjoint. Then there exists a word \\( v \\) such that\n  \\begin{equation*}\n    w = p_x a^+ a^- v b^+ b^- s_y.\n  \\end{equation*}\n\n  Then both \\( x \\) and \\( y \\) are reducible to \\( z \\coloneqq p_x v s_y \\).\n\n  This shows that the rewriting system is locally confluent.\n\n  \\SubProofOf[def:abstract_rewriting_convergence/noetherian]{noetherianity} All words are finite, and a reduction always removes two symbols. 
Hence, there cannot exist an infinite reduction path.\n\\end{defproof}\n\n\\begin{theorem}[Free group universal property]\\label{thm:free_group_universal_property}\n  The \\hyperref[def:free_group]{free group} \\( F(A) \\) is the unique up to an isomorphism group that satisfies the following \\hyperref[rem:universal_mapping_property]{universal mapping property}:\n  \\begin{displayquote}\n    For every group \\( G \\) and every function \\( f: A \\to G \\), there exists a unique group homomorphism \\( \\widetilde{f}: F(A) \\to G \\) such that the following diagram commutes:\n    \\begin{equation}\\label{eq:thm:free_group_universal_property/diagram}\n      \\begin{aligned}\n        \\includegraphics[page=1]{output/thm__free_group_universal_property.pdf}\n      \\end{aligned}\n    \\end{equation}\n  \\end{displayquote}\n\n  Via \\fullref{rem:universal_mapping_property}, \\( F \\) becomes \\hyperref[def:category_adjunction]{left adjoint} to the \\hyperref[def:concrete_category]{forgetful functor}\n  \\begin{equation*}\n    U: \\cat{Grp} \\to \\cat{Set}.\n  \\end{equation*}\n\\end{theorem}\n\\begin{proof}\n  The free group operation is more complicated than the free monoid operation, however the proof of the universal mapping property is identical to the one in \\fullref{thm:free_monoid_universal_property}.\n\\end{proof}\n\n\\begin{definition}\\label{def:group_presentation}\\mcite[314]{Knapp2016BasicAlgebra}\n  Let \\( A \\) be a \\hyperref[def:set]{plain set} with \\hyperref[def:free_group]{free group} \\( F(A) \\) and let \\( R \\subseteq F(A) \\) be a set of words in \\( F(A) \\). Denote by \\( N(R) \\) the \\hyperref[thm:normal_subgroup_equivalences]{normal subgroup} \\hyperref[def:first_order_generated_substructure]{generated} by \\( R \\).\n\n  Then we define the group with \\term{generators} \\( A \\) and \\term{relators} \\( R \\) as\n  \\begin{equation}\\label{eq:def:group_presentation/presentation}\n    \\braket{ A \\mid R } \\coloneqq F(A) / N(R).\n  \\end{equation}\n\n  If \\( R = \\varnothing \\), there are no relators, and we use the following notation for the free group:\n  \\begin{equation}\\label{eq:def:group_presentation/free}\n    \\braket{ A } \\coloneqq F(A)\n  \\end{equation}\n\n  Note that we use similar notation compared to generated subgroups defined in \\fullref{def:group/submodel}. The former defines a new group operation from scratch, while the latter uses an existing group operation and is restricted by this operation.\n\n  If, for a given group \\( G \\), a subset \\( A \\) of \\( G \\) and a set \\( R \\subseteq F(A) \\), we have\n  \\begin{equation*}\n    G \\cong \\braket{ A \\given R },\n  \\end{equation*}\n  we call the pair \\( (A, R) \\) a \\term{presentation} of \\( G \\).\n\n  We say that \\( G \\) is \\term{finitely generated} if \\( A \\) is a \\hyperref[def:set_finiteness]{finite set}; if both \\( A \\) and \\( R \\) are finite, we call \\( G \\) \\term{finitely presented}.\n\n  Compare this to \\hyperref[def:module_presentation]{module presentations} and \\hyperref[def:algebra_presentation]{algebra presentations}.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:group_presentation_existence}\\mcite[prop. 
7.7]{Knapp2016BasicAlgebra}\n  Every group has at least one \\hyperref[def:group_presentation]{presentation}.\n\n  Compare this to \\fullref{thm:module_presentation_existence} and \\fullref{thm:algebra_presentation_existence}.\n\\end{proposition}\n\\begin{proof}\n  Fix an arbitrary group \\( G \\) and let \\( A \\coloneqq U(G) \\) be the underlying set.\n\n  By \\fullref{thm:free_group_universal_property}, there exists a unique group homomorphism \\( \\varphi: F(A) \\to G \\) such that\n  \\begin{equation*}\n    U(\\varphi) \\bincirc \\iota_A = \\id_A.\n  \\end{equation*}\n\n  By \\fullref{thm:def:group/kernel_is_normal_subgroup}, the kernel \\( \\ker \\varphi \\) is a normal subgroup of \\( F(A) \\), hence by \\fullref{thm:quotient_group_universal_property},\n  \\begin{equation*}\n    G = \\varphi(F(A)) \\cong F(A) / \\ker \\varphi = \\braket{ A \\mid \\ker \\varphi }.\n  \\end{equation*}\n\\end{proof}\n\n\\begin{definition}\\label{def:group_free_product}\\mcite[323]{Knapp2016BasicAlgebra}\n  The \\term{free product} of a nonempty pairwise disjoint family of groups \\( \\seq{ X_k }_{k \\in \\mscrK} \\) with \\hyperref[def:group_presentation]{presentations} \\( \\braket{S_k \\mid R_k}, k \\in \\mscrK \\) is the group with presentation\n  \\begin{equation*}\n    \\Ast_{k \\in \\mscrK} X_k \\coloneqq \\braket*{ \\bigcup_{k \\in \\mscrK} S_k \\given[\\Big] \\bigcup_{k \\in \\mscrK} R_k }.\n  \\end{equation*}\n\n  If the constituent groups are not disjoint, we may instead use \\hyperref[def:disjoint_union]{disjoint unions} as\n  \\small\n  \\begin{equation*}\n    \\Ast_{k \\in \\mscrK} X_k \\coloneqq \\braket*{ \\set[\\Big]{ (k, x) \\given k \\in \\mscrK \\T{and} x \\in S_k } \\given[\\Big] \\set[\\Big]{ (k, x_1) (k, x_2) \\ldots (k, x_n) \\colon k \\in \\mscrK \\T{and} x_1 x_2 \\ldots x_n \\in R_k } }.\n  \\end{equation*}\n  \\normalsize\n\n  For every index \\( m \\in \\mscrK \\), we define the canonical embedding\n  \\begin{equation*}\n    \\begin{aligned}\n       &\\iota_m: X_m \\to \\Ast_{k \\in \\mscrK} X_k \\\\\n       &\\iota_m(x) \\coloneqq (m, x).\n    \\end{aligned}\n  \\end{equation*}\n\\end{definition}\n\n\\begin{definition}\\label{def:cyclic_group}\n  For a singleton alphabet \\( \\set{ a } \\), we define the \\term{infinite cyclic group}\n  \\begin{equation*}\n    C_\\infty \\coloneqq \\braket{ a }\n  \\end{equation*}\n  and, for positive integers \\( n \\), the \\term{finite cyclic group} of \\term{order} \\( n \\) as\n  \\begin{equation*}\n    C_n \\coloneqq \\braket{ a \\given a^n }.\n  \\end{equation*}\n\n  We use the same notation independent of \\( a \\) because all cyclic groups of the same order are obviously \\hyperref[def:group/homomorphism]{isomorphic}.\n\n  Given an ambient group \\( G \\) and some element \\( g \\in G \\), the \\term{cyclic subgroup} of \\( g \\) is the cyclic group isomorphic to the subgroup of \\( G \\) \\hyperref[def:group/submodel]{generated} by \\( g \\).\n\n  As shown in \\fullref{thm:cyclic_group_isomorphic_to_integers_modulo_n}, cyclic groups are isomorphic to certain groups of integers, however it is still useful to have cyclic groups as a separate concept.\n\\end{definition}\n\n\\begin{definition}\\label{def:group_order}\n  The \\term{order} \\( \\ord(G) \\) of a group \\( G \\) is its \\hyperref[thm:cardinality_existence]{cardinality}.\n\n  The \\term{order} \\( \\ord(x) \\) of a member \\( x \\) of a group is the smallest positive integer \\( n \\) such that \\( x^n = e \\), i.e. 
order of the \\hyperref[def:cyclic_group]{cyclic subgroup} \\hyperref[def:group/submodel]{generated} by \\( x \\).\n\\end{definition}\n\n\\begin{proposition}\\label{thm:def:group_order}\n  \\hyperref[def:group_order]{Group orders} have the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:group_order/divides} For finite groups, the order of a group element divides the order of the group.\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:def:group_order/divides} Follows from \\fullref{thm:lagranges_theorem_for_groups}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:product_of_cyclic_groups}\n  The \\hyperref[def:monoid_direct_product]{direct product} \\( C_m \\times C_n \\) of two \\hyperref[def:cyclic_group]{cyclic groups} is cyclic if and only if \\( m \\) and \\( n \\) are \\hyperref[def:coprime_elements]{coprime}.\n\\end{proposition}\n\\begin{proof}\n  The \\hyperref[def:group_order]{order} of the element \\( (a, e) \\) is \\( m \\) and the order of \\( (e, a) \\) is \\( n \\). The order of\n  \\begin{equation*}\n    (a, a) = (a, e) \\cdot (e, a)\n  \\end{equation*}\n  is the least common multiple of \\( m \\) and \\( n \\), which equals \\( mn \\) if and only if \\( m \\) and \\( n \\) are coprime. Since the order of every element of \\( C_m \\times C_n \\) divides \\( \\operatorname{lcm}(m, n) \\), the product, which has \\( mn \\) elements, contains a generator if and only if \\( \\operatorname{lcm}(m, n) = mn \\), i.e. if and only if \\( m \\) and \\( n \\) are coprime.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:cyclic_groups_are_simple}\n  Groups of \\hyperref[def:prime_number]{prime} \\hyperref[def:group_order]{order} are \\hyperref[def:group/simple]{simple}.\n\\end{proposition}\n\\begin{proof}\n  Let \\( N \\) be a proper normal subgroup of \\( G \\), where \\( \\ord(G) = p \\) is a prime number.\n\n  From \\fullref{thm:lagranges_theorem_for_groups} it follows that \\( \\ord(N) \\) divides \\( p \\). But \\( p \\) is prime, hence \\( N \\) is either the trivial group or the full group.\n\n  Therefore, \\( G \\) is a simple group.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:group_categorical_limits}\n  \\hfill\n  \\begin{thmenum}\n    \\thmitem{thm:group_categorical_limits/product} The \\hyperref[def:discrete_category_limits]{categorical product} of the family \\( \\seq{ G_k }_{k \\in \\mscrK} \\) in the category \\hyperref[def:group/category]{\\( \\cat{Grp} \\)} of groups is their \\hyperref[def:monoid_direct_product]{monoid direct product} \\( \\prod_{k \\in \\mscrK} G_k \\).\n\n    \\thmitem{thm:group_categorical_limits/coproduct} The \\hyperref[def:discrete_category_limits]{categorical coproduct} of the family \\( \\seq{ G_k }_{k \\in \\mscrK} \\) in the category \\hyperref[def:group/category]{\\( \\cat{Grp} \\)} of groups is their \\hyperref[def:group_free_product]{group free product} \\( \\Ast_{k \\in \\mscrK} G_k \\).\n\n    Compare this to the commutative case discussed in \\fullref{thm:monoid_categorical_limits/coproduct}.\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:group_categorical_limits/product} Follows from \\fullref{thm:monoid_categorical_limits/product}.\n\n  \\SubProofOf{thm:group_categorical_limits/coproduct}  Let \\( (A, \\alpha) \\) be a \\hyperref[def:category_of_cones/cocone]{cocone} for the discrete diagram \\( \\seq{ G_k }_{k \\in \\mscrK} \\). 
We want to define a group homomorphism \\( l_A: \\Ast_{k \\in \\mscrK} G_k \\to A \\) such that, for every \\( m \\in \\mscrK \\),\n  \\begin{equation*}\n    \\alpha_m(x) = l_A(\\iota_m(x)).\n  \\end{equation*}\n\n  This suggests the definition\n  \\begin{equation*}\n    l_A\\parens[\\Big]{ (k_1, x_1) (k_2, x_2) \\ldots (k_n, x_n) } \\coloneqq \\alpha_{k_1}(x_1) \\cdot \\alpha_{k_2}(x_2) \\cdot \\ldots \\cdot \\alpha_{k_n}(x_n).\n  \\end{equation*}\n\n  Since each \\( \\alpha_k \\) is a group homomorphism, every relator of the presentation is mapped to the identity of \\( A \\), hence \\( l_A \\) is well-defined.\n\\end{proof}\n", "meta": {"hexsha": "f8819ba4f33d555aae2f333f8ec1768eec07d1a2", "size": 21661, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/free_groups.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/free_groups.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/free_groups.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.7855072464, "max_line_length": 475, "alphanum_fraction": 0.702229814, "num_tokens": 6816, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5756455867080618}}
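As a quick sanity check of the proposition on products of cyclic groups, consider the two smallest cases:
\begin{equation*}
  C_2 \times C_3 \cong C_6, \qquad C_2 \times C_2 \not\cong C_4.
\end{equation*}
In \( C_2 \times C_3 \) the element \( (a, a) \) has order \( \operatorname{lcm}(2, 3) = 6 \), which equals the order of the group, so the product is cyclic; in \( C_2 \times C_2 \) every non-identity element has order \( 2 \), so no single element generates the group.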
{"text": "\\section{The one-pass distributed algorithm}\n\nThe essence of the distributed strategy is to achieve almost perfect\nparallelism, by splitting the input matrix into several smaller\nmatrices called \\emph{jobs}. \\\\\n\n\\[\nA^{m \\times n} = \n\\begin{bmatrix}\nA_1^{m \\times c_1} \\mid A_2^{m \\times c_2} \\mid \\cdots \\mid A_k^{m \\times c_k}\n\\end{bmatrix}\n\\suchthat \\sum_{i=1}^k c_i = n\n\\]\n\\\\\n\nA subset of these smaller matrices or \\emph{jobs} is assigned to each\nnode in the cluster, depending on their capabilities; the\nobjective is to assign matrices that fit into the node's RAM\nmemory. Each node will calculate the SVD factorization of the\nsubmatrices assigned, but merging those results into a single\nSVD approximation that covers all the input data it received. At the\nend, a global merge step across all the nodes is performed, giving the\nglobal SVD approximation for original matrix $A$. The\n\\cref{alg:svd-dist} describes the overall distributed algorithm: \\\\\n\n\\begin{algorithm}\n  \\label{alg:svd-dist}\n  \\caption{Distributed-SVD: Distributed SVD for LSI (global)}\n%\n  \\setstretch{1.35}\n  \\SetKwInOut{Input}{Input}\n  \\SetKwInOut{Output}{Output}\n  \\DontPrintSemicolon\n%\n    \\Input{Truncation factor $k$, queue of jobs $A= [A_1, A_2, \\dots ]$}\n%\n    \\Output{Matrices $U^{m \\times k}$ and $\\Sigma^{k \\times k}$, \n      from the SVD decomp. of $A$}\n%\n    \\For {\\textbf{all} (node $i$ in cluster)}\n    {\n      $B_i \\gets \\text{subset of the queue of jobs } [A_1,A_2,\\dots]$ \\;\n%\n      $P_i = (U_i,\\Sigma_i) \\gets \\func{SVD-Node}(k,B_i)$ \\;\n    }\n    $(U,\\Sigma) \\gets \\func{Reduce}(\\func{Merge-SVD},[P_1,P_2,\\dots])$ \\;\n%\n    return $(U, \\Sigma)$ \\;\n\\end{algorithm}\n\\hfill\n\nThe first important detail from the algorithm just shown, is that we\nare not calculating the matrix $V$ from the SVD factorization, how\ncome! Such detail is explained at the end of the last section. For the\nmoment, let us just say that such matrix is not required for our\npurposes. \\\\\n\nWe can also observe the map-reduce pattern in this algorithm, with the map\npart being the iteration done over $p$ nodes (in parallel); and the\nreduce part being the final merge of those partial results. The\n\\cref{alg:svd-dist-node} describes the part done inside each node.\n\n\\begin{algorithm}\n  \\label{alg:svd-dist-node}\n  \\caption{SVD-Node: Distributed SVD for LSI (node)}\n%\n  \\setstretch{1.35}\n  \\SetKwInOut{Input}{Input}\n  \\SetKwInOut{Output}{Output}\n  \\DontPrintSemicolon\n%\n  \\Input{Truncation factor $k$, queue of jobs $A_1,A_2,\\dots$}\n%\n  \\Output{Matrices $U^{m \\times k}$ and $\\Sigma^{k\n        \\times k}$, from the SVD  of $[A_1,A_2,\\dots]$}\n%\n  $P = (U,\\Sigma) \\gets 0^{m \\times k} 0^{k \\times k}$ \\;\n%\n  \\For {each job $A_i$}\n  {\n    $\\prim{P} = (\\prim{U},\\prim{\\Sigma}) \\gets \\func{Basecase-SVD}(k,A_i)$ \\;\n%\n    $P = (U^{m \\times k},\\Sigma^{k \\times k}) \\gets \\func{Merge-SVD}(k, P, \\prim{P})$ \\;\n  }\n%\n  return $(U,\\Sigma)$ \\;\n\\end{algorithm}\n\\hfill\n\nIt is important to realize that the iteration in this\n\\cref{alg:svd-dist-node} is done serially, but that the procedure\n$\\func{Basecase-SVD}$ that resolves the SVD of a\nmatrix that fits in memory (base case), internally may exploit the\nmulticore or vectorial capabilities of the node computer. 
This\nprocedure serves as a black box SVD calculator, and \\Rehurek mentions\nat least two algorithms which can be plugged in its place: \\\\\n\n\\begin{enumerate}\n\\item The Lanczos algorithm as implemented by SVDLIBC (\\cite{svdlibc}),\n  which in turn is based on SVDPACKC written by Berry et al.\n  (\\cite{svdpackc}), which in turn is based on its Fortran77\n  predecessor SVDPACK (\\cite{svdpack}). All of them are ultimately based\n  on the seminal paper by Berry \\cite{berry92} (which in turn comes from\n  his PhD thesis \\cite{berry91}). \\\\\n\\item A custom stochastic algorithm based on the work of Halko et al.\n  (see \\cite{halko11}).\n\\end{enumerate}\n\\hfill\n\nFor the scope of this project, we considered it appropriate to focus only\non the Lanczos-based algorithm, as that is essentially what we\ndescribed in the previous chapter. In that sense, the work of \\Rehurek\nis interesting because, by using the divide-and-conquer strategy for\nthe SVD problem, he leverages decades of research and the\nnumerical accuracy of the work done by Berry et al. At the same time,\nhis key contribution is the procedure $\\func{Merge-SVD}$, which\nwe will describe in later sections. \\\\\n\n", "meta": {"hexsha": "3589a82772411f72d0744f9e02cd07290317e2f9", "size": 4369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "svd-dist-alg.tex", "max_stars_repo_name": "rzavalet/svd-lsi-project-master", "max_stars_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "svd-dist-alg.tex", "max_issues_repo_name": "rzavalet/svd-lsi-project-master", "max_issues_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "svd-dist-alg.tex", "max_forks_repo_name": "rzavalet/svd-lsi-project-master", "max_forks_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.7142857143, "max_line_length": 88, "alphanum_fraction": 0.7141222248, "num_tokens": 1331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431679972357831, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5756455823019613}}
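To make the map-reduce structure above concrete, here is a minimal runnable sketch (illustration only: NumPy's dense SVD stands in for the black-box Lanczos solver, and the naive merge-by-refactorization below is only a placeholder for the actual Merge-SVD procedure, which is described later):

\begin{lstlisting}[language=Python]
# Sketch of the map-reduce structure of Distributed-SVD.
# basecase_svd stands in for the black-box solver; merge_svd naively
# refactorizes the concatenation of the scaled partial bases U_i*Sigma_i.
from functools import reduce
import numpy as np

def basecase_svd(k, A):
    U, s, _ = np.linalg.svd(A, full_matrices=False)  # V is discarded
    return U[:, :k], s[:k]

def merge_svd(k, P1, P2):
    (U1, s1), (U2, s2) = P1, P2
    return basecase_svd(k, np.hstack([U1 * s1, U2 * s2]))

def svd_node(k, jobs):                  # serial loop inside one node
    return reduce(lambda p, A: merge_svd(k, p, basecase_svd(k, A)),
                  jobs[1:], basecase_svd(k, jobs[0]))

A = np.random.rand(100, 40)
jobs = np.array_split(A, 4, axis=1)     # A = [A1 | A2 | A3 | A4]
P1 = svd_node(5, jobs[:2])              # node 1 (runs in parallel)
P2 = svd_node(5, jobs[2:])              # node 2 (runs in parallel)
U, s = merge_svd(5, P1, P2)             # global reduce step
# Deviation from the exact top-5 singular values of A is small but
# nonzero, since each merge truncates to k columns.
print(np.abs(s - np.linalg.svd(A, compute_uv=False)[:5]).max())
\end{lstlisting}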
{"text": "\\section{A Refined Relational Database}\\label{sec:database}\n\nNext, we use bounded refinements to develop a library\nfor relational algebra, which we use to enable generic,\ntype safe database queries.\n%\nA relational database stores data in \\emph{tables},\nthat are a collection of \\emph{rows}, which in turn \nare \\emph{records} that represent a unit of data \nstored in the table.\nThe tables's \\textit{schema} describes the types of \nthe values in each row of the table.\nFor example, the table in Figure~\\ref{fig:moviedb} organizes \ninformation about movies, and has the schema:\n%\n\\begin{code}\n Title:String, Dir:String, Year:Int, Star:Double\n\\end{code}\n\n\\begin{table}[h]\n\\captionsetup{justification=centering}\n\\caption{Example entries for Movies Database.}\n\\label{fig:moviedb} \n\\centering\n$$\n\\begin{tabular}{| l | l| r | r |}\n  \\hline\n  \\textbf{Title} & \\textbf{Director} & \\textbf{Year} & \\textbf{Star} \\\\\n  \\hline  \n  ``Birdman'' & ``I\\~{n}\\'{a}rritu''   & 2014 & 8.1\\\\\n  ``Persepolis''  & ``Paronnaud'' & 2007 & 8.0 \\\\ \n  \\hline\n\\end{tabular}\n$$\n\\end{table}\n\nFirst, we show how to write type safe extensible \nrecords  that represent rows, and use them to \nimplement database tables~(\\S~\\ref{subsec:records}). \n%\nNext, we show how bounds let us specify type \nsafe relational operations and how they may be \nused to write safe database queries~(\\S~\\ref{subsec:relational}).\n\n\\subsection{Rows and Tables}\\label{subsec:records}\n\nWe represent the rows of a database with dictionaries, \nwhich are maps from a set of keys to values.\nIn the sequel, each key corresponds to a column and \nthe mapped value corresponds to a valuation of the column \nin a particular row.\n\n\\mypara{A dictionary} @Dict <r> k v@ maps a key @x@ of \ntype @k@ to a value of type @v@ that satisfies the property @r x@\n%\n% \\NV CUT\n\\begin{code}\n  type Range k v = k -> v -> Bool\n   \n  data Dict k v <r :: Range k v> = D {\n      dkeys :: [k]\n    , dfun  :: x:{k | x SetMem elts dkeys} -> v <r x>\n    }\n\\end{code}      \n%\nEach dictionary @d@ has a domain @dkeys@ \n\\ie the list of keys for which @d@ is defined \nand a function @dfun@ that is defined only on\nelements @x@ of the domain @dkeys@.\n%\nFor each such element @x@, @dfun@ returns a value that satisfies the\nproperty @r x@.\n\n\\mypara{Propositions about the theory of sets} can be decided\nefficiently by modern SMT solvers. 
Hence we use such \npropositions within refinements as demonstrated in chapter~\\ref{chapter:tool}.\n% \nThe measures (logical functions) @elts@ and @keys@ \nspecify the set of keys in a list and a dictionary \nrespectively:\n%\n\\begin{code}\n  elts        :: [a] -> Set a\n  elts ([])   = Set_emp\n  elts (x:xs) = {x} Set_cup elts xs\n\n  keys        :: Dict k v -> Set k\n  keys d      = elts (dkeys d) \n\\end{code}\n\n\\mypara{Domain and Range of dictionaries.}\n%\nIn order to precisely define the domain (\\eg columns) and range (\\eg values)\nof a dictionary (\\eg row), we define the following aliases:\n%\n% NV CUT\n\\begin{code}\n  type RD k v <dom :: Dom k v, rng :: Range k v>\n    = {v:Dict <rng> k v | dom v}\n\n  type Dom k v = Dict k v -> Bool \n\\end{code}\n%\nWe may instantiate @dom@ and @rng@ with predicates that precisely describe\nthe values contained within the dictionary.\n%\nFor example,\n%\n\\begin{code}\n  RD < \\d -> keys d = {\"x\"}, \\k v-> 0 < v> String Int\n\\end{code}\n%\n%% instantiating @dom@ with \n%% \\hbox{@\\d -> keys d = {\"x\"}@}\n%% and @rng@ with the predicate\n%% \\hbox{@\\k v -> k = \"x\" => v > 0@}\n%\ndescribes dictionaries with a single field @\"x\"@ \nwhose value (as determined by @dfun@) is strictly \ngreater than 0.\n%\nWe will define schemas by appropriately \ninstantiating the abstract refinements \n@dom@ and @rng@.\n\n\n\\mypara{An empty dictionary} has an empty domain \nand a function that will never be called:\n%\n\\begin{code}\n  empty   :: RD <emptyRD, rFalse> k v\n  empty   = D [] (\\x -> error \"calling empty\")\n\n  emptyRD = \\d -> keys d == Set_emp\n  rFalse  = \\k v -> false\n\\end{code}\n%% %\n%% Thus, @empty@'s range satisfies \\textit{any} predicate -- that is,\n%% it satisfies @false@.\n \n\\mypara{We define singleton maps} as dependent pairs \n@x := y@ which denote the mapping from @x@ to @y@:\n%\n\\begin{code}\n  data P k v <r :: Range k v> \n    = (:=) {pk :: k, pv :: v <r pk>}\n\\end{code}\n%\nThus, @key := val@ has type \\hbox{@P <r> k v@} only if \n@r key val@.\n\n\\paragraph{A dictionary may be extended} with a singleton\nbinding (which maps the new key to its new value). \n%\n\\begin{code}\n  (+=)   :: bind:P<r> k v \n         -> dict:RD<pTrue, r> k v \n         -> RD <addKey (pk bind) dict, r> k v\n \n  (k := v) += (D ks f) \n         = D (k:ks) (\\i -> if i == k then v else f i)\n  \n  addKey = \\k d d' -> keys d' == {k} Set_cup keys d\n  pTrue  = \\_ -> True\n\\end{code}\n%\nThus, @(k := v)  += d@ evaluates to \na dictionary @d'@ that extends @d@ \nwith the mapping from @k@ to @v@.\n%\nThe type of @(+=)@ constrains the new binding @bind@, \nthe old dictionary @dict@ and the returned value to have \nthe same range invariant @r@.\n%\nThe return type states that the output dictionary's \ndomain is that of the domain of @dict@ extended by \nthe new key @(pk bind)@.\n\n\\mypara{To model a row in a table} \\ie a schema, \nwe define the unrefined (Haskell) type @Schema@, \nwhich is a dictionary mapping @String@s, \\ie the \nnames of the fields of the row, to elements of \nsome universe @Univ@ containing @Int@, @String@ \nand @Double@.\n%\n(A closed universe is not a practical restriction; \nmost databases support a fixed set of types).\n% \n\\begin{code}\n  data Univ   = I Int | S String | D Double\n\n  type Schema = RD String Univ\n\\end{code}\n\n\\mypara{We refine Schema} with concrete instantiations\nfor @dom@ and @rng@, in order to recover precise \nspecifications for a particular database. 
For example, \n@MovieSchema@ is a refined @Schema@ that describes the \nrows of the Movie table in Figure~\\ref{fig:moviedb}:\n%\n%\n\\begin{code}\n  type MovieSchema = RD <md, mr> String Univ\n\n  md = \\d -> keys d={\"year\",\"star\",\"dir\",\"title\"}\n  mr = \\k v -> \n       (k = \"year\"  => isI v && 1888 < toI v)\n     && (k = \"star\"  => isD v && 0 <= toD v <= 10)\n     && (k = \"dir\"   => isS v)\n     && (k = \"title\" => isS v)\n\n  isI (I _)   = True \n  isI _       = False \n\n  toI       :: {v: Univ | isI v} -> Int\n  toI (I n) = n\n\\end{code}\n%\nThe predicate @md@ describes the \\emph{domain} of the movie schema,\nrestricting the keys to exactly @\"year\"@, @\"star\"@, @\"dir\"@, and @\"title\"@.\n%\nThe range predicate @mr@ describes the types of the values in the schema:\n%\na dictionary of type @MovieSchema@ must map \n@\"year\"@ to an @Int@,\n@\"star\"@ to a @Double@, \nand @\"dir\"@ and @\"title\"@ to @String@s.\n%\nThe range predicate may be used to impose additional constraints on the values\nstored in the dictionary.\n%\nFor instance, @mr@ restricts the year to be not only an integer but\nalso greater than @1888@.\n%\n%%%Because refinements in \\toolname are drawn from a decidable logic,\n%%%refining the range with logical predicates comes ``for free''.\n%%%%\n%%%A more heavyweight dependent type system, on the other hand, would\n%%%require the programmer to manually thread proofs of these range\n%%%predicates throughout the code.\n\n\\mypara{We populate the Movie Schema} by extending the\nempty dictionary with the appropriate pairs of fields and \nvalues. For example, here are the rows from the table\nin Figure~\\ref{fig:moviedb}\n%\n\\begin{code}\n  movie1, movie2 :: MovieSchema\n  movie1 = (\"title\" := S \"Persepolis\")\n        += (\"dir\"   := S \"Paronnaud\") \n        += (\"star\"  := D 8) \n        += (\"year\"  := I 2007) \n        += empty\n\n  movie2 = (\"title\" := S \"Birdman\") \n        += (\"star\"  := D 8.1) \n        += (\"dir\"   := S \"Inarritu\")\n        += (\"year\"  := I 2014) \n        += empty\n\\end{code}\n%\nTyping @movie1@ (and @movie2@) as @MovieSchema@\nboils down to proving\n%\nthat @keys movie1 = {\"year\", \"star\", \"dir\", \"title\"}@\nand that each key is mapped to an appropriate value \nas determined by @mr@.\n%\nFor example, declaring @movie1@'s year to be @I 1888@\nor even misspelling @\"dir\"@ as @\"Dir\"@\nwill cause the @movie1@ to become ill-typed.\n%\nAs the (sub)typing relation depends on logical \nimplication (unlike in @HList@ based approaches \n\\cite{heterogeneous}) key ordering \\emph{does not} \naffect type-checking;\n%\nin @movie1@ the star field is added before the \ndirector, while @movie2@ follows the opposite order.\n%%%On the contrary, with dependent types, proving that two heterogeneous\n%%%collections constructed with different ordering have the same type\n%%%would require an additional (manually-supplied) equality proof.\n\n\\mypara{Database Tables} are collections of rows, \n\\ie collections of refined dictionaries.\n%\nWe define a type alias @RT s@ (Refined Table) for \nthe list of refined dictionaries from the field \ntype @String@ to the @Univ@erse.\n%\n\\begin{code}\n  type RT (s :: {dom::TDom, rng::TRange}) \n    = [RD <s.dom, s.rng> String Univ]\n\n  type TDom   = Dom   String Univ\n  type TRange = Range String Univ\n\\end{code}\n%\nFor brevity we pack both the domain and the range \nrefinements into a record @s@ that describes the \nschema refinement; \\ie each row dictionary has \ndomain 
@s.dom@ and range @s.rng@.\n\nFor example, the table from Figure~\\ref{fig:moviedb}\ncan be represented as a type @MoviesTable@ which \nis an @RT@ refined with the domain and range @md@ \nand @mr@ described earlier (\\S~\\ref{subsec:records}):\n%\n\\begin{code}\n  type MoviesTable = RT {dom = md, rng = mr}\n   \n  movies :: MoviesTable \n  movies = [movie1, movie2]\n\\end{code}\n\n\\subsection{Relational Algebra}\\label{subsec:relational}\n\nNext, we describe the types of the relational algebra operators\nwhich can be used to manipulate refined rows and tables.\nFor space reasons, we show the \\emph{types} of the basic \nrelational operators; their (verified) implementations \ncan be found online~\\cite{liquidhaskellgithub}.\n%\n\\begin{code}\n  union   :: RT s -> RT s -> RT s\n  diff    :: RT s -> RT s -> RT s\n  select  :: (RD s -> Bool) -> RT s -> RT s\n  project :: ks:[String] -> RTSubEqFlds ks s \n          -> RTEqFlds ks s\n  product :: ( Disjoint s1 s2, Union s1 s2 s\n             , Range s1 s, Range s2 s) \n          => RT s1 -> RT s2 -> RT s\n\\end{code}\n\n\\mypara{{Union} and {diff}} compute the union \nand difference, respectively, of the (rows of) two tables.\n%\nThe types of @union@ and @diff@ state that the \noperators work on tables with the same schema \n@s@ and return a table with the same schema.\n\n\\mypara{{select}} takes a predicate @p@\nand a table @t@ and filters the rows of @t@ \nto those that satisfy @p@.\n%\nThe type of @select@ ensures that @p@ will \nnot reference columns (fields) that are\nnot mapped in @t@, as the predicate @p@\nis constrained to require a dictionary \nwith schema @s@.\n\n\\mypara{{project}} takes\na list of @String@ fields @ks@ \nand a table @t@ and projects \nexactly the fields @ks@ from \neach row of @t@.\n%\n@project@'s type states that for \nany schema @s@, the input table \nhas type @RTSubEqFlds ks s@ \n\\ie its domain should have at \nleast the fields @ks@ and the \nresult table has type @RTEqFlds ks s@, \n\\ie its domain has exactly the elements @ks@. \n%\n\\begin{code}\n  type RTSubEqFlds ks s = RT s{dom = \\z -> elts ks Set_sub  keys z}\n\n  type RTEqFlds ks s    = RT s{dom = \\z -> elts ks = keys z}\n\\end{code}\n% \nThe range of the argument and the result tables \nis the same and equal to @s.rng@.\n\n\\mypara{{product}} takes two tables \nas input and returns their (Cartesian) \nproduct.\n%\nIt takes two Refined Tables with schemata \n@s1@ and @s2@ and returns a Refined Table \nwith schema @s@. 
Intuitively, the output\nschema is the ``concatenation'' of the input\nschemata; we formalize this notion using bounds:\n%\n(1)~@Disjoint s1 s2@ says the domains of \n    @s1@ and @s2@ should be disjoint,\n%\n(2)~@Union s1 s2 s@ says the domain of @s@ \n    is the union of the domains of @s1@ and @s2@, \n%\n(3)~@Range s1 s@ (\\resp @Range s2 s@) says \n    the range of @s1@ should imply the result \n    range @s@; together the two imply the output\n    schema @s@ preserves the type of each key in \n    @s1@ or @s2@.\n%\n\\begin{code}\n  bound Disjoint s1 s2 = \\x y -> \n    s1.dom x => s2.dom y => keys x Set_cap keys y == Set_emp\n   \n  bound Union s1 s2 s = \\x y v -> \n    s1.dom x => s2.dom y \n             => keys v == keys x Set_cup keys y \n             => s.dom v\n\n  bound Range si s = \\x k v -> \n    si.dom x => k SetMem keys x => si.rng k v => s.rng k v \n\\end{code}\n\n%% We note that none of these restrictions can be \n%% expressed using unbounded Abstract Refinement Types.\n\nThus, bounded refinements  enable the precise \ntyping of  relational algebra operations.\nThey let us describe precisely when union, \nintersection, selection, projection and products \ncan be computed, and let us determine, at compile\ntime, the exact ``shape'' of the resulting tables.\n\n% \\subsection{Data Base Queries}\\label{subsec:dbclient}\n\n\\mypara{We can query Databases} by writing functions \nthat use the relational algebra combinators. \n%\nFor example, here is a query that returns the \n``good'' titles -- with more than 8 stars -- \nfrom the @movies@ table\n\\footnote{More example queries can be found online~\\cite{liquidhaskellgithub}}\n%\n\\begin{code}\n  good_titles = project [\"title\"] $ select (\\d ->\n                  toDouble (dfun d $ \"star\") > 8\n                ) movies\n\\end{code}\n%\n%% The above query selects the movies that have more\n%% than 8 stars and projects only their @\"title\"@ field.\n%\n\nFinally, note that our entire library -- including \nrecords, tables, and relational combinators -- is \nbuilt using vanilla Haskell \\ie without \\emph{any} \ntype level computation. \n%\nAll schema reasoning happens at the granularity of \nthe logical refinements. 
That is, if the refinements\nare erased from the source, we still have a well-typed\nHaskell program but, of course, lose the safety \nguarantees about operations (\\eg ``dynamic'' key lookup) \nnever failing at run-time.\n\n\n \n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End: \n", "meta": {"hexsha": "be0f793b767f6ecbd6ba0e71d9b22814212c6d71", "size": 13863, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/boundedrefinements/database.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/boundedrefinements/database.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/boundedrefinements/database.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 30.2026143791, "max_line_length": 78, "alphanum_fraction": 0.6707783308, "num_tokens": 4150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.743167997235783, "lm_q1q2_score": 0.5756455745676576}}
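As an illustration of what the refined types guarantee statically, here is a small, dynamically checked Python analogue (hypothetical sketch; the names MOVIE_DOM, movie_rng and make_row are ours): the domain predicate @md@ and range predicate @mr@ become runtime assertions, whereas the refinement types discharge them at compile time.

\begin{lstlisting}[language=Python]
# Dynamically checked analogue of the refined MovieSchema row:
# the domain check mirrors md, the per-key range check mirrors mr.
MOVIE_DOM = {"title", "dir", "year", "star"}

def movie_rng(key, val):
    if key == "year":
        return isinstance(val, int) and val > 1888
    if key == "star":
        return isinstance(val, float) and 0 <= val <= 10
    return isinstance(val, str)          # "dir" and "title"

def make_row(**fields):
    assert set(fields) == MOVIE_DOM, "domain mismatch (md)"
    assert all(movie_rng(k, v) for k, v in fields.items()), "range mismatch (mr)"
    return fields

movies = [
    make_row(title="Persepolis", dir="Paronnaud", star=8.0, year=2007),
    make_row(title="Birdman", dir="Inarritu", star=8.1, year=2014),
]
# select + project, in the spirit of the good_titles query:
print([r["title"] for r in movies if r["star"] > 8])   # ['Birdman']
\end{lstlisting}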
{"text": "\\chapter{OPTIMIZATION}\n\\section{Introduction.}Definition: an act, process, or methodology of making something (as a design, system, or decision) as fully perfect, functional, or effective as possible. \\\\\n\\textbf{In Business:}\\\\\nFinding an alternative with the most cost effective or highest achievable performance under the given constraints, by maximizing desired factors and minimizing undesired ones.Practice of optimization is restricted by the lack of full information, and the lack of time to evaluate what information is available.\\\\\n\\textbf{In Science:}\\\\\nIn mathematics, computer science and operations research, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives.In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics.More generally, optimization includes finding \"best available\" values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains.\\\\\n\\textbf{In Engineering:}\\\\\nEngineering Optimization is the subject which uses optimization techniques to achieve design goals in engineering. It is sometimes referred to as design optimization.\\\\\nLet us take an example.\\\\\nSuppose we want to buy a car. If we want the car with maximum luxury and minimum cost, this means we are considering both the factors at the time of purchase. So, this is optimization.If we are looking for the car only with maximum luxury, then this is \tmaximization.And if, we are looking for a car with minimum cost, then this is minimization.\\\\\n\\section{Types of optimization.}\n\\subsection{Single-objective optimization.}The term single-objective corresponds to the fact that we have just one objective to optimize i.e. either maximize it or minimize it.\\\\Considering it the other way single-objective optimization can also we viewed as a combination of various objectives into an overall objective,where this overall objective is optimized. So, here the individual identity of the objectives is lost and there is no guarantee on the performance of the individual objectives as they are not considered separately. So, for these reasons we go for multi-objective optimization.\\\\For example,getting a job. Here,if we consider just that the job should offer maximum salary and no other constraints and objectives, then we go for maximization. Such cases are single-objective based.\n\\subsection{Multi-objective optimization.}\n\\subsubsection{4.2.2.1 Real-life situation.}In real-life we face situations where increasing value of one parameter decreases the value of the other parameter. So, actually there is a contradictory situation. For example, we want to buy a house. The house should be well-furnished with high-class facilities and at the same time the purchasing price should be less. So, here we find an inverse situation where the comparison is between luxury vs. cost, where we want increased luxury and decreased cost.\\\\\nBut, in actual practice it is not possible. So, we have to make a compromise between luxury and cost so that to a certain limit both are optimal. 
We need to find the optimal luxury at the optimal price. Some objectives are in competition, so a simultaneous optimization of them all is not possible, only a trade-off among them.\\\\\nSuch situations involve multiple objectives, like luxury and cost in the above example, so we study multi-objective optimization.\n\\subsubsection{4.2.2.2 Principles of MOO.}A multi-objective optimization problem involves a number of objective functions which are to be either minimized or maximized. As in the single-objective optimization problem, the multi-objective optimization problem usually has a number of constraints which any feasible solution (including the optimal solution) must satisfy. In the following, we state the multi-objective optimization problem (MOOP) in its general form:\\\\\nMinimize/Maximize $f_{m}(x)$, \\quad $m = 1, 2, \\ldots, M$;\\\\\nsubject to $g_{j}(x) \\geq 0$, \\quad $j = 1, 2, \\ldots, J$;\\\\\n$h_{k}(x) = 0$, \\quad $k = 1, 2, \\ldots, K$.\\\\\n\\subsubsection{4.2.2.3 Example of MOO.}Minimize the ZDT1 function:\\\\\nIt is one of the benchmark functions commonly used to test the performance of an algorithm for MOO.\nThe ZDT1 function has a convex Pareto-optimal front. The objective functions are:\\\\\n$f_{1}(x) = x_{1}$\\\\\n$f_{2}(x) = g(x)\\left(1 - \\sqrt{x_{1}/g(x)}\\right)$\\\\\nwhere $g(x)$ is defined as $g(x) = 1 + 9\\left(\\sum_{i=2}^{n} x_{i}\\right)/(n-1)$.\\\\\nIn this ZDT1 function, thirty design variables $x_{i}$ were chosen ($n=30$). Each design variable ranges in value from 0 to 1. The Pareto-optimal front appears when $g = 1.0$.\\\\\n\\subsubsection{4.2.2.4 Dominated and Non-dominated set.  }\n\\subsubsection{4.2.2.5 MOO approaches.}\n\\subsubsection{4.2.2.6 Basic MOPSO algorithm.}\n\\subsubsection{4.2.2.7 Applications of MOO.}\n\n", "meta": {"hexsha": "ce768562b0af667d199819015cfa878a4fddcce9", "size": 5229, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "future-work_1.tex", "max_stars_repo_name": "Ace139/final-year-thesis", "max_stars_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "future-work_1.tex", "max_issues_repo_name": "Ace139/final-year-thesis", "max_issues_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "future-work_1.tex", "max_forks_repo_name": "Ace139/final-year-thesis", "max_forks_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 158.4545454545, "max_line_length": 822, "alphanum_fraction": 0.7758653662, "num_tokens": 1199, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.575531476520564}}
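A minimal evaluation of the ZDT1 objectives as reconstructed above (sketch; $n = 30$ design variables in $[0, 1]$, per the text):

\begin{lstlisting}[language=Python]
import numpy as np

def zdt1(x):
    # Objectives as reconstructed above; x has n entries in [0, 1].
    f1 = x[0]
    g = 1 + 9 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1 - np.sqrt(f1 / g))
    return f1, f2

x = np.zeros(30)
x[0] = 0.25            # g = 1, so this point lies on the Pareto front
print(zdt1(x))         # (0.25, 0.5)
\end{lstlisting}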
{"text": "\\subsection{Free Damped Vibrations ($b > 0$)}\r\nWe need to consider three cases where the discriminant $\\Delta = b^2 - 4mk$ is positive, zero, and negative. \r\n\r\n\\input{./higherOrder/freeVibrs/overdamped.tex}\r\n\\input{./higherOrder/freeVibrs/critically_damped.tex}\r\n\\input{./higherOrder/freeVibrs/underdamped.tex}\r\n\r\n\\ifodd\\includeHigherOrderExamples\\input{./higherOrder/freeVibrs/freeDamped_example.tex}\\fi", "meta": {"hexsha": "013ce33ec847af177d6d5b5387f55d7e265d79c8", "size": 404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/higherOrder/freeVibrs/freeDamped.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "diffEq/higherOrder/freeVibrs/freeDamped.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffEq/higherOrder/freeVibrs/freeDamped.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.5, "max_line_length": 110, "alphanum_fraction": 0.7772277228, "num_tokens": 119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397348, "lm_q2_score": 0.727975443004307, "lm_q1q2_score": 0.5754866661270539}}
{"text": "% Copyright 2018-2021 Melvin Eloy Irizarry-Gelp\u00ed\n\\setcounter{chapter}{5}\n\\chapter{Curved Mirrors}\n%\nIn this experiment you will learn about the properties of images produced by curved mirrors.\n%\n\\section{Preliminary}\n%\nA mirror reflects light. Mirrors can be flat or curved. Flat mirrors are commonly found in bathrooms. Curved mirrors can either be \\textbf{concave} or \\textbf{convex}. As you will see, a \\textbf{concave mirror} can produce a \\textbf{real image}, in the sense that the image can be projected on a screen.\n\nLight is emitted from an \\textbf{object}. This light travels through space and then bounces of a mirror, potentially producing an \\textbf{image} somewhere in space. There is a relationship between the \\textbf{distance between the object and the mirror} ($d_{\\text{O}}$) and the \\textbf{distance between the image and the mirror} ($d_{\\text{I}}$). This relationship involves a property of the mirror called the \\textbf{focal length} ($f$):\n\\begin{equation}\n    \\frac{1}{d_{\\text{O}}} + \\frac{1}{d_{\\text{I}}} = \\frac{1}{f}\n\\end{equation}\nThis is a curious relation because it involves the inverse quantities, and not the direct quantities themselves. Solving for the inverse image distance provides another way of writing this relation:\n\\begin{equation}\n    \\frac{1}{d_{\\text{I}}} = -\\frac{1}{d_{\\text{O}}} + \\frac{1}{f}\n\\end{equation}\nIf you measure $d_{\\text{O}}$ and $d_{\\text{I}}$, then the chart with $1/d_{\\text{I}}$ in the vertical axis and $1/d_{\\text{O}}$ in the horizontal axis should have a \\textbf{linear shape}. The \\textbf{slope} of the line should be close to $-1$, and the value of the \\textbf{intercept} should be close to the inverse of the focal length of the mirror used.\n\nIn practice it is easier to measure \\textbf{position} than to measure distance. You have three things: the \\textbf{object}, the \\textbf{image}, and the \\textbf{mirror}. Thus you have three positions along the track: the position $x_{\\text{O}}$ of the object, the position $x_{\\text{I}}$ of the image, and the position $x_{\\text{M}}$ of the mirror. The distance between the object and the mirror is simply\n\\begin{equation}\n    d_{\\text{O}} = \\left\\vert x_{\\text{O}} - x_{\\text{M}} \\right\\vert\n\\end{equation}\nSimilarly, the distance between the image and the mirror is given by\n\\begin{equation}\n    d_{\\text{I}} = \\left\\vert x_{\\text{I}} - x_{\\text{M}} \\right\\vert\n\\end{equation}\nThe absolute value is used because distance should be positive.\n%\n\\subsection{Magnification}\n%\nCurved mirrors produce images that look very different from the object that is emitting the light. Some differences can involve a change in \\textbf{orientation}. For example, left and right can be switched, as is common with plane mirrors. Other differences can be a change in \\textbf{size}. That is, the size of an image can be smaller or larger than the object. There is a quantity called \\textbf{magnification} that is used to describe how the size of an image compares to the size of the object. Let $h_{\\text{I}}$ be the \\textbf{height of the image}, and $h_{\\text{O}}$ be the \\textbf{height of the object}. The magnification $m$ of the mirror is given by the following ratio:\n\\begin{equation}\n    m = \\frac{h_{\\text{I}}}{h_{\\text{O}}}\n\\end{equation}\nNote that magnification has no units because it is the ratio of two lengths. 
There is another definition of the magnification that involves the distances from the mirror to the object and the image:\n\\begin{equation}\n    m = \\frac{d_{\\text{I}}}{d_{\\text{O}}}\n\\end{equation}\nThis particular definition is missing a minus sign that is traditionally used to denote a change in orientation. For practical purposes, you can ignore that minus sign.\n%\n\\section{Experiment}\n%\nThe experiment with the concave mirror consisted of recording the position of the mirror, the object, and the screen where the sharp image is projected. You also used a ruler to record the height of the object and the image.\n%\n\\section{Analysis}\n%\nWith the position of the object, mirror, and image you can calculate the distances $d_{\\text{O}}$ and $d_{\\text{I}}$. Then you can make a chart of $1 / d_{\\text{I}}$ versus $1 / d_{\\text{O}}$. The expected relation is\n\\begin{equation}\n    \\frac{1}{d_{\\text{I}}} = -\\frac{1}{d_{\\text{O}}} + \\frac{1}{f}\n\\end{equation}\nThe slope of the best-fit line should be:\n\\begin{equation}\n    \\text{slope} = -1\n\\end{equation}\nNote that the slope does not have units. The intercept of the best-fit line should be:\n\\begin{equation}\n    \\text{intercept} = \\frac{1}{\\text{focal length}}\n\\end{equation}\nFrom the intercept you can find an experimental estimate of the focal length of the mirror:\n\\begin{equation}\n    \\text{focal length} = \\frac{1}{\\text{intercept}}\n\\end{equation}\nThe expected value for the focal length of the concave mirror used is 20 cm.\n\nWith the distances $d_{\\text{O}}$ and $d_{\\text{I}}$ you can calculate the magnification:\n\\begin{equation}\n    m = \\frac{d_{\\text{I}}}{d_{\\text{O}}}\n\\end{equation}\nYou can also calculate the magnification using the measured heights $h_{\\text{I}}$ and $h_{\\text{O}}$:\n\\begin{equation}\n    m = \\frac{h_{\\text{I}}}{h_{\\text{O}}}\n\\end{equation}\nBoth of these methods should give (relatively) consistent results.\n%\n\\section{My Data}\n%\nMy data consist of ten distinct configurations. As can be seen from Figure \\ref{figure.07.chart}, the expected linear behavior is observed between the inverse distances to the mirror. Furthermore, the slope and intercept agree with the expected values.\n\nYou can also see from Table \\ref{table.07.magnification} that the two ways of computing the magnification of the mirror give somewhat consistent results.\n%\n\\section{Your Data}\n%\nYou should have positions and heights for eight distinct configurations.\n%\n% \\newpage\n% \\section{Your Lab Report}\n% %\n% Your laboratory report should include the following:\n% \\begin{itemize}\n%     \\item A table like Table \\ref{table.07.magnification} with the magnification computed in two different ways.\n%     \\item A chart with $d_{\\text{I}}$ in the vertical axis, and $d_{\\text{O}}$ in the horizontal axis.\n%     \\item A chart with $1/d_{\\text{I}}$ in the vertical axis, and $1/d_{\\text{O}}$ in the horizontal axis. Include the best linear fit, and display the equation. 
See Figure \\ref{figure.07.chart}.\n%     \\item A table like Table \\ref{table.07.results} with your results and the percent differences.\n%     \\item Answer the following question: which method of calculating the magnification do you think is more accurate?\n% \\end{itemize}\n%\n\\newpage\n\\section{Tables}\n%\n\\begin{table}[ht]\n    \\centering\n    \\begin{tabular}{r|r|r}\n        $x_{\\text{O}}$ (cm) & $x_{\\text{M}}$ (cm) & $x_{\\text{I}}$ (cm) \\\\\n        \\hline\n        10 & 120 & 96.05 \\\\\n        10 & 112 & 87.35 \\\\\n        10 & 104 & 79.1 \\\\\n        10 & 96 & 70.15 \\\\\n        10 & 88 & 61.35 \\\\\n        10 & 80 & 52.55 \\\\\n        10 & 72 & 43.05 \\\\\n        10 & 64 & 32.5 \\\\\n        10 & 56 & 21.35 \\\\\n        60 & 90 & 36.2 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Raw Position Data}\n    \\label{table.07.position}\n\\end{table}\n%\n\\begin{table}[ht]\n    \\centering\n    \\begin{tabular}{r|r}\n        $h_{\\text{O}}$ (cm) & $h_{\\text{I}}$ (cm) \\\\\n        \\hline\n        2 & 0.5 \\\\\n        2 & 0.5 \\\\\n        2 & 0.5 \\\\\n        2 & 0.7 \\\\\n        2 & 0.7 \\\\\n        2 & 0.8 \\\\\n        2 & 0.9 \\\\\n        2 & 1.1 \\\\\n        2 & 1.4 \\\\\n        2 & 3.5 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Raw Height Data}\n    \\label{table.07.height}\n\\end{table}\n%\n\\begin{table}[ht]\n    \\centering\n    \\begin{tabular}{r|r}\n        $d_{\\text{O}}$ (cm) & $d_{\\text{I}}$ (cm) \\\\\n        \\hline\n        110 & 23.95 \\\\\n        102 & 24.65 \\\\\n        94 & 24.9 \\\\\n        86 & 25.85 \\\\\n        78 & 26.65 \\\\\n        70 & 27.45 \\\\\n        62 & 28.95 \\\\\n        54 & 31.5 \\\\\n        46 & 34.65 \\\\\n        30 & 53.8 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Distances to the mirror}\n    \\label{table.07.distance}\n\\end{table}\n%\n\\begin{table}[ht]\n    \\centering\n    \\begin{tabular}{r|r}\n        $h_{\\text{I}} / h_{\\text{O}}$ & $d_{\\text{I}} / d_{\\text{O}}$ \\\\\n        \\hline\n        0.25 & 0.22 \\\\\n        0.25 & 0.24 \\\\\n        0.25 & 0.26 \\\\\n        0.35 & 0.30 \\\\\n        0.35 & 0.34 \\\\\n        0.4 & 0.39 \\\\\n        0.45 & 0.45 \\\\\n        0.55 & 0.58 \\\\\n        0.7 & 0.75 \\\\\n        1.75 & 1.79 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Magnification}\n    \\label{table.07.magnification}\n\\end{table}\n%\n\\begin{table}[ht]\n    \\centering\n    \\begin{tabular}{l|r|r|r}\n        & \\textbf{Expected Value} & \\textbf{Observed Value} & \\textbf{P.D.} (\\%) \\\\\n        \\hline\n        \\textbf{Slope} & \\textminus 1 & \\textminus 0.95 & \\textminus 4.68 \\\\\n        \\textbf{Focal Length} & 20 cm & 20.02 cm & 0.11 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Results}\n    \\label{table.07.results}\n\\end{table}\n%\n\\newpage\n\\FloatBarrier\n\\section{Figures}\n%\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[scale=0.74]{image/07-mirrors/chart.pdf}\n    \\caption{Linear fit}\n    \\label{figure.07.chart}\n\\end{figure}\n%", "meta": {"hexsha": "0f46202940a891da054731966369f240a528bc8f", "size": 9016, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/07-mirrors.tex", "max_stars_repo_name": "meirizarrygelpi/phys-208L", "max_stars_repo_head_hexsha": "ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"chapter/07-mirrors.tex", "max_issues_repo_name": "meirizarrygelpi/phys-208L", "max_issues_repo_head_hexsha": "ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/07-mirrors.tex", "max_forks_repo_name": "meirizarrygelpi/phys-208L", "max_forks_repo_head_hexsha": "ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.1387559809, "max_line_length": 681, "alphanum_fraction": 0.6559449867, "num_tokens": 2830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.5753788899434238}}
{"text": "% !TEX root = ../main.tex\n% chktex-file 21\n% chktex-file 46\n\\section{Conclusion}%\n\\label{sec:conclusion}\n\nWe have now discussed the three main topics of this paper:\n\\begin{enumerate*}\n\t\\item How the structural properties of graphs can be described via spectral graph theory.\n\t\\item How graphs can be coarsened via the REC algorithm.\n\t\\item How to bound the effects of coarsening via RSS and what this implies for spectral clustering.\n\\end{enumerate*}\n\nBased on the work we presented, there are two main open questions for future research:\n\\begin{enumerate*}\n\t\\item The described RSS bound assumes a single application of the REC algorithm.\n\t\tFor multiple REC applications with a total reduction ratio of $r > \\frac{1}{2}$, the RSS bound still needs to be generalized.\n\t\\item Currently the RSS bound has only been applied to the analysis of spectral clustering.\n\t\tTo make the results more generally applicable, the implications for other graph algorithms, e.g.\\ graph convolutional neural networks, are still to be considered.\n\\end{enumerate*}\n", "meta": {"hexsha": "22343baf56d74ce5eda42cf62e78e106360d2e77", "size": 1043, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/content/chapter-conclusion.tex", "max_stars_repo_name": "Cortys/ml-seminar", "max_stars_repo_head_hexsha": "cfd3a0cb73ca54d90619159df058f021ac9c7101", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/content/chapter-conclusion.tex", "max_issues_repo_name": "Cortys/ml-seminar", "max_issues_repo_head_hexsha": "cfd3a0cb73ca54d90619159df058f021ac9c7101", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/content/chapter-conclusion.tex", "max_forks_repo_name": "Cortys/ml-seminar", "max_forks_repo_head_hexsha": "cfd3a0cb73ca54d90619159df058f021ac9c7101", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.6666666667, "max_line_length": 164, "alphanum_fraction": 0.7775647172, "num_tokens": 244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5753788851909373}}
{"text": "\\section{Pattern Matching by Example} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\label{sec:bg}\r\n\r\nPattern matching is an abstraction mechanism that provides syntax very close to \r\nmathematical notations. It allows the user tersely describe a (possibly \r\ninfinite) set of values accepted by the pattern. A \\emph{pattern} represents a \r\npredicate on values, and is usually more concise and readable than equivalent \r\nimperative code. An interesting peculiarity about patterns is that a given \r\npattern is rarely repeated in several places in code. This context sensitivity \r\nmakes them unattractive for code reuse through a dedicated predicate, since the \r\npredicate will typically only be used once.\r\n\r\nPattern matching is closely related to \\emph{algebraic data types} and \r\n\\emph{equational reasoning}. In ML and Haskell an \\emph{Algebraic Data Type} is \r\na data type each of whose values are picked from a disjoint sum of (possibly \r\nrecursive) data types, called \\emph{variants}. Each variant is marked with a \r\nunique symbolic constant called a \\emph{constructor}. Constructors provide a \r\nconvenient way of creating a value of its variant type as well as discriminating \r\namong variants through pattern matching. Consider a simple expression language:\r\n\\begin{center}\r\n$exp$ \\is{} $val$ \\Alt{} $exp+exp$ \\Alt{} $exp-exp$ \\Alt{} $exp*exp$ \\Alt{} $exp/exp$\r\n\\end{center}\r\n\r\n\\noindent\r\nThis can be represented in OCaml as an algebraic data type:\r\n\r\n\\begin{lstlisting}[language=Caml,keepspaces,columns=flexible]\r\ntype expr = Value  of int \r\n          | Plus   of expr * expr \r\n          | Minus  of expr * expr \r\n          | Times  of expr * expr \r\n          | Divide of expr * expr ;;\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nThe set of values described by such an algebraic data type is defined \r\ninductively as the least set closed under constructor functions of its variants.\r\nA simple evaluator of such expressions can be defined:\r\n\r\n\\begin{lstlisting}[language=Caml,keepspaces,columns=flexible]\r\nlet rec eval e = match e with \r\n            Value  v     -> v \r\n          | Plus  (a, b) -> (eval a) + (eval b) \r\n          | Minus (a, b) -> (eval a) - (eval b)\r\n          | Times (a, b) -> (eval a) * (eval b) \r\n          | Divide(a, b) -> (eval a) / (eval b) ;;\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nThere are two critical differences between algebraic data types and classes in \r\nobject-oriented languages: definition of an algebraic data type is \\emph{closed}, \r\nwhile its variants are \\emph{disjoint}. Closedness means that once we have listed \r\nall the variants a given algebraic data type may have we cannot extend it with \r\nnew variants without modifying its definition. Disjointedness means that a value \r\nof an algebraic data type belongs to exactly one of its variants. Neither is \r\nthe case in object-oriented languages. Classes are \\emph{extensible},\r\nsince new variants can be added through subclassing, as well as \r\n\\emph{hierarchical}, since variants are not necessarily disjoint and can form \r\nsubtyping relation between themselves. 
The above algebraic data type can be \r\nencoded in C++ as following:\r\n\r\n\\begin{lstlisting}[columns=flexible]\r\nstruct Expr { virtual @$\\sim$@Expr(){}};\r\nstruct Value  : Expr { int value; };\r\nstruct Plus   : Expr { Expr* e1; Expr* e2; }; ...\r\nstruct Divide : Expr { Expr* e1; Expr* e2; };\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nwhile a similar evaluator in our SELL is almost as terse as OCaml code:\r\n\r\n\\begin{lstlisting}[columns=flexible]\r\nint eval(const Expr* e)\r\n{\r\n  Match(e) {\r\n    Case(Value,  n)    return n;\r\n    Case(Plus,   a, b) return eval(a) + eval(b); \r\n    Case(Minus,  a, b) return eval(a) - eval(b);\r\n    Case(Times,  a, b) return eval(a) * eval(b); \r\n    Case(Divide, a, b) return eval(a) / eval(b);\r\n  } EndMatch\r\n}\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nAn expression \\code{e} used as an argument of a \\emph{matching expression} or a \r\n\\emph{match statement}, is usually referred to as \\emph{subject}, and its type \r\nas a \\emph{subject type}. We will also refer to the type of the value expected \r\nby a given pattern as a \\emph{target type}.\r\n\r\n\\emph{Polymorphic variants} in OCaml\\cite{garrigue-98} and \\emph{open data \r\ntypes} in Haskell\\cite{LohHinze2006} allow addition of new variants later. \r\nThese extensions are simpler however as they maintain the \r\ndisjointedness property: open data types do not introduce any subtyping relation, \r\nwhile the subtyping relation on polymorphic variants relates anonymous \r\ncombinations of variants and not the variants themselves. In contrast, \r\nthe \\emph{nominative subtyping} of object-oriented languages does not maintain \r\nthe disjointness property, making objects effectively belong to multiple classes. \r\n\r\nClosedness of algebraic data types is particularly useful for reasoning about \r\nprograms by case analysis and allows the compiler to perform an automatic \r\n\\emph{incompleteness} check -- test of whether a given match statement \r\ncovers all possible cases. Similar reasoning about programs involving extensible \r\ndata types is more involved as we are dealing with potentially open set of \r\nvariants. \\emph{Completeness} check in such scenario reduces to checking presence \r\nof a case that handles the static type of the subject. Absence of such a case,\r\nhowever, does not necessarily imply incompleteness, only potential incompleteness, \r\nas the answer will depend on the actual set of variants available at run-time.\r\n\r\nA related notion of \\emph{redundancy} checking arises from the \r\ntradition of using \\emph{first-fit} strategy in pattern matching. It warns the \r\nuser of any \\emph{case clause} inside a match statement that will \r\nnever be entered because of a preceding one being more general. Object-oriented \r\nlanguages, typically prefer \\emph{best-fit} strategy because it is not prone \r\nto errors where semantics of a statement might change depending on the ordering \r\nof preceding definitions. \r\n\r\nThe patterns used in functions \\codeocaml{eval} and \\codehaskell{eitherLift} to \r\nidentify and decompose a concrete variant of an algebraic data types are \r\ngenerally called \\emph{tree patterns} or \\emph{constructor patterns}. Their \r\nanalog in object-oriented languages is often referred to as \\emph{type pattern} \r\nsince it may involve type testing and type casting. 
Special cases of tree patterns  \r\nare \\emph{list patterns} and \\emph{tuple patterns} that make use of special list \r\nand tuple constructors \\codehaskell{:} and \\codehaskell{(,,...,)}.\r\n\r\nPattern matching can also be applied to built-in types.\r\nIn Haskell, a factorial can be defined by equations:\r\n\r\n\\begin{lstlisting}[language=Haskell]\r\nfactorial 0 = 1\r\nfactorial n = n * factorial (n-1)\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nHere 0 in the left hand side of the first \\emph{equation} is an example of a \r\n\\emph{value pattern} that will only match when the actual argument passed is 0. \r\nThe \\emph{variable pattern} \\codehaskell{n} in the left hand side of the second \r\nequation will match any value, \\emph{binding} variable \\codehaskell{n} to it in \r\nthe right hand side of equation. The \\emph{wildcard pattern} \\codehaskell{_}  \r\nwill match any value, neither binding it to a variable nor even obtaining it. \r\nValue patterns, variable patterns and wildcard patterns are  \r\ngenerally called \\emph{primitive patterns}. Patterns like variable and wildcard \r\npatterns that never fail to match are called \\emph{irrefutable}, in contrast to \r\n\\emph{refutable} patterns like value patterns, which may fail to match.\r\n\r\nIn Haskell 98\\cite{Haskell98Book} factorial could alternatively be defined as:\r\n\r\n\\begin{lstlisting}[language=Haskell]\r\nfactorial 0 = 1\r\nfactorial (n+1) = (n+1) * factorial n\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nThe \\codehaskell{(n+1)} pattern in the left hand side of equation is an example of \r\n\\emph{n+k pattern}. According to its informal semantics ``Matching an $n+k$ \r\npattern (where $n$ is a variable and $k$ is a positive integer literal) against \r\na value $v$ succeeds if $v \\ge k$, resulting in the binding of $n$ to $v-k$, and \r\nfails otherwise''\\cite{haskell98}. n+k patterns were introduced into Haskell to \r\nlet users express inductive functions on natural numbers in much the same way as \r\nfunctions defined through case analysis on algebraic data types. Besides \r\nsuccinct notation, this could facilitate automatic proof of \r\ntermination of such functions by the compiler.\r\nHowever, numerous debates over semantics and usefulness of n+k patterns\r\nresulted in their removal from the Haskell \r\n2010 standard\\cite{haskell2010}. Generalization of n+k patterns, called \r\n\\emph{application patterns} has been studied by Oosterhof\\cite{OosterhofThesis}. \r\nApplication patterns essentially treat n+k patterns as equations, while matching \r\nagainst them attempts to solve or validate the equation.\r\n\r\nOur library language supports generalized n+k patterns in a different form \r\n(\\textsection\\ref{sec:slv}). We do not restrict ourselves with equational view \r\nof the n+k patterns, but allow the user to specify suitable semantics.\r\nIn our library, we can define fast computation of Fibonacci numbers like this:\r\n\r\n\\begin{lstlisting}[keepspaces]\r\nint fib(int n)\r\n{\r\n    variable<int> m;\r\n    Match(n) {\r\n      When(1)         return 1;     \r\n      When(2)         return 1;\r\n      When(2*m)     return sqr(fib(m+1)) - sqr(fib(m-1));\r\n      When(2*m+1) return sqr(fib(m+1)) + sqr(fib(m));\r\n    } EndMatch\r\n}\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nA \\emph{guard} \r\nis a predicate attached to a pattern that may make use of the variables bound in \r\nit. 
The result of its evaluation will determine whether the case clause and the \r\nbody associated with it will be \\emph{accepted} or \\emph{rejected}.\r\n\r\nIn OCaml, we can define rules for rewriting terms in \r\nour $exp$ language with the help of guards:\r\n\r\n\\begin{lstlisting}[language=Caml,keepspaces,columns=flexible]\r\nlet collect e = match e with\r\n      Plus(Times(e1,e2), Times(e3,e4)) when e1 = e3 -> Times(e1, Plus(e2,e4))\r\n    | Plus(Times(e1,e2), Times(e3,e4)) when e2 = e4 -> Times(Plus(e1,e3), e4)\r\n    | e -> e ;;\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nAttempting to write the first case clause as \\codeocaml{Plus(Times(e,e2), \r\nTimes(e,e4))} would have been relying on \\emph{equivalence patterns}. \r\nNeither OCaml nor Haskell support such patterns, but\r\nMiranda\\cite{Miranda85} and Tom\\cite{Moreau:2003} do.\r\n\r\nThe example above illustrates yet another common pattern-matching facility -- \r\n\\emph{nesting of patterns}. Constructor patterns composed \r\nsolely of (distinct) variables are called \\emph{simple pattern}s\r\nand others are called \\emph{nested pattern}s.\r\nNested checks are hard to handle using the visitor design pattern, as they are often \r\ntoo context-dependent to extract them into a dedicated visitor. \r\nIn such cases, users typically prefer \\emph{type tests} and \\emph{type \r\ncasts}. Our library handles such cases using nested patterns:\r\n\\begin{lstlisting}\r\nconst Expr* collect(const Expr* e)\r\n{\r\n  const Expr *e1, *e2, *e3, *e4;\r\n  Match(e) {\r\n    When(match<Plus>(match<Times>(e1,e2),match<Times>(e3 |= e1==e3,e4))) \r\n        return new Times(e1, new Plus(e2,e4));\r\n    When(match<Plus>(match<Times>(e1,e2),match<Times>(e3,e4 |= e2==e4))) \r\n        return new Times(new Plus(e1,e3), e4);\r\n    When() \r\n        return e;\r\n  } EndMatch\r\n}\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nDecomposing algebraic data types through pattern matching has an important \r\ndrawback that was originally spotted by Wadler\\cite{Wadler87}: they expose \r\nconcrete representation of an abstract data type, which conflicts with the \r\nprinciple of \\emph{data abstraction}. To overcome the problem he proposed the \r\nnotion of \\emph{views} that represent conversions between different \r\nrepresentations that are implicitly applied during pattern matching. As an \r\nexample, imagine polar and cartesian representations of complex numbers. 
A user \r\nmight choose the polar representation as the concrete representation for the abstract \r\ndata type \\codeocaml{complex}, treating the cartesian representation as a view, or vice \r\nversa:\\footnote{We use the syntax from Wadler's original paper for this example.}\r\n\r\n\\begin{lstlisting}[language=Haskell,columns=flexible]\r\ncomplex ::= Pole real real\r\nview complex ::= Cart real real\r\n  in  (Pole r t) = Cart (r * cos t) (r * sin t)\r\n  out (Cart x y) = Pole (sqrt(x^2 + y^2)) (atan2 x y)\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nThe operations might then be implemented in whatever representation is the most \r\nsuitable, while the compiler implicitly converts representations if needed:\r\n\r\n\\begin{lstlisting}[language=Haskell,columns=flexible]\r\n  add  (Cart x1 y1) (Cart x2 y2) = Cart (x1 + x2) (y1 + y2)\r\n  mult (Pole r1 t1) (Pole r2 t2) = Pole (r1 * r2) (t1 + t2)\r\n\\end{lstlisting}\r\n\r\n\\noindent\r\nThe idea of views was later adopted in various forms in several languages: \r\nHaskell\\cite{views96}, Standard ML\\cite{views98}, Scala (in the form of \r\n\\emph{extractors}\\cite{EmirThesis}) and F$\\sharp$ (under the name of \r\n\\emph{active patterns}\\cite{Syme07}). We demonstrate our support of views in \r\n\\textsection\\ref{sec:view}.\r\n", "meta": {"hexsha": "5bbc54113e120ffd1b4a033be09934cca8f96967", "size": 13053, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "media/papers/PatternMatching/sec-2-pm-by-example.tex", "max_stars_repo_name": "akrzemi1/Mach7", "max_stars_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1310, "max_stars_repo_stars_event_min_datetime": "2015-01-04T03:44:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T04:44:01.000Z", "max_issues_repo_path": "media/papers/PatternMatching/sec-2-pm-by-example.tex", "max_issues_repo_name": "akrzemi1/Mach7", "max_issues_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 62, "max_issues_repo_issues_event_min_datetime": "2015-01-12T07:59:17.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-14T22:02:14.000Z", "max_forks_repo_path": "media/papers/PatternMatching/sec-2-pm-by-example.tex", "max_forks_repo_name": "akrzemi1/Mach7", "max_forks_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 108, "max_forks_repo_forks_event_min_datetime": "2015-02-13T17:39:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-18T11:06:59.000Z", "avg_line_length": 48.8876404494, "max_line_length": 86, "alphanum_fraction": 0.733088179, "num_tokens": 3301, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152324848629215, "lm_q2_score": 0.7057850154599563, "lm_q1q2_score": 0.5753788719324356}}
{"text": "\\section{Parameter Estimation}\nParameter estimation involves a process of obtaining a parameter set $\\boldsymbol{\\chi}$ of a CDM model that minimizes the difference between $\\boldsymbol{\\sigma}^M$-$\\boldsymbol{\\epsilon}^M$ and $\\left<\\boldsymbol{\\sigma}\\right>$-$\\left<\\boldsymbol{\\epsilon}\\right>$ for all load paths. Herein, the parameter estimation was conducted using calibration algorithms, a subset of optimization which attempts to minimize a least-squares objective function \\cite{matott_ostrich:_2008}. Optimization algorithms are often described as either deterministic, which find the local optimum precisely, or heuristic, which find the global optimum approximately.\n\n%Optimization algorithms are often described as either deterministic (local search) or heuristic (global search). Deterministic optimization algorithms primarily focus on searching for the optima within the local parameter space by iteratively converging towards a solution. Heuristic optimization algorithms explore the entire parameter space approximately and provide an estimate of the global optima. Heuristic techniques are useful for highly non-linear problems, where there are numerous local optima within the prescribed parameter space. When searching the global parameter-space deterministically becomes too computationally demanding, heuristic methods are used, at the cost of completeness and accuracy. A compromise between speed and accuracy can be obtain by strategically using both types of algorithms.\n\nA combination of two optimization algorithms is used to assess the optimal parameter set. An initial heuristic algorithm is applied to search for the approximate global optima, followed by a deterministic algorithm as a local refinement of the optimal parameter set. Particle Swarm Optimization (PSO) is used for the global heuristic search, whereas the Levenberg-Marquardt Algorithm (LMA) is used for the local deterministic search. \n\n%The PSO algorithm was developed by  as a byproduct of modeling the cooperative-competitive nature of social behaviour in birds as they flocked searching for food. The PSO algorithm, in a conceptual sense, consists of a series of 'particles' (birds) which 'swarm' through the entire parameter space (sky) searching for the global optima (food) using a combination of individual 'particle' knowledge and global 'swarm' (flock) knowledge.\n\n%The Levenburg-Marquardt Algorithm (LMA) was proposed by \\citet{marquardt_algorithm_1963} which builds off of the work of \\citet{levenberg_method_1944}. This calibration algorithm combines a quasi-Newton approach with a conjugate gradient technique in order to efficiently minimize non-linear least-squares problems. \n\nThe parameter estimation works by iteratively running a single element CDM model, subject to boundary conditions provided by the homogenized DEM simulations, with successive parameter sets that intelligently adapt in order to converge to the DEM data. 
The approach used here is similar in aim to the least-squares method briefly described in \\citet{wellmann_homogenization_2008}.\n", "meta": {"hexsha": "c1adb9c39209fc24919069a2199e496884e02b9c", "size": 3076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "section_Parameter_Estimation_Parameter_estimation__.tex", "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "section_Parameter_Estimation_Parameter_estimation__.tex", "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "section_Parameter_Estimation_Parameter_estimation__.tex", "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "avg_line_length": 236.6153846154, "max_line_length": 816, "alphanum_fraction": 0.8254226268, "num_tokens": 613, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5753727945431181}}
{"text": "% Note that the text in the [] brackets is the one that will\n% appear in the table of contents, whilst the text in the {}\n% brackets will appear in the main thesis.\n\n%% APPENDIX HEADER ////////////////////////////////////////////////////////////////////////////////////\n\\chapter{Elementary matrices of the equation of motion.}\n\\label{app:matrices}\n%% APPENDIX CONTENT ///////////////////////////////////////////////////////////////////////////////////\n\n\\section{Mass and stiffness matrices}\nThe formulae of matrices for 3D elements are:\n\\begin{eqnarray}\n\t\\textbf{M}_{dd}^e & = & \\int_{V_e}\\textbf{N}^T\\rho \\textbf{N} dV_e,\\\\\n\t\\textbf{K}_{dd}^e & = & \\int_{V_e}{\\textbf{B}_d^e}^T\\textbf{c}\\textbf{B}_d^edV_e,\n\\end{eqnarray}\nwhere \\textbf{c} is the stiffness tensor, \\(\\rho\\) is mass density, and \\(V_e\\) is the element volume.\n\nIn the case of the 2D elements, matrices are defined as:\n\\begin{eqnarray}\n\t\\textbf{M}_{dd}^e & = &\n\t\\left [\n\t\\begin{array}{cc}\n\t\t\\textbf{M}^e & 0\\\\\n\t\t0 & \\textbf{J}^e\n\t\\end{array}\n\t\\right] =\n\t\\int_{\\Omega_e}\\textbf{N}^T\\rho\n\t\\left [\n\t\\begin{array}{ccccc}\n\t\th & 0 & 0 & 0 & 0 \\\\\n\t\t& h & 0 & 0 & 0 \\\\\n\t\t&  & h & 0 & 0\\\\\n\t\t&  &  & \\frac{h^3}{12} & 0\\\\\n\t\tSym. &  &  &  & \\frac{h^3}{12}\n\t\\end{array} \\right]\n\t\\textbf{N} d\\Omega_e,\\\\\n\t\\textbf{K}_{dd}^e & = & \\int_{\\Omega_e}{\\textbf{B}_b^e}^T\n\t\\left[\n\t\\begin{array}{cc}\n\t\t\\textbf{A} & \\textbf{B}\\\\\n\t\t\\textbf{B} & \\textbf{D}\n\t\\end{array} \\right]\n\t\\textbf{B}_b^ed \\Omega_e+\\int_{\\Omega_e}{\\textbf{B}_s^e}^T\\hat{\\textbf{A}}\\textbf{B}_s^ed \\Omega_e,\n\\end{eqnarray}\nwhere \\(h=h_t+h_b\\) is the element thickness, while \\(h_{t(b)}\\) is the distance between mid-plane and top(bottom) surface of the element, and \\(\\Omega_e\\) is the element area:\n\\begin{eqnarray}\n\t\\textbf{A} & = & \\textbf{c}_{ij}\\,(h_t-h_b),\\qquad i,j=1,2,6\\nonumber\\\\\n\t\\textbf{B} & = & 1/2\\, \\textbf{c}_{ij}\\,(h_t^2-h_b^2),\\qquad i,j=1,2,6\\nonumber\\\\\n\t\\textbf{D} & = & 1/3\\, \\textbf{c}_{ij}\\,(h_t^3-h_b^3),\\qquad i,j=1,2,6\\nonumber\\\\\n\t\\hat{\\textbf{A}} & = & 5/4\\, \\textbf{c}_{ij}\\,\\left[h_t-h_b-4/3\\left(h_t^3-h_b^3\\right)/h^2\\right],\\qquad i,j=4,5.\n\\end{eqnarray}\n\\section{\\ac{pzt} matrices}\n\nThe dielectric conductivity matrix \\(\\textbf{K}_{\\phi \\phi}^e\\) and piezoelectric coupling matrix \\(\\textbf{K}_{u \\phi}^e\\) are defined:\n\\begin{eqnarray}\n\t\\textbf{K}_{d\\phi}^e & = & \\int_{V_e}{\\textbf{B}_d^e}^T\\textbf{e}^T \\textbf{B}_{\\phi}^ed V_e,\\\\\n\t\\textbf{K}_{\\phi \\phi}^e & = & -\\int_{V_e}{\\textbf{B}_{\\phi}^e}^T \n\t{\\textbf{\\(\\epsilon\\)}^S}^T \\textbf{B}_{\\phi}^edV_e.\n\\end{eqnarray}\n", "meta": {"hexsha": "703574799322255cc529b3747ccb46ce41a97832", "size": 2513, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proposal/Dissertation/Appendices/app1.tex", "max_stars_repo_name": "pfiborek/model_hc", "max_stars_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/proposal/Dissertation/Appendices/app1.tex", "max_issues_repo_name": "pfiborek/model_hc", "max_issues_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/proposal/Dissertation/Appendices/app1.tex", "max_forks_repo_name": "pfiborek/model_hc", "max_forks_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.8833333333, "max_line_length": 176, "alphanum_fraction": 0.5825706327, "num_tokens": 1059, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5753727877078721}}
{"text": "\\chapter{Introduction}\n\\label{chap:Introduction}\n%\n\\section{System and the experimental setup}\n\\label{sec:intro1}\n%\nFor this experiment, we are going to identify the parameters of the Rectilinear\nControl System (Model 210).\nThe experimental control system is comprised of the three subsystems shown in\nFigure \\ref{fig:dynamicalsystem}. The first of these is the electromechanical\nplant which consists of the spring/mass mechanism, its actuator and sensors.\nThe design features a brushless DC servomotor, high resolution encoders,\nadjustable masses, and reconfigurable plant type.\n%\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{linlrge}\n\\caption{Dynamical system}\n\\label{fig:dynamicalsystem}\n\\end{figure}\n\\subsection{Parameters}\nThe system is configured with three bodies above the mass carriage suspension\nis an anti-friction ball bearing type with approximately $\\pm 3$\n[\\si{\\centi\\meter}] of available travel.\nThe linear drive is comprised of a gear rack suspended on an anti-friction\ncarriage and pinion (pitch diameter 7.62 [\\si{\\centi\\meter}]) coupled to the\nbrushless servo-motor shaft.\nOptical encoders measure the mass carriage positions - also via a rack and\npinion with a pinion pitch about 3.18 [\\si{\\centi\\meter}].\nThe bodies are connected by known stiffness springs, and a spring connects the\nthird mass to the frame. Instead, the first body is rigidly connected to a\npinion gear with a live-powered motor with a PC interface.\nThe position of each body is provided by an encoder. The position zeros are at\nthe equilibrium positions of the springs.\nFor the springs we use the nominal values provided:\n%\n\\begin{itemize}\n\t\\item $k_1 = k_2 = 800$ [\\si{\\newton\\per\\meter}] between $m_1$ and $m_2$, $m_2$\nand $m_3$;\n\t\\item $k_3 = 400$ [\\si{\\newton\\per\\meter}] between $m_3$ and the ground.\n\\end{itemize}\n%\nThe shifts $x_1$, $x_2$, $x_3$ are provided in encoder counts, where the\nrelationship (\\ref{eq:encodercounts}) between the measured counts and the\ndisplacement was used.\nWhere $r_{e}$ is the radius of the encoder and $2\\pi r_{e} = 0.0706$\n[\\si{\\meter}]; 16000 is the number of counts per encoder revolution.\n%\n\\begin{equation}\n\t\\Delta x = 2\\pi \\cdot r_{e} \\cdot \\frac{\\Delta count}{16000}\n\t\\label{eq:encodercounts}\n\\end{equation}\n%\nThe input data are given by the voltage \\si{\\volt}.\nThe following relation between the applied voltage and the applied force holds:\n$f = (k_a \\cdot k_t \\cdot k_{mp}) \\cdot v$.\\\\\nWhere:\\begin{description}\n\t\\item $k_a$ is the Servo Amp gain:\n\t\\begin{equation*}\n\t\tk_a \\approx 2 \\quad [\\text{\\si{\\ampere\\per\\volt}}]\n\t\\end{equation*}\n\t\\item $k_t$ is the Servo Motor Torque constant:\n\t\\begin{equation*}\n\t\tk_t\t\\approx\t0.1 \\quad \t[\\text{\\si{\\newton\\meter\\per\\ampere}}]\n\t\\end{equation*}\n\t\\item $k_{mp}$ is the Motor Pinion pitch radius inverse:\n\t\\begin{equation*}\n\t\tk_{mp} \t= 26.25 \\quad\t[\\text{\\si{\\per\\meter}}]\n\t\\end{equation*}\n\\end{description}\n%\n\\section{The dynamical model}\n\\label{sec:dynamicalmodel}\n%\n\\subsection{Assumption}\n\\label{subsec:assumption}\nThe system described in the previous chapter is modelled as a linear system and\nfor this reason some simplifications are made.\nIt is considered that all the bodies move on the same axis, assuming therefore\nthat the rack meshed by the pinion plots the force on this axis, so that a\nstraight motion is assumed.\nIn the model there are 
Only viscous friction is considered.\nThe block containing the motor, its attachment unit, and the rack is considered\nrigidly connected to the first mass, according to the equation:\n\\begin{equation}\n\t\\label{eq:reducedinertia}\n\t\\begin{cases}\n\t\tm_{1} &=  m_{11} + \\frac{J_{\\text{motor}}}{r^2}\\\\\n\t\tc_{1} &=  c_{11} + \\frac{c_{\\text{motor}}}{r^2}\n\t\\end{cases}\n\\end{equation}\nIn equation \\eqref{eq:reducedinertia}, $r$ is the radius of the pinion-rack\ncoupling, $J_{\\text{motor}}$ is the inertia of the motor, and $c_{\\text{motor}}$ is the\nrotational damping, while $m_{11}$ and $c_{11}$ are respectively the mass and damping of the first body.\n%\n\\subsection{Equation of motion}\n\\label{subsec:equationofomotion}\nThe equations of motion for each body are reported in \\eqref{eq:equationmotion};\nthe schematic representation is shown in Figure \\ref{fig:modelscheme}.\n%\n\\begin{equation}\n\t\\label{eq:equationmotion}\n\t\\begin{array}{l}\n\t\tm_1 \\ddot{x}_{1} = k_1 (x_2 - x_1) - c_1 \\dot{x}_{1} + g_{\\text{v}} \\cdot v\t\\\\\n\t\tm_2 \\ddot{x}_{2} = k_1 (x_1 - x_2) + k_2 (x_3 - x_2) - c_2 \\dot{x}_{2} \\\\\n\t\tm_3 \\ddot{x}_{3} = k_2 (x_2 - x_3) - c_3 \\dot{x}_{3} - k_3 x_3\t\\\\\n\t\\end{array}\n\\end{equation}\n%\nThe equation in matrix form is shown below (\\ref{eq:matrixform}):\n%\n\\begin{equation}\n\t\\label{eq:matrixform}\n\t\\begin{bmatrix}\n\t\tm_1\t&\t0\t&\t0\t\\\\\n\t\t0\t&\tm_2\t&\t0\t\\\\\n\t\t0\t&\t0\t&\tm_3\t\\\\\n\t\\end{bmatrix}\n\t\\ddot{x}+\n\t\\begin{bmatrix}\n\t\tc_1\t&\t0\t&\t0\t\\\\\n\t\t0\t&\tc_2\t&\t0\t\\\\\n\t\t0\t&\t0\t&\tc_3\t\\\\\n\t\\end{bmatrix}\n\t\\dot{x}+\n\t\\begin{bmatrix*}[c]\n\t\tk_1\t\t&\t-k_1\t\t&\t0\t\\\\\n\t\t-k_1\t&\tk_1 + k_2\t&\t-k_2\t\\\\\n\t\t0\t\t&\t-k_2\t\t&\tk_2 + k_3\t\\\\\n\t\\end{bmatrix*}\n\tx=\\begin{bmatrix}\n\tg_{\\text{v}}\t\\\\\n\t0 \t\\\\\n\t0\n\t\\end{bmatrix} \\cdot v\n\\end{equation}\n%\n\\begin{figure}[hb]\n\t\\centering\n    \\resizebox{\\linewidth}{!}{\\input{sketchplant.tex}}\n\t\\caption{Model of the system}\n\t\\label{fig:modelscheme}\n\\end{figure}\n", "meta": {"hexsha": "f890bb7833bf61d23f5e6e9692215fd846abf636", "size": 5227, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/introduzione.tex", "max_stars_repo_name": "frank1789/MechanicalVibrationProject", "max_stars_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-28T12:59:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-28T12:59:41.000Z", "max_issues_repo_path": "Report/introduzione.tex", "max_issues_repo_name": "frank1789/MechanicalVibrationProject", "max_issues_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/introduzione.tex", "max_forks_repo_name": "frank1789/MechanicalVibrationProject", "max_forks_repo_head_hexsha": "ad28e4c047fe4f806fa1fb5405b2304699e377be", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.801369863, "max_line_length": 84, "alphanum_fraction": 0.7139850775, "num_tokens": 1765, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.5753727800669561}}
{"text": "\\subsection{Syntax}\nThe following table summarizes the various operators and expression syntax.\nThe arithmetic operators have the expected precedence, that is,\nmultiplication and division are evaluated before add and subtract.\nSubexpressions surrounded by parentheses have highest precedence.\n\n\\begin{center}\n\\begin{tabular}{clll}\n{\\it Math} & & {\\it Eigenmath} & {\\it Comment} \\\\\n\\\\\n$a=b$ & & \\verb$a == b$ & {\\it test for equality} \\\\\n\\\\\n$-a$ & & {\\tt -a} & {\\it negation} \\\\\n\\\\\n$a+b$ & & {\\tt a+b} & {\\it addition} \\\\\n\\\\\n$a-b$ & & {\\tt a-b} & {\\it subtraction} \\\\\n\\\\\n$ab$ & & {\\tt a b} & {\\it multiplication, alternatively,} \\verb$a*b$ \\\\\n\\\\\n$\\displaystyle\\frac{a}{b}$ & & {\\tt a/b} & {\\it division}\\\\\n\\\\\n$\\displaystyle\\frac{a}{bc}$ & & {\\tt a/b/c} & {\\it division operator is left-associative} \\\\\n\\\\\n$a^2$ & & {\\tt a{\\char94}2} & {\\it power}\\\\\n\\\\\n$\\sqrt{a}$ & & \\verb$sqrt(a)$ & {\\it square root, alternatively,} \\verb$a^(1/2)$ \\\\\n\\\\\n$a(b+c)$ & & {\\tt a (b+c)} & {\\it note the space in between, alternatively,} \\verb$a*(b+c)$ \\\\\n\\\\\n$f(a)$ & & {\\tt f(a)} & {\\it function} \\\\\n\\\\\n$\\begin{pmatrix}a\\\\ b\\\\ c\\end{pmatrix}$ & & {\\tt (a,b,c)} & {\\it vector} \\\\\n\\\\\n$\\begin{pmatrix}a&b\\\\ c&d\\end{pmatrix}$ & & {\\tt ((a,b),(c,d))} & {\\it matrix} \\\\\n\\\\\n$F^1{}_2$ & & {\\tt F[1,2]} & {\\it tensor component access} \\\\\n\\\\\n-- & & \\verb$\"hello, world\"$ & {\\it string literal} \\\\\n\\\\\n$\\pi$ & & {\\tt pi} & -- \\\\\n\\\\\n$e$ && {\\tt exp(1)} & {\\it natural number}\n\\end{tabular}\n\\end{center}\n", "meta": {"hexsha": "3ec52877d40d893cffd199b58a2be7fa1622a81b", "size": 1468, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/syntax.tex", "max_stars_repo_name": "wuyudi/eigenmath", "max_stars_repo_head_hexsha": "509c3a2b320b27ce85fbc3cc055d8fa30e3175a6", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2019-09-29T03:15:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T00:57:51.000Z", "max_issues_repo_path": "doc/syntax.tex", "max_issues_repo_name": "wuyudi/eigenmath", "max_issues_repo_head_hexsha": "509c3a2b320b27ce85fbc3cc055d8fa30e3175a6", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2019-11-12T00:57:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-26T23:46:46.000Z", "max_forks_repo_path": "doc/syntax.tex", "max_forks_repo_name": "wuyudi/eigenmath", "max_forks_repo_head_hexsha": "509c3a2b320b27ce85fbc3cc055d8fa30e3175a6", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2019-10-03T13:23:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T13:28:00.000Z", "avg_line_length": 31.9130434783, "max_line_length": 94, "alphanum_fraction": 0.5626702997, "num_tokens": 510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5753727780570977}}
{"text": "\\documentclass{article}\n\\author{xuyou}\n\\title{Proof For Prop 1 and 2 in GraphWave}\n\\usepackage{xeCJK}\n\\usepackage{bm}\n\\begin{document}\n\\maketitle\n\\section{note for prop 1} \n\\subsection{equation 1}\n\\begin{center}\n\\[\n\\sum _ { m=1  }^N\\left( \\Psi _ { m a } ^ { ( s ) } \\right) ^ { 2 }\n=\\Psi_{aa}^{(2s)}\n\\]\n\\end{center}\nbecause left side equal \\bm{$1_a^T$}$Ug(s\\Lambda)g(s\\Lambda)U^T$\\bm{$1_a$}=\n$\\Psi_{aa}^{(2s)}$\n\\subsection{equation 2}\n\\begin{center}\n\n\\[\n\\sum_{m=1}^N \\Psi_{ma}^{(s)}=1 \\quad s\\ for\\ any\\ value\n\\]\n\\end{center}\n%\\paragraph{}\nbecause 1 is the eigenvalue of $\\Psi^{(s)}$ for eigenvector \\textbf{1}\n\\section{note for prop 2} \n$\\Psi^s$ eigenvector for eigenvalue 1\n\\begin{center}\n$U_{.1}=\\frac{1}{\\sqrt{N}}$\n\\end{center}\nso\\begin{center}\n$\\left|\\Psi_{aa}^{(s+1)}-\\frac{1}{N}\\right|=\\left|\\sum_{j=1}^Ne^{-\\lambda_j(s+1)}U_{ja}^2-\\frac{1}{N}\\right|=\\left| \\sum _ { j = 2 } ^ { N } e ^ { - \\lambda _ { j } ( s + 1 ) } U _ { j a } ^ { 2 } \\right|\\leq\n\\left| \\sum _ { j = 2 } ^ { N }e^{\\lambda_2s} e ^ { - \\lambda _ { j } s  } U _ { j a } ^ { 2 } \\right|=e ^ { - \\lambda _ { 2 } } \\left| \\Psi _ { a a } ^ { ( s ) } - \\frac { 1 } { N } \\right|$\n\n\\end{center} \n\\end{document}", "meta": {"hexsha": "610893d24221f379fd584f5abf4e84e2ece44074", "size": 1179, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "graphwave_proof.tex", "max_stars_repo_name": "xuyou314/graphwave", "max_stars_repo_head_hexsha": "3114f7da64b160356e7a2f2958c8d220ee8084d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "graphwave_proof.tex", "max_issues_repo_name": "xuyou314/graphwave", "max_issues_repo_head_hexsha": "3114f7da64b160356e7a2f2958c8d220ee8084d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "graphwave_proof.tex", "max_forks_repo_name": "xuyou314/graphwave", "max_forks_repo_head_hexsha": "3114f7da64b160356e7a2f2958c8d220ee8084d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8648648649, "max_line_length": 208, "alphanum_fraction": 0.5699745547, "num_tokens": 520, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.800691997339971, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5753727746394748}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\usepackage{url}\n\\begin{document}\n\n\\begmath 10.0 Overview of Fourier Transforms and Spectral Analysis\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Introduction}\n\nThe subroutines in this chapter compute discrete Fourier transforms,\nusing the fast Fourier transform (FFT). Discrete Fourier transforms\ncan be used in turn to approximate Fourier coefficients or evaluate\ntruncated Fourier series, to approximate power spectra, to compute\nconvolutions and lagged products (as might be required in computing\ncorrelations and in filtering), and in several other applications\nwhere the speed of the FFT has been found of value, see, $e.g.$,\n\\cite{Cooley:1966:AFF}.\n\nThe purpose of this introductory section is to outline what is\navailable, to help the reader in the selection of the appropriate\nsubroutine, to outline how the discrete transform can be used for the\ncomputation of convolutions and lagged products, to examine the\nerrors introduced in using the discrete transform as an approximation\nto other transforms, and to suggest a computational procedure to\nthose with doubts on how to proceed.\n\nLower case English letters are used for functions of $t$, the\nindependent variable, and Greek letters for Fourier transform\nfunctions, that are functions of $\\omega $, with units of\nradians/(units of $t)$.\n\n\\subsection{Subroutines Available}\n\nThe one dimensional discrete transform pairs available are indicated\nbelow.  In ``SRFT1/DRFT1'' (for example) SRFT1 is the name for the\nsingle precision version and DRFT1 is the name for the double\nprecision version. 
In all cases N $=2^M$ where M is a nonnegative\ninteger and W $=e^{2\\pi i/N}=\\cos 2\\pi /{N} +i\\ \\sin 2\\pi /N.$\n\nSRFT1/{DRFT1} One dimensional real transform\\vspace{-6pt}\n\\begin{equation}\\label{O1}\nx_j=\\sum_{k=0}^{N-1}\\xi _kW^{jk},\\quad j=0,1,...,N-1\\vspace{-6pt}\n\\end{equation}\n\\vspace{-15pt}\n\\begin{equation}\\label{O2}\n\\xi _k=\\frac 1N\\sum_{j=0}^{N-1}x_jW^{-jk},\\quad k=0,1...,N-1\n\\end{equation}\nSTCST/DTCST Trigonometric transform\\vspace{-6pt}%\n\\begin{multline}\\label{O3}\ny_j=\\frac 12\\alpha _0+\\,\\sum_{k=1}^{(N/2)-1}\\left[\n\\alpha _k\\cos \\frac{2\\pi jk}N+\\beta _k\\sin \\frac{2\\pi jk}N\\right]\\\\\n+\\frac 12\\alpha _{N/2}(-1)^j,\\quad j=0,1,...,N-1\n\\end{multline}\n\\vspace{-20pt}\n\\begin{equation}\n\\begin{split}\\label{O4}\n\\alpha _k&=\\frac 2N\\sum_{j=0}^{N-1}y_j\\cos \\frac{2\\pi jk}N,\\quad\nk=0,1,...,\\frac N2\\\\\n\\beta _k&=\\frac 2N\\sum_{j=1}^{N-1}y_j\\sin \\frac{2\\pi jk}N,\\quad\nk=1,2,...,\\frac N2-1\n\\end{split}\n\\end{equation}\n\nSTCST/DTCST Cosine transform%\n\\begin{multline}\\label{O5}\ny_j=\\frac 12\\alpha _0+\\sum_{k=1}^{N-1}\\alpha _k\\cos\n\\frac{\\pi jk}N+\\frac 12\\alpha _N(-1)^j,\\\\\nj=0,1,...,N\n\\end{multline}\n\\vspace{-15pt}\n\\begin{multline}\\label{O6}\n\\alpha _k=\\frac 2N\\left[ \\frac 12y_0+\\sum_{j=1}^{N-1}y_j\\cos\n\\frac{\\pi jk} N+\\frac 12y_N(-1)^k\\right]\\\\\nk=0,1,...,N\n\\end{multline}\n\\vspace{-15pt}\n\nSTCST/DTCST Sine transform\n\\begin{equation}\\label{O7}\ny_j=\\sum_{k=1}^{N-1}\\beta _k\\sin \\frac{\\pi jk}N,\\quad\nj=1,2,...,N-1\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O8}\n\\beta _k=\\frac 2N\\sum_{j=1}^{N-1}y_j\\sin \\frac{\\pi jk}N,\\quad\nk=1,2,...,N-1\n\\end{equation}\n\nSCFT/DCFT Complex transform\n\\begin{equation}\\label{O9}\nz_j=\\sum_{k=0}^{N-1}\\zeta _kW^{jk},\\quad j=0,1,...,N-1\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O10}\n\\zeta _k=\\frac 1N \\sum_{j=0}^{N-1}z_jW^{-jk},\\quad k=0,1,...,N-1\n\\end{equation}\n\nAll variables in the above equations are real, except $W$, $\\xi $,\n$z$, and $ \\zeta $ which are complex. For each of the transform pairs\ngiven, either equation can be derived from the other -- there are no\napproximations involved.\n\nTaking real and imaginary parts of Eq.\\,(2) and making comparisons with Eq.\n(4), it is clear that if $x_j=y_j$, then 2 $\\Re \\xi _k=\\alpha _k$ and\n2 $\\Im \\xi _k=-\\beta _k$. Thus the trigonometric transform and the real\ntransform are closely related. Since SRFT1 is slightly more efficient and\nshorter than STCST, it is recommended unless one has a distinct preference\nfor the trigonometric form. If one has data that are even (or odd) then\none can save a factor of two in both storage and computation by using the\ncosine (or sine) transform in STCST. One could use SCFT for real data by\nsetting $\\Im z_j=0$, but since this requires twice the storage and twice\nthe work as SRFT1, SCFT is recommended only for complex data.\n\nSubroutines STCST, SCFT, and SRFT (similar to SRFT1) can be used for\ndata in more than one dimension. As above, there is a connection\nbetween the real transform and the trigonometric transform, but the\nrelations connecting the two are not as simple. 
Indexing of the\ncoefficients in SRFT is more complicated, and thus we recommend STCST\nfor multi-dimensional real data unless one prefers the form of the\nsolution provided by SRFT.\n\n\\subsection{Discrete Convolutions and Correlations}\n\nHere we consider computing sums of the form\n\\begin{equation}\\label{O11}\nc_n=\\sum_{j=0}^{N-1}a_jb_{n\\pm j},\\quad n=0,1,...,N-1\n\\end{equation}\nInitially it is assumed that $a_j$ and $b_j$ are periodic with period N.\n(That is, if $j$ is not in the range $0\\leq j\\leq {N-1}$, it is replaced by\nits value mod N.) Thus $\\alpha _k$ and $\\beta _k$ can be defined by%\n\\begin{equation*}\na_j=\\sum_{k=0}^{N-1}\\alpha _kW^{jk},\\quad \\text{and}\\quad\nb_j=\\sum_{k=0}^{N-1}\\beta _kW^{jk}.\n\\end{equation*}\nSubstitution in Eq.\\,(11) yields\n\\begin{equation}\\label{O12}\n\\begin{split}\n c_n&=\\sum_{j=0}^{N-1}\\left[ \\sum_{k^{\\prime }=0}^{N-1}\\alpha\n_{k^{\\prime }}W^{jk^{\\prime }}\\right] \\left[ \\sum_{k=0}^{N-1}\\beta\n_kW^{(n\\pm j)k}\\right] \\vspace{4pt} \\\\ \\displaystyle\\phantom{c_n}%\n&=\\sum_{k^{\\prime }=0}^{N-1}\\sum_{k=0}^{N-1}\\alpha _{k^{\\prime }}\\beta\n_kW^{nk}\\sum_{j=0}^{N-1}W^{j(k^{\\prime }\\pm k)}.\n\\end{split}\n\\end{equation}\nFrom the easily verified fact that\n\\begin{equation}\\label{O13}\n\\sum_{j=0}^{N-1}W^{j(k^{\\prime }\\pm k)}=\n\\begin{cases}\nN & \\text{if }k^{\\prime }\\equiv \\mp k\\text{ mod }N \\\\\n0 & \\text{if }k^{\\prime }\\not \\equiv \\mp k\\text{ mod }N\n\\end{cases}\n\\end{equation}\nand since $\\alpha _k$ is periodic with period N there follows from Eq.\\,(12)\n\\begin{equation}\\label{O14}\n\\begin{array}{ll}\n\\displaystyle c_n=N\\sum_{k=0}^{N-1}\\alpha _{\\mp k}\\beta _kW^{nk},&%\nn=0,1,...,N-1\\rule[-20pt]{0pt}{20pt}\\\\\n\\displaystyle \\phantom{c_n}=N\\sum_{k=0}^{N-1}\\gamma _kW^{nk},&%\n\\text{where }\\gamma _k=\\alpha _{\\mp k}\\beta _k.\n\\end{array}\n\\end{equation}\n\nThe $c_n$ can be computed most efficiently by the indirect route of\nusing the FFT to compute the $\\alpha ^{\\prime }$s and $\\beta ^{\\prime\n}$s from the $a^{\\prime }$s and $b^{\\prime }s$, then computing\n$\\gamma _k$ as defined in Eq.\\,(14), and finally using the FFT to\ncompute the $c^{\\prime }$s from the $\\gamma ^{\\prime }s$. For real\ndata, SRFT1 is recommended for computing the transforms. Note that\nwith SRFT1 $\\alpha _k$, $\\beta _k$, and $\\gamma _k$ are only computed\nfor $k=0,1,...,N/2$; for $k=(N/2)+1,...,N-1$, one must use the fact\nthat for real data $\\alpha _{N-k}=\\overline{\\alpha }_k\\\n(\\overline{z}=$ conjugate of $z).$\n\nOrdinarily one has $a_j$ and $b_j$ defined for $j=0,1,...,J-1$,\nassumed to be~0 for other values of $j$, and desires $c_n$ for\n$n=0,1,...,L-1$, where $L\\leq J$. The above procedure can be used to\nget the first $L$ values of $c_n $ if one sets $N=$ smallest power of\n$2>J+L$ and sets $a_j$ and $b_j=0$ for $j=J,J+1,...,N-1.$ (If $J+L$\nis equal to or just slightly greater than a power of~2, it pays to\nreduce $L$ so that $J+L$ is one less than a power of 2, and to\ncompute $c_n$ directly from Eq.\\,(11) for values of $n=new\\ L,...,$\ndesired $L-1$.) Direct computation of the $c_n$ for $n=0,1,...,L-1$\nfrom Eq.  (11) requires $L(2J-L+1)/2$ multiplies and adds. Using\nSRFT1 and the procedure above requires approximately $(9/4)N\\ \\log\n_2N$ multiplies, $(33/8)N(\\log _2N+1)$ adds, and a little additional\noverhead. If $a_j=b_j$ then the counts using SRFT1 can be multiplied\nby $2/3$. The fastest procedure clearly depends on the values of\n$J,L$, and $N$. 
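\n\nAs a concrete illustration of this indirect route, the following numpy sketch (an illustration only, not MATH77 code) computes a linear convolution by the zero-padding recipe above; with numpy's conventions the $1/N$ factor of Eq.\\,(10) and the factor $N$ of Eq.\\,(14) cancel, so in the convolution case ($\\gamma _k=\\alpha _k\\beta _k$) a single inverse transform of the pointwise product suffices.\n\\begin{verbatim}\nimport numpy as np\n\n# Linear convolution of two length-J sequences via the FFT:\n# pad to a power of 2 with N >= J + L - 1 so no wrap-around occurs.\na = np.array([1.0, 2.0, 3.0, 4.0])\nb = np.array([0.5, -1.0, 2.0, 1.5])\nJ = len(a)\nL = J                          # want c_0 .. c_(L-1)\nN = 1 << int(np.ceil(np.log2(J + L - 1)))\n\nA = np.fft.fft(a, n=N)         # transforms of zero-filled inputs\nB = np.fft.fft(b, n=N)\nc = np.fft.ifft(A * B).real[:L]\n\nprint(np.allclose(c, np.convolve(a, b)[:L]))   # True\n\\end{verbatim}\n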
For $J<64$, or for values of $L$ small relative to\n$J$, the direct method is fastest, see \\cite{Cooley:1966:AFF}.\n\nNote that no assumption need be made about $a_j$ and $b_j$; the only\nerrors introduced in the calculation of $c_n$ using SRFT1 are\nround-off errors.\n\n\\subsection{Estimating Power Spectra}\n\nTo within a constant factor (different normalizations are used), an\nestimate of the power at the $k^{th}$ frequency (defined in E below)\nis given by $|\\xi _k|^2=(\\Re \\xi _k)^2+(\\Im \\xi _k)^2$, where the\n$\\xi _k$ are obtained from SRFT1, Eq.\\,(2) above. This estimate\nsuffers from the same type of errors discussed below for the case of\ncomputing Fourier integrals.\n\n\\subsection{Replacement of Continuous Transforms with Discrete Ones}\n\nTo simplify the material that follows we consider only the case of\ncomplex functions. Results for real data follow immediately by\nconsidering real and imaginary parts of the variables and equations\nbelow. Proofs for the transform pairs are given in many books on\nFourier transforms, although different scalings for $\\varphi $ and\n$\\omega $ are used.  We have used definitions that maximize symmetry\nwhile matching up with the form of the discrete transform provided by\nthe FFT.\n\nFor continuous data we have the following transform pairs.\n\nFourier Integral $\\left( \\int_{-\\infty }^\\infty |f(t)|\\,dt\\text{\\quad\nexists}\\right)$%\n\\begin{equation}\\label{O15}\nf(t)=\\int_{-\\infty }^\\infty \\varphi (\\omega )e^{2\\pi i\\omega\nt}d\\omega\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O16}\n\\varphi (\\omega )=\\int_{-\\infty }^\\infty f(t)e^{-2\\pi i\\omega\nt}dt\n\\end{equation}\nFourier Series $\\left( s(t)=s(t+T)\\right) $%\n\\begin{equation}\\label{O17}\ns(t)=\\sum_{k=-\\infty }^\\infty \\sigma _ke^{2\\pi ikt/T}\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O18}\n\\sigma _k=\\frac 1T\\int_{-T/2}^{T/2}s(t)e^{-2\\pi ikt/T}dt\n\\end{equation}\nThe discrete transform Eq.\\,(10) can be used to approximate Eq.\\,(16)\nif the infinite limits are replaced by finite limits and $f$ is\nsampled at equally spaced points between the two limits. As an\nexample let the infinite limits be replaced by $-T/2$ and T$/2$, and\nlet\n\\begin{equation}\\label{O19}\nz_j=\n\\begin{cases}\n\\displaystyle f(-\\frac T2+j\\Delta t),&j=1,2,...,N-1\\\\\n\\displaystyle \\frac 12\\left[ f(-\\frac T2)+f(\\frac T2)\\right] ,& j=0,\n\\end{cases}\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O20}\n\\Delta t=\\frac TN.\n\\end{equation}\nThis value of $z_0$ gives significantly better results than simply\nsetting $z_0=f(-T/2)$. With the assumption that the contribution to\nthe integral in Eq.\\,(16) is negligible for $|t|>T/2$, the trapezoidal\nrule gives\n\\begin{equation}\\label{O21}\n\\begin{split}\n\\varphi (\\omega )&\\approx \\Delta t\\sum_{j=0}^{N-1}z_je^{-\\pi\ni\\omega (-T+2j\\Delta t)} \\\\\n&\\approx \\frac TNe^{\\pi i\\omega T}\\sum_{j=0}^{N-1}z_je^{-2\\pi ij\\omega T/N}.\n\\end{split}\n\\end{equation}\nIn order that Eq.\\,(21) have the form of Eq.\\,(10), the solution is\nobtained for $\\omega =\\omega _k$, where\n\\begin{equation}\\label{O22}\n\\omega _k=\\frac kT,\\quad k=0,1,...N-1.\n\\end{equation}\nThen\n\\begin{equation}\\label{O23}\n\\varphi (\\omega _k)\\approx \\frac TNe^{\\pi\nik}\\sum_{j=0}^{N-1}z_jW^{-jk}\\quad (W=e^{2\\pi i/N}).\\hspace{-10pt}\n\\end{equation}\nThus $\\varphi (\\omega _k)\\approx Te^{\\pi ik}\\zeta _k$, $\\zeta _k$\ndefined as in (10). 
The factor $e^{\\pi ik}(=(-1)^k)$ is due to\nshifting the lower limit of $-T/2$ on the integral to a lower limit\nof~0 on the summation.\n\nNote that the $k^{th}$ frequency, $2 \\pi \\omega _k$, depends only on\nT, the length of the interval over which $f$ is sampled. Thus, from\nEq.\\,(22), $2 \\pi \\omega _k \\text{ radians/(units of }t) =k/T$\ncycles/(units of $t)$. From Eqs.\\,(20) and (22) it follows that\nthe largest frequency for which a result is obtained is $\\omega\n_{N-1}=(N-1)/(N\\Delta t)$ cycles.  For real data, our approximation\nis such that $\\varphi (\\omega _{N-k})$ is the conjugate of $\\varphi\n(\\omega _{-k})$, and thus in this case the largest effective\nfrequency is \\begin{equation}\\label{O24} \\omega _{N/2}=\\frac\n1{2\\Delta t}\\ \\text{cycles}/\\text{(units of } t\\text{).}\n\\end{equation} This frequency is commonly called the {\\em Nyquist\nfrequency.} Note that this frequency depends only on the sampling\ninterval $\\Delta t.$\n\nAll that has been said above for Eq.\\,(16) applies almost word for\nword to Eq.\\,(18), except that Eq.\\,(18) does not require replacing\ninfinite limits by finite ones. Because of the factor $1/T$ appearing\nin Eq.\\,(18), in place of $\\varphi (\\omega _k)\\approx Te^{\\pi\nik}\\zeta _k$, we have $\\sigma _k\\approx e^{\\pi ik}\\zeta _k.$\n\n\\subsection{Errors Introduced by Using the Discrete Transform}\n\nThe discrete Fourier transform when used as above is a crude\napproximation to a continuous transform. Its primary virtue is the\nspeed with which it can be computed using the FFT ({\\bf F}ast {\\bf\nF}ourier {\\bf T}ransform). Many procedures have been suggested for\ncomputing continuous transforms and power spectra, some of which\npermit the use of the FFT and some which do not.  References in\nSection H below give a sampling of what has been suggested.\n\nAny computational procedure involves making assumptions about either\n$f(t)$ or $\\varphi (\\omega )$ (or both), and any assumption about one\nimplies something about the other. Answers to questions such as the\nfollowing help one in understanding the implications of a\ncomputational procedure.\n\n(Q1) How are the true and computed transform related?\n\n(Q2) What is the error in the computed transform?\n\n(Q3) What assumptions (if any) are made or implied concerning the transform?\n\n(Q4) What assumptions are made about the function at points where its value\nis not used?\n\n(Q5) At the points where the value of the function is used, how is the\nfunction related to a function that would give the true transform at\nselected values of $\\omega ?$\n\nThese questions are considered below for the case of the discrete\ntransform. We begin by giving some results required in the analysis,\nthen consider the effect of limiting the sampling of $f$ to a finite\nrange, the effect of discrete sampling, a combination of the two, and\nfinally the effect on the discrete transform of filling in with zeros.\n\n\\subsubsection{Convolution Theorems}\n\nFor Fourier integrals we have\n\\begin{equation}\\label{O25}\n\\varphi (\\omega )=\\varphi _1(\\omega )\\varphi _2(\\omega )\\qquad\n\\text{if and only if}\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O26}\nf(t)=\\int_{-\\infty }^\\infty f_1(\\tau )f_2(t-\\tau )\\,d\\tau .\n\\end{equation}\nThis result can be derived from Eqs.\\,(15) and (16), and is also true\nif $\\varphi $ and $f$ are interchanged throughout Eqs.\\,(25) and (26).\n\nIn the case of Fourier series there are two convolution theorems. 
We\nwork with the three transform pairs $(a,\\alpha )$, $(b,\\beta )$, and\n$(c,\\gamma )$. If $c(t)=a(t)b(t)$, then\n\\begin{equation}\\label{O27}\n\\gamma _n=\\frac 1T\\int_{-T/2}^{T/2}\\sum_{k=-\\infty }^\\infty\n\\sum_{k^{\\prime }=-\\infty }^\\infty \\alpha _k\\beta _{k^{\\prime }}e^{2\\pi\nit(k+k^{\\prime }-n)/T}dt.\n\\end{equation}\nInterchanging integration and summation, the integral is~0 if $k^{\\prime\n}\\neq n-k$, and is T if $k^{\\prime }=n-k$. Thus\n\\begin{equation}\\label{O28}\n\\gamma _n=\\sum_{k=-\\infty }^\\infty \\alpha _k\\beta _{n-k}.\n\\end{equation}\nThe second convolution theorem starts with\n\\begin{multline}\\label{O29}\nc(t)=\\frac 1T\\int_{-T/2}^{T/2}a(\\frac{t+\\tau }2)b(\\frac{t-\\tau\n}2)\\,d\\tau \\vspace{4pt} \\\\\n\\hspace{-15pt}=\\frac 1T\\int_{-T/2}^{T/2}\n\\sum_{k=-\\infty }^\\infty \\sum_{k^{\\prime }=-\\infty\n}^\\infty \\!\\!\\!\\alpha _k\\beta _{k^{\\prime }}e^{2\\pi i[t(k+k^{\\prime })+\\tau\n(k-k^{\\prime })]/2T}d\\tau .\n\\end{multline}\nMoving the integral inside the summations, we get a nonzero result only\nfor $k^{\\prime }=k$ in which case the integral is $\\exp (2\\pi itk/T)$.\nThus%\n\\begin{equation*}\nc(t)=\\sum_{k=-\\infty }^\\infty \\alpha _k\\beta _ke^{2\\pi itk/T},\\quad\n\\text{and so}\n\\end{equation*}\n\\begin{equation}\\label{O30}\n\\gamma _k=\\alpha _k\\beta _k.\n\\end{equation}\nThe convolution theorem for the discrete Fourier transform has already been\ngiven in Eqs.\\,(11) and (14).\n\n\\subsubsection{Fourier Transform of a Step Function}\n\nFor Fourier integrals, let\n\\begin{equation}\\label{O31}\nr_T(t)=\n\\begin{cases}\n1 & -\\frac T2<t<\\frac T2\\\\\n0 & |t|>\\frac T2\n\\end{cases}\n\\end{equation}\n\\begin{equation}\\label{O32}\n\\begin{split}\n\\rho _T(\\omega )&=\\int_{-\\infty }^\\infty r_T(t)e^{-2\\pi i\\omega\nt}dt\\vspace{4pt} \\\\\n\\phantom{\\rho _T(\\omega )}&=\\int_{-T/2}^{T/2}e^{-2\\pi i\\omega t}dt=\n\\frac{\\sin \\pi \\omega T}{\\pi \\omega }.\n\\end{split}\n\\end{equation}\nSimilarly, one has the transform pair $(\\frac 1{\\pi t}\\sin \\pi t\\Omega $, $r_\\Omega\n(\\omega )).$\n\nFor Fourier series, we are interested in the case\n\\begin{equation}\\label{O33}\n\\hat \\rho _k^{(K)}=\n\\begin{cases}\n1 & -K\\leq k<K \\\\\n0 & k\\geq K\\text{ or }k<-K\n\\end{cases}\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O34}\n\\begin{split}\n\\displaystyle\\hat r^{(K)}&=\\sum_{k=-\\infty }^\\infty \\hat \\rho _k^{(K)}e^{2\\pi\nitk/T}=\\sum_{k=-K}^{K-1}e^{2\\pi itk/T}\\vspace{4pt} \\\\\n\\displaystyle &=\\dfrac{2i(\\sin 2\\pi Kt/T)}{e^{2\\pi it/T}-1}\\vspace{4pt}\\\\\n\\displaystyle &=\\dfrac{(1+e^{-2\\pi it/T})(\\sin 2\\pi Kt/T)}{\\sin 2\\pi t/T}.\n\\end{split}\n\\end{equation}\nFor the discrete Fourier transform, consider\n\\begin{equation}\\label{O35}\n\\tilde r_{j+kN}^{(J)}=\n\\begin{cases}\n1 & 0\\leq j\\leq J-1 \\\\\n0 & J\\leq j\\leq N-1\n\\end{cases}\nk=0,\\pm 1,\\pm 2,...\n\\end{equation}\n\\begin{equation*}\n\\tilde \\rho _k^{(J)}=\\frac 1N\\sum_{j=0}^{N-1}\\tilde r_j^{(J)}W^{-jk}=\\frac\n1N\\sum_{j=0}^{J-1}W^{-jk}\n\\end{equation*}\n\\begin{equation}\\label{O36}\n\\tilde \\rho _k^{(J)}=\n\\begin{cases}\nJ/N & k=0,\\pm N,\\pm 2N,...\\\\\n\\dfrac{1-W^{-Jk}}{N(1-W^{-k})} & \\text{otherwise.}\n\\end{cases}\n\\end{equation}\n\n
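The closed form in Eq.\\,(36) is easy to check numerically; the following numpy sketch (an illustration only, not MATH77 code) compares $\\tilde \\rho _k^{(J)}$ computed by an FFT against the formula.\n\\begin{verbatim}\nimport numpy as np\n\n# Check Eq. (36): discrete transform of a step that is 1 for\n# j = 0..J-1 and 0 for j = J..N-1.\nN, J = 16, 5\nr = np.zeros(N)\nr[:J] = 1.0\n\nrho_fft = np.fft.fft(r) / N     # (1/N) sum_j r_j W^(-jk)\n\nk = np.arange(N)\nW = np.exp(2j * np.pi / N)\nwith np.errstate(divide='ignore', invalid='ignore'):\n    rho_formula = (1 - W**(-J * k)) / (N * (1 - W**(-k)))\nrho_formula[0] = J / N          # the k = 0 (mod N) case\n\nprint(np.allclose(rho_fft, rho_formula))   # True\n\\end{verbatim}\n\n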
\\subsubsection{Errors Due to a Finite Range}\n\nConsider approximating $\\varphi (\\omega )$ in Eq.\\,(16) with\n\\begin{equation}\\label{O37}\n\\hat \\varphi (\\omega )=\\int_{-T/2}^{T/2}f(t)e^{-2\\pi i\\omega t}dt.\n\\end{equation}\nThis is equivalent to answering (Q4) with $f(t)=0$ for $|t|>T/2.$\n\nTo answer (Q1), rewrite the above equation as follows\n\\begin{equation}\\label{O38}\n\\hat \\varphi (\\omega )=\\int_{-\\infty }^\\infty r_T(t)f(t)e^{-2\\pi\ni\\omega t}dt,\n\\end{equation}\nand using Eqs.\\,(25) and (26) (with $f$ and $\\varphi $ interchanged)\n\\begin{equation}\\label{O39}\n\\hat \\varphi (\\omega )=\\int_{-\\infty }^\\infty \\varphi (w)\n\\frac{\\sin \\pi T(\\omega -w)}{\\pi (\\omega -w)}\\,dw.\n\\end{equation}\nThus the effect of a finite range is a loss of resolution due to the\nsmearing as indicated in Eq.\\,(39).\n\nOne answer to (Q2) is to subtract both sides of Eq.\\,(39) from $\\varphi\n(\\omega )$. A more useful result is obtained by subtracting Eq.\\,(37)\nfrom Eq.\\,(16).\n\\begin{equation}\\label{O40}\n\\varphi (\\omega )-\\hat \\varphi (\\omega )=\\int_{T/2}^\\infty \\left[\nf(t)e^{-2\\pi i\\omega t}+f(-t)e^{2\\pi i\\omega t}\\right] \\,dt.\n\\end{equation}\nThus a bound on the error is given by\n\\begin{equation}\\label{O41}\n|\\varphi (\\omega )-\\hat \\varphi (\\omega )|\\leq \\int_{T/2}^\\infty\n\\left[ |f(t)|+|f(-t)|\\right] \\,dt.\n\\end{equation}\nIf $\\int_{-\\infty }^\\infty |f^{(j)}(t)|\\,dt$ exists for $j=0,1,...,J+1$,\nintegration by parts of the two terms in the integral of Eq.\\,(40) yields $(k$ an\ninteger)\n\\begin{multline}\\label{O42}\n\\varphi (k/T)-\\hat \\varphi (k/T)=\\hfill\\vspace{4pt} \\\\\n\\hspace{-10pt}(-1)^k\\sum_{j=0}^J\\left[ \\frac{-iT}{2\\pi k}\\right]\n^{(j+1)}\\left[ f^{(j)}(T/2)-f^{(j)}(-T/2)\\right] +R,\n\\end{multline}\nwhere R is a remainder term that goes to~0 as $k\\rightarrow \\infty $.\nExcept for the form of the remainder, the same result is obtained in the\nsame way from the negative of the right side of Eq.\\,(37), where $\\hat\n\\varphi $ is defined. Thus for large $\\omega $, $\\hat \\varphi (\\omega )$ is\nmainly error unless derivatives of $f$ at $-T/2$ are very nearly equal to\nthose at T$/2$. If the $j^{th}$ derivatives of $f$ at the endpoints are not\nequal, then $\\hat \\varphi (\\omega )$ decreases no faster than $\\omega\n^{-(j+1)}$ for large $\\omega $. We find below that nearly equal derivatives\nat the ends of the interval are also important when estimating $\\hat\n\\varphi $ using the FFT.\n\n
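The decay rates implied by Eq.\\,(42) are easy to observe; in the following numpy sketch (an illustration only, not MATH77 code) the scaled magnitudes $k|\\xi _k|$ are roughly constant when the periodic extension of the sampled function has a jump, and they fall off at once when it does not.\n\\begin{verbatim}\nimport numpy as np\n\n# Coefficients of f sampled on [-T/2, T/2): with a mismatch at the\n# ends the coefficients decay like 1/k; for a smooth periodic\n# extension they decay much faster.\nN, T = 256, 2.0\nt = -T / 2 + T * np.arange(N) / N\n\nf_jump = t                            # f(-T/2) != f(T/2)\nf_smooth = np.cos(2 * np.pi * t / T)  # ends match smoothly\n\nfor f in (f_jump, f_smooth):\n    xi = np.fft.fft(f) / N\n    k = np.arange(1, 9)\n    print(np.round(np.abs(xi[k]) * k, 3))\n\\end{verbatim}\n\n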
To answer (Q5) consider%\n\\begin{equation*}\n\\begin{split}\n\\varphi (k/T)&=\\int_{-\\infty }^\\infty f(t)e^{-2\\pi ikt/T}dt\\\\\n&=\\sum_{j=-\\infty}^\\infty \\int_{-T/2+jT}^{T/2+jT}f(t)e^{-2\\pi ikt/T}dt\\\\\n&=\\int_{-T/2}^{T/2}\\sum_{j=-\\infty }^\\infty f(t+jT)e^{-2\\pi ik(t+jT)/T}dt.\n\\end{split}\n\\end{equation*}\nAnd since $e^{-2\\pi ijk}=1$, we have\n\\begin{equation}\\label{O43}\n\\begin{split}\n\\varphi (k/T)&=\\int_{-T/2}^{T/2}\\hat f(t)e^{-2\\pi ikt/T}dt, \\quad\n\\text{where}\\vspace{4pt} \\\\\n\\hat f(t)&=\\sum_{j=-\\infty }^\\infty f(t+jT)\n\\end{split}\n\\end{equation}\nIf $f(t)$ is replaced by $\\hat f(t)$ as defined above, then $\\hat \\varphi\n(k/T)=\\varphi (k/T).$\n\n\\subsubsection{Errors Due to Discrete Sampling}\n\nGiven $f(j\\Delta t)$, $j=0$, $\\pm 1$, $\\pm 2$, ..., we have (proceeding much\nas was done in obtaining Eq.\\,(42))%\n\\begin{equation*}\n\\begin{split}\nf(j\\Delta t)&=\\int_{-\\infty }^\\infty \\varphi (\\omega )e^{2\\pi\ni\\omega j\\Delta t}d\\omega\\\\\n&=\\sum_{k=-\\infty }^\\infty \\int_{(k-1/2)/\\Delta\nt}^{(k+1/2)/\\Delta t}\\varphi (\\omega )e^{2\\pi i\\omega j\\Delta t}d\\omega\n\\end{split}\n\\end{equation*}\n\\begin{equation}\\label{O44}\n\\begin{split}\nf(j\\Delta t)&=\\int_{-1/2\\Delta t}^{1/2\\Delta t}\\tilde \\varphi\n(\\omega )e^{2\\pi i\\omega j\\Delta t}d\\omega ,\\quad \\text{where}\\\\\n\\tilde \\varphi (\\omega )&=\\sum_{k=-\\infty }^\\infty \\varphi (\\omega\n+\\frac k{\\Delta t}).\n\\end{split}\n\\end{equation}\nClearly $\\tilde \\varphi (\\omega )$ is periodic with period $1/\\Delta t$,\nthus from Eqs.\\,(18) and (17)\n\\begin{equation}\\label{O45}\n\\tilde \\varphi (\\omega )=\\Delta t\\sum_{j=-\\infty }^\\infty f(j\\Delta\nt)e^{-2\\pi i\\omega j\\Delta t}.\n\\end{equation}\nIf no assumptions are made about $f$ or $\\varphi $, then given just\n$f(j\\Delta t)$, $j=0$, $\\pm 1$, ... values of $\\varphi $ for\nfrequencies that differ by a multiple of $1/\\Delta t$ cycles are\nirrevocably mixed. This phenomenon is commonly called {\\em aliasing.}\n\nWhen sampling at discrete points the usual assumption made is that $f$ is\nband-limited. That is, $\\varphi (\\omega )=0$ for $|\\omega |>1/(2\\Delta t)$.\nThus the computed transform is equal to $\\tilde \\varphi $ as given in Eq.\n(45), and questions (Q1) and (Q2) are answered by the expression for $\\tilde\n\\varphi$ as given in Eq.\\,(44).\n\nTo answer (Q4) and (Q5), note that if $\\tilde f$ is a function whose true\ntransform is 0 for $|\\omega |>1/(2\\Delta t)$ and is $\\varphi (\\omega )$\notherwise, then\n\\begin{equation}\\label{O46}\n\\tilde f(t)=\\int_{-\\infty }^\\infty r_{1/\\Delta t}(\\omega )\\varphi\n(\\omega )e^{2\\pi i\\omega t}d\\omega ,\n\\end{equation}\nand using Eqs.\\,(25), (26), (31), and (32)\n\\begin{equation}\\label{O47}\n\\tilde f(t)=\\int_{-\\infty }^\\infty f(\\tau )\\frac{\\sin (\\pi\n(t-\\tau )/\\Delta t)}{\\pi (t-\\tau )}\\,d\\tau .\n\\end{equation}\nNote the symmetries between $f$ and $\\varphi $ in Eqs.\\,(38) and (46); (39) and\n(47); and (43) and (44). Relations symmetric to Eqs.\\,(40) -- (42) are easily\nobtained, but we do not bother to do so here.\n\n
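Aliasing is easy to demonstrate: two sinusoids whose frequencies differ by a multiple of $1/\\Delta t$ produce identical samples, so no computation on the samples alone can distinguish them. A small numpy sketch (an illustration only, not MATH77 code):\n\\begin{verbatim}\nimport numpy as np\n\n# Frequencies f0 and f0 + 1/dt give exactly the same samples.\ndt = 0.1\nt = dt * np.arange(32)\nf0 = 3.0\nf1 = f0 + 1.0 / dt\n\nsame = np.allclose(np.cos(2 * np.pi * f0 * t),\n                   np.cos(2 * np.pi * f1 * t))\nprint(same)   # True\n\\end{verbatim}\n\n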
\\subsubsection{Errors Due to Discrete Sampling on a Finite Interval}\n\nHere we consider errors that result from evaluating the integral in Eq.\\,(18)\nusing the discrete Fourier transform. The results also apply to the integral\nin Eq.\\,(37), and thus errors examined here combined with those due to a\nfinite range give the total error due to replacing a Fourier integral with a\ndiscrete Fourier transform.\n\nThe aliasing problem is revealed much as it was obtained in Eq.\\,(44). Let\n$\\Delta t=T/N$, and from Eq.\\,(17)%\n\\begin{equation*}\n\\begin{split}\ns(j\\Delta t)&=\\!\\sum_{k=-\\infty }^\\infty \\!\\sigma _ke^{2\\pi\nijk/N}=\\!\\sum_{m=-\\infty }^\\infty \\!\\!\\sum_{k=mN}^{(m+1)N-1}\\!\\!\\sigma\n_kW^{jk}\\\\\n&=\\sum_{k=0}^{N-1}\\sum_{m=-\\infty }^\\infty \\sigma _{k+mN}W^{jk},\n\\end{split}\n\\end{equation*}\n\\begin{equation}\\label{O48}\ns(j\\Delta t)=\\sum_{k=0}^{N-1}\\tilde \\sigma _kW^{jk},\\text{ where }%\n\\tilde \\sigma _k=\\sum_{m=-\\infty }^\\infty \\sigma _{k+mN}.\n\\end{equation}\nAnd, as for Eq.\\,(45), there results from Eqs.\\,(9) and (10)\n\\begin{equation}\\label{O49}\n\\tilde \\sigma _k=\\frac 1N\\sum_{j=0}^{N-1}s(j\\Delta t)W^{-jk}.\n\\end{equation}\nSince $\\tilde \\sigma _{k-N}=\\tilde \\sigma _k$, and $W^{j(k-N)}=W^{jk},$ Eq.\n(48) can be rewritten to make more apparent the assumption that $s$ is\nband-limited. Subtracting N from all indices $k\\geq N/2$, there results\n\\begin{equation}\\label{O50}\ns(j\\Delta t)=\\sum_{k=-N/2}^{(N/2)-1}\\tilde \\sigma _kW^{jk}.\n\\end{equation}\nAs before, if we assume that $s(t)$ is band-limited, questions (Q1) and (Q2)\nare answered by the expression for $\\tilde \\sigma $ given in Eq.\\,(48).\n\nTo answer (Q4) and (Q5) as in Eq.\\,(47) we use Eqs.\\,(29), (30), (33), and\n(34) to obtain from\n\\begin{equation}\\label{O51}\n\\tilde s(t)=\\sum_{k=-\\infty }^\\infty \\sigma _k\\hat \\rho\n_k^{(N/2)}e^{2\\pi ikt/T},\n\\end{equation}\n\\vspace{-10pt}\n\\begin{multline}\\label{O52}\n\\tilde s(t)=\\frac 1T\\int_{-T/2}^{T/2}\\Biggl [s(\\frac{t+\\tau }%\n2)\\left( 1+e^{-\\pi i(t-\\tau )/T}\\right) \\vspace{4pt} \\\\\n\\times \\frac{\\sin (\\pi N(t-\\tau )/2T)}{\\sin (\\pi (t-\\tau )/T)}\\Biggr ]%\n\\,d\\tau .\n\\end{multline}\nAnother answer to (Q2) can be obtained using the Euler-Maclaurin\nformula.  Let $S_k(t)=s(t)e^{-2\\pi ikt/T}$, define $S_{k,j}^{(\\mu )}$\nto be the $\\mu ^{th}$ derivative of $S_k$ evaluated at $t=-\\frac\nT2+j\\,\\Delta t$, and $S_{k,j}=S_{k,j}^{(0)}$. Then the\nEuler-Maclaurin formula applied to Eq.\\,(18) gives\n\\begin{equation}\\label{O53}\n\\hspace{-10pt}\n\\sigma _k=\\frac{\\Delta t}T\\left \\{\\frac 12\\left[\nS_{k,N}+S_{k,0}\\right] +\\sum_{j=1}^{N-1}S_{k,j} + C_k^{(r)} \\right \\} +\nR_k^{(r)},\n\\end{equation}\\vspace{-5pt}\nwhere\n\\begin{equation*}\nC_k^{(r)}=\\sum_{\\nu =1}^r\\frac{B_{2\\nu }(\\Delta\nt)^{2\\nu -1}}{(2\\nu )!}\\left[ S_{k,N}^{(2\\nu -1)}-\nS_{k,0}^{(2\\nu -1)}\\right] ,\n\\end{equation*}\n$B_{2\\nu }$ is the $2\\nu ^{th}$ Bernoulli number, and $R_k^{(r)}$ is\na remainder term. The terms in Eq.\\,(53) that contain S are what one\nwould use following the procedure given in Section E above. The terms\ninvolving a $(2\\nu -1)^{st}$ derivative of S are correction terms,\nwhich if not used, give an indication (which can be misleading) of\nthe error. 
To examine the errors as they depend on the derivatives of\n$s$, we write\n\\begin{equation}\\label{O54}\n\\hspace{-5pt}S_k^{(\\mu )}(t)=\\sum_{j=0}^\\mu \\binom\n\\mu js^{(j)}(t)\\left[ \\frac{-2\\pi ik}T\\right] ^{\\mu -j}e^{-2\\pi ikt/T},\n\\end{equation}\n\\vspace{-10pt}\n\\begin{equation}\\label{O55}\n\\hspace{-5pt}S_k^{(\\mu )}(\\pm T/2)=\\sum_{j=0}^\\mu \\binom\n \\mu j\\left[ \\frac{-2\\pi ik}T\\right] ^{\\mu -j}(-1)^ks^{(j)}(\\pm T/2).\n\\end{equation}\nSubstitution into $C_k^{(r)}$ and a little algebraic manipulation gives the\nfollowing result.\n\\begin{equation}\\label{O56}\n\\hspace{-10pt}\\begin{array}{l}\\displaystyle\n\\frac{\\Delta t}TC_k^{(r)}\\!=\\!\\frac{(-1)^k}%\nN\\sum_{j=0}^{2r-1}b_j^{(r)}\\!\\left[ \\frac{iT}{2\\pi k}\\right] ^j\\!\\!\\left[\ns^{(j)}(\\frac T2)-s^{(j)}(-\\frac T2)\\right],\\\\\n~\n\\end{array}% Weird stuff with array needed to avoid overlap on label.\n\\end{equation}\nwhere\n\\begin{equation}\\label{O57}\nb_j^{(r)}=\\sum_{\\nu =\\lfloor j/2\\rfloor +1}^r\\binom{2\\nu\n-1}j\\frac{B_{2\\nu }(-2\\pi ik/N)^{2\\nu -1}}{(2\\nu )!},\n\\end{equation}\nand $\\lfloor j/2\\rfloor $ is the integer part of $j/2$. Note the similarities\nbetween Eqs.\\,(42) and (56). The degree of continuity in the periodic\nextension of the function sampled on $[-T/2$, T$/2]$ is extremely important\nin determining how well the discrete Fourier transform will approximate\neither of the other Fourier transforms.\n\n\\subsubsection{Filling in with Zeros}\n\nThe requirement that N be a power of~2 imposed by the FFT routines in\nthis chapter may be inconvenient. It is sometimes suggested that if\none has $N^{\\prime }$ function values, the remaining\n$N-N^{\\prime }$ values be set to~0, where N is the first power of\n$2\\geq N^{\\prime }$. Let $z_j$ denote the true values of the function\nfor $j=0,1,...,N-1$, and let $\\zeta _k $ and $\\hat \\zeta _k$ denote\nthe true and computed coefficients for the discrete Fourier transform\nof $z$. Then clearly\n\\begin{equation}\\label{O58}\n\\zeta _k-\\hat \\zeta _k=\\frac 1N\\sum_{j=N^{\\prime\n}}^{N-1}z_jW^{-jk}.\n\\end{equation}\nAnd, using Eqs.\\,(11), (14), (35), and (36)\n\\begin{equation}\\label{O59}\n\\hat \\zeta _k=\\sum_{\\nu =0}^{N-1}\\zeta _\\nu \\frac{%\n1-W^{-N^{\\prime }(k-\\nu )}}{N(1-W^{-(k-\\nu )})}\n\\end{equation}\nwhere for $\\nu =k$ the multiplier of $\\zeta _\\nu $ is $N^{\\prime }/N.$\n\nIt is perhaps more instructive to consider the connection between $\\hat\n\\zeta _k$ and $\\tilde \\zeta _k$, where\n\\begin{equation}\\label{O60}\n\\tilde \\zeta _k=\\frac 1{N^{\\prime }}\\sum_{j=0}^{N^{\\prime\n}-1}z_je^{-2\\pi ijk/N^{\\prime }}.\n\\end{equation}\nBy extending the definitions of $\\hat \\zeta _k$ and $\\tilde \\zeta _k$ to\nnoninteger $k$ in the obvious way, one obtains\n\\begin{equation}\\label{O61}\n\\begin{split}\n\\hat \\zeta _{kN/N^{\\prime }}&=\\frac 1N\\sum_{j=0}^{N^{\\prime\n}-1}z_je^{-2\\pi ijk/N^{\\prime }}=\\frac{N^{\\prime }}N\\tilde \\zeta _k\n\\text{,\\quad and}\\\\\n\\tilde \\zeta _{kN^{\\prime }/N}&=\\frac 1{N^{\\prime }}\n\\sum_{j=0}^{N^{\\prime }-1}z_je^{-2\\pi ijk/N}=\\frac N{N^{\\prime }}\n\\hat \\zeta _k\n\\end{split}\n\\end{equation}\nThus $\\frac N{N^{\\prime }}\\hat \\zeta _k$ can be thought of as a way\nof computing $\\tilde \\zeta $ with a smaller than usual $\\Delta \\omega\n$. In particular, if $N=2N^{\\prime }$, $\\hat \\zeta _{2k}=\\tilde \\zeta\n_k/2.$ It is sometimes suggested that $N^{\\prime }$ extra zeros be\nadded even when $N^{\\prime }$ is a power of~2. 
By doing this one\ncan use the FFT to get the autocorrelation from Eqs.\\,(11) and (14),\nand the power spectrum, which can be defined as the Fourier transform\nof the autocorrelation function, is obtained automatically. An\nalternative method for those who like this approach is to compute\n$\\tilde \\zeta _k/2$ $(N^{\\prime }$ a power of~2) to get $\\hat \\zeta\n_{2k}$, and obtain $\\hat \\zeta _{2k+1}$ by setting it equal to the\n$k^{th}$ discrete Fourier coefficient of $z^{\\prime }$ where\n$z_j^{\\prime }=z_je^{\\pi ij/N^{\\prime }},\\ j=0,1,...,N^{\\prime }-1.$\n\nIf one zeros $\\zeta _k$ for $k=N^{\\prime }$, $N^{\\prime }+1$, ..., N, then\ninstead of defining a trigonometric polynomial of degree $N-1$ that passes\nthrough $z_j$, $j=0,1,...,N-1,$ the $\\zeta ^{\\prime }s$ define a\ntrigonometric polynomial of degree $N^{\\prime }-1$ that fits the $z_j$ in a\nleast-squares sense. Multiplication by the Lanczos sigma factors, see below,\nshould usually give better smoothing characteristics for nonperiodic data.\n\n\\subsection{Recommendations}\n\nThe FFT gives good results if one is sampling a periodic function over one\nperiod. Results are less satisfactory for other cases. What we suggest here\nshould usually give an improvement over the FFT; however, in many cases one\ncan undoubtedly do even better.\n\nIt is assumed that trends in the data $(e.g.$, a linear trend) have been\nremoved, and that appropriate action has been taken for wild points or gaps\nin the data. By a trend, we mean a smooth function S, so that the\n(estimated) average value of $|S(t)-z(t)|$ is as small as possible for $t$\noutside the sampling interval. In some cases one may want to add the Fourier\ntransform of S to the computed transform of $z$.\n\nFor functions that are not periodic it makes little sense to attempt a\nrepresentation in terms of a discrete set of frequencies. In Eq.\\,(61) the\nvalue of the discrete transform for noninteger values is defined. What we\npropose here amounts to computing $\\zeta _k$, which is an average of the\nvalues of the discrete transform for nearby noninteger values of $k$.\nStarting from Eq.\\,(21), consider\n\\begin{align}\n\\varphi _a(\\omega )=&\\frac 1{2a}\\int_{\\omega -a}^{\\omega\n+a}\\varphi (\\alpha )\\,d\\alpha \\vspace{4pt}\\notag \\\\\n=&\\frac TN\\sum_{j=0}^{N-1}z_j\\frac\n1{2a}\\int_{\\omega -a}^{\\omega +a}e^{\\pi i\\alpha T(N-2j)/N}d\\alpha\n\\vspace{4pt}\\notag \\\\\n=\\frac TNe^{\\pi i\\omega T}&\\sum_{j=0}^{N-1}\\left[\n\\frac{\\sin [\\pi aT(N-2j)/N]}{\\pi aT(N-2j)/N}\\right]\nz_je^{-2\\pi ij\\omega T/N}.\\label{O62}\n\\end{align}\nWith $\\omega =\\omega _k$, Eq.\\,(62) is the same as Eq.\\,(23) except\nfor the multiplier of $z_j$. The choice $a=1/T$ is attractive for\nseveral reasons:  it is the smallest value of $a$ that gives a\nmultiplier of~0 for $z_0$ (and for $z_N$ if it were used), and thus\nhelps to minimize the effect of a discontinuity in the periodic\nextension of $z_j$; the value of $\\varphi _{1/T}(\\omega _k)$ is the\naverage value of $\\varphi $ from $\\omega _{k-1}$ to $\\omega\n_{k+1}$, so not much resolution is lost; finally, if Eq.\\,(39) is\nintegrated from $\\omega -a$ to $\\omega +a$ as was done in Eq.\\,(21),\none finds that the choice $a=1/T$ minimizes the effect of $\\varphi\n(\\hat \\omega )$ on $\\hat \\varphi _k(\\omega )$ for values of $\\hat\n\\omega $ remote from $\\omega $. 
Thus, we define\n\\begin{equation}\\label{O63}\n\\sigma _j=\\begin{cases}\n\\dfrac{\\sin \\pi j/N}{\\pi j/N}\\vspace{4pt} & j\\neq 0\\\\\n1 & j=0\n\\end{cases}\n\\end{equation}\nand the multiplier of $z_j$ in Eq.\\,(62) is $\\sigma _{N-2j}$. And since\n$\\sigma _{-N+2j}=\\sigma _{N-2j}$, $z_{N-j}$ has the same multiplier.\n\nThe $\\sigma _j$ defined in Eq.\\,(63) are the Lanczos sigma factors; see\n\\cite{Lanczos:1956:AA} for a different motivation of their derivation and\nfor instructive examples.  If one wants greater smoothing, rather than\nincreasing $a$, we recommend averaging the average.  Thus if the same\nprocedure is applied to Eq.\\,(62) as was applied to Eq.\\,(21) one finds\nthat $\\sigma _{N-2j}$ is replaced by $\\sigma _{N-2j}^2$.  This process can\nof course be repeated, depending on how much resolution one is willing to\ngive up.  The $\\sin \\pi j/N$ are readily available from the sine table\nrequired by the FFT subroutines as illustrated in the example for SRFT1.\n\nThe choice of $a = 1/T$ is not appropriate if one has filled out with\nzeros because $N^{\\prime}$ values are given and $N^{\\prime}$ is not a\npower of~2.  We assume the given values of $z$ are stored in $z_j$,\n$j = N^{\\prime}/2$, ..., $[(N+N^{\\prime})/2]$, and that both ends\nhave been zero filled. Then one should set $a =N/(N^{\\prime}T).$\n\nThe $\\sigma $ factors are also useful for smoothing. Given real data,\n$x$, one can replace $x_j$ by the average value of the trigonometric\npolynomial interpolating the data on the interval $x_{j-1}\\leq x\\leq\nx_{j+1}$ using the $\\sigma $ factors. Simply compute the $\\zeta _k$\nusing the FFT, multiply $\\zeta _k$ by $\\sigma _{2k}$, $k=1, 2,\n\\ldots, N/2$, and then compute the inverse transform. This application\nis illustrated in the example for SRFT.\n\nWhen the number of data points is not a power of two, one may want to\nuse a mixed radix algorithm such as that in\n\\cite{Singleton:1969:AAC}, or one could use the codes given here together\nwith the technique described in \\cite{Bailey:1991:TFF}.  Before making the\ndecision to go with something more complicated than padding the data\nwith~0's (or when padding the data with~0's), one should understand the\nconnections described from Eq.\\,(60) to just below Eq.\\,(61).\n\\nocite{Hamming:1962:NMS}\n\\nocite{Jenkins:1968:SAA}\n\\nocite{Havie:1973:REI}\n\\nocite{Abramovici:1973:TAC}\n\\nocite{Lyness:1971:TCF}\n\\nocite{VandeVooren:1968:OCI}\n\\nocite{Stetter:1966:NAF}\n\\nocite{Clendenin:1966:AMN}\n\\nocite{VanLoan:1992:CFF}\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\nFor information on the web, see:\\newline\n{\\bf http://theory.lcs.mit.edu/$^\\sim$fftw/fft-links.html}\n\nFred T. Krogh, JPL, November~1974. 
Minor additions and corrections,\nOctober~1991, October~1993, and April~1998.\n\\end{multicols}\n\\end{document}\n", "meta": {"hexsha": "94194a272dafafbb77c4b9972999442aed1fb5a0", "size": 35362, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch10-00.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch10-00.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch10-00.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 42.5024038462, "max_line_length": 98, "alphanum_fraction": 0.6889033426, "num_tokens": 12611, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178138, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5753727715228989}}
{"text": "\n\\subsection{Loss functions for hard classifiers}\n\nDon't want answers outside \\(0\\) and \\(1\\).\n\n\\subsubsection{F score}\n\n\\subsubsection{F1 score}\n\n\\(F_1\\) score: \\(\\dfrac{2PR}{(P+R)}\\)\n\nmay not just care about accuracy, eg breast cancer screening\n\nhigh accurancy can result from v basic model (ie all died on titanic)\n\n\\subsubsection{Receiver Operating Characteristic (ROC) Area Under Curve (AUC)}\n\n", "meta": {"hexsha": "eb3d52e1ef702441aab4004f7d7789377d7509fc", "size": 399, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/parametric/03-03-lossHard.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/parametric/03-03-lossHard.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/parametric/03-03-lossHard.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.1666666667, "max_line_length": 78, "alphanum_fraction": 0.7368421053, "num_tokens": 107, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.6959583250334527, "lm_q1q2_score": 0.5753569554868265}}
{"text": "\\section{Example Equations Explained}\n\n    \\subsection{Permutations}\n    \n        \\subsubsection{Factorial Notation}\n            \\begin{example}\n                Solve the following equation for n. State restrictions.\n                \\begin{equation*}\n                    \\frac{n!}{(n-3)!}=2n\n                \\end{equation*}\n            \\end{example}\n            To solve for n, we must first divide out the denominator, which in this case is $(n-3)!$. Before all of this, we should state our restriction. Since a factorial may not be negative, and $(n-3)!$ would equal $-3!$ our restriction is that \\emph{$n\\geqslant3$}. You may notice there is no $(n-3)!$ in the top, but the solution for this is simple. If you recall the equation for factorial notation, we can expand the top of this question.  Following this, the equation expands to:\n            \\begin{equation*}\n                \\frac{n\\cdot(n-1)\\cdot(n-2)\\cdot(n-3)!}{(n-3)!}=2n\n            \\end{equation*}\n            Next, the $(n-3)!$s cancel out, to bring you this:\n            \\begin{equation*}\n                n\\cdot(n-1)\\cdot(n-2) = 2n\n            \\end{equation*}\n            Now we can divide out the n from both sides. To get this:\n            \\begin{equation*}\n                (n-1)\\cdot(n-2) = 2\n            \\end{equation*}\n            We can't divide any further, so we'll need to expand.\n            \\begin{equation*}\n                n^2 -3n+2 = 2\n            \\end{equation*}\n            Move the 2 over so we can turn this into something we can factor.\n            \\begin{equation*}\n                n^2 -3n=0\n            \\end{equation*}\n            Use factor by grouping to factor this equation.\n            \\begin{equation*}\n                n(n-3)=0\n            \\end{equation*}\n            So either $n=0$ or $n-3=0$. Following our restriction, n be greater than 3, so $n=0$ is inadmissible, therefore the answer must be $n=3$.\n    \\clearpage\n        \\subsubsection{Rule of Product and Rule of Sum}\n            \\begin{example}\n                Using only the numbers from $0\\cdots4$, how many three digit numbers can you find if the number 4 \\textbf{MUST} be included. No repetition.\n            \\end{example}\n            For this question we must follow the Rule of Sum.\n            \n    \\clearpage\n        \\subsubsection{Problem Solving with Permutations}\n            \\begin{example}\n                Try to find the total number of arrangements of the numbers 0-7(8 digits) that are even. No repetition is allowed, and the number cannot start with a 0.\n            \\end{example}\n            For this question, you can use the \\textbf{Case Method} and break the question down into numbers that end in a zero, and numbers that don't end in a zero, yet still end in an even number. \n            \\begin{parcolumns}{2}\n                \\colchunk[1]{\\\\\\textbf{Numbers that end in a zero:}\n                    $$7! \\cdot 1 = 5040$$ \\\\\n                    \\textbf{Explanation:} One has 7 possible numbers for the first place, and only zero as the final possible number.\n                }\n                \\colchunk[2]{\\\\\\textbf{Numbers that end in 2,4,6:}\n                    $$6 \\cdot 6 \\cdot 3 = 12960$$ \\\\\n                    \\textbf{Explanation:} The last number is 3, and represents the possible even numbered endings (2,4,6). 
The factor 6 counts the choices for the first digit (zero and the chosen final digit are excluded), while $6!$ counts the arrangements of the remaining six digits.\\\\\n                }\n                \\colplacechunks\n            \\end{parcolumns}\n            This question isn't over yet, though. Finally, you have to add the two cases together: we have counted the arrangements for each possible even ending (0,2,4,6), but the counts haven't been added together yet.\n            \\begin{equation*}\n                5040 + 12960 = 18000\n            \\end{equation*}\n            \\emph{In conclusion}, there are 18000 possible even arrangements of the numbers 0--7 (8 digits in total).\n            \n    \\subsection{Combinations}\n    \n        \\subsubsection{Set Theory}\n    \n        \\subsubsection{Intro to Combinations}            \n            \\begin{example}\n                A committee of 4 people is chosen from 10 students, 6 parents, and 10 teachers. Order does not matter. How many ways can we make the committee if there are a) no restrictions, b) at least one teacher on the committee, or c) exactly three students on the committee?\n            \\end{example}\n            \\textbf{a) No restrictions:}\\\\\n            We can assume that this is a combination, as there are no restrictions other than that order does not matter. Therefore we only need to fill in the combination equation, which turns out to be C(26,4). \n            \\begin{equation*}\n                \\binom{26}{4} = 14950\n            \\end{equation*}\n            \\textbf{b) At least one teacher.}\n            There are two ways to handle this question: directly or indirectly. In this example, I will choose indirectly. In this case, we want every solution with a teacher in it, and do not want the solutions with zero teachers. Therefore we must subtract the unwanted solutions from all the solutions. This would work out to be:\n            \\begin{equation*}\n                \\binom{26}{4} - \\binom{10}{0}\\cdot\\binom{16}{4} = 13130\n            \\end{equation*}\n            \\textbf{c) Exactly three students.}\n            For this question, we are forced to do it one way and one way only, as the question refers to \\emph{exactly three} students. To start we must have 3 students on the committee, and then since we have one space left on the committee, we must fill that spot with someone who \\emph{IS NOT} a student, as a fourth student would ruin our \"exactly three\" requirement. 
This works out to be:\n            \\begin{equation*}\n                \\binom{10}{3}\\cdot\\binom{26-10}{1} = 1920\n            \\end{equation*}\n        ", "meta": {"hexsha": "8d3d46727fdb27f9bce3c224b4b2d088145cf701", "size": 5837, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "highschool-projects/MDM4UI/examples.tex", "max_stars_repo_name": "johnaoss/dead-projects", "max_stars_repo_head_hexsha": "f8a911a8d08dc34bf52a8d1afd8493a3fcb7f2ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "highschool-projects/MDM4UI/examples.tex", "max_issues_repo_name": "johnaoss/dead-projects", "max_issues_repo_head_hexsha": "f8a911a8d08dc34bf52a8d1afd8493a3fcb7f2ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "highschool-projects/MDM4UI/examples.tex", "max_forks_repo_name": "johnaoss/dead-projects", "max_forks_repo_head_hexsha": "f8a911a8d08dc34bf52a8d1afd8493a3fcb7f2ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.8555555556, "max_line_length": 488, "alphanum_fraction": 0.6006510194, "num_tokens": 1490, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117898012104, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5753569525154582}}
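\nThe counts in these examples are small enough to verify by brute force; the sketch below (plain Python, our own addition to the notes) enumerates the even-arrangement case and recomputes the committee counts.\n\\begin{verbatim}\nfrom itertools import permutations\nfrom math import comb\n\n# Even 8-digit arrangements of 0-7: no repetition, no leading zero.\nevens = sum(1 for p in permutations(range(8))\n            if p[0] != 0 and p[-1] % 2 == 0)\nprint(evens)                      # 18000\n\n# Committees of 4 from 10 students, 6 parents and 10 teachers.\nprint(comb(26, 4))                # 14950: no restrictions\nprint(comb(26, 4) - comb(16, 4))  # 13130: at least one teacher\nprint(comb(10, 3) * comb(16, 1))  # 1920: exactly three students\n\\end{verbatim}\n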
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n\n\\subsection*{covNoise.m} \n\n\\begin{par}\nIndependent covariance function, ie \"white noise\", with specified variance. The covariance function is specified as:\n\\end{par} \\vspace{1em}\n\\begin{par}\nk(x\\^{}p,x\\^{}q) = s2 * \\ensuremath{\\backslash}delta(p,q)\n\\end{par} \\vspace{1em}\n\\begin{par}\nwhere s2 is the noise variance and \\ensuremath{\\backslash}delta(p,q) is a Kronecker delta function which is 1 iff p=q and zero otherwise. The hyperparameter is\n\\end{par} \\vspace{1em}\n\\begin{par}\nlogtheta = [ log(sqrt(s2)) ]\n\\end{par} \\vspace{1em}\n\\begin{par}\nFor more help on design of covariance functions, try \"help covFunctions\".\n\\end{par} \\vspace{1em}\n\\begin{par}\n(C) Copyright 2006 by Carl Edward Rasmussen, 2006-03-24.\n\\end{par} \\vspace{1em}\n\n\\begin{lstlisting}\nfunction [A, B] = covNoise(logtheta, x, z)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\nif nargin == 0, A = '1'; return; end              % report number of parameters\n\ns2 = exp(2*logtheta);                                          % noise variance\n\nif nargin == 2                                      % compute covariance matrix\n  A = s2*eye(size(x,1));\nelseif nargout == 2                              % compute test set covariances\n  A = s2;\n  B = 0;                               % zeros cross covariance by independence\nelse                                                % compute derivative matrix\n  A = 2*s2*eye(size(x,1));\nend\n\\end{lstlisting}\n", "meta": {"hexsha": "dcccd7f87e62fa2d17c39384d54f1679630534b4", "size": 1559, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/covNoise.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/covNoise.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/covNoise.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 29.4150943396, "max_line_length": 159, "alphanum_fraction": 0.6164207826, "num_tokens": 440, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5753569443306953}}
{"text": "\n\n    \\filetitle{cumsumk}{Cumulative sum with a k-period leap}{tseries/cumsumk}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nY = cumsumk(X,K,Rho,Range)\nY = cumsumk(X,K,Rho)\nY = cumsumk(X,K)\nY = cumsumk(X)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{X} {[} tseries {]} - Input data.\n\\item\n  \\texttt{K} {[} numeric {]} - Number of periods that will be leapt the\n  cumulative sum will be taken; if not specified, \\texttt{K} is chosen\n  to match the frequency of the input data (e.g. \\texttt{K = -4} for\n  quarterly data), or \\texttt{K = -1} for indeterminate frequency.\n\\item\n  \\texttt{Rho} {[} numeric {]} - Autoregressive coefficient; if not\n  specified, \\texttt{Rho = 1}.\n\\item\n  \\texttt{Range} {[} numeric {]} - Range on which the cumulative sum\n  will be computed and the output series returned.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{Y} {[} tseries {]} - Output data constructed as described\n  below.\n\\end{itemize}\n\n\\paragraph{Options}\\label{options}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{'log='} {[} \\texttt{true} \\textbar{} \\emph{\\texttt{false}} {]}\n  - Logarithmise the input data before, and de-logarithmise the output\n  data back after, running \\texttt{x12}.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nIf \\texttt{K \\textless{} 0}, the first \\texttt{K} observations in the\noutput series \\texttt{Y} are copied from \\texttt{X}, and the new\nobservations are given recursively by\n\n\\begin{verbatim}\nY{t} = Rho*Y{t-K} + X{t}.\n\\end{verbatim}\n\nIf \\texttt{K \\textgreater{} 0}, the last \\texttt{K} observations in the\noutput series \\texttt{Y} are copied from \\texttt{X}, and the new\nobservations are given recursively by\n\n\\begin{verbatim}\nY{t} = Rho*Y{t+K} + X{t},\n\\end{verbatim}\n\ngoing backwards in time.\n\nIf \\texttt{K == 0}, the input data are returned.\n\n\\paragraph{Example}\\label{example}\n\nConstruct random data with seasonal pattern, and run X12 to seasonally\nadjust these series.\n\n\\begin{verbatim}\nx = tseries(qq(1990,1):qq(2020,4),@randn);\nx1 = cumsumk(x,-4,1);\nx2 = cumsumk(x,-4,0.7);\nx1sa = x12(x1);\nx2sa = x12(x2);\n\\end{verbatim}\n\nThe new series \\texttt{x1} will be a unit-root process while \\texttt{x2}\nwill be stationary. 
Note that the command on the second line could be\nreplaced with \\texttt{x1 = cumsumk(x)}.\n\n\n", "meta": {"hexsha": "92a701065c8c74b8cd9b5a643f7fec42cd22a1b4", "size": 2425, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/tseries/cumsumk.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/tseries/cumsumk.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/tseries/cumsumk.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 26.6483516484, "max_line_length": 77, "alphanum_fraction": 0.7109278351, "num_tokens": 802, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5753374364950004}}
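\nThe recursion is easy to reproduce outside the toolbox. A minimal Python sketch of the \\texttt{K \\textless{} 0} case (our own illustration on a plain list, not the IRIS \\texttt{tseries} class):\n\n\\begin{verbatim}\ndef cumsumk(x, k=-4, rho=1.0):\n    # k < 0: copy the first -k observations, then\n    # y[t] = rho*y[t+k] + x[t].\n    assert k < 0, 'sketch covers only the k < 0 case'\n    y = list(x[:-k])\n    for t in range(-k, len(x)):\n        y.append(rho * y[t + k] + x[t])\n    return y\n\nprint(cumsumk([1, 2, 3, 4, 5, 6, 7, 8]))\n# [1, 2, 3, 4, 6, 8, 10, 12]\n\\end{verbatim}\n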
{"text": "\\chapter{Cubic functions}\n\\section{Defined on $\\mdr$}\nLet $f:\\mdr \\rightarrow \\mdr, f(x) = a \\cdot x^3 + b \\cdot x^2 + c \\cdot x + d$ \nbe a cubic function with $a \\in \\mdr \\setminus \\Set{0}$ and \n$b, c, d \\in \\mdr$.\n\n\\begin{figure}[htp]\n    \\centering\n\\begin{tikzpicture}\n    \\begin{axis}[\n        legend pos=south east,\n        axis x line=middle,\n        axis y line=middle,\n        grid = major,\n        width=0.8\\linewidth,\n        height=8cm,\n        grid style={dashed, gray!30},\n        xmin=-3, % start the diagram at this x-coordinate\n        xmax= 3, % end   the diagram at this x-coordinate\n        ymin=-3, % start the diagram at this y-coordinate\n        ymax= 3, % end   the diagram at this y-coordinate\n        axis background/.style={fill=white},\n        xlabel=$x$,\n        ylabel=$y$,\n        tick align=outside,\n        minor tick num=-3,\n        enlargelimits=true,\n        tension=0.08]\n      \\addplot[domain=-3:3, thick,samples=50, red] {x*x*x}; \n      \\addplot[domain=-3:3, thick,samples=50, green] {x*x*x+x*x};\n      \\addplot[domain=-3:3, thick,samples=50, blue] {x*x*x+2*x*x};\n      \\addplot[domain=-3:3, thick,samples=50, orange] {x*x*x+x}; \n      \\addlegendentry{$f_1(x)=x^3$}\n      \\addlegendentry{$f_2(x)=x^3 + x^2$}\n      \\addlegendentry{$f_2(x)=x^3 + 2 \\cdot x^2$}\n      \\addlegendentry{$f_1(x)=x^3 + x$}\n    \\end{axis} \n\\end{tikzpicture}\n    \\caption{Cubic functions}\n\\end{figure}\n\n%\n%\\section{Special points}\n%\\todo[inline]{Write this}\n%\n%\\section{Voronoi}\n%\n%For $b^2 \\geq 3ac$\n%\n%\\todo[inline]{Write this}\n\n\\subsection{Calculate points with minimal distance}\n\\begin{theorem}\\label{thm:no-finite-solution}\n    There cannot be a finite, closed form solution to the problem of finding \n    a closest point $(x, f(x))$ to a given point $P$ when $f$ is\n    a polynomial function of degree $3$ or higher.\n\\end{theorem}\n\n\\begin{proof}\n    Suppose you could solve the closest point problem for arbitrary\n    cubic functions $f = ax^3 + bx^2 + cx + d$ and arbitrary points $P = (x_P, y_P)$.\n\n    Then you could solve the following problem for $x$:\n    \\begin{align}\n        0  &\\stackrel{!}{=} \\left ((d_{P,f}(x))^2 \\right )'\\\\\n           &=-2 x_p + 2x -2y_p(f(x))' + (f(x)^2)'\\\\\n           &= 2 f(x) \\cdot f'(x) - 2 y_p f'(x) + 2x - 2 x_p\\\\\n           &= f(x) \\cdot f'(x) - y_p f'(x) + x - x_p\\\\\n           &= \\underbrace{f'(x) \\cdot \\left (f(x) - y_p \\right )}_{\\text{Polynomial of degree 5}} + x - x_p\n    \\end{align}\n\n    General algebraic equations of degree 5 don't have a solution formula.\\footnote{TODO: Quelle}\n    Although here seems to be more structure, the resulting algebraic\n    equation can be almost any polynomial of degree 5:\\footnote{Thanks to Peter Ko\u0161in\u00e1r on \\href{http://math.stackexchange.com/a/584814/6876}{math.stackexchange.com} for the idea.}\n\n    \\begin{align}\n        0  &\\stackrel{!}{=} f'(x) \\cdot \\left (f(x) - y_p \\right ) + (x - x_p)\\\\\n        &= \\underbrace{3 a^2}_{= \\tilde{a}} x^5 + \\underbrace{5ab}_{= \\tilde{b}}x^4 + \\underbrace{2(2ac + b^2 )}_{= \\tilde{c}}x^3 &+& \\underbrace{3(ad+bc-ay_p)}_{= \\tilde{d}} x^2 \\\\\n        & &+& \\underbrace{(2 b d+c^2+1-2 b y_p)}_{= \\tilde{e}}x+\\underbrace{c d-c y_p-x_p}_{= \\tilde{f}}\\\\\n        0 &\\stackrel{!}{=} \\tilde{a}x^5 + \\tilde{b}x^4 + \\tilde{c}x^3 + \\tilde{d}x^2 + \\tilde{e}x + \\tilde{f}\n    \\end{align}\n\n    \\begin{enumerate}\n        \\item For 
any coefficient $\\tilde{a} \\in \\mdr_{> 0}$ of $x^5$ we can choose $a := \\sqrt{\\tilde{a}/3}$ such that we get $\\tilde{a}$.\n        \\item For any coefficient $\\tilde{b} \\in \\mdr \\setminus \\Set{0}$ of $x^4$ we can choose $b := \\frac{1}{5a} \\cdot \\tilde{b}$ such that we get $\\tilde{b}$.\n        \\item With $c := \\frac{1}{4a}\\left(\\tilde{c} - 2b^2\\right)$, we can get any value of $\\tilde{c} \\in \\mdr$.\n        \\item With $d := y_p - \\frac{bc}{a} + \\frac{1}{3a} \\tilde{d}$, we can get any value of $\\tilde{d} \\in \\mdr$.\n        \\item With $y_p := \\frac{1}{2b}\\left(2bd + c^2 + 1 - \\tilde{e}\\right)$, we can get any value of $\\tilde{e} \\in \\mdr$.\n        \\item With $x_p := cd - c y_p - \\tilde{f}$, we can get any value of $\\tilde{f} \\in \\mdr$.\n    \\end{enumerate}\n\n    The first restriction guarantees that we have a polynomial of \n    degree 5. The second one is necessary to be able to reach any value of\n    $\\tilde{e}$. (Items 4 and 5 are two linear equations in $d$ and $y_p$;\n    generically they can be solved simultaneously.)\n\n    This means that there is no finite solution formula for the problem of \n    finding the closest points on a cubic function to a given point,\n    because if there were one, you could use this formula for finding\n    roots of polynomials of degree 5. $\\qed$\n\\end{proof}\n\n\n\\subsection{Another approach}\nJust like we moved the function $f$ and the point to get into a \nnicer situation, we can apply the same approach to cubic functions.\n\n\\begin{figure}[htp]\n    \\centering\n\\begin{tikzpicture}\n    \\begin{axis}[\n        legend pos=south east,\n        axis x line=middle,\n        axis y line=middle,\n        grid = major,\n        width=0.8\\linewidth,\n        height=8cm,\n        grid style={dashed, gray!30},\n        xmin=-3, % start the diagram at this x-coordinate\n        xmax= 3, % end   the diagram at this x-coordinate\n        ymin=-3, % start the diagram at this y-coordinate\n        ymax= 3, % end   the diagram at this y-coordinate\n        axis background/.style={fill=white},\n        xlabel=$x$,\n        ylabel=$y$,\n        tick align=outside,\n        minor tick num=-3,\n        enlargelimits=true,\n        tension=0.08]\n      \\addplot[domain=-3:3, thick,samples=50, red] {x*x*x}; \n      \\addplot[domain=-3:3, thick,samples=50, green] {x*x*x+x};\n      \\addplot[domain=-3:3, thick,samples=50, orange] {x*x*x-x}; \n      \\addplot[domain=-3:3, thick,samples=50, blue, dotted] {x*x*x+2*x};\n      \\addplot[domain=-3:3, thick,samples=50, lime, dashed] {x*x*x+3*x};\n      \\addlegendentry{$f_1(x)=x^3$}\n      \\addlegendentry{$f_2(x)=x^3 + x$}\n      \\addlegendentry{$f_3(x)=x^3 - x$}\n      \\addlegendentry{$f_4(x)=x^3 + 2 \\cdot x$}\n      \\addlegendentry{$f_5(x)=x^3 + 3 \\cdot x$}\n    \\end{axis} \n\\end{tikzpicture}\n    \\caption{Cubic functions with $b = d = 0$}\n\\end{figure}\n\nFirst, we move $f$ by $\\frac{b}{3a}$ in the $x$ direction, so\n\n\\[f_1(x) = ax^3 + \\left(c - \\frac{b^2}{3a}\\right) x + \\frac{2b^3}{27 a^2} - \\frac{bc}{3a} + d \\;\\;\\;\\text{ and }\\;\\;\\;P_1 = (x_P + \\frac{b}{3a}, y_P)\\]\n\nbecause\n\n\\begin{align}\n    f_1(x) &= a \\left (x - \\frac{b}{3a} \\right )^3 + b \\left (x-\\frac{b}{3a} \\right )^2 + c \\left (x-\\frac{b}{3a} \\right ) + d\\\\\n           &= a \\left (x^3 - 3 \\frac{b}{3a}x^2 + 3 (\\frac{b}{3a})^2 x - \\frac{b^3}{27a^3} \\right )\n             +b \\left (x^2 - \\frac{2b}{3a} x + \\frac{b^2}{9a^2} \\right )\n             +c x - \\frac{bc}{3a} + d\\\\\n            &= ax^3 - bx^2 + \\frac{b^2}{3a}x - \\frac{b^3}{27 a^2}\\\\\n            & \\;\\;\\;\\;\\;\\;+ bx^2 - \\frac{2b^2}{3a}x + 
\\frac{b^3}{9a^2}\\\\\n            & \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; + c x - \\frac{bc}{3a} + d\\\\\n            &= ax^3 + \\left(c - \\frac{b^2}{3a}\\right)x + \\frac{b^3}{9a^2} \\left (1-\\frac{1}{3} \\right )- \\frac{bc}{3a} + d\n\\end{align}\n\nThen we move it in the $y$ direction by $- (\\frac{2b^3}{27 a^2} - \\frac{bc}{3a} + d)$:\n\n\\[f_2(x) = ax^3 + \\left(c - \\frac{b^2}{3a}\\right) x \\;\\;\\;\\text{ and }\\;\\;\\;P_2 = (x_P + \\frac{b}{3a}, y_P - (\\frac{2b^3}{27 a^2} - \\frac{bc}{3a} + d))\\]\n\nMultiply everything by $\\sgn(a)$:\n\n\\[f_3(x) = \\underbrace{|a|}_{=: \\alpha}x^3 + \\underbrace{\\sgn(a)\\left(c - \\frac{b^2}{3a}\\right)}_{=: \\beta} x \\;\\;\\;\\text{ and }\\;\\;\\;P_3 = (x_P + \\frac{b}{3a}, \\sgn(a) (y_P - \\frac{2b^3}{27 a^2} + \\frac{bc}{3a} - d))\\]\n\nNow the problem seems to be much simpler. The function $\\alpha x^3 + \\beta x$\nwith $\\alpha > 0$ is centrally symmetric about $(0, 0)$.\n\n\\todo[inline]{What next?}\n\n\\subsection{Number of points with minimal distance}\nAs this leads to a polynomial of degree 5 of which we have to find\nroots, there cannot be more than 5 solutions.\n\\todo[inline]{Can there be 3, 4 or even 5 solutions? Examples!\n\nAfter looking at function graphs of cubic functions, I'm pretty \nsure that there cannot be 4 or 5 solutions, no matter how you \nchoose the cubic function $f$ and $P$.\n\nI'm also pretty sure that there is no polynomial (no matter what degree)\nthat has more than 3 solutions.}\n\n\n\\subsection{Interpolation and approximation}\n\\subsubsection{Quadratic spline interpolation}\nYou could interpolate the cubic function by a quadratic spline.\n\n\\subsubsection{Bisection method}\n\\todo[inline]{TODO}\n\n\\subsubsection{Newton's method}\nOne way to find roots of functions is Newton's method. It gives an\niterative computation procedure that can converge quadratically \nif some conditions are met:\n\n\\begin{theorem}[local quadratic convergence of Newton's method\\footnotemark]\n    Let $D \\subseteq \\mdr^n$ be open and $f: D \\rightarrow \\mdr^n \\in C^2(D)$.\n    Let $x^* \\in D$ with $f(x^*) = 0$, and let the Jacobi matrix $f'(x^*)$\n    be invertible at the root.\n\n    Then there is a sphere \n    \\[K := K_\\rho(x^*) = \\Set{x \\in \\mdr^n | \\|x- x^*\\|_\\infty \\leq \\rho} \\subseteq D\\]\n    such that $x^*$ is the only root of $f$ in $K$. Furthermore,\n    the elements of the sequence\n    \\[ x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)}\\]\n    remain in $K$ for every starting value $x_0 \\in K$, and\n    \\[\\lim_{n \\rightarrow \\infty} x_n = x^*\\]\n    Also, there is a constant $C > 0$ such that\n    \\[\\|x^* - x_{n+1} \\| \\leq C \\|x^* - x_n\\|^2 \\text{ for } n \\in \\mathbb{N}_0\\]\n\\end{theorem}\n\\footnotetext{Translated from German to English from lecture notes of \"Numerische Mathematik f\u00fcr die Fachrichtung Informatik\nund Ingenieurwesen\" by Dr. Wei\u00df, KIT}\n\nThe approach is extraordinarily simple. You choose a starting value\n$x_0$ and compute\n\n\\[x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)}\\]\n\nAs soon as the values don't change much, you are close to a root.\nThe problem of this approach is choosing a starting value that is\nclose enough to the root. So we have to have a \\enquote{good}\ninitial guess.\n\\clearpage\n\n\\subsubsection{Muller's method}\nMuller's method was first presented by David E. Muller in 1956.\n\n\\todo[inline]{Paper? 
Might this be worth a try?}\n\n\\subsubsection{Bisection method}\nThe idea of the bisection method is the following:\n\nSuppose you know a finite interval $[a,b]$ in which you have \nexactly one root $r \\in (a,b)$ with $f(r) = 0$.\n\nThen you can halve that interval:\n    \\[[a, b] = \\left [a, \\frac{a+b}{2} \\right ] \\cup \\left [\\frac{a+b}{2}, b \\right ]\\]\n\nNow three cases can occur:\n\\begin{enumerate}\n    \\item[Case 1] $f(\\frac{a+b}{2})=0$: You have found the exact root.\n    \\item[Case 2] $\\sgn(f(a)) = \\sgn(f(\\frac{a+b}{2}))$: Continue searching in $[\\frac{a+b}{2}, b]$\n    \\item[Case 3] $\\sgn(f(b)) = \\sgn(f(\\frac{a+b}{2}))$: Continue searching in $[a, \\frac{a+b}{2}]$\n\\end{enumerate}\n\n\\todo[inline]{Which interval can I choose? How would I know that there is exactly one root?}\n\n\\subsubsection{Bairstow's method}\nQuoted from Wikipedia:\nThe algorithm first appeared in the appendix of the 1920 book \"Applied Aerodynamics\" by Leonard Bairstow. The algorithm finds the roots in complex conjugate pairs using only real arithmetic.\n\n[...]\n\\todo[inline]{Find a source for the following!}\nA particular kind of instability is observed when the polynomial has odd degree and only one real root.\n\n\n\n\\section{Defined on a closed interval $[a,b] \\subseteq \\mdr$}\nThe point with minimum distance can be found by:\n\\[\\underset{x\\in[a,b]}{\\arg \\min d_{P,f}(x)} = \\begin{cases}\n S_3(f, P) &\\text{if } S_3(f, P) \\cap [a,b] \\neq \\emptyset\\\\\n  TODO     &\\text{if } S_3(f, P) \\cap [a,b] = \\emptyset\n    \\end{cases}\\]\n", "meta": {"hexsha": "6849016ed4ddc7f54604aaa7714b428cd90ccd8c", "size": 11369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/math-minimal-distance-to-cubic-function/cubic-functions.tex", "max_stars_repo_name": "keithmannock/LaTeX-examples", "max_stars_repo_head_hexsha": "6829f6cf9710b314a4bf0b64abdae5bcf6997fd0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-11-02T10:09:12.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-24T22:16:18.000Z", "max_issues_repo_path": "documents/math-minimal-distance-to-cubic-function/cubic-functions.tex", "max_issues_repo_name": "everbot/LaTeX-examples", "max_issues_repo_head_hexsha": "9558d8b3c19776cb068b9753dcd3f88645dd7134", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documents/math-minimal-distance-to-cubic-function/cubic-functions.tex", "max_forks_repo_name": "everbot/LaTeX-examples", "max_forks_repo_head_hexsha": "9558d8b3c19776cb068b9753dcd3f88645dd7134", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5805243446, "max_line_length": 203, "alphanum_fraction": 0.6121910458, "num_tokens": 4043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7401743620390162, "lm_q1q2_score": 0.5753374364950004}}
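\nAs a small proof of concept for the Newton approach above (our own illustration, not from the cited lecture notes), the following Python sketch applies the iteration to the degree-5 optimality condition $f'(x)(f(x)-y_P) + x - x_P = 0$ for the concrete cubic $f(x) = x^3$ and the point $P = (2, 1)$:\n\\begin{verbatim}\ndef newton(g, dg, x0, tol=1e-12, max_iter=100):\n    # Plain Newton iteration x_{n+1} = x_n - g(x_n)/dg(x_n).\n    x = x0\n    for _ in range(max_iter):\n        step = g(x) / dg(x)\n        x -= step\n        if abs(step) < tol:\n            break\n    return x\n\nf = lambda x: x**3           # a = 1, b = c = d = 0\ndf = lambda x: 3 * x**2\nxP, yP = 2.0, 1.0\n\ng = lambda x: df(x) * (f(x) - yP) + x - xP     # degree-5 condition\ndg = lambda x: 6*x*(f(x) - yP) + df(x)**2 + 1  # its derivative\n\nx_star = newton(g, dg, x0=1.0)\nprint(x_star, g(x_star))     # candidate closest point (x*, f(x*))\n\\end{verbatim}\nAs noted above, convergence depends on a good starting value; a robust implementation would first bracket the real roots, e.g. with bisection.\n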
{"text": "\\section{Experiments}\n\nWe have implemented a tool chain that takes as input a description\nof the polytopes in terms of vertices, and the parallelepiped boundaries,\nand it returns the placement vectors for the arrangement, if any\nexists.\n\nIn order to compute the facets of the polytopes we rely on the CGAL \nlibrary~\\cite{CGAL}. We then solve the SMT problem with some\nefficient and open-source SMT-solver, such as CVC4~\\cite{CVC4}, OpenSMT~\\cite{OpenSMT}, \nor Z3~\\cite{Z3}. We use CGAL again\nto export the solution returned by the SMT-Solver into the VRML graphical models\nthat can be seen in this paper. The source code of our tool-chain can \nbe downloaded from~\\url{https://github.com/bobosoft/polytopepacking}. \nAt the same address it is possible to obtain the problem descriptions, the\nencoded smt2 files, and the 3D models of the results.\n\nAs an experiment to test our proof-of-concept tool-chain we have\nencoded and solved the first problem described in~\\cite{sto03}, where\n7 polytopes are placed in a parallelepiped of length 12 and width 10.\n\\cite{sto03} shows a placement of height 27, which is a local optima for\nthe approach of that paper. By running our tool-chain we were able to\nprove, in about 10 minutes, that a height of 23 is actually sufficient.\nWe were not able to obtain a model for height 22 within an hour, and\nthe polytope placement is shown in Figure~\\ref{fig:p7}. In these\ntrials we have increased and decreased the value for the height manually,\nbut this process can be automated in a way that is standard among\noptimizing solvers.\n\nA more automated implementation, and more exhaustive and detailed \nexperimentation are left as future work.\n\n%% \\begin{figure}\n%% \\begin{center}\n%% \\begin{tabular}{cccc}\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_1.png}\n%% \\end{minipage}\n%% &\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_2.png}\n%% \\end{minipage}\n%% &\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_3.png}\n%% \\end{minipage}\n%% &\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_4.png}\n%% \\end{minipage}\n%% \\\\\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_5.png}\n%% \\end{minipage}\n%% &\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_6.png}\n%% \\end{minipage}\n%% &\n%% \\begin{minipage}{.2\\textwidth}\n%% \\includegraphics[scale=.2]{poly_7.png}\n%% \\end{minipage}\n%% \\end{tabular}\n%% \\end{center}\n%% \\caption{The polytopes that are to be packed.}\n%% \\label{fig:p}\n%% \\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\begin{minipage}{.4\\textwidth}\n\\includegraphics[scale=.5]{problem_7_1.png}\n\\end{minipage}\n&\n\\begin{minipage}{.4\\textwidth}\n\\includegraphics[scale=.5]{problem_7_2.png}\n\\end{minipage}\n\\end{tabular}\n\\end{center}\n\\caption{Two views of the packing placement for 7 polytopes as per~\\cite{sto03} \nfor a height of 23.}\n\\label{fig:p7}\n\\end{figure}\n", "meta": {"hexsha": "883b0564b1099f8c7f32e2b3ac9b2f7a65258ac2", "size": 2925, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/experiments.tex", "max_stars_repo_name": "formalmethods/polytopepacking", "max_stars_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-04-07T13:54:39.000Z", "max_stars_repo_stars_event_max_datetime": 
"2016-04-07T13:54:39.000Z", "max_issues_repo_path": "report/experiments.tex", "max_issues_repo_name": "bobosoft/polytopepacking", "max_issues_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-03-18T08:05:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-21T20:44:27.000Z", "max_forks_repo_path": "report/experiments.tex", "max_forks_repo_name": "formalmethods/polytopepacking", "max_forks_repo_head_hexsha": "7879d1ceb252f731fa4bbc9d93341b832e62115d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.4117647059, "max_line_length": 88, "alphanum_fraction": 0.7411965812, "num_tokens": 894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5753374339496332}}
{"text": "\\bigskip\n\\textit{This paper is the report of the \\textbf{Group 43} for the first assignment\nof the course.}\n\n\\section{Search Algorithms and their relations}\n\n\\begin{enumerate}\n \\item Give a consistent heuristic for this problem. Prove that it is admissible.\n    \\begin{framed}\n        Our heuristic is a function of the distance with a manhattan\n        distance heuristic (ignoring walls) between\n        the current character position and the \\euro{} position. It is\n        admissible because it will never overestimate the cost of reaching\n        the goal, as such we will always find an optimal solution with A*\n        search. \n    \\end{framed}\n \\item Show on the left maze the states (board positions) that are visited during an\nexecution of a uniform-cost graph search. We assume that when different states\nin the fringe have the smallest value, the algorithm chooses the state with\nthe smallest coordinate (\\textit{i, j}) ((0, 0) being the bottom left position, \\textit{i} being the\nhorizontal index and \\textit{j} the vertical one) using a lexicographical order.\n    \\begin{framed}\n        \\begin{center}\n        \\includegraphics[width=0.5\\linewidth]{figure1_left.png}\n        \\end{center}\n\n        The squares are visited following the numbering order and red nodes\n        are not included in the final path. \n    \\end{framed}\n  \\item Show on the right maze the board positions visited by A graph search with\na manhattan distance heuristic (ignoring walls). A state is visited when it is\nselected in the fringe and expanded. When several states have the smallest\npath cost, this uniform-cost search visits them in the same lexicographical order\nas the one used for uniform-cost graph search.\n    \\begin{framed}\n        \\begin{center}\n        \\includegraphics[width=0.5\\linewidth]{figure1_right.png}\n        \\end{center}\n\n        The squares are visited following the numbering order and red nodes\n        are not included in the final path. \n    \\end{framed}\n\n\\end{enumerate}\n\n\\section{Sokoban planning problem}\n\n\\begin{enumerate}\n \\item As illustrated on Figure 3 some situations cannot lead to a solution. Are there\nother similar situations? If yes, describe them.\n  \\begin{framed}\n        Other similar situations that cannot lead to a solution happen :\n        \\begin{itemize}\n            \\item When a box is in a corner formed by walls and\n                this corner is not a target position for boxes ;\n            \\item When two boxes are next to each other and both of the boxes cannot be moved so they act as\n                walls.\n        \\end{itemize}\n  \\end{framed}\n  \\item Why is it important to identify dead states in your successor function? How\nare you going to implement it?\n    \\begin{framed}\n        Identifying as many dead states as possible will allow us to reduce\n        the search space by stopping the exploration of branches that fell\n        in a dead state. Allowing us to reduce the time that our algorithm\n        need to find a solution. We are going to implement it by\n        identifying the situations mentioned in the answer to the previous\n        question.\n    \\end{framed}\n  \\item Describe possible (non trivial) heuristic(s) to reach a goal state (with reference\nif any). 
Is(are) your heuristic(s) admissible and/or consistent?\n    \\begin{framed}\n    A first interesting heuristic would be to give a fixed penalty for each\n    box that isn't on a target, allowing our algorithm to investigate\n    potential solutions first. But this heuristic, which is neither consistent\n    nor always admissible (depending on the penalty), has quite a few\n    shortcomings: it might take a long time before the algorithm finds a\n    state worth investigating, and if a box has to reach a target by\n    passing over another target, it will explore all the children of this\n    state before considering other solutions. \\newline\n\n    A second interesting heuristic, which solves the shortcomings of the\n    previous one, would be the sum of the manhattan distances from each\n    box to the closest target (a short sketch of this heuristic is given at\n    the end of this report). Although this heuristic is not always\n    consistent, as two boxes may both measure their distance to the\n    same target, it is always admissible as it won't ever overestimate the\n    cost to reach a solution. As this heuristic is also fast to compute, it\n    is extremely interesting to implement it or a close derivative in our\n    solution.\n    \\end{framed}\n  \\item Implement this problem. Extend the \\textit{Problem} class and implement the necessary\nmethods and other class(es) if necessary. Your file must be named \\textit{sokoban.py}.\nYour program must print to the standard output a solution to the sokoban instance\ngiven in argument satisfying the described format. You will receive the name\nof the instance and you have to read from the two files .init and .goal, for\ninstance, given instance instance1 you will read from files instance1.init\nand instance1.goal.\n    \\begin{framed}\n        \\includegraphics[width=0.8\\linewidth]{test_passed.png}\n    \\end{framed}\n  \\item Experiment, compare and analyze informed (\\textit{astar\\_graph\\_search}) and\nuninformed (\\textit{breadth\\_first\\_graph\\_search}) graph search of aima-python3 on the 15\ninstances of sokoban provided. Report in a table the time, the number of\nexplored nodes and the number of steps to reach the solution. 
Is the\nnumber of\nexplored nodes always smaller with \\textit{astar\\_graph\\_search}? Why?\nWhen no solution can be found by a strategy in a reasonable time (say 5 min),\nexplain the reason (time-out and/or swap of the memory).\n    \\begin{framed}\n    \t\\begin{tabular}{l|l|l|l}\n    \t\t & metric & A* & Breadth-first\\\\\n            \\hline\n    \t\tsokoInst01 & time & 9ms & 180ms \\\\\n    \t\t & nodes & 71 & 2790 \\\\\n    \t\t & steps & 15 & 15 \\\\\n    \t\tsokoInst02 & time & 1s66 & 1s94 \\\\\n    \t\t & nodes & 20947 & 30368 \\\\\n    \t\t & steps & 65 & 65 \\\\\n    \t\tsokoInst07 & time & 450ms & 710ms \\\\\n    \t\t & nodes & 5311 & 9394 \\\\\n    \t\t & steps & 98 & 98 \\\\\n    \t\tsokoInst08 & time & 600ms & 940ms \\\\\n    \t\t & nodes & 6679 & 13628 \\\\\n    \t\t & steps & 108 & 90 \\\\\n    \t\tsokoInst15 & time & 15s & 68s27 \\\\\n    \t\t & nodes & 146013 & 802991 \\\\\n    \t\t & steps & 68 & 56 \\\\\n    \t\\end{tabular}\n\n        The number of nodes explored with an A* graph search is always smaller\n        than or equal to the number explored by BFS: smaller if the heuristic is\n        informative, equal if the heuristic is uninformative (a constant\n        heuristic reduces A* to a uniform-cost search).\n    \\end{framed}\n  \\item What are the performances of your program when you don\u2019t perform dead state\ndetection?\n    \\begin{framed}\n      \\todo[inline]{Answer to be written}\n    \\end{framed}\n\\end{enumerate}\n", "meta": {"hexsha": "e967e2388683c2a1c329f21d880cf8430258c16c", "size": 6551, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment2/LaTeX/rapport.tex", "max_stars_repo_name": "fthuin/artificial-intelligence", "max_stars_repo_head_hexsha": "2823e476dec8d704e6a92265bb9ae35f23cf7708", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment2/LaTeX/rapport.tex", "max_issues_repo_name": "fthuin/artificial-intelligence", "max_issues_repo_head_hexsha": "2823e476dec8d704e6a92265bb9ae35f23cf7708", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment2/LaTeX/rapport.tex", "max_forks_repo_name": "fthuin/artificial-intelligence", "max_forks_repo_head_hexsha": "2823e476dec8d704e6a92265bb9ae35f23cf7708", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-11-14T14:31:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-12T21:52:29.000Z", "avg_line_length": 46.7928571429, "max_line_length": 108, "alphanum_fraction": 0.7058464357, "num_tokens": 1646, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7772998663336157, "lm_q1q2_score": 0.5753374326764966}}
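\nA short sketch of the manhattan-sum heuristic mentioned in question 3 (our own illustration in plain Python, with hypothetical tuple-based positions rather than the aima-python3 state classes):\n\\begin{verbatim}\ndef manhattan(a, b):\n    return abs(a[0] - b[0]) + abs(a[1] - b[1])\n\ndef heuristic(boxes, targets):\n    # Sum over boxes of the distance to the nearest target.\n    # Admissible: every box still needs at least that many pushes.\n    return sum(min(manhattan(box, t) for t in targets)\n               for box in boxes)\n\nprint(heuristic(boxes=[(1, 1), (4, 2)],\n                targets=[(1, 3), (5, 5)]))  # 2 + 4 = 6\n\\end{verbatim}\n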
{"text": "\\section*{IGO Preparation 2020}\n\n\\prob{}\n{}{}{\n    Let $ABC$ be an arbitrary triangle,$P$ is the intersection point of the altitude from\n    $C$ and the tangent line from $A$ to the circumcircle. The bisector of angle $A$\n    intersects $BC$ at $D$ . $PD$ intersects $AB$ at $K$, if $H$ is the orthocenter then\n    prove : $HK\\perp AD$\n}\n\n\\prob{}\n{}{}{\n    We have $4$ circles in plane such that any two of them are tangent to each other. we\n    connect the tangency point of two circles to the tangency point of two other circles.\n    Prove that these three lines are concurrent.\n}\n\n\\prob{}\n{}{}{\n    In an acute triangle $ABC$, points $D,E,F$ are the feet of the altitudes from $A,B,C$,\n    respectively. A line through $D$ parallel to $EF$ meets $AC$ at $Q$ and $AB$ at $R$.\n    Lines $BC$ and $EF$ intersect at $P$. Prove that the circumcircle of triangle $PQR$\n    passes through the midpoint of $BC$.\n}\n\n\n\\prob{}\n{}{}{\n    Given $\\triangle ABC$ inscribed in $(O)$ an let $I$ and $I_a$ be it's incenter and\n    $A$-excenter, respectively.\n\n    Tangent lines to $(O)$ at $C,B$ intersect the angle bisector of $A$ at $M,N$, \n    respectively. Second tangent lines through $M,N$ intersect $(O)$ at $X,Y$.\n\n    Prove that $XYII_a$ is cyclic.\n}\n\n\\prob{}\n{}{}{\n    Given triangle $ABC$, $D$ is the foot of the external angle bisector of $A$, $I$ its\n    incenter and $I_a$ its $A$-excenter. Perpendicular from $I$ to $DI_a$ intersects the\n    circumcircle of triangle in $A'$. Define $B'$ and $C'$ similarly. Prove that $AA',BB'$\n    and $CC'$ are concurrent.\n}\n", "meta": {"hexsha": "305bde5965ad1db6f4c534528c8a11f6f20e6219", "size": 1561, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PSets/igo_prep.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "PSets/igo_prep.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PSets/igo_prep.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 34.6888888889, "max_line_length": 90, "alphanum_fraction": 0.6508648302, "num_tokens": 469, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5753374301311293}}
{"text": "\n\\subsection{Selection}\n\nHow to restrict variables? best subset. iterate through alol and test.\n\nnot feasible for large numbers of variables. forward selection, backward selection, L1 alternatives.\n\nremoving variable does: increase bias. may reduce variance of prediction\n\n", "meta": {"hexsha": "0444e3a71d76fe0099fd5eb8fa45158d69783acd", "size": 273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/linearMLChoosing/01-01-other.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/linearMLChoosing/01-01-other.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/linearMLChoosing/01-01-other.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.3, "max_line_length": 100, "alphanum_fraction": 0.8095238095, "num_tokens": 53, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7401743620390162, "lm_q1q2_score": 0.5753374288579929}}
{"text": "\\chapter{PCA and Autoencoder}\n\\input{6DL/PCA}\n\n\\chapter{Proper orthogonal decomposition}\n\\input{6DL/POD}", "meta": {"hexsha": "d855b696717bc5f3bec326c16639940cc7d65021", "size": 104, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/PCA-Hong.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/PCA-Hong.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/PCA-Hong.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.8, "max_line_length": 41, "alphanum_fraction": 0.7884615385, "num_tokens": 33, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267694452331, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5750963138466485}}
{"text": "\\documentclass[a4paper]{scrartcl}\n\\pdfinfoomitdate=1\n\\pdftrailerid{}\n\\author{Angewandte Informatik}\n\\title{Grundlagen der Informatik}\n%Language specific formating options\n\\usepackage[utf8]{inputenc} % use utf8 file encoding for TeX sources\n\\usepackage[T1]{fontenc}    % avoid garbled Unicode text in pdf\n\\usepackage[english]{babel} % english grammar so words can go in two lines  \n\\usepackage{graphicx}\n\\graphicspath{ {./images/} }\n\\usepackage{multirow}\n\\usepackage{longtable}\n\\usepackage{amsmath}\n\\usepackage{circuitikz}\n\\date{WS 20/21}\n\\begin{document}\n    \\maketitle\n    \\newpage\n    \\tableofcontents\n    \\newpage\n    \\section{Intro}\n        \\subsection{Representation of numbers characters}\n        Numbers and characters are saved in the memory and need a binary representation. There are different ways how one can represent numbers and characters,\n        depending on the needs the program has. Having a program which needs a counter, only needs positive integers so there is no need for saving decimals.  Also the \n        range is important. Is the program counting to 100 or 100 million. Different datatypes need less bytes to store data, but then the range or precision (Genauigkeit) suffers.\n        \n        \\subsubsection*{Integers}\n        Unsigned (only positive) integers only differ in how many bits they use. Typical sizes are  8-bit (short), 16-bit (half word), 32-bit (word) and 64-bit (double word). \\\\\n        Signed integers need to save the minus symbol somewhere. There are several options to \"save the minus\". One is just saying if the MSB is 1, the number is negative. The problem\n        is that \\(0000\\) and \\(1000\\) are both \\(0\\), but one is a positive and one is a negative \\(0\\) which isn't very effective. \\\\\n        Another implementation is the \\textbf{one-complement}.\n        Here you just invert every bit to get the \"negative version\" of the number. Again the \\(\\pm 0\\) is possible, but the one-complement \n        creates a symmetry with negative and positive numbers and is needed for the two-complement. \\\\\n        The \\textbf{two-complement} takes the result from the one complement and adds \\(+1\\) to it. The symmetry is gone but the \\(\\pm 0\\) is gone (only positive 0) and an extra negative number is won. It can be calculated like this:\n        $$Y_z = -z_{N-1}*2^{N-1} + \\sum_{i=0}^{N-2}z_i * 1^i, z_i \\in \\{0, 1\\}$$\n        One other way to create negative number is by using a \\textbf{bias/offset}. One needs to define the offset first. Now every number in the memory will be read nad the offset will be \n        subtracted from it. An offset of 128 means that the positive numbers will start at \\(1000.0000_b \\)\\footnote{\\(1000.0000_b \\equiv 0\\)}. The offset is used in floating point numbers for the exponent.\n\n        \\subsection*{Decimal numbers}\n        Decimal numbers also have different possible representation. An easy with a fixed point. The number is treated as an integer but at a specific bit, the point is \n        set. The position of the point needs to be defined first. If there are 8-bit to save the number and the point is defined at bit 3, there will be 5 bits for the integer\n        and 3 bits for the mantissa\\footnote{Nachkommastellen}. The problem is, that very big numbers or very small numbers aren't possible. \\\\\n        Floating point numbers fix this by introducing an exponent to the number. The exponent has an offset, so it can be negative. 
\n        \\subsection*{Decimal numbers}\n        Decimal numbers also have different possible representations. An easy one is a fixed point: the number is treated as an integer, but at a specific bit the point is \n        set. The position of the point needs to be defined first. If there are 8 bits to save the number and the point is defined at bit 3, there will be 5 bits for the integer part\n        and 3 bits for the fractional part\\footnote{German: Nachkommastellen}. The problem is that very big or very small numbers aren't possible. \\\\\n        Floating point numbers fix this by introducing an exponent to the number. The exponent has an offset, so it can be negative. A negative exponent makes very small numbers\n        possible, but because the exponent can be positive as well, big numbers are possible too. The formula for calculating a normalized float is:\n        \\begin{equation*}\n            f = (-1)^{\\text{sign}} \\cdot 1.\\text{mantissa} \\cdot 2^{\\text{exponent}}\n        \\end{equation*}  \n        Depending on how many bits the float uses, different values need to be inserted into the formula.\n        \\begin{center}\\begin{tabular}{|c|c|c|c|c|}\n            \\hline\n             & sign & exponent & mantissa & offset  \\\\\n            \\hline\n            32-bit & 1-bit & 8-bit & 23-bit & 127 \\\\\n            \\hline\n            64-bit & 1-bit & 11-bit & 52-bit & 1023 \\\\\n            \\hline\n        \\end{tabular}\\end{center}\n        $$Y = (-1)^s\\cdot \\left(1+\\sum_{i=-1}^{-F}z_i\\cdot 2^i\\right)\\cdot 2^{e-bias}$$\n        Special values occur when the exponent consists only of 1s:\n        $\\pm \\infty$, or NaN if the mantissa contains nonzero bits.\n        \n    \\section{Boolean Algebra}\n        Boolean algebra takes (binary) parameters and returns a binary result. There are different operators in boolean algebra: \n        \\begin{itemize}\n            \\item NOT \\(\\overline{A}\\): takes a single bit and toggles the value (1 -> 0, or 0 -> 1). \n            \\item AND \\(A * B\\) or \\(A \\land B\\): returns 1 if all operands (here just A and B) are set to 1, else returns 0\n            \\item OR \\(A + B\\)  or \\(A \\lor B\\): returns 1 if any operand is set to 1; returns 0 only if all operands are 0\n            \\item XOR \\(A \\otimes B\\): returns 1 if exactly one operand is 1\n            \\item NAND/NOR \\(\\overline{A * B}\\)/\\(\\overline{A + B}\\): the inverses of AND and OR\n        \\end{itemize}\n        With boolean algebra there are some rules: \n        \\begin{itemize}\n            \\item DeMorgan's law: \\(\\overline{x} * \\overline{y} = \\overline{x + y}\\) and \\(\\overline{x} + \\overline{y} = \\overline{x * y}\\)\n            \\item Absorption: \\(x * (\\overline{x} + y) = x * y\\) \\\\ \\(x + (\\overline{x} * y) = x +y\\) \\\\\n                \\(x * (x + y) = x\\) \\\\ \\(x + (x * y) = x\\)\n            \\item Neighborhood:  \\((x * y) + (\\overline{x} * y) = y\\) and \\((x + y) * (\\overline{x} + y) = y\\)\n        \\end{itemize}\n\n        \\begin{center}\n            \n            \\begin{circuitikz}\n                \\draw\n                (0,0) node[not port] () {}\n                (5,0) node[and port] () {}\n                (10,0) node[or port] () {};\n            \\end{circuitikz}\n        \\end{center}\n        
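        Since these identities are easy to get wrong, here is a tiny Python check (our own sketch, not part of the course) that verifies De Morgan and the neighborhood rules over all input combinations:\n\\begin{verbatim}\nfrom itertools import product\n\nfor x, y in product((0, 1), repeat=2):\n    assert (1 - x) & (1 - y) == 1 - (x | y)  # !x * !y == !(x + y)\n    assert (1 - x) | (1 - y) == 1 - (x & y)  # !x + !y == !(x * y)\n    assert (x & y) | ((1 - x) & y) == y      # neighborhood\n    assert (x | y) & ((1 - x) | y) == y      # neighborhood\nprint('all identities hold')\n\\end{verbatim}\n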
\n    \\section{Instruction Set Architecture (ISA)}\n        The ISA contains definitions of how a processor can be programmed. It defines...\n        \\begin{itemize}\n            \\item ..the description of instructions (semantics etc)\n            \\item ..how the data behaves (how and where the data will be stored and processed)\n            \\item ..operation modes (user mode, supervisor mode etc)\n            \\item ..and the handling of traps, errors and interrupts \n        \\end{itemize}\n    %\n    %   Definition of numbers are above, may need to go in to more detail though TODO\n    %\n        \\subsection{Addresses}\n            Addresses are used to store/load data and can be the target of an (un)conditional jump. %addressing mode TODO\n            Storage is typically divided into 3 categories: \n            \\begin{itemize}\n                \\item register storage space: fast, but small; often only a limited number of registers is available  \n                \\item data storage space: bigger but slower\n                \\item instruction storage space: stores the instructions of the ISA\n            \\end{itemize}\n\n            % May be wrong\n            \\subsection*{Addressing Modes}\n            \\begin{itemize}\n                \\item immediate addressing: the instruction receives the data directly (adding a constant to the accumulator)\n                \\item direct addressing: the address for the instruction is hard coded\n                \\item register direct addressing: the instruction addresses the register directly (the address is constant) \n                \\item indirect addressing: first the address is loaded from a specific register, then that memory address is used for processing (the register only stores the address instead of the value)\n                \\item indexed indirect addressing: two registers are used to compute the address. One contains the base address and one is a counter; adding both together yields the actual address (arrays)\n                \\item program counter relative addressing: like indexed indirect addressing, but the program counter serves as the base address\n                %TODO pre/post modifying indirect addressing TODO 134 has some information which may be useful\n            \\end{itemize}\n\n        \\subsection{Instructions}\n            Processors work according to the control flow principle. The basic idea is that an operation takes operands and generates a result. In order for a processor to run \n            algorithms, the processor needs to be able to process different kinds of operations:\n            \\begin{itemize}\n                \\item algorithmic operations: add, subtract, ...\n                \\item comparisons: if (greater, lower, equal, ...)\n                \\item logic operations: boolean algebra\n                \\item shift operations: rotate the byte left/right (multiply/divide by 2)\n                \\item control the control-flow: jumps\n            \\end{itemize}\n            Instructions can be classified into different types by looking at how many addresses are used. Monadic operations only use one address (NOT for example), dyadic \n            operations use 2 addresses: ADD A,B for example adds A and B together and saves the result in A. Some operations use 0 addresses by addressing implicitly, for example the \n            registers or program counter. \n        \\subsection{States}\n            A processor needs to be able to handle exceptions. Exceptions are differentiated into two groups:\n            \\begin{itemize}\n                \\item traps: synchronous events; they occur when something happens in the program which shouldn't happen (division by zero)\n                \\item interrupts: asynchronous events; they occur when something external to the program needs to be handled (button press, timer)\n            \\end{itemize}  \n            Operation modes separate sensitive and non-sensitive areas of a computer. A program for viewing pictures doesn't need full access to the whole\n            computer and its resources. 
The most basic case is implementing a user mode which has access to its own files and a supervisor mode which can access everything when needed.\n            \n    \\section{ISA-ARM}\n        The ARM architecture uses the stored-program concept: instructions and data are both stored in memory (as numbers). This results in a great deal of flexibility and\n        leads to the stored-program computer.\n        %Table of ARM instructions TODO:maybe\n        \\subsection{Operations}\n            Creating a program typically involves using a programming language instead of an assembly language for convenience reasons. Processors only understand compiled code, and\n            depending on the processor the compiled code will look different. Let's say f = (a + b) - (c + d) is C code we want to compile for ARM. ARM only allows \n            arithmetic operations on registers, so first all values of the variables need to be loaded into registers. ARM also only allows 2 source operands for adding and subtracting at once,\n            so the computation needs to be split into pieces and then stitched together at the end. \n    \n        \\subsection{Operands}\n            Operands come as word (32 bit) and double word (64 bit, the ARMv8 register size). Registers and variables (from programming languages) are different\n            because registers are limited in number. Too many registers would increase the clock cycle time. %lookup clock cycle time TODO%\n            Therefore, if too many variables are created, register values need to be moved into memory (and vice versa). Those operations are called data transfer instructions.\n            %aligned data, data is aligned in memory for example in 64 bit iterations\n            \n        \\subsection{Instructions}\n            ARMv8 uses its own assembly language. It's pretty close to the machine code but still needs to be converted to proper machine code. 'ADD x9, x20, x21'\n            would be translated into '1112 21 0 20 9'. 
It is divided as the following table shows (an R-format instruction): \n            \\begin{center}\n            \\begin{tabular}{|p{5cm}|c|c|c|c|}\n                \\hline\n                opcode & Rm & shamt & Rn & Rd \\\\\n                \\hline\n                11 bits & 5 bits & 6 bits & 5 bits & 5 bits \\\\ \n                \\hline\n            \\end{tabular}\n            \\end{center}\n            \\begin{center}\n                \\begin{tabular}{|p{5cm}|c|c|}\n                    \\hline\n                    Instruction & Format & opcode  \\\\\n                    \\hline\n                    ADD & R & $1112_{10}$ \\\\ \n                    \\hline\n                    SUB & R & $1624_{10}$ \\\\ \n                    \\hline\n                    ADDI & I & $580_{10}$ \\\\ \n                    \\hline\n                    SUBI & I & $836_{10}$ \\\\ \n                    \\hline\n                    LDUR & D & $1986_{10}$ \\\\ \n                    \\hline\n                    STUR & D & $1984_{10}$ \\\\ \n                    \\hline\n                \\end{tabular}\n            \\end{center}\n            Example: 1986 240 0 10 9 -> load the value at the address in register 10 plus an offset of 240 bytes (240/8 = 30 doublewords) into register 9\n            \\begin{itemize}\n                \\item opcode: the numeric representation of the instruction\n                \\item Rm: second source register\n                \\item shamt: shift amount\n                \\item Rn: first source register\n                \\item Rd: destination register\n            \\end{itemize}\n        \n\n            \\begin{center}\n                \\begin{tabular}{|c|c|c|c|c|}\n                    \\hline\n                    \\multicolumn{5}{|c|}{D-Format} \\\\\n                    \\hline\n                    opcode & address & op2 & Rn & Rt \\\\\n                    \\hline\n                    11 bits & 9 bits & 2 bits & 5 bits & 5 bits \\\\\n                    \\hline \\hline\n                    \\multicolumn{5}{|c|}{I-Format} \\\\\n                    \\hline\n                    opcode & \\multicolumn{2}{|c|}{immediate}  & Rn & Rd \\\\\n                    \\hline\n                    10 bits & \\multicolumn{2}{|c|}{12 bits} & 5 bits & 5 bits \\\\\n                    \\hline\n                \\end{tabular}\n            \\end{center}\n            R-Format is often used for arithmetic instructions on registers or for shifting a register (and saving the result to another),\n            while I-Format is often used for immediate instructions. D-Format is used for loading and storing values from/to a register to/from memory.\n
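            To make the field layout concrete, here is a small Python sketch (our own, not from the course) that packs an R-format instruction into its 32-bit word and reproduces the 'ADD X9, X20, X21' example:\n\\begin{verbatim}\ndef encode_r_format(opcode, rm, shamt, rn, rd):\n    # opcode[31:21] rm[20:16] shamt[15:10] rn[9:5] rd[4:0]\n    return (opcode << 21) | (rm << 16) | (shamt << 10) | (rn << 5) | rd\n\n# ADD X9, X20, X21  ->  fields 1112, 21, 0, 20, 9\nword = encode_r_format(1112, 21, 0, 20, 9)\nprint(format(word, '032b'))\n\\end{verbatim}\n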
\n            \\begin{center}\n                \\begin{tabular}{|l|c|c|}\n                    \\hline\n                    Instruction & ARM code & description \\\\\n                    \\hline\n                    LSL & LSL X11, X19, \\#4 & shifts X19 4 bits left and stores the result in X11 \\\\\n                    \\hline\n                    AND & AND X9, X10, X11 & X9 = X10 * X11 (binary AND) \\\\\n                    \\hline\n                    ORR & ORR X9, X10, X11 & X9 = X10 + X11 (binary OR) \\\\\n                    \\hline \n                    EOR & EOR X9, X10, X11 & X9 = X10 \\(\\otimes \\) X11 (binary EOR/XOR) \\\\\n                    \\hline\n                    NOT & EOR X9, X10, X11(=1111...111) & NOT isn't implemented, so EOR with all ones is used \\\\\n                    \\hline \n                \\end{tabular}\n            \\end{center}\n            ANDI, ORRI, EORI are the immediate variations of the above instructions.\n\n            \\subsubsection*{Branches}\n            \\begin{itemize}\n                \\item if: uses 'CBZ Register, label' (jump to 'label' if Register is zero) and CBNZ (jump if not zero)\n                \\item loop: uses a decreasing counter and CBNZ, jumping back to the beginning of the loop as long as the counter isn't zero\n            \\end{itemize}\n            There are more conditions like less, less or equal, etc.: \n            % \\begin{center}\n            %     \\begin{tabular}{|c|c|c|c|c|}\n            %         \\hline\n            %         & \\multicolumn{2}{|c|}{Signed numbers} & \\multicolumn{2}{|c|}{Unsigned numbers} \\\\\n            %         \\hline \\hline\n            %         Comap. & Instr. & CC test & Instr. &  CC test \\\\\n            %         \\hline\n            %         \\(=\\) & B.EQ & Z = 1 & B.EQ & Z = 1 \\\\\n            %         \\hline\n            %         \\(\\neq\\) & B.NE & Z = 0 & B.NE & Z = 0 \\\\\n            %         \\hline\n            %         \\(<\\) & B.LT & N = V & B.LO & C = 1 \\\\\n            %         \\hline\n            %         \\(\\le\\) & B.LE & ~ (Z = 0)&(N = V) & B.LS & ~(Z = 0)(C = 1) \\\\\n            %         \\hline\n            %         \\(>\\) & B.GT & (Z = 0)&(N = V) & B.HI & (Z = 0)&(C = 1) \\\\\n            %         \\hline\n            %         \\(\\ge\\) & B.GE & N = V & B.HS & C = 1 \\\\\n            %         \\hline\n            %     \\end{tabular}\n            % \\end{center}\\footnote{Copied from slides, probably need to rework}\n            To check whether an index went out of bounds, signed numbers can be treated as unsigned numbers: a negative index (MSB = 1) then appears as a very large unsigned value, so out of bounds can be identified with a single unsigned comparison. \n            %look this up again TODO\n\n            \\subsection{Procedures}\n                Procedures are subroutines of a program and are good for implementing abstraction in the program. 
For a procedure to work, the hardware needs to be able to perform the following steps:\n                \\begin{enumerate}\n                    \\item save the parameters in a place where the procedure can access them (X0 - X7)\n                    \\item hand control to the procedure\n                    \\item acquire storage resources for the procedure\n                    \\item do the task\n                    \\item put the result in a place where the main program can access it (X0 - X7)\n                    \\item return control to the previous procedure (return address saved in LR(X30) ) \n                \\end{enumerate}\n                ARMv8 supports the branch-and-link instruction (BL). It branches to the procedure address and writes the return address into X30. This is needed because a procedure\n                can be called from different parts of the program, so the return address can't be hardcoded.%not 100% sure\n                The caller calculates the return address by adding 4 to the program counter: the current program counter points to the branch itself, so it needs to go one step further.\n                The registers should hold the same values after the branch back, so registers are saved onto a stack before the program branches off. This is called spilling. \n                Here the stack pointer (SP) is needed.\n                It's a register which stores the address of the last spilled element. The operations push and pop add or retrieve elements to/from the stack. The stack grows from high to low addresses, so if an\n                element is pushed onto the stack, the value in the stack pointer needs to be decreased, as the sketch below illustrates. \\\\\n                X9 - X17 are registers which aren't preserved by a procedure call; X19 - X28 will be restored if necessary.\\\\\n                C classifies variables into automatic and static; static variables are those declared outside a procedure. In ARMv8 \n                a so-called global pointer points to the static area. A lot of ARMv8 compilers reserve X27 as the GP (global pointer).\\\\\n                The stack can also be used to store variables which don't have space in the registers. This happens in a segment of the stack called the procedure frame or activation record.  \n                % STACK POINTER FRAME POINTER TODO
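\n                A small Python sketch (our own toy model, with a made-up start address) of this downward-growing stack:\n\\begin{verbatim}\nmemory = {}   # toy byte-addressable memory\nsp = 0x8000   # made-up initial stack pointer\n\ndef push(value):\n    # the stack grows downwards: decrease SP first, then store\n    global sp\n    sp -= 8   # one 64-bit register = 8 bytes\n    memory[sp] = value\n\ndef pop():\n    # read the last spilled element, then release its slot\n    global sp\n    value = memory[sp]\n    sp += 8\n    return value\n\npush(42)         # spill a register before the call ...\nprint(pop())     # ... and restore it afterwards -> 42\n\\end{verbatim}\n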
\n        \\subsection{Addresses}\n            Basically nothing new compared to the addressing section above.\n        \\subsection{Program}\n            A programming language like C is compiled into assembly code. Assembly code is a symbolic language which will be translated into machine code. It uses the \n            symbol table, which matches the names of labels to the corresponding addresses in memory. It creates an object file which typically consists of 6 pieces:\n            \\begin{enumerate}\n                \\item file header, which describes size and position of the other pieces in the object file\n                \\item text segment, which contains the machine language code \n                \\item static data segment (UNIX allows static data, which is allocated throughout the program's lifetime, and dynamic data, which can grow and shrink as needed)\n                \\item relocation information, which identifies instructions and data that rely on absolute addresses (so they can be adjusted accordingly)\n                \\item symbol table, which contains undefined labels such as external references\n                \\item debugging information, which describes how the modules were compiled \n            \\end{enumerate}\n            The linker or link editor links the independently compiled modules and resolves the undefined labels to create one executable file. The executable is loaded \n            by the loader (at least in UNIX). It reads the file header to determine the size of the text and data segments, then creates an address space large enough to fit everything and\n            copies the data into memory. If there are parameters they are also copied into memory. The registers of the processor are initialized and the stack pointer is set to the\n            first free location. At last it branches to a start-up routine which initializes the argument registers with the parameters and then calls the main routine. When the main routine is done,\n            the program terminates with an 'exit' system call.\n    \\section{Peripherals}\n        A computer system needs to be able to support different kinds of peripherals. \n            \\subsection{Bus System}\n            Bus systems in computers exist so that different components are able to talk to each other. Typically the critical components (CPU, RAM, ...) have \n            their own bus system (AHB or ASB bus system) and peripherals or non-critical components (keyboard, interrupt timer, ...) share another bus system (APB bus system).\n            They are connected via a so-called 'bridge', which connects (bridges) both bus systems when needed (e.g. if keyboard input needs to be processed by the CPU). \n            \\subsection{Interrupts, Exceptions, Traps}\n            Sometimes the flow of a program needs to be interrupted, for example when a button is pressed. A program could check in a while loop whether a button was pressed and react \n            accordingly, but this wastes resources, and as long as the while loop is active the program can't do anything else. Because a program is deterministic (if everything stays the\n            same, the program will always behave in the same way), it can be interrupted. When the interrupt is done (the interrupt function has finished), the pre-interrupt state needs to be \n            restored so the program can work as it did before. \\\\\n            What counts as an interrupt and what as an exception is defined by the processor manufacturer. Interrupts and exceptions can occur internally or externally, though.\n            Internal means that an invalid instruction is requested (user wants to access admin resources, division by zero) and external typically means that \n            the program is influenced from outside of the computer (USB stick is plugged in, power button is pressed). \\\\\n            If an exception occurs, a branch will happen. The address of the branch target can either be fixed (the division-by-zero interrupt has its own constant address, and other\n            exceptions also have constant addresses) or the branch address is saved in an exception table, where the processor first reads the address (depending on the exception) \n            and then jumps to the given address.\n            %vector and register based exception handlers\n
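            The table-based variant is essentially an indexed lookup. A small Python sketch (our own, with made-up exception numbers and handler addresses):\n\\begin{verbatim}\n# made-up exception numbers mapped to handler addresses\nvector_table = {\n    0: 0x1000,   # reset\n    2: 0x1040,   # division by zero\n    5: 0x1080,   # invalid instruction\n}\n\ndef dispatch(exception_number, pc):\n    # save the return address, look up the handler, branch to it\n    handler = vector_table[exception_number]\n    print(f'saving PC {pc:#x}, branching to handler {handler:#x}')\n    return handler\n\ndispatch(2, 0x4004)   # a division by zero at address 0x4004\n\\end{verbatim}\n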
            Context switches (when registers need to be saved so another (part of a) program can be executed) happen on: \n            \\begin{enumerate}\n                \\item process changes\n                \\item exceptions\n                \\item sub-program calls\n            \\end{enumerate}   \n\n    \\section{Bus Systems}\n        There are many different kinds of bus systems and they can be classified in multiple categories:\n        \\begin{itemize}\n            \\item on/off chip busses\n            \\item parallel/serial busses\n            \\item synchronous/asynchronous busses \n            \\item automotive busses\n            \\item (and more)\n        \\end{itemize}\n        In general, busses consist of a couple of wires and are designed to allow computer components to communicate with each other. Parallel busses couple at least two \n        wires to transfer information in (nearly) the same direction (example: a CPU uses 5 wires to write to a disc). There are often multiple data sources (senders)\n        and data sinks (receivers) on a bus (a sink can also be a source and vice versa). It also needs to be defined how data travels from source to \n        sink, so there must be a well-defined timing behavior.\n        \\subsection{General Definitions}\n            See slide in moodle, too compact to summarize here.\n        \\subsection{UART, RS232}\n            Before the data can be sent, sender and receiver need to agree on some parameters of the transfer:\n            \\begin{itemize}\n                \\item full or half duplex\n                \\item number of bits per character\n                \\item baud rate (speed) \n                \\item use parity or not\n                \\item if parity is used, how many bits\n                \\item number of stop bits (at least the number the receiver needs)\n                \\item mark and space symbols %look this up\n            \\end{itemize}\n            8N1 was a common implementation: 8 data bits, one stop bit and no parity bit (+ 1 start bit)%I think\n            . Since each character then takes ten bit times, one just needs to \n            divide the signal rate by ten to get how many characters are sent per second.  
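\n            As a sanity check, the throughput calculation as a tiny Python sketch (ours):\n\\begin{verbatim}\nbaud_rate = 9600                   # example signal rate in bits per second\nbits_per_frame = 1 + 8 + 0 + 1     # start + data + parity + stop (8N1)\nprint(baud_rate / bits_per_frame)  # -> 960.0 characters per second\n\\end{verbatim}\n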
\n        \\subsection{I\\(^2\\)C}     \n            I\\(^2\\)C (also called 'I two C') is a multi-master bus and is commonly used for low speed peripherals. Multi-master means that any component on the bus can be a\n            master and communicate with the other components on the bus (which are called slaves). Masters can also switch as soon as the current master is finished with its\n            operations. SMBus is a subset of I\\(^2\\)C and defines stricter protocols and electrical conventions to promote robustness and interoperability. A master is \n            the component which issues the clock for the timing, while the slave receives the clock. \\\\\n            A master is initially in master transmit mode. It sends a start bit, followed by the address of the slave it wants to communicate with, followed by one bit which \n            indicates whether the master wants to read (1) or write (0). If the slave exists it sends an ACK (acknowledge) bit, and as soon as the master receives it, the master changes\n            its mode to transmit/receive (depending on what the master wants to do) and the slave changes to its complementary mode. \\\\\n            I\\(^2\\)C defines three basic message types, each of which begins with a START and ends with a STOP:\n            \\begin{enumerate}\n                \\item single message where the master writes to a slave\n                \\item single message where the master reads from a slave\n                \\item 'combined' message where the master reads or writes at least 2 times to one or more slaves\n            \\end{enumerate}\n\\end{document}", "meta": {"hexsha": "7affaa27341d74fe9ecca87d3bd2964c3561b667", "size": 27310, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2. Semester/rechnertechnologie/rt.tex", "max_stars_repo_name": "spontanurlaub/skripte", "max_stars_repo_head_hexsha": "4fdc68d3416a3c26b29bb7b642f2d380036aca4d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-11T13:42:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-11T13:42:02.000Z", "max_issues_repo_path": "2. Semester/rechnertechnologie/rt.tex", "max_issues_repo_name": "spontanurlaub/skripte", "max_issues_repo_head_hexsha": "4fdc68d3416a3c26b29bb7b642f2d380036aca4d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-28T17:25:39.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-28T17:25:39.000Z", "max_forks_repo_path": "2. Semester/rechnertechnologie/rt.tex", "max_forks_repo_name": "spontanurlaub/skripte", "max_forks_repo_head_hexsha": "4fdc68d3416a3c26b29bb7b642f2d380036aca4d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-07-28T09:02:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-28T15:54:46.000Z", "avg_line_length": 72.2486772487, "max_line_length": 233, "alphanum_fraction": 0.6268399854, "num_tokens": 6191, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5749781452586539}}
{"text": "% Created 2018-11-20 Tue 17:34\n% Intended LaTeX compiler: pdflatex\n\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\author{dustin}\n\\date{\\today}\n\\title{}\n\\hypersetup{\n pdfauthor={dustin},\n pdftitle={},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 25.3.2 (Org mode 9.1.7)}, \n pdflang={English}}\n\\begin{document}\n\n\\section*{Budgeter}\nThis program is designed to provide the user with an optimized budget for the\nitems given. The user should provide a list of items in the form\n\\begin{center}\n\\begin{tabular}{ccccc}\nItem Name & Importance & Min cost & Max cost & Needed\\\\\n\"String\" & 1-10 & >0 & > Min cost & True/False\\\\\n\\end{tabular}\n\\end{center}\nAs well as a the number of year until retirement and the total saved and or\nowed.\nThe algorithm will find the budget allocation that will maximize total value\n\\begin{align*}\n  V_t = \\sum_i V_i\n\\end{align*}\nwhere each value \\(V\\) is determined by the function.\n% \\begin{align*}\n%   V = -\\frac{I}{10} \\left( \\frac{1}{k(x-m)+0.1} + 10 \\right)\n% \\end{align*}\n\\begin{align*}\n  V &= -I((k(x-max))^2-1) \\\\\n  V &= -I(k(x-max))^2-I \\\\\n  V &= -Ik^2(x-max)^2-I \\\\\n  V &= -Ik^2(x^2-2(max)x+(max)^2)-I \\\\\n  V &= -Ik^2x^2+2Ik^2(max)x-Ik^2(max)^2-I \\\\\n\\end{align*}\nWhere x is the price, I is the importance of the item given by the user,\n\\(k=\\frac{1}{max-min}\\), and \\(m\\) is the maximum price. The derivative in\nregard to price is therefore\n\\begin{align*}\n  \\frac{dV}{dx} &= -2Ik^2x^2 + 2Ik^2(max)\n\\end{align*}\nThe restriction would be\n\\begin{align*}\n  Income &= \\sum_i x_i\n\\end{align*}\nThe solution of which can be found using the method of langrange multipliers\n\\begin{align*}\n  \\mathcal{L} &= f(\\vec{x}) - \\lambda(g(\\vec{x} - b)\n\\end{align*}\nAssuming I only have two costs\n\\begin{align*}\n  Income &= x_1 + x_2 = g(x_1,x_2) = b\n\\end{align*}\nand\n\\begin{align*}\n  V_t &= V_1 + V_2\n\\end{align*}\nAt the critical points\n\\begin{align*}\n  \\nabla \\mathcal{L} &= 0\n\\end{align*}\nWhich would lead to a set of linear equations\n\\begin{align*}\n  -2Ik_1^2x_1 + 2Ik_1^2(max_1) &= \\lambda \\\\ \n  -2Ik_2^2x_2 + 2Ik_2^2(min_2) &= \\lambda \\\\ \n  x_1 + x_2 &= X_{total}\n\\end{align*}\nwhich can be simplified to\n\\begin{align*}\n  -2I_1k_1^2x_1 - \\lambda &=  -2I_1k_1^2(max_1)\\\\ \n  -2I_2k_2^2x_2 - \\lambda &=  -2I_2k_2^2(max_2) \\\\ \n  x_1 + x_2 &= X_{total}\n\\end{align*}\nWhich corresponds to the linear equation\n\\begin{equation}\n\\begin{bmatrix}\n  -2I_1k_1^2 & 0 & -1\\\\\n  0 & -2I_2k_2^2 & -1\\\\\n  1 & 1 & 0\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n  x_1 \\\\ x_2 \\\\ \\lambda\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n  -2I_1k_1^2(max_1) \\\\ -2I_2k_2^2(max_2) \\\\ X_{total}\n\\end{bmatrix}\n\\end{equation}\nThe solution for this equation is easily found and which can easily be\ngeneralized.\n\\\\\nNote that an item may be such a bad deal, that the algorithm gives its value a negative\nnumber. Of course you can't have a negative number so the item(s) will be\ntemporarily removed and the algorithm ran again.  
\\newpage\n\\subsection*{Test Case Without Savings}\nLet's assume we have three items\n\\begin{itemize}\n\\item ``efficient'' 10 0 1000 1\n\\item ``average'' 5 0 3000 1\n\\item ``inefficient'' 1 0 5000 1\n\\end{itemize}\nWe have a total income of 4000.\nLet's calculate our $k$'s\n\\begin{align*}\n  k_1 &= 0.001 \\\\\n  k_2 &= 3.33e-4 \\\\\n  k_3 &= 2e-4\n\\end{align*}\nOur equation should then look like\n\\begin{equation}\n\\begin{bmatrix}\n  -2(10)(0.001)^2 & 0 & 0 & -1\\\\\n  0 & -2(5)(3.33e-4)^2 & 0 & -1\\\\\n  0 & 0 & -2(1)(2e-4)^2 & -1\\\\\n  1 & 1 & 1 & 0\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n  x_1 \\\\ x_2 \\\\ x_3 \\\\ \\lambda\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n  -2(10)(0.001)^2(1000) \\\\ -2(5)(3.33e-4)^2(3000) \\\\ -2(1)(2e-4)^2(5000) \\\\4000\n\\end{bmatrix}\n\\end{equation}\n\n\\begin{equation}\n\\begin{bmatrix}\n  -2e-5 & 0 & 0 & -1\\\\\n  0 & -1.1e-6 & 0 & -1\\\\\n  0 & 0 & -8e-8 & -1\\\\\n  1 & 1 & 1 & 0\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n  x_1 \\\\ x_2 \\\\ x_3 \\\\ \\lambda\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n  -0.02 \\\\ -3.32e-3 \\\\ -4e-4 \\\\ 4000\n\\end{bmatrix}\n\\end{equation}\nAnswer = [989,2804,206], Yay!\n\n\\subsection*{Special treatment for needed}\nIf an object is ``needed'' then the minimum cost of the budget item should be\nsubtracted from the income, and the min and max should be set to\n\\begin{align*}\n  min &= 0 \\\\\n  max &= max-min\n\\end{align*}\nThe $min$s should then be added back to the vector output by calculatebudget.\n\n\\newpage\n\\subsection*{Determining Savings}\nThe primary goal of this calculation is to assure that at retirement you\ncan survive while paying your current costs. This will happen when you are able\nto live off the interest of your investment. For this to work out, the future\nvalue of your savings/debt plus your savings per year needs to equal (costs/year)/APR.\nThe future value can be calculated for each future year using\n\n\\begin{align*}\n  V_1 = V_0(1+r) + S\n\\end{align*}\nwhere $V_1$ is the value of the savings in year 1, $V_0$ is the value of the savings in\nyear 0, $S$ is the savings over the year, and $r$ is the annual interest rate.\n\n\n\\subsection*{Choosing importance}\nImportance should be scored either in a range from 1-10 or from 1-100. A good rule\nof thumb is that the score should be roughly equal to the percentage of your time\nthat the effects of the item \\textbf{linger} throughout your life. Let's do a few\nexamples; I will use a scale from 1-10.\n\\subsubsection*{Food}\nYou may spend anywhere from $30\\,min$ to $2\\,hr$ eating food throughout the day,\nwhich would mean a score of 1 if I didn't include the word\nlinger. Though you don't spend a lot of time eating, the nutritional effects of\nthe food you eat stay with you the entire day, $100\\%$ of your life, while\nthe ``feel good'' effect of a delicious meal lasts a few hours after you\neat. We can divide food into two subsections with their importance values: taste=5, nutrition=10.\nNutrition can be treated as a needed primary price, while taste can be treated\nas a premium. A simpler approach would be to average the two numbers to get\nfood=8, and set a price range appropriate for delicious healthy food.\n\\subsubsection*{Transportation}\nSimilar to food, transportation can be separated into a few categories:\nReliability, Convenience, and Status. Reliability affects my entire life and I\nwould thus score it a 10. 
Convenience features only affect me while I drive, or\nabout an hour a day, so I'll score them a 1. Status, for me, will only matter\non extremely rare occasions, so I'll also give it a 1. Note that I'm married.\nFor those looking for a partner, status may get more weight, since\nthe likelihood that its effects linger increases. Since the importance of\nreliability so dramatically surpasses the other considerations, I'll only\nconsider it during my purchase. Transport:10, min-max based on reliability.\n\\subsubsection*{Phone}\nI'll split the phone into parts: Coverage, Features, and Status.\nI give coverage a score of 5, since it's usually not a big deal if I lose\nsignal while traveling, since I'm usually with others. Features get a 10, since\nI need access to email, text messages, video phone calls, and navigation for\nwork. Status receives a 1 for reasons already discussed. Nearly all phones have\ngood coverage, so my focus will only be on the available features and status.\nStatus is often cheap with phones on a per-month basis compared to your other costs.\n\\subsubsection*{Housing}\nThis has three major parts: Location, size, and status. Location will get a 10,\nsince it affects your entire life: education for your kids, commute to work,\nneighbors, nearby restaurants etc. The size of the house gets a 5, since I\nspend roughly half my time inside my house, while status once again gets a 1.\n\nMy input would thus look something like this:\n\\begin{center}\n\\begin{tabular}{lcccc}\n  After-Tax Monthly Income= 1750 &&&&\\\\\n  Saved/Owed= 5000 &&&&\\\\\n  Years till retirement= 25 &&&&\\\\\n  APR = 0.05 &&&&\\\\\n  Item Name & Importance & Min cost & Max cost & Needed\\\\\n  Food(Nutrition) & 10 & 200 & 500 & 1 \\\\\n  Food(Taste) & 5 & 0 & 200 & 0 \\\\\n  Transportation(Reliability) & 10 & 250 & 400 & 1 \\\\\n  Transportation(Status) & 1 & 0 & 500 & 1 \\\\\n  Phone(Features) & 10 & 90 & 120 & 1 \\\\\n  Phone(Status) & 1 & 0 & 30 & 0 \\\\\n  Housing(Location) & 10 & 400 & 1200 & 1 \\\\\n  Housing(Size) & 5 & 0 & 4000 & 0 \\\\\n  Housing(Status) & 1 & 0 & 2000 & 0 \\\\\n\\end{tabular}\n\\end{center}\n\\end{document}", "meta": {"hexsha": "b9ee5a371ad5bae26c86892e6b8fba71e32f1135", "size": 8642, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/budget.tex", "max_stars_repo_name": "PotentialParadox/budgeter", "max_stars_repo_head_hexsha": "40bea44f5e5ca4b58a661ff1f61e418c5b0e59b3", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documentation/budget.tex", "max_issues_repo_name": "PotentialParadox/budgeter", "max_issues_repo_head_hexsha": "40bea44f5e5ca4b58a661ff1f61e418c5b0e59b3", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documentation/budget.tex", "max_forks_repo_name": "PotentialParadox/budgeter", "max_forks_repo_head_hexsha": "40bea44f5e5ca4b58a661ff1f61e418c5b0e59b3", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.987854251, "max_line_length": 97, "alphanum_fraction": 0.7016894237, "num_tokens": 2897, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5749781377187099}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{tfrgabor}\n\\section*{\\hspace*{-1.6cm} tfrgabor}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nGabor representation of a signal.\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[tfr,dgr,gam] = tfrgabor(x)\n[tfr,dgr,gam] = tfrgabor(x,N)\n[tfr,dgr,gam] = tfrgabor(x,N,Q)\n[tfr,dgr,gam] = tfrgabor(x,N,Q,h)\n[tfr,dgr,gam] = tfrgabor(x,N,Q,h,trace)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\t{\\ty tfrgabor} computes the Gabor representation of signal {\\ty x},\n        for a given synthesis window {\\ty h}, on a rectangular grid of size\n        {\\ty (N,M)} in the time-frequency plane. {\\ty M} and {\\ty N} must\n        be such that {\\ty N1 = M * N / Q} where {\\ty N1=length(x)} and {\\ty\n        Q} is an integer corresponding to the degree of oversampling. The\n        expression of the Gabor representation is the following :\n\\begin{eqnarray*}\nG_x[n,m;h] &=& \\sum_k x[k]\\ h^*[k-n]\\ \\exp{[-j2\\pi m k]} \n\\end{eqnarray*}\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty x}   & signal to be analyzed ({\\ty length(x)=N1})\\\\\n        {\\ty N}   & number of Gabor coefficients in time ({\\ty N1} must be a multiple\n              of {\\ty N})                 & {\\ty divider(N1)}\\\\\n        {\\ty Q}   & degree of oversampling ; must be a divider of {\\ty N} &\n\t\t{\\ty Q=divider(N)}\\\\ \n        {\\ty h}   & synthesis window, which was originally chosen & {\\ty\n\t\t\twindow(odd(N),}\\\\\n\t\t& as a Gaussian window by Gabor. {\\ty Length(h)} should be\n\t\tas closed as possible from {\\ty N}, and must be $\\geq${\\ty N}.\n              {\\ty h} must be of unit energy, and centered & {\\ty\n\t\t\t'gauss')}\\\\  \n        {\\ty trace} & if nonzero, the progression of the algorithm is shown\n                                         & {\\ty 0}\\\\\n     \\hline {\\ty tfr} & square modulus of the Gabor coefficients\\\\\n        {\\ty dgr} & Gabor coefficients (complex values)\\\\ \n        {\\ty gam} & biorthogonal (dual frame) window associated to {\\ty h}\\\\\n\n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\nWhen called without output arguments, {\\ty tfrgabor} runs {\\ty tfrqview}.\\\\\n\\end{minipage}\n\n\\newpage\n\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nIf {\\ty Q=1}, the time-frequency plane (TFP) is critically sampled, so\nthere is no redundancy.\\\\\nIf {\\ty Q>1}, the TFP is oversampled, allowing a greater numerical\nstability of the algorithm. \\\\\n\n\\end{minipage}\n\\vspace*{1cm}\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         sig=fmlin(128); \n         tfrgabor(sig,64,32); \n\\end{verbatim}\n\\vspace*{.5cm}\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nall the {\\ty tfr*} functions.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf References}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] Zibulski, Zeevi \"Oversampling in the Gabor Scheme\" IEEE Trans. on\n\t    Signal Processing, Vol. 41, No. 8, pp. 
2679-87, August 1993.\\\\ \n\n[2] Wexler, Raz \"Discrete Gabor Expansions\" Signal Processing, Vol. 21, No.\n\t    3, pp. 207-221, Nov 1990.\n\n\\end{minipage}\n", "meta": {"hexsha": "0b58415917172946ca165fe47381d25f940f3060", "size": 3396, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/tfrgabor.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/tfrgabor.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/tfrgabor.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 30.8727272727, "max_line_length": 85, "alphanum_fraction": 0.625147232, "num_tokens": 1202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5749781287107606}}
{"text": "%%%%%%%%%%%%%%%%%%%%%definitions%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\input{../../doc/related_pages/header.tex}\r\n\\input{../../doc/related_pages/newcommands.tex}\r\n\\newcommand{\\PP}{H^+}\r\n\\newcommand{\\QQ}{H^-}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%DOCUMENT%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\begin{document}\r\n\r\n\\title{The heat diffusion project}\r\n\\author{ M.~Wiesenberger and M.~Held}\r\n\\maketitle\r\n\r\n\\begin{abstract}\r\n    Test parallel advection and diffusion schemes on the 1d Navier-Stokes equations\r\n    along a parallel magnetic field.\r\n\\end{abstract}\r\n\\tableofcontents\r\n\r\n\\section{Parallel advection schemes}\r\nIn this section we want to focus on the discretization of the Navier-Stokes equation\r\nalong a magnetic field-line:\r\n\\begin{align}\r\n    \\frac{\\partial}{\\partial t} n &= - \\nc\\left( nu\\bhat\\right) \\\\\r\n    \\frac{\\partial}{\\partial t} u &= - \\npar\\left(\\frac{u^2}{2}\\right) - \\frac{\\tau}{\\mu} \\frac{\\npar n}{n} + \\nu_u \\frac{\\Delta_\\parallel u}{n}\r\n\\end{align}\r\nwith\r\n\\begin{align}\r\n\\Delta_\\parallel u &= \\nabla\\cdot ( \\bhat\\bhat\\cdot\\nabla u)\r\n\\end{align}\r\n$\\bhat = \\bhat(\\vec x)$ is the prescribed magnetic field unit vector,\r\nand $\\nu_u$ is the viscosity coefficient parallel to this field.\r\n\r\nThe first equation is a linear advection type equation while the second one includes\r\na quadratic non-linearity (the Burger term $u^2/2$) which tends to generate\r\nshocks (at least in the dissipationless case). This is because crests of a wave\r\ntravel faster than the valley of waves.\r\nThe continuity equation can be written as\r\n\\begin{align}\r\n    \\frac{\\partial}{\\partial t} n = - \\npar (nu) - nu \\nc\\bhat = -u\\npar n - n \\npar u - nu\\nc \\bhat\r\n\\end{align}\r\nThe first term on the right hand side is the advection term (of density with velocity u). The second term is the wave term, because together with $\\npar n$ in the velocity\r\nequation it couples the two equations to make waves. 
This can be seen by linearizing\r\nthe system around $n= 1 + \\tilde n$, $u = \\tilde u$\r\n\\begin{align}\r\n    \\frac{\\partial^2}{\\partial t^2} \\tilde n = \\frac{\\tau}{\\mu} \\npar^2 \\tilde n + \\frac{\\tau}{\\mu} \\npar \\tilde n \\nc\\bhat\r\n\\end{align}\r\nThis equation describes sound waves with speed $\\sqrt{\\tau/\\mu}$ and a\r\ndispersive term.\r\n\r\nThe above equations have the following conservative formulation:\r\n\\begin{align}\r\n    \\frac{\\partial}{\\partial t} n  + \\nc\\left( nu \\bhat\\right) &= 0 \\\\\r\n    \\frac{\\partial}{\\partial t} (\\mu nu)  + \\nc\\left(\\mu nu^2 \\bhat\\right) &=\r\n    -\\tau \\npar n + \\mu \\nu_u \\Delta_\\parallel u \\\\\r\n    \\frac{\\partial}{\\partial t} \\left(\\tau n \\ln n + \\frac{1}{2}\\mu nu^2\\right)\r\n    + \\nc\\left( \\left( \\tau n ( 1 + \\ln n)  +\\mu \\frac{1}{2} nu^2\\right) u\\bhat \\right) &= \\mu \\nu_u u \\Delta_\\parallel u\r\n\\end{align}\r\nA good scheme:\r\n\\begin{itemize}\r\n    \\item respects the above written conservation equations, in particular when\r\n        integrated over a flux-aligned volume the flux terms must vanish\r\n    \\item works with the FCI discretization developed in previous sections\r\n\\end{itemize}\r\nNote that the latter point excludes discontinuous Galerkin methods, which we have not\r\nyet developed for the FCI discretization.\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{Staggered grids in 1d}\r\nWhen $\\bhat = \\xhat$ the system reduces to the one-dimensional Navier-Stokes equations for which there is a large amount of literature on numerical schemes available\r\n( see for example \\cite{LeVeque} and references therein).\r\n\\begin{align}\r\n    \\\r\n    \\frac{\\partial}{\\partial t} n &= - \\frac{\\partial}{\\partial x}\\left( nu\\right) \\\\\r\n    \\frac{\\partial}{\\partial t} u &= - \\frac{\\partial}{\\partial x}\\left(\\frac{u^2}{2} + \\frac{\\tau}{\\mu} \\ln n\\right) + \\nu_u \\frac{1}{n}\\frac{\\partial^2}{\\partial x^2} u\r\n\\end{align}\r\n\r\nWe here choose a class of schemes known as staggered grids. The main is to discretize\r\nthe variables $n$ and $u$ on two grids that are adjoint to each other.\r\nIf $n$ is discretized on the cell-centers $x_k$ then $u$ is discretized on the cell\r\nboundaries $x_{k+1/2}$. 
We will indicate this by writing $u^\\dagger_k$, that is, $u^\\dagger_k = u_{k+1/2}$.\r\nWe base our discretization on the finite volume scheme presented by\r\n\\cite{Gunawan2015,Herbin2013}:\r\n\\begin{align}\r\n    \\frac{n_k^{n+1} - n_k^n}{\\Delta t} = - \\frac{1}{\\Delta x}(\\hat q^\\dagger_k  - \\hat q^\\dagger_{k-1})\r\n\\end{align}\r\nwhere\r\n\\begin{align}\r\n    \\hat q^\\dagger_k := u^\\dagger_k \\begin{cases}\r\n        n_k     + \\frac{1}{2}\\Lambda( n_{k+1}-n_k    , n_k - n_{k-1})&\\text{ if } u^\\dagger_k \\geq 0 \\\\\r\n        n_{k+1} - \\frac{1}{2}\\Lambda( n_{k+2}-n_{k+1}, n_{k+1} - n_k)&\\text{ if } u^\\dagger_k < 0\r\n    \\end{cases}\r\n\\end{align}\r\nand\r\n\\begin{align}\r\n    \\frac{(n_k^\\dagger u_k^\\dagger)^{n+1}-(n_k^\\dagger u_k^\\dagger)^{n}}{\\Delta\r\n    t} = - \\frac{1}{\\Delta x} \\left[ \\hat f_{k+1} - \\hat f_k +\r\n    \\frac{\\tau}{\\mu} \\left(n_{k+1}^{n+1} - n_k^{n+1}\\right)  \\right]\r\n\\end{align}\r\nwhere $n^\\dagger_k = \\left(n_{k+1}+n_k\\right)/2$ and $u_k =\r\n\\left(u^\\dagger_k + u^\\dagger_{k-1}\\right)/2$ (the local shock speed in Burgers' equation),\r\nand analogously for $q_k$, and\r\n\\begin{align}\r\n    \\hat f_k := q_k \\begin{cases}\r\n        u^\\dagger_{k-1} + \\frac{1}{2}\\Lambda( u^\\dagger_{k}-u^\\dagger_{k-1}, u^\\dagger_{k-1} - u^\\dagger_{k-2})&\\text{ if } q_k \\geq 0 \\\\\r\n        u^\\dagger_k     - \\frac{1}{2}\\Lambda( u^\\dagger_{k+1}-u^\\dagger_k    , u^\\dagger_k - u^\\dagger_{k-1})&\\text{ if } q_k < 0\r\n    \\end{cases}\r\n\\end{align}\r\nwhere we have the flux limiters\r\n\\begin{align}\r\n\\Lambda_{\\text{minmod}}(x,y) &:=\r\n    \\begin{cases}\r\n        \\min(x,y) & \\text{ if } x,y \\geq 0 \\\\\r\n        \\max(x,y) & \\text{ if } x,y \\leq 0 \\\\\r\n        0         & \\text{ else }\r\n    \\end{cases}\r\n    \\\\\r\n\\Lambda_{\\text{vanLeer}}(x,y) &:=\r\n    \\begin{cases}\r\n        0               & \\text{ if } xy \\leq 0 \\\\\r\n        \\frac{2xy}{x+y} & \\text{ else }\r\n    \\end{cases}\r\n\\end{align}\r\n\r\nWe note that this is a semi-implicit scheme (the pressure term in the momentum equation is implicit). The implicit equation can however be solved trivially because\r\nthe density equation decouples and is completely explicit.\r\n\r\nWe implement this scheme and investigate several variations in this notebook\r\n\\url{https://mwiesenberger.github.io/advection/OneDimensional.html}.\r\n\r\nThe authors of the scheme claim that if the pressure term is treated explicitly the\r\nscheme loses its shock-capturing properties. We cannot reproduce this behaviour\r\nin our experiments. Furthermore, we looked into discretizing the velocity equation instead of the momentum equation, because the Feltor equations\r\nprefer that formulation. In this formulation the equations are not shock-capturing,\r\nbut they are very close to the original scheme once viscosity is added to the system.\r\n
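\r\nFor reference, here is a minimal Python sketch (our own illustration, not the Feltor implementation; it assumes a periodic grid) of the two limiters and of the limited upwind density flux $\\hat q^\\dagger_k$:\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef minmod(x, y):\r\n    # zero on sign change, otherwise the slope of smaller magnitude\r\n    return np.where(x * y <= 0, 0.0, np.where(np.abs(x) < np.abs(y), x, y))\r\n\r\ndef van_leer(x, y):\r\n    # harmonic mean of the slopes, zero on sign change (guarded division)\r\n    return np.where(x * y <= 0, 0.0, 2 * x * y / np.where(x + y == 0, 1, x + y))\r\n\r\ndef density_flux(n, u_dagger, limiter=minmod):\r\n    # q_k^dagger with n at cell centers and u_dagger at faces x_{k+1/2}\r\n    dn = np.roll(n, -1) - n                                     # n_{k+1} - n_k\r\n    up = n + 0.5 * limiter(dn, np.roll(dn, 1))                  # u >= 0 branch\r\n    down = np.roll(n, -1) - 0.5 * limiter(np.roll(dn, -1), dn)  # u < 0 branch\r\n    return u_dagger * np.where(u_dagger >= 0, up, down)\r\n\\end{verbatim}\r\n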
\r\nThe best scheme that we found in this formulation is\r\n\\begin{align}\r\n    \\frac{n_k^{n+1} - n_k^n}{\\Delta t} = &- \\frac{1}{\\Delta x}(\\hat q^\\dagger_k  - \\hat q^\\dagger_{k-1}) \\\\\r\n    \\frac{(u_k^\\dagger)^{n+1}-(u_k^\\dagger)^{n}}{\\Delta t} = &- \\frac{1}{\\Delta\r\n    x} \\left( \\hat f_{k+1} -\\hat f_k\\right) - \\frac{1}{\\Delta x}\\left[\r\n        \\frac{\\tau}{\\mu} \\left(n_{k+1} - n_k\\right) \\frac{1}{2}\\left(\\frac{1}{n_{k+1}}\r\n    +\\frac{1}{n_k}\\right)\\right]\r\n    \\nonumber\\\\\r\n    &+ \\frac{\\nu_u}{n^\\dagger_k (\\Delta x)^2}\r\n    \\left(u^\\dagger_{k+1} - 2 u^\\dagger_k + u^\\dagger_{k-1}\\right)\r\n\\end{align}\r\nwith\r\n\\begin{align}\r\n    \\hat q^\\dagger_k &:= u^\\dagger_k \\begin{cases}\r\n        n_k     + \\frac{1}{2}\\Lambda( n_{k+1}-n_k    , n_k - n_{k-1})&\\text{ if } u^\\dagger_k \\geq 0 \\\\\r\n        n_{k+1} - \\frac{1}{2}\\Lambda( n_{k+2}-n_{k+1}, n_{k+1} - n_k)&\\text{ if } u^\\dagger_k < 0\r\n    \\end{cases}\r\n    \\\\\r\n    \\hat f_k &:= \\frac{1}{2}u_k \\begin{cases}\r\n        u^\\dagger_{k-1} + \\frac{1}{2}\\Lambda( u^\\dagger_{k}-u^\\dagger_{k-1}, u^\\dagger_{k-1} - u^\\dagger_{k-2})&\\text{ if } u_k \\geq 0 \\\\\r\n        u^\\dagger_k     - \\frac{1}{2}\\Lambda( u^\\dagger_{k+1}-u^\\dagger_k    , u^\\dagger_k - u^\\dagger_{k-1})&\\text{ if } u_k < 0\r\n    \\end{cases}\r\n\\end{align}\r\n\\begin{tcolorbox}[title=Note]\r\n    We divide by the harmonic mean in the pressure gradient term, which from empirical tests\r\n    yields better convergence than other choices like the arithmetic\r\n    or geometric mean.\r\n\\end{tcolorbox}\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{Elevating the scheme to three dimensions}\r\nOur idea to generalize the scheme to the FCI formulation is to use the\r\nlocally field-aligned coordinate system.\r\nCare must be taken, as we need to integrate the fieldlines twice: once to the\r\nnext plane and once to the adjoint plane halfway between the original planes.\r\n\r\nWe denote the interpolation between the original planes with $I^+$ and $I^-$ and\r\nthe interpolation between the half planes with $\\PP$ and $\\QQ$.\r\nWe will use $\\varphi$ as the fieldline-following coordinate, since the resulting\r\ngrid is then equidistant in the transformed coordinates.\r\n\r\nFirst, we denote quantities transformed into the coordinate system centered on $k$ as\r\n\\begin{align}\r\n    \\bar n_{k-2} = (I^-)^2n_{k-2} \\quad \\bar n_{k-1} = I^- n_{k-1}\r\n    \\quad \\bar n_k = n_k \\quad\r\n    \\bar n_{k+1} = I^+n_{k+1} \\quad \\bar n_{k+2} = (I^+)^2 n_{k+2} \\nonumber\\\\\r\n\\bar u_{k-1/2} = \\QQ u_{k-1}^\\dagger \\quad \\bar u_{k+1/2} = \\PP u_k^\\dagger \\qquad\\qquad\\qquad\\qquad\r\n\\end{align}\r\nWith these definitions we can discretize the continuity equation by\r\n\\begin{align}\r\n    \\frac{n_k^{n+1} - n_k^n}{\\Delta t} = &-\\frac{1}{\\Delta\\varphi \\sqrt{G^k}}\r\n    \\left( \\sqrt{G^{k+1/2}}b^\\varphi_{k+1/2}  \\bar q_{k+1/2}\r\n    -\\sqrt{G^{k-1/2}}b^\\varphi_{k-1/2}  \\bar q_{k-1/2} \\right)\r\n\\end{align}\r\nwhere $\\sqrt{G^{k\\pm 1/2}}$ and $b^\\varphi_{k\\pm 1/2}$ are the volume form and\r\nthe contravariant $\\varphi$ component of $\\bhat$ on the half planes and\r\n\\begin{align}\r\n    \\bar q_{k+1/2} &:= \\bar u_{k+1/2} \\begin{cases}\r\n        \\bar n_k     + \\frac{1}{2}\\Lambda( \\bar n_{k+1}-\\bar n_k    , \\bar n_k - \\bar n_{k-1})&\\text{ if } \\bar u_{k+1/2} \\geq 0 \\\\\r\n        \\bar n_{k+1} - \\frac{1}{2}\\Lambda( \\bar n_{k+2}-\\bar n_{k+1}, \\bar n_{k+1} - \\bar n_k)&\\text{ if } \\bar u_{k+1/2} < 0\r\n    \\end{cases}\r\n\\end{align}\r\n
Next, we denote quantities transformed into the coordinate system centered on the adjoint plane $k$ as\r\n\\begin{align}\r\n    \\bar u^\\dagger_{k-2} = (I^-)^2u^\\dagger_{k-2} \\quad \\bar u^\\dagger_{k-1} = I^- u^\\dagger_{k-1}\r\n    \\quad \\bar u^\\dagger_k = u^\\dagger_k \\quad\r\n    \\bar u^\\dagger_{k+1} = I^+u^\\dagger_{k+1} \\quad \\bar u^\\dagger_{k+2} = (I^+)^2 u^\\dagger_{k+2} \\nonumber\\\\\r\n    \\bar n^\\dagger_{k-1/2} = \\QQ n_{k} \\quad \\bar n^\\dagger_{k+1/2} = \\PP n_{k+1} \\qquad\\qquad\\qquad\\qquad\r\n\\end{align}\r\nSuch that for the velocity equation we have\r\n\\begin{align}\r\n    \\frac{(u_k^\\dagger)^{n+1}-(u_k^\\dagger)^{n}}{\\Delta t} = &- \\frac{b^\\varphi_k}{\\Delta\r\n    \\varphi} \\left( \\bar f^\\dagger_{k+1/2} -\\bar f^\\dagger_{k-1/2}\\right) -\\frac{b^\\varphi_k}{\\Delta\\varphi}\r\n        \\left[\\frac{\\tau}{\\mu} \\left(\\bar n^\\dagger_{k+1/2} - \\bar n^\\dagger_{k-1/2}\\right) \\frac{1}{2}\\left(\\frac{1}{\\bar n^\\dagger_{k+1/2}}\r\n    +\\frac{1}{\\bar n^\\dagger_{k-1/2}}\\right)\\right]\r\n    \\nonumber\\\\\r\n    &+ \\frac{2\\nu_u\r\n        \\left(\\sqrt{G^{k+1/2}}b^\\varphi_{k+1/2}b^\\varphi_{k+1/2} (\\bar u^\\dagger_{k+1} - \\bar u^\\dagger_k)\r\n        - \\sqrt{G^{k-1/2}}b^\\varphi_{k-1/2}b^\\varphi_{k-1/2} (\\bar u^\\dagger_k - \\bar u^\\dagger_{k-1})\\right)\r\n    }{\\sqrt{G^k}(\\bar n^\\dagger_{k+1/2} + \\bar n^\\dagger_{k-1/2}) (\\Delta \\varphi)^2}\r\n\\end{align}\r\nThe flux is discretized according to\r\n\\begin{align}\r\n    \\bar f^\\dagger_{k+1/2} &:= \\frac{1}{4}(\\bar u^\\dagger_{k+1}+\\bar u^\\dagger_k)\r\n    \\begin{cases}\r\n        \\bar u^\\dagger_{k} + \\frac{1}{2}\\Lambda\\left( \\bar u^\\dagger_{k+1} - \\bar u^\\dagger_k, \\bar u^\\dagger_k - \\bar u^\\dagger_{k-1}\\right)&\r\n        \\text{ if } \\frac{1}{2}(\\bar u^\\dagger_{k+1}+\\bar u^\\dagger_k) \\geq 0 \\\\\r\n        \\bar u^\\dagger_{k+1}   - \\frac{1}{2}\\Lambda\\left( \\bar u^\\dagger_{k+2} - \\bar u^\\dagger_{k+1}, \\bar u^\\dagger_{k+1} -\\bar u^\\dagger_k \\right) &\r\n        \\text{ if } \\frac{1}{2}(\\bar u^\\dagger_{k+1}+\\bar u^\\dagger_k) < 0\r\n    \\end{cases}\r\n\\end{align}\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{2nd Elevating the scheme to three dimensions}\r\nA second possibility is to first center the coordinate system on the fluxes on the staggered grid and then,\r\nin a second step, center on the time derivative:\r\n\\begin{align}\r\nn_k^\\dagger =& \\frac{1}{2}\\left( \\QQ n_k + \\PP n_{k+1}\\right) \\\\\r\nu_k =& \\frac{1}{2}\\left( \\QQ u^\\dagger_{k-1} + \\PP u_{k}^\\dagger\\right)\\\\\r\n\\left[\\partial_\\parallel u^\\dagger \\right]_k =& \\left( \\PP u^\\dagger_{k} - \\QQ\r\nu^\\dagger_{k-1}\\right) \\\\\r\n\\left[\\partial_\\parallel n\\right]^\\dagger_k =& \\left( \\PP n_{k+1} - \\QQ\r\nn_k\\right)\r\n\\end{align}\r\nSuch that in total we have\r\n\\begin{align}\r\n    \\frac{n_k^{n+1} - n_k^n}{\\Delta t} = &-\\frac{1}{\\Delta\\varphi \\sqrt{G^k}}\r\n    \\left( \\sqrt{G^{k+1/2}}b^\\varphi_{k+1/2}  \\PP \\hat q_{k}^\\dagger\r\n    -\\sqrt{G^{k-1/2}}b^\\varphi_{k-1/2}  \\QQ \\hat q_{k-1}^\\dagger \\right)\r\n     \\\\\r\n    \\frac{(u_k^\\dagger)^{n+1}-(u_k^\\dagger)^{n}}{\\Delta t} = &-\r\n \\frac{b^\\varphi_k}{\\Delta\\varphi}\\left( \\PP\\hat f_{k+1} - \\QQ\\hat 
f_k\\right)\r\n -\\frac{\\tau}{\\mu}\\frac{b^\\varphi_k}{\\Delta\\varphi} \\frac{\\PP n_{k+1} - \\QQ n_k}{2}\\left( \\frac{1}{\\PP n_{k+1}} + \\frac{1}{\\QQ n_k} \\right)\r\n    \\nonumber\\\\\r\n    &+ \\frac{\\nu_u\r\n        \\left(\\sqrt{G^{k+1/2}}b^\\varphi_{k+1/2}b^\\varphi_{k+1/2} (I^+ u^\\dagger_{k+1} - u^\\dagger_k)\r\n        - \\sqrt{G^{k-1/2}}b^\\varphi_{k-1/2}b^\\varphi_{k-1/2} ( u^\\dagger_k - I^- u^\\dagger_{k-1})\\right)\r\n    }{n^\\dagger_{k} \\sqrt{G^k}(\\Delta \\varphi)^2}\r\n\\end{align}\r\nThe fluxes are discretized according to\r\n\\begin{align}\r\n    \\hat q^\\dagger_k &:= u^\\dagger_k \\begin{cases}\r\n        \\QQ n_k     + \\frac{1}{2}\\Lambda\\left( (\\partial_\\parallel n)^\\dagger_k, I^- (\\partial_\\parallel n)^\\dagger_{k-1}\\right)&\\text{ if } u^\\dagger_k \\geq 0 \\\\\r\n        \\PP n_{k+1} - \\frac{1}{2}\\Lambda\\left( I^+(\\partial_\\parallel n)^\\dagger_{k+1}, (\\partial_\\parallel n)^\\dagger_k\\right) &\\text{ if } u^\\dagger_k < 0\r\n    \\end{cases}\r\n    \\\\\r\n    \\hat f_k &:= \\frac{1}{2}u_k\\begin{cases}\r\n        \\QQ u^\\dagger_{k-1} + \\frac{1}{2}\\Lambda\\left( (\\partial_\\parallel u^\\dagger )_k, I^- (\\partial_\\parallel u^\\dagger)_{k-1}\\right)&\\text{ if } u_k \\geq 0 \\\\\r\n        \\PP u^\\dagger_{k}   - \\frac{1}{2}\\Lambda\\left( I^+(\\partial_\\parallel u^\\dagger)_{k+1}, (\\partial_\\parallel u^\\dagger)_k\\right) &\\text{ if } u_k < 0 \\\\\r\n    \\end{cases}\r\n\\end{align}\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "50115664e0a9d53c61ed7f7e0d1de36bfb64dcd6", "size": 14363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/navier_stokes/navier_stokes.tex", "max_stars_repo_name": "RaulGerru/FELTOR_FINAL", "max_stars_repo_head_hexsha": "dd5af5e61d1607eb3b0415b756c1a6cf56b63a2c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/navier_stokes/navier_stokes.tex", "max_issues_repo_name": "RaulGerru/FELTOR_FINAL", "max_issues_repo_head_hexsha": "dd5af5e61d1607eb3b0415b756c1a6cf56b63a2c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/navier_stokes/navier_stokes.tex", "max_forks_repo_name": "RaulGerru/FELTOR_FINAL", "max_forks_repo_head_hexsha": "dd5af5e61d1607eb3b0415b756c1a6cf56b63a2c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.3940520446, "max_line_length": 172, "alphanum_fraction": 0.6168627724, "num_tokens": 5264, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257126, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5749781211708165}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{a4wide}\n\\usepackage[table]{xcolor}\n\\usepackage[numbers]{natbib}\n\n\\begin{document}\n\\title{Feasibility-prone Differential Dynamic Programming \\\\\nIs DDP a Multiple Shooting Algorithm?}\n\\author{N. Mansard -- LAAS-CNRS}\n\n\\maketitle\n\n%% \\begin{abstract}\n%% \\end{abstract}\n\n\\newcommand{\\xtraj}{\\underline{x}}\n\\newcommand{\\utraj}{\\underline{u}}\n\\newcommand{\\lambdatraj}{\\underline{\\lambda}}\n\\newcommand{\\dxtraj}{\\underline{\\Delta x}}\n\\newcommand{\\dutraj}{\\underline{\\Delta u}}\n\\newcommand{\\dxtrajguess}{\\underline{\\Delta \\bar x}}\n\\newcommand{\\dutrajguess}{\\underline{\\Delta \\bar u}}\n\\newcommand{\\dx}{\\Delta x}\n\\newcommand{\\du}{\\Delta u}\n\\newcommand{\\Treal}{\\mathbb{T}}\n\\newcommand{\\bmat}{\\begin{bmatrix}}\n\\newcommand{\\emat}{\\end{bmatrix}}\n\\newcommand{\\qed}{\\hfill$\\square$}\n\\definecolor{mygray}{gray}{0.9}\n\n\\section{Introduction}\n\n\\subsection{Problem definition}\nWe are interested to find an approximate solution to the following optimal control problem OCP:\n$$\\min_{\\xtraj,\\utraj} \\int_0^\\Treal \\ell(x(t),u(t),t) dt + \\ell_\\Treal(x(\\Treal))$$\n$$s.t. \\quad x(0) = f_0$$\n$$\\quad \\forall t \\in [0,\\Treal], \\quad \\dot{x}(t) = f(x(t),u(t),t))$$\nwhere $\\xtraj: t \\rightarrow x(t)$ is the state trajectory, $\\utraj: t \\rightarrow u(t)$ is the control trajectory, $\\ell$ is the integral --running-- cost, $\\ell_T$ is the terminal cost, $f_0$ is the initial state value, $f$ is the robot dynamics and $T$, the time interval, is fixed.\n\nThe decision variables are $\\xtraj,\\utraj$, both of infinite dimension. \nWe approximate this problem using a discrete version of it, by following the so-called direct --discretize first, solve second -- approach.\n\n\\subsection{Discretize first}\nThe time interval $[0,\\Treal]$ is divided into $T$ sub-intervals (evenly distributed or not).\nIn each sub-interval $t$, the control trajectory $\\utraj_t$ is constrained to be in the span of a given trajectory finite basis, and we represent the trajectory by its coefficient in the function basis (i.e as a vector of finite dimension). We typically write $\\utraj_t$ as a polynomial, and it is often taken in practice constant on the interval.\n\nThe values of $\\xtraj_t$ on the interval $t$ are obtained by integrating the dynamics from the value $x_t$ at the beginning of the sub-interval.\nAs closed-form integrals of $f$ are often not available, $\\xtraj_t$ is approximated by any numerical integration scheme, e.g. Runge-Kutta-4. We then represent $\\xtraj$ by its values at each interval ends, i.e. as a list of $T+1$ elements.\n\nIn summary, the control variable $\\utraj$ is represented by $T$ basis coefficients of the chosen trajectory basis --which often boils to $T$ constant controls-- and $\\xtraj$ is represented by $T+1$ states.\nIn the following, we will often abusively use the same symbols for the true object (e.g. the trajectory) and its representation (e.g. the coefficients of its discretization), in the aim of keeping the notations simple. With this choice, the discretized problem can be written as:\n$$\\min_{\\xtraj,\\utraj} \\sum_{t=0}^{T-1} \\ell(x_t,u_t) + \\ell_T(x_T)$$\n$$s.t. 
\n\\subsection{Solve second}\nThis new problem is now a static optimization problem under constraints, typically nonlinear and non-convex (NLP). We will solve it with a sequential-quadratic-programming (SQP) strategy, i.e. by iteratively solving the linear-quadratic-regulator (LQR) problem obtained by computing the linearization of the dynamics $f$ and the quadratic model of the cost $\\ell$ at the current candidate values of $\\xtraj,\\utraj$.\n\nWe denote the derivatives of $f$ by $F_x, F_u$, and the gradient and the Hessian of $\\ell$ by $(L_x, L_u)$ and $(L_{xx}, L_{xu}, L_{ux}, L_{uu})$, respectively. When possible, we will omit the time indexes for all these quantities. For the LQR case, since it is a finite-horizon problem, we can consider without loss of generality that the $F$ and $L$ matrices are constant (in the general case, one just needs to add the obvious time indices $_t$ to each quantity).\nWe also denote by $f_t$ the drift of $f$ (i.e. the change in $x$ when $u$ is zero), whose role is clear for the LQR and whose role will become clear later for solving the NLP. The LQR is then formulated as:\n$$\\min_{\\dxtraj,\\dutraj} \\Big( \\sum_{t=0}^{T-1}  \\frac{1}{2} \\bmat \\dx^T,\\du^T \\emat \\bmat L_{xx} & L_{xu} \\\\ L_{ux} & L_{uu} \\emat \\bmat \\dx \\\\ \\du \\emat + \\bmat L_x & L_u \\emat \\bmat \\dx \\\\ \\du \\emat $$\n$$ + \\frac{1}{2} \\dx_T^T L_{xx} \\dx_T + L_x^T \\dx_T \\Big)$$\n$$s.t. \\quad \\dx_0 = f_0 $$\n$$\\quad \\forall t=0\\cdots T-1, \\quad \\dx_{t+1} = F_x \\dx_t + F_u \\du_t + f_{t+1} $$\n\nThis problem is a quadratic program (under linear equality constraints -- QP).\nVarious solutions can be chosen to solve the QP.\nRecall that all of them lead to the same solution, at least neglecting numerical effects related to noise and numerical stability. We will favor two solutions.\nFor understanding the nature of the problem, we will write the solution to this QP by forming the KKT matrix. For solving it in practice, we will use the Riccati recursion typical of differential dynamic programming (DDP).\n\n\\subsection{The Russian way: using the Karush-Kuhn-Tucker matrix}\n\n\\subsubsection{Optimality principle}\nThe Lagrangian of the LQR QP is:\n$$\\mathcal{L}(\\dxtraj,\\dutraj,\\lambdatraj) = \\sum_{t=0}^{T-1} \\Big( \\frac{1}{2} \\bmat \\dx_t^T,\\du_t^T \\emat \\bmat L_{xx} & L_{xu} \\\\ L_{ux} & L_{uu} \\emat \\bmat \\dx_t \\\\ \\du_t \\emat + \\bmat L_x & L_u \\emat \\bmat \\dx_t \\\\ \\du_t \\emat $$\n$$- \\lambda_{t+1} (\\dx_{t+1} - F_x \\dx_t - F_u \\du_t - f_{t+1} ) \\Big)\n%$$ $$\n+ \\frac{1}{2} \\dx_T^T L_{xx} \\dx_T + L_x^T \\dx_T - \\lambda_0(\\dx_0 - f_0) $$\n\n\\subsubsection{Solving the LQR QP}\nThe optimum of the QP is reached for the zero of the gradient of $\\mathcal{L}$ with respect to $\\xtraj$, $\\utraj$ and $\\lambdatraj$, i.e. 
when:\n$$\n\\left[ \\begin{array}{cccc!{\\color{mygray}\\vrule}ccc!{\\color{mygray}\\vrule}cccccc}\nL_{xx}        & & & & L_{xu} & & & -I & F_x^T \\\\\n& \\ddots     & & & & \\ddots& & & \\ddots & \\ddots \\\\\n& & L_{xx}    & & & & L_{xu}& & & -I & F_x^T\\\\\n& & & L_{xx}  & & & & & & & -I\\\\\n\\arrayrulecolor{mygray}\\hline\nL_{ux} & & &       & L_{uu} & & & & F_u^T\\\\\n& \\ddots & &       &       & \\ddots & & & & \\ddots\\\\\n& & L_{ux} &       &       &       & L_{uu} & && & F_u^T \\\\\n\\arrayrulecolor{mygray}\\hline\n-I &&&&&&&&&&\\\\\nF_x & -I &  & & F_u &&&&&& \\\\\n&\\ddots & \\ddots & & & \\ddots &&&&&&\\\\\n&    &  F_x & -I & & & F_u \n\\end{array} \\right]\n%\n\\bmat\n\\dx_0 \\\\\n\\vdots \\\\\n\\dx_{T-1} \\\\\n\\dx_T \\\\\n\\arrayrulecolor{mygray}\\hline\n\\du_0 \\\\\n\\vdots \\\\\n\\du_{T-1} \\\\\n\\arrayrulecolor{mygray}\\hline\n\\lambda_0 \\\\\n\\lambda_1 \\\\\n\\vdots \\\\\n\\lambda_{T}\n\\emat\n%\n=\n%\n-\\bmat\nL_x \\\\\n\\vdots \\\\\nL_x \\\\\nL_x \\\\\n\\arrayrulecolor{mygray}\\hline\nL_u \\\\\n\\vdots \\\\\nL_u \\\\\n\\arrayrulecolor{mygray}\\hline\nf_0 \\\\\nf_1 \\\\\n\\vdots \\\\\nf_{T}\n\\emat\n$$\n\nSolving this linear equation (by inverting the full-rank KKT matrix) provides both the optimal state and control trajectories $\\dxtraj$, $\\dutraj$ and the Lagrange multipliers corresponding to the robot dynamics.\nThese multipliers indeed represent the trajectory of the co-state at the shooting points.\nSolving the LQR QP by searching for the zeros of the gradient of the Lagrangian indeed corresponds to applying Pontryagin's Minimum Principle (PMP) on the discretized LQR system.\n
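\nFor small horizons, this system can also be formed and solved directly; the following is a minimal dense sketch (numpy only; \\texttt{Lxx, Lxu, Luu, Lx, Lu, Fx, Fu} and the list of drifts \\texttt{f} are assumed given, and the multiplier signs follow the standard $[[H, A^T],[A, 0]]$ convention rather than the exact layout above):\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_lqr_kkt(T, nx, nu, Lxx, Lxu, Luu, Lx, Lu, Fx, Fu, f):\n    # Variables: dx_0..dx_T, du_0..du_{T-1}; constraints: dx_0 = f[0]\n    # and dx_{t+1} = Fx dx_t + Fu du_t + f[t+1].\n    nX, nU = (T + 1) * nx, T * nu\n    H = np.zeros((nX + nU, nX + nU)); g = np.zeros(nX + nU)\n    for t in range(T + 1):\n        ix = t * nx\n        H[ix:ix+nx, ix:ix+nx] = Lxx; g[ix:ix+nx] = Lx\n    for t in range(T):\n        ix, iu = t * nx, nX + t * nu\n        H[ix:ix+nx, iu:iu+nu] = Lxu; H[iu:iu+nu, ix:ix+nx] = Lxu.T\n        H[iu:iu+nu, iu:iu+nu] = Luu; g[iu:iu+nu] = Lu\n    A = np.zeros(((T + 1) * nx, nX + nU)); b = np.concatenate(f)\n    A[0:nx, 0:nx] = np.eye(nx)\n    for t in range(T):\n        r = (t + 1) * nx\n        A[r:r+nx, t*nx:(t+1)*nx] = -Fx\n        A[r:r+nx, r:r+nx] = np.eye(nx)\n        A[r:r+nx, nX+t*nu:nX+(t+1)*nu] = -Fu\n    K = np.block([[H, A.T], [A, np.zeros((A.shape[0], A.shape[0]))]])\n    return np.linalg.solve(K, np.concatenate([-g, b]))\n\\end{verbatim}\nThe returned vector stacks the primal step and the multipliers; a Riccati recursion computes the same step in time linear, instead of cubic, in the horizon.\n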
\n\\subsubsection{Intuition of the results}\nSolving the QP provides at the same time the satisfaction of the constraints (i.e. the resulting $\\xtraj$ is a continuous state trajectory that corresponds to the continuous control trajectory $\\utraj$, following the underlying linear integrator); and the resulting trajectory pair is of minimal (quadratic) cost.\n\nIndeed, solving the QP can be seen as computing a step to modify an initial guess $\\dxtrajguess$, $\\dutrajguess$. Such an observation is somewhat trivial, as for a QP, any initial guess will lead to the exact same optimum in a single step.\nHowever, understanding this observation is important before going to the more complex SQP case.\nAs any initial guess works the same, we typically consider $\\dxtrajguess=0$ and $\\dutrajguess=0$.\nThen this initial guess is not feasible, i.e. it does not satisfy the constraints (except in the particular case where $f_0 = \\cdots = f_{T-1} = 0$). These $f_t$ can then be seen as the gaps in the state trajectory, i.e. the state trajectory is piece-wise feasible inside each shooting interval, but is discontinuous at each shooting node $t$, with a gap of $f_t$ between the previous piece of trajectory $t$ and the next one $t+1$ ($f_0$ being a gap with respect to the initial guess state $\\Delta \\bar x_0= 0$).\n\nThen solving the QP corresponds to both making a step that nullifies the gaps $f_t$ and a step that optimizes the trajectory cost.\n\n\n\\subsection{The American way: computing the optimal flow}\n\n\\subsubsection{Optimality principle}\nDDP is generally not associated with the KKT matrix but with the backward Riccati recursion.\nWe denote the Value function (cost-to-go) at time $t$ by $V_t: \\dx \\rightarrow \\mathbb{R}$; it represents the minimal cost that can be obtained with the state being $\\dx$ at time $t$.\nWe denote the Hamiltonian of the system (Q-value) by $Q_t: \\dx,\\du \\rightarrow Q_t(\\dx,\\du) = \\ell_t(\\dx,\\du) + V \\circ \\Delta f(\\dx,\\du)$, where the linear dynamics is denoted by $\\Delta f(\\dx,\\du) = F_x \\dx + F_u \\du + f_t$.\n\nAs the problem is LQR, the Value and Hamiltonian functions are quadratic; they can be represented by their gradient and their Hessian.\nIt is important to note that the gradient of a quadratic function is not constant but varies according to the point where it is computed. However, the gradient at any point can be easily computed using the gradient at a given point plus the Hessian times the difference between the two points, $\\nabla (a) = \\nabla(b) + \\nabla^2\\cdot(a-b)$. And often, we conveniently compute the gradient at the origin.\n\n\\subsubsection{Solving the backward recursion}\nFrom the Bellman principle, we know that $V_t(\\dx) = \\min_{\\du} Q_t(\\dx,\\du)$.\nA backward recursion can then be set up, starting from the final observation that $V_T = \\ell_T$.\nThe backward recursion can be computed along any given guess trajectory $\\dxtrajguess, \\dutrajguess$, to compute $Q_t$ at each shooting node $t$ from $V_{t+1}$, and then $V_t$ by optimizing $Q_t(\\cdot,\\du)$.\nIt is important to remember that any trajectory can be chosen, as the problem is LQR, hence the optimal flow $V$ can be equivalently recovered from any trajectory $\\dxtrajguess$.\nIn particular, the trajectory does not have to be optimal, feasible, or even continuous.\n\nWhen computing backward the optimal flow, we only compute the $V$ values at the shooting times.\nHowever, the flow exists at any time and it is implicitly handled by the recursion through the integrated dynamics $F_x,F_u$.\nThen care has to be taken for discontinuous trajectories.\nIn such a case the flow $V_t$ would typically be computed at $\\dx_t$, while the Jacobians $F_x$, $F_u$ are computed at the state reached at the end of interval $t-1$, i.e. $\\dx_{t-1}^+ = F_x \\dx_{t-1} + F_u \\du_{t-1} + f_{t-1}$.\nIn particular, when $\\dxtrajguess =0$ and $\\dutrajguess=0$, then $\\dx_{t-1}^+ = f_{t-1}$.\nThe gradient of $V$ at $\\dx_{t-1}^+$ is obtained from the gradient of $V$ at $\\dx_t=0$ with:\n$$V_x^+ = V_x + V_{xx} f_t$$\nand, of course, if $f_t=0$, then they are both equal.\n\nThe backward recursion is then twofold. 
First it propagates the Q function from $Q=\\ell+V\\circ \\Delta f$:\n$$Q_{xx} = L_{xx} + F_x^T V_{xx} F_x$$\n$$Q_{xu} = L_{xu} + F_x^T V_{xx} F_u$$\n$$Q_{ux} = L_{ux} + F_u^T V_{xx} F_x$$\n$$Q_{uu} = L_{uu} + F_u^T V_{xx} F_u$$\n$$Q_{x}  = L_x + F_x^T V_x^+$$\n$$Q_{u}  = L_u + F_u^T V_x^+$$\nThe Value function is then obtained by solving for the minimum of $Q(\\cdot,\\du)$:\n$$\\text{arg}\\min_{\\du} Q(\\dx,\\du) = -k - K \\dx$$\nwith $k=Q_{uu}^{-1} Q_u$ and $K = Q_{uu}^{-1} Q_{ux}$. The Value is then:\n$$V_{xx} = Q_{xx} - Q_{xu} K$$\n$$V_x = Q_x - Q_{xu} k + V_{xx} f$$\nwhere the gradient $V_x$ is computed at the end of the previous interval, at $x^+$, and not at the shooting state $x_t$.\n\nTo obtain the complete solution $\\dxtraj,\\dutraj$, a forward pass must then be performed.\nWe discuss it later.\n
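\nIn code, the backward pass is a short loop; the following is a minimal numpy sketch (names are local to this note; the gap correction $V_{xx} f$ is applied when the gradient of $V_{t+1}$ is consumed, which is equivalent to the formulation above, and no regularization of $Q_{uu}$ is shown):\n\\begin{verbatim}\nimport numpy as np\n\ndef backward_pass(T, Lxx, Lxu, Luu, Lx, Lu, Fx, Fu, f):\n    Vxx, Vx = Lxx, Lx                  # terminal condition V_T = l_T\n    ks, Ks = [], []\n    for t in reversed(range(T)):\n        Vxp = Vx + Vxx @ f[t + 1]      # re-linearize V at the gap\n        Qxx = Lxx + Fx.T @ Vxx @ Fx\n        Qxu = Lxu + Fx.T @ Vxx @ Fu\n        Quu = Luu + Fu.T @ Vxx @ Fu\n        Qx  = Lx + Fx.T @ Vxp\n        Qu  = Lu + Fu.T @ Vxp\n        k = np.linalg.solve(Quu, Qu)       # feed-forward term\n        K = np.linalg.solve(Quu, Qxu.T)    # feedback gain\n        Vxx = Qxx - Qxu @ K\n        Vx  = Qx - Qxu @ k\n        ks.insert(0, k); Ks.insert(0, K)\n    return ks, Ks\n\\end{verbatim}\n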
\n\\subsubsection{Intuition of the result}\n\nWhile the KKT approach computes the solution using the dual $\\lambda$ variable, the DDP approach computes it using the $V, Q$ auxiliary variables.\n$\\lambda$ represents the co-state trajectory, while $V$ is the Value function.\nBoth are connected, as PMP writes the optimal control in terms of the co-state, while HJB expresses the optimal policy in terms of the gradient of the Value function.\n\nThe optimal flow is evaluated along the candidate trajectory $\\dxtrajguess,\\dutrajguess$.\nAs the problem is LQR, any initial guess produces the same backward pass, the same $V$ and the same solution.\nFor this reason, choosing $\\dxtrajguess=0, \\dutrajguess=0$ is very relevant.\nIn that case, if $f_t$ is nonzero, then the initial guess is not feasible and the term $V_{xx} f$ in the Value gradient back-propagation is important.\nThis term is a direct application of the LQR equations with drift, but it is rarely mentioned in DDP works \\cite{GiftthalerCoRR2017,LaineCoRR2018}.\nIts role will become very important in the following.\n\nThe solution of the LQR has once more two effects: it generates a feasible trajectory where the gap between $x_{t-1}^+$ and $x_t$ is zero; and this trajectory is optimal.\n\n\\subsection{Equivalence}\n\nBoth solutions are equivalent.\n\nProof: the KKT solution is obtained by applying PMP on the LQR system. The Riccati solution comes from integrating the HJB equation on the LQR. In the LQR case, both PMP and HJB are sufficient optimality conditions. Then both solutions are equal.\n\n\\subsection{Partial step}\n\nIn many algorithms, the QP step is only partly integrated, i.e. the initial guess is only modified by $\\alpha \\Delta$ where $\\alpha \\in [0,1[$ and $\\Delta = (\\Delta \\dxtraj,\\Delta \\dutraj)$ is the solution to the QP (for example when considering inequality constraints inside an active-set algorithm, or when using the QP as the inner solver inside an SQP).\nWhat is the effect of taking a partial step?\n\nFor the KKT formulation, the effect is pretty clear. As $\\Delta \\dxtraj$ was a step from a zero initial guess $\\dxtrajguess = 0$, making a partial step $\\alpha \\Delta$ will bridge only a part of the gap. Denoting by $\\dxtraj^\\alpha$ the solution obtained after making a step of length $\\alpha<1$, we have:\n$$\\dx_0^\\alpha = \\alpha f_0$$\n$$\\dx_{t+1}^\\alpha - F_x \\dx_t^\\alpha - F_u \\du_t^\\alpha = \\alpha f_{t+1}$$ \nThe new solution $\\dxtraj^\\alpha,\\dutraj^\\alpha$ is then infeasible and the gaps at each shooting node have only been reduced to $(1-\\alpha)$ times their previous value.\n\nFor the DDP formulation, this is less clear as it is not described in the literature.\nAs we started to explain, the complete solution should be obtained by rolling out the quadratic policy $k,K$ from the initial state $f_0$.\nBut this is only when making a full step.\nWhen making a partial step, the KKT partial solution can be obtained by (i) applying only a fraction of $k$ and (ii) keeping a part of the gap at each shooting node.\n\n\\subsection{Solving the DDP forward pass with a partial step}\n\nThe forward pass for a partial step $\\alpha \\le 1$ then becomes:\n$$\\dx_0 = \\alpha f_0$$\n$$\\du_t = -\\alpha k_t - K_t \\dx_t$$\n$$\\dx_{t+1} = F_x \\dx_t + F_u \\du_t + \\alpha f_{t+1}$$\n\nProof: by recurrence. We denote by $\\dx^*,\\du^*$ the optimal solution given by the KKT and by the DDP for a full step. We show that the proposed forward pass produces the partial KKT step.\n$$\\dx_0 = \\alpha f_0 = \\alpha \\dx_0^*$$\nNow, assuming that $\\dx_t = \\alpha \\dx_t^*$, we have:\n\\begin{align*}\n  \\forall t=0..T\\!\\!-\\!\\!1,\\quad\\quad\\quad \\du_t &= -\\alpha k_t - K_t \\alpha \\dx_t^* = \\alpha \\du_t^*\\\\\n \\dx_{t+1} &= F_x \\alpha  \\dx_t^* + F_u \\alpha \\du_t^* + \\alpha f_{t+1} = \\alpha \\dx_{t+1}^*\n\\end{align*}\n\\qed\n
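\nThis forward pass is immediate to implement; a minimal sketch (assuming the gains $k_t,K_t$ from the backward pass and the drifts $f_t$; names are local to this note):\n\\begin{verbatim}\ndef forward_pass(alpha, ks, Ks, Fx, Fu, f):\n    # Linear roll-out applying a fraction alpha of the feed-forward\n    # terms and of the gaps; alpha = 1 recovers the full KKT step.\n    dx = alpha * f[0]\n    dxs, dus = [dx], []\n    for t in range(len(ks)):\n        du = -alpha * ks[t] - Ks[t] @ dx\n        dx = Fx @ dx + Fu @ du + alpha * f[t + 1]\n        dus.append(du); dxs.append(dx)\n    return dxs, dus\n\\end{verbatim}\n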
\n\\section{A feasibility-prone OCP solver using DDP}\n\nSo far we have detailed a method to solve an LQR problem. Let's now look at the more generic case where cost and dynamics are any smooth functions.\nThe transcription of the OCP is an NLP with nonlinear equality constraints representing the robot dynamics.\nWe solve it with an SQP approach, i.e. at each iteration we compute the LQR corresponding to the tangent (differential) of the OCP at the current candidates of the decision variables; we solve the LQR and obtain a descent direction; and we search along the descent direction for a step of adequate length.\n\n\\subsection{LQR and descent direction}\nThe LQR is uniquely defined by the gradients of $f$ and $\\ell$ and the Hessian of $\\ell$ at the current guess $\\xtraj,\\utraj$.\nThe solution of the LQR is also uniquely defined and can be computed by any adequate method, in particular by inverting the KKT matrix or by solving the backward-forward Riccati recursions (at least if not considering the numerical and complexity issues).\nBoth methods will give exactly the same descent direction (neglecting rounding errors).\n\n\\subsection{Line search and integration}\nOnce the direction has been computed, any line-search algorithm can be implemented.\nBasically, the idea is to try several step lengths and to take the longest step which gives a reward that corresponds to what the LQR model predicts.\nWhen considering an SQP, two contradictory objectives have to be considered: (i) the cost should decrease similarly to what the quadratic model predicts and (ii) the constraint residual should not increase.\nThe trade-off between these two objectives is typically decided following a merit function, chosen by the user.\n\nIt is important to better understand why the constraint residual may increase.\nFirst, the current guess may, or may not, be feasible, i.e. the state at the end of each shooting interval may, or may not, correspond to the value of the state at the beginning of the next interval.\nFollowing the names chosen in the LQR case, we call gaps the discontinuities at the shooting nodes: the current guess is feasible if and only if all the $T$ gaps are zero.\n\nIf all the gaps are zero, the descent direction may make them nonzero because the descent direction is only computed from a linear model of the dynamics $F_x,F_u$.\nThe longer the step, the more incorrect the linear model, and the larger the gaps will grow.\nThe merit function then adjusts the step length to allow some gap growth (it is impossible with the linear model to prevent it entirely) but forbids too large gaps from appearing.\n\nIf some gaps are nonzero, then the corresponding $f_t$ in the LQR correspond to these gaps.\nIn that case, the LQR direction will bridge the gap thanks to the linear prediction $f_t$, but it will simultaneously increase the gap because the linear prediction is inaccurate.\nMore precisely, the gap at node $t+1$ after a step $\\dx,\\du$ will be:\n$$\nf(x_t+\\dx_t,u_t+\\du_t) - (x_{t+1}+\\dx_{t+1})= f(x_t,u_t)-x_{t+1} + F_x \\dx_t + F_u \\du_t  -  \\dx_{t+1} + o(\\alpha^2)\n$$\nwhere $\\dx_{t+1} - F_x \\dx_t - F_u \\du_t = \\alpha f_{t+1}$ by construction of the LQR, and the step length $\\alpha$ is the magnitude of the step $\\dxtraj,\\dutraj$, which leads to quadratic $o(\\alpha^2)$ errors of the linear model. Hence:\n$$\nf(x_t+\\dx_t,u_t+\\du_t) - (x_{t+1}+\\dx_{t+1}) = (1-\\alpha) (f(x_t,u_t) - x_{t+1}) + o(\\alpha^2)\n$$\nThe gap evolution is composed of two terms: the first one decreases when $\\alpha$ grows, and vanishes for a full step $\\alpha=1$; the second one grows with $\\alpha$, and vanishes for small $\\alpha$.\nOnly the second one exists when the gap of the current guess is null.\n
\n\\subsection{Nonlinear roll-out}\n\nOnce more, it is important to recall that the descent direction is the same for KKT and DDP.\nHowever, the DDP is typically associated with a particular line-search variant.\nFirst, recall that the exact classical line search can be implemented with the DDP: for that, the roll-out should be performed on the linear model, and a merit function should be considered.\n\nYet, the DDP literature classically observes that a feasible solution is directly obtained by integrating the nonlinear dynamics around a candidate solution $\\utraj$ from the initial state $x_0$.\nAs the nonlinear dynamics $f(x,u)$ is not exactly the same as the linear dynamics $F_x,F_u$, the feedback term $K$ must be used during the integration to avoid divergence.\nWith such a roll-out, the resulting candidate decision variable is a feasible trajectory, where all the gaps are zero.\n\nThis behavior is very different from the one observed with the classical line search.\nWe rather suggest that the same gaps as for the linear line search should be used for the nonlinear roll-out.\nWe then impose the gaps at the next candidate solution to be:\n$$\nf(x_t+\\dx_t,u_t+\\du_t) - (x_{t+1}+\\dx_{t+1}) = (1-\\alpha) (f(x_t,u_t) - x_{t+1})\n$$\nNo need here to consider the second-order disturbance term $o(\\alpha^2)$ as we are considering the exact nonlinear dynamics.\n
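\nA minimal sketch of this modified roll-out (assuming an integrator \\texttt{f}, the current trajectories \\texttt{xs, us}, the gains from the backward pass, and the gaps \\texttt{g}, with \\texttt{g[t+1] = f(xs[t],us[t]) - xs[t+1]} and \\texttt{g[0]} the gap on the initial condition; names are local to this note):\n\\begin{verbatim}\ndef nonlinear_rollout(alpha, xs, us, ks, Ks, f, g):\n    # Integrates the true dynamics while closing only a fraction\n    # alpha of each gap: the new gaps are (1 - alpha) times the old.\n    x = xs[0] + alpha * g[0]\n    xs_new, us_new = [x], []\n    for t in range(len(us)):\n        u = us[t] - alpha * ks[t] - Ks[t] @ (x - xs[t])\n        x = f(x, u) - (1.0 - alpha) * g[t + 1]\n        us_new.append(u); xs_new.append(x)\n    return xs_new, us_new\n\\end{verbatim}\nWith $\\alpha=1$ this is the classical DDP roll-out and all gaps are closed at once.\n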
\n\\subsection{Discussion}\n\n\\subsubsection{Bridging the gap}\nIt has been observed in multiple shooting that keeping the gaps closed during the entire search might be counterproductive, as it makes the NLP search very sensitive to the instabilities of the dynamics.\nWe agree with this observation, which is correlated to the fact that the DDP tends to be an algorithm with poor exploration (globalization) capabilities.\nWe have demonstrated in our experiments that the DDP copes much better with feasibility problems, and discovers good solutions despite poor initial guesses, when the gaps are kept during the first part of the search.\n\n\\subsubsection{Using the true dynamics or its approximation}\nUsing the nonlinear dynamics has some advantages.\nFirst, despite intuition, the nonlinear dynamics might be faster to compute, as in robotics $F_x$ and $F_u$ are large matrices with little structure (sparsity) while very efficient nonlinear routines exist to compute $f(x,u)$. On the other hand, taking a linear step provides an exact Newton step with strong convergence guarantees, at least when the NLP is convex (and often this is not the case).\n\nWe claim that the choice should be made by considering the effect on the gaps.\nWith the nonlinear step, the gaps strictly decrease with any nonzero step.\nWith the linear step, they do not strictly decrease and we have to rely on the merit function to accept a reasonable growth.\nWhile we agree that maintaining the gaps open during the search is interesting, and that it might even be interesting to enlarge them for globalization purposes, it is doubtful that the $o(\\alpha^2)$ term is an interesting growth direction.\nIndeed, this term corresponds to growth directions that are not predicted in the linear model.\nThe LQR is then not informed enough to choose an interesting value for that perturbation.\n\nWe have experimented with gap growth coming from the linear prediction error versus other perturbation terms, in particular random, and of course zero (with the nonlinear roll-out).\nWhile it seems clear that the term $(1-\\alpha)(f-x)$ is interesting, neither the interest nor the noxiousness of the $o(\\alpha^2)$ term has been observed.\n\nIn conclusion, we advise taking a nonlinear step while maintaining the gaps open.\nIf it is desirable to effectively relax the continuity constraints, we then advise to really relax the dynamic constraint (e.g. by putting it in the cost as a penalty) and not to rely on the unpredicted disturbance $o(\\alpha^2)$.\n\n\\section{Conclusion}\n\nWe have proposed two modifications to the classical DDP algorithm to solve OCPs written as NLPs.\nFirst, we modified the backward pass to accept infeasible trajectories, i.e. trajectories where a discontinuity exists at each shooting node.\nSecond, we modified the line-search algorithm to avoid bridging the gaps after the first step is taken.\n\nAs a consequence, the DDP algorithm accepts any initial guess and has a much better globalization capability.\nWe can observe that the behavior is nearly the same as that of a multiple-shooting solver.\nIndeed, it is exactly the same if the classical line search is implemented.\nWe did not demonstrate that the feasibility-prone DDP or the multiple shooting is better: the performances were equivalent on all the problems we have considered.\nWe discussed that the DDP might be easier to implement and make efficient, and we advise choosing it.\n\n\\section{Expectation model of the FDDP}\n\nAt each Newton step of an NLP solver, a line search is performed.\nOne of the simple conditions of acceptance of the step is that the actual improvement of the cost should be similar to the expected improvement provided by the quadratic model.\nFor example, considering a function $f$ and its model $m$ with $f(x+\\Delta x) = f(x) + m(\\Delta x) + o(\\|\\Delta x\\|)$, a step $\\Delta x$ would be accepted if $f(x+\\Delta x)-f(x)$ and $m(\\Delta x)$ are close enough (in practice, if the ratio between the two quantities is close to 1).\n\nWhen considering the DDP solver, computing the expected improvement is more difficult, as the $\\xtraj,\\utraj$ of the LQR are never explicitly computed: the backward pass only provides $k$, from which the actual $\\xtraj,\\utraj$ are obtained using a nonlinear rollout.\nThis section provides an efficient evaluation of the expectation model.\n\n\\subsection{Expectation model in $\\xtraj,\\utraj$}\nLet's first write the expectation model in terms of the increments $\\xtraj,\\utraj$ (let's recall that, to keep 
the notations concise, we use $x$ and $u$ for the LQR variables, while they should be interpreted as ``deltas'' in the nonlinear optimization algorithm).\nIf making a step of length $\\alpha$ (typically in $]0,1]$) in the direction $\\xtraj,\\utraj$, then the improvement of the cost should have the following form:\n$$\\Delta = \\Delta_1 \\alpha + \\frac{1}{2} \\Delta_2 \\alpha^2$$\n$\\Delta_1$ is the sum at each shooting node of the cost gradient times the change in $x$ and $u$:\n\\begin{equation}\n  \\label{eq:d1}\n  \\Delta_1 = \\sum_{t=0}^T L_{xt}^T x_t + L_{ut}^T u_t\n\\end{equation}\n(to keep the sum simpler, we treat $T$ similarly to the other nodes, by introducing $L_{uT} = 0$).\n\n\\subsubsection{Linear rollout}\nThe states and controls are obtained from a linear roll-out as:\n$$ x_{t+1} = F_{xt} x_t + F_{ut} u_t + f_{t+1}$$\n$$ u_{t} = K_t x_t + k_t$$\nPropagating these two equations, we get:\n$$ x_{t+1} = (F_{xt} + F_{ut} K_t) x_t + F_{ut} k_t + f_{t+1} =  F_{t} x_{t} + c_{t+1}$$\nwith $F_{t} = F_{xt} + F_{ut} K_t$ and $c_{t+1} = F_{ut} k_{t} + f_{t+1}$ (with $c_0 = f_0$).\nAnd finally:\n\\begin{align}\n  x_t &= F_{t-1} ... F_0 c_0 + F_{t-1} ... F_1 c_1 + ... + F_{t-1} c_{t-1} + c_t \\\\\n  &= \\sum_{i=0}^t F_{t-1} ... F_i c_i \\label{eq:lroll}\n\\end{align}\n\n\\subsubsection{First-order model $\\Delta_1$}\nReplacing $u_t$ by $k_t + K_t x_t$, the first-order term is:\n\\begin{equation}\n  \\label{eq:d1bis}\n  \\Delta_1 = \\sum_{t=0}^T (L_{xt} + K_t^T L_{ut}) ^T x_t + \\sum_{t=0}^T L_{ut}^T k_t\n\\end{equation}\nwhere we denote $l_t = L_{xt} + K_t^T L_{ut}$ to simplify the notation.\nPutting \\eqref{eq:lroll} in \\eqref{eq:d1bis}, we get:\n\\begin{align}\n  \\Delta_1 &= \\sum_{t=0}^{T} l_t^T \\sum_{i=0}^{t} F_{t-1} ... F_i c_i + L_{ut}^T k_t \\\\\n  & =  \\sum_{i=0}^{T} c_i^T  \\sum_{t=i}^{T} F_i^T ... F_{t-1}^T l_t + k_i^T L_{ui}\n\\end{align}\nEach term of the sum is composed of a product with $f_i$ and a product with $k_i$, and can then be evaluated from the result of the backward pass.\nLet's exhibit these 2 terms.\nThe term in $f_i$ is:\n$$\\Delta_{fi} = \\sum_{t=i}^{T} F_i^T ... F_{t-1}^T l_t = L_{xi} + F_{xi}^T \\Delta_{fi+1} + K_i^T (L_{ui} + F_{ui}^T \\Delta_{fi+1})$$\nThe term in $k_i$ is:\n$$\\Delta_{ki} = L_{ui} + F_{ui}^T 
\\Delta_{fi+1}$$\nIn the case where the $f_i$ are all zero, we can recognize that $\\Delta_f$ is the Value gradient and $\\Delta_k$ is the Hamiltonian control gradient:\n$\\Delta_f = V_x$ and $\\Delta_k = Q_u$.\nIn that case, we simply have:\n$$\\Delta_1 = \\sum_{t=0}^{T} Q_{ut}^T k_t$$\n\nIn the general case where the LQR is not drift-free, then $\\Delta_f$ and $\\Delta_k$ must be collected during the backward pass while propagating $V_x$ and $Q_u$.\nThe cost is similar, and an order of magnitude less than propagating the Value Hessians.\n\n\\subsubsection{Second-order term $\\Delta_2$}\n\nThis section is empty, work remains to be done here.\n\n\\subsubsection{The simple case where $T=1$}\nIt is disappointing that the expectation model is so simple in the drift-free case and only depends on backward-computed quantities, while it is so complex and requires computing additional quantities in the general case.\nLet's investigate that.\nThe intuition is that the expectation model should only depend on the gradients and Hessians of the Value and Hamiltonian functions.\n\nIn the case where we only consider one control $u_0$, the expectation model is:\n\\begin{align*}\n  \\Delta_1 &= L_{x0}^T x_0 + L_{u0}^T u_0 + L_{x1}^T x_1 \\\\\n  &= l_{0}^T f_0 + L_{u0}^T k_0 + L_{x1}^T F_{0} f_0  + L_{x1}^T F_{u0} k_0  + L_{x1}^T f_1 \\\\\n  &= (l_0 + F_0^T L_{x1})^T f_0 + (L_{u0} + F_{u0}^T L_{x1})^T k_0 + L_{x1}^T f_1\n\\end{align*}\nWe nearly recognize the gradients $V_{x0}, Q_{u0}, V_{x1}$ respectively in factor of $f_0,k_0,f_1$, but some terms are missing:\n$$V_{x0} = l_0 + F_0^T (L_{x1} + L_{xx1} f_1) + L_{xx0} f_0$$\n$$Q_{u0} = L_{u0} + F_{u0}^T (L_{x1} + L_{xx1} f_1)$$\n$$V_{x1} = L_{x1} + L_{xx1} f_1$$\nBasically, the missing terms correspond to the re-linearization of the gradient at the $f_t$ points at the end of the intervals.\nThen, we get:\n\\begin{align*}\n  \\Delta_1 &= V_{x0}^T f_0 + Q_{u0}^T k_0 + V_{x1}^T f_1 - \\left( f_0^T V_{xx0} f_0  + f_0^T F_0^T L_{xx1} f_1 + k_0^T F_{u0}^T L_{xx1} f_1 + f_1^T V_{xx1} f_1\\right) \\\\\n  &= V_{x0}^T f_0 + Q_{u0}^T k_0 + V_{x1}^T f_1 - \\left( f_0^T V_{xx0} x_0 + f_1^T V_{xx1} x_1 \\right)\n\\end{align*}\n\nThe second-order term is:\n\\begin{align*}\n  \\Delta_2 &= f_0^T V_{xx0} f_0 + k_0^T Q_{uu0} k_0 + f_1^T V_{xx1} f_1 + 2(f_0^T F_0^T L_{xx1} f_1 + k_0^T F_{u0}^T L_{xx1} f_1) \\\\\n  &= f_0^T V_{xx0} f_0 + k_0^T Q_{uu0} k_0 + f_1^T V_{xx1} f_1 + 2\\big(f_1^T V_{xx1} (x_1-f_1) \\big) \\\\\n  &= -f_0^T V_{xx0} f_0 + k_0^T Q_{uu0} k_0 - f_1^T V_{xx1} f_1 + 2\\big(f_0^T V_{xx0} x_0 + f_1^T V_{xx1} x_1 \\big)\n\\end{align*}\nWe can recognize in the additional terms (the 2 last ones) the same terms as in $\\Delta_1$.\nNicely, they will cancel out in the case we make a full step $\\alpha=1$:\n$$\\Delta(\\alpha) = \\alpha( \\Delta_1+\\frac{\\alpha}{2} \\Delta_2)$$\n$$\\Delta(1)= V_{x0}^T f_0 + Q_{u0}^T k_0 + V_{x1}^T f_1 \n- \\frac{1}{2} f_0^T V_{xx0} f_0 + \\frac{1}{2} k_0^T Q_{uu0}^T k_0 - \\frac{1}{2} f_1^T V_{xx1} f_1 $$\n\nBut they do not cancel out in the general case:\n\\begin{align*}\n  \\Delta(\\alpha) = \\alpha \\Big( V_{x0}^T f_0 + Q_{u0}^T k_0 + V_{x1}^T f_1 \n+ \\frac{\\alpha}{2} ( - f_0^T V_{xx0} f_0 - f_1^T V_{xx1} f_1 + k_0^T Q_{uu0}^T k_0 ) \\\\\n+ (\\alpha-1) ( f_0^T V_{xx0} x_0 + f_1^T V_{xx1} x_1 ) \\Big)\n\\end{align*}\n\n\\subsection{Extending to $T>1$ by recurrence}\nWe can now work by recurrence to extend the exact same shape to $T>1$.\n\nFor the first-order term, adding a new time step will add two terms in $k_1$ and $f_2$ 
where respectively $L_{x2}$ and $(L_{u1} + F_{u1}^T L_{x2})$ appear as factors, and also extends the previous factors.\nThe new factors have the same form as the previous ones and can be handled similarly.\nThe extension of the previous factors simply corresponds to the extension of the preview horizon when writing $V_x$ and $Q_u$.\nAs previously, we are missing the $L_{xx} f$ terms (corresponding to the re-linearization), which can be collected.\nEach of these additional terms is a product involving two $f$'s or one $f$ and one $k$.\nRegrouping them by decreasing order of the $f$ index, this finally boils down to the sum of the $f^T V_{xx} x$ terms:\n$$\\Delta_1 = \\sum_{t=0}^T V_{xt}^T f_t + Q_{ut}^T k_t - f_t^T V_{xxt} x_t $$\n(with again the simplification of treating the last time step symmetrically, with $k_T=0$).\n\nSimilar observations can be made on the second-order term, and lead to:\n$$\\Delta_2 = \\sum_{t=0}^T k_t^T Q_{uut}^T k_t-f_t^T V_{xxt} f_t +2 f_t^T V_{xxt} x_t $$\n\n\nThe expectation model is finally:\n$$\\Delta(\\alpha) = \\alpha \\sum_{t=0}^T \\Big( V_{xt}^T f_t + Q_{ut}^T k_t \n+ \\frac{\\alpha}{2} \\big( k_t^T Q_{uut}^T k_t-f_t^T V_{xxt} f_t \\big)\n+ (\\alpha-1) f_t^T V_{xxt} x_t \\Big)$$\n
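\nIn code, $\\Delta(\\alpha)$ can be evaluated directly from the backward-pass quantities; a minimal sketch (per-node arrays, with \\texttt{dxs[t]} the state change $x_t$, gaps \\texttt{fs} of length $T+1$, and $k_T=0$ handled by the shorter \\texttt{ks}; names are local to this note):\n\\begin{verbatim}\ndef expected_improvement(alpha, fs, ks, Vx, Vxx, Qu, Quu, dxs):\n    # Delta(alpha) = alpha*Delta_1 + alpha^2/2 * Delta_2, with\n    # Delta_1 = sum Vx.f + Qu.k - f.Vxx.x and\n    # Delta_2 = sum k.Quu.k - f.Vxx.f + 2 f.Vxx.x.\n    d1 = d2 = 0.0\n    for t in range(len(fs)):\n        d1 += Vx[t] @ fs[t] - fs[t] @ Vxx[t] @ dxs[t]\n        d2 += 2.0 * fs[t] @ Vxx[t] @ dxs[t] - fs[t] @ Vxx[t] @ fs[t]\n        if t < len(ks):\n            d1 += Qu[t] @ ks[t]\n            d2 += ks[t] @ Quu[t] @ ks[t]\n    return alpha * (d1 + 0.5 * alpha * d2)\n\\end{verbatim}\nIn the nonlinear-rollout variant described below, \\texttt{dxs[t]} is taken as $x_t'(\\alpha)-x_t$, so this function is re-evaluated for each trial step length.\n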
\n\\subsection{Line-search algorithm}\n\nFirst, let us note that if all the gaps $f_t$ are null, it is simply:\n\\begin{align*}\n  \\Delta(\\alpha) &= \\alpha \\big( \\sum Q_u^T k + \\frac{\\alpha}{2} k^T Q_{uu} k \\big) \\\\\n  &= \\alpha(\\frac{\\alpha}{2} - 1) \\sum  \\ Q_u^T\\ Q_{uu}^{-1} \\ Q_u\n\\end{align*}\nThis is always negative.\n\n\\subsubsection{Merit function ... or not}\nHowever, $\\Delta$ can be positive (i.e. correspond to an increase of the cost function) when some gaps $f_t$ are nonzero.\nThis corresponds to the expected behavior of an SQP algorithm: a step is used to reduce the error in the constraints, which can make the cost function increase.\nThe point is to monitor the decrease or the increase of the cost function when reducing the gaps in the trajectory.\nOne objective is to find a line-search strategy that holds (or at least is consistent) whether the gaps are nonzero or zero.\nLet us first consider the case where some of the gaps are nonzero.\n\nWe introduce the following merit function:\n$$\\phi(\\xtraj,\\utraj) = \\ell(\\xtraj,\\utraj) + \\mu \\sum_{t=0}^T \\| c_t(\\xtraj,\\utraj) \\|_1$$\nwhere $\\ell$ is the total cost function (integral plus terminal) and the constraints $c_t$ are:\n$$ c_{t+1} = x_{t+1} - f(x_t,u_t) = -f_{t+1} $$\n$$ c_0 = x_0 - x_0^* = -f_0$$\nwhere the $f_t$ have already been introduced as the trajectory gaps (defects); the sign is irrelevant under the $\\ell_1$ norm.\nWe consider how $\\phi$ changes when changing $\\xtraj,\\utraj$ in the direction $\\dxtraj,\\dutraj$:\n$$\\xtraj'=\\xtraj + \\alpha \\dxtraj$$\n$$\\utraj'= \\utraj + \\alpha \\dutraj$$\nWe abusively denote by $\\phi$ the merit change along the line search:\n$$\\phi(\\alpha) := \\phi(\\xtraj + \\alpha \\dxtraj,\\utraj + \\alpha \\dutraj) - \\phi(\\xtraj,\\utraj)$$\nWe have:\n$$\\phi(\\alpha) = \\ell'-\\ell - \\alpha \\mu \\sum_{t=0}^T  \\| f_t \\|_1$$\nAs the $f_t$ do not depend on $\\alpha$ (thanks to the nonlinear rollout used in the forward pass, as explained in the first part of this document), we can always find a penalization $\\mu$ that makes this function decrease.\nThis means that, if $\\mu$ is large enough, any step from an infeasible guess would be accepted.\nThe drawback is that the step might induce a very large increase of the cost function.\nIn particular, the cost increase might be much larger than predicted by the LQR model, in particular when the initial guess is far from being feasible (for systems with unstable dynamics, when the initial control guess is very far from stabilizing the initial state trajectory).\n\nAs we know that $\\sum \\| f_t \\|_1$ will decrease for any nonzero $\\alpha$, we rather suggest to consider only the first term $\\ell'-\\ell$.\nThis term exactly corresponds to the expectation model that we described above:\n$$\\ell'-\\ell = \\Delta(\\alpha) + \\mathcal O(\\alpha^3)$$\n\n\\subsubsection{Goldstein condition}\nWe cannot use a second-order version of the Wolfe (Armijo) conditions, first because $\\Delta$ might be positive (not a descent direction), and second because the strong Wolfe conditions use the gradient at the next candidate point, which is very expensive to compute in our case.\nWe rather suggest to take a second-order version of the Goldstein conditions, i.e. accept a step if the actual cost change is similar to the expected cost change:\n$$ b_1 \\le \\frac{\\ell'-\\ell}{\\Delta(\\alpha)} \\le b_2$$\nwhere $b_1,b_2$ are two adjustable parameters.\nMore precisely, if $\\Delta$ is negative (the direction is descending), this imposes that:\n$$\\ell'-\\ell \\le b_1 \\Delta(\\alpha)$$\ni.e. that the cost decreases by at least a fraction of the expectation model.\nConversely, if $\\Delta$ is positive (the direction is ascending), this imposes that:\n$$\\ell'-\\ell \\le b_2 \\Delta(\\alpha)$$\ni.e. the cost does not increase by more than a multiple of the expectation model.\nIn practice, we suggest using $b_1=0.1$ and $b_2=2$.\nThis might be better replaced by a switch to avoid the quotient.\nThe condition finally is to accept the step if:\n$$\\ell'-\\ell \\le\n\\begin{cases}\n  b_1 \\Delta(\\alpha) & \\textrm{if }\\Delta(\\alpha)\\le 0 \\\\\n  b_2 \\Delta(\\alpha) & \\textrm{otherwise}\n\\end{cases}\n$$\n\n\\subsubsection{Approximating $\\Delta$ with a nonlinear rollout}\n\nThe expectation model exhibited above involves the explicit values of the changes in the state trajectory in the term $f_t^T V_{xx} \\dx_t$.\nHowever, the DDP algorithm never explicitly computes the $\\dx_t$, but rather directly computes the next $x_t'$ in the rollout, using the nonlinear dynamics.\nWe do not have the linear direction $\\dx_t$; however, we can easily compute the change in the state trajectory by $x_t'(\\alpha)-x_t$, where $x_t'(\\alpha)$ is the state reached at time $t$ when applying the changes in the control trajectory and in the trajectory gaps.\nWe then set:\n$$\\dx_t = x_t'-x_t$$\nAnd we modify the expectation model accordingly:\n$$\\Delta(\\alpha) = \\alpha \\sum_{t=0}^T \\Big( V_{xt}^T f_t + Q_{ut}^T k_t \n+ \\frac{\\alpha}{2} \\big( k_t^T Q_{uut}^T k_t-f_t^T V_{xxt} f_t \\big)\n+ (\\alpha-1) f_t^T V_{xxt} (x_t'(\\alpha) - x_t) \\Big)$$\nAgain, this boils down to the sum of the $Q_u^T k$ terms when the gaps are all zero.\n\n\n\n\\bibliographystyle{plainnat}\n{\n\\small\n\\bibliography{references}\n}\n\n\\end{document}\n\n", "meta": {"hexsha": "ed47a5b04cebfb38cd93038fd79beb751259ecec", "size": 37537, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/reports/fddp/root.tex", "max_stars_repo_name": "pFernbach/crocoddyl", "max_stars_repo_head_hexsha": "cbf81a329e3abaf4ce1b4a8fab1431f93cd9a5c8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 322, "max_stars_repo_stars_event_min_datetime": "2019-06-04T12:04:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T14:37:44.000Z", "max_issues_repo_path": "doc/reports/fddp/root.tex", "max_issues_repo_name": "pFernbach/crocoddyl", 
"max_issues_repo_head_hexsha": "cbf81a329e3abaf4ce1b4a8fab1431f93cd9a5c8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 954, "max_issues_repo_issues_event_min_datetime": "2019-09-02T10:07:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:14:25.000Z", "max_forks_repo_path": "doc/reports/fddp/root.tex", "max_forks_repo_name": "pFernbach/crocoddyl", "max_forks_repo_head_hexsha": "cbf81a329e3abaf4ce1b4a8fab1431f93cd9a5c8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 89, "max_forks_repo_forks_event_min_datetime": "2019-08-13T13:37:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T15:55:07.000Z", "avg_line_length": 68.1252268603, "max_line_length": 512, "alphanum_fraction": 0.7233396382, "num_tokens": 11319, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257127, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5749781211708165}}
{"text": "\\section{Background}\n\\label{sec:background}\n\n\\subsection{Learning to Rank}\n\\label{sec:learningtorank}\n\n\\textit{Learning to rank} is the application of machine learning\u00a0(ML) to the creation of models that can rank. Ranking, in its most general form, is the ordering of a collection of documents (also referred to as samples). The book \\textit{Learning to Rank for Information Retrieval} by \\citeauthor{liu2009learningtorank} \\cite{liu2009learningtorank} enumerates three distinguishable approaches to phrasing and solving ranking problems. They are referred to as pointwise, pairwise, and listwise, and shall be explained briefly in the following. The second part of this section deals with ranking metrics which are used to evaluate the performance of ranking model, i.e. the quality of its ranking outputs.\n\nThe \\textbf{pointwise approach} deals with models that determine the rank of a single sample. It can be subdivided into the three subcategories (1) regression based, (2) classification based, and (3) ordinal regression based. For (1) the problem is considered a regression problem, where the rank of a sample is interpreted as a point on the real number line which the model seeks to predict correctly. With (2) the ranks are discrete and considered classes, for a given input, the model predicts a class which can be mapped to a rank. Ranks can also be grouped into bins, such that an entire range of adjacent ranks corresponds to a single class. While the classes in (2) do not have an ordering that is necessarily apparent to the model, the third subcategory (3) takes the ordinal relationship of ranks into account. The goal is to find a scoring functions such that thresholds can be found to distinguish the outputs into the different, ordered categories.\n\nA possible loss function for pointwise training is to interpret the scaled and rounded model output $y\\in\\mathbb{R}$ as the predicted rank of an input. A loss function could simply penalize divergence from the target output $\\hat{y}\\in\\mathbb{R}$, e.g. in a quadratic fashion, $(y-\\hat{y})^2$.\n\nOpposed to the pointwise approach, the \\textbf{pairwise approach} does not seek to predict the rank of a single sample. Instead the focus is on predicting the relative order between two given samples, i.e. whether the first sample $\\bm{x}^{(i)}$ has a greater or a lower rank than the second one $\\bm{x}^{(j)}$. This problem can be considered a binary classification task on a tuple input $(\\bm{x}^{(i)},\\bm{x}^{(j)})$, where the first class means $\\bm{x}^{(i)}\\triangleright \\bm{x}^{(j)}$ ($\\bm{x}^{(i)}$ has a higher rank than $\\bm{x}^{(j)}$) and the second class means $\\bm{x}^{(i)}\\triangleleft \\bm{x}^{(j)}$.\n\n\\citeauthor{Burges:learningtorankwithsgd} propose a probabilistic cost function for pairwise training \\cite{Burges:learningtorankwithsgd}. \nThey represent the ground truth as a matrix $\\bm{\\overline{P}}\\in[0,1]^{m\\times m}$, where $\\overline{P}_{i,j}$ is the probability of sample $\\bm{x}^{(i)}$ to have a higher rank than $\\bm{x}^{(j)}$, i.e. $P(\\bm{x}^{(i)}\\triangleright \\bm{x}^{(j)})$. \nThe model itself is a function $f:\\mathbb{R}^d\\rightarrow\\mathbb{R}$ mapping a sample $\\bm{x}^{(i)}$ to a scalar. 
The loss for two given samples $\\bm{x}^{(i)}$ and $\\bm{x}^{(j)}$ is defined as\n\\begin{align}\n-\\overline{P}_{i,j}\\times\\left(f\\left(\\bm{x}^{(i)}\\right)-f\\left(\\bm{x}^{(j)}\\right)\\right)+\\log\\left(1+e^{f\\left(\\bm{x}^{(i)}\\right)-f\\left(\\bm{x}^{(j)}\\right)}\\right)\n\\end{align}\nThis formulation has some favorable properties: It allows for uncertainty in the relative ordering of items in that $\\bm{\\overline{P}}$ contains probabilities and could hold e.g. $\\overline{P}_{i,j}=\\frac{1}{2}$ if the relative ranking of two items is unknown. Furthermore, the loss asymptotes to a linear function. According to the authors, this is likely to be more robust with noisy labels than a quadratic cost.\n
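\nThis pairwise cost is straightforward to implement; a minimal sketch (numpy, with \\texttt{s\\_i}, \\texttt{s\\_j} the model scores $f(\\bm{x}^{(i)})$, $f(\\bm{x}^{(j)})$; the numerically stable \\texttt{logaddexp} form is our implementation choice, not part of the original formulation):\n\\begin{verbatim}\nimport numpy as np\n\ndef pairwise_loss(s_i, s_j, P_ij):\n    # -P_ij * (s_i - s_j) + log(1 + exp(s_i - s_j)), written with\n    # logaddexp to stay stable when s_i - s_j is large.\n    d = s_i - s_j\n    return -P_ij * d + np.logaddexp(0.0, d)\n\\end{verbatim}\n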
\nThe \\textbf{listwise approach} deals with models taking an entire list of samples as their input, with the goal of ordering them. This can be either done by mapping them to real values and penalizing wrong ordering, or by mapping the samples discretely to a permutation.\n\nOnce a model has produced a ranking, it must be evaluated. Since the magnitude of a loss function is often not very meaningful by itself, other measures, so-called \\textbf{ranking metrics}, have been defined. \\citeauthor{tfranking} name four, namely \\textit{Mean Reciprocal Rank (MRR)}, the average of positions of examples weighted by their relevance values (\\textit{ARP}), \\textit{Discounted Cumulative Gain (DCG)}, and \\textit{Normalized DCG (NDCG)} in \\cite{tfranking}. They also make the important point that ``in ranking, it is preferable to have fewer errors at higher ranked positions than at the lower ranked positions, which is reflected in many metrics''.\n\nIn real-world applications, the training of ranking models can be much more complicated. For example, updating a ranking model based on what users click leads to wrong updates because the feedback from users is biased. \\citeauthor{unbiasedlearningtorank} deal with that problem and propose \\textit{unbiased learning to rank} in \\cite{unbiasedlearningtorank}. This is, however, outside the scope of this work, because we do not deal with direct user feedback.\n\n\\subsection{Graph Networks}\n\\label{sec:graphnetworks}\n\nThe most basic type of neural network (NN), consisting solely of fully connected layers, converts a vector of fixed size into another vector of fixed size.\nIn contrast, convolutional neural networks (CNNs) can convert variably-sized inputs into variably-sized outputs. In image classification or segmentation tasks, the inputs are often scaled to a fixed size and the convolutional layers serve as feature extractors for a fully-connected classification head.\nRecurrent neural networks can convert a sequence of vectors into another sequence.\n\nWhile some of those NN types \\textit{can} handle variably-sized inputs, they do not deal with sets very well: Consider for instance an (unordered) set of images and the task to assign a single label to the entire set. All architectures from above could be applied to that task by simply concatenating all the images into one large vector. However, they would implicitly try to assign a semantic meaning to the ordering in which the samples are being presented to them, a so-called relational inductive bias. One could augment the dataset by shuffling the samples of a set before concatenating them, but that does not solve the underlying problem of the architecture. Alternatively, one might process the images in the set individually and combine the outputs. This setup has the drawback that information exchange between internal, intermediate representations of the model cannot take place. The feature representation of one image cannot influence the processing of other images, which could, however, be a requirement or at least helpful when dealing with a set of images.\n\nGraph networks (GNs) are designed to handle graphs and sets naturally. They can deal with any size of graph and do not regard the ordering of nodes. While processing the nodes, information can freely flow between them. \\citeauthor{deepmind:graphnets} have brought several different types of graph networks under a common hood \\cite{deepmind:graphnets}. We follow their notation and will recapitulate the relevant parts of the paper in the following paragraphs. That is the formal definition of a directed, attributed multi-graph with a global attribute, as well as the definition of GNs.\n\nLet a graph $G$ be a 3-tuple $G=\\left(\\bm{u},\\mathbb{V},\\mathbb{E}\\right)$, where $\\bm{u}$ is a vector of global attributes.\n$\\mathbb{V}=\\left\\{\\bm{v}_i\\right\\}_{i=1}^{N^{(v)}}$ is a set of $N^{(v)}$ nodes (also referred to as vertices), each consisting solely of an attribute vector.\n$\\mathbb{E}=\\left\\{\\left(\\bm{e}_k,r_k,s_k\\right)\\right\\}_{k=1}^{N^{(e)}}$ is a set of $N^{(e)}$ edges, each of which connects a sender node $\\bm{v}_{s_k}$ to a receiver node $\\bm{v}_{r_k}$, and contains an attribute vector $\\bm{e}_k$.\n\nThe attribute vectors $\\bm{u}$, $\\bm{v}_i$, and $\\bm{e}_k$ can contain any kind of information. In fact they could even be graphs themselves.\n\nGNs are NNs containing at least one GN block. \\citeauthor{deepmind:graphnets} describe it as follows: A GN block contains three update functions, $\\phi$, and three aggregation functions, $\\rho$,\n\\begin{align}\n    \\bm{e}'_k=&\\phi^e\\left(\\bm{e}_k,\\bm{v}_{r_k},\\bm{v}_{s_k},\\bm{u}\\right)\\\\\n    \\bm{\\overline{e}}'_i=&\\rho^{e\\rightarrow v}\\left(\\mathbb{E}'_i\\right)\\\\\n    \\bm{v}'_i=&\\phi^v\\left(\\bm{\\overline{e}}'_i,\\bm{v}_i,\\bm{u}\\right)\\\\\n    \\bm{\\overline{e}}'=&\\rho^{e\\rightarrow u}\\left(\\mathbb{E}'\\right)\\\\\n    \\bm{\\overline{v}}'=&\\rho^{v\\rightarrow u}\\left(\\mathbb{V}'\\right)\\\\\n    \\bm{u}'=&\\phi^u\\left(\\bm{\\overline{e}}',\\bm{\\overline{v}}',\\bm{u}\\right)\\,,\n\\end{align}\nwhere $\\mathbb{E}'_i=\\left\\{\\left(\\bm{e}'_k,r_k,s_k\\right):r_k=i\\right\\}_{k=1}^{N^{(e)}}$ is the set of all edges pointing to the $i$th node; $\\mathbb{V}'=\\left\\{\\bm{v}'_i\\right\\}_{i=1}^{N^{(v)}}$; $\\mathbb{E}'=\\bigcup_i\\mathbb{E}'_i$.\n\n
\\begin{enumerate}\n    \\item $\\phi^e$ is the \\textbf{edge update function} and is applied to every edge in the graph. For the $k$th edge it computes a new attribute vector $\\bm{e}'_k$ based on the previous state $\\bm{e}_k$, the attributes of the two nodes that this edge connects, namely $\\bm{v}_{r_k}$ and $\\bm{v}_{s_k}$, and the global state $\\bm{u}$.\n    \\item $\\rho^{e\\rightarrow v}$ is the \\textbf{edge aggregation function} and is applied once per node in the graph. It computes an aggregation of all edges pointing to the $i$th node.\n    \\item $\\phi^v$ is the \\textbf{node update function}. It is applied to every node in the graph and computes the new attribute vector $\\bm{v}'_i$ based on the aggregation of all edges pointing to the node, the previous state of the node, and the global state.\n    \\item $\\rho^{e\\rightarrow u}$ is the \\textbf{global edge aggregation function} which computes an aggregation $\\bm{\\overline{e}}'$ of all edges in the graph.\n    \\item $\\rho^{v\\rightarrow u}$ is the \\textbf{node aggregation function} which computes an aggregation $\\bm{\\overline{v}}'$ of all nodes in the graph.\n    \\item $\\phi^u$ is the \\textbf{global attribute update function}. It takes both the node and edge aggregations, as well as the previous global attribute $\\bm{u}$, and computes a new global attribute $\\bm{u}'$.\n\\end{enumerate}\n\nThe functions are evaluated in a specific sequence, visualized in Figure~\\ref{fig:fullgraphblock}. GN blocks do not alter the structure of the graph, i.e. they convert a graph into another graph isomorphic to the input. In order to extract a fixed-size representation from a GN block, the global attribute can be used.\n\n\\begin{figure}\n    \\centering\n\n    \\begin{tikzpicture}[]\n\n        \\draw[dashed] (0,-5) rectangle (6,1);\n        \n        % inputs and outputs\n        \\node (in_u) at (-1,0) [draw=none,fill=none]{$\\bm{u}$};\n        \\node (out_u) at (7,0) [draw=none,fill=none] {$\\bm{u}'$};\n        \\node (in_V) at (-1,-2) [draw=none,fill=none]{$\\mathbb{V}$};\n        \\node (out_V) at (7,-2) [draw=none,fill=none]{$\\mathbb{V}'$};\n        \\node (in_E) at (-1,-4) [draw=none,fill=none]{$\\mathbb{E}$};\n        \\node (out_E) at (7,-4) [draw=none,fill=none]{$\\mathbb{E}'$};\n        \n        % processing\n        \\node (phi_u) at (5,0) [draw, circle]{$\\phi^u$};\n        \\node (phi_v) at (3,-2) [draw, circle]{$\\phi^v$};\n        \\node (phi_e) at (1,-4) [draw, circle]{$\\phi^e$};\n\n        \\node (rho_ev) at (2,-3) [draw, process]{$\\rho^{e\\rightarrow v}$};\n        \\node (rho_eu) at (4,-3) [draw, process]{$\\rho^{e\\rightarrow u}$};\n        \\node (rho_vu) at (4,-1) [draw, process]{$\\rho^{v\\rightarrow u}$};\n\n        % edges\n        \\draw[->] (in_u) -- (phi_u);\n        \\draw[->] (in_u) -- (phi_v);\n        \\draw[->] (in_u) -- (phi_e);\n        \\draw[->] (phi_u) -- (out_u);\n        \\draw[->] (in_V) -- (phi_v);\n        \\draw[->] (in_V) -- (phi_e);\n        \\draw[->] (phi_v) -- (out_V);\n        \\draw[->] (in_E) -- (phi_e);\n        \\draw[->] (phi_e) -- (out_E);\n        \\draw[->] (phi_e) -- (rho_ev);\n        \\draw[->] (rho_ev) -- (phi_v);\n        \\draw[->] (phi_e) -- (rho_eu);\n        \\draw[->] (phi_v) -- (rho_vu);\n        \\draw[->] (rho_vu) -- (phi_u);\n        \\draw[->] (rho_eu) -- (phi_u) ;\n\n    \\end{tikzpicture}\n    \\caption[Full GN block]{Visualization of the evaluation order in a \\textbf{full GN block}. Arrows indicate inputs as well as temporal dependency. The three inputs to a GN block are on the left, the three outputs (transformations of the inputs) are on the right-hand side. The full GN block is the most generic GN block. Several variations of the depicted version exist: 
the independent recurrent block, the message-passing neural network, the non-local neural network, the relation network, and the deep set \\cite{deepmind:graphnets}.}\n    \\label{fig:fullgraphblock}\n\\end{figure}\n\n\nGNs in related forms have been applied to a variety of different problems from different domains:\n\\begin{itemize}\n    \\item \\textbf{Social sciences}: \\cite{graphnetsreddit} apply GNs to post data from the social network Reddit. \\cite{graphnetscitationgraph} use convolutional GNs. They run experiments on citation networks and on a knowledge graph dataset.\n    \\item \\textbf{Physics}: \\cite{graphnetsphysicsengine} and \\cite{graphnetsphysics2} train GNs that focus on the modelling of physical relationships between objects.\n    \\item \\textbf{Medicine}: \\cite{graphnetsproteininterface} consider the prediction of interfaces between proteins, a challenging problem with important applications in drug discovery and design.\n    \\item \\textbf{Algorithms}: See e.g. \\cite{selsam:satsolver} and \\cite{dai:graphnetscombinatorialalgo}.\n\\end{itemize}\n\nGNs can be trained with stochastic gradient descent (SGD): they are fully differentiable, provided the aggregation and attribute update functions are differentiable.\n\n\\cite{selsam:satsolver} train a GN on a boolean satisfiability task (SAT solving) with a single bit of supervision, namely whether or not the statement is satisfiable. The authors thereby show that GNs can be successfully trained on NP-complete tasks with very little supervision. The method presented in the paper extracts fixed-size information (true or false) from the GN by computing the mean of all nodes. This fits into DeepMind's GN framework \\cite{deepmind:graphnets}, since the output bit can be interpreted as a global state. Depending on the application, a model's output might be a graph itself; in such cases, the last layer can be a GN block.\n
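\nTo make the update sequence concrete, the following is a minimal sketch of one full GN block (numpy; the sum aggregations are an arbitrary illustrative choice, the update functions are passed in by the caller, and all names are local to this text):\n\\begin{verbatim}\nimport numpy as np\n\ndef gn_block(u, V, E, senders, receivers, phi_e, phi_v, phi_u):\n    # 1) edge update: e'_k = phi_e(e_k, v_r_k, v_s_k, u)\n    E2 = np.stack([phi_e(E[k], V[receivers[k]], V[senders[k]], u)\n                   for k in range(E.shape[0])])\n    # 2) aggregate incoming edges per node (sum over r_k = i)\n    ebar = np.zeros((V.shape[0], E2.shape[1]))\n    for k, r in enumerate(receivers):\n        ebar[r] += E2[k]\n    # 3) node update: v'_i = phi_v(ebar'_i, v_i, u)\n    V2 = np.stack([phi_v(ebar[i], V[i], u)\n                   for i in range(V.shape[0])])\n    # 4)+5) global aggregations over all edges and all nodes,\n    # 6) global update: u' = phi_u(ebar', vbar', u)\n    u2 = phi_u(E2.sum(axis=0), V2.sum(axis=0), u)\n    return u2, V2, E2\n\\end{verbatim}\n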
\n\\subsection{Screenshot Processing}\n\\label{sec:screenshotprocessing}\n\nCNNs were originally inspired by the visual cortex of humans, see \\cite{lecun:lenet}, and had their most notable breakthrough with the success of AlexNet \\cite{krizhevsky:imagenet} in the domain of image classification. The convolutional layers slide multiple learnable kernels over two dimensions of the input while looking at all channels simultaneously. This concept allows for two-dimensional translation-invariant feature extraction and has proven to be both effective and efficient.\n\nMost commonly, CNNs are applied to natural images, for instance pictures of animals taken in nature or car traffic scenes from a driver's point of view.\nScreenshots, i.e. images showing the content of a monitor, differ from natural images: Often they do not contain smooth transitions, do not feature a large color variety, and contain many equally colored areas such as white background.\nCNNs have been shown to have a relational inductive bias towards natural images, which is utilized in denoising, super-resolution, and inpainting problems \\cite{deepimageprior}.\nThe question arises whether CNNs are also applicable to screenshots, which have different characteristics. There is little work that uses CNNs for screenshot processing, which can either be attributed to CNNs handling screenshots just as well as natural images, or to a lack of applications.\n\nThe closest to us is the work of \\citeauthor{beltramelli:pix2code}, who feeds screenshots into a CNN in order to extract structural information from them \\cite{beltramelli:pix2code}. He tries to convert screenshots of a user interface into code that describes that exact layout. In his setup, a CNN is combined with a long short-term memory (LSTM) \\cite{hochreiter1997lstm}: The former is responsible for the feature extraction, whereas the latter converts the extracted features into a sequence of layout descriptions.\n\nMore specifically, \\citeauthor{beltramelli:pix2code} does not perform any pre-processing and feeds images of size $256\\times 256$ into the model. The model consists of three convolutional blocks, each of which contains two convolutional layers with kernels of size $3\\times 3$ and stride 1, followed by $2\\times 2$ max-pooling and dropout \\cite{srivastava2014:dropout} with a probability of $p=0.25$. The third convolutional block is followed by two fully connected layers with $1024$ units each. Both are regularized with $p=0.3$ dropout. The filter count for the convolutions is $32$, $64$, and $128$ for the three blocks, respectively.\n
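\nAs we read this description, the feature extractor could be sketched as follows (tf.keras; the ReLU activations, the padding, and the input channel count are our assumptions, and the LSTM part is omitted):\n\\begin{verbatim}\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\ndef feature_extractor():\n    # Three blocks of two 3x3 stride-1 convolutions (32/64/128\n    # filters), 2x2 max-pooling and dropout(0.25), followed by two\n    # dense layers of 1024 units, each with dropout(0.3).\n    model = tf.keras.Sequential([tf.keras.Input(shape=(256, 256, 3))])\n    for filters in (32, 64, 128):\n        model.add(layers.Conv2D(filters, 3, activation='relu'))\n        model.add(layers.Conv2D(filters, 3, activation='relu'))\n        model.add(layers.MaxPooling2D(2))\n        model.add(layers.Dropout(0.25))\n    model.add(layers.Flatten())\n    for _ in range(2):\n        model.add(layers.Dense(1024, activation='relu'))\n        model.add(layers.Dropout(0.3))\n    return model\n\\end{verbatim}\n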
One of the main goals of \\texttt{Puppeteer} is to grow adoption in automated browser testing \\cite{PuppeteerFAQ}.  For this purpose, it wraps the \\texttt{Chrome DevTools Protocol} in JavaScript, which allows web developers to instrument, inspect, and debug instances of \\texttt{Chrome}. Technically, the protocol is HTTP-based and exposed by \\texttt{Chrome} as a RESTful API on its debugging port.\n\nThe protocol offers various functionalities, which are grouped into domains \\cite{DevToolsProtocol}. Some of the interesting domains for our use case are:\n\\begin{description}\n\t\\item[Page]: API to load a given website and take a screenshot of it.\n\t\\item[DOM]: API to read and manipulate the DOM of the given website.\n\t\\item[Network]: API to intercept network requests and to track downloaded data and network issues.\n\t\\item[Emulation]: API to emulate different geolocations, network bandwidths, and mobile devices.\n\\end{description}\n\nDuring startup, \\texttt{Puppeteer} launches an instance of \\texttt{Chrome} and attaches to it using the debugging port; afterwards, the user's custom code is executed against the instance. The instance is started in \\texttt{headless} mode, meaning that no UI is visible.\n\n\\subsection{Chromium Embedded Framework}\n\\label{cef}\n\\texttt{Chromium Embedded Framework}\\footnote{\\texttt{CEF} is available at \\url{https://bitbucket.org/chromiumembedded}.} (short: \\texttt{CEF}) was designed from the ground up with performance and ease of use in mind. The framework exposes \\texttt{C++} interfaces to control instances of the browsers \\texttt{Chrome} or \\texttt{Chromium}, with default implementations for all features, requiring little or no integration work. Furthermore, the community added wrappers for the base implementation to support a wide range of operating systems and programming languages.\n\\texttt{CEF} has a total of three versions, of which \\texttt{CEF 2} was abandoned due to the appearance of the \\texttt{Chromium Content API}, which will be discussed later. \\texttt{CEF 1} is a single-process implementation based on the old \\texttt{Chromium WebKit API}; due to the deprecation of that API, it is no longer supported or developed. \\texttt{CEF 3} is a multi-process implementation based on the current \\texttt{Chromium Content API}.\n\n\\subsection{Kubernetes}\n\\label{k8s}\nKubernetes\\footnote{K8s is available at \\url{https://kubernetes.io}.} (K8s) is an open-source container orchestration solution that was originally designed by \\textit{Google} in 2014. It aims to provide a platform where containers and containerized applications can be deployed, scaled, and managed \\cite{Wikik8s}. K8s orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This means applications are automatically scaled depending on the incoming workload.\n\nK8s combines much of the simplicity of Platform as a Service with the flexibility of Infrastructure as a Service \\cite{k8s}. K8s offers platform abstraction at the container level. Therefore, developers are responsible for building their own application containers rather than only the application itself. This is rewarded with more freedom in application development, but comes at the cost of lower productivity.\n\n\\subsubsection*{Kubernetes Components and Concepts}\nThe most popular container runtime in the industry is \\textit{Docker}. 
Container runtimes are responsible for handling the communication between the operating system (OS) and the container environment, in which applications can be configured and run.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.7\\textwidth]{resources/k8s_architecture}\n\t\\caption[Architecture diagram of K8s]{This diagram illustrates the main parts: The \\textit{master node} representing the central unit for the orchestration of the Kubernetes cluster. The \\textit{worker nodes} running the user-specified workloads represented as \\textit{pods} on the Kubernetes cluster. Arrows indicate control flow. The cluster can be controlled and orchestrated by the user using the \\textit{kubectl}-CLI. \\cite{K8sArch}}\n\t\\label{k8s_architecture}\n\\end{figure}\nK8s consists of three main components: \\textit{kubectl}, the \\textit{master node}, and the \\textit{worker nodes}. Kubectl represents the client interface for communicating with the master node by sending requests to the API server. The master node represents the administrative entrypoint and is responsible for managing the whole K8s cluster \\cite{K8sArch}. It consists of the \\textit{API server}, \\textit{etcd storage}, \\textit{scheduler}, and \\textit{controller manager}:\n\n\\begin{itemize}\n\t\\item \\textit{API server} represents the entry point for all incoming REST commands. It processes and executes them according to their business logic.\n\t\\item \\textit{etcd storage} is a simple key-value store, where information about the \\textit{pods} and the cluster is saved.\n\t\\item \\textit{scheduler} is responsible for distributing containers among the worker nodes, based on information about each node. \n\t\\item \\textit{controller manager} consists of several controllers, such as the replication controller, which is responsible for monitoring pods and recreating crashed pods.\n\\end{itemize}\nBesides the master node, a K8s cluster consists of one or more worker nodes, which are controlled by the controllers of the master node. Each worker node consists of \\textit{kubelet}, \\textit{kube-proxy}, and \\textit{pods}:\n\n\\begin{itemize}\n\t\\item \\textit{kubelet} communicates with the API server, ensuring that the requested containers are up and running.\n\t\\item \\textit{kube-proxy} acts as a proxy and load balancer on the worker node. It is responsible for the communication with the outside world.\n\t\\item \\textit{pods} are the smallest unit in K8s. 
They encapsulate one or several containers in the same shared context, which means that containers within the same pod share the same IP, storage, and memory.\n\\end{itemize}\n", "meta": {"hexsha": "253dd841e2b427e1ef12f41423b90f275372d6c9", "size": 25159, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/article/content/background.tex", "max_stars_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation", "max_stars_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-05-27T05:59:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-03T20:10:49.000Z", "max_issues_repo_path": "docs/article/content/background.tex", "max_issues_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation", "max_issues_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/article/content/background.tex", "max_forks_repo_name": "Simsso/Vision-Based-Page-Rank-Estimation", "max_forks_repo_head_hexsha": "424d80031501701ebe1ab1473b0fb09ccd6f6453", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-18T16:27:30.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-18T16:27:30.000Z", "avg_line_length": 114.3590909091, "max_line_length": 1072, "alphanum_fraction": 0.7540045312, "num_tokens": 6490, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891392358015, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5749193957100839}}
{"text": "\n\\chapter{PCPs \\& Hardness of Approximation}\n\nIn this chapter, we will review some of the fundamental results in hardness of\napproximation. One of the main results is an alternate characterization of \\NP.\nRecall the definition of \\NP\\ (\\lref[Definition]{def:np}).\nFor the verifier of an \\NP\\ problem, the proof length is at most a polynomial in\n$n$. The alternate characterization deals with probabilistic verifiers that\nquery only a few bits of the proof.\n\n\\begin{definition}[{$\\PCP_{c,s}[t,q,R]$}] \nA $\\PCP_{c,s}[t(n),q(n), R]$-Verifier (\\PCP\\ stands for Probabilistically \nCheckable Proofs)\nfor a decision problem $L$ is an algorithm $V$\nwhich takes an input $x$ and a random string\n$y \\in \\{0,1\\}^{t(n)}$, has oracle access to a proof $\\pi$ over the alphabet $[R]$ of length $2^{t(n)}$,\n and satisfies the following properties:\n \\begin{itemize}\n \\item $V$ runs in polynomial time in the length of $x$ for all $y,\\pi$.\n\\item $V$ queries at most $q(n)$ positions of $\\pi$ on all inputs. \n\\item Completeness: If $x \\in L$ then there exists $\\pi$ such that $\\Pr_y[V(x,\\pi)=1]\\geq c$.\n\\item Soundness: If $x \\notin L$ then for any $\\pi$, $\\Pr_y[V(x, \\pi)=1] \\leq\ns$.\n \\end{itemize} \n $\\PCP_{c,s}[t,q,R]$ is the class of decision problems that have\na $ \\PCP_{c,s}[t,q,R]$-Verifier.\n \\end{definition} \n It is not difficult to see that,\nfor any $q$, constant $R$ and $s<c$, $$\\PCP_{c,s}[O(\\log n),q,R] \\subseteq\n\\NP.$$ A major breakthrough in PCP characterizations, which led to many\nhardness of approximation results, was the following theorem due to Arora \\&\nSafra~\\cite{AroraS1998} and Arora \\etal~\\cite{AroraLMSS1998}. \n\\begin{theorem}[PCP Theorem] \nThere exist constants $s <1$, $q$, and $R$ such that $$\\NP \\subseteq\n\\PCP_{1,s}[O(\\log n),q,R].$$ \n\\end{theorem} \nIn the above form, it is not clear\nhow the theorem might be useful in hardness of approximation results. The\ntheorem can be stated equivalently as a hardness of approximation result.\n\n\\begin{theorem}[Hardness of Approximating \\MAXTSAT] There exists a constant $s\n<1$ such that it is \\NPHard\\ to distinguish satisfiable \\MAXTSAT\\ instances\nfrom ones for which any assignment satisfies at most an $s$ fraction of the\nclauses. \\end{theorem}\n\nIt is easy to see that the above theorem is equivalent to having a\n$\\PCP_{1,s}[O(\\log n),3,2]$-Verifier for \\MAXTSAT, whose checks are \\MAXTSAT\\\nclauses. The equivalence follows from viewing such verifiers as \n\\MAXTSAT\\ instances and vice versa.\nAny such verifier can be converted to a \\MAXTSAT\\ instance.\nThe instance is obtained by adding a variable for every bit of the proof,\nand adding a \\MAXTSAT\\ clause for every check the verifier makes.\nAny \\MAXTSAT\\ instance has a trivial \\PCP\\ verification procedure: the proof\nis the assignment, and the verifier chooses a random clause\nand checks whether it is satisfied by the assignment.\n
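\nAs a small illustration of this correspondence (our example): a verifier check that reads three proof bits $\pi(i)$, $\pi(j)$, $\pi(k)$ and accepts iff $\pi(i) \vee \neg \pi(j) \vee \pi(k)$ holds corresponds to the clause $(x_i \vee \neg x_j \vee x_k)$ over the proof variables, and the acceptance probability of the verifier over its random string equals the fraction of clauses satisfied by the corresponding assignment.\n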
\n\\section{Label Cover}\n\nThe PCP theorem, stated in terms of hardness of approximation of \\MAXTSAT, does not\ngive optimal inapproximability results (see H\\aa stad \\cite{Hastad2001}). \nTo get strong results, it is used in\nconjunction with the parallel repetition theorem of Raz~\\cite{Raz1998}. \nThis strong version of the PCP\ntheorem is usually stated in terms of the \\LabelCover\\ problem.\n\n\\begin{definition}[\\LabelCover]\n\t\n\t\\label{def:label-cover} An instance $G=(U,V,E,L,\n\tR,\\{\\pi_e\\}_{e\\in E})$ of the {\\LC} constraint satisfaction\n\tproblem consists of a bi-regular bipartite graph $(U,V,E)$,\n        two alphabets $R$ and $L$ and a projection map $\\pi_e : R \\rightarrow\n        L$ for every edge $e\\in E$.\n        Given a labeling $\\ell : V \\rightarrow R$, $\\ell : U \\rightarrow\n        L$, an edge $e = (u,v)$ is said to be satisfied by $\\ell$ if\n        $\\pi_e(\\ell(v)) = \\ell(u)$. \n\n        $G$ is said to be \\emph{at most $\\delta$-satisfiable} if every\n        labeling satisfies at most a $\\delta$ fraction of the\n        edges. \n        \n\tAn instance of \\UG\\ is a label cover instance where $L=R$ and the constraints \n\t$\\pi$ are permutations.\n\\end{definition}\n
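\nAs a toy illustration (our example): take a single edge $(u,v)$ with $L = R = \{0,1,2\}$ and the constraint $\pi_{(u,v)}(x) = x + 1 \bmod 3$, a permutation, so this is also a valid \UG\ instance. The labeling $\ell(v) = 0$, $\ell(u) = 1$ satisfies the edge, since $\pi_{(u,v)}(\ell(v)) = 1 = \ell(u)$.\n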
\nWe consider label cover instances obtained from $\TSAT$ instances in the\nfollowing natural manner. \n\\begin{definition}[$r$-repeated label cover]\n\\label{def:repeated-label-cover} \nLet $\phi$ be a $\TSAT$ instance with $X$ as the set of\nvariables and $C$ the set of clauses. The $r$-repeated bipartite label cover\ninstance $I(\phi)$ is specified by: \n\\begin{itemize} \n\\item A graph $G:=(U,V,E)$,\nwhere $U:=C^r, V:=X^r$. \\item $\Sigma_U := \{0,1\}^{3r},\Sigma_V := \{0,1\}^r$.\n\\item There is an edge $(u,v) \in E$ if the tuple of variables $v$ can be\nobtained from the tuple of clauses $u$ by replacing each clause by a variable in\nit. \n\\item The constraint $\pi_{uv}:\{0,1\}^{3r}\rightarrow \{0,1\}^{r}$ is\nsimply the projection of the assignments on $3r$ variables in all the clauses in\n$u$ to the assignments on the $r$ variables in $v$. \\item For each $u$ there is\na set of $r$ functions $\{f^u_i:\{0,1\}^{3r} \rightarrow \{0,1\} \}_{i=1}^r$\nsuch that $f^u_i(a)=0$ iff the assignment $a$ satisfies the $i$th clause in $u$.\nNote that $f^u_i$ depends only on the $3$ variables in the $i$th clause.\n\end{itemize} \nA labeling $L_U:U\rightarrow \Sigma_U,L_V:V\rightarrow \Sigma_V$\nsatisfies an edge $(u,v)$ iff $\pi_{uv}(L_U(u))=L_V(v)$ and $L_U(u)$ satisfies\nall the clauses in $u$. Let $\OPT(I(\phi))$ be the maximal fraction of\nconstraints that can be satisfied by any labeling. \n\end{definition} \nThe\nfollowing theorem is obtained by applying the parallel repetition\ntheorem of Raz~\cite{Raz1998} with $r$ repetitions on hard instances of\n\MAXTSAT\ where each variable occurs the same number of\ntimes (see Feige's result~\cite{Feige1998}) and a structural property proved by\nH\aa stad \cite[Lemma 6.9]{Hastad2001}. \n\n\begin{theorem} \label{thm:label-cover} \nThere is an\nalgorithm which on input a \TSAT\ instance $\phi$ and $r\in \N$ outputs an\n$r$-repeated label cover instance $I(\phi)$ in time $n^{O(r)}$ with the\nfollowing properties. \n\begin{itemize} \n\item Completeness: If $\phi \in \TSAT$, then $\OPT(I(\phi))=1$. \n\item Soundness: If $\phi \notin \TSAT$, then $\OPT(I(\phi)) \leq\n2^{-\epsilon_0 r}$ for some universal constant $\epsilon_0\in (0,1)$.\n\item Smooth Projections: \t\t\n$$\forall v \in V, \alpha \subset R, \qquad \Pr_u \left[ |\pi_{uv}(\alpha)| <|\alpha|^{c_0}\right] \leq \frac{1}{|\alpha|^{c_0}}.$$\n\end{itemize} \nMoreover, the underlying graph $G$ is both left and right regular.\n\end{theorem}\n\n\n For our hardness results for\n$3$-uniform $3$-colorable hypergraphs, we need a multipartite version of label\ncover, satisfying a smoothness condition. \n\begin{definition}[\cite{Khot2002b}]\nLet $I$ be a bipartite label cover instance specified by\n$\left((U,V,E),\Sigma_U,\Sigma_V,\Pi\right)$. Then $I$ is $\eta$-\emph{smooth}\niff for every $u \in U$ and two distinct labels $a,b \in \Sigma_U$ \n$$\Pr_v[\pi_{uv}(a) = \pi_{uv}(b) ] \leq \eta,$$ \nwhere $v$ is a random neighbour of $u$.\n\end{definition} \n\begin{definition}[$r$-repeated $\ell$-layered $\eta$-smooth label cover]\n\label{def:multilayer} \nLet $T:=\lceil\ell/\eta\rceil$ and $\phi$ be\na \TSAT\ instance with $X$ as the set of variables and $C$ the set of clauses.\nThe $r$-repeated $\ell$-layered $\eta$-smooth label cover instance $I(\phi)$ is\nspecified by: \n\begin{itemize} \n\item An $\ell$-partite graph with vertex sets\n$V_0, \cdots V_{\ell-1}$. Elements of $V_i$ are tuples of the form $(C',X')$\nwhere $C'$ is a set of $(T+\ell - i)r$ clauses and $X'$ is a set of $ir$\nvariables. \n\item $\Sigma_{V_i} := \{0,1\}^{m_i}$ where $m_i:={3(T+\ell - i)r\n+ir}$ which corresponds to all Boolean assignments to the clauses and variables\ncorresponding to a vertex in layer $V_i$. \n\item For $0 \leq i < j < \ell$,\n$E_{ij} \subseteq V_i \times V_j$ denotes the set of edges between layers $V_i$\nand $V_j$. For $v_i \in V_i, v_j \in V_j$, there is an edge $(v_i,v_j) \in\nE_{ij}$ iff $v_j$ can be obtained from $v_i$ by replacing some $(j-i)r$ clauses\nin $v_i$ with variables occurring in the clauses respectively. \n\item The\nconstraint $\pi_{v_i v_j}$ is the projection of assignments for clauses and\nvariables in $v_i$ to that of $v_j$. \n\item For each $i <\ell$, $v_i \in V_i$,\nthere are $(T+\ell - i)r$ functions $f_j^{v_i}:\{0,1\}^{3(T+\ell - i)r\n+ir}\rightarrow \{0,1\}$, one for each clause $j$ in $v_i$ such that\n$f_j^{v_i}(a)=0$ iff $a$ satisfies the clause $j$. This function only depends on\nthe $3$ coordinates in $j$. \end{itemize} Given a labeling $L_i:V_i\rightarrow\n\Sigma_{V_i}$ for all the vertices, an edge $(v_i,v_j) \in E_{ij}$ is satisfied\niff $L_i(v_i)$ satisfies all the clauses in $v_i$, $L_j(v_j)$ satisfies all the\nclauses in $v_j$ and $\pi_{v_i v_j}(L_i(v_i)) = L_j(v_j)$. Let\n$\OPT_{ij}(I(\phi))$ be the maximum fraction of edges in $E_{ij}$ that can be\nsatisfied by any labeling. \n\end{definition} \nThe following theorem was proved by\nDinur~\etal~\cite{DinurGKR2005} in the context of hypergraph vertex cover\ninapproximability (also see results of Dinur, Regev \& Smyth~\cite{DinurRS2005}). \n\begin{theorem}\n\label{thm:layered-label-cover} \nThere is an algorithm which on input a \TSAT\\\ninstance $\phi$ and $\ell,r\in \N, \eta \in [0,1)$ outputs an $r$-repeated\n$\ell$-layered $\eta$-smooth label cover instance $I(\phi)$ in time\n$n^{O((1+1/\eta)\ell r)}$ with the following properties. 
\n\begin{enumerate} \n\item $\forall~ 0\leq i < j < \ell$, the bipartite label cover instance\n$I_{ij}=\left((V_i,V_j,E_{ij}),\Sigma_{V_i},\Sigma_{V_j},\Pi_{ij}\right)$ is\n$\eta$-smooth. \n\item For $1<m<\ell$, any $m$ layers $0\leq i_1< \cdots <i_m\leq\n\ell-1$, and any $S_{i_j} \subseteq V_{i_j}$ such that $|S_{i_j}| \geq\n\frac{2}{m}|V_{i_j}|$, there exist distinct ${i_j}$ and ${i_{j'}}$ such that\nthe fraction of edges between $S_{i_j}$ and $S_{i_{j'}}$ relative to\n$E_{i_ji_{j'}}$ is at least $1/m^2$. \n\item If $\phi \in \TSAT$, then there is a\nlabeling for $I(\phi)$ that satisfies all the constraints. \n\item If $\phi \notin \TSAT$, then \n$$\OPT_{ij}(I(\phi)) \leq 2^{-\Omega(r)}, \quad \forall\, 0\leq i <j \leq \ell-1.$$ \n\end{enumerate} \n\end{theorem}\n\n\section{Unique Games Conjecture}\n\nKhot observed~\cite{Khot2002} that if the label sets in the \LabelCover\\\ninstance are the same and the projections are permutations, then\nthe hardness reductions could be simplified.\nHe made the\nconjecture that \LabelCover\ is hard to approximate within any constant factor\nwhen restricted to such instances.\n\begin{definition}[Unique Games Conjecture]\nFor every $\delta > 0$ there exists a large enough $R$ such that, for\n\UniqueGame\ instances $G$ with label size $R$, it is hard to distinguish\nbetween the following cases.\n\begin{itemize}\n\item \YES\ case: $\OPT(G)\geq 1-\delta$\n\item \NO\ case: $\OPT(G) \leq \delta$\n\end{itemize}\n\end{definition}\n\nStarting with the work of Khot~\cite{Khot2002}, it was shown that the\nUGC explains the lack of efficient approximation algorithms for a\nvariety of problems (e.g.\ Vertex Cover, MAX-CUT).\n\n", "meta": {"hexsha": "d9678d57920416968691fff39c31ff669a0f1ad8", "size": 10892, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/chapter6.tex", "max_stars_repo_name": "geevi/tifr-thesis", "max_stars_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/chapter6.tex", "max_issues_repo_name": "geevi/tifr-thesis", "max_issues_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/chapter6.tex", "max_forks_repo_name": "geevi/tifr-thesis", "max_forks_repo_head_hexsha": "885257ecb3b7323806213a38a6a6dc4252e25585", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-05-17T06:38:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-23T07:07:46.000Z", "avg_line_length": 48.4088888889, "max_line_length": 130, "alphanum_fraction": 0.7000550863, "num_tokens": 3522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934765, "lm_q2_score": 0.8221891305219504, "lm_q1q2_score": 0.5749193947698393}}
{"text": "\\section{The Dot Product}\\label{sec:3Ddotproduct}\n\nThe goal of this section is to answer the following question. Given two\nvectors, what is the angle between them?\n\nSince vectors have no position, we are free to place vectors wherever we\nlike. If the two vectors are placed tail-to-tail, there is now a\nreasonable interpretation of the question: we seek the measure of the\nsmallest angle between the two vectors, in the plane in which they lie.\nFigure~\\ref{fig:angle between vectors} illustrates the situation.\\index{vector!angle between}\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <6truemm,6truemm>\n\\setplotarea x from 0 to 7, y from 0 to 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 3 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 7 3\n\\setdashes\n\\plot 3 4 7 3 /\n\\put {$\\vect{v}$} [b] <0pt,3pt> at 3 4\n\\put {$\\vect{w}$} [l] <3pt,0pt> at 7 3\n\\put {$\\theta$} [bl] <11pt,8pt> at 0 0\n\\endpicture}}\n\\caption{The angle between vectors $\\vect{v}$ and $\\vect{w}$. \\label{fig:angle between vectors}}\n\\end{figure}\n\nSince the angle $\\theta$ lies in a triangle, we can compute it using a\nbit of trigonometry, namely, the law of cosines. Remember that the law of cosines states $c^2 = a^2 + b^2 - 2ab \\cos C$. \n\nThe lengths of the sides of the\ntriangle in Figure~\\ref{fig:angle between vectors} are $|\\vect{v}|$,\n$|\\vect{w}|$, and $|\\vect{v}-\\vect{w}|$. Let $\\vect{v}=\\langle v_1,v_2,v_3\\rangle$ and $\\vect{w}=\\langle w_1,w_2,w_3\\rangle$; then\n\\begin{align*}\n  |\\vect{v}-\\vect{w}|^2&=|\\vect{v}|^2+|\\vect{w}|^2-2|\\vect{v}||\\vect{w}|\\cos\\theta\t\\\\\n  2|\\vect{v}||\\vect{w}|\\cos\\theta&=|\\vect{v}|^2+|\\vect{w}|^2-|\\vect{v}-\\vect{w}|^2\t\\\\\n  &=v_1^2+v_2^2+v_3^2+w_1^2+w_2^2+w_3^2-(v_1-w_1)^2-(v_2-w_2)^2-(v_3-w_3)^2\t\\\\\n  &=v_1^2+v_2^2+v_3^2+w_1^2+w_2^2+w_3^2\t\\\\\n  &\\qquad-(v_1^2-2v_1w_1+w_1^2)\n  -(v_2^2-2v_2w_2+w_2^2)-(v_3^2-2v_3w_3+w_3^2)\t\\\\\n  &=2v_1w_1+2v_2w_2+2v_3w_3\t\\\\\n  |\\vect{v}||\\vect{w}|\\cos\\theta&=v_1w_1+v_2w_2+v_3w_3\t\\\\\n  \\cos\\theta&=(v_1w_1+v_2w_2+v_3w_3)/(|\\vect{v}||\\vect{w}|)\n\\end{align*}\nA bit of simple arithmetic with the coordinates of $\\vect{v}$ and $\\vect{w}$ allows us to compute the cosine of the angle between them. If\nnecessary we can use the arccosine to get $\\theta$, but in many\nproblems $\\cos\\theta$ turns out to be all we really need.\n\nThe numerator of the fraction that gives us $\\cos\\theta$ turns up a\nlot, so we give it a name and more compact notation: we call it\nthe \\dfont{dot  product}\\index{dot product}, and write it as\n\\[\n\\vect{v}\\cdot\\vect{w} = v_1w_1+v_2w_2+v_3w_3\n\\]\nThis is the same symbol we use for ordinary multiplication, but there\nshould never be any confusion; you can tell from context whether we\nare ``multiplying'' vectors or numbers. 
(We might also use the dot for\nscalar multiplication: $a\\cdot\\vect{v}=a\\vect{v}$; again, it is clear\nwhat is meant from context.)\n\n\\begin{example}{}{}\nFind the angle between the vectors $\\vect{v}=\\langle 1,2,1\\rangle$ and\n$\\vect{w}=\\langle 3,1,-5\\rangle$.\n\\end{example}\n\n\\begin{solution}\nWe know that\n$\\cos\\theta=\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=\n(1\\cdot3 + 2\\cdot1 + 1\\cdot(-5))/(|\\vect{v}||\\vect{w}|)=0$, so\n$\\theta=\\pi/2$, that is, the vectors are perpendicular.\n\\end{solution}\n\n\\begin{example}{}{}\nFind the angle between the vectors $\\vect{v}=\\langle 3,3,0\\rangle$ and\n$\\vect{w}=\\langle 1,0,0\\rangle$.\n\\end{example}\n\n\\begin{solution}\nWe compute\n\\begin{align*}\n\\cos\\theta &= (3\\cdot1 + 3\\cdot0 + 0\\cdot0)/(\\sqrt{9+9+0}\\sqrt{1+0+0})\t\\\\\n&= 3/\\sqrt{18} = 1/\\sqrt2\n\\end{align*}\nso $\\theta=\\pi/4$.\n\\end{solution}\n\nThe following are some special cases worth looking at. \n\n\\begin{example}{}{}\nFind the angles between:\n\\begin{enumerate}\n\\item $\\vect{v}$ and $\\vect{v}$\n\\item $\\vect{v}$ and $-\\vect{v}$\n\\item $\\vect{v}$ and $\\vect{0}=\\langle 0,0,0\\rangle$\n\\end{enumerate}\n\\end{example}\n\n\\begin{solution}\n\\begin{enumerate}\n\\item\n$\\ds \\cos\\theta= \\vect{v}\\cdot\\vect{v}/(|\\vect{v}||\\vect{v}|)=(v_1^2+v_2^2+v_3^2)/\n(\\sqrt{v_1^2+v_2^2+v_3^2}\\sqrt{v_1^2+v_2^2+v_3^2})=1$, so the angle\nbetween $\\vect{v}$ and itself is zero, which of course is correct.\n\n\\item\n$\\ds\\cos\\theta=\\vect{v}\\cdot-\\vect{v}/(|\\vect{v}||-\\vect{v}|)=(-v_1^2-v_2^2-v_3^2)/\n(\\sqrt{v_1^2+v_2^2+v_3^2}\\sqrt{v_1^2+v_2^2+v_3^2})=-1$, so the angle\nis $\\pi$, that is, the vectors point in opposite directions, as of\ncourse we already knew.\n\n\\item\n$\\ds \\cos\\theta= \\vect{v}\\cdot\\vect{0}/(|\\vect{v}||\\vect{0}|)=(0+0+0)/\n(\\sqrt{v_1^2+v_2^2+v_3^2}\\sqrt{0^2+0^2+0^2})$, which is undefined.\nOn the other hand, note that since $\\vect{v}\\cdot\\vect{0}=0$ it looks\nat first as if $\\cos\\theta$ will be zero, which as we have seen means\nthat vectors are perpendicular; only when we notice that the\ndenominator is also zero do we run into trouble. One way to ``fix''\nthis is to adopt the convention that the zero vector $\\vect{0}$ is\nperpendicular to all vectors; then we can say in general that if\n$\\vect{v}\\cdot\\vect{w}=0$, $\\vect{v}$ and $\\vect{w}$ are perpendicular.\n\\end{enumerate}\n\\end{solution}\n\nGeneralizing the examples, note the following useful facts:\n\n\\begin{itemize}\n\t\\item\tIf $\\vect{v}$ is parallel or anti-parallel to $\\vect{w}$ then\n\t$\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=\\pm1$, and conversely, if\n\t$\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=1$, $\\vect{v}$ and $\\vect{w}$\tare parallel, while if $\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=-1$, $\\vect{v}$ and $\\vect{w}$ are anti-parallel. (Vectors are\n\tparallel if they point in the same direction,\n\tanti-parallel if they point in opposite directions.) \\index{vector!parallel}\\index{vector!anti-parallel}\n\t\\item\tIf $\\vect{v}$ is perpendicular to $\\vect{w}$ then \n\t$\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=0$, and conversely if \n\t$\\vect{v}\\cdot\\vect{w}/(|\\vect{v}||\\vect{w}|)=0$ then \n\t$\\vect{v}$ and $\\vect{w}$ are perpendicular. 
\\index{vector!perpendicular}\n\\end{itemize}\n\nGiven two vectors, it is often useful to find the \\dfont{projection}\\index{vector!projection} \nof one vector onto the other, because this turns out to have important\nmeaning in many circumstances. More precisely, given $\\vect{v}$ and\n$\\vect{w}$, we seek a vector parallel to $\\vect{w}$ but with length\ndetermined by $\\vect{v}$ in a natural way, as shown in\nFigure~\\ref{fig:vector projection}. $\\vect{p}$ is chosen so that the\ntriangle formed by $\\vect{v}$, $\\vect{p}$, and $\\vect{v}-\\vect{p}$\nis a right triangle.\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <6truemm,6truemm>\n\\setplotarea x from 0 to 7, y from 0 to 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 3 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 3.98 1.71\n\\setdashes\n\\arrow <4pt> [0.35, 1] from 0 0 to 7 3\n\\plot 3 4 3.98 1.71 /\n\\put {$\\vect{v}$} [b] <0pt,3pt> at 3 4\n\\put {$\\vect{w}$} [l] <3pt,0pt> at 7 3\n\\put {$\\vect{p}$} [tl] <3pt,-3pt> at 3.98 1.71\n\\put {$\\theta$} [bl] <11pt,8pt> at 0 0\n\\endpicture}}\n\\caption{$\\vect{p}$ is the projection of $\\vect{v}$ onto $\\vect{w}$. \\label{fig:vector projection}}\n\\end{figure}\n\nUsing a little trigonometry, we see that \n$$\n  |\\vect{p}|=|\\vect{v}|\\cos\\theta= \n  |\\vect{v}|{\\vect{v}\\cdot\\vect{w}\\over|\\vect{v}||\\vect{w}|}=\n  {\\vect{v}\\cdot\\vect{w}\\over|\\vect{w}|};\n$$\nthis is sometimes called the \\dfont{scalar projection of $\\vect{v}$ onto $\\vect{w}$}\\index{vector!scalar projection}. To get $\\vect{p}$\nitself, we multiply this length by a vector of length one parallel to\n$\\vect{w}$: \n$$\n  \\vect{p}= {\\vect{v}\\cdot\\vect{w}\\over|\\vect{w}|}{\\vect{w}\\over|\\vect{w}|}=\n  {\\vect{v}\\cdot\\vect{w}\\over|\\vect{w}|^2}\\vect{w}.\n$$\nBe sure that you understand why $\\vect{w}/|\\vect{w}|$ is a vector of\nlength one (also called a \n\\dfont{unit vector}\\index{unit vector}) parallel to $\\vect{w}$.\n\nThe discussion so far implicitly assumed that $0\\le\\theta\\le\\pi/2$.\nIf $\\pi/2<\\theta\\le\\pi$, the picture is like \nFigure~\\ref{fig:obtuse vector projection}.\nIn this case $\\vect{v}\\cdot \\vect{w}$ is negative, so the vector\n$${\\vect{v}\\cdot\\vect{w}\\over|\\vect{w}|^2}\\vect{w}$$\nis anti-parallel to $\\vect{w}$, and its length is \n$$\\left|{\\vect{v}\\cdot\\vect{w}\\over|\\vect{w}|}\\right|.$$\nIn general, the scalar projection of $\\vect{v}$ onto $\\vect{w}$\nmay be positive or negative. If\nit is negative, it means that the projection vector is anti-parallel\nto $\\vect{w}$ and that the length of the projection vector is the\nabsolute value of the scalar projection. Of course, you can also\ncompute the length of the projection vector as usual, by applying the\ndistance formula to the vector.\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <6truemm,6truemm>\n\\setplotarea x from -4 to 7, y from -2 to 4\n\\arrow <4pt> [0.35, 1] from 0 0 to -4.966 0.586\n\\arrow <4pt> [0.35, 1] from 0 0 to -3.98 -1.71\n\\setdashes\n\\arrow <4pt> [0.35, 1] from 0 0 to 7 3\n\\plot -4.966 0.586 -3.98 -1.71 /\n\\put {$\\vect{v}$} [b] <0pt,3pt> at -4.966 0.586\n\\put {$\\vect{w}$} [l] <3pt,0pt> at 7 3\n\\put {$\\vect{p}$} [tl] <3pt,-3pt> at -3.98 -1.71\n\\put {$\\theta$} [b] <0pt,3pt> at 0 0\n\\endpicture}}\n\\caption{$\\vect{p}$ is the projection of $\\vect{v}$ onto $\\vect{w}$. 
\\label{fig:obtuse vector projection}}\n\\end{figure}\n\nNote that the phrase ``projection onto $\\vect{w}$'' is a bit misleading\nif taken literally; all that $\\vect{w}$ provides is a direction; the\nlength of $\\vect{w}$ has no impact on the final vector. In\nFigure~\\ref{fig:short projection}, for example, $\\vect{w}$ is shorter than\nthe projection vector, but this is perfectly acceptable.\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <6truemm,6truemm>\n\\setplotarea x from 0 to 7, y from 0 to 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 3 4\n\\setdashes\n\\arrow <4pt> [0.35, 1] from 0 0 to 2.3333 1\n\\put {$\\vect{v}$} [b] <0pt,3pt> at 3 4\n\\put {$\\vect{w}$} [t] <0pt,-5pt> at 2.3333 1\n\\put {$\\theta$} [bl] <11pt,8pt> at 0 0\n\\setcoordinatesystem units <6truemm,6truemm> point at -9 0\n\\setplotarea x from 0 to 7, y from 0 to 4\n\\setsolid\n\\arrow <4pt> [0.35, 1] from 0 0 to 3 4\n\\arrow <4pt> [0.35, 1] from 0 0 to 3.98 1.71\n\\setdashes\n\\arrow <4pt> [0.35, 1] from 0 0 to 2.3333 1\n\\plot 3 4 3.98 1.71 /\n\\put {$\\vect{v}$} [b] <0pt,3pt> at 3 4\n\\put {$\\vect{w}$} [t] <0pt,-5pt> at 2.3333 1\n\\put {$\\vect{p}$} [tl] <3pt,-3pt> at 3.98 1.71\n\\put {$\\theta$} [bl] <11pt,8pt> at 0 0\n\\endpicture}}\n\\caption{$\\vect{p}$ is the projection of $\\vect{v}$ onto $\\vect{w}$. \\label{fig:short projection}}\n\\end{figure}\n\nPhysical force is a vector quantity. It is often necessary to compute\nthe ``component'' of a force acting in a different direction than the\nforce is being applied.\n\n\\begin{example}{Components of Force Vector}{components of force vector}\nSuppose a ten pound weight is resting on an inclined plane---a pitched roof, for example. Gravity\nexerts a force of ten pounds on the object, directed straight down. It\nis useful to think of the component of this force directed down and\nparallel to the roof, and the component down and directly into the\nroof. These forces are the projections of the force vector onto\nvectors parallel and perpendicular to the roof. Suppose the roof is\ntilted at a $30^\\circ$ angle, as in Figure~\\ref{fig:components of force}. Compute the component of the force directed down the roof and the component of the force directed into the roof. \n\\end{example}\n\n\\begin{solution}\nA vector parallel to the roof is $\\langle-\\sqrt3,-1\\rangle$,\nand a vector perpendicular to the roof is\n$\\langle 1,-\\sqrt3\\rangle$.  The force vector is $\\vect{F}=\\langle\n0,-10\\rangle$. The component of the force directed down the roof is\nthen\n\\begin{align*}\n  \\vect{F}_1&={\\vect{F}\\cdot\n  \\langle-\\sqrt3,-1\\rangle\\over|\\langle-\\sqrt3,-1\\rangle|^2}\n  \\langle-\\sqrt3,-1\\rangle\n  ={10\\over 2}{\\langle-\\sqrt3,-1\\rangle\\over2}=\n  \\langle -5\\sqrt3/2,-5/2\\rangle\n\\end{align*}\nwith length 5.  The component of the force directed into the roof is\n\\begin{align*}\n  \\vect{F}_2&={\\vect{F}\\cdot\n  \\langle1,-\\sqrt3\\rangle\\over|\\langle1,-\\sqrt3\\rangle|^2}\n  \\langle1,-\\sqrt3\\rangle\n={10\\sqrt3\\over 2}{\\langle1,-\\sqrt3\\rangle\\over2}=\n\\langle 5\\sqrt3/2,-15/2\\rangle\\cr\n\\end{align*}\nwith length $5\\sqrt3$. 
Thus, a force of 5 pounds is pulling the object\ndown the roof, while a force of $5\\sqrt3$ pounds is pulling the object\ninto the roof.\n\\end{solution}\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <2truecm,2truecm>\n\\setplotarea x from 0 to 3.5, y from 0 to 2\n\\plot 0 0 3.464 2 /\n%\\setplotsymbol ({\\twelvepoint.})\n\\arrow <4pt> [0.35, 1] from 2.5 1.443 to 2.067 1.193\n\\arrow <4pt> [0.35, 1] from 2.5 1.443 to 2.5 0.443\n\\arrow <4pt> [0.35, 1] from 2.5 1.443 to 2.933 0.683\n\\put {$\\vect{F}$} [t] <0pt,-5pt> at 2.5 0.443\n\\put {$\\vect{F}_2$} [tl] <3pt,-3pt> at 2.933 0.683\n\\put {$\\vect{F}_1$} [br] <-3pt,3pt> at 2.067 1.193\n\\endpicture}}\n\\caption{Components of a force. \\label{fig:components of force}}\n\\end{figure}\n\nThe dot product has some familiar-looking properties that will be\nuseful later, so we list them here. These may be proved by writing the\nvectors in coordinate form and then performing the indicated\ncalculations; subsequently it can be easier to use the properties\ninstead of calculating with coordinates.\n\n\\begin{theorem}{Dot Product Properties}{dot product properties}\nIf $\\vect{u}$, $\\vect{v}$, and $\\vect{w}$ are vectors and $a$ is a real\nnumber, then\n\\begin{enumerate}\n\t\\item\t$\\ds \\vect{u}\\cdot\\vect{u} = |\\vect{u}|^2$\n\t\\item\t$\\vect{u}\\cdot\\vect{v} = \\vect{v}\\cdot\\vect{u}$\n\t\\item\t$\\vect{u}\\cdot(\\vect{v}+\\vect{w}) = \n\t\\vect{u}\\cdot\\vect{v}+\\vect{u}\\cdot\\vect{w}$\n\t\\item\t$(a\\vect{u})\\cdot\\vect{v}=a(\\vect{u}\\cdot\\vect{v})\n\t=\\vect{u}\\cdot(a\\vect{v})$\n\\end{enumerate}\\index{dot product!properties}\n\\end{theorem}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:3Ddotproduct}}\n\n\\begin{enumialphparenastyle}\n\n\\begin{ex}\nFind $\\langle 1,1,1\\rangle\\cdot\\langle 2,-3,4\\rangle$.\n\\begin{sol}\n\t$3$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind $\\langle 1,2,0\\rangle\\cdot\\langle 0,0,57\\rangle$.\n\\begin{sol}\n\t$0$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind $\\langle 3,2,1\\rangle\\cdot\\langle 0,1,0\\rangle$.\n\\begin{sol}\n\t$2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind $\\langle -1,-2,5\\rangle\\cdot\\langle 1,0,-1 \\rangle$.\n\\begin{sol}\n\t$-6$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind $\\langle 3,4,6\\rangle\\cdot\\langle 2,3,4\\rangle$.\n\\begin{sol}\n\t$42$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the cosine of the angle between $\\langle 1,2,3\\rangle$\nand $\\langle 1,1,1\\rangle$; use a calculator if necessary to find the angle.\n\\begin{sol}\n\t$\\sqrt6/\\sqrt7$, $\\approx 0.39$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the cosine of the angle between $\\langle -1, -2,-3\\rangle$\nand $\\langle 5,0,2\\rangle$; use a calculator if necessary to find the angle.\n\\begin{sol}\n\t$-11\\sqrt{14}\\sqrt{29}/406$, $\\approx 2.15$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the cosine of the angle between $\\langle 47,100,0\\rangle$\nand $\\langle 0,0,5\\rangle$; use a calculator if necessary to find the angle.\n\\begin{sol}\n\t$0$, $\\pi/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the cosine of the angle between $\\langle 1,0,1 \\rangle$\nand $\\langle 0,1,1\\rangle$; use a calculator if necessary to find the angle.\n\\begin{sol}\n\t$1/2$, $\\pi/3$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the cosine of the angle between $\\langle 2,0,0\\rangle$\nand $\\langle -1,1,-1\\rangle$; use a calculator if necessary to find the 
angle.\n\\begin{sol}\n\t$-1/\\sqrt3$, $\\approx 2.19$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the angle between the diagonal of a cube and one of the\nedges adjacent to the diagonal.\n\\begin{sol}\n\t$\\arccos(1/\\sqrt3)\\approx 0.96$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the scalar and vector projections of $\\langle 1,2,3\\rangle$\nonto $\\langle 1,2,0\\rangle$.\n\\begin{sol}\n\t$\\sqrt{5}$, $\\langle 1,2,0\\rangle$.\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nFind the scalar and vector projections of $\\langle 1,1,1\\rangle$\nonto $\\langle 3,2,1\\rangle$.\n\\begin{sol}\n\t$3\\sqrt{14}/7$, $\\langle 9/7,6/7,3/7\\rangle$.\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA force of 10 pounds is applied to a wagon, directed at an\nangle of $30^\\circ$. Find the component of this force pulling the\nwagon straight up, and the component pulling it horizontally along\nthe ground.\n\\begin{sol}\n\t$\\langle 0,5\\rangle$, $\\langle 5\\sqrt3,0\\rangle$\n\\end{sol}\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <5truemm,5truemm>\n\\setplotarea x from 0 to 11, y from 0 to 4\n\\putrule from 0 0 to 11 0\n\\putrule from 6 0.5 to 6 2\n\\putrule from 6 2 to 2 2\n\\putrule from 2 2 to 2 0.5\n\\putrule from 2 0.5 to 6 0.5\n\\circulararc 360 degrees from 3.5 0.5 center at 3 0.5\n\\circulararc 360 degrees from 5.5 0.5 center at 5 0.5\n\\plot 6 1.5 8 2.655 /\n%\\setplotsymbol ({\\twelvepoint.})\n\\arrow <4pt> [0.35, 1] from 8.5 2.943 to 10 3.81\n\\put {$\\vect{F}$} [bl] <3pt,3pt> at 10 3.81\n\\endpicture}}\n\\caption{Pulling a wagon. \\label{fig:pulling a wagon}}\n\\end{figure}\n\\end{ex}\n\n\\begin{ex}\nA force of 15 pounds is applied to a wagon, directed at an\nangle of $45^\\circ$. Find the component of this force pulling the\nwagon straight up, and the component pulling it horizontally along\nthe ground.\n\\begin{sol}\n\t$\\langle 0,15\\sqrt2/2\\rangle$,$\\langle 15\\sqrt2/2,0\\rangle$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nUse the dot product to find a non-zero vector $\\vect{w}$\nperpendicular to both $\\vect{u}=\\langle 1,2,-3\\rangle$ and \n$\\vect{v}=\\langle 2,0,1\\rangle$.\n\\begin{sol}\n\tAny vector of the form $\\langle a, -7a/2, -2a\\rangle$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $\\vect{x}=\\langle 1,1,0 \\rangle$ and $\\vect{y}=\\langle\n2,4,2 \\rangle$.  Find a unit vector that is perpendicular to both $\\vect{x}$ and $\\vect{y}$.\n\\begin{sol}\n\t$\\langle 1/\\sqrt3,-1/\\sqrt3,1/\\sqrt3\\rangle$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nDo the three points $(1,2,0)$, $(-2,1,1)$, and $(0,3,-1)$\nform a right triangle?\n\\begin{sol}\n\tNo.\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nDo the three points $(1,1,1)$, $(2,3,2)$, and $(5,0,-1)$\nform a right triangle?\n\\begin{sol}\n\tYes.\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nShow that $|\\vect{v}\\cdot\\vect{w}|\\le|\\vect{v}||\\vect{w}|$\n\\end{ex}\n\n\\begin{ex}\nLet $\\vect{x}$ and $\\vect{y}$ be perpendicular vectors.  Use\nTheorem~\\ref{thm:dot product properties} to prove that $|\\vect{x}|^2+|\\vect{y}|^2=|\\vect{x}+\\vect{y}|^2$.  What is this result better known as?\n\\end{ex}\n\n\\begin{ex}\nProve that the diagonals of a rhombus intersect at right angles. \n\\end{ex}\n\n\\begin{ex}\nSuppose that $\\vect{z}=|\\vect{x}| \\vect{y} + |\\vect{y}| \\vect{x}$\nwhere $\\vect{x}$, $\\vect{y}$, and $\\vect{z}$ are all nonzero vectors.  
Prove\nthat $\\vect{z}$ bisects the angle between $\\vect{x}$ and $\\vect{y}$.\n\\end{ex}\n\n\\begin{ex}\nProve Theorem~\\ref{thm:dot product properties}.\n\\end{ex}\n\n\\end{enumialphparenastyle}\n", "meta": {"hexsha": "3215236109a7f388de6b0c47f5454b5950b9e6c8", "size": 18524, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "12-three-dimensions/12-3-dot-product.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "12-three-dimensions/12-3-dot-product.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "12-three-dimensions/12-3-dot-product.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7542213884, "max_line_length": 208, "alphanum_fraction": 0.6786871086, "num_tokens": 7136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8221891370573388, "lm_q1q2_score": 0.5749193787279203}}
{"text": "\n\\subsection{Parallel transport}\n\nWhen a vector is parallel transported it is moved along the manifold; the basis therefore changes, but the components are kept the same.\n\n", "meta": {"hexsha": "59fc0638a00536abed6a7522d3df2a03fcbad020", "size": 97, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-04-transportParallel.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-04-transportParallel.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-04-transportParallel.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 16.1666666667, "max_line_length": 61, "alphanum_fraction": 0.793814433, "num_tokens": 19, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.5749193732177769}}
{"text": "%% Decision Procedure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The BLT Decision Procedure}\n\\label{sec:dp}\n\nTo describe our decision procedure, we assume that we are attempting\nto check whether the constraints below are satisfiable:\n%\n\\begin{equation}\n\\label{eq:prob-matrix}\n    \\v{l} \\le \\mat{A} \\v{x} \\le \\v{u}.\n    \\tag{P1}\n\\end{equation}\n%\nWe let $n$ denote the number of free integer variables in $\\v{x} \\in \\ZZ^n$,\nand let $m$ denote the number of constrained linear forms.  The coefficients\nare rational, so $\\mat{A} \\in \\QQ^{m \\times n}$ and $\\v{l},\\v{u} \\in \\QQ^m$.\n\nWithout loss of generality we assume that $n \\le m$ and that $\\mat{A}$ has\nrank $n$ (full rank). In case the problem at hand is such that $n > m$ and/or\nthat $\\mat{A}$ is less than full rank, one can compute a basis of the column\nspace, say $\\mat{A}'$, that meets the requirement (cf. \\cite{Cohen}, \\S~2.7.1).\nThe new system $\\v{l} \\le \\mat{A}' \\v{y} \\le \\v{u}$ is equisatisfiable with\nthe original, and solutions of the new system\ndetermine one or more solutions of the original.\n\nGeometrically, we can think of $\\v{l}$ and $\\v{u}$ as opposite corners of an\n$m$-dimensional hyperrectangle defined by\n%\n\\[ \\Co := \\{ \\v{z} \\in \\RR^m  \\mid  \\v{l}_i \\le \\v{z}_i \\le \\v{u}_i \\}. \\]\n%\nWe refer to this as the \\emph{constraint set} of the problem.\nWithout loss of generality we may scale the rows of \\eqref{eq:prob-matrix} so\nthat the width of $\\Co$ is the same along every axis, i.e.\n%\n\\begin{equation}\n    \\label{eq:cube}\n    \\v{u}_i - \\v{l}_i = \\v{u}_j - \\v{l}_j \\quad \\forall \\, i,j \\in \\{1,\\ldots,m\\}.\n\\end{equation}\n%\nNote that this transformation makes $\\Co$ a \\emph{hypercube}.\nWe let $d_\\Co$ denote the common width, and let $r_\\Co = d_\\Co/2$ denote the\ncorresponding radius of the hypercube.\n\nThe problem given by \\eqref{eq:prob-matrix} can also be characterized as\ntrying to find a common point in both a hypercube and a\nlattice.\nThe columns of $\\mat{A}$, regarded as vectors, generate a lattice.\n%\nLet $\\{\\v{b}_1, \\v{b}_2, \\ldots, \\v{b}_n\\}$ denote the column vectors of $\\mat{A}$ and define:\n%\n\\begin{equation}\n    \\label{eq:lattice-def}\n    \\La := \\left\\{ a_1 \\v{b}_1 + \\cdots + a_n \\v{b}_n \\in \\RR^m \\mid\n            a_i \\in \\ZZ \\right\\}\n\\end{equation}\n%\nBy our assumption that $\\mat{A}$ has full rank,\n$\\{\\v{b}_1, \\ldots, \\v{b}_n\\}$ are linearly independent,\nand there is a one-to-one\ncorrespondence between elements in $\\La \\cap \\Co$ and\nsatisfying assignments to~\\eqref{eq:prob-matrix}.\n%\nIt follows that checking the satisfiability of~\\eqref{eq:prob-matrix}\nis equivalent to deciding whether $\\La \\cap \\Co$ is non-empty.\n\nThe set $\\La \\cap \\Co$ is guaranteed to be finite, as a\nlattice has only a finite number of elements in any bounded region of space.\n%\nDue to the one-to-one correspondence, the number of\nsolutions to~\\eqref{eq:prob-matrix} must be finite as well, and\nhence a procedure capable of enumerating the elements in $\\La \\cap \\Co$ can\nbe used as a decision procedure for checking the satisfiability\nof~\\eqref{eq:prob-matrix}.\n
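\nAs a small running example (ours, purely illustrative), take $n = m = 2$ with\n\[ \mat{A} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \v{l} = (0,-2)^T, \qquad \v{u} = (4,2)^T. \]\nBoth rows already satisfy~\eqref{eq:cube} with common width $d_\Co = 4$, so $\Co$ is the square $[0,4] \times [-2,2]$ with radius $r_\Co = 2$, and $\La$ is generated by $\v{b}_1 = (1,1)^T$ and $\v{b}_2 = (1,-1)^T$. The lattice point $\mat{A}(2,1)^T = (3,1)^T$ lies in $\La \cap \Co$, so $\v{x} = (2,1)^T$ is a satisfying assignment.\n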
\nBefore describing such a procedure, we note that a nice feature\nof the lattice and hypercube formulation is that it provides\na simple way to estimate the number of satisfying assignments. The\n\emph{volume} of a lattice $\La$ can be defined in several ways (see\n\cite{Lenstra}, \S5), but the simplest computationally is $\vol(\La) =\n\abs{\det{\m{B}}}$ where the columns of $\m{B}$ generate $\La$. Then, the\nnumber of elements of $\La \cap \Co$ is approximately\n$\vol(\Co)/\vol(\La)$.\nWe will\nuse this to compute the number of expected solutions for the JPEG preimage\nproblems described in Section~\ref{sec:jpeg}.\n
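\nFor the small running example above (again ours), $\vol(\La) = \abs{\det \mat{A}} = 2$ and $\vol(\Co) = 4^2 = 16$, so we expect roughly $16/2 = 8$ elements in $\La \cap \Co$; direct enumeration finds $13$, the same order of magnitude. The estimate sharpens as the hypercube grows relative to the lattice spacing.\n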
\n% We will lateFigure\n%\ref{fig:solution_count} shows a plot of this estimate for a specific family\n%of problems.\n\n\n%% Decision Procedure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\subsection{Enumerating Lattice Elements}\n\label{ssec:dp}\n\n% What I want to say...\n\n% Our algorithm is a search over the tree of partial assignments $x_1 = a_1, x_2\n% = a_2, \ldots, x_j = a_j$. We use a \emph{best-first} search strategy, where\n% \"best\" at each step means making the assignment to the next variable\n% such that the resulting layer is the \emph{closest} one to center among the\n% remaining possibilities. The best-first search is also pruned by keeping track\n% of a best known upper bound on the distance of a closest lattice vector to\n% center. Layers whose distance to center exceeds the known upper bound are not\n% examined.\n%\n% In section \ref{ssec:cvp-inf}, we discuss a slightly more general family of\n% search strategies and show that they have good properties. After that we give\n% some details about our implementation.\n%\n\nWe now turn our attention to the problem of enumerating the elements of\n$\La \cap \Co$.  Recall that, without loss of generality, we have\ntaken $\Co$ to be a hypercube. Let $\v{p}$ denote the geometric center\nof $\Co$:\n%\n\begin{equation}\n    \label{eq:center}\n    \v{p} := (\frac{\v{l}_1 + \v{u}_1}{2}, \ldots, \frac{\v{l}_m + \v{u}_m}{2}).\n\end{equation}\nWith respect to the $\linf$ metric, $\Co$ is a closed ball of radius $r_\Co$,\ncentered at $\v{p}$, and hence the elements in $\La \cap \Co$ are\nprecisely those that are at most a distance $r_\Co$ from $\v{p}$.\n\nAlgorithms for finding lattice elements that are close to a given point have been\nextensively studied, and many algorithmic approaches exist; see\n\cite{AgrellEtAl} for a good survey.\n%\nWe have developed a complete search procedure by adapting the Schnorr-Euchner\nalgorithm for computing the lattice vector closest to a given\npoint~\cite{Schnorr-Euchner}.\n\n%In adapting the algorithm, we have made three changes: (1) We use the $\linf$\n%metric instead of the Euclidean $\ltwo$\n%metric; (2) rather than compute the closest element, we immediately return\n%when we find an element within the hypercube; and (3) we prune search paths\n%when the $R_\Co$\n%Our adaption %$\linf$, both because of its performance characteristics and the ease with\n%which we could change metrics.\n\n% In particular, $\v{z} \in \Co$ if and only if $\norminf{z - p} \le r_{\Co}$.\n%In the algorithm below we use \proc{Center} to denote the calculation\n%\eqref{eq:center}.\n\n%Now, suppose we have a procedure $\proc{Closest}_\infty$ which takes as input\n%a lattice $\La \subset \RR^m$, a point $\v{q} \in \RR^m$, and returns a\n%closest lattice vector to $\v{q}$, not in the usual $L^2$ metric, but in the\n%$\linf$ metric. Assuming this for the moment, we arrive at our decision\n%procedure for \eqref{eq:prob-matrix}.\n\n%\begin{algorithm}[H]\n    %\SetLine\n%    \KwIn{a lattice $\La \subset \RR^m$ and a hypercube $\Co$}\n%    \KwOut{``SAT'' and a $\v{z} \in \La \cap \Co$, or ``UNSAT''}\n%    $\v{p} \leftarrow \proc{Center}(\Co)$\;\n%    $\v{r} \leftarrow \proc{Closest}_\infty(\La, \v{p})$\;\n%    \eIf{$\v{r} \in \Co$}{\n%        \KwRet{$(\text{SAT}, \v{r})$}\;\n%    }{\n%        \KwRet{UNSAT}\;\n%    }\n%    \caption{Decision procedure for lattice points in a hypercube}\n%\end{algorithm}\n\n%The first branch is obviously correct. For the alternative, simply observe\n%that if the closest lattice vector to $\v{p}$ in the $\linf$ metric is not\n%contained in $\Co$ then there can be no lattice vector in $\Co$ (any such\n%vector would be strictly closer to $\v{p}$).\n\n%The utility of this simple procedure obviously hinges on\n%$\proc{Closest}_\infty$.\n\n%\subsection{Closest Vector Search with Infinity Norm}\n%\label{ssec:cvp-inf}\n\n%The closest vector problem has been extensively studied and many algorithmic\n%approaches to it exist; see \cite{AgrellEtAl} for a good survey. In our\n%implementation BLT we have chosen to adapt the Schnorr-Euchner strategy for\n%$\linf$, both because of its performance characteristics and the ease with\n%which we could change metrics.\n\newcommand{\ruleref}{\textbf{Split}}\n\nWe model our search procedure as a non-deterministic transition rule on partial\nassignments to the vector $\v{x}$.  The procedure begins with the empty\nassignment $\emptyset$, and incrementally assigns values to variables in $\v{x}$.\nIf the transition rule terminates with a complete assignment\n$\v{u} \in \ZZ^n$, then $\v{u}$ is a solution to the constraint\nproblem.\n%\n\begin{equation}\n%\label{eq:rules-split}\n%\tag{\textbf{Split}}\n\mathbf{(Split)}\n\hspace{20pt} \theta \ \Rightarrow \ \theta \cup \{\,j \mapsto s\,\}\n   \ \textbf{where}\,\begin{cases}\n      j \in \{\,1, \ldots, n\,\} \setminus \fn{dom}(\theta)\\\n      s \in \ZZ\ \textbf{s.t.}\n      \ \La^\RR_{\theta \cup \{\,j \mapsto s\,\}} \cap \Co \neq \emptyset\n      \end{cases}\n\end{equation}\n\nThis rule takes a partial assignment $\theta$ and extends it with an additional\nbinding $j \mapsto s$ such that the real-affine linear space\n$\La^\RR_{\theta \cup \{\,j \mapsto s\,\}}$ of the resulting assignment\nintersects with $\Co$.  This rule models backtracking implicitly; at each\nstep, we may find that there is no legal value $s$ to assign to $j$.  If this\noccurs, our procedure must backtrack to a previous step and explore an\nalternative assignment.\n
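\nContinuing our small running example (again purely illustrative): starting from $\emptyset$, the rule may pick $j = 1$ and $s = 2$, since $\La^\RR_{\{1 \mapsto 2\}} = \{\,(2+t,\,2-t) \mid t \in \RR\,\}$ meets $\Co = [0,4] \times [-2,2]$ (e.g.\ at $t=1$). A second application with $j = 2$ and $s = 1$ yields the complete assignment $(2,1)$, and indeed $\mat{A}(2,1)^T = (3,1)^T \in \Co$. Choosing $s = 4$ in the first step would instead be blocked, since $\La^\RR_{\{1 \mapsto 4\}} = \{\,(4+t,\,4-t) \mid t \in \RR\,\}$ misses $\Co$.\n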
\nWe can show that the set of lattice points in $\Co$ can be enumerated by\napplying~\ruleref{} transitively starting\nfrom $\emptyset$.\n%\nBefore stating the theorem, we first observe that each point $\v{x} \in \La$\ncan be expressed as a weighted sum of the columns of $\mat{A}$\n(i.e.,~$\v{x} = \mat{A}\v{u}$ for some unique $\v{u} \in \ZZ^n$).\n%\n\begin{thm}\n  For each vector $\v{u} \in \ZZ^n$, $\mat{A} \v{u} \in \Co$ iff there is\n  a derivation $\emptyset \Rightarrow^{+} \v{u}$.\n\end{thm}\n\begin{proof}\n%\nTo see that $\emptyset \Rightarrow^{+} \v{u}$ implies $\mat{A} \v{u} \in \Co$,\nobserve that the preceding step must have shown that\n$\La^\RR_{\v{u}} \cap \Co \neq \emptyset$.  Since $\v{u}$\nis a complete assignment, $\La^\RR_{\v{u}} = \{\,\mat{A} \v{u}\,\}$, and\nhence $\mat{A} \v{u} \in \Co$.\n\nTo see that $\mat{A} \v{u} \in \Co$ implies $\emptyset \Rightarrow^{+} \v{u}$,\nobserve that for all\npartial assignments $\theta$, $\La^\RR_\theta \cap \Co \neq \emptyset$\nimplies that $\emptyset \Rightarrow^{+} \theta$\nby induction on the number of bindings in $\theta$.\n%\nFor a complete assignment $\v{u}$, $\La^\RR_{\v{u}} = \{\,\mat{A} \v{u}\}$.\nHence, if $\mat{A} \v{u}$ is in $\Co$, then\n$\La^\RR_{\v{u}}$ is a non-empty subset of $\Co$ and $\emptyset \Rightarrow^{+} \v{u}$.\n\end{proof}\n%\nWe note that the above theorem holds regardless of the order in which we choose\nthe values of $j$ in~\ruleref{}.  We only need to consider all valid\nassignments to $s$ once we have chosen $j$.  An implementation is then\nfree to use different heuristics to search for an\nassignment.\n%\nWe will briefly describe\nthe heuristics used by BLT in Section~\ref{ssec:blt-optimizations}.\n\nThe previous theorem shows that the transition rule is sound and complete from\na logical point of view; to show that it is computable,\nwe prove the following:\n\n\begin{thm}\n  The set of partial assignments $\theta$ such that\n  $\emptyset \Rightarrow^{+} \theta$ is finite and\n  computable.\n  \label{thm:computable}\n\end{thm}\n\n\begin{proof}\nAs each application of~\ruleref{} adds an additional\nbinding to the substitution, the number of applications along any\npath is bounded by $n$.  To show that the set of $\theta$ is finite,\nwe must show that the number of potential values of $s$ used to\ninstantiate~\ruleref{} is both finite and computable.\nMore precisely, we must prove that\nthere are only a finite number of integers $s \in \ZZ$ such that\n\begin{equation}\n\La^\RR_{\theta \cup \{\,j \mapsto s\,\}} \cap \Co \neq \emptyset.\n\label{eq:extend_theta}\n\end{equation}\n%\nObserve that for any $u,k \in \ZZ$ with $k \neq 0$, the affine set\n$\La^\RR_{\theta \cup \{\,j \mapsto u+k\,\}}$ can be\nobtained by shifting the set $\La^\RR_{\theta \cup \{\,j \mapsto u\,\}}$\nby a multiple $k$ of the basis vector $\v{b}_j$.  As $\v{b}_j$ is linearly\nindependent from the other basis vectors, it follows that\n$\La^\RR_{\theta \cup \{\,j \mapsto u\,\}}$ and\n$\La^\RR_{\theta \cup \{\,j \mapsto u+k\,\}}$\nare disjoint and separated by some\npositive distance $\abs{k} \times d_{\theta,j}$, where $d_{\theta,j}$ is the distance\nbetween the adjacent hyperplanes $\La^\RR_{\theta \cup \{\,j \mapsto 0\,\}}$ and\n$\La^\RR_{\theta \cup \{\,j \mapsto 1\,\}}$.\n%\nAs the distance between any two points $\v{x}, \v{y} \in \Co$ is at most\n$d_\Co$, it follows that the number of distinct $s$\nsatisfying~\eqref{eq:extend_theta} is at most\n$d_\Co / d_{\theta,j}$.  
Moreover, as both $\\Co$ and\n$\\La^\\RR_{\\theta \\cup \\{\\,j \\mapsto s\\,\\}}$ are convex, the\nvalues of $s$ satisfying~\\eqref{eq:extend_theta} form a bounded interval\n$s \\in \\{\\,l,l+1,\\dots, u-1, u\\,\\}$.\n\nRather than compute the bounds explicitly, we\ncompute the $\\linf$-distance between $\\La^\\RR_\\theta$ and the point $\\v{p}$ at\nthe center of $\\Co$ using the reduction to linear programming described at the\nend of Section~\\ref{sec:preliminaries} in equation~\\eqref{eq:set_distance}.\n%\nThis reduction to linear programming allows one to find an assignment\n$\\v{y} \\in \\RR^{n}$ so that $\\mat{A} \\v{y}$ is one of the points\nin $\\La^\\RR_\\theta$ with minimal distance to $\\v{p}$.  We can then start by\nconsidering for $s$ the points $\\{\\,\\floor{\\v{y}_j},\\,\\floor{\\v{y}_j}-1,\\dots\\,\\}$\nand $\\{\\,\\ceil{\\v{y}_j},\\,\\ceil{\\v{y}_j}+1,\\dots\\,\\}$ until we have\nexplored all the assignments in the set $\\{\\,l,l+1,\\dots, u\\,\\}$.\n%\n\\end{proof}\n\n%\\proc{Split} says we can take a sublayer in $S$ and decompose it into its\n%component sub-sublayers by choosing an unassigned index $j$. The assignments\n%$s_i$ are taken to be precisely those for which $d_\\infty(\\La^\\RR_{I \\cup (j,\n%    s_i)}, p) < r$. This is always a finite set of consecutive integers.\n% See figure \\ref{fig:layers} for illustration.\n%\n%\\proc{Prune} removes sublayers that we know have no points closer to $\\v{p}$\n%than $r$.\n%\n%Finally, \\proc{Satisfiable} applies to 0-dimensional sublayers, i.e. lattice\n%points. If $\\La_I = \\{ \\v{y} \\}$ and the distance from $\\v{y}$ to $\\v{p}$ is\n%less than $r_\\Co$, we have found a satisfying assignment $I$.\n\n% \\begin{figure}\n%     \\centering\n%     \\vspace{0.1cm}\n%     \\includegraphics[width=0.48\\textwidth]{lattice-layers}\n%     \\caption{\\proc{Split} $\\La_I$ into sublayers}\n%     \\label{fig:layers}\n% \\end{figure}\n\n%Starting at some initial state $S_0$, a \\emph{derivation} in this system is a\n%sequence of transitions $S_0 \\Rightarrow S_1 \\Rightarrow \\cdots$ using any of\n%the three rules as long as their precondition applies. We call any state of\n%the form $(\\emptyset, r, \\v{z})$ a final state.\n%\n%\\begin{lem}\n%    \\label{lem:search}\n%    The transition system described above always terminates in a final state, i.e.\n%    \\begin{enumerate}\n%        \\item every derivation is finite,\n%        \\item the only states in which no rule applies are final states.\n%    \\end{enumerate}\n%\\end{lem}\n%\n%\\begin{proof}\n%Consider the function $\\nu$ which maps states to $n+1$-tuples of natural\n%numbers: $\\nu(S, r, \\v{z})_i = \\#\\{\\La_I \\in S \\mid \\abs{I}=i \\}$ for $i=0,\\ldots,n$.\n%We claim that all three transition rules cause $\\nu$ to strictly decrease in\n%the lexicographic order. For \\proc{Prune} and \\proc{Record} this is obvious.\n%Applying $\\proc{Split}_{j,I}$ causes $\\nu_{\\abs{I}}$ to decrease by 1 and\n%$\\nu_{\\abs{I}+1}$ to increase (by an amount bounded above by a constant\n%multiple of $r$). Hence $\\nu$ strictly decreases in lexicographic order and so\n%every derivation is finite.\n%\n%For (2), suppose to the contrary $(S, r, \\v{z})$ is a state such that $S \\ne\n%\\emptyset$ but none of the transition rules apply. Choose a $\\La_I \\in S$. If\n%$\\abs{I} < n$ then $\\proc{Split}_{j,I}$ applies for some $j$. 
Otherwise, if\n%$\\abs{I} = n$ then either $d_\\infty(\\La^\\RR_I, \\v{p}) \\ge r$, in which case\n%$\\proc{Prune}_I$ applies, or the opposite holds, in which case\n%$\\proc{Record}_I$ applies. Thus we have a contradiction.\n%\\end{proof}\n%\n%\\begin{thm}\n%    \\label{thm:closest}\n%    Let $\\v{z}_0$ be an arbitrary lattice vector and $r_0 = d_\\infty(\\v{z}_0,\n%    \\v{p})$. Then any derivation starting with initial state\n%    $(\\{\\La_\\emptyset\\}, r_0, \\v{z}_0)$ terminates at a final state $(\\{\\},\n%    r_f, \\v{z}_f)$ in which $\\v{z}$ is a closest vector in $\\La$ to $\\v{p}$.\n%\\end{thm}\n%\n%\\begin{proof}\n%Assume to the contrary that there is a \\emph{closer} lattice vector\n%$\\tz$. Clearly $\\tz \\in \\La_\\emptyset = \\La$. Further, if\n%we are at a state $(S, r, \\v{z})$ in which $\\proc{Split}_{j,I}$ applies and $\\tz \\in\n%\\La_I$, then there is a new sublayer produced by $\\proc{Split}$ that contains\n%$\\tz$. This follows from the definition of \\proc{Split}, since $\\tz$ is in\n%\\emph{some} sublayer of $\\La_I$, say $\\La_{I \\cup (j,s)}$ and\n%$d_\\infty(\\La^\\RR_{I \\cup (j,s)}, \\v{p}) \\le d_\\infty(\\tz, \\v{p}) \\le r$, the\n%latter inequality following from our assumption that $\\tz$ is closer than the\n%final vector $\\v{z}_f$ and hence than $\\v{z}$. Similarly it should be clear\n%that if at any state, $\\tz \\in \\La_I$ and $\\La_I \\in S$, then $\\proc{Prune}_I$\n%does not apply.\n\n%The above argument implies that at some point in our derivation, the\n%$0$-dimensional sublayer $\\La_J = \\{ \\tz \\}$ must appear. The only rule that\n%removes it is $\\proc{Record}_J$. Hence, $d_\\infty(\\tz, \\v{p}) \\le r_f =\n%d_\\infty(\\v{z}_f, \\v{p})$, a contradiction.\n%\\end{proof}\n\n% The Schnorr-Euchner strategy is best understood as a recursive\n% search operation. Recall the problem is to find a $\\v{z} \\in \\La$ such that\n% $\\norminf{\\v{z}-\\v{p}}$ is minimal.\n%\n%\n% The main idea in the Schnorr-Euchner search strategy is to recursively search\n% for a closet vector in a finite number of the layers, ordered by increasing\n% distance from the target point. First we discuss which layers are searched and\n% then how to compute the $\\linf$ distance from the target point to a layer.\n%\n% Suppose for a moment that an upper bound $\\rho$ is known on the distance from a\n% closest vector to $\\v{p}$. In this case it suffices to search a finite number\n% of consecutive layers.\n%\n% \\begin{lemma}\n%     \\label{lem:layers-to-search}\n%     If $d(\\La, p) \\le \\rho$ then a closest vector is contained in some layer\n%     $\\mathcal{Y}_a$ such that $\\mathcal{Y}^\\RR_a \\cap H \\ne \\emptyset$ and\n%     moreover $\\{ a \\in \\ZZ \\mid \\mathcal{Y}^\\RR_a \\cap H \\ne \\emptyset \\}$ is\n%     bounded set of consecutive integers.\n% \\end{lemma}\n% \\begin{proof}\n%     TODO\n% \\end{proof}\n% The order in which layers are searched makes a significant difference in\n% practice. We discuss the choice our implementation makes at the end of the\n% section.\n\n\\subsection{Implementation Decisions}\n\\label{ssec:blt-optimizations}\n\nTurning the previous section into a working and efficient\nprocedure involves many more choices and details than\nwe have room to describe.  
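\n\nTo make the overall shape of the search concrete, the following sketch (Python, with illustrative names; the actual BLT implementation is in Haskell and differs in detail) applies \\ruleref{} depth-first, with backtracking supplied by the recursion.  Here \\texttt{candidate\\_values} stands in for the linear-programming-based enumeration of the finite interval of legal values of $s$ established in Theorem~\\ref{thm:computable}.\n\\begin{verbatim}\ndef search(theta, n, candidate_values):\n    # theta: partial assignment, a dict mapping index j to integer s.\n    # candidate_values(theta, j) yields the finite set of integers s\n    # for which the extended affine space still meets the hypercube.\n    # Returns a complete assignment extending theta, or None.\n    if len(theta) == n:\n        return theta                  # complete assignment: a solution\n    j = next(i for i in range(n) if i not in theta)  # unassigned index\n    for s in candidate_values(theta, j):\n        result = search({**theta, j: s}, n, candidate_values)\n        if result is not None:\n            return result\n    return None                       # no legal value for j: backtrack\n\\end{verbatim}\n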
We would like, however, to indicate a couple of the choices\nwe have made in implementing BLT.\n\n\\textbf{Search Strategy.}\n%\nIn implementing the transition system,\nwe have chosen to adopt a strategy similar to Schnorr and Euchner\nin~\\cite{Schnorr-Euchner}.\n%\nWe use the LLL algorithm~\\cite{Lenstra} to generate a reduced basis, and\nfix the basis vectors by sorting in order of decreasing $L^2$ magnitude.\n%\nWe then proceed by applying \\ruleref{} in a depth-first order\nwith the sequence of $j$'s chosen according to our basis order.\n%\nThe variables with the largest magnitude are typically the\nmost-constrained variables in our problems, as they have the largest\ndistance between adjacent sublayers.\n%\nChoosing the most constrained variable is a common strategy in\nconstraint satisfaction, and we have found the strategy effective\nin this case as well.\n\nThe other choice we have in applying \\ruleref{} is which values of $s$ to\nexplore.  To maximize the likelihood of finding a satisfying assignment,\nwe would like to choose a value for $s$ that imposes the fewest\nconstraints on subsequent assignments.\nThis could be done by choosing\nan assignment to $s$ that maximizes the volume of the intersection\nbetween the hypercube $\\Co$ and the real-affine set\n$\\La^\\RR_{\\theta \\cup \\{\\,j \\mapsto s\\,\\}}$.\n\nUnfortunately, we do not know of an efficient way to compute the\n$s$ with the maximal volume\\footnote{In~\\cite{dyer_freize88}, the\nauthors show that the related problem of computing the volume of the\nintersection of the unit cube and a rational halfspace is \\#P-hard.},\nbut we have developed a proxy that works well in practice.\n%\nAs alluded to in the proof of Theorem~\\ref{thm:computable}, we\nuse linear programming to find an initial assignment to $s$ that minimizes\nthe $\\linf$-distance between the center of the hypercube $\\v{p}$ and\n$\\La^\\RR_{\\theta \\cup \\{\\,j \\mapsto s\\,\\}}$.\n%\nSince the distance between the sublayer and center point is minimal, we\ncan expect the volume of the sublayer within the hypercube to\nbe maximal or near maximal.  If this assignment is found to be\ninfeasible and we backtrack, then we explore adjacent assignments\n$s + \\delta, s - \\delta, s + 2\\delta, \\ldots{}$, where $\\delta = \\pm 1$\ndepending on orientation, in order of increasing distance.\n\n%In implementing the transition system,\n%we have chosen to adopt a strategy similar to Schnorr and Euchner in\n%\\cite{Schnorr-Euchner}. After choosing a lattice basis, we fix an ordering of\n%the basis vectors and proceed by applying $\\proc{Split}_{j,-}$ for the\n%sequence of $j$'s corresponding to our basis order. After splitting say\n%$\\La_I$ into $\\La_{I \\cup (i,a_1)}, \\ldots, \\La_{I \\cup (i,a_k)}$ we choose\n%the sublayer among these that is closest to $\\v{p}$ and apply \\proc{Split}\n%there.  Thus we have a best-first search that always follows the closest\n%sublayer first. In this way we quickly reach a $0$-dimensional sublayer,\n%namely a lattice point, and apply \\proc{Record}. 
This point is known as the\n%\\emph{Babai point} in the literature and gives us a good starting $r$ and\n%$\\v{z}$ value for our state.\n\n%After reaching the Babai point, we backtrack to the $1$-dimensional layer it\n%came from and decide to either \\proc{Prune} or \\proc{Record} at the other\n%sublayers there, doing so in order of increasing distance from $\\v{p}$\n%\\footnote{If the closest sublayer was $\\La_{I \\cup (j,s)}$ and $\\v{p}$ is\n%``above'' it with respect to the direction of $\\v{v}_j$, then one can show\n%that the assignments $(s_i) = (s, s+1, s-1, s+2, \\ldots)$ enumerate the\n%sublayers in order of increasing distance from $\\v{p}$.}.  The advantage\n%of making this choice is that when we hit a sublayer that \\proc{Prune} applies\n%to, then we infer that all the other sublayers at this level can be pruned and\n%we jump up another level.\n\n%Our implementation deviates slightly from the transition system\n%described here in order to make some crucial optimizations. When solving a\n%bounded integer constraint problem, the goal is to find a lattice vector\n%satisfying the constraints. Accordingly, in our $\\proc{Closest}_\\infty$\n%implementation we check at each application of \\proc{Record} whether the new\n%point satisfies the constraints and if it does we return early as there is no\n%need to find the closest solution. In the problems we've studied, early exit\n%like this saves an enormous amount of time, in particular when the Babai point\n%mentioned above already satisfies the constraints.\n\n%Finally, note that we've described $\\proc{Closest}_\\infty$ as starting with\n%the state $(\\La_\\emptyset, r_0, \\v{z}_0)$ for some $\\v{z}_0 \\in \\La$ and $r_0 =\n%d_\\infty(\\v{z}_0, \\v{p})$. However, if our constraint set $\\Co$ is a hypercube\n%with radius $r_\\Co$, it's obviously better to start with $r_0 = r_\\Co$ as this\n%is likely to be a much tighter upper bound. Some of the arguments above need\n%to be modified to account for this, but the performance gain is significant.\n\n% \\paragraph{Lattice Basis.} In section \\ref{sec:bcp} we hinted\n% choosing a good lattice basis to work with is important. For solving the\n% problems presented in section \\ref{sec:jpeg}, it is essential. As a\n% pre-processing step in BLT we compute a reduced lattice basis once and for all\n% using Algorithm 2.6.3 of \\cite{Cohen} as implemented in \\cite{NTL}. This is\n% done \\emph{after} the hypercube scaling since, depending on the nature of the constraints\n% present, this step can make the lattice matrix quite non-orthogonal.\n\n%Joe\n\\textbf{Layer-point distance.}\n%\nDue to efficiency concerns, as well as implementation issues with linking\nGMP with the other Haskell code that BLT is linked against,\nwe compute the distance using a conventional linear programming solver,\nGLPK \\cite{GLPK}, which uses IEEE double precision floating point for\nits calculations.  If the distance calculation is inaccurate, there is the\npotential to prune a sublayer that is mistakenly judged to be slightly too far\naway, and consequently BLT may incorrectly return UNSAT.  In cases where it\nreturns SAT, the returned model is checked against the original problem, so\nthe SAT answer is certain.
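\n\nSuch a check is cheap and exact because it involves only integer arithmetic; a minimal sketch (Python with numpy, illustrative names, representing $\\Co$ by per-coordinate bounds):\n\\begin{verbatim}\nimport numpy as np\n\ndef check_model(A, u, lower, upper):\n    # Verify that the lattice point A @ u lies inside the box\n    # [lower, upper]; integer arithmetic, so no rounding error.\n    x = A @ u\n    return bool(np.all(lower <= x) and np.all(x <= upper))\n\\end{verbatim}\n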
In\nprinciple the distance calculations can be done using exact\narithmetic\\footnote{GLPK supports this directly.} or arbitrary-precision\nfloating point arithmetic, but we have not attempted to do so yet.\n", "meta": {"hexsha": "8547d4a2a3ac005bef25cdbc5d1e1a1430a15332", "size": 25024, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "publications/2015_smt/decision-procedure.tex", "max_stars_repo_name": "benjaminfjones/blt", "max_stars_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 63, "max_stars_repo_stars_event_min_datetime": "2016-11-15T22:09:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T02:47:27.000Z", "max_issues_repo_path": "publications/2015_smt/decision-procedure.tex", "max_issues_repo_name": "benjaminfjones/blt", "max_issues_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-03-24T18:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-28T03:03:27.000Z", "max_forks_repo_path": "publications/2015_smt/decision-procedure.tex", "max_forks_repo_name": "benjaminfjones/blt", "max_forks_repo_head_hexsha": "c1129d28aa4882db404bced857427f84a7bc73aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-11-15T23:16:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T08:32:52.000Z", "avg_line_length": 46.5996275605, "max_line_length": 94, "alphanum_fraction": 0.7074808184, "num_tokens": 7521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059609645724, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5748989416757622}}
{"text": "\\documentclass[letterpaper, 11pt]{article}\n\n\\usepackage[pdfusetitle]{hyperref}\n\\usepackage[margin=1in, top = 0.5in, bottom = 0.5in]{geometry}\n\\usepackage{framed}\n\n\\pagestyle{empty}\n\n%% \\usepackage{listings}\n%% \\lstloadlanguages{C++}\n%% \\lstset{\n%%   basicstyle=\\ttfamily,\n%%   language=C++,\n%%   breaklines=true,\n%%   columns=fullflexible,\n%%   showstringspaces=false,\n%%   commentstyle=\\color[rgb]{0.5,0,0}\\itshape,\n%% }\n\n\\begin{document}\n\n\\title{Split block Bloom filters}\n\\author{Jim Apple \\\\\n  jbapple@apache.org}\n\\maketitle\n\\thispagestyle{empty}\n\nBloom filters and other approximate membership query structures (including quotient filters and cuckoo filters) typically have worst-case operation costs that are linear (or worse) in $\\lg (1/\\varepsilon)$. \\cite{cuckoo-filter,quotient-filter}\nIn Bloom filters, this is driven by the $\\lg (1/\\varepsilon)$ hash functions to be evaluated.\\footnote{And the same number of bits to access after hashing.}\nIn dictionary-based filters like cuckoo filters and quotient filters, it comes from searching through a near-full table for an open slot to write a fingerprint in.\n\nThis brief note describes a Bloom filter in which all operations scale independently of $1/\\varepsilon$, with worst-case $O(1)$ operations, first created for Apache Impala in early 2016.\\footnote{\\url{https://github.com/apache/impala/commit/b35f6d070c8e6b51079f962b448ecc2b0eb74c1a}}\\footnote{Rediscovered in 2018 by \\cite{ultra-fast}.}\nThe price paid for this performance is that these filters use a pre-determined number of hash functions, limiting their utility for false positive probabilities outside of $[0.4\\%, 19\\%]$.\nThe central ideas are:\n\n\\begin{enumerate}\n\\item Use block Bloom filters to reduce the number of cache lines to access down to one.~\\cite{block}\n\\item Within each block, use a ``split'' Bloom filter that sets one bit in each of several sections, rather than several bits in one section.~\\cite{split-bloom}\n\\item Use eight hash functions in order to fit cleanly into SIMD lanes.\\footnote{Four or sixteen would work, too.}\n\\end{enumerate}\n\nA value is inserted in a split block Bloom filter by first selecting one 256-bit block from the filter by hashing the key once.\nThen the key is hashed eight more times to a range of $[0,32)$ using SIMD instructions and multiply-shift universal hashing: $h_{s_i}(x) = \\lfloor(s_i \\cdot x) / 2^{27}\\rfloor$, where $s_i$ are odd seeds.~\\cite{multiply-shift}\nOne bit is set in each of eight contiguous 32-bit lanes within the 256-bit block using the results of the eight hash functions.\nLookup is symmetric; deletions are not supported; code is available at the end of this document.\n\nEach of the central ideas of split block Bloom filters can negatively affect $\\varepsilon$ compared to standard Bloom filters.\nFor instance, in the same space it takes for a split block Bloom filter to support $\\varepsilon = 1.0\\%$, a standard Bloom filter achieves a false positive rate of $0.63\\%$.\nThe false positive rate of split block Bloom filters can be approximated from \\cite[Equation 3]{block} and \\cite[Section 2.1]{split-bloom} as\n\n\\[\n\\sum_{i=0}^\\infty P_{256/(m/n)}(i) (1 - (1-8/256)^i)^8\n= \\sum_{i=0}^\\infty P_a(i) (1 - (1-1/32)^i)^8\n\\]\n\nwhere $P_a$ is the Poisson distribution with mean $a$, $n$ is the number of distinct hash values, $m$ is the size of the filter in bits, and $a$ is the average number of distinct hash values per block.\nAs long as $a \\in [20,52]$ ($\\varepsilon \\in [0.40\\%, 19\\%]$), the false positive probability of split block Bloom filters is no more than twice that of a standard Bloom filter with the same $n$ and $m$.\n
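\nA short numerical sketch of this approximation (Python; the infinite sum converges quickly, so we truncate it):\n\\begin{verbatim}\nfrom math import exp\n\ndef sbbf_fpp(a, terms=400):\n    # Approximate false positive probability of a split block Bloom\n    # filter, where a is the average number of distinct hash values\n    # per block.  Truncates the infinite sum after a fixed number\n    # of terms.\n    total, p = 0.0, exp(-a)          # p = Poisson(a) pmf at i = 0\n    for i in range(terms):\n        total += p * (1.0 - (1.0 - 1.0/32.0)**i)**8\n        p *= a / (i + 1)             # P(i+1) = P(i) * a / (i+1)\n    return total\n\n# sbbf_fpp(20.0) is near the 0.4% end of the range quoted above.\n\\end{verbatim}\n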
\nIn tradeoff for this increased $\\varepsilon$, we get very high speed.\nFor instance, comparing against the 8-bit version of cuckoo filters\\footnote{This test is replicable from the original cuckoo filter repo, \\url{https://github.com/efficient/cuckoofilter}}\\footnote{More benchmarks are also shown in \\cite{lemire-xor-filter,ribbon-filter,bloom-overtakes}.}:\n\n\\begin{tabular}{|r|r|r|r|}\n  \\hline  & {\\bf 100k elements} & {\\bf 1M elements} & {\\bf 100M elements} \\\\\n  \\hline {\\bf Size} & 131KB & 1MB & 134MB \\\\\n  \\hline {\\bf Cuckoo insert (M/s)} & 71 & 33 & 14 \\\\\n  \\hline {\\bf SBBF insert (M/s)} & 416 & 182 & 32 \\\\\n  \\hline {\\bf Cuckoo lookup (M/s)} & 281 & 139 & 23 \\\\\n  \\hline {\\bf SBBF lookup (M/s)} & 400 & 186 & 43 \\\\\n  \\hline {\\bf Cuckoo $\\varepsilon$} & 2.37\\% & 2.97\\% & 2.33\\% \\\\\n  \\hline {\\bf SBBF $\\varepsilon$} & 1.03\\% & 2.74\\% & 0.91\\% \\\\\n  \\hline\n\\end{tabular}\n\nBecause of their per-slot metadata, cuckoo filters and quotient filters mainly shine at false positive probabilities less than 0.5\\%.\nFor higher probabilities like those in this table, which are in the target range for Impala's use cases, split block Bloom filters are appropriate, even if not theoretically optimal.\nSplit block Bloom filters are now also used in Apache Arrow, Apache Kudu, and Apache Parquet.\nThe filter is available in a standalone package at \\url{https://github.com/jbapple/libfilter}.\n\nSee the figure below for a cut-down but working C version of split block Bloom filters.\n\n\\section*{Acknowledgements}\nThank you to Daniel Lemire for helpful discussions and inspiring questions.\n\n\\bibliographystyle{alpha}\n\\bibliography{doc}\n\n\\appendix{}\n\\begin{figure}\n  \\begin{framed}\n\\begin{verbatim}\n#include <immintrin.h>\n#include <stdint.h>\n\n// Take a hash value and get the block to access within a filter with\n// num_buckets buckets.\nuint64_t block_index(const uint64_t hash, const uint32_t num_buckets) {\n  return ((hash >> 32) * num_buckets) >> 32;\n}\n\n// Takes a hash value and creates a mask with one bit set in each 32-bit lane.\n// These are the bits to set or check when accessing the block.\n__m256i make_mask(uint32_t hash) {\n  const __m256i ones = _mm256_set1_epi32(1);\n  // Set eight odd constants for multiply-shift hashing\n  const __m256i rehash = {INT64_C(0x47b6137b) << 32 | 0x44974d91,\n                          INT64_C(0x8824ad5b) << 32 | 0xa2b7289d,\n                          INT64_C(0x705495c7) << 32 | 0x2df1424b,\n                          INT64_C(0x9efc4947) << 32 | 0x5c6bfb31};\n  __m256i hash_data = _mm256_set1_epi32(hash);\n  hash_data = _mm256_mullo_epi32(rehash, hash_data);\n  // Shift all data right, reducing the hash values from 32 bits to five bits.\n  // Those five bits represent an index in [0, 32)\n  hash_data = _mm256_srli_epi32(hash_data, 32 - 5);\n  // Set a bit in each lane based on using the [0, 32) data as shift values.\n  return _mm256_sllv_epi32(ones, hash_data);\n}\n\nvoid add_hash(uint64_t hash, uint32_t num_buckets, __m256i filter[]) {\n  const uint64_t bucket_idx = block_index(hash, num_buckets);\n  const __m256i mask = make_mask(hash);\n  __m256i *bucket = &filter[bucket_idx];\n  // OR the mask into the existing bucket\n  _mm256_store_si256(bucket, _mm256_or_si256(*bucket, mask));\n}\n\n_Bool find_hash(uint64_t hash, uint32_t num_buckets, const __m256i filter[]) {\n  const uint64_t bucket_idx = block_index(hash, num_buckets);\n  const __m256i mask = make_mask(hash);\n  const __m256i *bucket = &filter[bucket_idx];\n  // Checks that all the bits in mask are also set in *bucket.  Scalar\n  // equivalent: (~bucket & mask) == 0\n  return _mm256_testc_si256(*bucket, mask);\n}\n\\end{verbatim}\n  \\end{framed}\n\\end{figure}\n\n\\end{document}\n\n%%  LocalWords:  Kudu\n", "meta": {"hexsha": "7bae206c3c3a59064da1ba28c38094dea9fb0c6a", "size": 7281, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doc.tex", "max_stars_repo_name": "tenzir/libfilter", "max_stars_repo_head_hexsha": "d0d23243f841cc625588afc16f2cacdfeb1cae56", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2020-08-04T15:13:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T18:51:09.000Z", "max_issues_repo_path": "doc/doc.tex", "max_issues_repo_name": "tenzir/libfilter", "max_issues_repo_head_hexsha": "d0d23243f841cc625588afc16f2cacdfeb1cae56", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-10-09T00:38:05.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-24T02:27:16.000Z", "max_forks_repo_path": "doc/doc.tex", "max_forks_repo_name": "tenzir/libfilter", "max_forks_repo_head_hexsha": "d0d23243f841cc625588afc16f2cacdfeb1cae56", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-03-31T04:34:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-22T19:30:34.000Z", "avg_line_length": 51.2746478873, "max_line_length": 336, "alphanum_fraction": 0.7306688642, "num_tokens": 2125, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5748989393000833}}
{"text": "\\documentclass[twocolumn]{article}\n\n\\title{Topic 13: Distributed Systems \\\\ \\Large{Blockchains}}\n\\author{Hilmar G\\'ustafsson}\n\\begin{document}\n\\maketitle\n\n\\section{Cryptographical Tools}\nA summary of the tools necessary for Blockchains.\n\n\\subsection{Hash}\nA hash function $H$ takes binary input of arbitrary length and produces fixed-length output.\n\\begin{center}\n $H: X = \\{0,1\\}^* \\rightarrow \\{0,1\\}^L$ \\\\ typically where $L \\in \\{128, 160, 256, 512\\}$\n\\end{center}\n\nFor security purposes, it is important that a small change in the input results in a large change in the output.\nCollisions exist, but a good hash function makes them hard to find.\n\n\\subsection{Signatures using Asymmetric Cryptography}\nDigital signatures are typically implemented using three algorithms and a known hash function $H$.\n\n\\paragraph{KeyGen} An algorithm which returns two keys: a private signing key, and a public verification key.\n\n\\paragraph{Sign} An algorithm which computes the signature of some input, given the private key.\nThis input is typically a hash of some data to be sent.\n\n\\paragraph{Verify} Decrypts the signature using the public key and compares the result with the hash of the received data.\n\n\\subsection{Merkle Trees}\nA complete binary tree of hashes, also known as a hash tree.\nIt is built starting from an initial set of symbols and exploits a hash function $H$ (e.g., SHA1, MD5).\nLeaves are $H$ applied to the initial symbols.\nInternal nodes are $H$ applied to the children of the node.\n\\section{Blockchain Basics}\nA blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Constantly growing as 'mined' blocks (the most recent transactions) are recorded and added to it in chronological order, it allows market participants to keep track of digital currency transactions without central recordkeeping\\footnote{Merkle tree}.\nThe Blockchain in essence is a collection of Blocks.\nEach Block is composed of the following elements: data, a hash pointer, and a timestamp.\n\\subsection{Hash Pointers}\nA hash pointer is a pointer to where some info is stored, together with a cryptographic hash of that info.\nIf we have a hash pointer, we can: (1) ask to get the info back, and (2) verify that it hasn't changed.\nThis is similar to how programs to download are often accompanied by a hash, to verify that you downloaded the correct file.\n\\subsection{Consensus}\nThe Blockchain network is in principle completely asynchronous and decentralized.\nIn terms of currency, the problem to solve is double-spending, i.e. being able to spend the same money more than once.\n\n\\paragraph{Nakamoto Consensus} Assuming that (1) most nodes are honest, and (2) adding a block is computationally expensive, we can ignore received blockchains which are shorter than ours, for two reasons: (1) they were created later than ours, and (2) they can be out of sync.\nThis assumes that the honest nodes have more CPU power than attacker nodes.\n\n\\subsection{Proof of Work}\nProvide a computational puzzle that is hard to solve when you want to add a valid block, but easy to check when you want to verify that a block is valid.\nSpecifically, the new block must be a combination of a nonce and a header.\nThe nonce is accepted when $H(\\mathtt{header})$, with the nonce included in the header, starts with $n$ zero bits.\n$n$ defines how hard it is to add a new block.\n\\paragraph{Example} Say the hashing algorithm is \\texttt{SHA256}, and $n = 36$. Each nonce has the probability $2^{-36}$ of success. The nonce must therefore be brute-forced, but verifying a candidate requires only one hash.\n
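\nA minimal sketch of this brute-force search (Python; difficulty expressed as $n$ leading zero bits of a SHA-256 digest, with the nonce simply appended to the header):\n\\begin{verbatim}\nimport hashlib\n\ndef mine(header, n):\n    # header: bytes.  Find a nonce such that SHA256(header || nonce)\n    # starts with n zero bits.  Expected number of attempts: 2**n.\n    nonce = 0\n    while True:\n        digest = hashlib.sha256(header + nonce.to_bytes(8, 'big'))\n        value = int.from_bytes(digest.digest(), 'big')\n        if value >> (256 - n) == 0:   # n leading zero bits\n            return nonce\n        nonce += 1\n\n# Verification, by contrast, recomputes a single hash for the\n# returned nonce and repeats the shift test above.\n\\end{verbatim}\n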
\n\n\\section{Blockchain Details}\n\\subsection{Transactions}\nA transaction contains the following data:\n\\begin{itemize}\n \\item A list of input transactions\n \\item A list of tuples of the recipient public key, and the amount to send\n \\item A personal signature, signed with the private key\n\\end{itemize}\n\n\\noindent\nIn order to verify a transaction, one must:\n\n\\begin{enumerate}\n \\item Verify the signature using the public key of the sender\n \\item Verify the signatures of each of the input transactions\n \\item Ensure that the money has not been spent between the input transactions and the new transaction\n\\end{enumerate}\n\nSince the transaction is signed using the private key, only the owner of the identity can transfer the money from that point.\nA single user can have an arbitrary number of identities.\n\n\\subsection{User Miners}\nA miner does the following:\n\\begin{enumerate}\n \\item Verify all the transactions by checking that input transactions are covered and properly signed\n \\item Compute the Merkle root hash for the transactions\n \\item Solve the puzzle on the previous block, for immutability\n \\item Broadcast the new header\n \\item Go on collecting new transactions for the next block\n\\end{enumerate}\n\nMiners receive compensation for computing the next block. However, it is estimated that we waste \\$15 million / day on energy to power the miners.\n\n\\subsection{Smart Contracts}\n\\paragraph{Definition} Computer protocols that facilitate, verify or enforce the negotiation or performance of a contract, or that make a contractual clause unnecessary. They define the rules and penalties around an agreement in the same way that a traditional contract does, but also automatically enforce those obligations; i.e., code is law.
\n\n\\paragraph{Ethereum} Contracts are the main building blocks of Ethereum, the second most popular blockchain.\nA contract is a program which lives inside the distributed Ethereum network and has its own balance of Ether\\footnote{Ether is the cryptocurrency / cryptofuel of Ethereum}, memory and code.\nEvery time you send a transaction to a contract, it executes its code, which can store data, send transactions and interact with other contracts.\nIn order to run the contract, you create a transaction of Ether to the contract, optionally with some input information.\nThe contract runs until it completes or runs out of Ether.\nThe Ether paid is awarded to the winning miner.\nEach miner runs the smart contract, and produces the same output.\nThe winning miner will publish the block to the rest of the network.\nOther miners validate that they get the same result.\nThe contracts may be compiled online using the Solidity Compiler\\footnote{https://remix.ethereum.org/}.\n\n\\section{Summary}\nWhen to use blockchains:\n\\begin{enumerate}\n \\item Are there multiple parties in the ecosystem?\n \\item Is establishing trust between all the parties an issue?\n \\item Is it critical to have a tamper-proof and permanent record of transactions?\n \\item Are we securing the ownership or management of a finite resource?\n \\item Does this ecosystem benefit from improved transparency?\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "fc0fdc098d9e092332880f792feeeb7007230b47", "size": 6566, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/7_sem/dis/8_blockchains.tex", "max_stars_repo_name": "nillas12/aau", "max_stars_repo_head_hexsha": "7cdfbf06986e1fec00a337776e6443b2b348f0ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/7_sem/dis/8_blockchains.tex", "max_issues_repo_name": "nillas12/aau", "max_issues_repo_head_hexsha": "7cdfbf06986e1fec00a337776e6443b2b348f0ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/7_sem/dis/8_blockchains.tex", "max_forks_repo_name": "nillas12/aau", "max_forks_repo_head_hexsha": "7cdfbf06986e1fec00a337776e6443b2b348f0ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.1764705882, "max_line_length": 345, "alphanum_fraction": 0.7904355772, "num_tokens": 1447, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5746450587597873}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[margin=1.0in]{geometry}\n\\linespread{1.3}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\begin{document}\n\n\\section*{2D Euler Fourier Transform}\n\nConsider 2D Euler:\n\\[\n\\mathbf{u}_t+\\mathbf{u}\\cdot\\nabla\\mathbf{u}=-\\nabla{p},\\qquad\\nabla\\cdot\\mathbf{u}=0\n\\]\n\\noindent where $\\mathbf{u}(\\mathbf{x},t)$ is a 2D vector field that is periodic in both dimensions and $p$ is pressure.\n\nTo find an expression for $p$ we take the divergence of both sides:\n\\begin{align*}\n    \\nabla\\cdot(-\\nabla{p})\n    &=\\nabla\\cdot(\\mathbf{u}_t+\\mathbf{u}\\cdot\\nabla\\mathbf{u})\\\\\n    &=\\nabla\\cdot\\mathbf{u}_t+\\nabla\\cdot(\\mathbf{u}\\cdot\\nabla\\mathbf{u})\n\\end{align*}\nSince $\\nabla\\cdot\\mathbf{u}=0$, we have:\n\\begin{align*}\n    \\nabla\\cdot\\begin{bmatrix}-\\frac{\\partial{p}}{\\partial{x}}\\\\-\\frac{\\partial{p}}{\\partial{y}}\\end{bmatrix}\n    =&\\nabla\\cdot\\begin{bmatrix}u^x\\frac{\\partial{u}^x}{\\partial{x}}+u^y\\frac{\\partial{u}^x}{\\partial{y}}\\\\\n    u^x\\frac{\\partial{u}^y}{\\partial{x}}+u^y\\frac{\\partial{u}^y}{\\partial{y}}\\end{bmatrix}\\\\\n    -\\left(\\frac{\\partial^2p}{\\partial{x}^2}+\\frac{\\partial^2p}{\\partial{y}^2}\\right)\n    =&\\frac{\\partial{u}^x}{\\partial{x}}\\frac{\\partial{u}^x}{\\partial{x}}+u^x\\frac{\\partial^2u^x}{\\partial{x}^2}+\\frac{\\partial{u}^y}{\\partial{x}}\\frac{\\partial{u}^x}{\\partial{y}}+u^y\\frac{\\partial^2u^x}{\\partial{x}\\partial{y}}\\\\\n    &+\\frac{\\partial{u}^x}{\\partial{y}}\\frac{\\partial{u}^y}{\\partial{x}}+u^x\\frac{\\partial^2u^y}{\\partial{x}\\partial{y}}+\\frac{\\partial{u}^y}{\\partial{y}}\\frac{\\partial{u}^y}{\\partial{y}}+u^y\\frac{\\partial^2u^y}{\\partial{y}^2}\n\\end{align*}\nNotice that\n\\[\nu^x\\frac{\\partial^2u^x}{\\partial{x}^2}+u^x\\frac{\\partial^2u^y}{\\partial{x}\\partial{y}}=u^x\\frac{\\partial}{\\partial{x}}(\\nabla\\cdot\\mathbf{u})=0\n\\]\nand\n\\[\nu^y\\frac{\\partial^2u^x}{\\partial{x}\\partial{y}}+u^y\\frac{\\partial^2u^y}{\\partial{y}^2}=u^y\\frac{\\partial}{\\partial{y}}(\\nabla\\cdot\\mathbf{u})=0.\n\\]\nSo,\n\\[\n    -\\left(\\frac{\\partial^2p}{\\partial{x}^2}+\\frac{\\partial^2p}{\\partial{y}^2}\\right)\n    =\\frac{\\partial{u}^x}{\\partial{x}}\\frac{\\partial{u}^x}{\\partial{x}}+\\frac{\\partial{u}^y}{\\partial{x}}\\frac{\\partial{u}^x}{\\partial{y}}+\\frac{\\partial{u}^x}{\\partial{y}}\\frac{\\partial{u}^y}{\\partial{x}}+\\frac{\\partial{u}^y}{\\partial{y}}\\frac{\\partial{u}^y}{\\partial{y}}.\n\\]\nSince $\\mathbf{u}$ is periodic, we can represent it as a Fourier series\n\\[\n\\mathbf{u}(\\mathbf{x},t)=\\sum_{\\mathbf{k}}\\mathbf{u_k}(t)e^{i\\mathbf{k\\cdot{x}}}\n\\]\nwhere $\\mathbf{k}$ is a 2D wavevector and we sum over all possible integer-valued wavevectors.\n
We can do the same for $p$:\n\\[\np=\\sum_{\\mathbf{k}}p_\\mathbf{k}(t)e^{i\\mathbf{k\\cdot{x}}}\n\\]\nNow we can write the left-hand side as\n\\begin{align*}\n -\\left(\\frac{\\partial^2p}{\\partial{x}^2}+\\frac{\\partial^2p}{\\partial{y}^2}\\right)=&-\\left(\\sum_\\mathbf{k}-k_x^2p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}+\\sum_\\mathbf{k}-k_y^2p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}\\right)\\\\\n =&\\sum_\\mathbf{k}(k_x^2+k_y^2)p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}\\\\\n =&\\sum_\\mathbf{k}|\\mathbf{k}|^2p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}\n\\end{align*}\nand the right-hand side as\n\\[\n\\left(\\sum_\\mathbf{p}ip_xu_\\mathbf{p}^xe^{i\\mathbf{p\\cdot{x}}}\\right)\\left(\\sum_\\mathbf{q}iq_xu_\\mathbf{q}^xe^{i\\mathbf{q\\cdot{x}}}\\right)\n+\\left(\\sum_\\mathbf{p}ip_xu_\\mathbf{p}^ye^{i\\mathbf{p\\cdot{x}}}\\right)\\left(\\sum_\\mathbf{q}iq_yu_\\mathbf{q}^xe^{i\\mathbf{q\\cdot{x}}}\\right)\\\\\n\\]\n\\[\n+\\left(\\sum_\\mathbf{p}ip_yu_\\mathbf{p}^xe^{i\\mathbf{p\\cdot{x}}}\\right)\\left(\\sum_\\mathbf{q}iq_xu_\\mathbf{q}^ye^{i\\mathbf{q\\cdot{x}}}\\right)\n+\\left(\\sum_\\mathbf{p}ip_yu_\\mathbf{p}^ye^{i\\mathbf{p\\cdot{x}}}\\right)\\left(\\sum_\\mathbf{q}iq_yu_\\mathbf{q}^ye^{i\\mathbf{q\\cdot{x}}}\\right)\\]\n\\begin{align*}\n&=-\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(p_xq_xu_\\mathbf{p}^xu_\\mathbf{q}^x+p_xq_yu_\\mathbf{p}^yu_\\mathbf{q}^x+p_yq_xu_\\mathbf{p}^xu_\\mathbf{q}^y+p_yq_yu_\\mathbf{p}^yu_\\mathbf{q}^y)e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=-\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{p}\\cdot{u}_\\mathbf{q})(\\mathbf{q}\\cdot{u}_\\mathbf{p})e^{i\\mathbf{k\\cdot{x}}}\n\\end{align*}\nSince $\\nabla\\cdot\\mathbf{u}=0$,\n\\begin{align*}\n0&=\\sum_\\mathbf{k}(ik_xu_\\mathbf{k}^x+ik_yu_\\mathbf{k}^y)e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=i\\sum_\\mathbf{k}(\\mathbf{k}\\cdot{u}_\\mathbf{k})e^{i\\mathbf{k\\cdot{x}}}\n\\end{align*}\nWe multiply by $e^{-i\\mathbf{j\\cdot{x}}}$ and integrate:\n\\begin{align*}\n0&=\\int^{2\\pi}_0i\\sum_\\mathbf{k}(\\mathbf{k}\\cdot{u}_\\mathbf{k})e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x}\\\\\n&=i(\\mathbf{j}\\cdot{u}_\\mathbf{j})\n\\end{align*}\nSince $i(\\mathbf{q}\\cdot{u}_\\mathbf{q})=0$ and $i\\neq0$, $\\mathbf{q}\\cdot{u}_\\mathbf{q}=0$. Therefore,  $\\mathbf{p}\\cdot{u}_\\mathbf{q}=\\mathbf{p}\\cdot{u}_\\mathbf{q}+\\mathbf{q}\\cdot{u}_\\mathbf{q}=\\mathbf{k}\\cdot{u}_\\mathbf{q}$ and, similarly, $\\mathbf{q}\\cdot{u}_\\mathbf{p}=\\mathbf{k}\\cdot{u}_\\mathbf{p}$. 
We can now rewrite our equation.\n\\[\n\\sum_\\mathbf{k}|\\mathbf{k}|^2p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}=-\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})e^{i\\mathbf{k\\cdot{x}}}\n\\]\nAgain, we multiply by $e^{-i\\mathbf{j\\cdot{x}}}$ and integrate:\n\\begin{align*}\n\\int^{2\\pi}_0\\sum_\\mathbf{k}|\\mathbf{k}|^2p_\\mathbf{k}e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x} &=-\\int^{2\\pi}_0\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x}\\\\\n|\\mathbf{j}|^2p_\\mathbf{j}&=-\\sum_\\mathbf{p+q=j}(\\mathbf{j}\\cdot{u}_\\mathbf{q})(\\mathbf{j}\\cdot{u}_\\mathbf{p})\n\\end{align*}\nSo we can express $p$ as\n\\[\np_\\mathbf{k}=-\\frac{1}{|\\mathbf{k}|^2}\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})\n\\]\nIf we return to the original equation, we can find a Fourier representation for each term.\n\\[\n\\mathbf{u}_t =\\begin{bmatrix}\\frac{\\partial{u^x}}{\\partial{t}}\\\\\\frac{\\partial{u^y}}{\\partial{t}}\\end{bmatrix}=\\sum_\\mathbf{k}\\frac{d\\mathbf{u_k}}{dt}e^{i\\mathbf{k\\cdot{x}}}\n\\]\n\\begin{align*}\n\\mathbf{u}\\cdot\\nabla\\mathbf{u}=\\begin{bmatrix}u^x\\frac{\\partial{u^x}}{\\partial{x}}+u^y\\frac{\\partial{u^x}}{\\partial{y}}\\\\u^x\\frac{\\partial{u^y}}{\\partial{x}}+u^y\\frac{\\partial{u^y}}{\\partial{y}}\\end{bmatrix}&=\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}\\begin{bmatrix}iq_xu_\\mathbf{p}^xu_\\mathbf{q}^x+iq_yu_\\mathbf{p}^yu_\\mathbf{q}^x\\\\iq_xu_\\mathbf{p}^xu_\\mathbf{q}^y+iq_yu_\\mathbf{p}^yu_\\mathbf{q}^y\\end{bmatrix}e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=i\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{q}\\cdot{u}_\\mathbf{p})u_\\mathbf{q}e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=i\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{p})u_\\mathbf{q}e^{i\\mathbf{k\\cdot{x}}}\n\\end{align*}\n\\begin{align*}\n-\\nabla{p}=-\\begin{bmatrix}\\frac{\\partial{p}}{\\partial{x}}\\\\\\frac{\\partial{p}}{\\partial{y}}\\end{bmatrix}&=-\\sum_\\mathbf{k}\\begin{bmatrix}ik_xp_\\mathbf{k}\\\\ik_yp_\\mathbf{k}\\end{bmatrix}e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=-i\\sum_\\mathbf{k}\\mathbf{k}p_\\mathbf{k}e^{i\\mathbf{k\\cdot{x}}}\\\\\n&=i\\sum_\\mathbf{k}\\frac{\\mathbf{k}}{|\\mathbf{k}|^2}\\left(\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})\\right)e^{i\\mathbf{k\\cdot{x}}}\n\\end{align*}\nWe plug these into the original equation, and then we multiply by $e^{-i\\mathbf{j\\cdot{x}}}$ and integrate.\n\\begin{align*}\n&\\int^{2\\pi}_0\\sum_\\mathbf{k}\\frac{d\\mathbf{u_k}}{dt}e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x}&\\\\\n&+\\int^{2\\pi}_0i\\sum_\\mathbf{k}\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{p})u_\\mathbf{q}e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x}&=\\int^{2\\pi}_0i\\sum_\\mathbf{k}\\frac{\\mathbf{k}}{|\\mathbf{k}|^2}\\left(\\sum_\\mathbf{p+q=k}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})\\right)e^{i(\\mathbf{k-j})\\cdot\\mathbf{x}}d\\mathbf{x}\\\\\n\\end{align*}\n\\[\n\\frac{d\\mathbf{u_j}}{dt}+i\\sum_\\mathbf{p+q=j}(\\mathbf{j}\\cdot{u}_\\mathbf{p})u_\\mathbf{q} = i\\frac{\\mathbf{j}}{|\\mathbf{j}|^2}\\sum_\\mathbf{p+q=j}(\\mathbf{j}\\cdot{u}_\\mathbf{q})(\\mathbf{j}\\cdot{u}_\\mathbf{p})\n\\]\nSo we have our final Fourier transform\n\\[\n\\frac{d\\mathbf{u_k}}{dt}=-i\\sum_\\mathbf{p+q=k}\\left((\\mathbf{k}\\cdot{u}_\\mathbf{p})u_\\mathbf{q}-\\frac{\\mathbf{k}}{|\\mathbf{k}|^2}(\\mathbf{k}\\cdot{u}_\\mathbf{q})(\\mathbf{k}\\cdot{u}_\\mathbf{p})\\right)\n\\]\nwhich we can represent more clearly using a matrix $A$ as\n\\[\n\\frac{d\\mathbf{u_k}}{dt}=-i\\sum_\\mathbf{p+q=k}\\mathbf{k}\\cdot{u}_\\mathbf{p}A_\\mathbf{k}u_\\mathbf{q}\n\\]\nwhere $A_\\mathbf{k}=I-\\frac{\\mathbf{kk}^T}{|\\mathbf{k}|^2}$.
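\n\nAs an illustration (not part of the derivation), this right-hand side can be evaluated pseudo-spectrally with FFTs rather than by the explicit convolution sum; the following is a minimal numpy sketch, ignoring dealiasing and normalisation conventions:\n\\begin{verbatim}\nimport numpy as np\n\ndef euler_rhs(u_hat):\n    # u_hat: shape (2, N, N), Fourier coefficients of the velocity\n    # (numpy fft2 convention).  Returns du_hat/dt.\n    N = u_hat.shape[-1]\n    k = np.fft.fftfreq(N, d=1.0/N)       # integer wavevectors\n    kx, ky = np.meshgrid(k, k, indexing='ij')\n    k2 = kx**2 + ky**2\n    k2[0, 0] = 1.0                       # avoid 0/0 at the mean mode\n    # advection term (u . grad) u, computed in physical space\n    u = np.fft.ifft2(u_hat, axes=(1, 2))\n    dudx = np.fft.ifft2(1j * kx * u_hat, axes=(1, 2))\n    dudy = np.fft.ifft2(1j * ky * u_hat, axes=(1, 2))\n    adv = np.fft.fft2(u[0] * dudx + u[1] * dudy, axes=(1, 2))\n    # project with A_k = I - k k^T / |k|^2 and flip the sign\n    k_dot_adv = (kx * adv[0] + ky * adv[1]) / k2\n    return -(adv - np.stack([kx, ky]) * k_dot_adv)\n\\end{verbatim}\n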
\n\\end{document}\n", "meta": {"hexsha": "6a172ffdecb1bf33f2aff7d97ec143d8609788ad", "size": 8269, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2DEuler/2DEulerFT.tex", "max_stars_repo_name": "brekmeuris/Renormalized_Mori_Zwanzig", "max_stars_repo_head_hexsha": "fdd66c25d7d59ff76099ab67b8cc2a8337dcfec3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-26T23:12:07.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-26T23:12:07.000Z", "max_issues_repo_path": "2DEuler/2DEulerFT.tex", "max_issues_repo_name": "brekmeuris/Renormalized_Mori_Zwanzig", "max_issues_repo_head_hexsha": "fdd66c25d7d59ff76099ab67b8cc2a8337dcfec3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2DEuler/2DEulerFT.tex", "max_forks_repo_name": "brekmeuris/Renormalized_Mori_Zwanzig", "max_forks_repo_head_hexsha": "fdd66c25d7d59ff76099ab67b8cc2a8337dcfec3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-09-18T16:43:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-26T01:03:47.000Z", "avg_line_length": 65.626984127, "max_line_length": 429, "alphanum_fraction": 0.6531624138, "num_tokens": 3798, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5746450552846362}}
{"text": "% !TEX root = altosaar-2020-thesis.tex\n\n% \\appendix\n\\label{sec:appendix}\n\\section{Probabilistic inference, data, and posterior inference}\n\nIn machine learning, the posterior distribution of random variables $\\mbz$ conditional on data or observations $\\mbx$, $p(\\mbz \\mid \\mbx)$, is a central quantity of interest. However, in applying machine learning to physics, we must specify the concepts of random variables and observations. In the Ising model, random variables are spins, and there is no straightforward concept of observed data. Study of the Boltzmann distribution of the Ising model is probabilistic inference, not posterior inference.\n\nIn the Ising model, there are no datapoints $\\mbx$; there are only latent variables $\\mbz$. Sampling from the model using Monte Carlo would yield only samples of system configurations, not data. In this model, the data can be defined as the empty set $\\mbx = \\{\\}$, so the model evidence $\\int p(\\mbx, \\mbz)d\\mbz$ is simply the partition function $Z = \\int p(\\mbz) d\\mbz$. An example of an Ising-like model that does include observed data is a Markov random field for image denoising~\\citep{geman1984stochastic}. In this case, every spin or random variable is connected to a pixel of an image. The concept of data can be included in the Ising model in terms of an external magnetic field.\n\n% \\PP probability models are common to both physics and machine learning.\n\n% Probability models are common in physics and machine learning. The Ising model is a statistical physics model of ferromagnetism, and is an example of an undirected graphical model in machine learning. A probability model assigns a probability to a configuration of the system in a certain condition, such as at a specific temperature, or conditional on observed data.\n\n% \\PP example; boltzmann\n\n% In statistical physics, an example of a probability model is the Boltzmann distribution,\n% \\begin{equation}\n% \\label{eq:boltzmann}\n% p(\\mbz; \\beta) = \\frac{\\exp(-\\beta H(\\mbz))}{Z}\\, ,\n% \\end{equation}\n% where $H$ is the energy function, $\\mbz$ are random variables of the system, such as spins, $Z$ is the partition function, and the inverse thermodynamic temperature $\\beta$ is a parameter of the probability model (denoted with a semicolon). In machine learning terms, $\\mbz$ are latent variables and are unobserved. (We describe the relationship between data and latent variables in \\Cref{sec:appendix}.)\n\n% \\PP the goal of probabilistic inference is to infer a partition function; a\n% likelihood; a conditional distribution\n\n% Probabilistic inference consists of computing statistical quantities associated with a probability model. Examples of probabilistic inference targets include the partition function; the probability of observing a certain configuration of the system; or, the conditional probability distribution of a probability model after observing data. The Boltzmann distribution, \\Cref{eq:boltzmann} can be used to calculate physical quantities, such as specific heat, that can be compared to experimental values and understand the behavior of physical systems.\n\n% \\PP probabilistic inference is hard because of the intractable partition\n% function\n\n% Inference in a probability model is challenging. A task such as calculating a probability requires calculating a normalization constant or partition function, which includes a sum over all configurations of the system. 
For models with many random variables, enumerating the number of configurations is intractable. The Boltzmann distribution cannot be studied directly as the partition function cannot be computed for most systems. The partition function is a sum over an exponentially-large number of system configurations,\n\n% \\begin{align}\n% Z &= \\sum_{\\mbz} \\exp(-\\beta H(\\mbz))\\, .\n% \\end{align}\n% For example, in an Ising model, spins are in two states $z = \\{-1, 1\\}$. In an Ising model with $N$ spins, there are $2^N$ terms in the partition function, which cannot be computed in a reasonable time.\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End:", "meta": {"hexsha": "21c6f302d30d65ffab5d1a1ce375b1614c2769cf", "size": 4071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch-hvm/sec_appendix.tex", "max_stars_repo_name": "altosaar/thesis", "max_stars_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-05-21T18:56:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-26T12:18:53.000Z", "max_issues_repo_path": "ch-hvm/sec_appendix.tex", "max_issues_repo_name": "altosaar/thesis", "max_issues_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch-hvm/sec_appendix.tex", "max_forks_repo_name": "altosaar/thesis", "max_forks_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.9285714286, "max_line_length": 703, "alphanum_fraction": 0.7808892164, "num_tokens": 928, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7090191337850932, "lm_q1q2_score": 0.5746450537765196}}
{"text": "\\section{Gaussian Process Model}\\label{sec:gpmodel}\nA GP can be applied as a probabilistic model to a regression problem.  Here we use the GP model to generalise a stellar model grid to a continuous and probabilistic function that maps inputs to observable quantities.   This will allow us to predict observable quantities that are off (or even outside) the grid of stellar models, with the prediction including uncertainty. \n%\nAs mentioned in Section~\\ref{sec:grid}, our model grid has four independent fundamental inputs, i.e., mass, initial metallicity, initial helium fraction, and mixing-length parameter. Adding the {\\it EEP}, which describes the evolutionary phase, the GP model has five inputs.  We intend to train GP models that predict effective temperature, surface gravity, radius, surface metallicity, and stellar age. We summarise the GP model inputs and outputs below.\n\nGP model inputs and their dynamic ranges:\n\\begin{itemize}\n\\item[] Mass ($M$ = 0.8 -- 1.2$\\rm M_{\\odot}$)\n\\item[] Equivalent Evolutionary Phase ({\\it EEP} = 0 -- 1)\n\\item[] Initial metallicity ([Fe/H]$_{\\rm init}$ =  -0.5 -- 0.5 dex)\n\\item[] Initial helium fraction ($Y_{\\rm init}$ = 0.24 -- 0.32)\n\\item[] Mixing-length parameter ($\\alpha_{\\rm MLT}$ = 1.7 -- 2.5)\n\\end{itemize}\n\nAnd the GP model outputs: \n\\begin{itemize}\n\\item[] Effective temperature ($T_{\\rm eff}$) \n\\item[] Surface gravity ($\\log g$)\n\\item[] Radius ($R$)\n%\\item[] The large spacing ($\\Delta\\nu$)  \n\\item[] Surface metallicity ([Fe/H]$_{\\rm surf}$)\n\\item[] Stellar age ($\\tau$)\n\\end{itemize}\n\nWe aim to use the GP model as a non-parametric emulator, that is, emulating the comparatively slow calls to models of stellar evolution.\nThis emulator can be described as a function approximation problem where the function is\n\\begin{equation}\\label{gprmodel}\n{\\rm Outputs} = f(M, EEP, ({\\rm Fe/H})_{\\rm init}, Y_{\\rm init}, \\alpha_{\\rm MLT}). \n\\end{equation}\nIn fact, the way we have implemented the GP as function approximation means that we use one independent GP for each of the outputs, so that\n\\begin{equation}\\label{gprmodel1}\n{T_{\\rm eff}} = f(M, EEP, ({\\rm Fe/H})_{\\rm init}, Y_{\\rm init}, \\alpha_{\\rm MLT}),\n\\end{equation}\nand\n\\begin{equation}\\label{gprmodel2}\n{R} = g(M, EEP, ({\\rm Fe/H})_{\\rm init}, Y_{\\rm init}, \\alpha_{\\rm MLT}),\n\\end{equation}\nand so on for all outputs.\n
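\nSchematically, this amounts to maintaining one independent emulator per output label.  The sketch below (Python, using scikit-learn purely for brevity; our actual implementation, described in Section~\\ref{sec:set_up}, uses \\textsc{GPyTorch}, and the training arrays are assumed to exist) illustrates the idea:\n\\begin{verbatim}\nfrom sklearn.gaussian_process import GaussianProcessRegressor\n\n# X_train: (N, 5) array of (M, EEP, [Fe/H]_init, Y_init, alpha_MLT);\n# y_train: dict of length-N label arrays, one entry per output.\noutputs = ['Teff', 'logg', 'radius', 'feh_surf', 'age']\nemulators = {}\nfor name in outputs:\n    gp = GaussianProcessRegressor()   # kernel choices discussed below\n    gp.fit(X_train, y_train[name])\n    emulators[name] = gp\n\n# Off-grid prediction (mean and uncertainty) for a single star:\nx_star = [[1.02, 0.55, 0.10, 0.27, 2.0]]\nmean, std = emulators['Teff'].predict(x_star, return_std=True)\n\\end{verbatim}\n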
\n\\subsection{Gaussian Process Application}\n\nIn our application to grids of stellar models,  a GP has a number of desirable properties.  While a GP is a stochastic process, the distribution of a GP can be considered as a distribution over functions with a continuous domain.  In fact,  the marginal likelihood considered in function space is equal to the likelihood of the data given some function values,  multiplied by the prior on those function values,  marginalised over all function values \\cite{williams1996gaussian}.  That is to say, the GP allows for the analytical evaluation of a fit over many different functions (perhaps an infinite number) weighted by some concept of a prior and the agreement with the data.   In addition,  while the marginal likelihood will be assessed on discrete data,  predictions can be made using linear algebra for new data in the continuous domain, but crucially again marginalised over these many different functional forms.  It is possible to see how this might be useful for generalising (or emulating or augmenting) a discrete grid of stellar models in order to obtain predictions in the continuous domain.\n\nIn this section we will look at the mathematics required to implement a GP for our application to grids of stellar models.  We start with a series of definitions before dealing with the marginal likelihood and the posterior predictive distributions. \n\nWe start with a grid of stellar models containing $N$ models with a label we want to learn, for example model effective temperature, which we will denote with the general symbol $\\bf y$, and a set of on-grid inputs $\\bf X$ (e.g., mass,  EEP,  metallicity,  ...).  We can use a GP to make predictions of the effective temperature (labelled $y$) for additional off-grid input values given by $\\bf X_{\\star}$.  The vector $\\bf y$ is arranged ${\\bf y} = \\left(y_{1}, ... ,y_{N} \\right)^{T}$ where the subscript label references the stellar model.  The inputs are arranged into an $N \\times D$ matrix, where $D$ is the number of input dimensions (e.g., $D=3$ for mass, EEP, and metallicity), so that ${\\bf X} = ({\\bf x}_{1}, ..., {\\bf x}_{N})^{T}$ where ${\\bf x_{i}} = (x_{1, i}, ..., x_{D, i})^{T}$.  The matrix of additional inputs $\\bf X_{\\star}$ has the same form as $\\bf X$ but size $N_{\\star} \\times D$.\n\n\\citet{williams1996gaussian}, on which our description below is based,  define a GP as a collection of random variables, any finite number of which have a joint Gaussian distribution.  In general terms,  a GP may be written so that our on-grid labels are random variables drawn from our GP distribution, \n\\begin{equation}\n{\\bf y}({\\bf X}) \\sim \\mathcal{GP}\\left( m({\\bf X}),  {\\bf \\Sigma}\\right),\n\\end{equation}\nwhere $m({\\bf X})$ is some mean function, and $\\bf \\Sigma$ is some covariance matrix.  The mean function controls the deterministic part of the regression and the covariance function controls the stochastic part.  The mean function defined here could be any deterministic function and we will label its additional parameters, or hyperparameters, $\\phi$.  Each element of the more familiar covariance matrix is defined by the covariance function or {\\it kernel function} $\\bf K$, which has hyperparameters $\\theta$ and is given by,\n\\begin{equation}\n{\\bf \\Sigma} = {\\bf K}({\\bf X}, {\\bf X},  \\theta),\n\\end{equation}\nor \n\\begin{equation}\n{\\bf \\Sigma}_{n, m} = k({\\bf X}_{n}, {\\bf X}_{m},  \\theta),\n\\end{equation}\nwhere the inputs ${\\bf X}_{n}$ and ${\\bf X}_{m}$ are $D$-dimensional vectors and the output is a scalar covariance.\nIn addition to the covariance defined by the kernel function, we include additional white noise in the covariance matrix by adding an identity matrix $\\mathcal{I}$ multiplied by a scalar value $\\sigma_{w}^2$, so that, \n\\begin{equation}\n{\\bf \\Sigma} = {\\bf K}({\\bf X}, {\\bf X},  \\theta) + \\sigma_{w}^{2} \\mathcal{I},\n\\end{equation}\nwhere $\\sigma_{w}^2$ is another hyperparameter to be learnt in training.
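\n\nAs a concrete illustration of this construction, the following numpy sketch builds $\\bf \\Sigma$ for a squared-exponential kernel (chosen here purely as an example; the kernels we actually test are listed in Section~\\ref{sec:set_up}):\n\\begin{verbatim}\nimport numpy as np\n\ndef rbf_kernel(X1, X2, amplitude, lengthscale):\n    # k(x, x') = amplitude^2 * exp(-|x - x'|^2 / (2 * lengthscale^2))\n    sq_dists = ((X1[:, None, :] - X2[None, :, :])**2).sum(-1)\n    return amplitude**2 * np.exp(-0.5 * sq_dists / lengthscale**2)\n\ndef covariance(X, theta, sigma_w):\n    # Sigma = K(X, X, theta) + sigma_w^2 * I, as defined above.\n    amplitude, lengthscale = theta\n    K = rbf_kernel(X, X, amplitude, lengthscale)\n    return K + sigma_w**2 * np.eye(len(X))\n\\end{verbatim}\n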
\n\n\\subsubsection{The likelihood}\nConceptually we value the GP because of its ability to marginalise over many functions $\\bf f$ and return a marginal likelihood,\n\\begin{equation}\np({\\bf y} | {\\bf X}) = \\int p({\\bf y} | {\\bf f}, {\\bf X}) p({\\bf f} | {\\bf X}) \\, {\\rm d}{\\bf f},\n\\end{equation}\nnoting that this function-space marginal likelihood is weighted by the probability of the data given the function and the probability of the function.  This integral could be evaluated directly.  However, by noting that a GP is a collection of random variables, any finite number of which have a joint Gaussian distribution, the marginal probability of our data $\\bf y$ is also the joint likelihood of a multivariate normal distribution,\n\\begin{equation}\np({\\bf y} | {\\bf X}) = \\mathcal{N}(m({\\bf X}), {\\bf \\Sigma}),\n\\end{equation}\nwhich can be straightforward to evaluate.  Thus the marginal likelihood is,\n\\begin{equation}\np({\\bf y} | {\\bf X}) = (2 \\pi)^{-N/2} \\, {\\rm det} ({\\bf \\Sigma})^{-1/2} \\exp \\left(-\\frac{1}{2} ({\\bf y} - m({\\bf X}))^{T} \\, {\\bf \\Sigma}^{-1} \\, ({\\bf y} - m({\\bf X})) \\right),\n\\end{equation}\nwhich can be evaluated without integrating over all possible function space.   While this marginal likelihood expression is clearly more computationally feasible than the integral over function space, it is not without its limitations.  Because it is necessary to calculate the determinant and the inverse of the covariance matrix, typically applied algorithms make this an $\\mathcal{O}(N^3)$ or $\\mathcal{O}(N^2 \\log N)$ operation.  This naturally limits the size of the data set for which the likelihood,  and optimisations of the likelihood, can be applied.  \n\n\\subsubsection{Making predictions}\nIf we want to obtain predictive distributions for the output $\\bf y_{\\star}$ given the inputs $\\bf X_{\\star}$,  the joint probability distribution of $\\bf y$ and $\\bf y_{\\star}$ is Gaussian and given by\n\\begin{equation}\np \\left( \\begin{bmatrix} {\\bf y} \\\\ {\\bf y_{\\star}} \\end{bmatrix} \\right) = \\mathcal{N} \\left( \\begin{bmatrix} m({\\bf X}) \\\\ m({\\bf X_{\\star}}) \\end{bmatrix} , \\begin{bmatrix} {\\bf \\Sigma} & {\\bf K_{\\star} }\\\\ {\\bf K_{\\star}}^{T} & {\\bf K_{\\star \\star}} \\end{bmatrix}  \\right), \n\\end{equation}\nwhere the covariance matrices $\\bf \\Sigma$ and $\\bf K$ are computed using the kernel function so that,\n\n\\begin{equation}\n{\\bf \\Sigma}_{n, m} = k({\\bf X}_{n}, \\, {\\bf X}_{m}),\n\\end{equation}\nwhich is an $N \\times N$ matrix,\n\\begin{equation}\n{\\bf K}_{\\star \\, n, m} = k({\\bf X}_{n}, \\, { \\bf X}_{\\star \\, m}),\n\\end{equation}\nwhich is an $N \\times N_{\\star}$ matrix, and finally\n\\begin{equation}\n{\\bf K}_{\\star \\star \\, n, m} = k({\\bf X}_{\\star \\, n},  {\\bf X}_{\\star \\,m}),\n\\end{equation}\nwhich is an $N_{\\star} \\times N_{\\star}$ matrix.\nThe predictions of $\\bf y_{\\star}$ are again a Gaussian distribution so that,\n\\begin{equation}\n{\\bf y}_{\\star} \\sim \\mathcal{N}(\\bf \\hat{y}_{\\star}, \\, \\bf C),\n\\label{eq:pred}\n\\end{equation}\nwhere \n\\begin{equation}\n{\\bf \\hat{y}}_{\\star} = m({\\bf X}_{\\star}) + {\\bf K}_{\\star}^{T} \\, {\\bf \\Sigma}^{-1} \\, ({\\bf y} - m(\\bf X)),\n\\end{equation}\nand \n\\begin{equation}\n{\\bf C} = {\\bf K}_{\\star \\star} - {\\bf K}_{\\star}^{T} \\, {\\bf \\Sigma}^{-1} \\, {\\bf K_{\\star}}.\n\\end{equation}\n\nAt this point we can make predictions of model properties given a grid of stellar models using equation~\\ref{eq:pred}.\n
But these predictions will likely be poor unless we select sensible forms and hyperparameter values for the mean function and covariance function.  In the following section we detail a number of kernel functions that will be tested against the data.  We will then discuss the method for determining the values of the hyperparameters to be used.\n\n\n\subsection{Setting up the GP Model}\label{sec:set_up}\n\nWe adopt \textsc{GPyTorch}, a GP framework developed by \citet{gardner2018gpytorch}. It is a Gaussian process library built on the open-source machine-learning framework PyTorch (\url{https://pytorch.org}). The package provides significant GPU acceleration, state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility, and easy integration with deep learning frameworks. Source code and detailed introductions can be found at \url{https://gpytorch.ai}.\n\nTo train GP models, we need to set up the mean function, kernel function, likelihood function, loss function, and optimiser. \nTo find the best option for each of the above elements, we run a number of preliminary tests. \nThese tests are carried out with 2-dimensional (2D) GP models which map $M$ and {\it EEP} to the five observable outputs (Outputs = $f(M, EEP)$). Training data are selected from the primary grid with fixed [Fe/H]$_{\rm init}$ (0.0), $Y_{\rm init}$ (0.28), and $\alpha_{\rm MLT}$ (2.1). There are 41 evolutionary tracks which contain 24,257 data points. We also compute 44 evolutionary tracks with the same [Fe/H]$_{\rm init}$, $Y_{\rm init}$, and $\alpha_{\rm MLT}$ but random $M$ for the purposes of validation and testing. We use the method mentioned in Section \ref{sec:selection} and sample 20,000 data points for training, 10,000 for validation, and 10,000 for testing. \n%\nWe start with the \textsc{Simple GP Regression} example (\url{https://docs.gpytorch.ai/en/stable/examples/01_Exact_GPs/Simple_GP_Regression.html}) and develop our own method step by step as follows. \n\n\subsubsection{Mean Function}\n\nWe first investigate the mean function. As discussed above, the data distribution is generally smooth but complex in some regions of parameter space (e.g., the sub-giant hook).  Although a GP model can cope with different mean functions because of the flexibility of kernels, we find that using a constant or a linear mean function leads to long training times. For this reason, we apply a neural-network mean function, which is flexible enough to manage both simple and complex features; a sketch is given below. We adopt an architecture including 6 hidden layers and 128 nodes per layer. All layers apply a linear transformation to the incoming data. We apply the element-wise exponential linear unit (ELU) activation function because it gives relatively smooth mean functions. ({\bf Can Guy or Alex add some references for NN and ELU here? The referee may ask why the 6x128 architecture is sufficient, so should we mention Alex's paper?})\n\n\subsubsection{Likelihood and Loss Function}\n\nWe then trained the model using the negative natural logarithm of the likelihood function above as the loss function.  Our training dataset is a theoretical model grid,  hence there is no observed uncertainty for each data point, but a tiny random uncertainty exists due to the approximations in the \textsc{MESA} numerical method. We model this using $\sigma_{w}$.  This noise model is then assumed to be a Gaussian function with a very small variance.  
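\n\nPutting these pieces together, the following is a minimal \textsc{GPyTorch} sketch of such a model (a simplification for illustration only: the class names are our own, and just 2 hidden layers are shown rather than the $6 \times 128$ architecture described above).\n\begin{verbatim}\nimport torch\nimport gpytorch\n\nclass NNMean(gpytorch.means.Mean):\n    # Neural-network mean; 2 hidden layers shown for brevity.\n    def __init__(self, dim_in, width=128):\n        super().__init__()\n        self.net = torch.nn.Sequential(\n            torch.nn.Linear(dim_in, width), torch.nn.ELU(),\n            torch.nn.Linear(width, width), torch.nn.ELU(),\n            torch.nn.Linear(width, 1))\n\n    def forward(self, x):\n        return self.net(x).squeeze(-1)\n\nclass GridGP(gpytorch.models.ExactGP):\n    def __init__(self, train_x, train_y, likelihood):\n        super().__init__(train_x, train_y, likelihood)\n        self.mean_module = NNMean(dim_in=train_x.shape[-1])\n        self.covar_module = gpytorch.kernels.ScaleKernel(\n            gpytorch.kernels.MaternKernel(nu=1.5))\n\n    def forward(self, x):\n        return gpytorch.distributions.MultivariateNormal(\n            self.mean_module(x), self.covar_module(x))\n\n# Loss: the negative marginal log likelihood derived above, e.g.\n# mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n# loss = -mll(model(train_x), train_y)\n\end{verbatim}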
\n%   \nA likelihood specifies the mapping from input values $f(X)$ to observed labels $y$.\nWe adopt the standard likelihood for regression, which assumes a homoskedastic noise model whose conditional distribution is\n\begin{equation}\label{eq:likelihood}\ny = f(x) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^{2}),\n\end{equation}\nwhere $\sigma$ is a noise parameter. \n%\nWe first used a small, fixed noise parameter and ran a few tests.  However, a strict noise parameter makes GP models hard to converge. When this noise parameter is set free, it reduces to a small value anyway during training because it is data-driven.  For this reason, we did not place a strict constraint or a prior on this noise parameter.  In practice, we only set a loose upper limit ($\sigma$  < 0.1) to speed up the training. It should be noted that a GP model with a large noise parameter is not a proper description of the stellar grid. Because of this, we only adopt GP models with $\sigma$ $\lesssim 10^{-4}$.   \n\n\subsubsection{Optimiser}\n\nWe compared two optimisers, SGD and Adam. Here SGD refers to Stochastic Gradient Descent, and Adam combines the advantages of two other extensions of stochastic gradient descent, namely the Adaptive Gradient Algorithm and Root Mean Square Propagation. \n%\nThe SGD optimiser in the \textsc{GPyTorch} package follows the formulation given by \citet{sutskever2013importance}. The formulation makes it possible to train using stochastic gradient descent with momentum thanks to a well-designed random initialisation and a particular type of slowly increasing schedule for the momentum parameter. The application of momentum in SGD improves its efficiency and makes it less likely to get stuck in local minima. On the other hand, the Adam optimiser includes the 'AMSGrad' variant developed by \citet{47409} to address its weakness in converging to an optimal solution. With these developments, the two optimisers give very similar results. We finally choose Adam because it works relatively efficiently and stably.  \n%\nWe adapt the learning rate during the training process. Our training starts with a learning rate of 0.01, which decreases by a factor of 2 when the loss value has not reduced in the previous 100 iterations.    \n\n\subsubsection{Early Stopping}\n\nWe set up early stopping to avoid overfitting and to determine when to terminate a training run. \nWhen an optimal solution is found, the validation error stops decreasing, and it then increases when overfitting occurs. \nWe hence track the validation error at every iteration and terminate the training process when the validation error has not decreased in the previous 300 iterations. \t\n\n\subsubsection{Kernel Function}\n\nWith the above set-up, we test different kernel functions. We consider four basic kernels, as listed below:\n\begin{itemize}\n\item RBF: Radial Basis Function kernel (also known as the squared exponential kernel)\n\item RQ: Rational Quadratic kernel (equivalent to adding together many RBF kernels with different lengthscales)\n\item Mat12: Matern 1/2 kernel (equivalent to the Exponential kernel)\n\item Mat32: Matern 3/2 kernel \n\end{itemize}\nWe apply each basic kernel and a couple of combined kernels (RBF + Mat12, RQ + Mat12, Mat32 + Mat12, RBF + Mat32, RQ + Mat32) to train and validate GP models for each output. 
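\n\nFor concreteness, such kernels and the noise bound can be composed in \textsc{GPyTorch} along the following lines (a sketch only; the variable names are ours, and the bound shown corresponds to the loose $\sigma < 0.1$ limit above, noting that \textsc{GPyTorch} parametrises the noise as the variance $\sigma^{2}$).\n\begin{verbatim}\nimport gpytorch\n\n# Combined kernel: RBF + Matern-1/2, each with its own output scale.\nrbf_plus_mat12 = (\n    gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())\n    + gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=0.5)))\n\n# Matern-3/2 kernel.\nmat32 = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=1.5))\n\n# Gaussian likelihood with a loose upper bound on the noise variance.\nlikelihood = gpytorch.likelihoods.GaussianLikelihood(\n    noise_constraint=gpytorch.constraints.Interval(1e-8, 0.1**2))\n\end{verbatim}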
Among these kernels, the combined kernel RBF+Mat12 gives the best fit to the training data; however, it does not give the best predictions for the validation and testing data. On the other hand, the Mat32 kernel fits the training data reasonably well and gives the best predictions for the validation and testing data. \n%\nComparing the two kernels, the RBF+Mat12 kernel is a combination of a smooth and a spiky kernel, which gives it sufficient flexibility to fit features of the data. However, the spiky component makes it less accurate in off-grid regions. \n%\nThe smoothness of the Mat32 kernel is somewhere between a spiky kernel (Mat12) and a smooth kernel (RBF). It does not exactly fit some sharp features, but it maps the off-grid region with better continuity. We hence adopt the Mat32 kernel for the training. \n\n\subsubsection{Training Procedure}\n\nWe lastly introduce the procedure for training a GP model. The full process includes training, validation, and testing. \nIn the training process, we iteratively optimise the hyperparameters of a GP model to learn the underlying function which maps inputs to outputs from on-grid evolutionary tracks (the training dataset). In each iteration, the GP model is validated by comparing true and GP-predicted values for off-grid tracks (the validation dataset). Although the validation dataset is not directly involved in training the hyperparameters, it still shapes the GP model to some extent because the optimal solution is the one that best fits the validation dataset. For this reason, the validation dataset does not give a completely independent validation of a GP model. \n%\nWe hence include a testing process at the end for the purpose of estimating the systematic uncertainty of the GP. We reserve half of the off-grid tracks as the testing dataset and quantify a GP model using the testing error (true value $-$ GP prediction).\n\n\n\section{Preliminary Studies on Low-Dimensional Problems}\label{examples}\n\nIn this section, we present GP applications to 1-, 2-, and 3-dimensional (1D, 2D, 3D) problems. We also investigate two key questions before training on the 5-dimensional (5D) stellar grid: the testing method and the strategy for large samples. \n\n\subsection{1D Problem}\n\nWe first demonstrate an example of a GP application to a 1D problem. We train a GP model to learn the evolution of effective temperature for a $\rm 1.1M_{\odot}$ track. We split the data points on this track into training and testing data in a 70-to-30 ratio. We train a GP model which maps  {\it EEP} to effective temperature. We then compare true and GP-predicted effective temperatures for the testing data. As can be seen in Figure \ref{fig:1dgp},  the GP model gives very accurate predictions: residuals are within $\pm$0.5 K. As a comparison, we fit the training data with a quadratic function and use it to predict the testing data. We find that the two methods give very similar results. This indicates that a GP can be an alternative to classical interpolators for the 1D problem. \n\n\begin{figure}\n\t\includegraphics[width=1.0\columnwidth]{1d-gp.pdf}\n    \caption{GP application to a 1D problem. Models on this track are split into training and testing data in a 70-to-30 ratio. Top: the evolution of effective temperature for a $\rm 1.1M_{\odot}$ track. The grey line is the evolutionary track computed with \textsc{MESA}; blue and red circles indicate predictions for the testing data from the quadratic interpolator and the GP model. Bottom: residuals of the predictions in the top graph. 
}  \n    \label{fig:1dgp}\n\end{figure}\n\n\subsection{2D Problem: Investigating the testing method}\n\nWe present a 2D example here.  We use the same training, validation, and testing datasets mentioned in Section \ref{sec:set_up} and train GP models which map $M$ and {\it EEP} to observables. We illustrate the GP model for effective temperature in Figure \ref{fig:2dtest}. As shown, the GP transforms the sparse data into a continuous function and is hence able to predict values for unseen points.\n%\nWe also notice that the kernel function in the area of $M \geq 1.05 {\rm M_{\odot}}$ and ${\it EEP} \leq 0.7$ is more complex than that for other regions because of the appearance of the 'hook'. This particular area is relatively difficult to learn and is poorly predicted by the GP model. \n\n\begin{figure}\n\t\includegraphics[width=1.0\columnwidth]{2d_GPmodel_function.pdf}\n\t\includegraphics[width=1.0\columnwidth]{2d_testing_hist_effective_T.pdf}\t\n    \caption{Top: The 2D GP model for $T_{\rm eff}$. Bottom: probability distributions of validation errors of the GP model. }  \n    \label{fig:2dtest}\n\end{figure}\n\nWhen there is a subregion in which the GP model performs worse than in other areas, the probability distribution of the testing error is not quite Gaussian. We examine the error distribution as shown at the bottom of Figure \ref{fig:2dtest}. The errors of most data points follow a Gaussian distribution, but about $10\%$ of the data form long tails on both sides. \n%\nWe inspect the testing errors for the other output parameters and find similar results. \nThe cases of surface gravity and radius are similar to that of the effective temperature. The surface metallicity is not affected by the hook, but the GP predictions are relatively poor at the early subgiant phase in high-mass tracks. This is because high-mass tracks maintain shallow convective envelopes and hence have a strong diffusion effect during the main-sequence stage. At the early subgiant phase, the quick expansion of the surface convective envelope brings the settled heavy elements back to the surface, leading to a sharp rise in the surface metallicity. \n%\nWe also find that the accuracy of the age prediction drops for very old low-mass stellar models. This is because age values span a relatively large dynamic range (15--50 Gyr) in a small fraction of data points. Poor age resolution causes poor GP predictions.        \nThat is to say, the tail feature is common to all five outputs. This raises the question of how to quantify the testing error.\n\nIn this case, a global error metric such as the Root Mean Square Error (RMSE) is not ideal, because it does not reflect the tail feature well. What we need here is a method that reflects the general accuracy level as well as the worst case. \n%\nWe examine the error distribution of each output and count data points in the tails (outside 3 times the full width at half maximum). These amount to around 10\% of the data (8--12\% for different outputs). \n%\nFor the majority (90\%) of data points, which form a Gaussian distribution, the 68\% confidence interval reflects their accuracy. For the worst 10\% of the data, we can use the 95\% and 99.7\% confidence intervals to describe the median value and the length of the tail. Thus, we define an Error Index (EI), which is the sum of the 68\%, 95\%, and 99.7\% cumulative values of the absolute errors, to quantify GP models. 
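\n\nAs a sketch (our own helper function, assuming NumPy), the EI of a set of predictions can be computed as follows.\n\begin{verbatim}\nimport numpy as np\n\ndef error_index(y_true, y_pred):\n    # EI: sum of the 68th, 95th and 99.7th percentiles of |error|.\n    abs_err = np.abs(np.asarray(y_true) - np.asarray(y_pred))\n    return np.percentile(abs_err, [68.0, 95.0, 99.7]).sum()\n\end{verbatim}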
For the case in Figure \ref{fig:2dtest}, the cumulative values at 68\%, 95\%, and 99.7\% are 1.1, 4.9, and 11.1 K, which give an EI equal to 17.1 K. \n\n\subsection{3D Problem: Strategy for a Large Data Sample}\n\nAs a further step, we apply a GP to a 3D grid and investigate the strategy for a large data sample.\nThe model grid we aim to train contains about 10,000,000 data points, which is much more than the practical upper limit on data size (20,000) of an \textsc{Exact GP} model. We work on this with 3-dimensional (3D) GP models, which map $M$, {\it EEP}, and  [Fe/H]$_{\rm init}$ to observable outputs. \n%The 3D GP model can be described as Outputs $ = f(M, EEP, {\rm [Fe/H]_{init}})$. \nWe select training data from the primary grid with fixed $Y_{\rm init}$ (0.28) and $\alpha_{\rm MLT}$ (2.1). The training dataset contains $\sim$300,000 data points. For validation and testing purposes, we compute another 174 evolutionary tracks with the same input $Y_{\rm init}$ and $\alpha_{\rm MLT}$ but random $M$ and [Fe/H]$_{\rm init}$. Before proceeding with the large sample, we train an \textsc{Exact GP} model using 20,000 training data points as a standard reference. \n\nWe first consider the Stochastic Variational GP (SVGP) approach based on the \textsc{GPyTorch ApproximateGP} module. \n%We train our data based on the SVGP example on \url{https://docs.gpytorch.ai/en/v1.1.1/examples/04_Variational_and_Approximate_GPs/SVGP_Regression_CUDA.html}. \nSVGP is an approximate scheme that relies on a set of inducing points which can be selected in the parameter space. It trains using minibatches of the training dataset and builds up kernels on the inducing points. The underlying principles and detailed descriptions of this approach can be found in \citet{hensman2015scalable}. The advantage of SVGP is its large sample-size capacity. However, the kernel complexity is still limited by the number of inducing points. \n%\nIn our tests, we find a practical issue with the SVGP approach. Because a large training sample consumes the available memory, we can only use 10,000 inducing points. That is to say, the kernel of the SVGP model is even simpler than that of the Exact GP model which uses 20,000 data points. As a result, the SVGP model does not show any improvement. For instance, the 68th, 95th, and 99.7th percentile testing errors for $T_{\rm eff}$ are 2.0, 5.8, and 15.7 K (EI = 23.5 K) for the \textsc{Exact GP} model and 2.2, 6.8, and 15.1 K (EI = 24.1 K) for the SVGP one. Because the evolutionary features are complex across multiple dimensions, what we need here is to increase the kernel complexity. This requires more data points actually constructing the kernel function. For this particular case, SVGP downgrades the complexity and hence does not improve the results.   \n\nWe then investigate another approach designed for large datasets, named Structured Kernel Interpolation (SKI GP). SKI GP was introduced by \citet{wilson2015kernel}. It produces kernel approximations for fast computations through kernel interpolation and is a good way to scale a GP up to very large datasets (100,000+ data points).\n% We follow the example on \url{https://docs.gpytorch.ai/en/stable/examples/02_Scalable_Exact_GPs/KISSGP_Regression.html} to develop our script. \nWe run a few tests to train a 3D SKI GP model with 100,000 training data points. Compared with the \textsc{Exact GP} and SVGP models, its testing errors for $T_{\rm eff}$ are slightly improved, at 2.0, 6.1, and 14.8 K (EI = 22.9 K). 
However, a further test on the 5-dimensional data is not ideal: a SKI GP model using 100,000 training data points performs much worse than an \textsc{Exact GP} model with only 20,000 training data points. This poor behaviour is consistent with what has been discussed in \citet{wilson2015kernel}: the method scales poorly to data with high dimensions, since the cost of creating the grid grows exponentially with the number of input dimensions. We attempt to make some additional approximations with the \textsc{GPyTorch AdditiveStructureKernel} module. It makes the base kernel act as a one-dimensional kernel on each data dimension, and the final kernel matrix is a sum of these 1D kernel matrices. However, the testing errors are not significantly improved.\n\n \begin{figure}\n\t\includegraphics[width=1.0\columnwidth]{3d-testing_teff-10sections.pdf}\n\t\includegraphics[width=1.0\columnwidth]{3d-testing_teff-hist-10sections.pdf}\t\n    \caption{Top: Testing errors of the 3D GP model for $T_{\rm eff}$ on the $M - EEP$ diagram. Dashes indicate section boundaries. Bottom: examination of the edge effects of the section scenario. Probability distributions of the testing errors of all testing data and of those near the boundaries ($\pm$0.01 EEP) in the upper graph are compared.  As can be seen, testing errors do not rise around the boundaries. }  \n    \label{fig:3dtest}\n\end{figure}\n\n\nAs mentioned above, the GPU memory capacity limits the actual number of data points that induce the kernel function.   \nThis limitation becomes critical for the high-dimensional case. To improve matters, more training data need to be involved. \n%The parameter space exponentially increases with the demission and hence the GP model accuracy inevitable declines. \n%\nA simple way forward is to break the grid into sections and train GP models for each section separately. \n%The downside of this section scenario is that there will not be one GP model that maps the whole grid. However, as long as the goal of this work is augmenting a model grid but not deriving a universal function for stellar evolutions, this scenario is suitable. \n%Here we test how this section scenario works for 3D GP models. \nHere we divide the training dataset into 10 equal segments by {\it EEP}. We train one \textsc{Exact GP} model with 20,000 training data points for each section. \n%We then use the same testing dataset to quantify GP predictions for the five outputs, and \nWith this section scenario, the testing EIs for the five output parameters improve on average by around 10\%. For instance, the testing EI for $T_{\rm eff}$ decreases from 23.5 to 21.6 K.\n% (1.7/5.0/14.9K at 68/95/99.7\%). \n%\n\nAs expected, the section scenario improves the performance of the GP model, but there is a major concern about edge effects at the boundaries between sections. If the GP model performs significantly worse at the section boundaries, it will be difficult to map the systematic errors across the whole parameter space. We examine potential edge effects as illustrated in Figure~\ref{fig:3dtest}. We inspect the absolute testing errors on the $M-EEP$ diagram. No obvious edge effect is found. We also make a statistical comparison between the error values of all data points and those around the section boundaries ($\pm0.01$ {\it EEP}). As shown in the bottom graph, the density distributions of the two samples are very similar to each other, indicating no obvious edge effect. We adopt this section scenario to train the stellar grid in the following study.  
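\n\nA minimal sketch of this section scenario (our own helper; \texttt{train\_gp} is a hypothetical stand-in for the Exact GP training routine of Section \ref{sec:set_up}):\n\begin{verbatim}\nimport numpy as np\n\ndef train_sections(X, y, eep, train_gp, n_sections=10):\n    # Split the training set into equal EEP segments and train one\n    # Exact GP per segment.\n    edges = np.linspace(eep.min(), eep.max(), n_sections + 1)\n    models = []\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        mask = (eep >= lo) & (eep <= hi)\n        models.append(((lo, hi), train_gp(X[mask], y[mask])))\n    return models\n\end{verbatim}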
\n\n\n\n", "meta": {"hexsha": "606e2c20763648dc294326f8205587f24cc32871", "size": 30279, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/GPmodel.tex", "max_stars_repo_name": "litanda/GPGrid_paper", "max_stars_repo_head_hexsha": "b03fafdb523cb147e85ebaaba42fbe7c14d4fced", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/GPmodel.tex", "max_issues_repo_name": "litanda/GPGrid_paper", "max_issues_repo_head_hexsha": "b03fafdb523cb147e85ebaaba42fbe7c14d4fced", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-07-24T12:26:00.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-14T21:35:20.000Z", "max_forks_repo_path": "paper/GPmodel.tex", "max_forks_repo_name": "litanda/GPGrid_paper", "max_forks_repo_head_hexsha": "b03fafdb523cb147e85ebaaba42fbe7c14d4fced", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.9813432836, "max_line_length": 1106, "alphanum_fraction": 0.759998679, "num_tokens": 7624, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5746450487932521}}
{"text": "\\section{Conclusion}\nIn this paper, we have demonstrated a method for for synthesizing encodings\n  for untyped lambda calculus terms and functions in parallel, based on an\n  abstract specification of the behavior of those terms.\nWe call this dual specification of terms and function \\textit{co-synthesis}.\nCo-synthesis is made possible in the case of untyped lambda\n  calculus, because there is no structural difference between terms and\n  functions, only a difference in the meaning ascribed to them.\nThis means that evaluation of unknown terms simply involves carrying out\n  regular lambda calculus evaluation with potential free variables, which\n  are the expressions we are synthesizing.\nUsing bottom-up fair enumeration, we were able to synthesize functions and\n  terms in boolean, natural number, and pair encodings, as well as\n  co-synthesize the encoding for booleans.\n", "meta": {"hexsha": "ac755d2813111568cec4b9a80abe70dcf15aa136", "size": 877, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/conclusion.tex", "max_stars_repo_name": "DavidThien/elsa", "max_stars_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/conclusion.tex", "max_issues_repo_name": "DavidThien/elsa", "max_issues_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/conclusion.tex", "max_forks_repo_name": "DavidThien/elsa", "max_forks_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.4666666667, "max_line_length": 76, "alphanum_fraction": 0.810718358, "num_tokens": 181, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789086703224, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.574645048793252}}
{"text": "\\documentclass[letter]{article}\n\\renewcommand{\\baselinestretch}{1.25}\n\n\\usepackage[margin=1in]{geometry}\n\\usepackage{physics}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n%\\usepackage{pythonhighlight}\n\\usepackage{hyperref}\n\\usepackage{fancyvrb}\n\n% MATLAB Formating Code\n\\usepackage[numbered,framed]{matlab-prettifier}\n\\lstset{style=Matlab-editor,columns=fullflexible}\n\\renewcommand{\\lstlistingname}{Script}\n\\newcommand{\\scriptname}{\\lstlistingname}\n\n% Command for easier minimization problem def\n\\newcommand{\\optpblm}[3][eq:default]{\n\t\\begin{equation}\\label{#1}\n% Array method... more centered\t\t\n%\t\t\\begin{array}{rl}\n%\t\t\t\\text{minimize}  \\hspace{0.2in} &#2 \\vspace{5pt}\\\\\n%\t\t\t\\text{subject to} \\hspace{0.2in} &#3\n%\t\t\\end{array}\n% Aligned method... left aligned... idk if its better\n\t\t\\begin{aligned}\n\t\t\t\\text{minimize} \\hspace{0.5in} &#2\\vspace{5pt}\\\\\n\t\t\t\\text{subject to \\hspace{0.5in}} &#3\n\t\t\\end{aligned}\t\n\t\\end{equation}\n}\n\n\\allowdisplaybreaks\n\n\\title{MECH 6327 - Homework 3}\n\\author{Jonas Wagner}\n\\date{2021, March 24}\n\n\\begin{document}\n\n\\maketitle\n\n\\newpage\n\\tableofcontents\n\n\\newpage\n\\section*{BV Textobook Problems}\n\\subsection{Problem 4.11}\n\\textbf{Problem:}\nFormulate each problem as a LP and explained the relationship between the optimal solution of the problems and the solution of its LP.\\\\\n\\textbf{Solution:}\n\\subsubsection{Part a: Minimize $\\norm{Ax-b}_\\infty$}\nDefine the following minimization problem:\n\\optpblm{\\norm{Ax-b}_\\infty}{\\text{math}}\nFrom the definition of an $\\infty$-norm as $$\\norm{x}_\\infty = \\max_i \\abs{x_i}$$ the following can be derived:\n\\optpblm{t}{\\qty(Ax - b)_i\\leq t, \\ \\forall i = 1,\\dots,n\\\\\n\t\t- & \\qty(Ax - b)_i\\leq t, \\ \\forall i = 1,\\dots,n}\nWhich is equivalent to the following linear program\n\\optpblm{t}{-\\vb{1}t \\leq Ax - b \\leq \\vb{1}t}\nThe resulted minimum to this equivalent problem, $t*$, is equivalent to the minimum of the original problem, $\\norm{Ax^*-b}_\\infty$.\n\nFrom this the minimizing variable, $x^*$, can be found as: $$x^* = A^{-1} (\\vb{1}^T t^* + b)$$\n\n\\newpage\n\\subsubsection{Part b: Minimize $\\norm{Ax-b}_1$}\nDefine the following minimization problem:\n\\optpblm{\\norm{Ax-b}_1}{\\text{math}}\nFrom the definition of an $1$-norm as $$\\norm{x}_1 = \\sum_i \\abs{x_i}$$ the following can be derived:\n\\optpblm{t_1 + \\cdots + t_n}{\\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n-\t& \\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n}\nWhich is equivalent to the following linear program\n\\optpblm{\\vb{1}^T t}{-t \\leq Ax - b \\leq t}\nThe resulted minimum to this equivalent problem, $\\vb{1}^T t$, is equivalent to the minimum of the original problem, $\\norm{Ax-b}_1$.\n\nFrom this the minimizing variable, $x^*$, can be found as: $$x^* = A^{-1} (t^* + b)$$\n\n\\newpage\n\\subsubsection{Part c: Minimize $\\norm{Ax-b}_1$ subject to $\\norm{x}_{\\infty}\\leq 1$}\nDefine the following minimization problem:\n\\optpblm{\\norm{Ax-b}_1}{\\norm{x}_\\infty \\leq 1}\nFrom the definition of an $1$-norm as $$\\norm{x}_1 = \\sum_i \\abs{x_i}$$ and the definition of an $\\infty$-norm as $$\\norm{x}_\\infty = \\max_i \\abs{x_i}$$ the following can be derived:\n\\optpblm{t_1 + \\cdots + t_n}{\\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t-& \\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t & x_i \\leq 1, \\forall i = 1,\\dots,n\\\\\n\t-& x_i \\leq 1, 
\\forall i = 1,\\dots,n}\nThis is equivalent to the following linear program:\n\\optpblm{\\vb{1}^T t}{-t \\leq Ax - b \\leq t\\\\\n\t\t\t\t\t&-\\vb{1} \\leq x \\leq \\vb{1}}\n\nThe resulting minimum of this equivalent problem, $\\vb{1}^T t$, equals the minimum of the original problem, $\\norm{Ax-b}_1$.\n\nThe minimizing variable $x^{*}$ is recovered directly as the $x$-component of the LP solution $(x^{*}, t^{*})$.\n\n\\newpage\n\\subsubsection{Part d: Minimize $\\norm{x}_1$ subject to $\\norm{Ax-b}_\\infty \\leq 1$}\nDefine the following minimization problem:\n\\optpblm{\\norm{x}_1}{\\norm{Ax-b}_\\infty \\leq 1}\nFrom the definition of a $1$-norm as $$\\norm{x}_1 = \\sum_i \\abs{x_i}$$ and the definition of an $\\infty$-norm as $$\\norm{x}_\\infty = \\max_i \\abs{x_i}$$ the following can be derived:\n\\optpblm{t_1 + \\cdots + t_n}{x_i \\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t\t\t\t\t\t\t&-x_i \\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t\t\t\t\t\t\t&\\abs{(Ax-b)_i} \\leq 1, \\ \\forall i = 1,\\dots,n}\nFrom this a linear program can be defined as:\n\\optpblm{\\vb{1}^T t}{-t \\leq x \\leq t\\\\\n\t\t\t \t\t&-\\vb{1} \\leq Ax - b \\leq \\vb{1}}\n\nThe resulting minimum of this equivalent problem, $\\vb{1}^T t$, equals the minimum of the original problem, $\\norm{x}_1$.\n\nThe minimizing variable $x^{*}$ is recovered directly as the $x$-component of the LP solution $(x^{*}, t^{*})$; at the optimum $\\abs{x^{*}_i} = t^{*}_i$.\n\n\\newpage\n\\subsubsection{Part e: Minimize $\\norm{Ax-b}_1 + \\norm{x}_\\infty$}\nDefine the following minimization problem:\n\\optpblm{\\norm{Ax-b}_1 + \\norm{x}_\\infty}{x \\in \\real^{n}}\nFrom the definition of a $1$-norm as $$\\norm{x}_1 = \\sum_i \\abs{x_i}$$ and the definition of an $\\infty$-norm as $$\\norm{x}_\\infty = \\max_i \\abs{x_i}$$ the following can be derived:\n\\optpblm{t_1 + \\cdots + t_n + s}{\n\t\t\\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t\t-& \\qty(Ax - b)_i\\leq t_i, \\ \\forall i = 1,\\dots,n\\\\\n\t\t&x_i \\leq s, \\ \\forall i = 1,\\dots,n\\\\\n\t-\t&x_i \\leq s, \\ \\forall i = 1,\\dots,n}\nThis can be written as a standard linear program as:\n\\optpblm{\\vb{1}^T t + s}{\n\t\t- t \\leq Ax-b \\leq t\\\\\n\t\t&-\\vb{1}s \\leq x \\leq \\vb{1}s}\nThe resulting minimum of this equivalent problem, $\\vb{1}^T t + s$, equals the minimum of the original problem, $\\norm{Ax-b}_1 + \\norm{x}_\\infty$. 
It should be noted that $s$ (the epigraph variable for $\norm{x}_\infty$) does not appear in the recovery of the minimizing variable, but it is essential in weighting the objective during the minimization itself.\n\nThe minimizing variable $x^{*}$ is recovered directly as the $x$-component of the LP solution $(x^{*}, t^{*}, s^{*})$.\n\n\newpage\n\subsection{Problem 4.16}\nConsider the system given as\n\begin{equation}\label{eq:dyn_sys_def}\n\tx(t+1) = A x(t) + b u(t), \ t = 0,\dots,N-1\n\end{equation}\nwith $x(t) \in \real^n, u(t) \in \real, \forall t = 0,\dots,N-1$ and $A \in \real^{n\cross n}, b \in \real^n$, and $x(0) = 0$.\\\n\nThe minimum-fuel optimal control problem is to select the input sequence that minimizes the total fuel used, given as\n\begin{equation}\label{eq:min_fuel_problem_def}\n\t\begin{aligned}\n\t\t\text{minimize} \hspace{0.5in} &F = \sum_{t=1}^{N-1} f(u(t))\\\n\t\t\text{subject to \hspace{0.5in}} & x(t+1) = A x(t) + b u(t), \ t = 0,\dots,N-1\\\n\t\t& x(N) = x_{des}\n\t\end{aligned}\t\n\end{equation}\nwith $N$ as the time-horizon, $x_{des} \in \real^n$ as the desired final state, and $f: \real \to \real$ given as\n\begin{equation}\label{eq:fuel_usage_def}\n\tf(a) = \n\t\begin{cases}\n\t\t\abs{a} & \abs{a} \leq 1 \\\n\t\t2 \abs{a} - 1 & \abs{a} > 1\n\t\end{cases}\n\end{equation}\n\n\textbf{Problem:}\nFormulate this problem as a Linear Program.\\\n\n\textbf{Solution:}\nFirst, \ref{eq:min_fuel_problem_def} can be rewritten in an epigraph form (noting that $f(u(t))$ is always nonnegative):\n\optpblm[eq:min_fuel_problem_epigraph]{F_1 + \cdots + F_{N-1}}{\n\t\tf(u(t)) \leq F_t, \ \forall t = 1, \dots, N-1\\\n\t\t& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\\n\t\t&x(N) = x_{des}}\nNow looking at the nonlinear component, the fuel usage defined by \eqref{eq:fuel_usage_def} is the pointwise maximum $f(a) = \max\{\abs{a}, 2\abs{a}-1\}$, so $f(a) \leq g$ is equivalent to:\n\begin{equation}\n\t\begin{aligned}\n\t\t\abs{a} \leq g\\\n\t\t2 \abs{a} - 1 \leq g\\\n\t\end{aligned}\n\end{equation}\nor equivalently,\n\begin{equation}\n\t\begin{aligned}\n\t\t-g \leq a \leq g\\\n\t\t-(g+1) \leq 2a \leq g+1\n\t\end{aligned}\n\end{equation}\n\nThis represents an intersection of half-spaces, which is a simple convex description.\\\nThis can now be combined with \eqref{eq:min_fuel_problem_epigraph} to produce the linear program:\n\optpblm[eq:min_fuel_problem_result]{F_1 + \cdots + F_{N-1}}{\n\t-F_t \leq u(t) \leq F_t, \ \forall t = 1, \dots, N-1\\\n\t&-(F_t + 1) \leq 2u(t) \leq F_t + 1, \ \forall t = 1, \dots, N-1\\\n\t& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\\n\t&x(N) = x_{des}}\nWhich can then be rewritten as:\n\optpblm[eq:min_fuel_problem_result]{\vb{1}^T F}{\n\t-F \leq \vb{u} \leq F\\\n\t&-(F + \vb{1}) \leq 2\vb{u} \leq F + \vb{1}\\\n\t& x(t+1) = A x(t) + b u(t), \ \forall t = 0,\dots,N-1\\\n\t&x(N) = x_{des}\n\t}\n\n\newpage\n\subsection{Problem 4.28}\nConsider the convex quadratic program given as\n\begin{equation}\label{eq:convex_quadratic_program}\n\t\begin{aligned}\n\t\t\text{minimize} \ \ & \frac{1}{2} x^T P x + q^T x + r\\\n\t\t\text{subject to} \ \ & Ax \leq b\n\t\end{aligned}\n\end{equation}\nwith a robust equivalent defined as\n\begin{equation}\label{eq:robust_convex_quadratic_program}\n\t\begin{aligned}\n\t\t\text{minimize} \ \ & \sup_{P\in \mathcal{E}}\{\frac{1}{2} x^T P x + q^T x + r\}\\\n\t\t\text{subject to} \ \ & Ax \leq b\n\t\end{aligned}\n\end{equation}\nwhere $\mathcal{E}$ is the set of possible values of the matrix $P$.\n\n\subsubsection{Part 
a}\n\textbf{Problem:}\nExpress the robust QP as a convex problem given $\mathcal{E} = \{P_1,\dots,P_k\}$ where $P_i\in S^n_+, \ \forall i=1,\dots,k$.\\\n\n\textbf{Solution:}\nEach quadratic objective with $P_i \in S^n_+$ is convex, and a pointwise supremum of convex functions is also convex.\nThus, for a supremum over the finite set $\mathcal{E}$, an equivalent convex problem can be defined.\\\n\nFirst, we can redefine the problem as\n\optpblm{\sup \{t_1, \dots, t_k\}}{\n\t\frac{1}{2} x^T P_i x + q^T x + r \leq t_i, \ i = 1, \dots, k\\\n\t&Ax \leq b}\nA further epigraph variable can then be introduced to create the following convex optimization problem:\n\optpblm{s}{\n\tt_i \leq s, \ i = 1, \dots, k\\\n\t&\frac{1}{2} x^T P_i x + q^T x + r \leq t_i, \ i = 1, \dots, k\\\n\t&Ax \leq b}\n\n\n% no part b,c for 4.28\n\n\n\newpage\n\subsection{Problem 4.43}\nSuppose $A: \real^n \to S^m$ is affine such that\n\begin{equation}\n\tA(x) = A_0 + x_1 A_1 + \cdots + x_n A_n\n\end{equation}\nwhere $A_i \in S^m$. Let $\lambda_1(x) \geq \lambda_2(x) \geq \cdots \geq \lambda_m(x)$ be the eigenvalues of $A(x)$.\\\n\n\nFor each of the following minimization criteria, formulate the problem as an SDP.\\\n\n\subsubsection{Part a}\n\textbf{Problem:}\nMinimize the maximum eigenvalue of $A$: $$\text{minimize} \ \ \lambda_1(x)$$\n\textbf{Solution:}\nThis can be rewritten in epigraph form as:\n\optpblm{t}{\lambda_1 \leq t}\nor equivalently as an SDP:\n\optpblm{t}{A(x) \preceq t I}\n\n\n%It is known that the eigenvalues of a sum of matrices is bounded below by the sum of the minimum eigenvalues of each and bounded above by the sum of the maximum eigenvalues.\n%\cite{eigvalueBound}\n%i.e.\n%$$ \lambda(A)_m + \lambda(B)_m \leq \lambda(A+B)_m \leq \lambda(A+B)_1 \leq \lambda(A)_1 + \lambda(B)_1$$\n%If this is to be expanded to the entire affine sum, $A(x)$, the objective of minimizing the eigenvalues of the weighted sum of symetric matrices can be done by minimizing the weighted sum of the largest eigenvalues of individual matrices.\n%This means this problem can be redefined as:\n%\optpblm{t^T x}{\n%\tt_i = \lambda_1(A_i), \ \forall i = 1,\dots,m\\\n%\t&s = \lambda_1(A_0)}\n%Since $s$ will remain constant regrdless of $x$, this is equivalent to:\n%\optpblm{t^T x}{\n%\tt_i = \lambda_1(A_i), \ \forall i = 1,\dots,m}\n%\n%????????????????????????????????????????????????????\n\n\subsubsection{Part b}\n\textbf{Problem:}\nMinimize the spread of the eigenvalues of $A$: $$\text{minimize} \ \ \lambda_1(x) - \lambda_m(x)$$\\\n\textbf{Solution:}\nThis can be rewritten in epigraph form as\n\optpblm{t_1 - t_2}{\lambda_1 \leq t_1\\ & \lambda_m \geq t_2}\nor equivalently as an SDP:\n\optpblm{t_1 - t_2}{A(x) \preceq t_1 I\\ &A(x) \succeq t_2 I}\n\n\n%It is known that the eigenvalues of a sum of matrices is bounded below by the sum of the minimum eigenvalues of each and bounded above by the sum of the maximum eigenvalues.\n%\cite{eigvalueBound}\n%i.e.\n%$$ \lambda(A)_m + \lambda(B)_m \leq \lambda(A+B)_m \leq \lambda(A+B)_1 \leq \lambda(A)_1 + \lambda(B)_1$$\n%If this is to be expanded to the entire affine sum, $A(x)$, the objective of minimizing the spread eigenvalues of the weighted sum of symetric matrices can be done by minimizing the weighted sum of the spread of eigenvalues of individual matrices.\n%This means this problem can be redefined as:\n%\optpblm{t^T x + s}{\n%\tt_i = 
(\\lambda_1(A_i) - \\lambda_m(A_i), \\ \\forall i = 1,\\dots,m\\\\\n%\t&s = (\\lambda_1(A_0) - \\lambda_m(A_0)}\n%Since $s$ remains constant regardless of $x$ this is equivelent to\n%\\optpblm{t^T x}{\n%\tt_i = (\\lambda_1(A_i) - \\lambda_m(A_i), \\ \\forall i = 1,\\dots,m}\n%\n%\n%????????????????????????????????????????????????????\n\n\n%Defining the optimization problem as:\n%\\optpblm{\\max \\{\\lambda\\qty(A(x))\\} - \\min \\{\\lambda\\qty(A(x))\\}}{\n%\tA(x) = A_0 + x_1 A_1 + \\cdots + x_n A_n}\\\\\n\n\n%\n%WHAT???\n\n\\newpage\n\\subsubsection{Part c}\n\\textbf{Problem:}\nMinimize the condition number of $A(x)$ while remaining positive definite:\n\\begin{equation*}\n\t\\begin{aligned}\n\t\t\\text{minimize} \\ \\ &k(A(x)) = \\frac{\\lambda_1(x)}{\\lambda_m(x)} \\ \\forall \\ x \\in \\{x \\ | \\ A(x) \\succ 0\\}\\\\\n\t\t \\text{subject to} \\ \\ &A(x) \\succ 0\n\t\\end{aligned}\n\\end{equation*}\n\\textbf{Solution:}\nThis can be rewritten in epigraph form as\n\\optpblm{t_1 / t_2}{\n\t\\lambda_1 \\leq t_1\\\\\n\t& \\lambda_m \\geq t_2\\\\\n\t&A(x) \\succ 0\n\t}\nor equivalently in matrix-inequality form:\n\\optpblm{t_1 / t_2}{\n\tA(x) \\preceq t_1 I\\\\\n\t&A(x) \\succeq t_2 I\\\\\n\t&A(x) \\succ 0\n\t}\nStrictly, the ratio objective $t_1/t_2$ is linear-fractional rather than linear; it is quasiconvex, and the problem can be solved by bisecting on $k = t_1/t_2$, each step being an SDP feasibility problem.\n\n% Only Part a,b,c for 4.43\n\n\\newpage\n\\section{Problem 1: Open-loop optimal control with $1-$ and $\\infty-$ norms.}\nThe following open-loop optimal regulation problem is given as:\n\\begin{equation}\\label{eq:open-loop_opt-control_def}\n\t\\begin{aligned}\n\t\t\\text{minimize} \\hspace{0.5in}\n\t\t&\\norm{x_T}_p + \\sum_{t = 0}^{T-1} \\left( \\norm{x_t}_p + \\gamma\\norm{u_t}_q \\right)\\\\\n\t\t\\text{subject to} \\hspace{0.5in}\n\t\t& x_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t\t& \\norm{x_t}_\\infty \\leq \\bar{x}, \\ t = 0,\\dots,T\\\\\n\t\t& \\norm{u_t}_\\infty \\leq \\bar{u}, \\ t = 0,\\dots,T\n\t\\end{aligned}\n\\end{equation}\nwith $x_t \\in \\real^n$ and $u_t \\in \\real^m$ as the system state and control input respectively and parameter $\\gamma > 0$ governing the trade-off between actuator effort and state regulation performance.\\\\\n\n\\textbf{Problem:}\nExpress this problem as a linear program for (i) $p=q=\\infty$ and (ii) $p=q=1$. Code both in CVX for the problem data provided. 
Verify the equivalence between the original optimization problem and the transformed linear program, and plot the optimal state and input trajectories for each.\\\\\n\n\\textbf{Solution:}\n\\subsection{Linear program for $p = q = \\infty$}\n\nWith $p = q = \\infty$, the problem is defined as:\n\\optpblm{\\norm{x_T}_\\infty + \\sum_{t = 0}^{T-1} \\left( \\norm{x_t}_\\infty + \\gamma\\norm{u_t}_\\infty \\right)}{\n\tx_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t& \\norm{x_t}_\\infty \\leq \\bar{x}, \\ t = 0,\\dots,T\\\\\n\t& \\norm{u_t}_\\infty \\leq \\bar{u}, \\ t = 0,\\dots,T}\n\nThe epigraph of this problem can be found as\n\\optpblm{r_T + (r_0 + \\gamma s_0) + \\cdots + (r_{T-1} + \\gamma s_{T-1})}{\n\t\\norm{x_t}_\\infty \\leq r_t, \\ t = 0, \\dots, T\\\\\n\t&\\norm{u_t}_\\infty \\leq s_t, \\ t = 0, \\dots, T-1\\\\\n\t&x_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t& \\norm{x_t}_\\infty \\leq \\bar{x}, \\ t = 0,\\dots,T\\\\\n\t& \\norm{u_t}_\\infty \\leq \\bar{u}, \\ t = 0,\\dots,T\n\t}\n\nFrom the definition of $\\norm{x}_\\infty = \\max_i \\abs{x_i}$ and through vectorization, we can redefine this as the following linear program:\n\\optpblm{\n\t\\mqty[\\vb{1}^T & \\gamma \\vb{1}^T] \\mqty[r\\\\s] \n\t=\\vb{1}^T r + \\gamma \\vb{1}^T s}{\n\tx_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t&-r_t \\vb{1} \\leq x_t \\leq r_t \\vb{1}, \\ r_t \\leq \\bar{x}, \\ t = 0, \\dots, T\\\\\n\t&-s_t \\vb{1} \\leq u_t \\leq s_t \\vb{1}, \\ s_t \\leq \\bar{u}, \\ t = 0, \\dots, T-1\n\t}\n\n\\subsection{Linear program for $p = q = 1$}\nWith $p = q = 1$, the problem is defined as:\n\\optpblm{\\norm{x_T}_1 + \\sum_{t = 0}^{T-1} \\left( \\norm{x_t}_1 + \\gamma\\norm{u_t}_1 \\right)}{\n\tx_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t& \\norm{x_t}_\\infty \\leq \\bar{x}, \\ t = 0,\\dots,T\\\\\n\t& \\norm{u_t}_\\infty \\leq \\bar{u}, \\ t = 0,\\dots,T}\n\nThe epigraph of this problem can be found as\n\\optpblm{r_T + (r_0 + \\gamma s_0) + \\cdots + (r_{T-1} + \\gamma s_{T-1})}{\n\t\\norm{x_t}_1 \\leq r_t, \\ t = 0, \\dots, T\\\\\n\t&\\norm{u_t}_1 \\leq s_t, \\ t = 0, \\dots, T-1\\\\\n\t&x_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t& \\norm{x_t}_\\infty \\leq \\bar{x}, \\ t = 0,\\dots,T\\\\\n\t& \\norm{u_t}_\\infty \\leq \\bar{u}, \\ t = 0,\\dots,T\n}\n\nFrom the definition of $\\norm{x}_1 = \\sum_i \\abs{x_i}$ and through vectorization, introducing elementwise auxiliary vectors $a_t$ and $b_t$ for the absolute values, we can redefine this as the following linear program:\n\\optpblm{\n\t\\mqty[\\vb{1}^T & \\gamma \\vb{1}^T] \\mqty[r\\\\s] \n\t=\\vb{1}^T r + \\gamma \\vb{1}^T s}{\n\tx_{t+1} = A x_t + B u_t, \\ t = 0,\\dots,T-1\\\\\n\t& -a_t \\leq x_t \\leq a_t, \\ \\vb{1}^T a_t \\leq r_t, \\ t = 0,\\dots,T\\\\\n\t& -b_t \\leq u_t \\leq b_t, \\ \\vb{1}^T b_t \\leq s_t, \\ t = 0,\\dots,T-1\\\\\n\t&-\\bar{x} \\vb{1} \\leq x_t \\leq \\bar{x} \\vb{1}, \\ t = 0, \\dots, T\\\\\n\t&-\\bar{u} \\vb{1} \\leq u_t \\leq \\bar{u} \\vb{1}, \\ t = 0, \\dots, T-1\n}\n\n\n\\newpage\n\\subsection{CVX Formulation and Results:}\nThe code used to solve the linear programs and direct norm cvx calculations can be found in \\appendixname \\ref{apx:pblm1_matlab}.\\\\\n\n\\subsubsection{$\\infty$-norm Solution}\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig/pblm1_inftyn_x}\n\t\\caption{States for Open-loop control comparing methods for $\\infty$-norm.}\n\t\\label{fig:pblm1inftynx}\n\\end{figure}\\newpage\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig/pblm1_inftyn_u}\n\t\\caption{Inputs for Open-loop control comparing methods for $\\infty$-norm.}\n\t\\label{fig:pblm1inftynu}\n\\end{figure}\\newpage\n\n\\subsubsection{1-norm 
Solution}\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig/pblm1_1n_x}\n\t\\caption{States for Open-loop control comparing methods for 1-norm.}\n\t\\label{fig:pblm11nx}\n\\end{figure}\\newpage\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig/pblm1_1n_u}\n\t\\caption{Inputs for Open-loop control comparing methods for 1-norm.}\n\t\\label{fig:pblm11nu}\n\\end{figure}\n\n\\newpage\n\\section{Problem 2: Minimum time state transfer via quasiconvex optimization.}\nConsider the LTI system:\n\\begin{equation}\\label{eq:quasiconvex_opt_def}\n\t\\begin{aligned}\n\t\tx_{t+1} &= Ax_t + B u_t, \\ \\forall t = 0,\\dots,T\\\\\n\t\t\\underline{u} &\\leq u_t \\leq \\bar{u}, \\ \\forall t = 0,\\dots,T\n\t\\end{aligned}\n\\end{equation}\nwith $x_0$ as the initial state.\\\\\n\n\\textbf{Problem:}\nShow that the minimum time required to transfer the system from $x_0$ to $x_{des}$, given as\n\\begin{equation}\\label{eq:qualiconvex_problem_result}\n\tf(u_0,\\dots,u_T) = \\min \\{\\tau \\ | \\ x_t = x_{des} \\ \\text{for} \\ \\tau \\leq t \\leq {T+1}\\}\n\\end{equation}\nis a quasiconvex function of the control input sequence. Implement a bisection algorithm to solve the problem for the given data.\\\\\n\n\\textbf{Solution:}\nFor a fixed $\\tau$, the $\\tau$-sublevel set of $f$, $\\{(u_0,\\dots,u_T) \\ | \\ f(u_0,\\dots,u_T) \\leq \\tau\\}$, is described by the linear dynamics, the box constraints on $u_t$, and the linear equality constraints $x_t = x_{des}$ for $\\tau \\leq t \\leq T+1$; it is therefore a polyhedral, and hence convex, set. Since every sublevel set is convex, $f$ is a quasiconvex function of the input sequence, and the minimum-time problem can be written as:\n\\optpblm{\\tau}{\n\tx_{t+1} = Ax_t + B u_t \\ \\forall t = 0,\\dots,T\\\\\n\t&\\underline{u} \\leq u_t \\leq \\bar{u} \\ \\forall t = 0,\\dots,T\\\\\n\t&x(0) = x_0\\\\\n\t&x_t = x_{des} \\ \\forall t \\in \\{t \\ | \\ \\tau \\leq t \\leq {T+1}\\}\n\t}\n\nFor simplicity, we relax the terminal condition to reaching the target state at time $\\tau$ rather than remaining there:\n$$x_\\tau = x_{des}$$\n\nA bisection algorithm on $\\tau$ can then be implemented to solve the problem, as done using the MATLAB code shown in \\appendixname \\ \\ref{apx:pblm2_matlab}; a sketch of the bisection is given below. 
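\nFor illustration, the following is a minimal Python sketch of the bisection (our own, using \\texttt{scipy}; the actual results below come from the MATLAB/CVX script in the appendix, and the helper names here are hypothetical). Each feasibility check is the LP of reaching $x_{des}$ in exactly $\\tau$ steps.\n\\begin{Verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef reachable_in(tau, A, B, x0, x_des, u_lo, u_hi):\n    # Feasibility LP: x_tau = A^tau x0 + sum_t A^(tau-1-t) B u_t\n    # with box bounds on the stacked input sequence.\n    if tau == 0:\n        return bool(np.allclose(x0, x_des))\n    n, m = B.shape\n    blocks = [np.linalg.matrix_power(A, tau - 1 - t) @ B\n              for t in range(tau)]\n    res = linprog(c=np.zeros(m * tau),\n                  A_eq=np.hstack(blocks),\n                  b_eq=x_des - np.linalg.matrix_power(A, tau) @ x0,\n                  bounds=[(u_lo, u_hi)] * (m * tau))\n    return res.status == 0\n\ndef min_time(A, B, x0, x_des, u_lo, u_hi, T):\n    # Bisection on tau; assumes reachability is monotone in tau,\n    # which holds for the original problem where the state must\n    # remain at x_des once reached.\n    lo, hi = 0, T + 1\n    while lo < hi:\n        mid = (lo + hi) // 2\n        if reachable_in(mid, A, B, x0, x_des, u_lo, u_hi):\n            hi = mid\n        else:\n            lo = mid + 1\n    return lo\n\\end{Verbatim}\n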
The result of this was a minimum value $$t = 51, \\text{ or } \\tau = 10.2$$\n\n\\newpage\nThe resulting system response and control sequence are provided as:\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig/pblm2}\n\t\\caption{Results for problem 2.}\n\t\\label{fig:pblm2}\n\\end{figure}\n\n\\newpage\n\\section{Problem 3: State feedback control design via SDP}\nFeedback control problems can be formulated using a semidefinite program, such as\n\\begin{equation}\\label{eq:feedback_control_def}\n\t\\begin{aligned}\n\t\t\\text{maximize} \\hspace{0.5in}& \\trace \\{P\\}\\\\\n\t\t\\text{subject to} \\hspace{0.5in}\n\t\t& \\mqty [R + B^T PB & B^T PA\\\\\n\t\t\t\t A^T PB & Q + A^T P A - P] \\succeq 0\\\\\n\t\t\t\t & P \\succeq 0\n\t\\end{aligned}\n\\end{equation}\nwith variable $P \\in S^n$ and problem data $A\\in \\real^{n\\cross n}, B \\in\\real^{n\\cross m}, Q \\in S^n_+, R \\in S^m_{++}$.\\\\\n\nThis problem is equivalent to the infinite-horizon LQR problem:\n\\begin{equation}\\label{eq:LQR_control_def}\n\t\\begin{aligned}\n\t\t\\text{minimize} \\hspace{0.5in}& \\sum_{t=0}^\\infty x_t^T Q x_t + u_t^T R u_t\\\\\n\t\t\\text{subject to} \\hspace{0.5in}\n\t\t& x_{t+1} = Ax_t + B u_t, \\ t \\geq 0, \\ x(t=0) = x_0\n\t\\end{aligned}\n\\end{equation}\nThis is also equivalent to the solution of the discrete-time algebraic Riccati equation (DARE) and can be solved in MATLAB with dare(A,B,Q,R). The solution to the feedback controller is\n\\begin{equation}\\label{eq:LQR_control_solution}\n\tu_t = K x_t\\\\\n\tK = -\\qty(R + B^T P^* B)^{-1} B^T P^* A\n\\end{equation}\n\n\\textbf{Problem:}\nConfirm the solution to the SDP given in \\eqref{eq:feedback_control_def} is equivalent to the LQR problem given in \\eqref{eq:LQR_control_def} for multiple randomly generated problems.\n\n\\textbf{Solution:}\nCVX in MATLAB was used and the code can be found in \\appendixname \\ref{apx:pblm3_matlab}. The full set of results is provided in \\appendixname \\ref{apx:pblm3_results} for various randomly generated problems and solutions. 
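\nAs a cross-check sketch of the same SDP (our own, in Python with \\texttt{cvxpy} rather than the MATLAB CVX used for the results below; the function name is ours):\n\\begin{Verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef lqr_via_sdp(A, B, Q, R):\n    # Maximise trace(P) subject to the LMI and P >= 0 above.\n    n = A.shape[0]\n    P = cp.Variable((n, n), symmetric=True)\n    M = cp.bmat([[R + B.T @ P @ B, B.T @ P @ A],\n                 [A.T @ P @ B, Q + A.T @ P @ A - P]])\n    cp.Problem(cp.Maximize(cp.trace(P)), [M >> 0, P >> 0]).solve()\n    P_opt = P.value\n    # LQR gain K = -(R + B'PB)^{-1} B'PA.\n    K = -np.linalg.solve(R + B.T @ P_opt @ B, B.T @ P_opt @ A)\n    return P_opt, K\n\\end{Verbatim}\n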
The following are a few of the matching $P$ results.\n\n\\begin{Verbatim}\n\tP_cvx =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\t\n\t\n\tP_dare =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\\end{Verbatim}\n\\begin{Verbatim}\n\tP_cvx =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\t\n\t\n\tP_dare =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\\end{Verbatim}\n\\begin{Verbatim}\n\tP_cvx =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\t\n\t\n\tP_dare =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\\end{Verbatim}\n\n\\newpage\n\\appendix\n\\section{MATLAB Code:}\\label{apx:matlab}\nAll code I write in this course can be found on my GitHub repository:\\\\\n\\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}\n\\lstinputlisting[caption={MECH6327\\_HW3},label={script:HW3}]{MECH6327_HW3.m}\n\n\\newpage\n\\section{Problem 1 MATLAB Code:}\\label{apx:pblm1_matlab}\nAll code I write in this course can be found on my GitHub repository:\\\\\n\\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}\n% MECH6313_HW3_pblm1\n\\lstinputlisting[caption={MECH6327\\_HW3\\_pblm1},label={script:HW3}]{MECH6327_HW3_pblm1.m}\n\n\\newpage\n\\section{Problem 2 MATLAB Code:}\\label{apx:pblm2_matlab}\nAll code I write in this course can be found on my GitHub repository:\\\\\n\\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}\n% MECH6313_HW3_pblm2\n\\lstinputlisting[caption={MECH6327\\_HW3\\_pblm2},label={script:HW3}]{MECH6327_HW3_pblm2.m}\n\n\\newpage\n\\section{Problem 3 MATLAB Code:}\\label{apx:pblm3_matlab}\nAll code I write in this course can be found on my GitHub repository:\\\\\n\\href{https://github.com/jonaswagner2826/MECH6327}{https://github.com/jonaswagner2826/MECH6327}\n% MECH6313_HW3_pblm3\n\\lstinputlisting[caption={MECH6327\\_HW3\\_pblm3},label={script:HW3}]{MECH6327_HW3_pblm3.m}\n\n\\newpage\n\\section{Problem 3 MATLAB Results:}\\label{apx:pblm3_results}\n\n\\begin{Verbatim}\n\t>> MECH6327_HW3_pblm3\n\t\n\tA =\n\t\n\t1.1002   -1.1372    1.1077    0.2641\n\t0.1751    0.6430    0.8205    3.1585\n\t1.0036   -0.0128   -0.8176    1.2266\n\t1.5110    0.9143   -0.1265    2.3206\n\t\n\t\n\tB =\n\t\n\t0.4145    1.2416\n\t0.2118   -0.1576\n\t0.6132   -1.3736\n\t-0.5278    0.8708\n\t\n\t\n\tQ =\n\t\n\t0.4766    0.4525    0.2565    0.4911\n\t0.4525    0.5808    0.3777    0.7950\n\t0.2565    0.3777    0.4440    0.4396\n\t0.4911    0.7950    0.4396    1.2997\n\t\n\t\n\tR =\n\t\n\t1.1591    0.8154\n\t0.8154    0.7716\n\t\n\t\n\tCalling SDPT3 4.0: 31 variables, 10 equality constraints\n\tFor improved efficiency, SDPT3 is solving the dual problem.\n\t------------------------------------------------------------\n\t\n\tnum. of constraints = 10\n\tdim. 
of sdp    var  = 10,   num. of sdp  blk  =  2\n\t*******************************************************************\n\tSDPT3: Infeasible path-following algorithms\n\t*******************************************************************\n\tversion  predcorr  gam  expon  scale_data\n\tHKM      1      0.000   1        0    \n\tit pstep dstep pinfeas dinfeas  gap      prim-obj      dual-obj    cputime\n\t-------------------------------------------------------------------\n\t0|0.000|0.000|8.4e+01|1.2e+01|1.5e+03| 4.731862e+01  0.000000e+00| 0:0:00| chol  1  1 \n\t1|0.894|0.821|8.9e+00|2.3e+00|2.2e+02| 2.141617e+01  1.453677e+01| 0:0:00| chol  1  1 \n\t2|0.833|0.841|1.5e+00|3.7e-01|5.1e+01| 2.431086e+01  1.330905e+01| 0:0:00| chol  1  1 \n\t3|0.515|0.846|7.2e-01|5.7e-02|2.1e+01| 2.140755e+01  2.132680e+01| 0:0:00| chol  1  1 \n\t4|0.187|0.237|5.9e-01|4.4e-02|1.9e+01| 2.434592e+01  7.419460e+01| 0:0:00| chol  1  1 \n\t5|0.061|0.042|5.5e-01|4.2e-02|2.7e+01| 3.321322e+01  1.216786e+02| 0:0:00| chol  1  1 \n\t6|0.104|0.026|4.9e-01|4.1e-02|5.4e+01| 6.400240e+01  6.740367e+01| 0:0:00| chol  1  1 \n\t7|0.129|0.433|4.3e-01|2.3e-02|5.0e+01| 7.772307e+01  1.153429e+02| 0:0:00| chol  1  1 \n\t8|1.000|0.825|1.4e-08|4.0e-03|4.5e+01| 1.616373e+02  1.181693e+02| 0:0:00| chol  1  1 \n\t9|0.962|0.980|2.6e-07|7.9e-05|1.8e+00| 1.344089e+02  1.326294e+02| 0:0:00| chol  1  1 \n\t10|0.965|0.977|9.4e-09|1.8e-06|5.8e-02| 1.333014e+02  1.332444e+02| 0:0:00| chol  1  1 \n\t11|0.958|1.000|3.9e-10|1.9e-09|3.6e-03| 1.332623e+02  1.332587e+02| 0:0:00| chol  1  1 \n\t12|0.987|1.000|7.9e-12|7.9e-11|2.8e-04| 1.332596e+02  1.332594e+02| 0:0:00| chol  1  1 \n\t13|0.952|0.987|1.2e-11|2.6e-12|1.3e-05| 1.332595e+02  1.332595e+02| 0:0:00| chol  1  1 \n\t14|1.000|1.000|5.2e-12|2.3e-12|1.2e-06| 1.332595e+02  1.332595e+02| 0:0:00|\n\tstop: max(relative gap, infeasibilities) < 1.49e-08\n\t-------------------------------------------------------------------\n\tnumber of iterations   = 14\n\tprimal objective value =  1.33259457e+02\n\tdual   objective value =  1.33259456e+02\n\tgap := trace(XZ)       = 1.15e-06\n\trelative gap           = 4.31e-09\n\tactual relative gap    = 4.31e-09\n\trel. primal infeas (scaled problem)   = 5.20e-12\n\trel. dual     \"        \"       \"      = 2.32e-12\n\trel. primal infeas (unscaled problem) = 0.00e+00\n\trel. 
dual     \"        \"       \"      = 0.00e+00\n\tnorm(X), norm(y), norm(Z) = 1.7e+02, 1.1e+02, 1.7e+03\n\tnorm(A), norm(b), norm(C) = 3.5e+01, 3.0e+00, 3.9e+00\n\tTotal CPU time (secs)  = 0.47  \n\tCPU time per iteration = 0.03  \n\ttermination code       =  0\n\tDIMACS: 7.8e-12  0.0e+00  4.0e-12  0.0e+00  4.3e-09  4.3e-09\n\t-------------------------------------------------------------------\n\t\n\t------------------------------------------------------------\n\tStatus: Solved\n\tOptimal value (cvx_optval): -133.259\n\t\n\t\n\tP_cvx =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\t\n\t\n\tK_cvx =\n\t\n\t1.3793    2.0277   -0.1335    4.7523\n\t-1.0233    0.0284   -0.4975   -1.2125\n\t\n\t\n\tP_dare =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\t\n\t\n\tK_dare =\n\t\n\t-1.3793   -2.0277    0.1335   -4.7523\n\t1.0233   -0.0284    0.4975    1.2125\n\t\n\t\n\tP_cvx =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\t\n\t\n\tP_dare =\n\t\n\t20.3336   10.5025   -3.3125   37.1904\n\t10.5025   12.9464   -1.8618   32.8376\n\t-3.3125   -1.8618    2.3441   -4.8914\n\t37.1904   32.8376   -4.8914   97.6353\n\t\n\t>> MECH6327_HW3_pblm3\n\t\n\tA =\n\t\n\t-0.8568    1.3798   -0.6563    1.1284\n\t0.0484    0.0951   -0.1250    0.7425\n\t-0.6649   -0.4271   -0.5305    1.1436\n\t1.4527    0.5108    0.1056   -0.9147\n\t\n\t\n\tB =\n\t\n\t0.1798    1.2963\n\t-0.9833    1.0992\n\t0.3848    0.6532\n\t0.3257   -0.5051\n\t\n\t\n\tQ =\n\t\n\t0.3838    0.3714    0.4771    0.6771\n\t0.3714    1.2344    1.2453    1.0731\n\t0.4771    1.2453    1.4076    1.4208\n\t0.6771    1.0731    1.4208    2.1397\n\t\n\t\n\tR =\n\t\n\t0.5081    0.7407\n\t0.7407    1.1909\n\t\n\t\n\tCalling SDPT3 4.0: 31 variables, 10 equality constraints\n\tFor improved efficiency, SDPT3 is solving the dual problem.\n\t------------------------------------------------------------\n\t\n\tnum. of constraints = 10\n\tdim. of sdp    var  = 10,   num. 
of sdp  blk  =  2\n\t*******************************************************************\n\tSDPT3: Infeasible path-following algorithms\n\t*******************************************************************\n\tversion  predcorr  gam  expon  scale_data\n\tHKM      1      0.000   1        0    \n\tit pstep dstep pinfeas dinfeas  gap      prim-obj      dual-obj    cputime\n\t-------------------------------------------------------------------\n\t0|0.000|0.000|4.4e+01|5.2e+00|1.0e+03| 6.864540e+01  0.000000e+00| 0:0:00| chol  1  1 \n\t1|0.886|0.852|5.0e+00|8.2e-01|1.7e+02| 5.707109e+01  1.265425e+01| 0:0:00| chol  1  1 \n\t2|0.782|1.000|1.1e+00|5.5e-03|5.2e+01| 5.048488e+01  1.175262e+01| 0:0:00| chol  1  1 \n\t3|0.752|1.000|2.7e-01|5.5e-04|1.6e+01| 2.602445e+01  1.535157e+01| 0:0:00| chol  1  1 \n\t4|1.000|0.756|1.8e-07|1.8e-04|7.3e+00| 2.578672e+01  1.846254e+01| 0:0:00| chol  1  1 \n\t5|0.933|1.000|1.8e-08|5.6e-06|5.8e-01| 2.048293e+01  1.990168e+01| 0:0:00| chol  1  1 \n\t6|0.964|0.970|2.0e-09|7.1e-07|2.9e-02| 2.011379e+01  2.008495e+01| 0:0:00| chol  1  1 \n\t7|0.965|1.000|6.3e-10|5.6e-08|1.7e-03| 2.009536e+01  2.009368e+01| 0:0:00| chol  1  1 \n\t8|0.994|1.000|1.8e-10|1.3e-10|1.2e-04| 2.009415e+01  2.009404e+01| 0:0:00| chol  1  1 \n\t9|0.953|0.987|6.7e-11|3.7e-11|5.3e-06| 2.009409e+01  2.009408e+01| 0:0:00| chol  1  1 \n\t10|1.000|1.000|6.7e-15|1.3e-11|1.2e-06| 2.009408e+01  2.009408e+01| 0:0:00| chol  1  1 \n\t11|1.000|1.000|1.7e-14|1.0e-12|1.3e-08| 2.009408e+01  2.009408e+01| 0:0:00|\n\tstop: max(relative gap, infeasibilities) < 1.49e-08\n\t-------------------------------------------------------------------\n\tnumber of iterations   = 11\n\tprimal objective value =  2.00940824e+01\n\tdual   objective value =  2.00940824e+01\n\tgap := trace(XZ)       = 1.33e-08\n\trelative gap           = 3.22e-10\n\tactual relative gap    = 3.22e-10\n\trel. primal infeas (scaled problem)   = 1.70e-14\n\trel. dual     \"        \"       \"      = 1.00e-12\n\trel. primal infeas (unscaled problem) = 0.00e+00\n\trel. 
dual     \"        \"       \"      = 0.00e+00\n\tnorm(X), norm(y), norm(Z) = 8.9e+00, 1.2e+01, 8.9e+01\n\tnorm(A), norm(b), norm(C) = 1.8e+01, 3.0e+00, 5.7e+00\n\tTotal CPU time (secs)  = 0.42  \n\tCPU time per iteration = 0.04  \n\ttermination code       =  0\n\tDIMACS: 2.6e-14  0.0e+00  1.8e-12  0.0e+00  3.2e-10  3.2e-10\n\t-------------------------------------------------------------------\n\t\n\t------------------------------------------------------------\n\tStatus: Solved\n\tOptimal value (cvx_optval): -20.0941\n\t\n\t\n\tP_cvx =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\t\n\t\n\tK_cvx =\n\t\n\t0.5888   -0.3683    0.2601   -0.1295\n\t0.7294   -0.5856    0.4291   -0.9410\n\t\n\t\n\tP_dare =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\t\n\t\n\tK_dare =\n\t\n\t-0.5888    0.3683   -0.2601    0.1295\n\t-0.7294    0.5856   -0.4291    0.9410\n\t\n\t\n\tP_cvx =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\t\n\t\n\tP_dare =\n\t\n\t9.2752    0.3867    1.2843   -2.9794\n\t0.3867    4.4079    0.6458    1.1789\n\t1.2843    0.6458    1.9253    0.5430\n\t-2.9794    1.1789    0.5430    4.4857\n\t\n\t>> MECH6327_HW3_pblm3\n\t\n\tA =\n\t\n\t-1.5312    0.7880    1.6345   -0.9443\n\t0.5046    0.2982   -0.6235   -0.6712\n\t-0.8642   -0.1637   -1.3501    0.5767\n\t-0.3766    0.6067   -1.1622   -2.0858\n\t\n\t\n\tB =\n\t\n\t0.2360    0.0076\n\t-0.7784   -0.9376\n\t1.0996   -0.6816\n\t-0.8556   -0.2601\n\t\n\t\n\tQ =\n\t\n\t1.5071    1.3058    1.2427    1.5227\n\t1.3058    2.0018    1.1626    1.6402\n\t1.2427    1.1626    1.9380    2.0005\n\t1.5227    1.6402    2.0005    2.2226\n\t\n\t\n\tR =\n\t\n\t0.1582    0.2334\n\t0.2334    0.4393\n\t\n\t\n\tCalling SDPT3 4.0: 31 variables, 10 equality constraints\n\tFor improved efficiency, SDPT3 is solving the dual problem.\n\t------------------------------------------------------------\n\t\n\tnum. of constraints = 10\n\tdim. of sdp    var  = 10,   num. 
of sdp  blk  =  2\n\t*******************************************************************\n\tSDPT3: Infeasible path-following algorithms\n\t*******************************************************************\n\tversion  predcorr  gam  expon  scale_data\n\tHKM      1      0.000   1        0    \n\tit pstep dstep pinfeas dinfeas  gap      prim-obj      dual-obj    cputime\n\t-------------------------------------------------------------------\n\t0|0.000|0.000|4.4e+01|3.9e+00|1.0e+03| 8.266937e+01  0.000000e+00| 0:0:00| chol  1  1 \n\t1|0.818|0.880|8.1e+00|5.1e-01|1.7e+02| 3.173665e+01  9.183288e+00| 0:0:00| chol  1  1 \n\t2|0.761|0.758|1.9e+00|1.3e-01|6.0e+01| 2.745402e+01  1.125760e+01| 0:0:00| chol  1  1 \n\t3|0.553|0.897|8.6e-01|1.3e-02|2.5e+01| 2.195296e+01  1.974743e+01| 0:0:00| chol  1  1 \n\t4|0.169|0.214|7.2e-01|1.1e-02|2.1e+01| 2.472518e+01  1.029151e+02| 0:0:00| chol  1  1 \n\t5|0.017|0.022|7.1e-01|1.0e-02|3.3e+01| 3.109723e+01  1.695860e+02| 0:0:00| chol  1  1 \n\t6|0.061|0.035|6.6e-01|1.0e-02|7.8e+01| 8.019566e+01  2.054055e+02| 0:0:00| chol  1  1 \n\t7|0.423|0.391|3.8e-01|6.1e-03|1.4e+02| 2.228571e+02  2.222143e+02| 0:0:00| chol  1  1 \n\t8|1.000|0.531|8.3e-06|2.8e-03|1.1e+02| 3.445733e+02  2.448139e+02| 0:0:00| chol  2  1 \n\t9|0.952|1.000|1.7e-06|1.7e-06|1.6e+01| 2.823759e+02  2.660703e+02| 0:0:00| chol  1  1 \n\t10|0.950|0.991|8.3e-08|3.5e-07|1.1e+00| 2.712737e+02  2.701719e+02| 0:0:00| chol  1  1 \n\t11|1.000|1.000|2.1e-10|1.7e-08|1.1e-01| 2.705570e+02  2.704431e+02| 0:0:00| chol  1  1 \n\t12|0.959|0.980|1.7e-10|4.8e-10|4.0e-03| 2.704909e+02  2.704869e+02| 0:0:00| chol  1  1 \n\t13|0.993|1.000|1.0e-10|3.4e-11|2.4e-04| 2.704880e+02  2.704878e+02| 0:0:00| chol  1  1 \n\t14|0.954|0.989|8.0e-11|2.1e-11|1.1e-05| 2.704879e+02  2.704879e+02| 0:0:00| chol  1  1 \n\t15|1.000|1.000|1.7e-10|1.6e-11|1.0e-06| 2.704879e+02  2.704879e+02| 0:0:00|\n\tstop: max(relative gap, infeasibilities) < 1.49e-08\n\t-------------------------------------------------------------------\n\tnumber of iterations   = 15\n\tprimal objective value =  2.70487886e+02\n\tdual   objective value =  2.70487885e+02\n\tgap := trace(XZ)       = 1.04e-06\n\trelative gap           = 1.92e-09\n\tactual relative gap    = 1.87e-09\n\trel. primal infeas (scaled problem)   = 1.72e-10\n\trel. dual     \"        \"       \"      = 1.59e-11\n\trel. primal infeas (unscaled problem) = 0.00e+00\n\trel. 
dual     \"        \"       \"      = 0.00e+00\n\tnorm(X), norm(y), norm(Z) = 8.9e+02, 2.2e+02, 1.4e+03\n\tnorm(A), norm(b), norm(C) = 2.2e+01, 3.0e+00, 7.5e+00\n\tTotal CPU time (secs)  = 0.45  \n\tCPU time per iteration = 0.03  \n\ttermination code       =  0\n\tDIMACS: 2.6e-10  0.0e+00  3.7e-11  0.0e+00  1.9e-09  1.9e-09\n\t-------------------------------------------------------------------\n\t\n\t------------------------------------------------------------\n\tStatus: Solved\n\tOptimal value (cvx_optval): -270.488\n\t\n\t\n\tP_cvx =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\t\n\t\n\tK_cvx =\n\t\n\t-4.7337    3.0098    1.0219   -6.7728\n\t0.2527   -0.0799   -1.2428    0.2020\n\t\n\t\n\tP_dare =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\t\n\t\n\tK_dare =\n\t\n\t4.7337   -3.0098   -1.0219    6.7728\n\t-0.2527    0.0799    1.2428   -0.2020\n\t\n\t\n\tP_cvx =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\t\n\t\n\tP_dare =\n\t\n\t101.8040  -50.0081   -0.8792  117.2149\n\t-50.0081   28.3481    2.3037  -57.6749\n\t-0.8792    2.3037    4.1602    0.2983\n\t117.2149  -57.6749    0.2983  136.1756\n\\end{Verbatim}\n\n\n\n%\\newpage\n%\\section*{}\n%\\bibliographystyle{ieeetr}\n%\\bibliography{mybib.bib}\n\n\n\\end{document}\n", "meta": {"hexsha": "1534d19898125223271560b87da985a008273241", "size": 36450, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW3/MECH6327-HW3.tex", "max_stars_repo_name": "jonaswagner2826/MECH6327", "max_stars_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework/HW3/MECH6327-HW3.tex", "max_issues_repo_name": "jonaswagner2826/MECH6327", "max_issues_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW3/MECH6327-HW3.tex", "max_forks_repo_name": "jonaswagner2826/MECH6327", "max_forks_repo_head_hexsha": "2b55aaf6f9e1bcf5cc684f5c853cadec26acf9d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.0426829268, "max_line_length": 329, "alphanum_fraction": 0.6159670782, "num_tokens": 15943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.8031738034238806, "lm_q1q2_score": 0.5746098117541225}}
{"text": "%!TEX root = ms.tex\n\\section{ Information criteria for fixed-X }\n\\label{sec:ic_fixedx}\n\\subsection{KL-based information criterion}\n%In this section, we assume fixed-X, and provide expressions for C$_p$, FPE and AICc under general linear restrictions on $\\beta$. The expressions of C$_p(k)$, FPE$(k)$ and AICc$(k)$ given in \\eqref{eq:cp_subsetselection}, \\eqref{eq:cptilde_subsetselection} and \\eqref{eq:aicc_subsetselection}, respectively, for variable selection can be obtained as special cases of the general expressions.\n\nUsing the likelihood function \\eqref{eq:loglike_fixedx} and the MLE \\eqref{eq:betahat_sigmahatsq}, the expected log-likelihood can be derived as \n\\begin{equation*}\n\\begin{aligned}\n\\text{ErrF}_\\text{KL} &=  E_{\\tilde{y}} [-2 \\log f( \\tilde{y} | X,\\hat\\beta,\\hat\\sigma^2 )] =  n \\log (2\\pi \\hat\\sigma^2) + \\frac{1}{\\hat\\sigma^2} E_{\\tilde{y}} || \\tilde{y}-X\\hat\\beta||_2^2 \\\\\n%&= n \\log (2\\pi \\sigma^2) + \\frac{1}{\\sigma^2} E_\\tilde{y} || X\\beta_0-X \\beta+\\epsilon||_2^2 \\\\\n&= n \\log (2\\pi \\hat\\sigma^2) + \\frac{1}{\\hat\\sigma^2}  (\\hat\\beta-\\beta_0)^T X^T X (\\hat\\beta-\\beta_0) + \\frac{n\\sigma_0^2}{\\hat\\sigma^2},\n\\end{aligned}\n\\end{equation*}\nand the training error is \n\\begin{equation*}\n\\text{errF}_\\text{KL} = -2\\log f(y|X,\\hat\\beta,\\hat\\sigma^2) = n\\log(2\\pi\\hat\\sigma^2) + n.\n\\end{equation*}\nIn the context of variable selection, the assumption that the approximating model includes the true model is used in the derivations of AIC \\citep{linhart1986model} and AICc \\citep{hurvich1989regression}. This assumption can be generalized to the context of general restrictions. \n\\begin{assumption}\nIf the approximating model satisfies the restrictions $R\\beta = r$, then the true model satisfies the analogous restrictions $R\\beta_0 = r$; that is, the true model is at least as restrictive as the approximating model. \n\\label{assumption}\n\\end{assumption}\nUnder this assumption, we have the following lemma. The proofs for all of the lemmas and theorems in this paper are given in the Supplemental Material.   \n\\begin{lemma}\n  Under Assumption \\ref{assumption}, $\\hat\\sigma^2$ and the quadratic form $(\\hat \\beta-\\beta_0)^T X^T X (\\hat \\beta-\\beta_0)$ are independent, and \n  \\begin{equation*}\n  \\begin{aligned}\n    n \\sigma_0^2 E_y\\left[ \\frac{1}{\\hat{\\sigma}^2} \\right] &= n\\frac{n}{n-p+m-2},\\\\\n    E_y  \\left [ (\\hat \\beta-\\beta_0)^T X^T X (\\hat \\beta-\\beta_0) \\right ] &= \\sigma_0^2 (p-m).\n  \\end{aligned}\n  \\end{equation*}\n\\label{thm:components_ekl_lr_fixedx}\n\\end{lemma}\nLemma \\ref{thm:components_ekl_lr_fixedx} provides the fundamentals for calculating the expected optimism.\n\\begin{theorem}\nUnder Assumption \\ref{assumption}, \n\\begin{equation*}\nE_y(\\text{optF}_\\text{KL}) = n \\frac{n+p-m}{n-p+m-2} - n.\n\\end{equation*}\n\\label{thm:EoptF_KL}\n\\end{theorem}\n\nConsequently, \n\\begin{equation*}\n\\widehat{\\text{ErrF}}_\\text{KL} = \\text{errF}_\\text{KL} + E_y(\\text{optF}_\\text{KL}) = n\\log(\\hat\\sigma^2) + n \\frac{n+p-m}{n-p+m-2} + n\\log(2\\pi)\n\\end{equation*}\nis an unbiased estimator of the test error $E_y \\left[ \\text{ErrF}_\\text{KL} \\right]$. We follow the same tradition as in the derivations of AIC and AICc that since the term $n\\log(2\\pi)$ appears in $\\widehat{\\text{ErrF}}_\\text{KL}$ for every model being compared, it is irrelevant for purposes of model selection. 
We therefore ignore this term and define \n\\begin{equation*}\n \\text{AICc}(R,r) = n\\log \\left( \\frac{\\text{RSS}(R,r)}{n}  \\right) + n \\frac{n+p-m}{n-p+m-2},\n\\end{equation*}\nwhere RSS$(R,r)= \\lVert y -X\\hat\\beta \\rVert_2^2$. For the variable selection problem, e.g. regressing on a subset of predictors with size $k$, we are restricting $p-k$ slope coefficients to be zero. By plugging $\\hat\\beta = \\hat\\beta(k)$ and $m=p-k$ into the expressions of AICc$(R,r)$, we obtain AICc$(k)$ given in \\eqref{eq:aicc_subsetselection}.\n\n\n\\subsection{Squared error-based information criterion}\nThe covariance penalty \\eqref{eq:EoptF_SE} is defined for any general fitting procedure. By explicitly calculating the covariance term for $\\hat\\mu=X\\hat\\beta$, we can obtain the expected optimism.\n\\begin{theorem}\n\\begin{equation*}\nE_y (\\text{optF}_\\text{SE}) = 2 \\sigma_0^2 (p-m).\n\\end{equation*}\n\\label{thm:EoptF_SE}\n\\end{theorem}\n\nAn immediate consequence of this is that\n\\begin{equation*}\n\\widehat{\\text{ErrF}}_\\text{SE} = \\text{errF}_\\text{SE} + E_y(\\text{optF}_\\text{SE}) = \\text{RSS}(R,r) + 2 \\sigma_0^2 (p-m)\n\\end{equation*}\nis an unbiased estimator of $E_y(\\text{ErrF}_\\text{SE})$. Using the unbiased estimator of $\\sigma_0^2$ given by the OLS fit based on all of the predictors, i.e. $\\hat\\sigma_0^2=\\text{RSS}(p)/(n-p)$, we define\n\\begin{equation*}\n\\text{C}_p(R,r) = \\text{RSS}(R,r) + \\frac{\\text{RSS}(p)}{n-p} 2(p-m).\n\\end{equation*}\nAn alternative estimate of $\\sigma_0^2$ is $\\text{RSS}(R,r)/(n-p+m)$, which yields \n\\begin{equation*}\n\\text{FPE}(R,r) = \\text{RSS}(R,r)\\frac{n+p-m}{n-p+m}.\n\\end{equation*}\nFor the variable selection problem, by substituting $m=p-k$ into the expressions of C$_p$ and FPE, we obtain the previously-noted definitions of them, i.e. C$_p$(k) and FPE(k) given in \\eqref{eq:cp_subsetselection} and \\eqref{eq:cptilde_subsetselection}, respectively. \n\n\\section{ Information criteria for random-X }\n\\label{sec:ic_randomx}\n%In this section, we assume random-X. We start by proposing and deriving a novel KL-based criterion, RAICc. We further derive the criteria RC$_p$ and S$_p$ under linear restrictions on $\\beta$. \n\n\\subsection{KL-based information criterion, RAICc}\nWe replace the unknown parameters by their MLE, and have the fitted model $f(\\cdot|\\hat\\beta,\\hat\\sigma^2,\\hat\\Sigma)$. The KL information measures how well the fitted model predicts the new set of data $(X^{(n)},y^{(n)})$, in terms of the closeness of the distributions of $(X^{(n)},y^{(n)})$ based on the fitted model and the true model, i.e. 
\n\\begin{equation}\n\\text{KLR} = E_{X^{(n)},y^{(n)}} \\left[ 2\\log f(X^{(n)},y^{(n)}|\\beta_0,\\sigma_0^2,\\Sigma_0) -2 \\log f(X^{(n)},y^{(n)}|\\hat\\beta,\\hat\\sigma^2,\\hat\\Sigma) \\right].\n\\label{eq:KLR}\n\\end{equation}\nAn equivalent form for model comparisons is the expected log-likelihood\n\\begin{equation*}\n\\begin{aligned}\n&\\text{ErrR}_\\text{KL} =  E_{X^{(n)},y^{(n)}} \\left[ -2 \\log f(X^{(n)},y^{(n)}|\\hat\\beta,\\hat\\sigma^2,\\hat\\Sigma) \\right] \\\\\n&= \\left[ n \\log (2\\pi \\hat\\sigma^2) + \\frac{1}{\\hat\\sigma^2} E_{X^{(n)},y^{(n)}} || y^{(n)}-X^{(n)}\\hat\\beta||_2^2 \\right ] + \\left [np \\log(2\\pi) + n \\log |\\hat\\Sigma| + E_{X^{(n)}} \\left(\\sum_{i=1}^n {x_{i}^{(n)}}^T \\hat\\Sigma^{-1} x_{i}^{(n)} \\right) \\right]\\\\\n&= \\left[ n \\log (2\\pi \\hat\\sigma^2) + \\frac{n}{\\hat\\sigma^2}  (\\hat\\beta-\\beta_0)^T \\Sigma_0 (\\hat\\beta-\\beta_0) + \\frac{n\\sigma_0^2}{\\hat\\sigma^2} \\right ] + \\left [np \\log(2\\pi) + n \\log |\\hat\\Sigma| + n \\text{Tr}(\\hat\\Sigma^{-1}\\Sigma_{0})\\right],\n\\end{aligned}\n\\end{equation*}\nand the training error is\n\\begin{equation*}\n\\text{errR}_\\text{KL} = -2\\log f(X,y|\\hat\\beta,\\hat\\sigma^2,\\hat\\Sigma) = \\left [ n \\log (2\\pi \\hat\\sigma^2) + n \\right ] + \\left [np \\log(2\\pi) + n \\log |\\hat\\Sigma| + np \\right ].\n\\end{equation*}\n\nAs in the fixed-X case, we assume that the true model satisfies the restrictions, i.e. $R\\beta_0=r$, and we obtain the following lemma.\n\\begin{lemma}\nUnder Assumption \\ref{assumption}, $\\hat\\sigma^2$ and $(\\hat{\\beta}-\\beta_0)^T \\Sigma_0 (\\hat{\\beta}-\\beta_0)$ are independent conditionally on $X$, and \n\\begin{equation*}\n\\begin{aligned}\nE \\left[ \\text{Tr}(\\hat \\Sigma^{-1}\\Sigma_0) \\right] &= \\frac{np}{n-p-1},\\\\\n%n\\sigma_0^2 E_{X,y} \\left[ \\frac{1}{\\hat\\sigma^2} \\right] &= n \\frac{n}{n-p+m-2},\\\\\nE_{X,y}  \\left [ (\\hat \\beta-\\beta_0)^T \\Sigma_0 (\\hat \\beta-\\beta_0) \\right ] &= \\sigma_0^2 \\frac{p-m}{n-p+m-1}.\n\\end{aligned}\n\\end{equation*}\n\\label{thm:components_ekl_lr_randomx}\n\\end{lemma}\nLemma \\ref{thm:components_ekl_lr_randomx} provides the components for calculating the expected optimism.\n\\begin{theorem}\nUnder Assumption \\ref{assumption}, \n\\begin{equation*}\nE_{X,y}(\\text{optR}_\\text{KL}) = n \\frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + n \\frac{np}{n-p-1} - n(p+1).\n\\end{equation*}\n\\label{thm:EoptR_KL}\n\\end{theorem}\n\nConsequently, \n\\begin{equation*}\n\\begin{aligned}\n\\widehat{\\text{ErrR}}_\\text{KL} &= \\text{errR}_\\text{KL} + E_{X,y} (\\text{optR}_\\text{KL}) \\\\\n&=n\\log \\left(\\hat\\sigma^2\\right) + n \\frac{n(n-1)}{(n-p+m-2)(n-p+m-1)} + n\\log(2\\pi)(p+1) + n\\frac{np}{n-p-1} + n\\log|\\hat\\Sigma|\n\\end{aligned}\n\\end{equation*}\nis an unbiased estimator of the test error $E_{X,y}(\\text{ErrR}_\\text{KL})$. Note that the last three terms are free of the restrictions and only depend on $n$, $p$ and $X$. They are the same when we compare two models with different restrictions on $\\beta$, and are thus irrelevant when comparing criteria for any two such models. 
Therefore, for the purpose of model selection, we define\n\\begin{equation*}\n\\text{RAICc}(R,r) = n\\log \\left(\\frac{\\text{RSS}(R,r)}{n}\\right) + n \\frac{n(n-1)}{(n-p+m-2)(n-p+m-1)}.\n\\end{equation*}\nAn equivalent form is\n\\begin{equation*}\n\\text{RAICc}(R,r) = \\text{AICc}(R,r) + \\frac{n(p-m)(p-m+1)}{(n-p+m-1)(n-p+m-2)}.\n\\end{equation*} \nFor linear regression on a subset of predictors with size $k$, we are restricting $p-k$ coefficients to be zero. By substituting $m=p-k$ and $\\hat\\beta = \\hat\\beta(k)$ into the expression of $\\text{RAICc}(R,r)$, we obtain the RAICc criterion for the variable selection problem, i.e.\n\\begin{equation*}\n\\text{RAICc}(k) = n\\log \\left(\\frac{\\text{RSS}(k)}{n}\\right) + n \\frac{n(n-1)}{(n-k-2)(n-k-1)}.\n\\end{equation*}\n\n\\subsection{Squared error-based information criteria}\nAccording to \\citet[formula 6 and proposition 1]{rosset2020fixed}, $E_{X,y}(\\text{optR}_\\text{SE})$ can be decomposed into $E_{X,y}(\\text{optF}_\\text{SE})$ plus an excess bias term and an excess variance term. We calculate both terms for our estimator $\\hat\\beta$ and obtain the following theorem.\n\\begin{theorem}\nUnder Assumption \\ref{assumption},\n\\begin{equation*}\nE_{X,y}(\\text{optR}_\\text{SE}) = \\sigma_0^2(p-m) \\left( 2+ \\frac{p-m+1}{n-p+m-1} \\right).\n\\end{equation*}\n\\label{thm:EoptR_SE}\n\\end{theorem}\nAn immediate consequence is that\n\\begin{equation*}\n\\widehat{\\text{ErrR}}_\\text{SE} = \\text{errR}_\\text{SE} + E_{X,y} (\\text{optR}_\\text{SE}) = \\text{RSS}(R,r) + \\sigma_0^2(p-m) \\left( 2+ \\frac{p-m+1}{n-p+m-1} \\right)\n\\end{equation*}\nis an unbiased estimator of $E_{X,y}(\\text{ErrR}_\\text{SE})$. Using the OLS fit on all of the predictors to estimate $\\sigma_0^2$, we have \n\\begin{equation*}\n\\text{RC}_p(R,r) = \\text{RSS}(R,r) + \\frac{\\text{RSS(p)}}{n-p}(p-m) \\left(2+\\frac{p-m+1}{n-p+m-1}\\right).\n\\end{equation*}\nAn alternative estimate of $\\sigma_0^2$ is $\\text{RSS}(R,r)/(n-p+m)$, which yields \n\\begin{equation*}\n\\text{S}_p(R,r) = \\text{RSS}(R,r)\\frac{n(n-1)}{(n-p+m)(n-p+m-1)}.\n\\end{equation*}\nFor the variable selection problem, by substituting $m=p-k$ into the expressions of RC$_p$ and S$_p$, we obtain the previously-noted definitions of them, i.e. RC$_p$(k) and S$_p$(k) given in \\eqref{eq:rcp_subsetselection} and \\eqref{eq:sp_subsetselection}, respectively. 
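For concreteness, the subset-selection forms of these criteria (obtained by substituting $m=p-k$) can be evaluated directly. The following Python sketch is our illustration rather than code from the paper, and the RSS values in it are placeholders, not the output of any fitted model:

\begin{verbatim}
import math

def aicc(rss_k, n, k):
    # AICc(k) = n log(RSS(k)/n) + n (n + k) / (n - k - 2)
    return n * math.log(rss_k / n) + n * (n + k) / (n - k - 2)

def raicc(rss_k, n, k):
    # RAICc(k) = n log(RSS(k)/n) + n^2 (n - 1) / ((n - k - 2)(n - k - 1))
    return n * math.log(rss_k / n) + n**2 * (n - 1) / ((n - k - 2) * (n - k - 1))

def cp(rss_k, rss_p, n, p, k):
    # Cp(k) = RSS(k) + 2 k RSS(p) / (n - p)
    return rss_k + 2 * k * rss_p / (n - p)

def fpe(rss_k, n, k):
    # FPE(k) = RSS(k) (n + k) / (n - k)
    return rss_k * (n + k) / (n - k)

def sp(rss_k, n, k):
    # Sp(k) = RSS(k) n (n - 1) / ((n - k)(n - k - 1))
    return rss_k * n * (n - 1) / ((n - k) * (n - k - 1))

# Placeholder example: n = 50 observations, p = 10 predictors in the full model.
n, p = 50, 10
rss = {2: 120.0, 5: 80.0, 10: 70.0}   # hypothetical RSS(k) values
for k, rss_k in rss.items():
    print(k, aicc(rss_k, n, k), raicc(rss_k, n, k),
          cp(rss_k, rss[p], n, p, k), fpe(rss_k, n, k), sp(rss_k, n, k))
\end{verbatim}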
\n", "meta": {"hexsha": "e1e378016d75e9a97b2a5f78741601c68a54ac96", "size": 10989, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/derivation.tex", "max_stars_repo_name": "sentian/RAICc", "max_stars_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/derivation.tex", "max_issues_repo_name": "sentian/RAICc", "max_issues_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/derivation.tex", "max_forks_repo_name": "sentian/RAICc", "max_forks_repo_head_hexsha": "0e3b620354733de1fe953a2a21559bcb20055b96", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.006097561, "max_line_length": 392, "alphanum_fraction": 0.6853216853, "num_tokens": 3998, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.803173791645582, "lm_q1q2_score": 0.5746097935810838}}
{"text": "% !TEX root = main.tex\n\n%-------------------------------------------------\n\\section{Elementary probability}\\label{ss:elementary}\n\nWhat is the probability that a random event $A$ occurs? One answer is that this is a number between $0$ and $1$ that indicates how likely $A$ is to occur. Putting aside the question of how we might find such a number, this is not a convincing definition because we must now explain what we mean by ``how likely''. It turns out that probability is difficult to define precisely.\n\nSuppose for the moment however that probabilities have been assigned to the individual outcomes of a random experiment. This can be represented by a \\emph{probability mass function},\n\\[\n\\begin{array}{cccl}\np:\t& \\Omega\t& \\longrightarrow\t& [0,1] \\\\\n\t& \\omega \t& \\mapsto\t\t\t& p(\\omega).\n\\end{array}\n\\]\nBecause we want to \\emph{quantify} how likely $A$ is to occur, it is reasonable to require that $p(\\omega)\\geq 0$ for all $\\omega\\in\\Omega$. If the sample space is a countable set, it is also reasonable to define the probability of an event $A$ to be equal the sum of the probabilities of the outcomes it contains:\n\\begin{equation}\\label{eq:prob_event}\nP(A) = \\sum_{\\omega\\in A} p(\\omega).\\tag{*}\n\\end{equation}\nThis is a \\emph{probability distribution function}, which assigns probabilities to \\emph{subsets} of the sample space.\n\n\\bigskip\nIntuitively, probability should somehow reflect a sense of \\emph{proportion}, between how likely $A$ is to occur and how likely $A$ is not to occur. \n\\bit\n\\it The ratio $P(A)/P(A^c)$ is called the \\emph{odds on} $A$ occurring.\n\\it The ratio $P(A^c)/P(A)$ is called the \\emph{odds against} $A$ occuring. \n\\eit\nOdds suffer by the fact that they can take arbitrarily large values. A better definition is obtained by expressing how likely $A$ is to occur as a proportion of how likely \\emph{any} event is to occur, and make the latter equal to $1$:\n\\[\n\\sum_{\\omega\\in\\Omega} p(\\omega) = 1.\n\\]\n%The probability $P(A)$ can then be viewed as a proportion of this total ammount.\n\n\\begin{exercise}\\label{exe:elementary_properties}\nShow that probability distribution functions as defined in \\eqref{eq:prob_event} have the following properties, where $A$ and $B$ are subsets of the sample space:\n\\ben\n\\it $P(\\emptyset)=0$ and $P(\\Omega)=1$.\n\\it Complementarity: $P(A^c) = 1 - P(A)$.\n\\it Monotonicity: if $A\\subseteq B$ then $P(A)\\leq P(B)$.\n\\it Additivity: if $A$ and $B$ are disjoint then $P(A\\cup B)=P(A)+P(B)$.\n\\it Addition formula: $P(A\\cup B)=P(A)+P(B)-P(A\\cap B)$.\n\\een\n\\end{exercise}\n\n%-----------------------------\n\\subsection{Conditional probability}\n\nSuppose we know that event $B$ occurs. Then event $A$ occurs if and only if $A\\cap B$ occurs, and the probability that $A$ occurs should now represent how likely $A\\cap B$ is to occur as a proportion of how likely $B$ was to occur in the first place. Thus we define the \\emph{conditional probability of $A$ given $B$} to be\n\\[\nP(A|B) = \\frac{P(A\\cap B)}{P(B)}.\n\\] \nprovided that $P(B)>0$. 
\n\nTo analyse random events it is often useful to break them into disjoint components and analyse each component separately.\n\\begin{definition}\nA family of sets $\\{A_1,A_2,\\ldots,A_n\\}$ is called a \\emph{partition} of a set $B$ if \n\\ben\n\\it $B \\subseteq \\bigcup_{i=1}^{n} A_i$ and\n\\it $A_i\\cap A_j = \\emptyset$ for all $i\\neq j$.\n\\een\n\\end{definition}\n\n% partition theorem\n\\begin{theorem}[Partition Theorem]\\label{thm:partition}\nIf $\\{A_1,A_2,\\ldots,A_n\\}$ is a partition of $B$ then \n$\nP(B) = \\sum_{i=1}^{n} P(B|A_i)P(A_i).\n$\n\\begin{proof}\nBecause the $A_i$ are disjoint and $B$ is contained in $\\bigcup_{i=1}^{n}A_i$, we can represent $B$ as a disjoint union,\n\\[\nB = (B\\cap A_1)\\cup (B\\cap A_2)\\cup \\ldots \\cup (B\\cap A_n) = \\bigcup_{i=1}^{n}(B\\cap A_i).\n\\]\nBy the additivity property (see Exercise~\\ref{exe:elementary_properties}),\n\\begin{align*}\nP(B)\n\t= P\\left(\\bigcup_{i=1}^{n}(B\\cap A_i)\\right)\n\t= \\sum_{i=1}^{n}P(B\\cap A_i)\n\t= \\sum_{i=1}^{n}P(B|A_i)P(A_i),\n\\end{align*}\nas required.\n\\end{proof}\n\\end{theorem}\n\nIf we perform a random experiment and observe that $B$ occurs, how does this change what we know about event $A_i$? This is answered by Bayes' theorem.\n\n\\begin{theorem}[Bayes' Theorem]\\label{thm:bayes}\nIf $\\{A_1,A_2,\\ldots,A_n\\}$ is a partition of $B$ and $P(B)>0$, then\n$\nP(A_i|B) = \\displaystyle\\frac{P(B|A_i)P(A_i)}{\\sum_j P(B|A_j)P(A_j)}.\n$\n\\begin{proof}\nSet intersection is commutative so\n\\[\nP(A|B) = \\frac{P(A\\cap B)}{P(B)} = \\frac{P(B\\cap A)}{P(B)} = \\frac{P(B|A)P(A)}{P(B)}.\n\\]\nHence,\n\\[\nP(A_i|B) \n\t= \\frac{P(B|A_i)P(A_i)}{P(B)}\n\t= \\frac{P(B|A_i)P(A_i)}{\\sum_{j=1}^{n} P(B|A_j)P(A_j)}\n\\]\nwhere the last equality follows by the partition theorem, as required.\n\\end{proof}\n\\end{theorem}\n\n% elementary example\n\\begin{example}\nOne in every $100$ people has a certain disease. People having the disease test positively with probability 0.92; people not having the disease test negatively with probability 0.97. If a person tests positively for the disease, what is the probability that the person actually has the disease? \n\\begin{solution}\nLet $A$ be the event that the person has the disease. Let $B$ be the event that\nthe person tests positively: \n\\[\nP(A)=0.01,\\ P(A^c)=0.99,\\ P(B|A)=0.92 \\text{ and } P(B|A^c)=0.03.\n\\]\nThe set $\\{A,A^c\\}$ is a partition of $B$, so by the partition theorem,\n\\[\nP(B) \n\t= P(B|A)P(A)+P(B|A^c)P(A^c) \n\t= (0.92\\times 0.01)+(0.03\\times 0.99)\n\t= 0.0389.\n\\]\nBy Bayes' theorem,\n\\[\nP(A|B)\t\n\t= \\frac{P(B|A)P(A)}{P(B)}\n\t= \\frac{0.92\\times 0.01}{0.0389}\n\t= 0.2365 \\text{ approx.}\n\\]\nApproximately one quarter of positive tests are ``true positives''; the remainder are ``false positives''.\n\\end{solution}\n\\end{example}\n\n\\begin{exercise}\n\\begin{questions}\n\\question % Bertrand\n\\textbf{Bertrand's box paradox}. Suppose we have a box containing two gold coins, a box containing two silver coins, and a box containing one gold coin and one silver coin. A box is chosen at random, and a coin is chosen at random from the box. If the chosen coin is a gold coin, what is the probability that the box contains another gold coin?\n\\begin{answer}\nLet $A$ be the event that a gold coin is chosen. 
By Bayes' theorem (with the obvious notation),\n\\begin{align*}\nP(GG|A) \n\t& = \\frac{P(A|GG)P(GG)}{P(A|GG)P(GG)+P(A|SS)P(SS)+P(A|GS)P(GS)} \\\\\n\t& = \\frac{1\\times 1/3}{(1\\times 1/3)+(0\\times 1/3)+(1/2\\times 1/3)}\n\t= \\frac{2}{3}.\n\\end{align*}\nThis is related to the famous \\emph{Monty Hall problem}. \n\\end{answer}\n\\question % Galton\n\\textbf{Galton's paradox}. Three fair coins are tossed independently. Given that at least two are alike, the probability that they are all alike is $1/2$. Do you agree?\n\\begin{answer}\nNo. For any outcome at least two coins will be alike, so this provides no additional information. In fact,\n\\[\nP(\\text{All are alike} | \\text{Two are alike}) = P(\\text{All are alike}) = 1/4.\n\\]\n\\end{answer}\n\\end{questions}\n\\end{exercise}\n\n\n%-----------------------------\n\\subsection{Independence}\n\nIf the probability that event $A$ occurs is not affected by whether or not event $B$ occurs (and vice versa), we say that $A$ and $B$ are \\emph{independent}. \n\n% definition: independence\n\\begin{definition}\nTwo events $A$ and $B$ are said to be \\emph{independent} if $P(A|B)=P(A)$ or equivalently\n\\[\nP(A\\cap B) = P(A)P(B).\n\\]\n\\end{definition}\n\n\\begin{example}\nA fair die is rolled once. Let $A$ be the event that the outcome is even, and let $B$ be the event that the outcome is divisible by $3$. Are $A$ and $B$ independent?\n\\begin{solution}\n\\[\nP(A) = P(\\{2,4,6\\}) = 1/2;\\quad\nP(B) = P(\\{3,6\\}) = 1/3;\\quad\nP(A\\cap B) = P(\\{6\\}) = 1/6.\n\\]\nThus $A$ and $B$ are independent because $P(A\\cap B) = P(A)P(B)$.\n\\end{solution}\n\\end{example}\n\n%-----------------------------\n%\\subsubsection{Pairwise and total independence}\nThe notion of independence can be extended to three or more events.\n \n\\begin{definition}\nA family of events $\\{A_1,A_2,\\ldots\\}$ is said to be \n\\ben\n\\it \\emph{pairwise independent} if $P(A_i\\cap A_j)=P(A_i)P(A_j)$ for all $i\\neq j$.\n\\it \\emph{totally independent} if for every finite sub-family \n$\\{B_1,B_2,\\ldots,B_m\\}\\subset \\{A_1,A_2,\\ldots\\}$, \n\\[\nP(B_1\\cap B_2\\cap \\ldots \\cap B_m) = P(B_1)P(B_2)\\cdots P(B_m).\n\\]\n\\een\n\\end{definition}\nNote that total independence implies pairwise independence, but not vice versa.\n\n\\begin{example}\nLet $\\Omega=\\{1,2,3,4\\}$ where each outcome is equally likely, and consider the events $A=\\{1,2\\}$, $B=\\{1,3\\}$ and $C=\\{1,4\\}$. Show that $\\{A,B,C\\}$ is pairwise independent but not totally independent.\n\\begin{solution}\n\\bit\n\\it\n$A$ and $B$ are independent: $P(A)=P(B)=1/2$ and $P(A\\cap B)=1/4$ so $P(A\\cap B) = P(A)P(B)$. \n\\it\nSimilarly $A$ and $C$ are independent, and $B$ and $C$ are independent, so the set $\\{A,B,C\\}$ is pairwise independent. \n\\it\nIn contrast, $P(A\\cap B\\cap C) = 1/4$ but $P(A)P(B)P(C)=1/8$, so the set $\\{A,B,C\\}$ is not totally independent.\n\\eit\n\\end{solution}\n\\end{example}\n\n\\begin{exercise}\n\\begin{questions}\n\\question % (1)\nLet $A$ and $B$ be two events with $P(A)=0.2$, $P(B)=0.5$ and $P(A\\cap B)=0.1$. 
Compute the conditional probability $P(A|B)$ and decide whether or not $A$ and $B$ are independent.\n\\begin{answer}\n\\bit\n\\it $P(A|B)   = P(A\\cap B)/P(B) = 0.1/0.5 = 0.2$.\n\\it $P(A|B)=P(A)$ so $A$ and $B$ are independent.\n\\eit\n\\end{answer}\n\n\\question % (2)\nShow that if $A$ and $B$ are independent then $A$ and $B^c$ are also independent.\n\\begin{answer}\n\\begin{align*}\nP(A\\cap B^c) \n\t& = P(A) - P(A\\cap B) \\\\\n\t& = P(A) - P(A)P(B) \\quad\\text{by independence,} \\\\\n\t& = P(A)(1-P(B)) \\\\\n\t& = P(A)P(B^c).\n\\end{align*}\n\\end{answer}\n\n\\question % de Mere\n\\textbf{de M\\'{e}r\\'{e}'s paradox}. Solve de M\\'{e}r\\'{e}'s paradox by computing the probability that at least one six appears in 4 rolls of a single fair die, and the probability that at least one double-six appears in 24 rolls of two fair dice. \n\\begin{answer}\nAssuming that the rolls are independent:\n\\begin{align*}\nP(\\text{at least one six in 4 rolls of a single die})\n\t& = 1- P(\\text{no sixes obtained in 4 rolls}) \\\\\n\t& = 1 - (5/6)^4 \\quad\\text{(by independence)} \\\\\n\t& = 0.5177 \\\\\nP(\\text{at least one double six in 24 rolls of two dice}) \n\t& = 1- P(\\text{no double-sixes in 24 rolls}) \\\\\n\t& = 1 - (35/36)^{24} \\quad\\text{(by independence)} \\\\\n\t& = 0.4914\n\\end{align*}\n\\end{answer}\n\n\\question % independence\nTwo fair dice are rolled independently. Let $A$ be the event that the first die shows $3$, $B$ that the second die shows $4$, and $C$ that the total shown on the two dice is $7$.\n\\begin{parts}\n\\part\nDefine a sample space for the experiment and identify the subsets corresponding to events $A$, $B$ and $C$.\n\\begin{answer}\n\\begin{itemize}\n\\item[] $\\Omega = \\{(i,j):1\\leq i,j\\leq 6\\}$\n\\item[] $A = \\{(3,j):1\\leq j\\leq 6\\}$\n\\item[] $B = \\{(i,4):1\\leq i\\leq 6\\}$\n\\item[] $C = \\{(i,j):1\\leq i,j\\leq 6\\mbox{ and }i+j=7\\}$\n\\end{itemize}\nNote that because each outcome is equally likely, $P(A)=P(B)=P(C)=1/6$.\n\\end{answer}\n\\part\nShow that $\\{A,B,C\\}$ is pairwise independent but not (totally) independent.\n\\begin{answer}\nTo show that the set $\\{A,B,C\\}$ is pairwise independent, we need to show that any two events chosen from the set are independent of each other.  \n\\bit\n\\it For $A$ and $B$, their intersection $A\\cap B$ is the event $\\{(3,4)\\}$, consisting of the single outcome $(3,4)$. This means that $P(A\\cap B)=1/36$ and hence $P(A\\cap B) = P(A)P(B)$. \n\\it Similarly, $A\\cap C=\\{(3,4)\\}$ so $P(A\\cap C)=P(A)P(C)$, and $B\\cap C=\\{(3,4)\\}$ so $P(B\\cap C)=P(B)P(C)$.  Hence, the set $\\{A,B,C\\}$ is pairwise independent.  \n\\it However, the intersection of all three events is also $\\{(3,4)\\}$ so $P(A\\cap B\\cap C)=1/36$. Hence $P(A\\cap B\\cap C) \\neq P(A)P(B)P(C)$ so the set $\\{A,B,C\\}$ is not totally independent.\n\\eit\n\\end{answer}\n\\end{parts}\n\n\\question % bayes theorem (GS 1.8.15)\nA random number $N$ of dice are rolled independently. Suppose that $P(N=k) = 2^{-k}$ for $k\\in\\{1,2,\\ldots\\}$ (and zero otherwise). Let $S$ be the sum of the scores shown on the dice. 
Show that\n\\begin{parts}\n\\part $P(N=2|S=4)=432/2197$,\n\\begin{answer}\nThe event $S=4$ can only occur when\n\\bit\n\\it $N=1$ and the die shows $4$ (which occurs with probability $1/6$),\n\\it $N=2$ and the dice show $(1,3)$, $(2,2)$ or $(3,1)$ (which occurs with probability $3/36$),\n\\it $N=3$ and the dice show $(1,1,2)$, $(1,2,1)$ or $(2,1,1)$ (which occurs with probability $3/216$),\n\\it $N=4$ and all four dice show $1$ (which occurs with probability $1/1296$).\n\\eit\nBy Bayes' theorem,\n\\begin{align*}\nP(N=2|S=4)\n\t& = \\frac{P(S=4|N=2)P(N=2)}{\\sum_{k=1}^{\\infty} P(S=4|N=k)P(N=k)} \\\\\n\t& = \\frac{3/36\\times 1/4}{(1/6\\times 1/2) + (3/36\\times 1/4) + (3/216\\times 1/8) + (1/1296\\times 1/16)} \\\\\n\t& = \\frac{432}{1728 + 432 + 36 + 1}\n\t  = \\frac{432}{2197}.\n\\end{align*}\n\\end{answer}\n\n\\part $P(S=4|N\\text{ is even})=433/6912$,\n\\begin{answer}\nUsing the formula for a geometric series with first term $1/4$ and common ratio $1/4$, we have\n\\[\nP(\\text{$N$ even}) = \\frac{1}{4} + \\frac{1}{16} + \\frac{1}{64} + \\ldots = \\frac{1}{3}.\n\\]\nIf $N$ is even, then $S=4$ only if \n\\bit\n\\it $N=2$ and the dice show $(1,3)$, $(2,2)$ or $(3,1)$ (which occurs with probability $3/36$),\n\\it $N=4$ and all four dice show $1$ (which occurs with probability $1/1296$).\n\\eit\nHence,\n\\begin{align*}\nP(S=4|\\text{$N$ even})\n\t& = \\frac{P(\\text{$S=4$ and $N$ even})}{P(\\text{$N$ even})} \\\\ \n\t& = \\frac{P(S=4|N=2)P(N=2) + P(S=4|N=4)P(N=4)}{P(\\text{$N$ even})} \\\\ \n\t& = \\frac{(3/36\\times 1/4) + (1/1296\\times 1/16)}{1/3} \n\t= \\frac{433}{6912}.\n\\end{align*}\n\\end{answer}\n\n\\part $P(N=2\\,|\\,S=4\\text{ and the first die shows }1) = 144/169$,\n\\begin{answer}\nLet $D$ be the score on the first die. If $D=1$, then $S=4$ only if \n\\bit\n\\it $N=2$ and the other die shows $3$ (which occurs with probability 1/6),\n\\it $N=3$ and the other two dice show $(1,2)$ or $(2,1)$ (which occurs with probability 2/36), or\n\\it $N=4$ and the other three dice all show $1$ (which occurs with probability 1/216).\n\\eit\nHence,\n\\begin{align*}\nP(N=2|S=4,D=1)\n\t& = \\frac{P(N=2,S=4,D=1)}{P(S=4,D=1)} \\\\ \n\t& = \\frac{1/6\\times 1/4}{(1/6\\times 1/4) +(2/36\\times 1/8) + (1/216\\times 1/16)}\n\t= \\frac{144}{169}.\n\\end{align*}\n\\end{answer}\n\n\\part $P(\\text{Largest number shown is at most $m$}) = m/(12-m)$ for every $m\\in\\{1,2,3,4,5,6\\}$.\n\\begin{answer}\nLet $M$ be the maximum number shown on the dice. For $m\\in\\{1,2,3,4,5,6\\}$, the probability that the score shown on a single die is at most $m$ is $m/6$. Assuming that the dice are independent, the probability that all of the scores shown on $k$ dice are at most $m$ is $(m/6)^k$. 
Hence,\n\\begin{align*}\nP(M\\leq m)\n\t& = \\sum_{k=1}^{\\infty}P(M\\leq m|N=k)P(N=k) \\\\\n\t& = \\sum_{k=1}^{\\infty}\\left(\\frac{m}{6}\\right)^k\\frac{1}{2^k} \n\t= \\sum_{k=1}^{\\infty}\\left(\\frac{m}{12}\\right)^k \n\t= \\frac{m}{12-m},\n\\end{align*}\nwhere we have used the formula for a geometric series whose first term and common ratio are both equal to $m/12$.\n\\end{answer}\n\\end{parts}\n\n\\end{questions}\n\\end{exercise}\n\n\n\n", "meta": {"hexsha": "129e70ab16aaa10f1e8d739a8085b6f758ffdd89", "size": 14875, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2500/01B_elementary.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2500/01B_elementary.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2500/01B_elementary.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 41.43454039, "max_line_length": 377, "alphanum_fraction": 0.656, "num_tokens": 5408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737473266735, "lm_q2_score": 0.8418256532040708, "lm_q1q2_score": 0.5746080907032273}}
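As a companion to the exercises above, the following Python sketch cross-checks de M\'{e}r\'{e}'s paradox; the exact values restate the closed forms from the answer, while the Monte Carlo part is our own sanity check with an arbitrary seed and sample size:

\begin{verbatim}
import random

# Exact probabilities from the answer above.
p4 = 1 - (5 / 6) ** 4        # at least one six in 4 rolls: ~0.5177
p24 = 1 - (35 / 36) ** 24    # at least one double-six in 24 rolls: ~0.4914

# Monte Carlo cross-check.
random.seed(0)
N = 200_000
hit4 = sum(any(random.randint(1, 6) == 6 for _ in range(4))
           for _ in range(N))
hit24 = sum(any(random.randint(1, 6) == 6 and random.randint(1, 6) == 6
                for _ in range(24))
            for _ in range(N))
print(p4, hit4 / N)      # both close to 0.518
print(p24, hit24 / N)    # both close to 0.491
\end{verbatim}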
{"text": "the following is new in july 2001:\n\n\\subsubsection{$greduce$\\_$orders$: Reduction with several term orders}\nThe shortest polynomial with different polynomial term orders is computed\nwith the operator $greduce$\\_$orders$:\n \n\\begin{description}\n\\ttindex{$greduce$\\_$orders$}\n\\item[{\\it greduce\\_orders}]($exp$, \\{$exp1$, $exp2$, \\ldots , $expm$\\}\n[,\\{$v_1$,$v_2$ \\ldots $v_n$\\}]);\n \nwhere {\\it exp} is an expression and $\\{exp1, exp2,\\ldots , expm\\}$ is\na list of any number of expressions or equations. The list of variables\n$v_1,v_2 \\ldots v_n$ may be omitted; if set, the variables must be a list.\n\\end{description}\n \nThe expression {\\it exp} is reduced by {\\it greduce} with the orders\nin the shared variable {\\it gorders}, which must be a list of term\norders (if set). By default it is set to\n \n\\begin{center}\n$\\{revgradlex,gradlex,lex\\}$\n\\end{center}\n \nThe shortest polynomial is the result.\nThe order with the shortest polynomial is set to the shared variable\n{\\it gorder}. A Groebner basis of the system \\{$exp1$, $exp2$, \\ldots ,\n$expm$\\} is computed for each element of $orders$.\nWith the default setting {\\it gorder} in most cases will be set\nto {\\it revgradlex}.\nIf the variable set is given, these variables are taken; otherwise all\nvariables of the system \\{$exp1$, $exp2$, \\ldots , $expm$\\} are\nextracted.\n \nThe Groebner basis computations can take some time; if interrupted, the\nintermediate result of the reduction is set to the shared variable\n$greduce$\\_$result$, if one is done already. However, this is not\nnesessarily the minimal form.\n \nIf the variable {\\it gorders} should be set to orders with a parameter,\nthe term oder has to be replaced by a list; the first element is the\nterm oder selected, followed by its parameter(s), e.g.\n \n\\begin{center}\n$orders:=\\{\\{gradlexgradlex,2\\},\\{lexgradlex,2\\}\\}$\n\\end{center}\n", "meta": {"hexsha": "5b95896ce15e60cd651239c54c1f6f1b4bba1c94", "size": 1836, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/groebner/new1.tex", "max_stars_repo_name": "arthurcnorman/general", "max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/groebner/new1.tex", "max_issues_repo_name": "arthurcnorman/general", "max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/groebner/new1.tex", "max_forks_repo_name": "arthurcnorman/general", "max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.0638297872, "max_line_length": 74, "alphanum_fraction": 0.7293028322, "num_tokens": 529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.782662489091802, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5745678207673857}}
{"text": "\\subsubsection{Repeated Quadratic Factors}\r\n\\noindent\r\nIf a quadratic factor that can't be broken into linear factors is repeated, then we can write\r\n\\begin{equation*}\r\n\tQ(x) = R(x)(ax^2+bx+c)^k\\text{, }b^2-4ac < 0\\text{, }k \\geq 0\\text{, and }R(x)\\text{ is not divisible by }(ax^2+bx+c)^k\r\n\\end{equation*}\r\nNow we have to do a combination of what we did for repeated linear factors and quadratic factors. We say\r\n\\begin{equation*}\r\n\t\\frac{P(x)}{R(x)(ax^2+bx+c)^k}=\\left(\\text{Decomposition of }R(x)\\right)+\\frac{A_1x+B_1}{ax^2+bx+c}+\\ldots+\\frac{A_kx+B_k}{(ax^2+bx+c)^k}\r\n\\end{equation*}\r\nand then solve for the coefficients in the numerator.\r\n\r\n\\ifodd\\includeBackgroundReviewExamples\\input{./backgroundReview/algebraPreCalc/repeatedQuadraticFactors_example.tex}\\fi", "meta": {"hexsha": "c4e3222d1a8b4be691901c01bca6595f0da43c62", "size": 766, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/backgroundReview/algebraPreCalc/repeatedQuadraticFactors.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "diffEq/backgroundReview/algebraPreCalc/repeatedQuadraticFactors.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffEq/backgroundReview/algebraPreCalc/repeatedQuadraticFactors.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.9230769231, "max_line_length": 139, "alphanum_fraction": 0.7245430809, "num_tokens": 260, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7826624789529376, "lm_q1q2_score": 0.5745678087714551}}
{"text": "\\providecommand{\\main}{../..}\n\\documentclass[\\main/thesis.tex]{subfiles}\n\\begin{document}\n\n\\section{The Maximum Numeral}\\label{maximum}\n\nA number is said to be the \\textit{maximum} if it is greater than or equal\nto all the other numbers.\n\n\\begin{lstlisting}\nMaximum : \u2200 {b d o} \u2192 (xs : Numeral b d o) \u2192 Set\nMaximum {b} {d} {o} xs = (ys : Numeral b d o) \u2192 \u27e6 xs \u27e7 \u2265 \u27e6 ys \u27e7\n\\end{lstlisting}\n\nIf a numeral is a maximum\n\\footnote{More precisely, ``If a \\text{value} of a numeral is a maximum ...''},\nthen its least significant digit (LSD) must be the greatest.\n\n\\begin{lstlisting}\nMaximum\u21d2Greatest-LSD : \u2200 {b} {d} {o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 Maximum xs\n    \u2192 Greatest (lsd xs)\n\\end{lstlisting}\n\n\n\n\\subsection{Properties of each Category}\n\n\\paragraph{NullBase}\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (-1, -1) rectangle (11, 2);\n            % the spine\n            \\draw[ultra thick] (0,0) -- (1,0);\n            \\draw[ultra thick] (9,0) -- (10,0);\n            % the body\n\n            \\foreach \\i in {1,...,7} {\n                \\draw[ultra thick, fill=white] ({\\i+0.05}, -0.2) rectangle ({\\i+0.95}, +0.2);\n            };\n            \\draw[ultra thick, fill=black] ({8.05}, -0.2) rectangle ({8.95}, +0.2);\n\n            % labels\n            \\draw[->, ultra thick] (1.5,1) -- (1.5,0.5)\n                node at (1.5, 1.3) {$o$};\n            \\draw[->, ultra thick] (8.5,1) -- (8.5,0.5)\n                node at (8.5, 1.3) {$o+d$};\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\end{center}\n\nIt is obvious that systems of {\\lstinline|NullBase|} have maximum.\nIf a numeral's LSD happens to be the greatest,\nthen the numeral must be the maximum.\n\n\\begin{lstlisting}\nMaximum-NullBase-Greatest : \u2200 {d} {o}\n    \u2192 (xs : Numeral 0 (suc d) o)\n    \u2192 Greatest (lsd xs)\n    \u2192 Maximum xs\n\\end{lstlisting}\n\nWith this lemma, we can tell whether a numeral is a maximum by looking at its\nLSD. In case that the LSD is not the greatest, we could disprove the proposition\nby contraposition.\n\n\\begin{lstlisting}\n    Maximum-NullBase : \u2200 {d} {o}\n        \u2192 (xs : Numeral 0 (suc d) o)\n        \u2192 Dec (Maximum xs)\n    Maximum-NullBase xs with Greatest? 
(lsd xs)\n    Maximum-NullBase xs | yes greatest =\n        yes (Maximum-NullBase-Greatest xs greatest)\n    Maximum-NullBase | no \u00acgreatest =\n        no (contraposition (Maximum\u21d2Greatest-LSD xs) \u00acgreatest)\n\\end{lstlisting}\n\n\\paragraph{AllZeros}\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (0, -1) rectangle (4, 2);\n            % the spine\n            \\draw[ultra thick] (2,0) -- (4,0);\n            % the body\n            \\draw[ultra thick, fill=black] ({1.1}, -0.4) rectangle ({2.9}, +0.4);\n\n            % labels\n            \\draw[->, ultra thick] (2,1.5) -- (2,0.8)\n                node at (2, 1.8) {$0$};\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\end{center}\n\n\\textit{All} numerals of systems of {\\lstinline|AllZeros|} are maxima since they\nare all mapped to $ 0 $.\n\n\\begin{lstlisting}\nMaximum-AllZeros : \u2200 {b}\n    \u2192 (xs : Numeral b 1 0)\n    \u2192 Maximum xs\nMaximum-AllZeros xs ys = reflexive (\n    begin\n        \u27e6 ys \u27e7\n    \u2261\u27e8 to\u2115-AllZeros ys \u27e9\n        zero\n    \u2261\u27e8 sym (to\u2115-AllZeros xs) \u27e9\n        \u27e6 xs \u27e7\n    \u220e)\n\\end{lstlisting}\n\n\\paragraph{Proper}\n\nOn the contrary, there are no maxima in the systems of {\\lstinline|Proper|}.\nIn fact, that is the reason why they are categorized as \\textit{proper} in the\nfirst place. The theorem below is proven by contradicting two propositions:\n\n\\begin{itemize}\n    \\item Given {\\lstinline|claim : Maximum xs|}, we claim that {\\lstinline|xs|}\n        is greater than or equal to {\\lstinline|greatest-digit d \u2237 xs|},\n        a numeral we composed by prefixing it with the greatest digit.\n    \\item On the other hand, we prove that {\\lstinline|xs|} is less than\n        {\\lstinline|greatest-digit d \u2237 xs|}.\n\\end{itemize}\n\n\\begin{lstlisting}\nMaximum-Proper : \u2200 {b d o}\n    \u2192 (xs : Numeral (suc b) (suc d) o)\n    \u2192 (proper : 2 \u2264 suc (d + o))\n    \u2192 \u00ac (Maximum xs)\nMaximum-Proper {b} {d} {o} xs proper claim = contradiction p \u00acp\n    where\n        p : \u27e6 xs \u27e7 \u2265 \u27e6 greatest-digit d \u2237 xs \u27e7\n        p = claim (greatest-digit d \u2237 xs)\n        \u00acp : \u27e6 xs \u27e7 \u2271 \u27e6 greatest-digit d \u2237 xs \u27e7\n        \u00acp = <\u21d2\u2271 (\n            start\n                suc \u27e6 xs \u27e7\n            \u2248\u27e8 cong suc (sym (*-right-identity \u27e6 xs \u27e7)) \u27e9\n                suc (\u27e6 xs \u27e7 * 1)\n            \u2264\u27e8 s\u2264s (n*-mono \u27e6 xs \u27e7 (s\u2264s z\u2264n)) \u27e9\n                suc (\u27e6 xs \u27e7 * suc b)\n            \u2264\u27e8 +n-mono (\u27e6 xs \u27e7 * suc b) (\u2264-pred proper) \u27e9\n                d + o + \u27e6 xs \u27e7 * suc b\n            \u2248\u27e8 cong\n                (\u03bb w \u2192 w + \u27e6 xs \u27e7 * suc b)\n                (sym (greatest-digit-to\u2115 (Fin.from\u2115 d)\n                (greatest-digit-is-the-Greatest d)))\n            \u27e9\n                \u27e6 greatest-digit d \u2237 xs \u27e7\n            \u25a1)\n\\end{lstlisting}\n\n\\subsection{Determine the Maximum}\n\nWe can \\textit{decide} whether a numeral is a maximum by applying them to\nlemmata of each category.\n\n\\begin{lstlisting}\nMaximum? : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 Dec (Maximum xs)\nMaximum? {b} {d} {o} xs with numView b d o\nMaximum? 
xs | NullBase d o = Maximum-NullBase xs\nMaximum? xs | NoDigits b o = no (NoDigits-explode xs)\nMaximum? xs | AllZeros b   = yes (Maximum-AllZeros xs)\nMaximum? xs | Proper b d o proper = no (Maximum-Proper xs proper)\n\\end{lstlisting}\n\n\\paragraph{Summary}\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{ | l | c | c | c | c | }\n    \\textbf{Properties} & \\textbf{NullBase} & \\textbf{NoDigits} & \\textbf{AllZeros} & \\textbf{Proper} \\\\\n    \\hline\n    has a maximum & yes & no & yes & no \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\n\\end{document}\n", "meta": {"hexsha": "ce870370ad3d993309e65b6cb4d8fa4b26b1f248", "size": 5780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/tex/constructions/maximum.tex", "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_issues_repo_path": "Thesis/tex/constructions/maximum.tex", "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/tex/constructions/maximum.tex", "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "avg_line_length": 30.582010582, "max_line_length": 104, "alphanum_fraction": 0.5693771626, "num_tokens": 1930, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624738835052, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5745678004970934}}
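As an informal companion to the formal development above, the following Python sketch is our own addition: it models a numeral as a least-significant-digit-first list of digit indices $0,\ldots,d-1$, assumes the evaluation where a digit list $x :: xs$ has value $(o + x) + value(xs)\cdot b$ (as in the Maximum-Proper proof), and brute-forces the NullBase case, in which a numeral is a maximum exactly when its LSD is the greatest digit:

\begin{verbatim}
from itertools import product

def value(xs, b, o):
    # Evaluate a least-significant-first digit list; digit x is worth o + x.
    total = 0
    for x in reversed(xs):
        total = (o + x) + total * b
    return total

# NullBase (b = 0): a numeral is a maximum iff its LSD is the greatest digit.
b, d, o = 0, 3, 1            # digit values o .. o+d-1, i.e. 1..3
best = max(value(list(xs), b, o)
           for L in range(1, 4)
           for xs in product(range(d), repeat=L))
xs = [d - 1, 0]              # greatest LSD, arbitrary tail
assert value(xs, b, o) == best   # Maximum-NullBase-Greatest, by brute force
print(best)
\end{verbatim}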
{"text": "\\documentclass{article}\n\\usepackage{lisp-on-tex}\n\\lispinterp{%\n  (\\define \\sq (\\lambda (\\n) (\\* \\n \\n)))\n  (\\define \\fact (\\lambda (\\n) (\\lispif (\\= \\n :0) :1 (\\* \\n (\\fact (\\- \\n :1))))))\n  }\n\\newcommand\\sq[1]{\\lispinterp{(\\texprint (\\sq :#1))}}\n\\newcommand\\fact[1]{\\lispinterp{(\\texprint (\\fact :#1))}}\n\n\\begin{document}\n\\section{Factorials and Squares}\n  \\begin{center}\n  \\begin{tabular}{r||rr}\\hline\\hline\n    $n$ & $n!$ & $n^2$ \\\\ \n    \\hline\n     1 & \\fact{1} & \\sq{1} \\\\\n     2 & \\fact{2} & \\sq{2} \\\\\n     3 & \\fact{3} & \\sq{3} \\\\\n     4 & \\fact{4} & \\sq{4} \\\\\n     5 & \\fact{5} & \\sq{5} \\\\\n     6 & \\fact{6} & \\sq{6} \\\\\n     7 & \\fact{7} & \\sq{7} \\\\\n     8 & \\fact{8} & \\sq{8} \\\\\n     9 & \\fact{9} & \\sq{9} \\\\\n    10 & \\fact{10} & \\sq{10} \\\\\n    11 & \\fact{11} & \\sq{11} \\\\\n    \\hline\n  \\end{tabular}\n  \\end{center}\n\\end{document}", "meta": {"hexsha": "39fb7eeb116510ded1a9c76f0fb54453b158762c", "size": 838, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "examples/fact.tex", "max_stars_repo_name": "hak7a3/lisp-on-tex", "max_stars_repo_head_hexsha": "a7fb18f56823f1be935d21c1d838c51cb3200cbb", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-29T13:10:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-29T13:10:44.000Z", "max_issues_repo_path": "examples/fact.tex", "max_issues_repo_name": "hak7a3/lisp-on-tex", "max_issues_repo_head_hexsha": "a7fb18f56823f1be935d21c1d838c51cb3200cbb", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/fact.tex", "max_forks_repo_name": "hak7a3/lisp-on-tex", "max_forks_repo_head_hexsha": "a7fb18f56823f1be935d21c1d838c51cb3200cbb", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9333333333, "max_line_length": 83, "alphanum_fraction": 0.4725536993, "num_tokens": 381, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424295406088, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5745460843660236}}
{"text": "%\\vspace{-5pt}\n\\section{Methods}\n\\label{sec:methods}\n\n\\noindent\nIn this model, we simply take the set of $K+W$ positional and key tokens, and\nthe set of $K+W+1$ output tokens, and fit a model similar to word\nvectors~\\cite{ref:mikolov:wvec}. The details of the model are described below.\n\nWe first assign a $D$ dimensional vector to each one of $K+W$ different key\ntokens and positional tokens. We denote such token-vectors by $v_i$, where $1\n\\leq i \\leq K+W$. Next, given a fixed size window of tokens $[t_1, t_2, \\ldots,\nt_W]$, we compute a score for each possible output $j$, $s_j$ as a function of\nthe token-vectors of the tokens in the window, i.e.,\n\\[\ns_j = f_j\\left(v_{t_1}, v_{t_2}, \\ldots v_{t_W}\\right)\n\\]\nThe final loss function for this particular example is given by \n\\[\nL = \\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\]\nwhere $t_o$ is the output token. This is the cross-entropy loss\nbetween softmax based probabilities for each output and the actual observed\noutput. According to the actual form of the function $f_j$, we get different\nmodels. A few of these models are described below.\n\nFor each model, we use {\\sc AdaGrad} optimizer to minimize the total loss\nfunction with respect to the parameters of that model. {\\sc AdaGrad} was chosen\nbecause it gave the best performance in our case among other alternatives such\nas vanilla SGD and momentum based SGD.\n\n\\subsection{Fixed Window Weight Model}\nIn the spirit of continuous bag-of-word (CBOW) model~\\cite{ref:mikolov:wvec}, in\nthis model we assume that a token at position $i$ has a ``weight'' of $w_i$, and\ncombine the token-vectors of the window according to these weights. The final\nscore is assumed to be a linear function of this weighted token-vectors. Thus,\nthe overall model is\n\\begin{align}\nu &= \\sum_{i=1}^{W}{w_i v_{t_i}}\\\\\ns_j &= p_j^Tu\\\\\nL &= \\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\end{align}\nThe parameters in this model are the weights $w_i$, the word vectors $v_i$, and\nthe ``prediction'' vectors $p_j$. The gradients of the loss w.r.t. the\nparameters are obtained by backpropagation, the details of which are omitted\nhere for space.\n\n\\subsection{Matrix Vector Model}\nIn this case, we do not introduce any averaging as in the case of CBOW, but\ninstead simply concatenate the token-vectors to create a larger vector which is\nthen used to create the scores. Formally, the model is\n\\begin{align}\nu &= [v_{t_1}; v_{t_2}; v_{t_3}; \\ldots; v_{t_W}]\\\\\ns_j &= p_j^Tu\\\\\nL &= \\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\end{align}\nNote that since here $u$ is $DW$ dimensional instead of being $D$ dimensional as\nin the case of Fixed Window Weight Model. Thus, the prediction vectors $p_j$ are\nmuch higher dimensional as well, which means this model has a higher number\nof parameters as compared to Fixed Window Weight Model.\n\n\\subsection{Feed-Forward Neural Network Model}\n\\label{sec:nnlm}\nThis case differs from the Matrix Vector Model by addition of one or more\nnon-linear transformations between the concatenated token-vectors and the final\nscores. We denote a specific instance of this model by NL-$k$, where $k$ is the\nnumber of non-linearity layers it contains. 
\n\\subsection{Feed-Forward Neural Network Model}\n\\label{sec:nnlm}\nThis case differs from the Matrix Vector Model by the addition of one or more\nnon-linear transformations between the concatenated token-vectors and the final\nscores. We denote a specific instance of this model by NL-$k$, where $k$ is the\nnumber of non-linearity layers it contains. Thus, for example, NL-$0$ is\nequivalent to the Matrix Vector Model described above, while for NL-$3$ the\nscores and loss are obtained as:\n\\begin{align}\nu &= [v_{t_1}; v_{t_2}; v_{t_3}; \\ldots; v_{t_W}]\\\\\nz_1 &= \\text{relu}(Q_1u)\\\\\nz_2 &= \\text{relu}(Q_2z_1)\\\\\nz_3 &= \\text{relu}(Q_3z_2)\\\\\ns_j &= p_j^Tz_3\\\\\nL &= -\\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\end{align}\nHere $\\text{relu}(.)$ is the rectified linear unit, i.e., $\\text{relu}(x) = x$\nif $x \\geq 0$, and $0$ otherwise. In this work, we specifically experimented\nwith NL-$0$, NL-$1$, NL-$2$, and NL-$3$.\n\n\\subsection{Feed-Forward Model with Soft Attention}\n\\label{sec:annlm}\nMotivated by the recent successes of attention-based models, we explore an\nattention mechanism for our problem as well. We experiment with a\n``soft''-attention model, i.e., a weight $a_i$ between 0 and 1 is assigned to\neach position $i$. The attention weights are obtained by applying a sigmoid\nfunction to a linear transformation of the concatenated word-vectors. Subsequently,\nthe word vector for the $i^{th}$ token is weighted by $a_i$, and the\naforementioned NL-$k$ model is applied on the concatenation of these weighted\nword vectors instead. Thus, for example, an attention-based NL-$3$ model takes\nthe form of:\n\\begin{align}\nu &= [v_{t_1}; v_{t_2}; v_{t_3}; \\ldots; v_{t_W}]\\\\\na &= \\sigma(Au)\\\\\nz &= [a_1v_{t_1}; a_2v_{t_2}; a_3v_{t_3}; \\ldots; a_Wv_{t_W}]\\\\\ns_j &= \\text{NL-}3(z ; Q_1, Q_2, Q_3, p_j)\\\\\nL &= -\\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\end{align}\nHere NL-$3(z ; Q_1, Q_2, Q_3, p_j)$ is the function going from $u$ to $s_j$ in\nthe case of an NL-$3$ model, as described in Section~\\ref{sec:nnlm}.\n\n\\subsection{GRU based Recurrent Model}\n\\label{sec:rnnlm}\nRecurrent neural network models based on cells like LSTM~\\cite{ref:lstm} and\nGRU~\\cite{ref:gru} have recently been shown to achieve state-of-the-art\nperformance in language modeling. Inspired by these results, we also attempted to\nuse a GRU-based recurrent model for our prediction task. Specifically, our model\nwas the following:\n\\begin{align}\n[g_1; g_2; g_3; \\ldots; g_{W}] &= \\text{GRU}([v_{t_1}; v_{t_2}; v_{t_3}; \\ldots;\nv_{t_W}])\\\\\ns_j &= p_j^T[g]\\\\\nL &= -\\log\\left(\\frac{e^{s_{t_o}}}{\\sum_j{e^{s_j}}}\\right)\n\\end{align}\nHere, $g_i$ is the output of the $i^{th}$ GRU cell, and we have a dense layer\nafter that to get the scores for each output token. 
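(Here $[g]$ may be read as the concatenation $[g_1; g_2; \\ldots; g_W]$ of all GRU outputs, which makes $p_j$ a $DW$-dimensional readout analogous to the Matrix Vector Model.)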
Since this model did not achieve\nperformance competitive with NL-$1$, we did not\nexperiment further with deeper GRU models.\n", "meta": {"hexsha": "4d9c87ed979eb4f706815281468df3edc0b86cb2", "size": 5693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/report/methods.tex", "max_stars_repo_name": "rafallewanczyk/ml_code_completion", "max_stars_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 36, "max_stars_repo_stars_event_min_datetime": "2015-10-31T08:14:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-06T08:28:29.000Z", "max_issues_repo_path": "docs/report/methods.tex", "max_issues_repo_name": "rafallewanczyk/ml_code_completion", "max_issues_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-02-18T21:04:12.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-02T11:13:06.000Z", "max_forks_repo_path": "docs/report/methods.tex", "max_forks_repo_name": "rafallewanczyk/ml_code_completion", "max_forks_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2017-06-11T13:39:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-07T02:18:42.000Z", "avg_line_length": 47.4416666667, "max_line_length": 80, "alphanum_fraction": 0.7254523099, "num_tokens": 1789, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8918110396870287, "lm_q2_score": 0.6442250928250375, "lm_q1q2_score": 0.5745270498247692}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[margin=1in]{geometry} \n\\usepackage{amsmath,amsthm,amssymb,amsfonts,stmaryrd}\n\\usepackage{enumitem}\n\\usepackage{tabu}\n\\usepackage{fixltx2e}\n\\usepackage{xcolor}\n\\usepackage{mathtools}\n\\usepackage{tcolorbox}\n \n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n\\DeclarePairedDelimiter\\abs{\\lvert}{\\rvert}%\n \n\\newenvironment{problem}[2][Problem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n%If you want to title your bold things something different just make another thing exactly like this but replace \"problem\" with the name of the thing you want, like theorem or lemma or whatever\n\n\\newenvironment{nscenter}\n {\\parskip=0pt\\par\\nopagebreak\\centering}\n {\\par\\noindent\\ignorespacesafterend}\n \n\\def\\SPSB#1#2{\\rlap{\\textsuperscript{\\textcolor{black}{#1}}}\\SB{#2}}\n\n \n\\begin{document}\n %Good resources for looking up how to do stuff:\n%Binary operators: http://www.access2science.com/latex/Binary.html\n%General help: http://en.wikibooks.org/wiki/LaTeX/Mathematics\n \n\\begin{center}\n\\textbf{INTRO TO ABSTRACT ALGEBRA} \\\\\nJacob Shiohira, FALL 2017 \\\\\nMATH 310 $|$ University of Nebraska-Lincoln\n\\end{center} \n\n\\section*{Chapter 1}\n% TODO: Add a mention about sets and set builder notation that will come up\n\n\\begin{center}\n\\textbf{Section 1.1: The Division Algorithm} \\\\\n\\end{center}\n\n\\noindent\n\\textbf{WELL-ORDERING AXIOM} Every nonempty subset of the set of non-negative integers contains a smallest element. \\\\\n\n\\noindent\n\\textbf{THEOREM 1.1} Let $a,b$ be integers with $b>0$. Then there exist unique integers $q$ and $r$ such that\n\n\\begin{align*}\na=bq+r \\text{     and     } 0 \\leq r < b.\n\\end{align*}\n\n\\noindent\nNotice the restrictions on $b$ and $r$. Without these restrictions, we could find multiple $q,r \\in \\Z$ that satisfy $a=bq+r$.\n\n\\begin{center}\n\\textbf{Section 1.2: Divisibility} \\\\\n\\end{center}\n\n\\noindent\n\\textbf{DEFINITION}: Let $a$ and $b$ be integers with $b \\neq 0$. We say that $b$ \\textbf{divides} $a$ (or that $b$ is a divisor of $a$, or that $b$ is a factor of $a$) if $a=bc$ for some integer $c$. In symbols, \"$b$ divides $a$\" is written $b|a$ and \"$b$ does not divide $a$\" is written $b \\not | a$. \\\\\n\n\\noindent\n\\textbf{Remarks:}\n\\begin{enumerate}\n\\item Every nonzero integer $b$ divides $0$ because $0=0b$. For every integer $a$, we have $1$ divides $a$ because $a=1\\cdot a$.\n\\item If $b$ divides $a$, then $a=bc$ for some $c$. Hence, $-a=b(-c)$, so that $b|(-a)$. An analogous argument shows that every divisor of $-a$ is also a divisor of $a$. Therefore, \n\\begin{center}\n$a$ and $-a$ have the same divisors.\n\\end{center}\n\n\\item Suppose $a \\neq 0$ and $b|a$. Then, $a=bc$, so that $\\abs{a}=\\abs{b}\\abs{c}$. Consequently, $0 < \\abs{b} \\leq \\abs{a}$. This last inequality is equivalent to $-\\abs{a} \\leq b \\leq \\abs{a}$. Therefore,\n\\begin{center}\n\\begin{enumerate}[label=(\\alph*)]\n\\item every divisor of the nonzero integer $a$ is less than or equal to $\\abs{a}$;\n\\item a nonzero integer has only finitely many divisors.\n\\end{enumerate}\n\\end{center}\n\\item If $a$ and $b$ are nonzero integers, then lcm$(a,b)$gcd$(a,b)=\\abs{ab}$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{DEFINITION}: Let $a$ and $b$ be integers, $ab \\neq 0$. 
The \\textbf{greatest common divisor} (gcd) of $a$ and $b$ is the largest integer $d$ that divides both $a$ and $b$. In other words, $d$ is the gcd of $a$ and $b$ provided that \n\\begin{enumerate}\n\\item $d|a$ and $d|b$;\n\\item if $c|a$ and $c|b$, then $c \\leq d$.\n\\end{enumerate} \n\\noindent\nThe greatest common divisor of $a$ and $b$ is usually denoted $(a,b)$.\\\\\n\n\\noindent\n\\textbf{THEOREM 1.2} Let $a$ and $b$ be integers, $ab \\neq 0$, and let $d$ be their greatest common divisor. Then there exist (not necessarily unique) integers $u$ and $v$ such that $d=au+bv$. \\\\\n\\noindent\n\\textbf{Remarks:}\n\\begin{enumerate}\n\\item Every integer that can be written in the form $au+bv$ for some $u, v \\in \\Z$ is a multiple of gcd$(a,b)$.\n\\item Every common divisor of $a$ and $b$ also divides gcd$(a,b)$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{COROLLARY 1.3} Let $a$ and $b$ be integers, not both $0$, and let $d$ be a positive integer. Then $d$ is the greatest common divisor of $a$ and $b$ if and only if $d$ satisfies these conditions:\n\\begin{enumerate}\n\\item $d|a$ and $d|b$;\n\\item if $c|a$ and $c|b$, then $c|d$.\n\\end{enumerate}\n\n\\noindent\nNote that the ``if and only if'' part of the statement requires two steps. \\\\\n\n\\noindent\n\\textbf{THEOREM 1.4} If $a|bc$ and $(a,b)=1$, then $a|c$. \\\\\n\n\\begin{center}\n\\textbf{Section 1.3: Primes and Unique Factorization}\n\\end{center}\n\n\\noindent\nEvery nonzero integer $n$ except $\\pm 1$ has at least four distinct divisors, namely, $1,-1,n,-n$. Integers that have \\textit{only} these divisors play a crucial role. \\\\\n\n\\noindent\n\\textbf{DEFINITION}: An integer $p$ is said to be \\textbf{prime} if $p \\neq 0, \\pm1$ and the only divisors of $p$ are $\\pm 1$ and $\\pm p$. \\\\\n\n\\noindent\n\\textbf{Remarks:}\n\\begin{enumerate}[label=(\\alph*)]\n\\item $p$ is prime if and only if $-p$ is prime.\n\\item If $p$ and $q$ are prime and $p|q$, then $p= \\pm q$.\n\\item If $p$ is prime and $p=rt$, then either $r=\\pm 1$ or $t=\\pm 1$.\n\\item Integers $a$ and $b$ are \\textit{relatively prime} if gcd$(a,b)=1$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{THEOREM 1.5} Let $p$ be an integer with $p \\neq 0, \\pm1$. Then $p$ is prime if and only if $p$ has this property:\n\\begin{center}\nwhenever $p|bc$, then $p|b$ or $p|c$.\n\\end{center}\n
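\n\\noindent\nFor example, the hypothesis that $p$ is prime cannot be dropped: $6|4 \\cdot 9$ since $4 \\cdot 9 = 36$, yet $6 \\nmid 4$ and $6 \\nmid 9$. \\\\\n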
\n\\noindent\n\\textbf{Remarks:}\n\\begin{enumerate}[label=(\\alph*)]\n\\item This theorem is especially useful for proving that if $p|b^2$ for a prime $p$ and some integer $b$, then $p|b$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{COROLLARY 1.6} If $p$ is prime and $p|a_1a_2 \\cdot \\cdot \\cdot a_n$, then $p$ divides at least one of the $a_i$. \\\\\n\n\\noindent\n\\textbf{THEOREM 1.7} Every integer $n$ except $0, \\pm 1$ is a product of primes. \\\\\n\n\\noindent\n\\textbf{THEOREM 1.8} Every integer $n$ except $0, \\pm 1$ is a product of primes. This prime factorization is unique in the following sense: If\n\\begin{align*}\nn=p_1p_2 \\cdot \\cdot \\cdot p_r \\text{     and     } n=q_1q_2 \\cdot \\cdot \\cdot q_s\n\\end{align*}\nwith each $p_i, q_j$ prime, then $r=s$ (that is, the number of factors is the same) and after reordering and relabeling the $q$'s,\n\\begin{align*}\np_1=\\pm q_1, \\quad p_2=\\pm q_2, \\quad p_3=\\pm q_3, \\quad \\ldots, \\quad p_r=\\pm q_r.\n\\end{align*}\n\n\\noindent\n\\textbf{COROLLARY 1.9} Every integer $n>1$ can be written in one and only one way in the form $n=p_1p_2p_3 \\cdot \\cdot \\cdot p_r$ where the $p_i$ are positive primes such that $p_1 \\leq p_2 \\leq p_3 \\leq \\cdot \\cdot \\cdot \\leq p_r$. \\\\\n\n\\noindent\n\\textbf{THEOREM 1.10} Let $n>1$. If $n$ has no positive prime factor less than or equal to $\\sqrt{n}$, then $n$ is prime. \\\\\n\n\\begin{center}\n\\textbf{Helpful Proofs}\n\\end{center}\n\n\\noindent\n\\textbf{Euclidean Algorithm} Find $(4361, 42371)$. \\\\\n\n\\noindent\nBy the Euclidean Algorithm, we have \n\n\\begin{align*}\n42371 & = 9 \\cdot 4361 + 3122 \\\\\n4361 & = 1 \\cdot 3122 + 1239 \\\\\n3122 & = 2 \\cdot 1239 + 644 \\\\\n1239 & = 1 \\cdot 644 + 595 \\\\\n644 & = 1 \\cdot 595 + 49 \\\\\n595 & = 12 \\cdot 49 + 7 \\\\\n49 & = 7 \\cdot 7 + 0\n\\end{align*}\n\n\\noindent\ntherefore $(4361, 42371)=7$.\\\\ \n
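\n\\noindent\nAs an added worked step (easily checked by hand), reversing these divisions expresses the gcd in the form $d=au+bv$ guaranteed by Theorem 1.2:\n\\begin{align*}\n7 & = 595 - 12 \\cdot 49 \\\\\n& = 13 \\cdot 595 - 12 \\cdot 644 \\\\\n& = 13 \\cdot 1239 - 25 \\cdot 644 \\\\\n& = 63 \\cdot 1239 - 25 \\cdot 3122 \\\\\n& = 63 \\cdot 4361 - 88 \\cdot 3122 \\\\\n& = 855 \\cdot 4361 - 88 \\cdot 42371,\n\\end{align*}\n\n\\noindent\nand indeed $855 \\cdot 4361 - 88 \\cdot 42371 = 3728655 - 3728648 = 7$. \\\\\n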
\n\\noindent\n\\textbf{Famous Induction Proof} If $n$ is a positive integer, then \n\\begin{center}\n$1+2+ \\cdot \\cdot \\cdot + n = \\frac{n(n+1)}{2}$.\n\\end{center}\n\n\\begin{tcolorbox}\nTo verify that a proposition $P(n)$ holds for all natural numbers $n$, the \\textbf{Principle of Mathematical Induction} consists of successfully carrying out the following two steps: \n\n\\begin{itemize}\n\\item \\textbf{Base Case}: Prove that $P(0)$ is true.\n\\item \\textbf{Induction Step}: Assume that $P(n)$ is true for an arbitrary $n$, then prove that $P(n+1)$ is true.\n\\end{itemize}\n\\end{tcolorbox}\n\n\\vspace{.2cm}\n\\noindent\nWe will prove by induction that, for all $n \\in \\Z_+$,\n\\begin{align*}\n\\sum_{i=1}^{n} i = \\frac{n(n+1)}{2}.\n\\end{align*}\n\n\\noindent\n\\textbf{Base Case} When $n=1$, the LHS is the sum consisting of the single integer $1$, which is simply $1$. The RHS is $1(1+1)/2=1$. Both sides are equal, so the claim holds in the base case. \\\\\n\n\\noindent\n\\textbf{Inductive Case} Let $k \\geq 1$ be an integer and suppose that $P(k)$ holds. We then want to prove that $P(k+1)$ holds, i.e., that\n\\begin{align*}\n\\sum_{i=1}^{k+1} i & = \\frac{(k+1)(k+2)}{2}.\n\\end{align*}\n\n\\noindent\nWell, by the summation,\n\\begin{align*}\n\\sum_{i=1}^{k+1} i & = 1+2+ \\cdot \\cdot \\cdot + k + (k+1) \\\\\n& = [1+2+ \\cdot \\cdot \\cdot + k] + (k+1)\n\\end{align*}\n\n\\noindent\nBy the induction hypothesis,\n\\begin{align*}\n\\sum_{i=1}^{k+1} i & = \\sum_{i=1}^{k} i + (k+1) \\\\\n& = \\frac{k(k+1)}{2} + (k+1) \\\\\n& = \\frac{k^2+k}{2} + \\frac{2(k+1)}{2}\\\\\n& = \\frac{k^2+k}{2} + \\frac{2k+2}{2}\\\\\n& = \\frac{k^2+3k+2}{2} \\\\\n& = \\frac{(k+1)(k+2)}{2}\n\\end{align*}\n\n\\noindent\nThus, $P(k+1)$ holds, and the proof of the induction step is complete. We may now conclude that, by the principle of mathematical induction, $P(n)$ holds true for all $n \\in \\Z_+$. \\\\\n\n\\noindent\n\\textbf{Infinitude of Primes} Suppose that there are only finitely many primes, say $p_1<p_2<...<p_r$. Then, let $N=p_1p_2 \\cdot \\cdot \\cdot p_r$. By the Fundamental Theorem of Arithmetic, $N+1$ has a prime factorization. $N+1$ is either prime or composite. If $N+1$ is prime, then we have found another prime and contradict our original assumption. If $N+1$ is composite, it is a product of primes, so it has a prime $p_i$ in common with $N$. So, $p_i$ divides $(N+1)-N=1$, which is a contradiction: there is no prime $q$ such that $q|1$, because that would imply that $q \\leq 1$, while by definition $q>1$. Thus, there are infinitely many primes. \\qedsymbol\n\n\\section*{Chapter 2}\n% TODO: Add a mention about sets and set builder notation that will come up\n\n\\begin{center}\n\\textbf{Section 2.1: Congruence and Congruence Classes} \\\\\n\\end{center}\n\n\\noindent\n\\textbf{Definition} Let $a,b,n$ be integers with $n>0$. Then $a$ is congruent to $b$ modulo $n$, written \"$a \\equiv b \\pmod{n}$\", provided $n$ divides $a-b$. \\\\\n\n\\noindent\n\\textbf{Theorem 2.1} Let $n$ be a positive integer. For all $a,b,c \\in \\Z$,\n\\begin{enumerate}\n\\item $a \\equiv a \\pmod{n}$;\n\\item If $a \\equiv b \\pmod{n}$, then $b \\equiv a \\pmod{n}$;\n\\item If $a \\equiv b \\pmod{n}$ and $b \\equiv c \\pmod{n}$, then $a \\equiv c \\pmod{n}$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{Theorem 2.2} If $a \\equiv b \\pmod{n}$ and $c \\equiv d \\pmod{n}$, then\n\\begin{enumerate}\n\\item $a + c \\equiv b + d \\pmod{n}$;\n\\item $ac \\equiv bd \\pmod{n}$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{Definition} Let $a$ and $n$ be integers with $n>0$. The congruence class of $a$ modulo $n$ (denoted by [$a$]) is the set of all those integers that are congruent to $a$ modulo $n$, that is,\n\\begin{center}\n[$a$]$=\\{b|b \\in \\Z \\text{  and  } b \\equiv a \\pmod{n}\\}$.\n\\end{center}\n\n\\noindent\n\\textbf{Theorem 2.3} $a \\equiv c \\pmod{n}$ if and only if [$a$]$=$[$c$]. \\\\\n\n\\noindent\n\\textbf{Corollary 2.4} Two congruence classes modulo $n$ are either disjoint or identical. \\\\\n\n\\noindent\n\\textbf{Corollary 2.5} Let $n>1$ be an integer and consider congruence modulo $n$.\n\\begin{enumerate}\n\\item If $a$ is any integer and $r$ is the remainder when $a$ is divided by $n$, then [$a$]$=$[$r$].\n\\item There are exactly $n$ distinct congruence classes, namely, [$0$], [$1$], [$2$], $\\cdot \\cdot \\cdot$, [$n-1$].\n\\end{enumerate}\n\n\\noindent\n\\textbf{Definition} The set of all congruence classes modulo $n$ is denoted $\\Z_n$ (which is read \"$\\Z$ mod $n$\"). \\\\\n\n\\noindent\n\\textbf{Theorem 2.6} If [$a$]$=$[$b$] and [$c$]$=$[$d$] in $\\Z_n$, then \\\\\n\\begin{center}\n[$a+c$]$=$[$b+d$] and [$ac$]$=$[$bd$].\n\\end{center}\n\n\\noindent\n\\textbf{Definition} Addition and multiplication in $\\Z_n$ are defined by \\\\\n\\begin{center}\n[$a$]$\\oplus$[$c$]$=$[$a+c$] and [$a$]$\\varodot$[$c$]$=$[$ac$].\n\\end{center}\n\n\\noindent\n\\textbf{Theorem 2.7} For any classes [$a$], [$b$], [$c$] in $\\Z_n$,\n\\begin{enumerate}\n\\item If [$a$] $\\in \\Z_n$ and [$b$] $\\in \\Z_n$, then [$a$]$\\oplus$[$b$]$\\in \\Z_n$. \\\\\n\\item [$a$]$\\oplus$([$b$]$\\oplus$[$c$])$=$([$a$]$\\oplus$[$b$])$\\oplus$[$c$]. \\\\\n\\item [$a$]$\\oplus$[$b$]$=$[$b$]$\\oplus$[$a$]. \\\\\n\\item [$a$]$\\oplus$[$0$]$=$[$a$]$=$[$0$]$\\oplus$[$a$]. \\\\\n\\item For each [$a$] in $\\Z_n$, the equation [$a$]$\\oplus X=$[$0$] has a solution in $\\Z_n$. \\\\\n\\item If [$a$] $\\in \\Z_n$ and [$b$]$\\in \\Z_n$, then [$a$]$\\varodot$[$b$]$\\in \\Z_n$. 
\\\\\n\\item [$a$] $\\varodot$([$b$]$\\varodot$[$c$])$=$([$a$]$\\varodot$[$b$])$\\varodot$[$c$]. \\\\\n\\item  [$a$] $\\varodot$[$b$]$=$[$b$]$\\varodot$[$a$]. \\\\\n\\item [$a$] $\\varodot$[$1$]$=$[$a$]$=$[$1$]$\\varodot$[$a$].\n\\end{enumerate}\n\n\\noindent\n\\textbf{Theorem 2.8} If $p>1$ is an integer, then the following conditions are equivalent:\n\\begin{enumerate}\n\\item $p$ is prime;\n\\item For any $a \\neq 0$ in $\\Z_p$, the equation $ax=1$ has a solution in $\\Z_p$;\n\\item Whenever $bc=0$ in $\\Z_p$, then $b=0$ or $c=0$.\n\\end{enumerate}\n\n\\noindent\n\\textbf{Theorem 2.9} Let $a$ and $n$ be integers with $n>1$. Then\n\\begin{center}\nThe equation $[a]x=[1]$ has a solution in $\\Z_n$ if and only if $(a,n)=1$ in $\\Z$.\n\\end{center}\n\n\\noindent\n\\textbf{Theorem 2.10}\nLet $a$ and $n$ be integers with $n>1$. Then\n\\begin{center}\n$[a]$ is a unit in $\\Z_n$ if and only if $(a,n)=1$ in $\\Z$.\n\\end{center}\n\\end{document} \n", "meta": {"hexsha": "0bbee855d346754a602062c395eca2ef52ba462a", "size": 13548, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract-algebra/review-terms/Review Terms.tex", "max_stars_repo_name": "jShiohaha/math-classes", "max_stars_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract-algebra/review-terms/Review Terms.tex", "max_issues_repo_name": "jShiohaha/math-classes", "max_issues_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract-algebra/review-terms/Review Terms.tex", "max_forks_repo_name": "jShiohaha/math-classes", "max_forks_repo_head_hexsha": "72711363cf0b58863ffb193ee79ff40244e517eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3837209302, "max_line_length": 695, "alphanum_fraction": 0.6542663124, "num_tokens": 5080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8633916187614823, "lm_q1q2_score": 0.5745098994573942}}
{"text": "% ----------------------------------------------------------------------\n\\section{Elasticity with Infinitesimal Strain and Prescribed Slip on Faults}\n\nFor each fault, which is an internal interface, we add a boundary\ncondition to the elasticity equation prescribing the jump in the\ndisplacement field across the fault,\n\\begin{gather}\n  \\label{eqn:bc:prescribed_slip}\n  \\vec{u}^+(\\vec{x},t) - \\vec{u}^-(\\vec{x},t) - \\vec{d}(\\vec{x},t) = \\vec{0} \\text{ on }\\Gamma_f,\n\\end{gather}\nwhere $\\vec{u}^+$ is the displacement vector on the ``positive'' side\nof the fault, $\\vec{u}^-$ is the displacement vector on the\n``negative'' side of the fault, $\\vec{d}$ is the slip vector on the\nfault, and $\\vec{n}$ is the fault normal which points from the\nnegative side of the fault to the positive side of the fault. Note\nthat as an alternative to prescribing the jump in displacement across\nthe fault, we can also prescribe the jump in velocity or acceleration\nacross the fault in terms of slip rate or slip acceleration,\nrespectively.\n\nUsing a domain decomposition approach for constraining the fault slip\nyields additional Neumann-like boundary conditions on the fault\nsurface,\n\\begin{gather}\n  \\tensor{\\sigma} \\cdot \\vec{n} = -\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^+}, \\\\\n  \\tensor{\\sigma} \\cdot \\vec{n} = +\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^-},\n\\end{gather}\nwhere $\\vec{\\lambda}$ is the vector of Lagrange multipliers\ncorresponding to the tractions applied to the fault surface to\ngenerate the prescribed slip. \n\n\\begin{table}[htbp]\n  \\caption{Mathematical notation for elasticity equation with\n    infinitesimal strain and prescribed slip on faults.}\n  \\label{tab:notation:elasticity:prescribed:slip}\n  \\begin{tabular}{lcp{3in}}\n    \\toprule\n    {\\bf Category} & {\\bf Symbol} & {\\bf Description} \\\\\n    \\midrule\n    Unknowns & $\\vec{u}$ & Displacement field \\\\\n    & $\\vec{v}$ & Velocity field \\\\\n    & $\\vec{\\lambda}$ & Lagrange multiplier field \\\\\n    Derived quantities & $\\tensor{\\sigma}$ & Cauchy stress tensor \\\\\n                   & $\\tensor{\\epsilon}$ & Cauchy strain tensor \\\\\n    Common constitutive parameters & $\\rho$ & Density \\\\\n  & $\\mu$ & Shear modulus \\\\\n  & $K$ & Bulk modulus \\\\\nSource terms & $\\vec{f}$ & Body force per unit volume, for example $\\rho \\vec{g}$ \\\\\n    & $\\vec{d}$ & Slip vector field on the fault corresponding to a\n      jump in the displacement field across the fault \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\\subsection{Quasistatic}\n\nAs in the case of elasticity without faults, we first consider the\nquasistatic case in which we neglect the inertial term\n($\\rho \\frac{\\partial \\vec{v}}{\\partial t} \\approx \\vec{0}$). We place\nall of the terms in the elasticity equation on the LHS, consistent\nwith implicit time stepping. 
Our solution vector consists of both\ndisplacements and Lagrange multipliers, and the strong form for the\nsystem of equations is\n\\begin{gather}\n  % Solution\n  \\vec{s}^T = \\left( \\vec{u} \\quad \\vec{\\lambda} \\right)^T \\\\\n  % Elasticity\n  \\vec{f}(\\vec{x},t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma}(\\vec{u}) = \\vec{0} \\text{ in }\\Omega, \\\\\n  % Neumann\n  \\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau}(\\vec{x},t) \\text{ on }\\Gamma_\\tau, \\\\\n  % Dirichlet\n  \\vec{u} = \\vec{u}_0(\\vec{x},t) \\text{ on }\\Gamma_u, \\\\\n  % Prescribed slip\n  \\vec{u}^+(\\vec{x},t) - \\vec{u}^-(\\vec{x},t) - \\vec{d}(\\vec{x},t) = \\vec{0} \\text{ on }\\Gamma_f,  \\\\\n  \\tensor{\\sigma} \\cdot \\vec{n} = -\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^+}, \\text{ and}\\\\\n  \\tensor{\\sigma} \\cdot \\vec{n} = +\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^-}.\n\\end{gather}\nWe create the weak form by taking the dot product with the trial\nfunction $\\trialvec[u]$ or $\\trialvec[\\lambda]$ and integrating over\nthe domain. After using the divergence theorem and incorporating the\nNeumann boundary and fault interface conditions, we have\n\\begin{gather}\n  % Elasticity\n  \\int_\\Omega \\trialvec[u] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[u] : -\\tensor{\\sigma}(\\vec{u}) \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma\n  + \\int_{\\Gamma_{f}} \\trialvec[u^+] \\cdot \\left(-\\vec{\\lambda}(\\vec{x},t)\\right)\n  + \\trialvec[u^-] \\cdot \\left(+\\vec{\\lambda}(\\vec{x},t)\\right)\\, d\\Gamma = 0\\\\\n  % Prescribed slip\n  \\int_{\\Gamma_{f}} \\trialvec[\\lambda] \\cdot \\left(\n    -\\vec{u}^+(\\vec{x},t) + \\vec{u}^-(\\vec{x},t) + \\vec{d}(\\vec{x},t) \\right) \\, d\\Gamma = 0.\n\\end{gather}\nWe have chosen the signs in the last equation so that the Jacobian\nwill be symmetric with respect to the Lagrange multiplier. We solve\nthe system of equations using implicit time stepping, making use of\nresidual functions and Jacobians for the LHS.\n\n\n\\subsubsection{Residual Pointwise Functions}\n\nIdentifying $F(t,s,\\dot{s})$ and $G(t,s)$, we have\n\\begin{align}\n % Fu\nF^u(t,s,\\dot{s}) &= \\int_\\Omega \\trialvec[u] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{f}^u_0} + \\nabla \\trialvec[u] : \\eqnannotate{-\\tensor{\\sigma}(\\vec{u})}{\\tensor{f^u_1}} \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{f}^u_0} \\, d\\Gamma \n  + \\int_{\\Gamma_{f}} \\trialvec[u^+] \\cdot \\eqnannotate{\\left(-\\vec{\\lambda}(\\vec{x},t)\\right)}{\\vec{f}^u_0}\n  + \\trialvec[u^-] \\cdot \\eqnannotate{\\left(+\\vec{\\lambda}(\\vec{x},t)\\right)}{\\vec{f}^u_0}\\, d\\Gamma \\\\\n  % Fl\n  F^\\lambda(t,s,\\dot{s}) &= \\int_{\\Gamma_{f}} \\trialvec[\\lambda] \\cdot \\eqnannotate{\\left(\n    -\\vec{u}^+(\\vec{x},t) + \\vec{u}^-(\\vec{x},t) + \\vec{d}(\\vec{x},t) \\right)}{\\vec{f}^\\lambda_0} \\, d\\Gamma, \\\\\n  % Gu\n  G^u(t,s) &= 0 \\\\\n  % Gl\n  G^\\lambda(t,s) &= 0\n\\end{align}\nCompared to the quasistatic elasticity case without a fault, we have\nsimply added additional pointwise functions associated with the\nfault. 
Our fault implementation does not change the formulation for\nthe materials or external Dirichlet or Neumann boundary conditions.\n\n\\subsubsection{Jacobian Pointwise Functions}\n\nThe LHS Jacobians are:\n\\begin{align}\n  % J_F uu\n  J_F^{uu} &= \\frac{\\partial F^u}{\\partial u} + s_\\mathit{tshift} \\frac{\\partial F^u}{\\partial \\dot{u}}\n      = \\int_\\Omega \\nabla \\trialvec[u] : -\\tensor{C} : \\frac{1}{2}(\\nabla + \\nabla^T)\\basisvec[u] \n\\, d\\Omega \n      = \\int_\\Omega \\trialscalar[u]_{i,k} \\, \\eqnannotate{\\left( -C_{ikjl} \\right)}{J_{f3}^{uu}} \\, \\basisscalar[u]_{j,l}\\, d\\Omega \\\\\n  % J_F ul\n  J_F^{u\\lambda} &= \\frac{\\partial F^u}{\\partial \\lambda} + s_\\mathit{tshift} \\frac{\\partial F^u}{\\partial \\dot{\\lambda}}\n      = \\int_{\\Gamma_{f}} \\trialscalar[u^+]_i \\eqnannotate{\\left(-\\delta_{ij}\\right)}{J^{u\\lambda}_{f0}} \\basisscalar[\\lambda]_j\n                   + \\trialscalar[u^-]_i \\eqnannotate{\\left(+\\delta_{ij}\\right)}{J^{u\\lambda}_{f0}} \\basisscalar[\\lambda]_j\\, d\\Gamma, \\\\\n  % J_F lu\n  J_F^{\\lambda u} &= \\frac{\\partial F^\\lambda}{\\partial u} + s_\\mathit{tshift} \\frac{\\partial F^\\lambda}{\\partial \\dot{u}}\n      = \\int_{\\Gamma_{f}} \\trialscalar[\\lambda]_i \n                    \\eqnannotate{\\left(-\\delta_{ij}\\right)}{J^{\\lambda u}_{f0}} \\basisscalar[u^+]_j\n                    + \\trialscalar[\\lambda]_i \\eqnannotate{\\left(+\\delta_{ij}\\right)}{J^{\\lambda u}_{f0}} \\basisscalar[u^-]_j \\, d\\Gamma, \\\\\n  % J_F ll\n  J_F^{\\lambda \\lambda} &= \\tensor{0}\n\\end{align}\nThis LHS Jacobian has the structure\n\\begin{equation}\n  J_F = \\left( \\begin{array} {cc} J_F^{uu} & J_F^{u\\lambda} \\\\ J_F^{\\lambda u} & 0 \\end{array} \\right)\n      = \\left( \\begin{array} {cc} J_F^{uu} & C^T \\\\ C & 0 \\end{array} \\right),\n\\end{equation}\nwhere $C$ contains entries of $\\pm 1$ for degrees of\nfreedom on the two sides of the fault. The Schur complement of $J$\nwith respect to $J_F^{uu}$ is $-C\\left(J_F^{uu}\\right)^{-1}C^T$.\n\n\n\\subsection{Dynamic}\n\nThe equation prescribing fault slip is independent of the Lagrange\nmultiplier, so we do not have a system of equations that we can put in\nthe form $\\dot{s} = G^*(t,s)$. Instead, we have a\ndifferential-algebraic set of equations (DAEs), which we solve using\nan implicit-explicit (IMEX) time integration scheme. As in the case of\ndynamic elasticity without faults, we introduce the velocity\n($\\vec{v}$) as an unknown to turn the elasticity equation into two\nfirst order equations. The strong form for our system of equations is:\n\\begin{gather}\n  % Solution\n  \\vec{s}^T = \\left( \\vec{u} \\quad \\vec{v} \\quad \\vec{\\lambda} \\right)^T \\\\\n  % Displacement-velocity\n  \\frac{\\partial \\vec{u}}{\\partial t} = \\vec{v}, \\\\\n  % Elasticity\n  \\rho(\\vec{x}) \\frac{\\partial \\vec{v}}{\\partial t} =\n  \\vec{f}(\\vec{x},t) + \\nabla \\cdot \\tensor{\\sigma}(\\vec{u}), \\\\\n  % Neumann BC\n  \\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau} \\text{ on } \\Gamma_\\tau. 
\\\\\n  % Dirichlet BC\n  \\vec{u} = \\vec{u}_0 \\text{ on } \\Gamma_u, \\\\\n  % Presribed slip\n  \\vec{u}^+(\\vec{x},t) - \\vec{u}^-(\\vec{x},t) - \\vec{d}(\\vec{x},t) = \\vec{0} \\text{ on }\\Gamma_f,  \\\\\n  \\tensor{\\sigma} \\cdot \\vec{n} = -\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^+}, \\text{ and}\\\\\n  \\tensor{\\sigma} \\cdot \\vec{n} = +\\vec{\\lambda}(\\vec{x},t) \\text{ on }\\Gamma_{f^-}.\n\\end{gather}\nThe differentiation index is 2 because we must take the second time\nderivative of the prescribed slip equation to match the order of the\ntime derivative in the elasticity equation,\n\\begin{gather}\n  \\frac{\\partial \\vec{v}^+}{\\partial t} - \\frac{\\partial \\vec{v}^-}{\\partial t} -\n  \\frac{\\partial^2 \\vec{d}(\\vec{x},t)}{\\partial t^2} = \\vec{0}.\n\\end{gather}\nWe generate the weak form in the usual way,\n\\begin{gather}\n  % Displacement-velocity\n  \\int_{\\Omega} \\trialvec[u] \\cdot \\frac{\\partial \\vec{u}}{\\partial t} \\, d\\Omega = \n  \\int_{\\Omega} \\trialvec[u] \\cdot \\vec{v} \\, d\\Omega, \\\\\n  % Elasticity\n  \\begin{multlined}\n  \\int_{\\Omega} \\trialvec[v] \\cdot \\rho(\\vec{x}) \\frac{\\partial \\vec{v}}{\\partial t} \\, d\\Omega \n  = \\int_\\Omega \\trialvec[v] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[v] : -\\tensor{\\sigma}(\\vec{u}) \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[v] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma \\\\\n  + \\int_{\\Gamma_{f}} \\trialvec[v^+] \\cdot \\left(-\\vec{\\lambda}(\\vec{x},t)\\right)\n  + \\trialvec[v^-] \\cdot \\left(+\\vec{\\lambda}(\\vec{x},t)\\right)\\, d\\Gamma,\n  \\end{multlined}\\\\\n  % Prescribed slip\n  \\int_{\\Gamma_f} \\trialvec[\\lambda] \\cdot \\left(\n    \\frac{\\partial \\vec{v}^+}{\\partial t} - \\frac{\\partial \\vec{v}^-}{\\partial t} -\n    \\frac{\\partial^2 \\vec{d}(\\vec{x},t)}{\\partial t^2} \\right) \\, d\\Gamma = 0.\n\\end{gather}\n\nFor compatibility with PETSc TS IMEX implementations, we need\n$\\dot{\\vec{s}}$ on the LHS for the explicit part (displacement-velocity and\nelasticity equations) and we need $\\vec{\\lambda}$ in the equation for\nthe implicit part (prescribed slip equation). We first focus on the\nexplicit part and create a lumped LHS Jacobian matrix, $M$, so that we\nhave\n\\begin{gather}\n  % Displacement-velocity\n  \\label{eqn:displacement:velocity:prescribed:slip:weak:form}\n  \\frac{\\partial \\vec{u}}{\\partial t} = M_u^{-1} \\int_{\\Omega} \\trialvec[u] \\cdot \\vec{v} \\, d\\Omega, \\\\\n  % Elasticity\n  \\label{eqn:elasticity:prescribed:slip:dynamic:weak:form}\n  \\begin{multlined}\n  \\frac{\\partial \\vec{v}}{\\partial t}\n  = M_v^{-1} \\int_\\Omega \\trialvec[v] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[v] : -\\tensor{\\sigma}(\\vec{u}) \\, d\\Omega\n  + M_v^{-1} \\int_{\\Gamma_\\tau} \\trialvec[v] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma \\\\\n  + M_{v^+}^{-1} \\int_{\\Gamma_{f}} \\trialvec[v^+] \\cdot \\left(-\\vec{\\lambda}(\\vec{x},t)\\right) \\, d\\Gamma\n  + M_{v^-}^{-1} \\int_{\\Gamma_{f}}\\trialvec[v^-] \\cdot \\left(+\\vec{\\lambda}(\\vec{x},t)\\right) \\, d\\Gamma,\n\\end{multlined}\\\\\n% Mu\nM_u = \\mathit{Lump}\\left( \\int_\\Omega \\trialscalar[u]_i \\delta_{ij} \\basisscalar[u]_j \\, d\\Omega \\right), \\\\\n% Mv\nM_v = \\mathit{Lump}\\left( \\int_\\Omega \\trialscalar[v]_i \\rho(\\vec{x}) \\delta_{ij} \\basisscalar[v]_j \\, d\\Omega \\right).\n\\end{gather}\nNow, focusing on the implicit part we want to introduce\n$\\vec{\\lambda}$ into the prescribed slip equation. 
We solve the\nelasticity equation for $\\frac{\\partial \\vec{v}}{\\partial t}$, create\nthe weak form, and substitute into the prescribed slip\nequation. Solving the elasticity equation for\n$\\frac{\\partial \\vec{v}}{\\partial t}$, we have\n\\begin{equation}\n  \\frac{\\partial \\vec{v}}{\\partial t} = \\frac{1}{\\rho(x)} \\vec{f}(\\vec{x},t) + \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u}) \\right),\n\\end{equation}\nand the corresponding weak form is\n\\begin{equation}\n  \\label{eqn:prescribed:slip:DAE:weak:form}\n  \\int_{\\Omega} \\trialvec[v] \\cdot \\frac{\\partial \\vec{v}}{\\partial t} \\, d\\Omega\n  = \\int_\\Omega \\trialvec[v] \\cdot \\frac{1}{\\rho(x)} \\vec{f}(\\vec{x},t) + \\trialvec[v] \\cdot \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega,\n\\end{equation}\nWe apply the divergence theorem,\n\\begin{equation}\n  \\int_{\\Omega} \\nabla \\cdot \\vec{F} \\, d\\Omega = \\int_\\Gamma \\vec{F} \\cdot \\vec{n} \\, d\\Gamma,\n\\end{equation}\nwith $\\vec{F} = \\trialvec[v] \\cdot \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u})\\right)$ to get\n\\begin{equation}\n  \\int_\\Omega \\trialvec[v] \\cdot \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega\n  = \\int_\\Gamma \\trialvec[v] \\cdot \\left( \\frac{1}{\\rho(x)} \\tensor{\\sigma}(\\vec{u}) \\cdot \\vec{n} \\right) \\, d\\Gamma\n  + \\int_\\Omega \\nabla\\trialvec[v] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\trialvec[v] \\cdot \\left(-\\frac{\\nabla \\rho(\\vec{x})}{\\rho^2} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega.\n\\end{equation}\nRestricting the trial function to $v^+$ and $v^-$ while recognizing that there is no overlap between the external Neumann boundary conditions $\\Gamma_\\tau$ and the fault interfaces $\\Gamma_f$, yields\n\\begin{gather}\n  \\int_\\Omega \\trialvec[v^+] \\cdot \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega\n  = \\int_{\\Gamma_f} \\trialvec[v^+] \\cdot \\left(-\\frac{1}{\\rho(x)} \\vec{\\lambda} \\right) \\, d\\Gamma\n  + \\int_\\Omega \\nabla\\trialvec[v^+] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\trialvec[v^+] \\cdot \\, \\left(-\\frac{\\nabla \\rho(\\vec{x})}{\\rho^2} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega, \\\\\n  \\int_\\Omega \\trialvec[v^-] \\cdot \\frac{1}{\\rho(x)} \\left(\\nabla \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega\n  = \\int_{\\Gamma_f} \\trialvec[v^-] \\cdot \\left(+\\frac{1}{\\rho(x)} \\vec{\\lambda} \\right) \\, d\\Gamma\n  + \\int_\\Omega \\nabla\\trialvec[v^-] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\trialvec[v^-] \\cdot \\, \\left(-\\frac{\\nabla \\rho(\\vec{x})}{\\rho^2} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega. 
\\end{gather}\nPicking $\\trialvec[v]=\\trialvec[\\lambda]$ and substituting into equation~\\vref{eqn:prescribed:slip:DAE:weak:form} gives\n\\begin{multline}\n  \\int_{\\Gamma_f} \\trialvec[\\lambda] \\cdot \\left(\n    \\frac{\\partial \\vec{v}^+}{\\partial t} - \\frac{\\partial \\vec{v}^-}{\\partial t} -\n    \\frac{\\partial^2 \\vec{d}(\\vec{x},t)}{\\partial t^2} \\right) \\, d\\Gamma = \\\\\n  \\int_\\Omega \\trialvec[v^+] \\cdot \\left( \\frac{1}{\\rho(\\vec{x})} \\vec{f}(\\vec{x}, t)\n    -\\frac{\\nabla \\rho(\\vec{x})}{\\rho^2(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right) \n  + \\nabla \\trialvec[v^+] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega\n  + \\int_{\\Gamma_f} \\trialvec[v^+] \\cdot \\left(-\\frac{1}{\\rho(\\vec{x})} \\vec{\\lambda} \\right) \\, d\\Gamma \\\\\n  - \\int_\\Omega \\trialvec[v^-] \\cdot \\left( \\frac{1}{\\rho(\\vec{x})} \\vec{f}(\\vec{x}, t)\n  -\\frac{\\nabla \\rho(\\vec{x})}{\\rho^2(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\nabla \\trialvec[v^-] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Omega\n  + \\int_{\\Gamma_f} \\trialvec[v^-] \\cdot \\left(-\\frac{1}{\\rho(\\vec{x})} \\vec{\\lambda} \\right) \\, d\\Gamma \\\\\n  - \\int_{\\Gamma_f} \\trialvec[\\lambda] \\cdot \\frac{\\partial^2 \\vec{d}(\\vec{x}, t)}{\\partial t^2} \\, d\\Gamma.\n\\end{multline}\nWe rewrite the integrals over the domain involving the degrees of\nfreedom adjacent to the fault as integrals over the positive and\nnegative sides of the fault. These are implemented as integrals over\nthe faces of cells adjacent to the fault; they involve quantities,\nsuch as density, that are defined only within the domain cells. After\ncollecting and rearranging terms, we have\n\\begin{multline}\n  \\label{eqn:elasticity:prescribed:slip:dynamic:DAE:weak:form}\n  \\int_{\\Gamma_{f^+}} \\trialvec[\\lambda] \\cdot \\frac{1}{\\rho(\\vec{x})} \\left(\n    \\vec{\\lambda} - \\vec{f}(\\vec{x},t) + \\frac{\\nabla\\rho(\\vec{x})}{\\rho(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\nabla \\trialvec[\\lambda] : \\left(+\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right) \\, d\\Gamma \\\\\n  + \\int_{\\Gamma_{f^-}} \\trialvec[\\lambda] \\cdot \\frac{1}{\\rho(\\vec{x})} \\left(\n    \\vec{\\lambda} + \\vec{f}(\\vec{x},t) - \\frac{\\nabla\\rho(\\vec{x})}{\\rho(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right)\n  + \\nabla \\trialvec[\\lambda] : \\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right)  \\, d\\Gamma \\\\\n  + \\int_{\\Gamma_f} \\trialvec[\\lambda] \\cdot \\frac{\\partial^2 \\vec{d}(\\vec{x}, t)}{\\partial t^2} \\, d\\Gamma\n  = 0.\n\\end{multline}\n\n\n\\subsection{Residual Pointwise Functions}\n\nCombining the explicit parts of the weak form in equations~\\ref{eqn:displacement:velocity:prescribed:slip:weak:form} and \\ref{eqn:elasticity:prescribed:slip:dynamic:weak:form} with the implicit part of the weak form in equation~\\ref{eqn:elasticity:prescribed:slip:dynamic:DAE:weak:form} and identifying $F(t,s,\\dot{s})$ and $G(t,s)$, we have\n\\begin{gather}\n  % Fu\n  F^u(t,s,\\dot{s}) = \\frac{\\partial \\vec{u}}{\\partial t} \\\\\n  % Fv\n  F^v(t,s,\\dot{s}) = \\frac{\\partial \\vec{v}}{\\partial t} \\\\\n  % Fl\n  \\begin{multlined}\n    F^\\lambda(t,s,\\dot{s}) =   \\int_{\\Gamma_{f^+}} \\trialvec[\\lambda] \\cdot \\eqnannotate{\\frac{1}{\\rho(\\vec{x})} \\left(\n    \\vec{\\lambda} - \\vec{f}(\\vec{x},t) + 
\\frac{\\nabla\\rho(\\vec{x})}{\\rho(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right)}{f^\\lambda_0}\n  + \\nabla \\trialvec[\\lambda] : \\eqnannotate{\\left(+\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u})\\right)}{f^\\lambda_1} \\, d\\Gamma \\\\\n  + \\int_{\\Gamma_{f^-}} \\trialvec[\\lambda] \\cdot \\eqnannotate{\\frac{1}{\\rho(\\vec{x})} \\left(\n    \\vec{\\lambda} + \\vec{f}(\\vec{x},t) - \\frac{\\nabla\\rho(\\vec{x})}{\\rho(\\vec{x})} \\cdot \\tensor{\\sigma}(\\vec{u}) \\right)}{f^\\lambda_0}\n  + \\nabla \\trialvec[\\lambda] : \\eqnannotate{\\left(-\\frac{1}{\\rho(\\vec{x})} \\tensor{\\sigma}(\\vec{u}) \\right)}{f^\\lambda_1}  \\, d\\Gamma \\\\\n  + \\int_{\\Gamma_f} \\trialvec[\\lambda] \\cdot \\eqnannotate{\\frac{\\partial^2 \\vec{d}(\\vec{x}, t)}{\\partial t^2}}{f^\\lambda_0} \\, d\\Gamma\n  \\end{multlined}\\\\\n  % Gu\n  G^u(t,s) = \\int_\\Omega \\trialvec[u] \\cdot \\eqnannotate{\\vec{v}}{\\vec{g}^u_0} \\, d\\Omega, \\\\\n  % Gv\n  G^v(t,s) =  \\int_\\Omega \\trialvec[v] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{g}^v_0} + \\nabla \\trialvec[v] : \\eqnannotate{-\\tensor{\\sigma}(\\vec{u})}{\\tensor{g^v_1}} \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[v] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{g}^v_0} \\, d\\Gamma,\n  + \\int_{\\Gamma_{f}} \\trialvec[v^+] \\cdot \\eqnannotate{\\left(-\\vec{\\lambda}(\\vec{x},t)\\right)}{\\vec{g}^v_0}\n             + \\trialvec[v^-] \\cdot \\eqnannotate{\\left(+\\vec{\\lambda}(\\vec{x},t)\\right)}{\\vec{g}^v_0} \\, d\\Gamma, \\\\\n  % Gl\n  G^l(t,s) = 0\n\\end{gather}\n\n\n\\subsection{Jacobian Pointwise Functions}\n\nFor the explicit part we have pointwise functions for computing the\nlumped LHS Jacobian. These are exactly the same pointwise functions as\nin the dynamic case without a fault,\n\\begin{align}\n  % J_F uu\n  J_F^{uu} &= \\frac{\\partial F^u}{\\partial u} + s_\\mathit{tshift} \\frac{\\partial F^u}{\\partial \\dot{u}} =\n             \\int_\\Omega \\trialscalar[u]_i \\eqnannotate{s_\\mathit{tshift} \\delta_{ij}}{J^{uu}_{f0}} \\basisscalar[u]_j  \\, d\\Omega, \\\\\n  % J_F vv\n  J_F^{vv} &= \\frac{\\partial F^v}{\\partial v} + s_\\mathit{tshift} \\frac{\\partial F^v}{\\partial \\dot{v}} =\n             \\int_\\Omega \\trialscalar[v]_i \\eqnannotate{\\rho(\\vec{x}) s_\\mathit{tshift} \\delta_{ij}}{J ^{vv}_{f0}} \\basisscalar[v]_j \\, d\\Omega\n\\end{align}\nFor the implicit part, we have pointwise functions for the LHS Jacobians associated with the prescribed slip,\n\\begin{gather}\n  \\begin{multlined}\n  % J_F lu\n  J_F^{\\lambda u} = \\frac{\\partial F^\\lambda}{\\partial u} + s_\\mathit{tshift} \\frac{\\partial F^\\lambda}{\\partial \\dot{u}} = \\\\\n                    \\int_{\\Gamma_{f^+}} \\trialscalar[\\lambda]_i \\eqnannotate{\\frac{\\rho_{,j}(\\vec{x})}{\\rho^2(\\vec{x})} C_{ikjl} \\basisscalar[u]_{j,l}}{J^{\\lambda u}_{fX}}\n                    + \\trialscalar[\\lambda]_{i,k} \\eqnannotate{+\\frac{1}{\\rho(\\vec{x})} C_{ikjl} \\basisscalar[u]_{k,l}}{J^{\\lambda u}_{f3}} \\, d\\Gamma \\\\\n                    +\\int_{\\Gamma_{f^-}} \\trialscalar[\\lambda]_i \\eqnannotate{-\\frac{\\rho_{,j}(\\vec{x})}{\\rho^2(\\vec{x})} C_{ikjl} \\basisscalar[u]_{j,l}}{J^{\\lambda u}_{fX}}\n                    + \\trialscalar[\\lambda]_{i,k} \\eqnannotate{-\\frac{1}{\\rho(\\vec{x})} C_{ikjl} \\basisscalar[u]_{k,l}}{J^{\\lambda u}_{f3}} \\, d\\Gamma\n                  \\end{multlined} \\\\\n  % J_F ll\n  J_F^{\\lambda \\lambda} = \\frac{\\partial F^\\lambda}{\\partial \\lambda} + s_\\mathit{tshift} 
\\frac{\\partial F^\\lambda}{\\partial \\dot{\\lambda}} =\n             \\int_{\\Gamma_{f^+}} \\trialscalar[\\lambda]_i \\eqnannotate{\\frac{1}{\\rho(\\vec{x})} \\delta_{ij}}{J^{\\lambda\\lambda}_{f0}} \\basisscalar[\\lambda]_j \\, d\\Gamma\n            + \\int_{\\Gamma_{f^-}} \\trialscalar[\\lambda]_i \\eqnannotate{\\frac{1}{\\rho(\\vec{x})} \\delta_{ij}}{J^{\\lambda\\lambda}_{f0}} \\basisscalar[\\lambda]_j \\, d\\Gamma\n\\end{gather}\n\n\n\n% End of file", "meta": {"hexsha": "bb8f887c0a99acd08de204f03e05394bdfd16d54", "size": 21250, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/userguide/governingeqns/elasticity_infstrain_prescribedslip.tex", "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z", "max_issues_repo_path": "doc/userguide/governingeqns/elasticity_infstrain_prescribedslip.tex", "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 277, "max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z", "max_forks_repo_path": "doc/userguide/governingeqns/elasticity_infstrain_prescribedslip.tex", "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z", "avg_line_length": 59.1922005571, "max_line_length": 341, "alphanum_fraction": 0.6347294118, "num_tokens": 7805, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916064586998, "lm_q2_score": 0.6654105454764747, "lm_q1q2_score": 0.5745098798134932}}
{"text": "\n\\section{A Bayesian Model for Massive Competitions}\n\\label{sec:bayes_model}\n% The method is best-suited for competitions in which the most pertinent information is round rankings. Note that, in the programming contest setting, this means we discard specific information about player scores, which are difficult to model and depend heavily on the specifics of the problem set. This method would be ill-suited to model, for instance, track races, where a runner's absolute time is more informative than relative rankings. On the other hand, it may be very well-suited to obstacle-course races, if each round consists of novel obstacles that make the absolute times hard to interpret.\n\nWe now describe the setting formally, denoting random variables by capital letters. A series of competitive \\textbf{rounds}, indexed by $t=1,2,3,\\ldots$, take place sequentially in time. Each round has a set of participating \\textbf{players} $\\cP_t$, which may in general overlap between rounds. A player's \\textbf{skill} is likely to change with time, so we represent the skill of player $i$ at time $t$ by a real random variable $S_{i,t}$.\n\nIn round $t$, each player $i\\in \\cP_t$ competes at some \\textbf{performance} level $P_{i,t}$, typically close to their current skill $S_{i,t}$. The deviations $\\{P_{i,t}-S_{i,t}\\}_{i\\in\\cP_t}$ are assumed to be i.i.d. and independent of $\\{S_{i,t}\\}_{i\\in\\cP_t}$.\n\nPerformances are not observed directly; instead, a ranking gives the relative order among all performances $\\{P_{i,t}\\}_{i\\in\\cP_t}$. In particular, ties are modelled to occur when performances are exactly equal, a zero-probability event when their distributions are continuous.\\footnote{\n%If $e$ contains ties, then $(E_t = e)$ has probability zero in our model. \nThe relevant limiting procedure is to treat performances within $\\epsilon$-width buckets as ties, and letting $\\epsilon\\rightarrow 0$. This technicality appears in the proof of \\Cref{thm:uniq-max}.} This ranking constitutes the observational \\textbf{evidence} $E_t$ for our Bayesian updates. The rating system seeks to estimate the skill $S_{i,t}$ of every player at the present time $t$, given the historical round rankings $E_{\\le t} := \\{ E_1,\\ldots,E_t \\}$.\n\nWe overload the notation $\\Pr$ for both probabilities and probability densities: the latter interpretation applies to zero-probability events, such as in $\\Pr(S_{i,t} = s)$. We also use colons as shorthand for collections of variables differing only in a subscript: for instance, $P_{:,t}:=\\{P_{i,t}\\}_{i\\in\\cP_t}$. The joint distribution described by our Bayesian model factorizes as follows:\n\\begin{align}\n    &\\Pr(S_{:,:}, P_{:,:}, E_:) \\label{eq:model}\n    \\\\&= \\prod_i \\Pr(S_{i,0})\n    \\prod_{i,t} \\Pr(S_{i,t}\\mid S_{i,t-1})\n    \\prod_{i,t} \\Pr(P_{i,t}\\mid S_{i,t})\n    \\prod_t \\Pr(E_t\\mid P_{:,t}), \\nonumber\n\\end{align}\n\\vspace{-1.5em}\n\\begin{align*}\n    \\text{where } \\Pr(S_{i,0}) &\\text{ is the initial skill prior,}\n    \\\\\\Pr(S_{i,t}\\mid S_{i,t-1}) &\\text{ is the skill evolution model (\\Cref{sec:skill-drift}),}\n    \\\\\\Pr(P_{i,t}\\mid S_{i,t}) &\\text{ is the performance model, and}\n    \\\\\\Pr(E_t\\mid P_{:,t}) &\\text{ is the evidence model.}\n\\end{align*}\nFor the first three factors, we will specify log-concave distributions (see \\Cref{def:log-concave}). The evidence model, on the other hand, is a deterministic indicator. 
It equals one when $E_t$ is consistent with the relative ordering among $\\{P_{i,t}\\}_{i\\in\\cP_t}$, and zero otherwise.\n\nFinally, our model assumes that the number of participants $|\\cP_t|$ is large. %(in practice, in the tens to thousands).\nThe main idea behind our algorithm is that, in sufficiently massive competitions, from the evidence $E_t$ we can infer very precise estimates for $\\{P_{i,t}\\}_{i\\in\\cP_t}$. Hence, we can treat these performances as if they were observed directly.\n%\\begin{theorem}\n%Consider a round $t$ with player $i$ having performance $P_{i,t}$, and a set of participants $\\cP_t$ drawn from the same distribution as player $i$. Suppose our prior belief on $P_{i,t}$ is continuous and has positive density at its true value. Then with probability 1, in the limit $|\\cP_t|\\rightarrow \\infty$, the posterior belief on $P_{i, t}$ conditioned on $E_t$ concentrates around its true value. Furthermore,\n%\\[\\lim_{|\\cP_t|\\rightarrow\\infty} \\Pr(S_{i,t}=s \\mid P_{i,<t},\\,E_t) = \\Pr(S_{i,t} = s \\mid P_{i, \\leq t}). \\]\n%\\end{theorem}\n%\\begin{proof}\n\nThat is, suppose we have the skill prior at round $t$:\n\\begin{equation}\n\\label{eq:pi-s}\n\\pi_{i,t}(s) := \\Pr(S_{i,t} = s \\mid P_{i,<t}).\n\\end{equation}\n\nNow, we observe $E_t$. By \\Cref{eq:model}, it is conditionally independent of $S_{i,t}$, given $P_{i,\\le t}$. By the law of total probability,\n\\begin{align*}\n&\\Pr(S_{i,t}=s \\mid P_{i,<t},\\,E_t)\n\\\\&= \\int \\Pr(S_{i,t}=s \\mid P_{i,<t},\\,P_{i,t}=p) \\Pr(P_{i,t}=p \\mid P_{i,<t},\\,E_t) \\, \\mathrm{d}p\n\\\\&\\rightarrow \\Pr(S_{i,t}=s \\mid P_{i,\\le t}) \\quad\\text{almost surely as }|\\mathcal P_t|\\rightarrow\\infty.\n\\end{align*}\nThe integral is intractable in general, since the performance posterior $\\Pr(P_{i,t}=p \\mid P_{i,<t},\\,E_t)$ depends not only on player $i$, but also on our belief regarding the skills of all $j\\in\\cP_t$. However, in the limit of infinite participants, Doob's consistency theorem~\\cite{F63} implies that it concentrates at the true value $P_{i,t}$. Since our posteriors are continuous, the convergence holds for all $s$ simultaneously.\n%\\end{proof}\n\nIndeed, we don't even need the full evidence $E_t$. Let $E^L_{i,t} = \\{j\\in\\cP_t:P_{j,t}>P_{i,t}\\}$ be the set of players against whom $i$ lost, and $E^W_{i,t} = \\{j\\in\\cP_t:P_{j,t}<P_{i,t}\\}$ be the set of players against whom $i$ won. That is, we only see who wins, draws, and loses against $i$. $P_{i,t}$ remains identifiable using only $(E^L_{i,t}, E^W_{i,t})$, which will be more convenient for our purposes.\n\nPassing to the limit in which $P_{i,\\le t}$ is identified serves to justify several common simplifications made by total-order rating systems. Firstly, since $P_{i,\\le t}$ is a sufficient statistic for predicting $S_{i,t}$, it may be said that $(E^L_{i,\\le t}, E^W_{i,\\le t})$ are ``almost sufficient'' for $S_{i,t}$: any additional information, such as from domain-specific scoring systems, becomes redundant for the purposes of skill estimation. Secondly, conditioned on $P_{:,\\le t}$, the posterior skills $S_{:,t}$ are independent of one another. As a result, there are no inter-player correlations to model, and a player's posterior is unaffected by rounds in which they are not a participant. Finally, if we've truly identified $P_{i,t}$, then rounds later than $t$ should not prompt revisions in our estimate for $P_{i,t}$. 
This obviates the need for expensive whole-history update procedures~\\cite{DHMG07,WHR}, for the purposes of present skill estimation.\\footnote{As opposed to \\emph{historical} skill estimation, which is concerned with $P(S_{i,t} \\mid P_{i,\\le t'})$ for $t'>t$. Whole-history methods can take advantage of future information.}\n\nFinally, a word on the rate of convergence. Suppose we want our estimate to be within $\\epsilon$ of $P_{i,t}$, with probability at least $1-\\delta$. By asymptotic normality of the posterior~\\cite{F63}, it suffices to have $O(\\frac 1{\\epsilon^2}\\log \\frac 1\\delta)$ participants.\n\nWhen the initial prior, performance model, and evolution model are all Gaussian, treating $P_{i,t}$ as certain is the \\emph{only} simplifying approximation we will make; that is, in the limit $|\\cP_t|\\rightarrow\\infty$, our method performs \\emph{exact} inference on \\Cref{eq:model}. In the following sections, we focus some attention on generalizing the performance model to non-Gaussian log-concave families, parametrized by location and scale. We will use the logistic distribution as a running example and see that it induces robustness; however, our framework is agnostic to the specific distributions used.%For non-Gaussian performance models, we will make a few additional approximations, but we resist the temptation to approximate the posteriors by something compact. \n\nThe prior \\textbf{rating} $\\mu^\\pi_{i,t}$ and posterior rating $\\mu_{i,t}$ of player $i$ at round $t$ should be statistics that summarize the player's prior and posterior skill distribution, respectively. We'll use the mode: thus, $\\mu_{i,t}$ is the maximum a posteriori (MAP) estimate, obtained by setting $s$ to maximize the posterior $\\Pr(S_{i,t}=s \\mid P_{i,\\le t})$. By Bayes' rule,\n\\begin{align}\n\\label{eq:new-obj}\n\\mu_{i,t}^\\pi &:= \\argmax_{s} \\pi_{i,t}(s), \\nonumber\n\\\\\\mu_{i,t} &:= \\argmax_{s} \\pi_{i,t}(s) \\Pr(P_{i,t} \\mid S_{i,t}=s).\n\\end{align}\n\nThis objective suggests a two-phase algorithm to update each player $i\\in\\cP_t$ in response to the results of round $t$. In phase one, we estimate $P_{i,t}$ from $(E^L_{i,t}, E^W_{i,t})$. By Doob's consistency theorem, our estimate is extremely precise when $|\\cP_t|$ is large, so we assume it to be exact. 
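(For a minimal sketch of what phase one can look like, not a commitment to any particular estimator: if the current beliefs on the opponents' performances $P_{j,t}$ are independent with known continuous distributions, a natural point estimate is\n\\[\n\\hat{P}_{i,t} = \\argmax_p \\prod_{j \\in E^W_{i,t}} \\Pr(P_{j,t} < p) \\prod_{j \\in E^L_{i,t}} \\Pr(P_{j,t} > p),\n\\]\nwhich is a log-concave maximization in $p$ whenever the performance densities are log-concave.)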
In phase two, we update our posterior for $S_{i,t}$ and the rating $\\mu_{i,t}$ according to \\Cref{eq:new-obj}.\n\n", "meta": {"hexsha": "3185a5c736cebd075c63f717f319392981c3cdbf", "size": 9044, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/source/sections/s2_model.tex", "max_stars_repo_name": "kiwec/Elo-MMR", "max_stars_repo_head_hexsha": "bf64ea75e8c0dbb946d379b9bee1753e604b388a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 57, "max_stars_repo_stars_event_min_datetime": "2021-02-12T18:28:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T10:59:36.000Z", "max_issues_repo_path": "paper/source/sections/s2_model.tex", "max_issues_repo_name": "cesartxt/Elo-MMR", "max_issues_repo_head_hexsha": "7ef860d599e8325ae1f615ce08120369b39bfecc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2021-05-09T15:42:06.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T08:41:23.000Z", "max_forks_repo_path": "paper/source/sections/s2_model.tex", "max_forks_repo_name": "cesartxt/Elo-MMR", "max_forks_repo_head_hexsha": "7ef860d599e8325ae1f615ce08120369b39bfecc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-02-13T13:21:48.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T23:08:41.000Z", "avg_line_length": 127.3802816901, "max_line_length": 1155, "alphanum_fraction": 0.7251216276, "num_tokens": 2606, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246077301781, "lm_q2_score": 0.6893056231680122, "lm_q1q2_score": 0.5744153380326898}}
{"text": "\\subsection{Position space representation}\n\nFirst we define the time integral of the Lagrangian of the classical oscillator given in Eq.~(\\ref{eq:5}), over a period $T=2\\pi/\\omega$ as\n\\begin{equation} \\label{eq:b1}\n  \\Delta_{\\varepsilon} = \\frac{1}{T} \\int_0^T L(\\zeta,\\dot{\\zeta},t') dt'.\n\\end{equation}\nAdditionally, performing this integral, we can obtain a more simplified result\n\\begin{equation} \\label{eq:b2}\n  \\Delta_{\\varepsilon} = \\frac{e^2E^2}{4m_e(\\omega_0^2 - \\omega^2)}.\n\\end{equation}\nNext, we define another parameter\n\\begin{equation} \\label{eq:b3}\n  \\xi =\n  \\int_0^t \\; L(\\zeta,\\dot{\\zeta},t') dt' -\n  \\Delta_{\\varepsilon} t,\n\\end{equation}\nand after simplifying, we can identify\n\\begin{equation} \\label{eq:b4}\n  \\xi =\n  \\frac{e^2E^2\\qty(3\\omega^2 - \\omega_0^2)}{8m_e\\omega(\\omega_0^2 - \\omega^2)^2} \\sin(2\\omega t),\n\\end{equation}\nwhich is a periodic function in time. Using these parameters, we can factorize the wave function given in Eq.~(\\ref{eq:2}) as linearly time-dependent part and periodic time-dependent part as follows\n\\begin{equation} \\label{eq:b5}\n  \\begin{aligned}\n    \\psi_{\\alpha}&(x,y,t) \\\\\n    & =\n    \\exp(\\frac{i}{\\hbar} \\left(-\\epsilon_nt + \\Delta_{\\varepsilon} t \\right) )\n    \\frac{1}{\\sqrt{L_x}} \\chi_n\\bm{\\left(}y - y_0 - \\zeta(t)\\bm{\\right)}\n    \\\\\n    & \\quad\\times\n    \\exp \\left(\n       \\frac{i}{\\hbar}\n       \\left\\{\n       p_x x +\n       \\frac{eE(y-y_0)}{\\omega}\\cos(\\omega t)  \\right. \\right. \\\\\n    & \\left. \\left. \\qquad\\qquad\\qquad\\qquad\n    + m_e\\dot{\\zeta}(t) \\left[ y-y_0 -\\zeta(t) \\right]\n    + \\xi \\right\\}\n     \\right).\n  \\end{aligned}\n\\end{equation}\nThis leads to separate the linear time-dependent phase component as the quasienergies\n\\begin{equation} \\label{eq:b6}\n  \\varepsilon_{n} =\n  \\hbar \\omega_0\\qty(n + \\frac{1}{2}) - \\Delta_{\\varepsilon},\n\\end{equation}\nwhile rest of the components as time-periodic Floquet modes\n\\begin{equation} \\label{eq:b7}\n  \\begin{aligned}\n    \\phi_{n,m}(x,y,t) =  &\n    \\frac{1}{\\sqrt{L_x}} \\chi_n\\bm{\\left(}y - y_0 - \\zeta(t)\\bm{\\right)} \\\\\n    & \\times\n    \\exp \\left(\n       \\frac{i}{\\hbar}\n       \\left\\{\n       p_x x +\n       \\frac{eE(y-y_0)}{\\omega}\\cos(\\omega t)  \\right. \\right. \\\\\n    & \\left. \\left. 
\qquad\qquad\quad
    + m_e\dot{\zeta}(t) \left[ y-y_0 -\zeta(t) \right]
    + \xi \right\}
     \right).
  \end{aligned}
\end{equation}

\subsection{Momentum space representation}

We perform a continuous Fourier transform over the confined space $A=L_xL_y$ on the Floquet modes given in Eq.~(\ref{eq:7}) to obtain the Floquet modes in momentum space
\begin{equation} \label{eq:b8}
  \begin{aligned}
    \phi_{n,m}&(k_x,k_y,t) \\
    & =
    \exp(\frac{-i\gamma(t)}{\hbar}y_0)
    \exp(\frac{-i}{\hbar}\left[m_e \dot{\zeta}(t) \zeta(t) - \xi \right]) \\
    & \quad\times
    \int_{-L_y/2}^{L_y/2} \exp(-i \left[k_y - \gamma(t) \right]y)
      \chi_n \bm{\left(}y - \mu(t)\bm{\right)} dy \\
    & \quad\times
    \frac{1}{\sqrt{L_x}} \int_{-L_x/2}^{L_x/2}
     \exp(-ik_x x)\exp( \flatfrac{i p_x }{\hbar}x ) dx.
  \end{aligned}
\end{equation}
Here we introduced two new parameters,
\begin{equation} \label{eq:b9}
  \mu(t) = \frac{eE\sin(\omega t)}{m_e(\omega_0^2 - \omega^2)} + y_0,
\end{equation}
and
\begin{equation} \label{eq:b10}
  \gamma(t) =
  \frac{eE\omega_0^2\cos(\omega t)}{\hbar\omega(\omega_0^2 - \omega^2)}.
\end{equation}
Subsequently, using the Fourier transform identity \cite{bruus04}
\begin{equation} \label{eq:b11}
  \int_{-L_x/2}^{L_x/2}
  \exp( -ik_x x + \flatfrac{i p_x x}{\hbar}) dx =
  L_x \delta_{k_x,\flatfrac{p_x}{\hbar}},
\end{equation}
we can derive
\begin{equation} \label{eq:b12}
  \begin{aligned}
    \phi_{n,m}(k_x,k_y,t)  =
    \Phi_{n,m}&(k_y,t)
    \delta_{k_x,\flatfrac{p_x}{\hbar}}
    \exp(\frac{-i}{\hbar} \gamma(t)y_0) \\
    & \times
    \exp(\frac{-i}{\hbar}
    \left[ m_e \dot{\zeta}(t) \zeta(t) - \xi \right]),
  \end{aligned}
\end{equation}
where we define $\Phi_{n,m}(k_y,t)$ as
\begin{equation} \label{eq:b13}
  \begin{aligned}
    &\Phi_{n,m}(k_y,t) \\
    & \quad=
    \sqrt{L_x}
    \int_{-L_y/2}^{L_y/2}
    \chi_n \bm{\left(}y - \mu(t)\bm{\right)}
    \exp \bm{\left(}-i\left[k_y - \gamma(t) \right]y\bm{\right)} dy.
  \end{aligned}
\end{equation}
Substituting ${k'_y} = k_y -\gamma(t)$ with $y' = y -\mu(t)$, and assuming that the size of the considered 2DEG sample in the $y$-direction is considerably large ($L_y \rightarrow \infty$), we can obtain
\begin{equation} \label{eq:b14}
  \Phi_{n,m}({k'_y} ,t) =
  {\sqrt{L_x}} e^{-i {k'_y}\mu}
  \int_{-\infty}^{\infty} \chi_{n}(y') \exp(-i{k'_y} y') dy'.
\end{equation}
We can identify that the above integral is the Fourier transform of the $\chi_n$ functions. Using the symmetry properties of the Fourier transform for Gauss-Hermite functions $\theta_n(x)$ \cite{celeghini21},
\begin{equation} \label{eq:b15}
  \mathcal{FT}[\theta_n(\kappa x),x,k] = \frac{i^n}{|\kappa|}\theta_n(k/\kappa),
\end{equation}
we can simplify Eq.~(\ref{eq:b14}) as
\begin{equation} \label{eq:b16}
  \Phi_{n,m}({k'_y} ,t) =
  \sqrt{L_x} e^{-i {k'_y}\mu} \widetilde{\chi}_n(k'_y),
\end{equation}
with
\begin{equation} \label{eq:b17}
  \widetilde{\chi}_{n}(k) =
  \frac{i^n}{\sqrt{2^{n} n!
\sqrt{\pi}}}
  \left(\frac{1}{\kappa} \right)^{1/2}
  e^{-\flatfrac{k^2}{(2 \kappa^2)}}
  \mathcal{H}_n (\flatfrac{k}{\kappa}).
\end{equation}
Finally, substituting Eq.~(\ref{eq:b16}) back into Eq.~(\ref{eq:b12}) leads to
\begin{equation} \label{eq:b18}
  \begin{aligned}
    \phi_{n,m}(k_x,k_y,t)  = &
    {\sqrt{L_x}} \widetilde{\chi}_n \bm{\left(} k_y - b\cos(\omega t)\bm{\right)} \\
    & \times
    \exp \left(
      i\xi -ik_y  \left[d\sin(\omega t) + \frac{\hbar k_x}{eB} \right]
    \right),
  \end{aligned}
\end{equation}
where
\begin{equation} \label{eq:b19}
  b =
  \frac{eE\omega_0^2}{\hbar\omega(\omega_0^2 - \omega^2)},
\end{equation}
and
\begin{equation} \label{eq:b20}
  d =
 \frac{eE}{m_e(\omega_0^2 - \omega^2)}.
\end{equation}
Note that $k_x$ is quantized, with $k_x = \flatfrac{2\pi m}{L_x}$, $m \in \mathbb{Z}$.
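
The eigenfunction property behind Eq.~(\ref{eq:b15}) is easy to check numerically. The following is a minimal NumPy sketch of our own (not part of the derivation), using the unitary transform convention $\mathcal{FT}[f](k) = (2\pi)^{-1/2}\int f(x)\, e^{ikx} dx$, under which the Gauss-Hermite functions are eigenfunctions with eigenvalue $i^n$; the $\kappa$-scaled form in Eq.~(\ref{eq:b15}) then follows by substitution:
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def theta(n, x):
    # Gauss-Hermite function: (2^n n! sqrt(pi))^(-1/2) e^(-x^2/2) H_n(x)
    c = np.zeros(n + 1); c[n] = 1.0
    return np.exp(-x**2 / 2) * hermval(x, c) / sqrt(2.0**n * factorial(n) * sqrt(pi))

n = 3
x = np.linspace(-20, 20, 8001)     # wide grid: the Gaussian tails are negligible
k = np.linspace(-4, 4, 9)
# unitary FT with e^{+ikx} kernel, approximated by a Riemann sum
F = (theta(n, x) * np.exp(1j * np.outer(k, x))).sum(axis=1) * (x[1] - x[0]) / sqrt(2 * pi)
print(np.max(np.abs(F - 1j**n * theta(n, k))))   # ~1e-10
\end{verbatim}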
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{September 10, 2014}\n\\maketitle\n\\section*{assignment}\n\\subsection*{2.2 \\#16}\nprove f is onto iff there exists $g:B\\to A$ such that $f\\circ g=f(g(x))=1_B$\n\\subsubsection*{proof}\nAssume $f$ is onto then $\\forall b\\in B$ $\\exists x_b\\in A$ such  that $f(x_b)=b$.\n\nFor every $b\\in B$ choose $x_b\\in A$\n\ndefine $g:B\\to A, g(b)=x_b$, then $(f\\circ f)(b)=f(g(b))=f(x_b)=b$\n\nother way\n\nprove that for every $b\\in B$ there exists $x\\in A$ such that $f(x)=b$\n\nwe have $b=f(g(b))$. $g(b)=x$. almost a tautology. done\n\\subsection*{2.2 \\#18}\n\\subsubsection*{LOOK HERE}\nLet $A$ be a \n\n\\section*{last time}\nequivalence relations\n\\subsection*{example}\nlet $f:S\\to T$. on $S$ we define the equivalence relation as follows: $x,y\\in S$ then $x\\sim_f y$ iff $f(x)=f(y)$. \n\ntwo elements are related iff they have the same image. if $f$ is injective then $[f]=f$. constant function has one equivalence class.\n\n\\subsection*{proposition}\nthere exists a one to one (bijection) from the set of equivalence classes $S/\\sim_f$ and $f(s)$. $\\bar f:S/\\sim_f\\to f(S)$. Namelly $[x]\\to f(x)$.\n\nquestion? does $[x]=[y]$ imply that $f(x)=f(y)$? in this case, $[x]=[y]\\Rightarrow x\\sim_f y\\Rightarrow f(x)=f(y)$ so f is a well defined function. (well defined is redundant, but places proper emphasis)\n\nsurjectivity of $\\bar f$ is clear. what about injectivity? if $f(x)=f(y)\\to x\\sim_f y\\to [x]=[y]$\n\\section*{1.4 integers modulo n}\n$n>1,n\\in\\mathbb{Z}$. on $\\mathbb{Z}$ we define the equiv relation $\\equiv$ as $a\\equiv b \\mod n$ if and only if $n|(a-b)$. \n\n$\\mathbb{Z}_n=\\mathbb{Z}/\\equiv\\to$ set of equiv classes. $\\mathbb{Z}_n=\\{[0],[1],\\dots,[n-1]\\}$\n\nalternate notation is $\\mathbb{Z}_n=\\{\\bar0,\\bar1,\\dots,\\bar{n-1}\\}$.\n\ndefine operations on $\\mathbb{Z}_n$ as follows:\n\naddition: $[a]+[b]=[a+b]$\n\nmultiplication: $[a]\\cdot[b]=[a\\cdot b]$\n\n$[a]=[a'],[b]=[b']\\to[a+b]=[a'+b']$.\n\n$n|(a-a'),n|(b-b')\\to n|((a+b)-(a'+b'))$\n\nsimilarly for multiplication.\n\n$[a]=[a'],[b]=[b']\\to[a\\cdot b]=[a'\\cdot b']$.\n\n\\subsection*{proposition}\nthese operation satisfy associative, commutative, distributive\n\ndefinition:\nLet $[a]_n\\in\\mathbb{Z}_n$. If there exists $[b]_n\\in\\mathbb{Z}$ such that $[b]\\ne 0, [a][b]=[0]$. We say that $[a]$ is a zero-divisor in $\\mathbb{Z}_n$\n\n\\subsubsection*{example}\n$\\mathbb{Z}_6=\\{[0],[1],[2],[3],[4],[5],[6]\\}$. zero divisors are $\\{[0],[2],[3],[4]\\}$\n\n\\subsection*{multiplicative inverse}\nif $[a][b]=1$ for some $[b]\\in\\mathbb{Z}_n$ we say that $[a]$ is an invertible element of $\\mathbb{Z}_n$ and $[b]$ is a multiplicative inverse of $[a]$.\n\n$\\mathbb{Z}_6$ invertible elements:$\\{[1][5]\\}$ because $[5][5]=1$ (25 mod 6 is 1)\n\nlets take $[a]\\in \\mathbb{Z}$ then $[a]$ is invertible iff $\\gcd(a,n)=1$.\n\\subsubsection*{proof}\nassume $(a,n)=1$ then $a\\alpha+n\\beta=1$ with $\\alpha,\\beta\\in\\mathbb{Z}$.Then $[1]=[a][\\alpha]+[n][\\beta]$ $[n]=[0]$\n\nassume $[a]$ is invertible. then there exists $[b]$ such that $[a][b]=1, n|(ab-1)$. 
$ab-1=nk$ for some $k\in\mathbb{Z}$, so $ab-nk=1$; any common divisor of $a$ and $n$ divides $1$, hence $(a,n)=1$.
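
The last proposition is easy to experiment with. A minimal Python sketch of our own (\texttt{units\_and\_zero\_divisors} is a hypothetical helper name) that classifies the residues of $\mathbb{Z}_n$ by the gcd criterion and by the zero-divisor definition above:
\begin{verbatim}
from math import gcd

def units_and_zero_divisors(n):
    # [a] is invertible iff gcd(a, n) = 1; [a] is a zero divisor iff
    # some [b] != [0] gives [a][b] = [0] (the definition above admits [0])
    units = [a for a in range(n) if gcd(a, n) == 1]
    zero_divisors = [a for a in range(n)
                     if any(a * b % n == 0 for b in range(1, n))]
    return units, zero_divisors

print(units_and_zero_divisors(6))   # ([1, 5], [0, 2, 3, 4])
\end{verbatim}
\end{document}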
{"text": "% !TEX root = main.tex\n\n%-------------------------------------------------\n\\section{Large samples}\\label{sec:large_samples}\n\nIntuitively we would like an estimator to improve as the sample size increases. The \\emph{asymptotic} behaviour of a quantity refers to the way it behaves as another quantity tends to some limit. We now write $T_n$ to represent an estimator defined on a random sample of size $n$ and consider its behaviour as $n\\to\\infty$.\n\n\\begin{definition}\nLet $T_n$ be an esimator of some unknown parameter $\\theta\\in\\Theta$.\n\\ben\n\\it $T_n$ is said to be \\emph{asymptotically unbiased} if %for all $\\theta\\in\\Theta$,\n$\\expe(T_n)\\to\\theta$ as $n\\to\\infty$ for all $\\theta\\in\\Theta$.\n%\\[\n%\\expe(T_n)\\to \\theta \\quad\\text{as $n\\to\\infty$}.\n%\\]\n\\it $T_n$ is said to be \\emph{consistent} if %for all $\\theta\\in\\Theta$,\n$T_n\\to\\theta$ in probability as $n\\to\\infty$ for all $\\theta\\in\\Theta$.\n%\\[\n%T_n\\to\\theta \\quad\\text{in probability as $n\\to\\infty$.}\n%\\]\n\\it $T_n$ is said to be \\emph{asymptotically normal} if the distribution of $\\sqrt{n}(T_n-\\theta)$ converges to a normal distribution as $n\\to\\infty$ for all $\\theta\\in\\Theta$.\n%\\[\n%\\sqrt{n}(T_n-\\theta)\\to Z\\quad\\text{in distribution as $n\\to\\infty$, where $Z\\sim N(\\mu,\\sigma^2)$.}\n%\\]\n\\een\n\\end{definition}\n\n\\begin{lemma}\nIf $T_n$ is an unbiased estimator of $\\theta$ and $\\var(T_n)\\to 0$ as $n\\to\\infty$ then $T_n$ is a consistent estimator of $\\theta$.\n\\end{lemma}\n\\begin{proof}\nThis follows by Chebyshev's inequality: for all $\\epsilon>0$,\n\\[\n\\prob(|T_n-\\expe(T_n)|> \\epsilon) \\leq \\frac{\\var(T_n)}{\\epsilon^2}.\n\\]\nSince $T_n$ is unbiased, $\\expe(T_n)=\\theta$ so $\\prob(|T_n - \\theta|>\\epsilon)\\to 0$ as $n\\to\\infty$, as required.\n\\end{proof}\n\n\\begin{example}\nThe reading on a voltmeter connected to a test circuit is a random variable $X$ that has a uniform distribution over the interval $(\\theta,\\theta+1)$, where $\\theta$ is unknown. Let $X_1, X_2, ...,X_n$ be a random sample of observations from the voltmeter. Show that\n\\[\nT_n = \\frac{1}{n}\\sum_{i=1}^n\\left( X_i - \\frac{1}{2}\\right)\n\\]\nis a consistent estimator for $\\theta$.\n\\begin{solution}\nFirst we note that $T_n$ is unbiased, because $\\expe(X_i)=\\theta+1/2$ so\n\\[\n\\expe(T_n) = \\frac{1}{n}\\sum_{i=1}^n \\expe(X_i) - \\frac{1}{2} = \\theta.\n\\]\nTo show that $T_n$ is consistent we therefore only need to show that its variance tends to zero as $n\\to\\infty$:\n\\[\n\\var\\left(\\bar{X}- \\frac{1}{2}\\right) \n\t= \\var(\\bar{X}) \n\t= \\frac{\\var{X}}{n} \n\t= \\frac{1}{12n} \\to 0 \\quad\\text{as}\\quad n\\to\\infty.\n\\]\n\\end{solution}\n\\end{example}\n\n\n%----------------------------------------------------------------------\n\\begin{exercise}\n\\begin{questions}\n%----------------------------------------\n\n%==========================================================================\n\\question\nLet $X_1,X_2,\\ldots,X_n$ be a random sample from the $\\text{Uniform}(0,\\theta)$ distribution. 
Show that 
\[
T_n = \max\{X_1,X_2,\ldots,X_n\}
\]
is an asymptotically unbiased estimator of $\theta$.
\begin{answer}
The PDF of $T_n= \max\{X_1,X_2,\ldots,X_n\}$ is
\[
f(t;\theta)	= \frac{nt^{n-1}}{\theta^n} \quad\text{for\quad $0<t<\theta$\quad (zero otherwise).}
\]

The expected value of $T_n$ is
\[
\expe(T_n) 
	= \int_{0}^{\theta }t\frac{nt^{n-1} }{\theta ^{n} }\,dt 
	= \frac{n}{(n+1)\theta ^{n} } \left[ t^{n+1} \right] _{0}^{\theta } 
	= \frac{n\theta }{n +1} 
	= \left(1-\frac{1}{n+1}\right)\theta.
\]
The bias of $T_n$ is
\[
\bias(T_n) = \expe(T_n-\theta) = -\frac{\theta}{n+1} \to 0 \text{ as }n\to\infty,
\]
and because this holds for all $\theta>0$ it follows that $T_n$ is an asymptotically unbiased estimator for $\theta$.
\end{answer}

%==========================================================================
\question
Let $X_1,X_2,\ldots,X_n$ be a random sample from the $\text{Bernoulli}(\theta)$ distribution. Show that the sample mean 
\[
\displaystyle T_n = \frac{1}{n}\sum_{i=1}^n X_i
\]
is a consistent estimator of $\theta$.

\begin{answer}
This follows by the weak law of large numbers. 
\par
In more detail, because $\expe(T_n)=\theta$ and $\var(T_n) = \displaystyle\frac{\theta(1-\theta)}{n}$, it follows by Chebyshev's inequality that
\[
\prob(|T_n-\theta|>\epsilon) \leq \frac{\theta(1-\theta)}{n\epsilon^2}.
\]
Since $\theta\in[0,1]$ is bounded, we have for every $\epsilon >0$ that
\[
\prob(|T_n-\theta|>\epsilon)\rightarrow 0 \text{ as } n\rightarrow\infty.
\]
Thus $T_n\to\theta$ in probability as $n\to\infty$. This holds for all $\theta\in[0,1]$, so $T_n$ is a consistent estimator of $\theta$.
\end{answer}

%==========================================================================
\question
Let $X$ be a random variable with finite mean $\mu$ and finite variance $\sigma^2$. 
Let $X_1,X_2,\ldots,X_n$ be a random sample from the distribution of $X$ and consider the sample mean estimators of $\mu$ and $\sigma^2$,
\[
\Xbar_n = \frac{1}{n}\sum_{i=1}^n X_i
\quad\text{and}\quad
\hat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^n (X_i-\Xbar_n)^2.
\]
\begin{parts}
\part 
Show that $\hat{\sigma}^2_n$ is an asymptotically unbiased estimator of $\sigma^2$.
\begin{answer}
As we have seen,
\[
\expe(\hat{\sigma}^2_n) = \left(\frac{n-1}{n}\right)\sigma^2
\quad\text{and}\quad
\bias(\hat{\sigma}^2_n) = -\frac{\sigma^2}{n}.
\]
Hence $\bias(\hat{\sigma}^2_n)\to 0$ as $n\to\infty$, and because this holds for all $\sigma^2>0$ we have that $\hat{\sigma}^2_n$ is an asymptotically unbiased estimator for $\sigma^2$.
\end{answer}

\part 
Show that $\Xbar_n$ is a consistent and asymptotically normal estimator of $\mu$.
\begin{answer}
Because $\sigma^2$ is finite, by the law of large numbers we have $\Xbar_n\to\mu$ in probability as $n\to\infty$, and by the central limit theorem,
\[
\sqrt{n}(\Xbar_n-\mu) \to N(0,\sigma^2) \text{\quad in distribution as $n\to\infty$.}
\]
These hold for all $\mu\in\R$, so we have shown that $\Xbar_n$ is a consistent and asymptotically normal estimator for $\mu$.
\end{answer}

\part 
If $\expe(X^4)<\infty$ show that $\hat{\sigma}^2_n$ is a consistent and asymptotically normal estimator of $\sigma^2$.
\begin{answer}
For $\hat{\sigma}^2_n$ we write $X_i-\bar{X}_n = (X_i-\mu) - (\Xbar_n-\mu)$, from which we obtain
\[
\hat{\sigma}^2_n 
	= \frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2 - (\Xbar_n-\mu)^2.
\]

For the first term we have $\expe\big[(X_i-\mu)^2\big] = \sigma^2$ and $\var\big[(X_i-\mu)^2\big] = \mu_4 - \sigma^4$ where $\mu_4=\expe\big[(X-\mu)^4\big]$.
By hypothesis these are both finite, so by the law of large numbers applied to the random variables $(X_i-\mu)^2$, 
\[
\frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2 \to \sigma^2 \quad\text{in probability as $n\to\infty$.}
\]
For the second term, by Markov's inequality and the fact that $\expe(\Xbar_n)=\mu$ and $\var(\Xbar_n)=\sigma^2/n$,
\begin{align*}
\prob((\Xbar_n-\mu)^2>\epsilon)
	\leq 	\frac{\expe\big[(\Xbar_n-\mu)^2\big]}{\epsilon} 
	=		\frac{\var(\Xbar_n)}{\epsilon}
	=		\frac{\sigma^2}{n\epsilon}.
\end{align*}
Hence $(\Xbar_n-\mu)^2\to 0$ in probability as $n\to\infty$, and thus we have shown that $\hat{\sigma}^2_n\to\sigma^2$ in probability as $n\to\infty$. 
Since this holds for every $\sigma^2>0$ we conclude that $\hat{\sigma}^2_n$ is a consistent estimator for $\sigma^2$.

To show that $\hat{\sigma}^2_n$ is asymptotically normal, let us write
\[
\sqrt{n}(\hat{\sigma}^2_n -\sigma^2)
	= \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n(X_i-\mu)^2 -\sigma^2\right) - \sqrt{n}(\Xbar_n-\mu)^2.
\]
As above, we can use Markov's inequality to show that the second term converges to zero in probability as $n\to\infty$: 
\begin{align*}
\prob(\sqrt{n}(\Xbar_n-\mu)^2>\epsilon)
	\leq 	\frac{\expe\big[\sqrt{n}(\Xbar_n-\mu)^2\big]}{\epsilon} 
	=		\frac{\sqrt{n}\var(\Xbar_n)}{\epsilon}
	=		\frac{\sigma^2}{\sqrt{n}\epsilon}
	\to 0 \quad\text{as $n\to\infty$.}
\end{align*}

For the first term, we have $\expe\big[(X_i-\mu)^2\big]=\sigma^2$ and $\var\big[(X_i-\mu)^2\big]=\mu_4-\sigma^4$, both of which are finite, so by the central limit theorem applied to the random variables $(X_i-\mu)^2$, together with Slutsky's theorem to discard the vanishing second term,
\[
\sqrt{n}(\hat{\sigma}^2_n -\sigma^2)
	\to N(0, \mu_4 - \sigma^4) \quad\text{in distribution as $n\to\infty$.}
\]
Because this holds for all $\sigma^2>0$ we conclude that $\hat{\sigma}^2_n$ is an asymptotically normal estimator for $\sigma^2$.
\end{answer}
\end{parts}

%----------------------------------------
\end{questions}
\end{exercise}
%----------------------------------------------------------------------
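
These limit statements are easy to visualise numerically. A minimal NumPy simulation of our own (an illustration, not part of the exercises): for $X\sim\text{Exponential}(1)$ we have $\sigma^2=1$ and $\mu_4=9$, so $\mu_4-\sigma^4=8$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 2000
x = rng.exponential(1.0, size=(reps, n))   # Exp(1): sigma^2 = 1, mu_4 = 9
sigma2_hat = x.var(axis=1)                 # ddof=0: the estimator above
z = np.sqrt(n) * (sigma2_hat - 1.0)
print(z.mean(), z.var())                   # approx 0 and mu_4 - sigma^4 = 8
\end{verbatim}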
{"text": "\\documentclass{article}\n\n\\usepackage{color}\n\\usepackage{mathrsfs,amsmath}\n\\usepackage{xcolor}\n\\usepackage{titlesec}\n\\usepackage{listings}\n\\usepackage{syntax}\n\\usepackage{pythonhighlighting}\n\\usepackage{fancyvrb}\n\\usepackage{minted} \n\n\\usepackage{graphicx}\n\n\\graphicspath{ {./assets/} }\n\n\\usepackage[margin=1.4in]{geometry}\n\n\\title{Homework \\#3 | Fall 2021} \n\\author{Jared Dyreson\\\\ \n        California State University, Fullerton}\n\n\\DeclareRobustCommand{\\bowtie}{%\n  \\mathrel\\triangleright\\joinrel\\mathrel\\triangleleft}\n\n\n\n\\definecolor{mygray}{rgb}{0.4,0.4,0.4}\n\\definecolor{mygreen}{rgb}{0,0.8,0.6}\n\\definecolor{myorange}{rgb}{1.0,0.4,0}\n\n\\usepackage [english]{babel}\n\\usepackage [autostyle, english = american]{csquotes}\n\\MakeOuterQuote{\"}\n\n\\titlespacing*{\\section}\n{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex}\n\\titlespacing*{\\subsection}\n{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex}\n\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks,\n    citecolor=black,\n    filecolor=black,\n    linkcolor=black,\n    urlcolor=black\n}\n\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\\newpage\n\n\\section{General Information}\n\n\\subsection{Contributors}\n\n\\begin{enumerate}\n\\item Jared Dyreson | \\href{mailto:jareddyreson@csu.fullerton}{\\underline{jareddyreson@csu.fullerton.edu}} | CWID: 889546529\n\\end{enumerate}\n\n\\subsection{Repository}\n\nThe repository for this project can be found \\href{https://github.com/JaredDyreson/CS335_Project3}{\\underline{here}}. Please consult the README.md file for more information regarding the execution of the program.\n\n\\newpage\n\n\\section{Algorithm 1 | Pattern Sorting}\n\n\\subsection{Pseudocode}\n\n\\begin{verbatim}\n\nfunction swap(a: int, b: int):\n    c: int = a\n    a = b\n    b = c\n\nfunction pattern_sorting(container: list[int], pattern: list[int]):\n    for key in pattern:\n        for i in range(0, container.size()):\n            for j in range(i, container.size()):\n                if(key == container[i]):\n                    swap(container[i], container[j])\n\\end{verbatim}\n\n\\subsection{Mathematical Analysis}\n\nIn this function, we need to rearrange the array based on a pattern that contains at least one element in the aforementioned array.\nTo begin, you need to iterate over all the keys to sort once, which will be linear time $O(n)$.\nFor this routine, you will need a nested for loop to check the current index to all of the neighboring indices. That inner loop takes $O(n^{2})$.\nThis sorting algorithm takes directly from bubble sort, where the swap takes place when we meet the criteria.\nIn this instance, the criteria is the left hand side of the array's index being equal to the current value in the pattern.\nIn terms of memory usage, the only other external data container is a single integer in the swap method. 
Other than that, the program uses no extra space when sorting.
We can conclude that since the outermost $O(n)$ loop runs an $O(n^{2})$ inner routine on each iteration, the running time complexity is $O(n^{3})$.

\newpage

\section{Algorithm 2 | Merging Arrays}

\subsection{Pseudocode}

\begin{verbatim}
function merge(input: list[list[int]]):
    heap = createHeap(int, list[int], lessThanComp) # O(n)

    # nested loops are O(log(n) * n^2)
    for row in input: 
        for element in row:
            heap.insert(element)

    external: list[int] = []  # O(1)

    while heap is not empty: # n pops, O(log(n)) each
        external.append(heap.pop())
        
\end{verbatim}

\subsection{Mathematical Analysis}

In this function, we need to merge a collection of arrays into one large array, and the resultant must be sorted in descending order.
To achieve this, we employ a min heap, implemented as a priority queue with a less-than comparator (\texttt{lessThanComp} in the pseudocode above).
Here is an example of what a min heap looks like; the least element is at the root, and when the root is removed the next smallest element bubbles to the top.

\begin{figure}[!h]
\centering
\includegraphics[width=9cm]{min_heap}
\end{figure}

First, creating a min heap will take $O(n)$ time and will use a list to store all the values in the tree.
Then, we need to iterate over the collection of arrays that contain the data we need to sort.
This takes $O(n^{2})$ time because of the way the data is stored. 
If the data were stored in a contiguous data structure, this traversal would only take $O(n)$ time, but the total work would be about the same since the number of elements hasn't changed.
Each insertion into the min heap will take $O(\log(n))$ time.
Lastly, extracting the data from the heap takes $O(\log(n))$ per pop, and there are $n$ pops in total because we completely empty the heap.
Overall, the time complexity for this function should be $O(\log(n) * n^{2})$.
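
A minimal, runnable Python sketch of the same idea (our own illustration using the standard \texttt{heapq} module; \texttt{merge\_rows} is a hypothetical name, and the result is reversed at the end to obtain descending order):

\begin{verbatim}
import heapq

def merge_rows(rows):
    heap = []
    for row in rows:                   # visits all n elements
        for element in row:
            heapq.heappush(heap, element)     # O(log n) per insert
    out = [heapq.heappop(heap) for _ in range(len(heap))]  # ascending
    return out[::-1]                   # reverse for descending order

print(merge_rows([[9, 1], [4, 7, 3]]))   # [9, 7, 4, 3, 1]
\end{verbatim}

\end{document}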
{"text": "\n\nThe Kirchhoff approximation is used to derive scattering solutions from surfaces or facets that are assumed to be locally smooth, \\cite{kong1986electromagnetic}. The idea of locally smooth is usually expressed as a constraint on the minimum local radius of curvature of a rough surface relative to the wavelength, or to say that the correlation length of the surface must be many times larger than the root mean squared height. The core idea of the Kirchhoff approximation is to replace the total electrical and magnetic field solutions at a dielectric interface with Fresnel reflection, rather than to solve the full multiple scattering problem. This formulation can be used to derive the radar cross sections of simple shapes (rectangles, circular disks, etc.), or isolated facets that are used to create larger surface discretizations.\n\nIn this chapter, we 1) give the main steps of the Kirchhoff approximation based on \\cite{kong1986electromagnetic}, 2) give code for the Fresnel reflection coefficients, 3) give solutions for the surface phase integral over canonical facet shapes, 4) derive expressions for the S-matrix of an arbitrarily shaped surface facet, and 5) derive the equations for specular and backscatter radar cross sections of canonical shapes.\n\n\n\\section{Derivation of the Kirchhoff Approximation}\n\nLet the incident field be a plane wave defined relative to the origin\n\\eq{\\bb{E}_i = \\bb{e}_i E_o e^{i\\bb{k}_i \\cdot \\bb{r}} }\n\n\\noindent where $E_o$ is the electric field amplitude, $\\bb{e}_i$ is the polarization vector, $\\bb{k}_i$ is the incident wave vector, and $ \\bb{r}$ is the position vector. The reflected and transmitted fields above and below a dielectric boundary are given by \\eqref{erkirch1} and \\eqref{erkirch2}, respectively, \n\\begin{eqnarray}\n\\bb{E}_r(\\br) & = & \\int_S \\left\\{ i\\omega \\mu_o \\G{1} \\cdot \\hat{n}' \\times \\bb{H}(\\br') + \\left[\\nabla \\times \\G{1} \\right] \\cdot \\hat{n}' \\times \\bb{E}(\\br')\\right\\} dS' \\label{erkirch1} \\\\\n\\bb{E}_t(\\br) & = & \\int_S \\left\\{ i\\omega \\mu_o \\G{2} \\cdot \\hat{n}_d' \\times \\bb{H}(\\br') + \\left[\\nabla \\times \\G{2} \\right] \\cdot \\hat{n}_d' \\times \\bb{E}(\\br')\\right\\} dS' \\label{erkirch2} \n\\end{eqnarray}\n\n\\noindent where $k_1$ and $k_2$ are the wavenumbers in the upper and lower regions, respectively, $\\hat{n}$ and $\\hat{n}_d$ are the upward and downward pointing surface normals, and $\\bb{E}(\\br')$ and $\\bb{H}(\\br')$ are the fields on the boundary.  

The dyadic Green's functions in both regions are given by:
\begin{equation}
\G{1} = \left[ \overline{\bb{I}} + \dfrac{1}{k_1^2}\nabla\nabla\right] \dfrac{e^{ik_1\vert \br - \br'\vert}}{4\pi \vert \br - \br'\vert}
\end{equation}

\begin{equation}
\G{2} = \left[ \overline{\bb{I}} + \dfrac{1}{k_2^2}\nabla\nabla\right] \dfrac{e^{ik_2\vert \br - \br'\vert}}{4\pi \vert \br - \br'\vert}
\end{equation}



\begin{figure}[h] 
   \centering
   \includegraphics[width=2.5in]{Kirchhoff/Figures/kirchhoffgeometry} 
   \caption{Geometry for the Kirchhoff approximation at a dielectric interface}
     \label{kirchhoffgeo}
\end{figure}


In the far-field these simplify to
\begin{equation}
\G{1} \approx \left( \overline{\bb{I}} - \hat{k}_r \hat{k}_r \right) \dfrac{e^{ik_1r}}{4\pi r} \exp(-i \bb{k}_r \cdot \br') 
\end{equation}

\begin{equation}
\G{2} \approx \left( \overline{\bb{I}} - \hat{k}_t \hat{k}_t \right) \dfrac{e^{ik_2r}}{4\pi r} \exp(-i \bb{k}_t \cdot \br') 
\end{equation}

\noindent where $\bb{k}_r = k_1\hat{k}_r$ and $\bb{k}_t = k_2 \hat{k}_t$ are the reflected and transmitted wave vectors. The wave vectors are defined in the medium of propagation.  Substituting these into \eqref{erkirch1} and \eqref{erkirch2} gives
\begin{eqnarray}
\bb{E}_r(\br) & = & \dfrac{ik_1 e^{ik_1 r}}{4\pi r} \left( \overline{\bb{I}} - \hat{k}_r \hat{k}_r \right)  \int_S \left\{ \hat{k}_r \times \left[ \hat{n}' \times \bb{E}(\br')\right] + \eta_1\left[ \hat{n}' \times \bb{H}(\br')\right]\right\} e^{-i \bb{k}_r \cdot \br'} dS'   \\
\bb{E}_t(\br) & = & \dfrac{ik_2e^{ik_2r}}{4\pi r} \left( \overline{\bb{I}} - \hat{k}_t \hat{k}_t \right)  \int_S \left\{ \hat{k}_t \times \left[ \hat{n}_d' \times \bb{E}(\br')\right] + \eta_2\left[ \hat{n}_d' \times \bb{H}(\br')\right]\right\} e^{-i \bb{k}_t \cdot \br'}  dS'  
\end{eqnarray}

Next, define the orthonormal system $(\hat{p}_i,\hat{q}_i,\hat{k}_i)$ at point $\br'$ 
\begin{eqnarray}
\hat{q}_i &=& \dfrac{\hat{k}_i \times \hat{n} }{ \vert \hat{k}_i \times \hat{n} \vert } \\
\hat{p}_i &=& \hat{q}_i \times \hat{k}_i 
\end{eqnarray}

\noindent where $\hat{n}(\br') = -\hat{n}_d(\br')$.  Under the Kirchhoff approximation, the tangential electric and magnetic fields for TE and TM polarizations are given by \eqref{ncrosse1} and \eqref{ncrosse2}, \cite{kong1986electromagnetic}, 
\begin{eqnarray}
\hat{n} \times \bb{E}(\br') & = & E_o\left\{ (\hat{e}_i \cdot \hat{q}_i )(\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TE}}) + 
(\hat{e}_i \cdot \hat{p}_i )(\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TM}}) \right\} e^{i\bb{k}_i \cdot \br'} \label{ncrosse1}  \\
\hat{n} \times \bb{H}(\br') & = & \dfrac{E_o}{\eta_1}\left\{ -(\hat{e}_i \cdot \hat{q}_i )(\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TE}}) + (\hat{e}_i \cdot \hat{p}_i )(\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TM}}) \right\} e^{i\bb{k}_i \cdot \br'}  \label{ncrosse2}
\end{eqnarray}

\noindent where $R^{\textrm{TE}}$ and $R^{\textrm{TM}}$ are the Fresnel reflection coefficients given in Section \ref{fresenlref}. 
The local incidence angle is
\eq{\cos\theta_i = -\hat{n} \cdot \hat{k}_i \label{localinctheta} }

Substituting these into the surface integral equations
\begin{eqnarray}
\bb{E}_r(\br) & = & \dfrac{ik_1e^{ik_1r}}{4\pi r} E_o \left( \overline{\bb{I}} - \hat{k}_r \hat{k}_r \right)  \cdot \int_S  \bb{F}(\br') e^{i (\bb{k}_i - \bb{k}_r) \cdot \br'} dS' \label{kirchint1} \\
\bb{E}_t(\br) & = & -\dfrac{ik_2e^{ik_2r}}{4\pi r} E_o \left( \overline{\bb{I}} - \hat{k}_t \hat{k}_t \right)  \cdot \int_S  \bb{N}(\br') e^{i (\bb{k}_i - \bb{k}_t) \cdot \br'} dS' \label{kirchint2} 
\end{eqnarray}

\noindent where
\begin{eqnarray}
\bb{F}(\br') &=&  - (\hat{e}_i \cdot \hat{q}_i )(\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TE}}) + (\hat{e}_i \cdot \hat{p}_i )(\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TM}}) \nonumber \\
\ & \ &+ (\hat{e}_i \cdot \hat{q}_i )(\hat{k}_r \times (\hat{n} \times \hat{q}_i)) (1 + R^{\textrm{TE}})  + (\hat{e}_i \cdot \hat{p}_i )(\hat{n} \cdot \hat{k}_i) (\hat{k}_r \times \hat{q}_i) (1 - R^{\textrm{TM}}) \\
\bb{N}(\br') &=&  - \dfrac{\eta_2}{\eta_1}(\hat{e}_i \cdot \hat{q}_i )(\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TE}}) + \dfrac{\eta_2}{\eta_1}(\hat{e}_i \cdot \hat{p}_i )(\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TM}}) \nonumber \\
\ & \ &+ (\hat{e}_i \cdot \hat{q}_i )(\hat{k}_t \times (\hat{n} \times \hat{q}_i)) (1 + R^{\textrm{TE}}) + (\hat{e}_i \cdot \hat{p}_i )(\hat{n} \cdot \hat{k}_i) (\hat{k}_t \times \hat{q}_i) (1 - R^{\textrm{TM}}) 
\end{eqnarray}

$\bb{F}$ and $\bb{N}$ only differ in the constants and wave vectors.  Because of the plane-wave approximation at the surface, the local radius of curvature must be large enough relative to the wavelength for the Kirchhoff approximation to be valid. Whether this is met or not, there are a number of ways to compute the integral depending on the application. One way is to finely discretize the surface at step sizes much smaller than the wavelength ($dS<\lambda/10$).  

\section{Fresnel Reflection Coefficients}
\label{fresenlref}
The Fresnel reflection coefficients for flat interfaces are given by, \cite{kong1986electromagnetic},
\begin{eqnarray}
R^{\textrm{TE}} &=& \dfrac{ \cos \theta_i - \sqrt{(\epsilon_2/\epsilon_1) - \sin^2\theta_i }}{\cos \theta_i + \sqrt{(\epsilon_2/\epsilon_1) - \sin^2\theta_i }} \\
R^{\textrm{TM}} &=& \dfrac{(\epsilon_2/\epsilon_1) \cos \theta_i - \sqrt{(\epsilon_2/\epsilon_1) - \sin^2\theta_i }}{(\epsilon_2/\epsilon_1) \cos \theta_i + \sqrt{(\epsilon_2/\epsilon_1) - \sin^2\theta_i }} 
\end{eqnarray}

The transmission coefficients are $T^{\textrm{TE}} = 1 + R^{\textrm{TE}}$ and $T^{\textrm{TM}} = (1 + R^{\textrm{TM}})\cos\theta_i/\cos\theta_t $, where $\theta_t$ is the transmission angle given by Snell's law, \cite{ulaby1999fundamentals}. In \cite{ulaby1999fundamentals}, there is a negative sign in the reflection coefficients that does not appear in \cite{kong1986electromagnetic}. This is due to the sign convention of incoming/outgoing field components. In \cite{kong1986electromagnetic}, the incoming and outgoing fields are defined relative to the $p$, $q$ coordinate projection, while in \cite{ulaby1999fundamentals} they are defined relative to incoming and outgoing plane wave directions, which flips the sign of $E$ and $H$ fields on reflection.  
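
As a quick numerical sanity check of these formulas, here is a minimal NumPy sketch of our own (separate from the \texttt{fresnelTE}/\texttt{fresnelTM} routines below; the function names are hypothetical). For lossless media, $R^{\textrm{TM}}$ should vanish at the Brewster angle $\theta_B = \tan^{-1}\sqrt{\epsilon_2/\epsilon_1}$, while $R^{\textrm{TE}}$ does not:
\begin{verbatim}
import numpy as np

def r_te(er, ti):     # er = eps2/eps1, ti = incidence angle in radians
    root = np.sqrt(er - np.sin(ti)**2 + 0j)
    return (np.cos(ti) - root) / (np.cos(ti) + root)

def r_tm(er, ti):
    root = np.sqrt(er - np.sin(ti)**2 + 0j)
    return (er*np.cos(ti) - root) / (er*np.cos(ti) + root)

er = 4.0                          # e.g. eps2/eps1 = 4, lossless
tb = np.arctan(np.sqrt(er))       # Brewster angle
print(abs(r_tm(er, tb)))          # ~0
print(abs(r_te(er, tb)))          # nonzero
\end{verbatim}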

The routines \texttt{fresnelTE} and \texttt{fresnelTM} compute the Fresnel reflection coefficients given the dielectric constants on either side of an interface, which can be complex, and the incidence angle in degrees.

{\scriptsize
\VerbatimInput{\code/Kirchhoff/fresnelTE.m}
}

{\scriptsize
\VerbatimInput{\code/Kirchhoff/fresnelTM.m}
}


%
%\section{Continuous Surfaces}
%
%One way to compute the integral is to finely discretize the surface and internal and perform the sum.  This is ultimately the most accurate.  It does however prevent simulating very large surfaces.  The surface curvature also demands attention, so that it stays in the range of validity of the Kirchhoff approximation. 
%

\section{Facetized Surfaces}

One way to compute equations \eqref{kirchint1} and \eqref{kirchint2} is to break up a large surface into many flat facets and compute the surface integrals analytically over the shapes of the facets. Discretizing the surface integral as a sum over large facets we have
\begin{eqnarray}
\bb{E}_r(\br) & \approx & \dfrac{ik_1e^{ik_1r}}{4\pi r} E_o \left( \overline{\bb{I}} - \hat{k}_r \hat{k}_r \right)  \cdot \sum_n \bb{F}(\br_n) \int_{S_n} e^{i (\bb{k}_i - \bb{k}_r) \cdot \br} dS \label{faceter} \\
\bb{E}_t(\br) & \approx & -\dfrac{ik_2e^{ik_2r}}{4\pi r} E_o \left( \overline{\bb{I}} - \hat{k}_t \hat{k}_t \right)  \cdot \sum_n \bb{N}(\br_n) \int_{S_n}  e^{i (\bb{k}_i - \bb{k}_t) \cdot \br} dS  \label{facetet}
\end{eqnarray}

\noindent where $S_n$ is the surface of the $n$th facet.  This assumes $\bb{F}(\br)$ and $\bb{N}(\br)$ are constant over the facet, i.e., the facet is flat and illuminated with plane waves. We are left with having to compute the surface phase integral over different facet shapes, which can be written compactly in terms of the wave vector difference, $\bb{K}$, as
\eq{I = \int_S  e^{i\bb{K}\cdot \bb{r} } dS  \label{surfphaseint}}

\noindent where $\bb{K} = \bb{k}_i - \bb{k}_r$ or $\bb{K} = \bb{k}_i - \bb{k}_t$.  The surface phase integral is the 2D Fourier transform over the shape of the facet in the wave vector difference domain. 


\section{Surface Phase Integral}
\label{secsurfphaseint}
The surface phase integral is the 2D Fourier transform of the shape of a flat facet into the wave vector difference domain. Here we derive analytic expressions for the surface phase integral for common facet shapes. These will be used to derive closed-form expressions for the specular and backscatter RCS of PEC facets for these shapes.  The surface phase integral is given by 
\eq{I = \int_S  e^{i\bb{K}\cdot \bb{r} } dS  \label{surfphaseint2}}

\noindent where 
\ea{\bb{K} &=& \bb{k}_i - \bb{k}_s\\
 \bb{k}_i &=& k_i\left(\sin\theta_i \cos\phi_i \hat{x}  + \sin\theta_i \sin\phi_i \hat{y} + \cos\theta_i \hat{z}\right) \\
\bb{k}_s &=& k_s\left(\sin\theta_s \cos\phi_s \hat{x}  + \sin\theta_s \sin\phi_s \hat{y} + \cos\theta_s \hat{z}\right) \\
\bb{r}  &=& x \hat{x} + y \hat{y} + z \hat{z}}

The scattered wave vector, $\bb{k}_s$, can be a reflected or transmitted wave. If a facet is embedded in a large surface interface, the wavenumbers of the incident and scattered directions must correspond to the correct medium above or below the interface. 

The surface phase integral can be computed for facets that are tilted arbitrarily in a global frame. 
However, analytic expressions are more easily derived using facets in the $XY$ plane, Figure \ref{kirchhoffXY}, which is our approach in the following subsections. When a facet is tilted in the global frame, the incident and scattered angles need to be transformed to the local frame of the facet. This can be done by first forming the rotation of the facet relative to the global frame, then applying the rotation in reverse to the incident and scattered wave vectors in the global frame, which gives the incident and scattered wave vectors in the facet frame.

\begin{figure}[H] 
   \centering
   \includegraphics[width=2.5in]{Kirchhoff/Figures/surfacephaseint} 
   \caption{Geometry for the surface phase integral of a facet in the XY plane.}
     \label{kirchhoffXY}
\end{figure}




\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Specular Direction}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}

Let the facet be in the $XY$ plane with $\hat{n} = \hat{z}$.  In the specular direction, $\theta_s = \pi - \theta_i$ and $\phi_s = \phi_i$, which means that the scattered direction is equivalent to the incident direction with the opposite $z$ component. The wave vector difference is then
\eq{\bb{k}_i - \bb{k}_s = 2 k \cos\theta_i \hat{z}}

The position vector in this case is $\bb{r}  = x \hat{x} + y \hat{y}$.  Therefore, $(\bb{k}_i - \bb{k}_s)\cdot \bb{r} = 0$, so the complex exponent evaluates to one, and 
\eq{I = \int_S 1 \ dS = A}

\noindent where $A$ is the area of the facet. The surface phase integral in the specular direction is equal to the area of the facet. This is true for any shape.


\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Rectangle}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}

Let a rectangular facet lie in the $XY$ plane with side lengths $L_x$ and $L_y$, $\hat{n} = \hat{z}$, and the facet centered at the origin. The surface phase integral is written 
\begin{eqnarray}
I &=& \int_{-L_y/2}^{L_y/2}  \int_{-L_x/2}^{L_x/2}  e^{i\bb{K}\cdot \bb{r}} dx dy \\
\bb{K} &=& K_x \hat{x} + K_y \hat{y} + K_z \hat{z} \\
\bb{r} &=& x \hat{x} + y \hat{y}
\end{eqnarray}

\noindent where $\bb{K} = \bb{k}_i - \bb{k}_s$. The integral separates as 
\begin{eqnarray}
I &=& \int_{-L_x/2}^{L_x/2}  e^{i K_x x} dx  \int_{-L_y/2}^{L_y/2} e^{i K_y y} dy  
\end{eqnarray}

Using the fact that 
\eq{\int_{-a/2}^{a/2} e^{i b z} dz = \dfrac{2}{b} \sin\left(\dfrac{ab}{2}\right) }

and multiplying top and bottom by $L_xL_y$ to convert the sines to sinc functions, the surface phase integral over a rectangular facet is 
\eq{I = L_x L_y \textrm{sinc}\left(\dfrac{L_x K_x}{2}\right)\textrm{sinc}\left(\dfrac{L_y K_y}{2}\right) \label{SPIrect}}

%\ea{I &=& \dfrac{4}{K_x K_y} \sin\left(\dfrac{L_x K_x}{2}\right)\sin\left(\dfrac{L_y K_y}{2}\right) \\
%\ & = & L_x L_y \textrm{sinc}\left(\dfrac{L_x K_x}{2}\right)\textrm{sinc}\left(\dfrac{L_y K_y}{2}\right) }

\noindent where $\textrm{sinc}(x) = \sin(x)/x$. This is equal to the area of the facet multiplied by a directionally dependent function that has maximum value of one.




\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Ellipse}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}

%Here we derive the analytic expression for the surface phase integral over an elliptical facet. 

Let the facet be an ellipse in the $XY$ plane centered at the origin such that $\hat{n} = \hat{z}$, with semi-major axis $a$ along the $x$ axis and semi-minor axis $b$ along the $y$ axis, with a contour described by the equation
\eq{\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1}

Using the change of variables
\ea{x &=& a \rho \cos\phi \\
y &=& b \rho \sin\phi \\
dS &=& a b \rho d\rho d\phi \\
\bb{r} &=&  a \rho \cos\phi \hat{x} + b \rho \sin\phi \hat{y} }

we transform \eqref{surfphaseint2} to an integration in polar coordinates over the surface of the ellipse as 
\eq{I = a b \int_0^{2\pi} \int_0^1 e^{i(K_x a \rho \cos\phi  + K_y b \rho \sin\phi)}  \rho d\rho d\phi }

Applying the following identity
\eq{\int_0^{2\pi} e^{ u \cos t +v \sin t} dt = 2 \pi I_0 \left( \sqrt{u^2 + v^2} \right)}

we get
\ea{I &=&  2 \pi  a b \int_{0}^{1}I_0 \left(i \rho \sqrt{  (a K_x)^2 + (b K_y)^2} \right) \rho d\rho  }

Finally, using
\eq{\int_0^1 I_0\left( i c x \right) x dx = \dfrac{J_1(c)}{c} }

the surface phase integral over an ellipse is
\ea{I &=&  2 \pi  a b \dfrac{J_1(K_{\rho}') }{K_{\rho}'} \\
\ &=& 2 \pi  a b \textrm{jinc}( K_{\rho}') \label{SPIellipse} \\
K_{\rho}' &=&  \sqrt{  (a K_x)^2 + (b K_y)^2} }

Note that $\textrm{jinc}(0) = 1/2$, so the final result is equal to the area of the ellipse multiplied by a directionally dependent term that has maximum value of one. $K_{\rho}'$ is a modified perpendicular component of the wave vector difference.  


\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Circle}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}

The surface phase integral for the circular disk in the $XY$ plane with radius $a$ can be derived directly from the elliptical disk, \eqref{SPIellipse}, by taking $b = a$.  This gives
\ea{I &=&  2 \pi  a^2  \textrm{jinc}\left(a K_{\rho} \right) \label{SPIcircle} \\
K_{\rho} &=&  \sqrt{K_x ^2 +  K_y ^2}}

A solution for the circular disk can also be found in \cite{trott1988disk}, which uses a sequence of trigonometric transformations to separate the magnitude and phase of the perpendicular wave vector difference. %While it is possible to adapt that approach for an ellipse, it is easier to derive this using the wave vector difference, $\bb{K}$.  

\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\subsection{Triangle}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}

Let the facet be a triangle in the $XY$ plane with $\hat{n} = \hat{z}$. The vertices of the triangle are at positions $[\bb{p}_1, \bb{p}_2, \bb{p}_3]$. The vector to the triangle centroid is given by 
\eq{\br_n = \dfrac{1}{3}\left[\bb{p}_1+ \bb{p}_2+ \bb{p}_3\right] }

The phase integral is computed by mapping the domain of the arbitrary triangular facet to a triangle that covers half of the unit square through the following transform, \cite{triangleint},
\begin{eqnarray}
x(u,v) &=& x_1 + u(x_2 - x_1) + v(x_3 - x_1) \\
y(u,v) &=& y_1 + u(y_2 - y_1) + v(y_3 - y_1)  
\end{eqnarray}

\noindent where $u \in [0,1]$ and $v \in [0,1-u]$.  $\bb{p}_1$ is mapped to the origin, $\bb{p}_2$ is mapped to $(1,0)$ and $\bb{p}_3$ is mapped to $(0,1)$. 
The position vector is parameterized in terms of $u$ and $v$ as 
\eq{\bb{r}= \bb{p}_1 + u (\bb{p}_2 - \bb{p}_1) + v (\bb{p}_3 - \bb{p}_1) }

The change of variables in the surface integral is
\eq{I = \int_S f(x(u,v),y(u,v)) \vert \bb{J}(u,v)\vert du dv }

The determinant of the Jacobian is
\ea{\vert \bb{J}(u,v) \vert &=& \textrm{det} \twobytwo{\partial x /\partial u}{\partial x /\partial v}{\partial y /\partial u}{\partial y /\partial v} \\
\ & = & (x_2 - x_1)(y_3 - y_1) - (x_3 - x_1)(y_2 - y_1) \\
\ & = & 2 A
}
which is equal to twice the area of the triangle.  With these, the integral becomes 
\eq{ I = 2 A \int_0^1\int_0^{1-u} e^{i(a + b u + c v)} dv du }

\noindent where 
\begin{eqnarray}
a &= & \bb{K} \cdot \bb{p}_1 \\ 
b &=& \bb{K} \cdot (\bb{p}_2 - \bb{p}_1)\\
c &=& \bb{K} \cdot (\bb{p}_3 - \bb{p}_1) 
\end{eqnarray}

Integrating this yields 
\ea{ I(a,b,c) &=& A e^{ia} g(b,c) \label{SPItriangle} \\
g(b,c) &=& 2 \dfrac{c(1 - e^{i b}) - b (1 - e^{ic})}{bc(b-c)}}

Division by zero occurs for $b = 0$, $c = 0$, or $b = c$. The first two cases happen when the wave vector difference is perpendicular to an edge of the triangle. The third happens when the wave vector difference bisects the angle formed by two of the edges with $\bb{p}_1$ at the vertex.  The limits, though, exist and are given by 
\begin{eqnarray}
\lim_{b \rightarrow 0} g(b,c) &=& 2 \dfrac{1 + ic - e^{ic} }{c^2} \\
\lim_{c \rightarrow 0} g(b,c) &=& 2 \dfrac{1 + ib -e^{ib}}{b^2} \\
\lim_{c \rightarrow b} g(b,c) &=& 2 \dfrac{-1 +(1-ib) e^{ib}}{b^2} \\
\lim_{[b,c] \rightarrow [0,0]}g(b,c) &=& 1
\end{eqnarray}

%Assuming $\bb{K}$ is real, the magnitude squared of the auxiliary function is
%\ea{\vert g(b,c) \vert^2 &=& \dfrac{\left( c - c e^{i b} - b + b e^{ic}\right)\left( c - ce^{-i b} - b + b e^{-ic}\right)}{b^2c^2(b-c)^2} \\
%\ & =& b^2 \sin^2(c) + b^2 \cos^2(c) - 2 b^2 \cos(c) + b^2 + c^2 \sin^2(b) + c^2 \cos^2(b) - 2 c^2 \cos(b) - 2 b c - 2 b c \sin(b) \sin(c) + 2 b c \cos(b) + 2 b c \cos(c) - 2 b c \cos(b) \cos(c) + c^2 \\
%\ & =& 2 b^2 - 2 b^2 \cos(c)  + 2 c^2 - 2 b c  - 2 c^2 \cos(b)  + 2 b c \cos(b) + 2 b c \cos(c) - 2 b c (\sin(b) \sin(c)  + \cos(b) \cos(c)) \\
%\ & =& 2 b^2 (1-\cos(c))  + 2 c^2 (1 -  \cos(b)) - 2 b c  (1 - \cos(b) -  \cos(c)) - 2 b c \cos(b-c) \\
%\ & =& 2 b^2 (1-\cos(c))  + 2 c^2 (1 -  \cos(b)) - 2 b c  (1 - \cos(b) -  \cos(c) + \cos(b-c)) }

%\clearpage
%\newpage

\clearpage
\section{S-matrix of a Facet}

Here we develop the idea of an S-matrix for a Kirchhoff facet. The flat facet is treated as an isolated scatterer. When the facet is part of a larger surface discretization, it is assumed to be non-interacting with neighboring facets and to have homogeneous, semi-infinite media above and below it. We derive the general cases for the scattering of a facet and also provide expressions for the radar cross sections.


%Because of the preference of incident, scattered, and transmission depending on the medium with the originating wave, we can define four partially bistatic scattering quantities, $S_{11}$ and $S_{21}$, in which the incident field comes from medium 1 and reflects into medium 1 or transmits into medium 2, and $S_{12}$ and $S_{22}$ in which the incident field comes from medium 2.  $S_{11}$ and $S_{12}$ are defined above the interface, and $S_{21}$ and $S_{22}$ are defined below the interface.

Define the second orthonormal system $(\hat{p}_s,\hat{q}_s,\hat{k}_s)$ for the scattered field, which is relative to the surface of the facet and applies to both the reflected and transmitted fields, 
\begin{eqnarray}
\hat{q}_s &=& \dfrac{\hat{k}_s \times \hat{n} }{ \vert \hat{k}_s \times \hat{n} \vert } \\
\hat{p}_s &=& \hat{q}_s \times \hat{k}_s 
\end{eqnarray}

The unit vectors that make up the identity dyad are defined in terms of the scattered field system, $\overline{\bb{I}} = \hat{q}_s \hat{q}_s+ \hat{p}_s \hat{p}_s +\hat{k}_s \hat{k}_s$, so that the reflected and transmitted fields for a single facet are 
\begin{eqnarray}
\bb{E}_r(\br) & \approx & \dfrac{ik_1e^{ik_1r}}{4\pi r} E_o \left( \hat{q}_r \hat{q}_r+ \hat{p}_r \hat{p}_r \right)  \cdot  \bb{F}(\hat{k}_i,\hat{k}_r)  I(\bb{k}_i,\bb{k}_r) \\
\bb{E}_t(\br) & \approx & -\dfrac{ik_2e^{ik_2r}}{4\pi r} E_o \left( \hat{q}_t \hat{q}_t+ \hat{p}_t \hat{p}_t \right)  \cdot  \bb{N}(\hat{k}_i,\hat{k}_t)  I(\bb{k}_i,\bb{k}_t) 
\end{eqnarray}


\noindent where $I$ is the surface phase integral \eqref{surfphaseint}. 
Define two scattering matrices, $S_{11}$ and $S_{21}$, which map plane waves incident from medium 1 to far-field plane waves that are reflected or transmitted into media 1 and 2, respectively. $S_{11}$ is valid above the interface, while $S_{21}$ is valid below the interface.  Writing the incident, reflected and transmitted fields in the $\hat{p}$, $\hat{q}$ polarization basis,
\begin{eqnarray}
\bb{E}_r & = & \dfrac{e^{ik_1r}}{r} \bb{S}_{11}  \cdot \bb{E}_i  \\
\bb{E}_t & = & \dfrac{e^{ik_2r}}{r} \bb{S}_{21}  \cdot \bb{E}_i  
\end{eqnarray}
\eq{\bb{E}_i  = E_o \twobyone{(\hat{e}_i \cdot \hat{p}_i )}{(\hat{e}_i \cdot \hat{q}_i )}}  

\noindent where the elements of the reflected and transmitted S-matrices are 
\ea{ \bb{S}_{11} &=& \dfrac{ik_1}{4\pi} I(\bb{k}_i,\bb{k}_r,\hat{n}) \tbt{\hat{p}_r \cdot \bb{F}_p }{\hat{p}_r \cdot \bb{F}_q }{\hat{q}_r \cdot \bb{F}_p }{\hat{q}_r \cdot \bb{F}_q} \label{S11kirch} }
\ea{ \bb{S}_{21} &=& -\dfrac{ik_2}{4\pi} I(\bb{k}_i,\bb{k}_t,\hat{n}) \tbt{\hat{p}_t \cdot \bb{N}_p }{\hat{p}_t \cdot \bb{N}_q }{\hat{q}_t \cdot \bb{N}_p }{\hat{q}_t \cdot \bb{N}_q} }

\noindent and
\ea{
\bb{F}_{p} &=&  (\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TM}}) + (\hat{n} \cdot \hat{k}_i) (\hat{k}_r \times \hat{q}_i) (1 - R^{\textrm{TM}}) \\
\bb{F}_{q} &=& -(\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TE}}) + (\hat{k}_r \times (\hat{n} \times \hat{q}_i)) (1 + R^{\textrm{TE}}) \\
\bb{N}_{p} &=& \dfrac{\eta_2}{\eta_1} (\hat{n} \times \hat{q}_i) (1 + R^{\textrm{TM}}) + (\hat{n} \cdot \hat{k}_i) (\hat{k}_t \times \hat{q}_i) (1 - R^{\textrm{TM}}) \\
\bb{N}_{q} &=& -\dfrac{\eta_2}{\eta_1} (\hat{n} \cdot \hat{k}_i) \hat{q}_i (1 - R^{\textrm{TE}}) + (\hat{k}_t \times (\hat{n} \times \hat{q}_i)) (1 + R^{\textrm{TE}}) }
%
%\ea{ \bb{F} &=&  \onebytwo{\bb{F}_{p}}{\bb{F}_{q}} \twobyone{(\hat{e}_i \cdot \hat{p}_i )}{(\hat{e}_i \cdot \hat{q}_i )}  \\
% \bb{N} &=&  \onebytwo{\bb{N}_{p}}{\bb{N}_{q}} \twobyone{(\hat{e}_i \cdot \hat{p}_i )}{(\hat{e}_i \cdot \hat{q}_i )} }

Table \ref{tableofdotproducts} gives the dot products between $\hat{p}$ and 
$\\hat{q}$ and the different vector quantities listed at the top of the columns.\n\\begin{table}[htp]\n\\caption{Table of vector products}\n\\vspace{-3mm}\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|}\n\\hline\n& $\\hat{n}\\times\\hat{q}_i$ & $\\hat{k}\\times\\hat{q}_i$ & $\\hat{q}_i$ & $\\hat{k}\\times(\\hat{n}\\times\\hat{q}_i)$ \\\\ \\hhline{|=|=|=|=|=|}\n$\\hat{p} \\cdot$ & $\\hat{n}\\cdot (\\hat{q}_i\\times\\hat{p}) $ & $-\\hat{q}_i\\cdot \\hat{q}$  & $\\hat{p}\\cdot \\hat{q}_i$ & $(\\hat{k} \\cdot \\hat{q}_i) (\\hat{p} \\cdot \\hat{n}) - (\\hat{k} \\cdot \\hat{n}) (\\hat{q}_i \\cdot \\hat{p}) $ \\\\ \\hline\n$\\hat{q} \\cdot$ & $\\hat{n}\\cdot (\\hat{q}_i\\times\\hat{q}) $ & $\\hat{p}\\cdot \\hat{q}_i$ &$ \\hat{q} \\cdot  \\hat{q}_i$ & $ -(\\hat{k} \\cdot \\hat{n}) (\\hat{q} \\cdot \\hat{q}_i) $ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\label{tableofdotproducts}\n\\end{table}%\n\nUsing these, we can write\n\\ea{\n\\hat{p}_r \\cdot \\bb{F}_p & = & \\hat{n}\\cdot (\\hat{q}_i\\times\\hat{p}_r) (1 + R^{\\textrm{TM}}) + (\\hat{n} \\cdot \\hat{k}_i) (-\\hat{q}_i\\cdot \\hat{q}_r) (1 - R^{\\textrm{TM}}) \\label{prFp} \\\\ \n\\hat{q}_r \\cdot \\bb{F}_p & = & \\hat{n}\\cdot (\\hat{q}_i\\times\\hat{q}_r) (1 + R^{\\textrm{TM}}) + (\\hat{n} \\cdot \\hat{k}_i) (\\hat{q}_i\\cdot \\hat{p}_r) (1 - R^{\\textrm{TM}}) \\\\ \n\\hat{p}_r \\cdot \\bb{F}_q & = & -(\\hat{n} \\cdot \\hat{k}_i) (\\hat{p}_r\\cdot \\hat{q}_i) (1 - R^{\\textrm{TE}}) + ((\\hat{k}_r \\cdot \\hat{q}_i) (\\hat{p}_r \\cdot \\hat{n}) - (\\hat{k}_r \\cdot \\hat{n}) (\\hat{q}_i \\cdot \\hat{p}_r) ) (1 + R^{\\textrm{TE}})  \\\\ \n\\hat{q}_r \\cdot \\bb{F}_q & = &-(\\hat{n} \\cdot \\hat{k}_i) (\\hat{q}_r\\cdot \\hat{q}_i) (1 - R^{\\textrm{TE}}) -(\\hat{k}_r \\cdot \\hat{n}) (\\hat{q}_r \\cdot \\hat{q}_i)  (1 + R^{\\textrm{TE}})  \\label{qrFq}\n}\n\nThe same applies to $\\bb{N}$ except with $r \\rightarrow t$ and the inclusion of material constants. 
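
To make the bookkeeping concrete, the following is a minimal NumPy sketch of our own (\texttt{s11\_facet} is a hypothetical name) that assembles $\bb{S}_{11}$ of \eqref{S11kirch} from the dot products \eqref{prFp}--\eqref{qrFq}, given the incident direction, the facet normal, the dielectric contrast, and a precomputed surface phase integral $I$. Note that $\hat{q}_i$ is undefined at exactly normal incidence ($\hat{k}_i \times \hat{n} = 0$), which must be handled separately:
\begin{verbatim}
import numpy as np

def fresnel_te(er, cos_ti):           # er = eps2/eps1
    root = np.sqrt(er - (1.0 - cos_ti**2) + 0j)
    return (cos_ti - root) / (cos_ti + root)

def fresnel_tm(er, cos_ti):
    root = np.sqrt(er - (1.0 - cos_ti**2) + 0j)
    return (er*cos_ti - root) / (er*cos_ti + root)

def s11_facet(k1, ki, n, er, I):
    # ki: unit incident direction, n: unit facet normal,
    # I: surface phase integral for this facet and geometry
    kr = ki - 2.0*np.dot(ki, n)*n                   # reflected direction
    qi = np.cross(ki, n); qi /= np.linalg.norm(qi)  # fails at normal incidence
    qr = np.cross(kr, n); qr /= np.linalg.norm(qr)
    pr = np.cross(qr, kr)
    RTE = fresnel_te(er, -np.dot(n, ki))            # cos(theta_i) = -n . ki
    RTM = fresnel_tm(er, -np.dot(n, ki))
    nk = np.dot(n, ki)
    prFp = np.dot(n, np.cross(qi, pr))*(1+RTM) - nk*np.dot(qi, qr)*(1-RTM)
    qrFp = np.dot(n, np.cross(qi, qr))*(1+RTM) + nk*np.dot(qi, pr)*(1-RTM)
    prFq = (-nk*np.dot(pr, qi)*(1-RTE)
            + (np.dot(kr, qi)*np.dot(pr, n) - np.dot(kr, n)*np.dot(qi, pr))*(1+RTE))
    qrFq = -nk*np.dot(qr, qi)*(1-RTE) - np.dot(kr, n)*np.dot(qr, qi)*(1+RTE)
    return (1j*k1/(4*np.pi))*I*np.array([[prFp, prFq], [qrFp, qrFq]])
\end{verbatim}

At a specular geometry this reduces to the diagonal form derived in the next section, which makes a convenient unit test.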

%
%Then we can write
%\ea{
%\hat{p}_s \cdot \bb{F}_p & = & \hat{n}\cdot (\hat{q}_i\times\hat{p}_s) (1 + R^{\textrm{TM}}) + (\hat{n} \cdot \hat{k}_i) (-\hat{q}_i\cdot \hat{q}_s) (1 - R^{\textrm{TM}})  \\ 
%\hat{q}_s \cdot \bb{F}_p & = & \hat{n}\cdot (\hat{q}_i\times\hat{q}_s) (1 + R^{\textrm{TM}}) + (\hat{n} \cdot \hat{k}_i) (\hat{q}_i\cdot \hat{p}_s) (1 - R^{\textrm{TM}}) \\ 
%\hat{p}_s \cdot \bb{F}_q & = & -(\hat{n} \cdot \hat{k}_i) (\hat{p}_s\cdot \hat{q}_i) (1 - R^{\textrm{TE}}) + ((\hat{k}_s \cdot \hat{q}_i) (\hat{p}_s \cdot \hat{n}) - (\hat{k}_s \cdot \hat{n}) (\hat{q}_i \cdot \hat{p}_s) ) (1 + R^{\textrm{TE}})  \\ 
%\hat{q}_s \cdot \bb{F}_q & = &-(\hat{n} \cdot \hat{k}_i) (\hat{q}_s\cdot \hat{q}_i) (1 - R^{\textrm{TE}}) -(\hat{k}_s \cdot \hat{n}) (\hat{q}_s \cdot \hat{q}_i)  (1 + R^{\textrm{TE}}) 
%}

%\clearpage
%\newpage
\section{Radar Cross Section of a Facet}


Recall that the polarized bistatic radar cross section is defined as
\ea{\sigma_{pq}(\hat{k}_s,\hat{k}_i) &=& \lim_{kr \rightarrow \infty} 4\pi r^2 \dfrac{\left\vert \hat{p} \cdot \bb{E}_s \right\vert^2}{\left\vert \hat{q} \cdot \bb{E}_i \right\vert^2} \\
\ &= & 4\pi \vert S_{pq} \vert^2 \label{Spq}}

The total radar cross section for either scattered polarization is \ea{\sigma_{p} &=& \sigma_{pp}(\hat{e}_i \cdot \hat{p}_i )^2 + \sigma_{pq}(\hat{e}_i \cdot \hat{q}_i )^2  \\
\sigma_{q} &=& \sigma_{qp}(\hat{e}_i \cdot \hat{p}_i )^2 + \sigma_{qq}(\hat{e}_i \cdot \hat{q}_i )^2  }

In the frame of the facet, $\hat{q}$ and $\hat{p}$ are equivalent to the traditional $\hat{h}$ and $\hat{v}$ polarizations.  In other words, if $\hat{n} = \hat{z}$, then $\hat{q}=\hat{h}$ and $\hat{p} = \hat{v}$ in a global frame.


\subsection{Specular RCS of a Facet}

The special case of the facet RCS in the specular direction can be derived using \eqref{S11kirch}.  Assume that the facet lies in the $XY$ plane with $\hat{n} = \hat{z}$. In the specular direction, we have $\theta_r = \pi - \theta_i$ and $\phi_r = \phi_i$, where the reflected wave vector is just the incident wave vector reflected from the $XY$ plane. This means that
\ea{\hat{q}_r &=& \hat{q}_i \\
\hat{q}_i \cdot \hat{q}_r &=& 1 \\
\hat{p}_r \cdot \hat{q}_i &=& 0 \\
\hat{q}_i \times \hat{p}_r &=& -\hat{k}_r \\
\hat{q}_i \times \hat{q}_r &=& 0 \\
\hat{k}_r \cdot \hat{q}_i &=& 0}

Using these in \eqref{prFp}-\eqref{qrFq} gives:
\ea{\hat{p}_r \cdot \bb{F}_p & = & \hat{n}\cdot (-\hat{k}_r) (1 + R^{\textrm{TM}}) - (\hat{n} \cdot \hat{k}_i)  (1 - R^{\textrm{TM}})  \\ 
\hat{q}_r \cdot \bb{F}_p & = & 0 \\ 
\hat{p}_r \cdot \bb{F}_q & = &  0 \\ 
\hat{q}_r \cdot \bb{F}_q & = & -(\hat{n} \cdot \hat{k}_i) (1 - R^{\textrm{TE}}) - (\hat{k}_r\cdot \hat{n} )  (1 + R^{\textrm{TE}}) }

Using $\theta_i$ as the local incidence angle, we have for the specular direction $\cos\theta_i = -\hat{n} \cdot \hat{k}_i$ and $\cos\theta_i  = \hat{n} \cdot \hat{k}_r$. 
Substituting these and simplifying, we can write the polarized RCSs as
\ea{\sigma_{pp}  &=&  \sigma_{pec}\vert R^{\textrm{TM}} \vert^2 \\
\sigma_{qq}  &=&  \sigma_{pec}\vert R^{\textrm{TE}}\vert^2  }
\eq{\sigma_{pq} = \sigma_{qp} = 0}

\noindent where 
\eq{\sigma_{pec}  = \dfrac{k^2}{\pi} \cos^2\theta_i \vert I(\bb{k}_i,\bb{k}_r)\vert^2 \label{kapecspec} }

and $\sigma_{pec}$ is the RCS of a PEC surface, evaluated in the specular direction. This quantity assumes perfect reflection, and $\theta_i$ is the local incidence angle measured from the facet normal. These results show that 1) there are no cross polarization terms in the specular direction of a flat facet under the Kirchhoff approximation, and 2) the specular RCS of a facet at a dielectric interface is the same as that of a PEC facet, scaled only by the reflectivity of the surface evaluated at the incidence angle. As derived above, the value of $\vert I\vert^2$ in the specular direction is equal to the square of the facet area, regardless of the facet shape. Therefore, in the specular direction, the RCS of a PEC facet is just 
\eq{\sigma_{pec}  = \dfrac{k^2}{\pi} \cos^2\theta_i A^2  \label{specRCS} }

\begin{table}[h]
\caption{Specular RCS of a PEC Facet}
\begin{center}
\begin{tabular}{|p{1.8cm}|p{3.2cm}|c|p{2.7cm}|}
\hline
Facet & Geometry & $\sigma_{pec}$ & Notes \\
\hline
Any shape & \parbox[c]{1em}{\includegraphics[width=1.2in]{Kirchhoff/Figures/Specular}}  & $\dfrac{k^2}{\pi} \cos^2\theta_i A^2$ & $A$ is the facet area \\   \hline
\end{tabular}
\label{tablePECspecRCS}
\end{center}
\end{table}


\subsection{Backscatter RCS of a Facet}

To obtain the backscatter RCS of a facet, we use \eqref{S11kirch} with $\hat{k}_r = -\hat{k}_i$, $\hat{q}_r = -\hat{q}_i$, and $\hat{p}_r = \hat{p}_i$. Substituting these into \eqref{prFp}-\eqref{qrFq}, it can be shown that 
\ea{
\hat{p}_r \cdot \bb{F}_p & = & -2(\hat{n}\cdot \hat{k}_i) R^{\textrm{TM}}  \\ 
\hat{q}_r \cdot \bb{F}_p & = & 0 \\ 
\hat{p}_r \cdot \bb{F}_q & = & 0 \\ 
\hat{q}_r \cdot \bb{F}_q & = & -2(\hat{n} \cdot \hat{k}_i)  R^{\textrm{TE}} 
}

Then using \eqref{S11kirch} and \eqref{Spq}, we get
\ea{\sigma_{pp}  &=&  \sigma_{pec}\vert R^{\textrm{TM}} \vert^2 \\
\sigma_{qq}  &=&  \sigma_{pec}\vert R^{\textrm{TE}}\vert^2  }
\eq{\sigma_{pq} = \sigma_{qp} = 0}

\noindent where 
\eq{\sigma_{pec}  = \dfrac{k^2}{\pi} \cos^2\theta_i \vert I(\bb{k}_i,-\bb{k}_i)\vert^2 \label{kasigmapec} }

\noindent and $\sigma_{pec}$ is now the backscatter RCS of a PEC surface. As in the specular case, this assumes perfect reflection, and $\theta_i$ is the local incidence angle measured from the facet normal. There are no cross polarization terms for the backscatter of a flat facet under the Kirchhoff approximation, and the backscatter RCS of a facet at a dielectric interface is equal to the backscatter RCS of a PEC facet scaled by the reflectivity.
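Numerically, \eqref{kapecspec} and \eqref{kasigmapec} prescribe the same two steps: evaluate the PEC pattern, then scale by $\vert R\vert^2$. A minimal sketch follows, assuming the standard Fresnel reflection coefficients of a non-magnetic dielectric half-space (only $\vert R\vert^2$ enters, so the $R^{\textrm{TM}}$ sign convention is immaterial); the frequency, area, and permittivity are arbitrary:

{\footnotesize
\begin{verbatim}
import numpy as np

def fresnel(theta_i, eps_r):
    # Standard Fresnel coefficients for a non-magnetic dielectric
    # half-space (assumed form; only |R|^2 is used below)
    ct = np.cos(theta_i)
    s  = np.sqrt(eps_r - np.sin(theta_i)**2 + 0j)
    R_TE = (ct - s) / (ct + s)
    R_TM = (eps_r*ct - s) / (eps_r*ct + s)
    return R_TE, R_TM

def facet_rcs(sigma_pec, theta_i, eps_r):
    R_TE, R_TM = fresnel(theta_i, eps_r)
    return abs(R_TM)**2 * sigma_pec, abs(R_TE)**2 * sigma_pec  # sigma_pp, sigma_qq

# Example: specular RCS of a 1 m^2 facet at 60 MHz, theta_i = 30 deg,
# eps_r = 3.2 (all values illustrative)
k, A, th = 2*np.pi*60e6/3e8, 1.0, np.radians(30.0)
sigma_pec = (k**2/np.pi) * np.cos(th)**2 * A**2   # eq. (specRCS)
print(facet_rcs(sigma_pec, th, eps_r=3.2))
\end{verbatim}
}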
The surface phase integral still has to be evaluated over the facet shape; we use the results from Section \ref{secsurfphaseint} to find $\sigma_{pec}$ in the backscatter direction for simple shapes.
\paragraph{Rectangle} Assume that the facet lies in the $XY$ plane with $\hat{n} = \hat{z}$. In the backscatter direction $k_i = k_s = k$, $\theta_s = \pi - \theta_i$, and $\phi_s = \phi_i + \pi$, and it can be shown that the components of the wave vector difference are
\ea{K_x &=& 2 k \sin\theta_i \cos\phi_i \label{Kxback} \\
K_y &=& 2 k \sin\theta_i \sin\phi_i \label{Kyback} }

The surface phase integral over a rectangle, \eqref{SPIrect}, in the backscatter direction is 
\eq{I = L_x L_y \textrm{sinc}\left(L_x k \sin\theta_i \cos\phi_i\right)\textrm{sinc}\left(L_y k \sin\theta_i \sin\phi_i\right)}

Using this in \eqref{kasigmapec}, the backscatter RCS of a rectangular PEC facet is 
\eq{\sigma_{pec}  = \dfrac{k^2}{\pi} \cos^2\theta_i L_x^2 L_y^2 \textrm{sinc}^2\left(L_x k \sin\theta_i \cos\phi_i\right)\textrm{sinc}^2\left(L_y k \sin\theta_i \sin\phi_i\right) }
 
In \eqref{kasigmapec}, $\cos\theta_i$ was defined for the local incidence angle using \eqref{localinctheta}. However, the angles in the argument of the sinc technically belong to the incident wave vector. Because $\textrm{sinc}(x)$ is even, substituting $\theta_i \rightarrow \pi - \theta_i$ and $\phi_i \rightarrow \phi_i + \pi$ leaves the result unchanged. Therefore, for backscatter, the angles can be treated as belonging either to the incident wave vector or to a vector that points at the sensor. Note that the RCS is proportional to the square of the facet area.

\paragraph{Ellipse} 
Assuming that the facet lies in the $XY$ plane, and using \eqref{Kxback} and \eqref{Kyback}, the surface phase integral over an ellipse, \eqref{SPIellipse}, reduces to 
\ea{I &=&  2 \pi  a b \textrm{jinc}\left( 2k\sin\theta_i  \sqrt{  a^2 \cos^2\phi_i  + b^2 \sin^2\phi_i  }\right) \label{SPIellipseback} }

Using this in the expression for the radar cross section of a PEC surface, \eqref{kasigmapec}, the backscatter RCS of a PEC elliptical disk is  
 \eq{\sigma_{pec}  = 4 \pi k^2 \cos^2\theta_i a^2 b^2 \textrm{jinc}^2\left(2 k \sin\theta_i \sqrt{  a^2 \cos^2\phi_i  + b^2 \sin^2\phi_i  } \right) \label{pedbackscatterellipse} }
Because the argument is unchanged under $\theta_i \rightarrow \pi - \theta_i$ and $\phi_i \rightarrow \phi_i + \pi$, the angles can again be defined either from the incident wave vector or from a vector that points at the sensor, with $\theta_i$ measured from the facet normal.
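Both results are simple to evaluate. A sketch follows (NumPy/SciPy; the dimensions and wavelength are illustrative), using $\textrm{sinc}(x) = \sin(x)/x$ and $\textrm{jinc}(x) = J_1(x)/x$. At normal incidence both expressions reduce to $k^2 A^2/\pi$, which the asserts check:

{\footnotesize
\begin{verbatim}
import numpy as np
from scipy.special import j1

def sinc(x):              # sinc(x) = sin(x)/x (numpy's np.sinc is sin(pi x)/(pi x))
    return np.sinc(np.asarray(x)/np.pi)

def jinc(x):              # jinc(x) = J1(x)/x, with the limit jinc(0) = 1/2
    x = np.asarray(x, dtype=float)
    return np.where(x == 0.0, 0.5, j1(x)/np.where(x == 0.0, 1.0, x))

def sigma_pec_rect(k, Lx, Ly, th, ph):
    return (k**2/np.pi)*np.cos(th)**2 * Lx**2*Ly**2 \
        * sinc(Lx*k*np.sin(th)*np.cos(ph))**2 * sinc(Ly*k*np.sin(th)*np.sin(ph))**2

def sigma_pec_ellipse(k, a, b, th, ph):
    gam = np.sqrt(a**2*np.cos(ph)**2 + b**2*np.sin(ph)**2)
    return 4*np.pi*k**2*np.cos(th)**2 * a**2*b**2 * jinc(2*k*np.sin(th)*gam)**2

k = 2*np.pi/0.5           # 0.5 m wavelength (illustrative)
# Normal incidence: both reduce to (k^2/pi) A^2
assert np.isclose(sigma_pec_rect(k, 2.0, 1.0, 0.0, 0.0), (k**2/np.pi)*(2.0*1.0)**2)
assert np.isclose(sigma_pec_ellipse(k, 1.0, 0.5, 0.0, 0.0), (k**2/np.pi)*(np.pi*1.0*0.5)**2)
\end{verbatim}
}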
\n\n\\paragraph{Circle} Using \\eqref{SPIellipseback} with $b=a$, the surface phase integral for a circular disk is\n\\ea{I &=&  2 \\pi  a^2 \\textrm{jinc}\\left( 2k\\sin\\theta_i a \\right)}\n\nLikewise, from \\eqref{pedbackscatterellipse}, the backscatter RCS of the circular PEC disk is\n \\eq{\\sigma_{pec}  = 4 \\pi k^2 a^4 \\cos^2\\theta_i  \\textrm{jinc}^2\\left(2 k a \\sin\\theta_i \\right)  }\n\nAgain, $\\theta_i$ can be either the forward incident angle, or the local incident angle from facet normal.\n\n\\paragraph{Triangle} For the backscatter direction, $\\bb{K} = 2 \\bb{k}_i $. When $\\bb{K}$ is real, the surface phase integral of a triangular facet, \\eqref{SPItriangle}, has the following identity $I(-a,-b,-c) = I^*(a,b,c)$. Therefore, reversing the incident direction to use angles that point at the sensor will not change $\\vert I \\vert^2$. The backscatter RCS of a PEC triangle is then\n\\eq{\\sigma_{pec}  = \\dfrac{k^2}{\\pi}  A^2 \\cos^2\\theta_i \\vert g(b,c)\\vert^2 }\n\nNo simple reduction has been found for $\\vert g(b,c)\\vert^2$. \n\n\\paragraph{Summary}\nTable \\ref{tablePECbackRCS} has a summary of $\\sigma_{pec}$ for these shapes. In general, $\\sigma_{pec}$ can be decomposed as a product of four terms: 1) a factor of $k^2/\\pi$, 2) a factor of $\\cos^2\\theta_i$ for the area projection, 3) a factor of the facet area, and 4) a directionally dependent weighting function that has maximum value of one and comes from the Fourier transform over the domain of the facet via the surface phase integral. \n%\\clearpage\n%\\newpage\n \n\\begin{table}[h]\n\\caption{Backscatter RCS of PEC Facets}\\label{tablePECbackRCS}\n%\\vspace{-1.5\\baselineskip}\n\\begin{center}\n\\begin{tabular}{|p{1.6cm}|p{2.8cm}|c|p{2.9cm}|}\n\\hline\nFacet & Geometry & $\\sigma_{pec}$ & Notes \\\\\n\\hline\nAny shape & \\quad \\parbox[c]{1em}{\\includegraphics[width=1in]{Kirchhoff/Figures/Backscatter}}  &$\\dfrac{k^2}{\\pi} \\cos^2\\theta_i \\vert I(\\bb{k}_i,-\\bb{k}_i)\\vert^2 $ &  \\\\   \\hline\nRectangle & \\parbox[c]{1em}{\\includegraphics[width=1in]{Kirchhoff/Figures/Rectangle}}  & $\\begin{array}{c} \\dfrac{k^2}{\\pi} \\cos^2\\theta_i L_x^2 L_y^2 \\cdot \\\\ \\textrm{sinc}^2\\left(L_x k \\sin\\theta_i \\cos\\phi_i\\right)\\textrm{sinc}^2\\left(L_y k \\sin\\theta_i \\sin\\phi_i\\right) \\end{array}$ & $\\textrm{sinc}(x) = \\dfrac{\\sin(x)}{x}$ \\\\   \\hline\nEllipse & \\parbox[c]{1em}{\\includegraphics[width=1in]{Kirchhoff/Figures/Ellipse}}  & $\\begin{array}{c} 4 \\pi k^2 \\cos^2\\theta_i a^2 b^2 \\cdot \\\\\n\\textrm{jinc}^2\\left(2 k \\sin\\theta_i \\sqrt{  a^2 \\cos^2\\phi_i  + b^2 \\sin^2\\phi_i  } \\right) \\end{array} $ & $\\textrm{jinc}(x) = \\dfrac{J_1(x) }{x} $ \\\\   \\hline\nCircle & \\parbox[c]{1em}{\\includegraphics[width=1in]{Kirchhoff/Figures/Circle}}  &$ 4 \\pi k^2 a^4 \\cos^2\\theta_i  \\textrm{jinc}^2\\left(2 k a \\sin\\theta_i \\right)$  &  \\\\   \\hline\nTriangle & \\parbox[c]{1em}{\\includegraphics[width=1in]{Kirchhoff/Figures/Triangle}}  & $\\dfrac{k^2}{\\pi}  A^2 \\cos^2\\theta_i \\vert g(b,c)\\vert^2 $ & \\footnotesize$\\begin{array}{l}\nb = \\bb{K} \\cdot (\\bb{p}_2 - \\bb{p}_1) \\\\\nc = \\bb{K} \\cdot (\\bb{p}_3 - \\bb{p}_1) \\\\\n%\\dfrac{1}{3}\\left[\\bb{p}_1+\\bb{p}_2+\\bb{p}_3\\right] = 0\n\\end{array}$ \\\\   \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}%\n \n\n\n%2 \\pi  a b \\textrm{jinc}\\left(\\Omega \\Gamma \\right) }\n\n\n\n\n%\n%There are two approaches to this deviation. The first can be done in terms of the components $K_x$ and $K_y$. 
\\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s \\\\\n%\\pm \\Omega_{\\mp} \\sin\\gamma  &=& \\pm\\sin\\theta_i \\sin\\phi_i \\mp \\sin\\theta_s\\sin\\phi_s \\\\\n%\\Omega_{\\mp}^2  &=& (\\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s)^2 + (\\pm \\sin\\theta_i \\sin\\phi_i \\mp \\sin\\theta_s\\sin\\phi_s )^2 \\nonumber \\\\}\n%\n%From these, $ \\Omega_{\\mp} = \\Omega$ and we can make the following simplification\n%\\ea{\\Omega \\cos\\gamma  &=& \\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s \\\\\n%\\Omega \\sin\\gamma  &=& \\sin\\theta_i \\sin\\phi_i - \\sin\\theta_s\\sin\\phi_s \\\\\n%\\Omega^2  &=& (\\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s)^2 + (\\sin\\theta_i \\sin\\phi_i - \\sin\\theta_s\\sin\\phi_s )^2 \\nonumber \\\\}\n%\n%Solve for $\\gamma$ by dividing the first two relations\n%\\eq{\\tan\\gamma = \\dfrac{\\sin\\theta_i \\sin\\phi_i - \\sin\\theta_s\\sin\\phi_s}{\\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s}}\n%\n%Therefore,\n%\\ea{  (\\bb{k}_i - \\bb{k}_s) \\cdot  \\bb{r} &=&   k \\rho \\dfrac{1}{2} (a + b)  \\Omega \\cos(\\phi - \\gamma) + k \\rho \\dfrac{1}{2}(a-b) \\Omega \\cos(\\phi + \\gamma)   \\\\\n%&=&   k \\rho \\dfrac{1}{2} \\Omega \\left( (a + b)   \\cos(\\phi - \\gamma) + (a-b) \\cos(\\phi + \\gamma)\\right)   \\\\\n%&=&   k \\rho \\Omega \\left( a\\cos\\phi \\cos\\gamma + b\\sin\\phi \\sin\\gamma \\right)  }\n%\n%Using the last one, the phase integral is \n%\\ea{I &=& a b \\int_{0}^{2\\pi} \\int_{0}^{1} \\exp\\left(i k \\rho \\Omega \\left( a\\cos\\phi \\cos\\gamma + b\\sin\\phi \\sin\\gamma \\right) \\right)  \\rho d\\rho d\\phi }\n%\n%Using this identity: \n%\\eq{\\int_0^{2\\pi} \\exp\\left( u \\cos\\theta +v \\sin\\theta\\right) d\\theta = 2 \\pi I_0 \\left( \\sqrt{u^2 + v^2} \\right)}\n%\n%we get\n%\\ea{I &=&  2 \\pi  a b \\int_{0}^{1}I_0 \\left( \\sqrt{ \\left( i k \\rho \\Omega a \\cos\\gamma \\right)^2 + \\left( i k \\rho \\Omega b \\sin\\gamma \\right)^2 }\\right) \\rho d\\rho  \\\\ \n%\\ &=&  2 \\pi  a b \\int_{0}^{1}I_0 \\left( i k \\rho \\Omega \\Gamma \\right) \\rho d\\rho \\\\}\n%\n%where \n%\\eq{\\Gamma = \\sqrt{  a^2 \\cos^2\\gamma + b^2 \\sin^2\\gamma}}\n%\n%According to Alpha\n%\\eq{\\int_0^1 I_0\\left( i c x \\right) x dx = \\dfrac{J_1(c)}{c} }\n%\n%Then \n%\\ea{I &=&  2 \\pi  a b \\dfrac{J_1(k \\Omega \\Gamma)}{k \\Omega \\Gamma} }\n%\n%which reduces to the expression for the circular disk when $b = a$.  [Need to check this with numerical solution.]\n%\n%\\paragraph{Backscatter}: For the backscatter direction, $\\theta_s = \\pi - \\theta_i$ and $\\phi_s = \\phi_i + \\pi$.  
From before, we know that $\\Omega$ reduces to \n%\n%\\eq{\\Omega = 2\\sin\\theta_i}\n%\n%And for $\\Gamma$,\n%\\ea{\\tan\\gamma &=& \\dfrac{\\sin\\theta_i \\sin\\phi_i - \\sin\\theta_s\\sin\\phi_s}{\\sin\\theta_i \\cos\\phi_i- \\sin\\theta_s\\cos\\phi_s} \\\\ \n%\\ & =& \\dfrac{\\sin\\theta_i \\sin\\phi_i - \\sin(\\pi - \\theta_i)\\sin(\\phi_i + \\pi)}{\\sin\\theta_i \\cos\\phi_i- \\sin(\\pi - \\theta_i)\\cos(\\phi_i + \\pi) } \\\\\n%\\ & =& \\dfrac{\\sin\\theta_i \\sin\\phi_i + \\sin\\theta_i\\sin\\phi_i}{\\sin\\theta_i \\cos\\phi_i+ \\sin\\theta_i\\cos\\phi_i } \\\\  \n%\\ & =& \\tan\\phi_i \\\\ \n%\\gamma &=& \\phi_i}\n%\n%Therefore, for backscatter \n%\\eq{\\Gamma = \\sqrt{  a^2 \\cos^2\\phi_i  + b^2 \\sin^2\\phi_i  }}\n%\n%The phase integral is \n%\\ea{I &=&  2 \\pi  a b \\dfrac{J_1(2k\\sin\\theta_i \\Gamma )}{2k\\sin\\theta_i \\Gamma } }\n%\n%Using the radar cross section for a PEC\n%\\eq{\\sigma_{pec}  = \\dfrac{k^2}{\\pi} \\cos^2\\theta_i I^2 }\n%\n%The backscatter RCS of a PEC elliptic disk is \n%\n% \\eq{\\sigma_{pec}  = 4 \\pi k^2 \\cos^2\\theta_i a^2 b^2 \\textrm{jinc}^2\\left(2 k \\sin\\theta_i \\sqrt{  a^2 \\cos^2\\phi_i  + b^2 \\sin^2\\phi_i  } \\right)  }\n%\n%\n\n", "meta": {"hexsha": "c6ce3ccd7392f4c94b5cb85c4bc0ad329836c21d", "size": 92265, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/Kirchhoff/Kirchhoff.tex", "max_stars_repo_name": "nasa-jpl/Waveport", "max_stars_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-08-29T13:29:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T20:09:47.000Z", "max_issues_repo_path": "Tex/Kirchhoff/Kirchhoff.tex", "max_issues_repo_name": "ruzakb/Waveport", "max_issues_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/Kirchhoff/Kirchhoff.tex", "max_forks_repo_name": "ruzakb/Waveport", "max_forks_repo_head_hexsha": "caeb9540693185e000e08d826bc2ccabb6aa82bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-08-29T13:28:44.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-08T19:58:04.000Z", "avg_line_length": 54.6917605216, "max_line_length": 838, "alphanum_fraction": 0.6316804856, "num_tokens": 36862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245787544825, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5744153074220812}}
{"text": "\\section{CNN Classifier}\r\n\r\n\\subsection{Region of Interest Size}\r\n\r\nWe generate locations for the potential merges from the skeletons using the algorithm in Section 3.2.\r\nFrom these locations we extract a cubic region of interest.\r\nWe experimented with three different cube side lengths: $800 \\textrm{nm}$, $1200 \\textrm{nm}$, and $1600 \\textrm{nm}$. \r\nSide lengths of $1600 \\textrm{nm}$ performed best on the training and validation data with our network. \r\n\r\n\\subsection{CNN Parameters}\r\n\r\nWe experimented with several different network architectures before deciding on the one presented in our paper. \r\nWe considered two optimizers: stochastic gradient descent with Nesterov momentum and Adam~\\cite{kingma2014adam}.\r\nIn addition we tried two different loss functions: binary cross-entropy and mean squared error.\r\nWe also experimented with four different input sizes based on the output size of the last max pooling layer.\r\nAfter running a brute force search over all of these architectures, we found that the best architecture used an SGD optimizer with Nesterov momentum, a mean squared error loss function, and an input size of $76 \\times 76 \\times 24$. \r\n\r\n\\subsection{Architecture}\r\n\r\n\\begin{figure*}\r\n\t\\centering\r\n\t\\includegraphics[width=0.9\\linewidth]{./figures/architecture.png}\r\n\t\\caption{The final architecture presented in this paper.}\r\n\t\\label{fig:architecture}\r\n\\end{figure*}\r\n\r\nFigure \\ref{fig:architecture} shows the final architecture that produced the best results on the validation data. \r\nThe input data was $76 \\times 76 \\times 24$ voxels with $3$ channels per voxel. \r\nThis corresponds to an output size from the third max pooling layer of $6 \\times 6 \\times 6$ with 64 filters. \r\nThis flattens to a $13,824$-element vector, and two dense layers of output sizes $512$ and $1$ follow. \r\nAll activation layers are Leaky ReLU with $\\alpha=0.001$ except for the final layer, which has a sigmoid activation.", "meta": {"hexsha": "f5889cf665821860880ebbf023a7fec3a25f90c2", "size": 1922, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/cvpr2018/supplemental/classifier.tex", "max_stars_repo_name": "romil797/ibex", "max_stars_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "papers/cvpr2018/supplemental/classifier.tex", "max_issues_repo_name": "romil797/ibex", "max_issues_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/cvpr2018/supplemental/classifier.tex", "max_forks_repo_name": "romil797/ibex", "max_forks_repo_head_hexsha": "898134a96e299d8106d9deb7b217671c39bfeca2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.0, "max_line_length": 233, "alphanum_fraction": 0.7747138398, "num_tokens": 452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711832583696, "lm_q2_score": 0.6757645944891558, "lm_q1q2_score": 0.5743804319820601}}
{"text": "\n\\section{1D example}\nConsider the 1D case of Theorem \\ref{lem:stratifiedapprox2}\n\\begin{theorem}\nSuppose $x\\in [-T, T]$ and $f\\in \\mathcal{B}^{m+1, 1}([-T, T])$.\n%$$\n% \\int_{\\mathbb{R}^{d}} |\\hat{f}(\\omega)|\\|\\omega\\|_{\\ell_q}^{m+1} d\\omega<\\infty.\n%$$\nThere exist $\\beta_j\\in \\{-1, 1\\}$, $\\bar \\omega_j\\in\\{1, -1\\}$ and $t_j\\in [0,T]$ such that the function \n\\begin{equation}\\label{1drepresentation}\nf_n(x)= \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k + {\\nu\\over m!n}\\sum_{j=1}^{n}\\beta_j (\\bar \\omega_j x - t_j)_+^m\n\\end{equation} \nwith $\\nu=\\int_{\\{-1,1\\}\\times [0,T]\\times \\mathbb{R}^{d}} \\rho(\\theta)d\\theta$ and $\\rho(\\theta)$ defined in \\eqref{eq:straglam}, \nsatisfies the following estimate\n\\begin{equation}\n\\|f - f_n \\|_{H^k(\\Omega)} \\leq C(m,k,\\Omega) n^{-{1\\over 2}-{1\\over d+2}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)},\\qquad k\\le m.\n\\end{equation} \nIn particular,\n\\begin{equation}\n\\|f - f_n \\|_{L^2(\\Omega)} \\leq {(2T)^m|\\Omega|^{1\\over 2}\\over (m-1)!} n^{-{1\\over 2}-{1\\over d+2}}\\|f\\|_{\\mathcal B^{m+1, q}(\\Omega)}.\n\\end{equation} \n%\\begin{equation}\n%\\|D^\\beta (f(x)- f_n(x))\\|_{L^2(\\Omega)}\\le \\sqrt{2^{m-k-2}(2m-k)\\over k!(m-k)!}|\\Omega|^{1/2} n^{-{1\\over 2}-{1\\over d}},\\quad |\\beta|=k\\le m.\n%\\end{equation}\n\\end{theorem} \n\nIf $\\bar \\omega_j=-1$,\n\\begin{equation} \n\\beta_j (\\bar \\omega_j  x - t_j)_+^m = \\beta_j (- (x - (-t_j)))_+^m.\n\\end{equation} \nIf $m$ is odd, \n\\begin{equation} \n\\beta_j (\\bar \\omega_j  x - t_j)_+^m = - \\beta_j (x - (-t_j))_-^m = \\tilde \\beta_j(x - \\tilde t_j)_-^m\n\\end{equation} \nwith $\\tilde \\beta_j=-\\beta_j\\in \\{-1, 1\\}$ and $\\tilde t_j\\in[-T, 0]$.\nIf $m$ is even, \n\\begin{equation} \n\\beta_j (\\bar \\omega_j  x - t_j)_+^m = \\tilde \\beta_j (x - \\tilde t_j)_-^m\n\\end{equation} \nwith $\\tilde \\beta_j=\\beta_j\\in \\{-1, 1\\}$ and $\\tilde t_j\\in[-T, 0]$.\n\n\n\\begin{equation}\n\\begin{split}\n&f(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k \n\\\\ \n=&{\\rm Re} \\bigg ({i^{m+1}\\over m!}\\int_{\\mathbb{R}^d} \\int_{0}^T\\left[(\\bar \\omega  x - t)_+^me^{i|\\omega|t}\n+(-1)^{m-1}(-\\bar \\omega x - t)_+^me^{-i|\\omega|t} \\right]\\hat{f}(\\omega)|\\omega|^{m+1}dt d\\omega\\bigg ) \n\\end{split}\n\\end{equation}\nIf $m$ is odd,\n$$\n(\\bar \\omega  x - t)_+^me^{i|\\omega|t}\n+(-1)^{m-1}(-\\bar \\omega x - t)_+^me^{-i|\\omega|t}\n=\n(x - t)_+^me^{i\\omega t}\n+(- x - t)_+^me^{-i\\omega t}.\n$$\nThus,\n\\begin{equation}\n\\begin{split}\n&f(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k \n\\\\ \n=&{\\rm Re} \\bigg ({1\\over m!}\\int_{\\mathbb{R}^d} \\int_{0}^T\\left[(x - t)_+^me^{i \\omega t}\n+ (- x - t)_+^me^{-i \\omega t} \\right]\\hat{f}(\\omega) (i\\omega)^{m+1}dt d\\omega\\bigg ) \n\\end{split}\n\\end{equation}\nNote that\n\\begin{equation}\nf^{(m)}(t)=\\int (i\\omega)^m\\hat{f}(\\omega)e^{i\\omega t}d\\omega \n\\end{equation} \nIt holds that\n\\begin{equation}\n\\begin{split}\n&f(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k \n\\\\ \n=&{\\rm Re} \\bigg ({1\\over m!}\\int_{\\mathbb{R}^d} \\int_{0}^T\\left[(x - t)_+^me^{i \\omega t}\n+ (- x - t)_+^me^{-i \\omega t} \\right]\\hat{f}(\\omega) (i\\omega)^{m+1}dt d\\omega\\bigg ) \n\\\\\n=& {1\\over m!} \\int_{0}^T\\left[(x - t)_+^m f^{(m+1)}(t)\n+ (- x - t)_+^mf^{(m+1)}(-t) \\right] dt  \n\\end{split}\n\\end{equation} \n\n\\noindent\\textbf{1D case from Taylor expansion}\n\nConsider the case $\\Omega=[0,1]$; the integral form of the Taylor remainder 
reads\n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k &={1\\over m!}\\int_0^x f^{(m+1)}(t)(x-t)^mdt\n\\\\\n&={1\\over m!}\\int_0^1 f^{(m+1)}(t)(x-t)_+^mdt.\n\\end{aligned}\n\\end{equation}\nLet the probability density be\n\\begin{equation}\n\\lambda(t)= {|f^{(m+1)}(t)|\\over \\|f^{(m+1)}\\|_{L^1}}.\n\\end{equation}\nThus,\n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k &={ \\|f^{(m+1)}\\|_{L^1}\\over m!}\\int_0^1 sgn(f^{(m+1)}(t))(x-t)_+^m\\lambda(t)dt.\n\\end{aligned}\n\\end{equation}\n\\begin{lemma}\n\t There exists \n\t \\begin{equation}\n\\begin{aligned}\nf_n(x) = \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k + { \\|f^{(m+1)}\\|_{L^1}\\over m!n}\\sum_{i=1}^n\\beta_i  (x-t_i)_+^m.\n\\end{aligned}\n\\end{equation}\nwith $\\beta_i\\in [-2,2]$\n such that\n\t\\begin{equation} \n\t\\|f-f_n\\|_{H^k(\\Omega)} \\leq n^{-3/2}|\\Omega|^{1/2}.\n\t\\end{equation}  \n\t\\end{lemma}\n\n\\begin{proof}\nNote that $\\displaystyle f(x)-f_n(x)={ \\|f^{(m+1)}\\|_{L^1}\\over m!}\\left(r(x) - r_n(x)\\right)$ with\n\\begin{equation}\nr(x) = \\int_0^1 sgn(f^{(m+1)}(t))(x-t)_+^m\\lambda(t)dt,\\quad r_n(x)= {1\\over n}\\sum_{i=1}^n\\beta_i  (x-t_i)_+^m.\n\\end{equation}\nConsider an $\\epsilon$-covering decomposition $G=\\cup_{i=1}^M G_i$ of $\\Omega$ such that \n\\begin{equation}\nsgn(f^{(m+1)}(t))= sgn(f^{(m+1)}(t')), \\quad |t-t'|<\\epsilon, \\quad \\forall t,\\ t'\\in G_i.\n\\end{equation}\nLet $n_j=\\lceil \\lambda(G_j)n\\rceil$ and $t_{i,j} \\in G_j(1\\leq i\\leq n_j)$ and \n\\begin{equation}\nr(x) = \\int_0^1 g(x, t)\\lambda(t)dt,\\quad r_n(x)= \\sum_{i=1}^M \\lambda(G_i) g_{n_i}^i,\\quad \\mbox{ with }\\ g(x, t)=(x-t)_+^m,\\ g_{n_i}^i ={1\\over n_i}  \\sum_{j=1}^{n_i}(x-t_{ij})_+^m.\n\\end{equation}\nDefine\n\\begin{equation}\n\\begin{aligned}\nf_n(x) = \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k + { \\|f^{(m+1)}\\|_{L^1}\\over m!}r_n(x).\n\\end{aligned}\n\\end{equation}\nThus, $\\displaystyle f(x)-f_n(x)={ \\|f^{(m+1)}\\|_{L^1}\\over m!}\\left(r(x) - r_n(x)\\right)$. 
Note that \n\\begin{equation}\n\\begin{aligned}\nr(x) = \\mathbb{E}_G g =  \\sum_{i=1}^{M}\\lambda(G_i) \\mathbb{E}_{G_i} g.\n\\end{aligned}\n\\end{equation}\nBy Lemma \\ref{MC},\n\\begin{equation}\n\\begin{split}\n\\mathbb{E}_n\\| r^{(k)} - r_n^{(k)} \\|_{L^2(\\Omega)}^2=& \n \\sum_{i=1}^{M}{\\lambda^2(G_i) \\over n_i}\\mathbb{E}_{G_i}\\|\\mathbb{E}_{G_i} \\partial_x g -  \\partial_x g^{(k)}\\|^2_{L^2(\\Omega)}\n\\\\\n\\le & \\sum_{i=1}^{M} {\\lambda^2(G_i)\\over n_i}\\sup_{t,t'\\in G_i} \\|\\partial_x  g(x, t) - \\partial_x g(x,t')\\|^2_{L^2(\\Omega)}.\n\\end{split}\n\\end{equation} \nSince ${\\lambda(G_i)\\over n_i}\\le {1\\over n}$ and $\\displaystyle \\sum_{i=1}^M \\lambda(G_i)=1$,\n\\begin{equation}\n\\mathbb{E}_n\\| r^{(k)} - r_n^{(k)} \\|_{L^2(\\Omega)}^2\\leq n^{-1}\\max_{1\\le i\\le M}\\sup_{t,t'\\in G_i} \\| \\partial_x g(x,t) - \\partial_x g(x,t')\\|^2_{L^2(\\Omega)}.\n\\end{equation}\nNotice that for any $1\\le i\\le M$,\n\\begin{equation}\n\\sup_{t,t'\\in G_i} \\| \\partial_x g(x,t) - \\partial_x g(x,t')\\|^2_{L^2(\\Omega)}\\lesssim |t-t'|^2|\\Omega| \\le | \\Omega| \\epsilon^2.\n\\end{equation}\nSince $\\epsilon \\sim M^{-1}$,\nthere exist $\\{t_{i,j}^\\ast\\}$ such that $t_{i,j}^\\ast\\in G_i$ and \n\\begin{equation}\n\\| f^{(k)} -  f_n^{(k)} \\|_{L^2(\\Omega)}^2\\leq (M^2n)^{-1}| \\Omega|.\n\\end{equation}\nSuppose $\\displaystyle N=\\sum_{j=1}^M n_j$, $n\\le N\\le n+M$,\n$$\nr_n(x) =  {1\\over N}\\sum_{i=1}^{M}\\frac{N\\lambda(G_i)}{n_i}\\sum_{j=1}^{n_i} (x - t_{i,j}^\\ast)_+^m\n =  {1\\over N}\\sum_{i=1}^{M}\\beta_{i}\\sum_{j=1}^{n_i} (x - t_{i,j}^\\ast)_+^m\n$$ \nwith\n\\begin{equation}\n\\beta_{i}= \\frac{N\\lambda(G_i)}{n_i}\\le \\frac{\\lambda(G_i)(M+n)}{\\lambda(G_i)n}\\le {M+n\\over n}.\n\\end{equation}\nLet $M=n$. There exist $\\{t_{i,j}^\\ast\\}$ such that $t_{i,j}^\\ast\\in G_i$ and \n\\begin{equation}\n\\| f -  f_n \\|_{H^k(\\Omega)}^2\\leq N^{-3}| \\Omega|,\n\\end{equation}\nwhich completes the proof. \n\\end{proof}
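\n\nAs a concrete illustration of the lemma just proved (the function below is chosen purely for illustration), take $m=1$, $\\Omega=[0,1]$ and $f(x)=e^x$. Then $f^{(2)}(t)=e^t>0$, so $sgn(f^{(2)}(t))\\equiv 1$, $\\|f^{(2)}\\|_{L^1}=e-1$ and $\\lambda(t)=e^t/(e-1)$, and the representation becomes\n\\begin{equation}\nf_n(x) = 1 + x + {e-1\\over n}\\sum_{i=1}^n (x-t_i)_+\n\\end{equation}\nwith $t_i$ sampled from $\\lambda$; indeed $\\int_0^1 e^t(x-t)_+ dt = e^x-1-x$ recovers the Taylor remainder exactly.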
\n\n\\begin{theorem} \n\t There exists \n\t \\begin{equation}\n\\begin{aligned}\nf_n(x) = \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k + { \\|f^{(m+1)}\\|_{L^1}\\over m!n}\\sum_{i=1}^n (x-t_i)_+^m.\n\\end{aligned}\n\\end{equation}\n such that\n\t\\begin{equation} \n\t\\|f-f_n\\|_{H^k(\\Omega)} \\leq n^{-{3\\over 4}}|\\Omega|^{1/2}.\n\t\\end{equation}  \n\\end{theorem} \n\n\\begin{proof}\nConsider the same decomposition $G=G_1\\cup \\cdots \\cup G_M$ of $\\Omega$ such that\n$$\n|t -t'| \\leq \\epsilon \\le M^{-1},\\quad t, t'\\in G_i.\n$$ \nLet  $t_{i,j} \\in G_i(1\\leq j\\leq n_i)$,  $n_i$ equal $\\lceil \\lambda(G_i)n\\rceil$ and $\\lfloor \\lambda(G_i)n\\rfloor$ with probabilities chosen to make its mean equal to $\\lambda(G_i)n$ and $m_i=n_i + \\mathbb{I}(n_i=0)$. Then\n\\begin{align} \n\\sum_{i=1}^M m_i&=\\sum_{i=1}^M n_i\\mathbb{I}(n_i>0) + \\sum_{i=1}^M n_i\\mathbb{I}(n_i=0)\n\\\\\n&\\le \\sum_{i=1}^M (n\\lambda(G_i) + 1)\\mathbb{I}(n_i>0) + \\sum_{i=1}^M \\mathbb{I}(n_i=0)\n\\\\\n&\\le  n\\sum_{i=1}^M \\lambda(G_i)\\mathbb{I}(n_i>0) + M\n\\le n+M.\n\\end{align} \nDefine $\\displaystyle N=\\sum_{i=1}^Mn_i$  and\n$$\nr_{n}(x)= {1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} g(x, t_{i,j}).\n$$\nBy the definition of $m_i$, $\\displaystyle {n_i\\over m_i}=0$ or $1$.  This means that $r_{n}(x) $ is in the form of  $\\displaystyle {1\\over N}\\sum_{i=1}^Ng(x,\\theta_i)$.  Define\n$$\n\\bar{r}_{n}(x)= \\sum_{i=1}^M {n_i\\over N}\\mathbb{E}_{G_i}g.\n$$\nSince\n$\n\\displaystyle r(x)=\\mathbb{E}_G g= \\sum_{i=1}^M \\lambda(G_i) \\mathbb{E}_{G_i}g,\n$  \n\\begin{align}  \nr_{n} - r  &= r_{n} - \\bar{r}_{n} + \\bar{r}_{n} - r\n\\\\\n&= {1\\over N}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\big (g(x,t_{i,j}) - \\mathbb{E}_{G_i}g \\big ) + {1\\over N}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)  \\mathbb{E}_{G_i}g.\n\\end{align}  \nIt follows that  \n\\begin{align}  \n\\|r_{n} - r \\|_{L^2(\\Omega)}^2 &\n\\le {1\\over N^2}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\|g(x,t_{i,j}) - \\mathbb{E}_{G_i}g \\|_{L^2(\\Omega)}^2 \n+  {1\\over N^2}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)^2  \\|\\mathbb{E}_{G_i}g\\|_{L^2(\\Omega)}^2.\n\\end{align} \nFor the first term on the right-hand side of the above equation,\n\\begin{align}  \n{1\\over N^2}\\sum_{i=1}^M {n_i\\over m_i} \\sum_{j=1}^{m_i} \\|g(x,t_{i,j}) - \\mathbb{E}_{G_i}g \\|_{L^2(\\Omega)}^2 \n\\le {|\\Omega|\\epsilon^2\\over N^2}\\sum_{i=1}^M n_i ={|\\Omega|\\epsilon^2\\over N}.\n\\end{align} \nRecall that\n$\ng(x, t)= (x - t)_+^m\n$\nup to a sign factor, so it is bounded, say $|g(x, t)|\\le C$. Note that $|n-N|\\le M$. Thus,\n\\begin{align}\n {1\\over N^2}\\sum_{i=1}^M (n_i - \\lambda(G_i)N)^2  \\|\\mathbb{E}_{G_i}g\\|_{L^2(\\Omega)}^2& \\le {1 \\over N^2}\\sum_{i=1}^M  (M^2 + {1\\over \\lambda^2(G_i)}) \\int_\\Omega \\int_{G_i} g^2(x, t)\\lambda^2(t)dt d\\mu(x)\n\\\\\n&\\le  {C^2|\\Omega|\\over N^2}(M^2 +M)\\le {\\tilde C\\over N^2}M^2.\n\\end{align}  \nIt follows that \n\\begin{align}  \n\\mathbb{E}_n \\|r_{n} - r \\|_{L^2(\\Omega)}^2 \n\\le {\\epsilon^2 \\over N} + {\\tilde C\\over N^2}M^2\\label{stratify2t}.\n\\end{align}  \nChoose $\\displaystyle {\\epsilon^2 \\over N} = {\\tilde C\\over N^2}M^2$, then $M^2\\sim N\\epsilon^2$ and \n$$\n\\mathbb{E}_n \\|r_{n} - r \\|_{L^2(\\Omega)}^2 \\le {2\\epsilon^2 \\over N} \\lesssim N^{-{3\\over 2}}.\n$$ \nThere exist $\\{t_{i,j}^\\ast\\}$ such that $t_{i,j}^\\ast\\in G_i$ and \n\\begin{equation}\n\\| f -  f_n \\|_{L^2(\\Omega)}\\le {\\nu\\over m!}\\|r_{n} - r\\|_{L^2(\\Omega)}\\lesssim N^{-{3\\over 4}},\n\\end{equation}\nwhich completes the proof. \n\n\\end{proof}\n\n\n\n\n\\hrule\n\n\\vspace{1cm}\nIf $x>0$,\n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k &={1\\over m!}\\int_0^T f^{(m+1)}(t)(x-t)_+^mdt;\n\\end{aligned}\n\\end{equation}\nIf $x<0$, since $u_-=-(-u)_+$ ,\n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k &=-{1\\over m!}\\int_0^{-x} f^{(m+1)}(-t)(x+t)^mdt\n\\\\\n&=-{1\\over m!}\\int_0^T f^{(m+1)}(-t)(x+t)_-^mdt\n\\\\\n&={(-1)^{m+1}\\over m!}\\int_0^T f^{(m+1)}(-t)(-x-t)_+^mdt\n\\end{aligned}\n\\end{equation}\nThis implies that\n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k \n&={1\\over m!}\\int_0^T f^{(m+1)}(t)(x-t)_+^m + (-1)^{m+1}f^{(m+1)}(-t)(-x-t)_+^mdt\n\\\\\n&={1\\over m!}\\int_{\\{-1,1\\}}\\int_0^T z^{m+1}f^{(m+1)}(zt)(zx-t)_+^m dtdz\n\\\\\n&={1\\over m!}\\int_{\\{-1,1\\}}\\int_0^T |f^{(m+1)}(zt)| s(z, t)(zx-t)_+^m dtdz\n%\\\\\n%&={1\\over m!}\\int_0^T |f^{(m+1)}(t)|s_1(t)(x-t)_+^m + |f^{(m+1)}(-t)|s_2(t)(-x-t)_+^mdt\n\\end{aligned}\n\\end{equation}\nwith $s(z, t)= z^{m+1}sgn (f^{(m+1)}(zt))$.\n%with $s_1(t)=sgn (f^{(m+1)}(t))$ and $s_2(t)=sgn ((-1)^{m+1}f^{(m+1)}(-t))$. 
\nIt can be written as \n\\begin{equation}\n\\begin{aligned}\nf(x) - \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k &={\\|f^{(m+1)}\\|_{L^1}\\over m!}\\int_{[0,T]\\times \\{-1, 1\\}}s(z,t)(zx-t)_+^m \\lambda(z,t)dtdz\n\\end{aligned}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{aligned}\n\\lambda(z,t)={|f^{(m+1)}(zt)|\\over \\int_{[0,T]\\times \\{-1, 1\\}} |f^{(m+1)}(zt)| dt dz}\n\\end{aligned}\n\\end{equation}\nBy Monte Carlo, there exist $t_j\\in [0, T]$, $z_j\\in \\{-1, 1\\}$ and $\\beta_j\\in \\{-1, 1\\}$ such that\n\\begin{equation}\n\\begin{aligned}\n\\|f(x) - f_n(x)\\|_{L^2} \\le {1\\over n^{1/2}}\n\\end{aligned}\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\nf_n(x) = \\sum_{k\\le m}{1\\over k!}f^{(k)}(0) x^k + {\\|f^{(m+1)}\\|_{L^1}\\over m!n}\\sum_{j=1}^n\\beta_j (z_jx-t_j)_+^m\n\\end{aligned}\n\\end{equation}\n\n\\noindent\\textbf{Higher dimension case from Taylor expansion}\n\nThe Taylor expansion reads\n\\begin{equation}\nf(x)=f(0)+\\sum_{1\\le |\\alpha| \\leqslant k} \\partial^\\alpha f(0) x^{\\alpha}+\\sum_{|\\alpha|=k+1} \\frac{k+1}{\\alpha !} x^{\\alpha} \\int_{0}^{1}(1-t)^{k} \\partial^\\alpha f(t x) \\quad d t\n\\end{equation}\nLet $y=tx$.\n\\begin{equation}\n\\begin{aligned}\nx^{\\alpha}(1-t)^{k} \\partial^{\\alpha} f(t x) d t &=(1-t)^{k} \\partial^{\\alpha} f(y) \\prod_{i=1}^{N} x^{\\alpha_{i}} d t\\\\\n&=\\partial^{\\alpha} f(y) (1-t)^{k} \\frac{y^{\\alpha}}{t^{k+1}} d t \\\\\n&=\\partial^{\\alpha} f(y)  \\frac{1}{t} \\prod_{i=1}^{N}\\left(\\frac{1}{t}-1\\right)^{\\alpha_{i}} y_{i}^{\\alpha_{i}} d t \\\\\n&=\\partial^{\\alpha} f(y)  \\frac{1}{t} \\prod_{i=1}^{N}\\left(x_{i}-y_{i}\\right)^{\\alpha_{i}} d t \\\\\n&=\\partial^{\\alpha} f(y)  (x-y)^{\\alpha} \\frac{1}{t} d t\n\\end{aligned}\n\\end{equation}\nThus \n\\begin{equation}\nf(x)=f(0)+\\sum_{1\\le |\\alpha| \\leqslant k} \\partial^\\alpha f(0) x^{\\alpha}+\\sum_{|\\alpha|=k+1} \\frac{k+1}{\\alpha !}  \\int_{0}^{\\infty} \\partial^\\alpha f(y) (x-y)_+^\\alpha  d s\n\\end{equation}\nwith \n\\begin{equation}\n(x-y)_{+}^{\\alpha}=\\left\\{\\begin{array}{ccc}\n(x-y)^{\\alpha} & \\text { if }  \ny=sx,\\ 0\\le s \\le 1 \n\\\\\n0 &  \\text { otherwise } \n\\end{array}\\right.\n\\end{equation}\n\n\n\\noindent\\textbf{Higher dimension case from Taylor expansion 2}\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "1993c05ea426bc239796536e4cb0d8a445a5aca8", "size": 13334, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/ReLU1d.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/ReLU1d.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/ReLU1d.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3324250681, "max_line_length": 225, "alphanum_fraction": 0.5713964302, "num_tokens": 6516, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5743706436851754}}
{"text": "\\subsection{Sliding Window Technique} \\label{sliding_window_technique}\nWe are given a continuous time series data stream $Q$. The sliding window technique is interested in the last\nsubsequence of size $w$. All subsequences or time series data points of $Q$ that lie further back than the window size\n$w$ are irrelevant at this moment. The sliding window application tries to classify this last subsequence of $Q$ with\nthe help of a time series classifier. This time series classifier makes its decisions based on a given time series\ntraining set. The application triggers an event in the case of a positive classification of the subsequence, and another\napplication can react to this event.\n\nThe sliding window application remains idle until the time series data stream $Q$\ngrows. Afterwards, the process is repeated. However, the sliding window application waits until the time series\ndata stream $Q$ has grown by a predefined step size $s$. If necessary, this stepwise approach avoids executing the\nprocess after every new data point in $Q$. Figure \\ref{fig:swt} illustrates the described process.\n\n\\tikzstyle{decision} = [diamond, draw, aspect=2, fill=white!20, text width=8em, text badly centered, node distance=2cm, inner sep=0pt]\n\\tikzstyle{block} = [rectangle, draw, fill=white!20, text width=5em, text centered, minimum height=4em]\n\\tikzstyle{line} = [draw, -latex']\n\n\\begin{figure}\n    \\begin{center}\n        \\resizebox {\\textwidth} {!} {\n            {\\tiny\n                \\begin{tikzpicture}[node distance = 1.5cm, auto]\n                    \\node [block] (sod) {sensors or devices};\n                    \\node [block, right of=sod, node distance=6cm, text width=2cm] (extract) {Extract last subsequence from Q of size $w$, $Q[t-w,t]$};\n                    \\node [block, right of=extract, node distance=4cm, text width=2cm] (nnc) {Time series classifier};\n                    \\node [decision, below of=nnc] (decide) {$Q[t-w,t]$ classifiable?};\n                    \\node [block, left of=decide, node distance=3cm] (sleeps) {Sleep for $s$ time};\n                    \\node [block, below of=decide, node distance=2cm, text width=2cm] (action) {Trigger event that $Q[t-w,t]$ has been classified and sleep for $w$ time};\n\n                    \\path [line,dashed] (sod) -- node (ctss) {Continuous time series stream $Q$} (extract);\n                    \\path [line] (extract) -- node {$Q[t-w,t]$} (nnc);\n                    \\path [line] (nnc) -- (decide);\n                    \\path [line] (decide) -- node {no} (sleeps);\n                    \\path [line,dashed] (sleeps) -| (ctss);\n                    \\path [line] (decide) -- node {yes} (action);\n                    \\path [line,dashed] (action) -| (ctss);\n                \\end{tikzpicture}\n            }\n        }\n    \\end{center}\n    \\caption{Possible design for a sliding window application. The current time is stored in variable $t$. The constant\n    variables $w$ for the window size and $s$ for the step size are predefined.}\n    \\label{fig:swt}\n\\end{figure}\n\nThe explained sliding window technique uses a generic time series classifier. However, in this bachelor thesis the\ngeneric time series classifier is replaced by 1NN-DTW. A time series window $Q[t-w,t]$ is classified as an instance of\nclass $K_i$ by 1NN-DTW if the DTW distance between $Q[t-w,t]$ and the nearest neighbour is within a predefined\nclass threshold $\\epsilon_i$. 
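\nTo make the procedure concrete, the following C sketch runs this classification loop over a toy stream (a minimal\nsketch: the stream, the training set, the window size and the thresholds are hypothetical placeholders, and the DTW\ndistance is the plain dynamic-programming variant described in the next subsection).\n\n\\begin{verbatim}\n#include <stdio.h>\n#include <math.h>\n#include <float.h>\n\n#define W 4                    /* window size w (hypothetical)  */\n#define NTRAIN 2               /* one template per class        */\n\n/* Plain dynamic-programming DTW distance between two windows. */\nstatic double dtw(const double *a, const double *b) {\n    double D[W + 1][W + 1];\n    for (int i = 0; i <= W; i++)\n        for (int j = 0; j <= W; j++)\n            D[i][j] = DBL_MAX;\n    D[0][0] = 0.0;\n    for (int i = 1; i <= W; i++)\n        for (int j = 1; j <= W; j++) {\n            double best = D[i-1][j-1];\n            if (D[i-1][j] < best) best = D[i-1][j];\n            if (D[i][j-1] < best) best = D[i][j-1];\n            D[i][j] = fabs(a[i-1] - b[j-1]) + best;\n        }\n    return D[W][W];\n}\n\nint main(void) {\n    double train[NTRAIN][W] = { {0, 1, 2, 3}, {3, 2, 1, 0} };\n    double eps[NTRAIN] = { 1.0, 1.0 };     /* class thresholds */\n    double Q[] = {0, 0, 0.1, 1.2, 1.9, 3.1, 2.9, 2.0, 1.1, 0.2};\n    int T = sizeof Q / sizeof Q[0], s = 2;\n\n    for (int t = W; t <= T; t += s) {      /* window Q[t-w,t]  */\n        int bestk = -1; double bestd = DBL_MAX;\n        for (int k = 0; k < NTRAIN; k++) { /* 1NN search      */\n            double d = dtw(&Q[t - W], train[k]);\n            if (d < bestd) { bestd = d; bestk = k; }\n        }\n        if (bestd <= eps[bestk])           /* trigger event   */\n            printf(\"t=%d: class %d (DTW %.2f)\\\\n\", t, bestk, bestd);\n    }\n    return 0;\n}\n\\end{verbatim}\n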
The following subsection gives some background on DTW.\n", "meta": {"hexsha": "f14e773c51bb619d347aa78706855a7deff04618", "size": 3489, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bachelor-thesis/background_and_notation/sliding_window_technique.tex", "max_stars_repo_name": "GordonLesti/SlidingWindowFilter", "max_stars_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-06-22T09:37:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-14T11:43:53.000Z", "max_issues_repo_path": "bachelor-thesis/background_and_notation/sliding_window_technique.tex", "max_issues_repo_name": "GordonLesti/SlidingWindowFilter", "max_issues_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bachelor-thesis/background_and_notation/sliding_window_technique.tex", "max_forks_repo_name": "GordonLesti/SlidingWindowFilter", "max_forks_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-01-11T23:15:57.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-11T23:15:57.000Z", "avg_line_length": 69.78, "max_line_length": 170, "alphanum_fraction": 0.6706792777, "num_tokens": 899, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.8175744739711883, "lm_q1q2_score": 0.5743706319399031}}
{"text": "% Project Specifications\n\n\\chapter{GEMM and $col2im$}\n\nThe previous chapter discussed different ways to view the convolution operation and its cousin,\nthe transposed convolution, both conceptually and implementationally. When it comes down to implementation,\nwe have seen that both operations can be implemented with a single matrix multiplication. In other words,\nmatrix multiplication is the main computational burden of these operations. Meanwhile, real-world applications\noften involve very large matrices. Therefore, as a low-level operation, the implementation of matrix\nmultiplication is often heavily optimized.\n\n\\gls{gemm} is considered the \\textit{de facto} standard operation\ncontained in the \\gls{blas} specification. It has many implementations for\ndifferent platforms, which exhaustively utilize various optimizations like \\gls{simd} instructions,\nparallelism, cache-awareness, etc.\n\n\\gls{gemm} is defined as\n\n$$C \\leftarrow \\alpha op(A) op(B) + \\beta C,$$\n\nwhere $\\alpha, \\beta \\in \\mathbb{R}$, and $op(X)$ is either $X$ or $X^\\intercal$. In our particular case for\ntransposed convolution, $\\alpha = 1$, $\\beta = 0$, $op(X) = X$, so \\gls{gemm} reduces to the basic\nmatrix multiplication $C \\leftarrow A B$.\n\nA subtle detail in the implementation of \\gls{gemm} is the order of storage of matrix entries. There are two\ndifferent orders to store the same matrix: row-major order or column-major order. In row-major order,\nentries of rows are stored contiguously in memory, while in column-major order, entries of columns are\nconsecutive to each other in memory.\n\nA na\u00efve C implementation of \\gls{gemm} is shown in Listing \\ref{code:naivegemm}. Here,\n\\mintinline{c}{lda}, \\mintinline{c}{ldb} and \\mintinline{c}{ldc} are\nthe leading dimensions of matrices $A$, $B$ and $C$, respectively.\n\n\\begin{code}\n\\begin{minted}{c}\nfor (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n        float sum = 0;\n        for (int l = 0; l < k; l++)\n          sum += a[l*lda+i] * b[l*ldb+j];\n        c[j*ldc+i] = beta * c[j*ldc+i] + alpha * sum;\n    }\n}\n\\end{minted}\n  \\captionof{listing}{Na\u00efve C Implementation of \\gls{gemm}}\n\\label{code:naivegemm}\n\\end{code}\n\nThis implementation is straightforward and not very efficient, but it is a good starting point to guide\nus through the rest of the design.\n\nAnother operation used by transposed convolution is $col2im$. It cherry-picks the elements computed by\n\\gls{gemm} and places them in the destination image. 
To describe $col2im$, it is necessary to introduce\nthe variables used for transposed convolution.\n\n\\begin{itemize}\n  \\item \\mintinline{c}{inputHeight} and \\mintinline{c}{inputWidth} are the height and width of each input\n    channel (sometimes also called input plane)\n  \\item \\mintinline{c}{nInputChannel} is the number of input channels in total, thus the input is a\n    $3D$ tensor with dimensions \\mintinline{c}{(nInputChannel, inputHeight, inputWidth)}\n  \\item With \\mintinline{c}{m = inputHeight * inputWidth} and \\mintinline{c}{k = nInputChannel}, the input\n    tensor is stored as a matrix $A \\in \\mathbb{R}^{m \\times k}$\n  \\item \\mintinline{c}{kernelH} and \\mintinline{c}{kernelW} are the height and width of each kernel channel\n  \\item Each kernel is also a $3D$ tensor with dimensions \\mintinline{c}{(nInputChannel, kernelH, kernelW)}\n  \\item \\mintinline{c}{nOutputChannel} is the number of output channels\n  \\item There are \\mintinline{c}{nOutputChannel}'s of kernels, each responsible for producing an output channel\n  \\item With \\mintinline{c}{k = nInputChannel} and \\mintinline{c}{n = nOutputChannel * kernelH * kernelW},\n    the kernel tensor is stored as a matrix $B \\in \\mathbb{R}^{k \\times n}$\n\\end{itemize}\n\nThe code of $col2im$ is given in listing \\ref{code:col2im}:\n\n\\begin{code}\n\\begin{minted}{c}\n// content of data_im set to 0 already\nconst int n = nOutputChannel * kernelH * kernelW;\nfor (int j = 0; j < n; ++j) {\n    int w_offset = j % kernelW;\n    int h_offset = (j / kernelW) % kernelH;\n    int c_im = j / kernelH / kernelW;\n    for (int h_col = 0; h_col < inputHeight; ++h_col) {\n        for (int w_col = 0; w_col < inputWidth; ++w_col) {\n            int h_im = h_col * strideH - padH + h_offset * dilationH;\n            int w_im = w_col * strideW - padW + w_offset * dilationW;\n            if (h_im >= 0 && h_im < outputHeight &&\n                w_im >= 0 && w_im < outputWidth) {\n                data_im[(c_im * outputHeight + h_im) * outputWidth + w_im] +=\n                    data_col[(j * inputHeight + h_col) * inputWidth + w_col];\n            }\n        }\n    }\n}\n\\end{minted}\n\\captionof{listing}{C Implementation of $col2im$}\n\\label{code:col2im}\n\\end{code}\n\nHere \\mintinline{c}{data_im} is the target image while \\mintinline{c}{data_col} is the result of \\gls{gemm}, namely, \\mintinline{c}{c} in listing \\ref{code:naivegemm}.\nThe outermost loop iterates through the columns of $C \\in \\mathbb{R}^{m \\times n}$. The two nested loops\niterate the rows of $C$, since \\mintinline{c}{m = inputHeight * inputWidth}. \\mintinline{c}{c_im} is the\nindex of the channel in the output image. %\\mintinline{c}{h_offset} and \\mintinline{c}{w_offset} are the offset in each \n\nFor transposed convolution, \\mintinline{c}{data_im} is typically smaller than\n\\mintinline{c}{data_col}. This leads to a key optimization idea: it is possible to merge \\gls{gemm} and\n$col2im$ together and compute transposed convolution by doing \\gls{gemm} sparsely. 
In doing so,\n\\mintinline{c}{data_col} can be abandoned and the results will be directly placed in \\mintinline{c}{data_im},\nthat is, there is no need for an extra large buffer for intermediate results.\n\n\\begin{code}\n\\begin{minted}{c}\nint j = 0;\nfor (int c_im = 0; c_im < nOutputChannel; c_im++) {\n    for (int h_offset = 0; h_offset < kernelH; h_offset++) {\n        for (int w_offset = 0; w_offset < kernelW; w_offset++) {\n            int i = 0;\n            for (long h_col = 0; h_col < inputHeight; h_col++) {\n                for (long w_col = 0; w_col < inputWidth; w_col++) {\n                    int h_im = h_col * strideH - padH + h_offset * dilationH;\n                    int w_im = w_col * strideW - padW + w_offset * dilationW;\n                    if (h_im >= 0 && h_im < outputHeight &&\n                        w_im >= 0 && w_im < outputWidth) {\n                        float sum = 0;\n                        for (long l = 0; l < k; l++)\n                            sum += a[l*lda + i] * b[l*ldb + j];\n                        int idx = (c_im * outputHeight + h_im) *\n                                  outputWidth + w_im;\n                        data_im[idx] += sum;\n                    }\n                    i++;\n                }\n            }\n            j++;\n        }\n    }\n}\n\\end{minted}\n\\captionof{listing}{C Implementation of $transposed\\_convolution$}\n\\label{code:transposed_convolution_in_c}\n\\end{code}\n\nThe key to understanding this merged code is to notice that the for-loop indexed by \\mintinline{c}{j} in\n\\mintinline{c}{gemm} can be split into three nested for-loops indexed by \\mintinline{c}{c_im},\n\\mintinline{c}{h_offset} and \\mintinline{c}{w_offset}.\nSimilarly, the for-loop indexed by \\mintinline{c}{i} in \\mintinline{c}{gemm} is equivalent to the two\nnested for-loops indexed by \\mintinline{c}{h_col} and \\mintinline{c}{w_col}, same as in \\mintinline{c}{col2im}.\n\nThis last C code in listing \\ref{code:transposed_convolution_in_c} serves as the blueprint for implementing\ntransposed convolution on \\gls{fpga} with Verilog. It can be seen from the code listing\nthat the output channels can be computed in groups or even channel by channel.\nThis is a convenient fact when the weights of a layer cannot fit into the\n\\gls{bram} on the \\gls{fpga} chip all at once.
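\n\nAs a rough illustration of the saving (the layer dimensions here are hypothetical and chosen only to make the\narithmetic concrete), consider $64$ input channels of size $32 \\times 32$, \\mintinline{c}{nOutputChannel} $= 64$,\na $4 \\times 4$ kernel, stride $2$ and padding $1$. Then \\mintinline{c}{data_col} would hold\n$m \\times n = (32 \\cdot 32) \\times (64 \\cdot 4 \\cdot 4) = 1024 \\times 1024$ values, while with the usual\ntransposed-convolution output size $(32 - 1) \\cdot 2 - 2 \\cdot 1 + 4 = 64$ the output image\n\\mintinline{c}{data_im} holds only $64 \\times 64 \\times 64 = 262{,}144$ values, a quarter of the\n$1{,}048{,}576$ intermediate values that the merged version never materializes.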
\n\n\\clearpage %force the next chapter to start on a new page. Keep that as the last line of your chapter!\n", "meta": {"hexsha": "e67844182539ac1740ea5b2825ef6be47b090d07", "size": 7818, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/gemm.tex", "max_stars_repo_name": "lambdalainen/metropolia-thesis-latex", "max_stars_repo_head_hexsha": "d7e705ad24f1f8065b2e7f026db5fdc90a7c8b3a", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/gemm.tex", "max_issues_repo_name": "lambdalainen/metropolia-thesis-latex", "max_issues_repo_head_hexsha": "d7e705ad24f1f8065b2e7f026db5fdc90a7c8b3a", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/gemm.tex", "max_forks_repo_name": "lambdalainen/metropolia-thesis-latex", "max_forks_repo_head_hexsha": "d7e705ad24f1f8065b2e7f026db5fdc90a7c8b3a", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.7662337662, "max_line_length": 167, "alphanum_fraction": 0.6774111026, "num_tokens": 2194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8723473846343393, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5743687904595093}}
{"text": "% numerikk.tex\n% Part of VertEgg documentation\n%\n\n\n\\chapter{Numerical Methods}\n\n\n\nThe numerical solutions are computed on a fixed equidistant grid.  The\nnumber of grid cells is denoted $N$ and the cell height is $\\Delta\nz$. The grid is staggered as shown in fig.~\\ref{fig:grid}.  The\n$z$-axis points upwards.\n \nThe cell interfaces, the \\emph{flux points}\\index{flux points}, are \n\\begin{equation}\\label{eq:flushpt}\n  z_i = (1-i) \\Dz \\quad \\text{for} \\quad i = 1, \\dots, N+1 .\n\\end{equation}\nThe cell centers, the \\emph{concentration points}\\index{concentration\npoints} or the \\emphi{egg points}, are\n\\begin{equation}\\label{eq:eggpt}\n  \\bar{z}_i = \\frac{z_i + z_{i+1}}{2} = (\\half-i) \\Dz\n            \\quad \\text{for} \\quad i = 1, \\dots, N .\n\\end{equation}\n\n\\begin{figure}[h]\n\\begin{center}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(65,20)\n  \\multiput(10.0, 8.5)(20,0){3}{\\line(0, 1){3}}\n  \\put(20.0,10.0){\\makebox(0,0){$\\times$}}\n  \\put(40.0,10.0){\\makebox(0,0){$\\times$}}\n  \\put(10.0, 6.0){\\makebox(0,0){$z_{i+1}$}}\n  \\put(30.0, 6.0){\\makebox(0,0){$z_{i}$}}\n  \\put(50.0, 6.0){\\makebox(0,0){$z_{i-1}$}}\n  \\put(20.0, 6.0){\\makebox(0,0){$\\bar{z}_{i}$}}\n  \\put(40.0, 6.0){\\makebox(0,0){$\\bar{z}_{i-1}$}}\n  \\put(10.0,14.0){\\makebox(0,0){$F_{i+1}$}}\n  \\put(30.0,14.0){\\makebox(0,0){$F_{i}$}}\n  \\put(50.0,14.0){\\makebox(0,0){$F_{i-1}$}}\n  \\put(20.0,14.0){\\makebox(0,0){$\\phi_{i}$}}\n  \\put(40.0,14.0){\\makebox(0,0){$\\phi_{i-1}$}}\n  \\put(0,10){\\vector(1,0){60}}\n\\end{picture}\n\\end{center}\n\\caption{Horizontal view of the vertical grid}\\label{fig:grid}\n\\end{figure}\n\nThe variables are discretized in the following ways. The egg\nconcentration $\\phi_i$ represents the mean concentration in\ncell $i$, that is\n\\begin{equation}\\label{eq:cellav}\n  \\phi_i \\Dz = \\int_{z_{i+1}}^{z_i} \\!\\phi(z)\\,dz, \n                 \\quad i = 1, \\dots ,N .\n\\end{equation}\nThe $\\phi_i$ values may be considered living on the egg points\n$\\bar{z}_i$ as in fig.~\\ref{fig:grid}. As the flux values are computed\nfrom the vertical velocity and the eddy diffusivity,\n$F_i$, $w_i$ and $K_i$ are taken as point values on the flux points\n$z_i$ for $i = 1, \\dots, N+1$. The source terms $P_i$ and $\\alpha_i$\nare cell averages and live on the egg points.\n\n\n\\section{The Transient Problem}\\label{sec:numtrans}\n\nThe numerical solution of the convection-diffusion\nequation~(\\ref{eq:difflaw}) is covered by several authors.  A classic\nsource is Roache \\shortcite{roac72}. A newer source is chapter 9 in\nthe book by Fletcher \\shortcite{flet91}. Recently a whole book, edited\nby Vreugdenhil and Koren \\shortcite{vreu93}, has been devoted to this\nequation.  For conservative methods, as will be used here, the book by\nLeVeque \\shortcite{leve92} is also recommended.\n\nFor our problem, convection and diffusion of concentration of fish\neggs, some properties of the numerical method are important.  Fish eggs\nare usually found in a subrange of the water column with values close\nto zero outside this range. And of course a real concentration does not\nhave negative values.  The method must therefore be\n\\emph{positive}, that is, it must not create\nany negative values.\n\nIn the absence of source and sink terms, eggs are not created or\ndestroyed. The vertically integrated concentration is therefore constant\nin time as shown in section~\\ref{sec:vertint}.  
The numerical method\nshould have the same property; it should be \\emph{mass\nconserving}\\index{mass conserving method}. \n\nOf course the numerical solution should be as close as possible to the\nreal solution. That is, the \\emph{accuracy} of the method should be\ngood. This concept includes low artificial diffusion and little or no\ndevelopment of wiggles.\n\nA convection -- diffusion problem is characterised by the following\nnon-dimensional numbers, all defined at the interior flux points,\n$z_i$ for $i = 2, \\dots, N$.\n\\begin{align}\n  & \\text{Courant number}         & c_i & = w_i\\frac{\\Dt}{\\Dz} \\\\\n  & \\text{Diffusive parameter}    & s_i & = K_i \\frac{\\Dt}{\\Dz^2} \\\\\n  & \\text{Cell Peclet number}     & \\Pcell_i & = \n       \\frac{\\abs{w_i} \\Dz}{K} = \\frac{\\abs{c_i}}{s_i}\n\\end{align}\n\\index{Courant number}\\index{diffusive parameter}\\index{cell Peclet number}\nIf the coefficients $w$ and $K$\nare constant, the subscripts are simply dropped.\n\nFor fish eggs, the terminal velocity is on the order of a couple of\nmillimetres per second. The velocity can be positive (\\emph{pelagic}\neggs)\\index{pelagic eggs}, negative (\\emph{benthic})\\index{benthic\neggs} or change sign some place in the water column\n(\\emph{mesopelagic})\\index{mesopelagic eggs}. The vertical eddy\ndiffusion shows a very large range of variation, from more than\n$10^{-2}$ \\sqmps\\ in the upper mixing layer to $10^{-5} \\sqmps$\nin the pycnocline layer.  With a grid size of the\norder of one meter, the Cell Peclet number $\\Pcell$ takes values from\n$10^{-1}$ to $10^2$. This means that the process may be dominated by\ndiffusion in the upper layer and convection below.\n\nThe numerical methods considered here are \\emph{finite difference} or\nreally \\emph{finite volume} schemes given in \\emph{conservation\nform}\\index{conservation form} (or \\emph{flux\nformulation}\\index{flux formulation}).  These methods are\nconceptually natural, working directly with the integral form of the\nconservation law~(\\ref{eq:intlaw}).  More precisely, the\nequation~(\\ref{eq:intlaw}) is approximated by\n\\begin{equation}\\label{eq:numlaw}\n  (\\phi_i^{n+1} - \\phi_i^n) \\Dz = \n           (F_{i+1}^n - F_i^n) \\Dt + Q_i^n \\Dz \\Dt ,\n           \\quad i = 2, \\dots, N\n\\end{equation}\nwhere the superscripts indicate the time step. \nThe boundary conditions (\\ref{eq:boundcond}) simply become\n$F_1 = F_{N+1} = 0$. The various methods\ndiffer in their estimation of the total flux function $F_i = \\Fconv_i\n+ \\Fdiff_i$ and the source term $Q_i$.  One advantage of this flux\nformulation is that mass conservation is automatically fulfilled, as\nthe fluxes cancel out during vertical integration.\n\nWithout source term the total concentration is conserved. Therefore\nthe local concentration values cannot become arbitrarily large, unless\nthere are negative values in some other cells. In other words,\npositivity is a sufficient (but not necessary) condition for stability\nof such schemes.\n\nIn the following several methods are described.  They are implemented in\nthe toolbox as \\edbi{ftcs}, \\edbi{lwendrof}, \\edbi{upstream},\n\\edbi{posmet}, and \\edbi{minlim} respectively. Their performance\nwill be studied in the example section~\\ref{sec:sens}. Several methods\nfor the same problem cause a new problem: which method to choose. This\ndepends on the range of $\\Pcell$ values in the problem. The general\nadvice is to try Lax-Wendroff first. This is an accurate and fast method\nwhen applicable. 
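\nTo get a feeling for the numbers (the parameter values are chosen here only for illustration), take a terminal\nvelocity $w = 10^{-3}$ m/s, $K = 10^{-4}$ \\sqmps, $\\Dz = 1$ m and $\\Dt = 100$ s.  Then\n\\begin{equation}\n  c = w\\frac{\\Dt}{\\Dz} = 0.1, \\quad\n  s = K \\frac{\\Dt}{\\Dz^2} = 10^{-2}, \\quad\n  \\Pcell = \\frac{\\abs{c}}{s} = 10 ,\n\\end{equation}\nso the FTCS and Lax-Wendroff positivity conditions derived below ($\\Pcell \\le 2$ and\n$\\Pcell \\le 2/(1-\\abs{c})$ respectively) fail, while the upstream scheme remains positive but\ndiffusive.  This is the situation that motivates the nonlinear schemes of\nsection~\\ref{sec:nonlin}.  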
If oscillations occur, go for the flux-limited\nmethods or use higher spatial resolution.  In both cases more computing\ntime is required. \n\n\n\n\\subsection{Linear conservative schemes}\\label{seq:linsch}\n\nThe linear conservative schemes considered here estimate the total \nflux function in the form \n\\begin{equation}\\label{eq:alphabeta}\n  F_i = \\alpha_i \\phi_i + \\beta_i \\phi_{i-1} ,\n\\end{equation}\nwhere the $\\alpha_i$-s and $\\beta_i$-s are independent of \nthe $\\phi_i$-s. Sometimes this will be used in non-dimensional form\n\\begin{equation}\\label{eq:ab}\n  F_i = \\frac{\\Dz}{\\Dt} (a_i \\phi_i + b_i \\phi_{i-1}) .\n\\end{equation}\nSome general theory for this kind of numerical scheme is presented\nin Appendix~\\ref{app:lincons}.\n\n\n\\subsubsection{The FTCS scheme}\\index{FTCS scheme}\n\nThe simplest scheme is often called FTCS (forward time central space)\nafter the differencing. This scheme is described for instance in\nchapter 9.4 in \\cite{flet91}. Here both flux\ncomponents are centred around $z = z_i$,\n\\begin{equation}\n  \\Fconv_i = w_i \\frac{\\phi_{i-1} + \\phi_i}{2}\n\\end{equation}\n\\begin{equation}\\label{eq:centdifflux}\n  \\Fdiff_i = - K_i \\frac{\\phi_{i-1} - \\phi_i}{\\Dz} .\n\\end{equation}\nIn the notation of equation (\\ref{eq:alphabeta}) the method is given by\n\\begin{equation}\n   \\alpha_i = \\frac{1}{2} w_i + \\frac{K_i}{\\Dz} , \\quad \n   \\beta_i  = \\frac{1}{2} w_i - \\frac{K_i}{\\Dz} ,\n\\end{equation}\nand non-dimensionalised by\n\\begin{equation}\n   a_i = \\frac{1}{2} c_i + s_i , \\quad \n   b_i = \\frac{1}{2} c_i - s_i .\n\\end{equation}\n\nThe following positivity condition for the FTCS scheme follows from\nthe general condition in Appendix~\\ref{app:pos}.\n\\begin{gather}\n     \\abs{c_i} \\le 2 s_i,  \\quad i = 2, \\dots, N \\\\\n      s_2 \\le 1 + \\half c_2 \\\\\n  s_i + s_{i+1} \\le 1 + \\half (c_i - c_{i+1}), \\quad i = 2, \\dots, N-1 \\\\ \n      s_N \\le 1 - \\half c_N\n\\end{gather}\nThe first condition can be restated as $\\Pcell \\le 2$.\n\nWith constant coefficients, the positivity conditions reduce to\n\\begin{equation}\n  \\abs{c} \\le 2 s \\le 1,\n\\end{equation}\nwhich is stronger than the stability condition\n\\begin{equation}\n  c^2 \\le 2 s \\le 1 .\n\\end{equation}\n\nThe numerical ``diffusion'' is negative, \n\\begin{equation}\n  K_{num} = - \\half w^2 \\Dt\n\\end{equation}\nwhich is negligible if $c^2 \\ll 2s$.\n\nThis method is implemented in the toolbox as the function \\edbi{ftcs}.\n\n\\subsubsection{The Lax-Wendroff Scheme}\\index{Lax-Wendroff scheme}\n\nAn alternative is the Lax-Wendroff scheme as analysed for instance in\n\\cite{vreu93b}. The scheme compensates for the negative numerical\n``diffusion'' in FTCS.  The diffusive flux is still given by
The diffusive flux is still given by\nequation~(\\ref{eq:centdifflux}) but the convective flux is modified\n\\begin{equation}\\label{eq:lwflux}\n  \\Fconv_i = w_i \\frac{\\phi_{i-1} + \\phi_i}{2}\n      - \\half w_i^2 \\frac{\\Dt}{\\Dz} \n       (\\phi_{i-1} -\\phi_i) .\n\\end{equation}\nIn the notation of equation (\\ref{eq:alphabeta}) the Lax-Wendroff\nmethod is given by\n\\begin{equation}\n   \\alpha_i = \\frac{1}{2} w_i(1+w_i\\frac{\\Dt}{\\Dz}) + \\frac{K_i}{\\Dz} , \\quad \n   \\beta_i  = \\frac{1}{2} w_i(1-w_i\\frac{\\Dt}{\\Dz}) - \\frac{K_i}{\\Dz} ,\n\\end{equation}\nand non-dimensionalised by\n\\begin{equation}\n   a_i = \\frac{1}{2} c_i(1+c_i) + s_i , \\quad \n   b_i = \\frac{1}{2} c_i(1-c_i) - s_i .\n\\end{equation}\n\nFrom Appendix~\\ref{app:pos} the conditions for positivity are\n\\begin{gather}\n     \\abs{c_i} \\le 2 s_i + c_i^2,  \\quad i = 2, \\dots, N \\\\\n     s_i + s_{i+1} \\le 1 + \\half (c_i - c_{i+1}) - \\half(c_i^2 + c_{i+1}^2),\n               \\quad i = 2, \\dots, N-1.\n\\end{gather}\nThe first condition can be restated as \n\\begin{equation}\\label{eq:lwpcell}\n    \\Pcell \\le \\frac{2}{1-\\abs{c_i}} .\n\\end{equation}\n\nWith constant coefficients, the positivity\nconditions reduce to\n\\begin{equation}\n \\abs{c} \\le c^2 + 2 s \\le 1,\n\\end{equation}\nwhich is stronger than the stability condition\n\\begin{equation}\n  c^2 + 2 s \\le 1 .\n\\end{equation}\n\nThere is no second order numerical diffusion in the transient\nsolution.  Wiggles will not occur if the positivity\ncondition~(\\ref{eq:lwpcell}) is fulfilled.\nThis method is implemented in the toolbox as the function \\edbi{lwendrof}.\n\n\\subsubsection{The Upstream Scheme}\\index{upstream scheme}\n\nA non-centred alternative is the upstream method.  This is the method\nused by West\\-g{\\aa}rd \\shortcite{west89}.  The upstream scheme is\nstudied in all books on the subject.  The same diffusive flux\n(\\ref{eq:centdifflux}) is used but the convective flux is estimated on\nthe inflow side.\n\\begin{equation}\\label{eq:usflux}\n  \\Fconv_i = w_i^+ \\phi_i + w_i^- \\phi_{i-1}\n\\end{equation}\nwhere \n\\begin{equation}\n   x^+ = \\half (x + |x|) = \n       \\begin{cases}\n          x& \\text{if $x \\ge 0$},\\\\\n          0& \\text{if $x  < 0$}.\n       \\end{cases}\n\\end{equation}\nand $x^- = x - x^+$.\nIn the notation of equation (\\ref{eq:alphabeta}) the method is given by\n\\begin{equation}\n   \\alpha_i =  w_i^+ + \\frac{K_i}{\\Dz} , \\quad \n   \\beta_i  =  w_i^- - \\frac{K_i}{\\Dz} ,\n\\end{equation}\nand non-dimensionalised by\n\\begin{equation}\n   a_i = c_i^+ + s_i , \\quad \n   b_i = c_i^- - s_i .\n\\end{equation}\n\nFrom Appendix~\\ref{app:pos} the positivity condition is simply\n\\begin{equation}\\label{eq:uspos}\n  c_i^+ - c_{i+1}^- + s_i + s_{i+1} \\le 1 .\n\\end{equation}\nWith constant coefficients this reduces to\n\\begin{equation}\n  \\abs{c} + 2 s \\le 1 .\n\\end{equation}\nwhich is also the stability condition.\n\nThe numerical diffusion in the upstream scheme may be quite large\n\\begin{equation}\n  K_{num} = \\half w \\Dz (1-\\abs{c})\n\\end{equation}\nThis is negligible if $K_{num} \\ll K$, or equivalently\n\\begin{equation}\n  \\Pcell \\ll \\frac{2}{1 - \\abs{c}} .\n\\end{equation}\n\nThe upstream scheme is implemented in the toolbox as the function\n\\edbi{upstream}.\n\n\\subsection{Nonlinear methods}\\label{sec:nonlin}\\index{non\nlinear method}\n\nThe Lax-Wendroff scheme is second order in space but may develop\nwiggles and negative concentration values. 
The upstream method is\npositive but may be too diffusive. These are fundamental problems with\nlinear schemes.  Several nonlinear schemes have been developed to\novercome these problems.  A general idea is to combine the\nLax-Wendroff and upstream methods to produce positive schemes with low\nnumerical diffusion. Overviews of such methods are given by LeVeque\n\\shortcite{leve92} and in the collection \\citep{vreu93}.\n\nEgg distribution problems are somewhat non-symmetrical. It is\nimportant to have high accuracy near maxima where most of the eggs are\nfound. Local minima are less interesting and they seldom occur in\nthe interior of the water column. For instance, with the steady state\nsolution (\\ref{eq:sstatesol}) without source terms, a local minimum in\nthe interior can occur only in statically unstable situations, when the egg\nis lighter than the water above and heavier than the more saline water\nbelow.\n\nThe methods below are not total variation diminishing (TVD), unlike some of\nthe more advanced non-linear methods.  Instead of a finely tuned\nlinear combination, the schemes use simple on/off mechanisms to switch\nbetween the Lax-Wendroff and upstream fluxes.  On the other hand, the\nasymmetry above is exploited. The schemes have high accuracy at maxima\nbecause the unmodified Lax-Wendroff flux is used in this situation.\n\n\n\n\\subsubsection{The positive method}\\index{positive scheme}\n\nThis is a simple scheme, mostly using the Lax-Wendroff\nfluxes but limiting to the upstream fluxes where Lax-Wendroff produces\nnegative concentration values. The algorithm consists of three steps,\n\\begin{enumerate}\n\\item Compute the Lax-Wendroff fluxes including diffusivity by\n   formul{\\ae} (\\ref{eq:lwflux}) and (\\ref{eq:centdifflux}).\n\\item Apply the fluxes to compute a test distribution\n\\begin{displaymath}\n  \\phi_i^{test} = \\phi_i - \\frac{\\Dt}{\\Dz}(F_i - F_{i+1}) .\n\\end{displaymath}\n\\item Where $\\phi_i^{test} < 0$, recompute $F_i$ and $F_{i-1}$ by the\n  upstream formulation (\\ref{eq:usflux}) and diffusion (\\ref{eq:centdifflux}).\n\\end{enumerate}\n\nPositivity of either the upstream or the Lax-Wendroff scheme implies\npositivity of the flux-limited scheme. The positivity\ncondition~(\\ref{eq:uspos}) is therefore a sufficient (but not\nnecessary) condition for positivity of the combined scheme.\n\nThe scheme is implemented in the toolbox as the function \\edbi{posmet}.\n\n\n\n\\subsubsection{The minimum limiting method}\n\\index{minimum limiting scheme}\n\nAlthough the method above is positive, it may create wiggles around a\npositive value.  To improve on this situation a slight variant is\nproposed.  The new criterion for switching from Lax-Wendroff to\nupstream flux is the occurrence of a local minimum.  Since the first\nnegative value must be a local minimum, this prevents the method from\ncreating negative values (if the upstream scheme is stable).\n\nSteps 1 and 2 in the algorithm are identical to the positive scheme.\nThe third step is modified to \n\\begin{quote}\n  3'. 
Where $\\phi_i^{test} < \\min (\\phi_{i-1}^{test},\\phi_{i+1}^{test})$, \n      recompute $F_i$ and $F_{i-1}$ by the\n      upstream formulation (\\ref{eq:usflux}) and diffusion \n      (\\ref{eq:centdifflux}).\n\\end{quote}\n\nThis scheme is implemented in the toolbox as the function \\edbi{minlim}.\n\n\\subsection{The Source Term}\n\nNeglecting the fluxes for the moment, the transport equation\n\\pref{eq:difflawfull} reduces to an ordinary differential equation\n\\begin{equation}\\label{eq:noflux}\n  \\Od{\\phi}{t} = P - \\alpha \\phi .\n\\end{equation}\nThe numerical scheme \\pref{eq:numlaw} reduces to\n\\begin{equation}\\label{eq:numnoflux}\n  \\phi_i^{n+1} - \\phi_i^n = Q_i^n \\Dt .\n\\end{equation}\nThe exact solution of (\\ref{eq:noflux}) with\nconstant coefficients is\n\\begin{equation}\n  \\phi^{n+1} = \\e^{-\\alpha \\Dt} \\phi^n + \n                 (1 - \\e^{- \\alpha \\Dt}) \\frac{P}{\\alpha} .\n\\end{equation}\nThis is in the form \\pref{eq:numnoflux} with\n\\begin{equation}\\label{eq:kildediskret}\n  Q_i = (P_i - \\alpha_i \\phi_i) \n         \\frac{1-\\e^{- \\alpha_i \\Dt}}{\\alpha_i \\Dt} .\n\\end{equation}\nThis scheme will never produce negative concentration values.\n\nCombining this with the convection-diffusion solvers above is\nstraightforward. The combined positivity condition becomes more\nrestrictive as discussed in Appendix~\\ref{app:posloss}. \nIn general the number $1$ appearing as the upper limit must be replaced\nby $\\exp(-\\alpha \\Dt)$.  For instance, with constant coefficients\nthe positivity conditions for the familiar linear schemes become\n\\begin{align}\n  & \\text{FTCS}         & \\abs{c} \\le 2 s & \\le \\e^{-\\alpha \\Dt} , \\\\\n  & \\text{Lax-Wendroff} & c^2 + 2 s       & \\le \\e^{-\\alpha \\Dt} , \\\\\n  & \\text{Upstream}     &  \\abs{c} + 2 s  & \\le \\e^{-\\alpha \\Dt} .\n\\end{align}\n\n\n  \n\\section{The Stationary Problem}\\label{sec:statprob}\n\nIn many problems only the steady state solution is needed.  Also for\ntransient problems, a fast and accurate way of reaching the stationary\nsolution is of interest.\n\nIf the coefficients $w$ and $K$ are constant, the exact\nsolution~(\\ref{eq:eggsact}) can be used directly. With negative\nvelocity and low diffusivity, $m = w/K \\ll 0$, the formula may overflow.  This\ncan be avoided by using the symmetry relation~(\\ref{eq:symmetry}) for\nnegative $m$.  This solution is computed by the function \\edbi{eggsact}\nin the toolbox. For the discretized solution to have the correct\nvertical integral and be comparable to the numerical solutions, cell\naverages\n\\begin{equation}\n  \\phi_i = \\frac{\\Phi}{\\Dz}\n           \\frac{\\e^{m z_i} - \\e^{m z_{i+1}}}{1 - \\e^{-mH}} \n\\end{equation}\nare better. This is also computed by \\edbi{eggsact}.\n\n\nGeneral $w$ and $K$ can be regarded as constant on the grid cells.\nSuppose $w$ and $K$ and their quotient $m$ live on the flux points as\nin the transient problem. Form the cell average $\\bar{m}_i = (m_i +\nm_{i+1}) / 2$. The cell averaged solution becomes\n\\begin{equation}\n  \\bar{\\phi}_i = \n     \\frac{C_i}{\\Dz} \n     \\frac{\\e^{\\bar{m}_i z_i} - \\e^{\\bar{m}_i z_{i+1}}}{\\bar{m}_i}\n\\end{equation}\nwhere the $C_i$-s are computed by continuity\n\\begin{equation}\n  \\phi_i(z_{i+1}) = \\phi_{i+1}(z_{i+1})\n\\end{equation}\nand the vertical integral\n\\begin{equation}\n  \\Dz \\sum_{i = 1}^N \\bar{\\phi}_i = \\Phi .\n\\end{equation}\nWith low mixing and high sinking velocity the exponential terms may\noverflow. The computation of the $C_i$-terms may overflow.  
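\n\nAs an illustration of these cell averages in the constant-coefficient case, consider the following sketch (without the overflow safeguards discussed next; the names are ours, and the grid is assumed to run from $z = 0$ at the surface to $z = -H$ at the bottom):\n\\begin{verbatim}\nimport numpy as np\n\ndef stationary_cell_averages(Phi, m, H, N):\n    # Cell averages of the exponential steady state profile\n    z = -np.linspace(0.0, H, N + 1)\n    dz = H / N\n    num = np.exp(m * z[:-1]) - np.exp(m * z[1:])\n    return (Phi / dz) * num / (1.0 - np.exp(-m * H))\n\\end{verbatim}\nBecause the numerators telescope, the discrete vertical integral of the result is exactly $\\Phi$.\n\n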
The last\nproblem is overcome by working with the logarithms. For the first\nproblem the $C_i$-terms are renormalised by finding a suitable value for\n$C_1$. The method is implemented in the toolbox as the function\n\\edbi{sstate}.\n\n\n%\\section{Particle Methods}\n\n", "meta": {"hexsha": "9afa15b2bd96184b255dbf81289e991e0ab2c58e", "size": 19138, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Rapport/numerikk.tex", "max_stars_repo_name": "bjornaa/Vertegg", "max_stars_repo_head_hexsha": "836ed0857b2f6bc1a7399a20b28b3e4320810b15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Rapport/numerikk.tex", "max_issues_repo_name": "bjornaa/Vertegg", "max_issues_repo_head_hexsha": "836ed0857b2f6bc1a7399a20b28b3e4320810b15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Rapport/numerikk.tex", "max_forks_repo_name": "bjornaa/Vertegg", "max_forks_repo_head_hexsha": "836ed0857b2f6bc1a7399a20b28b3e4320810b15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.378600823, "max_line_length": 78, "alphanum_fraction": 0.7158010241, "num_tokens": 6069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5742608282562359}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[]{algorithm2e}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage[hyphens]{url}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\n\\definecolor{listinggray}{gray}{0.9}\n\\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}\n\\lstset{\n\tbackgroundcolor=\\color{lbcolor},\n\ttabsize=4,\n\tlanguage=C++,\n\tcaptionpos=b,\n\ttabsize=3,\n\tframe=lines,\n\tnumbers=left,\n\tnumberstyle=\\tiny,\n\tnumbersep=5pt,\n\tbreaklines=true,\n\tshowstringspaces=false,\n\tbasicstyle=\\footnotesize,\n\t%  identifierstyle=\\color{magenta},\n\tkeywordstyle=\\color[rgb]{0,0,1},\n\tcommentstyle=\\color{Darkgreen},\n\tstringstyle=\\color{red}\n}\n\n\\begin{document}\n\n\\title{Project 03 CS-790}\n\\author{Brandon Bluemner}\n\\date{2017}\n\\maketitle\n% ================================================\n% Abstract\n% ================================================\n\\begin{abstract}\nAlgorithmic approach to the Matrix Shifting Puzzle\n\\end{abstract}\n\n% =================================================\n% Overview\n% =================================================\n\\section{Overview}\n\\subsection{Matrix shifting Puzzle}\nLet $M$ be an $n \\times n$ array of numbers 1 through $n^2$,\neach move $rotate(row,p)$ or $rotate(col,p)$ cost p s.t. $p \\in [0...n-1]$\nrotate would yield two options, rotate left or rotate right, which will \nhave an impact on the out come of the algorithm. The objective is to find\nthe minimum cost from start state $\\Rightarrow$ goal state.\n\\section{Implementation}\n% =================================================\n% Heuristic function\n% =================================================\n\\subsection{Heuristic function}\nThe code implements a relative distance.\nTrying to find the approximate distance to gaol.\nThis is done by finding the distance from the two points of the grid\nstart at $(current_x,current_y) \\rightarrow (goal_x,goal_y)$. \nThe sum absolute value of the difference with respect to the partials is\nthe used to get the current value. (see Figure \\ref{fig:h_func})\n\\begin{figure}\n\\begin{lstlisting}\n\tint cost = 0;\n\tint temp[2]={0,0};\n\tfor(int i=0; i < current.size(); i++ ){\n\t\tfor(int j=0; j< current.at(i).size(); j++ ){\n\t\t\tget_goal_position(current[i][j], goal, temp);\n\t\t\tint dx = temp[0]-i;\n\t\t\tint dy = temp[1]-j;\n\t\t\tcost += std::abs(dx) +  std::abs(dy);\n\t\t}\n\t}\n\treturn cost;\n\\end{lstlisting}\n\\caption{Heuristic function}\n\\label{fig:h_func}\n\\end{figure}\n\nAnther attempt was made on the same code base with the following \nchange to line 8 of Figure \\ref{fig:h_func} as shown in Figure \\ref{fig:h_func_1}\n\\begin{figure}\n\\begin{lstlisting}\n\t\t\tcost +=  ((int) (std::sqrt( dx*dx + dy*dy )))\n\t\n\\end{lstlisting}\n\\caption{Heuristic function Change}\n\\label{fig:h_func_1}\n\\end{figure}\nThis yield a worst result and significate performance impact, which if you consider\nthe math abs can be simplified by using a bit ``hack'' (see Figure \\ref{fig:bit_twiddle}) which\ncan eliminate computational steps compared to takeing the square root. 
\n\\begin{figure}\n\t\\begin{lstlisting}\n\tint my_abs(int x)\n\t{\n\t\tint s = x >> 31;\n\t\treturn (x ^ s) - s;\n\t}\n\t\\end{lstlisting}\n\t\\caption{Example abs implementation}\n\t\\label{fig:bit_twiddle}\n\\end{figure}\n% =================================================\n% Hashing function\n% =================================================\n\\subsection{Hashing}\nA problem that arises with any implementation of a board-state\ngame is how to represent a state with the minimum amount of data.\nOne solution is hashing, i.e. using a number or string to represent a state.\nThis implementation uses a custom hashing method based on Zobrist \nhashing \\cite{Zobrist} used in chess-like board games. This hash code is\nused to prevent recalculation of the heuristic function and to avoid getting into\na cycle in the algorithm. \n\\begin{figure}\n\\begin{lstlisting}\nlong long int get_hash_value(std::vector<std::vector<int>> &matrix){\n\tlong long int result =0;\t\n\tfor(int i=0; i< matrix.size();i++){\n\t\tfor(int j=0; j<matrix[i].size(); j++){\n\t\t\tauto row = (long long int)  ( (i+1) * std::pow(10, (matrix.size()-1 ) *2));\n\t\t\tauto col = (long long int)  ( (j+1) * std::pow(10,matrix.size()-1));\n\t\t\tlong long int temp  = row +  col+ matrix[i][j];\n\t\t\tstd::hash<long long int> hasher;\n\t\t\tauto _hash  = hasher(temp) ;\n\t\t\tresult ^=  _hash +  0x9e3779b9   + (result<<6) + (result>>2); \n\t\t}\n\t}\n\treturn result;\n}\n\\end{lstlisting}\n\t\\caption{Hashing Code segment}\n\t\\label{fig:Hashing}\n\\end{figure}\n% =================================================\n% Search Algorithm function\n% =================================================\n\\subsection{Algorithm}\nThe algorithm used in this project was a modified version of $A^*$.\nSome major changes are the implementation of hashing to improve the \nrunning time by eliminating checks for whether nodes are in\nthe frontier or explored set; the hashing function results get cached in\ncpu cache and ram, thus causing a moderate speed gain.\n% =================================================\n% Result\n% =================================================\n\\section{Results}\nThe ``Path size'' represents the number of transitions between states;\nthe ``Cost'' is the cost of the transitions, not including the heuristic function weight.\n\\\\\n\\subsection{2x2 matrix}\nStarting position:\n\\begin{tabular}{ l }\n\t04 03 \\\\\n\t02 01\n\\end{tabular}\n$\\Rightarrow$\nGoal:\n\\begin{tabular}{ l }\n\t01 02 \\\\\n\t03 04\n\\end{tabular}\n\\\\ \nThe results below are from a $\\times5$ run on the PC (see \\ref{PC}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 4 & 4 & 0.00037s & 15625 & 15625 & 0s\\\\\\hline\n2 & 4 & 4 & 0.00037s & 15625 & 15625 & 0s\\\\\\hline\n3 & 4 & 4 & 0.0003685s & 15625 & 15625 & 0s\\\\\\hline\n4 & 4 & 4 & 0.0002143s & 15625 & 15625 & 0s\\\\\\hline\n5 & 4 & 4 & 0.0002142s & 15625 & 15625 & 0s\\\\\\hline\n\\end{tabular}\n\\\\\nThe results below are from a $\\times5$ run on the Server (see \\ref{Server}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 4 & 4 & 0.000589516s & 12088 & 12672 & 0.000584s\\\\\\hline\n2 & 4 & 4 & 0.000521357s & 10037 & 10553 & 0.000516s\\\\\\hline\n3 & 4 & 4 & 0.000623622s & 10185 & 10807 & 0.000622s\\\\\\hline\n4 & 4 & 4 & 0.000544579s & 9955 & 10497 
& 0.000542s\\\\\\hline\n5 & 4 & 4 & 0.000579305s & 9709 & 10286 & 0.000577s\\\\\\hline\n\\end{tabular}\n\\subsection{3x3 matrix}\nStarting position:\n\\begin{tabular}{ l }\n09 08 07 \\\\\n06 05 04 \\\\\n03 02 01\n\\end{tabular}\n$\\Rightarrow$\nGoal:\n\\begin{tabular}{ l }\n\t01 02 03 \\\\\n \t04 05 06 \\\\\n \t07 08 09\n\\end{tabular}\n\\\\\nThe results below are from a $\\times5$ run on the PC (see \\ref{PC}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 34 & 34 & 0.0771355s & 31250 & 109375 & 0.078125s\\\\\\hline\n2 & 34 & 34 & 0.0402095s & 15625 & 46875 & 0.03125s\\\\\\hline\n3 & 34 & 34 & 0.0326797s & 0 & 31250 & 0.03125s\\\\\\hline\n4 & 34 & 34 & 0.0325114s & 0 & 31250 & 0.03125s\\\\\\hline\n5 & 34 & 34 & 0.0324343s & 0 & 31250 & 0.03125s\\\\\\hline\n\\end{tabular}\n\\\\\nThe results below are from a $\\times5$ run on the Server (see \\ref{Server}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 34 & 34 & 0.08514s & 12444 & 97607 & 0.085163s\\\\\\hline\n2 & 34 & 34 & 0.0843959s & 11669 & 96036 & 0.084367s\\\\\\hline\n3 & 34 & 34 & 0.0852655s & 11209 & 96469 & 0.08526s\\\\\\hline\n4 & 34 & 34 & 0.0837059s & 11251 & 94951 & 0.0837s\\\\\\hline\n5 & 34 & 34 & 0.0835822s & 10498 & 94075 & 0.083577s\\\\\\hline\n\\end{tabular}\n\n\\subsection{4x4 matrix}\n\n\nStarting position:\n\\begin{tabular}{ l }\n16 15 14 13\\\\\n12 11 10 09\\\\\n08 07 06 05\\\\\n04 03 02 01\\end{tabular}\n$\\Rightarrow$\nGoal:\n\\begin{tabular}{ l }\n01 02 03 04\\\\\n05 06 07 08\\\\\n09 10 11 12\\\\\n13 14 15 16\\end{tabular}\n\\\\ \nThe results below are from a $\\times5$ run on the PC (see \\ref{PC}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 153 & 178 & 19.3723s & 15625 & 19375000 & 19.3594s\\\\\\hline\n2 & 153 & 178 & 19.2784s & 0 & 19281250 & 19.2812s\\\\\\hline\n3 & 153 & 178 &  19.3407s & 15625 & 19281250 & 19.2656s\\\\\\hline\n4 & 153 & 178 & 19.2833s & 0 & 19281250 & 19.2812s\\\\\\hline\n5 & 153 & 178 & 19.2716s & 0 & 19281250 & 19.2812s\\\\\\hline\n\\end{tabular}\n\\\\\nThe results below are from a $\\times5$ run on the Server (see \\ref{Server}) with \\texttt{CLOCKS\\_PER\\_SEC} of 1000000\n\\\\\n\\begin{tabular}{ | l | l | l | l | l | l | l | }\n\\hline\nRun  & Path Size & Cost & Running Time Alg & Cpu Start & Cpu End & Total cpu time\\\\\\hline\n1 & 153 & 178 & 50.1818s & 12055 & 50126065 & 50.114s\\\\\\hline\n2 & 153 & 178 & 50.2406s & 12573 & 50170881 & 50.1583s\\\\\\hline\n3 & 153 & 178 & 50.1994s & 11521 & 50125078 & 50.1136s\\\\\\hline\n4 & 153 & 178 & 50.3866s & 12512 & 50319531 & 50.307s\\\\\\hline\n5 & 153 & 178 & 50.0151s & 11048 & 49937159 & 49.9261s\\\\\\hline\n\\end{tabular}\n\\\\\n\\\\\nAll runs have the same path size and cost.\nFrom this data the algorithm is consistent in its results for path size.\nTherefore a tentative solution is provided by the data; however, for larger \ninstances it becomes harder to prove minimum cost. \n\\subsection{Other applications}\nThe shifting row function\nwas also considered in some attempts to find new algorithms for encryption \\cite{rotation_enc}. 
\nAs general-purpose graphics processing units (GPGPUs) can take advantage of basic rotation, this would allow for new forms of encryption.\n\ngpu.cpp contains a partial code segment for generating the first depth needed for a search heuristic on the gpu using OpenCL. Given a little more time\nand with the assistance of ``\\underline{Massively Parallel A* Search on a GPU}''~\\cite{a_start_gpu}, the solvable puzzle size would increase to around $n=128$.\n$n=129$ overloads my GPGPU; this is most likely due to the gpu on the PC (see \\ref{PC}) being used to drive 3 1920x1080 monitors.\n\n\\subsection{Output}\nEach run generates a file, stored in the data folder, with information\nabout each run.\n\nThe output file gives the hash for the start and end state. \\\\\nThe algorithm runs. \\\\\nThen it shows the visited count (explored) and the number of\niterations it went through.\\\\\nThe path is printed next, showing the transitions.\\\\\nFinally the run time stats are printed.\n\n\n\\begin{lstlisting}\nStart Hash:706246336224068\nGoal Hash:706246335411732\n============= Algorithm  ===========\nGoal Found!!\nVisited count:5\nIterations5\n1\t2\t\n3\t4\t\nCost:1\n706246335411732->706246335412049\n1\t2\t\n4\t3\t\nCost:1\n706246335412049->706246336219922\n4\t2\t\n1\t3\t\nCost:1\n706246336219922->706246336224256\n4\t3\t\n1\t2\t\nCost:1\n706246336224256->706246336224068\n4\t3\t\n2\t1\t\nPath size: 4\tCost: 4\nRunning time Algorithm:\t0.000589516s\ncpu start: 12088\tcpu end:12672\tCLOCKS_PER_SEC:1000000\ncpu time:\t0.000584s\n\\end{lstlisting}\n% =================================================\n% Sources of error\n% =================================================\n\\section{Sources of error}\n\\subsection{Compiler}\nIntroducing hashing caused some ``error'' in the application.\nThis is due to how hashing is implemented in the compiler (or Windows debugger); running \nthe code on Windows seems to yield a less accurate result, where accuracy \nis gauged by minimum cost. \n\\\\\n\\\\\nBelow is an output from running on the PC (see \\ref{PC}).\n\\\\\n**Note: the running time for Windows includes\nloading the debugging library for a pdb file (as Windows runs the debugger).\n\\\\\nVisual C++ \\textsuperscript{TM}\n\\begin{lstlisting}\nPath size: 156\tCost: 180\nRunning time Algorithm:\t42.9065s\ncpu start: 54\tcpu end:42961\tCLOCKS_PER_SEC:1000\ncpu time:\t42.907s\n\\end{lstlisting}\nGNU g++ \n\\begin{lstlisting}\nPath size: 153  Cost: 178\nRunning time Algorithm: 20.0533s\ncpu start: 15625\tcpu end:19828125\tCLOCKS_PER_SEC:1000000\ncpu time:\t19.8125s\n\\end{lstlisting}\n\\subsection{Data structures}\nThis implementation only allows for $n$ to be less than 5; if 5 is chosen,\nthe std map library will throw a segmentation fault for exceeding size. \nIf given more time this issue could have been resolved by creating custom\ndata structures to handle larger hash values and more combinations.\n\\subsection{Cpu boost}\nDuring some of my runs the cpu on the PC (see \\ref{PC}) was affected \nby the cpu boosting to higher clock speeds. This error is prevalent during the $3\\times3$\nruns.\n\n% ================================\n% System \n% ================================\n\\section{System}\n\\subsection{IDE}\\label{IDE}\nThis code was programmed in Visual Studio Code~\\cite{vscode}, which is an MIT-licensed source code editor\nand debugger. 
\n\\subsection{PC} \\label{PC}\nCpu: Intel(R) Core(TM) i5-6600K CPU @3.50 GHz (stock) boost to 4.10GHz\n\\\\\nRAM: 16GB DDR4\n\\\\\nOperating System: Windows 10\n\\subsection{Laptop} \\label{Laptop}\nCpu: Intel(R) Celeron(TM) N3150 Quad-core 1.60 GHz (stock) boost to 1.8GHz\n\\\\\nRAM: 8GB DDR3L\n\\\\\nOperating System: Windows 10\n\\subsection{Server} \\label{Server}\nCpu: Intel(R) Xeon(R) CPU E5345 @ 2.33 GHz (4 cores allocated to vm with hyper-threading)\n\\\\\nRAM: 8GB DDR2 - ECC (Error-correcting Memory)\n\\\\\nOperating System: Ubuntu Server (running on a hypervisor)\n\\bibliographystyle{unsrt}\n\\bibliography{bib}\n\n\\end{document}", "meta": {"hexsha": "d6f0694fb39bd565ac6ee80896157fc7eb3fdbf6", "size": 13305, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/report.tex", "max_stars_repo_name": "bluemner/heuristics_project", "max_stars_repo_head_hexsha": "3809f778854d61576649a4d822141c5d00547ae8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/report.tex", "max_issues_repo_name": "bluemner/heuristics_project", "max_issues_repo_head_hexsha": "3809f778854d61576649a4d822141c5d00547ae8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/report.tex", "max_forks_repo_name": "bluemner/heuristics_project", "max_forks_repo_head_hexsha": "3809f778854d61576649a4d822141c5d00547ae8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0281329923, "max_line_length": 159, "alphanum_fraction": 0.6632093198, "num_tokens": 4315, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7577943822145998, "lm_q1q2_score": 0.5742523174257673}}
{"text": "\\chapterimage{head2.png} % Chapter heading image\n\\chapter{Spectral Finite Element Method}\n\\section{Domain of sFEM}\n\n\\subsection{Dimensionless version}\nThe grid size is 0.0313. (The distance between two red points). Totally, there are 193 points.\n\\begin{center}\n        \\includegraphics[scale=0.3]{ch3/node_dimensionless.pdf}\n\\end{center}\n\\begin{itemize}\n        \\item The number of spectral element, $N_h=64$\n        \\item The order of interpolation polynomial, $N_p=4$\n\\end{itemize}\n\n\\subsection{Version with dimension}\nThe grid size is 0.6250 \u00c5. (The distance between two red points). Totally, there are 193 points.\n\\begin{center}\n        \\includegraphics[scale=0.3]{ch3/node.pdf}\n\\end{center}\n\\begin{itemize}\n        \\item The number of spectral element, $N_h=64$\n        \\item The order of interpolation polynomial, $N_p=4$\n\\end{itemize}\n\n\\subsection{Mathematical Concept}\nTherefore, we now have $V$, a vector space  of dimension $193$. This vector space has a natural basis, or we called it the standard basis\n\\begin{equation}\n        \\{\\textbf{e}_1, \\textbf{e}_2, ..., \\textbf{e}_{193}\\}\n\\end{equation}\nwhere\n\\begin{equation}\n\\textbf{e}_1=  \\begin{bmatrix} 1 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{bmatrix}~~\n\\textbf{e}_2=  \\begin{bmatrix} 0 \\\\ 1 \\\\ \\vdots \\\\ 0 \\end{bmatrix}~~\n\\cdots ~~\n\\textbf{e}_{193} = \\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 1 \\end{bmatrix} \n\\end{equation}\n\n\\begin{definition}[Orthonormal basis from sFEM]\nAfter doing spectral finite element method, we can get a set of orthonormal basis\n\\begin{equation}\n        B = \\{ \\psi_1, \\psi_2, \\psi_3, ..., \\psi_{72} \\}    \n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[$\\rho$ with respect to the $B$ basis]\nNow, we have an orthonormal basis $B$, so $\\rho$ can be represented\n\\begin{align*}\n        \\rho = c_1 \\psi_1 + c_2 \\psi_2 + \\cdots + c_{72} \\psi_{72}        \n\\end{align*}\nMore precisely,\n\\begin{align*}\n        [\\rho]_{B} =  \\begin{bmatrix} c_1 \\\\ c_2 \\\\ \\vdots \\\\ c_{72} \\end{bmatrix}\n\\end{align*}      \n\\end{definition}\n\n\\section{Weight function, orthogonality and norm}\n\\begin{definition}[inner product]\nBy using integral calculus, it is common to use the following to define the inner product of two functions $f$ and $g$ with respect to a nonnegative weight function $w$ over an interval $[a, b]$:\n\\begin{equation}\n        \\langle f, g\\rangle_w = \\int_a^b f(x)g(x)w(x)\\,dx \\approx \\sum_{i=1}^{193}w(x_i)f(x_i)g(x_i) \n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[weight function $w(x)$]\nFrom the  Gauss-Lobatto quadrature, we have\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch3/weight_function.png}   \n\\end{center}\n\\end{definition}\n\n\\begin{definition}[Orthogonal]\nWe say that functions $f$ and $g$ are orthogonal if their inner product (equivalently, the value of this integral) is zero:\n\\begin{equation}\n        \\langle f,g\\rangle _{w}=0.\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[Norm]\nWe define the norm with respect to this inner product as\n\\begin{equation}\n        \\|f\\|_w = \\sqrt{\\langle f, f\\rangle_w}\n\\end{equation}\n\\end{definition}\n\n\\section{Diagonalization of the operator}\n\\begin{definition}[Basis functions]\n\\begin{align}\n        \\psi^0_i(x) &= \\rho_{\\rm eq}(x) \\phi^0_i(x) \\\\\n        \\phi^0_i(x) &= \\sum_n c^0_n 
u_n(x).\n\\end{align}\n\\label{basisfunctions}\n\\end{definition}\n\n\\begin{definition}[Eigenvector matrix]\n\\begin{equation}\n\\Psi=\n\\begin{bmatrix}\n\\vert & \\vert &  & \\vert \\\\\n\\psi_1(x) & \\psi_2(x) & \\cdots & \\psi_{N_v}(x) \\\\\n\\vert & \\vert &  & \\vert \n\\end{bmatrix}\n\\end{equation}\n\\label{eigenvectormatrix}\nHere, we get a set of orthonormal basis vectors, $\\langle \\psi_i |$. The left part of the following figure is \n\\begin{equation}\n        \\Psi^T_{72\\times 193}\\Psi_{193 \\times 72} = (\\Psi^T\\Psi)_{72 \\times 72}\n\\end{equation}\nThe completeness relationship can actually be seen from the blue points in the right part:\n\\begin{equation}\n        \\langle \\psi_i(x),\\psi_i(x)\\rangle _{w} = \\sum_{j=1}^{193}w(x_j)\\psi_i(x_j)\\psi_i(x_j)  = 1\n\\end{equation}\n\\begin{center}\n        \\includegraphics[scale=0.4]{ch3/eigenvec_matrix_complete.png}   \n\\end{center}\n\\end{definition}\n\n\\section{Photon operator and Emission probability}\nAssume we have a graphical model like the following\n\\begin{center}\n        \\includegraphics[scale=0.8]{ch2/four_states_ex.png}   \n\\end{center}\nThe complete likelihood function is\n\\begin{equation}\n\\mathcal{P}(X(t), Y(t)) = p(x_{t_0}) \\prod_{\\tau=1}^4 p(y_{t_{\\tau}}|x_{t_{\\tau}}) p(x_{t_{\\tau}}|x_{t_{\\tau -1}})            \n\\end{equation}\nwhere the emission probability $p(y_{t_{\\tau}}|x_{t_{\\tau}})$ is defined by a Gaussian\n\\begin{equation}\n        p(y_{t_{\\tau}}|x_{t_{\\tau}}) =  \\mathcal{N}(\\mu = y_{t_{\\tau}}  ,\\sigma ^{2})       \n\\end{equation}\n\n\\begin{definition}[Emission Probability as linear transformation $T$]\nFor example, if we have an observation $y_1$ at $t_{1}$, we propagate as follows:\n\\begin{align*}\n        T(\\rho) = \\textbf{y} \\rho = \n        \\begin{bmatrix} \n              f(x_1) & 0 & 0 & \\cdots & 0 \\\\\n              0 & f(x_2) & 0 & \\cdots & 0 \\\\\n              \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n              0 & 0 & 0 & \\cdots & f(x_{193}) \\\\\n        \\end{bmatrix}\n        \\begin{bmatrix} \\rho_1 \\\\ \\rho_2 \\\\ \\vdots \\\\ \\rho_{193} \\end{bmatrix}    \n\\end{align*}\nWe redefine a function $f(x)$ for the emission probability\n\\begin{align*}\nf(x; \\mu = y_1, \\sigma) ={\\frac {1}{\\sigma {\\sqrt {2\\pi }}}}e^{-{\\frac {1}{2}}\\left({\\frac {x-\\mu }{\\sigma }}\\right)^{2}}      \n\\end{align*}\n\\begin{center}\n        \\includegraphics[scale=0.5]{ch3/linear_transform.pdf}   \n\\end{center}\n\\end{definition}\n\n\\begin{definition}[Transformation matrix with respect to the $B$ basis]\nFirst, the change of basis matrix for $B$\n\\begin{align*}\n        C = \n        \\begin{bmatrix}\n              \\vert & \\vert &  & \\vert \\\\\n              \\psi_1   & \\psi_2 & \\cdots & \\psi_{72} \\\\\n              \\vert & \\vert &  & \\vert\n        \\end{bmatrix}_{193 \\times 72}  \n\\end{align*}\nAnd its transpose is\n\\begin{align*}\n        C^{T} = \n        \\begin{bmatrix}\n        - & \\psi_1 & - \\\\\n        - & \\psi_2 & - \\\\\n         & \\vdots & \\\\\n         - & \\psi_{72} & - \\\\\n        \\end{bmatrix}_{72 \\times 193}\n\\end{align*}\nAnd you can see\n\\begin{align*}\n        C^{T}C = I  \n\\end{align*}\nso \\( C^{T}\\) is a left inverse\n\\begin{align*}\n        C^{T} = C^{-1}  \n\\end{align*}\n\\begin{center}\n        \\includegraphics[scale=0.5]{ch3/transform_map.pdf}   \n\\end{center}\nTherefore, we get a transformation matrix $D$ with respect to the $B$ basis\n\\begin{equation}\n        
[T(\\rho)]_{B} = D [\\rho]_{B}  \n\\end{equation}\nwhere\n\\begin{align*}\n        D =  \\begin{bmatrix}\n                - & \\psi_1 & - \\\\\n                - & \\psi_2 & - \\\\\n                 & \\vdots & \\\\\n                 - & \\psi_{72} & - \\\\\n             \\end{bmatrix}_{72 \\times 193}\n          \\begin{bmatrix} \n          f(x_1) & 0 & \\cdots & 0 \\\\\n          0 & f(x_2) & \\cdots & 0 \\\\\n          \\vdots & \\vdots & \\vdots & \\vdots \\\\\n          0 & 0 & \\cdots & f(x_{193}) \\\\\n          \\end{bmatrix}_{193 \\times 193}\n          \\begin{bmatrix}\n                \\vert & & \\vert \\\\\n                \\psi_1   & \\cdots & \\psi_{72} \\\\\n                \\vert &  & \\vert\n          \\end{bmatrix}_{193 \\times 72}    \n\\end{align*}\n$D$ can also be expressed by bra-ket notation\n\\begin{align*}\n        D=\\begin{bmatrix}\n                \\left< \\psi_1(x) | \\textbf{y} | \\psi_1(x) \\right> & \\left< \\psi_1(x) | \\textbf{y} | \\psi_2(x) \\right> & \\cdots & \\left< \\psi_1(x) | \\textbf{y} | \\psi_{72}(x) \\right> \\\\\n                \\left< \\psi_2(x) | \\textbf{y} | \\psi_1(x) \\right> & \\left< \\psi_2(x) | \\textbf{y} | \\psi_2(x) \\right> & \\cdots & \\left< \\psi_2(x) | \\textbf{y} | \\psi_{72}(x) \\right> \\\\\n                \\vdots & \\vdots & \\cdots & \\vdots \\\\\n                \\left< \\psi_{72}(x) | \\textbf{y} | \\psi_1(x) \\right> & \\left< \\psi_{72}(x) | \\textbf{y} | \\psi_2(x) \\right> & \\cdots & \\left< \\psi_{72}(x) | \\textbf{y} | \\psi_{72}(x) \\right> \\\\\n                \\end{bmatrix}_{72 \\times 72}\n\\end{align*}\nwhere\n\\begin{align*}\n        \\left< \\psi_i(x) | \\textbf{y} | \\psi_j(x) \\right> = \\int w(x) \\psi_i(x) f(x) \\psi_j(x) dx \\approx \\sum_{k=1}^{193} w(x_k) \\psi_i(x_k) f(x_k) \\psi_j(x_k)\n\\end{align*}\n\\end{definition}\n\n\\section{Single Well}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch3/single_well_1.pdf}   \n\\end{center}\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch3/single_well_2.pdf}   \n\\end{center}\n\n\\section{Double Well}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch3/double_well_1.pdf}   \n\\end{center}\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch3/double_well_2.pdf}   \n\\end{center}\n\n\\section{Triple Well}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch3/triple_well_1.pdf}   \n\\end{center}\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch3/triple_well_2.pdf}   \n\\end{center}", "meta": {"hexsha": "c8eb45705978c4486cb5e95aac959598ff50b71f", "size": 8782, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/chapter3.tex", "max_stars_repo_name": "yizaochen/em_theory", "max_stars_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/chapter3.tex", "max_issues_repo_name": "yizaochen/em_theory", "max_issues_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/chapter3.tex", "max_forks_repo_name": "yizaochen/em_theory", "max_forks_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": 
null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.9918032787, "max_line_length": 195, "alphanum_fraction": 0.6053290822, "num_tokens": 3066, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5742265732825181}}
{"text": "\\section{Use of Curry and Foldr / Foldl}\n\n\\begin{lstlisting}[language=Haskell]\nmodule ProgExercises.FS_2019_ProgExer04Prob_V01 where\n\n-- Exercise (Higher-Order Functions)\n\nimport Prelude hiding (flip, curry, uncurry)\n\ntoBeImplemented = undefined\n\n-- flip f takes its (first) two arguments in the reverse order of f\nexaFlip = flip take [1,2,3,4,5] 3 == [1,2,3]\n\nflip :: (a -> b -> c) -> (b -> a -> c)\nflip f x y = f y x\n\n-- curry converts a function on pairs to a curried function\nexaCurry = curry (\\(x,y) -> x + y) 3 4 == 7\n\ncurry :: ((a, b) -> c) -> (a -> b -> c)\ncurry f x y = f (x, y)\n\n-- uncurry converts a curried function to a function on pairs\nexaUncurry = uncurry (\\x y -> x + y) (3, 4) == 7\n\nuncurry :: (a -> b -> c) -> ((a, b) -> c)\nuncurry f (x,y) = f x y\n\nexaReverseFr = reverseFr [1 .. 20000] == [20000, 19999 .. 1]\nexaReverseFl = reverseFl [1 .. 20000] == [20000, 19999 .. 1]\n\n-- implement reverse using foldr and (++)\nreverseFr :: [a] -> [a]\nreverseFr = foldr (\\x accu -> accu ++ [x]) []\n\n-- implement reverse using foldl, (:), and flip\nreverseFl :: [a] -> [a]\nreverseFl = foldl (flip (:)) []\n\n-- revAppend prepends the first list in reverse order before the second list\nexaRevAppend = revAppend [3,2,1] [4,5,6] == [1,2,3,4,5,6]\n-- implement revAppend using foldl and flip\n\nrevAppend :: [a] -> [a] -> [a]\nrevAppend = flip (foldl (flip (:)))\n\nexaMapFr = mapFr (*2) [1 .. 10] == map (*2) [1 .. 10]\nexaMapFl = mapFl (*2) [1 .. 10] == map (*2) [1 .. 10]\n\nmapFr :: (a -> b) -> [a] -> [b]\nmapFr f = foldr (\\x accu -> f x : accu) []\n\nmapFl :: (a -> b) -> [a] -> [b]\nmapFl f = foldl (\\accu x -> accu ++ [f x]) []\n\n-- for estimating performance\nlr = length (mapFr (*2) [1 .. 20000])\nll = length (mapFl (*2) [1 .. 20000])\n\nexaFilterFr = filterFr odd [1 .. 10] == filterFr odd [1 .. 10]\nexaFilterFl = filterFl odd [1 .. 10] == filterFl odd [1 .. 10]\n\nfilterFr :: (a -> Bool) -> [a] -> [a]\nfilterFr p = foldr (\\x accu -> if p x then x : accu else accu) []\n\nfilterFl :: (a -> Bool) -> [a] -> [a]\nfilterFl p = foldl (\\accu x -> if p x then accu ++ [x] else accu) []\n\nexaLengthFr = lengthFr [1 .. 1000] == 1000\nexaLengthFl = lengthFl [1 .. 1000] == 1000\n\nlengthFr :: [a] -> Int\nlengthFr = foldr (\\_ accu -> 1 + accu) 0\n\nlengthFl :: [a] -> Int\nlengthFl = foldl (\\accu _ -> accu + 1) 0\n\nexaAppendFr = appendFr [4,5,6] [1,2,3] == [1 .. 6]\nexaAppendFl = appendFl [1,2,3] [4,5,6] == [1 .. 
6]\n\nappendFr :: [a] -> [a] -> [a]\nappendFr = flip (foldr (:))\n\nappendFl :: [a] -> [a] -> [a]\nappendFl = foldl (\\accu x -> accu ++ [x])\n\\end{lstlisting}\n\n\\clearpage", "meta": {"hexsha": "961fe2a1d340b9b69133e7a32f8ddcd57a0a875f", "size": 2547, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TSM_AdvPrPa/Excercises/Haskell/07_curryingFunctions.tex", "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": ["Beerware"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TSM_AdvPrPa/Excercises/Haskell/07_curryingFunctions.tex", "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_licenses": ["Beerware"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TSM_AdvPrPa/Excercises/Haskell/07_curryingFunctions.tex", "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": ["Beerware"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "avg_line_length": 28.6179775281, "max_line_length": 76, "alphanum_fraction": 0.5783274441, "num_tokens": 987, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255927, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5742265650904106}}
{"text": "\\subsection{Hopfield Network}\n\n% Wenting.\nHopfield's model is an associative(content-addressable) memory model using binary information and its network is featured with threshold and feedback. A binary input vector to a Hopfield Network will be recursively processed(fed back) at the granularity level of bit, until a threshold criterium is met, the produced output vector possesses the feature that is with the shortest Hamming distance to its origin input vector (see figure \\ref{fig:hopfield_network}). There properties give rise to associative memory recall, where an input pattern (i.e. memory cue) is presented to the Hopfield network, which produces an output vector that is closest in Hamming distance to a previously stored memory, thus filling in the gaps to recall a memory through association.\n\n\\begin{figure}[htbp]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.5\\textwidth]{inc/hopfield_network.png}\n\t\t\\caption{A Hopfield network consisting of 4 nodes, with an input and output vector of length 4. Additionally, each node takes as input the state of each other node in the network.\\protect\\footnotemark}\n\t\t\\label{fig:hopfield_network}\n\t\\end{center}\n\\end{figure}\n\\footnotetext{Original image (CC BY-SA): \\url{https://upload.wikimedia.org/wikipedia/commons/9/95/Hopfield-net.png}}\n\nIn paper from J.J.Hopfield(1982) \\cite{computational_abilities}, the drawbacks of Perceptron were addressed through its intractable back-coupling, lack of abstraction properties and requirement of synchronism. Information storage was improved with help of Non-linearity, and emergent computational properties were obtained from simple properties of many cells rather than complex circuitry(which is a result of linear associativity). The input-output relationship of non-linear computation, binary threshold units and concept of the energy function were introduced.\n\nCollective behaviours of the model was studied and resulted with the following findings: a few stable states was resulted from most of the initial state space, properties necessary for a physical content-addressable memory were not dependent on the symmetry of the connectivity matrix $T_{ij}$. Statement supported by findings from experiments is that ``about 0.15N(memory storage bits, number of neurons) states can be simultaneously remembered before error in recall is severe'', which refers to the memory capacity of the associative memory. Case with arbitrary starting state was studied and results of memories near to the starting state was highly produced. 
The nominally assigned memories, which were called ``attractors'', dominate the phase flow, although the flow is not entirely deterministic, which can lead to convergence to a local optimum.\n\nThe case of consistent internal correlations in the memories was addressed as well; the Hebb synapses were used and slightly modified to generate a non-symmetric term~$\\delta T_{ij}$, with which the limitation to sequences of about four states was addressed.\n", "meta": {"hexsha": "71f47cec0961ae3626c4a6e4958e5aecd9e76780", "size": 2941, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/2_models_of_associative_memory/1_hopfield_network.tex", "max_stars_repo_name": "mewmew/associative_memories", "max_stars_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-05-30T12:08:22.000Z", "max_stars_repo_stars_event_max_datetime": "2016-05-30T12:08:22.000Z", "max_issues_repo_path": "report/sections/2_models_of_associative_memory/1_hopfield_network.tex", "max_issues_repo_name": "mewmew/associative_memory", "max_issues_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 54, "max_issues_repo_issues_event_min_datetime": "2016-04-04T00:06:16.000Z", "max_issues_repo_issues_event_max_datetime": "2016-06-02T13:32:52.000Z", "max_forks_repo_path": "report/sections/2_models_of_associative_memory/1_hopfield_network.tex", "max_forks_repo_name": "mewmew/associative_memory", "max_forks_repo_head_hexsha": "d0c50cf3efbcab9369f5a030125752253539d9e0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 147.05, "max_line_length": 848, "alphanum_fraction": 0.8143488609, "num_tokens": 609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5742265649922546}}
{"text": "\\chapter{The Design of SmartFin} \\label{combinator-DSL}\n\nThe main goal of this project is to provide a smart contract implementation of a DSL for financial contracts. Before discussing this implementation, this chapter defines the syntax and semantics of the DSL being implemented, SmartFin. \\\\\n\nThe syntax and semantics of SmartFin are derived from a DSL proposed by Peyton Jones et al.\\cite{SPJ}, described in section \\ref{DSL-design}, but SmartFin varies slightly from this original DSL. The smart contract implementing SmartFin must behave correctly when any SmartFin contract definition is provided, according to the syntax and semantics laid out in this chapter.\n\n\n\\section{Overview of SmartFin Financial Contracts}\n\nSmartFin is a combinator domain-specific language used to describe financial contracts, and in general operates very similarly to the DSL defined by Peyton Jones et al.\\cite{SPJ} which is described in section \\ref{DSL-overview}. The general gist is that a SmartFin contract represents a financial contract between two parties, which can define payments between these parties. The \\textit{holder} party is able to acquire a SmartFin contract - just as a traditional financial contract would be signed - and any obligations are carried out subsequently. SmartFin contracts can expire, and contracts can rely on external input.\n\n\n\\section{Syntax} \\label{DSL-BNF}\n\nThe syntax of SmartFin is very simple, due to its combinatorial nature. The Backus-Naur form of the grammar is as follows, where \\texttt{<name>} is a string starting with a non-integer character, \\texttt{<time>} is a date and time, \\texttt{<integer>} is an integer value, and \\texttt{<address>} is an Ethereum address: \\\\\n\n\\setlength{\\grammarindent}{10em}\n\\begin{grammar}\n<contract> ::= `zero'\n\\alt `one'\n\\alt `give' <contract>\n\\alt `get' <contract>\n\\alt `anytime' <contract>\n\\alt `scale' <observable> <contract>\n\\alt `truncate' <time> <contract>\n\\alt `and' <contract> <contract>\n\\alt `or' <contract> <contract>\n\\alt `then' <contract> <contract>\n\n<observable> ::= <integer> | <name> <address>\n\\end{grammar}\n\nThis syntax has one major difference from the syntax of the original DSL presented in the paper by Peyton Jones et al.\\cite{SPJ}. This difference is the representation of \\textit{observables}, used with the \\texttt{scale} combinator in the \\texttt{<contract>} rule. In the original DSL, an \\texttt{observable} takes a constant value, or a representation of a time-varying value, but its syntax was left abstract due to the somewhat abstract nature of the combinators' implementation in the paper. The paper was more focused on evaluating financial contracts written in this form than representing them programmatically. As this project requires a concrete implementation of the DSL, the syntax cannot be left abstract. Because of this, it is necessary to give a specific definition of an observable's syntax; the chosen definition being a constant integer value, or a name and an Ethereum address. The purpose of these representations is explained further in section \\ref{DSL-semantics}. The original DSL also allowed observables to have mathematical operations carried out on them, whereas SmartFin does not. 
For clarity, the format of the \\texttt{<time>} rule in the provided BNF can be either UNIX Epoch time, or a date and time in the format \\texttt{\"<DD/MM/YYYY HH:mm:ss +ZZ>\"} - where \\texttt{\"DD\"} is a two-digit date, \\texttt{\"MM\"} is a two-digit month, \\texttt{\"YYYY\"} is a four-digit year, \\texttt{\"HH\"} is a two-digit hour, \\texttt{\"mm\"} is a two-digit minute, \\texttt{\"ss\"} is a two-digit second, and \\texttt{\"+ZZ\"} is an optional positive or negative two-digit time zone offset.\n\n\n\\section{Semantics} \\label{DSL-semantics}\n\nEach combinator has a specific meaning, potentially depending on sub-combinators, times, and/or observables. This section gives a definition of the semantics of SmartFin combinators where they differ from the original DSL, namely \\texttt{one} and \\texttt{scale}. For combinators not listed here, see Section \\ref{combinators} for their semantics. The conventions used to represent the syntax of these combinators are defined in Table \\ref{combinator-semantics-notation}, and the definition of each combinator's semantics is as follows: \\\\\n\n\\begin{table}[ht]\n    \\begin{center}\n        \\begin{tabular}{|ll|}\n            \\hline\n            $c, d$ &\\textit{Contract} \\\\\n            $o$ &\\textit{Observable} \\\\\n            $x$ &\\textit{Constant Value} \\\\\n            $n$ &\\textit{Observable Name (as in section \\ref{DSL-BNF})} \\\\\n            $a$ &\\textit{Ethereum Address} \\\\\n            $t$ &\\textit{Time} \\\\\n            \\hline\n        \\end{tabular}\n        \\caption{Conventions for the description of SmartFin's semantics}\n        \\label{combinator-semantics-notation}\n    \\end{center}\n\\end{table}\n\n\\parbox{\\textwidth}{\n\\texttt{zero, give c, and c d, or c d, truncate t c, then c d, get c, anytime c} \\\\\n\nThese combinators' semantics are unchanged from the original DSL, described in section \\ref{combinators}. \\\\ \\\\\n\n}\n\n\\parbox{\\textwidth}{\n\\texttt{one} \\\\\n\nThis combinator represents a contract which requires the counter-party to pay the holder 1 Wei upon acquisition. The currency of payments is always Ether, as opposed to the original DSL's \\texttt{one} combinator which takes a currency. This contract can be acquired at any time, and thus has no horizon. \\\\ \\\\\n}\n\n\\texttt{scale o c} \\\\\n\nA \\texttt{scale} combinator represents \\texttt{c} with all payments multiplied by the value of the observable \\texttt{o} at the time of acquisition. \\texttt{scale o c} has the same horizon as \\texttt{c}. \\\\\n\nAn observable can be provided in the format \\texttt{x}, or \\texttt{n a}. An observable in the form \\texttt{x} is a constant integer value. For example, \\texttt{scale 5 one} requires the counter-party to pay the holder 5 Wei on acquisition. \\\\\n\nAn observable in the form \\texttt{n a} is a time-varying value. The name \\texttt{n} is an identifier for the observable, and the address \\texttt{a} represents an arbiter for the observable's value. This address is treated as the canonical source of the observable's value at the time of acquisition. No two observables may have the same name and arbiter, or there would be no way to differentiate the two, even though their values may differ (if they are acquired at different times, for instance). \\\\\n\n\n\\section{Remarks}\n\nOverall, the SmartFin DSL is very similar to the original DSL described by Peyton Jones et al.\\cite{SPJ}, with a few differences for the sake of compatibility with the Ethereum platform. 
The next chapter will describe the top-level implementation of a smart contract which can represent any given SmartFin contract, and the following chapter describes the implementation of specific combinators' semantics.", "meta": {"hexsha": "fa356ac21fa2aa5fc8762a172b5244eb992f7937", "size": 6805, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/src/smartfin.tex", "max_stars_repo_name": "danrobdean/SmartFin", "max_stars_repo_head_hexsha": "7dd6ea1cc279bbeb19f0b8094ff35e1c7d49a87f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/src/smartfin.tex", "max_issues_repo_name": "danrobdean/SmartFin", "max_issues_repo_head_hexsha": "7dd6ea1cc279bbeb19f0b8094ff35e1c7d49a87f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-07-17T15:32:25.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-12T15:59:17.000Z", "max_forks_repo_path": "report/src/smartfin.tex", "max_forks_repo_name": "danrobdean/SmartFin", "max_forks_repo_head_hexsha": "7dd6ea1cc279bbeb19f0b8094ff35e1c7d49a87f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-13T16:31:30.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-13T16:31:30.000Z", "avg_line_length": 81.9879518072, "max_line_length": 1112, "alphanum_fraction": 0.7547391624, "num_tokens": 1639, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5742265649922546}}
{"text": "\\subsection{A Short Analysis of Seasonal Pattern in TDIF and PRCP}\r\n\r\nSpectrum analysis\\cite{priestley1981spectral}, also referred to as frequency domain analysis, is the technical process of decomposing a complex signal into simpler parts. If we consider the time series of TDIF as a signal with a seasonal pattern, we can use the spectrum analysis method to identify the dominant frequency.\r\n\r\nFirst let's see the plot on the left of Figure \\ref{patt}. There is an obvious peak in the plot, which is the frequency domain. The cycle corresponding to the peak is 375 days, i.e. TDIF basically follows a cycle of 375 days. And there are no other peaks in the plot, so it indeed is the dominant one.\r\n\r\n\\begin{figure}[h]\r\n\\centering\r\n\\begin{minipage}[t]{0.48\\textwidth}\r\n\\centering\r\n\\includegraphics[width=6cm]{patt1.png}\r\n\\end{minipage}\r\n\\begin{minipage}[t]{0.48\\textwidth}\r\n\\centering\r\n\\includegraphics[width=6cm]{patt2.png}\r\n\\end{minipage}\r\n\\caption{Spectrum Plot and Time Plot}\r\n\\label{patt}\r\n\\end{figure}\r\n\r\nOn the right hand side is the time plot of TDIF (the red line) and PRCP (the black line). The red line shows a clear cycle per year, corresponding to the spectrum analysis before. On the other hand, no clear pattern of the black line can be observed. But it can tell us that PRCP is likely to be high when TDIF is low and vice versa. Overall even if we confirm the existence of a seasonal pattern in PRCP (which should be true), it may not play a crucial role in predicting daily weather. However, what we learn from the analysis above is, the TDIF is important in predicting PRCP and they are likely to be not positively, but negatively correlated.", "meta": {"hexsha": "f487377bce191ae416df6474e94c0d178c1819eb", "size": 1660, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/pattern.tex", "max_stars_repo_name": "shengchenHAO/Weather-Forecast-", "max_stars_repo_head_hexsha": "0c81dd5b8b3c4572464b0e0b841ca279ecb0d650", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/pattern.tex", "max_issues_repo_name": "shengchenHAO/Weather-Forecast-", "max_issues_repo_head_hexsha": "0c81dd5b8b3c4572464b0e0b841ca279ecb0d650", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/pattern.tex", "max_forks_repo_name": "shengchenHAO/Weather-Forecast-", "max_forks_repo_head_hexsha": "0c81dd5b8b3c4572464b0e0b841ca279ecb0d650", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 79.0476190476, "max_line_length": 649, "alphanum_fraction": 0.7740963855, "num_tokens": 417, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914975839675, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5742265649922544}}
{"text": "\n\\subsection{Bias of OLS estimator from ommitted variables}\n\n", "meta": {"hexsha": "cb9ba788ef58f90d30ce5fad189ecf4205a1b6db", "size": 61, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/olsMore/01-01-OLSBiasOmitted.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/olsMore/01-01-OLSBiasOmitted.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/olsMore/01-01-OLSBiasOmitted.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 15.25, "max_line_length": 58, "alphanum_fraction": 0.8032786885, "num_tokens": 15, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.828938799869521, "lm_q2_score": 0.6926419958239133, "lm_q1q2_score": 0.5741578247575044}}
{"text": "\\chapter{Large Scale Modular Physical Models}\\label{ch:largeScale}\nThis chapter provides an extended summary for the work presented in the papers ``Real-Time Control of Large-Scale Modular Physical Models using the Sensel Morph'' \\citeP[A] and ``Physical Models and Real-Time Control with the Sensel Morph'' \\citeP[B].  Paper \\citeP[A] presents the work done on various physical models connected by nonlinear springs using three instruments as case studies: the esraj (bowed sitar), the hammered dulcimer and the hurdy gurdy. The implementations and a video showcasing the hurdy gurdy can be found online.\\footnote{\\url{https://github.com/SMC-AAU-CPH/ConnectedElements/releases/tag/v5.0}}\\textsuperscript{,}\\footnote{\\url{https://youtu.be/BkxLji2ap1w}} Paper \\citeP[A] follows \\cite{theBible} and \\cite{Bilbao2009Modular} and uses `scaling' (see Section \\ref{sec:1DwaveContTime}). To relate the paper to the theory presented in this thesis, this chapter presents the models in a non-scaled, dimensional form. The eventual implementation of the models is equivalent. \n% Furthermore, this chapter will build on the contents of paper \\citeP[A] by providing more details on the implementation.\\todo{will it though?}\n\n\\section{Physical models}\\label{sec:modelsLargeScale}\nAll instruments use multiple instances of the stiff string presented in Chapter \\ref{ch:stiffString} and one instance of the thin plate presented in Section \\ref{sec:thinPlate}. The latter was used as a simplified instrument body for the resulting simulations (see Section \\ref{sec:largeScaleInstruments}). Theory on connections can be found in Chapter \\ref{ch:connections}, and information on the string-plate connection, specifically, is presented in Section \\ref{sec:stringPlateConnection}.\n\nConsider set of strings, where the transverse displacement of string $s$ is described by $u_s = u_s(\\chi_s, t)$ (in m) defined for $t\\geq 0$ and $\\chi_s \\in \\D_s$ for domain $\\D_s = [0, L_s]$ and length $L_s$ (in m). Notice that every string is defined for a separate coordinate system $\\chi_s$. In the following, spatial derivatives $\\partial_{\\chi_s}$ are the same as those described in Section \\ref{sec:FDoperators}, but with respect to coordinate $\\chi_s$. The PDE of string $s$ with an external connection force is defined as %(after division by $\\rho_sA_s$)\n\\begin{equation}\n    \\begin{aligned}\n    \\ptt u_s = c_s^2 \\partial_{\\chi_s\\chi_s} u_s &- \\kappa_s^2 \\partial_{\\chi_s\\chi_s\\chi_s\\chi_s} u_s - 2 \\szX[s]\\pt u_s\\\\\n    &\\quad + 2 \\soX[s] \\partial_{\\chi_s\\chi_s} u_s - \\delta(\\chi_s - \\chi_{s, \\ctxt})\\frac{f_s}{\\rho_sA_s},\n    \\end{aligned}\n\\end{equation}\nwhere spatial Dirac delta function $\\delta(\\chi_s - \\chi_{s,\\ctxt})$ (in m$^{-1}$) localises the connection force between the string $s$ and the plate to connection location $\\chi_{s, \\ctxt}$. Other parameters are as defined in Eq. \\eqref{eq:stiffStringPDE} but have a subscript $s$ to denote that they can be different for each strings. \n\nAs all connections in the implementation are between an individual string and the plate, the PDE of the thin plate in Eq. \\eqref{eq:platePDE} can be extended to \n\\begin{equation}\n    \\begin{aligned}\n    \\!\\!\\!\\!\\ptt w = -\\kappa_\\ptxt^2\\Delta\\Delta w &\\!-\\! 2\\szX[\\ptxt]\\pt w\\! +\\! 2\\soX[\\ptxt] \\pt\\pxx w+ \\!\\sum_s\\delta(x \\!-\\! x_{\\ctxt,s}, y \\!-\\! 
y_{\\ctxt,s})\\frac{f_s}{\\rho_\\ptxt H},\\!\\!\\!\\!\\!\\!\n    \\end{aligned}\n\\end{equation}\nwhere the 2D spatial Dirac delta function $\\delta(x - x_{\\ctxt,s}, y - y_{\\ctxt,s})$ (in m$^{-2}$) localises the connection force between the plate and string $s$ to coordinate $(x_{\\ctxt,s}, y_{\\ctxt,s})$ on the plate. Other parameters are as defined in Eq. \\eqref{eq:platePDE}. \n\nFinally, the connection force between the plate and string $s$ is defined as a nonlinear damped spring (see Eq. \\ref{eq:nonlinearForce})\n\\begin{equation}\n    f_s = K_1\\eta_s + K_3\\eta_s^3 + R \\dot \\eta_s,\n\\end{equation}\nwhere\n\\begin{equation}\n    \\eta_s = u_s(\\chi_{\\ctxt,s}, t) - w(x_{\\ctxt,s}, y_{\\ctxt,s}, t)\n\\end{equation}\nis the relative displacement between string $s$ and the plate at their respective connection locations. Notice that the plate is placed below the strings such that the signs of the force terms are negative for the strings and positive for the plate.\n\n\\section{Implementation}\nThis section provides considerations for implementing the above models. Details on discretisation of the models and how to solve for $f_s$ are presented in Section \\ref{sec:stringPlateConnection} and are not given here. \n\nThe spatial Dirac delta functions are discretised using 0\\thOrder spreading operators for simplicity (see Section \\ref{sec:interpolationSpreading} (1D) and Section \\ref{sec:interpolationSpreading2D} (2D)). Furthermore, the connection locations on the plate are implemented to be non-overlapping. Overlaps would require solving a system of linear equations to obtain the connection forces (see e.g. \\cite{Bilbao2009Modular}). Looking towards real-time implementation, an explicit solution for each connection is desired.\n\n\\section{Summary}\nThis section provides a summary of the instrument simulations presented in paper \\citeP[A]. All instruments were implemented in real-time in C++ using the JUCE framework (see Chapter \\ref{ch:realtime}). Finally, a summary of the results and conclusions is given.\n\n\\subsection{Instruments}\\label{sec:largeScaleInstruments}\nUsing the setup presented in Section \\ref{sec:modelsLargeScale}, various configurations inspired by real instruments were created. The simulated instruments were chosen to be ones containing many (sympathetic) strings. Furthermore, physical models of these instruments using FDTD methods did not yet exist in the literature at the time of writing the papers.\n\nThree implementations inspired by real-life instruments were created and their setups are presented here. The implementations were controlled by a pair of Sensel Morph controllers (see Section \\ref{sec:sensel}). The mapping between the controllers and the instruments is explained in papers \\citeP[A] and \\citeP[B].\n\n\\subsubsection{Esraj: bowed sitar}\nThe first instrument simulation is inspired by the \\textit{esraj}: the bowed sitar. This instrument uses many strings, some of which are bowed and others are sympathetic strings that resonate when the instrument is played. As one can also interact with the latter, several strings in the implementation could be plucked as well. \n\nIn total, 20 strings were implemented, all connected to a thin plate: 2 strings could be bowed, 5 strings could be plucked, and 13 strings were sympathetic. 
The bow was implemented using the static friction model presented in Section \\ref{sec:staticFricMod} and the pluck was modelled as the time-varying raised cosine found in Section \\ref{sec:timeVaryingRaisedCos}.\n\n\\subsubsection{Hammered dulcimer}\nThe hammered dulcimer, or santur, can be seen as an `open piano' where the player hammers several strings at once. The implementation uses 20 pairs of strings, where one string in each pair is connected to the plate. This causes a slight detuning between the strings, resulting in the characteristic `chorus' effect exhibited by the instrument. To excite the strings, the time-varying strike presented in Section \\ref{sec:timeVaryingRaisedCos} is used.\n\n\\subsubsection{Hurdy gurdy}\nThe hurdy gurdy is a bowed string instrument also containing sympathetic strings. Rather than a bow, the instrument uses a rosined wheel attached to a crank, which bows the strings as it is turned. As for the esraj, the static friction model presented in Section \\ref{sec:staticFricMod} was used to implement the wheel. \n\nThe instrument simulation consists of 5 bowed strings and 13 sympathetic strings, all connected to a plate. \n\n\\subsection{Results and conclusion}\nAll instrument simulations were able to run in real time on a MacBook Pro with a 2.2 GHz Intel i7 processor.\nInteraction with the implementations shows that when exciting one string, the connections with the plate cause other (sympathetic) strings to vibrate as well. Specifically, strings tuned to one of the harmonic partials of the excited string were found to resonate to a high degree. This phenomenon is consistent with real-world behaviour.\n\nFinally, informal evaluations of the instruments were carried out with experts in the sound and music computing field, and showed that the mapping between the Sensels and the instruments, specifically the bowing interaction, was considered natural and intuitive.", "meta": {"hexsha": "5b824ab8e224ace7df74030a2e80a0fd1ba70f37", "size": 8468, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "aauPhdCollectionThesis/contributions/largeScale.tex", "max_stars_repo_name": "SilvinWillemsen/phdThesis", "max_stars_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "aauPhdCollectionThesis/contributions/largeScale.tex", "max_issues_repo_name": "SilvinWillemsen/phdThesis", "max_issues_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "aauPhdCollectionThesis/contributions/largeScale.tex", "max_forks_repo_name": "SilvinWillemsen/phdThesis", "max_forks_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 130.2769230769, "max_line_length": 999, "alphanum_fraction": 0.7778696268, "num_tokens": 2127, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387998695209, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5741578089796774}}
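As a concrete illustration of the connection model in the record above, the sketch below evaluates the nonlinear damped-spring force f_s = K1*eta_s + K3*eta_s^3 + R*d(eta_s)/dt from the string and plate displacements at the connection points. It is only a sketch under stated assumptions: the parameter values K1, K3 and R are illustrative placeholders rather than values from the papers, and the backward difference used for the time derivative of eta_s is just one possible discretisation, not necessarily the one used in the thesis.
\begin{verbatim}
# Minimal Python sketch of the nonlinear damped-spring connection force
#   f_s = K1 * eta + K3 * eta**3 + R * d(eta)/dt .
# K1, K3 and R are illustrative placeholders, not values from the papers.

K1 = 1.0e3   # linear spring constant (N/m)
K3 = 1.0e7   # cubic spring constant (N/m^3)
R  = 1.0     # damping coefficient (kg/s)

def connection_force(u_conn, w_conn, u_conn_prev, w_conn_prev, dt):
    """Force between one string and the plate at their connection points.

    u_conn / w_conn are the current string and plate displacements at the
    connection locations; *_prev are the values one time step earlier.
    The derivative of eta is approximated here by a backward difference.
    """
    eta = u_conn - w_conn
    eta_prev = u_conn_prev - w_conn_prev
    deta = (eta - eta_prev) / dt
    return K1 * eta + K3 * eta**3 + R * deta
\end{verbatim}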
{"text": "\n\\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size\n\\usepackage{physics}\n\\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs\n\\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default\n\\usepackage[english]{babel} % English language/hyphenation\n\\usepackage{amsmath,amsfonts,amsthm} % Math packages\n\\usepackage{braket}\n\\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template\n\\usepackage{tikz}\n\\usepackage{amsmath}\n\\usepackage{sectsty} % Allows customizing section commands\n\\allsectionsfont{\\centering \\normalfont\\scshape} % Make all sections centered, the default font and small caps\n\\usepackage[mathscr]{euscript}\n\\usepackage{bm}\n\\newcommand{\\uvec}[1]{\\boldsymbol{\\hat{\\textbf{#1}}}}\n\\usepackage[thinlines]{easytable}\n\\usepackage{fancyhdr} % Custom headers and footers\n\\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers\n\\fancyhead{} % No page header - if you want one, create it in the same way as the footers below\n\n\\usepackage{multicol}\n\\fancyfoot[L]{} % Empty left footer\n\\fancyfoot[C]{} % Empty center footer\n\\fancyfoot[R]{\\thepage} % Page numbering for right footer\n\\renewcommand{\\headrulewidth}{0pt} % Remove header underlines\n\\renewcommand{\\footrulewidth}{0pt} % Remove footer underlines\n\\setlength{\\headheight}{13.6pt} % Customize the height of the header\n\\usepackage{float}\n\\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\n\\setlength\\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text\n\\usepackage{pgfplots}\n\n\\pgfplotsset{\n  compat=newest,\n  xlabel near ticks,\n  ylabel near ticks\n}\n%----------------------------------------------------------------------------------------\n%\tTITLE SECTION\n%----------------------------------------------------------------------------------------\n\n\\newcommand{\\horrule}[1]{\\rule{\\linewidth}{#1}} % Create horizontal rule command with 1 argument of height\n\n\\title{\t\n\\normalfont \\normalsize \n\\textsc{California State University San Marcos \\\\ Dr. De Leone, Physics 323} \\\\ [25pt] % Your university, school and/or department name(s)\n\\horrule{0.5pt} \\\\[0.4cm] % Thin top horizontal rule\n\\huge H.W. 
4 \\\\ % The assignment title\n\\horrule{2pt} \\\\[0.5cm] % Thick bottom horizontal rule\n}\n\n\\author{Josh Lucas} % Your name\n\n\\date{\\normalsize\\today} % Today's date or a custom date\n\n\\begin{document}\n\n\\maketitle % Print the title\n\n%----------------------------------------------------------------------------------------\n%\tPROBLEM 1\n%----------------------------------------------------------------------------------------\n\n\\section*{Problem 1.6}\n\\textbf{A beam of spin-1/2 particles is prepared in the state}\n$$ \\ket{\\psi} = \\frac{2}{\\sqrt{13}} \\ket{+}_x + i \\frac{3}{\\sqrt{13}} \\ket{-}_x$$\n\\textbf{a) What are the possible results of a measurement of the spin component $S_z$, and with what probabilities would they occur?}\\\\\nWe know that the relation for state vectors in the x direction is  $\\ket{\\pm}_x = \\frac{1}{\\sqrt{2}} \\big [ \\ket{+} \\pm \\ket{-} \\big ]$ which we can substitute in our probability equation,\n\\begin{align*}\n\\mathscr{P_+} & = \\bigg | \\bra{+} \\big \\{ \\tfrac{2}{\\sqrt{13}} \\big ( \\tfrac{1}{\\sqrt{2}} \\big [ \\ket{+} + \\ket{-} \\big ] \\big ) + i\\tfrac{3}{\\sqrt{13}} \\big ( \\tfrac{1}{\\sqrt{2}}   \\big [ \\ket{+} - \\ket{-}  \\big ]\\big ) \\big \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{2}{\\sqrt{26}} \\braket{+|+} + i\\tfrac{3}{\\sqrt{26}} \\braket{+|+} \\bigg |^2 \\\\\n& = \\bigg | \\tfrac{2}{\\sqrt{26}} + i\\tfrac{3}{\\sqrt{26}} \\bigg |^2 \\\\\n\\mathscr{P_+} & = \\tfrac{13}{26} = \\tfrac{1}{2}\n\\end{align*}\n\\begin{align*}\n\\mathscr{P_-} & = \\bigg | \\bra{-} \\big \\{ \\tfrac{2}{\\sqrt{13}} \\big ( \\tfrac{1}{\\sqrt{2}} \\big [ \\ket{+} + \\ket{-} \\big ] \\big ) + i\\tfrac{3}{\\sqrt{13}} \\big ( \\tfrac{1}{\\sqrt{2}}   \\big [ \\ket{+} - \\ket{-}  \\big ]\\big ) \\big \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{2}{\\sqrt{26}} \\braket{-|-} - i\\tfrac{3}{\\sqrt{26}} \\braket{-|-} \\bigg |^2 \\\\\n& = \\bigg | \\tfrac{2}{\\sqrt{26}} - i\\tfrac{3}{\\sqrt{26}} \\bigg |^2 \\\\\n\\mathscr{P_-} & = \\tfrac{13}{26} = \\tfrac{1}{2}\n\\end{align*}\n\\textbf{b) What are the possible results of a measurement of the spin component $S_x$, and with what probabilities would they occur?}\n \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+} & = \\bigg |  \\braket{+_x|\\psi} \\bigg|^2 \\\\\n& = \\bigg | \\bra{+} \\bigg \\{ \\tfrac{2}{\\sqrt{13}}\\ket{+} + i\\tfrac{3}{\\sqrt{13}}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{2}{\\sqrt{13}}\\braket{+|+}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{2}{\\sqrt{13}} \\bigg |^2 \\\\\n  \\mathscr{P_+} & = \\frac{4}{13}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-} & = \\bigg |  \\braket{-_x|\\psi} \\bigg|^2 \\\\\n& = \\bigg | \\bra{-} \\bigg \\{ \\tfrac{2}{\\sqrt{13}}\\ket{+} + i\\tfrac{3}{\\sqrt{13}}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{3i}{\\sqrt{13}}\\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{3i}{\\sqrt{13}} \\bigg |^2 \\\\\n  \\mathscr{P_-} & = \\frac{9}{13}\n  \\end{align*}\n  \\end{multicols}\n  \n  \\textbf{c) Plot histograms of the predicted measurement results from parts (a) and (b).} \\\\\n  \\\\\n   \\begin{tikzpicture}\n    \\begin{axis}[\n      ybar,\n      bar width=15pt,\n      xlabel={$S_z$},\n      ylabel={Probability},\n      ymin=0,\n      ymax =100,\n      ytick=\\empty,\n      xtick=data,\n      axis x line=bottom,\n      axis y line=left,\n      enlarge x limits=0.2,\n      symbolic x coords={ $ + \\frac{\\hbar}{2}$, $ - \\frac{\\hbar}{2}$ },\n      xticklabel 
style={anchor=base,yshift=-\\baselineskip},\n      nodes near coords={\\pgfmathprintnumber\\pgfplotspointmeta\\%}\n    ]\n      \\addplot[fill=white] coordinates {\n        ($ + \\frac{\\hbar}{2}$,50)\n        ($ - \\frac{\\hbar}{2}$,50)\n      };\n    \\end{axis}\n  \\end{tikzpicture}\n     \\begin{tikzpicture}\n    \\begin{axis}[\n      ybar,\n      bar width=15pt,\n      xlabel={$S_x$},\n      ylabel={Probability},\n      ymin=0,\n      ymax =100,\n      ytick=\\empty,\n      xtick=data,\n      axis x line=bottom,\n      axis y line=left,\n      enlarge x limits=0.2,\n      symbolic x coords={ $ + \\frac{\\hbar}{2}$, $ - \\frac{\\hbar}{2}$ },\n      xticklabel style={anchor=base,yshift=-\\baselineskip},\n      nodes near coords={\\pgfmathprintnumber\\pgfplotspointmeta\\%}\n    ]\n      \\addplot[fill=white] coordinates {\n        ($ + \\frac{\\hbar}{2}$,30.77)\n        ($ - \\frac{\\hbar}{2}$,69.23)\n      };\n    \\end{axis}\n  \\end{tikzpicture}\n\n\n\\section*{Problem 1.10}\n\\textbf{Consider the three quantum states:}\n\\begin{align*}\n\\ket{\\psi_1} & = \\tfrac{4}{5} \\ket{+} + i\\tfrac{3}{5} \\ket{-} \\\\\n\\ket{\\psi_2} & = \\tfrac{4}{5} \\ket{+} - i\\tfrac{3}{5} \\ket{-} \\\\\n\\ket{\\psi_3} & = - \\tfrac{4}{5} \\ket{+} + i\\tfrac{3}{5} \\ket{-} \n\\end{align*}\n\\textbf{a) For each of the $\\ket{\\psi}$ above, calculate the probabilities of spin component measurements\nalong the x-, y-, and z-axes.}\\\\\n\\\\\n%%%%%xxxxxxxxxxxx%%%%%%%%%%%%%%%%%%%%%%%\n \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_x & = \\bigg | \\braket{+_x|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} + \\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} + i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_x & = \\bigg |  \\braket{-_x|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} -\\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} - i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%YYYYYYYYYYYYYYYYY%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n   \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_y & = \\bigg | \\braket{+_y|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} - i\\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - i^2\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} + \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_y & = \\tfrac{49}{50}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_y & = \\bigg |  \\braket{-_y|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} + i \\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + i^2\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} - \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_y & = 
\\tfrac{1}{50}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Zzzzzzz%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n   \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_z & = \\bigg | \\braket{+_z|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg | \\bra{+}  \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5}\\braket{+|+}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{4}{5} \\bigg |^2 \\\\\n  \\mathscr{P_+}_z & = \\tfrac{16}{25}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_z & = \\bigg |  \\braket{-_z|\\psi_1} \\bigg|^2 \\\\\n& = \\bigg |  \\bra{-} \\bigg \\{ \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |    i\\tfrac{3}{5} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  i \\tfrac{3}{5} \\bigg |^2 \\\\\n  \\mathscr{P_-}_z & = \\tfrac{9}{25}\n  \\end{align*}\n  \\end{multicols}\\horrule{2pt} \n  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n  %%%%%%%%%%%%%%%%%%BBBB%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n   \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_x & = \\bigg | \\braket{+_x|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} + \\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} - i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_x & = \\bigg |  \\braket{-_x|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} -\\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} + i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%YYYYYYYYYYYYYYYYY%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{multicols}{2}\n\\noindent\n\t\\begin{align*}\n\\mathscr{P_+}_y & = \\bigg | \\braket{+_y|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} - i\\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + \\tfrac{3i^2}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} - \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_y & = \\tfrac{1}{50}\n  \\end{align*}\n  \\begin{align*}\n\\mathscr{P_-}_y & = \\bigg |  \\braket{-_y|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} + i \\bra{-}\\big) \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - \\tfrac{3i^2}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  \\tfrac{4}{5\\sqrt{2}} + \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_y & = \\tfrac{49}{50}\n  \\end{align*}\n\\end{multicols}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Zzzzzzz%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{multicols}{2}\n\\noindent\n\t\\begin{align*}\n\\mathscr{P_+}_z & = \\bigg | \\braket{+_z|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg | \\bra{+}  \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  \\tfrac{4}{5}\\braket{+|+}  \\bigg |^2 \\\\\n  & = \\bigg |  
\\tfrac{4}{5} \\bigg |^2 \\\\\n  \\mathscr{P_+}_z & = \\tfrac{16}{25}\n\t\\end{align*}\n\t\\begin{align*}\n\\mathscr{P_-}_z & = \\bigg |  \\braket{-_z|\\psi_2} \\bigg|^2 \\\\\n& = \\bigg |  \\bra{-} \\bigg \\{ \\tfrac{4}{5}\\ket{+} - i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  -  i\\tfrac{3}{5} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg | - i \\tfrac{3}{5} \\bigg |^2 \\\\\n  \\mathscr{P_-}_z & = \\tfrac{9}{25}\n\t\\end{align*}\n\\end{multicols}\n%%%%%%%%%%%%%%%%%%%%%%%33333\n\\horrule{2pt} \n \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_x & = \\bigg | \\braket{+_x|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} + \\bra{-}\\big) \\bigg \\{ -\\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  -\\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg | - \\tfrac{4}{5\\sqrt{2}} + i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_x & = \\bigg |  \\braket{-_x|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} -\\bra{-}\\big) \\bigg \\{- \\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  -\\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - i\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  -\\tfrac{4}{5\\sqrt{2}} - i\\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_x & = \\tfrac{25}{50} = \\tfrac{1}{2}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%YYYYYYYYYYYYYYYYY%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n   \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_y & = \\bigg | \\braket{+_y|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} - i\\bra{-}\\big) \\bigg \\{ -\\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg | - \\tfrac{4}{5\\sqrt{2}}\\braket{+|+} - i^2\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n  & = \\bigg |  -\\tfrac{4}{5\\sqrt{2}} + \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+}_y & = \\tfrac{1}{50}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_y & = \\bigg |  \\braket{-_y|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\big(\\bra{+} + i \\bra{-}\\big) \\bigg \\{ -\\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  -\\tfrac{4}{5\\sqrt{2}}\\braket{+|+} + i^2\\tfrac{3}{5\\sqrt{2}} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  -\\tfrac{4}{5\\sqrt{2}} - \\tfrac{3}{5\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-}_y & = \\tfrac{49}{50}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Zzzzzzz%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n   \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n\\mathscr{P_+}_z & = \\bigg | \\braket{+_z|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg | \\bra{+}  \\bigg \\{ -\\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |  -\\tfrac{4}{5}\\braket{+|+}  \\bigg |^2 \\\\\n  & = \\bigg |  -\\tfrac{4}{5} \\bigg |^2 \\\\\n  \\mathscr{P_+}_z & = \\tfrac{16}{25}\n  \\end{align*}\n   \\begin{align*}\n\\mathscr{P_-}_z & = \\bigg |  \\braket{-_z|\\psi_3} \\bigg|^2 \\\\\n& = \\bigg |  \\bra{-} \\bigg \\{ -\\tfrac{4}{5}\\ket{+} + i\\tfrac{3}{5}\\ket{-} \\bigg \\} \\bigg |^2 \\\\\n& = \\bigg |    i\\tfrac{3}{5} \\braket{-|-}  \\bigg |^2 \\\\\n & = \\bigg |  i \\tfrac{3}{5} \\bigg |^2 \\\\\n  \\mathscr{P_-}_z & = \\tfrac{9}{25}\n  \\end{align*}\n  \\end{multicols}\n\n\n\\textbf{b) Use your results from (a) to comment on the 
importance of the overall phase and of the\nrelative phases of the quantum state vector.} \\\\\n\\\\\nComparing $\\ket{\\psi_2}$ and $\\ket{\\psi_3} = -\\ket{\\psi_2}$, which differ only by an overall phase, all measurement probabilities agree, so the overall phase of a state vector is not observable. Comparing $\\ket{\\psi_1}$ and $\\ket{\\psi_2}$, which differ only in the sign of the relative phase between their components, the $y$-axis probabilities differ, so the relative phase does affect measurement results.\n\\section*{Problem 1.11}\n\\textbf{A beam of spin-1/2 particles is prepared in the state}\\\\\n$$\\ket{\\psi} = \\tfrac{3}{\\sqrt{34}} \\ket{+} + i\\tfrac{5}{\\sqrt{34}} \\ket{-}$$ \n\\\\\n\\textbf{a) What are the possible results of a measurement of the spin component $S_z$, and with what\nprobabilities would they occur?} \\\\\n\\begin{multicols}{2}\n\\noindent\n\\begin{align*}\n\\mathscr{P}_+ & = \\bigg | \\bra{+} \\big \\{  \\tfrac{3}{\\sqrt{34}} \\ket{+} + i\\tfrac{5}{\\sqrt{34}} \\ket{-} \\big \\} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{3}{\\sqrt{34}} \\bigg|^2 \\\\\n\\mathscr{P}_+ & = \\tfrac{9}{34}\n\\end{align*}\n\\begin{align*}\n\\mathscr{P}_- & = \\bigg | \\bra{-} \\big \\{  \\tfrac{3}{\\sqrt{34}} \\ket{+} + i\\tfrac{5}{\\sqrt{34}} \\ket{-} \\big \\} \\bigg|^2 \\\\\n& = \\bigg | i\\tfrac{5}{\\sqrt{34}} \\bigg|^2 \\\\\n\\mathscr{P}_- & = \\tfrac{25}{34}\n\\end{align*}\n\\end{multicols}\n\\textbf{b) Suppose that the $S_z$ measurement yields the result $S_z = -\\frac{\\hbar}{2}$. Subsequent to that result a second measurement is performed to measure the spin component $S_x$. What are the\npossible results of that measurement, and with what probabilities would they occur?}\\\\\n\\\\\nIf the measurement shows that the system is in the $z$-down state $\\ket{-}$, then we expect an even distribution of spins in the $x$ direction:\n\\begin{multicols}{2}\n\\noindent\n\\begin{align*}\n\\mathscr{P}_+ & = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} + \\bra{-}\\big) \\ket{-} \\bigg|^2 \\\\\n& = \\bigg | \\tfrac{1}{\\sqrt{2}} \\bigg|^2 \\\\\n\\mathscr{P}_+ & = \\tfrac{1}{2}\n\\end{align*}\n\\begin{align*}\n\\mathscr{P}_- & = \\bigg | \\tfrac{1}{\\sqrt{2}}\\big(\\bra{+} - \\bra{-}\\big) \\ket{-} \\bigg|^2 \\\\\n& = \\bigg | -\\tfrac{1}{\\sqrt{2}} \\bigg|^2 \\\\\n\\mathscr{P}_- & = \\tfrac{1}{2}\n\\end{align*}\n\\end{multicols}\n\\textbf{c) Draw a schematic diagram depicting the successive measurements in parts (a) and (b).}\\\\\n\\\\\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%----------------------------------------------------------------------------------------\n\n\\end{document}", "meta": {"hexsha": "65d7605639143653ba65c422c7e5820b0f3e33cf", "size": 17589, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "H.W. 4 Due Wednesday 9-12-2018/Quantum H.W. 4.tex", "max_stars_repo_name": "Epikarsios/QuantumHW", "max_stars_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "H.W. 4 Due Wednesday 9-12-2018/Quantum H.W. 4.tex", "max_issues_repo_name": "Epikarsios/QuantumHW", "max_issues_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "H.W. 4 Due Wednesday 9-12-2018/Quantum H.W. 
4.tex", "max_forks_repo_name": "Epikarsios/QuantumHW", "max_forks_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.7792553191, "max_line_length": 241, "alphanum_fraction": 0.5362442436, "num_tokens": 7455, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.8289388040954683, "lm_q1q2_score": 0.5741578066474703}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\n% Main maths packages\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\n% Required to use the underbracket\n\\usepackage{mathtools}\n\n\\begin{document}\n\n\\subsection*{Basic accents}\n\n\\[\n\t\\vec{a} =\n\t\\begin{pmatrix}\n\t\t1\\\\\n\t\t1\n\t\\end{pmatrix}\n\\]\n\n\\[\n\\hat{a} = 90^\\circ\n\\]\n\nThe other accents can be found in the symbols list in the \\emph{Best Practices} part in the \\LaTeX Compendium.\n\n\\subsection*{Stacking elements}\n\n\\[\n\ta \\overset{?}{=} b\n\\]\n\n\\[\n\ta \\underset{k = 0}{=} b\n\\]\n\n\\[\n\t\\sum_{\\substack{k=0 \\\\ i=0 \\\\ j=0}}^{n} i + j + k\n\\]\n\n\\subsection*{Arrows}\n\n\\[\n\ta \\xrightarrow[down]{up} b\n\\]\n\n\\[\n\ta \\xleftarrow[down]{up} b\n\\]\n\n\\subsection*{Complex decoration}\n\n\\[\n\tz = \\overbrace{\\underbracket{x}_{\\text{real}} + \\underbracket{iy}_{\\text{imaginary}}}^{\\text{complex number}}\n\\]\n\n\\end{document}", "meta": {"hexsha": "e87a5f80d782b29d6dc819ce32fc607f022c9a2e", "size": 877, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "compendium/mathematics/accents.tex", "max_stars_repo_name": "ZenLulz/LatexCompendium", "max_stars_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-07-30T21:43:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-23T20:16:19.000Z", "max_issues_repo_path": "compendium/mathematics/accents.tex", "max_issues_repo_name": "ZenLulz/LatexCompendium", "max_issues_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "compendium/mathematics/accents.tex", "max_forks_repo_name": "ZenLulz/LatexCompendium", "max_forks_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.1451612903, "max_line_length": 110, "alphanum_fraction": 0.6545039909, "num_tokens": 322, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.574149402587924}}
{"text": "\\documentclass{article}\n%\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{color}\n\\usepackage{tabu}\n\\usepackage{longtable}\n\\usepackage{mathrsfs}\n\\usepackage{enumerate}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{Final 06}\n\\rhead{Jon Allen}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\\newcommand{\\degree}{\\ensuremath{^\\circ}}\n\n\\begin{document}\n\\subsubsection*{PDE D.2}\n\\begin{align*}\n  \\text{PDE.}&&\\frac{\\partial u}{\\partial t}&=\\frac{\\partial^2u}{\\partial x^2}&&\\text{for}&0&<x<\\infty,&0&<t<\\infty\\\\\n  \\text{BC.}&&u(0,t)&=f(t)&&\\text{for}&&&0&<t<\\infty\\\\\n  \\text{IC.}&&u(x,0)&=0&&\\text{for}&0&<x<\\infty\n\\end{align*}\n\nApply the Laplace transform with respect to $t$ to PDE D.2 to obtain the relation $U(x,s)=sF(s)W(x,s)$, where $U(x,s)$ is the Laplace transform of the solution $u(x,t), F(s)$ is the transform of $f(t)$ and $W(x,s)$ is the transform of problem 5. Show that the result leads to the formula\n\n\\[u(x,t)=f(0)\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}}\\right)+t\\int_0^1{\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}}(1-u)^{-1/2}\\right)f'(tu)\\mathrm{d}u}\\]\n\n\\begin{align*}\n  sU(x)-0&=\\frac{\\mathrm{d}^2U}{\\mathrm{d}^2x}\\\\\n  U(0)&=F(s)\\\\\n  0&=\\frac{\\mathrm{d}^2U}{\\mathrm{d}^2x}-sU(x)\\\\\n  U(x)&=c_1e^{x\\sqrt{s}}+c_2e^{-x\\sqrt{s}}\\\\\n  U(0)&=F(s)=c_1+c_2\\\\\n  c_1=0&\\quad c_2=F(s)\\\\\n  U(x)&=F(s)e^{-x\\sqrt{s}}=\\frac{s}{s}F(s)e^{-x\\sqrt{s}}\\\\\n  W(s)&=\\frac{1}{s}e^{-x\\sqrt{s}}\\\\\n  U(x)&=sF(s)W(x)\n\\end{align*}\nAnd now we do the reverse transform\n\\begin{align*}\n  U(x)&=(sF(s)-f(0)+f(0))W(x)\\\\\n  &=(sF(s)-f(0))W(x)+f(0)W(x)\\\\\n  u(x,t)&=f(0)\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}}\\right)+\\int_0^t{f'(u)\\,\\text{erfc}\\left(\\frac{x}{2\\sqrt{(t-u)}}\\right)\\,\\mathrm{d}u}\\\\\n  &=f(0)\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}}\\right)+t\\int_0^1{f'(tu)\\,\\text{erfc}\\left(\\frac{x}{2\\sqrt{(t-tu)}}\\right)\\,\\mathrm{d}u}\\\\\n  &=f(0)\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}}\\right)+t\\int_0^1{f'(tu)\\,\\text{erfc}\\left(\\frac{x}{2\\sqrt{t}\\sqrt{1-u}}\\right)\\,\\mathrm{d}u}\\\\\n\\end{align*}\n\\end{document}\n", "meta": {"hexsha": "6d8e20c42907d9f8ef4236dcada5f6487a0aa00d", "size": 2089, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-final-06.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-final-06.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-final-06.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6851851852, "max_line_length": 287, 
"alphanum_fraction": 0.6208712303, "num_tokens": 950, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174789, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5741493981483315}}
{"text": "\\documentclass[12pt,letterpaper]{article}\n\n%\\usepackage{keystroke}\n\\usepackage{amsmath}\n\\usepackage{algpseudocode}\n\n%if i don't put that trailing space, spaces after \\svdimage wont show\n%like this \"SvdImageis cool beans\"\n%don't know why\n\\newcommand{\\svdimage}{\\texttt{SvdImage} }\n\n\\newcommand{\\codeitem}[2]{\\item[\\texttt{\\%> #1}] \\hfill \\\\ #2}\n\n\\begin{document}\n\n\\title{SvdImage Manual}\n\\author{Tim Martin}\n\\date{Spring 2012}\n\\maketitle\n\n\\section{Description}\nIn an abstracted form, images are matrices of color values. Therefore, we can\nperform linear algebra operations on images just as we do on general matrices.\nSpecifically, when we perform a Singular Value Decomposition (SVD) on an image,\nwe can represent it as the product of 3 matrices: the left-singular vectors,\na diagonal matrix of the decreasing singular values and the right-singular vectors.\n\n\\begin{equation}\nA=USV^{T}\n\\end{equation}\n\nPerforming an SVD gets interesting for images when we truncate the factor\nmatrices. Truncation in this sense is the removal of a certain number of\ntrailing columns of $U$ and $V$ and trailing columns and rows of $S$. By\nmultiplying the matrix back into $A$ after truncating, we get back a similar\nimage to the original. Depending on the amount of truncation, we can keep a\n``good'' representation of the image in less space! \n\n\\svdimage is a program and developer library that performs a truncated SVD. It\nis written in the Ruby programming language and provides a command line\ninterface.\n\n\\section{Features}\n\\begin{itemize}\n\\item Command Line Interface\n\\item Grayscale, RGB, and CMYK channel processing\n\\item Level of truncation can be ``automatic'' or user-specified\n\\item Supports most image file formats\n\\item Codec provides targeted file format (.svd)\n\\item Ruby Gem distribution\n\\item Can be used a library\n\\end{itemize}\n\n\\section{Compression}\nThe main application of \\svdimage is to compress images. However, this does not\ncome for free. We are indeed losing information about the original image. This\nhappens by reducing the rank our image to some value. For example, an image\n\\texttt{dog.jpg}may have 200 linearly independent rows and columns. If we run\n\\svdimage on this image with a k value of 150, the output will be an image that\nlooks similar to \\texttt{dog.jpg}, but is acutally now comprised of 150 linearly\nindependent rows and columns. In other words, 50 rows and columns will be the\nlinear combination of the other 150 rows and columns. \n\nAnother caveat is that you may not always notice compression in file size. This\nis dependent on a few factors (image content, colorspace, file format). Not each\nfile format stores truncated SVDs in a way optimal to this operation. I can't\nimagine a file format that would be able to understand if some rows or columns\nare linearly dependent.\n\nHowever, this is not to say you will never see compression! Personally, most\ntimes, I have.\n\n\\section{Sigma Ratio Threshold}\n\\svdimage comes with the option to use the \\texttt{-a}/\\texttt{--auto-k} flags.\nThese tell \\svdimage that you would like to try to find a good truncation\nautomatically. This truncation is unique to to the image's singular values and\nthe sigma threshold. First, we set our sigma threshold equal to some value. By\ndefault, \\svdimage uses 0.2, but you may also specify this yourself by providing\nan argument to the \\texttt{-a}/\\texttt{--auto-k} flags. 
Here is pseudocode for\nthe algorithm:\n\n%algorithmic doesn't like \\operatorname\n\\begin{algorithmic}\n  \\Require $0 \\leq \\operatorname{sigma\\_threshold} < 1$\n  \\For{$k = 1 \\to \\operatorname{k\\_max}$}\n    \\If{$ \\operatorname{sigma\\_threshold} \\geq \\frac{\\sum_{i=k+1}^{t} \\sqrt{\\sigma_{i}^{2}}}{\\sum_{i=1}^{t} \\sqrt{\\sigma_{i}^{2}}}$}\n      \\Return $k$\n    \\EndIf\n  \\EndFor\n\\end{algorithmic}\n\nIntuitively, you may think of this as leaving out\na fraction $\\operatorname{sigma\\_threshold}$ of the image's information.\n\n\\section{Question}\nThe project poses a question:\n\\begin{quote}\nTo get a good easily recognizable image, do you need to have $\\sigma_{k+1}$ small\ncompares to 255, or just small compared to $\\sigma_1$, or is there some other criterion of smallness\nthat is even more relevant?\nTry this compression with portraits of faces. How small can you make k, and still\nkeep the portrait recognizable? In other words (roughly), what is the rank of a human\nface?\n\\end{quote}\n\nI believe I have answered this question with my explanation of the sigma ratio\nthreshold. A good $k$ is not dependent on 255 or just being small, but it is\ninstead dependent on the complexity of the image. $k$ must be chosen such that\nyou do not lose too much information from the image.\n\n\\section{Requirements}\n\\subsection{System}\n\\begin{itemize}\n\\item Ruby 1.9.2\n\\item GNU Scientific Library 1.15\n\\item ImageMagick 6.7.6-0\n\\item RubyGems 1.8.17\n\\end{itemize}\n\\subsection{Ruby Gems}\n\\begin{itemize}\n\\item bundler 1.0.21\n\\end{itemize}\nBundler will be used to easily download and install other gems.\n\n\\section{Installation}\nOnce all requirements have been met, you can install \\svdimage. If you are\nunable to get these requirements or perform the following installation, please\ncontact me!\n\\begin{description}\n\\codeitem{tar -xzvf svdimage-0.1.0.tar.gz}{Extract the archive contents.}\n\\codeitem{cd svdimage-0.1.0/}{Change into the \\svdimage directory.}\n\\codeitem{bundle install}{Install all \\svdimage dependency gems to your\nsystem.}\n\\codeitem{rake install}{Install \\svdimage to your system.}\n\\end{description}\nAnd that's it! \\svdimage should be in your \\texttt{\\$PATH} and you can start\nusing it anywhere.\n\n\\section{Usage}\n\\footnotesize\n\\begin{verbatim}\nUsage: svdimage INPUT_IMAGE OUTPUT_IMAGE [OPTIONS]\n    -c, --colorspace COLORSPACE      Defines the colorspace of output-image.\n                                       Must be \"rgb\", \"gray\", or \"cmyk\".\n                                       Defaults to \"rgb\".\n    -k RANK                          Truncates the SVD to rank RANK. 
May not be\n                                       used with -a/--auto-k\n    -a, --auto-k [SIGMA_THRESHOLD]   Truncates the SVD to a rank determined by\n                                       the unique singular values of input-file.\n                                       Compares the sum of the sqaure roots of\n                                       the squares of the singular values to\n                                       SIMGA_THRESHOLD, which may be provided as\n                                       an argument or defaults to 0.2.\n                                       May not be used with -k\n\n    -h, --help                       Show this message\n        --version                    Show version\n\\end{verbatim}\n\\normalsize\n\n\\section{Examples}\nFor the following examples, you must have \n\\begin{description}\n\\codeitem{svdimage in.jpg out.jpg -k 100}{Truncate \\texttt{in.jpg} to have a rank of 100 and write it out as \\texttt{out.jpg}. Ranks must be $1 \\leq k < \\min(\\operatorname{image\\_height}, \\operatorname{image\\_width})$.}\n\\codeitem{svdimage format1.png format2.gif -k 100}{Like previous, but notice how you can use file formats interchangeably.}\n\\codeitem{svdimage in.jpg out.jpg -a}{Truncate \\texttt{in.jpg} to a ``reasonable'' rank. This algorithm attempts to remove unneeded ranks, but what ranks you consider unneeded may differ from me!}\n\\codeitem{svdimage in.jpg out.jpg -a 0.05}{Like previous, but with a specification of the sigma ratio. See the respective section for more information on how to use this.}\n\\codeitem{svdimage in.jpg out.svd -k 20}{Perform a truncation, but output to \\texttt{.svd} format, a format designed to store SVDs.}\n\\codeitem{svdimage color.jpg bw.jpg -k 43 -c gray}{Decompose to grayscale. 
The default colorspace is rgb.}\n\\end{description}\n\n\\section{License}\nCopyright (c) 2012 Tim Martin\n\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n'Software'), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\n\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\nCLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\nTORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\nSOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\\end{document}\n", "meta": {"hexsha": "500a96bfeb858ee2174e4e46e7dc957faba59df4", "size": 8802, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/manual.tex", "max_stars_repo_name": "t-mart/svdimage", "max_stars_repo_head_hexsha": "1b9600b9d9535ce8d2a6d57b216cd803b87fb66f", "max_stars_repo_licenses": ["MIT", "Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-08-27T16:07:46.000Z", "max_stars_repo_stars_event_max_datetime": "2015-08-27T16:07:46.000Z", "max_issues_repo_path": "doc/manual.tex", "max_issues_repo_name": "t-mart/svdimage", "max_issues_repo_head_hexsha": "1b9600b9d9535ce8d2a6d57b216cd803b87fb66f", "max_issues_repo_licenses": ["MIT", "Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/manual.tex", "max_forks_repo_name": "t-mart/svdimage", "max_forks_repo_head_hexsha": "1b9600b9d9535ce8d2a6d57b216cd803b87fb66f", "max_forks_repo_licenses": ["MIT", "Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1384615385, "max_line_length": 219, "alphanum_fraction": 0.7331288344, "num_tokens": 2247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5741493860072541}}
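The manual's two core ideas, rank-k truncation and the sigma ratio threshold, fit in a few lines of NumPy. The sketch below mirrors the pseudocode from the Sigma Ratio Threshold section (using that sqrt(sigma_i^2) = sigma_i for sigma_i >= 0); the function names truncate_svd and auto_k are illustrative and are not the Ruby gem's API.
\begin{verbatim}
import numpy as np

def truncate_svd(A, k):
    """Best rank-k approximation of the matrix A via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def auto_k(A, sigma_threshold=0.2):
    """Smallest k whose trailing singular values sum to at most
    sigma_threshold of the total -- the manual's auto-k rule."""
    s = np.linalg.svd(A, compute_uv=False)
    total = s.sum()
    for k in range(1, len(s) + 1):
        if s[k:].sum() / total <= sigma_threshold:
            return k
    return len(s)

A = np.random.rand(64, 64)          # stand-in for one image channel
k = auto_k(A)
print(k, np.linalg.norm(A - truncate_svd(A, k)) / np.linalg.norm(A))
\end{verbatim}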
{"text": "\\documentclass[12]{scrartcl}\n\\usepackage{amssymb,amsmath,gensymb,dsfont,calc,multicol,fullpage}\n\\makeatletter\n\\newcommand\\Aboxed[1]{\n   \\@Aboxed#1\\ENDDNE}\n\\def\\@Aboxed#1&#2\\ENDDNE{%\n   &\n   \\settowidth\\@tempdima{$\\displaystyle#1{}$}\n   \\setlength\\@tempdima{\\@tempdima+\\fboxsep+\\fboxrule}\n   \\kern-\\@tempdima\n   \\boxed{#1#2}\n}\n\\makeatother\n\n\\begin{document}\n\n\\title{Homework 32, Section 6.3: 2, 4, 6, 8, 12}\n\\author{Alex Gordon}\n\\date{\\today}\n\\maketitle\n\\section*{Homework}\n\\subsection*{2.}\n$C(7,5) \\cdot (\\frac{1}{6})^5 \\cdot (\\frac{5}{6})^2 + C(7,6) \\cdot (\\frac{1}{6})^6 \\cdot (\\frac{5}{6})^1 + C(7,7) \\cdot (\\frac{1}{6})$\n\\subsection*{4.}\n$\\frac{1}{2}^{10} \\cdot (C(10,8) + C(10,9) + 1)$\n\\subsection*{6.}\n$ C(5,3) \\cdot (\\frac{1}{2})^3 \\cdot (\\frac{1}{2})^2 + C(5,4) \\cdot (\\frac{1}{2})^4 \\cdot (\\frac{1}{2})^1 + \\frac{1}{2}^5$\n\\subsection*{8.}\n$C(4,3) \\cdot (\\frac{1}{2})^3 \\cdot (\\frac{1}{2})^1 + \\frac{1}{2}^4$\n\\subsection*{12. A)}\n$\\frac{3 \\cdot 5 \\cdot 4}{6^3}$\n\\subsection*{12. B)}\n$\\frac{3 \\cdot 5 \\cdot 4}{6^3}$\n\\subsection*{12. C)}\n$\\frac{3 \\cdot 5 + 5}{6^3}$\n\n\n\n\n\\end{document}", "meta": {"hexsha": "04f236f94d13c0459c01cef6ad237e051801d58d", "size": 1090, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DiscreteMath/Homework32.tex", "max_stars_repo_name": "alexggordon/latex", "max_stars_repo_head_hexsha": "7dd945f33490e6585e26cff39d9cf6ad8f582a0e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DiscreteMath/Homework32.tex", "max_issues_repo_name": "alexggordon/latex", "max_issues_repo_head_hexsha": "7dd945f33490e6585e26cff39d9cf6ad8f582a0e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DiscreteMath/Homework32.tex", "max_forks_repo_name": "alexggordon/latex", "max_forks_repo_head_hexsha": "7dd945f33490e6585e26cff39d9cf6ad8f582a0e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.25, "max_line_length": 134, "alphanum_fraction": 0.5926605505, "num_tokens": 529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5741493860072541}}
{"text": "% Copyright 2017 Markus J. Pflaum, licensed under CC BY-NC-ND 4.0\n% main author: \n%   Markus J. Pflaum\n%\n\\section{Inner product spaces}\n\\label{sec:inner-product-spaces}\n%\n%\n\\para \nLet us first remind the reader that as before $\\fldK$ stands for the field \nof real or of complex numbers. We will keep this notational agreement \nthroughout the whole chapter.\n\\begin{definition}\nBy a \\emph{sesquilinear form} on a $\\fldK$-vector space $\\vectorspV$ one understands a map\n$\\langle \\cdot , \\cdot \\rangle : \\vectorspV \\times \\vectorspV \\to \\fldK$\nwith the following two properties:\n\\begin{enumerate}[label={\\textup{({\\sffamily SF\\arabic*})}},leftmargin=*]\n\\item\n\\label{axiom:form-linearity-in-first-coordinate} \n  The map $\\langle \\cdot , \\cdot \\rangle$ is \\emph{linear} in its first coordinate which means that\n  \\[\n    \\langle v_1 + v_2, w \\rangle = \\langle v_1, w \\rangle + \\langle v_2, w \\rangle\n    \\quad \\text{and} \\quad \\langle r v, w \\rangle = r \\langle v, w \\rangle\n  \\]\n  for all  $v, w , v_1,v_2 \\in \\vectorspV$ and $r\\in \\fldK$.\n\\item\n \\label{axiom:form-conjugate-linearity-in-second-coordinate} \n  The map $\\langle \\cdot , \\cdot \\rangle$ is \\emph{conjugate-linear} in its second coordinate which means that\n  \\[\n    \\langle v_1 + v_2, w \\rangle = \\langle v_1, w \\rangle + \\langle v_2, w \\rangle\n    \\quad \\text{and} \\quad \\langle  v, r w \\rangle = \\overline{r} \\langle v, w \\rangle\n  \\]\n  for all  $v, w , v_1,v_2 \\in \\vectorspV$ and $r\\in \\fldK$.\n\\end{enumerate}\nA \\emph{hermitian form} is a sesquilinear form $\\langle \\cdot , \\cdot \\rangle$ on $\\vectorspV$ with the following\nadditional property:\n\\begin{enumerate}[label={\\textup{({\\sffamily SF\\arabic*})}},leftmargin=*,resume]\n\\item\n\\label{axiom:form-conjugate-symmetry} \n  The map $\\langle \\cdot , \\cdot \\rangle$ is \\emph{conjugate symmetric} which means that \n  \\[\n    \\langle v, w \\rangle = \\overline{\\langle w, v \\rangle} \\quad \\text{for all  } v, w \\in \\vectorspV \\ .\n  \\]\n\\end{enumerate}\nA sesquilinear form  $\\langle \\cdot , \\cdot \\rangle$ is  called\n\\emph{weakly-nondegenerate} if it satisfies axiom\n\\begin{enumerate}[label={\\textup{({\\sffamily SF\\arabic*w)}}},leftmargin=*,resume]\n\\item\n\\label{axiom:form-weakly-nondegenerate} \n  For every $v\\in \\vectorspV$, the map $\\vectorspV \\to \\fldK$, $w \\to \\langle w,v \\rangle$ is the zero \n  map if and only if $v=0$.\n\\end{enumerate}\nFinally, one calls a hermitian form  $\\langle \\cdot , \\cdot \\rangle$ on $\\vectorspV$   \n\\emph{positive semidefinite} if\n\\begin{enumerate}[label={\\textup{({\\sffamily SF\\arabic*s})}},leftmargin=*,resume]\n\\item\n\\label{axiom:form-positive-semidefinite} \n  $\\langle v,v \\rangle \\geq 0$ for all $v\\in \\vectorspV$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark}\nRecall that a map  $\\langle \\cdot , \\cdot \\rangle : \\vectorspV \\times \\vectorspV \\to \\fldK$ is called \\emph{bilinear},\nif it satisfies \\ref{axiom:form-linearity-in-first-coordinate} and \n\\begin{axiomlist}[BF]\n\\item\n \\label{axiom:form-linearity-in-second-coordinate} \n  The map $\\langle \\cdot , \\cdot \\rangle$ is \\emph{linear} in its second coordinate which means that\n  \\[\n    \\langle v_1 + v_2, w \\rangle =  \\langle  v_1, w \\rangle + \\langle v_2, w \\rangle\n    \\quad \\text{and} \\quad \\langle  v, r w \\rangle = r \\langle v, w \\rangle\n  \\]\n  for all  $v, v_1,v_2, w \\in \\vectorspV$ and 
$r\\in \\fldK$.\n\\end{axiomlist}\nIf the underlying ground field $\\fldK$ coincides with the field of real numbers, \na sesquilinear form is by definition the same as a bilinear form,\nand a hermitian form the same as a symmetric bilinear form. \n\\end{remark}\n\n\\para \\label{para:properties-seminorm-associated-positive-semidefinite-hermitian-form}\nGiven a positive semidefinite hermitian form $\\langle \\cdot , \\cdot \\rangle$ on a \n$\\fldK$-vector space $\\vectorspV$, one calls two vectors $v,w \\in \\vectorspV$ \\emph{orthogonal} \nif $\\langle v , w \\rangle = 0$. Since the hermitian form \n$\\langle \\cdot , \\cdot \\rangle$ is assumed to be positive semidefinite, the map \n\\[\n \\| \\cdot \\| : \\vectorspV \\to \\R_{\\geq 0} , \\: v \\mapsto \\|v \\| = \\sqrt{\\langle v , v \\rangle} \n\\]\nis well-defined. We will later see that $\\| \\cdot \\|$  is a seminorm on $\\vectorspV$\nand therefore call the map $\\| \\cdot \\|$ the \\emph{seminorm associated to}\n$\\langle \\cdot , \\cdot \\rangle$. The following formulas are immediate consequences of the \nproperties defining a positive semidefinite hermitian form and the definition\nof the associated seminorm: \n\\begin{align} \n  \\label{eq:norm-squared-sum}\n  & \\| v+w \\|^2  = \\| v\\|^2 + 2\\, \\Re \\, \\langle  v ,  w \\rangle + \\| w\\|^2 \\quad \\text{for all } \n  v,w \\in \\vectorspV \\ , \\\\\n  \\label{eq:pythagorean-theorem}\n  & \\| v+w \\|^2  = \\| v\\|^2 + \\| w\\|^2 \\quad \\text{for all orthogonal } v,w \\in \\vectorspV \\ ,  \\\\\n  \\label{eq:parallelogram-identity} \n  & \\| v+w \\|^2  + \\|v-w\\|^2  = 2 \\big( \\| v\\|^2 + \\| w\\|^2 \\big) \\quad \\text{for all } v,w \\in \\vectorspV\\ ,\\\\\n  \\label{eq:absolute-homogeneity} \n  & \\| rv \\| = \\sqrt{ |r|^2 \\langle  v ,  v \\rangle }  = |r| \\| v \\| \n  \\quad \\text{for all } v \\in \\vectorspV \\text{ and } r\\in \\fldK \\ .\n\\end{align} \nFormula \\eqref{eq:pythagorean-theorem} is an abstract version of the \\emph{Pythagorean theorem},\n\\Cref{eq:parallelogram-identity} is called the \\emph{parallelogram identity}. \nThe triangle inequality for the map $\\| \\cdot \\|$  will turn out to be a consequence of the \nfollowing result. \n\n\\begin{proposition}[Cauchy--Schwarz inequality]\n\\label{thm:cauchy-schwartz-inequality-inner-product-spaces}\n%\nGiven a positive semidefinite hermitian form $\\langle \\cdot , \\cdot \\rangle$ on a $\\fldK$-vector space \n$\\vectorspV$ the following inequality holds true:\n\\begin{equation}\n\\label{eq:cauchy-schwarz-inequality}\n  |\\langle v, w \\rangle| \\leq \\|v\\|\\|w\\| \\quad \\text{for all $v,w \\in \\vectorspV$}.\n\\end{equation}\nIf the form is positive definite, equality holds if and only if $v$ and $w$ are linearly dependent.\n\\end{proposition}\n\\begin{proof}\nIf $\\|v\\| = 0$, then $0 \\leq \\| tv + w \\|^2 = 2t \\, \\Re \\, \\langle v, w \\rangle + \\|w\\|^2$ for all $t \\in \\R$, which forces $\\Re \\, \\langle v, w \\rangle = 0$; in the complex case, applying the same argument to $iv$ yields $\\Im \\, \\langle v, w \\rangle = 0$. So both sides of \\eqref{eq:cauchy-schwarz-inequality} vanish, and we may assume $\\|v\\| > 0$. 
Put\n\\[\n  c = - \\frac{\\langle v, w \\rangle}{\\|v\\|^2}\n\\]\nand compute\n\\begin{equation}\n\\label{eq:expanding-norm-of-sum-in-inner-product-space}\n\\begin{split}\n   0 & \\leq \\|c v + w\\|^2 = \\langle c v + w, c v + w \\rangle = \n   |c|^2 \\langle v, v \\rangle + c \\langle v, w \\rangle + \n   \\overline{c}\\langle w, v \\rangle + \\langle w, w \\rangle = \\\\\n   & =  |c|^2\\|v\\|^2 + 2 \\, \\Re \\big( c\\langle v, w \\rangle \\big) + \\|w\\|^2 = \n   \\frac{|\\langle v, w \\rangle|^2}{\\|v\\|^2} - 2\\frac{|\\langle v, w \\rangle|^2}{\\|v\\|^2} + \\|w\\|^2 = \\\\\n   & = \\|w\\|^2 - \\frac{|\\langle v, w \\rangle|^2}{\\|v\\|^2} \\ .\n \\end{split}\n\\end{equation}\nHence\n\\[\n   |\\langle v, w \\rangle|^2 \\leq \\|v\\|^2\\|w\\|^2 \n\\]\nwhich entails the Cauchy--Schwarz inequality. \n\nIf $v,w$ are linearly dependent nonzero elements of $\\vectorspV$, then there exists a nonzero scalar \n$a\\in \\fldK$ such that $v = a w$. Hence \n\\[\n  |\\langle v , w \\rangle| = |a| \\, \\| w \\|^2 \n  = \\| v \\|  \\| w \\| \\, .\n\\]\nConversely, if equality holds and the form is positive definite, then \\Cref{eq:expanding-norm-of-sum-in-inner-product-space} \nentails that $\\|c v + w\\| = 0$, hence $c v +w  = 0$, which means that $v$ and $w$ are linearly dependent. \n\\end{proof}\n\n\\begin{lemma}\n\\label{thm:positive-definitess-equivalent-nondegeneracy}\n  A positive semidefinite hermitian form $\\langle \\cdot , \\cdot \\rangle$ on a $\\fldK$-vector space \n  $\\vectorspV$ is weakly-nondegenerate if and only if it is \\emph{positive definite}, that is, if and only if\n  \\begin{enumerate}[label=\\textup{({\\sffamily SF\\arabic*p)}},leftmargin=*]\n  \\setcounter{enumi}{4}\n  \\item\n  \\label{axiom:form-positive-definite} \n  $\\langle v,v \\rangle > 0$ for all $v\\in \\vectorspV \\setminus \\{ 0 \\}$.\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n  A positive definite real bilinear or complex hermitian form $\\langle \\cdot , \\cdot \\rangle$ is \n  weakly-nondegenerate since for every $v \\in \\vectorspV \\setminus \\{ 0 \\}$ the linear form\n  $\\langle - ,v \\rangle : \\vectorspV \\to \\fldK$ is nonzero because $\\langle v,v \\rangle > 0$.\n\n  Conversely, if $\\langle - ,v \\rangle : \\vectorspV \\to \\fldK$ is nonzero for all \n  $v \\in \\vectorspV \\setminus \\{ 0 \\}$, then there exists an element $w \\in \\vectorspV$ such that \n  $\\langle w,v \\rangle \\neq 0$. The Cauchy--Schwarz inequality entails\n  \\[\n    0 < |\\langle w,v \\rangle |^2 \\leq \\langle w,w \\rangle \\, \\langle v,v \\rangle \\ ,\n  \\]\n  which implies $\\langle v,v \\rangle > 0$. Hence $\\langle \\cdot , \\cdot \\rangle$ is positive definite.\n\\end{proof}\n\n\n\\begin{proposition}\n The map \n \\[\n  \\| \\cdot \\| : \\vectorspV \\to \\R_{\\geq 0} , \\: v \\mapsto \\|v \\| = \\sqrt{\\langle v , v \\rangle} \n \\]\n associated to a positive semidefinite hermitian form \n $\\langle \\cdot , \\cdot \\rangle$ on a $\\fldK$-vector space $\\vectorspV$ is a seminorm. \n If the hermitian form is positive definite, then $\\| \\cdot \\|$ is even a norm. 
\n\\end{proposition}\n\\begin{proof}\n Absolute homogeneity \\ref{axiom:norm-absolute-homogeneity} is given by Eq.~\\eqref{eq:absolute-homogeneity}.\n The triangle inequality is a consequence of the Cauchy--Schwarz inequality:\n \\[\n   \\| v + w \\|^2 =   \\| v\\|^2 + 2 \\, \\Re \\, \\langle  v ,  w \\rangle + \\| w\\|^2 \\leq\n    \\| v\\|^2 + 2 \\, \\|  v \\| \\, \\| w \\| + \\| w\\|^2 = \\big(\\| v\\|+\\| w\\| \\big)^2 \\ .\n \\] \n Finally, if $\\langle \\cdot , \\cdot \\rangle$ is positive definite, then \n $\\| v \\| = \\sqrt{\\langle v , v \\rangle} > 0 $ for all $v \\in \\vectorspV \\setminus \\{ 0 \\}$,\n so $\\| \\cdot \\|$ is a norm.\n\\end{proof}\n\n\n%%%%%%%%%%%%%%%%%%%\n%% Hilbert space definition\n%%%%%%%%%%%%%%%%%%%\n\\begin{definition} \n\\label{def:hilbert-space}\nBy an \\emph{inner product} or a \\emph{scalar product} on a $\\fldK$-vector space $\\hilbertH$ one \nunderstands a positive definite hermitian form on $\\hilbertH$. A $\\fldK$-vector space $\\hilbertH$ \ntogether with an inner product $\\langle \\cdot , \\cdot \\rangle : \\hilbertH \\times \\hilbertH \\to \\fldK$ is \ncalled an \\emph{inner product space} or a \\emph{pre-Hilbert space}. \n\nA hermitian form on a  $\\fldK$-vector space $\\hilbertH$ which is only positive semidefinite is called a \\emph{semi-inner product} or a \\emph{semi-scalar product}.\n\nA \\emph{Hilbert space} is an inner product space  $(\\hilbertH , \\langle \\cdot , \\cdot \\rangle)$  which is \ncomplete as a normed vector space. In other words, a Hilbert space is a Banach space whose \nnorm is induced by an inner product.\n\\end{definition}\n\n\\begin{examples}\n\\label{ex:inner-product-spaces}\n\\begin{environmentlist}\n\\item The vector space $\\R^n$ with the \\emph{euclidean inner product} \n      \\[ \n        \\langle \\cdot , \\cdot \\rangle : \\R^n \\times \\R^n \\to \\R, \\:\n        \\big( (v_1,\\ldots , v_n),(w_1,\\ldots , w_n) \\big) \\mapsto \n        \\sum_{i=1}^n v_i w_i  \n      \\]\n      is a real Hilbert space. Obviously, $\\langle \\cdot , \\cdot \\rangle$ is linear in the first argument,\n      symmetric, and positive definite, hence a real inner product. The associated norm is \n      the \\emph{euclidean norm}. We have seen before that $\\R^n$ with the euclidean norm is complete.      \n\\item The vector space $\\C^n$ together with the hermitian form \n      \\[\n        \\langle \\cdot , \\cdot \\rangle : \\C^n \\times \\C^n \\to \\C, \\:\n        \\big( (v_1,\\ldots , v_n),(w_1,\\ldots , w_n) \\big) \\mapsto \n        \\sum_{i=1}^n v_i\\overline{w}_i\n      \\]\n      is a complex Hilbert space. One immediately verifies that $\\langle \\cdot , \\cdot \\rangle$ is linear in the first argument,\n      conjugate symmetric, and positive definite. Hence $\\langle \\cdot , \\cdot \\rangle$ is a complex inner product which we \n      sometimes call the \\emph{standard hermitian inner product} on $\\C^n$.  Its associated norm is again the euclidean \n      norm, so by completeness of $\\C^n\\cong \\R^{2n}$ endowed with the euclidean norm one obtains the claim.      
\n\\item The set \n      \\[\n        \\ell^2 = \n        \\left\\{(z_k)_{k\\in \\N} \\in \\C^\\N \\suchthat \\sum_{k=0}^\\infty |z_k|^2 < \\infty \\right\\}\n      \\]\n      of square summable sequences of complex numbers  is a complex Hilbert space \n      with inner product\n      \\[\n        \\langle \\cdot , \\cdot \\rangle :\n        \\ell^2 \\times \\ell^2 \\to \\C, \\: \\big((z_k)_{k\\in \\N},(w_k)_{k\\in \\N} \\big)\n        \\mapsto \\sum_{k=0}^\\infty z_k \\overline{w}_k \\ .\n      \\]\n      To prove this one first needs to verify that $\\ell^2$ is a vector subspace of $\\C^\\N$. \n      For $z = (z_k)_{k\\in \\N} \\in \\C^\\N$ denote by $\\| z\\|$ the \\emph{extended norm} \n      $\\sqrt{\\sum_{k=0}^\\infty |z_k|^2} = \\sup\\limits_{K\\in \\N} \\sqrt{\\sum_{k=0}^K |z_k|^2} \\in \\closedint{0,\\infty}$.\n      Then $z\\in \\ell^2$ if and only if $\\|z\\| < \\infty$. Now let $a \\in \\C$ and $z\\in \\ell^2$ and compute\n      \\[\n         \\| a z \\| = \\sqrt{\\sum_{k=0}^\\infty |a  z_k|^2} = |a| \\, \\sqrt{\\sum_{k=0}^\\infty |z_k|^2} = |a| \\cdot \\| z \\| < \\infty \\ .\n      \\] \n      Hence $az \\in \\ell^2$. If $z,w \\in \\ell^2$, denote for each $K\\in \\N$ by $z_{(K)}$ and $w_{(K)}$ the ``cut-off'' \n      vectors $(z_0, \\ldots , z_K) \\in \\C^{K+1}$ and  $(w_0, \\ldots , w_K) \\in \\C^{K+1}$, respectively. \n      By the triangle inequality for the norm on the Hilbert space $\\C^{K+1}$  one concludes \n      \\[\n         \\sqrt{\\sum_{k=0}^K  | z_k + w_k |^2} =  \\| z_{(K)} + w_{(K)} \\| \\leq \n         \\| z_{(K)} \\| + \\| w_{(K)} \\| \\leq \\| z \\| + \\| w \\| < \\infty \\ .\n      \\] \n      Therefore, the sequence of partial sums $\\sum_{k=0}^K  | z_k + w_k |^2$, $K\\in \\N$, is bounded,\n      so convergent by the monotone convergence theorem. One obtains \n       \\[\n          \\| z + w \\| = \\lim_{K\\to\\infty} \\sqrt{ \\sum_{k=0}^K  | z_k + w_k |^2} \\leq \\| z \\| + \\| w \\| < \\infty \\ .\n      \\]\n      Hence  $z + w$ is square summable, and $\\ell^2$ is indeed a vector subspace of $\\C^\\N$. \n      Note that our argument also shows that the restriction of the extended norm to $\\ell^2$ is a norm. \n\n      We need to show that $\\langle \\cdot , \\cdot \\rangle$ is well-defined. \n      To this end it suffices to prove that for all $z,w \\in\\ell^2$ the family $\\left( z_k \\overline{w}_k\\right)_{k\\in \\N}$ \n      is absolutely summable or in other words that $\\sum_{k=0}^\\infty \\left| z_k \\overline{w}_k \\right| < \\infty$. \n      One concludes by the H\\\"older inequality for  sums  \n      \\[\n        \\sum_{k=0}^K \\left| z_k  \\overline{w}_k \\right| = \\sum_{k=0}^K \\left| z_k  w_k \\right| \n        % =  \\left| \\inprod{(|z_0|,\\ldots,|z_N|),(|w_0|,\\ldots,|w_N|)  } \\right|\n        \\leq \\| z_{(K)} \\| \\,  \\| w_{(K)} \\| \\leq  \\| z \\| \\,  \\| w \\| \\ .\n      \\]\n      So the left hand side has an upper bound uniform in $K$ which by the monotone convergence theorem entails\n      convergence of the partial sums and the estimate \n      \\[\n         \\sum_{k=0}^\\infty \\left| z_k \\overline{w}_k \\right| \\leq  \\| z \\| \\,  \\| w \\| < \\infty \\ . \n      \\] \n      By definition it is clear that $ \\langle \\cdot , \\cdot \\rangle $ is linear in the first argument, \n      conjugate symmetric and positive definite, hence a complex inner product. Note that the norm \n      associated to  $ \\langle \\cdot , \\cdot \\rangle $ coincides with the above defined map $\\| \\cdot \\|$. 
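\n\n      As a concrete example, the sequence $\\left( \\frac 1{k+1} \\right)_{k\\in \\N}$ is an element of $\\ell^2$, since $\\sum_{k=0}^\\infty \\frac 1{(k+1)^2} = \\frac{\\pi^2}{6} < \\infty$, whereas the constant sequence $(1)_{k\\in \\N}$ is not square summable.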
\n\n      It remains to be shown that $\\ell^2$ is complete. Let $(z^n)_{n\\in \\N}$ with $z^n = {(z^n_k)}_{k\\in \\N}\\in \\ell^2$\n      for all $n\\in \\N$ be a Cauchy sequence in $\\ell^2$.    \n      For $\\varepsilon >0$ choose $N_\\varepsilon \\in \\N$ so that\n      \\[\n        \\| z^n - z^m   \\|< \\varepsilon \\quad \\text{for all } n,m \\geq N_\\varepsilon \\ . \n      \\]\n      For each fixed $k\\in \\N$ one therefore has \n      \\begin{equation}\n        \\label{eq:estimate-component-sequence-Cauchy-sequence-square-summable-sequences}\n        | z^n_k - z^m_k | \\leq \\| z^n - z^m   \\|< \\varepsilon \\quad \\text{for all } n,m \\geq N_\\varepsilon \\ . \n      \\end{equation}\n      By completeness of $\\C$ there exist $z_k \\in \\C$ such that $\\lim_{n\\to\\infty} z^n_k = z_k$ for all $k\\in \\N$. \n      We claim that $z = (z_k)_{k\\in \\N}$ is an element of $\\ell^2$ and that $(z^n)_{n\\in \\N}$ converges to $z$.\n      To verify this observe that for all $\\varepsilon >0$, $K\\in \\N$ and $n\\geq N_\\varepsilon$\n      \\[\n        \\sum_{k=0}^K  | z_k - z^n_k |^2 = \\lim_{m\\to\\infty}  \\sum_{k=0}^K  | z^m_k - z^n_k |^2 \n        \\leq \\sup_{m \\geq N_\\varepsilon} \\sum_{k=0}^K  | z^m_k - z^n_k |^2 \n        \\leq \\sup_{m \\geq N_\\varepsilon} \\| z^m - z^n \\|^2 \\leq \\varepsilon^2 \\ . \n      \\]\n      This implies by the triangle inequality and the fact that the Cauchy sequence $(z^n)_{n\\in \\N}$ is bounded in norm by some \n      $C>0$ that for all $K\\in \\N$ and $N = N_1$\n      \\[\n        \\sqrt{\\sum_{k=0}^K  | z_k|^2 } = \\| z_{(K)} \\| \\leq  \\| z_{(K)} - z^N_{(K)} \\| + \\| z^N_{(K)} \\| \n        \\leq  \\| z_{(K)} - z^N_{(K)} \\| + \\| z^N \\| \n        \\leq 1 + C \\ . \n      \\]\n      Hence $ \\| z \\|= \\sqrt{\\sum_{k=0}^\\infty | z_k|^2 } \\leq 1 + C $  and $z \\in \\ell^2$. In addition one obtains\n      \\[\n       \\| z - z^n \\| = \\lim_{K\\to\\infty} \\sqrt{\\sum_{k=0}^K  | z_k - z^n_k |^2} \\leq \\varepsilon \\quad \\text{for all } n \\geq N_\\varepsilon \\ . \n      \\]\n      This means that $z$ is the limit of the sequence $(z^n)_{n\\in \\N}$ and $\\ell^2$ is complete. \n\\item Let \n      \\[ \n        \\mathscr{L}^2(\\R^n) = \\left\\{ f : \\R^n \\to \\C \\suchthat f\n        \\text{ is Lebesgue measurable and }\n        \\int_{\\R^n}|f|^2 d\\lambda < \\infty \\right\\}\n      \\] \n      denote the space of Lebesgue square integrable functions on $\\R^n$. \n      Then the map    \n      \\[\n          \\langle \\cdot , \\cdot \\rangle : \\mathscr{L}^2(\\R^n) \\times  \\mathscr{L}^2(\\R^n) \\to \\C, \\: \n         (f,g) \\mapsto \\int_{\\R^n}f\\overline{g}\\, d\\lambda\n      \\]\n      is a positive semidefinite hermitian form on $\\mathscr{L}^2(\\R^n)$. 
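\n      This form fails to be positive definite: the characteristic function $\\chi_{\\{ 0 \\}}$ of the one point set $\\{ 0 \\} \\subset \\R^n$ is a nonzero element of $\\mathscr{L}^2(\\R^n)$ with $\\int_{\\R^n} |\\chi_{\\{ 0 \\}}|^2 d\\lambda = 0$, since $\\{ 0 \\}$ is a Lebesgue null set.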
\n      Modding out $\\mathscr{L}^2(\\R^n)$ by the kernel \n      \\[ \n        \\mathscr{N} := \\Ker (\\| \\cdot \\|) = \n        \\left\\{ f \\in \\mathscr{L}^2(\\R^n) \\suchthat \\int_{\\R^n}|f|^2 d\\lambda = 0 \\right\\}\n      \\]\n      gives the Lebesgue space\n      \\[\n          L^2 (\\R^n) := \\mathscr{L}^2(\\R^n) / \\mathscr{N} \\ .\n      \\]\n      \n      The hermitian form $\\langle \\cdot , \\cdot  \\rangle $ vanishes on $\\mathscr{N} \\times  \\mathscr{L}^2(\\R^n)$ and \n      $\\mathscr{L}^2(\\R^n) \\times \\mathscr{N}$ by the Cauchy--Schwarz inequality, hence descends to a hermitian form\n      \\[\n        \\langle \\cdot , \\cdot \\rangle : L^2(\\R^n) \\times  L^2(\\R^n) \\to \\C, \\:\n        (f + \\mathscr{N} ,g+ \\mathscr{N}) \\mapsto \\int_{\\R^n}f\\overline{g}\\, d\\lambda \\ .\n      \\]\n      That hermitian form is positive definite, since \n      $\\langle f + \\mathscr{N}, f + \\mathscr{N} \\rangle = 0$ means \n      $ \\int_{\\R^n}|f|^2 d\\lambda =0$, hence $f\\in \\mathscr{N}$.\n      So $L^2(\\R^n)$ together with $\\langle \\cdot , \\cdot \\rangle $ is a \n      Hilbert space which we call the \n      \\emph{Hilbert space of square-integrable functions} on $\\R^n$. \n      \\add{provide details on well-definedness of hermitian form and show completeness}     \n\\end{environmentlist}\n\\end{examples}\n\n%%%%%%%%%%%%%%%%%%%\n%% Theorem giving condition to relate inner product with norm\n%%%%%%%%%%%%%%%%%%%\n\n\\begin{theorem} \n\\label{thm:parallegram-identity-guarantees-that-norm-comes-from-some-inner-product}\nLet $\\vectorspV$ be a normed $\\fldK$-vector space. Then the norm $\\|\\cdot\\| : \\vectorspV \\to \\R_{\\geq 0}$\nis associated to an inner product $\\langle \\cdot , \\cdot \\rangle :  \\vectorspV \\times \\vectorspV \\to \\fldK$ \nif and only if the \\emph{parallelogram identity} \n\\[\n  \\| v + w \\|^2 + \\| v -  w\\|^2 = 2\\|v\\|^2 + 2\\|w\\|^2 \n\\]\nholds true for all $v,w \\in \\vectorspV$. In this case, the inner product of two elements $v,w\\in \\vectorspV$\ncan be expressed by the \\emph{polarization identity for} $\\fldK = \\R$\n\\begin{equation}\n  \\label{eq:real-polarization-identity}\n  \\langle v , w \\rangle = \\frac 14 \\left( \\| v + w \\|^2  - \\| v - w \\|^2 \\right)\n  = \\frac 12 \\left( \\| v + w \\|^2  - \\| v \\|^2 - \\| w \\|^2 \\right)\n\\end{equation}\nrespectively by the  \\emph{polarization identity for} $\\fldK = \\C$\n\\begin{equation}\n  \\label{eq:complex-polarization-identity}\n  \\langle v , w \\rangle = \\frac 14 \\sum_{k=1}^4 \\cplxi^k \\, \\| v + \\cplxi^k \\, w \\|^2  \\ .\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe forward direction is a consequence of\n\\ref{para:properties-seminorm-associated-positive-semidefinite-hermitian-form}, Eq.~\\ref{eq:parallelogram-identity}. \nTo show the backward direction we consider two cases $\\fldK = \\R$ and \n$\\fldK = \\C$ separately.\n\n\\textit{1.~Case.} Given the norm $\\|\\cdot\\|$ define $\\langle \\cdot , \\cdot \\rangle : \\vectorspV \\times \\vectorspV \\to \\R$ by real polarization\n\\[\n   \\langle v , w \\rangle = \\frac 14 \\left( \\| v + w \\|^2  - \\| v - w \\|^2 \\right), \\quad \\text{where } v,w\\in \\vectorspV  \\ .\n\\]\nNote that the parallelogram identity entails\n\\[\n  \\frac 14 \\left( \\| v + w \\|^2  - \\| v - w \\|^2 \\right) =  \n  \\frac 12 \\left( \\| v + w \\|^2  - \\| v \\|^2 - \\| w \\|^2 \\right) \\ . 
\n\\]\nObserve that by definition $\\langle v , w \\rangle = \\langle w , v \\rangle$ and $\\| v \\| = \\sqrt{ \\langle v , v \\rangle}$.\nLet us show additivity in the first variable. Let $v_1,v_2,w \\in \\vectorspV$ and compute using the parallelogram identity \n\\begin{equation*}\n  \\begin{split}\n     \\| v_1 + v_2 + w \\|^2 & = 2 \\| v_1 + w \\|^2 + 2 \\| v_2\\|^2 -  \\| v_1 + w - v_2\\|^2 \\ , \\\\ \n     \\| v_1 + v_2 + w \\|^2 & = 2 \\| v_2 + w \\|^2 + 2 \\| v_1\\|^2 -  \\| v_2 + w - v_1\\|^2 \\ .\n  \\end{split}\n\\end{equation*}\nTaking the arithmetic mean of these two equations yields\n\\begin{equation*}\n  \\begin{split}\n     \\| v_1 + v_2 \\pm w \\|^2 & =  \n     \\| v_1 \\pm w \\|^2 +  \\| v_2 \\pm w \\|^2 +  \\| v_1\\|^2 + \\| v_2\\|^2 - \\frac 12 \\| v_1 \\pm w - v_2\\|^2  - \\frac 12 \\| v_2 \\pm w - v_1\\|^2 \\ .\n  \\end{split}\n\\end{equation*}\nSubtracting the $-$ version from the $+$ version of this equation entails\n\\begin{equation*}\n  \\begin{split}\n    \\langle v_1 + v_2 , w \\rangle \\, & = \\frac 14 \\left( \\| v_1 + v_2 + w \\|^2  - \\| v_1 + v_2 - w \\|^2 \\right)= \\\\\n    & = \\frac 14 \\left( \\| v_1 + w \\|^2 +  \\| v_2 + w \\|^2 -  \\| v_1 - w \\|^2 -  \\| v_2 - w \\|^2 \\right) =\n    \\langle v_1  , w \\rangle +  \\langle v_2 , w \\rangle \\ ,\n  \\end{split}\n\\end{equation*}\nso additivity in the first variable is proved. By induction  one derives from this that \nfor all natural $n$\n\\begin{equation}\n\\label{eq:natural-number-homogeneity}\n  \\langle nv , w \\rangle = n \\langle v , w \\rangle \\quad \\text{for all } v,w \\in \\vectorspV  \\ . \n\\end{equation}\nSince then $\\langle - nv , w \\rangle + n \\langle v , w \\rangle = \\langle -nv , w \\rangle + \\langle nv , w \\rangle = \\langle -nv + nv , w \\rangle = 0$ for all $n\\in \\N$,\nEq.~\\eqref{eq:natural-number-homogeneity} also holds for $n\\in \\Z$. \nNow let $p\\in \\Z$ and $q \\in \\gzN$. Then $ q\\, \\langle \\frac pq v , w \\rangle = \\langle p v , w \\rangle = p \\, \\langle  v , w \\rangle$, \nhence one has for rational $r$\n\\begin{equation}\n\\label{eq:rational-number-homogeneity}\n \\langle rv , w \\rangle = r \\langle v , w \\rangle \\quad \\text{for all } v,w \\in \\vectorspV  \\ . \n\\end{equation}\nSince addition, multiplication by scalars and the norm are continuous, the function \n\\[\n   \\R \\to \\R, \\:r \\mapsto  \\langle rv , w \\rangle - r \\langle v , w \\rangle = \\frac 14 \\left( \\| r v + w \\|^2 + r \\|  v - w \\|^2   - \\| rv - w \\|^2 -  r \\|  v + w \\|^2 \\right) \n\\]\nis continuous. Since  it vanishes over $\\Q$, it has to coincide with the zero map. Therefore, \nEq.~\\eqref{eq:rational-number-homogeneity} holds for all $r\\in \\R$. So $\\langle \\cdot , \\cdot \\rangle$ is linear in \nthe first coordinate. By symmetry, it is so too in the second coordinate. Hence $\\langle \\cdot , \\cdot \\rangle$ is a \nsymmetric bilinear form inducing $\\| \\cdot \\|$. 
\n\n\\textit{2.~Case.} In the case $\\fldK = \\C$ use complex polarization and put\n\\[\n   \\langle v , w \\rangle = \\frac 14 \\sum_{k=1}^4 \\cplxi^k \\, \\| v + \\cplxi^k \\, w \\|^2  \n   \\quad \\text{for all } v,w\\in \\vectorspV  \\ .\n\\]\nThen  $\\langle \\cdot , \\cdot \\rangle$ is conjugate symmetric, since \n\\[\n   \\overline{\\langle v , w \\rangle} = \\frac 14 \\sum_{k=1}^4 (- \\cplxi)^k \\, \\| v + \\cplxi^k \\, w \\|^2  \n   =   \\frac 14 \\sum_{k=1}^4 (-\\cplxi)^k \\, \\| (-\\cplxi)^k \\, v +  w \\|^2 =\n   \\langle w , v \\rangle \\ .\n\\]\nNext compute\n\\[\n  \\Re \\langle v , w \\rangle =  \\frac 14 \\left( \\| v + w \\|^2  - \\| v - w \\|^2 \\right)\n\\]\nand \n\\[\n  \\Im \\langle v , w \\rangle =  \\frac 14 \\left( \\| v + \\cplxi w \\|^2  - \\| v - \\cplxi w \\|^2 \\right) \\ .\n\\]\nBy the first case one concludes that  $\\Re \\langle \\cdot , \\cdot \\rangle$ and $\\Im \\langle \\cdot , \\cdot \\rangle$\nare both $\\R$-linear in the first  and the second coordinate. Moreover,\n\\[\n  \\Re \\langle \\cplxi  v , w \\rangle =  \\frac 14 \\left( \\| \\cplxi v + w \\|^2  - \\| \\cplxi v - w \\|^2 \\right)\n  = \\frac 14 \\left( \\| v - \\cplxi w \\|^2  - \\| v + \\cplxi w \\|^2 \\right) = \n  - \\Im \\langle v ,  w \\rangle\n\\]\nand \n\\[\n  \\Im \\langle \\cplxi \\, v , w \\rangle =  \n  \\frac 14 \\left( \\| \\cplxi v + \\cplxi w \\|^2  - \\| \\cplxi v - \\cplxi w \\|^2 \\right) = \n  \\Re \\langle  v , w \\rangle \\ ,\n\\]\nhence $\\langle \\cdot , \\cdot \\rangle$ is complex linear in the first coordinate. \nFinally,\n\\[\n  \\Re \\langle v , v \\rangle =  \\| v  \\|^2 \\quad \\text{and} \\quad \n  \\Im \\langle v , v \\rangle =  \\frac 14 \\left( \\| v + \\cplxi v \\|^2  - \\| v - \\cplxi v \\|^2 \\right) = 0 \\ .\n\\]\nThis finishes the proof that $ \\langle \\cdot , \\cdot \\rangle$ is a complex inner product inducing the \nnorm $\\| \\cdot \\|$.\n\\end{proof}\n\n\\para\nNext we will turn Hilbert spaces into a category. To this end one needs to know what morphisms in this\ncategory should be. There are two options, each giving rise to a category of Hilbert spaces. These categories\njust differ by their morphism classes. The first one is to\nhave as morphisms  linear maps $A:\\hilbertH_1\\to\\hilbertH_2$ preserving the inner products which means that\nthey fulfill\n\\[\n  \\inprod{Av_1,Av_2} = \\inprod{v_1,v_2}  \\quad \\text{for all } v_1,v_2 \\in \\hilbertH_1 \\ .\n\\] \nBy \\Cref{thm:parallegram-identity-guarantees-that-norm-comes-from-some-inner-product} this property is\nequivalent to\n\\[\n  \\| Av \\| = \\| v \\| \\quad \\text{for all } v \\in \\hilbertH_1 \\ ,\n\\] \nthat is to $A$ being \\emph{norm preserving} or \\emph{isometric}. Obviously, the identity map between two\nHilbert spaces is isometric and the composition of two composable isometric linear maps between Hilbert\nspaces is again isometric and linear. Hence Hilbert spaces together with norm preserving linear maps between\nthem form a  category which we denote by $\\category{Hilb_\\textup{np}}$. The isomorphisms in this category\nare the surjective and inner product preserving linear maps between Hilbert spaces. Such maps\nare called \\emph{unitary}. The condition of a linear map being norm preserving is pretty restrictive, so\nthe category $\\category{Hilb_\\textup{np}}$ contains comparatively few morphisms. 
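For example, for the real Hilbert space $\\R$ the only norm preserving linear maps $\\R \\to \\R$ are $\\pm\\, \\mathrm{id}_\\R$, whereas every scalar $a \\in \\R$ yields a bounded linear map $x \\mapsto a x$. 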
This can be healed by allowing all\n\\emph{bounded} linear maps between Hilbert spaces to be morphisms, that is, all\nlinear maps $A:\\hilbertH_1\\to\\hilbertH_2$ for which there exists a $C\\geq 0$ such that\n\\[\n  \\| Av \\| \\leq C \\| v \\| \\quad \\text{for all } v \\in \\hilbertH_1 \\ .\n\\]\nThe smallest such $C$ is called the \\emph{operator norm} of $A$ and is denoted $\\| A\\|$.\nEquivalently, the operator norm is given by\n\\[\n  \\| A\\| = \\sup \\big\\{ \\| Av \\| \\bigmid v\\in \\hilbertH_1, \\, \\| v \\| \\leq 1 \\big\\}\n  = \\sup \\big\\{ \\| Av \\| \\bigmid v\\in \\hilbertH_1, \\, \\| v \\| = 1 \\big\\} \\ .\n\\] \nEvery norm preserving linear map is bounded with operator norm $1$.\nIn particular the identity map on a Hilbert space is bounded. Moreover, if\n$A: \\hilbertH_1\\to\\hilbertH_2$  and $B: \\hilbertH_2\\to\\hilbertH_3$ are bounded linear operators\nbetween Hilbert spaces, then the composition $BA :  \\hilbertH_1\\to\\hilbertH_3$ is bounded\nwith operator norm $\\leq \\| B\\|\\, \\| A\\|$ since for all $v \\in \\hilbertH_1 $ with $\\| v\\|\\leq 1$\n\\[\n  \\| BAv \\| \\leq \\| B \\| \\, \\|Av\\| \\leq \\| B\\|\\, \\| A\\| \\ . \n\\]\nHence Hilbert spaces as objects together with bounded linear maps as morphisms form a category which we\ndenote by $\\category{Hilb}$ and call the \\emph{category of Hilbert spaces}. Note that the morphisms\nin this category appear to ``forget'' the inner product and just preserve the linear and the topological\nstructure. John Baez \\cite[p.~133]{BaeHDAII2HS} has explained how to heal this apparent defect by showing that\n$\\category{Hilb}$ carries a so-called $*$-structure given by the adjoint map on bounded linear operators.\nWe will come back to this point later when we introduce adjoint operators. \n\n\\para\nLast in this section we introduce bounded bilinear and sesquilinear forms. We define them for normed\nvector spaces; their main application, however, lies in the operator theory on Hilbert spaces, which is why we introduce\nthem here.  \n\n\\begin{definition}\n  Let $\\vectorspV$ be a vector space over $\\fldK$ with norm $\\| \\cdot \\| : \\vectorspV \\to \\R_{\\geq 0}$. \n  A bilinear or sesquilinear form $b : \\vectorspV \\times \\vectorspV \\to \\fldK$ is called \\emph{bounded} if \n  there exists a $C >0$ such that \n  \\[\n    | b(v,w)| \\leq C \\, \\| v\\| \\, \\| w \\| \\quad \\text{for all } v,w \\in \\vectorspV \\ .\n  \\]\n  In this case,\n  \\[\n    \\| b \\| := \\sup \\big\\{ | b(v,w) | \\bigmid v,w \\in \\vectorspV \\: \\& \\: \\| v\\| = \\| w \\| = 1 \\big\\} \n  \\] \n  exists and is called the \\emph{norm} of the form $b$. \n\\end{definition}\n\\begin{example}\n  The inner product on a (pre-) Hilbert space is bounded by the Cauchy--Schwarz inequality and has norm $1$.\n\\end{example}\n\n\\begin{proposition}\nA bounded bilinear or sesquilinear form $b : \\vectorspV \\times \\vectorspV \\to \\fldK$ on a normed vector space $\\vectorspV$ over \n$\\fldK$ is continuous. Vice versa, if $\\vectorspV$ is complete, then continuity of $b : \\vectorspV \\times \\vectorspV \\to \\fldK$\nimplies boundedness.\n \\end{proposition}\n\\begin{proof}\nIf $b$ is bounded, then\n\\begin{equation*}\n  \\begin{split} \n  \\big\\vert b(v,w) - b(v',w') \\big\\vert \\, & \n  \\leq   \\big\\vert b(v,w) - b(v',w) \\big\\vert + \\big\\vert b(v',w) - b(v',w') \\big\\vert \\leq \\\\\n  & \\leq \\| b \\| \\, \\left( \\|  w \\| \\, \\| v-v'\\| +  \\|  v' \\| \\, \\| w-w'\\| \\right)   \n  \\end{split}\n\\end{equation*}\nfor all $v,v',w,w' \\in \\vectorspV$. 
Hence $b$ is locally Lipschitz continuous, so in particular continuous. \n\nNow assume that $\\vectorspV$ is a Banach space and that $b$ is continuous. Then one can find $\\delta >0$ such that \nfor all $v,w \\in \\vectorspV$ of norm less than $\\delta$ the relation $ |b(v,w)| < 1$ holds true. But that entails for all \nnon-zero $v,w$\n\\[\n   |b(v,w)| = \\frac{4 \\, \\|v\\| \\, \\|w\\|}{\\delta^2} \\cdot \\left| b\\left( \\delta \\frac{v}{2 \\|v\\|}, \\delta \\frac{w}{2 \\|w\\|}\\right) \\right|\n   \\leq  \\frac{4}{ \\delta^2} \\|v\\| \\, \\|w\\| \\ .\n\\] \nHence $b$ is bounded.\n\\end{proof}\n\n\n\n", "meta": {"hexsha": "27acc4bc80b73f45f4062af8ffe20bbd4787c6ee", "size": 30103, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Example/sections/inner-product-spaces.tex", "max_stars_repo_name": "martinpflaum/latex_to_html", "max_stars_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-11-13T15:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T14:08:26.000Z", "max_issues_repo_path": "Example/sections/inner-product-spaces.tex", "max_issues_repo_name": "martinpflaum/latex_to_html", "max_issues_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-11T13:18:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T22:02:11.000Z", "max_forks_repo_path": "Example/sections/inner-product-spaces.tex", "max_forks_repo_name": "martinpflaum/latex_to_html", "max_forks_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-13T15:22:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-13T15:22:47.000Z", "avg_line_length": 51.1086587436, "max_line_length": 176, "alphanum_fraction": 0.6180447132, "num_tokens": 10822, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7634837527911056, "lm_q1q2_score": 0.5741493819602281}}
{"text": "\\providecommand{\\main}{../..}\n\\documentclass[\\main/thesis.tex]{subfiles}\n\\begin{document}\n\n\\section{Num: a representation for positional numeral systems}\\label{num}\n\n\\subsection{Definition}\n\nNumerals of positional numeral systems are composed of sequences of digits.\nConsequently the definition of {\\lstinline|Numeral|} will be similar to that of\n{\\lstinline|List|},\nexcept that a {\\lstinline|Numeral|} must contain at least one digit while a list\nmay contain no elements at all.\nThe most significant digit is placed near {\\lstinline|_\u2219|} while the least\nsignificant digit is placed at the end of the sequence.\n\n{\\lstinline|Numeral|} has three indices, which corresponds to the three\ngeneralizations we have introduced.\n\n\\begin{lstlisting}\ninfixr 5 _\u2237_\n\ndata Numeral : (b d o : \u2115) \u2192 Set where\n    _\u2219  : \u2200 {b d o} \u2192 Digit d \u2192 Numeral b d o\n    _\u2237_ : \u2200 {b d o} \u2192 Digit d \u2192 Numeral b d o \u2192 Numeral b d o\n\\end{lstlisting}\n\nThe decimal number ``2016'' for example can be represented as:\n\n\\begin{lstlisting}\nMMXVI : Numeral 10 10 0\nMMXVI = # 6 \u2237 # 1 \u2237 # 0 \u2237 (# 2) \u2219\n\\end{lstlisting}\n%\nwhere {\\lstinline|#_ : \u2200 m {n} {m<n : True (suc m \u2264? n)} \u2192 Fin n|} converts from\n{\\lstinline|\u2115|} to {\\lstinline|Fin n|} provided that the number is small enough.\n\n\\lstinline|lsd| extracts the least significant digit of a numeral.\n\n\\begin{lstlisting}\nlsd : \u2200 {b d o} \u2192 (xs : Numeral b d o) \u2192 Digit d\nlsd (x \u2219   ) = x\nlsd (x \u2237 xs) = x\n\\end{lstlisting}\n\n\\subsection{Converting to natural numbers}\n\nConverting to natural numbers is fairly trivial.\n\n\\begin{lstlisting}\n\u27e6_\u27e7 : \u2200 {b d o} \u2192 (xs : Numeral b d o) \u2192 \u2115\n\u27e6_\u27e7 {_} {_} {o} (x \u2219)    = Digit-to\u2115 x o\n\u27e6_\u27e7 {b} {_} {o} (x \u2237 xs) = Digit-to\u2115 x o + \u27e6 xs \u27e7 * b\n\\end{lstlisting}\n\n\\section{Dissecting Numeral Systems with Views}\\label{views}\n\nThere are many kinds of numeral systems inhabit in {\\lstinline|Numeral|}.\nThese systems have different interesting properties that should be treated\ndifferently, so we sort them into \\textbf{four categories} accordingly.\n\n\\paragraph{Systems with no digits at all}\n\nThe number of digits of a system is determined by the index {\\lstinline|d|}.\nIf {\\lstinline|d|} happens to be $ 0 $, then there will be no digits in any of\nthese systems. Although they seem useless, these systems have plenty of properties.\nSince there are not digits at all, any property that is related to digits would\nhold vacuously.\n\n\\paragraph{Systems with base $0$}\n\nIf {\\lstinline|b|}, the base of a system, happens to be $ 0 $,\nthen only the least significant digit would have effects on the evaluation,\nbecause the rest of the digits would diminish into nothing.\n\n\\begin{lstlisting}\n\u27e6 x \u2219    \u27e7 = Digit-to\u2115 x o\n\u27e6 x \u2237 xs \u27e7 = Digit-to\u2115 x o + \u27e6 xs \u27e7 * 0\n\\end{lstlisting}\n\n\\paragraph{Systems with only zeros}\n\nConsider when {\\lstinline|d|} is set to $ 1 $ and {\\lstinline|o|} set to $ 0 $.\nThere will be one digit. 
However, this single digit can only take the value $ 0 $.\n\n\\begin{lstlisting}\n0, 00, 000, 0000, ...\n\\end{lstlisting}\n\nAs a result, every numeral would evaluate to $ 0 $ regardless of the base.\n\n\\paragraph{``Proper'' systems}\n\nThe systems that do not fall into any of the categories above\nare considered \\textit{proper}.\n\n\\subsection{Categorizing Systems}\n\nThese ``categories'' are represented with a datatype called {\\lstinline|NumView|}\nthat is indexed by the three indices: {\\lstinline|b|}, {\\lstinline|d|}, and {\\lstinline|o|}.\n\n\\begin{lstlisting}\ndata NumView : (b d o : \u2115) \u2192 Set where\n    NullBase    : \u2200   d o \u2192 NumView 0       (suc d) o\n    NoDigits    : \u2200 b o   \u2192 NumView b       0       o\n    AllZeros    : \u2200 b     \u2192 NumView (suc b) 1       0\n    Proper      : \u2200 b d o \u2192 (proper : suc d + o \u2265 2)\n                          \u2192 NumView (suc b) (suc d) o\n\\end{lstlisting}\n\nBy pattern matching on indices, different configurations of indices are sorted into\ndifferent {\\lstinline|NumView|}s.\n\n\\begin{lstlisting}\nnumView : \u2200 b d o \u2192 NumView b d o\nnumView b       zero          o       = NoDigits b o\nnumView zero    (suc d)       o       = NullBase d o\nnumView (suc b) (suc zero)    zero    = AllZeros b\nnumView (suc b) (suc zero)    (suc o) = Proper b zero (suc o) _\nnumView (suc b) (suc (suc d)) o       = Proper b (suc d) o _\n\\end{lstlisting}\n\nTogether with \\textit{with-abstractions}, we can, for example, define a function\nto determine whether a numeral system is interesting or not:\n\n\\begin{lstlisting}\ninteresting : \u2200 b d o \u2192 Bool\ninteresting b d o with numView b d o\ninteresting _ _ _ | NullBase d o         = false\ninteresting _ _ _ | NoDigits b o         = false\ninteresting _ _ _ | AllZeros b           = false\ninteresting _ _ _ | Proper b d o proper  = true\n\\end{lstlisting}\n\nAs we can see, the function {\\lstinline|numView|} does more than sort indices\ninto different categories. It also reveals relevant information and properties\nabout these categories. For instance, if a system {\\lstinline|Numeral b d o|}\nis classified as \\textit{Proper}, then we know that:\n\n\\begin{itemize}\n    \\item {\\lstinline|b|} is greater than $ 0 $.\n    \\item {\\lstinline|d|} is also greater than $ 0 $.\n    \\item {\\lstinline|o|} can be any value as long as {\\lstinline|d + o \u2265 2|};\n        we name this requirement \\textit{proper}.\n\\end{itemize}\n\n\\subsection{Views}\n\nThe sole purpose of {\\lstinline|NumView|} is to sort out and expose some\ninteresting properties about its indices.\nSuch datatypes are called \\textit{views}\\cite{wadler1987views} as they present\ndifferent aspects of the same object.\nFunctions like {\\lstinline|numView|} are called \\textit{view functions} or\n\\textit{eliminators}\\cite{mcbride2004views} because they provide different ways\nof eliminating a datatype.\n\nViews are \\textbf{reusable} as they free us from having to pattern match on the\nsame indices or data again and again. 
On the other hand, they can be customized\nto our needs since they are just \\textit{ordinary functions}.\nWe will define more views and use them extensively in the coming sections.\n\n\\section{Properties of each Category}\n\n\\paragraph{NoDigits}\n\nAlthough systems with no digits have no practical use,\nthey are pretty easy to deal with because all properties related to digits would\nhold unconditionally for systems of {\\lstinline|NoDigits|}.\nThis is proven by deploying \\textit{the principle of explosion}.\n\n\\begin{lstlisting}\nNoDigits-explode : \u2200 {b o a} {Whatever : Set a}\n    \u2192 (xs : Numeral b 0 o)\n    \u2192 Whatever\nNoDigits-explode (() \u2219   )\nNoDigits-explode (() \u2237 xs)\n\\end{lstlisting}\n\n\\paragraph{NullBase}\n\nThe theorem below states that evaluating a numeral of {\\lstinline|NullBase|}\nresults in the same value as evaluating its least significant digit.\n\n\\begin{lstlisting}\nto\u2115-NullBase : \u2200 {d o}\n    \u2192 (x : Digit d)\n    \u2192 (xs : Numeral 0 d o)\n    \u2192 \u27e6 x \u2237 xs \u27e7 \u2261 Digit-to\u2115 x o\nto\u2115-NullBase {d} {o} x xs =\n    begin\n        Digit-to\u2115 x o + \u27e6 xs \u27e7 * 0\n    \u2261\u27e8 cong (\u03bb w \u2192 Digit-to\u2115 x o + w) (*-right-zero \u27e6 xs \u27e7) \u27e9\n        Digit-to\u2115 x o + 0\n    \u2261\u27e8 +-right-identity (Digit-to\u2115 x o) \u27e9\n        Digit-to\u2115 x o\n    \u220e\n\\end{lstlisting}\n\n\\paragraph{AllZeros}\n\nThe theorem below states that every numeral of {\\lstinline|AllZeros|}\nevaluates to $ 0 $ regardless of the base.\nWe pattern match on the digit to eliminate other possible cases, exploiting the\nfact that there is only one digit in such numerals.\n\n\\begin{lstlisting}\nto\u2115-AllZeros : \u2200 {b} \u2192 (xs : Numeral b 1 0) \u2192 \u27e6 xs \u27e7 \u2261 0\nto\u2115-AllZeros     (z    \u2219   ) = refl\nto\u2115-AllZeros     (s () \u2219   )\nto\u2115-AllZeros {b} (z    \u2237 xs)\n    = cong (\u03bb w \u2192 w * b) (to\u2115-AllZeros xs)\nto\u2115-AllZeros     (s () \u2237 xs)\n\\end{lstlisting}\n\n\\end{document}\n", "meta": {"hexsha": "4497b0babe137a21b4e2052e5e3fa001ffc0bfe3", "size": 7519, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/tex/constructions/num.tex", "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_issues_repo_path": "Thesis/tex/constructions/num.tex", "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/tex/constructions/num.tex", "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "avg_line_length": 34.8101851852, "max_line_length": 92, "alphanum_fraction": 0.6886554063, "num_tokens": 2298, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n\n", "lm_q1_score": 0.8539127529517043, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.574112606536513}}
{"text": "% This chapter has been modified on 6-4-05.\n%\\setcounter{chapter}{5}\n\n\n\\chapter{Expected Value and Variance}\\label{chp 6}\n\n\\section[Expected Value]{Expected Value of Discrete Random Variables}\\label{sec 6.1}\n\nWhen a large collection of numbers is assembled, as in a census, we are usually\ninterested not in the individual numbers, but rather in certain descriptive\nquantities such as the average or the median.  In general, the same is true for the\nprobability distribution of a numerically-valued random variable.  In this and in the\nnext section, we shall discuss two such descriptive quantities: the \\emx {expected\nvalue} and the  \\emx {variance.}  Both of these quantities apply only to\nnumerically-valued random variables, and so we assume, in these sections, that all\nrandom variables have numerical values.  To give some intuitive justification for our\ndefinition, we consider the following game.\n\n\\subsection*{Average Value}\n\nA die is rolled.  If an odd number turns up, we win an amount equal to this number;\nif an even number turns up, we lose an amount equal to this number.  For example, if\na two turns up we lose 2, and if a three comes up we win 3.  We want to decide if\nthis is a reasonable game to play.  We first try simulation.  The program {\\bf Die}\\index{Die\n(program)} carries out this simulation.\n\\par\nThe program prints the frequency and the relative frequency with which each outcome\noccurs.  It also calculates the average winnings.  We have run the program twice.  The results are\nshown in Table~\\ref{table 6.1}. \n\n\\medskip\n\\begin{table}[h]\\centering\n\\centering\n\\begin{tabular}{|c|c|c|c|c|} \\hline\n        & \\multicolumn{2}{c|}{n = 100}    & \\multicolumn{2}{c|}{n = 10000} \\\\ \\cline{2-5}\nWinning & \\hskip.1in Frequency \\hskip.1in & Relative  & \\hskip.1in Frequency \\hskip.1in & Relative  \\\\  \n        &                                 & Frequency &                                 & Frequency\n\\\\ \\cline{1-5} 1       & 17        & .17                 & 1681       &.1681 \\\\  -2      & 17       \n& .17                 & 1678       &.1678 \\\\   3       & 16        & .16                 &\n1626       &.1626 \\\\   -4      & 18        & .18                 & 1696       &.1696 \\\\ \n5       & 16        & .16                 & 1686       &.1686 \\\\ \n-6      & 16        & .16                 & 1633       &.1633 \\\\\\hline\n\\end{tabular}\n\\caption{Frequencies for dice game.}\n\\label{table 6.1}\n\\end{table}\n\nIn the first run we have played the game 100 times.  In this run our average gain is\n$-.57$.  It looks as if the game is unfavorable, and we wonder how unfavorable it\nreally is.  To get a better idea, we have played the game 10{,}000 times.  In this\ncase our average gain is $-.4949$.\n\\par\nWe note that the relative frequency of each of the six possible outcomes is quite\nclose to the probability 1/6 for this outcome.  This corresponds to our frequency\ninterpretation of probability.  
It also suggests that for very large numbers of\nplays, our average gain should be\n\\begin{eqnarray*}\n\\mu & = & 1 \\Bigl(\\frac 16\\Bigr) - 2\\Bigl(\\frac 16\\Bigr) + 3 \\Bigl(\\frac 16\\Bigr) - 4\n\\Bigl(\\frac 16\\Bigr) + 5 \\Bigl(\\frac 16\\Bigr) - 6 \\Bigl(\\frac 16\\Bigr) \\\\\n    & = & \\frac 96 - \\frac {12}6 = -\\frac 36 = -.5\\ .\n\\end{eqnarray*} This agrees quite well with our average gain for 10{,}000 plays.\n\nWe note that the value we have chosen for the average gain is obtained by taking the\npossible outcomes, multiplying by the probability, and adding the results.  This\nsuggests the following definition for the expected outcome of an experiment.\n\n\\subsection*{Expected Value}\n\n\\begin{definition}\\label{def 6.1}  Let $X$ be a numerically-valued discrete random\nvariable with sample space $\\Omega$ and distribution function $m(x)$.  The {\\em\nexpected value}\\index{expected value} $E(X)$ is defined by\n$$ E(X) = \\sum_{x \\in \\Omega} x m(x)\\ ,\n$$ provided this sum converges absolutely.  We often refer to the expected value as\nthe \\emx {mean,}\\index{mean} and denote $E(X)$ by $\\mu$ for short.  If the above sum does not\nconverge absolutely, then we say that $X$ does not have an expected value.\n\\end{definition}\n\n\\begin{example}\\label{exam 6.03} Let an experiment consist of tossing a fair coin\nthree times.  Let $X$ denote the number of heads which appear.  Then the possible\nvalues of $X$ are $0, 1, 2$ and $3$.  The corresponding probabilities are $1/8, 3/8,\n3/8,$ and $1/8$.  Thus, the expected value of\n$X$ equals \n$$0\\biggl(\\frac 18\\biggr) + 1\\biggl(\\frac 38\\biggr) + 2\\biggl(\\frac 38\\biggr) +\n3\\biggl(\\frac 18\\biggr) = \\frac 32\\ .$$   Later in this section we shall see a\nquicker way to compute this expected value,  based on the fact that $X$ can be\nwritten as a sum of simpler random variables.\n\\end{example}\n\n\n\\begin{example}\\label{exam 6.05} Suppose that we toss a fair coin until a head first\ncomes up, and let $X$ represent the number of tosses which were made.  Then the\npossible values of $X$ are $1, 2, \\ldots$, and the distribution function of $X$ is\ndefined by\n$$m(i) = {1\\over {2^i}}\\ .$$ (This is just the geometric distribution with parameter\n$1/2$.)  Thus, we have\n\\begin{eqnarray*}  E(X) & = &\\sum_{i = 1}^\\infty i {1\\over{2^i}} \\\\ & = & \\sum_{i =\n1}^\\infty {1\\over{2^i}} + \\sum_{i = 2}^\\infty {1\\over{2^i}} + \\cdots \\\\ & = & 1 +\n{1\\over 2} + {1\\over{2^2}} + \\cdots \\\\ & = & 2\\ .\n\\end{eqnarray*}\n\n\\end{example}\n\n\\begin{example}(Example~\\ref{exam 6.05} continued)\\label{exam 6.055} Suppose\nthat we flip a coin until a head first appears, and if the number of tosses equals\n$n$, then we are paid $2^n$ dollars.  What is the expected value of the payment?\n\\par We let $Y$ represent the payment.  Then, \n$$P(Y = 2^n) = {1\\over{2^n}}\\ ,$$ for $n \\ge 1$.  Thus,\n$$E(Y) = \\sum_{n = 1}^\\infty 2^n {1\\over{2^n}}\\ ,$$ which is a divergent sum.  Thus,\n$Y$ has no expectation.  This example is called the  \\emx {St. Petersburg Paradox}.\n\\index{St. Petersburg Paradox} \nThe fact that the above sum is infinite suggests that a player should be willing to\npay any fixed amount per game for the privilege of playing this game. The reader is\nasked to consider how much he or she would be willing to pay for this privilege.  
It\nis unlikely that the reader's answer is more than 10 dollars; therein lies the\nparadox.\n\\par In the early history of probability, various mathematicians gave ways to resolve\nthis paradox.  One idea (due to G.~Cramer)\\index{CRAMER, G.} consists of assuming that the amount \nof money in the world is finite.  He thus assumes that there is some fixed value of $n$ such\nthat if the number of tosses equals or exceeds $n$, the payment is $2^n$ dollars. \nThe reader is asked to show in Exercise~\\ref{exer 6.1.21} that the expected value of\nthe payment is now finite.\n\\par Daniel Bernoulli\\index{BERNOULLI, D.} and Cramer also considered another way to \nassign value to the payment.  Their idea was that the value of a payment is some function of the\npayment; such a function is now called a utility function\\index{utility function}.   Examples of\nreasonable utility functions might include the square-root function or the logarithm function.  In\nboth cases, the value of $2n$ dollars is less than twice the value of $n$ dollars.  It can\neasily be shown that in both cases, the expected utility of the payment is finite (see\nExercise~\\ref{exer 6.1.21}).\n\\end{example}\n\n\\begin{example}\\label{exam 6.8} Let $T$ be the time for the first success in a\nBernoulli trials process.  Then we take as sample space $\\Omega$ the integers\n$1,~2,~\\ldots\\ $ and assign the geometric distribution\n$$ m(j) = P(T = j) = q^{j - 1}p\\ . $$  Thus,\n\\begin{eqnarray*}  E(T) & = & 1 \\cdot p + 2qp + 3q^2p +\\cdots \\\\\n     & = & p(1 + 2q + 3q^2 +\\cdots )\\ .\n\\end{eqnarray*}  Now if $|x| < 1$, then\n$$ 1 + x + x^2 + x^3 + \\cdots = \\frac 1{1 - x}\\ .$$  Differentiating this formula, we\nget\n$$ 1 + 2x + 3x^2 +\\cdots = \\frac 1{(1 - x)^2}\\ ,$$  so\n$$ E(T) = \\frac p{(1 - q)^2} = \\frac p{p^2} = \\frac 1p\\ .$$  In particular, we see\nthat if we toss a fair coin a sequence of times, the expected time until the first\nheads is 1/(1/2) = 2.  If we roll a die a sequence of times, the expected number of\nrolls until the first six is 1/(1/6) = 6.\n\\end{example}\n\n\\subsection*{Interpretation of Expected Value}\n\nIn statistics, one is frequently concerned with the average value of a set of data. \nThe following example shows that the ideas of average value and expected value are\nvery closely related.\n\n\\begin{example}\\label{exam 6.8.5} The heights, in inches, of the women\non the Swarthmore basketball team are 5' 9\", 5' 9\", 5' 6\", 5' 8\", 5' 11\",\n5' 5\", 5' 7\", 5' 6\", 5' 6\", 5' 7\", 5' 10\", and 6' 0\". \n\\par A statistician would compute the average height (in inches) as follows:\n$$\\frac{69 + 69 + 66 + 68 + 71 + 65 + 67 + 66 + 66 + 67 + 70 + 72}{12} = 67.9\\ .$$\nOne can also interpret this number as the expected value\nof a random variable.  To see this, let an experiment consist of choosing one of the\nwomen at random, and let $X$ denote her height.  Then the expected\nvalue of $X$ equals 67.9.\n\\end{example}\n\n\\par Of course, just as with the frequency interpretation of probability, to\ninterpret expected value as an average outcome requires further justification.  We\nknow that for any finite experiment the average of the outcomes is not predictable. \nHowever, we shall eventually prove that the average will usually be close to $E(X)$\nif we repeat the experiment a large number of times.  We first need to develop some\nproperties of the expected value.  
Using these properties, and those of the concept\nof the variance to be introduced in the next section, we shall be able to prove the\n\\emx {Law of Large Numbers.}  This theorem will justify mathematically both our\nfrequency concept of probability and the interpretation of expected value as the\naverage value to be expected in a large number of experiments.\n\n\n\\subsection*{Expectation of a Function of a Random Variable}\n\nSuppose that $X$ is a discrete random variable with sample space $\\Omega$, and\n$\\phi(x)$ is a real-valued function with domain $\\Omega$.  Then $\\phi(X)$ is a\nreal-valued random variable.  One way to determine the expected value of $\\phi(X)$ is\nto first determine the distribution function of this random variable, and then use\nthe definition of expectation. However, there is a better way to compute the expected\nvalue of $\\phi(X)$, as demonstrated in the next example.\n\n\\begin{example}\\label{exam 6.1.5} Suppose a coin is tossed 9 times, with the result\n\\[ HHHTTTTHT\\ . \\]    \n\\noindent The first set of three heads is called a  \\emx {run}\\index{run}.  There are three\nmore runs in this sequence, namely the next four tails, the next head, and the next\ntail.  We do not consider the first two tosses to constitute a run, since the third\ntoss has the same value as the first two.\n\\par Now suppose an experiment consists of tossing a fair coin three times.  Find the\nexpected number of runs.   It will be helpful to think of two random variables, $X$\nand $Y$, associated with this experiment.  We let $X$ denote the sequence of heads\nand tails that results when the experiment is performed, and $Y$ denote the number of\nruns in the outcome\n$X$.  The possible outcomes of $X$ and the corresponding values of $Y$  are\nshown in Table~\\ref{table 6.2}.\n\\begin{table}\n\\centering\n$$\\begin{tabular}{cc} X    &  Y \\\\ \\hline HHH  & 1\\\\ HHT  & 2\\\\ HTH  & 3\\\\ HTT  & 2\\\\\nTHH  & 2\\\\ THT  & 3\\\\ TTH  & 2\\\\ TTT  & 1\\\\\n\\end{tabular}\n$$\n\\caption{Tossing a coin three times.}\n\\label{table 6.2}\n\\end{table}\n\nTo calculate $E(Y)$ using the definition of expectation, we first must find the\ndistribution function $m(y)$ of $Y$ i.e., we group together those values of $X$ with\na common value of\n$Y$ and add their probabilities.  In this case, we calculate that the distribution\nfunction of $Y$ is:  $m(1) = 1/4,\\ m(2) = 1/2,$ and $m(3) = 1/4$.  One easily finds\nthat $E(Y) = 2$.  \n\\par Now suppose we didn't group the values of $X$ with a common $Y$-value, but\ninstead, for each\n$X$-value $x$, we multiply the probability of $x$ and the corresponding value of $Y$,\nand add the results.  We obtain\n$$1\\biggl(\\frac 18\\biggr) +2\\biggl(\\frac 18\\biggr) +3\\biggl(\\frac 18\\biggr)\n+2\\biggl(\\frac 18\\biggr) +2\\biggl(\\frac 18\\biggr) +3\\biggl(\\frac 18\\biggr)\n+2\\biggl(\\frac 18\\biggr) +1\\biggl(\\frac 18\\biggr)\\ ,$$ which equals 2.\n\\par This illustrates the following general principle.  
If $X$ and $Y$ are two random\nvariables, and $Y$ can be written as a function of $X$, then one can compute the\nexpected value of $Y$ using the distribution function of $X$.\n\\end{example}\n\n\\begin{theorem}\\label{thm 6.3.5} If $X$ is a discrete random variable with sample\nspace\n$\\Omega$ and distribution function $m(x)$, and if\n%$\\phi : \\Omega \\to {\\bf\\rm R}$ is a function, then \n$\\phi : \\Omega \\to$ \\mat{\\rm R} is a function, then \n$$ E(\\phi(X)) = \\sum_{x \\in \\Omega} \\phi(x) m(x)\\ ,\n$$  provided the series converges absolutely.\n\\end{theorem}\n \nThe proof of this theorem is straightforward, involving nothing more than grouping\nvalues of\n$X$ with a common $Y$-value, as in Example~\\ref{exam 6.1.5}.\n\n\\subsection*{The Sum of Two Random Variables}\n\nMany important results in probability theory concern sums of random variables.  We\nfirst  consider what it means to add two random variables.  \n\n\\begin{example}\\label{exam 6.06} We flip a coin and let $X$ have the value 1 if the\ncoin comes up heads and 0 if the coin comes up tails.  Then, we roll a die and let\n$Y$ denote the face that comes up.  What does $X+Y$ mean, and what is its\ndistribution?  This question is easily answered in this case, by considering, as we\ndid in Chapter~\\ref{chp 4}, the joint random variable $Z = (X,Y)$, whose outcomes are\nordered pairs of the form $(x, y)$, where $0 \\le x \\le 1$ and $1 \\le y \\le 6$.  The\ndescription of the experiment makes it reasonable to assume that $X$ and $Y$ are\nindependent, so the distribution function of $Z$ is uniform, with $1/12$ assigned to\neach outcome.  Now it is an easy matter to find the set of outcomes of $X+Y$, and its\ndistribution function.\n\\end{example}\n\nIn Example~\\ref{exam 6.03}, the random variable $X$ denoted the number of heads which\noccur when a fair coin is tossed three times.  It is natural to think of $X$ as the\nsum of the random variables $X_1, X_2, X_3$, where $X_i$ is defined to be 1 if the\n$i$th toss comes up heads, and 0 if the $i$th toss comes up tails.  The expected\nvalues of the $X_i$'s are extremely easy to compute.  It turns out that the expected\nvalue of $X$ can be obtained by simply adding the expected values of the $X_i$'s. \nThis fact is stated in the following theorem.\n\n\\begin{theorem}\\label{thm 6.1} Let $X$ and $Y$ be random variables with finite\nexpected values.  Then\n$$ E(X + Y) = E(X) + E(Y)\\ ,\n$$ and if $c$ is any constant, then\n$$ E(cX) = cE(X)\\ .\n$$\n\\proof Let the sample spaces of $X$ and $Y$ be denoted by $\\Omega_X$ and $\\Omega_Y$,\nand suppose that\n$$\\Omega_X = \\{x_1, x_2, \\ldots\\}$$ and\n$$\\Omega_Y = \\{y_1, y_2, \\ldots\\}\\ .$$ Then we can consider the random variable $X +\nY$ to be the result of applying the  function $\\phi(x, y) = x + y$ to the joint\nrandom variable $(X,Y)$.  
Then, by Theorem~\ref{thm 6.3.5}, we have\n\\begin{eqnarray*} E(X+Y) & = &\\sum_j \\sum_k (x_j + y_k) P(X = x_j,\\ Y = y_k) \\\\\n       & = &\\sum_j \\sum_k x_j P(X = x_j,\\ Y = y_k) +\n\\sum_j \\sum_k y_k P(X = x_j,\\ Y = y_k) \\\\ & = &\\sum_j x_j P(X = x_j) + \\sum_k y_k P(Y\n= y_k)\\ .\n\\end{eqnarray*} The last equality follows from the fact that\n$$\\sum_k P(X = x_j,\\ Y = y_k)\\ \\  =\\ \\  P(X = x_j)$$ and\n$$\\sum_j P(X = x_j,\\ Y = y_k)\\ \\  =\\ \\  P(Y = y_k)\\ .$$ Thus,\n$$E(X+Y) = E(X) + E(Y)\\ .$$ If $c$ is any constant,\n\\begin{eqnarray*}  E(cX) & = & \\sum_j cx_j P(X = x_j) \\\\\n      & = & c\\sum_j x_j P(X = x_j)\\\\\n      & = & cE(X)\\ .\n\\end{eqnarray*}\n\\end{theorem}\nIt is easy to prove by mathematical induction that  \\emx {the expected value\nof the sum of any finite number of random variables is the sum of the expected values\nof the individual random variables.}\n\\par It is important to note that mutual independence of the summands was not needed\nas a hypothesis in Theorem~\\ref{thm 6.1} and its generalization.  The fact that\nexpectations add, whether or not the summands are mutually independent, is sometimes\nreferred to as the First Fundamental Mystery of Probability.\\index{First Fundamental Mystery of\nProbability}\n\n\\begin{example}\\label{exam 6.1} Let $Y$ be the number of fixed points in a random\npermutation of the set\n$\\{a,b,c\\}$.  To find the expected value of $Y$, it is helpful to consider the basic\nrandom variable associated with this experiment, namely the random variable $X$ which\nrepresents the random permutation.  There are six possible outcomes of $X$, and we\nassign to each of them the probability $1/6$ (see Table~\\ref{table 6.3}).  Then we can calculate $E(Y)$ using\nTheorem~\\ref{thm 6.3.5}, as\n$$3\\Bigl({1\\over 6}\\Bigr) + 1\\Bigl({1\\over 6}\\Bigr) + 1\\Bigl({1\\over 6}\\Bigr) +\n0\\Bigl({1\\over 6}\\Bigr) +  0\\Bigl({1\\over 6}\\Bigr) + 1\\Bigl({1\\over 6}\\Bigr) = 1\\ .\n$$\n\\begin{table}\n\\centering\n\\begin{tabular}{ccc} &                          & \\\\ \n\\hline &$X$                 & $Y$ \\\\\n\\hline  &$a\\;\\;\\;b\\;\\;\\;   c$       & 3 \\\\  &$a\\;\\;\\; c\\;\\;\\;  b$       & 1 \\\\ \n&$b\\;\\;\\; a\\;\\;\\;  c$       & 1 \\\\  &$b\\;\\;\\; c\\;\\;\\;  a$       & 0 \\\\  &$c\\;\\;\\;\na\\;\\;\\;  b$       & 0 \\\\  &$c\\;\\;\\; b\\;\\;\\;  a$       & 1 \\\\ \n\\hline\n\\end{tabular}\n\\caption{Number of fixed points.}\n\\label{table 6.3}\n\\end{table}\n\n\\par  We now give a very quick way to calculate the average number of fixed points in\na random permutation of the set $\\{1, 2, 3, \\ldots, n\\}$.  Let $Z$ denote the random\npermutation.  For each $i$, $1 \\le i \\le n$, let $X_i$ equal 1 if $Z$ fixes $i$, and\n0 otherwise.  So if we let $F$ denote the number of fixed points in $Z$, then\n$$F = X_1 + X_2 + \\cdots + X_n\\ .$$ Therefore, Theorem~\\ref{thm 6.1} implies that\n$$E(F) = E(X_1) + E(X_2) + \\cdots + E(X_n)\\ .$$ But it is easy to see that for each\n$i$,\n$$E(X_i) = {1\\over n}\\ ,$$ so\n$$E(F) = 1\\ .$$ This method of calculation of the expected value is frequently very\nuseful.  It applies whenever the random variable in question can be written as a sum\nof simpler random variables.  We emphasize again that it is not necessary that the\nsummands be mutually independent.\n\\end{example}\n\n\\subsection*{Bernoulli Trials}\n\n\\begin{theorem}\\label{thm 6.3} Let $S_n$ be the number of successes in $n$ Bernoulli\ntrials with probability\n$p$ for success on each trial.  
Then the expected number of successes is $np$.  That\nis,\n$$ E(S_n) = np\\ .\n$$\n\\proof Let $X_j$ be a random variable which has the value~1 if the $j$th outcome is a\nsuccess and~0 if it is a failure.  Then, for each $X_j$,\n$$ E(X_j) = 0\\cdot(1 - p) + 1\\cdot p = p\\ .\n$$ Since\n$$ S_n = X_1 + X_2 +\\cdots+ X_n\\ ,\n$$ and the expected value of the sum is the sum of the expected values, we have\n\\begin{eqnarray*} E(S_n) & = & E(X_1) + E(X_2) +\\cdots+ E(X_n) \\\\\n       & = & np\\ .\n\\end{eqnarray*}\n\\end{theorem}\n\n\\subsection*{Poisson Distribution}\n\nRecall that the Poisson distribution with parameter $\\lambda$ was obtained as a limit\nof binomial distributions with parameters $n$ and $p$, where it was assumed that $np =\n\\lambda$, and $n \\rightarrow \\infty$.  Since for each $n$, the corresponding binomial\ndistribution has expected value $\\lambda$, it is reasonable to guess that the Poisson\ndistribution with parameter $\\lambda$ also has expected value $\\lambda$. This is in fact the case, and the reader is invited to show this (see\nExercise~\\ref{exer 6.1.100}).\n\n\\subsection*{Independence}\n\nIf $X$ and $Y$ are two random variables, it is not true in general that $E(X\n\\cdot Y) = E(X)E(Y)$.  However, this is true if $X$ and $Y$ are  \\emx {independent.}\n\n\\begin{theorem}\\label{thm 6.4} If $X$ and $Y$ are independent random variables, then\n$$ E(X \\cdot Y) = E(X)E(Y)\\ .\n$$\n\\proof Suppose that\n$$\\Omega_X = \\{x_1, x_2, \\ldots\\}$$  and\n$$\\Omega_Y = \\{y_1, y_2, \\ldots\\}$$ are the sample spaces of $X$ and $Y$,\nrespectively.  Using Theorem~\\ref{thm 6.3.5}, we have\n$$ E(X \\cdot Y) = \\sum_j \\sum_k x_jy_k P(X = x_j,\\ Y = y_k)\\ .\n$$\n\nBut if $X$ and $Y$ are independent,\n$$ P(X = x_j, Y = y_k) = P(X = x_j)P(Y = y_k)\\ .\n$$ Thus,\n\\begin{eqnarray*} E(X \\cdot Y) & = & \\sum_j\\sum_k x_j y_k P(X = x_j) P(Y = y_k) \\\\\n             & = & \\left(\\sum_j x_j P(X = x_j)\\right) \\left(\\sum_k y_k P(Y =\ny_k)\\right) \\\\\n             & = &E(X) E(Y)\\ .\n\\end{eqnarray*}\n\\end{theorem}\n\n\\begin{example}\\label{exam 6.3} A coin is tossed twice.  $X_i = 1$ if the $i$th toss\nis heads and~0 otherwise.  We know that $X_1$ and $X_2$ are independent.  They each\nhave expected value 1/2.  Thus $E(X_1 \\cdot X_2) = E(X_1) E(X_2) = (1/2)(1/2) = 1/4$.\n\\end{example}\n\nWe next give a simple example to show that the expected values need not multiply if\nthe random variables are not independent.  \n\n\\begin{example}\\label{exam 6.4} Consider a single toss of a coin.  We define the\nrandom variable $X$ to be~1 if heads turns up and~0 if tails turns up, and we set $Y\n= 1 - X$.  Then $E(X) = E(Y) = 1/2$.  But $X \\cdot Y = 0$ for either outcome.  Hence,\n$E(X \\cdot Y) = 0 \\ne E(X) E(Y)$.\n\\end{example}\n\nWe return to our records example of Section~\\ref{sec 3.1} for another application of\nthe result that the expected value of the sum of random variables is the sum of the\nexpected values of the individual random variables.\n\n\\subsection*{Records}\\index{records}\n\n\\begin{example}\\label{exam 6.5} We start keeping snowfall records this year and want\nto find the expected number of records that will occur in the next $n$ years.  The\nfirst year is necessarily a record.  The second year will be a record if the snowfall\nin the second year is greater than that in the first year.  By symmetry, this\nprobability is 1/2.  More generally, let $X_j$ be~1 if the $j$th year is a record\nand~0 otherwise.  
To find $E(X_j)$, we need only find the probability that the $j$th\nyear is a record.  But the record snowfall for the first $j$ years is equally likely\nto fall in any one of these years, so $E(X_j) = 1/j$.  Therefore, if $S_n$ is the\ntotal number of records observed in the first $n$ years,\n$$ E(S_n) = 1 + \\frac 12 + \\frac 13 +\\cdots+ \\frac 1n\\ .\n$$ This is the famous  \\emx {divergent harmonic series.}  It is easy to show that\n$$ E(S_n) \\sim \\log n\n$$ as $n \\rightarrow \\infty$.  A more accurate approximation to $E(S_n)$ is given by the expression\n$$\\log n + \\gamma + {1\\over {2n}}\\ ,$$\nwhere $\\gamma$ denotes Euler's constant, and is approximately equal to .5772.\n\nTherefore, in ten years the expected number of records is approximately $2.9298$; the exact value is the sum of the first ten terms of the harmonic series which\nis 2.9290.  \n\\end{example}\n\n\\subsection*{Craps}\\index{craps}\n\n\\begin{example}\\label{exam 6.6} In the game of craps, the player makes a bet and\nrolls a pair of dice.  If the sum of the numbers is 7~or~11 the player wins; if it is\n2,~3, or~12 the player loses.  If any other number results, say $r$, then $r$ becomes\nthe player's point and he continues to roll until either $r$ or 7 occurs.  If $r$\ncomes up first he wins, and if 7 comes up first he loses.  The program {\\bf Craps}\\index{Craps\n(program)} simulates playing this game a number of times.\n\\par We have run the program for 1000 plays in which the player bets 1~dollar each\ntime.  The player's average winnings were $-.006$.  The game of craps would seem to\nbe only slightly unfavorable.  Let us calculate the expected winnings on a single\nplay and see if this is the case.  We construct a two-stage tree measure as shown in\nFigure~\\ref{fig 6.1}.\n\n\\putfig{4.5truein}{PSfig6-1}{Tree measure for craps.}{fig 6.1}\n\nThe first stage represents the possible sums for his first roll.  The second stage\nrepresents the possible outcomes for the game if it has not ended on the first roll. \nIn this stage we are representing the possible outcomes of a sequence of rolls\nrequired to determine the final outcome.  The branch probabilities for the first\nstage are computed in the usual way assuming all 36 possibilities for outcomes for the\npair of dice are equally likely.  For the second stage we assume that the game will\neventually end, and we compute the conditional probabilities for obtaining either the\npoint or a~7.  For example, assume that the player's point is~6.  Then the game will\nend when one of the eleven pairs, $(1,5)$, $(2,4)$, $(3,3)$, $(4,2)$, $(5,1)$,\n$(1,6)$, $(2,5)$,\n$(3,4)$, $(4,3)$, $(5,2)$, $(6,1)$, occurs.  We assume that each of these possible\npairs has the same probability.  Then the player wins in the first five cases and\nloses in the last six.  Thus the probability of winning is 5/11 and the probability\nof losing is 6/11.  From the path probabilities, we can find the probability that the\nplayer wins 1~dollar; it is 244/495.  The probability of losing is then 251/495. \nThus if $X$ is his winnings for a dollar bet,\n\\begin{eqnarray*} E(X) & = & 1\\Bigl(\\frac {244}{495}\\Bigr) + (-1)\\Bigl(\\frac\n{251}{495}\\Bigr) \\\\\n     & = & -\\frac {7}{495} \\approx -.0141\\ .\n\\end{eqnarray*} The game is unfavorable, but only slightly.  The player's expected\ngain in $n$ plays is $-n(.0141)$.  If $n$ is not large, this is a small expected loss\nfor the player.  
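In $100$ plays, for instance, the player's expected loss is only about $1.41$ dollars. 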
\subsection*{Roulette}\index{roulette}

\begin{example}\label{exam 6.7} In Las Vegas, a roulette wheel has 38 slots numbered 0,\ 00,\ 1,\ 2,\ \ldots,\ 36.  The 0 and 00 slots are green, and half of the remaining 36 slots are red and half are black.  A croupier spins the wheel and throws an ivory ball.  If you bet 1 dollar on red, you win 1 dollar if the ball stops in a red slot, and otherwise you lose a dollar.  We wish to calculate the expected value of your winnings, if you bet 1 dollar on red.
\par
Let $X$ be the random variable which denotes your winnings in a 1 dollar bet on red in Las Vegas roulette.  Then the distribution of $X$ is given by
$$ m_{X} = \pmatrix{
   -1 & 1 \cr 20/38 & 18/38 \cr},
$$ and one can easily calculate (see Exercise~\ref{exer 6.1.5}) that
$$
E(X) \approx -.0526\ .
$$
\par
We now consider the roulette game in Monte Carlo, and follow the treatment of Sagan.\footnote{H. Sagan, \emx {Markov Chains in Monte Carlo,} Math. Mag., vol. 54, no. 1 (1981), pp. 3--10.}\index{SAGAN, H.} In the roulette game in Monte Carlo there is only one~0.  If you bet 1 franc on red and a~0 turns up, then, depending upon the casino, one or more of the following options may be offered:
\par
\noindent
(a) You get 1/2 of your bet back, and the casino gets the other half of your bet.
\par
\noindent
(b) Your bet is put ``in prison," which we will denote by $P_1$.  If red comes up on the next turn, you get your bet back (but you don't win any money).  If black or 0 comes up, you lose your bet.
\par
\noindent
(c) Your bet is put in prison $P_1$, as before.  If red comes up on the next turn, you get your bet back, and if black comes up on the next turn, then you lose your bet.  If a 0 comes up on the next turn, then your bet is put into double prison, which we will denote by $P_2$.  If your bet is in double prison, and if red comes up on the next turn, then your bet is moved back to prison $P_1$ and the game proceeds as before.  If your bet is in double prison, and if black or 0 comes up on the next turn, then you lose your bet.  We refer the reader to Figure~\ref{fig 6.1.5}, where a tree for this option is shown.  In this figure, $S$ is the starting position, $W$ means that you win your bet, $L$ means that you lose your bet, and $E$ means that you break even.
\putfig{4.5truein}{PSfig6-1-5}{Tree for 2-prison Monte Carlo roulette.}{fig 6.1.5}
\par
It is interesting to compare the expected winnings of a 1 franc bet on red, under each of these three options.  We leave the first two calculations as an exercise (see Exercise~\ref{exer 6.1.38}).  Suppose that you choose to play alternative (c).  The calculation for this case illustrates the way that the early French probabilists worked problems like this.
\par
Suppose you bet on red, you choose alternative (c), and a~0 comes up.  Your possible future outcomes are shown in the tree diagram in Figure~\ref{fig 6.2}.  Assume that your money is in the first prison and let $x$ be the probability that you lose your franc.  From the tree diagram we see that
$$
x = \frac {18}{37} + \frac 1{37}P({\rm you\ lose\ your\ franc\ }|\ {\rm your\ franc\ is\ in\ }P_2)\ .
$$
Also,
$$
P({\rm you\ lose\ your\ franc\ }|\ {\rm your\ franc\ is\ in\ }P_2) =  \frac {19}{37} + \frac{18}{37}x\ .
$$
So, we have
$$x = \frac{18}{37} + \frac 1{37}\Bigl(\frac {19}{37} + \frac{18}{37}x\Bigr)\ .$$
Solving for $x$, we obtain $x = 685/1351$.  Thus, starting at $S$, the probability that you lose your bet equals
$$\frac {18}{37} + \frac 1{37}x = \frac{25003}{49987}\ .$$
\par
To find the probability that you win when you bet on red, note that you can only win if red comes up on the first turn, and this happens with probability 18/37.  Thus your expected winnings are
$$ 1 \cdot {\frac{18}{37}} - 1 \cdot {\frac {25003}{49987}} = -\frac{685}{49987}
\approx -.0137\ .$$

\putfig{4.5truein}{PSfig6-2}{Your money is put in prison.}{fig 6.2}
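
The arithmetic can be verified in exact rational form; here is a minimal Python sketch (our addition):

\begin{verbatim}
from fractions import Fraction as F

# x = P(lose | franc in P1):
#   x = 18/37 + (1/37) y,  where  y = 19/37 + (18/37) x.
x = (F(18, 37) + F(1, 37) * F(19, 37)) / (1 - F(1, 37) * F(18, 37))
p_lose = F(18, 37) + F(1, 37) * x
print(x)                    # 685/1351
print(p_lose)               # 25003/49987
print(F(18, 37) - p_lose)   # -685/49987, about -.0137
\end{verbatim}
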
It is interesting to note that the more romantic option (c) is less favorable than option (a) (see Exercise~\ref{exer 6.1.38}).
\par
If you bet 1~dollar on the number 17, then the distribution function for your winnings $X$ is
$$ m_X = \pmatrix{
   -1 & 35 \cr 36/37 & 1/37 \cr}\ ,
$$
and the expected winnings are
$$ -1 \cdot {\frac{36}{37}} + 35 \cdot {\frac 1{37}} = -\frac 1{37} \approx -.027\ .
$$
Thus, at Monte Carlo different bets have different expected values.  In Las Vegas almost all bets have the same expected value of $-2/38 = -.0526$ (see Exercises~\ref{exer 6.1.4} and \ref{exer 6.1.5}).
\end{example}

\subsection*{Conditional Expectation}

\begin{definition}\label{def 6.3} If $F$ is any event and $X$ is a random variable with sample space $\Omega = \{x_1, x_2, \ldots\}$, then the \emx {conditional expectation\index{conditional expectation} given $F$} is defined by
$$ E(X|F) = \sum_j x_j P(X = x_j|F)\ .
$$ Conditional expectation is used most often in the form provided by the following theorem.
\end{definition}

\begin{theorem}\label{thm 6.5} Let $X$ be a random variable with sample space $\Omega$.  If $F_1$,~$F_2$, \dots,~$F_r$ are events such that $F_i \cap F_j = \emptyset$ for $i \ne j$ and $\Omega = \cup_j F_j$, then
$$ E(X) = \sum_j E(X|F_j) P(F_j)\ .
$$
\proof We have
\begin{eqnarray*}
\sum_j E(X|F_j) P(F_j) & = & \sum_j \sum_k x_k P(X = x_k|F_j) P(F_j) \\
                       & = & \sum_j \sum_k x_k P(X = x_k\,\, {\rm and}\,\, F_j\,\,{\rm occurs}) \\
                       & = & \sum_k \sum_j x_k P(X = x_k\,\,{\rm and}\,\,F_j\,\,{\rm occurs}) \\
                       & = & \sum_k x_k P(X = x_k) \\
                       & = & E(X)\ .
\end{eqnarray*}
\end{theorem}

\begin{example}(Example~\ref{exam 6.6} continued){\label{exam 6.10}}\index{craps} Let $T$ be the number of rolls in a single play of craps.  We can think of a single play as a two-stage process.  The first stage consists of a single roll of a pair of dice.  The play is over if this roll is a 2, 3, 7, 11, or 12.  Otherwise, the player's point is established, and the second stage begins.  This second stage consists of a sequence of rolls which ends when either the player's point or a 7 is rolled.  We record the outcomes of this two-stage experiment using the random variables $X$ and $S$, where $X$ denotes the first roll, and $S$ denotes the number of rolls in the second stage of the experiment (of course, $S$ is sometimes equal to 0).  Note that $T = S+1$.  Then by Theorem~\ref{thm 6.5}
$$ E(T) = \sum_{j = 2}^{12} E(T|X = j) P(X = j)\ .
$$ If $j = 2$, 3, 7, 11, or~12, then $E(T|X = j) = 1$.  If $j = 4, 5, 6, 8, 9,$ or $10$, we can use Example~\ref{exam 6.8} to calculate the expected value of $S$.  In each of these cases, we continue rolling until we get either a $j$ or a 7.  Thus, $S$ is geometrically distributed with parameter $p$, which depends upon $j$.  If $j = 4$, for example, the value of $p$ is $3/36 + 6/36 = 1/4$.  Thus, in this case, the expected number of additional rolls is $1/p = 4$, so $E(T|X = 4) = 1 + 4 = 5$.  Carrying out the corresponding calculations for the other possible values of $j$ and using Theorem~\ref{thm 6.5} gives
\begin{eqnarray*} E(T) & = & 1\Bigl(\frac {12}{36}\Bigr)
    + \Bigl(1 + \frac {36}{3 + 6}\Bigr)\Bigl(\frac 3{36}\Bigr)
    + \Bigl(1 + \frac {36}{4 + 6}\Bigr)\Bigl(\frac 4{36}\Bigr) \\
& & + \Bigl(1 + \frac {36}{5 + 6}\Bigr)\Bigl(\frac 5{36}\Bigr)
    + \Bigl(1 + \frac {36}{5 + 6}\Bigr)\Bigl(\frac 5{36}\Bigr) \\
& & + \Bigl(1 + \frac {36}{4 + 6}\Bigr)\Bigl(\frac 4{36}\Bigr)
    + \Bigl(1 + \frac {36}{3 + 6}\Bigr)\Bigl(\frac 3{36}\Bigr) \\
& = & \frac {557}{165} \\
& \approx & 3.376\ .
\end{eqnarray*}
\end{example}
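
This bookkeeping is easily mechanized.  A short Python sketch of our own reproduces $E(T) = 557/165$ from the conditional decomposition:

\begin{verbatim}
from fractions import Fraction as F

ways = {j: 6 - abs(j - 7) for j in range(2, 13)}  # ways to roll sum j

E_T = F(0)
for j in range(2, 13):
    p_j = F(ways[j], 36)
    if j in (2, 3, 7, 11, 12):
        E_T += 1 * p_j             # the play ends on the first roll
    else:
        p = F(ways[j] + 6, 36)     # P(point or 7) on each later roll
        E_T += (1 + 1 / p) * p_j   # first roll plus 1/p further rolls
print(E_T)                         # 557/165, about 3.376
\end{verbatim}
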
\subsection*{Martingales}

We can extend the notion of fairness to a player playing a sequence of games by using the concept of conditional expectation.

\begin{example}\label{exam 6.11} Let $S_1$,~$S_2$, \dots,~$S_n$ be Peter's accumulated fortune in playing heads or tails (see Example~\ref{exam 1.3}).  Then
$$ E(S_n | S_{n - 1} = a,\dots,S_1 = r) = \frac 12 (a + 1) + \frac 12 (a - 1) = a\ .
$$

We note that Peter's expected fortune after the next play is equal to his present fortune.  When this occurs, we say the game is \emx {fair.}\index{fair game}  A fair game is also called a \emx {martingale.}\index{martingale}  If the coin is biased and comes up heads with probability $p$ and tails with probability $q = 1 - p$, then
$$ E(S_n | S_{n - 1} = a,\dots,S_1 = r) = p (a + 1) + q (a - 1) = a + p - q\ .
$$ Thus, if $p < q$, this game is unfavorable, and if $p > q$, it is favorable.
\end{example}

If you are in a casino, you will see players adopting elaborate \emx {systems}\index{gambling systems} of play to try to make unfavorable games favorable.  Two such systems, the martingale doubling system and the more conservative Labouchere system, were described in Exercises~\ref{sec 1.1}.\ref{exer 1.1.9}~and~\ref{sec 1.1}.\ref{exer 1.1.10}.  Unfortunately, such systems cannot change even a fair game into a favorable game.

Even so, it is a favorite pastime of many people to develop systems of play for gambling games and for other games such as the stock market.  We close this section with a simple illustration of such a system.

\subsection*{Stock Prices}\index{stock prices}

\begin{example}\label{exam 6.12} Let us assume that a stock increases or decreases in value each day by 1~dollar, each with probability 1/2.  Then we can identify this simplified model with our familiar game of heads or tails.  We assume that a buyer, Mr.\ Ace\index{Ace, Mr.}, adopts the following strategy.  He buys the stock on the first day at its price $V$.  He then waits until the price of the stock increases by one to $V + 1$ and sells.  He then continues to watch the stock until its price falls back to $V$.  He buys again and waits until it goes up to $V + 1$ and sells.  Thus he holds the stock in intervals during which it increases by 1~dollar.  In each such interval, he makes a profit of 1~dollar.  However, we assume that he can do this only for a finite number of trading days.  Thus he can lose if, in the last interval that he holds the stock, it does not get back up to $V + 1$; and this is the only way he can lose.  In Figure~\ref{fig 6.3} we illustrate a typical history if Mr.\ Ace must stop in twenty days.
\putfig{4truein}{PSfig6-3}{Mr.\ Ace's system.}{fig 6.3}
Mr.\ Ace holds the stock under his system during the days indicated by broken lines.  We note that for the history shown in Figure~\ref{fig 6.3}, his system nets him a gain of 4~dollars.
\par
We have written a program {\bf StockSystem}\index{StockSystem (program)} to simulate the fortune of Mr.\ Ace if he uses his system over an $n$-day period.  If one runs this program a large number of times, for $n = 20$, say, one finds that his expected winnings are very close to 0, but the probability that he is ahead after 20 days is significantly greater than 1/2.  For small values of $n$, the exact distribution of winnings can be calculated.  The distribution for the case $n = 20$ is shown in Figure~\ref{fig 6.3.1}.  Using this distribution, it is easy to calculate that the expected value of his winnings is exactly 0.  This is another instance of the fact that a fair game (a martingale\index{martingale}) remains fair under quite general systems of play.
\putfig{4truein}{PSfig6-3-1}{Winnings distribution for $n = 20$.}{fig 6.3.1}
\par
Although the expected value of his winnings is 0, the probability that Mr.\ Ace is ahead after 20 days is about .610.  Thus, he would be able to tell his friends that his system gives him a better chance of being ahead than that of someone who simply buys the stock and holds it, if our simple random model is correct.  There have been a number of studies to determine how random the stock market is.
\end{example}
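
The program {\bf StockSystem} is not listed in the text; the following Python sketch is our own reconstruction of Mr.\ Ace's system under the stated random-walk model (the exact day-counting convention may differ slightly from the book's program):

\begin{verbatim}
import random

def ace_profit(days=20):
    # Price starts at V (coded as 0) and moves +1 or -1 each day.
    # Mr. Ace buys at V, sells whenever the price reaches V + 1,
    # and buys back whenever it falls to V again.
    price, profit, holding = 0, 0, True
    for _ in range(days):
        price += random.choice((1, -1))
        if holding and price == 1:
            profit += 1              # sell at V + 1
            holding = False
        elif not holding and price == 0:
            holding = True           # buy back at V
    if holding:
        profit += price              # still holding on the last day
    return profit

trials = 10000
results = [ace_profit() for _ in range(trials)]
print(sum(results) / trials)                 # expected winnings near 0
print(sum(r > 0 for r in results) / trials)  # chance of being ahead
\end{verbatim}
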
\subsection*{Historical Remarks}

With the Law of Large Numbers to bolster the frequency interpretation of probability, we find it natural to justify the definition of expected value in terms of the average outcome over a large number of repetitions of the experiment.  The concept of expected value was used before it was formally defined; and when it was used, it was considered not as an average value but rather as the appropriate value for a gamble.  For example, recall, from the Historical Remarks section of Chapter~\ref{chp 1}, Section~\ref{sec 1.2}, Pascal's way of finding the value of a three-game series that had to be called off before it was finished.
\par
Pascal\index{PASCAL, B.} first observed that if each player has only one game to win, then the stake of 64~pistoles should be divided evenly.  Then he considered the case where one player has won two games and the other has won one.

\begin{quote} Then consider, Sir, if the first man wins, he gets 64~pistoles, if he loses he gets 32.
Thus if they do not wish to risk this last game, but wish to separate without playing it, the first man must say: ``I am certain to get 32~pistoles, even if I lose I still get them; but as for the other 32~pistoles, perhaps I will get them, perhaps you will get them, the chances are equal.  Let us then divide these 32~pistoles in half and give one half to me as well as my 32 which are mine for sure."  He will then have 48~pistoles and the other 16.\footnote{Quoted in F.~N. David, \emx {Games, Gods and Gambling} (London: Griffin, 1962), p.~231.}
\end{quote}

Note that Pascal reduced the problem to a symmetric bet in which each player gets the same amount and takes it as obvious that in this case the stakes should be divided equally.

The first systematic study of expected value appears in Huygens'\index{HUYGENS, C.|(} book.  Like Pascal, Huygens found the value of a gamble by assuming that the answer is obvious for certain symmetric situations, and he used this to deduce the expected value for the general situation.  He did this in steps.  His first proposition is

\begin{quote} Prop.~I. If I expect $a$ or $b$, either of which, with equal probability, may fall to me, then my Expectation is worth $(a + b)/2$, that is, the half Sum of $a$ and $b$.\footnote{C. Huygens, \emx {Calculating in Games of Chance,} translation attributed to John Arbuthnot (London, 1692), p.~34.}
\end{quote}

Huygens proved this as follows: Assume that two players A and B play a game in which each player puts up a stake of $(a + b)/2$ with an equal chance of winning the total stake.  Then the value of the game to each player is $(a + b)/2$.  For example, if the game had to be called off, clearly each player should just get back his original stake.  Now, by symmetry, this value is not changed if we add the condition that the winner of the game has to pay the loser an amount $b$ as a consolation prize.  Then for player A the value is still $(a + b)/2$.  But what are his possible outcomes for the modified game?  If he wins he gets the total stake $a + b$ and must pay B an amount $b$, so he ends up with $a$.  If he loses he gets an amount $b$ from player B.  Thus player A wins $a$ or $b$ with equal chances and the value to him is $(a + b)/2$.

Huygens illustrated this proof in terms of an example.  If you are offered a game in which you have an equal chance of winning 2~or~8, the expected value is~5, since this game is equivalent to the game in which each player stakes 5 and agrees to pay the loser 3 --- a game in which the value is obviously 5.

Huygens' second proposition is

\begin{quote} Prop.~II. If I expect $a$,~$b$, or~$c$, either of which, with equal facility, may happen, then the Value of my Expectation is $(a + b + c)/3$, or the third of the Sum of $a$,~$b$, and~$c$.\footnote{ibid., p.~35.}
\end{quote}

His argument here is similar.  Three players, A,~B, and~C, each stake
$$(a+b+c)/3$$
in a game they have an equal chance of winning.  The value of this game to player A is clearly the amount he has staked.  Further, this value is not changed if A enters into an agreement with B that if one of them wins he pays the other a consolation prize of $b$ and with C that if one of them wins he pays the other a consolation prize of $c$.  By symmetry these agreements do not change the value of the game.  In this modified game, if A wins he wins the total stake $a + b + c$ minus the consolation prizes $b + c$, giving him a final winning of $a$.  If B wins, A wins $b$ and if C wins, A wins $c$.  Thus A finds himself in a game with value $(a + b + c)/3$ and with outcomes $a$,~$b$, and~$c$ occurring with equal chance.  This proves Proposition II.
\par
More generally, this reasoning shows that if there are $n$ outcomes
$$a_1,\ a_2,\ \ldots,\ a_n\ ,$$
all occurring with the same probability, the expected value is
$$
\frac {a_1 + a_2 +\cdots+ a_n}n\ .
$$

In his third proposition Huygens considered the case where you win $a$ or $b$ but with unequal probabilities.  He assumed there are $p$ chances of winning $a$, and $q$ chances of winning $b$, all having the same probability.  He then showed that the expected value is
$$ E = \frac p{p + q} \cdot a + \frac q{p + q} \cdot b\ .
$$ This follows by considering an equivalent gamble with $p + q$ outcomes all occurring with the same probability and with a payoff of $a$ in $p$ of the outcomes and $b$ in $q$ of the outcomes.  This allowed Huygens to compute the expected value for experiments with unequal probabilities, at least when these probabilities are rational numbers.
\par
Thus, instead of defining the expected value as a weighted average, Huygens assumed that the expected values of certain symmetric gambles are known and deduced the other values from these.  Although this requires a good deal of clever manipulation, Huygens ended up with values that agree with those given by our modern definition of expected value.  One advantage of this method is that it gives a justification for the expected value in cases where it is not reasonable to assume that you can repeat the experiment a large number of times, as for example, in betting that at least two presidents died on the same day of the year.  (In fact, three did: Adams, Jefferson, and Monroe all died on July~4, and two of them, Adams and Jefferson, were signers of the Declaration of Independence.)
\par
In his book, Huygens calculated the expected value of games using techniques similar to those which we used in computing the expected value for roulette at Monte Carlo.  For example, his proposition XIV is:

\begin{quote} Prop.~XIV. If I were playing with another by turns, with two Dice, on this Condition, that if I throw 7 I gain, and if he throws 6 he gains allowing him the first Throw: To find the proportion of my Hazard to his.\footnote{ibid., p.~47.}
\end{quote}
\par A modern description of this game is as follows.  Huygens and his opponent take turns rolling a pair of dice.  The game is over if Huygens rolls a 7 or his opponent rolls a 6.  His opponent rolls first.  What is the probability that Huygens wins the game?
\par  To solve this problem Huygens let $x$ be his chance of winning when his opponent threw first and $y$ his chance of winning when he threw first.  Then on the first roll his opponent wins on 5 out of the 36 possibilities.  Thus,
$$ x = \frac {31}{36} \cdot y\ .
$$ But when Huygens rolls he wins on 6 out of the 36 possible outcomes, and in the other 30, he is led back to where his chances are $x$.  Thus
$$ y = \frac 6{36} + \frac {30}{36} \cdot x\ .
$$ From these two equations Huygens found that $x = 31/61$.\index{HUYGENS, C.|)}
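
Huygens' answer is easy to verify with exact rational arithmetic; a brief Python sketch of our own:

\begin{verbatim}
from fractions import Fraction as F

# x: Huygens' chance when his opponent throws first;
# y: his chance when he throws first.
#   x = (31/36) y   and   y = 6/36 + (30/36) x.
x = (F(31, 36) * F(6, 36)) / (1 - F(31, 36) * F(30, 36))
print(x)   # 31/61
\end{verbatim}
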
Another early use of expected value appeared in Pascal's\index{PASCAL, B.} argument to show that a rational person should believe in the existence of God.\index{existence of God}\footnote{Quoted in I. Hacking, \emx {The Emergence of Probability} (Cambridge: Cambridge Univ.\ Press, 1975).}  Pascal said that we have to make a wager whether to believe or not to believe.
Let $p$ denote the probability that God does not exist.  His discussion suggests that we are playing a game with two strategies, believe and not believe, with payoffs as shown in Table~\ref{table 6.4}.
\begin{table}
\centering
$$\begin{tabular}{ccc}
              & \hspace{.7in}God does not exist & \hspace{.05in}God exists \\
              & \\
              & \hspace{.8in}$p$              & \hspace{.15in}$1 - p$ \\
\end{tabular}
$$
$$\begin{tabular}{l|c|c|}\cline{2-2} \cline{3-3}
 believe      & \hspace{.5in} $-u$\hspace{.5in} &\hspace{.3in} $v$\hspace{.3in} \\
\cline{2-2} \cline{3-3}
 not believe  &    0  & $-x$\\ \cline{2-2} \cline{3-3}
\end{tabular}
$$
\caption{Payoffs.}
\label{table 6.4}
\end{table}

Here $-u$ represents the cost to you of passing up some worldly pleasures as a consequence of believing that God exists.  If you do not believe, and God is a vengeful God, you will lose $x$.  If God exists and you do believe you will gain $v$.  Now to determine which strategy is best you should compare the two expected values
$$ p(-u) + (1 - p)v \qquad {\rm and} \qquad p \cdot 0 + (1 - p)(-x)\ ,
$$ and choose the larger of the two.  In general, the choice will depend upon the value of $p$.  But Pascal assumed that the value of $v$ is infinite and so the strategy of believing is best no matter what probability you assign for the existence of God.  This example is considered by some to be the beginning of decision theory.  Decision analyses of this kind appear today in many fields, and, in particular, are an important part of medical diagnostics and corporate business decisions.

Another early use of expected value was to decide the price of annuities.\index{annuity}  The study of statistics has its origins in the use of the bills of mortality kept in the parishes in London from 1603.  These records kept a weekly tally of christenings and burials.  From these John Graunt\index{GRAUNT, J.} made estimates for the population of London and also provided the first mortality data,\index{mortality table}\footnote{ibid., p.~108.} shown in Table~\ref{table 6.5}.
\begin{table}
\centering
$$
\begin{tabular}{rr}
    & \\ \hline
 Age &\hspace{.35in} Survivors  \\ \hline
 0  & 100 \\
 6  &  64 \\
 16 &  40 \\
 26 &  25 \\
 36 &  16 \\
 46 &  10 \\
 56 &   6 \\
 66 &   3 \\
 76 &   1 \\ \hline
\end{tabular}$$
\caption{Graunt's mortality data.}
\label{table 6.5}
\end{table}

As Hacking observes, Graunt apparently constructed this table by assuming that after the age of~6 there is a constant probability of about 5/8 of surviving for another decade.\footnote{ibid., p.~109.}  For example, of the 64~people who survive to age~6, 5/8 of 64~or~40 survive to~16, 5/8 of these 40~or~25 survive to~26, and so forth.  Of course, he rounded off his figures to the nearest whole person.

Clearly, a constant mortality rate cannot be correct throughout the whole range, and later tables provided by Halley were more realistic in this respect.\footnote{E. Halley, ``An Estimate of The Degrees of Mortality of Mankind," \emx {Phil.\ Trans.\ Royal.\ Soc.,} vol.~17 (1693), pp.~596--610; 654--656.}

A \emx {terminal annuity}\index{annuity!terminal} provides a fixed amount of money during a period of $n$ years.  To determine the price of a terminal annuity one needs only to know the appropriate interest rate.
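
In modern notation this price is the present value $\sum_{t=1}^{n} A/(1+r)^t$ of the $n$ payments of $A$ at interest rate $r$ (the standard discounting formula; the code below is our addition, not part of the text):

\begin{verbatim}
def terminal_annuity_price(payment, years, rate):
    # Present value of a fixed payment at the end of each year.
    return sum(payment / (1 + rate) ** t
               for t in range(1, years + 1))

print(terminal_annuity_price(100, 10, 0.05))   # about 772.17
\end{verbatim}
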
A \\emx {life annuity}\\index{annuity!life} provides a fixed amount during\neach year of the buyer's life.  The appropriate price for a life annuity is the\nexpected value of the terminal annuity evaluated for the random lifetime of the\nbuyer.  Thus, the work of Huygens in introducing expected value and the work of\nGraunt and Halley in determining mortality tables led to a more rational method for\npricing annuities.  This was one of the first serious uses of probability theory\noutside the gambling houses.\n\nAlthough expected value plays a role now in every branch of science, it retains its\nimportance in the casino.  In 1962, Edward Thorp's\\index{THORP, E.} book \\emx {Beat the\nDealer}\\footnote{E. Thorp, \\emx {Beat the Dealer} (New York: Random House, 1962).}\nprovided the reader with a strategy for playing the popular casino game of\nblackjack\\index{blackjack} that would assure the player a positive expected winning.  This book\nforevermore changed the belief of the casinos that they could not be beat.\n\n\\exercises\n\\begin{LJSItem}\n\n\\i\\label{exer 6.1.1} A card is drawn at random from a deck consisting of cards\nnumbered 2 through~10.  A player wins 1~dollar if the number on the card is odd and\nloses 1~dollar if the number if even.  What is the expected value of his winnings?\n\n\\i\\label{exer 6.1.2} A card is drawn at random from a deck of playing cards.  If\nit is red, the player wins 1~dollar; if it is black, the player loses 2~dollars. \nFind the expected value of the game.\n\n\\i\\label{exer 6.1.3} In a class there are 20~students: 3 are 5' 6\", 5 are 5'8\", 4\nare 5'10\", 4 are 6', and 4 are 6' 2\".  A student is chosen at random.  What is the\nstudent's expected height?\n\n\\i\\label{exer 6.1.4} In Las Vegas the roulette wheel has a~0 and a~00 and then the\nnumbers 1~to~36 marked on equal slots; the wheel is spun and a ball stops randomly in\none slot.  When a player bets 1~dollar on a number, he receives 36~dollars if the\nball stops on this number, for a net gain of 35~dollars; otherwise, he loses his\ndollar bet.  Find the expected value for his winnings.\n\n\\i\\label{exer 6.1.5} In a second version of roulette in Las Vegas, a player bets\non red or black.  Half of the numbers from~1 to~36 are red, and half are black.  If a\nplayer bets a dollar on black, and if the ball stops on a black number, he gets his\ndollar back and another dollar.  If the ball stops on a red number or on~0 or~00 he\nloses his dollar.  Find the expected winnings for this bet.\n\n\\i\\label{exer 6.1.6} A die is rolled twice.  Let $X$ denote the sum of the two\nnumbers that turn up, and $Y$ the difference of the numbers (specifically, the number\non the first roll minus the number on the second).  Show that $E(XY) = E(X)E(Y)$.  Are\n$X$ and $Y$ independent?\n\n\\istar\\label{exer 6.1.7} Show that, if $X$ and $Y$ are random variables taking on\nonly two values each, and if $E(XY) = E(X)E(Y)$, then $X$ and $Y$ are independent.\n\n\\i\\label{exer 6.1.8} A royal family has children until it has a boy or until it\nhas three children, whichever comes first.  Assume that each child is a boy with\nprobability 1/2.  Find the expected number of boys in this royal family and the\nexpected number of girls.\n\n\\i\\label{exer 6.1.9} If the first roll in a game of craps is neither a natural nor\ncraps, the player can make an additional bet, equal to his original one, that he will\nmake his point before a seven turns up.  
If his point is four or ten he is paid off at $2 : 1$ odds; if it is a five or nine he is paid off at odds $3 : 2$; and if it is a six or eight he is paid off at odds $6 : 5$.  Find the player's expected winnings if he makes this additional bet when he has the opportunity.

\i\label{exer 6.1.10} In Example~\ref{exam 6.12} assume that Mr.\ Ace decides to buy the stock and hold it until it goes up 1~dollar and then sell and not buy again.  Modify the program {\bf StockSystem} to find the distribution of his profit under this system after a twenty-day period.  Find the expected profit and the probability that he comes out ahead.

\i\label{exer 6.1.11} On September~26, 1980, the \emx {New York Times} reported that a mysterious stranger strode into a Las Vegas casino, placed a single bet of 777{,}000 dollars on the ``don't pass" line at the crap table, and walked away with more than 1.5 million dollars.  In the ``don't pass" bet, the bettor is essentially betting with the house.  An exception occurs if the roller rolls a~12 on the first roll.  In this case, the roller loses and the ``don't pass" bettor just gets back the money bet instead of winning.  Show that the ``don't pass" bettor has a more favorable bet than the roller.

\i\label{exer 6.1.12} Recall that in the \emx {martingale doubling system}\index{martingale betting system} (see Exercise~\ref{sec 1.1}.\ref{exer 1.1.10}), the player doubles his bet each time he loses.  Suppose that you are playing roulette in a \emx {fair casino} where there are no 0's, and you bet on red each time.  You then win with probability 1/2 each time.  Assume that you enter the casino with 100~dollars, start with a 1-dollar bet and employ the martingale system.  You stop as soon as you have won one bet, or in the unlikely event that black turns up six times in a row so that you are down 63~dollars and cannot make the required 64-dollar bet.  Find your expected winnings under this system of play.

\i\label{exer 6.1.13} You have 80~dollars and play the following game.  An urn contains two white balls and two black balls.  You draw the balls out one at a time without replacement until all the balls are gone.  On each draw, you bet half of your present fortune that you will draw a white ball.  What is your expected final fortune?

\i\label{exer 6.1.14} In the hat check problem (see Example~\ref{exam 3.13}), it was assumed that $N$ people check their hats and the hats are handed back at random.  Let $X_j = 1$ if the $j$th person gets his or her hat and 0 otherwise.  Find $E(X_j)$ and $E(X_j \cdot X_k)$ for $j$ not equal to $k$.  Are $X_j$ and $X_k$ independent?

\i\label{exer 6.1.16} A box contains two gold balls and three silver balls.  You are allowed to choose successively balls from the box at random.  You win 1~dollar each time you draw a gold ball and lose 1~dollar each time you draw a silver ball.  After a draw, the ball is not replaced.  Show that, if you draw until you are ahead by 1~dollar or until there are no more gold balls, this is a favorable game.

\i\label{exer 6.1.17} Gerolamo Cardano\index{CARDANO, G.} in his book, \emx {The Gambling Scholar,} written in the early 1500s, considers the following carnival game.  There are six dice.  Each of the dice has five blank sides.  The sixth side has a number between 1~and~6---a different number on each die.
The six dice are rolled and the player wins a prize depending on the total of the numbers which turn up.

\begin{enumerate}
\item Find, as Cardano did, the expected total without finding its distribution.

\item Large prizes were given for large totals with a modest fee to play the game.  Explain why this could be done.
\end{enumerate}

\i\label{exer 6.1.18} Let $X$ be the first time that a \emx {failure} occurs in an infinite sequence of Bernoulli trials with probability $p$ for success.  Let $p_k = P(X = k)$ for $k = 1$,~2, \dots.  Show that $p_k = p^{k - 1}q$ where $q = 1 - p$.  Show that $\sum_k p_k = 1$.  Show that $E(X) = 1/q$.  What is the expected number of tosses of a coin required to obtain the first tail?

\i\label{exer 6.1.19} Exactly one of six similar keys opens a certain door.  If you try the keys, one after another, what is the expected number of keys that you will have to try before success?

\i\label{exer 6.1.20} A multiple choice exam is given.  A problem has four possible answers, and exactly one answer is correct.  The student is allowed to choose a subset of the four possible answers as his answer.  If his chosen subset contains the correct answer, the student receives three points, but he loses one point for each wrong answer in his chosen subset.  Show that if he just guesses a subset uniformly and randomly his expected score is zero.

\i\label{exer 6.1.21} You are offered the following game to play: a fair coin is tossed until heads turns up for the first time (see Example~\ref{exam 6.055}).  If this occurs on the first toss you receive 2~dollars, if it occurs on the second toss you receive $2^2 = 4$ dollars and, in general, if heads turns up for the first time on the $n$th toss you receive $2^n$ dollars.

\begin{enumerate}
\item Show that the expected value of your winnings does not exist (i.e., is given by a divergent sum) for this game.  Does this mean that this game is favorable no matter how much you pay to play it?

\item Assume that you receive only $2^{10}$ dollars if ten or more tosses are required to obtain the first head.  Show that your expected value for this modified game is finite and find its value.

\item Assume that you pay 10~dollars for each play of the original game.  Write a program to simulate 100 plays of the game and see how you do.

\item Now assume that the utility of $n$ dollars is $\sqrt n$.  Write an expression for the expected utility of the payment, and show that this expression has a finite value.  Estimate this value.  Repeat this exercise for the case that the utility function is $\log(n)$.

\end{enumerate}

\i\label{exer 6.1.100} Let $X$ be a random variable which is Poisson distributed with parameter $\lambda$.  Show that $E(X) = \lambda$.  \emx {Hint}: Recall that
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\,.$$

\i\label{exer 6.1.22} Recall that in Exercise~\ref{sec 1.1}.\ref{exer 1.1.14}, we considered a town with two hospitals.\index{hospital}  In the large hospital about 45 babies are born each day, and in the smaller hospital about 15 babies\index{babies} are born each day.  We were interested in guessing which hospital would have on the average the largest number of days with the property that more than 60 percent of the children born on that day are boys.
For each hospital find the expected number of days in a year that have the\nproperty that more than 60~percent of the children born on that day were boys.\n\n\\i\\label{exer 6.1.23} An insurance company has 1{,}000 policies on men of age~50. \nThe company estimates that the probability that a man of age~50 dies within a year is\n.01.  Estimate the number of claims that the company can expect from beneficiaries of\nthese men within a year.\n\n\\i\\label{exer 6.1.24} Using the life table for 1981 in Appendix~C, write\na program to compute the expected lifetime for males and females of each possible age\nfrom~1 to~85.  Compare the results for males and females.  Comment on whether life\ninsurance should be priced differently for males and females.\n\n\\istar\\label{exer 6.1.25} A deck of ESP\\index{ESP} cards consists of 20 cards each of two types:\nsay ten stars, ten circles (normally there are five types).  The deck is shuffled and\nthe cards turned up one at a time.  You, the alleged percipient, are to name the\nsymbol on each card \\emx {before} it is turned up.\n\\par\nSuppose that you are really just guessing at the cards.  If you do not get to see\neach card after you have made your guess, then it is easy to calculate the expected\nnumber of correct guesses, namely ten.\n\\par\nIf, on the other hand, you are guessing with information, that is, if you see each\ncard after your guess, then, of course, you might expect to get a higher score.  This\nis indeed the case, but calculating the correct expectation is no longer easy.\n\\par\nBut it is easy to do a computer simulation of this guessing with information, so we\ncan get a good idea of the expectation by simulation.  (This is similar to the way\nthat skilled blackjack players make blackjack into a favorable game by observing the\ncards that have already been played.  See Exercise~\\ref{exer 6.1.29}.)\n\n\\begin{enumerate}\n\\item First, do a simulation of guessing without information, repeating the\nexperiment at least 1000 times.  Estimate the expected number of correct answers and\ncompare your result with the theoretical expectation.\n\n\\item What is the best strategy for guessing with information?\n\n\\item Do a simulation of guessing with information, using the strategy in (b). \nRepeat the experiment at least 1000 times, and estimate the expectation in this case.\n\n\\item Let $S$ be the number of stars and $C$ the number of circles in the deck.  Let\n$h(S,C)$ be the expected winnings using the optimal guessing strategy in (b).  Show\nthat $h(S,C)$ satisfies the recursion relation\n$$ h(S,C) = \\frac S{S + C} h(S - 1,C) + \\frac C{S + C} h(S,C - 1) + \\frac\n{\\max(S,C)}{S + C}\\ ,\n$$ and $h(0,0) = h(-1,0) = h(0,-1) = 0$.  Using this relation, write a program to\ncompute $h(S,C)$ and find $h(10,10)$.  Compare the computed value of $h(10,10)$ with\nthe result of your simulation in (c).  For more about this exercise and\nExercise~\\ref{exer 6.1.26} see Diaconis and Graham.\\index{DIACONIS, P.}\\index{GRAHAM,\nR.}\\footnote{P. Diaconis and R. Graham, ``The Analysis of Sequential Experiments with Feedback to\nSubjects,\" \\emx {Annals of Statistics,} vol.~9 (1981), pp.~3--23.}\n\\end{enumerate}\n\n\\istar\\label{exer 6.1.26} Consider the ESP\\index{ESP} problem as described in Exercise~\\ref{exer\n6.1.25}.  
You are again guessing with information, and you are using the optimal guessing strategy of guessing \emx {star} if the remaining deck has more stars, \emx {circle} if more circles, and tossing a coin if the numbers of stars and circles are equal.  Assume that $S \geq C$, where $S$ is the number of stars and $C$ the number of circles.
\par We can plot the results of a typical game on a graph, where the horizontal axis represents the number of steps and the vertical axis represents the \emx {difference} between the number of stars and the number of circles that have been turned up.  A typical game is shown in Figure~\ref{fig 6.4}.  In this particular game, the order in which the cards were turned up is $(C,S,S,S,S,C,C,S,S,C)$.  Thus, in this particular game, there were six stars and four circles in the deck.  This means, in particular, that every game played with this deck would have a graph which ends at the point $(10, 2)$.  We define the line $L$ to be the horizontal line which goes through the ending point on the graph (so its vertical coordinate is just the difference between the number of stars and circles in the deck).

\putfig{4truein}{PSfig6-4}{Random walk for ESP.}{fig 6.4}

\begin{enumerate}
\item Show that, when the random walk is below the line $L$, the player guesses right when the graph goes up (star is turned up) and, when the walk is above the line, the player guesses right when the walk goes down (circle turned up).  Show from this property that the subject is sure to have at least $S$ correct guesses.

\item When the walk is at a point $(x,x)$ \emx {on} the line $L$ the number of stars and circles remaining is the same, and so the subject tosses a coin.  Show that the probability that the walk reaches $(x,x)$ is
$$
\frac{{S \choose x}{C \choose x}}{{{S + C} \choose {2x}}}\ .
$$ \emx {Hint}: The number of stars among the last $2x$ cards has a hypergeometric distribution (see Section~\ref{sec 5.1}).

\item Using the results of (a) and (b) show that the expected number of correct guesses under intelligent guessing is
$$ S + \sum_{x = 1}^C \frac12 \frac{{S \choose x}{C \choose x}}{{{S + C} \choose {2x}}}\ .
$$
\end{enumerate}

\i\label{exer 6.1.27} It has been said\footnote{J.~F. Box, \emx {R.~A. Fisher, The Life of a Scientist} (New York: John Wiley and Sons, 1978).} that a Dr.~B.~Muriel Bristol declined a cup of tea\index{tea}, stating that she preferred a cup into which milk\index{milk} had been poured first.  The famous statistician R.~A. Fisher\index{FISHER, R. A.} carried out a test to see if she could tell whether milk was put in before or after the tea.  Assume that for the test Dr.~Bristol was given eight cups of tea---four in which the milk was put in before the tea and four in which the milk was put in after the tea.
% (cf. Exercise~\ref{sec 3.2}.\ref{exer 3.2.8}).

\begin{enumerate}
\item What is the expected number of correct guesses the lady would make if she had no information after each test and was just guessing?

\item Using the result of Exercise~\ref{exer 6.1.26} find the expected number of correct guesses if she was told the result of each guess and used an optimal guessing strategy.
\end{enumerate}

\i\label{exer 6.1.28} In a popular computer game the computer picks an integer from~1 to~$n$ at random.  The player is given $k$ chances to guess the number.
After each guess the computer responds ``correct," ``too small," or ``too big."

\begin{enumerate}
\item Show that if $n \leq 2^k - 1$, then there is a strategy that guarantees you will correctly guess the number in $k$ tries.

\item Show that if $n \geq 2^k - 1$, there is a strategy that assures you of identifying one of $2^k - 1$ numbers and hence gives a probability of $(2^k - 1)/n$ of winning.  Why is this an optimal strategy?  Illustrate your result in terms of the case $n = 9$ and $k = 3$.
\end{enumerate}

\i\label{exer 6.1.29} In the casino game of blackjack\index{blackjack} the dealer is dealt two cards, one face up and one face down, and each player is dealt two cards, both face down.  If the dealer is showing an ace the player can look at his down cards and then make a bet called an \emx {insurance} bet.  (Expert players will recognize why it is called insurance.)  If you make this bet you will win the bet if the dealer's second card is a \emx {ten card}: namely, a ten, jack, queen, or king.  If you win, you are paid twice your insurance bet; otherwise you lose this bet.  Show that, if the only cards you can see are the dealer's ace and your two cards and if your cards are not ten cards, then the insurance bet is an unfavorable bet.  Show, however, that if you are playing two hands simultaneously, and you have no ten cards, then it is a favorable bet.  (Thorp\index{THORP, E.}\footnote{E.~Thorp, \emx {Beat the Dealer} (New York: Random House, 1962).} has shown that the game of blackjack is favorable to the player if he or she can keep good enough track of the cards that have been played.)

\i\label{exer 6.1.30} Assume that, every time you buy a box of Wheaties\index{Wheaties}, you receive a picture of one of the $n$ players for the New York Yankees\index{New York Yankees} (see Exercise~\ref{sec 3.2}.\ref{exer 3.2.34}).  Let $X_k$ be the number of additional boxes you have to buy, after you have obtained $k - 1$ different pictures, in order to obtain the next new picture.  Thus $X_1 = 1$, $X_2$ is the number of boxes bought after this to obtain a picture different from the first picture obtained, and so forth.

\begin{enumerate}
\item Show that $X_k$ has a geometric distribution with $p = (n - k + 1)/n$.

\item Simulate the experiment for a team with 26 players (25 would be more accurate but we want an even number).  Carry out a number of simulations and estimate the expected time required to get the first 13 players and the expected time to get the second 13.  How do these expectations compare?

\item Show that, if there are $2n$ players, the expected time to get the first half of the players is
$$ 2n \left( \frac 1{2n} + \frac 1{2n - 1} +\cdots+ \frac 1{n + 1} \right)\ ,
$$ and the expected time to get the second half is
$$ 2n \left( \frac 1n + \frac 1{n - 1} +\cdots+ 1 \right)\ .
$$

\item In Example~\ref{exam 6.5} we stated that
$$ 1 + \frac 12 + \frac 13 +\cdots+ \frac 1n \sim \log n + .5772 + \frac 1{2n}\ .
$$ Use this to estimate the expression in (c).  Compare these estimates with the exact values and also with your estimates obtained by simulation for the case $n = 26$.
\end{enumerate}

\istar\label{exer 6.1.31} (Feller\index{FELLER, W.}\footnote{W.
Feller, \emx {Introduction to Probability Theory and Its Applications,} 3rd ed., vol.~1 (New York: John Wiley and Sons, 1968), p.~240.})  A large number, $N$, of people are subjected to a blood\index{blood test} test.  This can be administered in two ways: (1)~Each person can be tested separately; in this case $N$ tests are required.  (2)~The blood samples of $k$ persons can be pooled and analyzed together.  If this test is \emx {negative,} this one test suffices for the $k$ people.  If the test is \emx {positive,} each of the $k$ persons must be tested separately, and in all, $k + 1$ tests are required for the $k$ people.  Assume that the probability $p$ that a test is positive is the same for all people and that these events are independent.

\begin{enumerate}
\item Find the probability that the test for a pooled sample of $k$ people will be positive.

\item What is the expected value of the number $X$ of tests necessary under plan~(2)?  (Assume that $N$ is divisible by $k$.)

\item For small $p$, show that the value of $k$ which will minimize the expected number of tests under the second plan is approximately $1/\sqrt p$.
\end{enumerate}

\i\label{exer 6.1.32} Write a program to add random numbers chosen from $[0,1]$ until the first time the sum is greater than one.  Have your program repeat this experiment a number of times to estimate the expected number of selections necessary in order that the sum of the chosen numbers first exceeds~1.  On the basis of your experiments, what is your estimate for this number?

\istar\label{exer 6.1.33} The following related discrete problem also gives a good clue for the answer to Exercise~\ref{exer 6.1.32}.  Randomly select with replacement $t_1$,~$t_2$, \dots,~$t_r$ from the set $(1/n, 2/n, \dots, n/n)$.  Let $X$ be the smallest value of $r$ satisfying
$$ t_1 + t_2 +\cdots+ t_r > 1\ .
$$ Then $E(X) = (1 + 1/n)^n$.  To prove this, we can just as well choose $t_1$,~$t_2$, \dots,~$t_r$ randomly with replacement from the set $(1, 2, \dots, n)$ and let $X$ be the smallest value of $r$ for which
$$ t_1 + t_2 +\cdots+ t_r > n\ .
$$
\begin{enumerate}
\item Use Exercise~\ref{sec 3.2}.\ref{exer 3.2.35.5} to show that
$$ P(X \geq j + 1) = {n \choose j}{\Bigl(\frac {1}{n}\Bigr)^j}\ .
$$
\item Show that
$$ E(X) = \sum_{j = 0}^n P(X \geq j + 1)\ .
$$
\item From these two facts, find an expression for $E(X)$.  This proof is due to Harris Schultz.\index{SCHULTZ, H.}\footnote{H. Schultz, ``An Expected Value Problem," \emx {Two-Year Mathematics Journal,} vol.~10, no.~4 (1979), pp.~277--78.}
\end{enumerate}

\istar\label{exer 6.1.34} (Banach's Matchbox\index{Banach's Matchbox}\footnote{W.~Feller, \emx {Introduction to Probability Theory,} vol. 1, p.~166.}) A man carries in each of his two front pockets a box of matches originally containing $N$ matches.  Whenever he needs a match, he chooses a pocket at random and removes one from that box.  One day he reaches into a pocket and finds the box empty.

\begin{enumerate}
\item Let $p_r$ denote the probability that the other pocket contains $r$ matches.  Define a sequence of \emx {counter} random variables as follows: Let $X_i = 1$ if the $i$th draw is from the left pocket, and 0 if it is from the right pocket.  Interpret $p_r$ in terms of $S_n = X_1 + X_2 +\cdots+ X_n$.
Find a binomial\nexpression for $p_r$.\n\n\\item Write a computer program to compute the $p_r$, as well as the probability that\nthe other pocket contains at least $r$ matches, for $N = 100$ and $r$ from~0 to~50.\n\n\\item Show that $(N - r)p_r = (1/2)(2N + 1)p_{r + 1} - (1/2)(r + 1)p_{r + 1}$\\ .\n\n\\item Evaluate $\\sum_r p_r$.\n\n\\item Use (c) and (d) to determine the expectation $E$ of the distribution\n$\\{p_r\\}$.\n\n\\item Use Stirling's formula to obtain an approximation for $E$.  How many matches\nmust each box contain to ensure a value of about 13 for the expectation\n$E$?  (Take $\\pi = 22/7$.)\n\\end{enumerate}\n\n\\i\\label{exer 6.1.35} A coin is tossed until the first time a head turns up.  If\nthis occurs on the $n$th toss and $n$ is odd you win $2^n/n$, but if $n$ is even\nthen you lose\n$2^n/n$.  Then if your expected winnings exist they are given by the convergent series\n$$ 1 - \\frac 12 + \\frac 13 - \\frac 14 +\\cdots\n$$ called the alternating  \\emx {harmonic series.}  It is tempting to say that this\nshould be the expected value of the experiment.  Show that if we were to do this, the\nexpected value of an experiment would depend upon the order in which the outcomes are\nlisted.\n\n\\i\\label{exer 6.1.37} Suppose we have an urn containing $c$\nyellow balls and $d$ green balls.  We draw $k$ balls, without replacement,\nfrom the urn.  Find the expected number of yellow balls drawn.  \\emx {Hint}:  Write the\nnumber of yellow balls drawn as the sum of $c$ random variables.\n\n\\i\\label{exer 6.1.38} The reader is referred to Example~\\ref{exam 6.7} for an explanation\nof the various options available in Monte Carlo roulette.\n\\begin{enumerate}\n\\item Compute the expected winnings of a 1 franc bet on red under option (a).\n\\item Repeat part (a) for option (b).\n\\item Compare the expected winnings for all three options.  \n\\end{enumerate}\n\n\\istar\\label{exer 6.1.39}\n(from Pittel\\footnote{B. Pittel, Problem \\#1195, \\emx {Mathematics Magazine,} vol.~58,\nno.\\ 3 (May 1985), pg.~183.})\\index{PITTEL, B.} Telephone books\\index{telephone books}, $n$ in\nnumber, are kept in a stack.  The probability that the book numbered $i$ (where $1 \\le i \\le\nn$) is consulted for a given phone call is $p_i > 0$, where the $p_i$'s sum to 1.  After a\nbook is used, it is placed at the top of the stack.  Assume that the calls are independent\nand evenly spaced, and that the system has been employed indefinitely far into the past.  Let\n$d_i$ be the average depth of book $i$ in the stack.  Show that $d_i \\le d_j$ whenever $p_i\n\\ge p_j$.  Thus, on the average, the more popular books have a tendency to be closer to the\ntop of the stack. \\emx {Hint}:  Let $p_{ij}$ denote the probability that book $i$ is above\nbook $j$.  Show that $p_{ij} = p_{ij}(1 - p_j) + p_{ji}p_i$.\n\n\\istar\\label{exer 6.1.40}\n(from Propp\\footnote{J. Propp, Problem \\#1159, \\emx {Mathematics Magazine} vol.~57, no.\\ 1\n(Feb. 1984), pg.~50.})\\index{PROPP, J.} In the previous problem, let $P$ be the probability\nthat at the present time, each book is in its proper place, i.e., book $i$ is $i$th from the\ntop.  Find a formula for $P$ in terms of the $p_i$'s.  In addition, find the least upper\nbound on $P$, if the $p_i$'s are allowed to vary.  \\emx {Hint}:   First find the probability\nthat book 1 is in the right place.  Then find the probability that book 2 is in the right\nplace, given that book 1 is in the right place.  Continue.\n\n\\istar\\label{exer 6.1.41}\n(from H. Shultz and B. 
Leonard\\footnote{H. Shultz and B. Leonard, ``Unexpected Occurrences of the\nNumber $e$,\"\n\\emx {Mathematics Magazine} vol.~62, no. 4 (October, 1989), pp.~269-271.})\\index{SHULTZ,\nH.}\\index{LEONARD, B.}  A sequence of random numbers in $[0, 1)$ is generated until the sequence is\nno longer monotone increasing.  The numbers are chosen according to the uniform distribution.  What\nis the expected length of the sequence?  (In calculating the length, the term that destroys\nmonotonicity is included.) \\emx {Hint}:  Let $a_1,\\ a_2,\\ \\ldots$ be the sequence and let $X$ denote\nthe length of the sequence.  Then\n$$P(X > k) = P(a_1 < a_2 < \\cdots < a_k)\\ ,$$\nand the probability on the right-hand side is easy to calculate.  Furthermore, one can show that\n$$E(X) = 1 + P(X > 1) + P(X > 2) + \\cdots\\ .$$\n\n\\i\\label{exer 6.1.42}\nLet $T$ be the random variable that counts the number of 2-unshuffles performed on an $n$-card deck\nuntil all of the labels on the cards are distinct.  This random variable was discussed in\nSection~\\ref{sec 3.3}.  Using Equation~\\ref{eq 3.3.1} in that section, together with the formula\n$$\nE(T) = \\sum_{s = 0}^\\infty P(T > s)\n$$\nthat was proved in Exercise~\\ref{exer 6.1.33}, show that\n$$\nE(T) = \\sum_{s = 0}^\\infty \\left(1 - {{2^s}\\choose n}\\frac {n!}{2^{sn}}\\right)\\ .\n$$\nShow that for $n = 52$, this expression is approximately equal to 11.7.  (As was stated in\nChapter~\\ref{chp 3}, this means that on the average, almost 12 riffle shuffles of a 52-card \ndeck are required in order for the process to be considered random.)\n\n\n\\end{LJSItem}\n\n\\section{Variance of Discrete Random Variables}\\label{sec 6.2}\n\nThe usefulness of the expected value as a prediction for the outcome of an experiment\nis increased when the outcome is not likely to deviate too much from the expected\nvalue.  In this section we shall introduce a measure of this deviation, called the\nvariance.\n\n\\subsection*{Variance}\n\n%**********Some comments we made on pg. 238 (prompted by Mark) have not been acted\n%upon yet.\n\n\\begin{definition}\\label{def 6.4} Let $X$ be a numerically valued random variable\nwith expected value $\\mu = E(X)$.  Then the  \\emx {variance}\\index{variance} of $X$, denoted by\n$V(X)$, is\n$$ V(X) = E((X - \\mu)^2)\\ .\n$$\n\\end{definition} Note that, by Theorem~\\ref{thm 6.3.5}, $V(X)$ is given by\n\n\\begin{equation} V(X) = \\sum_x (x - \\mu)^2 m(x)\\ , \n\\label{eq 6.1}\n\\end{equation} where $m$ is the distribution function of $X$.\n\n\\subsection*{Standard Deviation}\n\nThe  \\emx {standard deviation}\\index{standard deviation} of $X$, denoted by $D(X)$, is $D(X) =\n\\sqrt {V(X)}$.  We often write $\\sigma$ for $D(X)$ and $\\sigma^2$ for $V(X)$.\n\n\n\\begin{example}\\label{exam 6.13} Consider one roll of a die.  Let $X$ be the number\nthat turns up.  To find\n$V(X)$, we must first find the expected value of $X$.  This is\n\\begin{eqnarray*}\n\\mu & = & E(X) = 1\\Bigl(\\frac 16\\Bigr) + 2\\Bigl(\\frac 16\\Bigr) + 3\\Bigl(\\frac\n16\\Bigr) +  4\\Bigl(\\frac 16\\Bigr) + 5\\Bigl(\\frac 16\\Bigr) + 6\\Bigl(\\frac 16\\Bigr) \\\\\n    & = & \\frac 72\\ .\n\\end{eqnarray*}\n\nTo find the variance of $X$, we form the new random variable $(X - \\mu)^2$ and\ncompute its expectation.  
\subsection*{Properties of Variance}

The variance has properties very different from those of the expectation.  If $c$ is any constant, $E(cX) = cE(X)$ and $E(X + c) = E(X) + c$.  These two statements imply that the expectation is a linear function.  However, the variance is not linear, as seen in the next theorem.

\begin{theorem}\label{thm 6.6} If $X$ is any random variable and $c$ is any constant, then
$$ V(cX)  = c^2 V(X)
$$ and
$$ V(X + c) = V(X)\ .
$$
\proof Let $\mu = E(X)$.  Then $E(cX) = c\mu$, and
\begin{eqnarray*} V(cX) &=& E((cX - c\mu)^2) = E(c^2(X - \mu)^2) \\
      &=& c^2 E((X - \mu)^2) = c^2 V(X)\ .
\end{eqnarray*}

To prove the second assertion, we note that, to compute $V(X + c)$, we would replace $x$ by $x + c$ and $\mu$ by $\mu + c$ in Equation~\ref{eq 6.1}.  Then the $c$'s would cancel, leaving $V(X)$.
\end{theorem}

We turn now to some general properties of the variance.  Recall that if $X$ and $Y$ are any two random variables, $E(X + Y) = E(X) + E(Y)$.  This is not always true for the case of the variance.  For example, let $X$ be a random variable with $V(X) \ne 0$, and define $Y = -X$.  Then $V(X) = V(Y)$, so that $V(X) + V(Y) = 2V(X)$.  But $X + Y$ is always 0 and hence has variance 0.  Thus $V(X + Y) \ne V(X) + V(Y)$.
\par In the important case of mutually independent random variables, however, \emx {the variance of the sum is the sum of the variances.}

\begin{theorem}\label{thm 6.8} Let $X$ and $Y$ be two \emx {independent} random variables.  Then
$$V(X + Y) = V(X) + V(Y)\ .$$
\proof Let $E(X) = a$ and $E(Y) = b$.
Then\n\\begin{eqnarray*} V(X + Y) & = & E((X + Y)^2) - (a + b)^2 \\\\\n         & = & E(X^2) + 2E(XY) + E(Y^2) - a^2 - 2ab - b^2\\ .\n\\end{eqnarray*} Since $X$ and $Y$ are independent, $E(XY) = E(X)E(Y) = ab$.  Thus,\n$$ V(X + Y) = E(X^2) - a^2 + E(Y^2) - b^2 = V(X) + V(Y)\\ .\n$$\n\\end{theorem}\n\nIt is easy to extend this proof, by mathematical induction, to show that {\\em\nthe variance of the sum of any number of mutually independent random variables is the sum\nof the individual variances.}  Thus we have the following theorem.\n\n\\begin{theorem}\\label{thm 6.9} Let $X_1$,~$X_2$, \\dots,~$X_n$ be an independent\ntrials process with $E(X_j) =\n\\mu$ and $V(X_j) = \\sigma^2$.  Let\n$$ S_n = X_1 + X_2 +\\cdots+ X_n\n$$ be the sum, and\n$$ A_n = \\frac {S_n}n\n$$ be the average.  Then\n\\begin{eqnarray*} E(S_n) &=& n\\mu\\ , \\\\\nV(S_n) &=& n\\sigma^2\\ , \\\\ \n\\sigma(S_n) &=& \\sigma \\sqrt{n}\\ , \\\\\nE(A_n) &=& \\mu\\ , \\\\\nV(A_n) &=& \\frac {\\sigma^2}n\\ , \\\\\n\\sigma(A_n) &=& \\frac{\\sigma}{\\sqrt n}\\ .\n\\end{eqnarray*}\n\\proof Since all the random variables $X_j$ have the same expected value, we have\n$$ E(S_n) = E(X_1) +\\cdots+ E(X_n) = n\\mu\\ ,\n$$ \n$$ V(S_n) = V(X_1) +\\cdots+ V(X_n) = n\\sigma^2\\ ,\n$$\nand\n$$\\sigma(S_n) = \\sigma \\sqrt{n}\\ .$$\n\nWe have seen that, if we multiply a random variable $X$ with mean $\\mu$ and variance\n$\\sigma^2$ by a constant $c$, the new random variable has expected value $c\\mu$ and\nvariance $c^2\\sigma^2$.  Thus,\n$$ E(A_n) = E\\left(\\frac {S_n}n \\right) = \\frac {n\\mu}n = \\mu\\ ,\n$$ and\n$$ V(A_n) = V\\left( \\frac {S_n}n \\right) = \\frac {V(S_n)}{n^2} = \\frac\n{n\\sigma^2}{n^2} = \\frac {\\sigma^2}n\\ .\n$$ \nFinally, the standard deviation of $A_n$ is given by\n$$\\sigma(A_n) = \\frac {\\sigma}{\\sqrt n}\\ .$$\n\\end{theorem}\n\nThe last equation in the above theorem implies that in an independent trials process,\nif the individual summands have finite variance, then the standard deviation of the\naverage goes to 0 as $n \\rightarrow \\infty$.  Since the standard deviation tells us\nsomething about the spread of the distribution around the mean, we see that for large\nvalues of $n$, the value of $A_n$  is usually very close to the mean of $A_n$, which\nequals $\\mu$, as shown above.  This statement is made precise in Chapter~\\ref{chp 8},\nwhere it is called the Law of Large Numbers.  For example, let $X$ represent the roll\nof a fair die.  In Figure~\\ref{fig 6.4.5}, we show the distribution of a random variable\n$A_n$ corresponding to $X$, for $n = 10$ and $n = 100$.\n\n\\putfig{5truein}{PSfig6-4-5}\n{Empirical distribution of $A_n$.}{fig 6.4.5}\n\n\\begin{example}\\label{exam 6.14} Consider $n$ rolls of a die.  We have seen that, if\n$X_j$ is the outcome of the\n$j$th roll, then $E(X_j) = 7/2$ and $V(X_j) = 35/12$.  Thus, if $S_n$ is the sum of\nthe outcomes, and $A_n = S_n/n$ is the average of the outcomes, we have\n$E(A_n) = 7/2$ and $V(A_n) = (35/12)/n$.  Therefore, as $n$ increases, the expected\nvalue of the average remains constant, but the variance tends to~0.  If the variance\nis a measure of the expected deviation from the mean, this would indicate that, for\nlarge~$n$, we can expect the average to be very near the expected value.  This is in\nfact the case, and we shall justify it in Chapter~\\ref{chp 8}.\n\\end{example}\n\n\\subsection*{Bernoulli Trials}\n\nConsider next the general Bernoulli trials process.  As usual, we let $X_j = 1$ if\nthe $j$th outcome is a success and 0 if it is a failure.  
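\nAs a brief aside, the conclusion of Example~\\ref{exam 6.14} is easy to check empirically; the sketch below is illustrative only (Python assumed) and estimates $V(A_n)$ for the average of $n$ rolls, to be compared with $(35/12)/n$:\n\\begin{verbatim}\nimport random\n\ndef estimated_variance_of_average(n, trials=20000):\n    # Sample A_n = (X_1 + ... + X_n)/n repeatedly and return the\n    # empirical variance of those samples.\n    avgs = []\n    for _ in range(trials):\n        s = sum(random.randint(1, 6) for _ in range(n))\n        avgs.append(s / n)\n    mean = sum(avgs) / trials\n    return sum((a - mean)**2 for a in avgs) / trials\n\nfor n in (10, 100):\n    print(n, estimated_variance_of_average(n), (35/12)/n)\n\\end{verbatim}\nReturning to the Bernoulli trials just introduced:  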
If $p$ is the probability\nof a success, and $q = 1 - p$, then\n\\begin{eqnarray*} E(X_j) & = & 0q + 1p = p\\ , \\\\ E(X_j^2) & = & 0^2q + 1^2p = p\\ ,\n\\end{eqnarray*} and\n$$ V(X_j) = E(X_j^2) - (E(X_j))^2 = p - p^2 = pq\\ .\n$$\n\nThus, for Bernoulli trials, if $S_n = X_1 + X_2 +\\cdots+ X_n$ is the number of\nsuccesses, then $E(S_n) = np$, $V(S_n) = npq$, and $D(S_n) = \\sqrt{npq}.$  If\n$A_n = S_n/n$ is the average number of successes, then $E(A_n) = p$, $V(A_n) = pq/n$,\nand $D(A_n) = \\sqrt{pq/n}$.  We see that the expected proportion of successes remains\n$p$ and the variance tends to~0.  This suggests that the frequency interpretation of\nprobability is a correct one.  We shall make this more precise in Chapter~\\ref{chp 8}.\n\n\\begin{example}\\label{exam 6.15} Let $T$ denote the number of trials until the first\nsuccess in a Bernoulli trials process.  Then $T$ is geometrically distributed.  What\nis the variance of $T$?  In Example~\\ref{exam 5.7}, we saw that\n$$ m_T = \\pmatrix{1 &  2 &    3 & \\cdots \\cr p & qp & q^2p & \\cdots \\cr}.\n$$ In Example~\\ref{exam 6.8}, we showed that \n$$E(T) = 1/p\\ .$$  \nThus, \n$$V(T) = E(T^2) - 1/p^2\\ ,$$  \nso we need only find\n\\begin{eqnarray*} E(T^2) & = & 1p + 4qp + 9q^2p + \\cdots \\\\\n       & = & p(1 + 4q + 9q^2 + \\cdots )\\ .\n\\end{eqnarray*} To evaluate this sum, we start again with\n$$ 1 + x + x^2 +\\cdots= \\frac 1{1 - x}\\ .\n$$ Differentiating, we obtain\n$$ 1 + 2x + 3x^2 +\\cdots= \\frac 1{(1 - x)^2}\\ .\n$$ Multiplying by $x$,\n$$ x + 2x^2 + 3x^3 +\\cdots= \\frac x{(1 - x)^2}\\ .\n$$ Differentiating again gives\n$$ 1 + 4x + 9x^2 +\\cdots= \\frac {1 + x}{(1 - x)^3}\\ .\n$$ Thus,\n$$ E(T^2) = p\\frac {1 + q}{(1 - q)^3} = \\frac {1 + q}{p^2}\n$$ and\n\\begin{eqnarray*} V(T) & = & E(T^2) - (E(T))^2 \\\\\n     & = & \\frac {1 + q}{p^2} - \\frac 1{p^2} = \\frac q{p^2}\\ .\n\\end{eqnarray*}\n\nFor example, the variance for the number of tosses of a coin until the first head\nturns up is $(1/2)/(1/2)^2 = 2$.  The variance for the number of rolls of a die until\nthe first six turns up is $(5/6)/(1/6)^2 = 30$.  Note that, as $p$ decreases, the\nvariance increases rapidly.  This corresponds to the increased spread of the\ngeometric distribution as $p$ decreases (noted in Figure~\\ref{fig 5.4}).\n\\end{example}\n\n\\subsection*{Poisson Distribution}\\index{Poisson distribution!variance of}\n\nJust as in the case of expected values, it is easy to guess the variance of the\nPoisson distribution with parameter $\\lambda$.  We recall that the variance of a\nbinomial distribution with parameters $n$ and $p$ equals $npq$.  We also recall that\nthe Poisson distribution could be obtained as a limit of binomial distributions, if\n$n$ goes to $\\infty$ and $p$ goes to 0 in such a way that their product is kept fixed at\nthe value $\\lambda$.  In this case, $npq = \\lambda q$ approaches $\\lambda$, since\n$q$ goes to 1.  So, given a Poisson distribution with parameter $\\lambda$, we should\nguess that its variance is $\\lambda$.  The reader is asked to show this in\nExercise~\\ref{exer 6.2.100}.\n\n\\exercises\n\\begin{LJSItem}\n\n \n\\i\\label{exer 6.2.1} A number is chosen at random from the set $S = \\{-1,0,1\\}$. \nLet $X$ be the number chosen.  
Find the expected value, variance, and standard\ndeviation of $X$.\n\n\\i\\label{exer 6.2.2} A random variable $X$ has the distribution\n$$ p_X = \\pmatrix{ 0 & 1 & 2 & 4 \\cr 1/3 & 1/3 & 1/6 & 1/6 \\cr}\\ .\n$$ Find the expected value, variance, and standard deviation of $X$.\n\n\\i\\label{exer 6.2.3} You place a 1-dollar bet on the number 17 at Las Vegas, and\nyour friend places a 1-dollar bet on black (see Exercises~\\ref{sec 1.1}.\\ref{exer\n1.1.6}~and~\\ref{sec 1.1}.\\ref{exer 1.1.7}).  Let $X$ be your\nwinnings and\n$Y$ be her winnings.  Compare\n$E(X)$,~$E(Y)$, and $V(X)$,~$V(Y)$.  What do these computations tell you about the\nnature of your winnings if you and your friend make a sequence of bets, with you\nbetting each time on a number and your friend betting on a color?\n\n\\i\\label{exer 6.2.4} $X$ is a random variable with $E(X) = 100$ and $V(X) = 15$. \nFind\n\\begin{enumerate}\n\\item $E(X^2)$.\n\n\\item $E(3X + 10)$.\n\n\\item $E(-X)$.\n\n\\item $V(-X)$.\n\n\\item $D(-X)$.\n\\end{enumerate}\n\n\\i\\label{exer 6.2.5} In a certain manufacturing process, the (Fahrenheit)\ntemperature never varies by more than $2^\\circ$ from $62^\\circ$.  The temperature is,\nin fact, a random variable $F$ with distribution\n$$ P_F = \\pmatrix{ 60 & 61 & 62 & 63 & 64 \\cr 1/10 & 2/10 & 4/10 & 2/10 & 1/10 \\cr}\\ .\n$$\n\\begin{enumerate}\n\\item Find $E(F)$ and $V(F)$.\n\\item Define $T = F - 62$.  Find $E(T)$ and $V(T)$, and compare these answers with\nthose in part (a).\n\n\\item It is decided to report the temperature readings on a Celsius scale, that is,\n$C = (5/9)(F - 32)$.  What is the expected value and variance for the readings now?\n\\end{enumerate}\n\n\\i\\label{exer 6.2.6} Write a computer program to calculate the mean and variance\nof a distribution which you specify as data.  Use the program to compare the\nvariances for the following densities, both having expected value~0:\n$$  p_X = \\pmatrix{ -2 & -1 & 0 & 1 & 2 \\cr 3/11 & 2/11 & 1/11 & 2/11 & 3/11 \\cr}\\ ;\n$$\n$$ p_Y = \\pmatrix{ -2 & -1 & 0 & 1 & 2 \\cr 1/11 & 2/11 & 5/11 & 2/11 & 1/11 \\cr}\\ .\n$$\n\n\\i\\label{exer 6.2.7} A coin is tossed three times.  Let $X$ be the number of heads\nthat turn up.  Find $V(X)$ and $D(X)$.\n\n\\i\\label{exer 6.2.8} A random sample of 2400 people are asked if they favor a\ngovernment proposal to develop new nuclear power plants.  If 40~percent of the people\nin the country are in favor of this proposal, find the expected value and the\nstandard deviation for the number $S_{2400}$ of people in the sample who favored the\nproposal.\n\n\\i\\label{exer 6.2.9} A die is loaded so that the probability of a face coming up is\nproportional to the number on that face.  The die is rolled with outcome~$X$.  Find\n$V(X)$ and $D(X)$.\n\n\\i\\label{exer 6.2.10} Prove the following facts about the standard deviation.\n\\begin{enumerate}\n\\item $D(X + c) = D(X)$.\n\n\\item $D(cX) = |c|D(X)$.\n\\end{enumerate}\n\n\\i\\label{exer 6.2.11} A number is chosen at random from the integers 1,~2,~3,\n\\dots,~$n$.  Let\n$X$ be the number chosen.  Show that $E(X) = (n + 1)/2$ and $V(X) = (n - 1)(n +\n1)/12$.   \\emx {Hint}:  The following identity may be useful:\n$$ 1^2 + 2^2 + \\cdots + n^2 = \\frac{(n)(n+1)(2n+1)}{6}\\ .\n$$\n\n\\i\\label{exer 6.2.13} Let $X$ be a random variable with $\\mu = E(X)$ and\n$\\sigma^2 = V(X)$.  Define $X^* = (X - \\mu)/\\sigma$.  The random variable $X^*$ is\ncalled the  \\emx {standardized random variable}\\index{standardized random variable} associated with\n$X$.  
Show that this standardized random variable has expected value 0 and variance 1.\n\n\\i\\label{exer 6.2.14} Peter and Paul play Heads or Tails (see Example~\\ref{exam\n1.3}).  Let\n$W_n$ be Peter's winnings after $n$ matches.  Show that $E(W_n) = 0$ and\n$V(W_n) = n$.\n\n\\i\\label{exer 6.2.15} Find the expected value and the variance for the number of\nboys and the number of girls in a royal family that has children until there is a boy\nor until there are three children, whichever comes first.\n\n\\i\\label{exer 6.2.16} Suppose that $n$ people have their hats returned at random. \nLet\n$X_i = 1$ if the $i$th person gets his or her own hat back and 0 otherwise.  Let\n$S_n =\n\\sum_{i = 1}^n X_i$.  Then $S_n$ is the total number of people who get their own hats\nback.  Show that\n\\begin{enumerate}\n\\item $E(X_i^2) = 1/n$.\n\n\\item $E(X_i \\cdot X_j) = 1/(n(n - 1))$ for $i \\ne j$.\n\n\\item $E(S_n^2) = 2$ (using (a) and (b)).\n\n\\item $V(S_n) = 1$.\n\\end{enumerate}\n\n\\i\\label{exer 6.2.17} Let $S_n$ be the number of successes in $n$ independent\ntrials.  Use the program {\\bf BinomialProbabilities} (Section~\\ref{sec 3.2}) to compute,\nfor given $n$,~$p$, and~$j$, the probability\n$$ P(-j\\sqrt{npq} < S_n - np < j\\sqrt{npq})\\ .\n$$\n\\begin{enumerate}\n\\item Let $p = .5$, and compute this probability for $j = 1$,~2,~3 and $n =\n10$,~30,~50.  Do the same for $p = .2$.\n\n\\item Show that the  \\emx {standardized random variable} $S_n^* = (S_n -\nnp)/\\sqrt{npq}$ has expected value 0 and variance 1.  What do your results from (a)\ntell you about this standardized quantity $S_n^*$?\n\\end{enumerate}\n\n\\i\\label{exer 6.2.18} Let $X$ be the outcome of a chance experiment with $E(X) =\n\\mu$ and $V(X) = \\sigma^2$.  When $\\mu$ and $\\sigma^2$ are unknown, the statistician\noften estimates them by repeating the experiment $n$ times with outcomes\n$x_1$,~$x_2$, \\dots,~$x_n$, estimating $\\mu$ by the  \\emx {sample mean}\\index{sample mean}\n$$\n\\bar{x} = \\frac 1n \\sum_{i = 1}^n x_i\\ ,\n$$ and $\\sigma^2$ by the  \\emx {sample variance}\\index{sample variance}\n$$ s^2 = \\frac 1n \\sum_{i = 1}^n (x_i - \\bar x)^2\\ .\n$$ Then $s$ is the  \\emx {sample standard deviation.}\\index{sample standard deviation}  These formulas\nshould remind the reader of the definitions of the theoretical mean and variance.  (Many\nstatisticians define the sample variance with the coefficient $1/n$ replaced by\n$1/(n-1)$.  If this alternative definition is used, the expected value of $s^2$ is equal to $\\sigma^2$. See\nExercise~\\ref{exer 6.2.19}, part (d).)\n\nWrite a computer program that will roll a die $n$ times and compute the sample mean\nand sample variance.  Repeat this experiment several times for~$n = 10$ and~$n =\n1000$.  How well do the sample mean and sample variance estimate the true mean 7/2\nand variance 35/12?\n\n\\i\\label{exer 6.2.19} Show that, for the sample mean $\\bar x$ and sample\nvariance $s^2$ as defined in Exercise~\\ref{exer 6.2.18},\n\\begin{enumerate}\n\\item $E(\\bar x) = \\mu$.\n\n\\item $E\\bigl((\\bar x - \\mu)^2\\bigr) = \\sigma^2/n$.\n\n\\item $E(s^2) = \\frac {n-1}n\\sigma^2$. 
\\emx {Hint}: For (c) write\n\\begin{eqnarray*}\n\\sum_{i = 1}^n (x_i - \\bar x)^2 & = & \\sum_{i = 1}^n \\bigl((x_i - \\mu) -\n(\\bar x - \\mu)\\bigr)^2 \\\\\n     & = & \\sum_{i = 1}^n (x_i - \\mu)^2 - 2(\\bar x - \\mu) \\sum_{i = 1}^n (x_i -\n\\mu) + n(\\bar x - \\mu)^2 \\\\\n     & = & \\sum_{i = 1}^n (x_i - \\mu)^2 - n(\\bar x - \\mu)^2,\n\\end{eqnarray*} and take expectations of both sides, using part (b) when necessary.\n\n\\item Show that if, in the definition of $s^2$ in Exercise~\\ref{exer 6.2.18}, we\nreplace the coefficient $1/n$ by the coefficient $1/(n-1)$, then $E(s^2) = \\sigma^2$.\n(This shows why many statisticians use the coefficient $1/(n-1)$.  The number $s^2$\nis used to estimate the unknown quantity $\\sigma^2$.  If an estimator has an average\nvalue which equals the quantity being estimated, then the estimator is said to be\n \\emx {unbiased}.\\index{unbiased estimator}  Thus, the statement $E(s^2) = \\sigma^2$ says that $s^2$\nis an unbiased estimator of $\\sigma^2$.)\n\\end{enumerate}\n\n\\i\\label{exer 6.2.20} Let $X$ be a random variable taking on values $a_1$,~$a_2$, \n\\dots, $a_r$ with probabilities $p_1$,~$p_2$, \\dots,~$p_r$ and with $E(X) = \\mu$. \nDefine the  \\emx {spread}\\index{spread} of $X$ as follows:\n$$\n\\bar\\sigma = \\sum_{i = 1}^r |a_i - \\mu|p_i\\ .\n$$ This, like the standard deviation, is a way to quantify the amount that a random\nvariable is spread out around its mean.  Recall that the variance of a sum of\nmutually independent random variables is the sum of the individual variances.  The\nsquare of the spread plays a role analogous to that of the variance, just as the\nspread itself is analogous to the standard deviation.  Show by an example\nthat it is not necessarily true that the square of the spread of the sum of two\nindependent random variables is the sum of the squares of the individual spreads.\n\n\\i\\label{exer 6.2.21} We have two instruments that measure the distance between\ntwo points.  The measurements given by the two instruments are random variables $X_1$\nand\n$X_2$ that are independent with $E(X_1) = E(X_2) = \\mu$, where $\\mu$ is the true\ndistance.  From experience with these instruments, we know the values of the\nvariances $\\sigma_1^2$ and $\\sigma_2^2$.  These variances are not necessarily the\nsame.  From two measurements, we estimate $\\mu$ by the weighted average $\\bar \\mu\n= wX_1 + (1 - w)X_2$.  Here $w$ is chosen in $[0,1]$ to minimize the variance of\n$\\bar \\mu$.\n\\begin{enumerate}\n\\item What is $E(\\bar \\mu)$?\n\n\\item How should $w$ be chosen in $[0,1]$ to minimize the variance of\n$\\bar \\mu$?\n\\end{enumerate}\n\n\\i\\label{exer 6.2.22} Let $X$ be a random variable with $E(X) = \\mu$ and $V(X) =\n\\sigma^2$.  Show that the function $f(x)$ defined by\n$$ f(x) = \\sum_\\omega (X(\\omega) - x)^2 p(\\omega)\n$$ has its minimum value when $x = \\mu$.\n\n\\i\\label{exer 6.2.23} Let $X$ and $Y$ be two random variables defined on the\nfinite sample space $\\Omega$.  Assume that $X$,~$Y$, $X + Y$, and~$X - Y$ all have\nthe same distribution.  Prove that $P(X = Y = 0) = 1$.\n\n\\i\\label{exer 6.2.24} If $X$ and $Y$ are any two random variables, then the {\\em\ncovariance} of $X$ and $Y$ is defined by {\\rm Cov}$(X,Y) = E((X - E(X))(Y -\nE(Y)))$.  Note that {\\rm Cov}$(X,X) = V(X)$.  
Show that, if $X$ and\n$Y$ are independent, then  {\\rm Cov}$(X,Y) = 0$; and show, by an example, that we can\nhave {\\rm Cov}$(X,Y) = 0$ and $X$ and $Y$ not independent.\n\n\\istar\\label{exer 6.2.25} A professor wishes to make up a true-false exam\\index{true-false exam}\nwith\n$n$ questions.  She assumes that she can design the problems in such a way that a student\nwill answer the $j$th problem correctly with probability $p_j$, and that the answers\nto the various problems may be considered independent experiments.  Let $S_n$ be the\nnumber of problems that a student will get correct.  The professor wishes to choose\n$p_j$ so that $E(S_n) = .7n$ and so that the variance of $S_n$ is as large as\npossible.  Show that, to achieve this, she should choose $p_j = .7$ for all~$j$; that\nis, she should make all the problems have the same difficulty.\n\n\\i\\label{exer 6.2.26} (Lamperti\\index{LAMPERTI, J.}\\footnote{Private communication.}) An\nurn contains exactly 5000 balls, of which an unknown number $X$ are white and the rest red, where $X$\nis a random variable with a probability distribution on the integers 0,~1,~2, \\dots,~5000.\n\\begin{enumerate}\n\\item Suppose we know that $E(X) = \\mu$.  Show that this is enough to allow us to\ncalculate the probability that a ball drawn at random from the urn will be white. \nWhat is this probability?\n\n\\item We draw a ball from the urn, examine its color, replace it, and then draw\nanother.  Under what conditions, if any, are the results of the two drawings\nindependent; that is, does\n$$\n P({{\\rm white},{\\rm white}}) =  P({{\\rm white}})^2\\ ?\n$$\n\n\\item Suppose the variance of $X$ is $\\sigma^2$.  What is the probability of drawing\ntwo white balls in part (b)?\n\n\\end{enumerate}\n\n\n\\i\\label{exer 6.2.27} For a sequence of Bernoulli trials, let $X_1$ be the number\nof trials until the first success.  For~$j \\geq 2$, let $X_j$ be the number of trials\nafter the $(j - 1)$st success until the $j$th success.  It can be shown that \n$X_1$,~$X_2$,~\\dots is an independent trials process.\n\\begin{enumerate}\n\\item What is the common distribution, expected value, and variance for $X_j$?\n\n\\item Let $T_n = X_1 + X_2 +\\cdots+ X_n$.  Then $T_n$ is the time until the\n$n$th success.  Find $E(T_n)$ and $V(T_n)$.\n\n\\item Use the results of (b) to find the expected value and variance for the number\nof tosses of a coin until the $n$th occurrence of a head.\n\\end{enumerate}\n\n\\i\\label{exer 6.2.28} Referring to Exercise~\\ref{sec 6.1}.\\ref{exer 6.1.30}, find the\nvariance for the number of boxes of Wheaties bought before getting half of the players'\npictures and the variance for the number of additional boxes needed to get the second\nhalf of the players' pictures.\n\n\\i\\label{exer 6.2.99} In Example~\\ref{exam 5.3}, assume that the book in question has\n1000 pages.  Let\n$X$ be the number of pages with no mistakes.  Show that $E(X) = 905$ and $V(X) = 86$. \nUsing these results, show that the probability is ${}\n\\leq .05$ that there will be more than 924 pages without errors or fewer than 866\npages without errors.\n\n\\i\\label{exer 6.2.100} Let $X$ be Poisson distributed with parameter $\\lambda$. \nShow that $V(X) = \\lambda$.\n\n\\end{LJSItem}\n\n\\choice{}{\\section{Continuous Random Variables}\\label{sec 6.3}\n\nIn this section we consider the properties of the expected value and the variance of\na continuous random variable.  
These quantities are defined just as for discrete\nrandom variables and share the same properties.\n\n\\subsection*{Expected Value}\n\n\\begin{definition}\\label{def 6.5}  Let $X$ be a real-valued random variable with\ndensity function $f(x)$.  The  \\emx {expected value}\\index{expected value} $\\mu = E(X)$ is \ndefined by\n$$\n\\mu = E(X) = \\int_{-\\infty}^{+\\infty} xf(x)\\,dx\\ ,\n$$\nprovided the integral\n$$\n\\int_{-\\infty}^{+\\infty} |x|f(x)\\,dx\n$$\nis finite.\n\\end{definition}\n\nThe reader should compare this definition with the corresponding one for discrete\nrandom variables in Section~\\ref{sec 6.1}.  Intuitively, we can interpret $E(X)$, as\nwe did in the previous sections, as the value that we should expect to obtain if we\nperform a large number of independent experiments and average the resulting values of\n$X$.    \n\n\\par We can summarize the properties of $E(X)$ as follows (cf. Theorem~\\ref{thm 6.1}).\n\n\\begin{theorem}\\label{thm 6.10} If $X$ and $Y$ are real-valued random variables and\n$c$ is any constant, then\n\\begin{eqnarray*} \nE(X + Y) &=& E(X) + E(Y)\\ , \\\\ \nE(cX)    &=& cE(X)\\ .\n\\end{eqnarray*}\n%\\end{theorem}\n\nThe proof is very similar to the proof of Theorem~\\ref{thm 6.1}, and we omit it.\n\\end{theorem}\n\\par More generally, if $X_1$,~$X_2$, \\dots,~$X_n$ are $n$ real-valued random\nvariables, and\n$c_1$,~$c_2$, \\dots,~$c_n$ are $n$ constants, then\n$$ E(c_1X_1 + c_2X_2 +\\cdots+ c_nX_n) = c_1E(X_1) + c_2E(X_2) +\\cdots+ c_nE(X_n)\\ .\n$$\n\n\\begin{example}\\label{exam 6.16} Let $X$ be uniformly distributed on the interval\n$[0, 1]$.  Then \n$$E(X) = \\int_0^1 x\\,dx = 1/2\\ .$$    It follows that if we choose a large number $N$\nof random numbers from $[0,1]$ and take the average, then we can expect that this\naverage should be close to the expected value of 1/2.  \n\\end{example}\n\n\\begin{example}\\label{exam 6.17} Let $Z = (x, y)$ denote a point chosen uniformly and\nrandomly from the unit disk, as in the dart game in Example~\\ref{exam 2.2.2} and let\n$X = (x^2 + y^2)^{1/2}$ be the distance from $Z$ to the center of the disk.  The\ndensity function of $X$ can easily be shown to equal $f(x) = 2x$, so by the\ndefinition of expected value,\n\\begin{eqnarray*} E(X) & = & \\int_0^1 x f(x)\\,dx \\\\ & = & \\int_0^1 x (2x)\\,dx \\\\ & =\n& \\frac 23\\ .\n\\end{eqnarray*}\n\\end{example}\n\n\\begin{example}\\label{exam 6.18}   In the example of the couple meeting at the Inn\n(Example~\\ref{exam 2.2.7.4}), each person arrives at a time which is uniformly\ndistributed between 5:00 and 6:00 PM.  The random variable $Z$ under consideration is\nthe length of time the first person has to wait until the second one arrives.  It was\nshown that\n$$f_Z(z) = 2(1-z)\\ ,$$ for $0 \\le z \\le 1$. Hence,\n\\begin{eqnarray*} E(Z) & = & \\int_0^1 zf_Z(z)\\,dz \\\\\n     & = & \\int_0^1 2z(1-z)\\,dz \\\\\n     & = & \\Bigl[z^2 - \\frac 23 z^3\\Bigr]_0^1 = \\frac 13\\ .\n\\end{eqnarray*}\n\\end{example}\n\n\\subsection*{Expectation of a Function of a Random Variable}\n\nSuppose that $X$ is a real-valued random variable  and $\\phi(x)$ is a continuous\nfunction from {\\bf R} to {\\bf R}.  
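\nAs an aside, the three expected values just computed ($1/2$, $2/3$, and $1/3$ in Examples~\\ref{exam 6.16}, \\ref{exam 6.17}, and~\\ref{exam 6.18}) can be spot-checked by simulation; the sketch below assumes Python and draws points in the disk by rejection from the enclosing square:\n\\begin{verbatim}\nimport random\n\nN = 100000\n\n# Example 6.16: average of uniform random numbers on [0, 1]; about 1/2.\nprint(sum(random.random() for _ in range(N)) / N)\n\n# Example 6.17: average distance to the center of the unit disk; about 2/3.\ntotal, count = 0.0, 0\nwhile count < N:\n    x, y = 2*random.random() - 1, 2*random.random() - 1\n    if x*x + y*y <= 1:             # keep only points that land in the disk\n        total += (x*x + y*y)**0.5\n        count += 1\nprint(total / N)\n\n# Example 6.18: average wait between two uniform arrival times; about 1/3.\nprint(sum(abs(random.random() - random.random()) for _ in range(N)) / N)\n\\end{verbatim}\nReturning to the expectation of a function of a random variable:  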
The following theorem is the continuous analogue\nof Theorem~\\ref{thm 6.3.5}.\n\n\\begin{theorem}\\label{thm 6.11} If $X$ is a real-valued random variable and if\n%$\\phi : {\\bf\\rm R} \\to {\\bf\\rm R}$ is a continuous real-valued function with domain\n$\\phi :$ {\\bf R} $\\to\\ $  {\\bf R}  is a continuous real-valued function with domain\n$[a,b]$, then \n$$ E(\\phi(X)) = \\int_{-\\infty}^{+\\infty} \\phi(x) f_X(x)\\, dx\\ ,\n$$ provided the integral exists.\n\\end{theorem}\n\nFor a proof of this theorem, see Ross.\\index{ROSS, S.}\\footnote{S. Ross,  \\emx {A First Course in\nProbability,} (New York: Macmillan, 1984), pgs. 241-245.}\n\n\n\n\\subsection*{Expectation of the Product of Two Random Variables}\n\nIn general, it is not true that $E(XY) = E(X)E(Y)$, since the integral of a product\nis not the product of integrals.  But if $X$ and $Y$ are independent, then the\nexpectations multiply.\n\n\\begin{theorem}\\label{thm 6.12} Let $X$ and $Y$ be independent real-valued continuous\nrandom variables with finite expected values.  Then we have \n$$ E(XY) = E(X)E(Y)\\ .\n$$\n\\proof We will prove this only in the case that the ranges of $X$ and $Y$ are contained in\nthe intervals $[a, b]$ and $[c, d]$, respectively.  Let the density functions of $X$ and $Y$ be\ndenoted by $f_X(x)$ and $f_Y(y)$, respectively.  Since $X$ and $Y$ are independent, the joint\ndensity function of $X$ and $Y$ is the product of the individual density functions.  Hence\n\\begin{eqnarray*} E(XY) & = & \\int_a^b \\int_c^d xy f_X(x) f_Y(y)\\, dy\\,dx \\\\\n      & = & \\int_a^b x f_X(x)\\, dx \\int_c^d y f_Y(y)\\, dy \\\\\n      & = & E(X)E(Y)\\ .\n\\end{eqnarray*}\n\\par\nThe proof in the general case involves using sequences of bounded random variables that approach $X$\nand $Y$, and is somewhat technical, so we will omit it.\n\\end{theorem}\n\nIn the same way, one can show that if $X_1$,~$X_2$, \\dots,~$X_n$ are $n$ mutually\nindependent real-valued random variables, then\n$$ \nE(X_1 X_2 \\cdots X_n) = E(X_1)\\,E(X_2)\\,\\cdots\\,E(X_n)\\ .\n$$\n\n\\begin{example}\\label{exam 6.19} Let $Z = (X, Y)$ be a point chosen at random in the\nunit square.  Let $A = X^2$ and\n$B = Y^2$.  Then Theorem~\\ref{thm 4.3} implies that $A$ and $B$ are independent.\nUsing Theorem~\\ref{thm 6.11}, the expectations of $A$ and $B$ are easy to calculate:\n\\begin{eqnarray*} E(A) = E(B) & = & \\int_0^1 x^2\\,dx \\\\ & = & \\frac 13\\ .\n\\end{eqnarray*} Using Theorem~\\ref{thm 6.12}, the expectation of $AB$ is just the\nproduct of $E(A)$ and\n$E(B)$, or 1/9.  The usefulness of this theorem is demonstrated by noting that  it is\nquite a bit more difficult to calculate $E(AB)$ from the definition of expectation. \nOne finds that the density function of $AB$ is\n$$f_{AB}(t) = \\frac {-\\log(t)}{4\\sqrt t}\\ ,$$ so\n\\begin{eqnarray*} E(AB) & = & \\int_0^1 t f_{AB}(t)\\,dt \\\\ & = & \\frac 19\\ .\n\\end{eqnarray*}\n\\end{example}\n\n\\begin{example}\\label{exam 6.20} Again let $Z = (X, Y)$ be a point chosen at random\nin the unit square, and let $W = X + Y$.    Then $Y$ and $W$ are not independent, and\nwe have\n\\begin{eqnarray*} E(Y) & = &\\frac 12\\ , \\\\ E(W) & = & 1\\ , \\\\ E(YW) & = & E(XY + Y^2)\n= E(X)E(Y) + \\frac 13 = \\frac 7{12}\n\\ne E(Y)E(W)\\ .\n\\end{eqnarray*}\n\\end{example}\n\nWe turn now to the variance.\n\n\n\\subsection*{Variance}\n\n\\begin{definition}\\label{def 6.6}  Let $X$ be a real-valued random variable with\ndensity function $f(x)$.  
The  \\emx {variance}\\index{variance} $\\sigma^2 = V(X)$ is defined by\n$$\n\\sigma^2 = V(X) = E((X - \\mu)^2)\\ . \n$$\n\\end{definition}   \nThe next result follows easily from Theorem~\\ref{thm 6.3.5}.  There is another\nway to calculate the variance of a continuous random variable, which is usually\nslightly easier.  It is given in Theorem~\\ref{thm 6.14}.\n\\begin{theorem}\\label{thm 6.13.5} If $X$ is a real-valued random variable with $E(X)\n= \\mu$, then\n$$\n\\sigma^2 = \\int_{-\\infty}^\\infty (x - \\mu)^2 f(x)\\, dx\\ .\n$$\n\\end{theorem}\n\\par The properties listed in the next three theorems are all proved in exactly the\nsame way that the corresponding theorems for discrete random variables were proved in\nSection~\\ref{sec 6.2}.\n\\begin{theorem}\\label{thm 6.13} If $X$ is a real-valued random variable defined on\n$\\Omega$ and $c$ is any constant, then (cf. Theorem~\\ref{thm 6.6})\n\\begin{eqnarray*}\nV(cX)    &=& c^2 V(X)\\ , \\\\ \nV(X + c) &=& V(X)\\ .\n\\end{eqnarray*}\n\\end{theorem}\n\n\\begin{theorem}\\label{thm 6.14} If $X$ is a real-valued random variable with $E(X) =\n\\mu$, then (cf. Theorem~\\ref{thm 6.7})\n$$ V(X) = E(X^2) - \\mu^2\\ .\n$$\n\\end{theorem}\n\n\\begin{theorem}\\label{thm 6.15} If $X$ and $Y$ are independent real-valued random\nvariables on $\\Omega$, then (cf. Theorem~\\ref{thm 6.8})\n$$ V(X + Y) = V(X) + V(Y)\\ .\n$$\n\\end{theorem}\n\n\\begin{example} (continuation of Example~\\ref{exam 6.16})\\label{exam 6.18.5} If\n$X$ is uniformly distributed on $[0, 1]$, then, using Theorem~\\ref{thm 6.14}, we have\n$$  V(X) = \\int_0^1 \\Bigl(x - \\frac 12 \\Bigr)^2\\, dx = \\frac 1{12}\\ .\n$$\n\\end{example}\n\n\\begin{example}\\label{exam 6.21} Let $X$ be an exponentially distributed random\nvariable with parameter $\\lambda$.  Then the density function of $X$ is\n$$ f_X(x) = \\lambda e^{-\\lambda x}\\ .\n$$ From the definition of expectation and integration by parts, we have\n\\begin{eqnarray*} E(X) & = & \\int_0^\\infty x f_X(x)\\, dx \\\\\n     & = & \\lambda \\int_0^\\infty x e^{-\\lambda x}\\, dx \\\\\n     & = & \\biggl.-xe^{-\\lambda x}\\biggr|_0^\\infty + \\int_0^\\infty e^{-\\lambda x}\\,\ndx \\\\\n     & = & 0 + \\biggl.\\frac {e^{-\\lambda x}}{-\\lambda}\\biggr|_0^\\infty =\n\\frac 1\\lambda\\ .\n\\end{eqnarray*}\n\nSimilarly, using Theorems~\\ref{thm 6.11} and \\ref{thm 6.14}, we have\n\\begin{eqnarray*} V(X) & = & \\int_0^\\infty x^2 f_X(x)\\, dx - \\frac 1{\\lambda^2} \\\\\n     & = & \\lambda \\int_0^\\infty x^2 e^{-\\lambda x}\\, dx - \\frac 1{\\lambda^2} \\\\\n     & = & \\biggl.-x^2 e^{-\\lambda x}\\biggr|_0^\\infty + 2\\int_0^\\infty x e^{-\\lambda\nx}\\, dx - \\frac 1{\\lambda^2} \\\\\n     & = & \\biggl.-x^2 e^{-\\lambda x}\\biggr|_0^\\infty - \\biggl.\\frac {2xe^{-\\lambda\nx}}\\lambda\\biggr|_0^\\infty - \\biggl.\\frac 2{\\lambda^2} e^{-\\lambda x}\\biggr|_0^\\infty\n-\n\\frac 1{\\lambda^2} =\n\\frac 2{\\lambda^2} - \\frac 1{\\lambda^2} = \\frac 1{\\lambda^2}\\ .\n\\end{eqnarray*} In this case, both $E(X)$ and $V(X)$ are finite if $\\lambda > 0$.\n\\end{example}\n\n\\begin{example}\\label{exam 6.22} Let $Z$ be a standard normal random variable with\ndensity function\n$$ f_Z(x) = \\frac 1{\\sqrt{2\\pi}} e^{-x^2/2}\\ .\n$$  Since this density function is symmetric with respect to the $y$-axis, it is\neasy to show that\n$$\\int_{-\\infty}^\\infty x f_Z(x)\\, dx$$ has value 0.  
The reader should recall,\nhowever, that the expectation is defined to be the above integral only if the integral\n$$\\int_{-\\infty}^\\infty |x| f_Z(x)\\, dx$$ is finite.  This integral equals\n$$2\\int_0^\\infty x f_Z(x)\\, dx\\ ,$$ which one can easily show is finite.  Thus, the\nexpected value of $Z$ is 0.\n\\par To calculate the variance of $Z$, we begin by applying Theorem~\\ref{thm 6.14}:\n$$V(Z) =  \\int_{-\\infty}^{+\\infty} x^2 f_Z(x)\\, dx - \\mu^2\\ .$$ If we write $x^2$ as\n$x\\cdot x$, and integrate by parts, we obtain\n$$\\biggl.\\frac 1{\\sqrt{2\\pi}} (-x e^{-x^2/2})\\biggr|_{-\\infty}^{+\\infty} + \\frac\n1{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} e^{-x^2/2}\\, dx\\ .\n$$ The first summand above can be shown to equal 0, since as $x \\rightarrow \\pm\n\\infty$, $e^{-x^2/2}$ gets small more quickly than $x$ gets large.  The second\nsummand is just the standard normal density integrated over its domain, so the value\nof this summand is 1.  Therefore, the variance of the standard normal density equals\n1.\n\\par Now let $X$ be a (not necessarily standard) normal random variable with\nparameters\n$\\mu$ and $\\sigma$.  Then the density function of $X$ is\n$$ f_X(x) = \\frac 1{\\sqrt{2\\pi}\\sigma} e^{-(x - \\mu)^2/2\\sigma^2}\\ .\n$$  We can write $X = \\sigma Z + \\mu$, where $Z$ is a standard normal random\nvariable.  Since\n$E(Z) = 0$ and $V(Z) = 1$ by the calculation above, Theorems~\\ref{thm 6.10} and\n\\ref{thm 6.13} imply that\n\\begin{eqnarray*} \nE(X)  &=& E(\\sigma Z + \\mu)\\ =\\ \\mu\\ , \\\\ \nV(X)  &=& V(\\sigma Z + \\mu)\\ =\\ \\sigma^2\\ .\n\\end{eqnarray*}\n\\end{example}\n\n\\begin{example}\\label{exam 6.23} Let $X$ be a continuous random variable with the\nCauchy density function\n$$ f_X(x) = \\frac {a}{\\pi} \\frac {1}{a^2 + x^2}\\ .\n$$ Then the expectation of $X$ does not exist, because the integral\n$$\n\\frac a\\pi \\int_{-\\infty}^{+\\infty} \\frac {|x|\\,dx}{a^2 + x^2}\n$$ diverges.  Thus the variance of $X$ also fails to exist.  Densities whose variance\nis not defined, like the Cauchy density, behave quite differently in a number of\nimportant respects from those whose variance is finite.  We shall see one instance of\nthis difference in Section~\\ref{sec 8.2}.\n\\end{example}\n\n\n\\subsection*{Independent Trials}\n\n\\begin{corollary}\\label{cor 6.1}                                               \nIf $X_1$,~$X_2$, \\dots,~$X_n$ is an independent trials\nprocess of real-valued random variables, with $E(X_i) = \\mu$ and $V(X_i) = \\sigma^2$,\nand if\n\\begin{eqnarray*} S_n & = & X_1 + X_2 +\\cdots+ X_n\\ , \\\\ A_n & = & \\frac {S_n}n\\ ,\n\\end{eqnarray*} \nthen\n\\begin{eqnarray*}\nE(S_n) & = & n\\mu\\ ,\\\\\nE(A_n) & = & \\mu\\ ,\\\\\nV(S_n) & = & n\\sigma^2\\ ,\\\\\nV(A_n) & = & \\frac {\\sigma^2} n\\ .\n\\end{eqnarray*}\nIt follows that if we set\n$$ \nS_n^* = \\frac {S_n - n\\mu}{\\sqrt{n\\sigma^2}}\\ ,\n$$ \nthen\n\\begin{eqnarray*} \nE(S_n^*) & = & 0\\ ,\\\\\nV(S_n^*) & = & 1\\ .\n\\end{eqnarray*}\nWe say that $S_n^*$ is a  \\emx {standardized version of} $S_n$ (see\nExercise~\\ref{exer 6.2.13} in Section~\\ref{sec 6.2}).\n\\end{corollary}\n\n\n\\subsection*{Queues}\\index{queues}\n\n\\begin{example}\\label{exam 6.24} Let us consider again the queueing problem, that is,\nthe problem of the customers waiting in a queue for service (see Example~\\ref{exam\n5.21}).  
We suppose again that customers join the queue in such a way that the time\nbetween arrivals is an exponentially distributed random variable $X$ with density\nfunction\n$$ f_X(t) = \\lambda e^{-\\lambda t}\\ .\n$$ Then the expected value of the time between arrivals is simply $1/\\lambda$ (see\nExample~\\ref{exam 6.21}), as was stated in Example~\\ref{exam 5.21}.  The reciprocal\n$\\lambda$ of this expected value is often referred to as the  \\emx {arrival rate.}  The\n\\emx {service time} of an individual who is first in line is defined to be the amount\nof time that the person stays at the head of the line before leaving.  We suppose that\nthe customers are served in such a way that the service time is another exponentially\ndistributed random variable\n$Y$ with density function\n$$ f_Y(t) = \\mu e^{-\\mu t}\\ .\n$$ Then the expected value of the service time is\n$$ E(Y) = \\int_0^\\infty t f_Y(t)\\, dt = \\frac 1\\mu\\ .\n$$ The reciprocal $\\mu$ of this expected value is often referred to as the {\\em\nservice rate.}\n\\par We expect on grounds of our everyday experience with queues that if the service\nrate is greater than the arrival rate, then the average queue size will tend to\nstabilize, but if the service rate is less than the arrival rate, then the queue will\ntend to increase in length without limit (see Figure~\\ref{fig 5.17}).  The simulations\nin Example~\\ref{exam 5.21} tend to bear out our everyday experience.  We can make this\nconclusion more precise if we introduce the  \\emx {traffic intensity} as the product\n$$\n\\rho = ({\\rm arrival\\ rate})({\\rm average\\ service\\ time}) = \\frac \\lambda\\mu =\n\\frac {1/\\mu}{1/\\lambda}\\ .\n$$ The traffic intensity is also the ratio of the average service time to the average\ntime between arrivals.  If the traffic intensity is less than~1 the queue will\nperform reasonably, but if it is greater than~1 the queue will grow indefinitely\nlarge.  In the critical case of $\\rho = 1$, it can be shown that the queue will\nbecome large but there will always be times at which the queue is empty.\\footnote{L.\nKleinrock,  \\emx {Queueing Systems,} vol.~2 (New York: John Wiley and Sons, 1975).}\n\nIn the case that the traffic intensity is less than~1 we can consider the length of\nthe queue as a random variable $Z$ whose expected value is finite,\n$$ E(Z) = N\\ .\n$$\n\nThe time spent in the queue by a single customer can be considered as a random\nvariable $W$ whose expected value is finite,\n$$ E(W) = T\\ .\n$$ Then we can argue that, when a customer joins the queue, he expects to find $N$\npeople ahead of him, and when he leaves the queue, he expects to find $\\lambda T$\npeople behind him.  Since, in equilibrium, these should be the same, we would expect\nto find that\n$$ N = \\lambda T\\ .\n$$ This last relationship is called  \\emx {Little's law for queues.}\\index{Little's\nlaw for queues}\\footnote{ibid., p.~17.}  We will not prove it here.  A proof may be found in\nRoss.\\index{ROSS, S.}\\footnote{S. M. Ross,  \\emx {Applied Probability Models with Optimization\nApplications,} (San Francisco: Holden-Day, 1970)}  Note that in this case we are counting the\nwaiting time of all customers, even those that do not have to wait at all.  
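\nAs an aside, Little's law can be watched in action with a short simulation; the sketch below is illustrative only (it is not the program {\\bf Queue} used in the text), Python is assumed, and the rates $\\lambda = 1$ and $\\mu = 1.1$ match the simulation discussed next:\n\\begin{verbatim}\nimport random\n\nlam, mu = 1.0, 1.1     # arrival rate and service rate, so rho = 10/11 < 1\nNCUST = 200000         # number of customers to simulate\n\nwq = 0.0               # current customer's wait before reaching the server\ntotal_time = 0.0       # accumulated time spent in the system\nfor _ in range(NCUST):\n    s = random.expovariate(mu)    # this customer's service time\n    total_time += wq + s          # time in system = wait + service\n    a = random.expovariate(lam)   # gap until the next arrival\n    wq = max(wq + s - a, 0.0)     # Lindley's recursion for the next wait\n\nT = total_time / NCUST   # average time a customer spends in the system\nprint(T)                 # the text computes below that T = 1/(mu - lam) = 10\nprint(lam * T)           # Little's law: lam * T estimates N, also 10 here\n\\end{verbatim}\nBack to the bookkeeping point about who counts as waiting:  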
In our simulation in\nSection~\\ref{sec 4.2}, we did not consider these customers.\n\\par If we knew the expected queue length then we could use Little's law to obtain\nthe expected waiting time, since\n$$ T = \\frac N\\lambda\\ .\n$$ The queue length is a random variable with a discrete distribution.  We can estimate\nthis distribution by simulation, keeping track of the queue lengths at the times at\nwhich a customer arrives.  We show the result of this simulation (using the program {\\bf\nQueue}) in Figure~\\ref{fig 6.5}.\n\\putfig{4truein}{PSfig6-5}{Distribution of queue lengths.}{fig 6.5}\n\\par We note that the distribution appears to be a geometric distribution.  In the study\nof queueing theory it is shown that the distribution for the queue length in equilibrium\nis indeed a geometric distribution with\n$$ s_j = (1 - \\rho) \\rho^j \\qquad {\\rm for}\\  j = 0, 1, 2, \\dots\\ ,\n$$ if $\\rho < 1$.\nThe expected value of a random variable with this distribution is\n$$ N = \\frac \\rho{(1 - \\rho)}\n$$ (see Example~\\ref{exam 6.8}).  Thus by Little's result the expected waiting time is\n$$ T = \\frac \\rho{\\lambda(1 - \\rho)} = \\frac 1{\\mu - \\lambda}\\ ,\n$$ where $\\mu$ is the service rate, $\\lambda$ the arrival rate, and $\\rho$ the\ntraffic intensity.\n\\par \nIn our simulation, the arrival rate is 1 and the service rate is 1.1.  Thus, the\ntraffic intensity is $1/1.1 = 10/11$, the expected queue size is\n$$\n\\frac {10/11}{(1 - 10/11)} = 10\\ ,\n$$ and the expected waiting time is\n$$\n\\frac 1{1.1 - 1} = 10\\ .\n%*********The simulation values weren't very close to the theoretical ones.\n%Do we want to rerun it?\n$$ In our simulation the average queue size was 8.19 and the average waiting time\nwas 7.37.   In Figure~\\ref{fig 6.5.5}, we show the histogram for the waiting times.  \nThis histogram suggests that the density for the waiting times is exponential with\nparameter\n$\\mu - \\lambda$, and this is the case.\n\\putfig{4truein}{PSfig6-5-5}{Distribution of queue waiting times.}{fig 6.5.5}\n\\end{example}\n\n\\exercises\n\\begin{LJSItem}\n\n\\i\\label{exer 6.3.1} Let $X$ be a random variable with range $[-1,1]$ and let\n$f_X(x)$ be the density function of $X$.  Find $\\mu(X)$ and $\\sigma^2(X)$ if, for $|x|\n< 1$,\n\n\\begin{enumerate}\n\\item $f_X(x) = 1/2$.\n\n\\item $f_X(x) = |x|$.\n\n\\item $f_X(x) = 1 - |x|$.\n\n\\item $f_X(x) = (3/2) x^2$.\n\\end{enumerate}\n\n\n\\i\\label{exer 6.3.3} Let $X$ be a random variable with range $[-1,1]$ and\n$f_X$ its density function.  Find $\\mu(X)$ and $\\sigma^2(X)$ if, for $|x| > 1$,\n$f_X(x) = 0$, and for $|x| < 1$,\n\n\\begin{enumerate}\n\\item $f_X(x) = (3/4)(1 - x^2)$.\n\n\\item $f_X(x) = (\\pi/4)\\cos(\\pi x/2)$.\n\n\\item $f_X(x) = (x + 1)/2$.\n\n\\item $f_X(x) = (3/8)(x + 1)^2$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.5} The lifetime, measured in hours, of the ACME super light bulb\nis a random variable $T$ with density function $f_T(t) = \\lambda^2 t e^{-\\lambda t}$,\nwhere\n$\\lambda = .05$.  What is the expected lifetime of this light bulb?  
What is its\nvariance?\n\n\\i\\label{exer 6.3.6} Let $X$ be a random variable with range $[-1,1]$ and density\nfunction $f_X(x) = ax + b$ if $|x| < 1$.\n\n\\begin{enumerate}\n\\item Show that if $\\int_{-1}^{+1} f_X(x)\\, dx = 1$, then $b = 1/2$.\n\n\\item Show that if $f_X(x) \\geq 0$, then $-1/2 \\leq a \\leq 1/2$.\n\n\\item Show that $\\mu = (2/3)a$, and hence that $-1/3 \\leq \\mu \\leq 1/3$.\n\n\\item Show that $\\sigma^2(X) = (2/3)b - (4/9)a^2 = 1/3 - (4/9)a^2$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.7} Let $X$ be a random variable with range $[-1,1]$ and density\nfunction\n$f_X(x) = ax^2 + bx + c$ if $|x| < 1$ and~0 otherwise.\n\n\\begin{enumerate}\n\\item Show that $2a/3 + 2c = 1$ (see Exercise~\\ref{exer 6.3.6}).\n\n\\item Show that $2b/3 = \\mu(X)$.\n\n\\item Show that $2a/5 + 2c/3 = \\sigma^2(X) + \\mu(X)^2$.\n\n\\item Find $a$,~$b$, and~$c$ if $\\mu(X) = 0$, $\\sigma^2(X) = 1/15$, and sketch the\ngraph of $f_X$.\n\n\\item Find $a$,~$b$, and~$c$ if $\\mu(X) = 0$, $\\sigma^2(X) = 1/2$, and sketch the\ngraph of $f_X$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.8} Let $T$ be a random variable with range $[0,\\infty)$ and\n$f_T$ its density function.  Find $\\mu(T)$ and $\\sigma^2(T)$ if, for $t < 0$, $f_T(t)\n= 0$, and for $t > 0$,\n\\begin{enumerate}\n\n\\item $f_T(t) = 3e^{-3t}$.\n\n\\item $f_T(t) = 9te^{-3t}$.\n\n\\item $f_T(t) = 3/(1 + t)^4$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.9} Let $X$ be a random variable with density function $f_X$. \nShow, using elementary calculus, that the function\n$$\n\\phi(a) = E((X - a)^2)\n$$ takes its minimum value when $a = \\mu(X)$, and in that case $\\phi(a) =\n\\sigma^2(X)$.\n\n\\i\\label{exer 6.3.10} Let $X$ be a random variable with mean $\\mu$ and variance\n$\\sigma^2$.  Let $Y = aX^2 + bX + c$.  Find the expected value of $Y$.\n\n\\i\\label{exer 6.3.11} Let $X$,~$Y$, and~$Z$ be independent random variables, each\nwith mean\n$\\mu$ and variance $\\sigma^2$.\n\n\\begin{enumerate}\n\\item Find the expected value and variance of $S = X + Y + Z$.\n\n\\item Find the expected value and variance of $A = (1/3)(X + Y + Z)$.\n\n\\item Find the expected value of $S^2$ and $A^2$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.12} Let $X$ and $Y$ be independent random variables with uniform\ndensity functions on $[0,1]$.  Find\n\n\\begin{enumerate}\n\\item $E(|X - Y|)$.\n\n\\item $E(\\max(X,Y))$.\n\n\\item $E(\\min(X,Y))$.\n\n\\item $E(X^2 + Y^2)$.\n\n\\item $E((X + Y)^2)$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.13} The Pilsdorff\\index{Pilsdorff Beer Company} Beer Company runs a fleet of\ntrucks along the 100~mile road from Hangtown\\index{Hangtown} to Dry Gulch\\index{Dry Gulch}.  The\ntrucks are old, and are apt to break down at any point along the road with equal probability.  Where\nshould the company locate a garage so as to minimize the expected distance from a typical breakdown\nto the garage?  In other words, if $X$ is a random variable giving the location of the\nbreakdown, measured, say, from Hangtown, and $b$ gives the location of the garage,\nwhat choice of $b$ minimizes $E(|X - b|)$?  Now suppose\n$X$ is not distributed uniformly over $[0,100]$, but instead has density function\n$f_X(x) = 2x/10{,}000$.  Then what choice of $b$ minimizes $E(|X - b|)$?\n\n\\i\\label{exer 6.3.14} Find $E(X^Y)$, where $X$ and $Y$ are independent random\nvariables which are uniform on $[0, 1]$.  Then verify your answer by simulation.\n\n\\i\\label{exer 6.3.15} Let $X$ be a random variable that takes on nonnegative\nvalues and has distribution function $F(x)$.  
Show that\n$$ E(X) = \\int_0^\\infty (1 - F(x))\\, dx\\ .\n$$ \n \\emx {Hint}: Integrate by parts.\n\nIllustrate this result by calculating $E(X)$ by this method if $X$ has an exponential\ndistribution $F(x) = 1 - e^{-\\lambda x}$ for $x \\geq 0$, and $F(x) = 0$ otherwise.\n\n\\i\\label{exer 6.3.15.5} Let $X$ be a continuous random variable with density\nfunction $f_X(x)$.  Show that if\n$$\n\\int_{-\\infty}^{+\\infty} x^2 f_X(x)\\, dx < \\infty\\ ,\n$$ then\n$$\n\\int_{-\\infty}^{+\\infty} |x| f_X(x)\\, dx < \\infty\\ .\n$$ \n \\emx {Hint}:  Except on the interval $[-1, 1]$, the first integrand is greater than the\nsecond integrand.\n\n\\i\\label{exer 6.3.16} Let $X$ be a random variable distributed uniformly over\n$[0,20]$.  Define a new random variable $Y$ by $Y = \\lfloor X\\rfloor$ (the greatest\ninteger in $X$).  Find the expected value of~$Y$.  Do the same for $Z =\n\\lfloor X + .5\\rfloor$.  Compute $E\\bigl(|X-Y|\\bigr)$ and $E\\bigl(|X-Z|\\bigr)$. \n(Note that\n$Y$ is the value of~$X$ rounded down to the nearest integer, while $Z$ is the\nvalue of~$X$ rounded off to the nearest integer.  Which method of rounding off is\nbetter?  Why?)\n\n\\i\\label{exer 6.3.17} Assume that the lifetime of a diesel engine part is a random\nvariable $X$ with density $f_X$.  When the part wears out, it is replaced by another\nwith the same density.  Let $N(t)$ be the number of parts that are used in time\n$t$.  We want to study the random variable $N(t)/t$.  Since parts are replaced on the\naverage every $E(X)$ time units, we expect about $t/E(X)$ parts to be used in time\n$t$.  That is, we expect that\n$$\n\\lim_{t \\to \\infty} E \\Bigl(\\frac {N(t)}t\\Bigr) = \\frac 1{E(X)}\\ .\n$$ This result is correct but quite difficult to prove.  Write a program that will\nallow you to specify the density $f_X$, and the time $t$, and simulate this\nexperiment to find $N(t)/t$.  Have your program repeat the experiment 500 times and\nplot a bar graph for the random outcomes of $N(t)/t$.  From this data, estimate\n$E(N(t)/t)$ and compare this with $1/E(X)$.  In particular, do this for~$t = 100$\nwith the following two densities:\n\n\\begin{enumerate}\n\\item $f_X(t) = e^{-t}$.\n\n\\item $f_X(t) = te^{-t}$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.18} Let $X$ and $Y$ be random variables.  The  \\emx {covariance}\n${\\rm cov}(X,Y)$ is defined by (see Exercise~\\ref{sec 6.2}.\\ref{exer 6.2.24})\n$$\n{\\rm cov}(X,Y) = E\\bigl((X - \\mu(X))(Y - \\mu(Y))\\bigr)\\ .\n$$\n\\begin{enumerate}\n\\item Show that ${\\rm cov}(X,Y) = E(XY) - E(X)E(Y)$.\n\n\\item Using (a), show that ${\\rm cov}(X,Y) = 0$, if\n$X$ and $Y$ are independent.  (Caution: the converse is  \\emx {not} always true.)\n\n\\item Show that $V(X + Y) = V(X) + V(Y) + 2{\\rm cov}(X,Y)$.\n\\end{enumerate}\n\n\\i\\label{exer 6.3.19} Let $X$ and $Y$ be random variables with positive variance. \nThe \\emx {correlation} of $X$ and $Y$ is defined as\n$$\n\\rho(X,Y) = \\frac{{\\rm cov}(X,Y)}{\\sqrt{V(X)V(Y)}}\\ .\n$$\n\\begin{enumerate}\n\\item Using Exercise~\\ref{exer 6.3.18}(c), show that\n$$ 0 \\leq V\\left( \\frac X{\\sigma(X)} + \\frac Y{\\sigma(Y)} \\right) = 2(1 +\n\\rho(X,Y))\\ .\n$$\n\n\\item Now show that\n$$ 0 \\leq V\\left( \\frac X{\\sigma(X)} - \\frac Y{\\sigma(Y)} \\right) = 2(1 -\n\\rho(X,Y))\\ .\n$$\n\n\\item Using (a) and (b), show that\n$$ -1 \\leq \\rho(X,Y) \\leq 1\\ .\n$$\n\\end{enumerate}\n\n\\i\\label{exer 6.3.20} Let $X$ and $Y$ be independent random variables with uniform\ndensities in $[0,1]$.  Let $Z = X + Y$ and $W = X - Y$.  
Find\n\n\\begin{enumerate}\n\\item $\\rho(X,Y)$ (see Exercise~\\ref{exer 6.3.19}).\n\n\\item $\\rho(X,Z)$.\n\n\\item $\\rho(Y,W)$.\n\n\\item $\\rho(Z,W)$.\n\\end{enumerate}\n\n\\istar\\label{exer 6.3.21} When studying certain physiological data, such as heights\nof fathers and sons, it is often natural to assume that these data (e.g., the heights\nof the fathers and the heights of the sons) are described by random variables with\nnormal densities.  These random variables, however, are not independent but rather\nare correlated.  For example, a two-dimensional standard normal density for\ncorrelated random variables has the form\n$$ f_{X,Y}(x,y) = \\frac 1{2\\pi\\sqrt{1 - \\rho^2}} \\cdot e^{-(x^2 - 2\\rho xy + y^2)/(2(1\n- \\rho^2))}\\ .\n$$\n\\begin{enumerate}\n\\item Show that $X$ and $Y$ each have standard normal densities.\n\n\\item Show that the correlation of $X$ and $Y$ (see Exercise~\\ref{exer 6.3.19}) is\n$\\rho$.\n\\end{enumerate}\n\n\\istar\\label{exer 6.3.22} For correlated random variables $X$ and $Y$ it is natural to\nask for the expected value for~$X$ given~$Y$.  For example, Galton\\index{GALTON, F.} calculated the\nexpected value of the height of a son given the height of the father.  He used this\nto show that tall men can be expected to have sons who are less tall on the average. \nSimilarly, students who do very well on one exam can be expected to do less well on\nthe next exam, and so forth.  This is called  \\emx {regression on the mean.}\\index{regression on\nthe mean}  To define this conditional expected value, we first define a conditional density of~$X$\ngiven~$Y = y$ by\n$$ f_{X|Y}(x|y) = \\frac {f_{X,Y}(x,y)}{f_Y(y)}\\ ,\n$$ where $f_{X,Y}(x,y)$ is the joint density of $X$ and $Y$, and $f_Y$ is the density\nfor $Y$.  Then the conditional expected value of~$X$ given~$Y$ is\n$$ E(X|Y = y) = \\int_a^b x f_{X|Y}(x|y)\\, dx\\ .\n$$ For the normal density in Exercise~\\ref{exer 6.3.21}, show that the conditional\ndensity $f_{X|Y}(x|y)$ is normal with mean $\\rho y$ and variance $1 - \\rho^2$. \nFrom this we see that if $X$ and $Y$ are positively correlated $(0 < \\rho < 1)$,\nand if $y > E(Y)$, then the expected value for~$X$ given~$Y = y$ will be less than~$y$\n(i.e., we have regression on the mean). \n\n\\i\\label{exer 6.3.23} A point $Y$ is chosen at random from $[0,1]$.  A second\npoint $X$ is then chosen from the interval $[0,Y]$.  Find the density for $X$.  \\emx\n{Hint}: Calculate $f_{X|Y}$ as in Exercise~\\ref{exer 6.3.22} and then use\n$$ f_X(x) = \\int_x^1 f_{X|Y}(x|y) f_Y(y)\\, dy\\ .\n$$ Can you also derive your result geometrically?\n\n%**********Provide an answer in the answer key to the following problem  (it has been significantly\n%changed).   In addition, we should find out what it means to be inside one of the ellipses\n%discussed in the problem.\n\n\\istar\\label{exer 6.3.24} Let $X$ and $V$ be two standard normal random variables.  Let $\\rho$ \nbe a real number between $-1$ and $1$.\n\\begin{enumerate}\n\\item\nLet $Y = \\rho X + \\sqrt{1 - \\rho^2} V$.  Show that $E(Y) = 0$ and $V(Y) = 1$.  We shall see later\n(see Example~\\ref{exam 7.8} and Example~\\ref{exam 10.3.3}), that the sum of two independent normal\nrandom variables is again normal.  
Thus, assuming this fact, we have shown that $Y$ is standard\nnormal.\n\\item\nUsing Exercises~\\ref{exer 6.3.18} and \\ref{exer 6.3.19}, show that the correlation of $X$\nand $Y$ is $\\rho$.\n\\item\nIn Exercise~\\ref{exer 6.3.21}, the joint density function $f_{X,Y}(x, y)$ for the random variable\n$(X, Y)$ is given.  Now suppose that we want to know the set of points $(x, y)$ in the $xy$-plane \nsuch that $f_{X,Y}(x, y) = C$ for some constant $C$.  This set of points is called a set of constant\ndensity.  Roughly speaking, a set of constant density is a set of points where the outcomes $(X, Y)$\nare equally likely to fall.  Show that for a given $C$, the set of points of constant density is\na curve whose equation is\n$$x^2 - 2\\rho x y + y^2 = D\\ ,$$\nwhere $D$ is a constant which depends upon $C$.  (This curve is an ellipse.)\n\\item\nOne can plot the ellipse in part (c) by using the parametric equations\n\\begin{eqnarray*} x & = & \\frac {r\\cos\\theta}{\\sqrt{2(1 - \\rho)}} + \\frac\n{r\\sin\\theta}{\\sqrt{2(1 +\n\\rho)}}\\ , \\\\ y & = & \\frac {r\\cos\\theta}{\\sqrt{2(1 - \\rho)}} - \\frac\n{r\\sin\\theta}{\\sqrt{2(1 +\n\\rho)}}\\ .\n\\end{eqnarray*}\nWrite a program to plot 1000 pairs $(X, Y)$ for $\\rho = -1/2, 0, 1/2$.  For each plot,\nhave your program plot the above parametric curves for $r = 1, 2, 3$.\n\\end{enumerate}\n\n\\istar\\label{exer 6.3.25} Following Galton, let us assume that the fathers and sons\nhave heights that are dependent normal random variables.  Assume that the average\nheight is 68~inches, the standard deviation is 2.7~inches, and the correlation coefficient\nis~.5 (see Exercises~\\ref{exer 6.3.21}~and~\\ref{exer 6.3.22}).  That is, assume that\nthe heights of the fathers and sons have the form $2.7X + 68$ and $2.7Y + 68$,\nrespectively, where $X$ and $Y$ are correlated standardized normal random variables,\nwith correlation coefficient~.5.\n\n\\begin{enumerate}\n\\item What is the expected height for the son of a father whose height is 72~inches?\n\n\\item Plot a scatter diagram of the heights of 1000 father and son pairs.  \\emx\n{Hint}: You can choose standardized pairs as in Exercise~\\ref{exer 6.3.24} and then\nplot $(2.7X + 68, 2.7Y + 68)$.\n\\end{enumerate}\n\n\n%***********This problem is very time-consuming for the student to do.  Either we\n%should make a file containing the data, or else we should delete the problem.  We\n%will try to find the original data (instead of just the table).\n\n\\istar\\label{exer 6.3.26} When we have pairs of data $(x_i,y_i)$ that are outcomes of\nthe pairs of dependent random variables $X$,~$Y$ we can estimate the correlation\ncoefficient\n$\\rho$ by\n$$\n\\bar r = \\frac {\\sum_i (x_i - \\bar x)(y_i - \\bar y)}{(n - 1)s_Xs_Y}\\ ,\n$$ where $\\bar x$ and $\\bar y$ are the sample means for $X$ and $Y$,\nrespectively, and $s_X$ and $s_Y$ are the sample standard deviations for $X$ and $Y$\n(see Exercise~\\ref{sec 6.2}.\\ref{exer 6.2.18}).  Write a\nprogram to compute the sample means, variances, and correlation for such dependent\ndata.  Use your program to compute these quantities for Galton's data on heights of\nparents and children given in Appendix~B.\n\\par\nPlot the equal density ellipses as defined in Exercise~\\ref{exer 6.3.24} for~$r =\n4$,~6, and~8, and on the same graph print the values that appear in the table at the\nappropriate points.  For example, print 12 at the point\n$(70.5,68.2)$, indicating that there were 12 cases where the parent's height was 70.5\nand the child's was 68.2.  
See if Galton's data is consistent with the equal density\nellipses.\n\n\\i\\label{exer 6.3.27} (from Hamming\\index{HAMMING, R. W.}\\footnote{R.~W.~Hamming,  \\emx {The Art\nof  Probability for Scientists and Engineers} (Redwood City:  Addison-Wesley, 1991), p.~192.})  \nSuppose you are standing on the bank of a straight river.  \n\\begin{enumerate}\n\\item Choose, at random, a direction which will keep you on dry land, and walk 1 km\nin that direction.  Let $P$ denote your position.  What is the expected distance from\n$P$ to the river?\n\\item Now suppose you proceed as in part (a), but when you get to $P$, you pick a random\ndirection (from among  \\emx {all} directions) and walk 1 km.  What is the probability \nthat you will reach the river before the second walk is completed?\n\\end{enumerate}\n\n\\i\\label{exer 6.3.28} (from Hamming\\index{HAMMING, R. W.}\\footnote{ibid., pg. 205.})  A game is\nplayed as follows:  A random number $X$ is chosen uniformly from $[0, 1]$.  Then a sequence $Y_1,\nY_2,\n\\ldots$ of random numbers is chosen independently and uniformly from $[0, 1]$.  The\ngame ends the first time that $Y_i~>~X$.  You are then paid $(i-1)$ dollars.  What is\na fair entrance fee for this game?\n\n\\i\\label{exer 6.3.29} A long needle of length $L$ much bigger than 1 is dropped on a\ngrid with horizontal and vertical lines one unit apart.  Show that the average number\n$a$ of lines crossed is approximately\n$$\na = \\frac{4L}\\pi\\ .\n$$\n\n\\end{LJSItem}}\n%\\end{document}\n\n", "meta": {"hexsha": "57801a3e79f99ec8a12ed36c417506f6c8980067", "size": 134256, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/ch6.tex", "max_stars_repo_name": "kskyten/introduction-to-probability", "max_stars_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/ch6.tex", "max_issues_repo_name": "kskyten/introduction-to-probability", "max_issues_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/ch6.tex", "max_forks_repo_name": "kskyten/introduction-to-probability", "max_forks_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.1142217245, "max_line_length": 169, "alphanum_fraction": 0.6912465737, "num_tokens": 43898, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316860482762, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.5741126009301445}}
{"text": "\\chapter{Stopping Criterion for Randomized Low-Rank Approximations}\n\\label{chap:random}\n\nThere has been work in recent years to understand\nstructured matrices: matrices with off-diagonal blocks that\nare (or can be approximated as) low-rank.\nThe goal is to develop methods which allow us to compute\nmatrix-vector products and matrix inverses faster than\nstandard algorithms; that is, multiplication in $O(n\\log^{\\beta}n)$\nflops and inversion in $O(n^{\\alpha}\\log^{\\beta}n)$ flops\nfor $\\alpha\\in\\brackets{1,2}$ and $\\beta$ small.\nAnother benefit is reduced storage requirements,\nfrequently $O(n\\log^{\\beta}n)$.\nThe simplest of these are banded matrices, but also include\nSequentially Semi-Separable matrices~\\cite{chandrasekaran2005some},\nHierarchically Semi-Separable (HSS)\nmatrices~\\cite{Chandrasekaran2005HSS,chandrasekaran2006fast},\nH-matrices~\\cite{hackbusch1999sparse}, and others.\nRecent work involving randomized HSS construction can be found in\n\\cite{martinsson2011fast,rouet2016distributed,ghysels2017robust}.\nThe main contribution of this chapter is related to\na stochastic estimate of $\\norm{\\cdot}_{F}$ which\nallows us to accurately measure the low-rank approximation\nand determine when it is well-approximated;\nportions of this material first appeared in~\\cite{randomHSSLBL}.\nIn particular, we develop a method to compute a \\emph{relative}\nstopping criterion for an adaptive low-rank approximation.\n\nThroughout this chapter we assume $A\\in\\R^{m\\times n}$\nwith rank $r\\ll \\min(m,n)$.\nFor simplicity, we will assume $m\\ge n$.\nWe let $N(0,1)$ refer to the standard normal distribution\nwith mean $0$ and variance $1$.\nFinally, some of the notation in this chapter may conflict with those\nfrom previous chapters.\nWhile other adaptive randomized algorithms have frequently used\nan absolute tolerance $\\eps$, we will focus on allowing both\nan absolute tolerance $\\epsa$ and a relative tolerance $\\epsr$.\n\n", "meta": {"hexsha": "2357b24907d47a861d0c98f83eb7632cf8f8551f", "size": 1910, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/random.tex", "max_stars_repo_name": "chgorman/UCSB-Dissertation-Template", "max_stars_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/random.tex", "max_issues_repo_name": "chgorman/UCSB-Dissertation-Template", "max_issues_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/random.tex", "max_forks_repo_name": "chgorman/UCSB-Dissertation-Template", "max_forks_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.75, "max_line_length": 69, "alphanum_fraction": 0.7942408377, "num_tokens": 494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.7853085758631158, "lm_q1q2_score": 0.5741065804594616}}
{"text": "\\chapter{Bayesian inference and learning}\n\n\\section{Learning described as Bayesian inference}\n\nThe central problem of learning from data can be verbalized as: given that some agent have access to data $\\mathcal{D}$, what knowledge can the learner extract from it? One way to approach this problem is by assuming that learning does not take place in a vacuum, but in a world that the learner has uncertain knowledge about, translated into beliefs. The fact that those beliefs are uncertain is important, given that if the learner knew exactly everything he should know about the world, access to data $\\mathcal{D}$ could not teach him nothing more.\n\nGiven this general framework of \"informed uncertainty\", one natural way to describe it, mathematically, is by using probability theory for describing the problem. More specifically, it is used the \\textit{Bayesian} viewpoint of probability, where degrees of uncertainty about quantities are mapped into probabilities about those \\cite{MacKay2003,jaynes03}.\n\nIn this interpretation, probability theory does not just deal with random events, but with anything that an agent is uncertain about. So, given some proposition $A$, and given that the learner knows $I$ about the world, $P(A|I)$ represents what he knows about $A$. Thus $A|I$ becomes a random variable, \\textit{even if $A$ is not a random event}. Cox's theorem \\cite{jaynes03},\\cite{Cox_1963} says that, under some certain common sense assumptions about how the learner should ideally reason about beliefs, the rules of probability theory holds, as an extension to logic.\n\nIn a simplification of this setting, when learning from the data $\\mathcal{D}$, and previous information $I$, the learner must have some set of hypothesis (assuming finite for now) $\\mathcal{H} = \\{H_1,\\ldots,H_t\\}$, such that the learner assumes one and only one of then is true, and with each of them associated with a probability $P(H_k)$ such that\n\\begin{displaymath} \n\\sum_{H_k \\in \\mathcal{H}} P(H_k|I) = 1.\n\\end{displaymath}\nFurthermore, each hypothesis $H_k$ should say something about how likely it is for the data to be generated, given $H_k$ is true, and this information is encoded in $P(\\mathcal{D}|H_k,I)$. In this case, Bayes' theorem says that it is possible to obtain the updated probabilities (thus degree of beliefs) $P(H_k|\\mathcal{D},I)$ by Bayes' rule:\n\\begin{equation}\\label{bayes_theorem_1}\n P(H_k|\\mathcal{D},I) = \\frac{P(\\mathcal{D}|H_k,I)}{P(\\mathcal{D}|I)}P(H_k|I),\n\\end{equation}\nwith $p(\\mathcal{D}|I)$ being available by marginalization \n\\begin{equation}\\label{marginalization_1}\n P(\\mathcal{D}|I) = \\sum_{H_k \\in \\mathcal{H}} P(\\mathcal{D}|H_k,I) P(H_k|I).\n\\end{equation}\n\nIn practice, usually hypothesis does not comes in discrete chunks, but one assumes a \\textit{model} $M$. The model is usually endowed with free parameters $\\theta \\in \\Theta \\subset \\mathbb{R}^D$, so that, assuming those to be continuous\\footnote{This is not necessary at all, but it simplifies the notation}, the hypotheses are encoded by those parameter through a probability density function $p(\\theta|M)$. Then, given the data $\\mathcal{D}$, one seeks the posterior density function $p(\\theta|\\mathcal{D},M)$. 
\n\nIn practice, hypotheses do not usually come in discrete chunks; rather, one assumes a \textit{model} $M$. The model is usually endowed with free parameters $\theta \in \Theta \subset \mathbb{R}^D$, so that, assuming those to be continuous\footnote{This is not necessary at all, but it simplifies the notation}, the hypotheses are encoded by those parameters through a probability density function $p(\theta|M)$. Then, given the data $\mathcal{D}$, one seeks the posterior density function $p(\theta|\mathcal{D},M)$. In this case, we also have a version of Bayes' theorem for the densities\n\begin{equation}\label{bayes_theorem_2}\n p(\theta | \mathcal{D},M) = \frac{p(\mathcal{D} | \theta,M)}{p(\mathcal{D}|M)} p(\theta | M),\n\end{equation}\nwith \n\begin{equation}\label{marginalization_2}\np(\mathcal{D}|M) = \int_\Theta p(\mathcal{D}|\theta',M) p(\theta'|M) d \theta'.\n\end{equation}\n\nIn the Bayesian framework, the problem of learning is reduced to one of inference about $\theta$, in the sense that if a specific parameter $\theta$ is sufficient to make a prediction $Q|\theta,M$, with density $p(Q|\theta,M)$, then the learner has access to $Q|\mathcal{D},M$ by marginalization\n\begin{equation}\label{marginalizationpred}\np(Q|\mathcal{D},M) = \int_{\Theta} p(Q|\theta,M) p(\theta|\mathcal{D},M) d\theta.\n\end{equation}\nThus, there is no fundamental difference between learning and inference from a Bayesian point of view.\n\n\section{Decision theory}\label{decision_theory_section}\nFollowing the Bayesian procedure, an agent can learn something about the world. However, ultimately what one wants to do with beliefs about the world is to convert those into actions. This can be formalized in \textit{Bayesian decision theory} \cite{Robert_2001}, where the components required for belief updating are combined with a \textit{loss function} $L : \Theta \times \mathcal{A} \to \mathbb{R}^+$, \nwith $L(\theta,a)$ being the cost of taking action $a$ when the state of the world is $\theta$\footnote{This language refers back to the continuously parameterized model setting described above, and this will be assumed throughout the text. However, one can refer also to a more general setting, \textit{mutatis mutandis}}. Then, the action that minimizes the expected loss, given the posterior distribution $p(\theta|\mathcal{D},M)$, \n\begin{equation}\label{generalloss}\n a^* = \argmin_{a \in \mathcal{A}} \int_\Theta L(\theta,a) p(\theta|\mathcal{D},M) d \theta,\n\end{equation}\nis the Bayes-optimal decision for the agent to make \cite{Robert_2001}.\n\nFrom this point of view, parameter estimation is simply the case when the action taken is choosing a parameter, that is, $\mathcal{A} = \Theta$. In this setting, we have that, for some loss functions $L(\theta,\tilde{\theta})$, one can find the desired \textit{Bayes estimator},\n\begin{equation}\n\hat{\theta} = \argmin_{\tilde{\theta}} \int L(\theta,\tilde{\theta}) p(\theta|\mathcal{D},M) d \theta,\n\end{equation}\nby calculating the minimum analytically:\n\begin{itemize}\n\t\item The $l_2$ (quadratic) loss $L(\theta,\tilde{\theta}) = ||\theta - \tilde{\theta}||_2^2$, for which $\hat{\theta} = \Ev[\theta|\mathcal{D},M]$.\n\t\item The $l_1$ (absolute) loss $L(\theta,\tilde{\theta}) = ||\theta - \tilde{\theta}||_1$, for which, at each coordinate $i$, $\hat{\theta}_i = \text{median}(\theta_i|\mathcal{D},M)$.\n\end{itemize}
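\n\nGiven samples from the posterior, both estimators above are immediate to compute; a minimal sketch (ours, standing in a hypothetical skewed posterior for $p(\theta|\mathcal{D},M)$):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Stand-in samples theta_i ~ p(theta | D, M); a hypothetical\n# skewed posterior is used purely for illustration.\nsamples = rng.gamma(shape=2.0, scale=1.0, size=100_000)\n\ntheta_l2 = samples.mean()      # quadratic loss -> posterior mean\ntheta_l1 = np.median(samples)  # absolute loss  -> posterior median\nprint(theta_l2, theta_l1)      # mean ~ 2.0, median ~ 1.68\n\end{verbatim}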
\n\nHowever, loss functions can be much more general, allowing one, for example, to encode asymmetry in the severity of mistakes. For instance, if $\theta$ is the maximum load of some structural component, underestimating it may result in a bigger waste of resources, while overestimating it may result in collapse.\n\nOne way of choosing $\hat{\theta}$ that does not exactly enter the framework above, at least for continuous parameters, is the \textit{maximum a posteriori} (MAP) estimate, the maximizer of the probability density function:\n\begin{equation}\label{map_definition}\n \hat{\theta}_{\text{MAP}} = \argmax_\theta p(\theta | \mathcal{D},M) = \n       \argmax_\theta p(\mathcal{D} | \theta,M) p(\theta | M).\n\end{equation}\nThe MAP estimate can be regarded, however, as a limit of minimizers of loss functions of the form\n\begin{equation}\nL_c(\theta,\tilde{\theta}) = \n\begin{cases}\n0, & \text{if } ||\theta - \tilde{\theta}|| < c \\\n1, & \text{otherwise},\n\end{cases}\n\end{equation}\nso that\footnote{Provided some conditions. See \cite{Bassett_2018} for a counterexample of the general result.}\n\begin{equation}\n\hat{\theta}_{\text{MAP}} = \lim_{c \to 0} \argmin_{\tilde{\theta}} \int L_c(\theta,\tilde{\theta}) p(\theta|\mathcal{D},M) d \theta.\n\end{equation}\nRelated to the MAP estimator is the \textit{maximum likelihood estimate} (MLE), which can be seen as a modification of the MAP that does not take prior beliefs into account,\n\begin{equation}\label{mle_definition}\n\hat{\theta}_{\text{MLE}} = \argmax_\theta  p(\mathcal{D} | \theta, M).\n\end{equation}\n\nThe MAP (and MLE) estimators suffer from some drawbacks for continuous distributions:\n\begin{itemize}\n\t\item The MAP estimate does not take into account any true loss function, just limits of loss functions. Especially in applications that have an intrinsically asymmetric loss, this may result in grave mistakes.\n\t\item The MAP estimate is in general an untypical point of the distribution, in the sense that the probability of the parameter being near the MAP is low. In particular, in high dimensions the MAP will be very untypical (see \cite{Betancourt_2017}).\n\t\item The MAP estimate is not invariant under reparameterizations. To illustrate, assume that $\theta$ is a one-dimensional parameter representing some phenomenon $F$, and let $\phi = g^{-1}(\theta)$, where $g$ is diffeomorphic\footnote{In order to exclude some special conditions, assume both $\theta$ and $\phi$ are supported in $\mathbb{R}$}. Clearly, $\phi$ is also a valid parameterization of $F$. Assuming $g'(\phi) > 0$ for simplicity, let $f_\theta(\theta) := p(\theta|\mathcal{D},M)$. Then, letting $f_\phi(\phi) := p(\phi | \mathcal{D},M)$, we have that $f_\phi(\phi) = f_\theta(g(\phi))g'(\phi)$. This implies that:\n\t\begin{equation}\n\t f_\phi'(\phi) = g''(\phi) f_\theta(g(\phi)) + (g'(\phi))^2 f'_\theta(g(\phi)).\n\t\end{equation}\n\tNow, being $\hat{\theta}_\text{MAP}$ the MAP estimator for $\theta$, we have $f'_\theta(\hat{\theta}_\text{MAP}) = 0$.\n\tHowever, letting $\hat{\phi} := g^{-1}(\hat{\theta}_\text{MAP})$, we have that $f_\phi'(\hat{\phi}) = g''(\hat{\phi}) f_\theta(g(\hat{\phi}))$, which does not equal $0$ unless $g''(\hat{\phi}) = 0$. Hence, $\hat{\phi}$ cannot be the MAP estimator for $\phi$. 
But, since $\theta$ and $\phi$ are both valid parameterizations of the phenomenon $F$, this lack of invariance implies that the MAP estimate has no meaning in estimating $F$.\n\end{itemize}\nStill, the MAP estimator is relatively straightforward to calculate, since it requires the optimization of $p(\mathcal{D}|\theta) p(\theta)$, which is in general a simpler problem than integration. Thus, it is widely used.\n\n\section{Model selection}\label{modelselectionsection}\nIn the previous discussion, the model $M$ was assumed to be fixed, with only its parameters being unknown. In practice, we have a set of models $\mathcal{M}$ from which we choose $M$. This raises the question of how to make this choice of $M$.\n\nThe standard Bayesian solution for the problem would be placing a prior distribution $P(M)$ on the models, and then computing the posterior distribution for them, given the data, by Bayes' rule\n\begin{equation}\n P(M|\mathcal{D}) = \frac{P(\mathcal{D}|M) P(M)}{\sum_{M' \in \mathcal{M}} P(\mathcal{D}|M') P(M')},\n\end{equation}\nwith the model likelihood, in this setting called the \textit{marginal likelihood} or \textit{evidence}, given by\n\begin{equation}\label{marginalization_3}\np(\mathcal{D} | M) = \int_{\Theta_M} p(\mathcal{D} | \theta_M, M) p(\theta_M | M) d\theta_M,\n\end{equation}\nemphasizing that the parameter space $\Theta_M$ depends on the model.  \nThen, one can choose $M$ by MAP estimation \n\begin{equation}\n \hat{M} = \argmax_{M \in \mathcal{M}} p(\mathcal{D} |M) p(M),\n\end{equation}\nor, in prediction settings, carry the full posterior model distribution to do model averaging. Still, choosing a prior for models may not be a trivial task, as discussed in \cite{Robert_2001}. To circumvent this, one can instead forget about the prior (or assume a uniform prior), and choose the model with maximum likelihood\n\begin{equation}\label{modelselectionobjective}\n\hat{M} = \argmax_{M \in \mathcal{M}} p(\mathcal{D} | M) = \argmax_{M \in \mathcal{M}} \int p(\mathcal{D} | \theta_M, M) p(\theta_M | M) d\theta_M.\n\end{equation}\n\nThe choice of models by maximization of evidence results in the \textit{Bayesian Occam's razor} \cite{MacKay2003,MacKay_1991,Rasmussen_2001}, named after the Occam's razor principle that says, given a choice between models, we should select the simplest model that still explains the data. We say that some model is simpler or more complex than another if it can explain fewer or more datasets. To see how Occam's razor works in the Bayesian setting, it suffices to realize that, over all possible datasets, probabilities must sum to one. For illustration, assume that $\mathcal{D}$ comes from a finite set of possible datasets. Then, we need\n\begin{equation}\n \sum_{\mathcal{D}'} p(\mathcal{D}'|M) = 1.\n\end{equation}\nNow, compare three models, $M_1$, $M_2$ and $M_3$. $M_1$ can explain only very few datasets well, so few that it cannot explain $\mathcal{D}$. $M_2$ can explain more datasets, including $\mathcal{D}$, but not as many as $M_3$, which explains a vast number of datasets. We have then that $p(\mathcal{D}|M_1)$ must be very low, given that $M_1$ does not explain $\mathcal{D}$. The two other models, which explain the data, have higher values of $p(\mathcal{D}|M_2)$ and $p(\mathcal{D}|M_3)$. 
But, since $p(\mathcal{D}|M_3)$ \"shares\" probability mass with more datasets than $p(\mathcal{D}|M_2)$, by conservation of probability mass, we find that $p(\mathcal{D}|M_2)$ is higher. \nHence we have the order\n\begin{equation}\n p(\mathcal{D}|M_2) > p(\mathcal{D}|M_3) > p(\mathcal{D}|M_1).\n\end{equation}\nHence, we find that $M_2$ is simple enough to be desirable, but not so simple as to not be able to explain $\mathcal{D}$, thus obeying the Occam's razor principle.\n\nThe model set $\mathcal{M}$ itself does not need to be discrete or enumerable. If $\mathcal{M}$ can be parametrized by a set $\Lambda$, then one can replace $\mathcal{M}$ with $\Lambda$, and find the maximum of the evidence by\n\begin{equation}\n \lambda_{ML-II} = \argmax_{\lambda \in \Lambda} p(\mathcal{D}|M(\lambda)).\n\end{equation}\nThis estimator is called the \textit{type II maximum likelihood} estimator. By setting a prior over $\Lambda$, we would have instead a \textit{type II maximum a posteriori} estimator \n\begin{equation}\n \lambda_{MAP-II} = \argmax_{\lambda \in \Lambda} p(\mathcal{D}|M(\lambda)) p(M(\lambda)).\n\end{equation}\n\n\n\section{Approximate inference}\nComputationally, Bayesian inference suffers from two major issues: \n\begin{itemize}\n\item Because in the posterior density \eqref{bayes_theorem_2}, the normalizing term $p(\mathcal{D})$\footnote{From now on we omit the dependence on the model $M$.} is to be determined by the integral \eqref{marginalization_2}, a closed-form solution of the posterior density is often unavailable, even though the unnormalized density $Z p(\theta|\mathcal{D}) = p(\mathcal{D}|\theta)p(\theta) = p(\theta,\mathcal{D})$ usually is. \n\n\item A more serious problem is that, even with the normalized posterior density at hand, for an arbitrary function $f(\theta)$, the expectation $\int f(\theta) p(\theta|\mathcal{D}) d\theta$ is not trivial to calculate. And, as seen in Section \ref{decision_theory_section}, what one wants in the end with the posterior distribution is to calculate expectations. Thus, computational methods for dealing with those problems are needed. \n\end{itemize}\n\nIn this section, Monte Carlo methods are quickly reviewed, along with Laplace's approximation. Discussion of variational inference, another important approximate method, is postponed to Chapter 3, since it is a main subject of this work.\n\n\subsection{Monte Carlo integration methods}\n\nConsider again the expectation \n\begin{equation}\label{ev_example}\n\mu := \Ev_{\theta \sim p(\theta|\mathcal{D})}[f(\theta)] = \int f(\theta) p(\theta|\mathcal{D}) d\theta.\n\end{equation}\nAssuming this expectation exists, if one can sample $\theta_1,\ldots,\theta_N$ from $\theta|\mathcal{D}$, independently, then the estimator \n\begin{equation}\n \hat{\mu} := \frac{1}{N} \sum_{i=1}^N f(\theta_i)\n\end{equation}\nis such that, as $N \to \infty$, $\hat{\mu} \to \mu$ almost surely, by the law of large numbers. Moreover, if the variance of $f(\theta)$ is finite, then the convergence rate is\n\begin{equation}\label{cltconvergence}\n\mathcal{O}\left(\sqrt{\frac{\Var(f(\theta))}{N}}\right),\n\end{equation}\nby the central limit theorem. A minimal simulation of this estimator is sketched below.
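\n\nA quick check (ours, with a hypothetical exactly samplable posterior):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Stand-in for exact posterior sampling: theta | D ~ N(1, 0.5^2)\nN = 10_000\ntheta = rng.normal(loc=1.0, scale=0.5, size=N)\n\nf = lambda t: t ** 2                    # integrand f(theta)\nmu_hat = f(theta).mean()                # Monte Carlo estimate\nse = f(theta).std(ddof=1) / np.sqrt(N)  # CLT standard error\nprint(mu_hat, se)                       # exact value: 1 + 0.25 = 1.25\n\end{verbatim}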
Hence, the challenge of Monte Carlo methods is how to get, from an unnormalized posterior distribution $p(\theta,\mathcal{D})$, independent or \"independent enough\" samples from this distribution.\n\n\subsubsection{Importance sampling}\nThe importance sampling algorithm \cite{Robert_2005} is a relatively simple algorithm for sampling from unnormalized posteriors. Let $q(\theta)$ be some proposal distribution, such that one can sample easily from $q(\theta)$, having samples $\theta_1,\ldots,\theta_N \sim q(\theta)$. Finally, assume an unnormalized density $\bar{q}(\theta) = Z_q q(\theta)$ is known. Then, rewrite \eqref{ev_example} as \n\begin{equation}\label{is_rewrite}\n\mu = \int f(\theta) p(\theta|\mathcal{D}) d\theta = \n\frac{Z_q}{Z} \int f(\theta) \frac{p(\theta,\mathcal{D})}{\bar{q}(\theta)} q(\theta) d\theta,\n\end{equation}\nwhich can be estimated as\n\begin{equation}\label{is_derivation_1}\n \frac{Z_q}{Z} \int f(\theta) \frac{p(\theta,\mathcal{D})}{\bar{q}(\theta)} q(\theta) d\theta \approx \frac{1}{Z/Z_q} \frac{1}{N} \sum_{i=1}^N \tilde{w}_i f(\theta_i), \quad \tilde{w}_i := \frac{p(\theta_i,\mathcal{D})}{\bar{q}(\theta_i)}.\n\end{equation}\nThe ratio $Z/Z_q$ can itself be estimated, using the same samples, as\n\begin{equation}\label{is_derivation_2}\n\frac{Z}{Z_q} = \frac{1}{Z_q} \int p(\theta,\mathcal{D}) d \theta = \int \frac{p(\theta,\mathcal{D})}{\bar{q}(\theta)} q(\theta) d\theta \approx \frac{1}{N} \sum_{i=1}^N \tilde{w}_i.\n\end{equation}\nThen, joining \eqref{is_derivation_1} and \eqref{is_derivation_2}, we have an estimate for \eqref{ev_example},\n\begin{equation}\n \mu = \int f(\theta) p(\theta|\mathcal{D}) d\theta \approx \sum_{i=1}^N w_i f(\theta_i), \quad w_i = \frac{\tilde{w}_i}{\sum_{j=1}^N \tilde{w}_j}, \, \forall i,\n\end{equation}\nknown as self-normalized importance sampling; a minimal sketch is given below.
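\n\n(Ours, on a hypothetical unnormalized target:)\n\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\n# Hypothetical unnormalized target: p(theta, D) = exp(-theta^4 / 4)\nlog_p = lambda t: -t ** 4 / 4\nq = stats.norm(0.0, 1.0)        # proposal q(theta) = N(0, 1)\n\nN = 100_000\ntheta = q.rvs(size=N, random_state=rng)\nlog_w = log_p(theta) - q.logpdf(theta)  # unnormalized log-weights\nw = np.exp(log_w - log_w.max())\nw /= w.sum()                            # self-normalized weights w_i\n\nf = lambda t: t ** 2\nprint(np.sum(w * f(theta)))             # estimate of E[f(theta) | D]\n\end{verbatim}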
\n\n\subsubsection{Markov Chain Monte Carlo}\nMarkov Chain Monte Carlo (MCMC) methods use Markov chains to sample from the desired distribution \cite{Robert_2005,Brooks_2011}. MCMC is arguably the most popular method in Bayesian statistics, due to its ability to sample efficiently from relatively high dimensional distributions, with only an unnormalized density available. As such, the number of MCMC methods is enormous. Here a basic method, Metropolis-Hastings, is reviewed in passing.\n\nMarkov chains are sequences of random variables $X_0,X_1,\ldots$, with the property that the conditional distribution $X_i|X_0,\ldots,X_{i-1}$ is the same as $X_i|X_{i-1}$. Thus, a Markov chain is completely defined by the distribution of the initial random variable $X_0$, and the \textit{transition probability distribution}, $p(x_{i+1}|x_i)$. If $p(x_{i+1}|x_i)$ is independent of $i$, it is called a \textit{stationary transition}, and those kinds of chains are of most interest in MCMC methods.\n\nMarkov chains become interesting when their transition admits a distribution $\pi(x)$, called a \textit{stationary distribution}, such that \n\begin{equation}\n \pi(x') = \int p(x'|x) \pi(x) dx,\n\end{equation}\nthat is, when the initial random variable $X_0$ of a Markov chain is distributed according to $\pi(x)$, under the transition $p(x'|x)$, every $X_i$ is distributed according to $\pi(x)$. One of the conditions that suffices (although it is not necessary) for $\pi(x)$ being a stationary distribution is that it satisfies the \textit{detailed balance condition}\n\begin{equation}\n p(x'|x)\pi(x) = p(x|x')\pi(x').\n\end{equation}\nWith the stationary distribution, under some technical conditions (see \cite{Robert_2005}), we have for a Markov chain with transition probability $p(x'|x)$, such that $X_0 = x_0$ and $x_i$ is sampled from $p(x_i|x_{i-1})$, for $i \geq 1$, that, as $N \to \infty$,\n\begin{equation}\n \frac{1}{N} \sum_{i=1}^N f(x_i) \to \Ev_{X \sim \pi}[f(X)].\n\end{equation}\n\nMoreover, the convergence follows a version of the central limit theorem. Assume first that $X_0 \sim \pi$. Then, we have that the central limit theorem holds, with \eqref{cltconvergence} being substituted for\n\begin{equation}\label{cltmc}\n \mathcal{O}\left(\sqrt{\frac{\Var(f(X_1)) + 2 \sum_{k=1}^\infty \Cov(f(X_1),f(X_{1+k}))}{N}}\right),\n\end{equation}\nwhen $i$ is large enough \cite{Geyer_2011}. More generally (and realistically), any Markov chain (modulo technical conditions) for which the transition $p(x'|x)$ has stationary distribution $\pi(x)$ follows the central limit theorem, with the asymptotic rate of convergence being the same as when $X_0 \sim \pi$.\footnote{This results in the important notion of \textit{effective sample size} (ESS), where \eqref{cltmc} is substituted for $\mathcal{O}\left(\sqrt{\frac{\Var(f(X))}{N_{\text{eff}}}}\right)$, with $N_\text{eff} = N/\left(1+2\sum_{k=1}^\infty \text{corr}(f(X_i),f(X_{i+k}))\right)$, when $i$ is large enough.}\n\nThese results give rise to the following procedure: construct a randomized algorithm such that, starting with some $\theta_i \in \Theta$, $\theta_{i+1} \in \Theta$ is generated such that $p(\theta_{i+1}|\theta_i)$ has stationary distribution $p(\theta|\mathcal{D})$.\n\nThe Metropolis-Hastings algorithm is one manner of doing this for a general distribution. Given a (possibly unnormalized) conditional distribution $g(\theta'|\theta)$ (a \textit{proposal distribution}), and $\theta_t$, $t \geq 0$, one samples a proposal \n$\tilde{\theta}_{t+1}$ from $g(\theta'|\theta_t)$ and, letting\n\begin{equation}\n \alpha(\tilde{\theta}_{t+1},\theta_t) = \n  \min \left(1, \frac{p(\tilde{\theta}_{t+1}|\mathcal{D}) g(\theta_t|\tilde{\theta}_{t+1})}{p(\theta_t|\mathcal{D}) g(\tilde{\theta}_{t+1}|\theta_t)}\right)\n\end{equation}\nbe the \textit{acceptance probability}, sets $\theta_{t+1} = \tilde{\theta}_{t+1}$ with probability $\alpha(\tilde{\theta}_{t+1},\theta_t)$, and $\theta_{t+1} = \theta_t$ otherwise. One key observation is that the ratio $p(\tilde{\theta}_{t+1}|\mathcal{D})/p(\theta_t|\mathcal{D})$ is independent of the normalization constant, thus it can be substituted by $p(\mathcal{D}|\tilde{\theta}_{t+1})p(\tilde{\theta}_{t+1})/p(\mathcal{D}|\theta_t)p(\theta_t)$, avoiding the need for the normalized posterior.\n\nFor continuous distributions, one standard proposal distribution is $g(\theta'|\theta) = \mathcal{N}(\theta'|\theta,\epsilon^2 I)$, with $\epsilon$ being the step size, resulting in the \textit{Random Walk Metropolis} algorithm. In particular, this proposal distribution is symmetric, $g(\theta'|\theta) = g(\theta|\theta')$, simplifying the acceptance probability to \n\begin{equation}\n\alpha(\tilde{\theta}_{t+1},\theta_t) = \n\min \left(1, \frac{p(\tilde{\theta}_{t+1}|\mathcal{D})}{p(\theta_t|\mathcal{D})}\right).\n\end{equation}\nA minimal sketch of this algorithm is given below.
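\n\n(Ours, with a hypothetical unnormalized target:)\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Hypothetical unnormalized log-posterior log p(theta, D)\nlog_post = lambda t: -t ** 4 / 4\n\neps = 1.0                 # step size of the Gaussian proposal\ntheta, chain = 0.0, []\nfor _ in range(50_000):\n    prop = theta + eps * rng.standard_normal()\n    # accept with probability min(1, p(prop | D) / p(theta | D));\n    # the normalization constants cancel in the ratio\n    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):\n        theta = prop\n    chain.append(theta)\n\nprint(np.mean(np.square(chain)))  # MCMC estimate of E[theta^2 | D]\n\end{verbatim}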
\n\n\subsection{Laplace's approximation}\nLaplace's approximation \cite{Bishop_2007} is arguably the simplest technique from a class of methods that try to approximate the density $p(\theta|\mathcal{D})$ by some other density $q(\theta)$, using $p(\theta,\mathcal{D})$, and then work with the approximation. Another technique that belongs to this class of methods is variational inference, the subject of Chapter 5.\n\nConsider $\Theta = \mathbb{R}^D$, $p(\theta|\mathcal{D})$ to be smooth, and let $\theta^* = \hat{\theta}_{\text{MAP}}$ be the MAP of $p(\theta|\mathcal{D}) = p(\theta,\mathcal{D})/Z$. Then, doing a second order Taylor approximation of $l(\theta) = \log p(\theta,\mathcal{D}) = \log p(\theta|\mathcal{D}) + \log p(\mathcal{D})$ around $\theta^*$, and noticing $\nabla_\theta l(\theta^*) = 0$, \n\begin{equation}\n l(\theta) \approx l(\theta^*) + \frac{1}{2} (\theta - \theta^*)^T H_{\theta^*}(l) (\theta - \theta^*),\n\end{equation}\nwhere $H_{\theta^*}(l)$ is the Hessian matrix of $l(\theta)$ at $\theta^*$\footnote{The negative of the Hessian matrix, $-H_{\theta}(l)$, is the observed information matrix; its expectation under the model is the Fisher information matrix $I(\theta)$.}. Then, letting $\Sigma = -H_{\theta^*}^{-1}(l)$ and $\mu = \theta^*$, and taking the exponential on both sides, \n\begin{equation}\np(\theta,\mathcal{D}) \approx \exp(l(\theta^*)) \exp \left(-\frac{1}{2}(\theta - \mu)^T \Sigma^{-1} (\theta - \mu) \right).\n\end{equation}\nThe right-hand side is just the unnormalized density of $\mathcal{N}(\theta|\mu,\Sigma)$. Hence, normalizing back, we arrive at Laplace's approximation for $p(\theta|\mathcal{D})$,\n\begin{equation}\np(\theta|\mathcal{D}) \approx \mathcal{N}\big(\theta;\theta^*,-H_{\theta^*}^{-1} (l)\big).\n\end{equation}\n\nLaplace's approximation requires optimization of $l(\theta)$ and access to second derivatives, which in many cases may be cheaply available. However, since the approximation is only local, it may diverge sharply from the actual posterior. Moreover, in higher dimensions, calculating and inverting the Hessian matrix may be too costly. A small sketch of the construction is given below.
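\n\n(Ours; the target is hypothetical and the Hessian is taken by finite differences for simplicity:)\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\n# Hypothetical negative unnormalized log-posterior -l(theta), theta in R^2\nneg_l = lambda t: 0.25 * np.sum(t ** 4) + 0.5 * np.sum(t ** 2)\n\n# Step 1: find the MAP theta* by optimization\ntheta_star = minimize(neg_l, x0=np.zeros(2)).x\n\n# Step 2: Hessian of -l at theta*, by central finite differences\ndef hessian(f, x, h=1e-4):\n    d = len(x)\n    H = np.zeros((d, d))\n    for i in range(d):\n        for j in range(d):\n            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h\n            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)\n                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h ** 2)\n    return H\n\nSigma = np.linalg.inv(hessian(neg_l, theta_star))  # = -H^{-1}(l)\nprint(theta_star, Sigma)  # Laplace approximation N(theta*, Sigma)\n\end{verbatim}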
\n\n\subsubsection{Remark on approximation}\nIt is important to consider a possible advantage of using an approximate density $q(\theta)$ over sampling from $p(\theta|\mathcal{D})$. Assume $q(\theta)$ is simple enough that there is no need for advanced sampling techniques such as Markov chain Monte Carlo. Then, consider the general loss minimization problem \eqref{generalloss}, and substitute $q(\theta)$ for $p(\theta|\mathcal{D})$, yielding the minimization objective, for $a \in \mathcal{A}$,\n\begin{equation}\n F_q(a) = \int_{\Theta} L(\theta,a) q(\theta) d\theta.\n\end{equation}\nIf $\mathcal{A}$ is a subset of $\mathbb{R}^k$, then one can draw $N$ samples from $q(\theta)$ and get a stochastic estimate of $\nabla F_q(a)$,\n\begin{equation}\n \nabla F_q(a) \approx \frac{1}{N}\sum_{\theta_i \sim q(\theta)} \nabla_a L(\theta_i,a),\n\end{equation}\nallowing the application of stochastic gradient descent, and related techniques, for minimizing $F_q(a)$. Sampling from $q(\theta)$ is easy, so the bottleneck mostly lies in the evaluation of $\nabla_a L(\theta,a)$. If samples from $p(\theta|\mathcal{D})$ were instead drawn by some more advanced sampling technique, the bottleneck would be the performance of the sampling algorithm, which may not be nearly as fast. Since $q(\theta)$ must be found only once, the approximation may be more feasible.\n\nThe drawback is, of course, that $q(\theta)$ must be a good approximation of $p(\theta|\mathcal{D})$ to begin with, which may not be easy to ensure (as discussed, this is one drawback of Laplace's approximation).\n\n\section{Expensive and intractable likelihoods}\nIn general, Monte Carlo techniques assume that $p(\theta,\mathcal{D}) = p(\mathcal{D}|\theta) p(\theta)$ can be evaluated cheaply. Since usually the prior $p(\theta)$ is chosen in a manner that makes it very simple, whether $p(\theta,\mathcal{D})$ is hard to evaluate depends on $p(\mathcal{D}|\theta)$. In many cases, likelihood evaluation is in fact cheap, but in some cases it may be expensive or intractable, requiring specific techniques for approximate inference.\n\n\subsection{Pseudo-marginals}\label{pseudomarginalsection}\nOne case of intractable likelihood is when the likelihood model depends on some unobserved variable, which must be marginalized out. To illustrate, consider that $\mathcal{D} = y_1$ is a noisy observation of a phenomenon $z_1$, whose dependence on a parameter $\theta$ is modeled as $p(z_1|\theta)$. The noise model $p(y_1|z_1)$ is also available. Then, the likelihood $p(y_1|\theta)$ comes from marginalization \n\begin{displaymath}\np(y_1|\theta) = \int p(y_1|z_1) p(z_1|\theta) dz_1.\n\end{displaymath}\n\nAs a general case, consider a likelihood dependent on a latent variable\n\begin{equation}\label{ilikemargin}\n p(\mathcal{D}|\theta) = \int p(\mathcal{D}|\omega,\theta) p(\omega|\theta) d\omega.\n\end{equation}\nAssume the integral in \eqref{ilikemargin} is not available analytically, and hence neither is $p(\mathcal{D}|\theta)$. Usually what is available are Monte Carlo estimates of $p(\mathcal{D}|\theta)$, say by i.i.d. samples of $\omega|\theta$,\n\begin{equation}\n \hat{p}(\mathcal{D}|\theta) = \frac{1}{N} \sum_{\omega_i \sim p(\omega|\theta)} p(\mathcal{D}|\omega_i,\theta),\n\end{equation}\nor by importance sampling. In this case, \cite{Andrieu_2009} shows that when using an unbiased and positive estimate $\hat{p}(\mathcal{D}|\theta)$ at each step of the Metropolis-Hastings algorithm, resulting in an unbiased estimate of the unnormalized posterior $\hat{p}(\mathcal{D}|\theta)p(\theta)$, the resulting stationary distribution is the exact posterior $p(\theta|\mathcal{D})$. The result does not give an answer on whether Metropolis-Hastings using $\hat{p}(\mathcal{D}|\theta)$ is efficient, and how to make it so; this is itself a current topic of research (some examples can be found in \cite{Andrieu_2010,Sherlock_2015}). A sketch of such an unbiased likelihood estimate is given below.
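\n\n(Ours; a deliberately simple latent-variable model, used only to illustrate the estimator:)\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# Hypothetical latent-variable model: omega | theta ~ N(theta, 1),\n# with Gaussian noise model p(D | omega, theta) = N(D; omega, 1).\n# Unbiased Monte Carlo estimate of the intractable likelihood\n# p(D | theta) = E_{omega ~ p(omega | theta)}[p(D | omega, theta)]:\ndef p_hat(data, theta, n_latent=100):\n    omega = rng.normal(loc=theta, scale=1.0, size=n_latent)\n    lik = np.exp(-0.5 * (data - omega) ** 2) / np.sqrt(2 * np.pi)\n    return lik.mean()  # unbiased and positive, as required above\n\n# Plugged into the Metropolis-Hastings ratio in place of p(D | theta),\n# this targets the exact posterior p(theta | D).\nprint(p_hat(data=0.7, theta=0.0))\n\end{verbatim}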
\n\n\subsection{Approximate Bayesian computation}\nNow consider a model in which $p(\mathcal{D}|\theta)$ is not readily available, but for each fixed $\theta \in \Theta$, one can sample the \textit{random variable} $\mathcal{D}|\theta$ with ease. For clarity, we will refer to this random variable as $\mathcal{D}'|\theta$, while keeping $\mathcal{D}$ denoting the fixed data.\n\nAs an example, consider that the data $\mathcal{D}$ consists of the observation point $x_N$ of a long Markov chain, with known transition probability distribution $p(x_{i+1}|x_i)$, and one wants to infer the point $x_0$ where the chain was initiated. The likelihood $p(x_N|x_0)$ is given by\n\begin{equation}\n p(x_N|x_0) = \int \ldots \int p(x_N|x_{N-1}) \ldots p(x_1|x_0) dx_1 \ldots dx_{N-1},\n\end{equation}\nfor which even a pseudo-marginal estimate is hard to compute. However, given some $x_0$, sampling $x_N$ is just a question of simulating the chain for $N$ steps, with transition $p(x_{i+1}|x_i)$. Some other examples of models whose likelihood is hard to evaluate, but for which sampling is easy, are found in evolutionary genetics \cite{Pritchard_1999,Beaumont_2003}.\n\nIn approximate Bayesian computation (ABC) \cite{Fearnhead_2012,Beaumont_2003}, one wishes to construct an artificial likelihood $p_{\text{ABC}}(\mathcal{D}|\theta)$, in such a way that, for each $\theta$, $p_{\text{ABC}}(\mathcal{D}|\theta)$ is higher when the simulated data $\mathcal{D}'|\theta$ is \"similar\" to $\mathcal{D}$ than when it is not. For doing this, one takes:\n\begin{itemize}\n\t\item a function $S$ that takes a (simulated or real) dataset $\mathcal{D}$ and returns some $d$-dimensional statistics of it.\n\tFor example, if $\mathcal{D} = \{y_1,\ldots,y_N\}$, the statistics may be the first $d$ empirical moments of $\mathcal{D}$, or simply $\mathcal{D}$ itself, making $S$ the identity function (in this case $d=N$);\n\t\item a function $k:\mathbb{R}^d \to \mathbb{R}$, integrating to one, such that $k$ achieves its maximum at $0$. For instance, $k(x) = \mathcal{N}(x;0,h^2 I)$ can be used.\n\end{itemize}\nWith those, one defines the ABC approximation for the likelihood\n\begin{equation}\label{abclikelihood}\n p_{\text{ABC}}(\mathcal{D}|\theta) = \int \frac{1}{h}k\left(\frac{S(\mathcal{D}) - S(\mathcal{D}')}{h}\right) p(\mathcal{D}'|\theta) d\mathcal{D}'.\n\end{equation}\nTo see why this is an approximation of the true likelihood $p(\mathcal{D}|\theta)$, assume that $S(\mathcal{D}) = S(\mathcal{D}')$ if and only if $\mathcal{D} = \mathcal{D}'$, and consider $k(x) = \mathcal{N}(x;0,h^2 I)$. Then, as $h \to 0$, $h^{-1}k((S(\mathcal{D}) - S(\mathcal{D}'))/h)$ goes to the Dirac delta function $\delta(\mathcal{D} - \mathcal{D}')$. But, we have that \n\begin{equation}\n\int \delta(\mathcal{D} - \mathcal{D}') p(\mathcal{D}'|\theta) d \mathcal{D}' = p(\mathcal{D}|\theta),\n\end{equation}\nso $p_{\text{ABC}}(\mathcal{D}|\theta)$ goes to $p(\mathcal{D}|\theta)$ when $h$ goes to $0$.\n\nWith the approximate likelihood \eqref{abclikelihood}, one has the corresponding approximate ABC posterior\n\begin{equation}\n p_{\text{ABC}}(\theta|\mathcal{D}) \propto p_{\text{ABC}}(\mathcal{D}|\theta) p(\theta).\n\end{equation}\nNotice the ABC likelihood is still not analytically available. However, since samples from $p(\mathcal{D}'|\theta)$ are available, one can use the pseudo-marginal technique presented in the previous section to sample from the approximate ABC posterior; a small sketch of the ABC likelihood estimate is given below.
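\n\n(Ours; the simulator, summaries, and bandwidth are hypothetical choices, and constant factors of the kernel are dropped since they cancel in the posterior:)\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Observed data: hypothetically, 50 draws from N(1.5, 1)\ndata = rng.normal(loc=1.5, scale=1.0, size=50)\n\nS = lambda d: np.array([d.mean(), d.std()])  # summary statistics\nh = 0.1                                      # kernel bandwidth\n\n# Monte Carlo estimate of p_ABC(D | theta): average the Gaussian\n# kernel over simulated datasets D' | theta\ndef p_abc(theta, n_sim=200):\n    ks = []\n    for _ in range(n_sim):\n        sim = rng.normal(loc=theta, scale=1.0, size=data.size)\n        diff = (S(data) - S(sim)) / h\n        ks.append(np.exp(-0.5 * diff @ diff))\n    return np.mean(ks)\n\nprint(p_abc(1.5), p_abc(-1.0))  # larger near the data-generating theta\n\end{verbatim}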
The question of how to choose appropriate summary statistics is addressed, for example, in \cite{Fearnhead_2012}.\n\n\subsection{Expensive likelihoods}\nIn some cases, the likelihood $p(\mathcal{D}|\theta)$ is expensive to evaluate but not intractable, such that one can afford tens or hundreds of evaluations in limited time, but not many more. Moreover, unlike the previously presented case, sampling from the model is just as expensive, if not more so, than evaluating $p(\mathcal{D}|\theta)$. Such likelihoods arise, for example, in Bayesian inverse problems \cite{Tarantola_2004}, where the mapping from parameters to observations is done by expensive simulations.\n\nOne approach is, given a limited number of likelihood evaluations $\Omega_N = \{(\theta_i,p(\mathcal{D}|\theta_i))\}$, to construct an approximate model $\hat{p}_N(\theta|\mathcal{D})$ of $p(\theta|\mathcal{D})$ and perform inference with the approximation, usually with MCMC. This model should, given new evaluations of the likelihood $\Omega_{N'}$, be able to incorporate those in an online manner.\n\nGaussian processes, presented in the next chapter, are particularly suitable for this task, and are used in \cite{Rasmussen_2003,Wang_2018_2,Bilionis_2013,Kandasamy_2015,Conrad_2016}, using Monte Carlo methods on the approximation. Other approximations include GRIMA \cite{Bliznyuk_2012} and polynomial approximations \cite{Marzouk_2007}. The work presented here falls in the context of expensive-likelihood methods, using Gaussian processes for approximation, and variational inference for approximate inference, as in \cite{Acerbi_2018}.\n", "meta": {"hexsha": "4b5484e075e06a6fc4984cc607d60fdf6ae50f75", "size": 33076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex_copy/chapters/capituloA.tex", "max_stars_repo_name": "DFNaiff/Dissertation", "max_stars_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex_copy/chapters/capituloA.tex", "max_issues_repo_name": "DFNaiff/Dissertation", "max_issues_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex_copy/chapters/capituloA.tex", "max_forks_repo_name": "DFNaiff/Dissertation", "max_forks_repo_head_hexsha": "8db72a0e588042a582053625ec58cde6a661f2a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.091503268, "max_line_length": 679, "alphanum_fraction": 0.7337646632, "num_tokens": 9769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.57410657953126}}
{"text": "\\unnumberedchapter{Nomenclature} \n\\chapter*{Nomenclature} \n\n% Break up this table into several ones if it takes up more than one page\n\\begin{longtable}{rl}\n$h$ & Planck constant ($6.626\\ 070\\ 04\\e{-34}\\ \\mbox{Js}$). \\\\\n$\\hbar$ & Planck constant over $2\\pi$ ($1.054\\ 572\\ 66\\e{-34}\\ \\mbox{Js}$). \\\\\n$L_z$ & Angular momentum operator along the $z$-dimension; $xp_y-yp_x$. \\\\\n$\\nabla$ & Gradient operator. \\\\\n$\\Omega$ & Angular rotation frequency of condensate. \\\\\n$\\xi$ & Condensate healing length; the distance from a region of zero density at vortex core to bulk density. \\\\\n$\\mu$ & Chemical potential of the condensate; the energy change per unit change in particle number. \\\\\n\\end{longtable}", "meta": {"hexsha": "10f19a4ee5bec5a4aac2a360b92ac76c1744a332", "size": 693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Preamble/nomenclature.tex", "max_stars_repo_name": "mlxd/PhDThesis", "max_stars_repo_head_hexsha": "1b5c6bfd1bfd073b47aa0b1b5abbc7bff5cd521e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Preamble/nomenclature.tex", "max_issues_repo_name": "mlxd/PhDThesis", "max_issues_repo_head_hexsha": "1b5c6bfd1bfd073b47aa0b1b5abbc7bff5cd521e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Preamble/nomenclature.tex", "max_forks_repo_name": "mlxd/PhDThesis", "max_forks_repo_head_hexsha": "1b5c6bfd1bfd073b47aa0b1b5abbc7bff5cd521e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.3076923077, "max_line_length": 112, "alphanum_fraction": 0.7012987013, "num_tokens": 210, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5741065749297507}}
{"text": "\\section{Phase Unwrapping}\n\n\\subsection{Path-Following vs Minimum Norm Algorithms}\n\n\\subsubsection{Path-Following Algorithms}\n\nPhase unwrapping algorithms may be categorized as path-following or minimum norm algorithms.  Path-following algorithms generally specify one or more starting points at which the phase is considered to be `true.'  Residues, masks, and/or a continuous measure of phase quality may then be used to restrict or guide the allowed path(s), along which the encountered pixels are unwrapped based on local pixel values.  Because the unwrapping operation consists entirely of adding some multiple of $2\\pi$, the resulting phase is necessarily `congruent' to the wrapped phase input.\n\nThe \\code{itk::ItohPhaseUnwrapping} class presented above is the simplest possible example of a path-following algorithm.  More sophisticated algorithms have been described which attempt to restrict the possible paths so as to avoid unwrapping through low-quality regions:\n\n\\begin{itemize}\n\n\\item Goldstein's Branch Cut Phase Unwrapping \\cite{Goldstein1988}\n\\item Flynn's Quality-Guided Residue-Mask Phase Unwrapping \\cite{Flynn1996}\n\\item Flynn's Minimum Discontinuity Phase Unwrapping \\cite{Flynn1997}\n\\item Quality-Guided Phase Unwrapping\n\n\\end{itemize}\n\nWhile Goldstein's and Flynn's algorithms rely on residues, the quality-guided algorithm relies on a user-defined quality map (such as phase derivative variance, discussed above).  This distinction is important because of the observation that residues are only physically meaningful in two dimensions.  Therefore, Goldstein's and Flynn's algorithms are only useful for two-dimensional phase unwrapping.  Phase derivative variance and other quality measures, however, may easily be generalized to arbitrary dimension.  Therefore, the quality-guided approach may be implemented for $n$-dimensional images.  For this reason, \\code{itk:QualityGuidedPhaseUnwrappingImageFilter} (defined in \\code{itkQualityGuidedPhaseUnwrappingImageFilter.h}) was implimented for this submission, whereas Goldstein's and Flynn's algorithms were not considered.\n\n\\subsubsection{Minimum Norm Algorithms}\n\nThe framework for the minimum $L^p$-norm algorithms was first presented by Ghiglia, et al, in \\cite{Ghiglia1998}. Minimum norm algorithms may be subclassified according to (a) whether they take into account phase quality and (b) the exponent of the term being minimized.  Weighted algorithms penalize differences in low-quality pixels less than high-quality pixels, whereas unweighted algorithms weight differences in all pixels equally, regardless of quality.  To give physical significance to the particular norm chosen, $L^0$-norm methods minimize the number of pixels that ate not congruent to the input, $L^1$-norm methods minimize the absolute differences, and $L^2$-norm methods minimize the squared differences.  Unweighted, minimum $L^2$-norm phase unwrapping may be implimented efficiently by taking advantage of the discrete cosine transform (DCT).  This algorithm is implemented in the \\code{itk:DCTPhaseUnwrappingImageFilter} (defined in \\code{itkDCTPhaseUnwrappingImageFilter.h}).\n\n\\subsection{Quality-Guided Phase Unwrapping: Implementation Summary}\n\nThe \\code{itk::QualityGuidedPhaseUnwrapping} class iterates over three images: the wrapped image, the quality image, and a binary image.  A pixel in the binary image is \\code{true} if the pixel has been unwrapped, or \\code{false} otherwise.  
Additionally, a list is maintained which contains the index and quality of all `candidate pixels.'  A pixel is considered a candidate if (a) it adjoins a pixel that has been unwrapped and (b) has not been unwrapped.\n\nEfficiently maintaining the list of candidate pixels is of particular importance for this filter.  To this end, this submission provides the \\code{itk::IndexValuePair} class (defined in \\code{itkIndexValuePair.h}), which has properties \\code{Index} and \\code{Value}.  The class overloads the \\code{==} operator such that \\code{pair1 == pair2} is \\code{true} if \\code{pair1.Index == pair2.Index} is \\code{true}, and overloads the \\code{>} operator such that \\code{pair1 > pair2} is \\code{true} if \\code{pair1.Value > pair2.Value} is \\code{true}.  As such, instances of this class can be stored in a \\code{std::set}, ensuring that (a) the same pixel location cannot be stored twice and (b) the pixels are sorted according to their quality score.  Therefore, an \\code{itk::IndexValuePair} can be added to the list without first checking whether it is already a member, and the highest-quality pixel can always be retrieved by selecting the last element of the \\code{std::set}.\n\nThe user sets the active index using the \\code{SetTruePhase()} method.  This is taken as the starting point for iteration, and is set to be \\code{true} in the binary image.  \\code{itk::IndexValuePair}s for the pixels plus or minus one index in each direction are added to the list of candidate pixels.  The pixel in the list with the highest quality is unwrapped and made the active index, and the corresponding index in the binary image is set to \\code{true}.  The process repeats while the list of candidate pixels is non-empty.\n\nOnce \\code{Update()} has been called, the unwrapped and quality images may be retrieved via the \\code{GetPhase()} and \\code{GetQuality()} methods, respectively.  Currently, the filter only supports phase derivative variance as a quality measure, though the filter could be easily extended to include others.\n\n\\subsection{DCT Phase Unwrapping: Implementation Summary}\n\nThe unweighted, minimum $L^2$ norm solution is calculated efficiently in \\code{itk::DCTPhaseUnwrappingImageFilter} (defined in \\code{itkDCTPhaseUnwrappingImageFilter.h}) by taking advantage of Fourier methods for solving Laplace's equation ($\\bigtriangleup \\phi = 0$), as described in \\cite{Ghiglia1998}.  \\code{itk::DCTPhaseUnwrappingImageFilter} uses the two contributed classes \\code{itk::DCTImageFilter} and \\code{itk::DCTPoissonSolverImageFilter}.  We refer the reader to the Appendix for a discussion of these classes, as they are of general interest to the image processing community, but may obscure the primary objectives of this submission if described in full here.  Briefly:\n\n\\begin{itemize}\n\n\\item The wrapped phase Laplacian of the input is calculated using \\code{itk::WrappedPhaseLaplacianImageFilter} (defined in \\code{itkWrappedPhaseLaplacianImageFilter.h}).\n\\item The forward DCT is taken using \\code{itk::DCTImageFilter} (defined in \\code{itkDCTImageFilter.h}).\n\\item The  transformed image undergoes a pixelwise modulation.\n\\item The inverse DCT is taken.\n\n\\end{itemize}\n\n\\code{itk::DCTImageFilter} provides access to the $n$-dimensional real-to-real transform provided in the FFTW library \\cite{Frigo2005}.  
For this reason, in order to use this filter (and its dependents), ITK must be built with the cmake variable \\code{USE\\_FFTWD} set to \\code{ON}, and any project using this filter is subject to the GPL license.\n", "meta": {"hexsha": "07326896f2697331ad8857dca0c59cfb0fea5b7b", "size": 6980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Document/includes/Phase Unwrapping.tex", "max_stars_repo_name": "DVigneault/ITKPhaseSubmission", "max_stars_repo_head_hexsha": "4f52c134102139544a63d454501983f00c3ade67", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-07T03:59:18.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-07T03:59:18.000Z", "max_issues_repo_path": "Document/includes/Phase Unwrapping.tex", "max_issues_repo_name": "DVigneault/ITKPhaseSubmission", "max_issues_repo_head_hexsha": "4f52c134102139544a63d454501983f00c3ade67", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Document/includes/Phase Unwrapping.tex", "max_forks_repo_name": "DVigneault/ITKPhaseSubmission", "max_forks_repo_head_hexsha": "4f52c134102139544a63d454501983f00c3ade67", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 139.6, "max_line_length": 994, "alphanum_fraction": 0.7992836676, "num_tokens": 1671, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300048, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.5741065749297505}}
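A minimal NumPy/SciPy sketch (ours, not the ITK implementation) of the unweighted minimum $L^2$-norm solve described above, following the classical DCT-based Poisson solve of \cite{Ghiglia1998}:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.fft import dctn, idctn\n\nwrap = lambda a: np.angle(np.exp(1j * a))  # wrap to (-pi, pi]\n\ndef unwrap_l2(psi):\n    # Unweighted minimum L2-norm unwrapping of a 2-D wrapped image\n    M, N = psi.shape\n    # Wrapped phase Laplacian: wrapped forward differences,\n    # differenced again (zero Neumann boundary via padding)\n    dx = np.diff(np.pad(wrap(np.diff(psi, axis=0)), ((1, 1), (0, 0))), axis=0)\n    dy = np.diff(np.pad(wrap(np.diff(psi, axis=1)), ((0, 0), (1, 1))), axis=1)\n    rho = dx + dy\n    # Forward DCT, pixelwise modulation, inverse DCT\n    rc = dctn(rho, norm='ortho')\n    i, j = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')\n    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)\n    denom[0, 0] = 1.0  # zero mode: the solution's free constant\n    phi = rc / denom\n    phi[0, 0] = 0.0\n    return idctn(phi, norm='ortho')\n\end{verbatim}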
{"text": "%% \\subsection{Practical Importance}\n%% \\label{sec:Problem:pract}\n\\section{Case Study}\n\\label{sec:CaseStudy}\n\n\\input{fig-testbench}\n\nTo illustrate the practical importance of the four core properties, we consider a simply-typed $\\lambda$-calculus augmented with an \\texttt{assert} operator.\n%\nSuppose our goal is to formalize an environment-based (as opposed to substitution-based) evaluation semantics,\n%\nand prove that evaluation is strongly normalizing and that it satisfies the structural properties contraction and exchange.\n\n\\autoref{fig:testbench} shows a sketch of this environment-based evaluation (\\texttt{\\_$\\vdash$\\_$\\Rightarrow$\\_}) of expressions (\\texttt{exp}) to results (\\texttt{res}).\n%\nThe definitions of evaluation environments (\\texttt{env}), types, type contexts, and typechecking are not shown.\n%\n%; the full development is available in the supplementary materials.\n%\n\\texttt{assert} takes two arguments; if their evaluation results are exactly equal, \\texttt{assert} evaluates to the first,\notherwise it evaluates to an \\texttt{Err} result.\n\nEvaluation environments are used both as a parameter to the evaluation judgment as well as a data component of closure results.\n%\nWhat dictionary implementation should we choose to represent environments?\n\n%\n%% Below we show that \\dds{} are a suitable choice, but that all the conventional solutions fail.\n\nUsing basic \\sal{}s, contraction is false:\n%\nfor example, given environments\n%\n$E_1=$\\ \\texttt{E ,, (n , v') ,, (n , v)} and\n%\n$E_2=$\\ \\texttt{E ,, (n , v)},\n%\nevaluation produces the closures\n%\n$\\altEvalEnv{E_1}{\\lambda x. e}{[E_1]\\ \\lambda x. e}$\n%\nand\n%\n$\\altEvalEnv{E_2}{\\lambda x. e}{[E_2]\\ \\lambda x. e}$\n%\nbut\n%\n$[E_1]\\ \\lambda x. e \\neq [E_2]\\ \\lambda x. e$.\n%\nA similar problem occurs for exchange, so it is also false.\n%\nIn order for contraction and exchange to be true for any system with closure results,\nthe dictionary implementation used for closures must be \\extensional.\n\n\\Cals{} must be packaged with validity proofs wherever they go, including in the closure result.\nIn such a validity proposition \\texttt{valid : dict -> Set},\n%\n%% (defined in full in the supplementary materials)\n%\nthe dictionary object is in negative position,\nand in our case the dictionary type is dependent on the result datatype,\nso the result is also in negative position, violating strict positivity.\nAs such, results cannot contain validity proofs, so for any system that uses dictionaries as data,\nthe dictionaries must be \\semanticallyTotal.\n\nBecause \\fpf{}s do not have \\EqDec, it is not possible to decide which \\texttt{EvalAsrt} constructor would\napply to an \\texttt{assert} of two \\fpf{}s. 
As such, using \\fpf{}s would make it impossible to constructively prove strong normalization.\n\nIn contrast, \\dds---uniquely amongst the surveyed solutions---possess \\SemTot, \\SemInj, and \\EqDec.\n%\nThus, they are a suitable choice for implementing environments, enabling proofs of contraction, exchange, and strong normalization.\n%\nWe discuss the generality of this case study further in \\autoref{sec:Discussion:Generality}.\n", "meta": {"hexsha": "0b994a90864cc63959fed6c08555f91f6be6f991", "size": 3088, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "case-study.tex", "max_stars_repo_name": "nickcollins/dependent-dicts", "max_stars_repo_head_hexsha": "1ae15eb2f3a3eac6d30ba55ebb1ed65bc15aa0a6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "case-study.tex", "max_issues_repo_name": "nickcollins/dependent-dicts", "max_issues_repo_head_hexsha": "1ae15eb2f3a3eac6d30ba55ebb1ed65bc15aa0a6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-09-10T22:57:49.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-15T21:24:19.000Z", "max_forks_repo_path": "case-study.tex", "max_forks_repo_name": "nickcollins/dependent-dicts", "max_forks_repo_head_hexsha": "1ae15eb2f3a3eac6d30ba55ebb1ed65bc15aa0a6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7297297297, "max_line_length": 171, "alphanum_fraction": 0.7629533679, "num_tokens": 751, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.5741065657267315}}
{"text": "\n\\begin{intro}\n  There are two fundamentally different definitions of Sobolev spaces,\n  which are usually referred to as the spaces $H^k$ and $W^{k,p}$. The\n  first group is obtained by completing a space of continuously\n  differentiable functions with respect to a given norm. The second\n  definition relies on the introduction of distributional derivatives\n  and then restricts the set of functions with such derivatives to\n  those bounded with respect to a given norm. Thus, we can say that\n  $H^k$ approximates the desired space from the inside, while\n  $W^{k,p}$ bounds it from the outside. An important result of modern\n  analysis was the conclusion that both classes are actually the same.\n\n  Details on Sobolev spaces, including most of the material below can\n  be found in~\\cite{AdamsFournier03}. A very condensed introduction is\n  also in\\cite[Chapter 7]{GilbargTrudinger98}.\n\\end{intro}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The Sobolev spaces $H^{1}(\\Omega)$}\n\\label{sec:h1}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  These spaces are defined by first defining a norm for them, then\n  completing for instance the space $\\co^\\infty$ with respect to this\n  norm. This will lead to some difficulties with the involved symbols,\n  which we will resolve in Section~\\ref{sec:weak-derivatives}. We note\n  that the problem of definiteness of the norm is the same as in the\n  definition of $L^2(\\Omega)$, which is, why we again have to take\n  equivalence classes.\n\\end{intro}\n\n\\begin{definition}\n  For functions $f,g\\in \\co^\\infty(\\Omega)$, we define the inner\n  products, the\n  $H^{1}$-seminorm $|.|_1$ and the $H^{1}$-norm $\\norm{.}_1$ as\n  \\begin{xalignat}2\n    \\scal(f,g)_1 &= \\int_\\Omega \\nabla f\\cdot \\nabla g\\dx,\n    &\n    |f|_1 &= \\norm{\\nabla f}_0,\n    \\\\\n    \\Scal(f,g) &= \\scal(f,g)_0 + \\scal(f,g)_1,\n        & \\norm{f}_1 &= \\norm{f}_0 + |f|_1.\n      \\end{xalignat}\n  Here, $\\scal(.,.)_0$ and $\\norm{.}_0$ refer to the inner product and\n  norm in $L^2(\\Omega)$, respectively.\n\\end{definition}\n\n\\begin{definition}\n  \\index{h1@$H^1(\\Omega)$}\n  First we compute the completion of\n  $\\co^\\infty(\\Omega)$ with respect to the norm $\\norm{f}_1$, that is,\n  the set of limits of all Cauchy sequences with respect to the\n  $H^1$-norm consisting of elements in $\\co^\\infty(\\Omega)$ with\n  uniformly bounded $H^1$-norm. The $H^1$-norm of the limit function\n  is defined as the limit of the norms of the sequence.\n  \n  The \\define{Sobolev space} $H^1(\\Omega)$ is the set of equivalence classes in\n  this completion, where we say $f\\simeq g$ if $\\norm{f-g}_1=0$.\n\\end{definition}\n\n\\begin{note}\n  The space $\\co^\\infty(\\Omega)$ in this definition could have been\n  replaced by $\\co^1(\\Omega)$ with no different effect.\n\\end{note}\n\n\\begin{example}\n  The completion process in the definition above indeed yields\n  functions which were not in $\\co^\\infty$. 
\n\n\begin{definition}\n  The space $\co^\infty_0(\Omega)$ is the space of functions with\n  infinitely many continuous derivatives and compact support in\n  $\Omega$.\n\end{definition}\n\n\begin{definition}\n  \index{h10@$H^1_0(\Omega)$}\n  The Sobolev space $H^1_0(\Omega)$ is obtained by completing the\n  space $\co^\infty_0(\Omega)$ with respect to the $H^1$-norm and\n  taking equivalence classes such that $f\simeq g$ if $\norm{f-g}_1=0$.\n\end{definition}\n\n\begin{note}\n  It remains to be shown that the spaces $H^1(\Omega)$ and\n  $H^1_0(\Omega)$ are actually different.\n\end{note}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{Weak derivatives and the Sobolev spaces $W^{1,2}$}\n\label{sec:weak-derivatives}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\begin{intro}\n  When we introduced the space $H^1(\Omega)$, we started with a\n  subspace of continuous functions and extended this space by adding\n  the limits of Cauchy sequences. The result is a complete space with\n  respect to the norm $\norm{.}_1$. On the other hand, it is not clear\n  whether this is the biggest function space for which this norm is\n  finite. Therefore, in this section, we take the opposite approach:\n  we define a derivative in a very broad sense and then reduce to\n  those derivatives bounded with respect to the norm $\norm{.}_1$.\n\end{intro}\n\n\begin{definition}\n  If for a given function $u$ there exists a function $w$ such that\n  \begin{gather}\n    \label{eq:hk:1}\n     \int_\Omega w \phi \dx\n     =\n     -\int_\Omega u \partial_i \phi \dx,\n     \qquad\forall \phi\in \co^\infty_0(\Omega),\n  \end{gather}\n  then we define $\partial_i u := w$ as the partial \define{distributional\n    derivative} of $u$ with respect to $x_i$. Similarly, through\n  integration by parts, we define distributional directional\n  derivatives, distributional gradients, etc.\n\end{definition}\n\n\begin{note}\n  The formula~\eqref{eq:hk:1} is the usual integration by\n  parts. Therefore, whenever $u\in\co^1$ in a neighborhood of $x$, the\n  distributional derivative and the usual derivative coincide.\n\end{note}\n\n\begin{example}\n  Let $\Omega=\R$ and $u(x) = |x|$. Intuitively,\n  it is clear that the distributional derivative, if it exists, must\n  be the \define{Heaviside function}\n  \begin{gather}\n    \label{eq:hk:2}\n    w(x) =\n    \begin{cases}\n      -1 & x<0 \\ 1 & x>0.\n    \end{cases}\n  \end{gather}\n  The proof that this is actually the distributional derivative is\n  left to the reader.\n\end{example}\n\n\begin{example}\n  For the derivative of the \putindex{Heaviside function}\n  in~\eqref{eq:hk:2}, we first observe that it must be zero whenever\n  $x\neq 0$, since the function is continuously differentiable\n  there. Now, we take a test function $\phi\in\co^\infty$ with support\n  in the interval $(-\epsilon,\epsilon)$ for some positive\n  $\epsilon$. Let $w'(x)$ be the derivative of $w$. 
Then, by\n  integration by parts\n  \\begin{gather*}\n    \\int_{-\\epsilon}^\\epsilon w(x) \\phi'(x)\\dx\n    = -\\int_{-\\epsilon}^0 w'(x) \\phi(x)\\dx\n    -\\int_0^\\epsilon w'(x) \\phi(x)\\dx\n    - 2 \\phi(0) = -2\\phi(0),\n  \\end{gather*}\n  since $w'(x) = 0$ under both integrals. Hence, by the defining\n  relation~\\eqref{eq:hk:1}, the distributional derivative of $w$ must\n  satisfy $\\int_\\R w' \\phi\\dx = -\\int_\\R w \\phi'\\dx = 2\\phi(0)$.\n  Thus, $w'(x)$ is an object\n  which is zero everywhere except at zero, but its integral against a\n  test function $\\phi$ is nonzero. This contradicts our notion that\n  integrable functions can be changed on a set of measure zero without\n  changing the integral. Indeed, $w'$ is not a function in the usual\n  sense, and we write $w'(x) = 2 \\delta(x)$, where $\\delta(x)$ is the\n  \\define{Dirac $\\delta$-distribution}, which is defined by the two\n  conditions\n  \\begin{gather*}\n    \\begin{alignedat}{2}\n      \\delta(x) &= 0, & \\forall x & \\neq 0\n      \\\\\n      \\int_\\R \\delta(x) \\phi(x)\\dx &= \\phi(0), \\quad & \\forall \\phi\n      &\\in \\co^0(\\R).\n    \\end{alignedat}\n  \\end{gather*}\n  We stress that $\\delta$ is not an integrable function, or indeed a function\n  at all.\n\\end{example}\n\n\n\\begin{example}\n   Take for instance $\\Omega\n  = [0,1]$. It is known that \\putindex{Lipschitz-continuous}\n  functions on $\\R$ are absolutely continuous and thus\n  differentiable almost everywhere, with their derivatives bounded by\n  the Lipschitz constant, say $L$. Thus, the function itself is\n  bounded on $[0,1]$, say by $M$. Therefore, such a function $f$ is\n  weakly differentiable and its norm is bounded by\n  \\begin{gather*}\n    \\norm{f}_1 \\le L+M.\n  \\end{gather*}\n\\end{example}\n\n\\begin{definition}\n  For a function $u\\in L^2(\\Omega)$, we call a distributional\n  derivative a \\define{weak derivative}, if the derivative is in\n  $L^2(\\Omega)$ as well. For such a \\define{weakly differentiable} function,\n  the seminorm $|.|_1$ and thus the norm $\\norm{.}_1$ are defined, if\n  the gradient is understood in the distributional sense.\n  \n  The space of weakly differentiable functions defined in this manner\n  is the \\putindex{Sobolev space} $W^{1,2}(\\Omega)$, where the\n  superscript one stands for the order of derivatives and the two is\n  the exponent in the norm.\n\\end{definition}\n\n\\begin{remark}\n  In this section and the previous section, we have seen two different\n  definitions of Sobolev spaces. One of the important theorems of\n  modern analysis due to Meyers and Serrin~\\cite{MeyersSerrin64}\n  states that these two definitions actually specify the same object.\n\\end{remark}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Higher order derivatives and different exponents}\n\\label{sec:higher-derivatives}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  Here we generalize the definitions of Sobolev spaces in\n  Sections~\\ref{sec:h1} and~\\ref{sec:weak-derivatives}\n  to higher order derivatives. Most of this section will consist of\n  introducing ugly notation, which is why we postponed\n  it. Mathematically, the same\n  concepts as above are applied, just as for continuous derivatives, to obtain the new spaces.\n\\end{intro}\n\n\\begin{definition}\n  A $d$-dimensional \\define{multi-index} is a tuple\n  $\\alpha=(\\alpha_1,\\alpha_2,\\dots,\\alpha_d)$ with values $\\alpha_i$\n  being nonnegative integers. 
We define partial derivatives of a\n  function $u\\in \\co^\\infty$ with respect to a multi-index as\n  \\begin{gather*}\n    \\partial_{\\alpha}u(x) = \\frac{\\partial^\\alpha}{\\partial x^\\alpha}\n    u(x)\n    = \\frac{\\partial^{\\alpha_1}}{\\partial x_1^{\\alpha_1}}\n    \\frac{\\partial^{\\alpha_2}}{\\partial x_2^{\\alpha_2}}\n    \\dots\n    \\frac{\\partial^{\\alpha_d}}{\\partial x_d^{\\alpha_d}}\n    u(x_1,x_2,\\dots,x_d).\n  \\end{gather*}\n  The order of such a derivative is\n  \\begin{gather*}\n    |\\alpha| = \\sum_{i=1}^d \\alpha_i.\n  \\end{gather*}\n\\end{definition}\n\n\\begin{definition}\n  Let $u$ be integrable on $\\Omega\\subseteq \\R^d$. If a function $w$\n  exists such that\n  \\begin{gather}\n    \\int_\\Omega w \\phi\\dx = (-1)^{|\\alpha|} \\int_\\Omega\n    u\\, \\partial_\\alpha \\phi \\dx,\n    \\qquad\\forall \\phi\\in \\co^\\infty_0(\\Omega),\n  \\end{gather}\n  then we call $\\partial_\\alpha u = w$ a \\putindex{distributional derivative} of\n  order $|\\alpha|$ of $u$. If this derivative is in $L^2(\\Omega)$, we\n  call it a \\putindex{weak derivative}.\n\\end{definition}\n\n\\begin{definition}\n  The space $W^{k,2}(\\Omega)$ is the space of functions $u\\in\n  L^2(\\Omega)$ such that all distributional derivatives of order\n  $|\\alpha| \\le k$ are in $L^2(\\Omega)$. The $W^{k,2}$-seminorm and\n  -norm are defined by\n  \\begin{gather*}\n    |u|_k = \\sqrt{\\sum_{|\\alpha| = k} \\norm{\\partial_\\alpha u}_0^2},\n    \\qquad\n    \\norm{u}_k = \\sqrt{\\sum_{|\\alpha| \\le k} \\norm{\\partial_\\alpha u}_0^2}.\n  \\end{gather*}\n\\end{definition}\n\n\\begin{remark}\n  If we give up the notion of an inner product, all definitions above\n  extend to the case where we replace $L^2$ by a space based on a norm\n  with different exponent $1 \\le p \\le \\infty$. This leads to the\n  spaces $W^{k,p}(\\Omega)$.\n\\end{remark}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Properties of Sobolev spaces}\n\\label{sec:hk:properties}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  For continuously differentiable functions, the inclusion $\\co^{k+1}\n  \\subset \\co^k$ is obvious. The same inclusion holds for $W^{k+1,p}$\n  and $W^{k,p}$ by definition. Continuous functions are obviously\n  continuous on smooth submanifolds, but Sobolev functions are a\n  priori not even defined there. Nevertheless, we will see below that\n  there are operators which allow us to define traces of Sobolev\n  functions on lower dimensional submanifolds, and even embeddings\n  into spaces of continuous functions.\n\\end{intro}\n\n\\begin{definition}\n  Let $\\Omega \\subset \\R^d$. For the \\putindex{Sobolev space}\n  $W^{k,p}(\\Omega)$, we define the \\define{Sobolev number}\n  \\begin{gather}\n    \\label{eq:hk:3}\n    \\sigma = k - \\frac dp.\n  \\end{gather}\n\\end{definition}\n\n\\begin{todo}\n  For the following theorems, it remains to verify whether the cases $p=1$ and\n  $p=\\infty$ are included, when equality of the Sobolev numbers is\n  sufficient and when it is not, and which assumptions on the domains are reasonable.\n\\end{todo}\n\n\\begin{theorem}\n  Let $\\Omega \\subset \\R^d$ be a bounded Lipschitz domain. Consider two\n  Sobolev spaces $W^{k_1, p_1}(\\Omega)$ and $W^{k_2, p_2}(\\Omega)$\n  with $1 \\le p_1, p_2 < \\infty$. 
If $\\sigma_1 \\ge \\sigma_2$, then\n  there exists a continuous embedding $W^{k_1,\n    p_1}(\\Omega)\\hookrightarrow W^{k_2, p_2}(\\Omega)$ such that\n\\begin{gather*}\n  \\norm{u}_{W^{k_2, p_2}(\\Omega)} \\le C \\norm{u}_{W^{k_1, p_1}(\\Omega)}.\n\\end{gather*}\n\\end{theorem}\n\n\\begin{theorem}\n  Let $\\Omega_1 \\subset \\R^{d_1}$ be a bounded Lipschitz domain and\n  $\\Omega_2\\subset \\overline\\Omega_1$ a smooth submanifold of\n  dimension $d_2$. Then, if $\\sigma_1 \\ge \\sigma_2$, there exists a\n  continuous trace operator from $W^{k_1,\n    p_1}(\\Omega_1)\\hookrightarrow W^{k_2, p_2}(\\Omega_2)$, such that\n  \\begin{gather*}\n    \\norm{u}_{W^{k_2, p_2}(\\Omega_2)} \\le C \\norm{u}_{W^{k_1, p_1}(\\Omega_1)}.\n  \\end{gather*}\n\\end{theorem}\n\n\\begin{todo}\n  Poincar\u00e9 and Friedrichs inequalities for first and higher order\n  derivatives.\n\\end{todo}\n% \\begin{definition}\n%   A function is called \\define{H\u00f6lder continuous} with exponent $\\alpha$, if there is a\n%   constant $C$, such that\n%   \\begin{gather*}\n%     \\forall x,y\\in \\Omega: \\quad |f(x)-f(y)| \\le C \n%   \\end{gather*}\n% \\end{definition}\n\n% \\begin{theorem}\n%   Let $\\Omega_1 \\subset \\R^{d_1}$ be a bounded Lipschitz domain. \n% \\end{theorem}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End: \n", "meta": {"hexsha": "781e527f6007d8a8d76d8b6324406b050fbddd3c", "size": 14238, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "integration/hk.tex", "max_stars_repo_name": "ahumanita/notes", "max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "integration/hk.tex", "max_issues_repo_name": "ahumanita/notes", "max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-05-24T07:31:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-31T12:58:14.000Z", "max_forks_repo_path": "integration/hk.tex", "max_forks_repo_name": "ahumanita/notes", "max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-05-15T19:28:53.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-05T19:07:29.000Z", "avg_line_length": 39.7709497207, "max_line_length": 89, "alphanum_fraction": 0.6410310437, "num_tokens": 4154, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110203, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.5741065657267315}}
{"text": "\\label{matrix:chapter}\n\n\nIn Chapter~\\ref{bdoodle:chapter}, we assumed that an event organizer is only allowed to inspect a group of columns (as a batch of date/time options) by polling all invitees at once.\nIn this chapter, we generalize the notion of ``inspection'' by the event organizer, and assume that the organizer can inspect each entry of a random matrix at unit cost (by querying a certain invitee about a certain date/time option at a unit cost).\nConsequently, \\Times and \\Inconveniences are always equal in this setting, and the goal is to find an optimal inspection sequence which is a permutation of entries of a given random matrix.\n\nIn classic Doodle, an inspection is performed over the entire random matrix by querying all invitees (rows) about all date/time options (columns).\nIn the Batched Doodle Problem, we considered a class of B-Doodle mechanisms in which all invitees are queried about a subset of date/time options.\nIn this chapter, we study the Probabilistic Matrix Inspection Problem in which an individual invitee is queried over a single date/time option. In practice, an organizer may want to ask the most important (and/or busiest) invitee first about a couple of date/time options, to ensure that the person is available, before asking other invitees. For instance, when a doctoral graduate student is scheduling his/her university oral examination at Stanford University (which is usually the defense of dissertation in computer science), it is required that\n``[t]he primary advisor, the student, and the out-of-department chairperson must be present and may not participate virtually.''~\\footnote{\nExcerpted on September 06, 2016, from the Graduate Academic Policies and Procedures. See the Policy section under Chapter 4.7.1 Doctoral Degrees, University Oral Examinations \\& Committees.}\nIn such cases, the student can ask the advisor and exam chair to rule out certain date/time options that do not work for them, and then ask other examiners about their availability. \n\nWe model this setting as the problem of inspecting a random matrix where each entry is a random variable (representing availability in group scheduling settings). \nConsider a matrix of mutually independent Bernoulli random variables where rows represent agents and columns represents the set of potential outcomes (such as date/time options). \nGiven probability distribution of the matrix, the organizer can ``inspect'' an entry of the matrix at unit cost to know of the realization of it, and wishes to determine with certainty whether the matrix contains a column consisting only of 1's at minimum number of inspections possible. In our example, this corresponds to querying an agent for her availability, and seeking an agreeable outcome.\nBy abstracting away the details of group scheduling, we are left with a mathematical model that represents the scheduling process, and in this work we study the properties of an optimal inspection policy and design an algorithm that finds it in polynomial time. 
As we later discuss, this model is applicable to other real-life settings as well.\n\n\\paragraph{Related Work.}\nWe refer to our discussion of related work in Chapter~\\ref{bdoodle:chapter}.\n\n\n\\section{Formal Model} \\label{matrix:sec:model}\n\nLet us introduce the mathematical notation we use in this chapter, and formally define the Probabilistic Matrix Inspection Problem (\\PMIP), followed by an example to help the reader understand the setup.\n\n\n\\subsection{Notation and Definitions}\n\nIn the Probabilistic Matrix Inspection Problem, the goal is to determine whether a given random matrix contains a ``feasible'' column or not. \n\n \\begin{definition}[Feasibility]\n Let $A$ be a matrix whose entries are from $\\{0,1\\}$, and refer to the entry of $A$ at row $r$ and column $c$ as $a_{r,c}$.\n We say that a column $c$ of $A$ is {\\em feasible} if it consists only of 1's (otherwise it is {\\em infeasible}). \n We say that $A$ is {\\em feasible} if it contains at least one feasible column (otherwise it is {\\em infeasible}).\n \\end{definition}\n\nWe do not know the values (or realizations) of the entries of $A$, as they are Bernoulli random variables, but we assume that we know the probability distribution of each entry. An input to the problem is this probability distribution as a matrix. \n \\begin{definition}[Input instance]\n An input instance of the Probabilistic Matrix Inspection problem is a pair of matrices $(A, P_A)$ of size $n$ by $m$\n where $A$ is a matrix of Bernoulli random variables \n and $P_A$ is the probability matrix associated with it.\n We denote an entry of $A$ as $a_{r,c}$ and of $P_A$ as $p_{r,c}$.\n Each entry $a_{r,c}$ of $A$ is a Bernoulli random variable, and $p_{r,c}$ is the probability of success for $a_{r,c}$ (i.e., $p_{r,c} = \\mathbf{P}[a_{r,c} = 1]$).\n We define $s_c = \\prod_{r=1}^{n} p_{r,c}$ to denote the probability that column $c$ is feasible (probability of success for column $c$).\n \\end{definition}\n Throughout this work we will assume that the set of random variables $\\{a_{r,c}\\}$ are mutually independent. This assumption is crucial to our technical results because it allows us to compute (in polynomial time) the probability of a specific realization of $A$ conditioning on the event in which some entries of $A$ have already been realized. \n Without this assumption, it is unclear how one can compute such probabilities without having access to the joint probability distribution over all realizations of $A$ (whose size is exponential in the size of $A$).\n\n We define an ``inspection'' as an operation that can be performed on $A$. \n One can inspect an arbitrary entry $a_{r,c}$ of $A$ at unit cost, so as to learn the realization of the random variable. In group scheduling, an inspection corresponds to querying an agent about her availability for a certain outcome. \n The objective of the problem is to determine whether $A$ is feasible or not, with the minimum (expected) number of inspections. The expectation is with respect to the probability distribution specified by $P_A$.\n\n Because we are interested in determining feasibility of $A$ with the minimum number of inspections, there are certain ``unnecessary inspections'' that an optimal strategy must avoid.\n For instance, if a certain entry $a_{r,c}$ is found to be unsuccessful (i.e., $a_{r,c} = 0$ is realized), then there is no need to inspect any other entry from the same column because the column is already known to be infeasible. 
Similarly, if a certain column is found to be feasible (which implies $A$ is feasible) or if all columns are found to be infeasible (which implies $A$ is infeasible), then there is no need to inspect any other entries of the matrix. Lastly, if $p_{r,c} = 0$ or $p_{r,c} = 1$, then there is no need to inspect the entry $a_{r,c}$ because we already know its realization with probability $1$.\n Therefore, without loss of generality, we will assume that $p_{r,c} \\in (0,1)$.\n\n Let us define what constitutes a solution to the problem.  Informally, a clear description of how one should inspect a given matrix suffices -- any permutation of entries of the matrix.\n \\begin{definition}[Inspection policy]\n \tGiven an input instance $(A,P_{A})$, a solution is any permutation of the entries of $A$, and we call it an ``inspection policy'' (or simply, a ``policy'').\n \\end{definition}\n The interpretation of a permutation is as follows. The entries of $A$ will be inspected in the order specified by the permutation. After each inspection, if $A$ is found to be feasible or infeasible, the inspection process ends. Otherwise, it continues inspecting the entries as specified, but it will not make any unnecessary inspections as mentioned earlier. \n\n If $\\pi$ is a permutation of the entries of $A$, we write $C(\\pi)$ to denote the number of inspections performed by $\\pi$. $C(\\pi)$ is a random variable whose probability distribution is determined by $P_A$. We are interested in finding an optimal policy which minimizes the expected number of inspections, $\\Exp[C(\\pi)]$.\n Note that there are $(nm)!$ permutations of the entries of $A$, and therefore exhaustive search for an optimal permutation will not produce an efficient algorithm. \n\n \\subsection{Example} \\label{matrix:sec:example}\nConsider the 2-by-2 random matrix whose probabilities of success are given by $P_A$ as follows. In group scheduling, this corresponds to two agents and a set of two date/time options being considered. Along with $A$ and $P_A$, let us consider an inspection policy $\\pi$ which inspects the entries of $A$ column-by-column while inspecting them from top to bottom within a column:\n \\begin{equation*}\n A = \n \t\\begin{bmatrix}\n \t\ta_{1,1}  & a_{1, 2} \\\\\n \t\ta_{2,1}  & a_{2, 2}\n \t\\end{bmatrix}, ~~~\n P_A = \n \t\\begin{bmatrix}\n \t\t0.6  & 0.7  \\\\\n \t\t0.9  & 0.8 \n \t\\end{bmatrix}, ~~~\n \t\\pi = \n \t\\begin{pmatrix} \n \t1 & 2 & 3 & 4  \\\\\n \ta_{1,1} & a_{2,1} & a_{1,2} & a_{2,2} \n \t\\end{pmatrix}.\n \\end{equation*}\n \n Let us try to compute the expected number of inspections that would be incurred if we use $\\pi$ to inspect $A$. \n Suppose that the realization of $A$ happens to be the identity matrix of size $2$ (i.e., $a_{1,1} = a_{2,2} = 1$ and $a_{2,1} = a_{1,2} = 0$). If we use $\\pi$ to inspect the matrix, it will first inspect $a_{1,1}$ and learn its realization. Because $a_{1,1} = 1$, it will inspect $a_{2,1}$ next only to find that column $1$ is infeasible after all. It will then inspect $a_{1,2}$ and learn that column $2$ is also infeasible, which implies that $A$ is infeasible. At this point, the inspection process terminates without inspecting $a_{2,2}$ as it is not necessary.\n This specific realization of $A$ happens with probability $p_{1,1}(1 - p_{2,1})(1-p_{1,2})p_{2,2}$, and yields $C(\\pi) = 3$ because 3 inspections would occur. 
\n If the realization of $A$ happens to be the null matrix (i.e., all entries are $0$'s), then $\\pi$ would only inspect $a_{1,1}$ and $a_{1,2}$ but skip $a_{2,1}$ and $a_{2,2}$.  In this manner one can consider all $2^{2\\cdot 2} = 16$ possible realizations of $A$, and compute $\\Exp[C(\\pi)]$ in this example. \n\nAnother way to compute $\\Exp[C(\\pi)]$ is by de-coupling $C(\\pi)$ into two random variables $N(\\pi, c)$ with $c\\in\\{1,2\\}$ where $N(\\pi,c)$ denotes the number of inspections (on column $c$) performed by $\\pi$ conditioning on the event that (at least one element of) column $c$ is inspected.\n We can efficiently compute these quantities: $\\Exp[N(\\pi, c)] = 1 \\cdot \\mathbf{P}[a_{1,c} = 0] + 2 \\cdot \\mathbf{P}[a_{1,c} = 1]$ for $c\\in \\{1, 2\\}$.\n To express $\\Exp[C(\\pi)]$ in $N(\\pi,c)$'s, we need to take conditional probability into account, as column 2 is inspected only if column 1 is infeasible: \n $\\Exp[C(\\pi)] = \\Exp[N(\\pi,1)] + \\Exp[N(\\pi,2)] (1-s_1) = 2.382$. Recall that $s_c$ is the probability of success for column $c$ (i.e., probability that column $c$ is feasible).\n\nThere is a trivial, brute-force algorithm that can find an optimal inspection policy: Consider every inspection policy (there are $(nm)!$ of them) and for each inspection policy compute its expected number of inspections (by considering all $2^{nm}$ realizations of $A$). This algorithm's runtime is exponential in the size of an input matrix, and we shall design an efficient algorithm in this chapter.\n\n\n\\section{Technical Results and Algorithm} \\label{matrix:sec:results}\n\nWe first consider two special cases of the Probabilistic Matrix Inspection problem when $n=1$ or $m=1$, both of which admit intuitive, greedy algorithms. We then discuss interesting properties of an optimal inspection policy, which leads to our main result and algorithm.\n\n \\subsection{1-Row Matrix and 1-Column Matrix}\n Let us first consider the case where an input matrix $A$ has only one row (i.e., $n = 1$).\n In this case it is natural to inspect the entries with the largest probabilities first because we can stop as soon as we find an entry whose value is $1$ -- which makes its column and $A$ feasible. An optimal policy follows exactly this intuition in the single-row case.\n\n \\begin{lemma}[1-Row Matrix]\\label{matrix:lemma:single_row_opt}\n When $n = 1$, an inspection policy $\\pi$ is optimal if and only if it inspects the entries in non-increasing order of their associated probabilities.\n \\end{lemma}\n \\begin{proof}\n \tWithout loss of generality let us assume that a policy $\\pi$ inspects the entries in increasing order of their column index; that is, $\\pi(i) = a_{1,i}$. \n \tRecall that $C(\\pi)$ is a random variable that denotes the number of inspections $\\pi$ incurs. \n \tWe can express the expectation of $C(\\pi)$ in terms of $p_{1,c}$'s as follows:\n \t\\small\n \t\\begin{equation} \\label{matrix:eqn:exp_cost_pi_1row}\n \t\t\\Exp\\left[C(\\pi)\\right] = \n \t\tm \\left(\\prod_{k=1}^{m-1}(1 - p_{1,k})\\right) + \n \t\t\\sum_{j=1}^{m-1} j \\left(p_{1,j} \\prod_{k=1}^{j-1} (1-p_{1,k}) \\right).\n \t\\end{equation}\n \t\\normalsize\n\t\n \tSuppose that there exists some $c^*$ such that $p_{1,c^*} < p_{1,c^*+1}$ (if no such $c^*$ exists, then $\\pi$ is an inspection policy that inspects the entries in non-increasing order of probabilities).\n \tLet $\\pi'$ be the same policy as $\\pi$ except we swap the order of $a_{1,c^*}$ and $a_{1,c^*+1}$. 
\n \tThat is, $\\pi'$ is defined as follows.\n \t\\begin{equation*}\n \t\t\\pi'(j) = \n \t\t\\begin{cases}\n \t\t\t\\pi'(j) = \\pi(c^*+1)&  \\mbox{if~} j = c^* \\\\\n \t\t\t\\pi'(j) = \\pi(c^*)  &  \\mbox{if~} j = c^*+1 \\\\\n \t\t\t\\pi'(j) = \\pi(j)  &  \\mbox{if~} j \\neq c^* \\land j \\neq c^*+1\n \t\t\\end{cases}\n \t\\end{equation*}\t\n \tAfter expressing $\\Exp[C(\\pi)]$ and $\\Exp[C(\\pi')]$ as in Equation~\\ref{matrix:eqn:exp_cost_pi_1row}, one can re-arrange the terms to obtain the following:\n \t\\begin{equation} \\label{matrix:eqn:proof_pi_1row}\n \t\\Exp\\left[C(\\pi)\\right] - \\Exp\\left[C(\\pi')\\right] = \n \t\\left(\\prod_{j=1}^{c^*-1} (1-p_{1,j})\\right) (p_{1,c^*+1} - p_{1,c^*}).\n \t\\end{equation}\n \tThis quantity is positive if $p_{1,c^*+1} > p_{1,c^*}$ (recall that $p_{1,j} \\in (0,1)$ for all $j$ as mentioned in Section~\\ref{matrix:sec:model}). \n\t\n \tThis proves the lemma because any policy that inspects an entry with smaller probability before another entry with higher probability is suboptimal, and therefore an optimal policy must inspect entries in non-increasing order of their associated probabilities. \n \\end{proof}\n Although the proof of Lemma~\\ref{matrix:lemma:single_row_opt} is simple, it confirms correctness of our intuition. Equation~\\ref{matrix:eqn:proof_pi_1row} illustrates this intuition; conditioning on the event that the first $c^*-1$ inspections fail (whose probability is the product term in Equation~\\ref{matrix:eqn:proof_pi_1row}), the difference $\\Exp[C(\\pi)] - \\Exp[C(\\pi')]$ depends on the difference in the probabilities of success between the next-entry-to-be-inspected by $\\pi$ and $\\pi'$.\n\n We can also consider the case where an input matrix $A$ has only one column.\n Intuitively, if we wish to minimize the expected number of inspections, we must inspect entries with smallest probability first because we can stop as soon as we determine that $A$ is infeasible.\n Lemma~\\ref{matrix:lemma:single_column_opt} formally states this intuition about optimal policy, and we omit a proof of it as it can be easily done by following the proof of Lemma~\\ref{matrix:lemma:single_row_opt}. \n \\begin{lemma}[1-Column Matrix] \\label{matrix:lemma:single_column_opt}\n When $m = 1$, an inspection policy $\\pi$ is optimal if and only if it inspects the entries in non-decreasing order of their associated probabilities.\n \\end{lemma}\n\n\n \\subsection{Inspection of Entire Column}\n Another interesting property of an optimal inspection policy is that once it inspects the first entry of a column, then it must commit to it and continue inspecting the remaining entries of the column until feasibility of the column is determined. Otherwise, if the policy switches to another column too soon, then it is not optimal. \n \\begin{theorem}[Optimality of inspecting entire column] \\label{matrix:theorem:col_by_col_opt}\n Consider any inspection policy $\\pi$.\n Without loss of generality, let us assume that for each column $c$, $\\pi$ inspects $a_{n,c}$ the last among $n$ entries of the column. \n Let $b_c$ be the index of $\\pi$ such that $\\pi(b_c) = a_{n,c}$. \n Without loss of generality, assume $b_1 < b_2 < \\cdots < b_m$ (we can do this by re-labeling the columns of $A$). 
\n If there is some column $c^*$ such that $b_{c^*} > n\\cdot c^*$, then $\\pi$ is not optimal.\n \\end{theorem}\n \\begin{proof}\n First, note that $b_c \\geq n\\cdot c$ for all $c$ because we assumed $b_1 < b_2 < \\cdots < b_m$, and therefore the entries of previous columns must appear before the last entry of each column.\n\n Let $\\pi$ be an inspection policy being considered in the theorem for which there exists some $c$ with $b_{c} > n \\cdot c$.\n Let us construct a different inspection policy $\\pi'$.\n First, $\\pi'$ inspects all entries of column $1$ in the same order $\\pi$ does. \n Then, $\\pi'$ inspects all entries of column $2$ in the same order $\\pi$ does, and so on. \n In particular, $\\pi'$ inspects all entries of a column before inspecting another column, while preserving the original ordering of the entries within each column that is given by $\\pi$. \n We will show that $\\Exp[C(\\pi')] < \\Exp[C(\\pi)]$.\n\n Let us define a set of new random variables which can be used to express $C(\\cdot)$, as we did in Section~\\ref{matrix:sec:example} when analyzing an example.\n Recall that $s_c = \\prod_{r=1}^{n} p_{r,c}$ is the probability of success for column $c$.\n Let $N(\\pi, c)$ ($N(\\pi', c)$, respectively) be a random variable that denotes the number of entries of column $c$ that are inspected by $\\pi$ (by $\\pi'$, respectively), conditioning on the event that column $c$ is inspected (i.e., when the previous $c-1$ columns are infeasible).\n We can then express $\\Exp[C(\\pi)]$ and $\\Exp[C(\\pi')]$ as follows:\n \\begin{equation} \\label{matrix:eqn:exp_c_pi_decoupled}\n \t\\Exp[C(\\pi)] = \\sum_{c=1}^{m} \\Exp[N(\\pi, c)]\\left( \\prod_{k=1}^{c-1} (1 - s_k) \\right)\n \\end{equation}\n and\n \\begin{equation} \\label{matrix:eqn:exp_c_pi2_decoupled}\n \t\\Exp[C(\\pi')] = \\sum_{c=1}^{m} \\Exp[N(\\pi', c)]\\left( \\prod_{k=1}^{c-1} (1 - s_k) \\right).\n \\end{equation}\n\n\n To prove the theorem we will first show that for any realization of $A$, $N(\\pi, c) \\geq N(\\pi', c)$ holds for all $c$; this immediately implies $\\Exp[C(\\pi)] \\geq \\Exp[C(\\pi')]$.\n We will then show that there exists at least one realization of $A$ such that for some column $c'$ the strict inequality $N(\\pi, c') > N(\\pi', c')$ holds. These two statements together imply that $\\Exp[C(\\pi)] > \\Exp[C(\\pi')]$.\n\n\n Consider any realization of $A$ with the condition that the first $m-1$ columns are infeasible (recall that $m$ is the number of columns of $A$).\n Then $N(\\pi, c) = N(\\pi', c)$ for all $c$ regardless of feasibility of column $m$.\n To see why, note that both $\\pi$ and $\\pi'$ would inspect the same set of entries in each of the first $m-1$ columns in the same order until the column is determined to be infeasible, and therefore $N(\\pi, c) = N(\\pi', c)$ if $c<m$.\n If column $m$ is feasible, then both $\\pi$ and $\\pi'$ would inspect all $n$ entries of it, and thus we have $N(\\pi, m) = N(\\pi', m) = n$. Otherwise, if column $m$ is also infeasible (in which case $A$ is infeasible), then $\\pi$ and $\\pi'$ would inspect the same set of entries of column $m$ in the same order until the first unsuccessful entry of the column is found. Therefore if the first $m-1$ columns are infeasible we have $N(\\pi, c) \\geq N(\\pi', c)$ for all $c$.\n\n Now consider any realization of $A$ with the condition that at least one of the first $m-1$ columns is feasible. 
Let $c'$ be the smallest index of feasible columns of $A$.\n Because the columns from $1$ to $c'-1$ are infeasible, $N(\\pi, c) = N(\\pi', c)$ for all $c < c'$ for the same reason we stated earlier for the other case. \n Since column $c'$ is feasible, $N(\\pi, c') = N(\\pi', c') = n$ as both policies would inspect all $n$ entries of $c'$. By our construction of $\\pi'$ it is clear that $N(\\pi', c) = 0$ for all $c > c'$; therefore we have $N(\\pi, c) \\geq N(\\pi', c)$ for all $c>c'$. In summary $N(\\pi, c) \\geq N(\\pi', c)$ holds for all $c$ in this case as well. \n\n So far we proved the first claim we stated earlier: for all realizations of $A$, we have $N(\\pi, c) \\geq N(\\pi', c)$ for all $c$.\n Let us now prove the second claim. \n \n Let $c^*$ be the smallest index $c$ of columns such that $b_c > nc$ (note that $c^* < m$ because $b_m = nm$ by definition). \n Consider any realization of $A$ with the condition that the first $c^*-1$ columns are infeasible and column $c^*$ is feasible (feasibility of other columns does not matter). \n Using the same arguments we used earlier, we can show that $N(\\pi, c) = N(\\pi', c)$ for all $c < c^*$, that $N(\\pi, c^*) = N(\\pi', c^*) = n$, and that $N(\\pi', c) = 0$ for all $c> c^*$. \n However, because $b_{c^*} > n \\cdot c^*$, there is at least one entry $a_{r',c'}$ with $c' > c^*$ which appears before position $b_{c^*}$ in $\\pi$ (otherwise, if no such entry exists, then $b_{c^*}$ would be equal to $n \\cdot c^*$). This implies that there exists some $c'$ with $c' > c^*$ such that $N(\\pi, c') > 0$. \n\n This proves the second claim: for some realization of $A$, there is some column $c'$ for which $N(\\pi, c') >  N(\\pi', c')$. Together with the first claim we proved earlier, this implies that $\\Exp[C(\\pi)] > \\Exp[C(\\pi')]$ and completes the proof: any policy that does not inspect all entries of a column consecutively is suboptimal.\n \\end{proof}\n By Theorem~\\ref{matrix:theorem:col_by_col_opt}, when seeking an optimal policy, it is sufficient to consider the set of policies that inspect an entire column before committing to another column. Lemma~\\ref{matrix:lemma:single_column_opt} suggests that one should inspect the entries of each column in increasing order of probabilities, which we prove next.\n\n\n \\subsection{Optimal Ordering within Column} \n\n Lemma~\\ref{matrix:lemma:single_column_opt} states that\n an optimal policy must inspect the entries in increasing order of their probability of success, if $A$ is a 1-column matrix.\n This argument can be generalized to the case where there is more than one column: If an optimal policy is to inspect an entry of some column $c$, it must inspect the entry with the smallest probability of success first. \n \\begin{theorem}[Optimal ordering within column] \\label{matrix:theorem:within_column_opt}\n \tConsider any inspection policy $\\pi$.\n \tIf there exist two entries $a_{r_1,c}$ and $a_{r_2,c}$ from the same column such that $a_{r_1,c}$ appears before $a_{r_2,c}$ in $\\pi$ and $p_{r_1,c} > p_{r_2,c}$, then $\\pi$ is not optimal. 
\n \tIn other words, when restricted to each column, an optimal policy must inspect the entries of the column in non-decreasing order of probabilities.\n \\end{theorem}\n \\begin{proof}\n \tLet $\\pi$ be an inspection policy being considered in the theorem.\n \tBecause of Theorem~\\ref{matrix:theorem:col_by_col_opt} we can assume, without loss of generality, that $\\pi$ inspects all entries of column 1, followed by column 2, and so on.\n \tFurther let us assume that $\\pi$ inspects the entries of each column in increasing order of their row index (we can do so by re-labeling the indices of entries). Precisely, $\\pi(r + n (c-1)) = a_{r,c}$ defines $\\pi$. Let $a_{r,c^*}$ and $a_{r+1,c^*}$ be two entries with $p_{r,c^*} > p_{r+1, c^*}$.\n \tLet us consider a different inspection policy $\\pi'$ that is the same as $\\pi$ except that $\\pi'$ inspects $a_{r+1,c^*}$ before $a_{r,c^*}$, by swapping their order.\n \t\\begin{equation*}\n \t\t\\pi'(j) = \n \t\t\\begin{cases}\n \t\t\ta_{r+1,c^*}  &  \\mbox{if~} \\pi(j) = a_{r,c^*} \\\\\n \t\t\ta_{r,c^*}    &  \\mbox{if~} \\pi(j) = a_{r+1,c^*} \\\\\n \t\t\t\\pi(j)     &  \\mbox{otherwise} \n \t\t\\end{cases}\n \t\\end{equation*}\t\n\t\n \tWe claim that $\\Exp[C(\\pi')] < \\Exp[C(\\pi)]$, which implies that $\\pi$ is not optimal.\n\t\n \tLet us define new random variables $N(\\pi, c)$ and $N(\\pi', c)$ as we did in our proof of Theorem~\\ref{matrix:theorem:col_by_col_opt} (i.e., the number of inspections performed by the respective policy on column $c$, conditioning on the event that the column is inspected).\n \tThen we can express $\\Exp[C(\\pi)]$ and $\\Exp[C(\\pi')]$ in terms of the new random variables and $s_c$'s as we did in Equations~\\ref{matrix:eqn:exp_c_pi_decoupled} and \\ref{matrix:eqn:exp_c_pi2_decoupled}.\n\t\n \tObserve that $N(\\pi, c) = N(\\pi', c)$ for any realization of $A$ if $c \\neq c^*$.\n \tTo see this, first note that column $c$ would not be inspected by $\\pi$ or by $\\pi'$ if any of the previous columns (that is, columns $1$ through $c-1$) is found to be feasible, in which case $N(\\pi, c) = N(\\pi', c) = 0$. Otherwise, if column $c$ is inspected, both policies would inspect the entries of $c$ in the very same order, so $N(\\pi, c) = N(\\pi', c)$ must hold. Therefore we conclude that $\\Exp[N(\\pi, c)] = \\Exp[N(\\pi', c)]$ when $c \\neq c^*$. \n\t\n \tWe will now show that $\\Exp[N(\\pi, c^*)] > \\Exp[N(\\pi', c^*)]$ holds. 
\n \tThis immediately implies $\\Exp[C(\\pi)] > \\Exp[C(\\pi')]$ due to Equations~\\ref{matrix:eqn:exp_c_pi_decoupled} and \\ref{matrix:eqn:exp_c_pi2_decoupled}.\n \tLet us express $\\Exp[N(\\pi, c^*)]$ in terms of $p_{r,c^*}$'s.\n \t\\begin{equation*}\n \t\t\\Exp[N(\\pi, c^*)] = n\\left( \\prod_{k=1}^{n-1} p_{k,c^*}\\right) + \n \t\t\\sum_{j=1}^{n-1} j (1-p_{j,c^*}) \\left( \\prod_{k=1}^{j-1} p_{k,c^*} \\right)\n \t\\end{equation*}\n \tNote that the event $N(\\pi, c^*) = j$ with $j < n$ occurs if the first $j-1$ entries inspected are successful while the $j$-th entry is not, and $N(\\pi, c^*) = n$ occurs if the first $n-1$ entries are successful (the $n$-th entry's realization does not matter).\n\t\n \tWe can express $\\Exp[N(\\pi', c^*)]$ in a similar manner, and simplify  $\\Exp[N(\\pi, c^*)] - \\Exp[N(\\pi', c^*)]$ as follows:\n \t\\small\n \t\\begin{equation*}\n \t\t\\Exp[N(\\pi, c^*)] - \\Exp[N(\\pi', c^*)] = \n \t\t\\left( \\prod_{k=1}^{r-1} p_{k,c^*} \\right) (p_{r,c^*} - p_{r+1,c^*}).\n \t\\end{equation*}\n \t\\normalsize\n \tThe quantity above is positive if $p_{r,c^*} > p_{r+1,c^*}$, which is the assumption we began with.\n \tThis proves the theorem.\n \\end{proof}\n\n\n Theorems~\\ref{matrix:theorem:col_by_col_opt} and \\ref{matrix:theorem:within_column_opt} together tell us that \n in order to find an optimal policy we only need to decide the ordering of the columns. \n There are still $m!$ orderings of columns, and an exhaustive search algorithm would not be efficient.\n As we were able to generalize Lemma~\\ref{matrix:lemma:single_column_opt} to Theorem~\\ref{matrix:theorem:within_column_opt} by generalizing the optimal solution for the 1-column case, it would be natural to consider generalizing Lemma~\\ref{matrix:lemma:single_row_opt} in a similar manner.\n\n This idea leads to the following greedy algorithm: First we sort columns by their probability of success ($s_c = \\prod_{r=1}^{n} p_{r,c}$) in decreasing order, and inspect the entries of each column in increasing order of their associated probabilities. However, as the following example shows, this algorithm is suboptimal. \n \\begin{equation*}\n A = \n \t\\begin{bmatrix}\n \t\ta_{1,1}  & a_{1, 2} \\\\\n \t\ta_{2,1}  & a_{2, 2}\n \t\\end{bmatrix},\n \tP_A =\n \t\\begin{bmatrix}\n \t\t0.4459  & 0.2262 \\\\\n \t\t0.4459  & 0.8114\n \t\\end{bmatrix}\n \\end{equation*}\n Here we have $s_1 \\approx 0.199$ and $s_2 \\approx 0.184$, and the greedy algorithm would produce \n $$\\pi = \\begin{pmatrix} a_{1,1} & a_{2,1} & a_{1,2} & a_{2,2} \\end{pmatrix}.$$ \n\nIts expected cost, $\\Exp[C(\\pi)]$, is $2.428$, but if we inspect the second column first, then the expected cost is $2.407$, which is optimal in this example.\n One can consider another greedy algorithm which inspects the columns in increasing order of their expected number of inspections (within column), but this algorithm turns out to be suboptimal as well. \n\n\n \\subsection{Main Result and Algorithm}\n Let us present the main result that leads to an efficient algorithm for finding an optimal inspection policy. \n\n \\begin{theorem} \\label{matrix:thm:main_result}\n \tLet $s_c$ be the probability of success for column $c$ as before. 
\n \tLet $\\mu_c$ be the expected number of inspections that column $c$ incurs if its entries are inspected in increasing order of their probability of success, conditioning on the event that column $c$ is inspected and infeasible.\n \tAn optimal policy must be a column-by-column policy (due to Theorem~\\ref{matrix:theorem:col_by_col_opt}), must inspect the entries of each column in non-decreasing order of probabilities (due to Theorem~\\ref{matrix:theorem:within_column_opt}), and must inspect the columns in non-decreasing order of $\\mu_c(1 - s_c)/s_c$.\n \\end{theorem}\n \\begin{proof}\n \tConsider a column-by-column inspection policy $\\pi$ which inspects the column 1 through $m$ in increasing order of their index (we can assume this without loss of generality by re-labeling columns).\n\t\n \tAs before, let $N(\\pi,c)$ be a random variable that denotes the number of inspections performed by $\\pi$ on column $c$, conditioning on the event that column $c$ is inspected.\n \tThen we can express $\\Exp[N(\\pi, c)]$ in terms of $s_c$ and $\\mu_c$ as follows.\n \t\\begin{equation}\\label{matrix:eqn:exp_cost_column_by_condition}\n \t\t\\Exp[N(\\pi, c)] = s_c \\cdot n  + (1-s_c) \\cdot \\mu_c\n \t\\end{equation}\n \tThis equation holds because if the column is feasible (with probability $s_c$), it would require $n$ inspections, but if it is not (with probability $1-s_c$), it would require $\\mu_c$ inspections in expectation. The equation above simply considers these two events, and calculates the expected value of $N(\\pi, c)$. \n\t\n \tSuppose that there is some column $c^*$ such that $\\mu_{c^*}(1 - s_{c^*}) / s_{c^*} > \\mu_{c^*+1} (1 - s_{c^*+1}) / s_{c^*+1}$. Because $\\pi$ inspects column $c^*$ before column $c^*+1$, it would not be inspecting the columns in increasing order of $\\mu_c(1-s_c)/s_c$.\n \tConsider a different inspection policy $\\pi'$ which inspects the columns in the same order as $\\pi$ except that $\\pi'$ inspects column $c^*+1$ before $c^*$ by swapping the inspection ordering of the two.\n \tWe can relate $N(\\pi, \\cdot)$ to $N(\\pi', \\cdot)$ as follows.\n \t\\begin{equation*}\n \t\tN(\\pi', c) = \n \t\t\\begin{cases}\n \t\t\tN(\\pi, c^*+1) & \\mbox{if~} c = c^* \\\\\n \t\t\tN(\\pi, c^*)   & \\mbox{if~} c = c^* + 1 \\\\\n \t\t\tN(\\pi, c)     & \\mbox{otherwise}\n \t\t\\end{cases}\n \t\\end{equation*}\n\t\n \tAs we did in proofs of Theorems~\\ref{matrix:theorem:col_by_col_opt} and \\ref{matrix:theorem:within_column_opt}, we can use Equations~\\ref{matrix:eqn:exp_c_pi_decoupled} and \\ref{matrix:eqn:exp_c_pi2_decoupled}, and simplify $\\Exp[C(\\pi)] - \\Exp[C(\\pi')]$ as follows.\n\t\\small\n \t\\begin{equation} \\label{matrix:eqn:main_result_diff}\n \t\t\\Exp[C(\\pi)] - \\Exp[C(\\pi')] =  \\left(\\frac{\\Exp[N(\\pi, c^*)]}{s_{c^*}}  - \\frac{\\Exp[N(\\pi, c^*+1)]}{s_{c^*+1}} \\right)  \\cdot \\left( \\prod_{j=1}^{c^*-1} (1 - s_j)\\right) s_{c^*} s_{c^*+1}\n \t\\end{equation}\t\n\t\\normalsize\n \tThe quantity in Equation~\\ref{matrix:eqn:main_result_diff} is positive if the difference of the weighted expected values (in the first parentheses) are positive. 
Using Equation~\\ref{matrix:eqn:exp_cost_column_by_condition} we obtain the following inequality.\n \t\\begin{equation*}\n \t\t\\frac{\\Exp[N(\\pi, c^*)]}{s_{c^*}}  > \\frac{\\Exp[N(\\pi, c^*+1)]}{s_{c^*+1}} \n \t\t\\Leftrightarrow \n \t\t\\frac{\\mu_{c^*} (1 - s_{c^*})}{s_{c^*}} > \\frac{\\mu_{c^*+1}(1 - s_{c^*+1})}{s_{c^*+1}}\n \t\\end{equation*}\n \tBy definition of $c^*$, the second inequality above holds, which implies the first. Hence, an optimal inspection policy must inspect the columns in non-decreasing order of $\\mu_c(1 - s_c)/s_c$. \n \\end{proof}\n Because there is a unique ordering of columns if we sort them by $\\mu_c(1-s_c)/s_c$ (up to ties), Theorem~\\ref{matrix:thm:main_result} leads to the following algorithm: We inspect the columns in increasing order of $\\mu_c(1-s_c)/s_c$, and in each column, we inspect its entries in increasing order of probabilities. \n This algorithm can easily be implemented to run in polynomial time; a concrete sketch is given at the end of this chapter. \n\n\n\\section{Discussion} \\label{matrix:sec:discussion}\nIn this work we defined the Probabilistic Matrix Inspection problem which generalizes the Batched Doodle Problem.\nWe first considered two special cases, and discovered interesting properties of an optimal inspection policy which agree with our intuition. We then generalized our findings to design an efficient algorithm to solve the general case, and along the way we showed that two natural greedy algorithms fail to find an optimal solution. While we believe that our technical results make a great starting point for studying and optimizing a group scheduling process, there remain several open problems and future work to be done.\n\nAs we discussed in the previous chapter, our model and algorithm rely on the assumption that probability estimates on availability of agents are available. We suggested several ideas motivated by previous work in the literature, but it will be important to deploy such ideas into a system, and integrate them with our algorithm. From a theoretical perspective, there remain several open problems. While we assumed that an inspection can be performed on a single entry at unit cost, one can generalize the cost model by allowing an inspection of any subset of entries whose cost depends on, for example, the number of entries being inspected. In the context of group scheduling, an inspection on many entries means querying multiple agents at the same time for one or many outcomes (but an inspection is not limited to the entries from the same column or row). This generalization is particularly useful when scheduling takes place in a hierarchical setting such as corporations or universities, as in the example of university oral exams at Stanford University.\n\nLastly, the Probabilistic Matrix Inspection problem has many applications other than group scheduling. \nFor instance, suppose that some research lab is hiring a new researcher among many candidates.\nAfter reading each candidate's curriculum vitae and research statement,\neach member of the lab can estimate the likelihood of him/her approving the candidate, yet it is necessary to interview a candidate to make a decision.\nIn this setting, the problem of finding a candidate who is approved by every member of the lab is equivalent to the problem of finding a feasible date/time option. 
Clearly, assuming that the lab is happy to hire any of the candidates whom all of the lab members approve, it is rational to minimize the expected number of in-person interviews until someone is hired or every candidate is rejected.\nAnother example is searching for housing or a childcare facility.\nEach house or facility is a candidate, and there is a set of requirements for the candidate being considered. Making inquiries about and taking a tour of the property or facility involves much effort -- even though much information is available through brochures or booklets, people in this situation would wish to minimize the time and effort they need to spend until they declare a winner.\n\n\n\\section*{Publications}\nSome of the contents of this chapter were published and included in the proceedings of the Twenty-fifth International Joint Conference on Artificial Intelligence (IJCAI)~\\cite{lee2016pmip}.\n", "meta": {"hexsha": "f55c776fd2cac7b3529f02e01c861ca2aa798428", "size": 35192, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main/300_Matrix.tex", "max_stars_repo_name": "ltdtl/thesis", "max_stars_repo_head_hexsha": "b1585aa3e57e06b4368fb51540bbbf1c64c491df", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-09-18T17:20:45.000Z", "max_stars_repo_stars_event_max_datetime": "2016-09-18T17:20:45.000Z", "max_issues_repo_path": "main/300_Matrix.tex", "max_issues_repo_name": "ltdtl/thesis", "max_issues_repo_head_hexsha": "b1585aa3e57e06b4368fb51540bbbf1c64c491df", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-01-29T07:21:01.000Z", "max_issues_repo_issues_event_max_datetime": "2017-01-29T07:21:01.000Z", "max_forks_repo_path": "main/300_Matrix.tex", "max_forks_repo_name": "ltdtl/thesis", "max_forks_repo_head_hexsha": "b1585aa3e57e06b4368fb51540bbbf1c64c491df", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 98.8539325843, "max_line_length": 1057, "alphanum_fraction": 0.7203057513, "num_tokens": 9983, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.5740207741135492}}
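To make the final algorithm concrete, the following is a minimal Python sketch of the ordering from Theorem~\\ref{matrix:thm:main_result}. It is our own illustration rather than code from the chapter: the helper name \\texttt{optimal\\_policy} is hypothetical, and it assumes the model above (mutually independent entries and $p_{r,c} \\in (0,1)$).
\\begin{verbatim}
import numpy as np

def optimal_policy(P):
    """Optimal inspection order for an n-by-m probability matrix P.

    Within each column, inspect entries in non-decreasing order of p;
    inspect columns in non-decreasing order of mu_c * (1 - s_c) / s_c.
    """
    n, m = P.shape
    keys, orders = [], []
    for c in range(m):
        rows = np.argsort(P[:, c])   # within-column order: smallest p first
        p = P[rows, c]
        s = float(np.prod(p))        # probability that column c is feasible
        # E[#inspections | column inspected]: stop after j+1 entries when the
        # first j succeed and entry j+1 fails; inspect all n when all succeed.
        prefix = np.concatenate(([1.0], np.cumprod(p[:-1])))
        e = sum((j + 1) * prefix[j] * (1 - p[j]) for j in range(n - 1))
        e += n * prefix[n - 1]
        mu = (e - s * n) / (1 - s)   # E[#inspections | inspected, infeasible]
        keys.append(mu * (1 - s) / s)
        orders.append(rows)
    return [(r, c) for c in np.argsort(keys) for r in orders[c]]
\\end{verbatim}
On the 2-by-2 counterexample $P_A$ above, the column keys come out to roughly $5.27$ and $4.68$, so the sketch inspects column 2 first, recovering the optimal expected cost of $2.407$.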
{"text": "\\documentclass[paper=a4,11pt,headsepline]{scrartcl}\n\n\\usepackage[margin=2cm]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{amssymb}\n\\usepackage{bm}\n\\usepackage{minted}\n\\usepackage{booktabs}\n\\usepackage{url}\n\\usepackage[\n    colorlinks=true,\n    linkcolor=black,\n    citecolor=black,\n    urlcolor=black,\n    hyperfootnotes=true,\n]{hyperref}\n\\usepackage{cancel}\n\\usepackage{xspace}\n\\usepackage{cleveref}\n\\usepackage{environ}\n\\usepackage[\n    backend=biber,\n    maxbibnames=2,\n    giveninits=true,\n    url=true,\n    isbn=false,\n    sorting=none,\n    date=year,\n]{biblatex}\n\n\\addbibresource{lit.bib}\n\n\\include{nc}\n\n\\newcommand{\\ipmpy}[1]{\\inputminted[xleftmargin=0.9cm]{python}{#1}}\n\n% https://tex.stackexchange.com/a/5652\n\\NewEnviron{splitequation}{%\n    \\begin{equation}\n        \\begin{split}\n            \\BODY\n        \\end{split}\n    \\end{equation}\n}\n\n\n\n\\begin{document}\n\n\\tableofcontents\n\\newpage\n\n\\section{Introduction}\n\nThis text is a short introduction to Automatic Differentiation (AD). The main\nreferences are \\cite{baydin_2018, johnson_2017, jax_autodiff_cookbook}. We\nfocus (for now) on basics linear computation graphs to introduce forward and\nreverse mode, as well as differences between \\jax and \\pytorch (and a bit of\nthe older \\autograd, the predecessor to \\jax and one of the bases for\n\\pytorch's AD system \\cite{paszke_2017}).\n\n\\section{Methods to evaluate derivatives}\n\nThis overview is inspired mostly by \\cite{baydin_2018}.\n\n\\begin{itemize}\n    \\item Work out analytic derivatives by hand (or really Mathematica, Maple, Maxima, ...)\n        and code them. Can be an advantage when the expression you want to\n        differentiate never changes, e.g. derivatives of classical force\n        fields in molecular simulations. Code of derivatives can be\n        hand-optimized for speed.\n    \\item Call symbolic engines (e.g. \\soft{sympy}) in your code to get\n        analytic derivatives.\n    \\item Numerical derivatives by finite differences (FD). Use at least\n        the central differences method. Good for checking AD or manually coded\n        derivatives. Scales as $\\mathcal O(n)$ for $\\nabla f(\\ve x), \\ve\n        x\\in\\mathbb R^n$.\n    \\item AD\n        \\begin{itemize}\n            \\item Directly provides numerical values for derivatives at some point\n                $\\ve x$ \\emph{with machine precision}, \\emph{not} symbolic expressions of\n                derivatives\n            \\item Uses operator overloading (or other methods) to carry out\n                derivatives (\\jax, \\pytorch: define JVPs/VJPs, see below)\n            \\item Can be applied to almost any code using arbitrary control flows\n            \\item In reverse mode: uses code tracing (\"taping\") by one forward eval\n                to construct computation graph\n            \\item No \"code swell\": complex analytic expressions that grow which each\n                application of $\\partial/\\partial x_i$ in symbolic engines and that\n                need an additional simplification step\n        \\end{itemize}\n\\end{itemize}\n\n\\section{Forward and reverse AD}\n\n\\subsection{Setting the stage}\n\nFirst consider a scalar function of scalar inputs $f:\\mathbb R\\ra\\mathbb R$\n\\begin{equation}\n    f(x) = c(b(a(x)))\n\\end{equation}\nwhich implies the computation graph $x\\ra a\\ra b\\ra c = f$. Forward = $x \\ra c$ (inputs\nto outputs). 
Reverse = $x \\la c$ (outputs to inputs).\nThe derivative using the chain rule is given by\n\\begin{splitequation}\n    \\td{f}{x}\n        &= \\td{c}{b}\\,\\td{b}{a}\\,\\td{a}{x}\\\\\n        &= \\td{c}{b}\\,\\left(\\td{b}{a}\\,\\td{a}{x}\\right)&&\\quad\\text{forward}\\\\\n        &= \\left(\\td{c}{b}\\,\\td{b}{a}\\right)\\,\\td{a}{x}&&\\quad\\text{reverse}\n\\end{splitequation}\n\nNow we consider the general case of a multivariate vector-valued function $\\ve\nf: \\mathbb R^n \\ra \\mathbb R^m$. The Jacobian is the matrix of all first\npartial derivatives and is defined as\n%\n\\begin{equation}\n    \\ve f: \\mathbb R^n \\ra \\mathbb R^m,\\quad \\ma J\n    = \\pd{\\ve f}{\\ve x} =\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} & \\cdots & \\pd{f_1}{x_n}  \\\\\n        \\vdots        & \\ddots & \\vdots         \\\\\n        \\pd{f_m}{x_1} & \\cdots & \\pd{f_m}{x_n}\n    \\end{bmatrix}\n    \\in\\mathbb R^{m\\times n}\n    \\eqdot\n\\end{equation}\nOne edge case is\n\\begin{equation}\n    \\ve f: \\mathbb R^1 \\ra \\mathbb R^m\\quad \\ma J\n    = \\pd{\\ve f}{x} =\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1}  \\\\\n        \\vdots         \\\\\n        \\pd{f_m}{x_1}\n    \\end{bmatrix}\n    = \\ma J[:,1]\\in\\mathbb R^{m\\times 1}\n\\end{equation}\nwhere we need only the first column $\\ma J[:,1]$. In the other edge case\n\\begin{equation}\n    f: \\mathbb R^n \\ra \\mathbb R^1\\quad \\ma J\n    = \\pd{f}{\\ve x} =\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} & \\cdots & \\pd{f_1}{x_n}\n    \\end{bmatrix} \\equiv \\nabla f\n    = \\ma J[1,:] \\in\\mathbb R^{1\\times n}\n\\end{equation}\nwe need the first row $\\ma J[1,:]$ only. This is also the definition of the\n\\emph{gradient} $\\nabla f(\\ve x)\\in\\mathbb R^n,\\: f: \\mathbb R^n\\ra\\mathbb R$.\n\nNote that in the ML literature all partial derivatives including those in\nJacobians and Hessians are often called \"gradients\", which might confuse you if\nyou have a physics background :)\n\n\\subsection{Vectorized functions}\n\nVectorized functions (e.g. \\verb|sin()| in \\numpy, \\jax, \\pytorch) are a\nbit of a special case since, depending on input, they are either $f(x):\\mathbb\nR\\ra\\mathbb R$ when given a scalar or $\\ve f(\\ve x):\\mathbb R^n\\ra\\mathbb R^n$\nwhen given a vector, but never $f(\\ve x): \\mathbb R^n \\ra \\mathbb R$ (i.e. the\ngradient $\\nabla f(\\ve x)$ is not defined for them). Importantly we have\n$f_i\\equiv f$: each $f_i$ is a function of only the $x_i$ it is applied to.\nTherefore, the Jacobian is always diagonal, which is a very neat property that\nwe'll use later.\n\n\\subsection{Forward}\n\nAgain we have\n\\begin{equation}\n    \\ve f(\\ve x) = \\ve c(\\ve b(\\ve a(\\ve x)))\n\\end{equation}\nand can use the chain rule\n\\begin{equation}\n    \\ma J=\\pd{\\ve f}{\\ve x} = \\pd{\\ve c}{\\ve b}\\,\\pd{\\ve b}{\\ve a}\\,\\pd{\\ve a}{\\ve x}\n\\end{equation}\nwhich implies a series of matrix multiplications of individual Jacobians $\\pdi{\\ve\na}{\\ve x}$, $\\pdi{\\ve b}{\\ve a}$, $\\pdi{\\ve c}{\\ve b}$.\nWe can now right-multiply $\\ma J$ with the identity\nmatrix $\\pdi{\\ve x}{\\ve x}$\n\\begin{equation}\n    \\pd{\\ve f}{\\ve x}\\,\\pd{\\ve x}{\\ve x} =\n    \\pd{\\ve c}{\\ve b}\\,\\pd{\\ve b}{\\ve a}\\,\\pd{\\ve a}{\\ve x}\\,\\pd{\\ve x}{\\ve x}\n\\end{equation}\nwhich doesn't change anything. 
We can also think of this as extracting the\n$j$-th column $\\ma J[:,j]$ by multiplying from the right with one column of the\nidentity $\\partial\\ve x/\\partial x_j = \\ve e_j = \\tp{[0,0,\\cdots,1,\\cdots, 0]}$.\nThis will thus select the $x_j$ w.r.t. which we want to calculate the\nderivative. Analytically, for an example $2\\times 2$ system and the choice\n$x_j = x_1$, this means\n\\begin{equation}\n    \\pd{\\ve f}{\\ve x}\\,\\pd{\\ve x}{x_1}=\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} & \\pd{f_1}{x_2} \\\\\n        \\pd{f_2}{x_1} & \\pd{f_2}{x_2} \\\\\n    \\end{bmatrix}\n    \\begin{bmatrix}\n        \\pd{x_1}{x_1} \\\\\n        \\pd{x_2}{x_1} \\\\\n    \\end{bmatrix}\n    =\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1}\\,\\pd{x_1}{x_1} + \\cancelto{0}{\\pd{f_1}{x_2}\\,\\pd{x_2}{x_1}} \\\\\n        \\pd{f_2}{x_1}\\,\\pd{x_1}{x_1} + \\cancelto{0}{\\pd{f_2}{x_2}\\,\\pd{x_2}{x_1}} \\\\\n    \\end{bmatrix}\n    =\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} \\\\\n        \\pd{f_2}{x_1} \\\\\n    \\end{bmatrix}\n\\end{equation}\n%\nif we note that $\\pdi{x_2}{x_1}=0$ and that formally by the chain rule\n%\n\\begin{gather}\n    \\pd{f_1}{x_1}\\,\\pd{x_1}{x_1} = \\pd{f_1}{x_1} \\\\\n    \\pd{f_2}{x_1}\\,\\pd{x_1}{x_1} = \\pd{f_2}{x_1}\n    \\eqdot\n\\end{gather}\n%\nNow, of course we don't want to build up Jacobians. The trick is to evaluate\nthe expression by using pre-defined derivatives of \\emph{primitive}\nfunctions (here $\\ve a(\\cdot)$, $\\ve b(\\cdot)$, $\\ve c(\\cdot)$)\\footnote{Any\nbasic (\\numpy) function like \\co{sin}, \\co{sum} or \\co{sqrt} that we can't or\nwon't represent in terms of even more basic functions.} and by repeated\napplication of a \\emph{\\red{Jacobian} \\blue{vector} product (JVP)}. We\ninitialize with $\\ve e_j$ and apply JVPs as we go.\n%\n\\begin{splitequation}\n    \\pd{\\ve f}{\\ve x}\\,\\blue{\\pd{\\ve x}{x_j}} = \\pd{\\ve f}{\\ve x}\\,\\blue{\\ve e_j}\n            &= \\pd{\\ve c}{\\ve b}\\,\\pd{\\ve b}{\\ve a}\\,\\red{\\pd{\\ve a}{\\ve x}}\\,\\blue{\\ve e_j} \\\\\n            &= \\pd{\\ve c}{\\ve b}\\,\\red{\\pd{\\ve b}{\\ve a}}\\,\\blue{\\ve v_j} \\\\\n            &= \\red{\\pd{\\ve c}{\\ve b}}\\,\\blue{\\ve v_j'} \\\\\n            &= \\ma J[:,j]\n\\end{splitequation}\n%\nNow what is a JVP? We see that in order to evaluate the chain rule, each\nprimitive function $\\ve z(\\cdot) = \\ve a(\\cdot), \\ve b(\\cdot), \\ve c(\\cdot)$\nneeds to know ($i$) how it would evaluate its own derivative (Jacobian\n$\\pdi{\\ve z}{\\ve{\\tau}}$, e.g. $\\pdi{\\ve b}{\\ve a}$) w.r.t. its inputs $\\ve{\\tau}$ and ($ii$)\nhow to right-multiply that with a given vector $\\ve v$. By doing ($ii$), we can\nalways initialize the process with $\\ve e_j$ and then multiply forward (recall\nthat forward means $x\\ra c$, i.e. from inputs to outputs). Now how do we\ndo $(i)$? The answer is that we don't, in the sense that we don't need to\ninstantiate Jacobians. Instead, we \\emph{augment} each primitive function $\\ve\nz(\\cdot)$ with a JVP such that we map the so-called\n\\emph{primal}-\\emph{tangent} pair $(\\ve x, \\ve v)$ to the function at $\\ve x$\nand its JVP with $\\ve v$.\n%\n\\begin{equation}\n    (\\ve x, \\ve v) \\mapsto \\left(\\ve z(\\ve x), \\red{\\pd{\\ve z}{\\ve\\tau}}\\,\\blue{\\ve v}\\right)\n\\end{equation}\n%\nEach primitive function has an associated JVP function defined (\\jax: by using\n\\co{jax.defjvp}\\footnote{The name of this and details of its usage may vary\nacross \\jax versions.}) which gets called when evaluating the chain rule. 
At\neach step through the chain rule along the comp graph, we evaluate the\nfunction's value $\\ve z(\\ve x)$ \\emph{and} its JVP $\\red{\\pdi{\\ve\nz}{\\ve{\\tau}}}\\,\\blue{\\ve v}$ together in lock step. Now, the most\nimportant trick is that the intermediate Jacobians are never explicitly built.\nThe JVP function only has to return the \\emph{result} of $\\pdi{\\ve\nz}{\\ve{\\tau}}\\,\\ve v$. In particular, for \\numpy's vectorized functions, all\nJacobians are diagonal, which makes implementing $\\pdi{\\ve z}{\\ve{\\tau}}\\,\\ve\nv$ simple and efficient. In most cases, we get away with element-wise\noperations on $\\ve v$.\n\nSide note: in the AD literature, applying JVPs is also called\n\"push-forward\" of the vector (here $\\ve e_j$) from inputs to outputs ($\\ve x\\ra\\ve c$).\n\nWhen we repeat the process and apply all possible $\\ve e_j$ (i.e. the whole\nidentity matrix), we can build up the full $\\ma J$ if we need to, one column at\na time. For the edge case $\\ve f(x): \\mathbb R^1 \\ra \\mathbb R^m$ ($n=1$, first column)\nwe are finished in one forward pass. In general, forward mode is efficient for\n\"tall\" Jacobians where $m\\gg n$ and inefficient for \"wide\" ones where $m\\ll n$.\nThe other edge case $f(\\ve x): \\mathbb R^n \\ra \\mathbb R^1$ ($m=1$, first row a.k.a.\n$\\nabla f(\\ve x)$) is inefficient in forward mode since we need $n$ passes to\ncalculate each $\\pdi{f}{x_j}$. This thus has a complexity similar to that of finite\ndifferences, where we also repeat $n$ times something like $(f(\\ve x +\\ve e_j\\,h) - f(\\ve\nx-\\ve e_j\\,h))/(2\\,h)$.\n\n\\subsection{Reverse}\n\nFirst, we do a forward pass to trace the execution done in $\\ve f$ to build up\nthe graph $\\ve x\\ra \\ve a\\ra \\ve b\\ra \\ve c = \\ve f$. Then we can extract the\n$i$-th row $\\ma J[i,:]$ by multiplying from the left with $\\pdi{\\ve f}{f_i} =\n\\ve e_i$. Analytically, we have\n\\begin{splitequation}\n    \\left(\\pd{\\ve f}{f_1}\\right)^\\top\\,\\pd{\\ve f}{\\ve x}\n    &=\n    \\begin{bmatrix}\n        \\pd{f_1}{f_1} & \\pd{f_2}{f_1} \\\\\\\\\n    \\end{bmatrix}\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} & \\pd{f_1}{x_2} \\\\\\\\\n        \\pd{f_2}{x_1} & \\pd{f_2}{x_2} \\\\\\\\\n    \\end{bmatrix}\n    \\\\\\\\\n    &=\n    \\begin{bmatrix}\n        \\pd{f_1}{f_1}\\,\\pd{f_1}{x_1} + \\cancelto{0}{\\pd{f_2}{f_1}\\,\\pd{f_2}{x_1}} &\n        \\pd{f_1}{f_1}\\,\\pd{f_1}{x_2} + \\cancelto{0}{\\pd{f_2}{f_1}\\,\\pd{f_2}{x_2}}\n    \\end{bmatrix}\n    \\\\\\\\\n    &=\n    \\begin{bmatrix}\n        \\pd{f_1}{x_1} & \\pd{f_1}{x_2}\n    \\end{bmatrix}\n\\end{splitequation}\nwith $\\pdi{f_2}{f_1}=0$.\nIn the code, we initialize with $\\ve e_i$ and this time apply\n\\emph{\\blue{vector} \\red{Jacobian} products (VJPs)} as we go.\n\\begin{splitequation}\n    \\blue{\\left(\\pd{\\ve f}{f_i}\\right)^\\top}\\,\\pd{\\ve f}{\\ve x} = \\blue{\\ve e_i^\\top}\\,\\pd{\\ve f}{\\ve x}\n        &= \\blue{\\ve e_i^\\top}\\,\\red{\\pd{\\ve c}{\\ve b}}\\,\\pd{\\ve b}{\\ve a}\\,\\pd{\\ve a}{\\ve x}\\\\\\\\\n        &= \\blue{\\ve v_i^\\top}\\,\\red{\\pd{\\ve b}{\\ve a}}\\,\\pd{\\ve a}{\\ve x}\\\\\\\\\n        &= \\blue{\\ve v_i'^\\top}\\,\\red{\\pd{\\ve a}{\\ve x}}\\\\\\\\\n        &= \\ma J[i,:]\n\\end{splitequation}\nAgain, we can apply all possible $\\ve e_i$ to build up the full $\\ma J$, this\ntime one row at a time. 
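\nThe same row extraction can be written as a minimal sketch with the public \\co{jax.vjp}\nAPI (again using our toy function):\n%\n\\begin{minted}{python}\n    import jax\n    from jax import numpy as jnp\n\n    f = lambda x: jnp.power(jnp.sin(x), 2)\n    x = jnp.array([0.1, 0.2, 0.3])\n    y, f_vjp = jax.vjp(f, x)  # forward pass, records the trace\n    e_0 = jnp.array([1.0, 0.0, 0.0])  # cotangent seed = first identity row\n    (jrow,) = f_vjp(e_0)  # J[0,:] via one backward pass\n\\end{minted}\n%\n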
For the edge case $f(\\ve x): \\mathbb R^n \\ra \\mathbb\nR^1$ ($m=1$, first row) we are done in one backward pass.\n\nSide note: in the AD literature, applying VJPs is also called \"pull-back\" of\nthe vector (here $\\ve e_i$) from outputs to inputs ($\\ve x\\la\\ve c$).\n\n\\subsection{Reverse mode in neural network training a.k.a. backprop}\n\nNN training loss function: a multivariate function $f(\\ve x): \\mathbb R^n\\ra \\mathbb R$\n\\begin{equation}\n    f(\\ve x) = c(\\ve b(\\ve a(\\ve x)))\n\\end{equation}\nfor instance\n\\begin{minted}{python}\n    f = lambda x: c(b(a(x)))\n    f = lambda x: sum(power(sin(x),2))\n\\end{minted}\ni.e. the composition of a number of vector-valued functions\\footnote{The NN\ntraining loss's computation graph is of course not short and linear as in our\nexample.} $\\ve a(\\cdot), \\ve b(\\cdot)$ and a final reduction $c(\\cdot): \\mathbb\nR^n\\ra \\mathbb R$ and where $n$ is \"large\" (as in $10^6$) and $\\ve x$ are the\nnetwork's parameters. This is exactly the second edge case from above. When\nusing reverse mode, we get the gradient $\\pdi{f}{\\ve x} = \\ma J[1,:]$ in a\nsingle reverse pass (i.e. backprop).\n\n\\section{Comparison of implementations}\n\n\\subsection{\\jax, \\autograd}\n\nBoth are designed to be functional, like \\numpy itself. \\co{grad()} returns a\nfunction object that evaluates $\\nabla f(\\ve x)$.\n%\n\\ipmpy{../talk/code/jax_ad_teaser_grad_usage.py}\n%\n\\jax returns \\co{DeviceArray} objects because it uses \\tf's XLA compiler.\n\\autograd returns plain \\numpy arrays.\n\n\\subsection{\\pytorch}\n%\n\\pytorch is not functional\\footnote{However, there is the experimental\n\\co{torch.autograd.functional} (checked: v1.8.1).}; it thinks in terms of\n\\co{torch.Tensor} objects, so when using our little example from above, now\nwritten in \\pytorch syntax\n%\n\\ipmpy{../talk/code/pytorch_fwd_rev_1.py}\n%\nwe use the \"loss tensor\" \\co{c} of shape \\co{(1,)} instead of the loss function\n\\co{f}. With knowledge of reverse mode, we can make sense of \\pytorch's API\ndesign. Below, we unpack the calculation step by step. Observe how each tensor\nhas a \\verb|grad_fn| attribute which corresponds to defined VJPs. In \\pytorch\nthe tracing of forward ops is implemented such that tensors record the\noperations applied to them.\n%\n\\ipmpy{../talk/code/pytorch_fwd_rev_2.py}\n%\nThen, we call the \\co{backward()} method of the final output tensor, here\n\\co{c}.\n%\n\\ipmpy{../talk/code/pytorch_fwd_rev_3.py}\n%\nThis \"pulls back\" the derivatives (starting with $\\pdi{f}{f}=1$) from the\noutput \\co{c} to the input and the derivatives are stored in $\\pdi{c}{\\ve x} =\n\\co{x.grad}$.\\footnote{For a simple linear graph that feels super weird,\npull-back or not. \\co{c.grad} would be much more intuitive. For a multi-leaf\nDAG, where leaves = input tensors, it makes a bit more sense.}\n\nWe can also perform VJPs by calling \\co{backward()} of an intermediate\nvector-valued variable by giving it a vector argument ($\\ve\ne$ or $\\ve v$ above). In particular, we can build up the intermediate\nJacobians. 
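\nSchematically, with made-up tensors and only the standard \\co{backward()} API, this\nlooks as follows:\n%\n\\begin{minted}{python}\n    import torch\n\n    a = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)\n    b = torch.sin(a)  # intermediate vector-valued tensor\n    e_0 = torch.tensor([1.0, 0.0, 0.0])  # seed = first identity row\n    b.backward(e_0)  # VJP: a.grad now holds (db/da)[0,:]\n\\end{minted}\n%\n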
Here, we extract the first row of $\\pdi{\\ve b}{\\ve a}$.\n%\n\\ipmpy{../talk/code/pytorch_rev_detail_2.py}\n%\nAnd finally, the default for \"seeding\" the backward pass starting at \\co{c}\nis indeed \\co{1.0} as in $\\pdi{f}{f}=1$.\n%\n\\ipmpy{../talk/code/pytorch_rev_detail_1.py}\n\n\\subsection{Implementations of JVPs (forward) and VJPs (reverse)}\n\n\\subsubsection{\\autograd}\n\nThey define a VJP for \"each\" numpy function in\n\\url{https://github.com/HIPS/autograd/blob/master/autograd/numpy/numpy_vjps.py}\nusing \\co{defvjp}, also some JVPs in\n\\url{https://github.com/HIPS/autograd/blob/master/autograd/numpy/numpy_jvps.py}\nbut \\autograd is mostly only reverse mode. The \\numpy API definitions are in\n\\url{https://github.com/HIPS/autograd/blob/master/autograd/numpy/numpy_wrapper.py}.\n\n\\subsubsection{\\jax}\n\n\\numpy primitives are defined in\n\\url{https://github.com/google/jax/blob/master/jax/_src/lax/lax.py} with a lot\nof \\co{defjvp}, one for each primitive, i.e. only forward mode.\n\n\\ipmpy{../talk/code/jax_lax_sin.py}\n\nThey don't define explicit VJPs (for reverse mode); instead they use the\nforward trace, which has to be done anyway, and then run that backwards,\ntransposing the JVP operations (note that $\\tp{\\ve e_i}\\,\\ma J =\n\\tp{\\left(\\tp{\\ma J}\\,\\ve e_i\\right)}$). From the docs: \"\\emph{... when\ncomputing reverse differentiation JAX obtains a trace of primitives that\ncompute the tangent using forward differentiation. Then, JAX interprets this\ntrace abstractly backwards and for each primitive it applies a transposition\nrule.}\".\n\nThe \\numpy API definition is in\n\\url{https://github.com/google/jax/blob/master/jax/_src/numpy}.\n\n\\subsubsection{\\pytorch}\n\n\\pytorch's AD system is designed with focus on the use case $f: \\mathbb\nR^n\\ra\\mathbb R$, i.e. the loss function during NN training, where $n$ is\nlarge, which is why it uses VJPs (as in \\autograd). VJP definitions are not\ndone at the Python level as in \\jax, using a \\co{defvjp} machinery, but instead\nin a \\co{yaml} file\n\\url{https://github.com/pytorch/pytorch/blob/master/tools/autograd/derivatives.yaml}\nwith Python-ish pseudo code, which defines the VJP for each \\co{torch}\nprimitive function. Then scripts are used to generate C++ code for each\nfunction's VJP and Python bindings (e.g. in \\co{libtorch.so}).\n\n\\subsection{Tracing and compilation}\n\n\\subsubsection{\\jax}\n\n\\jax uses an intermediate representation dubbed \\co{jaxpr} which is the result\nof tracing a function. Also, every other function, e.g. \\co{grad(f)}, is\nrepresented in \\co{jaxpr}.\n\n\\begin{minted}{python}\n    >>> import jax; from jax import numpy as jnp\n    >>> f=lambda x: jnp.sum(jnp.power(jnp.sin(x),2.0))\n    >>> jax.make_jaxpr(f)(1.0)\n    { lambda  ; a.\n      let b = sin a\n          c = pow b 2.0\n          d = reduce_sum[ axes=() ] c\n      in (d,) }\n\n    >>> jax.make_jaxpr(jax.grad(f))(1.0)\n    { lambda  ; a.\n      let b = sin a\n          c = pow b 1.0\n          d = mul 2.0 c\n          e = mul 1.0 d\n          f = cos a\n          g = mul e f\n      in (g,) }\n\\end{minted}\n\nIt uses \\tf's XLA compiler, which uses LLVM, to compile for CPU and GPU targets.\n\n\\subsubsection{\\pytorch}\n\n\\pytorch's \\co{jit} creates TorchScript, which is a serialized version of\nthe graph. 
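\nAs a minimal sketch (using the public \\co{torch.jit.trace} and \\co{torch.jit.save}\nfunctions; the function and file name are our own):\n%\n\\begin{minted}{python}\n    import torch\n\n    def f(x):\n        return torch.sum(torch.pow(torch.sin(x), 2))\n\n    traced = torch.jit.trace(f, torch.ones(3))  # record ops as TorchScript\n    torch.jit.save(traced, \"f.pt\")  # serialize the traced graph\n\\end{minted}\n%\n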
The serialized graph can be exported to disk, loaded into C++ using the libtorch C++\nAPI and executed as a C++ program.\n\n\\pytorch can also run on XLA devices like TPUs using\n\\url{https://github.com/pytorch/xla}.\n\n\\section{No Free Lunch: Limitations of AD}\n\\begin{itemize}\n    \\item \\tf \\co{1.x}: Manually construct the computation graph using a framework\n        mini-language (e.g. \\co{tf.while}, \\co{tf.cond}). \\pytorch, \\jax,\n        \\tf\\co{2.x}: eager execution / build the graph dynamically\n    \\item In-place ops such as \\co{A *= 3} instead of\n        \\co{B = A*3} are hard to support and can slow things down because large\n        parts of the comp graph must be rewritten instead of just adding a new\n        variable. In \\autograd, \\jax and \\pytorch, usage of in-place mods is\n        discouraged and will often just raise an exception. \\pytorch has an\n        elaborate tensor versioning system that will cause backward grad\n        calculations to fail when in-place ops would break it\n        (\\url{https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd}).\n        \\jax and \\pytorch have a \\verb|jit|, which could in principle optimize\n        away out-of-place modifications and copies; whether they actually do is\n        unclear.\n    \\item Watch out when ADing through fast but approximate\n        implementations: \"Approximate the derivative, not differentiate the\n        approximation.\" (see \\verb|examples/text_jax.py|)\n    \\item For end-to-end AD, all parts of a pipeline must be implemented using\n        the same AD-aware library. Else one needs to cut the chain rule open\n        and hand over derivatives at the interface between codes.\n\\end{itemize}\n\n\\newpage\n\\nocite{*}\n\\printbibliography\n\n\\end{document}\n
\\section{Introduction}\n\nPhenology, the timing of seasonal biological phenomena, is a key aspect of plant and animal life.\nIt defines the timing and duration of growth and reproduction and thereby determines the ability to capture seasonally variable resources \\citep{chuine2017process}.\nPhenological analyses often focus on the timing of particular events, such as the dates of peak caterpillar abundance \\citep{shutt2019spatial}.\nHowever, for many biological phenomena exact dates of particular events are more difficult to observe than the state of the system itself.\nFor example, repeated but sparse survey visits may record whether a plant is in bud, flowering, or setting fruit, but not the exact dates when each of those stages was reached.\nSuch observations can be used to categorize an organism's state into discrete classes which usually follow a natural ordering, e.g.\\ from least to most developed. \nThe resulting data can be described using ordinal regression models \\citep{mccullagh1980regression,agresti2010analysis} which then allow inferences about phenological progression.\n\nI here replicate a number of ordinal regression models that were employed by \\citet{dennis1986stochastic} and \\citet{candy1991modeling} to describe insect phenology. \n\n\\section{Data}\n\\label{sec:data}\nThe models replicated in this study are fitted to a data set on the phenology of the western spruce budworm \\emph{Choristoneura freemani} (Lepidoptera: Tortricidae), a defoliating moth that is widespread in western North America \\citep{brookes1987western}.\nThis data set was originally published in \\citep{dennis1986stochastic} and is a subset of a larger budworm survey data set analysed in \\citep{kemp1986stochastic}. \nThe data consist of 12 sampling occasions at which counts of individual budworms in each of seven development stages (five larval instars, pupae, and adults) were recorded. \nThe only available covariate is a measure of seasonal progression, the accumulated degree days calculated using a threshold of 5.5\u00b0C. \n\\citet{candy1991modeling} noted an inconsistency in these data, namely that the reported total number of individuals did not correspond to the sum across the seven development stages for two of the sampling occasions. \nI therefore use the data set as it was republished in \\citep{candy1990biology}, where numbers in each stage have been assumed correct and the totals for each sampling occasion were adjusted accordingly.\n\n\\begin{table}[tbp]\n  \\small\n    \\centering\n    \\caption{The data used in this replication are counts of western spruce budworm \\emph{Choristoneura freemani} across seven developmental stages on 12 sampling occasions, as published in \\citep{candy1990biology}.}\n  \\input{../outputs/budworm_table}\n  \\label{tab:tab0}\n\\end{table}\n\n\\section{Methods}\nThe statistical models replicated here are different types of ordinal regression models~\\citep{agresti2010analysis}, all with the aim of predicting the proportion of an insect population in a particular development stage at any given time. \nIn particular, they represent three different parameterisations of the so-called cumulative model and one version of the so-called sequential model. \nA summary of the theory underlying these models and their derivation is provided in \\citep{burkner2019ordinal}.\n\nBoth model types assume that the development of an insect follows an unobservable stochastic process $S(t)$ made up of accumulated increments of development over time~$t$. 
\nAs $S(t)$ increases, the insect passes through successive stages $j=1,\\dots,r$, delimited by $r-1$ moults, with the $j$th moult occurring when the development threshold $a_j$ is reached:\n\n\\begin{alignat*}{3}\n  \\mathrm{stage}\\,1&:\\qquad {}&   S(t) & \\leq a_1 \\\\\\\\\n  \\mathrm{stage}\\,2&:\\qquad  a_1&< S(t) & \\leq a_2 \\\\\\\\\n  \\vdots & &\\vdots \\\\\\\\\n  \\mathrm{stage}\\,r-1&:\\qquad  a_{r-2}&< S(t) & \\leq a_{r-1} \\\\\\\\\n  \\mathrm{stage}\\,r&:\\qquad  a_{r-1}&< S(t) \n\\end{alignat*}\n\nThe $a_j$ values are typically unknown and their estimation from observed data was the goal of the original studies~\\citep{dennis1986stochastic,candy1991modeling}.\n\n\\subsection{Cumulative model with constant variance}\n\\label{sec:const_var_cm}\nThe ordinal regression model of~\\citep{candy1991modeling} is specified in terms of the cumulative number of individuals $m_{ij}=\\sum_{k=1}^jn_{ik}$ observed in stages 1 to $j$ on a sampling occasion $i$. \n\\begin{alignat}{1}\n\\mathbf{E}(m_{ij})&=N_i\\,Pr(S(t) \\leq \\alpha_j), \\qquad j = 1,\\dots ,r\\\\\\\\\n&=N_iG(\\alpha_j + \\beta z_i)\n\\end{alignat}\nwhere $G$ is the cumulative distribution function of $S(t)$, $\\alpha_j$ are ordered thresholds or cut-point parameters, $\\beta$ is a vector of regression parameters and $z_i$ is a vector of predictor variables.\nIf the probability of an individual being in stage $j$ or earlier at time $t_i$ is \n\n$$\\mu_{ij} = \\mathbf{E}(m_{ij})/N_i$$\n\n one can interpret $G^{-1}$ as the link function of a generalised linear model (GLM) with the linear predictor \n\n$$\\eta_{ij}=\\alpha_j+\\beta z_i$$\n\nThis ordinal regression model is commonly known as the cumulative model~\\citep{burkner2019ordinal}, and is applied to the budworm data in \\citep{candy1991modeling} using the logit and complementary log-log (cloglog) link functions. \nIn both cases the parameterisation results in a constant variance for $S(t)$.\nFor the purpose of parameter estimation \\citet{candy1991modeling} re-expressed the model in terms of stage-specific counts $n_{ij}$ \n\\begin{equation}\n\\mathbf{E}(n_{ij})=N_i\\{G(\\alpha_j + \\beta z_i) - G(\\alpha_{j-1} + \\beta z_i)\\}\n\\label{eq:candy_cm_count_form}\n\\end{equation}\nand fitted it using a Poisson likelihood \\citep{thompson1981composite}. \nNo code or initial values for the likelihood optimisation are provided for this estimation procedure in the original paper. \nI therefore created an R version of the estimation procedure which directly optimizes a Poisson log-likelihood for Equation~\\ref{eq:candy_cm_count_form} using the BFGS method in the \\verb+optim+ function and a parameter scaling value of 0.01 for $\\beta$ relative to the cut-point parameters.\nInitial values for the optimisation were determined using a random search across plausible start values (see Appendix~\\ref{sec:appendix}).\nThe cumulative model is implemented in various R packages, including in the \\verb+vglm+ function in \\verb+VGAM+ \\citep{VGAM} and the \\verb+clm+ function in \\verb+ordinal+ \\citep{ordinal}, and for comparison I also attempted to fit the models using these functions with their default settings.\n\n\\subsection{Cumulative model with proportional variance}\n\\citet{dennis1986stochastic} used a different parameterisation of the ordinal model with a logit link, i.e. 
assuming a logistic distribution for $S(t)$, such that the probability that an insect's development at time $t$ has not exceeded $s$ amounts to \n\\begin{equation}\nPr(S(t) \\leq s) = \\left\\{ 1 + \\exp\\left[-\\left(\\frac{s-t}{\\sqrt{b^2t}}\\right)\\right]\\right\\}^{-1}\n\\label{eqn:cumpropcdf}\n\\end{equation}\nwhere $b^2$ is a positive constant. \nThe cumulative distribution function in Eqn.~\\ref{eqn:cumpropcdf} corresponds to a logistic distribution with a mean of $t$ and a variance which increases in proportion to the mean, as $(\\pi^2/3)b^2t$.\nAt any fixed time $t$ the thresholds $a_j$ segment the probability distribution function into $r$ parts and the area under the curve between $a_{j-1}$ and $a_j$ gives the probability that the insect will be in stage $j$ at time $t$.\n\nThis modelling approach is applied to data consisting of samples~$i$ that record the number of insects $x_{ij}$ in stage $j$ at times $t_1, t_2, \\dots, t_q$. The $x_{ij}$ are assumed to be random samples from a multinomial distribution with corresponding multinomial probabilities~$p_{ij}$\n\n\\begin{align}\np_{ij} & = Pr(a_{j-1} < S(t_i) \\leq a_{j})\\\\\\\\\n& =  \\left\\{ 1 + \\exp\\left[-\\left(\\frac{a_j-t_i}{\\sqrt{b^2t_i}}\\right)\\right]\\right\\}^{-1} - \\left\\{ 1 + \\exp\\left[-\\left(\\frac{a_{j-1}-t_i}{\\sqrt{b^2t_i}}\\right)\\right]\\right\\}^{-1}\\label{eq:dennis_cm}\n\\end{align}\n\nTo fulfill the constraint that $\\sum_{j=1}^r p_{ij}= 1$, it is further assumed that $a_0 = -\\infty$ and $a_r = +\\infty$.\nThe model has $r$ unknown parameters $a_1, \\dots, a_{r-1}$ and $b^2$, which can be found by maximising the corresponding log-likelihood function\n\\begin{equation}\n\\ell = \\log C + \\sum_{j=1}^r \\sum_{i=1}^q x_{ij} \\log p_{ij}\n\\label{eq:dennis_loglik}\n\\end{equation}\nwhere $C$ is a combinatorial constant that is independent of the parameter values.\n\n\\citet{dennis1986stochastic} provided SAS code and initial values to estimate the parameters under this likelihood using an iteratively reweighted non-linear least squares approach based on \\verb+PROC NLIN+. \nThis was updated to run in a contemporary version of SAS (SAS 9.4) and is provided in the article code repository. \nHowever, since SAS is a proprietary software package, I implemented two R versions of the estimation procedure which both directly optimize the log-likelihood (\\ref{eq:dennis_loglik}) using the L-BFGS-B optimisation routine \\citep{byrd1995limited} in the R \\verb+optim+ function. The first implementation is a direct translation of the SAS code which uses a literal implementation of the logistic cumulative distribution function and employs a lower parameter bound of 0 and initial values provided in \\citep{dennis1986stochastic} for the estimation. The second implementation makes use of the R function \\verb+stats::plogis+ to implement the logistic CDF and employs a lower parameter bound of the machine precision (i.e.\\ \\verb+.Machine$double.xmin+) and the same initial values as above. Both routines used a parameter scaling value of 0.01 for $b^2$ relative to the cut-point parameters.\n\n\\citet{candy1991modeling} re-expressed the proportional variance model (Eqn.~\\ref{eq:dennis_cm}) to match the form of Eqn.~\\ref{eq:candy_cm_count_form}, which results in the following re\\-para\\-meter\\-isation: $\\alpha_j = a_j/b$, $\\beta = -1/b$, and $z_i = \\sqrt{t_i}$, and used the Poisson likelihood approach described in section~\\ref{sec:const_var_cm} for parameter estimation. 
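\nFor concreteness, the Poisson likelihood approach maximises, up to an additive constant, the log-likelihood\n\n$$\\ell_P = \\sum_{i}\\sum_{j=1}^{r}\\left\\{n_{ij}\\,\\log \\mathbf{E}(n_{ij}) - \\mathbf{E}(n_{ij})\\right\\}$$\n\nwith $\\mathbf{E}(n_{ij})$ as in Equation~\\ref{eq:candy_cm_count_form}; this is the standard Poisson log-likelihood with the constant $\\log(n_{ij}!)$ terms dropped.\n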
\nA set of example macros for the software package \\verb+GLIM+ \\citep{aitkin1989statistical} is provided in an earlier manuscript by the same author \\citep{candy1990biology}. GLIM is no longer actively developed or distributed, but initial values from the GLIM code were used in the estimation with the BFGS method of R \\verb+optim+ and a parameter scaling value of 0.01 for $\\beta$ relative to the cut-point parameters.\n\n\\subsection{Sequential model}\nA different class of ordinal regression model can be derived by treating the observations as the result of a strictly ordered counting process, i.e.\\ to reach stage $j$, all lower stages $1,\\dots,j-1$ have to be reached first. \nThe general form of this model is known as the sequential model, and rather than assuming a single latent process $S(t)$ as in the cumulative model there is a latent continuous variable $S_j$ for each category $j$~\\citep{burkner2019ordinal}. \nAnalogous to the cumulative model it can be framed as a GLM \n\n\\begin{equation}\nS_j = \\eta + \\epsilon_j\n\\end{equation}\n\nwith a linear predictor $\\eta$ and an error term $\\epsilon_j$ which has mean zero and follows some distribution $G$. \nThis leads to a model of the form \n\n\\begin{equation}\nPr(S = a_j|S \\geq a_j, \\eta) = G(a_j - \\eta)\n\\end{equation}\n\nWhen $G$ is the logistic distribution this model is also known as the continuation ratio model~\\citep{fienberg1980analysis,burkner2019ordinal}. \nConfusingly, there are two common versions of the model in the literature, both using this name: the one outlined above describes the probability of the sequential process \\emph{stopping} at stage $j$, while the other describes the probability of the process \\emph{continuing} beyond stage $j$, i.e.\\ $Pr(S > a_j | S \\geq a_j, \\eta)$ \\citep{burkner2019ordinal,VGAM}. \n\nThe paper replicated here \\citep{candy1991modeling} used the stopping parameterisation.\nIn their notation the expected value for the stage-specific counts $n_{ij}$ is\n\\begin{alignat}{3}\n\\mathbf{E}(n_{ij})&=N_i G(\\beta_{01} + \\beta_{11}t_i), &j&=1 \\label{eq:candy_sm_counts}\\\\\\\\\n&=N^*_{ij} G(\\beta_{0j} + \\beta_{1j}t_i), \\qquad &j&=2,\\dots,r-1 \\nonumber\n\\end{alignat}\nwith \n\\begin{equation}\nN^*_{ij}=\\left(N_i - \\sum_{k=1}^{j-1}n_{ik}\\right) \\label{eq:n_star}\n\\end{equation}\nand conditional probabilities \n\\begin{equation}\np^*_{ij} =  G(\\beta_{0j} + \\beta_{1j}t_i), \\qquad j=1,\\dots,r-1\n\\end{equation}\n\n\\citet{candy1991modeling} used GLM estimation routines in \\verb+GLIM+ to fit the $r-1$ models defined in Eqn.~\\ref{eq:candy_sm_counts}, assuming the $n_{ij}$ are binomially distributed conditional on $N_i$ for stage 1 and conditional on $N^*_{ij}$ for stages $2,\\dots,r-1$. \nNo code is provided for this estimation procedure in the original paper, but the model is straightforward to implement using the \\verb+glm+ function in R with a model formula of the form \n$$\\verb!cbind(count,total - N_star) ~ stage + stage:time - 1!$$\nwhere the variable \\verb+count+ represents the $n_{ij}$, \\verb+N_star+ represents $N_i$ for $j=1$ and $N^*_{ij}$ for all other observations, \\verb+stage+ is a factor variable encoding $j$ and \\verb+time+ are the $t_i$. 
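\nFor completeness, these conditional probabilities imply the unconditional stage probabilities\n\n$$p_{i1} = p^*_{i1}, \\qquad p_{ij} = p^*_{ij}\\prod_{k=1}^{j-1}\\left(1 - p^*_{ik}\\right) \\quad (j=2,\\dots,r-1), \\qquad p_{ir} = \\prod_{k=1}^{r-1}\\left(1 - p^*_{ik}\\right),$$\n\nwhich sum to one over $j=1,\\dots,r$.\n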
The sequential model with stopping parameterisation is also implemented in \\verb+VGAM::vglm+ \\citep{VGAM}, and for comparison I attempted to fit the model using this function with its default settings.\n\n\\section{Results}\n\n\\begin{table}[tbp]\n  \\small\n    \\centering\n    \\caption{Parameter estimates for the cumulative model with constant variance (Eqn.~\\ref{eq:candy_cm_count_form}). \n    This table replicates results presented in the first two rows of Table~2 of \\citep{candy1991modeling}. \n    Note that \\texttt{ordinal::clm} uses a parameterisation $\\alpha_j - \\beta z_i$ for the linear predictor, yielding a parameter estimate for $\\beta$ with the opposite sign to that of the other methods. \n    The cloglog link model failed to fit using \\texttt{VGAM::vglm}.}\n  \\input{../outputs/cm_const_var_table}\n  \\label{tab:tab1}\n\\end{table}\n\n\\begin{table}[tbp]\n  \\small\n    \\centering\n    \\caption{Parameter estimates for the cumulative logit model with proportional variance. \n    This table replicates results presented in the first row of Table~1 of \\citep{kemp1986stochastic} and the last row of Table~2 of \\citep{candy1991modeling}.}\n  \\input{../outputs/dennis_model_table}\n  \\label{tab:tab2}\n\\end{table}\n\n\\subsection{Cumulative model with constant variance}\nParameters were estimated using a direct optimisation of the Poisson likelihood for Eqn.~\\ref{eq:candy_cm_count_form}, as well as with the R functions \\verb+VGAM::vglm+ and \\verb+ordinal::clm+. \nThe cloglog link model failed due to numerical errors when using the \\verb+vglm+ function. \nParameter estimates were close to those of the original study for all three methods employed (Table~\\ref{tab:tab1}). Parameter estimates were generally insensitive to the choice of starting values once appropriate parameter scaling values were provided to the optimisation routine (see Appendix).\n\n\\subsection{Cumulative model with proportional variance}\nThe original SAS code provided in \\citep{dennis1986stochastic} required minimal updates to run in a contemporary version of SAS (SAS 9.4). \nTranslating the model code to R was straightforward once I decided to implement a direct minimisation of the negative log likelihood with \\verb+optim+.\n\nParameter estimates from SAS \\verb+NLIN+ and both R \\verb+optim+ implementations (Table~\\ref{tab:tab2}) were virtually identical, but differed slightly from the parameter estimates presented in \\citep{kemp1986stochastic}, which was assumed to be the original source for these estimates, as none were presented in \\citep{dennis1986stochastic}. \nThe observed differences in estimates were $<1$\\% for the cut-point parameters $\\alpha_j$, but c. 9\\% for $\\beta$.\n\nBased on these three sets of parameter estimates it was also possible to redraw two figures from \\citep{dennis1986stochastic}. \nFigures~\\ref{fig:fig1} and~\\ref{fig:fig2} show that, despite the observed parameter differences, there is an overall good agreement between the original results and the replication.\n\n\\begin{figure}[p]\n  \\centering\n  \\includegraphics[width=\\textwidth]{../figures/fig1_dennis_fig2.pdf}\n  \\caption{Logistic PDF of the cumulative model with proportional variance (Equation~\\ref{eq:dennis_cm}) plotted for fixed values of~$t$. \n  The area under the PDF between $a_{j-1}$ and $a_j$ gives the expected proportion of insects in stage $j$ at time $t$. 
\n  The graph is based on the estimates in Table 1 of \\citep{kemp1986stochastic} (black lines) and the estimates from the replication using the R \\texttt{optim} implementation (red lines). \n  This figure replicates Figure 2 in \\citep{dennis1986stochastic}.}\n  \\label{fig:fig1}\n\\end{figure}\n\n\\begin{figure}[p]\n  \\centering\n  \\includegraphics[width=\\textwidth]{../figures/fig2_dennis_fig3.pdf}\n  \\caption{Proportion of insects expected in stages 1--7 under the cumulative logit model with proportional variance (Equation~\\ref{eq:dennis_cm}) plotted as functions of time $t$.\n   Values of $a_j$ and $b^2$ used in the graph are the estimates given in Table 1 of \\citep{kemp1986stochastic} (black lines) and the estimates from the replication using the R \\texttt{optim} implementation (red lines). \n   This figure replicates Figure 3 in \\citep{dennis1986stochastic}.}\n  \\label{fig:fig2}\n\\end{figure}\n\n\\begin{table}[htb]\n  \\small\n    \\centering\n    \\caption{Parameter estimates for the sequential model with stopping ratios (Equation~\\ref{eq:candy_sm_counts}). \n    This table replicates results presented in Table~3 of \\citep{candy1991modeling}. \n    The cloglog link model failed to fit using \\texttt{VGAM::vglm}.}\n  \\input{../outputs/sm_table}\n  \\label{tab:tab3}\n\\end{table}\n\n\\subsection{Sequential model}\n\\label{sec:sm-results}\nParameters were estimated using the R \\verb+glm+ function and \\verb+VGAM::vglm+. \nThe GLM formulation of the model fitted with R \\verb+glm+ produced a warning that fitted probabilities numerically 0 or 1 occurred. \nLikely for related reasons, the cloglog link model failed with numerical errors when using the \\verb+vglm+ function. \nParameter estimates, where obtained, were identical to those of the original study (Table~\\ref{tab:tab3}) within the precision reported in \\citep{candy1991modeling}.\n\n\\section{Discussion}\nOverall, the results from both \\citep{dennis1986stochastic} and \\citep{candy1991modeling} could be replicated closely.\nThe SAS code provided in \\citep{dennis1986stochastic} required only minimal updates to run in a contemporary version of SAS (SAS 9.4) and produced estimates virtually identical to those of the R re-implementation. \nThese estimates, however, differed slightly from the parameter estimates reported in \\citep{kemp1986stochastic}. \nGiven that the same initial values were used in all implementations, I believe that this disagreement is most likely caused by the inconsistencies in the published data set described in Section~\\ref{sec:data}. \nThe corrections applied to the data by \\citep{candy1991modeling} result in a data set that is internally consistent but likely different from that on which the estimates in \\citep{kemp1986stochastic} are based.\n\nNo code was provided in \\citep{candy1991modeling}. \nHowever, the mathematical and verbal descriptions of the models were detailed enough to re-implement the estimation procedures in R. \nGLIM code of the cumulative model with proportional variance was available from an earlier manuscript~\\citep{candy1990biology}. \nThis allowed me to use the same initial values as the original study for this model. \nInitial values for the model with constant variance had to be guessed.\nThe direct optimisation of the likelihood was reported to be sensitive to the choice of initial values in~\\citep{dennis1986stochastic}. 
\nAdditional simulations (Appendix~\\ref{sec:appendix}) showed that convergence failures did occur for some initial values, particularly in the case of the cloglog-link model.\nHowever, the coefficient of variation for estimates derived from a wide range of initial values was very small ($<0.01$\\%) for both the logit and cloglog models, whenever the model converged successfully (Table~\\ref{tab:tabS2}).\n\nLiteral implementations of both the inverse logit and inverse complementary log-log function can suffer from numerical underflow and/or overflow for probabilities close to zero and one, and this may be the key reason for convergence failures observed in the direct optimisation, as well as in the \\verb+vglm+ function. \nThe case study presented here is based on observations where the population variation in development speed (i.e.\\ the transition from class to class) is low compared to the overall speed of seasonal progression.\nAs a result the early and late development stages of the observed species never occur together, and the observed data are sparse in the sense that many cell counts in Table~\\ref{tab:tab0} are zero.\nAs a consequence the model estimates large effects, which, although estimable on the link scale, are indistinguishable from one on the probability scale given the limitations of floating point representations.\nThis is the underlying reason for the warnings produced by the \\verb+glm+ function when fitting the sequential model.\nThe GLIM code for the cumulative model from \\citep{candy1990biology} uses multiple thresholding steps during the calculation of the linear predictor to mitigate against this, whereas the R implementation of the corresponding model makes use of a single thresholding step in the inverse link function (\\verb+gtools::inv.logit+~\\citep{gtools} or \\verb+stats::plogis+ and \\verb+VGAM::clogloglink+, respectively). \nDifferences in parameter estimates between a literal implementation of the logit link and \\verb+stats::plogis+ for the cumulative model with proportional variance (Table~\\ref{tab:tab2}) were less than 0.01\\%; however, in the case of the constant variance model the complementary log-log link model proved to be numerically less stable than the corresponding logit-link model (Figure~\\ref{fig:figS1}). \nThis is perhaps unsurprising as the cloglog function has a steeper slope than the logit function as it approaches one, so values numerically close to one are attained earlier.\nGiven the sparsity of the data set and the resulting large estimated effect sizes this likely contributed to the failure to fit either of the cloglog-link models using the \\verb+VGAM::vglm+ function.\nThe behaviour was reported to the maintainer of the \\verb+VGAM+ package, and initial troubleshooting of the \\verb+VGAM+ code resolved some instabilities for the fitting of the cumulative model.\nThese updates are now available in the pre-release version of \\verb+VGAM+\\footnote{\\url{https://www.stat.auckland.ac.nz/~yee/VGAM/prerelease/index.html}}.\nHowever, other instabilities remain. 
\nThe cumulative model using the updated \\verb+VGAM+ code converges far from the optimum identified by the other methods, and a fix for the error affecting the sequential model is yet to be implemented.\n\n\\section*{Acknowledgements}\nI am grateful to Juniper Simonis and Andrew MacDonald for constructive reviews of an earlier version of this manuscript, and to Thomas Yee for investigating numerical errors in the \\verb+VGAM+ package.\n\n\\afterpage{\\clearpage}\n
\\input{header.tex}\n\n\\graphicspath{\n{images/png/}{images/}{images/plots/}\n}\n\n\n\\begin{document}\n%------------------------------------------------------------------------------------------------\n\\section{Multidomain}\n\nThe multidomain equations generalize the bidomain equations and have multiple compartments at each spatial point. \nEach compartment $k$ describes one of the $M_\\text{mu}$ motor units.\nThe electric potential in the extra-cellular space is designated by $\\phi_e$ and the intra-cellular potential of motor unit $k$ is described by $\\phi_i^k$. The transmembrane voltages are given by $V_m^k = \\phi_i^k - \\phi_e$.\n\n\nThe first multidomain equation balances current flow between the intra- and extracellular spaces:\n\\begin{equation}\\label{eq:multidomain1}\n  \\begin{array}{lll}\n    \\div\\big(\\bfsigma_e\\,\\grad(\\phi_e)\\big) + \\s{k=1}{M_\\text{mu}} f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(\\ub{V_m^k + \\phi_e}{=\\phi_i^k})\\big) = 0.\n  \\end{array}\n\\end{equation}\nIt can be reformulated as:\n\\begin{equation}\\label{eq:multidomain1-1}\n  \\begin{array}{lll}\n    \\div\\big((\\bfsigma_e + \\ub{\\ds\\sum\\limits_{k}^{M_\\text{mu}} f_r^k\\bfsigma_i^k}{=:\\bfsigma_i}) \\,\\grad(\\phi_e)\\big) + \\s{k=1}{M_\\text{mu}} f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^k )\\big) = 0.\n  \\end{array}\n\\end{equation}\n%\nThe second multidomain equations describe current flow over the membrane and hold for each compartment:\n\\begin{equation}\n  \\begin{array}{lll}\\label{eq:multidomain2}\n    \\div\\big(\\bfsigma_i^k\\,\\grad(\\ub{V_m^k + \\phi_e}{=\\phi_i^k})\\big) = A_m^k\\,\\big(C_m^k\\,\\p{V_m^k}{t} + I_\\text{ion}(V_m^k)\\big) \\qquad \\forall\\,k \\in 1\\dots M_\\text{mu}\n  \\end{array}\n\\end{equation}\nThe unknowns are the transmembrane voltages $V_m^k$ of the compartments and the extracellular potential $\\phi_e$. $f_r^k \\in [0,1]$ is a spatially varying factor describing the presence of compartment $k$, with $\\sum_k f_r^k = 1$.\n$\\bfsigma_i^k$ and $\\bfsigma_e$ are the conductivity tensors of the intra- and extracellular spaces.\n\nSolving \\eqref{eq:multidomain2} for $\u2202V_m^k/\u2202t$ yields:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\p{V_m^k}{t} = \\dfrac{1}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^k + \\phi_e)\\big) - \\dfrac{1}{C_m^k}\\,I_\\text{ion}(V_m^k)\\\\\\\\[4mm]\n  \\end{array}\n\\end{equation*}\nWe solve this using an operator splitting approach, e.g. Godunov splitting.\nThe reaction term is solved with an explicit Euler scheme.\n\\begin{equation*}\n  \\begin{array}{lll}\n    V_m^{k,(*)} = V_m^{k,(i)} - dt\\,\\dfrac{1}{C_m^k}\\,I_\\text{ion}(V_m^{k,(i)}).\n  \\end{array}\n\\end{equation*}\nThe subcellular model provided by the CellML description computes $(-1/C_m^k\\,I_\\text{ion})$.\n\nThe diffusion term is solved using an implicit scheme, e.g. 
backward Euler.\n\\begin{equation}\\label{eq:diffusion_term}\n  \\begin{array}{lll}\n    V_m^{k,(i+1)} = V_m^{k,(*)} + dt\\,\\dfrac{1}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^{k,(i+1)} + \\phi_e)\\big)\\\\\\\\[4mm]\n  \\end{array}\n\\end{equation}\n\\begin{equation*}\n  \\begin{array}{lll}    \\Leftrightarrow\\quad \n    V_m^{k,(i+1)} -\\dfrac{dt}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^{k,(i+1)})\\big) - \\dfrac{dt}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(\\phi_e)\\big) = V_m^{k,(*)}\n  \\end{array}\n\\end{equation*}\n\nAfter discretizing the spatial terms with the Finite Element Method, the following matrix equation is obtained:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\left[\\begin{array}{@{}ccc|c@{}}\n      \\bfA_{V_m,V_m}^1 &  & & \\bfB^1_{V_m,\\phi_e} \\\\\\\\[2mm]\n      & \\ddots &   & \\vdots \\\\\\\\[2mm]\n      &  & \\bfA_{V_m,V_m}^{M_\\text{mu}} & \\bfB^{M_\\text{mu}}_{V_m,\\phi_e} \\\\\\\\[2mm] \\hline\n      \\bfB_{\\phi_e,V_m}^1 & \\dots & \\bfB_{\\phi_e,V_m}^{M_\\text{mu}} & \\bfB_{\\phi_e,\\phi_e} \\\\\\\\[2mm]\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      V_{m}^{1,(i+1)} \\\\\\\\[2mm] \\vdots \\\\\\\\[2mm] V_{m}^{M_\\text{mu},(i+1)} \\\\\\\\[2mm]\\hline \\phi_{e}^{(i+1)} \n    \\end{array}\\right]\n    = \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfb_{V_m}^{1,(i+1)} \\\\\\\\[2mm] \\vdots \\\\\\\\[2mm] \\bfb_{V_m}^{M_\\text{mu},(i+1)} \\\\\\\\[2mm]\\hline \\bfzero\n    \\end{array}\\right]\n  \\end{array},\n\\end{equation*}\nwhere\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\bfA^k_{V_m,V_m} &\\dots\\quad\\text{ discretization of }\\quad -\\dfrac{dt}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(\\cdot)\\big) + (\\cdot) \\\\\\\\[4mm]\n    \\bfB^k_{V_m,\\phi_e} &\\dots\\quad\\text{ discretization of }\\quad -\\dfrac{dt}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(\\cdot)\\big)\\\\\\\\[4mm]\n    \\bfB^k_{\\phi_e,V_m} &\\dots\\quad\\text{ discretization of }\\quad f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(\\cdot)\\big)\\\\\\\\[4mm]\n    \\bfB_{\\phi_e,\\phi_e} &\\dots\\quad\\text{ discretization of }\\quad \n  \\div\\big((\\bfsigma_e + \\bfsigma_i)\\,\\grad(\\cdot)\\big)\n  \\end{array}\n\\end{equation*}\n%\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\bfb_{V_m}^{k,(i+1)} = V_m^{k,(*)} = V_m^{k,(i)} - \\dfrac{dt}{C_m^k}\\,I_\\text{ion}(V_{m}^{k,(i)}) \\quad \\text{($I_\\text{ion}$ solved using the splitting scheme)}\n  \\end{array}\n\\end{equation*}\n\nThe bidomain equation is the special case for $M_\\text{mu}=1$. It can also be used for computation of EMG ($\\phi_e$) from $V_m$ in the fiber-based model.\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\div\\big((\\bfsigma_i + \\bfsigma_e)\\,\\grad \\phi_e\\big) = - \\div(\\bfsigma_i\\,\\grad V_m)\n  \\end{array}\n\\end{equation*}\n\n\\subsection{Finite Element formulation}\nIn this section, the finite element formulation for the multidomain equation is derived. 
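\nAs an aside, one such splitting step can be sketched in a few lines of Python (a schematic 1D example: the finite difference Laplacian, the cubic reaction term and all parameter values are placeholders, not the CellML model or the FEM operators derived below):\n\\begin{verbatim}\nimport numpy as np\n\n# one splitting step for dV/dt = D*Lap(V) - I_ion(V)/C_m\nn, dx, dt, D, C_m = 50, 0.1, 0.01, 1.0, 1.0\nV = np.zeros(n); V[n//2] = 1.0           # initial transmembrane voltage\n\n# 1D finite difference Laplacian as a stand-in diffusion operator\nL = (np.diag(np.full(n-1, 1.0), -1) - 2.0*np.eye(n)\n     + np.diag(np.full(n-1, 1.0), 1)) / dx**2\n\nI_ion = lambda v: v*(v - 0.5)*(v - 1.0)  # placeholder reaction term\n\nV_star = V - dt/C_m * I_ion(V)           # reaction step: explicit Euler\nA = np.eye(n) - dt*D*L                   # diffusion step: backward Euler\nV_new = np.linalg.solve(A, V_star)\n\\end{verbatim}\n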
We start with an example problem.\n\\subsubsection{Diffusion problem}\nIn general, the weak form of a diffusion problem discretized in time with Crank-Nicolson reads\n\\begin{equation}\\label{eq:weak_form0}\n  \\begin{array}{lll}\n    \u0394u = u_t, \\qquad \\p{u}{\\bfn} = f \\quad \\text{on }\\Gamma_f, \\qquad \\p{u}{\\bfn} = 0 \\quad \\text{on } \u2202\u03a9\\backslash \\Gamma_f \\\\\\\\[4mm]\n    \\Rightarrow\\quad \\ds\\int_\u03a9 \\big(\\theta\\,\u0394u^{(i+1)} + (1-\\theta)\\,\u0394u^{(i)}\\big)\\,\\phi \\,\\d\\bfx = \\dfrac{1}{dt} \\ds\\int_\u03a9(u^{(i+1)} - u^{(i)})\\,\\phi\\,\\d\\bfx, \\quad \\forall \\phi \\in V_h\\\\\\\\[4mm]\n  \\end{array}\n\\end{equation}\nDiscretizing in space with $u = \\sum_j u_j\\,\\varphi_j$, $V_h = \\span\\{\\varphi_j | j = 1\\dots N\\}$, and using the divergence theorem gives\n\\begin{equation}\\label{eq:weak_form1}\n  \\begin{array}{lll}\n    \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big)  \\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\u2207\\varphi_j\\cdot \\bfn)\\varphi_k \\,\\d\\bfx  \\right) \\\\\\\\[4mm]\n    \\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N.\\\\\\\\[4mm]\n  \\end{array}\n\\end{equation}\nThis can be written in matrix notation as\n\\begin{equation}\\label{eq:diffusion_weak_form_matrix}\n  \\begin{array}{lll}\n    \\bfA\\,\\bfu^{(i+1)} = \\bfb(\\bfu^{(i)}),\n  \\end{array}\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:diffusion_weak_form_matrix2}\n  \\begin{array}{lll}\n    \\bfA = \\theta\\,(\\bfK + \\bfB) -\\dfrac{1}{dt}\\bfM, \\\\\\\\[4mm]\n    \\bfb = \\big((\\theta-1)\\,(\\bfK + \\bfB) - \\dfrac{1}{dt} \\bfM \\big)\\,\\bfu^{(i)},\n  \\end{array}\n\\end{equation} \nwith \n\\begin{equation*}\n  \\begin{array}{lll}\n     \\bfK_{kj} = -\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx \\qquad \\text{(note, the minus sign is correct for $+\u0394$)},\\\\\\\\[4mm]\n     \\bfB_{kj} = \\ds\\int_{\\Gamma_f} (\u2207\\varphi_j\\cdot \\bfn)\\varphi_k \\,\\d\\bfx,\\\\\\\\[4mm]\n     \\bfM_{kj} = \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx,\n  \\end{array}\n\\end{equation*}\nor written in component form:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\bfA_{kj} = \\theta\\,\\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\u2207\\varphi_j\\cdot \\bfn)\\varphi_k \\,\\d\\bfx \\right) - \\dfrac{1}{dt}\\,\\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx,\\\\\\\\[4mm]\n    \\bfb_k = \\ds\\sum\\limits_{j=1}^{N} -(1-\\theta)\\,u_j^{(i)}\\,\\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\u2207\\varphi_j\\cdot \\bfn)\\varphi_k \\,\\d\\bfx \\right)\n     + \\dfrac{1}{dt} \\ds\\sum\\limits_{j=1}^{N} -u_j^{(i)} \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx.\n  \\end{array}\n\\end{equation*}\nSo far we have not plugged in the boundary conditions. For $f=0$ we get $\\bfB = \\bfzero$. 
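\nAs a sanity check of these formulas, $\\bfM$ and $\\bfK$ for 1D linear elements on a uniform mesh can be assembled and one $\\theta$-scheme step can be performed as follows (a minimal Python sketch; mesh, time step and $\\theta$ are arbitrary choices):\n\\begin{verbatim}\nimport numpy as np\n\n# 1D linear FEM on a uniform mesh with n_el elements\nn_el, h, dt, theta = 20, 0.05, 0.01, 0.5   # theta=0.5: Crank-Nicolson\nn = n_el + 1\nM = np.zeros((n, n)); K = np.zeros((n, n))\nMe = h/6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])      # int phi_j phi_k\nKe = -1.0/h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # -int grad.grad\nfor e in range(n_el):\n    M[e:e+2, e:e+2] += Me\n    K[e:e+2, e:e+2] += Ke\n\n# theta scheme with B = 0:  A u^(i+1) = b\nu = np.exp(-((np.linspace(0.0, 1.0, n) - 0.5)/0.1)**2)  # initial condition\nA = theta*K - M/dt\nb = ((theta - 1.0)*K - M/dt) @ u\nu_new = np.linalg.solve(A, b)\n\\end{verbatim}\n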
The case $f \\neq 0$ is handled in the next subsection.\n\n\\subsubsection{Boundary conditions}\nThe boundary condition $\u2207u\\cdot\\bfn = f$ can be written as\n\\begin{equation*}\n  \\begin{array}{lll}\n    \u2207u\\cdot\\bfn = \\s{j=1}{N}u_j\\,(\u2207\\varphi_j\\cdot \\bfn) = f.\n  \\end{array}\n\\end{equation*}\nWe discretize the flow over the boundary, $f$, by different ansatz functions, $\\psi_j$, with coefficients $f_j$:\n\\begin{equation*}\n  \\begin{array}{lll}\n    f = \\s{j=1}{N}f_j\\,\\psi_j\n  \\end{array}\n\\end{equation*}\nFrom \\eqref{eq:weak_form1} we get\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big) \n     \\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx \\right)\n      + \\ds\\int_{\\Gamma_f} \\big(\\theta\\,f^{(i+1)} + (1-\\theta)\\,f^{(i)}\\big)\\,\\varphi_k \\,\\d\\bfx \\\\\\\\[4mm]\n    \\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N,\\\\\\\\[4mm]\n    \\Leftrightarrow \\quad \n    \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big) \n     \\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx \\right)\n      + \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,f_j^{(i+1)} + (1-\\theta)\\,f_j^{(i)}\\big) \n      \\ds\\int_{\\Gamma_f} \\psi_j\\,\\varphi_k \\,\\d\\bfx \\\\\\\\[4mm]\n    \\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N,\\\\\\\\[4mm]\n  \\end{array}\n\\end{equation*}\nIn matrix notation,\n\\begin{equation}\\label{eq:diffusion_weak_form_matrix_f}\n  \\begin{array}{lll}\n    \\bfA\\,\\bfu^{(i+1)} = \\bfb(\\bfu^{(i)}),\n  \\end{array}\n\\end{equation}\nwe have\n\\begin{equation}\\label{eq:diffusion_weak_form_matrix2_f}\n  \\begin{array}{lll}\n    \\bfA = \\theta\\,\\bfK -\\dfrac{1}{dt}\\bfM, \\\\\\\\[4mm]\n    \\bfb = \\big((\\theta-1)\\,\\bfK - \\dfrac{1}{dt} \\bfM \\big)\\,\\bfu^{(i)} - \\bfB_{\\Gamma_f}\\,\\big(\\theta\\,\\bff^{(i+1)} + (1-\\theta)\\,\\bff^{(i)}\\big),\n  \\end{array}\n\\end{equation} \nwith \n\\begin{equation*}\n  \\begin{array}{lll}\n     \\bfK_{kj} = -\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx \\qquad \\text{(note, the minus sign is correct for $+\u0394$)},\\\\\\\\[4mm]\n     \\bfM_{kj} = \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx,\\\\\\\\[4mm]\n     \\bfB_{\\Gamma_f,kj} = \\ds\\int_{\\Gamma_f} \\psi_j\\,\\varphi_k \\,\\d\\bfx,\n  \\end{array}\n\\end{equation*}\n\n\\subsubsection{Laplace problem}\nWe consider $\u0394u = 0, \\partial u/\\partial \\bfn = f$.\nThis leads to \n\\begin{equation*}\n  \\begin{array}{lll}\n    (\\bfK + \\bfB)\\,\\bfu = \\bfzero \\qquad \\text{or} \\qquad \\bfK\\,\\bfu + \\bfB_{\\Gamma_f}\\,\\bff = 0.\n  \\end{array}\n\\end{equation*}\n\n\\subsubsection{First multidomain equation}\nBack to the first multidomain equation \\eqref{eq:multidomain1-1}:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\div\\big((\\bfsigma_e + \\ub{\\ds\\sum\\limits_{k}^{M_\\text{mu}} f_r^k\\bfsigma_i^k}{=:\\bfsigma_i}) \\,\\grad(\\phi_e)\\big) + \\s{k=1}{M_\\text{mu}} f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^k )\\big) = 0,\n  \\end{array}\n\\end{equation*}\nThe weak form can be written as\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\ds\\sum\\limits_{j=1}^{N} \\phi_{e,j} \\left(-\\ds\\int_\u03a9 (\\bfsigma_e + 
\\bfsigma_i) \u2207\\varphi_j\\cdot \u2207\\varphi_\\ell \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} ((\\bfsigma_e + \\bfsigma_i) \u2207\\varphi_j\\cdot \\bfn)\\varphi_\\ell \\,\\d\\bfx  \\right) \\\\[4mm]\n    \\quad +  \\s{k=1}{M_\\text{mu}} f_r^k\\,\\left(\\ds\\sum\\limits_{j=1}^{N} V^k_{m,j} \\left(-\\ds\\int_\u03a9 \\bfsigma_i^k \u2207\\varphi_j\\cdot \u2207\\varphi_\\ell \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\\bfsigma_i^k \u2207\\varphi_j\\cdot \\bfn)\\varphi_\\ell \\,\\d\\bfx  \\right)\\right) = 0 \\quad \\forall \\ell=1,\\dots,N,\n  \\end{array}\n\\end{equation*}\nwhich is in matrix notation,\n\\begin{equation}\\label{eq:multidomain1_matrix1}\n  \\begin{array}{lll}\n    \\big(\\bfK_{\\bfsigma_e + \\bfsigma_i} + \\bfB_{\\bfsigma_e + \\bfsigma_i}\\big)\\bfphi_{e} +  \\s{k=1}{M_\\text{mu}} f_r^k \\big(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}\\big)\\bfV_m^k = 0\n  \\end{array}\n\\end{equation}\n\n\\subsubsection{Second multidomain equation}\nThe diffusion part of the second multidomain equation is given by\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\dfrac{V_m^{k,(i+1)}-V_m^{k,(*)}}{dt} = \\dfrac{1}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^{k,(i+1)})\\big) + \\dfrac{1}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(\\phi_e)\\big).\n  \\end{array}\n\\end{equation*}\nThe weak form is given by\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\dfrac{1}{A^k_m\\,C_m^k}\\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,V_{m,j}^{(i+1)} + (1-\\theta)\\,V_{m,j}^{(i)}\\big) \\left(-\\ds\\int_\u03a9 \\bfsigma_i^{k}\\, \u2207\\varphi_j\\cdot \u2207\\varphi_\\ell \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\\bfsigma_i^k\\,\u2207\\varphi_j\\cdot \\bfn)\\varphi_\\ell \\,\\d\\bfx  \\right) \\\\[4mm]\n    + \\dfrac{1}{A^k_m\\,C_m^k} \\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,\\phi_{e,j}^{(i+1)} + (1-\\theta)\\,\\phi_{e,j}^{(i)}\\big) \\left(-\\ds\\int_\u03a9 \\bfsigma_i^k\\,\u2207\\varphi_j\\cdot \u2207\\varphi_\\ell \\,\\d\\bfx + \\ds\\int_{\u2202\u03a9} (\\bfsigma_i^k\\,\u2207\\varphi_j\\cdot \\bfn)\\varphi_\\ell \\,\\d\\bfx  \\right) \\\\[4mm]\n    \\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(V_{m,j}^{(i+1)} - V_{m,j}^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_\\ell\\,\\d\\bfx, \\quad \\forall \\ell = 1\\dots N\\\\[4mm]\n  \\end{array}\n\\end{equation*}\nAnalogous to \\eqref{eq:diffusion_weak_form_matrix} we get \n\\begin{equation}\\label{eq:multidomain2_matrix1}\n  \\begin{array}{lll}\n    \\bfA\\,\\mat{\\bfV_m^{(i+1)}\\\\ \\bfphi_{e}^{(i+1)}} = \\bfb,\n  \\end{array}\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:multidomain2_matrix2}\n  \\begin{array}{lll}\n    \\bfA = \\mat{\n      \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) -\\dfrac{1}{dt}\\bfM & \\quad\n      \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\n    } \\\\[4mm]\n    \\bfb = \\Big( \\dfrac{1}{A_m^k\\,C_m^k}(\\theta-1)\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) - \\dfrac{1}{dt}\\bfM\\Big) \\bfV_m^{(i)} \n      + \\dfrac{1}{A_m^k\\,C_m^k}(\\theta - 1)\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\\,\\bfphi_e^{(i)}\n  \\end{array}\n\\end{equation}\n\nTogether, \\eqref{eq:multidomain1_matrix1} and \\eqref{eq:multidomain2_matrix1},\\eqref{eq:multidomain2_matrix2} form the following system\n\\begin{equation}\\label{eq:matrix_equation_1}\n  \\begin{array}{lll}\n    \\matt{\n      \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) -\\dfrac{1}{dt}\\bfM &\n      
\\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) \\\\\\\\[4mm]\n      f_r^k \\big(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}\\big) &\n      \\big(\\bfK_{\\bfsigma_e + \\bfsigma_i} + \\bfB_{\\bfsigma_e + \\bfsigma_i}\\big)\n    }\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\\\\\[4mm]\n       \\bfphi_{e}^{(i+1)}\n    }\\\\\\\\[4mm]\n    = \n    \\matt{\n      \\big((\\theta-1)\\, \\dfrac{1}{A_m^k\\,C_m^k}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) - \\dfrac{1}{dt}\\bfM\\big) \\bfV_m^{(i)} \n      + \\dfrac{\\theta - 1}{A_m^k\\,C_m^k}\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\\,\\bfphi_e^{(i)}\\\\\\\\[4mm]\n      \\bfzero\n    }\n  \\end{array}\n\\end{equation}\nBy multiplying the first row with $-dt\\,\\bfM^{-1}$ we get\n\\begin{equation}\\label{eq:matrix_equation_2}\n  \\begin{array}{lll}\n    \\matt{\n      \\dfrac{-dt\\,\\theta}{A_m^k\\,C_m^k}\\bfM^{-1}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) + \\bfI &\n      \\dfrac{-dt\\,\\theta}{A_m^k\\,C_m^k}\\bfM^{-1}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) \\\\\\\\[4mm]\n      f_r^k \\big(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}\\big) &\n      \\big(\\bfK_{\\bfsigma_e + \\bfsigma_i} + \\bfB_{\\bfsigma_e + \\bfsigma_i}\\big)\n    }\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\\\\\[4mm]\n       \\bfphi_{e}^{(i+1)}\n    }\\\\\\\\[12mm]\n    = \n    \\matt{\n      \\left(\\dfrac{(1-\\theta)\\,dt}{A_m^k\\,C_m^k}\\bfM^{-1}(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}) + \\bfI\\right) \\bfV_m^{(i)}\n      + \\dfrac{(1 - \\theta)\\,dt}{A_m^k\\,C_m^k}\\,\\bfM^{-1}\\,(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k})\\,\\bfphi_e^{(i)} \\\\\\\\[4mm]\n      \\bfzero\n    }\n  \\end{array}\n\\end{equation}\n\nFor a backward Euler scheme ($\\theta = 1$) and homogeneous Neumann boundary conditions ($\\bfB = \\bfzero$) the equation simplifies to\n\\begin{equation*}\n  \\begin{array}{lll}\n   \\matt{\n      \\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{dt}\\bfM\\ & \\quad\n      \\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} \\\\\\\\[4mm]\n      f_r^k \\bfK_{\\bfsigma_i^k} &\n      \\bfK_{\\bfsigma_e + \\bfsigma_i}\n    }\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\\\\\[4mm]\n       \\bfphi_{e}^{(i+1)}\n    }\n    = \n    \\matt{\n       -\\dfrac{1}{dt}\\bfM\\,\\bfV_m^{(i)} \\\\\\\\[4mm]\n      \\bfzero\n    }\n  \\end{array}\n\\end{equation*}\nor equivalently,\n\\begin{equation*}\n  \\begin{array}{lll}\n   \\matt{\n      \\dfrac{-dt}{A_m^k\\,C_m^k}\\bfM^{-1}\\bfK_{\\bfsigma_i^k} +\\bfI & \\quad\n      \\dfrac{-dt}{A_m^k\\,C_m^k}\\bfM^{-1}\\bfK_{\\bfsigma_i^k} \\\\\\\\[4mm]\n      f_r^k \\bfK_{\\bfsigma_i^k} &\n      \\bfK_{\\bfsigma_e + \\bfsigma_i}\n    }\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\\\\\[4mm]\n       \\bfphi_{e}^{(i+1)}\n    }\n    = \n    \\matt{\n       \\bfV_m^{(i)} \\\\\\\\[4mm]\n      \\bfzero\n    }\n  \\end{array}\n\\end{equation*}\n\n\\subsection{Boundary conditions}\nThe boundary conditions of the multidomain equations are given by\n\\begin{equation*}\n  \\begin{array}{lll}\n    (\\bfsigma_i^{k}\\,\u2207\\phi_i^{k})\\cdot \\bfn_m = 0  \\qquad \\text{on } \\Gamma_M\n  \\end{array}\n\\end{equation*}\nWith $\\phi_i^k = V_m^k + \\phi_e$ this translates to\n\\begin{equation}\\label{eq:flux_bc}\n  \\begin{array}{lll}\n    (\\bfsigma_i^{k}\\,\u2207V_m^{k})\\cdot \\bfn_m = -(\\bfsigma_i^{k}\\,\u2207\\phi_e)\\cdot \\bfn_m =: p^k  \\qquad \\text{on } \\Gamma_M\n  \\end{array}\n\\end{equation}\nFor now, we assume $\\partial\\phi_e/\\partial \\bfn = 0$ on $\\Gamma_M$. With this assumption, $p^k = 0$ in \\eqref{eq:flux_bc} and the boundary matrices $\\bfB$ vanish. 
\n\n%%\n\n%\\subsubsection{Diffusion example}\n%How to handle the inhomogeneous Neumann-type boundary conditions will be derived in the following subsection. \n%Consider the example problem \\eqref{eq:weak_form0}:\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\u2207\\cdot(\\bfsigma\\,\u2207 u) = u_t, \\qquad (\\bfsigma \\,\u2207u)\\cdot\\bfn = f \\quad \\text{on }\\Gamma_f, \\qquad (\\bfsigma\\,\u2207u)\\cdot\\bfn = 0\\quad \\text{on }\u2202\u03a9\\backslash \\Gamma_f.\n  %\\end{array}\n%\\end{equation*}\n%Analogous to \\eqref{eq:weak_form1} we have\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big)  \\left(-\\ds\\int_\u03a9 \\bfsigma\\,\u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx\\right) \n    %+ \\ds\\sum\\limits_{j=1}^{N}\\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big)\\left(\\ds\\int_{\\Gamma_f} (\\bfsigma\\,\u2207\\varphi_j\\cdot \\bfn)\\varphi_k \\,\\d\\bfx  \\right) \\\\[4mm]\n    %\\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N\\\\[4mm]\n  %\\end{array}\n%\\end{equation*}\n%The boundary condition $(\\bfsigma \\,\u2207u)\\cdot\\bfn = f$ can be written as\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %(\\bfsigma \\,\u2207u)\\cdot\\bfn = \\s{j=1}{N}u_j\\,(\\bfsigma\\,\u2207\\varphi_j\\cdot \\bfn) = f\n  %\\end{array}\n%\\end{equation*}\n%and we get\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big) \n     %\\left(-\\ds\\int_\u03a9 \\bfsigma\\,\u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx + \\ds\\int_{\\Gamma_f} f\\,\\varphi_k \\,\\d\\bfx  \\right)\\\\[4mm]\n    %\\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N,\\\\[4mm]\n  %\\end{array}\n%\\end{equation*}\n%which is the same as\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\\ds\\sum\\limits_{j=1}^{N} \\big(\\theta\\,u_j^{(i+1)} + (1-\\theta)\\,u_j^{(i)}\\big)  \\left(-\\ds\\int_\u03a9 \\bfsigma\\,\u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx\\right) \n    %+ \\ds\\sum\\limits_{j=1}^{N}\\big(\\theta\\,f_j^{(i+1)} + (1-\\theta)\\,f_j^{(i)}\\big) \\left(\\ds\\int_{\\Gamma_f}  \\varphi_j\\,\\varphi_k  \\,\\d\\bfx  \\right) \\\\[4mm]\n    %\\quad = \\dfrac{1}{dt} \\sum\\limits_{j=1}^{N} \\big(u_j^{(i+1)} - u_j^{(i)}\\big) \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx, \\quad \\forall k = 1\\dots N.\\\\[4mm]\n  %\\end{array}\n%\\end{equation*}\n%This translates to the matrix equation\n%\\begin{equation*}\n  %\\begin{array}{lll}\n   %\\matt{\n      %\\theta\\,\\bfK_{\\bfsigma} -\\dfrac{1}{dt}\\bfM & \\quad\n      %(1-\\theta)\\,\\bfM_\\Gamma \\quad & \n      %\\theta\\,\\bfM_\\Gamma\n    %}\n    %\\matt{\n      %\\bfu^{(i+1)}\\\\\n      %\\bff^{(i)}\\\\\n      %\\bff^{(i+1)}\n    %}\n    %= \n    %\\matt{\n       %\\big((\\theta-1)\\,\\bfK_{\\bfsigma} - \\dfrac{1}{dt} \\bfM \\big)\\,\\bfu^{(i)}\n    %},\n  %\\end{array}\n%\\end{equation*}\n%where\n%\\begin{equation*}\n  %\\begin{array}{lll}\n     %\\bfK_{\\bfsigma} = -\\ds\\int_\u03a9 \\bfsigma\\,\u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx, \\quad \\bfK = \\bfK_\\bfI \\qquad \\text{(note, the minus sign is correct for $+\u0394$)},\\\\[4mm]\n     %\\bfM_\\Gamma = \\ds\\int_{\\Gamma_f} \\varphi_j\\,\\varphi_k \\,\\d\\bfx\\\\[4mm]\n 
    %\\bfM = \\ds\\int_\u03a9 \\varphi_j\\,\\varphi_k\\,\\d\\bfx,\n  %\\end{array}\n%\\end{equation*}\n%\\subsubsection{Laplace example}\n%Another example is\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\u0394u = 0, \\qquad \\p{u}{\\bfn} = f \\quad \\text{on }\\Gamma_f.\n  %\\end{array}\n%\\end{equation*}\n%We get\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\\ds\\sum\\limits_{j=1}^{N} u_j \\left(-\\ds\\int_\u03a9 \u2207\\varphi_j\\cdot \u2207\\varphi_k \\,\\d\\bfx\\right) \n    %+ \\ds\\sum\\limits_{j=1}^{N} f_j\\ds\\int_{\\Gamma_f} \\varphi_j\\,\\varphi_k \\,\\d\\bfx = 0 \\quad \\forall k = 1\\dots N,\\\\[4mm]\n  %\\end{array}\n%\\end{equation*}\n%which is in matrix form\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %\\matt{\n     %\\bfK & \n      %\\bfM_\\Gamma\n    %}\n    %\\matt{\n      %\\bfu\\\\\n      %\\bff\\\\\n    %}\n    %= \n    %\\bfzero,\n  %\\end{array}\n%\\end{equation*}\n\n\\subsubsection{Multidomain example}\nStarting from \\eqref{eq:matrix_equation_1} with $\\bfB= \\bfzero$ we have\n\\begin{equation}\\label{eq:matrix_equation_3}\n  \\begin{array}{lll}\n  \\matt{\n      \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{dt}\\bfM &\n      \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfK_{\\bfsigma_i^k} \\\\[4mm]\n      f_r^k \\,\\bfK_{\\bfsigma_i^k}  &\n      \\bfK_{\\bfsigma_e + \\bfsigma_i}\n    }\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\[4mm]\n       \\bfphi_{e}^{(i+1)}\n    }\\\\[8mm]\n    = \n    \\matt{\n      \\big((\\theta-1)\\, \\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{dt}\\bfM\\big) \\bfV_m^{(i)} \n      + (\\theta - 1)\\,\\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}\\\\[4mm]\n      \\bfzero\n    }.\n  \\end{array}\n\\end{equation}\nFor boundary integrals and a Neumann boundary condition $(\\bfsigma \u2207u \\cdot \\bfn) = f$ on $\u2202\\Omega=\\Gamma$, we have:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\bfsigma \u2207u\\cdot\\bfn = \\s{j=1}{N}u_j\\,(\\bfsigma \u2207\\varphi_j\\cdot \\bfn) = \\s{j=1}{N}f_j\\,\\psi_j\\\\[6mm]\n    \\Leftrightarrow\\quad \\s{j=1}{N}u_j\\,\\ds\\int_{\u2202\\Omega} (\\bfsigma \u2207\\varphi_j \\cdot \\bfn)\\,\\varphi_\\ell\\,\\d \\bfx \n    = \\s{j=1}{N}f_j\\,\\ds\\int_{\u2202\\Omega} \\psi_j\\,\\varphi_\\ell\\,\\d \\bfx\\\\[8mm]\n    \\Leftrightarrow\\quad \\bfB_{\\bfsigma}\\bfu = \\bfB_{\\Gamma}\\bff\n  \\end{array}\n\\end{equation*}\n\nNow we know how to incorporate the boundary condition \\eqref{eq:flux_bc}: we replace $\\bfB_{\\bfsigma}\\bfu$ by $\\bfB_{\\Gamma}\\bff$\nand move the terms to the right-hand side.\n
We get \\cref{eq:matrix_equation_3} with a different right-hand side $\\bfb$:\n%\n\\begin{equation}\n  \\begin{array}{ll|lll}\n    \\bfb = \\Big( \\dfrac{1}{A_m^k\\,C_m^k}(\\theta-1)\\,\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{dt}\\bfM\\Big) \\bfV_m^{(i)} \n      + \\dfrac{1}{A_m^k\\,C_m^k}(\\theta - 1)\\,\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)} \\\\[4mm]\n      \\quad \\ub{+ \\dfrac{1}{A_m^k\\,C_m^k}(\\theta-1)\\,\\bfB_{\\Gamma_M} \\bfp^{k,(i)} - \\dfrac{1}{A_m^k\\,C_m^k}(\\theta-1)\\,\\bfB_{\\Gamma_M} \\bfp^{k,(i)}}{=\\bfzero}\\\\[4mm]\n      \\quad \\ub{- \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfB_{\\Gamma_M} \\bfp^{k,(i+1)} + \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfB_{\\Gamma_M} \\bfp^{k,(i+1)}}{=\\bfzero}\n  \\end{array}\n\\end{equation}\n\nAnalogously, for the first multidomain equation in matrix notation, \\cref{eq:multidomain1_matrix1},\n with $\\bfB_{\\bfsigma}\\bfu = \\bfB_{\\Gamma}\\bff$ and $q := (\\bfsigma_e \u2207 \\phi_e)\\cdot \\bfn_m$:\n\\begin{equation}\\label{eq:multidomain1_matrix2}\n  \\begin{array}{lll}\n    &\\big(\\bfK_{\\bfsigma_e + \\bfsigma_i} + \\bfB_{\\bfsigma_e + \\bfsigma_i}\\big)\\bfphi_{e} +  \\s{k=1}{M_\\text{mu}} f_r^k \\big(\\bfK_{\\bfsigma_i^k} + \\bfB_{\\bfsigma_i^k}\\big)\\bfV_m^k = 0\\\\[4mm]\n    \\Leftrightarrow \\quad &\\bfK_{\\bfsigma_e + \\bfsigma_i}\\bfphi_{e} + \\bfB_{\\bfsigma_e}\\bfphi_{e} + \\ub{\\bfB_{\\bfsigma_i}\\bfphi_{e} }{=\\s{k=1}{M_\\text{mu}} f_r^k\\, \\bfB_{\\bfsigma_i^k}\\,\\bfphi_e}\n    +  \\s{k=1}{M_\\text{mu}} f_r^k \\big(\\bfK_{\\bfsigma_i^k}\\bfV_m^k + \\bfB_{\\bfsigma_i^k} \\bfV_m^k \\big) = 0\\\\[4mm]\n    \\Leftrightarrow \\quad &\\bfK_{\\bfsigma_e + \\bfsigma_i}\\bfphi_{e} + \\bfB_{\\bfsigma_e}\\bfphi_{e} -\\bfB_{\\Gamma_M} \\s{k=1}{M_\\text{mu}} f_r^k \\bfp^k \n    +  \\s{k=1}{M_\\text{mu}} f_r^k \\big(\\bfK_{\\bfsigma_i^k}\\bfV_m^k + \\bfB_{\\Gamma_M} \\bfp^k \\big) = 0\\\\[4mm]\n    \\Leftrightarrow \\quad &\\bfK_{\\bfsigma_e + \\bfsigma_i}\\bfphi_{e} + \\bfB_{\\Gamma_M}\\bfq\n    +  \\s{k=1}{M_\\text{mu}} f_r^k \\bfK_{\\bfsigma_i^k}\\bfV_m^k = 0\n  \\end{array}\n\\end{equation}\n\nThe same operations can also be done by adding the flux terms to the vector of unknowns:\n\\begin{equation}\\label{eq:multidomain_fe_flux}\n  \\begin{array}{ll|lll}\n    \\matt{\n      \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{dt}\\bfM &\n      \\dfrac{\\theta}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} &\n       \\ub{\\dfrac{(1-\\theta)}{A_m^k\\,C_m^k}\\,\\bfB_{\\Gamma_M}- \\dfrac{(1-\\theta)}{A_m^k\\,C_m^k}\\,\\bfB_{\\Gamma_M}}{=\\bfzero} \\quad & \n      \\ub{\\dfrac{\\theta}{A_m^k\\,C_m^k}\\,\\bfB_{\\Gamma_M} - \\dfrac{\\theta}{A_m^k\\,C_m^k}\\,\\bfB_{\\Gamma_M}}{=\\bfzero} &\n      \\bfzero\\\\[8mm]      \n      f_r^k \\,\\bfK_{\\bfsigma_i^k}  &\n      \\bfK_{\\bfsigma_e + \\bfsigma_i} &\n      & 0 &\n      \\bfB_{\\Gamma_M}\n    }\\\\[2mm]\n    \\matt{\n      \\bfV_m^{(i+1)}\\\\[2mm]\n       \\bfphi_{e}^{(i+1)}\\\\[2mm]\n       \\bfp^{k,(i)}\\\\[2mm]\n      \\bfp^{k,(i+1)}\\\\[4mm]\n        \\bfq^{(i+1)}\n    }\n    = \n    \\matt{\n      \\big((\\theta-1)\\, \\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{dt}\\bfM\\big) \\bfV_m^{(i)} \n      + (\\theta - 1)\\dfrac{1}{A_m^k\\,C_m^k}\\,\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}\\\\[4mm]\n      \\bfzero\n    },\n  \\end{array}\n\\end{equation}\nwhere \n\\begin{equation*}\n  \\begin{array}{lll}\n    p^{k,(i)} &= (\\bfsigma_i^k \\,\u2207V_m^{k,(i)})\\cdot \\bfn_m &= -(\\bfsigma_i^k\\,\u2207\\phi_e^{(i)})\\cdot \\bfn_m\n  \\end{array}\n\\end{equation*}\n\n\nSo far we have not specified a\n
boundary condition for the second row. \n\n%%\n\n\\subsection{Additional body region}\nTo simulate surface electromyography, we add a domain $\\Omega_B$, which represents fat tissue. The setting is visualized in Fig.~\\ref{fig:body_domain2}.\n\n\\bild{body_domain2}{0.3\\textwidth}{Computational domains}\n\nOn the muscle domain $\\Omega_M$, we have the 1st and 2nd Multidomain equations,\n\\begin{equation*}\n  \\begin{array}{lll}\n    %\\div\\big(\\bfsigma_e\\,\\grad(\\phi_e)\\big) + \\s{k=1}{M_\\text{mu}} f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(\\phi_i^k)\\big) = 0.\\\\[4mm]\n    \\div\\big((\\bfsigma_e + \\bfsigma_i) \\,\\grad(\\phi_e)\\big) + \\s{k=1}{M_\\text{mu}} f_r^k\\,\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^k )\\big) = 0,\\\\[4mm]\n    \\div\\big(\\bfsigma_i^k\\,\\grad(\\phi_i^k)\\big) = A_m^k\\,\\big(C_m^k\\,\\p{V_m^k}{t} + I_\\text{ion}(V_m^k)\\big), \\qquad \\forall\\,k \\in 1\\dots M_\\text{mu}.\n  \\end{array}\n\\end{equation*}\nThe 2nd Multidomain equation is solved using an operator splitting approach, which yields the following diffusion equation to be solved as one part of the splitting \\eqref{eq:diffusion_term}:\n\\begin{equation*} \n  \\begin{array}{lll}\n    \\p{V_m^k}{t} = \\dfrac{1}{A^k_m\\,C_m^k}\\div\\big(\\bfsigma_i^k\\,\\grad(V_m^k + \\phi_e)\\big)\n  \\end{array}\n\\end{equation*}\n\nWe assume a harmonic electric potential on $\\Omega_B$:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\div \\big(\\bfsigma_b\\,\\grad (\\phi_b)\\big) = 0 \\qquad \\text{on } \\Omega_B.\n  \\end{array}\n\\end{equation*}\n\nThe boundary conditions on $\\Gamma_M$ are given by \\eqref{eq:flux_bc}:\n\\begin{equation}\\label{eq:bc1}\n  \\begin{array}{lll}\n    &(\\bfsigma_i^{k}\\,\u2207\\phi_i^{k})\\cdot \\bfn_m = 0   \\qquad &\\text{on } \\Gamma_M \\\\[4mm]\n    \\quad \\Rightarrow \\quad\n    &(\\bfsigma_i^{k}\\,\u2207V_m^{k})\\cdot \\bfn_m = -(\\bfsigma_i^{k}\\,\u2207\\phi_e)\\cdot \\bfn_m =: p^k  \\qquad &\\text{on } \\Gamma_M.\n  \\end{array}\n\\end{equation}\nFor the diffusion part of the 2nd Multidomain equation, boundary condition \\eqref{eq:bc1} is satisfied automatically when all flux terms are neglected (cf.~
\\eqref{eq:multidomain_fe_flux}).\n\nThe connection between muscle and body domain is given by the following conditions:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\phi_e = \\phi_b  \\qquad &\\text{on } \\Gamma_M,\\\\[4mm]\n    (\\bfsigma_e \u2207 \\phi_e)\\cdot \\bfn_m = -(\\bfsigma_b \u2207 \\phi_b)\\cdot \\bfn_m =: q \\qquad &\\text{on } \\Gamma_M.\n  \\end{array}\n\\end{equation*}\nFurthermore, with $\\phi_e = \\phi^k_i - V_m^k$ we have:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\big((\\bfsigma_e + \\bfsigma_i)\\,\u2207\\phi_e\\big) \\cdot \\bfn_m\\\\[4mm]\n    = (\\bfsigma_e\\,\u2207\\phi_e)\\cdot \\bfn_m + (\\bfsigma_i\\,\u2207\\phi_e)\\cdot \\bfn_m\\\\[4mm]\n    = q + \\s{k=1}{M_\\text{mu}}f_r^k\\big( -\\ub{( \\bfsigma_i^k\\,\u2207V_m^k)\\cdot \\bfn_m}{=p^k} \n    + \\ub{(\\bfsigma_i^k \\,\u2207\\phi^k_i)\\cdot \\bfn_m}{=0, \\,\\eqref{eq:bc1}}\\big)\\\\[4mm]\n    = q - \\s{k=1}{M_\\text{mu}} f_r^k\\,p^k\n  \\end{array}\n\\end{equation*}\n\nThe boundary conditions on the outer boundary of $\\Omega_B$ are given by\n\\begin{equation*}\n  \\begin{array}{lll}\n   (\\bfsigma_b \u2207\\phi_b)\\cdot \\bfn_b = 0 \\qquad &\\text{on } \\Gamma^\\text{out}_B \\cup \\Gamma_M^\\text{out}.\n  \\end{array}\n\\end{equation*}\n\n\\subsection{System of linear equations} \nWith the body potential, $\\bfphi_b$, and for $M_\\text{mu}=1$ motor unit, we get the following system:\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\left[\\begin{array}{@{}c|c|c|ccc@{}}\n      \\bfA_{V_m,V_m} & \\bfB_{V_m,\\phi_e} & &&\\\\[2mm]\n      \\bfB_{\\phi_e,V_m} & \\bfB_{\\phi_e,\\phi_e} & &f_r^k \\,\\bfB_{\\Gamma_M}-f_r^k \\,\\bfB_{\\Gamma_M}  & \\bfB_{\\Gamma_M}& \\\\[2mm] \\hline\n      &&\\bfC_{\\phi_b,\\phi_b} & & -\\bfB_{\\Gamma_M}&\\\\[2mm]\\hline\n      & \\bfI_{\\Gamma_M} & -\\bfI_{\\Gamma_M} && &\\\\[2mm]\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      V_{m}^{(i+1)}  \\\\[2mm]\\hline \n      \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n      \\bfphi_{b}^{(i+1)} \\\\[2mm]\\hline\n      \\bfp^{k,(i+1)} \\\\[2mm] \n      \\bfq^{(i+1)}\n    \\end{array}\\right]\n    = \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfb_{V_m}^{(i+1)} \\\\\n      \\bfzero \\\\\\hline\n      \\bfzero\\\\\\hline \n      \\bfzero\n    \\end{array}\\right]\\\\[4mm]\n    \\Leftrightarrow\n    \\quad \n    \\left[\\begin{array}{@{}c|c|c|c@{}}\n      \\bfA_{V_m,V_m} & \\bfB_{V_m,\\phi_e} & &\\\\[2mm]\n      \\bfB_{\\phi_e,V_m} & \\bfB_{\\phi_e,\\phi_e} & &\\bfB_{\\Gamma_M} \\\\[2mm] \\hline\n      &&\\bfC_{\\phi_b,\\phi_b} & -\\bfB_{\\Gamma_M}\\\\[2mm]\\hline\n      & \\bfI_{\\Gamma_M} & -\\bfI_{\\Gamma_M} &\\\\[2mm]\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      V_{m}^{(i+1)}  \\\\[2mm]\\hline \n      \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n      \\bfphi_{b}^{(i+1)}  \\\\[2mm]\\hline\n      \\bfq^{(i+1)}\n    \\end{array}\\right]\n    = \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfb_{V_m}^{(i+1)} \\\\[2mm]\n      \\bfzero\\\\\\hline\n      \\bfzero\\\\\\hline \n      \\bfzero\n    \\end{array}\\right]\n  \\end{array},\n\\end{equation*}\nwhere\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\bfA_{V_m,V_m} = \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfK_{\\bfsigma_i^k} -\\dfrac{1}{dt}\\bfM,\\\\[4mm]\n    \\bfB_{V_m,\\phi_e} = \\dfrac{1}{A_m^k\\,C_m^k}\\theta\\,\\bfK_{\\bfsigma_i^k},\\\\[4mm]\n    \\bfB_{\\phi_e,V_m} = f_r^k \\,\\bfK_{\\bfsigma_i^k},\\\\[4mm]\n    \\bfB_{\\phi_e,\\phi_e} = \\bfK_{\\bfsigma_e + \\bfsigma_i},\\\\[4mm]\n    \\bfB_{\\Gamma_M,kj} = \\ds\\int_{\\Gamma_M} 
\\psi_j\\,\\varphi_k \\,\\d\\bfx,\\\\[4mm]\n    \\bfC_{\\phi_b,\\phi_b} = \\bfK_{\\bfsigma_b},\\\\[4mm]\n    \\bfI_{\\Gamma_M} \\text{ containing entries } 1 \\text{ for the boundary dofs},\\\\[4mm]\n    \\bfb_{V_m}^{(i+1)} = \\big((\\theta-1)\\, \\dfrac{1}{A_m^k\\,C_m^k}\\bfK_{\\bfsigma_i^k} - \\dfrac{1}{dt}\\bfM\\big) \\bfV_m^{(i)} \n      + (\\theta - 1)\\dfrac{1}{A_m^k\\,C_m^k}\\,\\bfK_{\\bfsigma_i^k}\\,\\bfphi_e^{(i)}\\\\[4mm]\n  \\end{array}\n\\end{equation*}\n\nThis matrix can be condensed and takes the form\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\Leftrightarrow\n    \\quad \n    \\left[\\begin{array}{@{}c|c|c@{}}\n      \\bfA_{V_m,V_m} & \\bfB_{V_m,\\phi_e} & \\\\[2mm]\n      \\bfB_{\\phi_e,V_m} & \\bfB_{\\phi_e,\\phi_e} & \\bfD \\\\[2mm] \\hline\n      &\\bfE &\\bfC_{\\phi_b,\\phi_b}\\\\[2mm]\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      V_{m}^{(i+1)}  \\\\[2mm]\\hline \n      \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n      \\hat{\\bfphi}_{b}^{(i+1)}\n    \\end{array}\\right]\n    = \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfb_{V_m}^{(i+1)} \\\\[2mm]\n      \\bfzero\\\\\\hline\n      \\bfzero\n    \\end{array}\\right]\n  \\end{array}.\n\\end{equation*}\nHere, $\\bfD$ and $\\bfE$ contain entries for the dofs in the elements that are adjacent to the border dofs. The size of the last row and column of the system matrix is the number of dofs in the fat domain without the border dofs, as the border dofs are already included in the second row and column.\n\n$\\hat{\\bfphi}_{b}^{(i+1)}$ is the vector of the body potential without the dofs on the border.\n\n\\subsection{Summary}\n\\begin{equation*}\n  \\begin{array}{lll}\n    \\left[\\begin{array}{@{}ccc|c|c@{}}\n      \\bfA^1_{V_m,V_m} & & &\\bfB^1_{V_m,\\phi_e} & \\\\[2mm]\n      &\\bfA^2_{V_m,V_m} &  &\\bfB^2_{V_m,\\phi_e} & \\\\[2mm]\n      &&\\bfA^k_{V_m,V_m}  &\\bfB^k_{V_m,\\phi_e} & \\\\[2mm]\n      \\bfB^1_{\\phi_e,V_m} & \\bfB^2_{\\phi_e,V_m} & \\bfB^k_{\\phi_e,V_m} & \\bfB_{\\phi_e,\\phi_e} & \\bfD \\\\[2mm] \\hline\n      &&&\\bfE &\\bfC_{\\phi_b,\\phi_b}\\\\[2mm]\n    \\end{array}\\right]\n    \\left[\\begin{array}{@{}c@{}}\n      V_{m}^{1,(i+1)}  \\\\[2mm]\n      V_{m}^{2,(i+1)}  \\\\[2mm]\n      V_{m}^{k,(i+1)}  \\\\[2mm]\\hline \n      \\bfphi_{e}^{(i+1)} \\\\[2mm]\\hline\n      \\hat{\\bfphi}_{b}^{(i+1)}\n    \\end{array}\\right]\n    = \n    \\left[\\begin{array}{@{}c@{}}\n      \\bfb_{V_m}^{1,(i+1)} \\\\[2mm]\n      \\bfb_{V_m}^{2,(i+1)} \\\\[2mm]\n      \\bfb_{V_m}^{k,(i+1)} \\\\[2mm]\n      \\bfzero\\\\\\hline\n      \\bfzero\n    \\end{array}\\right]\n  \\end{array}.\n\\end{equation*}\n\n%\\subsubsection{Flux boundary conditions}\n\n%\\begin{equation*}\n  %\\begin{array}{lll}\n    %(\\bfsigma_i^{k}\\,\u2207V_m^{k})\\cdot \\bfn_m = -(\\bfsigma_i^{k}\\,\u2207\\phi_e)\\cdot \\bfn_m &=: f  \\qquad &\\text{on } \\Gamma_M\\\\[4mm]\n    %(\\bfsigma_e \u2207 \\phi_e)\\cdot \\bfn_m = -(\\bfsigma_b \u2207 \\phi_b)\\cdot \\bfn_m &=: g \\qquad &\\text{on } \\Gamma_M\\\\[4mm]\n    %\\big((\\bfsigma_e + \\bfsigma_i)\u2207\\phi_e\\big)\\cdot \\bfn_m &=: h\n  %\\end{array}\n%\\end{equation*}\n\n\n% -------------- Bibliography --------------------\n\\newpage\n\\nocite{*}\n\\bibliography{literatur}{}\n\\bibliographystyle{abbrv}\n\n% -------------- Appendix ------------\n%\\appendix\n%\\input{8_anhang.tex}\n\n\\end{document}\n", "meta": {"hexsha": "7178e77627bd23d7d7e4fd614de64d86bc2c4057", "size": 33721, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/derivations/multidomain.tex", "max_stars_repo_name": "maierbn/opendihu", 
"max_stars_repo_head_hexsha": "577650e2f6b36a7306766b0f4176f8124458cbf0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2018-11-25T19:29:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-20T04:46:22.000Z", "max_issues_repo_path": "doc/derivations/multidomain.tex", "max_issues_repo_name": "maierbn/opendihu", "max_issues_repo_head_hexsha": "577650e2f6b36a7306766b0f4176f8124458cbf0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-11-12T15:15:58.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-29T15:29:24.000Z", "max_forks_repo_path": "doc/derivations/multidomain.tex", "max_forks_repo_name": "maierbn/opendihu", "max_forks_repo_head_hexsha": "577650e2f6b36a7306766b0f4176f8124458cbf0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-10-17T12:18:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-28T13:24:20.000Z", "avg_line_length": 44.0797385621, "max_line_length": 289, "alphanum_fraction": 0.5983808309, "num_tokens": 15220, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8670357598021707, "lm_q2_score": 0.6619228625116081, "lm_q1q2_score": 0.57391079202818}}
{"text": "\\section{Preliminaries}\n\nAn undirected labeled graph \\db is represented as a tuple $ \\db =\n(\\vg,\\eg,L) $ where $\\vg$ is the set of vertices, $\\eg$ is the set of\nedges and $L\\!\\!: \\vg \\rightarrow \\Sigma $ is a function that maps\nvertices to their labels.  The neighbors of a vertex $v$ are given as $\nN(v) = \\{ u | (u,v) \\in \\eg \\} $.  A\n{\\em path} in a graph $\\db$ is a sequence of vertices $v_0,\\ldots,v_k$\nsuch that $v_i \\in \\vg$, $(v_i,v_{i+1}) \\in \\eg$ and $v_i \\neq v_j$ for\n$i \\neq j$.\nWe use $\\kpath{u}{v}{k}$ if there is a $k$ edge path\nbetween $u$ and $v$.\n%It covers the vertices $\\cup_{i=0}^{k} \\{v_i\\}$ and the edges $\\cup_{i=0}^{k-1}\n%\\{(v_i,v_{i+1})\\}$.  \nA path is called a walk if the vertices\nare allowed to repeat. For example in graph \\ref{subfig:ex_db}, the sequence\n$50, 20, 10, 40, 60, 30$ is a path and the sequence $10, 40, 60, 30, 10$ is\na walk.\n[ can we define this when required ?]\n\n\\smallskip\\noindent{\\bf Cost matrix:}\nWe assume that there is a cost matrix \n$C\\!\\!:\\Sigma^{2} \\rightarrow \\mathbb{R}_{\\geq 0} $. \nThe entry $\\matij{C}{l_i}{l_j}$\ndenotes the cost of matching the labels $l_i$ and $l_j$. Although it is \nnot required by the algorithm, $C$ is usually symmetric and the diagonal\nentries are $0$\n\n\\smallskip\\noindent{\\bf Approximate subgraph isomorphism:}\nA graph $S = (V_S,E_S,L)$ is a subgraph of \\db, denoted $S \\subseteq\n\\db$, iff $V_S \\subseteq \\vg$, and $E_S \\subseteq \\eg$.  $S$ is an\ninduced subgraph if $E_S = \\eg \\cap (V_S \\times V_S)$.  Given a database\ngraph $G$ and a pattern $P = (V_P,E_P,L)$, a function $\\phi\\!\\!: V_P \\to\nV_G$ is called an {\\em unlabeled subgraph isomorphism} provided $\\phi$\nis an injective (or one-to-one) mapping such that $\\forall (u,v) \\in\nE_P$, we have $(\\phi(u),\\phi(v)) \\in \\eg$. That is, $\\phi$ preserves the\ntopology of $P$ in $G$. Define the cost of the isomorphism as follows:\n$C(\\phi) = \\sum_{u \\in \\vp} \\matij{C}{L(u)}{L(\\phi(u))}$, that is, the\nsum of the costs of matching the node labels in $P$ to the corresponding\nnode labels in $G$.  We say that $\\phi$ is an {\\em approximate subgraph\nisomorphism} from $P$ to $G$ provided its cost $C(\\phi) \\le \\alpha$,\nwhere $\\alpha$ is a user-specified threshold on the total cost. In this\ncase we also call $P$ an approximate pattern in $G$. Note\nthat if $\\alpha = 0$, then $\\phi$ is an unlabeled subgraph isomorphism\nbetween $P$ and $G$. 
From now on, \\textit{isomorphism}\nrefers to \\textit{approximate subgraph isomorphism} unless specified\notherwise.\n\n\n%%% Example graphs to illustrate the notation used in the paper\n\\begin{figure}[!h]\n\\vspace{-0.25in}\n\\captionsetup[subfloat]{captionskip=15pt}\n  \\subfloat[Database Graph $\\db$] {\n    \\label{subfig:ex_db}\n  \\scalebox{0.9}{\n    \\begin{pspicture}(-1,0)(3,3)\n      \\putNode{1}{2}{n1}{A}{90}{10}\n      \\putNode{0}{1}{n2}{A}{180}{20}\n      \\putNode{1}{1}{n3}{B}{225}{30}\n      \\putNode{2}{1}{n4}{A}{0}{40}\n      \\putNode{0}{0}{n5}{C}{270}{50}\n      \\putNode{1.5}{0}{n6}{C}{270}{60}\n\n      \\ncline{-}{n1}{n2}\n      \\ncline{-}{n1}{n3}\n      \\ncline{-}{n1}{n4}\n      \\ncline{-}{n2}{n3}\n      \\ncline{-}{n3}{n4}\n      \\ncline{-}{n3}{n6}\n      \\ncline{-}{n4}{n6}\n      \\ncline{-}{n2}{n5}\n    \\end{pspicture}\n\t}}\n  % Graph subgraph isomorphic to the database\n  \\subfloat[Pattern $P$] {\n    \\label{subfig:ex_sub}\n  \\scalebox{0.9}{\n    \\begin{pspicture}(0.5,1)(3,3)\n      \\putNode{2}{3}{n1}{A}{90}{1}\n      \\putNode{1}{2}{n2}{B}{180}{2}\n      \\putNode{3}{2}{n3}{C}{0}{3}\n      \\putNode{2}{1}{n4}{A}{0}{4}\n      \\ncline{-}{n1}{n2}\n      \\ncline{-}{n1}{n3}\n      \\ncline{-}{n2}{n4}\n      \\ncline{-}{n3}{n4}\n    \\end{pspicture}\n\t}}\n  \\newline\n\\captionsetup[subfloat]{captionskip=5pt}\n  \\subfloat[Cost Matrix] {\n    \\label{subfig:ex_match}\n    \\begin{tabular}{|c|c|c|c|c|}\n      \\hline\n      \\M{C} & A &  B & C & D \\\\\n      \\hline\n      A & $0$ & $0.2$ & $0.6$ & $0.1$ \\\\\n      \\hline\n      B & $0.2$ & $0$ & $0.4$ & $0.5$ \\\\\n      \\hline\n      C & $0.6$ & $0.4$ & $0$ & $0.2$ \\\\\n      \\hline\n      D & $0.1$ & $0.5$ & $0.2$ & $0$ \\\\\n      \\hline\n    \\end{tabular}\n  }\n  % Approximately isomorphic\n  \\subfloat[Approximate Embeddings] {\n      \\label{subfig:ex_occur}\n      \\begin{tabular}{|l|c|c|c|c|c|}\n\t\t\\hline\n\t\t & \\multicolumn{5}{|c|}{$\\phi$}\\\\ \n        \\hline\n\t\t$P$ & $1$ & $2$ & $3$ & $4$ & cost \\\\\n        \\hline\n        $\\phi_1$ & $30$ & $10$ & $60$ & $40$ & $0.4$ \\\\\n        $\\phi_2$ & $40$ & $10$ & $60$ & $30$ & $0.4$\\\\\n        \\hline\n      \\end{tabular}\n  }\n  % occurrences of the approx pattern\n  \\caption{ \\protect\\subref{subfig:ex_db}: sample database graph $G$, \n    \\protect\\subref{subfig:ex_sub}: approximate pattern $P$.\n\t\\protect\\subref{subfig:ex_match}: cost matrix.\n\t\\protect\\subref{subfig:ex_occur}: approximate embeddings \n\tof $P$ in $G$.\n  }\n  \\label{fig:ex1}\n\\end{figure}\n\n\\smallskip\\noindent{\\bf Representative set and pattern support:}\nGiven a node $u \\in \\vp$, its {\\em representative set} in the database\ngraph $G$ is the set \n$$R(u) = \\{ v \\in \\vg |\\; \\exists \\phi, \\text{ such\nthat } C(\\phi) \\le \\alpha \\text{ and } \\phi(u) = v \\}$$ \nThat is, the representative set of $u$ comprises all nodes in $G$ that\n$u$ is mapped to in some approximate isomorphism.  \nFigure\n\\ref{fig:ex1} shows an example database, a cost matrix, an approximate\npattern, and its approximate subgraph isomorphism for $\\alpha=0.5$.\nThere are only two possible approximate isomorphisms from $P$ to $G$, as\nspecified by $\\phi_1$ and $\\phi_2$. For example, for $\\phi_1$, we have\n$\\phi_1(1) \\to 30$, $\\phi_1(2) \\to 10$, $\\phi_1(3) \\to 60$, and\n$\\phi_1(4) \\to 40$, as seen in Table~\\ref{subfig:ex_occur}. 
\nThe cost of the isomorphism is \n$C(\\phi_1) = 0.4$, since \n$C(L(1),L(30)) + C(L(2),L(10)) + C(L(3),L(60)) + C(L(4),L(40)) \n= C(A,B) + C(B,A) + C(C,C)+ C(A,A) = 0.2+0.2+0+0 = 0.4$. \nThe representative\nset for node $1 \\in \\vp$ is $R(1) = \\{30, 40\\}$. However, the\nsupport of $P$, i.e., the minimum cardinality of a representative set\nover all nodes of $P$, is $sup(P) = 1$, since node $2 \\in \\vp$ has only one\nmapping in $G$, namely $R(2) = \\{10\\}$.\n\n%%%% Define a frequent pattern\n\n\\subsection{Outline} The two main steps in approximate graph mining are\ncandidate generation and support computation. Candidate generation explores the\nsearch space of the frequent patterns. For each candidate that is\ngenerated, we can check whether it is frequent by computing its support.\n\nGiven a pattern with $k$ vertices, the maximum number of possible isomorphisms\nis $k!\\,{|\\vg| \\choose k} $.  It is therefore infeasible to either\nenumerate or store the complete set of isomorphisms. Computing and storing the\nrepresentative sets is a compromise that will enable us to decide efficiently if\na candidate is frequent.\n\n\nThe rest of the paper is structured as follows. In section\n\\ref{sec:representative}, we propose methods for computing the representative\nsets. In section \\ref{sec:mining}, we describe how the idea of representative\nsets can be combined with different candidate generation and support computation\nmechanisms to yield mining algorithms with different properties. In section\n\\ref{sec:results}, we present detailed experimental results to show that the\nproposed methods can be used to mine interesting and useful frequent patterns\nfrom three real-world datasets.  We discuss the related work in section\n\\ref{sec:relatedwork} and conclude in section \\ref{sec:conclusions}.\n\n\n%%%%%%%%% the definition of support is moved to a later section %%%%%%\n\\if0\nDefine the {\\em\nsupport} of pattern $P$ in a database graph $G$ as \n$$sup(P) = \\min_{u \\in\n\\vp} \\{ |R(u)| \\}$$\nThat is, the minimum cardinality over all\nrepresentative set of nodes in $P$.  A pattern $P$ is called {\\em\nfrequent} if $sup(P) \\geq minsup$, where $minsup$ is a user defined\nsupport threshold.  $P$ is {\\em maximal} iff $P$ is frequent and there\ndoes not exist any supergraph of $P$ that is frequent in $G$.  \n\n\\smallskip\n\\noindent{\\bf Problem Statement:} Given a database graph \\db, minimum\nsupport $minsup$, maximum allowed cost $\\alpha$, and an integer $k$, our\ngoal is to mine $k$ maximal frequent approximate patterns.  Figure\n\\ref{fig:ex1} shows an example database, a cost matrix, an approximate\npattern, and its isomorphisms for $\\alpha=0.5$.\nThere are only two possible isomorphisms from $P$ to $G$, as\nspecified by $\\phi_1$ and $\\phi_2$. For example, for $\\phi_1$, we have\n$\\phi_1(10) \\to 3$, $\\phi_1(20) \\to 1$, $\\phi_1(30) \\to 6$, and\n$\\phi_1(40) \\to 4$, as seen in Table~\\ref{subfig:ex_occur}. 
However, the\nsupport of $P$ is $sup(P) = 1$, since node $20 \\in \\vp$ has only one\nmapping in $G$, namely $R(20) = \\{1\\}$.\n\\fi\n", "meta": {"hexsha": "1075357a1155eaca104b69cc1c577fa3a8445f10", "size": 8747, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/preliminaries.tex", "max_stars_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_stars_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/preliminaries.tex", "max_issues_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_issues_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/preliminaries.tex", "max_forks_repo_name": "PranayAnchuri/approx-graph-mining-with-label-costs", "max_forks_repo_head_hexsha": "4bb1d78b52175add3955de47281c3ee0073c7943", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-08T11:17:33.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-08T11:17:33.000Z", "avg_line_length": 41.4549763033, "max_line_length": 80, "alphanum_fraction": 0.6415914028, "num_tokens": 3135, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5738407658875625}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[\na4paper,\nmargin=1in,\nheadsep=4pt, % separation between header rule and text\n]{geometry}\n\\usepackage{xcolor}\n\\usepackage{fancyhdr}\n\\usepackage{tgschola}\n\\usepackage{lastpage}\n\\usepackage[natbibapa]{apacite}\n\\usepackage{listings}\n\\usepackage{subfigure}\n\\usepackage{color}\n\\usepackage{dsfont}\n\\usepackage{footmisc}\n\\usepackage{verbatim}\n\\usepackage{smartdiagram}\n\\setlength{\\marginparwidth}{0cm}\n\\setlength{\\topmargin}{0cm}\n\\setlength{\\voffset}{0cm}\n\\setlength{\\headsep}{0cm}\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\\usepackage[utf8]{inputenc}\n\\graphicspath{{/Users/auddya/uw-me/2018/mechanicalBidomainModel/}}\n\\lstset{frame=tb,\n\tlanguage=Java,\n\taboveskip=3mm,\n\tbelowskip=3mm,\n\tshowstringspaces=false,\n\tcolumns=flexible,\n\tbasicstyle={\\small\\ttfamily},\n\tnumbers=none,\n\tnumberstyle=\\tiny\\color{gray},\n\tkeywordstyle=\\color{blue},\n\tcommentstyle=\\color{dkgreen},\n\tstringstyle=\\color{mauve},\n\tbreaklines=true,\n\tbreakatwhitespace=true,\n\ttabsize=3\n}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\fancyhead[C]{%\n\t\\footnotesize\\sffamily\n\t\\yourname\\quad\n\tweb: \\textcolor{blue}{\\itshape\\yourweb}\\quad\n\t\\textcolor{blue}{\\youremail}}\n\\fancyfoot[C]{Page \\thepage\\ of \\pageref{LastPage}}\n\n\\newcommand{\\soptitle}{A computational study of mechanical bidomain model in durotaxis}\n\n\\newcommand{\\yourname}{Debabrata Auddya}\n\\newcommand{\\youremail}{auddya@wisc.edu}\n\\newcommand{\\yourweb}{https://github.com/auddya}\n\n\\newcommand{\\statement}[1]{\\par\\medskip\n\t\\textcolor{blue}{\\textbf{#1:}}\\space\n}\n\n\\usepackage[\nbreaklinks,\npdftitle={\\yourname - \\soptitle},\npdfauthor={\\yourname},\nunicode\n]{hyperref}\n\n\\begin{document}\n\t\n\t\\begin{center}\n\t\t\\Large\\soptitle\n\t\\end{center}\n\n\\section*{Parameters}\nN = 101 (No of nodes) \\\\\nL = 0.005 m (Length of domain) \\\\\nnu = 1000 Pa (Intracellular modulus) \\\\\nmu\\_zero = 1000 Pa (Extracellular modulus) \\\\\nK = 50000000000 Pa/$m^{2}$ (Stiffness) \\\\\nT =  200 Pa (Tension) \\\\\n\\section*{Code and Results}\n\\begin{lstlisting}\nN = 101;\nL = 0.005; %0.005m\ng = 0; %100000; %100000 Pa/m\nmu_zero = 1000; %1000 Pa\nnu = 1000; %1000 Pa\nK = 50000000000; %50GPa/m2\nT = 200; %200Pa\nw = zeros(N,1); %Extracellular displacement\nu = zeros(N,1); %Intracellular displacement\nx = zeros(N,1); %x position, useful when plotting\ndelta = (2*L)/(N-1); %Spacing along x direction\niterations = 100;\nfor i = 1:N\nx(i) = L*(2*(i-1)/(N-1)-1); \nmu(i) = mu_zero + g*x(i);\nend\nfor k = 1:iterations\nfor i = 2:(N-1)\na(i) = 4*mu(i)*(w(i+1)+w(i-1))+(mu(i+1)-mu(i-1))*(w(i+1)-w(i-1));\nb(i) = 4*nu*(u(i+1)+u(i-1));\nA(i) = 8*mu(i) + K*delta*delta;\nC = 8*nu + K*delta*delta;\nB = K*delta*delta;\nu(i) = (a(i)*B + A(i)*b(i))/(A(i)*C - B*B);\nw(i) = (a(i)/A(i)) + (B/A(i))*((a(i)*B + A(i)*b(i))/(A(i)*C - B*B));\nend\n%Apply Boundary Conditions\nu(1) = u(2) + (T*delta/(4*nu));\nw(1) = w(2);\nu(N) = u(N-1) - (T*delta/(4*nu));\nw(N) = w(N-1);\nend \nfor i = 1:N\nh(i) = u(i)-w(i);\nend \nplot(x,h)  %if you want plot with x in mm, use plot(x*1000,h)\nxlabel('x');\nylabel('u-w');\ntitle('Difference between extracellular and intracellular displacement');\n\\end{lstlisting}\n\\begin{figure}\n\t\\begin{center}\n\t\t%\n\t\t\\subfigure[Difference in displacements for 
g=0]{%\n\t\t\t\\label{fig:uw0}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w2(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Difference in displacements for g=$10^5$]{%\n\t\t\t\\label{fig:uwvar}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w2(g_var).jpg}\n\t\t}\\\\ %  ------- End of the first row ----------------------%\n\t\t\\subfigure[Intracellular displacement for g=0]{%\n\t\t\t\\label{fig:u0}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u2(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Intracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:uvar}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u2(g_var).jpg}\n\t\t}\\\\\n\t\t\t\\subfigure[Extracellular displacement for g=0]{%\n\t\t\\label{fig:w0}\n\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w2(g_0).jpg}\n\t  }%\n\t\\subfigure[Extracellular displacement for g=$10^5$]{%\n\t\t\\label{fig:wvar}\n\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w2(g_var).jpg}\n\t }%\n\t\\end{center}\n\t\\caption{%\nComparison of results for g=0 and g=$10^5$ Pa/m (Iteration: 100). Elapsed time is 0.320051 seconds.\n\t}%\n\t\\label{fig:subfigures2}\n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t%\n\t\t\\subfigure[Difference in displacements for g=0]{%\n\t\t\t\\label{fig:uw02}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w3(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Difference in displacements for g=$10^5$]{%\n\t\t\t\\label{fig:uwvar2}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w3(g_var).jpg}\n\t\t}\\\\ %  ------- End of the first row ----------------------%\n\t\t\\subfigure[Intracellular displacement for g=0]{%\n\t\t\t\\label{fig:u02}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u3(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Intracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:uvar3}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u3(g_var).jpg}\n\t\t}\\\\\n\t\t\\subfigure[Extracellular displacement for g=0]{%\n\t\t\t\\label{fig:w03}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w3(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Extracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:wvar3}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w3(g_var).jpg}\n\t\t}%\n\t\\end{center}\n\t\\caption{%\n\t\tComparison of results for g=0 and g=$10^5$ Pa/m (Iteration: 1000). 
Elapsed time is 0.366209 seconds.\n\t}%\n\t\\label{fig:subfigures3}\n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t%\n\t\t\\subfigure[Difference in displacements for g=0]{%\n\t\t\t\\label{fig:uw04}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w4(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Difference in displacements for g=$10^5$]{%\n\t\t\t\\label{fig:uwvar4}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w4(g_var).jpg}\n\t\t}\\\\ %  ------- End of the first row ----------------------%\n\t\t\\subfigure[Intracellular displacement for g=0]{%\n\t\t\t\\label{fig:u04}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u4(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Intracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:uvar4}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u4(g_var).jpg}\n\t\t}\\\\\n\t\t\\subfigure[Extracellular displacement for g=0]{%\n\t\t\t\\label{fig:w04}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w4(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Extracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:wvar4}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w4(g_var).jpg}\n\t\t}%\n\t\\end{center}\n\t\\caption{%\n\t\tComparison of results for g=0 and g=$10^5$ Pa/m (Iteration: 10000). Elapsed time is 0.414921 seconds.\n\t}%\n\t\\label{fig:subfigures4}\n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t%\n\t\t\\subfigure[Difference in displacements for g=0]{%\n\t\t\t\\label{fig:uw05}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w5(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Difference in displacements for g=$10^5$]{%\n\t\t\t\\label{fig:uwvar5}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w5(g_var).jpg}\n\t\t}\\\\ %  ------- End of the first row ----------------------%\n\t\t\\subfigure[Intracellular displacement for g=0]{%\n\t\t\t\\label{fig:u05}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u5(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Intracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:uvar5}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u5(g_var).jpg}\n\t\t}\\\\\n\t\t\\subfigure[Extracellular displacement for g=0]{%\n\t\t\t\\label{fig:w05}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w5(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Extracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:wvar5}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w5(g_var).jpg}\n\t\t}%\n\t\\end{center}\n\t\\caption{%\n\t\tComparison of results for g=0 and g=$10^5$ Pa/m (Iteration: 100000). 
Elapsed time is 0.772543 seconds.\n\t}%\n\t\\label{fig:subfigures5}\n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t%\n\t\t\\subfigure[Difference in displacements for g=0]{%\n\t\t\t\\label{fig:uw06}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w6(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Difference in displacements for g=$10^5$]{%\n\t\t\t\\label{fig:uwvar6}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u-w6(g_var).jpg}\n\t\t}\\\\ %  ------- End of the first row ----------------------%\n\t\t\\subfigure[Intracellular displacement for g=0]{%\n\t\t\t\\label{fig:u06}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u6(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Intracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:uvar6}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/u6(g_var).jpg}\n\t\t}\\\\\n\t\t\\subfigure[Extracellular displacement for g=0]{%\n\t\t\t\\label{fig:w06}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w6(g_0).jpg}\n\t\t}%\n\t\t\\subfigure[Extracellular displacement for g=$10^5$]{%\n\t\t\t\\label{fig:wvar6}\n\t\t\t\\includegraphics[width=0.4\\textwidth]{/testResultsNew/w6(g_var).jpg}\n\t\t}%\n\t\\end{center}\n\t\\caption{%\n\t\tComparison of results for g=0 and g=$10^5$ Pa/m (Iteration: 1000000). Elapsed time is 6.259795 seconds.\n\t}%\n\t\\label{fig:subfigures6}\n\\end{figure}\n\\section*{Difference of displacements as a function of stiffness gradient}\n\\begin{figure}[htb]\n  \\begin{center}\n \t \\includegraphics[scale=0.3]{/testResultsNew/variationG.jpg}\n\t\\caption{Difference in intracellular and extracellular displacements about x = 0 as a function of \\textbf{g}}\n\t\\label{fig:uwG}\n \\end{center}\n\\end{figure}\n\\newpage\n\\section*{Over Relaxation}\nVariation of computation time with linearly increasing over relaxation parameter. The \\textbf{stars} indicate the minimum value in each iteration limit. 
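For reference, the update rule behind these timings is standard successive over-relaxation (SOR): each Gauss--Seidel update is blended with the previous value through a relaxation parameter $\\omega$, where $1 < \\omega < 2$ over-relaxes. The following generic Python sketch (not the script used to produce the figures) shows how $\\omega$ enters the iteration:\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef sor(A, b, omega, iterations):\n    # solve A x = b iteratively; omega = 1 recovers Gauss-Seidel\n    n = len(b)\n    x = np.zeros(n)\n    for _ in range(iterations):\n        for i in range(n):\n            gs = (b[i] - A[i, :i] @ x[:i]\n                  - A[i, i+1:] @ x[i+1:]) / A[i, i]\n            x[i] += omega * (gs - x[i])  # over-relaxed update\n    return x\n\\end{lstlisting}\n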
The figures below illustrate iteration ranges from $10^2$ to $10^4$.\n\\begin{figure}[htb]\n  \\begin{center}\n\t \\includegraphics[scale = 0.85]{/testResultsNew/ite2.jpg}\n\t \\caption{Iteration range: 100 - 1000}\n\t \\label{fig:ite2}\n  \\end{center}\n\\end{figure}\n\\begin{figure}[htb]\n  \\begin{center}\n\t \\includegraphics[scale = 0.85]{/testResultsNew/ite3.jpg}\n\t \\caption{Iteration range: 1000 - 10000}\n\t \\label{fig:ite3}\n  \\end{center}\n\\end{figure}\n\\begin{figure}[htb]\n  \\begin{center}\n\t \\includegraphics[scale = 0.85]{/testResultsNew/ite4.jpg}\n\t \\caption{Iteration range: 30000 - 90000}\n\t \\label{fig:ite4}\n  \\end{center}\n\\end{figure}\n%\\newpage\n%\\section*{SOR Iterations against time}\n%\\begin{table}[htb]\n%\t\\centering\n%\t\\resizebox{\\textwidth}{!}{%\n%\t\t\\begin{tabular}{lll}\n%\t\t\tIterations & Time (seconds) & Relaxation Value \\\\\n%\t\t\t100        & 0.000535       & 1.94             \\\\\n%\t\t\t1000       & 0.0055         & 1.87             \\\\\n%\t\t\t10000      & 0.0541         & 1.95             \\\\\n%\t\t\t100000     & 0.5556         & 1.92             \\\\\n%\t\t\t1000000    & 5.4486         & 1.91            \n%\t\t\\end{tabular}%\n%\t}\n%\\end{table}\n\\end{document}\n", "meta": {"hexsha": "360d7c6e58efdda5b2ae5e926a54101f3a4a24fe", "size": 10604, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "resultsOne.tex", "max_stars_repo_name": "auddya/mechanicalBidomainModel", "max_stars_repo_head_hexsha": "c62cc7e5bd8335418b960347500063c7c07c4062", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-02-06T16:58:26.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-06T16:58:26.000Z", "max_issues_repo_path": "resultsOne.tex", "max_issues_repo_name": "auddya/mechanicalBidomainModel", "max_issues_repo_head_hexsha": "c62cc7e5bd8335418b960347500063c7c07c4062", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "resultsOne.tex", "max_forks_repo_name": "auddya/mechanicalBidomainModel", "max_forks_repo_head_hexsha": "c62cc7e5bd8335418b960347500063c7c07c4062", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.3727810651, "max_line_length": 226, "alphanum_fraction": 0.6766314598, "num_tokens": 3946, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5738407615460857}}
{"text": "\n% Vector variables (in Bold Font style)\n\\newcommand{\\va}{\\mathbf{a}} \\newcommand{\\vb}{\\mathbf{b}} \\newcommand{\\vc}{\\mathbf{c}} \\newcommand{\\vd}{\\mathbf{d}} \\newcommand{\\ve}{\\mathbf{e}} \\newcommand{\\vf}{\\mathbf{f}} \\newcommand{\\vg}{\\mathbf{g}} \\newcommand{\\vh}{\\mathbf{h}} \\newcommand{\\vi}{\\mathbf{i}} \\newcommand{\\vj}{\\mathbf{j}} \\newcommand{\\vk}{\\mathbf{k}} \\newcommand{\\vl}{\\mathbf{l}} \\newcommand{\\vm}{\\mathbf{m}} \\newcommand{\\vn}{\\mathbf{n}} \\newcommand{\\vo}{\\mathbf{o}} \\newcommand{\\vp}{\\mathbf{p}} \\newcommand{\\vq}{\\mathbf{q}} \\newcommand{\\vr}{\\mathbf{r}} \\newcommand{\\vs}{\\mathbf{s}} \\newcommand{\\vt}{\\mathbf{t}} \\newcommand{\\vu}{\\mathbf{u}} \\newcommand{\\vv}{\\mathbf{v}} \\newcommand{\\vw}{\\mathbf{w}} \\newcommand{\\vx}{\\mathbf{x}} \\newcommand{\\vy}{\\mathbf{y}} \\newcommand{\\vz}{\\mathbf{z}} \n\\newcommand{\\vA}{\\mathbf{A}} \\newcommand{\\vB}{\\mathbf{B}} \\newcommand{\\vC}{\\mathbf{C}} \\newcommand{\\vD}{\\mathbf{D}} \\newcommand{\\vE}{\\mathbf{E}} \\newcommand{\\vF}{\\mathbf{F}} \\newcommand{\\vG}{\\mathbf{G}} \\newcommand{\\vH}{\\mathbf{H}} \\newcommand{\\vI}{\\mathbf{I}} \\newcommand{\\vJ}{\\mathbf{J}} \\newcommand{\\vK}{\\mathbf{K}} \\newcommand{\\vL}{\\mathbf{L}} \\newcommand{\\vM}{\\mathbf{M}} \\newcommand{\\vN}{\\mathbf{N}} \\newcommand{\\vO}{\\mathbf{O}} \\newcommand{\\vP}{\\mathbf{P}} \\newcommand{\\vQ}{\\mathbf{Q}} \\newcommand{\\vR}{\\mathbf{R}} \\newcommand{\\vS}{\\mathbf{S}} \\newcommand{\\vT}{\\mathbf{T}} \\newcommand{\\vU}{\\mathbf{U}} \\newcommand{\\vV}{\\mathbf{V}} \\newcommand{\\vW}{\\mathbf{W}} \\newcommand{\\vX}{\\mathbf{X}} \\newcommand{\\vY}{\\mathbf{Y}} \\newcommand{\\vZ}{\\mathbf{Z}} \n% Greek Vector variables (in Bold Font style)\n\\newcommand{\\vbeta}{\\bm{\\beta}} \n\\newcommand{\\vbhat}{\\bm{\\hat \\beta}}\n\\newcommand{\\vbstar}{\\bm{\\beta^*}}\n\\newcommand{\\veps}{\\bm{\\epsilon}}\n\\newcommand{\\vmu}{\\bm{\\mu}}\n\\newcommand{\\vtheta}{\\bm{\\theta}}\n\\newcommand{\\valpha}{\\bm{\\alpha}}\n\\newcommand{\\vdelta}{\\bm{\\delta}}\n\n% Constant vectors\n\\newcommand{\\vzero}{\\mathbf{0}}\n\\newcommand{\\vone}{\\mathbf{1}}\n\\newcommand{\\thetac}{\\mathrm{\\theta^{(0)}}}\n\n% Scalars\n\n% Matrix variables (in DS style where possible)\n\\newcommand{\\mA}{\\mathds{A}} \\newcommand{\\mB}{\\mathds{B}} \\newcommand{\\mC}{\\mathds{C}} \\newcommand{\\mD}{\\mathds{D}} \\newcommand{\\mE}{\\mathds{E}} \\newcommand{\\mF}{\\mathds{F}} \\newcommand{\\mG}{\\mathds{G}} \\newcommand{\\mH}{\\mathds{H}} \\newcommand{\\mI}{\\mathds{I}} \\newcommand{\\mJ}{\\mathds{J}} \\newcommand{\\mK}{\\mathds{K}} \\newcommand{\\mL}{\\mathds{L}} \\newcommand{\\mM}{\\mathds{M}} \\newcommand{\\mN}{\\mathds{N}} \\newcommand{\\mO}{\\mathds{O}} \\newcommand{\\mP}{\\mathds{P}} \\newcommand{\\mQ}{\\mathds{Q}} \\newcommand{\\mR}{\\mathds{R}} \\newcommand{\\mS}{\\mathds{S}} \\newcommand{\\mT}{\\mathds{T}} \\newcommand{\\mU}{\\mathds{U}} \\newcommand{\\mV}{\\mathds{V}} \\newcommand{\\mW}{\\mathds{W}} \\newcommand{\\mX}{\\mathds{X}} \\newcommand{\\mY}{\\mathds{Y}} \\newcommand{\\mZ}{\\mathds{Z}}\n\n\\newcommand{\\cA}{\\mathcal{A}} \\newcommand{\\cB}{\\mathcal{B}} \\newcommand{\\cC}{\\mathcal{C}} \\newcommand{\\cD}{\\mathcal{D}} \\newcommand{\\cE}{\\mathcal{E}} \\newcommand{\\cF}{\\mathcal{F}} \\newcommand{\\cG}{\\mathcal{G}} \\newcommand{\\cH}{\\mathcal{H}} \\newcommand{\\cI}{\\mathcal{I}} \\newcommand{\\cJ}{\\mathcal{J}} \\newcommand{\\cK}{\\mathcal{K}} \\newcommand{\\cL}{\\mathcal{L}} \\newcommand{\\cM}{\\mathcal{M}} 
\\newcommand{\\cN}{\\mathcal{N}} \\newcommand{\\cO}{\\mathcal{O}} \\newcommand{\\cP}{\\mathcal{P}} \\newcommand{\\cQ}{\\mathcal{Q}} \\newcommand{\\cR}{\\mathcal{R}} \\newcommand{\\cS}{\\mathcal{S}} \\newcommand{\\cT}{\\mathcal{T}} \\newcommand{\\cU}{\\mathcal{U}} \\newcommand{\\cV}{\\mathcal{V}} \\newcommand{\\cW}{\\mathcal{W}} \\newcommand{\\cX}{\\mathcal{X}} \\newcommand{\\cY}{\\mathcal{Y}} \\newcommand{\\cZ}{\\mathcal{Z}}\n\n\\newcommand{\\ewise}[2]{{#1 * #2}} % element-wise product\n\n\\newcommand{\\EE}{\\mathrm{E}} % Expectation\n\n\\newcommand{\\g}{\\,\\vert\\,} % for conditional probability, division\n\n\\subsection {A fully connected FFNN} \\label {sec: fcffnn}\nForward propagation\n\\begin {equation} \\begin {split}\n& \\text {Initialize the input layer } \\vZ^1 = \\vX, \\vA^1 = \\vZ^1 \\\\\n& \\text {Propagate all activity forward } \\vZ^l = \\mW^l \\vA^{l-1} + \\vB^l, \\vA^l = f^l(\\vZ^l) \\\\\n\\end {split} \\end {equation}\n\n$\\vZ^l = \\mW^l \\vA^{l-1} + \\vB^l$ is equivalent to \n\\begin{equation} \\label {eq: nnwts}\n\\begin{bmatrix}\nZ_1 \\\\\n\\ldots \\\\\nZ_J \\\\\n\\end{bmatrix}\n= \n\\begin{bmatrix}\nW_{1,1} & \\ldots & W_{1,K} \\\\\n\\ldots & \\ldots & \\ldots \\\\\nW_{J,1} & \\ldots & W_{J,K} \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nA_1 \\\\\n\\ldots \\\\\nA_K \\\\\n\\end{bmatrix} \n+\n\\begin{bmatrix}\nB_1 \\\\\n\\ldots \\\\\nB_J \\\\\n\\end{bmatrix}\n\\end{equation}\nwhere layer $l$ has $J$ nodes and layer $l-1$ has $K$ nodes.\n\nBackward propagation\n\\begin {equation} \\begin {split}\n& \\text {Assuming that the loss function is } \\cL = 0.5 (\\vY - \\vA^L)^2 \\\\\n& \\qquad \\text {where $\\vY$ is the known output corresponding to the input $\\vX$} \\\\\n& \\qquad \\text {Calculate the final error } \\nabla_a \\cL =  \\vA^L - \\vY \\\\\n& \\text {Initialize  the back propagation } \\vdelta^L =  \\ewise {\\nabla_a \\cL} {{f^L}^{'} (\\vZ^L)} \\\\\n& \\text {Backpropagate the error } \\vdelta^l =  \\ewise {\\left[ {\\mW^{l+1}}^T \\vdelta^{l+1} \\right]} {{f^{l}}^{'} (\\vZ^l)}, l \\in [2, L-1] \\\\\n& \\text {Calculate the gradient of weights } \\frac {\\partial \\cL} {\\partial \\mW^l} = \\vdelta^l {\\vA^{l-1}}^T, l \\in [2, L] \\\\\n& \\qquad \\text {Equivalently } \\frac {\\partial \\cL} {\\partial W_{j,k}^l} = a_k^{l-1} \\delta_j^l, l \\in [2, L] \\\\\n& \\text {Calculate the gradient of bias weights } \\frac {\\partial \\cL} {\\partial \\vB^l} = \\vdelta^l, l \\in [2, L] \\\\\n& \\qquad \\text {Equivalently } \\frac {\\partial \\cL} {\\partial B_j^l} = \\delta_j^l, l \\in [2, L] \\\\\n\\end {split} \\end {equation}\n\nUpdate ($\\eta$ is a hyperparameter that is not learned by the FFNN)\n\\begin {equation} \\begin {split}\n& \\text {Update weights } \\mW^l = \\mW^l - \\eta \\frac {\\partial \\cL} {\\partial \\mW^l}, l \\in [2, L] \\\\\n& \\text {Update bias weights } \\vB^l = \\vB^l - \\eta \\frac {\\partial \\cL} {\\partial \\vB^l}, l \\in [2, L] \\\\\n\\end {split} \\end {equation}\n\n\n\n\\subsection{Recurrent neural networks}\nRecurrent neural networks (RNN) can be used as classification models for time series data. 
\n\\begin{align*}\n&s_{t} = f_1(W^{s,s}s_{t-1}+W^{s,x}x_ t),\\quad t=1,2,...,T\\\\\n&y=f_2(W^{s,y} s_ T + W_0)\n\\end{align*}\n", "meta": {"hexsha": "1db50bcc84d810c5cce91f2c24c61456de0bb98f", "size": 6122, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/nn.tex", "max_stars_repo_name": "r2cp/MITx_capstone_2", "max_stars_repo_head_hexsha": "a1ef693f8a37c7931900f1721743b1d838ea9908", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/nn.tex", "max_issues_repo_name": "r2cp/MITx_capstone_2", "max_issues_repo_head_hexsha": "a1ef693f8a37c7931900f1721743b1d838ea9908", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/nn.tex", "max_forks_repo_name": "r2cp/MITx_capstone_2", "max_forks_repo_head_hexsha": "a1ef693f8a37c7931900f1721743b1d838ea9908", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-02T14:40:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-02T14:40:57.000Z", "avg_line_length": 65.1276595745, "max_line_length": 779, "alphanum_fraction": 0.6394968964, "num_tokens": 2529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7461389986757758, "lm_q1q2_score": 0.5738407579892815}}
{"text": "\\documentclass{article}\n\\usepackage{mymacro}\n\\usepackage{longtable,booktabs}\n\\usepackage[a4paper, margin=.7in]{geometry}\n\\usepackage{blindtext}\n\n\\def\\tbs{\\textbackslash}\n\n\\begin{document}\n\n\\section{Symbols}\n\nMainly based on physics package\n\n\\begin{center}\n\\begin{table}[htp]\n    \\begin{tabular}{|p{3cm}||l|l|p{6cm}|}\n    \\toprule\n    Macro                     & Usage                            & Effect                   & Comments \\\\\n    \\midrule \\midrule\n    \\tbs quantity             & \\tbs qty(\\tbs frac\\{b\\}\\{a\\})    & $\\qty(\\frac{b}{a})$      & Automatic bracing: (), [], \\{\\}, $\\vert\\vert$\n    \\\\ \\hline\n                              & \\tbs pqty\\{a\\}                   & $\\pqty{a}$               & \\tbs pqty: (); \\tbs bqty: []; \\tbs vqty: $\\abs{}$; \\tbs Bqty: \\{\\}\n    \\\\ \\hline \\hline\n    \\textbf{Bracing}          &                                  &                          & * for no resize, manual bracing: \\tbs big, \\tbs Big, \\tbs bigg, \\tbs Bigg\n    \\\\ \\hline\n    \\tbs absolutevalue        & \\tbs abs\\{a\\}                    & $\\abs{a}$                & Absoluevalue\n    \\\\ \\hline\n    \\tbs norm                 & \\tbs norm\\{a\\}                   & $\\norm{a}$               & Norm\n    \\\\ \\hline\n    \\tbs opnorm               & \\tbs opnorm\\{a\\}                 & $\\opnorm{a}$             & Operator norm\n    \\\\ \\hline\n    \\tbs evaluated            & \\tbs eval\\{a\\}\\_0\\^{}1           & $\\eval{a}_0^1$           & Evaluation, also \\tbs eval(a$\\vert$ \\_0\\^{}1; \\tbs eval[a$\\vert$ \\_0\\^{}1\n    \\\\ \\hline\n    \\tbs order                & \\tbs order\\{a\\}                  & $\\order{a}$              & Order\n    \\\\ \\hline\n    \\tbs commutator           & \\tbs comm\\{a\\}\\{b\\}              & $\\comm{a}{b}$            & Commutator\n    \\\\ \\hline\n    \\tbs anticommutator       & \\tbs acomm\\{a\\}\\{b\\}             & $\\acomm{a}{b}$           & Anti-commutator\n    \\\\ \\hline\n    \\tbs poissonbracket       & \\tbs pb\\{a\\}\\{b\\}                & $\\pb{a}{b}$              & Poison bracket\n    \\\\ \\hline \\hline\n    \\end{tabular}\n    \\begin{tabular}{|p{3cm}||l|l|p{6cm}|}\n    \\toprule\n    \\textbf{Vector}           &                                  &                          &\n    \\\\ \\hline\n    \\tbs vectorbold           & \\tbs vb\\{a\\}                     & $\\vb{a}$                 & Vector as bold (no Greek), * for italic and Greek\n    \\\\ \\hline\n    \\tbs vectorarrow          & \\tbs va\\{a\\}                     & $\\va{a}$                 & Vector with arrow (no Greek), * for italic and Greek\n    \\\\ \\hline\n    \\tbs vectorunit           & \\tbs vu\\{a\\}                     & $\\vu{a}$                 & With hat (no Greek), * for italic and Greek\n    \\\\ \\hline\n    \\tbs dotproduct           & \\tbs vdot                        & $\\vdot$                  & Dot product (bold cdot)\n    \\\\ \\hline\n    \\tbs crossproduct         & \\tbs cross                       & $\\cross$                 & or \\tbs cp\n    \\\\ \\hline\n    \\tbs gradient             & \\tbs grad\\{a\\}                   & $\\grad{a}$               & Also valid for (), []. 
With the \\textit{arrowdel} option, it is typeset with a vector arrow\n    \\\\ \\hline\n    \\tbs divergence           & \\tbs div\\{a\\}                    & $\\div{a}$                & Also valid for (), [].\n    \\\\ \\hline\n    \\tbs laplacian            & \\tbs laplacian\\{a\\}              & $\\laplacian{a}$          & Also valid for (), [].\n    \\\\ \\hline \\hline\n    \\end{tabular}\n\\end{table}\n\\end{center}\n\\begin{longtable}{l||l|l|p{6cm}}\n\\textbf{Derivatives}      &                                  &                          &\n\\\\ \\hline\n\\tbs differential         & \\tbs dd\\{a\\}                     & $\\dd{a}$                 & Differential symbol; also valid for ()\n\\\\ \\hline\n                          & \\tbs dd[3]\\{a\\}                  & $\\dd[3]{a}$              & Power\n\\\\ \\hline\n\\tbs variation            & \\tbs var[3]\\{a\\}                 & $\\var[3]{a}$             & Variation of functional; works as \\tbs dd.\n\\\\ \\hline\n\\tbs derivative           & \\tbs dv[2]\\{a\\}                  & $\\dv[2]{a}$              & Derivative, powers available with []\n\\\\ \\hline\n                          & \\tbs dv\\{f\\}\\{a\\}                & $\\dv{f}{a}$              & Two arguments\n\\\\ \\hline\n                          & \\tbs dv\\{a\\}(f)                  & $\\dv{a}(f)$              & Low form\n\\\\ \\hline\n                          & \\tbs dv*\\{f\\}\\{a\\}               & $\\dv*{f}{a}$             & Inline form\n\\\\ \\hline\n\\tbs partialderivative    & \\tbs pdv\\{f\\}\\{a\\}               & $\\pdv{f}{a}$             & Partial derivative; same as \\tbs dv.\n\\\\ \\hline\n                          & \\tbs pdv\\{f\\}\\{x\\}\\{y\\}          & $\\pdv{f}{x}{y}$          & Can take two variables\n\\\\ \\hline\n\\tbs functionalderivative & \\tbs fdv\\{F\\}\\{g\\}               & $\\fdv{F}{g}$             & Functional derivative; works as \\tbs dv\n\\\\ \\hline \\hline\n\\textbf{Dirac notation}   &                                  &                          & * for no resize\n\\\\ \\hline\n\\tbs ket                  & \\tbs ket\\{a\\}                    & $\\ket{a}$                & Ket\n\\\\ \\hline\n\\tbs bra                  & \\tbs bra\\{a\\}                    & $\\bra{a}$                & Bra\n\\\\ \\hline\n                          & \\tbs bra\\{a\\}\\tbs ket\\{b\\}       & $\\bra{a}\\ket{b}$         & Auto contraction\n\\\\ \\hline\n\\tbs innerproduct         & \\tbs braket\\{a\\}\\{b\\}            & $\\braket{a}{b}$          & Braket. Also \\tbs ip\n\\\\ \\hline\n                          & \\tbs braket\\{a\\}                 & $\\braket{a}$             & Norm\n\\\\ \\hline\n\\tbs outerproduct         & \\tbs ketbra\\{a\\}\\{b\\}            & $\\ketbra{a}{b}$          & Outer, also \\tbs op or \\tbs dyad\n\\\\ \\hline\n\\tbs expectationvalue     & \\tbs expval\\{A\\}                 & $\\expval{A}$             & Expectation value (implicit), also \\tbs ev. (Resize doesn't include A, ** to include)\n\\\\ \\hline\n                          & \\tbs expval\\{A\\}\\{n\\}            & $\\expval{A}{n}$          & Expectation value (explicit)\n\\\\ \\hline\n\\tbs matrixelement        & \\tbs mel\\{n\\}\\{A\\}\\{m\\}          & $\\mel{n}{A}{m}$          & Matrix element, also \\tbs matrixel.\n
(Resize doesn't include A, ** to include)\n\\\\ \\hline \\hline\n\\textbf{Matrix}           &                                  &                          &\n\\\\ \\hline\n\\tbs matrixquantity       & \\tbs mqty\\{a\\& b\\tbs \\tbs c\\& d\\}& $\\begin{matrix}\\mqty{a&b\\\\c&d}\\end{matrix}$        & Matrix, can be grouped as elements in larger matrix. Also works with (), *(), [], $\\norm{}$. \\tbs pmqty: (); \\tbs Pmqty:* (); \\tbs bmqty: []; \\tbs vmqty: $\\norm{}$\n\\\\ \\hline\n\\tbs smallmatrixquantity  & \\tbs smqty\\{a\\& b\\tbs \\tbs c\\& d\\}& $\\begin{matrix}\\smqty{a&b\\\\c&d}\\end{matrix}$       & Small matrix, same as above\n\\\\ \\hline\n\\tbs matrixdeterminant    & \\tbs mdet\\{a\\}                   & $\\begin{matrix}\\mdet{a}\\end{matrix}$               & Determinant\n\\\\ \\hline\n                          & \\tbs smdet\\{a\\}                  & $\\begin{matrix}\\smdet{a}\\end{matrix}$              & Determinant, small version\n\\\\ \\hline\n\\tbs identitymatrix       & \\tbs imat\\{3\\}                   & $\\begin{matrix}\\smqty(\\imat{3})\\end{matrix}$       & Identity matrix\n\\\\ \\hline\n\\tbs xmatrix              & \\tbs xmat\\{x\\}\\{2\\}\\{3\\}         & $\\begin{matrix}\\smqty{\\xmat{x}{2}{3}}\\end{matrix}$  & Matrix filled with $x$\n\\\\ \\hline\n                          & \\tbs xmat*\\{x\\}\\{2\\}\\{3\\}        & $\\begin{matrix}\\smqty{\\xmat*{x}{2}{3}}\\end{matrix}$ & * assigns indices to the elements\n\\\\ \\hline\n\\tbs zeromatrix           & \\tbs zmat\\{2\\}\\{3\\}              & $\\begin{matrix}\\smqty{\\zmat{2}{3}}\\end{matrix}$    & Zero matrix\n\\\\ \\hline\n\\tbs paulimatrix          & \\tbs pmat\\{1\\}                   & $\\begin{matrix}\\smqty{\\pmat{1}}\\end{matrix}$       & Pauli matrix, index in [0, 1, 2, 3]\n\\\\ \\hline\n\\tbs diagonalmatrix       & \\tbs dmat\\{a, b\\}                & $\\begin{matrix}\\smqty{\\dmat{a,b}}\\end{matrix}$     & Diagonal matrix, up to 8 entries; add the [0] option to fill with 0s. Entries may themselves be matrices.\n\\\\ \\hline\n\\tbs antidiagonalmatrix   & \\tbs admat\\{a,b\\}                & $\\begin{matrix}\\smqty{\\admat{a,b}}\\end{matrix}$    & Anti-diagonal matrix, as above.\n\\\\ \\hline \\hline\n\\textbf{Text in math mode}&                                  &                           & Insert text in math mode, including spacing. 
For special macros, see table~\\ref{tab:text}.\n\\\\ \\hline\n\\tbs qqtext               & 1 \\tbs qq\\{word\\} 2              & $1\\qq{word}2$             & with *, only the trailing space is included.\n\\end{longtable}\n\n\\begin{table}[hpb]\n    \\caption{Text in math mode}\\label{tab:text}\n    \\centering\n    \\begin{tabular}{llll}\n    \\tbs qcc & \\tbs qif & \\tbs qthen & \\tbs qotherwise \\\\\n    \\tbs qunless & \\tbs qgiven & \\tbs qusing & \\tbs qassume \\\\\n    \\tbs qsince & \\tbs qlet & \\tbs qfor & \\tbs qall \\\\\n    \\tbs qeven & \\tbs qinteger & \\tbs qand & \\tbs qor \\\\\n    \\tbs qas & \\tbs qin\n    \\end{tabular}\n    \\begin{tabular}{|llll}\n    $1\\qcc1$ & $1\\qif1$ & $1\\qthen1$ & $1\\qotherwise1$ \\\\\n    $1\\qunless1$ & $1\\qgiven1$ & $1\\qusing1$ & $1\\qassume1$ \\\\\n    $1\\qsince1$ & $1\\qlet1$ & $1\\qfor1$ & $1\\qall1$ \\\\\n    $1\\qeven1$ & $1\\qinteger1$ & $1\\qand1$ & $1\\qor1$ \\\\\n    $1\\qas1$ & $1\\qin1$\n    \\end{tabular}\n\\end{table}\n\nOther special functions:\n\nThe functions in Tab.~\\ref{tab:func} can be used as \\tbs sin[2](x):\n$\\sin[2](x)$, which handles the sizing and powers.\n\\begin{table}[hpb]\n    \\caption{Functions}\\label{tab:func}\n\\centering\n\\begin{tabular}{llll}\n\\tbs sin & \\tbs sinh & \\tbs arcsin & \\tbs asin \\\\\n\\tbs cos & \\tbs cosh & \\tbs arccos & \\tbs acos \\\\\n\\tbs tan & \\tbs tanh & \\tbs arctan & \\tbs atan \\\\\n\\tbs csc & \\tbs csch & \\tbs arccsc & \\tbs acsc \\\\\n\\tbs sec & \\tbs sech & \\tbs arcsec & \\tbs asec \\\\\n\\tbs cot & \\tbs coth & \\tbs arccot & \\tbs acot \\\\\n\\tbs log & \\tbs ln\n\\end{tabular}\n\\begin{tabular}{|llll}\n$\\sin$ & $\\sinh$ & $\\arcsin$ & $\\asin$ \\\\\n$\\cos$ & $\\cosh$ & $\\arccos$ & $\\acos$ \\\\\n$\\tan$ & $\\tanh$ & $\\arctan$ & $\\atan$ \\\\\n$\\csc$ & $\\csch$ & $\\arccsc$ & $\\acsc$ \\\\\n$\\sec$ & $\\sech$ & $\\arcsec$ & $\\asec$ \\\\\n$\\cot$ & $\\coth$ & $\\arccot$ & $\\acot$ \\\\\n$\\log$ & $\\ln$\n\\end{tabular}\n\\end{table}\n\nThe functions in Tab.~\\ref{tab:limit} can be used as \\tbs max[2]\\{x\\}:\n$\\max[2]{x}$, which handles the sizing and subscript (the traditional typesetting \\tbs\nmax\\_2 is still available).\nIn display mode, it is\n$$\\max[2]{x}$$\n\\begin{table}[htp]\n    \\caption{Limits}\\label{tab:limit}\n\\centering\n\\begin{tabular}{lll}\n\\tbs max & \\tbs min & \\tbs lim \\\\\n\\tbs sup & \\tbs inf & \\tbs argmin \\\\\n\\tbs argmax\n\\end{tabular}\n\\begin{tabular}{|lll}\n$\\max$ & $\\min$ & $\\lim$ \\\\\n$\\sup$ & $\\inf$ & $\\argmin$ \\\\\n$\\argmax$\n\\end{tabular}\n\\end{table}\n\n\\pagebreak\nThe functions in Tab.~\\ref{tab:ftext} can be used with automatic sizing of [],\n(), and \\{\\}:\n\\begin{table}[htp]\n    \\caption{Function as text}\\label{tab:ftext}\n\\centering\n\\begin{tabular}{lll}\n\\tbs exp    & \\tbs det  & \\tbs Pr \\\\\n\\tbs tr     & \\tbs Tr   & \\tbs Res\\\\\n\\end{tabular}\n\\begin{tabular}{|lll}\n$\\exp$      & $\\det $   & $\\Pr$     \\\\\n$\\tr$       & $\\Tr$     & $\\Res$    \\\\\n\\end{tabular}\n\\end{table}\n\nThe following functions can be used with conditions:\n\\begin{table}[htp]\n    \\caption{Conditional}\\label{tab:cond}\n\\centering\n\\begin{tabular}{lll}\n\\tbs ave[a]\\{b\\}\\{c\\} & \\tbs prob\\{a\\}\\{b\\} \\\\\n\\tbs entro\\{a\\}\\{b\\}  & \\tbs KLdiv\\{a\\}\\{b\\}\\\\\n\\end{tabular}\n\\begin{tabular}{|lll}\n$\\displaystyle\\ave[a]{b}{c}$  & $\\prob{a}{b} $ \\\\\n$\\entro{a}{b} $               & $\\KLdiv{a}{b} $\\\\\n\\end{tabular}\n\\end{table}\n\nThe following functions are provided as plain 
text:\n\\begin{table}[hpb]\n\\centering\n\\begin{tabular}{lll}\n\\tbs rank & \\tbs erf & \\tbs ker \\\\\n\\tbs deg  & \\tbs gcd & \\tbs hom\n\\end{tabular}\n\\begin{tabular}{|lll}\n$\\rank$ & $\\erf$ & $\\ker$ \\\\\n$\\deg$ & $\\gcd$ & $\\hom$\n\\end{tabular}\n\\end{table}\n\nThe following symbols are defined\\linebreak[0]\n\\begin{table}[!hpb]\n    \\caption{Symbols}\\label{tab:sym}\n\\centering\n\\begin{tabular}{llll}\n\\tbs ell & \\tbs binary & \\tbs complex & \\tbs integer \\\\\n\\tbs real & \\tbs natural & \\tbs hilb  &\n\\end{tabular}\n\\begin{tabular}{|llll}\n$\\ell$ & $\\binary$ & $\\complex$ & $\\integer$\\\\\n$\\real$ & $\\natural$ & $\\hilb$ &\n\\end{tabular}\n\\end{table}\n\nAlso, some special operators are provided:\n\n\\tbs principalvalue or \\tbs pv\\{f\\}: $\\pv{f}$, \\tbs PV\\{f\\}: $\\PV{f}$, \\tbs Re:\n$\\Re{z}$, \\tbs Im: $\\Im{z}$.\n\n\\section{Marks and Colors}\n\nSeveral shortcuts for colors are provided with the \\emph{xcolor} package:\n\n\\red{\\tbs red}, \\blue{\\tbs blue}, \\green{\\tbs green}\n\nThe hyperref links are also colored:\n\n\\begin{enumerate}\n    \\item In document ref: \\hyperref[dummylabel]{dummy target}\\label{dummylabel}\n    \\item cite: \\cite{dummy}\n    \\item url: \\url{www.dummyurl}\n\\end{enumerate}\n\nThe marks are provided by the \\emph{soul} package:\n\\begin{enumerate}\n    \\item \\tbs so: \\so{space out}\n    \\item \\tbs ul: \\ul{underline} \n    \\item \\tbs st: \\st{striking out}\n    \\item \\tbs hl: \\hl{highlight}\n\\end{enumerate}\n\n\\blindtext\n\n\n\\begin{thebibliography}{1}\n    \\bibitem{dummy} dummy citation\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "a4013b44c3be2d21f64168826ccafd683a46e821", "size": 12860, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/symbolList.tex", "max_stars_repo_name": "Canoming/latexMacro", "max_stars_repo_head_hexsha": "f62cb91c3d9cd4b47195c6df3f6e1b953f0aa189", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/symbolList.tex", "max_issues_repo_name": "Canoming/latexMacro", "max_issues_repo_head_hexsha": "f62cb91c3d9cd4b47195c6df3f6e1b953f0aa189", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/symbolList.tex", "max_forks_repo_name": "Canoming/latexMacro", "max_forks_repo_head_hexsha": "f62cb91c3d9cd4b47195c6df3f6e1b953f0aa189", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2996632997, "max_line_length": 279, "alphanum_fraction": 0.4867029549, "num_tokens": 4432, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5738407572046086}}
{"text": "%\n% revise the whole content at Aug 10th, 2009\n%\n%\n%\n\n\\chapter{More discussion on operator}\n%\n% this chapter used to describe more on the\n% momentum operator and coordinate Operator; they are the foundations\n% to build more complicated operators, so we also give some\n% mathematical discussion to them. so that in the future, we can\n% use them to derive more properties for the given operator\n%\n\\section{Momentum operator and position operator}\n%\n%\n%\nIn the above content, we have rigorously introduced the momentum\noperator and position operator. Now we are going to give further\ndiscussions for them.\n\nAs a kind of vector operator, the momentum operator and position\noperator can be defined as:\n\\begin{eqnarray}\n% \\nonumber to remove numbering (before each equation)\n  \\mathbf{\\hat{r}} &=& \\hat{x}\\overrightarrow{i} + \\hat{y}\\overrightarrow{j} + \\hat{z}\\overrightarrow{k} \\nonumber \\\\\n  \\mathbf{\\hat{p}} &=& \\hat{p}_{x}\\overrightarrow{i} + \\hat{p}_{y}\\overrightarrow{j} + \\hat{p}_{z}\\overrightarrow{k}\n\\end{eqnarray}\n\nFor the $\\hat{x}, \\hat{y}, \\hat{z}$; in the position representation\naccording to the analysis in\n(\\ref{sec:PRAMR_in_position_representation}) we can simply express\nthem as:\n\\begin{align}\\label{OPERATORMOREeq:14}\n\\hat{x} &= x \\nonumber \\\\\n\\hat{y} &= y \\nonumber \\\\\n\\hat{z} &= z\n\\end{align}\n\nAccordingly, the x, y, z directions of the component for the\nmomentum operator $\\hei{p}$ is defined as:\n\\begin{align}\\label{OPERATORMOREeq:13}\n\\hat{P}_{x} &= -i\\hbar\\frac{\\partial}{\\partial x}   \\nonumber \\\\\n\\hat{P}_{y} &= -i\\hbar\\frac{\\partial}{\\partial y}   \\nonumber \\\\\n\\hat{P}_{z} &= -i\\hbar\\frac{\\partial}{\\partial z}\n\\end{align}\n\nAs for the commutation relationship, it's defined in the axiom\n\\ref{axiom5} and read as:\n\\begin{equation}\\label{OPERATORMOREeq:12}\n[\\hat{x}_{i}, \\hat{x}_{j}] = 0 \\quad [\\hat{P}_{i}, \\hat{P}_{j}] = 0\n\\quad [\\hat{x}_{i}, \\hat{P}_{j}] = i\\hbar\\delta_{ij}\n\\end{equation}\n\nHowever, we can also get this expression from\n(\\ref{OPERATORMOREeq:13}) and (\\ref{OPERATORMOREeq:14}). 
For an\narbitrary $\\Psi(x)$ we can see that:\n\\begin{eqnarray}\n% \\nonumber to remove numbering (before each equation)\n  x\\hat{P}_{x}\\Psi(x) &=& -i\\hbar x\\frac{\\partial\\Psi(x)}{\\partial x} \\nonumber \\\\\n  \\hat{P}_{x}x\\Psi(x) &=& -i\\hbar\\frac{\\partial}{\\partial x}(\\Psi(x) x) \\nonumber \\\\\n                   &=& -i\\hbar x\\frac{\\partial\\Psi(x)}{\\partial x}\n                   -i\\hbar\\Psi(x)\n\\end{eqnarray}\nTherefore we have:\n\\begin{equation}\\label{}\n[\\hat{x}, \\hat{P}_{x}]\\Psi(x) = i\\hbar\\Psi(x) \\rightarrow [\\hat{x},\n\\hat{P}_{x}] = i\\hbar\n\\end{equation}\n\nNow let's introduce an operator $\\hat{R}$, which is defined as:\n\\begin{equation}\\label{}\n\\hat{R}^{2} = \\hei{r}^{2} = x^{2} + y^{2} + z^{2} \\Rightarrow\n\\hat{R} = \\sqrt{x^{2} + y^{2} + z^{2}}\n\\end{equation}\nThe real Hamiltonian always refers to this scalar operator\n$\\hat{R}$ rather than to $\\hei{r}$.\n\nOn the other hand, we also use the inverse operator of $\\hat{R}$\n(written $\\hat{R}^{-1}$):\n\\begin{equation}\\label{}\n\\hat{R}^{-1} = \\frac{1}{\\sqrt{x^{2} + y^{2} + z^{2}}}\n\\end{equation}\n\nIn quantum chemistry, for an ordinary molecular system the\nHamiltonian contains only the kinetic operator $\\hat{T}$ and\n$\\hat{R}^{-1}$ terms (for the electron-electron and nuclear-electron\ninteractions); this is clearly shown in (\\ref{BASICeq:4}).\n\nBy the way, both $\\hei{p}$ and $\\hei{r}$ are hermitian operators.\nSpecifically, in the position representation we have:\n\\begin{align}\\label{OPERATORMOREeq:1}\n\\bra{\\psi}\\hei{r}\\ket{\\phi} &= \\int\\psi^{*}\\bm{r}\\phi d\\tau   \\nonumber \\\\\n&= \\int\\bm{r}\\psi^{*}\\phi d\\tau   \\nonumber \\\\\n&= \\left\\{\\int\\bm{r}\\phi^{*}\\psi d\\tau \\right \\}^{*}   \\nonumber \\\\\n&= \\bra{\\phi}\\hei{r}\\ket{\\psi}^{*}\n\\end{align}\n\nWhat's more, it is natural to see that any function that depends\nonly on $\\hei{r}$, or on ($\\hat{x}, \\hat{y}, \\hat{z}$), is a hermitian\noperator. 
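\n\nThe hermiticity of the momentum operator itself can be checked by integration by parts; this short verification is added here for completeness, and it assumes, as usual, that the wave functions vanish at infinity so that the boundary term drops out:\n\\begin{align}\\label{}\n\\bra{\\psi}\\hat{P}_{x}\\ket{\\phi} &= \\int\\psi^{*}\\left(-i\\hbar\\frac{\\partial\\phi}{\\partial x}\\right) d\\tau   \\nonumber \\\\\n&= \\int\\left(-i\\hbar\\frac{\\partial\\psi}{\\partial x}\\right)^{*}\\phi \\, d\\tau   \\nonumber \\\\\n&= \\left\\{\\int\\phi^{*}\\left(-i\\hbar\\frac{\\partial\\psi}{\\partial x}\\right) d\\tau \\right\\}^{*}   \\nonumber \\\\\n&= \\bra{\\phi}\\hat{P}_{x}\\ket{\\psi}^{*}\n\\end{align}\n\n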
Consequently, the potential operator $\\hat{V}(r)$ (for\nexample, $\\hat{R}$ and $\\hat{R}^{-1}$) in the Hamiltonian\noperator is hermitian. Since the momentum operator is hermitian,\naccording to (\\ref{hermitian_in_operator}) the kinetic operator\n$\\hat{T} = \\frac{\\hei{p}^{2}}{2m}$ is also hermitian; so the\nHamiltonian operator ($\\hat{H} = \\hat{T} + \\hat{V}(r)$) itself is\nhermitian.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{More discussion on the algebra of operators}\n%\n% only directly give more relations to the algorithm of operator\n% see below\n%\n%\n%\nThe introduction to the momentum operator and position operator here\ncan be wound up for a while; now let's present more algebraic rules\nfor operators.\n\nThe dot product and the cross product for the vector operators are\ndefined as:\n\\begin{equation}\\label{OPERATORMOREeq:2}\n\\hei{A}\\cdot\\hei{B} = \\hat{A}_{x}\\hat{B}_{x} +\n\\hat{A}_{y}\\hat{B}_{y} + \\hat{A}_{z}\\hat{B}_{z}\n\\end{equation}\n\\begin{equation} \\label{OPERATORMOREeq:3}\n\\hei{A}\\times\\hei{B} = \\left\\{ \\begin{aligned}\n         (\\hei{A}\\times\\hei{B})_{x} &= \\hat{A}_{y}\\hat{B}_{z} -  \\hat{A}_{z}\\hat{B}_{y} \\nonumber \\\\\n         (\\hei{A}\\times\\hei{B})_{y} &= \\hat{A}_{z}\\hat{B}_{x} -  \\hat{A}_{x}\\hat{B}_{z} \\nonumber \\\\\n         (\\hei{A}\\times\\hei{B})_{z} &= \\hat{A}_{x}\\hat{B}_{y} -  \\hat{A}_{y}\\hat{B}_{x}\n                          \\end{aligned} \\right.\n\\end{equation}\n\nHere we can see that the cross product between operators is similar\nto the case in classical mechanics.\n\nThe power for the operators is defined as:\n\\begin{equation}\\label{}\n\\hat{A}^{n} = \\underbrace{\\hat{A}\\cdot\\hat{A}\\cdots\\hat{A}}_{n}\n\\end{equation}\nHere we note that the definition $\\hat{R}^{2} = \\hei{r}^{2}$ is\nthe dot product of $\\hei{r}$ with itself; also the kinetic operator is\nrelated to $\\hat{P}^{2}$:\n\\begin{equation}\\label{}\n\\frac{\\hat{P}^{2}}{2m} = -\\frac{\\hbar^{2}}{2m}\\left(\n\\frac{\\partial^{2}}{\\partial x^{2}} + \\frac{\\partial^{2}}{\\partial\ny^{2}} + \\frac{\\partial^{2}}{\\partial z^{2}}  \\right)\n\\end{equation}\n\nThen we are going to focus on commutation relations. It is\neasy to prove that for arbitrary operators, the\nexpressions below are correct (using only the definition of the\ncommutator):\n\\begin{align}\\label{OPERATORMOREeq:4}\n[\\hat{A}, \\hat{B}] &= -[\\hat{B}, \\hat{A}] \\nonumber \\\\\n[\\hat{A}, \\hat{A}] &= 0  \\nonumber \\\\\n[\\hat{A}, \\hat{B} \\pm \\hat{C}] &= [\\hat{A}, \\hat{B}] \\pm  [\\hat{A},\n  \\hat{C}] \\nonumber \\\\\n[\\hat{A}, \\hat{B}\\hat{C}] &= \\hat{B}[\\hat{A}, \\hat{C}] + [\\hat{A},\n  \\hat{B}]\\hat{C} \\nonumber \\\\\n[\\hat{A}\\hat{B}, \\hat{C}] &= \\hat{A}[\\hat{B}, \\hat{C}] + [\\hat{A},\n  \\hat{C}]\\hat{B} \\nonumber \\\\\n[\\hat{A}, \\hei{B}\\cdot\\hei{C}] &= [\\hat{A}, \\hei{B}]\\cdot\\hei{C} +\n\\hei{B}\\cdot\n  [\\hat{A}, \\hei{C}] \\nonumber \\\\\n[\\hat{A}, \\hei{B}\\times\\hei{C}] &= [\\hat{A}, \\hei{B}]\\times\\hei{C} +\n\\hei{B}\\times\n  [\\hat{A}, \\hei{C}]\n\\end{align}\nThese rules may be very useful when evaluating the properties of\nsome particular operator.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Commutation relationship between $\\hei{r}$ and $\\hei{p}$}\n%\n% derive some important commutation relationship for the\n% p and r so that in the future we can use them to prove\n% more properties for the 
composite operators\n%\n%\nCommutation is an important subject in quantum mechanics.\nIn this section we focus on some commutation relations\nbetween $\\hei{r}$ and $\\hei{p}$, which are very useful in proving\ncommutation relations for composite operators. Below, for\nconvenience, a vector operator is expressed as:\n\\begin{equation}\\label{}\n\\hei{A} = \\sum_{i=1}^{3}\\overrightarrow{e}_{i}\\hat{A}_{i} =\n\\overrightarrow{e}_{1}\\hat{A}_{1} +\n\\overrightarrow{e}_{2}\\hat{A}_{2} +\n\\overrightarrow{e}_{3}\\hat{A}_{3}\n\\end{equation}\nFor convenience, $\\overrightarrow{e}$ is written as $e$.\n\nFirst let's consider $[\\hei{p}, \\hat{R}^{2}]$.\n\\begin{align}\\label{OPERATORMOREeq:6}\n[\\hei{p}, \\hei{r}^{2}] &= [\\sum e_{i}\\hat{p}_{i}, \\sum\ne_{j}\\hat{r}_{j} \\cdot \\sum e_{j}\\hat{r}_{j}] \\nonumber \\\\\n&=\\sum_{ij} e_{i}e_{j}[\\hat{p}_{i}, \\hat{r}_{j}^{2}] \\nonumber \\\\\n&=\\sum_{ij} e_{i}e_{j}\\left\\{\\hat{r}_{j}[\\hat{p}_{i}, \\hat{r}_{j}] +\n[\\hat{p}_{i}, \\hat{r}_{j}]\\hat{r}_{j} \\right\\} \\nonumber \\\\\n&=-\\sum_{ij} e_{i}e_{j}2i\\hbar\\delta_{ij}r_{j} \\nonumber \\\\\n&=-\\sum_{j} e_{j}2i\\hbar r_{j} \\nonumber \\\\\n&=-2i\\hbar\\hei{r}\n\\end{align}\n\nFrom this we can deduce $[\\hei{p}, \\hat{R}^{2n}]$:\n\\begin{equation}\\label{}\n[\\hei{p}, \\hat{R}^{2n}] = \\hat{R}^{2}[\\hei{p}, \\hat{R}^{2n-2}] +\n[\\hei{p}, \\hat{R}^{2}]\\hat{R}^{2n-2}\n\\end{equation}\nThis is a clear recursion, so we can get that $[\\hei{p},\n\\hat{R}^{2n}] = -2ni\\hbar\\hat{R}^{2n-2}\\hei{r}$.\n\nNext we are going to derive $[\\hei{p}, \\hat{R}]$ from $[\\hei{p},\n\\hat{R}^{2}]$.\n\\begin{align}\\label{OPERATORMOREeq:7}\n[\\hei{p}, \\hat{R}^{2}] &= \\hat{R}[\\hei{p}, \\hat{R}]+ [\\hei{p},\n\\hat{R}]\\hat{R} \\nonumber \\\\\n-2i\\hbar\\hei{r} &= 2\\hat{R}[\\hei{p}, \\hat{R}]\n\\end{align}\nThus $[\\hei{p}, \\hat{R}] = -i\\hbar\\frac{\\hei{r}}{\\hat{R}}$.\n\nWhy do $\\hat{R}$ and $[\\hei{p},\n\\hat{R}]$ commute with each other in (\\ref{OPERATORMOREeq:7})? We\nprove it here. Since $[\\hei{p},\\hat{R}^{2}] = \\hat{R}[\\hei{p}, \\hat{R}]+ [\\hei{p},\n\\hat{R}]\\hat{R}$, the $[\\hei{p},\\hat{R}^{2}]$ is directly related to\n$[\\hei{p}, \\hat{R}]$; thus we start from\n$[\\hei{p},\\hat{R}^{2}]$:\n\\begin{align}\\label{}\n\\left[\\hat{R}, [\\hei{p},\\hat{R}^{2}]\\right] &= \\left[\\hat{R},\n\\hat{R}[\\hei{p},\n\\hat{R}]+ [\\hei{p}, \\hat{R}]\\hat{R}\\right] \\nonumber \\\\\n&=\\left[\\hat{R}, \\hat{R}[\\hei{p}, \\hat{R}]\\right]+ \\left[\\hat{R},\n[\\hei{p}, \\hat{R}]\\hat{R}\\right]\n\\end{align}\nBy using the formula in (\\ref{OPERATORMOREeq:4}) related to\n$[\\hat{A}, \\hat{B}\\hat{C}]$, the expression above\ntransforms into:\n\\begin{align}\\label{}\n\\left[\\hat{R}, [\\hei{p},\\hat{R}^{2}]\\right] &=\n\\hat{R}\\left[\\hat{R},[\\hei{p}, \\hat{R}] \\right] + [\\hat{R},\n\\hat{R}][\\hei{p}, \\hat{R}] + [\\hei{p}, \\hat{R}][\\hat{R}, \\hat{R}] +\n\\left[\\hat{R},[\\hei{p}, \\hat{R}] \\right]\\hat{R} \\nonumber \\\\\n&=\\hat{R}\\left[\\hat{R},[\\hei{p}, \\hat{R}] \\right]+\n\\left[\\hat{R},[\\hei{p}, \\hat{R}] \\right]\\hat{R}\n\\end{align}\nHere we can see that whether $\\hat{R}$ commutes with\n$[\\hei{p}, \\hat{R}]$ is wholly determined by $\\left[\\hat{R},\n[\\hei{p},\\hat{R}^{2}]\\right]$. 
However, since we have proved that\n$[\\hei{p},\\hat{R}^{2}]$ equals $-2i\\hbar\\hei{r}$, it commutes\nwith $\\hat{R}$; thus $\\hat{R}$ commutes with $[\\hei{p}, \\hat{R}]$.\nThis completes the demonstration.\n\nBy induction, we can prove the general expression between\n$\\hat{R}^{n}$ and $\\hei{p}$:\n\\begin{equation}\\label{OPERATORMOREeq:8}\n[\\hei{p}, \\hat{R}^{n}] = -ni\\hbar\\hat{R}^{n-2}\\hei{r} \\quad\n(n=1,2,3, \\cdots)\n\\end{equation}\n\nFor $[\\hei{p}, \\hat{R}^{-n}]$, we can derive it from\n(\\ref{OPERATORMOREeq:8}):\n\\begin{align}\\label{}\n[\\hei{p}, 1] &= \\left[\\hei{p},\n\\hat{R}^{n}\\frac{1}{\\hat{R}^{n}}\\right] \\Rightarrow \\nonumber \\\\\n           0 &=\\hat{R}^{n}\\left[\\hei{p}, \\frac{1}{\\hat{R}^{n}}\\right] +\n\\left[\\hei{p}, \\hat{R}^{n}\\right]\\frac{1}{\\hat{R}^{n}}\n\\end{align}\nTherefore $[\\hei{p}, \\hat{R}^{-n}] = ni\\hbar\\hat{R}^{-n-2}\\hei{r}$;\nthat is, (\\ref{OPERATORMOREeq:8}) in fact holds for negative\nintegers $n$ as well.\n\nHere, we note that by the same procedure as in\n(\\ref{OPERATORMOREeq:6}), we can prove that:\n\\begin{equation}\\label{OPERATORMOREeq:9}\n[\\hei{r}, \\hei{p}\\cdot\\hei{p}] =2i\\hbar\\hei{p}\n\\end{equation}\nThis is useful when considering the commutation relation\nbetween the angular momentum operator and the Hamiltonian operator.\n\nBy the way, we will derive two useful relations which are needed in\nthe later content. Suppose that $f(\\hei{r})$ is a function of the\n$\\hei{r}$; in the position representation we can see that:\n\\begin{align}\\label{OPERATORMOREeq:10}\n(\\hei{p}f(\\hei{r}) - f(\\hei{r})\\hei{p})\\ket{\\Psi} &=\n-i\\hbar\\nabla(f(\\hei{r})\\ket{\\Psi}) + i\\hbar\nf(\\hei{r})(\\nabla\\ket{\\Psi}) \\quad \\hei{p} = -i\\hbar\\nabla\n\\nonumber \\\\\n&=-i\\hbar\\ket{\\Psi}\\nabla(f(\\hei{r})) =\n-i\\hbar\\ket{\\Psi}\\frac{\\partial f(\\hei{r})}{\\partial \\hei{r}}\n\\end{align}\n\nThus we have:\n\\begin{equation}\\label{}\n[\\hei{p}, f(\\hei{r})] = -i\\hbar \\nabla(f(\\hei{r})) =\n-i\\hbar\\frac{\\partial f(\\hei{r})}{\\partial \\hei{r}}\n\\end{equation}\n\nSimilarly, if we express $\\hei{r} =\ni\\hbar\\frac{\\partial}{\\partial \\hei{p}}$ in the momentum representation,\nthen by exchanging the roles of \\heit{p} and \\heit{r} in the\nexpression of (\\ref{OPERATORMOREeq:10}), we can get the similar\nexpression:\n\\begin{equation}\\label{OPERATORMOREeq:11}\n[\\hei{r}, f(\\hei{p})] = i\\hbar \\frac{\\partial f(\\hei{p})}{\\partial\n\\hei{p}}\n\\end{equation}\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"../../main\"\n%%% End:\n", "meta": {"hexsha": "47b71fc6b424e78da8dd6288d2d702e1de14b04a", "size": 12592, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/physics/operatorMore.tex", "max_stars_repo_name": "murfreesboro/fenglai-note", "max_stars_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-16T07:23:48.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-16T07:23:48.000Z", "max_issues_repo_path": "theory/physics/operatorMore.tex", "max_issues_repo_name": "murfreesboro/fenglai-note", "max_issues_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/physics/operatorMore.tex", "max_forks_repo_name": 
"murfreesboro/fenglai-note", "max_forks_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.35, "max_line_length": 117, "alphanum_fraction": 0.6255559085, "num_tokens": 4664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7690802370707281, "lm_q1q2_score": 0.5738407536478045}}
{"text": "\\chapter{Image Feature Extraction}\n\nA subset of characteristics\nwith a high discriminative capacity\nto be extracted\nfrom the set of cutouts\nwas defined.\nAll selected features\nrefer to information related to the texture of the images\nand do not undergo segmentation preprocessing.\n\n\\input{R/average-intensity.tex}\n\nMaximum Intensity Value;\n\nMinimum Intensity Value;\n\n\\section{Local Binary Pattern}\n\nThe Local Binary Pattern (LBP)\nis a derivation of the Texture Unit\ndefined by \\cite{wang_texture_1990}.\nFrom the spatial domain,\nwe can define the LBP feature vector $l$\nfor each pixel $a$\nby comparing the pixel to each of its 8 neighbors.\nIf the center pixel's value is greater than the neighbor's value,\nwe assign $0$ to that position in the vector.\nOtherwise,\nwe write $1$.\nFor example,\nconsider that we have\n\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n  \\hline\n  $a_{i - 1, j + 1} = 194$ & $a_{i, j + 1} = 144$ & $a_{i + 1, j + 1} = 132$ \\\\\n  \\hline\n  $a_{i - 1, j} = 180$ & $a_{i, j} = 136$ & $a_{i + 1, j} = 136$ \\\\\n  \\hline\n  $a_{i - 1, j - 1} = 167$ & $a_{i, j - 1} = 125$ & $a_{i + 1, j - 1} = 129$ \\\\\n  \\hline\n\\end{tabular}\n\\end{center}\n\nThe feature vector will be\n\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n  \\hline\n  $l_{i - 1, j + 1} = 1$ & $l_{i, j + 1} = 1$ & $l_{i + 1, j + 1} = 0$ \\\\\n  \\hline\n  $l_{i - 1, j} = 1$ & & $l_{i + 1, j} = 0$ \\\\\n  \\hline\n  $l_{i - 1, j - 1} = 1$ & $l_{i, j - 1} = 0$ & $l_{i + 1, j - 1} = 0$ \\\\\n  \\hline\n\\end{tabular}\n\\end{center}\n\nWe represented as a matrix\nthat has $l_{i, j}$ empty.\nWe can store the matrix as a linear vector\nby following the 8 neighbours in one direction.\nFor example,\nif we start with $l_{i, j + 1}$\nand go clockwise,\nwe will have\n\n\\begin{center}\n\\begin{tabular}{|c|}\n  \\hline $l_{i, j + 1} = 1$ \\\\\n  \\hline $l_{i + 1, j + 1} = 0$ \\\\\n  \\hline $l_{i + 1, j} = 0$ \\\\\n  \\hline $l_{i + 1, j - 1} = 0$ \\\\\n  \\hline $l_{i, j - 1} = 0$ \\\\\n  \\hline $l_{i - 1, j - 1} = 1$ \\\\\n  \\hline $l_{i - 1, j} = 1$ \\\\\n  \\hline $l_{i - 1, j + 1} = 1$ \\\\\n  \\hline\n\\end{tabular}\n\\end{center}\n\nIn total,\nwe have $2^8 = 256$ possible values for the LBP vector $l$.\nThe histogram of LBP in an image\nshould revel its texture information.\n\n\\input{R/local-binary-pattern.tex}\n\n\\section{Histogram of Oriented Gradients (HOG)}\n\n\\section{Gray-Level Co-Occurrence Matrix (GLCM);}\n\n7 Hu Moments;\n\nHaralick\u2019s Texture Features (Haralick, 1979):\n\n\u2013 Angular Second Moment (Energy);\n\n\u2013 Contrast;\n\n\u2013 Correlation;\n\n\u2013 Variance;\n\n\u2013 Entropy;\n\n\u2013 Maximal Correlation Coefficient;\n\n\u2013 Inverse Difference Moment (Homogeneity);\n\n\u2013 Sum Average, Sum Variance and Sum Entropy;\n\n\u2013 Difference Variance and Difference Entropy;\n\n\u2013 Information Measure of Correlation I and II.\n\n\\input{R/range.tex}\n\n\\input{R/variance.tex}\n", "meta": {"hexsha": "6f189129b68337428eb34db804cb3b922b2ab162", "size": 2723, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "texture.tex", "max_stars_repo_name": "rgaiacs/ufop_masters_dissertation", "max_stars_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-02T16:09:01.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-02T16:09:01.000Z", "max_issues_repo_path": "texture.tex", "max_issues_repo_name": 
"rgaiacs/ufop_masters_dissertation", "max_issues_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "texture.tex", "max_forks_repo_name": "rgaiacs/ufop_masters_dissertation", "max_forks_repo_head_hexsha": "6a673025a8f2c9dfc72e950db4ca0ca185df0f4b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.5041322314, "max_line_length": 79, "alphanum_fraction": 0.6397355858, "num_tokens": 1009, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7461389873857265, "lm_q1q2_score": 0.5738407493063277}}
{"text": "%% Solutions to part 2\n\\pagebreak\n\n\\subsubsection*{2.A}\n\n\n\\subsubsection*{2.B}\nThe Matlab code to calculate the learning curve is included below. Figure~\\ref{fig:part2-learning-curve} shows the learning curve averaged over 100 independent input realizations, and  Figure~\\ref{fig:part2-weights} shows the converged weight vector. \n\nNote that the learning curve converged to a much smaller MMSE. Note further that the bias weight is practically negligible now, and some of the weights corresponding to the squared inputs and to the cross-products are significant. \n\n\\FloatBarrier\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2_learning_curve.eps}\n\t\\caption{Experimental learning curve for nonlinear plant identification using a nonlinear filter based on Volterra series. The learning curve was averaged 100 times. The training signal had variance $\\sigma_r^2 = 4$.}\n\t\\label{fig:part2-learning-curve}\n\\end{figure}\n\\FloatBarrier\n\n\\FloatBarrier\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2_weights.eps}\n\t\\caption{Weight vector after convergence.}\n\t\\label{fig:part2-weights}\n\\end{figure}\n\\FloatBarrier\n\n\\subsubsection*{2.C}\n\nAs shown in Figure~\\ref{fig:part2-learning-curve}, the MMSE estimated using the last 200 samples was equal to $\\approx 0.0046$.\n\n\\subsubsection*{2.D}\nThe Matlab code is included in the appendix. The output of the plant is compared to the output of the adaptive filter in Figure~\\ref{fig:part2-test}. \n\nThe adaptive filter was trained with random signal, but it can almost perfectly track the plant output to a sinusoidal input. The same cannot be accomplished when the adaptive filter is linear as in part 1.E.\n\n\\FloatBarrier\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2_test.eps}\n\t\\caption{Test of the adaptive filter after convergence with a sinusoidal signal. During the first $L+1$ samples, the filter was being initialized.}\n\t\\label{fig:part2-test}\n\\end{figure}\n\\FloatBarrier\n\n\\subsubsection*{2.E}\n\nWhen the training signal variance was $\\sigma_r^2 = 0.4$, the adaptive filter achieved MMSE equal to $1.8\\times 10^{-5}$ during training, but it didn't performed as well as before during test with a sinusoidal signal. The reason for this is that for a smaller training signals, the plant is approximately linear, and the adaptive equalizer cannot learn well the weights that weigh the nonlinear terms of the input. \n\n\\FloatBarrier\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2e_learning_curve.eps}\n\t\\caption{Experimental learning curve for nonlinear plant identification using a nonlinear filter based on Volterra series. The learning curve was averaged 100 times. The training signal had variance $\\sigma_r^2 = 0.4$.}\n\t\\label{fig:part2e-learning-curve}\n\\end{figure}\n\\FloatBarrier\n\n\\FloatBarrier\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2e_weights.eps}\n\t\\caption{Weight vector after convergence.}\n\t\\label{fig:part2e-weights}\n\\end{figure}\n\\FloatBarrier\n\n\\FloatBarrier\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[scale=0.8]{part2e_test.eps}\n\t\\caption{Test of the adaptive filter after convergence with a sinusoidal signal. 
During the first $L+1$ samples, the filter was being initialized.}\n\t\\label{fig:part2e-test}\n\\end{figure}\n\\FloatBarrier\n", "meta": {"hexsha": "fc5a587d19f3ad02f2351a60c7880549fbb81834", "size": 3273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exams/final/practice final/tex/solutions_part2.tex", "max_stars_repo_name": "jkperin/DSP", "max_stars_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2019-05-11T21:48:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-07T08:56:28.000Z", "max_issues_repo_path": "exams/final/practice final/tex/solutions_part2.tex", "max_issues_repo_name": "jkperin/DSP", "max_issues_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exams/final/practice final/tex/solutions_part2.tex", "max_forks_repo_name": "jkperin/DSP", "max_forks_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2019-04-16T01:11:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T07:25:20.000Z", "avg_line_length": 41.9615384615, "max_line_length": 415, "alphanum_fraction": 0.7821570425, "num_tokens": 869, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7461389873857264, "lm_q1q2_score": 0.5738407493063274}}
{"text": "\\section{Tricks}\n\\begin{enumerate}\n\n\\item\nIn the result display, do click-drag-release to copy a selection of the display to the clipboard.\n\n\\item\nIn a script, line breaking is allowed provided the line breaks occur immediately after operators.\nThe scanner will automatically go to the next line after an operator.\n\n\\item\nSetting \\verb$trace=1$ in a script causes each line to be printed just before it is evaluated.\nThis is useful for debugging.\n\n\\item\nThe last result is stored in the symbol $last$.\n\n\\item\nUse \\verb$contract(A)$ to get the mathematical trace of matrix $A$.\n\n\\item\nUse \\verb$binding(s)$ to get the unevaluated binding of symbol $s$.\n\n\\item\nUse \\verb$s=quote(s)$ to clear symbol $s$.\n\n\\item\nUse \\verb$float(pi)$ to get the floating point value of $\\pi$.\nSet \\verb$pi=float(pi)$ to evaluate expressions with a numerical value for $\\pi$.\nSet \\verb$pi=quote(pi)$ to make $\\pi$ symbolic again.\n\n\\item\nAssign strings to unit names so they are printed normally.\nFor example, setting \\verb$meter=\"meter\"$ causes the symbol {\\it meter}\nto be printed as meter instead of $m_{eter}$.\n\n\\item\nUse \\verb$expsin$ and \\verb$expcos$ instead of \\verb$sin$ and \\verb$cos$.\nTrigonometric simplifications occur automatically when exponentials are used.\n\n\\item\nUse \\verb$A==B$ or \\verb$A-B==0$ to test for equality of $A$ and $B$.\nThe equality operator \\verb$==$ uses a cross multiply algorithm to eliminate denominators.\nHence \\verb$==$ can typically determine equality even when the unsimplified result of $A-B$ is nonzero.\n\n\\end{enumerate}\n", "meta": {"hexsha": "9a761535af3834156926537ad673767f0d870c0f", "size": 1540, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tricks.tex", "max_stars_repo_name": "franko/eigenmath", "max_stars_repo_head_hexsha": "3fca24d09686220e430108dac4864a192d138f13", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/tricks.tex", "max_issues_repo_name": "franko/eigenmath", "max_issues_repo_head_hexsha": "3fca24d09686220e430108dac4864a192d138f13", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/tricks.tex", "max_forks_repo_name": "franko/eigenmath", "max_forks_repo_head_hexsha": "3fca24d09686220e430108dac4864a192d138f13", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.7659574468, "max_line_length": 103, "alphanum_fraction": 0.7487012987, "num_tokens": 405, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7461389873857265, "lm_q1q2_score": 0.5738407414080463}}
{"text": "\\documentclass[t,usenames,dvipsnames]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, tikz, xcolor, bm, pgfplots, array}\n\\pgfplotsset{compat = newest}\n\\usetikzlibrary{arrows.meta, calc, decorations.pathreplacing, patterns, decorations.markings}\n\\tikzset{>=stealth}\n\n\\title{Parametric Equations}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Objectives}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}\n    \\titlepage\n\\end{frame}\n\n\\section{Sketch a parametric curve.}\n\n\\begin{frame}{Intro}\nUp until now, we have looked at functions that define $x$ in terms of $y$ (or $y$ in terms of $x$ for inverse functions). \\newline\\\\ \\pause\n\nIn this section, we will look at parametric functions: ones in which $x$ and $y$ are defined by a \\alert{parameter}, such as $t$.\n\\end{frame}\n\n\\begin{frame}{Intro}\nFor instance, the plot below could show the path a bug might take (starting at $O$) while walking on a table: \\newline\\\\\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\begin{axis}[\n    axis lines = middle,\n    xmin = -1, xmax = 7.5,\n    ymin = -1, ymax = 6,\n    xtick = {1,2,3,4,5},\n    ytick = {1,2,3,4,5},\n    xlabel = $x$, ylabel = $y$,\n    ]\n    \\addplot [mark = *, mark size=2pt] coordinates {(5,1)} node [right] {$O$};\n    \\addplot [domain = -2:2, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.25 with {\\arrow{>};},\n    mark = at position 0.5 with {\\arrow{>};},\n    mark = at position 0.75 with {\\arrow{>};},\n    mark = at position 1 with {\\arrow{>};}\n    }}\n    ] ({t^2+1}, {t^3-3*t+3}) node [left, above] {$P(x,y) = (f(t),g(t))$};\n    \\end{axis}\n    \\end{tikzpicture}\n\\end{center}    \n\\end{frame}\n\n\\begin{frame}{Intro}\nThe independent variable ($t$ in this case) is called a \\alert{parameter}.    \\newline\\\\  \\pause\n\nThe system of equations\n\\[\n\\begin{cases}\n    x &= f(t)   \\\\\n    y &= g(t)     \n\\end{cases}\n\\]\n\nis called a \\alert{parametrization} of the curve.   \\newline\\\\  \\pause\n\n\\emph{Note}: The curve itself is a set of points and is devoid of any orientation.  
\n\\end{frame}\n\n\\begin{frame}{Example 1}\nSketch the curve described by \n\\[\n\\begin{cases}\n    x &= t^2 - 3    \\\\\n    y &= 2t - 1\n\\end{cases}\n\\quad \\text{ for } t \\geq -2\n\\]\n\\pause\n\\begin{minipage}{0.5\\textwidth}\n\\setlength{\\extrarowheight}{3pt}\n\\begin{tabular}{cccc}\n                    $t$ & $x(t)$ & $y(t)$ & $(x(t), y(t))$ \\\\ \\hline\n    \\onslide<2->{$-2$  & 1 & $-5$ & $(1,-5)$ \\\\[3pt]}\n    \\onslide<3->{$-1$ & $-2$ & $-3$ & $(-2,-3)$ \\\\[3pt]}\n    \\onslide<4->{0 & $-3$ & $-1$ & $(-3,-1)$ \\\\[3pt]}\n    \\onslide<5->{1 & $-2$ & 1 & $(-2,1)$ \\\\[3pt]}\n    \\onslide<6->{2 & 1 & 3 & $(1,3)$ \\\\[3pt]}\n    \\onslide<7->{3 & 6 & 5 & $(6, 5)$ \\\\}\n\\end{tabular}\n\\end{minipage}\n\\hspace{-0.25cm}\n\\begin{minipage}{0.5\\textwidth}\n\\onslide<8->{\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -3, xmax = 6,\n    ymin = -5, ymax = 5,\n    xtick = {-3,-2,...,6},\n    ytick = {-5,-4,...,5},\n    xlabel = $x$, ylabel = $y$,\n    ]\n    \\addplot [mark = *, mark size=2pt, only marks] coordinates {(1,-5) (-2,-3) (-3,-1) (-2,1) (1,3) (6,5)};\n    \\addplot [domain = -2:3, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({t*t-3},{2*t-1});\n    \\end{axis}\n\\end{tikzpicture}}\n\\end{minipage}\n\\end{frame}\n\n\n\\section{Rewrite an equation by eliminating the parameter.}\n\n\n\\begin{frame}{Eliminating the Parameter}\nWe can eliminate the parameter $t$ by solving one of the equations for $t$ and substituting it into the other.\n\\end{frame}\n\n\\begin{frame}{Example 2}\nEliminate the parameter in Example 1 and write the equation using only $x$ and $y$.\n\\[\n\\begin{cases}\n    x &= t^2 - 3    \\\\\n    y &= 2t - 1\n\\end{cases}\n\\quad \\text{ for } t \\geq -2\n\\]\n\\begin{align*}\n    \\onslide<2->{y &= 2t - 1} \\\\[8pt]\n    \\onslide<3->{y+1 &= 2t} \\\\[8pt]\n    \\onslide<4->{t &= \\frac{y+1}{2}} \\\\[8pt]\n    \\onslide<5->{x &= t^2 - 3} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n    \\begin{align*}\n        x &= t^2 - 3    \\\\[6pt]\n        \\onslide<2->{x &= \\left(\\frac{y+1}{2}\\right)^2 - 3} \\\\[6pt]\n        \\onslide<3->{x+3 &= \\frac{(y+1)^2}{4}} \\\\[6pt]\n        \\onslide<4->{4(x+3) &= (y+1)^2} \\\\[6pt]\n        \\onslide<5->{t &= \\frac{y+1}{2}}    \\\\[6pt]\n        \\onslide<6->{\\frac{y+1}{2} &\\geq -2} \\\\[6pt]\n        \\onslide<7->{y &\\geq -5} \\\\\n    \\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n    \\[  4(x+3) = (y+1)^2, \\quad y \\geq -5\n    \\]\n\\end{frame}\n\n\\begin{frame}{Example 3a}\n    Sketch each of the following curves.    
\\newline\\\\\n(a) \\quad $\\begin{cases}\n        x &= t^3    \\\\\n        y &= 2t^2   \\\\\n    \\end{cases}$    \\quad for $-1 \\leq t \\leq 1$    \\newline\\\\    \\pause\n\\begin{minipage}{0.6\\textwidth}\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -2, xmax = 2,\n    ymin = -1, ymax = 3,\n    xtick = {-2,-1,...,2},\n    ytick = {-1,0,...,3},\n    xlabel = $x$, ylabel = $y$\n]\n\\addplot [mark = *, mark size = 2pt, only marks] coordinates {(-1,2) (1,2)}; \n\\addplot [domain = -1:1, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({t*t*t},{2*t*t});\n\\end{axis}\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-0.5cm}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<3->{x &= t^3} \\\\[8pt]\n    \\onslide<4->{t &= \\sqrt[3]{x}} \\\\[8pt]\n    \\onslide<5->{y &= 2\\left(\\sqrt[3]{x}\\right)^2}    \\\\[8pt]\n    \\onslide<6->{y &= 2\\cdot \\sqrt[3]{x^2}}   \\\\[8pt]\n    \\onslide<7->{y &= 2x^{2/3}} \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 3b}\n    (b) \\quad $\\begin{cases}\n        x &= 2e^{-t} \\\\\n        y &= e^{-2t}    \\\\\n    \\end{cases}$    \\quad for $t \\geq 0$    \\newline\\\\  \\pause\n\\begin{minipage}{0.6\\textwidth}\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -1, xmax = 3,\n    ymin = -1, ymax = 2,\n    xtick = {-1,0,...,3},\n    ytick = {-1,0,...,2},\n    xlabel = $x$, ylabel = $y$\n]\n\\addplot [mark = *, mark size = 2pt, only marks] coordinates {(2,1)}; \n\\addplot [domain = 0:5, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({2*e^(-1*t)},{e^(-2*t)});\n\\draw [fill=white] (axis cs: 0,0) circle (2pt);\n\\end{axis}\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-0.5cm}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<3->{x &= 2e^{-t}} \\\\[6pt]\n    \\onslide<4->{e^{-t} &= \\frac{x}{2}} \\\\[6pt]\n    \\onslide<5->{y &= \\left(e^{-t}\\right)^2} \\\\[6pt]\n    \\onslide<6->{y &= \\left(\\frac{x}{2}\\right)^2} \\\\[6pt]\n    \\onslide<7->{y &= \\frac{x^2}{4}} \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 3c}\n(c) \\quad $\\begin{cases}\n        x &= \\sin t \\\\\n        y &= \\csc t \\\\\n    \\end{cases}$    \\quad for $0 < t < \\pi$ \\newline\\\\  \\pause\n\\begin{minipage}{0.6\\textwidth}\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -1, xmax = 2,\n    ymin = -1, ymax = 4,\n    xtick = {-1,0,1,2},\n    ytick = {-1,0,...,4},\n    xlabel = $x$, ylabel = $y$\n]\n\\addplot [mark = *, mark size = 2pt, only marks] coordinates {(1,1)}; \n\\addplot [domain = 0.25:3.0, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with 
{\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({sin(deg(t))},{1/(sin(deg(t)))});\n\\end{axis}\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-0.25cm}\n\\begin{minipage}{0.3\\textwidth}\n\\begin{align*}\n    \\onslide<3->{x &= \\sin t} \\\\[6pt]\n    \\onslide<4->{y &= \\csc t} \\\\[10pt]\n    \\onslide<5->{y &= \\frac{1}{\\sin t}} \\\\[10pt]\n    \\onslide<6->{y &= \\frac{1}{x}} \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 3d}\n(d) \\quad $\\begin{cases}\n        x &= 1 + 3\\cos t    \\\\\n        y &= 2\\sin t    \\\\\n    \\end{cases}$    \\quad for $0 \\leq t \\leq \\frac{3\\pi}{2}$\n    \\newline\\\\  \\pause\n\\begin{minipage}{0.5\\textwidth}\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -3, xmax = 5,\n    ymin = -3, ymax = 3,\n    xtick = {-3,-2,...,5},\n    ytick = {-3,-2,...,3},\n    xlabel = $x$, ylabel = $y$\n]\n\\addplot [mark = *, mark size = 2pt, only marks] coordinates {(4,0) (1,2) (-2,0) (1,-2)}; \n\\addplot [domain = 0:4.71, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({1+3*cos(deg(t))},{2*sin(deg(t))});\n\\end{axis}\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-0.5cm}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<3->{x &= 1 + 3\\cos t} \\\\[10pt]\n    \\onslide<4->{\\frac{x-1}{3} &= \\cos t} \\\\[10pt]\n    \\onslide<5->{y &= 2 \\sin t} \\\\[10pt]\n    \\onslide<6->{\\frac{y}{2} &= \\sin t} \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 3d}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{tikzpicture}[scale=0.7]\n\\begin{axis}[\n    axis lines = middle,\n    grid,\n    xmin = -3, xmax = 5,\n    ymin = -3, ymax = 3,\n    xtick = {-3,-2,...,5},\n    ytick = {-3,-2,...,3},\n    xlabel = $x$, ylabel = $y$\n]\n\\addplot [mark = *, mark size = 2pt, only marks] coordinates {(4,0) (1,2) (-2,0) (1,-2)}; \n\\addplot [domain = 0:4.71, samples = 100, variable = \\t,\n    postaction={decorate, decoration={markings, \n    mark = at position 0.15 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.4 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.65 with {\\arrow[ultra thick, red]{>};},\n    mark = at position 0.85 with {\\arrow[ultra thick, red]{>};}\n    }}] ({1+3*cos(deg(t))},{2*sin(deg(t))});\n\\end{axis}\n\\end{tikzpicture}\n\\end{minipage}\n\\hspace{-0.5cm}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\cos^2 t + \\sin^2 t &= 1 \\\\[10pt]\n    \\onslide<2->{\\left(\\frac{x-1}{3}\\right)^2 + \\left(\\frac{y}{2}\\right)^2 &= 1} \\\\[10pt]\n    \\onslide<3->{\\frac{(x-1)^2}{9} + \\frac{y^2}{4} &= 1} \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Parametrizations of Common Curves}\n\\begin{itemize}\n    \\item For $y = f(x)$, as $x$ runs through some interval $I$, let $x=t$ and $y = f(t)$ and let $t$ run through $I$.    \\newline\\\\  \\pause\n    \\item For $x = g(y)$, as $y$ runs through some interval $I$, let $x=g(t)$ and $y = t$ and let $t$ run through $I$.    
\\newline\\\\  \\pause\n    \\item For a directed line segment with initial point $(x_0,y_0)$ and terminal point $(x_1,y_1)$ let $x = x_0 + (x_1-x_0)t$ and let $y = y_0 + (y_1-y_0)t$ for $0 \\leq t \\leq 1$.  \\newline\\\\  \\pause\n    \\item For an ellipse in the form $\\frac{(x-h)^2}{a^2} + \\frac{(y-k)^2}{b^2}=1$, let $x = h + a\\cos t$ and $y = k + b\\sin t$ for $0 \\leq t < 2\\pi$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Example 4}\nFind a parametrization for each of the following.   \\newline\\\\  \n(a) \\quad $y = x^2$ from $x = -3$ to $x = 2$        \n    \\onslide<2->{\\[ x = t \\quad  \\text{and} \\quad y = t^2 \\quad \\text{for } -3 \\leq t \\leq 2 \\]}\n\n\\onslide<3->{(b) \\quad $y = f^{-1}(x)$ where $f(x) = x^5 + 2x + 1$}\n\\begin{align*}\n    \\onslide<4->{y &= x^5 + 2x + 1} \\\\[8pt]\n    \\onslide<5->{x &= y^5 + 2y + 1}   \\\\[8pt]\n    \\onslide<6->{y = t \\quad & \\quad x = t^5 + 2t + 1 \\quad \\text{for } -\\infty < t < \\infty}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4c}\n(c) \\quad The line segment which starts at $(2,-3)$ and ends at $(1,5)$\n\\begin{align*}\n    \\onslide<2->{x_1 - x_0 &= 1 - 2} \\\\\n    \\onslide<3->{&= -1} \\\\\n    \\onslide<4->{x &= x_0 + (x_1 - x_0)t} \\\\\n    \\onslide<5->{x &= 2 + (-1)t} \\\\\n    \\onslide<6->{{\\color{red}x} &{\\color{red}= 2 - t} \\\\\n        y_1 - y_0 &= 5 - (-3)} \\\\\n    \\onslide<7->{&= 8} \\\\\n    \\onslide<8->{y &= y_0 + (y_1 - y_0)t} \\\\\n    \\onslide<9->{{\\color{red}y} &{\\color{red}= -3 + 8t}} \\\\\n    \\onslide<10->{& \\text{for } 0 \\leq t \\leq 1}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(d) \\quad The circle $x^2+2x+y^2-4y=4$\n\\begin{align*}\n    \\onslide<2->{\\frac{(x+1)^2}{9} + \\dfrac{(y-2)^2}{9} &= 1} \\\\[10pt]\n    \\onslide<3->{x = -1 + 3\\cos t \\quad & \\quad y = 2 + 3\\sin t \\quad \\text{for } 0 \\leq t < 2\\pi} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(e) \\quad The left half of the ellipse $\\dfrac{x^2}{4} + \\frac{y^2}{9} = 1$\n\\begin{align*}\n    \\onslide<2->{x &= 2\\cos t} \\\\[8pt]\n    \\onslide<3->{y &= 3\\sin t} \\\\[8pt]\n    \\onslide<4->{& \\text{for } \\frac{\\pi}{2} \\leq t \\leq \\frac{3\\pi}{2}}\n\\end{align*}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "d455f9ab1717ff22bc0af49446b83fa586190b54", "size": 13451, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Parametric_Equations(BEAMER).tex", "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Parametric_Equations(BEAMER).tex", "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Parametric_Equations(BEAMER).tex", "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "avg_line_length": 32.3341346154, "max_line_length": 200, "alphanum_fraction": 0.5551260129, 
"num_tokens": 5416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8056321959813274, "lm_q1q2_score": 0.5737971786370002}}
{"text": "\\subsubsection{Circular Cylinders}\r\n\\noindent\r\nA circular cylinder is what we usually think of as a cylinder. It is a circle extruded into 3D space. One of its forms is \r\n\\begin{equation*}\r\n\tx^2 + y^2 = R^2\t\r\n\\end{equation*}\r\nwhere $R$ is the cylinder's radius.\r\n\r\n[INSERT IMAGE]", "meta": {"hexsha": "096abc10561c9623612815843eaccde22b0251ce", "size": 279, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/circularCylinder.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/circularCylinder.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/circularCylinder.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0, "max_line_length": 123, "alphanum_fraction": 0.7168458781, "num_tokens": 84, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8056321703143954, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5737971603561851}}
{"text": "\\section{Model}\n\n\\begin{definition}\\label{def:mp}\n  A Markov process is a stochastic process $Q_0, Q_1, \\dots$ for which\n  \\begin{align*}\n    p(Q_t=q_t \\gv Q_0=q_0, Q_1=q_1, \\dots, Q_{t-1}=q_{t-1}) = p(Q_t=q_t \\gv Q_{t-1}=q_{t-1}).\n  \\end{align*}\n  There is one, and only one, initial state $q_0$ such that $p(Q_0=q_0)=1$.\n  The possible values of $Q_t$ form a finite set $\\set{Q}$ called the state space.\n\\end{definition}\n\n\\begin{definition}\\label{def:hmm}\n  Let $\\set{A}$ be a finite set of symbols.\n  Let $Q_0, Q_1, \\dots$ be a Markov process and let $S_0, S_1, \\dots$\n  be a stochastic process for which\n  \\begin{align*}\n    p(S_t\\in\\set{A} \\gv Q_0=q_0, Q_1=q_1, \\dots, Q_t=q_t) = p(S_t\\in\\set{A} \\gv Q_t=q_t),\n  \\end{align*}\n  for $t>0$, and $p(S_0=\\emptyset \\gv Q_0=q_0)=1$.\n  The pair $(Q_t, S_t)$ is a hidden Markov model (HMM) with alphabet $\\set{A}$ and $\\emptyset$\n  represents an empty sequence.\n\\end{definition}\n\nThe standard HMM definition is often extended to account for non-initial states that\ndo not emit symbols. Those states are referred to as silent states and are useful to\ndescribe a missing alignment position, for example. This section goes a step further by\ndefining a more general hidden Markov model that accounts for states that instead emit\nsequence of symbols of variable length, including zero-length sequences.\n\n% \\begin{definition}\n%   Let $\\set{A}$ be a finite set of symbols, $k\\in\\field{N}_0$, and define\n%   $\\set{B}=\\bigcup_{i=0}^k\\set{A}^i$.\n%   Let $Q_0, Q_1, \\dots$ be a Markov process.\n%   Let $S_0, S_1$ be a stochastic process for which\n%   \\begin{equation*}\n%     p(S_t\\in\\set{B} \\gv Q_0=q_0, Q_1=q_1, \\dots, Q_t=q_t)\n%     = p(S_t\\in\\set{B} \\gv Q_t=q_t)\n%   \\end{equation*}\n%   for $t>0$, and $p(S_0=\\emptyset \\gv Q_0=q_0)=1$.\n%   The pair $(Q_t, S_t)$ is an invisible Markov model (IMM) with alphabet $\\set{A}$ and\n%   limit $k$.\n% \\end{definition}\n\n\\begin{definition}\n  Let $\\set{A}$ be a finite set of symbols.\n  Let $Q_0, Q_1, \\dots$ be a Markov process.\n  Let $F_0, F_1, \\dots$ and $V_0, V_1$ be two stochastic processes for which\n  \\begin{equation*}\n    p(F_t\\in\\field{N}, V_t\\in\\set{A}^{f_t} \\gv Q_0=q_0, Q_1=q_1, \\dots, Q_t=q_t)\n    = p(F_t\\in\\field{N}, V_t\\in\\set{A}^{f_t} \\gv Q_t=q_t)\n  \\end{equation*}\n  and $p(F_0=0, V_t=\\emptyset \\gv Q_0=q_0) = 1$.\n  The triplet $(Q_t, F_t, V_t)$ is an invisible Markov model (IMM) with alphabet $\\set{A}$.\n\\end{definition}\n\nLet $\\arr{s}=s_1 .. 
s_L$ be a sequence emitted from an IMM.\nThe marginal likelihood of $\\arr{s}$ is given by\n\\begin{align}\\label{eq:ml}\n  \\mathrm{ML}(\\arr{s}) = p(V_0=\\arr{s}) + p(V_0\\neq\\arr{s}, V_0||V_1=\\arr{s}) +\n  p(V_0\\neq\\arr{s}, V_0||V_1\\neq\\arr{s}, V_0||V_1||V_2=\\arr{s}) + \\dots,\n\\end{align}\nwhere $||$ denotes sequence concatenation.\nNote that\n\\begin{align*}\n  p(V_0 \\neq \\arr{s}, V_0||V_1=\\arr{s}) &= p(V_0\\neq\\arr{s}, V_0||V_1=\\arr{s}, F_0=L) +\n  p(V_0\\neq\\arr{s}, V_0||V_1=\\arr{s}, F_0 \\neq L)\\\\\n  &= p(V_0\\neq\\arr{s}, V_0||V_1=\\arr{s}, F_0=L) + p(V_0||V_1=\\arr{s}, F_0 \\neq L)\\\\\n  &= p(V_0||V_1=\\arr{s}, F_0 \\neq L) = p(V_0||V_1=\\arr{s}, F_0 < L).\n\\end{align*}\nSimilarly,\n\\begin{align*}\n  p(V_0\\neq\\arr{s}, V_0||V_1\\neq\\arr{s}, V_0||V_1||V_2=\\arr{s}) =\n  p(V_0||V_1||V_2=\\arr{s}, F_0+F_1 < L).\n\\end{align*}\nLet $V_{0..t} = V_0||V_1||\\dots||V_t$ and $L_t = F_0 + F_1 + \\dots + F_{t-1}$.\nThe marginal likelihood of $\\arr{s}$ is also given by\n\\begin{align*}\n  \\mathrm{ML}(\\arr{s}) = p(V_0=\\arr{s}) + \\sum_{t=1}^{\\infty} p(V_{0..t}=\\arr{s}, L_t<L).\n\\end{align*}\n\nFor computational reasons, it would be useful to have an upper bound on the summation of the\nmarginal likelihood.\nWe will define a type of IMM that has such a feature.\n\n\\begin{definition}\n  A cycle is any sequence of states with nonzero probability that starts and ends with the same state.\n\\end{definition}\n\n\\begin{definition}\n  A quiet state is a state that has a non-zero probability of emitting an empty sequence.\n\\end{definition}\n\n\\begin{definition}\n  A quiet cycle is any cycle having only quiet states.\n\\end{definition}\n\n\\begin{corollary}\n  Let $M$ be the number of states of the IMM. If it has no quiet cycles, any sequence of $M$ states\n  will have emitted at least one symbol.\n\\end{corollary}\n\nIf the IMM has no quiet cycles, there is always an $N \\leq L\\cdot M$ such that\n\\begin{align*}\n  \\mathrm{ML}(\\arr{s}) = p(V_0=\\arr{s}) + \\sum_{t=1}^N p(V_{0..t}=\\arr{s}, L_t<L).\n\\end{align*}\n", "meta": {"hexsha": "2a3494b06077e154691b8ba36722793ce7208aec", "size": 4365, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/old/model.old.tex", "max_stars_repo_name": "EBI-Metagenomics/protein-hmm", "max_stars_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/old/model.old.tex", "max_issues_repo_name": "EBI-Metagenomics/protein-hmm", "max_issues_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/old/model.old.tex", "max_forks_repo_name": "EBI-Metagenomics/protein-hmm", "max_forks_repo_head_hexsha": "a291a6605459e559ebb912ffc71180094da19378", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.179245283, "max_line_length": 99, "alphanum_fraction": 0.658419244, "num_tokens": 1719, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.6959583376458153, "lm_q1q2_score": 0.5737911517333046}}
{"text": "\\documentclass[11pt, oneside]{article}\n\n\\usepackage{../../shared/preamble}\n\\addbibresource{../../shared/references.bib}\n\n\\usepackage{../real-numbers/real-numbers}\n%\\usepackage{manifolds}\n\n\\title{Manifolds}\n\\author{Arthur Ryman, {\\tt arthur.ryman@gmail.com}}\n\\date{\\today}\n\n% Document\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nThis article contains Z Notation type declarations for manifolds and some related objects.\nIt has been type checked by \\fuzz.\n\\end{abstract}\n\n\n    \\section{Introduction}\n\n    Manifolds can be defined in several ways.\n    The way I prefer to think about them is that, first of all, they are  based on topological spaces.\n    A manifold is therefore a topological space with some additional structure.\n    This additional structure allows one to regard a manifold as, locally, being like an open subset of $\\R^n$\n    for some natural number $n$ referred to as the dimension of the manifold.\n    In the following, let $M$ be a topological space of dimension $n$.\n\n    \\section{Charts}\n    A chart $\\phi$ on $M$ is a continuous injection of some open subset $U \\subseteq M$ into $\\R^n$.\n    A chart gives every point $p \\in U$ in its domain of definition a tuple of $n$ real number coordinates.\n    \\begin{equation}\n        \\phi: U \\inj \\R^n\n    \\end{equation}\n\n    \\subsection{Transition Functions}\n    Let $U, V, W$ be open subsets of $M$ with $W = U \\cap V$.\n    Let $\\phi: U \\inj \\R^n$ and $\\psi: V \\inj \\R^n$ be charts.\n    Every point $p \\in W$ is therefore given two, typically distinct, tuples of coordinates.\n    The mapping from one coordinate tuple to the other is called the transition function defined by the pair of charts.\n    Let $t_{\\phi,\\psi}$ denote that transition function that maps the $\\phi$ coordinates to the $\\psi$ coordinates.\n    \\begin{equation}\n        \\forall x \\in \\phi(W) @ t_{\\phi,\\psi}(x) = \\psi(\\phi^{-1}(x))\n    \\end{equation}\n\n    \\subsection{Compatible Charts}\n    Let $\\mathcal{F}$ be some family of partial injections from $\\R^n$ to $\\R^n$, e.g. 
continuous, differentiable, or smooth maps defined on open subsets.\n    \\begin{equation}\n        \\mathcal{F} \\subseteq \\R^n \\pinj \\R^n\n    \\end{equation}\n\n    A pair of charts is said to be compatible with respect to $\\mathcal{F}$ when both of their transition functions belong to $\\mathcal{F}$.\n\n    \\section{Atlases}\n    A set of pairwise compatible charts that cover $M$ is called an atlas for $M$.\n    An atlas gives $M$ a manifold structure.\n    If the charts are only required to be continuous then $M$ is called a topological manifold.\n    If the charts are required to be differentiable then the atlas is called a differential or differentiable structure and $M$ is called\n    a differentiable manifold.\n    Infinitely differentiable charts are called smooth charts.\n    We are only concerned with smooth charts and manifolds.\n\n    We normally consider an atlas to be a maximal set of charts.\n    A given set of mutually compatible charts belongs to a unique maximal atlas.\n    The given set is said to generate the maximal atlas.\n\n    \\section{Smooth Mappings}\n    Mappings from one smooth manifold to another are called smooth when their expressions in coordinate charts are smooth.\n    A smooth mapping that has a smooth inverse is called a diffeomorphism.\n\n    \\section{Tangent Vectors}\n    A tangent vector $X$ at the point $p \\in M$ is a mapping from the set of smooth functions at $p$ to $\\R$ that satisfies the following\n    for all $c \\in \\R$ and $f,g \\in C^{\\infty}(M,p)$\n    \\begin{align}\n        X(cf) &= cX(f) \\\\\n        X(f + g) &= X(f) + X(g) \\\\\n        X(fg) &= g(p)X(f) + f(p)X(g)\n    \\end{align}\n\n    A smooth curve $\\gamma: \\R \\fun M$ defines a tangent vector $X$ at $p=\\gamma(0)$ by\n    \\begin{equation}\n        X(f) = \\left.\\frac{df(\\gamma(t))}{dt}\\right|_{t=0}\n    \\end{equation}\n\n    \\section{Tangent Bundles}\n\n    The set of all tangent vectors at $p$ is denoted $M_p$ or $T_p(M)$.\n    It is an $n$-dimensional vector space and is called the tangent space at $p$.\n    The set of all tangent spaces is called the tangent bundle and is denoted $T(M)$\n    \\begin{equation}\n        T(M) = \\{~ (p,X) | p \\in M, X \\in M_p ~\\}\n    \\end{equation}\n\n    The tangent bundle $T(M)$ is a smooth vector bundle over $M$ under the natural projection\n    $\\pi: T(M) \\fun M, \\pi(p,X) = p$.\n\n    \\printbibliography\n\n\\end{document}", "meta": {"hexsha": "114aa3a97ef758e03d416b89ea56f83a7f50b817", "size": 4367, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "articles/manifolds/manifolds.tex", "max_stars_repo_name": "agryman/mathz", "max_stars_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-30T08:06:17.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-30T08:06:17.000Z", "max_issues_repo_path": "articles/manifolds/manifolds.tex", "max_issues_repo_name": "agryman/mathz", "max_issues_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "articles/manifolds/manifolds.tex", "max_forks_repo_name": "agryman/mathz", "max_forks_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": 
null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.3980582524, "max_line_length": 151, "alphanum_fraction": 0.683535608, "num_tokens": 1205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5737911465340982}}
{"text": "\\section{Compactly generated spaces}\nA lot of homotopy theory is about loop spaces and mapping spaces.\nStandard topology doesn't do very well with mapping spaces, so we will narrate the story of \\emph{compactly generated spaces}.\nOne nice consequence of working with compactly generated spaces is that the category is\nCartesian-closed (a concept to be defined below).\n\n\\subsection{CGHW spaces}\\label{CGWHspaces}\nSome constructions commute for ``categorical reasons''.\nFor instance, limits commute with limits.\nHere is an exercise to convince you of a special case of this.\n\\begin{exercise}%Exercise 4 from 18.906\n    Let $X$ be an object of a category $\\cc$.\n    The \\emph{overcategory} (or the \\emph{slice category}) $\\cc_{/X}$ has objects\n    given by morphisms $p:Y\\to X$ in $\\cc$, and morphisms given by the obvious\n    commutativity condition.\n    \\begin{enumerate}\n\t\\item Assume that $\\cc$ has finite products.\n\t    What is the left adjoint to the functor $X\\times -:\\cc\\to\\cc_{/X}$ that sends\n\t    $Y$ to the object $X\\times Y \\xar{\\pr_1}X$?\n\t\\item As a consequence of Theorem \\ref{adjointslimits}, we find that $X\\times -:\\cc\\to\\cc_{/X}$ preserves limits.\n\t    The composite $\\cc\\to\\cc_{/X}\\to\\cc$, however, probably does not.\n\t    \\begin{itemize}\n\t\t\\item What is the limit of a diagram in $\\cc_{/X}$?\n\t\t\\item Let $Y:\\cI\\to\\cc$ be any diagram. Show that\n\t\t    $${\\lim_{i\\in\\cI}}^{\\cc_{/X}}(X\\times Y_i)\n\t\t    \\simeq X\\times {\\lim_{i\\in\\cI}}^\\cc Y_i.$$\n\t\t    What happens if $\\cI$ only has two objects and only identity morphisms?\n\t    \\end{itemize}\n    \\end{enumerate}\n\\end{exercise}\nHowever, colimits and limits need not commute!\nAn example comes from algebra.\nThe coproduct in the category of commutative rings is the tensor product (exercise!).\nBut $\\left(\\lim \\Z/p^k\\Z\\right) \\otimes \\QQ \\simeq \\Z_p \\otimes \\QQ \\simeq \\QQ_p$ is\nclearly not $\\lim\\left(\\Z/p^k\\Z\\otimes \\QQ\\right) \\simeq \\lim 0 \\simeq 0$!\n\nWe also need not have an isomorphism between\n$X\\times\\colim_{j\\in \\cJ}Y_j$ and $\\colim_{j\\in \\cJ}(X\\times Y_j)$.\nOne example comes a quotient map $Y\\to Z$: \nin general, the induced map $X\\times Y\\to X\\times Z$ is not necessarily another quotient map.\nA theorem of Whitehead's says that this problem is rectified if we assume that $X$ is\na compact Hausdorff space.\nUnfortunately, a lot of interesting maps are built up from more ``elementary'' maps by such a procedure,\nso we would like to repair this problem.\n%Why were we talking about colimits? Here's an observation. Suppose $X\\to Y$ is a quotient map; then a map $Y\\to Z$ is continuous iff the composite $X\\to Y\\to Z$ is continuous. A quotient map \\emph{is} a coequalizer. What I'm saying is, I can find two maps to $X$ such that $Y$ is a coequalizer of $X$. What space are we mapping into $X$? Well, suppose $Z=X/\\sim$. 
If we considered:\n%\\begin{equation*}\n%    \\begin{tikzcd}\n%\tX\\times_Z X\\ar[r,shift left=.75ex,\"\\pi_1\"]\\ar[r,shift right=.75ex,swap,\"\\pi_2\"] & X\\ar[r] & Z\n%    \\end{tikzcd}\n%\\end{equation*}\n%The term here is ``regular epimorphism''.\n\nWe cannot simply do this by restricting ourselves to compact Hausdorff spaces:\nthat's a pretty restrictive condition to place.\nInstead (motivated partially by the Yoneda lemma),\nwe will look at topologies detected by maps from compact Hausdorff spaces.\n\\begin{definition}\n    Let $X$ be a space.\n    A subspace $F\\subseteq X$ is said to be \\emph{compactly closed} if,\n    for any map $k:K\\to X$ from a compact Hausdorff space $K$, \n    the preimage $k^{-1}(F)\\subseteq K$ is closed.\n\\end{definition}\nIt is clear that any closed subset is compactly closed, but\nthere might be compactly closed sets which are not closed in the topology on $X$.\nThis motivates the definition of a $k$-space:\n\\begin{definition}\n    A topological space $X$ is said to be a \\emph{$k$-space}\n    if every compactly closed set is closed.\n\\end{definition}\nThe $k$ comes from ``kompakt'' and/or from Kelley, an early topologist\nwho worked on such foundational topics.\n\nIt's clear that $X$ is a $k$-space if and only if the following statement is true:\na map $X\\to Y$ is continuous if and only if, for every compact Hausdorff space $K$ and map $k:K\\to X$,\nthe composite $K\\to X\\to Y$ is continuous.\nFor instance, compact Hausdorff spaces are $k$-spaces. First countable spaces (hence metric spaces) and CW-complexes are also $k$-spaces.\n\nIn general, a topological space $X$ need not be a $k$-space.\nHowever, it can be ``$k$-ified'' to obtain another $k$-space denoted $kX$.\nThe procedure is simple: endow $X$ with the topology consisting of all compactly closed sets.\nThe reader should check that this is indeed a topology on $X$;\nthe resulting topological space is denoted $kX$.\nThis construction immediately implies, for instance, that the identity $kX\\to X$ is continuous.\n\nLet $k\\Top$ be the category of $k$-spaces.\nThis is a subcategory of the category of topological spaces, via a functor $i:k\\Top\\hookrightarrow \\Top$.\nThe process of $k$-ification gives a functor $\\Top\\to k\\Top$, which has the property that:\n$$k\\Top(X,kY)=\\Top(iX,Y).$$\nNotice that this is another example of an adjunction!\nWe can conclude from this that $k(iX\\times iY)=X\\times^{k\\Top}Y$, where $X$ and $Y$ are $k$-spaces.\nOne can also check that $kiX\\simeq X$.\n\nThe takeaway is that $k\\Top$ has good categorical properties inherited from $\\Top$:\nit is a complete and cocomplete category.\nAs we will now explain, this category has more categorical niceness that does not exist in $\\Top$.\n\n\\subsection{Mapping spaces}\\label{mappingspaces}\nLet $X$ and $Y$ be topological spaces.\nThe set $\\Top(X,Y)$ of continuous maps from $X$ to $Y$ admits a topology, namely the compact-open topology.\nIf $X$ and $Y$ are $k$-spaces, we can make a slight modification:\ndefine a topology on $k\\Top(X,Y)$ generated by the sets\n$$W(k:K\\to X, \\text{ open }U\\subseteq Y) := \\{f:X\\to Y: f(k(K))\\subseteq U\\}.$$\nWe write $Y^X$ for the $k$-ification of $k\\Top(X,Y)$.\n\\begin{prop}\n    \\begin{enumerate}\n\t\\item The functor $(k\\Top)^{op}\\times k\\Top\\to k\\Top$ given by $(X,Y)\\mapsto Y^X$ is a functor of both variables.\n\t\\item $e:X\\times Z^X\\to Z$ given by $(x,f)\\mapsto f(x)$ and $i:Y\\to (X\\times Y)^X$ given by $y\\mapsto(x\\mapsto(x,y))$ are continuous.\n    
\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n    The first statement is left as an exercise to the reader.\n    For the second statement, see \\cite[Proposition 2.11]{StricklandCGWH}.\n\\end{proof}\nAs a consequence of this result, we can obtain a very nice adjunction.\nDefine two maps:\n\\begin{itemize}\n    \\item $k\\Top(X\\times Y,Z)\\to k\\Top(Y,Z^X)$ via\n\t$$(f:X\\times Y\\to Z)\\mapsto (Y\\xrightarrow{i}(X\\times Y)^X\\to Z^X).$$\n    \\item $k\\Top(Y,Z^X) \\to k\\Top(X\\times Y,Z)$ via\n\t$$(f:Y\\to Z^X)\\mapsto(X\\times Y\\to X\\times Z^X\\xrightarrow{e} Z).$$\n\\end{itemize}\nBy \\cite[Proposition 2.12]{StricklandCGWH}, these two maps are continuous inverses, so there is a natural homeomorphism\n$$k\\Top(X\\times Y,Z)\\simeq k\\Top(Y,Z^X).$$\nThis motivates the definition of a \\emph{Cartesian closed} category.\n\\begin{definition}\\label{cartesian-closed}\n    A category $\\cc$ with finite products is said to be\n    \\emph{Cartesian closed} if, for any object $X$ of $\\cc$, the functor $X\\times -:\\cc\\to \\cc$ has a right adjoint.\n\\end{definition}\nOur discussion above proves that $k\\Top$ is Cartesian closed, while $\\Top$ is not.\nAs we will see below, this has very important ramifications for algebraic topology.\n\\begin{exercise}\n    \\todo{Insert Exercise 2 from 18.906.}\n\\end{exercise}\n", "meta": {"hexsha": "5240f3c508f8c33bb56571e6e4d198ea8f00cabd", "size": 7468, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "906/lec-40-compactly-generated.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "906/lec-40-compactly-generated.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "906/lec-40-compactly-generated.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 55.7313432836, "max_line_length": 382, "alphanum_fraction": 0.7167916443, "num_tokens": 2314, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.824461928533133, "lm_q1q2_score": 0.5737911428357694}}
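The adjunction $k\Top(X\times Y,Z)\simeq k\Top(Y,Z^X)$ is the topological counterpart of currying. The Python sketch below is only an analogy in the category of sets (no topologies are modelled, and it is our own addition): it builds the two maps above from $i$ and $e$ and checks that they invert each other.
\begin{verbatim}
# Analogue of the bijection Hom(X x Y, Z) = Hom(Y, Z^X) for plain
# functions; the k-space topologies are deliberately not modelled.
def curry(f):
    """f |-> (y |-> (x |-> f(x, y))), i.e. post-compose with i."""
    return lambda y: (lambda x: f(x, y))

def uncurry(g):
    """g |-> ((x, y) |-> e(x, g(y))), using the evaluation map e."""
    return lambda x, y: g(y)(x)

f = lambda x, y: x + 2 * y
assert uncurry(curry(f))(3, 4) == f(3, 4)  # the two maps are mutually inverse
\end{verbatim}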
{"text": "\\chapter{The Cut-Matching Game: Expanders via Max Flow}\n\nIn this chapter, we learn about a new algorithm to compute expanders that employs max flow as a subroutine.\n\n\\section{Introduction}\n\nWe start with a review of expanders where make a subtle change the notion of an expander in comparison to chapter \\ref{chap:cheegersInequality} to ease the exposition.\n\n\\paragraph{Definitions.} We let $G=(V,E)$ be an unweighted, connected graph in this chapter, and let $\\dd$ be the degree vector, $E(S,V \\setminus S)$ denote the set of edges crossing the cut $A,B$\n\nGiven set $\\emptyset \\subset S \\subset V$, then we define the \\textbf{sparsity} $\\psi(S)$ of $S$ by\n\\[\n\\psi(S) = \\frac{|E(S, V \\setminus S)|}{ \\min\\{|S|, |V \\setminus S| \\} } \n\\]\nNote that \\textbf{sparsity} $\\psi(S)$ differs from \\textbf{conductance} $\\phi(S) = \\frac{|E(S, V \\setminus S)|}{ \\min\\{\\vol(S), \\vol(V \\setminus S) \\} }$, as defined in \\ref{chap:cheegersInequality}, in the denominator. It is straight-forward to see that in a connected graph $\\psi(S) \\geq \\phi(S)$ for all $S$. \n\nClearly, we again have $\\psi(S) = \\psi(V \\setminus S)$. We define the \\emph{sparsity} of a graph $G$ by $\\psi(G) = \\min_{\\emptyset \\subset S \\subset V} \\psi(S)$. For any $\\psi \\in (0, n]$, we say a graph $G$ is a $\\psi$-expander with regard to sparsity, if $\\psi(G) \\geq \\psi$. When the context is clear, we simply say that $G$ is a $\\psi$-expander. \n\n\\paragraph{The Main Result.} The main result of this chapter is the following theorem.\n\n\\begin{theorem}\nThere is an algorithm $\\textsc{SparsityCertifyOrCut}(G, \\psi)$ that given a graph $G$ and a parameter $0 <\\psi \\leq 1$ either: \n\\begin{itemize}\n    \\item \\emph{Certifies} that $G$ is a $\\Omega(\\psi/\\log^2 n)$-expander with regard to sparsity, or\n    \\item Presents a \\emph{cut} $S$ such that $\\psi(S) \\leq O(\\psi)$.\n\\end{itemize}\nThe algorithm runs in time $O(\\log^2 n) \\cdot T_{max\\_flow}(G) + \\tilde{O}(m)$ where $T_{max\\_flow}(G)$ is the time it takes to solve a Max Flow problem on $G$\\footnote{Technically, we will solve problems with two additional vertices and $n$ additional edges but this will not change the run-time of any known max-flow algorithm asymptotically.}.\n\\end{theorem}\n\nThe bounds above can further be extended to compute $\\phi$-expanders (with regard to conductance). Using current state-of-the art Max Flow results, the above problem can be solved in $\\tilde{O}(m + n^{3/2+o(1)})$ time \\cite{BLLSSSW21}. \n\n\\section{Embedding Graphs into Expanders}\n\nLet us start by exploring the first key idea behind the algorithm. We therefore need a definition of what it means to embed one graph into another.\n\n\\paragraph{Definition of Embedding.} Given graphs $H$ and $G$ that are defined over the same vertex set, then we say that a function $\\textsc{Embed}_{H \\xrightarrow{} G}$ is an \\emph{embedding} if it maps each edge $(u,v) \\in H$ to a $u$-to-$v$ path $P_{u,v} = \\textsc{Embed}_{H \\xrightarrow{} G}(u,v)$ in $G$. 
\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[scale=0.2]{./fig/embedGraph_lectureCutMatching.jpeg}\n    \\caption{In this example the red edge $(u,v)$ in $H$ is mapped to the red $u$-to-$v$ path in $G$.}\n    \\label{fig:my_label}\n\\end{figure}\n\nWe say that the \\emph{congestion} of $\\textsc{Embed}_{H \\xrightarrow{} G}$ is the maximum number of times that any edge $e \\in E(G)$ appears on any embedding path: \\[\n\\congestion(\\textsc{Embed}_{H \\xrightarrow{} G}) = \\max_{e \\in E(G)} |\\{ e' \\in E(H) \\;|\\; e \\in \\textsc{Embed}_{H \\xrightarrow{} G}(e') \\}|.\n\\]\n\n\\paragraph{Certifying Expander Graphs via Embeddings.} Let us next prove the following lemma that is often considered folklore.\n\n\\begin{lemma}\\label{lma:folklore_embedding}\nGiven a $\\frac{1}{2}$-expander graph $H$ and an embedding of $H$ into $G$ with congestion $C$, then $G$ must be an $\\Omega\\left(\\frac{1}{C}\\right)$-expander. \n\\end{lemma}\n\\begin{proof}\nConsider any cut $(S, V \\setminus S)$ with $|S| \\leq |V \\setminus S|$. Since $H$ is a $\\frac{1}{2}$-expander, we have that $|E_H(S, V \\setminus S)| \\geq |S|/2$. We also know by the embedding of $H$ into $G$, that for each edge $(u,v) \\in E_H(S, V \\setminus S)$, we can find a path $P_{u,v}$ in $G$ that also has to cross the cut $(S, V \\setminus S)$ at least once. But since each edge in $G$ is on at most $C$ such paths, we can conclude that at least $|E_H(S, V \\setminus S)|/ C \\geq |S|/2C$ edges in $G$ cross the cut $(S, V \\setminus S)$. \n\\end{proof}\n \nUnfortunately, the reverse of the above lemma is not true, i.e. even if there exists no embedding of a $1/2$-expander $H$ into $G$ with congestion $C$, the graph $G$ might still be an $\\Omega\\left(\\frac{1}{C}\\right)$-expander. \n\n\\section{The Cut-Matching Algorithm}\n\nAlthough there are still some missing pieces, let us next discuss Algorithm \\ref{algo:cutMatchingGame}. The algorithm runs for $T$ iterations where we will later find that the right value to set $T$ to is in $\\Theta(\\log^2 n)$. \n\n\\begin{algorithm}[H]\n    \\For{$i = 0,1,2, \\dots, T$}{\n        $(S_i, \\overline{S}_i) \\gets \\textsc{FindBiPartition}(G, \\{M_1, M_2, \\dots, M_{i}\\})$ \\tcp*{Assume $|S| = |\\overline{S}| = n/2$}\n        Solve the flow problem on $G$ where each edge $e \\in E$ receives capacity $\\cc(e) = 1/\\psi$ and the demand at each vertex $v \\in S$ is $+1$ and at each vertex $v \\in \\overline{S}$ is $-1$, by introducing a super-source $s$ and super-sink $t$\\;\n        \\If{the flow procedure returns a flow $\\ff$ with $\\val(\\ff) = n/2$}{\n            Remove $s,t$ to derive an $S$-$\\overline{S}$ flow; then decompose the flow $\\ff$ into flow paths $P_1, P_2, \\dots, P_{n/2}$\\;\n            Create a matching $M_{i}$ where for each flow path $P_j$ from some $u \\in S$ to some $\\overline{u} \\in \\overline{S}$, we add $(u, \\overline{u})$ to $M_i$.\n        }\\Else{\n            \\Return the minimum cut $(X_S, X_{\\overline{S}})$ in the flow problem after removing the super-source $s$ and the super-sink $t$ from the two sets.\n        }\n    }\n    \\Return $H = \\bigcup_{i} M_i$\n  \\caption{\\textsc{SparsityCertifyOrCut}$(G, \\psi)$}\n  \\label{algo:cutMatchingGame}\n\\end{algorithm}\n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[scale=0.2]{./fig/AlgorithmProcessing_lectureCutMatching.jpeg}\n    \\caption{Illustration of the steps of the Algorithm. In a), a bi-partition of $V$ is found. 
In b), the bi-partition is used to obtain a flow problem where we inject one unit of flow to each vertex in $S$ via super-source $s$ and extract one unit via super-sink $t$. c) A path flow decomposition. For each path, the first vertex is in $S$ and the last vertex in $\\overline{S}$. d) We find $M_i$ to be the one-to-one matching between endpoints in $S$ and $\\overline{S}$ defined by the path flows.}\n    \\label{fig:my_label}\n\\end{figure}\n\n\\paragraph{Beginning of an Iteration.} In each iteration of the algorithm, we first invoke a sub-procedure $\\textsc{FindBiPartition}(\\cdot)$ that returns a cut $(S,\\overline{S})$ (where $\\overline{S} = V \\setminus S$) that partitions the vertex set into two equal-sized sides. Here, we implicitly assume that the number of vertices $n$ is even, which is w.l.o.g. \n\nNext, the algorithm creates a flow problem where each vertex $s \\in S$ has to send exactly one unit of flow and each vertex $\\overline{s} \\in \\overline{S}$ has to receive exactly one unit of flow. We therefore introduce dummy nodes $s$ and $t$, add for each vertex $v \\in S$ an edge of capacity $1$ between $s$ and $v$, and for each vertex $v \\in \\overline{S}$ an edge between $t$ and $v$ of capacity $1$. We set the capacity of edges in the original graph $G$ to $1/\\psi$ (which we assume w.l.o.g. to be an integer).\n\n\\paragraph{The If Statement.} If the flow problem can be solved exactly, then we can find a path decomposition of the flow $\\ff$ in $\\tilde{O}(m)$ time (for example using a DFS) where each path starts in $S$, ends in $\\overline{S}$, and carries one unit of flow\\footnote{For simplicity, assume that the returned flow is integral.}. This defines a one-to-one correspondence between the vertices in $S$ and the vertices in $\\overline{S}$. We capture this correspondence in the matching $M_i$. We will later prove the following lemma.\n\n\\begin{lemma}\nIf the algorithm returns after constructing $T$ matchings, for an appropriately chosen $T = \\Theta(\\log^2 n)$, then the graph $H$ returned by the algorithm is a $\\frac{1}{2}$-expander, and $H$ can be embedded into $G$ with congestion $O(\\log^2 n/ \\psi)$.\n\\end{lemma}\n\n\\paragraph{The Else Statement.} On the other hand, if the flow problem on $G$ could not be solved, then we return the min-cut of the flow problem. Such a cut can be found in $O(m)$ time: using the above reduction to an $s$-$t$ flow problem, one can compute a maximum flow $\\ff$, from which the $s$-$t$ min-cut can be constructed by following the construction in the proof of \\Cref{thm:maxFlowEqualsMinCut}.\n\nIt turns out that this min-cut already is a sparse cut by the way our flow problem is defined.\n\n\\begin{lemma}\nIf the algorithm finds a cut $(X_S, X_{\\overline{S}})$, then the returned cut satisfies $\\psi(X_S \\setminus \\{s\\}) = O(\\psi)$.\n\\end{lemma}\n\\begin{proof}\nFirst observe that since a min-cut was returned, the total capacity of the cut $(X_S, X_{\\overline{S}})$ is less than $n/2$ (otherwise we could have routed the demands).\n\nLet $n_s$ be the number of edges incident to the super-source $s$ that cross the cut $(X_S, X_{\\overline{S}})$. Let $n_t$ be the number of edges incident to $t$ that cross the cut $(X_S, X_{\\overline{S}})$.\n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[scale=0.2]{./fig/FewEdgesCrossCut_lectureCutMatching.jpeg}\n    \\caption{Set $X_S$ is enclosed by the orange circle. The thick orange edges are in the cut and incident to super-source $s$. Thus they count towards $n_s$. 
Here $n_s = 2, n_t = 0$. Note that all remaining edges in the cut are black, i.e. were originally in $G$ and therefore have capacity $1/\\psi$.}\n    \\label{fig:my_label}\n\\end{figure}\n\nObserve that after taking away the vertices $s$ and $t$, the cut $(X_S \\setminus \\{s\\}, X_{\\overline{S}} \\setminus \\{t\\})$ has less than $n/2 - n_s - n_t$ capacity. But each remaining edge has capacity $1/\\psi$, so the total number of edges in the cut can be at most $\\psi \\cdot (n/2 - n_s - n_t)$. Since $X_S \\setminus \\{s\\}$ is of size at least $n/2-n_s$, and $X_{\\overline{S}} \\setminus \\{t\\}$ is of size at least $n/2-n_t$, we have that the induced cut in $G$ has\n\\[\n    \\psi(X_S \\setminus \\{s\\}) < \\frac{\\psi \\cdot (n/2 - n_s - n_t)}{\\min\\{ n/2 - n_s, n/2 - n_t\\}} \\leq \\psi.\n\\]\n\\end{proof}\n\n\\section{Constructing an Expander via Random Walks}\n\nNext, we give the implementation and analysis for the procedure $\\textsc{FindBiPartition}(\\cdot)$. We start, however, by giving some more preliminaries.\n\n\\paragraph{Random Walk on Matchings.} Let $\\{M_1, M_2, \\dots, M_{T+1}\\}$ be the set of matchings we compute (if we never find a cut). In the $i^{th}$ step of the lazy random walk, we let the mass at each vertex $j$ stay put with probability $1/2$, and otherwise traverse the edge in matching $M_i$ incident to $j$. \n\nWe let $\\pp_{j \\mapsto i}^t$ denote the probability that a particle that started at vertex $j$ is at vertex $i$ after a $t$-step lazy random walk. We let $\\pp_i^t = [\\pp_{1 \\mapsto i}^t\\; \\pp_{2 \\mapsto i}^t\\; \\dots \\;\\pp_{n \\mapsto i}^t]$. Note that for each edge $(i,j) \\in M_{t+1}$, we have that \n\\[\n\\pp_i^{(t+1)} = \\frac{1}{2} \\pp_i^t + \\frac{1}{2} \\pp_j^t = \\pp_j^{(t+1)}.\n\\]\nWe define the projection matrix $\\proj^t = [\\pp_1^{t}, \\pp_2^{t}, \\dots, \\pp_n^{t}]^\\trp$ that maps an initial probability distribution $\\dd$ to the probability distribution over the vertices visited by the random walk at step $t$. You will prove in the exercises that $\\proj^t$ is \\emph{doubly-stochastic}.\n\nWe say that a lazy random walk is \\emph{mixing} at step $t$, if for each $i,j$, $\\pp_{j \\mapsto i}^t \\geq 1/(2n)$. \n\n\\begin{lemma}\nIf the lazy random walk is \\emph{mixing} at step $t$, then $H = \\bigcup_{i \\leq t} M_i$ is a $\\frac{1}{2}$-expander.\n\\end{lemma}\n\\begin{proof}\nConsider any cut $(S, \\overline{S})$ with $|S| \\leq |\\overline{S}|$. It is convenient to think about the random walks in terms of probability mass that is moved around. Observe that each vertex $j \\in \\overline{S}$ has to push at least $1/(2n)$ units of probability from $j$ to each vertex $i \\in S$ (by definition of mixing). \n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[scale=0.15]{./fig/MixingWalkIsExpander_lectureCutMatching.jpeg}\n    \\caption{Each vertex $j \\in \\overline{S}$ sends at least $1/(2n)$ probability mass to $i$ (red arrow). But in order to transport it, it does have to push the mass through edges in the matchings $M_1, M_2, \\dots, M_t$ that cross the cut.}\n    \\label{fig:my_label}\n\\end{figure}\n\nClearly, to move the mass from $\\overline{S}$ to $S$ it has to use the matching edges that also cross the cut.\n\nNow observe that since there are $\\geq n/2$ vertices in $\\overline{S}$, and each of them has to push $\\geq 1/(2n)$ mass to any fixed $i \\in S$, the total amount of probability mass pushed through the cut for $i$ is $\\geq 1/4$. 
Since there are $|S|$ such vertices $i$, the total amount of mass that has to cross the cut is $\\geq |S|/4$. \nBut note that after each step of the random walk, the total probability mass at each vertex is exactly $1$. Thus, at each step $t'$, each edge in $M_{t'}$ crossing the cut can push at most $1/2$ units of probability mass over the cut (and thereafter the edge is gone). \n\nIt follows that there must be at least $|S|/2$ edges crossing the cut in the matchings $M_1, M_2, \\dots, M_{t}$. But this implies that $H = \\cup_i M_i$ is a $\\frac{1}{2}$-expander.\n\n\\end{proof}\n\n\\paragraph{Implementing $\\textsc{FindBiPartition}(\\cdot)$.} We can now give away the implementation of $\\textsc{FindBiPartition}(\\cdot)$, which you can find below.\n\n\\begin{algorithm}[H]\n    Choose a random $n$-dimensional vector $\\rr$ orthogonal to $\\vecone$.\\;\n    Compute the vector $\\uu = \\proj^t \\rr$, i.e. each $\\uu_i = \\pp_i^t \\cdot \\rr$\\;\n    Let $S$ be the $n/2$ smallest vertices w.r.t. $\\uu$; and $\\overline{S}$ be the $n/2$ largest w.r.t. $\\uu$ (ties broken arbitrarily but consistently).\\;\n    \\Return $(S, \\overline{S})$\n  \\caption{\\textsc{FindBiPartition}$(G, \\{M_1, M_2, \\dots, M_t\\})$}\n  \\label{algo:findBiPartition}\n\\end{algorithm}\n\nThe central claim we want to prove is the following, where we define the potential of the random walk at step $t$ by\n\\[\n    \\Phi^t = \\sum_{i,j} (\\pp_{j \\mapsto i}^t - 1/n)^2 = \\sum_{i} \\|\\pp_{i}^t - \\vecone/n\\|_2^2.\n\\]\n\\begin{claim}\\label{clm:mainClaimCutMatching}\nIn the algorithm $\\textsc{SparsityCertifyOrCut}(\\cdot)$, we have $\\mathbb{E}[\\Phi^{t} - \\Phi^{(t+1)}] \\geq \\Omega(1/\\log n) \\Phi^{t}$.\n\\end{claim}\n\\begin{corollary}\nFor appropriate $T = \\Theta(\\log^2 n)$, the algorithm $\\textsc{SparsityCertifyOrCut}(\\cdot)$ has $\\Phi^{(T+1)} \\leq 4/n^2$.\n\\end{corollary}\n\nThe corollary follows straightforwardly since $O(\\log n)$ iterations suffice with high probability to reduce the potential by a factor of $\\frac{1}{2}$ (you can use Chernoff here). Then, after $O(\\log n)$ such phases consisting of $O(\\log n)$ iterations each, the potential drops to $1/n^d$ for any constant $d$, which then goes into the constant factors.\n\nWe further observe that this implies that $\\{M_1, M_2, \\dots, M_{T+1}\\}$ is mixing (you can prove it straightforwardly by contradiction), and thereby we conclude the proof of our main theorem. \n\nLet us now give the proof of Claim \\ref{clm:mainClaimCutMatching}: \n\n\\paragraph{Interpreting the Potential Drop.} Let us start by observing \n\\begin{align*}\n     \\Phi^{t} - \\Phi^{(t+1)} = \\sum_{i} \\|\\pp_{i}^t -\\vecone/n\\|_2^2 - \\sum_{i} \\|\\pp_{i}^{(t+1)} -\\vecone/n\\|_2^2.\n\\end{align*}\nConsider now the matching $M_{t+1}$ and an edge $(i,j) \\in M_{t+1}$. We can re-write the former sum as $\\sum_{i} \\|\\pp_{i}^t -\\vecone/n\\|_2^2 = \\sum_{(i,j) \\in M_{t+1}} \\left(\\|\\pp_{i}^t -\\vecone/n\\|_2^2 + \\|\\pp_{j}^t -\\vecone/n\\|_2^2\\right)$ and we can do the same for the $(t+1)$-step walk probabilities. Further, note that for $(i,j) \\in M_{t+1}$, we have $\\pp_i^{(t+1)} = \\pp_j^{(t+1)} = \\frac{\\pp_i^t+ \\pp_j^t}{2}$. 
Thus,\n\\begin{align*}\n     \\Phi^{t} - \\Phi^{(t+1)} = \\sum_{(i,j) \\in M_{t+1}} \\left( \\|\\pp_{i}^t -\\vecone/n\\|_2^2 + \\|\\pp_{j}^t -\\vecone/n\\|_2^2 - 2 \\left\\|\\frac{\\pp_i^t+ \\pp_j^t}{2} - \\vecone/n\\right\\|_2^2 \\right).\n\\end{align*}\nFinally, we can use the formula $\\|\\xx\\|^2 + \\|\\yy\\|^2 - 2 \\|(\\xx + \\yy)/2\\|^2 = \\frac{1}{2} \\| \\xx - \\yy\\|^2$ term-wise, with $\\xx = \\pp_{i}^t -\\vecone/n$ and $\\yy = \\pp_{j}^t -\\vecone/n$, to derive \n\\begin{align*}\n     \\Phi^{t} - \\Phi^{(t+1)} = \\frac{1}{2}\\sum_{(i,j) \\in M_{t+1}} \\|(\\pp_{i}^t -\\vecone/n) - (\\pp_{j}^t -\\vecone/n)\\|_2^2 = \\frac{1}{2}\\sum_{(i,j) \\in M_{t+1}} \\|\\pp_{i}^t - \\pp_{j}^t \\|_2^2.\n\\end{align*}\n\n\\paragraph{Understanding the Random Projection.} Next, we want to further lower bound the potential drop using the random vector $\\uu$. We will show that (w.p. $\\geq 1-n^{-2}$)\n\\begin{align}\\label{eq:cutMatchingAnalysis}\n     \\Phi^{t} - \\Phi^{(t+1)} = \\frac{1}{2} \\sum_{(i,j) \\in M_{t+1}} \\|\\pp_{i}^t - \\pp_{j}^t \\|_2^2 \\geq \\frac{n-1}{32 \\cdot \\log n} \\sum_{(i,j) \\in M_{t+1}} (\\uu_i - \\uu_j)^2.\n\\end{align}\n\nTo prove this claim, we will prove this again term-wise, showing that for each pair of vertices $i,j \\in V$, we have $\\|\\pp_{i}^t - \\pp_{j}^t \\|_2^2 \\geq \\frac{n-1}{16 \\cdot \\log n} (\\uu_i - \\uu_j)^2$ w.h.p. It will then suffice to take a union bound over all pairs $i,j$. \n\nTo this end, let us make the following observations: since $\\uu_i = \\pp_i^t \\cdot \\rr$, we have that $\\uu_i - \\uu_j = (\\pp_i^t - \\pp_j^t) \\cdot \\rr$ by linearity. Also note that since $\\sum_j \\pp_{j \\mapsto i}^t = 1$ for all $i$ (since $\\proj^t$ is doubly-stochastic), we further have that the vector $(\\pp_i^t - \\pp_j^t)$ is orthogonal to $\\vecone$.\n\nWe can now use the following statement about the random vector $\\rr$ to argue about the effect of projecting $(\\pp_i^t - \\pp_j^t)$ onto $\\rr$. \n\n\\begin{theorem}\\label{thm:factsGaussianAnnulus}\nIf $\\yy$ is a vector of length $\\ell$ in $\\mathbb{R}^d$, and $\\rr$ a unit random vector in $\\mathbb{R}^d$, then\n\\begin{itemize}\n    \\item $\\mathbb{E}[(\\yy^\\trp \\rr)^2] = \\frac{\\ell^2}{d}$, and\n    \\item if $x \\leq d/16$, then $\\mathbb{P}[(\\yy^\\trp \\rr)^2 \\geq x\\ell^2/d] \\leq e^{-x/4}$.\n\\end{itemize}\n\\end{theorem}\n\nThis allows us to pick $x = 16 \\cdot \\log n$, and we then obtain that\n\\[\n    \\mathbb{P}\\left[ ((\\pp_i^t - \\pp_j^t) \\cdot \\rr)^2 \\geq  \\frac{16 \\log n}{n-1} \\|\\pp_i^t - \\pp_j^t\\|_2^2\\right] \\leq e^{-4 \\log n} = n^{-4}.\n\\]\nMultiplying both sides of the event whose probability we bound by $(n-1)/(16 \\log n)$, we derive the claimed inequality \\ref{eq:cutMatchingAnalysis}.\n\n\\paragraph{Relating to the Lengths of the Projections.} Let $\\mu = \\max_{i \\in S} \\uu_i$; then we have by definition that $\\mu \\leq \\uu_j$ for all $j \\in \\overline{S}$.\n\nNow we can write \n\\begin{align*}\n     \\Phi^{t} - \\Phi^{(t+1)} &\\geq \\frac{n-1}{32 \\cdot \\log n} \\sum_{(i,j) \\in M_{t+1}} (\\uu_i - \\uu_j)^2 \\\\\n     &\\geq  \\frac{n-1}{32 \\cdot \\log n} \\sum_{(i,j) \\in M_{t+1}} \\left( (\\uu_i - \\mu)^2 + (\\uu_j - \\mu)^2 \\right)\\\\\n     &=  \\frac{n-1}{32 \\cdot \\log n}\\left( \\sum_{i \\in V} \\uu_i^2 - 2\\mu \\cdot \\sum_i \\uu_i + n \\mu^2 \\right)\n\\end{align*}\nby standard calculations. We then observe that $\\sum_i \\uu_i = \\sum_i \\pp_i^t \\cdot \\rr = \\vecone \\cdot \\rr = 0$ by the fact that $\\proj^t$ is doubly-stochastic and since $\\rr$ is orthogonal to the all-ones vector. 
We can therefore conclude \n\\[\n\\frac{n-1}{32 \\cdot \\log n}\\left( \\sum_{i \\in V} \\uu_i^2 - 2\\mu \\cdot \\sum_i \\uu_i + n \\mu^2 \\right) \\geq \\frac{n-1}{32 \\cdot \\log n} \\sum_{i \\in V} \\uu_i^2.\n\\]\nIt remains to use the first fact of Theorem \\ref{thm:factsGaussianAnnulus} to obtain that \n\\[\n\\mathbb{E}[\\uu_i^2] = \\mathbb{E}[(\\pp_i^t \\cdot \\rr)^2] = \\mathbb{E}[((\\pp_i^t - \\vecone/n) \\cdot \\rr)^2] = \\frac{\\|\\pp_i^t - \\vecone/n\\|_2^2}{n-1}\n\\]\nwhere we used again that $\\rr$ is orthogonal to $\\vecone$ in the second equality.\n\nWe can finally conclude \n\\[\n\\mathbb{E}[\\Phi^{t} - \\Phi^{(t+1)}] \\geq \\frac{n-1}{32 \\cdot \\log n} \\sum_{i} \\frac{\\|\\pp_i^t - \\vecone/n\\|_2^2}{n-1} = \\Omega(1/\\log n) \\Phi^t.\n\\]", "meta": {"hexsha": "91fb23b7df5e4ea18400b87b35e502598dd57083", "size": 19924, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "agao21_script/lecture_CutMatching.tex", "max_stars_repo_name": "rjkyng/agao21_script", "max_stars_repo_head_hexsha": "772f8c17b0802ec43d45e1480f7193dd0eceadb7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-15T09:04:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-26T05:39:09.000Z", "max_issues_repo_path": "agao21_script/lecture_CutMatching.tex", "max_issues_repo_name": "rjkyng/agao21_script", "max_issues_repo_head_hexsha": "772f8c17b0802ec43d45e1480f7193dd0eceadb7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "agao21_script/lecture_CutMatching.tex", "max_forks_repo_name": "rjkyng/agao21_script", "max_forks_repo_head_hexsha": "772f8c17b0802ec43d45e1480f7193dd0eceadb7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-03-11T12:35:23.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-13T06:04:51.000Z", "avg_line_length": 81.6557377049, "max_line_length": 535, "alphanum_fraction": 0.6705480827, "num_tokens": 6809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8244619285331332, "lm_q1q2_score": 0.5737911428357694}}
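To see the potential argument in action, here is a small toy simulation in Python (our own addition, using numpy). The matching player simply pairs $S$ with $\overline{S}$ in sorted order of $\uu$ instead of solving a flow problem, which is an assumption made purely for illustration; the averaging update is exactly $\pp_i^{(t+1)} = \pp_j^{(t+1)} = (\pp_i^t + \pp_j^t)/2$, and one observes the roughly geometric decay of $\Phi^t$ predicted by Claim \ref{clm:mainClaimCutMatching}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T = 16, 30
P = np.eye(n)  # row i holds p_i^t; the walk starts concentrated at i

def potential(P):
    # Phi^t = sum_i ||p_i^t - 1/n||_2^2
    return float(np.sum((P - 1.0 / n) ** 2))

for t in range(T):
    r = rng.normal(size=n)
    r -= r.mean()                 # make r orthogonal to the all-ones vector
    u = P @ r                     # u_i = p_i^t . r
    order = np.argsort(u)
    S, Sbar = order[: n // 2], order[n // 2:]
    for i, j in zip(S, Sbar):     # a perfect matching between S and S-bar
        avg = 0.5 * (P[i] + P[j])
        P[i] = P[j] = avg         # p_i^{t+1} = p_j^{t+1} = (p_i^t + p_j^t)/2
    print(t, potential(P))        # roughly geometric decay of the potential
\end{verbatim}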
{"text": "\\documentclass{memoir}\n\\usepackage{linalg}\n% This covers all the notes from 10-23-19 to ~11-01-19\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n\\chapter{Eigenvalues, Eigenvectors, and Eigenspaces}\n\\begin{defn}[Invariant subspace]\n\tSuppose $T \\in \\mathcal{L}(V)$. A subspace $U$ of $V$ is called \\textbf{invariant} under $T$ if $u \\in U$ implies $Tu \\in U$.\n\\end{defn}\n\\section{Eigenvalues and Eigenvectors}\n\\label{sec:eigenvalues_and_eigenvectors}\n\n\n\\begin{defn}[Eigenvalue]\n\n\tSuppose $T  \\in \\mathcal{L}(V)$. A number $\\lambda \\in F$  is called an \\textbf{eigenvalue} of $T$ if there exists $v \\in V$ such that $v\\neq 0$ and $Tv = \\lambda v$.\n\n\\end{defn}\n\\begin{lemma}[Equivalent conditions]\n\tSuppose $V$ is finite-dimensional, $T \\in \\mathcal{L}(V)$, and $\\lambda \\in F$. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item $\\lambda$ is an eigenvalue of $T$ \n\t\t\\item $T-\\lambda I$ is not injective\n\t\t\\item $T-\\lambda I$ is not surjective\n\t\t\\item $T-\\lambda I$ is not invertible\n\t\\end{itemize}\n\\end{lemma}\n\\begin{defn}[Eigenvector]\n\tSuppose $T \\in \\mathcal{L}(V)$ and $\\lambda \\in F$ is an eigenvalue of $T$. A vector $v\\in V$ is called an \\textbf{eigenvector} of $T$ corresponding to $\\lambda$ if $v\\neq 0$ and $Tv = \\lambda v$.\n\\end{defn}\n\\begin{lemma}[Linearly independent eigenvectors]\n\tLet $T \\in \\mathcal{L}(V)$. Suppose $\\lambda_1,\\ldots,\\lambda_m$ are distinct eigenvalues of $T$ and $v_1,\\ldots,v_m$ are corresponding eigenvectors. Then $v_1,\\ldots,v_m$ is linearly independent.\n\\end{lemma}\n\n\\section{Existence of Eigenvalues}\n\\label{sec:existence_of_eigenvalues}\n\\begin{thm}\n\tEvery operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue.\n\\end{thm}\n\\begin{defn}[Upper-triangular Matrix]\nA matrix ix called \\textbf{upper-triangular} if all the entries below the diagonal equal zero.\n\t\n\\end{defn}\n\\begin{thm}\n\tSuppose $V$ is a finite-dimensional complex vector space and $T\\in \\mathcal{L}(V)$. Then $T$ has an upper-triangular matrix with respect to some basis of $V$.\n\\end{thm}\n\\begin{thm}\n\tSuppose $T \\in \\mathcal{L}(V)$ has an upper-triangular matrix with respect to some basis of $V$. Then $T$ is invertible if and only if all the entries on the diagonal of that upper-triangular matrix are nonzero.\n\\end{thm}\n\\begin{thm}\n\tSuppose $T \\in \\mathcal{L}(V)$ has an upper-triangular matrix with respect to some basis of $V$. Then the eigenvalues of $T$ are precisely the entries on the diagonal of that upper-triangular matrix.\n\\end{thm}\n\\section{Eigenspaces and Diagonal Matrices}\n\\label{cha:eigenspaces_and_diagonal_matrices}\n\\begin{defn}[Eigenspace]\n\t\nSuppose $T \\in \\mathcal{L}(V)$ and $\\lambda\\in F$. The \\textbf{eigenspace} of $T$ corresponding to $\\lambda$, denoted $E(\\lambda,T)$, is defined by \n\\begin{align*}\n\tE(\\lambda,T) = \\textrm{null}(T-\\lambda I).\n\\end{align*}\nIn other words, $E(\\lambda,T)$ is the set of all eigenvectors of $T$ corresponding to $\\lambda$, along with the zero vector.\n\\end{defn}\n\\begin{cor}[Sum of eigenspaces is a direct sum]\n\tSuppose $V$ is finite-dimensional and $T \\in \\mathcal{L}(V)$. Suppose also that $\\lambda_1,\\ldots,\\lambda_m$ are distinct eigenvalues of $T$. 
Then\n\t\\begin{align*}\n\t\tE(\\lambda_1,T) + \\ldots + E(\\lambda_m,T)\n\t\\end{align*}\n\tis a direct sum. Furthermore,\n\t\\begin{align*}\n\t\t\\textrm{dim} E(\\lambda_1,T) + \\ldots + \\textrm{dim} E(\\lambda_m,T) \\leq \\textrm{dim} V.\n\t\\end{align*}\n\\end{cor}\n\\begin{defn}[Diagonalizable]\n\tAn operator $T\\in \\mathcal{L}(V)$ is called \\textbf{diagonalizable} if the operator has a diagonal matrix with respect to some basis of $V$.\n\\end{defn}\n\\begin{thm}[Conditions equivalent to diagonalizability]\n\tSuppose $V$ is finite-dimensional and $T \\in \\mathcal{L}(V)$. Let $\\lambda_1,\\ldots,\\lambda_m$ denote the distinct eigenvalues of $T$. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item $T$ is diagonalizable \n\t\t\\item $V$ has a basis consisting of eigenvectors of $T$ \n\t\t\\item There exist one-dimensional subspaces $U_1,\\ldots,U_n$ of $V$, each invariant under $T$, such that\n\t\t\t\\begin{align*}\n\t\t\t\tV = U_1 \\bigoplus \\ldots \\bigoplus U_n \n\t\t\t\\end{align*}\n\t\t\\item $V = E(\\lambda_1,T) \\bigoplus \\ldots \\bigoplus E(\\lambda_m,T)$ \n\t\t\\item \\textrm{dim}$V$ = \\textrm{dim}$E(\\lambda_1,T) + \\ldots + \\textrm{dim}E(\\lambda_m,T)$.\n\t\\end{itemize}\n\\end{thm}\n\\begin{cor}\n\tIf $T \\in \\mathcal{L}(V)$ has \\textrm{dim}$V$ distinct eigenvalues, then $T$ is diagonalizable.\n\\end{cor}\n\n\\section{Trace and Determinant}\n\\label{sec:trace_and_determinant}\n\nThe trace and determinant are a vital tool for characterizing linear maps, and now that we have defined eigenvalues, we can work with the concepts well enough to get a deep understanding.\n\n\\begin{defn}[Trace]\n\tSuppose \\(T \\in \\mathcal{L}(V)\\). The \\textbf{trace} of \\(T\\) is the sum of the eigenvalues of \\(T\\), with each eigenvalue repeated according to its multiplicity.\n\\end{defn}\nNote that if our underlying field is \\(\\R\\), we have to include complex eigenvalues as well.\\\\\n\nThis might seem different from the traditional definition used in other linear algebra notes, but we will soon see that it corresponds exactly to the other definitions. In particular, the trace has a connection to characteristic polynomials.\n\n\\begin{prop}\n\tSuppose \\(T \\in \\mathcal{L}(V)\\) and let \\(n = \\textrm{dim}(V)\\). Then \\(\\textrm{Tr}(T)\\) is equal to the negative of the coefficient of \\(x^{n-1}\\) in the characteristic polynomial of \\(T\\).\n\\end{prop}\n\nNow we will make the connection to entries of a matrix.\n\\begin{thm}\n\tLet \\(T \\in \\mathcal{L}(V)\\). Then the trace of \\(\\mathcal{M}(T)\\) is the sum of the entries of the diagonal, and exactly equals \\(\\textrm{Tr}(T)\\).\n\\end{thm}\n\nNote that this also tells us that the trace is invariant under change of basis.\n\n\\begin{cor}\n\tIf \\(A\\) and \\(B\\) are square matrices of the same size, then\n\t\\begin{align*}\n\t\t\\textrm{Tr}(AB) = \\textrm{Tr}(BA)\n\t\\end{align*}\n\\end{cor}\n\nThe trace also has a useful additivity property.\n\n\\begin{prop}\n\tSuppose \\(S,T \\in \\mathcal{L}(V)\\). Then \\(\\textrm{Tr}(S+T) = \\textrm{Tr}(S) + \\textrm{Tr}(T)\\).\n\\end{prop}\n\nThe trace can be useful in proving some rather strong properties. 
For example:\n\\begin{prop}\n\tThere do not exist operators \\(S,T \\in \\mathcal{L}(V)\\) such that\n\t\\begin{align*}\n\t\tST-TS = I.\n\t\\end{align*}\n\\end{prop}\n\\begin{proof}\n\tTo see this, observe that\n\t\\begin{align*}\n\t\t\\textrm{Tr}(ST-TS) &= \\textrm{Tr}(ST) - \\textrm{Tr}(TS)\\\\\n\t\t\t\t   &= \\textrm{Tr}(ST) - \\textrm{Tr}(ST)\\\\\n\t\t\t\t   &= 0\n\t\\end{align*}\n\tbut \\(\\textrm{Tr}(I) = \\textrm{dim}(V)\\), and so equality can never hold. \n\\end{proof}\n\n\\subsection{Determinant}\n\\label{subsec:determinant}\n\nOnce again, we will define the determinant in an abstract manner, and show it corresponds to our traditional notion of a determinant.\n\n\\begin{defn}[Determinant]\n\tLet \\(T \\in \\mathcal{L}(V)\\). The \\textbf{determinant} of \\(T\\) is the product of the eigenvalues of \\(T\\), with each eigenvalue repeated according to its multiplicity.\n\\end{defn}\nIf our underlying field is \\(\\R\\), then we need to include the complex eigenvalues as well.\\\\\n\nOnce again, the determinant has many deep connections, including to the characteristic polynomial.\n\\begin{prop}\n\tSuppose \\(T \\in \\mathcal{L}(V)\\) and let \\(n = \\textrm{dim}(V)\\). Then \\(\\textrm{det}(T)\\) equals \\((-1)^{n}\\) times the constant term of the characteristic polynomial of \\(T\\).\n\\end{prop}\n\nThis alone actually gives us some powerful consequences already.\n\\begin{prop}\n\tAn operator on \\(V\\) is invertible if and only if its determinant is nonzero.\n\\end{prop}\n\\begin{proof}\n\tRecall that the operator \\(T\\) is invertible if and only if 0 is not an eigenvalue of \\(T\\). Of course, this only occurs if and only if the product of the eigenvalues of \\(T\\) is nonzero, and hence we have the proposition.\n\\end{proof}\n\nSome take the proposition below to be the definition, but we will have it follow as a consequence.\n\\begin{prop}\n\tLet \\(T \\in \\mathcal{L}(V)\\). Then the characteristic polynomial of \\(T\\) equals \\(\\textrm{det}(xI-T)\\).\n\\end{prop}\n\nAll of this corresponds to the traditional definition of determinants of matrices. We will omit the details to show this, however.\n\n\\begin{defn}[Determinant of a matrix]\n\tLet \\(A\\) be an \\(n\\)-by-\\(n\\) matrix given by\n\t\\begin{align*}\n\t\tA = \\begin{pmatrix} A_{1,1} & \\cdots & A_{1,n}\\\\\n\t\t\\vdots & \\ddots & \\vdots \\\\\n\tA_{n,1} & \\cdots & A_{n,n}\\end{pmatrix} .\n\t\\end{align*}\n\tThe \\textbf{determinant} of \\(A\\) is defined by\n\t\\begin{align*}\n\t\t\\textrm{det}(A) = \\sum_{(\\sigma_1,\\ldots,\\sigma_n) \\in \\textrm{perm}(n)} \\left( \\textrm{sign}(\\sigma_1,\\ldots,\\sigma_n) \\right) A_{\\sigma_1,1}\\cdots A_{\\sigma_n, n}.\n\t\\end{align*}\n\\end{defn}\n\n\\begin{prop}\n\tSuppose \\(A\\) is a square matrix and \\(B\\) is obtained from \\(A\\) by interchanging two columns. Then\n\t\\begin{align*}\n\t\t\\textrm{det}(A) = -\\textrm{det}(B).\n\t\\end{align*}\n\\end{prop}\nWe can also see that if a square matrix has two equal columns, then \\(\\textrm{det}(A) = 0\\).\n\n\\begin{prop}\n\tSuppose \\(S,T \\in \\mathcal{L}(V)\\). Then\n\t\\begin{align*}\n\t\t\\textrm{det}(ST) = \\textrm{det}(TS) = \\textrm{det}(S) \\textrm{det}(T)\n\t\\end{align*}\n\\end{prop}\n\nMany of our special operators have distinctive traces and determinants.\n\n\\begin{prop}\n\tSuppose \\(V\\) is an inner product space and \\(S \\in \\mathcal{L}(V)\\) is an isometry. 
Then \\(\\left| \\textrm{det}(S) \\right|=1\\).\n\\end{prop}\n\n\\subsection{Volume}\n\\label{subsec:volume}\n\n\n\n\\begin{prop}\n\tSuppose \\(V\\) is an inner product space and \\(T \\in \\mathcal{L}(V)\\). Then\n\t\\begin{align*}\n\t\t\\left| \\textrm{det}(T) \\right| = \\textrm{det}\\left( \\sqrt{T^{*}T}  \\right) .\n\t\\end{align*}\n\\end{prop}\n\nWe define the image of a set \\(\\Omega\\) under a linear map \\(T\\) by\n\\begin{align*}\n\tT(\\Omega ) := \\left\\{Tx \\mid x \\in \\Omega  \\right\\} .\n\\end{align*}\n\n\\begin{prop}\n\tLet \\(T \\in \\mathcal{L}(\\R^{n})\\) and \\(\\Omega \\subset \\R^{n}\\). Then\n\t\\begin{align*}\n\t\t\\textrm{Vol}(T(\\Omega )) = \\left| \\textrm{det}(T) \\right|  \\textrm{Vol}(\\Omega )\n\t\\end{align*}\n\\end{prop}\nThis also implies that isometries don't change volume.\n\\end{document}\n", "meta": {"hexsha": "8b8c6a1080c13492273a6e122b40810d2541af05", "size": 10075, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Algebra/Notes/source/10-23-19-Eigen.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Linear Algebra/Notes/source/10-23-19-Eigen.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Linear Algebra/Notes/source/10-23-19-Eigen.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.632231405, "max_line_length": 241, "alphanum_fraction": 0.6951861042, "num_tokens": 3422, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8244619263765707, "lm_q1q2_score": 0.5737911413348918}}
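A quick numerical sanity check of the abstract definitions above (our own addition, using numpy): for a random real matrix, the sum of the eigenvalues matches the diagonal sum, the product matches the classical determinant, and \(\textrm{Tr}(AB) = \textrm{Tr}(BA)\).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))   # real matrix; some eigenvalues may be complex
B = rng.normal(size=(4, 4))
lam = np.linalg.eigvals(A)    # eigenvalues, repeated with multiplicity

# Trace = sum of eigenvalues = sum of the diagonal entries.
assert np.isclose(lam.sum().real, np.trace(A))
# Determinant = product of eigenvalues (conjugate pairs multiply out real).
assert np.isclose(lam.prod().real, np.linalg.det(A))
# Tr(AB) = Tr(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
\end{verbatim}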
{"text": "\\newpage\n\\subsection{Goal}\n\nThe useage of direct methods (one-dimensional methods of exhaustive search, dichotomy, golden section search; multidimensional methods of exhaustive search, Gauss, Nelder-Mead) in the tasks of unconstrained nonlinear optimization.\n\n\\subsection{Formulation of the problem}\n\n\\paragraph{I.}\n\nUse the one-dimensional methods of \\textit{exhaustive search}, \\textit{dichotomy} and \\textit{golden section} search to find an approximate (with precision $\\varepsilon = 0.001$) solution $x: f(x) \\rightarrow \\min$  for the following functions and domains:\n\n\\begin{enumerate}\n    \\item $f(x) = x^3, x \\in [0, 1]$;\n    \\item $f(x) = \\lvert x - 0.2 \\rvert, x \\in [0, 1]$;\n    \\item $f(x) = x\\sin{\\frac{1}{x}}, x \\in [0.01, 1]$.\n\\end{enumerate}\n\nCalculate the number of $f$-calculations and the number of iterations performed in each method and analyze the results.\nExplain differences (if any) in the results obtained.\n\n\\paragraph{II.}\n\nGenerate random numbers $\\alpha \\in (0, 1)$ and $\\beta \\in (0, 1)$.\nFurthermore, generate the noisy data $\\{x_k, y_k\\}$, where $k = 0, ..., 100$, according to the following rule:\n\n\\begin{equation}\n    y_k = \\alpha x_k + \\beta + \\delta_k, x_k = \\frac{k}{100},\n\\end{equation}\n\nwhere $\\delta_k \\sim N(0, 1)$ are values of a random variable with standard normal distribution.\nApproximate the data by the following linear and rational functions:\n\n\\begin{enumerate}\n    \\item $F(x, a, b) = ax + b$ (linear approximant),\n    \\item $F(x, a, b) = \\frac{a}{1 + bx}$ (rational approximant),\n\\end{enumerate}\n\nby means of least squares through the numerical minimization (with precision $\\varepsilon = 0.001$ of the following function:\n\n\\begin{equation}\n    D(a, b) = \\sum^{100}_{k=0}(F(x_k, a, b) - y_k)^2.\n\\end{equation}\n\nTo solve the minimization problem, use the methods of exhaustive search, Gauss and Nelder-Mead.\nIf necessary, set the initial approximations and other parameters of\nthe methods.\nVisualize the data and the approximants obtained in a plot \\textbf{separately for each type of approximant}.\nAnalyze the results obtained (in terms of number of iterations, precision, number of function evaluations, etc.).\n\n\\subsection{Brief theoretical part}\n\nOptimization methods are numerical methods for finding optimal (in some sense) values of objective functions, for example, in the framework of mathematical models of certain processes.\n\n\\textit{Direct optimization methods} (\\textit{zero-order optimization methods}) use only the values of the function $f$ itself, but not its derivatives, when searching for $x*$.\nThese methods are particularly applicable for continuous (and not necessarily differentiable) functions $f = f(x)$ of a single variable $x$ on the segment $Q = [0, 1]$, but also have a low convergence rate.\n\n\\paragraph{Exhaustive search}\n\nIn the \\textbf{exhaustive search} algorithm, tests are performed at points that are determined by evenly dividing the interval $[a, b]$ by $N$ identical subintervals.\nThe smallest value of the function $f(x)$ is selected from the calculated values.\nLet this value be reached at point $x_k$.\nThen, due to the unimodality of the function $f(x)$, the subintervals $[a, x_{k-1}]$, $[x_{k+1}, b]$ can be excluded from consideration, i.e., the interval $[x_{k-1}, x_{k+1}]$ can be made the next uncertainty interval.\n\n\\paragraph{Dichotomy method}\n\nIn the \\textbf{dichotomy algorithm}, tests are performed in pairs.\nThe 
coordinates of each subsequent pair of tests are separated by the value $\\delta_x < \\varepsilon_x$, where $\\varepsilon_x$ is the required accuracy of the solution.\nThe pair of tests is performed around the middle of the current interval.\nFor the values of $f(x)$ obtained at these points, one half of the interval is excluded from further consideration due to the unimodality of the function $f(x)$.\nThe value of $\\delta_x$ is determined by the required accuracy of the solution.\n\n\\paragraph{Golden section.}\n\nConsider the interval $[a, b]$.\nA point $c$ is said to fulfill the Golden section of the interval $[a, b]$ if $\\frac{c - a}{b - a} = \\tau$, where $\\tau = \\frac{\\sqrt{5} - 1}{2} \\approx 0.618$ is the solution of the quadratic equation $\\tau^2 + \\tau - 1 = 0$.\nBecause one test point is reused in each iteration, the method requires about half as many computations of the function $f(x)$ as the dichotomy method.\n\n\\paragraph{Gauss method.}\n\nThe essence of the method is to minimize the function along each of the coordinates in turn at each iteration.\n\n\\paragraph{Nelder-Mead.}\n\nIf the function $f(x)$ is a ravine function, the efficiency of the simplex method in solving the problem\n\\begin{equation}\n    \\min_{x \\in R^n} f(x) = f(x^*) = f^*\n\\end{equation}\nis significantly reduced due to the fact that the regular simplex cannot be \"extended\" along the ravine.\nThe Nelder-Mead method (the deformable polyhedron method) is a development of the simplex method and uses the deformation of the current simplex (not necessarily regular) in the search process.\nThe method uses the following operations on simplexes:\n\n\\begin{itemize}\n    \\item reflection;\n    \\item reduction;\n    \\item compression;\n    \\item stretching.\n\\end{itemize}\n\n\\subsection{Results}\n\nFor the first and second functions, the local minimum coincides with the global one, and the third function has many local minima.\nHowever, for each function, the $x_{min}$ values found by the different methods are the same.\nThe main difference between these methods is the number of iterations performed.\nWhile for the dichotomy method and the Golden section method the number of iterations is almost the same, the exhaustive method goes through all the values of the function (with a step equal to $\\varepsilon$), so its number of iterations is much larger.\n\n\\begin{enumerate}\n    \\item{\n        $f(x) = x^3, x \\in [0, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (1000, 1000);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (22, 11);}\n            \\item{\\textit{golden\\_section}: count of function evaluations and iterations = (17, 15).}\n        \\end{itemize}\n    }\n    \\item{\n        $f(x) = \\lvert x - 0.2 \\rvert, x \\in [0, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (1000, 1000);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (22, 11);}\n            \\item{\\textit{golden\\_section}: count of function evaluations and iterations = (17, 15).}\n        \\end{itemize}\n    }\n    \\item{\n        $f(x) = x\\sin{\\frac{1}{x}}, x \\in [0.01, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (900, 900);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (20, 10);}\n            
\n\\subsection{Results}\n\nFor the first and second functions, the local minimum coincides with the global one, and the third function has many local minima.\nNevertheless, the $x_{min}$ values found by the different methods coincide for each function.\nThe main difference between these methods is the number of iterations performed.\nWhile for the dichotomy method and the golden section method the number of iterations is almost the same, the exhaustive method goes through all the values of the function (with a step equal to $\\varepsilon$), so its number of iterations is much larger.\n\n\\begin{enumerate}\n    \\item{\n        $f(x) = x^3, x \\in [0, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (1000, 1000);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (22, 11);}\n            \\item{\\textit{golden\\_section}: count of function evaluations and iterations = (17, 15).}\n        \\end{itemize}\n    }\n    \\item{\n        $f(x) = \\lvert x - 0.2 \\rvert, x \\in [0, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (1000, 1000);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (22, 11);}\n            \\item{\\textit{golden\\_section}: count of function evaluations and iterations = (17, 15).}\n        \\end{itemize}\n    }\n    \\item{\n        $f(x) = x\\sin{\\frac{1}{x}}, x \\in [0.01, 1]$:\n        \\begin{itemize}\n            \\item{\\textit{exhaustive\\_search}: count of function evaluations and iterations = (900, 900);}\n            \\item{\\textit{dichotomy\\_method}: count of function evaluations and iterations = (20, 10);}\n            \\item{\\textit{golden\\_section}: count of function evaluations and iterations = (17, 15).}\n        \\end{itemize}\n    }\n\\end{enumerate}\n\nIt is known that the least-squares problem with a \\textit{linear approximant} has a unique solution, so these methods give similar optimal values for $a$ and $b$, regardless of the choice of initial approximations.\nIn the case of the \\textit{rational approximant}, significant nonlinearity occurs, so the result depends on the initial approximations. It is worth adding that the one-dimensional search inside the Gauss method was implemented as a brute-force (exhaustive) search, so its plots are similar to those of the exhaustive search (Figure \\ref{ris:direct}).\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{img/plot.png}\n    \\caption{Direct methods of optimization.}\n    \\label{ris:direct}\n\\end{figure}\n\n\\subsection{Conclusion}\n\nIn the course of the laboratory work, direct methods were implemented and analyzed within the problem of unconstrained nonlinear optimization.\n\n\\subsection{Appendix}\n\nThe source code is located \\href{https://github.com/vanSultan/anal_dev_algo/tree/lab_02}{here}: \\url{https://github.com/vanSultan/anal_dev_algo/tree/lab_02}.\n", "meta": {"hexsha": "fe18c5e487801eb7f19d79036c8f9e8473811573", "size": 7951, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lab_02/report/lab_02_body.tex", "max_stars_repo_name": "vanSultan/anal_dev_algo", "max_stars_repo_head_hexsha": "e9d6382103080e6f885b1456cc0a3ce64fbe1863", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lab_02/report/lab_02_body.tex", "max_issues_repo_name": "vanSultan/anal_dev_algo", "max_issues_repo_head_hexsha": "e9d6382103080e6f885b1456cc0a3ce64fbe1863", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab_02/report/lab_02_body.tex", "max_forks_repo_name": "vanSultan/anal_dev_algo", "max_forks_repo_head_hexsha": "e9d6382103080e6f885b1456cc0a3ce64fbe1863", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6556291391, "max_line_length": 281, "alphanum_fraction": 0.7249402591, "num_tokens": 2080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8244619199068831, "lm_q1q2_score": 0.5737911368322589}}
{"text": "\\subsection{Strengths}\n\\begin{itemize}\n  \\item Using Gaussian surface and Gauss theorem in physical electromagnetism, we deduce the process of fungi wood decomposition in accordance with objective laws and mathematical logic.\n  \\item Based on \\textit{Figure~1} and \\textit{Figure~2} given in problem sheet, after performing the function fitting derivation, we carry out the fungi selection operation, which simplifies the overall model building process, and obtains more practical data.\n  \\item Starting from Single-group Logistic Model, we use Multi-groups Logistic Model to accurately and comprehensively describe the interactions between different species of fungi.\n  \\item In the case of insufficient time series data, we use the Gray Prediction Model to predict and analyze the survival of fungal species combination and the change trend of decomposition in different environmental conditions.\n  \\item From the perspective of the ecosystem, we carry out an ecologically significant analysis of the impact of fungal population biodiversity on the carbon cycle in authentic nature.\n\\end{itemize}", "meta": {"hexsha": "23ab276517deaaca13077ce3ac39af0cde486c14", "size": 1098, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6/1.tex", "max_stars_repo_name": "syy11cn/2021-mcm-meritorious-article", "max_stars_repo_head_hexsha": "3eaf143f4319fae681d98134bfc7e699833d8273", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-11-07T14:38:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-11T10:37:34.000Z", "max_issues_repo_path": "6/1.tex", "max_issues_repo_name": "syy11cn/2021-mcm-meritorious-article", "max_issues_repo_head_hexsha": "3eaf143f4319fae681d98134bfc7e699833d8273", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6/1.tex", "max_forks_repo_name": "syy11cn/2021-mcm-meritorious-article", "max_forks_repo_head_hexsha": "3eaf143f4319fae681d98134bfc7e699833d8273", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 137.25, "max_line_length": 260, "alphanum_fraction": 0.8214936248, "num_tokens": 210, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577681195338728, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5737441631317672}}
{"text": "\\section{Implementation of non-pharmaceutical interventions} \\label{npi}\nA major part of the rationale for the development of this model was to capture the past impact of non-pharmaceutical interventions (NPIs) and produce future scenarios projections with the implementation or release of such interventions.\n\n\\subsection{Isolation and quarantine}\nFor persons who are identified with symptomatic disease and enter clinical stratum 3, self-isolation is assumed to occur and their infectiousness is modified as described above. The proportion of ambulatory symptomatic persons passively identified through the public health response is determined by the case detection rate as described above in Section \\ref{cdr}.\n\n\\subsection{Community quarantine or ``lockdown\" measures}\nFor all NPIs relating to reduction of human mobility or \u201clockdown\u201d (i.e. all NPIs other than isolation and quarantine), these interventions are implemented through dynamic adjustments to the age-assortative mixing matrix. The baseline mixing matrices have the major advantage of allowing for disaggregation of total contact rates by location, i.e. home, work, school and other locations. This disaggregation allows for the simulation of various NPIs in the local context by dynamically varying the contribution of each location to reflect the historical implementation of the interventions.\n\nFor each location $L$ (home, school, work, other locations) the age-specific contact matrix $\\mathbf{C^L} = (c_{i,j}^L) \\in \\mathbb{R}_{+}^{16 \\times 16}$ is defined such that $c_{i,j}^L$ is the average number of contacts that a typical individual aged $i$ has with individuals aged $j$. The original matrices from the United Kingdom are denoted $\\mathbf{Q^L} = (q_{i,j}^L) \\in \\mathbb{R}_{+}^{16 \\times 16}$, where $q_{i,j}^L$ is defined using the same convention as for $c_{i,j}^L$. The matrices $\\mathbf{Q^L}$ were extracted using the R package ``socialmixr'' (v 0.1.8).\n\nLet $\\pi_j$ denote the proportion of people aged $j$ in the UK, and $\\rho_j$ the proportion of people aged $j$ in Victoria. The contact matrices $\\mathbf{C^L}$ were obtained from:\n$$\nc_{i,j}^L = q_{i,j}^L \\times \\frac{\\pi_j}{\\rho_j} . \n$$\n\n\nThe overall contact matrix results from the summation of the four location-specific contact matrices: \\(C_{0}=C_{H}+C_{S}+C_{W}+C_{L}\\), where \\(C_{H}\\), \\(C_{S}\\), \\(C_{W}\\) and \\(C_{L}\\) are the age-specific contact matrices associated with households, schools, workplaces and other locations, respectively.\n\nIn our model, the contributions of the matrices \\(C_{S}\\), \\(C_{W}\\) and \\(C_{L}\\) vary with time such that the input contact matrix can be written:\n\\[C(t)= C_{H}+ s(t)^{2}C_{S}+ w(t)^{2}C_{W}+l(t)^{2}C_{L}\\]\n\nThe modifying functions are each squared to capture the effect of the mobility changes on both the infector and the infectee in any given interaction that could potentially result in transmission. The modifying functions incorporate both macro-distancing and microdistancing effects, depending on the location.\n\n\\subsection{School closures/re-openings}\nReduced attendance at schools is represented through the function \\(s(t)\\), which represents the proportion of all students currently attending on-site teaching. If schools are fully closed, \\(s(t)=0\\) and \\(C_{S}\\) does not contribute to the overall mixing matrix \\(C(t)\\). 
\n\\subsection{School closures/re-openings}\nReduced attendance at schools is represented through the function \\(s(t)\\), which represents the proportion of all students currently attending on-site teaching. If schools are fully closed, \\(s(t)=0\\) and \\(C_{S}\\) does not contribute to the overall mixing matrix \\(C(t)\\). \\(s(t)\\) is calculated through a series of estimates of the proportion of students attending schools, to which a smoothed step function is fitted. Note that the dramatic changes in this contribution to the mixing matrix with school closures/re-openings are more marked than the changes seen with the simulation of policy changes in workplaces and other locations (which are determined by empiric data and so do not vary so abruptly or reach a value of zero).\n\n\\subsection{Workplace closures}\nWorkplace closures are represented by quadratically reducing the contribution of workplace contacts to the total mixing matrix over time. This is achieved through the scaling term \\(w(t)^{2}\\) which modifies the contribution of \\(C_{W}\\) to the overall mixing matrix \\(C(t)\\). The profile of the function \\(w(t)\\) is set by fitting a polynomial spline function to Google mobility data for workplace attendance (Table \\ref{tab:mobility_map}).\n\n\\subsection{Community-wide movement restriction}\nCommunity-wide movement restriction (or ``lockdown'') measures are represented by proportionally reducing the contribution of the other locations contacts to the total mixing matrix over time. This is achieved through the scaling term \\(l(t)^{2}\\) which modifies the contribution of \\(C_{L}\\) to the overall mixing matrix \\(C(t)\\). The profile of the function \\(l(t)\\) is set by fitting a polynomial spline function to an average of Google mobility data for various locations, as indicated in Table \\ref{tab:mobility_map}.\n\n\\subsection{Household contacts}\nThe contribution of household contacts to the overall mixing matrix \\(C(t)\\) is fixed over time. Although Google provides mobility estimates for residential contacts, the nature of these data is different from that of the other Google mobility types. They represent the time spent in that location, as opposed to the other categories, which measure a change in total visitors rather than the duration. The daily frequency with which people attend their residence is likely to be close to one and we considered that household members likely have a daily opportunity for infection with each other household member. Therefore, we did not implement a function to scale the contribution of household contacts to the mixing matrix with time.\n\n\\begin{table}[ht]\n\\renewcommand{\\baselinestretch}{1}\n    \\begin{tabular}{| p{4.4cm} | p{4.4cm} | p{5cm} |}\n        \\hline\n        \\textbf{Location} & \\textbf{Approach} & \\textbf{Google mobility types} \\\\\n        \\hline\n        School & Policy response & Not applicable \\\\\n      \\hline\n      Household & Constant & Not applicable \\\\\n      \\hline\n      Workplace & Google mobility & Workplace \\\\\n      \\hline\n      Other locations & Google mobility & \n      Unweighted average of: \\begin{itemize}\n\t\t\t\\item Retail and recreation\n          \\item Grocery and pharmacy\n          \\item Parks\n          \\item Transit stations\n      \\end{itemize}\\\\\n      \\hline\n    \\end{tabular}\n    \\caption{\\textbf{Mapping of Google mobility data to contact locations.}}\n    \\label{tab:mobility_map}\n\\end{table}\n\n\\subsection{Microdistancing}\n\\label{microdist}\nInterventions other than those that prevent people coming into contact with one another are thought to be important to COVID-19 transmission and epidemiology, such as maintaining interpersonal physical distance and the wearing of face coverings. 
We therefore implemented a ``microdistancing'' function to represent reductions in the rate of effective contact that are not attributable to persons visiting specific locations and so are not captured through Google mobility data. This microdistancing function reduces the values of all non-household contributions to the mixing matrices by a certain proportion. These time-varying functions multiplicatively scale the location-specific contact rate modifiers \\(s(t)\\), \\(w(t)\\) and \\(l(t)\\).\n", "meta": {"hexsha": "acde1caf2e6176e6a05e90a6e08fb9f8c6d42bd7", "size": 7256, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/papers/covid_19/preprocess/mixing_matrix/mixing_and_npis_vic2021.tex", "max_stars_repo_name": "monash-emu/AuTuMN", "max_stars_repo_head_hexsha": "fa3b81ef54cf561e0e7364a48f4ff96585dc3310", "max_stars_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-03-11T06:15:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T03:38:35.000Z", "max_issues_repo_path": "docs/papers/covid_19/preprocess/mixing_matrix/mixing_and_npis_vic2021.tex", "max_issues_repo_name": "monash-emu/AuTuMN", "max_issues_repo_head_hexsha": "fa3b81ef54cf561e0e7364a48f4ff96585dc3310", "max_issues_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_issues_count": 96, "max_issues_repo_issues_event_min_datetime": "2020-01-29T05:10:29.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T01:48:46.000Z", "max_forks_repo_path": "docs/papers/covid_19/preprocess/mixing_matrix/mixing_and_npis_vic2021.tex", "max_forks_repo_name": "monash-emu/AuTuMN", "max_forks_repo_head_hexsha": "fa3b81ef54cf561e0e7364a48f4ff96585dc3310", "max_forks_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-24T00:38:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-19T16:19:03.000Z", "avg_line_length": 109.9393939394, "max_line_length": 746, "alphanum_fraction": 0.7650220507, "num_tokens": 1658, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577680904463333, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.5737441493374722}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amssymb}\n\\usepackage{csquotes}\n\\usepackage[pdfborder={0 0 0}]{hyperref}\n\\newcommand\\set[1]{\\left\\{#1\\right\\}}\n\\newcommand\\card[1]{\\left|#1\\right|}\n\n\\begin{document}\n\\section{Exercise 10}\n\n\\begin{verbatim}\n#!/usr/bin/env python\n\nimport itertools\n\ndef experiment(n):\n    distribution = [0] * (n + 1)\n    for outcome in itertools.product({'L', 'R'}, repeat=2*n-1):\n        bags = {'L': n, 'R': n}\n        for out in outcome:\n            bags[out] -= 1\n            if bags[out] == 0:\n                break\n\n        print(outcome, bags)\n\n        outcome_k = max(bags.values())\n        distribution[outcome_k] += 1\n\n    return distribution\n\n\nprint(experiment(5))\n\\end{verbatim}\n\nFor $n=5$, it gives $k=1$ in 140 cases, $k=2$ in 140 cases, $k=3$ in 120 cases, $k=4$ in 80 cases and $k=5$ in 32 cases.\nSee OEIS A172068 \\enquote{Triangular array T(n,k) = the number of n-step one dimensional walks that return to the origin exactly k times.}.\n\n\n\\[ \\Omega = \\set{0,1}^{2n-k} \\]\n\\[ \\mathcal A = \\mathcal P(\\Omega) \\]\n\\[ \\mathbb P(A_k) = \\frac{\\card{A_k}}{\\card{\\Omega}} \\]\n\n$0$ represents a left drawing, $1$ represents a right drawing.\n\n\\[ \\implies A_k = R_k + L_k \\]\n\nRegards $L_k$, there must be a $1$ at its tail.\nBefore that $(n+1)$ ones and $n-k$ zeros will be distributed on $2n - k - 1$ places.\n\\[ \\implies {2n-k-1 \\choose n-1} \\]\n\n\n\nThe probability that right-sided box is chosen: ${1 \\over 2}^{n-1}$\nThe probability that left-sided box is chosen: ${1 \\over 2}^{n-k}$\n\nIn the last trial, we chose a right-sided box, so $\\cdot \\frac12$.\nThe probability that right-sided box is empty and left-sided box has\n$k$ elements,\n\\[\n  {2n - k - 1 \\over n-1} \\cdot {1 \\over 2}^{n-1} {1 \\over 2} {1 \\ over 2}^{n-k} = {2n-k-1 \\over n-1} \\cdot {1 \\over 2}^{2n-k}\n\\]\n$L_k$ has same cardinality like $R_k$, hence times $2$.\n\n\\[\n  \\mathbb P(A_k) = 2 \\cdot {2n-k-1 \\choose n-1} {1 \\over 2}^{2n-k} = {2n-k-1 \\over n-1} \\cdot {1 \\over 2}^{2n-k-1}\n\\]\n\n\\end{document}", "meta": {"hexsha": "df53c82e341924d37927ce4ae1737377450cdd23", "size": 2014, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "probability_theory_practicals/ex10/solution.tex", "max_stars_repo_name": "prokls/math-lecture-notes", "max_stars_repo_head_hexsha": "d1a94e128d13ce4399a9cc55323b2f8e0d9494fd", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-11-25T01:49:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-26T14:47:36.000Z", "max_issues_repo_path": "probability_theory_practicals/ex10/solution.tex", "max_issues_repo_name": "prokls/math-lecture-notes", "max_issues_repo_head_hexsha": "d1a94e128d13ce4399a9cc55323b2f8e0d9494fd", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-05-22T07:56:03.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-02T09:32:40.000Z", "max_forks_repo_path": "probability_theory_practicals/ex10/solution.tex", "max_forks_repo_name": "prokls/math-lecture-notes", "max_forks_repo_head_hexsha": "d1a94e128d13ce4399a9cc55323b2f8e0d9494fd", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-03-24T14:42:30.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-25T11:00:11.000Z", "avg_line_length": 28.7714285714, "max_line_length": 139, 
"alphanum_fraction": 0.6171797418, "num_tokens": 725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.8128673110375458, "lm_q1q2_score": 0.5737095878093339}}
{"text": "\\documentclass{article}\n\n\\usepackage{fancyhdr}\n\\usepackage{extramarks}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\\usepackage[plain]{algorithm}\n\\usepackage{algpseudocode}\n\\usepackage{matlab-prettifier}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n%\n% Basic Document Settings\n%\n\\lstMakeShortInline[style=Matlab-editor]\"\n\n\\topmargin=-1in\n\\evensidemargin=0in\n\\oddsidemargin=0in\n\\textwidth=6.5in\n\\textheight=9.0in\n\\headsep=0.25in\n\n\\linespread{1.1}\n\n\n\\rhead{\\firstxmark}\n\\lfoot{\\lastxmark}\n\\cfoot{\\thepage}\n\n\\renewcommand\\headrulewidth{0.4pt}\n\\renewcommand\\footrulewidth{0.4pt}\n\n%\n% Homework Details\n%   - Title\n%   - Due date\n%   - Class\n%   - Section/Time\n%   - Instructor\n%   - Author\n%\n\n\\newcommand{\\hmwkTitle}{AMATH 482 Homework 3: PCA}\n\\newcommand{\\hmwkDueDate}{February 26, 2019}\n\\newcommand{\\hmwkClassInstructor}{Professor Nathan Kutz}\n\\newcommand{\\hmwkAuthorName}{\\textbf{Skyler Hallinan}}\n\n%\n% Title Page\n%\n\n\\title{\n    \\textmd{\\textbf{\\text{ } \\hmwkTitle}}\\\\\n}\n\n\\author{\\hmwkAuthorName}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section*{\\fontsize{19}{15}\\selectfont Abstract}\n\t\tWe were given a four sets of three videos, where each set corresponded to a different oscillation scenario. Each of the three videos represented samples taken from a different location; the cameras were in different locations and observed the same movement from different angles. We performed principal component analysis on the four systems of oscillating data to convert the six variables (three sets of X and Y coordinates at $n$ points in time) into six principal components. We then looked at the energy content of each of these components, and projected the data onto significant principal components basis sets and compared them to the original oscillation behavior.\n\\section*{\\fontsize{19}{15}\\selectfont Introduction and Overview}\n\tWe have taken videos of four different oscillatory situations, with different noise and rotation behavior. We are assuming that we have no underlying knowledge of the mechanics or equations behind this motion, and we are trying to experimentally determine how they function. For each of these four situations, we have somewhat oversampled; we have used three cameras at different locations to track the oscillatory motion of the bucket on a spring. There is a light on the bucket to track the movement, so we can convert the video of the bucket oscillating to $(x,y)$ coordinates by tracking the light in each frame, and add them to a data matrix, for all three cameras. \n\tOnce we have this matrix of size $6 \\times$ FrameNumber, we will need to perform SVD on it. \\\\ \\\\\nThe SVD will show us which principle components are the most important, and how much energy each principal component captures. We can also see how many principal component dimensions are significant to see if we can accurately reduce the dimension of our system to it's accurate number. We will ultimately remove noise and redundancy from this data by reducing its dimension.\n\tAlthough some videos may only need one principal component, such as simple harmonic motion going up and down, some will be better represented by having more principal components, such as test 4, which has a rotation factor that cannot be expressed in only one dimension. 
We will see how accurate our PCA will be in determining the number of significant principal components needed to express a system.\n\\section*{\\fontsize{19}{15}\\selectfont Theoretical Background}\n\tThe Singular Value Decomposition is used to reformat a matrix into the following: \n\t\\begin{align*}\n\\mathbf { A } = \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\text{ where} \\\\\n\\begin{array} { l } { \\mathbf { U } \\in \\mathbb { C } ^ { m \\times m } \\text { is unitary } } \\\\ { \\mathbf { V } \\in \\mathbb { C } ^ { n \\times n } \\text { is unitary } } \\\\ { \\boldsymbol { \\Sigma } \\in \\mathbb { R } ^ { m \\times n } \\text { is diagonal } } \\end{array}\n\t\\end{align*}\n\tThe diagonal values of $\\boldsymbol{\\Sigma}$ are nonnegative and ordered from largest to smallest. We can also compute the SVD with: \n\\begin{align*}\n\\mathbf { A } ^ { T } \\mathbf { A } & = \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) ^ { T } \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) \\\\ & = \\mathbf { V } \\boldsymbol { \\Sigma } \\mathbf { U } ^ { * } \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\\\ & = \\mathbf { V } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { V } ^ { * }  \\\\ \\\\\n\\mathbf { A } \\mathbf { A } ^ { T } & = \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) \\left( \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * } \\right) ^ { T } \\\\ & = \\mathbf { U \\Sigma V } ^ { * } \\mathbf { V } \\boldsymbol { \\Sigma } \\mathbf { U } ^ { * } \\\\ & = \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { U } ^ { * } \\\\ \\\\\n\\mathbf { A } ^ { T } \\mathbf { A V } & = \\mathbf { V } \\Sigma ^ { 2 } \\\\ \\mathbf { A } \\mathbf { A } ^ { T } \\mathbf { U } & = \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 }\n\\end{align*}\nComputing the normalized eigenvectors for the final two equations gives orthonormal basis vectors for $U$ and $V$. In addition, the square root of the eigenvalues of these equations produces the singular values $\\sigma_j$. The SVD shows that every matrix becomes diagonal if the proper bases for the domain and the range are used.\n\tOne of the primary applications of the SVD is Principal Component Analysis (PCA). The PCA allows for large, complex, and even somewhat random sets of data to be reduced to lower dimensions of dynamics without any knowledge of underlying behavior. This allows one to quantify low dimensional dynamics in systems like a mass spring system. We can place the data in a matrix $X \\in \\mathbb { R } ^ { m \\times n }$, where $m$ is the number of measurements and $n$ is the number of data points taken over time. From this matrix, we must address two issues: noise in the data, which comes from a low signal-to-noise ratio (SNR), as well as redundancy, which comes from oversampling from multiple cameras. \\\\ \\\\\n\tThe covariance matrix will show correlations between all variables in the system; strongly statistically dependent variables can be classified as redundant. Since we are looking for the covariance of a specific matrix, we have $\\mathbf { C } _ { \\mathbf { X } } = \\frac { 1 } { n - 1 } \\mathbf { X X } ^ { T }$, where $C_X$ is a square $m \\times m$ matrix. This covariance matrix is key to understanding redundancies in data; large values correspond to redundancy. 
However, large diagonal terms also correspond to large variances, which suggest strong fluctuations in specific variables, and help identify important components. Large variances are thus dynamics of interest, while small variances are non-interesting. Thus, our goal is to view this $C_X$ matrix ordered from largest to smallest, with off-diagonal values of 0 (diagonalization).\n\tThe SVD does this. Each singular direction in the SVD captures as much energy as possible, which is measured by the singular values $\\sigma_j$. We know that the SVD can diagonalize any matrix using the appropriate bases $U$ and $V$ as shown above. We define the transformed variable $\\mathbf { Y } = \\mathbf { U } ^ { * } \\mathbf { X }$ where $U$ is the unitary transformation associated with the SVD ($\\mathbf { X } = \\mathbf { U } \\boldsymbol { \\Sigma } \\mathbf { V } ^ { * }$). We then calculate the variance in Y:\n \\begin{align*}\nC_Y = \\frac { 1 } { n - 1 } \\mathbf { Y } \\mathbf { Y } ^ { T }\\\\\n= \\frac { 1 } { n - 1 } \\left( \\mathbf { U } ^ { * } \\mathbf { X } \\right) \\left( \\mathbf { U } ^ { * } \\mathbf { X } \\right) ^ { T } \\\\\n= \\frac { 1 } { n - 1 } \\mathbf { U } ^ { * } \\left( \\mathbf { X } \\mathbf { X } ^ { T } \\right) \\mathbf { U } \\\\\n= \\frac { 1 } { n - 1 } \\mathbf { U } ^ { * } \\mathbf { U } \\boldsymbol { \\Sigma } ^ { 2 } \\mathbf { U } ^ { * } \\mathbf { U } \\\\\nC_Y= \\frac { 1 } { n - 1 } \\mathbf { \\Sigma } ^ { 2 }\n\\end{align*}\n\nIn general, PCA suggests that we are expanding our solution in another orthonormal basis, where we can diagonalize the system. We are normally given a function $f(x,t)$, which we must expand in a different basis representation such that $f ( x , t ) \\approx \\sum _ { j = 1 } ^ { N } a _ { j } ( t ) \\phi _ { j } ( x )$. From this expansion, we can consider the energy in each principal component direction, which are the singular values $\\sigma_j$. One note when we do SVD and PCA is that we must subtract the mean for each row $x_j$. PCA also assumes linearity, and that larger variances are more important. Finally, PCA may not always produce optimal results.\n
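\nThe recipe above can be sketched compactly (an illustrative NumPy sketch on synthetic data, separate from the MATLAB pipeline in Appendix B):\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 200                       # number of frames\nt = np.arange(n)\nsignal = np.sin(2 * np.pi * t / 50)\n# 6 measured coordinates sharing one underlying oscillation, plus noise\nX = np.outer(rng.standard_normal(6), signal)\nX = X + 0.1 * rng.standard_normal((6, n))\n\nX = X - X.mean(axis=1, keepdims=True)   # subtract the mean of each row\nU, s, Vt = np.linalg.svd(X.T / np.sqrt(n - 1), full_matrices=False)\nenergy = s**2 / np.sum(s**2)            # normalized diagonal variances\nY = X.T @ Vt.T                          # projection onto principal components\nprint(energy)   # one dominant component for this rank-one signal\n\\end{verbatim}\n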
\n\\section*{\\fontsize{19}{15}\\selectfont Algorithm Development}\n\tOur procedure was similar for all four cases. We first loaded in the three videos corresponding to different cameras. We started with one video. We found the number of frames for each one using the \"size\" command. We went through each of the frames of the video in a loop. For each frame, we converted it to grayscale using \"rgb2gray\" and \"double\" so we could analyze it. Since the bucket had a light attached to the top of it, from this grayscale image, in order to track where the bucket was at a specific point in time, we looked at the max values in the grayscale matrix corresponding to the image at a point of time. We created a threshold matrix where we found all locations where the value was greater than 250 (we had to lower this in some of the video cases due to sparse data). Then, we used the \"find\" function to locate the indices of these points, and the \"ind2sub\" function to output the exact coordinates of all these bright points. We then averaged these coordinates, and added this to our data array. This gave us a matrix of size $2 \\times$FrameNumber. We repeated this with the other two videos, to get three matrices. \\\\ \\\\\n\tWe had two issues: the videos had different frame numbers, and were not in sync. In order to make them in sync, we changed the first frame in each of the videos to be the frame corresponding with the lowest Y coordinate. Once we did this, we identified the shortest video (in terms of frame length), and trimmed the other two matrices so they had the same frame number. We then added this to one large matrix, such that we had 6 rows corresponding to $X$ and $Y$ coordinates taken from 3 videos, and a number of columns equal to the lowest frame number.\\\\ \\\\\n\tWith this data matrix, we then subtracted the mean of each row from the data to center the data. We then took the SVD of the transpose of our data matrix, divided by $\\sqrt{n-1}$, using \"svd(data'/sqrt(n-1))\" to get three matrices, \"[u,s,v]\". We extracted the values of the diagonal matrix \"s\" and squared them to get our variances for each principal component. We then plotted normalized variances (divided by the sum) to see which principal components were significant. Finally, we plotted projections of our data on the significant principal component orthonormal bases to reconstruct our observed data. We repeated this procedure for all four sets of data.\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{oscmotion1}\n\\includegraphics[width = 8cm]{energy1}\n\\caption{\\label{fig:scaled_diss} (left)  Plots of original z displacement vs new basis (test1)}\n\\caption{\\label{fig:scaled_diss} (right) Plot of variances of principal components (test 1)}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{oscmotion2}\n\\includegraphics[width = 8cm]{energy2}\n\\caption{\\label{fig:scaled_diss} (left) Plots of original z displacement vs new basis (test2) }\n\\caption{\\label{fig:scaled_diss} (right) Plot of variances of principal components (test 2)}\n\\end{center}\n\\end{figure}\n\n\\section*{\\fontsize{19}{15}\\selectfont Computational Results}\n\t\\textbf{(test 1) Ideal Case} \\\\ \nAs seen in Figure 2, we saw that we had only one principal component, which captured 91.78\\% of the energy. The rest of the components had very low energy, which fits this video; we had a single bucket oscillating in one direction, so it should only need one principal component. In addition, our transformation of the first principal component resulted in accurate data. We also see that our projection onto the first principal component basis results in very similar data to what we experimentally measured. This shows that this system can be accurately reproduced and represented by one principal component. \\\\ \\\\\n\\textbf{(test 2) noisy case:}\nLooking at Figure 4 in this case, we see that maybe two principal components could be considered significant; the second diagonal variance has a somewhat high value of 0.1946, while our first principal component has an energy captured of 0.6345. Thus, it seems that we have shown that two principal components are needed for this system, although it is the same as system 1, only with noise. Thus noise in our data has thrown off some of our PCA calculations. \\\\\nWe see that because there was so much noise in the system, our observed data consistently shows noise as it oscillates. This is also true in our projection to the principal component basis. However, although there is lots of noise in both systems, there is clearly oscillatory behavior. 
\\\\ \\\\\n\\textbf{(test 3) horizontal displacement}\nLooking at Figure 6 for our horizontal displacement case, we see up to 4 principal components that seem to capture significant amounts of energy relative to each other: the first four components capture energy fractions of 0.4671, 0.2489, 0.1475 and 0.1213. Although there is a large drop off from the first principal component to the last, they are all relatively close together. \\\\ \\\\\n\\textbf{(test 4) horizontal displacement and rotation}\nLooking at the energy captured in Figure 8, there seem to be 3 significant principal components that have high energy captured, with values of  0.6385, 0.2286, and 0.0910. This corresponds to the data we observed, as we had both oscillatory motion and spinning motion. It seems that PCA has captured the multi-dimensional nature of the horizontal displacement and rotation, as there is both a pendulum-like motion and simple harmonic motion.\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{oscmotion4}\n\\includegraphics[width = 8cm]{energy4}\n\\caption{\\label{fig:scaled_diss} (left) Plots of original z displacement vs new basis (test4)}\n\\caption{\\label{fig:scaled_diss} (right) Plot of variances of principal components (test 4)}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width = 8cm]{oscmotion3}\n\\includegraphics[width = 8cm]{energy3}\n\\caption{\\label{fig:scaled_diss} (left) Plots of original z displacement vs new basis (test 3) }\n\\caption{\\label{fig:scaled_diss} (right) Plot of variances of principal components (test 3)}\n\\end{center}\n\\end{figure}\n\n\n\\section*{\\fontsize{19}{15}\\selectfont Summary and Conclusions}\nWe were able to track an oscillating bucket for different scenarios and perform PCA on our datasets to see how many significant, orthonormal, principal components existed. We were able to plot both the energy captured by specific modes, as well as our original data projected onto the various basis sets of the principal components. We observe that our projections are accurate to some extent. We have seen that even without underlying system dynamics, we can experimentally determine the minimum number of principal components needed to represent a system, and reclassify redundant systems into lower order ones.\n\n\\pagebreak\n\n\n\n\n\\section*{\\fontsize{19}{15}\\selectfont Appendix A}\n\\subsection*{MATLAB functions used and implementation}\n\"diag(X)\" : Returns a column vector of the diagonal values of a matrix. We used this to generate our variance numbers from SVD. \\\\ \\\\\n\"find(X)\" : Returns a vector of non-zero indices in a matrix. We used this in our conditional matrix to find locations in the image of bright spots, which corresponded to the moving object. We then used in \"ind2sub\" to find direct coordinates.  \\\\ \\\\\n\"[i,j,k] = ind2sub(siz,IND)\" : Given a index \"IND\", it returns the \"i\", \"j\", and \"k\" indices, which can then be translated into spatial or frequency domain coordinates. We used this command to find the matrix coordinates of our bright spots of our black-white frames in the videos, which corresponded to the light on the bucket. \\\\ \\\\\n\"max(A)\" : If \"A\" is a vector, then it returns the maximum value of this vector. We used this for normalization of data. \\\\ \\\\\n\"mean(A)\" : Returns the mean of the vector \"A\". 
We used this to average the coordinates above the threshold value to locate the object. \\\\ \\\\\n\"[M,I] = min(A)\" : Returns the minimum value \"M\" and the index \"I\" of the array. We used this to find the minimum value in the first 20 frames of the video, to start at a minimum value. \\\\ \\\\\n\"plot3(X1,Y1,Z1)\" : Displays a 3-dimensional plot of a set of datapoints through inputs \"X1\", \"Y1\", \"Z1\". We used this to visualize coordinate data in 3-dimensional space where needed. \\\\ \\\\\n\"rgb2gray(X)\" : Given a 3D image matrix \"X\", returns a 2D image matrix that corresponds to the grayscale version. We used this for image processing, to calculate the brightest spot on each frame for object tracking. \\\\ \\\\\n\"size(X)\" : Returns the dimensions of the matrix \"X\". We used this to calculate frame numbers of our video files. \\\\ \\\\\n\"zeros(A,B)\" : Creates a matrix filled with zeros of size \"A x B\". We used this to initialize the spatial filter matrices applied to each video frame. \\\\ \\\\\n\n\\section*{\\fontsize{19}{15}\\selectfont Appendix B}\n\\subsection*{MATLAB code}\n\\begin{lstlisting}[style=Matlab-editor]\nclear all; close all; clc; \nload('cam1_1.mat')\nload('cam2_1.mat')\nload('cam3_1.mat')\n%%\n%Play Videos\nnumFrames1a =size(vidFrames1_1,4);\nnumFrames2a =size(vidFrames2_1,4);\nnumFrames3a =size(vidFrames3_1,4);\n\n%%\nfor k = 1:numFrames1a\n    mov1a(k).cdata = vidFrames1_1(:,:,:,k);\n    mov1a(k).colormap = [];\nend\n\n%Play video\nwidth = 50;\nfilter = zeros(480,640);\nfilter(300-2.6*width:1:300+2.6*width, 350-width:1:350+width) = 1;\n\ndata1 = [];\nfor j=1:numFrames1a\n    X=frame2im(mov1a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 250;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data1 = [data1; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%pause(5)\nclose all;\n%%\nfor k = 1:numFrames2a\n    mov2a(k).cdata = vidFrames2_1(:,:,:,k);\n    mov2a(k).colormap = [];\nend\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-3*width:1:250+3*width, 290-1.3*width:1:290+1.3*width) = 1;\n\ndata2 = [];\n%Play video\nfor j=1:numFrames2a\n    X=frame2im(mov2a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 250;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data2 = [data2; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\nfor k = 1:numFrames3a\n    mov3a(k).cdata = vidFrames3_1(:,:,:,k);\n    mov3a(k).colormap = [];\nend\n\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-1*width:1:250+2*width, 360-2.5*width:1:360+2.5*width) = 1;\n\ndata3 = [];\n%Play video\nfor j=1:numFrames3a\n    X=frame2im(mov3a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 247;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data3 = [data3; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); 
drawnow\nend\n\n%%\n[M,I] = min(data1(1:20,2));\ndata1  = data1(I:end,:);\n\n[M,I] = min(data2(1:20,2));\ndata2  = data2(I:end,:);\n\n[M,I] = min(data3(1:20,2));\ndata3  = data3(I:end,:);\n\n%%\ndata2 = data2(1:length(data1), :);\ndata3 = data3(1:length(data1), :);\n\n%% Method 2\nalldata = [data1';data2';data3'];\n%alldata = alldata';\n\n[m,n]=size(alldata); % compute data size\nmn=mean(alldata,2); % compute mean for each row\nalldata=alldata-repmat(mn,1,n); % subtract mean\n\n[u,s,v]=svd(alldata'/sqrt(n-1)); % perform the SVD\nlambda=diag(s).^2; % produce diagonal variances\n\nY= alldata' * v; % produce the principal components projection\n\nsig=diag(s);\n%%\nfigure()\nplot(1:6, lambda/sum(lambda), 'mo', 'Linewidth', 2);\ntitle(\"Case 1: Energy of each Diagonal Variance\");\nxlabel(\"Diagonal Variances\"); ylabel(\"Energy Captured\");\n\nfigure()\nsubplot(2,1,1)\nplot(1:218, alldata(2,:),1:218, alldata(4,:), 1:218, alldata(6,:), 'Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 1: Original displacement across Z axis and XY-plane\");\nlegend(\"Z\", \"XY\")\nsubplot(2,1,2)\nplot(1:218, Y(:,1),'r','Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 1: Displacement across principal component directions\");\nlegend(\"PC1\")\n\nclear all; close all; clc; \nload('cam1_2.mat')\nload('cam2_2.mat')\nload('cam3_2.mat')\n%%\n%Play Videos\nnumFrames1a =size(vidFrames1_2,4);\nnumFrames2a =size(vidFrames2_2,4);\nnumFrames3a =size(vidFrames3_2,4);\n\n%%\nfor k = 1:numFrames1a\n    mov1a(k).cdata = vidFrames1_2(:,:,:,k);\n    mov1a(k).colormap = [];\nend\n\n%Play video\nwidth = 50;\nfilter = zeros(480,640);\nfilter(300-2.6*width:1:300+2.6*width, 350-width:1:350+2*width) = 1;\n\ndata1 = [];\nfor j=1:numFrames1a\n    X=frame2im(mov1a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 250;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data1 = [data1; mean(X), mean(Y)];\n    \n% subplot(1,2,1)\n% imshow(uint8((thresh * 255))); drawnow\n% subplot(1,2,2)\n% imshow(uint8(Xf)); drawnow\nend\n\n%pause(5)\nclose all;\n%%\nfor k = 1:numFrames2a\n    mov2a(k).cdata = vidFrames2_2(:,:,:,k);\n    mov2a(k).colormap = [];\nend\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-4*width:1:250+4.5*width, 290-2.5*width:1:290+2.7*width) = 1;\n\ndata2 = [];\n%Play video\nfor j=1:numFrames2a\n    X=frame2im(mov2a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 249;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data2 = [data2; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\nfor k = 1:numFrames3a\n    mov3a(k).cdata = vidFrames3_2(:,:,:,k);\n    mov3a(k).colormap = [];\nend\n\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-1*width:1:250+2.6*width, 360-2.5*width:1:360+2.7*width) = 1;\n\ndata3 = [];\n%Play video\nfor j=1:numFrames3a\n    X=frame2im(mov3a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 246;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data3 = [data3; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     
subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\n[M,I] = min(data1(1:20,2));\ndata1  = data1(I:end,:);\n\n[M,I] = min(data2(1:20,2));\ndata2  = data2(I:end,:);\n\n[M,I] = min(data3(1:20,2));\ndata3  = data3(I:end,:);\n\n%%\ndata2 = data2(1:length(data1), :);\ndata3 = data3(1:length(data1), :);\n\n%% Method 2\nalldata = [data1';data2';data3'];\n%alldata = alldata';\n\n[m,n]=size(alldata); % compute data sizea\nmn=mean(alldata,2); % compute mean for each row\nalldata=alldata-repmat(mn,1,n); % subtract mean\n\n[u,s,v]=svd(alldata'/sqrt(n-1)); % perform the SVD\nlambda=diag(s).^2; % produce diagonal variances\n\nY= alldata' * v; % produce the principal components projection\n\nsig=diag(s);\n%%\nfigure()\nplot(1:6, lambda/sum(lambda), 'mo', 'Linewidth', 2);\ntitle(\"Case 2: Energy of each Diagonal Variance\");\nxlabel(\"Diagonal Variances\"); ylabel(\"Energy Captured\");\n\nfigure()\nsubplot(2,1,1)\nplot(1:295, alldata(2,:), 1:295, alldata(1,:),'Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \nlegend(\"Z\", \"XY\")\ntitle(\"Case 2: Original displacement across Z axis and XY-plane (cam 1)\");\nsubplot(2,1,2)\nplot(1:295, Y(:,1),1:295, Y(:,2),'r','Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 2: Displacement across principal component directions\");\nlegend(\"PC1\", \"PC2\")\n\nclear all; close all; clc; \nload('cam1_3.mat')\nload('cam2_3.mat')\nload('cam3_3.mat')\n%%\n%Play Videos\nnumFrames1a =size(vidFrames1_3,4);\nnumFrames2a =size(vidFrames2_3,4);\nnumFrames3a =size(vidFrames3_3,4);\n\n%%\nfor k = 1:numFrames1a\n    mov1a(k).cdata = vidFrames1_3(:,:,:,k);\n    mov1a(k).colormap = [];\nend\n\n%Play video\nwidth = 50;\nfilter = zeros(480,640);\nfilter(300-1.5*width:1:300+3*width, 350-1.5*width:1:350+2*width) = 1;\n\ndata1 = [];\nfor j=1:numFrames1a\n    X=frame2im(mov1a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 250;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data1 = [data1; mean(X), mean(Y)];\n    \n    \n% subplot(1,2,1)\n% imshow(uint8((thresh * 255))); drawnow\n% subplot(1,2,2)\n% imshow(uint8(Xf)); drawnow\nend\n\n%pause(5)\nclose all;\n%%\nfor k = 1:numFrames2a\n    mov2a(k).cdata = vidFrames2_3(:,:,:,k);\n    mov2a(k).colormap = [];\nend\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-3*width:1:250+3.5*width, 290-2.5*width:1:290+2.7*width) = 1;\n\ndata2 = [];\n%Play video\nfor j=1:numFrames2a\n    X=frame2im(mov2a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 249;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data2 = [data2; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\nfor k = 1:numFrames3a\n    mov3a(k).cdata = vidFrames3_3(:,:,:,k);\n    mov3a(k).colormap = [];\nend\n\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-1.8*width:1:250+2.3*width, 360-2.5*width:1:360+2.7*width) = 1;\n\ndata3 = [];\n%Play video\nfor j=1:numFrames3a\n    X=frame2im(mov3a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 246;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data3 = [data3; mean(X), mean(Y)];\n    \n%     
subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\n[M,I] = min(data1(1:20,2));\ndata1  = data1(I:end,:);\n\n[M,I] = min(data2(1:20,2));\ndata2  = data2(I:end,:);\n\n[M,I] = min(data3(1:20,2));\ndata3  = data3(I:end,:);\n\n%%\ndata1 = data1(1:length(data3), :);\ndata2 = data2(1:length(data3), :);\n\n%% Method 2\nalldata = [data1';data2';data3'];\n%alldata = alldata';\n\n[m,n]=size(alldata); % compute data sizea\nmn=mean(alldata,2); % compute mean for each row\nalldata=alldata-repmat(mn,1,n); % subtract mean\n\n[u,s,v]=svd(alldata'/sqrt(n-1)); % perform the SVD\nlambda=diag(s).^2; % produce diagonal variances\n\nY= alldata' * v; % produce the principal components projection\n\nsig=diag(s);\n%%\nfigure()\nplot(1:6, lambda/sum(lambda), 'mo', 'Linewidth', 2);\ntitle(\"Case 3: Energy of each Diagonal Variance\");\nxlabel(\"Diagonal Variances\"); ylabel(\"Energy Captured\");\n\nfigure()\nsubplot(2,1,1)\nplot(1:237, alldata(2,:),'Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 3: Original displacement across Z axis and XY-plane\");\nsubplot(2,1,2)\nplot(1:237, Y(:,1), 1:237, Y(:,2), 1:237, Y(:,3),1:237, Y(:,4),'r','Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 3: Displacement across principal component directions\");\nlegend(\"PC1\", \"PC2\", \"PC3\", \"PC4\")\n\nclear all; close all; clc; \nload('cam1_4.mat')\nload('cam2_4.mat')\nload('cam3_4.mat')\n%%\n%Play Videos\nnumFrames1a =size(vidFrames1_4,4);\nnumFrames2a =size(vidFrames2_4,4);\nnumFrames3a =size(vidFrames3_4,4);\n\n%%\nfor k = 1:numFrames1a\n    mov1a(k).cdata = vidFrames1_4(:,:,:,k);\n    mov1a(k).colormap = [];\nend\n\n%Play video\nwidth = 50;\nfilter = zeros(480,640);\nfilter(300-1.5*width:1:300+3*width, 350-1.5*width:1:350+2.4*width) = 1;\n\ndata1 = [];\nfor j=1:numFrames1a\n    X=frame2im(mov1a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 247;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data1 = [data1; mean(X), mean(Y)];\n    \n    \n% subplot(1,2,1)\n% imshow(uint8((thresh * 255))); drawnow\n% subplot(1,2,2)\n% imshow(uint8(Xf)); drawnow\nend\n\n%pause(5)\nclose all;\n%%\nfor k = 1:numFrames2a\n    mov2a(k).cdata = vidFrames2_4(:,:,:,k);\n    mov2a(k).colormap = [];\nend\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-4*width:1:250+3*width, 290-2.5*width:1:290+2.7*width) = 1;\n\ndata2 = [];\n%Play video\nfor j=1:numFrames2a\n    X=frame2im(mov2a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 249;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n    data2 = [data2; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\nfor k = 1:numFrames3a\n    mov3a(k).cdata = vidFrames3_4(:,:,:,k);\n    mov3a(k).colormap = [];\nend\n\nwidth = 50;\nfilter = zeros(480,640);\nfilter(250-3*width:1:250+1*width, 360-1.8*width:1:360+2.9*width) = 1;\n\ndata3 = [];\n%Play video\nfor j=1:numFrames3a\n    X=frame2im(mov3a(j));\n    \n    Xabw = rgb2gray(X);\n    X2 = double(X);\n    \n    Xabw2 = double(Xabw);\n    Xf = Xabw2.*filter;\n    thresh = Xf > 234;\n    indeces = find(thresh);\n    [Y, X] = ind2sub(size(thresh),indeces);\n    \n 
   data3 = [data3; mean(X), mean(Y)];\n    \n%     subplot(1,2,1)\n%     imshow(uint8((thresh * 255))); drawnow\n%     subplot(1,2,2)\n%     imshow(uint8(Xf)); drawnow\nend\n\n%%\n[M,I] = min(data1(1:20,2));\ndata1  = data1(I:end,:);\n\n[M,I] = min(data2(1:20,2));\ndata2  = data2(I:end,:);\n\n[M,I] = min(data3(1:20,2));\ndata3  = data3(I:end,:);\n\n%%\ndata1 = data1(1:length(data3), :);\ndata2 = data2(1:length(data3), :);\n\n%% Method 2\nalldata = [data1';data2';data3'];\n%alldata = alldata';\n\n[m,n]=size(alldata); % compute data sizea\nmn=mean(alldata,2); % compute mean for each row\nalldata=alldata-repmat(mn,1,n); % subtract mean\n\n[u,s,v]=svd(alldata'/sqrt(n-1)); % perform the SVD\nlambda=diag(s).^2; % produce diagonal variances\n\nY= alldata' * v; % produce the principal components projection\n\nsig=diag(s);\n%%\nfigure()\nplot(1:6, lambda/sum(lambda), 'mo', 'Linewidth', 2);\ntitle(\"Case 4: Energy of each Diagonal Variance\");\nxlabel(\"Diagonal Variances\"); ylabel(\"Energy Captured\");\n\nfigure()\nsubplot(2,1,1)\nplot(1:375, alldata(2,:),'Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 4: Original displacement across Z axis and XY-plane\");\nsubplot(2,1,2)\nplot(1:375, Y(:,1), 1:375, Y(:,2), 1:375, Y(:,3), 'Linewidth', 2)\nylabel(\"Displacement (pixels)\"); xlabel(\"Time (frames)\"); \ntitle(\"Case 4: Displacement across principal component directions\");\nlegend(\"PC1\", \"PC2\", \"PC3\")\n\n\\end{lstlisting}\n\n\\end{document}\n", "meta": {"hexsha": "7d3b404bbd93a2d4ca26108dd417358611d032a8", "size": 31407, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW3/HW3.tex", "max_stars_repo_name": "shallinan1/AMATH-482", "max_stars_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW3/HW3.tex", "max_issues_repo_name": "shallinan1/AMATH-482", "max_issues_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW3/HW3.tex", "max_forks_repo_name": "shallinan1/AMATH-482", "max_forks_repo_head_hexsha": "3ce6b7df17fa4c66d93e13a0cfe119d4551b45d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.8060836502, "max_line_length": 1143, "alphanum_fraction": 0.6818225236, "num_tokens": 9731, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8128673246376008, "lm_q1q2_score": 0.5737095873471459}}
{"text": "[30 min late, first zoom lecture]\n\n\\section{The Point on Infinity}\nThe transformation $w=1/z$ establishes a one-to-one correspondence between\npoints in the $z$-plane and there is the $w$-plane,\nexcept for the singular point at $z=0$.\nThe resolve this,\nwe attach an ``improper point'' to the $w$-plane.\nThis point is called ``the point of infinity''\nand is denoted by $\\infty$.\nThe entire complex plane together with the point of infinity is called the\n``extended complex plane''.\n\nYou need to know what it is and move on.\n\nCertain functions are analytic in the entire complex plane,\nthey are \\emph{entire functions}.\nHowever, they need not be analytic at the point of infinity.\nTo check whether something is analytic at infinity,\nyou have to do this mapping $w=1/z$.\nOnce you do that mapping,\nlook at how the function behaves at $w=0$.\n\nTo check whether a function $g(z)$ is analytic at $\\iinfty$,\nsubstitute $w=1/z$ and check analyticity at $w=0$.\n\n\\begin{question}\n    There is no number line,\n    it's a complex plane.\n    What does infinity mean in a complex plane?\n\\end{question}\nIt's not a simple thing.\nThe claim is that in the complex plane,\ninfinity can be represented by a single point,\nwhich is not true in the real line where $+\\infty$\nis different to $-\\infty$.\nBut in the complex plane,\nthere is only one $\\infty$.\nThe only reason I am discussing is that you will hear this occasionally so you\njust need to have some idea what people are talking about.\nIt's not like we're actually going to use it for anything.\nIf you want to know more,\nyou're going to have to do the reading yourself.\nInfinity is a treacherous number,\nit's not a simple thing.\n\nAny other questions?\nOK let's move on.\n\n\\section{Conformal mapping}\nMappings by analytic functions possess and important geometric property known as\n\\emph{conformal}.\nA mapping in the plane is said to be ``conformal''\nif it preserves the angle between any two curves,\nboth in magnitude and orientation.\n\nLet me illustrate that with a picture.\nLet's say this is the $z=x+iy$ plane,\nwith real part $x$ and imaginary part $y$.\nLet's say that is the curve $C_1$\nand that is the curve $C_2$.\nThey intersect at a point and\ndraw their tangents at this point.\nLet's say there's an angle $\\beta$ between the tangents of the curves at that\npoint.\nLet's look at the map of these curves $z\\to w = u + iv$.\nEvery point of the curve $C_1$ gets mapped to a curve in the $w$ plane $C_1'$.\nEvery point of the curve $C_2$ gets mapped to a curve in the $w$ plane $C_2'$.\nAgain, $C_1'$ and $C_2'$ still intersect at a point.\nThe claim is that the angle between the tangents of the curves in the map is the\nsame as before, $\\beta$.\nSo yeah.\n\nIn other words,\nif the mapping is conformal,\nthe angle between the curves is preserved.\nAnd the claim is that if you do a mapping via an analytic function,\nit's automatically conformal.\nMappings with analytic functions are automatically conformal.\nIt's guaranteed with one caveat I will explain in a moment.\n\nWe want to show that the mapping $w=f(z)$\nis conformal at every point where $f(z)$ is analytic\nexcept where the derivative $f'(z)=0$.\nA point where $f'(z)=0$ is called a \\emph{critical point}.\n\nConsider a small change $\\Delta z$ at a point $z$.\nThen the derivative is\n\\begin{align}\n    \\frac{df}{dz} = \\frac{dw}{dz}\n    = \\lim_{\\Delta z\\to 0} \\frac{\\Delta w}{\\Delta z}\n\\end{align}\ntaking the argument which is the 
\n\\begin{question}\n    There is no number line,\n    it's a complex plane.\n    What does infinity mean in a complex plane?\n\\end{question}\nIt's not a simple thing.\nThe claim is that in the complex plane,\ninfinity can be represented by a single point,\nwhich is not true on the real line where $+\\infty$\nis different from $-\\infty$.\nBut in the complex plane,\nthere is only one $\\infty$.\nThe only reason I am discussing this is that you will hear it occasionally, so you\njust need to have some idea what people are talking about.\nIt's not like we're actually going to use it for anything.\nIf you want to know more,\nyou're going to have to do the reading yourself.\nInfinity is a treacherous number,\nit's not a simple thing.\n\nAny other questions?\nOK let's move on.\n\n\\section{Conformal mapping}\nMappings by analytic functions possess an important geometric property known as\n\\emph{conformality}.\nA mapping in the plane is said to be ``conformal''\nif it preserves the angle between any two curves,\nboth in magnitude and orientation.\n\nLet me illustrate that with a picture.\nLet's say this is the $z=x+iy$ plane,\nwith real part $x$ and imaginary part $y$.\nLet's say that is the curve $C_1$\nand that is the curve $C_2$.\nThey intersect at a point and\ndraw their tangents at this point.\nLet's say there's an angle $\\beta$ between the tangents of the curves at that\npoint.\nLet's look at the map of these curves $z\\to w = u + iv$.\nEvery point of the curve $C_1$ gets mapped to a curve in the $w$ plane $C_1'$.\nEvery point of the curve $C_2$ gets mapped to a curve in the $w$ plane $C_2'$.\nAgain, $C_1'$ and $C_2'$ still intersect at a point.\nThe claim is that the angle between the tangents of the curves in the map is the\nsame as before, $\\beta$.\nSo yeah.\n\nIn other words,\nif the mapping is conformal,\nthe angle between the curves is preserved.\nAnd the claim is that if you do a mapping via an analytic function,\nit's automatically conformal.\nMappings with analytic functions are automatically conformal.\nIt's guaranteed with one caveat I will explain in a moment.\n\nWe want to show that the mapping $w=f(z)$\nis conformal at every point where $f(z)$ is analytic\nexcept where the derivative $f'(z)=0$.\nA point where $f'(z)=0$ is called a \\emph{critical point}.\n\nConsider a small change $\\Delta z$ at a point $z$.\nThen the derivative is\n\\begin{align}\n    \\frac{df}{dz} = \\frac{dw}{dz}\n    = \\lim_{\\Delta z\\to 0} \\frac{\\Delta w}{\\Delta z}\n\\end{align}\nTaking the argument, which is the angle,\n\\begin{align}\n    \\arg \\frac{df}{dz}\n    = \\arg \\lim_{\\Delta z\\to 0} \\frac{\\Delta w}{\\Delta z}\n    = \\lim_{\\Delta z\\to 0} \\left( \\arg \\Delta w - \\arg \\Delta z \\right)\n    =: \\alpha\n\\end{align}\nwhich is a constant for a given point.\n\nTake the ratio of two complex numbers and take the argument to get the angle\nbetween them.\n\nAny line that passes through $z_0$\nhas its image rotated by an angle $\\alpha$.\nThat means if you had two lines passing through $z_0$,\nthey would both be rotated by the same angle $\\alpha$.\nHence for any pair of lines through $z_0$,\ntheir images in the $w$-plane are rotated through the same angle.\nHence the angle between the lines is the same as in the $z$-plane.\n\nRemember,\nyou have two curves $C_1$ and $C_2$,\nboth of which get rotated by the same angle.\n\nYou might remember there was a caveat.\nAt points $z$ where $f'(z)=0$,\nthe angle $\\alpha$ is not well-defined,\nand the argument fails.\n\nThat's the caveat.\nFor any analytic function, in a region where the derivative doesn't\nvanish,\nyou have this property of conformality:\nthe image of any curve is rotated by a fixed angle at that point,\nso the angle between pairs of curves is preserved.\n\n\\begin{question}\n    What do you mean by orientation?\n\\end{question}\nWhat I mean is that you are rotated by this angle $\\alpha$ in the same\ndirection.\n\nThis always ends up taking a bit longer than I expect.\n\nSo we agree that the angle is preserved.\nBut what about relative sizes?\n\nFrom the definition of a derivative,\nwe have\n\\begin{align}\n    \\lim_{z\\to z_0}\n    \\underbrace{\\left|\n        \\frac{f(z) - f(z_0)}{z - z_0}\n    \\right|}_{\n        = \\frac{|f(z) - f(z_0)|}{|z - z_0|}\n    }\n    =\n    |f'(z_0)|\n\\end{align}\nLet's think about what this means.\n$|z - z_0|$ is the length of $\\Delta z$\nand $|f(z) - f(z_0)|$ is the length between the image points.\nSo in the limit $\\Delta z\\to 0$,\nthe denominator is the distance in the $z$ plane,\nbut the numerator is the distance between the same two points in the $w$ plane.\nThat means there's a rescaling between the points and the images.\nSo if you have a distance between two points,\nthat distance is rescaled by a factor\nwhich only depends on $z_0$.\n\nThat means if you have a curve that passes through a point,\nin the neighbourhood of that point,\nthe image is rescaled by that factor.\nIf you have multiple curves passing through that point,\nall the curves are rescaled by that factor.\nYou also know they're rotated.\nSo every curve is rotated by some amount\nand rescaled by some amount\nin that neighbourhood.\n\nI understand it's not easy to observe these things.\nIf I write everything down,\nI can go on and on,\nso let me explain again from scratch.\n\nWe want $z-z_0$ to go infinitesimal.\nThe denominator is the distance between two points in the $z$-plane.\n\nLet me write this.\nThe mapping $w=f(z)$ rescales the lengths of infinitesimal lines passing through\n$z_0$ by the factor $|f'(z_0)|$.\nTherefore the image of a small figure always has the same shape as the original\nfigure,\neven though it has been rescaled and rotated.\nHowever, a large figure may have an image with a completely different shape.\n\n\\section{Conformal Mapping and Harmonic Functions}\nOK so here is the claim,\nwhich I proved in my notes,\nbut the proof is tedious and I wonder if I want to do it.\nI will state the claim and give an outline of the proof,\nand you can have the pleasure of filling in the 
proof.\n\n\\begin{theorem}\n    A harmonic function of two variables $(x, y)$\n    remains harmonic under a change of variables from $(x, y)$\n    to $(u, v)$ if $z = x + iy$ and $w= u + iv$\n    are related by a conformal transformation.\n\\end{theorem}\nI'll explain in a moment what that means.\n\nOK so here's the idea.\nLet $h$ be a harmonic function.\nWhat that means is\n\\begin{align}\n    \\frac{\\partial^2 h}{\\partial x^2}\n    + \\frac{\\partial^2 h}{\\partial y^2} = 0.\n\\end{align}\nThe claim is that if $u=u(x, y)$ and $v=v(x, y)$\nsuch that $w = f(z)$,\n$w=u+iv$ and $z=x+iy$.\nThen\n\\begin{align}\n    \\frac{\\partial^2 h}{\\partial u^2}\n    + \\frac{\\partial^2 h}{\\partial v^2} =0\n\\end{align}\nprovided $f'(z)\\ne 0$.\n\nThis is a very useful result,\nused to prove lots of results in fluid dynamics and electrostatics.\nYou're going to apply this to electrostatics.\nUnfortunately,\nit's a trick that only works in 2D.\nUnfortunately it doesn't help you solve Laplace's equation in arbitrary\ndimensions,\nbut it is a powerful method in 2D.\n\nI look at my notes and I actually did prove this.\nLet me just outline how it goes.\nThe proof is kind of like brute force.\nThere are more clever proofs in the books I suggested,\nbut all of them rely on some additional assumptions or rely on some theorem I\ndidn't prove.\n\n\\begin{proof}\n    Write using the chain rule\n    \\begin{align}\n        \\frac{\\partial h}{\\partial x} &=\n        \\frac{\\partial h}{\\partial u} \\frac{\\partial u}{\\partial x}\n        + \\frac{\\partial h}{\\partial v} \\frac{\\partial v}{\\partial x}\n    \\end{align}\n    then differentiate again\n    \\begin{align}\n        \\frac{\\partial^2 h}{\\partial x^2} &=\n        \\left\\{\n            \\frac{\\partial^2 h}{\\partial u^2}\n            \\left(\\frac{\\partial u}{\\partial x} \\right)^2\n            +\n            \\frac{\\partial^2 h}{\\partial u \\partial v}\n            \\left( \\frac{\\partial u}{\\partial x} \\right)\n            \\left( \\frac{\\partial v}{\\partial x} \\right)\n            + \\frac{\\partial h}{\\partial u}\n            \\frac{\\partial^2 u}{\\partial x^2}\n        \\right\\}\\\\\\nonumber\n        &\\qquad + \n        \\left\\{\n            \\frac{\\partial^2 h}{\\partial v^2}\n            \\left(\\frac{\\partial v}{\\partial x} \\right)^2\n            +\n            \\frac{\\partial^2 h}{\\partial u \\partial v}\n            \\left( \\frac{\\partial u}{\\partial x} \\right)\n            \\left( \\frac{\\partial v}{\\partial x} \\right)\n            + \\frac{\\partial h}{\\partial v}\n            \\frac{\\partial^2 v}{\\partial x^2}\n        \\right\\}\n    \\end{align}\n    Similarly,\n    \\begin{align}\n        \\frac{\\partial^2 h}{\\partial y} = \\cdots\n    \\end{align}\n    and it gets messy\\ldots\n    Then you have to use the Cauchy-Riemann equations to cancel out some terms.\n    Then you use the fact that $u$ and $v$ are harmonic functions themselves.\n    After a lot of algebra,\n    you get\n    \\begin{align}\n        \\frac{\\partial^2 h}{\\partial x^2} +\n        \\frac{\\partial^2 h}{\\partial y^2} &=\n        \\left(\n            \\frac{\\partial^2 h}{\\partial u^2}\n            + \\frac{\\partial^2 h}{\\partial v^2}\n        \\right)\n        |f'(z)|^2\n        = 0\n    \\end{align}\n    where we need the assumption $f'(z)\\ne 0$.\n    Then\n    \\begin{align}\n        \\frac{\\partial^2 h}{\\partial u^2}\n        + \\frac{\\partial^2 h}{\\partial v^2}\n        = 0\n    \\end{align}\n\\end{proof}\n", 
"meta": {"hexsha": "31ac2a9d2b40a036339bf783081ae1331ac8f861", "size": 10118, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phys610/lecture8.tex", "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_issues_repo_path": "phys610/lecture8.tex", "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phys610/lecture8.tex", "max_forks_repo_name": "ehua7365/umdphysnotes", "max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.1824324324, "max_line_length": 80, "alphanum_fraction": 0.6879818146, "num_tokens": 2830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673223709251, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5737095807169084}}
{"text": "\\documentclass{article}\n\n\\usepackage{stmaryrd}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{relsize} \n\\usepackage{bm} \n\\usepackage{IEEEtrantools}\n\\usepackage{graphicx}\n\\usepackage[font={small,it}, width=\\textwidth]{caption}\n\\usepackage{subcaption}\n\\usepackage{hyperref}\n\\usepackage{cases}\n\\usepackage{xfrac}\n\\usepackage{comment}\n\\usepackage{framed}\n\\usepackage[ ddmmyyyy ]{datetime} \n\\usepackage{fancyhdr}\n\\usepackage{enumitem}\n\\usepackage{cite}\n\\usepackage{float}\n\\usepackage{multirow}\n\n\\newcommand{\\source}[1]{\\caption*{\\hfill Source: {#1}} }\n\n\\usepackage{mathtools}\n\\DeclareMathOperator{\\sign}{sign}\n\\DeclareMathOperator{\\sat}{sat}\n\n\\oddsidemargin = 20pt\n\\textwidth = 420pt\n\n\\hypersetup{\n     colorlinks   = true,\n     linkcolor    = blue\n}\n\n\\title{Dual Motor Control}\n\\author{Marcus Greiff}\n\n\\begin{document}\n\\maketitle\n\n\\section{Introduction}\n\n\\subsection{Cart model}\nThe most basic cart model consists of simply setting up the equations of motion for the cart with the motor torques as input signals,\n\\begin{equation}\\label{eq:motion}\nM\\ddot{x} = \\frac{u_1+u_2}{d} - F_f(\\dot{x}).\n\\end{equation}\nHere, the control signals $u_1$ and $u_2$ are the torques generated by the two motors, $d$ is the cart wheel radius and $M$ is the cart mass. In this simple linear model, the friction force, $F_f(\\dot{x}) = b\\dot{x}$, is simply proportional to the cart speed with some constant $b$. By introducing the state variables $\\mathbf{x} = [x, \\dot{x}]^T$ and letting the control signal $\\mathbf{u} = [u_1, u_2]^T$, the system can be written as\n\\begin{equation}\\label{eq:linModelCart}\n\\dot{\\mathbf{x}} = \\mathbf{A}_c\\mathbf{x} + \\mathbf{B}_c\\mathbf{u} =\n\\begin{bmatrix}\n0 & 1\\\\\n0 & b/M\n\\end{bmatrix}\\mathbf{x}  + \n\\frac{1}{Md}\n\\begin{bmatrix}\n0 & 0\\\\\n1 & 1\n\\end{bmatrix}\\mathbf{u},\n\\end{equation}\nThis state space representation was used to derive certain controllers, but does not take the many non-linearities of the system into account, most notably the backlash, motor torque saturation and non-linear friction. In the original model implementation (see Figure~\\ref{fig:SimModel}), the linear dynamics were implemented separate to the non-linearities in order to investigate their effect separately. However, in the open source project, a simpler linear cart was implemented to show the effect of the non-linear torque splitting and how the cascade control loop is constructed (see \\texttt{DMC.slx}).\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figures/CartSimulink.png}\n\\rule{35em}{0.5pt}\n\\caption{The Simulink cart model with motor torque saturation (blue), the backlash model (red), the non-linear friction model (green) and the linear cart dynamics (purple).}\n\\label{fig:SimModel}\n\\end{figure}\n\n\\subsection{Backlash compensation}\nAs this work aspires to be reused in parallel kinematics projects, the complete dynamics of the final robot will be very different from that of a simple cart. In order for the backlash compensation to be reusable, it should therefore be largely independent of system dynamics. Consequently, approaches such as those using state-dependent algebraic Ricatti equation (SDARE) method cannot be used~\\cite{friedland1997feedback}. 
Instead, a simple non-linear filter is proposed to (i) operate the motors synchronously at large reference offsets to improve the system response, and (ii) to operate the two motors in the opposing directions when the reference offset is small to completely get rid of backlash and improving accuracy in stationarity. This method of compensation is more economical in terms of power consumption than the method described in a Cairen's thesis on DMC control ~\\cite{Patrik:2013}, where the motors were working in opposing directions at all times. \n\nThe non-linear filter exploits the fact that the effective force applied to the cart, $u$, is proportional to the sum of the motor torques, $T_a$ and $T_b$. As long as there exists good torque control of the motors and the controllers are tuned equally, we may perturb the motor torques slightly by some constant, $\\epsilon_{bw}$, and still achieve the same effective force\n\\begin{equation}\\label{eq:splitEq}\n\\frac{u_1 + u_2}{d}  = \\frac{\\sat(T_a) + \\sat(T_b)}{d} = \\frac{\\sat(T_a +  \\epsilon_{bw}) + \\sat(T_b - \\epsilon_{bw})}{d}.\n\\end{equation}\nThis statement holds only when absolute value of the perturbed torque reference is within the bounds of saturation, but proves no issue for our application, as $\\epsilon_{bw} << \\max_t(|T_a|,|T_b|)$ and the absolute saturation limit can be lowered by $\\epsilon_{bw}$ to guarantee that~\\eqref{eq:splitEq} holds. As we only include feed forward terms of acceleration, velocity and position, we denote the $i^{th}$ derivative of the control error by $e^{(i)}(t) = x^{(i)}(t)  - x^{(i)}_{ref}(t)$ and let $\\tilde{e}(t) = \\max_i(|e^{(i)}(t)|)$ with $r = 2$. Using equation~\\eqref{eq:splitEq}, we then demand that $T_a = T_b$ when the control error is above some threshold $\\tilde{e}(t) > \\epsilon_{th}$, and that reference torques are only split continuously for smaller control errors, such that $\\tilde{e}(t) \\equiv 0\\Rightarrow T_a = -T_b \\Rightarrow u_1 + u_2 = \\epsilon_{bw}-\\epsilon_{bw} = 0$. With this approach, both conditions (i) and (ii) will be satisfied, eliminating backlash completely when all control error derivatives are small. In summary the filter function, $f(u,e)$, has to be chosen so that $f(u,0) = \\epsilon_{th}$ and $f(u,\\epsilon_{bw}) = 0$, and general filter can then be written\n\\begin{equation}\nh(u,e) = \n\\begin{cases}\n\\tilde{e}(t) = \\max\\limits_{i = 0,...,r}(|e^{(i)}(t)|) \\quad r\\in\\mathbb{N}_0\\\\\n\\frac{1}{2}\\{u, u\\} \\qquad\\qquad\\qquad\\quad\\;\\; \\text{if}\\; |\\tilde{e}(t)| > \\epsilon_{th}\\\\\n\\frac{1}{2}\\{u \\pm f(u,\\tilde{e})\\} \\qquad\\quad\\quad\\;\\;\\text{if}\\; |\\tilde{e}(t)| \\leq \\epsilon_{th}\\\\\n\\end{cases}\n\\;\\;\\text{where}\\;\\;\\epsilon_{th},\\epsilon_{bw}\\in \\mathbb{R}_+,\\;\\; \\epsilon_{th}\\leq\\epsilon_{bw}\n\\end{equation}\nAn initial linear filter function,\n\\begin{equation}\nf_1(u,\\tilde{e}) = \\epsilon_{bw} \\cdot (\\epsilon_{th}  - u \\cdot \\sign(\\tilde{e})) \\in C^1,\n\\end{equation}\nwas implemented in Simulink and simulated with good results, but behaved strangely with certain control schemes (notably second order sliding mode control) due to it not having a second derivative. 
To remedy this, the conditions $f^{\\prime}(0) = f^{\\prime}(\\epsilon_{bw}) = 0$ were enforced, and a trigonometric function was used instead of a linear one (see Figure~\\ref{fig:filter}), making the filter smooth and continuous in the $n^{th}$ derivative,\n\\begin{equation}\nf_2(u,\\tilde{e}) = \\frac{\\epsilon_{bw}}{2}\\Big(1+\\cos\\Big(\\frac{\\pi \\tilde{e}}{\\epsilon_{th}}\\Big)\\Big) \\in C^N.\n\\end{equation}\nNote that this ``$C^N$-filter\" can be implemented such that the control error is with reference to position, velocity, acceleration or a combination of the three.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/CN_filter.png}\n\\includegraphics[width=0.9\\textwidth]{figures/CNFILT.png}\n\\rule{35em}{0.5pt}\n\\caption{\\textbf{Top:} The simulink implementation of the  $C^N$-filter with $r=2$. \\textbf{Bottom:} The $C^N$-filter with $r=0$ applied to the input $u(t) = 8e^{-t}cos(2\\pi t)$ (blue), with the outputs $T_a$ (red) and $T_b$ (green), and the sum of the outputs (black). The splitting is done when the positional error is small, and approaches a stationary split $T_a=-T_b$ when $e\\approx 0$.}\n\\label{fig:filter}\n\\end{figure}\n\n\\subsection{Trajectory planning}\nThe trajectory planning has two separate functions. The first is to simply to take reference function of position, velocity or acceleration, and then generate the signals required in the feed-forward loop. For example, if a sinusoidal velocity reference is to be followed, the trajectory planner should evaluate and output velocities in time, as well as it's derivative (acceleration) and primitive (position). The second mode of operation is to detect sudden reference changes in position ( steps), and then approximate a fourth degree reference polynomial which moves the cart to the required position in optimal time given constraints on velocity, $\\hat{v}$, acceleration, $\\hat{a}$, and jerk, $\\hat{j}$, and jerk derivative, $\\hat{d}$. The first problem is trivial, but the second is more difficult can be formulated mathematically as\n\\begin{equation}\\label{eq:optProb}\n\\text{Minimize}(t_f)\\;\\;\\text{ subject to}\\;\\;\\\\\n\\begin{cases}\nx(t_0) = r_0, x(t_f) = r_f\\\\\n\\dot{x}(t_0) = \\ddot{x}(t_0) = x^{(3)}(t_0) = x^{(4)}(t_0)=0\\quad (*)\\\\\n\\dot{x}(t_f) = \\ddot{x}(t_f) = x^{(3)}(t_f) = x^{(4)}(t_f) = 0 \\\\\n|x^{(4)}(t)|<\\hat{d},|x^{(3)}(t)| < \\hat{j}, |\\ddot{x}(t)|<\\hat{a}, |\\dot{x}(t)| <\\hat{v} \\quad(**)\\\\\n\\end{cases}\n\\end{equation}\nby defining the time of a change in reference as $t_0 = 0$, and the time at which the new reference position is reached as $t_f > t_0$. Here, $r_0$ is the positional reference before $t_0$ and $r_f$ is the desired position. We also require the jerk and all lower derivatives to be continuous at all times. In future iterations, it is worth considering setting the derivatives at $t_0$ as free variables $(*)$, to properly allow the computation of new reference steps while the cart is moving. 
Here we make the assumption of $(*)$ and use the symmetry of the problem, to which the general solution is a set of polynomial splines of order $p\\leq4$,\n\\begin{equation}\\label{eq:gensol}\n\\begin{cases}\nx^{(4)}(t) = d_i\\\\\nx^{(3)}(t) =  d_it+j_i\\\\\n\\ddot{x}(t) = \\dfrac{{d_i}}{2}t^2 + {j_i}t + a_i\\\\\n\\dot{x}(t) = \\dfrac{{d_i}}{6}t^3 + \\dfrac{{j_i}}{2}t^2 + {a_i}t + v_i\\\\\nx(t) = \\dfrac{{d_i}}{24}t^4 + \\dfrac{{j_i}}{6}t^3 + \\dfrac{{a_i}}{2}t^2 + {v_i}t + x_i.\n\\end{cases}\n\\end{equation}\nThese splines exist on a time interval $[t_i,t_{i+1}]$, and are defined as sets $S_i = \\{d_i, j_i, a_i, v_i, x_i,t_i, t_{i+1}\\}$, and the idea is to start simple and then introduce the constraints $(**)$ one by one. Due to the boundary conditions on velocity, the acceleration has to be zero at $t=t_f/2$, which implies that the jerk has to be zero at  $t = t_f/4,2t_f/4,3t_f/4$. It is then clear that the jerk derivative must be alternating between $x^{(4)}(t)=\\pm\\hat{d}$ on eight equidistant time intervals, $t_d$. \nThe jerk derivative in each spline $S_i$ is then given by the corresponding element in the vector\n\\begin{equation}\\label{eq:djerksol}\n\\mathbf{d} = \\sign(r_f - r_0)\\cdot\\hat{d}\\cdot[1,-1,-1,1,1,-1,1]\\\\\n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:djerkref}\nt_d=\\sqrt[4]{\\dfrac{|r_f - r_0|}{8\\hat{d}}},\n\\end{equation}\nis determined from~\\eqref{eq:gensol} and the remaining constants are chosen to preserve continuity. When adding the remaining constraints $(**)$, we need to introduce periods of constant jerk of length $t_j$, constant acceleration of length $t_a$ and constant velocity of length $t_v$, such that the constraints are preserved. The general solution is then \n\\begin{equation}\n\\begin{cases}\\label{eq:fullsol}\n\\mathbf{d} = \\sign(r_f - r_0)\\cdot\\hat{d}\\cdot[1,0,-1,0,-1,0,1,0,1,0,-1,0,1]\\\\\n\\mathbf{t} =[t_d,t_j,t_a,t_j,t_d,t_v,t_d,t_j,t_a,t_j,t_d,]\n\\end{cases}\n\\end{equation}\nwhere the jerk derivative and time interval length in each spline $S_i$ is given by the corresponding element in $\\mathbf{d}$ and $\\mathbf{t}$ respectively. The time interval lengths and splines are then determined in five steps.\n\n\\subsubsection*{Step 1}\nWe start by finding $t_d$, as the maximum time that the jerk derivative can be kept zero separate without violating $(**)$, which is given by\n\\begin{equation}\\label{eq:td}\nt_{d} = \\min\\begin{Bmatrix}\\sqrt[4]{\\dfrac{|r_f - r_0|}{8\\hat{d}}}, \\sqrt[3]{\\dfrac{\\hat{v}}{2\\hat{d}}}, \\sqrt{\\dfrac{\\hat{a}}{\\hat{d}}},\\dfrac{\\hat{j}}{\\hat{d}}\\end{Bmatrix}\n\\end{equation}\nas the initial conditions on the derivative terms are $\\mathbf{x}(t_0) = \\mathbf{0}$. Now, if $t_d$ depends on the first term (i.e. the index of the minimum value in ~\\eqref{eq:td} is 1), we violate a constraint on jerk derivative and nothing else, and the solution takes the form of ~\\eqref{eq:djerksol} described above with $t_j=t_a=t_v=0$. If the jerk is first violated (index 2), we need to introduce periods of constant jerk, shown in \\textbf{Step 2}. If the acceleration is first violated (index 3), $t_j = 0$, and we proceed with \\textbf{Step 3}. 
Finally, if the constraint on velocity is first violated (index 4), $t_j=t_a=0$, and we compute a period of constant velocity in \\textbf{Step 4}.\n\n\\subsubsection*{Step 2}\nAccording to~\\eqref{eq:gensol} and~\\eqref{eq:fullsol}, the maximum $t_j$ at which the constraint $|x^{(3)}(t)| < \\hat{j}$ is still satisfied, is given by the real and positive solution to\n\\begin{equation}\\label{eq:tjj}\nt_j^3 + 5t_dt_j^2 + 8t_d^2t_j + 4t_d^3-\\dfrac{|r_f - r_0|}{2\\hat{d}t_d} = 0.\n\\end{equation}\nThis choice of $t_j$ ensures that we arrive at the correct terminal point, $x(t_f) = r_f$. However, we must also check if the constraint on acceleration and velocity is violated. By examining~\\eqref{eq:gensol} and~\\eqref{eq:fullsol}, these constraints are violated exactly when\n\\begin{equation}\\label{eq:tja}\n\\sup\\limits_{t_0\\leq t\\leq t_f}|\\ddot{x}(t)| = \\hat{a} \\Rightarrow t_j = \\dfrac{\\hat{a}- \\hat{d}t_d^2}{\\hat{j}}.\n\\end{equation}\nand for the velocity\n\\begin{equation}\\label{eq:tjv}\n\\sup\\limits_{t_0\\leq t\\leq t_f}|\\dot{x}(t)| = \\hat{v} \\Rightarrow\n\\hat{j}(t_d + t_j)t_a + \\hat{d}(t_d^3 + t_d^2t_j)+ \\hat{j}(2t_jt_d + t_j^2+t_d^2)-\\hat{v} =0\n\\end{equation}\nThe smallest real solution $tj>0$ to ~\\eqref{eq:tjj}~\\eqref{eq:tja}~\\eqref{eq:tjv} is chosen so that the solution complies with all constraints.\nIf this $t_j$ is given by the constraint on jerk ~\\eqref{eq:tjj}, we arrive at our terminal point keeping all constraints in $(**)$ and we are done.\nIf $t_j$ is given by the constraint on acceleration ~\\eqref{eq:tja}, we proceed with \\textbf{step 3}, and if the constraint on velocity~\\eqref{eq:tjv} is violated first, $t_a = 0$ and we move straight to \\textbf{Step 4}.\n\n\\subsubsection*{Step 3}\nA period of constant acceleration, $t_a$, is computed such that the terminal point $x(t_f) = r_f$. This time is given by the real an positive solution to\n\\begin{equation}\n(t_d^2+t_dt_j)t_a^2+\n(6t_d^3+9t_d^2t_j+3t_dt_j^2)t_a+\n8t_d^4+16t_d^3t_j+10t_d^2t_j^2+2t_dt_j^3-\\dfrac{|r_f - r_0|}{\\hat{d}} = 0\n\\end{equation}\nas derived from equation~\\eqref{eq:gensol} and~\\eqref{eq:fullsol}. However, we might then violate the velocity constraint, and must therefore compute the $t_a$ at which the maximum velocity is reached. This is done by solving~\\eqref{eq:tjv} for $t_a$. Just as in \\textbf{Step 2}, if the constraint on velocity is not breached, the algorithm arrives at the terminal point and we are done. If the velocity constraint is violated (i.e. $t_f$ is given by~\\eqref{eq:tjv}) we proceed with \\textbf{Step 4}.\n\n\\subsubsection*{Step 4}\nHere, a period of constant velocity, $t_v$, is introduced. No $t_v>0$ can violate any of the constraints in $(**)$ since the speed is constant during $t_f$, and the period time is simply computed so that  $x(t_f) = r_f$. This can either be done similarly to $t_j$ and $t_a$, by finding $t_v$ analytically from~\\eqref{eq:gensol} and~\\eqref{eq:fullsol}. In the implementation, this is done automatically in \\textbf{Step 5} by checking the end point position, $x_p$, and velocity, $v_p$, of the 7th spline and computing $t_v = x_p/v_p$.\n\n\\subsubsection*{Step 5}\nIn this final step, the splines are created based on $~\\eqref{eq:fullsol}$ to preserve continuity in all derivatives between the splines. 
In doing so, a complete trajectory can be generated in the cycle where the positional reference step is taken (see the function block \\texttt{computeTrajectory}), and then evaluated at a given point in time (see the function block \\texttt{evaluateTrajectory}). This approach was taken to decrease the computational effort and allow a higher cycle frequency.\n\n\\begin{figure}\n\\centering\n\\includegraphics[height = 6cm, width = 0.7\\linewidth]{figures/Splines.png}\n\\rule{35em}{0.5pt}\n\\caption{The fourth order trajectory generated for a positional reference step from $r_0 = 5$ to $r_f=10$ [cm] at the time $t_0 = 0$, given the bounds $[\\hat{d},\\hat{j},\\hat{a},\\hat{v}]= [10,3,2,4]$ (dashed). The optimal time given the harsh constraints is $t_f\\approx 4.3$.}\n\\label{fig:posref}\n\\end{figure}\n\n\\clearpage\\subsection{Cart control}\nSince previous work done in backlash compensation using DMC and positional PID control resulted in a very low repeatable positional accuracy, this method was used as a starting point in the controller design~\\cite{Patrik:2013}. However, this gave bad results when implemented in TwinCAT, and was not investigated further due to a lack of time. Instead, two  approaches commonly used in the industry for rigid robotics, the feed-forward PID-control and the cascade control, were implemented in Simulink to make use of the information generated by the trajectory planner (see Figure ~\\ref{fig:controlSimulink})~\\cite{PID2015feedforward}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.8\\linewidth]{figures/PIDfeed-forward.png}\n\\includegraphics[width = 0.8\\linewidth]{figures/CascadeControl.png}\n\\caption{The feed-forward PID control  (top) and the cascade control (bottom) as implemented in Simulink.}\n\\label{fig:controlSimulink}\n\\end{figure}\n\nIn the physical process, the two motors of the cart are controlled by a pair of EPOS3 70/10 drivers developed by Maxon. These drivers can be configured to have the motor follow various reference signals, such as position, velocity and torque. Given the structure of the $C^N$-filter, the PID-controlled current loop over the motors was auto-tuned to follow a reference torque.\n\nA model of the asynchronous squirrel cage motors was created in Simulink using the ``asynchronous macine'' based on the work of Li et al. (see \\texttt{motor\\_model.slx}, Gitlab)~\\cite{li2013modeling}. Due to the poor data sheet of the motors, there were too many unknown parameters to make good use the motor model when simulating the system. Consequently, for the purpose of crude simulations of control in Matlab, the driver control loop was omitted and a 1:1 relation of reference torque and actual motor torque was assumed.\n\nIn the feed-forward scheme, the control errors in position, velocity and acceleration are amplified by feed forward gains and summed in to a single error signal. This error is fed to the PID control, which in turn generates a control signal corresponding to an effective total motor torque. After the friction compensation, the signal control is split by the $C^N$-filter and fed to the motor drivers as reference torques.\n\nIn the cascade control, a P-PI loop is used to to compensate errors in position and velocity successively. Simulations showed that good control could be achieved both when using the P-PI loop (see FIgure~\\ref{fig:controlSimulink}), that both speed and accuracy could be increased by introducing a second PI-term on acceleration. 
However, this P-PI-PI control could not be realized in TwinCAT due to noisy signals and highly inaccurate acceleration measurements. \n\nThe tuning of the P-PI loop was relatively simple in Matlab but proved very difficult in the real process. In order to both have good transient behavior and make the cart follow it's optimal trajectory, gain scheduling was used to increase the P-parts on position and velocity while moving the cart, and scaling down both P-gains by a factor of 100 when approaching the reference position (see Figure ~\\ref{fig:CascadeControl}). In addition, error in velocity was completely disregarded when below a certain tolerance if also the positional error of an order of a few mm. In doing so, integral action work exclusively on positional control.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.8\\linewidth]{figures/CascadeTwincat.png}\n\\caption{The cascade control as implemented in Simulink (top) and TwinCat (bottom).}\n\\label{fig:CascadeControl}\n\\end{figure}\n\n\\subsection{Anti-Windup}\nWhen running the physical process with the P-PI cascade control and the trajectory planning set to only constrain jerk derivative, small positional oscillations were present as the cart approached the terminal trajectory position, $r_f$ (see the torque control signals in \\textbf{Section 5}, Figure~\\ref{fig:torquesplit}). In order for the cart to reach a stable position in a short amount of time, the influence of the oscillations had to be minimized. When investigating the problem, it seemed largely dependent on the I-gain. This was confirmed in the Simulink model, where the behavior was replicated when using cascade control with high I-gains along with third order trajectory planning (see Figure~\\ref{fig:AntiWindupResults}). Here the oscillations are most clearly visible in the plot of acceleration and velocity, and significant enough to cause large position errors when compared to the desired repeatable accuracy on the micrometer scale.\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.3\\linewidth,height=6cm]{figures/PositionAW.png}%\n\\includegraphics[width = 0.3\\linewidth,height=6cm]{figures/VelocityAW.png}%\n\\includegraphics[width = 0.3\\linewidth,height=6cm]{figures/AccelerationAW.png}\n\\caption{Cart position (left), velocity (center) and acceleration (right) when running the cart model along a third-order trajectory (black, dashed) with conditioned AW P-PI cascade control, and regular P-PI cascade control.}\n\\label{fig:AntiWindupResults}\n\\end{figure}\n\nA good way of mitigating the oscillatory behavior induced by $I$-gain is to use one of many anti-windup schemes (AW). Four different approaches were investigated, based on the work on J. Espina et al.~\\cite{espina2009speed}. In this survey of fast AW schemes, the authors concluded that for PI-control of permanent magnet synchronous machines, the conditioned AW had the lowest settling time and overshoot of the methods considered. This conditioned AW scheme simply detects when the control signal gets saturated, which in our case is when difference in reference torque before and after saturation is non-zero. In this state, the integrator holds it's most recent value. The conditioned AW scheme was implemented in Simulink using the memory block and a relational operator for checking when the control is saturated (see Figure ~\\ref{fig:AntiWindupSimulink}). 
Note that the $D$-term was included with a filter-coefficient $N$ for the purpose of testing AW PID control, but that the $D$-gain was set to zero in the cascade control implementation. As expected, the oscillations are greatly suppressed and we reach stable desired position much faster than with conventional $PI$-regulators.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.5\\linewidth]{figures/AntiwindupSimulink.png}\n\\caption{The Simulink model of the continuous time conditioned AW-PID regulator.}\n\\label{fig:AntiWindupSimulink}\n\\end{figure}\n\nThe simulation was run with rough estimations of parameters such as the cart mass and motor rotor inertia, and was merely used to determine the effect of conditioned AW control, which yielded a significant improvement in control. Due to lack of time, this was never implemented in TwinCAT, but it is definitely worth investigating in future iterations of the TwinCAT software.\n\n\\newpage\\bibliography{Bibliography}{}\n\\bibliographystyle{IEEEtran}\n\\end{document}", "meta": {"hexsha": "3ce0cca51fe14f4e829443f192383718db2fda6c", "size": 23325, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dual_motor_control/report/report.tex", "max_stars_repo_name": "mgreiff/control_theory", "max_stars_repo_head_hexsha": "9188975d330a3cd712d042c599abd537e98d3296", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dual_motor_control/report/report.tex", "max_issues_repo_name": "mgreiff/control_theory", "max_issues_repo_head_hexsha": "9188975d330a3cd712d042c599abd537e98d3296", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dual_motor_control/report/report.tex", "max_forks_repo_name": "mgreiff/control_theory", "max_forks_repo_head_hexsha": "9188975d330a3cd712d042c599abd537e98d3296", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-22T10:03:14.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-22T10:03:14.000Z", "avg_line_length": 93.3, "max_line_length": 1201, "alphanum_fraction": 0.7509539121, "num_tokens": 6744, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673133042217, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5737095793482166}}
{"text": "\n\\color{black}\n\\subsection*{fitting a self organizing map to in-game data}\nThe file \\texttt{q3dm1-path2.csv} contains a sequence \n\\begin{equation*}\n\\vec{x}[1]:\\vec{x}[2]:\\vec{x}[3]:\\vec{x}[3]:\\cdots:\\vec{x}[n]\n\\end{equation*}\nof 3D locations the avatar of a human player was seen at while moving around the Quake III map \\textit{q3dm1}. When plotted, these data look like this\n\\begin{center}\n\\includegraphics[width=0.75\\textwidth]{q3dm1-data-path2.pdf}\n\\end{center}\n\nFit a self organizing map of $k=24$ neurons into the given data points $\\vec{x}[t]$. This SOM should have the following topological- or \\emph{map space} structure\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{som-infty-topology.pdf}\n\\end{center}\n\\newpage\n\n\n\n\n\nPlot the data in \\texttt{q3dm1-path2.csv} together with the SOM you fitted to it. Plot data points in black and SOM weights and their connections in \\textcolor{blue}{blue}.\n\n\\textbf{Note:} if your fitted SOM looks ``twisted'', then run the training algorithm again, until you obtain a better fit.\n%%%%%\n%%%%% enter your answer here, i.e. replace \"placeholder.pdf\" by the name of the graphics file you created\n%%%%%\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{som24.png}\n\\end{center}\n%%%%%\n%%%%%\n%%%%%\n\n\n\n\n\n\\vspace{2cm}\nOnce you have fitted your SOM, determine how well its weights $\\vec{w}_1, \\ldots, \\vec{w}_k$ represent the data. To do this, compute the mean squared error\n\\begin{equation*}\nE = \\frac{1}{n} \\sum_{t=1}^n \\, \\min_i \\, \\dsq{\\vec{x}[t]}{\\vec{w}_i}\n\\end{equation*}\nRound your result to \\emph{two} decimals and enter it here \\color{blue}\n%%%%%\n%%%%% enter your answer after the '=' sign\n%%%%%\n\\begin{equation*}\nE = 1671.85\n\\end{equation*}\n%%%%%\n%%%%%\n%%%%%\n%%%%%\n\\color{black}\n\\newpage", "meta": {"hexsha": "7a44fed82923c0611692edfbe069935bf58e72f7", "size": 1741, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "SelfTest2SS2020/selfTestProblem1.tex", "max_stars_repo_name": "baraaHassan/Game-AI-Course", "max_stars_repo_head_hexsha": "dd4ed04b10b2231aac2f98b3b88274f7ab0cc339", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SelfTest2SS2020/selfTestProblem1.tex", "max_issues_repo_name": "baraaHassan/Game-AI-Course", "max_issues_repo_head_hexsha": "dd4ed04b10b2231aac2f98b3b88274f7ab0cc339", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SelfTest2SS2020/selfTestProblem1.tex", "max_forks_repo_name": "baraaHassan/Game-AI-Course", "max_forks_repo_head_hexsha": "dd4ed04b10b2231aac2f98b3b88274f7ab0cc339", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.5438596491, "max_line_length": 172, "alphanum_fraction": 0.7007466973, "num_tokens": 551, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.8128673133042217, "lm_q1q2_score": 0.5737095793482165}}
{"text": "\\chapter{Multiscale Geometric Analysis on the Sphere}\n\\minitoc \n\n\\label{ch_mrs}\n\n\\vskip1cm\n\n\\index{wavelet}\n\\index{sphere}\n\n\\section{Introduction}\n\nMany wavelet transforms on the sphere have been proposed in the past years.  Using the lifting scheme\n \\citet{wave:sweldens95a} developed an orthogonal Haar wavelet \n transform on any surface, which can be directly applied on the sphere.\n Its interest is however relatively limited because of the poor properties of the Haar function and the\n problems inherent to orthogonal transforms. \n\nMore interestingly, many papers have presented \n new continuous wavelet transforms \\citep{wave:antoine99,wave:tenerio99,wave:cayon01,wave:holschneider96}.\n These works have been extended to directional wavelet transforms \\citep{wave:antoine01,wave:hobson04}. \n All these  continuous wavelet decompositions  are useful for data analysis, but cannot be used for restoration purposes because of the lack \n of an inverse transform. \n\\citet{freeden97} and \\citet{freeden98} proposed the first redundant wavelet transform, based on the spherical harmonics transform, which presents an inverse transform.  \n \\citet{starck:sta05_2}  proposed an invertible isotropic undecimated wavelet transform (UWT) on the sphere, also  based on \nspherical harmonics, which  has the same property as the starlet transform, i.e.\\ the sum of the wavelet  scales reproduces the original image.\n A similar wavelet construction \\citep{marinucci08,fay08a,fay08} used the so-called needlet filters.\n \\citet{wiaux08} also proposed an algorithm which permits to reconstruct an image from its steerable wavelet transform.\nSince reconstruction algorithms are available,  these new tools can be used for many applications\nsuch as denoising, deconvolution, component separation \\citep{starck:yassir05,bobin-gmca-cmb,delabrouille08} or inpainting \\citep{inpainting:abrial06,starck:abrial08}.\n\\index{needlet filters}\n\nExtensions to the sphere of 2D geometric multiscale decompositions such as the ridgelet \ntransform and the curvelet transform were presented  in  \n\\citet{starck:sta05_2}.\n  % It has been shown that such constructions are very useful for the detection of cosmic strings \\citep{starck:sta03_1,starck:jin05,hammond08}.\n\nThe goal of this chapter is to overview these multiscale transforms on the sphere. Section~\\ref{mrs_pixel} overviews the HEALPix pixelization scheme and the spherical harmonics transform. Section~\\ref{mrs_haar} shows how a fast orthogonal Haar wavelet transform on the sphere can be built using HEALPix. In Section~\\ref{sect_wts}, we present an isotropic wavelet transform on the sphere which has similar properties as the starlet transform and therefore should\nbe very useful for data denoising and deconvolution. This algorithm is directly derived from the FFT-based wavelet transform proposed in \\citet{starck:sta94_3} for aperture synthesis image restoration (see Section~\\ref{sec_fft}), and is relatively close to the \\citet{freeden98} method, except that it features the same straightforward reconstruction as does the starlet transform algorithm (i.e.\\ the sum of the scales reproduces the original data). This wavelet transform can also be easily extended to a pyramidal wavelet transform, allowing us to reduce the redundancy, a possibility which may be very important for larger data sets. In Section~\\ref{sect_cur}, we show how this new pyramidal transform can be used to derive a curvelet transform on the sphere. 
Section~\\ref{sect_exp} describes how these new transforms can be used for denoising, component separation and inpainting. \n\nSections~\\ref{section:bedros} and \\ref{section:cmb} present how these new tools can help us to analyze data in two real applications, in physics and in cosmology. Finally, guided numerical experiments together with a toolbox dedicated to multiscale transforms on the sphere (MR/S) are described.\n \n\\section{Data on the Sphere}\n\\label{mrs_pixel}\n\nVarious pixelization schemes for data on the sphere exist in the literature. These include the Equidistant Coordinate Partition (ECP), the Icosahedron method  \\citep{tegmark:icos1}, the Quad Cube \\citep{white92}, IGLOO \\citep{igloo}, HEALPix \\citep{pixel:healpix}, Hierarchical Triangular Mesh (HTM) \\citep{kunszt01} or Gauss-Legendre Sky Pixelization (GLESP) \\citep{pixel:glesp}. Important properties to decide which one is the best for a given application include the number of pixels and their size, fast computation of the spherical harmonics transform, equal surface area for all pixels, pixel shape regularity, separability of variables with respect to latitude and longitude, availability of efficient software libraries including parallel implementation, etc. Each of these properties has advantages and drawbacks. In this chapter, we use the HEALPix representation which has several useful properties. \n\n\\newpage\n\\subsection{HEALPix}\n\\index{HEALPix}\n\\label{healpix}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=8cm]{pixelhealpix.pdf}\n\\caption{The HEALPix sampling grid for four different resolutions.}\n\\label{pixelhealpix}\n\\end{figure}\n\nThe HEALPix representation (Hierarchical Equal Area isoLatitude Pixelization of a sphere) \\citep{pixel:healpix}\\footnote{http://healpix.jpl.nasa.gov.} is a curvilinear hierarchical partition of the sphere into quadrilateral pixels of exactly equal area but with varying shape. The base resolution divides the sphere into 12 quadrilateral faces of equal area placed on three rings around the poles and equator. Each face is subsequently divided into $N_{\\mathrm{side}}^{2}$ pixels following a quadrilateral multiscale  tree structure (see Fig.~\\ref{pixelhealpix}). The pixel centers are located on iso-latitude rings, and pixels from the same ring are equispaced in azimuth. This is critical for computational speed of all operations involving the evaluation of the spherical harmonics coefficients, including standard operations such as convolution, power spectrum estimation, and so on.  \nHEALPix is a standard pixelization scheme in astronomy.\n \n% \\begin{figure*}\n%\\centering\n%\\includegraphics[height = 2.8 in]{pixelbasecyl.pdf}\n%\\caption[Projection cylindrique de diff\\'erentes pix\\'elisations possibles.]{Projection cylindrique de diff\\'erentes pix\\'elisations possibles pour diff\\'erents $N_\\theta$ et $N_\\vartheta$. La grille HEALPix utilise $N_\\theta=3$ et $N_\\vartheta =4$}\n%\\label{pixcyl}\n%\\end{figure*}\n\n% La pix\\'elisation s'effectue en d\\'ecoupant la surface form\\'ee suivant la projection cylindrique en $N_\\theta$ et $N_\\vartheta$ parties suivant respectivement la latitude et la longitude, \n% selon les figures \\ref{pixcyl} et \\ref{pixsph}. La s\\'eparation entre les pixels pres du p\u00f4le et ceux pres de l'\\'equateur est choisie de telle fa\u00e7on que tous les pixels aient  la m\u00eame surface.\n% Du fait de ce choix de pix\\'elisation de base, les frontieres de chaque pixel sont assez simples. 
Elles sont de la forme $cos{\\theta} = a + b * \\vartheta $ dans la zone \\'equatoriale et de la forme $ cos{\\theta} = a + b / (\\vartheta)^2 $ ailleurs.\n\n% Pour leur implantation, ils ont choisi $N_\\theta = 3$ et $N_\\vartheta = 4$, si bien que la pix\\'elisation de base comporte 4 pixels autour de chaque p\u00f4le et 4 pixels le long de l'\\'equateur.\n\n% \\begin{figure*}\n% \\centering\n% \\includegraphics[height = 2.8 in]{pixelbasesph.pdf}\n% \\caption[Projection orthographique de diff\\'erentes pix\\'elisations possibles.]{Projection orthographique de diff\\'erentes pix\\'elisations possibles pour diff\\'erents $N_\\theta$ et $N_\\vartheta$. }\n% \\label{pixsph}\n% \\end{figure*}\n \n\n\\subsection{Spherical Harmonics}\n\\index{spherical harmonics}\n\nThe equivalent of the Fourier transform on the sphere is the spherical harmonics transform. From this analogy, in the sequel, the $\\hat{ }$ notation that was used for Fourier transform in previous chapters will be used to denote the spherical harmonics coefficients of a function. \nAny function $f(\\theta,\\vartheta) \\in L_2(S^2)$ on the sphere $S^2$ in $\\RR^3$ can be decomposed into spherical harmonics:\n\\begin{eqnarray}\n\\label{decomp_alm}\nf(\\theta,\\vartheta)=\\sum_{l=0}^{+\\infty}\\sum_{m=-l}^{l} \\hat{f}_{lm}Y_{lm}(\\theta,\\vartheta) ,\n\\end{eqnarray}\nwhere  $Y_{lm}$ are the spherical harmonics defined by:\n\\begin{eqnarray}\n\\label{def_ylm}\nY_{lm}(\\theta,\\vartheta)=\\sqrt{\\frac{2l+1}{4\\pi}\\frac{(l- \\abs{m} )!}{(l+ \\abs{m} )!}} P_{lm}(\\cos\\vartheta)e^{im\\theta},\n\\end{eqnarray}\n\n$P_{lm}$ are the  associated Legendre functions (or polynomials) defined by the following differential equation:\n\\begin{eqnarray}\n\\label{def_poly_legendre}\n\\frac{d}{dt} \\left[     (1-t^2)   \\frac{d}{dt} P_{lm}  \\right]   + \\Big( l(l+1) - \\frac{m^2}{1-t^2}\\Big) P_{lm} = 0  .\n\\end{eqnarray}\n\nThese functions are related to the Legendre polynomials $P_{l}$ by \n\\begin{eqnarray}\nP_{lm}(t)  = (-1)^m (1-t^2)^{m/2}\\frac{d^m}{dt^m}P_l(t) ,\n\\label{def_poly_leg}\n\\end{eqnarray}\nwhere  $P_l$  is:\n\\begin{eqnarray}\nP_{l}(t)  =\\frac{1}{2^l l!}  \\frac{d^l}{dt^l} (t^2-1)^l .\n\\label{def_pl}\n\\end{eqnarray}\n\nFurthermore, an important property of the Legendre polynomials is that they are orthogonal:  \n\\begin{eqnarray}\n\\label{ortho_harm_sph}\n\\sum_{l\\in\\mathbb{N}} \\sum_{|m|\\leqslant l} Y^*_{lm}({\\boldsymbol \\omega}') \\ Y_{lm}({\\boldsymbol \\omega}) = \\delta({\\boldsymbol \\omega}' - {\\boldsymbol \\omega}) .\n\\end{eqnarray}\nwith ${\\boldsymbol \\omega} = (\\theta,\\vartheta)$ et ${\\boldsymbol \\omega}' = (\\theta',\\vartheta')$\n\n\n%  On peut ainsi souhaiter que la transform\\'ee en harmoniques sph\\'eriques possede un algorithme rapide. \n% On peut \\'egalement vouloir qu'elle soit inversible pour des images \u00e0 bande limit\\'ee.\n% Pour le premier point, il est indispensable que les points soient \\'equir\\'epartis\n% sur les m\\'eridiens et les paralleles. 
Pour le deuxieme point, il suffit que\n% l'\\'echantillonage en $\\vartheta $ soit suffisant et que les points d'\\'echantillonage\n% en latitude  $\\theta $ appartiennent, soit aux z\\'eros du polyn\u00f4me de Legendre ou qu'ils soient \\'equir\\'epartis.\n\n\nIn this chapter, many multiscale decompositions will be built \nbased on the spherical harmonics and/or the HEALPix representation.\n\n\\section{Orthogonal Haar Wavelets on the Sphere}\n\\label{mrs_haar}\n\\index{wavelet!Haar on sphere}\n\\index{sphere!Haar}\n\nThe Haar wavelet transform on the sphere  \\citep{wave:sweldens95a} \nat each resolution $j$ and pixel $\\bk=(k_x,k_y)$ on the sphere is based on a scaling function  $\\phi_{j,\\bk}$ ($\\phi_{j,\\bk}(\\bx) = \\phi\\parenth{2^{-j}(\\bx-\\bk)}$, where $\\bx$ is the vector of Cartesian coordinates on the sphere, and $\\phi$ is the Haar scaling function) \nand three Haar wavelet functions $\\psi^{d}_{j,\\bk}$ (see Section~\\ref{sec_haar})\nwith  $d \\in\\{1,2,3\\}$. It uses the idea that a given pixel on the sphere at a given resolution $j$  in the HEALPix representation\nis directly related to four pixels at the next resolution $j-1$.  \n% For Healpix  pixelisation, resolution of the map is related \n% to the  $N_{side}$ parameter, \n% which corresponds to the square root of number of pixels in each face.\n% $ N_{side} = 2^{j-1}$.  At a given resolution  $j$ we have $n_j=12 \\times 4^{j-1}$ pixels with a surface size $\\mu_j$. \n\n% Noting, $\\bk_0$, the pixelThe scaling function $\\phi$ and three wavelet functions $\\psi$ are defined by:\n% \\begin{eqnarray}\n% \\label{def_whaar1}\n% \\phi_{j,\\bk}(\\bx) & = & \\left\\{  \\begin{array}{ll}\n% 1 & \\textrm{si $t \\in S_{j,\\bk}$ }\\\\\n% 0 & \\textrm{sinon}\n%    \\end{array} \\right.  \\nonumber \\\\\n% \\psi_{1,j+1,\\bk} & = & \\frac{\\phi_{j,\\bk_0} +\\phi_{j,\\bk_2}-\\phi_{j,\\bk_1}-\\phi_{j,\\bk_3} } {4}   \\nonumber \\\\\n% \\psi_{2,j+1,\\bk} & = & \\frac{\\phi_{j,\\bk_0} +\\phi_{j,\\bk_1}-\\phi_{j,\\bk_2}-\\phi_{j,\\bk_3} }{4}  \\nonumber \\\\\n% \\psi_{3,j+1,\\bk} & = & \\frac{\\phi_{j,\\bk_0} +\\phi_{j,\\bk_3}-\\phi_{j,\\bk_1}-\\phi_{j,\\bk_2} }{4}\n% \\end{eqnarray}\n\n% \\begin{figure}[htb]\n%\\centering\n%\\includegraphics[width=10cm]{fig_haar_decomp.pdf}\n%\\label{fig_haar_decomp}\n%\\caption{On resolution to next using Haar.}\n%\\end{figure}\n\n% \\begin{figure*}\n% \\vbox{\n% \\centerline{\n% \\hbox{\n% \\includegraphics[angle=180,width=7cm]{haar0.pdf}\n% \\includegraphics[angle=180,width=7cm]{haar1.pdf}\n% }}\\centerline{\n% \\hbox{\n% \\includegraphics[angle=180,width=7cm]{haar2.pdf}\n% \\includegraphics[angle=180,width=7cm]{haar3.pdf}\n% }}\n% \\centerline{\n% \\hbox{\n% \\includegraphics[angle=180,width=7cm]{haar4.pdf}\n% \\includegraphics[angle=180,width=7cm]{haar5.pdf}\n% }}\n% }\n% \\label{fig_haar_decomp2}\n% \\caption{Haar wavelet  coefficents.}\n% \\end{figure*}\n\nDenoting $\\bk_0,\\bk_1,\\bk_2,\\bk_3$  the four pixels at scale $j$, hierarchically related to \nthe pixel $\\bk$ at scale $j+1$,  scaling coefficients  $c_{j+1,\\bk}$ at scale $j+1$ are derived from \n  those at scale $j$ by:\n\\begin{eqnarray}\n\\label{def_haar1}\nc_{j+1}[\\bk] &=& \\frac{1}{4} \\sum_{d=0}^3 c_{j} [\\bk_d],\n\\end{eqnarray}\nand wavelet coefficients at scale $j+1$ from coefficients at scale  $j$ by:\n\\begin{eqnarray}\n\\label{def_haar2}\nw^{1}_{j+1} [\\bk] &=& \\frac{1}{4} (c_{j}[\\bk_0]+c_{j} [\\bk_2]   -  c_{j} [\\bk_1]-c_{j}[\\bk_3])  \\nonumber  \\\\\nw^{2}_{j+1} [\\bk] &=&\\frac{1}{4}  (c_{j}[\\bk_0]+c_{j} [\\bk_1]   
-  c_{j}[\\bk_2]-c_{j}[\\bk_3])  \\nonumber  \\\\\nw^{3}_{j+1} [\\bk] &=&\\frac{1}{4}  (c_{j}[\\bk_0]+c_{j} [\\bk_3]   -  c_{j} [\\bk_1]-c_{j}[\\bk_2]) .\n\\end{eqnarray}\n\n% Fig. \\ref{fig_haar_decomp} shows the calculation scheme of haar coefficients, from one resolution to the next. \n% Haar wavelet coefficients for the astronomical WMAP data set are shown in Fig. \\ref{fig_haar_decomp2}.\n\nThe Haar wavelet transform on the sphere is orthogonal and its reconstruction is exact. The inverse transformation is obtained by:\n \\begin{eqnarray}\n\\label{def_recons_haar}\nc_0[\\bx] = \\sum_{\\bk}  c_{J}[\\bk]  \\phi_{J,\\bk}(\\bx) + \\sum_{j=1}^{J} \\sum_{d=1}^3 \\sum_{\\bk} w^{d}_{j}[\\bk]  \\psi^d_{j}(\\bx) .\n\\end{eqnarray}\n\nThis transform is very fast but its interest is relatively limited. Indeed, it is not rotation invariant, and more importantly the Haar wavelet shape is not well adapted for most applications, because of the non-regular shape of the wavelet function.\n\n\n% ===============================================\n\n\\section{Continuous Wavelets on the Sphere}\n\n\\subsection{Stereoscopic Projection}\n\\label{chap_mex_hat}\n\n\\index{wavelet!axisymmetrical wavelets}\n\\index{wavelet!stereoscopic projection}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=10cm]{proj_stereo_inverse.pdf}\n\\caption{Inverse stereographic projections  of a  radial  function  from plane  to the  sphere.}\n\\label{figprojstereo_direct}\n\\end{figure}\n\nIn order to have more choice to design the wavelet function, we may want to use wavelets defined for regular 2D images to the sphere.\nThis is possible by using inverse stereographic projections of radial wavelet functions such the Mexican hat \\citep{wave:cayon01}.\nDefining the stereographic projection operator, $\\bR: {\\bf t}  \\mapsto \\ {\\boldsymbol \\omega} $, with $  {\\boldsymbol \\omega}  = (\\theta(r),\\vartheta) $, $\\theta(r) = 2 \\arctan(r/2)$,\nthe  radial wavelets $\\psi_{\\mathrm{plane}}$ can be projected on the sphere by a unique rotation,   ${\\boldsymbol \\omega}_0 =(\\theta_0,\\vartheta_0)$, respectively around the two axes   $O_y$ and $O_z$.  \nFig.~\\ref{figprojstereo_direct} shows the projection of radial functions from the plane to the sphere. \n \nThe convolution on the sphere between a radial wavelet function $\\psi(\\theta)$ and a function $f({\\boldsymbol \\omega})$ is:\n \\begin{eqnarray}\n\\label{convol_sph}\n(\\psi \\ast f)(\\theta,\\vartheta)= \\int_{S^2} \\psi_{\\mathrm{plane}}^* (\\bR^{-1}  {\\boldsymbol \\omega}) f({\\boldsymbol \\omega}) d{\\boldsymbol \\omega} .\n\\end{eqnarray}\n\nSuch wavelets are axisymmetric by construction. This property can be used to derive fast transformation algorithms using spherical harmonics.\nIndeed spherical harmonics coefficients $\\hat{\\psi}[l,m]$ of the wavelet function $\\psi$ on the sphere are equal to zero when  $m \\neq 0$, and by the Funk-Hecke theorem, the convolution can be written using spherical harmonics by:\n\\begin{eqnarray}\n\\label{convol_sph_harm}\n(\\psi \\ast f)(\\theta,\\vartheta)  =  \\sum_{l=0}^{\\infty} \\sum_{m=-l}^{l} \\sqrt{\\frac{2l+1}{4\\pi} }\\hat{f}[l,m] \\hat{ \\psi}[l,0] Y_{lm}(\\theta,\\vartheta) ~.\n\\end{eqnarray}\nwhere $\\hat{f}[l,m]$ are the spherical harmonics coefficients of the function $f$, i.e. 
$f = \\sum_{l=0}^{\\infty}\\sum_{m=-l}^{l}  \\hat{f}[l,m] Y_{lm}$ and similary for $\\hat{\\psi}$.\n\nClassical wavelet dilations can also be derived on the sphere using the dilation operator $\\mathscr{D}_a$ by a factor $a > 0$ \\citep{wiaux07}:\n\\begin{eqnarray}\n\\label{proj_stereo}\n\\mathscr{D}_a(f)({\\boldsymbol \\omega}) = \\chi_a^{1/2} (a,\\theta)  f(D^{-1}_a {\\boldsymbol \\omega}) ,\n\\end{eqnarray}\nwhere $D_a(\\theta,\\vartheta)=(\\theta_a(\\theta),\\vartheta)$ with the linear relation $\\tan\\theta_a(\\theta)/2=a\\tan\\theta/2$, and $D_a$ is the dilation operator that maps a sphere without its South pole on itself. $ \\chi_a^{1/2}(a,\\theta)$ is a norm preservation term (i.e. $\\mathscr{D}_a$ is unitary):\n\\begin{eqnarray}\n\\label{def_lambda}\n\\chi_a^{1/2}(a,\\theta) = a^{-1}[1+\\tan^2(\\theta/2)]/[1+a^{-2}\\tan^2(\\theta/2)] ~.\n\\end{eqnarray}\n\n% \\begin{figure*}\n% \\centering\n% \\includegraphics[width=9cm]{proj_stereo.pdf}\n% \\caption{Projection st\\'er\\'eographique inverse sur la sphere.}\n% \\label{figprojstereo}\n% \\end{figure*}\n\n\\subsection{Mexican Hat Wavelet} \n\n\\index{wavelet!Mexican hat on sphere}\n\\index{sphere!Mexican hat}\n\\index{Mexican hat!on sphere}\n\n\\begin{figure}[htb]\n\\vbox{\n\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{wav_mex_hat_0.pdf}\n\\includegraphics[angle=180,width=6.5cm]{wav_mex_hat_1.pdf}\n}}\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{wav_mex_hat_2.pdf}\n\\includegraphics[angle=180,width=6.5cm]{wav_mex_hat_3.pdf}\n}}\n}\n\\caption{Mexican hat on the sphere for the dilation parameter equal to $a = \\{1,2,4,8\\}$.}\n\\label{fig_wav_mex_hat}\n\\end{figure}\n\n% \\begin{figure}[htb]\n% \\vbox{\n% \\centerline{\n% \\hbox{\n% \\includegraphics[angle=180,width=6.5cm]{mex_hat_0.pdf}\n% \\includegraphics[angle=180,width=6.5cm]{mex_hat_1.pdf}\n% }}\\centerline{\n% \\hbox{\n% \\includegraphics[angle=180,width=6.5cm]{mex_hat_8.pdf}\n% \\includegraphics[angle=180,width=6.5cm]{mex_hat_20.pdf}\n% }}\n% }\n% \\caption{Cosmic Microwave Background (CMB) simulated map and its mexican hat wavelet transform\n% with dilation parameter $a = \\{1,8,20\\}$.}\n% \\label{fig_trans_cmb_wav_mex_hat}\n% \\end{figure}\n\nThe 2D Mexican hat wavelet transform is the second derivative of a Gaussian:\n\\begin{equation}\n\\label{rad_mexhat}\n\\psi(r) = \\frac{1}{\\sqrt{2 \\pi}} \\frac{1}{a} \\Big( 2 - \\Big( \\frac{r}{a} \\Big)^2 \\Big) e^{- \\frac{r^2}{2a^2}}\n\\end{equation}\nwhere  $a$ is a scale factor parameter and $r$ the distance to the wavelet center.\n\nUsing the inverse stereographic projection, it is possible to extend the Mexican hat wavelet\non the sphere  \\citep{wave:antoine99,wave:tenerio99,wave:cayon01,wave:holschneider96,wave:vielva04}:\n\\begin{equation}\n\\psi_a (r) = \\frac{1}{\\sqrt{2 \\pi} C_a} \\Big( 1 +  \\Big( \\frac{r}{2} \\Big)^2 \\Big)^2   \\Big( 2 - \\Big( \\frac{r}{a} \\Big)^2 \\Big) e^{- \\frac{r^2}{2a^2}},\n\\end{equation}\nwhere $a$ is a scale factor, $C_a$ is a normalization term $C_a = a \\Big( 1 + \\frac{a^2}{2} + \\frac{a^4}{4} \\Big)^{\\frac{1}{2}}$, and $r$ is the distance on the tangent plane, which is related to the polar angle $\\theta$ through $r = 2 \\textrm{tan} \\frac{\\theta}{2}$. 
This transform may be useful to analyze the data, but  it does not have a reconstruction operator, and can therefore not be used for restoration\napplications.\n\nFig.~\\ref{fig_wav_mex_hat} shows the Mexican hat wavelet on the sphere for  four different scales.\n%  and Fig ~\\ref{fig_trans_cmb_wa_mex_hat}  shows four scales for the a simulated CMB map.\n\n\n\\subsection{Directional Wavelets}\n\\label{dirwavelet}\n\\index{wavelet!directional wavelets}\n\n% \\index{sphere!directionnal wavelet}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=9cm]{projection_stereo_ondelette_directionnelle.pdf}\n\\caption{Inverse stereographic projections  of a  directional wavelet on the sphere.}\n\\label{figprojstereo_direct2}\n\\end{figure}\n\nTo study anisotropic structures, the previously described continuous wavelet transform can be extended to directional wavelets  \\citep{wave:antoine01,vielva06,McEwen08}.\nFig.~\\ref{figprojstereo_direct2} shows the projection of an elliptic function from the plane to the sphere.  \n\n\\subsubsection{Elongated Mexican Hat Wavelet}\n\\index{Mexican hat!elongated}\n\\index{wavelet!Mexican hat}\n\n\\begin{figure}[htb]\n\\vbox{\n\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_0.pdf}\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_1.pdf}\n}}\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_2.pdf}\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_3.pdf}\n}}\n\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_4.pdf}\n\\includegraphics[angle=180,width=6.5cm]{dir_mex_hat_5.pdf}\n}}\n}\n\\caption{Elongated Mexican hat on the sphere for the dilation parameter equal to $a_x= 1$ and $a_y = \\{0.5,1.,1.25,1.5,2,4\\}$.}\n\\label{fig_dir_mex_hat}\n\\end{figure}\n\nThe elongated Mexican hat wavelet can be written as:\n\\begin{multline}\n\\label{directional_hat}\n\\psi_{a_x, a_y}(\\theta, \\vartheta) = \\sqrt{\\frac{2}{\\pi}} C(a_x,a_y) \\parenth{1+ \\tan^2 \\frac{\\theta}{2}} \\bigg( 1- \\frac{4 \\tan^2 \\theta/2 }{a_x^2+a_y^2} \\Big(\\frac{a_y^2}{a_x^2} \\cos^2 \\vartheta \\\\\n+  \\frac{a_x^2}{a_y^2} \\sin^2 \\vartheta\\Big) \\bigg)  e^{-2 \\tan \\frac{\\theta}{2}(\\cos^2 \\vartheta / a^2_x + \\sin^2 \\vartheta / a^2_y  )} ~,\n\\end{multline}\nwhere $a_x$ and $a_y$ are the dilation factors along the two axes $O_x$ and $O_y$, $C(a_x,a_y)$ is a normalization constant defined as\n\\begin{eqnarray}\n\\label{directional_hat_norm}\nC(a_x,a_y)& =& (a_x^2+a_y^2) \\parenth{a_x a_y (3a_x^4+ 3a_y^4+2a_xa_y)}^{-1/2} ~.\n\\end{eqnarray}\n\nFig.~\\ref{fig_dir_mex_hat} shows the wavelet functions for different dilation parameters $a_x$  and  $a_y$.\n\n\n\\subsubsection{Morlet Wavelet}\n\\index{wavelet!Morlet}\n\n\\begin{figure}[htb]\n\\vbox{\n\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{dir_morlet_0.pdf}\n\\includegraphics[angle=180,width=6.5cm]{dir_morlet_1.pdf}\n}}\\centerline{\n\\hbox{\n\\includegraphics[angle=180,width=6.5cm]{dir_morlet_2.pdf}\n\\includegraphics[angle=180,width=6.5cm]{dir_morlet_3.pdf}\n}}\n}\n\\caption{Morlet wavelets on the sphere for the parameter  $\\bk$ equal to  (2,0), (4,0), (6,6) et (9,1).}\n\\label{fig_dir_morlet}\n\\end{figure}\n\nThe Morlet wavelet on the sphere, derived from the stereographic projection of the 2D function on the plane, is:\n\\begin{multline}\n\\label{real_morlet}\n\\psi_{a_x, a_y,  {\\bf k}}(\\theta, \\vartheta)  =  \\sqrt{\\frac{2}{\\pi}} C({\\bf k}) (1+ \\tan^2 \\frac{\\theta}{2}) \\bigg(\\cos \\frac{  {\\bf k} \\cdotp  
with $R^{-1}  {\bf x}  = (2 \tan (\theta/2) \cos \vartheta, 2 \tan (\theta/2) \sin \vartheta)$,  ${\bf k}=(k_x,k_y)$, $|{\bf k}|^2 = k_x^2 + k_y^2$, 
 and  $C({\bf k}) = (1+3e^{-|{\bf k}|^2/2} -4e^{-3   |{\bf k}|^2/8})^{-1/2} $. The wave vector ${\bf k}$ controls the oscillations of the wavelet function.
 
Fig.~\ref{fig_dir_morlet} shows the Morlet wavelet for ${\bf k}$ equal respectively to $(2,0)$, $(4,0)$, $(6,6)$ and $(9,1)$.

Continuous wavelet transforms have been used extensively in astrophysics, mainly to analyze the Cosmic Microwave Background \citep{wave:vielva04}.
Directional wavelets based on steerable filters were also proposed in \citet{wiaux06,McEwen08}.
We present in the following a set of multiscale decompositions on the sphere which have a fast exact inverse transform, and are therefore suitable for many applications such as restoration.
 
%===============================================================

\section{Redundant Wavelet Transform on the Sphere with Exact Reconstruction}
\label{sect_wts}
%We show in this section that we can build wavelet transforms on the sphere, based on the spherical harmonics transform, which have an exact reconstruction and can therefore be easily used in different applications.


\subsection{Isotropic Undecimated Wavelet Transform on the Sphere}
Here an undecimated isotropic wavelet transform on the sphere (UWTS) is described which is similar in many respects to the starlet transform (see Chapter~\ref{chp_uwt}), and is therefore a good candidate for restoration applications.
Its isotropy is a favorable property when analyzing isotropic features. 
This isotropic transform is obtained using a scaling function ${\phi}_{l_c}(\theta, \vartheta)$ 
with cut-off frequency  $l_c$ and  azimuthal symmetry, meaning that ${\phi}_{l_c}$ does not depend 
on the azimuth $\vartheta$. Hence the spherical harmonics coefficients $\hat {\phi}_{l_c} [l,m]$ of ${\phi}_{l_c}$ vanish 
when $m \ne 0$ so that:
\begin{eqnarray}
{\phi}_{l_c}(\theta, \vartheta)= {\phi}_{l_c}(\theta) = \sum_{l = 0}^{l_c} \hat  {\phi}_{l_c} [l,0] Y_{l0}(\theta, \vartheta) .
\end{eqnarray}
Then, convolving a function $f(\theta, \vartheta) \in L_2(S^2)$ with ${\phi}_{l_c}$ is greatly simplified 
and the spherical harmonics coefficients of the resulting map $c_0$ are readily given by
\begin{eqnarray}
 \hat c_{0}[l,m] = \widehat{{\phi}_{l_c} * f} [l,m] = \sqrt{\frac{2l+1}{4\pi} } \hat {\phi}_{l_c} [l,0] \hat f[l,m]  .
\end{eqnarray}
\index{sphere!undecimated wavelet}

\subsubsection{From One Resolution to the Next}

A sequence of smoother approximations of $f$ on 
a dyadic resolution scale can be obtained using the scaling function ${\phi}_{l_c}$ as follows:
\begin{eqnarray}
c_0   & = &  {\phi}_{ l_{c} }  \ast f  \nonumber      \\
c_1   & = &  {\phi}_{2^{-1} l_{c} }   \ast f  \nonumber	   \\
& \vdots & \nonumber \\ 
c_j    & = &   {\phi}_{2^{-j}  l_{c}  }  \ast f  ,
\end{eqnarray}
where ${\phi}_{2^{-j} l_{c} }$ is a rescaled version of ${\phi}_{l_{c}}$. The above multiresolution sequence can actually be obtained recursively.
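
In practice, each of these smoothings is trivial in the harmonic domain: convolving with an axisymmetric kernel amounts to multiplying every $\hat f[l,m]$ by a profile depending on $l$ alone. A minimal sketch of one such step (\texttt{healpy} assumed; \texttt{fl} is a stand-in profile absorbing the $\sqrt{(2l+1)/4\pi}$ factor):
\begin{verbatim}
import numpy as np
import healpy as hp

nside, lmax, lc = 128, 255, 128
f = np.random.standard_normal(hp.nside2npix(nside))  # stand-in map

# stand-in axisymmetric profile with cut-off lc
fl = np.clip(1.0 - np.arange(lmax + 1) / float(lc), 0.0, 1.0)

alm = hp.map2alm(f, lmax=lmax)
c0 = hp.alm2map(hp.almxfl(alm, fl), nside, lmax=lmax)  # c0 = phi_lc * f
\end{verbatim}
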


Define a low pass filter $h_{j}$ for each scale $j$  by:
\begin{eqnarray}
 \widehat{H}_{j}[l,m]  & = & \sqrt{\frac{4\pi}{2l+1} }  \hat h_{j}[l,m]   \nonumber  \\
   &  = & 
   \begin{cases}
   \frac {   \hat \phi_{\frac{l_{c}}{2^{j+1}} }[l,m]   }   {  \hat  \phi_{  \frac{l_{c}}{2^{j}} }[l,m]   } & \mbox{if }  l  < \frac{ l_{c}} {2^{j+1}} \quad \textrm{and}\quad m = 0 , \\
   0 & \mbox{otherwise } . 
  \end{cases}
\end{eqnarray}
It is then easily shown that $c_{j+1}$ derives from $c_j$ by convolution on the sphere with $h_j$:  $c_{j+1} = c_{j} \ast h_j$.


\subsubsection{The Wavelet Coefficients}

Given an axisymmetric wavelet function $\psi_{l_c}$, we can derive in the same way a 
high pass filter $g_j$ on each scale~$j$:
\begin{eqnarray}
 \widehat{G}_{j}[l,m]  & = & \sqrt{\frac{4\pi}{2l+1} }  \hat{g}_{j}[l,m] \nonumber  \\
   & = & 
  \begin{cases}
  \frac {   \hat \psi_{\frac{l_{c}}{2^{j+1}} }[l,m]   }   {  \hat  \phi_{  \frac{l_{c}}{2^{j}} }[l,m]   } & \mbox{if }  l  < \frac{ l_{c}} {2^{j+1}} \quad \textrm{and}\quad m = 0 ,\\
1 &\mbox{if }  l  \ge \frac{ l_{c}} {2^{j+1}} \quad \textrm{and}\quad m = 0 ,\\ 
0&  \mbox{otherwise } .
  \end{cases}
\end{eqnarray}
From this definition, the wavelet coefficients $w_{j+1}$ at scale $j+1$ are obtained from the previous scaling coefficients $c_j$ by a simple convolution on the sphere with $g_j$: $w_{j+1} = c_{j} \ast g_j$.

As in the starlet transform algorithm, the wavelet coefficients can be defined as the difference between two consecutive resolutions, $w_{j+1}(\theta, \vartheta) = c_{j}(\theta, \vartheta) - c_{j+1}(\theta, \vartheta)$. This defines a zonal wavelet function $\psi_{l_c}$ as:
\begin{eqnarray}\label{wavelet}
\hat \psi_{\frac{l_c}{2^{j}}}[l,m] = \hat \phi_{\frac{l_c}{2^{j-1}}} [l,m]  - \hat \phi_{\frac{l_c}{2^{j}}}[l,m] .
\end{eqnarray}
The high pass filters $g_j$ associated with this wavelet are expressed as: 
\begin{eqnarray}
\widehat{G}_{j}[l,m]  & = & \sqrt{\frac{4\pi}{2l+1} } \hat{g}_{j}[l,m] \nonumber \\
                  & = & 1 - \sqrt{\frac{4\pi}{2l+1} } \hat{h}_j[l,m]   =   1 - \widehat{H}_j[l,m] .
\end{eqnarray}
Obviously other wavelet functions could be used just as well.


\subsubsection{Choice of the Scaling Function}
Any function with a cut-off frequency is a possible candidate. We retained here 
a B-spline function of order 3 (see Chapter~\ref{chp_uwt}):
\begin{eqnarray}
{\hat{\phi}}_{l_c} [l,m] = \frac{3}{2} B_{3}  \left(  \frac{2l}{l_{c}} \right)
\end{eqnarray}
where $B_3(t)$ is the cubic B-spline defined in Section~\ref{sect_starlet}.
%\[B_3(t) = \frac{1}{12}({\mid{t-2}\mid}^3 - 4 {\mid{t-1}\mid}^3 + 6 {\mid{t}\mid}^3 - 4 {\mid{t+1}\mid}^3 + {\mid{t+2}\mid}^3)  \]


\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
\includegraphics[width=14cm,height=4.5cm]{fig_sphere_filterbank1.pdf}
% \includegraphics[width=14cm,height=5cm]{ch1_diff_uv_phi_psi.pdf}
}}}
\caption{On the left, spherical harmonics coefficients $\hat{{\phi}}[l,0]$ of the scaling function ${{\phi}}$ and, on the right, those of the wavelet function ${\psi}$.}
\label{fig_diff_uv_phi_psi}
\end{figure}

In Fig.~\ref{fig_diff_uv_phi_psi} the spherical harmonics coefficients of the scaling function derived from a B$_3$-spline, and those of the associated wavelet function \eqref{wavelet}, are plotted as a function of $l$.
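
These filters are easy to tabulate numerically from the closed form of $B_3$. The following Python sketch (our illustration, not the MR/S implementation; helper names are ours) computes $\hat\phi_{l_c}[l]$ and derives $\widehat{H}_j$ and $\widehat{G}_j = 1 - \widehat{H}_j$:
\begin{verbatim}
import numpy as np

def b3(t):
    # cubic B-spline; vanishes outside [-2, 2]
    return (np.abs(t - 2)**3 - 4 * np.abs(t - 1)**3 + 6 * np.abs(t)**3
            - 4 * np.abs(t + 1)**3 + np.abs(t + 2)**3) / 12.0

def phi_hat(lc, lmax):
    # harmonic profile of the scaling function with cut-off lc
    return 1.5 * b3(2.0 * np.arange(lmax + 1) / lc)

def filters(lc, lmax, J):
    # H_j = phi_{lc/2^(j+1)} / phi_{lc/2^j} below the cut-off, 0 above
    H, G = [], []
    for j in range(J):
        num = phi_hat(lc / 2.0**(j + 1), lmax)
        den = phi_hat(lc / 2.0**j, lmax)
        Hj = np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)
        H.append(Hj)
        G.append(1.0 - Hj)  # wavelet = difference of two resolutions
    return H, G
\end{verbatim}
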
Other functions such as the needlet function \citep{marinucci08} can be used as well.

The steps of the UWT on the sphere of a discrete image $X$ sampled from $f$ are summarized in Algorithm~\ref{algo_uwts}. If the wavelet function corresponds to the choice \eqref{wavelet}, Step 3 in this UWTS algorithm reduces to $w_{j+1} = c_{j} - c_{j+1}$.

% Their corresponding conjugate low pass and high pass filters $h$ and $g$ are plotted in Fig.~\ref{fig_diff_uv_ht_gt}. 

{\linespread{1}
\begin{algorithm}[h]
\caption{The Undecimated Wavelet Transform on the Sphere.}
\label{algo_uwts}
\noindent{\bf Task:} Compute the UWTS of a discrete image $X$.\\
\noindent{\bf Parameters:} Data samples $X$ and number of wavelet scales $J$.\\  
\noindent{\bf Initialization:} 
\begin{itemize}
\item $c_0=X$.
\item Compute the B$_3$-spline scaling function and derive $\hat{\psi}$, $\widehat{H}$ and $\widehat{G}$ numerically.
\item Compute the corresponding spherical harmonics transform of $c_0$.
\end{itemize}
\For{$j=0$ to $J-1$} {
\begin{enumerate}[1.]
\item Compute the spherical harmonics transform of the scaling coefficients:  $\hat{c}_{j+1}=\hat{c}_j\widehat{H}_{j}$.
\item Compute the inverse spherical harmonics transform of $\hat{c}_{j+1}$ to get $c_{j+1}$.
\item Compute the spherical harmonics transform of the wavelet coefficients:  $\hat{w}_{j+1}=\hat{c}_j\widehat{G}_{j}$.
\item Compute the inverse spherical harmonics transform of $\hat{w}_{j+1}$ to get $w_{j+1}$.
\end{enumerate}
}
\noindent{\bf Output:} ${\cal W}=\{w_1, w_2, \dots, w_{J}, c_{J}\}$ the UWTS of $X$.
\end{algorithm}
}

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_mars.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_mars_scale1.png}
}}
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_mars_scale2.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_mars_scale3.png}
}}
\centerline{
\hbox{
\includegraphics[width=6.5cm,height=3.9cm]{fig_mars_scale4.png}
\includegraphics[width=6.5cm,height=3.9cm]{fig_mars_scale5.png}
}}
}
\caption{Mars topographic map and its UWTS (four wavelet detail scales and the scaling (smooth) band).}
\label{Figure:UWTS}
\index{data!Mars topography}
\end{figure}

Fig.~\ref{Figure:UWTS} shows the Mars topographic map (top left)
\footnote{The Mars Orbiter Laser Altimeter (MOLA) generated altimetry profiles used to create global topographic maps. The MOLA instrument stopped acquiring altimetry data on June 30, 2001, and after that operated in passive radiometry mode until the end of the Mars Global Surveyor mission. MOLA data sets are produced by the MOLA Science Team and archived by the PDS Geosciences Node.} and its wavelet transform, using five scales (four wavelet scales + coarse scale).
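
For the difference-of-resolutions wavelet, the whole forward transform reduces to a few harmonic-domain multiplications. A compact Python sketch of Algorithm~\ref{algo_uwts} in this special case (\texttt{healpy} assumed, \texttt{filters} from the sketch above; exactness holds up to the band-limit \texttt{lmax} of the input):
\begin{verbatim}
import numpy as np
import healpy as hp

def uwts(x, lc, J, lmax):
    nside = hp.get_nside(x)
    H, _ = filters(lc, lmax, J)
    alm = hp.map2alm(x, lmax=lmax)
    c_prev, bands = x, []
    for j in range(J):
        alm = hp.almxfl(alm, H[j])        # c_{j+1} = c_j * h_j
        c_next = hp.alm2map(alm, nside, lmax=lmax)
        bands.append(c_prev - c_next)     # w_{j+1} = c_j - c_{j+1}
        c_prev = c_next
    return bands + [c_prev]               # [w_1, ..., w_J, c_J]

# reconstruction is simply the sum of all bands:
# x_rec = np.sum(uwts(x, lc, J, lmax), axis=0)
\end{verbatim}
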
The sum of the five scales reproduces exactly the original image.
\index{Mars Orbiter Laser Altimeter}

\subsubsection{Inverse Transform}

If the wavelet is the difference between two resolutions, a straightforward reconstruction of an image from its wavelet coefficients ${\cal W} = \{w_1,\dots, w_{J}, c_{J}\}$ is: 

\begin{eqnarray}
 c_{0}(\theta, \vartheta) = c_{J}(\theta, \vartheta) + \sum_{j=1}^J  w_j(\theta, \vartheta) .
\end{eqnarray}
This reconstruction formula is the same as in the starlet algorithm.

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
\includegraphics[width = 13cm, height = 4.cm]{fig_sphere_filterbank2.pdf} 
% \includegraphics[width=14.5cm,height=5cm]{ch1_diff_uv_ht_gt.pdf}
}}}
\caption{On the left, the filter $\hat{\tilde{h}}$, and on the right the 
filter $\hat{\tilde{g}}$.}
\label{fig_diff_uv_ht_gt}
\end{figure}

But since the transform is redundant, there is actually no unique way to reconstruct an image
from its coefficients (see the filter bank design framework of Section~\ref{design_no_fb}). Indeed, using the relations:
\begin{eqnarray}
\hat c_{j+1}[l,m] & = & \widehat H_{j} [l,m]  \hat c_{j} [l,m] \nonumber \\
\hat w_{j+1}[l,m] & = & \widehat G_{j} [l,m] \hat c_{j} [l,m] 
\end{eqnarray}
a least-squares estimate of $c_j$ from $c_{j+1}$ and $w_{j+1}$ gives:
\begin{eqnarray}
\hat{c}_{j}   = \hat{c}_{j+1}  {\widehat {\tilde H}}_{j}   + \hat{w}_{j+1}  {\widehat {\tilde G}}_{j} ~,
\end{eqnarray}
where the dual filters $\tilde h$ and $\tilde g$ satisfy:
\begin{eqnarray}
\label{eqnht} 
{\widehat {\tilde H}}_j =  \sqrt{\frac{4\pi}{2l+1} } {\hat {\tilde h}}_j & = & {\widehat H}_{j}^* /
\parenth{\big|{\widehat H}_{j}\big|^2 + \big|{\widehat G}_j\big|^2} \nonumber \\
{\widehat {\tilde G}}_j =  \sqrt{\frac{4\pi}{2l+1} } {\hat {\tilde g}}_j & = & {\widehat G}_{j}^* /
\parenth{\big|{\widehat H}_j\big|^2 + \big|{\widehat G}_j\big|^2} .
\end{eqnarray}
For a scaling function chosen as the B$_3$-spline and a wavelet taken as the difference between two resolutions,
the corresponding conjugate low pass and high pass filters $\widehat {\tilde H}$ and $\widehat {\tilde G}$ are plotted in Fig.~\ref{fig_diff_uv_ht_gt}.
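
Numerically, the dual filters are a one-liner per scale. A short sketch (\texttt{numpy}; for the real-valued filters of this chapter the conjugation is a no-op, and with $G = 1 - H$ the denominator is bounded below by $1/2$, so the guard is only a precaution):
\begin{verbatim}
import numpy as np

def dual_filters(H, G, eps=1e-12):
    # least-squares duals: Ht = conj(H) / (|H|^2 + |G|^2), same for Gt
    Ht, Gt = [], []
    for Hj, Gj in zip(H, G):
        den = np.maximum(np.abs(Hj)**2 + np.abs(Gj)**2, eps)
        Ht.append(np.conj(Hj) / den)
        Gt.append(np.conj(Gj) / den)
    return Ht, Gt
\end{verbatim}
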

The reconstruction algorithm is given in Algorithm~\ref{algo_iuwts}.

{\linespread{1}
\begin{algorithm}[h]
\caption{Inverse UWT on the sphere.}
\label{algo_iuwts}
\noindent{\bf Task:} Reconstruct an image from its UWTS coefficients.\\
\noindent{\bf Parameters:} UWTS coefficients ${\cal W}=\{w_1, w_2, \dots, w_{J}, c_{J}\}$.\\
\noindent{\bf Initialization:}
\begin{itemize}
\item Compute the B$_3$-spline scaling function and derive $\hat{\psi}$, $\widehat{H}$, $\widehat{G}$, $\widehat{\tilde H}$ and $\widehat{\tilde G}$ numerically.
\item Compute the spherical harmonics transform of $c_J$ to get ${\hat c}_J$.
\end{itemize}
\For{$j=J-1$ to $0$, with step $=-1$} {
\begin{enumerate}[1.]
\item Compute the spherical harmonics transform of the wavelet coefficients $w_{j+1}$ to get $\hat{w}_{j+1}$.
\item Multiply $\hat{c}_{j+1}$ by ${\widehat {\tilde H}}_{j}$.
\item Multiply $\hat{w}_{j+1}$ by ${\widehat {\tilde G}}_{j}$.
\item Set $\hat{c}_j=\hat{c}_{j+1}+\hat{w}_{j+1}$.
\end{enumerate}
}
Compute the inverse spherical harmonics transform of $\hat c_0$.\\
\noindent{\bf Output:} $c_0$ is the inverse UWT on the sphere.
\end{algorithm}

\begin{figure}[htb]
\centerline{
\hbox{
\includegraphics[width = 10cm,angle = 90]{fig_backwt_sphere.pdf}
}}
\caption{Reconstruction from a single wavelet coefficient at different scales. Each map is obtained by setting all wavelet coefficients to zero but one, and by applying an inverse UWTS. Depending on the position and scale of the non-zero coefficient, the reconstructed map shows an isotropic feature 
at different scales and positions.}
\label{Figure:back_wt}
\end{figure}
}
Fig.~\ref{Figure:back_wt} shows such reconstructions, each obtained by keeping a single non-zero wavelet coefficient at a given scale and position: the result is an isotropic feature whose size is set by the scale of that coefficient.
 

%-------------------------

\subsection{Isotropic Pyramidal Wavelet Transform on the Sphere}
\index{wavelet!pyramidal transform}
\index{sphere!pyramidal wavelet}

\subsubsection{Forward Transform}
In the previous algorithm, no down-sampling is
performed and each scale of the wavelet decomposition has the same number of pixels 
as the original data set. Therefore the number of pixels in the
decomposition is equal to the number of pixels in the data multiplied by the number of scales.
For some applications, we may prefer to introduce a decimation
in the decomposition so as to reduce the required memory size and the computation time.
This can be done easily by using a specific property of the chosen scaling function.
Indeed, since we are considering here a scaling function with an initial cut-off $l_c$ in spherical harmonic multipole number $l$, and since the actual cut-off is reduced by a factor of two at each step, the number 
of significant spherical harmonics coefficients is reduced by a factor of four after each convolution with the low pass filter
$h$. Therefore, we need fewer pixels in the direct space when we compute the inverse 
spherical harmonics transform.
Using the HEALPix pixelization scheme \citep{pixel:healpix},  
this can be done easily by dividing the 
$N_{\mathrm{side}}$ parameter by 2 when calling the inverse spherical harmonics transform routine.
The pyramidal wavelet transform on the sphere is given in Algorithm~\ref{algo_pwts}.

{\linespread{1}
\begin{algorithm}[h]
\caption{Pyramidal wavelet transform on the sphere.}
\label{algo_pwts}
\noindent{\bf Task:} Compute the pyramidal WT on the sphere of a discrete image $X$.\\
\noindent{\bf Parameters:} Data $X$ and number of wavelet scales $J$.\\
\noindent{\bf Initialization:} 
\begin{itemize}
\item $c_0=X$.
\item Compute the B$_3$-spline scaling function and derive $\hat{\psi}$, $\widehat{H}$ and $\widehat{G}$ numerically.
\item Compute the corresponding spherical harmonics transform of $c_0$.
\end{itemize}
\For{$j=0$ to $J-1$} {
\begin{enumerate}[1.]
\item Compute the spherical harmonics transform of the scaling coefficients:  $\hat{c}_{j+1}=\hat{c}_j\widehat{H}_{j}$.
\item Compute the inverse spherical harmonics transform of $\hat{c}_{j+1}$ to get $c_{j+1}$.
\item Down-sample $c_{j+1}$, since its support in the spherical harmonic domain has been divided by two.
\item Compute the spherical harmonics transform of the wavelet coefficients:  $\hat{w}_{j+1}=\hat{c}_j\widehat{G}_{j}$.
\item Compute the inverse spherical harmonics transform of $\hat{w}_{j+1}$ to get $w_{j+1}$.
\end{enumerate}
}
\noindent{\bf Output:} ${\cal W}=\{w_1, w_2, \dots, w_{J}, c_{J}\}$ the pyramidal WT on the sphere of $X$.
\end{algorithm}
}

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_earth_elevation.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_earth_scale_1.png}
}}
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_earth_scale_2.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_earth_scale_3.png}
}}
\centerline{
\hbox{
\includegraphics[width=6.5cm,height=3.9cm]{fig_earth_scale_4.png}
\includegraphics[width=6.5cm,height=3.9cm]{fig_earth_scale_5.png}
}}
}
\caption{Pyramidal wavelet transform on the sphere.}
\label{Figure:PWTS}
\end{figure}

Fig.~\ref{Figure:PWTS} shows an Earth image and its pyramidal wavelet transform (PWTS) using five scales.
As the scale number increases (i.e.\ the resolution decreases), the pixel size becomes larger. The data are land and sea-floor elevations obtained from the ETOPO5 5-minute gridded elevation data set. A thorough explanation of the data set is provided at \texttt{www.ngdc.noaa.gov}\footnote{The ETOPO5 data are credited to ``Data Announcement 88-MGG-02, Digital relief of the Surface of the Earth. NOAA, National Geophysical Data Center, Boulder, Colorado, 1988''.   The HEALPix image is available at \texttt{http://astro.ic.ac.uk/$\sim$pdineen/earth/index.html\#earthmap}.}.


\newpage
\subsubsection{Inverse Transform}
 
The reconstruction is not as straightforward as in the undecimated case, since the different scales do not have the same resolution.
For each resolution level, we have to up-sample the scaling band before co-adding it to the wavelet coefficients.
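
Both resolution changes are simple in this band-limited setting: one can resynthesize the map from its harmonic coefficients at the new $N_{\mathrm{side}}$, or use \texttt{healpy}'s pixel-space \texttt{ud\_grade} as a cheaper approximation. A sketch (\texttt{healpy} assumed; \texttt{lmax} is the band-limit of the input band):
\begin{verbatim}
import healpy as hp

def down_sample(c, lmax):
    # halve N_side: the harmonic support was divided by two
    return hp.alm2map(hp.map2alm(c, lmax=lmax),
                      hp.get_nside(c) // 2, lmax=lmax)

def up_sample(c, lmax):
    # double N_side before co-adding with the finer wavelet band
    return hp.alm2map(hp.map2alm(c, lmax=lmax),
                      2 * hp.get_nside(c), lmax=lmax)
\end{verbatim}
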
Algorithm~\ref{algo_ipwts} details the full procedure.
{\linespread{1}
\begin{algorithm}[h]
\caption{Inverse Pyramidal Wavelet Transform on the sphere.}
\label{algo_ipwts}
{\bf Task:} Reconstruct an image from its pyramidal WT on the sphere.\\
\noindent{\bf Parameters:} Pyramidal WT coefficients ${\cal W}=\{w_1, w_2, \dots, w_{J}, c_{J}\}$.\\
{\bf Initialization:}
\begin{itemize}
\item Compute the B$_3$-spline scaling function and derive $\hat{\psi}$, $\widehat{H}$, $\widehat{G}$, $\widehat{\tilde H}$ and $\widehat{\tilde G}$ numerically.
\item Compute the spherical harmonics transform of $c_J$ to get ${\hat c}_J$.
\end{itemize}
\For{$j=J-1$ to $0$, with step $=-1$} {
\begin{enumerate}[1.]
\item Upsample $c_{j+1}$ to the resolution of $c_j$.
\item Compute the spherical harmonics transform of the wavelet coefficients $w_{j+1}$ to get $\hat{w}_{j+1}$.
\item Multiply $\hat{c}_{j+1}$ by ${\widehat {\tilde H}}_{j}$.
\item Multiply $\hat{w}_{j+1}$ by ${\widehat {\tilde G}}_{j}$.
\item Set $\hat{c}_j=\hat{c}_{j+1}+\hat{w}_{j+1}$.
\end{enumerate}
}
Compute the inverse spherical harmonics transform of $\hat c_0$.\\
\noindent{\bf Output:} $c_0$ is the inverse pyramidal WT on the sphere.
\end{algorithm}
}
  
The wavelet transform on the sphere and its pyramidal version both have an exact reconstruction operator, so they are well suited to restoration applications when the data contain isotropic features. In the following, we present other transforms on the sphere more adapted to the analysis of anisotropic features.
 
%===============================================================================


\section{Curvelet Transform on the Sphere}
\label{sect_cur}
%\subsection{Introduction}
\index{curvelet}
\index{curvelet!sphere}
\index{sphere!curvelet}

The 2D curvelet transform enables the directional analysis of an image at different scales (see Chapter~\ref{ch_mga}). 
The fundamental property of the curvelet transform is to 
analyze the data with functions of length about
 $2^{-j/2}$ for the $j$th subband $[2^j, 2^{j+1}]$ of the 
two-dimensional wavelet transform.   
Following the implementation of the first generation curvelet transform described in Section~\ref{subsubsec_curvG1}, the data first undergoes 
an isotropic undecimated wavelet transform (i.e.\ starlet transform). Each scale $j$ is then decomposed into smoothly overlapping blocks of
side-length $B_j$ pixels in such a way that the overlap between two
vertically adjacent blocks is a rectangular array of size $B_j \times B_j/2$.
Finally the ridgelet transform is applied on each individual block. Recall from Chapter~\ref{ch_mga} that the ridgelet transform
precisely amounts to applying a 1D wavelet transform to the
slices of the Radon transform. The first generation curvelet transform is redundant, with a redundancy factor of $16J+1$ whenever $J$ scales
are employed. The curvelet transform was shown to sparsely represent anisotropic structures and smooth curves and edges of different lengths.

\subsection{Curvelets on the Sphere}
The curvelet transform on the sphere (CTS) is similar to the 2D first generation digital curvelet transform,
with the starlet transform replaced by the isotropic UWTS previously 
described. The CTS algorithm is the following:
\begin{itemize}
\item Isotropic UWTS.

\item Partitioning: Each scale is decomposed into blocks of an appropriate size (of side-length $\sim 2^{-s}$), using the HEALPix pixelization.
\item Ridgelet transform: Each block is analyzed with the discrete ridgelet transform.
\end{itemize}
We now describe these three steps.

%\begin{figure}
%\centering
%\includegraphics{pixelhealpix.pdf}
%\caption{The HEALPix sampling grid.}
%\label{pixelhealpix}
%\end{figure}

\subsubsection{Partitioning Using the HEALPix Representation}
 
 \begin{figure}
\centerline{
\hbox{
\includegraphics[width=12cm]{fig_flowgraph_ridgelet_sphere.pdf}
% \includegraphics[width=13.6cm,height=8cm]{fig_flowgraph_ridgelet_sphere.pdf}
}}
\caption{Flowgraph of the ridgelet transform on the sphere.}
\label{Figure:rid_sphere}
\end{figure}

The HEALPix representation is a curvilinear hierarchical partition of the sphere into quadrilateral pixels of exactly equal area but with varying shape. The base resolution divides the sphere into 12 quadrilateral faces of equal area placed on three rings around the poles and equator. Each face is subsequently divided into $N_{\mathrm{side}}^{2}$ pixels following a quadrilateral multiscale tree structure. The pixel centers are located on iso-latitude rings, and pixels from the same ring are equispaced in azimuth. This is critical for the computational speed of all operations involving the evaluation of spherical harmonics transforms, including standard numerical analysis operations such as convolution and power spectrum estimation. 

An important geometrical feature of the HEALPix sampling grid is its hierarchical quadrilateral tree structure. This defines a natural one-to-one mapping of the sphere sampled according to the HEALPix grid into twelve flat images, on all scales. It is then easy to partition a spherical map using HEALPix into quadrilateral blocks of a specified size. One first extracts the twelve base-resolution faces, and each face is then decomposed into overlapping blocks of the specified size. This decomposition into blocks is an essential step of the traditional flat 2D curvelet transform. Based on this reversible warping of the sphere into a set of flat images made possible by the HEALPix sampling grid, the ridgelet and curvelet transforms can be extended to the sphere. 

With the decomposition into blocks described above, there is no overlap between neighboring blocks belonging to different base-resolution faces. This may result, for instance, in blocking effects in denoising experiments via nonlinear filtering. It is possible to overcome this difficulty to some extent by working simultaneously with various rotations of the data with respect to the sampling grid. This will average out undesirable effects at edges between base-resolution faces. 

\newpage
\subsubsection{Ridgelet Transform}
\index{ridgelet}
\index{ridgelet!sphere}
\index{sphere!ridgelet}

Once the partitioning is performed, the standard 2D ridgelet transform described in Chapter~\ref{ch_mga} is applied in each individual block. The ridgelet transform can be based on different implementations of the Radon transform (linogram, fast slant stack); see Chapter~\ref{ch_mga} for details. 
% \begin{enumerate}
% \item Compute the 2D Fourier transform.
% \item Extract lines going through the origin in the frequency plane.
% \item Compute the 1D inverse Fourier transform of each line. We get the Radon transform.
% \item Compute the 1D wavelet transform of the lines of the Radon transform.
% \end{enumerate}
% The first three steps correspond to a Radon transform method called the 
% {\em linogram}.
% \index{linogram}
% Other implementations of the Radon transform,  such as the Slant Stack Radon Transform \citep{cur:donoho_02}, see Section \ref{sectfss}, 
% can be used as well, as long as they allow for an exact reconstruction.
\index{ridgelet transform!fast slant stack}   

Fig.~\ref{Figure:rid_sphere} shows the flowgraph of the ridgelet transform 
on the sphere and Fig.~\ref{Figure:back_rid} shows the reconstruction from a single ridgelet 
coefficient at different scales and orientations.


\begin{figure}[htb]
\centerline{
\hbox{
\includegraphics[width=10cm]{fig_ridssr.pdf}
% \includegraphics[width=9.5cm,height=6cm]{fig_ridssr.pdf}
}}
\caption{Ridgelet atoms on the sphere obtained by reconstruction from a few ridgelet coefficients at different scales and orientations.}
\label{Figure:back_rid}
\end{figure}

\subsection{Curvelet Transform Algorithm}
The curvelet transform algorithm on the sphere is described in 
Algorithm~\ref{algo_curts}.
{\linespread{1}
\begin{algorithm}[h]
\caption{Curvelet Transform on the sphere.}
\label{algo_curts}
{\bf Task:} Compute the curvelet transform on the sphere of a discrete image $X$.\\
{\bf Parameters:} Image $X$ and number of scales $J$.\\
{\bf Initialization:} 
\begin{itemize}
\item $B_1 = B_{\min}$.
\item Compute the isotropic UWTS of $X$ with $J$ scales, get $\{w_1,\dots,w_J,c_J\}$.
\end{itemize}
\For{$j=1$ to $J-1$} {
\begin{enumerate}[1.]
\item Partition the wavelet subband $w_j$ with a block size $B_j$.
\item Apply the digital ridgelet transform to each block; get the curvelet coefficients at scale $j$.
\end{enumerate}
\lIf{$j \mbox{ modulo } 2 = 1$} $B_{j+1} = 2 B_{j}$,  else $B_{j+1} = B_{j}$.
}
{\bf Output:} The curvelet transform on the sphere of $X$.
\end{algorithm}
}
 
The side-length of the localizing windows is doubled {\em at every
other} dyadic subband, hence maintaining the fundamental property of
the curvelet transform that elements of length about
$2^{-j/2}$ serve for the analysis and synthesis of the $j$th subband
$[2^j, 2^{j+1}]$.  We used the default value $B_{\min} = 16$
pixels in our implementation.  Fig.~\ref{Figure:cur_sphere}
gives an overview of the organization of the algorithm.

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
% \includegraphics[width=9.cm,height=12cm]{fig_flowgraph_curvelet_sphere.pdf}
\includegraphics[width=13cm]{fig_flowgraph_curvelet_sphere.pdf}
}}}
\caption{Flowgraph of the curvelet transform on the sphere.}
\label{Figure:cur_sphere}
\end{figure}

\begin{figure}
\vbox{
\centerline{
\hbox{
% \includegraphics[width=9.cm,height=12cm]{fig_back_cur_sphere.pdf}
\includegraphics[angle=90,width=12cm]{fig_back_cur_sphere.pdf}
}}}
\caption{Reconstruction from a single curvelet coefficient at different scales and orientations.}
\label{Figure:back_cur}
\end{figure}
Fig.~\ref{Figure:back_cur} shows the backprojection of curvelet coefficients at 
different scales and orientations.

\subsection{Pyramidal Curvelet Transform on the Sphere}
The CTS is very redundant, which may be a problem for handling huge data sets such as 
Planck data (see Section~\ref{section:cmb} below).

The redundancy can be reduced by substituting, in the 
curvelet transform algorithm, the pyramidal wavelet transform for the undecimated wavelet transform.
The second step, which consists of applying the ridgelet transform to the wavelet scales, is unchanged.
The pyramidal curvelet transform (PCTS) algorithm is summarized in Algorithm~\ref{algo_pcurts}.
\index{sphere!pyramidal curvelet}

{\linespread{1}
\begin{algorithm}[h]
\caption{Pyramidal Curvelet Transform on the sphere.}
\label{algo_pcurts}
{\bf Task:} Compute the pyramidal curvelet transform on the sphere of a discrete image $X$.\\
{\bf Parameters:} Image $X$ and number of scales $J$.\\
{\bf Initialization:} 
\begin{itemize}
\item $B_1 = B_{\min}$.
\item Compute the pyramidal wavelet transform of $X$ with $J$ scales, get $\{w_1,\dots,w_J,c_J\}$.
\end{itemize}
\For{$j=1$ to $J-1$} {
\begin{enumerate}[1.]
\item Partition the wavelet subband $w_j$ with a block size $B_j$.
\item Apply the digital ridgelet transform to each block; get the curvelet coefficients at scale $j$.
\end{enumerate}
\lIf{$j \mbox{ modulo } 2 = 1$} $B_{j+1} = 2 B_{j}$,  else $B_{j+1} = B_{j}$.
}
{\bf Output:} The pyramidal curvelet transform on the sphere of $X$.
\end{algorithm}
}

In the next section, it is shown how the pyramidal curvelet transform can be used for image filtering.


%=========================================


\section{Restoration and Decomposition on the Sphere}
\label{sect_exp}

\subsection{Denoising}
\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_sync.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_sync_noise5.png}
}}
\centerline{
\hbox{
 \includegraphics[width=6.5cm,height=3.9cm]{fig_sync_wtfilter5.png}
 \includegraphics[width=6.5cm,height=3.9cm]{fig_sync_resi_wtfilter5.png}
}}
\centerline{
\hbox{
\includegraphics[width=6.5cm,height=3.9cm]{fig_sync_curfilter5.png}
\includegraphics[width=6.5cm,height=3.9cm]{fig_sync_resi_curfilter5.png}
}}
}
 \caption{Denoising. Upper left and right: simulated 
synchrotron image and same image with
additive Gaussian noise (i.e.\ simulated data).  Middle: undecimated 
wavelet filtering and residual.
Bottom: pyramidal curvelet filtering  and residual.}
\label{Figure:sync_filter}
\index{data!synchrotron}
\end{figure}


Wavelets and curvelets have been used successfully for image denoising via nonlinear filtering or thresholding methods, as extensively studied in Chapter~\ref{chap:denoise}. In the results of Fig.~\ref{Figure:sync_filter}, denoising by hard thresholding the wavelet and curvelet coefficients on the sphere was used. The threshold was set to four times the standard deviation of the noise in each subband.
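
In its simplest form, this band-wise hard thresholding reads as follows (a \texttt{numpy} sketch, not the MR/S code; the per-band noise levels \texttt{sigmas} would be obtained, e.g., by transforming a unit-variance noise realization and rescaling by the noise standard deviation):
\begin{verbatim}
import numpy as np

def hard_threshold_bands(bands, sigmas, nsig=4.0):
    # zero every wavelet coefficient below nsig * sigma_j;
    # the coarse (smooth) band is left untouched
    out = [np.where(np.abs(w) > nsig * s, w, 0.0)
           for w, s in zip(bands[:-1], sigmas)]
    return out + [bands[-1]]
\end{verbatim}
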

% Hard thresholding, for instance, consists of setting all insignificant coefficients (i.e.\ coefficients with an absolute value below a given threshold)
% to zero. In practice, we need to estimate the noise standard deviation $\sigma_j$ in each band $j$
% and a wavelet (or curvelet) coefficient $w_j$ is significant if $\abs{w_j} > \tau \sigma_j$,
% where $\tau$ is a user-defined parameter, typically chosen between 3 and 5. 
% The $\sigma_j$ estimation in band $j$ can be derived from simulations \citep{starck:book06}. 
% Denoting $\bf Y$ the noisy data and $\HT$ the thresholding operator, the filtered data $\bf X$ are obtained by:
% \begin{eqnarray}
%  {\bf X} =    {\W}   \HT_{\bf \lambda} ( {\W}^\Tr  {\bf Y} )
% \end{eqnarray}
% where ${\W}^\Tr$ is the wavelet (respectively, curvelet) 
% transform  and ${\W} $ is 
% the wavelet (resp.\ curvelet) reconstruction  (more details can be found in Chapter~\ref{chap:denoise}).
% $\bf \lambda$ is a vector which has the size of the number of bands in the used wavelet (respectively, curvelet) transform.
% The thresholding is applied to all coefficients in band $j$ with the threshold ${\bf \lambda}[j] = \tau \sigma_j$.

Fig.~\ref{Figure:sync_filter} describes the setting and the results of a simulated denoising experiment: upper left, the original simulated map of the astrophysical synchrotron emission; upper right, the same image plus additive Gaussian noise ($\sigma=5$). Since the synchrotron image has a standard deviation (after renormalization) equal to $16.26$, the SNR is around $3.25$. The middle panels in this figure show the UWTS denoised image and the residuals. The bottom panels show the pyramidal curvelet transform filtered image and the residuals. On such data, exhibiting very anisotropic features, the curvelets produce better results than wavelets.
  
\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
 \includegraphics[width=8cm]{fig_sync_cbfilter5.png}
 }}
 \centerline{
 \hbox{
 \includegraphics[width=8cm]{fig_sync_resi_cbfilter5.png}
}}}
 \caption{Combined denoising (using both wavelets and curvelets) and residuals.}
\label{Figure:sync_cbf_filter}
\index{data!synchrotron}
\end{figure}

 \begin{table}[htb]
\baselineskip=0.4cm
\begin{center}
\begin{tabular}{lc} \hline \hline
Method                          &  Error standard deviation       \\ \hline \hline
Noisy map                       & 5     \\
Wavelet                         & 1.25     \\
Curvelet                        & 1.07   \\
CFA                             & 0.86  \\ \hline
\hline
\end{tabular}
\caption{Error standard deviations after denoising the synchrotron noisy map (additive white Gaussian noise, $\sigma = 5$) by the wavelet, the curvelet and the combined denoising algorithm (CFA). See Section~\ref{sect_cfm} for a description of the latter.}
\label{comptab_sync}
\end{center}
\end{table}


The residuals after wavelet and curvelet based denoising presented in Fig.~\ref{Figure:sync_filter} show different structures. As expected, elongated features are better restored using the curvelet transform, while isotropic structures are better denoised using the wavelet transform. The combined denoising algorithm introduced in Section~\ref{sect_cfm} can obviously also be applied on the sphere, in order to benefit from the advantages of both transforms. This iterative method detects the significant coefficients in both the wavelet domain and the curvelet domain, and guarantees that the reconstructed map will take into account any pattern which is detected as significant by either of the transforms. 

The results are reported in Table~\ref{comptab_sync}. The residual is much better when the combined denoising is applied, and no feature can be detected any more by eye in the residual (see Fig.~\ref{Figure:sync_cbf_filter}).
This was not the case for either the wavelet or the curvelet based denoising alone.


% ==>  Err WT =       1.25165,  Err Cur  =       1.07169,  Err Combined =      0.857861



\subsection{Morphological Component Analysis}
\label{sect_mca}

% For a given spherical map ${\bf Y}$ modeled as a linear combination of 
% $K$ spherical maps $x_k$, $  {\bf Y} = \sum_{k=1}^K x_k$, having different 
% morphologies, 
% MCA assumes that a dictionary of bases $\{ \W_1,\cdots,\W_K \}$ exists such that, 
% for each $ ~ k$, $ x_k$ is sparse in $ ~ \W_k$ while its representation in the other $ ~ \W_{k'}$ ($ ~ k' \ne k$) is not sparse: 
% $ ~ \forall k' \neq k, ||\W_k^\Tr x_k||_0 < ||\W_{k'}^\Tr x_k||_0$, where $||x||_0$ denotes the $\ell_0$ pseudo-norm of the vector $x$.
% %  (\textit{i.e.}  the number of non-zero coefficients of $x$)
% The problem is to separate the mixture $\bf Y$ into its constitutive morphological components $ (x_k)_{k=1,\cdots,K}$ relying on the 
% discriminating power of the different dictionaries $\W_k$.  As described in Section~\ref{mca_method}, this can be done by minimizing
% \begin{equation}
% \label{eq:l1prob_sphere}
% \min_{x_1,\ldots,x_K}  \sum_{k=1}^K  \|  \W_k^\Tr  x_k \|_1   \mbox{ s.t. } \|  {\bf Y}  - \sum_{k=1}^K   x_k   \|_2 \leq \sigma
% \end{equation}
% where $\sigma$ is the noise standard deviation in the noisy case. 

The Morphological Component Analysis (MCA) Algorithm~\ref{algo:mca} (see Section~\ref{sec:mca}) was applied to the problem of decomposing an image on the sphere, using the transforms developed above.

The spherical maps shown in Fig.~\ref{Figure:mcatoy} illustrate a simple numerical experiment. We applied the MCA decomposition algorithm on the sphere to synthetic data resulting from the linear mixture of components that were respectively sparse in the spherical harmonics and the isotropic wavelet representations. The method was able to separate the data back into its original constituents.  
 
\index{MCA}
\index{MCA!sphere}

\begin{figure}
\vbox{
\centerline{
\hbox{
% \includegraphics[width=6.5cm,height=4cm]{test_gauss_alm_bw.pdf}
\includegraphics[angle=90,width=6cm]{test_gauss_alm_bw.pdf}
}}
\centerline{
\hbox{
% \includegraphics[width=6.5cm,height=4cm]{test_gauss_alm_mca_alm_bw.pdf}
% \includegraphics[width=6.5cm,height=4cm]{test_gauss_alm_mca_wt_bw.pdf}
\includegraphics[angle=90,width=6cm]{test_gauss_alm_mca_alm_bw.pdf}
\includegraphics[angle=90,width=6cm]{test_gauss_alm_mca_wt.pdf}
}}
}
\caption{Simple toy experiment with MCA on the sphere. 
The top map shows a linear combination of a spherical harmonic function 
and a localized Gaussian-like function on the sphere. The bottom maps 
show the resulting separated components that were obtained using the 
proposed Morphological Component Analysis on the sphere.}
\label{Figure:mcatoy}
\end{figure}

%---------------------------------------------------------------------------------------------------------------------------------------

\subsection{Inpainting}
\label{sect_mrs_inpaint}

\index{inpainting}
\index{inpainting!sphere}


%  Inpainting can also be done on the sphere. Consider a discrete spherical data map ${\bf Y}$
% and the missing data mask ${\bf M}$ 
% (the main diagonal of ${\bf M}$ encodes the pixel status, namely $1$ for an existing
% pixel and $0$ for a missing one; see Section~\ref{par:mcainpaint} for more details).
%  %  a boolean map $M$ such that ones in $M$ indicate that the corresponding pixels in $Y$ are valid data while zeros indicate invalid data. 
%  
% The objective function of MCA, 
% equation~\eqref{eq:l1prob_sphere}, can be modified as follows: 
% \begin{equation}
% \label{inp:model}
% \min_{x_1,\ldots,x_K}  \sum_{k=1}^K  \|  \W_k^\Tr  x_k \|_1   \mbox{ s.t. }  {\bf Y}   =  {\bf M}   \sum_{k=1}^K   x_k
% \end{equation}
% Thus we are preventing the sparse model under construction from attempting to fit the invalid data. 
% Other constraints can be easily imposed on the interpolated sparse components. 

The inpainting algorithms described in Section~\ref{sect_inpainting} can also be applied on the sphere. In Section~\ref{sect_inpainting}, a total variation penalty is shown to enhance the recovery of piecewise smooth components. Other possible constraints ask for regularity of some localized statistics across the gaps, e.g.\ enforcing the empirical variance of a given inpainted sparse component to be nearly equal inside and outside the masked areas. In practice, because some digital transforms on the sphere are not perfectly accurate, additional constraints of this kind can be imposed and then relaxed close to convergence. These constraints were also found useful in some cases to stabilize the described iterative algorithms. 

A simple numerical experiment is shown in Fig.~\ref{Figure:mcaearth}. 
Starting with a full satellite view of the Earth\footnote{Available from: http://www.nasa.gov/vision/earth/features/bmng\_gallery\_4.html}, 
an incomplete spherical map was obtained by randomly masking some of the pixels. In fact, as much as sixty percent of the pixels were masked. Using both the spherical harmonics transform and the curvelet transform on the sphere within the proposed MCA inpainting algorithm, it is possible to fill in the missing pixels in a visually undetectable way. 

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
\includegraphics[angle=90,width=8cm]{earth_ori.pdf}
}}
\centerline{
\hbox{
\includegraphics[angle=90,width=8cm]{earth_mask.pdf}
}}
\centerline{
\hbox{
\includegraphics[angle=90,width=8cm]{earth_recons.pdf}
}}
}
 \caption{Application of the proposed MCA-inpainting algorithm on the sphere. Top: original satellite view of the Earth.  
Middle: incomplete map retaining 40 percent of the original pixels. 
Bottom: inpainted map.}
\label{Figure:mcaearth}
\index{data!Earth}
\end{figure}

%=========================================================================

\section{Applications}
\subsection{Application in Fusion Physics}
\label{section:bedros}

\index{inertial confinement fusion}
\index{ICF}
\index{fusion physics}
In Inertial Confinement Fusion (ICF), a spherical shell is irradiated by laser energy, either directly or after the 
laser energy has been converted to soft X-rays~\citep{bedros:atzeni}.
Either way, the aim is to implode the capsule, which contains a shell of nuclear fusion fuel (deuterium and tritium) ready to ignite if, after it has been imploded, its density is high enough and a hot spot in its center becomes hot enough to cause a propagating nuclear burn wave to travel through the rest of the fuel. This ultimate energy source will not work if, during the implosion, hydrodynamic instabilities develop which can break apart the shell before it assembles at the center and a hot spot forms~\citep{bedros:lindl}. Hydrodynamic instabilities such as Rayleigh-Taylor occur due to non-uniformities in the laser spatial profile or imperfections in the composition of the multiple surfaces which make up the layers of thin material that surround the nuclear fuel. Because of the large compression ratios involved in ICF, even imperfections of very small initial amplitude can result in the ultimate failure of the target.


\begin{figure}
\centerline{
\hbox{
\includegraphics[angle=0,width=6cm]{bed_mca_orig_data.png}
\includegraphics[angle=0,width=6cm]{bed_mca_large_scale.png}
}
}
\caption{Left: Surface structures of ICF spherical shells measured on the nanometer scale are a superposition of global scale variations, isolated bumps and scratches, as well as artifacts which look like interference patterns on intermediate scales. Right: Coarsest scale of the undecimated isotropic wavelet transform of the surface measurements of an ICF target.}
\label{bedros_mca_data}
\index{data!plasma confinement}
\end{figure}
\begin{figure}[htb]
\centerline{
\includegraphics[angle=0,width=6cm]{bed_mca_in_data.png}
\includegraphics[angle=0,width=6cm]{bed_mca_large_scale.png}
}
\centerline{
\includegraphics[angle=0,width=6cm]{bed_mca_out_dct.png}
\includegraphics[angle=0,width=6cm]{bed_mca_out_wt.png}
}
\caption{Top: Spherical map obtained by subtracting the coarse scale map on the right of Fig.~\ref{bedros_mca_data} from the initial map on the left of Fig.~\ref{bedros_mca_data}. Bottom: Component maps separated by the MCA method on the sphere: interference patterns and measurement artifacts were caught by the local cosine functions on the sphere (left) while the isolated bumps were caught using the undecimated wavelet on the sphere (right). Adding back the coarse scale on the right of Fig.~\ref{bedros_mca_data} to the latter map results in a clean map of the surface structures of an ICF spherical shell with the interference patterns and artifacts removed.} 
\label{bedros_mca_result}
\index{data!plasma confinement}
\end{figure}

It is therefore extremely important to characterize the inner and outer surfaces of ICF shell targets so as to know whether they are worthy of consideration for ICF implosions. One day, in a reactor setting, tens of thousands of targets will have to be imploded daily, so that checking each one individually is out of the question. Instead, very good target fabrication quality control processes have to be adopted so that confidence levels in proper performance will be high. A major step along this path to fusion energy is therefore to understand why imperfections occur, to correct the systematic elements, and to control the harm done by random sources.
%
Fine structures on the surfaces of spherical shells can be measured on the nanometer scale by, among other techniques, atomic force microscopy or phase-shifting spherical diffractive optical interferometry. 

An example of such measurements is shown in Fig.~\ref{bedros_mca_data}.
As can be seen from the figure, there appears to be a superposition of global scale variations, isolated bumps and scratches, as well as artifacts which look like interference patterns on intermediate scales of localization. The latter must be isolated and eliminated from consideration when deciding the readiness of the target for implosion. 

We achieved morphological feature separation by first computing an isotropic wavelet transform of the spherical data and subtracting the coarsest scale information. MCA on the sphere was then applied to the rest of the image, using the undecimated wavelet and the local cosine transforms on the sphere. The isolated bumps were thus identified and the artifacts caused by the measurement technique were removed easily. The resulting bumps, added back to the coarsest scale, constitute the clean data with the interference patterns and artifacts removed, as shown in Fig.~\ref{bedros_mca_result}. The spherical harmonic decomposition of the cleaned image gives rise to coefficients of various $\ell$ modes which will be amplified by the implosion process.

The implosion process can now be assessed correctly using growth factors generated by numerical hydrodynamic simulations. If the bumps are clustered rather than randomly distributed, then systematic errors in the manufacturing process can be tracked down. 
%A code called MODEM has been put together to study such target surface data and extract the localized bump statistics including their correlations in height, size and relative location. 
For more details see~\citet{bedros:modem}.
 
 
\subsection{Application in Cosmology}
\label{section:cmb}

A major issue in modern cosmology is the measurement and the statistical characterization (spatial power spectrum, Gaussianity) of the slight fluctuations in the Cosmic Microwave Background radiation field. These are strongly related to the cosmological scenarios describing the properties and evolution of our Universe. Some 370,000 years after the Big Bang, when the temperature of the Universe was around 3000~K, thermal energy was no longer sufficient to keep electrons and positively charged particles apart, so they combined. Photons were then set free in a nearly transparent Universe. As the Universe has further expanded, these photons are now in the microwave range, but they should still be distributed according to a black body emission law. Indeed, before recombination, the Universe was a highly homogeneous opaque plasma in near thermal equilibrium in which photons and charged particles were highly interacting. Hence the slight fluctuations in matter density, from which such large scale structures as galaxies or clusters of galaxies have evolved, are also imprinted on the distribution of photons. 
\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
\includegraphics[width = 6.cm]{ADAIV_poster_abrial_fig1}
\includegraphics[width = 6.cm]{ADAIV_poster_abrial_fig2}
}
}
}
\caption{Left: CMB data map provided by the WMAP team. Areas of significant foreground contamination in the galactic region and at the locations of strong radio point sources have been masked out. Right: Map obtained by applying the MCA-inpainting algorithm on the sphere to the former incomplete WMAP CMB data map.}
\label{cmb_wmap_inpainting}
\index{data!CMB}
\end{figure}

\index{CMB}
\index{Cosmic Microwave Background}
The Cosmic Microwave Background (CMB) was first observed in 1965 by Penzias and Wilson, confirming a prediction made by Gamow in the late 1940s.
But it was not until the early 1990s that evidence for small fluctuations in the CMB sky could finally be found, thanks to the observations made by COBE~\citep{gauss:smoot92}. This was confirmed by several subsequent observations and more recently by NASA's Wilkinson Microwave Anisotropy Probe (WMAP)\footnote{The WMAP data and mask we used here are available online at http://map.gsfc.nasa.gov/}. Full-sky multi-spectral observations with unprecedented sensitivity and angular resolution are expected from ESA's Planck\footnote{http://astro.estec.esa.nl/Planck} mission, which was launched in 2009. The statistical analysis of this data set will help set tighter bounds on major cosmological parameters.

\begin{figure}[htb]
\vbox{
\centerline{
\hbox{
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig6}
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig7}
}
}
\centerline{
\hbox{
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig8}
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig9}
}
}
\centerline{
\hbox{
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig10}
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig11}
}
}
\centerline{
\hbox{
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig12}
\includegraphics[width=5cm]{ADAIV_poster_abrial_fig13}
}
}
}
\caption{Top left: Masked area. From top to bottom and left to right: the seven wavelet scales of the inpainted map. From the visual point of view, the initially masked area cannot be distinguished any more in the wavelet scales of the inpainted map.}
\label{cmb_scale_wmap_inpainting}
\end{figure}

A simple numerical experiment is shown in Fig.~\ref{cmb_wmap_inpainting}, starting with the full-sky CMB map provided by the WMAP team.
% and available at http://map.gsfc.nasa.gov/. 
This CMB map was partially masked to discard pixels where the level of contamination by residual foregrounds is expected to be the highest. Applying the described inpainting algorithm, making use of the sparsity of the representation of the CMB in the spherical harmonics domain, leads to the map shown on the right of Fig.~\ref{cmb_wmap_inpainting}: the stationarity of the CMB field appears to have been restored and the masked region is completely undetectable to the eye. Fig.~\ref{cmb_scale_wmap_inpainting} shows the wavelet decomposition of the inpainted map, allowing further positive visual assessment of the quality of the proposed method, as again the masked regions are undetectable at all scales. It was shown in \citet{inpainting:abrial06,starck:abrial08} that inpainting the CMB map is an interesting approach for analyzing it, especially for non-Gaussianity studies and power spectrum estimation.


\newpage
\section{Guided Numerical Experiments}
\subsection{MR/S Toolbox}
The guided numerical experiments of this chapter are conducted using IDL with the HEALPix and MR/S toolboxes. 

The IDL software (\texttt{http://www.idl-envi.com}) is analogous to Matlab and is very widely used in astrophysics and in medical imaging.  
HEALPix is available at 
\texttt{http://sourceforge.net/projects/healpix}.
MR/S is a collection of IDL files that implements all the multiscale transforms described in 
this chapter, using the HEALPix pixelization.
The library is available via the book's web site: 

{\centerline{\texttt{http://www.SparseSignalRecipes.info}}}

% MR/S is available for download at:  \\
% {\centerline{\texttt{http://jstarck.free.fr/mrs.html}}}

\subsection{Undecimated Wavelet Transform on the Sphere}
The code to generate the undecimated wavelet transform of the Mars 
image of Fig.~\ref{Figure:UWTS} is as follows. 
\begin{verbatim}
; read the data 
m = mrs_read('mars_topo_mola_hpx_128.fits')

; compute the undecimated wavelet transform with 5 scales
mrs_wttrans, m, w, nbrscale=5

; Display and write the figures to the disk
tvs, m, tit='Mars topographic map', png='fig_mars.png'
tvs, w.coef[*,0], tit='Mars topographic map: scale 1', $
                   png='fig_mars_scale1.png' 
tvs, w.coef[*,1], tit='Mars topographic map: scale 2', $
                   png='fig_mars_scale2.png'
tvs, w.coef[*,2], tit='Mars topographic map: scale 3', $
                   png='fig_mars_scale3.png'
tvs, w.coef[*,3], tit='Mars topographic map: scale 4', $
                   png='fig_mars_scale4.png'
tvs, w.coef[*,4], tit='Mars topographic map: scale 5', $
                   png='fig_mars_scale5.png'
\end{verbatim}

\subsection{Pyramidal Wavelet Transform on the Sphere}
The code to generate the pyramidal wavelet transform of the Earth 
image of Fig.~\ref{Figure:PWTS} is as follows.  
\index{sphere!pyramidal wavelet}

\begin{verbatim}
; read the data 
e = mrs_read('earth_healpix_128.fits')

; compute the pyramidal wavelet transform with 5 scales
mrs_pwttrans, e, we, nbrscale=5

; Display and write the figures to the disk
mrs_wttv, we, write='fig_earth'
\end{verbatim}


\subsubsection{Denoising}
In the denoising experiment of Fig.~\ref{Figure:sync_filter} and Fig.~\ref{Figure:sync_cbf_filter}, 
we added Gaussian noise to the simulated astronomical synchrotron emission map.
The code to generate the figures is as follows.
\begin{verbatim}
; read the image
s = rims('sync_res128.fits')

; add Gaussian noise
n = randomn(seed, N_ELEMENTS(s))
SigmaNoise = 5.
s1 = s + n * SigmaNoise
 
; Denoising using the undecimated WT on the sphere at 4sigma
Nsig = 4.
mrs_wtfilter, s1, fwt4, nsigma=Nsig, nbrscale=5, SigmaNoise=SigmaNoise

; Denoising using the curvelet transform
mrs_curfilter, s1, fct4, nsigma=Nsig, nbrscale=5, SigmaNoise=SigmaNoise

; Denoising using the combined denoising
mrs_cbfilter, s1, fcb4, nsigma=Nsig, nbrscale=5, SigmaNoise=SigmaNoise

; Display and write the figure to the disk
tvs, s, /log, tit='Synchrotron emission', png='fig_sync.png'
tvs, s1 > 30, /log, tit='Synchrotron emission + noise', $
                   png='fig_sync_noise5.png'

tvs, fwt4 > 30, /log, title='Undecimated Wavelet Denoising (4sigma)', $
                   png='fig_sync_wtfilter5.png'
tvs, fct4 > 30, /log, title='Curvelet Denoising (4sigma)',  $
                   png='fig_sync_curfilter5.png'
tvs, fcb4 > 30, /log, title='Combined Filtering (4sigma)',  $
                   png='fig_sync_cbfilter5.png'

tvs, s1 - fwt4, title='Residual undecimated Wavelet Denoising (4sigma)', $
                   png='fig_sync_resi_wtfilter5.png'
tvs, s1 - fct4, title='Residual curvelet Denoising (4sigma)',  $
                   png='fig_sync_resi_curfilter5.png'
tvs, s1 - fcb4, title='Residual combined Filtering (4sigma)',  $
                   png='fig_sync_resi_cbfilter5.png'

; Print the standard deviation (error) between the true image 
; and the denoised images
print, 'Err WT = ', sigma(s-fwt4), ', Err Cur = ', sigma(s-fct4), $
       ', Err Comb = ', sigma(s-fcb4)
\end{verbatim}

We find the outcome here to be:

\begin{verbatim}
==>  Err WT = 1.25,  Err Cur  = 1.07,  Err Combined = 0.86
\end{verbatim}


\section{Summary}
In this chapter, multiscale transforms on the sphere were presented: the wavelet transform, the ridgelet transform and the curvelet transform. The described transforms have many desirable properties, such as invertibility. With the wealth of these multiscale analysis tools and the associated discrete analysis and synthesis transforms on the sphere at hand, several problems with spherical data can be attacked effectively.  
Applications to denoising, image decomposition and inpainting on the sphere were given, and a few applications to challenging data analysis problems in physics and astrophysics were reported. These tools are expected to be valuable in many other applications. To disseminate them, a toolbox implementing both the transforms and the denoising, separation and inpainting algorithms is freely available to interested users.
{"text": "\\section{Noise}\n\n\\begin{frame}\n  \\frametitle{What is noise?}\n  \\begin{itemize}\n  \\item Models unpredictable behavior by adding a random variable.\n  \\item A random variable is a variable whose values are associated to a random phenomenon.\n  \\end{itemize}\n\\end{frame}\n\n\\section{Modeling}\n\n\\begin{frame}\n  \\frametitle{Modeling}\n  \\begin{columns}\n  \\column{0.5\\textwidth} \n      \\begin{itemize}\n\t\\item Mathematical models are an abstraction of reality.\n\t\\item There are three types of responses. \n\t\\begin{itemize}\n\t\t\\item Type I\\\\\n\t\t$\\ y'=\\alpha y$\n\t\t\\item Type II\\\\\n\t\t$\\ y' = \\frac{\\alpha y}{y+ L}$ \n\t\t\\item Type III\\\\\n\t\t$\\ y'= \\frac {\\alpha y^2}{\\beta + y^2}$\n\t\\end{itemize}\n       \\end{itemize}\n  \\column{0.5\\textwidth}\n      \\begin {figure}\n        \\centerline{\\includegraphics[scale = 0.7]{Responses}}\n       \\end{figure}\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{Modeling}\n\\begin{columns}[t]\n\\column{0.5\\textwidth}\nDeterministic Models\n\\begin{itemize}\n\\item Every set of variable states is determined by parameters in the model.\n\\item Always produce the same output from a given starting condition or initial state.\n\\end{itemize}\n\n\\column{0.5\\textwidth}\nStochastic Models\n\\begin{itemize}\n\\item Variable states are not described by unique values, but rather by probability distributions.\n\\end{itemize}\n\\end{columns}\n\n\\bigskip\n\\centering\n Stochastic differential equations (SDEs) are systems described by differential equations influenced by random noise.\n\\end{frame}\n\n\\section{Random Walk}\n\\begin{frame}\n  \\frametitle{Random Walk}\n  \\begin{itemize}\n  \\item A random walk is a mathematical formalization of a path that consists of a succession of random steps.\n  \\end{itemize}\n  \\centering\n  \\includegraphics[scale=0.67]{RandomWalk}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Random Walk}\n  \\centering\n%  \\graphicspath{C:\\Users\\Valerie Ann\\Documents\\GitHub\\REU15\\Presentations\\Midterm}\n  \\includegraphics[scale=0.67]{RWPath}\n\\end{frame}\n", "meta": {"hexsha": "4dae453a7c1e6f1732647cd5d17ab8690de7d6cc", "size": 1948, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Presentations/Midterm/Introduction.tex", "max_stars_repo_name": "SUNY-SDE-2015/REU15", "max_stars_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Presentations/Midterm/Introduction.tex", "max_issues_repo_name": "SUNY-SDE-2015/REU15", "max_issues_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-06-04T17:55:32.000Z", "max_issues_repo_issues_event_max_datetime": "2015-07-09T15:38:17.000Z", "max_forks_repo_path": "Presentations/Midterm/Introduction.tex", "max_forks_repo_name": "SUNY-SDE-2015/REU15", "max_forks_repo_head_hexsha": "a54ace642d8696250c7fa0bf574b16a931ec91c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3243243243, "max_line_length": 117, "alphanum_fraction": 0.7258726899, "num_tokens": 585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5736093996400722}}
{"text": "\\documentclass{article}\r\n\\usepackage{algpseudocode}\r\n\\usepackage{algorithm}\r\n\\usepackage[english]{babel}\r\n\\usepackage[utf8]{inputenc}\r\n\\usepackage{fancyhdr}\r\n\\usepackage{listings}\r\n\\usepackage{graphicx}\r\n\\usepackage[section]{placeins}\r\n\\usepackage{titlesec}\r\n\\usepackage{amssymb}\r\n\\usepackage{amsmath}\r\n\\usepackage[margin=1in]{geometry}\r\n\\pagestyle{fancy}\r\n\\fancyhf{}\r\n\\rhead{Rahul Shah}\r\n\\lhead{Machine Learning: Nanodegree}\r\n\\rfoot{\\thepage}\r\n\\title{Supervised Learning: Perceptron Algorithm}\r\n\\begin{document}\r\n\\maketitle\r\n\\section{Motivation}\r\nA college's acceptances are a yes/no decision. The factors the college considers are:\\\\\r\n$x_1:=$ test score / 10\\\\\r\n$x_2:=$ grades / 10\\\\\r\n\r\nCan we predict whether a new $(x_1, x_2)$ gets accepted or rejected?\r\n\r\n\\section{Classification Problem}\r\nThis is a classification problem, so our goal is not to fit a function to the data, but instead fit a function such that it splits the data into discrete, already defined categories. In the case of the college problem, we plot the points on an $x_1$ vs $x_2$ plot, and come up with the equation of a line $f(x_1.x_2) = \\beta_2 x_2 + \\beta_1 x_1 + \\beta_0$ that splits the data in half. Then, for any given $(x_1,x_2)$, our prediction equation is: \\\\\r\n$\\hat{y} = \r\n\\begin{cases} \r\n0 & f(x_1, x_2) \\le 0 \\\\\r\n1 & f(x_1, x_2) > 0 \r\n\\end{cases}\r\n$\\\\\r\nGenerally, if each point on our plot has n coordinates, our dividers are going to be n-planes, or an n-1 subspace of $\\mathbb{R}^n$, and our $f(x_1, x_2) = \\beta^T X + \\beta_0$, where both $\\beta$ and $X$ are column vectors containing the respective elements. Just like linear regression, we can also bend our plane in polynomial shapes by adding new terms of the form $x_{i}^n$\r\n\\section{Perceptron}\r\n%This is best done with a picture\r\n\r\n\\subsection{Another Way}\r\n% again, picture\r\n\\subsection{Logical Operators}\r\nBasically, if we plot points at $(0, 0), (0, 1), (1, 0), (1, 1)$, we need to come up with a dividing line or whatever such that all points above it, when plugged in, return true, and all other points return false. Try plugging in these points into equations of the form $f(x_1.x_2) = \\beta_2 x_2 + \\beta_1 x_1 + \\beta_0$ that follow the rules listed for each logical operator and compare the output with the logical operator it's supposed to represent:\\\\\r\nAND: $\\beta_0 < 0, \\beta_1 + \\beta_0 < 0, \\beta_2 + \\beta_0 < 0, \\beta_2 + \\beta_1 + \\beta_0 > 0$ \\\\\r\nOR: $\\beta_0 < 0, \\beta_1 + \\beta_0 > 0, \\beta_2 + \\beta_0 > 0, \\beta_2 + \\beta_1 + \\beta_0 > 0$ \\\\\r\nNOT: $\\beta_0 > 0, \\beta_1 + \\beta_0 < 0$\r\n\r\nYou can compose the rest of these with one another to get other functions.\r\n\r\n\\section{Finding the Constant Terms}\r\nNow that you know both the 2 variable and the general case of this algorithm, we can use what we've learned so far to figure out what the constant terms $\\beta$ are (ya know, the whole point of this algorithm).\r\n\\subsection{Perceptron Trick}\r\n\\subsubsection{Intuition}\r\nFor any misclassified point, we want to move the line closer to the misclassified points, because the line is the border between our classification 1 or 0, so crossing that line equates to changing the predicted value. Kind of like how we do the absolute trick, where we want the line to move to a place where it approximately fits with the data, we want to do a similar thing here. 
But here our line (or hyperplane) acts as a border between points, not as a representation of the data. So we want the \"broad side\" of the line to move towards the misclassified points, which is why the signs of the update are flipped: we flip all our $\\pm$ signs to $\\mp$ signs. \r\n% THIS IS A BAD EXPLANATION FOR THIS, i dont think it hits the exact reason why we do what we're doing. Why, here, do we apply the + and -s simultaneously but not for the absolute trick? This probably requires playing around with numbers.\r\n\\subsubsection{Equation}\r\nSo for each misclassified point,\\\\\r\n$f(\\beta, \\alpha, X, P) = (\\beta - \\alpha P)^TX$\\\\\r\nWhere:\\\\\r\n$\\beta :=$ the constants in vector form, including $\\beta_0$\\\\\r\n$\\alpha :=$ the learn rate\\\\\r\n$X :=$ the data input (appended with a 1 for the constant term $\\beta_0$)\\\\\r\n$P :=$ the point that is misclassified $(p_1, p_2, ...)$\\\\\r\n\r\n\\subsection{Algorithm}\r\nIn general, the steps are as follows (see the Python sketch after the pseudocode):\r\n\r\n\\begin{itemize}\r\n\t\\item Start with random weights\r\n\t\\item Figure out which points are misclassified (apply the current linear eqn to the dataset and check whether each prediction matches the actual data point's classification)\r\n\t\\item For each point that is misclassified, change the weights by the point's corresponding $x$ values multiplied by the learn rate\r\n\\end{itemize}\r\n\r\n\\begin{algorithm}\r\n\t\\caption{Perceptron Algorithm}\r\n\t\\begin{algorithmic}\r\n\t\t\\Procedure{Step}{$X$, $\\beta$, $y$, $\\alpha$}\\\\\r\n\t\t\\Comment $X$ the data input, $\\beta$ the coefficients of the linear eqn, $y$ the actual result values of the predictor output, $\\alpha$ the learning rate\r\n\t\t\r\n\t\t\\State $\\hat{y} \\gets 1$ if $\\beta^TX > 0$ else $0$\r\n\t\t\r\n\t\t\\For { each point in (X, y)}\\\\\\\\\r\n\t\t\\Comment Remember, when we read in the data, it's read in as ($<$input$>$, $<$output$>$), where $X = $ input (n col matrix) and $y = $ output (vector)\\\\\r\n\t\t\r\n\t\t\\If {$\\hat{y} == 0$ and the corresponding y is different}:\r\n\t\t\t\\State $\\beta += \\alpha X$\r\n\t\t\\EndIf\\\\\r\n\t\t\r\n\t\t\\If {$\\hat{y} == 1$ and the corresponding y is different}:\r\n\t\t\t\\State $\\beta -= \\alpha X$\r\n\t\t\\EndIf\\\\\r\n\t\t\r\n\t\t\\EndFor\r\n\t\t\r\n\t\t\r\n\t\t\\EndProcedure\r\n\t\\end{algorithmic}\r\n\\end{algorithm}\r\n\r\n\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "02024fa2f9e2911acd2a1e17b39eda50fd779b29", "size": 5451, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Machine_Learning/Perceptron Algorithm/Perceptron Algorithm.tex", "max_stars_repo_name": "rahulsanjay18/Notes", "max_stars_repo_head_hexsha": "5306c2ba9584221b1738a1d12a200294413b0558", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Machine_Learning/Perceptron Algorithm/Perceptron Algorithm.tex", "max_issues_repo_name": "rahulsanjay18/Notes", "max_issues_repo_head_hexsha": "5306c2ba9584221b1738a1d12a200294413b0558", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Machine_Learning/Perceptron Algorithm/Perceptron Algorithm.tex", "max_forks_repo_name": "rahulsanjay18/Notes", "max_forks_repo_head_hexsha": "5306c2ba9584221b1738a1d12a200294413b0558", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.4134615385, "max_line_length": 627, "alphanum_fraction": 0.7068427811, "num_tokens": 1611, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.573609395249557}}
{"text": "%!TEX root = ../thesis.tex\n%*******************************************************************************\n%*********************************** Thrid Chapter *****************************\n%*******************************************************************************\n\n\\chapter{Background Methodologies \\label{cha:background}}  %Title of the Thrid Chapter\n\n\\nomenclature[a-GP]{$\\mathcal{GP}\\left(m(\\cdot), k(\\cdot, \\cdot)\\right)$}{Gaussian Process with mean function $m(\\cdot)$ and covariance function $k(\\cdot, \\cdot)$}\n\\nomenclature[s-k]{$k$}{Principal component index.}\n\n\\ifpdf\n    \\graphicspath{{Chapter3/Figs/Raster/}{Chapter3/Figs/PDF/}{Chapter3/Figs/}}\n\\else\n    \\graphicspath{{Chapter3/Figs/Vector/}{Chapter3/Figs/}}\n\\fi\n\nIn the following chapter we consider the various statistical methodologies  upon which we build our CPACE model.\nThis chapter is structured as follows.\nFirst we focus on common FDA techniques applicable to EO data.\nWe follow this by discussing smoothing methodologies which are use in the FDA techniques. \nFinally we discuss Gaussian processes that are used in the CPACE framework to model correlation. \n\n\\section{Functional Principal Components Analysis \\label{sec:fpca}}\nA commonly used technique in multivariate statistics is Principal Components Analysis (PCA), \\citep{wold_principal_1987}. \nThe technique finds dominant directions of variation and helps to achieve dimensionality reduction.\nThis offers a parsimonious way to view data which is driven by the data themselves.\nThe equivalent technique when the data are functional in nature is known as Functional Principal Components Analysis (FPCA), \\citep{ramsay_functional_2010}.\nThe basic concepts were studied in the mid twentieth century.\nThe work of \\citeauthor{karhunen_zur_1946} and independently \\citeauthor{loeve_fonctions_1946} paved the basic foundations of the technique in the FDA literature, \\citep{karhunen_zur_1946, loeve_fonctions_1946}.\nThe FPCA technique essentially stems from representing the random function $\\mathcal{X}(t)$ as an infinite linear combination of orthogonal functions.\nSuch a representation is now known as the Karhunen-Lo\\`{e}ve theorem after its discoverers.\n\n\\subsection{Formulation}\nThe formulation of FPCA begins by assuming that $\\mathcal{X}(t)$, $t \\in \\mathcal{T}$ is a square integrable stochastic process over some domain $\\mathcal{T}$.\nLet the mean and the covariance of the stochastic process $\\mathcal{X}$ be denoted by $\\mu(t)$ and $G\\left(s, t\\right)$ respectively, where:\n\\begin{align}\n\t\\mu(t) &= \\E \\left(\\mathcal{X}(t)\\right) \\label{eqn:mean_fn}\\\\\n\tG\\left(s, t\\right) &= \\text{Cov}\\left(\\mathcal{X}(s),  \\mathcal{X}(t)\\right) \\label{eqn:cov_fn}\n\\end{align}\n\nAssociated with the covariance surface $G\\left(s, t\\right)$ we have the linear operator $T_G$ defined by:\n\\begin{align}\n\tT_G &: L^2\\left(\\mathcal{T}\\right) \\to L^2\\left(\\mathcal{T}\\right) \\\\\n\tT_G&:  f \\mapsto T_G f = \\int_{\\mathcal{T}} G\\left(s, \\cdot \\right) f(s) ds \\label{eqn:t_op}\n\\end{align}\n\nAs $T_G$ is a linear operator we can consider its eigenvalues and eigenfunctions which we will denote by $\\lambda_k$ and $\\phi_k$ respectively (following convention set out in \\citep{yao_functional_2005}) for $k=1,2,\\cdots$.\nThese are defined as the solutions to the Fredholm integral equations of the second kind, \\citep{yao_functional_2005}: \n\n\\begin{equation}\\label{eqn:fredholm}\n\t\\langle G(\\cdot, t), 
\\phi_k \\rangle = \\lambda_k \\phi_k(t)\n\\end{equation}\nwhere $\\langle f, g \\rangle = \\int_{\\mathcal{T}} f(s) g(s) ds$ is the inner product in the space $L^2(\\mathcal{T})$. \nThen by the Karhunen-Lo\\`{e}ve theorem one can express the centred process through the eigenvalues and eigenfunctions of the linear operator associated with the covariance surface, \\citep{karhunen_zur_1946, loeve_fonctions_1946}.\nThat is:\n\\begin{equation}\\label{eqn:fpca}\n\t\\mathcal{X}(t) - \\mu(t) = \\sum_{k=1}^{\\infty}\\xi_k \\phi_k(t)\n\\end{equation}\nwhere $\\xi_k$ is the $k^\\text{th}$ principal component associated with the eigenfunction $\\phi_k$.\nThe Karhunen-Lo\\`{e}ve theorem assures us this $L^2$ convergence is uniform in $t$.\nThe principal components are given by the following: \n\\begin{equation}\\label{eqn:principal_comp}\n\t\\xi_k = \\langle \\mathcal{X} - \\mu, \\phi_k \\rangle \n\\end{equation}\n\nFurther to this decomposition, the Karhunen-Lo\\`{e}ve theorem gives that the principal components are uncorrelated with each other, centred, and have variance equal to their associated eigenvalue, \\citep{karhunen_zur_1946, loeve_fonctions_1946}.\nThat is:\n\\begin{align}\n\t\\E\\left(\\xi_k\\right) &= 0 \\\\\n\t\\text{Var}\\left(\\xi_k\\right) &= \\lambda_k \\\\\n\t\\E\\left(\\xi_k \\xi_l\\right) &= 0,~\\text{for}~ k \\ne l \\label{eqn:principal_comp_uncorr}\n\\end{align}\n\n\\subsection{Interpretation}\nAs with multivariate principal components analysis, the interpretation of the eigenvectors is often useful in exploratory analysis of data.\nThe functional principal components analysis is of a similar form to the multivariate case, and as such the same interpretation of the eigenfunctions is often employed.\nThe first eigenfunction $\\phi_1(t)$ encapsulates the dominant mode of variation in $\\mathcal{X}(t)$ by construction, since:\n\\begin{equation}\\label{eqn:first_comp}\n\t\\phi_1 = \\argmax_{\\lVert \\phi \\rVert=1} \\text{Var}\\left( \\langle  \\mathcal{X} - \\mu, \\phi \\rangle  \\right)\n\\end{equation}\nSimilarly, the $k^\\text{th}$ eigenfunction is the dominant mode of variation which is orthogonal to the preceding $k-1$ components.\nTherefore exploring the first few eigenfunctions often gives a parsimonious way to view the variation in the data.\nAs with multivariate PCA, the structure of the eigenfunctions often replicates some observed physical process.\nAs such, the FPCA decomposition is widely used as a tool for data exploration. 
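When curves are observed on a common dense grid, the decomposition above can be approximated empirically by an eigendecomposition of the sample covariance matrix. A minimal Python sketch (illustrative; the helper name \texttt{fpca} and the dense, regular grid with spacing \texttt{dt} are our assumptions, and the sparse case is the subject of the PACE methodology below):

\begin{verbatim}
# Minimal sketch: empirical FPCA for curves on a common dense grid.
# Rows of Y are realisations of X(t) at the grid points; dt is the spacing.
import numpy as np

def fpca(Y, dt, K):
    mu = Y.mean(axis=0)                 # estimate of mu(t)
    Yc = Y - mu                         # centred curves
    G = Yc.T @ Yc / Y.shape[0]          # covariance surface G(s, t)
    evals, evecs = np.linalg.eigh(G)    # ascending; reverse to descending
    evals, evecs = evals[::-1], evecs[:, ::-1]
    lam = evals[:K] * dt                # eigenvalues of the operator T_G
    phi = evecs[:, :K] / np.sqrt(dt)    # eigenfunctions with unit L2 norm
    xi = Yc @ phi * dt                  # scores <X - mu, phi_k>
    return mu, lam, phi, xi
\end{verbatim}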
\n\nIn addition to this, we can use the fact that subsequent eigenfunctions capture less and less of the variation of the data as a form of dimensionality reduction, like PCA, \\citep{wold_principal_1987}.\nIn this sense we can consider truncating the full representation given in Equation~\\eqref{eqn:fpca} to the $K$ leading eigenfunctions, which gives an approximation to the full process, denoted by $\\mathcal{X}^K(t)$, where:\n\\begin{equation}\\label{eqn:fpca_trun}\n\t\\mathcal{X}^K(t) = \\mu(t) + \\sum_{k=1}^{K}\\xi_k \\phi_k(t)\n\\end{equation}\nThe approximation of $\\mathcal{X}$ by $\\mathcal{X}^K$ converges as:\n\\begin{equation}\\label{eqn:fpca_trun_conv}\n\t\\E \\left( \\langle  \\mathcal{X} - \\mathcal{X}^K, \\mathcal{X} - \\mathcal{X}^K \\rangle \\right) = \\sum_{k > K} \\lambda_k \\to 0~\\text{as}~K \\to \\infty\n\\end{equation}\n\nAs such, using the leading principal components for reconstruction has the effect of capturing the main modes of variation of the data and ignoring smaller modes of variation.\nChoosing the number of principal components is then up to the practitioner, as in multivariate PCA, \\citep{wold_principal_1987}.\n \\citeauthor{ramsay_functional_2010} discuss at length the comparison of PCA to FPCA, including commentary on the optimal choice of the number of principal components, \\cite[Chapter~8]{ramsay_functional_2010}.\n The practical implementation of FPCA then involves estimating various components.\n In particular: the mean function $\\mu(t)$; the covariance surface $G\\left(s,t \\right)$; the $K$ eigenfunctions and eigenvalues $\\phi_k(t)$ and $\\lambda_k$ respectively; and the principal components $\\xi_k$ for each realisation of the process $\\mathcal{X}$ we observe.\n\n\\section{Principal Analysis Through Conditional Expectation \\label{sec:pace}}\nWe will assume for now that we have a sufficient method for estimating the mean function and covariance surface, which we will denote by $\\hat{\\mu}(t)$ and $\\hat{G}\\left(s, t \\right)$ respectively.\nWe discuss in more detail the estimation of these components in Section~\\ref{sec:splines}.\nPrior to the introduction of the Principal Analysis through Conditional Expectation (PACE) methodology in \\citep{yao_functional_2005}, FPCA decomposition was restricted due to the need to approximate the integrals in Equation~\\eqref{eqn:principal_comp}.\nAs such it was often a requirement that the functional data were observed on a dense regular grid, which meant that the principal components could be reliably estimated through some numerical integration scheme, \\citep[Chapter~8]{ramsay_functional_2010}.\nThis very much restricted the application of the FPCA technique; however, \\citeauthor{yao_functional_2005} introduced the PACE method to overcome this obstacle using conditional expectations for sparsely observed functional data, \\citep{yao_functional_2005}.\nAt the same time the technique of \\citep{yao_functional_2005} accommodates observation error. 
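Numerically, the conditional expectation amounts to a best linear prediction of each subject's scores from its own sparse, noisy observations; this is formalised below, but a minimal Python sketch may help fix ideas (illustrative; \texttt{pace\_scores} is a hypothetical helper and we assume the truncated covariance $\Phi_i \Lambda \Phi_i^\top + \sigma^2_\varepsilon I$ in place of the smoothed surface used below):

\begin{verbatim}
# Minimal sketch of the PACE conditional expectation (formalised below):
# xi_hat = Lambda Phi_i^T Sigma_Yi^{-1} (Y_i - mu_i).
import numpy as np

def pace_scores(y_i, mu_i, Phi_i, lam, sigma2_eps):
    """y_i: J_i noisy observations of one subject at its time points;
    mu_i: mean there; Phi_i: J_i x K eigenfunctions there;
    lam: K leading eigenvalues; sigma2_eps: noise variance."""
    Sigma_Yi = Phi_i @ np.diag(lam) @ Phi_i.T \
               + sigma2_eps * np.eye(len(y_i))
    return lam * (Phi_i.T @ np.linalg.solve(Sigma_Yi, y_i - mu_i))
\end{verbatim}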
\n\nTraditionally, Equation~\\eqref{eqn:principal_comp}, used for estimating the principal component scores of the $i^\\text{th}$ realisation, is approximated by a sum.\nSubstituting $y_{ij}$ for $\\mathcal{X}(t_{ij})$, $\\hat{\\mu}(t_{ij})$ for $\\mu(t_{ij})$, and $\\hat{\\phi}_k(t_{ij})$ for $\\phi_k(t_{ij})$ we obtain the estimate $\\hat{\\xi}_{ik}^{S} = \\sum_{j=1}^{J_i}\\left(y_{ij} - \\hat{\\mu}(t_{ij})\\right)\\hat{\\phi}_k(t_{ij})\\left(t_{ij} - t_{i(j-1)}\\right)$, \\citep{yao_functional_2005}, where $y_{ij}$ is as described in Equation~\\eqref{eqn:fd_temporal} and setting $t_{i0}=0$.\nHowever, such an estimate breaks down when the observations are sparse. Similarly, the approximation will be biased when the error process from Equation~\\eqref{eqn:fd_temporal}, $\\varepsilon_{ij}$, is non-zero.\n\\citeauthor{yao_functional_2005} overcome this by first assuming that the model is as follows, \\citep{yao_functional_2005}:\n\n\\begin{align}\n\ty_{ij} &= \\chi_{i}(t_{ij}) + \\varepsilon_{ij} \\\\\n\t&= \\mu(t_{ij}) + \\sum_{k=1}^{\\infty} \\xi_{ik} \\phi_k(t_{ij}) + \\varepsilon_{ij} \\label{eqn:fd_temporal_fpca}\n\\end{align}\nwith $\\varepsilon_{ij}$ being jointly Gaussian with $\\xi_{ik}$.\nWe also require that the noise process satisfies:\n\\begin{align}\n\t\\E\\left(\\varepsilon_{ij}\\right) &= 0 \\\\\n\t\\text{Var}\\left( \\varepsilon_{ij} \\right) &= \\sigma_\\varepsilon^2\n\\end{align}\n\nThe number of measurements of the $i^\\text{th}$ subject is considered random, which reflects sparse functional data.\nSuch a description follows naturally from our dataset description, given in Equation~\\eqref{eqn:fd_temporal}, by using the FPCA decomposition structure of $\\mathcal{X}$ as discussed in Section~\\ref{sec:fpca}.\nFollowing \\citep{yao_functional_2005} we define the following vector notation:\n\\begin{align}\n\t\\vesub{Y}{i} &= \\left(y_{i1}, y_{i2}, \\cdots, y_{iJ_i}\\right)^\\top \\label{eqn:yvec}\\\\\n\t\\vesub{\\phi}{ik} &= \\left(\\phi_{k}(t_{i1}), \\phi_{k}(t_{i2}), \\cdots, \\phi_k(t_{iJ_i}) \\right)^\\top \\label{eqn:phivec} \\\\\n\t\\vesub{\\mu}{i} &= \\left(\\mu(t_{i1}), \\mu(t_{i2}), \\cdots, \\mu(t_{iJ_i})\\right)^\\top \\label{eqn:muvec}\\\\\n\t\\vesub{t}{i} &= \\left( t_{i1}, t_{i2}, \\cdots, t_{iJ_i} \\right)^\\top \\label{eqn:tvec}\n\\end{align}\nWith such a model and assumptions, as stated in \\citep{yao_functional_2005}, the best prediction of the principal component scores for the $i^\\text{th}$ subject is given by:\n\\begin{equation}\\label{eqn:fpc_best}\n\t\\tilde{\\xi}_{ik} = \\E\\left(\\xi_{ik} | \\vesub{Y}{i}, \\vesub{t}{i} \\right) = \\lambda_k \\vess{\\phi}{ik}{\\top} \\vess{\\Sigma}{\\vesub{Y}{i}}{-1} \\left( \\vesub{Y}{i} - \\vesub{\\mu}{i} \\right)\n\\end{equation}\nwhere $\\vesub{\\Sigma}{\\vesub{Y}{i}} = \\text{Cov}\\left(\\vesub{Y}{i}, \\vesub{Y}{i} \\right)$.\nThe estimate for the principal component score can then be found by substituting in estimates for the various components in Equation~\\eqref{eqn:fpc_best}.\nThat is:\n\\begin{equation}\\label{eqn:fpc_est}\n\t\\hat{\\xi}_{ik} = \\hat{\\E}\\left(\\xi_{ik} | \\vesub{Y}{i}, \\vesub{t}{i} \\right) = \\hat\\lambda_k \\hvess{\\phi}{ik}{\\top} \\hvess{\\Sigma}{\\vesub{Y}{i}}{-1} \\left( \\vesub{Y}{i} - \\hvesub{\\mu}{i} \\right)\n\\end{equation}\nThe covariance matrix $\\hvesub{\\Sigma}{\\vesub{Y}{i}}$ is formed with $\\left(l, m\\right)^\\text{th}$ element:\n\\begin{equation}\\label{eqn:sig_cov}\n\t\\left[\\hvesub{\\Sigma}{\\vesub{Y}{i}}\\right]_{lm} = \\hat{G}(t_{il}, t_{im}) + \\hat{\\sigma}_\\varepsilon^2 
\\delta_{lm}\n\\end{equation}\nwhere $\\hat{\\sigma}_\\varepsilon^2$ is the estimated variance of the noise process.\nThe estimation method for this is discussed in Section~\\ref{sec:splines}.\n\\citeauthor{yao_functional_2005} also provide asymptotic properties of such an estimator along with asymptotic confidence bands where the mean and covariance surfaces are estimated with local linear smoothers, \\citep{fan_study_1996}.\n\nThe conditional expectation technique described above from \\citep{yao_functional_2005} alleviates the issue of poor integral approximation from sparsely observed data, provided the estimated covariance surface is a relatively good fit to the true covariance surface.\nThis is a much weaker condition, as it allows one to pool data from different observed subjects to estimate such a surface; the requirement of dense data per subject is thus relaxed to having dense data across the collection of all subjects.\nWe discuss a particular method for estimating such surfaces in Section~\\ref{sec:splines}.\n\n\\section{Penalised Regression Splines \\label{sec:splines}}\n Smoothing models underpin much of FDA.\n FDA uses the smoothness of observations over a continuous domain to help inform and model observed data, \\citep{ramsay_functional_2010}.\n Typically, as described in Section~\\ref{sec:fr}, data is only observed discretely.\n Therefore with most FDA methodology there must be a conversion between discretely observed data and the continuous functional variable that generates it.\n This is particularly the case for our EO data, since we have discrete observations specified by our data model given in Equation~\\eqref{eqn:observed_data} which we assume is generated by observations of continuous functions given by our models in Equation~\\eqref{eqn:fd_temporal}.\n As such, many models for obtaining such a smooth of the data have been studied, such as kernel smoothing, polynomial regression, and local linear smoothing, \\citep[Chapter~4]{ramsay_functional_2010}. \n In this section we consider the well studied technique of obtaining smooths of discrete data through penalised regression splines, \\citep{ruppert_semiparametric_2003}.\n We will use such a method to estimate the mean and covariance surfaces present in the PACE methodology as described in Section~\\ref{sec:pace}. \n We first describe the components that form the foundations of a regression spline, the spline basis.\n \n \\subsection{Basis splines  \\label{ssec:basis_splines}}\n One of the components of a penalised regression spline is the basis functions used in the regression.\n As the name suggests, regression splines use spline functions as the regression basis.\n A spline function of order $d$, which is well documented in the monograph of \\citeauthor{de_boor_practical_2001}, is a piecewise polynomial function of degree $d-1$, \\cite{de_boor_practical_2001}. 
In the case of a spline function of order $d$, $S: \\mathcal{T} \\to \\mathbb{R}$, over a univariate domain $\\mathcal{T} = \\left[a, b\\right] \\subset \\mathbb{R}$ we have: \n \\begin{equation}\n \tS: t \\mapsto S(t) = \\begin{cases}\n \t\tP_0(t)~\\text{if}~ \\tau_0 < t  \\leq \\tau_1,\\\\\n \t\tP_1(t)~\\text{if}~ \\tau_1 < t  \\leq \\tau_2,\\\\\n \t\t\\vdots \\\\\n \t\tP_{m-1}(t)~\\text{if}~ \\tau_{m-1} < t  \\leq \\tau_m,\\\\\n \t\\end{cases}\n \\end{equation}\nwhere $P_i: \\left[\\tau_i, \\tau_{i+1}\\right] \\to \\mathbb{R}$ are polynomial functions of degree $d-1$.\nThe vector of points $\\ve{\\tau} = \\left(\\tau_0, \\tau_1, \\cdots, \\tau_m \\right)$ is known as the knot vector for the spline and must satisfy $a=\\tau_0 < \\tau_1 < \\cdots < \\tau_m = b$.\nBy requiring that adjacent polynomial pieces share derivative values up to a given order, we can ensure a chosen degree of smoothness across the knot points, and hence over the whole spline function.\nWe specify the continuity at each point in our knot vector by the continuity vector $\\ve{r}=\\left( r_0,\\cdots, r_m \\right)^\\top$, where $r_i$ specifies that $P_{i-1}$ and $P_{i}$ share common derivative values at the point $\\tau_i$ for derivatives up to order $r_i$.\nThe spline type can be specified completely by specifying the knot locations and the continuity vector, \\citep{de_boor_practical_2001}.\nIn fact, one can extend our definition of the knot vector to incorporate both the knot and continuity vectors into one.\nThis is known as the extended knot vector, which will completely specify the spline type, \\citep{de_boor_practical_2001}.\nWe define the extended knot vector as the vector of knot points which repeats the $i^\\text{th}$ knot point exactly $d - r_i$ times. That is:\n\\begin{equation*}\n\t(\\tau_0,\\cdots,\\tau_0, \\tau_1, \\cdots, \\tau_1 ,\\cdots, \\tau_{m-1},\\cdots, \\tau_{m-1}, \\tau_m, \\cdots, \\tau_m)\n\\end{equation*}\nWe denote the spline functions of order $d$ with extended knot vector $\\ve{\\tau}$ by $S_{d, \\ve{\\tau}}$. 
\n\nBasis splines are more commonly referred to as B-splines, \\citep{knott_interpolating_2000}.\nB-splines are basis functions for splines of the same order defined over the same knots.\nThey are typically defined recursively, \\citep{knott_interpolating_2000, de_boor_practical_2001}.\nThe classic algorithm for the recursive construction is known as the Cox-de Boor recursion formula, \\citep{de_boor_practical_2001}, and is given as follows.\nGiven a knot vector $(\\tau_0,\\cdots,\\tau_0, \\tau_1,\\cdots,  \\tau_1, \\cdots, \\tau_{m-1},\\cdots, \\tau_{m-1}, \\tau_m,\\cdots, \\tau_m)^\\top$ the B-splines of order $1$ are given by:\n\\begin{equation}\t\\label{eqn:bspline1}\n\tB_{i, 1}\\left( t \\right) = \\begin{cases} 1,~\\text{for}~\\tau_i \\le t < \\tau_{i+1} \\\\ 0,~\\text{otherwise.} \\end{cases}\n\\end{equation}\n\nThe higher order B-splines are defined by recursion as:\n\\begin{equation}\\label{eqn:bspline_high}\n\tB_{i, q+1}(t) = w_{i, q}(t) B_{i, q}(t) + \\left[ 1 - w_{i+1, q}(t) \\right]B_{i+1, q}(t)\n\\end{equation}\nwhere $w_{i, q}$ is a weighting for the $i^\\text{th}$ B-spline of order $q$ given by:\n\\begin{equation}\n\tw_{i, q}(t) = \\begin{cases} \\frac{t-\\tau_i}{\\tau_{i+q} - \\tau_i} ,~\\text{for}~\\tau_{i+q} \\ne \\tau_i \\\\ 0,~\\text{otherwise.} \\end{cases}\n\t\\label{eqn:weighting}\n\\end{equation}\nA B-spline basis system of size $Q$ can then be constructed by choosing the extended knot vector $\\ve{\\tau}$ and specifying the order, $d$, of the B-spline functions, and is given by the collection:\n\\begin{equation}\n\t\\{ B_{d, q}^{\\ve{\\tau}}(t)\\}_{q=1}^{Q}\n\t\\label{eqn:bspline_basis}\n\\end{equation}\nwhere $Q$ is the number of basis functions to use in the system, $\\ve{\\tau}$ is the extended knot vector, and $B_{d, q}^{\\ve{\\tau}}$ is the $q^\\text{th}$ B-spline of order $d$ defined by Equation~\\eqref{eqn:bspline_high} for our knot vector $\\ve{\\tau}$.\n\n\\subsection{Regression splines \\label{ssec:spline_reg}}\nAs discussed in Section~\\ref{sec:pace}, the PACE methodology requires estimation of both the mean function, $\\mu(t)$, and the covariance surface $G\\left(s, t\\right)$. \nEstimating such functions is a problem due to their infinite dimensional nature.\nA well studied and effective method for representing such functions is the use of a basis function expansion, \\cite{ramsay_functional_2010}.\nThat is, representing the target surface using a linear combination of known basis functions.\nIn this work we will utilise the B-spline basis functions as discussed in Section~\\ref{ssec:basis_splines}.\nThe B-spline system is exceptionally popular due to its ease of computation and its ability to reconstruct many surfaces, \\citep{de_boor_practical_2001}.\nSuch ease of computation makes it feasible not only to create large basis systems but also to simplify many fitting procedures, as we can re-evaluate the basis system at arbitrary points with ease.\nSuch properties are very useful when using such a basis for regression models. 
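The recursion above translates directly into code. A minimal, unoptimised Python sketch of Equations~\eqref{eqn:bspline1}--\eqref{eqn:weighting} (illustrative; \texttt{tau} is the extended knot vector):

\begin{verbatim}
# Minimal sketch of the Cox-de Boor recursion for B_{i,d}(t).
def bspline(i, d, tau, t):
    if d == 1:                  # order-1 B-spline, Eq. (bspline1)
        return 1.0 if tau[i] <= t < tau[i + 1] else 0.0
    def w(j, q):                # weighting w_{j,q}(t), Eq. (weighting)
        den = tau[j + q] - tau[j]
        return (t - tau[j]) / den if den != 0 else 0.0
    return (w(i, d - 1) * bspline(i, d - 1, tau, t)
            + (1.0 - w(i + 1, d - 1)) * bspline(i + 1, d - 1, tau, t))
\end{verbatim}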
\nOther common basis systems include the Fourier, Monomial, and Polynomial basis systems.\nSee \\citep{ramsay_functional_2010} for details of these basis systems in the functional framework.\nIn the following we present the approach for estimating an arbitrary realisation of our functional random variable $\\chi_i(t)$ over the domain $\\mathcal{T}$, and discuss how we extend the same concept to a two dimensional surface over $\\mathcal{T} \\times \\mathcal{T}$.\n\nWe assume that our function can be represented using an order $d$ B-spline basis system with knot vector $\\ve{\\tau}$: \n\n\\begin{align}\n\t\\chi_i(t) &= \\sum_{q=1}^Q c_q B_{d, q}^{\\ve{\\tau}}(t) \\\\\n\t&= \\vesup{c}{\\transpose} \\vess{B}{d}{\\ve{\\tau}}(t) \\label{eqn:basis_expansion}\n\\end{align}\nwhere $\\ve{c}=\\left( c_1,\\cdots, c_Q \\right)^\\transpose$, $\\vess{B}{d}{\\ve{\\tau}}(t) = \\transpose{\\left( B_{d, 1}^{\\ve{\\tau}}(t), B_{d, 2}^{\\ve{\\tau}}(t), \\cdots, B_{d, Q}^{\\ve{\\tau}}(t) \\right)}$, and $Q$ is the dimension of the expansion.\nIf the basis functions have nice properties, such as being easy to compute, then the representation of $\\chi_i$ given by Equation~\\eqref{eqn:basis_expansion} can be extremely useful, since most problems reduce to working with only the finite dimensional vector $\\ve{c} \\in \\mathbb{R}^Q$. \n\nOur representation of $\\chi_i$ using a basis system then becomes the problem of choosing the coefficients $\\ve{c}$ using only our set of observations $\\vesub{Y}{i}$, which are observed with error.\nThe most common method for fitting a basis system to discretely observed data is to choose the coefficients $c_q$ of the expansion in Equation~\\eqref{eqn:basis_expansion} by minimising the criterion, \\cite{bjorck_numerical_1996}:\n\\begin{equation}\n\t\\text{SSE}_{\\vesub{Y}{i}}\\left( \\ve{c}\\right) = \\lVert \\vesub{Y}{i} - \\ve{B}\\ve{c} \\rVert^2\n\t\\label{eqn:sse}\n\\end{equation}\nwhere $\\ve{B} = \\left( \\vess{B}{d}{\\ve{\\tau}}(t_{i1}),\\vess{B}{d}{\\ve{\\tau}}(t_{i2}),\\cdots, \\vess{B}{d}{\\ve{\\tau}}(t_{iJ_i}) \\right)^\\transpose$ is the $J_i \\times Q$ matrix of the basis system evaluated at the observed time points corresponding to the $J_i$ length observation vector $\\vesub{Y}{i}$. The minimiser of this criterion is given by, \\cite{bjorck_numerical_1996}:\n\\begin{equation}\n\t\\hat{\\ve{c}} = \\left( \\vesup{B}{\\transpose} \\ve{B} \\right)^{-1}\\vesup{B}{\\transpose}\\vesub{Y}{i}\n\t\\label{eqn:hatc}\n\\end{equation}\n\nThe simple least squares approximation is a well studied and standard approach. See \\citep{bjorck_numerical_1996} for a thorough introduction to the concept. \nSuch a methodology is often suitable for situations where our error process $\\varepsilon(t)$ is a white noise process.\nSuch a noise process is often unrealistic; as such, a simple adjustment to the least squares criterion in Equation~\\eqref{eqn:sse} can be used to allow for correlation among the observation errors: \n\\begin{equation}\n\t\\text{SSE}_{\\vesub{Y}{i}, \\ve{W}}(\\ve{c}) = \\lVert \\ve{W}^\\frac{1}{2}\\left( \\vesub{Y}{i} - \\ve{B}\\ve{c} \\right) \\rVert^2\n\t\\label{eqn:wsse}\n\\end{equation}\nwhere $\\ve{W}$ is a weighting matrix for the observations. Ideally, the weighting matrix will be the inverse of the variance-covariance matrix of the observations. 
The minimiser of the adjusted criterion is given by, \\cite{bjorck_numerical_1996}:\n\\begin{equation}\n\t\\hat{\\ve{c}} = \\left( \\vesup{B}{\\transpose} \\ve{W} \\ve{B} \\right)^{-1}\\vesup{B}{\\transpose} \\ve{W} \\vesub{Y}{i}\n\t\\label{eqn:whatc}\n\\end{equation}\nThe least squares estimate of the curve can then be found by substituting $\\hat{\\ve{c}}$ for $\\ve{c}$ in Equation~\\eqref{eqn:basis_expansion}, \\citep{bjorck_numerical_1996}. That is:\n\\begin{equation}\\label{eqn:hat_basis_expansion}\n\t\\hat{\\chi}_i(t) = \\vesup{\\hat{c}}{\\transpose} \\vess{B}{d}{\\ve{\\tau}}(t)\n\\end{equation}\nThe selection of the knot vector is well studied, and the classical choice is to place knots at the sampling points, \\citep{de_boor_practical_2001}. \n\nAn issue with classical least squares fitting using a basis system expansion is the choice of the number of basis functions, \\cite{ramsay_functional_2010}.\nWe are constrained to choose $Q$ to be less than or equal to the number of observations, $J_i$. This is because more than $J_i$ basis functions would result in Equation~\\eqref{eqn:whatc} being ill defined, since the matrix $\\ve{B}$ would have linearly dependent columns.\nHowever, we must still choose $Q$ between $1$ and $J_i$.\nExactly which value of $Q$ to choose is unknown, and the choice results in a bias-variance trade off in the estimator, \\citep{ramsay_functional_2010}.\nA large number of basis functions reduces bias in the estimator $\\hat{\\chi}_i(t)$, but the variance of this estimator may be unacceptably high.\nConversely, a lower number of basis functions will result in high bias of the estimator but low variance.\nThe bias-variance trade off is well studied and there is a vast literature on the methodology of choosing the number of basis functions. \nHowever, there is no gold standard and often the choice is made in an ad hoc fashion, \\citep{ramsay_functional_2010}. 
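For reference, the unpenalised fits of Equations~\eqref{eqn:hatc} and~\eqref{eqn:whatc} are a few lines of linear algebra. A minimal Python sketch (illustrative helper \texttt{fit\_basis}; \texttt{Bmat} is the basis matrix $\ve{B}$ evaluated at the observed time points):

\begin{verbatim}
# Minimal sketch of the (weighted) least squares basis fit.
import numpy as np

def fit_basis(Bmat, y, W=None):
    if W is None:
        W = np.eye(len(y))       # unweighted case, Equation (hatc)
    A = Bmat.T @ W @ Bmat        # normal equations, Equation (whatc)
    return np.linalg.solve(A, Bmat.T @ W @ y)
\end{verbatim}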
\nThis issue of choosing $Q$ motivates modifying the fitting criterion which determines $\\ve{\\hat{c}}$ in Equation~\\eqref{eqn:whatc}.\n\n\\subsubsection{Penalties}\nIdeally, we want to penalise the high variance that occurs naturally when we have a large number of basis functions, while keeping bias low.\nThe naive choice of just reducing the number of basis functions, known as regression splines, fails in this respect, \\citep{ruppert_semiparametric_2003}.\nAn improved approach is to reduce the number of basis functions in conjunction with a roughness penalty, known as penalised regression splines, \\citep{ruppert_semiparametric_2003}.\nSuch an approach was first used in \\citep{osullivan_statistical_1986}, where the technique was applied to ill posed inverse problems.\n\\citep{ruppert_semiparametric_2003} discusses various other spline smoothing techniques as well as penalised regression splines.\n\nPenalised regression spline models adjust the fitting criterion in Equation~\\eqref{eqn:wsse} to, \\citep{ruppert_semiparametric_2003}:\n\\begin{equation}\\label{eqn:plss}\n\t\\text{PSSE}_{\\vesub{Y}{i}, \\ve{W}, \\omega}(\\ve{c}) = \\lVert \\vesup{W}{\\frac{1}{2}}\\left( \\vesub{Y}{i} - \\ve{B}\\ve{c} \\right) \\rVert^2 + \\omega \\vesup{c}{\\transpose} \\ve{P} \\ve{c}\n\\end{equation}\nwhere $\\ve{P}$ is formed with $(l,m)^\\text{th}$ element $\\left[\\ve{P}\\right]_{lm} = \\langle L\\left(\\vesub{B}{l}\\right), L\\left( \\vesub{B}{m} \\right) \\rangle$ and $\\omega$ is a parameter which controls the regularisation trade off.\n$L$ is some linear differential operator.\nTypically, one chooses $L$ to reflect the required smoothness of the target function; examples include simple first or second derivatives, \\cite{ruppert_semiparametric_2003}.\n\nThe analytic minimiser of the $\\text{PSSE}$ criterion in Equation~\\eqref{eqn:plss} is given by, \\citep{ruppert_semiparametric_2003}:\n\\begin{equation}\n\t\\hat{\\ve{c}} = \\left( \\vesup{B}{\\transpose} \\ve{W} \\ve{B} + \\omega \\ve{P} \\right)^{-1}\\vesup{B}{\\transpose} \\ve{W} \\vesub{Y}{i}\n\t\\label{eqn:phatc}\n\\end{equation}\n\nEssentially, the penalty term sets up a trade off between the bias, which corresponds to the first term in Equation~\\eqref{eqn:plss}, and the variance, which corresponds to the second term.\nThis trade off is controlled by the regularisation parameter $\\omega$.\nThe advantage of this method is that we can now let $Q$, our number of basis functions, be large without worrying about overfitting, as the penalty term in Equation~\\eqref{eqn:plss} will penalise functions with high variability in terms of the differential operator $L$.\n\nThe choice of differential operator is also a well studied problem. A common choice is the first or second order differential, denoted by $D^1$ and $D^2$ respectively, as this specifies a reasonable level of smoothness in the target function, \\cite{ruppert_semiparametric_2003}. 
However, often more complex terms are used to facilitate known properties of the target functions, such as letting $L$ be the harmonic acceleration operator, which encourages a periodic form of the target function.\nMore care must be taken when extending the linear differential operator to higher dimensions, which is discussed below.\nAdditionally, in the case of a B-spline basis system these penalty matrices are typically evaluated using a form of numerical integration, \\cite{ramsay_functional_2010}.\n\nPenalised regression splines move our problem of selecting $Q$ to that of choosing the regularisation parameter, $\\omega$.\nSuch a parameter influences the strictness with which we expect our target function to be smooth, as defined by the operator $L$.\nChoosing such a parameter is a problem that is present not only in spline smoothing but also in other penalised regression approaches, \\citep{lukas_robust_2006}.\nA popular method for choosing such a parameter is Generalised Cross Validation (GCV).\nGCV, introduced by \\citeauthor{wahba_practical_1977} in \\citep{wahba_practical_1977}, is a well studied method which has good asymptotic properties as the number of observations tends to infinity, \\citep{wahba_spline_1990, wahba_comparison_1985}.\nGCV chooses $\\omega$ as the minimiser of the GCV criterion $V(\\omega)$, which is given by, \\citep{wahba_spline_1990}:\n\\begin{equation}\\label{eqn:gcv}\n\tV\\left(\\omega\\right) = \\frac{J_i^{-1} \\lVert \\left(\\ve{I} - \\ve{A} \\right) \\vesub{Y}{i} \\rVert^2}{\\left[ J_i^{-1} \\text{tr}\\left(\\ve{I} - \\ve{A}\\right)\\right]^2}\n\\end{equation}\nwhere $\\ve{A}$ is the influence matrix defined by:\n\\begin{equation}\\label{eqn:inf}\n\t\\ve{A} = \\ve{B} \\left( \\vesup{B}{\\transpose} \\ve{W} \\ve{B} + \\omega \\ve{P} \\right)^{-1}\\vesup{B}{\\transpose} \\ve{W}\n\\end{equation}\nThe GCV criterion can then be minimised over $\\omega$ using a numerical minimisation routine.\nFor large $J_i$ it is known that the GCV criterion performs well in recovering a regularisation parameter which minimises variance while maintaining low bias in the reconstruction of the target function, \\citep{wahba_comparison_1985}.\nFor the case of low $J_i$ the GCV method may not be reliable.\nAs such, methods to extend the GCV criterion have been considered.\nThe modified GCV criterion adds a further modifier to the denominator in Equation~\\eqref{eqn:gcv} by multiplying the trace of the influence matrix by a factor, \\citep{cummins_confidence_2001}.\nThe modified GCV approach effectively increases the cost associated with each effective parameter in the curve, which reduces the chance of choosing an $\\omega$ which undersmooths the data, \\citep{cummins_confidence_2001}.\nA similar but separate approach to adjusting the GCV is robust GCV, introduced by \\citeauthor{lukas_robust_2006}.\n\\citeauthor{lukas_robust_2006} use a weighted sum of the GCV function with a term which penalises $\\omega$ values that are close to zero, \\citep{lukas_robust_2006}.\nThe performance of such methods is discussed in \\citep{lukas_performance_2012}.\n\nChoosing a basis system, a criterion for choosing the regularisation parameter, and a differential operator then fully specifies the penalised regression spline approach.\nIn the case of one dimensional functions the procedure applies as above. 
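Putting the pieces together, a minimal Python sketch of the penalised fit of Equation~\eqref{eqn:phatc} with $\omega$ chosen by the GCV criterion of Equation~\eqref{eqn:gcv} (illustrative; a simple grid search stands in for a proper numerical optimiser):

\begin{verbatim}
# Minimal sketch: penalised regression spline fit with GCV-selected omega.
import numpy as np

def penalised_fit(Bmat, W, P, y, omega):
    A = Bmat.T @ W @ Bmat + omega * P
    c_hat = np.linalg.solve(A, Bmat.T @ W @ y)
    infl = Bmat @ np.linalg.solve(A, Bmat.T @ W)  # influence, Eq. (inf)
    return c_hat, infl

def gcv_select(Bmat, W, P, y, omegas):
    J, best = len(y), None
    for omega in omegas:
        _, infl = penalised_fit(Bmat, W, P, y, omega)
        resid = y - infl @ y
        V = (resid @ resid / J) / (1 - np.trace(infl) / J) ** 2
        if best is None or V < best[0]:
            best = (V, omega)
    return best[1]
\end{verbatim}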
As such, we can estimate our mean function $\\mu(t)$ using a penalised regression spline, where the observations for the mean function are pooled across subjects over the union of the observed time points of all curves.\nHowever, for multiple dimensions, particularly the case when we wish to smooth the covariance surface, we must make some adjustments to the approach described above.\n\n\\subsubsection{Extension to higher dimensions \\label{sssec:spline_ext}}\nThere are two issues when extending the penalised regression spline to higher dimensions: extending the basis system and extending the penalty specification.\nTo address the first, we must specify a basis system which can cover multiple dimensions. In fact there are many such systems, \\citep{wahba_spline_1990}.\nOne popular approach when we have regular data for FDA is using a tensor product B-spline system, \\citep{xiao_asymptotic_2020}.\nWe describe the extension to a two dimensional surface, but the same extension will work for higher dimensional surfaces.\nConsider a two dimensional surface, $\\sigma \\left(s, t\\right)$, which we represent by the tensor product spline given by, \\citep{xiao_asymptotic_2020}:\n\\begin{equation}\\label{eqn:tensor_expansion}\n\t\\sigma \\left(s,t\\right) = \\sum_{1 \\le q_1, q_2 \\le \\bar{Q}} c_{q_1, q_2} B_{d_1, q_1}^{\\vesub{\\tau}{1}}(s) B_{d_2, q_2}^{\\vesub{\\tau}{2}}(t)\n\\end{equation} \nwhere $B_{d_i, q}^{\\vesub{\\tau}{i}}$ denotes the $q^\\text{th}$ element of the B-spline basis system for the $i^\\text{th}$ dimension, for $i=1,2$.\nFor notational simplicity we assume the dimension of each marginal basis system is the same, $\\bar{Q}$. \nHowever, in general this need not be the case.\n$\\ve{C} \\in \\mathbb{R}^{\\bar{Q} \\times \\bar{Q}}$, with entries $\\left[\\ve{C}\\right]_{q_1 q_2} = c_{q_1, q_2}$, is a coefficient matrix to be determined.\nEquation~\\eqref{eqn:tensor_expansion} can be written more succinctly using a Kronecker product as, \\citep{xiao_asymptotic_2020}:\n\\begin{equation}\\label{eqn:kron_expansion}\n\t\\sigma\\left(s,t\\right) = \\vesup{\\bar{B}}{\\transpose}\\left(s, t\\right) \\text{Vec} \\left(\\ve{C}\\right)\n\\end{equation}\nwhere $ \\ve{\\bar{B}}\\left(s, t\\right) = \\vess{B}{d_2}{\\vesub{\\tau}{2}}(t) \\otimes \\vess{B}{d_1}{\\vesub{\\tau}{1}}(s)$ and $\\text{Vec}\\left(\\cdot\\right)$ is an operator which stacks the columns of a matrix into a vector. We use the $\\bar{\\cdot}$ notation to make explicit that this basis is over multiple dimensions.\n\nThe same methods now follow as in the non penalised univariate case with this Kronecker basis system, \\citep{xiao_asymptotic_2020}.\nHowever, we must still adjust the penalty matrix in Equation~\\eqref{eqn:phatc} to account for smoothness across multiple dimensions. \n\nUsing the tensor product basis system as described above, one might consider specifying the smoothness of the function in each dimension separately.\nIndeed, one approach to extending the penalty specification, introduced by \\citeauthor{wood_low-rank_2006}, is to set penalties on each marginal basis separately and to combine them by a weighted sum, \\cite{wood_low-rank_2006}.\nSuch an approach, known as tensor product penalties, is well studied in the generalised additive model setting, \\cite{wood_generalized_2006}. 
\nA two dimensional penalty matrix $\\bar{\\ve{P}}$ may be described as follows: \n\\begin{equation}\\label{eqn:tensor_pen}\n\t    \\bar{\\ve{P}} = \\omega_1 \\ve{P}_1 \\otimes \\ve{I}_2 + \\omega_2 \\ve{I}_1 \\otimes \\ve{P}_2\n\\end{equation}\nwhere $\\vesub{P}{i}$ is the marginal penalty over a single basis dimension as described in Equation~\\eqref{eqn:plss}, $\\ve{I}_i$ is the identity matrix whose dimension matches that of the $i^\\text{th}$ marginal basis, and $\\omega_i$ is the marginal regularisation parameter, for $i=1,2$.\nThe properties of such a smoothness penalty are discussed in detail in \\citep{wood_low-rank_2006}, the main points being that such a penalty is both scale invariant and low rank.\nIn addition, \\citep{wood_p-splines_2017} studies the use of such a penalty for the case of unevenly distributed data.\nThe additional complication is that we now have multiple smoothness parameters $\\omega_i$, one for each dimension of the surface to be smoothed.\nIn this case the GCV methodology described above can still be applied, but minimisation now occurs with respect to the vector $\\ve{\\omega} = \\left(\\omega_1, \\omega_2 \\right)^\\transpose$. Implementation details can be found in \\cite{wood_generalized_2006}.\n\nWe can now use the above approach to estimate our covariance surface, denoted by $\\hat{G}\\left(s,t\\right)$, for use in the PACE methodology, \\citep{yao_functional_2005}.\nThe discrete observations for the covariance surface to be smoothed are gathered by pooling the observed raw covariances across subjects, which is discussed in detail in both \\citep{yao_functional_2005, xiao_asymptotic_2020}. \n\\citeauthor{xiao_asymptotic_2020} provides asymptotic properties of such an approach to the covariance surface of independent functional data which are on a par with the asymptotic results of the other smoothers used in \\citep{yao_functional_2005} for the PACE methodology, \\citep{xiao_asymptotic_2020}.\n\n\\section{Functional Time Series\\label{sec:fts}}\nAs discussed in Section~\\ref{sec:eo}, EO data is often both spatially and temporally correlated.\nThe two types of correlation are often considered separately. \nAn area of FDA which has considered a similar situation, where functional observations are correlated with one another, is functional time series, \\citep{aguilera_forecasting_1999}. 
Typically, functional observations are naturally indexed by some time of observation, and correlation may occur between observations.\nHence we may build up a time series of functional observations.\nFunctional time series models are some of the first in the FDA literature to consider correlated functional observations.\nAlthough they limit themselves to temporal correlation, many of the ideas extend to higher dimensional correlation structures, and so we discuss a few of the more popular methodologies in this section.\n\nWe focus on a technique introduced by \\citeauthor{hyndman_forecasting_2009} in \\citep{hyndman_forecasting_2009} to forecast functional time series.\nSuch a method is of interest as it shows how to use existing forecasting techniques in a functional setting.\nIn particular, \\citep{hyndman_forecasting_2009} uses the FPCA decomposition described in Section~\\ref{sec:fpca} to decompose the functional observations, and then forecasts each principal component score independently using standard univariate techniques.\n\n\\citeauthor{hyndman_robust_2007} in \\citep{hyndman_robust_2007} suggest assuming the principal component scores, $\\xi_{ik}$, follow independent univariate time series models.\nThen, conditioning on the observed data $\\ve{Y}$ given in Equation~\\eqref{eqn:observed_data} and the set of principal components $\\ve{\\phi}(t) = \\left(\\phi_1(t), \\phi_2(t), \\cdots, \\phi_K(t)\\right)$, they obtain the $h$-step ahead forecast $\\hat{\\chi}_{i+h | i}(t)$ as, \\citep{hyndman_robust_2007}:\n\n\\begin{equation}\\label{eqn:forecast}\n\t\\hat{\\chi}_{i+h | i}(t) = \\E\\left(\\chi_{i+h}(t) | \\ve{Y}, \\ve{\\phi}\\right) = \\hat{\\mu}(t) + \\sum_{k=1}^K \\hat{\\xi}_{i+h | i, k} \\phi_k(t) \n\\end{equation}\nwhere $\\hat{\\xi}_{i+h | i, k}$ denotes the $h$-step ahead forecast of the $k^\\text{th}$ principal component score.\nThe method by which $\\hat{\\xi}_{i+h | i, k}$ is obtained can be any univariate time series method; such methods are extremely well studied and are discussed in the monograph of \\citeauthor{hyndman_forecasting_2018}, \\citep{hyndman_forecasting_2018}.\n\\citeauthor{hyndman_stochastic_2008} highlight that the forecast, given by Equation~\\eqref{eqn:forecast}, is relatively insensitive to the choice of the number of components in the principal decomposition provided it is sufficiently large.\nThe variance of such a forecast can also easily be obtained through the sum of the component variances, \\citep{hyndman_stochastic_2008}.\nThe component variances of the forecast principal component scores are generally readily available from many time series models, \\citep{hyndman_forecasting_2018}. \nThe forecasting methodology initially described in \\citep{hyndman_robust_2007} used the standard FPCA procedure with outliers weighted to zero; this was reconsidered in \\citep{hyndman_forecasting_2009} to include a geometric weighting of the principal components to allow for changes in the functions over time. \n\nSuch a methodology motivates the construction of the CPACE model described in Chapter~\\ref{cha:cpace} for correlated functional data, by considering the case where the principal component scores obey correlated univariate models, not just time series models. To describe some of these models we use the concept of a Gaussian process. We give background to this in the following section. 
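As a concrete illustration of Equation~\eqref{eqn:forecast} before moving on, the following Python sketch forecasts each score series independently and reconstructs the curve (illustrative; an AR(1) coefficient fitted by least squares stands in for whichever univariate time series model is preferred):

\begin{verbatim}
# Minimal sketch of the component-wise forecast in Equation (forecast).
import numpy as np

def forecast_curve(mu, phi, xi, h):
    """mu: mean on a grid; phi: grid x K eigenfunctions;
    xi: n x K series of observed scores, one column per component."""
    xi_fc = np.empty(phi.shape[1])
    for k in range(phi.shape[1]):
        z = xi[:, k]                               # centred score series
        a = (z[:-1] @ z[1:]) / (z[:-1] @ z[:-1])   # AR(1) coefficient
        xi_fc[k] = a ** h * z[-1]                  # h-step ahead forecast
    return mu + phi @ xi_fc
\end{verbatim}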
\n\n\\section{Gaussian Process Regression \\label{sec:gp}}\nThe above section on functional time series shows there is scope for placing a model on the principal component scores to allow for correlation among functional observations.\nThe natural progression of such work is to consider what options are available when we have a more complex correlation structure or a higher dimensional domain. \nFor example, in the case of EO data discussed in Section~\\ref{sec:eo}, we have functional observations over a spatial domain indexed by some coordinate, $\\ve{s} \\in \\mathcal{S}$.\nFor this, the univariate time series methods discussed in \\citep{hyndman_forecasting_2009} are not suitable, and we look to Gaussian processes as one possible way to model principal component scores which are indexed by space.\nAs such, we discuss the basic concept of a Gaussian process in the following. \n\nTo describe a Gaussian process we first discuss the concept of a stochastic process.\nA real valued stochastic process is a collection of random variables defined on the same probability space $\\left(\\Omega, \\mathcal{F}, \\mathcal{P}\\right)$, where $\\Omega$ is a sample space, $\\mathcal{F}$ is a $\\sigma$-algebra, and $\\mathcal{P}$ is a probability measure; the random variables, indexed by some set $\\mathcal{S}$, are all real valued. More details of such constructions can be found in \\citep{billingsley_probability_1995}. A stochastic process can then be written as the collection:\n\\begin{equation*}\n\t\\{ \\xi(\\ve{s}, w)| \\ve{s} \\in \\mathcal{S} \\}\n\\end{equation*}\nwhere $w \\in \\Omega$. A sample function of the stochastic process is the mapping, for a point $w \\in \\Omega$:\n\\begin{equation*}\n\t \\xi(\\cdot, w) : \\mathcal{S} \\to \\mathbb{R}\n \\end{equation*}\n\nA Gaussian process is a stochastic process which is parametrised by a mean function $m: \\mathcal{S} \\to \\mathbb{R}$, where $m(\\ve{s}) = \\E\\left(\\xi(\\ve{s})\\right)$, and its covariance function: \n\\begin{align*}\n\tk&: \\mathcal{S}^2 \\to \\mathbb{R} \\\\\n\tk&: (\\ve{s}, \\vesup{s}{\\prime}) \\mapsto \\text{Cov}\\left(\\xi(\\ve{s}), \\xi(\\vesup{s}{\\prime})\\right)\n\\end{align*}\nsuch that for any finite collection of points, $\\vesub{s}{1}, \\vesub{s}{2}, \\cdots, \\vesub{s}{n} \\in \\mathcal{S}$, the joint distribution of $\\vesub{\\xi}{n} = \\left(\\xi(\\vesub{s}{1}), \\xi(\\vesub{s}{2}), \\cdots, \\xi(\\vesub{s}{n})\\right)^\\transpose$ is a multivariate normal distribution with mean vector $\\vesub{m}{n} = \\left(m(\\vesub{s}{1}), m(\\vesub{s}{2}), \\cdots, m(\\vesub{s}{n})\\right)^\\transpose$ and covariance matrix $\\vesub{K}{n}$ whose $\\left(l,m\\right)^\\text{th}$ entry is given by $k(\\vesub{s}{l}, \\vesub{s}{m})$, \\citep{shi_gaussian_2011}.\nAs such, Gaussian processes are a natural way of defining a prior distribution over spaces of functions, which are the parameter spaces for Bayesian non-linear regression models.\nIn this work, we will denote such a Gaussian process by $\\mathcal{GP}$ and write:\n\\begin{equation}\\label{eqn:gp}\n\t\\xi(\\cdot) \\sim \\mathcal{GP}\\left( m(\\cdot), k(\\cdot, \\cdot) \\right)\n\\end{equation}\n\nOne appealing aspect of Gaussian process regression models is that under Gaussian assumptions they have a closed form for prediction. 
Let $S = \\{ \\vesub{s}{1}, \\vesub{s}{2}, \\cdots, \\vesub{s}{n}\\}$ denote the set of design points of the regression, and $\\ve{\\xi}$ denote the corresponding target vector.\nThen conditioning on the joint Gaussian prior distribution on the observations gives the posterior at prediction points $S_*$, \\citep{williams_gaussian_2006}:\n\\begin{equation}\\label{eqn:gp_pred}\n\t\\vesub{\\xi}{*} | S_{*}, S, \\ve{\\xi} \\sim \\mathcal{N}\\left(K(S_*, S)K(S, S)^{-1} \\ve{\\xi}, K(S_*, S_*) - K(S_*, S)K(S, S)^{-1}K(S, S_*)\\right)\n\\end{equation}\nwhere $\\vesub{\\xi}{*}$ is our posterior process evaluated at the collection of prediction points, and $K(\\cdot, \\cdot)$ is the covariance matrix formed by evaluating the covariance function $k(\\cdot, \\cdot)$ at all pairs of inputs. This can be extended easily to noisy observations by adjusting the observed covariance $K(S, S)$ to include a diagonal component representing observation error. See \\citep{williams_gaussian_2006} for details.\n\n\nOne key aspect of the Gaussian process is the covariance function $k(\\cdot, \\cdot)$.\nThe covariance function characterises various smoothness properties, such as sample path continuity and differentiability, \\citep{williams_gaussian_2006}.\nAs such, the choice of $k(\\cdot, \\cdot)$ heavily influences the prediction mean and covariance as described in Equation~\\eqref{eqn:gp_pred}.\nThere are various common forms of the covariance function, but all must have the intrinsic property of being non-negative definite.\nThe covariance function is of such importance in Gaussian process modelling and spatial statistics that it has been widely studied.\nSee \\citep[Chapter~4]{williams_gaussian_2006} for a detailed introduction to various covariance functions. We expand on Section~\\ref{sec:st_methods} to briefly introduce the form of covariance function which we will consider throughout this work. \n\nOf the many different covariance functions used in Gaussian processes, stationary covariance functions are the most commonly employed due to their simplicity and ease of construction, \\citep{cressie_statistics_2010}.\nOne such commonly used covariance function is the Mat\\'{e}rn covariance function, which is given by, \\citep{abramowitz_handbook_2013}: \n\\begin{equation}\\label{eqn:mat}\n\tC_\\nu(d) = \\sigma^2 \\frac{2^{1-\\nu}}{\\Gamma(\\nu)} \\left(\\sqrt{2\\nu}d\\right)^\\nu K_\\nu \\left( \\sqrt{2\\nu}d \\right) \n\\end{equation}\nwhere $\\Gamma$ is the gamma function, $K_\\nu$ is the modified Bessel function of the second kind, $\\nu$ is a shape parameter of the kernel, and $d$ is the possibly anisotropic separation between two vectors $\\ve{s}, \\vesup{s}{\\prime}$, \\citep{abramowitz_handbook_2013}. \nThe covariance kernel $k(\\ve{s}, \\vesup{s}{\\prime})$ is then simply $C_\\nu(d(\\ve{s}, \\vesup{s}{\\prime}))$. \n\nThe issue with stationary covariance forms is that they are often quite restrictive, in the sense that the correlation structure cannot vary across the domain.\nFor example, this might be too restrictive an assumption in the case of climate data, where the correlation structure might be quite different in different parts of the globe. 
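Before turning to non-stationary extensions, a minimal Python sketch of the posterior of Equation~\eqref{eqn:gp_pred} (illustrative; we use the closed-form Mat\'{e}rn $\nu = 3/2$ special case with an isotropic length scale rather than the general Bessel form, and a small diagonal term plays the role of observation error):

\begin{verbatim}
# Minimal sketch: GP posterior mean and covariance, Equation (gp_pred).
import numpy as np

def matern32(A, B, sigma2=1.0, ell=1.0):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) / ell
    return sigma2 * (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

def gp_posterior(S, xi, S_star, noise=1e-8):
    K = matern32(S, S) + noise * np.eye(len(S))
    Ks = matern32(S_star, S)
    mean = Ks @ np.linalg.solve(K, xi)
    cov = matern32(S_star, S_star) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
\end{verbatim}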
\nOne particular way to extend the stationary Mat\\'{e}rn kernel to be non-stationary is proposed in \\citep{paciorek_spatial_2006}.\n\\citeauthor{paciorek_spatial_2006} propose a method to knit together multiple stationary correlation functions such that the resultant function is non-stationary.\n\nThey provide a form of non-stationary covariance function $k^{NS}(\\cdot, \\cdot)$ from a stationary covariance function $k^{S}(\\cdot, \\cdot)$ as follows, \\citep{paciorek_spatial_2006}:\n\\begin{equation}\\label{eqn:mat_ns}\n\tk^{NS}(\\ve{s}, \\vesup{s}{\\prime}) = \\lvert \\Sigma_{\\ve{s}} \\rvert ^{\\frac{1}{4}} \\lvert \\Sigma_{\\vesup{s}{\\prime}} \\rvert^{\\frac{1}{4}} \\lvert \\frac{ \\Sigma_{\\ve{s}}  + \\Sigma_{\\vesup{s}{\\prime}}}{2}\\rvert^{-\\frac{1}{2}} k^S(Q(\\ve{s}, \\vesup{s}{\\prime}))\n\\end{equation}\nwhere $Q(\\ve{s}, \\vesup{s}{\\prime}) = \\left(\\ve{s} - \\vesup{s}{\\prime}\\right)^\\transpose \\left( \\frac{ \\Sigma_{\\ve{s}}  + \\Sigma_{\\vesup{s}{\\prime}}}{2}\\right)^{-1}  \\left(\\ve{s} - \\vesup{s}{\\prime}\\right)$ and $\\Sigma_{\\ve{s}} = \\Sigma(\\ve{s})$ is the covariance matrix of the Gaussian kernel centred at $\\ve{s}$. How $\\Sigma_{\\ve{s}}$ varies across the domain specifies how non-stationary the full covariance kernel is. \n\nWith almost all covariance functions, and especially non-stationary covariances, there are typically hyperparameters which must be estimated from the data.\nFor example, in the Mat\\'{e}rn covariance we have the shape parameter $\\nu$ and any length-scale parameters defined in the distance function $d(\\cdot, \\cdot)$.\nThese are typically estimated through maximum likelihood estimation, \\citep{williams_gaussian_2006}; however, fully Bayesian estimation can also be achieved through some Markov Chain Monte Carlo (MCMC) scheme, \\citep{paciorek_spatial_2006}. ", "meta": {"hexsha": "21d28586c53f9e0d77b6a864f1570959b0e2bb20", "size": 47654, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter3/chapter3.tex", "max_stars_repo_name": "JulianAustin1993/thesis", "max_stars_repo_head_hexsha": "8b8cc587fbcd9a86a6d3834ddd38823799797b70", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter3/chapter3.tex", "max_issues_repo_name": "JulianAustin1993/thesis", "max_issues_repo_head_hexsha": "8b8cc587fbcd9a86a6d3834ddd38823799797b70", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter3/chapter3.tex", "max_forks_repo_name": "JulianAustin1993/thesis", "max_forks_repo_head_hexsha": "8b8cc587fbcd9a86a6d3834ddd38823799797b70", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 102.0428265525, "max_line_length": 547, "alphanum_fraction": 0.7573341168, "num_tokens": 13400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5736093913492462}}
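As a minimal illustration of the construction in Equation~\eqref{eqn:mat_ns}, the following Python sketch specialises to one dimension, where $\Sigma_{\ve{s}}$ reduces to a scalar $\ell(\ve{s})^2$; the spatially varying length scale $\ell(\cdot)$ below is an arbitrary choice for illustration, and the stationary kernel $k^S$ is taken to be squared-exponential:
\begin{verbatim}
import numpy as np

def ell(s):
    return 0.5 + 0.4 * np.sin(s)       # spatially varying length scale

def k_ns(s, t, sigma2=1.0):
    ls2 = ell(s)[:, None] ** 2         # Sigma_s, here a 1x1 "matrix"
    lt2 = ell(t)[None, :] ** 2
    avg = 0.5 * (ls2 + lt2)            # (Sigma_s + Sigma_s')/2
    pref = (ls2 ** 0.25) * (lt2 ** 0.25) / np.sqrt(avg)
    Q = (s[:, None] - t[None, :]) ** 2 / avg
    return sigma2 * pref * np.exp(-0.5 * Q)
\end{verbatim}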
{"text": "\\SecDef{symmetries}{Rotational Invariants in NORX}\n\nIn this section I describe rotational symmetries in the permutation $F$ of NORX. They exist both on the word level (inherited from $G$) and on the state level (structural).\n\n\\subsection{State Invariants}\n\nWe can see a 4x4 NORX state $S$ as a list of 4 columns: $S = (c_1, c_2, c_3, c_4)$. \n\n\\begin{definition}[Columns Rotation]\nFor an integer $n$ denote by $R_n$ the function rotating of the columns left by $n$ positions. For example $R_1(c_1, c_2, c_3, c_4) = (c_2, c_3, c_4, c_1)$ for arbitrary $c_1, c_2, c_3, c_4 \\in (\\field{w})^4$.\n\\end{definition}\n\nThe following proposition shows that the permutation $F$ is column rotation-symmetric.\n\n\\begin{proposition}\nThe permutations $R_n$ and $F^l$ commute for any integers $n$ and $l \\ge 1$:\n$$\nF^l \\circ R_n = R_n \\circ F^l.\n$$\n\\end{proposition}\n\\begin{proof}\nClearly, the rotation of columns does not affect the column step $\\col$, since it transforms each column separately: $\\col \\circ R_n = R_n \\circ \\col$. Such rotations do not break the diagonals as well, because the diagonals are simply reordered. Therefore, $\\diag \\circ R_n = R_n \\circ \\diag$. It follows that $F$ commutes with $R_n$ and thus $F^l$ commutes with $R_n$ too.\n\\end{proof}\n\n\\begin{definition}\nA state $s \\in (\\field{w})^{16}$ is said to be \\emph{column $n$-rotation invariant} if\n$$\nR_n(s) = s.\n$$\n\\end{definition}\n\nLet $s \\in (\\field{w})^{16}$ be a column $n$-rotation invariant state for a fixed positive integer $n$. Observe that\n$$\nR_n(F(s)) = F(R_n(s)) = F(s),\n$$\ni.e. $F(s)$ is also column $n$-rotation invariant. It follows that the property of a state being column $n$-rotation invariant is an invariant of the round function $F$. It is easy to see that this invariant corresponds to an invariant subspace.\n\n\\begin{proposition}\nFor a fixed integer $n, 1 \\le n \\le 3$, the set of all column $n$-rotation invariant states is a linear subspace of $(\\field{w})^{16}$.\nFor $n = 1$ or $n = 3$ this is the same subspace of dimension $4w$,\nfor $n = 2$ the invariant subspace has dimension $8w$.\n\\end{proposition}\n\\begin{proof}\n    If $n=1$ or $n=3$, then for any $c_1, c_2, c_3, c_4 \\in (\\field{w})^4$\n    $$\n    (c_1, c_2, c_3, c_4) = (c_2, c_3, c_4, c_1)\n    $$\n    and it follows that all columns are equal: $c_1 = c_2 = c_3 = c_4$. There are $2^{4w}$ out of $2^{16w}$ such states. The designers of NORX noted these states in~\\cite{DBLP:conf/latincrypt/AumassonJN14}. A constraint $c_i = c_j$ consists of $4w$ linear equations $c_{i,y,x} \\oplus c_{j,y,x} = 0$, where $1 \\le y \\le 4, 1 \\le x \\le w$. Therefore, these constraints define a linear subspace of dimension $16w - 3\\cdot 4w = 4w$.\n\n    If $n = 2$, then for any $c_1, c_2, c_3, c_4 \\in (\\field{w})^4$ $$\n    (c_1, c_2, c_3, c_4) = (c_3, c_4, c_1, c_2)\n    $$\n    and it follows that the two pairs of columns are equal: $c_1 = c_3$ and $c_2 = c_4$. There are $2^{8w}$ out of $2^{16w}$ such states. Similarly, these constraints define a linear subspace of dimension $8w$.\n\\end{proof}\n\nHitting such a special state even for the case $n=2$ is not easy under the NORX security claims. However, $2^{8w}$ is a more serious fraction of states than the $2^{4w}$ weak states which were known to the designers. 
To illustrate possible dangers of such properties, I refer to the forgery attack~\\cite{NORXfse} on the previous version of NORX exploiting this invariant, and I also describe two hypothetical attacks on NORX8~\\cite{aumasson2015norx8}, a NORX version with 8-bit words for low-end devices. I remark that NORX8 is not a part of the CAESAR submission.\n\nThe first attack shows a weak-key set, which could be exploited if the domain separation constants were rotation-invariant. The weak-key set is relatively small, $2^{32}$ keys out of $2^{80}$. The second attack is a state/key recovery attack in a known plaintext scenario. It succeeds with probability $2^{-64}$ for each pair of consecutive known-plaintext blocks, and the total time complexity is $2^{72}$ to recover an 80-bit key. Note that the designers restrict the data per single key to $2^{24}$ message blocks; therefore, the attack can break a concrete key with probability only $2^{-40}$.\n\nBoth attacks are independent of the number of rounds $l$ used in the permutation.\n\n\\subsection{Hypothetical Weak-key Attack on NORX8 Initialization}\nThe initial state of NORX8 is given by\n\\begin{equation}\n\\begin{pmatrix}\n    n_1 & n_2 & n_3 & n_4 \\\\\n    k_1 & k_2 & k_3 & k_4 \\\\\n    k_5 & k_6 & k_7 & k_8 \\\\\n    k_9\\oplus w & k_{10}\\oplus l & u_{15}\\oplus p  & u_{16}\\oplus t \\\\\n\\end{pmatrix} \\in (\\field{8})^{16},\n\\end{equation}\nwhere $n_i$ and $k_i$ denote bytes of the nonce and the key respectively, $u_i$ are constants and $w,l,p,t$ are constants encoding parameters of NORX. It is possible to construct valid initial states with two equal halves, i.e. a column 2-rotation invariant state. Indeed, let us fix the four key bytes $(k_3, k_4, k_7, k_8)$ arbitrarily and let us choose the two nonce bytes $(n_3, n_4)$ arbitrarily. Then we can set the left half of the state equal to the right half, i.e.\n\\eq{\n(n_1, n_2) &= (n_3, n_4),\\\\\n(k_1, k_2) &= (k_3, k_4),\\\\\n(k_5, k_6) &= (k_7, k_8),\\\\\n(k_9, k_{10}) &= (u_{15} \\oplus p \\oplus w, u_{16} \\oplus t \\oplus l).\n}\nThere are $2^{32}$ weak keys out of $2^{80}$ and $2^{16}$ nonces that result in such a weak state. The column 2-rotation invariance of such a state is preserved through an arbitrary number of rounds of $F$. However, after the first $F^l$ rounds the domain separation constant will be added to the last word of the state (see~\\FigRef{sponge2}). This constant is not column 2-rotation invariant and therefore it will break the property. Therefore, we consider a slightly modified version of NORX8 where the domain separation constant is column 2-rotation invariant. For example, the original constant may be added not only to the last word, but to all words of the state or to all words in the last row. In such a case the invariant is preserved through the next $F^l$ rounds and the rate part of the state is then observed by an adversary. This leads to a simple distinguisher: the adversary simply compares the left and right halves of the exposed part of the state. In NORX8 the rate part consists of only 5 bytes. This allows checking only the topmost four words, with error probability $2^{-16}$. By using a few more encryptions with other weak nonces the error probability can be made negligible.\n\nI remark that the weak-key space is very small and the attack requires symmetric domain separation constants. On the other hand, it is powerful in that it is independent of the number of rounds. 
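The weak-state construction can be sketched in a few lines of Python; the constant bytes below are placeholders, the real values of $u_{15}, u_{16}, w, l, p, t$ come from the NORX8 specification:
\begin{verbatim}
import os

u15, u16, w, l, p, t = 0xA3, 0xF1, 0x08, 0x04, 0x01, 0x08  # placeholders

k3, k4, k7, k8 = os.urandom(4)      # free key bytes
n3, n4 = os.urandom(2)              # free nonce bytes

n1, n2 = n3, n4                     # force left half == right half
k1, k2 = k3, k4
k5, k6 = k7, k8
k9, k10 = u15 ^ p ^ w, u16 ^ t ^ l

state = [[n1, n2, n3, n4],
         [k1, k2, k3, k4],
         [k5, k6, k7, k8],
         [k9 ^ w, k10 ^ l, u15 ^ p, u16 ^ t]]
assert all(row == row[2:] + row[:2] for row in state)  # 2-rotation invariant
\end{verbatim}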
The attack illustrates possible dangers of having such strong invariants in the permutation.\n\n\n\\subsection{State Recovery Attack on NORX8} \n\nThe column $2$-rotation invariant can be used to mount a state/key recovery attack on NORX8, though exceeding the data usage limit defined by the designers.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[height=2.8cm]{\\PathFig{layout.png}}\n    \\FigDef{sponge2}{The NORX v2.0 AE scheme with parallelization parameter $p = 1$. NORX8 and NORX16 follow this scheme. (credits: NORX specification~\\cite{NORX})}\n\\end{figure}\n\nAssume that we have a two-block known-plaintext message. That is, we know the rate part before and after a call to the NORX8 core permutation $F^l$. Denote the input rate part by $a$ and the output rate part by $b$. Recall that the rate in NORX8 is 40 bits, which is five 8-bit words. With probability $2^{-16}$ we will observe $a_1 = a_3, a_2 = a_4$. Then there are two cases:\n\\begin{enumerate}\n    \\item The whole state is column 2-rotation invariant. The probability of this is equal to  $2^{-6\\cdot8}=2^{-48}$, given the observed rate part. Indeed, a uniformly random state is column 2-rotation invariant with probability $2^{-64}$. In this case the output state will also be column 2-rotation invariant with probability 1 and we will observe $b_1 = b_3, b_2 = b_4$.\n    \n    \\item The whole state is not column 2-rotation invariant. Then with probability $2^{-16}$ we will observe $b_1 = b_3, b_2 = b_4$ as a false positive.\n\\end{enumerate}\nAs a result, when we observe both $a_1 = a_3, a_2 = a_4$ and $b_1 = b_3, b_2 = b_4$, the probability of the state being column 2-rotation invariant is equal to $2^{-32}$ and in the other cases it is a false positive. In the first case the state before the call to $F^l$ contains 5 unknown words $x_1,\\ldots,x_5 \\in \\field{8}$:\n$$\n\\begin{pmatrix}\n    a_1 & a_2 & a_3=a_1 & a_4=a_2 \\\\\n    a_5 & x_1 & a_5 & x_1 \\\\\n    x_2 & x_3 & x_2 & x_3 \\\\\n    x_4 & x_5 & x_4 & x_5 \\\\\n\\end{pmatrix}.\n$$\nWe can exhaustively check all $2^{40}$ possibilities for $x_1, \\ldots, x_5$ by encrypting through $F^l$ and obtaining an extra filter of probability $2^{-24}$ from $b$. The remaining $2^{16}$ candidates can be checked by decrypting the state up to the initial state, matching the constants, and further verifying the tag.\n\nAs a result, with probability $2^{-64}$ two consecutive known-plaintext blocks allow recovery of the full state and the secret key. The initial filter has strength $2^{-32}$ and the time complexity of checking a block pair is $2^{40}$. Note that the designers set a data limit of $2^{24}$ blocks; therefore the attack succeeds for a concrete key only with probability around $2^{-40}$.\n\n\n\\subsection{Word Invariants}\n\n\\newcommand\\rr{\\mathbf{r}}\nA similar rotational symmetry exists on the word level too. \nLet $G'$ be the permutation of $(\\field{w})^4$ to itself obtained from $G$ by replacing the four left shift operations by left rotations.\n\n\\begin{proposition}\n\\PropLabel{GprimeG}\nThe equality $G' = G$ is conditioned on 4 bit equations, where each equation holds with probability $3/4$.\n\\end{proposition}\n\\begin{proof}\nThe left shift by one inserts a zero in the least significant bit of the result. If the most significant bit of the input is equal to 0, then the left shift is equivalent to the left rotation. There are 4 left shifts in $G$, each yields one such bit equation. 
The word entering each left shift in $G$ is simply an AND of two state words, so the relevant most significant bit is an AND of two state bits, which are uniformly distributed.\n\\end{proof}\n\n\\begin{observation}\nExperimentally, it is observed that $\\Pr[G' = G]$ is close to $2^{-1.82}$, where the input is sampled uniformly at random, for all word sizes $w \\in \\{8,16,32,64\\}$.\n\\end{observation}\n\nNote that this observation shows the effect of the dependency between the four quarter-steps in $G$. The probability that all these bits are equal to zero can be estimated as $(3/4)^4 \\approx 2^{-1.66}$. However, the actual probability is lower due to the dependency of the equations. \n\n\\begin{definition}\nLet $r_n\\colon \\field{w} \\to \\field{w}$ be the mapping which rotates a word left by $n$ bits and let $\\rr_n \\colon (\\field{w})^4 \\to (\\field{w})^4$ be defined as\n$$\n\\rr_n(a,b,c,d) \\eqdef (r_n(a), r_n(b), r_n(c), r_n(d)).\n$$\n\\end{definition}\n\n\\begin{proposition}\n\\PropLabel{Gcommute}\nFor any integer $n, 1 \\le n < w$, $\\rr_n$ commutes with $G'$:\n$$\nG' \\circ \\rr_n = \\rr_n \\circ G'.\n$$\nFurthermore, $\\rr_n$ commutes with $G$ conditioned on 8 bit equations, each holding with probability $3/4$.\n\\end{proposition}\n\\begin{proof}\nFirst, it is easy to verify that all operations in $G'$ commute with $\\rr_n$. For a binary operation to commute it is required that $\\rr_n$ applied to both inputs is equivalent to $\\rr_n$ applied to the output.\n\nThe second claim follows by applying \\PropRef{GprimeG} to the equation $G' \\circ \\rr_n = \\rr_n \\circ G'$ two times.\n\\end{proof}\n\n\\begin{observation}\nExperimentally, it is observed that $\\Pr[G \\circ \\rr_n = \\rr_n \\circ G]$ varies from $2^{-3.84}$ to $2^{-3.59}$ depending on the word size and rotation amount $n$. The rotation amounts corresponding to the smallest probabilities are $1$ and $w-1$.\n\\end{observation}\n\nSimilarly to the column $n$-rotation invariant, define the word $n$-rotation invariant.\n\n\\begin{definition}\nA column $c \\in (\\field{w})^4$ (resp. a state $s \\in (\\field{w})^{16}$) is said to be word $n$-rotation invariant if each of its words $c_i$ (resp. $s_i$) satisfies\n$$\nr_n(c_i) = c_i~~(\\text{resp.}~r_n(s_i) = s_i).\n$$\n\\end{definition}\n\n\\begin{proposition}\nThe set of all word $n$-rotation invariant states is a linear subspace of dimension $16\\cdot \\gcd(n, w)$.\n\\end{proposition}\n\\begin{proof}\nIt is easy to see that a word $v \\in \\field{w}$ is word $n$-rotation invariant if and only if it is made of $w/\\gcd(n,w)$ copies of the same vector $u \\in \\field{\\gcd(n,w)}$. Clearly, all such words form a linear subspace of $\\field{w}$ of dimension $\\gcd(n,w)$. As there are 16 words in the state, the proposition follows.\n\\end{proof}\n\nNote that the property of a state or column being invariant requires only one approximation of $G$ by $G'$, i.e. its probability exponent is roughly half that of the commutation. \n\n\\begin{proposition}\nLet $c \\in (\\field{w})^4$ be a word $n$-rotation invariant column. 
Then \n$$\n\\Pr[\\rr_n(G(c)) = G(c)] \\ge \\Pr[G(c) = G'(c)],\n$$\nwhere the probabilities are taken over $c$ sampled uniformly at random from the set of all word $n$-rotation invariant columns.\n\\end{proposition}\n\\begin{proof}\nConsider the following equation:\n$$\n\\rr_n(G(c)) \\approx \\rr_n(G'(c)) = G'(\\rr_n(c)) = G'(c) \\approx G(c).\n$$\nThe two approximations are applied to the same input: $G(c) \\approx G'(c)$, therefore the equation holds with probability at least $\\Pr[G(c) = G'(c)]$.\n\\end{proof}\n\nExperimentally, no difference is observed in $\\Pr[G(c) = G'(c)]$ when $c$ is sampled uniformly at random from $(\\field{w})^4$ and when it is sampled uniformly at random from the set of all word $n$-rotation invariant columns. Therefore, it can be expected that a word $n$-rotation invariant is preserved through $F$ with a probability approximately $(2^{-1.82})^8 = 2^{-14.56}$. The commutation of $F$ and the $n$-rotation of each word can be expected to happen with probability approximately $(2^{-3.59})^8 = 2^{-28.72}$ if $1 < n < w-1$.\n\nIt is worth noting that the word $n$-rotation invariants can be seen as probabilistic invariant subspaces of $F$.\n\n\n\\subsection{Hypothetical Attack on NORX128 v2.0}\n\nAs the probability of $\\rr_n$ commuting with $G$ does not seem to depend on the word size, the distinguishing property is stronger for instances with larger words and key size. I consider an existential forgery attack similar to the one proposed in~\\cite{NORXfse}. Similarly, I consider NORX v2.0 since NORX v3.0 breaks the attack by injecting the key in the finalization stage.\n\n\\newcommand\\rrr{\\mathbf{r}}\nConsider the forgery attack scenario. The finalization stage of NORX consists of 8 iterations of $F$.\nLet us assume that the words in the rate part of the state before the finalization are $w/2$-rotation invariant. This happens with probability $2^{-2w}$. Then we can attempt a forgery by rotating each word in the last ciphertext block by $w/2$. Then, with probability approximately $(2^{-3.59})^{64} = 2^{-229.76}$ we expect the rotation to commute with the finalization:\n$$\nF^8(\\rrr_n(s)) = \\rrr_n(F^8(s)), \n$$\nwhere $s$ is the state before the finalization stage. Since the tag is obtained by truncating the final state and we have observed the tag in the first encryption, we can expect the new tag to be equal to the word $w/2$-rotated version of the original tag.\n\nFor NORX64, the probability of the rate being $32$-rotation invariant is equal to $2^{-128}$. Unfortunately, the attack's success probability is then worse than for a generic attack (i.e. $2^{-256}$). For this reason, I suggest increasing the word size even more and considering NORX128, a generalization of NORX64 obtained by increasing the word size to 128 bits. In this hypothetical cipher, the full attack success probability is approximately $2^{-256}\\cdot 2^{-229.76} = 2^{-485.76} > 2^{-512}$, i.e. it is better than a generic attack.\n\nThis attack on the hypothetical instance of NORX shows the possibility of exploiting the word-level symmetries as well. 
The attack does not apply directly to the main instances of NORX.", "meta": {"hexsha": "93d567652c6c5990671c1ff8a7911d1ba1f02d20", "size": 15994, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9niNORX/3symmetries.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9niNORX/3symmetries.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9niNORX/3symmetries.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 75.8009478673, "max_line_length": 1196, "alphanum_fraction": 0.7238964612, "num_tokens": 4723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.5736093864685265}}
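The dimension count for the word $n$-rotation invariant subspace in the proposition above is easy to check by brute force; a small Python sketch (standard library only):
\begin{verbatim}
# A w-bit word is invariant under rotation by n iff it consists of
# w/gcd(n,w) copies of a gcd(n,w)-bit block, so the invariant states
# form a subspace of dimension 16*gcd(n,w).
from math import gcd

def rot_left(x, n, w):
    n %= w
    return ((x << n) | (x >> (w - n))) & ((1 << w) - 1)

def repeat_block(block, g, w):
    word = 0
    for _ in range(w // g):
        word = (word << g) | block
    return word

w, n = 8, 2                      # NORX8-sized toy example
g = gcd(n, w)
for block in range(1 << g):      # all 2^g invariant words
    word = repeat_block(block, g, w)
    assert rot_left(word, n, w) == word
print(f"{1 << g} invariant words per state word -> dimension {16 * g}")
\end{verbatim}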
{"text": "\\Lecture{Jayalal Sharma}{Sept 19, 2020}{06}{Catlan Bijections}{Anshu and Narasimha Sai}{$\\alpha$}{JS}\n\n\\section{Introduction}\nOne of the classic examples to demonstrate the power of bijections is \\emph{Catlan numbers}. The Catlan numbers form a sequence of natural numbers that occur in various counting problems and occurs in several seemingly different contexts. Historically, \\emph{Euler} is the first person to study them. He was interested in counting the number of ways of dividing a polygon into triangles by drawing non-overlapping diagonals. Catlan numbers got their name from \\emph{Eugene Catlan} when he used them to answer the \\emph{Parenthesisation problem} which is the following: Consider a sequence $(a_1,a_2,\\cdots,a_{n+1})$ of $n+1$ numbers, If we have to perform a binary operations $\\odot$ $n$ times among them, how many number of ways are there to parenthesise (or bracket) them using $n$ parenthesis of single type (say $'()'$). In this lecture, we will see a few equivalent problems to this and then arrive at an explicit expression of Catlan numbers.\n\\section{Equivalent Bijections}\nIn this section, we see a few equivalent problems of the \\emph{parenthesisation} problem and argue that answer to each of them is also the \\emph{catlan number}\n\\paragraph{Full binary trees} If we observe the Parenthesisation problem carefully, we notice that every valid parentesisation of those $n+1$ numbers form a \\emph{full binary tree} (a binary tree in which every node have either two children or no children) of $n+1$ leaves and $n$ internal nodes where leaves represents the numbers $a_1,\\cdots,a_{n+1}$ and each internal node corresponds to one operation. Therefore, there's an implicit bijection between the set of valid parenthesisations and full binary trees with $n$ internal nodes. Therefore, \n\\begin{equation}\n    \\substack{\\textrm{number of valid parenthesisations of }\\\\ n+1 \\textrm{ elements }}  = \\substack{\\textrm{number of full binary trees with }\\\\ n \\textrm{ internal nodes}}\n\\end{equation}  \n\n\\paragraph{Balanced parenthesised strings} A balanced parenthesised string of length $2n$ is a string consists of $n$ left brackets $'('$ and $n$ right brackets $')'$ in which every prefix of the string has number of left brackets $'('$ $\\geq$ number of right brackets $')'$. One can easily observe the bijection from set of balanced paranthesised string to valid parenthesisations of $n+1$ numbers\n\n\\paragraph{Euler's problem} Find the number of ways of triangulating a polygon with $n+2$ edges\n\n\\paragraph{Handshaking problem} Consider a scenario where $2n$ people are sitting around a table. How many ways they can shake hands with each other without crossing hands. We leave it as an exercise to establish bijections from \\emph{Euler's} problem to \\emph{Full binary tree} problem and \\emph{handshaking} problem to \\emph{balanced parenthesised strings} problem.\n\\jsay{Establish bijections from \\emph{Euler's} problem to \\emph{Full binary tree} problem and \\emph{handshaking} problem to \\emph{balanced parenthesised strings} problem}\n\n\\section{Algebraic Expression}\nIn this section, we are interested in arriving at a concrete expression of the $n^{th}$ \\emph{catlan number} (denoted by $c_n$). Let's solve another problem and then, by establishing a bijection to one of the above problems, we can arrive at an expression for $c_n$.\n\n\\subsection{Monotone walk on $n\\times n$ grid} Suppose we have a grid of size $n\\times n$. 
How many ways are there to go from $(0,0)$ to $(n,n)$ using only downward edges or right edges? A sample path is represented in Fig. \\ref{fig:sample-path}. We observe that each step increments the value of exactly one of the co-ordinates by $1$. Since we have to move from $(0,0)$ to $(n,n)$, we have to increase the value of each of the co-ordinates by $n$, and thus, irrespective of the path taken, a path from $(0,0)$ to $(n,n)$ must have length $n+n=2n$.\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.4\\linewidth]{images/sample-path.png}\n    \\caption{A path from $(0,0)$ to $(n,n)$ using downward and right edges}\n    \\label{fig:sample-path}\n\\end{figure}\n\nIf we represent each right move as $R$ and each downward move as $D$, one can observe that there's a bijection $f$ from the set of paths to the set of strings of length $2n$ over the alphabet $\\{D,R\\}$ with number of $D$'s = number of $R$'s = $n$. Formally, if $(u_0,v_0), (u_1,v_1),\\cdots,(u_{2n},v_{2n})$ represents the path where $(u_0,v_0)=(0,0)$ and $(u_{2n},v_{2n})=(n,n)$, and $b=b_1b_2\\cdots b_{2n}$ represents the string where each $b_i$ is either $D$ or $R$, our bijection $f$ takes a path as input and sets $b_i$ as\n$$b_i=\\begin{cases}\nD &\\mbox{if } u_i = u_{i-1}+1\\\\\nR &\\mbox{if } v_i = v_{i-1}+1\n\\end{cases}$$\n\\begin{description}\n\\item \\underline{Well defined:} As we have exactly $n$ $x$ co-ordinate increments and $n$ $y$ co-ordinate increments, we will have exactly $n$ $D$'s and $n$ $R$'s in our string, and thus $f$ is well defined.\n\\item \\underline{Injective:} Two different paths from $(0,0)$ to $(n,n)$ will differ in at least one transition from $(u_{i-1},v_{i-1})$ to $(u_i,v_i)$ for some $i\\in\\{1,2,\\cdots,2n\\}$; their corresponding strings under $f$ will then differ in the $i^{th}$ position, and thus $f$ is injective.\n\\item \\underline{Surjective:} Every string over $\\{D,R\\}$ of length $2n$ with an equal number of $D$'s and $R$'s has a pre-image under $f$, defined by $(u_0,v_0)=(0,0)$ and $(u_i,v_i)$ equal to $(u_{i-1}+1,v_{i-1})$ if $b_i=D$ and $(u_{i-1},v_{i-1}+1)$ if $b_i=R$. As there will be $n$ $D$'s and $n$ $R$'s, $(u_{2n},v_{2n})=(n,n)$, and thus $f$ is surjective.\n\\end{description} \nThus $f$ is a bijection. The number of strings over $\\{D,R\\}$ of length $2n$ with an equal number of $D$'s and $R$'s is $\\binom{2n}{n}$ (select $n$ positions out of the $2n$ available and fill them with $D$'s, and the rest with $R$'s). Thus the number of paths from $(0,0)$ to $(n,n)$ with only downward and rightward movements is $\\binom{2n}{n}$.\n\nLet's ask a slightly different question: how many ways are there to go from $(0,0)$ to $(n+1,n-1)$ using only downward or right edges? Using similar arguments as above, we can come up with a bijection to the set of strings over $\\{D,R\\}$ of length $2n$ with $n+1$ $D$'s and $n-1$ $R$'s. 
Therefore the number of required paths is $\\binom{2n}{n+1}=\\binom{2n}{n-1}$.\n\n", "meta": {"hexsha": "256c52d268bcd98065c44db865211c74410d5afe", "size": 6360, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture06.tex", "max_stars_repo_name": "narasimhasai07/theory-toolkit", "max_stars_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lecture06.tex", "max_issues_repo_name": "narasimhasai07/theory-toolkit", "max_issues_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture06.tex", "max_forks_repo_name": "narasimhasai07/theory-toolkit", "max_forks_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 141.3333333333, "max_line_length": 948, "alphanum_fraction": 0.7273584906, "num_tokens": 1914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.7879311881731379, "lm_q1q2_score": 0.5735945697175803}}
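The counting arguments above are easy to check by brute force; the sketch below (Python, standard library only) enumerates all strings over $\{D,R\}$ of length $2n$ and confirms both binomial counts, as well as the fact that the balanced strings (every prefix having at least as many $D$'s as $R$'s, reading $D$ as a left bracket) are counted by the difference $\binom{2n}{n} - \binom{2n}{n+1}$ -- exactly the Catalan number $c_n$ that the lecture is heading towards:
\begin{verbatim}
from itertools import product
from math import comb

def counts(n):
    paths = off_corner = balanced = 0
    for s in product("DR", repeat=2 * n):
        d = s.count("D")
        if d == n:
            paths += 1
            running, ok = 0, True   # check the prefix condition
            for ch in s:
                running += 1 if ch == "D" else -1
                if running < 0:
                    ok = False
                    break
            balanced += ok
        elif d == n + 1:
            off_corner += 1
    return paths, off_corner, balanced

for n in range(1, 8):
    p, q, b = counts(n)
    assert p == comb(2 * n, n) and q == comb(2 * n, n + 1)
    assert b == comb(2 * n, n) - comb(2 * n, n + 1)   # Catalan number c_n
\end{verbatim}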
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{parskip}\n\\usepackage{markdown}\n\\usepackage{hyperref}\n\\usepackage{listings}\n\\usepackage{color}\n\\usepackage[subtle]{savetrees}\n\\usepackage{verbatim}\n\\usepackage{blindtext}\n\n\\title{\\vspace{-2cm} \\textbf{Session 5 - Some CS-ey things} \\\\ UCAS Program 2020}\n\n\\author{Chloe Lau}\n\\date{August 2020}\n\n\\begin{document}\n\\setlength{\\parindent}{4ex}\n\\setlength{\\parskip}{1em}\n\n\\maketitle\n\n\\section{Big O Notation}\n\\begin{center}\n    \\includegraphics[scale=0.2]{bigo.png}\n\\end{center}\n\nThe \\href{https://en.wikipedia.org/wiki/Big_O_notation}{Big O Notation} ($\\mathcal{O} \\sim$) is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.\n\nAs it is hard to visualise easily what the complexity of an algorithm is, below are some common orders of growth along with descriptions and examples where possible.\n\n\\section{Examples}\n\n\\subsection{$\\mathcal{O}$(1)}\n$\\mathcal{O}$(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.\n\n\\begin{verbatim}\n    bool IsFirstElementNull(IList<string> elements)\n    {\n        return elements[0] == null;\n    }\n\\end{verbatim}\n\nThis is simple huh? Now let's get to other examples:\n\n\\subsection{$\\mathcal{O}$(N)}\n$\\mathcal{O}$(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. The example below also demonstrates how Big O favours the worst-case performance scenario; a matching string could be found during any iteration of the \\texttt{for} loop and the function would return early, but Big O notation will always assume the upper limit where the algorithm will perform the maximum number of iterations.\n\n\\begin{verbatim}\n    bool ContainsValue(IList<string> elements, string value)\n    {\n        foreach (var element in elements)\n        {\n            if (element == value) return true;\n        }\n\n    return false;\n    }\n\\end{verbatim}\n\nStill easy? Let's play with exponentials then!\n\n\\newpage\n\\subsection{$\\mathcal{O}$($N^2$)}\n$\\mathcal{O}$($N^2$) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in $\\mathcal{O}$($N^3$), $\\mathcal{O}$($N^4$) etc.\n\n\\begin{verbatim}\n    bool ContainsDuplicates(IList<string> elements)\n    {\n        for (var outer = 0; outer < elements.Count; outer++)\n        {\n            for (var inner = 0; inner < elements.Count; inner++)\n            {\n                // Don't compare with self\n                if (outer == inner) continue;\n\n                if (elements[outer] == elements[inner]) return true;\n            }\n        }\n\n        return false;\n    }\n\\end{verbatim}\n\nMakes sense right? How about some Binary Powers:\n\n\\subsection{$\\mathcal{O}$($2^N$)}\n$\\mathcal{O}$($2^N$) denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an $\\mathcal{O}$($2^N$) function is exponential - starting off very shallow, then rising meteorically. 
An example of an $\\mathcal{O}$($2^N$) function is the recursive calculation of Fibonacci numbers:\n\n\\begin{verbatim}\n    int Fibonacci(int number)\n    {\n        if (number <= 1) return number;\n\n        return Fibonacci(number - 2) + Fibonacci(number - 1);\n    }\n\\end{verbatim}\n\nWait for the finale:\n\n\\subsection{Logarithms}\nLogarithms are slightly trickier to explain so I'll use a common example:\n\n\\href{https://en.wikipedia.org/wiki/Binary_search_algorithm#Derivation_of_average_case}{Binary search} is a technique used to search sorted data sets. It works by selecting the middle element of the data set, essentially the median, and comparing it against a target value. If the values match it will return success. If the target value is higher than the value of the probe element it will take the upper half of the data set and perform the same operation against it. Likewise, if the target value is lower than the value of the probe element it will perform the operation against the lower half. It will continue to halve the data set with each iteration until the value has been found or until it can no longer split the data set.\n\nThis type of algorithm is described as $\\mathcal{O}$(log N). The iterative halving of data sets described in the binary search example produces a growth curve that peaks at the beginning and slowly flattens out as the size of the data sets increase e.g. an input data set containing 10 items takes one second to complete, a data set containing 100 items takes two seconds, and a data set containing 1000 items will take three seconds. Doubling the size of the input data set has little effect on its growth as after a single iteration of the algorithm the data set will be halved and therefore on a par with an input data set half the size. This makes algorithms like binary search extremely efficient when dealing with large data sets.\n\n\\end{document}\n", "meta": {"hexsha": "c82f5b0748c61ad00990543defe5c96a1dc9306c", "size": 5206, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Seminar5/main.tex", "max_stars_repo_name": "chloelaucodes/ucas_program_notes", "max_stars_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-20T14:59:58.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-20T14:59:58.000Z", "max_issues_repo_path": "Seminar5/main.tex", "max_issues_repo_name": "chloelaucodes/ucas_program_notes", "max_issues_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Seminar5/main.tex", "max_forks_repo_name": "chloelaucodes/ucas_program_notes", "max_forks_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.7614678899, "max_line_length": 736, "alphanum_fraction": 0.7345370726, "num_tokens": 1250, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7879311956428947, "lm_q1q2_score": 0.5735945658551598}}
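For completeness, here is a sketch of the binary search just described, shown in Python rather than the C\#-flavoured pseudocode used in the other examples:
\begin{verbatim}
# O(log N) binary search on a sorted list; each iteration halves the
# remaining range, so ~log2(N) comparisons in the worst case.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid            # found: return the index
        elif items[mid] < target:
            lo = mid + 1          # target is in the upper half
        else:
            hi = mid - 1          # target is in the lower half
    return -1                     # not found

assert binary_search([1, 3, 5, 8, 13], 8) == 3
\end{verbatim}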
{"text": "\\documentclass[t,usenames,dvipsnames]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, tikz, xcolor, array, graphicx}\n\\usetikzlibrary{arrows.meta}\n\\tikzset{>=stealth}\n\\everymath{\\displaystyle}\n\n\\title{Angles and Radian Measure}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Table of Contents}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\newcommand\\bigangle[2][]{% \n    \\draw[->,domain=0:#2,variable=\\t,samples=200,>=stealth,#1]\n      plot ({(\\t+#2)*cos(\\t)/(4*#2)},\n           {(\\t+#2)*sin(\\t)/(4*#2)}) \n        ;}\n\n\\begin{document}\n\n\\begin{frame}\n    \\titlepage\n\\end{frame}\n\n\n\\begin{frame}{Angles in Standard Position}\n\\begin{center}\n\\begin{tikzpicture}[scale=1.2]\n    \\draw [<->] (-3,0) -- (3,0) node [right] {$x$};\n    \\draw [<->] (0,-2) -- (0,3) node [right] {$y$};\n    \\draw [->, >=stealth, line width = 1.25, color=blue] (0,0) -- (2.75,0) node [below, midway] {\\color{blue}Initial Side};\n    \\draw [->, >=stealth, line width = 1.25, color = red] (0,0) -- (140:2.75) node [midway, below, sloped] {\\color{red}Terminal Side};\n    \\draw [color=violet, fill=violet] (0,0) circle (2pt);\n    \\draw [->, >=stealth, color=violet] (-0.5,-0.5) node [below] {\\color{violet}Vertex} -- (0,0); \n    \\draw [->, >=stealth] (0:0.5) arc (0:140:0.5) node [midway, above right] {$\\theta$};\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Angles in Standard Position}\n    An angle is in \\alert{standard position} if    \\newline\\\\  \\pause\n\\begin{itemize}\n    \\item Vertex is at the origin.  \\newline\\\\  \\pause\n    \\item Initial side runs along positive $x$-axis.    \\newline\\\\  \\pause\n\\end{itemize}\n\nPositive angles open \\textbf{counter-clockwise} and negative angles open \\textbf{clockwise}.    \\newline\\\\  \\pause\n\nA \\alert{quadrantal angle} is one whose terminal side lies on an axis. In other words, it's a multiple of $90^\\circ$.\n\\end{frame}\n\n\n\\begin{frame}{Radians}\nOne \\textbf{radian} is the measure of the central angle of a circle in which the radius equals the length of the intercepted arc.    \\newline\\\\\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\draw (0,0) circle (3cm);\n    \\draw [color = blue, ultra thick] (0,0) -- (3,0) node [below, midway] {$r$};\n    \\draw [color = blue, ultra thick] (0,0) -- (57.3:3) node [sloped, midway, above] {$r$};\n    \\draw [line width = 1.5, color = blue, ultra thick] (0:3) arc (0:57.3:3) node [midway, right] {$r$};\n    \\draw [->, >=stealth] (0:0.75) arc (0:57.3:0.75) node [midway, right] {1 radian};\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Radian Interpretation}\nIn other words, its when the length of your slice of pizza's crust is equal to the radius of the pizza. 
\n\\end{frame}\n\n\\section{ Convert degrees to radians and radians to degrees.}\n\n\\begin{frame}{Relationship Between Radians and Degrees}\n    One rotation is the circumference of the circle ($2\\pi r$) and also $360^\\circ$:\n\n\\begin{align*}\n    \\onslide<2->{360^\\circ &= 2\\pi \\text{ radians}} \\\\[10pt]\n    \\onslide<3->{180^\\circ &= \\pi \\text{ radians} } \\\\[10pt]\n    \\onslide<4->{\\frac{180^\\circ}{\\pi} &= \\text{1 radian}} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Convert Degrees to Radians and Radians to Degrees}\n\\begin{itemize}\n    \\item To convert degrees to radians, multiply by $\\frac{\\pi}{180^\\circ}$.   \\\\[18pt]  \\pause\n    \\item To convert radians to degrees, multiply by $\\frac{180^\\circ}{\\pi}$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nConvert each angle to radians.  \\newline\\\\\n(a) \\quad $30^\\circ$    \n\\begin{align*}\n    \\onslide<2->{30^\\circ &    }\\onslide<3->{\\rightarrow30^\\circ\\left(\\frac{\\pi}{180^\\circ}\\right)}   \\\\[12pt]\n    \\onslide<4->{&= \\frac{30\\pi}{180}} \\\\[12pt]\n    \\onslide<5->{&= \\frac{\\pi}{6}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nConvert each angle to radians.  \\newline\\\\\n(b) \\quad $90^\\circ$    \n\\begin{align*}\n    \\onslide<2->{90^\\circ &    }\\onslide<3->{\\rightarrow90^\\circ\\left(\\frac{\\pi}{180^\\circ}\\right)}   \\\\[12pt]\n    \\onslide<4->{&= \\frac{90\\pi}{180}} \\\\[12pt]\n    \\onslide<5->{&= \\frac{\\pi}{2}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nConvert each angle to radians.  \\newline\\\\\n(c) \\quad $-135^\\circ$    \n\\begin{align*}\n    \\onslide<2->{-135^\\circ &    }\\onslide<3->{\\rightarrow-135^\\circ\\left(\\frac{\\pi}{180^\\circ}\\right)}   \\\\[12pt]\n    \\onslide<4->{&= \\frac{-135\\pi}{180}} \\\\[12pt]\n    \\onslide<5->{&= \\frac{-3\\pi}{4}}\n\\end{align*}\n\\end{frame}\n\n\n\\begin{frame}{Example 2}\nConvert each angle to degrees. \\newline\\\\\n(a) \\quad $\\frac{\\pi}{3}$\n\\begin{align*}\n    \\onslide<2->{\\frac{\\pi}{3} &    }\\onslide<3->{\\rightarrow\\frac{\\pi}{3}\\left(\\frac{180^\\circ}{\\pi}\\right)}   \\\\[12pt]\n    \\onslide<4->{&= \\frac{180^\\circ}{3}} \\\\[12pt]\n    \\onslide<5->{&= 60^\\circ}\n\\end{align*}\n\\end{frame}\n\n\n\\begin{frame}{Example 2}\nConvert each angle to degrees. \\newline\\\\\n(b) \\quad $-\\frac{5\\pi}{4}$\n\\begin{align*}\n    \\onslide<2->{-\\frac{5\\pi}{4} &    }\\onslide<3->{\\rightarrow-\\frac{5\\pi}{4}\\left(\\frac{180^\\circ}{\\pi}\\right)}   \\\\[12pt]\n    \\onslide<4->{&= -\\frac{900^\\circ}{4}} \\\\[12pt]\n    \\onslide<5->{&= -225^\\circ}\n\\end{align*}\n\\end{frame}\n\n\n\\section{ Draw angles in standard position.}\n\n\\begin{frame}{Example 3}\nDraw and label each angle in standard position. 
\\newline\\\\\n(a) \\quad $\\alpha = \\frac{\\pi}{6}$  \\qquad \\onslide<2->{$\\alpha = 30^\\circ$}\n\\begin{center}\n\\begin{tikzpicture}[scale=1.2]\n    \\draw [<->] (-2,0) -- (2,0) node [right] {$x$};\n    \\draw [<->] (0,-2) -- (0,2) node [above] {$y$};\n    \\draw [->, color=blue, very thick] (0,0) -- (1.5,0);\n    \\onslide<3->{\\draw [->, color=red, very thick] (0,0) -- (30:1.5);}\n    \\onslide<4->{\\draw [->] (0.5,0) arc (0:30:0.5) node [right, yshift=-0.1cm] {$\\alpha$};}\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(b) \\quad $\\beta = -\\frac{4\\pi}{3}$ \\qquad \\onslide<2->{$\\beta = -240^\\circ$}\n\\begin{center}\n\\begin{tikzpicture}[scale=1.2]\n    \\draw [<->] (-2,0) -- (2,0) node [right] {$x$};\n    \\draw [<->] (0,-2) -- (0,2) node [above] {$y$};\n    \\draw [->, color=blue, very thick] (0,0) -- (1.5,0);\n    \\onslide<3->{\\draw [->, color=red, very thick] (0,0) -- (-240:1.5);}\n    \\onslide<4->{\\draw [->] (0.5,0) arc (0:-240:0.5) node [below, xshift=-0.5cm, yshift=-0.75cm] {$\\beta$};}\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(c) \\quad   $\\gamma = - \\frac{9\\pi}{4}$ \\qquad \\onslide<2->{$\\gamma = -405^\\circ$}\n\\begin{center}\n\\begin{tikzpicture}[scale=1.2]\n    \\draw [<->] (-2,0) -- (2,0) node [right] {$x$};\n    \\draw [<->] (0,-2) -- (0,2) node [above] {$y$};\n    \\draw [->, color=blue, very thick] (0,0) -- (1.5,0);\n    \\onslide<3->{\\draw [->, color=red, very thick] (0,0) -- (-405:1.5);}\n    \\onslide<4->{\\bigangle{-405};}\n    \\onslide<4->{\\node at (-0.5,0.5) {$\\gamma$};}\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}{Example 3}\n(d) \\quad   $\\delta = -\\frac{5\\pi}{2}$  \\qquad \\onslide<2->{$\\delta = -450^\\circ$}\n\\begin{center}\n\\begin{tikzpicture}[scale=1.2]\n    \\draw [<->] (-2,0) -- (2,0) node [right] {$x$};\n    \\draw [<->] (0,-2) -- (0,2) node [above] {$y$};\n    \\draw [->, color=blue, very thick] (0,0) -- (1.5,0);\n    \\onslide<3->{\\draw [->, color=red, very thick] (0,0) -- (-450:1.5);}\n    \\onslide<4->{\\bigangle{-450};}\n    \\onslide<4->{\\node at (-0.5,0.5) {$\\delta$};}\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\n\\section{ Find a coterminal angle to a given angle.}\n\n\n\\begin{frame}{Coterminal Angles}\nTwo angles that have the same initial and terminal side are \\alert{coterminal angles}.    \\newline\\\\  \\pause\n\nTo find coterminal angles, add (or subtract) multiples of $360^\\circ$ (or $2\\pi$ radians).\n\\end{frame}\n\n\\begin{frame}{Example 4}\nFind a coterminal angle between $0^\\circ$ and $360^\\circ$ (or 0 and $2\\pi$ radians) for each.   
\\newline\\\\\n(a) \\quad   $400^\\circ$\n\\begin{align*}\n    \\onslide<2->{400^\\circ}   \n    \\onslide<3->{&\\rightarrow 400^\\circ - 360^\\circ} \\\\[10pt]\n    \\onslide<4->{&= 40^\\circ}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(b) \\quad   $-\\frac{4\\pi}{3}$\n\\begin{align*}\n    \\onslide<2->{-\\frac{4\\pi}{3}}   \n    \\onslide<3->{&\\rightarrow -\\frac{4\\pi}{3} + 2\\pi} \\\\[10pt]\n    \\onslide<4->{&= \\frac{2\\pi}{3}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(c) \\quad   $\\frac{9\\pi}{4}$\n\\begin{align*}\n    \\onslide<2->{\\frac{9\\pi}{4}}   \n    \\onslide<3->{&\\rightarrow \\frac{9\\pi}{4} - 2\\pi} \\\\[10pt]\n    \\onslide<4->{&= \\frac{\\pi}{4}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(d) \\quad   $-785^\\circ$\n\\begin{align*}\n    \\onslide<2->{-785^\\circ}   \n    \\onslide<3->{&\\rightarrow -785^\\circ + 1080^\\circ} \\\\[10pt]\n    \\onslide<4->{&= 295^\\circ}\n\\end{align*}\n\\end{frame}\n\n    \n\\end{document}\n", "meta": {"hexsha": "dbfb8e7216cb71bc16e551223aeeac9dc89bc77e", "size": 8814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Angles_and_Radian_Measure.tex", "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Angles_and_Radian_Measure.tex", "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Angles_and_Radian_Measure.tex", "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "avg_line_length": 32.6444444444, "max_line_length": 143, "alphanum_fraction": 0.5956432948, "num_tokens": 3400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5735945622299602}}
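The conversion and coterminal rules from these slides are easy to sanity-check programmatically; a small Python sketch:
\begin{verbatim}
import math

# degrees -> radians: multiply by pi/180; radians -> degrees: by 180/pi;
# coterminal angles differ by multiples of 360 degrees (2*pi radians).
def deg_to_rad(deg):
    return deg * math.pi / 180.0

def rad_to_deg(rad):
    return rad * 180.0 / math.pi

def coterminal_deg(deg):
    return deg % 360.0            # representative in [0, 360)

assert math.isclose(deg_to_rad(30), math.pi / 6)
assert math.isclose(rad_to_deg(-5 * math.pi / 4), -225.0)
assert coterminal_deg(400) == 40.0
assert coterminal_deg(-785) == 295.0
\end{verbatim}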
{"text": "% 12tensors.tex\n% Fund Science! & Help Ernest finish his Physics Research! : quantum super-A-polynomials - a thesis by Ernest Yeung\n%                                               \n% http://igg.me/at/ernestyalumni2014                                                                             \n%                                                              \n% Facebook     : ernestyalumni  \n% github       : ernestyalumni                                                                     \n% gmail        : ernestyalumni                                                                     \n% google       : ernestyalumni                                                                                   \n% linkedin     : ernestyalumni                                                                             \n% tumblr       : ernestyalumni                                                               \n% twitter      : ernestyalumni                                                             \n% youtube      : ernestyalumni                                                                \n% indiegogo    : ernestyalumni                                                                        \n%\n% Ernest Yeung was supported by Mr. and Mrs. C.W. Yeung, Prof. Robert A. Rosenstone, Michael Drown, Arvid Kingl, Mr. and Mrs. Valerie Cheng, and the Foundation for Polish Sciences, Warsaw University.                  \n\n\n\n\\subsection*{ Multilinear Algebra }\n\n\n\n$F: V_1 \\times \\dots \\times V_k \\to W$ multilinear if $\\forall \\, i$, linear in each variable, $F(v_1 \\dots a v_1 + a' v_i' \\dots v_k) = aF(v_1 \\dots v_i \\dots v_k) + a'F(v_1 \\dots v_i' \\dots v_k)$\n\nmultilinear function of 2 variables is bilinear.  \\\\\n\n$L(V_1, \\dots V_k; W)$ - set of all multilinear maps from $V_1 \\times \\dots \\times V_k$ to $W$ \\\\\n\n$\\lbrace T: \\underbrace{V\\times \\dots \\times V}_{k \\text{times }} \\to \\mathbb{R} \\rbrace = T^k(V)$  \\\\\n\n$S\\in T^k(V), \\, T\\in T^l(V)$  \\\\\ntensor product $S\\otimes T: V \\times \\dots \\times V \\to \\mathbb{R}$, covariant $(k+l)$-tensor\n\n\\[\n  S \\otimes T(x_1 \\dots x_{k+l}) = S(x_1 \\dots x_k) T(x_{k+1} \\dots x_{k+l}) \\\\ \n\\]\n\n\\exercisehead{12.3}  \n\\[\n\\begin{gathered}\n  F(v_1 \\dots av_i + bw_i \\dots v_k) G(v_{k+1} \\dots v_{k+l}) = aF(v_1 \\dots v_i \\dots v_k) G + b F(v_1 \\dots w_i \\dots v_k)G = \\\\\n  =  aF\\otimes G(v_1 \\dots v_i \\dots v_k \\dots v_{k+l}) + bF\\otimes G(v_1 \\dots w_i \\dots v_k \\dots v_{k+l}) = F\\otimes G( v_1 \\dots a v_i  + b w_i \\dots v_k, v_{k+1} \\dots v_{k+l} ) \\\\ \n    F(v_1 \\dots v_k) G(v_{k+1} \\dots av_{k+i} + bw_{k+i} \\dots v_{k+l}) = \\\\\n    = F(v_1 \\dots v_k)(aG(v_{k+1} \\dots v_{k+i}, v_{k+i +1} \\dots v_{k+l} ) + bG(v_{k+1 } \\dots w_{k+i} v_{k+i +1} \\dots v_{k+l }) = \\\\ \n    = aF\\otimes G(v_{k+1} \\dots v_{k+i} , v_{k+i +1} \\dots v_{k+l }) + bF\\otimes G(v_1 \\dots w_{k+i}, v_{k+i + 1} \\dots v_{k+l} ) = \\\\\n    = F\\otimes G(v_1 \\dots v_k, v_{k+1} \\dots av_{k+i} + b w_{k+i} \\dots v_{k+i +1} \\dots v_{k+l })\n\\end{gathered}\n\\]\n\n\\[\n\\begin{gathered}\n(F\\otimes G) \\otimes H = (F\\otimes G)(x_1 \\dots x_{k+l}H(x_{k+l+1} \\dots x_{k+l+m }) = F(x_1 \\dots x_k) G(x_{k+1} \\dots x_{k+l} ) H(x_{k+l+1} \\dots x_{k+l+m}) = \\\\\n= F(x_1 \\dots x_k)(G\\otimes H)(x_{k+1} \\dots x_{k+l+m}) = F\\otimes (G\\otimes H)(x_1 \\dots x_{k+l+m})\n\\end{gathered}\n\\]\n\n\n\\begin{proposition}[12.4](A Basis for the Space of Multilinear Functions)  Let $V$ real vector 
space of dim. $n$, $(E_i)$ any basis for $V$, $\\epsilon^i$ dual basis.\\\\\nThe set of all $k$-tensors of the form $\\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k}$, \\, $1\\leq i_1 \\dots i_k \\leq n$, is a basis for $T^k(V)$, which therefore has dim. $n^k$\n\\end{proposition}\n\n\\begin{proof} Let $\\mathcal{B} = \\lbrace \\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k} | 1 \\leq i_1 \\dots i_k \\leq n \\rbrace$ \\\\\n\nSuppose arbitrary $T \\in T^k(V)$  \\\\\nDefine $T_{i_1 \\dots i_k} = T(E_{i_1} \\dots E_{i_k} )$\n\n\\[\nT_{i_1 \\dots i_k} \\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k}(E_{j_1} \\dots E_{j_k} ) \\underbrace{=}_{ \\text{ (by definition) } }T_{i_1 \\dots i_k} \\epsilon^{i_1}(E_{j_1}) \\dots \\epsilon^{i_k}(E_{j_k}) = T_{i_1 \\dots i_k}\\delta^{i_1}_{j_1} \\dots \\delta^{i_k}_{j_k} = T_{j_1 \\dots j_k} = T(E_{j_1} \\dots E_{j_k} )\n\\]\nso $T$ lies in the span of $\\mathcal{B}$; linear independence follows by evaluating a vanishing combination on the tuples $(E_{j_1} \\dots E_{j_k})$\n\n\\end{proof}\n\n\n\n\n\n\\subsubsection*{ Abstract Tensor Products of Vector Spaces }\n\n\n\nfree vector space on $S$, $\\mathbb{R}\\langle S \\rangle = \\lbrace \\mathcal{F} \\rbrace$ \\\\\n\\quad \\quad finite formal linear combination - function $\\mathcal{F} : S \\to \\mathbb{R}$ s.t. $\\mathcal{F}(s) = 0$ for all but finitely many $s \\in S$ \\\\\n\\quad \\quad $\\forall \\, \\mathcal{F} \\in \\mathbb{R} \\langle S \\rangle$, $\\mathcal{F} = \\sum_{i=1}^m a_i x_i$, \\, $x_1 \\dots x_m \\in S$ s.t. $\\mathcal{F}(x_i) \\neq 0$, $a_i = \\mathcal{F}(x_i)$\n\n\\exercisehead{12.6} (Characteristic Property of Free Vector Spaces)\n\n\\[\n\\begin{aligned}\n  F : S & \\to W \\\\ \n  & x \\mapsto w \\in W \n\\end{aligned}\n\\]\nDefine $\\overline{F}:\\mathbb{R}\\langle S \\rangle \\to W$, \\, $\\sum_{i=1}^m c_i x_i \\mapsto \\sum_{i=1}^m c_i F(x_i) \\in W$; this is linear by construction and restricts to $F$ on $S$.\n\nUniqueness: every element of $\\mathbb{R}\\langle S \\rangle$ is a finite linear combination $\\sum_{i=1}^m c_i x_i$ with $x_i \\in S$, so any linear map agreeing with $F$ on $S$ must send it to $\\sum_{i=1}^m c_i F(x_i)$ and hence equals $\\overline{F}$.\n\n$\\mathcal{R} \\equiv$ subspace of free vector space $\\mathbb{R}\\langle V \\times W \\rangle $ spanned by \n\\begin{equation}\n\\begin{gathered}\n  (av,w) - a(v, w) \\\\\n  (v,aw) - a(v,w) \\\\\n(v,w) + (v',w) - (v+v',w) \\\\\n  (v,w) + (v,w') - (v,w+w')\n\\end{gathered} (12.4)\n\\end{equation}\n\ntensor product of $V, W$ \n\\[\nV\\otimes W = \\mathbb{R} \\langle V \\times W \\rangle / \\mathcal{R}\n\\]\nthe equivalence class of an element $(v,w)$ is denoted $v\\otimes w \\in V\\otimes W$\n\n\\begin{proposition}[12.7] (Characteristic Property of the Tensor Product Space)\n  If $A: V \\times W \\to X$ is bilinear, $X$ any vector space, then $\\exists \\, !$ linear $\\widetilde{A} : V\\otimes W \\to X$  s.t.\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em]\n  {\n    V\\times W  & X  \\\\\n    V\\otimes W  &   \\\\ };\n  \\path[-stealth]\n  (m-1-1) edge node [right] {$A$} (m-1-2)\n  edge node [left] { $\\pi$} (m-2-1)\n  (m-2-1) edge node [below] {$\\widetilde{A}$} (m-1-2);\n\\end{tikzpicture}  (12.6)\n\n$\\pi(v,w) = v\\otimes w$\n\\end{proposition}\n\n\\begin{proof}\nBy the characteristic property of the free vector space, $A: V\\times W \\to X$ extends uniquely to linear $\\overline{A} : \\mathbb{R} \\langle V \\times W \\rangle \\to X$\n\n\\[\n\\overline{A}(v,w) 
= A(v,w) \\text{ if } (v,w) \\in V\\times W \\subset \\mathbb{R}\\langle V \\times W \\rangle\n\\]\n\n$A$ bilinear\n\n\\[\n\\begin{aligned}\n  \\overline{A}(av,w) = A(av,w) = aA(v,w) = a\\overline{A}(v,w) = \\overline{A}(a(v,w)) \\\\ \n  \\overline{A}(v,aw) = A(v,aw) = aA(v,w)  = \\overline{A}(a(v,w)) \\\\ \n  \\overline{A}(v+v',w)  = A(v+v',w) = A(v,w) +  A(v',w) = \\overline{A}(v,w) + \\overline{A}(v',w) = \\overline{A}((v,w) + (v',w)) \\\\ \n\\end{aligned}\n\\]\n\nLikewise for the remaining relations in (12.4), so the subspace $\\mathcal{R} \\subset \\text{ker}{\\overline{A}}$\n\n$\\therefore \\, \\overline{A}$ descends to linear $\\widetilde{A} : V\\otimes W = \\mathbb{R} \\langle V\\times W \\rangle / \\mathcal{R} \\to X$ s.t. \\\\\n$\\widetilde{A} \\circ \\pi = \\overline{A}$, $\\pi: \\mathbb{R}\\langle V \\times W \\rangle \\to V\\otimes W$\n\nuniqueness: every element of $V \\otimes W$ is a linear combination of elements of the form $v\\otimes w$ \\\\\nand $\\widetilde{A}$ is uniquely determined on these by $\\widetilde{A}(v\\otimes w) = \\overline{A}(v,w) = A(v,w)$\n\\end{proof}\n\n\n\\begin{proposition}[11.4] (\\textbf{Other Properties of Tensor Products}).  Let $V, W$, and $X$ be finite-dimensional real vector spaces \n\\begin{enumerate}\n  \\item[(a)] $V^* \\otimes W^*$ canonically isomorphic to $B(V,W)$, the space of bilinear maps from $V\\times W$ into $\\mathbb{R}$\n  \\item[(b)] if $\\begin{aligned} & \\quad \\\\ & (E_i) \\text{ basis for $V$ } \\\\ & (F_j) \\text{ basis for $W$ } \\end{aligned}$, then $\\lbrace E_i \\otimes F_j \\rbrace$ is a basis for $V\\otimes W$, $\\therefore \\, \\text{dim}{(V\\otimes W)} = \\text{dim}{V} \\text{dim}{W}$\n  \\item[(c)] $\\exists \\, !$ \\, isomorphism $\\begin{aligned} & \\quad \\\\ & V\\otimes (W\\otimes X) \\to (V \\otimes W) \\otimes X \\\\ & v\\otimes (w \\otimes x) \\mapsto (v\\otimes w)\\otimes x \\end{aligned}$\n\\end{enumerate}\n\\end{proposition}\n\n\n\n\\begin{proof}\n\\begin{enumerate}\n\\item[(a)] canonical isomorphism (basis independence) construction between $V^* \\otimes W^*$ and $B(V,W)$ (space of bilinear maps) \\\\\n\nDefine $\\begin{aligned} & \\quad \\\\\n  & \\Phi : V^* \\times W^* \\to B(V,W) \\\\\n  & \\Phi(\\omega, \\eta)(v,w) = \\omega(v) \\eta(w) \\end{aligned}$\n\n$\\Phi$ bilinear (easy to check).  \n\nProp. 11.3  $\\forall \\, $ bilinear $A: V\\times W \\to X$, $\\exists \\, ! \\widetilde{A} : V\\otimes W \\to X$, any vector space $X$ s.t. $\\widetilde{A} \\pi = A$\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em]\n  {\n    V^*\\times W^*  & X  \\\\\n    V^* \\bigotimes W^*  &   \\\\ };\n  \\path[-stealth]\n  (m-1-1) edge node [auto] {$\\Phi$} (m-1-2)\n  edge node [left] { $\\pi$} (m-2-1)\n  (m-2-1) edge node [below] {$\\widetilde{\\Phi}$} (m-1-2);\n\\end{tikzpicture} \n\ni.e. 
descends uniquely to linear $\\widetilde{\\Phi} : V^* \\otimes W^* \\to B(V,W)$\n\nLet $\\begin{aligned} & \\quad \\\\\n  & (e_i) \\\\ \n  & (f_j) \\end{aligned}$ be bases for $\\begin{aligned} & \\quad \\\\ \n  & V, \\\\\n  & W \\end{aligned}$ \\quad $\\begin{aligned} & \\quad \\\\ \n  & (\\epsilon^i) \\\\ \n  & (\\varphi^j) \\end{aligned}$ \\quad dual bases \n\nsince $V^* \\otimes W^*$ is spanned by elements of the form $\\omega \\otimes \\eta$, $\\begin{aligned} & \\quad \\\\ & \\omega \\in V^* \\\\\n & \\eta \\in W^* \\end{aligned}$ \\\\\n$\\forall \\, \\tau \\in V^* \\otimes W^*$, \\, $\\tau = \\tau_{ij} \\epsilon^i \\otimes \\varphi^j $\n\nDefine $\\begin{aligned} & \\quad \\\\\n  & \\Psi : B(V, W) \\to V^* \\otimes W^* \\\\\n  & \\Psi(b) = b(e_k,  f_l) \\epsilon^k \\otimes \\varphi^l \n\\end{aligned}$\n\n\\[\n\\begin{gathered}\n  \\Psi \\widetilde{\\Phi}(\\tau) = \\widetilde{\\Phi}(\\tau)(e_k , f_l) \\epsilon^k \\otimes \\varphi^l = \\tau_{ij} \\widetilde{\\Phi}(\\epsilon^i \\otimes \\varphi^j)(e_k , f_l) \\epsilon^k \\otimes \\varphi^l = \\tau_{ij} \\Phi( \\epsilon^i , \\varphi^j) (e_k , f_l) \\epsilon^k \\otimes \\varphi^l = \\\\\n= \\tau_{ij} \\epsilon^i(e_k) \\varphi^j(f_l) \\epsilon^k\\otimes \\varphi^l = \\tau_{kl} \\epsilon^k \\otimes \\varphi^l = \\tau\n\\end{gathered}\n\\]\n\nFor $\\begin{aligned} & \\quad \\\\ \n  & v \\in V \\\\ \n  & w \\in W \\end{aligned}$\n\\[\n\\begin{gathered}\n  \\widetilde{\\Phi} \\circ \\Psi(b)(v,w) = b(e_j,f_k) \\widetilde{\\Phi}(\\epsilon^j \\otimes \\varphi^k)(v,w) = \\\\\n  = b(e_j, f_k) \\Phi(\\epsilon^j, \\varphi^k)(v,w) = b(e_j, f_k) \\epsilon^j(v) \\varphi^k(w) = b(e_j, f_k) v^j w^k = b(v,w)\n\\end{gathered}\n\\]\n\n\\item[(b)]\nGiven $\\begin{aligned}\n  & \\lbrace e_i | i \\in I \\rbrace = \\mathcal{B}_U \\\\ \n  & \\lbrace f_j | j \\in J \\rbrace = \\mathcal{B}_V \\\\ \n\\end{aligned}$\n\nBy the bilinearity of the tensor product: $a_i e_i \\otimes b_j f_j  = a_i b_j e_i \\otimes f_j$ \n\nConsider dual basis elements $\\begin{aligned} & \\quad \\\\\n  & e^*_k(e_i) = \\delta_{ik} \\\\\n   & f^*_l(f_j) = \\delta_{jl} \\end{aligned}$ and the bilinear map\n\n$\\begin{aligned} & \\quad \\\\ \n  & U \\times V \\to K \\\\\n  & (u,v) \\mapsto e^*_k(u) \\cdot f^*_l(v) \\end{aligned}$\n\nwhich induces $\\begin{aligned} & \\quad \\\\ \n  & U\\otimes V \\to K \\\\ \n   & u \\otimes v \\mapsto e^*_k(u) \\cdot f_l^*(v) \\end{aligned}$\n\n\\[\ne_i \\otimes f_j \\mapsto \\delta_{ik} \\delta_{jl}\n\\]\nIf $c_{ij} e_i \\otimes f_j = 0$, applying this map gives $c_{kl} = 0 \\quad \\forall \\, k,l$, so the $e_i\\otimes f_j$ are linearly independent; as they also span, they form a basis \n\n\\end{enumerate}\n\\end{proof}\n\n\n\n\\begin{corollary}[11.5] $V$ finite-dim. real vector space, space $T^k(V)$ of covariant $k$-tensors on $V$ canonically isomorphic to $k$-fold tensor product $V^* \\otimes \\dots \\otimes V^*$\n\\end{corollary}\n\n\\exercisehead{11.3} Prove Corollary 11.5.  \n\nIt's enough to consider the basis (good strategy).  \n\n$T^k(V)$ has basis $\\mathcal{B} = \\lbrace \\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k} | 1 \\leq i_1 \\dots i_k \\leq n \\rbrace$ (Prop. 11.2), of dim. $n^k$\n\nUse Prop. 11.4(b).  Certainly $V^*$ is a finite-dim. real vector space in its own right (it is the dual space). 
\n\n\\begin{corollary}[11.5] $V$ finite-dim. real vector space, space $T^k(V)$ of covariant $k$-tensors on $V$ canonically isomorphic to $k$-fold tensor product $V^* \\otimes \\dots \\otimes V^*$\n\\end{corollary}\n\n\\exercisehead{11.3} Prove Corollary 11.5.  \n\nIt's enough to consider the basis (good strategy).  \n\n$T^k(V)$ basis $\\mathcal{B} = \\lbrace \\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k} | 1 \\leq i_1 , \\dots , i_k \\leq n \\rbrace$ (Prop. 11.2), so $\\text{dim}{\\, T^k(V)} = n^k$\n\nUse Prop. 11.4(b).  Surely $V^*$ is a finite-dim. real vector space as well, on its own, even though it's a dual space. \n\nProp. 11.4(b): if $\\begin{aligned} & \\quad \\\\ \n& (E_i) \\text{ basis for $V$ } \\\\ \n  & (F_j) \\text{ basis for $W$ } \\end{aligned}$, \\quad then $\\lbrace E_i \\otimes F_j \\rbrace$ basis for $V\\otimes W$ and $\\text{dim}{ (V\\otimes W)} = \\text{dim}{V} \\text{dim}{W}$\n\nbasis for $V^* \\otimes \\dots \\otimes V^* = \\lbrace \\epsilon^{i_1} \\otimes \\dots \\otimes \\epsilon^{i_k} | 1 \\leq i_1 , \\dots , i_k \\leq n \\rbrace$, \\quad $\\text{dim}{ (V^* \\otimes \\dots \\otimes V^*) } = n^k$\n\nthe dimensions are the same, and matching up these bases gives the canonical isomorphism.  \n\n%\\fcdice{12}\n\\staveXXIX\n\n\n%$\\mathcal{T}$\n\n\n\\begin{lemma}[11.7] Let smooth $M$, suppose $\\begin{aligned} & \\quad \\\\ & \\sigma \\in \\mathcal{T}^k(M) \\\\ & \\tau \\in \\mathcal{T}^l(M) \\end{aligned}$ \\quad \\, $f\\in C^{\\infty}(M)$  \\\\\n\nThen $f\\sigma$, $\\sigma \\otimes \\tau$ are also smooth tensor fields, whose components are \n\n\\[\n\\begin{gathered}\n  (f\\sigma)_{i_1 \\dots i_k} = f \\sigma_{i_1 \\dots i_k } \\\\ \n  (\\sigma \\otimes \\tau)_{i_1 \\dots i_{k+l} } = \\sigma_{i_1 \\dots i_k} \\tau_{i_{k+1} \\dots i_{k+l} }\n\\end{gathered}\n\\]\n\\end{lemma}\n\n\\exercisehead{11.7}\n\nProve Lemma 11.7.  Note $T^0M = T_0M = M \\times \\mathbb{R}$\n\n\\[\n\\begin{gathered}\n  f\\sigma( p, e^{(1)}_{i_1} , \\dots , e^{(k)}_{i_k} ) = f(p) \\sigma( e^{(1)}_{i_1} \\dots e^{(k)}_{i_k} ) = f(p) \\sigma_{i_1 \\dots i_k} = (f\\sigma)_{i_1 \\dots i_k}\n\\end{gathered}\n\\]\n\n\nSuppose smooth $F:M \\to N$ \\\\\n$\\forall \\, $ smooth covariant $k$-tensor field $\\sigma$ on $N$, \\\\\ndefine $k$-tensor field $F^*\\sigma$ on $M$ by \n\\[\n(F^* \\sigma)_p = F^*(\\sigma_{F(p)})\n\\]\nexplicitly, if $X_1 \\dots X_k \\in T_pM$, then\n\\[\n(F^* \\sigma)_p(X_1 \\dots X_k) = \\sigma_{F(p)}(F_* X_1 \\dots F_* X_k)\n\\]\n\n\n\\begin{proposition}[11.9] (Properties of Tensor Field Pullbacks)\nSuppose smooth $\\begin{aligned} & \\quad \\\\ \n  & F:M \\to N \\\\\n  & G:N \\to P \\end{aligned}$, \\quad $\\begin{aligned} & \\quad \\\\ \n  & \\sigma \\in \\mathcal{T}^k(N) \\\\ \n  & \\tau \\in \\mathcal{T}^l(N) \\end{aligned}$, \\quad $f\\in C^{\\infty}(N)$ \n\\begin{enumerate}\n  \\item[(a)] $F^*(f\\sigma) = (f\\circ F) F^* \\sigma$\n\\item[(b)] $F^*(\\sigma \\otimes \\tau) = F^*\\sigma \\otimes F^*\\tau$ \n\\item[(c)] $F^*\\sigma$ smooth tensor field \n\\item[(d)] $F^*:\\mathcal{T}^k(N) \\to \\mathcal{T}^k(M)$ linear over $\\mathbb{R}$\n\\item[(e)] $(G \\circ F)^* = F^* \\circ G^*$ \n\\item[(f)] $(\\text{Id}_N)^*\\sigma = \\sigma$\n\\end{enumerate}\n\\end{proposition}\n\n\\exercisehead{11.9} Prove Prop. 11.9. 
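\n\nA numerical sanity check of the pullback definition (and of the coordinate formula in Corollary 11.10 below), using the polar-coordinate map; this sketch is not part of Lee's text and approximates the differential by central differences:\n\n\\begin{verbatim}\nimport numpy as np\n\n# F(r, theta) = (r cos theta, r sin theta); pull back the Euclidean\n# metric sigma = dy^1 (x) dy^1 + dy^2 (x) dy^2 on N = R^2.\ndef F(x):\n    r, th = x\n    return np.array([r * np.cos(th), r * np.sin(th)])\n\ndef jacobian(f, x, h=1e-6):\n    # central-difference Jacobian, column i = dF/dx^i\n    cols = [(f(x + h * e) - f(x - h * e)) / (2 * h)\n            for e in np.eye(len(x))]\n    return np.column_stack(cols)\n\nsigma = np.eye(2)                 # sigma_{j1 j2} = delta_{j1 j2}\nx = np.array([2.0, 0.7])          # a point p = (r, theta)\nJ = jacobian(F, x)\n\n# (F* sigma)_{i1 i2} = sigma_{j1 j2}(F(p)) dF^{j1}/dx^{i1} dF^{j2}/dx^{i2}\npullback = J.T @ sigma @ J\nassert np.allclose(pullback, np.diag([1.0, x[0] ** 2]), atol=1e-5)\n# i.e. the pullback metric is dr (x) dr + r^2 dtheta (x) dtheta\n\\end{verbatim}\n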
\n\n\\begin{corollary}[11.10] Let smooth $F:M \\to N$, $\\sigma \\in \\mathcal{T}^k(N)$ \\\\\nIf $p\\in M$, smooth coordinates $(y^j)$ for $N$ on neighborhood of $F(p)$, then $F^*\\sigma$ is given near $p$ by \n\\[\nF^*(\\sigma_{j_1 \\dots j_k} dy^{j_1} \\otimes \\dots \\otimes dy^{j_k} ) = (\\sigma_{j_1 \\dots j_k } \\circ F ) d(y^{j_1 } \\circ F) \\otimes \\dots \\otimes d(y^{j_k} \\circ F)\n\\]\n\\end{corollary}\n\n\n20130919\n\nHowever, in the special case of a diffeomorphism, tensor fields of any variance can be pushed forward and pulled back at will (see Problem 11-6).\n\n\\subsection*{Symmetric Tensors}\n\n20130919\n\n\\exercisehead{11.10}\n\n\\[\nT_{i_1 \\dots i_k} = T(E_{i_1} \\dots E_{i_k} ) = T(E_{i_1} \\dots E_{i_s} \\dots E_{i_r} \\dots E_{i_k} ) = T_{i_1 \\dots i_s \\dots i_r \\dots i_k } \\quad \\quad \\, r<s\n\\]\n\n\\staveXXIX\n\ndenote the set of symmetric covariant $k$-tensors on $V$ by $\\Sigma^k(V)$ \\\\\n\ndefine $\\,^{\\sigma}T(X_1 \\dots X_k) = T(X_{\\sigma(1)} \\dots X_{ \\sigma(k)})$ \\\\\n\ndefine $\\text{Sym}T = \\frac{1}{k!}  \\sum_{\\sigma \\in S_k} \\,^{\\sigma}T$ \\\\\n\nIf $\\begin{aligned} & \\quad \\\\ \n  & S \\in \\Sigma^k(V) \\\\ \n  & T\\in \\Sigma^l(V) \\end{aligned}$, \\, define $ST = \\text{Sym}{ (S\\otimes T)}$\n\n\\[\nST(X_1 \\dots X_{k+l}) = \\frac{1}{ (k+l)!} \\sum_{ \\sigma \\in S_{k+l} } S(X_{\\sigma(1)} \\dots X_{\\sigma(k)} ) T(X_{\\sigma(k+1)}, \\dots , X_{\\sigma(k+l) } )\n\\]\n\n\\begin{proposition}[12.15] (Properties of the Symmetric Product)\n\\begin{enumerate}\n\\item[(a)]\n\\item[(b)] if $\\omega, \\eta$ covectors, $ \\omega \\eta = \\frac{1}{2} ( \\omega \\otimes \\eta + \\eta \\otimes \\omega)$\n\\end{enumerate}\n\\end{proposition}\n\n20130919\n\\exercisehead{12.16} Prove Proposition 12.15\n\n\\begin{enumerate}\n\\item[(a)]\n\\item[(b)] \\[\n\\begin{gathered}\n  \\omega \\eta(e_i, e_j) = \\frac{1}{2} ( \\omega(e_i) \\eta(e_j) + \\omega(e_j) \\eta(e_i)  ) = \\frac{1}{2} ( \\omega(e_i) \\eta(e_j) + \\eta(e_i) \\omega(e_j)) = \\frac{1}{2} ( \\omega \\otimes \\eta + \\eta \\otimes \\omega)(e_i ,e_j)\n\\end{gathered}\n\\]\n\na direct application of the definition of $ST \\equiv \\text{Sym}{(S\\otimes T)}$ and of the definition of the tensor product, $S\\otimes T(X_1 \\dots X_{k+l} ) = S(X_1 \\dots X_k) T(X_{k+1} \\dots X_{k+l})$.  \n\n\\end{enumerate}
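\n\nAgain as an aside (not part of the text): for $k = 2$, symmetrization and the symmetric product are easy to check numerically, viewing a covariant 2-tensor on $\\mathbb{R}^n$ as a matrix:\n\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import permutations\n\nrng = np.random.default_rng(2)\nn = 3\nT = rng.normal(size=(n, n))       # T(X, Y) = X^T T Y\n\n# Sym T = (1/k!) sum over sigma of the sigma-permuted T; for k = 2\n# this is just the symmetric part of the matrix.\nSymT = sum(T.transpose(p) for p in permutations(range(2))) / 2\nassert np.allclose(SymT, (T + T.T) / 2)\n\n# For covectors: omega.eta = (1/2)(omega (x) eta + eta (x) omega)\nomega, eta = rng.normal(size=n), rng.normal(size=n)\nprod = (np.outer(omega, eta) + np.outer(eta, omega)) / 2\nX, Y = rng.normal(size=n), rng.normal(size=n)\nlhs = X @ prod @ Y\nrhs = ((omega @ X) * (eta @ Y) + (eta @ X) * (omega @ Y)) / 2\nassert np.isclose(lhs, rhs)\n\\end{verbatim}\n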
\n\n\\subsubsection*{Alternating Tensors}\n\n\n\n\n\n\\subsubsection*{Lie Derivatives of Tensor Fields}\n\n\n\n\\begin{lemma}[12.30] smooth $M,V,A$, $\\exists \\, $ (12.8) $\\forall \\, p \\in M$, and defines $\\mathcal{L}_VA$ as smooth tensor field on $M$\n\\end{lemma}\n\n\\exercisehead{12.31}\n\nSuppose smooth $M$, smooth $V$, smooth covariant tensor field $A$\n\n$I = (i_1 \\dots i_k)$ \\\\\n\\quad \\, $i_j = 1 , \\dots , \\text{dim}{M}$ \\\\\n\n$\\theta_t(p) = y$ (notation) \n\n\\[\nA = A(y) = A(\\theta_t(p)) = A_I(y) dy^I = A_I(\\theta_t(p)) dy^I\n\\]\nwith $A_I$ smooth function of $\\theta_t(p) = y$\n\n\n\\[\nv_1 = \\delta^{j_1}_{ \\, \\, i_1} \\frac{ \\partial }{ \\partial x^{j_1}} \\quad \\quad \\, v^{j_1}_{ (1)} = \\delta^{j_1}_{ i_1}\n\\]\n\n\n\\[\nd(\\theta_t)^*_p(A_{\\theta_t(p)}) \\frac{ \\partial }{ \\partial x^I} = A_{\\theta_t(p)}(d(\\theta_t)_p \\frac{ \\partial }{ \\partial x^I } )\n\\]\n$d(\\theta_t)_p = \\frac{ \\partial y^i}{ \\partial x^j}$\n\n\\[\nd(\\theta_t)_p v_1  = \\frac{ \\partial y^{i_1}}{ \\partial x^{j_1}} v^{j_1}_{(1)} \\frac{ \\partial }{ \\partial y^{i_1}}\n\\]\n\n\n\\[\n\\Longrightarrow \\frac{ \\partial y^{k_1} }{ \\partial x^{j_1} } \\delta^{j_1}_{i_1} \\frac{ \\partial }{ \\partial y^{k_1}} = \\frac{ \\partial y^{j_1}}{ \\partial x^{i_1}} \\frac{ \\partial }{ \\partial y^{j_1}} \n\\]\n\n$\\frac{ \\partial }{ \\partial x^I} = \\frac{ \\partial }{ \\partial x^{i_1} } \\otimes \\dots \\otimes \\frac{ \\partial }{ \\partial x^{i_k} }$\n\n\\[\nd(\\theta_t)_p \\frac{ \\partial }{ \\partial x^I} = d(\\theta_t)_pv_1 \\dots d(\\theta_t)_pv_k =  \\frac{ \\partial y^J}{ \\partial x^I} \\frac{ \\partial }{ \\partial y^J}\n\\]\n\n\n%\\[\n%dx^I = dx^{i_1} \\otimes \\dots \\otimes dx^{i_k} = \\delta^{i_1}_{ \\, \\, j_1} dx^{j_1} \\otimes \\dots \\otimes \\delta^{i_k}_{ \\, \\, j_k} dx^{j_k} = \\delta^I_{ \\, \\, J} dx^J\n%\\]\n\n%\\[\n%\\frac{ \\partial y^{i_1}}{ \\partial x^{j_1} } dx^{j_1} \\otimes \\dots \\otimes \\frac{ \\partial y^{i_k}}{ \\partial x^{j_k}} dx^{j_k} = \\frac{ \\partial y^I}{ \\partial x^J} dx^J\n%\\]\n\n\n\n%\\[\n%A_{\\theta_t(p)}( d(\\theta_t)_p \\frac{ \\partial }{ \\partial x^I}  ) = A_K(y) dy^K \\frac{ \\partial y^J}{ \\partial x^I} \\frac{ \\partial }{ \\partial y^J} = A_J(y) \\frac{ \\partial y^J}{ \\partial x^I}\n%\\]\n%\\[\n%d(\\theta_t)^*_p(A)\n%\\]\n\n\\[\n\\begin{gathered}\n  A_{\\theta_t(p)}(d(\\theta_t)_p(v_1) \\dots d(\\theta_t)_p(v_k) ) = A_J(\\theta_t(p))\\frac{ \\partial y^J}{ \\partial x^I} \n\\end{gathered}\n\\]\n\\[\n\\begin{aligned}\n  & d(\\theta_t)^*_p(A_{\\theta_t(p)}) = A^*_J dx^J \\\\ \n  &  d(\\theta_t)^*_p(A_{\\theta_t(p)}) \\frac{ \\partial }{ \\partial x^I} = A_I^* = A_J \\frac{ \\partial y^J}{ \\partial x^I }\n\\end{aligned}\n\\]\n\n\\[\n\\Longrightarrow ( \\mathcal{L}_VA)_p = \\left. \\frac{d}{dt} \\right|_{t=0} (\\theta_t^* A)_p = \\left. \\frac{d}{dt} \\right|_{t=0} A_J \\left. \\frac{ \\partial \\theta^J(t,x) }{ \\partial x^I } \\right|_p dx^I\n\\]\n$A_J = A_J(\\theta_t(p)) = A_J(\\theta(t,x))$\n\n$\\theta$ smooth in $t,x$, so $A_J$ smooth in $t$ and $x$  \\\\\n\nso since $\\left. \\left( \\mathcal{L}_VA\\right)_I \\right|_p$ smooth $\\forall \\, I \\in \\lbrace (i_1 \\dots i_k) | i_j = 1 , \\dots , \\text{dim}{M}, \\, j = 1 , \\dots , k \\rbrace$, \\\\\n\\quad $\\left. 
(\\mathcal{L}_VA)  \\right|_p$ smooth tensor field on $M$\n\n\n\n\\hrulefill\n\n\n\\begin{proposition}[12.32]\n  \\begin{enumerate}\n\\item[(a)] $\\mathcal{L}_V f = Vf$ \n\\item[(b)] $\\mathcal{L}_V(fA) = (\\mathcal{L}_Vf )A + f\\mathcal{L}_VA$\n\\item[(c)] $\\mathcal{L}_V(A\\otimes B) = (\\mathcal{L}_VA) \\otimes B + A \\otimes \\mathcal{L}_V B$ \n\\item[(d)] If $X_1 \\dots X_k$ smooth vector fields, $A$ smooth $k$-tensor field, \n\\begin{equation}\n  \\mathcal{L}_V(A(X_1 \\dots X_k)) = (\\mathcal{L}_VA)(X_1 \\dots X_k) + A(\\mathcal{L}_VX_1 \\dots X_k) + \\dots + A(X_1 \\dots \\mathcal{L}_VX_k) \\quad \\quad \\quad \\, (12.9)\n\\end{equation}\n\\end{enumerate}\n\n\\end{proposition}\n\n\\begin{corollary}[12.33]\n\\begin{equation}\n  (\\mathcal{L}_VA)(X_1 \\dots X_k) = V(A(X_1 \\dots X_k))  - A([V,X_1], X_2 \\dots X_k) - \\dots - A(X_1 \\dots X_{k-1}, [V,X_k]) \\quad \\quad \\quad \\, (12.10)\n\\end{equation}\n\\end{corollary}\n\n\n\n", "meta": {"hexsha": "ec48388741ccc93b70228eefcf74402ba1e6d786", "size": 20498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LeeJM/12tensors.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/12tensors.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/12tensors.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 38.969581749, "max_line_length": 319, "alphanum_fraction": 0.5744462874, "num_tokens": 8330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311856832191, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5735945539546504}}
{"text": "%This chapter was modified on 6-4-05.\n\\setcounter{chapter}{9}\n\n\\chapter{Generating Functions}\\label{chp 10}  \n\n\\section[Discrete Distributions]{Generating Functions for Discrete Distributions}\\label{sec\n10.1} \n\\par\nSo far we have considered in detail only the two most important attributes of a\nrandom variable, namely, the mean and the variance.  We have seen how these\nattributes enter into the fundamental limit theorems of probability, as well as\ninto all sorts of practical calculations.  We have seen that the mean and\nvariance of a random variable contain important information about the random\nvariable, or, more precisely, about the distribution function of that variable.\nNow we shall see that the mean and variance do  \\emx {not} contain \\emx {all}\nthe available information about the density function of a random variable.  To\nbegin with, it is easy to give examples of different distribution functions which\nhave the same mean and the same variance.  For instance, suppose $X$~and~$Y$\nare random variables, with distributions\n\n$$p_X = \\pmatrix{\n1 & 2 & 3 & 4 & 5 & 6\\cr\n0 & 1/4 & 1/2 & 0 & 0 & 1/4\\cr},$$ \n$$p_Y = \\pmatrix{\n1 & 2 & 3 & 4 & 5 & 6\\cr\n1/4 & 0 & 0 & 1/2 & 1/4 & 0\\cr}.$$\n\nThen with these choices, we have $E(X) = E(Y) = 7/2$ and $V(X) = V(Y) = 9/4$, and\nyet certainly $p_X$~and~$p_Y$ are quite different density functions.\n\nThis raises a question: If $X$ is a random variable with range\n$\\{x_1, x_2, \\ldots\\}$ of at most countable size, and distribution function $p = p_X$, \nand if we know its mean $\\mu = E(X)$ and its variance $\\sigma^2 = V(X)$, then\nwhat else do we need to know to determine $p$ completely?\n\n\\subsection*{Moments}\nA nice answer to this question, at least in the case that $X$ has finite range, can be given in terms of\nthe\n\\emx {moments}\\index{moments} of~$X$, which are numbers defined as follows:\n\\pagebreak[4]\n\\begin{eqnarray*}\n\\mu_k &=& k \\mbox{th}\\,\\,\\mbox{moment~of}\\,\\, X\\\\\n      &=& E(X^k) \\\\\n      &=& \\sum_{j = 1}^\\infty (x_j)^k p(x_j)\\ ,\n\\end{eqnarray*}\nprovided the sum converges.  Here $p(x_j) = P(X = x_j)$.\n\\par\nIn terms of these moments, the mean~$\\mu$ and variance~$\\sigma^2$ of~$X$ are\ngiven simply by\n\n\\begin{eqnarray*}\n     \\mu &=& \\mu_1, \\\\\n\\sigma^2 &=& \\mu_2 - \\mu_1^2\\ ,\n\\end{eqnarray*}\nso that a knowledge of the first two moments of~$X$ gives us its mean and\nvariance.  But a knowledge of \\emx {all} the moments of~$X$ determines its\ndistribution function $p$ completely.\n\n\\subsection*{Moment Generating Functions}\nTo see how this comes about, we introduce a new variable $t$, and define a\nfunction $g(t)$ as follows:\n\n\\begin{eqnarray*}\ng(t) &=& E(e^{tX}) \\\\\n     &=& \\sum_{k = 0}^\\infty \\frac {\\mu_k t^k}{k!} \\\\\n     &=& E\\left(\\sum_{k = 0}^\\infty \\frac {X^k t^k}{k!} \\right) \\\\\n     &=& \\sum_{j = 1}^\\infty e^{tx_j} p(x_j)\\ .\n\\end{eqnarray*}\nWe call $g(t)$ the \\emx {moment generating function}\\index{generating\nfunction!moment}\\index{moment generating function} for~$X$, and think of it as a convenient\nbookkeeping device for describing the moments of~$X$.  Indeed, if we differentiate $g(t)$ $n$\ntimes and then set\n$t = 0$, we get $\\mu_n$:\n\n\\begin{eqnarray*}\n\\left. \\frac {d^n}{dt^n} g(t) \\right|_{t = 0} &=& g^{(n)}(0) \\\\\n     &=& \\left. 
\\sum_{k = n}^\\infty \\frac {k!\\, \\mu_k t^{k - n}} {(k - n)!\\, k!}\n\\right|_{t = 0} \\\\\n     &=& \\mu_n\\ .\n\\end{eqnarray*}\nIt is easy to calculate the moment generating function for simple examples.\n\n\\subsection*{Examples}\n\\begin{example}\\label{exam 10.1}\nSuppose $X$ has range $\\{1,2,3,\\ldots,n\\}$ and $p_X(j) = 1/n$ for $1 \\leq j\n\\leq n$ (uniform distribution).  Then\n\n\\begin{eqnarray*}\ng(t) &=& \\sum_{j = 1}^n \\frac 1n e^{tj} \\\\\n     &=& \\frac 1n (e^t + e^{2t} +\\cdots+ e^{nt}) \\\\\n     &=& \\frac {e^t (e^{nt} - 1)} {n (e^t - 1)}\\ .\n\\end{eqnarray*}\nIf we use the expression on the right-hand side of the second line above, then it is\neasy to see that\n\n\\begin{eqnarray*}\n\\mu_1 &=& g'(0) = \\frac 1n (1 + 2 + 3 + \\cdots + n) = \\frac {n + 1}2, \\\\\n\\mu_2 &=& g''(0) = \\frac 1n (1 + 4 + 9+ \\cdots + n^2) = \\frac {(n + 1)(2n + 1)}6\\ ,\n\\end{eqnarray*}\nand that $\\mu = \\mu_1 = (n + 1)/2$ and $\\sigma^2 = \\mu_2 - \\mu_1^2 = (n^2 -\n1)/12$.\n\\end{example}\n\n\\begin{example}\\label{exam 10.2}\nSuppose now that $X$ has range $\\{0,1,2,3,\\ldots,n\\}$ and $p_X(j) = {n \\choose j} p^j\nq^{n - j}$ for $0 \\leq j \\leq n$ (binomial distribution).  Then\n\\begin{eqnarray*}\ng(t) &=& \\sum_{j = 0}^n e^{tj} {n \\choose j} p^j q^{n - j} \\\\\n     &=& \\sum_{j = 0}^n {n \\choose j} (pe^t)^j q^{n - j} \\\\\n     &=& (pe^t + q)^n\\ .\n\\end{eqnarray*}\nNote that\n\\begin{eqnarray*}\n\\mu_1 = g'(0) &=& \\left. n(pe^t + q)^{n - 1}pe^t \\right|_{t = 0} = np\\ , \\\\\n\\mu_2 = g''(0) &=& n(n - 1)p^2 + np\\ ,\n\\end{eqnarray*}\nso that $\\mu = \\mu_1 = np$, and $\\sigma^2 = \\mu_2 - \\mu_1^2 = np(1 - p)$, as\nexpected.\n\\end{example}\n\n\\begin{example}\\label{exam 10.3}\nSuppose $X$ has range $\\{1,2,3,\\ldots\\}$ and $p_X(j) = q^{j - 1}p$ for all~$j$\n(geometric distribution).  Then\n\\begin{eqnarray*}\ng(t) &=& \\sum_{j = 1}^\\infty e^{tj} q^{j - 1}p \\\\\n     &=& \\frac {pe^t}{1 - qe^t}\\ .\n\\end{eqnarray*}\nHere\n\\begin{eqnarray*}\n\\mu_1 &=& g'(0) = \\left. \\frac {pe^t}{(1 - qe^t)^2} \\right|_{t = 0} = \\frac 1p\\ , \\\\\n\\mu_2 &=& g''(0) = \\left. \\frac {pe^t + pqe^{2t}}{(1 - qe^t)^3} \\right|_{t = 0} = \\frac\n{1 + q}{p^2}\\ ,\n\\end{eqnarray*}\n$\\mu = \\mu_1 = 1/p$, and $\\sigma^2 = \\mu_2 - \\mu_1^2 = q/p^2$, as computed in\nExample~\\ref{exam 6.21}.\n\\end{example}\n\n\\begin{example}\\label{exam 10.4}\nLet $X$ have range $\\{0,1,2,3,\\ldots\\}$ and let $p_X(j) =\ne^{-\\lambda}\\lambda^j/j!$ for all~$j$ (Poisson distribution with mean~$\\lambda$). \nThen\n\\begin{eqnarray*}\ng(t) &=& \\sum_{j = 0}^\\infty e^{tj} \\frac {e^{-\\lambda}\\lambda^j}{j!} \\\\\n     &=& e^{-\\lambda} \\sum_{j = 0}^\\infty \\frac {(\\lambda e^t)^j}{j!} \\\\\n     &=& e^{-\\lambda} e^{\\lambda e^t} = e^{\\lambda(e^t - 1)}\\ .\n\\end{eqnarray*}\nThen\n\\begin{eqnarray*}\n\\mu_1 &=& g'(0) = \\left. e^{\\lambda(e^t - 1)}\\lambda e^t \\right|_{t = 0} = \\lambda\\ ,\\\\\n\\mu_2 &=& g''(0) = \\left. 
e^{\\lambda(e^t - 1)} (\\lambda^2 e^{2t} + \\lambda e^t)\n\\right|_{t = 0} = \\lambda^2 + \\lambda\\ ,\n\\end{eqnarray*} \n$\\mu = \\mu_1 = \\lambda$, and $\\sigma^2 = \\mu_2 - \\mu_1^2 = \\lambda$.\n\\par\nThe variance of the Poisson distribution is easier to obtain in this way than directly\nfrom the definition (as was done in Exercise~\\ref{sec 6.2}.\\ref{exer 6.2.100}).\n\\end{example}\n\n\\subsection*{Moment Problem}\\index{moment problem}\nUsing the moment generating function, we can now show, at least in the case of\na discrete random variable with finite range, that its distribution function is\ncompletely determined by its moments.\n\\begin{theorem}\\label{thm 10.2}\nLet $X$ be a discrete random variable with finite range \n$\\{x_1,x_2,\\ldots,\\linebreak x_n\\}$, distribution function~$p$, and moment generating\nfunction~$g$.  Then $g$ is uniquely determined by~$p$, and conversely.\n\\proof\nWe know that $p$ determines $g$, since\n$$\ng(t) = \\sum_{j = 1}^n e^{tx_j} p(x_j)\\ .\n$$\nConversely, assume that $g(t)$ is known.  We wish to determine the values of $x_j$ and $p(x_j)$, for $1 \\le j \\le n$.  We assume, without loss of generality, that $p(x_j) > 0$ for $1 \\le j \\le n$, and that \n$$x_1 < x_2 < \\ldots < x_n\\ .$$\nWe note that $g(t)$ is differentiable for all $t$, since it is a finite linear combination of exponential functions.  If we compute $g'(t)/g(t)$, we obtain\n$${{x_1 p(x_1) e^{tx_1} + \\ldots + x_n p(x_n) e^{t x_n}}\\over{p(x_1) e^{tx_1} + \\ldots + p(x_n)e^{tx_n}}}\\ .$$\nDividing both top and bottom by $e^{tx_n}$, we obtain the expression\n$${{x_1 p(x_1) e^{t(x_1-x_n)} + \\ldots + x_n p(x_n)}\\over{p(x_1) e^{t(x_1 - x_n)} + \\ldots + p(x_n)}}\\ .$$\nSince $x_n$ is the largest of the $x_j$'s, this expression approaches $x_n$ as $t$ goes to $\\infty$. So we have shown that\n$$x_n = \\lim_{t \\rightarrow \\infty} {{g'(t)}\\over{g(t)}}\\ .$$\nTo find $p(x_n)$, we simply divide $g(t)$ by $e^{tx_n}$ and let $t$ go to $\\infty$.\nOnce $x_n$ and $p(x_n)$ have been determined, we can subtract $p(x_n) e^{tx_n}$ from $g(t)$, and repeat the above procedure with the resulting function, obtaining, in turn, $x_{n-1}, \\ldots, x_1$ and $p(x_{n-1}), \\ldots, p(x_1)$.\n\\end{theorem}\n\nIf we delete the hypothesis that $X$ have finite range in the above theorem, then\nthe conclusion is no longer necessarily true.\n\n\\subsection*{Ordinary Generating Functions}\nIn the special but important case where the $x_j$ are all nonnegative integers,\n$x_j = j$, we can prove this theorem in a simpler way.\n\nIn this case, we have\n$$\ng(t) = \\sum_{j = 0}^n e^{tj} p(j)\\ ,\n$$\nand we see that $g(t)$ is a \\emx {polynomial} in $e^t$.  If we write $z =\ne^t$, and define the function $h$ by\n$$\nh(z) = \\sum_{j = 0}^n z^j p(j)\\ ,\n$$\nthen $h(z)$ is a polynomial in~$z$ containing the same information as $g(t)$,\nand in fact\n\\begin{eqnarray*}\nh(z) &=& g(\\log z)\\ , \\\\\ng(t) &=& h(e^t)\\ .\n\\end{eqnarray*}\nThe function $h(z)$ is often called the \\emx {ordinary generating function}\\index{ordinary\ngenerating function}\\index{generating function!ordinary} for~$X$.  Note that $h(1) = g(0) =\n1$,\n$h'(1) = g'(0) =\n\\mu_1$, and\n$h''(1) = g''(0) - g'(0) = \\mu_2 - \\mu_1$.  
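\n\nThese relations are easy to verify symbolically.  The following quick check for a fair die uses Python with the sympy library; the code is ours, not part of the text:\n\n\\begin{verbatim}\nimport sympy as sp\n\nt, z = sp.symbols('t z')\n# Fair die: p(j) = 1/6 on {1, ..., 6}\ng = sum(sp.exp(t * j) for j in range(1, 7)) / 6   # moment generating fn\nh = sum(z ** j for j in range(1, 7)) / 6          # ordinary generating fn\n\nassert sp.simplify(g - h.subs(z, sp.exp(t))) == 0    # g(t) = h(e^t)\nmu1 = sp.diff(g, t).subs(t, 0)                       # 7/2\nmu2 = sp.diff(g, t, 2).subs(t, 0)                    # 91/6\nassert h.subs(z, 1) == 1\nassert sp.diff(h, z).subs(z, 1) == mu1\nassert sp.simplify(sp.diff(h, z, 2).subs(z, 1) - (mu2 - mu1)) == 0\n\\end{verbatim}\n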
It follows from all this that if we know\n$g(t)$, then we know $h(z)$, and if we know $h(z)$, then we can find the $p(j)$\nby Taylor's formula:\n\\begin{eqnarray*}\np(j) &=& \\mbox{coefficient~of}\\,\\, z^j \\,\\, \\mbox{in}\\,\\, h(z) \\\\\n     &=& \\frac{h^{(j)}(0)}{j!}\\ .\n\\end{eqnarray*}\n\nFor example, suppose we know that the moments of a certain discrete random\nvariable $X$ are given by\n\\begin{eqnarray*}\n\\mu_0 &=& 1\\ , \\\\\n\\mu_k &=& \\frac12 + \\frac{2^k}4\\ , \\qquad \\mbox{for}\\,\\, k \\geq 1\\ .\n\\end{eqnarray*}\nThen the moment generating function $g$ of~$X$ is\n\\begin{eqnarray*}\ng(t) &=& \\sum_{k = 0}^\\infty \\frac{\\mu_k t^k}{k!} \\\\\n     &=& 1 + \\frac12 \\sum_{k = 1}^\\infty \\frac{t^k}{k!} + \\frac14 \\sum_{k =\n1}^\\infty \\frac{(2t)^k}{k!} \\\\\n     &=& \\frac14 + \\frac12 e^t + \\frac14 e^{2t}\\ .\n\\end{eqnarray*}\nThis is a polynomial in $z = e^t$, and\n$$\nh(z) = \\frac14 + \\frac12 z + \\frac14 z^2\\ .\n$$\nHence, $X$ must have range $\\{0,1,2\\}$, and $p$ must have values\n$\\{1/4,1/2,1/4\\}$.\n\n\\subsection*{Properties}\nBoth the moment generating function $g$ and the ordinary generating function\n$h$ have many properties useful in the study of random variables, of which we\ncan consider only a few here.  In particular, if $X$ is any discrete random \nvariable and $Y = X + a$, then\n\\begin{eqnarray*}\ng_Y(t) &=& E(e^{tY}) \\\\\n       &=& E(e^{t(X + a)}) \\\\\n       &=& e^{ta} E(e^{tX}) \\\\\n       &=& e^{ta} g_X(t)\\ ,\n\\end{eqnarray*}\nwhile if $Y = bX$, then\n\\begin{eqnarray*}\ng_Y(t) &=& E(e^{tY}) \\\\\n       &=& E(e^{tbX}) \\\\\n       &=& g_X(bt)\\ .\n\\end{eqnarray*}\nIn particular, if\n$$\nX^* = \\frac{X - \\mu}\\sigma\\ ,\n$$\nthen (see Exercise~\\ref{exer 10.1.14})\n$$\ng_{X^*}(t) = e^{-\\mu t/\\sigma} g_X\\left( \\frac t\\sigma \\right)\\ .\n$$\n\nIf $X$ and $Y$ are \\emx {independent} random variables and $Z = X + Y$ is\ntheir sum, with $p_X$,~$p_Y$, and~$p_Z$ the associated distribution functions, then\nwe have seen in Chapter~\\ref{chp 7} that $p_Z$ is the \\emx {convolution} of\n$p_X$~and~$p_Y$, and we know that convolution involves a rather complicated\ncalculation.  But for the generating functions we have instead the simple\nrelations\n\\begin{eqnarray*}\ng_Z(t) &=& g_X(t) g_Y(t)\\ , \\\\\nh_Z(z) &=& h_X(z) h_Y(z)\\ ,\n\\end{eqnarray*}\nthat is, $g_Z$ is simply the \\emx {product} of $g_X$~and~$g_Y$, and similarly\nfor~$h_Z$.\n\nTo see this, first note that if $X$~and~$Y$ are independent, then $e^{tX}$ and\n$e^{tY}$ are independent (see Exercise~\\ref{sec 5.2}.\\ref{exer 5.2.38}), and hence\n$$\nE(e^{tX} e^{tY}) = E(e^{tX}) E(e^{tY})\\ .\n$$\nIt follows that\n\\begin{eqnarray*}\ng_Z(t) &=& E(e^{tZ}) = E(e^{t(X + Y)}) \\\\\n       &=& E(e^{tX}) E(e^{tY}) \\\\\n       &=& g_X(t) g_Y(t)\\ ,\n\\end{eqnarray*}\nand, replacing $t$ by $\\log z$, we also get\n$$\nh_Z(z) = h_X(z) h_Y(z)\\ .\n$$\n\n\\begin{example}\\label{exam 10.5}\nIf $X$~and~$Y$ are independent discrete random variables with range\n$\\{0,1,2,\\ldots,n\\}$ and binomial distribution\n$$\np_X(j) = p_Y(j) = {n \\choose j} p^j q^{n - j}\\ ,\n$$\nand if $Z = X + Y$, then we know (cf. Section~\\ref{sec 7.1}) that the range of $Z$ is\n$$\\{0,1,2,\\ldots,2n\\}$$ \nand $Z$ has binomial distribution\n$$\np_Z(j) = (p_X * p_Y)(j) =\n{2n \\choose j} p^j q^{2n - j}\\ .\n$$\nHere we can easily verify this result by using generating functions.  
We know\nthat\n\\begin{eqnarray*}\ng_X(t) = g_Y(t) &=& \\sum_{j = 0}^n e^{tj} {n \\choose j} p^j q^{n - j} \\\\\n       &=& (pe^t + q)^n\\ ,\n\\end{eqnarray*}\nand\n$$\nh_X(z) = h_Y(z) = (pz + q)^n\\ .\n$$\nHence, we have\n$$\ng_Z(t) = g_X(t) g_Y(t) = (pe^t + q)^{2n}\\ ,\n$$\nor, what is the same,\n\\begin{eqnarray*}\nh_Z(z) &=& h_X(z) h_Y(z) = (pz + q)^{2n} \\\\\n       &=& \\sum_{j = 0}^{2n} {2n \\choose j} (pz)^j q^{2n - j}\\ ,\n\\end{eqnarray*}\nfrom which we can see that the coefficient of~$z^j$ is just $p_Z(j) =\n{2n \\choose j} p^j q^{2n - j}$.\n\\end{example}\n\n\\begin{example}\\label{exam 10.6}\nIf $X$~and~$Y$ are independent discrete random variables with the non-negative \nintegers $\\{0,1,2,3,\\ldots\\}$ as range, and with geometric distribution function \n$$p_X(j) = p_Y(j) = q^j p\\ ,$$ \nthen\n$$\ng_X(t) = g_Y(t) = \\frac p{1 - qe^t}\\ ,\n$$\nand if $Z = X + Y$, then\n\\begin{eqnarray*}\ng_Z(t) &=& g_X(t) g_Y(t) \\\\\n       &=& \\frac{p^2}{1 - 2qe^t + q^2 e^{2t}}\\ .\n\\end{eqnarray*}\nIf we replace $e^t$ by~$z$, we get\n\\begin{eqnarray*}\nh_Z(z) &=& \\frac{p^2}{(1 - qz)^2} \\\\\n       &=& p^2 \\sum_{k = 0}^\\infty (k + 1) q^k z^k\\ ,\n\\end{eqnarray*}\nand we can read off the values of~$p_Z(j)$ as the coefficient of~$z^j$ in this\nexpansion for~$h(z)$, even though $h(z)$ is not a polynomial in this case.  The\ndistribution $p_Z$ is a negative binomial distribution (see \nSection~\\ref{sec 5.1}).\n\\end{example}\n\nHere is a more interesting example of the power and scope of the method of\ngenerating functions.\n\n\\subsection*{Heads or Tails}\n\\begin{example}\\label{exam 10.1.7}\nIn the coin-tossing game discussed in Example~\\ref{exam 1.3}, we now consider the\nquestion ``When is Peter first in the lead?\"\n\\par\nLet $X_k$ describe the outcome of the $k$th trial in the game\n$$\nX_k = \\left \\{ \\matrix{ \n               +1,  &\\mbox{if}\\,\\, k{\\rm th}\\,\\, \\mbox{toss~is~heads}, \\cr\n               -1,  &\\mbox{if}\\,\\, k{\\rm th}\\,\\, \\mbox{toss~is~tails.}\\cr}\\right. \n$$\nThen the $X_k$ are independent random variables describing a Bernoulli\nprocess.  Let $S_0 = 0$, and, for $n \\geq 1$, let\n$$S_n = X_1 + X_2 + \\cdots + X_n\\ .$$\nThen $S_n$ describes Peter's fortune after $n$ trials, and Peter is first in\nthe lead after $n$ trials if $S_k \\leq 0$ for $1 \\leq k < n$ and $S_n = 1$.\n\\par\nNow this can happen when $n = 1$, in which case $S_1 = X_1 = 1$, or when $n >\n1$, in which case $S_1 = X_1 = -1$.  In the latter case, $S_k = 0$ for $k = n -\n1$, and perhaps for other $k$ between 1~and~$n$.  Let $m$ be the \\emx {least}\nsuch value of~$k$; then $S_m = 0$ and $S_k < 0$ for $1 \\leq k < m$.  In this\ncase Peter loses on the first trial, regains his initial position in the next\n$m - 1$ trials, and gains the lead in the next $n - m$ trials.\n\\par\nLet $p$ be the probability that the coin comes up heads, and let $q = 1-p$.\nLet $r_n$ be the probability that Peter is first in the lead after $n$ trials. \nThen from the discussion above, we see that\n\\begin{eqnarray*}\nr_n &=& 0\\ , \\qquad \\mbox{if}\\,\\, n\\,\\, \\mbox{even}, \\\\\nr_1 &=& p \\qquad  (= \\mbox{probability~of~heads~in~a~single~toss)}, \\\\\nr_n &=& q(r_1r_{n-2} + r_3r_{n-4} +\\cdots+ r_{n-2}r_1)\\ , \\qquad \\mbox{if}\n\\ n > 1,\\ n\\ \\mbox{odd}.\n\\end{eqnarray*}\nNow let $T$ describe the time (that is, the number of trials) required for\nPeter to take the lead.  
Then $T$ is a random variable, and since $P(T = n) =\nr_n$, $r$ is the distribution function for~$T$.\n\nWe introduce the generating function $h_T(z)$ for~$T$:\n$$\nh_T(z) = \\sum_{n = 0}^\\infty r_n z^n\\ .\n$$\nThen, by using the relations above, we can verify the relation\n$$\nh_T(z) = pz + qz(h_T(z))^2\\ .\n$$\nIf we solve this quadratic equation for~$h_T(z)$, we get\n$$\nh_T(z) = \\frac{1 \\pm \\sqrt{1 - 4pqz^2}}{2qz} = \\frac{2pz}{1 \\mp \\sqrt{1 -\n4pqz^2}}\\ .\n$$\nOf these two solutions, we want the one that has a convergent power series\nin~$z$ (i.e., that is finite for $z = 0$).  Hence we choose\n$$\nh_T(z) = \\frac{1 - \\sqrt{1 - 4pqz^2}}{2qz} = \\frac{2pz}{1 + \\sqrt{1 - 4pqz^2}}\\ .\n$$\nNow we can ask: What is the probability that Peter is \\emx {ever} in the\nlead?  This probability is given by (see Exercise~\\ref{exer 10.1.10})\n\\begin{eqnarray*}\n\\sum_{n = 0}^\\infty r_n &=& h_T(1) = \\frac{1 - \\sqrt{\\mathstrut1 - 4pq}}{2q} \\\\\n                        &=& \\frac{1 - |p - q|}{2q} \\\\\n                        &=& \\left \\{ \\begin{array}{ll} \n                                   p/q, & \\mbox{if $p < q$},     \\\\\n                                     1, & \\mbox{if $p \\geq q$}, \\end{array}\\right.  \n\\end{eqnarray*} \nso that Peter is sure to be in the lead eventually if $p \\geq q$.\n\\par\nHow long will it take?  That is, what is the expected value of~$T$?  This value\nis given by\n$$\nE(T) = h_T'(1) = \\left \\{ \\matrix {\n                          1/(p - q),  & \\mbox{if}\\,\\, p > q, \\cr\n                          \\infty,     & \\mbox{if}\\,\\, p = q.\\cr}\\right. \n$$\nThis says that if $p > q$, then Peter can expect to be in the lead by about\n$1/(p - q)$ trials, but if $p = q$, he can expect to wait a long time.\n\\par\nA related problem, known as the Gambler's Ruin problem, is studied in Exercise~\\ref{exer\n11.2.22} and in Section~\\ref{sec 12.2}.\n\\end{example}\n\n\\exercises\n\\begin{LJSItem}\n\n\n\\i\\label{exer 10.1.1} Find the generating functions, both ordinary $h(z)$\nand moment $g(t)$, for the following discrete probability distributions.\n\\begin{enumerate}\n\\item The distribution describing a fair coin.\n\n\\item The distribution describing a fair die.\n\n\\item The distribution describing a die that always comes up 3.\n\n\\item The uniform distribution on the set $\\{n,n+1,n+2,\\ldots,n+k\\}$.\n\n\\item The binomial distribution on $\\{n,n+1,n+2,\\ldots,n+k\\}$.\n\n\\item The geometric distribution on $\\{0,1,2,\\ldots,\\}$ with $p(j) = 2/3^{j + 1}$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.2} For each of the distributions (a) through (d) of Exercise~\\ref{exer\n10.1.1} calculate the first and second moments, $\\mu_1$~and~$\\mu_2$, directly from their\ndefinition, and verify that $h(1) = 1$, $h'(1) = \\mu_1$, and $h''(1) = \\mu_2 -\n\\mu_1$.\n\n\\i\\label{exer 10.1.3} Let $p$ be a probability distribution on $\\{0,1,2\\}$ with\nmoments $\\mu_1 = 1$, $\\mu_2 = 3/2$.\n\\begin{enumerate}\n\\item Find its ordinary generating function $h(z)$.\n\n\\item Using (a), find its moment generating function.\n\n\\item Using (b), find its first six moments.\n\n\\item Using (a), find $p_0$, $p_1$, and $p_2$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.4} In Exercise~\\ref{exer 10.1.3}, the probability distribution is \ncompletely determined by its first two moments.  Show that this is always true for any\nprobability distribution on $\\{0,1,2\\}$.  
\\emx {Hint}: Given $\\mu_1$~and~$\\mu_2$,\nfind $h(z)$ as in Exercise~\\ref{exer 10.1.3} and use $h(z)$ to determine $p$.\n\n\\i\\label{exer 10.1.5} Let $p$ and $p'$ be the two distributions\n\n$$ \np = \\pmatrix{\n1 & 2 & 3 & 4 & 5 \\cr\n1/3 & 0 & 0 & 2/3 & 0 \\cr}\\ ,\n$$\n \n$$p' = \\pmatrix{\n1 & 2 & 3 & 4 & 5 \\cr\n0 & 2/3 & 0 & 0 & 1/3 \\cr}\\ . \n$$\n\\begin{enumerate}\n\\item Show that $p$ and $p'$ have the same first and second moments, but not\nthe same third and fourth moments.\n\n\\item Find the ordinary and moment generating functions for $p$~and~$p'$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.6} Let $p$ be the probability distribution\n\n$$\np = \\pmatrix{\n0 & 1 & 2 \\cr\n0 & 1/3 & 2/3 \\cr}\\ ,\n$$\nand let $p_n = p * p * \\cdots * p$ be the $n$-fold convolution of~$p$ with\nitself.\n\\begin{enumerate}\n\\item Find $p_2$ by direct calculation (see Definition~\\ref{defn 7.1}).\n\n\\item Find the ordinary generating functions $h(z)$ and $h_2(z)$ for\n$p$~and~$p_2$, and verify that $h_2(z) = (h(z))^2$.\n\n\\item Find $h_n(z)$ from $h(z)$.\n\n\\item Find the first two moments, and hence the mean and variance, of $p_n$\nfrom~$h_n(z)$.  Verify that the mean of~$p_n$ is $n$ times the mean of~$p$.\n\n\\item Find those integers~$j$ for which $p_n(j) > 0$ from $h_n(z)$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.7} Let $X$ be a discrete random variable with values in \n$\\{0,1,2,\\ldots,n\\}$ and moment generating function $g(t)$.  Find, in terms \nof~$g(t)$, the generating functions for\n\\begin{enumerate}\n\\item $-X$.\n\n\\item $X + 1$.\n\n\\item $3X$.\n\n\\item $aX + b$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.8} Let $X_1$,~$X_2$, \\ldots,~$X_n$ be an independent trials \nprocess, with values in $\\{0,1\\}$ and mean $\\mu = 1/3$.  Find the ordinary and \nmoment generating functions for the distribution of\n\\begin{enumerate}\n\\item $S_1 = X_1$. 
\\emx {Hint}: First find $X_1$ explicitly.\n\n\\item $S_2 = X_1 + X_2$.\n\n\\item $S_n = X_1 + X_2 +\\cdots+ X_n$.\n\\end{enumerate}\n\n\\i\\label{exer 10.1.9} Let $X$ and $Y$ be random variables with values in \n$\\{1,2,3,4,5,6\\}$ with distribution functions $p_X$~and~$p_Y$ given by\n\\begin{eqnarray*}\np_X(j) &=& a_j\\ , \\\\\np_Y(j) &=& b_j\\ .\n\\end{eqnarray*} \n\\begin{enumerate}\n\\item Find the ordinary generating functions $h_X(z)$ and $h_Y(z)$ for these\ndistributions.\n\n\\item Find the ordinary generating function $h_Z(z)$ for the distribution $Z = X\n+ Y$.\n\n\\item Show that $h_Z(z)$ cannot ever have the form\n$$\nh_Z(z) = \\frac{z^2 + z^3 +\\cdots+ z^{12}}{11}\\ .\n$$\n\\end{enumerate}\n\\noindent \\emx {Hint}: $h_X$ and $h_Y$ must have at least one nonzero root, but \n$h_Z(z)$ in the form given has no nonzero real roots.\n\nIt follows from this observation that there is no way to load two dice so that\nthe probability that a given sum will turn up when they are tossed is the same\nfor all sums (i.e., that all outcomes are equally likely).\n\n\\i\\label{exer 10.1.10} Show that if\n$$\nh(z) = \\frac{1 - \\sqrt{1 - 4pqz^2}}{2qz}\\ ,\n$$\nthen\n$$\n h(1) = \\left \\{ \\begin{array}{ll}\n               p/q, &  \\mbox{if $p \\leq q,$}  \\\\\n                 1, &  \\mbox{if $p \\geq q,$} \\end{array}\\right.\n$$\nand\n$$\n h'(1) = \\left \\{ \\begin{array}{ll}\n                1/(p - q), &  \\mbox{if $p > q,$}\\\\\n                   \\infty, &  \\mbox{if $p = q.$} \\end{array}\\right.\n$$\n\n\\i\\label{exer 10.1.14} Show that if $X$ is a random variable with mean~$\\mu$\nand variance~$\\sigma^2$, and if $X^* = (X - \\mu)/\\sigma$ is the standardized\nversion of~$X$, then\n$$\ng_{X^*}(t) = e^{-\\mu t/\\sigma} g_X\\left( \\frac t\\sigma \\right)\\ .\n$$\n\\end{LJSItem}\n\n\\section{Branching Processes}\\label{sec 10.2}\\index{branching process}\n\\subsection*{Historical Background}\nIn this section we apply the theory of generating functions to the study of an\nimportant chance process called a \\emx {branching process.}\n\nUntil recently it was thought that the theory of branching processes originated\nwith the following problem posed by Francis Galton\\index{GALTON, F.} in the \\emx {Educational\nTimes} in 1873.\\footnote{D.~G. Kendall, ``Branching Processes Since\n1873,\" \\emx {Journal of London Mathematics Society,} vol.~41 (1966), p.~386.}\n\\begin{quote}\nProblem 4001: A large nation, of whom we will only concern ourselves with the\nadult males, $N$ in number, and who each bear separate surnames, colonise a\ndistrict.  Their law of population is such that, in each generation, $a_0$ \nper cent of the adult males have no male children who reach adult life;\n$a_1$ have one such male child; $a_2$ have two; and so on up to\n$a_5$ who have five.\n\nFind (1)~what proportion of the surnames will have become extinct after $r$\ngenerations; and (2)~how many instances there will be of the same surname being\nheld by $m$ persons.\n\\end{quote}\n\nThe first attempt at a solution was given by Reverend H.~W. Watson.\\index{WATSON, H. W.} \nBecause of a mistake in algebra, he incorrectly concluded that a family name would always\ndie out with probability~1.  However, the methods that he employed to solve the\nproblems were, and still are, the basis for obtaining the correct solution.\n\\par\nHeyde\\index{HEYDE, C.} and Seneta\\index{SENETA, E.} discovered an earlier communication by\nBienaym\\'e\\index{BIENAYM\\'E, I.} (1845) that anticipated Galton and Watson by 28~years. 
\nBienaym\\'e showed, in fact, that he was aware of the correct solution to Galton's problem. \nHeyde and Seneta in their book \\emx {I.~J. Bienaym\\'e: Statistical Theory\nAnticipated,}\\footnote{C.~C. Heyde and E. Seneta, \\emx {I.~J. Bienaym\\'e:\nStatistical Theory Anticipated} (New York: Springer Verlag, 1977).} give the\nfollowing translation from Bienaym\\'e's paper:\n\n\\begin{quote}\nIf \\dots the mean of the number of male children who replace the number of\nmales of the preceding generation were less than unity, it would be easily\nrealized that families are dying out due to the disappearance of the members of\nwhich they are composed.  However, the analysis shows further that when this\nmean is equal to unity families tend to disappear, although less rapidly \\dots.\n\nThe analysis also shows clearly that if the mean ratio is greater than unity,\nthe probability of the extinction of families with the passing of time no\nlonger reduces to certainty.  It only approaches a finite limit, which is\nfairly simple to calculate and which has the singular characteristic of being\ngiven by one of the roots of the equation (in which the number of generations\nis made infinite) which is not relevant to the question when the mean ratio is\nless than unity.\\footnote{ibid., pp.~117--118.}\n\\end{quote}\n\nAlthough Bienaym\\'e does not give his reasoning for these results, he did\nindicate that he intended to publish a special paper on the problem.  The paper\nwas never written, or at least has never been found.  In his communication\nBienaym\\'e indicated that he was motivated by the same problem that occurred to\nGalton.  The opening paragraph of his paper as translated by Heyde and Seneta\nsays,\n\n\\begin{quote}\nA great deal of consideration has been given to the possible multiplication of\nthe numbers of mankind; and recently various very curious observations have\nbeen published on the fate which allegedly hangs over the aristocracy and\nmiddle classes; the families of famous men, etc.  This fate, it is alleged,\nwill inevitably bring about the disappearance of the so-called \\emx {families\nferm\\'ees.}\\footnote{ibid., p.~118.}\n\\end{quote}\n\n\nA much more extensive discussion of the history of branching processes may be\nfound in two papers by David G. Kendall.\\index{KENDALL, D. G.}\\footnote{D.~G.~Kendall, \n``Branching Processes Since 1873,\" pp.~385--406; and ``The Genealogy of Genealogy:\nBranching Processes Before (and After) 1873,\" \\emx {Bulletin London Mathematics\nSociety,} vol.~7 (1975), pp.~225--253.} \n\nBranching processes have served not only as crude models for population growth\nbut also as models for certain physical processes such as chemical and nuclear\nchain reactions.\n\n\\subsection*{Problem of Extinction}\\index{extinction, problem of}\nWe turn now to the first problem posed by Galton (i.e., the problem of finding\nthe probability of extinction for a branching process).  We start in the\n0th generation with 1 male parent.  In the first generation we shall have 0,~1,\n2, 3,~\\ldots\\ male offspring with probabilities $p_0$,~$p_1$, $p_2$,\n$p_3$,~\\ldots.  If in the first generation there are $k$ offspring, then in the\nsecond generation there will be $X_1 + X_2 +\\cdots+ X_k$ offspring, where\n$X_1$,~$X_2$, \\ldots,~$X_k$ are independent random variables, each with the\ncommon distribution $p_0$,~$p_1$, $p_2$,~\\ldots.  
This description enables us to\nconstruct a tree, and a tree measure, for any number of generations.\n\n\\subsection*{Examples}\n\n\\begin{example}\\label{exam 10.2.1}\nAssume that $p_0 = 1/2$, $p_1 = 1/4$, and $p_2 = 1/4$.  Then the tree measure\nfor the first two generations is shown in Figure~\\ref{fig 10.1}.\n\n\\putfig{3truein}{PSfig10-1}\n{Tree diagram for Example~\\protect\\ref{exam 10.2.1}\\protect.}{fig 10.1}\n\nNote that we use the theory of sums of independent random variables to assign\nbranch probabilities.  For example, if there are two offspring in the first\ngeneration, the probability that there will be two in the second generation is\n\\begin{eqnarray*}\nP(X_1 + X_2 = 2) &=& p_0p_2 + p_1p_1 + p_2p_0 \\\\\n                 &=& \\frac12\\cdot\\frac14 + \\frac14\\cdot\\frac14 +\n\\frac14\\cdot\\frac12 = \\frac 5{16}\\ .\n\\end{eqnarray*}\n\\par\nWe now study the probability that our process dies out (i.e., that at some\ngeneration there are no offspring).\n\\par\nLet $d_m$ be the probability that the process dies out by the $m$th\ngeneration.  Of course, $d_0 = 0$.  In our example, $d_1 = 1/2$ and $d_2 = 1/2\n+ 1/8 + 1/16 = 11/16$ (see Figure~\\ref{fig 10.1}).  Note that we must add the \nprobabilities for all paths that lead to~0 by the $m$th generation.  It is \nclear from the definition that\n$$\n0 = d_0 \\leq d_1 \\leq d_2 \\leq\\cdots\\leq 1\\ .\n$$\nHence, $d_m$ converges to a limit $d$, $0 \\leq d \\leq 1$, and $d$ is the\nprobability that the process will ultimately die out.  It is this value that we\nwish to determine.  We begin by expressing the value $d_m$ in terms of all\npossible outcomes on the first generation.  If there are $j$ offspring in the\nfirst generation, then to die out by the $m$th generation, each of these lines\nmust die out in $m - 1$ generations.  Since they proceed independently, this\nprobability is $(d_{m - 1})^j$.  Therefore\n \n\\begin{equation}\nd_m = p_0 + p_1d_{m - 1} + p_2(d_{m - 1})^2 + p_3(d_{m - 1})^3 +\\cdots\\ .\n\\label{eq 10.2.1} \n\\end{equation} \nLet $h(z)$ be the ordinary generating function for the $p_i$:\n$$\nh(z) = p_0 + p_1z + p_2z^2 +\\cdots\\ .\n$$\nUsing this generating function, we can rewrite Equation~\\ref{eq 10.2.1} in the form\n\n\\begin{equation}\nd_m = h(d_{m - 1})\\ . \n\\label{eq 10.2.2}\n\\end{equation}\n \nSince $d_m \\to d$, by Equation~\\ref{eq 10.2.2} we see that the value~$d$ that we are\nlooking for satisfies the equation\n\n\\begin{equation}\nd = h(d)\\ . \n\\label{eq 10.2.3}\n\\end{equation}\n \nOne solution of this equation is always $d = 1$, since\n$$\n1 = p_0 + p_1 + p_2 +\\cdots\\ .\n$$\nThis is where Watson made his mistake.  He assumed that 1 was the only solution\nto Equation~\\ref{eq 10.2.3}.  To examine this question more carefully, we first note\nthat solutions to Equation~\\ref{eq 10.2.3} represent intersections of the graphs of\n$$\ny = z\n$$\nand\n$$\ny = h(z) = p_0 + p_1z + p_2z^2 +\\cdots\\ .\n$$\nThus we need to study the graph of $y = h(z)$.\nWe note  that $h(0) = p_0$.  Also,\n\\begin{equation}\nh'(z) = p_1 + 2p_2z + 3p_3z^2 +\\cdots\\ ,  \\label{eq 10.2.4}\n\\end{equation}\nand\n$$\nh''(z) = 2p_2 + 3 \\cdot 2p_3z + 4 \\cdot 3p_4z^2 + \\cdots\\ .\n$$\nFrom this we see that for~$z \\geq 0$, $h'(z) \\geq 0$ and $h''(z) \\geq 0$.  Thus\nfor nonnegative $z$, $h(z)$ is an increasing function and is concave upward. \nTherefore the graph of $y = h(z)$ can intersect the line $y = z$ in at most two\npoints.  
Since we know it must intersect the line $y = z$ at $(1,1)$, we know\nthat there are just three possibilities, as shown in Figure~\\ref{fig 10.2}.\n\\putfig{4.5truein}{PSfig10-2}{Graphs of $y = z$ and $y = h(z)$.}{fig 10.2}\n\\par\nIn case (a)~the equation $d = h(d)$ has roots $\\{d,1\\}$ with $0 \\leq d <\n1$.  In the second case (b)~it has only the one root $d = 1$.  In case (c)~it\nhas two roots $\\{1,d\\}$ where $1 < d$.  Since we are looking for a solution $0\n\\leq d \\leq 1$, we see in cases (b)~and~(c) that our only solution is~1.  In\nthese cases we can conclude that the process will die out with probability~1. \nHowever in case~(a) we are in doubt.  We must study this case more carefully.\n\\par\nFrom Equation~\\ref{eq 10.2.4} we see that\n$$\nh'(1) = p_1 + 2p_2 + 3p_3 +\\cdots= m\\ ,\n$$\nwhere $m$ is the expected number of offspring produced by a single parent.  In\ncase~(a) we have $h'(1) > 1$, in~(b) $h'(1) = 1$, and in~(c) $h'(1) < 1$.  Thus\nour three cases correspond to $m > 1$, $m = 1$, and $m < 1$.  We assume now\nthat $m > 1$.  Recall that $d_0 = 0$, $d_1 = h(d_0) = p_0$, $d_2 = h(d_1)$,\n\\ldots, and $d_n = h(d_{n - 1})$.  We can construct these values geometrically,\nas shown in Figure~\\ref{fig 10.3}.\n\\putfig{4truein}{PSfig10-3}{Geometric determination of $d$.}{fig 10.3}\n\\par\nWe can see geometrically, as indicated for $d_0$,~$d_1$, $d_2$, and~$d_3$ in\nFigure~\\ref{fig 10.3}, that the points $(d_i,h(d_i))$ will always lie above the line $y =\nz$.  Hence, they must converge to the first intersection of the curves $y = z$\nand $y = h(z)$ (i.e., to the root $d < 1$).  This leads us to the following\ntheorem.\n\\end{example}\n\\begin{theorem}\\label{thm 10.3}\nConsider a branching process with generating function $h(z)$ for the number of\noffspring of a given parent.  Let $d$ be the smallest root of the equation $z =\nh(z)$.  If the mean number $m$ of offspring produced by a single parent is\n${} \\leq 1$, then $d = 1$ and the process dies out with probability~1.  If $m > 1$\nthen $d < 1$ and the process dies out with probability~$d$.\n\\end{theorem}\n\nWe shall often want to know the probability that a branching process dies out\nby a particular generation, as well as the limit of these probabilities.  Let\n$d_n$ be the probability of dying out by the $n$th generation.  Then we know\nthat $d_1 = p_0$.  We know further that $d_n = h(d_{n - 1})$ where $h(z)$ is the\ngenerating function for the number of offspring produced by a single parent. \nThis makes it easy to compute these probabilities.  \n\\par\nThe program {\\bf Branch}\\index{Branch (program)} calculates the values of~$d_n$.  We have run\nthis program for  12 generations for the case that a parent can produce at most two offspring\nand the probabilities for the number produced are $p_0 = .2$, $p_1 = .5$, and $p_2 = .3$.  The\nresults are given in Table~\\ref{table 10.1}.\n\\begin{table}\n\\centering\n\\begin{tabular}{crccclc}\n\\multicolumn{3}{c} {Generation} & \\multicolumn{4}{c}{Probability of dying out}\\\\\n\\hline\n&1  &&&& .2      \\\\\n&2  &&&& .312    \\\\\n&3  &&&& .385203 \\\\\n&4  &&&& .437116 \\\\\n&5  &&&& .475879 \\\\\n&6  &&&& .505878 \\\\\n&7  &&&& .529713 \\\\\n&8  &&&& .549035 \\\\\n&9  &&&& .564949 \\\\\n&10 &&&& .578225 \\\\\n&11 &&&& .589416 \\\\\n&12 &&&& .598931  \n\\end{tabular}\n\\caption{Probability of dying out.}\n\\label{table 10.1}\n\\end{table}\n\\par\nWe see that the probability of dying out by 12 generations is about~.6.  
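\n\nThe iteration $d_m = h(d_{m - 1})$ is easy to reproduce; here is a minimal Python sketch (a stand-in for the book's {\\bf Branch} program, whose actual source is not shown here):\n\n\\begin{verbatim}\n# Extinction probabilities d_n = h(d_{n-1}) for p0 = .2, p1 = .5, p2 = .3\ndef h(z, p=(0.2, 0.5, 0.3)):\n    return sum(pj * z ** j for j, pj in enumerate(p))\n\nd = 0.0\nfor n in range(1, 13):\n    d = h(d)\n    print(n, round(d, 6))   # reproduces Table 10.1: .2, .312, .385203, ...\n# The iterates increase toward the root d = p0/p2 = 2/3 of z = h(z).\n\\end{verbatim}\n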
We\nshall see in the next example that the probability of eventually dying out\nis~2/3, so that even 12 generations is not enough to give an accurate estimate\nfor this probability.\n\\par\nWe now assume that at most two offspring can be produced.  Then\n$$\nh(z) = p_0 + p_1z + p_2z^2\\ .\n$$\nIn this simple case the condition $z = h(z)$ yields the equation\n$$\nd = p_0 + p_1d + p_2d^2\\ ,\n$$\nwhich is satisfied by $d = 1$ and $d = p_0/p_2$.  Thus, in addition to the root\n$d = 1$ we have the second root $d = p_0/p_2$.  The mean number~$m$ of offspring\nproduced by a single parent is\n$$\nm = p_1 + 2p_2 = 1 - p_0 - p_2 + 2p_2 = 1 - p_0 + p_2\\ .\n$$\nThus, if $p_0 > p_2$, $m < 1$ and the second root is ${} > 1$.  If $p_0 = p_2$,\nwe have a double root $d = 1$.  If $p_0 < p_2$, $m > 1$ and the second root~$d$\nis less than~1 and represents the probability that the process will die out.\n\n\\begin{example}\\label{exam 10.2.3}\nKeyfitz\\index{KEYFITZ, N.}\\footnote{N. Keyfitz, \\emx {Introduction to the Mathematics of\nPopulation,} rev.~ed.\\ (Reading, PA: Addison Wesley, 1977).} compiled and\nanalyzed data on the continuation of the female family line among Japanese\nwomen.  His estimates of the basic probability distribution for the number of\nfemale children born to Japanese women of ages 45--49 in 1960 are given in Table~\\ref{table 10.2}.\n\\begin{table}\n\\centering\n$$\\begin{array}{rl}\np_0 &= .2092 \\\\\np_1 &= .2584 \\\\\np_2 &= .2360 \\\\\np_3 &= .1593 \\\\\np_4 &= .0828 \\\\\np_5 &= .0357 \\\\\np_6 &= .0133 \\\\\np_7 &= .0042 \\\\\np_8 &= .0011 \\\\\np_9 &= .0002 \\\\\np_{10} &= .0000\\  \n\\end{array}\n$$\n\\caption{Distribution of number of female children.}\n\\label{table 10.2}\n\\end{table}\n\nThe expected number of girls in a family is then 1.837, so the probability~$d$\nof extinction is less than~1.  If we run the program {\\bf Branch}, we can estimate\nthat $d$ is in fact only about .324.\n\\end{example}\n\n\\subsection*{Distribution of Offspring}\nSo far we have considered only the first of the two problems raised by Galton,\nnamely the probability of extinction.  We now consider the second problem, that\nis, the distribution of the number $Z_n$ of offspring in the $n$th generation. \nThe exact form of the distribution is not known except in very special cases. \nWe shall see, however, that we can describe the limiting behavior of~$Z_n$ as\n$n \\to \\infty$.\n\\par\nWe first show that the generating function $h_n(z)$ of the distribution\nof~$Z_n$ can be obtained from~$h(z)$ for any branching process.\n\\par\nWe recall that the value of the generating function at the value~$z$ for any\nrandom variable~$X$ can be written as\n$$\nh(z) = E(z^X) = p_0 + p_1z + p_2z^2 +\\cdots\\ .\n$$\nThat is, $h(z)$ is the expected value of an experiment which has outcome~$z^j$\nwith probability~$p_j$.\n\\par\nLet $S_n = X_1 + X_2 +\\cdots+ X_n$ where each\n$X_j$ has the same integer-valued distribution $(p_j)$ with generating function\n$k(z) = p_0 + p_1z + p_2z^2 +\\cdots.$  Let $k_n(z)$ be the generating function\nof~$S_n$.  Then using one of the properties of ordinary generating functions \ndiscussed in Section~\\ref{sec 10.1}, we have\n$$k_n(z) = (k(z))^n\\ ,$$\nsince the $X_j$'s are independent and all have the same distribution.\n\\par\nConsider now the branching process $Z_n$.  Let $h_n(z)$ be the generating\nfunction of~$Z_n$.  
Then\n\\begin{eqnarray*}\nh_{n + 1}(z) &=& E(z^{Z_{n + 1}}) \\\\\n             &=& \\sum_k E(z^{Z_{n + 1}} | Z_n = k) P(Z_n = k)\\ .\n\\end{eqnarray*}\nIf $Z_n = k$, then $Z_{n + 1} = X_1 + X_2 +\\cdots+ X_k$ where $X_1$,~$X_2$,\n\\ldots,~$X_k$ are independent random variables with common generating function\n$h(z)$.  Thus\n$$\nE(z^{Z_{n + 1}} | Z_n = k) = E(z^{X_1 + X_2 +\\cdots+ X_k}) = (h(z))^k\\ ,\n$$\nand\n$$\nh_{n + 1}(z) = \\sum_k (h(z))^k P(Z_n = k)\\ .\n$$\nBut\n$$\nh_n(z) = \\sum_k P(Z_n = k) z^k\\ .\n$$\nThus,\n\\begin{equation}\nh_{n + 1}(z) = h_n(h(z))\\ . \n\\label{eq 10.2.5}\n\\end{equation}\nIf we differentiate Equation~\\ref{eq 10.2.5} and use the chain rule we have\n$$\nh'_{n+1}(z) = h'_n(h(z)) h'(z) .\n$$\nPutting $z = 1$ and using the fact that $h(1) = 1$, $h'(1) = m$, and $h_n'(1) = m_n =\n{}$the mean number of offspring in the $n$'th generation, we have\n$$\nm_{n + 1} = m_n \\cdot m\\ .\n$$\nThus, $m_2 = m \\cdot m = m^2$, $m_3 = m^2 \\cdot m = m^3$, and in general\n$$\nm_n = m^n\\ .\n$$\nThus, for a branching process with $m > 1$, the mean number of offspring grows\nexponentially at a rate~$m$.\n\n\\subsection*{Examples}\n\n\\begin{example}\\label{exam 10.2.3.5}\nFor the branching process of Example~\\ref{exam 10.2.1} we have\n\\begin{eqnarray*}\n  h(z) &=& 1/2 + (1/4)z + (1/4)z^2\\ , \\\\\nh_2(z) &=& h(h(z)) = 1/2 + (1/4)[1/2 + (1/4)z + (1/4)z^2] \\\\\n       & & \\quad +\\, (1/4)[1/2 + (1/4)z + (1/4)z^2]^2 \\\\\n       &=& 11/16 + (1/8)z + (9/64)z^2 + (1/32)z^3 + (1/64)z^4\\ .\n\\end{eqnarray*}\nThe probabilities for the number of offspring in the second generation agree\nwith those obtained directly from the tree measure (see Figure~\\ref{fig 10.1}).\n\\end{example}\n\nIt is clear that even in the simple case of at most two offspring, we cannot\neasily carry out the calculation of~$h_n(z)$ by this method.  However, there is\none special case in which this can be done.\n\n\\begin{example}\\label{exam 10.2.4}\nAssume that the probabilities $p_1$,~$p_2$,~\\ldots\\ form a geometric series:\n$p_k = bc^{k - 1}$, $k = 1$,~2,~\\ldots, with $0 < b \\leq 1 - c$ and $0 < c < 1$. Then we have\n\\begin{eqnarray*}\np_0 &=& 1 - p_1 - p_2 -\\cdots \\\\\n    &=& 1 - b - bc - bc^2 -\\cdots \\\\\n    &=& 1 - \\frac b{1 - c}\\ .\n\\end{eqnarray*}\nThe generating function $h(z)$ for this distribution is\n\\begin{eqnarray*}\nh(z) &=& p_0 + p_1z + p_2z^2 +\\cdots \\\\\n     &=& 1 - \\frac b{1 - c} + bz + bcz^2 + bc^2z^3 +\\cdots \\\\\n     &=& 1 - \\frac b{1 - c} + \\frac{bz}{1 - cz}\\ .\n\\end{eqnarray*}\nFrom this we find\n$$\nh'(z) = \\frac{bcz}{(1 - cz)^2} + \\frac b{1 - cz} = \\frac b{(1 - cz)^2}\n$$\nand\n$$\nm = h'(1) = \\frac b{(1 - c)^2}\\ .\n$$\nWe know that if $m \\leq 1$ the process will surely die out and $d = 1$.  To\nfind the probability~$d$ when $m > 1$ we must find a root $d < 1$ of the equation\n$$\nz = h(z)\\ ,\n$$\nor\n$$\nz = 1 - \\frac b{1 - c} + \\frac{bz}{1 - cz}.\n$$\nThis leads us to a quadratic equation.  We know that $z = 1$ is one solution. \nThe other is found to be\n$$\nd = \\frac{1 - b - c}{c(1 - c)}.\n$$\nIt is easy to verify that $d < 1$ just when $m > 1$.\n\nIt is possible in this case to find the distribution of~$Z_n$.  This is done by\nfirst finding the generating function $h_n(z)$.\\footnote{T.~E. 
Harris, {\\it\nThe Theory of Branching Processes} (Berlin: Springer, 1963), p.~9.}  The\nresult for $m \\ne 1$ is:\n$$\nh_n(z) = 1 - m^n \\left[\\frac{1 - d}{m^n - d}\\right] + \n\\frac{m^n \\left[\\frac{1 - d}{m^n - d}\\right]^2 z}\n{1 - \\left[\\frac{m^n - 1}{m^n - d}\\right]z}\\ .\n$$\nThe coefficients of the powers of~$z$ give the distribution for~$Z_n$:\n \n$$P(Z_n = 0) = 1 - m^n\\frac{1 - d}{m^n - d} = \\frac{d(m^n - 1)}{m^n - d}$$\n\\noindent and \n$$P(Z_n = j) = m^n \\Bigl(\\frac{1 - d}{m^n - d}\\Bigr)^2 \\cdot \n\\Bigl(\\frac{m^n - 1}{ m^n - d}\\Bigr)^{j - 1},$$\n\n\\noindent for $j \\geq 1$.\n\\end{example}\n\n\\begin{example}\\label{exam 10.2.4.5}\nLet us re-examine the Keyfitz data to see if a distribution of the type\nconsidered in Example~\\ref{exam 10.2.4} could reasonably be used as a model for this\npopulation.  We would have to estimate from the data the parameters $b$~and~$c$\nfor the formula $p_k = bc^{k - 1}$.  Recall that\n\\begin{equation}\nm = \\frac b{(1 - c)^2}  \n\\label{eq 10.2.7}\n\\end{equation}\nand the probability~$d$ that the process dies out is\n\\begin{equation}\nd = \\frac{1 - b - c}{c(1 - c)}\\ .  \\label{eq 10.2.8}\n\\end{equation}\nSolving Equation~\\ref{eq 10.2.7} and \\ref{eq 10.2.8} for $b$~and~$c$ gives\n$$\nc = \\frac{m - 1}{m - d}\n$$\n\\noindent and\n$$\nb = m\\Bigl(\\frac{1 - d}{m - d}\\Bigr)^2\\ .\n$$\n\nWe shall use the value 1.837 for~$m$ and .324 for~$d$ that we found in the\nKeyfitz example.  Using these values, we obtain $b = .3666$ and $c = .5533$. \nNote that $(1 - c)^2 < b < 1 - c$, as required.  In Table~\\ref{table 10.3}\n we give for\ncomparison the probabilities $p_0$ through $p_8$ as calculated by the geometric\ndistribution versus the empirical values.\n\\begin{table}\n\\centering                                   \n\\begin{tabular}{rrc}  \\hline\n    &           & Geometric\\\\\n$p_j$ &Data & Model   \\\\ \\hline\n0 & .2092 & .1816 \\\\\n1 & .2584 & .3666 \\\\\n2 & .2360 & .2028 \\\\\n3 & .1593 & .1122 \\\\\n4 & .0828 & .0621 \\\\\n5 & .0357 & .0344 \\\\\n6 & .0133 & .0190 \\\\\n7 & .0042 & .0105 \\\\\n8 & .0011 & .0058 \\\\\n9 & .0002 & .0032 \\\\\n10 & .0000 & .0018 \\\\\\hline\n\\end{tabular}\n\\caption{Comparison of observed and expected frequencies.}\n\\label{table 10.3}\n\\end{table}\n\\par\nThe geometric model tends to favor the larger numbers of offspring but is\nsimilar enough to show that this modified geometric distribution might be\nappropriate to use for studies of this kind.\n\nRecall that if $S_n = X_1 + X_2 +\\cdots+ X_n$ is the sum of independent random\nvariables with the same distribution then the Law of Large Numbers states that\n$S_n/n$ converges to a constant, namely $E(X_1)$.  It is natural to ask if\nthere is a similar limiting theorem for branching processes.\n\nConsider a branching process with~$Z_n$ representing the number of offspring\nafter $n$ generations.  Then we have seen that the expected value of~$Z_n$\nis~$m^n$.  Thus we can scale the random variable $Z_n$ to have expected value~1\nby considering the random variable\n$$\nW_n = \\frac{Z_n}{m^n}\\ .\n$$\n\nIn the theory of branching processes it is proved that this random variable\n$W_n$ will tend to a limit as $n$ tends to infinity.  However, unlike the case\nof the Law of Large Numbers where this limit is a constant, for a branching\nprocess the limiting value of the random variables $W_n$ is itself a random\nvariable.\n\nAlthough we cannot prove this theorem here we can illustrate it by simulation. \nThis requires a little care.  
When a branching process survives, the number of\noffspring is apt to get very large.  If in a given generation there are 1000\noffspring, the offspring of the next generation are the result of 1000 chance\nevents, and it will take a while to simulate these 1000 experiments.  However,\nsince the final result is the sum of 1000 independent experiments we can use\nthe Central Limit Theorem to replace these 1000 experiments by a single\nexperiment with normal density having the appropriate mean and variance.  The\nprogram {\\bf BranchingSimulation}\\index{BranchingSimulation (program)} carries out this process.\n\\par\nWe have run this program for the Keyfitz example, carrying out 10 simulations\nand graphing the results in Figure~\\ref{fig 10.4}.\n\n\\putfig{4.5truein}{PSfig10-4}{Simulation of $Z_n/m^n$ for the Keyfitz example.}{fig 10.4}\n\n\nThe expected number of female offspring per female is 1.837, so that we are\ngraphing the outcome for the random variables $W_n = Z_n/(1.837)^n$.  For three\nof the simulations the process died out, which is consistent with the value $d\n= .324$ that we found for this example.  For the other seven simulations the\nvalue of~$W_n$ tends to a limiting value which is different for each simulation.\n\\end{example}\n\n\\begin{example}\\label{exam 10.2.4.6}\nWe now examine the random variable $Z_n$ more closely for the case $m > 1$ (see\nExample~\\ref{exam 10.2.4}).  Fix a value $t > 0$; let $[tm^n]$ be the integer part of\n$tm^n$.  Then\n\\begin{eqnarray*}\nP(Z_n = [tm^n]) &=& m^n (\\frac{1 - d}{m^n - d})^2 (\\frac{m^n - 1}{m^n - d})\n^{[tm^n] - 1} \\\\\n                &=& \\frac1{m^n}(\\frac{1 - d}{1 - d/m^n})^2 (\\frac{1 - 1/m^n} {1 -\nd/m^n})^{tm^n + a}\\ ,\n\\end{eqnarray*}\nwhere $|a| \\leq 2$.  Thus, as $n \\to \\infty$,\n$$\nm^n P(Z_n = [tm^n]) \\to (1 - d)^2 \\frac{e^{-t}}{e^{-td}} = (1 - d)^2 e^{-t(1 - d)}\\ .\n$$\nFor $t = 0$,\n$$\nP(Z_n = 0) \\to d\\ .\n$$\nWe can compare this result with the Central Limit Theorem for sums~$S_n$ of\ninteger-valued independent random variables (see Theorem~\\ref{thm 9.3.5}), which\nstates that if $t$ is an integer and $u = (t - n\\mu)/\\sqrt{\\sigma^2 n}$, then\nas $n \\to \\infty$,\n$$\n\\sqrt{\\sigma^2 n}\\, P(S_n = u\\sqrt{\\sigma^2 n} + \\mu n) \\to \\frac1{\\sqrt{2\\pi}}\ne^{-u^2/2}\\ .\n$$\nWe see that the forms of these statements are quite similar.  It is possible to\nprove a limit theorem for a general class of branching processes that states\nthat under suitable hypotheses, as $n \\to \\infty$,\n$$\nm^n P(Z_n = [tm^n]) \\to k(t)\\ ,\n$$\nfor $t > 0$, and\n$$\nP(Z_n = 0) \\to d\\ .\n$$\nHowever, unlike the Central Limit Theorem for sums of independent random\nvariables, the function $k(t)$ will depend upon the basic distribution that\ndetermines the process.  Its form is known for only a very few examples similar\nto the one we have considered here.\n\\end{example}\n\n\\subsection*{Chain Letter Problem}\n\\begin{example}\\label{exam 10.2.5}\nAn interesting example of a branching process was suggested by Free\nHuizinga.\\index{HUIZINGA, F.}\\footnote{Private communication.}  In 1978, a chain\nletter\\index{chain letter} called the ``Circle of Gold,\"\\index{Circle of Gold} believed to have\nstarted in California,  found its way across the country to the theater district of New\nYork.  The chain required a participant to buy a letter containing a list of 12~names for\n100~dollars.  
The buyer gives 50~dollars to the person from whom the letter was purchased and\nthen sends 50~dollars to the person whose name is at the top of the list.  The\nbuyer then crosses off the name at the top of the list and adds her own name at\nthe bottom in each letter before it is sold again.\n\nLet us first assume that the buyer may sell the letter only to a single\nperson.  If you buy the letter you will want to compute your expected\nwinnings.  (We are ignoring here the fact that the passing on of chain letters\nthrough the mail is a federal offense with certain obvious resulting\npenalties.)  Assume that each person involved has a probability~$p$ of selling\nthe letter.  Then you will receive 50~dollars with probability~$p$ and another\n50~dollars if the letter is sold to 12~people, since then your name would have\nrisen to the top of the list.  This occurs with probability~$p^{12}$, and so\nyour expected winnings are $-100 + 50p + 50p^{12}$.  Thus the chain in this\nsituation is a highly unfavorable game.\n\nIt would be more reasonable to allow each person involved to make a copy of the\nlist and try to sell the letter to at least 2 other people.  Then you would\nhave a chance of recovering your 100~dollars on these sales, and if any of the\nletters is sold 12 times you will receive a bonus of 50~dollars for each of\nthese cases.  We can consider this as a branching process with 12 generations. \nThe members of the first generation are the letters you sell.  The second\ngeneration consists of the letters sold by members of the first generation, and\nso forth.\n\nLet us assume that the probabilities that each individual sells letters to\n0,~1, or~2 others are $p_0$,~$p_1$, and~$p_2$, respectively.  Let $Z_1$,~$Z_2$,\n\\ldots,~$Z_{12}$ be the number of letters in the first 12 generations of this\nbranching process.  Then your expected winnings are\n$$\n50(E(Z_1) + E(Z_{12})) = 50m + 50m^{12}\\ ,\n$$\nwhere $m = p_1 + 2p_2$ is the expected number of letters you sell.  Thus, for\nthe chain to be favorable, we need\n$$\n50m + 50m^{12} > 100\\ ,\n$$\nor\n$$\nm + m^{12} > 2\\ .\n$$\nBut this will be true if and only if $m > 1$.  We have seen that this will\noccur in the quadratic case if and only if $p_2 > p_0$.  Let us assume for\nexample that $p_0 = .2$, $p_1 = .5$, and $p_2 = .3$.  Then $m = 1.1$ and the\nchain would be a favorable game.  Your expected profit would be\n$$\n50(1.1 + 1.1^{12}) - 100 \\approx 112\\ .\n$$\n\nThe probability that you receive at least one payment from the 12th generation\nis $1 - d_{12}$.  We find from our program {\\bf Branch} that $d_{12} =\n.599$.  Thus, $1 - d_{12} = .401$ is the probability that you receive some\nbonus.  The maximum that you could receive from the chain would be $50(2 +\n2^{12}) = 204{,}900$ if everyone were to successfully sell two letters.  Of\ncourse you cannot always expect to be so lucky.  (What is the probability of\nthis happening?)\n\nTo simulate this game, we need only simulate a branching process for 12\ngenerations.  Using a slightly modified version of our program {\\bf\nBranchingSimulation} we carried out twenty such simulations, giving the\nresults shown in Table~\\ref{table 10.4}.\n\nNote that we were quite lucky on a few runs, but we came out ahead only a\nlittle less than half the time.  
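\n\nFor readers who wish to reproduce an experiment of this kind, a minimal Python sketch of one run is given below.  It is an illustrative stand-in for the modified {\\bf BranchingSimulation} program (whose source we do not reproduce here); the seed and output format are our own choices.\n\\begin{verbatim}\nimport random\n\ndef chain_letter(p, generations=12):\n    # p = [p0, p1, p2]: probabilities of selling 0, 1 or 2 letters\n    z, history = 1, []\n    for _ in range(generations):\n        z = sum(random.choices([0, 1, 2], weights=p)[0]\n                for _ in range(z))\n        history.append(z)\n    return history\n\nrandom.seed(1)  # arbitrary seed, for reproducibility only\nfor _ in range(20):\n    h = chain_letter([0.2, 0.5, 0.3])\n    print(h, 'profit =', 50 * (h[0] + h[-1]) - 100)\n\\end{verbatim}\n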
The process died out by the twelfth generation\nin 12 out of the 20 experiments, in good agreement with the probability $d_{12}\n= .599$ that we calculated using the program {\\bf Branch}.\n\n%\\vfill\n%\\newpage\n%\\begin{progout}\n%\\begin{verbatim}\n%Finite density (0) or Poisson (1): 0\n%Enter density p1, p2,..., pn: .2,.5,.3\n%Mean: 1.1\n\\begin{table}\n\\centering\n$$\n\\begin{tabular}{rrrrrrrrrrrrr}\n$Z_{1}$&$Z_{2}$&$ Z_{3}$& $Z_{4}$& $Z_{5}$& $Z_{6}$& $Z_{7}$& $Z_{8}$& $Z_{9}$&$ Z_{10}$&$ Z_{11}$& $Z_{12}$& Profit\\\\\\hline\n1   &0   &0   &0   &0  & 0  & 0   &0   &0   &0   &0   &0   &-50\\\\ \n1   &1   &2   &3   &2  & 3  & 2   &1   &2   &3   &3   &6   &250\\\\ \n0   &0   &0   &0   &0  & 0  & 0   &0   &0   &0   &0   &0  &-100\\\\ \n2   &4   &4   &2   &3  & 4  & 4   &3   &2   &2   &1   &1  & 50\\\\ \n1   &2   &3   &5  & 4  & 3  & 3   &3   &5   &8   &6   &6  & 250\\\\ \n0   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  &-100\\\\ \n2   &3   &2   &2  & 2  & 1  & 2   &3   &3   &3   &4   &6  & 300\\\\ \n1   &2   &1   &1  & 1  & 1  & 2   &1   &0   &0   &0   &0  & -50\\\\ \n0   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  &-100\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n2   &3   &2   &3  & 3  & 3  & 5   &9  &12   &12  &13  &15 & 750\\\\ \n1   &1   &1   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n1   &2   &2   &3  & 3  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n1   &1   &1   &1  & 2  & 2  & 3   &4   &4   &6   &4   &5  & 200\\\\ \n1   &1   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n1   &1   &2   &3  & 3  & 4  & 2   &3   &3   &3   &3   &2  &  50\\\\ \n1   &2   &4   &6  & 6  & 9  &10   &13  &16  &17  &15  &18 &  850\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0  & -50\\\\ \n\\end{tabular}\n$$\n\\caption{Simulation of chain letter (finite distribution case).}\n\\label{table 10.4}\n\\end{table}\n\nLet us modify the assumptions about our chain letter to let the buyer sell the\nletter to as many people as she can instead of to a maximum of two.  We shall\nassume, in fact, that a person has a large number~$N$ of acquaintances and a\nsmall probability~$p$ of persuading any one of them to buy the letter.  Then\nthe distribution for the number of letters that she sells will be a binomial\ndistribution with mean $m = Np$.  Since $N$ is large and $p$ is small, we can\nassume that the probability~$p_j$ that an individual sells the letter to\n$j$~people is given by the Poisson distribution\n$$\np_j = \\frac{e^{-m} m^j}{j!}\\ .\n$$\n\\newpage\n\\noindent\nThe generating function for the Poisson distribution is\n\\begin{eqnarray*}\nh(z) &=& \\sum_{j = 0}^\\infty \\frac{e^{-m} m^j z^j}{j!} \\\\\n     &=& e^{-m} \\sum_{j = 0}^\\infty \\frac{m^j z^j}{j!} \\\\\n     &=& e^{-m} e^{mz} = e^{m(z - 1)}\\ .\n\\end{eqnarray*}\n\nThe expected number of letters that an individual passes on is $m$, and again\nto be favorable we must have $m > 1$.  Let us assume again that $m = 1.1$. \nThen we can find again the probability $1 - d_{12}$ of a bonus from {\\bf\nBranch}.  The result is .232.  Although the expected winnings are the same, the\nvariance is larger in this case, and the buyer has a better chance for a\nreasonably large profit.  We again carried out 20 simulations using the Poisson\ndistribution with mean 1.1.  
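\n\nIn a sketch like the one shown earlier, only the offspring sampler needs to change for the Poisson case; for instance (again purely illustrative, using inverse-transform sampling with Python's standard library):\n\\begin{verbatim}\nimport math, random\n\ndef poisson_offspring(m):\n    # sample j with probability exp(-m) * m**j / j!\n    u, j = random.random(), 0\n    term = math.exp(-m)\n    cum = term\n    while u > cum:\n        j += 1\n        term *= m / j\n        cum += term\n    return j\n\\end{verbatim}\n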
The results are shown in Table~\\ref{table 10.5}.\n\\begin{table}\n\\centering\n$$\n\\begin{tabular}{rrrrrrrrrrrrr}\n$Z_{1}$&$Z_{2}$&$ Z_{3}$& $Z_{4}$& $Z_{5}$& $Z_{6}$& $Z_{7}$& $Z_{8}$& $Z_{9}$&$ Z_{10}$&$ Z_{11}$& $Z_{12}$& Profit\\\\\\hline\n1   &2   &6   &7   &7  & 8  & 11  &9   &7   &6   &6   &5   & 200\\\\ \n1   &0   &0   &0   &0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n1   &0   &0   &0   &0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n1   &1   &1   &0   &0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n0   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -100\\\\ \n1   &1   &1   &1  & 1  & 1  & 2   &4   &9   &7   &9   &7   & 300\\\\ \n2   &3   &3   &4  & 2  & 0  & 0   &0   &0   &0   &0   &0   & 0\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n2   &1   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & 0\\\\ \n3   &3   &4   &7  & 11 & 17 & 14  &11  &11  &10  &16  &25  & 1300\\\\ \n0   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -100\\\\ \n1   &2   &2   &1  & 1  & 3  & 1   &0   &0   &0   &0   &0   & -50\\\\ \n0   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -100\\\\ \n2   &3   &1   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & 0\\\\ \n3   &1   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & 50\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n3   &4   &4   &7  &10  &11  & 9   &11  &12  &14  &13  &10  & 550\\\\ \n1   &3   &3   &4  & 9  & 5  & 7   &9   &8   &8   &6   &3   & 100\\\\ \n1   &0   &4   &6  & 6  & 9  &10   &13  &0   &0   &0   &0   & -50\\\\ \n1   &0   &0   &0  & 0  & 0  & 0   &0   &0   &0   &0   &0   & -50\\\\ \n\\end{tabular}\n$$\n\\caption{Simulation of chain letter (Poisson case).}\n\\label{table 10.5}\n\\end{table}\n\\par\nWe note that, as before, we came out ahead less than half the time, but we also\nhad one large profit.  In only 6 of the 20 cases did we receive any profit. \nThis is again in reasonable agreement with our calculation of a probability\n.232 for this happening.\n\\end{example}\n\n\\exercises\n\\begin{LJSItem}\n\n\\i\\label{exer 10.2.1} Let $Z_1$,~$Z_2$, \\ldots,~$Z_N$ describe a branching process in \nwhich each parent has $j$~offspring with probability~$p_j$.  Find the probability~$d$ \nthat the process eventually dies out if\n\\begin{enumerate}\n\\item $p_0 = 1/2$, $p_1 = 1/4$, and $p_2 = 1/4$.\n\n\\item $p_0 = 1/3$, $p_1 = 1/3$, and $p_2 = 1/3$.\n\n\\item $p_0 = 1/3$, $p_1 = 0$, and $p_2 = 2/3$.\n\n\\item $p_j = 1/2^{j + 1}$, for $j = 0$,~1, 2,~\\ldots.\n\n\\item $p_j = (1/3)(2/3)^j$, for $j = 0$,~1, 2,~\\ldots.\n\n\\item $p_j = e^{-2} 2^j/j!$, for $j = 0$,~1, 2,~\\ldots\\ (estimate $d$\nnumerically).\n\\end{enumerate}\n\n\\i\\label{exer 10.2.2} Let $Z_1$,~$Z_2$, \\ldots,~$Z_N$ describe a branching process in \nwhich each parent has $j$~offspring with probability~$p_j$.  
Find the probability~$d$ that\nthe process dies out if\n\\begin{enumerate}\n\\item $p_0 = 1/2$, $p_1 = p_2 = 0$, and $p_3 = 1/2$.\n\n\\item $p_0 = p_1 = p_2 = p_3 = 1/4$.\n\n\\item $p_0 = t$, $p_1 = 1 - 2t$, $p_2 = 0$, and $p_3 = t$, where $t \\leq\n1/2$.\n\\end{enumerate}\n\n\\i\\label{exer 10.2.3} In the chain letter problem (see Example~\\ref{exam 10.2.5}) find \nyour expected profit if\n\\begin{enumerate}\n\\item $p_0 = 1/2$, $p_1 = 0$, and $p_2 = 1/2$.\n\n\\item $p_0 = 1/6$, $p_1 = 1/2$, and $p_2 = 1/3$.\n\\end{enumerate}\nShow that if $p_0 > 1/2$, you cannot expect to make a profit.\n\n\\i\\label{exer 10.2.4} Let $S_N = X_1 + X_2 +\\cdots+ X_N$, where the $X_i$'s are independent random\nvariables with common distribution having generating function $f(z)$.  Assume that $N$ is an\ninteger-valued random variable independent of all of the $X_j$ and having generating function $g(z)$. \nShow that the generating function for $S_N$ is $h(z) = g(f(z))$.  \\emx {Hint}:\nUse the fact that\n$$\nh(z) = E(z^{S_N}) = \\sum_k E(z^{S_N} | N = k) P(N = k)\\ .\n$$\n\n\\i\\label{exer 10.2.5} We have seen that if the generating function for the offspring of a\nsingle parent is $f(z)$, then the generating function for the number of\noffspring after two generations is given by $h(z) = f(f(z))$.  Explain how this\nfollows from the result of Exercise~\\ref{exer 10.2.4}.\n\n\\i\\label{exer 10.2.6} Consider a queueing process such that in each minute either 1~or~0 customers arrive with probabilities $p$ or \n$q = 1 - p$, respectively.  (The number $p$ is called the \\emx {arrival rate}.)  \nWhen a customer starts service she finishes in the next minute\nwith probability~$r$.  (The number $r$ is called the \\emx {service rate}.)  \nThus when a customer begins being served she will finish\nbeing served in $j$~minutes with probability $(1 - r)^{j -1}r$, for $j = 1$,~2,\n3,~\\ldots.\n\\begin{enumerate}\n\\item Find the generating function $f(z)$ for the number of customers who\narrive in one minute and the generating function $g(z)$ for the length of time\nthat a person spends in service once she begins service.\n\n\\i\\label{exer 10.2.7} Consider a \\emx {customer branching process}\\index{branching\nprocess!customer}\\index{customer branching process} by considering the offspring of a customer\nto be the customers who arrive while she is being served.  Using Exercise~\\ref{exer 10.2.4},\nshow that the generating function for our customer branching process is $h(z) = g(f(z))$.\n\n\\i\\label{exer 10.2.8} If we start the branching process with the arrival of the first\ncustomer, then the length of time until the branching process dies out will be\nthe \\emx {busy period} for the server.  Find a condition in terms of the\narrival rate and service rate that will assure that the server will\nultimately have a time when he is not busy.\n\\end{enumerate}\n\n\\i\\label{exer 10.2.9} Let $N$ be the expected total number of offspring in a branching\nprocess.  Let $m$ be the mean number of offspring of a single parent.  
Show that\n$$\nN = 1 + \\left(\\sum p_k \\cdot k\\right) N = 1 + mN\n$$\nand hence that $N$ is finite if and only if $m < 1$ and in that case $N = 1/(1\n- m)$.\n\n\\i\\label{exer 10.2.10} Consider a branching process such that the number of offspring of a\nparent is $j$ with probability $1/2^{j + 1}$ for $j = 0$,~1, 2,~\\ldots.\n\\begin{enumerate}\n\\item Using the results of Example~\\ref{exam 10.2.4} show that the probability that\nthere are $j$~offspring in the $n$th generation is\n$$\np_j^{(n)} = \\left \\{ \\begin{array}{ll}\n                \\frac{1}{n(n + 1)} (\\frac {n}{n + 1})^j, & \\mbox{if $ j \\geq 1$}, \\\\\n                                      \\frac {n}{n + 1},  & \\mbox{if $ j = 0$}.\\end{array}\\right.\n$$\n\n\\item Show that the probability that the process dies out exactly at the\n$n$th generation is $1/n(n + 1)$.\n\n\\item Show that the expected lifetime is infinite even though $d = 1$.\n\\end{enumerate}\n\\end{LJSItem}\n\n\\section[Continuous Densities]{Generating Functions for Continuous Densities}\\label{sec\n10.3}\\index{generating function!for continuous density} In the previous section, we introduced\nthe concepts of moments and moment generating functions for discrete random variables.  These\nconcepts have natural analogues for continuous random variables, provided some care is taken\nin arguments involving convergence.\n\n\\subsection*{Moments}\nIf $X$ is a continuous random variable defined on the probability\nspace~$\\Omega$, with density function~$f_X$, then we define the $n$th moment\\index{moments}\nof~$X$ by the formula\n$$\n\\mu_n = E(X^n) = \\int_{-\\infty}^{+\\infty} x^n f_X(x)\\, dx\\ ,\n$$\nprovided the integral\n$$\n\\int_{-\\infty}^{+\\infty} |x|^n f_X(x)\\, dx\n$$\nis finite.  Then, just as in the discrete case, we see\nthat $\\mu_0 = 1$, $\\mu_1 = \\mu$, and $\\mu_2 - \\mu_1^2 = \\sigma^2$.\n\n\\subsection*{Moment Generating Functions}\\index{moment generating function}\\index{generating\nfunction!moment} Now we define the \\emx {moment generating function} $g(t)$ for~$X$ by the\nformula\n\\begin{eqnarray*}\ng(t) &=& \\sum_{k = 0}^\\infty \\frac{\\mu_k t^k}{k!} = \\sum_{k = 0}^\\infty\n\\frac{E(X^k) t^k}{k!} \\\\\n     &=& E(e^{tX}) = \\int_{-\\infty}^{+\\infty} e^{tx} f_X(x)\\, dx\\ ,\n\\end{eqnarray*}\nprovided this series converges.  Then, as before, we have\n$$\n\\mu_n = g^{(n)}(0)\\ .\n$$\n\n\\subsection*{Examples}\n\\begin{example}\\label{exam 10.3.1}\nLet $X$ be a continuous random variable with range $[0,1]$ and density\nfunction $f_X(x) = 1$ for $0 \\leq x \\leq 1$ (uniform density).  Then\n$$\n\\mu_n = \\int_0^1 x^n\\, dx = \\frac1{n + 1}\\ ,\n$$\nand\n\\begin{eqnarray*}\ng(t) &=& \\sum_{k = 0}^\\infty \\frac{t^k}{(k+1)!}\\\\\n     &=& \\frac{e^t - 1}t\\ .\n\\end{eqnarray*}\nHere the series converges for all~$t$.  
Alternatively, we have\n\\begin{eqnarray*}\ng(t) &=& \\int_{-\\infty}^{+\\infty} e^{tx} f_X(x)\\, dx \\\\\n     &=& \\int_0^1 e^{tx}\\, dx = \\frac{e^t - 1}t\\ .\n\\end{eqnarray*}\nThen (by L'H\\^opital's rule)\n\\begin{eqnarray*}\n\\mu_0 &=& g(0) = \\lim_{t \\to 0} \\frac{e^t - 1}t = 1\\ , \\\\\n\\mu_1 &=& g'(0) = \\lim_{t \\to 0} \\frac{te^t - e^t + 1}{t^2} = \\frac12\\ , \\\\\n\\mu_2 &=& g''(0) = \\lim_{t \\to 0} \\frac{t^3e^t - 2t^2e^t + 2te^t - 2t}{t^4} =\n\\frac13\\ .\n\\end{eqnarray*}\nIn particular, we verify that $\\mu = g'(0) = 1/2$ and\n$$\n\\sigma^2 = g''(0) - (g'(0))^2 = \\frac13 - \\frac14 = \\frac1{12}\n$$\nas before (see Example~\\ref{exam 6.18.5}).\n\\end{example}\n\n\\begin{example}\\label{exam 10.3.2}\nLet $X$ have range $[\\,0,\\infty)$ and density function $f_X(x) = \\lambda\ne^{-\\lambda x}$ (exponential density with parameter~$\\lambda$).  In this case\n\\begin{eqnarray*}\n\\mu_n &=& \\int_0^\\infty x^n \\lambda e^{-\\lambda x}\\, dx = \\lambda(-1)^n\n\\frac{d^n}{d\\lambda^n} \\int_0^\\infty e^{-\\lambda x}\\, dx \\\\\n      &=& \\lambda(-1)^n \\frac{d^n}{d\\lambda^n} [\\frac1\\lambda] = \\frac{n!}\n{\\lambda^n}\\ ,\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\ng(t) &=& \\sum_{k = 0}^\\infty \\frac{\\mu_k t^k}{k!} \\\\\n     &=& \\sum_{k = 0}^\\infty [\\frac t\\lambda]^k = \\frac\\lambda{\\lambda - t}\\ .\n\\end{eqnarray*}\nHere the series converges only for $|t| < \\lambda$.  Alternatively, we have\n\\begin{eqnarray*}\ng(t) &=& \\int_0^\\infty e^{tx} \\lambda e^{-\\lambda x}\\, dx \\\\\n     &=& \\left.\\frac{\\lambda e^{(t - \\lambda)x}}{t - \\lambda}\\right|_0^\\infty =\n\\frac\\lambda{\\lambda - t}\\ .\n\\end{eqnarray*}\n\nNow we can verify directly that\n$$\n\\mu_n = g^{(n)}(0) = \\left.\\frac{\\lambda n!}{(\\lambda - t)^{n + 1}}\\right|_{t =\n0} = \\frac{n!}{\\lambda^n}\\ .\n$$\n\\end{example}\n\n\\begin{example}\\label{exam 10.3.3}\nLet $X$ have range $(-\\infty,+\\infty)$ and density function\n$$\nf_X(x) = \\frac1{\\sqrt{2\\pi}} e^{-x^2/2}\n$$\n(normal density).  In this case we have\n\\begin{eqnarray*}\n\\mu_n &=& \\frac1{\\sqrt{2\\pi}} \\int_{-\\infty}^{+\\infty} x^n e^{-x^2/2}\\, dx \\\\\n      &=& \\left \\{ \\begin{array}{ll}\n                        \\frac{(2m)!}{2^{m} m!}, & \\mbox{if $ n = 2m$,}\\cr\n                        0,                      & \\mbox{if $ n = 2m+1$.}\\end{array}\\right.\n\\end{eqnarray*}\n(These moments are calculated by integrating once by parts to show that $\\mu_n\n= (n - 1)\\mu_{n - 2}$, and observing that $\\mu_0 = 1$ and $\\mu_1 = 0$.)  Hence,\n\\begin{eqnarray*}\ng(t) &=& \\sum_{n = 0}^\\infty \\frac{\\mu_n t^n}{n!} \\\\\n     &=& \\sum_{m = 0}^\\infty \\frac{t^{2m}}{2^{m} m!} = e^{t^2/2}\\ .\n\\end{eqnarray*}\nThis series converges for all values of~$t$.  Again we can verify that\n$g^{(n)}(0) = \\mu_n$.\n\\par\nLet $X$ be a normal random variable with parameters $\\mu$ and $\\sigma$.  It is easy\nto show that the moment generating function of $X$ is given by\n$$e^{t\\mu + (\\sigma^2/2)t^2}\\ .$$\nNow suppose that $X$ and $Y$ are two independent normal random variables with\nparameters $\\mu_1$, $\\sigma_1$, and $\\mu_2$, $\\sigma_2$, respectively.  Then,\nthe product of the moment generating functions of $X$ and $Y$ is\n$$e^{t(\\mu_1 + \\mu_2) + ((\\sigma_1^2 + \\sigma_2^2)/2)t^2}\\ .$$\nThis is the moment generating function for a normal random variable with mean\n$\\mu_1 + \\mu_2$ and variance $\\sigma_1^2 + \\sigma_2^2$.  Thus, the sum\nof two independent normal random variables is again normal.  
(This was proved\nfor the special case that both summands are standard normal in Example~\\ref{exam \n7.8}.)\n\\end{example}\n\nIn general, the series defining $g(t)$ will not converge for all~$t$.  But in\nthe important special case where $X$ is bounded (i.e., where the range of~$X$\nis contained in a finite interval), we can show that the series does converge\nfor all~$t$.\n\n\\begin{theorem}\\label{thm 10.4}\nSuppose $X$ is a continuous random variable with range contained in the\ninterval $[-M,M]$.  Then the series\n$$\ng(t) = \\sum_{k = 0}^\\infty \\frac{\\mu_k t^k}{k!}\n$$\nconverges for all~$t$ to an infinitely differentiable function~$g(t)$, and\n$g^{(n)}(0) = \\mu_n$.\n\\proof\nWe have\n$$\n\\mu_k = \\int_{-M}^{+M} x^k f_X(x)\\, dx\\ ,\n$$\nso\n\\begin{eqnarray*}\n|\\mu_k| &\\leq& \\int_{-M}^{+M} |x|^k f_X(x)\\, dx \\\\\n        &\\leq& M^k \\int_{-M}^{+M} f_X(x)\\, dx = M^k\\ .\n\\end{eqnarray*}\nHence, for all~$N$ we have\n$$\n\\sum_{k = 0}^N \\left|\\frac{\\mu_k t^k}{k!}\\right| \\leq \\sum_{k = 0}^N\n\\frac{(M|t|)^k}{k!} \\leq e^{M|t|}\\ ,\n$$\nwhich shows that the power series converges for all~$t$.  We know that the sum\nof such an everywhere-convergent power series is infinitely differentiable.\n\\end{theorem}\n\n\\subsection*{Moment Problem}\\index{moment problem}\n\n\\begin{theorem}\\label{thm 10.5}\nIf $X$ is a bounded random variable, then the moment generating function\n$g_X(t)$ of~$X$ determines the density function $f_X(x)$ uniquely.\n\\vspace{.385in} \n\n\\noindent \\emx {Sketch of the Proof.}~\nWe know that\n\\begin{eqnarray*}\ng_X(t) &=& \\sum_{k = 0}^\\infty \\frac{\\mu_k t^k}{k!} \\\\\n       &=& \\int_{-\\infty}^{+\\infty} e^{tx} f_X(x)\\, dx\\ .\n\\end{eqnarray*}\n\n%\\subsection{Characteristic Functions}     \n\n\\noindent If we replace $t$ by~$i\\tau$, where $\\tau$ is real and $i = \\sqrt{-1}$, then\nthe series converges for all~$\\tau$, and we can define the function\n$$\nk_X(\\tau) = g_X(i\\tau) = \\int_{-\\infty}^{+\\infty} e^{i\\tau x} f_X(x)\\, dx\\ .\n$$\n\nThe function $k_X(\\tau)$ is called the \\emx {characteristic function}\\index{characteristic\nfunction} of~$X$, and is defined by the above equation even when the series for~$g_X$ does not\nconverge.  This equation says that $k_X$ is the \\emx {Fourier transform}\\index{Fourier\ntransform} of~$f_X$.  It is known that the Fourier transform has an inverse, given by the\nformula\n$$\nf_X(x) = \\frac1{2\\pi} \\int_{-\\infty}^{+\\infty} e^{-i\\tau x} k_X(\\tau)\\, d\\tau\\ ,\n$$\nsuitably interpreted.\\footnote{H. Dym and H.~P. McKean, \\emx {Fourier Series and\nIntegrals} (New York: Academic Press, 1972).}  Here we see that the\ncharacteristic function~$k_X$, and hence the moment generating function~$g_X$,\ndetermines the density function~$f_X$ uniquely under our hypotheses.\n\\end{theorem} \n\n\\subsection*{Sketch of the Proof of the Central Limit Theorem}\n\\index{Central Limit Theorem!proof of}\nWith the above result in mind, we can now sketch a proof of the Central Limit Theorem\nfor bounded continuous random variables (see Theorem~\\ref{thm 9.4.7}).  To this end,\nlet $X$ be a continuous random variable with density function~$f_X$, mean $\\mu\n= 0$ and variance $\\sigma^2 = 1$, and moment generating function~$g(t)$ defined\nby its series for all~$t$.  Let $X_1$,~$X_2$, \\ldots,~$X_n$ be an independent\ntrials process with each $X_i$ having density $f_X$, and let $S_n = X_1 + X_2\n+\\cdots+ X_n$, and $S_n^* = (S_n - n\\mu)/\\sqrt{n\\sigma^2} = S_n/\\sqrt n$.  
Then\neach $X_i$ has moment generating function~$g(t)$, and since the $X_i$ are\nindependent, the sum~$S_n$, just as in the discrete case (see Section~\\ref{sec 10.1}),\nhas moment generating function\n$$\ng_n(t) = (g(t))^n\\ ,\n$$\nand the standardized sum~$S_n^*$ has moment generating function\n$$\ng_n^*(t) = \\left(g\\left(\\frac t{\\sqrt n}\\right)\\right)^n\\ .\n$$\n\nWe now show that, as $n \\to \\infty$, $g_n^*(t) \\to e^{t^2/2}$, where\n$e^{t^2/2}$ is the moment generating function of the normal density $n(x) =\n(1/\\sqrt{2\\pi}) e^{-x^2/2}$ (see Example~\\ref{exam 10.3.3}).\n\nTo show this, we set $u(t) = \\log g(t)$, and\n\\begin{eqnarray*}\nu_n^*(t) &=& \\log g_n^*(t) \\\\\n         &=& n\\log g\\left(\\frac t{\\sqrt n}\\right) = nu\\left(\\frac t{\\sqrt n}\\right)\\ ,\n\\end{eqnarray*}\nand show that $u_n^*(t) \\to t^2/2$ as $n \\to \\infty$.  First we note that\n\\begin{eqnarray*} \n  u(0) &=& \\log g(0) = 0\\ , \\\\\n u'(0) &=& \\frac{g'(0)}{g(0)} = \\frac{\\mu_1}1 = 0\\ , \\\\\nu''(0) &=& \\frac{g''(0)g(0) - (g'(0))^2}{(g(0))^2} \\\\\n       &=& \\frac{\\mu_2 - \\mu_1^2}1 = \\sigma^2 = 1\\ .\n\\end{eqnarray*}\nNow by using L'H\\^opital's rule twice, we get\n\\begin{eqnarray*}\n\\lim_{n \\to \\infty} u_n^*(t) &=& \\lim_{s \\to \\infty} \\frac{u(t/\\sqrt s)}{s^{-1}}\\\\\n     &=& \\lim_{s \\to \\infty} \\frac{u'(t/\\sqrt s) t}{2s^{-1/2}} \\\\\n     &=& \\lim_{s \\to \\infty} u''\\left(\\frac t{\\sqrt s}\\right) \\frac{t^2}2 = \\sigma^2\n\\frac{t^2}2 = \\frac{t^2}2\\ .\n\\end{eqnarray*}\nHence, $g_n^*(t) \\to e^{t^2/2}$ as $n \\to \\infty$.  Now to complete the proof\nof the Central Limit Theorem, we must show that if $g_n^*(t) \\to e^{t^2/2}$,\nthen under our hypotheses the distribution functions $F_n^*(x)$ of the $S_n^*$\nmust converge to the distribution function $F_N^*(x)$ of the normal\nvariable~$N$; that is, that\n$$\nF_n^*(a) = P(S_n^* \\leq a) \\to \\frac1{\\sqrt{2\\pi}} \\int_{-\\infty}^a\ne^{-x^2/2}\\, dx\\ ,\n$$\nand furthermore, that the density functions $f_n^*(x)$ of the $S_n^*$ must\nconverge to the density function for~$N$; that is, that\n$$\nf_n^*(x) \\to \\frac1{\\sqrt{2\\pi}} e^{-x^2/2}\\ ,\n$$\nas $n \\rightarrow \\infty$.\n\\par\nSince the densities, and hence the distributions, of the $S_n^*$ are uniquely \ndetermined by their moment generating functions under our hypotheses, these\nconclusions are certainly plausible, but their proofs involve a detailed\nexamination of characteristic functions and Fourier transforms, and we shall\nnot attempt them here.\n\\par\nIn the same way, we can prove the Central Limit Theorem for bounded discrete\nrandom variables with integer values (see Theorem~\\ref{thm 9.3.6}).  Let $X$ be a\ndiscrete random variable with density function~$p(j)$, mean $\\mu = 0$, variance\n$\\sigma^2 = 1$, and moment generating function $g(t)$, and let $X_1$,~$X_2$,\n\\ldots,~$X_n$ form an independent trials process with common density~$p$.  
Let\n$S_n = X_1 + X_2 +\\cdots+ X_n$ and $S_n^* = S_n/\\sqrt n$, with densities\n$p_n$~and~$p_n^*$, and moment generating functions $g_n(t)$ and\n$g_n^*(t) = \\left(g(\\frac t{\\sqrt n})\\right)^n.$\nThen we have\n$$\ng_n^*(t) \\to e^{t^2/2}\\ ,\n$$\njust as in the continuous case, and this implies in the same way that the\ndistribution functions $F_n^*(x)$ converge to the normal distribution; that is,\nthat\n$$\nF_n^*(a) = P(S_n^* \\leq a) \\to \\frac1{\\sqrt{2\\pi}} \\int_{-\\infty}^a\ne^{-x^2/2}\\, dx\\ ,\n$$ \nas $n \\rightarrow \\infty$.\n\\par\nThe corresponding statement about the density functions $p_n^*$, however,\nrequires a little extra care (see Theorem~\\ref{thm 9.3.5}).  The trouble arises\nbecause the density $p(x)$ is not defined for all~$x$, but only for\ninteger~$x$.  It follows that the density $p_n^*(x)$ is defined only for~$x$ of\nthe form $j/\\sqrt n$, and these values change as $n$ changes.\n\nWe can fix this, however, by introducing the function $\\bar p(x)$, defined\nby the formula\n\n$$\n\\bar p(x) =  \\left \\{ \\begin{array}{ll}\n                        p(j), & \\mbox{if $j - 1/2 \\leq x < j + 1/2$,} \\cr\n                         0\\ , & \\mbox{otherwise}.\\end{array}\\right.\n$$\nThen $\\bar p(x)$ is defined for all~$x$, $\\bar p(j) = p(j)$, and the\ngraph of $\\bar p(x)$ is the step function for the distribution $p(j)$ (see Figure~3\nof Section~\\ref{sec 9.1}).\n\nIn the same way we introduce the step function $\\bar p_n(x)$ and\n$\\bar p_n^*(x)$ associated with the distributions $p_n$~and~$p_n^*$, and their\nmoment generating functions $\\bar g_n(t)$ and $\\bar g_n^*(t)$.  If we\ncan show that $\\bar g_n^*(t) \\to e^{t^2/2}$, then we can conclude that\n$$\n\\bar p_n^*(x) \\to \\frac1{\\sqrt{2\\pi}} e^{-x^2/2}\\ ,\n$$\nas $n \\rightarrow \\infty$, for all~$x$, a conclusion strongly suggested by \nFigure~\\ref{fig 9.2}.\n\\par\nNow $\\bar g(t)$ is given by\n\\begin{eqnarray*}\n\\bar g(t) &=& \\int_{-\\infty}^{+\\infty} e^{tx} \\bar p(x)\\, dx \\\\\n               &=& \\sum_{j = -N}^{+N} \\int_{j - 1/2}^{j + 1/2} e^{tx} p(j)\\, dx\\\\\n               &=& \\sum_{j = -N}^{+N} p(j) e^{tj} \\frac{e^{t/2} - e^{-t/2}}\n{2(t/2)} \\\\\n               &=& g(t) \\frac{\\sinh(t/2)}{t/2}\\ ,\n\\end{eqnarray*}\nwhere we have put\n$$\n\\sinh(t/2) = \\frac{e^{t/2} - e^{-t/2}}2\\ .\n$$\n\\par\nIn the same way, we find that\n\\begin{eqnarray*}\n\\bar g_n(t)   &=& g_n(t) \\frac{\\sinh(t/2)}{t/2}\\ , \\\\\n\\bar g_n^*(t) &=& g_n^*(t) \\frac{\\sinh(t/2\\sqrt n)}{t/2\\sqrt n}\\ .\n\\end{eqnarray*}\nNow, as $n \\to \\infty$, we know that $g_n^*(t) \\to e^{t^2/2}$, and, by\nL'H\\^opital's rule,\n$$\n\\lim_{n \\to \\infty} \\frac{\\sinh(t/2\\sqrt n)}{t/2\\sqrt n} = 1\\ .\n$$\nIt follows that\n$$\n\\bar g_n^*(t) \\to e^{t^2/2}\\ ,\n$$\nand hence that\n$$\n\\bar p_n^*(x) \\to \\frac1{\\sqrt{2\\pi}} e^{-x^2/2}\\ ,\n$$\nas $n \\rightarrow \\infty$.\nThe astute reader will note that in this sketch of the proof of Theorem~\\ref{thm 9.3.5},\nwe never made use of the hypothesis that the greatest common divisor of the\ndifferences of all the values that the $X_i$ can take on is~1.  This is a technical\npoint that we choose to ignore.  A complete proof may be found in Gnedenko and\nKolmogorov.\\footnote{B.~V. Gnedenko and A.~N. 
Kolmogorov, \\emx {Limit Distributions \nfor Sums of Independent Random Variables} (Reading: Addison-Wesley, 1968), p.~233.}\n\n\\subsection*{Cauchy Density}\\index{Cauchy density}\\index{density function!Cauchy}\nThe characteristic function of a continuous density is a useful tool even in\ncases when the moment series does not converge, or even in cases when the\nmoments themselves are not finite.  As an example, consider the Cauchy density\nwith parameter $a = 1$ (see Example~\\ref{exam 5.20})\n$$\nf(x) = \\frac1{\\pi(1 + x^2)}\\ .\n$$\nIf $X$ and $Y$ are independent random variables with Cauchy density~$f(x)$,\nthen the average $Z = (X + Y)/2$ also has Cauchy density~$f(x)$, that is,\n$$\nf_Z(x) = f(x)\\ .\n$$\nThis is hard to check directly, but easy to check by using characteristic\nfunctions.  Note first that\n$$\n\\mu_2 = E(X^2) = \\int_{-\\infty}^{+\\infty} \\frac{x^2}{\\pi(1 + x^2)}\\, dx = \\infty\n$$\nso that $\\mu_2$ is infinite.  Nevertheless, we can define the characteristic\nfunction $k_X(\\tau)$ of~$X$ by the formula\n$$\nk_X(\\tau) = \\int_{-\\infty}^{+\\infty} e^{i\\tau x}\\frac1{\\pi(1 + x^2)}\\, dx\\ .\n$$\nThis integral is easy to do by contour methods, and gives us\n$$\nk_X(\\tau) = k_Y(\\tau) = e^{-|\\tau|}\\ .\n$$\nHence,\n$$\nk_{X + Y}(\\tau) = (e^{-|\\tau|})^2 = e^{-2|\\tau|}\\ ,\n$$\nand since\n$$\nk_Z(\\tau) = k_{X + Y}(\\tau/2)\\ ,\n$$\nwe have\n$$\nk_Z(\\tau) = e^{-2|\\tau/2|} = e^{-|\\tau|}\\ .\n$$\nThis shows that $k_Z = k_X = k_Y$, and leads to the conclusion that $f_Z = f_X\n= f_Y$.\n\nIt follows from this that if $X_1$,~$X_2$, \\ldots,~$X_n$ is an independent\ntrials process with common Cauchy density, and if\n$$\nA_n = \\frac{X_1 + X_2 + \\cdots+ X_n}n\n$$\nis the average of the $X_i$, then $A_n$ has the same density as do the $X_i$. \nThis means that the Law of Large Numbers fails for this process; the\ndistribution of the average $A_n$ is exactly the same as for the individual\nterms.  Our proof of the Law of Large Numbers fails in this case because the\nvariance of~$X_i$ is not finite.\n\n\\exercises\n\\begin{LJSItem}\n\n\n\\i\\label{exer 10.3.1} Let $X$ be a continuous random variable with values in\n$[\\,0,2]$ and density~$f_X$.  Find the moment generating function~$g(t)$ for~$X$\nif\n\\begin{enumerate}\n\\item $f_X(x) = 1/2$.\n\n\\item $f_X(x) = (1/2)x$.\n\n\\item $f_X(x) = 1 - (1/2)x$.\n\n\\item $f_X(x) = |1 - x|$.\n\n\\item $f_X(x) = (3/8)x^2$.\n\\end{enumerate}\n\\noindent \\emx {Hint}: Use the integral definition, as in Examples~\\ref{exam\n10.3.1}~and~\\ref{exam 10.3.2}.\n\n\\i\\label{exer 10.3.2} For each of the densities in Exercise~\\ref{exer 10.3.1} calculate \nthe first and second moments, $\\mu_1$~and~$\\mu_2$, directly from their definition and \nverify that $g(0) = 1$, $g'(0) = \\mu_1$, and $g''(0) = \\mu_2$.\n\n\\i\\label{exer 10.3.3} Let $X$ be a continuous random variable with values in\n$[\\,0,\\infty)$ and density~$f_X$.  
Find the moment generating functions for~$X$\nif\n\\begin{enumerate}\n\\item $f_X(x) = 2e^{-2x}$.\n\n\\item $f_X(x) = e^{-2x} + (1/2)e^{-x}$.\n\n\\item $f_X(x) = 4xe^{-2x}$.\n\n\\item $f_X(x) = \\lambda(\\lambda x)^{n - 1} e^{-\\lambda x}/(n - 1)!$.\n\\end{enumerate}\n\n\\i\\label{exer 10.3.4} For each of the densities in Exercise~\\ref{exer 10.3.3}, calculate \nthe first and second moments, $\\mu_1$~and~$\\mu_2$, directly from their definition and verify\nthat $g(0) = 1$, $g'(0) = \\mu_1$, and $g''(0) = \\mu_2$.\n\n\\i\\label{exer 10.3.5} Find the characteristic function $k_X(\\tau)$ for each of the random\nvariables~$X$ of Exercise~\\ref{exer 10.3.1}.\n\n\\i\\label{exer 10.3.6} Let $X$ be a continuous random variable whose characteristic function\n$k_X(\\tau)$ is\n$$\nk_X(\\tau) = e^{-|\\tau|}, \\qquad -\\infty < \\tau < +\\infty\\ .\n$$\nShow directly that the density~$f_X$ of~$X$ is\n$$\nf_X(x) = \\frac1{\\pi(1 + x^2)}\\ .\n$$\n\n\\i\\label{exer 10.3.7} Let $X$ be a continuous random variable with values in $[\\,0,1]$, \nuniform density function $f_X(x) \\equiv 1$ and moment generating function $g(t) = (e^t\n- 1)/t$.  Find in terms of~$g(t)$ the moment generating function for\n\\begin{enumerate}\n\\item $-X$.\n\n\\item $1 + X$.\n\n\\item $3X$.\n\n\\item $aX + b$.\n\\end{enumerate}\n\n\\i\\label{exer 10.3.8} Let $X_1$,~$X_2$, \\ldots,~$X_n$ be an independent trials process with\nuniform density.  Find the moment generating function for\n\\begin{enumerate}\n\\item $X_1$.\n\n\\item $S_2 = X_1 + X_2$.\n\n\\item $S_n = X_1 + X_2 +\\cdots+ X_n$.\n\n\\item $A_n = S_n/n$.\n\n\\item $S_n^* = (S_n - n\\mu)/\\sqrt{n\\sigma^2}$.\n\\end{enumerate}\n\n\\i\\label{exer 10.3.9} Let $X_1$,~$X_2$, \\ldots,~$X_n$ be an independent trials process with\nnormal density of mean~1 and variance~2.  
Find the moment generating function\nfor\n\\begin{enumerate}\n\\item $X_1$.\n\n\\item $S_2 = X_1 + X_2$.\n\n\\item $S_n = X_1 + X_2 +\\cdots+ X_n$.\n\n\\item $A_n = S_n/n$.\n\n\\item $S_n^* = (S_n - n\\mu)/\\sqrt{n\\sigma^2}$.\n\\end{enumerate}\n\n\\i\\label{exer 10.3.10} Let $X_1$,~$X_2$, \\ldots,~$X_n$ be an independent trials process with\ndensity\n$$\nf(x) = \\frac12 e^{-|x|}, \\qquad -\\infty < x < +\\infty\\ .\n$$\n\\begin{enumerate}\n\\item Find the mean and variance of $f(x)$.\n\n\\item Find the moment generating function for $X_1$,~$S_n$, $A_n$,\nand~$S_n^*$.\n\n\\item What can you say about the moment generating function of~$S_n^*$ as $n\n\\to \\infty$?\n\n\\item What can you say about the moment generating function of~$A_n$ as $n\n\\to \\infty$?\n\\end{enumerate}\n \n\\end{LJSItem}\n", "meta": {"hexsha": "37572c79fb5a50c01213c3e38a1536f4b0e57756", "size": 79560, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/ch10.tex", "max_stars_repo_name": "kskyten/introduction-to-probability", "max_stars_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/ch10.tex", "max_issues_repo_name": "kskyten/introduction-to-probability", "max_issues_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/ch10.tex", "max_forks_repo_name": "kskyten/introduction-to-probability", "max_forks_repo_head_hexsha": "288c82a0cb94e6b9d702eb8803dc342052d411f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.3243791181, "max_line_length": 229, "alphanum_fraction": 0.6386375063, "num_tokens": 29664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7981867801399695, "lm_q1q2_score": 0.5735725540533476}}
{"text": "\\section{Co-occurrence Data Embedding}\n\\label{sec:code}\n\nThe general strategy we follow for unsupervised syntactic category\nacquisition is to combine features of the context with the identity\nand features of the target word.  Our preliminary experiments\nindicated that using the context information alone (e.g. clustering\nsubstitute vectors) without the target word identity and features had\nlimited success.\\footnote{A 10-nearest-neighbor supervised baseline\n  using cosine distance between substitute vectors gives .7213\n  accuracy.  Clustering substitute vectors using various distance\n  metrics and dimensionality reduction methods give results inferior\n  to this upper bound.} It is the co-occurrence of a target word with a\nparticular type of context that best predicts the syntactic category.\nIn this section we review the unsupervised methods we used to model\nco-occurrence statistics: the Co-occurrence Data Embedding (CODE)\nmethod \\cite{globerson2007euclidean} and its spherical extension\n(S-CODE) introduced by \\cite{maron2010sphere}.\n\nLet $X$ and $Y$ be two categorical variables with finite cardinalities\n$|X|$ and $|Y|$.  We observe a set of pairs $\\{x_i, y_i\\}_{i=1}^n$\ndrawn IID from the joint distribution of $X$ and $Y$.  The basic idea\nbehind CODE and related methods is to represent (embed) each value of\n$X$ and each value of $Y$ as points in a common low dimensional\nEuclidean space $\\mathbf{R}^d$ such that values that frequently\nco-occur lie close to each other.  There are several ways to formalize\nthe relationship between the distances and co-occurrence statistics, in\nthis paper we use the following:\n\\begin{equation} \\label{eq:probability}\np(x,y) = \\frac{1}{Z} \\bar{p}(x) \\bar{p}(y) e^{-d^2_{x,y}}\n\\end{equation}\n\\noindent where $d^2_{x,y}$ is the squared distance between the\nembeddings of $x$ and $y$, $\\bar{p}(x)$ and $\\bar{p}(y)$ are empirical\nprobabilities, and $Z=\\sum_{x,y} \\bar{p}(x) \\bar{p}(y) e^{-d^2_{x,y}}$ is\na normalization term.  If we use the notation $\\phi_x$ for the\npoint corresponding to $x$ and $\\psi_y$ for the point corresponding\nto $y$ then $d^2_{x,y} = \\|\\phi_x-\\psi_y\\|^2$.  The log-likelihood\nof a given embedding $\\ell(\\phi, \\psi)$ can be expressed as:\n\\begin{eqnarray} \n&&\\ell(\\phi, \\psi) = \\sum_{x,y} \\bar{p}(x,y) \\log p(x,y) \\label{eq:likelihood} \\\\\n&&= \\sum_{x,y} \\bar{p}(x,y) (-\\log Z + \\log \\bar{p}(x)\\bar{p}(y) - d^2_{x,y}) \\nonumber \\\\\n&&= -\\log Z + \\mathit{const} - \\sum_{x,y} \\bar{p}(x,y) d^2_{x,y} \\nonumber\n\\end{eqnarray}\nThe likelihood is not convex in $\\phi$ and $\\psi$.  We use gradient\nascent to find an approximate solution for a set of $\\phi_x$, $\\psi_y$\nthat maximize the likelihood.  The gradient of the $d^2_{x,y}$ term\npulls neighbors closer in proportion to the empirical joint\nprobability:\n\\begin{equation}\n\\frac{\\partial}{\\partial\\phi_x} \\sum_{x,y} -\\bar{p}(x,y) d^2_{x,y} =\n\\sum_y 2 \\bar{p}(x,y) (\\psi_y - \\phi_x) \\label{eq:attract}\n\\end{equation}\nThe gradient of the $Z$ term pushes neighbors apart in proportion to the\nestimated joint probability:\n\\begin{equation}\n\\frac{\\partial}{\\partial\\phi_x} (-\\log Z) = \\sum_y 2 p(x,y) (\\phi_x -\n\\psi_y) \\label{eq:repulse}\n\\end{equation}\nThus the net effect is to pull pairs together if their estimated\nprobability is less than the empirical probability and to push them\napart otherwise.  
The gradients with respect to $\\psi_y$ are\nsimilar.\n\nS-CODE \\cite{maron2010sphere} additionally restricts all $\\phi_x$ and\n$\\psi_y$ to lie on the unit sphere.  With this restriction, $Z$ stays\naround a fixed value during gradient ascent.  This allows S-CODE to\nsubstitute an approximate constant $\\tilde{Z}$ in gradient\ncalculations for the real $Z$ for computational efficiency.  In our\nexperiments, we used S-CODE with its sampling based stochastic\ngradient ascent algorithm and smoothly decreasing learning\nrate.\n\n", "meta": {"hexsha": "77ceed633e962c14f55f47aea07b65f57f918055", "size": 3830, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/cl2012/emnlp12/scode.tex", "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_issues_repo_path": "papers/cl2012/emnlp12/scode.tex", "max_issues_repo_name": "ai-ku/upos", "max_issues_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/cl2012/emnlp12/scode.tex", "max_forks_repo_name": "ai-ku/upos", "max_forks_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "avg_line_length": 52.4657534247, "max_line_length": 90, "alphanum_fraction": 0.7420365535, "num_tokens": 1129, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5735725475181814}}
{"text": "\\section{Oscillations on a paraboloid basin}\nThis test simulates water oscillations on a paraboloid basin. The analytical solution was derived by Thacker~\\cite{Thacker1981}, and is periodic. At any instant in time, the free surface elevation is paraboloid, and the velocity is linear. The scenario includes regular wetting and drying, as the flow oscillates up and down in the basin. As well as testing the ability of the code to do wetting and drying, it will highlight any numerical energy loss or gain, and manifest as an increase or decrease in the magnitude of the flow oscillations over long time periods (compared with the analytical solution). This test was also implemented by Yoon and Cho~\\cite{YC2001} to investigate the performance of their numerical method.\n\nConsider the topography in two dimensions\n\\begin{equation}\nz(x,y) = -D_0\\left[1 -\\left(\\frac{r}{L}\\right)^2\\right]\n\\end{equation}\nwhere $r=\\sqrt{x^2 + y^2}$. Here $D_0$ is the largest depth when water is still and $L$ is the distance between the centre of water surface and the shore when water is still.\nThe analytical solution is \n\\begin{equation}\nu(x,y,t) = \\frac{\\omega r A \\sin{(\\omega t)}}{ 2 \\left[1 -A \\cos(\\omega t)\\right] },\n\\end{equation}\n\\begin{equation}\nw(x,y,t) = D_0 \\left[\\frac{\\sqrt(1-A^2)}{1-A\\cos(\\omega t)}  -1 \n-\\left( \\frac{r}{L}\\right)^2 \\frac{1-A^2}{[(1-A\\cos(\\omega t))^2]-1} \\right].\n\\end{equation}\nHere $\\omega=\\frac{2\\sqrt{2 g D_0}}{L}$ and $A = \\frac{L^4 - R_0^4}{L^4 + R_0^4}$ and $R_0$ is the horizontal distance between the centre of water surface and the shore at the initial condition.\nThe initial condition is set by taking $t=0$ in the analytical solution.\n\n\\subsection{Results}\nFor our test, we consider $D_0=1000$, $L=2500$, and $R_0=2000$.\nAfter running the simulation for some time, we have Figures~\\ref{fig:cs_stage}--\\ref{fig:cs_xvel} showing the stage, $x$-momentum, and $x$-velocity respectively. There should be a good agreement between numerical and analytical solutions, although wet-dry artefacts may appear in the 'nearly-dry' areas, where the numerical method can cause the water to drain too slowly (similar to that reported in \\cite{KESSERWANIA14} using a finite volume scheme with similarities to discontinuous elevation algorithms in \\anuga{}).\n\nAs time goes on, some small deviations may appear. 
These are shown in Figures~\\ref{fig:w_centre}--\\ref{fig:u_centre}, which illustrate the stage, $x$-momentum, and $x$-velocity at the centroid of the domain.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{cross_section_stage.png}\n\\caption{Stage on a cross section of the basin at time $t=50$\\,.}\n\\label{fig:cs_stage}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{cross_section_xmom.png}\n\\caption{Xmomentum on a cross section of the basin at time $t=50$\\,.}\n\\label{fig:cs_xmom}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{cross_section_xvel.png}\n\\caption{Xvelocity on a cross section of the basin at time $t=50$\\,.}\n\\label{fig:cs_xvel}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{Stage_origin.png}\n\\caption{Stage over time in the centre of the paraboloid basin.}\n\\label{fig:w_centre}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{Xmom_origin.png}\n\\caption{Xmomentum over time in the centre of the paraboloid basin.}\n\\label{fig:p_centre}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{Xvel_origin.png}\n\\caption{Xvelocity over time in the centre of the paraboloid basin.}\n\\label{fig:u_centre}\n\\end{center}\n\\end{figure}\n\n\n\\endinput\n", "meta": {"hexsha": "8831ccaa6dbd0232f9d09ec83f61210e9e2043b3", "size": 3724, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "validation_tests/analytical_exact/paraboloid_basin/results.tex", "max_stars_repo_name": "samcom12/anuga_core", "max_stars_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_stars_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_stars_count": 136, "max_stars_repo_stars_event_min_datetime": "2015-05-07T05:47:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T03:07:40.000Z", "max_issues_repo_path": "validation_tests/analytical_exact/paraboloid_basin/results.tex", "max_issues_repo_name": "samcom12/anuga_core", "max_issues_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_issues_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-05-03T09:27:54.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-20T04:22:48.000Z", "max_forks_repo_path": "validation_tests/analytical_exact/paraboloid_basin/results.tex", "max_forks_repo_name": "samcom12/anuga_core", "max_forks_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_forks_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_forks_count": 70, "max_forks_repo_forks_event_min_datetime": "2015-03-18T07:35:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T07:07:29.000Z", "avg_line_length": 46.55, "max_line_length": 725, "alphanum_fraction": 0.7529538131, "num_tokens": 1115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.7981867825403177, "lm_q1q2_score": 0.5735725461576456}}
{"text": "The kinematic tree is an abstract representation of a collection of transforms. \nUnlike the rigid collection (section \\ref{sec: rigid}), the kinematic tree is able to represent transforms in multiple datum frames. \nThe tree can then use the network of frames to \"lookup\" a transform between any two connected nodes, as hinted at in figure \\ref{fig: poses}.\nThis concept is widely used in robotics and therefore has a place in the prototyping package. \nThere are several algorithms this class implements:\n\\begin{enumerate}\n\t\\item Representation: Construct a tree representation given only the edges\n\t\\item Lookup: apply breadth first graph search to find a path between any two nodes on the tree\n\t\\item Root: use depth first search to express all frames in the base link frame or another specified frame on the tree\n\\end{enumerate}\n\n\\subsection{Representation}\nFrames in a kinematic tree must have a child frame, a parent frame, or both. \nFrame $T$ is described as:\n\\begin{equation}\n\tT_i^{p(i)} = \\begin{bmatrix}\n\t\tR_i^{p(i)} & t_i^{p(i)} \\\\\n\t\t\\bf{0} & 1\n\t\\end{bmatrix}\n\\end{equation}\nwhere $i$ is the child frame, and $p(i)$ is the parent frame. \nThe kinematic tree has a root, which we can also call the \"base link\". \nThe challenge with the kinematic tree is that we are only given the transforms, which are the edges in the graph. \nSo we need to create a tree representation from just the transforms. \nTransforms contain the parent and child of the edge, and are therefore directed. \nThus we want a representation that is easily able to express the directed nature of the graph. \n\nThe importance of the edges in the kinematic tree suggests that an incidence list is the best representation for this situation. \nFor example, say we have a tree as shown in figure \\ref{fig: tree}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{images/example_tree.png}\n\t\\caption{Example tree}\n\t\\label{fig: tree}\n\\end{figure}\n\nThis takes the form:\n\\begin{align}\n\tbase &: \\big[ T^{base}_1, T^{base}_2\\big] \\\\\n\t1 &: \\big[ T^1_3, T^1_{base} \\big]\\\\\n\t2 &: \\big[ T^2_{base}\\big]\\\\\n\t3 &: \\big[ T_4^3, T_5^3 \\big] \\\\\n\t4 &: \\big[ T_3^4\\big] \\\\\n\t5 &: \\big[ T_3^5\\big] \\\\\n\\end{align}\nWe can improve memory storage by using a dictionary (hash map) in order to only save the relevant edges. 
\nThe construction algorithm is shown in algorithm \\ref{alg: inc_list}.\n\n\\begin{algorithm}\n\t\\DontPrintSemicolon\n\t\\KwIn{Unordered edge set $T$}\n\t\\KwOut{Hash map where frames are the keys and the transforms they are incident with are values}\n\ttree $\\gets \\{\\}$\n\n\t\\tcp{hash.insert(key : value)}\n\ttree.insert(root : $\\varnothing$)\n\n\t\\For{$t \\in T$}{$p \\gets t.parent$\n\n\t\t$c \\gets t.child$\n\n\t\t\\tcp{process parents first}\n\t\t\\If{$p \\neq NULL$}{\n\t\t\t\\eIf{$p \\in$ tree}{\n\t\t\t\t\\tcp{we store the list of transforms touching $p$}\n\t\t\t\ttree[$p$].append($t$)\n\t\t\t}{\n\t\t\t\ttree.insert($p : [t]$)\n\t\t\t}\n\t\t}\n\n\t\t\\tcp{process children}\n\t\t\\If{$c \\neq NULL$}{\n\t\t\t\\tcp{store whether the edge is forwards (used later)}\n\t\t\t$(t^{-1}).forwards \\gets false$\n\n\t\t\t\\eIf{$c \\in $ tree}{\n\t\t\t\ttree[$c$].append($t^{-1}$)\n\t\t\t}{\n\t\t\t\ttree.insert($c : [t^{-1}]$)\n\t\t\t}\n\n\t\t}\n\t}\n\n\t\\Return{tree}\n\t\\caption{Incidence list representation for the Kinematic Tree}\n\t\\label{alg: inc_list}\n\\end{algorithm}\n\nOnce the incidence list is constructed, we can use it to query the tree to find paths between frames.\nThe incidence list keeps track of direction by assigning a flag to each edge specifying whether it is backwards or forwards. \nThis will be useful for the rooting algorithm, shown later. \n\nAn important thing to note is that in the context of the kinematic tree, an edge transform $T^i_j$ has a child $j$ and a parent $i$, just as a node does. \nHowever an edge can only have one parent and one child, whereas a node can have only one parent, but many children. \n\n\\subsection{Lookup path between frames}\n\nNow say we wanted to know the transform between two frames $i, j$ on the tree that aren't connected by an existing edge. \nWe can compute this transform, $T_i^j$, by finding the path between the two frames, and then multiplying the transforms we pass through, as shown in figure \\ref{fig: lookup}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{images/lookup_path.png}\n\t\\caption{Lookup path between frames}\n\t\\label{fig: lookup}\n\\end{figure}\n\nThis can be done by executing a breadth first search on the tree in an attempt to find a path between two frames. \nThe algorithm is given in algorithm \\ref{alg: lookup}.\nIt shows how the standard search is modified to record the edges traversed. 
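\n\nA compact Python counterpart of this edge-storing search could read as follows (again a sketch, under the same assumed transform interface as the earlier snippet):\n\\begin{verbatim}\nfrom collections import deque\n\ndef lookup(tree, start, end):\n    # breadth first search that records, for each frame reached,\n    # the edge used to reach it\n    backpointers = {start: None}\n    queue = deque([start])\n    while queue:\n        x = queue.popleft()\n        for t in tree[x]:\n            if t.child not in backpointers:\n                backpointers[t.child] = t\n                if t.child == end:\n                    return backpointers\n                queue.append(t.child)\n    return None  # the frames are not connected\n\\end{verbatim}\n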
\nOnce we have a path as a sequence of edges, we can multiply these transforms together to obtain the transform between frames $i, j$.\n\n\n\\begin{algorithm}\n\t\\KwIn{start frame, end frame}\n\t\\KwOut{backpointers dictionary}\n\t\\DontPrintSemicolon\n\t\\SetKwFunction{expand}{expand}\n\t\\SetKwFunction{bfs}{search\\_for\\_path}\n\t\\SetKwProg{Fn}{def}{:}{}\n\n\t$O \\gets \\varnothing$\n\n\t\\tcp{the main search function}\n\t\\Fn{\\bfs{start, end}}{\n\n\t\t$C \\gets \\varnothing$\n\n\t\t\\tcp{the backpointers dictionary stores edges we've passed through}\n\t\tbackpointers = \\{\\}\n\n\t\t$O$.insert(start)\n\n\t\tbackpointers.insert(start : $NULL$)\n\n\t\t\\While{$O \\neq \\varnothing$}{\n\t\t\t$x \\gets O$.pop()\n\n\t\t\t$C$.insert($x$)\n\n\t\t\t\\If{\\expand{x}}{\n\t\t\t\t\\Return{Success, backpointers}\n\t\t\t}\n\n\t}\n\n\t\\Return{Failure, backpointers}\n\t}\t\n\n\t\\tcp{The node expansion function}\n\t\\Fn{\\expand{x}}{\n\t\t\\tcp{get the ordered set of transforms linked to $x$}\n\t\t$T \\gets x.edges$\n\n\t\t\\For{\n\t\t\t$t \\in T$\n\t\t}{\n\n\t\t\t\\uIf{$t$.child = end}{\n\t\t\t\tbackpointers.insert($t.child : t$)\n\t\t\t\t\n\t\t\t\t\\Return{Success}\n\t\t\t\t}\n\t\t\t\\ElseIf{$t.child \\notin O \\textrm{ and } t.child \\notin C$}{\n\n\t\t\t\tbackpointers.insert($t.child : t$)\n\n\t\t\t\tO.insert($t.child$)\n\t\t\t}\n\t\t\t\n\t\t}\n\n\t\t\\Return{Failure}\n\t}\n\n\t\\caption{Lookup between frames using breadth first search. The backpointers can then be used to extract the path of $n$ transforms between the frames, whose product gives the requested transform.}\n\t\\label{alg: lookup}\n\\end{algorithm}\n\n\\subsection{Root function}\nThe other main algorithm used by the kinematic tree is the root function. \nThe purpose of this is simply to put every frame in the tree into the base frame. \nFrom here, it is trivial to place the entire tree in coordinates of any frame in the tree. \nIf we use figure \\ref{fig: tree} for reference, then we want the frames $T^{base}_1,T^{base}_2, T^{base}_3, T^{base}_4, T^{base}_5$.\nWe can obtain this through a depth first search that visits nodes in order and extracts the transform during the traversal. \nUsing the tree in figure \\ref{fig: tree}, we would visit the nodes from left to right: 4, 3, 5, 1, 2. \n\n\\begin{algorithm}\n\t\\SetKwFunction{expand}{expand}\n\t\\SetKwProg{Fn}{def}{:}{}\n\n\t\\KwIn{Root of the tree}\n\t\\KwOut{Dictionary of transforms $T_i^{base}$ for each frame $i$ in the tree}\n\n\tframes = \\{\\}\n\n\t\\tcp{the root is expressed in its own frame by the identity transform}\n\tframes.insert(root : $I$)\n\n\t$O \\gets \\varnothing$\n\n\t$C \\gets \\varnothing$\n\n\t$O$.insert(root)\n\n\t\\While{$O \\neq \\varnothing$}{\n\t\t$x \\gets O$.pop()\n\t\t\n\t\t$C$.insert($x$)\n\n\t\t\\expand{x}\n\n\t}\n\n\t\\Return{frames}\n\n\t\\tcp{depth first search expansion}\n\t\\Fn{\\expand{$x$}}{\n\t\t\\tcp{traverse the ordered set of edges incident to $x$ in reverse order}\n\t\t$T \\gets x.edges$\n\n\t\t$R \\gets reverse(T)$\n\n\t\t\\ForAll{$t \\in R$}{\n\t\t\t\\If{$t.forwards$}{\n\t\t\t\t$c \\gets t.child$\n\n\t\t\t\t\\If{$c \\notin O \\textrm{ and } c \\notin C$}{\n\t\t\t\t\t$O$.insert($c$)\n\n\t\t\t\t\t\\tcp{$T^{base}_c = T^{base}_x \\cdot T^x_c$}\n\t\t\t\t\tframes.insert($c$ : frames[$x$] $*$ $t$)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t\\caption{The depth first traversal used to derive the set of transforms $\\mathcal{T} = \\{ T^{base}_i : i \\in K\\}$ for all frames $i$ in the kinematic tree $K$.}\n\t\\label{alg: root}\n\\end{algorithm}\n\nThis is shown in algorithm \\ref{alg: root}. 
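\n\nA Python rendering of the same traversal, under the assumptions used in the earlier sketches (with transform composition written as *), might be:\n\\begin{verbatim}\ndef root_tree(tree, root, identity):\n    # express every frame in the root (base link) frame\n    frames = {root: identity}\n    stack = [root]\n    while stack:\n        x = stack.pop()\n        for t in tree[x]:\n            forwards = getattr(t, 'forwards', True)\n            if forwards and t.child not in frames:\n                # T_base_child = T_base_parent * T_parent_child\n                frames[t.child] = frames[x] * t\n                stack.append(t.child)\n    return frames\n\\end{verbatim}\n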
\nThe root function can then be used to graph the entire tree in the base frame or any other frame in the tree. ", "meta": {"hexsha": "a0ed66afe5a416ad08b1586f34a53ac28a4ffd2b", "size": 7788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/math/core/core_kinematic_tree.tex", "max_stars_repo_name": "bkolligs/pyrobo", "max_stars_repo_head_hexsha": "341687cbed96f839fae682f9ec1c58524d7b35b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-20T15:40:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-11T03:20:29.000Z", "max_issues_repo_path": "docs/math/core/core_kinematic_tree.tex", "max_issues_repo_name": "bkolligs/pyrobo", "max_issues_repo_head_hexsha": "341687cbed96f839fae682f9ec1c58524d7b35b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-07-06T01:31:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-31T00:05:39.000Z", "max_forks_repo_path": "docs/math/core/core_kinematic_tree.tex", "max_forks_repo_name": "bkolligs/robotics-prototyping", "max_forks_repo_head_hexsha": "ac7766921c7e8b2c51792697ddf2166ab9a46c82", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.049382716, "max_line_length": 185, "alphanum_fraction": 0.6983821263, "num_tokens": 2326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5735725378976027}}
{"text": "\\chapter{Spectral approximation properties of ReLU DNN}\nHere we will talk about how to use ReLU DNN to approximate polynomials\nand then achieve the ``spectral\" accuracy for analytical functions.\nThe main results can be found in \\cite{yarotsky2017error,\n  wang2018exponential}. Some related works are\n\\cite{liang2016why,lu2017expressive}\n\n\nFirst, we will introduce the following notation\n\\begin{itemize}\n\\item Colon notation for subscript: let $\\{x_{m:n}\\} = \\{x_i:i =\n  m,m+1,...,n\\}$ and $\\{x_{m_1:n_1,m_2:n_2}\\}= \\{x_{i,j}:i =\n  m_1,...,n_1,j = m_2,...,n_2\\}.$\n\\item Linear combination: denote $y\\in \\mathcal L(x_1,...,x_n)$ if\n  there exist $\\beta_i\\in \\mathbb{R},i=1,...,n,$ such that $y =\n  \\beta_0+\\beta_1x_1+\\cdots+\\beta_n x_n$.\n\\item Linear combination with ReLU activation: denote $\\tilde{y}\\in\n  \\tilde{\\mathcal L}(x_1,...,x_n)$ if there exists $y\\in\n  \\mathcal{L}(x_1,...,x_n)$ and $\\tilde{y} = \\mbox{ReLU}(y) =\n  \\max(y,0).$\n\\end{itemize}\n\\begin{definition}\nGiven a function $f(x_1,...,x_d)$, if there exist variables $\\{y_{1:L,1:M}\\}$ such that \n\\begin{equation}\ny_{1,m}\\in \\tilde{\\mathcal L}(x_{1:d}),\\quad y_{l+1,m} \\in \\tilde{\\mathcal L}(x_{1:d},y_{l,1:M}),\\quad f\\in\\mathcal{L}(x_{1:d},y_{1:L,1:M}),\n\\end{equation}\\label{def:netclass}\nwhere $m=1,...,M,l=1,...,L-1$, then $f$ is said to be in the neural nets class $\\mathcal{F}_{L,M}(\\mathbb{R}^d)$, and $\\{y_{1:L,1:M}\\}$ is called a set of hidden variables of $f$. \n\\end{definition}\n\\begin{properties}\nA function $f\\in \\mathcal F_{L,M}(\\mathbb R^d)$ can be represented by a ReLU network with depth $L+1$ and width $M+d+1$.\n\\end{properties}\n\\begin{proof}\nLet $\\{y_{1:L}\\}$ be the hidden variables of $f$ that satisfies (\\ref{def:netclass}), where\n$$\nf = \\alpha_0 + \\sum_{i=1}^d \\alpha_1x_1+\\sum_{l=1}^L\\sum_{m=1}^M \\beta_{l,m}y_{l,m}.\n$$\nConsider the following variables $\\{h_{1:L,1:M}\\}$:\n$$\nh_{l,1:M} = y_{l,1:M}, \\quad h_{l,M+1:M+d} = x_{1:d}\n$$\nfor $l = 1,...,L,$ and\n$$\nh_{1,M+d+1} = \\alpha_0 + \\sum_{i=1}^d\\alpha_ix_i,\\quad h_{l+1,M+d+1} =h_{l,M+d+1} + \\sum_{m=1}^M\\beta_{l,m}h_{l,m}\n$$\nfor $l=1,...,L-1$. One can see that $h_{1,m}\\in \\tilde{\\mathcal L}(x_{1:d}),h_{l+1,m}\\in \\tilde{L}(h_{l,1:M+d+1}),m = 1,...,M+d+1,l = 1,...,L-1,$ and $f\\in \\mathcal{L}(h_{L,1:M+d+1}),$ which is a representation of a standard neural net.\n\\end{proof}\n\n\\begin{properties}\\label{prop:net class}\n(Addition and composition of neural net class $\\mathcal F_{L,M}$)\n\\begin{itemize}\n\\item[1]\n$$\n\\mathcal F_{L_1,M} + \\mathcal{F}_{L_2,M} \\subseteq \\mathcal{F}_{L_1+L_2,M},\n$$\ni.e. if $f_1\\in \\mathcal F_{L_1,M}(\\mathbb{R}^d)$ and $f_2\\in \\mathcal{F}_{L_2,M}(\\mathbb{R}^d),$ then $f_1+f_2\\in\\mathcal{F}_{L_1+L_2,M}(\\mathbb{R}^d)$.\n\\item[2]\n$$\n\\mathcal F_{L_2,M}\\circ \\mathcal F_{L_1,M+1} \\subseteq \\mathcal F_{L_1+L_2,M+1}.\n$$ \ni.e. if $f_1(x_1,...,x_d)\\in \\mathcal{F}_{L_1,M+1}(\\mathbb{R}^d)$ and $f_2(y,x_1,...,x_d)\\in \\mathcal{F}_{L_2,M}(\\mathbb{R}^{d+1}),$ then \n$$\nf_2(f_1(x_1,...,x_d),x_1,...,x_d)\\in \\mathcal{F}_{L_1+L_2,M+1}(\\mathbb{R}^d).\n$$\n\\end{itemize}\n\\end{properties}\n\n\\begin{proof}\nFor the addition property, denote the hidden variables of $f_1$ and $f_2$ as $\\{y_{1:L_1,1:M}^{(1)}\\}$ and $\\{y_{1:L_2,1:M}^{(2)}\\}$. Let\n$$\ny_{1:L_1,1:M} = y_{1:L_1,1:M}^{(1)},\\quad y_{L_1+1,L_1+L_2,1:M} = y_{1:L_2,1:M}^{(2)}. 
\n$$\nBy definition, $\\{y_{1:L_1+L_2,1:M}\\}$ is a set of hidden variables of $f_1+f_2$. Thus $f_1+f_2\\in \\mathcal F_{L_1+L_2,M}.$\n\nFor the composition property, denote the hidden variables of $f_1$ and $f_2$ by $\\{y_{1:L_1,1:M+1}^{(1)}\\}$ and $\\{y_{1:L_2,1:M}^{(2)}\\}$. Let\n$$\ny_{1:L_1,1:M+1} = y_{1:L_1,1:M+1}^{(1)},\\quad y_{L_1+1:L_1+L_2,1:M} = y_{1:L_2,1:M}^{(2)},\n$$\n$$\ny_{L_1+1,M+1} =\\cdots= y_{L_1+L_2,M+1} = f_1(x_1,...,x_d).\n$$\nOne can see that $\\{y_{1:L_1+L_2,1:M+1}\\}$ is a set of hidden variables of $f_2(f_1(\\bm x),\\bm x)$, thus the composition property holds.\n\\end{proof}\n\n\\begin{definition}\nGiven a continuous function $\\varphi(\\bm{x}),\\bm{x} \\in [-1,1]^d$ and a continuous function class $\\mathcal F([-1,1]^d),$ define the $L^\\infty$ distance\n$$\n\\mbox{dist} (\\varphi,\\mathcal F) = \\inf_{f\\in \\mathcal F} \\max_{\\bm x \\in [-1,1]^d} |\\varphi(\\bm{x}) - f(\\bm{x})|.\n$$\n\\end{definition}\n\n\\begin{properties}\\label{prop:dis}\n(Addition and composition properties for the distance function)\n\\begin{itemize}\n\\item[1] Let $\\varphi_1$ and $\\varphi_2$ be continuous functions. Let $\\mathcal F_{1}$ and $\\mathcal F_2$ be two continuous function classes, then \n$$\n\\mbox{dist}(\\alpha_{1}\\varphi_1+\\alpha_{2}\\varphi_2,\\mathcal{F}_1+\\mathcal F_2)\\le |\\alpha_1|\\mbox{dist}(\\varphi_1,\\mathcal F_1) + |\\alpha_2|\\mbox{dist}(\\varphi_2,\\mathcal F_2),\n$$\nwhere $\\alpha_1$ and $\\alpha_2$ are two real numbers.\n\\item[2] Assume that $\\varphi_1(\\bm x) = \\varphi_1(x_1,...,x_d),\\varphi_2(y,\\bm x) = \\varphi_2(y,x_1,...,x_d)$ satisfy $\\varphi_1([-1,1]^d)\\subseteq[-1,1]$. Let $\\mathcal F_1([-1,1]^d),\\mathcal F_2([-1,1]^{d+1})$ be two continuous function classes, then\n$$\n\\mbox{dist}(\\varphi_2(\\varphi_1(\\bm x),\\bm x),\\mathcal F_2\\circ \\mathcal F_1)\\le L_{\\varphi_2}\\mbox{dist}(\\varphi_1,\\mathcal F_1) +\\mbox{dist}(\\varphi_2,\\mathcal F_2),\n$$\nwhere $L_{\\varphi_2}$ is the Lipschitz norm of $\\varphi_2$ with respect to $y$.\n\\end{itemize}\n\\end{properties}\n\\begin{proof}\nThe addition property obviously holds. Now we prove the composition property. For any $f_1\\in\\mathcal F_1,f_2\\in \\mathcal F_2$, one has\n\\[\n\\begin{split}\n|\\varphi_2(\\varphi_1(\\bm x),\\bm x) - f_2(f_1(\\bm x),\\bm x)|&\\le |\\varphi_2(\\varphi_1(\\bm x),\\bm x) - \\varphi_2(f_1(\\bm x),\\bm x)|+|\\varphi_2(f_1(\\bm x),\\bm x)-f_2(f_1(\\bm x),\\bm x)|\\\\\n& \\le L_{\\varphi_2} ||\\varphi_1(\\bm x) -f_1(\\bm x)||_\\infty +||\\varphi_2(y,\\bm x) - f_2(y,\\bm x)||_\\infty.\n\\end{split}\n\\]\nTaking $f_1^* = \\argmin_f ||\\varphi_1(\\bm x) - f(\\bm x)||_\\infty$ and $f_2^* = \\argmin_f ||\\varphi_2(y,\\bm x) - f(y,\\bm x)||_\\infty$ proves the claim.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:xsquare}\nThe function $\\varphi(x) = x^2,x\\in [-1,1]$ can be approximated by deep neural nets with an exponential convergence rate:\n$$\n\\mbox{dist}(x^2,\\mathcal{F}_{L,2}([-1,1]))\\le 2^{-2L}.\n$$\n\\end{lemma}\n\\begin{proof}\nConsider the function\n$$\ng(y) = \\left\\{\\begin{split}\n\t&2y,&\\quad 0\\le y<1/2,\\\\\n\t&2(1-y),&\\quad 1/2\\le y\\le 1,\n\t\\end{split}\\right.\n$$\nthen \n\\begin{equation}\\label{func:g}\ng(y) = 2y -4\\mbox{ReLU}(y-1/2)\n\\end{equation}\nin $[0,1]$. 
Define the hidden variables $\\{y_{1:L,1:2}\\}$ as follows:\n\\[\n\\begin{split}\n y_{1,1} = \\mbox{ReLU}(x),&\\quad y_{1,2}=\\mbox{ReLU}(-x),\\\\\n y_{2,1} = \\mbox{ReLU}(y_{1,1}+y_{1,2}),&\\quad y_{2,2} = \\mbox{ReLU}(y_{1,1}+y_{1,2}-1/2),\\\\\n y_{l+1,1} = \\mbox{ReLU}(2y_{l,1}-4y_{l,2}),&\\quad y_{l+1,2} = \\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\\\\n\\end{split}\n\\]\nfor $l = 2,3,...,L-1$. Using induction, one can see that $|x| = y_{1,1}+y_{1,2}$ and \n\\begin{equation}\\label{func:glinduction}\ng_l(|x|)=\\underbrace{g\\circ g\\circ \\cdots \\circ g}_{l}(|x|) = 2y_{l+1,1}-4y_{l+1,2},\\quad l=1,...,L-1,\n\\end{equation} \nfor $x\\in [-1,1]$, i.e.\n\\[\n\\begin{split}\ng_l(|x|) &= g\\left(g_{l-1}(|x|)\\right)\\\\\n         &= g(2y_{l,1}-4y_{l,2})\\qquad (\\mbox{Eq~(\\ref{func:g})})\\\\\n         &= 2(2y_{l,1}-4y_{l,2}) - 4\\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\qquad (g_{l-1}(|x|)\\ge 0~\\mbox{if}~x\\in[-1,1])\\\\\n         &= 2\\mbox{ReLU}(2y_{l,1}-4y_{l,2})- 4\\mbox{ReLU}(2y_{l,1}-4y_{l,2}-1/2)\\\\\n         & = 2y_{l+1,1} - 4y_{l+1,2},\\\\\n\\end{split}\n\\]\nso by induction Eq~(\\ref{func:glinduction}) holds.\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{dl_approx_analytic_gl}\n\t\\caption{The graph of $g_l(x)$}\n\t\\label{fig:gl}\n\\end{figure}\n\nLet $f_m$ be the piecewise linear interpolation of $\\varphi(x) = x^2$ on $[0,1]$ with $2^m+1$ uniformly distributed breakpoints $\\frac{k}{2^m},k = 0,...,2^m:$\n$$\nf_{m}(\\frac{k}{2^m}) = (\\frac{k}{2^m})^2,\\quad k = 0,...,2^m.\n$$\nNote that refining the interpolation from $f_{m-1}$ to $f_m$ amounts to adjusting it by a function proportional to a sawtooth function:\n$$\nf_{m-1}(x) - f_m(x) = \\frac{g_m(x)}{2^{2m}}.\n$$\nHence \n$$\nf_{L-1}(|x|) = |x| - \\sum_{l=1}^{L-1}\\frac{g_l(|x|)}{2^{2l}},\n$$\nso $f_{L-1}\\in \\mathcal{F}_{L,2}$, and \n\\[\n\\begin{split}\n||x^2-f_{L-1}(x)||_{\\infty} & = \\max_{k} \\max_{x\\in [\\frac{k}{2^{L-1}},\\frac{k+1}{2^{L-1}}]} |x^2 - f_{L-1}(x)|\\\\\n  & = \\max_{k} |(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))^2 - f_{L-1}(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))|\\\\\n  & = \\max_k |(\\frac{1}{2}(\\frac{k}{2^{L-1}}+\\frac{k+1}{2^{L-1}}))^2 - \\frac{1}{2}((\\frac{k}{2^{L-1}})^2+(\\frac{k+1}{2^{L-1}})^2))|\\\\\n  &=\\frac{1}{4^L}.\n\\end{split}\n\\]\nHence $|x^2-f_{L-1}(x)|\\le 2^{-2L}$ for $x\\in[-1,1]$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:xy}\nFor the multiplication function $\\varphi(x,y) = xy$, we have \n$$\n\\mbox{dist}(xy,\\mathcal{F}_{3L,2}([-1,1]^2))\\le 3\\cdot 2^{-2L}.\n$$\n\\end{lemma}\n\\begin{proof}\nNotice that\n$$\n\\varphi = xy = 2\\left(\\frac{x+y}{2} \\right)^2-\\frac{1}{2}x^2-\\frac{1}{2}y^2.\n$$\nBy Properties \\ref{prop:net class} and \\ref{prop:dis} and Lemma \\ref{lem:xsquare}, Lemma \\ref{lem:xy} is proved.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:monomial}\nFor a monomial $M_p(\\bm x)$ of $d$ variables with degree $p$, we have\n$$\n\\mbox{dist}(M_p,\\mathcal F_{3(p-1)L,3})\\le 3(p-1)\\cdot 2^{-2L}.\n$$\n\\end{lemma}\n\n\\begin{proof}\nLet $M_p(\\bm x)=x_{i_1}x_{i_2}\\cdots x_{i_p},i_1,...,i_p\\in\\{1,...,d\\}.$ We argue by induction: assume that the lemma holds for the degree-$p$ monomial $M_p$, and consider a degree-$(p+1)$ monomial $M_{p+1}(\\bm{x}) = M_p(\\bm{x})\\cdot x_{i_{p+1}}$. Let $\\varphi(y,x) = yx,$ then $M_{p+1}(\\bm x) = \\varphi(M_p(\\bm{x}),x_{i_{p+1}})$. 
We have\n\\[\n\\begin{split}\n\\mbox{dist}(M_{p+1},\\mathcal{F}_{3pL,3})&\\le \\mbox{dist}(\\varphi(M_p(\\bm x),x_{i_{p+1}}),\\mathcal F_{3L,2}\\circ \\mathcal F_{3(p-1)L,3})\\\\\n& \\le L_{\\varphi}\\mbox{dist}(M_p,\\mathcal{F}_{3(p-1)L,3})+\\mbox{dist}(\\varphi,\\mathcal F_{3L,2})\\le 3p\\cdot 2^{-2L}.\n\\end{split}\n\\]\nNote that the Lipschitz norm $L_{\\varphi}=1$ since $x_{i_{p+1}}\\in [-1,1].$\n\\end{proof}\n\n\\begin{lemma}\\label{lem:poly}\nFor a degree-$p$ polynomial $P_p(\\bm{x}) = \\sum_{|\\bm{k}|\\le p}a_{\\bm k}\\bm x^{\\bm k},\\bm x\\in [-1,1]^d,\\bm k = (k_1,...,k_d)\\in\\mathbb{N}^d,$ we have \n\\[\n\\mbox{dist}\\left(P_p,\\mathcal F_{\\binom{p+d}{d}(p-1)L,3}\\right)<3(p-1)\\cdot 2^{-2L}\\sum_{|\\bm k|\\le p}|a_{\\bm k}|.\n\\]\n\\end{lemma}\n\n\\begin{proof}\nThis lemma follows from Properties \\ref{prop:net class} and \\ref{prop:dis} and Lemma \\ref{lem:monomial}.\nNote that the number of monomials of $d$ variables with degree less than or equal to $p$ is $\\binom{p+d}{d}$: the exponents satisfy\n$$\nk_1 + k_2 + \\cdots + k_d \\le p,\n$$\nand adding a slack variable $k_{d+1}$ gives\n$$\nk_1 + k_2 + \\cdots + k_d + k_{d+1} = p,\n$$\nwhose number of non-negative integer solutions is $\\binom{p+d}{d}$.\n\\end{proof}\n\n\\begin{theorem}\nLet $f$ be an analytic function over $(-1,1)^d$. Assume that the power series $f(\\bm x) = \\sum_{\\bm k\\in \\mathbb N^d}a_{\\bm k}\\bm x^{\\bm k}$ is absolutely convergent in $[-1,1]^d.$ Then for any $\\delta>0,$ there exists a function $\\hat f$ that can be represented by a deep ReLU network with depth $L$ and width $d+4$, such that\n\\[\n|f(\\bm x) - \\hat f(\\bm x)|<2\\sum_{\\bm k\\in \\mathbb N^d}|a_{\\bm k}|\\cdot \\exp\\left(-d\\delta\\left(e^{-1}L^{1/2d}-1\\right)\\right)\n\\]\nfor all $\\bm x\\in [-1+\\delta,1-\\delta]^d.$\n\\end{theorem}\n\\begin{proof}\nLet $\\epsilon = \\exp(-d\\delta(e^{-1}L^{1/2d}-1)),$ then $L = [e(\\frac{1}{d\\delta}\\log\\frac{1}{\\epsilon}+1)]^{2d}$. Without loss of generality, assume $\\sum_{\\bm k}|a_{\\bm k}|=1.$ We will show that there exists $\\hat f\\in \\mathcal F_{L,3}$ such that $||f-\\hat f||_{\\infty}<2\\epsilon.$\nDenote \n$$\nf(\\bm x) = P_p(\\bm x) + R(\\bm x): = \\sum_{|\\bm k|\\le p}a_{\\bm k}\\bm x^{\\bm k} +  \\sum_{|\\bm k|> p}a_{\\bm k}\\bm x^{\\bm k}.\n$$\nFor $\\bm x\\in [-1+\\delta,1-\\delta]^d$, we have $|R(\\bm x)|<(1-\\delta)^p\\le e^{-\\delta p}$, thus truncating at $p = \\frac{1}{\\delta}\\log\\frac{1}{\\epsilon}$ will ensure $|R(\\bm x)|<\\epsilon$. From Lemma \\ref{lem:poly}, we have dist$(P_p,\\mathcal F_{L,3})<3(p-1)\\cdot 2^{-2L'}$, where \n\\[\n\\begin{split}\nL' &= L\\binom{p+d}{d}^{-1}(p-1)^{-1}\\\\\n   &\\ge L(\\frac{(p+d)!}{p!d!})^{-1}p^{-1}\\quad (d! \\thicksim \\sqrt{2\\pi d}(d/e)^d)\\\\\n   &\\ge L(\\frac{(p+d)^d}{(d/e)^d})^{-1}p^{-1}\\\\\n   &= L[e(\\frac{1}{d\\delta}\\log \\frac{1}{\\epsilon}+1)]^{-d}(\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon})^{-1}\\\\\n   & = [e(\\frac{1}{d\\delta}\\log \\frac{1}{\\epsilon}+1)]^{d}(\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon})^{-1}\\\\\n   & \\ge e^d (\\frac{1}{\\delta}\\log\\frac{1}{\\epsilon}) \\\\\n   &\\gg\\log\\frac{1}{\\delta}+ \\log\\frac{1}{\\epsilon}\n\\end{split}\n\\]\n\nfor $d\\ge 2$ and $\\epsilon\\ll 1$, so dist$(P_p,\\mathcal F_{L,3})\\ll\\epsilon$. Thus there exists $\\hat f\\in \\mathcal F_{L,3}$ such that $||P_p - \\hat f||_{\\infty}<\\epsilon$, and $||f-\\hat f||_{\\infty}\\le ||R||_\\infty + ||P_p - \\hat f||_\\infty<2\\epsilon$.\n\\end{proof}\n
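\nTo make the construction in Lemma \\ref{lem:xsquare} concrete, the following minimal Python sketch (our illustration, not taken from the cited works) evaluates the sawtooth iterates $g_l$ and the interpolant $f_{L-1}$ on a grid and checks the $2^{-2L}$ error bound numerically:\n\\begin{verbatim}\nimport numpy as np\n\ndef f_interp(x, L):\n    # f_{L-1}(|x|) = |x| - sum_{l=1}^{L-1} g_l(|x|)/2^(2l), where\n    # g_l = g o g_{l-1} and g(y) = 2y - 4 ReLU(y - 1/2) on [0, 1].\n    g = np.abs(x)\n    out = g.copy()\n    for l in range(1, L):\n        g = 2.0 * np.maximum(g, 0.0) - 4.0 * np.maximum(g - 0.5, 0.0)\n        out -= g / 4.0 ** l\n    return out\n\nL = 6\nx = np.linspace(-1.0, 1.0, 100001)\nerr = np.max(np.abs(x ** 2 - f_interp(x, L)))\nprint(err, 2.0 ** (-2 * L))  # err matches the 2^(-2L) bound\n\\end{verbatim}\n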
\n", "meta": {"hexsha": "f533fc2fab556dd3cf3db36681c294127790a979", "size": 12864, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Spectral.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Spectral.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Spectral.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4686346863, "max_line_length": 328, "alphanum_fraction": 0.6037779851, "num_tokens": 5871, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019594, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5733267749508736}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{ifmt}\n\\section*{\\hspace*{-1.6cm} ifmt}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nInverse fast Mellin transform.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nx = ifmt(mellin,beta)\nx = ifmt(mellin,beta,M)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty ifmt} computes the inverse fast Mellin transform of {\\ty\n        mellin}.\\\\ {\\it Warning} : the inverse of the Mellin transform is\n        correct only if the Mellin transform has been computed from {\\ty\n        fmin} to 0.5 Hz, and if the original signal is analytic.\\\\\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty mellin} & Mellin transform to be inverted. {\\ty mellin} must have been\n         obtained from {\\ty fmt} with frequency running from {\\ty fmin} to 0.5 Hz\\\\\n        {\\ty beta} & Mellin variable issued from {\\ty fmt}\\\\\n        {\\ty M} & number of points of the inverse Mellin transform\n                                        & {\\ty length(mellin)}\\\\\n  \\hline {\\ty x} & inverse Mellin transform with {\\ty M} points in time\\\\\n\n\\hline\n\\end{tabular*}\n\n\\end{minipage}\n\\vspace*{1cm}\n\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nTo check the perfect reconstruction property of the inverse Mellin\ntransform, we consider an analytic signal, compute its fast Mellin\ntransform with an upper frequency bound of 0.5, and apply on the output\nvector the {\\ty ifmt} algorithm\\,:\n\\begin{verbatim}\n         sig=atoms(128,[64,0.25,32,1]); clf;\n         [mellin,beta]=fmt(sig,0.08,0.5,128); \n         x=ifmt(mellin,beta,128); plot(abs(x-sig));\n\\end{verbatim}\nWe can observe the almost perfect equality between {\\ty x} and {\\ty sig}. 
\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nfmt, fft, ifft.\n\\end{verbatim}\n\\end{minipage}\n", "meta": {"hexsha": "609d11b399673ab62d23f65fe82a5196add2acf9", "size": 2287, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/ifmt.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/ifmt.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/ifmt.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 27.2261904762, "max_line_length": 83, "alphanum_fraction": 0.6554438129, "num_tokens": 775, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5733267710992997}}
{"text": "\\chapter{Calibration}\n\nOur goal is to create a system which does not need any prior information about the\nposition of cameras. Since we do not know how far away the cameras are, nor\nangles between them, we firstly use calibration to obtain this information.\n\nBy calibrating cameras with a well describable pattern, we can obtain\ninformation about the single camera, such as what distortion its lens cause, or\nhow the world points are projected to an image plane of the camera.\n\nAfter discovering these parameters for both cameras, we will continue on stereo\ncalibration. Stereo calibration will provide us information about their\nposition in space relative to each other.\n\nFor these calibration processes, we use algorithms implemented in OpenCV.\nTherefore, we provide only a short overview of the process, and we describe\nonly results obtained from calibration essential to us.\n\n\\section{Intrinsic parameters}\n\nEach camera type is different. Moreover, each camera of a specific type is\ndifferent due to a manufacturing process. Therefore we introduce intrinsic\ncamera parameters, which help us to model the camera precisely.\n\nIntrinsic camera parameters define a transformation between the world coordinates and\nthe coordinates within an image plane (in pixels). Physical attributes of the\ncamera influence this transformation.\n\nThese parameters include focal length, a position of the principal point and\ndistortion coefficients. All these parameters are needed to get a correct\ntransformation between the point in the space and the point at the image plane.\nSometimes even more parameters are used for a better description of the model\nof the camera.\n\n\\subsection{Camera matrix} \n\nCamera matrix will provide us a transformation from\nworld coordinates (with the origin in the camera) to image plane coordinates\n(seen in the image taken from the camera). If the coordinates in 3D are\navailable, we can obtain 2D coordinates by simple multiplication by camera\nmatrix. After multiplication we obtain 2D homogeneous coordinates.\n\nAs the next step, we describe the camera matrix obtained by OpenCV calibration\nprocedure. In the OpenCV implementation the camera matrix is $3\\times3$ matrix of the\nfollowing format:\n\n\\[\n\\begin{pmatrix}\n\tf_x \t& 0 \t& c_x \\\\\n\t0\t& f_y\t& c_y \\\\\n\t0\t& 0\t& 1\n\\end{pmatrix}\n\\]\n\nWhere $f_x$, $f_y$ denote focal lengths expressed in pixel units. It is usual\nto introduce two of them, separately for both axes, since pixels usually are\nnot perfect squares but rather rectangles. Therefore the same focal length, has\ndifferent length in pixel units over a given axis.\n\nWe refer the ray of the view of the camera as the principal ray. The point\nwhere this ray intersects the image plane is called principal\npoint\\footnote{For more information visit \\url{https://en.wikipedia.org/wiki/Pinhole\\_camera\\_model\\#The\\_geometry\\_and\\_mathematics\\_of\\_the\\_pinhole\\_camera}}.\nParameters $c_x$ and $c_y$ define coordinates of the principal point in the\nimage plane.  It should be the center of the image, but assembling process of\nthe camera might cause a small displacement.\n\n\\subsection{Distortion coefficients}\n\nCameras are equipped with lenses to obtain sharp image instead of blurry. The\nlens may causes various distortions. Fish-Eye lenses are known for their\ndistortion.  Even web camera lenses have distortion, but not as visible as the\ncamera with a fish-eye lens. 
It is important to correct these distortions.\n\nTwo distortion types have a significant effect on the image: radial distortion,\nwhich creates a barrel effect, and tangential distortion.\n\nThe radial distortion is caused by the camera lens (the effect of radial\ndistortion is displayed in Figure \\ref{fig:distortion}). It can usually be\ndescribed by three parameters. Highly distorted images (like those from a fish-eye lens)\noften need more parameters. Since our system uses web cameras, we will use only\nthree parameters.\n\nIn an ideal camera, the lens would be placed parallel to the chip. Since such\nprecision is not possible in the assembly process, tangential distortion\narises. For this distortion, we use two parameters.\n\nWe describe both distortion effects by five parameters. More about the meaning\nof the parameters can be found in the book by \\citet*{bradski2008learning}.\n\n\\begin{figure}\n\t\\begin{subfigure}[b]{0.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{img/chessboard/7x8chessboard-positivedistortion}\n\t\t\\caption{Effect of positive radial distortion}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{img/chessboard/7x8chessboard-negativedistortion}\n\t\t\\caption{Effect of negative radial distortion}\n\t\\end{subfigure}\n\t\\caption{Effects of lens distortion on the image of the chessboard}\n\t\\label{fig:distortion}\n\\end{figure}\n\nThese nine parameters (four for the camera matrix and five for the distortion\ncoefficients) describe the camera model. They may be used, for example, for\nprojecting a point from 3D space to image coordinates in the camera (OpenCV\nfunction \\verb+projectPoints+). We will use them to get better results for\nstereo calibration and for localization, since they provide a more accurate\nmodel of the camera.\n\n\\section{Stereo calibration}\n\nAfter performing a calibration of both cameras separately, we also need\ninformation about their relative position to each other. By the relative\nposition, we understand the translation from the first camera to the second and the\nrotation of the camera (around three axes). Using this information, we can\ntransform the view of one camera by moving it by the translation vector and\nrotating it accordingly. Therefore, the position of the second camera can be\ndescribed by six parameters: three rotation angles (one around each axis) and\nthree components of the translation vector with respect to the first camera.\n\nThe stereo calibration routine in OpenCV can also perform mono calibration of both\ncameras, but we pre-calibrate each camera separately for better precision and better\nconvergence of the algorithm.\n\nThe stereo calibration routine provides more information; for the goals of this\nthesis, only the rotation matrix and the translation vector are important.\n\n\\section{Calibration process}\n\nIn the previous sections we briefly discussed the theoretical background behind the\ncalibration. In this section we focus on the implementation.\n\nIn OpenCV a chessboard is commonly used, since it is a planar object (easy to\nreproduce by printing) and well described. 
OpenCV also provides methods for\nautomatically finding the chessboard in the picture, so no human intervention is\nneeded to select the chessboard corners from the image (an example of the\nchessboard is provided in Figure \\ref{fig:chessboard}). Therefore, we also\nuse the chessboard pattern for calibration.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{img/chessboard/7x8chessboard}\n\t\\caption{Example of $7\\times8$ chessboard used for the calibration}\n\t\\label{fig:chessboard}\n\\end{figure}\n\n\\subsection{How many images?} \n\nWe now know that we can use a chessboard to calibrate the cameras. However,\nthe question of how many images are needed for the calibration remains. We\ncount only the images in which the full chessboard is found. To get enough\ninformation from each image, we need a chessboard with at least $3\\times3$\ninner corners. It is better to have a chessboard with one even and one odd dimension\n(for example $7\\times8$ inner corners), since it then has only one symmetry axis\nand the pose of the object can be detected correctly. It is important that the\nchessboard indeed has squares, not rectangles (so it is better to check it\nafter printing). A bigger chessboard is easier to recognize, so we recommend at\nleast A4 format.\n\nWe need only one image for computing the distortion coefficients. However, for\nthe camera matrix, we need at least two images. For stereo calibration, we need at\nleast one pair of images of the chessboard.\n\nWe know the minimum number of images needed for calibration. On the other hand,\nwe use more images for the robustness of the algorithm. We need a ``rich'' set of\nviews. Therefore it is important to move the chessboard between snapshots.\nProviding more information to the algorithm lets it compensate for errors in the\nmeasurements (such as a wrongly detected chessboard in one of the images),\nalthough the computation time also increases. Since it is enough to calibrate a\ngiven camera setup only once and then reuse the results, we recommend performing\nthe calibration on more images and waiting a little longer for the results. \n
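\nTo make the pipeline concrete, the following minimal Python sketch (our illustration, not the thesis code; the file names, the $7\\times8$ pattern and the flag choice are placeholder assumptions) strings together the OpenCV calls discussed above:\n\\begin{verbatim}\nimport cv2\nimport numpy as np\n\npattern = (7, 8)  # inner corners (placeholder)\nobjp = np.zeros((pattern[0] * pattern[1], 3), np.float32)\nobjp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)\n\npairs = [('left_%02d.png' % i, 'right_%02d.png' % i) for i in range(20)]\nobj_pts, img1, img2 = [], [], []\nfor lpath, rpath in pairs:\n    gl = cv2.imread(lpath, cv2.IMREAD_GRAYSCALE)\n    gr = cv2.imread(rpath, cv2.IMREAD_GRAYSCALE)\n    size = gl.shape[::-1]\n    ok1, c1 = cv2.findChessboardCorners(gl, pattern, None)\n    ok2, c2 = cv2.findChessboardCorners(gr, pattern, None)\n    if ok1 and ok2:  # keep only pairs where both views see the board\n        obj_pts.append(objp)\n        img1.append(c1)\n        img2.append(c2)\n\n# Mono calibration: camera matrix and distortion coefficients per camera.\n_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img1, size, None, None)\n_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img2, size, None, None)\n\n# Stereo calibration with the intrinsics fixed: recover R and T.\n_, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(\n    obj_pts, img1, img2, K1, d1, K2, d2, size,\n    flags=cv2.CALIB_FIX_INTRINSIC)\n\\end{verbatim}\n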
YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5733267699234423}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header2.tex}\n\n\n\\title{Phys 220A -- Classical Mechanics -- HW07}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{21}{11}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\maketitle\n\n\n\\section*{Problem 1 (25 pts)}\n\\textit{\nConsider a canonical transformation with a generating function $F_2(q_i,P_i)$ where the $p_i, Q_i$ are determined by\n\\begin{eqn}\np_i = \\pd{F_2}{q_i}, \\qquad \nQ_i = \\pd{F_2}{P_i}, \\qquad\nH' = H + \\pd{F_2}{t}\n\\end{eqn}\nand $F_2$ has the form\n\\begin{eqn}\nF_2 = \\sum_i q_i P_i + X.\n\\end{eqn}\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nFind the relation of $q_i, p_i$ and $Q_i, P_i$ if $X = \\delta \\v a \\cdot \\v P$ where $\\delta \\v a$ is an infinitesimal vector.\n}\n\n\n% part B\n\\item \\textit{\nFind the relation of $q_i, p_i$ and $Q_i, P_i$ if $X = \\delta \\v \\phi \\cdot (\\v q \\times \\v P)$ where $\\delta \\v \\phi$ is an infinitesimal vector.\n}\n\n\n% part C\n\\item \\textit{\nFind the relation of $q_i, p_i$ and $Q_i, P_i$ if $X = \\delta \\delta \\tau \\, H$ where $\\delta \\tau$ is an infinitesimal paramter and $H$ is the Hamiltonian.\n}\n\n\n% part D\n\\item \\textit{\nFind the relation of $q_i, p_i$ and $Q_i, P_i$ if $X = -e \\chi(\\v q, t)$.\n}\n\n\n% part E\n\\item \\textit{\nFor the Hamiltonian \n\\begin{eqn}\nH = \\frac{1}{2m} (\\v p - e \\v A(\\v q,t))^2 + e \\Phi(\\v q,t)\n\\end{eqn}\nwhere\n\\begin{eqn}\n\\v E = -\\v \\nabla \\Phi - \\pd{\\v A}{t}, \\qquad\n\\v B = \\v \\nabla \\times \\v A.\n\\end{eqn}\nCalculate the new Hamiltonian $H'(Q,P)$ under the canonical transformations defined in parts (a)--(d).\n}\n\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 2 (20 pts)}\n\\textit{\nConsider the Hamiltonian \n\\begin{eqn}\nH = \\frac{1}{2m} p^2 q^4 + \\frac{k}{2q^2}\n\\end{eqn}\nand the canonical transformation given by the generating function\n\\begin{eqn}\nF_1(q,Q) = -\\sqrt{mk} \\, \\frac{Q}{q}.\n\\end{eqn}\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nFind the canonical transformations $Q=Q(q,p)$ and $P=P(q,p)$.\n}\n\n\n% part B\n\\item \\textit{\nFind the new Hamiltonian $H'(Q,P)$.\n}\n\n\n% part C\n\\item \\textit{\nFind the general solution for $Q(t), P(t)$.\n}\n\n\n% part D\n\\item \\textit{\nSolve for $q(t), p(t)$ with the initial condition $q(0) = q_0$ and $p(0) = 0$.\n}\n\n\\end{enumproblem}\n\n\n\n\\section*{Problem 3 (15 pts)}\n\\textit{\nUse Hamilton-Jacobi theory to solve for the dynamics of a particle with Cartesian coordinates $(x,y,z)$ moving in the gravitational potential of two fixed masses $m_+, m_-$ located at the points $(\\pm c,0,0)$. Restrict motion to the plane $z=0$ for the sake of simplicity. 
This system has Lagrangian (we set $\\mu_\\pm = G m_\\pm m > 0$ and $r_\\pm^2 = (x \\mp c)^2 + y^2$ for brevity)\n\\begin{eqn}\nL = \\frac{1}{2} m (\\dot x^2 + \\dot y^2) + \\frac{\\mu_+}{r_+} + \\frac{\\mu_-}{r_-}.\n\\end{eqn}\n\\textbf{Hint:} Change variables to elliptical coordinates, $x = c \\cosh q_1 \\cos q_2$ and $y = c \\sinh q_1 \\sin q_2$.\n}\n\n\n\n\\section*{Problem 4 (10 pts)}\n\\textit{\nConsider an infinitesimal canonical transformation with generating function\n\\begin{eqn}\nF_2(q,P) = P q + \\epsilon S(P,q).\n\\end{eqn}\n}\n\n\\begin{enumproblem}\n\n% part A\n\\item \\textit{\nShow that to first order in $\\epsilon$ one has\n\\begin{eqn}\np = P + \\epsilon \\pd{S}{q}, \\qquad\nQ = q + \\epsilon \\pd{S}{P}.\n\\end{eqn}\n}\n\n\n% part B\n\\item \\textit{\nFrom this show that the infinitesimal canonical transformation satisfies Hamilton equations with $\\epsilon$ viewed as ``time''\n\\begin{eqn}\n\\eval{\\od{P}{\\epsilon}}_{\\epsilon=0} = -\\pd{H}{q}, \\qquad\n\\eval{\\od{Q}{\\epsilon}}_{\\epsilon=0} = \\pd{H}{p}\n\\end{eqn}\nwhere $H(q,p) = S(q,p)$.\n}\n\n\n\\end{enumproblem}\n\n\n\\end{document}\n", "meta": {"hexsha": "1b40657838d85b03391a7bb0852961eb36b1dda2", "size": 3839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classical/hw07.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classical/hw07.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classical/hw07.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.4085365854, "max_line_length": 380, "alphanum_fraction": 0.6566814275, "num_tokens": 1391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7745833945721304, "lm_q1q2_score": 0.5733267699234423}}
{"text": "\\chapter{Conclusions and future developments}%\n\\label{ch:conclusions}\nIn this thesis we developed a multi-cellular model accounting for a spatially-extended intra-cellular system for ROPs pattern formation. In the proposed framework, we solved ROPs pattern formation in a system composed by multiple cells, together with a transport model for hormone auxin. The RD model explains the auxin-mediated action of ROPs in an Arabidopsis root hair cell leading to formation of the localized patches of activated ROPs.\n\nOur simulations support conclusions reached in other works regarding ROPs dynamics under a-priori defined auxin distribution \\cite{phdthesis:victor, intra1_R, intra2}. We have analyzed various scenarios such that a stripe-like patch forms where auxin concentration is higher. Then, instability of stripe into spot-like states occurs and multiple spots align with auxin gradient or travel towards auxin minimum. Several results confirm that, for a transversally independent gradient, lateral stripes become unstable states.\n\nSuccessively as a new contribution, we extended the intra-cellular dynamics for root-hair initiation model to a multi-cellular system, developing a new model taking into account communication between cells. In order to do so, we have defined a boundary value problem which assumes new boundary conditions between neighboring cells. Subsequently, we develop an iterative procedure to solve it. We take as reference scheme a Robin-Robin domain decomposition method. We impose fluxes of ROPs betweeen neighboring cells, depending on the difference of the ROPs concentrations, through localised open channels. Such connections are tuned in order to visualize considerably different results with respect to a configuration characterized by stagnant cells. In addition,we aim at preserving all previous analyses over important parameters characterizing the system. We numerically assessed the robustness of the proposed model in cooperating with auxin distribution in influencing ROPs pattern formation.\n\nHaving defined a reliable multi-cellular model, we have taken into account also auxin concentration dynamics. Hormone auxin is regulated by carriers PIN following a non-linear ODEs system. We implement a semi-implicit method to solve such as a system on a two cells setting. The simulations we carried out show oscillating values of auxin concentration for specific sets of parameters, as expected from previous studies. A further confirmation of the robustness of pattern formation according to the proposed new multi-cellular model, when considering channel communication between cells, is given in Chapter \\ref{cap:4}. It is shown that even if the system is under steady homogeneous auxin concentrations, the model robustly forms hotspots when considering open channels under a sufficiently high overall auxin level.\n\nThe results show two ways spots of active ROPs can be generated. The first factor is the auxin gradient, which still guarantees and influence stripe to spot evolution. The second factor is the structural coupling between cells. Even if not characterized by variation in space but with a sufficiently high value of the auxin concentrations, the multi-cellular model leads the system to multiple spots.\n\nIn conclusion, this thesis provides a first attempt in modeling communication between root-hair cells in pattern formation. Future developments include to study the structural model or to deeply analyze channels characterization. 
Moreover, the iterative procedure implemented could be applied to a multi-cellular system composed of more than four cells, possibly increasing the computational performance through parallel computing. Another possible road to follow is to approach the problem through homogenization techniques. This approach may increase the efficiency in solving the model, particularly when the number of cells involved becomes large. Finally, one could adopt other auxin transport models, less simplified or accounting for spatial dependence inside the cells.\n", "meta": {"hexsha": "b4057f60eb3ad545e1e984cf256521061a8b48b0", "size": 4051, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/Chapter_conclusion.tex", "max_stars_repo_name": "danieleavitabile/root-simulator", "max_stars_repo_head_hexsha": "b530efef392f3cabbc251ee5f0d7d50dea2271d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Thesis/Chapter_conclusion.tex", "max_issues_repo_name": "danieleavitabile/root-simulator", "max_issues_repo_head_hexsha": "b530efef392f3cabbc251ee5f0d7d50dea2271d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/Chapter_conclusion.tex", "max_forks_repo_name": "danieleavitabile/root-simulator", "max_forks_repo_head_hexsha": "b530efef392f3cabbc251ee5f0d7d50dea2271d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 289.3571428571, "max_line_length": 997, "alphanum_fraction": 0.8338681807, "num_tokens": 754, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5733267622202944}}
{"text": "After testing analog sorting on a prototype analog computer chip and simulating the process using an ODE solver, we can evaluate its time and space complexity. \n\nCompared to the other sorting algorithms mentioned at the beginning of the paper, analog sorting is competitive in terms of time complexity. However, due to the nature of circuits, it is not as space efficient as most other popular sorting algorithms.\n\n\\subsection{Functionality validation of analog sorting}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{graphics/10d_toda_7.png}\n\\caption{Simulated sorting a 10 element vector.}\n\\end{figure}\n\nWe see in Figure 1, generated from the ODE solver, that the sorting system has periods of swapping, interspersed in quiet periods of little change. The ODE solver was used on a 10 element vector. \n\nThe output of the analog chip, run on two variables and over multiple loops, is shown in Figure 2.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{graphics/2d_analog.jpg}\n\\caption{Sorted 2 elements using analog computer chip.}\n\\end{figure}\n\nThis confirms that the values on the diagonals will converge towards a final value.\n\n\n\\subsection{Time cost of the discrete QR algorithm}\nSince the analog sorting method is the continuous version of the QR algorithm, there might be a correlation between their time complexities as well. \n\nIn essence, our analog sorter takes a tridiagonal matrix $H$ and sorts the eigenvalues along the diagonal in descending order. Deift explains that the QR algorithm takes $O(N)$ time for a tridiagonal $N\\times N$ matrix. \n\nThe assumption is that the time complexity of analog sorting will also be in that range.\n\n%Time cost of overall QR loop:\n\t%how many iterations of qr til convergence?\n\t% time to convergence of qr w.r.t.. problem size\n\t%Since we know the ODE is analogous to QR algorithm, they should take the same amount of time.\n%Time cost of QR step:\n\t%Numerical Recipes 3rd Edition p585 says the QR algorithm takes $O(N)$ time for symmetric tridiagonal matrices.\n\n\\subsection{Time cost of analog sorting}\n\nIn terms of time, the sorter takes at least $O(N)$ time because of the time it takes just for signals to propagate across the circuit.\nAnother issue is the time it takes for the ODE to settle to its final value.\nThis shows preliminary data showing the time to convergence of analog sorting, plotted against problem size.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{graphics/ode_time_vs_problem_size.pdf}\n\\caption{Preliminary time to convergence vs. problem size.}\n\\end{figure}\n\nThe axes are the time to solution, plotted against $N$, the number of real numbers we are sorting.\nThe data points come from multiple random trials at each $N$.\nThe set of real numbers is randomly generated from a chosen dynamic range.\nIt appears that as the $N$ size increases, the average time grows linearly with respect to $N$.\nFurthermore, the variance of the solution time, measured as the standard deviation, also grows.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{graphics/range_vs_solution.png}\n\\caption{Preliminary time to convergence vs. dynamic range.}\n\\end{figure}\n\nAnother possible factor for solution time to consider is the dynamic range of the real numbers to sort. 
Another possible factor to consider for the solution time is the dynamic range of the real numbers to sort. However, running the simulation on a constant number of elements with different dynamic ranges did not show any significant trends.\n\n\\subsection{Hardware cost of analog sorting}\n\nAnalog sorting on the chip relies on the hardware components needed to build the circuit. \nWe need a constant number of components for each of the $N$ ODEs.\nThus, the analog sorter takes up $O(N)$ circuit components to sort $N$ elements.\n\n\n\\subsection{Limitations}\n\nAs mentioned before, the choice of initial $y$ values affects the sorting mechanism. Changing the initial values of the off-diagonals impacts whether or not the algorithm ever reaches a correctly sorted state of the diagonal entries.\n\nSetting the initial values for the off-diagonals to $0$ will prevent the algorithm from starting in the first place. Initial $y$ values that are too small may stop the algorithm mid-way and result in an approximate sorting with two elements being out of place. After the diagonal entries converge to their final state, the $y$ values are nonzero, but of magnitude less than $10^{-30}$. How big the initial off-diagonal entries have to be might depend on the ratio of the numbers to sort.\n\nBrockett mentions briefly that $H_\\infty$ will be sorted in almost all cases, but does not elaborate on the exceptions~\\cite{brockett}.\n", "meta": {"hexsha": "6c68aa9c9a202e548b331446c1cfb5f7fb4f4353", "size": 5203, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/eval.tex", "max_stars_repo_name": "yipenghuang0302/sorting", "max_stars_repo_head_hexsha": "7181d83d331e4f0f5df5961843e5e98c6ad0566e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "papers/eval.tex", "max_issues_repo_name": "yipenghuang0302/sorting", "max_issues_repo_head_hexsha": "7181d83d331e4f0f5df5961843e5e98c6ad0566e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/eval.tex", "max_forks_repo_name": "yipenghuang0302/sorting", "max_forks_repo_head_hexsha": "7181d83d331e4f0f5df5961843e5e98c6ad0566e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0098039216, "max_line_length": 493, "alphanum_fraction": 0.7853161638, "num_tokens": 1211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5733267583687204}}
{"text": "\\documentclass{article}\n\n\\usepackage[colorlinks=true]{hyperref}\n\\usepackage[cmex10]{amsmath}\n\\usepackage{bbm}\n\\usepackage{graphicx}\n\\usepackage{subfig}\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\\usepackage{comment}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{multirow}\n\n\\DeclareMathOperator*{\\argmin}{\\mathrm{argmin}}\n\\DeclareMathOperator*{\\pro}{\\mathcal P_{\\Omega}}\n\\DeclareMathOperator*{\\pron}{\\mathcal P_{\\bar{\\Omega}}}\n\\DeclareMathOperator*{\\proe}{\\mathcal P_{H}}\n\\newcommand{\\BigO}[1]{\\ensuremath{\\operatorname{O}\\left(#1\\right)}}\n\n\\begin{document}\n\n\\title{MC-Kit Manual}\n\\author{Stephen Tierney}\n\\maketitle\n\n\\section{Introduction}\n\\subsection{Classic Matrix Completion}\n\nThe problem of Matrix Completion (MC) is to recover a matrix $\\mathbf A$ from only a small sample of its entries. Let $\\mathbf A \\in \\mathbb R^{m \\times n}$  be the matrix we would like to know as precisely as possible while only observing a subset of its entries $(i, j) \\in \\Omega$. It is assumed that observed entries in $\\Omega$ are uniform randomly sampled. Low-Rank Matrix Completion is a variant that assumes that $\\mathbf A$ is low-rank. The tolerance for ``low'' is dependant upon the size of $\\mathbf A$ and the number of sampled entries. \n\nFor further explanation let us define the sampling operator $\\pro : \\mathbb R^{m \\times n} \\rightarrow \\mathbb R^{m \\times n}$ as\n\\begin{align}\n[ \\pro  ( X ) ]_{ij} = \\left\\{\n\\begin{array}{ll}\nX_{ij}, &(i, j) \\in \\Omega,\\\\\n0, &\\text{otherwise}.\n\\end{array}\n\\right.\n\\end{align}\nWe also define the opposite operator $\\pron$, which keeps those outside $\\Omega$ unchanged and sets values inside $\\Omega$ (i.e. $\\bar{\\Omega}$) to $0$. Since we assume that the matrix to recover is low-rank, one could recover the unknown matrix by solving\n\\begin{align}\n\\min_{\\mathbf A} \\; \\text{rank}(\\mathbf A)\\\\\n\\text{s.t.} \\; \\pro  (\\mathbf A) = \\pro (\\mathbf M) \\nonumber \n\\end{align}\nwhere matrix $\\mathbf M$ is the partially observed matrix. In practice this problem is intractable therefore we use the closest convex relaxation i.e. the nuclear norm\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* \\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\nonumber\n\\label{classic_objective}\n\\end{align}\nWe also consider the case where our observed entries may contain a limited amount of noise. Our corresponding objective is the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* + \\frac{\\lambda}{2} \\| \\pro (\\mathbf E) \\|_F^2\\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) + \\pro (\\mathbf E) \\nonumber \n\\end{align}\n\n\\section{Singular Value Shrinkage Operator}\n\nCentral to this work is the {\\bf{singular value shrinkage operator}}. 
Consider the singular value decomposition (SVD) of a matrix $\\mathbf Y \\in \\mathbb R^{m \\times n}$ with rank $r$\n\\begin{align}\n\\mathbf Y = \\mathbf{U \\Sigma V^T}, \\;\\; \\mathbf \\Sigma = \\text{diag}(\\{\\sigma_i\\}_{i=1}^r).\n\\end{align}\nThen we define the singular value shrinkage operator for any $\\tau \\geq 0$ as\n\\begin{align}\n\\mathcal D_{\\tau}(\\mathbf Y) = \\mathbf U S_{\\tau}(\\mathbf \\Sigma) \\mathbf V^T, \\;\\; S_{\\tau}(\\mathbf \\Sigma) = \\text{diag}(\\{\\text{max}(\\sigma_i - \\tau, 0)\\}).\n\\end{align}\nIt has been shown \\cite{cai2010singular} that the operator $\\mathcal D_{\\tau}(\\mathbf Y)$ is the solution to the proximal nuclear norm problem, i.e.\n\\begin{align}\n\\mathcal D_{\\tau}(\\mathbf Y) = \\argmin_{\\mathbf X} \\; \\tau \\| \\mathbf X \\|_* + \\frac{1}{2} \\| \\mathbf{X - Y} \\|_F^2\n\\end{align}\nThe singular value shrinkage operator is implemented by the function $[ \\mathbf X, \\mathbf s ] = \\text{nn\\_prox}( \\mathbf Y, \\tau )$, where $\\mathbf s$ is the vector of the singular values of $\\mathbf X$.\n\n\\newpage\n\\section{Function Listing}\n\n\\begin{table}[!h]\n{\\small{\n\\centering\n\n\\begin{tabular}{c | c | c}\n\\hline\nProblem & Function & Section \\\\\n\\hline\n\n$\\begin{array}{c} \\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* +  \\frac{1}{2} \\| \\mathbf{ A } \\|_F^2 \\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\end{array}$ & solve\\_svt\t& 4.1.1  \\\\\n\\hline\n\n\\multirow{2}{*}{$\\begin{array}{c} \\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* \\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\end{array}$} & solve\\_ialm\t& 4.1.2 \\\\\n\t\t& solve\\_lin\t& 4.1.3 \\\\\n\t\t\n\\hline\n\\multirow{3}{*}{$\\begin{array}{c} \\tau \\| \\mathbf A \\|_*  +  \\frac{\\lambda}{2} \\| \\mathbf{ \\pro (A) - \\pro (M)  } \\|^2_F  \\end{array}$}\n\t& solve\\_e\\_lin & 4.2 \\\\\n\t& solve\\_e\\_lin\\_ext & \\\\\n\t& solve\\_e\\_lin\\_acc & \\\\\n\\hline\n\n$\\begin{array}{c} \\min_{\\mathbf{A,E}} \\; \\tau \\| \\mathbf A \\|_* + \\frac{\\lambda}{2} \\| \\pro (\\mathbf E) \\|_F^2\\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) + \\pro (\\mathbf E) \\end{array}$ & solve\\_e\\_exact\t& 4.3  \\\\\n\\hline\n\n\\end{tabular}\n}}\n\\end{table}\n\n\\newpage\n\\section{Classic Implementations}\n\\subsection{Noise Free Data}\n\\subsubsection{SVT}\n\nThe function\n\\begin{align}\n[ \\mathbf A, \\mathbf{f\\_values}, \\mathbf{stop\\_vals} ] = \\text{solve\\_svt}( \\mathbf M, \\Omega, \\tau, \\mu, iterations, tol )\\notag \n\\end{align}\nsolves the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* +  \\frac{1}{2} \\| \\mathbf{ A } \\|_F^2 \\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\nonumber \n\\end{align}\nas proposed by the authors of \\cite{cai2010singular}.\n\n\\begin{itemize}\n\\item $\\mathbf M$ - matrix with observed entries\n\\item $\\Omega$ - vector of constrained matrix indices\n\\item $\\tau$ - regularisation (optional)\n\\item $\\mu$ - step size (optional)\n\\item $iterations$ - maximum number of iterations (optional)\n\\item $tol$ - stopping criteria tolerance (optional)\n\\end{itemize}\n\n\\subsubsection{Inexact ALM}\n\nThe function\n\\begin{align}\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_ialm}( \\mathbf M, \\Omega, \\tau, \\mu, iterations, tol )\\notag \n\\end{align}\nsolves the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* \\\\\n\\text{s.t.} 
\\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\nonumber \n\\end{align}\nas proposed by the authors of \\cite{lin2010augmented}.\n\n\\begin{itemize}\n\\item $\\mathbf M$ - matrix with observed entries\n\\item $\\Omega$ - vector of constrained matrix indices\n\\item $\\tau$ - regularisation (optional)\n\\item $\\mu$ - step size (optional)\n\\item $iterations$ - maximum number of iterations (optional)\n\\item $tol$ - stopping criteria tolerance (optional)\n\\end{itemize}\n\n\\subsubsection{Linearised ALM}\n\nThe function\n\\begin{align}\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_lin}( \\mathbf M, \\Omega, \\tau, \\mu, \\rho, iterations, tol ) \\notag \n\\end{align}\nsolves the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* \\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) \\nonumber \n\\end{align}\n\n\\begin{itemize}\n\\item $\\mathbf M$ - matrix with observed entries\n\\item $\\Omega$ - vector of constrained matrix indices\n\\item $\\tau$ - regularisation (optional)\n\\item $\\mu$ - step size (optional)\n\\item $\\rho$ - linearisation step size (optional)\n\\item $iterations$ - maximum number of iterations (optional)\n\\item $tol$ - stopping criteria tolerance (optional)\n\\end{itemize}\n\n\\subsection{Noisy Data Relaxation}\n\nThe functions\n\\begin{align}\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_e\\_lin}( \\mathbf M, \\Omega, \\tau, \\lambda, \\rho, iterations, tol ) \\notag \\\\\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_e\\_lin\\_ext}( \\mathbf M, \\Omega, \\tau, \\lambda, \\rho, iterations, tol ) \\notag \\\\\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_e\\_lin\\_acc}( \\mathbf M, \\Omega, \\tau, \\lambda, \\rho, iterations, tol )\\notag \n\\end{align}\nsolve the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_*  +  \\frac{\\lambda}{2} \\| \\mathbf{ \\pro (A) - \\pro (M)  } \\|^2_F \n\\end{align}\nwith increasing convergence speed based on \\cite{ji2009accelerated}.\n\n\\begin{itemize}\n\\item $\\mathbf M$ - matrix with observed entries\n\\item $\\Omega$ - vector of constrained matrix indices\n\\item $\\tau$ - regularisation (optional)\n\\item $\\lambda$ - regularisation (optional)\n\\item $\\rho$ - linearisation step size (optional)\n\\item $iterations$ - maximum number of iterations (optional)\n\\item $tol$ - stopping criteria tolerance (optional)\n\\end{itemize}\n\n\\subsection{Noisy Data Exact}\n\nThe function\n\\begin{align}\n[ \\mathbf A, \\mathbf{f\\_vals}, \\mathbf{stop\\_vals} ] = \\text{solve\\_e\\_exact}( \\mathbf M, \\Omega, \\tau, \\lambda, \\rho, iterations, tol ) \\notag\n\\end{align}\nsolves the following\n\\begin{align}\n\\min_{\\mathbf A} \\; \\tau \\| \\mathbf A \\|_* + \\frac{\\lambda}{2} \\| \\pro (\\mathbf E) \\|_F^2\\\\\n\\text{s.t.} \\; \\pro (\\mathbf M) = \\pro (\\mathbf A) + \\pro (\\mathbf E) \\nonumber \n\\end{align}\n\n\\begin{itemize}\n\\item $\\mathbf M$ - matrix with observed entries\n\\item $\\Omega$ - vector of constrained matrix indices\n\\item $\\tau$ - regularisation (optional)\n\\item $\\lambda$ - regularisation (optional)\n\\item $\\rho$ - linearisation step size (optional)\n\\item $iterations$ - maximum number of iterations (optional)\n\\item $tol$ - stopping criteria tolerance (optional)\n\\end{itemize}\n
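\nFor readers working outside MATLAB, the shrinkage operator of Section 2 takes only a few lines in any numerical language; the following is a minimal Python/NumPy transcription of nn\\_prox (our illustration, not part of the toolkit itself):\n\\begin{verbatim}\nimport numpy as np\n\ndef nn_prox(y, tau):\n    # argmin_X  tau*||X||_* + 0.5*||X - Y||_F^2  via singular\n    # value shrinkage: keep U, V and soft-threshold the spectrum.\n    u, s, vt = np.linalg.svd(y, full_matrices=False)\n    s_shrunk = np.maximum(s - tau, 0.0)\n    return u @ np.diag(s_shrunk) @ vt, s_shrunk\n\n# Usage: shrink a random low-rank matrix.\nrng = np.random.default_rng(0)\ny = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))\nx, s = nn_prox(y, 1.0)\n\\end{verbatim}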
"089152466bea59a1580eb9539ecf9a3c784ee2d2", "size": 9082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/manual.tex", "max_stars_repo_name": "luhairong11/MCK", "max_stars_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/manual.tex", "max_issues_repo_name": "luhairong11/MCK", "max_issues_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/manual.tex", "max_forks_repo_name": "luhairong11/MCK", "max_forks_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-09-10T13:33:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-23T10:41:49.000Z", "avg_line_length": 39.4869565217, "max_line_length": 549, "alphanum_fraction": 0.6777141599, "num_tokens": 3060, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390162, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5733267583687203}}
{"text": "\\section{Introduction}\\label{introduct}\r\nA well-studied phenomenon in material science concerns the coarsening of microstructure found in metals, porcelains, and other common materials.  Typically, these materials have polycrystalline structure, where each grain has a particular orientation.  Annealing induces an evolution on grain boundaries which reduces their total length.  During this process, grains are deleted (and not created), causing coarsening of microstructure through an increase of average grain area (see Fig. \\ref{samplegrains}).  Microstructure can determine material properties. For instance, the Hall-Petch effect gives a relation between yield stress and average grain size (see \\cite{anderson2005fracture}, section 5.2.3).     More scientific and  mathematical properties of grain networks can be found in \\cite{fradkov1994two,thompson2001grain}.\r\n\r\n\r\n \r\n \r\nGrain coarsening in two dimensions  can be modeled by a geometric network, or planar graph $G(t) \\subset \\mathbb{R}^2$, $t\\in (0,T)$, with edges  that evolve in time. Faces of $G$ are called \\textbf{grains}, and edges are referred to as \\textbf{boundaries}. An idealization of the coarsening process assumes that grain boundaries are isotropic, or that grain boundary energies are identical along edges of the network. Under this idealization, we  define a \\textbf{grain network with existence interval} $(0,T)$  as a geometric finite planar graph $G(t)$ which satisfies the\r\nfollowing restrictions for all $t\\in(0,T)$:\r\n\r\n\r\n\r\n\\begin{enumerate}\r\n\\item (\\textbf{Herring's Condition})   $G(t)$ is trivalent, and angles between two edges at a vertex are fixed at 120 degrees.\r\nThis restriction follows from a force balance law on edges  emanating from a vertex (see \\cite{herring1999surface}).\\item (\\textbf{Mean Curvature Flow}) Boundaries are smooth, and satisfy mean curvature flow.  This means that the time evolution of a smooth boundary $\\gamma(x,t)\\subset \\mathbb R^2$ is also smooth, and satisfies \r\n \\begin{equation} \\label{curveflow}\r\n\\partial_t \\gamma(x,t) = M\\sigma \\kappa(x,t)  \\vec n(x,t), \\quad t\\in(0,T), x \\in \\mathbb{R}^2,\r\n\\end{equation}\r\nwith $M$ and $\\sigma$ denoting grain mobility and surface energy constants, $\\vec n(x,t)$ denoting the inward unit normal, and $\\kappa(x,t)$ denoting mean curvature. This type of motion was observed experimentally by Beck for crystallites in metals \\cite{bec52}.\r\n\\end{enumerate}\r\n     \r\n\r\n\r\nNote that grain networks retain their topological structure during their existence time.  The parabolic PDE (\\ref{curveflow})  is referred to as curve-shortening flow, and it can in fact be shown that the total edge length of a network decreases in time \\cite{barmak2011entropy}. A theorem of curve shortening flow on grain networks, due to von Neumann and Mullins \\cite{mul56,von1952discussion} gives an elegant relation between a topological quantity (sides of a grain) and a geometrical quantity (area of a grain).\r\n\r\n\\begin{figure}\r\n\\includegraphics[width=\\textwidth]{samplegrains.png}\r\n\\caption{\\textbf{Coarsening of a grain network.} Grains in material microstructure delete through annealing, causing the average grain size to increase.}\\label{samplegrains}\r\n\\end{figure}\r\n \r\n\r\n\r\n\\begin{theorem}(\\textbf{The von Neumann-Mullins $n-6$ rule}). 
For a grain with area $A$ and $n$ sides,\r\n\\begin{equation}\r\n\\frac{dA}{dt} = M\\sigma\\frac \\pi 3 (n-6).\r\n\\end{equation}  \r\n\\end{theorem}\r\nOne consequence of the $n-6$ rule is that grains with fewer than six sides may shrink to a point in finite time. Such deletions, along with possible edge deletions, result in networks with vertices that violate Herring's condition.  This raises the question: What types of solutions exist with initial conditions that are not grain networks?  Specifically, the \\textbf{continuation problem for a planar network $G$} asks the following: If we are given a planar network $G$ with vertices that may not satisfy Herring's conditions, is there an $\\varepsilon>0$ such that $G(t)$ is a grain network for $t\\in (0, \\varepsilon)$, and $G(t)\\rightarrow G$ in the Hausdorff distance as $t\\rightarrow  0^+$? Another interesting question is whether the flow is unique, or whether it becomes unique under additional assumptions.\r\n\r\nCurrently, there are no comprehensive answers to the above questions.  Viewed as a problem in parabolic PDE theory, the current focus is local, centering on simplified models with initial conditions of $k$ rays meeting at a vertex. Schnürer and Schulze found a unique self-similar solution with initial conditions of three rays meeting at arbitrary angles \\cite{sch07}.  Mazzeo and Sáez generalized to $k$-ray initial conditions, and discovered a bijection between self-similar solutions and $k$-Steiner trees of a hyperbolic metric \\cite{maz07}.  On the computational front, techniques for flowing through a graph not satisfying the grain network conditions include level set \\cite{elsey2009diffusion}  and variational \\cite{kin06} methods. \r\n \r\n\r\n\r\n\r\n\r\n\\section{Mean-field models of grain growth}\\label{fradkov}\r\nAnother approach to studying grain boundary coarsening involves using mean field assumptions to gather bulk statistics of grain behavior.  A well-known kinetic model is due to Fradkov \\cite{fra882,fra881}, who exploits the $n-6$ rule, along with an additional deterministic rule for side redistribution, to derive kinetic equations for densities of grains with a specific number of sides.   Fradkov makes the following mean field assumptions about the redistribution of sides when either a grain or edge deletes:\r\n\\begin{enumerate}\r\n\\item If a grain or side deletes, topological changes of grains will be selected in proportion to the number of sides of grains, and independent of their areas.  For instance, if a singular event causes a grain to lose a side, the probability that a particular grain with $j$ sides will drop to one with $j-1$ sides is\r\n\\begin{equation}\r\np_j=\\frac{j}{\\sum_{k>1} kN_k},\r\n\\end{equation}\r\nwhere $N_k$ is the number of grains with $k$ sides.  \r\n  This eliminates the notion of grains having neighbors.  \r\n\\item A free parameter $\\beta$ is introduced which is defined as the constant ratio between side deletions and grain deletions.  Such an assumption comes from the lack of a topological rule for edge growth.  
\r\n\\end{enumerate}\r\n\r\nFor a number density $f_n(a,t)$ of $n$-sided grains with area $a$ at time $t$, the Fradkov equations take the form of a transport equation with a topological source term:\r\n\\begin{equation}\r\n\\partial_t f_n(a,t)+(n-6)\\partial_a f_n(a,t) = \\Gamma(f(t))(Jf)_n(a,t) \\quad (a,t) \\in (0,\\infty)^2, n \\ge 2 ,\r\n\\end{equation}\r\nwith a collision operator $J$ and coupling weight $\\Gamma$ determined from conservation of area and polyhedral defect (networks retain an average of six sides). Well-posedness and self-similar solutions for the Fradkov model can be found in \\cite{henseler2008kinetic, her12}.\r\n\r\n\r\n\r\n \r\n\\section{PDMPs and grain coarsening}\\label{pdmpgraincoarse}\r\nThe main goal of this thesis is to understand grain statistics as a hydrodynamic limit of finitely many grains respecting deterministic drift and random topological transitions.  In particular, we will construct limiting kinetic equations describing the evolution of grain area densities for the class of grains with a fixed number of sides.   Toward this goal, the main tool we use is the theory of piecewise deterministic Markov processes (PDMPs).  The construction and theory of PDMPs will be discussed in Chapter 2.  However, not much is lost in thinking of a PDMP as a generalization of a jump process for particles that drift on vector fields.    \r\n\r\nFor the problem of grain coarsening, we wish to build a PDMP $X(t)$ that tracks a finite set of grains $(g_1,\\dots, g_n) =  ((a_1,s_1), \\dots, (a_n,s_n))$ with areas $a_i>0$ and side numbers $s_i \\in \\mathbb N.$  \r\nNote that the dimension of $X(t)$ can change with time, as grains delete from coarsening. Grain areas change at a constant rate, following the $n-6$ rule until a grain shrinks to a point, or an edge deletes (we will make an assumption on side deletion rates based on total particle number).  At that time, we impose mean field assumptions to change topologies of grains. \r\n\r\n \r\n\r\n \r\n\\begin{figure}  \r\n\\begin{centering} \r\n \\includegraphics[width = .4\\textwidth]{graintops.png} \r\n \\caption{\\textbf{Grain growth via a PDMP}. Particles drift according to their number of sides. Each tier denotes particles of different side numbers. In this instance, a vanishing grain in tier three triggers a random reassignment of the number of sides of grains in tiers 4, 7, and 8.}\r\n\\end{centering}\r\n\\end{figure}\r\n\r\n\\section{Overview}\r\n \r\nIn Chapter 2, we describe a PDMP in generality. We then state its infinitesimal generator $\\mathcal A$, and the class of functionals $\\mathcal D(\\mathcal A)$ that give rise to martingales through the Dynkin formula.    \r\n\r\nIn Chapter 3, we study a simplified version of the PDMP described in Section \\ref{pdmpgraincoarse}. Here, particles drift on the positive half-line until reaching the origin, where they are reassigned according to a probability density $p(x)$.     We first prove that for empirical densities approaching nonatomic initial conditions, the densities $u(x,t)$ of particles converge to a weak form of the limiting kinetic equation\r\n\\begin{alignat}{2}\\label{pde1}  \r\n\\partial_tu(x,t) - \\partial_x u(x,t) = p(x)u(0,t) \\quad t, x \\in \\mathbb{R}^+, \\\\\r\nu(x,0) = u_0(x) . \\nonumber\r\n\\end{alignat}\r\nWe then focus on constructing measure valued weak solutions of (\\ref{pde1}) with mild restrictions on initial data. 
Finally, we use a popular result of renewal theory to show the existence of an attractor under certain assumptions.\r\n\r\n  Chapter 4 is a generalization of PDMPs on grain networks described in Section \\ref{pdmpgraincoarse}.  Here, we show the existence of fluid limits of PDMPs that allow generalized rules and probability distributions for particle jumps between different species.  As in Chapter 3, we will show the existence of a weak limit for densities $u_k(x,t)$ on $L_k$, that satisfy the advection-reaction equations\r\n\\begin{eqnarray} \\label{pde2}\r\n  \\lefteqn{\\partial_t u_k(x,t)+\\partial_{x}(v_k(x)u_{k}(x,t))=  \\phantom{u_j(x,t)\\mathbf{1}_{R^{(l)}_{ij}}\r\n-K^{(l)}W_k^{(l)}(t)u_k(x,t)}}\\label{smoooth}\\\\\r\n && \\sum_{l = 1}^{M_-}u_l(0,t)v_{l}(0)\\left[\\sum_{i = 1}^{M}\\sum_{j= 1}^{K^{(l)}} W_i^{(l)}(t)u_i(x,t)\\mathbf{1}_{R^{(l)}_{ij}=k}\r\n-K^{(l)}W_k^{(l)}(t)u_k(x,t)\\right]\\nonumber\\\\ \r\n&&+\\frac{\\beta G(t)}{G^{int}(t)}\\left( \\sum_{i = 1}^{M}\\sum_{j= 1}^{K^{int}} w^{int} _iu_{i}(x,t)\\mathbf{1}_{R^{int}_{ij} = k}-K^{int}w^{int}_ku_{k}(x,t)\r\n \\right), \\nonumber\r\n \\end{eqnarray}\r\n with initial conditions\r\n \\begin{equation}\r\nu_k(x,0) = u_{k}^0(x), \\quad k = 1,\\dots, M. \\nonumber\r\n\\end{equation}\r\nThe solutions of (\\ref{pde2}) will be shown to be unique in the class of $L^1\\cap L^\\infty(\\mathbb R^+)$ functions.\r\n\r\nFinally, in Chapter 5, we return to the problem of grain coarsening. First, we examine the topological behavior of grain networks before and after a grain or side vanishes. After finding a finite set of rules that dictate topological changes,    we then define the various free parameters in the $k$-species model to align with a mean field model for grain coarsening.  Next, we prove some basic properties of the limiting kinetic equation, such as conservation of mass and polyhedral defect.  Finally, we end the chapter with a computational investigation of grain growth, and raise several conjectures about grain statistics.   \r\n  \r\n\r\n", "meta": {"hexsha": "1e829408f6e227ecc4cc77e43abf69ce50fb961c", "size": 11560, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Joe Thesis/Thesisfiles/introduction.tex", "max_stars_repo_name": "Danie1Johnson/thesis", "max_stars_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Joe Thesis/Thesisfiles/introduction.tex", "max_issues_repo_name": "Danie1Johnson/thesis", "max_issues_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Joe Thesis/Thesisfiles/introduction.tex", "max_forks_repo_name": "Danie1Johnson/thesis", "max_forks_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 104.1441441441, "max_line_length": 830, "alphanum_fraction": 0.755017301, "num_tokens": 3049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5733267577807917}}
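To make the simplified Chapter 3 model concrete, the following is a minimal simulation sketch (hypothetical code, not the thesis implementation): particles drift toward the origin at unit speed and, upon hitting it, are resampled from the reassignment density $p$ (here an exponential density, an arbitrary choice for illustration).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, t_end=5.0, dt=1e-3):
    """Toy PDMP: deterministic drift toward 0 at unit speed,
    with a jump (resampling from p) whenever a particle hits 0."""
    x = rng.exponential(1.0, size=n)   # initial positions ~ u_0
    t = 0.0
    while t < t_end:
        x -= dt                        # drift: dx/dt = -1
        hit = x <= 0.0                 # particles that reached the origin
        x[hit] = rng.exponential(1.0, size=hit.sum())  # resample from p
        t += dt
    return x

# A histogram of simulate() approximates the density u(x, t_end)
# solving the transport equation with source term p(x) u(0, t).
\end{verbatim}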
{"text": "\n\\subsection{Weak duality theorem}\n\nThe duality gap (\\(p^*-d^*\\) is non-negative.\n\n", "meta": {"hexsha": "7a6ef0c6320893bab3fab987552479cdb0283974", "size": 83, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/analysis/optimisationMulti/05-01-weak.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/analysis/optimisationMulti/05-01-weak.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/analysis/optimisationMulti/05-01-weak.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 13.8333333333, "max_line_length": 45, "alphanum_fraction": 0.6746987952, "num_tokens": 26, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.819893353516963, "lm_q2_score": 0.6992544335934766, "lm_q1q2_score": 0.57331406252056}}
{"text": "\\section{A few more things about coefficients, cohomology}\nProblem Set 5 now due Friday! Recall our example where we let $\\cI=(\\Z_{>0},|)$. You can consider $\\cI\\to\\mathbf{Ab}$, say assigning to each $i$ the integers $\\Z$, and $f_{ij}:\\Z\\xrightarrow{j/i}\\Z$. The colimit is $\\QQ$, where you send $X_n\\to\\QQ$ via $1\\mapsto \\frac{1}{n}$. It's rather natural. But it's not the simplest way to define $\\QQ$. You just could have defined $\\QQ=\\varinjlim(\\Z\\xrightarrow{2}\\Z\\xrightarrow{3}\\Z\\xrightarrow{4}\\Z\\to\\cdots)$.\n\nWe saw that $\\varinjlim$ is exact, that it commutes with tensor products, and that homology commutes with direct limits. So if I had a chain complex $C_\\bullet$ of abelian groups, then the process of computing homology involves exact sequences. If I want to compute:\n\\begin{equation*}\n H(C_\\bullet\\otimes\\QQ)= H(C_\\bullet\\otimes\\varinjlim \\Z)= H(\\varinjlim(C_\\bullet\\otimes\\Z))= H(\\varinjlim C_\\bullet)=\\varinjlim H(C_\\bullet)= H(C_\\bullet)\\otimes\\varinjlim \\Z= H(C_\\bullet)\\otimes\\QQ\n\\end{equation*}\nThus $-\\otimes\\QQ$ is exact, i.e., $\\QQ$ is \\emph{flat} over $\\Z$, so $\\Tor^\\ZZ_1(M,\\QQ)=0$. By UCT, this means that $ H_\\ast(X)\\otimes \\QQ\\simeq H_\\ast(X;\\QQ)$.\n\\begin{definition}\nThe $n$th Betti number of $X$ is defined to be $\\beta_n:=\\dim_\\QQ H_n(X;\\QQ)$.\n\\end{definition}\nThe Euler characteristic can just as well have been defined as $\\chi(X)=\\sum^\\infty_{n=0}(-1)^n\\beta_n$.\n\nOh, let me ask if 11 am works for those of you who are going to take 18.906. OK? It seems like it's too early.\n\nAnyway, every space has a diagonal map $X\\xrightarrow{\\Delta}X\\times X$. This induces a map $ H_\\ast(X;R)\\to H_\\ast(X\\times X;R)$, where $R$ is a PID. But now, we have a K\\\"{u}nneth map $\\alpha: H_\\ast(X;R)\\otimes_R H_\\ast(X;R)\\to H_\\ast(X\\times X;R)$. We can get some kind of comultiplication if $\\alpha$ is an isomorphism. And, well, the K\\\"{u}nneth theorem says that $\\Tor^R_1( H_\\ast(X;R), H_\\ast(X;R))=0$ if and only if $\\alpha$ is an isomorphism. This condition is satisfied, for example, if each $ H_n(X;R)$ is flat over $R$ for all $n$ (for example, free over $R$). A special case is when $R$ is a field. So you get a comultiplication $\\Delta: H_\\ast(X;R)\\to H_\\ast(X;R)\\otimes_R H_\\ast(X;R)$ if this condition is satisfied.\n\nThis diagonal map has many properties.\n\\begin{definition}\nLet $R$ be a ring. A (graded, bounded below) coalgebra over $R$ is a (graded) $R$-module $M$ with a multiplication $\\Delta:M\\to M\\otimes_R M$ and an augmentation map $\\varepsilon:M\\to R$ such that all of the following diagrams commute:\n\\begin{equation*}\n\\xymatrix{ & M\\ar[d]\\ar[dr]\\ar[dl] & \\\\\nR\\otimes_R M & M\\otimes_R M\\ar[l]^{\\varepsilon\\otimes 1}\\ar[r]^{1\\otimes\\varepsilon} & M\\otimes_R R}\n\\end{equation*}\nWhere the diagonal maps are the canonical isomorphisms. 
And you have coassociativity:\n\\begin{equation*}\n\\xymatrix{\n\tM\\ar[r]^{\\Delta}\\ar[d]^{\\Delta} & M\\otimes_R M\\ar[d]^{\\Delta\\otimes 1}\\\\\n\tM\\otimes_R M\\ar[r]_{1\\otimes\\Delta} & M\\otimes_R M\\otimes_R M\n}\n\\end{equation*}\nAnd it's cocommutative (he'll just say commutative because there's no need to say ``co'' if we know we're working with coalgebras, but I want to write it anyway -- I'll probably abuse it though) if:\n\\begin{equation*}\n\\xymatrix{\n\t & M\\ar[dl]^\\Delta\\ar[dr]^\\Delta &\\\\\n\tM\\otimes_R M\\ar[rr]^{\\tau} & & M\\otimes_R M\n}\n\\end{equation*}\nwhere $\\tau(x\\otimes y)=(-1)^{|x|\\cdot|y|}y\\otimes x$ is the twisting map.\n\\end{definition}\n\\begin{example}\nThe K\\\"{u}nneth map is coassociative and cocommutative.\n\\end{example}\nConsider:\n\\begin{equation*}\n\\xymatrix{S_\\ast(X)\\otimes S_\\ast(Y)\\ar[r]^{\\tau}\\ar[d]^{\\times} & S_\\ast(Y)\\otimes S_\\ast(X)\\ar[d]^{\\times}\\\\\nS_\\ast(X\\times Y)\\ar[r]^{\\tau_\\ast} & S_\\ast(Y\\times X)}\n\\end{equation*}\nwhere $\\tau$ is as defined above on the tensor product and $\\tau$ is also used to denote the twisting map $X\\times Y\\to Y\\times X$. And this diagram commutes up to chain homotopy.\n\\begin{corollary}\nLet $k$ be a field. Then $ H_\\ast(X;k)$ has the natural structure of a cocommutative graded coalgebra. \n\\end{corollary}\nI could just talk about coalgebras. But one of my friends told me that nobody in France knew what coalgebras were. So we're going to talk about cohomology, and get an algebra structure. Some say that cohomology is better because you have algebras, but that's more of a sociological statement than a mathematical one.\n\\begin{slogan}\nCohomology gives algebras. It's a contravariant functor on spaces.\n\\end{slogan}\nA better reason for looking at cohomology is that there are many geometric constructions that pull back. For example, if I have some covering space $\\widetilde{X}\\to X$, and I have a map $f:Y\\to X$, I get a pullback covering space $f^{\\ast}\\widetilde{X}$. A better example is vector bundles (that we'll talk about in 18.906) -- they don't push out, they pull back. This'll give the theory of \\emph{characteristic classes}. Another even better reason is that cohomology is the target of the Poincar\\'{e} duality map.\n\\begin{definition}\nLet $N$ be an abelian group. A singular $n$-cochain on $X$ with coefficients in $N$ is a function $\\Sin_n(X)\\to N$. \n\\end{definition}\nIf $N$ is an $R$-module, then I can extend linearly to get a map $S_n(X;R)\\to N$.\n\\begin{notation}\nWrite $S^n(X;N):=\\Map(\\Sin_n(X);N)=\\Hom_R(S_n(X;R);N)$.\n\\end{notation}\nThis is going to give me something contravariant, that's for sure. But I want to get a cochain complex $(S^\\ast(X;N),d)$. Cochain means that the differential increases the degree.\n\nI have a map $(-,-):S^n(X;N)\\otimes S_n(X;R)\\to N$ given by evaluation. I want $d$ on $S^\\ast(X;N)$ that makes the map $S^n(X;N)\\otimes S_n(X;R)\\to N$ a chain map, where we regard $N$ as a chain complex with zeros everywhere except in dimension $0$. But the way I've said it doesn't quite make sense because $S^\\ast(X;N)$ isn't a chain complex! So let $S^\\ast(X;N)$ be a chain complex with $S^n(X;N)$ in dimension $(-n)$. But now it's not bounded below. So, to be honest, I should say that we're going to make a decision. 
If $C_\\bullet,D_\\bullet$ aren't necessarily bounded below chain complexes, then $(C_\\bullet\\otimes D_\\bullet)_n:=\\bigoplus_{i+j=n}C_i\\otimes D_j$.\n\nLet $f\\in S^n(X;R)$ and $\\sigma\\in S_n(X;R)$. This is a chain map if $d (f,\\sigma)=0$. And $d(f,\\sigma)=(df,\\sigma)+(-1)^{|f|}(f,d\\sigma)$. Let me erase this. This doesn't make sense. Scratch that. Let's start over.\n\nLet $f\\in S^{n+1}(X;R)$ and $\\sigma\\in S_n(X;R)$. Then $(df,\\sigma)+(-1)^{|f|}(f,d\\sigma)=d(f,\\sigma)=0$. So $(df)(\\sigma)=(-1)^{|f|+1}f(d\\sigma)$. That's what the differential is. This makes $S^\\ast(X;N)$ a cochain complex. So you can consider its homology.\n\\begin{definition}\nThe $n$th cohomology of $X$ with coefficients in $N$ is $ H^n(X;N):= H^n(S^\\ast(X;N))$. This is a contravariant functor on $\\mathbf{Top}$.\n\\end{definition}\nIf you fix $X$, you get a covariant functor $ H^n(X;-)$ on $\\mathbf{Mod}_R$.\n\\begin{construction}\nI have a chain map $(,):S^\\ast(X;N)\\otimes_R S_\\ast(X;R)\\to N$. I can apply homology to this, to get $\\alpha: H^\\ast(X;N)\\otimes_R H_\\ast(X;R)\\to H(S^\\ast(X;N)\\otimes_R S_\\ast(X;R))$. In particular, you get a natural pairing $(,): H^\\ast(X;N)\\otimes_R H_\\ast(X;R)\\xrightarrow{\\alpha} H(S^\\ast(X;N)\\otimes_R S_\\ast(X;R))\\xrightarrow{(,)}N$. This ``evaluation map'' is called the Kronecker pairing.\n\\end{construction}\n\\begin{warning}\n$S^n(X;\\Z)=\\Map(\\Sin_n(X);\\Z)=\\prod_{\\Sin_n(X)}\\Z$, which is probably an uncountable product. An awkward fact is that this is never free abelian.\n\\end{warning}\n\\begin{construction}\nIf $A\\subseteq X$, there is a restriction map $S^n(X;N)\\to S^n(A;N)$. There is an injection $\\Sin_n(A)\\hookrightarrow \\Sin_n(X)$. And as long as $A$ is nonempty, I can split this. So any function $\\Sin_n(A)\\to N$ extends to $\\Sin_n(X)\\to N$. This means that $S^n(X;N)\\to S^n(A;N)$ is surjective. You can define the kernel as $S^n(X,A;N)$, which sits in $0\\to S^n(X,A;N)\\to S^n(X;N)\\to S^n(A;N)\\to 0$. 
This gives the \\emph{relative cochains}.\n\\end{construction}\nI can take the cohomology of this short exact sequence to get a long exact sequence:\n\\begin{equation*}\n\\xymatrix{\n\t\\cdots & & \\\\\n\t H^1(X,A;N)\\ar[r] & H^1(X;N)\\ar[r] & H^1(A)\\ar[ull]^{\\delta}\\\\\n\t H^0(X,A;N)\\ar[r] & H^0(X;N)\\ar[r] & H^0(A)\\ar[ull]^{\\delta}\n}\n\\end{equation*}\nAnd $ H^0(X;N)$ is the kernel of $\\Map(\\Sin_0(X),N)\\to \\Map(\\Sin_1(X),N)$, so that $ H^0(X;N)=\\Map(\\pi_0(X),N)$.\n", "meta": {"hexsha": "c8d958e973cbc8c4d8f035fad173b9cd36e34148", "size": 8315, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-905/lec-26-cohomology.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "old-905/lec-26-cohomology.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "old-905/lec-26-cohomology.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 88.4574468085, "max_line_length": 732, "alphanum_fraction": 0.6933253157, "num_tokens": 2869, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.8198933359135361, "lm_q1q2_score": 0.5733140502112857}}
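To see the last identification concretely (an added check, using the sign convention above): a $0$-cochain is just a function $f:\Sin_0(X)\to N$, and for a $1$-simplex $\sigma$ we have $(df)(\sigma)=\pm f(d\sigma)=\pm\left(f(\sigma(1))-f(\sigma(0))\right)$. So $df=0$ exactly when $f$ takes the same value on the two endpoints of every path in $X$, i.e., when $f$ is constant on path components. Since $H^0(X;N)=\ker d$, this gives $H^0(X;N)=\Map(\pi_0(X),N)$.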
{"text": "Having described how to operate with basic sets, we consider a more fundamental representation problem.\n%\nWe have seen that there exist classes of sets.\nSome of them, such as polyhedra, are closed under various set operations. That property is convenient because the type of the result is known in advance.\n%\nThe whole class of convex sets is also closed under the operations described in the upper part of Table~\\ref{tab:operations}, but we can typically not represent the result with the limited amount of set types in LazySets. This is not a shortcoming of LazySets: you would need infinitely many set representations for all possible combinations! However, we can resort to a simple yet powerful trick to effectively represent the result of the set operations: \\emph{lazy representation}.\n%\nGoing a step further, by making both basic set types and operations between sets live in the same abstraction layer (namely always subtyping \\code{LazySet} irrespective whether it is a concrete set or the result of an operation) allows to easily \\emph{compose} set computations.\n\n\n\\subsection{Type composition} \\label{sec:composition}\n\nTo give an example, the Minkowski sum of a square and a disc (two-dimensional balls in the infinity norm and Euclidean norm) is not representable with a basic type in LazySets. Hence \\code{minkowski\\_sum} will yield an error. But we can apply the \\emph{lazy} operation, which is called \\code{MinkowskiSum}. \\code{MinkowskiSum} itself subtypes \\code{LazySet}. This choice allows for ease of composition.\n\nWe wrap the operands in a new object that, by definition, represents the result of the operation, but without actually performing the computation. (This also motivates the name of LazySets).\n\n\\smallskip\n\nAs an illustrative example, suppose that we are interested in the formula $\\Omega_0 = \\CH(\\X_0, \\Phi \\X_0 \\oplus Y)$, where $\\CH$ and $\\oplus$ are defined in Table~\\ref{tab:operations}, $X_0$ and $Y$ are sets, and $\\Phi$ is a matrix defined in Appendix~\\ref{sec:omega0}. Such formulas are prevalent in reachability analysis of linear initial-value problems, or nonlinear ones after some form of conservative linearization; see for example \\cite{althoff2020set,ForetsS21} and references therein. Given the sets $\\X_0$, $Y$, and the matrix $\\Phi$, we can write:\n\n\\begin{minipage}{\\linewidth}\n\\begin{lstlisting}\njulia> \u03a9\u2080 = CH(X\u2080, \u03a6*X\u2080 \u2295 Y)\n\\end{lstlisting}\n\\end{minipage}\n\n\\begin{minipage}{0.69\\linewidth}\nThen, $\\Omega_0$ is a (nested) \\emph{lazy} representation of the  operation just as a normal \\code{LazySet}. As such, it can be used for further operations (conversion, approximation, evaluation). 
The structure of the nested operations is internally represented in the form of a tree, which can be visualized with \\href{https://github.com/JuliaTeX/TreeView.jl}{TreeView.jl} as shown in the diagram (right).\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.2\\linewidth}\n\t\\begin{tikzpicture}[level/.style={sibling distance=8mm,level distance=8mm}]\n\t\t\\node {\\texttt{CH}}\n\t\tchild {\n\t\t\tnode {\\texttt{X}}\n\t\t}\n\t\tchild {\n\t\t\tnode {$\\oplus$}\n\t\t\tchild {\n\t\t\t\tnode {\\texttt{*}}\n\t\t\t\tchild {\n\t\t\t\t\tnode {$\\Phi$}\n\t\t\t\t}\n\t\t\t\tchild {\n\t\t\t\t\tnode {\\texttt{X}}\n\t\t\t\t}\n\t\t\t}\n\t\t\tchild {\n\t\t\t\tnode {\\texttt{Y}}\n\t\t\t}\n\t\t};\n\t\\end{tikzpicture}\n\\end{minipage}\n\n\\smallskip\n\nLazy operations can be efficiently evaluated, as we describe next.\n\n\n\\subsection{Support-function calculus} \\label{sec:supfun}\n\nA standard approach to operate with compact and convex sets $\\X \\subseteq \\R^n$ is to use the \\emph{support function} \\cite{LeGuernic09}.\n%\nThe support function along direction $d \\in \\R^n$, denoted $\\rho(d, \\X)$, is the maximum of $d^T x$ over all $x \\in \\X$, i.e., the support function describes the (signed) distance of the supporting hyperplane in direction $d$ from the origin.\n%\nIt can be used to efficiently find the boundary of a set in a given direction.\n%\nThe maximizers are called \\emph{support vectors} $\\sigma(d, \\X)$. Intuitively, the support vectors are the extreme points of $\\X$ in direction $d$.\n%\nBasic properties of the support function are given in Appendix~\\ref{sec:supfunc_properties}.\n\nFor various set representations, the support function is known analytically and can be efficiently evaluated numerically. Such cases include hyperrectangular sets and zonotopic sets.\n%\nFor sets with less structure, e.g., if $\\X$ is a polytope in half-space representation, its support function can be computed by solving a linear program, for which fast and robust solvers exist.\n%\nBut the main advantage of using the support function in LazySets lies in the extensive use of \\emph{composition rules}, as we describe later.\n\n\\smallskip\n\nLazySets offers \\code{$\\uprho$(d, X)} (or \\code{support\\_function(d, X)}) to compute the support function $\\rho(d,\\X)$, and \\code{$\\upsigma$(d, X)} (or \\code{support\\_vector(d, X)}) to compute (some) support vector $\\sigma(d,\\X)$. Fig.~\\ref{fig:supfunc} (left) illustrates the evaluation of the support function over the polygon $\\X$ (orange) along direction $(-1, 1)^T$.\n\n\\begin{minipage}{\\linewidth}\n\\vspace{-\\abovedisplayskip}\n\\begin{lstlisting}\njulia> d = [-1, 1]\n\njulia> \u03c1(d, X) # or support_function(d, X)\n3.6\n\njulia> \u03c3(d, X) # or support_vector(d, X)\n[-3.0, 0.6]\n\\end{lstlisting}\n\\end{minipage}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.49\\linewidth, keepaspectratio]{img/supfunc}\n\t\\includegraphics[width=0.49\\linewidth, keepaspectratio]{img/supfunc_oct}\n\t\\vspace*{1mm}\n\t\\caption{Left: The supporting hyperplane of the set $\\X$ along direction $d$.\n\tIn red we plot the distance of the hyperplane to the origin, which is given by $\\rho(d', \\X)$ where $d' = d / \\Vert d \\Vert$.\n\tRight: An outer approximation of $\\X$ using the eight directions of a regular octagon.}\n\t\\label{fig:supfunc}\n\\end{figure}\n\nConsider again the set $\\Omega_0$ from the previous section. 
Suppose that we are interested in the support value of $\\Omega_0$ along a given direction $d \\in \\R^2$.\n%\nSince the support function distributes over the Minkowski sum, $\\rho(d, \\X \\oplus \\Y) = \\rho(d, \\X) + \\rho(d, \\Y)$ for any pair of sets $\\X, \\Y \\subseteq \\R^n$, and since it holds that $\\rho(d, M \\X) = \\rho(M^T d, \\X)$ for any matrix $M \\in \\R^{n \\times n}$, we can propagate the computation through the operation tree until a concrete set is found, and in many cases, an analytic formula is available.\n%\nThat is precisely what LazySets does, automatically, when we make queries to the lazy set such as asking for its support function along $d = (-1, 1)^T$ from before.\n\n\\begin{minipage}{\\linewidth}\n\t\\vspace{-\\abovedisplayskip}\n\t\\begin{lstlisting}\njulia> @btime \u03c1($d, $\u03a9\u2080)\n117.236 ns (2 allocations: 192 bytes)\n-0.8\n\njulia> @btime \u03c1($d, concretize($\u03a9\u2080))\n  23.700 \u03bcs (203 allocations: 16.03 KiB)\n-0.8\n\t\\end{lstlisting}\n\\end{minipage}\n\nIn the first benchmark we evaluate the support function on the lazy set.\\footnote{The Julia macro \\code{@btime} provided by the \\href{https://github.com/JuliaCI/BenchmarkTools.jl}{BenchmarkTools.jl} package evaluates the given command multiple times and returns the smallest record.}\nIn the second benchmark we first convert to a concrete set representation (in this case a polygon in vertex representation) and then evaluate the support function.\nThe first case is two orders of magnitude faster.\nThis exemplifies that set computations can be implemented efficiently using the support function, which becomes more prominent in higher dimensions.\n\nWe note that in the above computation we have only obtained a bound in direction $d$, not in other directions. For many applications it is sufficient to evaluate the support function in only a few directions. 
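Unrolling this propagation for $\Omega_0$ by hand illustrates the mechanism (a sketch using the two rules just stated, together with the standard fact that the support function of the convex hull of a union is the pointwise maximum of the support functions):
\begin{align*}
\rho(d, \Omega_0) &= \rho\big(d, \CH(\X_0, \Phi \X_0 \oplus \Y)\big) = \max\left\{\rho(d, \X_0),\ \rho(d, \Phi \X_0 \oplus \Y)\right\} \\
&= \max\left\{\rho(d, \X_0),\ \rho(\Phi^T d, \X_0) + \rho(d, \Y)\right\},
\end{align*}
so the query reduces to evaluations on the concrete sets $\X_0$ and $\Y$.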
For example, in the helicopter model presented in Section~\\ref{sec:reachability} we are only interested in the vertical velocity, so we only need to evaluate the support function twice to obtain the upper and lower bound in that dimension.\nAnother example is to enclose $\\Omega_0$ with a bounded set, for which we can pick a list of \\emph{template} directions (see Section~\\ref{sec:approximation}).\n", "meta": {"hexsha": "460d14f429d95eca7a5f026c5288ef2a9f220cc5", "size": 8019, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/lazy.tex", "max_stars_repo_name": "JuliaReach/LazySets-JuliaCon21", "max_stars_repo_head_hexsha": "033612bd98ef8692195e4860e7eaa7bfbd05e6c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-09-28T20:12:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-27T16:51:11.000Z", "max_issues_repo_path": "paper/lazy.tex", "max_issues_repo_name": "JuliaReach/LazySets-JuliaCon21", "max_issues_repo_head_hexsha": "033612bd98ef8692195e4860e7eaa7bfbd05e6c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-11-03T13:31:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-16T21:15:45.000Z", "max_forks_repo_path": "paper/lazy.tex", "max_forks_repo_name": "JuliaReach/LazySets-JuliaCon21", "max_forks_repo_head_hexsha": "033612bd98ef8692195e4860e7eaa7bfbd05e6c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-27T15:57:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-27T15:57:41.000Z", "avg_line_length": 60.75, "max_line_length": 559, "alphanum_fraction": 0.75146527, "num_tokens": 2147, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933359135361, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.5733140450727199}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n\\subsection{Direct Product of Groups}\t\n\\begin{defn}[Direct Product]\n\tThe \\textbf{direct product} \\(G_1\\times G_2\\) of groups \\(G_1,G_2\\) is the group of all ordered pairs \\((g_1,g_2)\\) where \\(g_i \\in G_i\\), with the usual definition of multiplication:\n\t\\begin{align*}\n\t\t(g_1,g_2)(h_1,h_2) = (g_1h_1,g_2h_2).\n\t\\end{align*}\n\\end{defn}\n\tIt is clearly a group, and the projection subsets \\(G_1^{*}= \\left\\{(g_1,e_2) \\mid g_1 \\in G_1 \\right\\} \\) and \\(G_2^{*}=\\left\\{(e_1,g_2) \\mid g_2 \\in G_2 \\right\\} \\) are isomorphic to their respective groups. This in turn tells us that \\(\\left| G_1\\times G_2 \\right| = \\left| G_1 \\right| \\left| G_2 \\right| \\).\\\\\n\t\\begin{prop}\n\\(G_1^{*},G_2^{*} \\triangleleft G_1\\times G_2\\) are normal subgroups, and every \\(u \\in G_1\\times G_2\\) can be decomposed as \\(u = u_1u_2\\), \\(u_i \\in G_i^{*}\\).\\\\\n\nThe converse holds as well; if \\(N,M\\) are normal subgroups of a group \\(G\\), and every \\(g \\in G\\) can be written as \\(g = nm\\), \\(n \\in N, m \\in M\\), then \\(G \\cong N\\times M\\).\n\t\\end{prop}\nOf course, \\((G_1\\times G_2) / G_1^{*} \\cong G_2\\) and vice versa.\\\\\n\nWe can even take the direct product of more than two groups:\n\\begin{align*}\n\tg = (g_1,g_2,\\ldots,g_n) \\in G_1\\times G_2\\times \\ldots \\times G_n\\\\\n\t(g_1,g_2,\\ldots,g_n)(h_1,h_2,\\ldots,h_n) = (g_1h_1,g_2h_2,\\ldots,g_nh_n).\n\\end{align*}\n\nWe will later see that direct products provide us a means of characterizing all finitely generated abelian groups.\n\\end{document}\n", "meta": {"hexsha": "e0b05f54c8e575f3da3ef3a0059f7dcaab803e3a", "size": 1677, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Abstract Algebra - Introductory/Algebra I/Notes/source/2020-04-07-GroupProd.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Abstract Algebra - Introductory/Algebra I/Notes/source/2020-04-07-GroupProd.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Abstract Algebra - Introductory/Algebra I/Notes/source/2020-04-07-GroupProd.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9142857143, "max_line_length": 314, "alphanum_fraction": 0.6684555754, "num_tokens": 656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8198933271118222, "lm_q1q2_score": 0.5733140337795165}}
{"text": "%!TEX root = TTK4215-Summary.tex\n\\section{Adaptive pole placement control (APPC)}\nAPPC is pretty cool, because it does not require a plant of minimum phase (as opposed to MRAC).\n\n\\subsection{Indirect PPC}\nGiven the system\n\\begin{equation}\n\ty_p = \\frac{Z_p(s)}{R_p(s)} u_p\n\\end{equation}\nwhere\n\\begin{itemize}\n\t\\item $R_p$ is monic, of known degree $n$,\n\t\\item $Z_p, R_p$ are coprime,\n\t\\item $G_p$ strictly proper ($\\deg{Z_p} < n$).\n\\end{itemize}\n\nThe objective is to choose $u_p$ so that the closed-loop poles are the roots of a polynomial $A^*(s)$. Can be done by choosing $P$ (degree $q+n-1$) and $L$ (monic, degree $n-1$) such that\n\\begin{equation}\n\tL Q_m R_p + P Z_p = A^*\n\\end{equation}\nis satisfied for $A^*$ of degree $2n+q-1$. $Q_m$ is chosen so that\n\\begin{equation}\n\tQ_m y_p = 0,\n\\end{equation}\nand the control input is given by\n\\begin{equation}\n\tQ_m L u_p = -P (y_p - y_m)\n\t.\n\\end{equation}", "meta": {"hexsha": "73e274f44961d5adf8d20236e31ff8a6daac7a4e", "size": 901, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TTK4215 System identification and adaptive control/sec-appc.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TTK4215 System identification and adaptive control/sec-appc.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TTK4215 System identification and adaptive control/sec-appc.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0689655172, "max_line_length": 187, "alphanum_fraction": 0.6903440622, "num_tokens": 326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8947894717137997, "lm_q2_score": 0.640635868562172, "lm_q1q2_score": 0.5732342303916571}}
{"text": "\\section{Introduction}\nWe will begin with a brief introduction of the basic terminology of algebraic geometry. This script will follow standard notation as in Bourbaki cfg. \\cite{bou} A integral domain (ID) is a unital commutative ring, with no zero divisors. For this, we define a specialised functor\n$$\\bao{rrcl}\n\\mrm{Ann}_R(\\cdot) &:\\modr& \\longrightarrow& \\modr(R)\\\\\n&&&\\\\\n&M &\\longmapsto&\\{r \\in R : r m = 0\\; \\forall m \\in M\\}\\\\\n\\ea$$\nIn particular, $\\mrm{Ann}_R(R)  = \\{r \\in R : r' \\in R \\backslash\\{0\\},\\; r r' = 0\\}$.\n\\subsection{Affine and projective schemes}\nFor a unital commutative ring $R$ we define a prime ideal to be an ideal $\\prm \\subsetneq R$ such that for all $r, s \\in R$ we have\n$$r s \\in \\prm \\ \\Rightarrow r \\in \\prm \\vee s \\in \\prm.$$\nFurthermore, for $R$ as above we define\n$$\\specr := \\{\\prm \\in \\mathrm{Mod}_R : \\prm \\ \\mathrm{prime}\\}$$\nto be the spectrum of $R$. Thereby, $\\mathrm{Mod}_R$ is the category of $R$-modules. We call an ideal $\\mx \\in \\mathrm{Mod}_R$ maximal if for all ideals $\\mathfrak{n} \\in \\mathrm{Mod}_R$\n$$\\mx \\subset \\mathfrak{n} \\Leftrightarrow \\mx = \\mathfrak{n} \\ \\mathrm{or}\\ R = \\mathfrak{n}.$$\nWe denote the class of all maximal ideals with $\\mathrm{Max}(R)$. If $R$ is unital as assumed we get that $\\mathrm{Max}(R) \\subset \\specr$. Let $R$ be as above and $\\prm \\in \\specr$ the localisation, $S_\\prm^{-1} R$, of $R$ wrt. $\\prm$ is defined to be\n$$R_\\prm := S_{\\prm}^{-1}R = S_\\prm \\times R/\\sim, \\ \\mathrm{with}\\ \\sim =\\left\\{\\left((s,r),(s',r')\\right) : \\exists t \\in S_\\prm, \\;\\mrm{s.t.}\\; t(s' r - s r') \\in \\prm\\right\\},$$\nwhere $S_\\prm = \\{r \\in R : r \\notin \\prm\\} = R \\backslash \\prm$ and with addition and multiplication:\n$$\\begin{array}{rcl}\nm_+ &=& \\left[\\left([s,r],[s',r']\\right) \\longmapsto [s s', s' r + s r']\\right],\\\\\n&&\\\\\nm_\\cdot &=& \\left[\\left([s,r],[s',r']\\right) \\longmapsto [s s', r r']\\right],\\\\\n\\end{array}$$\nrespectively. We remark that $S_\\prm^{-1}R$ is a local ring i.e. it contains only one maximal ideal. Given a non-zero divisor and non-unit $r \\in R$, we define\n$$S_r := R \\backslash \\bigcup_{r \\notin \\prm} \\prm.$$\nThis is set is in general not an ideal. However, it is a multiplicative monoid: if $a$ is not in $\\prm$ and $b$ is not in $\\prm$ then neither is $a b$. This holds for all $\\prm$ not containing $r$. Now, given a topological space $(X, \\tau)$ and a ring $R$ we get the \n$$S := \\mathrm{map}(X,R) = R^X$$\nthe ring of $R$-valued functions.\n\\subsubsection{Affine case}\n\\newcommand{\\calf}{\\mathcal{F}}\nGiven a finite set of elements $F = \\{f \\in R\\}$, we define its ideal to be\n$$I(F) := \\left\\{\\sum_{f \\in F} p_f f : p_f \\in R,\\ \\forall f \\in F\\right\\} = \\sum_{f \\in F} R. f.$$\nConversely, for any ideal $I \\in \\modr$, the set of prime ideals containing $I$ is defined to be\n$$V(I) := \\{\\prm \\in \\specr : I \\subset \\prm\\},$$\nthe so-called zero set (or more specifically: common set of zeros). 
For two ideals $I, J \\in \\modr$, the sums and products satisfy:\n$$\\bao{rcl}\nV(I \\cdot J) &=& \\{\\prm : I J \\subset \\prm\\}\\\\\n&&\\\\\n&=& \\{\\prm : I \\subset \\prm \\vee J \\subset \\prm\\}\\\\\n&&\\\\\n&=& \\{\\prm : I \\subset \\prm\\} \\cup \\{\\prm : J \\subset \\prm\\}\\\\\n&&\\\\\n&=&V(I) \\cup V(J)\\\\\n\\ea$$\nThe argument goes as follows: since $\\prm$ is prime, $I J \\subset \\prm$ implies $I \\subset \\prm \\vee J \\subset \\prm$; conversely, if one factor lies in $\\prm$, then $I \\cdot J \\subset I \\cap J \\subset \\prm$.\n$$\\bao{rcl}\nV(I + J) &=& \\{\\prm : I + J \\subset \\prm\\}\\\\\n&&\\\\\n&=& \\{\\prm : I \\subset \\prm \\wedge J \\subset \\prm\\}\\\\\n&&\\\\\n&=& \\{\\prm : I \\subset \\prm\\} \\cap \\{\\prm : J \\subset \\prm\\}\\\\\n&&\\\\\n&=& V(I) \\cap V(J)\\\\\n\\ea$$\nWe remark that in a general unital commutative ring $R$ there need not be a unique (up to order and association) factorisation; consequently, the inclusion $I \\cdot J \\subset I \\cap J$ can be strict. One prominent example follows after a few definitions.\n\\begin{defi} Some important definitions:\n\\bd\n\\item[Associatedness] Two elements $r, r' \\in R$ are called associated, if there exists a $c \\in R^\\times$ s.t.\n$$r = c r'.$$\n\\item[Irreducibility] An element $r \\in R \\backslash (R^\\times \\cup \\{0\\})$ is called irreducible iff in every factorization one factor is a unit:\n$$c r' = r \\Rightarrow c \\in R^\\times \\vee r' \\in R^\\times;$$\nin particular, the remaining factor is then associated to $r$.\n\\item[UFD] An ID $R$ is called a unique factorization domain (UFD) if for each $r \\in R\\backslash (R^\\times \\cup \\{0\\})$, there exist irreducible elements $p_i$ s.t. the factorization\n$$r = \\prod_i p_i^{s_i}, \\sum_i s_i \\geq 1$$\nis unique up to associatedness and permutation.\n\\item[PID] An ID $R$ is called a principal ideal domain (PID) if each ideal $I$ is generated by one element; every PID is a UFD.\n\\item[EUCL] An ID $R$ is called Euclidean, if there is a map $\\phi : R \\backslash\\{0\\} \\longrightarrow \\zz_{>0}$ such that division with remainder holds: for all $a, b \\in R$ with $b \\neq 0$ there exist $q, r \\in R$ with $a = q b + r$ and either $r = 0$ or $\\phi(r) < \\phi(b)$. Every Euclidean domain is a PID.\n\\ed\n\\end{defi}\n\\ex The following rings are all examples of UFDs:\n$$\\zz,\\ \\qz,\\ \\fz{p},\\ \\mrm{where}\\ p\\ \\mrm{prime}.$$\n\\ex The ring $R = \\zz[\\sqrt{-3}] = \\zz \\oplus \\zz.\\sqrt{-3}$: The element $2 \\in R$ is irreducible as any divisor $d \\in R$ is either a unit $d \\in R^\\times = \\mrm{Gl}_1(R) = \\{\\pm 1\\}$, or associated to $2$.\\\\\n\\indent Similarly, $1 + \\sqrt{-3}$ is irreducible by the same argument and the consideration that:\n$$R \\simeq_{\\mrm{Rng}} \\zz.I_2 \\oplus \\zz.\\left(\\bao{cc}0&-3\\\\1&0\\\\\\ea\\right), \\det_\\zz : R \\longrightarrow \\zz, r \\longmapsto \\det \\left(\\bao{cc}r & -3 r'\\\\r'&r\\\\\\ea\\right) = r^2 + 3 r'^2 .$$\nTherefore, $\\det_\\zz (1 + \\sqrt{-3}) = 1 + 3 = 4 = \\det_\\zz 2$. We arrive immediately at\n$$2 \\mid 4 \\wedge 1 + \\sqrt{-3} \\mid 4 \\Leftrightarrow 4 \\in \\left<2\\right> \\wedge 4 \\in \\left<1 + \\sqrt{-3}\\right>$$\nwithout the two being associated. Moreover, one can show quickly that\n$$\\left<2\\right> \\cdot \\left<1 + \\sqrt{-3}\\right> = \\left<2 + 2 \\sqrt{-3}\\right> \\subsetneq \\left<2\\right> \\cap \\left<1 + \\sqrt{-3}\\right> = \\left\\{\\left(\\bao{cc}r&-3 r'\\\\r'&r\\\\\\ea\\right) : r + r' \\in 2 \\zz\\right\\}.$$\nAn interesting result: the intersection is strictly larger than the product of the two (principal) ideals. This is one of the prominent examples of a non-UFD. 
Moreover, $4$ has no unique decomposition with respect to irreducibility:\n$$(1 + \\sqrt{-3}) (1 - \\sqrt{-3}) = 1 - (-3) = 4 = 2^2,$$\nas well as\n$$2 - (1 + \\sqrt{-3}) = 1 - \\sqrt{-3} = \\frac{4}{1 + \\sqrt{-3}}.$$\nThis example will be revisited in one of the following sections.\n\\begin{theo}\nIf $R$ is a UFD then so is its polynomial ring $R[X]$.\n\\end{theo}\nThis theorem has important implications -- we can establish the existence of a decomposition of each finitely generated module with respect to a given endomorphism:\n\\begin{coro}\nLet $M \\in \\mrm{Mod}_R$ be finitely generated, $\\varphi \\in \\mrm{End}_R(M)$, $\\mathbb{P}$ a system of representatives of all irreducible polynomials in $R[X]$ and $p_\\varphi \\in R[X]$ its characteristic polynomial. Then there exists a unique monic polynomial $m_\\varphi$ of smallest degree with decomposition:\n$$m_\\varphi = \\prod_{\\substack{p \\in \\mathbb{P}\\\\p \\mid p_\\varphi\\\\}} p^{s_p},\\ s_p \\geq 1 \\forall p$$\nsuch that $m_\\varphi(\\varphi) = 0 \\in R[\\varphi]$. Then there exists a sequence of integers $t_p \\leq s_p$ and $u_{t_p}\\geq 1$ such that\n$$R[\\varphi] \\simeq \\sum_{\\substack{p \\in \\mathbb{P}\\\\p \\mid p_\\varphi}}\\left(R[X]/\\left<p^{t_p}\\right>\\right)^{u_{t_p}}$$\nand $\\sum_p u_{t_p} t_p = \\deg p_\\varphi$.\n\\end{coro}\n", "meta": {"hexsha": "9f0e483cf60e5bfa195a12d2a324301a0d3e5043", "size": 7339, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "grp_scheme/intro.tex", "max_stars_repo_name": "gmuel/texlib", "max_stars_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "grp_scheme/intro.tex", "max_issues_repo_name": "gmuel/texlib", "max_issues_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "grp_scheme/intro.tex", "max_forks_repo_name": "gmuel/texlib", "max_forks_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.887755102, "max_line_length": 294, "alphanum_fraction": 0.6435481673, "num_tokens": 2686, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.870597271765821, "lm_q2_score": 0.6584175139669997, "lm_q1q2_score": 0.5732164913425043}}
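The irreducibility claims in the example above can be verified with the norm (an added check): $\det_\zz$ is multiplicative, so $2 = x y$ implies $\det_\zz(x)\det_\zz(y) = \det_\zz(2) = 4$. Since $r^2 + 3 r'^2 = 2$ has no solution in $\zz$, no element has norm $2$; hence one factor has norm $1$ and is therefore a unit, and $2$ is irreducible. The same argument applied to $\det_\zz(1 + \sqrt{-3}) = 4$ shows that $1 + \sqrt{-3}$ is irreducible.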
{"text": "\\chapter{Methods}\n\\subsection{Fetching the data}\nThe raw material of this thesis is coming from the \\textit{International Association of Analysts}. Thirteen magmatic rocks were analyzed by approximately a hundred laboratories. \n\nWe began by converting the datasets in a clean $n\\times p$ matrix :\n\\begin{align} \\label{matrix}\nX_{n,p} = \n\\begin{pmatrix}\nx_{1,1} & x_{1,2} & \\cdots & x_{1,p} \\\\\nx_{2,1} & x_{2,2} & \\cdots & x_{2,p} \\\\\n\\vdots  & \\vdots  & \\ddots & \\vdots  \\\\\nx_{n,1} & x_{m,2} & \\cdots & x_{n,p} \n\\end{pmatrix}\n\\end{align}\nWith $n$ the number of laboratories which analyzed the rock sample, for January 2020, $n$ was 119. $p$ is the number of chemical elements found in the sample. Theoretically, $p$ can be as large as 94, the number of naturally-occurring chemical elements. In practice, the size of the database is approximately 60 (rocks) x 30 (elements) x 100 (labs). \n\\subsection{Filling missing-values}\nWe start by using the expectation-maximization algorithm. ", "meta": {"hexsha": "c5921b399c0b6fca5713761a377d63e9514a307d", "size": 982, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/Chapters/Methods.tex", "max_stars_repo_name": "mkeutgen/GeoPT-thesis", "max_stars_repo_head_hexsha": "13ef97d5b59d5c029cfede85ce16a78460af52e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/Chapters/Methods.tex", "max_issues_repo_name": "mkeutgen/GeoPT-thesis", "max_issues_repo_head_hexsha": "13ef97d5b59d5c029cfede85ce16a78460af52e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/Chapters/Methods.tex", "max_forks_repo_name": "mkeutgen/GeoPT-thesis", "max_forks_repo_head_hexsha": "13ef97d5b59d5c029cfede85ce16a78460af52e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.7647058824, "max_line_length": 350, "alphanum_fraction": 0.7240325866, "num_tokens": 302, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.839733983715524, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5731803557553339}}
{"text": "In this chapter, we define the actual event detection algorithm. First, we describe the original method used by \\cite{event-detection}. Then, we make a change to incorporate semantic similarity through the word embeddings obtained in \\autoref{chap:data-preprocessing}. Finally, we introduce an alternative algorithm that utilizes word clustering using a custom distance function.\n\nThe original algorithm creates events as sets of related keywords by greedily minimizing a cost function combining temporal and semantical distance between words. However, the paper used only a simple notion of semantical distance, namely the document overlap between words. This demands that there exists at least one document containing all the words used to represent an event. This is a strong requirement, since the documents may use different vocabularies while conveying similar information.  As a result, the events are split into multiple keyword sets, leading to redundancy.\n\nIn an attempt to solve this problem, we modify the cost function, replacing the document overlap by a Word2Vec-based similarity. This does not require the words to appear in exactly the same documents, only that they have similar semantics. We refer to this method as \\textit{embedded greedy approach}, as it is a modification to make the original greedy algorithm utilize word embeddings.\n\nRealizing that the task of constructing keyword sets resembles the task of word clustering, we propose an alternative algorithm. Here, we apply a clustering algorithm to the words, using a modification of the cost function as a distance measure. This is a method referred to as \\textit{cluster-based approach}.\n\nFirst, we briefly describe the original method for reference. This will make it clear which parts of the function we modify. It will also allow us to make reference to the original method in \\autoref{chap:evaluation}, where we compare all three algorithms.\n\n\\section{Original method}\nAs stated in the introduction, the original method performs greedy minimization of a cost function defined over sets of words. The cost function consists of trajectory distance measuring the word distance in temporal domain, and document overlap, standing for distance in the semantic domain. We first describe these two components and then combine them into the cost function.\n\n\\subsection{Trajectory distance}\nBefore measuring the trajectory distance, the trajectories are smoothened by fitting a probability density function to them. We adapt a similar technique in \\autoref{chap:document-retrieval} where it is described in more detail. Our event detection modifications do not use the original smoothing though, and we refer the reader to the original paper for more details.\n\nAfter normalization to unit sum, the (smoothened) trajectory $\\vect{\\trajn}_{w}$ of a word $w$ can be interpreted as a probability distribution over the stream days. The element $\\trajn_{w}(i)$ then denotes the probability that $w$ appears in a random document published on day $i$. 
This interpretation allows us to compare the trajectories using information-theoretic techniques, notably the information divergence \\citep{information-theory}.\n\nIn the original paper, the authors first defined the distance between trajectories of two words $v$ and $w$ as $\\trajdist{\\vect{\\trajn}_{v}}{\\vect{\\trajn}_{w}} = \\max \\left\\{ \\kl{\\vect{\\trajn}_{v}}{\\vect{\\trajn}_{w}}, \\kl{\\vect{\\trajn}_{w}}{\\vect{\\trajn}_{v}} \\right\\}$, symmetrizing the Kullback-Leibler divergence KL \\citep{kl-divergence}.\n\nThen, the distance is generalized to a whole set of words $\\featset$ as\n\n\\begin{equation} \\label{eq:trajectory-distance}\n\t\\text{Dist}( \\featset ) = \\max_{v, w \\in \\featset} \\trajdist{\\vect{\\trajn}_{v}}{\\vect{\\trajn}_{w}}.\n\\end{equation}\n\n\\subsection{Document overlap}\nThe document overlap is again first defined for a pair of words $v$ and $w$ as $\\text{DO}\\left( v, w \\right) = \\frac{| \\featset_{v} \\cap \\featset_{w} |}{\\min \\left\\{ | \\featset_{v} |, | \\featset_{w} | \\right\\}}$, where $\\featset_{j} = \\{ i \\mid \\bowmat_{ij} = 1 \\}$ is the set of all documents containing the word $j$. The higher the document overlap, the more documents the two words have in common, which makes them more likely to be correlated.\n\nThe overlap is again generalized to a set of words $\\featset$ as\n\n\\begin{equation} \\label{eq:document-overlap}\n\t\\text{DO}( \\featset ) = \\min_{v, w \\in \\featset} \\text{DO}( v, w ).\n\\end{equation}\n\n\\subsection{Cost function}\nThe cost function is a combination of the trajectory distance and document overlap of a set of words $\\featset$. It is defined as\n\n\\begin{equation} \\label{eq:cost-function-original}\n\t\\text{C}( \\featset ) = \\frac{\\text{Dist}( \\featset )}{\\text{DO}( \\featset ) \\cdot \\sum_{w \\in \\featset} \\text{DPS}_{w}}.\n\\end{equation}\n\nSince the algorithm attempts to minimize it, the intuitive result is a set of words with low trajectory distance and high document overlap. The algorithm will also prefer words of higher importance due to the last term of the denominator, which sums the power spectra.\n\n\n\\subsection{Event detection algorithm}\nThe algorithm, called \\textit{unsupervised greedy event detection algorithm} in the original paper, produces events as structured objects $e$ consisting of:\n\n\\begin{itemize}\n\t\\item $\\kw{e}$: The event keyword set.\n\t\\item $\\doc{e}$: Documents concerning the event.\n\t\\item $\\bursts{e}$: Bursty periods of the event.\n\t\\item $\\domper{e}$: Dominant period of the event.\n\t\\item $\\annot{e}$: Human-readable annotation of the event.\n\\end{itemize}\n\nOnly the event keyword set $\\kw{e}$ is initialized in this section. 
The other fields will be properly defined and filled in the rest of the thesis.\n\nThe algorithm itself is defined as follows:\n\n\\begin{algorithm}[H]\n\\begin{algorithmic}[1]\n\\caption{Unsupervised greedy event detection}\n\\label{alg:greedy-event-detection}\n\\Input $\\text{Word set} ~ \\text{EW obtained in \\autoref{chap:word-analysis}, matrices } \\bowmat \\text{ and }\\trajmat \\text{, word DPS}$\n\n\\State $\\text{Sort the words in descending DPS order: } \\text{DPS}_{w_{1}} \\geq \\dots \\geq \\text{DPS}_{w_{\\left\\vert \\text{EW} \\right\\vert}}$\n\n\\State $k = 0$\n\n\\ForEach{$w \\in \\text{EW}$}\n\t\\State $k = k + 1$\t\n\t\\State $\\kw{e_{k}} = \\{ w \\}$\n\t\\State $cost_{e_{k}} = \\frac{1}{DPS_{w}}$\n\t\\State $\\text{EW} = \\text{EW} \\setminus w$\n\t\n\t\\While{$\\text{EW} \\neq \\emptyset$}\n\t\t\\State $m = \\argmin\\limits_{m}{\\text{C}( \\kw{e_{k}} \\cup w_{m} )}$\n\n\t\t\\If{$\\text{C}( \\kw{e_{k}} \\cup w_{m} ) < cost_{e_{k}}$}\n\t\t\t\\State $cost_{e_{k}} = \\text{C}( \\kw{e_{k}} \\cup w_{m} )$\n\t\t\t\\State $\\kw{e_{k}} = \\kw{e_{k}} \\cup w_{m}$\n\t\t\t\\State $\\text{EW} = \\text{EW} \\setminus w_{m}$\n\t\t\\Else\n\t\t\t\\Break\n\t\t\\EndIf\n\t\\EndWhile\n\\EndFor\n\n\\Output $\\text{Events} ~ \\{ e_{1}, \\dots, e_{k} \\}$\n\\end{algorithmic}\n\\end{algorithm}\n\nThe algorithm works by greedily minimizing the cost function \\eqref{eq:cost-function-original}. Once it is minimized, an event is produced, consisting of all the words found since the last event.\n\nThe words are sorted in descending DPS order before entering the minimization loop, so that the most important words are processed first. This ensures that the most eventful words are assigned together, without wasting them on low-quality words.\n\n\\cite{event-detection} did not provide the time complexity of the algorithm, which we attempt to estimate now. The execution time is dominated by the main loop on lines 3 through 18. The outer loop must execute $\\mathcal{O}(|\\text{EW}|)$ times. In each of the iterations, the inner loop is executed at most $|\\text{EW}|$ times, making it $\\mathcal{O}(|\\text{EW}|)$ as well. The argmin statement on line 9 must search through the whole remaining $|\\text{EW}|$ words, also making it run $\\mathcal{O}(|\\text{EW}|)$ times.\n\nIf the number of eventful words is low enough, the pairwise trajectory distance and document overlap can be precomputed. This makes the cost function take $\\mathcal{O}(|\\text{M}|^{2})$ time when applied to a set M. If the distances are not precomputed, the cost function execution requires $\\mathcal{O}(|\\text{M}|^{3})$ time.\n\nWe were unable to precisely determine the cost function's complexity with respect to the set EW, as it is always applied to the currently composed event. However, during our experiments, the number of words comprising an event never exceeded 10 in the original method. This makes the cost function's asymptotic complexity negligible compared to the main loop.\n\nThe resulting complexity of the algorithm is therefore $\\mathcal{O}(|\\text{EW}|^{3} \\cdot c)$, where $c$ is the complexity of the cost function.\n\n\nIn \\autoref{fig:original-event}, we show an example of an event detected by the original method. We can see that it consists of four keywords with overlapping trajectories, sharing a common burst of activity around day 210. The event is likely related to the tension in the Middle East, though the keywords by themselves do not allow any closer interpretation. 
In \autoref{fig:original-event}, we show an example of an event detected by the original method. We can see that it consists of four keywords with overlapping trajectories, sharing a common burst of activity around day 210. The event is likely related to the tension in the Middle East, though the keywords by themselves do not allow any closer interpretation. For this reason, we provide longer annotations in \autoref{chap:event-annotation}, so that we can examine the event more closely.


\begin{figure}
  \centering
  \includegraphics{original_event}  % original event
  \caption{Example of an event detected using the original method. The event consists of the words \textit{palestinian}, \textit{israeli}, \textit{Palestinian} and \textit{Israel}.}
  \label{fig:original-event}
\end{figure}


\section{Embedded greedy approach}
In this section, we modify the original method to use the Word2Vec model to measure semantic similarity between words. Unlike the document overlap \eqref{eq:document-overlap}, this new similarity measure is able to recognize semantically similar words even when they do not appear in the same documents. This may happen, for instance, when different authors each use distinct vocabulary when referring to the same event.


\subsection{Semantic similarity}
Some of the astounding results of the Word2Vec model arise from semantically similar words forming clusters \citep{linguistic-regularities} in terms of cosine similarity, which is a standard measure used in information retrieval \citep{information-retrieval, cosine-similarity}.

We replace the document overlap in the cost function \eqref{eq:cost-function-original} by the cosine similarity between Word2Vec embeddings, though with a small modification. The cosine similarity is bounded in $[-1, 1]$, with $-1$ denoting the least degree of similarity. The cost function could therefore reach negative values for highly dissimilar words, which would be a problem, as Algorithm \ref{alg:greedy-event-detection} attempts to minimize it. Consequently, we transform the cosine similarity into $[0, 1]$, the same range as the document overlap \eqref{eq:document-overlap}.

The similarity between a set of words $\featset$ and a word $w \notin \featset$ is defined as

\begin{equation}
	\semsim{\featset}{w} = \frac{1}{2} \left( \frac{\inp[\big]{\bar{\embed}_{\featset}}{\embed_{w}}}{\| \bar{\embed}_{\featset} \| \cdot \| \embed_{w} \|} + 1 \right),
\end{equation}

where $\bar{\embed}_{\featset}$ is the mean of all vector embeddings of $\featset$ and $\embed_{w}$ is the vector embedding of $w$. Here, the mean vector serves as a representative of the central topic of the words in $\featset$.


\subsection{Cost function}
We redefine the cost function \eqref{eq:cost-function-original} as

\begin{equation} \label{eq:cost-function}
	\cost{\featset}{w} = \frac{\text{Dist}( \featset \cup \{ w \} )}{\semsim{\featset}{w} \cdot \sum_{u \in \featset \cup \{ w \}}{\text{DPS}_{u}}},
\end{equation}

where $\text{Dist}(\cdot)$ is the original trajectory distance function \eqref{eq:trajectory-distance}.

In the original method, \cite{event-detection} defined the cost function \eqref{eq:cost-function-original} for a set of words. However, in Algorithm \ref{alg:greedy-event-detection}, it is always applied to the union of the keywords of the event constructed so far and a newly added word. Due to the way the semantic similarity is defined, the new cost function must take such a keyword set and the candidate word as separate arguments.
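A minimal sketch of the modified similarity and cost follows, again purely illustrative: embeddings are assumed to be NumPy arrays, \texttt{set\_dist} is the trajectory-distance helper sketched earlier, and the names are ours.

\begin{verbatim}
import numpy as np

def sem_sim(emb_set, emb_w):
    # Cosine similarity between the mean embedding of the keyword
    # set and the candidate word, rescaled from [-1, 1] to [0, 1].
    m = np.mean(emb_set, axis=0)
    cos = m @ emb_w / (np.linalg.norm(m) * np.linalg.norm(emb_w))
    return (cos + 1.0) / 2.0

def embedded_cost(trajs, traj_w, emb_set, emb_w, dps_sum):
    # C(F, w): trajectory distance of F extended by w, divided by
    # the rescaled similarity and the summed DPS of F and w.
    return set_dist(trajs + [traj_w]) / (sem_sim(emb_set, emb_w) * dps_sum)
\end{verbatim}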
Having constructed the cost function, we use Algorithm \ref{alg:greedy-event-detection} to detect events once again.

In \autoref{fig:greedy-event}, we show an event detected using the embedded greedy method. It is related to the same real-world event as the event in \autoref{fig:original-event}, though it consists of more keywords. Generally, events detected by the embedded greedy method contain more keywords, as we will see in \autoref{tab:events-overview}. The trajectory overlap is not perfect; the burst of the word \textit{American} is shifted compared to the rest of the words. It is possible that the semantic similarity of the word was so high that the word was deemed relevant nonetheless.

\begin{figure}
  \centering
  \includegraphics{greedy_event}  % greedy event
  \caption{Example of an event detected using the embedded greedy method. The event consists of the words \textit{rebel, missile, Israel, israeli, American} and \textit{humanitarian}, and is related to the same real event as \autoref{fig:original-event}.}
  \label{fig:greedy-event}
\end{figure}


\section{Cluster-based approach}
Realizing that keyword-based event detection resembles word clustering and could be solved by applying a clustering algorithm, we decided to investigate this idea. In the final method, we apply a clustering algorithm equipped with a custom distance function to the set of eventful words. The distance function is once again a modification of the cost function, though some adjustments have to be made for it to be usable in this context. First, though, we need to choose a suitable clustering algorithm.

The obvious requirement for the clustering algorithm is that it must require no prior knowledge of the desired number of clusters. Another requirement is that the algorithm must accept a custom distance measure.

We considered three candidate algorithms, all satisfying these requirements: Affinity propagation \citep{affinity-propagation}, DBSCAN \citep{dbscan} and its modification, HDBSCAN \citep{hdbscan}.

During our experimentation, Affinity propagation performed poorly, its clusters often being seemingly random and of low quality. The quality of HDBSCAN clusters was considerably better, though the algorithm took longer to converge as the number of eventful words grew. It also required tuning multiple parameters, which was difficult to do without any annotated data. We decided to use the DBSCAN algorithm, which outperformed Affinity propagation as well, and does not require tuning as many parameters as HDBSCAN.

In addition to the previously stated requirements, DBSCAN is also capable of filtering out noisy samples (in our case, words) that do not fit any of the clusters. This property will prove advantageous for our task, as will become clear during the evaluation in \autoref{sec:noise-evaluation}.
\subsection{Noise filtering} \label{subsec:noise-filtering}
Before we apply clustering, we filter out the noisy parts of the word trajectories. Most words are mentioned at some level all the time, though only a fraction of these mentions corresponds to notable events. Unlike the greedy optimization described previously, clustering is sensitive to such noise and would yield clusters of poor quality, often grouping trajectories together only because their noisy parts are similar. Additionally, with DBSCAN capable of filtering out noisy samples, some high-quality words could be discarded precisely due to this noise in their (otherwise eventful) trajectories.

We want to keep only those trajectory parts exceeding a certain frequency level, distinguishing notable bursts from the general noise. We do this by computing a cutoff value for each word trajectory and discarding the parts falling below this cutoff. This procedure is adopted from \cite{online-search-queries}. The algorithm is based on computing a moving average along the trajectory, and works as follows:

\begin{algorithm}[H]
\begin{algorithmic}[1]
\caption{Burst filtering}
\label{alg:burst-filtering}
\Input $\text{Window length} \ l,\ \text{word trajectory} \ \vect{\traj}_{w}$

\State $\vect{MA}_{l} = \text{Moving average of length} ~ l ~ \text{for} ~ \vect{\traj}_{w} = \left[ \traj_{w}(1), \traj_{w}(2), \dots, \traj_{w}(\streamlen) \right]$

\State $\mathit{cutoff} = \text{mean} \left( \vect{MA}_{l} \right) + \text{std} \left( \vect{MA}_{l} \right)$

\State $\vect{bursts}_{w} = \left[ \traj_{w}(t) \mid \traj_{w}(t) > \mathit{cutoff} \right]$

\Output $\vect{bursts}_{w}$
\end{algorithmic}
\end{algorithm}

We use the window length $l = 7$, looking one week back in the trajectory.
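As an illustration, Algorithm \ref{alg:burst-filtering} can be sketched in a few lines of Python. The trajectory is assumed to be a NumPy array; returning the time indices alongside the kept values is our own choice, so that the bursty periods remain identifiable.

\begin{verbatim}
import numpy as np

def filter_bursts(traj, l=7):
    # Length-l moving average along the trajectory.
    ma = np.convolve(traj, np.ones(l) / l, mode='valid')
    cutoff = ma.mean() + ma.std()
    # Keep only the values above the cutoff, with their time indices.
    return [(t, y) for t, y in enumerate(traj) if y > cutoff]
\end{verbatim}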
\subsection{Distance function}
We now define the distance function used by DBSCAN. It conveys similar information to the cost function used in the previous two methods. We still need to measure the trajectory distance as well as the semantic similarity between words, though the distance will now be defined strictly pairwise.

As a measure of trajectory distance, we replace the Kullback-Leibler divergence by the Jensen-Shannon divergence JSD \citep{js-divergence-1}, which is symmetric in its arguments. This is a necessary property of the distance function.

Although \cite{event-detection} did symmetrize the Kullback-Leibler divergence, they did not provide any source for their symmetrization form. We were unable to find a case where that particular form was used, though we discovered the Jensen-Shannon divergence, which rests on a stronger mathematical background \citep{js-divergence-1, js-divergence-2}. It also tended to improve the clustering quality during our experimentation, compared to the original symmetrization. We therefore decided to replace the original paper's KL-divergence symmetrization by the JS-divergence.

Instead of semantic \textit{similarity}, we measure semantic \textit{distance} as the Euclidean distance between two word vector embeddings. The reason is that the Euclidean distance is unbounded, which makes it possible for the samples to be spread farther apart. Since DBSCAN is a density-based clustering algorithm, a bounded semantic term would create high-density areas of words with low trajectory distance and merely similar cosine similarities, causing them to appear in the same cluster. This would cluster the words only in terms of their trajectories, not their semantics.

The distance between two words $v$ and $w$ with trajectories $\vect{\trajn}_{v},\ \vect{\trajn}_{w}$ (normalized and filtered using Algorithm \ref{alg:burst-filtering}) and Word2Vec embeddings $\embed_{v},\ \embed_{w}$ is now defined as

\begin{equation}
	\distfunc{v}{w} = \jsd{\vect{\trajn}_{v}}{\vect{\trajn}_{w}} \cdot \| \embed_{v} - \embed_{w} \|_{2},
\end{equation}

where $\jsd{\vect{p}}{\vect{q}} = \frac{1}{2} \left( \kl{\vect{p}}{\vect{m}} + \kl{\vect{q}}{\vect{m}} \right)$ with $\vect{m} = \frac{1}{2} \left( \vect{p} + \vect{q} \right)$.
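A sketch of this pairwise distance in Python, for illustration only. The filtered trajectories are assumed to be kept at full length, with values below the cutoff set to zero, and renormalized to sum to one; the JSD handles such zero entries gracefully.

\begin{verbatim}
import numpy as np

def js_divergence(p, q):
    # JSD(p, q) = (KL(p || m) + KL(q || m)) / 2, with m = (p + q) / 2.
    m = (p + q) / 2.0
    def kl(a, b):
        mask = a > 0  # terms with a = 0 contribute nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return (kl(p, m) + kl(q, m)) / 2.0

def word_distance(traj_v, traj_w, emb_v, emb_w):
    # d(v, w): trajectory divergence scaled by embedding distance.
    return js_divergence(traj_v, traj_w) * np.linalg.norm(emb_v - emb_w)
\end{verbatim}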
\subsection{Event detection}
Now, we describe the cluster-based event detection algorithm, which is a direct application of the DBSCAN algorithm followed by the removal of the noisy words.

\begin{algorithm}[H]
\begin{algorithmic}[1]
\caption{Cluster-based event detection}
\Input $\text{Word set EW obtained in \autoref{chap:word-analysis}, matrix } \trajmat \text{, word embeddings for EW}$

\State Precompute a distance matrix $\distmat \in \R^{\left\vert \text{EW} \right\vert \times \left\vert \text{EW} \right\vert}$ with $\distmat_{ij} = \distfunc{w_{i}}{w_{j}}$

\State Apply DBSCAN to $\distmat$, obtaining $k$ clusters and the noisy cluster

\ForEach{$(w, cluster) \in \text{DBSCAN.clusters}$}
	\If{$cluster \neq noise$}
		\State $\kw{e_{cluster}} = \kw{e_{cluster}} \cup \{ w \}$
	\EndIf
\EndFor

\Output $\text{Events} ~ \{ e_{1}, e_{2}, \dots, e_{k} \}$
\end{algorithmic}
\end{algorithm}

In \autoref{fig:cluster-event}, we show an event detected using the cluster-based method. It is again related to the same real event as those in \autoref{fig:original-event} and \autoref{fig:greedy-event}. The main thing to note is that the word trajectories are clear of noise due to the application of Algorithm \ref{alg:burst-filtering}. This made it possible to match words based only on their bursts, not on any underlying noise. Compared to the event depicted in \autoref{fig:greedy-event}, the cleaned trajectories overlap almost perfectly.

\begin{figure}
  \centering
  \includegraphics{cluster_event}  % cluster event
  \caption{Example of an event detected using the cluster-based method. The event is related to the same real event as \autoref{fig:original-event} and \autoref{fig:greedy-event}.}
  \label{fig:cluster-event}
\end{figure}
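For completeness, a sketch of how the clustering step might be realized with scikit-learn's DBSCAN implementation. The parameter values \texttt{eps} and \texttt{min\_samples} are placeholders that would have to be tuned; the word list and the precomputed distance matrix come from the distance function above.

\begin{verbatim}
from sklearn.cluster import DBSCAN

def cluster_events(words, dist_matrix, eps=0.5, min_samples=3):
    # DBSCAN over a precomputed distance matrix; the label -1
    # marks the noisy words, which are discarded.
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric='precomputed').fit_predict(dist_matrix)
    events = {}
    for word, label in zip(words, labels):
        if label != -1:
            events.setdefault(label, set()).add(word)
    return list(events.values())
\end{verbatim}

The returned keyword sets would then initialize the $\kw{e}$ fields of the detected events.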
{"text": "A number of different numerical abstract domains have been studied in the literature, and they can be classified with respect to a number of different dimensions: finite versus infinite height, relational versus non-relational, convex versus possibly non-convex, and so on. \n\nThe computational cost raises when lifting from finite non\u2013relational domains like \\textit{Sign} or \\textit{Parity}, to \ninfinite non\u2013relational domains like \\textit{Intervals}, to sophisticated infinite relational domains like \\textit{Octagons} \\cite{MIN06}, \\textit{Polyhedra} \\cite{CH78}, \\textit{Pentagons} \\cite{LF08}, and \\textit{Stripes} \\cite{F08}, or to donut-like non-convex domains \\cite{G12}. Moreover, when considering possibly non-convex disjunctive domains, as obtained through the powerset operator \\cite{FR99}, the complexity of the analysis is growing (as well as its accuracy) in a full orthogonal (exponential) way.\n\nNoticeable efforts have been put both to reduce the loss of precision due to the upper bound operation, and to accelerate the convergence of the Kleene iterative algorithm. Some ways to reduce the space dimension in polyhedra computations relying on variable elimination and Cartesian factoring are introduced in \\cite{HMG06}.  \nSeladji and Bouissou \\cite{SB13} designed refinement tools based on convex analysis to express the convergence of convex sets using support functions, and on numerical analysis to accelerate this convergence applying sequence transformations. \nOn the other hand, Sankaranarayanan et al. \\cite{SISG06} faced the issue of reducing the computational cost of the analysis using a powerset domain, by adopting restrictions based on \\textquotedblleft on the fly elaborations\\textquotedblright\\ of the program's control flow graph. Efficiency issues about convergence acceleration by widening in the case of a powerset domain have been studied by Bagnara et al. in \\cite{BHZ07}. All these domains do not track disjunctive information.\n\nFinally, the trace partitioning technique designed by Mauborgne and Rival \\cite{MR05} provides automatic procedures to build suitable partitions of the traces yielding to a refinement that has great impact both on the accuracy and on the efficiency of the analysis. This approach tracks disjunctive information, and it works quite well when the single partitions are carefully designed by an expert user. Unluckily, given the high number of hypercubes tracked by our analysis, this approach is definitely too slow for the scenario we are targeting.\n\nThe Parametric Hypercubes proposal presented in this paper can be seen as a selection and a combination of most of these techniques, tailored to get a solution that properly suits the features of Computer Games Software applications. In this scenario, we had to track precisely a lot of disjunctive information. Therefore, we needed to introduce a targeted domain \\adomain\\ for physics simulations. On the other hand, some of the domain's features (like width parameter tuning, and interval offsets) may also be applied in other domains. \n\n\\subsection{Future work}\n\nWe observe that our approach offers plenty of venues in order to improve its results, thanks to its flexible and parametric nature. 
In particular, we could: (i) increase the precision by intersecting our hypercubes with arbitrary bounding volumes which restrict the relationships between variables (the offsets presented in Section \\ref{sec:tuning} are the simplest version of this extension); (ii) increase the performance of Algorithm \\ref{alg:widthAdjusting} by halving the widths only on some axes, chosen through an analysis of the distribution of hypercubes in the \\emph{yes,no,maybe} sets; and (iii) study the derivative with respect to time of the iterations of the main loop in order to define temporal trends to refine the widening operator.\nIn addition, our domain is modular w.r.t. the non-relational abstract domain adopted to represent the hypercube dimensions. By using other abstract domains it is possible to track relationships between variables which do not necessarily represent physical quantities.\n\n", "meta": {"hexsha": "4dc34ec34f7ebf1c4ca7411552d223a36a364eca", "size": 4108, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "12. Hypercubes Domain/HC/Sections/versione SAS2013/7_related_work.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/7_related_work.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/7_related_work.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 216.2105263158, "max_line_length": 751, "alphanum_fraction": 0.8162122687, "num_tokens": 871, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8397339556397749, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.5731803311692562}}
{"text": "\\chapter{Noise in Detectors}\n\\textit{Poisson noise, shot noise, and photon noise} are often used interchangeably.  Shot noise in simple term, is the noise associated with the random arrival of discrete electrons, and photon noise is often similarly ascribed to the random arrival of discrete photons.  photon noise can also be described as shot noise with photo-electrons.  So photons arrive discretely, which create shot noise, discrete electrons created by discrete arrival of photons.  Assume both photo-electrons and photons are Poisson random processes.\n", "meta": {"hexsha": "fe08c8d78e04870729a824e2a2d33fdf5ab62798", "size": 559, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/noise_in_detectors.tex", "max_stars_repo_name": "hfan36/dissertation", "max_stars_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/noise_in_detectors.tex", "max_issues_repo_name": "hfan36/dissertation", "max_issues_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/noise_in_detectors.tex", "max_forks_repo_name": "hfan36/dissertation", "max_forks_repo_head_hexsha": "5d755c96cf6cbece2c382789015e9db9ceb02da7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 186.3333333333, "max_line_length": 529, "alphanum_fraction": 0.8157423971, "num_tokens": 115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8438951025545425, "lm_q2_score": 0.679178692681616, "lm_q1q2_score": 0.5731555725134124}}
{"text": "\n\\subsection{Representing real numbers}\n\nstore as two integers \\((x,y)\\)\nevaluate as \\(x*2^y\\)\nthis is binary floating point\nthis means you get inaccuracies\neg \\((0.1+0.2-0.3)*10^20\\) is not zero\nalternative is decimal floating point\nstore as \\(x*10^y\\)\n\\subsection{Addition, sub, mult, div of real}\n\n\\subsection{Powers, log, exp}\n\n\\subsection{Floor and ceiling}\n\n", "meta": {"hexsha": "24f1cdb171c5dfc682c0ca0754b10574cffb4beb", "size": 364, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/computer/numbersReal/01-01-real.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/computer/numbersReal/01-01-real.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/computer/numbersReal/01-01-real.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.4117647059, "max_line_length": 45, "alphanum_fraction": 0.7252747253, "num_tokens": 112, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8354835371034368, "lm_q2_score": 0.6859494678483918, "lm_q1q2_score": 0.5730994876721947}}
{"text": "\\subsubsection{dataObjectLabelFilter}\n\\label{dataObjectLabelFilter}\n\nThis PostProcessor allows to filter the portion of a dataObject, either PointSet or HistorySet, with a given clustering label.\nA clustering algorithm associates a unique cluster label to each element of the dataObject (PointSet or HistorySet).\nThis cluster label is a natural number ranging from $0$ (or $1$ depending on the algorithm) to $N$ where $N$ is the number of obtained clusters.\nRecall that some clustering algorithms (e.g., K-Means) receive $N$ as input while others (e.g., Mean-Shift) determine $N$ after clustering has been performed.\nThus, this Post-Processor is naturally employed after a data-mining clustering techniques has been performed on a dataObject so that each clusters\ncan be analyzed separately.\n\n\\ppType{dataObjectLabelFilter}{dataObjectLabelFilter}\n\nIn the \\xmlNode{PostProcessor} input block, the following XML sub-nodes are required,\nindependently of the \\xmlAttr{subType} specified:\n\n\\begin{itemize}\n   \\item \\xmlNode{label}, \\xmlDesc{string, required field}, name of the clustering label\n   \\item \\xmlNode{clusterIDs}, \\xmlDesc{integers, required field}, ID of the selected clusters. Note that more than one ID can be provided as input\n\\end{itemize}\n", "meta": {"hexsha": "808d4b607ba5aad55965beb4dc9cb0e59a655f89", "size": 1252, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/PostProcessors/dataObjectLabelFilter.tex", "max_stars_repo_name": "FlanFlanagan/raven", "max_stars_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-10T18:54:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T18:54:09.000Z", "max_issues_repo_path": "doc/user_manual/PostProcessors/dataObjectLabelFilter.tex", "max_issues_repo_name": "FlanFlanagan/raven", "max_issues_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/PostProcessors/dataObjectLabelFilter.tex", "max_forks_repo_name": "FlanFlanagan/raven", "max_forks_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.6, "max_line_length": 158, "alphanum_fraction": 0.7947284345, "num_tokens": 299, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8354835452961425, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5730994718364261}}
{"text": "\\documentclass{article}\n\n\\usepackage[hidelinks]{hyperref}\n%\\usepackage[capitalize]{cleveref}\n\\usepackage{tikz}\n\\usepackage[all]{xy}\n%\\usepackage[foot]{amsaddr}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm}\n\\usepackage{xcolor}\n\\usepackage{tabularx}\n\\usepackage{booktabs}\n\n\\newcommand{\\SM}[1]{\\color{blue}#1\\color{black}}\n\\newcommand{\\ZZ}[1]{\\color{cyan}#1\\color{black}}\n\\newcommand{\\AS}[1]{\\color{red}#1\\color{black}}\n\n\\newcommand{\\Q}{\\ensuremath{\\mathbb Q}}\n\\newcommand{\\Z}{\\ensuremath{\\mathbb Z}}\n\\newcommand{\\Fp}{\\ensuremath{\\mathbb F_p}}\n\\newcommand{\\End}{\\ensuremath{\\text{End}}}\n\n\\theoremstyle{definition}\n\\newtheorem*{remark}{Remark}\n\n\\title{Bandersnatch: a fast elliptic curve built over the BLS12-381\n  scalar field}\n\\author{Simon Masson${}^\\text{1}$, Antonio Sanso${}^\\text{2,3}$ and\n  Zhenfei Zhang${}^\\text{2}$\\\\\n  \\ \\\\\n  ${}^\\text{1}$\\href{https://anoma.network}{Anoma}\\\\\n  ${}^\\text{2}$Ethereum Foundation\\\\\n  ${}^\\text{3}$Ruhr Universit{\\\"a}t Bochum\\\\\n}\n\\date{}\n\n\\makeatletter\n\\newcommand{\\verbatimfont}[1]{\\renewcommand{\\verbatim@font}{\\ttfamily#1}}\n\\makeatother\n\n\\begin{document}\n\n\\verbatimfont{\\small}%\n\n\\maketitle\n\\medskip\n\\begin{abstract}\n In this short note, we introduce Bandersnatch, a new elliptic curve\n built over the BLS12-381 scalar field. The curve is equipped with an efficient\n endomorphism, allowing a fast scalar multiplication algorithm.\n Our benchmark shows that the multiplication is 42\\% faster, \n and 21\\% reduction in terms of circuit size in the \n form of rank 1 constraint systems (R1CS),\n compared to another curve, called Jubjub, having similar\n properties. Nonetheless, Bandersnatch does not provide any\n performance improvement for multi scalar multiplications, \n compared to the Jubjub curve.\n\\end{abstract}\n\n\\section{Introduction}\n%BLS12-381\nBLS12-381 is a pairing-friendly curve created by Sean\nBowe~\\href{https://electriccoin.co/blog/new-snark-curve/}{here} in 2017.\nCurrently, BLS12-381 is undergoing a standardization process \nfrom the\nIRTF Crypto Forum Research Group, and is\nuniversally used for digital\nsignatures and zero-knowledge proofs by many projects orbiting in the\nblockchain universe: Zcash, Ethereum 2.0, \\href{https://anoma.network}{Anoma}, Skale, Algorand, Dfinity,\nChia, and more.\n%jubjub\nThe ZCash team\nintroduced Jubjub~\\href{https://z.cash/technology/jubjub/}{here}, an\nelliptic curve built over the BLS12-381 scalar field $\\mathbb\nF_{r_\\text{BLS}}$.\nThis curve is not pairing-friendly, but leads to constructions where\n$\\mathbb F_{r_\\text{BLS}}$ arithmetic circuits can be manipulated\nusing the BLS12-381 curve.\nThe Jubjub curve can be represented in the twisted Edwards\ncoordinates, allowing efficiency inside zk-SNARK circuits.\nIn order for some cryptographic applications to scale, it is necessary to\nhave efficient scalar multiplication on the non-pairing-friendly\ncurve.\nThe main drawback of Jubjub is the slow scalar multiplication\nalgorithm compared, for example, with the ``Bitcoin curve''\n(SECP256k1).\nIt comes from the fact that the curve does not have an efficiently\ncomputable endomorphism, necessary for computing scalar\nmultiplications using the GLV method~\\cite{C:GalLamVan01} (a technique\nprotected by a US patent until Sep 2020~\\cite{glvpatent}, but that \nexpired and is\nfreely usable now).\n\n\\paragraph{Our contribution.}\nThe Jubjub curve is a curve with a large 
discriminant, meaning that\nthe GLV method is not possible on this curve.\nWe performed an exhaustive search of curves of small discriminant,\ndefined over the BLS12-381 scalar field. This way, we obtain an\nelliptic curve using the Complex Multiplication\nmethod~\\cite{MC:AtkMor93}, where the scalar multiplication algorithm\nis efficient thanks to the GLV method~\\cite{C:GalLamVan01}.\n\nWe implement this curve in Rust, using\nthe~\\href{http://arkworks.rs}{Arkworks framework}, and release our\ncode to the open domain~\\cite{bandersnatch-rust}.\nTable~\\ref{tab:comp} shows a comparison of Bandersnatch curve\nand Jubjub curve. \nDetails deferred to Section~\\ref{sec:comparison}.\n\n\\begin{table}[ht] %\\small\n  \\centering\n  \n  \\begin{tabular}{|l|c|c|}\\hline\n      & multiplication cost & R1CS constraints  \\\\\\hline\\hline\n    Jubjub & 75 $\\mu$s  & 3177 \\\\\\hline\n    Bandersnatch & 44 $\\mu$s  & 2621 \\\\\\hline\\hline   \n    Improvement & 42\\% & 21\\%\\\\\\hline\n  \\end{tabular}\n  \\caption{Bandersnatch vs Jubjub}\n  \\label{tab:comp}\n\\end{table}\n\nWe also report the number of \n% , in terms of both group multiplications and \n% the number of \nconstraints one needs to express a group multiplication\nin the form of rank one constraint system (R1CS), \na common approach for expressing circuits for zero-knowledge \nproof systems. \nA group multiplication takes \n2621\nconstraints when the point is in affine form over the \ntwisted Edwards curve,\n%, and \n%2361\n%constraints when the point is in the projective form\n%over the short Weierstrass from.\n%Both figures \n% This matches what we have for Jubjub curve.\nyielding a 21\\% improvement over the Jubjub curve.\n\n\\paragraph{Organization of the paper.}\nIn Section~\\ref{sec:small-disc-curves}, we describe how we obtained\nseveral curves allowing the GLV method together with cryptographic\nsecurity.\nThen, we introduce in Section~\\ref{sec:bandersnatch} the Bandersnatch\ncurve in different models (in Weierstrass, Montgomery and twisted Edwards\ncoordinates).\nFinally, we compare the scalar multiplication algorithm over\nthe Bandersnatch and the Jubjub curves in\nSection~\\ref{sec:comparison} from a practical point of view.\n\n\n\\section{Small discriminant curves}\\label{sec:small-disc-curves}\n\nThe GLV method~\\cite{C:GalLamVan01} is a well known trick for accelerating\nscalar multiplication over particular curve. In a nutshell, it applies\nto elliptic curves where an endomorphism $\\psi$ can be efficiently computed.\nThe GLV method applies in particular for $j$-invariant $j=0$\n(resp. $j=1728$) curves because a non-trivial automorphism can be\ncomputed using only one modular multiplication. % (see Example~\\ref{ex:psi-j0}).\nThe method also applies for other curves where the endomorphism is\nslightly more expensive, called \\emph{small discriminant} curves.\n\n% notations\nLet $E$ be an elliptic curve defined over $\\Fp$ of trace $t$. $E$ and\nits quadratic twist $E^t$ are $\\mathbb F_{p^2}$-isomorphic curves and\ntheir orders over $\\Fp$ are closely related with the trace\n$t$:\n$$\\#E(\\Fp) = p+1-t\\qquad \\#E^t(\\Fp) = p+1+t.$$\nSee~\\cite{Silverman86} for a complete introduction to elliptic curves.\nIn this work, we are looking for cryptographic applications based on\nordinary elliptic curves, meaning that we look for $t\\not\\equiv 0\n\\bmod p$. 
The endomorphism ring of these curves have a particular\nstructure: $\\End(E)$ is an order of the imaginary quadratic field\n$\\Q(\\sqrt{t^2-4p})$.\nFrom now, we denote $-D$ to be the discriminant of $\\End(E)$, and\n$\\{\\text{Id},\\psi\\}$ a basis of the endomorphism ring.\nThe fundamental discriminant corresponds to the discriminant of the\nmaximal order containing $\\End(E)$.\nThis way, $\\psi$ is of degree $\\frac{D+1}4$ or $D/4$ depending on the\nvalue of $D$ modulo $4$, and $\\psi$ can be defined using polynomials\nof degree $O(D)$ thanks to the V\u00e9lu's formulas~\\cite{velu71}.\nThus, the evaluation of $\\psi$ is efficient only for curves of small\ndiscriminant.\n\nIn this work, we restrict to curves defined over the BLS12-381 scalar\nfield $\\mathbb F_{r_\\text{BLS}}$. From now, we denote $p=r_\\text{BLS}$\nand we look for curves with a $128$-bit cryptographic security.\nCurves with $-D=-3$ and $-4$ do not have a large prime subgroup\ndefined over $\\Fp$.\nHence, we look for small discriminant $-D<-4$ curves with subgroup and\ntwist-subgroup security. It means that $\\#E(\\Fp)$ has a roughly 256-bit\nprime factor, as well as $\\#E^t(\\Fp)$.\n\nAs the endomorphism cost is closely related to the discriminant, we\nrestrict to $-D \\geq -388$ so that $\\psi$ can be efficiently computed.\nMoreover, we restrict on fundamental discriminants (discriminants\nof the maximal orders of imaginary quadratic fields). We denote\n$\\mathcal O_{-D}$ the maximal order of discriminant $-D$. Elliptic\ncurves with $\\End(E) \\subset \\mathcal O_{-D}$ are isogenous curves,\nmeaning that there is a rational map between them. Isogenous curves\nhave the same order so that we can restrict on fundamental\ndiscriminants for our search.\n\nWe compute an exhaustive search among all the possible (fundamental)\ndiscriminants ($-292 \\leq -D \\leq -3$).\nGiven a discriminant $-D$, roughly half of the curves are\nsupersingular and hence not relevant to our cryptographic\napplications.\nWe list in Table~\\ref{tab:group-order-factorization} the ordinary\ncurves we obtained. 
In this table, $p_n$ denotes a prime of $n$ bits.\nThe generation of these curves is reproducible\nusing~\\href{https://github.com/asanso/Bandersnatch/blob/main/python-ref-impl/small-disc-curves.py}{this\n  file}.\nWe finally obtain an interesting curve for $-D = -8$ with large prime\norder subgroups on both the curve and its twist.\nWe present in Section~\\ref{sec:bandersnatch} the curve in several\nmodels, together with the endomorphism in order to apply the GLV\nscalar multiplication algorithm.\n\n\\begin{table*}[!ht]\n    \\centering\\footnotesize\n    \\begin{tabularx}{\\textwidth}{ccl}\n        \\toprule                            \n        $\\mathbf{-D}$    & \\textbf{Curve sec.}  & \\textbf{Curve order} \\\\\n        \\midrule        \n$-3$ & $65$-bit & $2^{2}  \\cdot 3  \\cdot 97  \\cdot 19829809  \\cdot 2514214987  \\cdot 423384683867248993  \\cdot p_{131}$\\\\\n & $14$-bit & $2^{64}  \\cdot 906349^{4}  \\cdot p_{28}^{4}$\\\\\n & $77$-bit & $7  \\cdot 43  \\cdot 1993  \\cdot 2137  \\cdot 43558993  \\cdot 69032539613749  \\cdot p_{154}$\\\\\n & $41$-bit & $3  \\cdot 7  \\cdot 13  \\cdot 79  \\cdot 2557  \\cdot 33811\n        \\cdot 1645861201  \\cdot 75881076241177 \\cdot$\\\\\n &          & $86906511869757553  \\cdot p_{82}$\\\\\n & $13$-bit & $3^{2}  \\cdot 11^{2}  \\cdot 19^{2}  \\cdot 10177^{2}  \\cdot 125527^{2}  \\cdot 859267^{2}  \\cdot 2508409^{2}  \\cdot 2529403^{2}  \\cdot p_{26}^{2}$\\\\\n & $118$-bit & $836509  \\cdot p_{236}$\\\\\n$-4$ & $59$-bit & $2^{32}  \\cdot 5  \\cdot 73  \\cdot 906349^{2}  \\cdot 254760293^{2}  \\cdot p_{119}$\\\\\n & $37$-bit & $2^{2}  \\cdot 29  \\cdot 233  \\cdot 34469  \\cdot\n        1327789373  \\cdot 19609848837063073 \\cdot$\\\\\n &          & $159032890827948314857  \\cdot p_{74}$\\\\\n & $37$-bit & $2  \\cdot 3^{2}  \\cdot 11^{2}  \\cdot 13  \\cdot 1481  \\cdot 10177^{2}  \\cdot 859267^{2}  \\cdot 52437899^{2}  \\cdot 346160718017  \\cdot p_{74}$\\\\\n & $57$-bit & $2  \\cdot 5  \\cdot 19^{2}  \\cdot 1709  \\cdot 125527^{2}  \\cdot 2508409^{2}  \\cdot 2529403^{2}  \\cdot p_{114}$\\\\\n$\\mathbf{-8}$ & $\\mathbf{122}$\\textbf{-bit} & $\\mathbf{2^{7}  \\cdot 3^{3}  \\cdot p_{244}}$\\\\\n & $\\mathbf{126}$\\textbf{-bit} & $\\mathbf{2^{2}  \\cdot p_{253}}$\\\\\n$-11$ & $69$-bit & $5  \\cdot 191  \\cdot 5581  \\cdot 18793  \\cdot 48163  \\cdot 46253594704380463613  \\cdot p_{138}$\\\\\n & $73$-bit & $3^{3}  \\cdot 11^{2}  \\cdot 9269797  \\cdot 17580060420191283788101  \\cdot p_{147}$\\\\\n$-19$ & $110$-bit & $7  \\cdot 11^{2}  \\cdot 19  \\cdot 23  \\cdot 397  \\cdot 419  \\cdot p_{220}$\\\\\n & $74$-bit & $3^{2}  \\cdot 5  \\cdot 503  \\cdot 10779490483  \\cdot 433275286013779991  \\cdot p_{149}$\\\\\n$-24$ & $53$-bit & $2^{2}  \\cdot 3^{2}  \\cdot 7  \\cdot 19^{2}  \\cdot 127  \\cdot 29402034080953  \\cdot 2970884754778276642175743  \\cdot p_{106}$\\\\\n & $86$-bit & $2^{5}  \\cdot 5  \\cdot 39628279  \\cdot 1626653036429383  \\cdot p_{172}$\\\\\n$-51$ & $112$-bit & $3^{2}  \\cdot 5  \\cdot 61223923  \\cdot p_{224}$\\\\\n & $120$-bit & $23^{2}  \\cdot 41  \\cdot p_{241}$\\\\\n$-67$ & $67$-bit & $3479887483  \\cdot 56938338857  \\cdot 8474085246072233  \\cdot p_{135}$\\\\\n & $79$-bit & $3^{2}  \\cdot 8478452882270519617659314063  \\cdot p_{159}$\\\\\n$-88$ & $61$-bit & $2^{2}  \\cdot 11  \\cdot 16984307  \\cdot 24567897636186592260640293583411  \\cdot p_{122}$\\\\\n & $66$-bit & $2^{9}  \\cdot 3^{2}  \\cdot 31  \\cdot 6133  \\cdot 116471  \\cdot 69487476515565975361139  \\cdot p_{133}$\\\\\n$-132$ & $73$-bit & $2  \\cdot 1753  \\cdot 
101235113104036296384208928969  \\cdot p_{147}$\\\\\n & $92$-bit & $2  \\cdot 3^{2}  \\cdot 7^{2}  \\cdot 11  \\cdot 23  \\cdot 587  \\cdot 701  \\cdot 32299799971  \\cdot p_{184}$\\\\\n$-136$ & $62$-bit & $2^{3}  \\cdot 7^{3}  \\cdot 19^{3}  \\cdot 10939  \\cdot 11131315086725327441688173207  \\cdot p_{125}$\\\\\n & $87$-bit & $2^{2}  \\cdot 5  \\cdot 5741  \\cdot 30851  \\cdot 533874022134253  \\cdot p_{175}$\\\\\n$-228$ & $114$-bit & $2  \\cdot 3^{2}  \\cdot 19  \\cdot 89  \\cdot 5189  \\cdot p_{228}$\\\\\n & $81$-bit & $2  \\cdot 947  \\cdot 277603  \\cdot 28469787063396608749  \\cdot p_{162}$\\\\\n$-244$ & $89$-bit & $2^{2}  \\cdot 13  \\cdot 523  \\cdot 1702319  \\cdot 2827715661581  \\cdot p_{179}$\\\\\n & $88$-bit & $2^{8}  \\cdot 3^{2}  \\cdot 5  \\cdot 71  \\cdot 907  \\cdot 2749  \\cdot 146221  \\cdot 2246269  \\cdot p_{176}$\\\\\n$-264$ & $83$-bit & $2^{3}  \\cdot 11  \\cdot 131  \\cdot 12543757399  \\cdot 2818746796297  \\cdot p_{167}$\\\\\n & $82$-bit & $2^{2}  \\cdot 3  \\cdot 5^{2}  \\cdot 2287  \\cdot 2134790941497418864559  \\cdot p_{165}$\\\\\n$-276$ & $70$-bit & $2  \\cdot 11^{2}  \\cdot 8839  \\cdot 78797899  \\cdot 323360863688748558301  \\cdot p_{140}$\\\\\n & $88$-bit & $2  \\cdot 3  \\cdot 5  \\cdot 6197  \\cdot 138617  \\cdot 16664750312513  \\cdot p_{177}$\\\\\n$-292$ & $92$-bit & $2  \\cdot 54983  \\cdot 5220799  \\cdot 2671917733  \\cdot p_{185}$\\\\\n & $86$-bit & $2  \\cdot 11^{2}  \\cdot 149  \\cdot 354689  \\cdot\n24012883  \\cdot 32483123  \\cdot p_{172}$\\\\\n\\bottomrule\n    \\end{tabularx}\n    \\caption{Curves for discriminants $-3 \\geq -D \\geq -292$.}\n    \\label{tab:group-order-factorization}\n\\end{table*}\n\n\\section{Bandersnatch}\\label{sec:bandersnatch}\n\nThe Bandersnatch is obtained from a discriminant $-D = -8$, meaning\nthat the endomorphism ring is $\\Z[\\sqrt{-2}]$.\nWe obtain the curve $j$-invariant using the Complex Multiplication\nmethod, based on the Hilbert class polynomial $H_{-D}(X)$.\nThe roots of $H_{-D}$ are $j$-invariants of elliptic curves whose\nendomorphism ring is of discriminant $-D$.\nFrom a $j$-invariant, we obtain the curve equation in different\nmodels. Before looking into the details of three different representations,\nwe briefly recall how to exhibit the degree $2$ endomorphism $\\psi$.\n\n\\paragraph{Degree 2 endomorphism.}\nThe endomorphism $\\psi$ has a kernel generated by a $2$-torsion\npoint. Hence, we can obtain the rational maps defining $\\psi$ by\nlooking at the curves $2$-isogenous to Bandersnatch. Only one has the\nsame $j$-invariant, meaning that up to an isomorphism, the V\u00e9lu's\nformulas~\\cite{velu71} let us obtain compute $\\psi$.\nFor cryptographic use-cases, we are interested in computing $\\psi$ on\nthe $p_{253}$-order subgroup of the curve. On these points, $\\psi$\nacts as a scalar multiplication by the eigenvalue\n$$\\lambda = \\text{\\small{\\tt\n    0x13b4f3dc4a39a493edf849562b38c72bcfc49db970a5056ed13d21408783df05}}.$$\nBy construction, $\\psi$ is the endomorphism $\\sqrt{-2}\\in \\mathcal\nO_{-8}$. 
Thus, $\\lambda$ satisfies\n$\\lambda^2+2 = 0 \\bmod p_{253}$.\nIn the following sections, we provide details on the curve equation,\nthe $\\psi$ rational maps, and a generator of the $p_{253}$-order\nsubgroup in the case of the affine Weierstrass, projective Montgomery\nand projective twisted Edwards representations.\nThe parameters are reproducible using the script\nof~\\href{https://github.com/asanso/Bandersnatch/blob/main/python-ref-impl/get\\_params.py}{this\n  file}.\n\n\\subsection{Weierstrass curve}\\label{sec-w-curve}\n\\paragraph{Curve equation.}\nThe Bandersnatch curve can be represented in the Weierstrass model\nusing the equation\n$$E_W:y^2 = x^3 -3763200000x -78675968000000.$$\n%% \\begin{verbatim}\n%% p=0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001\n%% E=EllipticCurve(GF(p), [-3763200000, -78675968000000])\n%% \\end{verbatim}\n\n\\paragraph{Endomorphism.}\nThe endomorphism $\\psi$ can obtained using the method detailed above.\nWe obtain the following expression:\n$$\\psi_\\text{W}(x,y) = \\left(u^2\\cdot \\frac{x^2+44800x+2257920000}{x+44800}, u^3\\cdot\ny\\cdot \\frac{x^2+2\\cdot 44800x+t_0}{(x+44800)^2}\\right).$$\n\\begin{verbatim}\nu=0x50281ac0f92fc1b20fd897a16bf2b9e132bdcb06721c589296cf82245cf9382d\nt0=0x73eda753299d7d483339d80809a1d80553bda402fffe5bfefffffffef10be001.\n\\end{verbatim}\n\n\\paragraph{Subgroup generator.}\nThe generator of the $p_{253}$-order subgroup is computed by finding the\nlexicographically smallest valid $x$-coordinate of a point of the\ncurve, and scaling it by the cofactor $4$ such that the result is not\nthe point at infinity. From a point with $x=2$, we obtain a generator $E_W(x_W,y_W)$ where:\n\\input{params-W.tex}\n\n\\subsection{Twisted Edwards curve}\n\\paragraph{Curve equation.}\nBandersnatch can also be represented in twisted Edwards coordinates,\nwhere the group law is complete.\nIn this model, the Bandersnatch curve can be defined by the equation\n$$E_\\text{TE}:-5x^2+y^2 = 1 +\ndx^2y^2, d=\\frac{138827208126141220649022263972958607803}{171449701953573178309673572579671231137}.$$\nTwisted Edwards group law is more efficient with a coefficient\n$a = -1$ (see~\\cite{AC:HWCD08} for details).\nIn our case, $-5$ is not a square in $\\Fp$. Thus, the curve with\nequation $-x^2+y^2 = 1 -dx^2y^2/5$ is the quadratic twist of\nBandersnatch. We provide a representation with $a=-5$, leading to a\nslightly more expensive group law because multiplying by $-5$ is more\nexpensive than a multiplication by $-1$, but this cost will be\nneglected compared to the improvement of the GLV method. 
See\nSection~\\ref{sec:comparison} for details.\n\n\\paragraph{Endomorphism.}\nFrom this representation, we exhibit the degree $2$ endomorphism in\ntwisted Edwards coordinates:\n$$\n\\psi_\\text{TE}(x,y,z) = \\left(f(y)h(y), g(y)xy, h(y)xy\\right)\\qquad\n\\left\\{\\begin{array}{r}\nf(y) = c(z^2-y^2),\\\\\ng(y) = b(y^2+bz^2),\\\\\nh(y) = y^2-bz^2.\n\\end{array}\\right.$$\n\n\\begin{verbatim}\nb=0x52c9f28b828426a561f00d3a63511a882ea712770d9af4d6ee0f014d172510b4\nc=0x6cc624cf865457c3a97c6efd6c17d1078456abcfff36f4e9515c806cdf650b3d.\n\\end{verbatim}\nThis map can be computed in 3 additions and 9 multiplications by\nfirst computing $xy$, $y^2$, $z^2$ and $bz^2$.\n\n\\paragraph{Subgroup generator.}\nThe generator of the $p_{253}$-order subgroup obtained in\nSection~\\ref{sec-w-curve} has twisted coordinates\nof the form $E_\\text{TE}(x_\\text{TE},y_\\text{TE},1)$ with\n\\input{params-TE.tex}\n\n\\subsection{Montgomery curve}\n\\paragraph{Curve equation.}\nA twisted Edwards curve is always birationally equivalent to a\nMontgomery curve. We obtain the mapping between these two\nrepresentations following~\\cite{JCEng:CosSmi18}.\nWhile the twisted Edwards model fits better for $\\mathbb F_p$ circuit\narithmetic and more generally for the zero-knowledge proof context, we\nprovide here the Montgomery version because the scalar multiplication\nis more efficient in this model:\n$$E_M: By^2 = x^3 + Ax^2 + x$$\n\\begin{verbatim}\nB=0x300c3385d13bedb7c9e229e185c4ce8b1dd3b71366bb97c30855c0aa41d62727\nA=0x4247698f4e32ad45a293959b4ca17afa4a2d2317e4c6ce5023e1fd63d1b5de98.\n\\end{verbatim}\n\n\\paragraph{Endomorphism.}\nMontgomery curves allow efficient scalar multiplication using the\nMontgomery ladder. We provide here the endomorphism $\\psi$ in this\nmodel:\n$$\\psi_\\text{M}(x,-,z) = (-(x-z)^2 - cxz, -, 2xz)$$\n\\begin{verbatim}\nc=0x4247698f4e32ad45a293959b4ca17afa4a2d2317e4c6ce5023e1fd63d1b5de9a.\n\\end{verbatim}\n\n\\paragraph{Subgroup generator.}\nThe generator of the $p_{253}$-order subgroup given above is of the\nform $E_M(x_M,-,1)$ with:\n\\input{params-M.tex}\n\n\\subsection{Security of Bandersnatch}\n\nThe Bandersnatch curve order is $2^2\\cdot r$ for a $253$-bit long\nprime $r$.\n% 13108968793781547619861935127046491459309155893440570251786403306729687672801\nIts quadratic twist has order\n$2^7 \\cdot 3^3 \\cdot r'$, where $r'$ is another prime of $244$ bits.\n%15172417585395309745210573063711216967055694857434315578142854216712503379\nHence, the Bandersnatch curve satisfies twist security after a quick cofactor\ncheck.\nWe estimate that the Bandersnatch curve (resp. its quadratic twist)\nhas $126$ bits of security (resp. $122$ bits of security).\n\n% \\ZZ{TODOs: 1. expand the security section 2. hash to curve (not essential, but nice to have)\n% 3. serialization (required by a spec rather than an academic paper. may be borrow \n% some ed25519 trick where the cofactor is 8?)}\n\n\\section{Comparison}\\label{sec:comparison}\n\n%### Scalar multiplication improvement\nThe twisted Edwards representation is mostly used in practice, and we\nnow focus on the comparison between Jubjub and Bandersnatch in this\nmodel.\n\n\\subsection{Scalar multiplications for a variable base point}\n% Jubjub scalar multiplication\nBecause of its large discriminant, the scalar multiplication on Jubjub\nis a basic double-and-add algorithm, meaning that it computes a\nmultiplication by $n$ in $\\log n$ doublings and $\\log n/2$\nadditions (in average) on the curve. 
\n\n% Bandersnatch scalar multiplication\nThe endomorphism $\\psi$ lets us compute the scalar multiplication\nfaster than a double-and-add algorithm with few precomputations. For a\npoint $P$ and a scalar $n$, we first evaluate $\\psi$ at $P$ and\ndecompose $n = n_1 + \\lambda n_2$. Then a multi scalar multiplication\nis computed in $\\log n/2$ doublings and $3\\log n/8$ additions (in average) on the curve.\n\n% Benchmarks\nWe benchmarked our implementation with both GLV enabled and disabled, and \ncompared it with Arkworks' own Jubjub implementation. \nOur benchmark is conducted over an AMD 5900x CPU, with Ubuntu 20.04,\nrust 1.52 stable version, and Arkwork 0.3.0 release version.\nWe used~\\href{https://docs.rs/criterion}{\\texttt{criterion}\n  micro-benchmark toolchain},  version 0.3.0, for data collection. We\ncompile Arkworks with two options, namely \\texttt{default} and\n\\texttt{asm}, respectively.\nThe \\texttt{default} setup relies on \\texttt{num\\_bigint} crate for\nlarge integer arithmetics, while \\texttt{asm} turns on assembly for\nfinite field multiplication. \n\nArkworks use the aforementioned double-and-add\nmultiplication methodology, without side channel protections such \nas Montgomery ladders. Our non-GLV implementation also follows\nthis design. For our GLV implementation, there are three components,\nnamely, the endomorphism, the scalar decomposition, and the\nmulti scalar multiplication (MSM). We implement those schemes and \npresent the micro-benchmarks in Table~\\ref{tab:comp_full}.\nSpecifically, we do not use the MSM implementation in Arkworks:\nour scalars, after the decomposition, contain roughly 128 bits\nof leading zeros, and our own MSM implementation is \noptimized for this setting.\n\nTable~\\ref{tab:comp_full} presents the full picture of the benchmark.\nWhen GLV is disabled, we observe a similar but a little worse \nperformance for Bandersnatch curve, compared to\nthe Jubjub curve, due to the slightly larger coefficient \n$a=-5$ and a larger scalar field of 253 bits (Jubjub curve has $a=-1$\nand a scalar field of 252 bits).\nWhen GLV is enabled, we report a 45\\% improvement of the Bandersnatch\ncurve, and a 42\\% improvement over the Jubjub curve.\n\n\\begin{table}[ht] %\\small\n  \\centering\n  \n  \\begin{tabular}{|l|c|c|}\\hline\n      & \\texttt{default} & \\texttt{asm}\\\\\\hline\\hline\n    Jubjub & 75 $\\mu$s & 69 $\\mu$s \\\\\\hline\\hline\n    Bandersnatch without GLV & 78 $\\mu$s & 72 $\\mu$s  \\\\\\hline\\hline   \n    Bandersnatch with GLV& 44 $\\mu$s & 42 $\\mu$s \\\\\\hline\n    \\ \\ \\ Endomorphism & 2.4 $\\mu$s& 1.8 $\\mu$s\\\\\\hline\n    \\ \\ \\ Scalar decomposition & 0.75 $\\mu$s & 0.7 $\\mu$s \\\\\\hline\n    \\ \\ \\ multi scalar multiplication & 42 $\\mu$s &  40.8 $\\mu$s\\\\\\hline\\hline\n    Overall Improvement & 42\\% & 39\\% \\\\\\hline\n  \\end{tabular}\n  \\caption{Bandersnatch vs Jubjub: Performance}\n  \\label{tab:comp_full}\n\\end{table}\n\nTo make a meaningful comparison, we benchmark\nthe cost of the group multiplication over the default generators.\nNote that Arkworks do not implement optimizations for \nfixed generators nonetheless. We then sample field elements \nuniformly at random, for each new iteration, and the benchmark\nresult is consolidated over 100 iterations.\n\n\\subsection{Multi scalar multiplications}\nThis section reports the performance of variable base \nmulti scalar multiplications (MSM). Note that this MSM is \ncompatible, but\ndifferent\nfrom the MSM inside the GLV. 
\nIn particular, for a sum of $k$ scalar multiplications,\nwe report the data point for: \n\\begin{itemize}\n  \\item invoking the MSM over the $k$ base scalars randomly sampled,\n    expected to be around 256 bits;\n  \\item using GLV endomorphism to break the $k$ base scalars into $2k$\n    new base scalars, of halved size, i.e.~of 128 bits.\n\\end{itemize}\nThe data is presented in Figure~\\ref{fig:msm}. Specifically, \nas a baseline, the trivial solution, captained by \n{\\em GLV without \nMSM}, is the product of the number of bases and the cost of\ndoing a single GLV multiplication. \nNote that the MSM algorithms incur an overhead to build some\ntables, which make them less favorable compared to the trivial\nsolution when dimension is really small. For a dimension greater \nthan 4, MSM algorithms begin to out-perform trivial solutions.\nFor dimension greater than 128, it is more efficient to \ninvoke the MSM directly, rather than doing it over the GLV basis.\nThe reason is that the size of the basis becomes too large, so\nthat the gain we get from halving the scalars is offset from the\ngain we get from halving the basis. We remark that this threshold\npoint is platform dependent.\n\\begin{figure}[h]\\centering \\label{fig:msm}\n  \\includegraphics[width=12cm]{fig/msm.pdf}\n\\end{figure}\n\n\\subsection{R1CS constraints}\nThe Bandersnatch curve is zk-SNARK friendly: its \nbase field matches the scalar field for the BLS12-381 curve, a \npairing-friendly curve, on top of which people build zk-SNARK\nsystems, such as Groth16 \\cite{EC:Groth16} or Plonk \\cite{EPRINT:GabWilCio19}.\nIn such a setting, the prover can sufficiently argue certain \nrelationships over arithmetic circuits rather than binary\ncircuits.\nThe circuit is expressed in a form of Rank-1 constraint system\n(R1CS), and in general, the complexity is determined by the \nnumber of constraints in an R1CS. \n\nWe evaluate the number of constraints for a \nvariable base group multiplication. For a double-and-add\nalgorithm, \nour code reports 3177 constraints in \ntotal.\nAs a sanity check, within the core logic,\nit takes 6 constraints per addition, 5 constraints\nper doubling and 2 constraints per bit selection. This adds\nup to 13 constraints per bit, or 3177 constraints per\ngroup multiplication (and we reasonably assume some system overhead\nconsumes another 10 constraints). \n\nNow, in the case of GLV,\n% Let us first explain the obstacle of implementing the GLV\n%with R1CS. T\nthe endomorphism is almost free: it requires \n6 constraints. The MSM inside the GLV can also be done \nwith $2176$ constraints using the above double-then-add\ntechniques.\nThe difficult part is the circuit for scalar decomposition,\nwhich is to prove $n = n_1 +\\lambda n_2 \\bmod r$ where\n$n$ and $\\lambda$ are around 256 bits,\n$n_1$ and $n_2$ are around 128 bits, and\n$r$\nis the order of the scalar field.\nWe implemented this part of the code with $405$ constraints\nvia hand optimized circuits.\n% We therefore report the number of constraints to perform\n% a group operation on both Jubjub curve and Bandersnatch \n% curve. We observe that it takes 3177 constraints for the \n% Jubjub curve, for which the statement is argued over \n% the Edwards affine form coordinates. 
\nPrecisely, we list the breakdown numbers in Table~\\ref{tab:r1cs_full};\nthe  Bandersnatch with GLV constraints count is a little higher than\nthe sum of its components due to some overhead during setup phase.\n\n\\begin{table}[h] %\\small\n  \\centering\n  \n  \\begin{tabular}{|l|c|}\\hline\n      & \\texttt{Number of Constraints} \\\\\\hline\\hline\n      jubjub& 3177 \\\\\\hline\n      Bandersnatch without GLV& 3177 \\\\\\hline\\hline\n    Endomorphism & 6 \\\\\\hline\n    Scalar decomposition &  405 \\\\\\hline\n    multi scalar multiplication & 2176 \\\\\\hline\n    Bandersnatch with GLV&  2621\\\\\\hline\\hline\n    Overall Improvement & 21\\%   \\\\\\hline\n  \\end{tabular}\n  \\caption{Bandersnatch vs Jubjub: R1CS}\n  \\label{tab:r1cs_full}\n\\end{table}\n\n\nThe \\href{https://github.com/zhenfeizhang/bandersnatch-glv}{Rust implementation} of the Bandersnatch scalar multiplication\nalgorithm confirms our estimations: the GLV method is $21\\%$ faster\nthan the Jubjub implementation.\n\n\\section{Conclusion}\nTne last decade has seen great improvements on practical zk-SNARK systems.\nAn essential stepping stone of these schemes is an efficient elliptic\ncurve whose base field matches the scalar field for some pairing-friendly curve.\nOn this note, we present Bandersnatch as an alternative to the commonly used\nbase curve Jubjub. Due to the existence of an efficiently computable\nendomorphism, the scalar multiplication over this curve is 42\\% times\nfaster than the Jubjub curve.\nFor multi scalar multiplications, we report a narrowed advantage over Jubjub \ncurve when the dimension is small, but it vanishes for larger\ndimensions.\nWe also do not observe any improvement in terms of number of constraints in \nthe corresponding R1CS circuit.\n\n\n\\bigskip\n\\paragraph*{\\textbf{Acknowledgments.}} We would like to thank Weikeng\nChen, Luca De Feo, Justin Drake, Youssef El Housni, Dankrad Feist,\nGottfried Herold and Daira Hopwood for fruitful discussions. \n\n\\bibliography{bandersnatch,cryptobib/abbrev3,cryptobib/crypto}\n\\bibliographystyle{unsrt}\n\\end{document}\n", "meta": {"hexsha": "a9a3277bb598b5e776bf20297f05687384195739", "size": 29255, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/bandersnatch.tex", "max_stars_repo_name": "asanso/Bandersnatch", "max_stars_repo_head_hexsha": "e280929a6ba583f9fd23a4903779956c3e6ebf4d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-29T13:35:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T02:00:03.000Z", "max_issues_repo_path": "paper/bandersnatch.tex", "max_issues_repo_name": "asanso/Bandersnatch", "max_issues_repo_head_hexsha": "e280929a6ba583f9fd23a4903779956c3e6ebf4d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-05T15:26:29.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-05T15:26:29.000Z", "max_forks_repo_path": "paper/bandersnatch.tex", "max_forks_repo_name": "asanso/Bandersnatch", "max_forks_repo_head_hexsha": "e280929a6ba583f9fd23a4903779956c3e6ebf4d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-29T13:35:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-29T13:35:00.000Z", "avg_line_length": 46.5103338633, "max_line_length": 160, "alphanum_fraction": 0.7411040848, "num_tokens": 9295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8354835289107309, "lm_q2_score": 0.6859494485880928, "lm_q1q2_score": 0.5730994659607497}}
{"text": "\\subsection{Charge carrier transport}\nTo describe charge carrier transport, the bi-polar drift-diffusion equations are solved in position space\nfor electrons,\n\\begin{equation}\n\\label{eq:ndrive}\n\\boldsymbol{J_n} = q \\mu_e n_{f}  {\\nabla E_{c}}  + q D_n  {\\nabla n_{f}},\n\\end{equation}\nand holes,\n\\begin{equation}\n\\label{eq:pdrive}\n\\boldsymbol{J_p} = q \\mu_h p_{f}  {\\nabla E_{v}}  - q D_p {\\nabla p_{f}}.\n\\end{equation}\n\nConservation of charge carriers is forced by solving the charge carrier continuity equations for both electrons,\n\\begin{equation}\n\\label{eq:contn}\n\\nabla \\boldsymbol{J_n}  = q (R-G+\\frac{\\partial n}{\\partial t}),\n\\end{equation}\nand holes\n\\begin{equation}\n\\label{eq:contp}\n\\nabla \\boldsymbol{J_p} = - q (R-G+\\frac{\\partial p}{\\partial t}).\n\\end{equation}\n\nwhere $R$ and $G$ are the net recombination and generation rates per unit volume respectively.\n", "meta": {"hexsha": "d7cbc4c76e919daf8b61f4ec921bb546b7af645d", "size": 870, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "gpvdm_data/docs/man/electrical_transport.tex", "max_stars_repo_name": "roderickmackenzie/gpvdm", "max_stars_repo_head_hexsha": "914fd2ee93e7202339853acaec1d61d59b789987", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2016-09-13T08:58:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-17T07:04:52.000Z", "max_issues_repo_path": "gpvdm_data/docs/man/electrical_transport.tex", "max_issues_repo_name": "roderickmackenzie/gpvdm", "max_issues_repo_head_hexsha": "914fd2ee93e7202339853acaec1d61d59b789987", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-11-11T12:33:02.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-08T00:48:08.000Z", "max_forks_repo_path": "gpvdm_data/docs/man/electrical_transport.tex", "max_forks_repo_name": "roderickmackenzie/gpvdm", "max_forks_repo_head_hexsha": "914fd2ee93e7202339853acaec1d61d59b789987", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-01-03T06:17:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-01T15:59:00.000Z", "avg_line_length": 33.4615384615, "max_line_length": 112, "alphanum_fraction": 0.724137931, "num_tokens": 286, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070084811306, "lm_q2_score": 0.6297746074044135, "lm_q1q2_score": 0.5730363290407283}}
{"text": "\\documentclass{article}\n\\usepackage{theorem}\n\n\\DeclareMathAlphabet{\\mathsc}{OT1}{cmr}{m}{sc}\n\n\\newcommand{\\implies}{\\Rightarrow}\n\\newcommand{\\have}{{.~}}\n\\newcommand{\\bstate}{{\\mathcal{S}}}\n\\newcommand{\\blocks}{{\\mathcal{B}}}\n\\newcommand{\\tabtop}{{\\mathsc{t}}}\n\\newcommand{\\tblocks}{{\\blocks_\\tabtop}}\n\\newcommand{\\bclear}{{\\mathsc{clear}}}\n\\newcommand{\\bon}{{\\mathsc{on}}}\n\\newcommand{\\babove}{{\\mathsc{above}}}\n\\newcommand{\\bstack}{{\\mathsc{stack}}}\n\\newcommand{\\bbase}{{\\mathsc{base}}}\n\\newcommand{\\st}{~|}\n\n\\begin{document}\n\n\\section{Definitions}\n\n\\begin{itemize}\n\\item Let\n  $$ \\blocks = \\{ 1, \\ldots, N \\} $$\nrepresent a set of $N$ blocks in a blocks-world problem.\nWe represent the table top with the symbol $\\tabtop$, and\ndefine $$ \\tblocks \\equiv \\blocks \\cup \\{ \\tabtop \\} $$\n  \n\\item Call any \n  $$ \\bstate \\subseteq \\blocks \\times \\tblocks $$\na {\\em part-state} of the blocks-world problem. The pairs of blocks in\n$\\bstate$ represent {\\em on} relations.\n\n\\item A part-state $\\bstate$ is {\\em legal} iff\n  \\begin{itemize}\n  \\item No block is on two things. $$\n    \\forall i \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not\\exists j' \\in \\tblocks \\have\n\tj \\neq j' \\wedge \\langle i , j' \\rangle \\in \\bstate\n  $$\n  \\item No two blocks are on the same block. $$\n    \\forall j \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not\\exists i' \\in \\blocks \\have\n\ti \\neq i' \\wedge \\langle i' , j \\rangle \\in \\bstate\n  $$\n  \\end{itemize}\n\n\\item Define the partial functions\n$\\bon_i(\\bstate)$ mapping legal part-states to\nblocks such that\n$\\bon_i(\\bstate) = j$ when $\\langle i, j \\rangle \\in \\bstate$;\n$\\bon_i$ is undefined elsewhere.\n(In other words, $\\bon_i$ says\nwhich block block $i$ is on in a given state.)\nBy the definition of a legal part-state, these functions are well-defined.\n\n\\item A part-state is a {\\em total state} or just a {\\em state}\niff it is legal and every block is on something. $$\n    \\forall i \\in \\blocks \\have\n      \\exists j \\in \\tblocks \\have\n\t\\langle i , j \\rangle \\in \\bstate\n$$\n\n\\item The {\\em above} relation is the transitive closure of\nthe {\\em on} relation: A block $i$ is above a block $j$ in a\nlegal part-state $\\bstate$ if either $i$ is on $j$, or $i$ is on\na block which is above $j$.  $$\n  \\babove_i(\\bstate) = \\left \\{ \\begin{array}{ll}\n    \\{ j \\} \\cup \\babove_j(\\bstate) &\n      j = \\bon_i(\\bstate) \\wedge j \\neq \\tabtop \\\\\n    \\emptyset & \\mbox{otherwise}\n  \\end{array} \\right .\n$$\n\n\\item A block $j$ is {\\em clear} in a legal part-state $\\bstate$ iff no block\nis above it. $$\n  \\bclear(\\bstate) = \\{ j \\in \\bstate \\st\n  \\not\\exists i \\have \\langle i , j \\rangle \\in \\bstate \\}\n$$\n\n\\item A legal part-state $\\bstate$ is {\\em realizable} iff\nno block is above itself. 
$$\n  \\not\\exists i \\in \\blocks \\have\n    i \\in \\babove_i(\\bstate)\n$$\n\n\\item {\\em Moves} are functions $m_{ij}$ which take a realizable part-state in\nwhich blocks $i$ and $j$ are clear to a\nrealizable part-state in which $i$ is on $j$.\n\nLet $i \\in \\blocks$ and $j \\in \\tblocks$, and let $\\mathcal{C}_{ij}$ be the set\nof realizable part-states\nin which $i$ is clear, and either $j = \\tabtop$ or $j$ is clear.\nThen the domain of $m_{ij}$ is\n$\\mathcal{C}_{ij}$, and the range is the set of all realizable part-states.\n\nWe define the partial function $m_{ij}$ by $$\n  m_{ij}(\\bstate) \\equiv\n    \\left ( \\bstate - \\{ \\langle i, \\bon_i(\\bstate) \\rangle \\} \\right )\n    \\cup \\{ \\langle i, j \\rangle \\}\n$$\nwhen $i$ and $j$ are clear in $\\bstate$; $m_{ij}$ is undefined\nelsewhere.  A move is {\\em legal} iff $m_{ij}$ is defined.\n\n\\item A {\\em move sequence} $M$\nof length $k$ from state $\\bstate^1$ to state $\\bstate^2$ consists of\na sequence of pairs $$\n  \\langle i_1, j_1 \\rangle , \\langle i_2 , j_2 \\rangle , \\ldots ,\n  \\langle i_k , j_k \\rangle \\in \\left ( \\blocks \\times \\tblocks \\right ) ^ \\ast\n$$\nsuch that $$\n  \\left ( m_{i_k j_k} \\circ \\cdots \\circ m_{i_2 j_2} \\circ\n  m_{i_1 j_1} \\right ) (\\bstate^1) = \\bstate^2\n$$\nWe allow ourselves the liberty of writing $M(\\bstate^1) = \\bstate^2$.\nA move sequence $M$ is {\\em legal} for a state $\\bstate$ iff\n$M(\\bstate)$ is defined.\n\n\\item A {\\em primitive-blocks-world} (PBW) {\\em problem} consists of two\nrealizable states: an\ninitial state $I$ and a goal state $G$.\nA {\\em solution} to the problem is a move sequence $M$ such that $M(I) = G$.\nWe say that a solution is {\\em optimal} if there is no solution\nof shorter length.\n\n\\item A {\\em stack} of blocks starting at $i$ in a\nlegal part-state $\\bstate$ is just\n$i$ plus the set of blocks $i$ is above. $$\n  \\bstack_i(\\bstate) = \\{ i \\} \\cup \\babove_i(\\bstate)\n$$\nWe call $i$ the {\\em top} of the stack.\nIf $\\bstate$ is a realizable part-state, the {\\em base}\nof the stack is the block in the stack which is\nabove no other blocks in the stack. $$\n  \\bbase_i(\\bstate) = j \\in \\bstack_i(\\bstate) \\have\n    \\babove_j(\\bstate) \\subseteq \\{ \\tabtop \\}\n$$\n(The lemma that every stack in any realizable part state\nhas a base is proved by tracing through the definitions.)\n\n\\item A {\\em tower} is a stack which is a subset of no other\nstack.  \n\n\\end{itemize}\n\n\\section{Flexibility}\n\nA legal part-state $\\bstate^2$ is {\\em as flexible} as\na legal part-state $\\bstate^1$ iff no block in $\\bstate^2$\nis above some block in $\\bstate^2$ which it is not above in\n$\\bstate^1$, i.e. $$\n  \\babove_i (\\bstate^2) \\subseteq \\babove_i(\\bstate^1)\n$$\nWe notate this by writing $\\bstate^2 \\sqsubseteq \\bstate^1$.\n\nAn observation about states related by flexibility is in order.\n\\begin{lemma}{flexible-clear}{Flexibility And Clear Blocks}\n\\given\nStates $\\bstate^1$ and $\\bstate^2$ with $\\bstate^2 \\sqsubseteq \\bstate^1$.\n\\prove\nAny block clear in $\\bstate^1$ is clear in $\\bstate^2$: $$\n  \\bclear(\\bstate^2) \\supseteq \\bclear(\\bstate^1)\n$$\n\\proof\nConsider some block $j$ in $\\bstate^2$.  If it is not clear,\nthen there is a block $i$ above it in $\\bstate^2$.  
But,\nby definition of flexibility, this means that $i$ is above $j$\nin $\bstate^1$, so $j$ is not clear in $\bstate^1$, either.\n\end{lemma}\n\nWe now justify the terminology ``as flexible as''\nby means of Lemma \ref{flexible-move}:\n\begin{lemma}{flexible-move}{Flexibility And Move Sequences}\n\given\nStates $\bstate^1$ and $\bstate^2$ with $\bstate^2 \sqsubseteq \bstate^1$.\nA legal move sequence $M$ starting at $\bstate^1$.\n\prove\n\begin{description}\n\item[i)~] $M$ is also a legal move sequence starting at $\bstate^2$.\n\item[ii)~] $M(\bstate^2) \sqsubseteq M(\bstate^1)$.\n\end{description}\n\proof\nBy induction on prefixes of $M$.\n\basecase\nThe theorem is true for the empty prefix of $M$.\n\indcase\nDecompose $M$ such that the prefix $M^1$ before\nsome move $m = \langle i, j \rangle$ satisfies the lemma $$\n  M = M^1 \cdot \langle i, j \rangle \cdot M^2\n$$\nand let\n\begin{eqnarray*}\n  \bstate^1_1 &=& M^1(\bstate^1) \\\n  \bstate^1_2 &=& \langle i, j \rangle(\bstate^1_1) \\\n  \bstate^2_1 &=& M^1(\bstate^2) \\\n  \bstate^2_2 &=& \langle i, j \rangle(\bstate^2_1)\n\end{eqnarray*}\n\nBy hypothesis we have $\bstate^2_1 \sqsubseteq \bstate^1_1$.\nBy Lemma \ref{flexible-clear}, this means that $$\n  \bclear(\bstate^1_1) \subseteq \bclear(\bstate^2_1)\n$$\nThis means that since $\langle i, j \rangle$ is a legal\nmove in $\bstate^1_1$, it must also be a legal move in\n$\bstate^2_1$.  Thus, condition (i) of the lemma holds\nfor the prefix $M^1 \cdot \langle i, j \rangle$ of $M$.\n\nTo see that condition (ii) of the lemma holds on the extended\nprefix, we note that the only block possibly above other blocks in\n$\bstate^2_2$ but not in $\bstate^2_1$ is $i$.  We then note that\nby definition of flexibility\n\begin{eqnarray*}\n  \babove_i(\bstate^2_2) &=& \babove_j(\bstate^2_1) \cup \{j\} \\\n  &\subseteq& \babove_j(\bstate^1_1) \cup \{j\} \\\n  &=& \babove_i(\bstate^1_2)\n\end{eqnarray*}\n\end{lemma}\n  \n\section{Solution Optimization}\n\nIn the search for an algorithm giving optimal solutions for PBW\nproblems, we make use of the following lemmas, most of which\nhave been proved elsewhere in the literature.  
All\nof these lemmas are proved using essentially the same\nreasoning:  If $M$ is a solution, and we can construct\nfrom $M$ a solution $M'$ which is the same as\n$M$ except for the deletion of one or more moves, then\n$M$ is not an optimal solution.\n\n\begin{lemma}{table-to-table}{Table-To-Table Moves}\n\given\nA PBW problem $\langle I, G \rangle$.\nAn optimal solution $M$ to $\langle I, G \rangle$.\n\prove\nNo block moves from the table to the table in $M$.\n\proof\nAssume the contrary of the lemma, i.e., that there is\nsome table-to-table move in $M$.\nUse this move to split $M$ \n$$\n  M = M^1 \cdot \langle i, \tabtop \rangle \cdot M^2\n$$\nwhere $$\n  \bon_i(M^1(I)) = \tabtop\n$$\nLet $\bstate^1 = M^1(I)$.\nApplying the definitions of the previous section yields\n\begin{eqnarray*}\n  \langle i, \tabtop \rangle(\bstate^1)\n  &\equiv& m_{i \tabtop}(\bstate^1) \\\n  &=& \left ( \bstate^1 - \{ \langle i, \bon_i(\bstate^1) \rangle \} \right )\n  \cup \{ \langle i, \tabtop \rangle \} \\\n  &=& \left ( \bstate^1 - \{ \langle i, \tabtop \rangle \} \right )\n  \cup \{ \langle i, \tabtop \rangle \} \\\n  &=& \bstate^1\n\end{eqnarray*}\nThis says that the state at the start of the table-to-table\nmove is the same as the state at the end, and thus by the\ndefinition of a solution, the sequence obtained by deleting\nthis move from $M$ is also a solution.  But by the definition\nof optimality $M$ is therefore not optimal, and thus we have\na contradiction.\n\nTherefore, no such move can be in $M$.\n\end{lemma}\n\n\n\begin{lemma}{tabled}{Correctly Tabled Blocks}\n\given\nA PBW problem $\langle I, G \rangle$, such that\na block $i$ is on the table in both $I$ and $G$.\nAn optimal solution $M$ to $\langle I, G \rangle$.\n\prove\n$i$ doesn't move in $M$, i.e. $$\n  \not\exists j \in \tblocks \have\n    M = \ldots, \langle i, j \rangle, \ldots\n$$\n\proof\nAssume the contrary, i.e., that $i$ moves in $M$.\n\nWe first split $M$ such that $$\n  M = M^1 \cdot \langle i, j \rangle \cdot M^2 \cdot\n  \langle i, j' \rangle \cdot M^3\n$$\nwhere $M^2$ and $M^3$ contain no moves of $i$.\nTo see that this split is possible, let\n\begin{eqnarray*}\n\bstate^1 &=& M^1(I) \\\n\bstate^2 &=& \langle i, j \rangle(\bstate^1) \\\n\bstate^3 &=& (M^2 \cdot \langle i, \tabtop \rangle)(\bstate^2)\n\end{eqnarray*}\nWe first note\nthat since by assumption $i$ moves in $M$, the first\nmove in the split exists.  Further, by Lemma\n\ref{table-to-table}, $j \neq \tabtop$.\nFinally, since $i$ is on the table in $G$, and\n$j$ is not the table in $\bstate^2$, $i$ must move\nback to the table, so the second move in the split exists, and $j' = \tabtop$.\n\nNow consider the subproblem $\langle \bstate^1, \bstate^3 \rangle$\nand its known and presumably optimal solution $$\n  M' = \langle i, j \rangle \cdot M^2 \cdot \langle i, \tabtop \rangle\n$$\nWe will construct a shorter solution to the problem from this\nsolution.\n\nFor any solution $M$ to $\langle I, G \rangle$,\nlet $M_i$ be the sequence of moves obtained as follows:\nConsider any move $m = \langle j, i \rangle$ in $M$, and the\nstate $\bstate$ just before that move.  
Replace $m$ with the two-move\nsequence $$\n  m' = \langle i, \tabtop \rangle, \langle j, \bon_i(\bstate) \rangle\n$$\nWe note that $M_i$ is still a sequence of\nlegal moves by Lemma \ref{flexible-move}, since the state after $m'$\nis as flexible as the state after $m$.\n\nNow consider $M'_i$.  Since $i$ doesn't move in $M^2$, the last\nmove in $M'_i$ moves $i$ from the table to the table, and can\nthus be deleted.  And since the only $\babove$ relations that\nhave changed anywhere in the two move sequences involve $i$,\nand since $i$ is on the table at the end of both $M'_i$ and $M_i$,\n$M'_i(\bstate) = M_i(\bstate)$.\n\nThus, we now have a sequence \n\end{lemma}\n\n\end{document}\n", "meta": {"hexsha": "7441a26d98d37062098162df3e544f2e95112f44", "size": 11740, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/thm/bak/thm-2.tex", "max_stars_repo_name": "BartMassey/blocks", "max_stars_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/thm/bak/thm-2.tex", "max_issues_repo_name": "BartMassey/blocks", "max_issues_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/thm/bak/thm-2.tex", "max_forks_repo_name": "BartMassey/blocks", "max_forks_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-15T18:45:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-15T18:45:13.000Z", "avg_line_length": 35.3614457831, "max_line_length": 79, "alphanum_fraction": 0.6728279387, "num_tokens": 4003, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696748, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5730319398173964}}
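\nThe definitions above translate almost directly into code. The following Python sketch is our own encoding, not part of the formal development: a part-state is stored as a map from each block to the thing it is on, so the ``no block is on two things'' condition holds by construction.\n\begin{verbatim}\nTABLE = "T"   # stands in for the tabletop symbol\n\ndef legal(state):\n    # no two blocks on the same block (the table is unrestricted)\n    supports = [j for j in state.values() if j != TABLE]\n    return len(supports) == len(set(supports))\n\ndef clear(state, blocks):\n    # a block is clear iff no block is on it\n    return {j for j in blocks if j not in state.values()}\n\ndef move(state, blocks, i, j):\n    # m_ij: defined only when i is clear and (j = TABLE or j is clear)\n    c = clear(state, blocks)\n    assert i in c and (j == TABLE or j in c)\n    new = dict(state)\n    new[i] = j    # (S - {<i, on_i(S)>}) + {<i, j>}\n    return new\n\end{verbatim}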
{"text": "\\chapter{Demodulation of 802.15.4}\n\\section{Demodulation equation}\nThe receiver uses the following equation to demodulate the signal.\n\\begin{equation}\nd(t) = R_Q(t)\\times R_I(t-\\tau) - R_Q(t-\\tau)\\times R_I(t)\n\\end{equation}\n\nwhere the $R_I(t)/R_Q(t)$ are received In-phase/Quad-phase signals, $\\tau$ is the sampling period. Bit decoding depends on\nthe sign of $d(t)$. If $d(t)>0$, the decoded bit equals 1 and vice versa.\n\n\\section{Decoding principle}\nThe 802.15.4 standard uses OQPSK modulation. The In-phase and Quad-phase are shifted by 1/2 symbol time. The symbol time \nis 1/2 period of sine wave. Namely, the In-phase and Quad-phase are shifted by 1/4 period of sine wave. Thus, the In-phase\nand Quad-phase can be expressed as $\\pm sin(t + \\theta)$ and $\\pm cos(t + \\theta)$. Where $\\theta \\in \\{0, \\tfrac{\\pi}{2}\\}$.\n\nFor example, one combination of In-phase/Quad-phase is $cos(t)$ and $sin(t), 0<t<\\tfrac{pi}{2}$. By applying the demodulation\nequation,\n\\begin{align}\nd(t)&= sin(t)\\times cos(t-\\tau) - sin(t-\\tau)\\times cos(t) \\\\\nd(t)&= sin(t)\\left[cos(t)cos(\\tau) + sin(t)sin(\\tau)\\right] - cos(t)\\left[sin(t)cos(\\tau) - cos(t)sin(\\tau)\\right] \\\\\nd(t)&= sin^2(t)sin(\\tau) + cos^2(t)sin(\\tau) = sin(\\tau)\n\\end{align}\nTherefore, $d(t)$ depends on the sampleing period only. In this example, $d(t)$ is greater than 0 and remains constant \nduring the half-symbol period. The following table lists the total combination of I/Q signals.\n\n\\begin{table}[h!]\n\\centering\n\t\\begin{tabular}{|c|c|c|}\n\t\t\\hline\n\t\tIn-phase & Quad-phase \t& decoded equation \\\\ \\hline\t\n\t\t$cos(t)$ & $sin(t)$\t\t& $sin(\\tau)$\\\\ \\hline\n\t\t$cos(t)$ & $-sin(t)$\t& $-sin(\\tau)$\\\\ \\hline\n\t\t$-cos(t)$ & $sin(t)$\t& $-sin(\\tau)$\\\\ \\hline\n\t\t$-cos(t)$ & $-sin(t)$\t& $sin(\\tau)$\\\\ \\hline\n\t\t$sin(t)$ & $cos(t)$\t\t& $-sin(\\tau)$\\\\ \\hline\n\t\t$sin(t)$ & $-cos(t)$\t& $sin(\\tau)$\\\\ \\hline\n\t\t$-sin(t)$ & $cos(t)$\t& $sin(\\tau)$\\\\ \\hline\n\t\t$-sin(t)$ & $-cos(t)$\t& $-sin(\\tau)$\\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:demux}\n\t\\caption{}\n\\end{table}\n\nTherefore, the receiver simply determines the incoming bit from the sign of $d(t)$. Note that the I/Q signals coming\nat the symbol rate of 1$\\mu$s, and the I/Q shift by 0.5$\\mu$s. Every 0.5$\\mu$s generates a bit. 
The chip rate is thus 2\,Mchip/s, and each \nhalf-byte is mapped to a 32-chip sequence, which spans 16$\mu$s.\n", "meta": {"hexsha": "93cfba386ce9ec2ee53acf62676e5f689e8f4e30", "size": 2292, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/tex/demux.tex", "max_stars_repo_name": "lab11/uSDR", "max_stars_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-08-23T03:56:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-10T11:51:36.000Z", "max_issues_repo_path": "notes/tex/demux.tex", "max_issues_repo_name": "lab11/uSDR", "max_issues_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/tex/demux.tex", "max_forks_repo_name": "lab11/uSDR", "max_forks_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-07-22T12:47:41.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-16T23:18:10.000Z", "avg_line_length": 48.7659574468, "max_line_length": 125, "alphanum_fraction": 0.6509598604, "num_tokens": 821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303236047049, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.573031929298919}}
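\nThe sign analysis above is easy to reproduce numerically. A short Python sketch (the sample rate and tone frequency are our own illustrative choices):\n\begin{verbatim}\nimport numpy as np\n\nfs, f = 8e6, 1e6                  # assumed sample rate [Hz] and tone [Hz]\nt = np.arange(64) / fs\nI, Q = np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)   # the cos/sin combination\n\n# d(t) = Q(t) I(t - tau) - Q(t - tau) I(t), with tau = one sample\nd = Q[1:] * I[:-1] - Q[:-1] * I[1:]\nprint(np.allclose(d, np.sin(2*np.pi*f/fs)))       # constant sin(tau) > 0\n\end{verbatim}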
{"text": "\\input{../header_function}\r\n\r\n%---------- start document ---------- %\r\n \\section{poly.factor -- polynomial factorization}\\linkedzero{poly.factor}\r\n The factor module is for factorizations of integer coefficient univariate polynomials.\r\n\r\n\r\n This module using following type:\r\n \\begin{description}\r\n   \\item[polynomial]\\linkedone{poly.factor}{polynomial}:\\\\\r\n     \\param{polynomial} is the polynomial generated by function poly.uniutil.polynomial. \r\n \\end{description}\r\n\r\n%\r\n  \\subsection{brute\\_force\\_search -- search factorization by brute force}\\linkedone{poly.factor}{brute\\_force\\_search}\r\n   \\func{brute\\_force\\_search}\r\n   {%\r\n     \\hiki{f}{poly.uniutil.IntegerPolynomial},\\ %\r\n     \\hiki{fp\\_factors}{list},\\ %\r\n     \\hiki{q}{integer}%\r\n   }{%\r\n     \\out{[factors]}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Find the factorization of \\param{f} by searching a factor which is a product of some combination in \\param{fp\\_factors}. The combination is searched by brute force.\r\n   \\spacing\r\n   % added document\r\n   The argument \\param{fp\\_factors} is a list of poly.uniutil.FinitePrimeFieldPolynomial .\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n%\r\n  \\subsection{divisibility\\_test -- divisibility test}\\linkedone{poly.factor}{divisibility\\_test}\r\n   \\func{divisibility\\_test}\r\n        {\\hiki{f}{polynomial},\\ %\r\n         \\hiki{g}{polynomial}%\r\n        }\r\n        {\\out{bool}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return Boolean value whether \\param{f} is divisible by \\param{g} or not, for polynomials.\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\negok Note that this function returns Hilbert class polynomial as a list of coefficients.\\\\\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\param{D} must be negative int or long. See \\cite{Pomerance}.\\\\\r\n%\r\n  \\subsection{minimum\\_absolute\\_injection -- send coefficients to minimum absolute representation }\\linkedone{poly.factor}{minimum\\_absolute\\_injection}\r\n   \\func{minimum\\_absolute\\_injection}\r\n        {\\hiki{f}{polynomial}}\r\n        {\\out{F}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return an integer coefficient polynomial F by injection of a $\\mathbf{Z}/p\\mathbf{Z}$ coefficient polynomial \\param{f} with sending each coefficient to minimum absolute representatives.\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\negok \r\n   %\\spacing\r\n   % input, output document\r\n   \\quad The coefficient ring of given polynomial \\param{f} must be \\linkingone{intresidue}{IntegerResidueClassRing} or \\linkingone{finitefield}{FinitePrimeField}.\\\\\r\n%\r\n  \\subsection{padic\\_factorization -- p-adic factorization}\\linkedone{poly.factor}{padic\\_factorization}\r\n   \\func{padic\\_factorization}\r\n        {\\hiki{f}{polynomial}}\r\n        {\\out{p}, \\out{factors}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a prime \\param{p} and a p-adic factorization of given integer coefficient squarefree polynomial \\param{f}. 
The resulting \out{factors} have integer coefficients, injected from $\mathbb{F}_p$ via minimum absolute representatives.\r\n   \spacing\r\n   % added document\r\n   \quad \negok The prime \out{p} is chosen so that:\r\n   \begin{enumerate}\r\n   \item \param{f} is still squarefree mod \out{p},\r\n   \item the number of factors is not greater than with the next candidate prime.\r\n   \end{enumerate}\r\n   %\spacing\r\n   % input, output document\r\n   \quad The given polynomial \param{f} must be poly.uniutil.IntegerPolynomial.\\\r\n%\r\n  \subsection{upper\_bound\_of\_coefficient -- Landau-Mignotte bound of coefficients}\linkedone{poly.factor}{upper\_bound\_of\_coefficient}\r\n   \func{upper\_bound\_of\_coefficient}\r\n        {\hiki{f}{polynomial}}\r\n        {\out{long}}\\\r\n   \spacing\r\n   % document of basic document\r\n   \quad Compute the Landau-Mignotte bound on the coefficients of any factor of \param{f} whose degree is no greater than half the degree of \param{f}.\r\n   \spacing\r\n   % added document\r\n   %\quad \negok Additional argument \param{floatpre} specifies the precision of calculation in decimal digits.\r\n   %\spacing\r\n   % input, output document\r\n   \quad The given polynomial \param{f} must have integer coefficients.\\\r\n%\r\n  \subsection{zassenhaus -- squarefree integer polynomial factorization by Zassenhaus method}\linkedone{poly.factor}{zassenhaus}\r\n   \func{zassenhaus}\r\n        {\hiki{f}{polynomial}}\r\n        {\out{list of factors of f}}\\\r\n   \spacing\r\n   % document of basic document\r\n   \quad Factor a squarefree integer coefficient polynomial \param{f} with the Berlekamp-Zassenhaus method.\r\n   \spacing\r\n   % added document\r\n   %\quad \r\n   %\spacing\r\n   % input, output document\r\n   %\quad output must be list of factors.\r\n%\r\n  \subsection{integerpolynomialfactorization -- Integer polynomial factorization}\linkedone{poly.factor}{integerpolynomialfactorization}\r\n   \func{integerpolynomialfactorization}\r\n        {\hiki{f}{polynomial}}\r\n        {\out{factor}}\\\r\n   \spacing\r\n   % document of basic document\r\n   \quad Factor an integer coefficient polynomial \param{f} with the Berlekamp-Zassenhaus method.\r\n   \spacing\r\n   % added document\r\n   %\quad \param{p} must be a prime integer and \param{d} be an integer such that 0 < \param{d} < 4\param{p} with $-\mathtt{d} \equiv 0, 1 \pmod{4}$. \r\n   \spacing\r\n   % input, output document\r\n   \quad The factorization is output as a list of tuples of the form (factor, index). 
\\\\\r\n%\r\n%\\begin{ex}\r\n%>>> module.func1(1, 0.1, \"a\", [], (1, 2))\r\n%(2, \"b\")\r\n%>>> module.func2()\r\n%1\r\n%\\end{ex}%Don't indent!(indent causes an error.)\r\n\\C\r\n\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "ef330b2537352944f12a31e94b0528bd0dc2c2c9", "size": 5557, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/en/poly.factor.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/en/poly.factor.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/en/poly.factor.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.162962963, "max_line_length": 246, "alphanum_fraction": 0.6854417851, "num_tokens": 1489, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7090191460821871, "lm_q1q2_score": 0.5729351257301925}}
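\r\nA hypothetical usage sketch of this module follows; the exact spelling of the polynomial constructor and of the integer coefficient ring is an assumption and may differ between NZMATH versions.\r\n\begin{verbatim}\r\nfrom nzmath import rational\r\nfrom nzmath.poly import factor, uniutil\r\n\r\n# f = x**4 - 1, given as {degree: coefficient} over the integers (assumed API)\r\nf = uniutil.polynomial({4: 1, 0: -1}, rational.theIntegerRing)\r\n\r\nprint(factor.divisibility_test(f * f, f))        # True\r\nprint(factor.integerpolynomialfactorization(f))  # list of (factor, index)\r\n\end{verbatim}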
{"text": "\\documentclass[gr-notes.tex]{subfiles}\n\n\\begin{document}\n\n\\setcounter{chapter}{5}\n\n\\chapter{Curved Manifolds}\n\n\\setcounter{section}{8}\n\n\\section{Exercises}\n\n\n\\textbf{1}\nDetermine if the following sets are manifolds, and why. List any exceptional points.\n\n(a) Phase space in Hamiltonian mechanics is generally smooth, though it may contain singular points, depending on the system described. So it is a manifold, excluding the singularities.\n\n(b) The interior of a circle in 2D Euclidean space is smooth everywhere, and is therefore a manifold.\n\n(c) The set of permutations of $n$ objects is not a manifold, as it is discontinuous.\n\n(d) The set of solutions to $xy (x^2 + y^2 - 1)$ is a manifold. The solutions form a unit circle, ($x^2 + y^2 = 1$), as well as lines which span the $x$- and $y$-axes ($x=0$, $y=0$). The singular values occur at the points of intersection: $(0, 0)$, $(0, \\pm1)$, and $(\\pm1, 0)$.\n\n\n\\textbf{2}\nOn which of the manifolds in Exercise 1 is it customary to use a metric? What are those metrics? Why would metrics not be defined for some?\n\n(a) Phase space is comprised of two variables, $p$ and $q$, each of which represent different physical quantities, with incompatible units. For instance, if $p$ is momentum and $q$ is position, then $p^2 + q^2$ is non-physical.\n\n(b) The metric for the interior of a circle in 2D Euclidean space would be the Euclidean norm in 2 dimensions. While this could be given by $(\\Delta s)^2 = (\\Delta x)^2 + (\\Delta y)^2$, it would be more natural to express in units of $r$ and $\\theta$.\n%\n\\begin{align*}\n  (\\Delta s)^2 &=\n  (\\Delta x)^2 + (\\Delta y)^2\n  \\\\ &=\n  (x - x_0)^2 + (y - y_0)^2\n  \\\\ &=\n  r^2 \\qty[ (\\cos\\theta - \\cos\\theta_0)^2 + (\\sin\\theta - \\sin\\theta_0)^2 ]\n  \\\\ &=\n  r^2 \\qty[\n    \\cos^2\\theta + \\cos^2\\theta_0 - 2 \\cos\\theta \\cos\\theta_0 +\n    \\sin^2\\theta + \\sin^2\\theta_0 - 2 \\sin\\theta \\sin\\theta_0\n  ]\n  \\\\ &=\n  r^2 \\qty[ 1 + 1 - 2 \\cos(\\theta - \\theta_0) ] =\n  2 r^2 \\qty[ 1 - \\cos(\\Delta\\theta) ]\n  \\\\ &=\n  4 r^2 \\sin^2(\\Delta\\theta / 2)\n\\end{align*}\n\n(c) This was not a manifold.\n\n(d) Since this is again 2D Euclidean space, we could use the Euclidean norm in 2 dimensions. This time it would be more natural to express distances in $(x,y)$ coordinates, unless we restricted ourselves to the unit circle portion of this manifold.\n\n\n\n\\textbf{4}\nProve the following:\n\n(a) The number of independent terms in $\\pdv*{x^\\alpha}{x^{\\gamma'}}{x^{\\mu'}}|_\\mathcal{P}$ is $40$.\n\nThe total number of components is $4^3$, however, we do not want to consider duplicate terms. To find the number of duplicate terms in total, we find the number of duplicate terms for a fixed value of $\\alpha$, and then multiply that by $4$. The number of terms for a fixed $\\alpha$ is $4^2$, and of those, $4$ are completely independent (the diagonals), and the remainder exist in pairs. Since we only want one from each pair, we divide the total count by two, which means that the total number of duplicate components is $4 \\qty[ (4^2 - 4) / 2 ]$, and so the total number of non-duplicate components is $4^3 - 4 \\qty[ (4^2 - 4) / 2 ] = 40$.\n\nIn the next part, I cheat and use a formula. I will apply it to this part first, to show that it works. If you have a symmetric rank $k$ tensor with $n$ dimensions, then it has\n%\n\\begin{displaymath}\n  \\qty(\\!\\!\\binom{n}{k}\\!\\!) = \\binom{n+k-1}{k}\n\\end{displaymath}\n%\nindependent components. 
In the case of this problem, by fixing $\\alpha$, we get $4$ rank $2$ tensors of $4$ dimensions, and so the total number of independent components is\n%\n\\begin{displaymath}\n  4 \\binom{4+2-1}{2} = 40.\n\\end{displaymath}\n\n\n(b) The number for $\\pdv*{x^\\alpha}{x^{\\lambda'}}{x^{\\mu'}}{x^{\\nu'}}|_\\mathcal{P}$ is $80$.\n\nHere, if we fix $\\alpha$, we have $4$ symmetric rank $3$ tensors of $4$ dimensions, and so there are\n%\n\\begin{displaymath}\n  4 \\binom{4+3-1}{3} = 80\n\\end{displaymath}\n%\nindependent components.\n\n\n(c) The number for $g_{\\alpha\\beta,\\gamma'\\mu'}|_\\mathcal{P}$ is $100$.\n\nIf we interchange $\\alpha\\beta$, but fix $\\gamma'\\mu'$, then we have a symmetric rank $2$ tensor of $4$ dimensions, which has\n%\n\\begin{displaymath}\n  \\binom{4+2-1}{2} = 10\n\\end{displaymath}\n%\nindependent components. Likewise, if we interchange $\\gamma'\\mu'$ but fix $\\alpha\\beta$, we get $10$ independent components. Multiply the two and we have $100$ independent components.\n\n\n\\textbf{7}\n\n(a) Define $\\det(A)$ in terms of cofactors of elements.\n\n\\begin{displaymath}\n  \\det(A) =\n  \\sum_{j=1}^n (-1)^{i+j} a_{i,j} M_{i,j} =\n  \\sum_{i=1}^n (-1)^{i+j} a_{i,j} M_{i,j}\n\\end{displaymath}\n\n(b) Compute $\\dv{x} \\det(A)$, where $A$ is a $2\\times2$ matrix. Show that this satisfies Equation 6.39.\n\nFirst we note that, for $A_{1\\times1}$, $\\det(A) = a_{1,1}$. Thus, for $A_{2\\times2}$, $M_{i,j} = a_{i',j'}$, where $i \\neq i'$ and $j \\neq j'$. Therefore we can rewrite the determinant of $A_{2\\times2}$ as\n%\n\\begin{align*}\n  \\det(A) &=\n  \\sum_{i=1}^2 (-1)^{i+j} a_{i,j} a_{i',j'}\n  \\\\ &=\n  (-1)^{j+1} a_{1,j} a_{2,j'} + (-1)^{j+2} a_{2,j} a_{1,j'}.\n\\end{align*}\n%\nIf we assume $j = 1$ (it doesn't really matter if we choose $1$ or $2$), then this simplifies to\n%\n\\begin{align*}\n  \\det(A) &=\n  (-1)^2 a_{1,1} a_{2,2} + (-1)^3 a_{2,1} a_{1,2}\n  \\\\ &=\n  a_{1,1} a_{2,2} - a_{2,1} a_{1,2}.\n\\end{align*}\n%\nWe can then see that the derivative is\n%\n\\begin{align*}\n  \\pdv{x^\\mu} \\det(A) &=\n  \\pdv{x^\\mu} (a_{11} a_{22} - a_{21} a_{12})\n  \\\\ &=\n  a_{11} a_{22,\\mu} + a_{22} a_{11,\\mu} - a_{21} a_{12,\\mu} - a_{12} a_{21,\\mu}\n\\end{align*}\n\nNow to relate this to Equation 6.39, we let $A$ be the metric $g$. 
Then the derivative of its determinant is\n%\n\\begin{align*}\n  g_{,\\mu} &=\n  g_{11} g_{22,\\mu} + g_{22} g_{11,\\mu} -\n  g_{21} g_{12,\\mu} - g_{12} g_{21,\\mu}\n  \\\\ &=\n  g_{11} g_{22,\\mu} + g_{22} g_{11,\\mu} - 2 g_{12} g_{12,\\mu}.\n\\end{align*}\n%\nNow if we expand Equation 6.39, we see we have\n%\n\\begin{align*}\n  g &=\n  g_{11} g_{22} - g_{12} g_{21} =\n  g_{11} g_{22} - (g_{12})^2\n  \\\\\n  g^{\\alpha\\beta} g_{\\alpha\\beta,\\mu} &=\n  g^{11} g_{11,\\mu} + g^{22} g_{22,\\mu} + 2 g^{12} g_{12,\\mu}\n  \\\\\n  g g^{\\alpha\\beta} g_{\\alpha\\beta,\\mu} &=\n  (g_{11} g_{22} - (g_{12})^2)\n  (g^{11} g_{11,\\mu} + g^{22} g_{22,\\mu} + 2 g^{12} g_{12,\\mu})\n  \\\\ &=\n  g_{22} g_{11,\\mu} - g^{11} (g_{12})^2 g_{11,\\mu} +\n  g_{11} g_{22,\\mu} - g^{22} (g_{12})^2 g_{22,\\mu} +\n  2 g_{11} g_{22} g^{12} g_{12,\\mu} -\n  2 g_{12} g_{12,\\mu}\n  \\\\ &=\n  g_{11} g_{22,\\mu} + g_{22} g_{11,\\mu} - 2 g_{12} g_{12,\\mu}\n  + 2 g_{11} g_{22} g^{12} g_{12,\\mu}\n  - (g_{12})^2 (g^{11} g_{11,\\mu} + g^{22} g_{22,\\mu}).\n\\end{align*}\n%\nIf it is the case that $2 g_{11} g_{22} g^{12} g_{12,\\mu} - (g_{12})^2 (g^{11} g_{11,\\mu} + g^{22} g_{22,\\mu}) = 0$, then this is consistent with our previous expression for $g_{,\\mu}$, but I'm not sure how to show that.\n\n\n\\textbf{10}\nA ``straight line'' on a sphere forms a great circle. The sum of the interior angles of a triangle whose sides are formed by arcs of great circles is greater than $180^\\circ$. Show that the rotation of a vector, parallel transported around such a triangle (Figure 6.3 in Schutz), is exactly equal to the excess of that $180^\\circ$ sum.\n\n\n\n\\textbf{11}\nWhat guarantees we can find a vector field $\\vec{V}$ satisfying:\n%\n\\begin{displaymath}\n  \\tensor{V}{^\\alpha_{;\\beta}} =\n  \\tensor{V}{^\\alpha_{,\\beta}} +\n  \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}}\n  V^\\mu =\n  0\n\\end{displaymath}\n%\n\\begin{enumerate}[(a)]\n\\item The integrability condition follows from the commuting of partial derivatives, $\\comm{\\partial_\\nu}{\\partial_\\beta} V^\\alpha = 0$. Show that this implies\n%\n\\begin{displaymath}\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}}) V^\\mu =\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}} \\tensor{\\Gamma}{^\\mu_{\\sigma\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} \\tensor{\\Gamma}{^\\mu_{\\sigma\\beta}})\n  V^\\sigma =\n  0\n\\end{displaymath}\n\nSince we must satisfy $\\tensor{V}{^\\alpha_{,\\beta}} + \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}} V^\\mu = 0$, then it must be the case that $\\tensor{V}{^\\alpha_{,\\beta}} = -\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}} V^\\mu$. 
Differentiating both sides, we get\n%\n\\begin{align*}\n  \\tensor{V}{^\\alpha_{,\\beta\\nu}} &=\n  -\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} V^\\mu -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}} \\tensor{V}{^\\mu_{,\\nu}}\n  \\\\ &=\n  -\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} V^\\mu +\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}}\n   \\tensor{\\Gamma}{^\\mu_{\\lambda\\nu}} V^\\lambda\n  \\\\\n  \\tensor{V}{^\\alpha_{,\\nu\\beta}} &=\n  -\\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}} V^\\mu +\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}}\n   \\tensor{\\Gamma}{^\\mu_{\\sigma\\beta}} V^\\sigma\n  \\\\\n  \\tensor{V}{^\\alpha_{,\\beta\\nu}} &=\n  \\tensor{V}{^\\alpha_{,\\nu\\beta}}\n  \\\\ \\implies\n  -\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} V^\\mu +\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}}\n   \\tensor{\\Gamma}{^\\mu_{\\sigma\\nu}} V^\\sigma &=\n  -\\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}} V^\\mu +\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}}\n   \\tensor{\\Gamma}{^\\mu_{\\sigma\\beta}} V^\\sigma\n  \\\\\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}})\n  V^\\mu &=\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}} \\tensor{\\Gamma}{^\\mu_{\\sigma\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} \\tensor{\\Gamma}{^\\mu_{\\sigma\\beta}})\n  V^\\sigma\n\\end{align*}\n\n\\item By relabeling indices, we can work this into another form:\n%\n\\begin{align*}\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}})\n  V^\\mu &=\n  (\\tensor{\\Gamma}{^\\alpha_{\\sigma\\beta}} \\tensor{\\Gamma}{^\\sigma_{\\mu\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\sigma\\nu}} \\tensor{\\Gamma}{^\\sigma_{\\mu\\beta}})\n  V^\\mu\n  \\\\\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\beta,\\nu}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu,\\beta}} &+\n   \\tensor{\\Gamma}{^\\alpha_{\\sigma\\nu}} \\tensor{\\Gamma}{^\\sigma_{\\mu\\beta}} -\n   \\tensor{\\Gamma}{^\\alpha_{\\sigma\\beta}} \\tensor{\\Gamma}{^\\sigma_{\\mu\\nu}})\n  V^\\mu =\n  0\n\\end{align*}\n\n\n\\end{enumerate}\n\n\n\n\n\\textbf{13}\n\\begin{enumerate}[(a)]\n\\item Show that if $\\vec{A}$ and $\\vec{B}$ are parallel transported along a curve, their dot product is constant along that curve.\n\nThe dot product being constant along the curve means that \\emph{it} must be parallel transported along the curve, i.e. $\\grad_{\\vec{U}} (\\vec{A} \\cdot \\vec{B}) = 0$. We will now show this.\n%\n\\begin{align*}\n  \\grad_{\\vec{U}} (\\vec{A} \\cdot \\vec{B}) &=\n  U^\\lambda \\grad_\\lambda (g_{\\alpha\\beta} A^\\alpha B^\\beta)\n  \\\\ &=\n  U^\\lambda (\n    A^\\alpha B^\\beta \\cancel{\\grad_\\lambda g_{\\alpha\\beta}} +\n    g_{\\alpha\\beta} B^\\beta \\grad_\\lambda A^\\alpha +\n    g_{\\alpha\\beta} A^\\alpha \\grad_\\lambda B^\\beta\n  )\n  \\\\ &=\n  B^\\beta U^\\lambda \\grad_\\lambda A^\\alpha +\n  A^\\alpha U^\\lambda \\grad_\\lambda B^\\beta.\n\\end{align*}\n\nNotice that the terms $U^\\lambda \\grad_\\lambda A^\\alpha$ and $U^\\lambda \\grad_\\lambda B^\\beta$ are just the parallel transport equations, and so they come out to be zero, meaning $\\grad_{\\vec{U}} (\\vec{A} \\cdot \\vec{B}) = 0$, i.e. 
the dot product is constant along the curve.\n\n\\item Show that if a geodesic is spacelike, timelike, or null \\emph{somewhere}, then it remains that way \\emph{everywhere}.\n\nSince the dot product of two parallel transported vectors is constant, if we parallel transport a curve's tangent vector along itself (the geodesic), its magnitude ($\\vec{U} \\cdot \\vec{U}$) should remain constant. Since its magnitude doesn't change, it will remain spacelike, timelike, or null.\n\n\\end{enumerate}\n\n\\textbf{14}\nShow that if the curve in Equation 6.8 is a geodesic, the proper length is an affine parameter.\n\nEquation 6.8 states\n%\n\\begin{displaymath}\n  \\ell =\n  \\int_{\\lambda_0}^{\\lambda_1} \\sqrt{\\abs{\\vec{V} \\cdot \\vec{V}}} \\dd\\lambda.\n\\end{displaymath}\n%\nIf the curve is a geodesic, we have just shown that the dot product of any two vectors remains constant along the curve, and so we may pull it out of the integral.\n%\n\\begin{displaymath}\n  \\ell =\n  \\sqrt{\\abs{\\vec{V} \\cdot \\vec{V}}}\n  \\int_{\\lambda_0}^{\\lambda_1} \\dd\\lambda =\n  \\sqrt{\\abs{\\vec{V} \\cdot \\vec{V}}} (\\lambda_1 - \\lambda_0),\n\\end{displaymath}\n%\nand so the proper length $\\ell$ is indeed an affine parameter, as it can be obtained by a linear transformation of the parameter of the curve, $\\lambda$.\n\n\n\\textbf{16}\n\\begin{enumerate}[(a)]\n\\item Derive Equations 6.59 and 6.60 from 6.68.\n\nSomehow Schutz uses a Taylor expansion to get 6.59 from 6.68. I honestly have no idea how he does this, and Taylor expanding vectors and Christoffel symbols is black magic to me, so here's my (obviously wrong) attempt.\n%\n\\begin{align*}\n  \\var{V^\\alpha} &=\n  \\int_{x^1=a} \\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu \\dd{x^2} -\n  \\int_{x^1=a+\\var{a}} \\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu \\dd{x^2}\n  \\\\ &+\n  \\int_{x^2=b} \\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu \\dd{x^1} -\n  \\int_{x^2=b+\\var{b}} \\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu \\dd{x^1}\n  \\\\ &\\approx\n  \\int_b^{b+\\var{b}} \\eval[\n    \\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu +\n    \\var{a} \\pdv{x^1} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu)\n  |_a \\dd{x^2}\n  \\\\ &-\n  \\int_b^{b+\\var{b}} \\eval[\n    \\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu +\n    \\var{a} \\pdv{x^1} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu)\n  |_a \\dd{x^2}\n  \\\\ &+\n  \\int_a^{a+\\var{a}} \\eval[\n    \\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu +\n    \\var{b} \\pdv{x^2} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu)\n  |_b \\dd{x^1}\n  \\\\ &-\n  \\int_a^{a+\\var{a}} \\eval[\n    \\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu +\n    \\var{b} \\pdv{x^2} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu)\n  |_b \\dd{x^1}\n  \\\\ &\\approx\n  0\n  \\\\ &\n  \\text{but then a miracle occurred!!}\n  \\\\ &\\approx\n -\\int_b^{b+\\var{b}}\n  \\var{a} \\pdv{x^1} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 2}} V^\\mu) \\dd{x^2}\n  \\\\ &+\n  \\int_a^{a+\\var{a}}\n  \\var{b} \\pdv{x^2} \\qty(\\tensor{\\Gamma}{^\\alpha_{\\mu 1}} V^\\mu) \\dd{x^1}\n\\end{align*}\n\nThe next step actually \\emph{does} make sense to me. 
Since we are integrating over such tiny areas ($\var{a}$ and $\var{b}$),\n$\int_a^{a+\var{a}} f(x) \dd{x} \approx \var{a} f(a)$, so\n%\n\begin{align*}\n  \int_b^{b+\var{b}}\n  \var{a} \pdv{x^1} \qty(\tensor{\Gamma}{^\alpha_{\mu 2}} V^\mu) \dd{x^2}\n  &\approx\n  \var{a} \var{b} \pdv{x^1} \qty(\tensor{\Gamma}{^\alpha_{\mu 2}} V^\mu),\n  \\\n  \int_a^{a+\var{a}}\n  \var{b} \pdv{x^2} \qty(\tensor{\Gamma}{^\alpha_{\mu 1}} V^\mu) \dd{x^1}\n  &\approx\n  \var{a} \var{b} \pdv{x^2} \qty(\tensor{\Gamma}{^\alpha_{\mu 1}} V^\mu).\n\end{align*}\n%\nSubtracting the two gives us\n%\n\begin{displaymath}\n  \var{V^\alpha} \approx\n  \var{a} \var{b} \qty[\n   -\pdv{x^1} \qty(\tensor{\Gamma}{^\alpha_{\mu 2}} V^\mu) +\n    \pdv{x^2} \qty(\tensor{\Gamma}{^\alpha_{\mu 1}} V^\mu)\n  ].\n\end{displaymath}\n\n\n\item Derive Equation 6.61 from this.\n\nUsing a generalized form of Equation 6.53:\n%\n\begin{displaymath}\n  \tensor{V}{^\alpha_{,\beta}} =\n  -\tensor{\Gamma}{^\alpha_{\mu\beta}} V^\mu,\n\end{displaymath}\n%\nwe arrive at the expression\n%\n\begin{displaymath}\n  \qty(\tensor{\Gamma}{^\alpha_{\nu\lambda}} V^\nu)_{,\beta} =\n  \tensor{\Gamma}{^\alpha_{\nu\lambda,\beta}} V^\nu +\n  \tensor{\Gamma}{^\alpha_{\nu\lambda}} \tensor{V}{^\nu_{,\beta}} =\n  \tensor{\Gamma}{^\alpha_{\nu\lambda,\beta}} V^\nu -\n  \tensor{\Gamma}{^\alpha_{\nu\lambda}}\n  \tensor{\Gamma}{^\nu_{\mu\beta}}\n  V^\mu.\n\end{displaymath}\n%\nNow we relabel the dummy indices in Equation 6.60 and use this expression to find\n%\n\begin{align*}\n  \var{V^\alpha} &\approx\n  \var{a} \var{b} \qty[\n   -\tensor{\Gamma}{^\alpha_{\nu2,1}} V^\nu +\n    \tensor{\Gamma}{^\alpha_{\mu2}}\n    \tensor{\Gamma}{^\mu_{\nu1}}\n    V^\nu +\n    \tensor{\Gamma}{^\alpha_{\nu1,2}} V^\nu -\n    \tensor{\Gamma}{^\alpha_{\mu1}}\n    \tensor{\Gamma}{^\mu_{\nu2}}\n    V^\nu\n  ]\n  \\ &\approx\n  \var{a} \var{b} \qty[\n    \tensor{\Gamma}{^\alpha_{\nu1,2}} -\n    \tensor{\Gamma}{^\alpha_{\nu2,1}} +\n    \tensor{\Gamma}{^\alpha_{\mu2}}\n    \tensor{\Gamma}{^\mu_{\nu1}} -\n    \tensor{\Gamma}{^\alpha_{\mu1}}\n    \tensor{\Gamma}{^\mu_{\nu2}}\n  ]\n  V^\nu.\n\end{align*}\n\n\end{enumerate}\n\n\n\textbf{18}\n\begin{enumerate}[(a)]\n\item Derive Equations 6.69 and 6.70 from 6.68.\n\n\begin{align*}\n  R_{\alpha\beta\mu\nu} &=\n  \frac{1}{2} (\n    {\color{red}    g_{\alpha\nu,\beta\mu}} -\n    {\color{green}  g_{\alpha\mu,\beta\nu}} +\n    {\color{blue}   g_{\beta\mu,\alpha\nu}} -\n    {\color{violet} g_{\beta\nu,\alpha\mu}}\n  )\n  \\\n  R_{\beta\alpha\mu\nu} &=\n  \frac{1}{2} (\n    {\color{violet} g_{\beta\nu,\alpha\mu}} -\n    {\color{blue}   g_{\beta\mu,\alpha\nu}} +\n    {\color{green}  g_{\alpha\mu,\beta\nu}} -\n    {\color{red}    g_{\alpha\nu,\beta\mu}}\n  )\n  \\ &=\n  \frac{1}{2} (\n   -{\color{red}    g_{\alpha\nu,\beta\mu}} +\n    {\color{green}  g_{\alpha\mu,\beta\nu}} -\n    {\color{blue}   g_{\beta\mu,\alpha\nu}} +\n    {\color{violet} g_{\beta\nu,\alpha\mu}}\n  )\n  \\ &=\n  -R_{\alpha\beta\mu\nu}\n  \\\n  R_{\alpha\beta\nu\mu} &=\n  \frac{1}{2} (\n    {\color{green}  g_{\alpha\mu,\beta\nu}} -\n    {\color{red}    g_{\alpha\nu,\beta\mu}} +\n    {\color{violet} g_{\beta\nu,\alpha\mu}} -\n    {\color{blue}   g_{\beta\mu,\alpha\nu}}\n  )\n  \\ &=\n  
\frac{1}{2} (\n   -{\color{red}    g_{\alpha\nu,\beta\mu}} +\n    {\color{green}  g_{\alpha\mu,\beta\nu}} -\n    {\color{blue}   g_{\beta\mu,\alpha\nu}} +\n    {\color{violet} g_{\beta\nu,\alpha\mu}}\n  )\n  \\ &=\n  -R_{\alpha\beta\mu\nu}\n  \\\n  R_{\mu\nu\alpha\beta} &=\n  \frac{1}{2} (\n    {\color{blue}   g_{\mu\beta,\nu\alpha}} -\n    {\color{green}  g_{\mu\alpha,\nu\beta}} +\n    {\color{red}    g_{\nu\alpha,\mu\beta}} -\n    {\color{violet} g_{\nu\beta,\mu\alpha}}\n  )\n  \\ &=\n  \frac{1}{2} (\n    {\color{red}    g_{\alpha\nu,\beta\mu}} -\n    {\color{green}  g_{\alpha\mu,\beta\nu}} +\n    {\color{blue}   g_{\beta\mu,\alpha\nu}} -\n    {\color{violet} g_{\beta\nu,\alpha\mu}}\n  )\n  \\ &=\n  R_{\alpha\beta\mu\nu}\n\end{align*}\n%\n\begin{align*}\n  2 (R_{\alpha\beta\mu\nu} + R_{\alpha\nu\beta\mu} + R_{\alpha\mu\nu\beta}) &=\n  {\color{red}    g_{\alpha\nu,\beta\mu}} -\n  {\color{green}  g_{\alpha\mu,\beta\nu}} +\n  {\color{blue}   g_{\beta\mu,\alpha\nu}} -\n  {\color{violet} g_{\beta\nu,\alpha\mu}}\n  \\ &+\n  {\color{green}  g_{\alpha\mu,\nu\beta}} -\n  {\color{brown}  g_{\alpha\beta,\nu\mu}} +\n  {\color{violet} g_{\nu\beta,\alpha\mu}} -\n  {\color{teal}   g_{\nu\mu,\alpha\beta}}\n  \\ &+\n  {\color{brown}  g_{\alpha\beta,\mu\nu}} -\n  {\color{red}    g_{\alpha\nu,\mu\beta}} +\n  {\color{teal}   g_{\mu\nu,\alpha\beta}} -\n  {\color{blue}   g_{\mu\beta,\alpha\nu}}\n  \\ &=\n  0\n\end{align*}\n\n\item Show that Equation 6.69 reduces the number of independent components from $4\times4\times4\times4$ to $6 \times 7 / 2$.\n\nFor a rank-2 symmetric tensor, you have $(n/2) (n+1)$ independent components. For an anti-symmetric tensor you have $(n/2) (n-1)$ independent components. So for each of our pairs of anti-symmetric indices, there are $(n/2) (n-1)$ independent components. We can then treat the two pairs as a single pair of symmetric indices, with that many possible values. The number of independent components is therefore:\n%\n\begin{displaymath}\n  (1/2) [(n/2) (n-1)] [(n/2) (n-1) + 1] =\n  (1/2) [(4/2) (4-1)] [(4/2) (4-1) + 1] =\n  6 \times 7 / 2 = 21.\n\end{displaymath}\n\n\item Show that Equation 6.70 only imposes one additional relation, separate from Equation 6.69, reducing the total independent components to 20.\n\nThe addition of Equation 6.70 imposes the condition $R_{\alpha[\beta\mu\nu]} = 0$, which in four dimensions amounts to a single extra constraint, so the count drops from 21 to 20:\n%\n\begin{displaymath}\n  \qty(\!\!\binom{4}{3}\!\!) =\n  \binom{4+3-1}{3} =\n  \binom{6}{3} =\n  20.\n\end{displaymath}\n\n\end{enumerate}\n\n\n\textbf{19}\nProve that the components of the Riemann tensor are all zero for polar coordinates in the Euclidean plane. 
Recall that:\n%\n\\begin{align*}\n  \\tensor{\\Gamma}{^\\theta_{(\\theta r)}} &= 1/r; \\quad\n  \\tensor{\\Gamma}{^r_{\\theta\\theta}} = -r\n  \\\\\n  \\tensor{R}{^\\alpha_{\\beta\\mu\\nu}} &=\n  \\tensor{\\Gamma}{^\\alpha_{\\beta\\nu,\\mu}} -\n  \\tensor{\\Gamma}{^\\alpha_{\\beta\\mu,\\nu}} +\n  \\tensor{\\Gamma}{^\\alpha_{\\sigma\\mu}} \\tensor{\\Gamma}{^\\sigma_{\\beta\\nu}} -\n  \\tensor{\\Gamma}{^\\alpha_{\\sigma\\nu}} \\tensor{\\Gamma}{^\\sigma_{\\beta\\mu}}.\n\\end{align*}\n\nAccording to the computer algebra system, \\texttt{Maxima}, the components are all zero.\n\n\\begin{verbatim}\n(%i1) load(ctensor)$\n\n(%i2) ct_coordsys(polar)$\n\n(%i3) cmetric()$\n\n(%i4) lg;\n                                   [ 1  0  ]\n(%o4)                              [       ]\n                                   [     2 ]\n                                   [ 0  r  ]\n(%i5) riemann(mcs);\nThis spacetime is flat\n(%o5)                                done\n\\end{verbatim}\n\n\n% \\begin{table}[ht]\n%   \\centering\n%   \\begin{tabular}{llll|l}\n%     $\\alpha$ & $\\beta$ & $\\mu$ & $\\nu$ & $\\tensor{R}{^\\alpha_{\\beta\\mu\\nu}}$ \\\\\n%     \\hline\n%     $\\theta$ & $\\theta$ & $\\theta$ & $\\theta$ \\\\\n%     $\\theta$ & $\\theta$ & $\\theta$ & $r$ \\\\\n%     $\\theta$ & $\\theta$ & $r$ & $\\theta$ \\\\\n%     $\\theta$ & $\\theta$ & $r$ & $r$ \\\\\n%     $\\theta$ & $r$ & $\\theta$ & $\\theta$ \\\\\n%     $\\theta$ & $r$ & $\\theta$ & $r$ \\\\\n%     $\\theta$ & $r$ & $r$ & $\\theta$ \\\\\n%     $\\theta$ & $r$ & $r$ & $r$ \\\\\n%     $r$ & $\\theta$ & $\\theta$ & $\\theta$ \\\\\n%     $r$ & $\\theta$ & $\\theta$ & $r$ \\\\\n%     $r$ & $\\theta$ & $r$ & $\\theta$ \\\\\n%     $r$ & $\\theta$ & $r$ & $r$ \\\\\n%     $r$ & $r$ & $\\theta$ & $\\theta$ \\\\\n%     $r$ & $r$ & $\\theta$ & $r$ \\\\\n%     $r$ & $r$ & $r$ & $\\theta$ \\\\\n%     $r$ & $r$ & $r$ & $r$ \\\\\n%   \\end{tabular}\n%   \\caption{Problem 19: Components on the Riemann tensor.}\n%   \\label{tab:ch6-problem19}\n% \\end{table}\n\n\n\n\\textbf{24}\nUsing Equation 6.88, derive Equation 6.89.\n\n\\begin{align*}\n  R_{\\alpha\\beta\\mu\\nu,\\lambda} &=\n  \\frac{1}{2} (\n    g_{\\alpha\\nu,\\beta\\mu\\lambda} -\n    g_{\\alpha\\mu,\\beta\\nu\\lambda} +\n    g_{\\beta\\mu,\\alpha\\nu\\lambda} -\n    g_{\\beta\\nu,\\alpha\\mu\\lambda}\n  )\n  \\\\\n  R_{\\alpha\\beta\\lambda\\mu,\\nu} &=\n  \\frac{1}{2} (\n    g_{\\alpha\\mu,\\beta\\lambda\\nu} -\n    g_{\\alpha\\lambda,\\beta\\mu\\nu} +\n    g_{\\beta\\lambda,\\alpha\\mu\\nu} -\n    g_{\\beta\\mu,\\alpha\\lambda\\nu}\n  )\n  \\\\\n  R_{\\alpha\\beta\\nu\\lambda,\\mu} &=\n  \\frac{1}{2} (\n    g_{\\alpha\\lambda,\\beta\\nu\\mu} -\n    g_{\\alpha\\nu,\\beta\\lambda\\mu} +\n    g_{\\beta\\nu,\\alpha\\lambda\\mu} -\n    g_{\\beta\\lambda,\\alpha\\nu\\mu}\n  )\n  \\\\\n  2 (\n    R_{\\alpha\\beta\\mu\\nu,\\lambda} +\n    R_{\\alpha\\beta\\lambda\\mu,\\nu} +\n    R_{\\alpha\\beta\\nu\\lambda,\\mu}\n  ) &=\n  {\\color{red} g_{\\alpha\\nu,\\beta\\mu\\lambda}} -\n  {\\color{orange} g_{\\alpha\\mu,\\beta\\nu\\lambda}} +\n  {\\color{brown} g_{\\beta\\mu,\\alpha\\nu\\lambda}} -\n  {\\color{green} g_{\\beta\\nu,\\alpha\\mu\\lambda}}\n  \\\\* &+\n  {\\color{orange} g_{\\alpha\\mu,\\beta\\lambda\\nu}} -\n  {\\color{blue} g_{\\alpha\\lambda,\\beta\\mu\\nu}} +\n  {\\color{violet} g_{\\beta\\lambda,\\alpha\\mu\\nu}} -\n  {\\color{brown} g_{\\beta\\mu,\\alpha\\lambda\\nu}}\n  \\\\* &+\n  {\\color{blue} g_{\\alpha\\lambda,\\beta\\nu\\mu}} -\n  {\\color{red} g_{\\alpha\\nu,\\beta\\lambda\\mu}} +\n  
{\color{green} g_{\beta\nu,\alpha\lambda\mu}} -\n  {\color{violet} g_{\beta\lambda,\alpha\nu\mu}}\n  \\* &=\n  0\n\end{align*}\n\n\n\textbf{25}\n\n\begin{enumerate}[(a)]\n\item Prove the Ricci tensor is the only independent contraction of the Riemann tensor. All others are $\pm \tensor{R}{^\alpha_{\beta\mu\nu}}$ or $0$.\n\nThere are three possible ways to contract the Riemann tensor. If we contract on the second lower index, we have the definition of the Ricci tensor: $R_{\beta\nu} = \tensor{R}{^\alpha_{\beta\alpha\nu}}$.\n\nThe value of contracting the last index is the easiest to find, and can be found by manipulating the above expression and invoking the anti-symmetry properties of the Riemann tensor:\n%\n\begin{displaymath}\n  R_{\beta\nu} =\n  \tensor{R}{^\alpha_{\beta\alpha\nu}} =\n -\tensor{R}{^\alpha_{\beta\nu\alpha}}\n  \implies\n  \tensor{R}{^\alpha_{\beta\nu\alpha}} =\n -R_{\beta\nu}.\n\end{displaymath}\n%\nGiven this identity, finding the value of the remaining contraction is easy. Equation 6.70 states that\n%\n\begin{displaymath}\n  R_{\alpha\beta\mu\nu} + R_{\alpha\nu\beta\mu} + R_{\alpha\mu\nu\beta} = 0.\n\end{displaymath}\n%\nIf we raise the $\alpha$'s with the metric, we get\n%\n\begin{align*}\n  g^{\alpha\beta}\n  (R_{\alpha\beta\mu\nu} + R_{\alpha\nu\beta\mu} + R_{\alpha\mu\nu\beta}) &=\n  0\n  \\\n  \tensor{R}{^\beta_{\beta\mu\nu}} +\n  \tensor{R}{^\beta_{\nu\beta\mu}} +\n  \tensor{R}{^\beta_{\mu\nu\beta}} &=\n  0\n  \\\n  \tensor{R}{^\beta_{\beta\mu\nu}} +\n  \tensor{R}{^\beta_{\nu\beta\mu}} -\n  \tensor{R}{^\beta_{\mu\beta\nu}} &=\n  0\n  \\\n  \tensor{R}{^\beta_{\beta\mu\nu}} &=\n  0\n\end{align*}\n\n\n\item Show that the Ricci tensor is symmetric.\n\nUsing the symmetry of the Riemann tensor under exchange of its index pairs,\n\begin{align*}\n  R_{\beta\nu} &=\n  \tensor{R}{^\alpha_{\beta\alpha\nu}} =\n  g^{\alpha\lambda} \tensor{R}{_{\lambda\beta\alpha\nu}}\n  \\ &=\n  g^{\alpha\lambda} \tensor{R}{_{\alpha\nu\lambda\beta}} =\n  \tensor{R}{^\lambda_{\nu\lambda\beta}}\n  \\ &=\n  R_{\nu\beta}\n\end{align*}\n\n\n\end{enumerate}\n\n\n\n\textbf{28}\n\begin{enumerate}[(a)]\n\item Derive Equation 6.19 using the coordinate transformation\n  $(x,y,z) \to (r,\theta,\phi)$.\n\nWe begin by finding the basis vectors in $(r,\theta,\phi)$, using\n%\n\begin{align*}\n  \vec{e}_r &=\n  \pdv{x}{r} \vec{e}_x +\n  \pdv{y}{r} \vec{e}_y +\n  \pdv{z}{r} \vec{e}_z,\n  \\\n  \vec{e}_\theta &=\n  \pdv{x}{\theta} \vec{e}_x +\n  \pdv{y}{\theta} \vec{e}_y +\n  \pdv{z}{\theta} \vec{e}_z,\n  \\\n  \vec{e}_\phi &=\n  \pdv{x}{\phi} \vec{e}_x +\n  \pdv{y}{\phi} \vec{e}_y +\n  \pdv{z}{\phi} \vec{e}_z.\n\end{align*}\n%\nThe variables transform according to\n%\n\begin{align*}\n  x &= r \sin\theta \cos\phi,\n  \\\n  y &= r \sin\theta \sin\phi,\n  \\\n  z &= r \cos\theta.\n\end{align*}\n%\nNow we take the derivatives\n%\n\begin{align*}\n  \pdv{x}{r}      &=    \sin\theta \cos\phi, &\n  \pdv{x}{\theta} &=  r \cos\theta \cos\phi, &\n  \pdv{x}{\phi}   &= -r \sin\theta \sin\phi,\n  \\\n  \pdv{y}{r}      &=    \sin\theta \sin\phi, &\n  \pdv{y}{\theta} &=  r \cos\theta \sin\phi, &\n  \pdv{y}{\phi}   &=  r \sin\theta \cos\phi,\n  \\\n  \pdv{z}{r}      &=    
\cos\theta, &\n  \pdv{z}{\theta} &= -r \sin\theta, &\n  \pdv{z}{\phi}   &=  0.\n\end{align*}\n%\nThe basis vectors are therefore\n%\n\begin{align*}\n  \vec{e}_r &=\n    \sin\theta \cos\phi \vec{e}_x +\n    \sin\theta \sin\phi \vec{e}_y +\n    \cos\theta          \vec{e}_z\n  \\\n  \vec{e}_\theta &=\n  r \cos\theta \cos\phi \vec{e}_x +\n  r \cos\theta \sin\phi \vec{e}_y -\n  r \sin\theta          \vec{e}_z\n  \\\n  \vec{e}_\phi &=\n -r \sin\theta \sin\phi \vec{e}_x +\n  r \sin\theta \cos\phi \vec{e}_y\n\end{align*}\n%\nNow we find the components of the metric tensor using the fact that\n$g_{\alpha\beta} = \vec{e}_\alpha \cdot \vec{e}_\beta$.\n%\n\begin{align*}\n  g_{rr} =\n  \vec{e}_r \cdot \vec{e}_r &=\n  (\sin\theta \cos\phi)^2 \delta_{xx} +\n  (\sin\theta \sin\phi)^2 \delta_{yy} +\n  (\cos\theta)^2 \delta_{zz}\n  \\ &=\n  \sin^2\theta \cos^2\phi +\n  \sin^2\theta \sin^2\phi +\n  \cos^2\theta =\n  \sin^2\theta + \cos^2\theta\n  \\ &=\n  1,\n  \\\n  g_{\theta\theta} =\n  \vec{e}_\theta \cdot \vec{e}_\theta &=\n  (r \cos\theta \cos\phi)^2 \delta_{xx} +\n  (r \cos\theta \sin\phi)^2 \delta_{yy} +\n  (-r \sin\theta)^2 \delta_{zz}\n  \\ &=\n  r^2 (\cos^2\theta \cos^2\phi + \cos^2\theta \sin^2\phi + \sin^2\theta)\n  \\ &=\n  r^2 (\cos^2\theta + \sin^2\theta)\n  \\ &=\n  r^2,\n  \\\n  g_{\phi\phi} =\n  \vec{e}_\phi \cdot \vec{e}_\phi &=\n  (-r \sin\theta \sin\phi)^2 \delta_{xx} +\n  ( r \sin\theta \cos\phi)^2 \delta_{yy}\n  \\ &=\n  r^2 (\sin^2\theta \sin^2\phi + \sin^2\theta \cos^2 \phi)\n  \\ &=\n  r^2 \sin^2\theta.\n\end{align*}\n%\nNow for the off-diagonal elements, we take advantage of the symmetry properties of the metric to reduce it from 6 terms to 3.\n%\n\begin{align*}\n  g_{r \theta} = g_{\theta r} =\n  \vec{e}_r \cdot \vec{e}_\theta &=\n  (\sin\theta \cos\phi) ( r \cos\theta \cos\phi) \delta_{xx} +\n  (\sin\theta \sin\phi) ( r \cos\theta \sin\phi) \delta_{yy} +\n  (\cos\theta)          (-r \sin\theta)          \delta_{zz}\n  \\ &=\n  r \qty(\n    \sin\theta \cos\theta \cos^2\phi +\n    \sin\theta \cos\theta \sin^2\phi -\n    \sin\theta \cos\theta\n  )\n  \\ &=\n  0,\n  \\\n  g_{r \phi} = g_{\phi r} =\n  \vec{e}_r \cdot \vec{e}_\phi &=\n  (\sin\theta \cos\phi) (-r \sin\theta \sin\phi) \delta_{xx} +\n  (\sin\theta \sin\phi) ( r \sin\theta \cos\phi) \delta_{yy} +\n  (\cos\theta) (0) \delta_{zz}\n  \\ &=\n  r (-\sin^2\theta \sin\phi \cos\phi + \sin^2\theta \sin\phi \cos\phi)\n  \\ &=\n  0,\n  \\\n  g_{\theta \phi} = g_{\phi \theta} =\n  \vec{e}_\theta \cdot \vec{e}_\phi &=\n  (r \cos\theta \cos\phi) (-r \sin\theta \sin\phi) \delta_{xx} +\n  (r \cos\theta \sin\phi) ( r \sin\theta \cos\phi) \delta_{yy}\n  \\ &=\n  r^2 (\n   -\cos\theta \cos\phi \sin\theta \sin\phi +\n    \cos\theta \cos\phi \sin\theta \sin\phi\n  )\n  \\ &=\n  0.\n\end{align*}\n%\nThe metric tensor in spherical polar coordinates is therefore\n%\n\begin{displaymath}\n  (g_{ij}) = \mqty(\dmat[0]{1,r^2,r^2\sin^2\theta}).\n\end{displaymath}\n\n\item Use Equation 6.19 to find the metric on the surface of a sphere.\n\nOn the surface of a sphere, $r$ is fixed, and therefore $\Delta r = 0$. 
\n\\item Use Equation 6.19 to find the metric on the surface of a sphere.\n\nOn the surface of a sphere, $r$ is fixed, and therefore $\\Delta r = 0$. As a result of this, we do not need to consider $g_{rr}$, and the only relevant components become $(\\theta,\\phi)$. So we can simplify the metric as:\n%\n\\begin{displaymath}\n  (g_{ij}) = \\mqty(\\dmat[0]{r^2,r^2\\sin^2\\theta}).\n\\end{displaymath}\n\n\\item Find the components of $g^{\\alpha\\beta}$ on the surface of a sphere.\n\nSince $g_{\\alpha\\beta}$ is a diagonal matrix, its inverse is also diagonal, with each diagonal component replaced by its reciprocal. So the matrix is\n%\n\\begin{displaymath}\n  (g^{ij}) = \\mqty(\\dmat[0]{1/r^2,1/(r^2\\sin^2\\theta)}).\n\\end{displaymath}\n\n\\end{enumerate}\n\n\n\\textbf{29}\nCalculate the Riemann tensor of the unit sphere in spherical polar coordinates.\n\nThe metric for a unit sphere in spherical polars is\n%\n\\begin{displaymath}\n  (g_{ij}) = \\mqty(\\dmat[0]{1,\\sin^2\\theta}),\n\\end{displaymath}\n%\nwith non-vanishing Christoffel symbols $\\tensor{\\Gamma}{^\\theta_{\\phi\\phi}} = -\\sin\\theta\\cos\\theta$ and $\\tensor{\\Gamma}{^\\phi_{\\theta\\phi}} = \\tensor{\\Gamma}{^\\phi_{\\phi\\theta}} = \\cot\\theta$, and so one component of the Riemann tensor is\n%\n\\begin{align*}\n  \\tensor{R}{^\\theta_{\\phi\\theta\\phi}} &=\n  \\tensor{\\Gamma}{^\\theta_{\\phi\\phi,\\theta}} -\n  \\tensor{\\Gamma}{^\\theta_{\\phi\\theta,\\phi}} +\n  \\tensor{\\Gamma}{^\\theta_{\\sigma\\theta}} \\tensor{\\Gamma}{^\\sigma_{\\phi\\phi}} -\n  \\tensor{\\Gamma}{^\\theta_{\\sigma\\phi}} \\tensor{\\Gamma}{^\\sigma_{\\phi\\theta}}\n  \\\\ &=\n  (\\sin^2\\theta - \\cos^2\\theta) - 0 + 0 + \\sin\\theta \\cos\\theta \\cot\\theta\n  \\\\ &=\n  \\sin^2\\theta,\n\\end{align*}\n%\nso that $R_{\\theta\\phi\\theta\\phi} = g_{\\theta\\theta} \\tensor{R}{^\\theta_{\\phi\\theta\\phi}} = \\sin^2\\theta$.\nUsing the symmetry and anti-symmetry properties of the Riemann tensor, we find the remaining components:\n%\n\\begin{align*}\n  R_{\\phi\\theta\\phi\\theta} &=\n  \\sin^2\\theta\n  \\\\\n  R_{\\theta\\phi\\phi\\theta} &=\n  R_{\\phi\\theta\\theta\\phi} =\n -\\sin^2\\theta.\n\\end{align*}\n%\nAll remaining components are zero: in two dimensions the Riemann tensor has a single independent component, and any component with a repeated index in its first or last pair, such as $R_{\\theta\\theta\\theta\\phi}$ or $R_{\\phi\\phi\\phi\\theta}$, vanishes by anti-symmetry.\n
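\nThe same result can be checked by brute force. The short script below (my addition, assuming only \\texttt{sympy}) builds the Christoffel symbols from the metric and evaluates the independent component directly:\n%\n\\begin{verbatim}\n# Verification sketch: R_{theta phi theta phi} for the unit sphere.\nimport sympy as sp\n\nth, ph = sp.symbols('theta phi')\nq = [th, ph]\ng = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])\nginv = g.inv()\n\ndef Gamma(a, b, c):    # Christoffel symbol Gamma^a_{bc}\n    return sum(ginv[a, d] * (sp.diff(g[d, b], q[c])\n        + sp.diff(g[d, c], q[b]) - sp.diff(g[b, c], q[d]))\n        for d in range(2)) / 2\n\ndef Riem(a, b, m, n):  # R^a_{bmn} in Schutz's sign convention\n    val = sp.diff(Gamma(a, b, n), q[m]) - sp.diff(Gamma(a, b, m), q[n])\n    val += sum(Gamma(a, s, m) * Gamma(s, b, n)\n        - Gamma(a, s, n) * Gamma(s, b, m) for s in range(2))\n    return sp.simplify(val)\n\nR = sum(g[0, a] * Riem(a, 1, 0, 1) for a in range(2))\nprint(sp.simplify(R))  # sin(theta)**2\n\\end{verbatim}\n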
\n\\textbf{30}\nCalculate the Riemann tensor on a cylinder.\n\nThe metric in cylindrical polars, $(r,\\theta,z)$, is given by\n%\n\\begin{displaymath}\n  (g_{ij}) = \\mqty(\\dmat[0]{1,r^2,1}).\n\\end{displaymath}\n%\nOn the surface of a cylinder (excluding the top and bottom) the radius is unchanging, so $\\Delta r = 0$, as was the case on the surface of a sphere. The metric can therefore be simplified in $(\\theta,z)$ coordinates as:\n%\n\\begin{displaymath}\n  (g_{ij}) = \\mqty(\\dmat[0]{r^2,1}).\n\\end{displaymath}\n%\nFrom the metric alone, it is obvious that the components of the Riemann tensor must \\emph{all} be zero. This is because the Riemann tensor depends only on derivatives of the metric components, and the only variable term is $g_{\\theta\\theta} = r^2$. Since we removed the dependence on the \\emph{coordinate} $r$ (it is now a fixed parameter), \\emph{none} of the terms in the Riemann tensor will involve differentiating with respect to $r$, and therefore they will \\emph{all} be zero.\n\n\n\\textbf{32}\nA $4$D manifold has coordinates $(u,v,w,p)$, and a metric\n%\n\\begin{displaymath}\n  (g_{\\alpha\\beta}) =\n  \\mqty( 0 & 1 & 0 & 0 \\\\\n         1 & 0 & 0 & 0 \\\\\n         0 & 0 & 1 & 0 \\\\\n         0 & 0 & 0 & 1 ).\n\\end{displaymath}\n\n\\begin{enumerate}[(a)]\n\\item Show that the manifold is flat and has signature $+2$.\n\nSince every element in the metric is a constant, all derivatives of the metric (and hence all Christoffel symbols) vanish identically, and therefore $R_{\\alpha\\beta\\mu\\nu} \\equiv 0$, so the manifold is flat.\n\nThe signature is the sum of the signs of the eigenvalues of the metric. The $uv$ block $\\mqty(0 & 1 \\\\ 1 & 0)$ has eigenvalues $\\pm 1$, so the eigenvalues of $g$ are $(-1,1,1,1)$ and the signature is $-1+1+1+1 = +2$.\n\n\\item Since this manifold is flat and has signature $+2$, it must be a Minkowski spacetime. Find a coordinate transformation to $(t,x,y,z)$.\n\nThe line element is $\\dd{s}^2 = 2\\dd{u}\\dd{v} + \\dd{w}^2 + \\dd{p}^2$, so $u$ and $v$ are null coordinates. Define\n%\n\\begin{align*}\n  u &= (x + t)/\\sqrt{2}, &\n  v &= (x - t)/\\sqrt{2}, &\n  w &= y, &\n  p &= z.\n\\end{align*}\n%\nThen\n%\n\\begin{displaymath}\n  2\\dd{u}\\dd{v} = \\dd{x}^2 - \\dd{t}^2,\n\\end{displaymath}\n%\nso $\\dd{s}^2 = -\\dd{t}^2 + \\dd{x}^2 + \\dd{y}^2 + \\dd{z}^2$, which is exactly the Minkowski line element.\n\n\\end{enumerate}\n
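\nAs a quick check (my addition; a \\texttt{sympy} sketch, not part of the problem), the Jacobian of this substitution does transform the given metric into $\\eta$:\n%\n\\begin{verbatim}\n# Verification sketch: J^T g J = diag(-1, 1, 1, 1).\nimport sympy as sp\n\nt, x, y, z = sp.symbols('t x y z')\nu = (x + t) / sp.sqrt(2)\nv = (x - t) / sp.sqrt(2)\nw, p = y, z\n\nJ = sp.Matrix([u, v, w, p]).jacobian([t, x, y, z])\ng = sp.Matrix([[0, 1, 0, 0], [1, 0, 0, 0],\n               [0, 0, 1, 0], [0, 0, 0, 1]])\nprint(sp.simplify(J.T * g * J))   # the Minkowski metric eta\n\\end{verbatim}\n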
\n\\textbf{33}\nA three-sphere (or glome) is the 4D analog of a sphere, with Cartesian coordinates $(x,y,z,w)$, described by the equation $x^2+y^2+z^2+w^2=r^2$, where $r$ is its radius.\n\n\\begin{enumerate}[(a)]\n\\item Define coordinates $(r,\\theta,\\phi,\\chi)$, given by\n%\n\\begin{align*}\n  x &= r \\sin\\chi \\sin\\theta \\cos\\phi, &\n  y &= r \\sin\\chi \\sin\\theta \\sin\\phi,\n  \\\\\n  z &= r \\sin\\chi \\cos\\theta, &\n  w &= r \\cos\\chi,\n\\end{align*}\n%\nand show that $(\\theta,\\phi,\\chi)$ form the coordinates of the surface of the sphere.\n\nAs usual, we begin by finding the elements of the Jacobian\n%\n\\begin{displaymath}\n  \\Lambda : (x,y,z,w) \\to (r,\\theta,\\phi,\\chi).\n\\end{displaymath}\n%\n\\begin{align*}\n  \\pdv*{x}{r} &=\n  \\sin\\chi \\sin\\theta \\cos\\phi, &\n  \\pdv*{y}{r} &=\n  \\sin\\chi \\sin\\theta \\sin\\phi, &\n  \\pdv*{z}{r} &=\n  \\sin\\chi \\cos\\theta, &\n  \\pdv*{w}{r} &=\n  \\cos\\chi,\n  \\\\\n  \\pdv*{x}{\\theta} &=\n  r \\sin\\chi \\cos\\theta \\cos\\phi, &\n  \\pdv*{y}{\\theta} &=\n  r \\sin\\chi \\cos\\theta \\sin\\phi, &\n  \\pdv*{z}{\\theta} &=\n  -r \\sin\\chi \\sin\\theta, &\n  \\pdv*{w}{\\theta} &=\n  0,\n  \\\\\n  \\pdv*{x}{\\phi} &=\n  -r \\sin\\chi \\sin\\theta \\sin\\phi, &\n  \\pdv*{y}{\\phi} &=\n  r \\sin\\chi \\sin\\theta \\cos\\phi, &\n  \\pdv*{z}{\\phi} &=\n  0, &\n  \\pdv*{w}{\\phi} &=\n  0,\n  \\\\\n  \\pdv*{x}{\\chi} &=\n  r \\cos\\chi \\sin\\theta \\cos\\phi, &\n  \\pdv*{y}{\\chi} &=\n  r \\cos\\chi \\sin\\theta \\sin\\phi, &\n  \\pdv*{z}{\\chi} &=\n  r \\cos\\chi \\cos\\theta, &\n  \\pdv*{w}{\\chi} &=\n  -r \\sin\\chi.\n\\end{align*}\n%\nThe basis vectors are then\n%\n\\begin{align*}\n  \\vec{e}_\\xi &=\n  \\pdv{x^\\alpha}{\\xi} \\vec{e}_\\alpha &\n  \\\\\n  \\vec{e}_r &=\n  \\sin\\chi \\sin\\theta \\cos\\phi \\vec{e}_x &\n%\n  \\vec{e}_\\theta &=\n  r \\sin\\chi \\cos\\theta \\cos\\phi \\vec{e}_x &\n%\n  \\vec{e}_\\phi &=\n  -r \\sin\\chi \\sin\\theta \\sin\\phi \\vec{e}_x &\n  \\vec{e}_\\chi &=\n  r \\cos\\chi \\sin\\theta \\cos\\phi \\vec{e}_x\n  \\\\ &+ %%%%\n  \\sin\\chi \\sin\\theta \\sin\\phi \\vec{e}_y &&+\n%\n  r \\sin\\chi \\cos\\theta \\sin\\phi \\vec{e}_y &&+\n%\n  r \\sin\\chi \\sin\\theta \\cos\\phi \\vec{e}_y &&+\n%\n  r \\cos\\chi \\sin\\theta \\sin\\phi \\vec{e}_y\n  \\\\ &+ %%%%\n  \\sin\\chi \\cos\\theta \\vec{e}_z &&-\n%\n  r \\sin\\chi \\sin\\theta \\vec{e}_z &&\n%\n  &&+\n%\n  r \\cos\\chi \\cos\\theta \\vec{e}_z\n  \\\\ &+\n  \\cos\\chi \\vec{e}_w &&\n%\n  &&\n%\n  &&-\n%\n  r \\sin\\chi \\vec{e}_w\n\\end{align*}\n%\nNotice that if we fix $\\chi = \\pi/2$, this reduces to the basis vectors for ordinary spherical polars.\n\nThe components of the metric can be found using\n$g_{\\alpha\\beta} = \\vec{e}_\\alpha \\cdot \\vec{e}_\\beta$.\n%\n\\begin{align*}\n  g_{rr} &=\n  \\sin^2\\chi \\sin^2\\theta \\cos^2\\phi \\delta_{xx} +\n  \\sin^2\\chi \\sin^2\\theta \\sin^2\\phi \\delta_{yy} +\n  \\sin^2\\chi \\cos^2\\theta \\delta_{zz} +\n  \\cos^2\\chi \\delta_{ww}\n  \\\\* &=\n  \\sin^2\\chi (\\sin^2\\theta + \\cos^2\\theta) + \\cos^2\\chi =\n  \\sin^2\\chi + \\cos^2\\chi =\n  1\n  \\\\\n  g_{\\theta\\theta} &=\n  r^2 \\qty(\n    \\sin^2\\chi \\cos^2\\theta \\cos^2\\phi \\delta_{xx} +\n    \\sin^2\\chi \\cos^2\\theta \\sin^2\\phi \\delta_{yy} +\n    \\sin^2\\chi \\sin^2\\theta \\delta_{zz}\n  )\n  \\\\* &=\n  r^2 \\sin^2\\chi (\\cos^2\\theta + \\sin^2\\theta) =\n  r^2 \\sin^2\\chi\n  \\\\\n  g_{\\phi\\phi} &=\n  r^2 \\qty(\n    \\sin^2\\chi \\sin^2\\theta \\sin^2\\phi \\delta_{xx} +\n    \\sin^2\\chi \\sin^2\\theta \\cos^2\\phi \\delta_{yy}\n  )\n  \\\\* &=\n  r^2 \\sin^2\\chi \\sin^2\\theta\n  \\\\\n  g_{\\chi\\chi} &=\n  r^2 \\qty(\n    \\cos^2\\chi \\sin^2\\theta \\cos^2\\phi \\delta_{xx} +\n    \\cos^2\\chi \\sin^2\\theta \\sin^2\\phi \\delta_{yy} +\n    \\cos^2\\chi \\cos^2\\theta \\delta_{zz} +\n    \\sin^2\\chi \\delta_{ww}\n  )\n  \\\\* &=\n  r^2 \\qty(\n    \\cos^2\\chi \\sin^2\\theta +\n    \\cos^2\\chi \\cos^2\\theta +\n    \\sin^2\\chi\n  ) =\n  r^2 \\qty( \\cos^2\\chi + \\sin^2\\chi ) =\n  r^2\n\\end{align*}\n\nTo show that the off-diagonal terms are zero, I got lazy and used the Maxima computer algebra system. Its naming convention and ordering for these coordinates are different, but it still makes it clear that the metric is diagonal.\n\n\\begin{verbatim}\n(%i1) load(ctensor)$            /* load the component tensor package */\n(%i2) ct_coordsys(spherical4d)$ /* use the 3-sphere metric */\n(%i3) lg;                       /* display the metric */\n              [ 1  0         0                    0             ]\n              [                                                 ]\n              [     2                                           ]\n              [ 0  r         0                    0             ]\n(%o3)         [                                                 ]\n              [         2    2                                  ]\n              [ 0  0   r  sin (theta)             0             ]\n              [                                                 ]\n              [                           2       2    2        ]\n              [ 0  0         0         sin (eta) r  sin (theta) ]\n\\end{verbatim}\n\nSo in our notation, the metric tensor is\n%\n\\begin{displaymath}\n  (g_{ij}) =\n  \\mqty(\\dmat[0]{1, r^2\\sin^2\\chi, r^2\\sin^2\\chi\\sin^2\\theta, r^2}).\n\\end{displaymath}\n\n\n\\item Show that the metric on the \\emph{surface} of the three-sphere only has non-zero components $g_{\\theta\\theta}$, $g_{\\phi\\phi}$, and $g_{\\chi\\chi}$.\n\nOn the surface of a three-sphere, $r$ is unchanging, so $\\Delta r$ is always zero. 
Thus, we may reduce the dimensionality of the metric to 3: $(\\theta,\\phi,\\chi)$.\n%\n\\begin{displaymath}\n  (g_{ij}) =\n  \\mqty(\\dmat[0]{r^2\\sin^2\\chi, r^2\\sin^2\\chi\\sin^2\\theta, r^2}).\n\\end{displaymath}\n\n\\end{enumerate}\n\n\n\\textbf{34}\nProve the following identities for a general metric tensor in a general coordinate system. Equations 6.39 and 6.40 will be helpful.\n\n\\begin{enumerate}[(a)]\n\\item $\\tensor{\\Gamma}{^\\mu_{\\mu\\nu}} = \\frac{1}{2} (\\ln\\abs{g})_{,\\nu}$\n%\n\\begin{displaymath}\n  \\tensor{\\Gamma}{^\\mu_{\\mu\\nu}} =\n  \\frac{(\\sqrt{-g})_{,\\nu}}{\\sqrt{-g}} =\n  \\frac{1}{2 \\sqrt{-g}} \\frac{(-g)_{,\\nu}}{\\sqrt{-g}} =\n  \\frac{(-g)_{,\\nu}}{2 (-g)} =\n  \\frac{\\abs{g}_{,\\nu}}{2 \\abs{g}} =\n  \\frac{1}{2} (\\ln\\abs{g})_{,\\nu}\n\\end{displaymath}\n%\n\\item $g^{\\mu\\nu} \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} =\n       (-g^{\\alpha\\beta} \\sqrt{-g})_{,\\beta} / \\sqrt{-g}$\n%\n\\begin{align*}\n  g^{\\mu\\nu} \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} &=\n -(g^{\\alpha\\beta} \\sqrt{-g})_{,\\beta} / \\sqrt{-g}\n  \\\\ &=\n -(g^{\\alpha\\beta} (\\sqrt{-g})_{,\\beta} +\n   \\tensor{g}{^{\\alpha\\beta}_{,\\beta}} \\sqrt{-g}) / \\sqrt{-g}\n  \\\\ &=\n  -(g^{\\alpha\\beta} (\\sqrt{-g})_{,\\beta} / \\sqrt{-g} +\n    \\tensor{g}{^{\\alpha\\beta}_{,\\beta}})\n  \\\\ &=\n  -(g^{\\alpha\\beta} \\tensor{\\Gamma}{^\\lambda_{\\lambda\\beta}} +\n    \\tensor{g}{^{\\alpha\\beta}_{,\\beta}})\n  \\\\\n  \\frac{1}{2} g^{\\mu\\nu} g^{\\beta\\alpha}\n  (g_{\\beta\\mu,\\nu} + g_{\\beta\\nu,\\mu} - g_{\\mu\\nu,\\beta}) &=\n  -(g^{\\alpha\\beta} g^{\\lambda\\sigma} g_{\\lambda\\sigma,\\beta} / 2 +\n    \\tensor{g}{^{\\alpha\\beta}_{,\\beta}})\n  \\\\\n  \\frac{1}{2} g^{\\mu\\nu} g^{\\beta\\alpha}\n  (g_{\\beta\\mu,\\nu} + g_{\\beta\\nu,\\mu}) -\n  g^{\\mu\\nu} g^{\\beta\\alpha} g_{\\mu\\nu,\\beta} / 2 &=\n  -(g^{\\alpha\\beta} g^{\\lambda\\sigma} g_{\\lambda\\sigma,\\beta} / 2 +\n    \\tensor{g}{^{\\alpha\\beta}_{,\\beta}})\n  \\\\\n  \\frac{1}{2} g^{\\mu\\nu} g^{\\beta\\alpha}\n  (g_{\\beta\\mu,\\nu} + g_{\\beta\\nu,\\mu}) &=\n -\\tensor{g}{^{\\alpha\\beta}_{,\\beta}}\n  \\\\\n -\\frac{1}{2} g^{\\mu\\nu}\n  (\\tensor{g}{^{\\beta\\alpha}_{,\\nu}} g_{\\beta\\mu} +\n   \\tensor{g}{^{\\beta\\alpha}_{,\\mu}} g_{\\beta\\nu}) &=\n -\\tensor{g}{^{\\alpha\\beta}_{,\\beta}}\n  \\\\\n -\\frac{1}{2}\n  (\\tensor{\\delta}{_\\beta^\\nu} \\tensor{g}{^{\\beta\\alpha}_{,\\nu}} +\n   \\tensor{\\delta}{_\\beta^\\mu} \\tensor{g}{^{\\beta\\alpha}_{,\\mu}}) &=\n -\\tensor{g}{^{\\alpha\\beta}_{,\\beta}}\n  \\\\\n -\\frac{1}{2}\n  (2 \\tensor{g}{^{\\beta\\alpha}_{,\\beta}}) &=\n -\\tensor{g}{^{\\alpha\\beta}_{,\\beta}}\n\\end{align*}\n%\n\\item\n  $\\tensor{F}{^{[\\mu\\nu]}_{;\\nu}} =\n   (\\sqrt{-g} F^{[\\mu\\nu]})_{,\\nu} / \\sqrt{-g}$\n%\nThe Christoffel term $\\tensor{\\Gamma}{^\\mu_{\\alpha\\nu}} F^{[\\alpha\\nu]}$ in the covariant divergence vanishes, since it contracts the symmetric lower indices of $\\Gamma$ with the antisymmetric indices of $F$, so\n%\n\\begin{displaymath}\n  \\tensor{F}{^{[\\mu\\nu]}_{;\\nu}} =\n  \\tensor{F}{^{[\\mu\\nu]}_{,\\nu}} +\n  F^{[\\mu\\nu]} \\tensor{\\Gamma}{^\\alpha_{\\nu\\alpha}} =\n  (\\tensor{F}{^{[\\mu\\nu]}_{,\\nu}} \\sqrt{-g} +\n   F^{[\\mu\\nu]} (\\sqrt{-g})_{,\\nu}) / \\sqrt{-g} =\n  (\\sqrt{-g} F^{[\\mu\\nu]})_{,\\nu} / \\sqrt{-g}\n\\end{displaymath}\n%\n\\item $g^{\\alpha\\sigma} g_{\\sigma\\beta,\\gamma} =\n      -\\tensor{g}{^{\\alpha\\sigma}_{,\\gamma}} g_{\\sigma\\beta}$\n%\nWe start with $g^{\\alpha\\sigma} g_{\\sigma\\beta} = \\tensor{\\delta}{^\\alpha_\\beta}$. 
Then we differentiate both sides to get\n%\n\\begin{align*}\n  \\tensor{g}{^{\\alpha\\sigma}_{,\\gamma}} g_{\\sigma\\beta} &+\n  g^{\\alpha\\sigma} g_{\\sigma\\beta,\\gamma} =\n  0\n  \\\\\n  g^{\\alpha\\sigma} g_{\\sigma\\beta,\\gamma} &=\n -\\tensor{g}{^{\\alpha\\sigma}_{,\\gamma}} g_{\\sigma\\beta}\n\\end{align*}\n\n\\item $\\tensor{g}{^{\\mu\\nu}_{,\\alpha}} =\n      -\\tensor{\\Gamma}{^\\mu_{\\beta\\alpha}} g^{\\beta\\nu} -\n       \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g^{\\mu\\beta}$\n\n\\begin{align*}\n  \\tensor{g}{^{\\mu\\nu}_{;\\alpha}} = \\tensor{g}{^{\\mu\\nu}_{,\\alpha}} &+\n  \\tensor{\\Gamma}{^\\mu_{\\beta\\alpha}} g^{\\beta\\nu} +\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g^{\\mu\\beta} =\n  0\n  \\\\\n  \\tensor{g}{^{\\mu\\nu}_{,\\alpha}} &=\n -\\tensor{\\Gamma}{^\\mu_{\\beta\\alpha}} g^{\\beta\\nu} -\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g^{\\mu\\beta}\n\\end{align*}\n\n\\end{enumerate}\n\n\n\n\\textbf{35}\nCompute the metric tensor, Christoffel symbols, and Riemann tensor for a spacetime with line element:\n%\n\\begin{displaymath}\n  \\dd{s^2} =\n -e^{2\\Phi} \\dd{t^2} +\n  e^{2\\Lambda} \\dd{r^2} +\n  r^2 (\\dd{\\theta^2} + \\sin^2\\theta \\dd{\\phi^2}).\n\\end{displaymath}\n\nBased on the line element, the metric must be\n%\n\\begin{align*}\n  (g_{\\alpha\\beta}) &=\n  \\mqty(\\dmat[0]{-e^{2\\Phi}, e^{2\\Lambda}, r^2, r^2 \\sin^2\\theta}) &\n  (g^{\\alpha\\beta}) &=\n  \\mqty(\\dmat[0]{-e^{-2\\Phi}, e^{-2\\Lambda}, 1/r^2, 1/(r^2 \\sin^2\\theta)})\n\\end{align*}\n\nFor the rest of this problem, I took advantage of the \\texttt{Maxima} computer algebra system. According to it, the non-zero, unique Christoffel symbols are\n%\n\\begin{align*}\n  \\tensor{\\Gamma}{^r_{tt}} &=\n  \\exp(2\\Phi - 2\\Lambda) \\dv{\\Phi}{r}\n  &\n  \\tensor{\\Gamma}{^t_{rt}} &=\n  \\dv{\\Phi}{r}\n  \\\\\n  \\tensor{\\Gamma}{^r_{rr}} &=\n  \\dv{\\Lambda}{r}\n  &\n  \\tensor{\\Gamma}{^\\theta_{r \\theta}} =\n  \\tensor{\\Gamma}{^\\phi_{r \\phi}} &=\n  \\frac{1}{r}\n  \\\\\n  \\tensor{\\Gamma}{^r_{\\theta\\theta}} &=\n  -\\exp(-2 \\Lambda) r\n  &\n  \\tensor{\\Gamma}{^\\phi_{\\theta\\phi}} &=\n  \\cot\\theta\n  \\\\\n  \\tensor{\\Gamma}{^r_{\\phi\\phi}} &=\n  -\\exp(-2 \\Lambda) r \\sin^2\\theta\n  &\n  \\tensor{\\Gamma}{^\\theta_{\\phi\\phi}} &=\n  -\\sin\\theta \\cos\\theta\n\\end{align*}\n%\nThe independent non-zero components of the Riemann tensor, with index placement and signs as reported by Maxima (whose conventions differ from Schutz's fully lowered components), are\n%\n\\begin{align*}\n  R_{t r t r} &=\n  \\exp(2(\\Phi-\\Lambda)) \\qty[\n    \\dv{\\Phi}{r} \\qty(\\dv{\\Lambda}{r} - \\dv{\\Phi}{r}) - \\dv[2]{\\Phi}{r}\n  ]\n  &\n  R_{t \\theta t \\theta} = R_{t \\phi t \\phi} &=\n  -\\frac{1}{r} \\exp(2(\\Phi - \\Lambda)) \\dv{\\Phi}{r}\n  \\\\\n  R_{r r t t} &=\n  \\dv{\\Lambda}{r} \\dv{\\Phi}{r} -\n  \\dv[2]{\\Phi}{r} -\n  \\qty(\\dv{\\Phi}{r})^2\n  &\n  R_{r \\theta r \\theta} = R_{r \\phi r \\phi} &=\n  -\\frac{1}{r} \\dv{\\Lambda}{r}\n  \\\\\n  R_{\\theta \\phi \\theta \\phi} &=\n  \\exp(-2\\Lambda) - 1\n  &\n  R_{\\phi \\phi \\theta \\theta} &=\n  \\exp(-2\\Lambda) \\qty(\\exp(2 \\Lambda) - 1) \\sin^2\\theta\n  \\\\\n  R_{\\theta \\theta t t} &=\n  -r \\exp(-2 \\Lambda) \\dv{\\Phi}{r}\n  &\n  R_{\\theta \\theta r r} &=\n  r \\exp(-2 \\Lambda) \\dv{\\Lambda}{r}\n\\end{align*}\n
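\nThe Christoffel symbols are easy to double-check independently of Maxima. The sketch below (my addition; it assumes only \\texttt{sympy}, and the printed comments are up to \\texttt{sympy}'s derivative notation) computes them from the metric:\n%\n\\begin{verbatim}\n# Verification sketch: Christoffel symbols for\n# ds^2 = -e^{2 Phi(r)} dt^2 + e^{2 Lambda(r)} dr^2 + r^2 dOmega^2.\nimport sympy as sp\n\nt, r, th, ph = sp.symbols('t r theta phi')\nq = [t, r, th, ph]\nPhi = sp.Function('Phi')(r)\nLam = sp.Function('Lambda')(r)\ng = sp.diag(-sp.exp(2*Phi), sp.exp(2*Lam), r**2, r**2*sp.sin(th)**2)\nginv = g.inv()\n\ndef Gamma(a, b, c):   # Gamma^a_{bc}\n    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], q[c])\n        + sp.diff(g[d, c], q[b]) - sp.diff(g[b, c], q[d]))\n        for d in range(4)) / 2)\n\nprint(Gamma(1, 0, 0))  # Gamma^r_tt = exp(2 Phi - 2 Lambda) Phi'\nprint(Gamma(0, 1, 0))  # Gamma^t_rt = Phi'\nprint(Gamma(2, 1, 2))  # Gamma^theta_{r theta} = 1/r\n\\end{verbatim}\n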
\n\\textbf{36}\nConsider a 4D manifold with coordinates $(t,x,y,z)$ and line element\n%\n\\begin{displaymath}\n  \\dd{s}^2 =\n -(1 + 2\\phi) \\dd{t}^2 +\n  (1 - 2\\phi) (\\dd{x}^2 + \\dd{y}^2 + \\dd{z}^2),\n\\end{displaymath}\n%\nwith $\\abs{\\phi(t,x,y,z)} \\ll 1$. At an arbitrary point $\\mathcal{P}$ with coordinates $(t_0,x_0,y_0,z_0)$, find a coordinate transformation to a local inertial frame (LIF). How does this frame accelerate with respect to the original coordinates? Do all of this to first order in $\\phi$.\n\nBy inspection of the line element, we can see that the metric has components\n%\n\\begin{displaymath}\n  (g_{\\alpha\\beta}) \\underset{(t,x,y,z)}{\\to}\n  \\mqty(\\dmat[0]{-(1+2\\phi),(1-2\\phi),(1-2\\phi),(1-2\\phi)}).\n\\end{displaymath}\n%\nWe want a transformation to a Minkowski spacetime, i.e.\n%\n\\begin{displaymath}\n  \\tensor{\\Lambda}{^{\\alpha'}_\\alpha}\n  \\tensor{\\Lambda}{^{\\beta'}_\\beta}\n  g_{\\alpha'\\beta'} =\n  \\eta_{\\alpha\\beta}.\n\\end{displaymath}\n%\nNow, there may be multiple transformations which satisfy this, so we need only find one. Since both $\\boldsymbol{g}$ and $\\boldsymbol{\\eta}$ are diagonal, I assume that $\\boldsymbol{\\Lambda}$ is diagonal as well, and find its components.\n%\n\\begin{align*}\n  \\eta_{00} &=\n  \\tensor{\\Lambda}{^{0'}_0}\n  \\tensor{\\Lambda}{^{0'}_0}\n  g_{0'0'} &\n%%%%\n  \\eta_{ii} &=\n  \\tensor{\\Lambda}{^{i'}_i}\n  \\tensor{\\Lambda}{^{i'}_i}\n  g_{i'i'}\n  \\\\\n  -1 &=\n  (\\tensor{\\Lambda}{^{0'}_0})^2 (-(1 + 2\\phi)) &\n%%%%\n  1 &=\n  (\\tensor{\\Lambda}{^{i'}_i})^2\n  (1 - 2\\phi)\n  \\\\\n  \\tensor{\\Lambda}{^{0'}_0} &=\n  (1 + 2\\phi)^{-1/2} &\n%%%%\n  \\tensor{\\Lambda}{^{i'}_i} &=\n  (1 - 2\\phi)^{-1/2}\n\\end{align*}\n%\nSince we know that $\\phi$ is small, we can use the approximation\n$(1 + x)^{-1/2} = (1 - x/2) + \\order{x^2}$ to find\n%\n\\begin{align*}\n  \\tensor{\\Lambda}{^{0'}_0} &\\approx\n  (1 - \\phi) &\n%%%%\n  \\tensor{\\Lambda}{^{i'}_i} &\\approx\n  (1 + \\phi)\n\\end{align*}\n\n\\textbf{(39)}\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "3541b60774a806782549c46ec6345b26ef33a819", "size": 45364, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/textbook/tex/gr-ch6-notes.tex", "max_stars_repo_name": "dwysocki/ASTP-760", "max_stars_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/textbook/tex/gr-ch6-notes.tex", "max_issues_repo_name": "dwysocki/ASTP-760", "max_issues_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/textbook/tex/gr-ch6-notes.tex", "max_forks_repo_name": "dwysocki/ASTP-760", "max_forks_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2424242424, "max_line_length": 642, "alphanum_fraction": 0.5790935544, "num_tokens": 17735, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.8080672158638527, "lm_q1q2_score": 0.5729351174319207}}
{"text": "\\chapter{Quadratic functions}\n\\section{Defined on $\\mdr$}\nLet $f:\\mdr \\rightarrow \\mdr, f(x) = a \\cdot x^2 + b \\cdot x + c$ with $a \\in \\mdr \\setminus \\Set{0}$ and \n$b, c \\in \\mdr$ be a quadratic function.\n\n\\begin{figure}[htp]\n    \\centering\n\\begin{tikzpicture}\n    \\begin{axis}[\n        legend pos=north west,\n        axis x line=middle,\n        axis y line=middle,\n        grid = major,\n        width=0.8\\linewidth,\n        height=8cm,\n        grid style={dashed, gray!30},\n        xmin=-3,    % start the diagram at this x-coordinate\n        xmax= 3,    % end   the diagram at this x-coordinate\n        ymin=-0.25, % start the diagram at this y-coordinate\n        ymax= 9,    % end   the diagram at this y-coordinate\n        axis background/.style={fill=white},\n        xlabel=$x$,\n        ylabel=$y$,\n        tick align=outside,\n        minor tick num=-3,\n        enlargelimits=true,\n        tension=0.08]\n      \\addplot[domain=-3:3, thick,samples=50, red]    {0.5*x*x}; \n      \\addplot[domain=-3:3, thick,samples=50, green]  { x*x}; \n      \\addplot[domain=-3:3, thick,samples=50, blue]   { x*x +   x};\n      \\addplot[domain=-3:3, thick,samples=50, orange,dotted] { x*x + 2*x};\n      \\addplot[domain=-3:3, thick,samples=50, black,dashed]  {-x*x + 6};\n      \\addlegendentry{$f_1(x)=\\frac{1}{2}x^2$}\n      \\addlegendentry{$f_2(x)=x^2$}\n      \\addlegendentry{$f_3(x)=x^2+x$}\n      \\addlegendentry{$f_4(x)=x^2+2x$}\n      \\addlegendentry{$f_5(x)=-x^2+6$}\n    \\end{axis} \n\\end{tikzpicture}\n    \\caption{Quadratic functions}\n\\end{figure}\n\n\\subsection{Calculate points with minimal distance}\nIn this case, $d_{P,f}^2$ is polynomial of degree $n^2 = 4$. \nWe use Theorem~\\ref{thm:fermats-theorem}:\\nobreak\n\\begin{align}\n    0     &\\overset{!}{=} (d_{P,f}^2)'\\\\\n          &= 2x -2 x_p -2y_p f'(x) + \\left (f(x)^2 \\right )'\\\\\n          &= 2x -2 x_p -2y_p f'(x) + 2 f(x) \\cdot f'(x) \\rlap{\\hspace*{3em}(chain rule)}\\label{eq:minimizingFirstDerivative}\\\\\n\\Leftrightarrow 0 &\\overset{!}{=} x -x_p -y_p f'(x) + f(x) \\cdot f'(x) \\rlap{\\hspace*{3em}(divide by 2)}\\label{eq:minimizingFirstDerivative}\\\\\n          &= x -x_p -y_p (2ax+b) + (ax^2+bx+c)(2ax+b)\\\\\n          &= x -x_p -y_p \\cdot 2ax- y_p b + (2 a^2 x^3+2 a b x^2+2 a c x+ab x^2+b^2 x+bc)\\\\\n          &= x -x_p -2y_p ax- y_p b + (2a^2 x^3 + 3 ab x^2 + 2acx + b^2 x + bc)\\\\\n          &= 2a^2 x^3 + 3 ab x^2 + (1 -2y_p a+ 2ac + b^2)x +(bc-by_p-x_p)\\label{eq:quadratic-derivative-eq-0}\n\\end{align}\n\nThis is an algebraic equation of degree 3.\nThere can be up to 3 solutions in such an equation. Those solutions\ncan be found with a closed formula. 
\nThis is an algebraic equation of degree 3.\nThere can be up to 3 solutions in such an equation. Those solutions\ncan be found with a closed formula. But not every solution of the\nequation given by Theorem~\\ref{thm:fermats-theorem}\nhas to be a solution to the given problem, as you can see in\nExample~\\ref{ex:false-positive}.\n\\goodbreak\n\\begin{example}\\label{ex:false-positive}\n    Let $a = 1,  b = 0,  c=-1, x_p= 0, y_p = 1$.\n    So $f(x) = x^2 - 1$ and $P(0, 1)$.\n    \\begin{align}\n        \\xRightarrow{\\text{Equation}~\\ref{eq:quadratic-derivative-eq-0}} 0 &\\stackrel{!}{=} 2x^3 - 3x\\\\\n        &= x(2x^2-3)\\\\\n        \\Rightarrow x_{1,2} &= \\pm \\sqrt{\\frac{3}{2}} \\text{ and } x_3 = 0\\\\\n        d_{P,f}(x_3) &= \\sqrt{0^2 + (-1-1)^2} = 2\\\\\n        d_{P,f} \\left (\\pm \\sqrt{\\frac{3}{2}} \\right ) &= \\sqrt{\\left (\\sqrt{\\frac{3}{2}} - 0 \\right )^2 + \\left (\\frac{1}{2}-1 \\right )^2}\\\\\n            &= \\sqrt{\\nicefrac{3}{2}+\\nicefrac{1}{4}} \\\\\n            &= \\sqrt{\\nicefrac{7}{4}}\n    \\end{align}\n    This means $x_3$ is not a point of minimal distance, although\n    $(d_{P,f}(x_3))' = 0$.\n\\end{example}\n
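\nThe numbers in the example are quick to confirm numerically. The following sketch (my addition; it only assumes \\texttt{numpy}) evaluates the distance at the three stationary points:\n\\begin{verbatim}\n# Numeric check of Example (false-positive): f(x) = x^2 - 1, P = (0, 1).\nimport numpy as np\n\na, b, c = 1.0, 0.0, -1.0\nxp, yp = 0.0, 1.0\nf = lambda x: a*x**2 + b*x + c\nd = lambda x: np.hypot(x - xp, f(x) - yp)\n\n# coefficients of the cubic from Equation (quadratic-derivative-eq-0)\ncoeffs = [2*a**2, 3*a*b, 1 - 2*yp*a + 2*a*c + b**2, b*c - b*yp - xp]\nroots = np.roots(coeffs)              # -> 0 and +/- sqrt(3/2)\nreal = roots[np.isreal(roots)].real\nfor x in sorted(real):\n    print(x, d(x))  # x = 0 gives d = 2; x = +/-1.2247 gives d ~ 1.3229\n\\end{verbatim}\n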
\n\\subsection{Number of points with minimal distance}\n\\begin{theorem}\n    A point $P$ has either one or two points on the graph of a \n    quadratic function $f$ that are closest to $P$.\n\\end{theorem}\n\n\\begin{proof}\nThe number of closest points of $f$ cannot be bigger than 3, because\nEquation~\\ref{eq:quadratic-derivative-eq-0} is a polynomial function\nof degree 3. Such a function can have at most 3 roots. As $f$ has\nat least one point on its graph, there is at least one point with\nminimal distance.\n\nIn the following, I will do some transformations with $f = f_0$ and\n$P = P_0$. This will make it easier to calculate the minimal distance\npoints. Moving $f_0$ and $P_0$ simultaneously in $x$ or $y$ direction does \nnot change the minimum distance. Furthermore, we can find the \npoints with minimum distance in the moved situation and calculate\nthe minimum points in the original situation.\n\nFirst of all, we move $f_0$ and $P_0$ by $\\frac{b}{2a}$ in $x$ direction, so\n\\[f_1(x) = ax^2 - \\frac{b^2}{4a} + c \\;\\;\\;\\text{ and }\\;\\;\\; P_1 = \\left (x_p+\\frac{b}{2a},\\;\\; y_p \\right )\\]\n\nBecause:\\footnote{The idea why you subtract $\\frac{b}{2a}$ within\n$f$ is that when you subtract something from $x$ before applying\n$f$ it takes more time ($x$ needs to be bigger) to get to the same\nsituation. In consequence, if we want to move the whole graph by 1 \nto the left, we have to add $+1$.}\n\\begin{align}\n    f(x-\\nicefrac{b}{2a}) &= a (x-\\nicefrac{b}{2a})^2 + b (x-\\nicefrac{b}{2a}) + c\\\\\n    &= a (x^2 - \\nicefrac{b}{a} x + \\nicefrac{b^2}{4a^2}) + bx - \\nicefrac{b^2}{2a} + c\\\\\n    &= ax^2 - bx + \\nicefrac{b^2}{4a} + bx - \\nicefrac{b^2}{2a} + c\\\\\n    &= ax^2 -\\nicefrac{b^2}{4a} + c\n\\end{align}\n\n\nThen move $f_1$ and $P_1$ by $\\frac{b^2}{4a}-c$ in $y$ direction. You get:\n\\[f_2(x) = ax^2\\;\\;\\;\\text{ and }\\;\\;\\; P_2 = \\Big (\\underbrace{x_P+\\frac{b}{2a}}_{=: z},\\;\\; \\underbrace{y_P+\\frac{b^2}{4a}-c}_{=: w} \\Big )\\]\n\nAs $f_2(x) = ax^2$ is symmetric to the $y$ axis, only points \n$P = (0, w)$ could possibly have three minima.\n\nThen compute:\n\\begin{align}\n  d_{P,{f_2}}(x)  &= \\sqrt{(x-0)^2 + (f_2(x)-w)^2}\\\\\n    &= \\sqrt{x^2 + (ax^2-w)^2}\\\\\n    &= \\sqrt{x^2 + a^2 x^4-2aw x^2+w^2}\\\\\n    &= \\sqrt{a^2 x^4 + (1-2aw) x^2 + w^2}\\\\\n    &= \\sqrt{\\left (a x^2 + \\frac{1-2 a w}{2a} \\right )^2 + w^2 - \\left (\\frac{1-2 a w}{2a} \\right )^2}\\\\\n    &= \\sqrt{\\left (a x^2 + \\nicefrac{1}{2a}- w \\right )^2 + \\left (w^2 - \\left (\\frac{1-2 a w}{2a} \\right )^2 \\right)}\n\\end{align}\n\nThis means, the term\n\\[a x^2 + (\\nicefrac{1}{2a}- w)\\]\nhas to get as close to $0$ as possible when we want to minimize \n$d_{P,{f_2}}$. For $w \\leq \\nicefrac{1}{2a}$ you only have $x = 0$ as a minimum.\nFor all other points $P = (0, w)$, there are exactly two minima $x_{1,2} = \\pm \\sqrt{\\frac{w - \\frac{1}{2a}}{a}}$.\n$\\qed$\n\\end{proof}\n\n\\subsection{Solution formula}\nWe start with the graph that was moved so that $f_2 = ax^2$.\n\n\\textbf{Case 1:} $P$ is on the symmetry axis, hence $x_P = - \\frac{b}{2a}$.\n\nIn this case, we have already found the solution. If $w = y_P + \\frac{b^2}{4a} - c > \\frac{1}{2a}$,\nthen there are two solutions:\n\\[x_{1,2} = \\pm \\sqrt{\\frac{w - \\frac{1}{2a}}{a}}\\]\nOtherwise, there is only one solution $x_1 = 0$.\n\n\\textbf{Case 2:} $P = (z, w)$ is not on the symmetry axis, so $z \\neq 0$. Then you compute:\n\\begin{align}\n  d_{P,{f_2}}(x)  &= \\sqrt{(x-z)^2 + (f_2(x)-w)^2}\\\\\n    &= \\sqrt{(x^2 - 2zx + z^2) + ((ax^2)^2 - 2 awx^2 + w^2)}\\\\\n    &= \\sqrt{a^2x^4 + (1- 2 aw)x^2 +(- 2z)x + z^2 + w^2}\\\\\n  0 &\\stackrel{!}{=} \\Big(\\big(d_{P, {f_2}}(x)\\big)^2\\Big)' \\\\\n    &= 4a^2x^3 + 2(1- 2 aw)x +(- 2z)\\\\\n    &= 2 \\left (2a^2x^3 + (1- 2 aw)x \\right ) - 2z\\\\\n    \\Leftrightarrow 0 &\\stackrel{!}{=} 2a^2x^3  + (1- 2 aw) x - z\\\\\n\\stackrel{a \\neq 0}{\\Leftrightarrow} 0 &\\stackrel{!}{=} x^3 + \\underbrace{\\frac{1- 2 aw}{2 a^2}}_{=: \\alpha} x  + \\underbrace{\\frac{-z}{2 a^2}}_{=: \\beta}\\\\\n    &= x^3 + \\alpha x + \\beta\\label{eq:simple-cubic-equation-for-quadratic-distance}\n\\end{align}\n\nLet $t$ be defined as\n\\[t := \\sqrt[3]{\\sqrt{3 \\cdot (4 \\alpha^3 + 27 \\beta^2)} -9\\beta}\\]\n\n\\subsubsection{Analyzing $t$}\n\\input{analyzing-t.tex}\n\n\\subsubsection{Solutions of $x^3 + \\alpha x + \\beta$}\nI will make use of the following identities:\n\\begin{align*}\n    (1-i \\sqrt{3})^2     &= -2 (1+i \\sqrt{3})\\\\\n    (1+i \\sqrt{3})^2     &= -2 (1-i \\sqrt{3})\\\\\n    (1 \\pm i \\sqrt{3})^3 &= -8\\\\\n    (a-b)^3              &= a^3-3 a^2 b+3 a b^2-b^3\n\\end{align*}\n\n\\textbf{Case 2.1:} \n\\input{quadratic-case-2.1}\n\\goodbreak\n\\textbf{Case 2.2:}\n\\input{quadratic-case-2.2}\n\n\\textbf{Case 2.3:}\n\\input{quadratic-case-2.3}\n\n\\goodbreak\nSo the solution is given by\n\\todo[inline]{NO! Currently, there are errors in the solution.\nCheck $f(x) = x^2$ and $P=(-2,4)$. 
Solution should be $x_1 = -2$, but it isn't!}\n\\begin{align*}\nx_S &:= - \\frac{b}{2a} \\;\\;\\;\\;\\; \\text{(the symmetry axis)}\\\\\nw &:= y_P+\\frac{b^2}{4a}-c \\;\\;\\; \\text{ and } \\;\\;\\; z := x_P+\\frac{b}{2a}\\\\\n\\alpha &:= \\frac{1- 2 aw}{2 a^2} \\;\\;\\;\\text{ and }\\;\\;\\; \\beta := \\frac{-z}{2 a^2}\\\\\nt &:= \\sqrt[3]{\\sqrt{3 \\cdot (4 \\alpha^3 + 27 \\beta^2)} -9\\beta}\\\\\n\\underset{x\\in\\mdr}{\\arg \\min d_{P,f}(x)} &= \\begin{cases}\n     x_1 = x_S + \\sqrt{\\frac{w - \\frac{1}{2a}}{a}} \\text{ and }   &\\text{if } x_P = x_S \\text{ and } w >  \\frac{1}{2a} \\\\\n     x_2 = x_S - \\sqrt{\\frac{w - \\frac{1}{2a}}{a}}\\\\\n     x_1 = x_S   &\\text{if } x_P = x_S \\text{ and } w \\leq  \\frac{1}{2a} \\\\\n     x_1 = \\frac{t}{\\sqrt[3]{18}} - \\frac{\\sqrt[3]{\\frac{2}{3}} \\alpha }{t}   &\\text{if } x_P \\neq x_S\n    \\end{cases}\n\\end{align*}\n\\clearpage\n\n\\section{Defined on a closed interval $[a,b] \\subseteq \\mdr$}\nNow the problem isn't as simple as with constant and linear\nfunctions.\n\nIf one of the minima in $S_2(f, P)$ is in $[a,b]$, this will be the\nshortest distance, as there are no shorter distances.\n\n\\todo[inline]{\nThe following IS WRONG! Can I include it to help the reader understand the \nproblem?}\n\nIf the function (defined on $\\mdr$) has only one shortest distance\npoint $x$ for the given $P$, it's also easy: The point in $[a,b]$ that\nis closest to $x$ will have the shortest distance. \n\n\\[\\underset{x\\in[a,b]}{\\arg \\min d_{P,f}(x)} = \\begin{cases}\n S_2(f, P) \\cap [a,b] &\\text{if } S_2(f, P) \\cap [a,b] \\neq \\emptyset \\\\\n              \\Set{a} &\\text{if } |S_2(f, P)| = 1 \\text{ and } S_2(f, P) \\ni x < a\\\\\n              \\Set{b} &\\text{if } |S_2(f, P)| = 1 \\text{ and } S_2(f, P) \\ni x > b\\\\\n                 todo &\\text{if } |S_2(f, P)| = 2 \\text{ and } S_2(f, P) \\cap [a,b] = \\emptyset\n    \\end{cases}\\]\n", "meta": {"hexsha": "ffbb78b444ecd17675f69c44f3c376ac1d1b561d", "size": 10098, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/math-minimal-distance-to-cubic-function/quadratic-functions.tex", "max_stars_repo_name": "keithmannock/LaTeX-examples", "max_stars_repo_head_hexsha": "6829f6cf9710b314a4bf0b64abdae5bcf6997fd0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-11-02T10:09:12.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-24T22:16:18.000Z", "max_issues_repo_path": "documents/math-minimal-distance-to-cubic-function/quadratic-functions.tex", "max_issues_repo_name": "everbot/LaTeX-examples", "max_issues_repo_head_hexsha": "9558d8b3c19776cb068b9753dcd3f88645dd7134", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documents/math-minimal-distance-to-cubic-function/quadratic-functions.tex", "max_forks_repo_name": "everbot/LaTeX-examples", "max_forks_repo_head_hexsha": "9558d8b3c19776cb068b9753dcd3f88645dd7134", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.4845814978, "max_line_length": 161, "alphanum_fraction": 0.5787284611, "num_tokens": 4122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5729351141547074}}
{"text": "\\section{Question 4}\nIn this problem we use masserati ghibli data. For cancelling oscillation we use derivative controller to make overshoot low and increase gain to be fast enough with low overshoot.\n\\begin{center}\n    \\begin{table}[H]\\caption{Maserati ghibli data}\n        \\centering\n        \\begin{tabular}{ c c c }\n            \\hline\n            Paramete & Unit & Value \\\\ \n            \\hline\n            M & kg & 1900 \\\\\n            $C_d$ & 1 & 0.29\\\\\n            \\hline\n           \\end{tabular}\n    \\end{table}\n\\end{center}\nIn this problem we use \n\\newpage\n\\begin{itemize}\n    \\item system step responde\n    \\begin{figure}[H]\n        \\caption{system step responde}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_system_respond.png}\n    \\end{figure}\n    \\item system rlocus\n    \\begin{figure}[H]\n        \\caption{system rlocus plot}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_system_rlocus.png}\n    \\end{figure}\n    \\item system with controller step responde\n    \\begin{figure}[H]\n        \\caption{system with controller step responde}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_system_controller_respond.png}\n    \\end{figure}\n    \\item system with controller rlocus\n    \\begin{figure}[H]\n        \\caption{system with controller rlocus plot}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_system_controller_rlocus.png}\n    \\end{figure}\n    \\item system with and without controller step responde\n    \\begin{figure}[H]\n        \\caption{system with and without controller step responde}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_respond_all.png}\n    \\end{figure}\n    \\item system with and without controller rlocus\n    \\begin{figure}[H]\n        \\caption{system with and withou controller rlocus plot}\n        \\centering\n        \\includegraphics[width=12cm]{../Figure/Q4/Q4_rlocus.png}\n    \\end{figure}\n\\end{itemize}", "meta": {"hexsha": "bbad9733fc51e9aad63f8fb1ca1c0230414fa7c5", "size": 1955, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW III/Report/Q4/Q4.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW III/Report/Q4/Q4.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW III/Report/Q4/Q4.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.5454545455, "max_line_length": 179, "alphanum_fraction": 0.6475703325, "num_tokens": 546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.7090191337850932, "lm_q1q2_score": 0.5729351092388872}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\n\\setcounter{footnote}{0} \n\n\\chapter{Appendices}\n\n\\section{ML Representations}\n\\subsection{SOAP} \\label{section::soap_derivation}\n\nA SOAP vector\\cite{Bartok2013} $\\boldsymbol{p}$ is given by concatenating elements,\n\\begin{equation}\n\tp_{nn'l}^{Z_1, Z_2} = \\pi \\sqrt{\\frac{8}{2l + 1}} \\sum_m \\left(c_{nlm}^{Z_1}\\right)^* c_{n'lm}^{Z_2}\n\\end{equation}\nfor every pair of atoms with atomic number $Z$ and the coefficients ($c_{nlm}$) are of radial and angular basis functions centred a particular atom given by,\n\\begin{equation}\n\tc_{nlm}^Z = \\int_\\mathcal{R} g_n(r) Y_{lm}(\\phi,\\theta) \\rho^Z(\\boldsymbol{r})\\; \\text{d}V\n\\end{equation}\nwhere the atomic density is given as a sum over 3D Gaussian centred on each atom $i$,\n\\begin{equation}\n\t\\rho^Z(\\boldsymbol{r}) = \\sum_{i \\in \\{Z\\}} \\exp\\left(-\\frac{|\\boldsymbol{r} - \\boldsymbol{R}_i|^2}{2\\sigma_\\text{at}^2}\\right)\n\\end{equation}\nA component of the neighbour density is then ($a := 1/2\\sigma_\\text{at}^2$), \n% https://journals.aps.org/prb/pdf/10.1103/PhysRevB.87.184115\n% https://iopscience.iop.org/article/10.1088/0953-4075/22/1/004/pdf\n\\begin{equation}\n\t\\text{e}^{-a|\\boldsymbol{r} - \\boldsymbol{R}_i|^2} = 4\\pi \\text{e}^{-a(r^2+R_i^2)}i_l(2arR_i) \\sum_{l'm'}Y_{l'm'}(\\phi, \\theta) Y_{l'm'}^*(\\boldsymbol{R}_i)\n\\end{equation}\ninserting and using the orthogonality of the spherical harmonics integrated over theta and phi,\n\\begin{equation}\n\t\\begin{split}\n\t\tc_{nlm}^Z &= \\int_\\mathcal{R} g_n(r) Y_{lm}(\\phi,\\theta)  \\sum_{i \\in \\{Z\\}} \\exp\\left(-a|\\boldsymbol{r} - \\boldsymbol{R}_i|^2\\right) \\text{d}V \\\\\n\t\t&= \\sum_{i \\in \\{Z\\}} \\int_\\mathcal{R} g_n(r) Y_{lm}(\\phi,\\theta) \\exp\\left(-a|\\boldsymbol{r} - \\boldsymbol{R}_i|^2\\right) \\text{d}V \\\\\n\t\t&=  4\\pi \\sum_{i \\in \\{Z\\}}\\sum_{l'm'} \\int_\\mathcal{R} g_n(r) Y_{lm}(\\phi,\\theta) \\text{e}^{-a(r^2+R_i^2)}i_l(2arR_i)Y_{l'm'}(\\phi, \\theta) Y_{l'm'}^*(\\boldsymbol{R}_i)  \\text{d}V \\\\\n\t\t&=4\\pi \\sum_{i \\in \\{Z\\}}Y_{lm}^*(\\boldsymbol{R}_i) \\int_\\mathcal{\\tilde{R}} g_n(r) \\text{e}^{-a(r^2+R_i^2)}i_l(2arR_i)r^2\\; \\text{d}r\n\t\\end{split}\n\\end{equation}\nwhere $i_l$ is a modified spherical Bessel function of the first kind. 
Inserting an orthonormal radial basis as $g_n(r)$,\n\\begin{equation}\n\t\\begin{split}\n\t\tc_{nlm}^Z &=4\\pi \\sum_{i \\in \\{Z\\}}Y_{lm}^*(\\boldsymbol{R}_i) \\int_\\mathcal{\\tilde{R}} \\sum_{n'}^{n_\\text{max}} \\beta_{nn'l}r^l e^{-\\alpha_{n'l}r^2}\\text{e}^{-a(r^2+R_i^2)} i_l(2arR_i)r^2\\; \\text{d}r \\\\\n\t\t&= 4\\pi \\sum_{i \\in \\{Z\\}}Y_{lm}^*(\\boldsymbol{R}_i)\\text{e}^{-aR_i^2} \\sum_{n'}^{n_\\text{max}} \\beta_{nn'l}\\int_\\mathcal{\\tilde{R}} r^{l + 2}  \\text{e}^{-(\\alpha_{n'l} + a)r^2} i_l(2arR_i)\\; \\text{d}r.\n\t\\end{split}\n\\end{equation}\n\nEvaluating the integral where $b  := \\alpha_{n'l} + a$ and $c_i:= 2aR_i$,\n\n\\begin{equation}\n\tI_{iln'} = \\int_0^\\infty r^{l+2} \\text{e}^{-br^2}i_l(c_i r)\\; \\text{d}r\n\\end{equation}\n\nusing $i_\\lambda(z) = i^{-\\lambda}j_\\lambda(iz)$ where from Gradshteyn and Ryzhik,\\cite{Gradshteyn2007} \n% https://arxiv.org/pdf/1908.07374.pdf\n\n\\[\\int_0^\\infty x^{\\lambda + 2}\\text{e}^{-k_1x^2}j_\\lambda(k_2x) \\text{d}x =  \n\\sqrt{\\frac{\\pi}{2}} \\frac{k_2^\\lambda}{(2k_1)^{\\lambda + 3/2}}\\exp\\left(-\\frac{k_2^2}{4k_1}\\right)\\]\n\nso the required integral is with $\\lambda = l, k_1 = b, k_2 = ic_i$,\n\n\\begin{equation}\n\t\\begin{split}\n\t\tI_{iln'} &= \\sqrt{\\frac{\\pi}{2}} \\frac{c_i^l}{(2b)^{l+3/2}}\\exp\\left(\\frac{c_i^2}{4b}\\right) \\\\\n\t\t&=  \\sqrt{\\frac{\\pi}{2}} \\frac{(2aR_i)^l}{(2(\\alpha_{n'l} + a))^{l+3/2}}\\exp\\left(\\frac{(aR_i)^2}{\\alpha_{n'l} + a}\\right)\n\t\\end{split}\n\\end{equation}\nso then,\n\n\\begin{equation}\n\t\\begin{split}\n\t\tc_{nlm}^Z &= \\sqrt{8\\pi^3} \\sum_{i \\in \\{Z\\}}Y_{lm}^*(\\boldsymbol{R}_i)\\,\\text{e}^{-aR_i^2} \\sum_{n'}^{n_\\text{max}} \\beta_{nn'l} \\frac{(2aR_i)^l}{(2(\\alpha_{n'l} + a))^{l+3/2}}\\exp\\left(\\frac{(aR_i)^2}{\\alpha_{n'l} + a}\\right).\n\t\\end{split}\n\\end{equation}\n\nUsing real spherical harmonic functions,\n\n\\begin{equation}\n\tY_{l,m}(\\theta, \\phi) = \\begin{cases}\n\t\t\\sqrt{2}(-1)^m \\,\\text{Im}\\{Y_l^{|m|}(\\theta, \\phi)\\} &\\quad \\text{if } m < 0 \\\\\n\t\tY_l^{0}(\\theta, \\phi) &\\quad \\text{if } m = 0 \\\\\n\t\t\\sqrt{2}(-1)^m\\,\\text{Re}\\{Y_l^{m}(\\theta, \\phi)\\} &\\quad \\text{if } m > 0 \\\\\n\t\\end{cases}\n\\end{equation}\nwhere,\n\\begin{equation}\n\tY_l^m (\\theta, \\phi) = \\sqrt{\\frac{(2l+1)(l- m)!}{4\\pi (l+m)!}} P_l^{m}(\\cos\\theta)e^{im\\phi}.\n\\end{equation}\n
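\nFor concreteness, this case analysis can be written directly in code. The following is a minimal sketch (added for illustration, not part of the derivation) using \\texttt{scipy}, noting scipy's $(m, l)$ argument ordering and azimuth-first angle convention:\n\\begin{verbatim}\n# Sketch: real spherical harmonics built from scipy's complex Y_l^m.\nimport numpy as np\nfrom scipy.special import sph_harm\n\ndef real_sph_harm(l, m, theta, phi):\n    # scipy's sph_harm(m, l, azimuth, polar) returns complex Y_l^m\n    y = sph_harm(abs(m), l, phi, theta)\n    if m < 0:\n        return np.sqrt(2.0) * (-1.0)**m * y.imag\n    if m > 0:\n        return np.sqrt(2.0) * (-1.0)**m * y.real\n    return y.real\n\nprint(real_sph_harm(1, 0, 0.3, 1.1))  # ~ sqrt(3/(4 pi)) * cos(0.3)\n\\end{verbatim}\n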
\n\n\\section{Solution Phase Entropy}\n\\subsection{Partition Functions} \\label{section::appendix_igm_partition_functions}\n\nFor completeness, the well-known integral approximation to the particle in a box PF is reproduced below. Provided the gaps between energy levels are small, the sum can be replaced by an integral to give,\n\\begin{equation}\n\tq_\\text{trans} \\approx \\iiint_0^\\infty dn_x\\;dn_y\\;dn_z\\; \\exp{ {\\Big \\{} -{\\beta}\\cdot  \\frac{\\hbar^2 \\pi^2}{2ml^2}(n_x^2 + n_y^2 + n_z^2)}{\\Big \\}}.\n\\end{equation}\n\nConverting into spherical polar coordinates with $n_x^2 + n_y^2 + n_z^2 = n_r^2$ and integrating over $n_\\theta$ and $n_\\phi$ in the positive octant,\n\\begin{equation}\n\t\\begin{aligned}\n\t\tq_\\text{trans} &\\approx \\frac{1}{8} \\cdot 4\\pi \\int_0^\\infty dn_r\\; n_r^2\\exp{ {\\Big \\{} -{\\beta}\\cdot  \\frac{\\hbar^2 \\pi^2}{2ml^2}n_r^2}{\\Big \\}} \\\\\n\t\t&\\approx \\frac{\\pi}{2} \\cdot \\frac{\\sqrt{\\pi}}{4} {\\Big (} \\frac{\\hbar^2 \\pi^2\\beta}{2ml^2} {\\Big )}^{-3/2} \\\\\n\t\t&\\approx \\pi^{3/2} {\\Big (} \\frac{ml^2}{2\\hbar^2 \\pi^2\\beta} {\\Big )}^{3/2} \\\\\n\t\t&\\approx {\\Big (} \\frac{2\\pi m k_B T}{h^2} {\\Big )}^{3/2} V \n\t\t\\label{q_trans_pib}\n\t\\end{aligned}\n\\end{equation}\nwhere the standard integral $\\int_{0}^{\\infty} dx\\; x^2 e^{-ax^2} = \\sqrt{\\pi}/(4a^{3/2})$ and $V = l^3$ have been used. Interestingly, \\eqref{q_trans_pib} is identical to that obtained classically \\eqref{q_trans_pib_classical}.\n\\begin{equation}\n\t\\begin{aligned}\n\t\tq_\\text{trans, c} &= \\frac{1}{h^{3}}\\int\\int e^{-\\beta H(\\boldsymbol{p}, \\boldsymbol{x})} \\; d\\boldsymbol{p}^{3} d\\boldsymbol{x}^{3} \\\\\n\t\t&= \\frac{1}{h^{3}}\\int_{-\\infty}^\\infty e^{-\\beta \\boldsymbol{p}^2/2m} \\; d\\boldsymbol{p}^{3} \\int_0^l e^{-\\beta \\cdot 0} \\;  d\\boldsymbol{x}^{3} \\\\\n\t\t&= \\frac{1}{h^{3}} {\\Big \\{}  \\int_{-\\infty}^\\infty e^{-\\beta {p_k}^2/2m} \\; d{p_k} {\\Big \\}}^3 {\\Big \\{}  \\int_0^l  \\;  dx_k {\\Big \\}}^3 \\\\\n\t\t&= \\frac{l^3}{h^{3}} {\\Big \\{}  \\sqrt{\\frac{\\pi}{\\beta/(2m)}} {\\Big \\}}^3 \\\\\n\t\t&={\\Big (} \\frac{2\\pi m k_B T}{h^2} {\\Big )}^{3/2} V.\n\t\t\\label{q_trans_pib_classical}\n\t\\end{aligned}\n\\end{equation}\n\n\\subsection{Classical Truncated Harmonic Oscillator}\n\nThe classical entropy of the truncated harmonic oscillator with potential energy, \n\\begin{equation}\n\tV(x) = \\begin{cases}\n\t\t\\frac{1}{2}m\\omega^2x^2 &\\text{if} \\quad x > -r_e \\\\\n\t\t+\\infty &\\text{if} \\quad x \\le -r_e\n\t\\end{cases} \n\\end{equation}\nis derived as follows. Only the position integral is truncated (the momentum integral remains over the whole real line), so the classical partition function is,\\footnote{Using $\\int_{a}^{+\\infty}  e^{-bx^2} dx = \\sqrt{\\pi/b}\\;\\text{erfc}(a\\sqrt{b})/2$}\n\\begin{equation}\n\t\\begin{aligned}\n\t\tZ_c &= \\frac{1}{h}\\int_{-\\infty}^\\infty \\; dp\\int_{-\\infty}^{\\infty} \\; dx \\; e^{-\\beta E(x,p)} \\\\\n\t\t&= \\frac{1}{h}\\int_{-\\infty}^\\infty \\; dp \\; e^{-\\beta p^2/2m} \\int_{-r_e}^{\\infty} \\; dx \\; e^{-\\beta m\\omega^2x^2/2} \\\\\n\t\t&=\\frac{1}{h}\\cdot \\sqrt{\\frac{2\\pi m}{\\beta}} \\cdot \\frac{\\sqrt{\\pi}\\;\\text{erfc}(-r_e \\sqrt{\\beta m\\omega^2/2})}{2\\sqrt{\\beta m\\omega^2/2}} \\\\\n\t\t&=\\frac{\\pi k_B T}{h\\omega} \\;\\text{erfc}(-r_e \\sqrt{\\beta m\\omega^2/2})\n\t\\end{aligned}\n\\end{equation}\n\nwhere $\\text{erfc}(x)$ is the complementary error function. 
The derivative with respect to temperature, writing $b := r_e^2m\\omega^2/2k_B$ so that $Z_c = (\\pi k_B/h\\omega)\\, T\\, \\text{erfc}(-\\sqrt{b/T})$, is, \n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\frac{d Z_c}{dT} &= \\frac{d}{dT} {\\Big \\{} \\frac{\\pi k_B T}{h\\omega} \\;\\text{erfc}(- \\sqrt{b/T}) {\\Big \\}} \\\\\n\t\t&= \\frac{\\pi k_B}{h\\omega} {\\Bigg (} \\text{erfc}(- \\sqrt{b/T}\\,) - \\sqrt{\\frac{b}{\\pi T}}\\, e^{-b/T} {\\Bigg )}\n\t\\end{aligned}\n\\end{equation}\n\n\\subsection{Classical Morse Oscillator}\n\nThe classical entropy of a Morse oscillator as a model of a chemical bond requires an upper bound to the total energy to prevent the configurational integral becoming unbounded. For a total energy,\n\\begin{equation}\n\tE(p, x) = \\frac{p^2}{2m} + D_e (1 - e^{-a(r-r_e)})^2\n\\end{equation}\nwhere $r$ is the bond length and $r_e$ the equilibrium bond length. The maximum energy is the dissociation energy of the bond ($D_e$) such that the momentum is bounded,\n\\begin{equation}\n\tE(x, p) < D_e \\qquad \\implies \\qquad p_\\pm = \\pm \\sqrt{2mD_e(1 - (1-e^{-ax})^2)}\n\\end{equation}\nwhere $x$ is the displacement from equilibrium, with $x > -\\frac{1}{a}\\ln(2)$ to ensure the momentum is real. \n\nThe classical partition function is then,\\footnote{Using $\\int_{-a}^{+a}  e^{-bx^2} dx = \\sqrt{\\pi/b}\\;\\text{erf}(a\\sqrt{b})$ and that $\\text{erf}$ is an even function.}\n\\begin{equation}\n\t\\begin{aligned}\n\t\tZ_c &= \\frac{1}{h}\\int_{-\\infty}^\\infty \\; dp\\int_{-\\infty}^{\\infty} \\; dx \\; e^{-\\beta E(x,p)} \\\\\n\t\t&= \\frac{1}{h}\\int_{p_-}^{p_+} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\; e^{-\\beta \\{{p^2}/{2m} + D_e (1 - e^{-ax})^2\\}}  \\; dx  \\; dp \\\\\n\t\t&= \\frac{1}{h}\\int_{p_-}^{p_+} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\; e^{-\\beta {p^2}/{2m}} e^{-\\beta D_e (1 - e^{-ax})^2}  \\; dx  \\; dp \\\\\n\t\t&= \\frac{1}{h} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\;\\int_{p_-}^{p_+}\\; e^{-\\beta {p^2}/{2m}} e^{-\\beta D_e (1 - e^{-ax})^2}  \\; dp  \\; dx \\\\\n\t\t&= \\frac{1}{h} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\; e^{-\\beta D_e (1 - e^{-ax})^2}  \\cdot \\frac{\\sqrt{\\pi}\\;\\text{erf}(p\\sqrt{\\beta/2m})}{2\\sqrt{\\beta/2m}} \\Bigg|_{p_-}^{p_+} \\; dx \\\\\n\t\t&= \\frac{1}{h}\\sqrt{\\frac{2m\\pi}{\\beta}} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\; e^{-\\beta D_e (1 - e^{-ax})^2}   \\;\\text{erf}(p_+\\sqrt{\\beta/2m}) \\; dx \\\\\n\t\t&= \\frac{1}{h}\\sqrt{\\frac{2m\\pi}{\\beta}} \\int_{-\\frac{1}{a}\\ln(2)}^{\\infty}\\; e^{-\\beta D_e (1 - e^{-ax})^2}   \\;\\text{erf}{\\Big (} \\sqrt{\\beta D_e(1 - (1-e^{-ax})^2)}{\\Big )} \\; dx. \n\t\\end{aligned}\n\t\\label{morse_pf_integral}\n\\end{equation}\n\nAlthough \\eqref{morse_pf_integral} has no analytical solution, the integrand is smooth allowing for efficient numerical evaluation (\\figurename{ \\ref{morse_pf_integrand_vs_x}}).\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=12cm]{8/figs/morse_pf_integrand_vs_x}\n\t\\vspace{0.2cm}\n\t\\hrule\n\t\\caption{Integrand of $Z_c$ vs $x$. $D_e$ = 400 kJ mol$^{-1}$, $T$ = 298.15 K, $m = 1$ amu.}\n\t\\label{morse_pf_integrand_vs_x}\n\\end{figure}\n
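\nSince the integral must be evaluated numerically anyway, a short \\texttt{scipy} sketch suffices. This is my illustration, not thesis code; the Morse width parameter $a$ is an assumed value, since the figure only fixes $D_e$, $T$ and $m$:\n\\begin{verbatim}\n# Sketch: numerical evaluation of the Morse configurational integral.\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import erf\nfrom scipy.constants import k, h, N_A, atomic_mass\n\nT = 298.15                 # K\nDe = 400e3 / N_A           # 400 kJ/mol -> J per molecule\nm = 1.0 * atomic_mass      # 1 amu in kg\na = 1.0e10                 # 1/m, assumed Morse width parameter\nbeta = 1.0 / (k * T)\n\ndef integrand(x):\n    s = 1.0 - np.exp(-a * x)\n    return np.exp(-beta * De * s**2) * erf(np.sqrt(beta * De * (1.0 - s**2)))\n\nI, err = quad(integrand, -np.log(2.0) / a, np.inf)\nZc = np.sqrt(2.0 * np.pi * m / beta) * I / h\nprint(Zc)\n\\end{verbatim}\n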
\n\n\\subsection{Exponential Well}\n\nTo derive the entropy and internal energy due to translation of a molecule in a 3D exponential well, the temperature derivative of the partition function ($Z_c^\\text{exp}$) is required. For,\n\\begin{equation}\n\tZ_c^\\text{exp} =\\left(\\frac{2m\\pi k_B T}{h^2} \\right)^{3/2} \\int_0^\\infty 4\\pi r^2 e^{- k(e^{ar} - 1) / k_B T} \\; \\text{d}r :=  \\Lambda(T) I(T)\n\\end{equation}\nthe derivative is,\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\left(\\frac{\\partial Z_c^\\text{exp}}{\\partial T}\\right)_{N , V} &= \\left(\\frac{\\partial \\Lambda}{\\partial T}\\right)_{N , V} I(T) + \\Lambda(T)\\left(\\frac{\\partial I}{\\partial T}\\right)_{N , V} \\\\\n\t\t&= \\Lambda(T) \\frac{3}{2}T^{-1} I(T) + \\Lambda(T) \\int_0^\\infty 4\\pi r^2 \\frac{k(e^{ar} - 1)}{k_B T^2}  e^{- k\\beta (e^{ar} - 1)} \\; \\text{d}r \\\\\n\t\t&= \\frac{3}{2T} Z_c^\\text{exp} + \\frac{\\Lambda(T) k\\beta}{T} \\int_0^\\infty 4\\pi r^2 e^{ar}  e^{- k\\beta(e^{ar} - 1)} \\; \\text{d}r \\\\\n\t\t&\\qquad\\qquad\\qquad\\qquad-  \\frac{\\Lambda(T) k\\beta}{T}  \\int_0^\\infty 4\\pi r^2 e^{- k\\beta(e^{ar} - 1)} \\; \\text{d}r \\\\\n\t\t&= \\frac{1}{T} \\left(  \\frac{3}{2} Z_c^\\text{exp} + \\Lambda(T) k\\beta \\int_0^\\infty 4\\pi r^2 e^{ar}  e^{- k\\beta(e^{ar} - 1)} \\; \\text{d}r -  k\\beta Z_c^\\text{exp} \\right) \\\\\n\t\t&= \\frac{Z_c^\\text{exp}}{T} \\left( \\frac{3}{2} - k\\beta + \\frac{\\Lambda(T) k\\beta}{Z_c^\\text{exp}} \\int_0^\\infty 4\\pi r^2 e^{ar}  e^{- k\\beta(e^{ar} - 1)} \\; \\text{d}r\n\t\t\\right)\n\t\\end{aligned}\n\\end{equation}\n\n\n\\clearpage\n\\subsection{Test Sets} \\label{section::appendix_entropy_test_cases}\n\nTest set {\\bfseries{A}} contains two reactions from the Morita Baylis--Hillman reaction for which experimental reaction $\\Delta S$ values are available (\\tablename{ \\ref{setA_data}}, \\figurename{ \\ref{plata_mbh}}).\\cite{Plata2015}\n\n\n\\begin{table}[h!]\n\t\\renewcommand{\\arraystretch}{1.5}\n\t\\begin{center}\n\t\t\\small\n\t\t\\begin{tabularx}{\\textwidth}{YYYY} \n\t\t\t\\toprule\n\t\t\tReaction & $T_\\text{avg}$ / K & {$T\\Delta S$} / \\kcal & {$\\Delta G$} / \\kcal \\\\\n\t\t\t\\hline\n\t\t\t% 4.184\n\t\t\t\\bfseries{R-MBH1}       & 317.6  & -7.9 & -3.2\\\\\n\t\t\t\\bfseries{R-MBH2}       & 328.4  & -10.2 & -3.9\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabularx}\n\t\\end{center}\n\t\n\t\\caption{Thermodynamic data for {\\bfseries{A}}. No error is given. $T_\\text{avg}$ values are averages over the temperature range used to calculate $\\Delta S$.}\n\t\\label{setA_data}\n\\end{table}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=13cm]{8/figs/plata_mbh}\n\t\\vspace{0.2cm}\n\t\\hrule\n\t\\caption{Reactions in test set {\\bfseries{A}}. }\n\t\\label{plata_mbh}\n\\end{figure}\n\nTest set {\\bfseries{B}} comprises entropies of activation for 10 S$_\\text{N}$2 reactions collated in ref. \\cite{Vlasov2006} (\\figurename{ \\ref{vlasov_sn2}}).  
These were chosen based on modest structural complexity, while spanning a wide range of $\\Delta S^\\ddagger$ values (\\tablename{ \\ref{setB_data}}).\n\\\\\n\\begin{table}[h!]\n\t\\renewcommand{\\arraystretch}{1.5}\n\t\\begin{center}\n\t\t\\small\n\t\t\\begin{tabularx}{\\textwidth}{YYYYY} \n\t\t\t\\toprule\n\t\t\tReaction & $T$ / K & {$\\Delta H^\\ddagger$} / \\kcal & {$T\\Delta S^\\ddagger$} / \\kcal & {$\\Delta G^\\ddagger$} / \\kcal\\\\\n\t\t\t\\hline\n\t\t\t\\bfseries{R1} & 369.1 & 22.3 & -3.9 & 26.0\\\\\n\t\t\t\\bfseries{R2} & 369.1 & 18.6 & -6.1 & 24.4\\\\\n\t\t\t\\bfseries{R3} & 322.4 & 21.5 & -1.9 & 23.4\\\\\n\t\t\t\\bfseries{R4} & 303.1 & 15.9 & -3.7 & 19.6\\\\\n\t\t\t\\bfseries{R5} & 303.1 & 16.5 & -2.1 & 18.6\\\\\n\t\t\t\\bfseries{R6} & 303.1 & 16.4 & -4.9 & 21.3\\\\\n\t\t\t\\bfseries{R7} & 303.1 & 18.5 & -2.0 & 20.5\\\\\n\t\t\t\\bfseries{R8} & 252.2 & 11.9 & -3.1 & 15.0\\\\\n\t\t\t\\bfseries{R9} & 303.1 & 18.6 & -2.4 & 20.9\\\\\n\t\t\t\\bfseries{R10} & 303.1 & 20.2 & -1.3 & 21.5\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabularx}\n\t\\end{center}\n\t\n\t\\caption{Activation parameters for test set {\\bfseries{B}}. Reactions are shown in \\figurename{ \\ref{vlasov_sn2}}. Where errors are quoted: ${\\Delta S}_\\text{err} < \\pm2.5$ J K$^{-1}$ mol$^{-1}$, ${\\Delta H}_\\text{err} < \\pm0.2$ \\kcal.}\n\t\\label{setB_data}\n\\end{table}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=14.5cm]{8/figs/vlasov_sn2}\n\t\\vspace{0.2cm}\n\t\\hrule\n\t\\caption{Reactions in test set {\\bfseries{B}}.}\n\t\\label{vlasov_sn2}\n\\end{figure}\n\n\n\\clearpage\n\\end{document}", "meta": {"hexsha": "5877cb4493f16fa18b4b583040cbafe1ab6b38bb", "size": 14672, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8/appendicies.tex", "max_stars_repo_name": "t-young31/thesis", "max_stars_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8/appendicies.tex", "max_issues_repo_name": "t-young31/thesis", "max_issues_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8/appendicies.tex", "max_forks_repo_name": "t-young31/thesis", "max_forks_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.5878136201, "max_line_length": 304, "alphanum_fraction": 0.614980916, "num_tokens": 6410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430394931456, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.572911554865102}}
{"text": "\\section{\\src{comp_caldyn_vert}}\n\n\\subsection{Description}\n\nKernel \\src{comp_caldyn_vert} is taken from the original subroutine\n\\src{compute_caldyn_vert} in \\DYNAMICO.\n%\nThis subroutine is originally defined in module \\src{caldyn_gcm_mod}.\n%\nThis module defines subroutine \\src{caldyn}, which is the main\nsubroutines for dynamics part of the model, and several sub-subroutines\nfor various terms in the governing equation, such as potential\nvorticity, geopotential, etc.\n%\nThis subroutine calculates vertical mass flux and vertical transport.\n\n\n\n\\subsection{Discretization and code}\n\n\n\\autoref{l:definition_comp_caldyn_vert} shows the definition part of this subroutine,\nand \\autoref{f:pad_comp_caldyn_vert} shows the PAD of this.\n\n\n\\begin{LstF90}[%\ncaption={Definition part of \\src{compute_caldyn_vert}},%\nlabel={l:definition_comp_caldyn_vert}%\n]\nSUBROUTINE compute_caldyn_vert(u,theta,rhodz,convm, wflux,wwuu, dps,dtheta_rhodz,du)\n  USE icosa\n  USE disvert_mod\n  USE exner_mod\n  USE trace\n  USE omp_para\n  IMPLICIT NONE\n    REAL(rstd),INTENT(IN)  :: u(iim*3*jjm,llm)\n    REAL(rstd),INTENT(IN)  :: theta(iim*jjm,llm)\n    REAL(rstd),INTENT(IN)  :: rhodz(iim*jjm,llm)\n    REAL(rstd),INTENT(INOUT)  :: convm(iim*jjm,llm)  ! mass flux convergence\n\n    REAL(rstd),INTENT(INOUT) :: wflux(iim*jjm,llm+1) ! vertical mass flux (kg/m2/s)\n    REAL(rstd),INTENT(INOUT) :: wwuu(iim*3*jjm,llm+1)\n    REAL(rstd),INTENT(INOUT) :: du(iim*3*jjm,llm)\n    REAL(rstd),INTENT(INOUT) :: dtheta_rhodz(iim*jjm,llm)\n    REAL(rstd),INTENT(INOUT) :: dps(iim*jjm)\n\n! temporary variable\n    INTEGER :: i,j,ij,l\n    REAL(rstd) :: p_ik, exner_ik\n    INTEGER,SAVE ::ij_omp_begin, ij_omp_end\n!$OMP THREADPRIVATE(ij_omp_begin, ij_omp_end)\n    LOGICAL,SAVE :: first=.TRUE.\n!$OMP THREADPRIVATE(first)\n\\end{LstF90}\n%\nWhere \\src{u}, \\src{theta}, \\src{rhodz} are wind velocity on the edge,\npotential temperature, and mass, respectively.\n\\src{convm}, \\src{wflux}, \\src{wwuu} are mass flux convergence, vertical mass flux,\nand \\src{wflux*u} on the edge, respectively.\n%\nLast three variables are time derivatives.\n\\src{du}, \\src{dtheta_rhodz}, \\src{dps} are for wind velocity on the edge,\nmass-weighted potential temperature, and surface pressure, respectively.\n%\nAll of these except \\src{dps} are two dimensional.\n%\nFirst dimension is for horizontal index, and the size depends on the\npoint where the variable is defined, since \\DYNAMICO adopts C-grid.\n%\nSecond dimension is for vertical index, and the size is \\src{llm},\nexcept \\src{llm+1} for \\src{wflux} and \\src{wwuu}, these are defined on\nthe half level in vertical, while others are defined on the full level.\n\n\\begin{figure}[tbp]\n \\centering\n \\includegraphics[scale=.5]{figs/caldyn_vert.pdf}\n \\caption{PAD of \\src{compute_caldyn_vert}}\n \\label{f:pad_comp_caldyn_vert}\n\\end{figure}\n\nMain part of this subroutine is consist of several $l$- and $ij$- double loop.\n%\nThe first double loop is to accumulate mass flux convergence from top to bottom,\nthen convert \\src{convm} at the lowest level to \\src{dps}.\n%\nThe second double loop is to compute vertical mass flux \\src{wflux}.\n%\nNote that the range of $l$-loop, because \\src{wflux} is defined on half\nvertical level and at the top and the bottom are already set by\nsubroutine \\src{caldyn_BC} as a boundary condition.\n%\nNext two double loop is to calculate convergence of potential\ntemperature \\src{dtheta_rhodz}.\n%\nAgain note that the range of 
two $l$-loops: \\src{dtheta_rhodz} is\ndefined on full levels and needs contributions from both the upper and lower\nfaces of each level.\n%\nThe next two double loops compute the vertical transport \\src{wwuu}, and\nadd it to \\src{du}.\n%\nNote the horizontal index here.\n%\n\\src{wwuu} and \\src{du} are defined on the edges\nof the control volume, so there are three statements in each double loop.\n\n\\clearpage\n\n\n\n\\subsection{Input data and result}\n\nAn input data file is prepared; you can download it from the official server using the\n\\file{data/download.sh} script.\n%\nThis data file is created by the original \\DYNAMICO\\footnotemark with the\nHeld-Suarez case parameter set included in the original source archive.\n%\n\\footnotetext{with slight modification by AICS.}\n%\nThe max/min/sum of the input/output data of the kernel subroutine are written to\nthe log.\n%\nBelow is an example for the \\src{$IAB_SYS=Ubuntu-gnu-ompi} case.\n\n\\begin{LstLog}\n [KERNEL] comp_caldyn_vert\n *** Start  initialize\n                iim, jjm, llm:    23    25    19\n             ij_begin, ij_end:    48   528\n     ij_begin_ext, ij_end_ext:    24   552\n             ll_begin, ll_end:     1    19\n         ll_beginp1, ll_endm1:     2    18\n        t_right, t_rup, t_lup:     1    23    22\n     t_left, t_ldown, t_rdown:    -1   -23   -22\n        u_right, u_rup, u_lup:     0  1173   575\n     u_left, u_ldown, u_rdown:    -1  1150   553\n           z_rup, z_up, z_lup:   598     0   597\n     z_ldown, z_down, z_rdown:   -23   575   -22\n                       dbg: g:     9.80000000\n +check[bp              ] max=  1.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  6.9254510678414132E+00\n *** Finish initialize\n *** Start kernel\n ### check point iteration:        1000\n ### Input ###\n +check[u               ] max=  4.1278968179782127E-01,min= -4.1278968179782127E-01,sum=  1.6791131703073393E+01\n +check[theta           ] max=  8.0139914420291746E+02,min=  0.0000000000000000E+00,sum=  3.8582633571973117E+06\n +check[rhodz           ] max=  1.2306877011993038E+03,min=  0.0000000000000000E+00,sum=  5.3979591836733194E+06\n +check[convm_prev      ] max=  1.0361970643226587E-03,min= -1.0359249303947807E-04,sum= -1.5233533963107249E-01\n +check[wflux_prev      ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[wwuu_prev       ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[du_prev         ] max=  3.4404317002518906E-03,min= -3.0804630348046005E-03,sum=  5.5048589972605033E-01\n +check[dtheta_rhodz_pre] max=  3.2251351666935379E-01,min= -3.3676276308628725E-02,sum= -5.3720539414185993E+01\n +check[dps_prev        ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n ### Output ###\n +check[convm           ] max=  6.9593389571287302E-03,min= -7.9269107622801825E-04,sum= -1.5227927267389074E+00\n +check[wflux           ] max=  4.1171035230973149E-04,min= -3.6145630748324665E-03,sum=  5.8901163077820706E-01\n +check[wwuu            ] max=  1.7149300599654128E-04,min= -1.8768192515764522E-04,sum= -5.0377733672629036E-04\n +check[du              ] max=  3.4404317002518906E-03,min= -3.0804630348046005E-03,sum=  5.5048632410110032E-01\n +check[dtheta_rhodz    ] max=  3.5427038431326496E-01,min= -4.2032604085394595E-02,sum= -5.3720539414186263E+01\n +check[dps             ] max=  6.8201521779861565E-02,min= -7.7683725470345790E-03,sum= -1.3213658793877174E+00\n ### final iteration:        1000\n ### Validation : 
grid-by-grid diff ###\n +check[convm           ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[wflux           ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[wwuu            ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[du              ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[dtheta_rhodz    ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[dps             ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n *** Finish kernel\n\\end{LstLog}\n\nCheck the lines below the \\src{``Validation : grid-by-grid diff''} line,\nwhich show the difference between the calculated output arrays and the\npre-calculated reference arrays.\nThese should be zero, or small enough to be acceptable.\n%\nThere are sample output log files in \\file{reference/}\nin each kernel program directory, for reference purposes.\n\n\\subsection{Sample of performance result}\n\nHere is an example of the performance result part of the log output,\nexecuted with the machine environment described in \\autoref{s:measuring_env}.\n%\nNote that in this program the kernel part is iterated 1000 times.\n\n\\begin{LstLog}\n *** Computational Time Report\n *** ID=001 : MAIN_comp_caldyn_vert            T=     0.156 N=   1000\n\\end{LstLog}\n\n", "meta": {"hexsha": "dfdc6f82d796fe738b410d912c08be0fbb71124d", "size": 8341, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/DYNAMICO/src/34_caldyn_vert.tex", "max_stars_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_stars_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/DYNAMICO/src/34_caldyn_vert.tex", "max_issues_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_issues_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/DYNAMICO/src/34_caldyn_vert.tex", "max_forks_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_forks_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-04T04:07:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T04:07:38.000Z", "avg_line_length": 42.9948453608, "max_line_length": 112, "alphanum_fraction": 0.7167006354, "num_tokens": 2885, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430353105599, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5729115519820223}}
{"text": "\\documentclass{beamer}\n\n\\usepackage{beamerthemevictor,comment,verbatim,graphicx,amssymb}\n\n\\input{tutmacs}\n\\input{slidemacs}\n\\input idxmacs\n\n\\begin{document}\n\n\\title{Dynamic Programming}\n\\author{Victor Eijkhout}\n\\date{Notes for CS 594 -- Fall 2004}\n\n\\frame{\\titlepage}\n\n\\section{Introduction}\n\n\\frame[containsverbatim]{\n  \\frametitle{What is dynamic programming?}\n\\begin{itemize}\n\\item Solution technique for minization problems\n\\item Often lower complexity than naive techniques\n\\item Sometimes equivalent to analytical techniques\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{What is it \\emph{not}?}\n\\begin{itemize}\n\\item Black box that will solve your problem\n\\item Way of finding lowest complexity\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{When dynamic programming?}\n\\begin{itemize}\n\\item Minimization problems\n\\item constraints; especially integer\n\\item sequence of decisions\n\\end{itemize}\n}\n\n\\section{Examples}\n\\subsectionframe{Decision timing}\n\n\\frame[containsverbatim]{\n  \\frametitle{Description}\n\\begin{itemize}\n\\item Occasions for deciding yes/no\n\\item items with attractiveness $\\in[0,1]$\n\\item Finite set\n\\item no reconsidering\n\\item Question: at any given step, do you choose or pass?\n\\item Objective: maximize expectation\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{crucial idea}\n\\begin{itemize}\n\\item start from the end:\n\\item step $N$: no choice left\n\\item expected yield: $.5$\n\\item<2-> in step $N-1$: pick if better than $.5$\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{yield from $N-1$}\n\\begin{itemize}\n\\item pick if $>.5$: done in $.5$ of the cases\n\\item<2-> expected yield $.75$\n\\item<3-> go on in $.5$ of the cases\n\\item<4-> expected yield then $.5$\n\\item<5-> total expected yield: $.5\\times.75+.5\\times.5=.625$\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{at $N-2$}\n\\begin{itemize}\n\\item pick if better than $.625$\n\\item<2-> happens in $.375$ of the cases,\n\\item<3-> yield in that case $1.625/2$\n\\item<4-> otherwise, $.625$ yield from later choice\n\\item<4-> et cetera\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Essential features}\n\\begin{itemize}\n\\item Stages: more or less independent decisions\n\\item Global minimization; solving by subproblems\n\\item Principle of optimality: sub part of total solution is optimal\n  solution of sub problem\n\\end{itemize}\n}\n\n\\subsectionframe{Manufacturing problem}\n\\frame{\n  \\frametitle{Statement}\n\\begin{itemize}\n\\item Total to be produced in given time, variable cost in each time\n  period\n\\item wanted: scheduling\n\\item\n\\[ \\min_{\\sum p_k=S}\\sum w_kp_k^2.\\]\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{define concepts}\n\\begin{itemize}\n\\item amount of work to produce~$s$ in $n$~steps:\n\\[ v(s|n)=\\min_{\\sum_{k>N-n} p_k=s}\\sum w_kp_k^2 \\]\n\\item optimal amount $p(s|n)$ at $n$~months from the end\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{principle of optimality}\n\\begin{eqnarray*}\n    v(s|n)&=&\\min_{p_n\\leq s}\\left\\{w_np_n^2\n        +\\sum_{{k>N-n+1\\atop\\sum p_k=s-p_n}}w_kp_k^2\\right\\} \\\\\n        &=&\\min_{p_n\\leq s}\\left\\{ w_np_n^2+v(s-p_n|n-1)\\right\\}\n\\end{eqnarray*}\n}\n\n\\frame{\n  \\frametitle{start from the end}\n\\begin{itemize}\n\\item In the last period: $p(s|1)=s$, and $v(s|1)=w_1s^2$\n\\item<2-> period before:\n\\[ 
v(s|2)=\\min_{p_2}\\{w_2p_2^2+v(s-p_2|1)\\}=\\min_{p_2}c(s,p_2) \\]\nwhere $c(s,p_2)=w_2p_2^2+w_1(s-p_2)^2$.\n\\item<3-> Minimize: $\\partial c(s,p_2)/\\partial p_2=0$,\n\\item<3-> then\n$p(s|2)=w_1s/(w_1+w_2)$ and $v(s|2)=w_1w_2s^2/(w_1+w_2)$.\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{general form}\n\\begin{itemize}\n\\item Inductively\n \\[ p(s|n)={1/w_n\\over \\sum_{i=1}^n 1/w_i}s,\\qquad\n    v(s|n)={s^2\\over \\sum_{i=1}^n 1/w_i}.\\]\n\\item<2-> Variational approach:\n\\[ \\sum_kw_kp_k^2+\\lambda(\\sum_kp_k-S) \\]\nConstrained minimization\n\\item<3-> Solve by setting derivatives with respect to $p_n$ and~$\\lambda$ to zero.\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{characteristics}\n\\begin{itemize}\n\\item Stages: time periods\n\\item State: amount of good left to be produced\n\\item Principle of optimality\n\\item<2-> Can be solved analytically\n\\item<3-> Analytical approach cannot deal with integer constraints\n\\end{itemize}\n}\n\n\\subsectionframe{Stagecoach problem}\n\n\\frame[containsverbatim]{\n  \\frametitle{Statement}\nSeveral routes from beginning to end\\\\\nCost of travel insurance:\n\n\\convertMPtoPDF{stages.1}{1}{1}\n\nObjective: minimize cost\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{data in python}\n\\begin{verbatim}\ntable = [ [0, 5, 4, 0, 0, 0, 0, 0, 0], # first stage: 0\n          [0, 0, 0, 1, 3, 4, 0, 0, 0], # second: 1 & #2\n          [0, 0, 0, 4, 2, 2, 0, 0, 0],\n          [0, 0, 0, 0, 0, 0, 5, 1, 0], # third: 3, #4, #5\n          [0, 0, 0, 0, 0, 0, 2, 4, 0],\n          [0, 0, 0, 0, 0, 0, 4, 3, 0],\n          [0, 0, 0, 0, 0, 0, 0, 0, 5], # fourth: 6 & #7\n          [0, 0, 0, 0, 0, 0, 0, 0, 2]\n          ]\nfinal = len(table); \n\\end{verbatim}\n}\n\n\\frame{\n  \\frametitle{recursive formulation}\n\\begin{itemize}\n\\item In the final city, the cost is zero;\n\\item Otherwise minimum over all cities reachable\\\\\n  of the cost of the next leg plus the minimum cost from that city.\\\\\n(principle of optimality)\n\\item<2-> wrong to code it recursively\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{recursive solution in python}\n\\footnotesize\n\\hypertarget{rec-coach}{}\n\\begin{verbatim}\ndef cost_from(n):\n    # if you're at the end, it's free\n    if n==final: return 0\n    # otherwise range over cities you can reach\n    # and keep minimum value\n    val = 0\n    for m in range(n+1,final+1):\n        local_cost = table[n][m]\n        if local_cost>0:\n            # if there is a connection from here,\n            # compute the minimum cost\n            local_cost += cost_from(m)\n            if val==0 or local_cost<val:\n                val = local_cost\n    return val\nprint \"recursive minimum cost is\",cost_from(0)\n\\end{verbatim}\n\\hyperlink{back-coach}{\\beamerbutton{backward}}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{characteristic}\n\\begin{itemize}\n\\item Overlapping subproblems\\\\\n\\convertMPtoPDF{123.1}{1}{1}\n\\item \\n{cost_from(1)} computed twice\n\\item Cost: $N$~cities, $S$~stages of $L$~each: $O(L^S)$\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{dynamic programming}\n\\begin{itemize}\n\\item Compute minimum cost $f_n(x_n)$\\\\\nof traveling from step~$n$, starting in $x_n$ (state variable)\n\\item \nFormally, $f_k(s)$ is the minimum cost for traveling from stage~$k$\\\\\nstarting in city~$s$. 
Then\n\\[ f_{k-1}(s)=\\min_t\\{ c_{st}+f_k(t) \\} \\]\nwhere $c_{st}$ is the cost of traveling from city~$s$ to~$t$.\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{backward dynamic solution in python}\n\\footnotesize\n\\hypertarget{back-coach}{}\n\\begin{verbatim}\ncost = (final+1)*[0] # initialization\n# compute cost backwards\nfor t in range(final-1,-1,-1):\n    # computing cost from t\n    for i in range(final+1):\n        local_cost = table[t][i]\n        if local_cost==0: continue\n        local_cost += cost[i]\n        if cost[t]==0 or local_cost<cost[t]:\n            cost[t] = local_cost\nprint \"minimum cost:\",cost[0]\n\\end{verbatim}\n\\hyperlink{rec-coach}{\\beamerbutton{recursive}}\n\\hyperlink{forw-coach}{\\beamerbutton{forward}}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{analysis}\n\\begin{itemize}\n\\item Running time $O(N\\cdot L)$ or~$O(L^2S)$\n\\item compare $L^S$ for recursive\n\\end{itemize}\n}\n\n\\frame{\n  \\frametitle{Forward solution}\n\\begin{itemize}\n\\item Backward: $f_n(x)$ cost from $x$ in $n$ steps to the end\n\\item Forward: $f_n(x)$ cost in $n$ steps to $x$\n\\[ f_n(t) =\\min_{s<t}\\{c_{st}+f_{n-1}(s)\\} \\]\n\\item sometimes more appropriate\n\\item same complexity\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{forward dynamic solution in python}\n\\footnotesize\n\\hypertarget{forw-coach}{}\n\\begin{verbatim}\ncost = (final+1)*[0]\nfor t in range(final):\n    for i in range(final+1):\n        local_cost = table[t][i]\n        if local_cost == 0: continue\n        cost_to_here = cost[t]\n        newcost = cost_to_here+local_cost\n        if cost[i]==0 or newcost<cost[i]:\n            cost[i] = newcost\nprint \"cost\",cost[final]\n\\end{verbatim}\n\\hyperlink{back-coach}{\\beamerbutton{backward}}\n}\n\n\\subsectionframe{Traveling salesman}\n\n\\frame{\n  \\frametitle{Problem statement}\n\\begin{itemize}\n\\item Cities as stages?\n\\item<2-> No ordering\n\\item<3-> Stage $n$ : having $n$ cities left\n\\item<4-> State: combination of cities to visit plus current city\n\\item<4-> Cost formula to~0 (both start/end)\n\\begin{eqnarray*}\nC(\\{\\,\\},f)&=&a_{f0}\\quad\\hbox{for $f=1,2,3,\\ldots$}\\\\\nC(S,f)&=&\\min_{m\\in S}\\left[a_{fm}+C(S-m,m)\\right]\n\\end{eqnarray*}\n\\end{itemize}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{backward implementation}\n\\footnotesize\n\\begin{verbatim}\ndef shortest_path(start,through,lev):\n    if len(through)==0:\n        return table[start][0]\n    l = 0\n    for dest in through:\n        left = through[:]; left.remove(dest)\n        ll = table[start][dest]+shortest_path(dest,left,lev+1)\n        if l==0 or ll<l:\n            l = ll\n    return l\nto_visit = range(1,ntowns);\ns = shortest_path(0,to_visit,0)\n\\end{verbatim}\n(recursive; needs to be improved)\n}\n\n\\sectionframe{Discussion}\n\n\\frame[containsverbatim]{\n  \\frametitle{Characteristics}\n\\begin{description}\n\\item[Stages] sequence of choices\n\\item[Stepwise solution] solution by successive subproblems\n\\item[State] cost function has a state parameter,\\\\\ndescription of work left~\\&c\n\\item[Overlapping subproblems]\n\\item[Principle of optimality] This is the property that the\n  restriction of a global solution to a subset of the stages is also\n  an optimal solution for that subproblem.\n\\end{description}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{Principle of optimality}\n\\begin{quote}\nAn optimal policy has the property that whatever the initial state and\ninitial decision are, the remaining decisions 
must be an optimal\npolicy with regard to the state resulting from the first decision.\n\\end{quote}\n}\n\n\\frame[containsverbatim]{\n  \\frametitle{derivation}\nMaximize~$\\sum^N_ig_i(x_i)$ under~$\\sum_ix_i=\\nobreak X$, $x_i\\geq0$\\\\\nCall this~$f_N(X)$, then\n\\begin{eqnarray*}\nf_N(X)&=&\\max_{\\sum_i^Nx_i=X}\\sum_i^Ng_i(x_i)\\\\\n&=&\\max_{x_N<X}\\left\\{g_N(x_N)+\\max_{\\sum_i^{N-1}x_i=X-x_N}\\sum_i^{N-1}g_i(x_i)\\right\\}\\\\\n&=&\\max_{x_N<X}\\left\\{g_N(x_N)+f_{N-1}(X-x_N)\\right\\}\n\\end{eqnarray*}\n}\n\n\\end{document}\n", "meta": {"hexsha": "fb8c1569b2fd1929ce1d6b0cdf8815c465f92542", "size": 10192, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/dynamic.tex", "max_stars_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_stars_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-07T08:21:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-07T08:21:41.000Z", "max_issues_repo_path": "slides/dynamic.tex", "max_issues_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_issues_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/dynamic.tex", "max_forks_repo_name": "wvqusrai/the-science-of-tex-and-latex", "max_forks_repo_head_hexsha": "a96fd5cd0f7a6b9208675ba38ddcaec0264a9e31", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.2005141388, "max_line_length": 89, "alphanum_fraction": 0.6839678179, "num_tokens": 3474, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.8006919925839875, "lm_q1q2_score": 0.5728342744427117}}
{"text": "\n\\subsection{Sample mode}\n\nThe is the most common value in the sample.\n\n", "meta": {"hexsha": "72a690507546e69ad1c4b77a0c1fb9d71e3a65cf", "size": 72, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/summary/01-03-mode.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/summary/01-03-mode.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/summary/01-03-mode.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 12.0, "max_line_length": 43, "alphanum_fraction": 0.75, "num_tokens": 17, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8006919925839875, "lm_q2_score": 0.7154239836484144, "lm_q1q2_score": 0.572834255009823}}
{"text": "\\chapter{Power series and Taylor series}\nPolynomials are very well-behaved functions,\nand are studied extensively for that reason.\nFrom an analytic perspective, for example, they are smooth,\nand their derivatives are easy to compute.\n\nIn this chapter we will study \\emph{power series},\nwhich are literally ``infinite polynomials'' $\\sum_n a_n x^n$.\nArmed with our understanding of series and differentiation,\nwe will see three great things:\n\\begin{itemize}\n\t\\ii Many of the functions we see in nature\n\tactually \\emph{are} given by power series.\n\tAmong them are $e^x$, $\\log x$, $\\sin x$.\n\n\t\\ii Their convergence properties are actually quite well behaved:\n\tfrom the string of coefficients,\n\twe can figure out which $x$ they converge for.\n\n\t\\ii The derivative of $\\sum_n a_n x^n$\n\tis actually just $\\sum_n n a_n x^{n-1}$.\n\\end{itemize}\n\n\\section{Motivation}\nTo get the ball rolling, let's start with\none infinite polynomial you'll recognize:\nfor any fixed number $-1 < x < 1$ we have the series convergence\n\\[ \\frac{1}{1-x} = 1 + x + x^2 + \\dots \\]\nby the geometric series formula.\n\nLet's pretend we didn't see this already in\n\\Cref{prob:geometric}.\nSo, we instead have a smooth function $f \\colon (-1,1) \\to \\RR$ by\n\\[ f(x) = \\frac{1}{1-x}. \\]\nSuppose we wanted to pretend that it was equal to\nan ``infinite polynomial'' near the origin, that is\n\\[ (1-x)\\inv = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \\dots. \\]\nHow could we find that polynomial,\nif we didn't already know?\n\nWell, for starters we can first note that by plugging in $x = 0$\nwe obviously want $a_0 =1$.\n\nWe have derivatives, so actually,\nwe can then differentiate both sides to obtain that\n\\[ (1-x)^{-2} = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3. 
\\]\nIf we now set $x = 0$, we get $a_1 = 1$.\nIn fact, let's keep taking derivatives and see what we get.\n\\begin{alignat*}{8}\n\t(1-x)^{-1} &{}={}& a_0 &{}+{}& a_1x &{}+{}& a_2x^2 &{}+{}& a_3x^3 &{}+{}& a_4x^4  &{}+{}& a_5x^5 &{}+{}& \\dots \\\\\n\t(1-x)^{-2} &{}={}& && a_1 &{}+{}& 2a_2 x &{}+{}& 3a_3 x^2 &{}+{}& 4a_4 x^3 &{}+{}& 5a_5 x^4 &{}+{}&\\dots \\\\\n\t2(1-x)^{-3} &{}={}& && && 2a_2 &{}+{}& 6a_3 x &{}+{}& 12 a_4 x^2 &{}+{}& 20 a_5 x^3 &{}+{}&  \\dots \\\\\n\t6(1-x)^{-4} &{}={}& &&  &&&& 6a_3 &{}+{}& 24 a_4 x &{}+{}& 60 a_5 x^2 &{}+{}& \\dots \\\\\n\t24(1-x)^{-5} &{}={}& && && && && 24 a_4 &{}+{}& 120 a_5 x &{}+{}& \\dots \\\\\n\t&{}\\vdotswithin={}.\n\\end{alignat*}\nIf we set $x=0$ we find $1 = a_0 = a_1 = a_2 = \\dots$\nwhich is what we expect;\nthe geometric series $\\frac{1}{1-x} = 1 + x + x^2 + \\dots$.\nAnd so actually taking derivatives was enough to get the right claim!\n\n\\section{Power series}\n\\prototype{$\\frac{1}{1-z} = 1 + z + z^2 + \\dots$, which converges on $(-1,1)$.}\nOf course this is not rigorous,\nsince we haven't described what the right-hand side is,\nmuch less shown that it can be differentiated term by term.\nSo we define the main character now.\n\n\\begin{definition}\n\tA \\vocab{power series} is a sum of the form\n\t\\[ \\sum_{n = 0}^\\infty a_n z^n\n\t\t= a_0 + a_1 z + a_2 z^2 + \\dots \\]\n\twhere $a_0$, $a_1$, \\dots\\ are real numbers,\n\tand $z$ is a variable.\n\\end{definition}\n\\begin{abuse}\n\t[$0^0=1$]\n\tIf you are very careful, you might notice\n\tthat when $z=0$ and $n=0$ we find $0^0$ terms appearing.\n\tFor this chapter the convention is that\n\tthey are all equal to one.\n\\end{abuse}\n\nNow, if I plug in a \\emph{particular} real number $h$,\nthen I get a series of real numbers $\\sum_{n = 0}^{\\infty} a_n h^n$.\nSo I can ask, when does this series converge?\nIt turns out there is a precise answer for this.\n\n\\begin{definition}\n\tGiven a power series $\\sum_{n=0}^{\\infty} a_n z^n$,\n\tthe \\vocab{radius of convergence} $R$ is defined\n\tby the formula\n\t\\[ \\frac 1R = \\limsup_{n \\to \\infty} \\left\\lvert a_n \\right\\rvert^{1/n}, \\]\n\twith the convention that $R = 0$ if the right-hand side is $\\infty$,\n\tand $R = \\infty$ if the right-hand side is zero.\n\\end{definition}\n\\begin{theorem}\n\t[Cauchy-Hadamard theorem]\n\tLet $\\sum_{n=0}^{\\infty} a_n z^n$\n\tbe a power series with radius of convergence $R$.\n\tLet $h$ be a real number, and consider the infinite series\n\t\\[ \\sum_{n=0}^\\infty a_n h^n \\]\n\tof real numbers.\n\tThen:\n\t\\begin{itemize}\n\t\t\\ii The series converges absolutely if $|h| < R$.\n\t\t\\ii The series diverges if $|h| > R$.\n\t\\end{itemize}\n\\end{theorem}\n\\begin{proof}\n\tThis is not actually hard,\n\tbut it won't be essential, so not included.\n\\end{proof}\n\\begin{remark}\n\tIn the case $|h| = R$, it could go either way.\n\\end{remark}\n\\begin{example}\n\t[$\\sum z^n$ has radius $1$]\n\tConsider the geometric series $\\sum_{n} z^n = 1 + z + z^2 + \\dots$.\n\tSince $a_n = 1$ for every $n$, we get $R = 1$,\n\twhich is what we expected.\n\\end{example}\n\nTherefore, if $\\sum_n a_n z^n$ is a power\nseries with a nonzero radius $R > 0$ of convergence,\nthen it can \\emph{also} be thought of as a function\n\\[ (-R, R) \\to \\RR\n\t\\quad\\text{ by }\\quad\n\th \\mapsto \\sum_{n \\ge 0} a_n h^n. 
\\]\nThis is great.\nNote also that if $R = \\infty$,\nthis means we get a function $\\RR \\to \\RR$.\n\n\\begin{abuse}\n\t[Power series vs.\\ functions]\n\tThere is some subtlety going on with ``types'' of objects again.\n\tAnalogies with polynomials can help.\n\n\tConsider $P(x) = x^3 + 7x + 9$, a polynomial.\n\tYou \\emph{can}, for any real number $h$,\n\tplug in $P(h)$ to get a real number.\n\tHowever, in the polynomial \\emph{itself},\n\tthe symbol $x$ is supposed to be a \\emph{variable} ---\n\twhich sometimes we will plug in a real number for,\n\tbut that happens only after the polynomial is defined.\n\n\tDespite this, ``the polynomial $p(x) = x^3+7x+9$''\n\t(which can be thought of as the coefficients)\n\tand ``the real-valued function $x \\mapsto x^3+7x+9$''\n\tare often used interchangeably.\n\tThe same is about to happen with power series:\n\twhile they were initially thought of as a sequence of\n\tcoefficients, the Cauchy-Hadamard theorem\n\tlets us think of them as functions too,\n\tand thus we blur the distinction between them.\n%\tPedants will sometimes \\emph{define} a polynomial\n%\tto be the sequence of coefficients, say $(9,7,0,1)$,\n%\twith $x^3+7x+9$ being ``intuition only''.\n%\tSimilarly they would define a power series\n%\tto be the sequence $(a_0, a_1, \\dots)$.\n%\tWe will not go quite this far, but they have a point.\n%\n%\tI will be careful to use ``power series''\n%\tfor the one with a variable,\n%\tand ``infinite series'' for the sums of real numbers from before.\n\\end{abuse}\n\n\\section{Differentiating them}\n\\prototype{We saw earlier $1+x+x^2+x^3+\\dots$ has derivative $1+2x+3x^2+\\dots$.}\nAs promised, differentiation works exactly as you want.\n\n\\begin{theorem}\n\t[Differentiation works term by term]\n\tLet $\\sum_{n \\ge 0} a_n z^n$ be a power series\n\twith radius of convergence $R > 0$,\n\tand consider the corresponding function\n\t\\[ f \\colon (-R,R) \\to \\RR \\quad\\text{by}\\quad\n\t\tf(x) = \\sum_{n \\ge 0} a_n x^n. \\]\n\tThen all the derivatives of $f$ exist and\n\tare given by power series\n\t\\begin{align*}\n\t\tf'(x) &= \\sum_{n \\ge 1} n a_n x^{n-1} \\\\\n\t\tf''(x) &= \\sum_{n \\ge 2} n(n-1) a_n x^{n-2} \\\\\n\t\t&\\vdotswithin=\n\t\\end{align*}\n\twhich also converge for any $x \\in (-R, R)$.\n\tIn particular, $f$ is smooth.\n\\end{theorem}\n\\begin{proof}\n\tAlso omitted.\n\tThe right way to prove it is to define the notion\n\t``converges uniformly'',\n\tand strengthen Cauchy-Hadamard to have\n\tthis as a conclusion as well.\n\tHowever, we won't use this later.\n\\end{proof}\n\n\\begin{corollary}\n\t[A description of power series coefficients]\n\tLet $\\sum_{n \\ge 0} a_n z^n$ be a power series\n\twith radius of convergence $R > 0$,\n\tand consider the corresponding function $f(x)$ as above.\n\tThen\n\t\\[ a_n = \\frac{f^{(n)}(0)}{n!}. 
\\]\n\\end{corollary}\n\\begin{proof}\n\tTake the $n$th derivative and plug in $x=0$.\n\\end{proof}\n\n\\section{Analytic functions}\n\\prototype{The piecewise $e^{-1/x}$ or $0$ function is not analytic,\n\tbut is smooth.}\nWith all these nice results about power series,\nwe now have a way to do this process the other way:\nsuppose that $f \\colon U \\to \\RR$ is a function.\nCan we express it as a power series?\n\nFunctions for which this \\emph{is} true\nare called analytic.\n\\begin{definition}\n\tA function $f \\colon U \\to \\RR$ is \\vocab{analytic} at\n\tthe point $p \\in U$\n\tif there exists an open neighborhood $V$ of $p$ (inside $U$)\n\tand a power series $\\sum_n a_n z^n$ such that\n\t\\[ f(x) = \\sum_{n \\ge 0} a_n (x-p)^n \\]\n\tfor any $x \\in V$.\n\tAs usual, the whole function is analytic\n\tif it is analytic at each point.\n\\end{definition}\n\\begin{ques}\n\tShow that if $f$ is analytic, then it's smooth.\n\\end{ques}\nMoreover, if $f$ is analytic,\nthen by the corollary above its coefficients are\nactually described exactly by\n\\[ f(x) = \\sum_{n \\ge 0} \\frac{f^{(n)}(p)}{n!} (x-p)^n. \\]\nEven if $f$ is smooth but not analytic,\nwe can at least write down the power series;\nwe give this a name.\n\\begin{definition}\n\tFor smooth $f$,\n\tthe power series $\\sum_{n \\ge 0} \\frac{f^{(n)}(p)}{n!} z^n$\n\tis called the \\vocab{Taylor series} of $f$ at $p$.\n\\end{definition}\n\n\\begin{example}\n\t[Examples of analytic functions]\n\t\\listhack\n\t\\label{ex:nonanalytic}\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Polynomials, $\\sin$, $\\cos$, $e^x$, $\\log$\n\t\tall turn out to be analytic.\n\n\t\t\\ii The smooth function from before defined by\n\t\t\\[ f(x) = \\begin{cases}\n\t\t\t\t\\exp(-1/x) & x > 0 \\\\\n\t\t\t\t0 & x \\le 0 \\\\\n\t\t\t\\end{cases}\n\t\t\\]\n\t\tis \\emph{not} analytic.\n\t\tIndeed, suppose for contradiction it was.\n\t\tAs all the derivatives are zero,\n\t\tits Taylor series would be $0 + 0x + 0x^2 + \\dots$.\n\t\tThis Taylor series does \\emph{converge}, but not to the right value ---\n\t\tas $f(\\eps) > 0$ for any $\\eps > 0$, contradiction.\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{theorem}\n\t[Analytic iff Taylor series has positive radius]\n\tLet $f \\colon U \\to \\RR$ be a smooth function.\n\tThen $f$ is analytic if and only if for any point $p \\in U$,\n\tits Taylor series at $p$ has positive radius of convergence\n\tand converges to $f$ in some neighborhood of $p$.\n\\end{theorem}\n\n\\begin{example}\n\tIt now follows that $f(x) = \\sin(x)$ is analytic.\n\tTo see that, we can compute\n\t\\begin{align*}\n\t\tf(0) &= \\sin 0 = 0 \\\\\n\t\tf'(0) &= \\cos 0 = 1 \\\\\n\t\tf''(0) &= -\\sin 0 = 0 \\\\\n\t\tf^{(3)}(0) &= -\\cos 0 = -1 \\\\\n\t\tf^{(4)}(0) &= \\sin 0 = 0 \\\\\n\t\tf^{(5)}(0) &= \\cos 0 = 1 \\\\\n\t\tf^{(6)}(0) &= -\\sin 0 = 0 \\\\\n\t\t&\\vdotswithin=\n\t\\end{align*}\n\tand so by continuing the pattern\n\t(which repeats every four) we find the Taylor series is\n\t\\[ z - \\frac{z^3}{3!} + \\frac{z^5}{5!} - \\frac{z^7}{7!} + \\dots \\]\n\twhich is seen to have radius of convergence $R = \\infty$.\n\\end{example}\n\nLike with differentiable functions:\n\\begin{proposition}\n\t[All your usual closure properties for analytic functions]\n\tThe sums, products, compositions, nonzero quotients\n\tof analytic functions are analytic.\n\\end{proposition}\nThe upshot of this is that most of your usual\nfunctions that occur in nature,\nor even artificial ones like $f(x) = e^x + x \\sin(x^2)$,\nwill be analytic, hence describable locally by Taylor series.\n\n\\section{A definition of Euler's 
constant and exponentiation}\nWe can actually give a definition of $e^x$ using the tools we have now.\n\n\\begin{definition}\n\tWe define the map $\\exp \\colon \\RR \\to \\RR$ by\n\tusing the following power series,\n\twhich has infinite radius of convergence:\n\t\\[ \\exp(x) = \\sum_{n \\ge 0} \\frac{x^n}{n!}. \\]\n\tWe then define Euler's constant as $e = \\exp(1)$.\n\\end{definition}\n\\begin{ques}\n\tShow that under this definition, $\\exp' = \\exp$.\n\\end{ques}\n\nWe are then settled with:\n\\begin{proposition}\n\t[$\\exp$ is multiplicative]\n\tUnder this definition,\n\t\\[ \\exp(x+y) = \\exp(x) \\exp(y). \\]\n\\end{proposition}\n\\begin{proof}\n\t[Idea of proof.]\n\tThere is some subtlety here with switching\n\tthe order of summation that we won't address.\n\tModulo that:\n\t\\begin{align*}\n\t\t\\exp(x) \\exp(y)\n\t\t&= \\sum_{n \\ge 0} \\frac{x^n}{n!}\n\t\t\t\\sum_{m \\ge 0} \\frac{y^m}{m!}\n\t\t= \\sum_{n \\ge 0} \\sum_{m \\ge 0}\n\t\t\t\\frac{x^n}{n!} \\frac{y^m}{m!} \\\\\n\t\t&= \\sum_{k \\ge 0} \\sum_{\\substack{m+n = k \\\\ m,n \\ge 0}}\n\t\t\t\\frac{x^n y^m}{n! m!}\n\t\t= \\sum_{k \\ge 0} \\sum_{\\substack{m+n = k \\\\ m,n \\ge 0}}\n\t\t\t\\binom{k}{n} \\frac{x^n y^m}{k!} \\\\\n\t\t&= \\sum_{k \\ge 0} \\frac{(x+y)^k}{k!} = \\exp(x+y). \\qedhere\n\t\\end{align*}\n\\end{proof}\n\n\\begin{corollary}\n\t[$\\exp$ is positive]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii We have $\\exp(x) > 0$ for any real number $x$.\n\t\t\\ii The function $\\exp$ is strictly increasing.\n\t\\end{enumerate}\n\\end{corollary}\n\\begin{proof}\n\tFirst \\[ \\exp(x) = \\exp(x/2)^2 \\ge 0 \\]\n\twhich shows $\\exp$ is nonnegative.\n\tAlso, $1 = \\exp(0) = \\exp(x) \\exp(-x)$ implies $\\exp(x) \\ne 0$\n\tfor any $x$, proving (a).\n\n\t(b) is just since $\\exp'$ is strictly positive (racetrack principle).\n\\end{proof}\n\nThe log function then comes after.\n\\begin{definition}\n\tWe may define $\\log \\colon \\RR_{>0} \\to \\RR$\n\tto be the inverse function of $\\exp$.\n\\end{definition}\nSince its derivative is $1/x$ it is smooth;\nand then one may compute its coefficients to show it is analytic.\n\t\nNote that this actually gives us a rigorous way to define\n$a^r$ for any $a > 0$ and $r > 0$, namely\n\\[ a^r \\defeq \\exp\\left( r \\log a \\right). \\]\n\n\\section{This all works over complex numbers as well,\nexcept also complex analysis is heaven}\nWe now mention that every theorem we referred to above\nholds equally well if we work over $\\CC$,\nwith essentially no modifications.\n\\begin{itemize}\n\t\\ii Power series are defined by $\\sum_n a_n z^n$ with $a_n \\in \\CC$,\n\trather than $a_n \\in \\RR$.\n\t\\ii The definition of radius of convergence $R$ is unchanged!\n\tThe series will converge if $|z| < R$.\n\t\\ii Differentiation still works great.\n\t(The definition of the derivative is unchanged.)\n\t\\ii Analytic still works great for functions\n\t$f \\colon U \\to \\CC$, with $U \\subseteq \\CC$ open.\n\\end{itemize}\nIn particular, we can now even define complex exponentials,\ngiving us a function \\[ \\exp \\colon \\CC \\to \\CC \\]\nsince the power series still has $R = \\infty$.\nMore generally if $a > 0$ and $z \\in \\CC$\nwe may still define \\[ a^z \\defeq \\exp(z \\log a). 
\\]\n(We still require the base $a$ to be a positive real\nso that $\\log a$ is defined, though.\nSo this $i^i$ issue is still there.)\n\nHowever, if one tries to study calculus for complex functions\nas we did for the real case,\nin addition to most results carrying over,\nwe run into a huge surprise:\n\\begin{quote}\n\t\\itshape\n\tIf $f \\colon \\CC \\to \\CC$ is differentiable,\n\tit is analytic.\n\\end{quote}\nAnd this is just the beginning of the nearly unbelievable\nresults that hold for complex analytic functions.\nBut this is the part on real analysis,\nso you will have to read about this later!\n\n\\section{\\problemhead}\n\n\\begin{problem}\n\tFind the Taylor series of $\\log(1-x)$.\n\\end{problem}\n\n\\begin{dproblem}\n\t[Euler formula]\n\tShow that \\[ \\exp(i \\theta) = \\cos \\theta + i \\sin \\theta \\]\n\tfor any real number $\\theta$.\n\t\\begin{hint}\n\t\tBecause you know all derivatives of $\\sin$ and $\\cos$,\n\t\tyou can compute their Taylor series,\n\t\twhich converge everywhere on $\\RR$.\n\t\tAt the same time, $\\exp$ was defined as a Taylor series,\n\t\tso you can also compute it.\n\t\tWrite them all out and compare.\n\t\\end{hint}\n\\end{dproblem}\n\n\\begin{dproblem}\n\t[Taylor's theorem, Lagrange form]\n\tLet $f \\colon [a,b] \\to \\RR$ be continuous\n\tand $n+1$ times differentiable on $(a,b)$.\n\tDefine\n\t\\[ P_n = \\sum_{k=0}^n \\frac{f^{(k)}(a)}{k!} \\cdot (b-a)^k. \\]\n\tProve that there exists $\\xi \\in (a,b)$ such that\n\t\\[ f^{(n+1)}(\\xi) = (n+1)! \\cdot \\frac{f(b) - P_n}{(b-a)^{n+1}}. \\]\n\tThis generalizes the mean value theorem\n\t(which is the special case $n = 0$, where $P_0 = f(a)$).\n\t\\begin{hint}\n\t\tUse repeated Rolle's theorem.\n\t\tYou don't need any of the theory in this chapter to solve this,\n\t\tso it could have been stated much earlier;\n\t\tbut then it would be quite unmotivated.\n\t\\end{hint}\n\\end{dproblem}\n\n\\begin{problem}\n\t[Putnam 2018 A5]\n\t\\yod\n\tLet $f \\colon \\RR \\to \\RR$ be smooth,\n\tand assume that $f(0) = 0$, $f(1) = 1$, and $f(x) \\ge 0$\n\tfor every real number $x$.\n\tProve that $f^{(n)}(x) < 0$ for some positive integer $n$\n\tand real number $x$.\n\t\\begin{hint}\n\t\tUse Taylor's theorem.\n\t\\end{hint}\n\\end{problem}\n", "meta": {"hexsha": "3b4aca2a881db28108c3e563e9999af2dcc530a4", "size": 15795, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/calculus/taylor.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/calculus/taylor.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/calculus/taylor.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.1883116883, "max_line_length": 114, "alphanum_fraction": 0.6628679962, "num_tokens": 5400, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316860482762, "lm_q2_score": 0.8519528038477825, "lm_q1q2_score": 0.5727948650445359}}
{"text": "\\hypertarget{functional-programming-1}{%\n\\section{Functional Programming 1}\\label{functional-programming-1}}\n\n\\hypertarget{correctness}{%\n\\subsection{Correctness}\\label{correctness}}\n\nA program should be correct with respect to its specification. e.g.~a\nprogram that computes the sine perfectly well but should compute the\nroot is clearly not correct.\n\nOne can know wether the program is correct\n\n\\begin{itemize}\n\\tightlist\n\\item\n  by testing\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    choose particular input\u2022determine correct result for that input\n    using test oracle\n  \\item\n    run program under test on the chosen input\n  \\item\n    compare obtained and correct result\n  \\end{itemize}\n\\item\n  by proving\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    no particular input\n  \\item\n    no execution of the program\n  \\item\n    instead apply mathematical rules to program and specification\n  \\item\n    with a finite number of steps prove that something works for a\n    infinite number of values\n  \\end{itemize}\n\\end{itemize}\n\n\\hypertarget{referential-transparency}{%\n\\subsection{Referential Transparency}\\label{referential-transparency}}\n\n\\hypertarget{formal-proof}{%\n\\subsubsection{Formal proof}\\label{formal-proof}}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.25\\textwidth]{figures/formalproof1.png}\n\\caption{Formal proof 1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.25\\textwidth]{figures/formalproof2.png}\n\\caption{Formal proof 2}\n\\end{figure}\n\n\\hypertarget{equality}{%\n\\subsubsection{Equality}\\label{equality}}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/equality.png}\n\\caption{Equality}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{figures/leibniz.png}\n\\caption{Example from Leibniz}\n\\end{figure}\n\n\\hypertarget{functional-program}{%\n\\subsubsection{Functional Program}\\label{functional-program}}\n\n\\begin{itemize}\n\\item\n  a functional program consists of\n\n  \\begin{enumerate}\n  \\def\\labelenumi{\\arabic{enumi}.}\n  \\tightlist\n  \\item\n    a set of value and function declarations\n  \\item\n    a single expression\n  \\end{enumerate}\n\\item\n  functional programming is referentially transparent\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    values and functions are declared via equality\n  \\item\n    equality then means mathematical equality (if using eager evaluation\n    modulo termination)\n  \\end{itemize}\n\\item\n  referential transparency employed for\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    program development, transformation, and proof\n  \\item\n    evaluation\n  \\end{itemize}\n\\end{itemize}\n\n\\clearpage\n\\hypertarget{imperative-programming}{%\n\\subsection{Imperative Programming}\\label{imperative-programming}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  syntax: expressions + commands\n\\item\n  semantics: values + environment + state\n\\item\n  expressions are evaluated in the environment and current state,\n  yielding a value\n\\item\n  commands are executed in the environment and current state, yielding a\n  new state\n\\item\n  Example:\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    assignment command with variable v and expression E -\\textgreater{}\n    v := E\n  \\item\n    E is evaluated in the environment and current state, yielding value\n    t; then t is assigned to the storage cell denoted by v in the\n  
  environment, thus yielding a new state\n  \\end{itemize}\n\\item\n  proofs of imperative programs are quite possible too, but by far\n  more complicated\n\\item\n  possible using HOARE logic\n\\item\n  HOARE triple, with P, Q predicates and C command:\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    \\{P\\} C \\{Q\\}\n  \\item\n    means: if execution of C starts in a state satisfying P, and\n    execution terminates, then the resulting state satisfies Q\n  \\end{itemize}\n\\item\n  Example:\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    proof rule for assignment command v := E\n  \\item\n    \\{Q{[}v \\textless{}- E{]}\\} v := E \\{Q\\}\n  \\end{itemize}\n\\end{itemize}\n\n\\hypertarget{conclusion-on-imperative-vs.functional}{%\n\\subsection{Conclusion on Imperative\nvs.~Functional}\\label{conclusion-on-imperative-vs.functional}}\n\n\\textbf{imperative paradigm}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  syntax: expressions + commands\n\\item\n  semantics: values + environment + state\n\\item\n  expressions are evaluated in the environment and current state,\n  yielding a value\n\\item\n  commands are executed in the environment and current state, yielding a\n  new state\n\\end{itemize}\n\n\\textbf{functional paradigm}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  syntax: expressions\n\\item\n  semantics: values + environment\n\\item\n  expressions are evaluated in the environment, yielding a value\n\\end{itemize}\n\n\\hypertarget{misuse-of-the-symbol-for-equality}{%\n\\subsubsection{Misuse of the Symbol for Equality\n=}\\label{misuse-of-the-symbol-for-equality}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  assignment like x := x + 1 has not the slightest similarity to\n  equality\n\\item\n  it is pronounced ``x becomes (gets, receives) x + 1'' \\ldots{}\n\\item\n  \\ldots{} but never ever ``x equals (is, is equal to) x + 1''\n\\item\n  a different symbol like := or \u2190 should be used instead\n\\item\n  using the symbol for equality = to denote assignment is a horrendous\n  design error of too many programming languages.\n\\end{itemize}\n\n\\hypertarget{evaluation}{%\n\\subsection{Evaluation}\\label{evaluation}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  strategies\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    innermost (call-by-value)\n  \\item\n    outermost (call-by-name)\n  \\item\n    \\textbf{lazy (outermost + sharing)}\n  \\end{itemize}\n\\item\n  reducible expression, or redex: an\n  application of a function to its argument expressions\n\\item\n  Example: mult(x,y) = x * y\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    mult(1+2, 2+3) has three redexes\n\n    \\begin{itemize}\n    \\tightlist\n    \\item\n      1+2, yielding mult(3,2+3)\n    \\item\n      2+3, yielding mult(1+2,5)\n    \\item\n      mult(1+2,2+3) yielding (1+2)*(2+3)\n    \\end{itemize}\n  \\end{itemize}\n\\end{itemize}\n\n\\hypertarget{innermost-evaluation}{%\n\\subsubsection{Innermost Evaluation}\\label{innermost-evaluation}}\n\ninnermost redex first; if several, choose leftmost one first\n\nmult (1+2,2+3)\\\\\n= mult (3,2+3)\\\\\n= mult (3,5)\\\\\n= 3*5\\\\\n= 15\n\n\\clearpage\n\\hypertarget{outermost-evaluation}{%\n\\subsubsection{Outermost Evaluation}\\label{outermost-evaluation}}\n\noutermost redex first; if several, choose leftmost one first\n\nmult (1+2,2+3)\\\\\n= (1+2) * (2+3)\\\\\n= 3 * (2+3)\\\\\n= 3 * 5\\\\\n= 15\n\n\\hypertarget{lazy-evaluation}{%\n\\subsubsection{Lazy Evaluation}\\label{lazy-evaluation}}\n\nArgument expressions might be evaluated more than once 
if the\ncorresponding formal parameters occur several times in the body of the\nfunction. Solution to this problem via sharing:\n\n\\begin{itemize}\n\\tightlist\n\\item\n  keep only a single copy of the argument expression, and maintain a\n  pointer to it for each corresponding formal parameter\n\\item\n  evaluate the expression once, and replace it by its value\n\\item\n  access this value through the pointers\n\\end{itemize}\n\n\\hypertarget{summary-of-evaluation}{%\n\\subsubsection{Summary of evaluation}\\label{summary-of-evaluation}}\n\n\\begin{itemize}\n\\tightlist\n\\item\n  an argument is evaluated\n\n  \\begin{itemize}\n  \\tightlist\n  \\item\n    innermost: exactly once\n  \\item\n    outermost: zero or more times\n  \\item\n    lazy: at most once\n  \\end{itemize}\n\\item\n  whenever there exists an order of evaluation that terminates,\n  outermost (and thus lazy) evaluation will find it\n\\end{itemize}\n\n\\begin{tcolorbox}[colback=red!5!white,colframe=red!75!black]\nThe evaluation strategy in which function arguments are evaluated before the function call is made is known as an \"eager\" evaluation.\n\nLet us postpone evaluating function arguments until after the function call, and then evaluate the argument only if it actually is needed (e.g. if it appears in an expression in the called function). Such a strategy is \"lazy.\"\n\\end{tcolorbox}\n\n\\clearpage", "meta": {"hexsha": "e060fc3e22d3104af95d1c4dd964de5c4c36ada7", "size": 7915, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TSM_AdvPrPa/Summary/01_FunctionalProgramming.tex", "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": ["Beerware"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TSM_AdvPrPa/Summary/01_FunctionalProgramming.tex", "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_licenses": ["Beerware"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TSM_AdvPrPa/Summary/01_FunctionalProgramming.tex", "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": ["Beerware"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "avg_line_length": 23.6268656716, "max_line_length": 226, "alphanum_fraction": 0.7378395452, "num_tokens": 2264, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300698514777, "lm_q2_score": 0.8152324826183822, "lm_q1q2_score": 0.5727253329590857}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{parskip}\n\\usepackage{amsmath}\n\n\\title{Backpropagation Mathematics}\n\\author{Brady Neal}\n\\date{August 2015}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Definitions}\n\nIf you are ever reading through the math in here and realize you don't know what something such as $w_{ij}^l$ means, just come back to this section and you'll find a concise definition.\n\nSuperscripts will always refer to the layer, and subscripts will always refer to the neuron in that layer. All indexing will start from 1, not 0.\n\n$b_i^l$ - bias of the $i$\\textsuperscript{th} neuron in the $l$\\textsuperscript{th} layer\n\n$w_{ij}^l$ - weight from the $j$\\textsuperscript{th} neuron in layer $l - 1$ to the $i$\\textsuperscript{th} neuron in layer $l$\n\n$z_i^l$ - weighted input to the activation function of the $i$\\textsuperscript{th} neuron in the $l$\\textsuperscript{th} layer\n\\[z_i^l = \\sum_j w_{ij}^l a_j^{l - 1} + b_i^l\\]\nVectorized version:\n\\[z^l = w^l a^{l - 1} + b^l\\]\n\nThe activation function used is the sigmoid function:\n\\[\\sigma(z) = \\frac{1}{1 + e^{-z}}\\]\n\n$a_i^l$ - activation of the $i$\\textsuperscript{th} neuron in the $l$\\textsuperscript{th} layer\n\\[a_i^l = \\sigma(z_i^l) = \\sigma\\left(\\sum_j w_{ij}^l a_j^{l - 1} + b_i^l\\right)\\]\nVectorized version:\n\\[a^l = \\sigma(z^l) = \\sigma(w^l a^{l - 1} + b^l)\\]\n\n$L$ - number of layers in the neural network\n\n$x^{(i)}$ - feature vector corresponding to the $i$\\textsuperscript{th} training example\n\n$y^{(i)}$ - correct output value for the $i$\\textsuperscript{th} training example\n\n$a = a^L$ - hypothesis (prediction) of the neural network for a training example (i.e. $a^{(i)}$ is the neural network output that corresponds to $x^{(i)}$)\n\n$\\odot$ - the Hadamard product operator (element-wise multiplication between n-dimensional arrays)\n\nThe cost function used is the cross entropy error function:\n\\[C = - \\frac{1}{m} \\sum_{i = 1}^m \\left[y^{(i)} \\ln a^{(i)} + (1 - y^{(i)}) \\ln \\left(1 - a^{(i)}\\right)\\right]\\]\n\nwhere $m$ is the number of training examples, $a^{(i)}$ is the output layer of the neural network for the $i$\\textsuperscript{th} training example, and $y^{(i)}$ is the correct output for the $i$\\textsuperscript{th} training example.\n\nThe above cost function is over all training examples $x^{(i)}$. Both $y^{(i)}$ and $a^{(i)}$ are commonly vectors, so it is more explicitly written as\n\\[C = - \\frac{1}{m} \\sum_{i = 1}^m \\sum_j \\left[y_j^{(i)} \\ln a_j^{(i)} + (1 - y_j^{(i)}) \\ln \\left(1 - a_j^{(i)}\\right)\\right]\\]\nwhere $y_j^{(i)}$ and $a_j^{(i)}$ are the $j$\\textsuperscript{th} elements in the $y^{(i)}$ and $a^{(i)}$ vectors respectively.\n\nHowever, I will mostly be referring to the cost of a single training example in vectorized form, $C_x$:\n\\[C_x = y \\ln a + (1 - y) \\ln (1 - a)\\]\nThis simplifies the notation a great deal as we no longer have to worry about the many indices and summations. 
Therefore, when I say $C$ below, I'm often referring to $C_x$.\n\n\\section{Proofs}\n\nBecause we want to find the weights and biases that will minimize the cost function, the ultimate goal of backpropagation is to find the following:\n\\[\\frac{\\partial C}{\\partial w_{ij}^l} \\text{ and } \\frac{\\partial C}{\\partial b_i^l}\\]\nWe need to get four fundamental equations first:\n\\begin{flalign*}\n&\\delta_i^L = \\frac{\\partial C}{\\partial a_i^L} \\sigma'(z_i^L)\n\\qquad \\qquad \\qquad \\quad\n\\delta^L = \\nabla_a C \\odot \\sigma'(z^L) \\\\\n&\\delta_j^l = \\sum_i w_{ij}^{l + 1} \\delta_i^{l + 1} \\sigma'(z_j^l)\n\\qquad \\qquad\n\\delta^l = \\left(\\left(w^{l + 1}\\right)^T \\delta^{l + 1}\\right) \\odot \\sigma'(z^l) \\\\\n&\\frac{\\partial C}{\\partial w_{ij}^l} = \\delta_i^l a_j^{l - 1} \n\\qquad \\qquad \\qquad \\qquad\n\\frac{\\partial C}{\\partial w^l} = \\delta^l \\left(a^{l - 1}\\right)^T \\\\\n&\\frac{\\partial C}{\\partial b_i^l} = \\delta_i^l\n\\qquad \\qquad \\qquad \\qquad \\qquad \\enspace\n\\frac{\\partial C}{\\partial b^l} = \\delta^l\n\\end{flalign*}\n\n\\subsection{An Intermediate Quantity: $\\delta_i^l$}\nIn order to get $\\frac{\\partial C}{\\partial w_{ij}^l}$ and $\\frac{\\partial C}{\\partial b_i^l}$, we must first define an intermediate quantity that we'll call the error:\n\\[\\delta_i^l = \\frac{\\partial C}{\\partial z_i^l}\\]\nThe first step is to get the error in the output layer $L$:\n\\[\\delta_i^L = \\frac{\\partial C}{\\partial z_i^L} = \\frac{\\partial C}{\\partial a_i^L} \\frac{d a_i^L}{d z_i^L} \\text{ or } \\frac{\\partial C}{\\partial a_i^L} \\sigma'(z_i^L)\\]\nVectorized:\n\\[\\delta^L = \\nabla_a C \\odot \\sigma'(z^L)\\]\nNow that we have the error in the output layer, $\\delta^L$, we need an equation that relates $\\delta^l$ to $\\delta^{l + 1}$ in order to inductively solve for $\\delta^l$.\n\\[\\delta_j^l = \\frac{\\partial C}{\\partial z_j^l} = \\sum_i \\frac{\\partial C}{\\partial z_i^{l + 1}} \\frac{\\partial z_i^{l + 1}}{\\partial z_j^l} = \\sum_i \\delta_i^{l + 1} \\frac{\\partial z_i^{l + 1}}{\\partial z_j^l}\\]\nIn order to get $\\frac{\\partial z_i^{l + 1}}{\\partial z_j^l}$, recall $z_i^{l + 1}$:\n\\[z_i^{l + 1} = \\sum_j w_{ij}^{l + 1} a_j^l + b_i^{l + 1} = \\sum_j w_{ij}^{l + 1} \\sigma(z_j^l) + b_i^{l + 1}\\]\n\\[\\frac{\\partial z_i^{l + 1}}{\\partial z_j^l} = w_{ij}^{l + 1} \\sigma'(z_j^l)\\]\nThus,\n\\[\\delta_j^l = \\sum_i \\delta_i^{l + 1} \\frac{\\partial z_i^{l + 1}}{\\partial z_j^l} = \\sum_i \\delta_i^{l + 1} w_{ij}^{l + 1} \\sigma'(z_j^l) = \\sum_i w_{ij}^{l + 1} \\delta_i^{l + 1} \\sigma'(z_j^l)\\]\nVectorized:\n\\[\\delta^l = \\left(\\left(w^{l + 1}\\right)^T \\delta^{l + 1}\\right) \\odot \\sigma'(z^l)\\]\n\n\\subsection{$\\frac{\\partial C}{\\partial w_{ij}^l}$ and $\\frac{\\partial C}{\\partial b_i^l}$ in terms of $\\delta_i^l$}\n\n\\[\\frac{\\partial C}{\\partial w_{ij}^l} = \\frac{\\partial C}{\\partial z_i^l} \\frac{\\partial z_i^l}{\\partial w_{ij}^l} = \\delta_i^l \\frac{\\partial z_i^l}{\\partial w_{ij}^l}\\]\n\\[\\frac{\\partial C}{\\partial b_i^l} = \\frac{\\partial C}{\\partial z_i^l} \\frac{\\partial z_i^l}{\\partial b_i^l} = \\delta_i^l \\frac{\\partial z_i^l}{\\partial b_i^l}\\]\nIn order to get $\\frac{\\partial z_i^l}{\\partial w_{ij}^l}$ and $\\frac{\\partial z_i^l}{\\partial b_i^l}$, recall $z_i^l$:\n\\[z_i^l = \\sum_j w_{ij}^l a_j^{l - 1} + b_i^l\\]\n\\[\\frac{\\partial z_i^l}{\\partial w_{ij}^l} = a_j^{l - 1}\n\\qquad \\qquad \n\\frac{\\partial z_i^l}{\\partial b_i^l} = 
1\\]\nThus,\n\\[\\frac{\\partial C}{\\partial w_{ij}^l} = \\delta_i^l \\frac{\\partial z_i^l}{\\partial w_{ij}^l} = \\delta_i^l a_j^{l - 1}\\]\n\\[\\frac{\\partial C}{\\partial b_i^l} = \\delta_i^l \\frac{\\partial z_i^l}{\\partial b_i^l} = \\delta_i^l\\]\nVectorized:\n\\[\\frac{\\partial C}{\\partial w^l} = \\delta^l \\left(a^{l - 1}\\right)^T\\]\n\\[\\frac{\\partial C}{\\partial b^l} = \\delta^l\\]\n\n\\section{Specific Activation Function and Cost Function}\n\nAll of the math in the previous section refers to the activation function as $\\sigma$ and the cost function as $C$. Because we defined a specific activation function and cost function at the beginning of the paper, we will now work out the math that will be necessary for an implementation of backpropagation.\n\nIn other words, $\\sigma'(z_i^L)$ and $\\frac{\\partial C}{\\partial a_i^L}$ are part of the fundamental equations above, so we'll calculate them with respect to our chosen activation function and cost function.\n\n\\subsection{Derivative of The Sigmoid Function}\nRecall the sigmoid function:\n\\[\\sigma(z) = \\frac{1}{1 + e^{-z}} = \\left(1 + e^{-z}\\right)^{-1}\\]\n\\[\\sigma'(z) = -\\left(1 + e^{-z}\\right)^{-2} \\cdot -e^{-z} = e^{-z} \\left(1 + e^{-z}\\right)^{-2}\\]\n\\[\\sigma'(z) = \\frac{e^{-z}}{\\left(1 + e^{-z}\\right)^{2}} = \\frac{1}{1 + e^{-z}} \\cdot \\frac{e^{-z}}{1 + e^{-z}}\\]\n\\[\\sigma'(z) = \\frac{1}{1 + e^{-z}} \\left(1 - \\frac{1}{1 + e^{-z}}\\right)\\]\nThus,\n\\[\\sigma'(z) = \\sigma(z) \\left(1 - \\sigma(z)\\right)\\]\n\n\\subsection{Partial Derivative of The Cross Entropy Error Function}\nRecall the cross entropy error function:\n\\[C = - \\frac{1}{m} \\sum_{i = 1}^m \\left[y^{(i)} \\ln a^{(i)} + (1 - y^{(i)}) \\ln \\left(1 - a^{(i)}\\right)\\right]\\]\nwhere $m$ is the number of training examples, $a^{(i)}$ is the output layer of the neural network for the $i$\\textsuperscript{th} training example, and $y^{(i)}$ is the correct output for the $i$\\textsuperscript{th} training example.\n\\[\\frac{\\partial C}{\\partial a^{(i)}} = -\\left(y^{(i)} \\cdot \\frac{1}{a^{(i)}} + (1 - y^{(i)}) \\cdot \\frac{1}{1 - a^{(i)}} \\cdot -1\\right)\\]\n\\[\\frac{\\partial C}{\\partial a^{(i)}} = \\frac{\\left(1 - y^{(i)}\\right)}{1 - a^{(i)}} - \\frac{y^{(i)}}{a^{(i)}} = \\frac{a^{(i)}\\left(1 - y^{(i)}\\right) - y^{(i)}\\left(1 - a^{(i)}\\right)}{a^{(i)} \\left(1 - a^{(i)}\\right)}\\]\n\\[\\frac{\\partial C}{\\partial a^{(i)}} = \\frac{a^{(i)} - a^{(i)} y^{(i)} - y^{(i)} + a^{(i)} y^{(i)}}{a^{(i)} \\left(1 - a^{(i)}\\right)}\\]\nThus,\n\\[\\frac{\\partial C}{\\partial a^{(i)}} = \\frac{a^{(i)} - y^{(i)}}{a^{(i)} \\left(1 - a^{(i)}\\right)}\\]\nNotice that the denominator looks quite familiar. 
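\n\nBefore relating this expression to $\\sigma'$, here is a quick finite-difference sanity check of the two derivative formulas just derived (an illustration only; the test point, label, and step size are arbitrary assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef cost(a, y):\n    # single-output cross entropy: -[y ln a + (1 - y) ln(1 - a)]\n    return -(y * np.log(a) + (1 - y) * np.log(1 - a))\n\nz, y, eps = 0.3, 1.0, 1e-6\na = sigmoid(z)\n\n# sigma'(z) = sigma(z) (1 - sigma(z))  vs. a central difference\nprint(a * (1 - a), (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps))\n\n# dC/da = (a - y) / (a (1 - a))  vs. a central difference\nprint((a - y) / (a * (1 - a)), (cost(a + eps, y) - cost(a - eps, y)) / (2 * eps))\n\\end{verbatim}\nBoth printed pairs should agree closely.\n\n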
Recall $a^{(i)} = \\sigma\\left(z^L\\right)^{(i)}$, and we can relate the function to $\\sigma'$ (let's drop the indices for simplicity):\n\\[\\frac{\\partial C}{\\partial a} = \\frac{a - y}{a (1 - a)} = \\frac{a - y}{\\sigma(z) \\left(1 - \\sigma(z)\\right)} = \\frac{a - y}{\\sigma'(z)}\\]\nSo if we ever had to find $\\frac{\\partial C}{\\partial a} \\sigma'(z)$, we'd magically get\n\\[\\frac{\\partial C}{\\partial a} \\sigma'(z) = \\frac{a - y}{\\sigma'(z)} \\sigma'(z) = a - y\\]\n\n\\subsection{Rewriting the Four Fundamental Equations}\nNow that we know $\\sigma'(z)$ and $\\frac{\\partial C}{\\partial a}$ for our specific activation function and cost function, we can rewrite the backpropagation equations more specifically.\n\nHere are their more generalized versions:\n\\begin{flalign*}\n&\\delta_i^L = \\frac{\\partial C}{\\partial a_i^L} \\sigma'(z_i^L)\n\\qquad \\qquad \\qquad \\quad\n\\delta^L = \\nabla_a C \\odot \\sigma'(z^L) \\\\\n&\\delta_j^l = \\sum_i w_{ij}^{l + 1} \\delta_i^{l + 1} \\sigma'(z_j^l)\n\\qquad \\qquad\n\\delta^l = \\left(\\left(w^{l + 1}\\right)^T \\delta^{l + 1}\\right) \\odot \\sigma'(z^l) \\\\\n&\\frac{\\partial C}{\\partial w_{ij}^l} = \\delta_i^l a_j^{l - 1} \n\\qquad \\qquad \\qquad \\qquad\n\\frac{\\partial C}{\\partial w^l} = \\delta^l \\left(a^{l - 1}\\right)^T \\\\\n&\\frac{\\partial C}{\\partial b_i^l} = \\delta_i^l\n\\qquad \\qquad \\qquad \\qquad \\qquad \\enspace\n\\frac{\\partial C}{\\partial b^l} = \\delta^l\n\\end{flalign*}\n\nHere are their more specific versions:\n\\begin{flalign*}\n&\\delta^L = a^L - y \\\\\n&\\delta^l = \\left(\\left(w^{l + 1}\\right)^T \\delta^{l + 1}\\right) \\odot \\left(a^l \\odot \\left(1 - a^l\\right)\\right) \\\\\n&\\frac{\\partial C}{\\partial w^l} = \\delta^l \\left(a^{l - 1}\\right)^T \\\\\n&\\frac{\\partial C}{\\partial b^l} = \\delta^l\n\\end{flalign*}\n\n\\section{Regularization}\nIn order to reduce overfitting, we're going to add L2 regularization to our cost function.\n\nRecall the cross entropy error function that we used as our cost function. 
\n\\section{Regularization}\nIn order to reduce overfitting, we're going to add L2 regularization to our cost function.\n\nRecall the cross entropy error function that we used as our cost function. We're now going to call this $C_0$:\n\\[C_0 = - \\frac{1}{m} \\sum_{i = 1}^m \\left[y^{(i)} \\ln a^{(i)} + (1 - y^{(i)}) \\ln \\left(1 - a^{(i)}\\right)\\right]\\]\nOur new cost function with L2 regularization:\n\\[C = - \\frac{1}{m} \\sum_{i = 1}^m \\left[y^{(i)} \\ln a^{(i)} + (1 - y^{(i)}) \\ln \\left(1 - a^{(i)}\\right)\\right] + \\frac{\\lambda}{2m} \\sum_l \\sum_j \\sum_k \\left(w_{jk}^{(l)}\\right)^2\\]\nwhere the regularization sum runs over every weight in the network, indexed by layer $l$. Now that we've changed the cost function, we have to figure out how that changes $\\frac{\\partial C}{\\partial w_{jk}^{(l)}}$ and $\\frac{\\partial C}{\\partial b_j^{(l)}}$. Because we already know $\\frac{\\partial C_0}{\\partial w_{jk}^{(l)}}$ and $\\frac{\\partial C_0}{\\partial b_j^{(l)}}$, let's start by rewriting $C$ in terms of $C_0$:\n\\[C = C_0 + \\frac{\\lambda}{2m} \\sum_l \\sum_j \\sum_k \\left(w_{jk}^{(l)}\\right)^2\\]\nThus,\n\\[\\frac{\\partial C}{\\partial w_{jk}^{(l)}} = \\frac{\\partial C_0}{\\partial w_{jk}^{(l)}} + \\frac{d}{d w_{jk}^{(l)}} \\frac{\\lambda}{2m} {w_{jk}^{(l)}}^2\\]\n\\[\\frac{\\partial C}{\\partial w_{jk}^{(l)}} = \\frac{\\partial C_0}{\\partial w_{jk}^{(l)}} + \\frac{\\lambda}{m} w_{jk}^{(l)}\\]\nTherefore, the new gradient descent update would be:\n\\[w_{jk}^{(l)} \\leftarrow w_{jk}^{(l)} - \\alpha \\left( \\frac{\\partial C_0}{\\partial w_{jk}^{(l)}} + \\frac{\\lambda}{m} w_{jk}^{(l)} \\right)\\]\n\\[w_{jk}^{(l)} \\leftarrow \\left( 1 - \\frac{\\alpha \\lambda}{m} \\right) w_{jk}^{(l)} - \\alpha \\frac{\\partial C_0}{\\partial w_{jk}^{(l)}}\\]\nwhere $\\alpha$ is the learning rate. \\\\\n\nBecause $\\frac{\\lambda}{2m} \\sum\\limits_l \\sum\\limits_j \\sum\\limits_k \\left(w_{jk}^{(l)}\\right)^2$ has no dependence on $b_j^{(l)}$,\n\\[\\frac{\\partial C}{\\partial b_j^{(l)}} = \\frac{\\partial C_0}{\\partial b_j^{(l)}}\\]\n\n\\end{document}", "meta": {"hexsha": "6c75ffe63ba6ef10a989c9209f20812d6ddacd4c", "size": 12281, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "backpropagation_math/backprop_math.tex", "max_stars_repo_name": "bradyneal/neural-networks", "max_stars_repo_head_hexsha": "078e544a6863af46bb743351b6a906973f4ad1a2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-08-11T17:08:07.000Z", "max_stars_repo_stars_event_max_datetime": "2015-08-11T17:08:07.000Z", "max_issues_repo_path": "backpropagation_math/backprop_math.tex", "max_issues_repo_name": "bradyneal/neural-networks", "max_issues_repo_head_hexsha": "078e544a6863af46bb743351b6a906973f4ad1a2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "backpropagation_math/backprop_math.tex", "max_forks_repo_name": "bradyneal/neural-networks", "max_forks_repo_head_hexsha": "078e544a6863af46bb743351b6a906973f4ad1a2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.6368421053, "max_line_length": 330, "alphanum_fraction": 0.6311375295, "num_tokens": 4678, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5727253313822294}}
{"text": "\\documentclass{scribe}\n\\theoremstyle{plain}\n\\theoremheaderfont{\\upshape \\bfseries}\n\\theoremseparator{.}\n\\theoremsymbol{}\n\\theorembodyfont{\\itshape}\n\\newtheorem{problem}[theorem]{Problem}\n\\theoremstyle{empty}\n\\theoremheaderfont{\\upshape \\bfseries}\n\\theorembodyfont{\\upshape}\n\\theoremsymbol{\\ensuremath{\\square}}\n\\newtheorem{refproof}{Proof}\n\n\\setCourse{Mathematics for Computer Science (30470023)}\n\\setSemester{Spring 2020}\n\\setInstructor{Andrew C. Yao}\n\\setLectureID{3}\n\\setLectureDate{Mar 2, 2020}\n\\setScribe{Han Luo}\n\n\\begin{document}\n\\notetitle\n    \n\\section{Overview}\nIn this lecture we discribe the solution to the Online Auction Problem. The surprising result is that it is the best to skip first $\\frac{1}{\\text{e}}$ offers. Then we introduce the concept of conditional probability, and also the distributive principle as well as the chain rule. Finally we introduce the expectation for a random variable. \n    \n\\section{The Online Auction Problem (Secretary's Problem/Beauty contest)}\n    \nIn last lecture we introduced the Online Auction Problem:\n    \n\\begin{problem}\n    A music concert ticket is selling. There is a stream of $n$ offers with prices $x_1,x_2,\\cdots,x_n$, for what strategy can we get the best price with highest probability?\n\\end{problem}\n\n\\begin{remark}\n    The strategy is non-regrettable, means that we cannot go back to the previous offers we rejected.\n\\end{remark}\n\n\\subsection{Formalizing}\n\nWe can simply assume that $x_1,x_2,\\cdots,x_n$ is a permutation of $1,2,\\cdots,n$. The probability space $\\mathbb{P}=(\\mathcal{U},p)$ is denoted by\n\\begin{itemize}\n    \\item $\\mathcal{U}$ is the set of all permutations of $\\{1,2,\\cdots,n\\}$;\n    \\item For all $u\\in \\mathcal{U}$, $p(u)=\\dfrac{1}{n!}$;\n    \\item $T$ is the event of accepting the best offer (i.e. the price $n$).\n\\end{itemize}\n\nFor integer $k(1\\le k<n)$, we denote strategy $k$ as\n\\begin{itemize}\n    \\item[1.] Skip the first $k$ offer;\n    \\item[2.] For $j>k$, take the first $x_j$, such that $x_j>\\max\\{x_1,x_2,\\cdots,x_k\\}$.\n\\end{itemize}\nThen the probability of getting the best price $p_{n,k}=\\dfrac{|T|}{|\\mathcal{U}|}=\\dfrac{|T|}{n!}$. To calculate it, we should first obtain the properties of the permutations where we can reach the best price.\n\nA permutation $u=(x_1,x_2,\\cdots,x_n)$ where $x_j=n$ is in $T$ iff\n\\begin{itemize}\n    \\item[1.] $j>k$;\n    \\item[2.] 
$\\max\\{x_1,x_2,\\cdots,x_{j-1}\\}=\\max\\{x_1,x_2,\\cdots,x_{k}\\}$.\n\\end{itemize}\n\nTo continue the calculation, we first introduce two basic principles.\n\n\\subsection{Two basic principles}\n\n\\begin{theorem}[Addition principle]\n    If $S=S_1\\cup S_2\\cup\\cdots\\cup S_m$ is a disjoint union, then\n    $$|S|=\\sum_{i=1}^m|S_i|.$$\n\\end{theorem}\n\n\\begin{remark}\n    As a simple application of the addition principle, for a general (not necessarily disjoint) union we have $\\text{Pr}(S)\\le\\sum\\limits_{i=1}^m \\text{Pr}(S_i)$; the equality holds iff the $S_i$ are disjoint.\n\\end{remark}\n\n\\begin{theorem}[Multiplication principle]\n    If $S$ has items characterized by $s\\in S, s=(i_1,i_2,\\cdots,i_l)$, and $1\\le i_j\\le c_j$ for all $1\\le j\\le l$, then\n    $$|S|=c_1c_2\\cdots c_l.$$\n\\end{theorem}\n\n\\subsection{The solution of the Online Auction Problem}\n\n\\begin{lemma} \\label{lmm:num-permutation}\n    Denote $T_j$ as the set of permutations in $T$ such that $x_j=n$; then\n    $$|T_j|=n!\\cdot\\frac{1}{n}\\cdot\\frac{k}{j-1}.$$\n\\end{lemma}\n\nWe will prove this lemma later. \n\nApplying the lemma, we get that\n$$|T|=\\sum_{j>k}|T_j|=n!\\cdot\\frac{k}{n}\\left(\\frac{1}{k}+\\frac{1}{k+1}+\\cdots+\\frac{1}{n-1}\\right),$$\nso\n\\begin{align*}\n    p_{n,k}&=\\frac{k}{n}\\left(\\frac{1}{k}+\\frac{1}{k+1}+\\cdots+\\frac{1}{n-1}\\right) \\\\\n    & \\approx\\frac{k}{n}\\left(\\ln n-\\ln k\\right) \\\\\n    & =\\frac{k}{n}\\ln \\frac{n}{k}\n\\end{align*}\nBy differentiation we get that when $k=\\dfrac{1}{\\text{e}}n$ the probability reaches the maximum $\\dfrac{1}{\\text{e}}$.\n
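\nAs a quick empirical check (not part of the original lecture), the following Monte Carlo sketch estimates $p_{n,k}$ for a few cutoffs $k$; for $n=100$ the maximum should appear near $k=n/\\text{e}\\approx 37$, with value close to $1/\\text{e}\\approx 0.368$:\n\\begin{verbatim}\nimport random\n\ndef trial(n, k):\n    x = list(range(1, n + 1))\n    random.shuffle(x)\n    best_seen = max(x[:k])\n    for j in range(k, n):\n        if x[j] > best_seen:\n            return x[j] == n  # is the accepted offer the best one?\n    # No offer exceeded the first k, i.e. the best offer was skipped.\n    return False\n\nn, trials = 100, 20000\nfor k in (10, 25, 37, 50, 75):\n    p = sum(trial(n, k) for _ in range(trials)) / trials\n    print(k, round(p, 3))\n\\end{verbatim}\n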
\n\\section{Conditional Probability}\n\n\\begin{definition}\n    Let $V,W$ be two events such that $\\text{Pr}(W)\\neq 0$. The conditional probability of $V$ (conditioned on $W$) is defined by\n    $$\\text{Pr}(V|W)=\\frac{\\text{Pr}(V\\cap W)}{\\text{Pr}(W)}.$$\n\\end{definition}\n\\begin{remark} \n    \\begin{itemize}\n        \\item[1.] $V\\cap W$ represents the event that both $V$ and $W$ happen simultaneously;\n        \\item[2.] The conditional probability can be seen as a restricted probability space $\\mathbb{P'}=(W,p')$ with $p'(u)=\\dfrac{p(u)}{\\text{Pr}(W)}$;\n        \\item[3.] In particular, define $\\text{Pr}(V|W)=0$ if $\\text{Pr}(W)=0$.\n    \\end{itemize}\n\\end{remark}\n\n\\begin{theorem}[Distributive principle]\n    Let $W_1, W_2, \\cdots, W_m$ be disjoint events, and their union is $W=W_1\\cup W_2\\cup\\cdots\\cup W_m$, then for arbitrary event $T$, we have \n    $$\\Pr(T|W)=\\frac{\\sum_{i=1}^m\\text{Pr}(W_i)\\text{Pr}(T|W_i)}{\\text{Pr}(W)}.$$ \n\\end{theorem}\n\n\\begin{proof}\n    Since $W_1,W_2,\\cdots,W_m$ are disjoint, we get that $T\\cap W_1,T\\cap W_2,\\cdots,T\\cap W_m$ are disjoint and their union is $T\\cap W$. So by the addition principle,\n\\begin{align*}\n    \\sum_{i=1}^m\\text{Pr}(W_i)\\text{Pr}(T|W_i) & = \\sum_{i=1}^m \\text{Pr}(T\\cap W_i) \\\\\n    &= \\text{Pr}(T\\cap W) \\\\\n    &=\\text{Pr}(T|W)\\text{Pr}(W).\n\\end{align*}\n\\end{proof}\n\nThis distributive principle can be applied to the Online Auction Problem.\n\n\\begin{example}[Proof of Lemma \\ref{lmm:num-permutation}]\n    In the Online Auction Problem, we denote $W_j(j\\ge k+1)$ to be the set of permutations such that $x_j=n$, then $\\mathcal{U}=W_{k+1}\\cup \\cdots\\cup W_n$, and $\\text{Pr}(W_j)=\\dfrac{1}{n}$. Since $\\text{Pr}(T|W_j)$ is the probability that we can get the best price in the case of $x_j=n$, i.e. the maximum of $x_1,x_2,\\cdots,x_{j-1}$ appears in $x_1,x_2,\\cdots,x_k$, we get $\\text{Pr}(T|W_j)=\\dfrac{k}{j-1}$. This gives a brief proof of Lemma \\ref{lmm:num-permutation}.\n\\end{example}\n\n\\begin{theorem}[Chain Rule]\n    Let $V_1, V_2,\\cdots, V_k$ be events (not necessarily disjoint), and let $T=V_1\\cap V_2\\cap\\cdots\\cap V_k$ be the event that they happen simultaneously; then\n    $$\\text{Pr}(T)=\\text{Pr}(V_1)\\text{Pr}(V_2|V_1)\\text{Pr}(V_3|V_1\\cap V_2)\\cdots\\text{Pr}(V_k|V_1\\cap V_2\\cap \\cdots\\cap V_{k-1})$$\n\\end{theorem}\n\n\\begin{remark}\n    The chain rule is a generalization of the ``multiplication principle''.\n\\end{remark}\n\n\\begin{example}\n    In the birthday paradox, denote $T$ to be the event that everyone in the class has a unique birthday, and $V_j$ the event that students \\#1, \\#2, $\\cdots$, \\#$j$ have different birthdays, so $T=V_1\\cap V_2\\cap\\cdots\\cap V_k$, and\n    $$\\Pr(V_j|V_1\\cap V_2\\cap\\cdots\\cap V_{j-1})=1-\\frac{j-1}{n}.$$\n\n    Applying the Chain Rule we get \n    $$\\text{Pr}(T)=\\text{Pr}(V_1)\\text{Pr}(V_2|V_1)\\cdots\\text{Pr}(V_k|V_1\\cap V_2\\cap \\cdots\\cap V_{k-1})=\\prod_{i=0}^{k-1}\\left(1-\\frac{i}{n}\\right).$$\n\\end{example}\n\n\\section{Expectation}\n\n\\begin{definition}[Expectation]\n    For a probability space $\\mathbb{P}=(\\mathcal{U},p)$, a random variable $X$ over $\\mathcal{U}$ is a real-valued mapping $X:\\mathcal{U}\\to (-\\infty,+\\infty)$.\n    \n    The expectation of $X$ is defined by \n    $$\\mathbb{E}(X)=\\sum_{u\\in \\mathcal{U}}X(u)p(u).$$\n\\end{definition}\n\n\\begin{theorem}[Linearity of expectation] If random variable $X=X_1+X_2$, then\n    $$\\mathbb{E}(X)=\\mathbb{E}(X_1)+\\mathbb{E}(X_2).$$\n\\end{theorem}\n\n\\begin{proof} By the definition of expectation,\n    $$\\mathbb{E}(X)=\\sum_{u\\in \\mathcal{U}}p(u)X(u)=\\sum_{u\\in \\mathcal{U}}p(u)(X_1(u)+X_2(u))=\\mathbb{E}(X_1)+\\mathbb{E}(X_2).$$\n\\end{proof}\n\n\\begin{example}\n    Throw $n$ coins independently, where each coin has bias $b$, i.e. \n    $$\n    \\begin{cases}\n    \\text{Pr}(X_i=1)=b \\\\\n    \\text{Pr}(X_i=0)=1-b\n    \\end{cases}.\n    $$\n    \n    Denote $X$ to be the number of $1$s you throw. Then $\\mathbb{E}(X)=\\mathbb{E}(X_1)+\\mathbb{E}(X_2)+\\cdots+\\mathbb{E}(X_n)=bn$.\n\\end{example}\n\n\\begin{example}\n    Take a random permutation $u$ of $\\{1,2,\\cdots,n\\}$, look at its cycle representation, and denote by $X(u)$ the number of cycles in it.\n    \n    To calculate the expectation of $X(u)$, we can decompose the random variable $X(u)$ into more easily calculated random variables. Denote by $X_i=\\dfrac{1}{l}$ the random variable associated with $i$, where $l$ is the length of the cycle containing $i$. Then $X=\\sum\\limits_{i=1}^nX_i$.\n\n    So $\\mathbb{E}(X)=\\sum\\limits_{i=1}^n\\mathbb{E}(X_i)$. The rest of the problem is solved in homework.\n\\end{example}\n\n\\begin{example}\n    We play a game on a rooted tree. In every step we randomly pick a node $v$ and delete $v$ and all its descendants, and repeat the step until the tree is empty. Denote by $X$ the number of steps we will make before the tree is empty. The result is also worked out in the homework.\n\\end{example}
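\n\nAs a numerical companion to the cycle-counting example (not part of the original lecture), the decomposition $X=\\sum_{i=1}^nX_i$, together with the fact that the cycle containing $i$ has each length $1,\\cdots,n$ with probability $\\frac{1}{n}$, suggests $\\mathbb{E}(X)=\\sum_{i=1}^n\\frac{1}{i}$; the following Monte Carlo sketch is one way to check this (it anticipates part of the homework):\n\\begin{verbatim}\nimport random\n\ndef num_cycles(perm):\n    # Count cycles of the permutation i -> perm[i].\n    seen, cycles = set(), 0\n    for i in range(len(perm)):\n        if i not in seen:\n            cycles += 1\n            j = i\n            while j not in seen:\n                seen.add(j)\n                j = perm[j]\n    return cycles\n\nn, trials = 8, 50000\nperm = list(range(n))\nest = 0.0\nfor _ in range(trials):\n    random.shuffle(perm)\n    est += num_cycles(perm)\nprint(est / trials)                          # empirical E(X)\nprint(sum(1 / i for i in range(1, n + 1)))   # harmonic number H_n\n\\end{verbatim}\n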
\n\n\\end{document}", "meta": {"hexsha": "eee84750bbfdfb2a292e515cee62963a9890d7a8", "size": 8564, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scribe-notes/Week3/src/Week 3.tex", "max_stars_repo_name": "modaxiansheng/YAO_MATH", "max_stars_repo_head_hexsha": "bdd13f712974aa0939becb41afa8ce011323bee9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scribe-notes/Week3/src/Week 3.tex", "max_issues_repo_name": "modaxiansheng/YAO_MATH", "max_issues_repo_head_hexsha": "bdd13f712974aa0939becb41afa8ce011323bee9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scribe-notes/Week3/src/Week 3.tex", "max_forks_repo_name": "modaxiansheng/YAO_MATH", "max_forks_repo_head_hexsha": "bdd13f712974aa0939becb41afa8ce011323bee9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.8376963351, "max_line_length": 462, "alphanum_fraction": 0.6737505838, "num_tokens": 3042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.5727253306886091}}
{"text": "\n\\section{Optimal joint assignment (OJA)}\\label{sec:qp}\n\nSince heatmaps generated by the joint predictor are multi-modal, the non-maximum suppression procedure yields multiple possible locations for each joint. The set of joint proposals is represented as $X = \\{x_{jp}\\}$, where $x_{jp}$ indicates the 2D position of proposal $p \\in \\{1,...,N_j\\}$ associated with joint $j \\in J$.\nBefore applying the optimizer, a subset of proposals $X^* \\subseteq X$ should be selected in order to form a complete skeleton, i.e. precisely \\emph{one proposal is selected for every joint}. This section will consider how to choose the optimal subset by formulating the problem as an extended optimal assignment problem.\n\nIn order to select a complete skeleton proposal from the set of joint proposals $\\{x_{jp}\\}$, a binary indicator vector $\\bvec a_j = \\{a_{jp}\\} \\in \\{0, 1\\}^{N_j+1}$ is introduced, where $a_{jp} = 1$ indicates that the $p^\\text{th}$ proposal for joint $j$ is a correct assignment, and the $p = N_j+1$ position corresponds to a {\\em null proposal}, indicating that joint $j$ has no match in this image.\nThe null proposals are handled as described in each of the energy terms below.\nLet $A$ be the jagged array $[\\bvec a_j]_{j=1}^J$ containing all assignment variables (for the current frame), and let $X^* = X(A)$ denote the subset of points selected by the binary array $A$.\n\n\n% Please add the following required packages to your document preamble:\n% \\usepackage{booktabs}\n% \\usepackage{multirow}\n\\begin{table}\n    \\RawFloats\n    \\parbox{.48\\linewidth}{\n        \\strut\n        \\centering\n        \\begin{tabular}{@{}llll@{}}\n        \\toprule\n        Joint $j$               & Proposal $p$ & $x_{jp}$ & $a_{jp}$ \\\\ \n        \\midrule\n        \\multirow{3}{*}{Nose} & 0          & $(0, 2) $   & 1 \\\\\n                            & 1          & $(4, 2) $   & 0 \\\\\n                            & NULL       & ---         & 0 \\\\\n        \\midrule\n        Upper Leg             & NULL       & ---         & 1 \\\\\n        \\midrule\n        \\multirow{3}{*}{Paw}  & 0          & $(4, 2)$    & 0 \\\\\n                            & 1          & $(8, 10)$   & 1 \\\\\n                            & NULL       & ---         & 0 \\\\\n        \\midrule\n        \\multirow{4}{*}{Tail} & 0          & $(4, 2)$    & 0 \\\\\n                            & 1          & $(8, 10)$   & 0 \\\\\n                            & 2          & $(4, 2)$    & 1 \\\\\n                            & NULL       & ---         & 0 \\\\\n        \\bottomrule\n        \\end{tabular}\n        \\caption{Example inputs and output to the joint assignment problem. Non maximum suppression applied to predicted heatmap tensors yields a set of proposals $p$ for each each skeleton joint $j$ at 2D location $x_{jp}$. 
$a_{jp}$ is an illustrative assignment vector which is predicted by the OJA algorithm.}\n        \\label{tab:oja-example-inputs}\n    }\n    \\hfill\n    \\parbox{.48\\linewidth}{\n        \\strut\n        \\centering\n        \\parbox{\\linewidth}{\n            \\strut\n            \\centering\n            \\begin{tabular}{@{}lllll@{}}\n            \\toprule\n            Joint $j$ & \\multicolumn{4}{l}{Proposal $p$}                                                 \\\\\n            \\midrule\n            Nose      & 1 & 0                        & 0                        & \\cellcolor[HTML]{808080} \\\\\n            Upper Leg & 1 & \\cellcolor[HTML]{808080} & \\cellcolor[HTML]{808080} & \\cellcolor[HTML]{808080} \\\\\n            Paw       & 0 & 1                        & 0                        & \\cellcolor[HTML]{808080} \\\\\n            Tail      & 0 & 0                        & 1                        & 0                        \\\\\n            \\bottomrule\n            \\end{tabular}\n            \\caption{Assignment variables $A = [\\bvec a_j]$, with $\\bvec a_j = \\{a_{jp}\\} \\in \\{0, 1\\}^{N_j+1}$, for the current frame stored as a jagged array.}\n            \\label{tab:oja-proposals}\n        }\n        \\bigskip\n        \\parbox{\\linewidth}{\n            \\strut\n            \\centering\n            \\begin{tabular}{@{}ll@{}}\n                \\toprule\n                Joint $j$ & $X^* = X(A)$ \\\\\n                \\midrule\n                Nose      & $(0, 2)$     \\\\\n                Upper Leg & ---          \\\\\n                Paw       & $(8, 10)$    \\\\\n                Tail      & $(4, 2)$     \\\\\n                \\bottomrule\n            \\end{tabular}\n            \\caption{2D joint locations $X^* = X(A)$ selected by the assignment variable $A$.}\n            \\label{tab:oja-selected}\n        }\n    }\n\\end{table}        \n        \n\nOptimal assignment minimizes the function\n\\begin{equation}\nL(A) = \\LL{conf}(A) + \\LL{null}(A) + \\LL{prior}(A) + \\LL{temp}(A) + \\LL{cov-sil}(A) + \\LL{cov-bone}(A)\n\\end{equation}\nwhich balances agreement of the joint configuration with the network-supplied {\\em confidences}, a learned {\\em prior}, {\\em temporal} coherence, and {\\em coverage} terms which encourage the model to correctly project over the silhouette. Without the coverage terms, this can be optimized as a quadratic program, but better results are obtained by including the coverage terms, and using a genetic algorithm. In addition, the parameters $A$ must satisfy the $J$ constraints $\\sum_{p=1}^{N_j+1} a_{jp} = 1$, i.e. exactly one joint proposal (or the null proposal) must be selected for each joint.\n\n\\subsection{Basic formulation}\n\n\\subsubsection{Network confidences: $\\LL{conf}(A)$}\n\nThe first energy term $\\LL{conf}(A)$ comes from the output of the joint prediction network, which provides a confidence score $y_{jp}$ associated with each joint proposal~$x_{jp}$.  Then $\\LL{conf}(A) = \\sum_j\\sum_p -\\lambda_{\\text{conf}}\\log(y_{jp}) a_{jp}$ is a linear function of $A$, \nand $\\lambda_{\\text{conf}}$ is a tunable parameter to control the relative contribution of the network confidences compared with that of the skeleton prior. Note that if only this term is included, the OJA would simply produce the result of the standard non-maximum suppression algorithm, selecting the heatmap location with highest network confidence. 
Note that this function can be rewritten as\n\n\\begin{equation}\\label{eq:conf-energy}\n    \\LL{conf}(A) = -\\lambda_{\\text{conf}}\\log(y)^T \\text{vec}(A)\n\\end{equation}\nwhere $y$ is the stacked vector of confidence scores.\n\n\\subsubsection{Null proposals: $\\LL{null}(A)$}\n\nUnder some circumstances, for example when a body part is heavily occluded or ambiguous, \\emph{all} available proposals for a given joint may be of poor quality. In such a case, including any of these options in a skeleton configuration may have a detrimental impact on the later model fitting stage. Under these circumstances, it may be preferable to exclude the joint in question from the optimization entirely. Null proposals pay a fixed cost $\\lambda_{\\text{null}}$, effectively acting as a threshold whereby the null proposal will be selected if no other proposal is of sufficient likelihood. Precisely, a jagged array $D$ is defined by\n\n\\begin{equation}\n    D_{jp} = \\begin{cases}\n        \\lambda_{\\text{null}} & \\text{if } p \\text{ is the null proposal}\\\\\n        0 & \\text{otherwise}\n    \\end{cases}\n\\end{equation}\n\nand the resulting energy becomes\n\n\\begin{equation}\\label{eq:null-energy}\n    \\LL{null}(A) = \\text{vec}(D)^T \\text{vec}(A)\n\\end{equation}\n\n\\subsubsection{Skeleton Prior: $\\LL{prior}(A)$}\n\nThe next energy term is used to discourage anatomically implausible skeletal configurations from being selected. The prior probability of a skeletal assignment $A$ is represented as a multivariate Gaussian distribution over the selected joint positions $X^* = X(A)$\n\n% % Here, x is used as a general 2D coordinate\n\\begin{equation}\nP_{\\mathrm{prior}}(A) = \\frac{1}{\\sqrt[]{(2\\pi)^k\\left|\\Sigma\\right|}}\\exp\\left(-\\frac{1}{2}(x^*-\\mu)^T\\Sigma^{-1}(x^*-\\mu)\\right)\n\\end{equation}\n\nThe mean $\\mu \\in \\R{2J}$ and covariance $\\Sigma \\in \\R{2J\\times 2J}$ terms are obtained from synthetic training examples generated earlier. The prior is shown in \\Cref{fig:skeleton-prior}. The objective of the OJA is to find the assignment vector $A$ which maximizes this prior, which is equivalent to minimizing the negative log prior\n\\begin{equation}\n    \\LL{prior}(A) = \\frac{k}{2}\\log(2\\pi) + \\frac{1}{2}\\log\\left|\\Sigma\\right| + \\frac{1}{2}(x^*-\\mu)^T\\Sigma^{-1}(x^*-\\mu)\n\\end{equation}\n\nSince the first two terms are constant with respect to $A$, this reduces to a minimization of the Mahalanobis distance, which is given by the summation\n\\begin{equation}\n\\LL{prior}(A) = \\sum_j^J\\sum_p^{N_j}\\sum_k^J\\sum_q^{N_k}a_{jp}a_{kq}(x_{jp} - \\mu_j)^T\\Sigma_{jk}^{-1}(x_{kq}-\\mu_k)\n\\end{equation}\n\nNotice this is a quadratic function of $A$, so $\\LL{prior}(A) = \\text{vec}(A)^\\top Q \\text{vec}(A)$ for a fixed matrix $Q$. Precisely, the elements of the matrix $Q$ are the Mahalanobis terms between each individual pair of joint proposals\n\\begin{equation}\n\\left[Q\\right]_{jp, kq} = (x_{jp} - \\mu_j)^T\\Sigma_{jk}^{-1}(x_{kq}-\\mu_k)\n\\end{equation}\n\nNull proposals are simply excluded from the sum, equivalent to marginalizing over their position. 
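\n\nAs an illustration, the following NumPy sketch (with hypothetical variable names; not the implementation used in this work) assembles the matrix $Q$ from the prior $(\\mu, \\Sigma)$ and the per-joint proposal lists:\n\\begin{verbatim}\nimport numpy as np\n\ndef build_Q(proposals, mu, Sigma):\n    # proposals: list over joints j of (N_j, 2) arrays of 2D locations\n    # mu: (2J,) stacked mean; Sigma: (2J, 2J) covariance\n    Sinv = np.linalg.inv(Sigma)\n    sizes = [len(P) for P in proposals]\n    offs = np.cumsum([0] + sizes)\n    Q = np.zeros((offs[-1], offs[-1]))\n    for j, Pj in enumerate(proposals):\n        for k, Pk in enumerate(proposals):\n            # 2x2 block of Sigma^{-1} coupling joints j and k\n            Sjk = Sinv[2*j:2*j+2, 2*k:2*k+2]\n            dj = Pj - mu[2*j:2*j+2]\n            dk = Pk - mu[2*k:2*k+2]\n            Q[offs[j]:offs[j+1], offs[k]:offs[k+1]] = dj @ Sjk @ dk.T\n    # vec(A)^T Q vec(A) is then the prior energy (null proposals excluded)\n    return Q\n\\end{verbatim}\n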
\n\n\n\\begin{figure}[t]\n\\def\\bb{\\rule{2in}{0pt}\\rule{0pt}{1in}}\n\\begin{tabular}{cccc}\n\\includegraphics[width=0.25\\linewidth]{skeletal_prior_gen/63_rgb.png} & \\includegraphics[width=0.25\\linewidth]{skeletal_prior_gen/36_rgb.png} & \\includegraphics[width=0.25\\linewidth]{skeletal_prior/74.png} & \\includegraphics[width=0.25\\linewidth]{skeletal_prior/108.png} \\\\\n(a) & (b) & (c) & (d) \\\\\n\\end{tabular}\n\\caption{Skeleton Prior: Synthetic quadruped training data examples (rendered with texture to show 3D) generated by sampling pose, shape and position parameters and applying to SMAL model (a), (b). The set of 2D skeletal positions are used to create a vector of means $\\mu \\in \\mathbb{R}^{2J}$ and covariance matrix $\\Sigma \\in \\mathbb{R}^{2J \\times 2J}$. The Gaussian distribution constructed can be sampled to create new skeletons, such as those shown in (c), (d).}\n\\label{fig:skeleton-prior}\n\\end{figure} \n\n\\subsubsection{Temporal Prior: $\\LL{temp}(A)$}\n% TODO: Consider bringing in qp_defs/IMAGES to help explain and draw matrices Q\nA common failure case of the joint prediction network is in situations where a joint position is highly ambiguous, for example between the left and right legs. In such cases, the algorithm will commonly alternate between two equally likely predictions. This leads to large displacements in joint positions between consecutive frames which are difficult for the later model fitting stage to recover from. This can be addressed by introducing a temporal term into the OJA. A prior is imposed on the distance moved by each joint between frames $t_0$ and $t_1$, which is given by a normal distribution with zero mean and variance $\\sigma^{2} =e^{\\alpha|t_1 - t_0 - 1|}$. \nThe parameter $\\alpha$ controls the strength of the interaction between distant frames. This results in an additional quadratic term in our objective function, which has the form $L_{temp} = a^\\top T^{(t_0, t_1)} a$ for matrix $T^{(t_0, t_1)}$ given by \n\\begin{equation}\n\\left[T^{(t_0, t_1)}\\right]_{jp, kq} = \\begin{cases}\ne^{-\\alpha|t_1 - t_0 - 1|}||x^{(t_0)}_{jp} - x^{(t_1)}_{kq}||^2 & \\text{if } j=k\\\\\n0 & \\text{otherwise}\n\\end{cases}\n\\end{equation}\n\n\\subsection{QP solution.}\nA general quadratic program is made up of a quadratic objective function and linear equality and inequality constraints. Thus far, all terms in $L(A)$ are quadratic or linear. \n\nTo optimize over a sequence of frames, we construct the block diagonal matrix $\\hat{Q}$ whose diagonal elements are the prior matrices $Q^{(t)}$ and off-diagonal elements are the temporal matrices $T^{(t_0, t_1)}$. The confidence term \\Cref{eq:conf-energy} and null penalty term \\Cref{eq:null-energy} are combined and the vector $\\hat{c}$ is obtained by stacking across each frame. The solution vector for the sequence $\\hat{a}$ is similarly constructed by stacking the vectorized assignment matrices $A$ across timesteps. The jagged array $B$ is used to formalize the constraint that only one proposal should be selected per joint. Precisely\n\n\\begin{equation}\n    \\left[B\\right]_{jp,kq} = \\begin{cases}\n        1 & \\text{if } j=k\\\\\n        0 & \\text{otherwise}\n    \\end{cases}\n\\end{equation}\nand (similarly to $A$) is vectorized and stacked across timesteps to form the constraint $\\hat{B}\\hat{a} = 1$. Finally, the constraint $\\hat{a}(1 - \\hat{a})=0$ is applied elementwise to ensure binary values of $\\hat{a}$. 
The resulting quadratic program formulation is then given in \\Cref{eq:quadprog}.\n\n\n\n% \\begin{equation}\n% \\min_{\\hat{a}} \\quad \\hat{a}^T \\hat{Q} \\hat{a} + \\lambda_{temp} \\hat{a}^T \\hat{T} \\hat{a} + \\lambda_{conf} \\hat{c}_{conf}^T \\hat{a} + \\lambda_{null} \\hat{c}_{null}^T\\hat{a}\n% \\end{equation}\n\n% \\[\n% \\begin{array}{c c} &\n%     \\begin{array}{c c c} p=1 & p=2 & p=3 \\\\\n%     \\end{array}\n%     \\\\\n%     \\begin{array}{c c c}\n%     p=1 \\\\\n%     p=2\\\\\n%     p=3\n%     \\end{array}\n%     &\n%     \\left[\n%     \\begin{array}{c c c}\n%     0.1 & 0.1 & 0.0 \\\\\n%     0.4 & 1.0 & 0.0 \\\\\n%     0.8 & 0.0 & 0.4\n%     \\end{array}\n%     \\right]\n% \\end{array}\n% \\]\n\n\\begin{mini}\n    {\\hat{a}}{\\hat{a}^T \\hat{Q} \\hat{a} + \\hat{c}^T \\hat{a}}{}{}\n    \\addConstraint{\\hat{B}\\hat{a} = 1}\n    \\addConstraint{\\hat{a}(1 - \\hat{a}) = 0}\n    \\label{eq:quadprog}\n\\end{mini}\n\n% \\begin{align}\n%     \\min_{\\hat{a}} \\quad \\hat{a}^T \\hat{Q} \\hat{a} + \\lambda_{temp} \\hat{a}^T \\hat{T} \\hat{a} + \\lambda_{conf} \\hat{c}_{conf}^T \\hat{a} + \\lambda_{null} \\hat{c}_{null}^T\\hat{a} \\\\\n%     \\text{subject to } \\sum_p^{N_j} \\hat{a}_{jp} = 1 \\quad \\forall j & \\quad \\text{and } \\hat{a}_{jp}(1 - \\hat{a}_{jp}) = 0 \\quad \\forall j,p \n    \n% \\end{align}\n\nThe quadratic program is specified using the open source CVXPY library \\cite{diamond2016cvxpy} and solved using the ``\\emph{Suggest-and-Improve}'' framework proposed by Park and Boyd \\cite{park2017general}. It is initialized by choosing the proposal with the highest confidence for each joint. Appropriate values for the free parameters $\\lambda_{\\text{conf}, \\text{temp}, \\text{null}}$ and $\\alpha$ were chosen empirically via grid search. \n\n\\subsection{Incorporating coverage priors}\n\nThe above quadratic formulation is sufficient to correct many errors in the raw output (which we later demonstrate in the experimental section), but suffers from an `overcounting' problem, in which two legs' predicted joints cover the same silhouette leg region, leaving another leg empty. We therefore extend the definition of $L(A)$ to include two additional terms. \n\n\\def\\silhouette{S}\n\n\\subsubsection{Silhouette coverage: $\\LL{cov-sil}$}\n\nThe silhouette coverage term is designed to penalize large silhouette areas with no nearby selected joint. This term requires a precomputed set of silhouette sample points $Z \\subseteq \\mathbb{R}^2$, which we aim to ``cover'' as well as possible with the set of selected joints. Intuitively, the silhouette is considered well-covered if all sample points are close to \\emph{some} selected joint proposal. 
The set $Z$ is generated from the medial axis transform (MAT)\\cite{blum1967transformation} of the silhouette, $Z^{t} = \\text{MAT}(\\silhouette^{t})$\nwith a cubed loss strongly penalizing projection outside the silhouette:\n\\begin{equation}\n\\LL{cov-sil}(A^{t};X^{t},Z^{t}) = \\sum_{i}\\min_{j}\\|Z_{i}^{t} - \\hat{X}_{j}^{t}\\|^3\n\\end{equation}\n\n\\begin{figure}[t!]\n\\begin{floatrow}\n\\ffigbox{%%%%%%%%%%%%\n\\def\\bb{\\rule{2in}{0pt}\\rule{0pt}{1in}}\n\\def\\bjb{\\rule{0.5in}{0pt}\\rule{0pt}{0.25in}}\n\n\\begin{center}\n\\scalebox{-1}[1]{\\includegraphics[width=0.49\\linewidth]{sil_coverage_new/approx_render_cov_cropped.jpg}}\n\\scalebox{-1}[1]{\\includegraphics[width=0.49\\linewidth]{sil_coverage_new/med_axis_overlay_error_cropped.jpg}}\n% predictions taken from rs_dog frame 0100\n\\end{center}\n}\n{\\caption{Silhouette coverage loss. The error (shown in red) is the distance between the medial axis transform (right) and the nearest point on an approximate rendering (left).}\n\\label{fig:example_errors}}\n\\ffigbox{ \n    \\raisebox{1 em}{\n    \\centering\n    \\includegraphics[trim={0cm 0cm 0cm 0cm}, clip,width=0.45\\linewidth]{bone_coverage/skeleton_sil_cropped.jpg}\n    \\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.45\\linewidth]{bone_coverage/bone_error_overlay_cropped.jpg}\n    }\n}\n{\\caption{Bone coverage loss. One of the back-right leg joints is incorrectly assigned (left), leading to a large penalty since the lower leg bone crosses outside the dilated silhouette (right).}\n\\label{fig:cov-bone}\n% camel 0020}\n}\n\\end{floatrow}\n\\end{figure}\n\n\\subsubsection{Bone coverage: $\\LL{cov-bone}$}\n\nThe bone coverage term is used to prevent bones crossing the background. The joint hierarchy is stored in a kinematic tree structure $K = \\{\\{j,k\\} \\text{ if joints } j, k \\text{ are connected by a bone}\\}$.\n\\begin{equation}\n\\LL{cov-bone}(A^{t};X^{t},\\silhouette^{t},K) = \\sum_{\\{j,k\\} \\in K}\\biggl(1 - \\min_{\\lambda \\in \\big[0:0.1:1\\big]}\\silhouette^{t}(\\hat{X}_{j}^{t} + \\lambda(\\hat{X}_{k}^{t} - \\hat{X}_{j}^{t}))\\biggr)\n\\end{equation}\n\n\\subsection{Formulation as a genetic algorithm}\nWe minimize this more complex objective using a genetic algorithm (GA)\\cite{holland1992adaptation}. A genetic algorithm is a method for solving optimization problems using a natural selection process that mimics biological evolution. Unlike the quadratic program described previously, the GA optimization procedure relies on crossover and mutation, rather than on derivative calculations. In some cases (and as shown in this work), this can lead to much faster convergence. The following components are required:\n\n\\ss{Genes.} Candidate solutions to the optimization problem are referred to as a set of `genes'. In this case, a gene is the assignment array $A$, although it is represented as a vector of $J$ integers (indicating the proposal ID), rather than one-hot encodings. \n\n\\ss{Initial population.} Genes are constructed subject to the constraints given in \\Cref{eq:quadprog}. The genetic algorithm is initialized with a population size of 128 genes. Of these, the first 32 are set equal to the max confidence solutions given by the network, which speeds up convergence. The remaining 96 genes are generated by selecting a random proposal for each joint. \n\n\\ss{Fitness function.} A fitness function is used to evaluate the quality of a gene. In this setting, the fitness function is precisely the energy $L(A)$ given above, which is minimized to yield an optimal skeleton configuration.\n
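\nIn outline, the GA loop can be sketched as follows (illustrative Python with hypothetical helper names, not the implementation used in this work; the energy $L(A)$ is assumed to be supplied, confidence-based seeding of the initial population is omitted, and at least 4 joints are assumed):\n\\begin{verbatim}\nimport random\n\ndef ga_assign(num_props, energy, pop=128, gens=1000, p_mut=0.1):\n    # num_props[j]: number of proposals for joint j (incl. null);\n    # energy(gene): the scalar L(A) to minimize.\n    J = len(num_props)\n    rand_gene = lambda: [random.randrange(num_props[j]) for j in range(J)]\n    genes = [rand_gene() for _ in range(pop)]\n    for _ in range(gens):\n        genes.sort(key=energy)\n        parents = genes[:pop // 2]          # simple truncation selection\n        children = []\n        while len(children) < pop - len(parents):\n            a, b = random.sample(parents, 2)\n            cut = random.randrange(1, J)    # single-point crossover\n            child = a[:cut] + b[cut:]\n            if random.random() < p_mut:     # mutate 1-4 joints\n                for j in random.sample(range(J), random.randint(1, 4)):\n                    child[j] = random.randrange(num_props[j])\n            children.append(child)\n        genes = parents + children\n    return min(genes, key=energy)\n\\end{verbatim}\n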
\n\\ss{Crossover.} Crossover is a fundamental operation common to evolutionary methods which defines the process of combining the information of two `parent' genes to generate an offspring gene. \\emph{Single-point} crossover operates by slicing each parent gene into two parts, and combining first and second parts from different parents to yield the next generation. Following standard practice, the crossover point is randomly selected for each combination. \n\n\\ss{Mutation.} Analogous to biological mutation and used to maintain diversity among genes, each gene is assigned some probability of undergoing a {\\em mutation}. If a gene \nis selected for mutation, between 1 and 4 joints have new proposals randomly assigned.\n\nThe genetic algorithm described has weights set empirically and is run for 1000 generations. Examples of errors corrected by the additional coverage energy terms are shown in Fig.~\\ref{fig:example_errors} and Fig.~\\ref{fig:cov-bone}.\n", "meta": {"hexsha": "694e90ab6e9af942c569d7df917967ec6b28d8a5", "size": 19222, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter4/4_oja.tex", "max_stars_repo_name": "benjiebob/phd-thesis-template", "max_stars_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter4/4_oja.tex", "max_issues_repo_name": "benjiebob/phd-thesis-template", "max_issues_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter4/4_oja.tex", "max_forks_repo_name": "benjiebob/phd-thesis-template", "max_forks_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.4456140351, "max_line_length": 664, "alphanum_fraction": 0.6738632817, "num_tokens": 5439, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5727253275348967}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n%\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\documentclass[12pt,preprint]{aastex}\n\\documentclass[iop, twocolumn, tighten]{emulateapj}\n\\usepackage{amsmath}\n\\usepackage{color}\n\\definecolor{grey}{rgb}{0.5,0.5,0.5}\n\\definecolor{red}{rgb}{0.8,0.0,0.0}\n\\usepackage{listings}\n\n\\newcommand{\\Vb}{{\\boldsymbol V}}\n\\newcommand{\\vb}{{\\boldsymbol v}}\n%\\newcommand{\\vtb}{{\\boldsymbol \\tilde{v}}}\n\\newcommand{\\vtb}{{\\tilde{ \\boldsymbol  v}}}\n\\newcommand{\\Tf}{{\\mathcal{T}}}\n\\newcommand{\\Xt}{{\\tilde{X}}}\n\\newcommand{\\Yt}{{\\tilde{Y}}}\n\\newcommand{\\Zt}{{\\tilde{Z}}}\n\\newcommand{\\Vp}{V_p}\n\\newcommand{\\Vsys}{V_\\mathrm{sys}}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\slugcomment{}\n\\shortauthors{}\n\\shorttitle{}\n\\bibliographystyle{apj}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{document}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\title{ChopStacks}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\author{Hajime Kawahara\\altaffilmark{1,2}}\n\\email{kawahara@eps.s.u-tokyo.ac.jp}\n\\altaffiltext{1}{Department of Earth and Planetary Science, \nThe University of Tokyo, Tokyo 113-0033, Japan}\n\\altaffiltext{2}{Research Center for the Early Universe, \nSchool of Science, The University of Tokyo, Tokyo 113-0033, Japan}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{abstract}\nChopStacks is a python code collection for resampling data and stacking them under the flux preservation.\n\\end{abstract}\n\\keywords{}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Introduction}\nWe sometimes need to re-sample 1D discrete data (i.e. spectra, light curves, and so on). Of course, the interpolation is one of the major solutions to this problem. However, we sometimes need the flux preservation simultaneously, for instance, when we stack the data.My Python small package, Chopstacks, provides the functions, \n\n\\begin{itemize}\n\\item re-sampling 1D data and preserving flux with an arbitrary array, \n\\item stacking 1D data with preserving flux,\n\\item converting the 1D data into the log 1D data with preserving flux.\n\\end{itemize}\n\nThis document describes the background mathematics of Chopstacks. If you want to know how to use, read jupyter notebooks in ipynb directory.\n\n\\section{Algorithm}\n\nI consider the situation that the stacked spectrum (we use the term of \"spectrum\" as an example of 1D data) $\\hat{f}(x)$ consists of the sum of $M$-spectra as \n\\begin{eqnarray}\n  \\hat{f} (x) = \\sum_{m=0}^{M-1} f^{(m)} (x)\n\\end{eqnarray}\nI assume that the stacked spectrum and each spectrum have the unit of [$q/X$], where $q$ is the unit of preserved quantity $Q(x)$ and $X$ is the unit of $x$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\begin{figure}[htb]\n%  \\includegraphics[width=1.0\\linewidth]{FIG1.eps}\n%\\caption{\\label{fig:1}}\n%\\end{figure}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nThe stacked spectrum $\\hat{f}$ is discretized to the master bin as\n\\begin{eqnarray}\n  \\hat{f}_i &\\equiv& \\hat{f} (\\hat{x}_i)  \\\\\n  \\hat{x}_i  &:& i = 0,1..,N-1. 
\\\\\n  f^{(m)}_j &\\equiv& f^{(m)} (x_j)  \\\\\n  x_j  &:& j = 0,1,\\ldots,N-1.\n\\end{eqnarray}\nThe width of each bin $\\hat{x}_i$ is defined by $\\hat{x}_{i,s}$ and $\\hat{x}_{i,e}$ as \n\\begin{eqnarray}\n\\Delta \\hat{x}_i \\equiv \\hat{x}_{i,e} - \\hat{x}_{i,s}. \n\\end{eqnarray}\nThe quantity in the $i$-th bin is written as\n\\begin{eqnarray}\n\\label{eq:tot}\n\\hat{Q}_i = \\hat{f}_i \\Delta \\hat{x}_i\n\\end{eqnarray}\nand \n\\begin{eqnarray}\nQ^{(m)}_j = f^{(m)}_j \\Delta x^{(m)}_j. \n\\end{eqnarray}\n\nThe problem is how to distribute the quantity from $f^{(m)}_j$ into $\\hat{f}_i$. I denote the overlapping region between $\\hat{x}_i$ and $x^{(m)}_j$ by $\\Delta^{(m)}_{i,j}$. Then the contribution of the $j$-th bin of the $m$-th spectrum to $\\hat{Q}_i$  is \n\\begin{eqnarray}\n\\Delta Q^{(m)}_{i;j} &=& Q^{(m)}_j \\frac{\\Delta^{(m)}_{i,j}}{\\Delta x^{(m)}_j} \\\\\n                   &=& f^{(m)}_j \\Delta^{(m)}_{i,j}\n\\end{eqnarray}\n\nHence, the quantity in the $i$-th stacked bin is \n\\begin{eqnarray}\n\\hat{Q}_i &=& \\sum_{m=0}^{M-1} \\sum_j \\Delta Q^{(m)}_{i;j} \\\\\n&=& \\sum_{m=0}^{M-1} \\sum_j f^{(m)}_j  \\Delta^{(m)}_{i,j}. \n\\end{eqnarray}\nUsing equation (\\ref{eq:tot}), I obtain\n\\begin{eqnarray}\n\\hat{f}_i = \\sum_{m=0}^{M-1} \\sum_{j=0}^{N-1} f^{(m)}_j \\frac{\\Delta^{(m)}_{i,j}}{\\Delta \\hat{x}_i}.\\nonumber \\\\\n\\end{eqnarray}\nAlthough this equation has $M \\times N$ terms, most of the $\\Delta^{(m)}_{i,j}$ are zero. Hence, it is faster if I can find the overlap $(i,j)$ which satisfies $\\Delta^{(m)}_{i,j} \\ne 0$ for each spectrum:\n\\begin{eqnarray}\n\\hat{f}_i = \\sum_{m=0}^{M-1} \\sum_{j \\in \\mathrm{overlap}} f^{(m)}_j \\frac{\\Delta^{(m)}_{i,j}}{\\Delta \\hat{x}_i}. \n\\end{eqnarray}\n\n\\subsection{Implementation}\n\n{\\bf chopstacks.py} is the central program of this algorithm. The main subroutine to redistribute the value is {\\bf cutput} in {\\bf chopstacks.py}. An illustrative explanation of this routine is shown in Figure \\ref{fig:ssp}. \nThe input arrays of {\\bf cutput} are \n\\begin{itemize}\n\\item xw: walls of the original bins [X], i.e. (N)-array.   \n\\item f: input value [q/X], i.e. (N-1)-array.   \n\\item hxw: walls of the resampled bins [X], i.e. (N)-array.   \n\\item hf: stacking array [q/X], i.e. (N-1)-array, \n\\end{itemize}\nand the output is \n\\begin{itemize}\n\\item hf: the input hf + resampled and stacked data.\n\\end{itemize}\nBecause 1D data often use the representative values of [X] (usually, the central value of the bin), you need to set the $N$ walls around the $(N-1)$ representative values of [X].   \nIf you want to set the walls to be the center of the neighboring representative values, use {\\bf buildwall} in chopstacks.py. The edge option specifies how the edge values are treated. If you specify hf=None, then chopstacks.py assumes the initial hf to be a zero array, so if you just want to re-bin a single 1D dataset, use hf=None.  \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{figure}[]\n  \\includegraphics[bb=0 0 615 1030,width=\\linewidth]{stackspectrum.png}\n\\caption{Schematic explanation of {\\bf cutput}. The code is shown in the appendix. \\label{fig:ssp}}\n\\end{figure}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\newpage\n
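\nAs a quick usage sketch (assuming chopstacks.py is importable and using the repaired {\\bf cutput} listed in the appendix; the numbers are toy values), re-binning a flat spectrum preserves the quantity $q$ over the covered interval:\n\\begin{verbatim}\nimport numpy as np\nimport chopstacks\n\n# Toy data: 4 bins on [0, 4] with unit height, i.e. Q = 4 in total.\nxw = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # N walls\nf = np.array([1.0, 1.0, 1.0, 1.0])         # N-1 values [q/X]\nhxw = np.linspace(0.5, 3.5, 7)             # walls of the resampled bins\n\nhf = chopstacks.cutput(xw, f, hxw, hf=None)\n# Quantity carried by the resampled bins vs. original quantity on [0.5, 3.5]:\nprint(np.sum(hf * np.diff(hxw)))            # -> 3.0\nprint(np.sum(f * np.diff(xw)) - 2 * 0.5)    # -> 3.0\n\\end{verbatim}\nPassing the returned hf back into {\\bf cutput} with another spectrum stacks the two while keeping the total quantity consistent.\n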
\n\\section{Applications}\n Use the spectra with the unit of [$q/X$]. The preservation of $q$ can be checked using {\\bf check\\_preservation}.\n\n\\subsection{Log Bin Convertor}\n\n{\\bf analogbin.py} performs the resampling of $f$ [$q/X$] into the log scale of $x$ ($l \\equiv \\log x$) with a resolution power $R$. The relation between $\\delta \\log x$ and $R$ is provided by\n\\begin{eqnarray}\n\\Delta l \\equiv \\delta \\log x = \\frac{\\Delta x}{x} = \\frac{1}{R}.\n\\end{eqnarray}\nThen the number of the rebinned data is given by \n\\begin{eqnarray}\n\\hat{N} = \\frac{\\log \\hat{x}_N  - \\log \\hat{x}_0}{\\Delta l} = R (\\log \\hat{x}_N  - \\log \\hat{x}_0).\n\\end{eqnarray}\n\nThe {\\bf opt} option specifies the output form. opt=0 will provide\n\\begin{itemize}\n\\item $\\hat{x}$, walls of $\\hat{x}$, $\\hat{f}$.\n\\end{itemize}\nopt=1 provides\n\\begin{itemize}\n\\item $\\log \\hat{x}$, walls of $\\log \\hat{x}$, $\\hat{x} \\hat{f}$.\n\\end{itemize}\nThe latter output is based on the preservation law\n\\begin{eqnarray}\nf d x &=& f \\left| \\frac{\\partial x}{\\partial (\\log x)}\\right| d (\\log x) \\nonumber \\\\\n&=&   x f d (\\log x).\n\\end{eqnarray}\nSubroutine {\\bf check\\_preservation} can check the degree of preservation of $[q]$. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{figure*}[!htb]\n  \\includegraphics[bb=0 0 576 432,width=0.49\\linewidth]{demo1.png}\n  \\includegraphics[bb=0 0 576 432,width=0.49\\linewidth]{demo2.png}\n\\caption{Demonstrations of the log binning. The blue line is the original data and the red one is the binned data. The right panel shows the oversampling case. \\label{fig:2}}\n\\end{figure*}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\newpage\n\\appendix\n\n\\small\n\\begin{verbatim}\nimport numpy as np\n\ndef cutput(xw,f,hxw,hf=None,silent=None):\n    #see note 14.8.12 and 14.9.02\n    if hf is None:\n        if silent is None:\n            print('Reset master data (hf) to zero.')\n        hf=np.zeros(len(hxw)-1)\n\n    ind=np.digitize(hxw,xw)\n    for i in range(0,len(hxw)-1):\n\n        if ind[i]<len(xw) and ind[i+1]-1 < len(xw)+1 and ind[i]>0:  #cx1,cx2,cx3\n            if ind[i]==ind[i+1]:\n                hf[i]=hf[i]+f[ind[i]-1] #A0\n            else: \n                hf[i]=hf[i]+f[ind[i]-1]*(xw[ind[i]]-hxw[i])/(hxw[i+1]-hxw[i]) #B0\n                for k in range(ind[i]+1, ind[i+1]):\n                    hf[i]=hf[i]+f[k-1]*(xw[k]-xw[k-1])/(hxw[i+1]-hxw[i]) #B1    \n                hf[i]=hf[i]+f[ind[i+1]-1]*(hxw[i+1]-xw[ind[i+1]-1])/(hxw[i+1]-hxw[i]) #B2\n\n        elif ind[i+1]-1 == len(xw)+1:  #right boundary criterion\n            if ind[i]<ind[i+1]:\n                hf[i]=hf[i]+f[ind[i]-1]*(xw[ind[i]]-hxw[i])/(hxw[i+1]-hxw[i]) #B0            \n                for k in range(ind[i]+1, ind[i+1]):\n                    hf[i]=hf[i]+f[k-1]*(xw[k]-xw[k-1])/(hxw[i+1]-hxw[i]) #B1    \n                                        \n        elif ind[i]==0 and ind[i+1]>0: #left boundary condition\n                for k in range(ind[i]+1, ind[i+1]):\n                    hf[i]=hf[i]+f[k-1]*(xw[k]-xw[k-1])/(hxw[i+1]-hxw[i]) #B1    \n                hf[i]=hf[i]+f[ind[i+1]-1]*(hxw[i+1]-xw[ind[i+1]-1])/(hxw[i+1]-hxw[i]) #B2\n \n    return hf\n\\end{verbatim}\n\n\\end{document}\n", "meta": {"hexsha": "e5ed013aac047a2f0b1e517f443f6f056383443a", "size": 9612, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/tex/main.tex", "max_stars_repo_name": "HajimeKawahara/chopstacks", "max_stars_repo_head_hexsha": "b2a812c59d7a16d929f1d33c5a9f2cf3e1f34635", "max_stars_repo_licenses": 
["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documents/tex/main.tex", "max_issues_repo_name": "HajimeKawahara/chopstacks", "max_issues_repo_head_hexsha": "b2a812c59d7a16d929f1d33c5a9f2cf3e1f34635", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documents/tex/main.tex", "max_forks_repo_name": "HajimeKawahara/chopstacks", "max_forks_repo_head_hexsha": "b2a812c59d7a16d929f1d33c5a9f2cf3e1f34635", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2972972973, "max_line_length": 328, "alphanum_fraction": 0.573553891, "num_tokens": 3049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392695254319, "lm_q2_score": 0.6477982247516797, "lm_q1q2_score": 0.5726790694093464}}
{"text": "\\section{Mathematical Analysis}\n\\subsection{Derivation of link-level successful transmission probability}\nAs the first step, we study the successful transmission probability $p_s(r)$ for any given link with distance $r$.\nWithout loss of generality, the device (i.e., the transmitter) labeled by $x_0$ is assumed to be located at the origin. The other end of this link (i.e., Base Station) with label $y_i$ is with distance $r$ away from the device. By definition, $p_s(r)$ is expressed as follows:\n\\begin{align}\np_{s} \\left( r\\right)\n& =\\mathbb{P}\\left\\lbrace \\frac{H \\exp(G) r^{-\\gamma}}{\\sum_{x_j \\in \\Phi_{m}} H_{x_j} \\exp(G_{x_j}) r_{x_j}^{-\\gamma}}  \\geq \\theta \\right\\rbrace \\nonumber\\\\\n%& =\\mathbb{P} \\left\\lbrace \\frac{H_{x_i}  r_{x_i}^{-\\gamma}}{\\sum_{x_j \\in \\Phi_{m}} H_{x_j} \\exp(G_{x_j}-G_{x_i}) r_{x_j}^{-\\gamma}} \\geq \\theta \\right\\rbrace \\nonumber\n\\end{align}\nLet $I=\\sum_{x_j \\in \\Phi_{m}} H_{x_j} \\exp(G_{x_j}) r_{x_j}^{-\\gamma}$, which is the cumulative interference suffered for device $x_0$. Thus, we have:\n\\begin{align}\n\\label{eq:def_ps}\np_{s}\\left( r \\right)  &= Pr \\left\\lbrace H  \\geq I \\theta\\exp(-G)r ^{\\gamma}  \\right\\rbrace  \\nonumber\\\\\n% The follwing line can be hidden, not necessary in FINAL_VERSION\n&=\\mathbb{E}_{G}\\left\\lbrace  \\mathbb{E}_{I} \\left[ \\exp(-\\theta \\exp(-g) r^{\\gamma}  I ) \\vert G = g\\right]\\right\\rbrace   \\nonumber\\\\\n&= \\mathbb{E}_{G} \\left[ \\mathcal{L}_{I}\\left\\lbrace \\theta \\exp(-G) r^{\\gamma}\\right\\rbrace \\right] \n\\end{align}\nIt is observed that the $p_s\\left( r \\right)$ is actually the expectation (with respect to $G$) of a conditional Laplace Transform of cumulative interference $I$ at point $\\theta \\exp(-G)  r^{\\gamma}$.\n\\begin{align}\n\\label{eq:interferece-laplace-transform}\n&\\mathcal{L}_{I}\\left( s \\right)  = \\mathbb{E}\\left[ \\exp(-sI)\\right] \\nonumber\\\\\n&= \\mathbb{E}\\left[ \\exp(-s\\sum_{x_j \\in \\Phi_m} H_{x_j} \\exp(G_{x_j})  r_{x_j}^{-\\gamma})\\right] \\nonumber \\\\\n&= \\mathbb{E}_{\\Phi_m} \\left\\lbrace \\prod_{x_j\\in \\Phi_m} \\mathbb{E}\\left[\\exp(-sH_{x_j} \\exp(G_{x_j})  r_{x_j}^{-\\gamma})\\right] \\right\\rbrace ,\n\\end{align}\nwhich is actually the Probability Generating Functional (PGFL) of a Poisson point process.\nApplying Campell's theorem, \n%\\qsong{Here I give the wikipedia link for this theorem, in the final version, I will give a reference of one book: https://en.wikipedia.org/wiki/Campbell's_theorem_(probability)}\n\\begin{align}\n\\label{eq:laplace_trans_I}\n\\mathcal{L}_{I}\\left( s \\right) &=\\exp\\left\\lbrace -\\lambda_m \\pi \\mathbb{E}\\left[ H^{\\frac{2}{\\gamma}}\\exp(\\frac{2}{\\gamma}G)\\right] \\Gamma(1-\\frac{2}{\\gamma})s^{\\frac{2}{\\gamma}}  \\right\\rbrace \n\\end{align}\nThis result has been derived in~\\cite[eq.3.20]{haenggi2009interference}. 
Since $H$ and $G$ are independent, we have: \n% The following two formula can be removed in FINAL_VERSION\n%\\begin{align}\n%\\label{eq:lognormal_moment}\n%\\mathbb{E}\\left[ \\exp(\\frac{2}{\\gamma}G)\\right] &= \\exp \\left\\lbrace \\left( \\frac{\\sqrt{2}\\beta\\sigma}{\\gamma}\\right) ^2\\right\\rbrace \n%\\end{align}\n%\\begin{align}\n%\\label{eq:expo_moment}\n%\\mathbb{E}\\left[ H^{\\frac{2}{\\gamma}} \\right] &= \\Gamma(1+\\frac{2}{\\gamma}) \n%\\end{align}\n%Substituting $(\\ref{eq:expo_moment})(\\ref{eq:lognormal_moment})$ into $(\\ref{eq:laplace_trans_I})$, we have:\n\\begin{align}\n\\mathcal{L}_{I}\\left( s \\right) &= \\exp\\left\\lbrace -p\\lambda_m \\pi \\Gamma(1+\\frac{2}{\\gamma}) \\Gamma(1-\\frac{2}{\\gamma}) \\exp \\left( \\frac{\\sqrt{2}\\sigma}{\\gamma}\\right) ^2 s^{\\frac{2}{\\gamma}}  \\right\\rbrace\n\\end{align}\nLet $s=\\theta \\exp(-G) r^{\\gamma}$, thus\n\\begin{align}\n\\label{eq:conditional_lp_trans}\n\\mathcal{L}_{I}\\left\\lbrace \\theta \\exp(-G) r^{\\gamma}\\right\\rbrace = \\exp\\left\\lbrace -A r^2\\exp(-\\frac{2}{\\gamma}G)\\right\\rbrace ,\n\\end{align}\nwhere $A=p\\lambda_m\\pi \\Gamma(1+\\frac{2}{\\gamma}) \\Gamma(1-\\frac{2}{\\gamma}) \\exp \\left( \\frac{\\sqrt{2}\\sigma}{\\gamma}\\right) ^2 \\theta^{\\frac{2}{\\gamma}}$, $\\sigma = 0.1\\ln(10)\\sigma_{dB}$.\n\nSubstituting $(\\ref{eq:conditional_lp_trans})$ into $(\\ref{eq:def_ps})$, it remains to calculate the expectation with respect to $G$:\n\\begin{align}\n\\label{eq:def_ps_2}\np_{s}(r) &= \\mathbb{E}_{G}\\left[ \\exp(-A r^2 \\exp(-\\frac{2}{\\gamma}G)) \\right] \\nonumber\\\\\n&=\\int_{-\\infty}^{+\\infty} \\exp(-A r^2 e^{x})\\frac{1}{\\sqrt{2\\pi}\\frac{2}{\\gamma}\\sigma} \\exp(-\\frac{x^2}{2(\\frac{2}{\\gamma}\\sigma)^2}) dx \\nonumber\\\\\n&=\\mathbb{E}_{X}\\left[ \\exp(-Ar^2X)\\right], \n\\end{align}\nwhich is the expectation with respect to a new log-normal random variable $X \\sim LN(0, \\frac{2}{\\gamma}\\sigma)$.\n% The following should be removed in FINAL_VERSION\n%\\begin{align}\n%p_{s}(r) &= \\mathbb{E}\\left[ \\exp(-Ar^2X)\\right] \\nonumber\\\\\n%&= \\mathcal{L}_{X} \\left[ Ar^2\\right]  \n%\\end{align}\nFor now we do not seek a closed-form expression for $p_{s}(r)$, because this form will simplify the following analysis. \n%For now we do not seek the closed-form solution of P_s, since the next calculation step is easier this way\n%A closed form expression of the Laplace transform of the lognormal distribution does not\n%exist. 
%According to reference~\\cite{asmussen2016laplace}, the Laplace transform of a log-normal random variable can be accurately approximated as follows:\n%\\begin{align}\n%\\label{eq:laplace-transform-lognormal-form-1}\n%\\mathcal{L} \\left\\lbrace X \\right\\rbrace \\left( s \\right)\n%&= \\frac{\\exp(-\\frac{W(s \\sigma_{X}^2 e^{\\mu_{X}} )^2 + 2W(s \\sigma_{X}^2 e^{\\mu_{X}})}{2\\sigma_{X}^2})}{\\sqrt{1 + W(s \\sigma_{X}^2 e^{\\mu_{X}})}},\n%\\end{align}\n%where $W\\left( \\cdot \\right)$ is the Lambert W function~\\cite{corless1996lambertw}, which is defined as the solution in principal branch of the\n%equation $W\\left(x\\right) e^{W \\left( x\\right) }= x$.\n\\subsection{Derivation of outage probability over an infinite plane}\n\\subsubsection{Nearest BS attach method}\nAs a comparison reference, we now consider the outage probability $P_{f1}$ over an infinite plane using the traditional method: the device attaches to the nearest BS.\nWe denote the distance to the nearest base station by $r$. Its PDF, obtained from the void probability of the PPP, is:\n\\begin{align}\n\\label{eq:pdf_nearest_distance}\nf\\left( r\\right)  = 2 \\pi \\lambda_b  r \\exp(-\\lambda_b \\pi r^2), r \\in \\left[ 0, +\\infty\\right) \n\\end{align}\nIn this case, we just need to take the expectation with respect to $r$ to get the outage probability $P_{f1}$.\n\\begin{align}\nP_{f1}&= \\mathbb{E}\\left[ 1-p_{s}\\left(r\\right) \\right]  \\nonumber\\\\\n&= 1-\\int_{0}^{+\\infty} \\mathbb{E}\\left[ \\exp(-A r^2 X)\\right]  2 \\pi \\lambda_b  r \\exp(-\\lambda_b \\pi r^2) dr \\nonumber\\\\\n&= 1 - \\mathbb{E}\\left[ \\frac{1}{\\frac{AX}{\\pi\\lambda_b}+1} \\right] \\nonumber\n\\end{align}\nFor brevity in what follows, let $B= \\frac{A}{\\pi \\lambda_b}$. We now focus on the term $\\mathbb{E}\\left[ \\frac{1}{BX+1} \\right]$: \n\\begin{align}\n\\label{eq:mean_bx+1_step1}\n&\\mathbb{E}\\left[ \\frac{1}{BX+1} \\right]  = \\int_{-\\infty}^{+\\infty} \\frac{1}{ Be^t+1} \\cdot \\frac{1}{\\sqrt{2\\pi} \\sigma_X} \\exp\\left\\lbrace -\\frac{t^2}{2 \\sigma_X^2}\\right\\rbrace dt \\nonumber\\\\\n&= \\int_{-\\infty}^{+\\infty} \\frac{1}{ e^t+1} \\cdot \\frac{1}{\\sqrt{2\\pi} \\sigma_X} \\exp\\left\\lbrace -\\frac{(t-\\ln(B))^2}{2 \\sigma_X^2}\\right\\rbrace dt\n%&= \\int_{-\\infty}^{+\\infty} \\frac{1}{1+e^{-(t-\\ln(B))}} \\cdot \\frac{1}{\\sqrt{2\\pi} \\sigma_X} \\exp\\left\\lbrace -\\frac{t^2}{2 \\sigma_X^2}\\right\\rbrace dt \\nonumber\\\\\n%&= \\int_{-\\infty}^{+\\infty} \\frac{1}{1+\\exp{-\\left( \\frac{t-\\frac{\\ln(B)}{\\sigma_X}}{\\frac{1}{\\sigma_X}}\\right) }} \\cdot \\frac{1}{\\sqrt{2\\pi}} \\exp\\left\\lbrace -\\frac{t^2}{2}\\right\\rbrace dt \\nonumber\\\\\n%& = \\int_{-\\infty}^{+\\infty} \\frac{\\phi\\left( t \\right) }{1+\\exp{-\\left( \\frac{t-\\frac{\\ln(B)}{\\sigma_X}}{\\frac{1}{\\sigma_X}}\\right) }} dt,\n\\end{align}\nwhich is actually a logistic-normal integral. According to~\\cite{crooks2009logistic}, this kind of integral can be accurately approximated by another reparameterized logistic function (i.e., sigmoid function). Thus,\n\\begin{align}\n\t\\label{eq:analytical_result_approach_2}\n\tP_{f1} &= 1 -\\frac{1}{1 + \\exp\\left\\lbrace \\left( 1 +\\frac{\\pi \\sigma_X^2}{8} \\right)^{-\\frac{1}{2}} \\ln(B) \\right\\rbrace} \\nonumber\\\\\n\t&=1-\\frac{1}{1 + B^{\\left( 1 +\\frac{\\pi \\sigma^2}{2\\gamma^2} \\right)^{-\\frac{1}{2}}}},\n\\end{align}\nwhere $B= \\frac{p\\lambda_m}{\\lambda_b}\\Gamma(1+\\frac{2}{\\gamma}) \\Gamma(1-\\frac{2}{\\gamma}) \\exp \\left( \\frac{\\sqrt{2}\\sigma}{\\gamma}\\right) ^2 \\theta^{\\frac{2}{\\gamma}}$.\n
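\nAs a quick numerical check of this sigmoid approximation (an illustrative aside; the parameter values are arbitrary):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nfor B in (0.1, 1.0, 10.0):\n    for sX in (0.5, 1.0, 2.0):\n        X = np.exp(rng.normal(0.0, sX, size=200000))  # X ~ LN(0, sX)\n        mc = np.mean(1.0 / (B * X + 1.0))\n        approx = 1.0 / (1.0 + B ** ((1.0 + np.pi * sX**2 / 8.0) ** -0.5))\n        print(B, sX, round(mc, 4), round(approx, 4))\n\\end{verbatim}\nThe Monte Carlo estimate and the reparameterized sigmoid agree closely for moderate $\\sigma_X$, which supports the closed form in $(\\ref{eq:analytical_result_approach_2})$ above.\n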
\n\\subsubsection{BS reception diversity method}\nWe now study the outage probability $P_{f2}$ over an infinite plane when applying BS reception diversity. In this case, one transmission is in outage if and only if none of the base stations in the infinite plane receives the packet. Thus, $P_{f2}$ is expressed as follows:\n\\begin{align}\n\\label{eq:definition_pf1}\nP_{f2} &= \\mathbb{E}\\left[  \\prod_{r_i \\in \\Phi_{b}} (1-p_{s}(r_i)) \\right], \\text{with } r_i \\in \\left[ r_{\\text{min}}, +\\infty\\right),\n\\end{align} \nwhere $r_i$ refers to the distance between the device and the BS with label $i$. Note that $r_i$ is no less than the distance to the nearest BS, denoted by $r_{\\text{min}}$, whose probability density function is given in $(\\ref{eq:pdf_nearest_distance})$. Expression $(\\ref{eq:definition_pf1})$ is actually the Probability Generating Functional (PGFL) of the Poisson point process $\\Phi_{b}$ of base stations, which states for some function $f(x)$ that $\\mathbb{E}\\left[ \\prod_{x \\in \\Phi}f(x) \\right] = \\exp(-\\lambda(\\int_{\\mathbb{R}^2}(1-f(x))dx))$. Thus, $(\\ref{eq:definition_pf1})$ can be expressed as follows:\n\\begin{align}\nP_{f2} &= \\exp\\left\\lbrace -2\\pi \\lambda_{b} \\mathbb{E}_{r_{\\text{min}}} \\left[ \\int_{r_{\\text{min}}}^{+\\infty} p_{s}(r)rdr \\right] \\right\\rbrace ,\n\\end{align} \n\n\\begin{align}\nP_{f2} &= \\exp\\left\\lbrace -\\mathbb{E}_{r_{\\text{min}}} \\left[ \\int_{r_{\\text{min}}}^{+\\infty} \\mathbb{E}_{X}\\left[ \\exp(-AXr^2)\\right] 2\\pi\\lambda_{b} rdr \\right] \\right\\rbrace \\nonumber\\\\ \n&= \\exp\\left\\lbrace -\\pi \\lambda_{b} \\mathbb{E}_{r_{\\text{min}}} \\left[\\mathbb{E}_{X}\\left[ \\int_{r_{\\text{min}}}^{+\\infty}\\exp(-AXr^2)\\,2r\\,dr\\right] \\right] \\right\\rbrace \\nonumber\\\\ \n&= \\exp\\left\\lbrace -\\pi \\lambda_{b} \\mathbb{E}_{r_{\\text{min}}} \\left[\\mathbb{E}_{X}\\left[ \\frac{e^{-Ar_{\\text{min}}^2X} }{AX} \\right] \\right] \\right\\rbrace \\nonumber\\\\ \n&= \\exp\\left\\lbrace -\\pi \\lambda_{b} \\mathbb{E}_{X} \\left[\\mathbb{E}_{r_{\\text{min}}}\\left[ \\frac{e^{-Ar_{\\text{min}}^2X} }{AX} \\right] \\right] \\right\\rbrace \\nonumber\\\\ \n&= \\exp\\left\\lbrace -\\mathbb{E}_{X} \\left[ \\frac{1}{\\left( \\frac{AX}{\\pi\\lambda_b}\\right)\\left( \\frac{AX}{\\pi\\lambda_b}+1\\right) }\\right] \\right\\rbrace \\nonumber\\\\ \n&= \\exp\\left\\lbrace \\mathbb{E}_{X} \\left[ \\frac{1}{BX + 1}\\right] - \\mathbb{E}_{X} \\left[ \\frac{1}{BX}\\right]   \\right\\rbrace\n\\end{align}\n\n\\begin{align}\n\t\\mathbb{E}_{X} \\left[ \\frac{1}{BX + 1}\\right] = \\frac{1}{1 + B^{\\left( 1 +\\frac{\\pi \\sigma^2}{2\\gamma^2} \\right)^{-\\frac{1}{2}}}}\n\\end{align}\n\\begin{align}\n\t\\mathbb{E}_{X} \\left[ \\frac{1}{BX}\\right] =\\frac{1}{\\frac{p\\lambda_m}{\\lambda_b}\\Gamma(1+\\frac{2}{\\gamma}) \\Gamma(1-\\frac{2}{\\gamma})\\theta^{\\frac{2}{\\gamma}}} \n\\end{align}\nHence, we have:
\\sigma^2}{2\\gamma^2} \\right)^{-\\frac{1}{2}}}}-\\frac{1}{B\\exp\\left\\lbrace  -\\left(\\frac{\\sqrt{2}\\sigma}{\\gamma}\\right) ^2 \\right\\rbrace } \\right),\n\\end{align}\nwhere $B= \\frac{p\\lambda_m}{\\lambda_b}\\Gamma(1+\\frac{2}{\\gamma}) \\Gamma(1-\\frac{2}{\\gamma}) \\exp\\left\\lbrace \\left( \\frac{\\sqrt{2}\\sigma}{\\gamma}\\right)^2 \\right\\rbrace \\theta^{\\frac{2}{\\gamma}}$.\nWe surprisingly observe that with BS reception diversity, in this limit case, the outage probability has the same formula as the case where only Rayleigh fading is considered. In a practical system, however, we still observe that shadowing improves the outage probability, since the wireless network cannot be infinite and the BS population is always finite. \n", "meta": {"hexsha": "1389bf9599b8ffa85df4b9db383450485c401b08", "size": 11737, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_stars_repo_name": "hansomesong/PhD-Thesis", "max_stars_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_issues_repo_name": "hansomesong/PhD-Thesis", "max_issues_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter5/bs_rx_divers_math_analysis.tex", "max_forks_repo_name": "hansomesong/PhD-Thesis", "max_forks_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.8357142857, "max_line_length": 606, "alphanum_fraction": 0.6738519213, "num_tokens": 4392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467770088162, "lm_q2_score": 0.651354857898194, "lm_q1q2_score": 0.5726365240102327}}
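To make the comparison between the two closed forms above concrete, here is a minimal numerical sketch (not part of the thesis). All parameter values are illustrative assumptions, and $\Gamma(1-2/\gamma)$ requires a path-loss exponent $\gamma > 2$.

```python
import numpy as np
from scipy.special import gamma as Gamma

# Illustrative (assumed) parameters: transmit probability, device/BS
# densities, path-loss exponent, shadowing spread, SINR threshold.
p, lam_m, lam_b = 0.01, 1.0, 0.1
g, sigma, theta = 4.0, 1.0, 1.0      # g plays the role of gamma in the text

# B as defined after the two analytical results (needs g > 2)
B = (p * lam_m / lam_b) * Gamma(1 + 2 / g) * Gamma(1 - 2 / g) \
    * np.exp((np.sqrt(2) * sigma / g) ** 2) * theta ** (2 / g)
expo = (1 + np.pi * sigma ** 2 / (2 * g ** 2)) ** -0.5

# Nearest-BS attach, Eq. (analytical_result_approach_2)
P_f1 = 1 - 1 / (1 + B ** expo)
# BS reception diversity, Eq. (analytical_result_approach_1)
P_f2 = np.exp(1 / (1 + B ** expo)
              - 1 / (B * np.exp(-(np.sqrt(2) * sigma / g) ** 2)))

print(P_f1, P_f2)   # roughly 0.16 vs 0.004: diversity sharply reduces outage
```

With these (made-up) densities, reception diversity lowers the outage probability by more than an order of magnitude, consistent with the qualitative remark above.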
{"text": "\\section{Equations/Discretizations}\n\\begin{frame}{Equations \\ttnc{BART} solves transport equation}\n\t\\begin{block}{\\ttnc{BART} is solving multi-group transport equation with discrete-ordinates in angle}\n\t\t\\begin{itemize}\n\t\t\t\\item Transport equation in operator form with:\n\t\t\t\\begin{align}\n\t\t\t&\\bfscr{T}_\\mm{g,m}\\psigm+\\bfscr{C}_\\mm{g,m}\\psigm=\\sum\\limits_{\\substack{1\\leq\\mm{g'}\\leq\\mm{G}\\\\1\\leq\\mm{m'}\\leq\\mm{M}}}\\left(\\bfscr{S}_{\\mm{g',m'}\\to\\mm{g,m}}+\\bfscr{F}_{\\mm{g',m'}\\to\\mm{g,m}}\\right)\\psigmp\\\\\n\t\t\t&\\psigm=\\psigm^\\mm{inc},\\ \\vr\\in\\pd,\\ \\ndom<0\\nno\n\t\t\t\\end{align}\n%\t\t\t\\begin{center}\n\t\t\t\t$\\bfscr{T}$: Streaming operator\\\\\n\t\t\t\t$\\bfscr{C}$: Collision operator, $\\sigt$\\\\\n\t\t\t\t$\\bfscr{S}$: Scattering operator\\\\\n\t\t\t\t$\\bfscr{F}$: Fission operator\\\\\n\t\t\t\t$\\bfscr{\\psi}$: Angular flux\\\\\n\t\t\t\t$\\mm{g,\\ g'}$: Group indicies\\\\\n\t\t\t\t$\\mm{m,\\ m'}$: Angular indices\\\\\n\t\t\t\t$\\mm{G,\\ M}$: Numbers of groups and directions\n%\t\t\t\\end{center}\n\t\t\t\\item The formulation generalizes to diffusion and nonlinear diffusion\n\t\t\\end{itemize}\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{\\ttnc{BART} solves first-order form of transport equations}\n\t\\begin{block}{First-order form and DFEM discretization in space}\n\t\t\\begin{itemize}\n\t\t\t\\item Transport equation has differential order of 1\\ in the streaming term, we refer it to as ``first-order\" form (FOF) \n\t\t\t\\begin{align}\n\t\t\t\t\\ome_\\mm{m}\\cdot\\nabla\\psigm+\\sigma_{\\mm{t,g}}\\psigm=\\mathcal{Q}_\\mm{g,m}\\left({\\bf \\Psi}\\right)\n\t\t\t\\end{align}\n\t\t\t\\begin{align}\n\t\t\t   \\mq_\\mm{g,m}:=\\sum\\limits_{\\substack{1\\leq\\mm{g'}\\leq\\mm{G}\\\\1\\leq\\mm{m'}\\leq\\mm{M}}}\\left(\\bfscr{S}_{\\mm{g',m'}\\to\\mm{g,m}}+\\bfscr{F}_{\\mm{g',m'}\\to\\mm{g,m}}\\right)\\psigmp\n\t\t\t\\end{align}\n\t\t\t\\item FOF is discretized using DFEM in \\ttt{BART}: given polynomial function space $\\mv$,\\ $\\forall\\psigm^*\\in\\mv$,\\ find $\\psigm\\in\\mv$,\\ s.t.\n\t\t\t\\begin{align}\n\t\t\t\\sum\\limits_{\\mm{e}\\in\\md}\\left[\\left(-\\omemn\\psigms,\\psigm\\right)+\\left(\\psigms,\\sigtg\\psigm\\right)\\right]_\\mm{e}+\\sum\\limits_{\\mm{f}\\in\\md}\\left|\\ndom\\right|\\left<\\jjump{\\psigms},\\tilde{\\psi}_\\mm{g,m}\\right>_\\mm{f}\\nno\\\\\n\t\t\t+\\sum\\limits_{\\substack{\\mm{f}\\in\\pd\\\\\\ndom>0}}\\ndom\\left<\\psigms,\\psigm\\right>_\\mm{f}=\\sum\\limits_{\\mm{e}\\in\\md}\\left(\\psigms,\\mqgm\\right)_\\mm{e}+\\sum\\limits_{\\substack{\\mm{f}\\in\\pd\\\\\\ndom<0}}\\absndom\\left<\\psigms,\\psigm^\\mm{inc}\\right>_\\mm{f}\n\t\t\t\\end{align}\n\t\t\\end{itemize}\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{\\ttnc{BART} solves second-order forms of transport equations}\n\t\\begin{block}{Second-order forms (SOF) of transport equations}\n\t\t\\begin{itemize}\n\t\t\t\\item FOF can be cast to diffusion-like equations having streaming term with differential order of 2\n\t\t\t\\item The casting allows for use of CFEM\n\t\t\t\\item BART solves even-parity equation and self-adjoint angular flux equation (SAAF)\n\t\t\\end{itemize}\n\t\\end{block}\n\t\\begin{block}{SOF example: SAAF}\n\t\t\\begin{itemize}\n\t\t\t\\item SAAF equation\n\t\t\t\\begin{align}\n\t\t\t\t-\\omemn\\frac{1}{\\sigtg}\\omemn\\psigm+\\sigtg\\psigm=\\mqgm-\\frac{1}{\\sigtg}\\omemn\\mqgm\n\t\t\t\\end{align}\n\t\t\t\\item CFEM 
discretization\n\t\t\t\\begin{align}\n\t\t\t\t\\left(\\omemn\\psigms,\\frac{1}{\\sigtg}\\omemn\\psigm\\right)_\\md+\\left(\\psigms,{\\sigtg}\\psigm\\right)_\\md+\\sum\\limits_{\\substack{\\mm{f}\\in\\pd\\\\\\ndom>0}}\\ndom\\left<\\psigms,\\psigm\\right>_\\mm{f}\\nno\\\\\n\t\t\t\t=\\left(\\psigms+\\frac{\\omemn\\psigms}{\\sigtg},\\mqgm\\right)_\\mm{\\md}+\\sum\\limits_{\\substack{\\mm{f}\\in\\pd\\\\\\ndom<0}}\\absndom\\left<\\psigms,\\psigm^\\mm{inc}\\right>_\\mm{f}\n\t\t\t\\end{align}\n\t\t\\end{itemize}\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{\\ttnc{BART} solves diffusion and nonlinear diffusion}\n\t\\begin{block}{BART solves diffusion equation}\n\t\t\\begin{itemize}\n\t\t\t\\item Diffusion could be solved within the framework of \\ttt{BART}\n\t\t\t\\begin{equation}\n\t\t\t\t-\\nabla\\frac{1}{3\\sigtg}\\cdot\\nabla\\phig+\\sigtg\\phig=\\mq_\\mm{g}:=\\sum\\limits_\\mm{g'}\\left[\\left(\\sigsgg+\\frac{\\chig\\nu\\sigma_\\mm{f,g'}}{k_\\mm{eff}}\\right)\\phigp\\right]\n\t\t\t\\end{equation}\n\t\t\\end{itemize}\n\t\\end{block}\n\n\t\\begin{block}{Nonlinear diffusion for acceleration (NDA)}\n\t\t\\begin{itemize}\n\t\t\t\\item Diffusion can also be written in P$_1$\\ form\n\t\t\t\\begin{equation}\n\t\t\t\\nabla\\cdot\\vec{J}_\\mm{g}+\\sigtg\\phig=\\mq_\\mm{g},\\quad\\vec{J}_\\mm{g}=-\\frac{1}{3\\sigtg}\\nabla\\phi_\\mm{g}\\ (\\mm{Fick's\\ law})\n\t\t\t\\end{equation}\n\t\t\t\\item NDA is derived from correcting Fick's law using transport corrections\n\t\t\t\\item \\boxed{Correction} to preserve current is derived from transport high-order solutions (HO)\n\t\t\t\\begin{equation}\n\t\t\t\t\\vec{J}_\\mm{g}=-\\frac{1}{3\\sigtg}\\nabla\\phi_\\mm{g}+\\boxed{\\frac{\\sum\\limits_{\\mm{m}<M}w_\\mm{m}\\ome_\\mm{m}\\omemn\\psigm^\\mm{HO}+\\frac{1}{3\\sigtg}\\nabla\\phi_\\mm{g}^\\mm{HO}}{\\phi_\\mm{g}^\\mm{HO}}\\phi}\n\t\t\t\\end{equation}\n\t\t\\end{itemize}\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{\\ttnc{BART} solves diffusion and nonlinear diffusion (cont'd)}\n\t\\begin{block}{\\ttnc{BART} solves in multiple ways}\n\t\t\\begin{itemize}\n\t\t\t\\item CFEM is natural to use to solve diffusion/nonlinear diffusion\n\t\t\t\\item DFEM is realizable through a penalty method\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item Useful when accelerating transport solves with DFEM\n\t\t\t\\end{itemize}\n\t\t\t\\item A hybrid-FEM is a near-future project for NDA in P$_1$-like form\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item Piece-wise constant test functions for $\\phig$\\ and Raviart-Thomas test functions for $\\vec{J}_\\mm{g}$\n\t\t\t\\end{itemize}\n\t\t\\end{itemize}\n\t\\end{block}\n\\end{frame}", "meta": {"hexsha": "f02a52516c2354f7d763659d4416083a14b17b0a", "size": 5277, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bart/slides/intro-slides/sections/sec-physics-math.tex", "max_stars_repo_name": "weixiong-zheng-berkeley/Presentations-and-Figures", "max_stars_repo_head_hexsha": "e77ea49c5da54f7e0cde85df77317f097c97faf4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bart/slides/intro-slides/sections/sec-physics-math.tex", "max_issues_repo_name": "weixiong-zheng-berkeley/Presentations-and-Figures", "max_issues_repo_head_hexsha": "e77ea49c5da54f7e0cde85df77317f097c97faf4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"bart/slides/intro-slides/sections/sec-physics-math.tex", "max_forks_repo_name": "weixiong-zheng-berkeley/Presentations-and-Figures", "max_forks_repo_head_hexsha": "e77ea49c5da54f7e0cde85df77317f097c97faf4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.7830188679, "max_line_length": 247, "alphanum_fraction": 0.6784157665, "num_tokens": 2111, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467548438126, "lm_q2_score": 0.6513548646660542, "lm_q1q2_score": 0.5726365155228923}}
{"text": "\\section{Semantics}\n\\label{semantics}\n\nThe semantics of LPADs for the case of programs without functions symbols can be given\nas follows. An LPAD defines a probability distribution over normal logic programs called\n\\emph{worlds}. A world is obtained from an LPAD by first grounding it, by \nselecting a single head atom for each ground clause and by including in the world\nthe clause with the selected head atom and the body.\nThe probability of a world is the product of the probabilities associated to the \nheads selected.\nThe probability of a ground atom (the query) is given by the sum of the probabilities\nof the worlds where the query is true.\n\nIf the LPAD contains function symbols, the definition is more complex, see\n\\cite{DBLP:journals/ai/Poole97,DBLP:journals/jair/SatoK01,Rig15-PLP-IW}.\n\nFor the semantics of programs with continuous random variables, see \\cite{TLP:8688161}\nthat defines the probability\nspace for $N$ continuous random variables by considering the Borel $\\sigma$-algebra\nover $\\mathbb{R}^N$\nand defines a Lebesgue measure on this set as the probability measure.\nThe probability space is lifted to cover the entire program using the least model semantics of constraint logic programs. Alternatively,\n\\cite{Nitti2016} defines the semantics of distributional clauses by resorting to\na stochastic $Tp$ operator.\n\\verb|cplint| allows more freedom than distributional clauses in the use of continuous random variables\nin expressions, for example\n\\href{http://cplint.eu/e/kalman_filter.pl}{\\texttt{kalman\\_filter.pl}} would not be allowed by distributional clauses.\n", "meta": {"hexsha": "6fd7260f0b6d8e31c8053aa09349beee93a68b17", "size": 1586, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/old_docs/semantics.tex", "max_stars_repo_name": "JanWielemaker/cplint", "max_stars_repo_head_hexsha": "654dd2d84f31bf1444c090bc5f33b8babfc9c2da", "max_stars_repo_licenses": ["Artistic-2.0"], "max_stars_count": 55, "max_stars_repo_stars_event_min_datetime": "2015-07-10T23:18:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T02:27:06.000Z", "max_issues_repo_path": "docs/old_docs/semantics.tex", "max_issues_repo_name": "JanWielemaker/cplint", "max_issues_repo_head_hexsha": "654dd2d84f31bf1444c090bc5f33b8babfc9c2da", "max_issues_repo_licenses": ["Artistic-2.0"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2016-09-14T05:24:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T13:37:11.000Z", "max_forks_repo_path": "docs/old_docs/semantics.tex", "max_forks_repo_name": "JanWielemaker/cplint", "max_forks_repo_head_hexsha": "654dd2d84f31bf1444c090bc5f33b8babfc9c2da", "max_forks_repo_licenses": ["Artistic-2.0"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2015-11-02T15:09:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-24T09:35:44.000Z", "avg_line_length": 56.6428571429, "max_line_length": 136, "alphanum_fraction": 0.8064312736, "num_tokens": 374, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118111485244, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5726153135999696}}
{"text": "\\section{Massive Dirac Fermion Model}\nThe dispersion relation $E(k_x, k_y)$ looks like a cone.\n\nNew band touching,\nwe get Dirac fermion\n\\begin{align}\n    H &=\n    \\int d^2 x\\,\n    \\Psi^\\dagger\\left(\n        -i\\sigma^x v_x D_x\n        - i\\sigma^y D_y\n        + m\\sigma^z\n    \\right)\\Psi\n\\end{align}\nwhere\n\\begin{align}\n    D_j &= \\partial_j - i A_j\n\\end{align}\nWe can rescale space to get isotropic velocity\n\\begin{align}\n    y \\to \\frac{v_y}{v_x}y\n\\end{align}\n\nIn the Lagrangian formulation\n\\begin{align}\n    S &= \\int d^3x\\, \\mathcal{L}\n\\end{align}\nthe Lagrangian density is\n\\begin{align}\n    \\mthcal{L} &=\n    \\overline{\\Psi} \\left( i \\slash{D} - m \\right) \\Psi\n\\end{align}\nwhere we use Feynman slash notation\n\\begin{align}\n    \\slash{D} = \\gamma^{\\mu}D_{\\mu}\n\\end{align}\nand\n\\begin{align}\n    \\overline{\\Psi} &= \\Psi^\\dagger \\gamma^0\n\\end{align}\nand the Dirac matrices here are\n\\begin{align}\n    \\gamma^0 &= \\sigma^z\\\\\n    \\gamma^1 &= i \\sigma^x\\\\\n    \\gamma^2 &= i\\sigma^y.\n\\end{align}\nIn continuum,\ncan also define on curved space-time a metric $g_{\\mu\\nu}$\nwith action\n\\begin{align}\n    S &= \\int d^3x\\,\n    \\sqrt{|g|} \\mathcal{L}\n\\end{align}\nwhere you can diagonalize the metric with orthonormal frame fields\n\\begin{align}\n    g_{\\mu\\nu}^a \\eta_{\\alpha\\beta} e_{\\nu}^b\n\\end{align}\nand we use the Minkowski metric\n\\begin{align}\n    \\eta &=\n    \\begin{pmatrix}\n        1 & &\\\\\n        & -1 &\\\\\n        & & -1\n    \\end{pmatrix}.\n\\end{align}\nThe group of transformations which leaves the metric invariant is\n$SO(2, 1)$, which means there is a redundancy in this description,\nas transformations like\n\\begin{align}\n    e_{\\mu}^a \\to \\sum_{b} W_{ab} e_\\mu^b\n\\end{align}\nwhere $W\\in SO(2,1)$ describe the same thing.\nThis gauge symmetry leads to a $SO(2, 1)$\nnon-Abelian gauge field\n\\begin{align}\n    W_\\mu^{ab}\n\\end{align}\nwhich is also called the ``spin connection''.\nThe holomony of this gauge filed will tell you how you wind up rotated as you go\nin a closed loop.\nIt tells you how to do parallel transport in your space to relate frame fields to\neach other in different points in space.\n\nSo the field strength of the spin connection will then give you the curvature of\nspacetime.\n\nThe way you couple Dirac fermions to space-time curvature is to couple them to\nthis non-Abelian gauge field,\nthe spin connection.\n\n\\begin{question}\n    What was the redundancy?\n    If $W$ is just a redundancy,\n    how can it define a field?\n\\end{question}\nWhen you have $U(1)$ gauge symmetry,\nyou have\n\\begin{align}\n    \\psi \\to e^{i\\theta} \\psi\n\\end{align}\nThen we gauge the global symmetry.\n\n\\begin{question}\n    Although it's called spin connection,\n    it seems only related to the $SO(2, 1)$ spacetime,\n    but not the field that lives on the spacetime?\n\\end{question}\nWhat you're alluding to will come up next,\nhow to couple the field to the Dirac spinor,\nduring which we must pick a representation,\nand we have to pick the spinor representation.\n\n\\begin{question}\n    How will we be gauging this?\n    These are local transformations to begin with,\n    do we need to gauge them?\n\\end{question}\nIt's like turning on gravity.\nWhen you turn on $g$,\nyou have curved spacetime,\nand to describe it,\nit's turning on a gauge field associated with $SO(2, 1)$.\nIt gives a way of thinking of curvature as an ordinary gauge term,\nof 
$SO(2,1)$ in this case.\n\n\\begin{question}\n    If you want your theory to be invariant,\n    there should be some cancellation of fermionic fields,\n    and since the field is present,\n    it can demonstrate the curvature of spacetime.\n    But since the $W$ is just a local Lorentz transformation,\n    it doesn't carry.\n\\end{question}\nI'm not spelling this out,\nI'm just sketching the general thing.\nJust to see what fields are involved in the description.\nSo fermions should couple to this gauge field $W_\\mu^{ab}$.\nSo the covariant derivative $D_\\mu \\psi_\\alpha$.\n\nThe spinor should be written as\n\\begin{align}\n    \\Psi =\n    \\begin{pmatrix}\n        \\psi_1\\\\\n        \\psi_2\n    \\end{pmatrix}\n\\end{align}\nand then we want to couple to the gauge field the following way.\n\\begin{align}\n    D_{\\mu}\\psi_\\alpha &=\n    \\left( \\partial_\\mu - iA_\\mu \\right)\\psi_\\alpha\n    + \\frac{1}{2} W_\\mu^{ab}\\left( S_{ab} \\right)_{\\alpha\\beta}\\psi_\\beta\n\\end{align}\nwhere $S_{ab}$ is the spinor representation of the Lorentz group,\nwhich can be written in terms of gamma matrices.\n\nSo now we can consider integrating out the fermions.\n\nBy integrating out the fermions,\nI mean we do something like this.\n\\begin{align}\n    e^{iS_{\\mathrm{eff}}[A,W]} &=\n    \\int \\mathcal{D}\\overline{\\Psi}\\,\\mathcal{D}\\Psi\\,\n    e^{iS[\\Psi, A, W]}.\n\\end{align}\nIt will be homework,\nbut the point is,\nto lowest order,\nyou get Chern-Simons terms.\n\\begin{align}\n    S_{\\mathrm{eff}} &=\n    \\frac{1}{4\\pi} \\frac{\\sgn(m)}{2}\n    \\int \\epsilon^{\\mu\\nu\\lambda} A_\\mu \\partial_\\nu A_\\lambda\n    + \\frac{\\sgn(m)}{2} S_{\\mathrm{CS,grav}}\n\\end{align}\nwhere the second term is the gravitational Chern-Simons term for the non-Abelian\ngauge field.\n\nAnd this term is\n\\begin{align}\n    S_{\\mathrm{CS,grav}} \\propto\n    \\int\\Tr\\left( w\\wedge dw + \\frac{2}{3} w\\wedge w \\wedge w \\right).\n\\end{align}\nChanging the sign of the mass gives\n\\begin{align}\n    \\Delta S &= \\frac{1}{4\\pi} \\int A\\, dA\n    + S_{\\mathrm{CS,grav}}\n\\end{align}\nThe point of this was to see what the effective action is from the point of view\nof the Dirac fermions and see how we can reproduce the\n$A_\\mu \\partial_\\nu A_\\lambda$\nChern-Simons term and that we also get this gravitational term.\n\nEvery time we describe topological phases with free fermions,\nthere's always some way to also describe them in terms of Majorana fermions\nor Dirac fermions.\n\n\\begin{question}\n    Why is there a 1/2 factor in\n    \\begin{align}\n        \\frac{1}{2}W_\\mu^{ab} \\left( S_{ab} \\right)_{\\alpha\\beta}\\psi_\\beta.\n    \\end{align}\n\\end{question}\n\nNow we have a Dirac fermion in 2+1D and we can change the phase we're in by\nchanging the mass.\nThe same thing happens in other dimensions,\nand by changing the mass term,\nyou can go through different topological phases.\n\n\\subsection{Chiral fermion on domain wall in mass}\nIf you have a system with boundaries,\nyou can have chiral fermion modes.\nChanging the sign of the mass changes the Chern number,\nbecause it changes the effective action by one unit of the CS term.\nIt turns out,\nat the domain wall,\nwe're going to have a mode exponentially localized to the domain wall\nthat's gapless and chiral.\nLet $m(x)$ be negative for negative $x$ and positive for positive $x$.\nIt reduces to the Jackiw-Rebbi soliton in 1D.\nThe Hamiltonian is\n\\begin{align}\n    H &=\n    \\int dx\\, dy\\,\n    
\\Psi^\\dagger\\left(\n        -i\\sigma^x \\partial_x - i\\sigma^y \\partial_y + m(x) \\sigma^z\n    \\right)\\Psi\n\\end{align}\nwhere I set the velocities to 1.\n\nFor a plane wave\n\\begin{align}\n    \\Psi(x, y) &= \\Psi(x) e^{ik_y y},\n\\end{align}\nthe eigenvalue equation is\n\\begin{align}\n    \\left( -i\\sigma^x \\partial_x + m(x) \\sigma^z \\right)\\psi\n    &=\n    \\left( E_{k_y} - k_y \\sigma^y \\right)\\psi.\n\\end{align}\nIf you remember what we did for the Jackiw-Rebbi soliton,\nthere was a zero-energy solution of the operator on the left-hand side,\nand it was exponentially localized to the domain wall.\n\nRecall that in the J-R soliton,\nthere is a zero-mode solution $\\psi_0$\nwith\n\\begin{align}\n    \\sigma^x \\psi_0 = \\lambda \\psi_0\n\\end{align}\nwhere $\\lambda = \\pm 1$.\nThis zero-energy solution means that we get a solution localized to the domain\nwall with an energy\n\\begin{align}\n    E_{k_y} &= -\\lambda k_y.\n\\end{align}\nIf we had a domain wall where $m(x)<0$ on one side\nbut $m(x)>0$ on the other side, \nwe have a propagating fermion exponentially confined to the domain wall.\n\nWe have a chiral anomaly and a gravitational anomaly,\nwhich is partially responsible for a quantized thermal Hall effect.\n\nThe $(1+1)$D chiral fermion has a\n$U(1)$ chiral anomaly,\nwhich is a 't Hooft anomaly.\nThere is also a gravitational anomaly.\nBoth of these anomalies can be cancelled by the\n$(2+1)$D bulk,\nwhich is the Chern insulator.\nThis is an example\nof how you can have anomalies in one dimension\nthat are cancelled in higher dimensions.\nRelated to this is the fact that the chiral fermion gives a quantized thermal\nHall conductance.\n\nThe gravitational anomaly and the quantized thermal Hall conductance\ndefine a chiral central charge $C_{-}$.\nIt is the same as the Chern number for free fermion systems,\nbut that breaks down for interacting systems.\n\nIf you view them as boundaries of a Chern insulator,\nthere's no longer anything wrong with the anomalies.\n\n\\subsection{U(1) Chiral Anomaly}\n\nThe effective action for Dirac fermions is\n\\begin{align}\n    S_{\\mathrm{eff}} &=\n    \\frac{1}{4\\pi}\\int A\\, dA\n    + S_{\\textrm{CS,grav}}\n\\end{align}\n\nThen if we turn on the electric field\n\\begin{align}\n    \\frac{dp}{dt} &= E\n\\end{align}\nand suppose we increase the momentum by $\\Delta p = \\int E\\, dt$.\nImagine putting it on a ring with periodic boundary conditions.\n$p$ is quantized in units $2\\pi/L$.\nThen the charge is\n\\begin{align}\n    \\Delta Q &=\n    \\frac{\\Delta p}{2\\pi /L}\\\\\n    &= \\frac{L}{2\\pi} \\int E\\, dt\\\\\n    &= \\frac{1}{2\\pi} \\int E\\, dy\\, dt\n\\end{align}\nand also noting that the charge is just\n\\begin{align}\n    \\int \\partial_t n \\, dt\\, dy\n\\end{align}\nwhere $n$ is the charge density,\nyou get\n\\begin{align}\n    \\partial_t n &= \\frac{E}{2\\pi}.\n\\end{align}\nThe covariant version of this is\n\\begin{align}\n    \\partial_\\mu j^\\mu &= \\frac{1}{2\\pi}\\epsilon_{\\mu\\nu}F^{\\mu\\nu}.\n\\end{align}\nThe key is that some conservation law breaks down when you turn on the field.\n\nSuppose you have a cylinder with boundaries.\nThe bulk cancels the anomaly because it tells you exactly where the charge comes\nfrom.\nIf you include both edges at the same time, you find\n\\begin{align}\n    \\partial_\\mu j^{\\mu}_{\\mathrm{axial}} &=\n    \\frac{1}{2\\pi} \\epsilon^{\\mu\\nu} F_{\\mu\\nu}.\n\\end{align}\nSometimes the axial current is called the chiral current.\n\nEarlier in the course I said you can have 
't Hooft anomalies,\nwhich show up when you try to couple the theory to a background gauge field.\nTrying to couple to $A_\\mu$:\n\\begin{align}\n    S_{\\mathrm{coupling}} &=\n    \\int A_\\mu j^\\mu\n    \\to \\int \\left( A_\\mu - \\partial_\\mu f \\right) j^\\mu\\\\\n    \\delta S &= \\int f \\partial_\\mu j^\\mu = 0\n\\end{align}\n\n\\subsection{Gravitational Anomaly}\nTo couple a field theory to gravity means to couple it to some spacetime metric.\nIf we want to couple to $g_{\\mu\\nu}$,\nwe need a stress-energy tensor $T^{\\mu\\nu}$\nwhich is conserved, meaning\n\\begin{align}\n    \\nabla_\\mu T^{\\mu\\nu} = 0.\n\\end{align}\nAnd the gravitational anomaly is a breakdown of this equation.\nFor example,\nfor the chiral fermion,\n\\begin{align}\n    \\nabla_\\mu T^{\\mu\\nu} \\propto C_{-}\\epsilon^{\\nu\\sigma}\\partial_{\\sigma}R\n\\end{align}\nwhere $C_{-}$ is the chiral central charge\nand $R$ is the space-time curvature.\n\nThe next thing I want to describe is the last property,\nwhich is the quantized thermal Hall conductance.\n\n\\subsection{Quantized Thermal Hall Conductance}\nSuppose we have a $(2+1)$D Chern insulator region\nwith $C=1$, surrounded by a trivial region with $C=0$;\none side is coupled to a thermal bath at temperature $T_1$\nand the other side is coupled to a thermal bath at temperature $T_2$.\nSo we have a \\emph{thermal current}\n\\begin{align}\n    \\vec{j}_Q = -\\kappa \\vec{\\nabla} T\n\\end{align}\nBut here there is a quantized thermal Hall conductance\n\\begin{align}\n    \\kappa_H &=\n    \\frac{\\pi^2}{3} \\frac{k_B^2}{h}T\n\\end{align}\nper chiral edge mode,\nso the quantized thermal current is\n\\begin{align}\n    j_{Q,x} &= -\\kappa_H \\Delta T.\n\\end{align}\n\nThe intuition is as follows.\nIf you couple the top to some temperature,\nyou're creating a thermal density of excitations,\nwhere the average energy is $k_B T_1$,\nand the same for the other side.\nBut because it's chiral,\nyou have a flow of energy along that boundary,\nand a flow of energy along the other boundary in the opposite direction,\nand if the temperatures and thus energies are different,\nthere is a net flow of energy.\n\nConsider a single chiral edge mode at temperature $T$.\nThe average energy density is\n\\begin{align}\n    n_Q &=\n    \\int \\frac{dk}{2\\pi}\n    \\left( \\varepsilon_k - \\mu \\right)\\left[ \n    \\frac{1}{e^{\\beta\\left( \\varepsilon_k - \\mu \\right)} + 1}\n    - n_{FD;k}(T = 0)\n    \\right]\n\\end{align}\nwhere the energy is $\\varepsilon_k = \\hbar v\\cdot k$ and the thermal current is\n\\begin{align}\n    j_Q = v\\cdot n_Q = \\frac{\\pi^2}{6} \\frac{k_B^2}{h}T^2.\n\\end{align}\nFor two edge modes,\none going to the right with current\n\\begin{align}\n    j_{Q,1} &=\n    \\frac{\\pi^2}{6} \\frac{k_B^2}{h} T_1^2\n\\end{align}\nand one going to the left\n\\begin{align}\n    j_{Q,2} &=\n    \\frac{\\pi^2}{6} \\frac{k_B^2}{h} T_2^2\n\\end{align}\nso the total current is their difference\n\\begin{align}\n    j_{Q,\\mathrm{tot}} &=\n    \\frac{\\pi^2}{6} \\frac{k_B^2}{h}\n    \\left( T_1^2 - T_2^2 \\right)\\\\\n    &=\n    -\\underbrace{\\frac{\\pi^2}{6} \\frac{k_B^2}{h} 2T_1}_{\\kappa_H} \\Delta T\n    + \\mathcal{O}\\left( \\Delta T^2 \\right)\n\\end{align}\nwhere $T_2 = T_1 + \\Delta T$.\nIn general,\n\\begin{align}\n    \\kappa_H &=\n    C_{-} \\frac{\\pi^2}{3} \\frac{k_B^2}{h}T\n\\end{align}\nand here the chiral central charge $C_{-}$ is the number of right-moving chiral\nfermion modes minus the number of left-moving chiral fermion modes on a given\nedge.\n\nThe reason we introduce this is 
because\nMajorana fermions are half a complex fermion,\nso it has a $\\frac{1}{2}$ chiral central charge.\nIn fact,\nit's much more general,\nand has to do with conformal field theory.\nMore complicated topological quantum phases of matter\nhave boundaries not described by chiral fermions,\nbut there is still a chiral central charge.\n\n\\begin{question}\n    About the $U(1)$ anomaly there's a way to see the bulk.\n    Is there a way to see how the gravitational anomaly comes from the bulk?\n\\end{question}\nFor the gravitational anomaly,\nyou can do calculations and put your system on a curved spacetime,\nyou see extra energy,\nand if you write the extra CS term,\nit will account for the non-conservation of the stress energy tensor.\nBut you won't be able to relate it to the quantized thermal Hall effect,\nbecause there is no boundary.\nThe boundaries are supposed to be moving as we adiabatically insert flux.\nYou can't really relate it to the thermal Hall effect in a clear way.\n\n\\begin{question}\n    Is the only way you can have these chiral fermions in relation to the\n    bulk?\n\\end{question}\n\n\\begin{question}\n    Is the gravitational anomaly what leads to the quantized Hall conductance?\n\\end{question}\nI don't have a good explanation except that $C_-$ appears in both cases.\nJust like how we derive conductivity by turning on an electric field,\nyou can turn on thermal conductivity by turning on a gravitational field.\nIn the context of the quantum Hall effect,\nthere have been many studies,\nbut I don't think it led to a direct connection between the thermal Hall effect\nand the gravitational response.\nFrom my perspective,\nthere are central charges appearing in both places.\nThere might be ways of introducing thermal conductivity on the edge\nby introducing a gravitational potential.\nThere's a fundamental mismatch,\nbecause the bulk can never have a thermal quantum Hall effect.\nIt's possible other people might have a better answer.\n\n\\begin{question}\n    A cylinder with a gradient around the cylinder?\n\\end{question}\nThat doesn't even make sense.\nYou can't really put a temperature gradient on the boundary of this thing\nbecause it's chiral.\nI didn't put a gradient along the edge,\nI don't think it's even something you can do.\n", "meta": {"hexsha": "7096e4d98268a8b5846c46f258c6b72432c8e2e0", "size": 15502, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phys733/lecture12.tex", "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_issues_repo_path": "phys733/lecture12.tex", "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phys733/lecture12.tex", "max_forks_repo_name": "ehua7365/umdphysnotes", "max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0661322645, "max_line_length": 81, "alphanum_fraction": 0.7092633209, "num_tokens": 4660, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117983401362, "lm_q2_score": 0.6926419894793248, "lm_q1q2_score": 0.5726153047283422}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                                                                 %\n%   PAW   - Reference Manual -- LaTeX Source                      %\n%                                                                 %\n%   Chapter 6: SIGMA                                              %\n%                                                                 %\n%   EPS file      : sigexa1.eps                                   %\n%                                                                 %\n%   Editor: Michel Goossens / CN-AS                               %\n%   Last Mod.:  8 Feb 1994 20:15 mg                               %\n%                                                                 %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter{SIGMA}\n\\label{chap-sigma}\n\\index{SIGMA|(}\n\\def\\PAWchap{SIGMA}\n\n\\section{Access to SIGMA}\n\\label{sec:H2SIGMA}\n\\index{SIGMA!access}\n \nThe SIGMA array manipulation package can be \naccessed in three different ways in PAW:\n\n\\subsection*{Precede the statement by the prefix SIGMA}\n\\index{SIGMA!prefix SIGMA}\n\\index{prefix SIGMA}\n \n\\subsection*{Example}\n\\begin{alltt}\n  PAW > \\Ucom{SIGMA xvec=array(100,-pi#pi*2)}\n  PAW > \\Ucom{SIGMA y=sin(xvec)*xvec}\n\\end{alltt}\nNote the use of the predefined constant \\texttt{PI}\nin SIGMA with the obvious value.\n\n\\subsection*{The PAW command: APPLication SIGMA}\n\\index{SIGMA!APPLication SIGMA}\n\\index{application SIGMA}\n \nAll commands typed in after this command will be directly processed by\nSIGMA. The command \\Ucom{EXIT} will return control to PAW, e.g.\n\n\\begin{alltt}\n  PAW > \\Ucom{APPLication SIGMA}\n  SIGMA > \\Ucom{xvec=array(100,-pi#pi*2)}\n  SIGMA > \\Ucom{sinus=sin(xvec)*xvec}\n  SIGMA > \\Ucom{cosinus=cos(xvec)*xvec}\n  SIGMA > \\Ucom{exit}\n  PAW > \\Ucom{vector/list}\n   Vector Name                    Type    Length    Dim-1    Dim-2    Dim-3\n \n   XVEC                              R       100      100\n   SINUS                             R       100      100\n   COSINUS                           R       100      100\n \n   Total of  3 Vector(s)\n\\end{alltt}\n\n\\subsection*{The PAW system function \\dollar SIGMA}\n\\label{sec:H3SIGVE}\n\\index{SIGMA!\\protect\\dollar SIGMA}\n\\index{\\protect\\dollar SIGMA}\n \nThe expression to be evaluated must be\nenclosed in parentheses. The function will return the numerical\nvalue of the expression (if the result is a scalar) or the\nname of a temporary vector (if the result is a vector).\n \nAssuming that the computation of the function \\texttt{sin(x)*x}\nin the above\nexample would be only for the purpose of producing a graph, (i.e.\nthe result is not needed for further calculations), then one could\njust have typed the following commands:\n\n\\begin{alltt}\n  PAW > \\Ucom{SIGMA xvec=array(100,-pi#pi*2)}\n  PAW > \\Ucom{GRAph 100 xvec $SIGMA(SIN(XVEC)*XVEC)}\n\\end{alltt}\n\n\\section{Vector arithmetic operations using SIGMA}\n\\index{array!in SIGMA}\n\\index{vector!operations}\n\\index{vector!in SIGMA}\n\\index{vector!arithmetic}\n\\index{SIGMA!vector}\n\\index{SIGMA!array}\n \nA complete and convenient mechanism for the mathematical\nmanipulation of vectors is provided by SIGMA. In the following,\nwe use the words ``array'' and ``vector'' as synonyms. 
\nIn both\ncases, we refer to PAW vectors, in the sense that SIGMA offers\nan alternative way to generate and to manipulate PAW vectors\n(see section \\ref{sec:H1PVECT} on page~\\pageref{sec:H1PVECT}).\nThe notation of SIGMA is similar to that of FORTRAN, in the sense\nthat it is based upon formulae and assignment statements.\n \nThe special operator \\texttt{ARRAY} is used to generate vectors:\n\n\\PAWfdef{ARRAY}{vname = ARRAY (arg1,arg2)}\n \n\\begin{DLtt}{12345}\n\\item[vname] Name of the vector (array) being created.\n\\item[arg1]  Defines the array {\\bf structure},\n             i.e. the Number of COmponents (\\texttt{NCO}) of the array.\n\\item[arg2]  Provides the {\\bf numerical values} filling the array\n             row-wise.\\\\\n             If \\texttt{arg2} is absent (or does not provide enough\n             values) the array is filled with 1.\n\\end{DLtt}\n\\index{array!filling}\n\\index{SIGMA!array!structure}\n\\index{SIGMA!array!filling}\n\\index{operator in SIGMA}\n\n\\subsection{Basic operators}\n\\index{SIGMA!basic operator}\n\\index{basic operator in SIGMA}\n \n\\begin{DLtt}{123}\n\\item[+]  Add\n\\item[-]  Subtract\n\\item[*]  Multiply\n\\item[/]  Divide\n\\item[**] Exponentiation\n\\item[\\&] Concatenation\n\\end{DLtt}\n\n\\fbox{\\parbox{.97\\textwidth}{%\nNote that ill-defined operations will give \\texttt{0.} as result. For\ninstance: a division by zero gives zero as result.}}\n\n\\subsection{Logical operators}\n\\index{SIGMA!logical operator}\n\\index{SIGMA!boolean value}\n\\index{logical operator in SIGMA}\n\\index{boolean value in SIGMA}\n \nLogical operators act on entities that have\n{\\bf Boolean} values \\texttt{1} (true) or \\texttt{0} (false).\nThe result is Boolean.\n\n\\begin{DLtt}{123}\n\\item[AND] Logical operation AND\n\\item[NOT] Logical operation NOT\n\\item[OR]  Logical operation OR\n\\item[EQ]  EQual to\n\\item[GE]  Greater or Equal to\n\\item[GT]  Greater Than\n\\item[LE]  Less or Equal to\n\\item[LT]  Less Than\n\\item[NE]  Not Equal\n\\end{DLtt}\n\n\\subsection{Control operators}\n\\index{SIGMA!control operator}\n\\index{control operator in SIGMA}\n\n\\begin{DLtt}{12345678}\n\\item[!PRINT]   Provides the automatic printing of \n                every newly created array or scalar.\n\\item[!NOPRINT] Suppresses the automatic printing of \n                every newly created array or scalar.\n\\end{DLtt}\n \n\\subsection*{Examples}\n\\begin{alltt}\n\\Ucom{A=ARRAY (6,1#6)}             1  2  3  4  5  6\n\\Ucom{A=ARRAY (4)}                 1  1  1  1\n\\Ucom{A=ARRAY (5,2&3&-1&2&1.2)}    2  3  -1  2  1.2\n\\Ucom{A=ARRAY (3)*PI}              3.1415927  3.1415927  3.1415927\n\\Ucom{A=ARRAY (1,123E4)}           1230000.0\n\\end{alltt}\n\n\\section{SIGMA functions}\n\\index{SIGMA!function}\n\\index{function!in SIGMA}\n \nSIGMA provides some functions which perform a task on a whole array. These\nfunctions have no analogues in FORTRAN because all FORTRAN functions operate on\none or more single numbers. Presently available SIGMA functions\nare listed in table \\ref{tab:SIGFUN} below.\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|llp{.75\\textwidth}|}\n\\hline\n\\rm\\bf Name            & \\rm\\bf Result                    &\n\\rm\\bf Explanation                                        \\\\\n\\hline\n\\PAWfind{ANY}          & Scalar                           &\nThe result is a Boolean scalar of value \\texttt{1} (true)\nif at least one component of the argument\nis true and \\texttt{0} (false) otherwise.                    
\\\\\n\\PAWfind{DEL}          & Vector                           &\nAnalog to the {\\bf Dirac-DELta Function}.\n\\texttt{V1=DEL(V)} sets each element of \\texttt{V1} to \\texttt{0.0} \n(if corresponding element in \\texttt{V} \nis {\\bf non-zero}) or to \\texttt{1.0}\n(if corresponding element is {\\bf zero}).                 \\\\\n\\PAWfind{DIFF}         & Vector                           &\n\\texttt{V2=DIFF(V)} {\\bf forward difference} of \\texttt{V}.\nThe rightmost value in \\texttt{V2} is obtained by quadratic \nextrapolation over the last three elements of \\texttt{V}.    \\\\\n\\PAWfind{LS}           & Vector                           &\n\\texttt{V1=LS(V,N)} {\\bf shifts} index of \\texttt{V} to the left \nby \\texttt{N} steps (cyclic).                                \\\\\n\\PAWfind{LVMAX}        & Scalar                           &\n\\texttt{S1=LVMAX(V1)} sets \\texttt{S1} equal to the {\\bf index} \n(location) of the {\\bf maximum} value in vector \\texttt{V1}. \\\\\n\\PAWfind{LVMIN}        & Scalar                           &\n\\texttt{S1=LVMIN(V1)} sets \\texttt{S1} equal to the {\\bf index} \n(location) of the {\\bf minimum} value in vector \\texttt{V1}. \\\\\n\\PAWfind{MAX}          & Vector                           &\n\\texttt{V3=MAX(V1,V2)}\nsets each element of \\texttt{V3} equal to the {\\bf maximum}\nof the corresponding elements in \\texttt{V1} and \\texttt{V2}.   \\\\\n\\PAWfind{MAXV}         & Vector                           &\n\\texttt{V1=MAXV(V)} sets each element of \\texttt{V1} equal \nto the {\\bf maximum} value in \\texttt{V}.                    \\\\\n\\PAWfind{MIN}          & Vector                           &\n\\texttt{V3=MIN(V1,V2)}\nsets each element of \\texttt{V3} equal to the {\\bf minimum}\nof the corresponding elements in \\texttt{V1} and \\texttt{V2}.   \\\\\n\\PAWfind{MINV}         & Vector                           &\n\\texttt{V1=MINV(V)} sets each element of \\texttt{V1} equal \nto the {\\bf minimum} value in \\texttt{V}.                    \\\\\n\\PAWfind{NCO}          & Scalar                           &\n\\texttt{V1=NCO(V)} {\\bf Number of COmponents} \nof vector \\texttt{V}.                                     \\\\\n\\PAWfind{ORDER}        & Vector                           &\n\\texttt{V1=ORDER(V,V2)} finds a permutation that brings\n\\texttt{V2} in a non-descending order\nand applies it to \\texttt{V} to generate \\texttt{V1}.           \\\\\n\\PAWfind{PROD}         & Vector                           &\n\\texttt{V1=PROD(V)} \\texttt{V1} is the {\\bf running product} \nof \\texttt{V}.                                               \\\\\n\\PAWfind{QUAD}         & Vector                           &\n\\texttt{V2=QUAD(V1,H)}\nThe quadrature function \\texttt{QUAD} numerically integrates \neach row of \\texttt{V1} with respect \nto the scalar step size \\texttt{H}.                          \\\\\n\\PAWfind{SUMV}         & Vector                           &\n\\texttt{V2=SUMV(V1)} {\\bf running sum} of \\texttt{V1}.           \\\\\n\\PAWfind{VMAX}         & Scalar                           &\n\\texttt{S1=VMAX(V1)} sets \\texttt{S1} equal to the \n{\\bf maximum} value in vector \\texttt{V1}.                   \\\\\n\\PAWfind{VMIN}         & Scalar                           &\n\\texttt{S1=VMIN(V1)} sets \\texttt{S1} equal to the \n{\\bf minimum} value in vector \\texttt{V1}.                   \\\\\n\\PAWfind{VSUM}         & Scalar                           &\n\\texttt{S1=VSUM(V)} {\\bf sum} of all components of \\texttt{V}.  
\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{SIGMA functions}\n\\label{tab:SIGFUN}\n\\end{table}\n\n\\subsection{SIGMA functions - A detailed description.}\n \nIn the following description of the SIGMA functions, the letter \\texttt{R} always\ndenotes the {\\bf result} and \\texttt{arg} denotes one or more {\\bf arguments}.\nAny argument may itself be an expression. \nIn that case \\texttt{arg} means the result of this expression. \nLet \\texttt{OP} denote any of the above array functions, then the statement:\n\n\\PAWfdef{OP}{R=OP(arg1,arg2,...)}\n\nproduces \\texttt{R} without doing anything to the \ncontents stored under the names\nappearing in \\texttt{arg1,arg2,...}.\nThus, although in the description we may say\n``...\\texttt{OP} does such and such to \\texttt{arg} ...'',\nin reality it leaves \\texttt{arg} intact and works on\nthe argument to produce \\texttt{R}.\n\n\\PAWfdef{ANY}{R = ANY(arg)}\n \nThe function \\texttt{ANY} considers the result \nof the argument expression as a Boolean\narray. SIGMA represents ``true'' by \\texttt{1} and ``false'' by \\texttt{0}.\nThus the components of \\texttt{arg}\nmust be either \\texttt{0} or \\texttt{1}, otherwise an error is generated.\n \nIf at least one component of the result of the argument expression is \\texttt{1},\nthen \\texttt{ANY} returns the scalar \\texttt{1}. If all components of the result of the argument\nexpression are \\texttt{0} then \\texttt{ANY} returns the scalar \\texttt{0}.\nIf \\texttt{arg} is a Boolean scalar, \\texttt{R = arg}.\n\n\\subsection*{Example of the \\texttt{ANY} command}\n\\begin{alltt}\n  PAW > \\Ucom{APPL SIGMA}\n  SIGMA > \\Ucom{!PRINT}                            | Print newly created vectors and scalars\n  SIGMA > \\Ucom{W=(-2)**ARRAY(10,1#10)}\n  NCO(W       )=   10\n  W       =\n   -2.000      4.000     -8.000      16.00     -32.00      64.00\n   -128.0      256.0     -512.0      1024.\n  SIGMA > \\Ucom{X=W GT 0}\n  NCO(X       )=   10\n  X       =\n   0.0000      1.000     0.0000      1.000     0.0000      1.000\n   0.0000      1.000     0.0000      1.000\n  SIGMA > \\Ucom{R=ANY(X)}\n  NCO(R       )=    1\n  R         1.000\n\\end{alltt}\n\n\\PAWfdef{DEL}{R = DEL(arg)}\n\\index{delta function}\n \n\\texttt{DEL} is a discrete analogue of a {\\bf Dirac delta function}. \n\\texttt{DEL} works independently\non each row of the argument array. 
\nIf the elements of any row of the argument are denoted by\n\\(X_1,\\:X_2,\\:\\dots,\\:X_i,\\:\\ldots,\\:X_n\\)\nthen the corresponding row of the result of the delta function\noperation will be \n\\(Z_1,\\:Z_2,\\:\\dots,\\:Z_i,\\:\\ldots,\\:Z_n\\)\nwhere all \\(Z_i = 0\\) except in three cases, \nin which \\(Z_i = 1\\), namely:\n\n\\begin{OL}\n\\item When the component \\( X_i \\) is itself zero.\n\\item When \\( X_{i-1}, \\:X_i \\) are of opposite sign and \n      \\( \\left| X_i \\right| < \\left| X_{i-1} \\right| \\).\n      If \\( i = 1 \\) then linear extrapolation to the left is used.\n\\item When \\( X_i, \\:X_{i+1} \\) are of opposite sign and \n      \\( \\left| X_i \\right| \\leq \\left| X_{i+1} \\right| \\).\n      If \\( i = n \\) then linear extrapolation to the right is used.\n\\end{OL}\n\nIf \\texttt{arg} is a scalar, the value of \\texttt{DEL(arg)} will be \\texttt{1}\nif \\texttt{arg} is zero, and \\texttt{0} otherwise.\n\n\\subsection*{Example of the \\texttt{DEL} command}\n\\begin{alltt}\n  SIGMA > \\Ucom{W=array(11,-1#1)}\n  NCO(W       )=   11\n  W       =\n   -1.000    -0.8000    -0.6000    -0.4000    -0.2000    -0.2980E-07\n   0.2000     0.4000     0.6000     0.8000      1.000\n \n  SIGMA > \\Ucom{X= (W+1.01)*W*(W-.35)*(W-.42)}\n  NCO(X       )=   11\n  X       =\n  -0.1917E-01 -0.2357    -0.2384    -0.1501    -0.5524E-01-0.4425E-08\n   0.7986E-02 -0.5640E-03 0.4347E-01 0.2476     0.7578\n \n  SIGMA > \\Ucom{R=del(x)}\n  NCO(R       )=   11\n  R       =\n    1.000     0.0000     0.0000     0.0000     0.0000      1.000\n   0.0000      1.000     0.0000     0.0000     0.0000\n\\end{alltt}\n\n\\PAWfdef{DIFF}{R = DIFF(arg)}\n\\index{DIFF}\n \nThe \\texttt{DIFF} function generates the {\\bf forward difference}\nof each row of the argument array, say \n\\(X_1,\\:X_2,\\:\\ldots,\\) \n\\(X_i,\\:\\ldots,\\:X_n\\)\nand creates an array with components equal to the forward difference of \n\\(X\\): \\(X_2-X_1,\\:X_3-X_2,\\)\n\\(\\ldots,\\:X_n-X_{n-1},X_0\\)\nwhere the rightmost value \\(X_0\\) \nis obtained by quadratic extrapolation over the last\nthree elements of the result of \\texttt{arg}. \nApplied to a scalar \\texttt{DIFF} gives a zero result.\n\n\\subsection*{Example of the \\texttt{DIFF} command}\n\\begin{alltt}\n  SIGMA > \\Ucom{x=array(6,5#0)}\n  NCO(X       )=    6\n  X       =\n    5.000      4.000      3.000      2.000      1.000     0.0000\n  SIGMA > \\Ucom{Y=x*x}\n  NCO(Y       )=    6\n  Y       =\n    25.00      16.00      9.000      4.000      1.000     0.0000\n  SIGMA > \\Ucom{Z=Diff(Y)}\n  NCO(Z       )=    6\n  Z       =\n   -9.000     -7.000     -5.000     -3.000     -1.000      1.000\n\\end{alltt}\n\n\\PAWfdef{LS}{R=LS(arg1,arg2)}\n\\index{LS}\n \nThe \\texttt{LS} rearrangement function performs a {\\bf left shift}.\n\\texttt{arg1} is the array to be shifted; \n\\texttt{arg2} must be a scalar value (rounded\nif necessary by the system), \ninterpreted as the number of places the array has to\nbe shifted to the left. \nThe scalar \\texttt{arg2} can be negative, in which case \\texttt{LS} shifts\nto the right a number of places equal to the absolute value of \\texttt{arg2}.\n \nIt should be noted that the shift is performed circularly \\texttt{modulo N},\nwhere \\texttt{N} \nis the number of components in the rows of the array to be shifted. 
\nHence, \\texttt{LS(X,N+1)}\nshifts the \\texttt{N} component rows of \\texttt{X} by \\texttt{1} to the left, \nand \\texttt{LS(X,-1)} shifts the rows by\n\\texttt{N-1} to the left (or by \\texttt{1} to the right).\nIf \\texttt{arg1} is a scalar, \\texttt{R = arg1}.\n\n\\subsection*{Example of the left shift command}\n\\begin{alltt}\n SIGMA > \\Ucom{X=array(4&5,array(20,1#20))}\n NCO(X       )=    4    5\n X       =\n   1.000      2.000      3.000      4.000\n   5.000      6.000      7.000      8.000\n   9.000      10.00      11.00      12.00\n   13.00      14.00      15.00      16.00\n   17.00      18.00      19.00      20.00\n \n SIGMA > \\Ucom{y=ls(x,1)}\n \n NCO(Y       )=    4    5\n Y       =\n   2.000      3.000      4.000      1.000\n   6.000      7.000      8.000      5.000\n   10.00      11.00      12.00      9.000\n   14.00      15.00      16.00      13.00\n   18.00      19.00      20.00      17.00\n \n SIGMA > \\Ucom{y=ls(x,-3)}\n NCO(Y       )=    4    5\n Y       =\n   2.000      3.000      4.000      1.000\n   6.000      7.000      8.000      5.000\n   10.00      11.00      12.00      9.000\n   14.00      15.00      16.00      13.00\n   18.00      19.00      20.00      17.00\n \n SIGMA > \\Ucom{X=array(5,1#5)}\n NCO(X       )=    5\n X         1.000      2.000      3.000      4.000      5.000\n SIGMA > \\Ucom{z=ls(x,3)}\n NCO(Z       )=    5\n Z         4.000      5.000      1.000      2.000      3.000\n SIGMA > \\Ucom{z1=ls(x,-4)}\n NCO(Z1      )=    5\n Z1        2.000      3.000      4.000      5.000      1.000\n\\end{alltt}\n\n\\PAWfdefii{LVMAX}{R=LVMAX(arg1)}{LVMIN}{R=LVMIN(arg1)}\n\nThe functions \\texttt{LVMAX} and \\texttt{LVMIN}\nreturn as a scalar result the index (position) of\nthe largest or smallest element, respectively,\nin the argument array.\n\n\\subsection*{Example of using the \\texttt{LVMAX} and \\texttt{LVMIN}\n  commands}\n\\begin{alltt}\n SIGMA >\\Ucom{  x=sin(array(10,1#10))}\n NCO(X       )=   10\n X       =\n  0.841     0.909     0.141    -0.757    -0.959    -0.279     0.657\n  0.989     0.412    -0.544\n \n SIGMA >\\Ucom{ r=lvmax(x)}\n NCO(R       )=    1\n R         8.00\n\\end{alltt}\n \n\\PAWfdefii{MAX}{R=MAX(arg1,arg2)}{MIN}{R=MIN(arg1,arg2)}\n \nThe functions \\texttt{MAX} and \\texttt{MIN} work\nindependently on each element of their arguments.\n\\texttt{arg2} can be a scalar.\nThe result has the same dimension as the argument array \n\\texttt{arg1} and each element of\nthe result is set equal to the largest or smallest element, respectively,\nof the corresponding elements of the argument arrays.\n\n\\subsection*{Example of using the \\texttt{MAX} and \\texttt{MIN}\n  commands}\n\\begin{alltt}\n SIGMA >\\Ucom{  x=sin(array(10,1#10))}\n NCO(X       )=   10\n X       =\n  0.841     0.909     0.141    -0.757    -0.959    -0.279     0.657\n  0.989     0.412    -0.544\n \n SIGMA >\\Ucom{   y=cos(array(10,1#10))}\n NCO(Y       )=   10\n Y       =\n  0.540    -0.416    -0.990    -0.654     0.284     0.960     0.754\n -0.146    -0.911    -0.839\n \n SIGMA >\\Ucom{ z=min(x,y)}\n NCO(Z       )=   10\n Z       =\n  0.540    -0.416    -0.990    -0.757    -0.959    -0.279     0.657\n -0.146    -0.911    -0.839\n\\end{alltt}\n\n\\PAWfdefii{MAXV}{R=MAXV(arg)}{MINV}{R=MINV(arg)}\n\nThe extrema functions \\texttt{MAXV} and \\texttt{MINV} work\non each element of their argument and\nthe result has the same dimension as the argument array \\texttt{arg1}. 
\nEach element of\nthe result is set equal to the largest or smallest element, respectively,\nof the corresponding row of the argument array.\n \nAll these functions, if applied to a scalar argument, yield \\texttt{R=arg}.\n\n\\subsection*{Example of using the \\texttt{MAXV} and \\texttt{MINV}\n  commands}\n\\begin{alltt}\n  SIGMA > \\Ucom{x=array(10,0#10)}\n  NCO(X       )=   10\n  X       =\n   0.0000      1.111      2.222      3.333      4.444      5.556\n    6.667      7.778      8.889      10.00\n \n  SIGMA > \\Ucom{s=sin(x)*x}\n  NCO(S       )=   10\n  S       =\n   0.0000     0.9958      1.767    -0.6352     -4.286     -3.695\n    2.494      7.755      4.539     -5.440\n \n  SIGMA > \\Ucom{ x=minv(s)}\n  NCO(X       )=   10\n  X       =\n   -5.440     -5.440     -5.440     -5.440     -5.440     -5.440\n   -5.440     -5.440     -5.440     -5.440\n\\end{alltt}\n\n\\PAWfdef{NCO}{R = NCO(arg)}\n \nThe ``Number of COmponents'' (\\texttt{NCO}) control\nfunction obtains the \\texttt{NCO} vector of the \\texttt{arg}. \nThe \\texttt{NCO} vector of a scalar is the scalar \\texttt{1}.\nFor any argument the \\texttt{NCO(NCO(arg))} gives the number \nof dimensions of the \\texttt{arg}.\n\n\\subsection*{Using the \\texttt{NCO} command}\n\\begin{alltt}\n SIGMA > \\Ucom{ x=array(4&3&2,array(24,2#48))}\n NCO(X       )=    4    3    2\n X       =\n   2.000      4.000      6.000      8.000\n   10.00      12.00      14.00      16.00\n   18.00      20.00      22.00      24.00\n \n   26.00      28.00      30.00      32.00\n   34.00      36.00      38.00      40.00\n   42.00      44.00      46.00      48.00\n \n SIGMA > \\Ucom{ r=nco(x)}\n NCO(R       )=    3\n R         4.000      3.000      2.000\n SIGMA > \\Ucom{ ndim=nco(nco(x))}\n NCO(NDIM    )=    1\n NDIM      3.000\n\\end{alltt}\n\n\\PAWfdef{ORDER}{R=ORDER(arg1,arg2)}\n \nThe ordering function \\texttt{ORDER} acts independently on each row\nof \\texttt{arg1}. \\texttt{arg2} must have the same row length as \\texttt{arg1}.\n \n\\texttt{ORDER} finds the permutation that\nbrings \\texttt{arg2} into a non-descending sequence\n(row-wise) and constructs the result by applying \nthis permutation to \\texttt{arg1}. \nIt may in some cases be expanded to that structure by \nusing the techniques of the topological arithmetic. 
\nThis is particularly useful if \\texttt{arg2} is\na single vector with the length of the rows of \\texttt{arg1}.\n\n\\subsection*{Using the \\texttt{ORDER} command}\n\\begin{alltt}\n SIGMA > \\Ucom{X = 1&1&2&4&-3&1&3}\n NCO(X       )=    7\n X       =\n   1.00      1.00      2.00      4.00     -3.00      1.00      3.00\n SIGMA > \\Ucom{P = ORDER(X,X)}\n NCO(P       )=    7\n P       =\n  -3.00      1.00      1.00      1.00      2.00      3.00      4.00\n SIGMA > \\Ucom{P = ORDER(X,-X)}\n NCO(P       )=    7\n P       =\n   4.00      3.00      2.00      1.00      1.00      1.00     -3.00\n SIGMA > \\Ucom{Y = ARRAY(7,1# 7)}\n NCO(Y       )=    7\n Y       =\n   1.00      2.00      3.00      4.00      5.00      6.00      7.00\n SIGMA > \\Ucom{P = ORDER(Y,X)}\n NCO(P       )=    7\n P       =\n   5.00      1.00      2.00      6.00      3.00      7.00      4.00\n\\end{alltt}\n\n\\PAWfdef{PROD}{R=PROD(arg)}\n \nThe \\texttt{PROD} function generates the {\\bf running product}\nof each row of the argument\narray, say \\( X_1,\\: X_2,\\ldots,\\:X_n \\)\nand creates an array with components equal to the\nrunning product of the components of the argument:\n\\(X_1,\\: X_1 \\times X_2,\\: \\ldots, \\: X_1 \\times X_2 \\times \\ldots \\times X_n \\)\n\n\\subsection*{Using the \\texttt{PROD} command}\n\\begin{alltt}\n  SIGMA > \\Ucom{x=array(6&4,array(24,1#24))}\n  NCO(X       )=    6    4\n  X       =\n    1.000      2.000      3.000      4.000      5.000      6.000\n    7.000      8.000      9.000      10.00      11.00      12.00\n    13.00      14.00      15.00      16.00      17.00      18.00\n    19.00      20.00      21.00      22.00      23.00      24.00\n \n  SIGMA > \\Ucom{y=prod(x)}\n  NCO(Y       )=    6    4\n  Y       =\n    1.000      2.000      6.000      24.00      120.0      720.0\n    7.000      56.00      504.0      5040.     0.5544E+05 0.6653E+06\n    13.00      182.0      2730.     0.4368E+05 0.7426E+06 0.1337E+08\n    19.00      380.0      7980.     0.1756E+06 0.4038E+07 0.9691E+08\n\\end{alltt}\n\n\\PAWfdef{QUAD}{R=QUAD(arg1,arg2)}\n\nThe {\\bf quadrature function} \\texttt{QUAD} numerically integrates each row of\n\\texttt{arg1} with respect to the scalar step size \\texttt{h} defined by \\texttt{arg2}.\n \nThe result \\texttt{R} has the same dimension as \\texttt{arg1} \nand the integration constant is\nfixed by choosing the first point of the result to be zero.\n \nThe method uses a four-point forward and backward one-strip-formula based\non Lagrange interpolation. 
We have for the first point of the result:\n\\[\nR_1 = \\int_{ x_1 }^{ x_1 } ( arg1 ) \\mathrm{d}x = 0\n\\]\n \nfor the second and third points\n\\[\nR_{i+1} = R_i + \\frac{h}{24} ( 9 f_i + 19 f_{i+1} - 5 f_{i+2} + f_{i+3} )\n\\]\n \nand for all subsequent points\n\\[\nR_i = R_{i-1} + \\frac{h}{24} ( f_{i-3} - 5 f_{i-2} + 19 f_{i-1} + 9 f_i )\n\\]\n \nwhere the \\( f_i \\) are elements of \\texttt{arg1} and are assumed to be values of\nsome function evaluated at equidistant intervals\nwith interval width equal to \\texttt{h} (\\texttt{h} being\nequal to the value of \\texttt{arg2}).\n\n\\begin{figure}\n\\begin{minipage}{.33\\textwidth}\n\\begin{alltt}\nSIGMA > *********************\nSIGMA > * SIGMA application *\nSIGMA > *  showing use of   *\nSIGMA > *   QUAD numeric    *\nSIGMA > *   integration     *\nSIGMA > *********************\nSIGMA > \\Ucom{x=array(101,0#2*pi)}\nSIGMA > * Function value array\nSIGMA > \\Ucom{y=sin(x)}\nSIGMA > * Step size\nSIGMA > \\Ucom{dx=0.6283186E-01}\nSIGMA > \\Ucom{print dx}\n NCO(DX      )=    1\n DX       0.6283186E-01\nSIGMA > * Integration of SIN(X)\nSIGMA > * in interval 0<=X<+2*PI\nSIGMA > \\Ucom{f=quad(y,dx)}\nSIGMA > * Analytical result\nSIGMA > * is   1-COS(X)\nSIGMA > \\Ucom{g=1-cos(x)}\nSIGMA > * Compute the difference\nSIGMA > \\Ucom{erro=(g-f)*10**6}\nSIGMA > * Plot the difference\nSIGMA > *  in units of \\(10\\sp{-6}\\)\nSIGMA > \\Ucom{exit}\nPAW > \\Ucom{opt GRID}\nPAW > \\Ucom{gra 101 x erro}\n\\end{alltt}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.66\\textwidth}\n\\begin{center}\\mbox{\\epsfig{file=sigexa1.eps,width=.95\\linewidth}}\\end{center}\n\\end{minipage}\n\n\\caption{Using numerical integration with SIGMA}\n\\label{fig:SIGEXA1}\n\\end{figure}\n\n\\PAWfdef{SUMV}{R=SUMV(arg)}\n \nThe \\texttt{SUMV} function generates the {\\bf running summation}\nof each row of the argument\narray, say \n\\(X_1,\\:X_2,\\:\\ldots,\\)\n\\(X_i,\\:\\ldots,\\:X_n\\)\nand creates an array with components equal to the running sum of the\n\\( X_i \\) namely: \n\\( X_1,\\: X_1 + X_2 ,\\)\n\\(\\ldots, \\: X_1 + X_2 + \\ldots + X_i, \\:\n                             \\ldots, \\: X_1 + X_2 + \\ldots + X_n \\).\n\n\\subsection*{Using the \\texttt{SUMV} function}\n\\begin{alltt}\n  SIGMA > \\Ucom{ x=array(6&4,array(24,1#24))}\n  NCO(X       )=    6    4\n  X       =\n    1.000      2.000      3.000      4.000      5.000      6.000\n    7.000      8.000      9.000      10.00      11.00      12.00\n    13.00      14.00      15.00      16.00      17.00      18.00\n    19.00      20.00      21.00      22.00      23.00      24.00\n \n  SIGMA > \\Ucom{y=sumv(x)}\n  NCO(Y       )=    6    4\n  Y       =\n    1.000      3.000      6.000      10.00      15.00      21.00\n    7.000      15.00      24.00      34.00      45.00      57.00\n    13.00      27.00      42.00      58.00      75.00      93.00\n    19.00      39.00      60.00      82.00      105.0      129.0\n\\end{alltt}\n \n\\PAWfdefii{VMAX}{R=VMAX(arg)}{VMIN}{R=VMIN(arg)}\n \nThe functions \\texttt{VMAX} and \\texttt{VMIN}\nreturn a scalar equal to the largest or smallest element of the array \\texttt{arg}.\n\n\\PAWfdef{VSUM}{R=VSUM(arg1)}\n \nThe \\texttt{VSUM} function generates the {\\bf sum}\nof all elements of the argument\narray, say \n\\( X_1,\\:X_2,\\:\\ldots,\\:X_i,\\:\\ldots,\\:X_n \\)\nand creates a scalar whose value is equal to \nthe sum of all the components of \\( X \\), namely:\n\\( X_1 + X_2 + \\ldots + X_n \\)\n\n\\subsection*{Using the \\texttt{VSUM} function}\n\\begin{alltt}\n SIGMA 
>\\Ucom{ x=array(10)}\n NCO(X       )=   10\n X       =\n   1.00      1.00      1.00      1.00      1.00      1.00      1.00\n   1.00      1.00      1.00\n \n SIGMA >\\Ucom{ r=vsum(x)}\n NCO(R       )=    1\n R         10.0\n\\end{alltt}\n\n\\section{Available library functions}\n\\index{library functions in SIGMA}\n\\index{SIGMA!library functions}\n \nThe library functions available under SIGMA are\nlisted below. All these functions have a single\nargument, unless otherwise indicated.\nThe number indicated between parentheses corresponds to the\nnumber of the same function in the CERN program library.\n\n{\\small\n\\begin{DLttc}{1234567}\n\\item[ABS]     ABSolute value\n\\item[ACOS]    ArCOSine\n\\item[ALOGAM]  LOGarithm of the GAMma Function (C341)\n\\item[ASIN]    ArcSINe\n\\item[ATAN]    ArcTANgent\n\\item[ATAN2]   ArcTANgent2 (2 arguments)\n\\item[BESI0]   Mod. Bessel Function I0 (C313)\n\\item[BESI1]   Mod. Bessel Function I1 (C313)\n\\item[BESJ0]   Bessel Function J0 (C312)\n\\item[BESJ1]   Bessel Function J1 (C312)\n\\item[BESK0]   Mod. Bessel Function K0 (C313)\n\\item[BESK1]   Mod. Bessel Function K1 (C313)\n\\item[BESY0]   Bessel Function Y0 (C312)\n\\item[BESY1]   Bessel Function Y1 (C312)\n\\item[COS]     COSine\n\\item[COSH]    Hyperbolic COSine\n\\item[COSINT]  COSine INTegral (C336)\n\\item[DILOG]   DILOGarithm Function (C304)\n\\item[EBESI0]  \\(\\exp(-\\left|x\\right|)I_0(x)\\) (C313)\n\\item[EBESI1]  \\(\\exp(-\\left|x\\right|)I_1(x)\\) (C313)\n\\item[EBESK0]  \\(\\exp(x)K_0(x)\\) (C313)\n\\item[EBESK1]  \\(\\exp(x)K_1(x)\\) (C313)\n\\item[ELLICK]  Complete Elliptic Integral K (C308)\n\\item[ELLICE]  Complete Elliptic Integral E (C308)\n\\item[ERF]     Error Function ERF (C300)\n\\item[ERFC]    Error Function ERFC (C300)\n\\item[EXP]     EXPonential\n\\item[EXPINT]  EXPonential INTegral (C337)\n\\item[FREQ]    Normal Frequency Function FREQ (C300)\n\\item[GAMMA]   GAMMA Function (C305)\n\\item[INT]     Takes INTegral part of decimal number\n\\item[LOG]     Natural LOGarithm\n\\item[LOG10]   Common LOGarithm\n\\item[MOD]     Remaindering\n\\item[RNDM]    Random Number Generator: \\texttt{V1=RNDM(V)}, with\n               \\texttt{NCO(V1)=NCO(V)} generates random numbers \n               between \\texttt{0} and \\texttt{1}.\n\\item[SIGN]    Transfer of SIGN: \\texttt{V2=SIGN(V,V1)}, \n               \\texttt{V2=|V|*V1/|V1|}\n\\item[SIN]     SINe Function\n\\item[SINH]    Hyperbolic SINe\n\\item[SININT]  SINe INTegral (C336)\n\\item[SQRT]    SQuare RooT\n\\item[TAN]     TANgent\n\\item[TANH]    Hyperbolic Tangent\n\\end{DLttc}\n}%%% end of \\small\n\\fbox{Ill defined functions will return \\texttt{0.} as result.\n(e.g. 
\\texttt{SQRT} of a negative number is taken as \\texttt{0}).}\n\\index{SIGMA|)}\n", "meta": {"hexsha": "26d611dec2afc7621a09e27e09098f1103269385", "size": 29181, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paw/pawch6.tex", "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_issues_repo_path": "paw/pawch6.tex", "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paw/pawch6.tex", "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2427536232, "max_line_length": 96, "alphanum_fraction": 0.5892532812, "num_tokens": 10394, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117983401363, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5726153047283422}}
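\nFor readers who want to reproduce the \\texttt{QUAD} behaviour outside of SIGMA, the following is a minimal, self-contained C++ sketch of the four-point one-strip rule described above. It is our illustration, not part of PAW; the function and variable names are ours, and it assumes at least four equidistant samples. The \\texttt{main} mirrors the SIN(X) example of the figure.\n\\begin{verbatim}\n#include <cmath>\n#include <cstdio>\n#include <vector>\n\n// Running integral of equidistant samples f[0..n-1] with step h,\n// following the QUAD formulas above (0-indexed; requires n >= 4).\nstd::vector<double> quad(const std::vector<double>& f, double h)\n{\n    const std::size_t n = f.size();\n    std::vector<double> r(n, 0.0);          // r[0] = 0 fixes the constant\n    for (std::size_t i = 0; i < 2 && i + 3 < n; ++i)  // 2nd and 3rd points\n        r[i + 1] = r[i] + h / 24.0 * (9*f[i] + 19*f[i+1] - 5*f[i+2] + f[i+3]);\n    for (std::size_t i = 3; i < n; ++i)     // all subsequent points\n        r[i] = r[i - 1] + h / 24.0 * (f[i-3] - 5*f[i-2] + 19*f[i-1] + 9*f[i]);\n    return r;\n}\n\nint main()\n{\n    const int n = 101;\n    const double pi = std::acos(-1.0);\n    const double h = 2.0 * pi / (n - 1);    // same grid as the SIGMA example\n    std::vector<double> y(n);\n    for (int i = 0; i < n; ++i) y[i] = std::sin(i * h);\n    std::vector<double> r = quad(y, h);\n    // compare with the analytic result 1 - COS(X) at x = pi\n    std::printf(\"%g vs %g\\n\", r[50], 1.0 - std::cos(50 * h));\n    return 0;\n}\n\\end{verbatim}\n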
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{commath}\n\\usepackage{enumerate}\n\\usepackage{lmodern}\n\\usepackage{microtype}\n\\usepackage{fullpage}\n\\usepackage{graphicx}\n\n\\title{STAT 527: Assignment \\#1}\n\\author{Philip Pham}\n\\date{\\today}\n\n\\DeclareMathOperator{\\E}{E}\n\\DeclareMathOperator{\\Prob}{P}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Problem 1}\n\n\\begin{enumerate}[(a)]\n\\item\n  \\begin{proof}\n    This follows from the definition of variance after some algebra.\n    \\begin{align*}\n      &\\E\\left[\n      \\left(\\hat{f}(x) - f(x)\\right)^2\n        \\right]\n        = \\E\\left[\n        \\left[\\left(\\hat{f}(x) - \\E\\left[\\hat{f}(x)\\right]\\right)\n        +\n        \\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)\n        \\right]^2\n        \\right] \\\\\n      &=\n        \\E\\left[\\left(\\hat{f}(x) - \\E\\left[\\hat{f}(x)\\right]\\right)^2\\right]\n        +\n        2\\E\\left[\n        \\left(\\hat{f}(x) - \\E\\left[\\hat{f}(x)\\right]\\right)\n        \\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)\n        \\right]\n        +\n        \\E\\left[\\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)^2\\right]\n      \\\\\n      &= \\operatorname{var}\\left(\\hat{f}(x)\\right)\n\t+ \\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)^2\n        + 2\\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)\\E\\left[\n        \\left(\\hat{f}(x) - \\E\\left[\\hat{f}(x)\\right]\\right)        \n        \\right] \\\\\n      &= \\left(\\E\\left[\\hat{f}(x)\\right] - f(x)\\right)^2 +\n        \\operatorname{var}\\left(\\hat{f}(x)\\right).\n    \\end{align*}\n  \\end{proof}\n\\item\n  \\begin{proof}    \n    The prediction error is\n    \\begin{align*}\n      &\\E\\left[\n      \\left(y_{\\text{new}} - \\hat{f}\\left(x_{\\text{new}}\\right)\\right)^2\n      \\right]      \n      = \\E\\left[\n        \\left(f\\left(x_{\\text{new}}\\right) + \\epsilon_\\text{new} -\n        \\hat{f}\\left(x_{\\text{new}}\\right)\\right)^2\n        \\right] \\\\\n      &= \\E\\left[\n        \\left(\\epsilon_\\text{new} +\n        \\left(f\\left(x_{\\text{new}}\\right) -\n        \\hat{f}\\left(x_{\\text{new}}\\right)\\right)\\right)^2\n        \\right] \\\\\n      &= \\E\\left[\\epsilon_\\text{new}^2\\right]\n        + 2\\E\\left[\\epsilon_\\text{new}\\left(f\\left(x_{\\text{new}}\\right) -\n        \\hat{f}\\left(x_{\\text{new}}\\right)\\right)\\right]\n        + \\left(\\E\\left[\\hat{f}(x_\\text{new})\\right] -\n        f(x_\\text{new})\\right)^2 +\n        \\operatorname{var}\\left(\\hat{f}\\left(x_\\text{new}\\right)\\right) \\\\\n      &= \\sigma^2 +\n        \\left(\\E\\left[\\hat{f}(x_\\text{new})\\right] -\n        f(x_\\text{new})\\right)^2 +\n        \\operatorname{var}\\left(\\hat{f}\\left(x_\\text{new}\\right)\\right).\n    \\end{align*}\n\n    There is an additional $\\sigma^2$ term to account for the variance of our\n    new observation.\n  \\end{proof}\n\\item\n  \\begin{proof}\n    \\begin{align*}\n      a \\geq \\E\\left[L\\right]\n      &= \\int_0^\\infty t \\,\\dif L(t) \\\\\n      &= \\int_0^{a/\\epsilon} t \\,\\dif L(t) + \\int_{a/\\epsilon}^\\infty t \\,\\dif L(t) \\\\\n      &\\geq \\int_0^{a/\\epsilon} t \\,\\dif L(t) + \\frac{a}{\\epsilon}\\int_{a/\\epsilon}^\\infty \\,\\dif L(t) \\\\\n      &\\geq \\frac{a}{\\epsilon}\\int_{a/\\epsilon}^\\infty \\,\\dif L(t) = \\frac{a}{\\epsilon}\\Prob\\left(L > \\frac{a}{\\epsilon}\\right).\n    \\end{align*}\n\n    
Thus, we have that\n    \\begin{equation*}\n      \\frac{a}{\\epsilon}\\Prob\\left(L > \\frac{a}{\\epsilon}\\right) \\leq a\n      \\Leftrightarrow\n      \\Prob\\left(L > \\frac{a}{\\epsilon}\\right) \\leq \\epsilon\n      \\Leftrightarrow\n      \\Prob\\left(\\frac{L}{a} > \\frac{1}{\\epsilon}\\right) \\leq \\epsilon.\n    \\end{equation*}\n  \\end{proof}\n\\item\n  \\begin{proof}\n    Note that $y = X\\beta + \\epsilon$, where\n    $\\epsilon \\sim \\operatorname{N}\\left(0, \\sigma^2I\\right)$, so\n    \\begin{align*}\n      \\hat{\\beta}\n      &= \\left(X^\\top X\\right)^{-1}X^\\top y \\\\\n      &= \\left(X^\\top X\\right)^{-1}X^\\top \\left(X\\beta + \\epsilon\\right) \\\\\n      &= \\beta + \\left(X^\\top X\\right)^{-1}X^\\top\\epsilon \\\\\n      \\hat{\\beta} - \\beta &= \\left(X^\\top X\\right)^{-1}X^\\top\\epsilon.\n    \\end{align*}\n\n    Thus, we have that\n    \\begin{align*}\n      \\E\\left[\\hat{\\beta} - \\beta\\right]\n      &= \\E\\left[\\left(X^\\top X\\right)^{-1}X^\\top\\epsilon\\right]\n        = \\E\\left[\\left(X^\\top X\\right)^{-1}X^\\top\\right]\n        \\E\\left[\\epsilon\\right] = 0 \\\\\n      \\operatorname{var}\\left(\\hat{\\beta} - \\beta\\right)\n      &=\n        \\E\\left[\\left(X^\\top X\\right)^{-1}X^\\top\\epsilon\\epsilon^\\top X\n        \\left(X^\\top X\\right)^{-1}\\right]\n        = \\sigma^2\\E\\left[\\left(X^\\top X\\right)^{-1}\\right].\n    \\end{align*}\n    \n    If the design were fixed ($\\left\\{x_i\\right\\} \\subset \\mathbb{R}^p$ were\n    deterministic), then we simply have\n    $\\operatorname{var}\\left(\\hat{\\beta} - \\beta\\right) = \\sigma^2\\left(X^\\top\n      X\\right)^{-1}$, so\n    \\begin{equation*}\n      \\hat{\\beta} - \\beta \\sim \\operatorname{N}\\left(0, \\sigma^2\\left(X^\\top X\\right)^{-1}\\right).\n    \\end{equation*}\n    \n\n    With a random design, we appeal to Slutsky's theorem.\n    $X^\\top X = \\sum_{i=1}^n x_i x_i^\\top$, so by the strong law of large numbers\n    $\\left(X^\\top X\\right)/n \\xrightarrow{\\text{a.s.}} \\Sigma$, where $\\Sigma$\n    is a constant matrix.\n\n    $X^\\top\\epsilon = \\sum_{i=1}^n x_i\\epsilon_i$, where the $x_i\\epsilon_i$ are\n    i.i.d. 
and $\\operatorname{var}\\left(x_i\\epsilon_i\\right) = \\sigma^2\\Sigma$.\n\n    By the CLT, we have that\n    \\begin{equation*}\n      \\frac{X^\\top\\epsilon}{\\sqrt{n}}\n        \\xrightarrow{d} \\operatorname{N}\\left(0, \\sigma^2\\Sigma\\right).\n    \\end{equation*}\n\n    So, we can now apply Slutsky's theorem to show that\n    \\begin{align*}\n      \\sqrt{n}\\left(\\hat{\\beta} - \\beta\\right)\n      =\n      \\left(\\frac{X^\\top X}{n}\\right)^{-1}\\frac{X^\\top\\epsilon}{\\sqrt{n}}\n      \\xrightarrow{d} \\Sigma^{-1}\\operatorname{N}\\left(0, \\sigma^2\\Sigma\\right)\n      &= \\operatorname{N}\\left(0, \\sigma^2\\Sigma^{-1}\\Sigma\\Sigma^{-1}\\right) \\\\\n      &= \\operatorname{N}\\left(0, \\sigma^2\\Sigma^{-1}\\right)\n    \\end{align*}\n    as desired.\n  \\end{proof}\n\\end{enumerate}\n\n\\section*{Problem 2}\n\n\\begin{enumerate}[(a)]\n\\item $f(x) = 2x$\n  \\begin{figure}[h]\n    \\centering\n    \\includegraphics{mse_linear.pdf}\n  \\end{figure}\n\\item $f(x) = \\sin(\\pi x)$\n    \\begin{figure}[h]\n      \\centering\n      \\includegraphics{mse_sin.pdf}\n    \\end{figure}\n    \\pagebreak\n  \\item $f(x) = 2x + x^3 - 6x^4$\n    \\begin{figure}[h!]\n      \\centering\n      \\includegraphics{mse_polynomial.pdf}\n    \\end{figure}\n\n  \\item $f(x) = 1/\\left(1 + (5x)^2\\right)$\n    \\begin{figure}[h!]\n      \\centering\n      \\includegraphics{mse_inverse.pdf}\n    \\end{figure}\n  \\end{enumerate}\n\n  $n = 1000$ and 100 simulations were run for each case. Almost all the\n  estimators achieve the optimal MSE of 1 when $f$ is linear.\n\n  When $f$ is sinusoidal, the linear and degree-2 polynomial estimators can no longer model\n  the data.\n\n  When $f$ is a higher-degree polynomial, the linear estimator does\n  terribly. Unsurprisingly, the degree-4 and degree-5 polynomials do the best.\n\n  When $f$ is an inverse polynomial function, increasing the degree of the\n  polynomial estimator helps, but only the non-parametric models are able to\n  achieve the theoretical best error of 1.\n\\end{document}\n", "meta": {"hexsha": "c8d2b596d1e7da42dfb848a84228688eb74983a8", "size": 7026, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw1/hw1_solutions.tex", "max_stars_repo_name": "ppham27/stat527", "max_stars_repo_head_hexsha": "f96f606b7f141e034698ff59891df309b850f95f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw1/hw1_solutions.tex", "max_issues_repo_name": "ppham27/stat527", "max_issues_repo_head_hexsha": "f96f606b7f141e034698ff59891df309b850f95f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw1/hw1_solutions.tex", "max_forks_repo_name": "ppham27/stat527", "max_forks_repo_head_hexsha": "f96f606b7f141e034698ff59891df309b850f95f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.6172248804, "max_line_length": 128, "alphanum_fraction": 0.5768573868, "num_tokens": 2530, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8267117898012104, "lm_q1q2_score": 0.5726153040590698}}
{"text": "\\subsection{BinaryFunction Class}\n\\label{sec:binaryFunction}\n\nA \\code{BinaryFunction} object represents a function that can combine two \\code{Expression}s\nand produce another \\code{ValueComputation}.\n\nFor the purposes of representing a single operand of an instruction, the\n\\code{BinaryFunction}s of interest are addition and multiplication of integer values;\nthis allows an \\code{Expression} to represent all addressing modes on the architectures\ncurrently supported by the Instruction API. \n\n\\begin{apient}\nBinaryFunction(Expression::Ptr arg1,\n               Expression::Ptr arg2,\n               Result_Type result_type,\n               funcT:Ptr func)\n\\end{apient}\n\\apidesc{\nThe constructor for a \\code{BinaryFunction} may take a reference-\\/counted pointer or a plain C++ pointer to each of the child \\code{Expression}s that represent its arguments. Since the reference-\\/counted implementation requires explicit construction, we provide overloads for all four combinations of plain and reference-\\/counted pointers. Note that regardless of which constructor is used, the pointers \\code{arg1} and \\code{arg2} become owned by the \\code{BinaryFunction} being constructed, and should not be deleted. They will be cleaned up when the \\code{BinaryFunction} object is destroyed.\n\nThe \\code{func} parameter is a binary functor on two \\code{Result}s. It should be derived from \\code{funcT}. \\code{addResult} and \\code{multResult}, which respectively add and multiply two \\code{Result}s, are provided as part of the InstructionAPI, as they are necessary for representing address calculations. Other \\code{funcTs} may be implemented by the user if desired. \\code{funcTs} have names associated with them for output and debugging purposes. The addition and multiplication functors provided with the Instruction API are named \"+\" and \"*\", respectively. \n}\n\n\\begin{apient}\nconst Result & eval () const\n\\end{apient}\n\\apidesc{\nThe \\code{BinaryFunction} version of \\code{eval} allows the \\code{eval} mechanism to handle complex addressing\nmodes. Like all of the \\code{ValueComputation} implementations, a \\code{BinaryFunction}'s \\code{eval} will return the\nresult of evaluating the expression it represents if possible, or an empty \\code{Result} otherwise. A \\code{BinaryFunction} may have arguments that can be evaluated, or arguments that cannot. Additionally, it\nmay have a real function pointer, or it may have a null function pointer. If the arguments can be\nevaluated and the function pointer is real, a result other than an empty \\code{Result} is guaranteed to\nbe returned. This result is cached after its initial calculation; the caching mechanism also allows\noutside information to override the results of the \\code{BinaryFunction}'s internal computation. 
If the\ncached result exists, it is guaranteed to be returned even if the arguments or the function are not\nevaluable.\n}\n\n\\begin{apient}\nvoid getChildren (vector< InstructionAST::Ptr > & children) const\n\\end{apient}\n\\apidesc{\nThe children of a \\code{BinaryFunction} are its two arguments.\nAppends the children of this BinaryFunction to \\code{children}.\n}\n\n\\begin{apient}\nvoid getUses (set< InstructionAST::Ptr > & uses) \n\\end{apient}\n\\apidesc{\nThe use set of a \\code{BinaryFunction} is the union of the use sets of its children.\nAppends the use set of this \\code{BinaryFunction} to \\code{uses}.\n}\n\n\\begin{apient}\nbool isUsed (InstructionAST::Ptr findMe) const \n\\end{apient}\n\\apidesc{\n\\code{isUsed} returns \\code{true} if \\code{findMe} is an argument of this \\code{BinaryFunction}, or if it is in the use set of either argument.\n}\n", "meta": {"hexsha": "73c0b6b417d3ae66a0d1b7bd45bd42855311a057", "size": 3580, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/BinaryFunction.tex", "max_stars_repo_name": "Vtech181/Path_Armor", "max_stars_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 47, "max_stars_repo_stars_event_min_datetime": "2015-10-14T23:12:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T11:23:59.000Z", "max_issues_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/BinaryFunction.tex", "max_issues_repo_name": "Vtech181/Path_Armor", "max_issues_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Dyninst-8.2.1/instructionAPI/doc/API/BinaryFunction.tex", "max_forks_repo_name": "Vtech181/Path_Armor", "max_forks_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2015-11-04T03:44:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-14T10:17:39.000Z", "avg_line_length": 58.6885245902, "max_line_length": 598, "alphanum_fraction": 0.7801675978, "num_tokens": 844, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118111485244, "lm_q2_score": 0.6926419704455589, "lm_q1q2_score": 0.5726152978645307}}
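\nAs an illustration of the constructor described above, the following C++ fragment sketches how an addressing mode such as base register plus displacement could be composed. This is our sketch, not part of the official documentation: the \\code{RegisterAST} and \\code{Immediate} expression classes, the \\code{Result} constructor and the register name are assumptions about the wider InstructionAPI rather than facts established in this section, and exact signatures may differ between Dyninst versions.\n\\begin{apient}\nusing namespace Dyninst::InstructionAPI;\n\n// base register and constant displacement as child Expressions\n// (RegisterAST and Immediate are assumed Expression subclasses)\nExpression::Ptr base(new RegisterAST(Dyninst::x86_64::rax));\nExpression::Ptr disp(new Immediate(Result(u32, 0x10)));\n\n// documented constructor; addResult is the provided \"+\" functor\nBinaryFunction::funcT::Ptr add(new BinaryFunction::addResult());\nExpression::Ptr addr(new BinaryFunction(base, disp, u32, add));\n\n// evaluable only once the register's value is known; otherwise an\n// empty Result is returned, as described for eval() above\nconst Result & r = addr->eval();\n\\end{apient}\n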
{"text": "\\section{Derivatives of the solutions of IVP with respect to data}\n\\subsection{Derivatives with respect to the initial values}\n\\begin{intro}\n  In order to understand BVP, we have to introduce the notion of the\n  derivative of the solution to an IVP with respect to its initial\n  values. Our interest lies in the change of $u$ if the initial value\n  is changed. We will thus denote in these cases $u=u(t;v)$, where $t$ is\n  the usual ``time'' variable and $v$ the initial value, which now is\n  a variable as well. Thus, $u(t;v)$ is the solution to the IVP\n  \\begin{gather}\n    \\label{eq:derivatives:1}\n    \\begin{split}\n      u'(t;v) = \\tfrac{\\partial}{\\partial t} u(t;v) &= f\\bigl(t,u(t;v)\\bigr) \\\\\n      u(t_0;v) &= v.\n    \\end{split}\n  \\end{gather}\n  The purpose of this section is the study of the derivative\n  \\begin{gather*}\n    \\tfrac{\\partial}{\\partial v} u(t;v),\n  \\end{gather*}\n  which is fundamentally different from $\\nicefrac{\\partial}{\\partial t}\n  u(t;v)$.\n  It can be obtained by solving the variational equation of the\n  original IVP, defined as follows.\n\\end{intro}\n\n\\begin{Definition}{gateaux-derivative}\n  Let $F(v)$ be a function defined on some function space. Then, the G\u00e2teaux\n  derivative of $F$ at a point $v$ in direction $w$ with $v$ and $w$\n  in this function space is defined as\n  \\begin{gather}\n    \\label{eq:derivatives:6}\n    \\frac{\\partial}{\\partial w} F(v)\n    = \\lim_{\\epsilon\\to 0} \\frac{F(v+\\epsilon w) - F(v)}{\\epsilon},\n  \\end{gather}\n  if this limit exists.\n  \n  Here, we have used the notation for directional derivatives in\n  $\\R^n$, since this is indeed the character of the G\u00e2teaux\n  derivative.\n  % Let $f$ be a function depending on another function $v$. Then, the\n  % \\define{Gateaux derivative} of $f$ at $v$ in direction $w$ is\n  % defined as\n  \n\\end{Definition}\n% \\begin{remark}\n%   In terms of linear \\putindex{perturbation theory} this section deals\n%   with the derivative of the solution $u(t)$ of the\n%   IVP~\\eqref{eq:IVP}, but not with respect to the argument $t$, but\n%   rather with respect to the initial values. That is why we will write\n%   the solution $u = u(t;s)$ of the IVP\n%   \\begin{gather}\n%     \\label{eq:derivatives:1}\n%     \\begin{split}\n%       u'(t;s) = \\tfrac{\\partial}{\\partial t} u(t;s) &= f\\bigl(t,u(t;s)\\bigr) \\\\\n%       u(t_0;s) &= s_0\n%     \\end{split}\n%   \\end{gather}\n%   with two arguments, the actual time variable $t$ and the initial value $s$.\n% \\end{remark}\n\n\\begin{Definition}{variational-equation}\n  The \\define{variational equation} to the first order system of\n  ODE\n  \\begin{gather*}\n    u' = f,\n  \\end{gather*}\n  of dimension $d$ is the linear matrix-valued system of ODE\n  \\begin{subequations}\n  %  \\label{eq:derivatives:1}\n  \\begin{gather}\n    \\label{eq:derivatives:2}\n      \\fundam' = \\nabla_u f\\bigl(t,u(t)\\bigr) \\fundam\n  \\end{gather}\n  for $d\\times d$ matrices $\\fundam$. Here is $u$ a solution of the\n  equation~\\eqref{eq:IVP:ode} and \n  \\begin{gather*}\n    \\nabla_u f(t,u) =\n    \\begin{pmatrix}\n      \\tfrac{\\partial f_1}{\\partial u_1} & \\cdots\n      & \\tfrac{\\partial f_1}{\\partial u_d} \\\\\n      \\vdots & & \\vdots \\\\\n      \\tfrac{\\partial f_d}{\\partial u_1} & \\cdots\n      &  \\tfrac{\\partial f_d}{\\partial u_d}\n    \\end{pmatrix}\n  \\end{gather*}\n  is the matrix of the derivatives of $f$ with respect to the\n  components of $u$. 
The \\define{fundamental matrix}\n  $\\fundamental(t;t_0)$\\index{Y@$\\fundamental(t;t_0)$} is the solution of\n  the IVP for this equation with\n  \\begin{gather}\n    \\label{eq:derivatives:3}\n    \\fundam(t_0) = \\identity.\n  \\end{gather}\n  \\end{subequations}\n\\end{Definition}\n\n\\begin{remark}\n  The fundamental matrix $\\fundam$ can also be read column\n  by column. Then each column is a vector-valued function\n  $\\phi^{(i)}(t)$ and solves the IVP\n  \\begin{gather*}\n    \\begin{split}\n      \\tfrac{d}{dt}\\phi^{(i)}(t) &= \\nabla_u\n      f\\bigl(t,u(t)\\bigr)\\phi^{(i)}(t),\n      \\\\\n      \\phi^{(i)}(t_0) &= e_i.\n    \\end{split}\n  \\end{gather*}\n\\end{remark}\n\n\\begin{remark}\n  The definition of the fundamental matrix here is consistent with the\n  one in Definition~\\ref{Definition:fundamental-system} for linear\n  equations. Namely, for $f(u) = Au$, we have $\\nabla_u f(u) = A$.\n\\end{remark}\n\n\\input{theorems/fundamental}\n\n\\begin{proof}\n  In order to prove the first equation, denote by $V(\\tau;s)$ the\n  solution to the IVP\n  \\begin{gather*}\n    V'(\\tau;s) = \\nabla_u f(\\tau,u(\\tau)) V(\\tau;s),\n    \\qquad V(s;s) = \\fundamental(s;t).\n  \\end{gather*}\n  Because of uniqueness, we must have\n  $V(\\tau;s) = \\fundamental(\\tau;t)$ for any $\\tau$ between $s$ and\n  $t$, in particular for $\\tau=t$, such that $V(t;s) = \\fundamental(t;t) = \\identity$. On the\n  other hand, by linearity, we have\n  $V(\\tau;s) = \\fundamental(\\tau;s)\\fundamental(s;t)$, and thus the\n  equation is proven by\n  \\begin{gather*}\n    \\identity = V(t;s) = \\fundamental(t;s)\\fundamental(s;t).\n  \\end{gather*}\n\n  Now, assume without loss of generality that $s$ is between $r$ and\n  $t$. Indeed, if for instance $t$ is between $r$ and $s$, multiply\n  equation~\\eqref{eq:derivatives:4} from the left by $\\fundamental(s;t)$ and\n  prove the equation for\n  \\begin{gather*}\n    \\fundamental(s;t)\\fundamental(t;r)\n    = \\fundamental(s;t)\\fundamental(t;s)\\fundamental(s;r)\n    = \\fundamental(s;r).\n  \\end{gather*}\n  Take the auxiliary function $V(\\tau;s)$ as defined above. By\n  uniqueness, it is equal to $\\fundamental(\\tau;t)$ for all $\\tau$. 
But, on the\n  other hand, we have by linearity $V(\\tau;s) = \\fundamental(\\tau;s)\\fundamental(s;r)$, in\n  particular for $\\tau=t$.\n\n  The statement follows from the definition as a solution of an IVP\n  and the fact that solutions of linear IVP are linear combinable.\n\\end{proof}\n\n\\input{theorems/derivative}\n\n\\begin{proof}\n  We write the IVP in its full dependence on $v$ as\n  \\begin{align*}\n    \\frac{\\partial u(t;v)}{\\partial t} &= f\\bigl(t,u(t;v)\\bigr) \\\\\n    u(t_0;v) &= v.\n  \\end{align*}\n  From the second equation, we immediately obtain\n  \\begin{gather*}\n    \\frac{\\partial u(t_0;v)}{\\partial v} = \\identity.\n  \\end{gather*}\nAssuming differentiability of $f$ with respect to $u$, the first\nequation yields\n\\begin{gather*}\n  \\frac{\\partial}{\\partial v}\\frac{\\partial u(t;v)}{\\partial t}\n  = \\frac{\\partial f\\bigl(t,u(t;v)\\bigr)}{\\partial v}\n  = \\nabla_u f(t,u(t;v)) \\frac{\\partial u(t;v)}{\\partial v}.\n\\end{gather*}\nThus, $\\tfrac{\\partial}{\\partial v} u$ solves the\nIVP~\\eqref{eq:derivatives:1}\n\\begin{gather*}\n  \\left( \\frac{\\partial}{\\partial v} u \\right)'\n  = \\nabla_u f(t,u(t;v)) \\frac{\\partial u(t;v)}{\\partial v}.\n\\end{gather*}\n$\\tfrac{\\partial}{\\partial v} u$ solves the same IVP as the\nfundamental matrix and thus, they coincide.\n\\end{proof}\n\n\\subsection{Derivatives with respect to the right hand side function}\n\\begin{intro}\n  We close this section by studying the differential dependence of the\n  solution $u(t)$ of an ODE at time $t$ on the function $f(t,u)$, that\n  is, the derivative of a value with respect to a function. In order\n  to keep things simple, we reduce this question to a regular\n  derivative of a function with respect to a real variable by using\n  the G\u00e2teaux derivative.\n  Back to differential equations, our task is now to compute the\n  derivative of $u(t)$ with respect to changes in $f$, denoted as\n  \\begin{gather}\n    \\label{eq:derivatives:7}\n    \\frac{\\partial}{\\partial g} u(t)\n    = \\lim_{\\epsilon\\to 0} \\frac{u_\\epsilon-u}{\\epsilon}\n    = \\left.\\frac{d}{d\\epsilon} u_\\epsilon(t)\\right|_{\\epsilon=0},\n  \\end{gather}\n  where $u$ and $u_\\epsilon$ respectively solve the IVPs\n  \\begin{xalignat*}{2}\n    u'&=f(t,u) & u(t_0)& = u_0 \\\\\n    u_\\epsilon'&=f(t,u_\\epsilon)+\\epsilon g(t,u_\\epsilon)\n    & u_\\epsilon(t_0)& = u_0.\n  \\end{xalignat*}\n  For this derivative, we have the following theorem.\n\\end{intro}\n\n\\input{theorems/derivative-f.tex}\n\n\\begin{proof}\n   We set out by devising a differential equation for the G\u00e2teaux\n  derivative $\\mathcal U(t) := \\left.\\frac{d}{d\\epsilon} u_\\epsilon(t)\\right|_{\\epsilon=0}$. 
The differential equation for $u$ yields\n  \\begin{align*}\n    \\mathcal U'(t) =& \\left( \\left.\\frac{d}{d\\epsilon} u_\\epsilon(t)\\right|_{\\epsilon=0} \\right)' \\\\\n    =& \\left.\\frac{d}{d\\epsilon} u_\\epsilon ' (t)\\right|_{\\epsilon=0} \\\\\n    =& \\left.\\frac{d}{d\\epsilon}\n    \\Bigl(f\\bigl(t,u_\\epsilon(t)\\bigr)\n    + \\epsilon g\\bigl(t,u_\\epsilon(t)\\bigr)\\Bigr) \\right|_{\\epsilon=0} \\\\\n    =& \\left.\\nabla_u f(t,u_\\epsilon) \\frac{d}{d\\epsilon} u_\\epsilon(t)\n     + \\epsilon \\nabla_u g\\bigl(t,u_\\epsilon(t)\\bigr) \\frac{d}{d\\epsilon} u_\\epsilon(t) \n    + g\\bigl(t,u_\\epsilon(t)\\bigr) \\right|_{\\epsilon=0}\n  \\\\\n    =& \\left.\\nabla_u f(t,u_\\epsilon) \\mathcal U(t)\n    + g\\bigl(t,u_\\epsilon(t)\\bigr) \\right|_{\\epsilon=0}.\n  \\end{align*}\n  Furthermore, we have\n  \\begin{gather*}\n     \\mathcal U(t_0) = \\frac{d}{d\\epsilon} u_\\epsilon(t_0) = 0.\n  \\end{gather*}\n\n  According to Lemma~\\ref{Lemma:linear-representation}, the solution\n  of this initial value problem can be represented with the\n  integrating factor $M(t)$ as\n  \\begin{gather*}\n    \\mathcal U(t) = M^{-1}(t) \\int_{t_0}^t M(s) g\\bigl(s,u(s)\\bigr)\n    \\, \\ds.\n  \\end{gather*}\n  Noticing that $M(\\tau)^{-1} = \\fundamental(\\tau;t_0)$, we obtain\n  \\begin{gather*}\n    \\mathcal U(t) = \\int_{t_0}^t \\fundamental(t;s) g\\bigl(s,u(s)\\bigr) \\, \\ds.\n\\end{gather*}\n\\end{proof}\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"notes\"\n%%% End: \n", "meta": {"hexsha": "4cd26869e448f64f7634778b2158fd2f56c5a882", "size": 9327, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ode/derivatives.tex", "max_stars_repo_name": "ahumanita/notes", "max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ode/derivatives.tex", "max_issues_repo_name": "ahumanita/notes", "max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-05-24T07:31:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-31T12:58:14.000Z", "max_forks_repo_path": "ode/derivatives.tex", "max_forks_repo_name": "ahumanita/notes", "max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-05-15T19:28:53.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-05T19:07:29.000Z", "avg_line_length": 37.308, "max_line_length": 133, "alphanum_fraction": 0.6575533398, "num_tokens": 3166, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455588, "lm_q2_score": 0.8267117940706735, "lm_q1q2_score": 0.5726152860356943}}
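\nAs a concrete illustration of the results above (our example, not part of the original notes), consider the scalar linear IVP\n\\begin{gather*}\n  u'(t;v) = \\lambda u(t;v), \\qquad u(t_0;v) = v,\n\\end{gather*}\nwith solution $u(t;v) = v\\, e^{\\lambda (t-t_0)}$. Here $\\nabla_u f = \\lambda$, so the variational equation~\\eqref{eq:derivatives:2} reads $\\fundam' = \\lambda \\fundam$ with $\\fundam(t_0) = 1$, giving the fundamental matrix $\\fundamental(t;t_0) = e^{\\lambda (t-t_0)}$. Differentiating the solution directly with respect to $v$ confirms\n\\begin{gather*}\n  \\frac{\\partial}{\\partial v} u(t;v) = e^{\\lambda (t-t_0)} = \\fundamental(t;t_0).\n\\end{gather*}\n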
{"text": "\\section{Methodology}\n\\subsection{Strategy}\nIt's easy to notice that the nested $i$ and $j$ for-loops in \\textbf{Algorithm \\ref*{alg:fw1}} can be parallizable, but one of the \\emph{intermediate edges} for a process\ncould be the \\emph{edge under analysis} for another process. For example process $P^1$ is writing in a cell while process $P^2$ is reading from the same cell, leading to a unpredictable final result\nif $P^1$'s writing comes before or after $P^2$'s reading.\n\nIt seems that enabling parallelization requires some sort of blocking mechanism between processes, like a semaphore, but we prove that no data race can occur as in the previous example as far as the value of $k$\nis shared between processes. \n\nLet's assume we have 2 processes, namely $P^1$ and $P^2$, and they have \\emph{under analisys} $M_{i,j}$ and $M_{x,y}$ respectively, with $x \\neq i$ so that each process partitions the matrix horizontally. \nAt any point the following system of preconditions exists (the two inequalities are taken from \\textbf{Algorithm \\ref*{alg:fw1}} at line 10):\n\n\\begin{flalign}\\label{eq:sys1}\n &&  \\left\\{\\begin{matrix}\nP^{1}_{i,j} & > & P^{1}_{i,k} & + & P^{1}_{k,j} \\\\\n\\\\ \nP^{2}_{x,y} & > & P^{2}_{x,k} & + & P^{2}_{k,y}\n\\end{matrix}\\right. &&\n\\end{flalign}\n\n\\emph{i.e.} $P^1$ is reading values of $M_{i,k}$ and $M_{k,j}$ so it can decide if $M_{i,j}$ should be overwritten; \nthe same does $P^2$ with $M_{x,k}$, $M_{k,y}$ and $M_{x,y}$ respectively. \\\\\nIn order to have a data race, the following statement must be true as well\n\n\\[(x = i \\wedge k=j) \\vee (k = i \\wedge y = j)\\]\n\n\\emph{i.e.} $P^2$ is using as one of its \\emph{intermediate edges} the \\emph{edge under analysis} of $P^1$. Without losing\nof generality we do not analyze the opposite case because the proof develops symmetrically.\\\\\nBecause $x \\neq i$, only the following must be verified\n\n\\begin{flalign}\\label{eq:cond1}\n &&  k = i \\wedge y = j &&\n\\end{flalign}\nand by applying (\\ref*{eq:cond1}) to (\\ref*{eq:sys1}) we have\n\n\\begin{flalign}\\label{eq:sys2}\n &&  P^{1}_{i,j} > P^{1}_{k,k} + P^{1}_{i,j} &&\n\\end{flalign}\nbut (\\ref*{eq:sys2}) is clearly false, because $M$ is a hollow matrix \\emph{i.e.} the  diagonal elements are all equal to $0$, leaving the \nfollowing inequality:\n\\begin{flalign}\\label{eq:sys3}\n &&  P^{1}_{i,j} > P^{1}_{i,j} &&\n\\end{flalign}\nClearly no number can be greater than itself and this means that at this point the comparison made by $P^1$ is irrilevant\nand is always evaluated to false. 
Whether $P^2$ writes before or after $P^1$'s evaluation does not really matter, and thus \nthere is no data race as long as $k$ is the same for all the running processes.\n\nThis is really important because it assures that there is no need for a locking mechanism on $M$ when it comes to\nparallelizing the nested $i$ and $j$ for-loops, and having no blocking guarantees better performance.\n\nNow the strategy relies on dividing the matrix by rows and assigning an equal number of rows to each process.\nOnce $k$ is set and shared among all processes, each process puts \\emph{under analysis} every cell\nof its sub-matrix and selects its \\emph{intermediate edges} depending on $k$.\n\nIn this work we propose 3 parallel architectures implemented with 3 different parallel programming environments: \n\\emph{Distributed computing} with MPI, \\emph{Shared-memory Multithreading} with OpenMP and \\emph{GPGPU} with CUDA.\n\nThe document analyzes each implementation and gives an overview of timings, implementations and pros and cons for each case.\n\n\\subsection{Distributed with MPI}\n\nMessage Passing Interface (MPI) is a standardized and portable message-passing standard designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open-source or in the public domain. These fostered the development of a parallel software industry, and encouraged the development of portable and scalable large-scale parallel applications. \\par\n\nThe main strategy is based on scattering the whole matrix horizontally among all the processes, so that\neach process reads a portion of the matrix of size $\\frac{n^2}{p}$; then a \\emph{process of competence}\nis chosen: as $k$ is common (but never transmitted) to all the processes, there's always a cell in the $k^{th}$\nrow representing one of the two intermediate vertices for any process, and there's always one process which this row was assigned to.\nThe value $k$ is always \"up to date\" among processes because each for-loop involving $k$ starts with a collective communication,\nwhich implies a synchronization point among processes.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/mpi-scatter}\n\\captionsetup{justification=centering}\n\\caption{Horizontal partitioning of the matrix among the MPI processes}\n\\label{fig:mpi-scatter}\n\\end{figure}\nEvery time $k$ changes, the $k^{th}$ row is broadcast by the \\emph{process of competence} (the process that \"owns\" the row) to the other processes;\na total of $n$ \\texttt{MPI\\_Bcast} operations is required. \\par\nOnce the $k^{th}$ row has been received, each process acts like the original FW in the $i$ and $j$ for-loops; obviously\nthey each write values in their own local matrix (a minimal C++/MPI sketch of this loop follows). 
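\nThe following is a minimal, self-contained C++/MPI sketch of the scatter/broadcast/relax/gather scheme just described. It is our illustration, not the project's code (see \\textbf{section \\ref{mpiimpl}} for the concrete implementation); it assumes that $p$ divides $n$, that distances are \\texttt{int}s and that all entries are finite.\n\\begin{verbatim}\n#include <mpi.h>\n#include <algorithm>\n#include <vector>\n\nint main(int argc, char** argv)\n{\n    MPI_Init(&argc, &argv);\n    int rank, p;\n    MPI_Comm_rank(MPI_COMM_WORLD, &rank);\n    MPI_Comm_size(MPI_COMM_WORLD, &p);\n\n    int n = 0;\n    std::vector<int> M;               // full matrix, significant on root only\n    if (rank == 0) { /* fill n and the n*n entries of M here */ }\n    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);\n\n    const int rows = n / p;           // rows assigned to each process\n    std::vector<int> local(rows * n), K(n);\n    MPI_Scatter(M.data(), rows * n, MPI_INT,\n                local.data(), rows * n, MPI_INT, 0, MPI_COMM_WORLD);\n\n    for (int k = 0; k < n; ++k) {\n        const int owner = k / rows;   // process of competence for row k\n        if (rank == owner)            // extract row k from the local block\n            std::copy_n(&local[(k % rows) * n], n, K.begin());\n        MPI_Bcast(K.data(), n, MPI_INT, owner, MPI_COMM_WORLD);\n        for (int i = 0; i < rows; ++i)\n            for (int j = 0; j < n; ++j)\n                if (local[i * n + j] > local[i * n + k] + K[j])\n                    local[i * n + j] = local[i * n + k] + K[j];\n    }\n\n    MPI_Gather(local.data(), rows * n, MPI_INT,\n               M.data(), rows * n, MPI_INT, 0, MPI_COMM_WORLD);\n    MPI_Finalize();\n    return 0;\n}\n\\end{verbatim}\n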
\\par\n\nAt the end of the $k$ for-loop, all the local matrices are gathered to the root process. \n\\textbf{Algorithm \\ref*{alg:mpi}} shows a high-level pseudo-code of the strategy described above. For the concrete implementation see \\textbf{section \\ref{mpiimpl}}.\\par\n\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\nbroadcast($n$, $ROOT$) \\\\\nscatter($M$, $p$)\n\n\\For{$k = 1 \\rightarrow n$}{\n\t\\If{$rank =$ \\textnormal{findOwnerOfK$^{th}$Row}($k$, $rank$)}{\n\t\t$K \\leftarrow $ getRow($M$, $k$) \\\\\n\t}\n\tbroadcast($K$) \\\\\n\n\t\\For{$i = 1 \\rightarrow \\frac{n}{p}$}{\n\t\t\\For{$j = 1 \\rightarrow n$}{\n\t\t\t\\If{$M_{i,j} > M_{i,k} + K_j$}{\n\t\t\t\t$M_{i,j} \\leftarrow M_{i,k} + K_j$\n\t\t\t}\n\t\t}\n\t}\n}\ngather($M$, $p$);\n \n\\caption{Distributed version of FW}\\label{alg:mpi}\n\\end{algorithm}\n\n\nWe can list all the communications required:\n\\begin{itemize}\n\\item{1 \\texttt{MPI\\_Bcast} for communicating the value of $n$}\n\\item{1 \\texttt{MPI\\_Scatter} for the assignment of the local sub-matrix}\n\\item{$n$ \\texttt{MPI\\_Bcast} for communicating the $k^{th}$ row}\n\\item{1 \\texttt{MPI\\_Gather} for the collection of the local sub-matrix}\n\\end{itemize}\n\n\\textbf{Table \\ref*{tab:comm}} approximates how many bytes are involved in the communications, assuming that\nthe implementation uses 4 bytes to store one \\texttt{int} and omitting the overhead of the communication protocol.\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\n\\rowcolor[HTML]{F56B00} \n{\\color[HTML]{FFFFFF} \\textbf{Count}} & {\\color[HTML]{FFFFFF} \\textbf{Type}} & {\\color[HTML]{FFFFFF} \\textbf{Size}} \\\\ \\hline\n1     &  \\texttt{MPI\\_Bcast}   &  $4(p-1)$ bytes              \\\\ \\hline\n1     &  \\texttt{MPI\\_Scatter} &  $4\\frac{n^2(p-1)}{p}$ bytes \\\\ \\hline\n$n$   &  \\texttt{MPI\\_Bcast}   &  $4n^2(p-1)$ bytes           \\\\ \\hline\n1     &  \\texttt{MPI\\_Gather}  &  $4\\frac{n^2(p-1)}{p}$ bytes \\\\ \\hline\n\\end{tabular}\n\\caption{Approximation of the size of each communication for \\emph{MPI} FW}\n\\label{tab:comm}\n\\end{table}\n\\par\n\nThe total communication volume can be expressed with the following formula, which helps to estimate the network bandwidth required\nby this implementation:\n\\[W_{comm} = 4(p-1)(1 + n^2(1 + \\frac{2}{p})) \\text{ bytes}\\]\nThis formula is important because the time spent on communications is time taken from the calculation, and a network with an inadequate\nbandwidth is the main source of bottlenecks. 
For example, having 8 processes consuming a $12500 \\times 12500$ matrix implies a total of\n$\\textasciitilde 5.46$GB transferred over the entire network; this means that each processor would theoretically spend $5468$ms (43 seconds in total) on communication over a 1Gbps network.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/mpi-time}\n\\captionsetup{justification=centering}\n\\caption{Calculation and MPI instantiation + communication time of \\emph{MPI} FW consuming a $5040\\times5040$ matrix over a 1Gbps network depending on the number of processes}\n\\label{fig:mpi-time}\n\\end{figure}\n\n\\textbf{Figure \\ref*{fig:mpi-time}} shows the amount of time taken to initialize the MPI cluster and to communicate during the execution of the \\emph{MPI} FW. In this case a dense \n$5040\\times5040$ matrix is used as input and the MPI cluster was located in a completely connected network with 1Gbps bandwidth.\n\nThe required bandwidth grows quadratically with the number of vertices and linearly with the number of processes; \nso with a medium-small matrix the time spent in communication can exceed 10\\% of the total computational time. \nBut by increasing the size of the matrix to $12600\\times12600$ the time spent in communication drops to 4\\%, with a total of 50 seconds. Thus the implementation fits huge matrices well, rather than medium ones.\n\nWe can also estimate the time required by the algorithm with this formula:\n\\[T = t_c \\frac{n^3}{p} + \\log(p) \\cdot(t_s(3+n) + t_w(1 + 2 \\frac{n^2}{p} + n))\\]\nwhere $t_c$ is the time of a single elementary computation, $t_s$ the time required to start a communication and $t_w$ the time to communicate a single word (in this case an \\texttt{int}); the formula\nassumes that the collective communications use a tree-based hierarchical pattern that takes $\\log(p)$ steps rather than $p$ steps for $p$ processes.\n\nFor this reason the speedup is considerable but far from ideal for a $5040\\times5040$ matrix. 
\\textbf{Figure \\ref*{fig:mpi-speedup}} shows the trend of the speedup in relation to the \nsequential version.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/mpi-speedup}\n\\captionsetup{justification=centering}\n\\caption{Speedup of \\emph{MPI} FW}\n\\label{fig:mpi-speedup}\n\\end{figure}\n\nThis comes with a consequent decrease in efficiency, as shown in \\textbf{Figure \\ref*{fig:mpi-efficiency}}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/mpi-efficiency}\n\\captionsetup{justification=centering}\n\\caption{Efficiency of \\emph{MPI} FW consuming a medium matrix}\n\\label{fig:mpi-efficiency}\n\\end{figure}\n\n\\subsection{Shared-memory multiprocessing with OpenMP}\nOpenMP (Open Multi-Processing) is an application programming\ninterface (API) for parallel programming intended to work on shared-memory architectures. More specifically, it is a set of compiler\ndirectives, library routines and environment variables which influence runtime behavior. OpenMP enables parallel programming in\nvarious languages, such as C, C++ and FORTRAN, and runs on most operating systems. \\par\nThe OpenMP API uses the fork-join model of parallel execution.\nMultiple threads perform tasks defined implicitly or explicitly by OpenMP directives. All OpenMP applications begin as a single thread\nof execution, called the initial thread. The initial thread executes sequentially until it encounters a parallel construct. At that point,\nthis thread creates a group of itself and zero or more additional threads and becomes the master thread of the new group. Each thread\nexecutes the commands included in the parallel region, and their execution may be differentiated, according to additional directives\nprovided by the programmer. At the end of the parallel region, all threads are synchronized. \\par\nThe runtime environment is responsible for effectively scheduling threads. Each thread receives a unique id, which differentiates it\nduring execution. Scheduling is performed according to memory usage, machine load and other factors and may be adjusted by altering\nenvironment variables. In terms of memory usage, most variables in OpenMP code are visible to all threads by default. \nHowever, OpenMP provides a variety of options for data management, such as thread-private memory and private variables, as well as multiple ways of\npassing values between sequential and parallel regions. 
Additionally, recent OpenMP implementations introduced the concept of tasks,\nas a solution for parallelizing applications that produce dynamic workloads. \nThus, OpenMP is enriched with a flexible model for irregular parallelism, providing parallel while loops and recursive data structures. \\par\nThe main advantage of using OpenMP is the ease of developing parallelism with simple constructs that (often) do not differ too much from the original implementation.\n\nThe snippet in \\textbf{Algorithm \\ref*{alg:omp}} shows the implementation used in this work: the matrix containing the\ndistances between vertices is shared among all the threads, while the two nested for-loops are executed\nby each thread independently. The $k^{th}$ row is extracted by a single thread and shared among the other threads: this\nguarantees sequential access to the $k^{th}$ row that is faster than accessing the same row directly on $M$, resulting in a speedup of up to $\\textasciitilde31\\%$.\n\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\n\\texttt{\\#pragma omp parallel num\\_threads(t) shared(M, K) private(k)} \\\\\n\\For{$k = 1 \\rightarrow n$}{\n\t\\texttt{\\#pragma omp master} \\\\\n\t$K \\leftarrow k^{th}Row(M, k)$ \\\\\n\t\\texttt{\\#pragma omp for private(i,j) schedule(dynamic)} \\\\\n\t\\For{$i = 1 \\rightarrow n$}{\n\t\t\\For{$j = 1 \\rightarrow n$}{\n\t\t\t\\If{$M_{i,j} > M_{i,k} + K_j$}{\n\t\t\t\t$M_{i,j} \\leftarrow M_{i,k} + K_j$\n\t\t\t}\n\t\t}\n\t}\n}\n\n \n\\caption{Multithreaded FW with \\emph{OpenMP}}\\label{alg:omp}\n\\end{algorithm}\n\n\nDespite the overhead that the dynamic scheduler entails, in this case it works better than a static one because the\ncontent of the sub-matrices varies from thread to thread, so a group of threads may do more write operations on $M$ and a\nstatic scheduler would make the others wait until they finish. We benchmarked a loss in performance from $\\textasciitilde7\\%$ to $\\textasciitilde48\\%$ when choosing a static scheduler over\na dynamic one, depending on the size of the matrix.\n \\\\\nAlso note that the solution does not use a \\texttt{collapse} directive because when we collapse multiple loops, OpenMP turns them into a single loop: there is a single\nindex that is constructed from \\texttt{i} and \\texttt{j} using division and modulo operations. 
This can\nhave a huge impact on performance because of this overhead, especially if the matrix is really wide.\n\nFor the concrete implementation see \\textbf{section \\ref{ompimpl}}.\n\n\\textbf{Figure \\ref*{fig:threads}} shows how 2 threads interact inside the matrix: the red and orange zones highlight the cells where\nThread 1 and Thread 2 can write respectively; the blue and turquoise cells represent the intermediate vertices that are compared\nwith the vertex under analysis for Thread 1 and Thread 2 respectively.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/openmp-threads}\n\\captionsetup{justification=centering,margin=2cm}\n\\caption{View of the data each thread can reach}\n\\label{fig:threads}\n\\end{figure}\n\n\nWe already proved that no race condition can occur if $k$ is the same among the threads; therefore there is no need to verify the atomicity of the write operations,\nand the lack of OpenMP directives like \\texttt{atomic} or \\texttt{critical} plays in favor of performance. \\par\n\nThe speedup of the solution is slightly worse than the ideal speedup (see \\textbf{Figure \\ref*{fig:omp-speedup}}).\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/openmp-speedup}\n\\captionsetup{justification=centering,margin=2cm}\n\\caption{Speedup of \\emph{OpenMP} FW on an octacore CPU}\n\\label{fig:omp-speedup}\n\\end{figure}\nWhen scaling from 7 to 8 threads, we notice a slight deviation from the previous (almost) linear trend. That's because the measurement\nwas taken on an 8-core/8-thread CPU, namely an Intel Core i7-9700K; because no other cores were free to manage the OS and its subprocesses, the scheduler\ndivided this task among all the threads. So we interpolated the speedup, not counting the fluctuations due to the management of the OS. 
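\nThe following is a compact C++ rendering of \\textbf{Algorithm \\ref*{alg:omp}}; it is our illustration, not the project's code (see \\textbf{section \\ref{ompimpl}}). The only liberty taken is the use of \\texttt{single} instead of \\texttt{master} for extracting the $k^{th}$ row, since \\texttt{single} carries the implicit barrier that guarantees every thread sees the freshly copied row before relaxing.\n\\begin{verbatim}\n#include <omp.h>\n#include <algorithm>\n#include <vector>\n\n// M is the n x n distance matrix in row-major order; t is the thread count.\nvoid fw_omp(std::vector<int>& M, int n, int t)\n{\n    std::vector<int> K(n);                 // shared copy of the k-th row\n    #pragma omp parallel num_threads(t)\n    for (int k = 0; k < n; ++k) {\n        // one thread extracts row k; the implicit barrier of 'single'\n        // makes the copy visible to every thread before the relaxation\n        #pragma omp single\n        std::copy_n(&M[k * n], n, K.begin());\n        // worksharing loop; its implicit end-of-loop barrier keeps the\n        // value of k in lockstep across threads (no atomics needed)\n        #pragma omp for schedule(dynamic)\n        for (int i = 0; i < n; ++i)\n            for (int j = 0; j < n; ++j)\n                if (M[i * n + j] > M[i * n + k] + K[j])\n                    M[i * n + j] = M[i * n + k] + K[j];\n    }\n}\n\\end{verbatim}\n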
\\par\n\nThe efficiency, which always stays above $90\\%$, is shown in \\textbf{Figure \\ref*{fig:omp-efficiency}} alongside its theoretical counterpart.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/openmp-efficiency}\n\\captionsetup{justification=centering,margin=2cm}\n\\caption{Efficiency of \\emph{OpenMP} FW on an octacore CPU}\n\\label{fig:omp-efficiency}\n\\end{figure}\nAn overview of the timings collected can be found in \\textbf{Table \\ref*{tab:omp-time}}.\n\n\\subsection{GPGPU with CUDA}\nIn recent years, many scientific and numeric GPGPU applications found success due to graphics hardware\u2019s streaming data-parallel organizational model. \n\nThe GPU serves, to an extent, as a coprocessor to the CPU, programmed through the CUDA API. A single program known as a kernel is compiled to operate on the GPU\ndevice and exploit the massive data parallelism inherent in the Single Instruction, Multiple Data (SIMD) architecture. \nGroups of threads then execute the kernel on the GPU. Threads are organized into blocks, which allow efficient sharing of\ndata through a high-speed shared-memory region accessible to the programmer directly through CUDA. \nShared memory is shared among the threads of a block, facilitating higher bandwidth and overall performance gains. \nTherefore, algorithms must manage this fast shared-memory cache effectively; \nthis fully utilizes the data-parallel capabilities of graphics\nhardware and alleviates the memory latency that data-intensive algorithms suffer from on the GPU.\n\nUnlike the other two implementations, \\emph{CUDA} FW can rely on the \\emph{grid-of-blocks} system, and thus the kernel needs no loops: $k$ is still shared among threads\nso that no data race occurs, but each thread, instead of covering one or more rows, is assigned to a single cell. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=2.5in]{diagrams/cuda-threads}\n\\captionsetup{justification=centering}\n\\caption{Mapping of CUDA threads inside $M$}\n\\label{fig:cuda-threads}\n\\end{figure}\n\nDepending on the number of threads per block, each row of matrix $M$ is covered by one or more blocks, which\nalways have a height of one thread. This mapping allows threads of the same block to share some data that can be reused. 
\\\\\nIn this case the shared variable stores the value $M_{i,k}$, which is common to the entire row. Because \nshared memory offers lower latency and higher bandwidth than global memory, it is worth storing this value\nfor the other threads.\n\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\\DontPrintSemicolon\n  \n\n\\For{$k = 0 \\rightarrow n$}{\n\t\\textnormal{kernel<{}<{}<$\\frac{n+b-1}{b} \\times n$, $b$>{}>{}>($M$, $n$, $k$)}\n}\n\n\\;\n\n\\SetKwFunction{FMain}{kernel}\n  \\SetKwProg{Fn}{Function}{:}{}\n  \\Fn{\\FMain{$M$, $n$, $k$}}{\n\n$j \\leftarrow \\left|B\\right|_x \\times B_x + T_x$ \\\\\n$i \\leftarrow B_y$ \\\\\n\n\\If{$j < n$}{\n\t\n\t\\If{$T_x = 0$}{\n\t\t$K_{\\textnormal{shared}} \\leftarrow M_{i,k};$\n\t}\n\n\t$\\textnormal{syncthreads()}$\n\n\n\t\\If{$M_{i,j} > K_{\\textnormal{shared}} + M_{k,j}$}{\n\t\t$M_{i,j} \\leftarrow K_{\\textnormal{shared}} + M_{k,j}$\n\t}\n}\n}\n \n\\caption{Kernel of the \\emph{CUDA-FW} on a pre-Volta architecture}\\label{alg:cuda}\n\\end{algorithm}\n\n\\textbf{Algorithm \\ref*{alg:cuda}} shows the kernel function for this implementation. \\\\\n$B_x$ is the coordinate $x$ of the block, $\\left|B\\right|_x$ is the horizontal size of the block, $T_x$ the \ncoordinate $x$ of the thread; $B_y$ is the coordinate $y$ of the block and $K_{\\textnormal{shared}}$ is the variable\nshared among all the threads of the same block.\n\nOne thread (namely $T_{0,0}$) reads $M_{i,k}$ and stores the value in the shared memory. From here the threads encounter a synchronization point, which\nensures that $T_{0,0}$ has written to the shared memory before the block actually uses the value.\n\nThe state-of-the-art L1 cache in Volta and Turing offers lower latency, higher bandwidth, and higher capacity compared to the earlier architectures. Like Volta, Turing's L1 can cache write operations (write-through). The result is that for many applications Volta and Turing narrow the performance gap between explicitly managed shared memory and direct access to device memory \\cite{nvidia}. \\par\nFor this reason we benchmarked \\textbf{Algorithm \\ref*{alg:cuda2}}, which relies only on global memory, and we did not experience any performance degradation on the Turing architecture.\nThis mechanism is present in the newer Ampere architecture as well.\n\nFor the concrete implementation see \\textbf{section \\ref{cudaimpl}}.\n\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\n\\DontPrintSemicolon\n  \\SetKwFunction{FMain}{kernel}\n  \\SetKwProg{Fn}{Function}{:}{}\n  \\Fn{\\FMain{$M$, $n$, $k$}}{\n\n\n$j \\leftarrow \\left|B\\right|_x \\times B_x + T_x$ \\\\\n$i \\leftarrow B_y$ \\\\\n\n\\If{$j < n$}{\n\t\n\n\t\\If{$M_{i,j} > M_{i,k} + M_{k,j}$}{\n\t\t$M_{i,j} \\leftarrow M_{i,k} + M_{k,j}$\n\t}\n}\n}\n \n\\caption{Kernel of the \\emph{CUDA-FW} on Volta and post-Volta architectures}\\label{alg:cuda2}\n\\end{algorithm}\n\nBecause each thread does little computation and uses high memory bandwidth, timings are very low compared\nto \\emph{serial} FW and \\emph{OMP} FW with 8 cores. \\\\\nIn fact, from \\textbf{Figure \\ref*{fig:cuda-time}} we see a speedup of up to $87.8$ times compared to \\emph{serial} FW\nand $12.34$ times compared to \\emph{OMP} FW with 8 cores (a runnable CUDA sketch of the kernel follows). 
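\nThe following is a short CUDA C++ sketch of the global-memory kernel of \\textbf{Algorithm \\ref*{alg:cuda2}} together with its host loop; it is our illustration, not the project's code (see \\textbf{section \\ref{cudaimpl}}). Error checking is omitted, and the matrix is assumed to be already resident in device memory.\n\\begin{verbatim}\n#include <cuda_runtime.h>\n\n// One thread per cell: row = blockIdx.y, column from blockIdx.x/threadIdx.x.\n__global__ void fw_step(int* M, int n, int k)\n{\n    int j = blockIdx.x * blockDim.x + threadIdx.x;\n    int i = blockIdx.y;\n    if (j < n) {\n        int via = M[i * n + k] + M[k * n + j];   // path i -> k -> j\n        if (via < M[i * n + j])\n            M[i * n + j] = via;\n    }\n}\n\n// Host loop: launches on the same (default) stream execute in order,\n// so the shared value of k needs no extra synchronization between steps.\nvoid fw_cuda(int* dM, int n, int b)   // b = threads per block, e.g. 128\n{\n    dim3 grid((n + b - 1) / b, n);\n    for (int k = 0; k < n; ++k)\n        fw_step<<<grid, b>>>(dM, n, k);\n    cudaDeviceSynchronize();\n}\n\\end{verbatim}\n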
We also notice that the maximum speedup is reached when the number of threads per block is equal to 128;\nthat is because the occupancy of the Streaming Multiprocessors is at its peak and there are fewer wasted threads.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/cuda-speedup}\n\\captionsetup{justification=centering}\n\\caption{Speedup of \\emph{CUDA} FW (128 threads per block) consuming a medium matrix compared to \\emph{sequential} and \\emph{OMP} FW}\n\\label{fig:cuda-speedup}\n\\end{figure}\n\nBecause the CUDA APIs don't allow the user to control the number of Streaming Multiprocessors or how many CUDA cores\ncan be involved in the computation, the speedup is calculated based on the number of threads per block.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=3.5in]{diagrams/cuda-time}\n\\captionsetup{justification=centering}\n\\caption{Execution times of \\emph{CUDA} FW consuming a medium matrix}\n\\label{fig:cuda-time}\n\\end{figure}\n\n\\subsection{Hybrid: MPI + OpenMP}\nIn order to make the most of \\emph{MPI FW}, it can be combined with \\emph{OpenMP}; this\nmakes sense because otherwise each process would run the $i$ and $j$ for loops on a single thread, and nowadays\nit is extremely rare to find a cluster composed of single-threaded CPUs.\n\n\\textbf{Algorithm \\ref*{alg:mpi+omp}} shows how easily \\emph{MPI FW} can be modified\nso that each process can benefit from multithreading.\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\nbroadcast($n$, $ROOT$) \\\\\nscatter($M$, $processes$)\n\n\\texttt{\\#pragma omp parallel num\\_threads(t) private(k) shared(M, K, p)} \\\\\n\\For{$k = 1 \\rightarrow n$}{\n\t\\texttt{\\#pragma omp master} \\\\\n\t\\If{$rank =$ \\textnormal{findOwnerOfKthRow}($k$, $rank$)}{\n\t\t$K \\leftarrow $ getRow($M$, $k$) \\\\\n\t}\n\tbroadcast($K$) \\\\\n\n\t\\texttt{\\#pragma omp for private(i,j) schedule(dynamic)} \\\\\n\t\\For{$i = 1 \\rightarrow \\frac{n}{p}$}{\n\t\t\\For{$j = 1 \\rightarrow n$}{\n\t\t\t\\If{$M_{i,k} + K_j < M_{i,j}$}{\n\t\t\t\t$M_{i,j} \\leftarrow M_{i,k} + K_j$\n\t\t\t}\n\t\t}\n\t}\n}\ngather($M$, $processes$);\n\n\\caption{\\emph{MPI+OpenMP} FW}\\label{alg:mpi+omp}\n\\end{algorithm}\n\nFor the concrete implementation see \\textbf{section \\ref{hybimpl}}.\n\n\\subsection{Hybrid: MPI + CUDA}\nAnother way to improve the efficiency of the cluster is to\nuse GPGPU on each node.\n\n\\textbf{Algorithm 
\\ref*{alg:mpi+cuda}} shows how this solution\ncan be implemented.\n\\begin{algorithm}[h!]\n\n\\SetAlgoLined\n\\DontPrintSemicolon\nbroadcast($n$, $ROOT$) \\\\\nscatter($M$, $p$)\n\n\\For{$k = 1 \\rightarrow n$}{\n\t\\If{$rank =$ \\textnormal{findOwnerOfKthRow}($k$, $rank$)}{\n\t\t$K \\leftarrow $ getRow($M$, $k$) \\\\\n\t}\n  \tbroadcast($K$) \\\\\n\n\t\\textnormal{kernel<{}<{}<$\\frac{n+b-1}{b} \\times \\frac{n}{p}$, $b$>{}>{}>($M$, $K$, $n$, $k$)}\n\n}\ngather($M$, $p$);\n\\;\n\\;\n  \\SetKwFunction{FMain}{kernel}\n  \\SetKwProg{Fn}{Function}{:}{}\n  \\Fn{\\FMain{$M$, $K$, $n$, $k$}}{\n\n$j \\leftarrow \\left|B\\right|_x \\times B_x + T_x$ \\\\\n$i \\leftarrow B_y$ \\\\\n\n\\If{$j < n$}{\n\n\t\\If{$M_{i,k} + K_j < M_{i,j}$}{\n\t\t$M_{i,j} \\leftarrow M_{i,k} + K_j$\n\t}\n}\n}\n\n\\caption{\\emph{MPI+CUDA} FW}\\label{alg:mpi+cuda}\n\\end{algorithm}\n\nDespite the performance increase, this solution implies\nhigh costs, because each node must be equipped with a GPU; moreover, the GPUs\n(which must be produced by NVIDIA) must be compatible with the version of the CUDA Toolkit\nfor which the program was compiled.\n", "meta": {"hexsha": "60521d4206a4ce97009e0a080abc476706f53229", "size": 30247, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/methodology.tex", "max_stars_repo_name": "firaja/Parallel-FloydWarshall", "max_stars_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-06-19T21:42:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-12T11:25:06.000Z", "max_issues_repo_path": "report/methodology.tex", "max_issues_repo_name": "firaja/Parallel-FloydWarshall", "max_issues_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/methodology.tex", "max_forks_repo_name": "firaja/Parallel-FloydWarshall", "max_forks_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.6014084507, "max_line_length": 669, "alphanum_fraction": 0.6117962112, "num_tokens": 7049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5726080317490049}}
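As a complement to the pseudocode of the MPI+CUDA hybrid, here is a compact C/CUDA sketch of a complete host program under our own assumptions: $p$ divides $n$, each process holds $n/p$ contiguous rows on its GPU, the weight matrix is a random complete graph (so no infinity handling is needed), and all names are illustrative rather than the authors' actual implementation. \texttt{MPI\_Scatter}, \texttt{MPI\_Bcast}, \texttt{MPI\_Gather} and the \texttt{cuda*} calls are the standard library functions.

\begin{verbatim}
/* Sketch of MPI+CUDA FW; illustrative, assumes p divides n. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void fw_kernel(int *M, const int *K, int n, int k)
{
    int j = blockDim.x * blockIdx.x + threadIdx.x; /* column       */
    int i = blockIdx.y;                            /* local row    */
    if (j < n && M[i * n + k] + K[j] < M[i * n + j])
        M[i * n + j] = M[i * n + k] + K[j];
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    const int n = 1024, b = 128;
    int rows = n / p;
    int *full = NULL, *slice, *K, *d_M, *d_K;

    if (rank == 0) {    /* root builds a random complete weight matrix */
        full = (int *)malloc((size_t)n * n * sizeof(int));
        for (int i = 0; i < n * n; i++) full[i] = rand() % 100 + 1;
        for (int i = 0; i < n; i++) full[i * n + i] = 0;
    }
    slice = (int *)malloc((size_t)rows * n * sizeof(int));
    K     = (int *)malloc(n * sizeof(int));
    MPI_Scatter(full, rows * n, MPI_INT, slice, rows * n, MPI_INT,
                0, MPI_COMM_WORLD);

    cudaMalloc((void **)&d_M, (size_t)rows * n * sizeof(int));
    cudaMalloc((void **)&d_K, n * sizeof(int));
    cudaMemcpy(d_M, slice, (size_t)rows * n * sizeof(int),
               cudaMemcpyHostToDevice);

    dim3 grid((n + b - 1) / b, rows);
    for (int k = 0; k < n; k++) {
        int owner = k / rows;              /* process holding row k */
        if (rank == owner)
            cudaMemcpy(K, d_M + (size_t)(k - owner * rows) * n,
                       n * sizeof(int), cudaMemcpyDeviceToHost);
        MPI_Bcast(K, n, MPI_INT, owner, MPI_COMM_WORLD);
        cudaMemcpy(d_K, K, n * sizeof(int), cudaMemcpyHostToDevice);
        fw_kernel<<<grid, b>>>(d_M, d_K, n, k);
        cudaDeviceSynchronize();   /* row k+1 must be final before use */
    }

    cudaMemcpy(slice, d_M, (size_t)rows * n * sizeof(int),
               cudaMemcpyDeviceToHost);
    MPI_Gather(slice, rows * n, MPI_INT, full, rows * n, MPI_INT,
               0, MPI_COMM_WORLD);
    if (rank == 0) { printf("M[0][n-1] = %d\n", full[n - 1]); free(full); }
    cudaFree(d_M); cudaFree(d_K); free(slice); free(K);
    MPI_Finalize();
    return 0;
}
\end{verbatim}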
{"text": "\\documentclass{article}\n\\title{Chapter 08}\n\\author{Newton Ni}\n\n\\usepackage{bussproofs}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\n% Set cardinality: |#1|\n\\newcommand{\\cardinality}[1]{\\lvert#1\\rvert}\n\n% Set: { #1 }\n\\newcommand{\\set}[1]{\\{\\ #1\\ \\}}\n\n% Set comprehension: { #1 | #2 }\n\\newcommand{\\comp}[2]{\\set{#1\\ \\mid\\ #2}}\n\n% t\u2081, t\u2082, t\u2083, ...\n\\newcommand{\\term}[1]{\\texttt{t\\textsubscript{#1}}}\n\n% v\u2081, v\u2082, v\u2083, ...\n\\newcommand{\\val}[1]{\\texttt{v\\textsubscript{#1}}}\n\n\\renewcommand{\\ss}[2]{#1 \\longrightarrow #2}\n\\renewcommand{\\bs}[2]{#1 \\Downarrow #2}\n\n% Monospace\n\\newcommand{\\ms}[1]{\\texttt{#1}}\n\n\\newcommand{\\LabelR}[1]{\\RightLabel{\\textsc{#1}}}\n\n\\theoremstyle{remark}\n\\newtheorem*{case}{Case}\n\n\\begin{document}\n\\maketitle\n\n\\section{8.2.3}\n\n    \\textit{Prove that every subterm of a well-typed term is well typed.}\n\n    \\begin{proof}\n        By induction on the typing derivation of term \\term{}. Our inductive hypothesis is:\n        $$H(\\term{}): \\term{} \\textit{ is well-typed} \\implies \\textit{all subterms of } \\term{} \\textit{ are well-typed}$$\n\n        We proceed by case analysis on the final step of the derivation.\n        \\begin{case}[\\ms{true}, \\ms{false}, 0]\n            These terms are well-typed by the inversion lemma, and have no subterms.\n        \\end{case}\n        \\begin{case}[\\ms{if \\term{1} then \\term{2} else \\term{3}}]\n            By the inversion lemma, this term is well-typed, and we have three well-typed subterms:\n            \\begin{itemize}\n                \\item $\\term{1} : \\ms{Bool}$\n                \\item $\\term{2} : \\ms{R}$\n                \\item $\\term{3} : \\ms{R}$\n            \\end{itemize}\n            We can apply the inductive hypothesis to each.\n        \\end{case}\n        \\begin{case}[\\ms{succ \\term{1}}, \\ms{pred \\term{1}}, \\ms{iszero \\term{1}}]\n            These cases are analagous to the previous case, except they\n            only have one subterm each.\n        \\end{case}\n    \\end{proof}\n\n\\section{8.3.4}\n\n    \\textit{Restructure} [the \\textsc{Preservation}] \\textit{proof so that it goes by}\n    \\textit{induction on evaluation derivations rather than typing derivations.}\n\n    \\begin{proof}\n        By induction on the derivation of $\\ss{\\term{}}{\\term{}'}$. Our inductive hypothesis is:\n        $$H(\\term{}): \\term{} : T\\ \\land\\ \\ss{\\term{}}{\\term{}'} \\implies \\term{}' : T$$\n        We proceed by case analysis on the last step of the derivation. 
In all cases,\n        we have that $\\term{} : T$.\n\n        \\begin{case}[\\textsc{E-IfTrue}]\n            Here, $\\ss{\\term{}}{\\term{}'}$ is:\n            $$\\ss{\\ms{if true then \\term{2} else \\term{3}}}{\\term{2}}$$\n            By the inversion lemma, $\\term{} : T \\implies \\term{2} : T$ as desired.\n        \\end{case}\n        \\begin{case}[\\textsc{E-IfFalse}]\n            This case is analogous to the previous one.\n        \\end{case}\n\n        \\begin{case}[\\textsc{E-If}]\n            Here, we have:\n            $$\\ss{\\ms{if \\term{1} then \\term{2} else \\term{3}}}{\\ms{if \\term{1}' then \\term{2} else \\term{3}}}$$\n            $$\\ss{\\term{1}}{\\term{1}'}$$\n            By the inversion lemma, we have:\n            \\begin{itemize}\n                \\item $\\term{1} : \\ms{Bool}$\n                \\item $\\term{2} : \\ms{T}$\n                \\item $\\term{3} : \\ms{T}$\n            \\end{itemize}\n            By the inductive hypothesis, we also have $\\term{1}' : \\ms{Bool}$.\n            We conclude $\\term{}' : \\ms{T}$ by \\textsc{T-If}.\n        \\end{case}\n\n        \\begin{case}[\\textsc{E-Succ}]\n            Here, we have:\n            $$\\ss{\\ms{succ \\term{1}}}{\\ms{succ \\term{1}'}}$$\n            $$\\ss{\\term{1}}{\\term{1}'}$$\n            By the inversion lemma, we have $\\term{1} : \\ms{Nat}$.\n            By the inductive hypothesis, we have $\\term{1}' : \\ms{Nat}$.\n            We conclude that $\\ms{succ \\term{1}'} : \\ms{Nat}$ by \\textsc{T-Succ}.\n        \\end{case}\n        \\begin{case}[\\textsc{E-Pred}, \\textsc{E-IsZero}]\n            These cases are analogous to the previous case.\n        \\end{case}\n\n        \\begin{case}[\\textsc{E-PredZero}, \\textsc{E-IsZeroZero}]\n            These cases follow directly from the inversion lemma and typing\n            rules \\textsc{T-Zero} and \\textsc{T-True}, respectively.\n        \\end{case}\n\n        \\begin{case}[\\textsc{E-PredSucc}]\n            Here, we have:\n            $$\\ss{\\ms{pred (succ \\term{1})}}{\\term{1}}$$\n            By two applications of the inversion lemma, we have:\n            \\begin{itemize}\n                \\item $\\ms{succ \\term{1}} : \\ms{Nat}$\n                \\item $\\ms{\\term{1}} : \\ms{Nat}$\n            \\end{itemize}\n        \\end{case}\n        \\begin{case}[\\textsc{E-IsZeroSucc}]\n            This case is analogous to the previous case.\n        \\end{case}\n\n    \\end{proof}\n\n\\section{8.3.5}\n\n    \\textit{The evaluation rule \\textsc{E-PredZero} (Figure 3--2) is a bit counterintuitive:}\n    \\textit{we might feel that it makes more sense for the predecessor of zero to be undefined,}\n    \\textit{rather than being defined to be zero. Can we achieve this simply by removing the}\n    \\textit{rule from the definition of single-step evaluation?} \\\\\n\n    No, because the term $\\ms{pred 0}$ would be well-typed by \\textsc{T-Zero} and \\textsc{T-Pred},\n    but would be stuck, violating the progress property.\n\n\\section{8.3.6}\n\n    \\textit{Having seen the subject reduction property, it is reasonable to wonder whether}\n    \\textit{the opposite property---\\emph{subject expansion}---also holds. Is it always the}\n    \\textit{case that, if} $\\ss{\\term{}}{\\term{}'}$ \\textit{and} $\\term{}' : \\ms{T}$ \\textit{, then}\n    $\\term{} : \\ms{T}$? \\textit{If so, prove it. If not, give a counterexample.} \\\\\n\n    No, since ill-typed terms can also take a step. 
For example:\n    $$\\ss{\\ms{if true then 0 else true}}{0}$$\n\n\\section{8.3.7}\n\n    \\textit{Suppose our evaluation relation is defined in the big-step style, as in exercise}\n    \\textit{3.5.17. How should the intuitive property of type safety be formalized?} \\\\\n\n    Type preservation remains the same, but progress is modified to state that all well-typed terms\n    evaluate to a value.\n\n\\section{8.3.8}\n\n    \\textit{Suppose our evaluation relation is augmented with rules for reducing nonsensical}\n    \\textit{terms to an explicit} \\ms{wrong} \\textit{state, as in Exercise 3.5.16. Now how}\n    \\textit{should type safety be formalized?} \\\\\n\n    Type preservation remains the same, but progress is no longer useful, as terms can always\n    take a step (sometimes to \\ms{wrong}).\n\n\\end{document}\n", "meta": {"hexsha": "b95e08c9fa44d08244150b1bdb5fc13411a1459e", "size": 6494, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter-08/chapter-08.tex", "max_stars_repo_name": "nwtnni/tapl", "max_stars_repo_head_hexsha": "7a4184297f4de9ded7d918bc04895302dde9c161", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-28T17:20:53.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-28T17:20:53.000Z", "max_issues_repo_path": "chapter-08/chapter-08.tex", "max_issues_repo_name": "nwtnni/tapl", "max_issues_repo_head_hexsha": "7a4184297f4de9ded7d918bc04895302dde9c161", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter-08/chapter-08.tex", "max_forks_repo_name": "nwtnni/tapl", "max_forks_repo_head_hexsha": "7a4184297f4de9ded7d918bc04895302dde9c161", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2, "max_line_length": 123, "alphanum_fraction": 0.5916230366, "num_tokens": 2032, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7799929104825006, "lm_q1q2_score": 0.5726080264289103}}
{"text": "\\documentclass[amsmath,amssymb,aps,pra,reprint,groupedaddress,showpacs]{revtex4-1}\n\n\\usepackage{multirow}\n\\usepackage{verbatim}\n\\usepackage{color,graphicx, verbatim, float}\n \n\\input{../Common}\n\n\\begin{document} \n\n%%%%%%%%%%%%%%% TITLE %%%%%%%%%%%%%%%%\n\\title{Fibonacci Sequence Prime Factors Properties}\n \n%%%%%%%%%%%%%%% AUTHORS %%%%%%%%%%%%%%%%\n\\author{Lucchi Manuele}\n\\email[]{manuele.lucchi@studenti.unimi.it}\n\\affiliation{IT Department Students, Universita' degli Studi di Milano, Citta' degli Studi, Milano, Italia}\n\n%%%%%%%%%%%%%%% DATE %%%%%%%%%%%%%%%%\n\\date{\\today}\n\n%%%%%%%%%%%%%%% ABSTRACT %%%%%%%%%%%%%%%%\n\\begin{abstract}\nThe purpose of this research is to find a common pattern in the scomposition at prime factors of the Fibonacci Numbers.\n\\end{abstract} \n \n\\maketitle\n\n%%%%%%%%%%%%%%% INTRODUCTION %%%%%%%%%%%%%%%%\n\\section{Introduction}\n\nThe Fibonacci Sequence [1] is a recurrent equation introduced by Fibonacci in 1202. This sequence rapidly increases the size of its numbers and has a lot of properties.\nThe sequence is $F_n = F_{n-1} + F_{n-2}$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/fibonacci(100).png}\n\\caption{The Fibonacci Sequence}\n\\end{figure} \n\n%%%%%%%%%%%%%%% POWER OF 2 BEHAVIOURS IN FIBONACCI NUMBERS %%%%%%%%%%%%%%\n\\section{Powers of 2 behaviours in Fibonacci Numbers}\n\nWe first tested with the simpler prime numbers. If we look at the power of 2 present in the first 100 Fibonacci Numbers we get something like this\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,2).png} % Power of 2 in 100\n\\caption{Power of 2 as factor of the first 100 Fibonacci Numbers}\n\\end{figure}\n\nIt's pretty easy to see some sort of regularity between the power of 2 included in each Fibonacci Number. We can now increase the range to 1000 to see it better\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(1000,2).png} % Power of 2 in 1000\n\\caption{Power of 2 as factor of the first 1000 Fibonacci Numbers}\n\\end{figure}\n\nAnd again we can see a similar behaviour. First of all, we can notice how only numbers that are multiple of 3 are divisible by 2. \n  \nSo we can suppose that every number divisible by 3 has its Fibonacci equivalent to be even\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,2)only(3).png} % Power of 2 in 100 divisible for 3\n\\caption{Power of 2 as factor of the first 100 Fibonacci Numbers but only for $n$ divisible by 3}\n\\end{figure}\n\nInstead, if we look only at every $n$ not divisible by 3, we get something like this\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,2)except(3).png} % Power of 2 in 100 not divisible for 3\n\\caption{Power of 2 as factor of the first 100 Fibonacci Numbers but only for $n$ not divisible by 3}\n\\end{figure}\n\nSo we can draw a first conclusion: \n$$\\mathbf{\\forall n \\in \\mathbb{N} : 3\\mid n \\implies 2\\mid Fibonacci(n)}$$\n\nGoing forward, we can define the regularity for every exponent of 2, \nfor example, $F(n)$ is divisble by 2 starting from 3 (or $2^0 \\cdot 3$) and $\\forall n \\in \\mathbb{N}$ that is a multiple of 3.\nIf we search for Fibonacci Numbers divisible for $2^3$, we have to start from 6 (or $2^1 \\cdot 3$)\nand increase of 12 (or $2^2 \\cdot 3$) at time. 
For $2^4$ the starting $n$ becomes 12 (or $2^2 \\cdot 3$)\nand the increase between equal exponents is 24 (or $2^3 \\cdot 3$).\\\\\n\nSo, after defining $E$ as a certain exponent of 2 (with $E \\geq 3$) inside Fibonacci Numbers, the first $n$ at which it appears is $2^{E-2} \\cdot 3$ and the difference between the $n$'s that have the same exponent of 2 in $F(n)$\nis $2^{E-1} \\cdot 3$.\\\\\n\nHowever, the regularity in the exponents of 2 does not hold $\\forall n$, since the exponents 0, 1 and 2 don't follow it;\notherwise, with 1 as exponent we would have 3 as a starting point, but we got 6. Also, it seems that 2 will never appear as an exponent,\nso \\AllBold{we will not find any $F(n)$ that can be divided by $2^2$ and not by $2^k$ with $k \\in \\mathbb{N} \\land k>2$ at the same time}\n\n%%%%%%%%%%%%%%% A FIRST SPECIFIC FORMULA %%%%%%%%%%%%%%%%\n\\section{A first, specific Formula}\n\nWe now have all the tools for the development of a simple equation that gives us the power of 2 in the $n_{th}$ number of Fibonacci. Let's define $S_n$ as the exponent of 2 of the $n_{th}$ Fibonacci Number; \nwe can say that $\\mathbf{S_n = S_{n/2} + 1}$. But that is true only for $F(n)$ where $n$ is divisible by both 2 and 3, so we have to create special cases for the remaining $n$: for $2 \\nmid n \\land 3 \\mid n$ it's always 1, while for $3 \\nmid n$ (and for $n = 0$ or $n = 1$) it's always 0.\n\nSo the final recurrent formula will be: \n$$S_n = \\begin{cases} S_{n/2} + 1, & \\mbox{if }2|n \\land 3 \\mid n \\\\ 1, & \\mbox{if } 2 \\nmid n \\land 3|n \\\\ 0, & \\mbox{if } 3 \\nmid n \\lor n = 1 \\lor n = 0\n\\end{cases} $$\n\n%// TODO: the formula definitely needs improvement\n\n%%%%%%%%%%%%%%% A RECURRENT BEHAVIOR IN PRIME FACTORS DECOMPOSITION OF FIBONACCI NUMBERS %%%%%%%%%%%%%%%%\n\\section{A recurrent behavior in the Prime Factors Decomposition of Fibonacci Numbers}\n\n2 is not the only base to present a similar behavior. Every number (even non-prime ones) has a similar regularity, but with some differences.\nFor example, 3:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,3).png} % Power of 3 in 100\n\\caption{Power of 3 as factor of the first 100 Fibonacci Numbers}\n\\end{figure}\n\nOr 5:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,5).png} % Power of 5 in 100\n\\caption{Power of 5 as factor of the first 100 Fibonacci Numbers}\n\\end{figure}\n\nAnd so on, even for bigger prime numbers like 23:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(10000,23).png} % Power of 23 in 10000\n\\caption{Power of 23 as factor of the first 10000 Fibonacci Numbers}\n\\end{figure}\n\n%// BEGIN FOCUS ON STARTING POINT\n\n\\begin{comment}\n\nBut it's not exactly the same, it seems like the more we increase the base, the less the number appears as a prime factor in Fibonacci Numbers \n(so with an increasing distance between the same exponents), but also this is not true, there are a lot of cases that don't follow the growing rule.\n\nBut before getting in a more generic formula, we have to name some things (since they are going to be different for every base).\n\n\\begin{enumerate} \n\\item $\\mathbf{\\Gamma}$ will be the base of the prime (or non-prime) number we are taking as example. 
\n\\item $\\mathbf{\\Omega}$ will be the exponent of base $\\Gamma$ of $F(n)$ \n\\item $\\mathbf{\\Delta \\Omega}$ will be defined as the distance between the Fibonacci Number before the next equal exponent will appear\n\\item $\\mathbf{F(n)}$ will be the $n_{th}$ Fibonacci Number\n\\item $\\mathbf{\\lambda(\\Omega, \\Gamma)}$ will identify the function that returns the first $n$ where the exponent of the $\\Gamma$ base appears in $F(n)$\n\\item $\\mathbf{\\lambda_n(\\Omega, \\Gamma)}$ will, in the same way, be the $n_{th}$ Fibonacci Number where the given exponent for the given base appears\n\\end{enumerate} \n\nIf we look at FIG. 6 and 7, we have two examples to notice the differences between different bases.\nFor $\\Gamma = 3$ and $\\Gamma = 5$ we have $\\lambda(1, 3) = 4$ and $\\lambda(1, 5) = 6$.\nIt seems like $\\lambda(1, \\Gamma) = \\Gamma + 1$, but that's wrong: for example, \nfor $\\Gamma = 11$ we have $\\lambda(1, 11) = 10$. For $\\Gamma = 13$ it will be $n = 7$. \nSo let's have a look at the starting points of the first prime numbers:\n\n\\end{comment}\n\n%// END FOCUS ON STARTING POINT\n\n%%%%%%%%%%%%%%% FOCUS ON NON PRIME FACTORS %%%%%%%%%%%%%%%%\n\\section{Focus on non-prime factors}\n\nUntil now we tested only prime numbers, but we can expect a similar behavior for non-primes.\nFor example, if we look at the powers of 4 $(2^2)$, we notice that the exponent is exactly halved compared to that of 2. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/factor(100,[2,4]).png}\n\\caption{Comparison of powers of 2 and 4 of the first 1000 Fibonacci Numbers}\n\\end{figure}\n\nWe can get a better overview by comparing the distance between the same exponent of two different factors \nas they first appear. For example, using 2 and 4 as factors and subtracting the indices of the Fibonacci numbers\nwhere the exponent is 1, we get this:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(1000,2,4,1,minus).png}\n\\caption{Difference between indices of the same recurrence of the same grade of 2 and 4}\n\\end{figure}\n\nAs expected, it is a linear function. If we divide them instead, we can see that the ratio is exactly $\\frac{1}{2}$\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(1000,2,4,1,div).png}\n\\caption{Division between indices of the same recurrence of the same grade of 2 and 4}\n\\end{figure}\n\n%%%%%%%%%%%%%%% COMPARISON ON DIFFERENT BASES %%%%%%%%%%%%%%%%\n\\section{Comparison on different bases}\n\nBut if we try to obtain a coherent behaviour for other pairs of numbers, we will be disappointed. 
The examples below compare 9 and 3.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(1000,3,9,1,minus).png}\n\\caption{Difference between indices of the same recurrence of the same grade of 3 and 9}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(1000,3,9,1,div).png}\n\\caption{Division between indices of the same recurrence of the same grade of 3 and 9}\n\\end{figure}\n\nWhile in both of them we can see some sort of logic, the ratio is neither $1/3$ nor the square root ($9 = 3^2$).\nHowever, this sort of behavior is once again recurrent for every number, even between numbers that don't have anything in common.\nFor example, 2 and 3:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(500,2,3,1,minus).png}\n\\caption{Difference between indices of the same recurrence of the same grade of 2 and 3}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=3in]{assets/distance(500,2,3,1,div).png}\n\\caption{Division between indices of the same recurrence of the same grade of 2 and 3}\n\\end{figure}\n\nWe tested many possibilities but were unable to find what connects them all, except for one thing:\nin every test made by dividing the corresponding indices, the ratio stabilizes near a certain number as the iterations go forward.\nWe can already see that in FIG. 13, where it settles somewhere between 0.44 and 0.45, or in FIG. 15, where it tends to 1.\nOn the other hand, subtracting the indices shows some different behaviours, from linear functions to regular curves that try to mimic linear functions;\nor again, what we saw in FIG. 14, which is pretty regular but different from everything else seen so far.\nOf course, the same test can be done with different exponent grades, with the same properties.\n\n%%%%%%%%%%%%%%% CONCLUSION %%%%%%%%%%%%%%%%\n\\section{Conclusion}\n\nAs we know, the Fibonacci Numbers hold a great number of interesting properties, and this one adds to the list.\n\n%%%%%%%%%%%%%%% REFERENCES %%%%%%%%%%%%%%%%\n\\begin{thebibliography}{24}\n \n\\bibitem{fibonaccigeneral}\n{OEIS},\n\\textit{Sequence A000045 (Fibonacci Numbers)}\n\n\\end{thebibliography}\n\n\\end{document}\n\n%%https://www.overleaf.com/learn/latex/Positioning_images_and_tables", "meta": {"hexsha": "31e98171455428b4cdbd105ffefec5a567d06042", "size": 11195, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex", "max_stars_repo_name": "manuelelucchi/Fibonacci", "max_stars_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-06T23:49:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-06T23:49:23.000Z", "max_issues_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex", "max_issues_repo_name": "manuelelucchi/Fibonacci", "max_issues_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prime_factors_scomposition/fibonacci_prime_factors_scomposition.tex", "max_forks_repo_name": "manuelelucchi/Fibonacci", "max_forks_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.9598393574, "max_line_length": 217, "alphanum_fraction": 0.7210361769, "num_tokens": 3266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5726080189200466}}
{"text": "% Created: Enze Chen, May 2017\r\n% Last edited: Enze Chen, December 2017\r\n%\r\n% Appendix B of the MSE 142 coursereader. This chapter reviews the math and physics a student ought to know when starting this course.\r\n% Math: Differentiation, integration, and differential equations. \r\n% Physics: Forces, potential, simple harmonic motion.\r\n\r\n% Uncomment the following three lines and last line to individually compile this chapter\r\n%\\documentclass[12pt, english]{book}\r\n%\\usepackage{142crstyle}\r\n%\\begin{document}\r\n\r\n\\chapter{Prerequisites} \\label{ch:prereq}\r\n%{ \\doublespacing\r\nIdeally, you should find most of the material in this section to be review. More advanced concepts will be developed as we go along, but just to make sure we are all starting on the same page, I have listed the major concepts that I expect my students to be familiar with coming into this course. It would be wise to skim this section and take some more time with concepts you might not be as familiar with---nailing those will allow you to get a lot more out of this course! If any of the following topics are confusing, please stop by office hours and I'd be happy to go over it. \\par \r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\section{Math review} \\label{sec:math}\r\n\\subsection{Complex numbers}\r\nQuantum mechanics lives in complex space, and you will find yourself manipulating complex numbers quite frequently. A complex number is a number that can be expressed in the form \r\n\\begin{tcolorbox}[title=Complex numbers]\r\n$a + bi$, where $i = \\sqrt{-1}$ and $a,b \\in \\mathbb{R}$\r\n\\end{tcolorbox}\r\n\r\nWe begin by looking at some powers of $i$. Namely,\r\n\\begin{align*}\r\ni^1 &= \\sqrt{-1} = i \\\\\r\ni^2 &= i \\cdot i = -1 \\\\\r\ni^3 &= (i^2)i = -i \\\\\r\ni^4 &= (i^3)i =  1 \\\\\r\ni^5 &= (i^4)i = i \\tag{same as $i^1$}\\\\\r\n&\\vdots \r\n\\end{align*}\r\nand so on, where we begin to see it repeating in the exponent modulo 4. When we look at negative exponents, the same pattern holds.\r\n\\begin{align*}\r\ni^1 &= \\sqrt{-1} = i \\\\\r\ni^0 &= 1 \\\\\r\ni^{-1} &= \\dfrac{1}{i} = \\dfrac{i}{i\\cdot i} = -i \\tag{same as $i^{3}$} \\\\\r\ni^{-2} &= \\dfrac{i^{-1}}{i} = \\dfrac{-i}{i} = -1 \\tag{same as $i^2$} \\\\\r\n&\\vdots \r\n\\end{align*}\r\n\r\nWhenever we want to eliminate complex numbers and work with real numbers instead, we multiply the complex number $a+bi$ by its \\textbf{complex conjugate} $a-bi$, which has the opposite imaginary part. When we do so, we obtain \r\n\\begin{equation*} \r\n(a+bi)(a-bi) = a(a) + a(-bi) + bi(a) + bi(-bi) = a^2 - \\cancel{abi} + \\cancel{abi} + b^2(-i^2) = a^2+b^2\r\n\\end{equation*}\r\n\r\nNotation wise, if we have a complex number $\\zeta = a + bi$, we denote its complex conjugate using an asterisk, such that $\\zeta^* = a - bi$. \\par\r\n\r\nOne of the most amazing results in complex analysis is \\textbf{Euler's formula},~\\footnote{See \\href{https://en.wikipedia.org/wiki/Euler\\%27s\\_formula}{Wikipedia} for details and a proof.} which states that \r\n\\begin{tcolorbox}[title=Euler's formula]\r\n\t$e^{ix} = \\cos(x) + i\\sin(x), \\quad \\forall x \\in \\mathbb{R}$\r\n\\end{tcolorbox}\r\n\r\nWhen this formula is evaluated at $x=\\pi$, we arrive at the identity $e^{i\\pi} = \\cos(\\pi) + i\\sin(\\pi) = -1$. Euler's formula will be extremely useful in our analysis of waves, because the sinusoidal behavior of cosine and sine are all encompassed neatly in the complex exponential term. 
I'll give two final comments about Euler's formula here:\r\n\\begin{enumerate}[1.]\r\n\t\\item Though it's written in a slightly different form, $e^{ix}$ is still a complex number, with $a = \\cos(x)$ and $b = \\sin(x)$. Therefore, it has a complex conjugate equal to $e^{-ix}=\\cos(x) -i\\sin(x)$. Let's apply what we did above and multiply $e^{ix}$ and $e^{-ix}$.\r\n\t\\begin{align*}\r\n\te^{ix} \\cdot e^{-ix} &= (\\cos(x) + i\\sin(x))(\\cos(x)-i\\sin(x)) \\\\\r\n\te^{ix} \\cdot e^{-ix} &= \\cos^2(x) -i\\cos(x)\\sin(x) + i\\sin(x)\\cos(x) -i^2\\sin^2(x) \\\\\r\n\te^{ix} \\cdot e^{-ix} &= \\cos^2(x) + \\sin^2(x) \\\\\r\n\t\\Aboxed{e^{ix} \\cdot e^{-ix} &= 1} \r\n\t\\end{align*}\r\n\t\r\n\tWe could have also arrived at this result by using the identity $e^a \\cdot e^b = e^{a+b}$. Proceeding, $e^{ix} \\cdot e^{-ix} = e^{ix - ix} = e^0 = 1$.\r\n\t\r\n\t\\item With the complex conjugate on hand, we can now easily extract the individual cosine and sine terms. Notice what happens if we add $e^{ix}$ to $e^{-ix}$:\r\n\t\\begin{align*}\r\n\te^{ix} + e^{-ix} &= (\\cos(x) + \\cancel{i\\sin(x)}) + (\\cos(x) - \\cancel{i\\sin(x)}) \\\\\r\n\te^{ix} + e^{-ix} &= 2\\cos(x) \\\\\r\n\t\\Aboxed{ \\dfrac{e^{ix} + e^{-ix}}{2}&= \\cos(x) }\r\n\t\\end{align*}\r\n\t\r\n\tIn a similar vein, we can show that $\\boxed{\\dfrac{e^{ix} - e^{-ix}}{2i} = \\sin(x)}$ (note the $i$ in the denominator).\r\n\t\r\n\\end{enumerate}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Differentiation} \\label{sec:diff}\r\nNo matter which form it takes, the \\Sch\\ equation is inherently a differential equation, so we should be prepared to work with derivatives in this course. Before jumping into the math, it's important to understand at a high level what a derivative represents. \\par \r\n\\begin{enumerate}[1.]\r\n\t\\item Namely, the \\textbf{first derivative} represents the slope of a tangent line to the function, or the instantaneous rate of change.\r\n\t\\item The \\textbf{second derivative} tells us the concavity of the function, where a positive value symbolizes concave up and a negative value for concave down.\r\n\\end{enumerate} \r\n\r\n\\textbf{Note}: I will be using both $\\dv{x} f$ and $f'$ notation for derivatives, so I kindly ask for your patience and flexibility. This might also be the first time you see the notation $\\pdv{x} f$, which represents a \\emph{partial} derivative of $f$ with respect to the variable $x$. We won't be working with them too much, but know that it behaves exactly the same way as a normal derivative would. It's written this way to clarify that the function $f$ depends on more variables than just $x$, and we would only like to know how it changes with respect to changes in $x$. \\par \r\n\r\nThe three main types of functions you will need to differentiate are:\r\n\\begin{enumerate}[1.]\r\n\t\\item Polynomial: We apply the power rule, $\\dv{x} x^n = nx^{n-1}$.\r\n\t\\item Exponential: The derivative $ \\dv{x} e^{f(x)} = f'(x)e^{f(x)}$.\r\n\t\\item Trigonometric: $\\dv{x} \\cos(x) = -\\sin(x)$. $\\dv{x} \\sin(x) = \\cos(x)$. 
\r\n\\end{enumerate}\r\n\r\nIn every case, watch out for \r\n\\begin{itemize}\r\n\t\\item Product rule: $\\dv{x} [f(x)\\cdot g(x)] = f(x)\\cdot g'(x) + f'(x)\\cdot g(x)$.\r\n\t\\item Chain rule: $\\dv{x} [f(g(x))] = f'(g(x)) \\cdot g'(x)$.\r\n\\end{itemize}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Taylor series}\r\n\r\nDerivatives involve infinitesimals, and one technique that we will apply (and you will master!) is taking \\textbf{Taylor series}~\\footnote{See \\href{http://tutorial.math.lamar.edu/Classes/CalcII/TaylorSeries.aspx}{Paul Dawkins' page} for more information.} expansions of functions. We will rely on Taylor series heavily to make approximations of functions whenever the result approaches 0. The Taylor series of a function $f(x)$ at a point $x=a$ is given by the infinite power series:\r\n\\begin{tcolorbox}[title=Taylor series formula]\r\n$f(x)|_{x=a} = f(a) + f'(a)(x-a) + \\dfrac{f''(a)}{2!}(x-a)^2 + \\dots = \\displaystyle\\sum_{n=0}^{\\infty} \\dfrac{f^{(n)}(a)}{n!}(x-a)^n$\r\n\\end{tcolorbox}\r\n\r\nThe \\emph{order} of a Taylor series is the largest degree polynomial in the series expansion that we write out explicitly. Everything above the highest-order term is typically discarded or abbreviated using $\\text{big O}$ notation. As an example, the Taylor series for cosine about $x=0$ can be expressed as \\[ \\cos(x)|_{x=0} = 1 - \\dfrac{x^2}{2!} + \\order{x^4} \\] where we lumped together everything after the second order term because those values are small enough that they won't affect the final answer and can be safely discarded. There's no requirement to memorize any specific Taylor series expansions for this class; you should just look them up in a table or derive the first few terms (we generally don't go beyond second order terms).\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Integration}\r\nThis course will feature plenty of integrals, not all of which are intuitive, but we will walk through all of them in lecture. As long as you have solid fundamentals, and are able to keep track of signs, integration limits, etc., then they shouldn't present a problem. \\par \r\n\r\nOne thing to keep in mind for integration, differentiation, and this course in general is the idea of \\emph{linear} operations/operators. What this essentially means is that we can distribute the operation across a sum, with integration being an example:\r\n\\begin{tcolorbox}[title=Linear operations---integration example]\r\n\t$\\displaystyle\\int \\left[ f(x) + g(x) + h(x) \\right]\\dd{x} = \\displaystyle\\int f(x)\\dd{x} + \\displaystyle\\int g(x)\\dd{x} + \\displaystyle\\int h(x)\\dd{x}$\r\n\\end{tcolorbox}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Differential equations}\r\nKnowledge of basic differential equations can be helpful for this course, but we will be sure to cover all the bases. Just know that a differential equation such as the \\Sch\\ equation combines all the fun of differentiation with the fun of integration! A differential equation always features some combination of derivatives, variables, and constants, an example of which is given by \\[ \\dv{y}{x} + f(x)y = 0 \\] \r\nwhere $f(x)$ is some arbitrary function that only depends on $x$. 
To cover some basic terminology, the above equation is a \\textbf{linear}, \\textbf{first-order} differential equation because the derivatives of $y$ (our dependent variable) are not multiplied together (only added) and the highest-order derivative of $y$ is the first derivative. This equation is also \\textbf{separable} because we are able to rearrange the equation to have all variables of the same type on each side of the equation.\r\n\\begin{align*}\r\n\\dv{y}{x} + f(x)y &= 0 \\\\\r\n\\dv{y}{x} &= -f(x)y \\\\\r\n\\frac{1}{y}\\dd{y} &= -f(x)\\dd{x}\r\n\\end{align*}\r\n\r\nSeparable differential equations are nice for many reasons, one of which is that they can be solved by straightforward integration. We proceed to get\r\n\\begin{align*}\r\n\\int\\frac{1}{y}\\dd{y} &= \\int-f(x)\\dd{x} \\\\\r\n\\ln y &= -\\int f(x)\\dd{x} + C \\\\\r\ny &= y_0\\exp\\left(-\\int f(x)\\dd{x}\\right)\r\n\\end{align*}\r\n\r\nwhere $y_0$ appears as the integration constant depending on initial conditions. We won't be focusing too much on solving differential equations in this class, at least not in this way, and this example was just meant to give you a general idea for the procedure.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\section{Physics review}\r\nOften times, it helps to reframe the abstract nature of quantum mechanics in a classical mechanics context, so this section will highlight some key points in classical mechanics and electromagnetism. \\par \r\n\r\n\\subsection{Forces and potentials}\r\n\r\nNewton's laws of motion are fundamentally important, and you should be familiar in particular with the Second law, which says that force equals mass times acceleration.\r\n\\begin{tcolorbox}[title=Newton's Second law] \\vspace{-2ex}\r\n\\[ F = ma = m \\dv{v}{t} = m\\dv[2]{x}{t} \\]\r\n\\end{tcolorbox} \r\n\r\nThis expression can be rearranged to obtain \r\n\\begin{equation*}\r\n\tF = m\\dv{v}{t} \\implies \\int F\\dd{t} = \\int m\\dd{v} \\implies \\boxed{p = mv}\r\n\\end{equation*}\r\nwhere $p$ is the symbol for \\textbf{momentum}. \\par \r\n\r\nWe can also find the energy/work done by multiplying together force and distance, such that $\\Delta U = -W = -F \\Delta x$. In differential terms, this is commonly expressed as \r\n\\begin{equation}\r\n\\boxed{F = -\\dv{U}{x}} \\label{eq:FdU}\r\n\\end{equation}\r\nwhere the direction of the force is opposite the energy gradient (e.g. a hill slopes upwards, but pushes a ball downwards). \\par \r\n\r\nYou should already know the expressions for the different forms of energy, shown in the table below. \\par \r\n\\begin{table}[!h]\r\n\t\\centering \r\n\t\\begin{tabular}{ll}\r\n\t\t\\textbf{Form} & \\textbf{Expression} \\\\ \\toprule \r\n\t\tGravitational potential energy & $U_g = mgh$ \\\\ \\rule{0ex}{3ex}\r\n\t\tElastic potential energy & $U_e = \\frac{1}{2}kx^2$ \\\\ \\rule{0ex}{3ex}\r\n\t\tKinetic energy & $K = \\frac{1}{2}mv^2$\r\n\t\\end{tabular}\r\n\\end{table} \r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Simple harmonic motion} \\label{sec:shm}\r\nThe behavior of atoms and subatomic particles can often be modeled using harmonic oscillators, so it's important to start with a solid understanding of simple harmonic motion (SHM) and the simple harmonic oscillator (SHO). 
\r\n\r\n\\begin{figure}[!h]\r\n\t\\centering\r\n\t\\begin{tikzpicture}\r\n\t\\node[draw, rectangle, minimum size=8mm] (m) at (2.9,0) {$m$};\r\n\t\\draw[decoration={aspect=0.45, segment length=3mm, amplitude=3mm,coil},decorate] (0,0)--(m); \r\n\t\\fill[pattern = north east lines] (0,-0.42) rectangle (-0.4,1);\r\n\t\\draw[thick] (0,1)--(0,-0.42)--(3.5,-0.42);\r\n\t\\end{tikzpicture}\r\n\t\\caption{Model of a simple harmonic oscillator as a mass on a spring.}\r\n\t\\label{fig:SHO-model}\r\n\\end{figure} \r\n\r\nWe can think of a SHO as a mass on a spring (Figure~\\ref{fig:SHO-model}). In equilibrium, everything is stationary; but when the mass is perturbed slightly, it feels a restoring force from the spring given by \\textbf{Hooke's Law}: $F=-kx$, where $k$ is the spring constant and $x$ is the amount of displacement from equilibrium. The elastic potential energy stored in the spring is given by $U_e=\\frac{1}{2}kx^2$, which can be derived by integrating Hooke's Law according to Equation~\\ref{eq:FdU}. \\par \r\n\r\nWith SHM, we typically like to think of cycles in terms of angle traversed, which makes it easier to model using trigonometric functions. This leads to \\textbf{angular frequency}, $\\omega$, which has units of radians per second (rad/s). In terms of the cycle frequency $f$, we have $\\omega = 2\\pi f$. Since period is the inverse of frequency, this also leads to $\\omega = \\frac{2\\pi}{T}$. Perhaps most importantly, we can relate the angular frequency to the spring constant and mass by \r\n\\begin{tcolorbox}[title=Key Relationship] \\vspace*{-2ex}\r\n\\begin{equation}\r\n\\omega=\\sqrt{\\frac{k}{m}} \\label{eq:wkm}\r\n\\end{equation}\r\n\\end{tcolorbox}\r\n\r\nWe will not derive Equation~\\ref{eq:wkm} here, but we will apply the result often when analyzing the quantum harmonic oscillator.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n\\subsection{Electromagnetism}\r\nThen we have the electromagnetic analog to mechanics. As we analyze the behavior of electrons, we will treat them as point charges $q$ that experience a force given by \r\n\\begin{equation*}\r\n\tF = qE = -q\\dv{V}{x}\r\n\\end{equation*}\r\n\r\nwhere $E$ is the electric field and $V$ is the electric potential. Similar to the gravitational context, the force is related to the electric potential energy by $F = -\\dv{U}{x}$, and an electron with mass $m_e$ experiences an acceleration according to Newton's Second law, $F = m_ea$. When dealing with two point charges, the force is also given by \\textbf{Coulomb's law}, an inverse-square law which states that\r\n\\begin{equation*}\r\n\tF = \\frac{1}{4\\pi \\varepsilon_0} \\frac{q_1q_2}{r^2}\r\n\\end{equation*}\r\n\r\nwhere $\\varepsilon_0$ is the permittivity of free space, $q_1$ and $q_2$ are the charges of the particles, and $r$ is the distance between them. \\par \r\n\r\nIt may also be helpful to know that light is one form of electromagnetic radiation, and the force carrier for the electromagnetic force is called the \\textbf{photon}. Traveling electromagnetic radiation has both an electric field and magnetic field that are orthogonal (perpendicular) to each other (Figure~\\ref{fig:EB-field}). 
The \\emph{frequency} of light is directly proportional to its energy and the range of frequencies of electromagnetic radiation make up the \\textbf{electromagnetic spectrum} (Figure~\\ref{fig:EM-spec}).\r\n\r\n\\begin{figure}[!h]\r\n\t\\centering\r\n\t\\includegraphics[width=0.7\\linewidth]{EB-field}\r\n\t\\caption{Electromagnetic radiation contains orthogonal electric and magnetic fields. Image courtesy of \\href{http://www.schoolphysics.co.uk/age16-19/Wave\\%20properties/Polarisation/text/Polarisation_/images/2.png}{schoolphysics}.}\r\n\t\\label{fig:EB-field}\r\n\\end{figure}\r\n\r\n\\begin{figure}[!h]\r\n\t\\centering\r\n\t\\includegraphics[width=0.9\\linewidth]{EM-spectrum}\r\n\t\\caption{The electromagnetic spectrum. Image courtesy of \\href{https://sites.google.com/site/chempendix/em-spectrum}{Sapling Learning}.}\r\n\t\\label{fig:EM-spec}\r\n\\end{figure}\r\n\r\n%}\r\n%\\end{document}", "meta": {"hexsha": "89f20dc6bdf6b83e17b7408f0a849d6c727919e2", "size": 16664, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapter_b_review.tex", "max_stars_repo_name": "Enze-Chen/mse_142_cr", "max_stars_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-13T17:08:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-13T17:08:24.000Z", "max_issues_repo_path": "tex/chapter_b_review.tex", "max_issues_repo_name": "Enze-Chen/mse_142_cr", "max_issues_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/chapter_b_review.tex", "max_forks_repo_name": "Enze-Chen/mse_142_cr", "max_forks_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.7238493724, "max_line_length": 746, "alphanum_fraction": 0.6828492559, "num_tokens": 4628, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5726080189200466}}
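As a quick numerical sanity check of the Euler's-formula identities from the math review above, the following few lines of C (an illustration of ours, not part of the coursereader) use the standard <complex.h> to confirm that $e^{ix}e^{-ix} = 1$, $(e^{ix}+e^{-ix})/2 = \cos x$ and $(e^{ix}-e^{-ix})/(2i) = \sin x$.

\begin{verbatim}
#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void)
{
    double x = 1.234;                 /* arbitrary test angle */
    double complex z = cexp(I * x);   /* e^{ix} = cos x + i sin x */
    double complex w = cexp(-I * x);  /* its complex conjugate    */

    printf("e^{ix} * e^{-ix}      = %g\n", creal(z * w));            /* 1 */
    printf("(e^{ix}+e^{-ix})/2    = %g\n", creal((z + w) / 2));
    printf("cos(x)                = %g\n", cos(x));
    printf("(e^{ix}-e^{-ix})/(2i) = %g\n", creal((z - w) / (2 * I)));
    printf("sin(x)                = %g\n", sin(x));
    return 0;
}
\end{verbatim}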
{"text": "\\section{Part 2. Gaussian Random Field}\n\nThe file of the functions used for this exercise is:\n\n\\lstinputlisting{part_two.py}\n\nMy script produces the following results, showing the Gaussian random fields for n = -1, -2, and -3, in Figures \\ref{fig:gauss1}, \\ref{fig:gauss2}, and \\ref{fig:gauss3}, respecitvely.\nAs I chose the size to be in Mpc, the minimum size that these plots show is 1 Mpc. This affects the\nmaximum size in conjunction with $n$, as $n$ influences the structure formation on different scales. So, for example,\nin Fig. \\ref{fig:gauss3}, there are over and underdensities on the scale of a few hundred Mpc, while for\n$n = -1$, the over and underdensities are on the order of one to a couple of Mpc in size. If I had chosen a physical size\nof 1 pixel is 1 kpc instead, the same would hold true, with the largest over and underdensities for $n = -3$ being a few hundred kpc\nin size, while for a larger $n$, $n = -1$, they would be at much smaller scales.\n\nAdditionally, as $n$ increases from -3 to -1, $k$'s absolute value goes up, suggesting that the minimum and maximum $k$\nvalues get larger. For $n = -3$, the maximum over or underdensity has a $k$ value of around 0.003, while for $n = -1$, that value\ngoes up by nearly an order of magnitude to 0.025. The minimum $k$ value between those two $n$ values similar changes, going from 0.0005\nto 0.005.\n\nThe larger $k$ values for larger $n$ valules also corresponds to smaller, more detailed structures in the density field, while\nlower $k$ values correspond to much larger in space fluctuations.\n\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=0.9\\linewidth]{./plots/GaussianField-1.png}\n  \\caption{Gauss Random Field for n = -1}\n  \\label{fig:gauss1}\n\\end{figure}\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=0.9\\linewidth]{./plots/GaussianField-2.png}\n  \\caption{Gauss Random Field for n = -2}\n  \\label{fig:gauss2}\n\\end{figure}\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=0.9\\linewidth]{./plots/GaussianField-3.png}\n  \\caption{Gauss Random Field for n = -3}\n  \\label{fig:gauss3}\n\\end{figure}\n\n", "meta": {"hexsha": "3ad7cc35caae5ffce8ba9b1bc6a4207cb90f2aad", "size": 2096, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "part_2.tex", "max_stars_repo_name": "jacobbieker/NUR_Handin2", "max_stars_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "part_2.tex", "max_issues_repo_name": "jacobbieker/NUR_Handin2", "max_issues_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "part_2.tex", "max_forks_repo_name": "jacobbieker/NUR_Handin2", "max_forks_repo_head_hexsha": "6e620b23191edaec4452d29eac90ec37ced0c038", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-05-17T07:33:07.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-17T07:33:07.000Z", "avg_line_length": 46.5777777778, "max_line_length": 183, "alphanum_fraction": 0.7347328244, "num_tokens": 625, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5726080189200466}}
{"text": "\\chapter{The arith Library}\n\nThis document describes the facilities provided by the \\ml{arith} library\nfor the HOL system~\\cite{description}. The main function is a partial decision\nprocedure for a subset of linear natural number\narithmetic\\index{linear arithmetic}. There are also some associated functions,\nand some more general tools for normalisation and for increasing the power of\nproof procedures.\n\nThe main function, \\ml{ARITH\\_CONV}, is a partial decision procedure for\nPresburger natural arithmetic\\index{Presburger arithmetic}. Presburger natural\narithmetic is the subset of arithmetic formulae made up from natural number\nconstants, numeric variables, addition, multiplication by a constant, the\nrelations on the natural numbers ($<$, $\\leq$, $=$, $\\geq$, $>$) and the\nlogical connectives ($\\neg$, $\\wedge$, $\\vee$, $\\Rightarrow$,\n$\\Leftrightarrow$, $\\forall$, $\\exists$).\nProducts\\index{products}\\index{multiplication} of two expressions which both\ncontain variables are not included in the subset, but the function\n{\\small\\verb%SUC%} which is not normally included in a specification of\nPresburger arithmetic are allowed in this \\HOL\\ implementation.\n\n\\ml{ARITH\\_CONV} further restricts the subset as follows: when the formula has\nbeen put in prenex normal form\\index{prenex normal form} it must contain only\none kind of quantifier, that is the quantifiers must either all be\nuniversal\\index{quantifiers!universal} (`forall') or all\nexistential\\index{quantifiers!existential}. Variables may appear\nfree\\index{free variables} (unquantified) provided any quantifiers that do\nappear in the prenex normal form are universal; free variables are taken as\nbeing implicitly universally quantified so mixing them with existential\nquantifiers would violate the above restriction.\n\n\\index{normalisation}\nThe function makes use of a number of preprocessors\\index{preprocessors} which\nextend the coverage beyond the subset specified above and which are also\navailable as separate functions. Natural number subtraction\\index{subtraction},\nthe predecessor function {\\small\\verb%PRE%}, and conditional\nstatements\\index{conditional statements} can be eliminated using the function\n\\ml{SUB\\_AND\\_COND\\_ELIM\\_CONV}. Another preprocessor, \\ml{INSTANCE\\_T\\_CONV},\npermits substitution instances\\index{instances}\\index{substitution instances}\nof universally quantified formulae to be accepted. There is also a function\nfor putting a formula into prenex normal form\\index{prenex normal form}:\n\\ml{PRENEX\\_CONV}.\n\n\n\\section{The method of proof}\n\nThe main function, \\ml{ARITH\\_CONV}, is constructed from two separate\nprocedures, one for universally quantified formulae (\\ml{FORALL\\_ARITH\\_CONV})\nand one for existentially quantified formulae (\\ml{EXISTS\\_ARITH\\_CONV}).\n\\ml{FORALL\\_ARITH\\_CONV}\\index{quantifiers!universal} uses a variable\nelimination\\index{variable elimination} technique similar to the one described\nby Hodes~\\cite{VarElimHodes}\\index{Hodes}. 
This procedure is\nincomplete\\index{completeness} for the natural numbers\\index{natural numbers}.\nIn particular, it does not prove formulae which depend on the integral\nproperties\\index{integral properties} of natural numbers.\n\n\\ml{EXISTS\\_ARITH\\_CONV}\\index{quantifiers!existential} uses a technique of\nShostak's~\\cite{SUPINFShostak}\\index{Shostak} based on the\n{\\small SUP-INF}\\index{SUP-INF method} method due to\nBledsoe~\\cite{SUPINFBledsoe}\\index{Bledsoe}. This technique is able to find\nwitnesses\\index{witnesses} for the existentially quantified variables, that is\nvalues which satisfy\\index{satisfaction of formulae} the formula.\nUnfortunately this method too is incomplete\\index{completeness} for the\nnatural numbers. The implementation used by \\ml{EXISTS\\_ARITH\\_CONV} does not\nhave a very good coverage. It could be improved but only at the expense of\nincreased computation times. Despite the incompleteness, the procedure should\nbe of some use.\n\nBoth of the proof methods are described more fully in a\nreport~\\cite{EfficFullyExpTP} which also discusses techniques for writing\nefficient\\index{efficient proof procedures} proof procedures in the \\HOL\\\nsystem.\n\n\n\\section{Using the library}\n\nThe \\ml{arith} library can be loaded into a user's \\HOL\\ session using the\nfunction \\ml{load\\_library}\\index{load\\_library@{\\ptt load\\_library}} (see the\n\\HOL\\ manual for a general description of library loading). The first action\nin the load sequence initiated by \\ml{load\\_library} is to update the \\HOL\\\nhelp\\index{help!updating search path} search path. The help search path is\nupdated with a pathname to online help files for the \\ML\\ functions in the\nlibrary. After updating the help search path, the \\ml{reduce}\nlibrary~\\cite{reduce} is loaded. The \\ml{arith} library makes use of some of\nthe functions in the \\ml{reduce} library. Finally, the \\ML\\ functions in the\n\\ml{arith} library are loaded into \\HOL.\n\nThe following session shows how the \\ml{arith} library may be loaded using\n\\ml{load\\_library}:\n\n\\setcounter{sessioncount}{1}\n\\begin{session}\\begin{verbatim}\n#load_library `arith`;;\nLoading library `arith` ...\nUpdating help search path\n.Loading library `reduce` ...\nExtending help search path.\nLoading boolean conversions........\nLoading arithmetic conversions..................\nLoading general conversions, rule and tactic.....\nLibrary `reduce` loaded.\n.............................................................................\n.............................................................................\n.............................................................................\n.............................................................................\n.............................................................................\n................................\nLibrary `arith` loaded.\n() : void\n\n#\n\\end{verbatim}\\end{session}\n\n\\noindent\nThe arithmetic proof procedure can be called by applying the conversion\n\\ml{ARITH\\_CONV}:\n\n\\begin{session}\\begin{verbatim}\n#ARITH_CONV \"!m n p. m < n /\\ n < p ==> m < p\";;\n|- (!m n p. m < n /\\ n < p ==> m < p) = T\n\\end{verbatim}\\end{session}\n\n\\noindent\nIt is easy to define a function which returns a theorem directly corresponding\nto the arithmetic formula:\n\n\\begin{session}\\begin{verbatim}\n#let ARITH_PROVE = EQT_ELIM o ARITH_CONV;;\nARITH_PROVE = - : conv\n\n#ARITH_PROVE \"!m n p. m < n /\\ n < p ==> m < p\";;\n|- !m n p. 
m < n /\\ n < p ==> m < p\n\\end{verbatim}\\end{session}\n\n\n\\section{Using the procedure to prove a formula false}\n\nContinuing the session from the previous section, here is an example of how\nto use the proof procedure to prove that a formula is\nfalse\\index{false!proof of}\\index{proving false}:\n\n\\begin{session}\\begin{verbatim}\n#NEGATE_CONV ARITH_CONV \"!m n. m < n\";;\n|- (!m n. m < n) = F\n\n#NEGATE_CONV ARITH_CONV \"?m n. (m + n) < 0\";;\n|- (?m n. (m + n) < 0) = F\n\\end{verbatim}\\end{session}\n\n\\noindent\n\\ml{NEGATE\\_CONV} works by applying the proof procedure to the\nnegation\\index{negation} of the formula. This means that when proving a\nuniversally quantified formula false, the main procedure is actually trying to\nprove that an existentially quantified formula is true. One could define a\nfunction that proves formulae either true or false, but this would be\ninefficient as for certain examples it would apply \\ml{ARITH\\_CONV} twice. The\nauthor anticipates that in most circumstances the user will know whether the\nformula is true or false and can therefore apply either \\ml{ARITH\\_CONV} or\n\\ml{NEGATE\\_CONV ARITH\\_CONV} as appropriate. A combined function is\nillustrated below:\n\n\\begin{session}\\begin{verbatim}\n#let TF_ARITH_CONV = ARITH_CONV ORELSEC NEGATE_CONV ARITH_CONV;;\nTF_ARITH_CONV = - : conv\n\n#TF_ARITH_CONV \"!m n p. m < n /\\ n < p ==> m < p\";;\n|- (!m n p. m < n /\\ n < p ==> m < p) = T\n\n#TF_ARITH_CONV \"!m n. m < n\";;\n|- (!m n. m < n) = F\n\\end{verbatim}\\end{session}\n\n\n\\section{Constructing a tactic}\n\nConsider the following goal\\index{goals}:\n\n\\begin{session}\\begin{verbatim}\n#g \"!x y t. x /\\ ((~(t + 3 = 0)) => F | y) = y /\\ x\";;\n\"!x y t. x /\\ ((~(t + 3 = 0)) => F | y) = y /\\ x\"\n\n() : void\n\\end{verbatim}\\end{session}\n\n\\noindent\n\\ml{ARITH\\_CONV} can be used as a tactic\\index{tactics} as illustrated below:\n\n\\begin{session}\\begin{verbatim}\n#expand (CONV_TAC ARITH_CONV);;\nOK..\nevaluation failed     ARITH_CONV -- formula not in the allowed subset\n\\end{verbatim}\\end{session}\n\n\\noindent\nThis fails because the arithmetic conversion is only being applied to the\nwhole goal. To simplify the goal it is necessary to apply the conversion\ndeep\\index{depth conversion} within the goal:\n\n\\begin{session}\\begin{verbatim}\n#let DEPTH_ARITH_TAC = CONV_TAC (ONCE_DEPTH_CONV ARITH_CONV);;\nDEPTH_ARITH_TAC = - : tactic\n\n#expand DEPTH_ARITH_TAC;;\nOK..\n\"!x y t. 
x /\\ (T => F | y) = y /\\ x\"\n\n() : void\n\\end{verbatim}\\end{session}\n", "meta": {"hexsha": "7782bea8820fb07ff0889d1cac9f37d4dcb99782", "size": 8761, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/num/arith/Manual/description.tex", "max_stars_repo_name": "dwRchyngqxs/HOL", "max_stars_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 492, "max_stars_repo_stars_event_min_datetime": "2015-01-07T16:36:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T22:18:48.000Z", "max_issues_repo_path": "src/num/arith/Manual/description.tex", "max_issues_repo_name": "dwRchyngqxs/HOL", "max_issues_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 759, "max_issues_repo_issues_event_min_datetime": "2015-01-01T00:40:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T17:33:39.000Z", "max_forks_repo_path": "src/num/arith/Manual/description.tex", "max_forks_repo_name": "dwRchyngqxs/HOL", "max_forks_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 126, "max_forks_repo_forks_event_min_datetime": "2015-02-17T03:20:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T00:42:55.000Z", "avg_line_length": 41.9186602871, "max_line_length": 79, "alphanum_fraction": 0.7222919758, "num_tokens": 2263, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799928900257126, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5726080159484458}}
{"text": "\\chapter{What are Linear models?}\n\nChapter aims:\n\n\\begin{compactitem}\n\t\\item developing an intuitive understanding of the theory of linear \n\tstatistical models\n\t\\item Introduce concepts to be developed using practical examples in \n\tsubsequent chapters\n\\end{compactitem}\n\n\\section{Linear models}\n\nLet's start with an example: What predicts the weights ($w$) of \nlecturers (like yours truly!)? WE may hypothesize that the following \n{\\it variables} we have collected contribute to predicting weights: \n\n\\begin{compactitem}\n\t\\item Height ($h$) in metres\n\t\\item Exercise per week ($e$) in hours\n\t\\item Gender ($g$)\n\t\\item Distance from home to nearest Greggs bakery ($d$) in metres\n\t\\item Ownership of a games console ($c$)\n\\end{compactitem}\n\nFrom this, we can build a statistical model:\n$w = \\beta_1 + \\beta_2 h + \\beta_3 e + \\beta_4 g_m  + \\beta_5 d + \n\\beta_6 c_s + \\beta_7 c_a + \\varepsilon\\$\n\nThe model has a combination of four components:\n\n\\begin{center}\n\n\\begin{tikzpicture} \n\n\\useasboundingbox (0 mm, 0mm) rectangle (118 mm, 30mm); % width of slide less margins - fixed (128 - 2 x 5)\n\n% style fix minimum height on nodes to align labels above components, \n% a chain to align the equation along and some colour styles for overlays\n\\begin{scope}[every node/.style={text height=3mm,  text depth=1mm, on chain, inner sep=0.5mm},\n              gr/.style={teal}, \n              rd/.style={red}, \n              bl/.style={blue}, \n              gy/.style={gray},\n              start chain, node distance=0mm]\n\n\\node [gr] (w) at (12mm, 19mm) {$w$}; % fixed position within bbox\n\\node      (eq) {$=$};\n\\node [rd] (b1) {$\\beta_1$};\n\\node      (p)  {$+$};\n\\node [rd] (b2) {$\\beta_2$};\n\\node [bl] (h)  {$h$};\n\\node      (p)  {$+$};\n\\node [rd] (b3) {$\\beta_3$};\n\\node [bl] (e)  {$e$};\n\\node      (p)  {$+$};\n\\node [rd] (b4) {$\\beta_4$};\n\\node [bl] (g)  {$g_m$};\n\\node      (p)  {$+$};\n\\node [rd] (b5) {$\\beta_5$};\n\\node [bl] (d)  {$d$};\n\\node      (p)  {$+$};\n\\node [rd] (b6) {$\\beta_6$};\n\\node [bl] (cS)  {$c_s$};\n\\node      (p)  {$+$};\n\\node [rd] (b7) {$\\beta_7$};\n\\node [bl] (cA)  {$c_a$};\n\\node      (p)  {$+$};\n\\node [gy] (eps)  {$\\varepsilon$};\n\n\n\\end{scope}\n\n\\begin{scope}[every node/.style={rounded corners, draw, minimum height=6mm},\n              every path/.style={latex-, draw=red, thick}]\n              \n    \\node (exp)  [inner sep=0pt, yshift=1cm, fit={(e) (cS)}, label=center:Explanatory variables, fill=blue!10, draw=blue] {};\t\n    \\node (coef) [inner sep=0pt, yshift=-1cm, fit={(b2) (b6)}, label=center:Coefficients, fill=red!10, draw=red] {};\t\n\t\\node (resp)  [above = 4mm of w, fill=teal!10, draw=teal] {Response variable};\n\t\\node (resid)  [below = 4.5mm of eps, fill=gray!10, draw=gray] {Residuals};\n\n\t\\foreach \\i in {e, d, g, cS}  \\draw [blue] (\\i.north) -- (\\i.north |- exp.south);\n\t\\draw [blue] (cA.north) |- (exp.east);\n\t\\draw [blue] (h.north) |- (exp.west);\t\n\t\\foreach \\i in {b2, b3, b4, b5, b6}  \\draw [red] (\\i.south) -- (\\i.south |- coef.north);\n\t\\draw [red] (b7.south) |- (coef.east);\n\t\\draw [red] (b1.south) |- (coef.west);\n    \\draw [teal] (w.north) --  (resp.south);\n    \\draw [gray] (eps.south) -- (resid.north);\n\n    \n\\end{scope}\n\n\\end{tikzpicture}\n\n\n\\begin{compactitem}\n\\item A \\textcolor{teal}{response variable} ($w$)\n\\item A set of \\textcolor{blue}{explanatory variables} ($h,e,g,d,c$)\n\\item A set of 
\\textcolor{red}{coefficients} ($\\beta_1$ -- $\\beta_7$)\n\\item A set of \\textcolor{gray}{residuals} ($\\varepsilon$)\n\\end{compactitem}\n\n\\end{center}\n\n% \\section{Different types of variables}\n% \n% \\begin{center}\n% \\begin{tikzpicture} \n% \n% \\useasboundingbox (0 mm, 0mm) rectangle (118 mm, 30mm);\n% \n% \\begin{scope}[every node/.style={text height=3mm,  text depth=1mm, on chain, inner sep=0.5mm},\n              % gr/.style={teal}, \n              % rd/.style={red}, \n              % bl/.style={blue}, \n              % gy/.style={gray},\n              % start chain, node distance=0mm]\n% \n% \\node [bl] (w) at (12mm, 19mm) {$w$}; % fixed position within bbox\n% \\node      (eq) {$=$};\n% \\node [gy] (b1) {$\\beta_1$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b2) {$\\beta_2$};\n% \\node [bl] (h)  {$h$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b3) {$\\beta_3$};\n% \\node [bl] (e)  {$e$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b4) {$\\beta_4$};\n% \\node [rd] (g)  {$g_m$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b5) {$\\beta_5$};\n% \\node [bl] (d)  {$d$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b6) {$\\beta_6$};\n% \\node [rd] (cS)  {$c_s$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b7) {$\\beta_7$};\n% \\node [rd] (cA)  {$c_a$};\n% \\node [gy] (p)  {$+$};\n% \\node [gy] (eps)  {$\\varepsilon$};\n% \n% \n% \\end{scope}\n% \n% \\begin{scope}[every node/.style={rounded corners, draw, minimum height=6mm},\n              % every path/.style={latex-, draw=red, thick}]\n              % \n    % \\node (cont)  [inner sep=0pt, yshift=1cm, fit={(w) (d)}, label=center:Continuous variables, fill=blue!10, draw=blue] {};\t\n    % \\node (cat) [inner sep=0pt, yshift=-1cm, fit={(g) (cA)}, label=center:Categorical variables, fill=red!10, draw=red] {};\t\n% \n\t% \\foreach \\i in {w, h, e, d}  \\draw [blue] (\\i.north) -- (\\i.north |- exp.south);   \n\t% \\foreach \\i in {g, cA, cS}  \\draw [red] (\\i.south) -- (\\i.south |- coef.north);   \n    % \n% \\end{scope}\n% \n% \\end{tikzpicture}\n% \n% \n% \\begin{compactitem}\n% \\item The response variable is always \\textcolor{blue}{continuous}.\n% \\item The explanatory variables can be a mix of:\n% \\begin{compactitem}\n% \\item \\textcolor{blue}{Continuous} variables: height, exercise and distance.\n% \\item \\textcolor{red}{Categorical} variables: gender and console ownership.\n% \\end{compactitem}\n% \\item \\textcolor{red}{Categorical} variables or {\\it factors} have a number of {\\it levels}:\n% \\begin{compactitem}\n% \\item Gender has two levels (Male / Female)\n% \\item Console has three levels (None / Sofa-based / Active)\n% \\end{compactitem}\n% \\end{compactitem}\n% \\end{center}\n% \n% \n% \n% \\section{Terms and coefficients}\n% \n% \\begin{center}\n% \\begin{tikzpicture} \n% \n% \\useasboundingbox (0 mm, 0mm) rectangle (118 mm, 30mm); % width of slide less margins - fixed (128 - 2 x 5)\n% \n% \\begin{scope}[every node/.style={text height=3mm,  text depth=1mm, on chain, inner sep=0.5mm},\n              % gr/.style={teal}, \n              % rd/.style={red}, \n              % bl/.style={blue}, \n              % gy/.style={gray},\n              % start chain, node distance=0mm]\n% \n% \\node [bl] (w) at (12mm, 19mm) {$w$}; % fixed position within bbox\n% \\node      (eq) {$=$};\n% \\node [gy] (b1) {$\\beta_1$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b2) {$\\beta_2$};\n% \\node [bl] (h)  {$h$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b3) {$\\beta_3$};\n% \\node [bl] (e)  {$e$};\n% \\node      (p)  {$+$};\n% 
\\node [gy] (b4) {$\\beta_4$};\n% \\node [rd] (g)  {$g_m$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b5) {$\\beta_5$};\n% \\node [bl] (d)  {$d$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b6) {$\\beta_6$};\n% \\node [rd] (cS)  {$c_s$};\n% \\node      (p)  {$+$};\n% \\node [gy] (b7) {$\\beta_7$};\n% \\node [rd] (cA)  {$c_a$};\n% \\node [gy] (p)  {$+$};\n% \\node [gy] (eps)  {$\\varepsilon$};\n% \n% \n% \\end{scope}\n% \n% \\begin{scope}[every node/.style={rounded corners, draw, minimum height=6mm},\n              % every path/.style={latex-, draw=red, thick}]\n              % \n    % \\node (height)    [below=0.5cm of h, fill=blue!10, draw=blue, xshift=-3mm] {Height};\n    % \\draw [blue] (h.south) -- (h.south |- height.north);\n% \n    % \\node (exercise)  [below=0.5cm of e, fill=blue!10, draw=blue, xshift=4mm] {Exercise};\n    % \\draw [blue] (e.south) -- (e.south |- exercise.north);\n% \n    % \\node (distance)  [below=0.5cm of d, fill=blue!10, draw=blue] {Distance};\n    % \\draw [blue] (d.south) -- (d.south |- distance.north);\n% \n    % \\node (gender)  [above=0.5cm of g, fill=red!10, draw=red] {Gender};\n    % \\draw [red] (g.north) -- (g.north |- gender.south);\n% \n    % \\node (console) [inner sep=0pt, yshift=1.1cm, fit={(cA) (cS)}, label=center:Console, fill=red!10, draw=red] {};\t\n    % \\foreach \\i in {cA, cS}  \\draw [red] (\\i.north) -- (\\i.north |- console.south);   \n    % \n% \\end{scope}\n% \n% \\end{tikzpicture}\n% \n% \n% \\begin{compactitem}\n% \\item Each explanatory variable is a {\\it term} in the model\n% \\item Each term has at least one coefficient\n% \\item \\textcolor{blue}{Continuous} terms always have one coefficient\n% \\item \\textcolor{red}{Factors} have $N - 1$ coefficients, where $N$ is the number of levels\n% \n% \\end{compactitem}\n% \\end{center}\n% \n% \n% \\section{Wait! Why $N-1$? 
What is $\\beta_1$?}\n% \n% \\begin{center}\n% \\begin{tikzpicture}[every node/.style={text height=3mm,  text depth=1mm, on chain, inner sep=0.5mm},\n              % gr/.style={teal}, \n              % rd/.style={red}, \n              % bl/.style={blue}, \n              % gy/.style={gray},\n              % pr/.style={draw, rounded corners=2mm, gray},\n              % start chain, node distance=0mm]\n% \n% \\useasboundingbox (0 mm, 0mm) rectangle (118 mm, 21mm); % width of slide less margins - fixed (128 - 2 x 5)\n% \n% \\node  (w) at (12mm, 10mm) {$w$}; % fixed position within bbox\n% \\node  (eq) {$=$};\n% \\node  (b1) {$\\beta_1$};\n% \\node  (p)  {$+$};\n% \\node  (b2) {$\\beta_2$};\n% \\node  (h)  {$h$};\n% \\node  (p)  {$+$};\n% \\node  (b3) {$\\beta_3$};\n% \\node  (e)  {$e$};\n% \\node  (p)  {$+$};\n% \\node  (b4) {$\\beta_4$};\n% \\node  (g)  {$g_m$};\n% \\node  (p)  {$+$};\n% \\node  (b5) {$\\beta_5$};\n% \\node  (d)  {$d$};\n% \\node  (p)  {$+$};\n% \\node  (b6) {$\\beta_6$};\n% \\node  (cS)  {$c_s$};\n% \\node  (p)  {$+$};\n% \\node  (b7) {$\\beta_7$};\n% \\node  (cA)  {$c_a$};\n% \\node  (p)  {$+$};\n% \\node  (eps)  {$\\varepsilon$};\n% \n% \\node [pr, red, fit= (b1), line width=1pt] {};\n% \\node [pr, fit= (b2) (h)] {};\n% \\node [pr, fit= (b3) (e)] {};\n% \\node [pr, fit= (b4) (g)] {};\n% \\node [pr, fit= (b5) (d)] {};\n% \\node [pr, fit= (b6) (cS)] {};\n% \\node [pr, fit= (b7) (cA)] {};\n% \n% \\end{tikzpicture}\n% %\n% \n% \\begin{compactitem}\n% \\item Two ways of thinking about $\\beta_1$:\n% \\begin{compactitem}\n% \\item Continuous variables: the $y$ {\\it intercept}\n% \\item Factors: the baseline or {\\it reference} value\n% \\end{compactitem}\n% \\item This baseline is the value for the {\\it first levels} of each factor\n% \\item All response values start at this baseline\n% \\item All the other coefficients measure {\\it differences} from $\\beta_1$:\n% \\begin{compactitem}\n% \\item along a continuous slope\n% \\item as an offset to a different level\n% \\end{compactitem}\n% \\end{compactitem}\n% \n% \\end{center}\n% \n% \n% \\section{Linear models are just a sum}\n% \n% \\begin{center}\n% \\begin{tikzpicture}[every node/.style={text height=3mm,  text depth=1mm, on chain, inner sep=0.5mm},\n              % cross/.style={path picture={\n                   % \\draw[red, ultra thick] (path picture bounding box.center) +(0, 1.4mm) -- +(0, -1.4mm) (path picture bounding box.center) +(1.4mm, 0mm) -- +(-1.4mm,0) ;}},\n              % gr/.style={teal}, \n              % rd/.style={cross, red}, \n              % bl/.style={blue}, \n              % gy/.style={gray},\n              % pr/.style={draw, rounded corners=2mm, gray},\n              % start chain, node distance=0mm]\n% \n% \\useasboundingbox (0 mm, 0mm) rectangle (118 mm, 21mm); % width of slide less margins - fixed (128 - 2 x 5)\n% \n% \\node      (w) at (12mm, 10mm) {$w$}; % fixed position within bbox\n% \\node      (eq) {$=$};\n% \\node      (b1) {$\\beta_1$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b2) {$\\beta_2$};\n% \\node      (h)  {$h$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b3) {$\\beta_3$};\n% \\node      (e)  {$e$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b4) {$\\beta_4$};\n% \\node      (g)  {$g_m$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b5) {$\\beta_5$};\n% \\node      (d)  {$d$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b6) {$\\beta_6$};\n% \\node      (cS)  {$c_s$};\n% \\node [rd] (p)  {$+$};\n% \\node      (b7) {$\\beta_7$};\n% \\node      (cA)  {$c_a$};\n% \\node [rd] (p)  
{$+$};\n% \\node      (eps)  {$\\varepsilon$};\n% \n% \\end{tikzpicture}\n% %\n% \\end{center}\n% \n% \\begin{compactitem}\n% \\item Find the baseline value for women with no games console ($\\beta_1$)\n% \\item The model tells us how much to add to this\\ldots\n% \\begin{compactitem}\n% \\item  for a height of 1.82 metres?\n% \\item  for doing 150 minutes of exercise a week?\n% \\item  for being male?\n% \\item  for living 2416 metres from a Greggs?\n% \\item  for owning an Xbox?\n% \\end{compactitem}\n% \\end{compactitem}\n% \n% \n% \n% \n% \n% \n% {\\section{Examples - one continuous variable}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Origin.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &=  \\beta_1 x \\\\\n\t\t  % \\\\\n\t\t  % 4  &=  4 \\times 1 \\\\\n\t\t  % 8  &=  4 \\times 2 \\\\\n\t\t  % 12 &=  4 \\times 3 \\\\\n\t\t  % 16 &=  4 \\times 4   \n\t\t% \\end{align*}\n\t\t% \\[\\beta_1 =4\\]\n% \\end{columns}\n% }\n% \n% \n% \n% {\\section{Examples - one continuous variable}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Intercept.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 x \\\\\n\t\t  % \\\\\n\t\t  % 9  &= 5 + 4 \\times 1 \\\\\n\t\t  % 13  &= 5 + 4 \\times 2 \\\\\n\t\t  % 21 &= 5 + 4 \\times 3 \\\\\n\t\t  % 29 &= 5 + 4 \\times 4   \n\t\t% \\end{align*}\n        % \\[\\beta_1 = 5; \\beta_2=4\\]\n% \\end{columns}\n% }\n% \n% \n% {\\section{Examples - one factor}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Factor.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 g_m  \\\\\n\t\t  % \\\\\n\t\t  % 2  &= 2 + 3 \\times 0 \\\\\n\t\t  % 2  &= 2 + 3 \\times 0 \\\\\n\t\t  % 2  &= 2 + 3 \\times 0 \\\\\n\t\t  % 5  &= 2 + 3 \\times 1 \\\\  \n\t\t  % 5  &= 2 + 3 \\times 1 \\\\\n\t\t  % 5  &= 2 + 3 \\times 1\n\t\t% \\end{align*}\n        % \\[\\beta_1 = 2; \\beta_2=3\\]\n% \\end{columns}\n% }\n% \n% \n% \n% {\\section{Examples - one continuous variable and one factor}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{TwoVars.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1  + \\beta_2 x + \\beta_3 g_m\\\\\n\t\t  % \\\\\n\t\t  % 3   &= 1 + 2 \\times 1 + 3 \\times 0\\\\\n\t\t  % 5   &= 1 + 2 \\times 2 + 3 \\times 0\\\\\n\t\t  % 7   &= 1 + 2 \\times 3 + 3 \\times 0\\\\\n\t\t  % 6   &= 1 + 2 \\times 1 + 3 \\times 1\\\\  \n\t\t  % 8   &= 1 + 2 \\times 2 + 3 \\times 1\\\\\n\t\t  % 10  &= 1 + 2 \\times 3 + 3 \\times 1\n\t\t% \\end{align*}\n        % \\[\\beta_1 = 1; \\beta_2=2; \\beta_3=3\\]\n% \\end{columns}\n% }\n% \n% \n% \n% \n% {\\section{Examples - one continuous variable and one factor}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{TwoVarsHighlight.pdf}\n\t\t% \n\t\t% \\column{0.5\\textwidth}\n\t\t\t% \\begin{align*}\n\t\t\t  % y  &= \\beta_1  + \\beta_2 x + \\beta_3 g_m\\\\\n\t\t\t  % \\\\\n\t\t\t  % 3   & = 1 + 2 \\times 1 + 3 \\times 0\\\\\n\t\t\t  % 5   & = 1 + 2 \\times 2 + 3 \\times 0\\\\\n\t\t\t  % 7   & = 1 + 2 \\times 3 + 3 \\times 0\\\\\n\t\t\t  % 6   & = 1 + 2 \\times 1 + 3 \\times 1\\\\  \n\t\t\t  % {\\color{red}8} & {\\color{red}\\;= 1 + 2 \\times 2 + 3 \\times 1}\\\\ % loses a space for some reason\n\t\t\t  % 10  & = 1 + 2 \\times 3 
+ 3 \\times 1\n\t\t\t% \\end{align*}\n\t        % \\[\\beta_1 = 1; \\beta_2=2; \\beta_3=3\\]\n% \\end{columns}\n% }\n% \n% \\section{How do we deal with variation?}\n% \n% \n% \n% {\\section{Residuals - variation is everywhere}\n% \n% \\begin{center}\n\t\t% \\includegraphics[width=0.6\\textwidth]{VariationOrme2002.pdf}\\\\\n\t\t% {\\tiny ln(volume)}\n% \\end{center}\t\t\n\t\t% \n% \\begin{compactitem}\n% \\item Data always shows variation from a perfect model\n% \\begin{compactitem}\n% \\item Missing variables (age, lab vs. field biology, time of day) \n% \\item Measurement error \n% \\item Stochastic variation\n% \\end{compactitem}\n% \\end{compactitem}\n% }\n% \n% \n% \n% {\\section{Residuals - variation is everywhere}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Error.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 x \\\\\n\t\t  % \\\\\n\t\t  % 9.50  &= \\;?\\; + \\;?\\; \\times 1 \\\\\n\t\t  % 11.00 &= \\;?\\; + \\;?\\; \\times 2 \\\\\n\t\t  % 19.58 &= \\;?\\; + \\;?\\; \\times 3 \\\\\n\t\t  % 20.00 &= \\;?\\; + \\;?\\; \\times 4   \n\t\t% \\end{align*}\n\t\t% \n\t\t% \\begin{center}\n\t\t % {\\it No unique line \\\\ through the points}\n\t\t% \\end{center}\n\t\t % \n% \\end{columns}\n% }\n% \n% \n% \n% \n% {\\section{Residuals - Guess 1}\n % \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{TooFlat.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 x  + {\\color{red} \\varepsilon }\\\\\n\t\t  % \\\\\n\t\t  % 9.50  &= 12.52 + 1 \\times 1 - {\\color{red} 4.02}\\\\\n\t\t  % 11.00 &= 12.52 + 1 \\times 2 - {\\color{red} 3.52}\\\\\n\t\t  % 19.58 &= 12.52 + 1 \\times 3 + {\\color{red} 4.06}\\\\\n\t\t  % 20.00 &= 12.52 + 1 \\times 4 + {\\color{red} 3.48} \n\t\t% \\end{align*}\n\t\t% \\[\\beta_1 = 12.52; \\beta_2=1\\]\n% \n% \\end{columns}\n% }\n% \n% \n% \n% {\\section{Residuals  - Guess 2}\n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Toosteep.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 x + {\\color{red} \\varepsilon }\\\\\n\t\t  % \\\\\n\t\t  % 9.50  &= -2.48 + 7 \\times 1 + {\\color{red} 4.98}\\\\\n\t\t  % 11.00 &= -2.48 + 7 \\times 2 - {\\color{red} 0.52}\\\\\n\t\t  % 19.58 &= -2.48 + 7 \\times 3 + {\\color{red} 1.06}\\\\\n\t\t  % 20.00 &= -2.48 + 7 \\times 4 - {\\color{red} 5.52} \n\t\t% \\end{align*}\n\t\t% \\[\\beta_1 = -2.48; \\beta_2=7\\]\n% \n% \\end{columns}\t\t\n% }\n% \n% \n% \n% {\\section{Residuals - least squares solution}\n% \n% Minimize the {\\it sum} of the {\\it squared} residuals\n% \n\n% \\begin{columns}\n% \\column{\\textwidth}\n% \\animategraphics[width=\\textwidth, \n                 % controls, %autoplay, %palindrome, \n                 % timeline=Varying_timeline.txt, \n                 % buttonsize=1em]{12}{Varying_timeline}{}{}\n% \n% \\end{columns}\n% }\n% \n% \n% \n% {\\section{Residuals - least squares solution}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{JustRight.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % y  &= \\beta_1 + \\beta_2 x  + {\\color{red} \\varepsilon }\\\\\n\t\t  % \\\\\n\t\t  % 9.50  &= 5 + 4 \\times 1 + {\\color{red} 0.50}\\\\\n\t\t  % 11.00 &= 5 + 4 \\times 2 - {\\color{red} 2.00}\\\\\n\t\t  % 19.58 &= 5 + 4 \\times 3 + {\\color{red} 
2.58}\\\\\n\t\t  % 20.00 &= 5 + 4 \\times 4 - {\\color{red} 1.00} \n\t\t% \\end{align*}\n\t\t% \n\t\t% \\[\\beta_1 = 5; \\beta_2=4\\]\n\t\t\t\t% \n% \\end{columns}\t\t\n% }\n% \n% \n% \n% \n% {\\section{Model as a matrix - terminology}\n% \n% \\begin{center}\n% \n% {\\LARGE $\\mathbf{Y} = \\mathbf{X \\beta  +  \\varepsilon}$}\n% \n% \\vspace{5mm}\n% \n% \\begin{tikzpicture}\n%%minimum height on nodes to align labels above components\n% \\begin{scope}[every node/.style={minimum height=22mm}]\n% \\node                    (Y)  {$\\left[ \\begin{array}{r@{}l}\n \t\t\t\t\t\t\t\t   % \\alert<2>  {9} & \\alert<2>{.50} \\\\ \n\t\t\t\t\t\t\t\t   % \\alert<3> {11} & \\alert<3> {.00} \\\\ \n                                    % \\alert<4> {19} & \\alert<4> {.58} \\\\ \n                                    % \\alert<5> {20} & \\alert<5> {.00}  \\end{array}\\right]$};\n% \\node [right= 2mm of Y]  (eq) {$=$};                                     \n% \\node [right= 2mm of eq] (X)  {$\\left[ \\begin{array}{cc} \\alert<2>{1} & \\alert<2>{1} \\\\\n                                     % \\alert<3>{1} & \\alert<3>{2} \\\\ \n                                     % \\alert<4>{1} & \\alert<4>{3} \\\\ \n                                     % \\alert<5>{1} & \\alert<5>{4} \\\\ \\end{array}\\right]$};\n% \\node [right= 0mm of X]  (b)  {$\\left[ \\begin{array}{c} \\alert<2-5>{5} \\\\ \n                                    % \\alert<2-5>{4} \\end{array}\\right]$};\n% \\node [right= 2mm of b]  (pl) {$+$};\n% \\node [right= 2mm of pl] (e)  {$\\left[ \\begin{array}{r@{}l} \n\t\t\t\t\t\t\t\t   % \\alert<2>  {0} & \\alert<2>{.50} \\\\ \n\t\t\t\t\t\t\t\t   % \\alert<3> {-2} & \\alert<3> {.00} \\\\ \n                                    % \\alert<4>  {2} &\\alert<4> {.58} \\\\ \n                                    % \\alert<5> {-1} & \\alert<5> {.00}  \\end{array}\\right]$};\n% \\end{scope}\n% \n% \\begin{scope}[every node/.style={rounded corners, draw, fill=red!10},\n              % every path/.style={latex-, draw=red, thick, fill=red}]\n% \n% \\onslide<1-5>{\n\t% \\node [above= 5mm of Y] (obs) {Observed values};\n\t% \\node [below= 5mm of X] (mat) {Model matrix};\n\t% \\node [above= 5mm of b] (coe) {Coefficients};\n\t% \\node [below= 5mm of e] (res) {Residuals};\n\t% \\draw (Y) -- (obs);\n\t% \\draw (X) -- (mat);\n\t% \\draw (b) -- (coe);\n\t% \\draw (e) -- (res);\n% }\n% \n% \\onslide<6>{\n\t% \\node [above= 5mm of eq, minimum width=38mm] (given) {Given these \\ldots};\n\t% \\node [above= 5mm of b, xshift=18mm] (find) {\\ldots find the set of these\\ldots};\n\t% \\node [below= 5mm of b] (min)  {\\ldots that minimize the sum of the squares of these.};\n% \n\t% \\draw (Y.north) -- (Y.north |- given.south);\n\t% \\draw (X.north) -- (X.north |- given.south);\n\t% \\draw (b.north) -- (b.north |- find.south);\n\t% \\draw (e.south) -- (e.south |- min.north);   \n% }\n% \n% \\end{scope}\n% \n% \\end{tikzpicture}\n% \\end{center}\n% }\n% \n% \n% {\\section{Model as a matrix - predictions}\n% \n% \\begin{center}\n% \n% {\\LARGE $\\mathbf{\\hat{Y}} = \\mathbf{X \\beta }$}\n% \n% \\vspace{5mm}\n% \n% \\begin{tikzpicture}\n%% minimum height on nodes to align labels above components\n% \\begin{scope}[every node/.style={minimum height=22mm}]\n% \\node                    (Y)  {$\\left[ \\begin{array}{c} {9} \\\\ \n                                     % {13} \\\\ \n                                     % {17} \\\\ \n                                     % {21} \\\\ \\end{array}\\right]$};\n% \\node [right= 2mm of Y]  (eq) {$=$};                 
                    \n% \\node [right= 2mm of eq] (X)  {$\\left[ \\begin{array}{cc} {1} & {1} \\\\\n                                     % {1} & {2} \\\\ \n                                     % {1} & {3} \\\\ \n                                     % {1} & {4} \\\\ \\end{array}\\right]$};\n% \\node [right= 0mm of X]  (b)  {$\\left[ \\begin{array}{c} {5} \\\\ \n                                    % {4} \\end{array}\\right]$};\n% \\end{scope}\n% \n% \\begin{scope}[every node/.style={rounded corners, draw, fill=red!10},\n              % every path/.style={latex-, draw=red, thick, fill=red}]\n% \n\t% \\node [above= 5mm of Y] (obs) {Predicted or fitted values};\n\t% \\node [below= 5mm of X] (mat) {Model matrix};\n\t% \\node [above= 5mm of b] (coe) {Coefficients};\n\t% \\draw (Y) -- (obs);\n\t% \\draw (X) -- (mat);\n\t% \\draw (b) -- (coe);\n\t% \n% \\end{scope}\n% \n% \\end{tikzpicture}\n% \\end{center}\n% }\n% \n% \n% {\\section{Predicted values}\n% \n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[width=\\textwidth]{Predicted.pdf}\n\t\t% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{align*}\n\t\t  % \\hat{y}  &= \\beta_1 + \\beta_2 x  \\\\\n\t\t  % \\\\\n\t\t  % 9  &= 5 + 4 \\times 1 \\\\\n\t\t  % 13 &= 5 + 4 \\times 2 \\\\\n\t\t  % 17 &= 5 + 4 \\times 3 \\\\\n\t\t  % 21 &= 5 + 4 \\times 4  \n\t\t% \\end{align*}\n% \\end{columns}\t\t\n% }\n% \n% \\section{Is a linear model appropriate?}\n% \n% \n% {\\section{Assumptions}\n% \n% \\begin{compactitem}\n% \\item Linear models have the following assumptions:\n% \\begin{compactitem}\n% \\item No measurement error in explanatory variables\n% \\item The explanatory variables are not very highly correlated\n% \\item \\color<2>{red} The model is linear\n% \\item \\color<2>{red} The model has constant normal variance\n% \\end{compactitem}\n% \\item If these assumptions are not met, the model can be very wrong\n% \\item<2> The last two need some further explanation\n% \\end{compactitem}\n% \n% }\n% \n% \n% \\section{`The model is linear'}\n% \n% \\includegraphics[width=\\textwidth]{Linear.pdf}\n% \n% \\begin{compactitem}\n% \\item These are {\\it all} good linear models.\n% \\item Linear models can include curved relationships (e.g. 
polynomials)\n% \\item The data can be modelled as a {\\it sum} of components\n% \\item A {\\it linear combination} of variables and coefficients\n% \\end{compactitem}\n% \n% \n% \\section{The meaning of 'constant normal variance'}\n\t\t% \\includegraphics[height=72mm]{ResidDemo.pdf}\n\t\t% \n\t\t% \\begin{compactitem}\n\t\t% \\item The data has a similar spread around any predicted point in the model\n\t\t% \\vspace{2cm}\n\t\t% \\item The residuals are normal\n\t\t% \\item Points {\\it should} be spaced equally in the area under the curve\n\t\t% \\item Expect mostly small but a few larger residuals\n\t\t% \\end{compactitem}\n\t\t% \n% \\includegraphics[width=\\textwidth]{ConstantVarianceMods.pdf}\n% \n% \\begin{compactitem}\n% \\item Three good models\n% \\begin{compactitem}\n% \\item Is the spread the same for all fitted values?\n% \\item Do the residuals match the normal expectation?\n% \\end{compactitem}\n% \\end{compactitem}\n% \n% \n% \n% \\includegraphics[width=0.95\\textwidth]{FitResid.pdf}\n% \n% \\includegraphics[width=0.95\\textwidth]{QQNorm.pdf}\n% \n% \\includegraphics[width=\\textwidth]{NonConstantVarianceMods.pdf}\n% \n% \\begin{compactitem}\n% \\item Three bad models\n% \\begin{compactitem}\n% \\item Is the spread the same for all fitted values?\n% \\item Do the residuals match the normal expectation?\n% \\end{compactitem}\n% \\end{compactitem}\n% \n% \n% \n% \\includegraphics[width=0.95\\textwidth]{BadFitResid.pdf}\n% \n% \\includegraphics[width=0.95\\textwidth]{BadQQNorm.pdf}\n% \n% \n% {\\section{When is a linear model appropriate?}\n% {\\centering \\Huge\n% \n% Plot the data!\\\\\n% Plot the residuals!\n% \n% }}\n% \n% \n% \\section{Is a linear model explanatory?}\n% \n% \n% {\\section{How explanatory is the model?}\n% \n% \\begin{center}\n% \\begin{compactitem}\n% \\item Finally! Some statistics! 
(Woohoo!)\n% \\item {\\it Terms}: analysis of variance\n% \\begin{compactitem}\n% \\item Does the model explain enough variation?\n% \\item Does each term explain enough variation?\n% \\end{compactitem}\n% \\item {\\it Coefficients}: $t$ tests\n% \\begin{compactitem}\n% \\item Are the coefficients different from zero?\n% \\end{compactitem}\n% \\end{compactitem}\n% \n% \\end{center}\n% }\n% \n% \n% \\section{Null and saturated models - two endpoints}\n% \\begin{columns}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\includegraphics[height=72mm]{NullSaturated.pdf}\n% \n\t% \\column{0.5\\textwidth}\n\t\t% \\begin{compactitem}\n\t\t% \\item The null model ($H_0$)\n\t\t% \\item Nothing is going on\n\t\t% \\item Biggest possible residuals\n\t\t% \\item Residual sum of squares (RSS) is as big as it can be\n\t\t% \\vspace{1cm}\n\t\t% \\item The saturated model\n\t\t% \\item One coefficient per data point\n\t\t% \\item RSS is zero - all the sums of squares are now explained (ESS)\n\t\t% \\end{compactitem}\n\t\t% \n% \\end{columns}\n% \n% \n% \n% \n% \\section{More interesting models}\n% \n% \n\t% \n\t\t% \\centerline{\\includegraphics[height=0.5\\textheight]{Intermediate.pdf}}\n% \n\t\t% \\begin{compactitem}\n\t\t% \\item Added a term with three levels\n\t\t% \\item Some but not all of the residual sums of squares are explained\n\t\t% \\item Is this enough to be interesting?\n\t\t% \\end{compactitem}\t\t\n% \n% \n% \n% \n% \n% \\section{The F statistic}\n% \n% \\begin{center}\n% \n\t% \\begin{tikzpicture}[every node/.style={ text depth=1mm, inner sep=2mm,font=\\Large}, node distance=0mm]\n\t% \n\t% \\node (stat) {F};\n\t% \\node (eq1) [right= of stat] {=};\n% \n\t% \\matrix (ratio1) [right=of eq1,ampersand replacement=\\&] {\n\t\t% \\node (ESS) {ESS}; \\node(div1) [right=of ESS] {/}; \\node (Nc) [right= of div1]{$N_c$} ; \\\\ \n\t\t% \\node (RSS) \t{RSS}; \\node(div2) [right=of RSS]{/}; \\node (Nr) [right= of div2]{$N_r$} ;\\\\\n\t% };\n% \n\t% \\draw (ESS.south west) -- (Nc.south east);\n\t% \\node (eq2) [right= of ratio1] {=};\n% \n\t% \\matrix (ratio2)[right=of eq2,ampersand replacement=\\&] {\n\t\t% \\node (ESSn) {9.92}; \\node(div3) [right=of ESSn] {/}; \\node (Ncn) [right= of div3]{$2$} ; \\\\ \n\t\t% \\node (RSSn) {6.59}; \\node(div4) [right=of RSSn]{/}; \\node (Nrn) [right= of div4]{$6$} ;\\\\\n\t% };\n% \n\t% \\draw (ESSn.south west) -- (Ncn.south east);\n\t% \n\t% \\node (eq3) [right= of ratio2] {=};\n\t% \\node (res) [right= of eq3] {4.52};\n\t% \n\t% \\begin{scope} [every node/.style={fill=blue!10, draw=blue, font=\\large, rounded corners, align=center},\n                   % every path/.style={latex-,  thick, draw=blue}]\n\t% \\node (ESSGood)  [above=of ESS, yshift=8mm, xshift=-4mm] {Large ESS \\\\ is good};\t\n\t% \\node (RSSGood)  [below=of RSS, yshift=-8mm, xshift=-4mm] {Small RSS \\\\ is good};\t\n\t% \\node (NcGood)  [above=of Nc, yshift=8mm, xshift=8mm, anchor=south west] {Fewer coefficients \\\\ is better};\t\n\t% \\node (NrGood)  [below=of Nr, yshift=-8mm, xshift=8mm, anchor=north west] {Residual degrees of freedom};\t\n\t% \n\t% \\draw [blue] (ESS.north) -- (ESS.north |- ESSGood.south);\n\t% \\draw  [blue](RSS.south) -- (RSS.south |- RSSGood.north);\n\t% \\draw [blue] (Nc.north) |- (NcGood.west);\n\t% \\draw [blue] (Nr.south) |- (NrGood.west);\n% \n\t% \n\t% \\end{scope}\n\t% \n\t% \\end{tikzpicture}\n% \n% \\end{center}\n% \n% \n% \n% \n% \n% \\section{$F$ values by chance}\n% \n% \\includegraphics[width=\\textwidth]{F_extremes.pdf}\n% \n% \\begin{compactitem}\n% \\item What is 
the distribution of $F$ if nothing is going on?\n% \\item Simulate 10,000 datasets where nothing is going on ($H_0$ is true)\n% \\item Calculate $F$ for each random dataset under $H_1$\n% \\item Mostly $H_1$ has a low $F$ - but sometimes it is high by chance \n% \\end{compactitem}\n% \n% \n% \n% \n% \n% \n% \n% \\section{Distribution of $F$}\n% \n% \n% \\centerline{\\animategraphics[height=0.5\\textheight, \n                 % %autoplay, %controls, %palindrome, \n                 % timeline=FAnimate.txt, \n                 % buttonsize=1em]{12}{FAnimate}{}{}}\n% \n% \\begin{compactitem}\n% \\item In our possibly interesting model, $F=4.52$\n% \\item<2> 95\\% of the random data sets have $F\\le 5.5$\n% \\item<2> A model this good is found by chance 1 in 16 times ($p=0.063$)\n% \\item<2> Not quite interesting enough!\n% \\end{compactitem}\n% \n% \n% \n% \n% \n% \\section{Are coefficients different from zero?}\n% \n% \n% \\begin{center}\n% \n\t% \\begin{tikzpicture}[every node/.style={ text depth=1mm, inner sep=2mm,font=\\Large}, node distance=0mm]\n\t% \n\t% \\node (stat) {t};\n\t% \\node (eq1) [right= of stat] {=};\n% \n\t% \\matrix (ratio1) [right=of eq1,ampersand replacement=\\&] {\n\t\t% \\node (diff) {Effect size}; \\\\ \n\t\t% \\node (prec) \t{Precision}; \\\\\n\t% };\n\t% \\draw (diff.south west) -- (diff.south east);\n\t% \n\t% \\node (eq2) [right= of ratio1] {=};\n% \n\t% \\matrix (ratio2) [right=of eq2,ampersand replacement=\\&] {\n\t\t% \\node (val) {Coefficient value}; \\\\ \n\t\t% \\node (se) \t{Standard error}; \\\\\n\t% };\n\t% \\draw (val.south west) -- (val.south east);\n\t% \n\t% \\begin{scope} [every node/.style={fill=blue!10, draw=blue, font=\\large, rounded corners, align=center},\n                   % every path/.style={latex-,  thick, draw=blue}]\n\t% \\node (esgood)  [above=of eq2, yshift=10mm ] {Large is good - bigger changes};\t\n\t% \\node (precgood)  [below=of eq2, yshift=-10mm] {Small is good - known more precisely};\t\n\t% \n\t% \\draw [blue] (diff.north) -- (diff.north |- esgood.south);\n\t% \\draw [blue] (val.north) -- (val.north |- esgood.south);\n\t% \\draw [blue] (prec.south) -- (prec.south |- precgood.north);\n\t% \\draw [blue] (se.south) -- (se.south |- precgood.north);\t\n\t\t% \n\t% \\end{scope}\n\t% \n\t% \\end{tikzpicture}\n% \\end{center}\n% \n% \n% \\begin{compactitem}\n% \\item The value of a coefficient in a model is an {\\it effect size}\n% \\item How much does changing this variable change the response?\n% \\item A {\\it standard error} estimates how precisely we know the value\n% \\end{compactitem}\n% \n% \n% \n% \n% \n% \\section{Variation in effect size and precision}\n% \n% \n\t% \n\t\t% \\centerline{\\includegraphics[height=75mm]{T_examples.pdf}}\n% \n% \n% \n% \n% \n% \n% \\section{$t$ values by chance}\n% \n% \\includegraphics[width=\\textwidth]{t_extremes.pdf}\n% \n% \\begin{compactitem}\n% \\item What is the distribution of $t$ if nothing is going on?\n% \\item Simulate 10,000 datasets where nothing is going on ($H_0$ is true)\n% \\item Calculate $t$ for each random dataset under $H_1$\n% \\item Mostly $H_1$ has a $t$ near zero but can be positive or negative\n% \\end{compactitem}\n% \n% \n% \n% \n% \n% \n% \\section{Distribution of $t$}\n% \n% \n% \\centerline{\\animategraphics[height=0.5\\textheight, \n                 % %autoplay, %controls, %palindrome, \n                 % timeline=FAnimate.txt, \n                 % buttonsize=1em]{12}{tAnimate}{}{}}\n% \n% \\begin{compactitem}\n% \\item 95\\% of the random data sets have $t\\le 
\\pm 2.09$\n% \\item Only the two higher precision models are expected to occur less than 1 time in 20 by chance.\n% \\end{compactitem}\n% \n% \n% \n% \n% \\section{Summary}\n% \n% \\begin{compactitem}\n% \\item Linear models predict a continuous response variable\n% \\item A sum based on the effect size of explanatory variables\n% \\item Estimate the model using least squares residuals\n% \\item Need to check if the model is appropriate\n% \\item Then check if the model is explanatory\n% \\end{compactitem}\n\n\n\n\n% ##################################################\n% ################## Extra stuff ###################\n% ##################################################\n\n%\n%\n%\\section{Examples: the null model}\n%\n%\\includegraphics[width=\\textwidth]{Anova_null.pdf}\n%\n%\\begin{compactitem}\n%\\item The null hypothesis ($H_0$): Nothing is going on\n%\\item The residuals have to get smaller as we include terms.\n%\\item How much shorter?\n%\\end{compactitem}\n%\n%\n%\n%\n%\n%\n%\\section{Examples: one continuous term}\n%\n%\\includegraphics[width=\\textwidth]{Anova_mod.pdf}\n%\n%\\begin{compactitem}\n%\\item An alternative model ($H_1$) using $x$\n%\\item Added one term ($x$) to the model to give ($H_1$)\n%\\item Do we reject $H_0$ and accept this new model?\n%\\end{compactitem}\n%\n%\n%\n%\n%\n%\\section{Examples: adding a factor}\n%\n%\\includegraphics[width=\\textwidth]{Anova_mod2.pdf}\n%\n%\\begin{compactitem}\n%\\item Another model ($H_2$) using $x$ and a factor $f$ with three levels\n%\\item The sum of squares gets smaller again \n%\\item We've added one term ($f$) but two coefficients ($f_b$ and $f_c$)\n%\\item Is this even better than $H_1$?\n%\\end{compactitem}\n%\n%\n%\n%\n%\n%\\section{Change in variance}\n%\n%\\begin{table}[htdp]\n%\\begin{center}\n%\\begin{tabular}{ccccc}\n%\t  & Model A & Model B & Model C \\\\\n%\\hline\n%$H_0 $& Unexplained SS& 241.97 & 185.02 & 259.80 \\\\\n%      & Explained SS  & 0      & 0      &    0   \\\\\n%$H_1 $& Unexplained SS& 241.97 & 173.21 & 62.95 \\\\\n%      & Explained SS  & 0.005  & 11.803 & 196.84 \\\\\n%%$H_2$& 238.07 & 123.75 & 25.05 \\\\\n%%\t\t\t\t\t\t\t\t\\\\\n%\\end{tabular}\n%\\end{center}\n%\\end{table}%\n%\n%\n%\n%\n\n\n\n\n\n\n\n%\t\n%\t\n%\t{\\section{How to find $\\beta$ \\dots}\n%\t\n%\t\\begin{center}\n%\t\\begin{tikzpicture}\n%\t\n%\t\\node                     (eq1)  {=};\n%\t\\node [left=  2mm of eq1] (lhs1) {$\\mathbf{Y}$};\n%\t\\node [right= 2mm of eq1] (rhs1) {$\\mathbf{X \\beta  +  \\varepsilon}$};\n%\t\\node [below= 7mm of eq1] (eq2)  {=};\n%\t\\node [left=  2mm of eq2] (lhs2) {$\\mathbf{X^T Y}$};\n%\t\\node [right= 2mm of eq2] (rhs2) {$\\mathbf{X^T X \\beta}$};\n%\t\\node [below= 7mm of eq2] (eq3)  {=};\n%\t\\node [left=  2mm of eq3] (lhs3) {$\\mathbf{(X^T X )^{-1} X^T Y}$};\n%\t\\node [right= 2mm of eq3] (rhs3) {$\\mathbf{\\beta}$};\n%\t\n%\t\n%\t\\end{tikzpicture}\n%\t\\end{center}\n%\t\n%\t}\n%\t\n%\t\n%\n%\n%\n%{\\section{Categorical variables}\n%\\begin{columns}\n%\n%\\column{0.5\\textwidth}\n%\t\\includegraphics[width=\\textwidth]{Anova.pdf}\n%\t\n%\\column{0.5\\textwidth}\n%\tFeatures of categorical variables\\\\\n%\t\\begin{compactitem}\n%\t\t\\item Also called {\\it factors}\n%\t\t\\item Have a set of discrete {\\it levels}\n%\t\t\\item We want to model {\\it differences} between levels\n%\t\t\\item The least squares estimate for a level is simply the {\\it mean} within the level\n%\t\t\\item How do we include this in the linear 
model?\n%\t\\end{compactitem}\n%\\end{columns}\t\t\n%}\n%\n%\n%\n%\n%{\\section{Contrasts}\n%\n%\\begin{compactitem}\n%\\item Many possible differences between levels ({\\it contrasts})\n%\\item For 4 levels, the possible contrasts between:\n%\\begin{compactitem}\n%\\item two levels\n%\\item a pair of levels and a third level\n%\\item two pairs of levels\n%\\item one level and the remaining three levels\n%\\end{compactitem}\n%\\item Each {\\it contrast} represents a possible hypothesis.\n%\\end{compactitem}\n%\\vspace{3mm}\n%\n%\t\\begin{tikzpicture}\n%\t\n%\t\\newcommand{\\bb}{\\node [circle,  fill=blue, inner sep=0.5mm]{} ;}\n%\t\\newcommand{\\rr}{\\node [circle,  fill=red, inner sep=0.5mm]{} ;}\n%\t\n%\t\\matrix (contrasts) [matrix of nodes, ampersand replacement=\\&, % BEAMER hijacks straight &\n%\t                     row sep=1mm, column sep=2.5mm, \n%\t                     nodes={circle, fill=black!35, inner sep=0.5mm}, nodes in empty cells]\n%\t{\n%\t\t    \\&     \\& \\bb \\&     \\& \\rr \\& \\rr \\& [4mm] % {a}{b}\n%\t\t    \\& \\rr \\&     \\& \\rr \\& \\bb \\& \\bb \\&     \\& \\rr \\& \\bb \\& \\bb \\& [4mm] % {ab}{c}\n%\t    \\rr \\& \\rr \\&  \\bb \\& [4mm] %% {ab}{cd}\n%\t    \\rr \\& \\bb \\& \\bb \\& \\bb \\\\[1.5mm] %% {abc}{d}\n%\t\t    \\& \\bb \\&     \\& \\rr \\&     \\& \\bb \\& \n%\t\t\\rr \\&     \\& \\bb \\& \\bb \\&     \\& \\rr \\& \\bb \\& \\bb \\& \\bb \\& \\bb \\& \n%\t\t\\rr \\& \\bb \\& \\rr \\& \n%\t\t\\bb \\& \\rr \\& \\bb \\& \\bb \\\\[12mm]\n%\t\t\\bb \\&     \\&     \\& \\bb \\& \\bb \\&     \\& \n%\t\t\\bb \\& \\bb \\& \\rr \\&     \\& \\rr \\&     \\& \\bb \\& \\bb \\&     \\& \\rr \\& \n%\t\t\\bb \\& \\rr \\& \\rr \\& \n%\t\t\\bb \\& \\bb \\& \\rr \\& \\bb \\\\[1.5mm]\n%\t\t\\rr \\& \\rr \\& \\rr \\&     \\&     \\&     \\&\n%\t    \\bb \\& \\bb \\& \\bb \\& \\bb \\& \\bb \\& \\bb \\& \\rr \\&     \\& \\rr \\&     \\& \n%\t    \\bb \\& \\bb \\&  \\bb \\&\n%\t    \\bb \\& \\bb \\& \\bb \\& \\rr \\\\\n%\t};\n%\t\n%\t%% Draw in background bars to clarify contrast sets\n%\t\\begin{scope}[on background layer, every node/.style=\n%\t                  {rounded corners, inner sep=0.7mm,\n%\t                    fill=black!20}]\n%\t                   \n%\t\t \\def\\x{contrasts-1-\\i}\n%\t\t \\def\\y{contrasts-4-\\i}\n%\t\t \\def\\r{col-\\i}\t\t  \n%\t     \\foreach \\i in {1, 2, ..., 23} \\node (\\r) [fit=(\\x) (\\y)]{};\n%\t\\end{scope}\n%\t\n%\t\\node[left=4mm of contrasts-1-1]{Treat 2};\n%\t\\node[left=4mm of contrasts-2-1]{Treat 1};\n%\t\\node[left=4mm of contrasts-3-1]{Manip};\n%     \t\\node[left=4mm of contrasts-4-1]{Control};\n%\t\n%\t%\t\t\\draw ([yshift=5mm] col-1.north west)   -- node[above, font=\\footnotesize] {\\{a\\} v. \\{b\\}}   ([yshift=5mm] col-6.north east);\n%\t%\t\t\\draw ([yshift=-5mm] col-7.south west)  -- node[below, font=\\footnotesize] {\\{a,b\\} v. \\{c\\}}   ([yshift=-5mm] col-16.south east);\n%\t%\t\t\\draw ([yshift=5mm] col-17.north west)  -- node[above, font=\\footnotesize] {\\{a,b\\} v. \\{c,d\\}}   ([yshift=5mm] col-19.north east);\n%\t%\t\t\\draw ([yshift=-5mm] col-20.south west) -- node[below, font=\\footnotesize] {\\{a,b,c\\} v. 
\\{d\\}}   ([yshift=-5mm] col-23.south east);\n%\t\n%\t\\end{tikzpicture}\n%}\t\n%\n%\n%\n%\n%\n%{\\section{Model as a matrix - terminology}\n%\n%\\begin{center}\n%\n%{\\LARGE $\\mathbf{Y} = \\mathbf{X \\beta  +  \\varepsilon}$}\n%\n%\\vspace{5mm}\n%\n%\\begin{tikzpicture}\n%% minimum height on nodes to align labels above components\n%\\begin{scope}[every node/.style={minimum height=22mm}]\n%\\node                    (Y)  {$\\left[ \\begin{array}{c} \n%\t\t\t\t\t\t\t\t\t1.58 \\\\ 6.79 \\\\\n%\t\t\t\t\t\t\t\t\t4.16 \\\\ 6.73 \\\\\n%\t\t\t\t\t\t\t\t\t4.26 \\\\ 6.33 \\\\\n%\t\t\t\t\t\t\t\t\t3.18 \\\\ 8.73\n%                                 \\end{array}\\right]$};\n%\\node [right= 2mm of Y]  (eq) {$=$};                                     \n%\\node [right= 2mm of eq] (X)  {$\\left[ \\begin{array}{cccc} \n%\t\t\t\t\t\t\t\t\t1 & 0 & 0 & 0 \\\\  1 & 0 & 0 & 0 \\\\  \n%\t\t\t\t\t\t\t\t\t1 & 1 & 0 & 0 \\\\  1 & 1 & 0 & 0 \\\\  \n%\t\t\t\t\t\t\t\t\t1 & 0 & 1 & 0 \\\\  1 & 0 & 1 & 0 \\\\  \n%\t\t\t\t\t\t\t\t\t1 & 0 & 0 & 1 \\\\  1 & 0 & 0 & 1 \\\\ \n%                                \\end{array}\\right]$};\n%\\node [right= 0mm of X]  (b)  {$\\left[ \\begin{array}{c} \n%                                         \\beta_1 \\\\ \\beta_2 \\\\\n%                                         \\beta_3 \\\\ \\beta_4 \\\\\n%                                 \\end{array}\\right]$};\n%                                 \n%\\node [right= 2mm of b]  (pl) {$+$};\n%\\node [right= 2mm of pl] (e)  {$\\left[ \\begin{array}{c} \n%                                         \\varepsilon_1 \\\\ \\varepsilon_2 \\\\\n%                                         \\varepsilon_3 \\\\ \\varepsilon_4 \\\\\n%                                         \\varepsilon_5 \\\\ \\varepsilon_6 \\\\\n%                                         \\varepsilon_7 \\\\ \\varepsilon_8 \\\\\n%                                 \\end{array}\\right]$};\n%\\end{scope}\n%\n%\\end{tikzpicture}\n%\\end{center}\n%}\n%\n%\n%\n%{\\section{Categorical variables}\n%\\begin{columns}\n%\n%\\column{0.5\\textwidth}\n%\t\\includegraphics[width=\\textwidth]{AnovaCoef.pdf}\n%\t\n%\\column{0.5\\textwidth}\n%\\end{columns}\t\t\n%}\n%\n%\n%\n%\n%\n%{\\section{Model as a matrix - terminology}\n%\n%\\begin{center}\n%\n%{\\LARGE $\\mathbf{Y} = \\mathbf{X \\beta  +  \\varepsilon}$}\n%\n%\\vspace{5mm}\n%\n%\\begin{tikzpicture}\n%% minimum height on nodes to align labels above components\n%\\begin{scope}[every node/.style={minimum height=22mm}]\n%\\node                    (Y)  {$\\left[ \\begin{array}{c} \n%\t\t\t\t\t\t\t\t\t1.58 \\\\ 6.79 \\\\\n%\t\t\t\t\t\t\t\t\t4.16 \\\\ 6.73 \\\\\n%\t\t\t\t\t\t\t\t\t4.26 \\\\ 6.33 \\\\\n%\t\t\t\t\t\t\t\t\t3.18 \\\\ 8.73\n%                                 \\end{array}\\right]$};\n%\\node [right= 2mm of Y]  (eq) {$=$};                                     \n%\\node [right= 2mm of eq] (X)  {$\\left[ \\begin{array}{cccc} \n%\t\t\t\t\t\t\t\t\t1 &  0.5 & -0.5 &    0 \\\\  1 &  0.5 & -0.5 &    0 \\\\  \n%\t\t\t\t\t\t\t\t\t1 &  0.5 &  0.5 &    0 \\\\  1 &  0.5 &  0.5 &    0 \\\\  \n%\t\t\t\t\t\t\t\t\t1 & -0.5 &    0 & -0.5 \\\\  1 & -0.5 &    0 & -0.5 \\\\  \n%\t\t\t\t\t\t\t\t\t1 & -0.5 &    0 &  0.5 \\\\  1 & -0.5 &    0 &  0.5 \\\\ \n%                                \\end{array}\\right]$};\n%\\node [right= 0mm of X]  (b)  {$\\left[ \\begin{array}{c} \n%                                         \\beta_1 \\\\ \\beta_2 \\\\\n%                                         \\beta_3 \\\\ \\beta_4 \\\\\n%                                 \\end{array}\\right]$};\n%                
                 \n%\\node [right= 2mm of b]  (pl) {$+$};\n%\\node [right= 2mm of pl] (e)  {$\\left[ \\begin{array}{c} \n%                                         \\varepsilon_1 \\\\ \\varepsilon_2 \\\\\n%                                         \\varepsilon_3 \\\\ \\varepsilon_4 \\\\\n%                                         \\varepsilon_5 \\\\ \\varepsilon_6 \\\\\n%                                         \\varepsilon_7 \\\\ \\varepsilon_8 \\\\\n%                                 \\end{array}\\right]$};\n%\\end{scope}\n%\n%\\end{tikzpicture}\n%\\end{center}\n%}\n%\n%\n%\n%\n%{\\section{Concepts}\n%\\begin{centering}\n%  \n%\n%\\includegraphics[width=\\textwidth]{ScatterplotGroup.pdf}\n%\n%\\end{centering}\n%}\n%\n%\n%{\\section{Concepts}\n%\n%\\begin{centering}\n%\n%\\includegraphics[width=\\textwidth]{CoefMarkup.pdf}\n%\n%\\end{centering}\n%\n%}\n%\\section{Is a linear model appropriate?}\n%\n%{\\section{Concepts}\n%\n%\\begin{centering}\n%\n%\\includegraphics[width=\\textwidth]{CoefMarkup.pdf}\n%\n%\\end{centering}\n%\n%}\n%\n%\n%{\\section{Concepts}\n%\n%\\begin{centering}\n%\n%\\includegraphics[width=\\textwidth]{CoefMarkup.pdf}\n%\n%\\end{centering}\n%\n%}\n", "meta": {"hexsha": "86e158a4aefd57caf26e63bd4dac96fe96e006bd", "size": 41771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archived/SilCompBioStat/LinMods.tex", "max_stars_repo_name": "mathemage/TheMulQuaBio", "max_stars_repo_head_hexsha": "63a0ad6803e2aa1b808bc4517009c18a8c190b4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-12T13:33:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-12T13:33:14.000Z", "max_issues_repo_path": "archived/SilCompBioStat/LinMods.tex", "max_issues_repo_name": "OScott19/TheMulQuaBio", "max_issues_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archived/SilCompBioStat/LinMods.tex", "max_forks_repo_name": "OScott19/TheMulQuaBio", "max_forks_repo_head_hexsha": "197d710f76163469dfc7fa9d2d95ba3a739eccc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.2309307208, "max_line_length": 176, "alphanum_fraction": 0.5411170429, "num_tokens": 15522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7799928951399098, "lm_q1q2_score": 0.5726080151656147}}
{"text": "\n%%%%%%%%%% Start TeXmacs macros\n\\newcommand{\\nocomma}{}\n\\newcommand{\\noplus}{}\n\\newcommand{\\tmop}[1]{\\ensuremath{\\operatorname{#1}}}\n\\newcommand{\\tmtextbf}[1]{{\\bfseries{#1}}}\n\\newcommand{\\tmtextit}[1]{{#1}}\n\\newcommand{\\tmtexttt}[1]{{\\ttfamily{#1}}}\n\\newenvironment{descriptioncompact}{\\begin{description} }{\\end{description}}\n%%%%%%%%%% End TeXmacs macros\n\n\n\\section{Restricted Boltzmann Machines}\n\nHopfield networks are deterministic: training a network with the same patterns\nwill yield the same weights, and matching a pattern against the network will\nalways give the same result. This has the advantage of simplicity and ease of\ntesting. However, as mentioned before, there are spurious attractors in the\nHopfield network (linear combination of an odd number of training patterns).\nUsing stochastic updating rules will decrease the probability of being 'stuck'\nin such a state. This resembles simulated annealing, but ensuring we are not\ncaught in a local minima (in the energy landscape).\n\nRestricted Boltzmann Machines have been used for various purposes in recent\nyears\n\\cite{louradour2011classification} \\cite{teh2001rate} \\cite{nair2010rectified}\n, most of it\nconducted by Geoffrey E. Hinton at university of Toronto, Canada. They have a\nsimple structure: one layer of visible units and one layer of hidden units,\nwhich form a bipartite graph, as in Figure. The patterns used to train the\nnetwork correspond to the visible (exposed units), while the hidden units\ncorrespond to the attributes of the features we would like to learn. It is\nworth mentioning that in the most common and simple form, both the visible and\nhidden units take binary values. This is the approach we also adopted for our\nimplementation.\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\begin{figure}[h]\n  \\centering\n  \\resizebox{300px}{300px}{\\includegraphics{RBM-1.png}}\n  \\caption{Restricted Boltzmann Machine}\n\\end{figure}\n\n\nThe connections between the neurons in the two layers is symmetrical, so it\ncan be represented as a weight matrix which keeps the connection between hidden\nand visible layers. A state of a Restricted Boltzmann machine is given by the\nvalues of both the visible and given state. The corresponding Hopfield energy\nformula for a state is given by :\n\\[ E (v, h) = - \\sum_{i \\in V} a_i v_i - \\sum_{j \\in H} b_j h_i - \\sum_{i \\in\n   V, \\nocomma j \\in H \\nocomma} v_i h_j w_{\\tmtextit{\\tmtextit{\\tmop{ij}}}}\n\\]\n\n\nwhere, $w_{\\tmtextit{\\tmtextit{}}}$is the weight matrix, $a$ and $b$are the\nvectors of biases corresponding to the visible, respectively hidden layers. As\nexpected, $v$is vector of visible units and \\tmtextit{h} is the vector of\nhidden units.\n\nWe denote by \\tmtextit{V} = \\{1, .. \\tmtextit{length} \\tmtextit{v}\\} , the\nindices of a visible vector and by \\tmtextit{H} = \\{1, .. 
\n\n\\subsection{Using Restricted Boltzmann Machines for classification}\n\nAs suggested by Hinton in section 16 of \\cite{hinton2010practical}, there are various ways of using Restricted Boltzmann Machines for classification. We have employed two methods: one which we developed ourselves and another one described in the paper. The latter adds another set of visible units, called the \\emph{classification units}.\n\n\\begin{figure}[h]\n  \\centering\n  \\resizebox{350px}{350px}{\\includegraphics{RBM-2.png}}\n  \\caption{Classification Boltzmann machine}\n\\end{figure}\n\nThe vector of hidden units is used to model the joint distribution between a vector of inputs $v$ and a target (classification) vector $y$. The classification vector $y$ corresponds to a class; its length is given by the number of classes. $y = e_c$, where $c$ is the class the vector is modelling ($e_c$ is a vector of all zeros, except at position $c$, where it is 1). For instance, with three classes and $c = 2$, $y = e_2 = (0, 1, 0)^T$.\n\nAs seen from Figure 2, one now needs two weight matrices, which we will denote by $W$ (the weight matrix between the visible units and the hidden ones) and $U$ (the weight matrix between the classification units and the hidden ones). Vectors $b$, $c$, $d$ correspond to the vectors of biases for the visible units (training patterns), hidden units and classification units, respectively.\n\nThe energy function for this model is:\n\\[ E \\left( v, y, h \\right) = - d^T y - c^T h - b^T v - h^T W v - h^T U y \\]\nThis gives rise to the following formula for the probability of a configuration $(v, y, h)$:\n\\[ p \\left( v, y, h \\right) = \\frac{e^{- E \\left( v, y, h \\right)}}{Z} \\]\nwhere $Z$ is the normalising constant. Thus,\n\\[ p \\left( v, y \\right) = \\sum_h p \\left( v, y, h \\right) = \\sum_h \\frac{e^{- E \\left( v, y, h \\right)}}{Z} \\]\nThe distributions used for the Classification Boltzmann Machine are the following:\n\\begin{equation}\n  p \\left( h_j = 1 \\left| v, y \\right. \\right) = \\sigma \\left( c_j + W_j v + U_j y \\right)\n\\end{equation}\n\\begin{equation}\n  p \\left( v_i = 1 \\left| h \\right. \\right) = \\sigma \\left( b_i + h^T W^i \\right)\n\\end{equation}\n\\begin{equation}\n  p \\left( y = e_c \\left| h \\right. \\right) = \\frac{e^{d^T e_c + h^T U e_c}}{N}\n\\end{equation}\nwhere $N$ is the normalising constant $\\sum_c e^{d^T e_c + h^T U e_c}$.\n\nNote that we denote by $W_i$ the row $i$ of matrix $W$, and by $W^i$ the column $i$ of matrix $W$.\n\nValuable references for this were \\cite{louradour2011classification} and \\cite{larochelle2008classification}.
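\n\nAs an illustration of (6), the following minimal Haskell sketch computes the probability of every class given the hidden units, assuming $U$ is stored as a list of $|H|$ rows with one entry per class (again, the names are illustrative only):\n\n\\begin{verbatim}\n-- Equation (6): p(y = e_c | h) for every class c.\nclassProbs :: [Double] -> [[Double]] -> [Double] -> [Double]\nclassProbs d u h = map (/ z) scores\n  where\n    column c = map (!! c) u                  -- U e_c\n    -- d^T e_c + h^T U e_c for class c\n    score c = exp (d !! c + sum (zipWith (*) h (column c)))\n    scores  = map score [0 .. length d - 1]\n    z       = sum scores                     -- the normalising constant N\n\\end{verbatim}\n\nChoosing the class with the largest of these values then gives the most probable classification under the model.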
$y = e_c$, where c is the class the vector is modelling ($e_c$ is a\nvector with all zeros, except at position c, where it is 1).\n\nAs seen from Figure 2, one now needs two weight matrices, which we will denote\nby \\tmtextit{W} (the weight matrix between the visible units and the hidden ones)\nand U (the weight matrix between the classification units and the hidden\nones). Vectors \\tmtextit{b}, \\tmtextit{c}, \\tmtextit{d} correspond to the\nvector of biases for the visible units (training patterns), hidden units and\nclassification units, respectively.\n\nThe energy function for this model is:\n\\[ E \\left( v, y, h \\right) = - d^T y - c^T h - b^T v - h^T W v - h^T\n   U y \\]\nwhich gives rise to the following formula for the probability of a configuration\n\\tmtextit{v}, \\tmtextit{y}.\n\\[ p \\left( v, y, h \\right) = \\frac{e^{- E \\left( v, y, h\n   \\right)}}{Z} \\]\nwhere \\tmtextit{Z} is the normalising constant. Thus,\n\\[ p \\left( v, y \\right) = \\sum_h p \\left( v, y, h \\right) = \\sum_h \\frac{e^{-\n   E \\left( v, y, h \\right)}}{Z} \\]\nThe distributions used for the Classification Boltzmann Machine are the\nfollowing:\n\\begin{equation}\n  p \\left( h_j = 1 \\left| v, y \\right. \\right) = \\sigma \\left( c_j +\n  W_j v + U_j y \\right)\n\\end{equation}\n\\begin{equation}\n  p \\left( v_i = 1 \\left| h \\right. \\right) = \\sigma \\left( b_i +\n  h^T W^i  \\right)\n\\end{equation}\n\\begin{equation}\n  p \\left( y = e_c \\left| h \\right. \\right) = \\frac{e^{d^T e_c + h^T\n  Ue_c}}{N}\n\\end{equation}\nwhere N is the normalising constant $\\sum_c$ $e^{d^T e_c + h^T Ue_c}$.\n\nNote that we denote by $W_i$ the row \\tmtextit{i} of matrix \\tmtextit{W}, and\nby $W^i$ the column \\tmtextit{i} of matrix \\tmtextit{W}.\n\n\n\nValuable references for this were \\cite{louradour2011classification} and\n\\cite{larochelle2008classification}.\n\n\\subsection{Learning}\n\nContrastive divergence can be used to train the network, giving rise to the\nfollowing update rules:\n\n\n\\begin{eqnarray*}\n  b'  & = & b + \\lambda \\left( v - v_{\\tmop{sampled}} \\right)\\\\\n  c'  & = & c + \\lambda \\left( h_{\\sigma} - h_{\\sigma \\tmop{sampled}}\n  \\right)\\\\\n  d'  & = & d + \\lambda \\left( y - y_{\\tmop{sampled}} \\right)\\\\\n  W'  & = & W + \\lambda \\left( h_{\\sigma} v^T - h_{\\sigma \\tmop{sampled}}\n  v_{\\tmop{sampled}}^T \\right)\\\\\n  U' & = & U + \\lambda \\left( h_{\\sigma} y^T - h_{\\sigma\n  \\tmop{sampled}} y_{\\tmop{sampled}}^T \\right)\n\\end{eqnarray*}\nwhere we denote by $x'$ the new value of parameter x. $v_{\\tmop{sampled}}$ and\n$y_{\\tmop{sampled}}$ are obtained by sampling the distributions in (5) and\n(6).\n\\begin{eqnarray*}\n  h_{\\sigma} & = & \\sigma \\left( c_j + W_j v + U_j y\n  \\right)\\\\\n  h_{\\sigma \\tmop{sampled}} & = & \\sigma \\left( c_j + W_j\n  v_{\\tmop{sampled}} + U_j y_{\\tmop{sampled}}  \\right)\n\\end{eqnarray*}\n\\subsection{Classification}\n\\[ p \\left( y = e_c \\left| v \\right. 
\\right) = \\frac{e^{- F \\left( v,\n   e_c \\right)}}{\\sum_d e^{- F \\left( v, e_d \\right)}} \\]\nwhere $F \\left( v, e_c \\right)$ is the \\tmtextit{free energy\nfunction}\n\\[ F \\left( v, e_c \\right) = - d^T e_c - \\sum^H_{j = 1}\n   s \\left( c_j + W_j v + U_j e_c\n   \\right) \\]\nwhere $s \\left( x \\right) = \\log \\left( 1 + e^x \\right)$.\n\n\n\nThe implementation of the Classification Boltzmann Machine can be found in\n\\tmtexttt{ClassficationBoltzmannMachine.hs}.\n\n\n\n\\subsection{New method}\n\nAnother method we have employed using Boltzmann Machines was created by us. We\nhave never seen this approach used anywhere else. Instead of creating two types\nof visible units, we use the simple Restricted Boltzmann Machine, with one\ntype of visible units (and hence a single matrix). For each training vector\nwe append the classification at the end. The classification is represented as\na binary vector, either as above, by using $e_c$, where c is the\nclassification of the current pattern, or even in a more compressed manner, by\ncreating the binary vector using the representation in base 2 of class c.\n\n\\begin{figure}[h]\n  \\centering\n  \\resizebox{350px}{350px}{\\includegraphics{RBM-3.png}}\n  \\caption{Boltzmann machines according to our new method}\n\\end{figure}\n\nThe training is done in the usual way, but with the complete vectors (actual\npattern and classification). In our case, as different patterns ought to have\ndifferent classifications, we use as class the index of the pattern in the list\n(with duplicates removed) of training patterns.\n\nWhen a pattern needs to be matched to one of the training patterns for\nrecognition, one uses the free energies to compute the\nprobability of each of the classifications, and chooses the one with maximum\nprobability.\n\n\n\n\\begin{figure}[h]\n  \\centering\n  \\resizebox{498px}{146px}{\\includegraphics{RBM-4.png}}\n  \\caption{The recognition process in the new Boltzmann machine.}\n\\end{figure}\n\n\n\nAs given in \\cite{hinton2010practical}, the\nprobability of a class is given by:\n\\[ p \\left( \\tmop{class} = c \\left| v \\right. 
\\right) =\n   \\frac{e^{- F_c \\left( v \\right)}}{\\sum_{c'} e^{- F_{c'} \\left( v\n   \\right)}} \\]\n\\[ F \\left( v \\right) = - \\sum_{i \\in V} v_i a_i - \\sum_{j \\in H} \\log \\left(\n   1 + e^{x_j} \\right) \\]\n\n\nwhere $x_j = b_j + \\sum_{i \\in V} w_{ij} v_i$.\n\n\n\nThe implementation of this procedure can be found in\n\\tmtexttt{RestrictedBoltzmannMachine.hs}.\n\n", "meta": {"hexsha": "54e7d19518999a5f1cc8ba1dd1e0c9920aea6c3b", "size": 11839, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/RBM.tex", "max_stars_repo_name": "imperialhopfield/hopfield", "max_stars_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-07-30T10:00:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-10T15:49:06.000Z", "max_issues_repo_path": "report/RBM.tex", "max_issues_repo_name": "imperialhopfield/hopfield", "max_issues_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/RBM.tex", "max_forks_repo_name": "imperialhopfield/hopfield", "max_forks_repo_head_hexsha": "d64e21b1c7b915755ae535685ffd7dfd25e3970f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-12-19T13:06:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-03T13:32:21.000Z", "avg_line_length": 40.9653979239, "max_line_length": 107, "alphanum_fraction": 0.687727004, "num_tokens": 3741, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7341195210831258, "lm_q1q2_score": 0.5726080143827833}}
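A minimal NumPy sketch of one contrastive-divergence (CD-1) step may make the update rule in the RBM report above concrete. It is an illustration under stated assumptions, not the report's Haskell implementation: the names `W`, `a`, `b`, `v0` and `lr` (the learning rate $\lambda$) are chosen here, and the conditionals follow equations (2)--(3) of the report.

\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, a, b, v0, lr=0.1, rng=None):
    """One CD-1 update for a binary RBM.
    W: (n_visible, n_hidden) weights; a, b: visible/hidden biases;
    v0: one binary training vector of length n_visible."""
    rng = np.random.default_rng() if rng is None else rng
    ph0 = sigmoid(b + v0 @ W)                 # p(h_j = 1 | v), eq. (2)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample binary hidden states
    pv1 = sigmoid(a + h0 @ W.T)               # p(v_i = 1 | h), eq. (3)
    v1 = (rng.random(pv1.shape) < pv1) * 1.0  # "reconstruction" of v
    ph1 = sigmoid(b + v1 @ W)
    # Delta w_ij = lr * (<v_i h_j>_data - <v_i h_j>_reconstruction)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)
    return W, a, b
\end{verbatim}

In practice this step would be repeated over many training vectors, exactly as the report's training rule describes.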
{"text": "\ufeff\ufeff\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\\begin{document}\n\\logo\n\\rulename{Exponential Trading Rule - One factor} %Argument is name of rule\n\\tblofcontents\n\\ruledescription{These representations have the position dependent on an exponentially weighted average of the prior values of research and price. The lookback length, $\\lookbacklength$, determines how far back the rule looks. The decay rate, $\\decaycoefficient$, and amplitude, $\\amplitudecoefficient$, coefficients determine the contribution from each exponential. The expression is normalised to be dimensionless by division by the weighting factors and current price.}\n\n\\ruleparameters\n{Window size}{10}{This is the number of time steps over which exponential contributions are sourced.}{$\\lookbacklength$}\n{Exponential decay rate for price}{0.0}{This is the decay factor for contributions from the price series.}{$\\decaycoefficient^{\\price}$}\n{Exponential decay rate for research}{0.0}{This is the decay factor for contributions from the research series.}{$\\decaycoefficient^{\\research}$}\n{Amplitude of price contribution}{-0.1}{This factor scales the overall contribution from past values of price.}{$\\amplitudecoefficient^{\\price}$}\n{Amplitude of research contribution}{0.1}{This factor scales the overall contribution from past values of research.}{$\\amplitudecoefficient^{\\research}$}\n\\stoptable\n\n\\section{Equation}\n\n\\begin{equation}\n\\contribution_{\\dummyiterator}^{\\price} = e^{-\\frac{\\dummyiterator}{e^{-\\decaycoefficient^\\price}}} \\\\\n\\end{equation}\n\\begin{equation}\n\\contribution_{\\dummyiterator}^{\\research} = e^{-\\frac{\\dummyiterator}{e^{-\\decaycoefficient^\\research}}} \\\\\n\\end{equation}\n\\begin{equation}\n\\bigcontribution_\\price = \\amplitudecoefficient^\\price \\frac{\\sum_{\\dummyiterator}^{\\lookbacklength} \\price_{\\currenttime - \\dummyiterator} \\lambda_{\\dummyiterator}^{\\price}}{\\sum_{\\dummyiterator = 0}^{\\lookbacklength} \\lambda_{\\dummyiterator}^{\\price}} \\\\\n\\end{equation}\n\\begin{equation}\n\\bigcontribution_\\research = \\amplitudecoefficient^\\research \\frac{\\sum_{\\dummyiterator}^{\\lookbacklength} \\research_{\\currenttime - \\dummyiterator} \\lambda_{\\dummyiterator}^{\\research}}{\\sum_{\\dummyiterator = 0}^{\\lookbacklength} \\lambda_{\\dummyiterator}^{\\research}} \\\\\n\\end{equation}\n\\begin{equation}\n\\position_\\currenttime = (\\bigcontribution_\\price+ \\bigcontribution_\\research)/\\price_\\currenttime \\\\\n\\end{equation}\n\n\\hspace{200mm}\n\n\\noindent where $\\price_\\currenttime$ is the price at time $\\currenttime$, $\\research$ is the value of the research series, $\\amplitudecoefficient$ are the amplitude coefficients, $\\decaycoefficient$ are the decay rate coefficients and $\\position$ is the resultant fractional portfolio investment.\n\n\\hspace{200mm}\n\\hspace{200mm}\n\n\\keyterms\n\\furtherlinks\n\n\\end{document}\n", "meta": {"hexsha": "1ec1d3148802875cfbde516bc31280cae1aaffe3", "size": 2861, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/ExponentialOne.tex", "max_stars_repo_name": "pawkw/infertrade", "max_stars_repo_head_hexsha": "48231c2c026b4163291e299cd938969401ca6a4a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", 
"max_issues_repo_path": "docs/strategies/tex/ExponentialOne.tex", "max_issues_repo_name": "pawkw/infertrade", "max_issues_repo_head_hexsha": "48231c2c026b4163291e299cd938969401ca6a4a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/ExponentialOne.tex", "max_forks_repo_name": "pawkw/infertrade", "max_forks_repo_head_hexsha": "48231c2c026b4163291e299cd938969401ca6a4a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 59.6041666667, "max_line_length": 472, "alphanum_fraction": 0.7797972737, "num_tokens": 744, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5726080143827833}}
{"text": "\\subsection{Spinning cell}\n\\label{spinning_cell}\n\nThe next problem which needed to be solved was a photodecomposition of samples\nby incoming laser light.\nResonance Raman spectroscopy uses excitation light with frequency inside the\nelectronic absorption band.\nIt means that the investigated molecules accept a significant part of the power\nfrom the incoming laser beam, and this excess energy can destroy the samples.\nIt also locally\nincreases temperature and causes the thermal lens effect, which distorts laser\nfocus.\nWe had to decide between several so far developed and successfully employed\napproaches, as described in\n\\cref{introduction_sample_placement},\nto minimize these effects.\n\nAs we wanted to study RNAs on our apparatus, which are highly sensitive to\ncontamination by ubiquitous RNase H and therefore cleaning of the sample cells\nwas a factor of major importance for us, we decided to utilize the idea of a\nspinning cell to minimize photodecomposition of samples.\nWe designed the cell to be also usable with small volumes of the samples, the\ninner radius of 4\\,mm and height of 5\\,mm.\nIt means that the maximal volume could be $\\sim250$\\,\\g{m}L, but due to\ncentrifugal force, the smaller amount of samples could be measured;\nfor example, 50\\,\\g{m}L of sample results in an $\\sim 4$\\g{m}m thick layer\nattached to the cell wall if the cell is rotated sufficiently fast.\n\nTo estimate the required rotation speed, we modeled the behavior of the water\ninside the cell.\nWe neglected all the complicated effects like surface tension and currents\ninside the liquid.\nWe can use energy conservation, where the sum of kinetic energy $E_\\text{k}$\nand gravitation potential energy $U_\\text{g}$ is constant total energy $E$.\nIf we consider a closed system, the change in potential energy can go only to\nkinetic energy and vice versa, so we can write\n\\begin{equation*}\n\tE_\\text{k} = \\frac{mr^2\\omega^2}{2} = -U_\\text{g} = mg(z - z_0),\n\\end{equation*}\nwhere $m$ stands for mass, $\\omega$ for angular speed, $z$ for the height of\nthe equipotential surface, $z_0$ for reference height for potential energy and\n$g = 9.80665$\\, Nkg$^{-1}$ as gravitation acceleration constant.\nSolving the equation results into\n\\begin{equation}\n\tz = z_0 + \\frac{r^2\\omega^2}{2g}.\n\t\\label{\\eqnlabel{spinning_cell:liquid_surface}}\n\\end{equation}\n\nThe value for $z_0$ can be calculated from the sample volume\n\\begin{equation*}\n\tV = 2\\text{\\g{p}}h(R^2 - R_2^2)\n\t\t+ 2\\text{\\g{p}}\\int_{R_1}^{R_2}{rz(r)\\text{d}r},\n\\end{equation*}\nwhere $R$ is the internal radius of the cell, $h$ is the height of the cell,\n$R_1$ is the radius where the surface of the liquid touches the floor of the\ncell, and $R_2$ is the radius where the surface of the liquid meets the ceiling\nof the cell, so it means that $R \\geq R2 > R1 \\geq 0$.\nThe values of $R_1$ and $R_2$ can be estimated from\n\\eqnref{spinning_cell:liquid_surface}.\n\nWe can estimate the sufficient rotation speed of about 4000\\,rpm from sample\nmodel results for 50\\,\\g{m}L of a sample displayed in\n\\figref{spinning_cell:model}.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.49\\columnwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]%\n\t\t\t{results_and_discussion/assets/spinning_cell_model/model_500rpm}\n\t\t\\caption{500 
rpm}\n\t\t\\label{\\figlabel{spinning_cell:model_500rpm}}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\columnwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]%\n\t\t\t{results_and_discussion/assets/spinning_cell_model/model_1000rpm}\n\t\t\\caption{1000 rpm}\n\t\t\\label{\\figlabel{spinning_cell:model_1000rpm}}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\columnwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]%\n\t\t\t{results_and_discussion/assets/spinning_cell_model/model_2000rpm}\n\t\t\\caption{2000 rpm}\n\t\t\\label{\\figlabel{spinning_cell:model_2000rpm}}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.49\\columnwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]%\n\t\t\t{results_and_discussion/assets/spinning_cell_model/model_4000rpm}\n\t\t\\caption{4000 rpm}\n\t\t\\label{\\figlabel{spinning_cell:model_4000rpm}}\n\t\\end{subfigure}\n\t\\caption[%\n\t\tModel of the spinning cell.%\n\t]{%\n\t\t\\captiontitle{%\n\t\t\tModel of the spinning cell following\n\t\t\t\\eqnref{spinning_cell:liquid_surface}\n\t\t\twith different rotation speeds using 4\\,mm as the internal radius, 5\\,mm\n\t\t\tas the height, and 50\\,\\g{m}L of liquid.%\n\t\t}\n\t}\n\t\\label{\\figlabel{spinning_cell:model}}\n\\end{figure}\n\nThe sample holder was inspired by the work of\n\\textcite{Shriver1974},\nbut we used FPM o-rings with an inner diameter of 11\\,mm and a rubber diameter\nof 1\\,mm, as displayed in\n\\figref{spinning_cell:drawing},\ninstead of the nylon cone.\nThe knurled aluminum nut compressed the o-ring, effectively centering and\nsecuring the spinning cell to the holder chuck.\nThe cell holder was secured to the driving motor shaft by M2 screws.\n\n\\begin{figure}\n\t\\centering\n\t\\input{results_and_discussion/assets/spinning_cell_drawing}\n\t\\caption[%\n\t\tSpinning cell holder.%\n\t]{%\n\t\t\\captiontitle{%\n\t\t\tSpinning cell holder.%\n\t\t}\n\t\tA Teflon plug seals the cell (in blue), and the knurled nut (light grey)\n\t\tsecures the cell to the holder chuck (dark grey) by compressing the o-ring\n\t\t(black).\n\t\tThe holder is attached to the driving motor shaft by M2 screws.\n\t}\n\t\\label{\\figlabel{spinning_cell:drawing}}\n\\end{figure}\n\nAs the driving motor, we used a DC motor (Maxon A-max 110119) controlled by a\nhomemade power source that could produce from 0\\,V to 9\\,V at the output.\nThe power supply was equipped with a digital voltmeter for user convenience.\nThe capabilities of the motor were measured with a fully filled cell attached.\nA tiny dot was stuck to the cell, and a photodiode with an attached\noscilloscope was then used for rotation frequency measurement.\nThe results of the measurement are shown in\n\\figref{spinning_cell:rotation}.\nThe values of the rotation speed $\\omega$ as a function of the input voltage $U$ were\nfitted by the linear dependence\n\\begin{gather*}\n\t\\omega = a_1U + a_0,\\\\\n\ta_1 = (1071 \\pm 4)\\,\\text{rpm}\\,\\text{V}^{-1}, \\quad a_0 = (2 \\pm 20)\\,\\text{rpm}.\n\\end{gather*}\n\n\\begin{figure}\n\t\\centering\n\t\\input{results_and_discussion/assets/spinning_cell_rotation/rotation}\n\t\\caption[%\n\t\tSpinning cell rotation evaluation.%\n\t]{%\n\t\t\\captiontitle{%\n\t\t\tSpinning cell rotation evaluation.%\n\t\t}\n\t\tThe dependence of the rotation speed $\\omega$ on the input voltage for the\n\t\tdriving motor.\n\t}\n\t\\label{\\figlabel{spinning_cell:rotation}}\n\\end{figure}\n\nThe fit shows no significant constant term $a_0$, and with the\nmaximal input voltage of 9\\,V accepted by the motor, about 9600\\,rpm was 
achieved,\nwhich was sufficient for the spinning cell to centrifuge the samples completely\nonto its walls.\n", "meta": {"hexsha": "5301083ccf7f6109740f7d4f90e64421df9eb927", "size": 6613, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/results_and_discussion/spinning_cell.tex", "max_stars_repo_name": "lumik/phd_thesis", "max_stars_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/results_and_discussion/spinning_cell.tex", "max_issues_repo_name": "lumik/phd_thesis", "max_issues_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 41, "max_issues_repo_issues_event_min_datetime": "2019-08-13T12:27:09.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T03:00:58.000Z", "max_forks_repo_path": "src/results_and_discussion/spinning_cell.tex", "max_forks_repo_name": "lumik/phd_thesis", "max_forks_repo_head_hexsha": "3b29f24732d49b64c627aeb8f6585f042cd59c4e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1301775148, "max_line_length": 79, "alphanum_fraction": 0.765008317, "num_tokens": 1910, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928900257126, "lm_q2_score": 0.734119526900183, "lm_q1q2_score": 0.5726080114111826}}
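To make the surface model above concrete, here is a small sketch evaluating the paraboloid surface $z(r) = z_0 + r^2\omega^2/2g$ for a given rotation speed. The rpm-to-rad/s conversion and the numbers (4 mm radius, 4000 rpm) follow the text; $z_0$ is left as an input since the thesis determines it from the sample volume.

\begin{verbatim}
import numpy as np

G = 9.80665  # gravitational acceleration, m s^-2

def surface_height(r, rpm, z0=0.0):
    """Equipotential liquid surface z(r) = z0 + r^2 w^2 / (2 g)."""
    omega = rpm * 2.0 * np.pi / 60.0  # rev/min -> rad/s
    return z0 + r ** 2 * omega ** 2 / (2.0 * G)

# Rise of the surface across a 4 mm radius at 4000 rpm:
print(surface_height(4e-3, 4000.0) * 1e3, "mm")  # ~143 mm >> 5 mm cell height
\end{verbatim}

The computed rise (on the order of 100 mm over a 4 mm radius) dwarfs the 5 mm cell height, which is consistent with the conclusion that at 4000 rpm the liquid is pinned to the cell wall.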
{"text": "\\documentclass[a4paper,12pt]{scrartcl}\n\n\\usepackage{bm,amsmath,url,graphicx}\n\\usepackage{palatino}\n\\usepackage{color, xcolor}\n\\usepackage{listings}\n\n\n\\newcommand{\\n}{\\mathbf{ n}}\n\\newcommand{\\h}{\\mathbf{ h}}\n\\newcommand{\\x}{\\mathbf{ x}}\n\\newcommand{\\y}{\\mathbf{ y}}\n\\newcommand{\\w}{\\mathbf{ w}}\n\\newcommand{\\HH}{\\mathbf{ H}}\n\\newcommand{\\R}{\\mathbf{ R}}\n\\newcommand{\\C}{\\mathbf{ C}}\n\\newcommand{\\thb}{{\\boldsymbol{\\theta}}}\n\\newcommand{\\mub}{{\\boldsymbol{\\mu}}}\n\\newcommand{\\python}{{\\fbox{\\texttt{\\bfseries python}}\\quad}}\n\\newcommand{\\pen}{{\\fbox{\\texttt{\\bfseries pen\\&paper}}\\quad}}\n\n\\renewcommand{\\familydefault}{\\rmdefault}\n\n\n\\begin{document}\n\\section*{SGN-41007 Pattern Recognition and Machine Learning}\n\\emph{Exercise Set 5: February 8--February 10, 2017}\n\\bigskip\n\\sloppy\n\n\\lstdefinestyle{mystyle}{\n  belowcaptionskip=1\\baselineskip,\n  breaklines=true,\n  frame=single,\n  xleftmargin=\\parindent,\n  language=Python,\n  showstringspaces=false,\n  basicstyle=\\ttfamily,\n  keywordstyle=\\bfseries\\color{green!40!black},\n  commentstyle=\\itshape\\color{purple!40!black},\n  identifierstyle=\\color{blue},\n  stringstyle=\\color{orange},\n  moredelim=**[is][\\color{red}]{@}{@},\n}\n\n\\lstset{language=Python,style=mystyle} \n\n\n\\noindent\nExercises consist of both pen\\&paper and computer assignments.\nPen\\&paper questions are solved at home before exercises, while\ncomputer assignments are solved during exercise hours. The\ncomputer assignments are marked by  \\python and \nPen\\&paper questions by  \\pen\n\n\\begin{enumerate}\n\n\\item \\pen \\emph{Derive the explicit mapping corresponding to a kernel trick.}\n\nIn the lectures we saw that the kernel trick $\\kappa(\\x, \\y) = (\\x \\cdot \\y)^2$\nfor $\\x = (x_1,x_2)$ and $\\y = (y_1,y_2)$ corresponds to the mapping \n\\[\n\\begin{pmatrix}\nu \\\\ v\n\\end{pmatrix}\n\\mapsto \n\\begin{pmatrix}\nu^2\\\\\nv^2\\\\\n\\sqrt{2}u v\n\\end{pmatrix}\n\\]\nFind the explicit mapping corresponding to the inhomogeneous kernel\n$\\kappa(\\x, \\y) = (\\x \\cdot \\y + 1)^2$ with $\\x, \\y\\in\\R^2$.\n\n\\emph{Hint:} Expand the kernel formula as far as you can. 
\nAt that point, reformulate the result into a dot product of\ntwo six-dimensional vectors; one composed of coordinates of $\\x$ only\nand the other of coordinates of $\\y$ only.\n\n\\item \\pen \\emph{Compute the gradient of the log-loss.}\n\nIn the lectures we defined the \\emph{logistic loss function}:\n\\begin{equation}\n\\ell(\\w) = \\sum_{n = 0}^{N-1} \\ln(1 + \\exp(-y_n\\w^T\\x_n)).\n\\label{eq:grad}\n\\end{equation}\nNote that there was a typo in the slides with the minus sign missing from the $\\exp()$ function.\n\n\\begin{enumerate}\n\t\\item Compute the formula for its gradient $\\frac{\\partial \\ell(\\w) }{\\partial \\w}$.\n\t\\item There are two alternative strategies for using the gradient.\n\t\\begin{itemize}\n\t\t\\item \\textbf{Batch gradient:} Compute the gradient from all samples and then apply the gradient descent rule\n\t\t$\\w \\leftarrow \\w - \\eta \\frac{\\partial \\ell(\\w) }{\\partial \\w}$.\n\t\t\\item \\textbf{Stochastic gradient:} Compute the gradient from one sample and then apply the gradient descent rule.\n\t\tIn other words, pretend $N = 1$ in formula \\ref{eq:grad}.\n\t\\end{itemize}\n\tIn the latter case, compute the next estimate for $\\w$ when $\\x_n = [-0.3, -1.7]^T$,\n\t$y_n = 1$ and $\\w = [0.9, 0.9]^T$.\n\\end{enumerate}\n\n\\item \\python \\emph{Implement gradient descent for log-loss.}\n\n\\begin{enumerate}\n\\item Implement a log-loss minimization algorithm. You may use the template provided by the\nteaching assistant.\n\\item Apply the code for the data downloaded from\n\n\\url{https://github.com/mahehu/SGN-41007/tree/master/exercises/Ex5/log_loss_data.zip}\n\nThe data is in CSV format. Load \\verb+X+ and \\verb+y+ using \\verb+numpy.loadtxt+.\n\n\\item Plot the path of $\\w$ over 100 iterations and check the accuracy (see plots below).\n\n\\end{enumerate}\n\n\\centerline{\\includegraphics[width=0.5\\textwidth]{log_loss_minimization.pdf}}\n\n\\item \\python \\emph{Select appropriate hyperparameters for the GTSRB data.}\n\nLast week we trained classifiers for the German Traffic Sign\nRecognition Benchmark (GTSRB) dataset. It turned out that the SVM was\nreally poor with default arguments, but changing the kernel pushed it\nto the top. 
In this exercise, we use brute force to find good hyperparameters\nfor the classifiers (kernel, C, number of trees, etc.).\n\nConsider the following two classifiers\n\t\\begin{verbatim}\nclf_list = [LogisticRegression(), SVC()]\nclf_name = ['LR', 'SVC']\n\t\\end{verbatim}\n\tThe most important hyperparameters are the \\emph{regularization strength} \\verb+C+ and\n\tthe penalty type parameter \\verb+penalty+, which can have values \"\\verb+l1+\" and \"\\verb+l2+\".\n\t\n\tIn order to use the same range for the two methods, you need to scale the data to\n\tzero mean and unit variance using \\verb+sklearn.preprocessing.StandardScaler+.\n\t\n\tImplement\n\ta grid search over these two parameters along the following lines:\n\t\\begin{verbatim}\nfor clf, name in zip(clf_list, clf_name):\n    for C in C_range:\n        for penalty in [\"l1\", \"l2\"]:\n            clf.C = C\n            clf.penalty = penalty\n            clf.fit(X_train, y_train)\n            y_pred = clf.predict(X_test)\n            score = accuracy_score(y_test, y_pred)\n\\end{verbatim}\nA reasonable range for \\verb+C+ is $C\\in\\{10^{-5},...,10^{0}\\}$.\n\n\\item \\python \\emph{Train ensemble methods with the GTSRB data.}\n\n\\begin{enumerate}\n\\item Train a 100-tree Random Forest classifier with the GTSRB and compute the accuracy on the test set.\n\\item Train a 100-tree Extremely Randomized Trees classifier with the GTSRB and compute the accuracy on the test set.\n\\item Train a 100-tree AdaBoost classifier with the GTSRB and compute the accuracy on the test set.\n\\item Train a 100-tree Gradient Boosted Tree classifier with the GTSRB and compute the accuracy on the test set.\n\\end{enumerate}\n\n\n\\end{enumerate}\n\n\\end{document}          \n", "meta": {"hexsha": "0f38a22351906a84ae2938b5004b23215e086c6c", "size": 5771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/Ex5/Exercise5.tex", "max_stars_repo_name": "mahehu/SGN-41007", "max_stars_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2017-01-09T07:48:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-15T15:13:49.000Z", "max_issues_repo_path": "exercises/Ex5/Exercise5.tex", "max_issues_repo_name": "mahehu/SGN-41007", "max_issues_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Ex5/Exercise5.tex", "max_forks_repo_name": "mahehu/SGN-41007", "max_forks_repo_head_hexsha": "c8ed169a0a5f70fb87b99448e39a573c0df584b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 46, "max_forks_repo_forks_event_min_datetime": "2017-01-10T19:32:04.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-20T08:29:20.000Z", "avg_line_length": 34.765060241, "max_line_length": 117, "alphanum_fraction": 0.7236180905, "num_tokens": 1700, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710086, "lm_q2_score": 0.8558511396138365, "lm_q1q2_score": 0.5724619331037274}}
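One possible way to fill in the loop skeleton from the grid-search assignment is sketched below, under stated assumptions: X_train, y_train, X_test, y_test are assumed to already hold the GTSRB features and labels, and LinearSVC is substituted for the sheet's SVC because the kernel SVC has no penalty parameter; liblinear is selected so LogisticRegression accepts both l1 and l2.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Assumed to exist: X_train, y_train, X_test, y_test (GTSRB data).
scaler = StandardScaler().fit(X_train)           # zero mean, unit variance
X_tr, X_te = scaler.transform(X_train), scaler.transform(X_test)

makers = [("LR", lambda C, p: LogisticRegression(C=C, penalty=p,
                                                 solver="liblinear")),
          ("SVC", lambda C, p: LinearSVC(C=C, penalty=p, dual=False))]
for name, make_clf in makers:
    for C in 10.0 ** np.arange(-5, 1):           # C in {1e-5, ..., 1e0}
        for penalty in ["l1", "l2"]:
            clf = make_clf(C, penalty).fit(X_tr, y_train)
            score = accuracy_score(y_test, clf.predict(X_te))
            print(f"{name} C={C:g} penalty={penalty}: {score:.3f}")
\end{verbatim}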
{"text": "\n\\subsection{Money is liquid and information efficient}\n\n\\subsubsection{Liquidity of money}\n\nLiquidity (don\u2019t always want a goat)\n\nAlice fishes and Bob hunts, and they trade meat and fish with each other. This much fish for that much meat. This all works perfectly well until Carol, Dan and Eve come along, who wants to trade berries, wood and furs. Now agreeing how much of this for that becomes more complicated, for every good there\u2019s a ratio relative to every other good.\n\nSo they agree a new system, they collect all of the shells on the island and use these to trade with each other. Now they only have to remember a price for each good.\n\nWe want divisibility of money.\n\n\\subsubsection{Information requirements of money}\n\nInformation requirements \\((n-1)+(n-2)\\) etc v \\((n-1)\\)\n\n\\(\\sum_{i=1}^n n-i\\)\n\n\\(n^2-\\sum_{i=1}^n i\\)\n\n\\(n^2-\\dfrac{n(n+1)}{2}\\)\n\n\\(\\dfrac{n(n-1)}{2}\\)\n\nCompared to \\(n-1\\) when money is used.\n\n\\subsubsection{Stability}\n\nDon't want to be used for other purposes, demand could fluctuate more, more wasteful.\n\n", "meta": {"hexsha": "86c375222f58100ec2c938b147de6bc0e537e0e4", "size": 1037, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/economics/money/01-01-moneyBarter.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/economics/money/01-01-moneyBarter.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/economics/money/01-01-moneyBarter.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.40625, "max_line_length": 344, "alphanum_fraction": 0.7348119576, "num_tokens": 280, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7217432182679957, "lm_q1q2_score": 0.5724188416351208}}
{"text": "\\documentclass[12pt,a4paper]{article}\n\\usepackage{algorithm, algpseudocode, amsmath, amssymb, caption, csquotes, empheq, geometry, graphicx, hyperref, listings, multirow, physics, siunitx, subcaption, upgreek}\n\\usepackage[section]{placeins}\n\n\\title{Computational Physics\\\\Problem Set 1}\n\\author{Saleh Shamloo Ahmadi\\\\Student Number: 98100872}\n\\date{September 27, 2021}\n\n\\hypersetup{colorlinks=true}\n\n\\newcommand*{\\defeq}{\\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n\t\t\t\\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}\n\t\t\t=}\n\n\\newcommand{\\figpath}{../fig}\n\n\\begin{document}\n\t\\maketitle\n    \\section{Fractal Generation}\n    Fractals are self-similar shapes, so they can be generated by repeatedly applying mappings to a set of points.\n    The fractals in this problem set can all be created with linear iterated function systems (IFS).\n    The transformations for these fractals can be described with affine transformations:\n    \\begin{gather}\n        \\vb{v}_{i+1, j} = T_j\\vb{v}_{i, k} + \\vb{c}_j\\\\\n        \\vb{v}_{i, j} = \\mqty(x_{i,j} \\\\ y_{i,j}) \\\\\n        T_j = \\mqty(a_j & b_j \\\\ c_j & d_j),\\quad \\vb{c}_j = \\mqty(e_j \\\\ f_j)\n    \\end{gather}\n    There are two ways to define the mappings: relative and absolute. Also, the fractals can be generated randomly\n    with the absolute mappings.\n    \\subsection{Relative Mapping}\n    This type of mapping is suitable for use with line-based fractals (i.e. the fractal consists of a single curve).\n    In this method, the transformations are applied locally on each line; The origin of the coordinate system is set\n    to on end of the line and the point on the other end of the line is transformed relative to that.\n    Algorithm \\ref{alg:rel} is the implementation of this method.\n    \\begin{algorithm}\n        \\caption{Fractal Generation by Relative Mapping}\n        \\label{alg:rel}\n        \\begin{algorithmic}[1]\n            \\Function{fractal}{$x_{init}, y_{init}, T, \\vb{c}, steps$}\n            \\Comment{$T$ and $\\vb{c}$ represent the matrix and vector for every transformation}\n                \\State $x \\gets x_{init}$\n                \\State $y \\gets y_{init}$\n                \\For{$step \\gets 1,steps$}\n                    \\State $i \\gets 1$\n                    \\While{$i < N(x)$}\n                    \\Comment{N(x) is the number of x coordinates (which is the same as the number of points)}\n                        \\State $\\mqty(x_{rel} \\\\ y_{rel}) \\gets \\mqty(x_{i+1} - x_i \\\\ y_{i+1} - y_i)$\n                        \\ForAll{$T_j$ and $\\vb{c}_j$}\n                            \\State $\\mqty(x_{new} \\\\ y_{new}) \\gets T_j\\mqty(x_{rel} \\\\ y_{rel}) + \\vb{c}$\n                            \\State insert $x_{new}$ into $x$ at index $i$\n                            \\State insert $y_{new}$ into $y$ at index $i$\n                            \\State $i \\gets i+1$\n                        \\EndFor\n                    \\EndWhile\n                \\EndFor\n                \\State \\textbf{return} $x$ and $y$\n            \\EndFunction\n        \\end{algorithmic}\n    \\end{algorithm}\n    \\subsection{Absolute Mapping}\n    This type of mapping is suitable for non-line-based fractals (the fractal isn't only a single curve).\n    In this method, the same transformations are applied to every point in each step to generate new points.\n    The origin of the coordinate system is always set at the first point of the fractal. 
By repeatedly applying\n    the transformations to the fractal in each step, the self-similar shape is created.\n    Algorithm \\ref{alg:abs} is the implementation of this method.\n    \\begin{algorithm}\n        \\caption{Fractal Generation by Absolute Mapping}\n        \\label{alg:abs}\n        \\begin{algorithmic}[1]\n            \\Function{fractal}{$x_{init}, y_{init}, T, \\vb{c}, steps$}\n            \\Comment{$T$ and $\\vb{c}$ represent the matrix and vector for every transformation}\n                \\State $x \\gets x_{init}$\n                \\State $y \\gets y_{init}$\n                \\For{$step \\gets 1,steps$}\n                    \\State define the $x_{new}$ empty array\n                    \\State define the $y_{new}$ empty array\n                    \\ForAll{$x_i \\in x$ and $y_i \\in y$}\n                        \\ForAll{$T_j$ and $\\vb{c}_j$}\n                            \\State $\\mqty(x_{gen} \\\\ y_{gen}) \\gets T_j\\mqty(x_i \\\\ y_i) + \\vb{c}_j$\n                            \\State append $x_{gen}$ to $x_{new}$\n                            \\State append $y_{gen}$ to $y_{new}$\n                        \\EndFor\n                    \\EndFor\n                    \\State $x \\gets x_{new}$\n                    \\State $y \\gets y_{new}$\n                \\EndFor\n                \\State \\textbf{return} $x$ and $y$\n            \\EndFunction\n        \\end{algorithmic}\n    \\end{algorithm}\n    \\subsection{Random Fractals}\n    Some fractals can be generated by repeatedly applying randomly chosen transformations from the fractal's transformation set. If this is done with enough\n    sample points and enough steps, so that the smallest unit of the shape is smaller than a pixel of the display, the fractal\n    will look perfect. Algorithm \\ref{alg:rand} is the implementation of this method.\n    \\begin{algorithm}\n        \\caption{Fractal Generation by Random Mapping}\n        \\label{alg:rand}\n        \\begin{algorithmic}[1]\n            \\Function{fractal}{$range_x, range_y, T, \\vb{c}, steps, samples$}\n            \\Comment{$T$ and $\\vb{c}$ represent the matrix and vector for every transformation}\n                \\State generate random $x$ values in $range_x$ ($N(x)=samples$)\n                \\State generate random $y$ values in $range_y$ ($N(y)=samples$)\n                \\ForAll{$x_i \\in x$ and $y_i \\in y$}\n                    \\For{$step \\gets 1,steps$}\n                        \\State choose random $T_j$ and $\\vb{c}_j$\n                        \\State $\\mqty(x_i \\\\ y_i)\\gets T_j\\mqty(x_i \\\\ y_i) + \\vb{c}_j$\n                    \\EndFor\n                \\EndFor\n                \\State \\textbf{return} $x$ and $y$\n            \\EndFunction\n        \\end{algorithmic}\n    \\end{algorithm}\n    \\subsection{Representation of the fractals}\n    The fractals are defined by functions $\\{f_1, f_2, \\dots, f_n\\}$. 
Each of the functions is\n    \\begin{equation}\n        f_i(\\vb{v}) = T_i\\vb{v} + \\vb{c}_i,\n    \\end{equation}\n    where\n    \\begin{gather}\n        T = \\mqty(a & b \\\\ c & d),\\quad \\vb{c} = \\mqty(e \\\\ f).\n    \\end{gather}\n    So we can represent each fractal by defining the values $a$, $b$, $c$, $d$, $e$, and $f$.\n    \\section{Koch Snowflake}\n    \\begin{table}\n        \\centering\n        \\begin{tabular}{|c|c|c|c|c|c|c|}\n            \\hline\n            transformation & $a$ & $b$ & $c$ & $d$ & $e$ & $f$ \\\\\n            \\hline\n            $f_1$ & $\\flatfrac{1}{3}$ & 0 & 0 & $\\flatfrac{1}{3}$ & 0 & 0 \\\\\n            \\hline\n            $f_2$ & $\\flatfrac{1}{6}$ & $-\\flatfrac{\\sqrt{3}}{6}$ & $\\flatfrac{\\sqrt{3}}{6}$ & $\\flatfrac{1}{6}$ & $\\flatfrac{1}{3}$ & 0 \\\\\n            \\hline\n            $f_3$ & $\\flatfrac{1}{6}$ & $-\\flatfrac{\\sqrt{3}}{6}$ & $\\flatfrac{\\sqrt{3}}{6}$ & $\\flatfrac{1}{6}$ & $\\flatfrac{1}{2}$ & $\\flatfrac{\\sqrt{3}}{6}$ \\\\\n            \\hline\n            $f_4$ & $\\flatfrac{1}{3}$ & 0 & 0 & $\\flatfrac{1}{3}$ & $\\flatfrac{2}{3}$ & 0 \\\\\n            \\hline\n        \\end{tabular}\n    \\end{table}\n    \\begin{figure}\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/koch}}\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/koch-large}}\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/koch-color.pdf}}\n        \\caption{Koch Curve}\n    \\end{figure}\n    \\begin{figure}\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/koch-tri}}\n        \\caption{Koch Snowflake}\n    \\end{figure}\n    \\begin{figure}\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/koch-tri-color}}\n        \\caption{Colored Koch Snowflake}\n    \\end{figure}\n    \\section{Heighway Dragon}\n    \\begin{table}\n        \\centering\n        \\begin{tabular}{|c|c|c|c|c|c|c|}\n            \\hline\n            transformation & $a$ & $b$ & $c$ & $d$ & $e$ & $f$ \\\\\n            \\hline\n            $f_1$ & $\\flatfrac{1}{2}$ & $\\flatfrac{1}{2}$ & $-\\flatfrac{1}{2}$ & $\\flatfrac{1}{2}$ & 0 & 0 \\\\\n            \\hline\n            $f_2$ & $-\\flatfrac{1}{2}$ & $\\flatfrac{1}{2}$ & $-\\flatfrac{1}{2}$ & $-\\flatfrac{1}{2}$ & $\\flatfrac{1}{2}$ & $-\\flatfrac{1}{2}$ \\\\\n            \\hline\n        \\end{tabular}\n    \\end{table}\n    \\begin{figure}\n        \\centering\n        \\includegraphics[width=\\linewidth]{\\figpath/heighway}\n    \\end{figure}\n    \\begin{figure}\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/heighway-large}}\n    \\end{figure}\n    \\section{Sierpi\u0144ski Triangle}\n    \\begin{figure}[htb!]\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/sierpinski}}\n    \\end{figure}\n    \\begin{table}\n        \\centering\n        \\begin{tabular}{|c|c|c|c|c|c|c|}\n            \\hline\n            transformation & $a$ & $b$ & $c$ & $d$ & $e$ & $f$ \\\\\n            \\hline\n            $f_1$ & $\\flatfrac{1}{2}$ & 0 & 0 & $\\flatfrac{1}{2}$ & 0 & 0 \\\\\n            \\hline\n            $f_2$ & $\\flatfrac{1}{2}$ & 0 & 0 & $\\flatfrac{1}{2}$ & $\\flatfrac{1}{4}$ & $\\flatfrac{\\sqrt{3}}{4}$ \\\\\n            \\hline\n            $f_3$ & $\\flatfrac{1}{2}$ & 0 & 0 & $\\flatfrac{1}{2}$ & $\\flatfrac{1}{2}$ & 0 \\\\\n            \\hline\n        \\end{tabular}\n    \\end{table}\n    \\begin{figure}\n        
\\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/sierpinski-large}}\n        \\caption{Step 10 Sierpi\u0144ski triangle}\n    \\end{figure}\n    \\newgeometry{top=0.4in}\n    \\thispagestyle{empty}\n    \\begin{figure}\n        \\centering\n        \\begin{subfigure}{\\linewidth}\n            \\centering\n            \\includegraphics[width=\\linewidth]{\\figpath/sierpinski-random}\n        \\end{subfigure}\n        \\begin{subfigure}{\\linewidth}\n            \\centering\n            \\includegraphics[width=\\linewidth]{\\figpath/sierpinski-random-large}\n        \\end{subfigure}\n        \\caption{Randomly Generating the Sierpi\u0144ski Triangle}\n    \\end{figure}\n    \\FloatBarrier\n    \\restoregeometry\n    \\subsection{The Sierpi\u0144ski Triangle Inside Pascal's Triangle}\n    \\newgeometry{left=0in,right=0in}\n    \\begin{figure}\n        \\centering\n        \\begin{subfigure}{0.45\\linewidth}\n            \\centering\n            \\makebox[0.45\\linewidth][c]{\\includegraphics{\\figpath/pascal-triangle}}\n            \\caption{Pascal's Triangle}\n        \\end{subfigure}\n        \\begin{subfigure}{0.45\\linewidth}\n            \\centering\n            \\makebox[0.45\\linewidth][c]{\\includegraphics{\\figpath/pascal-triangle-fractal}}\n            \\caption{Connecting the odd numbers generates the Sierpi\u0144ski triangle}\n        \\end{subfigure}\n        \\begin{subfigure}{0.45\\linewidth}\n            \\centering\n            \\makebox[0.45\\linewidth][c]{\\includegraphics{\\figpath/pascal-fractal}}\n            \\caption{The Sierpi\u0144ski triangle inside a larger Pascal's triangle}\n        \\end{subfigure}\n        \\begin{subfigure}{0.45\\linewidth}\n            \\centering\n            \\makebox[0.45\\linewidth][c]{\\includegraphics{\\figpath/pascal-fractal-large}}\n            \\caption{Painted on Pascal's triangle of size 256}\n        \\end{subfigure}\n    \\end{figure}\n    \\FloatBarrier\n    \\restoregeometry\n    \\section{Barnsley Fern}\n    To create this fractal, we need to apply random functions with different probabilities.\n    \\begin{table}\n        \\centering\n        \\begin{tabular}{|c|c|c|c|c|c|c|c|}\n            \\hline\n            transformation & $a$ & $b$ & $c$ & $d$ & $e$ & $f$ & probability \\\\\n            \\hline\n            $f_1$ & 0 & 0 & 0 & 0.16 & 0 & 0 & 1\\% \\\\\n            \\hline\n            $f_2$ & 0.85 & 0.04 & -0.04 & 0.85 & 0 & 1.6 & 85\\% \\\\\n            \\hline\n            $f_3$ & 0.20 & -0.26 & 0.23 & 0.22 & 0 & 1.6 & 7\\% \\\\\n            \\hline\n            $f_4$ & -0.15 & 0.28 & -0.26 & 0.24 & 0 & 0.44 & 7\\% \\\\\n            \\hline\n        \\end{tabular}\n    \\end{table}\n    \\thispagestyle{empty}\n    \\begin{figure}\n        \\centering\n        \\makebox[\\linewidth][c]{\\includegraphics{\\figpath/barnsley-fern}}\n        \\caption{Barnsley fern with 200000 random samples and 25 transformation steps}\n    \\end{figure}\n\\end{document}\n", "meta": {"hexsha": "93d895d88ef02dbfb0a2ac15e7ce9e3b0660040e", "size": 12196, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ps1-fractals/report/ps1-fractals.tex", "max_stars_repo_name": "slhshamloo/comp-phys", "max_stars_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ps1-fractals/report/ps1-fractals.tex", "max_issues_repo_name": 
"slhshamloo/comp-phys", "max_issues_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ps1-fractals/report/ps1-fractals.tex", "max_forks_repo_name": "slhshamloo/comp-phys", "max_forks_repo_head_hexsha": "04d6759e0eb9d7e16e2781417d389bc15e22b01b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0036900369, "max_line_length": 171, "alphanum_fraction": 0.5560019679, "num_tokens": 3709, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5724188368882239}}
{"text": "\\section{Universal coefficient theorem, and products in $ H^\\ast$}\nWe talked about cohomology $ H^\\ast(X,A;N)$, which was contravariant in $(X,A)$. We can repeat some of the arguments for homology in the cohomological context. We can also relate cohomology to homology. This is the purpose of the universal coefficient theorem for cohomology. I won't actually prove this here, because I've put up notes on the website that covers this.\n\\begin{theorem}[Mixed variance UCT]\nLet $R$ be a PID, let $N$ be a $R$-module, and let $C_\\bullet$ be a chain complex of free $R$-modules. We've decided that $\\Hom_R(C_\\bullet,N)$ is a cochain complex. I'm always a little confused on how to write the homology of a cochain complex? Should I write $ H^n$ or $ H_n$? Maybe this is a personal problem, and I should keep it personal? We'll just write $ H^n$. (Some ridiculous notation with the $n$ sitting on the line in $H$ was suggested, but this'd be \\emph{impossible} to TeX!)\n\nAnyway, we had a map $ H^n\\Hom_R(C_\\bullet,N)\\to\\Hom_R( H_n(C_\\bullet),N)$. The theorem is that this is surjective, which has kernel $\\Ext^1_R( H_{n-1}(C_\\bullet),N)$. I.e, there is a natural sexseq:\n\\begin{equation*}\n0\\to\\Ext^1_R( H_{n-1}(C_\\bullet),N)\\to H^n\\Hom_R(C_\\bullet,N)\\to\\Hom_R( H_n(C_\\bullet),N)\\to 0\n\\end{equation*}\nthat splits, but not naturally.\n\\end{theorem}\n\\begin{proof}[Strategy of the proof]\nI have a sexseq $0\\to Z_n(C_\\bullet)\\to C_n\\to C_n/Z_n(C_\\bullet)\\to 0$ where $Z_n(C_\\bullet)=\\ker d_n$ (where $C_\\bullet$ is a chain complex). But $C_n/Z_n(C_\\bullet)=B_{n-1}\\hookrightarrow C_{n-1}$ where $B_n(C_\\bullet)=\\img d_{n-1}$ (this last thing probably has wrong indexing). We assumed $C_{n-1}$ is a free $R$-module, and that $R$ is a PID, so that $B_{n-1}$ is also free. Thus the sexseq $0\\to Z_n(C_\\bullet)\\to C_n\\to B_{n-1}\\to 0$ splits.\n\nAnother sexseq that's important is $0\\to Z_n(C_\\bullet)/B_n(C_\\bullet)\\to C_n/B_n(C_\\bullet)\\to C_n/Z_n(C_\\bullet)\\to 0$. If you like, you can think of this as follows:\n\\begin{equation*}\n\\xymatrix{\n\t& 0 & 0 & 0 & \\\\\n\t0\\ar[r] & Z_n(C_\\bullet)/B_n(C_\\bullet)\\ar[r]\\ar[u] & C_n/B_n(C_\\bullet)\\ar[r]\\ar[u] & C_n/Z_n(C_\\bullet)\\ar[r]\\ar[u] & 0\\\\\n\t0\\ar[r] & Z_n(C_\\bullet)\\ar[r]\\ar[u] & C_n\\ar[r]\\ar[u] & C_n/Z_n(C_\\bullet)\\ar[r]\\ar[u] & 0\\\\\n\t0\\ar[r] & B_n(C_\\bullet)\\ar[r]\\ar[u] & B_n(C_\\bullet)\\ar[r]\\ar[u] & 0\\ar[r]\\ar[u] & 0\\\\\n\t& 0\\ar[u] & 0\\ar[u] & 0\\ar[u] &\n}\n\\end{equation*}\nThus $0\\to Z_n(C_\\bullet)/B_n(C_\\bullet)\\to C_n/B_n(C_\\bullet)\\to C_n/Z_n(C_\\bullet)\\to 0$ splits. But $Z_n(C_\\bullet)/B_n(C_\\bullet)= H_{n-1}$, so this gives: $0\\to H_{n-1}\\to C_n/B_n(C_\\bullet)\\to B_{n-1}\\to 0$.\n\nNow, we have a sexseq $0\\to B^n\\Hom_R(C_\\bullet,N)\\to Z^n\\Hom_R(C_\\bullet,N)\\to H^n\\Hom_R(C_\\bullet,N)\\to 0$. We want to compare this to $\\Hom_R( H_n(C_\\bullet),N)$. But, now, $0\\to H_{n-1}\\to C_n/B_n(C_\\bullet)\\to B_{n-1}\\to 0$ splits, so we get a sexseq $0\\to \\Hom(B_{n-1},N)\\to \\Hom(C_n/B_n(C_\\bullet),N)\\to\\Hom( H_{n-1}(C_\\bullet),N)\\to 0$. 
Let me write this out:\n\\begin{equation*}\n\\xymatrix{\n\t0\\ar[r] & B^n\\Hom_R(C_\\bullet,N) \\ar[r]\\ar@{-->}[d] & Z^n\\Hom_R(C_\\bullet,N) \\ar[r]\\ar@{-->}[d] & H^n\\Hom_R(C_\\bullet,N) \\ar[r]\\ar[d] & 0\\\\\n\t0 \\ar[r] & \\Hom(B_{n-1},N) \\ar[r] & \\Hom(C_n/B_n(C_\\bullet),N)\\ar[r] & \\Hom( H_n(C_\\bullet),N) \\ar[r] & 0\n}\n\\end{equation*}\nAn element of $Z^n\\Hom_R(C_\\bullet,N)$ is a map $f:C_n\\to N$ such that $f\\circ d=0$. So this map $f$ factors as $C_n/B_n(C_\\bullet)\\to N$. Thus we have the middle dotted map, and it's actually an isomorphism. You can then check compatibility, to get the left dotted map.\n\nIs the map $ H^n\\Hom_R(C_\\bullet,N)\\to\\Hom( H_n(C_\\bullet),N)$ surjective? Well, use the snake lemma. I can then use diagram chasing to see that the map is indeed surjective, with kernel given by the kernel of $B^n\\Hom_R(C_\\bullet,N)\\to \\Hom(B_{n-1},N)$. And the rest of the proof amounts to showing that this kernel is $\\Ext^1_R( H_{n-1}(C_\\bullet),N)$. And for splitting we can construct a splitting map, see the notes.\n\\end{proof}\n\\begin{remark}\nMiguel: Why is $\\Ext$ called Ext?\n\nMiller: It deals with extensions. Let $R$ be a commutative ring, and let $M,N$ be two $R$-modules. I can think about extensions $0\\to N\\to L\\to M\\to 0$. Well, for example, I have two extensions $0\\to\\Z/2\\Z\\to\\Z/2\\Z\\oplus\\Z/2\\Z\\to\\Z/2\\Z\\to 0$, and $0\\to \\Z/2\\Z\\to\\Z/4\\Z\\to\\Z/2\\Z\\to 0$. Say that two extensions are equivalent if there's a map of sexseqs between them that is the identity on $N$ and $M$. The two extensions above aren't equivalent, for example.\n\nAnother definition of $\\Ext^1_R(M,N)$ is the set of extensions like this modulo this notion of equivalence. The zero in the group is the split extension.\n\nAlso, $\\Ext$ is contravariant in the first variable, and covariant in the second variable. If you want to find the Ext groups, you can use an injective resolution of the second variable, or a projective resolution of the first variable. These are what are known as derived functors. $\\Tor$ is a left derived functor because it uses a projective resolution that goes off to the left, but $\\Ext$ is a right derived functor because it uses an injective resolution that goes off to the right.\n\\end{remark}\n\\subsection{Products}\nWe'll talk about the cohomology cross product.\n\\begin{construction}\nDefine $S^p(X)\\otimes S^q(Y)\\xrightarrow{\\times}S^{p+q}(X\\times Y)$ as follows. Let $\\sigma$ be a $(p+q)$-simplex in $X\\times Y$. Let $f\\otimes g\\in S^p(X)\\otimes S^q(Y)$. We'll define $f\\times g\\in S^{p+q}(X\\times Y)$. Then $f:S_p(X)\\to R$ and $g:S_q(Y)\\to R$. I can write $\\sigma=\\begin{pmatrix}\\sigma_1 \\\\ \\sigma_2\\end{pmatrix}$ where $\\sigma_1:\\Delta^{p+q}\\to X$ and $\\sigma_2:\\Delta^{p+q}\\to Y$. Define:\n\\begin{equation*}\n(f\\times g)(\\sigma)=f(\\sigma_1\\circ\\alpha_p)g(\\sigma_2\\circ\\omega_q)\n\\end{equation*}\nwhere $\\alpha_p:\\Delta^p\\to\\Delta^{p+q}$ takes $k\\mapsto k$ where $k\\in[p]$. And $\\omega_q:\\Delta^q\\to\\Delta^{p+q}$ sends $\\ell\\mapsto \\ell+p$ where $\\ell\\in[q]$. It is an extremely (idiotic I think was the word used) construction that I have never gotten used to.\n\\begin{remark}\nIt is \\emph{incredibly} stupid.\n\\end{remark}\nWe get a map on cochains. We need to check that this is a chain map $S^\\ast(X)\\otimes S^\\ast(Y)\\to S^\\ast(X\\times Y)$. This is your homework! What this means is that you get a map $ H^\\ast(S^\\ast(X)\\otimes S^\\ast(Y))\\to H^\\ast(X\\times Y)$. 
I guess I have coefficients in $R$. I'm not quite done, but I have a map $ H^\\ast(X)\\otimes H^\\ast(Y)\\to H^\\ast(S^\\ast(X)\\otimes S^\\ast(Y))$. The composition $ H^\\ast(X)\\otimes H^\\ast(Y)\\to H^\\ast(S^\\ast(X)\\otimes S^\\ast(Y))\\to H^\\ast(X\\times Y)$ is the cross product.\n\\end{construction}\nIt's not very easy to do computations with this. It's just hard to deal with. Let me make some points about this construction, though.\n\\begin{definition}\nNow take $X=Y$. Then I get $ H^p(X)\\otimes H^q(X)\\xrightarrow{\\times} H^{p+q}(X\\times X)\\xrightarrow{\\Delta^\\ast} H^{p+q}(X)$ where $\\Delta:X\\to X\\times X$ is the diagonal. This composition is called the cup product. Some people write $\\smile$, and others write $\\cup$. (I'll TeX it as $\\cup$.)\n\\end{definition}\nSome properties of the cup product are the following. I claim that $ H^0(X)\\cong\\Map(\\pi_0(X),R)$ as rings. This $\\alpha$ and $\\omega$ stuff collapses if $p=q=0$. There's nothing to do ... they're both the identity maps. So this isomorphism is clear. Also, inside $ H^0(X)$, we pick the element that maps to $(c\\mapsto 1)\\in\\Map(\\pi_0(X),R)$. This is the identity for the cup product. This comes out because when $p=0$ in our above story, then $\\alpha_0$ is just including the $0$-simplex, and $\\omega$ is the identity, so this is completely clear from that description.\n\\begin{prop}\nIf $f\\in S^p(X)$ and $g\\in S^q(Y)$ and $h\\in S^r(Z)$, then $((f\\times g)\\times h)(\\sigma)=(f\\times(g\\times h))(\\sigma)$ where $\\sigma:\\Delta^{p+q+r}\\to X\\times Y\\times Z$.\n\\end{prop}\n\\begin{proof}\nWell, $((f\\times g)\\times h)(\\sigma)=(f\\times g)(\\sigma_{12}\\circ\\alpha_{p+q})h(\\sigma_3\\circ\\omega_r)$ where $\\sigma_{12}:\\Delta^{p+q+r}\\to X\\times Y$ and $\\sigma_3:\\Delta^{p+q+r}\\to Z$. But $(f\\times g)(\\sigma_{12}\\circ\\alpha_{p+q})=f(\\sigma_1\\circ\\alpha_p)g(\\sigma_2\\circ\\mu_q)$ where $\\mu_q$ is the ``middle portion'' that sends $\\ell\\mapsto \\ell+p$ where $\\ell\\in[q]$. In other words, $((f\\times g)\\times h)(\\sigma)=f(\\sigma_1\\circ\\alpha_p)g(\\sigma_2\\circ\\mu_q)h(\\sigma_3\\circ\\omega_r)$. I've used associativity of the ring. But this thing is exactly the same as $(f\\times(g\\times h))(\\sigma)$, so this is associative.\n\\end{proof}\nTherefore, $(\\alpha\\cup\\beta)\\cup\\gamma=\\alpha\\cup(\\beta\\cup\\gamma)$.\n\nBut this product is obviously not commutative. It treats the two maps completely differently. But we have ways of dealing with this. This comes from the story of acyclic models, where I'll show that $\\alpha\\cup\\beta=(-1)^{|\\alpha|\\cdot|\\beta|}\\beta\\cup\\alpha$. 
Thus $ H^\\ast(X)$ forms a \\emph{graded commutative ring}.\n", "meta": {"hexsha": "053b2a36b947a2ae21189003faf4a9d307f33c92", "size": 8879, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-905/lec-28-uct-products-in-cohomology.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "old-905/lec-28-uct-products-in-cohomology.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "old-905/lec-28-uct-products-in-cohomology.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 119.9864864865, "max_line_length": 623, "alphanum_fraction": 0.6901678117, "num_tokens": 3121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.793105953629227, "lm_q1q2_score": 0.5724188339060723}}
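As a worked sanity check on the mixed-variance UCT (an example added here, not from the lecture, written with the notes' \Hom, \Ext and \Z macros): take $X = \mathbb{R}\mathrm{P}^2$, with integral homology $H_0 = \Z$, $H_1 = \Z/2\Z$, $H_2 = 0$. The split sexseq gives $H^n \cong \Hom(H_n,\Z)\oplus\Ext^1_{\Z}(H_{n-1},\Z)$, so

\begin{align*}
H^0(\mathbb{R}\mathrm{P}^2;\Z) &\cong \Hom(\Z,\Z) \cong \Z,\\
H^1(\mathbb{R}\mathrm{P}^2;\Z) &\cong \Hom(\Z/2\Z,\Z)\oplus\Ext^1_{\Z}(\Z,\Z) = 0,\\
H^2(\mathbb{R}\mathrm{P}^2;\Z) &\cong \Hom(0,\Z)\oplus\Ext^1_{\Z}(\Z/2\Z,\Z) \cong \Z/2\Z.
\end{align*}

The torsion in $H_1$ reappears one degree up in cohomology, exactly as the $\Ext^1$ term predicts.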
{"text": "%% Digital Systems\n%% Continuous System Equivalence\n\\def\\FileDate{98/12/02}\n\\def\\FileVersion{1.0}\n% ----------------------------------------------------------------\n% Notes pages *********************************************************\n% ----------------------------------------------------------------\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Introduction}\n\\input{chunk00}\n\\end{slide}\n\\fi\n\\input{chunk00}\n\nBefore we can present such a system, it is necessary to establish the\nrelationship between digital operations, such as the shift, and\ncontinuous operations.\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Agenda}\nIn this pre-lecture presentation we will start by discussing the relationship of $s$ to $z$.\nWe will then present four ways to convert a transfer function $H(s)$ into its\ndigital equivalent $H(z)$. These are:\n\\begin{itemize}\n\\item The zero-order hold equivalent\n\\item The Tustin bilinear transform equivalent\n\\item Matched pole zero equivalent\n\\item Modified matched pole-zero equivalent\n\\end{itemize}\n\\end{slide}\n\\fi\n\n\\section*{Equivalence of $s$ and $z$}\n\n\\ifslidesonly\n\\begin{slide}\\label{slide:l11s1}\n  \\heading{Delaying a Sampled Signal}\n\\input{chunka}\n\\end{slide}\n\\fi\n\\input{chunka}\n\n\\ifslidesonly\n\\begin{slide}\\label{slide:l11s2}\n  \\heading{Sampling a Delayed Signal}\n  \\input{chunkb}\n\\end{slide}\n\\fi\n\\input{chunkb}\n\n\\ifslidesonly\n\\begin{slide}\\label{zisexpmst}\n  \\input{chunk01}\n\\end{slide}\n\\fi\n\\input{chunk01}\n\nThis is the fundamental relationship of equivalence. Before using it,\nwe must see how a continuous signal is reconstructed from a digital\nsignal. This is accomplished by means of a ``\\emph{Digital-to-Analogue\nConverter}''\n\n\\section*{Digital-to-Analogue Converter}\n\n\\ifslidesonly\n\\begin{slide}\\label{dac}\n\\heading{Modelling a DAC with a Zero-Order Hold}\n\\input{chunk02}\n\\end{slide}\n\\fi\n\\input{chunk02}\n\n\\ifslidesonly\n\\begin{slide}\\label{dac}\n\\heading{Operaton of the Zero-order Hold}\n\\input{chunk03}\n\\end{slide}\n\\fi\n\\input{chunk03}\n\n\\begin{slide}\\label{slide:l11s3}\n  \\heading{Step-wise continuous signal}\n  This represents the output of the zero-order hold $v(t) = v_k$ for\n  $kT \\le t < (k+1)T$.\\\\\n  \\begin{center}\n\\resizebox{300pt}{!}{\\includegraphics{pictures/steps.pdf}}\n  \\end{center}\n\\end{slide}\n\nThe signal may be considered as an infinite number of pulses of which\n  the $k$-th is that shown in \\sref{slide:l11s4}. To model such a\n  signal we use the so-called ``\\emph{gating}'' property of the\n  time-delayed unit-step function $\\epsilon(t)$ illustrated in \\sref{slide:l11s5}.\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Modelling the ZOH Mathematically}\n\\end{slide}\n\\fi\n\\begin{slide}\\label{slide:l11s4}\n  \\heading{The $k$-th ``Pulse''}\n  \\begin{center}\n\\resizebox{300pt}{!}{\\includegraphics{pictures/pulse.pdf}}\n  \\end{center}\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s5}\n  \\heading{The Gating Function (1)}\n  \\begin{center}\n\\resizebox{300pt}{!}{\\includegraphics{pictures/gate.pdf}}\n  \\end{center}\n\\end{slide}\n\nThe opening ``gate'' is given by $v_k \\epsilon (t-kT)$, a step of height\n$v_k$, which is activated at $t=kT$ seconds. 
The gate is ``closed'' by\na negative going unit step, also of height $v_k$, which is activated\nat $t=\\{k+1\\}T$ seconds.\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{The Gating Function (2)}\n\\input{chunk04}\n\\end{slide}\n\\fi\n\\input{chunk04}\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{The Gating Function (3)}\n\\input{chunk05}\n\\end{slide}\n\\fi\n\\input{chunk05}\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{ZOH as mixed transfer function}\n\\input{chunk06}\n\\end{slide}\n\\fi\n\\input{chunk06}\n\n\\begin{slide}\\label{slide:l11s10}\n  \\heading{Hold-Equivalent Digital System}\n  \\resizebox{300pt}{!}{\\includegraphics{pictures/holdequiv.pdf}}\n\\end{slide}\n\nWe can now design a `hold-equivalent' digital system.\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Hold-equivalent transform}\n\\input{chunk07}\n\\end{slide}\n\\fi\n\\input{chunk07}\n\n\\begin{slide}\\label{slide:l11s11}\n  \\heading{Example 1}\n  If \\[H(s) = \\frac{a}{s+a}\\] then find the Zero-Order Hold Equivalent $H(z)$\n\\end{slide}\n\n\\subsubsection*{Solution}\n\\begin{eqnarray*}\n  H(z) &=& \\frac{z-1}{z}\\mathcal{Z} \\frac{a}{s(s+a)}\\\\\n       &=& \\frac{z-1}{z} \\frac{z(1-e^{-aT})}{(z-1)(z-e^{-aT})}\\\\\n       &=& \\frac{1-e^{-aT}}{z-e^{-aT}}.\n\\end{eqnarray*}\n\n\\textbf{Malab note}: the zero-order-hold equivalent is the default\nsystem used for continuous system equivalence in \\emph{Matlab}. To\nconvert a continous system in any \\emph{lti} format\\footnote{%\nThe LTI functions are \\emph{tf}, \\emph{ss}, and \\emph{zpk}. For more\ninformation, type \\emph{help lti} inside \\emph{Matlab}.} use:\n\\begin{verbatim}\nlti_d = c2d(lti, Ts); % You must provide a sampling time or use -1 if undefined\n\\end{verbatim}\n\n\\section*{Other Continuous System Equivalences}\n\n\\subsection*{Approximation based on numerical integration}\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Approximation based on numerical integration}\n\\input{chunk08}\n\\end{slide}\n\\fi\n\\input{chunk08}\n\n\\begin{slide}\\label{slide:l11s20a}\n  \\heading{Approximation by Numerical Integration}\n  \\begin{itemize}\n  \\item We wish to find a transfer function $T(z)$ that is equivalent to $T(s)=s$.\n  \\item Let us instead seek a transfer function $D(z)$ that is equivalent to $D(s) = 1/s$.\n  \\end{itemize}\n  \\resizebox{300pt}{!}{\\includegraphics{pictures/s-to-z.pdf}}\n\\end{slide}\n\nThus, the transfer function we are seeking will in fact be an approximation of\nthe integral $y(t)=\\int u(t) dt$. 
We can illustrate this as shown in SLIDE~\\ref{slide:l11s20b}.\nIf we sample the curve $u(t)$ and consider the situation at the $n$-th sampling\ninstant, we will have $$y(nT) = \\int_{0}^{nT} u(t) dt$$ We can rewrite this as\n$$y(nT)= \\int_{0}^{nT-T} u(t) dt + \\int_{nT-T}^{nT} u(t) dt$$\nwhere the second integral term is the shaded area shown in SLIDE~\\ref{slide:l11s20c}.\nNow if we assume that the first integral term was approximated by the digital\nintegrator in the previous sampling instant, then\n$y(nT-T) = \\int_{0}^{nT-T} u(t) dt$ is known, and consequently we have\n$$y(nT) = y(nT-T) + \\int_{nT-T}^{nT} u(t) dt$$\nand we only need to determine the area of the shaded region to approximate $y(nT)$.\n\n\\begin{slide}\\label{slide:l11s20b}\n\\heading{Model of Integration}\n\\begin{center}\n\t\\resizebox{220pt}{!}{\\includegraphics{pictures/integral1.pdf}}\n\\end{center}\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s20c}\n\\heading{Sampled Model of Integration}\n\\begin{center}\n\t\\resizebox{220pt}{!}{\\includegraphics{pictures/integral2.pdf}}\n\\end{center}\n\\end{slide}\n\nThe most obvious approximations are illustrated in \\sref{slide:l11s20d} to\n\\sref{slide:l11s20f}. It is clear that if we are to disallow any further\nsubdivision of the shaded area, the \\emph{trapezoidal approximation}\nwill provide the most accurate result.\n\n\\begin{slide}\\label{slide:l11s20d}\n\\heading{Forward Rectangular Approximation}\n\\begin{center}\n\t\\resizebox{220pt}{!}{\\includegraphics{pictures/forward-rect.pdf}}\n\\end{center}\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s20e}\n\\heading{Backward Rectangular Approximation}\n\\begin{center}\n\t\\resizebox{220pt}{!}{\\includegraphics{pictures/backward-rect.pdf}}\n\\end{center}\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s20f}\n\\heading{Trapezoidal Approximation}\n\\begin{center}\n\t\\resizebox{220pt}{!}{\\includegraphics{pictures/trapezoidal.pdf}}\n\\end{center}\n\\end{slide}\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{$z$-transform of trapezoidal approximation}\n\\input{chunk09}\n\\end{slide}\n\\fi\n\\input{chunk09}\n\n\\ifslidesonly\n\\begin{slide}\n\\heading{Approximation of $s$ by numerical integration}\n\\input{chunk10}\n\\end{slide}\n\\fi\n\\input{chunk10}\n\n\\begin{slide}\\label{slide:l11s22}\n  \\heading{Trapezoidal Approximation}\n\n  \\begin{eqnarray*}\n     s &=& \\frac{2}{T}\\, \\frac{z-1}{z+1} \\\\\n     z &=& \\frac{1 + 1/2 sT}{1 - 1/2 sT}\n  \\end{eqnarray*}\n  compare expansion $$z=(1+ (1/2) sT)(1- (1/2) sT)^{-1} = 1 + sT + (1/2)\n s^2T^2 + (1/4) s^3T^3\n + \\cdots$$  with $$z = e^{sT} = 1 + sT + (1/2) s^2T^2 + (1/6) s^3T^3 + \\cdots$$\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s20}\n  \\heading{Forward Rectangular Approximation}\n\n  \\begin{eqnarray*}\n     s &=& \\frac{z-1}{T} \\\\\n     z &=& 1 + sT\n  \\end{eqnarray*}\n  compare with $$z = e^{sT} = 1 + sT + (1/2) s^2T^2 + (1/6) s^3T^3 + \\cdots$$\n\\end{slide}\n\n\\begin{slide}\\label{slide:l11s21}\n  \\heading{Backward Rectangular Approximation}\n\n  \\begin{eqnarray*}\n     s &=& \\frac{z-1}{Tz} \\\\\n     z &=& \\frac{1}{1 + sT}\n  \\end{eqnarray*}\n  compare expansion $$z=(1-sT)^{-1} = 1 + sT + s^2T^2 + s^3T^3\n + \\cdots$$  with $$z = e^{sT} = 1 + sT + (1/2) s^2T^2 + (1/6) s^3T^3 + \\cdots$$\n\\end{slide}\n\n\nThe approximation shown in SLIDE\\sref{slide:l11s22}) is\nis known as ``\\emph{Tustin's Bilinear Transformation}''.\n\n\\begin{slide}\\label{slide:l11s23}\n  \\heading{Example 2}\n  If \\[H(s) = \\frac{a}{s+a},\\] determine the equivalent $H(z)$ 
using Tustin's\n  bilinear transformation.\n\\end{slide}\n\n\\subsubsection*{Solution}\nTustin's bilinear transformation gives a\n digital system with transfer function\n \\begin{eqnarray*}\n   H(z)&=& \\frac{a}{\\frac{2}{T}\\,\\frac{z-1}{z+1}+a} \\\\\n       &=& \\frac{\\left(\\frac{(aT)/2}{1+(aT)/2}\\right)(z+1)}{z-\\left(\\frac{1-(aT)/2}{1+(aT)/2}\\right)}.\n \\end{eqnarray*}\n\n\\textbf{Matlab note}: the bilinear transform equivalent is obtained by passing the argument \\verb|'tustin'| to the \\emph{c2d} method:\n\\begin{verbatim}\nlti_d = c2d(lti, Ts, 'tustin');\n\\end{verbatim}\n\n\\subsection*{Matched pole-zero approximations}\n\nAnother way to obtain a digital approximation of a continuous transfer function\nis to use the relationship $z = e^{sT}$ to map poles and zeros of the continuous\ntransfer function into poles and zeros in the z-transfer function.\n\nSince continuous transfer functions often have more poles than zeros,\nthat is $n-m$ zeros at infinity, zeros at infinity are replaced in the z-transfer\nfunction by zeros at $z = -1$ which is equivalent to half the sampling frequency\n$\\omega_s/2$ (i.e. the highest frequency possible in the z-domain).\nThus if $n-m =1$ we add $(1+z^{-1})$, if $n-m=2$, $(1+z^{-1})^2$, etc.\n\n\\begin{slide}\\label{slides:l11s24}\n\t\\heading{Matched Pole-Zero Method}\n\tThe idea is that all poles and zeros of the continuous transfer function $D(s)$ can\n  become poles and zeros of the digital transfer function $D(z)$ if the mapping\n  $z=e^{sT}$ is used.\n\n\tMethod:\n\t\\begin{enumerate}\n\t\t\\item Map finite poles and zeros of $D(s)$ to poles and zeros of $D(z)$ according to $z=e^{sT}$.\n\t\t\\item Add zeros at $z=-1$ for each infinite zero in $D(s)$.\n\t\t\\item Match DC or low-frequency gain.\n\t\\end{enumerate}\n\tSee examples in the notes.\n\\end{slide}\n\n\\begin{slide}\n\\heading{Example 3}\nUse the Matched-Pole-Zero (MPZ) approximation to give the z-transfer function equivalent to\n$$D(s)=\\frac{s+a}{s+b}.$$\n\\end{slide}\n\\subsubsection*{Solution}\nThe orders of the numerator and denominator are equal, so there are no infinite zeros, and\nby matching poles and zeros\n$$D(z)=k\\frac{(1-e^{-aT}z^{-1})}{(1 - e^{-bT}z^{-1})}.$$\nFor $D(s)$ the DC gain is $D(s)|_{s=0} = a/b$.\nThe DC gain for $D(z)$ is $D(z)|_{z=1}$ (from the final value theorem), that is\n$$k\\frac{(1-e^{-aT})}{(1-e^{-bT})} = \\frac{a}{b}.$$\nWe choose $k$ so that the DC gains match, i.e.\n$$k=\\frac{a(1-e^{-bT})}{b(1-e^{-aT})}.$$\n\n\\begin{slide}\n\\heading{Example 4}\n\nUse the Matched-Pole-Zero (MPZ) approximation to give the z-transfer function equivalent to\n$$D(s)=\\frac{s+a}{s(s+b)}.$$\n\\end{slide}\n\n\\subsubsection*{Solution}\nHere, the order of the denominator is one greater than that of the numerator, so there is\n$n-m = 2 - 1 = 1$ infinite zero. Placing this zero at $z = -1$ makes the MPZ transfer\nfunction\n$$D(z)=k\\frac{(1+z^{-1})(1-e^{-aT}z^{-1})}{(1-z^{-1})(1 - e^{-bT}z^{-1})}.$$\nAs $D(s)$ is type 1, we can't use the value of $D(0)$ to compute the DC gain.\nInstead, let us compute the gain at $s = -1$:\n\\begin{eqnarray*}\n  D(-1) &=& \\left.\\frac{s + a}{s(s + b)}\\right|_{s = -1} \\\\\n   &=& \\frac{-1 + a}{(-1)(-1 + b)} = \\frac{a - 1}{1 - b}.\n\\end{eqnarray*}\nThe equivalent of $s=-1$ in the $z$-plane is $z=e^{-T}$, and evaluating $D(z)$ at this point gives\n\\begin{eqnarray*}\n  k\\left.
\\frac{(1+z^{-1})(1-e^{-aT}z^{-1})}{(1-z^{-1})(1-e^{-bT}z^{-1})}\\right|_{z=e^{-T}} &=& k\\frac{(1+e^{T})(1-e^{-aT}e^{T})}{(1-e^{T})(1-e^{-bT}e^{T})} \\\\\n   &=& k\\frac{\\left(1+e^{T}\\right)\\left(1-e^{-(a-1)T}\\right)}{\\left(1-e^{T}\\right)\\left(1-e^{-(b-1)T}\\right)}.\n\\end{eqnarray*}\n\nAgain, we choose $k$ so that the equivalent gains match, i.e.\n$$k = \\left(\\frac{a-1}{1-b}\\right)\\frac{\\left(1-e^{T}\\right)\\left(1-e^{-(b-1)T}\\right)}{\\left(1+e^{T}\\right)\\left(1-e^{-(a-1)T}\\right)}.$$\n\nWe would implement $D(z)$ as a \\emph{difference equation} defined in terms of the\nsampled inputs and outputs as shown in SLIDE~\\ref{slides:l11s25} for Example 3.\n\\begin{slide}\\label{slides:l11s25}\n\t\\heading{Implementation of an MPZ Approximation}\n\tIf $$\\frac{Y(z)}{U(z)}=D(z)=k\\frac{1-\\alpha z^{-1}}{1-\\beta z^{-1}}$$ then the digital implementation will be\n\t$$y(n)=\\beta y(n-1)+k(u(n)-\\alpha u(n-1))$$\n\t$y(n)$ is the current calculated value, $y(n-1)$ is the previous calculated value, $u(n)$ is the current sample, and $u(n-1)$ is the previous sample.\n\n\tThe implementation only works if the computation delay is much smaller than the significant dynamics of the sampled system, i.e.\\ a small fraction of the sampling time $T$.\n\\end{slide}\n\nIn the implementation, you should notice that it is physically impossible\nto sample $u(t)$, compute $y(n)$ and output $y(n)$ all at the same instant of time,\nso the equation above cannot be implemented exactly. However, if the\ncomputation is sufficiently fast, the delay between sampling $u(t)$ and outputting\n$y(t)$ will be small and can often be neglected.\nA rule of thumb is that the computation delay should be less than $t_r/20$, where $t_r$ is the rise time of the dominant pole.\nIt should certainly be a small fraction of the sampling period $T$.\n\n\\begin{slide}\\label{slides:l11s26}\n\t\\heading{Modified MPZ}\n\t\\begin{itemize}\n\t\\item\n\tUsed if the constraint on computation time cannot be met\\footnote{Which these days is unlikely.}.\n\n\t\\item Allow a zero at infinity in $D(z)$ so that the order of the numerator is one\n  less than that of the denominator.\n\n\t\\item This ensures that only past values of $u$ and $y$ appear in the implementation\n  equation and a whole sample period is available for computation.\n\n\\end{itemize}\n\\end{slide}\n\n\\begin{slide}\\heading{Example 5}\n\nRe-implement Example 4 using the Modified MPZ method.\n\\end{slide}\n\\subsubsection*{Solution}\nIf we allow an infinite zero,\n$$D(s)=\\frac{s+a}{s(s+b)}$$ becomes\n$$D(z)=k\\frac{z^{-1}(1-e^{-aT}z^{-1})}{(1-z^{-1})(1-e^{-bT}z^{-1})}$$ and, matching gains at $z=e^{-T}$ as before,\n$$k = \\left(\\frac{a-1}{1-b}\\right)\\frac{\\left(1-e^{T}\\right)\\left(1-e^{-(b-1)T}\\right)}{e^{T}\\left(1-e^{-(a-1)T}\\right)}.$$\n\nThe implementation is now\n\n$$y(n) = (1 + e^{-bT})y(n-1)-e^{-bT}y(n-2)+k(u(n-1)-e^{-aT}u(n-2))$$\n\nand now the computation for $y(n)$ is performed only on past values of $u(n)$ and\n$y(n)$ and a whole sample period is available for computation.\n\n\\textbf{Matlab note}: the matched pole-zero equivalent of an LTI system is obtained by passing \\verb|'matched'| as the third argument to \\emph{c2d}:\n\\begin{verbatim}\nlti_d = c2d(lti, Ts, 'matched');\n\\end{verbatim}\nThere isn't a built-in method that returns the \\emph{modified} matched pole-zero equivalent.\n\n\n%----------------------------------------------------------------\n% The end of notes\n% ----------------------------------------------------------------\n\\endinput\n\n% Local Variables:\n% TeX-master: \"lecture03\"\n% 
End:\n", "meta": {"hexsha": "63eeeaf0d0ae0d7b6072285bfebe030ad2a12c87", "size": 15310, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DigitalSystems/Lecture12/notes.tex", "max_stars_repo_name": "cpjobling/EGLM03-Resources", "max_stars_repo_head_hexsha": "70e5fd7b3e519cc3f327f348631b800d361bbb27", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DigitalSystems/Lecture12/notes.tex", "max_issues_repo_name": "cpjobling/EGLM03-Resources", "max_issues_repo_head_hexsha": "70e5fd7b3e519cc3f327f348631b800d361bbb27", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DigitalSystems/Lecture12/notes.tex", "max_forks_repo_name": "cpjobling/EGLM03-Resources", "max_forks_repo_head_hexsha": "70e5fd7b3e519cc3f327f348631b800d361bbb27", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1385281385, "max_line_length": 206, "alphanum_fraction": 0.6696930111, "num_tokens": 5118, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5724188286118366}}
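As a quick numerical cross-check of Examples 1 and 2 (an illustrative sketch, not part of the original notes), the following Python fragment simulates the unit-step responses of the hold and Tustin equivalents of $H(s)=a/(s+a)$ derived above; the values $a=2$ and $T=0.1$ are arbitrary choices for the demonstration.
\begin{verbatim}
# Python sketch; a = 2 and T = 0.1 are assumed example values.
import math

a, T, n = 2.0, 0.1, 50
p = math.exp(-a * T)

# Continuous step response sampled at t = kT: y(t) = 1 - exp(-a*t).
y_cont = [1 - math.exp(-a * k * T) for k in range(n)]

# ZOH equivalent H(z) = (1-p)/(z-p), i.e. y[k] = p*y[k-1] + (1-p)*u[k-1].
y_zoh = [0.0]
for k in range(1, n):
    y_zoh.append(p * y_zoh[-1] + (1 - p))

# Tustin equivalent H(z) = c(z+1)/(z-d), i.e. y[k] = d*y[k-1] + c*(u[k]+u[k-1]).
c = (a * T / 2) / (1 + a * T / 2)
d = (1 - a * T / 2) / (1 + a * T / 2)
y_tus = [c]                      # y[0] = c*u[0] for a unit step
for k in range(1, n):
    y_tus.append(d * y_tus[-1] + 2 * c)

for k in (1, 10, 49):
    print(k * T, y_cont[k], y_zoh[k], y_tus[k])
\end{verbatim}
The hold-equivalent samples agree with the continuous response exactly here, because a step input is constant over each sampling interval; the Tustin samples are close but not exact.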
{"text": "\\section{Overview}\n\n%%%%%%%%%%\n\\begin{frame}{Dynamic Programming}\n  Q: What is DP?\n  \\begin{itemize}\n    \\item A: Smart scheduling of subproblems.\n  \\end{itemize}\n\n  \\vspace{0.20cm}\n  Q: What does DP look like?\n  \\begin{enumerate}\n    \\item Define subproblems (\\textcolor{red}{types})\n    \\item Set the goal: what is the solution to the original problem\n    \\item Define recurrence: (\\textcolor{red}{ask the right questions} $\\Rightarrow$ reduce to subproblems)\n      \\begin{itemize}\n\t\\item larger problem $\\Leftarrow$ a \\textcolor{blue}{number} of ``smaller'' subproblems\n      \\end{itemize}\n    \\item Write pseudo-code (\\textcolor{red}{fill the array/table/matrix} in order) \n    \\item Analyze time complexity\n    \\item Extract optimal solutions\n  \\end{enumerate}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Common subproblems}\n  \\begin{enumerate}\n    \\item 1-D subproblems \n      \\begin{itemize}\n\t\\item input: $x_{1}, x_{2}, \\cdots, x_{n}$ (array, sequence, string)\n\t\\item subproblems: $x_{1}, x_{2}, \\cdots, x_{i}$ (prefix\\textcolor{gray}{/postfix})\n\t\\item $\\# = O(n)$\n\t\\item examples: weighted interval scheduling, max-sum subarray, breaking into lines\n      \\end{itemize}\n    \\item 2-D subproblems\n      \\begin{enumerate}\n\t\\item input: $x_{1}, x_{2}, \\cdots, x_{m}; y_{1}, y_{2}, \\cdots, y_{n}$\n\t\\begin{itemize}\n\t  \\item subproblems: $x_{1}, x_{2}, \\cdots, x_{i}; y_{1}, y_{2}, \\cdots, y_{j}$\n\t  \\item $\\# = O(mn)$\n\t  \\item examples: edit distance, LCS\n\t\\end{itemize}\n\t\\item input: $x_{1}, x_{2}, \\cdots, x_{n}$\n\t\\begin{itemize}\n\t  \\item subproblems: $x_{i}, \\cdots, x_{j}$\n\t  \\item $\\# = (n^{2})$\n\t  \\item examples: multiplying a sequence of matrices, optimal binary search tree\n\t\\end{itemize}\n      \\end{enumerate}\n  \\end{enumerate}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{Common subproblems}\n  \\begin{enumerate}\n    \\setcounter{enumi}{2}\n    \\item 3-D subproblems:\n      \\begin{itemize}\n\t\\item example: Floyd-Warshall algorithm, Bellman-Ford algorithm\n      \\end{itemize}\n    \\item DP on graphs (tree, DAG $\\ldots$)\n      \\begin{itemize}\n\t\\item input: rooted tree\n\t\\item subproblems: rooted subtree\n      \\end{itemize}\n    \\item knapsack problem\n      \\begin{itemize}\n\t\\item example: changing coins\n      \\end{itemize}\n    \\item \\textcolor{red}{others $\\ldots$}\n  \\end{enumerate}\n\\end{frame}\n%%%%%%%%%%\n", "meta": {"hexsha": "5625002f535019b9423384956a1a8c2807770e44", "size": 2302, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/overview.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/overview.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/overview.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 31.9722222222, "max_line_length": 107, "alphanum_fraction": 0.6476976542, "num_tokens": 750, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5724188286118366}}
{"text": "\\chapter{TD Prediction}\n\n\\section{Summary}\n\n\\subsection{TD prediction}\n\nThe basic formula for monte carlo prediction is $V(S_t)=V(S_t)+\\alpha [G_t - V(S_t)]$. $G_t$ is the final result, this means that the update only can happen at the end of the simulation. By replacing $G_t$ with $R_{t+1} V(S_{t+1})$ we get the TD method $V(S_t) := V(S_t)+\\alpha [R_{t+1} + \\gamma V(S_{t+1}) - V(S_t)]$.\n\nThe update of the TD method is called the \\textbf{TD error} $\\delta = G_t - R_{t+1} V(S_{t+1})$. An equivalent entity exists with Monte-Carlo methods, and is called the Monte-Carlo error. The monte carlo error can be written as a sum of TD errors, illustrated by equation \\ref{eq:monte carlo error is a sum of td errors}. (proof on page 121)\n\n\\begin{equation}\nG_t - V(S_t) = \\sum_{k=t}^{T-1} \\gamma^{k-t} \\delta_k\n\\label{eq:monte carlo error is a sum of td errors}\n\\end{equation}\n\n\\subsection{TD Advantages}\n\n\\begin{enumerate}\n\t\\item No model of the behavior is required\n\t\\item Naturally online/incremental algorithm (useful with long episodes)\n\t\\item Learns from experimental choices (monte carlo need to discard them)\n\t\\item In practice faster then monte carlo methods\n\\end{enumerate}\n\n\\subsection{Optimality of TD(0)}\nWhen using batch learning, as in only changing the value function everytime a whole batch is processes. TD(0) and Monte Carlo do not converge to the same solution. Monte Carlo methods finds the solution that minimized the error on the dataset. TD(0) finds the parameters that most like would cause a markov process to result in the dataset. This is called the certainty-equivalence estimate.\n\n\\subsection{SARSA}\nSARSA stands for $S_t,A_t,R_{t+1},S_{t+1}, A_{t+1}$. It uses an policy to generate $A_{t}$ and $A_{t+1}$. Updates the Q value, applies $A_{t+1}$ and then finds the next input $A_{t+2}$. \n\n\\begin{equation}\nQ(S_t, A_t) := Q(S_t, A_t) + \\alpha [R_{t+1} + \\gamma Q(S_{t+1}, A_{t+1})-Q(S_t, A_t)]\n\\end{equation}\n\n\\subsection{Q-Learning}\nQ-learning acts greedily in when predicting, but acts according to it's policy when finding an input to apply to the system. So in contrast to SARSA it won't reuse $A_{t+1}$ it generated when predicting.\n\n\\begin{equation}\nQ(S_t, A_t) := Q(S_t, A_t) + \\alpha [R_{t+1} + \\gamma \\max_a Q(S_{t+1},a) - Q(S_t, A_t)]\n\\label{eq:Q learning update}\n\\end{equation}\n\n\\subsection{Difference between SARSA and Q-Learning}\nSARSA will act a bit more carefull, as it's prediction is not greedy. And it takes into account that the next action might not be the best one. Q-Learning will take the more risky route, as it uses the best(according to Q(S, A)) possible action in it's prediction.\n\n\\subsection{Expected Sarsa}\nExpected SARSA uses the expected value of all possible actions $A_{t+1}$ given the policy. Then it uses a greedy policy to act, just like with Q-learning. Expected Sarsa will work with $\\alpha=1$, which would not work very will with classical SARSA. This makes the short term behavior much better. But is more computational expensive.\n\n\\begin{equation}\nQ(S_t, A_t) := Q(S_t, A_t) + \\alpha [R_{t+1} + \\gamma \\EX[Q(A_{t+1}, S_{t+1})|S_{t+1}] - Q(s_t, A_t)]\n\\label{eq:expected sarsa update rule}\n\\end{equation}\n\n\\subsection{Double learning}\nEquation~\\ref{eq:Q learning update} uses an argmax to estimate the value of Q. If one of these estimates is over-estimated, it will result in bad behavior(bias). 
Double learning reduces the odds of this happening by using two $Q(A,S)$ estimates. One to find the maximum action, and one to estimate it's value.(equation~\\ref{eq:estimation double learning}) It's less like that the overestimate will happen this way. \n\n\\begin{equation}\nA = Q_2(\\argmax_a Q1(S,a))\n\\label{eq:estimation double learning}\n\\end{equation}\n \n It's good practice to swap $Q_1$ and $Q_2$ in equation~\\ref{eq:estimation double learning} constantly. For example at random with odds 50/50.\n \n\\section{Exercises}\n\n\\subsection{Exercise 6.1}\n\n\\begin{equation}\nV_{t+1}(s_{t}) = \\alpha [R_{t+1} + \\gamma V_t(s_{t+1})-V_t(s_t)] + V_t(s_t)\n\\label{eq:difference value function update}\n\\end{equation}\n\nThe difference between the value function at time t and t+1 is defined by equation~\\ref{eq:difference value function update}.\n\nThe equality $G_t = R_{t+1} + \\gamma G_{t+1}$ still holds. However the monte carlo error is slightly different in every iteration. $G_t - V_t(s_t)$ becomes $G_{t+1} - V_{t+1}(s_{t+1})$ in the next iteration. As the value function now changes at iteration t, with a difference of  $d_t = \\alpha [R_{t+1} + \\gamma V_t(s_{t+1})-V_t(s_t)]$.\n\n\\begin{equation}\nG_{t+1} - V_t(S_{t+1}) = G_{t+1} - V_{t+1}(S_{t+1})-d_{t+1}\n\\label{eq:single iteration difference}\n\\end{equation}\n\n\\begin{equation}\nerror = -\\sum_{k=t+1}^{T-1} \\gamma^{k-t} d_{k-1}\n\\label{eq:ex_6_1_difference}\n\\end{equation}\n\nIn conclusion the different factor is equation~\\ref{eq:ex_6_1_difference}.\n\n\\subsection{Exercise 6.2}\nIf (as explained in the example of the hint) a part of the statespace is already well estimated. Then the TD prediction will be very good as you enter those states and if your path ends on one of those states. So you only have lesser predictions while in an unexplored part.\n\nThe Monte Carlo approach would still need to evaluate through the already well estimated part. Which is rather slow.\n\n\\subsection{Exercise 6.3}\nThe change on a value function is defined by:  $\\alpha [R_{t+1} + \\gamma V_t(s_{t+1})-V_t(s_t)] = 0.1[0 + 0 - 0.5]=-0.05$ if $V_t(s_{+1}) = 0$ so it ends on the left terminal state. And $\\alpha=0.1$ and $V_t(A)=0.5$.\n\n\\subsection{Exercise 6.4}\nThe TD algo is over-fitting when $\\alpha>0.05$ we could try to make it a bit smaller. But at $\\alpha=0.05$ it seems to flatten out nicely, so I would not expect better results.\n\nA similar story with the MC method, this time at $\\alpha0.02$ we get a nice flat tail. It's not as clear as with the TD method, but that's due the larger variance on the MC method.\n\nSo no, I would not expect any changes in results if more samples were ran with different values for $\\alpha$.\n\n\\subsection{Exercise 6.5}\nOverfitting, the step is too large so TD cannot find the optimal values. But keeps over/under estimating every time it runs through an episode.\n\n\\subsection{Exercise 6.6}\nYou setup the bellman optionality equation, and the pick a method to solve it. As this is a rather simple example, you could just manually solve the equation.\n\n\\begin{equation}\n\\begin{split}\nV(A) = 0.5 V(B)\\\\\nV(B) = 0.5 V(A) + 0.5 V(C)\\\\\nV(C) = 0.5 V(B) + 0.5 V(D)\\\\\nV(D) = 0.5 V(C) + 0.5 V(E)\\\\\nV(E) = 0.5 V(D) + 0.5\\\\\n\\end{split}\n\\end{equation}\n\nThis seems like the simplest way to do it, as it's small.\n\n\\subsection{Exercise 6.7}\nThe normal on-policy TD(0) update looks like $V(s_t) = V(S_t) + \\alpha[R_{t+1}+\\gamma V(S_{t+1}) - V(S_t)]$. 
I would expect that $\\alpha=\\frac{\\rho}{\\sum_t \\rho_t }$ as it becomes a weighted average due too the importance sampling.\n\n\\subsection{Exercise 6.8}\ntodo, not hard, but a bit of bookkeeping to be done.\n\n\\subsection{Exercise 6.11}\nIn Q-learning the actions that are applied to the system are learning through a $\\epsilon$-greedy policy(behavior policy) are not used for the prediction(Q). This is by definition an off-policy control.\n\n\\subsection{Exercise 6.12}\nIt would be nearly the same, SARSA selects the next action before updating Q and Q-learning selects it after. So the update of Q might make a difference in some cases.\n\n\\subsection{Exercise 6.13}\ntodo\n\n\\subsection{Exercise 6.14}\ntodo", "meta": {"hexsha": "8f84fcb0a41833a8b92b2625392f882219bd7c6f", "size": 7439, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RL/notes/TeX_files/chapter06.tex", "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RL/notes/TeX_files/chapter06.tex", "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RL/notes/TeX_files/chapter06.tex", "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.9057971014, "max_line_length": 415, "alphanum_fraction": 0.7276515661, "num_tokens": 2289, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.793105953629227, "lm_q1q2_score": 0.5724188244122783}}
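For concreteness (an illustrative sketch, not from the original notes; the helper name td0_update and the toy values are assumptions for the example), the TD(0) update of the summary can be written as a few lines of tabular code.
\begin{verbatim}
# One tabular TD(0) update:
#   V(S_t) := V(S_t) + alpha * (R_{t+1} + gamma * V(S_{t+1}) - V(S_t)).
def td0_update(V, s, r, s_next, alpha=0.1, gamma=1.0, terminal=False):
    target = r + (0.0 if terminal else gamma * V[s_next])
    td_error = target - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy usage on the five-state random walk, V initialised to 0.5.
V = {s: 0.5 for s in "ABCDE"}
td0_update(V, "A", 0.0, None, terminal=True)  # episode ended on the left
print(V["A"])  # 0.45, matching the calculation in Exercise 6.3
\end{verbatim}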
{"text": "%2multibyte Version: 5.50.0.2960 CodePage: 1252\n\n\\documentclass[11pt]{article}\n\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{lineno}\n\\usepackage{fancybox}\n\\usepackage{fancyhdr}\n\\usepackage{rotating}\n\\usepackage{sectsty}\n\n\\setcounter{MaxMatrixCols}{10}\n\n\\begin{document}\n\n\\section{Appendix}\n\nLetting $\\eta _{t}^{s}$ , $\\eta _{t}^{k}$ and $\\lambda _{t}$ be the\nLagrangian multipliers on equations (\\ref{3}), (\\ref{4}) and (\\ref{6})\nrespectively, the first order conditions are given by:\n\n\\begin{equation}\nc_{t}:[c_{t}-\\varkappa c_{t-1}]^{-\\gamma }-\\varkappa \\beta (1+\\mathfrak{g}%\n)[c_{t+1}-\\varkappa c_{t}]^{-\\gamma }=\\lambda _{t},  \\label{A1}\n\\end{equation}\n\n\\begin{equation}\ng_{t}:[g_{t}-\\varkappa g_{t-1}]^{-\\gamma }-\\varkappa \\beta (1+\\mathfrak{g}%\n)[g_{t+1}-\\varkappa g_{t}]^{-\\gamma }=\\lambda _{t},\n\\end{equation}\n\n\\begin{equation}\ni_{t}^{k}:e_{k}\\eta _{t}^{k}=\\lambda _{t},  \\label{A3}\n\\end{equation}\n\n\\begin{equation}\ni_{t}^{s}:e_{s}\\eta _{t}^{s}=\\lambda _{t},  \\label{A4}\n\\end{equation}\n\n\\begin{equation}\nd_{t}:\\lambda _{t}=\\beta \\lambda _{t+1}\\left\\{ \n\\begin{array}{c}\n1+r^{\\ast }+\\frac{\\pi }{\\rho _{1}^{2}}\\left[ \\exp \\left[ \\rho _{1}(d_{t}-%\n\\bar{d}-\\psi V_{t})\\right] -\\rho _{2}(d_{t}-\\overline{d}-\\psi V_{t})-\\rho\n_{3}\\right] \\\\ \n+\\frac{\\pi }{\\rho _{1}^{2}}\\left[ \\rho _{1}\\exp \\left[ \\rho _{1}(d_{t}-\\bar{d%\n}-\\psi V_{t})\\right] -\\rho _{2}\\right] d_{t}%\n\\end{array}%\n\\right\\} ,  \\label{A5}\n\\end{equation}\n\n\\begin{equation}\nk_{t}:\\lambda _{t}=\\beta (1+\\mathfrak{g})\\lambda _{t+1}\\frac{\\left[\ne_{k}\\theta _{k}\\frac{y_{t+1}^{n}}{k_{t}}+(1-\\delta _{k})-e_{k}\\frac{\\phi\n_{k}}{2}\\left( \\frac{k_{t+1}}{k_{t}}-1\\right) ^{2}+e_{k}\\phi _{k}\\left( \n\\frac{k_{t+1}}{k_{t}}-1\\right) \\frac{k_{t+1}}{k_{t}}\\right] }{\\left[ (1+%\n\\mathfrak{g})+e_{k}\\phi _{k}\\left( \\frac{k_{t}}{k_{t-1}}-1\\right) \\right] },\n\\label{A6}\n\\end{equation}\n\n\\begin{equation}\ns_{t}:\\lambda _{t}=\\beta (1+\\mathfrak{g})\\lambda _{t+1}\\frac{\\left[\ne_{s}\\theta _{s}\\frac{y_{t+1}^{n}}{s_{t}}+(1-\\delta _{s})-e_{s}\\frac{\\phi\n_{s}}{2}\\left( \\frac{s_{t+1}}{s_{t}}-1\\right) ^{2}+e_{s}\\phi _{s}\\left( \n\\frac{s_{t+1}}{s_{t}}-1\\right) \\frac{s_{t+1}}{s_{t}}\\right] }{\\left[ (1+%\n\\mathfrak{g})+e_{s}\\phi _{s}\\left( \\frac{s_{t}}{s_{t-1}}-1\\right) \\right] },\n\\label{A7}\n\\end{equation}\n\n\\begin{equation}\n\\eta _{t}^{s}:(1+\\mathfrak{g})s_{t+1}=e_{s}i_{t}^{s}+(1-\\delta _{s})s_{t},\n\\label{A8}\n\\end{equation}\n\n\\begin{equation}\n\\eta _{t}^{k}:(1+\\mathfrak{g})k_{t+1}=e_{k}i_{t}^{k}+(1-\\delta _{k})k_{t},\n\\label{A9}\n\\end{equation}\n\n\\begin{equation}\n\\lambda _{t}:(1+\\mathfrak{g}%\n)d_{t}=(1+r_{t-1})d_{t-1}+c_{t}+i_{t}^{k}+AC_{t}^{k}+g_{t}+i_{t}^{s}+AC_{t}^{s}-y_{t}^{n}-y_{t}^{o}-T_{t}.\n\\label{A10}\n\\end{equation}\n\nAt steady-state, we find that:\n\n\\begin{equation}\n\\lbrack c(1-\\varkappa )]^{-\\gamma }[1-\\varkappa \\beta (1+\\mathfrak{g}%\n)]=\\lambda ,  \\label{SS1}\n\\end{equation}\n\n\\begin{equation}\n\\kappa \\lbrack g(1-\\varkappa )]^{-\\gamma }[1-\\varkappa \\beta (1+\\mathfrak{g}%\n)]=\\lambda ,  \\label{SS2}\n\\end{equation}\n\n\\begin{equation}\ne_{k}\\eta ^{k}=\\lambda ,  \\label{SS3}\n\\end{equation}\n\n\\begin{equation}\ne_{s}\\eta ^{s}=\\lambda ,  \\label{SS4}\n\\end{equation}\n\n\\begin{equation}\n1=\\beta \\left[ 1+r^{\\ast }+\\frac{\\pi }{\\rho _{1}^{2}}[1-\\rho 
_{3}+(\\rho\n_{1}-\\rho _{2})d]\\right] ,  \\label{SS5}\n\\end{equation}\n\n\\begin{equation}\n1=\\beta \\left[ e_{k}\\theta _{k}\\frac{y^{n}}{k}+(1-\\delta _{k})\\right] ,\n\\label{SS6}\n\\end{equation}\n\n\\begin{equation}\n1=\\beta \\left[ e_{s}\\theta _{s}\\frac{y^{n}}{s}+(1-\\delta _{s})\\right] ,\n\\label{SS7}\n\\end{equation}\n\n\\begin{equation}\ns(\\mathfrak{g}+\\delta _{s})=e_{s}i^{s},  \\label{SS8}\n\\end{equation}\n\n\\begin{equation}\nk(\\mathfrak{g}+\\delta _{k})=e_{k}i^{k},  \\label{SS9}\n\\end{equation}\n\n\\begin{equation}\n(\\mathfrak{g}-r)d+y^{n}+y^{o}+T=c+i^{k}+g+i^{s}.  \\label{SS10}\n\\end{equation}\n\nCombining (\\ref{SS1}) and (\\ref{SS2}):\n\n\\begin{equation}\n\\kappa =\\left( \\frac{g}{c}\\right) ^{\\gamma }  \\label{SS11}\n\\end{equation}\n\n\\end{document}\n", "meta": {"hexsha": "40d2627bc27894a2f5196572c2cc8b8a143aeb66", "size": 3926, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "utils/landing_pages/CANorm_Appendix_equations.tex", "max_stars_repo_name": "open-macro/open-macro-io", "max_stars_repo_head_hexsha": "7e734636a4afc9f169a9d7a3da682863dfe4135f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-01-15T03:56:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-28T17:37:45.000Z", "max_issues_repo_path": "utils/landing_pages/CANorm_Appendix_equations.tex", "max_issues_repo_name": "open-macro/open-macro-io", "max_issues_repo_head_hexsha": "7e734636a4afc9f169a9d7a3da682863dfe4135f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "utils/landing_pages/CANorm_Appendix_equations.tex", "max_forks_repo_name": "open-macro/open-macro-io", "max_forks_repo_head_hexsha": "7e734636a4afc9f169a9d7a3da682863dfe4135f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6478873239, "max_line_length": 106, "alphanum_fraction": 0.6044319918, "num_tokens": 1882, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7310585903489892, "lm_q1q2_score": 0.57217212858238}}
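For clarity (an added one-line check, using only (\ref{SS1}) and (\ref{SS2})): dividing (\ref{SS2}) by (\ref{SS1}), the common factor $(1-\varkappa )^{-\gamma }[1-\varkappa \beta (1+\mathfrak{g})]$ cancels, leaving
\begin{equation*}
\kappa \left( \frac{g}{c}\right) ^{-\gamma }=1\quad \Longrightarrow \quad
\kappa =\left( \frac{g}{c}\right) ^{\gamma }.
\end{equation*}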
{"text": "\\section{An Enumerative Synthesizer} \\label{sec:optimizations}\n%\nTo test our ideas in practice we implemented a simple\nenumerative synthesizer.\n%\nIn the remainder of this section we detail some of the\nchallenges and optimizations we implemented.\n\n\\subsection{Beta-Normal Form Enumeration}\n%\nThe synthesizer enumerates terms in beta-normal form.\n%\nThis is probably the most important optimization as it\nlargely reduces the search space.\n%\nMore specifically, our implementation makes a bottom up\nenumeration of the expressions in the following grammar:\n\n\\begin{align*}\n  e & ::= \\lambda x. e ~|~ \\lambda x. t \\\\\n  t & ::= x ~|~ t\\;t ~|~ t\\;e\n\\end{align*}\n\n\\subsection{Number of Function Arguments}\nOne minor optimization that we were able to implement was to require\n  functions to be enumerated in a form such that the first $n$ non-terminals\n  were lambdas where $n$ is the number of arguments to the function.\nFor example, the function $\\f{add}$ which takes two arguments must have the\n  form $\\lambda x. \\lambda y. E$ where $E$ is the rest of the expression.\nAlthough this is a minor optimization, it allowed us to effectively decrease\n  the depth of our search space by the number of arguments to a function, and\n  we would not have been able to synthesize $\\f{add}$ without it.\n\n\\subsection{Enumerating Closed Terms}\n%\nOne additional challenge in generating lambda expressions\nlies in generating only closed terms; terms with no free\nvariables.\n%\nA nice trick we found makes this easier is to generate\nvariables in a separate step.\n%\nThat is, we first enumerate lambda expressions were we leave\na ``hole'' in the place a variable should go and then fill\nthose holes with variables in scope which is easier once the\nrest of the term is fixed.\n\nWe also found it easier to work with De Bruijn indices.\n%\nWith explicit names, it becomes necessary for the\nsynthesizer to have some sort of variable context that it\npasses around when generating variable terms, both so that\nit knows what names to gives new bindings, as well as to\nknow which bindings it has available.\n%\nPassing through such a context introduces a lot of\ncomplexity to the enumeration itself.\n%\nUsing De Bruijn indices has the added benefit of removing\nalpha renaming and alpha equivalence from our expression\nevaluation.\n\n  % In order to deal with this challenge, we exclusively use De\n  % Bruijn indices to reference our variables.\n  % This simplifies our enumeration greatly, as we no longer need to pass through\n  % a context to know which variable names are bound, instead we can just keep\n  % track of the depth of the expression.\n", "meta": {"hexsha": "469235400665e826b44e45a4b22cef660d8c8256", "size": 2602, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/optimizations.tex", "max_stars_repo_name": "DavidThien/elsa", "max_stars_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/optimizations.tex", "max_issues_repo_name": "DavidThien/elsa", "max_issues_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/optimizations.tex", "max_forks_repo_name": 
"DavidThien/elsa", "max_forks_repo_head_hexsha": "2bf97839d1fc210d12dac919b34ec4e0143f80e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2647058824, "max_line_length": 81, "alphanum_fraction": 0.7751729439, "num_tokens": 597, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624688140728, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5721721211702757}}
{"text": "\n\\subsection{The function \\stackpush}\n\\Label{sec:stackpush}\n\\Label{sec:stackpushwd}\n\nThe following listing shows the specification of the function \\stackpush.\nIn accordance with Axiom~\\eqref{eq:stack-size-push}, \\stackpush is supposed\nto increase the number of elements of a non-full stack by one.\nThe specification also demands that the value that is pushed on a\nnon-full stack becomes the top element of the resulting stack (see \nAxiom~\\eqref{eq:stack-top-push}).\n\n\\input{Listings/stack_push.h.tex}\n\nThe implementation of \\stackpush is shown in the next listing.\nIt checks whether its argument is a non-full stack in which case it\nincreases the field \\inl{size} by one but only after it has assigned\nthe function argument to the element \\inl{obj[size]}.\n\n\\input{Listings/stack_push.c.tex}\n\n\\vfill\n\nThe following listing shows our formalization of the well-definition for \\stackpush.\n%\nThe function \\stackpush does not return a value but rather modifies its argument.\nFor the well-definition of \\stackpush we therefore check whether it\nturns equal stacks into equal stacks.\n\n\\input{Listings/stack_push_wd.c.tex}\n\nHowever, equality of the stack arguments is not sufficient for a proof\nthat \\stackpush is well-defined.\nWe must also ensure that there is no \\emph{aliasing} between the two stacks.\nOtherwise modifying one stack could modify the other stack in unexpected ways.\nIn order to express that there is no aliasing between two stacks, \nwe use the predicate \\logicref{StackSeparated}.\n\nIn order to achieve an automatic verification of \\implref{stackpushwd} we\nhave added the assertions \\inl{top} and \\equal and introduced the lemma\n\\logicref{StackPushEqual} in the following listing.\n\n\\input{Listings/StackLemmas.acsl.tex}\n\n", "meta": {"hexsha": "162861914b212236e0210c79e4a0081d41d741aa", "size": 1728, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/stack/stack_push.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/stack/stack_push.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/stack/stack_push.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 38.4, "max_line_length": 84, "alphanum_fraction": 0.796875, "num_tokens": 418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7826624738835052, "lm_q1q2_score": 0.5721721157043186}}
{"text": "\n\\subsection{Input layer normalisation}\n\nWe normalise the input layer.\n\nThis speeds up training, and makes regularisation saner.\n\n", "meta": {"hexsha": "cb80906e689606bfce638d946d13aec188c06e47", "size": 130, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/neuralNetworks/03-01-inputNorm.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/neuralNetworks/03-01-inputNorm.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/neuralNetworks/03-01-inputNorm.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 16.25, "max_line_length": 56, "alphanum_fraction": 0.8, "num_tokens": 28, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5721721148243659}}
{"text": "\\begin{comment}\n \n\n\n\\chapter{Reliability of Supervision Tree}\n\n\n\\section{Single Node Analysis}\n\nCase 0: Single Failure, No restart\n\\begin{equation}\nR(t) = \\int_{t}^{\\infty} f(x)dx\n\\end{equation}\n\\begin{equation}\nA(t) = \\frac{\\int_{0}^{t} xf(x)dx + t \\cdot R(t) }{t}  \n\\end{equation}\nCase I: Every $T$ time, spend $t_R$ time to restart\n\n\\begin{equation}\nA(t) = \\frac{ n(\\int_{0}^{T} x f(x) dx + T \\cdot R(T)) + \\int_{0}^{t_{tail}} x \nf(x) dx}{t}\n\\end{equation}\n\\hspace{7 cm} where  $n = \\left \\lfloor \\frac{t}{T+t_R} \\right \\rfloor$ and \n$t_{tail} = t-nT$\n\n\nCase II: When failed, spend $t_R$ time to restart\n\n\\begin{equation}\nA(t) = ?\n\\end{equation}\n\n\\hspace{7 cm} where  $W(t) = \\int_{0}^{t} xf(x)dx$\n\nCase III: When failed or running for $T$ time, spend $t_R$ time to restart\n\n\n$A(t) = ? $\n\n\\end{comment}\n", "meta": {"hexsha": "8182b274cdfbc614c9c24773b21aebfa76407a72", "size": 806, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "s1024484/Thesis/ReliabilitySupervision.tex", "max_stars_repo_name": "Jiansen/TAkka", "max_stars_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_stars_repo_licenses": ["BSD-Source-Code"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-09-11T14:35:53.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-27T06:36:09.000Z", "max_issues_repo_path": "s1024484/Thesis/ReliabilitySupervision.tex", "max_issues_repo_name": "Jiansen/TAkka", "max_issues_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_issues_repo_licenses": ["BSD-Source-Code"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "s1024484/Thesis/ReliabilitySupervision.tex", "max_forks_repo_name": "Jiansen/TAkka", "max_forks_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_forks_repo_licenses": ["BSD-Source-Code"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.6585365854, "max_line_length": 79, "alphanum_fraction": 0.6253101737, "num_tokens": 321, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032941988938414, "lm_q2_score": 0.6334102636778401, "lm_q1q2_score": 0.5721558167000115}}
{"text": "\\chapter{Localization at a prime}\\label{chap:prime}\n\nThis chapter contains results by my MRC group, consisting of Dan Christensen, Morgan Opie, Luis Scoccola, and me.\n\nIn this chapter we focus on localization with respect to the \\define{degree $k$\nmap} $\\degg(k) : \\sphere{1} \\to \\sphere{1}$, for $k : \\N$, and with respect to\nfamilies of such maps.\nThe degree $k$ map is defined by circle induction by mapping $\\base$ to $\\base$\nand $\\lloop$ to $\\lloop^k$.\nUsing suggestive language that mirrors the algebraic case, we call $L_{\\degg(k)}$\n``localization away from $k$'' and say that we are ``inverting $k$.''\nWe note that, for many applications, one considers localization away from $k$ for $k$ prime.\nHowever, for the results of this chapter, $k$ can be any natural number.\n\nAs might be expected, $\\degg(k)$-localizations can be combined to localize a type \\emph{at}\na prime $p$, by localizing with respect to the family of maps $\\degg(q)$ indexed by all\nprimes $q$ different from $p$.\nWith this case in mind, we study localization with respect to any family of degree $k$ maps\nindexed by a map $S : A \\to \\N$ for some type $A$.\nIn other words, we localize with respect to the family $\\degg\\circ S : A \\to \\sphere{1} \\to \\sphere{1}$.\nWe denote the family $\\degg\\circ S$ by $\\degg(S)$.\n\nThe main goal of this chapter is to show that, for simply connected types,\n$\\degg(S)$-localization localizes the homotopy groups away from $S$ (as long as $S$ indexed by $\\N$),\nas is true in the classical setting.\n\nWe begin in \\cref{ss:localizationofgroups} by introducin uniquely $k$-divisible groups, and their localizations in the category of groups/abelian groups. \\marginnote{edit}\nIt will follow from \\cref{lemma:localizationlocalizesfirst} that every group $G$ has a localization $G \\to G'$\naway from $S$, obtained by applying $\\pi_1$ to the $\\deg(S)$-localization of $K(G, 1)$.\nSimilarly, if $G$ is abelian, its localization in the category of abelian groups is obtained\nby applying $\\pi_2$ to the $\\deg(S)$-localization of $K(G,2)$.\nMoreover, by \\cref{theorem:localizationKgn}, these algebraic localizations agree\nwhen restricted to abelian groups.\nIn \\cref{ss:abelian}, we give an independent, purely algebraic construction of\nthe localization of an abelian group away from $S$ which also shows that the\ntwo localizations agree.\n\nIn \\cref{ss:n-conn}, we give characterizations of $\\degg(S)$-localization for highly connected types. In particular, we show that the localization of a simply connected type can be computed as the nullification with respect to $M_S$, the family of cofibers of the maps $\\degg(S(a))$, or as the localization with respect to $\\susp \\degg(S)$, the family of suspensions of these maps.\nWe prove that the $\\degg(S)$-localization of a pointed, $(n-1)$-connected type inverts $S$ in $\\pi_n$,\nand deduce from this that every group has a localization away from $S$.\n\nThese characterizations of $\\degg(S)$-localization are of interest to us for two reasons. First, the observation that $\\degg(S)$-localization can be computed via a nullification implies that, restricted to simply connected types, $\\degg(S)$-localization is a modality and therefore is better behaved than an arbitrary localization. 
Second, the fact that $\\degg(S)$-localization can be computed as $\\susp \\degg(S)$-localization allows us to deduce that, for simply connected types, $\\degg(S)$-localization commutes with taking loop spaces, a fact we use in the next section.\n\nIn \\cref{ss:localizingKgn}, we give a method for computing the $\\degg(S)$-localization of a loop space via a mapping telescope construction. This allows us to show that the localization of an Eilenberg-Mac Lane space $K(G,n)$, for $n \\geq 1$ and $G$ abelian, is $K(G',n)$, where $G'$ is the algebraic localization of $G$ away from $S$.\n\nIn \\cref{ss:localization-of-homotopy-groups}, we combine the results of \\cref{ss:localizingKgn} and observations about the interaction between $\\degg(S)$-localization and truncation to show that localizing a simply connected type at $\\degg(S)$ localizes all of the homotopy groups away from $S$.\nThe results of the last two sections assume that the family $S$ is indexed by the natural numbers.\n\nFinally, in \\cref{ss:abelian} we give a direct proof of the fact that\nthe localization of an abelian group in the category of groups coincides with\nits localization in the category of abelian groups, which also follows from\n\\cref{theorem:localizationKgn}.\n\n\\section{Localizing highly connected types}\\label{ss:n-conn}\n\nWe continue to fix a family $S : A \\to \\N$ of natural numbers and the associated family\nof maps $\\degg(S) : A \\to \\sphere{1} \\to \\sphere{1}$ sending $a$ to $\\degg(S(a))$.\nWe write $M_S$ for the family of cofibers, which sends $a$ to the cofiber\nof $\\degg(S(a))$, a ``Moore space.''\nWe write $\\susp{\\degg(S)}$ for the family of suspensions of the maps $\\degg(S(a))$,\nand similarly consider $\\suspsym^n M_S$ and $\\suspsym^n \\degg(S)$.\n\nThe main result of this section is:\n\n\\begin{thm}\\label{theorem:localizationisnullification}\n    Let $n\\geq 1$.\n    If $X$ is $n$-connected, then its\n    $\\degg(S)$-localization, its $\\suspsym^{n-1} M_S$-nullification and\n    its $\\suspsym^n \\degg(S)$-localization coincide.\n\\end{thm}\n\nNote that this implies that the same is true for the other localizations ``between''\n$\\degg(S)$ and $\\suspsym^n \\degg(S)$.\n% E.g. N_{\\suspsym^i M_S} for various i.\n\n\\begin{proof}\nBy \\cref{cor:preserve-n-connected},\n$L_{\\susp[n] \\degg(S)} X$ is $n$-connected,\nsince $\\trunc{n}{\\suspsym^n \\, \\sphere{1}} \\eqvsym 1$.\nThus, by \\cref{theorem:characterizinglocalness}, $L_{\\susp[n] \\degg(S)} X$\nis $\\degg(S)$-local and $\\suspsym^{n-1} M_S$-local.\nSo the natural maps\n\\[\n  L_{\\susp[n] \\degg(S)} X \\lra L_{\\suspsym^{n-1} M_S} X \\lra L_{\\degg(S)} X\n\\]\nare equivalences, by \\cref{lemma:comparelocalization}(3).\n\\end{proof}\n\nThis theorem fails without the connectedness hypothesis, as the next example shows.\n\n\\begin{eg}\n    The type $\\sphere{1}$ is not $\\degg(k)$-local for $k > 1$ but it is $M_k$-null.\n    On the one hand, $\\loopspacesym \\sphere{1}$ is equivalent to the integers, and the $k$-fold map\n    is multiplication by $k$, which is not an equivalence.\n    On the other, mapping from the cofiber sequence $\\sphere{1}\\to \\sphere{1} \\to M_k$ into $\\sphere{1}$ we\n    obtain the fiber sequence:\n    \\[\n        (M_k \\to_\\ast \\sphere{1}) \\longhookrightarrow (\\sphere{1} \\to_\\ast \\sphere{1}) \\xrightarrow{k} (\\sphere{1}\\to_\\ast \\sphere{1}),\n    \\]\n    and the multiplication by $k$ map on the integers is injective, so the pointed mapping space $M_k \\to_\\ast \\sphere{1}$ is contractible. 
Therefore $\\sphere{1}$ is $M_k$-null by \\cref{cor:pointed_null}.\n\\end{eg}\n\nAs an application of \\cref{theorem:localizationisnullification}, we have:\n\n\\begin{cor}\\label{corollary:commutativitylooplocalizationsimpconn}\n    For a pointed, simply connected type $X$, we have\n    \\[\n        \\loopspacesym L_{\\degg(S)} X \\simeq L_{\\degg(S)} \\loopspacesym X.\n    \\]\n\\end{cor}\n\n\\begin{proof}\nThis follows from \\cref{remark:commutativitylooplocalization} and \\cref{theorem:localizationisnullification}.\n\\end{proof}\n\n\\cref{example:notlex} shows that the assumption that $X$ is simply connected cannot be removed.\nNote that we cannot necessarily iterate the interchange of $\\loopspacesym$ and $\\degg(S)$-localization,\nsince $\\loopspacesym X$ might fail to be simply connected.\n\n\\medskip\n\nIt also follows that these localizations preserve $l$-connected types.\n\n\\begin{cor}\\label{corollary:plocalizationpreservesconnectedness}\n    For $l \\geq -2$ and $n \\geq 0$, if $X$ is $l$-connected, then $L_{\\susp[n] \\degg(S)} X$ is $l$-connected.\n\\end{cor}\n% We really only need the case n = 0 of this result.\n% This lemma can also be proved using lemma:truncationpreserveslocal,\n% and can then be used to prove theorem:localizationisnullification.\n\n\\begin{proof}\n    If $n \\geq l$, this follows from \\cref{cor:preserve-n-connected}.\n    If $n < l$, then \\cref{theorem:localizationisnullification} implies\n    that $L_{\\susp[n] \\degg(S)} X \\eqvsym L_{\\susp[l] \\degg(S)} X$, putting us\n    in the situation where $n = l$.\n%\\note{Having to break into cases makes me think we still haven't\n%captured the cleanest way of organizing this.}\n\\end{proof}\n\nThe following proposition implies that localizations of groups always exist\n(see \\cref{rmk:localizationofgroups})\nand is also used to prove \\cref{theorem:localizationKgn}.\n\n\\begin{prp}\\label{lemma:localizationlocalizesfirst}\n    For $n \\geq 1$,\n    the $\\degg(S)$-localization of a pointed, $(n-1)$-connected type $X$ localizes $\\pi_n(X)$ away from $S$.\n    The algebraic localization takes place in the category of groups if $n = 1$ and\n    abelian groups otherwise.\n\\end{prp}\n\n\\begin{proof}\n    By \\cref{prop:homotopygroupsoflocalarelocal},\n    we know that $\\pi_n(L_{\\degg(S)} X)$ is uniquely $S$-divisible.\n    So it remains to show that precomposition with $\\pi_n(\\eta) : \\pi_n(X) \\to \\pi_n(L_{\\degg(S)} X)$\n    induces an equivalence\n    \\[\n        \\precomp{\\pi_n(\\eta)} : \\Hom(\\pi_n(L_{\\degg(S)} X),  H) \\lra \\Hom(\\pi_n(X), H)\n    \\]\n    for every uniquely $S$-divisible group $H$ (where $H$ is assumed to be abelian if $n>1$).\n    Notice that this map is equivalent to \n    \\[\n        \\precomp{\\pi_n(\\ttrunc{n}{\\eta})} : \\Hom(\\pi_n(\\ttrunc{n}{L_{\\degg(S)} X}),  H) \\lra \\Hom(\\pi_n(\\ttrunc{n}{X}), H)\n    \\]\n    which in turn is equivalent to\n    \\[\n        \\precomp{\\ttrunc{n}{\\eta}} : \\left( \\ttrunc{n}{L_{\\degg(S)}X} \\pto K(H, n) \\right) \\lra \\left( \\ttrunc{n}{X} \\pto K(H, n) \\right) \n    \\]\n    by \\cref{theorem:catofgroups}, using the fact that $\\ttrunc{n}{L_{\\degg(S)}X}$ is still $(n-1)$-connected (\\cref{corollary:plocalizationpreservesconnectedness}).\n    Finally, this last map is equivalent to\n    \\[\n        \\precomp{\\eta} : \\left( L_{\\degg(S)}X \\pto K(H, n) \\right) \\lra \\left( X \\pto K(H, n) \\right),\n    \\]\n    since $K(H,n)$ is $n$-truncated.\n    And this map is an equivalence, since $K(H,n)$ is $\\degg(S)$-local 
by \\cref{corollary:characterizationpdivisible}.\n\\end{proof}\n\nThe results of this section also let us deduce the following lemma,\nwhich will be used in \\cref{ss:localization-of-homotopy-groups}.\n\n\\begin{lem}\\label{lemma:lex}\n    $\\degg(S)$-localization maps fiber sequences of simply connected types to fiber sequences.\n\\end{lem}\n\nNote that this lemma cannot be used to prove \\cref{corollary:commutativitylooplocalizationsimpconn},\nsince the fiber that arises in that case is $\\loopspacesym X$, which is not assumed to be\nsimply connected.\n%\\note{But maybe it's enough to assume that $E$ and $X$ are simply connected?}\n\n\\begin{proof}\n    Let $F \\to E \\to X$ be a fiber sequence of simply connected types, with $F$ the fiber over $x_0$.\n    By \\cref{theorem:localizationisnullification}, the $L$- and $L'$-localizations\n    of these types agree, where $L \\defeq L_{\\degg(S)}$.\n    Applying \\cref{proposition:preservationfibersequences},\\marginnote{I don't know where this proposition went. Need to fix} we get a diagram\n\\begin{equation*}\n\\begin{tikzcd}\nF \\arrow[d,swap,\"\\eta\"] \\arrow[r,hookrightarrow] & E \\arrow[d,swap,\"l\"] \\arrow[r] & X \\arrow[d,\"\\eta=\\eta'\"] \\\\\nLF \\arrow[r,hookrightarrow] & E' \\arrow[r] & L'X \\mathrlap{\\,\\,=\\, L X ,}\n\\end{tikzcd}\n\\end{equation*}\n    in which the rows are fiber sequences, with $L F$ the fiber over $\\eta(x_0)$.\n    To show that $l$ is an $L$-localization, it is enough to show that $E'$ is $L$-local.\n    Since it is the total space of a fibration of simply connected spaces,\n    it is simply connected. And since it is $L'$-local,\n    by the same proposition, we deduce that it is $L$-local, as needed.\n\\end{proof}\n\n\\cref{example:nonlocalfib} shows that the $\\degg(S)$-localization of a\nfiber sequence is not a fiber sequence in general.\n\n\\medskip\n\nWe end this section with the following lemma, which is of a similar\nflavor and which will be used in \\cref{ss:localization-of-homotopy-groups}.\n\n\\begin{lem}\\label{lemma:truncationpreserveslocal}\n    For $l \\geq -2$ and $n \\geq 0$, the $l$-truncation of a $\\suspsym^n \\degg(S)$-local type is $\\suspsym^n \\degg(S)$-local.\n\\end{lem}\n% I think we only need the n = 0 case of this later.\n\n\\begin{proof}\n    Let $X$ be a $\\suspsym^n \\degg(S)$-local type.\n    Fix $a : A$ and let $k = S(a)$.\n    We must show that $k : \\loopspacesym^{n+1} \\ttrunc{l}{X} \\to \\loopspacesym^{n+1} \\ttrunc{l}{X}$\n    is an equivalence for each basepoint $x' : \\ttrunc{l}{X}$.\n    If $l - (n+1) \\leq -2$, then $\\loopspacesym^{n+1} \\ttrunc{l}{X}$ is\n    contractible, so this is clear.\n    So assume that $l - (n+1) > -2$, and in particular that $l > -2$.\n    Since being an equivalence is a mere proposition,\n    we can assume that $x' = \\tproj{l}{x}$ for some $x : X$.\n    Recall that $l$-truncation\n    is the subuniverse of separated types for $(l-1)$-truncation (\\cref{example:truncationisseparated}).\n    Applying \\cref{thm:separation_characterization} $n+1$ times gives the equivalences\n    in the square\n\\begin{equation*}\n\\begin{tikzcd}\n\\trunc{l-n-1}{\\loopspacesym^{n+1} X} \\arrow[d,swap,\"\\trunc{l-n-1}{k}\"] \\arrow[r,\"\\eqvsym\"] & \\loopspacesym^{n+1} \\trunc{l}X \\arrow[d,\"k\"] \\\\\n\\trunc{l-n-1}{\\loopspacesym^{n+1} X}. 
\\arrow[r,\"\\eqvsym\"] & \\loopspacesym^{n+1} \\trunc{l} X\n\\end{tikzcd}\n\\end{equation*}\n    To show that the square commutes, it suffices to check this after precomposing\n    with the truncation map $\\loopspacesym^{n+1} X \\to \\trunc{l-n-1}{\\loopspacesym^{n+1} X}$.\n    And this follows since the equivalences commute with the natural maps from\n    $\\loopspacesym^{n+1} X$ and\n    both vertical maps commute with $k : \\loopspacesym^{n+1} X \\to \\loopspacesym^{n+1} X$.\n    Since $X$ is $\\suspsym^n \\degg(k)$-local, the map on the right hand side is an equivalence,\n    so the result follows.\n\\end{proof}\n\n\\section{Localization of groups}\\label{ss:localizationofgroups}\n\nWe fix a family $S : A \\to \\N$ of natural numbers and consider the family\nof maps $\\degg(S) : A \\to \\sphere{1} \\to \\sphere{1}$ sending $a$ to $\\degg(S(a))$.\n\n\\begin{defn}\n    For $k : \\N$, a group $G$ is \\define{uniquely $k$-divisible} if the $k$-th power map\n    $g \\mapsto g^k$ is a bijection.\n    A group $G$ is \\define{uniquely $S$-divisible} if it is uniquely $S(a)$-divisible for every $a : A$.\n\n    Given a group $G$, a group homomorphism $\\eta : G \\to G'$ is said to be a \\define{localization of $G$ away from $S$\n    in the category of groups} if $G'$ is uniquely $S$-divisible, and for every group $H$\n    which is uniquely $S$-divisible the precomposition map $\\precomp{\\eta} : \\mathsf{Grp}(G', H) \\to \\mathsf{Grp}(G, H)$ is an equivalence.\n    If $G$ is abelian and $\\eta : G \\to G'$ has the universal property only with respect to uniquely\n    $S$-divisible abelian groups, we say $\\eta$ is a \\define{localization of $G$ away from $S$ in the category\n    of abelian groups}.\n\\end{defn}\n\n\\begin{rmk}\nWhen the group is additive, the $k$-th power map is usually called the \\define{multiplication-by-$k$ map}.\nNotice that for non-abelian groups the $k$-th power map is not in general a group homomorphism.\n\\end{rmk}\n\nOur first goal is to show that the homotopy groups of a $\\degg(S)$-local type\nare uniquely $S$-divisible.\nIn order to do this, we begin by characterizing the $\\degg(S)$-local types.\n\n\\begin{lem}\\label{lem:deglocal_char}\nA type $X$ is $\\degg(S)$-local if and only if, for each $x:X$ and $a:A$, the map\n$S(a) : \\loopspace{X,x} \\to \\loopspace{X,x}$ sending \n$\\omega$ to $\\omega^{S(a)}$ is an equivalence.\n\\end{lem}\n\n\\begin{proof}\nWe apply \\cref{lemma:pointed}.\nLet $x:X$ and $a:A$.\nBy the universal property of the circle, we have an equivalence\n$(\\sphere{1} \\pto X) \\eqvsym \\loopspace{X,x}$.\nMoreover, the square\n\\[\n  \\begin{tikzcd}\n    (\\sphere{1} \\pto X) \\arrow[r,\"\\simeq\"] \\arrow[d,swap,\"\\precomp{\\degg(S(a))}\"] & \\loopspace{X,x} \\arrow[d,\"S(a)\"] \\\\\n    (\\sphere{1} \\pto X) \\arrow[r,swap,\"\\simeq\"] & \\loopspace{X,x}\n  \\end{tikzcd}\n\\]\ncommutes, so we conclude that $\\precomp{\\degg(S(a))}$ is an equivalence if and only if $S(a)$ is an equivalence.\n\\end{proof}\n\n\\begin{prp}\\label{prop:homotopygroupsoflocalarelocal}\n    If $X$ is a pointed, $\\degg(S)$-local type, then $\\pi_{m+1}(X)$ is uniquely $S$-divisible\n    for each $m : \\N$.\n    Conversely, if for some $n \\geq 0$, $X$ is a pointed, simply connected $n$-type\n    such that $\\pi_{m+1}(X)$ is uniquely $S$-divisible for each $m : \\N$,\n    then $X$ is $\\degg(S)$-local.\n\\end{prp}\n\nFor the converse, the requirement that $X$ be truncated is needed because\nwe need to use Whitehead's theorem.\n\n\\begin{proof}\nIf $X$ is a pointed, 
$\\degg(S)$-local type, then it follows immediately from \\cref{lem:deglocal_char} that $\\pi_1(X)$ is uniquely $S$-divisible. Moreover, since the loop space of a pointed $\\degg(S)$-local type is again $\\degg(S)$-local, it follows that $\\pi_m(X)$ is uniquely $S$-divisible for all $m\\geq 1$.\n\\end{proof}\n\n\\begin{proof}\n    Let $X$ be a pointed, $\\degg(S)$-local type and let $a : A$.\n    In this proof, we write $\\degg(k, Y)$ for the degree $k$ map on $\\loopspacesym Y$,\n    where $k = S(a)$.\n    For every $m : \\N$,\n    the map $\\degg(k, X) : \\loopspacesym X \\to \\loopspacesym X$ induces\n    the map $\\degg(k, \\loopspacesym^m X) : \\loopspacesym^m \\loopspacesym X \\to \\loopspacesym^m \\loopspacesym X$,\n    by the Eckmann-Hilton argument~\\cite[Theorem~2.1.6]{hottbook}.\n    That is, $\\loopspacesym^m \\degg(k, X) = \\degg(k, \\loopspacesym^m X)$.\n    It follows that $\\loopspacesym^m \\degg(k, X)$ induces the $k$-th power map on $\\pi_{m+1}( X )$.\n    Since $\\degg(k, X)$ is an equivalence, the $k$-th power map on $\\pi_{m+1}( X )$\n    must be an equivalence for each $m$.\n    In other words, $\\pi_{m+1}( X )$ is uniquely $S$-divisible for each $m$.\n\n    Conversely, suppose that, for some $n \\geq 0$, $X$ is a pointed,\n    simply connected $n$-type such that $\\pi_{m+1}(X)$ is uniquely $S$-divisible\n    for each $m : \\N$.\n    Let $a : A$ and let $k = S(a)$.\n    Since $X$ is pointed and connected, it suffices to show that\n    $\\degg(k, X) : \\loopspacesym X \\to \\loopspacesym X$ is an equivalence.\n    By the above argument, $\\degg(k, X)$ induces the $k$-th power map on each\n    $\\pi_{m+1}(X) = \\pi_m(\\loopspacesym X)$.\n    By assumption, these $k$-th power maps are bijections.\n    Since $\\loopspacesym X$ is a pointed, connected $(n-1)$-type, it follows from the\n    truncated Whitehead theorem~\\cite[Theorem~8.8.3]{hottbook} that \n    $\\degg(k, X)$ is an equivalence.\n    % This last step is where we use that X is simply connected.  Otherwise,\n    % we'd need to study pi_m(\\loopspacesym X) at other base points.\n\\end{proof}\n\n\\begin{cor}\\label{corollary:characterizationpdivisible}\n    For any $n > 1$, an abelian group $G$ is uniquely $S$-divisible if and only if its classifying space $K(G,n)$\n    is $\\degg(S)$-local. 
An arbitrary group $G$ is uniquely $S$-divisible if and only if $K(G,1)$ is $\\degg(S)$-local.\n\\end{cor}\n\n\\begin{proof}\n    As argued in the proof of \\cref{prop:homotopygroupsoflocalarelocal}, $\\degg(k)$ induces the $k$-th power map under the equivalence of categories, so one morphism is an equivalence if and only if the other is.\n\\end{proof}\n\nWe conclude this section with three examples.\nThe first one shows that $\\degg(k)$-localization does not preserve fiber sequences in general.\nThe second one shows the stronger statement that $\\degg(k)$-localization is not a modality,\nby giving an example showing that uniquely $k$-divisible groups are not closed under extensions.\n(It is an easy exercise that if a localization $L$ preserves fiber sequences,\nthen $L$ is a modality.)\nThe third one shows that the $\\degg(k)$-local types are not the separated types\nfor any reflective subuniverse $L$.\n\n\\begin{eg}\\label{example:notlex}\n    We show that for $k > 1$, $\\degg(k)$-localization\n    does not preserve all fiber sequences.\n    We first make some general observations.\n    By \\cref{lemma:setsarelocal}, we know that if $f$ is a map between pointed,\n    connected types, then sets are $f$-local.\n    It follows that if $L_f$ preserves the fiber sequence\n    \\[\n        G \\longhookrightarrow \\unit \\lra K(G,1),\n    \\]\n    then $K(G,1)$ is $f$-local.\n    Indeed, the $f$-localization map between the fiber sequences gives\n\\begin{equation*}\n\\begin{tikzcd}\nG \\arrow[d,swap,\"\\eqvsym\"] \\arrow[r,hookrightarrow] & \\unit \\arrow[d,swap,\"\\eqvsym\"] \\arrow[r] & K(G,1) \\arrow[d] \\\\\nL_f G \\arrow[r,hookrightarrow] & L_f\\unit \\arrow[r] & L_f K(G,1).\n\\end{tikzcd}\n\\end{equation*}\n    which implies that $K(G,1) \\simeq L_f K(G,1)$, since $L_f K(G,1)$ is connected\n    by \\cref{cor:preserve-n-connected}.\n    Now take $f$ to be the degree $k$ map.\n    By \\cref{corollary:characterizationpdivisible}, $K(G,1)$ is $\\degg(k)$-local if and only if\n    $G$ is uniquely $k$-divisible.\n    In particular, if $G = \\Z/k\\Z$, then $K(G,1)$ is not $\\degg(k)$-local.\n    So $\\degg(k)$-localization does not preserve all fiber sequences.\n\\end{eg}\n\n\\note{Added the following example. It is much simpler than the next one. Should we keep both?}\n\\note{Haven't modified the intro to these examples.}\n\n\\begin{eg}\\label{example:nonlocalfib2}\n    Consider the map $K(\\Z,1) \\to K(\\Q,1)$, induced by the inclusion of groups $\\Z \\to \\Q$.\n    Using the long exact sequence of homotopy groups, one sees that the fiber over any point is\n    (merely) equivalent to the set $\\Q/\\Z$. 
Recall that sets are $\\degg(k)$-local for any $k : \\N$,\n    so as long as $k > 0$, the type $K(\\Z,1)$ is the total space of a fibration with $\\degg(k)$-local fibers and $\\degg(k)$-local base.\n    On the other hand, if $k > 1$, $K(\\Z,1)$ is not $\\degg(k)$-local, so we see that $\\degg(k)$-localization\n    is not a modality in general.\n\\end{eg}\n\n\\begin{eg}\\label{example:nonlocalfib}\n    Let $B$ be the subtype of $\\Q \\to \\Q$ consisting of functions with bounded support.\n    We can define this type constructively as the type of functions together with\n    a mere bound for their support:\n    \\[\n         B \\defeq \\sm{f : \\Q \\to \\Q} \\, \\ttrunc{-1}{\\sm{b : \\Q} \\prd{x : \\Q} (x > b) \\to (f(x) = 0) \\times (f(-x) = 0)}.\n    \\]\n    The group operation is given by $(f + g)(x) = f(x) + g(x)$.\n    Notice that both $\\Q$ and $B$ are uniquely $k$-divisible for any $k > 0$.\n\n    Consider the semidirect product $P \\defeq B \\rtimes \\Q$, where $\\Q$ acts by translation:\n    \\[\n        (r \\cdot f)(x) = f(x+r).\n    \\]\n    We claim that $(\\delta_0, 2) : P$ does not have a square root. To see\n    this, assume we have $(f,r)$ such that $(f,r)^2 = (f + r\\cdot f,2r) = (\\delta_0, 2)$.\n    Then it must be the case that $r = 1$ and thus $f(x) + f(x+1) = \\delta_0(x)$ for all $x : \\Q$.\n    But then $f(x) = - f(x+1)$ if $x \\neq 0$.\n    We can now contradict our assumption by proving that $f(x) = 0$ for all $x$.\n    To do this, notice that this fact is a mere proposition, so we\n    can use the mere bound for the support of $f$ to conclude that $f(x+1)$ is $0$ if $x>0$ is large enough,\n    and then induct backwards using $f(x) = - f(x+1)$ if $x>0$. The case $x<0$ is analogous.\n    Thus we have obtained a short exact sequence of groups\n    \\[\n        1 \\lra B \\lra P \\lra \\Q \\lra 1\n    \\]\n    in which the kernel and cokernel are uniquely $2$-divisible, but the middle group is not.\n\n    It follows that if we apply the functor $K(\\blank, 1)$ to this short exact sequence of groups,\n    we obtain a fiber sequence with connected, $\\degg(2)$-local base and fiber,\n    but for which the total space is not $\\degg(2)$-local.\n    Phrased differently, a dependent sum of $\\degg(2)$-local types, indexed by \n    a $\\degg(2)$-local type, is not necessarily $\\degg(2)$-local.\n    So $\\degg(2)$-localization is not a modality.\n\n    A similar argument shows that for any $k > 1$, $P$ is not uniquely $k$-divisible\n    and therefore that $\\degg(k)$-localization is also not a modality.\n\\end{eg}\n\n\\begin{eg}\nAs a final example in a similar spirit, we show that for $k > 1$, $L_{\\degg(k)}$ is not\nof the form $L'$ for any reflective subuniverse $L$.\nFor any $L$, the $L$-separated types are determined by their identity types.\nHowever, the identity types of $K(\\Q,1)$ and $K(\\Z,1)$ are equivalent,\nand the former is $L_{\\degg(k)}$-local while the latter is not.\n\\end{eg}\n\n\\section{Localization of Eilenberg--Mac Lane spaces}\\label{ss:localizingKgn}\n\nIn this section, we compute the localization of a loop space as a mapping telescope,\nin a way that is familiar from classical topology.\nThis specializes to give a concrete description of the\nlocalization of an Eilenberg--Mac Lane space for an abelian group.\nThis is the key ingredient in the proofs of the results in the next section.\n\nFor the remainder of \\cref{section:localizationaway}, we assume that our family\n$S$ is indexed by the natural numbers, i.e., we have $S : \\N \\to \\N$ 
and\n$\\degg(S)$ sends $i$ to $\\degg(S(i))$.\n\nFor example, if we want to invert a decidable subset of the natural numbers\n$T : \\N \\to \\bool$, we can define a family $S : \\N \\to \\N$ by\n\\[\n    S(k) \\defeq \\begin{cases} k, & \\text{if $T(k)$} \\\\\n                              1, & \\text{otherwise.}\n                \\end{cases}\n\\]\nIt is easy to see that a type is $\\degg(S)$-local if and only if it is $\\degg(k)$-local for every $k$ such that $T(k)$ is true.\nThe most important example of this form is the case of localizing \\emph{at} a prime $p$,\nin which we take $T$ to be the subset of primes different from $p$.\n\n\\begin{lem}\\label{lemma:pmapisorthogonal}\n    Given a pointed, simply connected type $X$, the $k$-fold map $k : \\loopspacesym X \\to \\loopspacesym X$\n    is an $L_{\\degg(k)}$-equivalence.\n\\end{lem}\n\n\\begin{proof}\n    We have the usual square\n\\begin{equation*}\n\\begin{tikzcd}\n\\loopspacesym X \\arrow[d,swap,\"k\"] \\arrow[r,\"\\eta\"] & L_{\\degg(k)} \\loopspacesym X \\arrow[d,\"L_{\\degg(k)} k\"] \\\\\n\\loopspacesym X \\arrow[r,swap,\"\\eta\"] & L_{\\degg(k)} \\loopspacesym X\n\\end{tikzcd}\n\\end{equation*}\n    Recall that the right vertical map is the unique map equipped with a homotopy witnessing that the square commutes.\n    Now, by \\cref{corollary:commutativitylooplocalizationsimpconn}, we know that for pointed, simply connected types we have\n    $L_{\\degg(k)} \\loopspacesym X \\simeq \\loopspacesym L_{\\degg(k)} X$ so that,\n    by the uniqueness of the right vertical map, the above square is equivalent to the square\n\\begin{equation*}\n\\begin{tikzcd}\n\\loopspacesym X \\arrow[d,swap,\"k\"] \\arrow[r,\"\\loopspacesym \\eta\"] & \\loopspacesym L_{\\degg(k)} X \\arrow[d,\"k\"] \\\\\n\\loopspacesym X \\arrow[r,swap,\"\\loopspacesym \\eta\"] & \\loopspacesym L_{\\degg(k)} X\n\\end{tikzcd}\n\\end{equation*}\n    where the map on the right is the usual $k$-fold map.\n    But in this square the right vertical map is an equivalence, since $L_{\\degg(k)} X$\n    is $\\degg(k)$-local.\n\\end{proof}\n\nWe are almost ready to localize the loop space of a simply connected type away from $S : \\N \\to \\N$.\nBefore doing this we need a general fact about sequential colimits.\n\n\\begin{rmk}\\label{remark:equivalenceofseqcolim}\nGiven a sequential diagram $X$:\n\\begin{equation*}\n\\begin{tikzcd}\nX_0 \\arrow[r,\"h_0\"] & X_1 \\arrow[r,\"h_1\"] & X_2 \\arrow[r,\"h_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nwe can consider the shifted sequential diagram $\\mathsf{shift}(X)$ given by $X_1 \\to X_2 \\to \\cdots$.\nAnalogously, we can shift a natural transformation $f : X \\to Y$ between sequential\ndiagrams to obtain a natural transformation $\\mathsf{shift}(f):\\mathsf{shift}(X)\\to\\mathsf{shift}(Y)$.\nNotice that the transition maps $h_n$ induce a natural transformation $h : X \\to \\mathsf{shift}(X)$, and that moreover, the\ninduced map $\\colim(h) : \\colim(X) \\to \\colim(\\mathsf{shift}(X))$ is an equivalence.\nIt then follows that if $f : X \\to X$ is a natural transformation such that there exist $g, g' : X \\to \\mathsf{shift}(X)$\nwith $g \\circ f = h$ and $\\mathsf{shift}(f) \\circ g' = h$, then $f$ induces a self-equivalence $\\colim(f) : \\colim(X) \\eqvsym \\colim(X)$.\n\\end{rmk}
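\n\nAs a concrete illustration of this remark (a classical example, with sets in the role of the $X_n$): take $X_n \\defeq \\Z$ with every transition map $h_n$ given by multiplication by $2$, so that $\\colim(X) \\simeq \\Z[1/2]$, and let $f$ be multiplication by $2$ in every degree. Then both $g$ and $g'$ can be taken to be the identity $\\Z \\to \\Z$ in each degree, and $\\colim(f)$ is indeed the self-equivalence of $\\Z[1/2]$ given by multiplication by $2$.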
\n\n\\begin{prp}\\label{theorem:localizationastelescope}\n    Let $X$ be pointed and simply connected, and\n    define $s : \\N \\to \\N$ by $s(n) = \\prod_{i=0}^n S(i)$.\n    Then the $\\degg(S)$-localization of $\\loopspacesym X$ is equivalent to the colimit of the sequence\n\\begin{equation*}\n\\begin{tikzcd}\n\\loopspacesym X \\arrow[r,\"s(0)\"] & \\loopspacesym X \\arrow[r,\"s(1)\"] & \\loopspacesym X \\arrow[r,\"s(2)\"] & \\cdots,\n\\end{tikzcd}\n\\end{equation*}\n    where as usual a natural number $k$ is used to denote the $k$-fold map.\n\\end{prp}\n\n\\begin{proof}\n    We will show that the colimit is $\\degg(S)$-local and that\n    the map $\\loopspacesym X \\to \\colim\\, \\loopspacesym X$ is an $L_{\\degg(S)}$-equivalence.\n    % Being a deg(S)-equivalence is not the same as being a deg(S(k))-equivalence\n    % for every k.\n    \n    To prove that $\\loopspacesym X \\to \\colim\\, \\loopspacesym X$ is an $L_{\\degg(S)}$-equivalence\n    we use \\cref{lemma:orthogonalcomposition}, so it is enough to show that all the maps in the\n    diagram are $L_{\\degg(S)}$-equivalences. To prove this last fact notice that \\cref{lemma:pmapisorthogonal}\n    implies that the $s(k)$-fold map is an $L_{\\degg(s(k))}$-equivalence. But in particular this implies that it is also an\n    $L_{\\degg(S)}$-equivalence, since every $\\degg(S)$-local type is $\\degg(s(k))$-local for every $k$.\n    % The map s(0) is not a deg(S(1))-equivalence in general, so we can't work\n    % one k at a time above.  But it works to handle all k at once.\n\n    Now we must show that the colimit is $\\degg(S(k))$-local for each $k$.\n    So fix $k$ and recall that\n    the colimit is equivalent to the colimit of the sequence starting at $k$.\n    That is, we consider the colimit of the sequential diagram with objects\n    $C_n = \\loopspacesym X$ and transition maps $h_n : C_n \\to C_{n+1}$ given by $s(n+k)$.\n    Then $\\colim_n C_n$ is pointed and connected by \\cite{DoornRijkeSojakova},\n    so it is enough to check that the map $S(k) : \\loopspacesym \\colim_n C_n \\to \\loopspacesym \\colim_n C_n$ is an equivalence.\n    By the commutativity of loop spaces and sequential colimits \\cite{DoornRijkeSojakova},\n    it is then enough to show that the natural transformation $S(k) : \\loopspacesym C_n \\to \\loopspacesym C_n$\n    induces an equivalence in the colimit.\n\n    By the Eckmann-Hilton argument~\\cite[Theorem~2.1.6]{hottbook}, the transition map\n    $\\loopspacesym h_n : \\loopspacesym C_n \\to \\loopspacesym C_{n+1}$ is homotopic to $s(n+k)$.\n    We can then apply \\cref{remark:equivalenceofseqcolim},\n    taking $f$ to be the natural transformation given by\n    $S(k)$ in every degree, and both $g$ and $g'$ to be the natural transformation given by\n    $s(n + k)/S(k) : \\loopspacesym C_n \\to \\loopspacesym C_{n+1}$ in degree $n$.\n    We have that $g_n \\circ f_n = f_{n+1} \\circ g'_n = s(n + k)$, and moreover that\n    $g \\circ f = \\mathsf{shift}(f) \\circ g' = h$ as natural transformations.\n    So $S(k) : \\loopspacesym C_n \\to \\loopspacesym C_n$ induces an equivalence in the colimit,\n    as needed.\n\\end{proof}\n\nUsing the fact that $K(G,n)$ is the loop space of a simply connected space\nwhen $G$ is abelian, we deduce:\n\n\\begin{cor}\\label{corollary:localizationKgn}\n    For $n \\geq 1$ and $G$ abelian,\n    the $\\degg(S)$-localization of $K(G,n)$ is equivalent to the colimit of the sequence\n    \\[\n      K(G,n) \\xrightarrow{s(0)} K(G,n) \\xrightarrow{s(1)} \\cdots .  
\\tag*{\\qed}\n    \\]\n\\end{cor}\n\nIn particular, it follows from \\cite{DoornRijkeSojakova} that the\n$\\degg(S)$-localization of $K(G,n)$ is $n$-truncated and $(n-1)$-connected,\nso it is again a $K(G',n)$ for some group $G'$.\nBy \\cref{lemma:localizationlocalizesfirst}, $G'$ is the algebraic localization of $G$\naway from $S$ (in the category of groups if $n = 1$ and abelian groups otherwise).\nWe deduce:\n\n\\begin{thm}\\label{theorem:localizationKgn}\n    Let $n\\geq 1$ and let $G$ be any abelian group.\n    Then the $\\degg(S)$-localization map $K(G,n) \\to L_{\\degg(S)} K(G,n)$ is a map between Eilenberg--Mac Lane spaces and\n    is induced by an algebraic localization of $G$ away from $S$.\n    Moreover, the algebraic localization of $G$ is equivalent to the colimit\n    \\[\n      G \\xrightarrow{s(0)} G \\xrightarrow{s(1)} \\cdots\n    \\]\n    and is therefore abelian, even when $n = 1$.  \\qed\n\\end{thm}\n\nIt follows that the two types of algebraic localization agree for abelian groups,\nso we do not need to distinguish between them in what follows.\n\n\\section{Localization of homotopy groups}\\label{ss:localization-of-homotopy-groups}\n\nIn this section we prove the main theorem of the paper:\nfor simply connected types, $\\degg(S)$-localization localizes all homotopy groups away from $S$.\nWe continue to assume that our family $S$ is indexed by the natural numbers,\nas the essential ingredient is our result on the localization of Eilenberg--Mac Lane spaces from the previous section.\n%We use this and \\cref{lemma:lex} to induct over the Postnikov tower.\nThe other ingredient we need is that for simply connected types,\ntruncation commutes with $\\degg(S)$-localization, which follows\nfrom the next lemma.\n\n\\begin{lem}\\label{lemma:locofscntrunc}\n    The $\\degg(S)$-localization of a simply connected, $n$-truncated type is $n$-truncated.\n\\end{lem}\n\n\\begin{proof}\n    First, we show that it suffices to prove the statement for pointed types.\n    Let us assume given a simply connected and $n$-truncated type $X$, and let us denote the statement\n    of the theorem by $P(X)$. If we prove $X \\to P(X)$ it follows that $\\ttrunc{-1}{X} \\to P(X)$,\n    since being $n$-truncated is a mere proposition. But this is enough, for $X$ is\n    simply connected, which implies that $\\ttrunc{1}{X}$ is contractible, and hence that\n    $\\ttrunc{-1}{X}$ is inhabited.\n\n    We now assume that $X$ is pointed and proceed by induction.\n    If $X$ is $-2$, $-1$, $0$ or $1$-truncated, we are done, since $X$ is also simply connected, and thus contractible.\n    If $X$ is $(n+1)$-truncated and $n > 0$, consider the fiber sequence\n    \\[ K(G,n+1) \\longhookrightarrow X \\lra \\ttrunc{n}{X}, \\]\n    where $G \\defeq \\pi_{n+1}(X)$.\n    Since the types in the fiber sequence are simply connected, $\\degg(S)$-localization\n    preserves this fiber sequence, by \\cref{lemma:lex}. So we obtain a fiber sequence\n    \\[L_{\\degg(S)} K(G,n+1) \\longhookrightarrow L_{\\degg(S)}X \\lra L_{\\degg(S)} \\ttrunc{n}{X}. 
\\]\n    The type $L_{\\degg(S)} \\ttrunc{n}{X}$ is $n$-truncated by the induction hypothesis\n    and $L_{\\degg(S)} K(G,n+1)$ is $(n+1)$-truncated by \\cref{theorem:localizationKgn}.\n    So $L_{\\degg(S)} X$ must be $(n+1)$-truncated as well.\n\\end{proof}\n\nThe proof of \\cref{lemma:locofscntrunc} shows that we can compute\nthe $\\degg(S)$-localization of a simply connected $n$-type $X$ by\nlocalizing the Postnikov tower of $X$.\n\nFrom \\cref{lemma:locofscntrunc}, \\cref{lemma:truncationpreserveslocal} and\nthe argument used in \\cref{lemma:commutelocalization}, but restricted to simply connected types,\nwe deduce:\n\n\\begin{cor}\\label{corollary:localizationandtruncationcommute}\n    For simply connected types, $\\degg(S)$-localization and $n$-trunca\\-tion commute.\\qed\n\\end{cor}\n\nWe now give the main theorem of the paper.\n\n\\begin{thm}\\label{theorem:localizationlocalizes}\n    The $\\degg(S)$-localization of a pointed, simply connected type $X$ localizes all of the\n    homotopy groups away from $S$.\n\\end{thm}\n\n\\begin{proof}\n    Since $\\degg(S)$-localization preserves simply connectedness, it is immediate that $\\pi_1(L_{\\degg(S)} X)$ is trivial.\n    Now fix $n \\geq 1$ and consider the fiber sequence\n    \\[\n        K(\\pi_{n+1}(X),n+1) \\longhookrightarrow \\ttrunc{n+1}{X} \\lra \\ttrunc{n}{X}.\n    \\]\n    Notice that all of the types in the fiber sequence are simply connected.\n    Applying $\\degg(S)$-localization we obtain a map of fiber sequences\n\\begin{equation*}\n\\begin{tikzcd}\nK(\\pi_{n+1}(X),n+1) \\arrow[d] \\arrow[r,hookrightarrow] & \\ttrunc{n+1}{X} \\arrow[d] \\arrow[r] & \\ttrunc{n}{X} \\arrow[d] \\\\\nL_{\\degg(S)} K(\\pi_{n+1}(X),n+1) \\arrow[r,hookrightarrow] & \\ttrunc{n+1}{L_{\\degg(S)}X} \\arrow[r] & \\ttrunc{n}{L_{\\degg(S)} X}\n\\end{tikzcd}\n\\end{equation*} \n    by \\cref{lemma:lex} combined with the commutativity of truncation and\n    $\\degg(S)$-localization (\\cref{corollary:localizationandtruncationcommute}).\n    The result now follows from \\cref{theorem:localizationKgn}.\n\\end{proof}\n\nWe also have a partial converse to \\cref{theorem:localizationlocalizes}.\n\n\\begin{thm}\\label{theorem:characterize-localization}\n  Let $X$ and $X'$ be pointed, simply connected $n$-types, for some $n \\geq 0$.\n  If $f : X \\to X'$ is a pointed map such that $\\pi_m(f) : \\pi_m(X) \\to \\pi_m(X')$ is\n  an algebraic localization away from $S$ for each $m > 1$,\n  then $f$ is a $\\degg(S)$-localization of $X$.\n\\end{thm}\n\n\\begin{proof}\n  By \\cref{prop:homotopygroupsoflocalarelocal}, $X'$ is $\\degg(S)$-local.\n  Therefore, we have a commuting triangle\n  \\[\n    \\begin{tikzcd}\n      X \\arrow[r,\"f\"] \\arrow[d,swap,\"\\eta\"] & X' \\\\\n      L_{\\degg(S)} X \\arrow[ur,dashed,swap,\"f'\"]\n    \\end{tikzcd}\n  \\]\n  and it suffices to show that $f'$ is an equivalence.\n  Since $L_{\\degg(S)} X$ is also a pointed, simply connected $n$-type,\n  if we show that $\\pi_m(f')$ is a bijection for each $m > 1$,\n  the truncated Whitehead theorem~\\cite[Theorem~8.8.3]{hottbook} implies\n  that $f'$ is an equivalence.\n  By assumption, $\\pi_m(f)$ is an algebraic localization of $\\pi_m(X)$.\n  By \\cref{theorem:localizationlocalizes}, the same is true of $\\pi_m(\\eta)$.\n  Since $\\pi_m(f') \\circ \\pi_m(\\eta) = \\pi_m(f)$, it follows that\n  $\\pi_m(f')$ is a bijection.\n\\end{proof}\n\n\\section{Algebraic localization of abelian groups}\\label{ss:abelian}\n\nAs observed in 
\\cref{rmk:localizationofgroups},\n\\cref{theorem:localizationKgn} implies that the localization of an abelian group\naway from a set $S$, in the category of groups, coincides with its localization in\nthe category of abelian groups. It moreover provides a construction of the localization.\nSince we could not find these results in the literature, and since \\cref{theorem:localizationKgn}\nhas an indirect, homotopical proof, in this section we give a short, independent, algebraic proof\nof these results.\n\nThe following elementary lemma is the key ingredient.\n \n\\begin{lem}\n    Let $H$ be a group, let $n,m : \\N$, and let $x$, $y$, $\\hat{x}$ and $\\hat{y}$ be\n    elements of $H$ with the property that\n    $\\hat{x}$ is the unique element of $H$ such that $\\hat{x}^n = x$ and\n    $\\hat{y}$ is the unique element of $H$ such that $\\hat{y}^m = y$.\n    If $x$ and $y$ commute, then $\\hat{x}$ and $\\hat{y}$ commute.\n\\end{lem}\n\n\\begin{proof}\n    We first show that $x$ and $\\hat{y}$ commute.\n    The $m$-th power of $x \\hat{y} x^{-1}$ is $x \\hat{y}^m x^{-1} = x y x^{-1} = y$,\n    so by the uniqueness of $\\hat{y}$, it must be the case that $x \\hat{y} x^{-1} = \\hat{y}$.\n\n    Similarly, we have that\n    the $n$-th power of $\\hat{y} \\hat{x} \\hat{y}^{-1}$ is\n    $\\hat{y} \\hat{x}^n \\hat{y}^{-1} = \\hat{y} x \\hat{y}^{-1} = x$,\n    so by the uniqueness of $\\hat{x}$, we have $\\hat{y} \\hat{x} \\hat{y}^{-1} = \\hat{x}$.\n\\end{proof}\n\n\\begin{prp}\n    Let $G$ be an abelian group, let $n : \\N$, and let $n : G \\to G$ be the multiplication by $n$ map.\n    Then, for every uniquely $n$-divisible group $H$, the precomposition map\n    \\[\n        n^* : \\Hom(G,H) \\lra \\Hom(G,H)\n    \\]\n    is an equivalence.\n\\end{prp}\n\n\\begin{proof}\n    Given a map $f : G \\to H$, we have to show that there exists a unique $\\hat{f} : G \\to H$ such that\n    $\\hat{f} \\circ n = f$.\n    Since $H$ is uniquely $n$-divisible, we have an inverse function $\\phi : H \\to H$ to the $n$-th power map $n : H \\to H$.\n    It follows that if $\\hat{f}$ exists, then $\\hat{f} = \\phi \\circ f$.\n    So we only have to check that $\\hat{f}$ is a group homomorphism.\n\n    It is clear that $\\hat{f}$ preserves the identity element, so it remains to show that it preserves\n    the group operation. Take two elements $a,b : G$, and consider $f(a),f(b) : H$. These are two commuting\n    elements that have unique $n$-th roots. So their $n$-th roots $\\phi(f(a)), \\phi(f(b)) : H$ must also\n    commute. 
This implies that\n    \\[\n        (\\hat{f}(a) \\cdot \\hat{f}(b))^n = \\hat{f}(a)^n \\cdot \\hat{f}(b)^n = f(a) \\cdot f(b) = f(a + b) .\n    \\]\n    So by the uniqueness of $n$-th roots, $\\hat{f}(a) \\cdot \\hat{f}(b) = \\phi(f(a + b)) = \\hat{f}(a+b)$.\n\\end{proof}\n\nUsing this result, given an abelian group $G$,\nit is straightforward to prove that the colimit of the sequence displayed in\n\\cref{theorem:localizationKgn} is a localization of $G$ away from the family $S$\nin the category of groups.\nMoreover, this colimit is abelian, so it is also the localization of $G$\nin the category of abelian groups.\n", "meta": {"hexsha": "09f49ab14d0098e937944329934ee45ad979846a", "size": 40454, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "prime.tex", "max_stars_repo_name": "EgbertRijke/dissertation", "max_stars_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-07-06T10:37:12.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-06T10:37:12.000Z", "max_issues_repo_path": "prime.tex", "max_issues_repo_name": "EgbertRijke/dissertation", "max_issues_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prime.tex", "max_forks_repo_name": "EgbertRijke/dissertation", "max_forks_repo_head_hexsha": "f2c087ba8983205d3dd336bbc194be5b7218c2c5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.0892388451, "max_line_length": 573, "alphanum_fraction": 0.6838384338, "num_tokens": 13145, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5720703278391579}}
{"text": "\\section{Evaluation on retrieval systems}\n\nThe goal of evaluation is to assess the quality of the results \nobtained by an IR system.\nThere's the need of knowing a \\emph{ground truth}, so an annotated corpus \nwhere for each task we know what documents are relevant. The annotations\ncould be created manually or derived from the data, if it contains \nannotations.\n\n\\subsection{The notion of quality}\nGiven a corpus $C$ and a query $q$, the task is to find a set of \ndocuments $A_{q,C}$ that match $q$, but after retrieving such documents, \nthere's the need to estimate the quality of results.\n\n\\paragraph{Precision}\nTo formalise the quality of the retrieved documents $A_{q,C}$ we could \ncount how many of these documents are relevant to $q$, this \nis the notion of precision:\n$$\\mathit{Prec} = \\frac{\\mathit{relevant\\;retrieved}}{\\mathit{retrieved}}$$\nNote that in order to know if a document is relevant or not we need the \nground truth or a user feedback.\n\nThis measure does not suffice, as for instance, if we retrieve \nonly one correct document we would have maximum precision.\n\n\\paragraph{Recall}\nThe precision measure does not take in consideration how many relevant \ndocuments are there, that is crucial to assess quality of a query result.\nSo we can consider another measure, called Recall:\n$$\\mathit{Rec} = \\frac{\\mathit{relevan\\;retrieved}}{\\mathit{relevant}}$$\n\n\\paragraph{Information need}\nGiven the two quality measures, should we aim at better precision or more \nrecall? Trivially both, but it actually depends on the information need.\\\\\nIn some cases a really high recall is not necessary, while other times\na lower precision could be accepted while not missing anything relevant.\n\n\\paragraph{F1 Score}\nTo take into consideration both precision and recall, we could take \na weighted mean, so a tradeoff between the twos \n$$\\mathit{F1} = \\frac{2\\cdot\\mathit{Prec}\\cdot\\mathit{Rec}}\n{\\mathit{Prec} + \\mathit{Rec}}$$\n\n\\paragraph{Baseline system}\nTo assess the quality of an IR system, we need a baseline system, \nsomething trivial such as tossing a coin to decide whereas a document \nis relevant or not. \nGiven its quality measure, we can infer the quality of our, hopefully, \nmore complex system, in fact, quality measures can't be seen as absolute \nvalues, there's always the need to compare.\n\n\\subsection{Quality measures}\nTo formally define the notions introduced in the previous section, we need\nto take into consideration a more detail measure for errors.\n\n\\paragraph{Confusion Matrix}\nGiven a query $q$, a ground truth $E_q$ of relevant documents with respect \nto $q$, and a set of retrieved documents $A_q$, we define this matrix:\n\\begin{center}\n    \\begin{tabular}{c | c |c}\n            & $d \\in E_q$ & $d \\notin E_q$\\\\\n            \\hline\n            $d \\in A_q$ & True positive & False positive\\\\\n            \\hline\n            $d \\notin A_q$ & False Negative & True Negative\\\\\n    \\end{tabular}\n\\end{center}\nSo now we can redefine the \\emph{precision}, \\emph{recall} and \\emph{F1} \nmeasures as follows\n$$\\mathit{Prec} = \\frac{\\mathit{TP}}{\\mathit{TP} + \\mathit{FP}}\\;\\;\\;\n\\mathit{Rec} = \\frac{\\mathit{TP}}{\\mathit{TP} + \\mathit{FN}}\\;\\;\\;\n\\mathit{F1} = \\frac{\\mathit{TP}}{\\mathit{TP} + \\frac{1}{2}(\\mathit{FP} + \\mathit{FN})}$$\nThe actual confusion matrix is obtained by counting the documents given the retrieved \nones and the ground truth. 
\n\nThe matrix can be useful to estimate the system parameters: for instance, \nif the number of retrieved documents is set to a certain value $k$ and \nthe number of true positives is very high, a smaller $k$ would probably do the job, \nwhile if many positives are left out, i.e., the matrix has a high number of\nfalse negatives, a higher $k$ might make the system better.\n\n\\paragraph{Other measures}\nThere are many measures to estimate the quality of a system, such as \n\\emph{specificity}, \\emph{negative predictive value}, \\emph{miss-rate}, \n\\emph{fall-out} and \\emph{accuracy}. \nFinding the right measure is very important: for instance, accuracy is \nmisleading on an unbalanced dataset, where precision, recall and F1 are \nmore informative.\n\n\\subsection{Evaluation of ranking systems}\nSo far we have talked about boolean retrieval systems, where we only consider \nwhether a document is retrieved or not.\n\nIn a ranking scenario, we want to give importance not only to precision and recall, \nbut also to the rank assigned by the system.\n\n\\paragraph{Setting a threshold}\nOne approach is to set a specific threshold and compute the precision and recall \nof the documents above it.\nThis method does not take into consideration the ranking of the documents.\n\n\\paragraph{Precision at K}\nIf we take the first $K$ retrieved documents, ordered by rank, we can estimate \nthe quality of the ranking by computing the precision for the top $K$ documents.\nBy iterating over thresholds we can better estimate the performance, as opposed to \nsetting a single threshold.\\\\\nTo decide what thresholds to use we can look at the distribution of the system's scores.\n\n\\paragraph{Discounted cumulative gain}\nThis approach takes the ranking into consideration and discounts the gain contributed by\na document according to its position in the ranking:\n$$\\mathit{DCG} = \\sum_{i = 1}^{n}\\frac{R_i}{\\log_2(i+1)}$$\nwhere $R_i$ is the relevance (gain) of the document at position $i$.\n\n\\paragraph{Precision vs Recall curve}\nAnother approach is to compute precision and recall measures at each point in the ranking.\nIf we take the measurements and plot them, we obtain the precision--recall curve, \nand computing the area under that curve gives the average precision.\n\nTo have a smoother curve, it is possible to interpolate the precision: for each \npoint on the recall axis, we take the maximum precision over all higher recall levels.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.7\\textwidth]{images/precision-recall.png}\n    \\caption{Example of a non-interpolated precision and recall curve}\n\\end{figure}", "meta": {"hexsha": "51a62a5f8c9e35b25276e3460154a5a9fa771399", "size": 5911, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-courses/information-retrieval/chapters/evaluation.tex", "max_stars_repo_name": "marcodb97/unimi-notes", "max_stars_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "old-courses/information-retrieval/chapters/evaluation.tex", "max_issues_repo_name": "marcodb97/unimi-notes", "max_issues_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "old-courses/information-retrieval/chapters/evaluation.tex", 
"max_forks_repo_name": "marcodb97/unimi-notes", "max_forks_repo_head_hexsha": "b0b9520a01568c4c64f4fdb69523dd05339ee8f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-09T08:24:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T08:24:02.000Z", "avg_line_length": 46.9126984127, "max_line_length": 92, "alphanum_fraction": 0.7546946371, "num_tokens": 1470, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569014, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5720462322087169}}
{"text": "\\documentstyle[11pt,reduce]{article}\n\\date{July 1994}\n\\title{A Definite Integration Interface for REDUCE}\n\\author{Kerry Gaskell \\\\\nKonrad--Zuse--Zentrum f\\\"ur Informationstechnik Berlin\\\\\nTakustra\\\"se 7 \\\\\nD--14195 Berlin -- Dahlem \\\\\nFederal Republic of Germany \\\\[0.05in]\nE--mail: neun@zib.de\\footnotemark[1]}\n\n\\begin{document}\n\\maketitle\n\\footnotetext[1]{This definite integration interface was written\nduring my one year placement at ZIB. Any comments and/or\nproblems should therefore be directed to Winfried Neun at\nneun@zib.de.}\n\n\\section{Introduction}\nThis documentation describes part of \\REDUCE's definite\nintegration package that is able to calculate the definite integrals of\nmany functions, including several special functions.  There are other\nparts of this package, such as Stan Kameny's code for contour integration,\nthat are not included here.  The integration process described here is not\nthe more normal approach of initially calculating the indefinite integral,\nbut is instead the rather unusual idea of representing each function as a\nMeijer G-function (a formal definition of the Meijer G-function can be\nfound in \\cite {Prudnikov}), and then calculating the integral by using\nthe following Meijer G integration formula.\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x^{\\alpha-1} G^{s t}_{u v} \n\\left( \\sigma x \\  \\Bigg\\vert \\  {( c_u) \\atop (d_v)} \\right) \nG^{m n}_{p q} \\left( \\omega x^{l/k} \\  \\Bigg\\vert \\ {(a_p) \\atop (b_q)}\n\\right) dx = k G^{i j}_{k l} \\left( \\xi \\ \\Bigg\\vert \\ \n{(g_k) \\atop (h_l)} \\right)  \\hspace{5mm} (1)\n\\end{displaymath}\n\nThe resulting Meijer G-function is then retransformed, either directly\nor via a hypergeometric function simplification, to give\nthe answer. A more detailed account of this theory can be found in \n\\cite {Adamchik:90}.\n\n\\section{Integration between zero and infinity}\n\nAs an example, if one wishes to calculate the following integral\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x^{-1} e^{-x} sin(x) \\, dx\n\\end{displaymath}\n\nthen initially the  correct Meijer G-functions are found, via a \npattern matching \nprocess, and are substituted into (1) to give\n\n\\begin{displaymath}\n\\sqrt{\\pi} \\int_{0}^{\\infty} x^{-1} G^{1 0}_{0 1} \\left(x \n\\ \\Bigg\\vert \\ \n{. \\atop 0}\\right) G^{1 0}_{0 2}\\left(\\frac{x^{2}}{4} \n\\ \\Bigg\\vert \\ {. \\; .  \\atop \\frac{1}{2} \\; 0} \\right) dx\n\\end{displaymath}\n\nThe cases for validity of the integral are then checked. 
If these \nare found to be satisfactory then the formula is calculated and we \nobtain the following Meijer G-function \n\n\\begin{displaymath}  \nG^{1 2}_{2 2} \\left(1 \\ \\Bigg\\vert \\ {\\frac{1}{2} \\; 1 \\atop \n\\frac{1}{2} \\; 0} \\right)\n\\end{displaymath} \n\nThis is reduced to the following hypergeometric function\n\n\\begin{math}\n\\hspace{50mm} _2F_1 (\\frac{1}{2},1;\\frac{3}{2};-1 )\n\\end{math}\n\nwhich is then calculated to give the correct answer of \n\n\\begin{displaymath}\n\\frac{\\pi}{4}\n\\end{displaymath}\n\nThe above formula (1) is also true for the integration of a single\nMeijer G-function by replacing the second Meijer G-function \nwith a trivial Meijer G-function.\n\nA list of numerous particular Meijer G-functions is available in \n\\cite {Prudnikov}.\n\n\\section{Integration over other ranges}\n\nAlthough the description so far has been limited to the computation of\ndefinite integrals between 0 and infinity, it can also be extended to\ncalculate integrals between 0 and some specific upper bound, and\nby further extension, integrals between any two bounds.  One approach is\nto use the Heaviside function, i.e.\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x^{2} e^{-x} H(1-x)\\,dx = \\int_{0}^{1} x^{2} e^{-x}dx\n\\end{displaymath} \n\nAnother approach, again not involving the normal indefinite integration\nprocess, again uses Meijer G-functions, this time by means of the\nfollowing formula\n\n\\begin{displaymath}\n\\int_{0}^{y} x^{\\alpha-1} G^{m n}_{p q} \n\\left( \\sigma x \\  \\Bigg\\vert \\  {( a_u) \\atop (b_v)} \\right) dx=%\ny^{\\alpha}\\,G^{m \\; n+1}_{p+1 \\; q+1} \\left( \\sigma y \\  \\Bigg\\vert \\\n{( a_1..a_n,1-\\alpha,a_{n+1}..a_p) \n\\atop (b_1..b_m,-\\alpha,b_{m+1}..b_q)} \\right) (2) \n\\end{displaymath}\n\nFor a more detailed look at the theory behind this see \n\\cite{Adamchik:90}.\n\nFor example, if one wishes to calculate the following integral\n\n\\begin{displaymath}\n\\int_{0}^{y} sin(2 \\sqrt{x}) \\, dx \n\\end{displaymath}\n\nthen initially the correct Meijer G-function is found, by a pattern \nmatching process, and is substituted \ninto (2) to give\n\n\\begin{displaymath}\n\\int_{0}^{y} G^{1 0}_{0 2}\\left(x \n\\ \\Bigg\\vert \\ {. \\; .  
\\atop \\frac{1}{2} \\; 0} \\right) dx\n\\end{displaymath}\n\nwhich then in turn gives\n\n\\begin{displaymath}\ny \\; G^{1 1}_{1 3}\\left(y \\ \\Bigg\\vert \\ {0 \\atop \n\\frac{1}{2} -\\!1 \\; 0} \\right)\n\\end{displaymath}\n\nand returns the result\n\n\\begin{displaymath}\n\\frac{\\sqrt{\\pi} \\, J_{3/2}(2 \\, \\sqrt{\\,y}) \\, y}{y^{1/4}}\n\\end{displaymath}\n\n\\section{Using the definite integration package}\nTo use this package, you must first load it by the command\n\\begin{verbatim}\nload_package defint;\n\\end{verbatim}\nDefinite integration is then possible using the \\verb+int+\ncommand with the syntax:\n\\begin{verbatim}\n   INT(EXPRN:algebraic,VAR:kernel,LOW:algebraic,UP:algebraic)\n       :algebraic.\n\\end{verbatim}\nwhere LOW and UP are the lower and upper bounds respectively for\nthe definite integration of EXPRN with respect to VAR.\n\n\\subsection{Examples}\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} e^{-x} dx \n\\end{displaymath}\n\n\n\\begin{verbatim}\n int(e^(-x),x,0,infinity);\n\n 1\n\\end{verbatim}\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x sin(1/x) \\, dx\n\\end{displaymath}\n\n\\begin{verbatim}\n int(x*sin(1/x),x,0,infinity);\n\n           1\nINT(X*SIN(---),X,0,INFINITY)\n           X\n\\end{verbatim}\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x^2 cos(x) \\, e^{-2x} dx\n\\end{displaymath}\n\n\\begin{verbatim}\n int(x^2*cos(x)*e^(-2*x),x,0,infinity);\n\n   4\n -----\n  125\n\\end{verbatim}\n\n\\begin{displaymath}\n\\int_{0}^{\\infty} x e^{-x/2} H(1-x) \\,dx = \\int_{0}^{1} x e^{-x/2} dx\n\\end{displaymath}\n\n\\begin{verbatim}\n int(x*e^(-1/2x)*Heaviside(1-x),x,0,infinity);\n\n  2*(2*SQRT(E) - 3)\n -------------------\n       SQRT(E)\n\\end{verbatim}\n\n\\begin{displaymath}\n\\int_{0}^{1} x \\,log(1+x) \\,dx\n\\end{displaymath}\n\n\\begin{verbatim}\n int(x*log(1+x),x,0,1);\n\n  1\n ---\n  4\n\\end{verbatim}\n\n\\begin{displaymath}\n\\int_{y}^{2y} cos(2x) \\,dx\n\\end{displaymath}\n\n\\begin{verbatim}\n int(cos(2x),x,y,2y);\n\n SIN(4*Y) - SIN(2*Y)\n---------------------\n          2\n\\end{verbatim}
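\n\nAs one further example of the expected behaviour (the value follows from\n$\\int_{0}^{\\infty} x^{2} e^{-x} dx = \\Gamma(3) = 2$; the output layout shown\nhere is indicative only and may differ between \\REDUCE\\ versions):\n\n\\begin{verbatim}\n int(x^2*e^(-x),x,0,infinity);\n\n 2\n\\end{verbatim}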
\n\n\n\\section{Integral Transforms}\n\nA useful application of the definite integration package is in the \ncalculation of various integral transforms. The transforms\navailable are as follows:\n\n\\begin{itemize}\n\\item Laplace transform \n\\item Hankel transform \n\\item Y-transform \n\\item K-transform \n\\item StruveH transform \n\\item Fourier sine transform\n\\item Fourier cosine transform \n\\end{itemize}\n\n\\subsection{Laplace transform}\n\nThe Laplace transform\n\n\\begin{displaymath}\nf(s) = {\\cal L}\\{F(t)\\} = \\int_{0}^{\\infty} e^{-st}F(t)\\,dt\n\\end{displaymath}\n\ncan be calculated by using the \\verb+laplace_transform+ command.\n\nThis requires as parameters\n\n\\begin{itemize}\n\\item the function to be integrated\n\\item the integration variable.\n\\end{itemize}\n\nFor example\n\n\\begin{displaymath}\n{\\cal L}\\{e^{-at}\\}\n\\end{displaymath}\n\nis entered as\n\n\\begin{verbatim}\n        laplace_transform(e^(-a*x),x);\n\\end{verbatim}\n\nand returns the result\n \n\\begin{displaymath}\n\\frac{1}{s+a}\n\\end{displaymath}\n\n\\subsection{Hankel transform}\n\nThe Hankel transform\n\n\\begin{displaymath}\nf(\\omega) = \\int_{0}^{\\infty} F(t) \\,J_{\\nu}(2\\sqrt{\\omega t}) \\,dt \n\\end{displaymath}\n\ncan be calculated by using the \\verb+hankel_transform+ command e.g.\n\n\\begin{verbatim}\n        hankel_transform(f(x),x);\n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.\n\n\\subsection{Y-transform}\n\nThe Y-transform\n\n\\begin{displaymath}\nf(\\omega) = \\int_{0}^{\\infty} F(t) \\,Y_{\\nu}(2\\sqrt{\\omega t}) \\,dt \n\\end{displaymath}\n\ncan be calculated by using the \\verb+Y_transform+ command e.g.\n\n\\begin{verbatim}\n        Y_transform(f(x),x);    \n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.\n\n\\subsection{K-transform}\n\nThe K-transform\n\n\\begin{displaymath}\nf(\\omega) = \\int_{0}^{\\infty} F(t) \\,K_{\\nu}(2\\sqrt{\\omega t}) \\,dt\n\\end{displaymath}\n\ncan be calculated by using the \\verb+K_transform+ command e.g.\n\n\\begin{verbatim}\n        K_transform(f(x),x);    \n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.\n\n\\subsection{StruveH transform}\n\nThe StruveH transform\n\n\\begin{displaymath}\nf(\\omega) = \\int_{0}^{\\infty} F(t) \\,StruveH(\\nu,2\\sqrt{\\omega t}) \\,dt\n\\end{displaymath}\n\ncan be calculated by using the \\verb+struveh_transform+ command e.g.\n\n\\begin{verbatim}\n        struveh_transform(f(x),x);\n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.\n\n\\subsection{Fourier sine transform}\n\nThe Fourier sine transform \n\n\\begin{displaymath}\nf(s) = \\int_{0}^{\\infty} F(t) \\,sin (st) \\,dt \n\\end{displaymath}\n\ncan be calculated by using the \\verb+fourier_sin+ command e.g.\n\\begin{verbatim}\n        fourier_sin(f(x),x);    \n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.\n\n\\subsection{Fourier cosine transform}\n\nThe Fourier cosine transform \n\n\\begin{displaymath}\nf(s) = \\int_{0}^{\\infty} F(t) \\,cos (st) \\,dt \n\\end{displaymath}\n\ncan be calculated by using the \\verb+fourier_cos+ command e.g.\n\\begin{verbatim}\n        fourier_cos(f(x),x);\n\\end{verbatim}\n\nThis is used in the same way as the \\verb+laplace_transform+ command.
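\n\nFor instance, a hypothetical session (again, the printed form of the result may\ndiffer):\n\n\\begin{verbatim}\n        fourier_cos(e^(-x),x);\n\\end{verbatim}\n\nshould return\n\n\\begin{displaymath}\n\\frac{1}{s^{2}+1}\n\\end{displaymath}\n\nsince $\\int_{0}^{\\infty} e^{-t} cos (st) \\,dt = \\frac{1}{1+s^{2}}$.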
\n\n\\section{Additional Meijer G-function Definitions}\n\nThe relevant Meijer G representation for any function is found by a \npattern-matching process which is carried out on a list of Meijer \nG-function definitions. This list, although extensive, can never hope \nto be complete and therefore the user may wish to add more definitions.\nDefinitions can be added by adding the following lines:\n\n\\begin{verbatim}\n  defint_choose(f(~x),~var) => f1(n,x);\n\n  symbolic putv(mellin!-transforms!*,n,'\n                (() (m n p q) (ai) (bj) (C) (var)));\n\n\\end{verbatim} \n     where f(x) is the new function, i = 1..p, j=1..q, C = a constant,\n%where i = 1..p, j=1..q, C = a constant,  \n     var = variable, n = an indexing number.\n\nFor example when considering $cos (x)$ we have\n\n\\it Meijer G representation  -  \n\\begin{displaymath}\n\\sqrt{\\pi} \\,G^{1 0}_{0 2}\\left(\\frac{x^{2}}{4} \\ \\Bigg\\vert \n\\ { . \\; . \\atop 0 \\; \\frac{1}{2}} \\right) \n\\end{displaymath}\n\n\\it Internal definite integration package representation  - \n\\begin{verbatim}\n  defint_choose(cos(~x),~var)   => f1(3,x);\n\\end{verbatim}\n\n\\rm where 3 is the indexing number corresponding to the 3\nin the following formula\n\n\\begin{verbatim}\n  symbolic putv(mellin!-transforms!*,3,'\n                (() (1 0 0 2) () (nil (quotient 1 2))\n                (sqrt pi) (quotient (expt x 2) 4)));\n\\end{verbatim} \n\nor the more interesting example of $J_{n}(x)$:\n\n\\it Meijer G representation  -  \n\\begin{displaymath}\nG^{1 0}_{0 2} \\left(\\frac{x^{2}}{4} \\ \\Bigg\\vert \n\\ {. \\; .  \\atop \\frac{n}{2} \\; {\\frac{-n}{2}}} \\right) \n\\end{displaymath}\n\n\\it Internal definite integration package representation  - \n\n\\begin{verbatim}\n  defint_choose(besselj(~n,~x),~var) => f1(50,x,n);\n\n  symbolic putv(mellin!-transforms!*,50,'\n                ((n) (1 0 0 2) () ((quotient n 2)\n                                   (minus quotient n 2)) 1\n                                   (quotient (expt x 2) 4)));\n\\end{verbatim} \n\n\\section{The print\\_conditions function}\n\n\\rm The required conditions for the validity of the transform integrals\ncan be viewed using the following command:\n\n\\begin{verbatim}\n        print_conditions().\n\\end{verbatim}\n\nFor example after calculating the following laplace transform\n\n\\begin{verbatim}\n        laplace_transform(x^k,x);\n\\end{verbatim}\n\nusing the \\verb+print_conditions+ command would produce\n\n\\begin{verbatim}\n                         \nrepart(sum(ai) - sum(bj)) + 1/2 (q + 1 - p)>(q - p) repart(s)\n                         \n and ( - min(repart(bj))<repart(s))<1 - max(repart(ai)) \n\n or mod(arg(eta))=pi*delta\n\n or ( - min(repart(bj))<repart(s))<1 - max(repart(ai)) \n\n or mod(arg(eta))<pi*delta\n\n\\end{verbatim}\n\nwhere\n\\begin{displaymath}\n\\begin{array}{ll}\ndelta = s+t-\\frac{u-v}{2}\\\\\neta = 1-\\alpha(v-u)-\\mu-\\rho\\\\\n\\mu = \\sum_{j=1}^{q} b_{j} - \\sum_{i=1}^{p} a_{i} + \\frac{p-q}{2} + 1\\\\\n\\rho = \\sum_{j=1}^{v} d_{j} - \\sum_{i=1}^{u} c_{i} + \\frac{u-v}{2} + 1\\\\\ns,t,u,v,p,q,\\alpha \\; {\\rm as \\; in \\; (1)}\n\\end{array}\n\\end{displaymath}\n\n\n\\section{Acknowledgements}\nI would like to thank Victor Adamchik whose implementation of the \ndefinite integration package for \\REDUCE is vital to this\ninterface.  \n\n\n\\begin{thebibliography}{}\n\n\\bibitem{Prudnikov} A.P. Prudnikov, Yu.A. Brychkov and O.I. Marichev,\n{\\em Integrals and Series, Volume 3: More Special Functions} Gordon \nand Breach Science Publishers (1990)\n\n\\bibitem{Adamchik:90} V.S. Adamchik and O.I. 
Marichev, {\\em The \nAlgorithm for Calculating Integrals of Hypergeometric Type Functions \nand its Realization in Reduce System} from {\\em ISSAC 90:Symbolic and \nAlgebraic Computation} Addison-Wesley Publishing Company (1990) \n\n\\bibitem{Luke} Yudell L. Luke, {\\em The Special Functions and their\nApproximations, Volume 1} Academic Press (1969).\n\n\\end{thebibliography}\n\n\\end{document}\n\n", "meta": {"hexsha": "4924f716a4712f46e161afb03afcda4c879cc9b5", "size": 13262, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/defint/defint.tex", "max_stars_repo_name": "arthurcnorman/general", "max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/defint/defint.tex", "max_issues_repo_name": "arthurcnorman/general", "max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/defint/defint.tex", "max_forks_repo_name": "arthurcnorman/general", "max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4710578842, "max_line_length": 74, "alphanum_fraction": 0.6830794752, "num_tokens": 4293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5720462291079158}}
{"text": "\n\n    \\filetitle{isexplosive}{True if any eigenvalue is outside unit circle}{VAR/isexplosive}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nFlag = isexplosive(V)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{V} {[} VAR {]} - VAR object whose eigenvalues will be tested\n  for explosiveness.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{Flag} {[} \\texttt{true} \\textbar{} \\texttt{false} {]} - True\n  if at least one eigenvalue is outside unit circle.\n\\end{itemize}\n\n\\paragraph{Options}\\label{options}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{'tolerance='} {[} numeric \\textbar{}\n  \\emph{\\texttt{getrealsmall()}} {]} - Tolerance for the eigenvalue\n  test.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "30aa9936df81c867ce79faed6a7eb79433e4c990", "size": 957, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/VAR/isexplosive.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/VAR/isexplosive.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/VAR/isexplosive.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 21.75, "max_line_length": 91, "alphanum_fraction": 0.7387669801, "num_tokens": 312, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.5720462242019618}}
{"text": "% ----- CHAPTER 5: ZERO SUMS ----- %\n\nAlgorithm \\ref{algo:compute_rank} allows us to compute the rank of an elliptic curve in $\\softO\\left(\\sqrt{N_E}\\right)$ time by evaluating successive derivatives of $\\Les$ at the central point. However, there is an inescapable limitation of this algorithm: the $\\sqrt{N_E}$ time dependence of evaluating $\\Les$ means that it becomes infeasible to run on modern computer hardware when the conductor is larger than about $10^{16}$. Moreover, since there is no known way to evaluate elliptic curve $L$-functions in faster than square-root-conductor time, there is essentially nothing we can do to make such a rank computation algorithm asymptotically faster. \\\\\n\nIn this section we work toward presenting a method to bound analytic rank from above that does not require direct computation of a curve's $L$-function. The upside of such an algorithm is that it can be run on curves with much larger conductor, with the tightness of the bound scaling with how long one wants computation time to be. The downside is that we will have to sacrifice exactness: the method will only provide upper bounds on rank. \\\\\n\nBecause the aforementioned method relies on sums over the zeros of $\\Les$, for this entire section we will assume GRH unless explicitly stated otherwise.\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Logarithmic Derivatives}\\label{sec:log_derivs}\n\nLet $E/\\QQ$  have conductor $N_E$.\n\\begin{definition}\nThe {\\it logarithmic derivative} of the $L$-function attached to $E$ is\n\\begin{equation}\n\\ldLes := \\frac{d}{ds} \\log \\Les = \\frac{L_E^{\\pr}(s)}{\\Les}.\n\\end{equation}\n\\end{definition}\nLogarithmic derivatives have some useful properties. Importantly, the logarithmic derivative of the product of meromorphic functions is the sum of the logarithmic derivatives thereof. To this end:\n\\begin{proposition}\n\\begin{equation}\\label{eqn:logderiv_relation}\n\\ldLam{s} = \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right) + \\digamma(s) + \\ldLe{s},\n\\end{equation}\nwhere $\\digamma(s) = \\frac{\\Gamma\\pr}{\\Gamma}(s)$ is the digamma function on $\\C$.\n\\end{proposition}\nThis follows immediately from the definition of $\\Lams = (N_E)^{\\frac{s}{2}}(2\\pi)^{-s}\\Gamma(s)\\Les$. \\\\\n\nNote that the digamma function is well-understood and easily computable. It has simple poles at the negative integers, and it has the following infinite sum expansion about $s=1$:\n\\begin{equation}\\label{eqn:digamma_sum}\n\\digamma(1+s) = -\\eta + \\sum_{k=1}^{\\infty} \\frac{s}{k(k+s)}.\n\\end{equation}\nThis series converges absolutely for any $s$ not equal to a negative integer, and uniformly on bounded sets (excluding the aforementioned negative integers).\\\\\n\nWhat is perhaps surprising, however, is that $\\ldLe{s}$ can be represented by an elegant Dirichlet series. Recall that for $p \\nmid N_E$, the characteristic polynomial of Frobenius w.r.t.~$f$ at $p$ is $x^2 - a_p x + p^2$, where $a_p$ is as given by Definition \\ref{def:a_p}. Let this quadratic polynomial split as $(x-\\alpha_p)(x-\\beta_p)$ in $\\CC$, where for $\\alpha_p$ and $\\beta_p$ the dependence on $E$ is understood. 
\\\\\n\n\\begin{definition}\\label{def:bn}\nFor $n \\in \\NN$, let\n\\begin{equation}\nb_n(E) := \\begin{cases}\n-\\left(\\alpha_p^e+\\beta_p^e\\right)\\cdot \\log(p), & n=p^e\\;\\;\\text{a prime power ($e\\ge1$), and $p \\nmid N_E$} \\\\\n-a_p^e \\cdot \\log(p), & n=p^e\\;\\;\\text{and $p \\mid N_E$} \\\\\n0, & \\text{otherwise.} \\end{cases}\n\\end{equation}\n\\end{definition}\n\n\\begin{lemma}\nThe Dirichlet series for $\\ldLes$ is given by\n\\begin{equation}\n\\ldLes = \\sum_{n=1}^{\\infty} b_n(E)\\, n^{-s},\n\\end{equation}\nwhere the coefficients $b_n(E)$ are defined as in Definition \\ref{def:bn}. \\\\\n\\end{lemma}\n\\begin{proof}\nThe proof is an exercise in taking the logarithmic derivative of the Euler product formula for $\\Les$ and simplifying. Note we may write the Euler product of $\\Les$ as\n\\begin{equation}\n\\Les = \\prod_{p|N_E} \\left(1-a_p p^{-s}\\right)^{-1} \\prod_{p\\nmid N_E} \\left(1-\\alpha_p p^{-s}\\right)^{-1}\\left(1-\\beta_p p^{-s}\\right)^{-1}.\n\\end{equation}\nThe result follows by taking the logarithmic derivative of each term individually and then summing the results.\n\\end{proof}\n\nThe Dirichlet coefficients for $\\ldLes$ have a beautiful characterization in terms of the number of points on $E$ over finite fields:\n\\begin{proposition}\n\\begin{equation}\nb_n(E) = \\begin{cases}\n-\\left(p^e + 1 - \\#\\widetilde{E}(\\FF_{p^e})\\right)\\cdot \\log(p), & n=p^e\\;\\;\\text{a prime power,} \\\\\n0, & \\text{otherwise.} \\end{cases}\n\\end{equation}\nwhere $\\#\\widetilde{E}(\\FF_{p^e})$ is the number of points over $\\FF_{p^e}$ on the (possibly singular) curve obtained by reducing $E$ modulo $p$.\n\\end{proposition}\n\n\\begin{proof}\nIt is a standard result that if $(x-\\alpha_p)(x-\\beta_p)$ is the characteristic polynomial for Frobenius on $E$ for the prime of good reduction $p$, then\n\\begin{equation}\n\\#E(\\FF_{p^e}) = p^e + 1 - \\alpha_p^e - \\beta_p^e\n\\end{equation}\n(see \\cite[pp. 134-136]{Sil-1985} for a proof), from which the result at $p \\nmid N_E$ follows. \\\\\n\nFor primes of bad reduction, recall\n\\begin{equation}\na_p(E) := \\begin{cases}\n+1, & \\text{$E$ has split multiplicative reduction at $p$} \\\\\n-1, & \\text{$E$ has non-split multiplicative reduction at $p$} \\\\\n0, & \\text{$E$ has additive reduction at $p$.}\n\\end{cases}\n\\end{equation}\nLet $\\Ensfpe$ be the group of nonsingular points on $\\widetilde{E}(\\FF_{p^e})$. \\\\\nWhen $E$ has additive reduction at $p$, $\\Ensfpe \\simeq (\\F_{p^e},+)$, so together with the singular point $\\#\\widetilde{E}(\\FF_{p^e}) = p^e+1$; \\\\\nHence $(p^e + 1 - \\#\\widetilde{E}(\\FF_{p^e}))\\log(p) = 0 = a_p^e \\log(p)$. \\\\\nWhen $E$ has split multiplicative reduction at $p$, $\\Ensfpe \\simeq (\\F_{p^e}^*,\\times)$, so together with the singular point $\\#\\widetilde{E}(\\FF_{p^e}) = (p^e-1)+1 = p^e$; So $(p^e + 1 - \\#\\widetilde{E}(\\FF_{p^e}))\\log(p) = 1\\cdot \\log(p) = a_p^e \\log(p)$. \\\\\nWhen $E$ has non-split multiplicative reduction at $p$, let $L/\\FF_{p^e}$ be the quadratic extension obtained by adjoining to $\\FF_{p^e}$ the slopes of the tangent lines  at the singular point; then $\\Ensfpe \\simeq \\ker(\\Norm_{L/\\FF_{p^e}})$. \\\\\nSome thought should convince you that there are $p^e-(-1)^e$ elements in $L$ with norm 1, so together with the singular point $\\#\\widetilde{E}(\\FF_{p^e}) = p^e+1-(-1)^e$; \\\\\nHence $(p^e + 1 - \\#\\widetilde{E}(\\FF_{p^e}))\\log(p) = (-1)^e\\cdot \\log(p) = a_p^e \\log(p)$.\nSee \\cite[pg. 180, Prop. 
5.1]{Sil-1985} for the proofs of the above isomorphisms.\n\end{proof}\n\nWith elliptic curve $L$-functions it is often easier to work with the shifted logarithmic derivative $\ldLe{1+s}$, as it places the critical point at the origin. We therefore define notation for the coefficients of the shifted Dirichlet series below:\n\begin{definition}\label{def:cn}\nThe logarithmic derivative of the shifted $L$-function $L_E(1+s)$ is given by the Dirichlet series\n\begin{equation}\n\ldLe{1+s} := \sum_n c_n n^{-s} = \sum_{n} \frac{b_n}{n} n^{-s},\n\end{equation}\ni.e. $c_n = b_n/n$, where the $b_n$ are as defined in Definition \ref{def:bn}.\n\end{definition}\n\nBecause of its transparent Dirichlet series, we can bound the magnitude of $\ldLe{1+s}$ for $\Re(s)>\frac{1}{2}$. Let $\frac{\zeta\pr}{\zeta}$ be the logarithmic derivative of the Riemann zeta function. Then $\frac{\zeta\pr}{\zeta}(s) = \sum -\Psi(n) n^{-s}$ for $\Re(s)>1$, where $\Psi(n)$ is the von Mangoldt function, given by\n\begin{equation}\label{eqn:vonmangoldt}\n\Psi(n) = \begin{cases} \log p & n = p^e \;\;\text{a prime power,} \\ 0 & \text{otherwise.} \end{cases}\n\end{equation}\n(The von Mangoldt function is typically denoted $\Lambda(n)$ in the literature, but we have already reserved $\Lambda$ for the completed $L$-function of an elliptic curve.) Observe that $-\frac{\zeta\pr}{\zeta}(s)$ is strictly positive for $s > 1$ real, and decays to zero exponentially as $s \to \infty$. \\\n\nAway from the critical strip the behaviors of both $\Les$ and $\ldLe{s}$ are tightly constrained.\n\begin{lemma}\label{lem:ldLe_bound}\nLet $L_E(s)$ be the $L$-function of $E$. For any $s \in \CC$ with $\sigma := \Re(s) >\frac{3}{2}$, we have the following:\n\begin{enumerate}\n\item\n\begin{equation}\label{ineq:bounds_on_Les}\n\frac{\zeta(2\sigma-1)^2}{\zeta(\sigma-\frac{1}{2})^2} < \left|L_E(s)\right| < \zeta\left(\sigma-\frac{1}{2}\right)^2.\n\end{equation}\n\item\n\begin{equation}\label{ineq:bounds_on_Ldles}\n2\frac{\zeta\pr}{\zeta}\left(\sigma-\frac{1}{2}\right) < \left| \ldLe{s}\right| < -2\frac{\zeta\pr}{\zeta}\left(\sigma-\frac{1}{2}\right),\n\end{equation}\n\end{enumerate}\nwhere $\zeta(s)$ is the Riemann zeta function.\n\end{lemma}\n\n\begin{figure}[!h]\n    \centering\n    \includegraphics[width=1.0\textwidth]{graphics/L_E_and_logderiv_bounds.png}\n    \caption{Plots of $L_E(s)$ and $\ldLes$ for $1<s<8$ for 3 elliptic curves -- one each of rank 0, 1 and 2 -- with the global bounds given in Lemma \ref{lem:ldLe_bound} drawn in.}\n    \label{fig:L_E_and_logderiv_bounds}\n\end{figure}\n\n\begin{proof}\nFor the bound on $L_E(s)$, note that we may write the Euler product representation of $L_E(s)$ as\n\begin{equation}\nL_E(s) = \prod_{p \mid N_E} \left(\frac{1}{1-a_p p^{-s}}\right) \cdot \prod_{p \nmid N_E} \left(\frac{1}{1-\alpha_p p^{-s}}\right)\left(\frac{1}{1-\beta_p p^{-s}}\right),\n\end{equation}\nwhere for good $p$, $\alpha_p$ and $\beta_p$ are the two complex conjugate roots of the characteristic equation of Frobenius at $p$ for $E$. 
Hasse's Theorem gives that these are both precisely $\sqrt{p}$ in magnitude; since $|a_p|\le 1$ for bad $p$ we thus derive the inequality\n\begin{equation}\n\prod_p \left(\frac{1}{1+ \sqrt{p}\cdot p^{-s}}\right)^2 < \left|L_E(s)\right| < \prod_p \left(\frac{1}{1- \sqrt{p}\cdot p^{-s}}\right)^2.\n\end{equation}\nWe then note that \n\[ \prod_p \left(\frac{1}{1- \sqrt{p}\cdot p^{-s}}\right)^2 = \left(\prod_p \frac{1}{1- p^{-s+\frac{1}{2}}}\right)^2 = \zeta(s-\frac{1}{2})^2,\] \nwhile\n\[ \prod_p \left(\frac{1}{1+ \sqrt{p}\cdot p^{-s}}\right)^2 = \left(\prod_p \frac{1- p^{-s+\frac{1}{2}}}{1- p^{-2s+1}}\right)^2 = \frac{\zeta(2s-1)^2}{\zeta(s-\frac{1}{2})^2}\]\nto complete the result. \\ \n\nFor the bound on $\ldLe{s}$, observe that Hasse's Theorem implies that $|q + 1 - \#\widetilde{E}(\FF_{q})| \le 2\sqrt{q}$ for any prime {\it power} $q$. Hence\n\begin{align*}\n\left| \ldLe{s}\right| &\le \sum_n |b_n| \cdot n^{-\sigma} < \sum_{n} 2\sqrt{n} \cdot \Psi(n)\, n^{-\sigma}  = -2\frac{\zeta\pr}{\zeta}\left(\sigma-\frac{1}{2}\right).\n\end{align*}\nThe left inequality is proved in the same way with the signs reversed. The resulting inequalities are indeed strict, as Hasse's bound is guaranteed not to be tight when, say, $p=2$.\n\end{proof}\nNote that these bounds are global: they do not depend on the elliptic curve $E$ in any way.\n\n\begin{corollary}\label{cor:L_E_abs_convergence}\nThe Dirichlet series and Euler product for $L_E(s)$ converge absolutely for $\Re(s)>\frac{3}{2}$.\n\end{corollary}\nThis follows immediately from the fact that $|L_E(s)|$ is bounded by $\zeta\left(\sigma-\frac{1}{2}\right)^2$, and the Dirichlet series for $\zeta(s)$ converges absolutely for $\Re(s)>1$.\n\n\begin{corollary}\n$\Lambda_E(1+s)$ has no zeros outside the critical strip $|\Re(s)| \le \frac{1}{2}$.\n\end{corollary}\n\begin{proof}\nThis may be proven via either set of inequalities in the above lemma; we will use the latter. Recall that if $f$ is meromorphic on $\CC$, then $\frac{f\pr}{f}$ has a pole at $s=s_0$ iff $f$ has a zero or pole at $s_0$; moreover poles of $\frac{f\pr}{f}$ are simple and have residue equal to the multiplicity of the corresponding zero/pole of $f$. But by the above $\ldLe{1+s}$ converges absolutely for $\Re(s)>\frac{1}{2}$, so $\ldLam{1+s}$ is well-defined and bounded for $\Re(s)>\frac{1}{2}$, and hence cannot have any poles in this region. By symmetry the same is true for $\Re(s)<-\frac{1}{2}$. Hence $\Lambda_E(1+s)$ cannot have any zeros for $|\Re(s)| > \frac{1}{2}$.\n\end{proof}\n\nIf one assumes GRH, $\Lambda_E(1+s)$ has a particularly simple representation as a product over its zeros, from which we get a representation of $\ldLam{1+s}$ as a sum over its zeros.\n\begin{proposition}[GRH]\n\label{prop:logderiv_zero_rep} We have\n\begin{enumerate}\n\item \begin{equation}\label{eqn:Lams_prod}\n\Lambda_E(1+s) = C_E\cdot s^{r_E} \cdot \prod_{\gamma > 0} \left(1+\frac{s^2}{\gamma^2}\right),\n\end{equation}\nwhere $C_E$ is the leading coefficient of $\Lambda_E(1+s)$ at the central point (cf. Conjecture \ref{conj:BSD}), and the product is taken over the imaginary parts of all nontrivial zeros of $\Lambda_E(1+s)$ in the upper half plane. 
The product converges absolutely for any $s$, and uniformly on any bounded set.\n\item \begin{equation}\label{eqn:ldLam_sum}\n\ldLam{1+s} = \sum_{\gamma} \frac{s}{s^2+\gamma^2}, \n\end{equation}\nwhere the sum is taken over the imaginary parts of {\bf all} nontrivial zeros of $\Lambda_E(1+s)$, including central zeros with multiplicity. The sum converges absolutely for any $s$ outside the set of nontrivial zeros for $L_E(1+s)$, and uniformly on any bounded set outside of the set of zeros.\n\end{enumerate}\n\end{proposition}\n\nNote that by GRH, $\gamma^2$ is always a nonnegative real number in any of the above expansions. Furthermore, since noncentral nontrivial zeros occur in conjugate pairs, each term for $\gamma \ne 0$ in Equation \ref{eqn:ldLam_sum} appears exactly twice. It is therefore sometimes useful to rewrite it as\n\begin{equation}\label{eqn:ldLam_sum_v2}\n\ldLam{1+s} = \frac{r_E}{s} + 2 \sum_{\gamma>0} \frac{s}{s^2+\gamma^2},\n\end{equation}\nwhere $r_E$ is the (analytic) rank of $E$.\n\n\begin{proof}\n$\Lambda_E(1+s)$ has a zero of order $r_E$ at the origin, and by GRH all other zeros of $\Lambda_E(1+s)$ are simple, lie on the imaginary axis, and are symmetric about the origin. \\\n\nNow since $\Lambda_E(1+s)$ is an entire function of finite order, we may express it as a Hadamard product over its zeros. As with the Hadamard product for the completed Riemann zeta function, the symmetry of $\Lambda_E(1+s)$ simplifies this product to\n\begin{equation}\n\Lambda_E(1+s) = C_E \cdot s^{r_E}\cdot \prod_{\gamma\ne0}\left(1-\frac{s}{i\gamma}\right),\n\end{equation}\nwhere $C_E$ is the leading nonzero coefficient of the Taylor series for $\Lambda_E(1+s)$ at the central point; and for convergence the product should be taken over conjugate pairs of zeros. Combining conjugate pair terms yields Equation \ref{eqn:Lams_prod}; logarithmic differentiation then yields Equation \ref{eqn:ldLam_sum_v2}, which can be simplified to Equation \ref{eqn:ldLam_sum}.\n\end{proof}\n\n\begin{corollary}\n$\ldLam{1+s}$ is an odd function.\n\end{corollary}\nNote that this result holds independently of GRH. \\\n\nLemma \ref{lem:ldLe_bound} and Equation \ref{eqn:ldLam_sum} may be used to provide a crude bound on the analytic rank of $E$ in terms of its conductor:\n\begin{corollary}[GRH]\label{cor:logderiv_rank_bound}\nLet $E$ have analytic rank $r_{an}(E)$ and conductor $N_E$. Then\n\begin{equation}\nr_{an}(E) < 1.6 + \frac{1}{2} \log N_E.\n\end{equation}\nMoreover, this bound is {\it unconditional}; it does not require GRH to hold.\n\end{corollary}\n\begin{proof}\nWe begin by assuming GRH. From Equation \ref{eqn:ldLam_sum} we have the point estimate\n\begin{equation}\label{eqn:r_an_point_estimate}\nr_{an} < \sum_{\gamma} \frac{1}{1+\gamma^2} = \ldLam{2},\n\end{equation}\nwhile from Lemma \ref{lem:ldLe_bound} we get\n\begin{equation}\n\ldLam{2} =  \log\left(\frac{\sqrt{N_E}}{2\pi}\right) + \digamma(2) + \ldLe{2} < \frac{1}{2}\log N_E  - \log 2\pi + 1-\eta -2\frac{\zeta\pr}{\zeta}\left(\frac{3}{2}\right),\n\end{equation}\nwhere $\digamma(s)$ is the digamma function on $\CC$ and $\eta$ is the Euler-Mascheroni constant $= 0.5772156649\ldots$. Collect constant terms and round up to get the stated bound. 
\\\n\nIf one does not assume GRH, then we must use a less simplified representation for the logarithmic derivative:\n\begin{equation}\n\ldLam{1+s} = \sum_{\rho} \frac{1}{2}\left( \frac{1}{s-\rho} + \frac{1}{s-\bar{\rho}}\right),\n\end{equation}\nwhere $\rho$ ranges over the nontrivial zeros of $L_E(1+s)$. However, everything else proceeds as before, and the point estimate given in Equation \ref{eqn:r_an_point_estimate} still holds.\n\end{proof}\nWe will later use a related technique to show that the factor in front of $\log N_E$ can be made arbitrarily small, at the expense of having to assume GRH and having to increase the added constant. \\\n\nThe following corollary of Proposition \ref{prop:logderiv_zero_rep} will be of import in obtaining explicit bounds on the number of zeros of $L_E(s)$ in a given interval on the critical strip:\n\begin{corollary}[GRH]\label{cor:Re_logderiv}\nLet $\Re(s) > 0$, and write $s = \sigma + i\tau$, i.e. $\sigma > 0$. Then\n\begin{equation}\n\sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} = \Re\left(\ldLam{1+s}\right),\n\end{equation}\nwhere again the sum is taken over the imaginary parts of all nontrivial zeros of $L_E(s)$. The sum converges absolutely for any $\tau \in \RR$ and $\sigma > 0$.\n\end{corollary}\n\begin{proof}\nBy Equation \ref{eqn:ldLam_sum} we have\n\begin{align*}\n\Re\left(\ldLam{1+s}\right) &= \Re\left(\sum_{\gamma} \frac{s}{s^2+\gamma^2}\right) \\\n& = \frac{1}{2} \sum_{\gamma} \Re\left(\frac{1}{s - i \gamma} + \frac{1}{s + i \gamma}\right) \\\n&= \frac{1}{2} \sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} +  \frac{\sigma}{\sigma^2+(\gamma+\tau)^2}.\n\end{align*}\nHowever, absolute convergence for $\sum_{\gamma} \frac{s}{s^2+\gamma^2}$ for any $s$ in the right half plane implies absolute convergence for the individual sums $\sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2}$ and $\sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma+\tau)^2}$. We may thus write\n\begin{align*}\n\Re\left(\ldLam{1+s}\right) &= \frac{1}{2} \sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} +  \frac{1}{2} \sum_{\gamma}\frac{\sigma}{\sigma^2+(\gamma+\tau)^2} \\\n&= \sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} \;\;\text{by symmetry.}\n\end{align*}\n\end{proof}\nObserve that GRH implies that $\Re(\ldLam{1+s})>0$ for $\Re(s)>0$, since then each of the terms in the above sum is strictly positive. By oddness of $\ldLam{1+s}$ we also then have that $\Re(\ldLam{1+s})<0$ for all $\Re(s)<0$, and $\Re(\ldLam{1+s})=0 \Rightarrow \Re(s)=0$. \\\n\n\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{The Explicit Formula for Elliptic Curves}\n\nCombining Equations \ref{eqn:logderiv_relation}, \ref{eqn:digamma_sum} and \ref{eqn:ldLam_sum} we get the following equality:\n\begin{proposition}[GRH]\nLet $E/\QQ$ have conductor $N_E$. Let $\gamma$ range over all nontrivial zeros of $\Les$ with multiplicity, let $\eta$ be the Euler-Mascheroni constant, and let the $c_n = c_n(E)$ be as given by Definitions \ref{def:bn} and \ref{def:cn}. 
Then\n\\begin{equation}\\label{eqn:exp_form_1}\n\\sum_{\\gamma} \\frac{s}{s^2 + \\gamma^2} = \\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right] + \\sum_{k=1}^{\\infty} \\frac{s}{k(s+k)} + \\sum_{n=1}^{\\infty} c_n n^{-s}.\n\\end{equation}\n\\end{proposition}\nThis is the prototypical {\\it explicit formula} for elliptic curves: an equation relating a sum over the nontrivial zeros of $\\Les$ to a sum over the logarithmic derivative coefficients of $\\Les$, plus some easily smooth part that only depends on the curve's conductor. \\\\\n\nIn general, the phrase ``explicit formula\" is not applied to a specific equation, but rather to a suite of equalities that resemble the above in some way. We reproduce Lemma 2.1 from \\cite{Bob-2011}, which is a more general version of the explicit formula, akin to the Weil formulation of the Riemann-von Mangoldt explicit formula for $\\zeta(s)$.\n\\begin{lemma}[GRH]\n\\label{lem:exp_form_2}\nSuppose that $f(z)$ is an entire function s.t. there exists a $\\delta>0$ such that $f(x+iy) = O(x^{-(1+\\delta)})$ for $|y|<1+\\epsilon$ for some $\\epsilon>0$. Suppose that the Fourier transform of $f$\n\\begin{equation}\n\\hat{f}(y) = \\int_{-\\infty}^{\\infty} e^{-i x y}f(x)\\; dx\n\\end{equation}\nexists and is such that $\\sum_{n=1}^{\\infty} c_n \\hat{f}\\left(\\log n\\right)$ converges absolutely. Then\n\\begin{equation}\\label{eqn:exp_form_2}\n\\sum_{\\gamma} f(\\gamma) = \\frac{1}{\\pi}\\left[\\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\hat{f}(0) + \\Re\\int_{-\\infty}^{\\infty} \\digamma(1+it)f(t) \\; dt  + \\frac{1}{2} \\sum_{n=1}^{\\infty} c_n \\left( \\hat{f}\\left(\\log n\\right) + \\hat{f}\\left(-\\log n\\right)\\right) \\right].\n\\end{equation}\n\\end{lemma}\nA proof can be found in \\cite[Theorem 5.12]{IwKo-2004}. Note that Equation \\ref{eqn:exp_form_1} can be recovered by setting $f$ to be the Poisson kernel $f_s(x) = \\frac{s}{s^2+x^2}$; then $\\hat{f_s}(y) = e^{-s|y|}$, so $\\hat{f_s}(\\log n) = n^{-s}$. \\\\\n\nWe give a distribution-theoretic reformulation of Lemma \\ref{lem:exp_form_2}. While the subject of explicit formulae for $L$-functions of Hecke eigenforms is treated by a number of sources, the following doesn't seem to have been explicitly written down in the literature anywhere:\n\\begin{proposition}[GRH]\\label{prop:exp_form_as_distribution}\nLet $\\gamma$ range over the imaginary parts of the zeros of $\\Les$ with multiplicity. Let $\\varphi_E = \\sum_{\\gamma} \\delta(x-\\gamma)$ be the complex-valued distribution on $\\RR$ corresponding to summation over the zeros of $L_E(s)$, where $\\delta(x)$ is the usual Dirac delta function. That is, for any test function $f: \\RR \\mapsto \\CC$ such that $\\sum_{\\gamma}f(\\gamma)$ converges, \n\\begin{equation}\n\\langle f,\\varphi_E \\rangle = \\int_{-\\infty}^{\\infty} f(x)\\left(\\sum_{\\gamma\\in S_E} \\delta(x-\\gamma)\\right) \\, dx = \\sum_{\\gamma\\in S_E} f(\\gamma).\n\\end{equation}\nThen as distributions,\n\\begin{equation}\\label{eqn:exp_form_3}\n\\varphi_E = \\sum_{\\gamma} \\delta(x-\\gamma) = \\frac{1}{\\pi}\\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right) +\\sum_{k=1}^{\\infty} \\frac{x^2}{k(k^2+x^2)} + \\frac{1}{2}\\sum_{n=1}^{\\infty} c_n \\left(n^{ix}+n^{-ix}\\right) \\right].\n\\end{equation}\n\\end{proposition}\nIn the above language, $\\ldLam{1+s} = \\left\\langle \\frac{s}{s^2+x^2},\\varphi_E \\right\\rangle$ for $\\Re(s) > 0$. 
Note that convergence on the right hand side is absolute for $\Re(s)>1$, and conditional (provably so thanks to Sato-Tate) for $0<\Re(s)\le 1$. \\\n\n\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{Estimating Analytic Rank with the sinc$^2$ Sum}\label{sec:sinc_squared_sum}\n\nThe Explicit Formula may be used to provide computationally effective upper bounds on the analytic rank of an elliptic curve. The method appears to have first been formulated by Mestre in \cite{Me-1986}, and used by Brumer in \cite{Bru-1992} to prove that, conditional on GRH, the average rank of elliptic curves is at most 2.3. This upper bound was improved to 2 by Heath-Brown in \cite{HeBr-2004}. \\\n\n[Aside: one of the most groundbreaking developments in number theory in recent years is a series of results by Manjul Bhargava and Arul Shankar \cite{BhSh-2010a} \cite{BhSh-2010b} \cite{BhSh-2013} proving that the average rank of elliptic curves is at most 0.885. These results are {\it unconditional} -- the first such unconditional bound on average rank. For his work Bhargava received a Fields Medal in 2014.] \\\n\nThe analytic method stems from invoking the explicit formula as stated in Lemma \ref{lem:exp_form_2} on a function $f$ of a specific form:\n\begin{lemma}[GRH]\label{lem:exp_form_nonneg_f_cpt_suppt}\nLet $\gamma$ range over the nontrivial zeros of $L_E(s)$. Let $f$ be a non-negative even real-valued function on $\RR$ such that $f(0)=1$. Suppose further that the Fourier transform $\hat{f}$ of $f$ has compact support, i.e. $\hat{f}(y) = 0$ for $|y|>R$ for some $R>0$. Then for any $\Delta>0$, we have\n\begin{equation}\n\sum_{\gamma} f(\Delta \gamma) = \frac{1}{\Delta \pi}\log\left(\frac{\sqrt{N_E}}{2\pi}\right) + \frac{1}{\pi}\Re\int_{-\infty}^{\infty} \digamma(1+it)f(\Delta t) \; dt  + \frac{1}{\Delta \pi}\sum_{n<e^{\Delta R}} c_n \hat{f}\left(\frac{\log n}{\Delta}\right).\n\end{equation}\nMoreover, the value of the sum bounds the analytic rank of $E$ from above for any given value of $\Delta$, and the sum converges to $r_{\an}(E)$ as $\Delta \to \infty$.\n\end{lemma}\n\n\begin{proof}\nThe formula as stated above is just an application of the explicit formula in Lemma \ref{lem:exp_form_2}, noting that the Fourier transform of $f(\Delta x)$ is $\frac{1}{\Delta}\hat{f}\left(\frac{\xi}{\Delta}\right)$. Since $f$ is $1$ at the origin, $\sum_{\gamma} f(\Delta \gamma) = r_E + \sum_{\gamma\ne 0} f(\Delta \gamma)$. Furthermore, $f$ is non-negative and integrable, so the sum over noncentral zeros is nonnegative and decreases to zero as $\Delta$ increases.\n\end{proof}\n\nWhile in theory any $f$ with the properties mentioned above works for bounding analytic rank, the function\n\begin{equation}\nf(x) = \sinc^2(x) = \left(\frac{\sin(\pi x)}{\pi x}\right)^2\n\end{equation}\nis what is used by Mestre, Brumer and Heath-Brown in the publications above, and by Bober in \cite{Bob-2011}. 
This is due to its Fourier transform being compactly supported, namely the triangular function:\n\begin{equation}\n\hat{f}(y) = \int_{-\infty}^{\infty} e^{-i x y}f(x)\; dx =  \begin{cases} 1 - \frac{|y|}{2\pi}, & |y|\le 2\pi \\ 0, & |y| > 2\pi.\end{cases}\n\end{equation}\nMoreover, if $f(x) = \sinc^2(x)$, the integral term $\frac{1}{\pi}\Re\int_{-\infty}^{\infty} \digamma(1+it)f(\Delta t) \; dt$ can be computed explicitly in terms of known constants and special functions:\n\begin{equation}\n\frac{1}{\pi}\Re\int_{-\infty}^{\infty} \digamma(1+it)f(\Delta t) \; dt = - \frac{\eta}{\pi \Delta} + \frac{1}{2\pi^2 \Delta^2}\left(\frac{\pi^2}{6} - \Li_2\left(e^{-2\pi \Delta}\right)\right),\n\end{equation}\nwhere $\eta$ is the Euler-Mascheroni constant $= 0.5772\ldots$ and $\Li_2(x)$ is the dilogarithm function, defined as $\Li_2(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$ for $|x|\le 1$.\n\nCombining the above, we get a specialization of Lemma \ref{lem:exp_form_nonneg_f_cpt_suppt}:\n\begin{corollary}[GRH]\nLet $\gamma$ range over the nontrivial zeros of $L_E(s)$, and let $\Delta > 0$. Then\n\begin{align}\label{eqn:sincsquared_sum}\n\sum_{\gamma} \sinc^2(\Delta \gamma) = \quad &\frac{1}{\Delta \pi}\left[\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right)+ \frac{1}{2\pi \Delta}\left(\frac{\pi^2}{6} - \Li_2\left(e^{-2\pi \Delta}\right)\right)  \right. \nonumber \\\n&+ \left.\sum_{n<e^{2\pi \Delta}} c_n \cdot \left(1-\frac{\log n}{2\pi \Delta}\right)\right].\n\end{align}\n\end{corollary}\n\nWhat's notable about the above formula is that evaluation of the right hand side is a finite computation, and only requires knowledge of the elliptic curve's conductor and its $a_p$ values up to some bound. Thus the zero sum is eminently computable, and results in a value that bounds the analytic rank of $E$ from above. \\\n\n\begin{figure}[!h]\n    \centering\n    \includegraphics[width=1.0\textwidth]{graphics/zero_sum_visualization.png}\n    \caption{A graphic representation of the $\sinc^2$ sum for the elliptic curve $E: y^2=x^3-18x+51$, a rank 1 curve with conductor $N_E=750384$, for three increasing values of the parameter $\Delta$. Vertical lines have been plotted at $x=\gamma$ whenever $L_E(1+i\gamma)=0$ -- red for the single central zero, and blue for noncentral zeros; the height of the darkened portion of each line is given by the black curve $\sinc^2(\Delta x)$. Summing up the lengths of the dark vertical lines thus gives the value of the $\sinc^2$ sum. We see that as $\Delta$ increases, the contribution from the blue lines -- corresponding to noncentral zeros -- goes to zero, while the contribution from the central zero in red remains at 1. Thus the sum must tend to 1 as $\Delta$ increases.}\n    \label{fig:zero_sum_visualization}\n\end{figure}\n\nThe $\sinc^2$ zero sum rank estimation method has been implemented in Sage (see Appendix A), and used to successfully estimate ranks on a database of 18 million elliptic curves with conductor at most $\sim 10^{11}$. A range of $\Delta$ values was used, from $\Delta=1.0$ (for which average time per curve was $\sim 10^{-5}$ s), to $\Delta=2.0$ (average time per curve $\sim 10^{-1}$ s). See an upcoming paper by Ho, Balakrishnan, Kaplan, Stein, Weigandt, and S. for details on the computations. 
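To emphasize just how concrete this finite computation is, here is a minimal Python sketch of the right hand side of Equation \ref{eqn:sincsquared_sum}. This is not the Sage implementation referred to above: the function \texttt{cn}, which should return $c_n = b_n/n$ as in Definition \ref{def:cn}, is a stand-in to be supplied from the curve's $a_p$ data, and everything else is elementary.
\begin{verbatim}
# Sketch: evaluate the RHS of the sinc^2 zero sum; under GRH the
# returned value bounds the analytic rank r_E from above.
# `cn(n)` is assumed given: it must return c_n = b_n/n.
import math

ETA = 0.5772156649015329              # Euler-Mascheroni constant

def dilog(x, terms=200):
    # Li_2(x) = sum_{k>=1} x^k / k^2, valid for |x| <= 1
    return sum(x**k / k**2 for k in range(1, terms + 1))

def sinc_squared_sum(N_E, Delta, cn):
    smooth = -ETA + math.log(math.sqrt(N_E) / (2 * math.pi))
    smooth += (math.pi**2 / 6 - dilog(math.exp(-2 * math.pi * Delta))) \
              / (2 * math.pi * Delta)
    n_max = int(math.exp(2 * math.pi * Delta))  # sum over n < e^{2 pi Delta}
    osc = sum(cn(n) * (1 - math.log(n) / (2 * math.pi * Delta))
              for n in range(2, n_max + 1))     # c_1 = 0
    return (smooth + osc) / (Delta * math.pi)
\end{verbatim}
Since every zero contributes nonnegatively to the left hand side, whatever value this returns is an upper bound on $r_E$; increasing $\Delta$ tightens the bound, at exponential cost in the length of the $c_n$ sum.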
\\\n\nA neat conclusion that can immediately be drawn from the finiteness of the $\sinc^2$ explicit formula sum is that maximum analytic rank grows more slowly than $\log N_E$:\n\n\begin{corollary}[GRH]\label{cor:rank_slower_than_log_N}\nFor any $\epsilon >0$ there is a constant $K_{\epsilon}>0$ such that for any $E/\QQ$ with conductor $N_E$, we have\n\begin{equation}\nr_E < \epsilon \log N_E + K_{\epsilon}.\n\end{equation}\n\end{corollary}\n\begin{proof}\nWe note that for any given $\Delta>0$, the sum $ \sum_{n<e^{2\pi \Delta}} c_n \cdot \left(1-\frac{\log n}{2\pi \Delta}\right)$ is bounded by a constant that is independent of the choice of elliptic curve, as the $c_n$ values are bounded globally. Thus the right hand side of Equation \ref{eqn:sincsquared_sum} is equal to $\frac{1}{2\pi \Delta}\log N_E$ plus a number whose supremum magnitude depends only on $\Delta$ and not on $E$. Since the sum bounds analytic rank from above, and $\epsilon = \frac{1}{2\pi \Delta}$ can be made arbitrarily small by taking $\Delta$ large, the statement follows.\n\end{proof}\n\n[Aside: This statement is already known in the literature, so nothing new has been proven here. In fact, it's conjectured that maximum analytic rank grows more like $\sqrt{\log N_E}$ (existing numerical evidence would seem to support this), but this is still very much an open problem.] \\\n\nThe above allows us to provide bounds on analytic rank via point estimates by choosing particular values of $\epsilon$. For example, if we choose $\epsilon = \frac{1}{\log 23} \sim 0.3189\ldots$ and collect and bound all the conductor-independent terms in Equation \ref{eqn:sincsquared_sum}, we can improve the result in Corollary \ref{cor:logderiv_rank_bound} to the following:\n\begin{corollary}[GRH]\label{cor:better_an_bound}\nLet $E$ have analytic rank $r_E$ and conductor $N_E$. Then\n\begin{equation}\nr_E < 0.32 \log N_E + 0.5.\n\end{equation}\n\end{corollary}\nWe leave the details of the proof to the reader as a fun analysis exercise.\\\n\nFinally, it's worth noting that when $\Delta \le \frac{\log 2}{2\pi}$, the $c_n$ sum in Equation \ref{eqn:sincsquared_sum} is empty. Thus we have the following:\n\begin{corollary}[GRH]\nLet $E/\QQ$ have conductor $N_E$. Let $\eta$ be the Euler-Mascheroni constant $=0.5772\ldots$, and let $\gamma$ range over the nontrivial zeros of $L_E(s)$. Then\n\begin{equation}\n\sum_{\gamma} \sinc^2\left(\frac{\log 2}{2\pi} \cdot \gamma \right) = \frac{\log N_E}{\log 2} + K,\n\end{equation}\nwhere $K = \frac{\pi^2}{6(\log 2)^2} - \frac{2\eta}{\log 2} - 2\frac{\log \pi}{\log 2} - 1 = -2.54476987\ldots$ is a global constant that is independent of $E$.\n\end{corollary}\n\begin{proof}\nEvaluate Equation \ref{eqn:sincsquared_sum} at $\Delta = \frac{\log 2}{2\pi}$ and simplify, noting that $\Li_2\left(\frac{1}{2}\right) = \frac{\pi^2}{12} - \frac{(\log 2)^2}{2}$.\n\end{proof}\n\n\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{Rank Estimation Fidelity and Choosing How to Scale $\Delta$}\n\nObserve that for a fixed choice of $\Delta$, evaluating Equation \ref{eqn:sincsquared_sum} has runtime that is almost independent of the conductor of $E$ (it should scale with some power of $\log N_E$ due to the complexity of basic arithmetic operations). 
However, as conductor increases the tightness of the provided bound decreases -- we can see this from the first term, which adds a positive bias to the sum proportional to $\log N_E$, a growth that is not mirrored in the average rank of curves as conductor increases. \\\n\n\begin{definition}\nThe {\it fidelity} of a sinc$^2$ sum rank estimation with a given choice of $\Delta$ is the average tightness of the rank bound as a function of the conductor of the curve in question. Specifically, we may define\n\begin{equation}\n\mbox{fid}(\Delta,N) = \mbox{mean}\set{\left(\sum_{\Lambda_E(1+i\gamma)=0}\sinc^2(\gamma \Delta)\right) - r_E:\;\; N_E = N},\n\end{equation}\nwhere $E$ ranges over all rational elliptic curves with conductor $N$. Loosely, we may think of the fidelity of a given choice of $\Delta$ and $N$ as the expected accuracy of the rank estimate, or the chance that the sum is tight (e.g. within 2 of the true rank, since we are always assuming parity is known) for a curve of conductor $N_E=N$. \n\end{definition}\n\nAs noted above, for a fixed curve fidelity increases as $\Delta$ increases, but for fixed $\Delta$ fidelity {\it decreases} as the conductor of the curve in question increases. It follows that $\Delta$ should scale with $N_E$ in order to obtain estimates of constant fidelity. The natural question to ask then, given the statement of Equation \ref{eqn:sincsquared_sum}, is: how large does $\Delta$ need to be such that $\sum_{\Lambda_E(1+i\gamma)=0}\sinc^2(\gamma \Delta) < r_E+2$? \\\n\nEvaluation of the sum is dominated by the final sum over the $c_n$ coefficients, whose length is exponential in $\Delta$, so we must be judicious in the choice of $\Delta$. Experimentally, we found that choosing $\Delta(E) = \alpha \cdot \log N_E$ for any constant value of $\alpha$ produces estimates of asymptotically constant fidelity. Such a choice makes the contribution from the first two terms in Equation \ref{eqn:sincsquared_sum} -- $\frac{1}{\Delta \pi}\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right) + \frac{1}{2\pi^2 \Delta^2}\left(\frac{\pi^2}{6} - \Li_2\left(e^{-2\pi \Delta}\right)\right)$ -- asymptotically constant, so net bias in the sum does not increase as $N_E$ increases. \\\n\n\begin{figure}[!h]\n    \centering\n    \includegraphics[width=1.0\textwidth]{graphics/rkub_ne_rk.png}\n    \caption{The cumulative proportion of curves in the Cremona database for which the sinc$^2$ rank bound was not within 2 of the true rank of the curve, using the scaling $\Delta(E) = \frac{1}{\pi}\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right)$. }\n    \label{fig:rkub_ne_rk}\n\end{figure}\n\nTo generate Figure \ref{fig:rkub_ne_rk}, we used the scaling $\Delta(E) = \frac{1}{\pi}\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right)$, and computed rank bounds on the entire Cremona database of all rational elliptic curves up to conductor 350000; this scaling was chosen so that the bias coming from the first term in the sinc$^2$ sum was always exactly $1$. It was found that the resulting bounds were within 2 of the true rank in 99.75\% of cases. The 4000 or so curves for which the bound exceeded $r_E+2$ all possess anomalously low-lying zeros that ``look like central zeros'' when small values of $\Delta$ are used. 
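For illustration, this scaling is easy to tabulate. The short Python sketch below (names ours) computes $\Delta(E) = \frac{1}{\pi}\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right)$ together with the implied number $e^{2\pi\Delta}$ of $c_n$ terms for a few conductor sizes; it reproduces the $\Delta \approx 51.1$ value required for the conductor $\sim 3.4\times10^{141}$ curve discussed later in this section.
\begin{verbatim}
# Sketch: the Delta scaling that sets the leading bias term of the
# sinc^2 sum, (1/(Delta*pi))*(-eta + log(sqrt(N_E)/(2*pi))), to 1.
import math

ETA = 0.5772156649015329

def delta_unit_bias(N_E):
    return (-ETA + math.log(math.sqrt(N_E) / (2 * math.pi))) / math.pi

for N_E in (350000.0, 1e11, 3.4e141):
    d = delta_unit_bias(N_E)
    # e^{2 pi Delta} = N_E * e^{-2 eta} / (4 pi^2): an O~(N_E)-term sum
    print("%.2e  Delta = %5.2f  c_n terms ~ %.2e"
          % (N_E, d, math.exp(2 * math.pi * d)))
\end{verbatim}
The table this prints makes the trade-off plain: constant-fidelity scaling costs $\softO(N_E)$ time, which is why being able to exploit zero repulsion and get away with a much smaller $\Delta$, as discussed below, matters for large-conductor curves.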
\\\n\nSince the number of terms in the $c_n$ sum in Equation \ref{eqn:sincsquared_sum} is $e^{2\pi\Delta}$, choosing $\Delta(E) = \alpha \cdot \log N_E$ means that evaluating the sum will have $\softO\left((N_E)^{2\pi \alpha} \right)$ runtime. That is, the scaling choice used to generate Figure \ref{fig:rkub_ne_rk} yielded an $\softO(N_E)$ computation time. In general, runtime can be made to be $\softO\left((N_E)^{\epsilon}\right)$ for any $\epsilon>0$, at the expense of lowering the fidelity of the bound. \\\n\nIt is worth noting explicitly that the accuracy of the sinc$^2$ sum rank estimate is sensitive to low-lying zeros. Thus if it is known a priori that the $L$-function of a particular curve does not have any low-lying zeros, a smaller value of $\Delta$ can be used. This fact is exploited in \cite{Bob-2011}, where Bober uses the method on curves of very large rank. There is a well-known phenomenon of zero repulsion in $L$-functions -- zeros tend not to fall as close to each other as could be expected if they were distributed purely randomly on the critical line -- and as such curves with large rank tend to have their lowest zeros significantly higher up in the upper half plane than would be expected otherwise. \\\n\nThis, for example, allowed Bober to use a $\Delta$ value of only $3.2$ to show that a curve with 28 independent points had analytic rank at most 30. The conductor of the curve in question is roughly $3.4\times 10^{141}$, so using the scaling $\Delta(E) = \frac{1}{\pi}\left(-\eta + \log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right)$ would require a $\Delta$ value of about $51.1$. \\\n\nA related question we can of course ask is: how large does $\Delta$ have to be for the $\sinc^2$ sum rank bound to have perfect fidelity, i.e. guaranteed to be less than $r_E + 1$? We will answer this question at the end of Section \ref{sec:bite}, though in a way that requires knowledge of an extra invariant attached to $E$, namely the bite $\beta_E$.\n\n\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{The Distribution of Nontrivial Zeros}\n\nThough not needed to establish Equation \ref{eqn:sincsquared_sum}, we may also use zero sums to provide bounds on the density, and estimates on the distribution and expected location, of the nontrivial zeros of $\Les$ as a function of the curve's conductor.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\subsection{Explicit Bounds on Zero Density on the Critical Line}\n\nWe start by investigating the density of zeros on the critical line. We will see that zero density scales with $\log N_E$; because the explicit formula is used extensively in this section, all results are of course taken to be contingent on GRH. \\\n\nTo this end, we define the {\it zero counting function}, which counts the number of zeros on the critical line up to a given bound:\n\begin{definition}\label{defn:zero_counting_function}\nFor non-negative $t$, let $M_E(t)$ be the modified non-trivial zero counting function for $\Les$, i.e.\n\begin{equation}\nM_E(t) := \sideset{}{\pr}\sum_{|\gamma| \le t} \frac{1}{2},\n\end{equation}\nwhere $\gamma$ runs over the imaginary parts of nontrivial zeros of $L_E(s)$, and the prime indicates that zeros with $|\gamma| = t$ are taken with half weight. 
The central zero is taken with multiplicity $r_E$, where $r_E$ is the analytic rank of $E$.\n\end{definition}\n\nNote that $M_E(0) = \frac{r_E}{2}$, and the function jumps by 1 across the locations of nontrivial zeros, since noncentral zeros come in conjugate pairs and (by GRH) are always simple. \\\n\nWe may obtain bounds on $M_E(t)$ via the shifted completed logarithmic derivative $\ldLam{1+s}$. Our workhorse theorem places tight constraints on the sum $\sum_\gamma \frac{\sigma}{\sigma^2+(\gamma-\tau)^2}$ for positive $\sigma$ and real $\tau$. This is a shifted Cauchy distribution-type sum, giving us information on the density of zeros with imaginary part near $\tau$.\n\begin{theorem}[GRH]\n\label{thm:poisson_sum_bound}\nLet $E$ be an elliptic curve with conductor $N_E$ and $L$-function $L_E(s)$. Let $\sigma > \frac{1}{2}$ and $\tau \in \RR$, and let $\gamma$ range over the imaginary parts of the nontrivial zeros of $L_E(s)$. Then\n\begin{equation}\n \left| \sum_\gamma \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} - \left[\log\left(\frac{\sqrt{N_E}}{2\pi}\right) + \Re\left(\digamma(1+\sigma+i \tau)\right)\right] \right| < - 2\frac{\zeta\pr}{\zeta}\left(\frac{1}{2}+\sigma\right).\n\end{equation}\n\end{theorem}\n\n\begin{proof}\nLet $s = \sigma + i \tau$. By Equation \ref{eqn:logderiv_relation} and Corollary \ref{cor:Re_logderiv} we have\n\begin{align*}\n\sum_{\gamma} \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} &= \Re\left(\ldLam{1+s}\right) \\\n&= \log\left(\frac{\sqrt{N_E}}{2\pi}\right) + \Re\left(\digamma(1+\sigma+i \tau)\right) + \Re\left(\sum_n c_n n^{-s}\right).\n\end{align*}\nBut by Lemma \ref{lem:ldLe_bound}\n\begin{equation*}\n\left|\Re\left(\sum_n c_n n^{-s}\right)\right| \le \left|\ldLe{1+s}\right| < - 2\frac{\zeta\pr}{\zeta}\left(\frac{1}{2}+\sigma\right),\n\end{equation*}\nso\n\begin{equation*}\n\left|\sum_\gamma \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} - \log\left(\frac{\sqrt{N_E}}{2\pi}\right) - \Re\left(\digamma(1+\sigma+i \tau)\right)\right| < - 2\frac{\zeta\pr}{\zeta}\left(\frac{1}{2}+\sigma\right).\n\end{equation*}\n\end{proof}\nNote that $\frac{\zeta\pr}{\zeta}$ decays rapidly with increasing $\sigma$, i.e. we have\n\begin{equation*}\n\sum_\gamma \frac{\sigma}{\sigma^2+(\gamma-\tau)^2} - \left[\log\left(\frac{\sqrt{N_E}}{2\pi}\right) + \Re\left(\digamma(1+\sigma+i \tau)\right)\right] = O(\sigma^{-c})\n\end{equation*}\nfor any $c>0$. \\\n\n\begin{corollary}[GRH]\label{cor:zero_density}\nLet $M_E(t)$ be the zero counting function as given in Definition \ref{defn:zero_counting_function}. 
Then for $t \\ge 1$ we have\n\\begin{equation}\nM_E(t) \\le t\\log N_E +2t\\log(t+1).\n\\end{equation}\nIf we restrict to $t>~1.32$ we may further simplify this to\n\\begin{equation}\nM_E(t) \\le t\\log N_E +2t\\log t.\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nTake the right inequality in Theorem \\ref{thm:poisson_sum_bound} with $\\sigma = t$ and $\\tau=0$, yielding\n\\begin{equation*}\n\\sum_{\\gamma} \\frac{t}{t^2+\\gamma^2} \\le \\frac{1}{2}\\log N_E - \\log 2\\pi + \\digamma(1+t) - 2\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{1}{2}+t\\right),\n\\end{equation*}\nsince the digamma function is real on the real axis. \\\\\n\nNote that for $t \\ge 1$ we have $-\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{1}{2}+t\\right) < -\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{3}{2}\\right) = 1.50523\\ldots$, since $-\\frac{\\zeta\\pr}{\\zeta}$ is decreasing with increasing $t$. Observe that this does not exceed $\\log 2\\pi = 1.83787\\ldots$  \\\\\n\nNow $\\digamma(1+t) < \\log(1+t)$ for $t\\ge0$. For the second inequality in the theorem, note that $-\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{3}{2}\\right) - \\log 2\\pi = 0.33264\\ldots$, and $\\digamma(1+t) \\le \\log(t) + 0.33264\\ldots$ when $t\\ge 1.32255\\ldots$ \\\\\n\nFinally, we have that $\\sum_{\\gamma} \\frac{t}{t^2+\\gamma^2} \\ge \\frac{1}{2t}M_E(t)$, since all zeros in the interval $[-t,t]$ are counted with weight at least $\\frac{1}{2t}$. Combining the above observations and collecting constants completes the proof.\n\\end{proof}\n\nIt is also useful to have an upper bound on the number of zeros in a given unit interval on the critical line.\n\\begin{corollary}[GRH]\\label{cor:zeros_on_unit_interval}\nFor $t\\ge1$, $M_E(t+1)-M_E(t)$ gives the number of zeros $\\gamma$ with $t < |\\Im(\\gamma)| \\le t+1$. We have\n\\begin{equation}\\label{eqn:zeros_on_unit_interval}\nM_E(t+1)-M_E(t) < \\frac{5}{4}\\log(N_E) + \\frac{5}{2}\\log(t+1).\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nProceed as before, but now taking the right inequality in theorem \\ref{thm:poisson_sum_bound} with $\\sigma = 1$ and $\\tau=t+\\frac{1}{2}$. Observe that\n\\begin{equation*}\n\\sum_{\\gamma}\\frac{1}{1+\\left(\\gamma-t-\\frac{1}{2}\\right)^2} \\ge \\frac{4}{5}\\cdot\\frac{\\left(M_E(t+1)-M_E(t)\\right)}{2},\n\\end{equation*}\nsince we are only counting zeros in the upper half plane. Also note that similar to before,\\\\\n$\\digamma\\left(\\frac{3}{2}+i\\left(t+\\frac{1}{2}\\right)\\right) - \\log 2\\pi - 2\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{3}{2}\\right) < \\log(t+1)$ for $t \\ge 1$.\n\\end{proof}\nIn a similar manner we derive for $t \\ge 1$ that\n\\begin{equation}\\label{eqn:zeros_in_unit_ball}\n\\#\\set{\\gamma:\\;|\\gamma-t|\\le 1} < \\log(N_E) + 2\\log(t+1)\n\\end{equation}\n(note that unlike equation \\ref{eqn:zeros_on_unit_interval}, the above only considers zeros with non-negative imaginary part). \\\\\n\nFinally, recall the definition of the bite of $E$: $\\beta_E = \\sum_{\\gamma \\ne 0} \\frac{1}{\\gamma^2}$. 
We may use \\ref{thm:poisson_sum_bound} to place an explicit bound on the tail end of the inverse square sum via the following:\n\\begin{corollary}[GRH]\\label{cor:sum_tail_bound}\nFor $\\sigma \\ge 1$ and $\\tau \\in \\RR$ we have\n\\begin{equation}\n\\sum_{|\\gamma-\\tau|>\\sigma} \\frac{1}{(\\gamma-\\tau)^2} \\le \\frac{\\log(N_E) + 2\\log(|\\tau|+1)}{\\sigma}.\n\\end{equation}\nSpecifically, when $\\tau=0$ we have\n\\begin{equation}\n\\sum_{|\\gamma|>\\sigma} \\frac{1}{\\gamma^2} \\le \\frac{\\log(N_E)}{\\sigma}.\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nObserve that\n\\begin{equation*}\n\\frac{\\sigma}{2}\\cdot\\sum_{|\\gamma-\\tau|>\\sigma} \\frac{1}{(\\gamma-\\tau)^2} \\le \\sum_{|\\gamma-\\tau|>\\sigma} \\frac{\\sigma}{\\sigma^2+(\\gamma-\\tau)^2} \\le\\sum_{\\gamma} \\frac{\\sigma}{\\sigma^2+(\\gamma-\\tau)^2},\n\\end{equation*}\nand the rightmost sum is bounded by $\\frac{1}{2}\\log N_E + \\log(|\\tau|+1)$ as in the work above. \\\\\n\\end{proof}\n\nEquation \\ref{thm:poisson_sum_bound} may also used to put lower bounds on the above quantities in some cases, which we hope to pursue in future work.\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%\n\\subsection{The Expected Number of Zeros up to $t$}\n\nCorollary \\ref{cor:zero_density} gives us explicit bounds on zero density, but the bound of $t \\log N_E$ is not tight: empirically we see $M_E(t)$ grow more like $\\frac{t}{2}\\log N_E$; i.e. a factor of $2$ slower. We may again use the explicit formula to come up with a more accurate expansion for the zero counting function, at the expense of have a more nebulous error term that resists attempts to put explicit bounds on it.\n\n\\begin{proposition}[GRH]\nWe have\n\\begin{equation}\\label{eqn:M_E(t)}\nM_E(t) = \\frac{1}{\\pi}\\left[\\left(-\\eta+\\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right) t + \\sum_{k=1}^{\\infty} \\left(\\frac{t}{k} - \\arctan\\left(\\frac{t}{k}\\right)\\right) + \\sum_{n=1}^{\\infty} \\frac{c_n}{\\log n}\\cdot \\sin(t\\log n)\\right].\n\\end{equation}\nConvergence on the RHS is pointwise with respect to $t$ for both sums; for fixed $t$ convergence for the sums over $k$ and $n$ is absolute and conditional respectively (and {\\it extremely} slow for the latter).\n\\end{proposition}\n\n\\begin{proof}\nObserve we may write $M_E(t) = \\sum_{\\gamma}f_t(\\gamma)$, where\n\\begin{equation}\nf_t(x) = \\begin{cases} \\frac{1}{2}, & |x|<t \\\\ \\frac{1}{4}, & |x| = t \\\\ 0 & |x|> t. \\end{cases}\n\\end{equation}\nInformally, we obtain the above formula by integrating both sides of Equation \\ref{eqn:exp_form_3} against $f = f_t(x)$, noting that $\\hat{f}_t(y) = \\frac{\\sin(ty)}{y}$. The integrals in the sum over $k$ give us no issue and we may swap the order of the integral and summation signs, since convergence there is absolute. However, some care must be taken when it comes to the sum over $n$, since here convergence is only conditional. \n\nFormally, we must write $M_E(t)$ as a path integral of $\\ldLam{1+s}$ on the path\n\\begin{equation*}\n\\epsilon-it \\mapsto \\epsilon+it \\mapsto -\\epsilon+it \\mapsto -\\epsilon-it \\mapsto \\epsilon-it\n\\end{equation*}\nfor some $\\epsilon>0$, and invoke the Cauchy Residue Theorem. We may then shrink $\\epsilon$ to zero (assuming GRH) to obtain that the RHS of \\ref{eqn:M_E(t)} converges point wise to $M_E(t)$ as $m$ and $n \\to \\infty$.\n\\end{proof}\n\nEquation \\ref{eqn:M_E(t)} may be though of having two components. 
The first two terms comprise a smooth part that gives the ``expected number of zeros'' up to $t$, while the trigonometric sum over $n$ comprises the discretization information that yields the zeros' exact values relative to their general expected location. We expect the trigonometric sum to be zero infinitely often, and asymptotically it should be positive as often as it is negative. As such the sum should average out to zero and shouldn't contribute any asymptotic bias to the density of zeros on the critical line. We can therefore talk in a real sense of the expected number of zeros up to $t$, which is given by\n\begin{equation}\label{eqn:M_E_smooth_part}\n\frac{1}{\pi}\left[\left(-\eta+\log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right) t + \sum_{k=1}^{\infty} \left(\frac{t}{k} - \arctan\left(\frac{t}{k}\right)\right)\right].\n\end{equation}\n\nMoreover, the trigonometric sum should grow very slowly with $t$. Put more formally, we have the following:\n\begin{conjecture}[GRH]\label{conj:trig_sum_size}\nWe have\n\begin{equation}\n\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(t\log n) = O(\log t).\n\end{equation}\n\end{conjecture}\n\n\begin{figure}[!h]\n    \centering\n    \includegraphics[width=1.0\textwidth]{graphics/M_E_trig_sum_bounds.png}\n    \caption{The oscillating sum $\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(t\log n)$ for the curve with Cremona label $389a$ (with equation $y^2 + y = x^3 + x^2 - 2x$) versus $\pm \log(t)$ for $0\le t \le 200$. Numerically we actually see the maximum value of the sum grow slower than $\log(t)$ -- possibly $\log(t)^{\alpha}$ for some $0<\alpha<1$, or even $\log\log(t)$.}\n    \label{fig:M_E_trig_sum_bounds}\n\end{figure}\n\nWe won't say much more about bounding the error term in this thesis (or attempt to prove anything about its magnitude), since it requires more advanced analytical tools not mentioned or developed here. \\\n\n\begin{lemma}\label{lem:arctan_sum_size}\nFor $t \gg 0$, \n\begin{equation}\n\sum_{k=1}^{\infty} \left(\frac{t}{k} - \arctan\left(\frac{t}{k}\right)\right) = t\log t + (\eta-1)t + \frac{\pi}{4} + O\left(\frac{1}{t}\right),\n\end{equation}\nwhere $\eta = 0.5772\ldots$ is the Euler-Mascheroni constant.\n\end{lemma}\n\begin{proof}\nWe have\n\begin{equation*}\n\sum_{k=1}^{\infty} \left(\frac{t}{k} - \arctan\left(\frac{t}{k}\right)\right) = \int_{0}^{t} \sum_{k=1}^{\infty} \frac{x^2}{k(k^2+x^2)} \; dx = \int_{0}^{t} \Re\left(\digamma(1+ix) + \eta\right) \; dx,\n\end{equation*}\nwhere $\digamma(z)$ is the digamma function on $\CC$. Now for $x \gg 0$ we have the following asymptotic expansion for the real part of the digamma function along the line $\Re(z)=1$:\n\begin{equation}\n\Re\left(\digamma(1+ix)\right) = \log x + \frac{1}{12} x^{-2} + O(x^{-4}).\n\end{equation}\nHence $\int_{0}^{t} \Re\left(\digamma(1+ix)\right) \; dx = t(\log t - 1)  + O(1)$. 
The constant term of $\\frac{\\pi}{4}$ comes from integrating the difference between $\\Re\\left(\\digamma(1+ix)\\right)$ and $\\log x$ between $0$ and $\\infty$:\n\\begin{equation*}\n\\int_{0}^{\\infty} \\left[\\Re\\left(\\digamma(1+ix)\\right) - \\log x\\right] \\; dx = \\frac{\\pi}{4}.\n\\end{equation*}\nThe result follows.\n\\end{proof}\n\nConjecture \\ref{conj:trig_sum_size} and lemma \\ref{lem:arctan_sum_size} combine to give us a precise asymptotic statement on the distribution of zeros up to $t$, in the same vein as von Mangoldt's asymptotic formula for the number of zeros up to $t$ for $\\zeta$:\n\n\\begin{theorem}[GRH]\\label{thm:zero_density}\nLet $E$ have conductor $N_E$. Then for $t\\gg0$ we have\n\\begin{equation}\\label{eqn:zero_density}\nM_E(t) = \\frac{t}{\\pi} \\, \\log\\left(\\frac{t\\sqrt{N_E}}{2\\pi e}\\right) + \\frac{1}{4} + O(\\log t),\n\\end{equation}\nwhere the error term is positive as often as it negative and contributes no net bias.\n\\end{theorem}\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/M_E_389.png}\n    \\caption{The number of zeros up to $t$ versus $\\frac{t}{\\pi}\\left[\\log\\left(\\frac{t\\sqrt{N_E}}{2\\pi}\\right) -1 \\right] + \\frac{1}{4}$ for the Cremona curve $389a$. The match up is extremely good.}\n    \\label{fig:M_E_389}\n\\end{figure}\n\n\\begin{corollary}[GRH]\nFor $t\\gg0$, the number of zeros on the critical line in a unit interval\n\\begin{equation}\nM_E(t)-M_E(t-1) = \\frac{1}{\\pi}\\log\\left(\\frac{t\\sqrt{N_E}}{2\\pi}\\right) + O(\\log t),\n\\end{equation}\nwhere again the error term contributes no net bias.\n\\end{corollary}\n\nThat is, zero density on the critical line grows like $\\frac{1}{2}\\log N_E + \\log t$, where $N_E$ is the conductor of $E$ and $t$ the distance from the real axis. \\\\\n\nNeglecting the oscillating error term in Equation \\ref{eqn:zero_density}, we may solve for $t$ in terms of the Lambert $W$-function to obtain an explicit formula for the expected value of the imaginary part of the $n$th zero on the critical line. Recall the definition of the Lambert $W$-function: if $y = x e^x$, then $x = W(y)$. $W$ is a multiple-valued function; we make use of the principle branch $W_0$ below:\n\\begin{corollary}[GRH]\\label{cor:gamma_n_approx_value}\nLet $\\gamma_n := \\gamma_n(E)$ be the imaginary value of the $n$th nontrivial (and noncentral) zero in the upper half plane of $\\Les$ with analytic rank $r_E$. Then\n\\begin{equation}\\label{approx:gamma_n}\n\\gamma_n \\sim \\frac{2\\pi e}{\\sqrt{N_E}} \\cdot \\exp \\left(W_0\\left[\\left(\\frac{r_E}{2} +n - \\frac{3}{4}\\right)\\cdot \\frac{\\sqrt{N_E}}{2 e}\\right]\\right),\n\\end{equation}\nin the sense that for a given curve, the difference between the above value and the true value of $\\gamma_n$ will on average be zero as $n \\to \\infty$.\n\\end{corollary}\n\\begin{proof}\nObserve that the $n$th nontrivial noncentral zero has imaginary part $t$ when $M_E(t) = \\frac{r}{2} + n - \\frac{1}{2}$ (since the final zero is counted with half weight). 
Hence using Equation \\ref{eqn:zero_density} sans the oscillating error term, we solve for $t$ in\n\\begin{equation*}\n\\frac{t}{\\pi} \\, \\log\\left(\\frac{t\\sqrt{N_E}}{2\\pi e}\\right) + \\frac{1}{4} = \\frac{r_E}{2} + n - \\frac{1}{2}.\n\\end{equation*}\n\\end{proof}\n\n[Aside: The principle branch of the Lambert $W$-function has the asymptotic expansion $W_0(x) = \\log x - \\log \\log x + o\\left(1\\right)$, for $n \\gg 0$ we recover the known asymptotic for the location of the $n$th nontrivial zero: $\\gamma_n = O\\left(\\frac{n}{\\log n} \\right)$. Better yet, after some manipulation the asymptotic expansion gives us the proportionality constant explicitly:\n\\begin{equation}\n\\lim_{n \\to \\infty} \\frac{\\gamma_n}{\\frac{n}{\\log n}} = \\pi.\n\\end{equation}\nNote, however, that the convergence rate is slow: $O(\\frac{1}{\\log n})$, and the constant in front scales with the log of the conductor of $E$.] \\\\\n\nA natural question to ask, given that we now have an expected value for $\\gamma_n$, is: how much does the imaginary part of the $n$th zero deviate from its expected location? To this end we define the {\\it dispersion} of the $n$th zero:\n\\begin{definition}\nThe dispersion $\\delta_n(E) := \\delta_n$ of the imaginary part of the $n$th nontrivial zero in the upper half plane is the difference between the true and predicted values of $\\gamma_n$, i.e.\n\\begin{equation}\n\\delta_n = \\gamma_n - \\frac{2\\pi e}{\\sqrt{N_E}} \\cdot \\exp \\left(W_0\\left[\\left(\\frac{r_E}{2} +n - \\frac{3}{4}\\right)\\cdot \\frac{\\sqrt{N_E}}{2 e}\\right]\\right).\n\\end{equation}\n\\end{definition}\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/389a_zero_dispersions_scatterplot.png}\n    \\caption{A scatter plot of zero dispersions for the first 1000 nontrivial zeros of the Cremona curve 389a, the rank 3 curve with smallest conductor. The values are seldom more than $\\frac{1}{2}$.}\n    \\label{fig:389a_zero_dispersions_scatterplot}\n\\end{figure}\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/389a_zero_dispersions_cumulative_average.png}\n    \\caption{A cumulative average plot of the above, showing clearly that asymptotically, the average difference between the predicted and true values of $\\gamma_n$ is zero. The positive bias at the beginning comes from the $O(1/t)$ term in Lemma \\ref{lem:arctan_sum_size}. Interestingly, although the deviations might a priori appear completely random, there is a clear oscillating structure in the average, and the line about which the oscillation occurs appears to decrease to zero from above.}\n    \\label{fig:zero_dispersions_cumulative_average}\n\\end{figure}\n\nEven though the above graph demonstrates that the zero dispersions are clearly not random, when viewed as a i.i.d. time series, the dispersions appear be normally distributed. \\\\\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/389a_zero_dispersions_histagram.png}\n    \\caption{A histogram of zero dispersions for the curve 389 for the 1000th through 11000th zeros (we discard the first 1000 zeros to avoid the small-height bias observable in the cumulative average plot above). }\n    \\label{fig:389a_zero_dispersions_histagram}\n\\end{figure}\n\nFor the data set used the graph above, the mean was $3.16\\times10^{-5}$ (a good indicator that the expected value formula contains no systematic bias), standard deviation $0.1566$. 
We applied the Shapiro-Wilk normality test on batches of 1000 consecutive zero dispersions, and obtained $p$-values in excess of $0.2$ (and most of the time in excess of $0.5$) in all cases. Moreover, the standard deviation appears to decrease with increasing $n$: the computed standard deviations decreased uniformly from $0.1745$ for the $n=1000$ to $2000$ dispersion set, to $0.1464$ in the $n=10000$ to $11000$ set. We hope to pursue this investigation in future work. \\\n\nFinally, we may also go in the other direction and use Equation \ref{eqn:M_E(t)} to make a guess as to the expected imaginary part of the {\it lowest} noncentral nontrivial zero of $\Les$ as a function of increasing conductor $N_E$:\n\begin{proposition}[GRH]\nFor a curve $E$ with large conductor $N_E$ and analytic rank $r_E$, the best guess for the imaginary part of the first nontrivial noncentral zero $\gamma_0$ of $\Les$ in the upper half plane is\n\begin{equation}\n\gamma_0 = \frac{(r_E+1)\pi}{\log(N_E) -2\log(2\pi) -2\eta}.\n\end{equation}\n\end{proposition}\nThe derivation is similar to before. The location of the first nontrivial noncentral zero is given by the value of $t$ for which $M_E(t)$ jumps from $r_E/2$ to $r_E/2+1$; at that point $M_E(t) = r_E/2 + 1/2 = \frac{r_E+1}{2}$, so the expected value of $\gamma_0$ is given by setting Equation \ref{eqn:M_E(t)} equal to $\frac{r_E+1}{2}$ and solving for $t$. \\\n\nNow, however, $\frac{1}{\pi}\sum_{k=1}^{\infty} \left[\frac{t}{k} - \arctan\left(\frac{t}{k}\right)\right]$ is $O(t^3)$ for small $t$, so the quantity expressed in Equation \ref{eqn:M_E_smooth_part} is dominated by the $\frac{1}{\pi}\left(-\eta+\log\left(\frac{\sqrt{N_E}}{2\pi}\right)\right) t$ term when $N_E$ is large. Solving for $t$ yields the desired value.\n\n\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{The Bite}\label{sec:bite}\n\nMazur and Stein in \cite{MaSt-2013} define the {\it bite} of an elliptic curve:\n\begin{definition}\label{defn:bite}\nThe {\it bite} of $\Les$ is\n\begin{equation}\n\beta(E) := \sum_{\gamma \ne 0} \gamma^{-2},\n\end{equation}\nwhere the sum runs over the imaginary parts of all {\it noncentral} nontrivial zeros of $\Les$.\n\end{definition}\nThis quantity is of interest for a variety of reasons: it controls the rate of convergence in many explicit formula-type sums for $\Les$, and is intimately linked with the analytic rank and leading central Taylor coefficient for the $L$-series of $E$. Again, the explicit dependence on $E$ may be left as understood if the choice of $E$ is unambiguous, or we may subsume the dependence on $E$ into a subscript and write $\beta_E$. In this final section we establish some bounds involving the bite, show how one can compute it efficiently without having to compute the locations of the zeros of $\Les$ explicitly, and give some zero sum examples relevant to this thesis where the bite comes into play. \\\n\nSince sums of inverse higher powers of zeros also crop up, we generalize the notion of bite as follows:\n\begin{definition}\nFor positive integer $n$, the {\it higher order bite} of order $n$ for $\Les$ is\n\begin{equation}\n\beta_n(E) := \sum_{\gamma \ne 0} \gamma^{-n}.\n\end{equation}\n\end{definition}\nThus $\beta_2(E) = \beta_E$ as defined previously. Note also that $\beta_n = 0$ for any odd $n$, since zeros come in conjugate pairs. 
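Before turning to the Laurent expansion, we note that the tail bound of Corollary \ref{cor:sum_tail_bound} already lets one bracket $\beta_E$ numerically from a finite list of low-lying zeros. The Python sketch below (names ours) assumes the imaginary parts of the noncentral zeros up to some height $T \ge 1$ have been computed externally, e.g. via Sage's $L$-function functionality:
\begin{verbatim}
# Sketch: bracket the bite beta_E = sum_{gamma != 0} 1/gamma^2 given
# the zeros up to height T, using the tail bound
#     sum_{|gamma| > T} 1/gamma^2 <= log(N_E)/T     (T >= 1, GRH).
import math

def bite_bracket(gammas, N_E, T):
    # `gammas`: imaginary parts of noncentral zeros with 0 < gamma <= T
    partial = 2 * sum(1.0 / g**2 for g in gammas)   # conjugate pairs
    return partial, partial + math.log(N_E) / T
\end{verbatim}
The true value of $\beta_E$ lies between the two returned numbers; the slow $1/T$ decay of the tail is precisely why the zero-location-free computation of the bite promised above is worth developing.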
\\\\\n\nEquation \\ref{eqn:ldLam_sum} gives us a description of the Laurent expansion of $\\ldLam{1+s}$ about zero:\n\\begin{proposition}[GRH]\\label{prop:ldLam_series_at_zero}\nThe Laurent expansion of $\\ldLam{1+s}$ about zero is given by\n\\begin{align}\n\\ldLam{1+s} &= \\frac{r_E}{s} + \\beta_2\\cdot s - \\beta_4\\cdot s^3 + \\beta_6\\cdot  s^5 - \\ldots \\\\\n& = \\frac{r_E}{s} + \\sum_{k=1}^{\\infty} (-1)^{k-1}\\beta_{2k}\\cdot s^{2k-1},\n\\end{align}\nand this converges for $|s|<\\gamma_0$, where $\\gamma_0$ is the imaginary part of the lowest noncentral nontrivial zero of $\\Les$ in the upper half plane.\n\\end{proposition}\nThe proof of this follows immediately by expanding the sum in Equation \\ref{eqn:ldLam_sum} and collecting terms. \\\\\n\n\\begin{corollary}[GRH]\\label{cor:ldLe_expansion}\nLet $E/\\QQ$ have conductor $N_E$, $L$-function $\\Les$ with bite $\\beta_E = \\beta_2(E)$ and central leading coefficient $C_E\\pr$. Let the Taylor series expansion of $L_E$ about the central point be\n\\begin{equation}\nL_E(1+s) = C_E\\pr \\, s^{r_E}\\left[1 + a\\cdot s + b\\cdot s^2 + O(s^3)\\right]\n\\end{equation}\nThen \n\\begin{align}\na &= -\\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right] \\\\\n2b &= \\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right]^2 - \\frac{\\pi^2}{6} + \\beta_E,\n\\end{align}\nwhere $\\eta$ is the Euler-Mascheroni constant $= 0.5772\\ldots$.\n\\end{corollary}\n\n\\begin{proof}\nWe note that the digamma function has the following Taylor expansion about $s=1$:\n\\begin{equation}\n\\digamma(1+s) = -\\eta - \\sum_{k=1}^{\\infty} (-1)^k \\zeta(k+1) s^k,\n\\end{equation}\nwhere $\\eta$ is the Euler-Mascheroni constant, and $\\zeta(s)$ is the Riemann zeta function. \\\\\nThus by equation \\ref{eqn:logderiv_relation} and Proposition \\ref{prop:ldLam_series_at_zero} we have that\n\\begin{equation*}\n\\ldLe{1+s} = \\frac{r_E}{s} - \\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right] + \\left[-\\zeta(2) + \\sum_{\\gamma\\ne0} \\gamma^{-2}\\right]\\cdot s + O(s^2).\n\\end{equation*}\nBut if $L_E(1+s) = C_E\\pr \\, s^{r_E}\\left[1 + a\\cdot s + b\\cdot s^2 + O(s^3)\\right]$, then careful logarithmic differentiation yields\n\\begin{equation*}\n\\ldLe{1+s} = \\frac{r_E}{s} + a + \\left(-a^2 + 2b\\right)\\cdot s + O(s^2).\n\\end{equation*}\nComparing terms and solving for the relevant quantities produces the desired formulae.\n\\end{proof}\nWe may continue in the same vein to produce formulae for higher order coefficients of $L_E(s)$. As can be seen from above, these can in general be written in terms of sums of powers of the quantity $\\left[-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right]$ (which is the constant term in the Laurent series of $\\ldLam{1+s}$), inverse sums of even powers of the nontrivial zeros, and $\\zeta(n)$ for $n$ a positive integer. \\\\\n\nIn other words, $\\beta_E$ and higher-order bites encode information about higher order terms in the Taylor expansion of $L_E(1+s)$; moreover, the Taylor series thereof contains no new information about the curve's attached invariants beyond that which can be found in the first nonzero coefficient and the bites $\\beta_{2n}(E)$. Whether the bites do indeed have any arithmetic significance, however, is an open question. \\\\\n\nAs can be seen from the above, the bite of a curve is of interest is due to it being intimately linked with the leading central Taylor coefficient $C_E$ of $\\Lambda_E(1+s)$ and the (analytic) rank $r_E$. 
We may link the three quantities explicitly with a suite of inequalities derived from point estimates on the $L$-function of $E$ and the logarithmic derivative thereof. First, we will need the following technical lemma:\n\\begin{lemma}[GRH]\\label{lem:bite_rank_sigma_inequality}\nLet $\\sigma > \\frac{1}{2}$. The bite $\\beta_E$ and analytic rank $r_E$ of a curve $E$ obey\n\\begin{equation}\n\\sigma \\cdot \\beta_E + \\frac{r_E}{\\sigma} > \\frac{1}{2} \\log N_E + \\digamma(1+\\sigma)-\\log(2\\pi) - 2\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{1}{2}+\\sigma\\right),\n\\end{equation}\nwhere $\\digamma(s) = \\frac{\\Gamma\\pr}{\\Gamma}(s)$ is the digamma function on $\\CC$, $\\zeta(s)$ is the Riemann zeta function and $N_E$ is the conductor of $E$.\n\\end{lemma}\n\\begin{proof}\nFrom Equation \\ref{thm:poisson_sum_bound}, letting $\\tau=0$, we get\n\\begin{equation}\n\\sum_\\gamma \\frac{\\sigma}{\\sigma^2+\\gamma^2} > \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right) + \\digamma(1+\\sigma) - 2\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{1}{2}+\\sigma\\right).\n\\end{equation}\nBut\n\\begin{align*}\n\\sum_\\gamma \\frac{\\sigma}{\\sigma^2+\\gamma^2} &= \\frac{1}{\\sigma} \\sum_\\gamma \\frac{1}{1+\\left(\\frac{\\gamma}{\\sigma}\\right)^2} \\\\\n&< \\frac{1}{\\sigma}\\left(r_E + \\sum_{\\gamma\\ne 0} \\frac{1}{\\left(\\frac{\\gamma}{\\sigma}\\right)^2} \\right) \\\\\n&= \\frac{r_E}{\\sigma} + \\sigma\\cdot \\beta_E.\n\\end{align*}\nCombining inequalities completes the proof.\n\\end{proof}\n\nWe then have the following:\n\\begin{proposition}[GRH]\\label{prop:beta_C_r_bounds}\nLet $\\beta_E$, $C_E$, $r_E$ and $N_E$ be the bite, completed $L$-function leading central Taylor coefficient, analytic rank and conductor of $E$ respectively. Then\n\\begin{align}\n(1+\\beta_E)\\cdot C_E & < 0.173 \\cdot N_E, \\\\\n\\beta_E + \\log C_E &> \\frac{1}{2} \\log N_E - 5.229,  \\\\\n\\beta_E + r_E & > \\frac{1}{2} \\log N_E - 4.426.\n\\end{align}\n\\end{proposition}\n\\begin{proof}\nThe third inequality is a specialization of Lemma \\ref{lem:bite_rank_sigma_inequality} with $\\sigma=1$, with the conductor-independent terms lumped together into one numerical value. The first two inequalities come from the Hadamard product of the completed $L$-function (Equation \\ref{eqn:Lams_prod}) evaluated one unit to the right of the central point:\n\\begin{equation}\n\\Lambda_E(2) = C_E\\cdot \\prod_{\\gamma \\ne 0} \\left(1+\\frac{1}{\\gamma^2}\\right),\n\\end{equation}\nnoting that $1+\\beta_E < \\prod_{\\gamma \\ne 0} \\left(1+\\frac{1}{\\gamma^2}\\right) < e^{\\beta_E}$. On the other hand from the definition of the completed $L$-function we have $\\Lambda_E(2) = N_E \\cdot (2\\pi)^{-2}\\cdot  L_E(2)$. Inequality \\ref{ineq:bounds_on_Les} has that $\\frac{\\zeta(3)^2}{\\zeta(\\frac{3}{2})^2} < L_E(2) < \\zeta\\left(\\frac{3}{2}\\right)^2$; combining inequalities and collecting constant terms in the respective inequalities completes the two results.\n\\end{proof}\nThe take-away from Proposition \\ref{prop:beta_C_r_bounds} is that the bite and the leading Taylor coefficient $C_E$ cannot both be very large or very small simultaneously relative to the conductor. Similarly, the larger the rank of a curve, the smaller $\\beta_E$ can be relative to $N_E$. \\\\\n\nWe can go even further and establish a lower bound on $\\beta_E$ in terms of $N_E$ independent of $C_E$ and $r_E$, at the expense of introducing a non-explicit constant. 
As asymptotic zero density on the critical line grows proportional to $\\log N_E$ (see Theorem \\ref{thm:zero_density}), we expect the bite to grow at least like $\\log N_E$ too, regardless of the limiting behavior of zeros near the central point. This is indeed the case:\n\\begin{proposition}[GRH]\nFor all $\\epsilon>0$ there is a constant $K_{\\epsilon}>0$ such that for all elliptic curves $E$, the bite of $E$ obeys\n\\begin{equation}\\label{eqn:bite_lower_bound}\n\\beta_E = \\sum_{\\gamma\\ne 0} \\frac{1}{\\gamma^2} > \\frac{1}{1+\\epsilon} \\log N_E - K_{\\epsilon}.\n\\end{equation}\nwhere $N_E$ is the conductor of $E$.\n\\end{proposition}\n\\begin{proof}\nWe again invoke Lemma \\ref{lem:bite_rank_sigma_inequality} to observe that\n\\begin{equation}\n\\beta_E > \\frac{1}{2\\sigma} \\log N_E - \\frac{r_E}{\\sigma^2} + \\frac{1}{\\sigma}\\left[\\digamma(1+\\sigma)-\\log(2\\pi) + 2\\frac{\\zeta\\pr}{\\zeta}\\left(\\frac{1}{2}+\\sigma\\right)\\right],\n\\end{equation}\nwhere the term in the square brackets is independent of $N_E$ and is finite for any $\\sigma>\\frac{1}{2}$. By Corollary \\ref{cor:rank_slower_than_log_N}, the rank $r_E$ grows slower than any multiple  of $\\log N_E$. Hence for $\\epsilon > 0$ we may, for example, take $r_E < \\epsilon^2 \\log N_E + K\\pr(\\epsilon^2)$ for some constant $K\\pr$ dependent on $\\epsilon^2$, and then let $\\sigma = \\frac{1}{2}+\\frac{1}{2}\\epsilon$. This allows the constant in front of the collected $\\log N_E$ term to be made arbitrarily close to 1 from below, while all other terms sum to a finite value independent of $E$.\n\\end{proof}\n\nIn reality we expect the bite to grow faster than $\\log N_E$ -- as zero density scales with $\\log N_E$, the sum of the inverse squares thereof should na\\\"{i}vely be expected to grow with $(\\log N_E)^2$. However, this is discounting any unusual behaviour of zeros near the central point. Sarnak has mentioned in private correspondence that it's believed that the lowest noncentral zero $\\gamma_0$ actually approaches a constant limiting distribution as conductor goes to infinity, which would in turn decrease any lower bound that can be placed on the bite. \\\\\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/bites_vs_conductors_array.png}\n    \\caption{The bites of all curves in the Cremona tables were computed using the above method. Above is a scatterplot of $\\log \\beta_E $ vs. $\\log N_E$ for curves of rank 0, 1, 2 and 3 respectively. }\n    \\label{fig:bites_vs_conductors_array}\n\\end{figure}\n\nOne can see from Figure \\ref{fig:bites_vs_conductors_array} that the bite obeys a sharp lower bound with respect to the conductor, but the upper bound is somewhat less tight. More interesting is the fact that the lower bound appears the same regardless of rank, while curves with anomalously large bites are predominantly rank $0$. This makes sense: large bites correspond to very low-lying zeros, and because of the well-documented zero repulsion effect, this usually only happens when there are no zeros at the central point. \\\\\n\nHow does one go about computing the bite of an elliptic curve? The na\\\"{i}ve way would be to compute the location of the $n$ zeros up to some bound and then add up their inverse squares to get an approximation of $\\beta_E$. This can indeed be done, for example via Rubinstein's {\\tt lcalc} package. However it is slow and inefficient, and one will always introduce some truncation error via this method.  
\\\\\n\nInstead, the following result allows us to relate the bite directly to the leading Taylor coefficient $C_E$ and higher derivatives of $\\Lambda_E(s)$ at the central point:\n\\begin{proposition}[GRH]\\label{prop:bite_times_leading_coeff}\nLet $E$ have completed $L$-function $\\Lams$ and analytic rank $r_E$. Then\n\\begin{equation}\n\\beta_E\\cdot C_E = \\frac{\\Lambda_E^{(r_E+2)}(1)}{(r_E+2)!},\n\\end{equation}\nwhere $\\beta_E$ is the bite of $E$, and $C_E$ is the leading coefficient of $\\Lams$ at the central point.\n\\end{proposition}\n\\begin{proof}\nFrom equation \\ref{eqn:Lams_prod} we have that \n\\begin{equation}\n\\Lambda_E(1+s) = C_E\\left(s^{r_E} + \\beta_E s^{r_E+2} + O(s^{r_E+4})\\right).\n\\end{equation}\nDifferentiating $r_E+2$ times and evaluating at $s=0$ achieves the desired result.\n\\end{proof}\n\nIt is worth noting explicitly that one cannot hope to be able to compute the bite of a curve without knowing its analytic rank -- we have to know how many zeros are precisely at the central point and not just $\\epsilon$ away from the central point, otherwise $\\beta_E$ could be arbitrarily large. This obstruction notwithstanding, Proposition \\ref{prop:bite_times_leading_coeff} gives us a straightforward way to compute the bite of $E$ from the $r_E$th and $(r_E+2)$th Taylor coefficients of $L_E(1+s)$:\n\\begin{corollary}[GRH]\n\\begin{align}\\\n\\beta_E \\quad &= \\frac{1}{(r_E+1)(r_E+2)} \\cdot \\frac{\\Lambda_E^{(r_E+2)}(1)}{\\Lambda_E^{(r_E)}(1)} \\label{eqn:bite_via_Lams} \\\\\n&= \\frac{2}{(r_E+1)(r_E+2)} \\cdot \\frac{L_E^{(r_E+2)}(1)}{L_E^{(r_E)}(1)} - \\left(-\\eta+\\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right)^2 + \\frac{\\pi^2}{6} \\label{eqn:bite_via_L_E}.\n\\end{align}\n\\end{corollary}\n\\begin{proof}\nThe first line follows immediately from Proposition \\ref{prop:bite_times_leading_coeff} and the fact that $C_E = \\frac{\\Lambda^{(r_E)}(1)}{r_E!}$. The second line comes from the formula for the $(r_E+2)$th Taylor coefficient of $L_E$ at the central point derived in Corollary \\ref{cor:ldLe_expansion}.\n\\end{proof}\nWe can therefore compute the bite of a curve {\\it without} having to compute the locations of the zeros themselves. Moreover, Theorem \\ref{thm:main_theorem} implies that the bite can be provably computed to $k$ bits precision in $\\softO(k\\cdot \\sqrt{N_E})$ time (assuming standard conjectures). This may be done, for example, via Tim Dokchitser's {\\tt computel} PARI code, which can compute the Taylor series expansion of a motivic $L$-function at a given point. [Important side-note: the aforementioned package uses approximations that have not (yet) been shown to be provably correct; however, one could certainly write code to compute in square root time the central Taylor expansion of $\\Les$ via the work of Bradshaw in \\cite{Bra-2010}, which {\\it does} produce provably correct $L$-function values.] \\\\\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{graphics/beta_times_C_vs_conductors_array.png}\n    \\caption{A scatter plot of $\\log (\\beta_E \\cdot C_E) $ vs. $\\log N_E$ for all curves up to conductor 350000, differentiated by rank. The constrained nature of the quantity $\\beta_E \\cdot C_E$ is readily apparent. }\n    \\label{fig:bites_vs_conductors_array}\n\\end{figure}\n\nWe finish off this section with a result giving an indication of one other area in which the bite of a curve comes into play: controlling convergence rates in explicit formula type sums. 
This is a topic worthy of its own paper, so we shall just present two examples relevant to the work in this thesis.\n\n\\begin{theorem}[GRH]\\label{thm:sinc_squared_sum_with_bite}\nLet $\\sum_{\\gamma} \\sinc^2(\\Delta \\gamma)$ be the $\\sinc^2$ sum for $E$ with parameter $\\Delta$, as detailed in Equation \\ref{eqn:sincsquared_sum}. If we let $\\Delta = \\frac{1}{\\pi}\\cdot \\sqrt{\\frac{\\beta_E}{n}}$, then the sum will evaluate to a value less than than $r_E+n$, where $r_E$ is the analytic rank of $E$.\n\\end{theorem}\n\\begin{corollary}[GRH]\\label{cor:sinc_squared_sum_with_bite}\nThe analytic rank of $E$ is the largest integer less than\n\\begin{equation}\\label{eqn:sinc_squared_sum_with_bite}\n\\frac{1}{\\sqrt{\\beta_E}}\\left[\\left(-\\eta + \\log\\left(\\frac{\\sqrt{N_E}}{2\\pi}\\right)\\right)+ \\frac{1}{2 \\sqrt{\\beta_E}}\\left(\\frac{\\pi^2}{6} - \\Li_2\\left(e^{-2\\sqrt{\\beta_E}}\\right)\\right) + \\sum_{\\log n<2\\sqrt{\\beta_E}} c_n \\cdot \\left(1-\\frac{\\log n}{2\\sqrt{\\beta_E}}\\right)\\right].\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nWe note that \n\\begin{equation}\n\\sum_{\\gamma} \\sinc^2(\\Delta \\gamma) = r_E + \\sum_{\\gamma\\ne 0} \\sinc^2(\\Delta \\gamma) <  r_E + \\sum_{\\gamma\\ne 0} \\frac{1}{(\\pi \\Delta \\gamma)^2} = r_E + \\frac{1}{\\pi^2 \\Delta^2} \\cdot \\beta_E.\n\\end{equation}\nSo choosing $\\Delta = \\frac{1}{\\pi}\\cdot \\sqrt{\\frac{\\beta_E}{n}}$ bounds the sum value from above by $r_E + n$. The corollary follows immediately from Equation \\ref{eqn:sincsquared_sum} the case with $n=1$.\n\\end{proof}\n\nThis formula comes with one giant caveat that renders it of little use practically -- it requires knowing the bite of a curve, which of course in itself requires knowing the curve's analytic rank a priori. Nevertheless, the above serves to underscore that determining the bite and determining the analytic rank of a curve are computationally equivalent: we can compute the bite knowing the rank via Equations \\ref{eqn:bite_via_Lams} or \\ref{eqn:bite_via_L_E}, and we can compute the rank knowing the bite via Formula \\ref{eqn:sinc_squared_sum_with_bite}. \\\\\n\nUsing the bite we have an even simpler way to compute the analytic rank of an elliptic curve:\n\\begin{theorem}[GRH]\\label{thm:compute_rank_by_logderiv}\n\\begin{equation}\nr_E = \\left\\lfloor\\frac{1}{\\sqrt{\\beta_E}}\\cdot \\ldLam{1+\\frac{1}{\\sqrt{\\beta_E}}}\\right\\rfloor.\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nBy Equation \\ref{eqn:ldLam_sum}, the Hadamard product expansion of $\\ldLam{1+s}$ gives us \\begin{equation}\ns\\cdot \\ldLam{1+s} = s\\cdot \\prod_{\\gamma} \\frac{s}{s^2+\\gamma^2} = r_E + \\prod_{\\gamma\\ne 0} \\frac{1}{1+(\\frac{\\gamma}{s})^2} < r_E + s^2\\cdot \\beta_E.\n\\end{equation}\nSo, analogous to the method used in the proof of Theorem \\ref{thm:sinc_squared_sum_with_bite}, evaluating at $s = \\sqrt{\\frac{n}{\\beta_E}}$ gives a real value bounded by $r_E+n$.\n\\end{proof}\n\nAgain, there are subtle issues present with this formula. Apart from again having to know the bite of a curve, even though we can evaluate $\\Lambda_E(s)$ and its derivative to any given precision in $\\softO(\\sqrt{N_E})$ time, the same is not true for the logarithmic derivative. Namely, we may encounter destructive precision loss near the central point if $r_E$ has high analytic rank and/or low-lying zeros. 
We therefore caution against using this method to determine analytic rank willy-nilly, as in our mind it does {\\it not} constitute a method to compute rank provably without doing more work.\n", "meta": {"hexsha": "9fdb04c7cb02e6b5ac0d7b51fa40903309a689a7", "size": 76004, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/5_zero_sums.tex", "max_stars_repo_name": "haikona/thesis", "max_stars_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/5_zero_sums.tex", "max_issues_repo_name": "haikona/thesis", "max_issues_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/5_zero_sums.tex", "max_forks_repo_name": "haikona/thesis", "max_forks_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.2464403067, "max_line_length": 808, "alphanum_fraction": 0.7108441661, "num_tokens": 24179, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.8031737892899222, "lm_q1q2_score": 0.572046222269433}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath,mathdots}\n\n\\def\\ud{{\\rm\\,d}}\n\\def\\ii{{\\rm i}}\n\\def\\fl{{\\rm\\,fl}}\n\n\\def\\abs#1{\\left|{#1}\\right|}\n\\def\\norm#1{\\left\\|{#1}\\right\\|}\n\\def\\conj#1{\\overline{#1}}\n\n\n\\begin{document}\n\n\\title{FastTransforms Documentation}\n\n\\author{Richard Mika\\\"el Slevinsky\\thanks{Email: Richard.Slevinsky@umanitoba.ca}}\n\n\\maketitle\n\n\\section{Introduction}\n\n{\\tt FastTransforms} is a C library based on the solutions of the two-dimensional harmonic polynomial connection problems in~\\cite{Slevinsky-ACHA-17,Slevinsky-1711-07866} that have an $\\mathcal{O}(n^3)$ run-time, where $n$ is the polynomial degree, and are $2$-normwise backward stable.\n\nThe transforms are separated into computational kernels that offer SSE, AVX, and AVX-512 vectorization on applicable Intel processors, and driver routines that are easily parallelized by OpenMP.\n\n\\section{What {\\tt FastTransforms} actually computes}\n\nFor every subsection below, the title of the subsection, of the form \\verb+a2b+, refers conceptually to the transform and the available functions are as follows:\n\\begin{itemize}\n\\item \\verb+plan_a2b+, is a pre-computation,\n\\item \\verb+execute_a2b+, is a forward execution,\n\\item \\verb+execute_b2a+, is a backward execution,\n\\item \\verb+execute_a_hi2lo+, is a conversion to a tensor-product basis,\n\\item \\verb+execute_a_lo2hi+, is a conversion from a tensor-product basis,\n\\item \\verb+kernel_a_hi2lo+, is an orthonormal conversion from high to low order,\n\\item \\verb+kernel_a_lo2hi+, is an orthonormal conversion from low to high order.\n\\end{itemize}\nThe \\verb+execute_*+ functions are drivers that perform transforms as defined below. They are composed of computational kernels, \\verb+kernel_*+ functions, that may be assembled differently.\n\n\\subsection{{\\tt sph2fourier}}\n\nSpherical harmonics are:\n\\begin{equation}\nY_\\ell^m(\\theta,\\varphi) = \\frac{e^{\\ii m\\varphi}}{\\sqrt{2\\pi}} (-1)^{\\abs{m}}\\sqrt{(\\ell+\\tfrac{1}{2})\\frac{(\\ell-\\abs{m})!}{(\\ell+\\abs{m})!}} P_\\ell^{\\abs{m}}(\\cos\\theta),\n\\end{equation}\nwhere $P_\\ell^m(\\cos\\theta)$ are the associated Legendre functions. 
A degree-$n$ expansion in spherical harmonics is given by:\n\\begin{equation}\nf_n(\\theta,\\varphi) = \\sum_{\\ell=0}^{n}\\sum_{m=-\\ell}^{+\\ell} f_\\ell^m Y_\\ell^m(\\theta,\\varphi).\n\\end{equation}\nIf spherical harmonic expansion coefficients are organized into the array:\n\\begin{equation}\nF = \\begin{pmatrix}\nf_0^0 & f_1^{-1} & f_1^1 & f_2^{-2} & f_2^2 & \\cdots & f_n^{-n} & f_n^n\\\\\nf_1^0 & f_2^{-1} & f_2^1 & f_3^{-2} & f_3^2 & \\cdots & 0 & 0\\\\\n\\vdots & \\vdots & \\vdots &  \\vdots &  \\vdots & \\iddots & \\vdots & \\vdots\\\\\nf_{n-2}^0 & f_{n-1}^{-1} & f_{n-1}^1 & f_n^{-2} & f_n^2 &  & \\vdots & \\vdots\\\\\nf_{n-1}^0 & f_n^{-1} & f_n^1 & 0 & 0 & \\cdots & 0 & 0\\\\\nf_n^0 & 0 & 0 & 0 & 0 & \\cdots & 0 & 0\\\\\n\\end{pmatrix},\n\\end{equation}\nthen {\\tt sph2fourier} returns the bivariate Fourier coefficients:\n\\begin{equation}\nG = \\begin{pmatrix}\ng_0^0 & g_0^{-1} & g_0^1 & \\cdots & g_0^{-n} & g_0^n\\\\\ng_1^0 & g_1^{-1} & g_1^1 & \\cdots & g_1^{-n} & g_1^n\\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\ng_{n-1}^0 & g_{n-1}^{-1} & g_{n-1}^1& \\cdots & g_{n-1}^{-n} & g_{n-1}^n\\\\\ng_n^0 & 0 & 0 & \\cdots & g_n^{-n} & g_n^n\\\\\n\\end{pmatrix}.\n\\end{equation}\nThat is:\n\\begin{equation}\ng_n(\\theta,\\varphi) = \\sum_{\\ell=0}^n\\sum_{m=-n}^{+n} g_\\ell^m \\frac{e^{\\ii m\\varphi}}{\\sqrt{2\\pi}} \\left\\{\\begin{array}{lr} \\cos(\\ell\\theta) & m~{\\rm even},\\\\ \\sin((\\ell+1)\\theta) & m~{\\rm odd}.\\end{array}\\right.\n\\end{equation}\nSince {\\tt sph2fourier} only transforms columns of the arrays, the routine is indifferent to the choice of longitudinal basis; it may be complex exponentials or sines and cosines, with no particular normalization.\n\n\\subsection{{\\tt spinsph2fourier}}\n\nSpin-weighted spherical harmonics are:\n\\begin{align}\nY_{\\ell,m}^s(\\theta,\\varphi) & = \\frac{e^{\\ii m\\varphi}}{\\sqrt{2\\pi}} \\sqrt{(\\ell+\\tfrac{1}{2})\\dfrac{(\\ell+\\ell_0)!(\\ell-\\ell_0)!}{(\\ell+\\ell_1)!(\\ell-\\ell_1)!}}\\nonumber\\\\\n& \\times \\sin^{\\abs{m+s}}(\\tfrac{\\theta}{2})\\cos^{\\abs{m-s}}(\\tfrac{\\theta}{2}) P_{\\ell-\\ell_0}^{(\\abs{m+s},\\abs{m-s})}(\\cos\\theta).\n\\end{align}\nwhere $P_n^{(\\alpha,\\beta)}(\\cos\\theta)$ are the Jacobi polynomials and $\\ell_0 = \\max\\{\\abs{m},\\abs{s}\\}$ and $\\ell_1 = \\min\\{\\abs{m},\\abs{s}\\}$. 
A degree-$n$ expansion in spin-weighted spherical harmonics is given by:\n\\begin{equation}\nf_n^s(\\theta,\\varphi) = \\sum_{\\ell=\\ell_0}^{n}\\sum_{m=-\\ell}^{+\\ell} f_\\ell^m Y_{\\ell,m}^s(\\theta,\\varphi).\n\\end{equation}\nIf spin-weighted spherical harmonic expansion coefficients with $s=2$, for example, are organized into the array:\n\\begin{equation}\nF = \\begin{pmatrix}\nf_2^0 & f_2^{-1} & f_2^1 & f_2^{-2} & f_2^2 & \\cdots & f_n^{-n} & f_n^n\\\\\nf_3^0 & f_3^{-1} & f_3^1 & f_3^{-2} & f_3^2 & \\cdots & 0 & 0\\\\\n\\vdots & \\vdots & \\vdots &  \\vdots &  \\vdots & \\iddots & \\vdots & \\vdots\\\\\nf_{n}^0 & f_{n}^{-1} & f_{n}^1 & f_n^{-2} & f_n^2 &  & \\vdots & \\vdots\\\\\n0 & 0 & 0 & 0 & 0 & \\cdots & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & \\cdots & 0 & 0\\\\\n\\end{pmatrix},\n\\end{equation}\nthen {\\tt spinsph2fourier} returns the bivariate Fourier coefficients:\n\\begin{equation}\nG = \\begin{pmatrix}\ng_0^0 & g_0^{-1} & g_0^1 & \\cdots & g_0^{-n} & g_0^n\\\\\ng_1^0 & g_1^{-1} & g_1^1 & \\cdots & g_1^{-n} & g_1^n\\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\ng_{n-1}^0 & g_{n-1}^{-1} & g_{n-1}^1& \\cdots & g_{n-1}^{-n} & g_{n-1}^n\\\\\ng_n^0 & 0 & 0 & \\cdots & g_n^{-n} & g_n^n\\\\\n\\end{pmatrix}.\n\\end{equation}\nThat is:\n\\begin{equation}\ng_n(\\theta,\\varphi) = \\sum_{\\ell=0}^n\\sum_{m=-n}^{+n} g_\\ell^m \\frac{e^{\\ii m\\varphi}}{\\sqrt{2\\pi}} \\left\\{\\begin{array}{lr} \\cos(\\ell\\theta) & m+s~{\\rm even},\\\\ \\sin((\\ell+1)\\theta) & m+s~{\\rm odd}.\\end{array}\\right.\n\\end{equation}\nSince {\\tt spinsph2fourier} only transforms columns of the arrays, the routine is indifferent to the choice of longitudinal basis; it may be complex exponentials or sines and cosines, with no particular normalization.\n\n\\subsection{{\\tt tri2cheb}}\n\nTriangular harmonics are:\n\\begin{equation}\n\\tilde{P}_{\\ell,m}^{(\\alpha,\\beta,\\gamma)}(x,y) = (2(1-x))^m \\tilde{P}_{\\ell-m}^{(2m+\\beta+\\gamma+1,\\alpha)}(2x-1) \\tilde{P}_m^{(\\gamma,\\beta)}\\left(\\frac{2y}{1-x}-1\\right),\n\\end{equation}\nwhere the tilde implies that the univariate Jacobi polynomials are orthonormal. 
A degree-$n$ expansion in triangular harmonics is given by:\n\\begin{equation}\nf_n(x,y) = \\sum_{\\ell=0}^{n}\\sum_{m = 0}^\\ell f_\\ell^m \\tilde{P}_{\\ell,m}^{(\\alpha,\\beta,\\gamma)}(x,y).\n\\end{equation}\nIf triangular harmonic expansion coefficients are organized into the array:\n\\[\nF = \\begin{pmatrix}\nf_0^0 & f_1^1 & f_2^2 & \\cdots & f_n^n\\\\\n%f_1^0 & f_2^1 & f_3^2 & \\cdots & 0\\\\\n\\vdots & \\vdots &  \\vdots & \\iddots & 0\\\\\nf_{n-2}^0 & f_{n-1}^1 & f_n^2 & \\iddots & \\vdots\\\\\nf_{n-1}^0 & f_n^1 & 0 & \\cdots & 0\\\\\nf_n^0 & 0 & 0 & \\cdots & 0\\\\\n\\end{pmatrix},\n\\]\nthen {\\tt tri2cheb} returns the bivariate Chebyshev coefficients:\n\\[\nG = \\begin{pmatrix}\ng_0^0 & g_0^1 & \\cdots & g_0^n\\\\\ng_1^0 & g_1^1 & \\cdots & g_1^n\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\n%g_{n-1}^0 & g_{n-1}^1& \\cdots & g_{n-1}^n\\\\\ng_n^0 & g_n^1 & \\cdots & g_n^n\\\\\n\\end{pmatrix}.\n\\]\nThat is:\n\\[\ng_n(x,y) = \\sum_{\\ell=0}^n\\sum_{m=0}^n g_\\ell^m T_\\ell(2x-1) T_m\\left(\\frac{2y}{1-x}-1\\right).\n\\]\n\n\\subsection{{\\tt disk2cxf}}\n\nDisk harmonics are Zernike polynomials:\n\\begin{equation}\nZ_\\ell^m(r,\\theta) = \\sqrt{2\\ell+2} r^{\\abs{m}}P_{\\frac{\\ell-\\abs{m}}{2}}^{(0,\\abs{m})}(2r^2-1)\\frac{e^{\\ii m\\theta}}{\\sqrt{2\\pi}}.\n\\end{equation}\nA degree-$2n$ expansion in disk harmonics is given by:\n\\begin{equation}\nf_{2n}(r,\\theta) = \\sum_{\\ell=0}^{2n}\\sum_{m=-\\ell,2}^{+\\ell} f_\\ell^m Z_\\ell^m(r,\\theta),\n\\end{equation}\nwhere the $,2$ in the inner summation index implies that the inner summation runs from $m=-\\ell$ in steps of $2$ up to $+\\ell$. If disk harmonic expansion coefficients are organized into the array:\n\\begin{equation}\nF = \\begin{pmatrix}\nf_0^0 & f_1^{-1} & f_1^1 & f_2^{-2} & f_2^2 & \\cdots & f_{2n}^{-2n} & f_{2n}^{2n}\\\\\nf_2^0 & f_3^{-1} & f_3^1 & f_4^{-2} & f_4^2 & \\cdots & 0 & 0\\\\\n\\vdots & \\vdots & \\vdots &  \\vdots &  \\vdots & \\iddots & \\vdots & \\vdots\\\\\nf_{2n-4}^0 & f_{2n-3}^{-1} & f_{2n-3}^1 & f_{2n-2}^{-2} & f_{2n-2}^2 &  & \\vdots & \\vdots\\\\\nf_{2n-2}^0 & f_{2n-1}^{-1} & f_{2n-1}^1 & f_{2n}^{-2} & f_{2n}^2 & \\cdots & 0 & 0\\\\\nf_{2n}^0 & 0 & 0 & 0 & 0 & \\cdots & 0 & 0\\\\\n\\end{pmatrix},\n\\end{equation}\nthen {\\tt disk2cxf} returns the even Chebyshev--Fourier coefficients:\n\\begin{equation}\nG = \\begin{pmatrix}\ng_0^0 & g_0^{-1} & g_0^1 & g_0^{-2} & g_0^2 & \\cdots & g_0^{-2n} & g_0^{2n}\\\\\ng_2^0 & g_2^{-1} & g_2^1 & g_2^{-2} & g_2^2 & \\cdots & g_2^{-2n} & g_2^{2n}\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\ng_{2n-2}^0 & g_{2n-2}^{-1} & g_{2n-2}^1 & g_{2n-2}^{-2} & g_{2n-2}^2 & \\cdots & g_{2n-2}^{-2n} & g_{2n-2}^{2n}\\\\\ng_{2n}^0 & 0 & 0 & g_{2n}^{-2} & g_{2n}^2 & \\cdots & g_{2n}^{-2n} & g_{2n}^{2n}\\\\\n\\end{pmatrix}.\n\\end{equation}\nThat is:\n\\begin{equation}\ng_{2n}(r,\\theta) = \\sum_{\\ell=0}^{n}\\sum_{m=-2n}^{+2n} g_{2\\ell}^m \\frac{e^{\\ii m\\theta}}{\\sqrt{2\\pi}} \\left\\{\\begin{array}{lr} T_{2\\ell}(r) & m~{\\rm even},\\\\ T_{2\\ell+1}(r) & m~{\\rm odd}.\\end{array}\\right.\n\\end{equation}\nSince {\\tt disk2cxf} only transforms columns of the arrays, the routine is indifferent to the choice of azimuthal basis; it may be complex exponentials or sines and cosines, with no particular normalization.\n\n\\begin{thebibliography}{1}\n\n\\bibitem{Slevinsky-ACHA-17}\nR.~M. 
Slevinsky.\n\\newblock Fast and backward stable transforms between spherical harmonic\n  expansions and bivariate {F}ourier series.\n\\newblock {\\em Appl. Comput. Harmon. Anal.}, 2017.\n\n\\bibitem{Slevinsky-1711-07866}\nR.~M. Slevinsky.\n\\newblock Conquering the pre-computation in two-dimensional harmonic polynomial\n  transforms.\n\\newblock arXiv:1711.07866, 2017.\n\n\\end{thebibliography}\n\n\\end{document}", "meta": {"hexsha": "1aac984800954eedca71199614cd64582f0dc049", "size": 9782, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/docs.tex", "max_stars_repo_name": "dawese/FastTransforms", "max_stars_repo_head_hexsha": "4b1ef29c950c61e4a6344951b9627267e69cf43f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-06-15T02:50:55.000Z", "max_stars_repo_stars_event_max_datetime": "2018-06-15T02:50:55.000Z", "max_issues_repo_path": "docs/docs.tex", "max_issues_repo_name": "dawese/FastTransforms", "max_issues_repo_head_hexsha": "4b1ef29c950c61e4a6344951b9627267e69cf43f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/docs.tex", "max_forks_repo_name": "dawese/FastTransforms", "max_forks_repo_head_hexsha": "4b1ef29c950c61e4a6344951b9627267e69cf43f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4854368932, "max_line_length": 286, "alphanum_fraction": 0.6419955019, "num_tokens": 4109, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737869342623, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5720462156857025}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\marginpar{Friday\\\\ 2021-12-17}\n\nLast time we reached the equations \\(\\overline{h}^{\\mu 0}= 0\\), as well as \nthe quadrupole formula \n%\n\\begin{align}\n\\overline{h}^{n k} (t, \\vec{r}) &= \\frac{2G}{c^{4}r} \\dv[2]{}{\\tau } q^{n k} (t - r/c)  \\\\\nq^{n k}(t) &= \\frac{1}{c^2} \\int \\dd[3]{x} x^{n} x^{k} T^{00} (t, \\vec{x})\n\\,.\n\\end{align}\n\nThe prefactor \\(G/c^{4} \\approx \\SI{e-55}{s^2 / g cm}\\) is very small! \n\nBecause of the conservation of momentum, the dipole contribution \\(\\propto\\ddot{d}\\) vanishes. \nAlso, objects with a constant quadrupole moment do not emit. \n\nIn order to move the waveform to the TT gauge we need to use projectors: \na projector for vectors is \\(P_{ij} = \\delta_{ij} - n_i n_j = 0\\). \nFrom it, we can define \n%\n\\begin{align}\nP_{jkmn} = P_{jm} P_{kn} - \\frac{1}{2} P_{jk} P_{mn}\n\\,.\n\\end{align}\n\nThis object projects rank-2 tensors into their traceless and transverse part. \nIt has the following properties: \n%\n\\begin{align}\nP_{jklm} &= P_{lmjk}  \\\\\nP_{jkmn} P_{mnrs} &= P_{jkrs}  \\\\\nn^i P_{jkmn} &= n^k P_{jkmn} = n^n P^{jkmn} = n^{m} P^{jkmn} = 0  \\\\\n\\delta^{jk} P_{jkmn} &= \\delta^{mn} P_{jkmn} = 0   \n\\,.\n\\end{align}\n\nWith this, we can find \n%\n\\begin{align}\nh^{TT}_{ij} \n= P_{ijlm} \\overline{h}_{lm}\n= P_{ijlm} h_{lm}\n\\,.\n\\end{align}\n\nWith this machinery, we can then write \n%\n\\begin{align}\nh^{TT}_{nk} (t, r) = \\frac{2G}{c^{4} r} \\dv[2]{}{t} Q^{TT}_{n k}(t -r /c)\n\\,,\n\\end{align}\n%\nwhere \\(Q^{TT}_{n k} = P_{n k i j} q_{ij} \\). \n\n\\subsection{Binary system GW emission}\n\nWe take two masses \\(m_1 \\) and \\(m_2 \\), with initial separation \\(\\ell_0\\). \nWe define the total mass \\(M = m_1 + m_2 \\) and the reduced mass \\(\\mu = (1/m_1 + 1/m_2 )^{-1}\\). \n\nThe Keplerian frequency reads: \n%\n\\begin{align}\n\\omega _k = \\sqrt{ \\frac{GM}{\\ell_0^3}}\n\\,.\n\\end{align}\n\nIf the orbit is assumed to be circular, we will have positions changing in the \\(xy\\) plane\nas \n%\n\\begin{align}\nx_1 &= \\frac{m_2 \\ell_0 }{M} \\cos \\omega _k t \\\\\ny_1 &= \\frac{m_2 \\ell_0 }{M} \\sin \\omega _k t \\\\\nx_2 &= -\\frac{m_1 \\ell_0 }{M} \\cos \\omega _k t \\\\\ny_2 &= -\\frac{m_1 \\ell_0 }{M} \\sin \\omega _k t \\\\\n\\,.\n\\end{align}\n\nThe stress-energy tensor will read \n%\n\\begin{align}\nT^{00} = c^2 \\sum _{i} m_i \\delta(x - x_i) \\delta (y- y_i) \\delta (z)\n\\,.\n\\end{align}\n%\nWe can then compute the components of the quadrupole tensor:\n%\n\\begin{align}\nq_{xx} &= \\frac{1}{c^2} \\int \\dd[3]{x} x_x x_x \\left( c^2 \\sum _{i} m_i \\delta(x - x_i) \\delta (y- y_i) \\delta (z) \\right)  \\\\\n&= m_1 x_1^2 + m_2 x_2^2  \\\\\n&= \\mu \\ell_0^2 \\cos^2 (\\omega _k t) = \\frac{\\mu}{2} \\ell_0^2 \\cos( 2 \\omega _k t) + \\text{const}\n\\,.\n\\end{align}\n\nThe calculation for the other components is similar, and we get \n%\n\\begin{align}\nq_{yy} &= -\\frac{\\mu}{2} \\ell_0^2 \\cos(2 \\omega _k t)  + \\text{const}  \\\\\nq_{xy} &= \\frac{\\mu}{2} \\ell_{0}^2 \\sin(2 \\omega _k t)\n\\,.\n\\end{align}\n\nThe trace of the quadrupole is constant! 
\nWhen we compute the second derivative for the strain we find \n%\n\\begin{align}\nh^{TT}_{ij} = - \\frac{2G}{c^{4}r } \\frac{\\mu \\ell_0^2}{2} (2 \\omega _k)^2 P_{ijkl}A_{kl}\n\\,,\n\\end{align}\n%\nwhere \n%\n\\begin{align}\nA_{kl} = \\left[\\begin{array}{ccc}\n\\cos(2 \\omega _k t) & \\sin(2 \\omega _k t) & 0 \\\\ \n\\sin(2 \\omega _k t) & - \\cos(2 \\omega _k t) & 0 \\\\ \n0 & 0 & 0\n\\end{array}\\right]\n\\,.\n\\end{align}\n\nThe wave emitted in the \\(z\\) direction is \\emph{circularly} polarized, \nwhile in the \\(x\\) direction it is \\emph{linearly} polarized. \n\nFor PSR 1913+16, the total mass was roughly \\(\\num{2.8}M_{\\odot}\\), \nthe separation was \\(\\ell_0 \\sim \\SI{1.9e5}{km}\\), \nthe frequency of gravitational waves was \\(f _{\\text{GW}} \\sim \\SI{e-5}{Hz}\\). \n\nThe amplitude is about \\(h_0 \\sim (4 \\mu M G^2 / \\ell_0 c^{4} r) \\approx \\num{6e-22}\\). \n\nWhat is the emitted power? It is \n%\n\\begin{align}\nL _{\\text{GW}} = \\frac{G}{5c^{5}} \\expval{\\dot{\\ddot{Q}}_{km} \\dot{\\ddot{Q}}_{km}}\n\\,,\n\\end{align}\n%\nwhere the average is over several wavelengths or many orbital periods.\nThis is the \\emph{adiabatic approximation}. \n\nFor the binary case, we have \\(\\dot{\\ddot{Q}} \\dot{\\ddot{Q}} = 32 \\mu^2 \\ell_0^{4} \\omega _k^{6}\\). \n\nThe luminosity can then be written as \n%\n\\begin{align}\nL = \\frac{32}{5} \\frac{G^{4}}{c^{5}} \\frac{\\mu^2M^3}{\\ell_0^{5}} \n\\,.\n\\end{align}\n\nHow does this translate to a chirping signal? \n\nThe orbital energy will read \n%\n\\begin{align}\nE _{\\text{orb}} = \\frac{1}{2} \\mu \\omega _k^2 \\ell_0^2 - G \\mu \\frac{M}{\\ell_0 }\n= - \\frac{1}{2} \\frac{G \\mu M}{\\ell_0 }\n\\,.\n\\end{align}\n\nThe only way this will change is through a change in \\(\\ell_0\\); \nthat means \n%\n\\begin{align}\n\\dv{E _{\\text{orb}}}{t} = - E _{\\text{orb}} \\dv{ \\log \\ell_0 }{t}\n\\,.\n\\end{align}\n\nAs is often denoted improperly in physics, \\(\\dv*{\\log x}{ t} = (1/x) \\dv*{x}{t}\\). \n\nIn terms of the orbital frequency, by Kepler\n%\n\\begin{align}\n\\dv{\\log \\omega _k}{t} = - \\frac{3}{2} \\dv{\\log \\ell_0 }{t}\n\\,.\n\\end{align}\n\nPutting everything together, \n%\n\\begin{align}\n\\frac{2}{3} \\dv{\\log T}{t} = - \\frac{3}{2} \\dv{\\log E _{\\text{orb}}}{t} \n= - \\frac{3}{2} \\frac{L _{\\text{GW}}}{E _{\\text{orb}}}\n\\,.\n\\end{align}\n\nThis tells us what the variation of the period of the binary will be. \nFor PSR 1913+16 the prediction for \\(\\dot{T}\\) matches the data very closely.\n\nThe variation of the binary separation will read \n%\n\\begin{align}\n\\dv{\\log \\ell_0}{t} = \\frac{L_{\\text{gw}}}{E _{\\text{orb}}}\n = - \\frac{64}{5 \\ell_0^{4}} \\frac{G^3}{c^{5}} \\mu M^2\n\\,,\n\\end{align}\n%\nwhich can be integrated from a certain length to another: we get \n%\n\\begin{align}\n\\ell^{4}_0 (t) \n&= \\ell_0^{\\text{initial}} - \\frac{256}{5}  \\frac{G^3}{c^{5}} \\mu M^2 t  \\\\\n\\ell_0 (t) &= \\left(\\ell_0^{\\text{initial}} - \\frac{t}{t _{\\text{coal}}}\\right)^{1/4}\n\\,.\n\\end{align}\n\nThis will not work close to merger. 
\nWe can then look at the variation of \\(\\omega _k\\) in time, \n%\n\\begin{align}\n\\omega _k(t) = \\sqrt{\\frac{GM}{\\ell_0^3(t)}} = \\omega _k^{\\text{in}} \\left(1 - \\frac{t}{ t _{\\text{coal}}}\\right)^{-3/8} = \\pi f _{\\text{GW}}\n\\,.\n\\end{align}\n\nThis tells us how the amplitude \\(h_0 (t)\\) changes: \n%\n\\begin{align}\nh_0 (t) &= \\frac{4 \\mu M G^2}{rc^{4} \\ell_0 (t)}  \\\\\n&= 4 \\pi^{2/3} \\frac{G^{5/3}}{c^{4} r} \\mathcal{M}_c^{5/3} f _{\\text{gw}}^{2/3} (t)\n\\,,\n\\end{align}\n%\nwhere \\(\\mathcal{M}_c = \\mu^{3/5} M^{2/5}\\) is called the \\emph{chirp mass}. \n\nIn order to write a waveform, we need to introduce an integrated phase: \n%\n\\begin{align}\n\\phi (t) = 2 \\phi _{\\text{orbital}} (t) = \\int^{t} 2 \\omega _k (t') \\dd{t'} + \\phi _{\\text{initial}}\n\\,.\n\\end{align}\n%\n\nThe GW frequency can be manipulated into \n%\n\\begin{align}\nf _{\\text{GW}} (t) = \\frac{1}{8 \\pi } \\left( \\frac{c^3}{G \\mathcal{M}_c}\\right)^{5/8} \\left( \\frac{5}{t _{\\text{coal}} - t}\\right)^{5/8}\n\\,.\n\\end{align}\n\nThe result can be then computed analytically: \n%\n\\begin{align}\n\\phi (t ) - \\phi _{\\text{in}} = - 2 \\left( \\frac{c^3 (t _{\\text{coal}} - t )}{5 G \\mathcal{M}_c}\\right)^{5/8}\n\\,.\n\\end{align}\n\nThe phase of the signal has a strong dependence on the chirp mass, less so on higher-order parameters. \n\nThe total waveform reads \n%\n\\begin{align}\nh_{ij}^{TT} (t, r) = \\frac{4 \\pi^{2/3}}{r c^{4}} G^{5/3} \\mathcal{M}^{5/3} f _{\\text{gw}}^{2/3} P_{ijkl} \\left[\\begin{array}{ccc}\n\\cos \\phi (t) & \\sin \\phi (t) & 0 \\\\ \n\\sin \\phi (t) & - \\cos \\phi (t) & 0 \\\\ \n0 & 0 & 0\n\\end{array}\\right]_{kl}\n\\,.\n\\end{align}\n\n\n\n\\end{document}\n", "meta": {"hexsha": "2b783cc240140a2f178684abd4a55bd197cf830c", "size": 7304, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phd_courses/theoretical_gravitation_cosmology/dec17.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "phd_courses/theoretical_gravitation_cosmology/dec17.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phd_courses/theoretical_gravitation_cosmology/dec17.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 27.7718631179, "max_line_length": 141, "alphanum_fraction": 0.6026834611, "num_tokens": 3042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173791645582, "lm_q2_score": 0.7122321720225278, "lm_q1q2_score": 0.572046214135302}}
{"text": "%---------------------------Dimension---------------------------\n\\section{Dimension} \n\nThis metric was specifically designed in the context of\nSandia's \\textsf{Pronto} code, for stable time step calculation. It is\ndefined as follows:\n\\[\nq = \\frac{V}{2\\nabla{V}}\n\\]\n\\hexmetrictable{dimension}%\n{$L^1$}%                                      Dimension\n{application-dependent}%                      Acceptable range\n{$[0,DBL\\_MAX]$}%                             Normal range\n{$[0,DBL\\_MAX]$}%                             Full range\n{$1$}%                                        Cube\n{Adapted from \\cite{tf:89}}%                  Citation\n{v\\_hex\\_dimension}%                          Verdict function name\n", "meta": {"hexsha": "eaeffa4426ff4c412c041cf9e025d446571d98e9", "size": 702, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexDimension.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexDimension.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexDimension.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 39.0, "max_line_length": 70, "alphanum_fraction": 0.4658119658, "num_tokens": 155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8596637505099167, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.5720293365611854}}
{"text": "\\section{\\texorpdfstring{Non-uniform computation}{Non-uniform computation}}\n\\vspace{5mm}\n\\large\n% 7th lecture\n\n\\begin{definition}[Uniform computation]\n\tUniform models of computation - single algorithm for all inputs. (DTM, NTM, RAM ...)\n\\end{definition}\n\n\\begin{definition}[Non-uniform computation]\n\tAlgorithm may vary for different input lengths.\n\tHowever, inputs of the same lengths are handled by the same algorithm.\n\n\tFor example Boolean circuits.\n\\end{definition}\n\n\\begin{definition}[Boolean circuit]\n\tBoolean circuit with $n$ inputs (and single output) is an acyclic directed graph with $n$ source virtices.\n\tSource vertex has $deg_{in}(s) = 0$.\n\tAnd single sink vertex $deg_{out}(t) = 0$.\n\n\tSource vertices represents the inputs, sink - output.\n\tAll other vertices are gates (NOT, AND, OR).\n\n\tAND, OR gates have 2 inputs (in degree).\n\tNOT has only 1 input.\n\n\tSize of the circuit is \\# of vertices.\n\\end{definition}\n\n\\begin{note}\n\tBecause of the selected elementary gates, number of edges is bounded by $2n$.\n\\end{note}\n\n\\begin{example}[XOR gate]\n\t$x_1 XOR x_2$ as a DNF formula:\n\t\\[ (x_1 \\land \\neg x_2) \\lor (\\neg x_1 \\land x_2) \\]\n\n\t\\includegraphics[scale=0.4]{xor_0.eps}\n\n\tSource ~\\cite{arora2009computational}.\n\n\tOr CNF\n\t\\[ (x_1 \\lor x_2) \\land (\\neg x_1 \\lor \\neg x_2) \\]\n\\end{example}\n\n\\begin{example}\n\t\\[ (x_1 \\lor x_2 \\lor x_3) \\land (x_1 \\lor x_2 \\lor x_4) \\land (x_1 \\lor x_2 \\lor x_5) \\]\n\tThe gate will be the following:\n\n\t%lecture 7 25:00\n\t\\includegraphics[scale=0.4]{gate_0.eps}\n\n\tSize of the gate is $|C| = 13$.\n\n\tHowever, if we change the definition and allow multiple outputs, size is $|C| = 11$.\n\t%lecture 7 25:00\n\t\\includegraphics[scale=0.4]{gate_1.eps}\n\\end{example}\n\n\\begin{observation}\n\tIf we allow gates with $k$ inputs or outputs, the equivalent binary circuit will have $(k - 1)$ gates.\n\tDepth of the tree can be optimized by balancing.\n\n\t%lecture 7 26:29\n\t\\includegraphics[scale=0.4]{gate_k.eps}\n\\end{observation}\n\n\\begin{note}\n\tHaving gates with unlimited inputs can result in an exponential blowup.\n\\end{note}\n\nQ: do we restrict the gates to binary? A: no, but we allow only finite?\n\n\\begin{note}\n\tCircuit with input of size $n$ can be viewed as an device that recognizes Language of binary words of length $n$.\n\n\tA sequence $\\{C_n\\}$ of circuits of various length is a device that recognizes some Language.\n\tEnormous switch by $n$.\n\\end{note}\n\n\\begin{notation}\n\tFor circuit $C$ and input vector $x: C(x)$ is a value of the output gate.\n\\end{notation}\n\n\\begin{definition}[Family of circuits]\n\tLet $T$ be a function. 
A family of circuits of size $T(n)$ is a sequence of circuits $\\{C_n\\}_{n \\in \\N}$.\n\tWhere\n\t\\[ \\forall n \\in \\N: C_n\\ \\text{has n inputs}\\ \\& |C_n| \\leq T(n) \\]\n\n\tLanguage $L$ is in class SIZE$(T(n))$ is there exists a family of circuits $\\{C_n\\}$ of size $T(n)$ such that\n\t\\[ \\forall x \\in L \\iff C_n(x) = 1 \\]\n\\end{definition}\n\nQ: $T$ is a boolean function?\n\n\\begin{example}\n\t\\[ L = \\{ 1^n |\\ n \\in \\N \\} \\]\n\tThen $L \\in $ SIZE$(\\bigO(n))$.\n\n\tThe circuit is a conjunction of all inputs.\n\\end{example}\n\n\\begin{example}\n\t\\[ L = \\{ (i, j, i + j) |\\ i, j \\in \\N \\} \\]\n\tThen $L \\in $ SIZE$(\\bigO(n))$.\n\n\tThe circuit performs binary addition of $i$ and $j$, then compare with $i + j$.\n\tBinary addition can be done by circuit of linear size.\n\\end{example}\n\n\\begin{definition}[$\\Ppoly$ class]\n\t\\[ \\Ppoly = \\bigcup^{\\infty} SIZE (n^i) \\]\n\tClass of languages recognizable by families of circuits of polynomial size.\n\\end{definition}\n\n\\begin{definition}[Oblivious DTM]\\label{obliv_dtm}\n\tIf $L$ is recognizable by DTM M in time $p(n)$ then it is also recognizable by DTM $M_1$ with single work tape with time complexity $p^2(n)$.\n\tMoreover, the head movement of $M_1$ are independent of the contents of its tapes but only on the input length ($M_1$ always performs a sequence of left to right and back sweeps of the same form regardless of what is the input).\n\n\tSource ~\\cite[p. 37]{arora2009computational}.\n\\end{definition}\n\n\\begin{theorem}[$\\TP \\subseteq \\Ppoly$]\\label{tp_ppoly}\n\t$\\TP \\subseteq \\Ppoly$.\n\\end{theorem}\n\\begin{proof}\n\tIdea: for fixed input length $n$ TM can be simulated by polynomial size circuit.\n\tThe only problem is to get the symbol from work/input tape without storing the whole tape.\n\tWhich is solved by the previous configuration with current symbol.\n\n\tLet $L \\in \\TP$ be arbitrary. 
WLOG there is an oblivious DTM M \\cref{obliv_dtm} such that $L = L(M)$.\n\t$M$ works in time $T(n)$, where $T$ is a polynomial.\n\tWLOG $T(n)$ is the exact time $M$ needs for computation.\n\tWe can choose $T(n)$ by trying different polynomials in parallel with $M$ using the fact, that polynomials are \\emph{time constructible}.\n\n\tKnowing that TM did exactly $T(n)$ steps tells us that there were $T(n)$ displays (state, input symbol, work tape symbol).\n\n\tLet $|x| = n$ be input, let\n\t\\[ d_1, d_2, \\ldots, d_{T(n)} \\]\n\tbe binary string encoding displays during the computation of $M$ on $x$.\n\n\t% lecture 7 01:20:00\n\tSince $M$ is \\emph{oblivious}, the position of the head $i$ uniquely defines the position $i_v, i_p$.\n\tBecause the position of the head is uniquely determined by the \\# of steps.\n\n\t$i_v, i_p$ are the display indexes the last time the input head was in the same position and the work head was in the same position as in $d_i$.\n\tThe gates representing the displays $d_{i_v}$ and $d_{i_p}$ are the inputs to the gate $C_i$.\n\tIt can also happen, that next symbol under the head has not been seen during the computation yet.\n\tFor such case, we add one more input $x_j$ - input bit.\n\n\tSimilarly, $i_p$ should not be defined.\n\tHowever, we assume that work tape is blank before the computation and symbol is also blank.\n\n\tWe should add one more gate that check whether TM finished in accepting state.\n\tOutputs 1 if was in accepting state and 0 o/w.\n\n\tComplexity: $|C_i| = \\bigO(1) \\Rightarrow |C| = \\bigO(T(n))$.\n\\end{proof}\n\n\\begin{corollary}\\label{tp_ppoly_cor}\n\tThe family of circuits $\\{C_n\\}$ from previous theorem not only exists but also can be efficiently constructed.\n\t% todo missing word in place of ... lecture 7 01:29:50\n\tMeaning there exists DTM M which on the input $1^n$ (only marks the length) outputs a ... of $C_n$.\n\tMoreover M works in polynomial time and Log space.\n\n\tThe only non-trivial thing in circuit construction is to compute indexes $i_v, i_p$ from $i$ and $n$.\n\tM outputs the circuit $C_i$ in $\\bigO(\\log n)$ time and space.\n\n\tSpace is reused therefore total space complexity is also $\\bigO(\\log n)$.\n\tHowever, having $T(n)$ gates time complexity is $\\bigO(T(n) \\cdot \\log n)$ which is polynomial.\n\\end{corollary}\n\n\\begin{note}[$\\TP \\nsupseteq \\Ppoly$]\n\t$\\TP \\supseteq \\Ppoly$ is not true since every unary language\n\t\\[ L \\subseteq \\{ 1^n |\\ n \\in \\N \\} \\]\n\tis in $\\Ppoly$.\n\n\tIf $1^n \\in L \\Rightarrow C_n$ is a tree of $\\land$.\n\tOtherwise, $C_n$ outputs 0.\n\n\tOn the other side\n\t\\[ UHALT = \\{ 1^n |\\ n\\ \\text{is a Godel number of}\\ \\langle M, x \\rangle: M(x) \\downarrow \\} \\]\n\tis not in $\\TP$ as it is a variant of Halting problem which is undecidable.\n\n\tQ: how can boolean circuit recognize UHALT? How to detect that that TM halts in this case?\n\\end{note}\n\n\\begin{definition}[$\\TP$-uniform family circuits]\n\tA family of circuits $\\{C_n\\}$ is $\\TP$-uniform if there exists DTM M which on input $1^n$ outputs the description of $\\{C_n\\}$ and works in polynomial time.\n\\end{definition}\n\n\\begin{theorem}[$\\TP$-uniform family circuits]\n\t$L$ is accepted by $\\TP$-uniform family of circuits $\\iff L \\in \\TP$.\n\\end{theorem}\n\\begin{proof}\n\t\"$\\Rightarrow$\". 
For an input $x$ DTM M will simulate DTM $M_g$ which outputs $C_n$ on $1^n$, guaranteed by uniformity.\n\tThen $M$ simulates $C_n$ on $x$.\n\t\\[ x \\in L(M) \\iff C_n(x) = 1 \\]\n\n\t\"$\\Leftarrow$\". Follows from the corollary \\cref{tp_ppoly_cor}.\n\\end{proof}\n\n\\begin{definition}[Log Space uniform family circuits]\n\tA family of circuits $\\{C_n\\}$ is $\\TP$-uniform if there exists DTM M which on input $1^n$ outputs the description of $\\{C_n\\}$ and works in Log space.\n\\end{definition}\n\n\\begin{theorem}[Log Space uniform family circuits]\n\t$L$ is accepted by Log Space uniform family of circuits $\\iff L \\in \\TP$.\n\\end{theorem}\n\\begin{proof}\n\t\"$\\Rightarrow$\". For an input $x$ DTM M will simulate DTM $M_g$ which outputs $C_n$ on $1^n$, guaranteed by uniformity.\n\tThen $M$ simulates $C_n$ on $x$.\n\t\\[ x \\in L(M) \\iff C_n(x) = 1 \\]\n\n\tThe only difference between current theorem and previous is an assumption on $M_g$.\n\tHowever, TM that works in Log space also works in polynomial time.\n\n\t\"$\\Leftarrow$\". Follows from the corollary \\cref{tp_ppoly_cor}.\n\\end{proof}\n\n\\subsection{Cook-Levin alternative proof}\n\n\\begin{definition}[CRT-SAT]\n\tCRT-SAT is a language of binary strings that encode boolean circuits which for \\emph{some} input output 1.\n\t\\[ CRT-SAT = \\{ C |\\ \\exists n: C(n) = 1 \\} \\]\n\\end{definition}\n\n\\begin{lemma}[CRT-SAT $\\in \\TNP$]\n\tCRT-SAT $\\in \\TNP$.\n\\end{lemma}\n\\begin{proof}\n\tThe certificate is the input for which $C(n) = 1$.\n\tCheck by simulating circuit using DTM.\n\\end{proof}\n\n\\begin{lemma}[CRT-SAT $\\in \\TNP$-hard]\n\tCRT-SAT $\\in \\TNP$-hard.\n\\end{lemma}\n\\begin{proof}\n\tLet $L \\in \\TNP$ arbitrary.\n\n\t\\[ L \\in \\TNP \\iff \\exists DTM M: x \\in L \\iff \\exists b \\in \\{ 0, 1 \\}^{p(n)}: M(x, b) = 1 \\]\n\tM works in polynomial time, $b$ encodes the accepting branch of the NTM computation.\n\n\tFrom $\\TP \\subseteq \\Ppoly$ \\cref{tp_ppoly} exists $\\{ C_n \\}$ which recognizes the same language as $M$.\n\tFor the pair $(x, b)$ of size $n + T(n)$ we have circuit $C_n$:\n\t\\[ C_n(x, b) = 1 \\iff M(x, b) = 1 \\]\n\n\tFor fixed $x$ we construct a circuit $C_n^x \\in C_n$ by fixing the inputs of $C_n$ to $x$.\n\tTherefore $C_n^x$ has single input $b$.\n\t\\[ x \\in L \\iff \\exists b: M(x, b) = 1 \\iff \\exists b: C_n(x, b) = 1 \\iff \\exists b: C_n^x(b) = 1 \\iff C_n^x \\in CRT-SAT \\]\n\\end{proof}\n\n\\begin{consequence}[CRT-SAT $\\in \\TNP$-complete]\n\tCRT-SAT $\\in \\TNP$-complete, follows from 2 previous lemma.\n\\end{consequence}\n\n\\begin{theorem}[Cook-Levin alternative proof]\n\\end{theorem}\n\\begin{proof}\n\t%Idea: $L \\in \\TNP \\leftpropto CRT-SAT \\leftpropto 3-SAT$.\n\tIdea: $L \\in \\TNP \\to CRT-SAT \\to 3-SAT$.\n\n\tLet $C$ be a circuit with input vertices $x_1, \\ldots, x_n$ and gates $g_1, \\ldots, g_n$.\n\tWhere $g_n$ is an output gate.\n\tThe next step is to identify vertices and gates with binary variables.\n\n\tKnown as Tseitin encoding.\n\t\\begin{itemize}\n\t\t\\item if $g_i$ is NOT with single input $g_j$ then formula is XOR\n\t\t\t\\[ (g_i \\lor g_j) \\land (\\neg g_i \\lor \\neg g_j) \\]\n\n\t\t\\item if $g_i$ is AND with inputs $g_j, g_k$ then formula consists of 2 implications\n\t\t\t\\[ (g_i \\Rightarrow (g_j \\land g_k)) \\land ((g_j \\land g_k) \\Rightarrow g_i) \\]\n\t\t\tThen convert to CNF\n\t\t\t\\[ (g_i \\Rightarrow (g_j \\land g_k)) = g_i \\Rightarrow g_j \\land g_i \\Rightarrow g_k = (\\neg g_i \\lor g_k) \\land (\\neg g_i \\lor g_k ) 
\\]\n\t\t\tThe second part\n\t\t\t\\[ ((g_j \\land g_k) \\Rightarrow g_i) = (\\neg g_j \\lor \\neg g_k \\lor g_i) \\]\n\t\t\tAltogether\n\t\t\t\\[ (\\neg g_i \\lor g_k) \\land (\\neg g_i \\lor g_k ) \\land (\\neg g_j \\lor \\neg g_k \\lor g_i) \\]\n\t\t\\item if $g_i$ is OR with inputs $g_j, g_k$ then formula consists of 2 implications:\n\t\t\t\\[ (g_i \\Rightarrow (g_j \\lor g_k)) \\land ((g_j \\lor g_k) \\Rightarrow g_i) \\]\n\t\t\tThen convert to CNF\n\t\t\t\\[ (g_i \\Rightarrow (g_j \\lor g_k)) = (\\neg g_i \\lor g_j \\lor g_k)\\]\n\t\t\tThe second part\n\t\t\t\\[ ((g_j \\lor g_k) \\Rightarrow g_i) = g_j \\Rightarrow g_i \\land g_k \\Rightarrow g_i = (\\neg g_j \\lor g_i) \\land (\\neg g_k \\lor g_i) \\]\n\t\t\tAltogether\n\t\t\t\\[ (\\neg g_i \\lor g_j \\lor g_k) \\land (\\neg g_j \\lor g_i) \\land (\\neg g_k \\lor g_i) \\]\n\t\t\\item unit clause $(g_n)$.\n\t\\end{itemize}\n\\end{proof}\n", "meta": {"hexsha": "e4778e2fe703e42e472164ff9265916ff2efe87e", "size": 11590, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/prednasky/06_prednaska.tex", "max_stars_repo_name": "karlov/NTIN063", "max_stars_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/prednasky/06_prednaska.tex", "max_issues_repo_name": "karlov/NTIN063", "max_issues_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/prednasky/06_prednaska.tex", "max_forks_repo_name": "karlov/NTIN063", "max_forks_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5563139932, "max_line_length": 229, "alphanum_fraction": 0.6874892148, "num_tokens": 3784, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.8104789086703224, "lm_q1q2_score": 0.5720238891485357}}
{"text": "\\documentclass[a5paper]{article}\n\\usepackage{amssymb}\n\\usepackage{amstext}\n\\usepackage{amsmath}\n\\usepackage[default,osfigures,scale=0.95]{opensans}\n\\usepackage[margin=2cm]{geometry}\n\\setlength{\\parindent}{0pt}\n\\setlength{\\parskip}{1.2em}\n\n\\usepackage{booktabs}\n\\usepackage{array}\n\\newcolumntype{L}{>{$}l<{$}}\n\\usepackage[hidelinks]{hyperref}\n\n\\renewcommand{\\t}{\\tau}\n\\newcommand{\\s}{\\sigma}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\order}{\\text{order}}\n\n\\newtheorem{lemma}{Lemma}\n\n\\begin{document}\n\n\\section*{Group algebra of the permutations of three things}\n\nFinlay Thompson\n\\bigskip\n\n\nThe permutation group can be generated by two elements, $\\t$ and $\\s$,\nof order two and three respectively. \n\\begin{align}\n    < \\t,\\s \\mid \\t^2 = 1,\\s^3 = 1, \\s\\t = \\t\\s^2 >\n\\end{align}\nThese elements generate the six element finite group $S_3$, the\nsimplest non-Abelian group of permutations.\n\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{L|LLLLLL}\n       & 1       & \\t     & \\t\\s   & \\t\\s^2 & \\s     & \\s^2   \\\\\n\\hline\\addlinespace\n1      & 1       & \\t     & \\t\\s   & \\t\\s^2 & \\s     & \\s^2   \\\\\n\\t     & \\t      & 1      & \\s     & \\s^2   & \\t\\s   & \\t\\s^2 \\\\\n\\t\\s   & \\t\\s    & \\s^2   & 1      & \\s     & \\t\\s^2 & \\t     \\\\\n\\t\\s^2 & \\t\\s^2  & \\s     & \\s^2   & 1      & \\t     & \\t\\s   \\\\\n\\s     & \\s      & \\t\\s^2 & \\t     & \\t\\s   & \\s^2   & 1      \\\\\n\\s^2   & \\s^2    & \\t\\s   & \\t\\s^2 & \\t     & 1      & \\s     \\\\\n\\end{tabular}\n\\end{center}\n\\caption{Multiplication table for six elements permutation group on\nthree elements.}\n\\end{table}\n\nThe group algebra $\\R[S_3]$ is a six dimensional, semisimple,\nassociative algebra, which by Weddeburn's structure theorems, means\nthat it can be decomposed into three parts, corresponding to the tree\nconjugacy classes $\\{1\\},\\{\\t,\\t\\s,\\t\\s^2\\},\\{\\s,\\s^2\\}$.\n\\begin{equation}\n    \\R[S_3] = \\R \\oplus \\R \\oplus M_2(\\R)\n\\label{eq:decomposition}\n\\end{equation}\nThe decomposition \\autoref{eq:decomposition} can be constructed using\nmultiplication by three central idempotents.\n\\begin{align}\n    \\pi_0 &= \\frac{1}{6}(1+\\t)(1+\\s+\\s^2) \\\\\n    \\pi_1 &= \\frac{1}{6}(1-\\t)(1+\\s+\\s^2) \\\\\n    \\pi_2 &= \\frac{1}{3}(2-\\s-\\s^2)\n\\end{align}\nThese elements satisfy the equations $\\pi_i^2=\\pi_i$, $\\pi_i\\pi_j=0$\nwhen $i\\neq j$. The decomposition corresponds to the three irreducible\nrepresentations of $S_3$. \n\nThe first projection $\\pi_0$ maps is acted on uniformly as\nthe identity, $\\rho_0(x)y = y$. This can be seen by, for example:\n\\begin{align*}\n    \\t \\pi_0 &= \\t\\frac{1}{6}(1+\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(1+\\t)(1+\\s+\\s^2) \\\\\n     &= \\pi_0\n\\end{align*}\nor\n\\begin{align*}\n    \\s \\pi_0 &= \\s\\frac{1}{6}(1+\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}\\s(1+\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s+\\s\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s+\\t\\s^2)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s(1+\\s+\\s^2) +\\t\\s^2(1+\\s+\\s^2))\\\\\n     &= \\frac{1}{6}((1+\\s+\\s^2) +\\t(1+\\s+\\s^2))\\\\\n     &= \\frac{1}{6}(1+\\t)(1+\\s+\\s^2) \\\\\n     &= \\pi_0\n\\end{align*}\n\nThe second representation is based on the sign function, that takes\neach element to its sign, $\\rho_1(x)y = (-1)^{(\\order(x)-1)}y$. 
This\ncan be seen:\n\\begin{align*}\n    \\t \\pi_1 &= \\t\\frac{1}{6}(1-\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\t-1)(1+\\s+\\s^2) \\\\\n     &= \\frac{-1}{6}(1-\\t)(1+\\s+\\s^2) \\\\\n     &= -\\pi_1\n\\end{align*}\nor\n\\begin{align*}\n    \\s \\pi_1 &= \\s\\frac{1}{6}(1-\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}\\s(1-\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s-\\s\\t)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s-\\t\\s^2)(1+\\s+\\s^2) \\\\\n     &= \\frac{1}{6}(\\s(1+\\s+\\s^2) - \\t\\s^2(1+\\s+\\s^2))\\\\\n     &= \\frac{1}{6}((1+\\s+\\s^2) - \\t(1+\\s+\\s^2))\\\\\n     &= \\frac{1}{6}(1-\\t)(1+\\s+\\s^2) \\\\\n     &= \\pi_1\n\\end{align*}\n\nThe third representation can be understood by considering the two dimensional\nsubspace of $\\{1,\\s,\\s^2\\}_\\R$ that is perpendicular to $1 + \\s + \\s^2$, with\nthe standard Euclidean metric. The element $\\pi_2$ is part of this, and should\nbe associated to the identity element in $M_2(\\R)$. Perpendicular to both\n$\\pi_2$ and $1+\\s+\\s^2$ is the element $\\s-\\s^2$. Considering the span of these\nwith $\\t$ gets us the four dimensional space $M_2(\\R)$.\n\n\\begin{align*}\n\\left(\\begin{smallmatrix}1&0\\\\0&1\\end{smallmatrix}\\right) \n    \\sim \\frac{1}{3}(2 - \\s - \\s^2) &\\quad\n\\left(\\begin{smallmatrix}1&0\\\\0&-1\\end{smallmatrix}\\right) \n    \\sim \\frac{\\t}{\\sqrt3}(\\s - \\s^2) \\\\\n\\left(\\begin{smallmatrix}0&1\\\\1&0\\end{smallmatrix}\\right) \n    \\sim \\frac{\\t}{3}(2 - \\s - \\s^2) &\\quad\n\\left(\\begin{smallmatrix}0&1\\\\-1&0\\end{smallmatrix}\\right) \n    \\sim \\frac{1}{\\sqrt3}(\\s - \\s^2) \n\\end{align*}\n\n\\begin{lemma}\n    The isomorphism between $M_2(\\R)$ and the image of $\\pi_2$ contained in\n    $\\R[S_3]$ is given by:\n    \\begin{align*}\n        E_{00} &\\mapsto \\frac{1}{6}\\left(\n          2 - \\s - \\s^2 + \\sqrt 3 \\t\\s - \\sqrt 3 \\t\\s^2 \\right) \\\\\n        E_{11} &\\mapsto \\frac{1}{6}\\left(\n          2 - \\s - \\s^2 - \\sqrt 3 \\t\\s + \\sqrt 3 \\t\\s^2 \\right) \\\\\n        E_{01} &\\mapsto \\frac{1}{6}\\left(\n          2\\t -\\t\\s -\\t\\s^2 + \\sqrt3\\s - \\sqrt3\\s^2 \\right) \\\\\n        E_{10} &\\mapsto \\frac{1}{6}\\left(\n          2\\t -\\t\\s -\\t\\s^2 - \\sqrt3\\s + \\sqrt3\\s^2 \\right) \n    \\end{align*}\n\\end{lemma}\n\n\\textbf{Proof:} I will just look at the just two maps to see that they work:\n\n\n\n\\end{document}\n", "meta": {"hexsha": "bfceaa3c81ccfc37f143cb347ba9f9d703ca04bb", "size": 5227, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "symmetric.tex", "max_stars_repo_name": "finlay/extensive", "max_stars_repo_head_hexsha": "836e8442720e41b9a8ac5883a23952650ef534a9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-08-05T12:20:01.000Z", "max_stars_repo_stars_event_max_datetime": "2016-08-05T12:20:01.000Z", "max_issues_repo_path": "symmetric.tex", "max_issues_repo_name": "finlay/extensive", "max_issues_repo_head_hexsha": "836e8442720e41b9a8ac5883a23952650ef534a9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "symmetric.tex", "max_forks_repo_name": "finlay/extensive", "max_forks_repo_head_hexsha": "836e8442720e41b9a8ac5883a23952650ef534a9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-08-05T11:23:17.000Z", "max_forks_repo_forks_event_max_datetime": 
"2019-02-28T22:36:32.000Z", "avg_line_length": 34.3881578947, "max_line_length": 79, "alphanum_fraction": 0.5559594414, "num_tokens": 2168, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5720188594267378}}
{"text": "\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn the preceding, we have demonstrated a\nBIE framework for computing the eigenvalues\nof the Stokes operator in the plane which\nis robust and scalable.\n%\nTo justify the\napproach, we developed a uniqueness theory\nfor the oscillatory Stokes equations in\nexterior domains in analogy with the\ndiscussion of the Helmholtz equation in\n\\cite{colton1983integral}.\n%\nThis lead to the primary theoretical\nresults of the paper which show that the\nBIEs resulting from double layer and\ncombined-field representations of the\nvelocity field are \nnot invertible precisely when $k^2$ is\nan eigenvalue on simply connected\nand multiply connected domains, respectively.\n%\nAs in \\cite{zhao2015robust}, the costliness\nof performing the nonlinear minimization\nassociated with computing these eigenvalues\ncan be alleviated by computing instead the\napproximate zeros of the discrete Fredholm\ndeterminant.\n\nThe results of this paper can be\nextended in a number of ways.\n%\nThe theory\nextends directly to three dimensions, where\ncomputational efficiency and numerical\nimplementation will be the primary concern.\n%\nIn the numerical examples above, we\nconsider only domains with differentiable\nboundaries for simplicity.\n%\nWhen using the eigenfunctions as a trial\nbasis for simulating the Navier--Stokes\nequations~\\cite{batcho1994generalized},\nit is necessary to handle domains\nwith corners because the domains of interest\narise from domain decomposition, e.g.\ndividing a larger domain into \nquadrilaterals.\nFortunately, there has been recent progress\ntoward efficient discretization of the\nlayer potentials of elliptic operators\non domains with corners\n\\cite{helsing2008corner,serkh2016solution,rachh2017solution,helsing2018integral}\nwhich makes the solution of such problems\ntractable.\n%\nOf course, the theoretical considerations\nare different for such domains.\n%\nComputing the Stokes eigenvalues of regions\nwith corners is the topic of ongoing\nresearch.\n\nAs noted in the introduction, the\neigenvalues of the Stokes operator are\nequivalent to the so-called\n``buckling'' eigenvalues of the\nbiharmonic operator on simply connected\ndomains~\\cite{kelliher2009eigenvalues}.\n%\nThis can be seen through the stream function\nformulation  of the oscillatory Stokes\nequation, i.e. 
setting $\\bu = \\nabla^\\perp \\psi$\nwhere $\\psi$ now satisfies\n\n\\begin{equation*}\n  -\\Delta^2 \\psi = k^2 \\Delta \\psi \\quad \\textrm{in} \\quad \\Omega\\; .\n\\end{equation*}\nNote that the buckling problem enforces the\nclamped, or first Dirichlet, boundary\ncondition on $\\psi$\n\n\\begin{equation*}\n  \\psi = \\partial_\\nu \\psi = 0 \\; \\quad \\textrm{on} \\quad \\Gamma.\n\\end{equation*}\nOn a multiply connected domain, there are\nStokes eigenfunctions which do not have a corresponding\nclamped stream function, so that the\nbuckling eigenvalues are a subset of the Stokes\neigenvalues.\nBy adapting the approach of \\cite{rachh2017integral},\na suitable layer potential representation of the\nbuckling problem can be derived based on the\noscillatory Stokes layer potentials.\n%\nThis is the subject of a follow-up paper which is in\npreparation.\n\nThere are some interesting questions to answer\non the use of Fredholm determinants in numerical\ncalculations.\n%\nAs observed in \\cite{zhao2015robust}, the\ncombined-field representation causes some\ndifficulty in that the Fredholm determinant\nis not defined when the single layer, which\nis not trace-class, is included.\n%\nZhao and Barnett~\\cite{zhao2015robust}\nsuggest looking into\nrepresentations of the form $\\cI-2\\cD\n-2i\\eta\\cS^2-2\\cW$,\nwhich would have a well-defined Fredholm\ndeterminant.\n%\nThe relative performance of\nsuch an approach should be explored.\n%\nFurther, as discussed above, the\nconvergence of the determinant of\nintegral equations discretized with\npanel-corrected schemes (as described in\n\\cref{sec:numerical}) is yet to be proved.\n%\nWhen addressing high frequency problems,\nthe fast-direct solver used to evaluate\ndeterminants in this paper will no longer\nhave near-linear scaling~\\cite{ho2012fast}.\n%\nThe design of fast-direct solvers in this\nregime is the subject of ongoing research, and\nto the best of our knowledge, fast determinant\ncomputation at high frequency is an open\nquestion.\n%\nFinally, it is worth exploring alternatives\nto the Fredholm determinant which perform well\nfor layer potentials that are not trace class\non the boundary or for problems at higher\nfrequencies.\n", "meta": {"hexsha": "8cf54e8be089148937a3b9a6d07db164895e986d", "size": 4404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/draft-01-stokes/05conclusion.tex", "max_stars_repo_name": "askhamwhat/biharm-evals", "max_stars_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/draft-01-stokes/05conclusion.tex", "max_issues_repo_name": "askhamwhat/biharm-evals", "max_issues_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/draft-01-stokes/05conclusion.tex", "max_forks_repo_name": "askhamwhat/biharm-evals", "max_forks_repo_head_hexsha": "d836302f544670b3d899bd91ea4cb49e9afb6a75", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.5833333333, "max_line_length": 80, "alphanum_fraction": 0.8092643052, "num_tokens": 1072, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754471, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5720188594267377}}
{"text": "\\section{Reducibility Candidates}\nOne might ask, what is a good definition of a semantic type? - Rather than\nattempting the proof of the fundamental lemma directly and then trying to\nextract additional lemmas one might need about the semantic types, we follow\nGirard's technique and characterize some key properties our semantic types need\nto satisfy. If a semantic type satisfies these key properties, then our proof of the fundamental lemma will be straightforward. To put it differently, defining these key properties, will allow for a  a modular proof of the fundamental lemma.\n\n% \\begin{definition}[Reducibility Candidate] $\\Gamma \\vdash M \\in \\den{A}$ is a reducibility\n%   candidate, if the following conditions hold\n%   \\begin{itemize}\n%   \\item $\\CR 1:$ If $\\Gamma \\vdash M \\in \\den{A}$ then $M \\in \\SN$. % , i.e. $\\den{A} \\subseteq \\SN$.% \\\\[0.5em]\n%   \\item $\\CR 2:$ If $\\Gamma \\vdash M \\in \\SNe$ then $\\Gamma \\vdash M \\in \\den{A}$. % , i.e. $\\SNe \\subseteq \\den{A}$. % \\\\[0.5em]\n%   \\item $\\CR 3:$ If $\\Gamma \\vdash M \\redSN M'$ and $\\Gamma \\vdash M' \\in \\den{A}$ then $\\Gamma \\vdash M \\in \\den{A}$, i.e . $\\den{A}$ is closed under reduction.\n%   \\end{itemize}\n% \\end{definition}\n\n% The last property is often also referred to as \\emph{backward closed}. We show that that all semantic types $\\den{A}$ satisfy the conditions above.\n\n\n\n\\begin{theorem}\\label{thm:redcand}~\n% For all types $C$, $\\Gamma \\vdash M \\den{C}  \\in \\CR$, i.e. it satisfies the conditions $\\CR_1$, $\\CR_2$, and $\\CR_3$.\n  \\begin{enumerate}\n  \\item\\label{cr1} \\CR 1: If $\\inden{\\Gamma}{M}{A}$ then $\\Gamma \\vdash M : A \\in \\SN$. % , i.e. $\\den{A} \\subseteq \\SN$.% \\\\[0.5em]\n  \\item\\label{cr2} \\CR 2: If $\\Gamma \\vdash M : A \\in \\SNe$ then $\\inden{\\Gamma}{M}{A}$. % , i.e. $\\SNe \\subseteq \\den{A}$. % \\\\[0.5em]\n  \\item\\label{cr3} \\CR 3: If $\\Gamma \\vdash M \\redSN M' : A$ and $\\inden{\\Gamma}{M'}{A}$ then $\\inden{\\Gamma}{M}{A}$, i.e. backwards closure.\n  \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nWe prove these three properties simultaneously.\n\\\\[1em]\n\\fbox{\\CR \\ref{cr1}.  If $\\inden{\\Gamma}{M}{C}$ then $\\Gamma \\vdash M : A \\in \\SN$.}\n\\\\[0.5em]\nBy induction on the structure of $C$.\n\n\\paragraph{Case: $C =\\base$}.\\\\\n$\\inden{\\Gamma}{M}{\\base}$ \\hfill by assumption \\\\\n$\\Gamma \\vdash M : \\base \\in \\SN$ \\hfill by def. of sem. interpretation for $\\base$\n\n\\paragraph{Case: $C = A \\arrow B$}.\n\\\\\n$\\Gamma, x{:}A \\vdash x : A \\in \\SNe$ \\hfill by def. of $\\SNe$\\\\\n$\\inden{\\Gamma. x{:}A}{x}{A}$ \\hfill by IH (\\ref{cr2}) \\\\\n% $\\Gamma, x{:}A \\models x : A$ \\hfill by Lemma \\ref{lm:closn} (Property \\ref{cp2})\\\\\n$\\Gamma, x{:}A \\ext{\\id} \\Gamma$ \\hfill by def. of context extensions \\\\\n% $\\Gamma, x{:}A \\models M  : A \\arrow B$ \\hfill by Semantic Weakening Lemma \\ref{lm:sweak}\\\\\n$\\inden{\\Gamma, x{:}A}{[\\id]M~x}{B}$ \\hfill by def. of  $\\inden{\\Gamma, x{:}A}{M}{A \\arrow B}$\\\\\n$\\Gamma, x{:}A \\vdash [\\id]M~x : B\\in \\SN$ \\hfill by IH (\\CR \\ref{cr1})\\\\\n$\\Gamma, x{:}A \\vdash [\\id]M : A \\arrow B \\in \\SN$ \\hfill by  Extensionality Lemma \\ref{lm:pSN} \\\\% Lemma \\ref{lem:psn} (Property \\ref{pp6})\\\\\n$\\Gamma \\vdash M : A \\arrow B\\in \\SN$ \\hfill by Anti-renaming Lemma \\ref{lm:anti-renameSN}\n\\\\[1em]\n\\fbox{\\CR \\ref{cr2}. 
If $\\Gamma \\vdash M : C\\in \\SNe$ then $\\inden{\\Gamma}{M}{C}$.}\n\\\\[0.5em]\nBy induction on $\\Gamma \\vdash M : C \\in \\SNe$.\n\n\\paragraph{Case: $C=\\base$}.\\\\\n$\\Gamma \\vdash M : C \\in \\SNe$ \\hfill by assumption \\\\\n$\\Gamma \\vdash M : C \\in \\SN$ \\hfill by def. of $\\SN$\\\\\n% $\\Gamma \\vdash M \\hastype \\base$ \\hfill by assumption \\\\\n$\\inden{\\Gamma}{M}{\\base}$ \\hfill by def. of semantic typing\n\n\\paragraph{Case: $C = A \\arrow B$}.\\\\\nAssume $\\Gamma' \\ext{\\rho} \\Gamma$ and $\\inden{\\Gamma'}{N}{A}$ \\\\\n$\\Gamma' \\vdash N : A\\in \\SN$ \\hfill by IH (\\CR \\ref{cr1}) \\\\\n$\\Gamma \\vdash M : A \\arrow B \\in \\SNe$ \\hfill by assumption \\\\\n$\\Gamma' \\vdash [\\rho]M : A \\arrow B \\in \\SNe$ \\hfill by Renaming Lemma \\ref{lm:renameSN} \\\\\n$\\Gamma' \\vdash [\\rho]M~N : B \\in \\SNe$ \\hfill by def. of $\\SNe$\\\\\n$\\inden{\\Gamma'}{[\\rho]M~N}{B}$ \\hfill by IH (\\CR \\ref{cr2})\\\\\n$\\inden{\\Gamma}{M}{A \\arrow B}$ \\hfill since $\\inden{\\Gamma'}{N}{A}$ was arbitrary\n\\\\[1em]\n\\fbox{\\CR \\ref{cr3}.   If $\\Gamma \\vdash M \\redSN M' : C$ and $\\inden{\\Gamma}{M'}{C}$ then $\\inden{\\Gamma}{M}{C}$}\n\\\\[0.5em]\nBy induction on $C$.\n\\paragraph{Case: $C = \\base$}.\\\\[0.5em]\n$\\Gamma \\vdash M' : \\base \\in \\SN$  \\hfill since $\\inden{\\Gamma}{M'}{\\base}$\\\\\n$\\Gamma \\vdash M : \\base \\in \\SN$ \\hfill by closure rule for $\\SN$\\\\\n$\\inden{\\Gamma}{M}{\\base}$ \\hfill by definition of semantic typing\n\n\\paragraph{Case: $C = A \\arrow B$}.\n\\\\[0.5em]\nAssume $\\Gamma' \\ext{\\rho} \\Gamma$,~$\\inden{\\Gamma'}{N}{A}$ \\\\\n$\\inden{\\Gamma'}{M'[\\rho]~N}{B}$ \\hfill by assumption $\\inden{\\Gamma}{M'}{A \\arrow B}$\\\\\n$\\Gamma \\vdash M \\redSN M' : A \\arrow B$ \\hfill by assumption \\\\\n$\\Gamma' \\vdash [\\rho]M \\redSN [\\rho]M' : A \\arrow B$ \\hfill by Renaming Lemma \\ref{lm:renameSN}\\\\\n$\\Gamma' \\vdash [\\rho]M~N \\redSN [\\rho]M'~N : B$ \\hfill by $\\redSN$\\\\\n$\\inden{\\Gamma}{[\\rho]M~N}{B}$ \\hfill by IH (\\CR\\ref{cr3})\\\\\n$\\inden{\\Gamma}{M}{A \\arrow B}$ \\hfill since $\\inden{\\Gamma'}{N}{A}$ was arbitrary\\\\\n\n\\end{proof}\n\n\n\\section{Proving strong normalization}\nAs mentioned before, we prove that if a term is well-typed, then it is strongly normalizing in  two steps:\n\n\\begin{description}\n\\item[Step 1] If $\\inden{\\Gamma}{M}{A}$ then $\\Gamma \\vdash M : A \\in \\SN$.\n\\item[Step 2] If $\\Gamma \\vdash M : A$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ then $\\inden{\\Gamma'}{[\\sigma] M}{A}$.\n\\end{description}\n\nThe first part described in Step 1, is satisfied by the fact that $\\inden{\\Gamma}{M}{A}$ must be a reducibility candidate (Theorem \\ref{thm:redcand}) and  by \\CR \\ref{cr1})  all terms in $\\denot{A}$ are strongly normalizing. 
We now prove the second step, which is often referred to as the \\emph{Fundamental Lemma}.\nIt states that if $M$ has type $A$ and we can provide ``good'' instantiation $\\sigma$, which provides terms which are themselves normalizing for all the free variables in $M$, then $\\inden{\\Gamma}{[\\sigma]M}{A}$.\n\n\n\\begin{lemma}[Fundamental lemma]\nIf $\\Gamma \\vdash M : A$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$\nthen $\\inden{\\Gamma'}{[\\sigma]M}{A}$.\n\\end{lemma}\n\\begin{proof}\nBy induction on $\\Gamma \\vdash M : A$.\n\n\\paragraph{Case} $\\D = \\ianc{\\Gamma(x) = A}{\\Gamma \\vdash x : A}{}$\n\\\\[1em]\n$\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n$\\inden{\\Gamma'}{[\\sigma]x}{A}$ \\hfill by definition of $[\\sigma]x$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$\n\n\\paragraph{Case} $\\D = \\ibnc{\\Gamma \\vdash M : A \\rightarrow B}{\\Gamma \\vdash N : A}{\\Gamma \\vdash M\\;N : B}{}$\n\\\\\n$\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\n$\\inden{\\Gamma'}{[\\sigma]M}{A \\rightarrow B}$ \\hfill by IH\\\\\n$\\inden{\\Gamma'}{[\\sigma]N}{A}$ \\hfill by IH\\\\\n$\\inden{\\Gamma'}{[\\sigma]M\\;[\\sigma]N}{B}$ \\hfill by $\\inden{\\Gamma'}{[\\sigma]M}{A \\rightarrow B}$\\\\\n$\\inden{\\Gamma'}{[\\sigma](M\\;N)}{B}$ \\hfill by subst. definition \\\\\n\n\n\\paragraph{Case} $\\D = \\ianc{\\Gamma, x:A \\vdash M:B}{\\Gamma \\vdash \\lambda x.M : A \\rightarrow B}{}$\n\\\\\n$\\inden{\\Gamma'}{\\sigma}{\\Gamma}$ \\hfill by assumption \\\\\nAssume $\\Gamma'' \\ext{\\rho} \\Gamma'$ and $\\Gamma'' \\vdash N : A$  \\\\\n$\\inden{\\Gamma''}{[\\rho] \\sigma}{\\Gamma}$ \\hfill by weakening \\\\\n$\\inden{\\Gamma''}{([\\rho]\\sigma, N/x)}{\\Gamma, x:A}$ \\hfill by definition of semantic substitutions\\\\\n$\\inden{\\Gamma''}{[[\\rho]\\sigma, N/x]M}{B}$ \\hfill by IH \\\\\n$\\Gamma'' \\vdash (\\lambda x.[[\\rho]\\sigma,x/x]M)\\;N \\redSN [[\\rho]\\sigma, N/x]M$ \\hfill by reduction $\\redSN$ \\\\\n$(\\lambda x.[[\\rho]\\sigma,x/x]M) = [[\\rho]\\sigma](\\lambda x.M)$ \\hfill by subst. def\\\\\n$\\inden{\\Gamma''}{([[\\rho]\\sigma]\\lambda x.M)\\;N}{B}$ \\hfill by $\\CR 3$ \\\\\n$\\inden{\\Gamma'}{[\\sigma](\\lambda x.M)}{A \\arrow B}$ \\hfill since $\\Gamma'' \\ext{\\rho} \\Gamma'$ and $\\Gamma'' \\vdash N : A$  was arbitrary\n\n\\end{proof}\n\n\n\\begin{corollary}\nIf $\\Gamma \\vdash M : A$ then $\\Gamma \\vdash M : A \\in \\SN$.\n\\end{corollary}\n\n\\begin{proof}\nUsing the fundamental lemma with the identity substitution $\\inden{\\Gamma}{\\textsf{id}}{\\Gamma}$, we obtain  $\\inden{\\Gamma}{M}{A}$. 
By $\\CR 1$, we know $\\Gamma \\vdash M \\in \\SN$.\n\\end{proof}\n\n\n\\newpage\n\n\\section{Extension: Unit type}\n\\label{sec:unit}\nWe will now extend our simply-typed lambda-calculus the unit type, written as $\\one$.\n\n\\[\n\\begin{array}{llcl}\n\\mbox{Types}  & A & \\bnfas & \\ldots \\mid \\one\\\\\n\\mbox{Terms}  & M & \\bnfas & \\ldots \\mid ()\n\\end{array}\n\\]\n\nIn particular, we extend our type-directed reduction rules and allow any term $M$ of type $\\one$ to be reduced to $()$.\n\n\\[\n  \\begin{array}{c}\n\\ianc{M \\not= ()}{\\Gamma \\vdash M \\red () : \\one}    {}\n  \\end{array}\n\\]\n\nWe extend our definition of $\\SN$ and $\\redSN$ as follows:\n\n\\[\n  \\begin{array}{c}\n\\ianc{}{\\Gamma \\vdash () : \\one \\in \\SN}    {} \\qquad\n\\ianc{\\Gamma \\vdash M : \\one}{\\Gamma \\vdash M \\redSN () : \\one}{}\n  \\end{array}\n\\]\n\nWe omit here the extensions in the proofs about $\\SN$, in particular the renaming, anti-renaming and substitution lemmas.\n\nAs $\\one$ is simply a new base type, we simply say\n\n\\begin{center}\n\\begin{tabular}{lcl}\n$\\inden{\\Gamma}{M}{\\one}$ & iff & $\\Gamma \\vdash M : \\one \\in \\SN$\n\\end{tabular}\n\\end{center}\n\nWe revisit our previous theorem \\ref{thm:redcand} and highlight the cases for unit.\n\n\\begin{theorem*}\n\\CR 1: If $\\inden{\\Gamma}{M}{A}$ then $\\Gamma \\vdash M : A \\in \\SN$. % , i.e. $\\den{A} \\subseteq \\SN$.%\n\\end{theorem*}\n\n\\begin{proof}\nInduction on type $A$.\n\n\\paragraph{Case} $A = \\one$.\n\\\\[1em]\n$\\Gamma \\vdash M : \\one \\in \\SN$ \\hfill by def. of $\\inden{\\Gamma}{M}{A}$\n\n\\end{proof}\n\n\\begin{theorem*}\n\\CR 2: If $\\Gamma \\vdash M : A \\in \\SNe$ then $\\inden{\\Gamma}{M}{A}$. % , i.e. $\\SNe \\subseteq \\den{A}$. %\n\\end{theorem*}\n\\begin{proof}\nInduction on $A$.\n\n\\paragraph{Case} $A = \\one$.\n\\\\[1em]\n$\\Gamma \\vdash M : \\one \\in \\SNe$ \\hfill by assumption \\\\\n$\\Gamma \\vdash M : \\one \\in \\SN$ \\hfill by def. of $\\SN$\\\\\n$\\inden \\Gamma M \\one$ \\hfill by def. of semantic interpretation of $\\one$\n\n\\end{proof}\n\n\n\\begin{theorem*}\n \\CR 3: If $\\Gamma \\vdash M \\redSN M' : A$ and $\\inden{\\Gamma}{M'}{A}$ then $\\inden{\\Gamma}{M}{A}$, i.e. backwards closure.\n\\end{theorem*}\n\\begin{proof}\nInduction on $A$.\n\n\\paragraph{Case} $A = \\one$\n\\\\[1em]\n$\\Gamma \\vdash M' : \\one \\in \\SN$ \\hfill by assumption $\\inden{\\Gamma}{M'}{\\one}$ \\\\\n$\\Gamma \\vdash M \\redSN M' : \\one$ \\hfill by assumption\n$\\Gamma \\vdash M : \\one \\in \\SN$ \\hfill by def. of $\\SN$\n\n\\end{proof}\n\nWe can now revisit the fundamental lemma.\n\n \\begin{lemma}[Fundamental lemma]\n If $\\Gamma \\vdash M : C$ and $\\inden{\\Gamma'}{\\sigma}{\\Gamma}$\n then $\\inden{\\Gamma'}{[\\sigma]M}{C}$.\n \\end{lemma}\n \\begin{proof}\n By induction on $\\Gamma \\vdash M : C$.\n\n\\paragraph{Case} $\\D = \\ianc{}{\\Gamma \\vdash () : \\one}{}$\n\\\\[1em]\n$\\Gamma \\vdash () : \\one \\in \\SN$ \\hfill by def. of $\\SN$\\\\\n$\\inden \\Gamma {()} \\one$ \\hfill by def. 
of $\\mathcal{R}_\\one$\\\\\n\n\\end{proof}", "meta": {"hexsha": "f8c4adc7944f38399228e2be998da55f47e56846", "size": 10813, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sn-proof/reducibility.tex", "max_stars_repo_name": "andreasabel/strong-normalization", "max_stars_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2017-05-22T14:33:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-05T12:12:03.000Z", "max_issues_repo_path": "sn-proof/reducibility.tex", "max_issues_repo_name": "andreasabel/strong-normalization", "max_issues_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-02-14T16:42:36.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-20T14:54:18.000Z", "max_forks_repo_path": "sn-proof/reducibility.tex", "max_forks_repo_name": "andreasabel/strong-normalization", "max_forks_repo_head_hexsha": "67bb99c3d0f4b70cfc7b0812a3f36ed55f8cf424", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-11-10T16:44:52.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-23T18:22:17.000Z", "avg_line_length": 43.7773279352, "max_line_length": 314, "alphanum_fraction": 0.621659114, "num_tokens": 3938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5720188465967906}}
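To make the reduction relation underlying these proofs concrete, here is a small, self-contained Haskell sketch of untyped $\beta$-normalization for the term language, using de Bruijn indices (all names such as \texttt{step} and \texttt{norm} are ours, and the type-directed unit rule reducing every unit-typed term to \texttt{()} is omitted, since this sketch carries no types):

\begin{verbatim}
data Tm = Var Int | Lam Tm | App Tm Tm | Unit deriving (Show, Eq)

-- shift free indices >= c by d (standard de Bruijn machinery)
shift :: Int -> Int -> Tm -> Tm
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)
shift _ _ Unit      = Unit

-- capture-avoiding substitution of s for index 0 in t
subst :: Tm -> Tm -> Tm
subst s t = go 0 t
  where
    go i (Var k) | k == i    = shift i 0 s
                 | k > i     = Var (k - 1)
                 | otherwise = Var k
    go i (Lam b)   = Lam (go (i + 1) b)
    go i (App f a) = App (go i f) (go i a)
    go _ Unit      = Unit

-- one leftmost-outermost beta step, also reducing under lambdas
step :: Tm -> Maybe Tm
step (App (Lam b) a) = Just (subst a b)
step (App f a)       = case step f of
                         Just f' -> Just (App f' a)
                         Nothing -> App f <$> step a
step (Lam b)         = Lam <$> step b
step _               = Nothing

-- iterate until no step applies; terminates exactly on SN terms,
-- which is what the corollary above guarantees for well-typed terms
norm :: Tm -> Tm
norm t = maybe t norm (step t)

-- e.g. norm (App (Lam (Var 0)) Unit) == Unit
\end{verbatim}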
{"text": "\\documentclass[../report/main.tex]{subfiles}\n \n\\begin{document}\n\n% The asterix after \\subsection disables section numbering\n\\subsection*{Algorithm Description}\n\nThis algorithm starts the salesman at a random city in the tour and repeatedly visits the nearest city until all have been visited. We implemented this algorithm in Python 3.5.  We first encountered the nearest neighbor algorithm when researching general TSP algorithms at ~\\cite{wikipediaTSP} . We learned more through it's specific Wikipedia article at ~\\cite{wikipediaNN} .\n\n\\subsection*{Algorithm Discussion}\n\nWe chose nearest neighbor because it is essentially greedy which makes a good choice for being efficient and quick. We found that this algorithm satisfies the 1.25 requirement for most cases.  Additionally, because it runs \"fairly\" fast for most input sizes, we were able to set it up to try each possible starting point.  For larger input sizes we had the algorithm write results every time it found a newer/better route.  This allowed us to find an acceptable route under 3 minutes (though there may have been better routes with unlimited time).  For the three example cases 2-opt found routes between 15\\% and 22\\% over optimal.  The algorithm runs in O($n^2$) time.\n\n\\subsection*{Algorithm Pseudo-code}\n\n\\begin{verbatim}\nNearest_Neighbor():\n    cities = []\n    // each city object has id, x and y attributes\n    read data from input file into the cities array  \n\n    adjMatrix=[len(cities)][len(cities)]\n    for i = 0 to len(cities) - 1:\n        for j = i + 1 to len(cities):\n            ij = dist(cities[i][j])\n            adjMatrix[i][j] = ij\n            adjMatrix[i][j] = ji\n\n    min_tour_dist = MAX_INT\n    min_tour_order = []\n\n    for i = 0 to len(cities) - 1:\n        tour_cities = copy of cities array values w/o references to orig array\n        tour_order = []\n\n        tour_order.append(tour_cities[i].id)\n        tour_cities.remove(tour_cities[i])\n        tour_distance = 0\n\n        while tour_cities length > 0:\n            cur_city = cities[tour_order[len(tour_order) - 1]]\n            min_dist = MAX_INT\n            min_city = -1\n\n            for j = 0 to len(tour_cities) - 1:\n                this_dist = adjMatrix[cur_city.id][tour_cities[j].id]\n\n                if this_dist == -1:\n                    this_dist = cartesian dist between cur_city and tour_cities[j]\n                    adjMatrix[cur_city.id][tour_cities[j].id] = this_dist\n                    adjMatrix[tour_cities[j].id][cur_city.id] = this_dist\n\n                if this_dist < min_dist:\n                    min_dist = this_dist\n                    min_city = tour_cities[j]\n\n            tour_order.append(min_city.id)\n            tour_cities.remove(min_city)\n            tour_distance = tour_distance + min_dist\n        u = cities[tour_order[0].id]\n        v = cities[tour_order[len(tour_order) - 1]].id\n        this_dist = adjMatrix[u][v]\n        if this_dist == -1:\n            this_dist = distance between cities[tour_order[0]] \n                        and cities[tour_order[len(tour_order) - 1]]\n            adjMatrix[u][v] = this_dist\n            adjMatrix[u][v] = this_dist\n\n        tour_distance = tour_distance + this_dist\n\n        if tour_distance < min_tour_dist:\n            min_tour_dist = tour_distance\n            min_tour_order = copy of tour_order's array by value \n\n            write the output file to the path specified\n\\end{verbatim}\n\n\\end{document}", "meta": 
{"hexsha": "abcdea09e5d77f3daa12dfceed085d0f2fe47cae", "size": 3432, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Alg04_Nearest_Neighbor/alg04.tex", "max_stars_repo_name": "OSU-CS-325/Project_Four_TSP", "max_stars_repo_head_hexsha": "c88e496b755fa5dfc3220f68a3daa3eba2e57e2e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Alg04_Nearest_Neighbor/alg04.tex", "max_issues_repo_name": "OSU-CS-325/Project_Four_TSP", "max_issues_repo_head_hexsha": "c88e496b755fa5dfc3220f68a3daa3eba2e57e2e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Alg04_Nearest_Neighbor/alg04.tex", "max_forks_repo_name": "OSU-CS-325/Project_Four_TSP", "max_forks_repo_head_hexsha": "c88e496b755fa5dfc3220f68a3daa3eba2e57e2e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0, "max_line_length": 669, "alphanum_fraction": 0.6541375291, "num_tokens": 821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255927, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5720188426637425}}
{"text": "\n\n\n\n\\chapter{First Notions of Category Theory}\n\\thispagestyle{empty}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n\\epigraph{\u201cThe human mind has never invented a labor-saving machine equal to algebra.\u201d }{\\textit{Stephan Banach (1925)}}\n\nFor us Category Theory is an area of interest on its own rights, rather than merely an elegant tool. Thus, we will introduce this theory with a point of view that emphasize the intuitive ideas behind each notion.\\\\\n\nWith this objective in mind, we will get into the habit of introducing the first examples even before introducing the formal definitions. Thus, when the formal definition is introduced it becomes meaningful.\\\\\n\nThe fundamental idea of Category Theory is that many properties can be unified if expressed in arrow diagrams. Intuitively, a diagram is a directed graph, such that each way of going from a node to another are equals. For example, the diagram:\n\n\\[\n  \\begin{tikzcd}\n    & b \\arrow[rd, \"g\"]& \\\\\n    a\\arrow[ru, \"f\"] \\arrow[rr, \"h\", dashed] && c\n  \\end{tikzcd}\n\\]\nMeans that $f\\circ g = h$. This approach to mathematics emphasises  at the relationships between elements, rather than at the structure of the elements themselves. In general, dashed lines means that the existence of that particular arrow is uniquely determined by the solid arrow presents in the diagram.\\\\\n\nIn this first chapter we introduce the notion of category and some useful properties. The principal references for this chapter are \\cite{mac2013categories} and \\cite{riehl2017category}. To show how categories can be used for programming we will illustrate several concepts with the programming language Haskell. Haskell references can be checked in \\cite{milewski2018category}.\n\n\\section{Metacategories}\nWe will start by defining a concept independent of the set theory axioms: the concept of \\emph{metacategory}. Then, categories will arise from studying these concepts within set theory. We follow  \\cite{mac2013categories} for these definitions.\\\\\n\n\nTraditionally, mathematics is based on the set theory. When we start set theory it is not necessary (it is not possible) to define what a set is. It is similar with the concepts of element and belonging, which are basic to set theory. Category theory can also be used to found mathematics. This theory provide meaning based on other concepts such as object, arrow or composition rather than sets or belonging. \\\\\n\n\\begin{definition} \\label{def:metagraph}\n  A \\emph{metagraph} consist of \\emph{objects}: $a,b,c..$ and \\emph{arrows} $f,g,h...$. There are also two pairings: $\\dom$ and $\\codom$. This pairings assigns each arrow with an object. An arrow $f$ with $\\dom(f)=a$ and $\\codom(f)=b$ is usually denoted as $f:a\\to b$.\\\\\n\\end{definition}\n\n\\begin{definition}\n  A metacategory  is a metagraph with two additional operations and two properties:\n  \\begin{itemize}\n  \\item Operations:\n    \\begin{itemize}\n      \n    \\item \\emph{Identity}: assigns to each object $a$ an arrow $1_a:a\\to a$. \n    \\item \\emph{Composition}: assigns to each pair of arrows $f,g$ with $\\codom(f)=\\dom(g)$ and arrow $g\\circ f$ such that the diagram:\n\n      \\[\n        \\begin{tikzcd}\n          & b \\arrow[rd, \"g\"]& \\\\\n          a\\arrow[ru, \"f\"] \\arrow[rr, \"g\\circ f\",swap, dashed] && c\n        \\end{tikzcd}\n      \\]\n\n      commutes. 
The arrow $g\\circ f$ is called the \\emph{composite} of $f$  and $g$.\n    \\end{itemize}\n\n  \\item Properties:\n    \\begin{itemize}\n    \\item \\emph{Associative}: given arrows $f,g,h$, we have that,\n      $$(g\\circ f) \\circ h = g \\circ (f \\circ h).$$\n    \\item \\emph{Unit}: given an object $a$, and arrows $f,g$ such that $\\dom (f)= a$ and $\\codom (g) = a$, we have that,\n      $$1_a \\circ g = g, \\qquad f \\circ 1_a = f.$$\n    \\end{itemize}\n  \\end{itemize}\n  In the context of categories and metacategories, arrows are often called \\emph{morphisms}.\n\\end{definition}\n\nWe have just define what a  metacategory is without any need of set and elements. In most cases we will rely in a set theory interpretation of this definitions, as most examples (specially with our interest in computational example) will rely on this theory. Nonetheless, whenever possible, we will define the concepts working only in terms of objects and arrows.\\\\\n\n\n\n% Diagramas de existencia condicionada.!!\n% We will also need diagrams that represents the existence of arrows due to the existence other arrows. , given objects $a,b,c$ and arrows $d$\n\n\n\n\\section{Set theory categories}\nDespite being an alternative foundation of mathematics, in general we will work with categories interpreted with notions of set theory. Unless otherwise stated, we will use the Zermelo-Fraenkel-G\u00f6del axiomatics with Axiom of Choice. In this we are going to difference between \\emph{small sets} and \\emph{classes}. Sets that are not small are called \\emph{large sets}. We recommend \\cite{kunen2014set} for more information.\\\\\n\nWe have no interest in this work in the reformulation of axioms. Note that  we have defined a meta-category without the need for notions of sets. In the same way,  many definitions and propositions relating to this theory can be considered without the need for notions of sets.\\\\\n\n\nFrom now on, we will work based on set theory. Despite that, it is useful to take into account that most of the concept here presented can be defined without making use of this Theory. \n\n\\begin{definition}\n  A category (resp. graph) is an interpretation of a metacategory (resp. metagraph) within a set theory.\n\\end{definition}\n\n\n\\footnote{{\\color{black} There is a discussion here that must be addressed at the end}}That is, a graph/category is a pair $(O,A)$ where a collection $O$ consisting of all objects as well as a collection $A$ consisting of all arrows. Their elements hold the same properties that objects and arrows hold on metacategories / metagraph.\\\\\n\n\nWe will focus on the category. First, we define the function homeset of a category $C=(O,A)$, wrote as $\\hhom_{C}$, as the function:\n\\begin{align*}\n  && \\hhom_{C}: O \\times O &\\mapsto \\mathcal{P}(A)&\\\\\n  \\displaystyle &\\ &(a,b)&\\mapsto \\{f\\in A | f:a\\to b\\}&\n\\end{align*}\n\nWe will refer to the collection of objects of a category $C$ as $Ob(C)$ and the collection of arrows as $Ar(C)$. When there is no possibility of confusion we will state $c\\in C$ meaning either $c\\in Ob(C)$ or $c\\in Ar(C)$.\n\n\\begin{definition}\n  We say that a category is \\emph{small} if the collection of objects is given by a set (instead of a proper class). We say that a category is \\emph{locally small} if every homesets is a set.\n\\end{definition}\n\nWe proceed to introduce a comprehensive list of examples, so that it is already introduced in subsequent chapters. 
\n\\begin{example}\n\n  \\begin{itemize}  \\ \n  \\item The elementary categories:\n    \\begin{itemize}\n    \\item The category $0 = ( \\emptyset, \\emptyset)$ where every property of metacategories is trivially satisfied.\n    \\item The category $1 = (\\{e\\},\\{1_e\\})$.\n    \\item The category $2 = (\\{a,b\\},\\{1_a,1_b,f:a\\to b\\})$\\label{2-category}\n    \\end{itemize}\n\n  \\item Discrete categories: are categories where every arrow is an identity arrow. This are sets regarded as categories, in the following sense: every discrete category $C=(A, \\{1_a : a \\in A\\})$ is fully identified by its set of object.  \n  \\item Monoids and Groups: A monoid is a category with one object (regarding the monoid of the arrows). In the same way, if we requires the arrows to be invertible, we can see a group as a single-object category. \n  \\item Preorder: From a preorder $(A, \\le)$ we can define a category $C = (A, B)$ where $B$ has an arrow $e: a \\to b$ for every $a,b\\in A$ such that $a \\le B$. The identity arrow is the arrow that arise from the reflexive property of the preorders. \n\n  \\item Large categories: these categories have a large set of objects. For example:\n    \\begin{itemize}\n    \\item The category $Top$ that has as objects all small topological spaces and as arrow continuous mappings.\n    \\item The category $Set$ that has as objects all small sets and as arrows all functions between them. We can also consider the category $Set_*$ of pointed small sets (sets with a distinguish point), and functions between them that maps preserving the marked point. This category, when restricted to finite sets, is known as $FinSet$.\n    \\item The category $Vect$ That has as object all small vector spaces and as arrows all the linear functions.\n    \\item The category $Grp$ that has as object all small vector group and as arrows all the homomorphism.\n    \\item The category $Top_*$ that has as object all small topological spaces with a distinguished point, and as arrows all the continuous functions that maps each distinguished points into distinguished points. Similarly we can consider $Set_*$ or $Grp_*$.\n    \\item The category $Grph$ that has as objects all small graphs, and as arrows all graphs morphisms.\n    \\end{itemize}\n\n  \\item The category $Hask$ of all Haskell types and all possible functions between two types.\\\\\n    \n  \\end{itemize}\n  Note that, for example, as natural numbers can be seen as either a set or a preorder, they also can be seen as a discrete category or a preorder category.\n\\end{example}\n\n\nSimple enough categories can be easily described with graphs: there are as many objects as nodes in the diagram and an arrow in the category for each:\n\\begin{itemize}\n\\item Arrow in the graph. \n\\item Composition of existing arrows.\n\\item Every node (identity arrow usually omitted in diagrams).\n\\end{itemize}\n\nFor example, we can fully represent the $2$ category with the diagram:\n\n\\[\n  \\begin{tikzcd}\n    a\\arrow[rr, \"g\"] && c\\\\\n  \\end{tikzcd}\n\\]\n\n\n\n\n\\subsection{Properties}\n\nWe can see that is common in mathematics to have an object of study (propositional logic clauses, groups, Banach spaces or types in Haskell). 
Once the purpose of studying these particular sets of objects is fixed, it is also common to proceed to consider the transformations between these objects (partial truth assignments, homomorphisms, linear bounded functionals or  functions in Haskell).\\\\\n\nIn categories, we have a kind of different approach to the subject. Instead of focusing on the objects themselves, we focus on how do they relate to each other. That is, we focus on the study of the arrows and how they composes. Therefore we can consider equal two objects that has the same relations with other objects. This inspire the next definition:\n\n\\begin{definition}\\cite[Definition 1.1.9]{riehl2017category}\n  Given a category $C=(O,A)$, a morphism $f: a \\to b \\in A$ is said to have a \\emph{left inverse} (resp. \\emph{right inverse}) if there exists a $g: b \\to a \\in A$ such that morphism $g \\circ f = 1_b$ (resp. $f \\circ g = 1_a$). A morpishm is an \\emph{isomorphism} if it has both left and right inverse, sloppily called the \\emph{inverse}. Two object are isomorphic is there exists an isomorphism between them.\n\\end{definition}\n\n\nIs easy to follow that if a morphism has a left and a right inverse, they must be the same, thus implying the uniqueness of the inverse. Also one can see that this functions define, by precomposition, bijections between $\\hhom(a,c)$ and $\\hhom(b,c)$ for all $c\\in O$.\\\\\n\nWe will now proceed to talk about certain arrows and objects that have properties that distinguish them from others.  The most useful example to gain an intuition of these properties is that of the $FinSet$ category, of all finite sets and functions between them.  Considering special arrows:\\\\\n\n\\begin{definition} An arrow $f$ is  \\emph{monic} (resp. \\emph{epic}) if it is \\emph{left-cancelable} (resp. \\emph{right-cancelable}), i.e.  $f\\circ g = f \\circ h \\implies g = h$ (resp. $g\\circ f = h \\circ f \\implies g = h$).\n\\end{definition}\n\n\n\nIn $FinSet$ these arrows are the injective functions (resp. surjective). Considering special objects:\n\n\\begin{definition}\n  an object $a$ is \\emph{terminal} (resp. \\emph{initial}) if for every object $b$ there exists an unique arrow $f:b\\to a$ (resp. $f:a\\to b$).  An object that is both terminal and initial is called \\emph{zero}.\n\\end{definition}\n\nIn $FinSet$ the initial object is the  empty set and the terminal object the one point set.\n\n\\begin{proposition}\\label{terminal-proposition}\n  Every two terminal object are isomorphic.\n\\end{proposition}\n\\begin{proof}\n  Every terminal object has only one arrow from itself to itself, and necessarily this arrow has to be the identity. Let $a, b$ be terminal object and $f:a\\to b$ and $g:b\\to a$ be the only arrows with that domain and codomain. Then $f\\circ g : a \\to a \\implies f \\circ g = 1_a$. Analogously $g \\circ f = 1_b$.\\\\\n\\end{proof}\n\n% \\subsubsection{Duality}\nAnother important property is the \\emph{duality property}. This property tell us that for every theorem that we prove for categories, there exist another theorem that is automatically also true, by inverting the direction of the arrows. To formalize this idea, we define the concept of \\emph{opposite category}.\n\n\n\\begin{definition}\\cite[Definition 1.2.1]{riehl2017category}\n  Let $C$ be any category. 
The opposite category $C^{op}$ has:\n  \\begin{itemize}\n  \\item the same objects as $C$,\n  \\item an arrow $f^{op}:b \\to a\\in C^{op}$ for each arrow $f: a\\to b \\in C$, so that the domain of $f^{op}$ is defined as the codomain of $f$ and viceversa.\n  \\end{itemize}\n  The remaining structure of the category $C^{op}$ is given as follows:\n  \\begin{itemize}\n  \\item For each object $a$, the arrow $1_a^{op}$ serves as its identity in $C^{op}$.\n  \\item We observe that $f^{op}$ and $g^{op}$ are composable when $f$ and $g$ are, and define $f^{op} \\circ g^{op} = (g \\circ f)^{op}$.\n  \\end{itemize}\n\\end{definition}\n\n\nThe intuition is that we have the same category, only that all arrows are turned around. We can see that  for each theorem $T$ that we prove, we have reinterpretation that theorem to the opposite category. Intuitively this theorem is an equal theorem in which all the arrows have been turned around. For example: the proposition\n\\ref{terminal-proposition} can be reworked as:\n\n\\begin{proposition}\\label{prop:initial}\n  Every two initial object are isomorphic.\n\\end{proposition}\n\n\nThat is because being initial is the dual property of being terminal, that is, if $a\\in C$ is a terminal object then $a\\in C^{op}$ is an initial object. The property ``being isomorphic'' is its own dual.\n\n\\subsection{Transformation in categories}\n\n\n\n\n% \\subsubsection{Functors}\nOne of the main ways of defining a category is considering as object every small set with an structure (e.g. groups, monoids, vector spaces), and as morphism all functions that preserve structure. Then, we may also follow our study defining the structure preserving transformation of categories.\n\n\\begin{definition}\n  Given two categories $C, B$, a \\emph{functor} $F: B \\to C$ is a pair of functions $F=(F':Ob(C)\\to Ob(B),F'':Ar(C)\\to Ar(B))$ (the \\emph{object functor} and the \\emph{arrow functor} respectively) in such a way that:\n  $$F''(1_c) = 1_{F'c}, \\ \\forall c \\in Ob(C), \\qquad F''(f\\circ g) = F''f \\circ F''g, \\ \\ \\forall f:a\\to b,g: b\\to c  \\in Ar(C).$$\n\n\\end{definition}\n\n\nRoughly speaking,  a functor is a morphism of categories. When there is no ambiguity we will represent both $F'$ and $F''$ with a single symbol $F$ acting on both objects and arrows. Also, it can be seen in the definition that whenever possible the parentheses of the functor will be dropped. This loss of parentheses will be replicated throughout the text, whenever possible.\\\\\n\n\n\n\\begin{remark}\n  A functor $F$ can be defined only pointing out how it maps arrows, as how $F$ maps object can be defined with how it map identity arrows.\n\\end{remark}\nLets provide some examples of functors:\\\\\n\n\n\n\\begin{example}\\ \n  \\begin{itemize}\n  \\item {Forgetfull functor}: We have a variety of categories consisting of structures with sets as objects and functions that hold structure such as arrows (eg Top, Grp or Vect).\\\\\n    \n    Let $C$ one of such  categories, then we have a functor $F:C\\to Set$ that maps each object to its underlying set and each arrow to the equivalent arrow between sets. That is, this functor forgets about the structure that is present in $C$.\\\\\n\n    Additionally, we will often have functors defined $F:C\\to Set$. For some cases of $C$ it also happen that the image of $Ob(C)$ by $F'$ has some more structure on it. 
In that case we will say that $F$ is an enriched functor for that cases.\\\\\n\n    Another similar case of Forgetful functor is $\\mathcal{U}:Cart \\to Grph$ that maps each (small) category to their \\emph{underlying} graph. \n  \\item Fundamental group: In the context of algebraic topology we have the functor of the fundamental group $\\Pi_1: Top_* \\to Grp$. The most famous property of this function is that\n\n    \n   \\begin{displayquote}\n      ``Given continuous application $ f,g:X \\to Y$ between two topological spaces induces an application of the group of loops of $X$ on the group of loops of $Y$ such that $\\Pi_1(f \\circ g) = \\Pi_1(f) \\circ \\Pi_1(g)$.''\n   \\end{displayquote}\n\n    that is, the functoriality of the function (taking also into account the fact that the identity of topological spaces is mapped to the identity of group). \n\n  \\item Stone-\\v{C}ech compactification\\label{example:stone-cech}:  From the realm of functional an\u00e1lisis we have the following theorem:\n\n    \\begin{theorem} Let $\\mathbb{K}=\\mathbb R $ or $\\mathbb C$ and let $\\Omega$ be a completely regular Hausdorff topological space. Then there exists a compact Hausdorff topological space $\\beta \\Omega$ and a homeomorphism $\\Phi$, from $\\Omega$ to a dense subset of $\\beta\\Omega$. Moreover, for every continuous and bounded function $f: \\Omega \\to K$ there exists a unique $\\overline{f} \\in C(\\beta \\Omega)$ such that $\\overline{f}\\circ \\Phi = f$ and the application $f \\to \\overline{f}$ is an isomorphism. Finally, $\\beta \\Omega$ is uniquely determined up to isomorphism by the above properties.\n    \\end{theorem}\n\n    \\begin{proof}\n      We define the functional $\\Delta: \\Omega \\to \\Omega^{**}$   such that $\\Delta(t)=\\delta_t$ where $\\delta_t(f) = f(t)$ is Dirac's operator .  It is enough to take as $\\beta \\Omega$ the closure of $\\Delta(\\Omega)$ in the induced weak-topology $\\omega^*$ of $\\Omega^{**}$ (since in the weak topology we have that a bounded closed is compact).  \n      \n      Given a continuous and bounded function $f:\\Omega \\to\\mathbb K$ we can define the function $\\overline{f}$ by injection of $\\Omega$ and then limit to maintain continuity (which will exist by compactness and boundedness of $f$). Thus the maximum of $\\overline{f}$ can be reached by a sequence of points injected from  $\\Omega^{**}$ obtaining that   \n      $${\\displaystyle\\max\\left\\{|\\ \\overline{f}\\ (s)\\ | : s \\in \\beta\\Omega\\right\\} = \\sup\\left\\{| f(s) |: s \\in \\Omega\\right\\}}.$$\n      \n      The application $f \\mapsto \\overline{f}$ is therefore an isometric isomorphism. Thanks to this isometric isomorphism the function spaces and by the Banach-Stone theorem\\cite[Theorem 3]{banach1932theorie} we have uniqueness up to isomorphisms. \n    \\end{proof}\n\n    Based on this theorem we can define a functor  $\\beta:Hauss \\to Comp$ where $Hauss$ is the category of Completely regular Haussdorff  Spaces and continuous functions, and $Comp$ the category of Compact Haussdorff Spaces along with continuous function. 
This functor:\n    \\begin{itemize}\n    \\item Take an object $\\Omega \\in Hauss$ to $\\beta(\\Omega) = \\beta\\Omega$:\n    \\item Take an arrow $f:\\Omega \\to \\Omega'\\in Hauss$ to $\\Delta(f)$ and the complete it by continuity as in the previous proof.\n    \\end{itemize}\n    \n    % {\\color{red} Question}: I want to include this example of functor in order to later reuse it in universality section (opening of chapter 2).\\\\\n\n    % I'm having a bit of trouble doing this though, as I do not see how does it maps arrows. I have found this proof of its functoriality:\n\n    % \\begin{displayquote}\n    %   As is usual for universal constructions like this, the extension property makes $\\Beta$ a functor from $Top$ (the category of topological spaces) to $CHaus$ (the category of compact Hausdorff spaces).\n    % \\end{displayquote}\n\n    % However, I was hoping to provide a more elemental proof, so I will continue looking. If nothing is founded, there will be no problem to drop this example, or to explain directly when extension property is explained.\n    % {\\color{red} End Question}\n\n\n  \\item Group actions:\\label{group-action} Every group action can be seen as a functor from a Group to a Set.\\\\\n    \n    Let $G$ be a group seen as a category with only one element and let $X$ be a set seen as a discrete category. Then, an action is a representation of the group of the endomorphism of the set, that is, an action $\\alpha$ associate each element of the group with an function $\\alpha(g):X\\to X$ such that the product of element of the group is maintain via composition. That is for $g\\in Ar(G)$ we have that $\\alpha(g) \\in Ar(X)$  and for all $ g,f\\in Ar(G):$\n    $$\\alpha(g\\circ f) = \\alpha(g) \\circ \\alpha(f).$$\n\n    Notice that, by the definition of $G$, the composition $g\\circ f$ actually correspond to the product in $G$.\n    \n    Note that is a group seen as a category the product is the composition thus having the functoriality.\\\\\n    \n  \\item \\texttt{Maybe} Method in Haskell \\cite[Section 7.1]{milewski2018category}: In Haskell the definition of Maybe is a mapping from a type $a$ to a type $\\maybe  a$. We define the BN notation used in the example in \\ref{def:BN-Notation}.\n\n    \\begin{center}\n      \\begin{lstlisting}[language=Haskell,caption={Declaration of Maybe},captionpos=b]\n        data Maybe a = Nothing | Just a\n      \\end{lstlisting}\n    \\end{center}\n    \n    Note that \\texttt{Maybe a} is not a type but a function of types. In order for it to be an endofunctor of the $Hask$ category we will need for it to also map function. Let $f: a \\to b\\in Hask$, then we define the function\n    $$\\maybe  f: \\maybe a \\to \\maybe b$$\n    such that\n    $$(\\maybe  f) (Nothing) = Nothing, \\qquad (\\maybe f) (Just\\  a) = Just\\  f(a).$$\n    Note that $(\\maybe id_a) (\\maybe a) = Nothing\\  |\\  Just\\  id_a(a) =  \\maybe a$.\n  \\end{itemize}\n\\end{example}\n\n\nWe can now construct the category of all small categories $Cat$. This category has as object all small categories and as arrows all functor between them. Note that $Cat$ does not contain itself, since it is a large category.\\\\\n\nWe can consider some properties in functors. Note that we can consider a functor $T:C\\to D$ as a set function over homeset, that is, a function: \n$$T:\\hhom_C(a,b) \\to \\{Tf: Ta \\to Tb | f \\in C\\} \\subset \\hhom_D(Ta,Tb)$$\n\n\\begin{definition}\n  A functor $T:C\\to D$ is \\emph{full} if it is surjective as a function over homesets, i.e. 
if the function $T:\\hhom_C(a,b) \\to  \\hhom_D(Ta,Tb)$  is surjective for every $a,b \\in C$. A functor is \\emph{faithfull} if it is injective over homesets.\n\\end{definition}\n\nWe can consider functors $T:C^{op} \\to D$. It is customary to refer to this type of functions a \\emph{contravariant} functor from $C$ to $D$ and regular functors $F:C\\to D$ as \\emph{covariant}.\n\n\n\n\n% \\subsubsection{Natural Transformations}\n\nWe shall continue defining natural transformation. In the words of Saunders Mac Lane:\n\n\\begin{displayquote}\n  ``Category has been defined in order to define functor and functor has been defined in order to define natural transformation.''\n\\end{displayquote}\n\n\n\n\nOne can see a functor $T:C\\to D$ as a representation of a category in another, in the sense that a functor provide a picture of the category $C$ in $D$. Further into this idea, we can consider how to transform these drawings into each other. \n\n\\begin{definition}\n  Given two functor $T,S:C\\to D$, a \\emph{natural transformation} $\\tau : T \\Rightarrow S$ is a function from $Ob(C)$ to $Ar(D)$ such that for every arrow $f:c \\to c' \\in C$ the following diagram commutes:\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}[column sep=20mm]\n          Tc\\ar[r,\"\\tau c\"]\\ar[d,\"Tf\"] & Sc\\ar[d,\"Sf\"]\\\\\n          Tc'\\ar[r,\"\\tau c'\"] & Sc'\\ .\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n\n A natural transformation where every arrow $\\tau c$ is invertible is called a \\emph{natural equivalence} and the functors are \\emph{naturally isomorphic}.\n\\end{definition}\n\\begin{remark}\n  In come context, subindex $\\tau_c$ are used to note natural transformation, specially when working in composition with other arrows.\n\\end{remark}\n\nThat is a natural transformation is a map from pictures of $C$ in to pictures of $D$. Note that a natural transformation acts only on the domain of objects. Lets provide some examples.\\\\\n\nWhen we have to define natural transformations  it will often be useful to define each arrow individually. In this case, with the same notation as in the definition,  we will note $\\tau c = \\tau_c: Tc \\to Sc$. \n\n\\begin{example}\\ \n  \\begin{itemize}\n  \\item The opposite group: We can define the opposite functor $(\\cdot)^{op}: Grp \\to Grp$ that maps $(G, *)$ to $(G; *^{op})$ where $a*^{op}b= b*a$ and maps a morphism $f: G \\to G'$ to $f^{op}(a) = f(a)$ and have that\n    $$ f^{op}(a*^{op}b) = f(b*a) = f(b)*f(a) = f^{op}(a) *^{op} f^{op}(b).$$\n\n    Denoting the identity function $Id:Grp\\to Grp$, we have a natural transformation $\\tau:Id \\Rightarrow (\\cdot)^{op}$ defined by $\\tau_G(a)=a^{-1}$ for all $a\\in G$.\n    \n  \\item In Haskell, the $\\operatorname{safeHead}$ \\cite[Section 10.1]{milewski2018category} function between the $\\operatorname{List}$ functor and the $\\maybe$ functor. 
To be precise, \n    \\begin{itemize}\n    \\item The $\\operatorname{list}$ functor maps the type $a$ to the type $[a]$ and assigns each function $f:a\\to b$ to $f':[a]\\to [b]$ that applies $f$ elements wise (on empty list it does nothing).\n      \\begin{lstlisting}[language=Haskell,captionpos=b]\n        data List a = Nil | Cons a ( List a)\n      \\end{lstlisting}\n    \\item  With this definition, we can define the natural transformation $\\operatorname{SafeHead}: \\operatorname{List} \\to \\operatorname{Maybe}$ as:\n      \\begin{lstlisting}[language=Haskell,captionpos=b]\n        safeHead :: [a] -> Maybe a\n        safeHead [] = Nothing\n        safeHead (x : xs) = Just x\n      \\end{lstlisting}\n\n      or in category notation:  $\\tau : List \\to Maybe$ such that, for every type $a$:\n      $$\\tau a ([]) = \\operatorname{Nothing}, \\qquad \\tau a (x:xs) = \\operatorname{Just} x.$$ \n    \\end{itemize}\n  \\end{itemize}\n\\end{example}\n\n\n\\subsection{Constructions}\nIn this last subsection we will introduce some standard construction on categories, along with some examples of these constructions.\n\n\n\\subsubsection{Product Category}\nWe present now one of the most usual construction in mathematics: the product. We will additionally consider the ``universal'' properties of product on the next chapter. \n\n\\begin{definition}\n  Let $B,C$ be categories. Then the \\emph{product category} $B\\times C$ is the category that has as objects the pairs $\\{\\langle b,c\\rangle : b \\in Ob(B), c\\in Ob(C)\\}$ and as arrows the pairs of arrows $\\{\\langle f,g\\rangle: f \\in Ar(B), g\\in Ar(C)\\}$. The composition of arrows is defined by the elementwise composition. \n\\end{definition}\n\nIt is clear that we can define two functors $P: B\\times C \\to B$ and $Q: B \\times C \\to C$ that restricts the category to each of its component parts (functorial axioms follow immediately). Moreover, we can see that any functor $F:D\\to B\\times C$ will be uniquely identified by its composition with $P$ and $Q$.\\\\\n\nComplementary, for any two functors $F:D\\to B, G:D\\to C$ we can define a functor $F \\times G : D \\to B\\times C$ that maps $(F\\times G) \\langle f,g\\rangle = \\langle Fg, Gg\\rangle$. Expressed as a diagram: \n\n\\[\n  \\begin{tikzcd}\n    {} & D\n    \\arrow[swap, \"F\"]{ddl}\n    \\arrow[dashed, \"F\\times G\"]{dd}\n    \\arrow[\"G\"]{ddr}\\\\\n    {} & \\\\\n    B & B \\times C\n    \\arrow[\"P\"]{l}\n    \\arrow[\"Q\"]{r} & C\n  \\end{tikzcd}\n\\]\n\nA functor $F: B^{op}\\times C \\to D$ is called  a \\emph{bifunctor}. Arguably the most important bifunctor is the $\\hhom_C$ function. Given a category $C$ we can see $\\hhom_C: C^{op}\\times C \\to Set$ as a bifunctor such that for all $a,b \\in Ob(C)$:\n\\[\n  \\hhom_C(\\cdot,\\cdot)(a,b) = \\hhom_C (a,b) \n\\]\nfor the objects. 
\n\n\n\\subsection{Constructions}\nIn this last subsection we will introduce some standard constructions on categories, along with some examples of these constructions.\n\n\n\\subsubsection{Product Category}\nWe now present one of the most common constructions in mathematics: the product. We will additionally consider the ``universal'' properties of products in the next chapter. \n\n\\begin{definition}\n  Let $B,C$ be categories. Then the \\emph{product category} $B\\times C$ is the category that has as objects the pairs $\\{\\langle b,c\\rangle : b \\in Ob(B), c\\in Ob(C)\\}$ and as arrows the pairs of arrows $\\{\\langle f,g\\rangle: f \\in Ar(B), g\\in Ar(C)\\}$. The composition of arrows is defined componentwise. \n\\end{definition}\n\nIt is clear that we can define two functors $P: B\\times C \\to B$ and $Q: B \\times C \\to C$ that project the category onto each of its component parts (the functorial axioms follow immediately). Moreover, we can see that any functor $F:D\\to B\\times C$ is uniquely determined by its compositions with $P$ and $Q$.\\\\\n\nComplementarily, for any two functors $F:D\\to B, G:D\\to C$ we can define a functor $F \\times G : D \\to B\\times C$ that maps $(F\\times G) f = \\langle Ff, Gf\\rangle$ for every arrow $f$ of $D$. Expressed as a diagram: \n\n\\[\n  \\begin{tikzcd}\n    {} & D\n    \\arrow[swap, \"F\"]{ddl}\n    \\arrow[dashed, \"F\\times G\"]{dd}\n    \\arrow[\"G\"]{ddr}\\\\\n    {} & \\\\\n    B & B \\times C\n    \\arrow[\"P\"]{l}\n    \\arrow[\"Q\"]{r} & C\n  \\end{tikzcd}\n\\]\n\nA functor $F: B^{op}\\times C \\to D$ is called  a \\emph{bifunctor}. Arguably the most important bifunctor is the $\\hhom_C$ functor. Given a category $C$ we can see $\\hhom_C: C^{op}\\times C \\to Set$ as a bifunctor such that for all $a,b \\in Ob(C)$:\n\\[\n  \\hhom_C(\\cdot,\\cdot)(a,b) = \\hhom_C (a,b) \n\\]\non objects. On arrows, for all $f:a\\to a', g:b\\to b' \\in C$ and for all $h\\in \\hhom_C(a',b)$:\n\\begin{align*}\n  \\hhom_C(f^{op},g): \\hhom_C (a',b)  & \\to \\hhom_C(a,b'), \\\\\n  \\hhom_C(f^{op},g) (h)   &= g \\circ h \\circ f.\n\\end{align*}\n\nFrom this bifunctor, for every $c\\in Ob(C)$ and $g:d\\to d' \\in C$ we can define two functors:\n\\begin{itemize}\n\\item  The functor $\\hhom_C(c, \\cdot):C\\to Set$ such that for all $d \\in Ob(C)$:\n  \\[\n   \\hhom_C(c,\\cdot)(d) = \\hhom_C (c,d),\n \\]\n and, for all $h\\in \\hhom_C(c,d)$,\n  \\begin{align*}\n    \\hhom_C(c,g): \\hhom_C (c,d) & \\to \\hhom_C(c,d'),\\\\\n    \\hhom_C(c,g) (h)   &= g \\circ h .\n  \\end{align*}\nThis functor is often called the \\emph{post-composition} functor.\n\\item  Analogously, we can define the contravariant functor $\\hhom_C(\\cdot, c):C^{op}\\to Set$, which acts on arrows by pre-composition. This functor is accordingly called the \\emph{pre-composition} functor.\n\\end{itemize}
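\n\nIn Haskell, the functor $\\hhom(c,\\cdot)$ has a familiar counterpart: the functor of functions out of a fixed type $c$, whose $\\operatorname{fmap}$ is literally post-composition. A minimal sketch of our own, using only the standard $\\operatorname{Functor}$ instance for functions:\n\\begin{lstlisting}[language=Haskell,captionpos=b]\n  -- For functions out of c, fmap is composition: fmap g h = g . h,\n  -- i.e. hom(c,g) sends h : c -> d to g . h : c -> d'.\n  postCompose :: (d -> d') -> (c -> d) -> (c -> d')\n  postCompose g = fmap g\n\\end{lstlisting}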
\n\\subsubsection{Functor Categories}\nWe continue by defining functor categories, that is, categories that have functors as objects and natural transformations as arrows, in a sense we now make precise. This concept will be instrumental in further considerations in the domain of functional programming (in particular, in the definition of a monad).\\\\\n\nBefore explaining composition of natural transformations, it is useful to present the following diagram. Let $A,B$ be categories, $F,G:A\\to B$ be functors and $\\tau:F\\to G$ a natural transformation. It is common to represent this structure with:\n\\[\n  \\begin{tikzcd}[column sep=huge]\n    A\n    \\arrow[bend left=40]{r}[name=F,label=above:$\\scriptstyle F$]{}\n    \\arrow[bend right=40]{r}[name=G,label=below:$\\scriptstyle G$]{} &\n    B\n    \\arrow[shorten <=3pt,shorten >=3pt,Rightarrow,to path={(F) -- node[label=right:$\\tau$] {} (G)}]{}\n  \\end{tikzcd}\n\\]\n\nLet us define the composition of two natural transformations. We will start with the composition of two natural transformations $\\sigma, \\tau$ as in:\n\n\\[\n  \\begin{tikzcd}[column sep=huge]\n    A\n    \\arrow[bend left=60]{r}[name=F,label=above:$\\scriptstyle F$]{}\n    \\arrow{r}[name=G,label=right above:$\\scriptstyle G$]{}\n    \\arrow[bend right=60]{r}[name=H,label=below:$\\scriptstyle H$]{}  &\n    B\n    \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(F) -- node[label=left:$\\tau$] {} (G)}]{}\n    \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(G) -- node[label=left:$\\sigma$] {} (H)}]{}\n  \\end{tikzcd}\n\\]\n\nDue to this representation, this composition of natural transformations is called \\emph{vertical composition}, in opposition to the \\emph{horizontal composition} (Definition \\ref{horizontal-composition}).\n\n\\begin{definition}\\label{vertical-composition}\n  Let $C$ and $B$ be two categories, $R,S,T : C \\to B$ be functors, and let $\\tau: R \\to S$, $\\sigma:S\\to T$. We define the composition $\\sigma \\cdot \\tau : R \\to T$ by $(\\sigma \\cdot \\tau)c = \\sigma c\\circ \\tau c$.\n\\end{definition}\n\n\nTo see that $\\sigma \\cdot \\tau$ is a natural transformation it suffices to consider the following diagram \\cite{stack-composition-natural}:\n\n\\[\n\\begin{tikzcd}[row sep = 1.4cm, column sep = 1.4cm]\n  R c\n  \\arrow[lddr, to path= { --\n    ([xshift=-1ex]\\tikztostart.west)\n    -| ([xshift=-2ex]\\tikztotarget.west)\n    |- (\\tikztotarget)}]\n  \\arrow[d, swap, \"\\tau c\"]\n  \\arrow[r, \"R f\"] \n  & R c'\n  \\arrow[d, \"\\tau c'\"]\n  \\arrow[rddl, to path= { --\n    ([xshift=1ex]\\tikztostart.east) \n    -| ([xshift=2ex]\\tikztotarget.east)\n    -- (\\tikztotarget)}, \"(\\sigma \\cdot \\tau) c'\"]\n  \\\\\n  S c\n  \\arrow[d, swap, \"\\sigma c\"] \n  \\arrow[r, \"S f\"] & S c'\n  \\arrow[d, \"\\sigma c'\"] \\\\\n  T c \n  \\arrow[r, \"T f\"] & T c'\n\\end{tikzcd}\n\\]\n\n\n\n\\begin{definition}\n  Let $B,C$ be categories. We define the \\emph{functor category} from $C$ to $B$, written $B^C$, as the category with all functors $F:C \\to B$ as objects, natural transformations as arrows, and composition as defined in Definition \\ref{vertical-composition}.\n\\end{definition}\n\n\nWe now present some examples to provide some intuition about this type of construction.\n\n\\begin{example}\\ \n  \\begin{itemize}\n  \\item The category of group actions over sets: As we have seen in Example \\ref{group-action}, each group action over a set is a functor. Let $G$ be a group and $S$ be a set, each seen as a category (the latter as a discrete one). Then we can consider the category  $S^G$, which has as objects the group actions of $G$ over $S$ and as arrows the morphisms of actions.\\\\\n    \n  \\item  In Example \\ref{2-category} we defined the category $2$, and we can therefore consider the category of functors from $2$ to $C$. This category is called the arrow category, as there is a functor from $2$ to $C$ for each arrow in $C$ and conversely.\\\\\n\n    This example displays an interesting idea: we can encode small collections of objects as functions/functors. The same idea can be seen when we define a pair of real numbers as a function $f:\\{0,1\\} \\to \\R$. Should we want, for example, to consider all the squares in a category $C$, we might as well define the square category $Sqr$ and consider every functor $F:Sqr \\to C$.\\\\\n    \\begin{figure}[!h]\n      \\[\n        \\begin{tikzpicture}\n          \\node {\\begin{tikzcd}[column sep=20mm]\n              a\\ar[r,\"f\"]\\ar[d,\"g\"] & a'\\ar[d,\"h\"]\\\\\n              b\\ar[r,\"k\"] & b'\n            \\end{tikzcd}};\n        \\end{tikzpicture}\n      \\]\n      \\caption*{The square category.}\n    \\end{figure}\n\n    Note that the functors $F:Sqr \\to C$ are precisely the commutative squares in $C$, that is, the arrows of the arrow category of $C$.\n    \n  \\item We can use this construction to study the category $C^C$ of endofunctors of a particular category $C$. This case is particularly interesting when studying the endofunctors present in $Hask^{Hask}$, where we will later consider the monad as a programming structure. \n  \\end{itemize}\n\\end{example}
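\n\nIn $Hask$, vertical composition is easy to picture: componentwise it is plain function composition. A small sketch of our own, using $\\operatorname{safeHead}$ from above together with $\\operatorname{maybeToList}$ from the standard module $\\operatorname{Data.Maybe}$:\n\\begin{lstlisting}[language=Haskell,captionpos=b]\n  import Data.Maybe (maybeToList)\n\n  -- safeHead : List -> Maybe and maybeToList : Maybe -> List are natural\n  -- transformations; their vertical composite is just their composition.\n  truncateToHead :: [a] -> [a]   -- keeps at most the head element\n  truncateToHead = maybeToList . safeHead\n\\end{lstlisting}\nFor instance, $\\operatorname{truncateToHead}\\ [1,2,3] = [1]$ and $\\operatorname{truncateToHead}\\ [\\,] = [\\,]$.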
\n\nNote that this is not the only way in which the composition of natural transformations can be defined. In fact, we can define another functor category. In this case we compose two natural transformations as in:\n\n\\[\n  \\begin{tikzcd}[column sep=huge]\n    B\n    \\arrow[bend left=40]{r}[name=F,label=above:$\\scriptstyle F$]{}\n    \\arrow[bend right=40]{r}[name=G,label=below:$\\scriptstyle G$]{} &\n    C\n    \\arrow[bend left=40]{r}[name=F',label=above:$\\scriptstyle F'$]{}\n    \\arrow[bend right=40]{r}[name=G',label=below:$\\scriptstyle G'$]{}&\n    D\n    \\arrow[shorten <=3pt,shorten >=3pt,Rightarrow,to path={(F) -- node[label=right:$\\tau$] {} (G)}]{}\n    \\arrow[shorten <=3pt,shorten >=3pt,Rightarrow,to path={(F') -- node[label=right:$\\sigma$] {} (G')}]{}\n  \\end{tikzcd}\n\\]\n\nLet us formalize this composition:\n\\begin{definition}\\label{horizontal-composition}\n  Let $B,C,D$ be categories, $F,G: B \\to C, F',G':C\\to D$ be functors, and let $\\tau: F \\to G$, $\\sigma:F'\\to G'$. We define the \\emph{horizontal composition} $(\\sigma \\circ \\tau): F'\\circ F \\to G'\\circ G$ by: $$(\\sigma \\circ \\tau) c = G' \\tau c \\circ \\sigma F c$$  for all $c\\in B$.\n\\end{definition}\n\nIn this case we can see that the horizontal composite of two natural transformations is indeed a natural transformation due to the commutativity of:\n\n\\[\n  \\begin{tikzpicture}\n    \\node {\\begin{tikzcd}[column sep=20mm]\n        F'Fc      \\ar[r,\"\\sigma F c\"]\n        \\ar[d,\"F'\\tau c\"]\n        \\ar[dr,\"(\\sigma\\circ \\tau)c\"]\n        & G'Fc\\ar[d,\"G'\\tau c\"]\\\\\n        F'Gc\\ar[r,\"\\sigma Gc\"] & G'Gc\n      \\end{tikzcd}};\n  \\end{tikzpicture}\n\\]
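\n\nIn $Hask$ the component formula $G' \\tau c \\circ \\sigma F c$ can be written down directly. As a sketch of our own, we can compose $\\operatorname{safeHead}$ with itself horizontally, obtaining a transformation from $List \\circ List$ to $Maybe \\circ Maybe$:\n\\begin{lstlisting}[language=Haskell,captionpos=b]\n  -- Horizontal composite of safeHead with itself, following the\n  -- component formula: first sigma at F c, then G' applied to tau c.\n  headOfHead :: [[a]] -> Maybe (Maybe a)\n  headOfHead = fmap safeHead . safeHead\n\\end{lstlisting}\nFor instance, $\\operatorname{headOfHead}\\ [[1,2],[3]] = \\operatorname{Just}\\ (\\operatorname{Just}\\ 1)$.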
\n\n\nWith this definition of composition we can consider yet another category: the category of all functors between all (small) categories, that is, the category that has all functors as objects, and the natural transformations, with horizontal composition, as arrows.\\\\\n\nWhen we have to consider both compositions at the same time we denote the vertical composition with $\\tau \\cdot \\sigma$ and the horizontal composition with $\\tau \\circ \\sigma$, as in  \\cite{mac2013categories}. Lastly we have to consider how these compositions relate to each other. This is seen in the \\emph{interchange law}, also known as the \\emph{Godement relation}:\\\\\n\n\\begin{proposition}\n  Let $A,B,C$ be categories, let $F,G,H:A\\to B$ and  $F',G',H':B\\to C$ be functors and $\\tau: F \\to G,\\sigma:G\\to H,\\tau': F' \\to G'$ and $\\sigma' : G'\\to H'$ be natural transformations. Then:\n  $$(\\sigma' \\circ \\sigma)\\cdot (\\tau' \\circ \\tau) = (\\sigma' \\cdot \\tau')\\circ (\\sigma\\cdot \\tau).  $$\n\\end{proposition}\n\\begin{proof}\n  We have a structure like   \n\n  \\[\n    \\begin{tikzcd}[column sep=huge]\n      A\n      \\arrow[bend left=60]{r}[name=F,label=above:$\\scriptstyle F$]{}\n      \\arrow{r}[name=G,label=right above:$\\scriptstyle G$]{}\n      \\arrow[bend right=60]{r}[name=H,label=below:$\\scriptstyle H$]{}  &\n      B\n      \\arrow[bend left=60]{r}[name=F',label=above:$\\scriptstyle F'$]{}\n      \\arrow{r}[name=G',label=right above:$\\scriptstyle G'$]{}\n      \\arrow[bend right=60]{r}[name=H',label=below:$\\scriptstyle H'$]{}  &\n      C\n      \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(F) -- node[label=left:$\\tau$] {} (G)}]{}\n      \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(G) -- node[label=left:$\\sigma$] {} (H)}]{}\n      \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(F') -- node[label=left:$\\tau'$] {} (G')}]{}\n      \\arrow[shorten <=3pt,shorten >=2pt,Rightarrow,to path={(G') -- node[label=left:$\\sigma'$] {} (H')}]{}\n    \\end{tikzcd}\n  \\]\n\n  From the naturality of $\\sigma'$ applied to the arrow $\\tau c : Fc \\to Gc$ we have that $H'(\\tau c) \\circ \\sigma' (Fc) = \\sigma' (Gc) \\circ G'(\\tau c)$, hence for all $c\\in A$:\n  \\begin{align*}\n    ((\\sigma'\\cdot \\tau')\\circ (\\sigma\\cdot \\tau))(c)\n    & = H' ((\\sigma\\cdot \\tau) c ) \\circ (\\sigma'\\cdot \\tau') (F c)  \\\\\n    & = H' ( \\sigma c \\circ \\tau c) \\circ (\\sigma' (Fc)\\circ \\tau' (F c) )  \\\\\n    & = H' ( \\sigma c) \\circ H'(\\tau c) \\circ \\sigma' (Fc)\\circ \\tau' (F c)   \\\\\n    &  = H' ( \\sigma c) \\circ \\sigma' (Gc) \\circ G'(\\tau c) \\circ \\tau' (F c)   \\\\ \n    &  = (\\sigma' \\circ \\sigma) (c) \\circ  (\\tau' \\circ \\tau) (c)   \\\\\n    &  = ((\\sigma' \\circ \\sigma) \\cdot  (\\tau' \\circ \\tau)) (c). \n  \\end{align*}\n\\end{proof}\n\nFinally, to relate products and functor categories, we can see that given three small categories $A,B,C$ we have a bijection:\n\n$$\\hhom_{Cat}(A\\times B, C) \\cong \\hhom_{Cat}(A, C^B).$$\n\nThis fact will be taken into account further in the text, as it means that the functor $\\cdot \\times B: Cat \\to Cat$ has a right adjoint.
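\n\nIn $Hask$ this bijection is precisely \\emph{currying}. As a sketch (the functions $\\operatorname{curry}$ and $\\operatorname{uncurry}$ are in the Haskell Prelude; the surrounding names are ours):\n\\begin{lstlisting}[language=Haskell,captionpos=b]\n  -- hom(A x B, C) is isomorphic to hom(A, C^B):\n  toExponential :: ((a, b) -> c) -> (a -> (b -> c))\n  toExponential = curry\n\n  fromExponential :: (a -> (b -> c)) -> ((a, b) -> c)\n  fromExponential = uncurry\n\\end{lstlisting}\nThe two functions are mutually inverse, witnessing the bijection at the level of types.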
\n\\subsubsection{Comma Category}\n\nTo define the comma category, we shall define the category $(c \\downarrow S)$ of objects $S$-under $c$, sketch the dual notion, and generalize these two concepts with the comma category.\\\\\n\nGiven a functor $F:B\\to C$, an object $b\\in B$ is $F$-under another object $c\\in C$ if there exists an arrow $f:c\\to Fb \\in C$. This can be represented as\n\n$$\n\\begin{tikzcd}\n  c\n  \\ar[d,\"f\"]\\\\\n  Fb\n\\end{tikzcd}\n$$\n\nand thus the name $F$-under.\n\n\\begin{definition}\n  Let $B,C$ be  categories and $S:B\\to C$ be a functor. For every $c\\in Ob(C)$ we can define the category $(c \\downarrow S)$ that has as objects all pairs $(d,f) \\in Ob(B)\\times Ar(C)$ where $f:c\\to Sd$, and as arrows $h:(d,f) \\to (d',f')$ the arrows $h:d\\to d'\\in Ar(B)$ such that $f' = Sh\\circ f$.\n\\end{definition}\n\nThe property that each arrow should satisfy can be represented as:\n\\[\n  \\begin{tikzpicture}\n    \\node {\\begin{tikzcd}\n        & c\\ar[ddl, \"f\"]\\ar[ddr,\"f'\",swap] & \\\\\n        &&\\\\\n        Sd\\ar[rr, \"Sh\", swap] & & Sd'\n      \\end{tikzcd}};\n  \\end{tikzpicture}\n\\]\n\n\\begin{example} Let $U:Grp \\to Set$ be the forgetful functor, and let $X\\in Ob(Set)$. We can consider $(X\\downarrow U)$, where every object is a function of sets $f: X\\to U G$ for some group $G$.  \n\\end{example}\n\nOne can now easily deduce the dual concept, the category $(S\\uparrow b)$, represented by: \\\\\n\\begin{center}\n  objects: $\n  \\begin{tikzcd}\n    c\n    \\\\\n    Fb\\ar[u,\"f\"]\n  \\end{tikzcd}\n  $;$\\qquad$ arrows: $\\begin{tikzcd}\n    & c & \\\\\n    &&\\\\\n    Sd\\ar[rr, \"Sh\", swap]\\ar[uur, \"f\"] & & Sd'\\ar[uul,\"f'\",swap]\n  \\end{tikzcd}$.\n\\end{center}\n\nCategories $S\\uparrow b$ where $S= id$ are particularly useful, and will be studied more thoroughly in Chapter \\ref{chap:5}.\n\n\\begin{definition}\\label{def:slice-cat}\n  Let $C$ be a category and $a$ be an object of $C$. We define the \\emph{slice category} $C/a$ as:\n  $$C/a :=(a\\uparrow 1_C).$$\n\\end{definition}\n\nNow suppose that we have three categories $B,C,D$ and two functors $S,T$ such that\n\\[\n  \\begin{tikzcd}\n    B \\ar[r, \"S\"] & C & D\\ar[l, \"T\", swap]\n  \\end{tikzcd}\n\\]\n\nWe might want to consider the relations between objects of $B$ and $D$; for that we have the comma category:\n\n\n\\begin{definition}\n  Let $B,C,D$ be categories and $S:B\\to C$, $T: D\\to C$ be functors. We define the \\emph{comma category} $(S\\downarrow T)$ as the category that has as objects the triples $(b,d,f)$ with $b\\in Ob(B), d\\in Ob(D), f:Sb\\to Td\\in Ar(C)$ and as arrows the pairs $(g,h):(b,d,f)\\to (b',d',f')$ with $g:b\\to b'\\in Ar(B), h:d\\to d'\\in Ar(D)$ such that $Th \\circ f = f' \\circ Sg$. \n\\end{definition}\n\nWe can represent the previous definition by:\n\\begin{center}\n  objects: $\n  \\begin{tikzcd}\n    Sb\\ar[d,\"f\"]\n    \\\\\n    Td\n  \\end{tikzcd}\n  $;$\\qquad$ arrows: $\\begin{tikzcd}\n    Sb\\ar[d,\"f\"]\\ar[r,\"Sg\"] & Sb' \\ar[d,\"f'\"]\n    \\\\\n    Td\\ar[r,\"Th\"] & Td'\n  \\end{tikzcd}$.\n\\end{center}\n\n\nThe name comma category comes from its alternative notation $(S\\downarrow T) = (S,T)$. We prefer $(S\\downarrow T)$ as it is clearer; nonetheless, before modern text editors became popular, the original comma notation had a clear advantage.\n\n\n", "meta": {"hexsha": "40057fa3ed0737fc75b27af1494c31ea061b8416", "size": 41585, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapters/Chapter1.tex", "max_stars_repo_name": "pedrobn23/Master-thesis", "max_stars_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-02T13:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-20T16:41:28.000Z", "max_issues_repo_path": "thesis/Chapters/Chapter1.tex", "max_issues_repo_name": "pedrobn23/Master-thesis", "max_issues_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapters/Chapter1.tex", "max_forks_repo_name": "pedrobn23/Master-thesis", "max_forks_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.2422969188, "max_line_length": 597, "alphanum_fraction": 0.6945292774, "num_tokens": 12108, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7634837743174788, "lm_q1q2_score": 0.5719159327617938}}
{"text": "%!TEX root =  ../main.tex\n\n\\subsection{Separating $x$ and $y$}\n\n\\objective{Use parametric functions to determine a path and velocity}\n\n\nPhysics teaches us that it is often preferable to separate $x$ and $y$, and to\nconsider them both functions of $t$ and not one a function of the other.  This is \ntypically done because gravity (which Einstein showed to be equivalent to \nacceleration, and hence enters as a quadratic term) pulls in one dimension, while\nthe others are governed by simple linear equations.\n\nAnother benefit of parameterizing an equation is that we can also know \\emph{when}\nsomething happened.  So far, we have been differentiating $y$ with respect to \n$x$, but what if we want to know how quickly $y$ is changing over time?\n\n\\personfeature[0in]{\\chapdir/pics/Maria_Gaetana_Agnesi_(1836).png}{Maria Gaetana Agnesi}{1718-1799}{was an Italian mathematician, philosopher, theologian and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a Mathematics Professor at a university.  The curve $$y=\\frac{8a^3}{x^2+4a^2}$$ (the Witch of Agnesi) is named after her.\\href{https://en.wikipedia.org/wiki/Maria_Gaetana_Agnesi}{Wikipedia}}\n\n\\subsubsection{Parameter Domain}\nParameters (most often $t$) also have a domain.  In science, it is often quite\nirrelevant to reference any of the variables before the experiment begins or\nafter it ends.  If an object is thrown on Earth, and we model its path with an\nupside-down parabola, what happens before $t=0$ or after it hits the ground\nis meaningless.  Parameters' domains provide a natural and intuitive way to\nmodel this limited domain. \n\n\\index{TI-8*!zoom standard}\nPay careful attention to the WINDOW on your TI-8*.  In RADIANS mode, the\ndefault (i.e. ZOOM-STANDARD) parameter domain is $[0,2\\pi]$.\n\n\\subsubsection{Eliminating the Parameter}\nSometimes, we still want to construct a typical equation from a parametric \nset.  This is called \\textbf{eliminating the parameter}.  In chapter 10, we will\nlearn many trigonometric identities to remove sines and cosines of $t$, but in \nthis section on algebraic functions, we will restrict ourselves to\n\\textbf{substitution} and \\textbf{elimination}.\n\n\\subsubsection{Implicit Differentiation}\\index{derivative!notation}\nUntil now, we have been solely using Lagrange notation for derivatives, e.g.\n$y'$ (pronounced, ``$y$ prime'').  However, this isn't always helpful, especially\nif we want to know what we have been differentiating with respect to.  Until now,\nthat hasn't mattered!  Now we might be looking for the differential of $y$ with \nrespect to $x$, or with respect to $t$.\n\nA differential is a measure of sensitivity to change.  How much does $y$ alter,\ngiven minuscule nudges to $x$ (or $t$)?  This differential can also be written with the\nletter `d', and so ``the differential of $y$ over the differential of $x$'' can be symbolized\nas $\\frac{dy}{dx}$.  It can also be used like a function, to say ``take the derivative with \nrespect to $x$ of ...'', as in $\\frac{d}{dx}\\left( y\\right)$.\n\nThis fractional way of writing derivatives is called \\textbf{Leibniz's notation}, after\nGottfried Leibniz.  It helps show how we can find the derivative of a parametric equation.\nIf we seek $\\frac{dy}{dx}$, that is equivalent to $\\cfrac{\\frac{dy}{dt}}{\\frac{dx}{dt}}$, provided $\\frac{dx}{dt}\\ne 0$.
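\n\nFor example (an illustration of our own, not one of the text's exercises), if $x=t^2$ and $y=t^3$, then\n$$\\frac{dy}{dx} = \\cfrac{\\frac{dy}{dt}}{\\frac{dx}{dt}} = \\frac{3t^2}{2t} = \\frac{3t}{2}, \\qquad t\\ne 0,$$\nwhich gives the slope of the path at the moment $t$ without ever solving for $y$ in terms of $x$.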
\n\n\\index{derivative!higher order}\nDerivatives of derivatives (called \\textbf{higher order derivatives}) are written \nwith more tick-marks (e.g. $f''(x), f'''(x)$, etc.) or parenthetical numbers (e.g. $f^{(2)}(x),\nf^{(3)}(x)$, etc.) in Lagrange notation.  In Leibniz's notation, they get more complicated:\n$\\cfrac{d\\left(\\frac{d\\left(\\frac{dy}{dx}\\right)}{dx}\\right)}{dx} = \n\\left(\\frac{d}{dx}\\right)^3y = \\frac{d^3y}{dx^3}$.\n", "meta": {"hexsha": "5f344f6b900ab6513c151ebd0da79f47cf7623e0", "size": 3743, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch04/0402.tex", "max_stars_repo_name": "aquatiki/AnalysisTextbook", "max_stars_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-10-08T15:05:17.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-07T12:32:53.000Z", "max_issues_repo_path": "ch04/0402.tex", "max_issues_repo_name": "aquatiki/AnalysisTextbook", "max_issues_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch04/0402.tex", "max_forks_repo_name": "aquatiki/AnalysisTextbook", "max_forks_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.4126984127, "max_line_length": 462, "alphanum_fraction": 0.7477958857, "num_tokens": 1044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5719159287305112}}
{"text": "\\input{../header_class}\r\n\r\n%---------- start document ---------- %\r\n \\section{group -- algorithms for finite groups}\\linkedzero{group}\r\n \\begin{itemize}\r\n   \\item {\\bf Classes}\r\n   \\begin{itemize}\r\n     \\item \\linkingone{group}{Group}\r\n     \\item \\linkingone{group}{GroupElement}\r\n     \\item \\linkingone{group}{GenerateGroup}\r\n     \\item \\linkingone{group}{AbelianGenerate}\r\n   \\end{itemize}\r\n \\end{itemize}\r\n\r\n\\C\r\n\r\n \\subsection{\\negok Group -- group structure}\\linkedone{group}{Group}\r\n \\initialize\r\n  \\func{Group}{\\hiki{value}{class},\\ \\hikiopt{operation}{int}{-1}}{Group}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create an object which wraps \\param{value} (typically a ring or a field)\r\n  only to expose its group structure. \\\\\r\n  \\spacing\r\n  % added document\r\n  \\quad The instance has methods defined for an (abstract) group.\r\n  For example, \\linkingtwo{group}{Group}{identity} returns the identity element of the group from the wrapped \\param{value}.\\\\\r\n  \\spacing\r\n  % input, output document\r\n  \\quad \\param{value} must be an instance of a class that expresses a group structure.\r\n  \\param{operation} must be 0 or 1;\r\n  if \\param{operation} is 0, \\param{value} is regarded as the additive group.\r\n  On the other hand, if \\param{operation} is 1, \\param{value} is considered as the multiplicative group.\r\n  The default value of \\param{operation} is 0.\\\\\r\n  \\negok You can input an instance of \\linkingone{group}{Group} itself as \\param{value}.\r\n  In this case, the default value of \\param{operation} is the attribute \\linkingtwo{group}{Group}{operation} of the instance.\r\n  \\begin{at}\r\n    \\item[entity]\\linkedtwo{group}{Group}{entity}:\\\\ The wrapped object.\r\n    \\item[operation]\\linkedtwo{group}{Group}{operation}:\\\\ It expresses the mode of operation;\r\n$0$ means additive, while $1$ means multiplicative.\r\n  \\end{at}\r\n  \\begin{op}\r\n    \\verb+A==B+ & Return whether A and B are equal or not.\\\\\r\n    \\verb+A!=B+ & Check whether A and B are not equal.\\\\\r\n    \\verb+repr(A)+ & representation\\\\\r\n    \\verb+str(A)+ & simple representation\\\\\r\n  \\end{op} \r\n\\begin{ex}\r\n>>> G1=group.Group(finitefield.FinitePrimeField(37), 1)\r\n>>> print G1\r\nF_37\r\n>>> G2=group.Group(intresidue.IntegerResidueClassRing(6), 0)\r\n>>> print G2\r\nZ/6Z\r\n\\end{ex}%Don't indent!\r\n  \\method\r\n  \\subsubsection{setOperation -- change operation}\\linkedtwo{group}{Group}{setOperation}\r\n   \\func{setOperation}{\\param{self},\\ \\hiki{operation}{int}}{(None)}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Change the group type to additive ($0$) or multiplicative ($1$).\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad \\param{operation} must be 0 or 1.\\\\\r\n \\subsubsection{\\negok createElement -- generate a GroupElement instance}\\linkedtwo{group}{Group}{createElement}\r\n   \\func{createElement}{\\param{self},\\ \\param{*value}}{\\out{GroupElement}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a \\linkingone{group}{GroupElement} object whose group is \\param{self}, initialized with \\param{value}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok This method calls \\param{self}.\\linkingtwo{group}{Group}{entity}.createElement.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad \\param{value} must fit the form of argument for 
\\param{self}.\\linkingtwo{group}{Group}{entity}.createElement.\\\\\r\n  \\subsubsection{\\negok identity -- identity element}\\linkedtwo{group}{Group}{identity}\r\n   \\func{identity}{\\param{self}}{\\param{GroupElement}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the identity element (unit) of the group.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Return zero (additive) or one (multiplicative) corresponding to \\linkingtwo{group}{Group}{operation}.\r\n   \\negok This method calls \\param{self}.\\linkingtwo{group}{Group}{entity}.identity, or, if \\linkingtwo{group}{Group}{entity} does not have that attribute, returns one or zero.\r\n   \\spacing\r\n   % input, output document\r\n  \\subsubsection{grouporder -- order of the group}\\linkedtwo{group}{Group}{grouporder}\r\n   \\func{grouporder}{\\param{self}}{\\param{long}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the group order (cardinality) of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok This method calls \\param{self}.\\linkingtwo{group}{Group}{entity}.grouporder, card or \\_\\_len\\_\\_.\\\\\r\n   We assume that the group is finite, so the returned value is expected to be a long integer.\r\n   If the group is infinite, the type of the output of this method is not defined.\r\n   \\spacing\r\n   % input, output document\r\n\\begin{ex}\r\n>>> G1=group.Group(finitefield.FinitePrimeField(37), 1)\r\n>>> G1.grouporder()\r\n36\r\n>>> G1.setOperation(0)\r\n>>> print G1.identity()\r\nFinitePrimeField,0 in F_37\r\n>>> G1.grouporder()\r\n37\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n \\subsection{GroupElement -- elements of group structure}\\linkedone{group}{GroupElement}\r\n \\initialize\r\n  \\func{GroupElement}{\\hiki{value}{class},\\ \\hikiopt{operation}{int}{-1}}{GroupElement}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create an object which wraps \\param{value} (typically a ring element or\r\n  a field element) to make it behave as an element of a group.\\\\\r\n  \\spacing \r\n  % added document\r\n  \\quad The instance has methods defined for an (abstract) element of a group.\r\n  For example, \\linkingtwo{group}{GroupElement}{inverse} returns the inverse of \\param{value} as an element of the group object.\\\\\r\n  \\spacing\r\n  % input, output document\r\n  \\quad \\param{value} must be an instance of a class that expresses an element of a group structure.\r\n  \\param{operation} must be $0$ or $1$;\r\n  if \\param{operation} is $0$, \\param{value} is regarded as an element of the additive group.\r\n  On the other hand, if \\param{operation} is $1$, \\param{value} is considered as an element of the multiplicative group.\r\n  The default value of \\param{operation} is $0$.\\\\\r\n  \\negok You can input an instance of \\linkingone{group}{GroupElement} itself as \\param{value}.\r\n  In this case, the default value of \\param{operation} is the attribute \\linkingtwo{group}{GroupElement}{operation} of the instance.\r\n  \\begin{at}\r\n    \\item[entity]\\linkedtwo{group}{GroupElement}{entity}:\\\\ The wrapped object.\r\n    \\item[set]\\linkedtwo{group}{GroupElement}{set}:\\\\ It is an instance of \\linkingone{group}{Group}, which expresses the group to which \\param{self} belongs.\r\n    \\item[operation]\\linkedtwo{group}{GroupElement}{operation}:\\\\ It expresses the mode of operation;\r\n$0$ means additive, while $1$ means multiplicative.\r\n  \\end{at}\r\n  \\begin{op}\r\n    \\verb+A==B+ & Return whether A and B are equal or not.\\\\\r\n    \\verb+A!=B+ & Check 
whether A and B are not equal.\\\\\r\n    \\verb+A.ope(B)+ & Basic operation (additive $+$, multiplicative $*$)\\\\\r\n    \\verb+A.ope2(n)+ & Extended operation (additive $*$, multiplicative $**$)\\\\\r\n    \\verb+A.inverse()+\\linkedtwo{group}{GroupElement}{inverse} & Return the inverse element of \\param{self}\\\\\r\n    \\verb+repr(A)+ & representation\\\\\r\n    \\verb+str(A)+ & simple representation\\\\\r\n  \\end{op} \r\n\\begin{ex}\r\n>>> G1=group.GroupElement(finitefield.FinitePrimeFieldElement(18, 37), 1)\r\n>>> print G1\r\nFinitePrimeField,18 in F_37\r\n>>> G2=group.GroupElement(intresidue.IntegerResidueClass(3, 6), 0)\r\n>>> print G2\r\nIntegerResidueClass(3, 6)\r\n\\end{ex}%Don't indent!\r\n  \\method\r\n  \\subsubsection{setOperation -- change operation}\\linkedtwo{group}{GroupElement}{setOperation}\r\n   \\func{setOperation}{\\param{self},\\ \\hiki{operation}{int}}{(None)}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Change the group type to additive ($0$) or multiplicative ($1$).\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad \\param{operation} must be $0$ or $1$.\\\\\r\n \\subsubsection{\\negok getGroup -- generate a Group instance}\\linkedtwo{group}{GroupElement}{getGroup}\r\n   \\func{getGroup}{\\param{self}}{\\out{Group}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the \\linkingone{group}{Group} object to which \\param{self} belongs.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok This method calls \\param{self}.\\linkingtwo{group}{GroupElement}{entity}.getRing or getGroup.\\\\\r\n   \\negok In an initialization of \\linkingone{group}{GroupElement}, the attribute \\linkingtwo{group}{GroupElement}{set} is set to the value returned from this method.\\\\\r\n   \\spacing\r\n   % input, output document\r\n  \\subsubsection{order -- order by factorization method}\\linkedtwo{group}{GroupElement}{order}\r\n   \\func{order}{\\param{self}}{\\param{long}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the order of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok This method uses the factorization of the order of the group.\\\\\r\n   \\negok We assume that the group is finite, so the returned value is expected to be a long integer.\r\n   \\negok If the group is infinite, the method would raise an error or return an invalid value.\\\\\r\n   \\spacing\r\n   % input, output document\r\n  \\subsubsection{t\\_order -- order by baby-step giant-step}\\linkedtwo{group}{GroupElement}{t\\_order}\r\n   \\func{t\\_order}{\\param{self},\\ \\hikiopt{v}{int}{2}}{\\param{long}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the order of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok This method uses Terr's baby-step giant-step algorithm.\\\\\r\n   This method does not use the order of the group.\r\n   You can set the number of baby steps with \\param{v}.\r\n   \\negok We assume that the group is finite, so the returned value is expected to be a long integer.\r\n   \\negok If the group is infinite, the method would raise an error or return an invalid value.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad \\param{v} must be an int.\\\\\r\n\\begin{ex}\r\n>>> G1=group.GroupElement(finitefield.FinitePrimeFieldElement(18, 37), 1)\r\n>>> G1.order()\r\n36\r\n>>> G1.t_order()\r\n36\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n \\subsection{\\negok GenerateGroup -- group structure with 
generator}\\linkedone{group}{GenerateGroup}\r\n \\initialize\r\n  \\func{GenerateGroup}{\\hiki{value}{class},\\ \\hikiopt{operation}{int}{-1}}{GenerateGroup}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create an object for the group structure generated by \\param{value}. \\\\\r\n  \\spacing\r\n  % added document\r\n  \\quad At present, this initializes a group `including' the given group elements, rather than a group defined by generators;\r\n  we do not recommend using this module for now.\r\n  The instance has the methods defined for an (abstract) group.\r\n  For example, \\linkingtwo{group}{GroupElement}{inverse} returns the inverse of \\param{value} as an element of the group object.\\\\\r\n  The class inherits the class \\linkingone{group}{Group}.\\\\\r\n  \\spacing\r\n  % input, output document\r\n  \\quad \\param{value} must be a list of generators.\r\n  Each generator should be an instance of a class that expresses an element of a group structure.\r\n  \\param{operation} must be $0$ or $1$;\r\n  if \\param{operation} is $0$, \\param{value} is regarded as generating the additive group.\r\n  On the other hand, if \\param{operation} is $1$, \\param{value} is considered as generating the multiplicative group.\r\n  The default value of \\param{operation} is $0$.\\\\\r\n\\begin{ex}\r\n>>> G1=group.GenerateGroup([intresidue.IntegerResidueClass(2, 20),\r\n... intresidue.IntegerResidueClass(6, 20)])\r\n>>> G1.identity()\r\nintresidue.IntegerResidueClass(0, 20)\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n \\subsection{AbelianGenerate -- abelian group structure with generator}\\linkedone{group}{AbelianGenerate}\r\n \\initialize\r\n  \\quad The class inherits the class \\linkingone{group}{GenerateGroup}.\\\\\r\n  \r\n  \\subsubsection{relationLattice -- relation between generators}\\linkedtwo{group}{AbelianGenerate}{relationLattice}\r\n   \\func{relationLattice}{\\param{self}}{\\out{\\linkingone{matrix}{Matrix}}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a basis of the relation lattice as a square matrix, each of whose column vectors is a relation basis vector.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Each relation basis vector $V$ satisfies $\\prod_{i} \\mbox{generator}_i^{V_i}=1$.\r\n   \\spacing\r\n   % input, output document\r\n  \\subsubsection{computeStructure -- abelian group structure}\\linkedtwo{group}{AbelianGenerate}{computeStructure}\r\n   \\func{computeStructure}{\\param{self}}{\\param{tuple}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Compute the finite abelian group structure.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad If \\param{self} $G \\simeq \\oplus_i <h_i>$, return [($h_1$, ord($h_1$)), \\ldots, ($h_n$, ord($h_n$))] and $\\# G$,\r\n   where $<h_i>$ is a cyclic group with the generator $h_i$.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad The output is a tuple with two elements;\r\n   the first element is a list whose elements are pairs of $h_i$ and its order,\r\n   while the second element is the order of the group.\r\n\\begin{ex}\r\n>>> G=AbelianGenerate([intresidue.IntegerResidueClass(2, 20),\r\n... 
intresidue.IntegerResidueClass(6, 20)])\r\n>>> G.relationLattice()\r\n10 7\r\n 0 1\r\n>>> G.computeStructure()\r\n([(IntegerResidueClassRing,IntegerResidueClass(2, 20), 10)], 10L)\r\n\\end{ex}%Don't indent!\r\n\\C\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "1bbebc7d50244f2338323329371a3fe16e88827a", "size": 13061, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/en/group.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/en/group.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/en/group.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.1519434629, "max_line_length": 175, "alphanum_fraction": 0.6959650869, "num_tokens": 3675, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7490872131147275, "lm_q1q2_score": 0.5719159246992284}}
{"text": "\\section{Center of Mass}\\label{sec:centerofmass}\n\nSuppose a beam is 10 meters long, and that there are three weights on\nthe beam: a 10 kilogram weight 3 meters from the left end, a 5\nkilogram weight 6 meters from the left end, and a 4 kilogram weight 8\nmeters from the left end. Where should a fulcrum be placed so that the\nbeam balances? Let's assign a scale to the beam, from 0 at the left\nend to 10 at the right, so that we can denote locations on the beam\nsimply as $x$ coordinates; the weights are at $x=3$, $x=6$, and $x=8$,\nas in Figure~\\ref{fig:massesonbeam}.\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <1truecm,1truecm>\n\\setplotarea x from 0 to 10, y from -0.5 to 1\n\\linethickness1pt\n\\axis bottom shiftedto y=0 ticks length <2pt>\n  withvalues {3} {6} {8} / at 3 6 8 / /\n\\setlinear\n\\plot 5 0 5.2 -0.4 4.8 -0.4 5 0 /\n\\plot 3.4 0 3.4 0.8 2.6 0.8 2.6 0 /\n\\put {$10$} at 3 0.4\n\\plot 6.4 0 6.4 0.8 5.6 0.8 5.6 0 /\n\\put {$5$} at 6 0.4\n\\plot 8.4 0 8.4 0.8 7.6 0.8 7.6 0 /\n\\put {$4$} at 8 0.4\n\\endpicture}}\n\\caption{A beam with three masses.}\n\\label{fig:massesonbeam}\n\\end{figure}\n\nSuppose to begin with that the fulcrum is placed at $x=5$. What will\nhappen? Each weight applies a force to the beam that tends to rotate\nit around the fulcrum; this effect is measured by a quantity called\n\\dfont{torque}, proportional to the mass times the\ndistance from the fulcrum. Of course, weights on different sides of\nthe fulcrum rotate the beam in opposite directions. We can distinguish\nthis by using a signed distance in the formula for torque. So with the\nfulcrum at 5, the torques induced by the three weights will be\nproportional to $(3-5)10=-20$, $(6-5)5=5$, and $(8-5)4=12$. For the\nbeam to balance, the sum of the torques must be zero; since the sum is\n$-20+5+12=-3$, the beam rotates counter-clockwise, and to get the beam\nto balance we need to move the fulcrum to the left. To calculate\nexactly where the fulcrum should be, we let $\\ds \\bar x$ denote the\nlocation of the fulcrum when the beam is in balance. The total torque\non the beam is then $\\ds (3-\\bar x)10+(6-\\bar x)5+(8-\\bar x)4=92-19\\bar\nx$. Since the beam balances at $\\ds \\bar x$ it must be that\n$\\ds 92-19\\bar x=0$ or $\\ds \\bar x=92/19\\approx 4.84$, that is, the fulcrum\nshould be placed at $x=92/19$ to balance the beam.\n\nNow suppose that we have a beam with varying density---some portions\nof the beam contain more mass than other portions of the same size. 
We\nwant to figure out where to put the fulcrum so that the beam balances.\n\n\\begin{figure}[H]\n\\centerline{\n\\vbox{\\beginpicture\n\\normalgraphs\n\\setcoordinatesystem units <1truecm,1truecm>\n\\setplotarea x from 0 to 10, y from 0 to 1\n\\putrule from 0 0 to 10 0\n\\putrule from 10 1 to 10 0\n\\putrule from 0 1 to 10 1\n\\putrule from 0 0 to 0 1\n\\putrule from 1 0 to 1 1\n\\putrule from 2 0 to 2 1\n\\putrule from 3 0 to 3 1\n\\putrule from 4 0 to 4 1\n\\putrule from 5 0 to 5 1\n\\putrule from 6 0 to 6 1\n\\putrule from 7 0 to 7 1\n\\putrule from 8 0 to 8 1\n\\putrule from 9 0 to 9 1\n\\put {$m_0$} at 0.5 0.5\n\\put {$m_1$} at 1.5 0.5\n\\put {$m_2$} at 2.5 0.5\n\\put {$m_3$} at 3.5 0.5\n\\put {$m_4$} at 4.5 0.5\n\\put {$m_5$} at 5.5 0.5\n\\put {$m_6$} at 6.5 0.5\n\\put {$m_7$} at 7.5 0.5\n\\put {$m_8$} at 8.5 0.5\n\\put {$m_9$} at 9.5 0.5\n\\endpicture}}\n\\caption{A solid beam.}\n\\label{fig:solidbeam}\n\\end{figure}\n\n\\begin{example}{Balance Point of a Beam}{BalancePointBeam}\nFind the balance point of a solid beam, illustrated in Figure~\\ref{fig:solidbeam},\nassuming the beam is 10 meters long and that the density is $1+x$ kilograms\nper meter at location $x$ on the beam.\n\\end{example}\n\\begin{solution}\nTo approximate the solution, we can think of the beam as a sequence of weights ``on''\na beam. For example, we can think of the portion of the beam between\n$x=0$ and $x=1$ as a weight sitting at $x=0$, the portion between\n$x=1$ and $x=2$ as a weight sitting at $x=1$, and so on, as indicated\nin Figure~\\ref{fig:solidbeam}. We then\napproximate the mass of the weights by assuming that each portion of\nthe beam has constant density. So the mass of the first weight is\napproximately $\\ds m_0=(1+0)1=1$ kilograms, namely, $(1+0)$\nkilograms per meter times 1 meter. The second weight is $\\ds\nm_1=(1+1)1=2$ kilograms, and so on to the tenth weight with $\\ds\nm_9=(1+9)1=10$ kilograms.  So in this case the total torque is\n\\[(0-\\bar x)m_0+(1-\\bar x)m_1+\\cdots+(9-\\bar x)m_9=\n  (0-\\bar x)1+(1-\\bar x)2+\\cdots+(9-\\bar x)10.\\]\nIf we set this to zero and solve for $\\ds \\bar x$ we get $\\ds \\bar x=6$.\nIn general, if we divide the beam into $n$ portions, the mass\nof weight number $i$ will be $\\ds m_i=(1+x_i)(x_{i+1}-x_i)=(1+x_i)\\Delta x$ \nand the torque induced by weight number $i$ will be\n$\\ds (x_i-\\bar x)m_i=(x_i-\\bar x)(1+x_i)\\Delta x$. The total torque is then\n\\begin{align*}\n(x_0-\\bar x)(1+x_0)\\Delta x&+(x_1-\\bar x)(1+x_1)\\Delta x+\\cdots+(x_{n-1}-\\bar x)(1+x_{n-1})\\Delta x\t\\\\\n  &=\\sum_{i=0}^{n-1} x_i(1+x_i)\\Delta x-\\sum_{i=0}^{n-1}\\bar x(1+x_i)\\Delta x\t\\\\\n  &=\\sum_{i=0}^{n-1} x_i(1+x_i)\\Delta x-\\bar x\\sum_{i=0}^{n-1}(1+x_i)\\Delta x.\n\\end{align*}\nIf we set this equal to zero and solve for $\\ds \\bar x$ we get an\napproximation to the balance point of the beam:\n\\begin{align*}\n  0&=\\sum_{i=0}^{n-1} x_i(1+x_i)\\Delta x- \n    \\bar x\\sum_{i=0}^{n-1}(1+x_i)\\Delta x\t\\\\\n  \\bar x\\sum_{i=0}^{n-1}(1+x_i)\\Delta x&=\\sum_{i=0}^{n-1} x_i(1+x_i)\\Delta x\t\\\\\n  \\bar x&={\\ds\\sum_{i=0}^{n-1} x_i(1+x_i)\\Delta x\\over\n     \\ds\\sum_{i=0}^{n-1}(1+x_i)\\Delta x}.\n\\end{align*}\nThe denominator of this fraction has a very familiar\ninterpretation. Consider one term of the sum in the denominator: $\\ds\n(1+x_i)\\Delta x$. This is the density near $\\ds x_i$ times a short\nlength, $\\Delta x$, which in other words is approximately the mass of\nthe beam between $\\ds x_i$ and $\\ds x_{i+1}$. 
When we add these up we\nget approximately the mass of the beam.\n\nNow each of the sums in the fraction has the right form to turn into\nan integral, which in turn gives us the exact value of $\\ds \\bar x$:\n\\[\n  \\bar x={\\ds\\int_0^{10} x(1+x)\\,dx\\over\\ds\\int_{0}^{10}(1+x)\\,dx}.\n\\]\nThe numerator of this fraction is called the \\dfont{moment} of the system around zero:\n\\[\\int_0^{10} x(1+x)\\,dx=\\int_0^{10} x+x^2\\,dx={1150\\over 3},\\]\nand the denominator is the mass of the beam:\n\\[\\int_0^{10} (1+x)\\,dx=60,\\]\nand the balance point, officially called the \\dfont{center of\nmass} is \n\\[\\bar x={1150\\over 3}{1\\over 60} = {115\\over 18}\\approx 6.39.\\]\n\\end{solution}\n\nIt should be apparent that there was nothing special about the density\nfunction $\\sigma(x)=1+x$ or the length of the beam, or even that the\nleft end of the beam is at the origin. In general, if the density of\nthe beam is $\\sigma(x)$ and the beam covers the interval $[a,b]$, the\nmoment of the beam around zero is\n\\[M_0=\\int_a^b x\\sigma(x)\\,dx\\]\nand the total mass of the beam is\n\\[M=\\int_a^b \\sigma(x)\\,dx\\]\nand the center of mass is at\n\\[\\bar x={M_0\\over M}.\\]\n\\begin{example}{Center of Mass of a Beam}{CenterMassBeam}\nSuppose a beam lies on the $x$-axis between 20 and 30, and\nhas density function $\\sigma(x)=x-19$. Find the center of mass.\n\\end{example}\n\\begin{solution}\nThis is the same as the previous example except that the beam has been\nmoved. Note that the density at the left end is $20-19=1$ and at the\nright end is $30-19=11$, as before. Hence the center of mass must be\nat approximately $20+6.39=26.39$. Let's see how the calculation works\nout.\n\\begin{align*}\n  M_0&=\\int_{20}^{30} x(x-19)\\,dx=\\int_{20}^{30} x^2-19x\\,dx=\n    \\left.{x^3\\over3}-{19x^2\\over2}\\right|_{20}^{30}={4750\\over3}\t\\\\\n  M&=\\int_{20}^{30} x-19\\,dx=\\left.{x^2\\over2}-19x\\right|_{20}^{30}=60\t\\\\\n  {M_0\\over M}&={4750\\over3}{1\\over60}={475\\over18}\\approx 26.39.\n\\end{align*}\n\\end{solution}\n\n\\begin{example}{Centroid of a Flat Plate}{CentroidFlatPlate}\nSuppose a flat plate of uniform density has the shape\ncontained by $\\ds y=x^2$, $y=1$, and $x=0$, in the first quadrant. Find\nthe center of mass. (Since the density is constant, the center of mass\ndepends only on the shape of the plate, not the density, or in other\nwords, this is a purely geometric quantity. In such a case the center\nof mass is called the \\dfont{centroid}.)\n\\end{example}\n\\begin{solution}\nThis is a two dimensional problem, but it can be solved as if it were\ntwo one dimensional problems: we need to find the $x$ and $y$\ncoordinates of the center of mass, $\\ds \\bar x$ and $\\ds \\bar y$, and\nfortunately we can do these independently. Imagine looking at the\nplate edge on, from below the $x$-axis. The plate will appear to be a\nbeam, and the mass of a short section of the ``beam'', say between\n$\\ds x_i$ and $\\ds x_{i+1}$, is the mass of a strip of the plate\nbetween $\\ds x_i$ and $\\ds x_{i+1}$. 
See Figure~\\ref{fig:twodcenterofmass}\nshowing the plate from above and as it appears edge on.\n\n\\begin{figure}[H]\n\\centerline{\n\\hbox to \\hsize{\\hfill\n\\begin{tikzpicture}[baseline=0,x=3cm,y=3cm]\n\\fill[fill=red!10] (0.5,0.25) rectangle (0.6,1);\n\\draw (0.5,0.25) rectangle (0.6,1);\n\\draw (0,0) -- (1.1,0) ;\n\\draw (0,0) -- (0,1.1) ;\n\\foreach \\x in {0,1} \\draw (\\x,0) -- (\\x,-2pt) node[anchor=north] {$\\x$};\n\\foreach \\y in {0,1} \\draw (0,\\y) -- (-2pt,\\y) node[anchor=east]{$\\y$};\n\\draw (0.5,0) -- (0.5,-2pt) node [anchor=north] {$x_i$};\n\\node [anchor=south east] at (0.375,0.6) {$(\\bar x,\\bar y)$};\n\\node at (0.375,0.6) {$\\bullet$};\n\\draw[color=black] (0,0) parabola (1,1);\n\\draw (0,1) -- (1,1); \n\\end{tikzpicture}\n\\hskip2cm\n\\begin{tikzpicture}[domain=-1.2:1.2,baseline=0,x=3cm,y=3cm]\n\\fill[fill=red!10] (0.5,0) rectangle (0.6,0.1);\n\\draw (0,0) -- (1.1,0) ;\n\\foreach \\x in {0,1} \\draw (\\x,0) -- (\\x,-2pt) node[anchor=north] {$\\x$};\n\\draw (0.5,0) -- (0.5,-2pt) node[anchor=north] {$x_i$};\n\\node[anchor=south] at (0.55,0.1) {$m_i$};\n\\draw (0,0) rectangle (1,0.1);\n\\draw (0.5,0) rectangle (0.6,0.1);\n\\end{tikzpicture}\\hfill}}\n\\caption{Center of mass for a two dimensional plate.}\n\\label{fig:twodcenterofmass}\n\\end{figure}\n\nSince the plate has uniform density we may as well assume that\n$\\sigma=1$. Then the mass of the plate between $\\ds x_i$ and $\\ds\nx_{i+1}$ is approximately $\\ds m_i=\\sigma(1-x_i^2)\\Delta\nx=(1-x_i^2)\\Delta x$. Now we can compute the moment around the\n$y$-axis:\n\\[M_y=\\int_0^1 x(1-x^2)\\,dx={1\\over4}\\]\nand the total mass \n\\[M=\\int_0^1 (1-x^2)\\,dx={2\\over3}\\]\nand finally\n\\[\\bar x = {1\\over4}{3\\over2}={3\\over8}.\\]\nNext we do the same thing to find $\\ds \\bar y$. The mass of the plate\nbetween $\\ds y_i$ and $\\ds y_{i+1}$ is approximately $\\ds\nn_i=\\sqrt{y}\\Delta y$, so\n\\[M_x=\\int_0^1 y\\sqrt{y}\\,dy={2\\over5}\\]\nand\n\\[\\bar y={2\\over5}{3\\over2}={3\\over5},\\]\nsince the total mass $M$ is the same. The center of mass is shown in\nFigure~\\ref{fig:twodcenterofmass}.  \n\\end{solution}\n\n\\begin{example}{Center of Mass under Cosine}{centerofmassundercos}\nFind the center of mass of a thin, uniform plate whose shape\nis the region between $y=\\cos x$ and the $x$-axis between $x=-\\pi/2$\nand $x=\\pi/2$.\n\\end{example}\n\\begin{solution}\nIt is clear that $\\ds \\bar x=0$, but for practice let's\ncompute it anyway. We will need the total mass, so we compute it\nfirst:\n\\[M=\\int_{-\\pi/2}^{\\pi/2} \\cos x\\,dx=\\sin x\\Big|_{-\\pi/2}^{\\pi/2}=2.\\]\nThe moment around the $y$-axis is\n\\[M_y=\\int_{-\\pi/2}^{\\pi/2} x\\cos x\\,dx=\\cos x+x\\sin x\\Big|_{-\\pi/2}^{\\pi/2}=0\\]\nand the moment around the $x$-axis is\n\\[M_x=\\int_{0}^{1} y\\cdot2\\arccos y\\,dy=\\left.y^2\\arccos y-{y\\sqrt{1-y^2}\\over2}+{\\arcsin y\\over 2}\\right|_{0}^{1}={\\pi\\over4}.\\]\nThus\n\\[\\bar x={0\\over2},\\quad \\bar y={\\pi\\over8}\\approx 0.393.\\]\n\\end{solution}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:centerofmass}}\n\n\\begin{enumialphparenastyle}\n\n\\begin{ex}\nA beam 10 meters long has density $\\ds \\sigma(x)=x^2$ at\ndistance $x$ from the left end of the beam. Find the center of mass\n$\\ds \\bar x$.\n\\begin{sol}\n$15/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA beam 10 meters long has density $\\sigma(x)=\\sin(\\pi x/10)$ at\ndistance $x$ from the left end of the beam. 
Find the center of mass\n$\\ds \\bar x$.\n\\begin{sol}\n$5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA beam 4 meters long has density $\\ds \\sigma(x)=x^3$ at\ndistance $x$ from the left end of the beam. Find the center of mass\n$\\ds \\bar x$.\n\\begin{sol}\n$16/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nVerify that $\\ds\\int 2x\\arccos x\\,dx=\nx^2\\arccos x-{x\\sqrt{1-x^2}\\over2}+{\\arcsin x\\over 2}+C$.\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region between $\\ds y=x^2$ and the $x$-axis\nbetween $x=1$ and $x=2$. Find the centroid.\n\\begin{sol}\n$\\ds \\bar x=45/28$, $\\ds \\bar y = 93/70$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate fills the upper half of the unit circle\n$\\ds x^2+y^2=1$. Find the centroid.\n\\begin{sol}\n$\\ds \\bar x=0$, $\\ds \\bar y=4/(3\\pi)$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region contained by $y=x$ and\n$\\ds y=x^2$. Find the centroid.\n\\begin{sol}\n$\\ds \\bar x=1/2$, $\\ds \\bar y=2/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region contained by $\\ds y=4-x^2$ and\nthe $x$-axis. Find the centroid.\n\\begin{sol}\n$\\ds \\bar x=0$, $\\ds \\bar y=8/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region contained by $\\ds y=x^{1/3}$ and\nthe $x$-axis between $x=0$ and $x=1$. Find the centroid.\n\\begin{sol}\n$\\ds \\bar x=4/7$, $\\ds \\bar y=2/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region contained by \n$\\ds \\sqrt{x}+\\sqrt{y}=1$ and the axes in the first quadrant.\nFind the centroid.\n\\begin{sol}\n$\\bar x=\\bar y=1/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region between\nthe circle $\\ds x^2+y^2=4$ and the circle $\\ds x^2+y^2=1$, above the $x$-axis.\nFind the centroid.\n\\begin{sol}\n$\\ds \\bar x=0$, $\\ds \\bar y=28/(9\\pi)$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region between \nthe circle $\\ds x^2+y^2=4$ and the circle $\\ds x^2+y^2=1$ in the first quadrant.\nFind the centroid.\n\\begin{sol}\n$\\ds \\bar x=\\bar y=28/(9\\pi)$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nA thin plate lies in the region between \nthe circle $\\ds x^2+y^2=25$ and the circle $\\ds x^2+y^2=16$\nabove the $x$-axis.\nFind the centroid.\n\\begin{sol}\n$\\ds \\bar x=0$, $\\ds\\bar y=244/(27\\pi)\\approx 2.88$\n\\end{sol}\n\\end{ex}\n\n\\end{enumialphparenastyle}\n", "meta": {"hexsha": "a5e918f4aecd244f46d5593023a9b39f7b5bb852", "size": 14123, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8-applications-of-integration/8-6-center-of-mass.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8-applications-of-integration/8-6-center-of-mass.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8-applications-of-integration/8-6-center-of-mass.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.9712041885, "max_line_length": 129, "alphanum_fraction": 0.6732988742, "num_tokens": 5430, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6370307944803832, "lm_q2_score": 0.8976952900545975, "lm_q1q2_score": 0.5718595438247783}}
{"text": "\\documentclass[11pt,a4paper]{article}\n\n\\usepackage{mathtools}\n\\usepackage{graphicx}\n\\usepackage{bm}\n\\usepackage{calligra}\n\\usepackage{wrapfig}\n\\usepackage{comment}\n\\usepackage{subcaption}\n\\usepackage{color}\n\n%\\DeclareMathAlphabet{\\mathcalligra}{T1}{calligra}{m}{n}\n%\\DeclareFontShape{T1}{calligra}{m}{n}{<->s*[2.2]callig15}{}\n\n\\setlength{\\parindent}{0pt}\n\n\n\\DeclarePairedDelimiter{\\avg}{\\langle}{\\rangle}\n\n\\title{The Lau/Ostoji\\'c Ising model simulation}\n\\date{}\n\\author{Bernie Lau and Oliver Ostoji\\'c}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\n\nThis project is in essence an investigation of the Ising Hamiltonian with no external field;\\footnote{As always,\n$\\avg{i,j}$ indicates a sum over nearest neighbors.}\n\n\\begin{equation}\\label{eq:Hamiltonian}\n    H = -J \\sum_{\\avg*{i,j}} s_i s_j\n\\end{equation}\n\nAs this apparently simple Hamiltonian is known to result in very interesting and rich dynamical behavior on a \n macroscopic scale, we endeavor to find, through computational methods, some interesting thermodynamical quantities\n and qualitative behaviors. Of primary interest is the search for signs of a phase transition, which the two and three\n dimensional Ising model is known to exhibit.\n\n In order to be less vague, we shall from now on talk about the Ising Hamiltonian\n in the context of ferromagnetism, where the quantities $s_i$ are associated with electron spins which can point in one of two\n directions. We will also set the coupling energy and the Boltzmann constant equal to one, \n $J = k_B = 1$, for all simulations. As a consequence, the inverse temperature $\\beta = 1/k_BT$ is measured in unphysical\n units. This is justified by the\n fact that we are primarily interested in the qualitative behavior of the system, and all actual numbers in the following\n simulations can still be compared to theoretical calculations and similar simulations.\n\n Phase transitions are typically characterized by rapid changes in macroscopic quantities near the transition temperature, which\n is called the critical temperature, $T_c$. The macroscopic quantities we will be investigating are the net magnetization,\n $M = \\sum_i s_i$, the magnetic susceptibility (a measure for how the magnetization reacts to external magnetic fields) and\n the correlation length (a measure for the length scale at which the spins influence each other). These quantities and their\n connection to phase transitions will be made more concrete in the relevant sections. \n\n The actual methods we employ are two Monte Carlo type algorithms, with most simulations carried out using a \n Metropolis algorithm and a couple of measurements using a Wolff algorithm. It seems appropriate, therefore, to summarize the\n principles of those algorithms here, in a minimalistic fashion.\n\n\\subsection{Metropolis algorithm}\nAs the focus of this assignment is on the \\textit{implementation} of the Metropolis algorithm rather than its theoretical\n aspects, we have chosen to omit the technical details and just outline the procedure used in its implementation here.\n\n Given a\n lattice with a spin variable $s_i \\in \\{-1,1\\}$ on each lattice point $i$, we define a ``state'' to be a particular\n configuration of all the ${s_i}$. The energy $E_\\mu$ of the system in a state $\\mu$ is determined by the Hamiltonian\n (Equation \\ref{eq:Hamiltonian}). Time evolution of the system is achieved by randomly choosing a single site $j$ and determining\n the energy $E_\\nu$ of the system if the spin on that site were flipped, $s_j \\rightarrow -s_j$. Whether or not we actually flip\n the spin is determined by the number $A(\\mu \\rightarrow \\nu)$ given by:\n\n\n\\begin{equation}\\label{eq:A-ratio}\n    A(\\mu \\rightarrow \\nu) = \\begin{cases}\n        e^{-\\beta (E_\\nu - E_\\mu)} & \\mbox{if} \\,\\,  E_\\nu - E_\\mu > 0 \\\\\n        1 & \\mbox{else}\n    \\end{cases}\n\\end{equation}\n\nOnce the spin flip is accepted (with probability $A(\\mu \\rightarrow \\nu)$) or rejected, the process is repeated, and each iteration is called a ``Monte Carlo step''. It is actually\n this particular definition of the so called\n ``acceptance ratio'', $A(\\mu \\rightarrow \\nu)$, that makes the algorithm a Metropolis algorithm;\n the rest of the procedure is common to all single spin-flip dynamics Monte Carlo algorithms. The conditions\n of detailed balance and ergodicity, which ensure that the\n equilibrium state of the system will behave according to Boltzmann statistics and that each state can be reached from each other state, are\n satisfied by this acceptance ratio, as must be the case with all Monte Carlo algorithms.
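\n\nIn practice one does not recompute the full Hamiltonian at every step: flipping a single spin only changes the bonds to its nearest neighbors. As a short side calculation (a standard observation about the model, not a statement about this particular code), if $s_j$ is the value of the chosen spin \\textit{before} the flip, then\n\n\\begin{equation*}\n    E_\\nu - E_\\mu = 2J s_j \\sum_{k \\in nn(j)} s_k,\n\\end{equation*}\n\nwhere the sum runs over the nearest neighbors of site $j$. On the square lattice this difference can only take the values $0, \\pm 4J, \\pm 8J$, so the exponentials appearing in Equation \\ref{eq:A-ratio} can be precomputed once per temperature.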
\n\n\\subsection{Wolff algorithm}\nAn example of a cluster-flipping algorithm, the Wolff algorithm works by iteratively picking out\n an entire cluster of spins pointing in the same direction and then flipping them all at the same time. The actual physics content of this \n procedure resides entirely in the generation of the cluster, which is done based on a temperature dependent probability chosen such that the\n conditions of detailed balance and ergodicity are again satisfied.\n More specifically, the procedure which is iterated is as follows:\n \\begin{enumerate}\n \\item Pick a spin at random and look at all its neighbors.\n \\item If a neighbor is pointing in the same direction as the current one, add it to the cluster with a probability $P_{add}$ given by\n\n\\begin{equation}\\label{eq:P-add}\n    P_{add} = 1 - e^{-2\\beta J}\n\\end{equation}\n \n \\item For each spin that was added, look at all \\textit{its} neighbors and add them with the same probability. Do this until no spins are left\n in the cluster whose neighbors have not been considered for addition.\n \\item Flip the cluster.\n\n \\end{enumerate}\n\nThe Wolff algorithm, although less intuitive than Metropolis, has the advantage of being much faster near the critical temperature, thereby\n allowing us to obtain more accurate results in the region of interest.
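\n\nAs a quick sanity check on the numbers (an aside of ours), at the exact critical point of the two dimensional Ising model, $\\beta_c = \\ln(1+\\sqrt{2})/2 \\approx 0.4407$, the addition probability is\n\n\\begin{equation*}\n    P_{add} = 1 - e^{-2\\beta_c J} \\approx 0.586,\n\\end{equation*}\n\nso slightly more than half of the aligned neighbors join the cluster. This is what allows Wolff clusters to grow to the size of the system precisely in the region where single spin-flip dynamics becomes slow.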
\n\n\\section{Thermalization and autocorrelation for the 40x40 lattice}\n\n\nIn this section we present some results of a simulation of a 40x40 two dimensional Ising model, using a Metropolis algorithm.\n We present plots of energy and magnetization as a function of time and report the thermalization times found for each plot.\n We also plot autocorrelation functions for both energy and magnetization\n for five values of the inverse temperature $\\beta$ between $\\beta = 0.3$ and $\\beta = 0.5$. \n Both high temperature (fully disordered) and low temperature (all spins aligned) initial conditions are simulated.\n Time is measured in Monte-Carlo steps per spin for all simulations. Each simulation \"takes\" $32\\cdot 10^6$ \n Monte-Carlo steps.\n\n\n\\subsection{Energy functions}\nTo find the thermalization times, we plot the energy as a function of time (measured in Monte-Carlo steps per site)\n and fit a general exponential decay, from which we read off the characteristic time $\\tau$:\n\n\\begin{equation}\\label{eq:exp_decay}\n    f(t)=Ae^{-t/\\tau} + C\n\\end{equation}\n\nAfter a single characteristic time $\\tau$, the deviation of the energy from its equilibrium value will have decayed to only about $37\\%$ of its initial value, so\n the system will not yet be thermalized. We therefore define the\n thermalization time as five times the characteristic time, just to be safe:\n \n\\begin{equation*}\n    t_{therm} = 5\\tau\n\\end{equation*}\n\nFigure \\ref{fig:Evt} shows the results of this procedure for the values $\\beta = 0.3$; $\\beta = 0.35$;\n $\\beta = 0.4$; $\\beta = 0.45$ and $\\beta = 0.5$. Included are the acquired values of $t_{therm}$. Both \n high temperature and low temperature initial conditions are covered.\n\\\\\n\\\\\n{\\color{red}REVISE SECTION AFTER ADDING PLOTS AND WRITING INTRO}\n\n\\begin{figure}[h!]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{metro_40x40_iter_32M_unaligned_E_vs_time.png}\n  \\caption{}\n  \\label{fig:Evt_highT}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{metro_40x40_iter_32M_aligned_E_vs_time.png}\n  \\caption{}\n  \\label{fig:Evt_lowT}\n\\end{subfigure}\n\\caption{Energy vs iterations per spin for the values $\\beta = 0.3$; $\\beta = 0.35$;\n         $\\beta = 0.4$; $\\beta = 0.45$ and $\\beta = 0.5$. a) High temperature starting condition. \n         {\\color{red}To add: the (calculated) values of the thermalization times per $\\beta$}.\n         b) Low temperature starting condition.\n         {\\color{red}To add: the (calculated) values of the thermalization times per $\\beta$}}\n\\label{fig:Evt}\n\\end{figure}\n\n\n\\subsection{Magnetization functions}\nAs we are still looking for thermalization times, now for the magnetization, we use the same procedure as in\n the previous section. \n\nFigure \\ref{fig:Mvt} shows plots of magnetization versus time for the values $\\beta = 0.3$; $\\beta = 0.35$;\n $\\beta = 0.4$; $\\beta = 0.45$ and $\\beta = 0.5$ for high and low temperature initial conditions. 
\n\n\n\\begin{figure}[h!]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{metro_40x40_iter_32M_unaligned_net_M_vs_time.png}\n  \\caption{}\n  \\label{fig:Mvt_highT}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=\\linewidth]{metro_40x40_iter_32M_aligned_net_M_vs_time.png}\n  \\caption{}\n  \\label{fig:Mvt_lowT}\n\\end{subfigure}\n\\caption{Magnetization vs iterations per spin for the values $\\beta = 0.3$; $\\beta = 0.35$;\n         $\\beta = 0.4$; $\\beta = 0.45$ and $\\beta = 0.5$. a) High temperature starting condition.\n         {\\color{red}To add: the (calculated) values of the thermalization times per $\\beta$}.\n         b) Low temperature starting condition.\n         {\\color{red}To add: the (calculated) values of the thermalization times per $\\beta$}.}\n\\label{fig:Mvt}\n\\end{figure}\n\n\n\\subsection{Autocorrelation functions}\nIn addition to the previous results, the autocorrelation times for both energy and magnetization can be\n extracted from the simulations. We implement the formulas\n\n\\begin{equation*}\n    c_e(\\Delta t) = \\avg{(E(t+\\Delta t) - E_{avg})\\cdot (E(t)-E_{avg})}_t\n\\end{equation*}\n\nand\n\n\\begin{equation*}\n    c_m(\\Delta t) = \\avg{(M(t+\\Delta t) - M_{avg})\\cdot (M(t)-M_{avg})}_t\n\\end{equation*}\n\n\nwhere $E_{avg}$ and $M_{avg}$ indicate averaging over the \\textit{last} million values of the energy and magnetization\n respectively. This is in contrast to the averaging indicated by the $\\avg{}_t$, which means we average over the\n iterations. We again fit the general exponential decay given by equation \\ref{eq:exp_decay}, and report the characteristic\n time, which is by definition the autocorrelation time. The results are given below (Figure \\ref{fig:autocorr})\n for each simulation, i.e. for the same values of $\\beta$ as listed above.\n \n\\begin{figure}[h!]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=.4\\linewidth]{Boltzmann.jpg}\n  \\caption{}\n  \\label{fig:autocorr_energy}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n  \\centering\n  \\includegraphics[width=.4\\linewidth]{Boltzmann.jpg}\n  \\caption{}\n  \\label{fig:autocorr_mag}\n\\end{subfigure}\n\\caption{Autocorrelation times vs $\\Delta t$ for the values $\\beta = 0.3$; $\\beta = 0.35$;\n         $\\beta = 0.4$; $\\beta = 0.45$ and $\\beta = 0.5$. a) Energy autocorrelation function. 
\n         b) Magnetization autocorrelation function.}\n\\label{fig:autocorr}\n\\end{figure}\n\n\n\\section{Magnetization and magnetic susceptibility with Metropolis}\n\n\\section{The Wolff algorithm}\n\n\\section{The 3D Ising model}\n\\end{document}\n", "meta": {"hexsha": "61cde6edf15ccb0d986de34d9bb70a023524d660", "size": 11819, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "verslag/verslag_versie_2.tex", "max_stars_repo_name": "iambernie/olisms", "max_stars_repo_head_hexsha": "1ccbe2862ea41ffbeb9df8ffc6da43bac1bc5887", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "verslag/verslag_versie_2.tex", "max_issues_repo_name": "iambernie/olisms", "max_issues_repo_head_hexsha": "1ccbe2862ea41ffbeb9df8ffc6da43bac1bc5887", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "verslag/verslag_versie_2.tex", "max_forks_repo_name": "iambernie/olisms", "max_forks_repo_head_hexsha": "1ccbe2862ea41ffbeb9df8ffc6da43bac1bc5887", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3490196078, "max_line_length": 143, "alphanum_fraction": 0.754209324, "num_tokens": 3103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5717615712003387}}
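As a companion to the autocorrelation formulas in the report above, here is a minimal C sketch of how $c_e(\Delta t)$ (or $c_m$) might be estimated from a recorded time series; for simplicity the mean is taken over the whole series, whereas the report averages over the last million values, and all names and layout are our own illustrative assumptions.

\begin{verbatim}
#include <stddef.h>

/* Estimate the autocorrelation c(dt) of the series x[0..n-1],
 * i.e. the time average of (x(t+dt) - mean) * (x(t) - mean). */
double autocorr(const double *x, size_t n, size_t dt)
{
    double mean = 0.0, c = 0.0;
    size_t t;

    for (t = 0; t < n; t++)
        mean += x[t];
    mean /= (double)n;

    for (t = 0; t + dt < n; t++)
        c += (x[t + dt] - mean) * (x[t] - mean);
    return c / (double)(n - dt);
}
\end{verbatim}

Fitting $Ae^{-\Delta t/\tau} + C$ to the values returned for a range of $\Delta t$ then yields the autocorrelation time $\tau$.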
{"text": "% A simple template for LaTeX documents\n% \n% To produce pdf run:\n%   $ pdflatex paper.tex \n%\n\n\\documentclass[10pt, twocolumn]{article}\n%\\documentclass[12pt]{article}\n\n% Begin paragraphs with new line\n\\usepackage{parskip}  \n\n% Change margin size\n\\usepackage[margin=0.5in]{geometry}   \n\n% Graphics Example:  (PDF's make for good plots)\n\\usepackage{graphicx}               \n% \\centerline{\\includegraphics{figure.pdf}}\n\n% Allows hyperlinks\n\\usepackage{hyperref}\n\n% Blocks of code\n\\usepackage{listings}\n\\lstset{basicstyle=\\ttfamily, title=\\lstname}\n% Insert code like this. replace `plot.R` with file name.\n% \\lstinputlisting{plot.R}\n\n% Monospaced fonts\n%\\usepackage{inconsolata}\n% GNU \\texttt{make} is a nice tool.\n\n% Supports proof environment\n\\usepackage{amsthm}\n\n% Allows writing \\implies and align*\n\\usepackage{amsmath}\n\n% Allows mathbb{R}\n\\usepackage{amsfonts}\n\n% Numbers in scientific notation\n%\\usepackage{siunitx}\n\n% Use tables generated by pandas\n\\usepackage{booktabs}\n\n% norm and infinity norm\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\\newcommand{\\inorm}[1]{\\left\\lVert#1\\right\\rVert_\\infty}\n\n% Statistics essentials\n\\newcommand{\\iid}{\\text{ iid }}\n\\newcommand{\\Expect}{\\operatorname{E}}\n\\newcommand{\\Var}{\\operatorname{Var}}\n\\newcommand{\\Cov}{\\operatorname{Cov}}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{document}\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\textbf{Chi square}\nSpecial case: $X \\sim \\chi^2_n \\equiv \\text{Gamma}(\\frac{n}{2}, \\frac{1}{2})$\n\n$\nf(x) \\propto x^{\\frac{n}{2} - 1} e^{\\frac{-x}{2}},\n\\quad x > 0\n\\qquad \\Expect X = n,\n\\quad \\Var X = 2n\n$\n\nLet $Z_i$ be iid $N(0, 1)$.\n$\\sum_{i=1}^n Z_i^2 \\sim \\chi^2_n$\n\nNoncentral $\\chi^2$. Let $Y \\sim N(\\mu, I)$ be an $n$ vector. Then \n\\[\n    \\norm{Y}^2 \\sim \\chi^2_n(\\norm{\\mu}^2)\n\\]\n\\textbf{F} if num. and den. independent then\n\\[\n    F(m, n) \\equiv \\frac{\\frac{\\chi^2_m}{m}}\n        {\\frac{\\chi^2_n}{n}}\n\\]\n\\textbf{T} if num. and den. independent then\n\\[\n    t(n) = \\frac{N(0, 1)}\n    {\\sqrt{\\frac{\\chi^2_n}{n}}}\n\\]\nConditional pdf:\n\\[\n    f_{X|Y}(x | y) \\equiv \\frac{f_{X, Y}(x, y)}{f_Y(y)}\n\\]\nIterated expectation:\n\\[\n    E(Y) = E(E(Y | X))\n\\]\nConditional variance formula:\n\\[\n    \\Var(Y) = \\Var(E(Y | X)) + E(\\Var(Y | X))\n\\]\nSingular Value Decompostion (SVD) Any matrix $X$ can be written\n\\[\n    X = UDV^T\n\\]\nwith $U, V$ orthogonal, and $D$ diagonal.\n\nMoore Penrose Psuedoinverse $A^+$ exists uniquely for every matrix $A$.\nProperties: $AA^+A = A, \\quad A^+AA^+ = A^+$ and $AA^+, A^+A$ are symmetric.\n\nProjection matrix $P$ are symmetric and idempotent. 
They have eigenvalues\neither 0 or 1.\n\\[\n    P = P^T \\qquad P^2 = P\n\\]\nCovariance of linear transformations:\n\\[\n    \\Cov(Ay, Bx) = A \\Cov(y, x) B^T\n\\]\nInvert $2 \\times 2$ matrix:\n$\n    A = \n    [\\begin{smallmatrix}\n        a & b \\\\\n        c & d \\\\\n    \\end{smallmatrix}]\n$\n\\[\n    A^{-1} = \n    \\frac{1}{\\det (A)}\n    \\begin{bmatrix}\n        d & -b \\\\\n        -c & a \\\\\n    \\end{bmatrix}\n\\]\nSum identities:\n\\[\n    \\sum_{k=0}^{\\infty} p^k = \\frac{1}{1 - p} \\qquad \n    \\sum_{k=0}^{\\infty} k p^k = \\frac{p}{(1 - p)^2} \\qquad |p| < 1\n\\]\nIntegration by parts:\n\\[\n    \\int uv' = uv - \\int u'v\n\\]\nMatrix / Vector differentiation\n\n$\\frac{\\partial A^T \\beta}{\\partial \\beta} = A$, \n$\\frac{\\partial \\beta^T A \\beta}{\\partial \\beta} = (A + A^T) \\beta =\n2A\\beta$ for $A$ symmetric.\n\n$\\frac{\\partial}{\\partial \\theta_i} \\log (|A|) =\ntr\\left( A^{-1} \\frac{\\partial A}{\\partial \\theta_i} \\right)$\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nGeneral linear tests under $H_0$:\n\\[\n    \\frac{\\frac{SSE_r - SSE_f}{p - r}}\n         {\\frac{SSE_f}{n - p}}\n         \\sim F_{p-r, n-p}\n\\]\nThree principles of experimental design: 1) Replication 2) Randomization 3)\nBlocking\n\nOne way ANOVA with $n$ total observations, $K$ groups:\n\n{\n\\centering\n\\begin{tabular}{lll}\n    SS   &  & DF     \\\\\n    SSTR & $\\sum_{j=1}^K n_j (\\bar{y_{j \\cdot}} - \\bar{y_{\\cdot \\cdot}})^2$  & K - 1 \\\\\n    SSE  & $\\sum_{j=1}^K \\sum_{i=1}^{n_j} (y_{ij} - \\bar{y_{j \\cdot}})^2$  & n - K \\\\\n    SSTO & $\\sum_{j=1}^K \\sum_{i=1}^{n_j} (y_{ij} - \\bar{y_{\\cdot \\cdot}})^2$  & n - 1 \n\\end{tabular}\n}\n\n\\subsection*{Multivariate Normal}\n\n$X \\sim N(\\mu, \\Sigma)$, $\\Sigma$ positive definite\n\\[\n    f(x) = \\frac{\\exp\\{ - \\frac{1}{2}(x - \\mu)^T \\Sigma^{-1} (x - \\mu) \\}}\n        {(2\\pi)^{\\frac{k}{2}} \\sqrt{\\det(\\Sigma)}}\n\\]\nwhich for $k = 1$ reduces to\n\\[\n    f(x) = \\frac{1}{\\sqrt{2 \\pi} \\sigma} e^{-\\frac{(x - \\mu)^2}{2\n        \\sigma^2}}\n\\]\nmgf: $M_X (t) = \\exp (\\mu' t + \\frac{1}{2} t' \\Sigma t)$\n\n\nlog likelihood for $k$ vector $x \\sim N(\\mu, \\Sigma)$\n\\[\n    l_x = -\\frac{k}{2} \\log 2 \\pi - \\frac{1}{2}\n    \\{ \\log \\det \\Sigma + (x - \\mu)^T \\Sigma^{-1} (x - \\mu) \\}\n\\]\nStein's formula: $X \\sim N(\\mu, \\sigma^2)$\n\\[\n    \\Expect (g(X) (X - \\mu)) = \\sigma^2 \\Expect(g'(X))\n\\]\nassuming these expectations are finite.\n\n$X \\sim N(\\mu, \\Sigma)$, $A$ an $m \\times n$ matrix,\nthen \n\\[\n    AX \\sim N(A \\mu, A \\Sigma A^t)\n\\]\nFor $\\Sigma$ full rank it's possible to transform between $Z \\sim\nN(0, I)$ and $X$:\n\\[\n    X = \\Sigma^{1/2} Z + \\mu \\qquad Z = \\Sigma^{-1/2} (X - \\mu)\n\\]\nIn block matrix form:\n\\[\n    X =\n    \\begin{bmatrix}\n        X_1 \\\\\n        X_2 \\\\\n    \\end{bmatrix}\n    \\sim N \\left(\n    \\begin{bmatrix}\n        \\mu_1 \\\\\n        \\mu_2 \\\\\n    \\end{bmatrix}\n    ,\n    \\begin{bmatrix}\n        \\Sigma_{11} & \\Sigma_{12} \\\\\n        \\Sigma_{21} & \\Sigma_{22} \\\\\n    \\end{bmatrix}\n\\right)\n\\]\nAssuming $\\Sigma_{11}$ is positive definite, the conditional\ndistribution is\n\\[\n    X_2 | X_1 \\sim N(\\mu_2 + \\Sigma_{21} \\Sigma_{11}^{-1} (X_1 - \\mu_1),\n    \\Sigma_{22} - \\Sigma_{21} \\Sigma_{11}^{-1} \\Sigma_{12})\n\\]\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\end{document}\n", "meta": {"hexsha": "8843a2ae5e4e4ad8b693383f3880d3e37072678a", "size": 5659, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "232c/cheatsheet.tex", 
"max_stars_repo_name": "clarkfitzg/phd_stats", "max_stars_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "232c/cheatsheet.tex", "max_issues_repo_name": "clarkfitzg/phd_stats", "max_issues_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-11-10T07:47:09.000Z", "max_issues_repo_issues_event_max_datetime": "2015-11-10T22:55:08.000Z", "max_forks_repo_path": "232c/cheatsheet.tex", "max_forks_repo_name": "clarkfitzg/phd_stats", "max_forks_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.3842975207, "max_line_length": 87, "alphanum_fraction": 0.55557519, "num_tokens": 2087, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5717615672180237}}
{"text": "\\SecDef{zerosum}{Degree-\\texorpdfstring{$d$}{d} Zero-Sum Sets and Sum-Invariant Matrices}\n\nA natural question to ask is which other linear mappings have a similar property as given in \\EqRef{invariant_preserving_linear}. To answer this question, we study \\emph{degree-$d$ zero-sum sets} as a generalization of the above problem. \n\n\\begin{definition}[Degree-$d$ Zero-Sum Set]\nLet $S \\subseteq \\F_2^n$ and let $d \\in \\mathbb{N}$. We call $S$ \\emph{degree-$d$ zero-sum} if, for all $f \\in \\BF{n}{d}$,\n\\eql{zerosum}{\n\\bigoplus_{s \\in S} f(s) = 0.\n}\nWe define $\\rank(S)$ to be the maximum number of linearly independent elements in $S$ and denote by $\\ZS{n}{m}{d}$ the set of degree-$d$ zero-sum sets with $m$ elements and rank $n$.\n\\end{definition}\n\nWe first show the following equivalent characterizations of degree-$d$ zero-sum sets. \n\\begin{proposition}\n\\PropLabel{zero_sum}\nLet $S = \\{s_1,\\dots,s_{k}\\} \\subseteq \\F_2^n$ and let $d \\in \\ZZplus$. Let $\\matzs{S} \\in \\F_2^{n \\times k}$ be any matrix (up to a permutation of the columns) the columns of which correspond to the elements of $S$, i.e., $$\n\\matzs{S} = \\rowthree{s_1^\\top}{\\dots}{s_{k}^\\top}.\n$$\nThen the following statements are equivalent: \n\\enumroman\n\\begin{enumerate}\n    \\item $S$ is a degree-$d$ zero-sum set.\n    \\item $k$ is even and, for any choice of $d$ (not necessarily distinct) rows $r_1,\\dots,r_d$ of $\\matzs{S}$, it is $\\inprod{ r_1,\\dots,r_d} = 0$.\n    \\item in every $d \\times k$ submatrix of $\\matzs{S}$, each column occurs an even number of times.\n    \\item $\\deg(\\Ind_S) \\leq n-d-1$.\n    \\item for all $t \\geq 1$ and all $f \\in \\BF{t}{d}$, $\\forall X \\in \\F_2^{t \\times n}\\colon \\bigoplus_{s \\in S} f(sX^\\top) = 0$.\n\\end{enumerate}\nIn particular, the degree-$d$ zero-sum sets in $\\F_2^n$ are exactly the supports of the $n$-bit Boolean functions of degree at most $n-d-1$. Therefore, any non-empty degree-$d$ zero-sum set must contain at least $2^{d+1}$ elements.\n\\end{proposition}\n\\begin{proof}\nTo prove $(i) \\Rightarrow (ii)$, let \n$$\n\\matzs{S} = \\colthree{r_1}{\\vdots}{r_n}\n$$\nwith $r_i \\in \\F_2^{k}$. Let $l_1,\\dots,l_d$ be $d$ (not necessarily distinct) row indices and consider the monomial function $f \\in \\BF{n}{d}, \\ x \\mapsto \\prod_{i=1}^{d}x_{l_i}$, which has degree $d$. From \\EqRef{zerosum}, it must be\n$$\n0\n= \\bigoplus_{s \\in S} f(s)\n= \\bigoplus_{s \\in S} \\prod_{i=1}^{d} s_{l_i}\n= \\inprod{ r_{l_1},\\dots,r_{l_d}}.\n$$\nClearly, $k$ must be even because $\\bigoplus_{s \\in S} 1 = 0$.\n\n$(ii) \\Rightarrow (iii)$: We first see that any $1 \\times k$ submatrix of $\\matzs{S}$ contains each element in $\\F_2$ an even number of times. Indeed, let $r$ be any row in $\\matzs{S}$. From $(ii)$ we know that $\\wt(r) \\mod 2 = \\langle r \\rangle = 0$ and thus $r$ contains an even number of $1$'s. Because $k$ is even, it must also contain an even number of $0$'s. We now use induction on the number of rows. Let $d' < d$ such that $(ii) \\Rightarrow (iii)$ holds for $d'$. Let us choose an arbitrary $(d'+1) \\times k$ submatrix $H = [m_{i,j}]_{1\\leq i \\leq d'+1, 1 \\leq j \\leq k}$ of $\\matzs{S}$. We define $H^{(0)} \\coloneqq [m^{(0)}_{i,j}]$ to be the submatrix of $H$ that is obtained by selecting exactly the columns $m_{\\star,j}$ of $H$ for which $m_{d'+1,j} = 0$. 
Similarly, let $H^{(1)} \\coloneqq [m^{(1)}_{i,j}]$ be the submatrix of $H$ that is obtained by selecting exactly the columns $m_{\\star,j}$ of $H$ for which $m_{d'+1,j} = 1$. We have already seen from the initial step that both $H^{(0)}$ and $H^{(1)}$ must contain an even number of columns (otherwise the row $m_{d'+1,\\star}$ would have an odd weight). From $(ii)$, we know that \n\\begin{align*}\n    0 &= \\inprod{ m_{1,\\star},\\dots,m_{d',\\star},m_{d'+1,\\star} } =\n    \\inprod{ m^{(0)}_{1,\\star},\\dots,m^{(0)}_{d'+1,\\star} } + \\inprod{ m^{(1)}_{1,\\star},\\dots,m^{(1)}_{d'+1,\\star} } \\\\\n    &= \\inprod{ m^{(1)}_{1,\\star},\\dots,m^{(1)}_{d',\\star} } = \\inprod{ m^{(0)}_{1,\\star},\\dots,m^{(0)}_{d',\\star} }.\n\\end{align*}\nBecause of the induction hypothesis, $H^{(0)}$ and $H^{(1)}$ contain each column an even number of times and therefore, every column of $H$ occurs an even number of times.\n\n$(iii) \\Rightarrow (iv)$: Let $u \\in \\F_2^n$ with $\\wt(u) \\geq n-d$. Because of $(iii)$,\n$$\n|\\pset{s \\in S \\mid s \\preceq u}|\n$$\nis even (because $d$ zeroes in positions $i$ where $u_i = 0$ occur an even number of times among elements of $S$). It follows that \n$$\n|\\pset{s \\in S \\mid s \\preceq u}| \\mod 2 = \\bigoplus_{s \\preceq u}\\Ind_S(s) = 0\n$$\nand thus, the monomial $x^u$ doesn't occur in the ANF of $\\Ind_S$. Since this holds for all $u$ with $\\wt(u) \\geq n-d$, the algebraic degree of $\\Ind_S$ is at most $n-d-1$.\n\n\n$(iv) \\Rightarrow (v)$: Let $f \\in \\BF{t}{d}$ be an arbitrary function of degree at most $d$. Observe that\n\\eql{indicator_sum}{\n    \\forall X \\in \\field{t \\times n} \\quad \\bigoplus_{s \\in \\field{n}} \\Ind_{S} \\cdot f(sX^{\\top}) = 0,\n}\nbecause $\\deg{\\Ind_{S} \\cdot (f \\circ X)} \\le \\deg{\\Ind_{S}} + \\deg{f}\\le n - 1$. Here, $f \\circ X$ denotes the $n$-bit Boolean function $s \\mapsto f(sX^\\top)$.\n\\EqRef{indicator_sum} can equivalently be written as\n\\eq{\n    \\forall X \\in \\field{t \\times n} \\quad \\bigoplus_{s \\in S}  f(sX^\\top) = 0,\n}\nwhich proves $(v)$. The implication $(v) \\Rightarrow (i)$ follows by letting $t=n$ and $X = \\idmat{n}$.\n\nTo see that any non-empty degree-$d$ zero-sum set contains at least $2^{d+1}$ elements, we use the fact that any non-zero Boolean function of degree at most $n-d-1$ has a weight at least $2^{n-(n-d-1)} = 2^{d+1}$. \n\\end{proof}\n\nIt is worth remarking that the property of being degree-$d$ zero-sum  is invariant under the application of an injective linear mapping. Indeed, if $\\varphi\\colon \\Span(S) \\rightarrow \\F_2^{n'}$ is an injective linear function on the subspace $\\Span(S)$ of dimension $\\rank(S)$, then $|\\varphi(S)|=|S|$ and if $S$ is degree-$d$ zero-sum, so is $\\varphi(S)$. Further, $\\rank(\\varphi(S)) = \\rank(S)$. Therefore, without loss of generality, we can represent a zero-sum set $S \\in \\ZS{n}{m}{d}$ as a subset of $\\F_2^n$ and given by the columns of an $n \\times m$ matrix $\\matzs{S}$ of the form \n\\eql{systematic_form}{\n\\matzs{S} = \\rowtwo{\\idmat{n}}{L}\n}\nfor an $L \\in \\F_2^{n \\times (m-n)}$. We say that a zero-sum set (resp. a matrix $\\matzs{S}$) given in the representation of \\EqRef{systematic_form} is in \\emph{systematic form}. We are in particular interested in the properties of such matrices $L$ that define zero-sum sets in $\\ZS{n}{m}{d}$ in the above way. For instance, such an $L$ can only exist if $m$ is even. 
We generalize this by introducing the notion of a \\emph{degree-$d$ sum-invariant matrix} as follows.\n\n\n\n\\begin{definition}[Degree-$d$ Sum-Invariant Matrix]\nA matrix $L \\in \\F_2^{n \\times m}$ is called \\emph{degree-$d$ sum-invariant} if, for all $t \\geq 1$ and all $f \\in \\BF{t}{d}$,\n\\eql{linear_layer}{\n\\forall X \\in \\F_2^{t \\times n} \\colon \n\\bigoplus_{i=1}^n f\\big( (X^\\top)_i \\big) =\n\\bigoplus_{j=1}^m f\\big( ((XL)^\\top)_j \\big) + \\varepsilon_{m+n}f(0),\n}\nwhere $\\varepsilon_{m+n} = (m+n) \\mod 2$.\n\\end{definition}\n\n\n\\begin{proposition}\n\\PropLabel{inner_product}\nLet $L \\in \\F_2^{n \\times m}$ be a linear mapping and let $d \\in \\mathbb{N}$.\nThen the following statements are equivalent:\n\\enumroman\n\\begin{enumerate}\n    \\item $L$ is degree-$d$ sum-invariant.\n\n    \\item The columns of the matrix $\\matl{L}$ occurring with odd multiplicity define a degree-$d$ zero-sum set, where\n    \\begin{equation}\n    \\begin{cases}\n    \\arraycolsep=4pt\\def\\arraystretch{1}\n    \\matl{L} \\coloneqq \\rowtwo{\\idmat{n}}{L} \\in \\F_2^{n\\times(m+n)},\n    & \\text{~if~} m+n \\text{~is even}\\;; \\\\\n    \\arraycolsep=4pt\\def\\arraystretch{1}\n    \\matl{L} \\coloneqq \\rowthree{\\idmat{n}}{L}{0} \\in \\F_2^{n\\times(m+n+1)},\n    & \\text{~if~} m+n \\text{~is odd}\\;.\n    \\end{cases}\n    \\end{equation}\n    \n    \\item For all $x_1,\\dots x_d \\in \\F_2^{n}$ it is $\\inprod{ x_1,\\dots,x_d} = \\inprod{x_1 L,\\dots,x_d L}$.\n\\end{enumerate}\nMoreover, if $L$ fulfills $(i)$ and if $d \\geq 2$, then $n \\le m$, $LL^{\\top} = I_{n}$ and $L$ must have full rank $n$. \n\\end{proposition}\n\\begin{proof}\n\nWe first prove $(i) \\Rightarrow (ii)$.\nIf $m+n$ is even, then \\EqRef{linear_layer} is equivalent to \n\\eql{linear_layer_rewrite}{\n\\forall X \\in \\F_2^{t \\times n} \\colon\n\\bigoplus_{i=1}^n f\\big(e_iX^\\top\\big) +\n\\bigoplus_{j=1}^m f\\big( (L^\\top)_j X^\\top \\big) = 0,\n}\nwhere $e_i$ denotes the $i$-th unit vector. If there is a $j$ for which $(L^\\top)_j$ is equal to a unit vector $e_k$, then $f((L^\\top)_j X^\\top) = f(e_k X^\\top)$ and the two terms cancel in \\EqRef{linear_layer_rewrite}. Similarly, if there exist two different $j_1,j_2$ such that $(L^\\top)_{j_1} = (L^\\top)_{j_2}$, then $f( (L^\\top)_{j_1} X^\\top)$ and  $f( (L^\\top)_{j_2}X^\\top)$ cancel out. This is another way of saying that the columns of the matrix $\\matl{L} = \\rowtwo{\\idmat{n}}{L}$ occurring with odd multiplicity define a degree-$d$ zero-sum set.\n\nIf $m+n$ is odd, then $\\varepsilon_{m+n} = 1$ and \\EqRef{linear_layer} can be written as\n\\eq{\n\\forall X \\in \\F_2^{t \\times n} \\colon\n\\bigoplus_{i=1}^n f\\big(e_i X^\\top \\big) +\n\\bigoplus_{j=1}^m f\\big( (L^\\top)_{j} X^{\\top}\\big) + f(0X^\\top) = 0.\n}\nThis is equivalent to say that the columns of the $n\\times(m+n+1)$ matrix $\\matl{L} = \\rowthree{\\idmat{n}}{L}{0}$ occurring with odd multiplicity define a degree-$d$ zero-sum set.\n\n\n$(ii) \\Rightarrow (iii)$. 
If the columns of $\\matl{L}$ occurring with odd multiplicity define a degree-$d$ zero sum set, then, because of \\PropRef{zero_sum}, any $d$ (not necessarily distinct) rows $\\rowtwo{e_{l_1}}{L_{l_1}},\\ldots,\\rowtwo{e_{l_d}}{L_{l_d}}$ of $\\matl{L}$ fulfill \n\\begin{align*}\n    \\inprod{ \\rowtwo{e_{l_1}}{L_{l_1}}, \\ldots, \\rowtwo{e_{l_d}}{L_{l_d}} } = 0\\;, \n\\end{align*}\nwhich is equivalent to\n$$\n\\inprod{e_{l_1},\\dots,e_{l_d}} = \\inprod{ e_{l_1}L,\\dots,e_{l_d}L }.\n$$\nBecause of the linearity of the inner product, i.e.,\n$$\n\\inprod{x_1+x_1',x_2,\\dots,x_d} = \\inprod{x_1,x_2,\\dots,x_d} + \n                                  \\inprod{x_1',x_2,\\dots,x_d},\n$$\nthe statement follows.\n\n$(iii) \\Rightarrow (i)$. If there are $f_1,f_2 \\in \\BF{t}{d}$ such that \\EqRef{linear_layer} holds for both $f_1$ and $f_2$, then it clearly holds for $f_1+1$ and for $f_1+f_2$ as well. Therefore, without loss of generality, let $f \\in \\BF{t}{d}$ be a monomial function, i.e., $f(z) = \\prod_{k=1}^{d}z_{l_k}$ for $1 \\leq l_1 \\leq \\dots \\leq l_{d} \\leq t$. Let $X \\in \\F_2^{t \\times n}$. Then,\n\\begin{align*}\n    \\bigoplus_{i=1}^n f((X^\\top)_i) =\n    \\bigoplus_{i=1}^n\\prod_{k=1}^d (X^\\top)_{i,l_k} =\n    \\langle X_{l_1},\\dots,X_{l_d} \\rangle \n\\end{align*}\nand\n\\begin{align*}\n    \\bigoplus_{j=1}^m f( ((XL)^\\top)_j) + \\varepsilon_{m+n}f(0) = \n    \\bigoplus_{j=1}^m\\prod_{k=1}^d ((XL)^\\top)_{j,l_k} = \\langle X_{l_1}L,\\dots,X_{l_d}L \\rangle \\;.\n\\end{align*}\nIt follows that if $L$ preserves all generalized inner products of $d$ elements, then $L$ is degree-$d$ sum-invariant.\n\nIf $L$ fulfills the equivalent statements $(i)$ - $(iii)$,  then, for all $x,y \\in \\F_2^{n}$, it is \n\\[ xy^\\top = \\inprod{ x,y} = \\inprod{xL,yL} = xL(yL)^\\top =  xLL^{\\top}y\\;.\\]\nIt follows that $LL^\\top$ must be the identity and thus, $L$ must have full rank $n$.\n\\end{proof}\n\nThis result shows a relation between degree-$d$ sum-invariant matrices and semi-orthogonal matrices. A matrix $L \\in \\F_2^{n \\times m}$ with $n \\leq m$ is called \\emph{semi-orthogonal} if $LL^\\top = \\idmat{n}$. Indeed, we have shown that a matrix is degree-$2$ sum-invariant if and only if it is semi-orthogonal.\\footnote{We only consider matrices with $n \\leq m$. If $L \\in \\F_2^{n\\times m}$ with $n > m$, $L$ would be defined to be semi-orthogonal if $L^\\top L = I_m$. Then, $L$ is semi-orthogonal if and only if $L^\\top$ is degree-$2$ sum-invariant.} Because of the above relation, the degree-$(d+1)$ sum-invariant matrices might also be called \\emph{$d$-th order semi-orthogonal}.\n\nThe invertible semi-orthogonal matrices are exactly the \\emph{orthogonal} matrices and the orthogonal matrices in dimension $n$ form a multiplicative group, called the \\emph{orthogonal group}. With the above equivalences, we obtain an interesting characterization of the orthogonal groups over $\\F_2$.  \n\n\\begin{corollary}\nA matrix $L \\in \\F_2^{n \\times n}$ is orthogonal if and only if in each $2 \\times 2n$ submatrix of $\\rowtwo{\\idmat{n}}{L}$, each column occurs an even number of times. 
\n\\end{corollary}\n\n\\subsection{Relation to Orthogonal Arrays}\n\\PropRef{zero_sum} points out a relation between degree-$d$ zero-sum sets and orthogonal arrays.\n\n\\begin{definition}[Orthogonal Array~\\cite{hedayat1999orthogonal}]\nAn $m \\times n$ matrix $M$ with entries from a finite set of cardinality $k$ is said to be an \\emph{orthogonal\narray with $k$ levels, strength $d$ and index $\\lambda$}, denoted $OA(m,n,k,d)$, if every $m \\times d$ submatrix of $M$ contains each $d$-tuple exactly $\\lambda$ times as a row. Without loss of generality, we will assume that $M$ is a matrix with elements in $\\mathbb{Z}_k$. \n\\end{definition}\n% \\lambda = m / s^d\n\nFor our purposes we are only interested in the  case of $k = 2$. We directly obtain the following.\n\n\\begin{corollary}\nLet $S \\subseteq \\F_2^n$. If $\\matzs{S}^{\\top}$ is an $OA(|S|,n,2,d)$ such that $2^{d+1}$ divides $|S|$ (i.e., if the index $\\lambda$ is even), then $S$ is a degree-$d$ zero-sum set.\n\\end{corollary}\n\n\nAs an example, for $d = 3$, there is a well-known construction of orthogonal arrays from Hadamard matrices (see~\\cite[pp.\\@ 145--148]{hedayat1999orthogonal}). A \\emph{Hadamard matrix} of order $n$ is a matrix $H \\in \\mathbb{Z}^{n \\times n}$ which can only take values in $\\{-1,1\\}$ and which fulfills $H^{\\top}H = n \\idmat{n}$. For a  matrix $M$ with elements in $\\{-1,1\\}$, we denote by $\\widetilde{M}$ the $\\F_2$ matrix obtained from $M$ by replacing $-1$ with $0$, i.e., we define $\\widetilde{M}$ to be the result of $\\frac{1}{2}(M+1)$, interpreted in $\\F_2$.\n\nIf $H$ is a Hadamard matrix of order $8k$ for $k \\in \\ZZplus$, it is well known that \n$$\n\\widetilde{\\coltwo{H}{-H}}\n$$\nis an $OA(16k,8k,2,3)$ of even index (see~\\cite[Theorem 4.16]{hedayat_hadamard}). Therefore, it defines a degree-$3$ zero-sum set $S \\subseteq \\F_2^{8k}$ with $16k$ elements. However, its rank can be at most $4k$ (see~\\cite[Proposition 2]{hadamard_codes}) and we are interested in the zero-sum sets of full rank.\n\n", "meta": {"hexsha": "e683037548358effe59df79a55712909d74a8efa", "size": 14344, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9niLinear/3zerosum.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9niLinear/3zerosum.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9niLinear/3zerosum.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 69.9707317073, "max_line_length": 1148, "alphanum_fraction": 0.6583937535, "num_tokens": 5228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.5717615632357089}}
{"text": " \\documentclass[../thesis.tex]{subfiles}\n \\begin{document}\n\n\nThis appendix contains proofs for several theorems extending\nthe well-known Data Processing Inequality in information theory\nto configurations of random variables beyond a triplet Markov Chain.\nThe motivation for these theorems is the desire to model information\nflow through a world modeled as a Bayesian network, where information\nflow is determine by causal flows and nomic associations.\nNomic association here is measured as mutual information between two variables.\nThe Chain rule for mutual information and the Markov properties of\na Bayesian network make it possible to prove several theorems that are\nas far as we know new.\n\n\\section{Triplet Structures}\n\nThe Data Processing Inequality is a standard theorem in information theory.\nIt concerns the mutual information of three variables arranged in a\nMarkov Chain.\n\n\\begin{dfn}\n  (\\citet{cover2012elements})\n  Random variables $X, Y, Z$ are said to \\emph{form a Markov chain in that order}\n  (denoted $X \\rightarrow Y \\rightarrow Z$) if the conditional distribution\n  of $Z$ depends only on $Y$ and is conditionally independent of $X$.\n  Specifically, $X,Y,Z$ form a Markov chain $X \\rightarrow Y \\rightarrow Z$\n  if the joint probability mass function can be written as\n  \\begin{equation}\n    p(x,y,z) = p(x)p(y \\vert x)p(z \\vert y)\n  \\end{equation}\n\\end{dfn}\n\n\\begin{thm}[Data Processing Inequality]\n  Given a probability model defined by the following (Markov Chain):\n  \\begin{center}\n    \\begin{tikzcd}    \n      X \\arrow[r] & Y \\arrow[r] & Z \\\\\n    \\end{tikzcd}\n  \\end{center}\n  where $X \\independent Z \\vert Y$, then it must be that $I(X;Y) \\geq I(X;Z)$.\n\\end{thm}\n\\begin{proof}\n  From \\cite{cover2012elements}. By the Chain Rule, mutual information\n  can be expanded in two different ways:\n  \\begin{equation}\n    \\begin{split}\n    I(X;Y,Z) & = I(X;Z) + I(X;Y \\vert Z) \\\\\n    & = I(X;Y) + I(X;Z \\vert Y)\n    \\end{split}\n  \\end{equation}\n  Since X and Z are conditionally independent given Y, we have\n  $I(X;Z \\vert Y) = 0$. Since $I(X;Y \\vert Z) \\geq 0$, we have\n  \\begin{equation}\n    I(X;Y) \\geq I(X;Z)\n  \\end{equation}\n  We have equality if and only if $I(X;Y \\vert Z) = 0$\n  (i.e. $X \\rightarrow Z \\rightarrow Y$ forms a Markov Chain).\n  Similarly, one can prove that $I(Y;Z) \\geq I(X;Z)$.\n\\end{proof}\n\nBayesian networks are a generalization of Markov chains.\nA Bayesian network models the probability distribution\nbetween many random variables as a directed acyclic\ngraph. The conditional probability distribution of\neach variable is defined in terms of the graphical\nparents $pa(\\dot)$ of each variable, i.e. 
$P(X_i) = P(X_i \\vert pa(X_i))$.\nThe joint distribution is\n\n\\begin{equation}\n  P(X_1, X_2, ..., X_n) = \\prod_{i=1}^{n}P(X_i \\vert pa(X_i))\n\\end{equation}\n\nWe can now prove several theorems that are similar to the\nData Processing Inequality but for other probabilistic\nstructures besides Markov chains.\n\n\\begin{thm}[Data Sourcing Inequality]\n  Given a probability model defined by the following (Common Cause):\n  \\begin{center}\n    \\begin{tikzcd}    \n         & Y \\arrow[dl] \\arrow[dr] & \\\\\n      X  &                         & Z\n    \\end{tikzcd}\n  \\end{center}\n  then it must be that $I(X;Y) \\geq I(X;Z)$.\n\\end{thm}\n\\begin{proof}\n  The implication of the common cause structure is that\n  \\begin{equation}\n    p(x,y,z) = p(x \\vert y )p(y)p(z \\vert y).\n  \\end{equation}\n  It follows that $X \\independent Z \\vert Y$.\n  The rest of the proof is identical to the previous proof.\n\\end{proof}\n\n\\begin{thm}[Unobserved common effect inequality]\n  Given variables $X,Y,Z$ with\nthe common effect structure\n\\begin{center}\n  \\begin{tikzcd}\n    X \\arrow[dr] &   & \\arrow[dl] Z \\\\\n    & Y &                        &   \n  \\end{tikzcd}\n\\end{center}\n  then it must be that $I(X;Y) \\geq I(X;Z) = 0$.\n\\end{thm}\n\\begin{proof}\n  The implication of the structure is that\n  \\begin{equation}\n    p(x,y,z) = p(x) p(y \\vert x,z) p(z).\n  \\end{equation}\n  It follows that $X \\independent Z$, therefore $I(X;Z) = 0$.\n  Because the mutual information of two variables is always\n  nonnegative, \n  $$I(X;Y) \\geq I(X;Z)$$\n\\end{proof}\n\nWe note that while a similar property holds for a\n``collider'' or common effect structure, its proof is different\nfrom the chain and common cause cases because, in general,\nit is not the case that $X \\independent Z \\vert Y$ for\na common effect structure.\nFor example, when $X$ and $Z$ are both fair coin tosses\nand $Y = X \\oplus Z$, $X$ and $Z$ are independent from each other\nbut not when conditioned on $Y$.\n\nWhen a common effect is in the conditioning set,\nthe two causes depend probabilistically on each other.\nThe extent to which these dependencies are limited can\nbe characterized by a few equations.\n\n\\begin{lem}\n  \\label{lemma:common-effect-1}\n  Given variables $X_1,X_2,Y$ with\n  the common effect structure $X_1 \\rightarrow Y \\leftarrow X_2$,\n  then $I(X_1;X_2,Y) = I(X_1;Y \\vert X_2)$.\n\\end{lem}\n\\begin{proof}\n  By the Chain Rule for mutual information,\n  $$I(X_1;X_2,Y) = I(X_1;X_2) + I(X_1;Y \\vert X_2)$$\n  Because of the common effect structure, $I(X_1;X_2) = 0$.\n  Therefore, $I(X_1;X_2,Y) = I(X_1;Y \\vert X_2)$.\n\\end{proof}\n\n\\begin{lem}\n  \\label{lemma:common-effect-2}\n  Given variables $X_1,X_2,Y$ with\n  the common effect structure $X_1 \\rightarrow Y \\leftarrow X_2$,\n  then\n  \\begin{equation}\n    \\begin{split}\n      I(Y; X_1, X_2) & = I(X_1;X_2,Y) + I(X_2;Y) \\\\\n      & = I(X_2;X_1,Y) + I(X_1;Y)\n    \\end{split}\n  \\end{equation}\n\\end{lem}\n\\begin{proof}\n  \\begin{equation}\n    \\begin{split}\n      & I(X_1;X_2,Y) \\\\\n      & = I(X_1;Y \\vert X_2) \\\\\n      & = H(Y \\vert X_2) - H(Y \\vert X_1, X_2)\\\\\n      & = H(Y \\vert X_2) - H(Y) + I(Y ; X_1, X_2)\\\\\n      & = I(Y ; X_1, X_2) - I(X_2;Y)\n    \\end{split}\n  \\end{equation}\n  which implies that\n  $$I(X_1;X_2,Y) + I(X_2;Y) = I(Y ; X_1, X_2)$$\n  The proof works symmetrically for\n  $I(X_2;X_1,Y) + I(X_1;Y) = I(Y ; X_1, X_2)$\n\\end{proof}\n\n\\begin{lem}\n  \\label{lemma:common-effect-3}\n  Given 
variables $X_1,X_2,Y$ with\n  the common effect structure $X_1 \\rightarrow Y \\leftarrow X_2$,\n  then $I(X_1;X_2 \\vert Y) \\leq I(X_1;X_2,Y)$.\n\\end{lem}\n\\begin{proof}\n  \\begin{equation}\n    \\begin{split}\n      & I(X_1;X_2 \\vert Y) \\\\\n      & = H(X_1 \\vert Y) - H(X_1 \\vert X_2, Y) \\\\\n      & = H(X_1) - I(X_1;Y) - H(X_1) + I(X_1;X_2,Y)\\\\\n      & = I(X_1;X_2,Y) - I(X_1;Y) \\\\\n      & \\leq I(X_1;X_2,Y) \\\\\n    \\end{split}\n  \\end{equation}\n\\end{proof}\n\n\\begin{thm}\n  \\label{thm:common-effect-inequality}\n  Given variables $X_1,X_2,Y$ with\n  the common effect structure $X_1 \\rightarrow Y \\leftarrow X_2$, then\n  $$I(X_1,X_2 ; Y) \\geq I(X_1;X_2,Y) = I(X_1 ; Y \\vert X_2) \\geq I(X_1;X_2 \\vert Y)$$\n\\end{thm}\n\\begin{proof}\n  Follows from Lemmas \\ref{lemma:common-effect-1},\n  \\ref{lemma:common-effect-2}, and \\ref{lemma:common-effect-3}.\n\\end{proof}\n\n\n\\section{Quartet Structures}\n\nWhile triplet structures (chain, common effect, and common cause)\nare the building blocks of larger paths in Bayesian networks,\nan analysis of larger, quartet structures will help us develop\ngeneral theorems about the mutual information along paths.\n\nRecall that a path with a common effect is not blocked\nif \\emph{either} the common effect \\emph{or} a descendant of\nthe effect is in the conditioning set.\nLet's look at the following structure,\nwhich we will call a \\emph{wishbone} structure.\n\n\\begin{center}\n  \\begin{tikzcd}\n    X_0 \\arrow[dr] &   & \\arrow[dl] X_1 \\\\\n    & Y \\arrow[d] & \\\\\n    & Z & \n  \\end{tikzcd}\n\\end{center}\n\nHere, $Y$ is a common effect of $X_0$ and $X_1$,\nand $Z$ is a descendant of $Y$.\nHow much information flows from $X_0$ to $X_1$\nwhen $Z$ is known?\n\n\\begin{thm}\n  \\label{thm:wishbone}\n  For variables $X_0, X_1, Y, Z$ in a wishbone structure,\n  $$I(X_0;X_1 \\vert Z) \\leq I(Y;Z)$$\n\\end{thm}\n\\begin{proof}\n  Consider the quantity $I(X_0, Y; X_1, Z)$,\n  expanded by the Chain Rule.\n  One expansion is:\n  \\begin{equation}\n    \\begin{split}\n      & I(X_0, Y; X_1, Z) \\\\\n      & = I(Y;Z) + I(X_0;Z \\vert Y) + I(Y; X_1 \\vert Z) + I(X_0; X_1 \\vert Y,Z) \\\\\n      & = I(Y;Z) + I(X_0,Y;X_1)\n    \\end{split}\n  \\end{equation}\n  Another expansion is:\n  \\begin{equation}\n    \\begin{split}\n      & I(X_0, Y; X_1, Z) \\\\\n      & = I(X_0;Z) + I(Y;X_1 \\vert X_0) +\n      I(X_0; X_1 \\vert Z) + I(Y; X_1 \\vert X_0,Z) \\\\\n      & \\geq I(Y; X_1 \\vert X_0) + I(X_0; X_1 \\vert Z) \n    \\end{split}\n  \\end{equation}\n\n  By Theorem \\ref{thm:common-effect-inequality},\n  we know that $I(Y;X_1 \\vert X_0) = I(X_0,Y;X_1)$\n  for three variables in a common effect structure,\n  as they are for these variables in the wishbone structure.\n\n  So we can set the two expansions equal to each other and reduce:\n  \\begin{equation}\n    \\begin{split}\n      I(Y; X_1 \\vert X_0) + I(X_0; X_1 \\vert Z) & \\leq I(Y;Z) + I(X_0,Y;X_1) \\\\\n      I(X_0, Y; X_1) + I(X_0; X_1 \\vert Z) & \\leq I(Y;Z) + I(X_0,Y;X_1) \\\\\n      I(X_0; X_1 \\vert Z) & \\leq I(Y;Z)\n    \\end{split}\n  \\end{equation}\n\\end{proof}\n\n\\section{Paths}\n\nWe can now look at mutual information of nodes connected\nby longer paths.\nWe start with an arbitrarily long Markov chain.\n\n\\begin{center}\n  \\begin{tikzcd}    \n    X_0  \\arrow[r] & X_1 \\arrow[r] & ... 
\\arrow[r] & X_{n-1} \\arrow[r] & X_n\n  \\end{tikzcd}\n\\end{center}\n\n\\begin{thm}[Chain Data Processing Inequality]\n  \\label{cdpi-thm}\n  Given a Markov chain of variables $$X_0, \\ldots, X_n$$\n  such that $X_0 \\rightarrow \\ldots \\rightarrow X_n$,\n  it must be the case that\n  $$I(X_0;X_n) \\leq \\min_i I(X_i;X_{i+1}).$$\n\\end{thm}\n\\begin{proof}\n  \\label{cdpi-prf}\n  For all $i$, by the Chain rule for mutual information\n  and the independence properties of the Markov chain\n  (each conditional term below vanishes because the conditioning\n  set separates the two arguments),\n  \\begin{equation}\n    \\label{cdpi-prf-eq1}\n    \\begin{split}\n      I(X_0, \\ldots, X_{i} ; X_{i+1},\\ldots,X_n)\n      & = \\sum_{j=i+1}^{n} I(X_0,\\ldots,X_{i}; X_j \\vert X_{i+1},\\ldots,X_{j-1}) \\\\\n      & = I(X_0,\\ldots,X_{i}; X_{i+1}) \\\\\n      & = \\sum_{j=i}^{0} I(X_{i+1}; X_{j} \\vert X_{j+1},\\ldots,X_i) \\\\\n      & = I(X_i;X_{i+1}) + \\sum_{j=i-1}^{0} I(X_{i+1}; X_{j} \\vert X_{j+1},\\ldots,X_i) \\\\\n      & = I(X_i;X_{i+1})\n    \\end{split}\n  \\end{equation}\n  The Chain rule can expand the variables in arbitrary order.\n  So we can also derive (using the fact that mutual information\n  is always nonnegative):\n  \\begin{equation}\n    \\label{cdpi-prf-eq2}\n    \\begin{split}\n      & I(X_0, \\ldots, X_{i} ; X_{i+1},\\ldots,X_n) \\\\\n      &= I(X_0, \\ldots, X_{i} ; X_n) + \\sum_{j=n-1}^{i+1} I(X_{0},\\ldots,X_{i}; X_j \\vert X_{j+1},\\ldots,X_n) \\\\\n      &\\geq I(X_0, \\ldots, X_{i} ; X_n) \\\\\n      &= \\sum_{j = 0}^{i} I(X_n ; X_j \\vert X_{j-1}, \\ldots , X_0) \\\\\n      &= I(X_n; X_0) + \\sum_{j=1}^{i} I(X_n; X_j \\vert X_{j-1}, \\ldots, X_0) \\\\\n      &\\geq I(X_n; X_0)\n    \\end{split}\n  \\end{equation}\n  Combining these two results and generalizing across all $i$,\n  \\begin{equation}\n    \\forall i, I(X_0;X_n) \\leq I(X_i;X_{i+1})\n  \\end{equation}\n  which entails that which is to be proven,\n  \\begin{equation}\n    I(X_0;X_n) \\leq \\min_i I(X_i;X_{i+1})\n  \\end{equation}\n\\end{proof}\n\nOur goal is to generalize this theorem to Bayesian paths\nwith other structures, just as in the previous section\nwe found equivalents to the Data Processing Inequality in\nother triplet structures.\n\n\\begin{dfn}[Path]\nA \\emph{path} between two nodes \\(X_1\\) and \\(X_2\\) in a graph \nis a sequence of nodes starting with \\(X_1\\) and ending with \\(X_2\\)\nsuch that successive nodes are connected by an edge (traversing\nin either direction).\n\\end{dfn}\n\nIn this section, we will only consider paths isolated from\nany other variables.\nWe are interested in how to derive useful bounds on the\nmutual information of a path based on the mutual information\nof links within the path.\n\n\\begin{dfn}[Mutual information of a path]\n  The \\emph{mutual information of a path} between two nodes \\(X\\) and \\(Y\\)\n  is $I(X;Y)$.\n\\end{dfn}\n\n\\begin{thm}[Unobserved Path Data Processing Inequality]\n  \\label{thm:updpi}\n  Given a path between $X_0$ and $X_n$\n  of variables $X_0, \\ldots, X_n$, with no other connected variables,\n  it must be the case that\n  $$I(X_0;X_n) \\leq \\min_{i} I(X_i;X_{i+1}).$$\n\\end{thm}\n\\begin{proof}\n  This proof mirrors the proof of Theorem \\ref{cdpi-thm}.\n  \n  For any $i$, consider $I(X_0,\\ldots,X_i;X_{i+1},\\ldots,X_n)$.\n\n  By the logic of Equation \\ref{cdpi-prf-eq1},\n  $I(X_0,\\ldots,X_i;X_{i+1},\\ldots,X_n) = I(X_i;X_{i+1})$.\n\n  By the logic of Equation \\ref{cdpi-prf-eq2},\n  $I(X_0,\\ldots,X_i;X_{i+1},\\ldots,X_n) \\geq I(X_0;X_n)$.\n\n  Therefore, $\\forall i, I(X_0;X_n) \\leq I(X_i;X_{i+1})$\n  and $I(X_0;X_n) \\leq \\min_i I(X_i;X_{i+1})$.\n\\end{proof}\n\nTheorem \\ref{thm:updpi} applies to any path on the 
condition\nthat none of the variables are observed.\nIts proof is identical to the proof for Markov chains because\nisolated, unobserved paths are Markov equivalent to Markov chains.\n\nSome proofs extending this result follow from the theory of Bayesian\nnetworks. Recall that there are two conditions under which a\npath between two variables is blocked.\nFirst, an unobserved head-to-head connection on the\npath blocks the path and makes the terminal nodes conditionally\nindependent. Second, an observation of a head-to-tail or tail-to-tail\nnode blocks the path and makes the terminal nodes conditionally\nindependent.\nIf the only paths between two variables are blocked, then they\nare d-separated and therefore independent, with zero mutual information.\n\n\\begin{thm}[Blocked Path Mutual Information]\n  \\label{thm:bpmi}\n  For any blocked path between $X_0$ and $X_n$\n  of variables $X_0, \\ldots, X_n$ with no other connected\n  variables, $I(X_0;X_n) = 0$.\n\\end{thm}\n\\begin{proof}\n  If the only path between $X_0$ and $X_n$ is blocked,\n  then $X_0$ and $X_n$ are d-separated and \n  conditionally independent. If $X_0$ and $X_n$ are\n  conditionally independent,\n  then $I(X_0; X_n) = 0$.\n\\end{proof}\n\nThe difficult case for determining the mutual information\nof a path is the case where there are observed common\neffects on the path.\nThis breaks the conditions for the proof of Theorem \\ref{thm:updpi}.\nIt is possible for $I(X_i;X_{i+1}) = 0$ but\n$I(X_{i-1};X_{i+1} \\vert X_i) > 0$.\nAs a simple example, consider again the case where\n$X_{i-1}$ and $X_{i+1}$ are fair coin tosses and\n$X_i = X_{i-1} \\oplus X_{i+1}$.\n\nIf there are many common effect nodes on the path and\nonly some of them are observed, then the path is\nblocked and the mutual information is given by\nTheorem \\ref{thm:bpmi}; the mutual information of the\npath is zero.\nSimilarly, if there are common cause or chain triplets\non the path and the central node of the triplet is observed,\nthe mutual information of the path is trivially zero.\nSo we need to consider only the case where there's\na path where \\emph{all and only} the common effect\nnodes are observed.\n\n\\begin{thm}[Path Mutual Information Theorem (PMIT)]\n  \\label{thm:path-mutual-information}\n  Given a path between $X_0$ and $X_n$\n  of variables $\\{X_0, ..., X_n\\} = \\mathcal{X}$ with no other connected\n  variables.\n  Let $\\mathcal{X}_E$ be the common effect nodes, meaning only those\n  nodes $X_i$ such that the edge structure of the path is\n  $X_{i-1} \\rightarrow X_i \\leftarrow X_{i+1}$.\n  The mutual information of the path when all the common effects\n  are observed satisfies\n  \n  $$I(X_0; X_n \\vert \\mathcal{X}_E) \\leq \\min_{i} \n  \\begin{cases}\n    I(X_i;X_{i+1}) & \\text{if } X_i,X_{i+1} \\notin \\mathcal{X}_E \\\\\n    I(X_{i-1};X_{i+1} \\vert X_i) & \\text{if } X_i \\in \\mathcal{X}_E\\\\\n  \\end{cases}$$\n  \n\\end{thm}\n\\begin{proof}\n  For any $i$, consider\n  $$I(X_0,..., X_i;X_{i+1},..., X_n \\vert \\mathcal{X}_E)$$\n  By the Chain Rule for mutual information, this can be expanded as\n  $$\\sum_{j=0}^i I(X_j;X_{i+1},..., X_n \\vert X_0,..., X_{j-1},\\mathcal{X}_E)$$\n\n  Consider two cases.\n\n  In the first case, $X_i \\notin \\mathcal{X}_E$\n  and $X_{i+1} \\notin \\mathcal{X}_E$.\n\n  By logic similar to Equations \\ref{cdpi-prf-eq1}\n  and \\ref{cdpi-prf-eq2}, as before,\n  $I(X_0;X_n \\vert \\mathcal{X}_E) \\leq I(X_i;X_{i+1})$.\n  \n  In the second case, $X_i$ is a common effect node,\n  i.e. $X_i \\in \\mathcal{X}_E$.\n  It is not possible to 
have two common\n  effect nodes adjacent on a path.\n  So in any case where either $X_{i-1}$ or $X_{i+1}$ is\n  in the conditioning set, the path is blocked.\n  We can therefore compute the mutual information and its Chain Rule\n  expansion as:\n  \n  \\begin{equation}\n    \\begin{split}\n      I(X_0,..., X_{i-1};X_{i+1},..., X_n \\vert \\mathcal{X}_E) \\\\\n      = \\sum_{j=i-1}^0 I(X_j;X_{i+1},..., X_n \\vert X_{j+1},..., X_{i-1},\\mathcal{X}_E) \\\\\n      = I(X_{i-1};X_{i+1},..., X_n \\vert \\mathcal{X}_E)\\\\\n      = I(X_{i-1};X_{i+1} \\vert \\mathcal{X}_E)\\\\\n      = I(X_{i-1};X_{i+1} \\vert X_i)\n    \\end{split}\n  \\end{equation}\n\n  Since once again by the logic of Equation \\ref{cdpi-prf-eq2}\n  this value is greater than or equal to the mutual information of\n  the path, we have\n  $$I(X_0;X_n \\vert \\mathcal{X}_E) \\leq I(X_{i-1};X_{i+1} \\vert X_i)$$\n  for the cases when $X_i \\in \\mathcal{X}_E$.\n\n  Combining these results, we get the bound on the mutual\n  information of the path.  \n\\end{proof}\n\nNote that Theorem \\ref{thm:updpi} is a special case of PMIT,\nor Theorem \\ref{thm:path-mutual-information}, where the\nset of common effects on the path $\\mathcal{X}_E$ is empty.\n\n%*** What about if there is a common effect node,\n%and a descendent of it is observed? ***\n%\n%\\begin{center}\n%  \\begin{tikzcd}    \n%    X_0  \\arrow[dr] & & & &  \\\\\n%      & Y_0 \\arrow[r] & ... \\arrow[r] & Y_n \\arrow[r] & Z \\\\\n%    X_1 \\arrow[ur] & & & &\n%  \\end{tikzcd}\n%\\end{center}\n%\n%\\emph{TODO: Proof for this case}\n%\n%\n%Note that Theorem \\ref{thm:wishbone} generalizes to any\n%variables such that the Markov\n%properties implies by the wishbone structure hold.\n%This can be the case even when the variables are\n%part of a larger, more complex structure.\n%For example, consider the structure:\n%\n%\n%The inequality $I(X_0;X_1 \\vert Z) \\leq I(Y;Z)$\n%still holds in this case.\n%The difference is that $I(Y;Z)$ depends\n%on the mutual information of the intermediate\n%connections between $Y$ and $Z$.\n%This brings us to our study of the mutual information\n%along Bayesian network paths in general.\n%\n\n\\end{document}\n", "meta": {"hexsha": "8ef84c40d30cf8d5815e8605d6fbcbbe7d7a480c", "size": 17983, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendixA.tex", "max_stars_repo_name": "sbenthall/dissertation", "max_stars_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendixA.tex", "max_issues_repo_name": "sbenthall/dissertation", "max_issues_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2018-04-19T13:07:29.000Z", "max_issues_repo_issues_event_max_datetime": "2018-05-02T21:33:06.000Z", "max_forks_repo_path": "chapters/appendixA.tex", "max_forks_repo_name": "sbenthall/dissertation", "max_forks_repo_head_hexsha": "621bf159f921bac66f8da43e0e022cd2e45982ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.3187022901, "max_line_length": 98, "alphanum_fraction": 0.6643496636, "num_tokens": 6299, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7461389817407017, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5717615506019009}}
{"text": "\\chapter{Exercises}\n\\label{chap:Exercises}\n\n% TODO\n% \n% All exercises should be group with a chapter\n% \n% \u25b6   Recommended\n% M   Matematically oriented\n% HM  Higher matematical education required\n% MP  Matematical problem solving skills required,\n%     double the rating otherwise; difficult is\n%     very personal\n% \n% 00  Immediate\n% 10  Simple\n% 20  Medium\n% 30  Moderately hard\n% 40  Term project\n% 50  Research project\n% \n% \u207a   High risk of undershoot difficulty\n\n\n\\begin{enumerate}[label=\\textbf{\\arabic*}.]\n\n\n\n\\item {[$\\RHD$\\textit{02}]} \\textbf{Saturated subtraction}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void monus(z_t r, z_t a, z_t b);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich calculates $r = a \\dotminus b = \\max \\{ 0,~ a - b \\}$.\n\n\n\n\\item {[$\\RHD$\\textit{10}]} \\textbf{Modular powers of 2}\n\nWhat is the advantage of using \\texttt{zmodpow}\nover \\texttt{zbset} or \\texttt{zlsh} in combination\nwith \\texttt{zmod}?\n\n\n\n\\item {[\\textit{M15}]} \\textbf{Convergence of the Lucas Number ratios}\n\nFind an approximation for\n$\\displaystyle{ \\lim_{n \\to \\infty} \\frac{L_{n + 1}}{L_n}}$,\nwhere $L_n$ is the $n^{\\text{th}}$\nLucas Number \\psecref{sec:Lucas numbers}.\n\n\\( \\displaystyle{\n    L_n \\stackrel{\\text{\\tiny{def}}}{\\text{=}} \\left \\{ \\begin{array}{ll}\n      2 & \\text{if} ~ n = 0 \\\\\n      1 & \\text{if} ~ n = 1 \\\\\n      L_{n - 1} + L_{n + 1} & \\text{otherwise}\n    \\end{array} \\right .\n}\\)\n\n\n\n\\item {[\\textit{MP12}]} \\textbf{Factorisation of factorials}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void factor_fact(z_t n);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich prints the prime factorisation of $n!$\n(the $n^{\\text{th}}$ factorial). The function shall\nbe efficient for all $n$ where all primes $p \\le n$\ncan be found efficiently. You can assume that\n$n \\ge 2$. You should not evaluate $n!$.\n\n\n\n\\item {[\\textit{M20}]} \\textbf{Reverse factorisation of factorials}\n\n{\\small\\textit{You should already have solved\n``Factorisation of factorials'' before you solve\nthis problem.}}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void unfactor_fact(z_t x, z_t *P,\n        unsigned long long int *K, size_t n);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich given the factorsation of $x!$ determines $x$.\nThe factorisation of $x!$ is\n$\\displaystyle{\\prod_{i = 1}^{n} P_i^{K_i}}$, where\n$P_i$ is \\texttt{P[i - 1]} and $K_i$ is \\texttt{K[i - 1]}.\n\n\n\n\\item {[$\\RHD$\\textit{MP17}]} \\textbf{Factorials inverted}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void unfact(z_t x, z_t n);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich given a factorial number $n$, i.e. on the form\n$x! = 1 \\cdot 2 \\cdot 3 \\cdot \\ldots \\cdot x$,\ncalculates $x = n!^{-1}$. You can assume that\n$n$ is a perfect factorial number and that $x \\ge 1$.\nExtra credit if you can detect when the input, $n$,\nis not a factorial number. Such function would of\ncourse return an \\texttt{int} 1 if the input is a\nfactorial and 0 otherwise, or alternatively 0\non success and $-1$ with \\texttt{errno} set to\n\\texttt{EDOM} if the input is not a factorial.\n\n\n\n\\item {[\\textit{05}]} \\textbf{Fast primality test}\n\n$(x + y)^p \\equiv x^p + y^p ~(\\text{Mod}~p)$\nfor all primes $p$ and for a few composites $p$,\nwhich are know as pseudoprimes. 
Use this to implement\na fast primality tester.\n\n\n\n\\item {[\\textit{10}]} \\textbf{Fermat primality test}\n\n$a^{p - 1} \\equiv 1 ~(\\text{Mod}~p) ~\\forall~ 1 < a < p$\nfor all primes $p$ and for a few composites $p$,\nwhich are known as pseudoprimes\\footnote{If $p$ is composite\nbut passes the test for all $a$, $p$ is a Carmichael\nnumber.}. Use this to implement a heuristic primality\ntester. Try to mimic \\texttt{zptest} as much as possible.\nGNU~MP uses $a = 210$, but you don't have to. ($a$ is\ncalled a base.)\n\n\n\n\\item {[\\textit{11}]} \\textbf{Lucas\u2013Lehmer primality test}\n\nThe Lucas\u2013Lehmer primality test can be used to determine\nwhether a Mersenne number $M_n = 2^n - 1$ is a prime (a\nMersenne prime). $M_n$ is a prime if and only if\n$s_{n - 2} \\equiv 0 ~(\\text{Mod}~M_n)$, where\n\n\\( \\displaystyle{\n    s_i = \\left \\{ \\begin{array}{ll}\n      4 & \\text{if} ~ i = 0 \\\\\n      s_{i - 1}^2 - 2 & \\text{otherwise}.\n    \\end{array} \\right .\n}\\)\n\n\\noindent\nThe Lucas\u2013Lehmer primality test requires that $n \\ge 3$,\nhowever $M_2 = 2^2 - 1 = 3$ is a prime. Implement a version\nof the Lucas\u2013Lehmer primality test that takes $n$ as the\ninput. For some more fun, when you are done, you can\nimplement a version that takes $M_n$ as the input.\n\nFor improved performance, instead of using \\texttt{zmod},\nyou can use the recursive function\n%\n\\( \\displaystyle{\n    k \\text{ mod } (2^n - 1) =\n    \\left (\n      (k \\text{ mod } 2^n) + \\lfloor k \\div 2^n \\rfloor\n    \\right ) \\text{ mod } (2^n - 1),\n}\\)\n%\nwhere $k \\mod 2^n$ is efficiently calculated\nusing \\texttt{zand($k$, $2^n - 1$)}. (This optimisation\nis not part of the difficulty rating of this problem.)\n\n\n\n\\item {[\\textit{20}]} \\textbf{Fast primality test with bounded perfection}\n\nImplement a primality test that is both very fast and\nnever returns \\texttt{PROBABLY\\_PRIME} for input less\nthan or equal to a preselected number.\n\n\n\n\\item {[\\textit{30}]} \\textbf{Powers of the golden ratio}\n\nImplement a function that returns $\\varphi^n$ rounded\nto the nearest integer, where $n$ is the input and\n$\\varphi$ is the golden ratio.\n\n\n\n\\item {[\\textit{$\\RHD$05}]} \\textbf{zlshu and zrshu}\n\nWhy does libzahl have\n\n\\vspace{-1em}\n\\begin{alltt}\n   void zlsh(z_t, z_t, size_t);\n   void zrsh(z_t, z_t, size_t);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nrather than\n\n\\vspace{-1em}\n\\begin{alltt}\n   void zlsh(z_t, z_t, z_t);\n   void zrsh(z_t, z_t, z_t);\n   void zlshu(z_t, z_t, size_t);\n   void zrshu(z_t, z_t, size_t);\n\\end{alltt}\n\\vspace{-1em}\n\n\n\n\\item {[\\textit{$\\RHD$M15}]} \\textbf{Modular left-shift}\n\nImplement a function that calculates\n$2^a \\text{ mod } b$, using \\texttt{zmod} and\nonly cheap functions. You can assume $a \\ge 0$,\n $b \\ge 1$. You can also assume that all\nparameters are unique pointers.\n\n\n\n\\item {[\\textit{$\\RHD$08}]} \\textbf{Modular left-shift, extended}\n\n{\\small\\textit{You should already have solved\n``Modular left-shift'' before you solve this\nproblem.}}\n\nExtend the function you wrote in ``Modular left-shift''\nto accept negative $b$ and non-unique pointers.\n\n\n\n\\item {[\\textit{13}]} \\textbf{The totient}\n\nThe totient of $n$ is the number of integers $a$,\n$0 < a < n$, that are relatively prime to $n$.\nImplement Euler's totient function $\\varphi(n)$\nwhich calculates the totient of $n$. 
\n\n\\item {[\\textit{M13}]} \\textbf{The totient from factorisation}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void totient_fact(z_t t, z_t *P,\n                     unsigned long long int *K, size_t n);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich calculates the totient $t = \\varphi(n)$, where\n$n = \\displaystyle{\\prod_{i = 1}^n P_i^{K_i}} > 0$,\nand $P_i = \\texttt{P[i - 1]} \\in \\textbf{P}$,\n$K_i = \\texttt{K[i - 1]} \\ge 1$. All values in \\texttt{P}\nare mutually unique. \\texttt{P} and \\texttt{K} make up\nthe prime factorisation of $n$.\n\nYou can use the following rules:\n\n\\( \\displaystyle{\n  \\begin{array}{ll}\n      \\varphi(1) = 1                      & \\\\\n      \\varphi(p) = p - 1                  & \\text{if } p \\in \\textbf{P} \\\\\n      \\varphi(nm) = \\varphi(n)\\varphi(m)  & \\text{if } \\gcd(n, m) = 1   \\\\\n      n^a\\varphi(n) = \\varphi(n^{a + 1})  &\n  \\end{array}\n}\\)\n\n\n\n\\item {[\\textit{HMP32}]} \\textbf{Modular tetration}\n\nImplement the function\n\n\\vspace{-1em}\n\\begin{alltt}\n   void modtetra(z_t r, z_t b, unsigned long n, z_t m);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich calculates $r = {}^n{}b \\text{ mod } m$, where\n${}^0{}b = 1$, ${}^1{}b = b$, ${}^2{}b = b^b$,\n${}^3{}b = b^{b^b}$, ${}^4{}b = b^{b^{b^b}}$, and so on.\nYou can assume $b > 0$ and $m > 0$. You can also assume\n\\texttt{r}, \\texttt{b}, and \\texttt{m} are mutually\nunique pointers.\n\n\n\n\\item {[\\textit{13}]} \\textbf{Modular generalised power towers}\n\n{\\small\\textit{This problem requires a working\nsolution for ``Modular tetration''.}}\n\nModify your solution for ``Modular tetration'' to\nevaluate any expression of the forms\n$a^b,~a^{b^c},~a^{b^{c^d}},~\\ldots \\text{ mod } m$.\n\n\n\n\\end{enumerate}\n\n\n\n\\chapter{Solutions}\n\\label{chap:Solutions}\n\n\n\\begin{enumerate}[label=\\textbf{\\arabic*}.]\n\n\\item \\textbf{Saturated subtraction}\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid monus(z_t r, z_t a, z_t b)\n\\{\n    zsub(r, a, b);\n    if (zsignum(r) < 0)\n        zsetu(r, 0);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\n\\item \\textbf{Modular powers of 2}\n\n\\texttt{zbset} and \\texttt{zlsh} require $\\Theta(n)$\nmemory to calculate $2^n$. \\texttt{zmodpow} only\nrequires $\\mathcal{O}(\\min \\{n, \\log m\\})$ memory\nto calculate $2^n \\text{ mod } m$. $\\Theta(n)$\nmemory complexity becomes problematic for very\nlarge $n$.\n\n\n\\item \\textbf{Convergence of the Lucas Number ratios}\n\nIt would be a mistake to use bignum, and bigint in particular,\nto solve this problem. Good old mathematics is a much better solution.\n\n$$ \\lim_{n \\to \\infty} \\frac{L_{n + 1}}{L_n} = \\lim_{n \\to \\infty} \\frac{L_{n}}{L_{n - 1}} = \\lim_{n \\to \\infty} \\frac{L_{n - 1}}{L_{n - 2}} $$\n\n$$ \\frac{L_{n}}{L_{n - 1}} = \\frac{L_{n - 1} + L_{n - 2}}{L_{n - 1}} = 1 + \\frac{L_{n - 2}}{L_{n - 1}} $$\n\nLetting $\\varphi$ denote the common limit of these ratios, taking\nthe limit on both sides gives\n\n$$ \\varphi = 1 + \\frac{1}{\\varphi}, $$\n\nwhose positive solution is $\\varphi = \\frac{1 + \\sqrt{5}}{2}$.\nSo the ratio tends toward the golden ratio.\n\n\n\n\\item \\textbf{Factorisation of factorials}\n\nBase your implementation on\n\n\\( \\displaystyle{\n    n! = \\prod_{p \\in \\textbf{P},~ p \\le n} p^{k_p}, ~\\text{where}~\n    k_p = \\sum_{a = 1}^{\\lfloor \\log_p n \\rfloor} \\lfloor np^{-a} \\rfloor.\n}\\)\n\nThere is no need to calculate $\\lfloor \\log_p n \\rfloor$,\nyou will see when $p^a > n$.\n
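For example, for $n = 10$ and $p = 2$:\n$k_2 = \\lfloor 10/2 \\rfloor + \\lfloor 10/4 \\rfloor + \\lfloor 10/8 \\rfloor = 5 + 2 + 1 = 8$,\nso $2^8$ divides $10!$ but $2^9$ does not.\n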
\n\n\\item \\textbf{Reverse factorisation of factorials}\n\n$\\displaystyle{x = \\max_{p ~\\in~ P} ~ p \\cdot f(p, k_p)}$,\nwhere $k_p$ is the power of $p$ in the factorisation\nof $x!$. $f(p, k)$ is defined as:\n\n\\vspace{1em}\n\\hspace{-2.8ex}\n\\begin{minipage}{\\linewidth}\n\\begin{algorithmic}\n    \\STATE $k^\\prime \\gets 0$\n    \\WHILE{$k > 0$}\n      \\STATE $a \\gets 0$\n      \\WHILE{$p^a \\le k$}\n        \\STATE $k \\gets k - p^a$\n        \\STATE $a \\gets a + 1$\n      \\ENDWHILE\n      \\STATE $k^\\prime \\gets k^\\prime + p^{a - 1}$\n    \\ENDWHILE\n    \\RETURN $k^\\prime$\n\\end{algorithmic}\n\\end{minipage}\n\\vspace{1em}\n\n\n\n\\item \\textbf{Factorials inverted}\n\nUse \\texttt{zlsb} to get the power of 2 in the\nprime factorisation of $n$, that is, the number\nof times $n$ is divisible by 2. If we write $n$ in\nthe form $1 \\cdot 2 \\cdot 3 \\cdot \\ldots \\cdot x$,\nevery $2^\\text{nd}$ factor is divisible by 2, every\n$4^\\text{th}$ factor is divisible by $2^2$, and so on.\nFrom calling \\texttt{zlsb} we know how many times\n$n$ is divisible by 2, but not how many of the factors\nare divisible by 2. The latter can be calculated with\nthe following algorithm, where $k$ is the number of\ntimes $n$ is divisible by 2:\n\n\\vspace{1em}\n\\hspace{-2.8ex}\n\\begin{minipage}{\\linewidth}\n\\begin{algorithmic}\n    \\STATE $k^\\prime \\gets 0$\n    \\WHILE{$k > 0$}\n      \\STATE $a \\gets 0$\n      \\WHILE{$2^a \\le k$}\n        \\STATE $k \\gets k - 2^a$\n        \\STATE $a \\gets a + 1$\n      \\ENDWHILE\n      \\STATE $k^\\prime \\gets k^\\prime + 2^{a - 1}$\n    \\ENDWHILE\n    \\RETURN $k^\\prime$\n\\end{algorithmic}\n\\end{minipage}\n\\vspace{1em}\n\n\\noindent\nNote that $2^a$ is efficiently calculated with\n\\texttt{zlsh}, but it is more efficient to use\n\\texttt{zbset}.\n\nNow that we know $k^\\prime$, the number of\nfactors in $1 \\cdot \\ldots \\cdot x$ that are\ndivisible by 2, we have two choices for $x$:\n$k^\\prime$ and $k^\\prime + 1$. To check which, we\ncalculate $(k^\\prime - 1)!!$ (the semifactorial, i.e.\n$1 \\cdot 3 \\cdot 5 \\cdot \\ldots \\cdot (k^\\prime - 1)$)\nna\u00efvely and shift the result $k$ steps to the left.\nThis gives us $k^\\prime!$. If $x! \\neq k^\\prime!$, then\n$x = k^\\prime + 1$ unless $n$ is not a factorial number.\nOf course, if $x! = k^\\prime!$, then $x = k^\\prime$.\n\n\n\n\\item \\textbf{Fast primality test}\n\nIf we select $x = y = 1$ we get\n$2^p \\equiv 2 ~(\\text{Mod}~p)$. This gives us\n\n\\vspace{-1em}\n\\begin{alltt}\nenum zprimality\nptest_fast(z_t p)\n\\{\n    z_t a;\n    int c = zcmpu(p, 2);\n    if (c <= 0)\n        return c ? NONPRIME : PRIME;\n    zinit(a);\n    zsetu(a, 1);\n    zlsh(a, a, p);\n    zmod(a, a, p);\n    c = zcmpu(a, 2);\n    zfree(a);\n    return c ? NONPRIME : PROBABLY_PRIME;\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\n\n\\item \\textbf{Fermat primality test}\n\nBelow is a simple implementation. It can be improved by\njust testing against a fixed base, such as $a = 210$, if\n$t = 0$. 
It could also do an exhaustive test (all $a$\nsuch that $1 < a < p$) if $t < 0$.\n\n\\vspace{-1em}\n\\begin{alltt}\nenum zprimality\nptest_fermat(z_t witness, z_t p, int t)\n\\{\n    enum zprimality rc = PROBABLY_PRIME;\n    z_t a, p1, p3, temp;\n    int c;\n\n    if ((c = zcmpu(p, 2)) <= 0) \\{\n        if (!c)\n            return PRIME;\n        if (witness && witness != p)\n            zset(witness, p);\n        return NONPRIME;\n    \\}\n\n    zinit(a), zinit(p1), zinit(p3), zinit(temp);\n    zsetu(temp, 3), zsub(p3, p, temp);\n    zsetu(temp, 1), zsub(p1, p, temp);\n\n    zsetu(temp, 2);\n    while (t--) \\{\n        zrand(a, DEFAULT_RANDOM, UNIFORM, p3);\n        zadd(a, a, temp);\n        zmodpow(a, a, p1, p);\n        if (zcmpu(a, 1)) \\{\n            if (witness)\n                zswap(witness, a);\n            rc = NONPRIME;\n            break;\n        \\}\n    \\}\n\n    zfree(temp), zfree(p3), zfree(p1), zfree(a);\n    return rc;\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\n\n\\item \\textbf{Lucas\u2013Lehmer primality test}\n\n\\vspace{-1em}\n\\begin{alltt}\nenum zprimality\nptest_llt(z_t n)\n\\{\n    z_t s, M, two;\n    int c;\n    size_t p;\n\n    if ((c = zcmpu(n, 2)) <= 0)\n        return c ? NONPRIME : PRIME;\n\n    if (n->used > 1) \\{\n        \\textcolor{c}{/* \\textrm{An optimised implementation would not need this} */}\n        errno = ENOMEM;\n        return (enum zprimality)(-1);\n    \\}\n\n    zinit(s), zinit(M), zinit(two);\n\n    p = (size_t)(n->chars[0]);\n    zsetu(s, 1), zsetu(M, 0);\n    zbset(M, M, p, 1), zsub(M, M, s);\n    zsetu(s, 4);\n    zsetu(two, 2);\n\n    p -= 2;\n    while (p--) \\{\n        zsqr(s, s);\n        zsub(s, s, two);\n        zmod(s, s, M);\n    \\}\n    c = zzero(s);\n\n    zfree(two), zfree(M), zfree(s);\n    return c ? PRIME : NONPRIME;\n\\}\n\\end{alltt}\n\n$M_n$ is composite if $n$ is composite; therefore,\nif you do not expect prime-only values of $n$, the\nperformance can be improved by using some other\nprimality test (or this same test if $n$ is a\nMersenne number) to first check that $n$ is prime.\n\n\n\n\\item \\textbf{Fast primality test with bounded perfection}\n\nFirst we select a fast primality test. We can use\n$2^p \\equiv 2 ~(\\texttt{Mod}~ p) ~\\forall~ p \\in \\textbf{P}$,\nas described in the solution for the problem\n\\textit{Fast primality test}.\n\nNext, we use this to generate a large list of primes and\npseudoprimes. Use a perfect primality test, such as a\nna\u00efve test or the AKS primality test, to filter out all\nprimes and retain only the pseudoprimes. This is not done\nat runtime, so it does not matter that this is slow, but to\nspeed it up, we can use a probabilistic test such as the\nMiller\u2013Rabin primality test (\\texttt{zptest}) before we\nuse the perfect test.\n\nNow that we have a quite large \u2014 but not humongous \u2014 list\nof pseudoprimes, we can incorporate it into our fast\nprimality test. For any input that passes the test and\nis less than or equal to the largest pseudoprime we found,\nbinary search our list of pseudoprimes for the input.\n\nFor input larger than our limit that passes the test,\nwe can run it through \\texttt{zptest} to reduce the\nnumber of false positives.\n\nAs an alternative solution, instead of comparing against\nknown pseudoprimes, find a minimal set of primes that\nincludes divisors for all known pseudoprimes, and do\ntrial division with these primes when a number passes\nthe test. No pseudoprime needs to have more than one divisor\nincluded in the set, so this set cannot be larger than\nthe set of pseudoprimes.\n
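For example, the smallest base-2 pseudoprime is\n$341 = 11 \\cdot 31$: it satisfies $2^{341} \\equiv 2 ~(\\text{Mod}~341)$\nalthough it is composite, so either 341 itself, or one of 11 and 31,\nmust be covered by the list.\n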
\n\n\\item \\textbf{Powers of the golden ratio}\n\nThis was an information gathering exercise.\nFor $n \\ge 2$, $L_n = [\\varphi^n]$, where\n$L_n$ is the $n^\\text{th}$ Lucas number,\n\n\\( \\displaystyle{\n    L_n \\stackrel{\\text{\\tiny{def}}}{\\text{=}} \\left \\{ \\begin{array}{ll}\n      2 & \\text{if} ~ n = 0 \\\\\n      1 & \\text{if} ~ n = 1 \\\\\n      L_{n - 1} + L_{n - 2} & \\text{otherwise}\n    \\end{array} \\right .\n}\\)\n\n\\noindent\nbut for efficiency and briefness, we will use\n\\texttt{lucas} from \\secref{sec:Lucas numbers}.\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\ngolden_pow(z_t r, z_t n)\n\\{\n    if (zsignum(n) <= 0)\n        zsetu(r, zcmpi(n, -1) >= 0);\n    else if (!zcmpu(n, 1))\n        zsetu(r, 2);\n    else\n        lucas(r, n);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\n\n\\item \\textbf{zlshu and zrshu}\n\nYou are in big trouble, memorywise, if you\nneed to evaluate $2^{2^{64}}$.\n\n\n\n\\item \\textbf{Modular left-shift}\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\nmodlsh(z_t r, z_t a, z_t b)\n\\{\n    z_t t, at;\n    size_t s = zbits(b);\n\n    zinit(t), zinit(at);\n    zset(at, a);\n    zsetu(r, 1);\n    zsetu(t, s);\n\n    while (zcmp(at, t) > 0) \\{\n        zsub(at, at, t);\n        zlsh(r, r, t);\n        zmod(r, r, b);\n        if (zzero(r))\n            break;\n    \\}\n    if (!zzero(at) && !zzero(r)) \\{\n        zlsh(r, r, at);\n        zmod(r, r, b);\n    \\}\n\n    zfree(at), zfree(t);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\nIt is worth noting that this function is\nnot necessarily faster than \\texttt{zmodpow}.\n\n\n\n\\item \\textbf{Modular left-shift, extended}\n\nThe sign of \\texttt{b} shall not affect the\nresult. Use \\texttt{zabs} to create a copy\nof \\texttt{b} with its absolute value.\n\n\n\n\\item \\textbf{The totient}\n\n\\( \\displaystyle{\n    \\varphi(n)\n    = n \\prod_{p \\in \\textbf{P} : p | n} \\left ( 1 - \\frac{1}{p} \\right )\n    = n \\prod_{p \\in \\textbf{P} : p | n} \\left ( \\frac{p - 1}{p} \\right )\n}\\)\n\n\\noindent\nSo, if we set $a = n$ and $b = 1$, then we iterate\nover all integers $p$, $2 \\le p \\le n$. For each $p$\nthat is prime and divides $n$, we set $a \\gets a \\cdot (p - 1)$ and\n$b \\gets b \\cdot p$. After the iteration, $b | a$,\nand $\\varphi(n) = \\frac{a}{b}$. However, if $n < 0$,\nthen $\\varphi(n) = \\varphi(|n|)$.\n
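\nA minimal sketch of this procedure is given below. It assumes\nthat \\texttt{zdiv} performs the (here exact) integer division\nand that \\texttt{zptest} accepts a null witness pointer; error\nhandling is omitted.\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\ntotient(z_t phi, z_t n)\n\\{\n    z_t a, b, p, one, rem;\n\n    zinit(a), zinit(b), zinit(p), zinit(one), zinit(rem);\n    zabs(phi, n);  \\textcolor{c}{/* \\textrm{phi(n) = phi(|n|)} */}\n    zset(a, phi), zsetu(b, 1);\n    zsetu(p, 2), zsetu(one, 1);\n    while (zcmp(p, phi) <= 0) \\{\n        zmod(rem, phi, p);  \\textcolor{c}{/* \\textrm{only primes dividing n} */}\n        if (zzero(rem) && zptest(0, p, 30) != NONPRIME) \\{\n            zsub(p, p, one), zmul(a, a, p);  \\textcolor{c}{/* \\textrm{a = a(p - 1)} */}\n            zadd(p, p, one), zmul(b, b, p);  \\textcolor{c}{/* \\textrm{b = bp} */}\n        \\}\n        zadd(p, p, one);\n    \\}\n    zdiv(phi, a, b);\n    zfree(rem), zfree(one), zfree(p), zfree(b), zfree(a);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n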
\n\n\\item \\textbf{The totient from factorisation}\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\ntotient_fact(z_t t, z_t *P,\n             unsigned long long *K, size_t n)\n\\{\n    z_t a, one;\n    zinit(a), zinit(one);\n    zseti(t, 1);\n    zseti(one, 1);\n    while (n--) \\{\n        zpowu(a, P[n], K[n] - 1);\n        zmul(t, t, a);\n        zsub(a, P[n], one);\n        zmul(t, t, a);\n    \\}\n    zfree(a), zfree(one);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\n\n\\item \\textbf{Modular tetration}\n\nLet \\texttt{totient} be Euler's totient function.\nIt is described in the problem ``The totient''.\n\nWe need two helper functions: \\texttt{tetra(r, b, n)},\nwhich calculates $r = {}^n{}b$, and \\texttt{cmp\\_tetra(a, b, n)},\nwhich compares $a$ to ${}^n{}b$.\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\ntetra(z_t r, z_t b, unsigned long n)\n\\{\n    zsetu(r, 1);\n    while (n--)\n        zpow(r, b, r);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\\vspace{-1em}\n\\begin{alltt}\nint\ncmp_tetra(z_t a, z_t b, unsigned long n)\n\\{\n    z_t B;\n    int cmp;\n\n    if (!n || !zcmpu(b, 1))\n        return zcmpu(a, 1);\n    if (n == 1)\n        return zcmp(a, b);\n    if (zcmp(a, b) <= 0)\n        return -1;\n\n    zinit(B);\n    zsetu(B, 1);\n    while (n--) \\{\n        zpow(B, b, B);\n        if (zcmp(a, B) < 0) \\{\n            zfree(B);\n            return -1;\n        \\}\n    \\}\n    cmp = zcmp(a, B);\n    zfree(B);\n    return cmp;\n\\}\n\\end{alltt}\n\\vspace{-1em}\n\n\\texttt{tetra} can generate unmanageably huge\nnumbers. We will, however, only call \\texttt{tetra}\nwhen this is not the case.\n\n\\vspace{-1em}\n\\begin{alltt}\nvoid\nmodtetra(z_t r, z_t b, unsigned long n, z_t m)\n\\{\n    z_t t, temp;\n\n    if (n <= 1) \\{\n        if (!n)\n            zsetu(r, zcmpu(m, 1));\n        else\n            zmod(r, b, m);\n        return;\n    \\}\n\n    zmod(r, b, m);\n    if (zcmpu(r, 1) <= 0)\n        return;\n\n    zinit(t);\n    zinit(temp);\n\n    totient(t, m);\n    zgcd(temp, b, m);\n\n    if (!zcmpu(temp, 1)) \\{\n        modtetra(temp, b, n - 1, t);\n        zmodpow(r, r, temp, m);\n    \\} else if (cmp_tetra(t, b, n - 1) > 0) \\{\n        tetra(temp, b, n - 1);\n        zmodpow(r, r, temp, m);\n    \\} else \\{\n        modtetra(temp, b, n - 1, t);\n        zmodpow(temp, r, temp, m);\n        zmodpow(r, r, t, m);\n        zmodmul(r, r, temp, m);\n    \\}\n\n    zfree(temp);\n    zfree(t);\n\\}\n\\end{alltt}\n\\vspace{-1em}\n
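\nAs a quick check: for $b = 2$, $n = 3$, and $m = 5$ we have\n${}^3{}2 = 2^{2^2} = 16$, so \\texttt{modtetra} should set\n$r = 16 \\text{ mod } 5 = 1$.\n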
\n\n\\item \\textbf{Modular generalised power towers}\n\nInstead of the signature\n\n\\vspace{-1em}\n\\begin{alltt}\n   void modtetra(z_t r, z_t b, unsigned long n, z_t m);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nyou want to use the signature\n\n\\vspace{-1em}\n\\begin{alltt}\n   void modtower_(z_t r, z_t *a, size_t n, z_t m);\n\\end{alltt}\n\\vspace{-1em}\n\nInstead of using \\texttt{b} (in \\texttt{modtetra}),\nuse \\texttt{*a}. At every recursion, instead of\npassing on \\texttt{a}, pass on \\texttt{a + 1}.\n\nThe function \\texttt{tetra} is modified into\n\n\\vspace{-1em}\n\\begin{alltt}\n   void tower(z_t r, z_t *a, size_t n)\n   \\{\n       zsetu(r, 1);\n       while (n--)\n           zpow(r, a[n], r);\n   \\}\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\n\\texttt{cmp\\_tetra} is changed analogously.\n\nTo avoid problems in the evaluation, define\n\n\\vspace{-1em}\n\\begin{alltt}\n   void modtower(z_t r, z_t *a, size_t n, z_t m);\n\\end{alltt}\n\\vspace{-1em}\n\n\\noindent\nwhich cuts the power short at the first element\nof \\texttt{a} that equals 1. For example, if\n$a = \\{2, 3, 4, 5, 1, 2, 3\\}$, and $n = 7$\n($n = |a|$), then \\texttt{modtower} sets $n = 4$,\nand effectively $a = \\{2, 3, 4, 5\\}$. After this\n\\texttt{modtower} calls \\texttt{modtower\\_}.\n\n\n\n\\end{enumerate}\n", "meta": {"hexsha": "9a18b996383af49370db0b09b34753435a9b112d", "size": 21735, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/exercises.tex", "max_stars_repo_name": "maandree/libzahl", "max_stars_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2016-03-06T10:34:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T10:40:14.000Z", "max_issues_repo_path": "doc/exercises.tex", "max_issues_repo_name": "maandree/libzahl", "max_issues_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2016-05-09T12:34:47.000Z", "max_issues_repo_issues_event_max_datetime": "2017-04-22T13:11:49.000Z", "max_forks_repo_path": "doc/exercises.tex", "max_forks_repo_name": "maandree/libzahl", "max_forks_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-10-14T12:23:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-23T12:10:26.000Z", "avg_line_length": 23.3208154506, "max_line_length": 143, "alphanum_fraction": 0.6172992869, "num_tokens": 7826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7956581097540519, "lm_q1q2_score": 0.5717554560727885}}
{"text": "\\section{Introduction}\n\n%\\paragraph{(Frame the problem)}\n\nToday\u2019s social, behavioral and medical scientists have access to large multidimensional datasets that can be\nused to investigate the complex relationships between social, psychological and biological factors in \nshaping individual and societal outcomes.\nLarge social science datasets, such as the World Values Survey, or the European Values Study (EVS), \nare easily available to researchers and initiatives have been undertaken to link and extend these datasets \ninto a system of linked open data.\nMaking use of the full potential of these data sets requires dealing with the crucial problem of multivariate\nmissing data.\n\nRubin's Multiple Imputation (MI) approach \\citep{rubin:1987} was developed to specifically address the issue of missing \nresponses in surveys.\nMI is a three-step process that entails an imputation, analysis, and pooling phase.\nThe fundamental idea of the imputation phase is to replace each missing data point with $m$ plausible values sampled from \ntheir posterior predictive distributions given the observed data.\nThis procedure leads to the definition of $m$ complete versions of the original data that can be analyzed separately \nusing standard complete-data analysis models (analysis phase).\nFinally, the $m$ estimates of any parameter of interest can be pooled following Rubin's rules \\citep{rubin:1987} \n(pooling phase).\n\nSince Rubin's seminal work, two main strategies have become popular for multiple imputation of multivariate \nmissing data: joint modelling (JM) \\citep[ch. 4]{schafer:1997} and full conditional specification (FCS), also known \nas Multiple Imputation by Chained Equation (MICE) \\citep{vanBuurenEtAl:2006}.\nThe first one relies on defining a multivariate distribution for the missing data, deriving conditional\ndistributions for each missing data pattern, and obtaining samples from it by means of a Markov Chain Monte Carlo \nalgorithm.\nThe second method defines conditional densities for each incomplete variable and performs iterative imputations on \na variable-by-variable basis.\nCompared to the JM approach, FCS imputation can easily accommodate for the different distributions of variables and it \ncan preserve unique features in the data, such as variables interactions or questionnaires skip patterns.\n\nBoth the JM and the FCS approaches rely on the crucial \\emph{missing at random} (MAR) assumption.\nMeeting this assumption requires specifying imputation models for the MI procedure that include all observed\nvariables that are correlates of missingness.\nOmitting from the imputation models an observed predictor related to both the missingness and the imputed variables \ncreates a \\emph{missing not at random} (MNAR) problem.\nMI under MNAR leads to substantial bias in parameter estimation in the analysis and pooling phases, and \ninvalidates hypothesis testing involving the imputed variables.\n\nAs a result, when it comes to defining the set of auxiliary variables for the imputation models within an MI procedure, \nan inclusive strategy (i.e. including numerous auxiliary variables) is generally preferred to restrictive approach \n(i.e. 
including few or no auxiliary variables).\nAn inclusive approach reduces the chances of omitting important correlates of missingness, making the MAR assumption \nmore plausible.\nFurthermore, the inclusive strategy has been shown to reduce estimation bias and increase efficiency \\citep{collinsEtAl:2001},\nas well as to reduce the chances of specifying uncongenial imputation and analysis models \\citep{meng:1994}.\n\nSpecifying the imputation models for an FCS MI procedure remains one of the most challenging steps in dealing\nwith missing values for large multidimensional data sets.\nIn practice, the inclusive strategy faces identification and computational limitations.\nOne serious risk of an inclusive strategy is the occurrence of singular matrices within the imputation algorithm.\nWhen data is high-dimensional (i.e. the number of recorded units $n$ is not substantially larger than the number of recorded \nvariables $p$) or afflicted by high collinearity (i.e. one or more of the variables is equal to a linear \ncombination of the others), the data covariance matrix is singular.\nSingular matrices cannot be inverted, and inversion is an operation that is fundamental in the estimation of the imputation \nmodels in any parametric imputation procedure.\nAs a result, the possible high dimensionality of the observed data matrix, resulting from an inclusive strategy, \ncan prevent a straightforward application of MI algorithms or force researchers to make arbitrary choices \nregarding which variables to include in the imputation models.\n\nRecent developments in high-dimensional data MI techniques represent interesting opportunities to embrace an \ninclusive strategy without facing its downsides.\nSome statisticians and machine learning experts have focused on high-dimensional Single Imputation (SI) methods in \nan effort to improve the accuracy of individual imputations \\citep[e.g.][]{kimEtAl:2005, stekhovenBuhlmann:2011, \nd'ambrosioEtAl:2012}. 
\nHowever, the main task of social scientists is to make inference about a population based on a sample of observed \ndata points, and SI is simply inadequate for this purpose: it does not provide statistically valid \ninference \\citep{rubin:1996}.\nThe concept of statistical validity as defined by \\citet{rubin:1996} is meant to capture two features of \nestimation.\nFirst, the point estimate of a parameter of interest must be unbiased, and second, the actual \nconfidence interval coverage (CIC) of the true parameter value must be equal to or greater than the nominal \ncoverage rate.\nSI strategies might meet the first requirement, but cannot meet the second as they \ndo not take into account the uncertainty regarding the imputed values.\nMI, on the other hand, was designed to provide statistically valid inference and therefore is \nmore suitable for social scientific research.\n\nThe combination of MI with high-dimensional prediction models has been directly tackled by algorithms combining \nfull conditional specification of imputation models with shrinkage methods \\citep{zhaoLong:2016, dengEtAl:2016},\nbut their application has been studied only for biomedical sciences.\nOther researchers have proposed FCS strategies using dimensionality reduction to avoid the obstacles of an inclusive\nstrategy in high-dimensional data imputation.\nHowever, these solutions were either limited to the Joint Modeling approach \\citep{songBelin:2004}, \nor tested exclusively in particularly low-dimensional settings \\citep{howardEtAl:2015}.\nFinally, tree-based FCS strategies also have the potential to overcome the limitations of inclusive strategies.\nThe non-parametric nature of decision trees bypasses the identification issues most parametric methods face\nin high-dimensional contexts.\nHowever, these methods have either been used to deal with different issues, such as imputation in the presence\nof interaction effects \\citep{dooveEtAl:2014}, or have been tested exclusively on biomedical datasets \n\\citep{shahEtAl:2014}.\n%\n\\paragraph{Scope}\nThe inclusion of shrinkage methods, principal component analysis and non-parametric decision trees within the FCS framework \nhas the potential to simplify the decisions social scientists need to make when dealing with missing values.\nThe lack of comparative research on the performance of these methods makes it difficult for social scientists working with \nlarge multidimensional data sets to decide which imputation method to adopt.\nWith this article, we provide a comparison of these state-of-the-art high-dimensional imputation algorithms.\nWe compared the imputation methods based on the statistical validity of the complete-data analyses they allow researchers to perform.\nThe comparison was based on three numerical experiments: two simulation studies and a resampling study using real survey \ndata.\n%\n\\paragraph{Outline}\nIn what follows, we first present the general MICE framework, how the high-dimensional MI methods fit within it, and the\nsingle imputation missing data strategies considered for reference.\nThen we present the three numerical experiments, their design, and the results.\nWe then discuss the implications of the results and provide recommendations for applied researchers.\nFinally, we provide a description of the limitations of the study and possible future research directions.\n", "meta": {"hexsha": "2342e2f7992ce740092f5edd67110a462ba5402f", "size": 8407, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/old/01_section.tex", 
"max_stars_repo_name": "EdoardoCostantini/imputeHD-comp", "max_stars_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manuscript/old/01_section.tex", "max_issues_repo_name": "EdoardoCostantini/imputeHD-comp", "max_issues_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuscript/old/01_section.tex", "max_forks_repo_name": "EdoardoCostantini/imputeHD-comp", "max_forks_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.0625, "max_line_length": 126, "alphanum_fraction": 0.8233614845, "num_tokens": 1661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7185943865443352, "lm_q1q2_score": 0.5717554512777382}}
{"text": "\\problemname{Train Boarding}\n\nPunctual City is well known for the punctuality of its citizens and\nits public transportation system.  It is particularly famous for its\ntrain system.  It is always on time, and never too late (or even too\nearly).  Statistics about train boarding is regularly collected to\nkeep things running smoothly.\n\nA train has cars numbered $1$ to $N$ (from front to back), each of\nlength $L$ meters. Each car has exactly one door for boarding located\nat the center ($L/2$ meters from each end of the car). There are no gaps\nbetween cars.\n\nWhen the train stops at the boarding platform, each passenger waiting\nfor the train walks to the door of the car which is closest to them,\ntaking the higher numbered car in the case of a tie.\n\nGiven the location of the passengers relative to the train, help the\ncity by reporting the longest distance that any passenger has\nto walk and the maximum number of passengers boarding any single car.\n\n\\begin{center}\n  \\includegraphics[width=0.9\\textwidth]{Diagram1.png}\n\\end{center}\n\n\\section*{Input}\n\nThe first line of input contains three integers $N$ ($1 \\leq N \\leq 100$),\nwhich is the number of cars of the train, $L$ ($2 \\leq L \\leq 100$), which\nis the length of each car, and $P$ ($1 \\leq P \\leq 1\\,000$), which is the\nnumber of passengers waiting for the train. It is guaranteed that $L$\nis an even number.\n\nThe next $P$ lines describe the location of the passengers relative to\nthe train. Each line contains a single integer $x$\n($0 \\leq x \\leq 10\\,000$), which is the distance the passenger is\nbehind the front-end of the train.\n\n\\section*{Output}\n\nDisplay the longest distance that any passenger has to walk on one\nline.  On the next line, display the maximum number of passengers boarding\nany single car.\n", "meta": {"hexsha": "10c0f7f9a04f913303ec4aac505ef6bbb26d2ee2", "size": 1770, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/trainboarding/problem_statement/problem.tex", "max_stars_repo_name": "icpc/na-rocky-mountain-2020-public", "max_stars_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-11T21:49:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-19T22:31:57.000Z", "max_issues_repo_path": "problems/trainboarding/problem_statement/problem.tex", "max_issues_repo_name": "icpc/na-rocky-mountain-2020-public", "max_issues_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/trainboarding/problem_statement/problem.tex", "max_forks_repo_name": "icpc/na-rocky-mountain-2020-public", "max_forks_repo_head_hexsha": "d77cd0dfd9bd707f34497977251c4cc583647fef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-03-11T18:15:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-24T00:15:32.000Z", "avg_line_length": 40.2272727273, "max_line_length": 74, "alphanum_fraction": 0.7610169492, "num_tokens": 449, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178138, "lm_q2_score": 0.7956580903722561, "lm_q1q2_score": 0.571755432555038}}
{"text": "\\section{Introduction}\nWhen studying intelligence, insects, reptiles, and humans have been found to possess neurons with the capacity to hold integers, real numbers, and perform arithmetic operations \\cite{nieder-neuronal-number,rugani-arithmetic-chicks,gallistel-numbers-in-brain}.\nIn our quest to mimic intelligence we have put much faith in neural networks, which in turn has provided unparalleled and often superhuman performance in tasks requiring high cognitive abilities \\cite{natureGo,bert,openai-learning-dexterous}.\nHowever, when using neural networks to learn simple arithmetic problems, such as counting, multiplication, or comparison they systematically fail to extrapolate onto unseen ranges \\cite{stillNotSystematic,suzgun2019evaluating,trask-nalu}.\nThe absence of inductive bias makes it difficult for neural networks to extrapolate well on arithmetic tasks as they lack the underlying logic to represent the required operations.\n\nWe would like to achieve a neural network component that can take an arbitrary hidden input, learn to select the appropriate elements, and apply the desired arithmetic operation.\nA recent attempt to achieve this goal is the Neural Arithmetic Logic Unit (NALU), by \\citet{trask-nalu}.\n\nThe NALU consists of two sub-units: the $\\text{NAC}_{+}$ for addition/subtraction and the $\\text{NAC}_{\\bullet}$ for multiplication/division.\nThe sub-units are softly gated using a sigmoid function in order to exclusively select one of the sub-units.\nHowever, we find that the soft gating mechanism and the $\\text{NAC}_{\\bullet}$ are fragile and hard to learn.\n\nIn this paper, we analyze and improve upon the $\\text{NAC}_{+}$ and $\\text{NAC}_{\\bullet}$ with respect to addition, subtraction, and multiplication.\nOur proposed improvements, namely the Neural Addition Unit (NAU) and Neural Multiplication Unit (NMU), are more theoretically founded and improve performance regarding stability, speed of convergence, and interpretability of results.\nMost importantly, the NMU can support a large hidden input-size.\n\nThe improvements, based on a theoretical analysis of the NALU and its components, are achieved by a simplification of the parameter matrix for a better gradient signal, a sparsity regularizer, and a new multiplication unit that can be optimally initialized and supports both negative and small numbers.\nThe NMU do not support division.\nHowever, we find that the $\\text{NAC}_{\\bullet}$ in practice also only supports multiplication and cannot learn division (for theoretical findings on why division is hard to learn, see section \\ref{sssec:nac-mul}).\n\nTo analyze the impact of each improvement we introduce several variants of the $\\text{NAC}_{\\bullet}$.\nWe find that, allowing division makes optimization for multiplication harder, linear and regularized weights improve convergence, and that the NMU style of multiplication is critical when increasing the hidden input size.\n\nFurthermore, we improve upon existing benchmarks in \\citet{trask-nalu} by expanding the ``simple function task'', using a multiplicative variant of ``MNIST Counting and Arithmetic Tasks'', and use an improved success-criterion \\citet{maep-madsen-johansen-2019}.\nA success-criterion is important because the arithmetic layers are solving a logical problem.\nWe propose the MNIST multiplication variant as we want to test the NMU's and $\\text{NAC}_{\\bullet}$'s ability to learn from real data.\n%Hence, the solution found is either correct or wrong.\n%To test this we propose 
using a success-criteria to evaluate model performance.\n%A success-criterion enables measuring sensitivity to the initialization seed as well as the number of iterations until convergence.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.7]{graphics/nmu.pdf}\n\\caption{Visualization of the NMU for a single output scalar $z_1$; this construction repeats for every element in the output vector $\\mathbf{z}$.}\n\\end{figure}\n\n\\subsection{Learning a 10 parameter function}\nConsider the static function $t = (x_1 + x_2) \\cdot (x_1 + x_2 + x_3 + x_4)$ for $x \\in \\mathbb{R}^4$. To illustrate the ability of $\\mathrm{NAC}_{\\bullet}$, NALU, and our proposed NMU, we conduct 100 experiments for each model, where we attempt to fit this function. Table \\ref{tab:very-simple-function-results} shows that the NMU has a higher success rate and converges faster.\n%When inspecting the $6\\%$ that did not converge, we found the issue to be underflow when $w = 0$ in the multiplication layer.\n\\input{results/simple_mul.tex}\n", "meta": {"hexsha": "855b833ec71ea4a0a3d55fa582fffdcc8b3135c9", "size": 4495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/introduction.tex", "max_stars_repo_name": "wlm2019/Neural-Arithmetic-Units", "max_stars_repo_head_hexsha": "f9de9d004bb2dc2ee28577cd1760d0a00c185836", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/sections/introduction.tex", "max_issues_repo_name": "wlm2019/Neural-Arithmetic-Units", "max_issues_repo_head_hexsha": "f9de9d004bb2dc2ee28577cd1760d0a00c185836", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/introduction.tex", "max_forks_repo_name": "wlm2019/Neural-Arithmetic-Units", "max_forks_repo_head_hexsha": "f9de9d004bb2dc2ee28577cd1760d0a00c185836", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 107.0238095238, "max_line_length": 374, "alphanum_fraction": 0.7988876529, "num_tokens": 1003, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256472515683, "lm_q2_score": 0.6791786926816161, "lm_q1q2_score": 0.5717500425661755}}
{"text": "\\documentclass[a4paper]{article}\n\n%% Language and font encodings\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n\\usepackage[T1]{fontenc}\n\n%% Sets page size and margins\n\\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}\n\n%% Useful packages\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{booktabs}\n\n\\usepackage{graphicx}\n\\usepackage{tikz}\n\\usetikzlibrary{arrows.meta}\n\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage[colorlinks=true, allcolors=blue]{hyperref}\n\n\\usepackage{color}\n\\usepackage{url}\n\n%% display solutions or not\n\\newif\\ifsol\n\\soltrue % comment out to hide solutions\n\n\\title{Section 8: HMMs}\n\\author{CS 182 - Artificial Intelligence}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Probability Review}\n\nA {\\bf{random variable}} $X$ is a variable which could take on various values with specified probabilities (its distribution). Two random variables $X$ and $Y$ are independent if $p(x, y) = p(x)p(y)$. Then, the probability of an event that depends on both $X$ and $Y$ (for example, the event that $X == 1$ and $Y == 2$) will be given by a value in the {\\bf{joint probability distribution}} $p(x, y)$. \\\\\n\n\\noindent The {\\bf{marginal probability distribution}} is the probability distribution of a subset of variables from a {\\bf{joint probability distribution}}. For example, the marginal probability distribution $p(y)$ can be found by summing across all values of $x$ in $p(x, y)$:\n\\begin{align*}\np(x) = \\sum_{y \\in \\mathcal{Y}} p(x, y) \\\\\np(x) = \\int_{\\mathcal{Y}} p(x,y) dy\n\\end{align*}\n\n\\noindent When we know that an event $B$ has happened, that could influence the probability of another event $A$. The new probability of $A$ given $B$ is the {\\bf{conditional probability}} $P(A|B)$. If $B$ changes the distribution of a random variable $X$, we write the new random variable as $X|B$, and the new distribution $P(X|B)$ is the conditional distribution:\n$$ P(A | B) = \\frac{P(A \\;\\; \\text{and} \\;\\; B)}{P(B)}$$\n\n\\noindent Note that this has some direct implications: for instance, if $A$ is the event $X == x$ and $B$ is the event $Y == y$, we have an expression relating distributions (sometimes called the {\\bf{product rule}}):\n$$ p(x | y) = \\frac{p(x, y)}{p(y)}$$\nwhich implies that\n$$ p(x, y) = p(x | y) p(y) = p(y | x) p(x)$$\n\n\\noindent Finally, you may have heard of {\\bf{Bayes' Theorem}} (also known as {\\bf{Bayes' Rule}} or {\\bf{Bayes' Law}}), a very prominent statement that relates conditional probabilities between events:\n$$ p(x | y) = \\frac{p(y | x) p(x)}{p(y)} $$ \\\\\nIn machine learning, we are often looking for parameters $\\theta$ based on the distribution of data $y$. Then, we interpret $p(\\theta)$ as the {\\bf{prior}} (the distribution of $\\theta$ without knowing about $y$), $p(y)$ as the {\\bf{evidence}} (the overall probability of this data without considering our parameters), $p( y | \\theta)$ as the {\\bf{likelihood}} (how likely are we to collect this data given the parameters?) 
\n\\section*{Markov Models}\n\\noindent A Markov model shows the probabilistic transitions of a \\textbf{state} $X$. The system must obey the {\\bf{Markov assumption}} that the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state:\n\\[\nP(X_T|X_1, \\ldots, X_{T-1}) = P(X_T|X_{T-1}).\n\\]\nAnd therefore all we need to fully define the model is:\n\\begin{itemize}\n    \\item An initial distribution $P(X_1)$\n    \\item Transition probabilities $P(X_{t}|X_{t-1})$\n\\end{itemize}\nFurthermore, we can express the probability of a sequence of states as\n\\[\nP(X_1, \\ldots, X_T) = P(X_1) \\prod_{t=2}^T P(X_t|X_{t-1})\n\\]\nNote that as $t$ becomes large, the probabilities $P(X_t)$ will converge to a \\textbf{stationary distribution} regardless of the initial distribution $P(X_1)$ (provided the chain is ergodic). We can think of this as uncertainty growing over time. \\\\ \n\n\\noindent We can think of a Markov model as an MDP in which the choice of action has been folded into the transition probabilities. The statement that $P(X_T|X_1, \\ldots, X_{T-1}) = P(X_T|X_{T-1})$ is the same as saying that all transitions depend only on the current state, with the randomness over actions absorbed into $T(s, a, s')$. Again we have a starting state and potentially a terminal state, or absorbing state. We could also have a Markov model give off rewards at states, although general Markov models do not need to have rewards.\n\n\\section*{Hidden Markov Models}\n\nIn Hidden Markov Models (HMMs), you can no longer directly observe the state itself. You can only observe (noisy) emissions $e$ from those states. Unlike in RL, where we were no longer able to know the transition probabilities, here we don't know the actual value of the state at each time. Thus, an HMM is defined by the following:\n\\begin{enumerate}\n\\item An initial distribution $P(X_1)$\n\\item Transition probabilities $P(X_{t}|X_{t-1})$\n\\item Emission probabilities $P(E_t|X_t)$\n\\end{enumerate}\nWe can use those to compute the joint distribution\n\\[\nP(X_1, E_1, \\ldots, X_T, E_T) = P(X_1) P(E_1|X_1) \\prod_{t=2}^T P(X_{t}|X_{t-1}) P(E_t|X_t)\n\\]\nFinally, we can define the \\textbf{belief} at time $t$ as the probability of being in a specific state given all of the evidence seen so far:\n\\[\nB_t(x) = P(X_t=x|e_{1:t}).\n\\]\nWe can therefore solve for our belief of the state throughout time (\\textbf{filtering}) through the \\textbf{forward algorithm} by updating for (1) the passage of time and (2) receiving/observing new evidence:\n\\begin{enumerate}\n\\item $P(x_t|e_{1:t-1}) = \\sum_{x_{t-1}} P(x_{t-1}|e_{1:t-1}) \\cdot P(x_t|x_{t-1})$\n\\item $P(x_t|e_{1:t}) \\propto P(x_t|e_{1:t-1}) \\cdot P(e_t|x_t)$\n\\end{enumerate}\nThese equations can be derived by applying Bayes rule, the chain rule, and conditional independence to the definition of the belief at each time step under the HMM model. 
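As a quick illustration, consider two states $\\{r, s\\}$ with belief $B_{t-1}(r) = 0.8$, transition probabilities $P(r|r) = 0.7$ and $P(r|s) = 0.3$, and an observation $e$ with $P(e|r) = 0.9$ and $P(e|s) = 0.2$. The time update gives $P(r) = 0.8 \\cdot 0.7 + 0.2 \\cdot 0.3 = 0.62$; observing $e$ then gives $B_t(r) \\propto 0.62 \\cdot 0.9 = 0.558$ and $B_t(s) \\propto 0.38 \\cdot 0.2 = 0.076$, so after normalizing, $B_t(r) = 0.558/0.634 \\approx 0.88$.\n\n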
There is also an MDP analogue of the HMM, called the POMDP; these are some of the hardest problems to solve in robotics due to the large state spaces and uncertainty. \\textbf{Particle Filters} are designed to overcome the issue of needing to explicitly write out the belief distribution across all states by instead tracking a series of particles to compute a discrete, sample-based approximation of the belief distribution. Particles are passed through a transition model for the elapse-time step and then weighted based on the emissions seen in the observe step. Particles are then resampled according to this distribution to normalize before the process is repeated.\n\n\\clearpage\n\\section*{Practice Problems}\n\n\\begin{enumerate}\n    \\item Suppose you have a Markov Process defined by the following table:\n        \\begin{figure}[h]\n            \\centering\n            \\includegraphics[width=.5\\textwidth]{figs/sunny-foggy}\n        \\end{figure}\n        \n    \\begin{enumerate}\n        \\item Transform the table into a finite state automaton representation.\\\\\n        \\ifsol\n            \\begin{figure}[h]\n                \\centering\n                \\includegraphics[width=.45\\textwidth]{figs/1-sol}\n            \\end{figure}\n        \\else\n            \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n        \\fi\n        \\item Given that today is \\emph{sunny}, what is the probability that the next two days are first \\emph{sunny} and then \\emph{rainy}?\n            \\ifsol\n                \\textcolor{blue}{\\begin{align*}\n                P(X_2 = s, X_3 = r | X_1 = s) &= P(X_2=s|X_1=s) * P(X_3=r|X_2=s) \\\\\n                                              &= 0.8 * 0.05 \\\\\n                                              &= 0.04\n                \\end{align*}}\n            \\else\n                \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n            \\fi\n        \\item Given that today is foggy, what is the probability that it rains in two days? \\\\\n        \\ifsol\n            \\textcolor{blue}{ \n            All three possibilities for $X_2$ can lead from $X_1=f$ to $X_3=r$. Therefore, we sum:\n            \\begin{align*}\n            P(X_3=r|X_1=f) &=                     P(X_2=f | X_1=f) * P(X_3=r|X_2=f) +\\\\\n                           &\\mathrel{\\phantom{=}} P(X_2=r | X_1=f) * P(X_3=r|X_2=r) +\\\\\n                           &\\mathrel{\\phantom{=}} P(X_2=s | X_1=f) * P(X_3=r|X_2=s) \\\\\n                           &= 0.5 * 0.3 + 0.3 * 0.6 + 0.2 * 0.05 \\\\\n                           &= 0.34\n            \\end{align*}}\n        \\else\n            \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n        \\fi\n        \n        \\item Suppose that today is foggy. What is the probability of each possible weather condition in two days? 
\\\\\n        \\ifsol\n            \\textcolor{blue}{\\begin{table}[h]\n            \\centering\n            \\begin{tabular}{@{}lllll@{}}\n            \\toprule\n              & $X_1$ & $X_2$ & $X_3$                           &  \\\\ \\midrule\n            s & 0.0   & 0.2   & 0.2*0.80+0.3*0.2+0.5*0.2 = 0.32  &  \\\\\n            r & 0.0   & 0.3   & 0.2*0.05+0.3*0.6+0.5*0.3 = 0.34 &  \\\\\n            f & 1.0   & 0.5   & 0.2*0.15+0.3*0.2+0.5*0.5 = 0.34 &  \\\\ \\bottomrule\n            \\end{tabular}\n            \\end{table}}\n        \\else\n            \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n        \\fi\n    \\end{enumerate}\n    \n    \\clearpage\n    \\item Imagine you are a computer scientist and therefore do not leave your basement, ever. Now, the only evidence for the weather is whether the person who brings your daily meal carries an umbrella. You know the following emission probabilities $E$:\\\\\n    \\begin{figure}[h]\n        \\centering\n        \\includegraphics[width=.4\\textwidth]{figs/umbrella}\n    \\end{figure}\n\n    Yesterday, you were forced to go outside because of a fire drill -- the worst -- and saw that it was sunny. Today, your caregiver has an umbrella on them. What is the probability that it is raining today? \n    \n    \\ifsol\n        \\textcolor{blue}{ We need to marginalize over all the probabilities of seeing an umbrella: \\begin{align*}\n        \\frac{p(X_2=r|X_1=s)p(E_2=u|X_2=r)}{p(E_2=u|X_1=s)} &= \\frac{p(X_2=r|X_1=s)p(E_2=u|X_2=r)}{\\sum_{i \\in [s,r,f]} p(E_2=u|X_2=i) p(X_2=i|X_1=s)} \\\\\n            &= \\frac{0.05 * 0.8}{0.8*0.1 + 0.05*0.8 + 0.15*0.3} \\\\\n            &\\approx 0.2424\n        \\end{align*}}\n    \\else\n        \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n    \\fi\n    \n    \\item Now consider the following conditional probabilities. 
\n        \\begin{figure}[h]\n        \\centering\n        \\includegraphics[width=0.8\\textwidth]{figs/hmmtask2.png}\n        \\end{figure}\n        \n        \\begin{enumerate}\n            \\item Draw the full HMM model for this set of probabilities:\\\\\n            \\ifsol\n                \\begin{center}\n                \\begin{tikzpicture}[scale=0.15]\n                \\tikzstyle{every node}+=[inner sep=0pt]\n                \\draw [black] (37.7,-7) circle (3);\n                \\draw (37.7,-7) node {Start};\n                \\draw [black] (26.6,-17.1) circle (3);\n                \\draw (26.6,-17.1) node {$0$};\n                \\draw [black] (48.8,-17.1) circle (3);\n                \\draw (48.8,-17.1) node {$1$};\n                \\draw [black] (29.6,-34) circle (3);\n                \\draw (29.6,-34) node {$A$};\n                \\draw [black] (46.3,-34) circle (3);\n                \\draw (46.3,-34) node {$B$};\n                \\draw [black] (40.137,-5.257) arc (65:22:17);\n                \\fill [black] (47.8,-13.41) -- (48.4,-14.2) -- (49.0,-13.41);\n                \\draw (47.137,-9.257) node [right] {$0.7$};\n                \\draw [black] (35.263,-5.257) arc (-65:-22:-17);\n                \\fill [black] (26.4,-13.41) -- (26.8,-14.2) -- (27.6,-13.41);\n                \\draw (28.263,-9.257) node [left] {$0.3$};\n                \\draw [black] (29.037,-15.357) arc (120.52398:59.47602:17.056);\n                \\draw (37.7,-12.49) node [above] {$0.6$};\n                \\draw [black] (46.408,-18.904) arc (-58.17768:-121.82232:16.515);\n                \\fill [black] (28.99,-18.9) -- (29.41,-19.75) -- (29.94,-18.9);\n                \\draw (37.7,-21.89) node [below] {$0.8$};\n                \\draw [black] (50.797,-14.877) arc (165.80141:-122.19859:2.25);\n                \\fill [black] (46.36,-15.36) -- (45.93,-14.52) -- (45.42,-15.38);\n                \\draw (55.81,-13.71) node [right] {$0.2$};\n                \\fill [black] (51.78,-17.33) -- (52.43,-18.01) -- (52.68,-17.04);\n                \\draw [black] (23.617,-17.277) arc (301.12921:13.12921:2.25);\n                \\draw (19.64,-13.54) node [left] {$0.4$};\n                \\fill [black] (24.64,-14.84) -- (24.66,-13.9) -- (23.8,-14.41);\n                \\draw [black] (26.72,-33.212) arc (-114.93875:-224.92927:8.916);\n                \\fill [black] (26.72,-33.21) -- (26.21,-32.42) -- (25.78,-33.33);\n                \\draw (20.98,-26.94) node [left] {$0.9$};\n                \\draw [black] (51.159,-18.931) arc (42.7293:-59.55867:9.087);\n                \\fill [black] (49.09,-32.93) -- (50.03,-32.96) -- (49.52,-32.09);\n                \\draw (54.17,-26.61) node [right] {$0.5$};\n                \\draw [black] (43.463,-33.029) arc (-111.62882:-149.62165:31.312);\n                \\fill [black] (43.46,-33.03) -- (42.9,-32.27) -- (42.53,-33.2);\n                \\draw (29.86,-25.18) node [below] {$0.1$};\n                \\draw [black] (47.485,-19.795) arc (-28.90492:-68.38608:29.67);\n                \\fill [black] (32.44,-33.04) -- (33.37,-33.21) -- (33,-32.28);\n                \\draw (45.88,-25.18) node [below] {$0.5$};\n                \\end{tikzpicture}\n                \\end{center}\n            \\else\n                \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip \\bigskip\n            \\fi\n            \\item Suppose that $O_1 = A$ and $O_2 = B$ is observed. 
Use the Forward algorithm to compute the probability distribution $p(X_2 \\mid O_1 = A, O_2 = B) = B_2(X_2)$.\\\\\n            \\ifsol\n                \\begin{tabular}{lll}\n                \\toprule\n                $X_1$   & $Pr(X_1, O_1 = A)$ & Normalized \\\\ \\midrule\n                0       & 0.3 * 0.9 = 0.27 &  27/62 \\\\\n                1       & 0.7 * 0.5 = 0.35 &  35/62 \\\\\\bottomrule\n                $X_2$   & $Pr(X_2, O_1 = A, O_2 = B)$ & Normalized \\\\ \\midrule\n                0       & 0.1 * [0.4 * 27/62 + 0.8 * 35/62] = 97/1550 &  97/387 $\\approx$ 0.25 \\\\\n                1       & 0.5 * [0.6 * 27/62 + 0.2 * 35/62] = 29/155 &  290/387 $\\approx$ 0.75 \\\\ \\bottomrule\n                \\end{tabular}\n                % \\begin{figure}[h]\n                % \\centering\n                % \\includegraphics[width=0.5\\textwidth]{figs/sol1}\n                % \\end{figure}\n            \\else\n                \\bigskip \\bigskip \\bigskip\n            \\fi\n        \\end{enumerate}\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "be30519611f19582d261dfe116f3b63a222cc42e", "size": 15235, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Section_09/tex/main.tex", "max_stars_repo_name": "Harvard-CS182-F18/courseware", "max_stars_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Section_09/tex/main.tex", "max_issues_repo_name": "Harvard-CS182-F18/courseware", "max_issues_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Section_09/tex/main.tex", "max_forks_repo_name": "Harvard-CS182-F18/courseware", "max_forks_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.4905660377, "max_line_length": 860, "alphanum_fraction": 0.5906137184, "num_tokens": 4879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8376199673867852, "lm_q1q2_score": 0.5717373891575271}}
{"text": "\\documentclass{scrartcl}\n\\usepackage{natbib}\n\\usepackage{amsmath, amsfonts, amssymb, bbm, graphicx}\n\\numberwithin{equation}{section}\n\\bibliographystyle{unsrtnat}\n\n%opening\n\\title{Comparing Hamiltonian Monte Carlo and Elliptical Slice Sampling for constrained Gaussian distributions}\n\\subtitle{732A76 Research Project Report}\n\\author{Bayu Brahmantio (baybr878)}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Background}\nHigh-dimensional multivariate gaussian distribution is used in various models and applications. In some cases, we need to generate from a certain distribution which applies constraints to a multivariate Gaussian distribution (\\cite{gelfand1992GS} and \\cite{RodrguezYam2004EfficientGS}). Sampling from this distribution is still a challenging issue, particularly because it is not straightforward to compute the normalizing constant for the density function.  \n\nThe gibbs sampler has proven to be a suitable choices to sample from truncated multivariate Gaussian distributions (\\cite{gelfand1992GS}). Recently, more sophisticated methods have been developed to generate samples from truncated multivariate Gaussian distributions. In this research project, two methods, namely Exact Hamiltonian Monte Carlo (\\cite{pakman2013exact}) and Analytic Elliptical Slice Sampling (\\cite{Fagan2016ESSwEP}), will be compared. \n\n\n\\section{Definitions}\n\\subsection{Truncated Multivariate Gaussian Distribution}\nThe truncated multivariate Gaussian distribution is a probability distribution obtained from a multivariate Gaussian random variable by bounding it under some linear (or quadratic) constraints.   \n\nLet $\\textbf{w}$ be a $d$-dimensional Gaussian random variable with mean vector $\\boldsymbol{\\mu}$ and covariance matrix $\\boldsymbol{\\Sigma}$. The corresponding truncated multivariate Gaussian distribution can be defined as\n\\begin{equation}\\label{eq:tmg}\n\tp(\\boldsymbol{\\textbf{x}}) = \\frac{\\exp\\{-\\frac{1}{2}(\\textbf{x}-\\boldsymbol{\\mu})^{\\intercal} \\boldsymbol{\\Sigma}^{-1}(\\textbf{x}-\\boldsymbol{\\mu})\\}}{\\int_{\\textbf{F}\\textbf{x} + \\textbf{g} \\geq 0} \\exp\\{-\\frac{1}{2}(\\textbf{x}-\\boldsymbol{\\mu})^{\\intercal} \\boldsymbol{\\Sigma}^{-1}(\\textbf{x}-\\boldsymbol{\\mu})\\} d\\textbf{x}}\\mathbbm{1}(\\textbf{F}\\textbf{x} + \\textbf{g} \\geq 0)\n\\end{equation}\nwhere $\\textbf{x}$ is a $d$-dimensional truncated Gaussian random variable, $\\mathbbm{1}$ is an indicator function, and $\\textbf{F}$ is an $m \\times d$ matrix, which, together with the $m \\times 1$ vector of $\\textbf{g}$, defines all $m$ constraints of $p(\\boldsymbol{\\textbf{x}})$.  We denote this as $\\textbf{x} \\sim TN(\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma},\\textbf{F},\\textbf{g})$.   \n\nWe can rewrite $p(\\boldsymbol{\\textbf{x}})$ as\n\\begin{equation}\\label{eq:tmg2}\n\tp(\\textbf{x}) = \\frac1Z\\exp\\bigg\\{-\\frac{1}{2}\\textbf{x}^{\\intercal} \\boldsymbol{\\Lambda} \\textbf{x} + \\boldsymbol{\\nu}^{\\intercal}\\textbf{x}\\bigg\\}\\mathbbm{1}(\\textbf{F}\\textbf{x} + \\textbf{g} \\geq 0)\n\\end{equation}\nwhere $\\boldsymbol{\\Lambda} = \\boldsymbol{\\Sigma}^{-1}$, $\\boldsymbol{\\nu} = \\boldsymbol{\\Sigma}^{-1}\\boldsymbol{\\mu}$, and $Z$ is the normalizing constant. 
Through a linear change of variables, \\eqref{eq:tmg2} can be transformed into\n\\begin{equation}\\label{eq:tmg3}\n\tp(\\textbf{x}) = \\frac1Z\\exp\\bigg\\{-\\frac{1}{2}\\textbf{x}^{\\intercal}\\textbf{x}\\bigg\\}\\mathbbm{1}(\\textbf{F}^*\\textbf{x} + \\textbf{g}^* \\geq 0)\n\\end{equation}\nsuch that $\\textbf{x} \\sim TN(\\textbf{0}, \\textbf{I}_d,\\textbf{F}^*,\\textbf{g}^*)$, for some values of $\\textbf{F}^*$ and $\\textbf{g}^*$.\n\n\\subsection{Exact Hamiltonian Monte Carlo for Truncated Multivariate Gaussians}   \n\\subsubsection{Hamiltonian Monte Carlo}\nHamiltonian Monte Carlo (HMC) is a Markov Chain Monte Carlo method that makes use of an ideal system that follows Hamiltonian dynamics. A particle in this system can be described by a Hamiltonian energy function\n\\begin{equation}\\label{eq:hml}\n\tH(\\textbf{x}, \\textbf{s}) = U(\\textbf{x}) + K(\\textbf{s})\n\\end{equation}\nwhere $U(\\textbf{x})$ is the potential energy term as a function of the particle's position ($\\textbf{x}$) and $K(\\textbf{s})$ is the kinetic energy term as a function of the particle's momentum ($\\textbf{s}$). Both $\\textbf{x}$ and $\\textbf{s}$ are $d$-dimensional.   \nThe change of position and momentum over time $t$ can be described by Hamilton's equations\n\\begin{equation}\\label{eq:heqs}\n\\begin{split}\n\t\\frac{\\partial x_i}{\\partial t} & = \\frac{\\partial H}{\\partial s_i} \\\\\n\t\\frac{\\partial s_i}{\\partial t} & = -\\frac{\\partial H}{\\partial x_i}, \\qquad i=1,...,d.\\\\\n\\end{split}\n\\end{equation}\n\nTo sample from a distribution using HMC, we can relate the target distribution to the current energy state of the particle through the canonical distribution:\n\\begin{equation}\\label{eq:can}\n\tp(\\textbf{x}) \\propto \\exp\\{-E(\\textbf{x})\\}\n\\end{equation}\nwhere the target distribution, $p(\\textbf{x})$, depends on the value of the energy function $E(\\textbf{x})$. In a Hamiltonian system, we have $H(\\textbf{x}, \\textbf{s})$ as our energy function, which results in the canonical distribution:\n\\begin{equation}\\label{eq:can2}\n\\begin{split}\n\tp(\\textbf{x}, \\textbf{s}) &\\propto \\exp\\{-H(\\textbf{x}, \\textbf{s})\\} \\\\\n\t&\\propto \\exp\\{-U(\\textbf{x})\\} \\exp\\{-K(\\textbf{s})\\} \\\\\n\t&\\propto p(\\textbf{x})p(\\textbf{s}).\n\\end{split}\n\\end{equation}\n\nHence, $\\textbf{x}$ and $\\textbf{s}$ are independent. To sample from the target distribution $p(\\textbf{x})$, we can sample from the joint distribution $p(\\textbf{x}, \\textbf{s})$ and ignore the variable $\\textbf{s}$.  \n\n\\subsubsection{Exact HMC for Truncated Multivariate Gaussians}\nExact Hamiltonian Monte Carlo (HMC) for Truncated Multivariate Gaussians (TMG) (\\cite{pakman2013exact}) considers the exact paths of particle trajectories in a Hamiltonian system whose potential energy is the negative logarithm of the TMG density. Suppose our target distribution $p(\\textbf{x})$ is a truncated multivariate Gaussian distribution as in ($\\ref{eq:tmg3}$). We can set our momenta to be normally distributed, that is $\\textbf{s} \\sim \\mathcal{N}(\\textbf{0}, \\textbf{I}_d)$. Therefore, the Hamiltonian system can be described as: \n\\begin{equation}\\label{eq:hsys}\n\tH = U(\\textbf{x}) + K(\\textbf{s}) = \\frac{1}{2}\\textbf{x}^{\\intercal}\\textbf{x} + \\frac{1}{2}\\textbf{s}^{\\intercal}\\textbf{s}\n\\end{equation}\nsubject to:  \n\\begin{equation}\\label{eq:st}\n\t\\textbf{F}\\textbf{x} + \\textbf{g} \\geq 0\n\\end{equation}\nfor some values of $\\textbf{F}$ and $\\textbf{g}$.   
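\n\nFor concreteness, the linear change of variables behind \\eqref{eq:tmg3} can be made explicit (a standard whitening argument; the Cholesky notation $\\textbf{L}$ is ours, not the papers'). Writing $\\boldsymbol{\\Sigma} = \\textbf{L}\\textbf{L}^{\\intercal}$ and $\\textbf{x} = \\boldsymbol{\\mu} + \\textbf{L}\\textbf{z}$ with $\\textbf{z} \\sim \\mathcal{N}(\\textbf{0}, \\textbf{I}_d)$, the constraints transform as\n\\begin{equation*}\n\t\\textbf{F}\\textbf{x} + \\textbf{g} = (\\textbf{F}\\textbf{L})\\textbf{z} + (\\textbf{F}\\boldsymbol{\\mu} + \\textbf{g}) \\geq 0,\n\\end{equation*}\nso one may take $\\textbf{F}^* = \\textbf{F}\\textbf{L}$ and $\\textbf{g}^* = \\textbf{F}\\boldsymbol{\\mu} + \\textbf{g}$; the constraint \\eqref{eq:st} above is of exactly this form.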
\n\nThe equations of motion for the Hamiltonian system in \\eqref{eq:hsys} are:    \n\\begin{equation}\\label{eq:heqs2}\n\\begin{split}\n\t\\frac{\\partial x_i}{\\partial t} & = \\frac{\\partial H}{\\partial s_i} = s_i \\\\\n\t\\frac{\\partial s_i}{\\partial t} & = -\\frac{\\partial H}{\\partial x_i} = -x_i, \\qquad i=1,...,d.\\\\\n\\end{split}\n\\end{equation}\n\nIn this sense, we want the particles in the Hamiltonian system to move around only inside the constrained space. The exact trajectory of a particle using the equations above is:    \n\\begin{equation}\\label{eq:path}\n\tx_i(t) = s_i(0)\\sin(t) + x_i(0)\\cos(t).\n\\end{equation}\n\nA particle will follow the trajectory above until it hits a wall, or in other words, until $\\textbf{F}\\textbf{x} + \\textbf{g} = 0$. Let $t_h$ be the time when the particle hits wall $h$, or when $\\textbf{F}_h \\cdot \\textbf{x}(t_h) + g_h = 0$. It will hit the wall with velocity $\\dot{\\textbf{x}}(t_h)$, which can be decomposed into:   \n\\begin{equation}\\label{eq:bounce}\n\t\\dot{\\textbf{x}}(t_h) = proj_{\\textbf{n}}\\dot{\\textbf{x}}(t_h) + proj_{\\textbf{F}_h}\\dot{\\textbf{x}}(t_h)\n\\end{equation}\nwhere $proj_{\\textbf{n}}\\dot{\\textbf{x}}(t_h)$ is the projection of $\\dot{\\textbf{x}}(t_h)$ on the normal vector $\\textbf{n}$ perpendicular to $\\textbf{F}_h$ and\n\\begin{equation}\\label{eq:proj}\n\\begin{split}\n\tproj_{\\textbf{F}_h}\\dot{\\textbf{x}}(t_h) &= \\frac{\\textbf{F}_h \\cdot \\dot{\\textbf{x}}(t_h)}{||\\textbf{F}_h||}\\frac{\\textbf{F}_h}{||\\textbf{F}_h||}\\\\  \n\t&= \\frac{\\textbf{F}_h \\cdot \\dot{\\textbf{x}}(t_h)}{||\\textbf{F}_h||^2}\\textbf{F}_h \\\\\n\t&= \\alpha_h\\textbf{F}_h.\n\\end{split}\n\\end{equation}\nBy inverting the direction of $proj_{\\textbf{F}_h}\\dot{\\textbf{x}}(t_h)$, the component along the wall normal $\\textbf{F}_h$, we can obtain the reflected velocity as\n\\begin{equation}\n\\begin{split}\n\t\\dot{\\textbf{x}}_R(t_h) & = proj_{\\textbf{n}}\\dot{\\textbf{x}}(t_h) - proj_{\\textbf{F}_h}\\dot{\\textbf{x}}(t_h) \\\\\n\t& = \\dot{\\textbf{x}}(t_h) - 2\\alpha_h\\textbf{F}_h\n\\end{split}\n\\end{equation}\nwhich can be used as the new initial velocity ($s_i(0)$) in \\eqref{eq:path} for the particle to continue its path. It can be shown that exact HMC satisfies the detailed balance condition (\\cite{pakman2013exact}), which leaves the canonical distribution invariant.    \n\nThe process of sampling using exact HMC can be summarized in two steps:   \n\\begin{enumerate}\n\t\\item Sample $\\textbf{s}$ from $\\mathcal{N}(\\textbf{0}, \\textbf{I}_d)$.\n\t\\item Use $\\textbf{s}$ and the last value of $\\textbf{x}$, and move the particle for a time $T$ until we reach $\\textbf{x}^*$ and $\\textbf{s}^*$. Use $\\textbf{x}^*$ as the initial position for the next iteration.\n\\end{enumerate}\n\n\\subsection{Analytic Elliptical Slice Sampling}   \n\\subsubsection{Slice Sampling}\nSlice sampling (\\cite{Neal2003}) is a method that samples uniformly from the $(n+1)$-dimensional region under a function $f(x)$ that is proportional to the density function $p(x)$. It is done by using Gibbs sampling to sample from the joint distribution of $(x, y)$, where $y$ is an auxiliary variable. We do this by alternately sampling from $y|x \\sim \\text{U}(0, f(x))$ and $x|y \\sim \\text{U}(S)$ where $S = \\{x: f(x)>y\\}$. To obtain samples of $x$, we sample from $p(x,y)$ and simply ignore $y$ afterwards.   
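\n\nTo make the shrinkage idea concrete, here is a minimal sketch of one slice sampling update for a one-dimensional target. It is illustrative only and not the implementation used in the experiments below (which were written in \\texttt{R}); the function name \\texttt{slice\\_sample\\_step}, the fixed initial interval width \\texttt{w}, and working on the log scale are our own choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef slice_sample_step(x, logf, w=1.0, rng=None):\n    # One shrinkage-based slice sampling update (Neal, 2003).\n    # logf: log of a function f proportional to the target density p.\n    rng = rng or np.random.default_rng()\n    # Draw the slice height: log y = log f(x) + log U(0, 1).\n    log_y = logf(x) + np.log(rng.uniform())\n    # Position an interval of width w randomly around the current point.\n    left = x - rng.uniform() * w\n    right = left + w\n    while True:\n        x_new = rng.uniform(left, right)\n        if logf(x_new) > log_y:\n            return x_new  # x_new lies in the slice S = {x : f(x) > y}\n        # Shrink the interval towards the current point and retry.\n        if x_new < x:\n            left = x_new\n        else:\n            right = x_new\n\\end{verbatim}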
\n\n\\subsubsection{Elliptical Slice Sampling}\nElliptical slice sampling (ESS, \\cite{murray2010}) is a form of slice sampling that uses a Gaussian prior to sample from a posterior distribution of the form\n\\begin{equation}\\label{eq:ESSpost}\n\tp^*(\\textbf{x}) = \\frac1Z\\mathcal{N}(\\textbf{x};\\textbf{0}, \\boldsymbol{\\Sigma}) L(\\textbf{x})\n\\end{equation}\nwhere $Z$ is the normalization constant, $\\mathcal{N}(\\textbf{0},\\boldsymbol{\\Sigma})$ is a multivariate Gaussian prior, and $L$ is a likelihood function. To do this, we introduce an auxiliary variable $\\boldsymbol{\\nu} \\sim \\mathcal{N}(\\textbf{0}, \\boldsymbol{\\Sigma})$ so that, given an initial state $\\textbf{x}$, the update ellipse is   \n\\begin{equation}\\label{eq:rule}\n\t\\textbf{x}' = \\textbf{x}\\cos(\\theta) + \\boldsymbol{\\nu}\\sin(\\theta), \\quad  \\theta \\in [0, 2\\pi].\n\\end{equation}   \n\nThe algorithm begins by sampling the slice height $y \\sim \\text{U}(0, L(\\textbf{x}))$ and the auxiliary variable $\\boldsymbol{\\nu} \\sim \\mathcal{N}(\\textbf{0}, \\boldsymbol{\\Sigma})$ that determines the ellipse in \\eqref{eq:rule}. We then sample $\\theta$ uniformly from $[0, 2\\pi]$ and shrink the interval towards $\\theta=0$ (the initial state) until we find a $\\theta$ that satisfies $L(\\textbf{x}') > y$. The new point is accepted as the next state, and we use it as the initial state for the next iteration.   \n\nESS is analogous to slice sampling in the sense that it samples a height uniformly under the function and then samples uniformly from the interval where the function value exceeds that height. Furthermore, it can be shown that the transition operator of ESS is reversible, which implies that the Markov chain converges to the target posterior distribution (\\cite{murray2010}).   \n\n\\subsubsection{Analytic Elliptical Slice Sampling}\nAnalytic Elliptical Slice Sampling (\\cite{Fagan2016ESSwEP}) cuts short the process of shrinking the ellipse in ESS by pre-determining the possible sampling interval. First, we define\n\\begin{equation}\\label{eq:ellipse}\n\t\\mathcal{E} = \\{\\textbf{x}':\\textbf{x}'(\\theta) = \\textbf{x}\\cos(\\theta) + \\boldsymbol{\\nu}\\sin(\\theta)\\}\n\\end{equation} \nas the set of possible new states and define\n\\begin{equation}\\label{eq:slice}\n\t\\mathcal{S}(y,\\mathcal{E}) = \\{ \\textbf{x}'\\in \\mathcal{E}: L(\\textbf{x}')>y\\}\n\\end{equation}\nas the slice of acceptable proposed states. To form a Markov chain that converges to the posterior distribution, we need to sample uniformly from the analytically pre-determined slice $\\mathcal{S}(y,\\mathcal{E})$ (\\cite{Fagan2016ESSwEP}). That is, the algorithm is similar to ESS except that the process of shrinking the ellipse is replaced by determining $\\mathcal{S}(y,\\mathcal{E})$, so that sampling $\\theta$ is guaranteed to yield a new state $\\textbf{x}'$.    
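\n\nFor comparison with the slice sampling sketch above, a minimal sketch of one standard ESS update (the shrinkage loop over $\\theta$ that Analytic ESS replaces) is given below. The names \\texttt{ess\\_step} and \\texttt{loglik} and the Cholesky-based prior draw are our own choices; for a truncated Gaussian target, \\texttt{loglik} may return \\texttt{-np.inf} outside the constraints.\n\\begin{verbatim}\nimport numpy as np\n\ndef ess_step(x, loglik, chol_Sigma, rng=None):\n    # One elliptical slice sampling update (Murray et al., 2010) for a\n    # posterior proportional to N(x; 0, Sigma) * L(x).\n    rng = rng or np.random.default_rng()\n    nu = chol_Sigma @ rng.standard_normal(x.shape)  # nu ~ N(0, Sigma)\n    log_y = loglik(x) + np.log(rng.uniform())       # log slice height\n    theta = rng.uniform(0.0, 2.0 * np.pi)\n    lo, hi = theta - 2.0 * np.pi, theta             # bracket contains theta = 0\n    while True:\n        x_new = x * np.cos(theta) + nu * np.sin(theta)\n        if loglik(x_new) > log_y:\n            return x_new\n        # Shrink the angle bracket towards theta = 0 and redraw.\n        if theta < 0.0:\n            lo = theta\n        else:\n            hi = theta\n        theta = rng.uniform(lo, hi)\n\\end{verbatim}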
\n\nIn a truncated multivariate Gaussian setting, we have\n\\begin{equation}\\label{eq:tmgESS}\n\\begin{split}\n\tp^*(\\textbf{x}) &= \\frac1Z\\mathcal{N}(\\textbf{x};\\textbf{0}, \\boldsymbol{\\Sigma}) TN(\\textbf{x};\\textbf{0},\\textbf{I}_d,\\textbf{F},\\textbf{g}) \\\\\n\t&\\propto \\mathcal{N}(\\textbf{x};\\textbf{0},\\boldsymbol{\\Sigma}) \\mathcal{N}(\\textbf{x};\\textbf{0},\\textbf{I}_d)\\mathbbm{1}(\\textbf{F}\\textbf{x} + \\textbf{g}\\geq 0).\n\\end{split}\n\\end{equation}\nGiven $\\textbf{x}$, $\\boldsymbol{\\nu}$, and $y$, we can determine the slice of possible new states as\n\\begin{equation}\\label{eq:exactslice}\n\\begin{split}\n\t\\mathcal{S}(y,\\mathcal{E}) &= \\{\\theta \\in [0,2\\pi]:  L(\\textbf{x}'(\\theta))>y\\} \\\\\n\t&= \\cap^m_{j=1}\\ \\{\\theta \\in [0,2\\pi]: F_j^{\\intercal}\\textbf{x}'(\\theta) + g_j \\geq 0 \\} \\\\\n\t& \\quad \\cap\\{\\theta \\in [0,2\\pi]: \\mathcal{N}(\\textbf{x}'(\\theta);\\textbf{0},\\textbf{I}_d)>y\\}.\n\\end{split}\n\\end{equation}   \n\n\\cite{Fagan2016ESSwEP} also demonstrates how to sample $J$ points from a single ESS iteration. In an ESS setting, this is done by 'recycling' the interval that has been sampled until $J$ points are accepted. One of the $J$ points is then randomly selected as the next initial state. In the case of Analytic ESS, $J$ different values of $y$ are sampled, and for each of them we find the corresponding $\\mathcal{S}(y,\\mathcal{E})$ and uniformly sample one point.   \n\n\n\\section{Results}\nIn this section we compare both methods (Exact HMC for TMG and Analytic ESS) under different settings of truncated multivariate Gaussians.\n\nFigure 1 shows how both algorithms sample a two-dimensional Gaussian distribution with constraints that result in a narrow support, similar to Figure 1 in \\cite{pakman2013exact}. In both cases, we can see that they worked as expected. The generated samples rapidly converged to the mean while also complying with the constraints.     \n\n\n\\begin{figure}[h!]\n\t\\includegraphics[width=\\linewidth]{fig1.png}\n\t\\caption{Comparison of generated points from $\\mathcal{N}((4,4)^T,\\textbf{I}_2)$ under the constraints $x_1 \\leq x_2 \\leq 1.1 x_1$ and $x_1,x_2 \\geq 0$ with initial point $(2, 2.1)$. First column: 1000 iterations. Second column: trace plots of the first 400 iterations of $x_2$. Third column: autocorrelation of $x_2$. In this example we used $T=\\pi/2$ for HMC and $J=1$ for ESS. Both methods converge around $x_2=4$ and have relatively low autocorrelation scores.}\n\\end{figure}\n\nNotice that the maximum traveling time $(T)$ and the number of samples per iteration $(J)$ affect the computation time of Exact HMC and Analytic ESS respectively. The higher the maximum traveling time $T$, the longer a particle in the HMC system moves, which also increases the possibility of bouncing. At the same time, an increase in the value of $J$ lets us sample multiple points from the same ellipse without a significant increase in computation time.    \n\n\nFrom the experiments, we observed a large disparity between the runtimes of the two algorithms. As seen in Table 1, Exact HMC runs much faster than even the fastest Analytic ESS setting.     \n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|}\n\t\\hline\n\tMethod & Avg. 
computation time per sample (seconds) \\\\\n\t\\hline\n\tHMC($T=\\frac{\\pi}{2}$) & 0.0001728 \\\\\n\t\\hline\n\tHMC($T=\\frac{3\\pi}{4}$) & 0.000173 \\\\\n\t\\hline\n\tESS($J=1$) & 0.0271275 \\\\\n\t\\hline\n\tESS($J=5$) & 0.0250084 \\\\\n\t\\hline\n\tESS($J=10$) & 0.0236001 \\\\\n\t\\hline\n\\end{tabular}\n\\caption{Average computation time per sample for both methods under different settings, for a 10-dimensional TMG under the constraints $-1 \\leq x_i \\leq 1, i=1,...,10$, using a CPU. For HMC, $T$ is the maximum travel time. For ESS, $J$ is the number of points sampled in each iteration. Exact HMC is overall much faster than Analytic ESS.}\n\\end{table}\n\n\n\\section{Discussion}\nIn this research project, we have compared Exact HMC and Analytic ESS. In terms of the generated samples, they are not significantly different. Exact HMC also runs much faster than Analytic ESS. More rigorous measurements can be made to evaluate whether the samples within a chain are autocorrelated, for example by estimating their effective sample size.     \n\nThere are, however, some caveats to the computation of Analytic ESS in this case. We expected its runtime to be somewhat similar to Exact HMC, but it runs more than 100 times slower. This could be due to the fact that external packages are used for the computation of the intervals and slices; replacing them with customized functions that are more specific to the task could yield a faster runtime. The programming language used, \\texttt{R}, is also not typically chosen for its speed. Thus, the algorithm might need to be rewritten in a language that is computationally faster (e.g. \\texttt{Python}).   \n\n\n\\newpage\n\\bibliography{ref}\n\n\\end{document}\n", "meta": {"hexsha": "8234b5388bf70957c71dbc50f73240c5655fa7c7", "size": 17171, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/rp_report.tex", "max_stars_repo_name": "bayubeta/research-project", "max_stars_repo_head_hexsha": "9643e1ec9d81852eec7c1ceb9ad1079a09094415", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/rp_report.tex", "max_issues_repo_name": "bayubeta/research-project", "max_issues_repo_head_hexsha": "9643e1ec9d81852eec7c1ceb9ad1079a09094415", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/rp_report.tex", "max_forks_repo_name": "bayubeta/research-project", "max_forks_repo_head_hexsha": "9643e1ec9d81852eec7c1ceb9ad1079a09094415", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.6431718062, "max_line_length": 633, "alphanum_fraction": 0.7259914973, "num_tokens": 5253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.682573734412324, "lm_q2_score": 0.8376199653600372, "lm_q1q2_score": 0.5717373877741221}}
{"text": "\\chapter{Storing a Path as a Tree}\n\\label{append:treap}\n\\vspace*{2em}\n\nIn phase 3 (\\autoref{algo:dham:ph3}) of the DHAM, we construct a path out of the 2 cycles being patched and search (DFS/BFS) the tree formed on this path by double rotation operations. Now, there are 2 parts to this problem: One is to store the path in such a way that the double rotations operations can be done efficiently, and the other is to identify valid double rotations, given such a path.\n\nIf we make the obvious decision to store the path as a link list to make the double rotation operation constant time (by just swapping 2 of the links), the process of identifying a valid operation ends up being linear due to having to traverse the path.\n\nInstead, if we store the path as an array that making generation of valid operations faster, we realize that executing the operation itself now needs updating up to linear number of elements in the array.\n\nTo get around this difficulty, we store our path as a Randomized Search Tree \\cite{treap} to get a $\\bigO(\\log n)$ bound on both of the operations.\n\n\\section{Randomized Search Tree}\nThe Randomized Search Tree (or treap) as given by Aragon and Seidel \\cite{treap} is based on 2 auxiliary operations - split (around a given key) and merge (assuming elements within the trees are in order of keys), to implement the main operations like insertion, deletion, union etc. The reason we choose this data structure is because we can use these auxiliary operations to also implement a double rotation by 'splitting' out a sub-array, and 'merging' it at the end.\n\nTo attain all the desired functionality, we make some modifications to the data structure\n\n\\subsection{Implicit Keys}\nDoing a double rotation changes the position of elements in the structure. In this situation the keys(or index) of these elements becomes inaccurate. So, instead of storing the key as a value within the node, we maintain the keys of all nodes implicitly.\n\nThis is achieved by storing the size of the tree rooted at a node, and using this value to compute keys on the fly.\n\nThe updated node looks like\n\n\\begin{lstlisting}\nstruct Node {\n    Node *left, *right, *parent;\n    int val, y, subtree_size = 1;\n    void recalc();\n};\nint cnt(Node* n) { return n ? n->subtree_size : 0; }\nvoid Node::recalc() { c = cnt(l) + cnt(r) + 1; }\n\\end{lstlisting}\n\nThe key of a node is the number of elements with a smaller key, which can be looked on as the number of elements in the left subtree (because of the BST structure). So while traversing down the tree, we can keep track of the sum of their sizes (every time we traverse to the right) to implicitly maintain the key of a node we are currently visiting.\nModifying the split function accordingly, we get\n\\begin{lstlisting}\npair<Node*, Node*> split(Node* n, int k)\n\tif (!n) return {};\n\tif (cnt(n->l) >= k) {\n\t\tauto pa = split(n->l, k);\n\t\tn->l = pa.second;\n\t\tn->recalc();\n\t\treturn {pa.first, n};\n\t} else {\n\t\tauto pa = split(n->r, k - cnt(n->l) - 1);\n\t\tn->r = pa.first;\n\t\tn->recalc();\n\t\treturn {n, pa.second};\n\t}\n\\end{lstlisting}\n\nThe merge function remains largely the same since we do not access the keys of a node, assuming them to already be in order.\n\n\n\\subsection{key $\\iff$ value conversions}\nWith keys being automatically updated, we only need to add the functionality to return the value given a key (accessing an element) and return a key given the value (search operation). 
These operations will allow us to generate valid operations efficiently.\nThese can simply be implemented in $\\bigO(\\log n)$ as shown below\n\\begin{lstlisting}\n// Implicit key of x: the number of elements before it in in-order.\nint key(Node* root, Node* x) {\n    if (root==nullptr || x==nullptr) return -1;\n    int ans = cnt(x->left);\n    while(x != root) {\n        auto par = x->parent;\n        if (par->right == x)\n            ans += 1 + cnt(par->left);\n        x = par;\n    }\n    return ans;\n}\n\n// Value stored at the given implicit key (0-indexed).\nint value(Node *n, int key) {\n    if (!n) return -1;\n    if (cnt(n->left) == key) return n->val;\n    else if (cnt(n->left) > key) return value(n->left, key);\n    else return value(n->right, key - cnt(n->left) - 1);\n}\n\\end{lstlisting}\n\n\\newpage\n\\section{Double Rotation}\nArmed with the tools described above, we can implement the double rotation operation to move the subarray $[l, r)$ to the index $k$ in $\\bigO(\\log n)$ time. The helper \\texttt{ins} splices a whole tree into another at a given index using one split and two merges.\n\n\\begin{lstlisting}\n// Splice tree b into t so that it starts at index k.\nNode* ins(Node* t, Node* b, int k) {\n    Node *p, *q;\n    tie(p, q) = split(t, k);\n    return merge(merge(p, b), q);\n}\n\n// Move the subarray [l, r) so that it starts at index k.\nvoid move(Node*& t, int l, int r, int k) {\n    Node *a, *b, *c;\n    tie(a,b) = split(t, l); tie(b,c) = split(b, r - l);\n    if (k <= l) t = merge(ins(a, b, k), c);\n    else t = merge(a, ins(c, b, k - r));\n}\n\\end{lstlisting}\n", "meta": {"hexsha": "ea8eca04b70f7f19b38c112f1e0e8cf329dec36f", "size": 4492, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapters/ap-treap.tex", "max_stars_repo_name": "LaughingBudda/hachikuji", "max_stars_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/chapters/ap-treap.tex", "max_issues_repo_name": "LaughingBudda/hachikuji", "max_issues_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/chapters/ap-treap.tex", "max_forks_repo_name": "LaughingBudda/hachikuji", "max_forks_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.8260869565, "max_line_length": 470, "alphanum_fraction": 0.7074799644, "num_tokens": 1183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744850834648, "lm_q2_score": 0.6992544335934766, "lm_q1q2_score": 0.5716925834875164}}
{"text": "\\section{Current Operator in Landau Levels}\n\nNow consider about the current density operator for $N$th Landau level. Since we have already found the extact solution for our time depenedent Hamiltonian and we have derive them as Floquet states with quesi energies. Using second quantization notaion we can represent our Hamiltonian using creating and anhilation operators for Floquet states which are eigen function of this Hamiltonian.\n\n\\vspace{5mm}\n\\noindent\nLet's introduce our creation and anhilation operators in momentum space for the $N$th Landau level states as follows\n\\begin{equation} \\label{7.1}\n  c_{k_x}^{\\dagger} \\ket{0} = \\ket{\\psi_{N,k_x}} \\quad \\text \\quad\n  c_{k_x} \\ket{\\psi_{N,k_x}}  = \\ket{0}.\n\\end{equation}\nTherefore with second quantization we can represent our Hamiltonian as\n\\begin{equation} \\label{7.2}\n  \\hat{H}_N = \\sum_{k_x} \\varepsilon_N c_{k_x}^{\\dagger}c_{k_x}\n\\end{equation}\nand particle density operator as\n\\begin{equation} \\label{7.3}\n  \\hat{\\rho}_N = \\sum_{{k'}_x} c_{{k'}_x}^{\\dagger}c_{{k'}_x}.\n\\end{equation}\n\n\\noindent\nNow we can find the commutation relationship with each other as\n\\begin{equation} \\label{7.4}\n  \\qty[\\hat{H}_N , \\hat{\\rho}_N] =\n  \\qty[ \\sum_{k_x} \\varepsilon_N c_{k_x}^{\\dagger}c_{k_x},\n  \\sum_{{k'}_x} c_{{k'}_x}^{\\dagger}c_{{k'}_x}]\n\\end{equation}\nand this can be simplified using fermions commutation relationship and one can derive that\n\\begin{equation} \\label{7.5}\n  \\qty[\\hat{H}_N , \\hat{\\rho}_N] =\n   \\sum_{k_x} \\varepsilon_N c_{k_x}^{\\dagger}c_{k_x}.\n\\end{equation}\nNow using \\textit{Liouville-Von Neumann equation} we can derive that\n\\begin{equation} \\label{7.6}\n  \\pdv{\\hat{\\rho}_N}{t} =\n   -\\frac{i}{\\hbar}\\qty[\\hat{H}_N , \\hat{\\rho}_N] =\n   \\sum_{k_x} -\\frac{i}{\\hbar}\\varepsilon_N c_{k_x}^{\\dagger}c_{k_x}\n\\end{equation}\nand using famous \\textit{continuty equation} we can make relationship with probability current density ($\\mb{j}(\\mb{r},t)$) operator as follows\n\\begin{equation} \\label{7.7}\n  \\pdv{\\hat{\\rho}_N}{t} = - \\grad \\cdot \\hat{\\mb{j}}(\\mb{r},t).\n\\end{equation}\nHowever, we can assume that the current flow of this system only can be happed in $x$ direction due to $y$ direction restriction by magnetic leangth. Therefore we can re-write the above equation as follows\n\\begin{equation} \\label{7.8}\n  \\pdv{\\hat{\\rho}_N}{t} = - \\pdv{\\hat{jx}^x(\\mb{r},t)}{x}\n\\end{equation}\nand using this on Eq. 
\\eqref{7.6} we can derive that\n\\begin{equation} \\label{7.9}\n - \\pdv{\\hat{j}^x(\\mb{r},t)}{x} =\n \\sum_{k_x} -\\frac{i}{\\hbar}\\varepsilon_N c_{k_x}^{\\dagger}c_{k_x},\n\\end{equation}\nwhich leads to\n\\begin{equation} \\label{7.10}\n \\hat{j}^x(\\mb{r},t) =\n \\sum_{k_x} -\\frac{i L_x}{\\hbar}\\varepsilon_N c_{k_x}^{\\dagger}c_{k_x}.\n\\end{equation}\nTherefore we can identify the momentum space component of the current density operator as\n\\begin{equation} \\label{7.11}\n {j}^x(k_x,t) =\n -\\frac{i L_x}{\\hbar}\\varepsilon_N\n\\end{equation}\nand find its Fourier series components in time. All components except the $s=0$ component vanish because\n\\begin{equation} \\label{7.12}\n \\int dt e^{-is\\omega t} = 2\\pi \\delta_{s,0},\n\\end{equation}\nand then we can find the $s=0$ Fourier component of the current operator as\n\\begin{equation} \\label{7.13}\n {j}^x_{s=0} (k_x,t) =\n -\\frac{i 2\\pi L_x}{\\hbar}\\varepsilon_N =\n -i 2\\pi \\omega_0 L_x \\qty(N + \\frac{1}{2})\n\\end{equation}\nsince $\\varepsilon_N = \\hbar\\omega_0\\qty(N + \\frac{1}{2})$. For the electric current we can introduce the electron's charge $-e$, and this is modified to\n\\begin{equation} \\label{7.14}\n {j}^x_{s=0} (k_x,t) =\n i 2\\pi e \\omega_0 L_x \\qty(N + \\frac{1}{2}).\n\\end{equation}\n\\hfill$\\blacksquare$\n\n\\noindent\nNow we can use this in our previously derived conductivity formula \\eqref{6.20} and get\n\\begin{equation} \\label{7.15}\n  \\begin{aligned}\n    \\lim_{\\omega \\to 0}\n    \\text{Re}[{\\sigma}^{xx}(0,\\omega)] =\n    \\frac{\\hbar}{4\\pi A L_x}\n    \\sum_{{k_x}}\n    \\qty[i 2\\pi e \\omega_0 L_x \\qty(N + \\frac{1}{2})]^2\n    \\frac{\\hbar^2}\n    {\\qty[\\Gamma^{00}_{N}(\\varepsilon_F,k_x)]^2}\n    \\qty[\n      1 -\n      4\\qty(\\frac{{\\varepsilon_F}-{\\varepsilon_N}}{\\Gamma^{00}_{N}(\\varepsilon_F,k_x)})^2\n    ],\n  \\end{aligned}\n\\end{equation}\nwhich leads to\n\\begin{equation} \\label{7.16}\n  \\begin{aligned}\n    \\lim_{\\omega \\to 0}\n    \\text{Re}[{\\sigma}^{xx}(0,\\omega)] =\n    \\frac{\\pi e^2 \\omega_0^2 \\hbar}{L_x}\n    \\sum_{{k_x}}\n    \\qty(N + \\frac{1}{2})^2\n    \\frac{\\hbar^2}\n    {\\qty[\\Gamma^{00}_{N}(\\varepsilon_F,k_x)]^2}\n    \\qty[\n      1 -\n      4\\qty(\\frac{{\\varepsilon_F}-{\\varepsilon_N}}{\\Gamma^{00}_{N}(\\varepsilon_F,k_x)})^2\n    ].\n  \\end{aligned}\n\\end{equation}\nHowever, we know that $\\Gamma^{00}_{N}(\\varepsilon_F,k_x)$ is independent of $k_x$, so the summand is constant and the sum simply counts the available momenta. By the condition that the center of the oscillator $y_0$ must physically lie within the system, $-L_y/2 < y_0 < L_y/2$, one can derive that\n\\begin{equation} \\label{7.17}\n -\\frac{m\\omega_0 L_y}{2\\hbar} \\leq k_x \\leq \\frac{m\\omega_0 L_y}{2\\hbar}.\n\\end{equation}\nTherefore Eq. 
\\eqref{7.16} can be simplified to\n\\begin{equation} \\label{7.18}\n  \\begin{aligned}\n    \\lim_{\\omega \\to 0}\n    \\text{Re}[{\\sigma}^{xx}(0,\\omega)] =\n    \\frac{\\pi e^2 \\omega_0^2 \\hbar}{L_x}\n    \\frac{m\\omega_0 L_y}{\\hbar}\n    \\qty(N + \\frac{1}{2})^2\n    \\frac{\\hbar^2}\n    {\\qty[\\Gamma^{00}_{N}]^2}\n    \\qty[\n      1 -\n      4\\qty(\\frac{{\\varepsilon_F}-{\\varepsilon_N}}{\\Gamma^{00}_{N}})^2\n    ],\n  \\end{aligned}\n\\end{equation}\nwhich becomes\n\\begin{equation} \\label{7.19}\n  \\begin{aligned}\n    \\lim_{\\omega \\to 0}\n    \\text{Re}[{\\sigma}^{xx}(0,\\omega)] =\n    {\\pi e^2 \\omega_0 m }\n    \\frac{\\qty(\\hbar \\omega_0)^2}\n    {\\qty[\\Gamma^{00}_{N}]^2}\n    \\qty(N + \\frac{1}{2})^2\n    \\qty[\n      1 -\n      4\\qty(\\frac{{\\varepsilon_F}-{\\varepsilon_N}}{\\Gamma^{00}_{N}})^2\n    ].\n  \\end{aligned}\n\\end{equation}\n\\hfill$\\blacksquare$\n", "meta": {"hexsha": "fae40eac1670e2c70c34408db4a5846973ccfa21", "size": 5831, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/sec_07_1.tex", "max_stars_repo_name": "KosalaHerath/magnetic-2DEG-conductivity", "max_stars_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/sec_07_1.tex", "max_issues_repo_name": "KosalaHerath/magnetic-2DEG-conductivity", "max_issues_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/sec_07_1.tex", "max_forks_repo_name": "KosalaHerath/magnetic-2DEG-conductivity", "max_forks_repo_head_hexsha": "91c5df1b018579b4b9c91d84f2d60ee482a001de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3986486486, "max_line_length": 390, "alphanum_fraction": 0.6631795575, "num_tokens": 2210, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744850834648, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5716925783634835}}
{"text": "% Created 2021-07-22 Thu 11:00\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=1610]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{pifont}\n\\newcommand{\\cmark}{\\textcolor{green!80!black}{\\ding{51}}}\n\\usepackage{amssymb}\n\\usepackage{pgfplotstable}\n\\DeclareMathOperator{\\shift}{q}\n\\DeclareMathOperator{\\diff}{p}\n\\usepackage{khpreamble, euscript}\n\\DeclareMathOperator{\\atantwo}{atan2}\n\\newcommand*{\\ctrb}{\\EuScript{C}}\n\\newcommand*{\\obsv}{\\EuScript{O}}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{State space models}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={State space models},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\n\n\\section{PMSM - sysid}\n\\label{sec:orgd8d4281}\n\n\\begin{frame}[label={sec:org9a749ce}]{Obtain state-space model from discrete-time pulse-transfer function}\n\\end{frame}\n\n\\begin{frame}[label={sec:org7a551b6}]{The permanent magnet synchronous motor}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{../../figures/permanent-motor.jpg}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgb40e192}]{The PMSM}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../../figures/pmsm_control_block_diag.png}\n\\end{center}\n{\\footnotesize De Liu and Li  ``Speed control for PMSM servo system'', IEEE Transactions on Industrial Electronics, 2012.}\n\\end{frame}\n\\begin{frame}[label={sec:orgfaa79fd}]{Identified model}\nTwo poles, two zeros, one delay\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=10mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input] (delay1)  {$\\frac{1}{z}$};\n    \\node[block, right of=delay1, node distance=30mm] (plant)  {$\\frac{b_0z^2 + b_1z + b_2}{z^2 + a_1 z + a_2}$};\n    \\node[coordinate, right of=plant] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (delay1);\n    \\draw[->] (delay1) -- node[above, pos=0.3] {} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n\n    \\begin{scope}[yshift=-1cm, xshift = 3cm]\n    \\node {$\\Updownarrow$};\n    \\end{scope}\n    \\begin{scope}[yshift=-3cm, xshift = 3cm]\n    \\node {$\\Updownarrow$};\n    \\end{scope}\n\n    \\node[coordinate, below of=input, node distance=2cm] (input2) {};\n    \\node[block, right of=input2, node distance=30mm] (plant)  {$\\frac{b_0z^2 + b_1z + b_2}{z^2 + a_1 z + a_2}$};\n    \\node[block, right of=plant] (delay2)  {$\\frac{1}{z}$};\n    \\node[coordinate, right of=delay2] (output) {};\n\n    \\draw[->] (input2) -- node[above, pos=0.3] {$u(k)$} (plant);\n    \\draw[->] (plant) -- node[above, pos=0.3] {} (delay2);\n    \\draw[->] (delay2) -- node[above, near end] {$y(k)$} (output);\n\n    \\node[coordinate, below of=input2, node distance=2cm] (input3) {};\n    \\node[block, right of=input3, node distance=30mm] (plant)  {$\\frac{b_0z^2 + b_1z + b_2}{z(z^2 + a_1 z + a_2})$};\n    \\node[coordinate, right of=plant, node distance=30mm] (output) {};\n\n    \\draw[->] (input3) -- node[above, 
pos=0.3] {$u(k)$} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n\n\n\n  \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org663463f}]{Identified model}\n\\[ H(z) = \\frac{6.91z^2 + 16.48z -17.87}{z(z^2 - 1.766z + 0.7665)} = \\frac{6.91(z+3.19)(z-0.81)}{z(z-0.998)(z-0.768)}\\]\n\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{../../figures/validation-result-2020-07-24.png}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org6ba2fba}]{From pulse-transfer function to state space model}\n\\begin{center}\n  \\begin{tikzpicture}[node distance=32mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input] (plant)  {$H(z) = \\frac{b_0z^2 + b_1z + b_2}{z(z^2 + a_1 z + a_2)}$};\n    \\node[coordinate, right of=plant] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n\n    \\begin{scope}[yshift=-2cm, xshift = 3cm]\n    \\node {$\\Updownarrow$};\n    \\end{scope}\n\n    \\begin{scope}[yshift=-4cm, node distance=50mm, xshift=-2cm]\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input, align=center] (plant)  {$x(k+1) = \\Phi x(k) + \\Gamma u(k)$\\\\$y(k) = C x(k)$};\n    \\node[coordinate, right of=plant] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n    \\end{scope}\n\n\n\n  \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orga09dbeb}]{Canonical forms}\nGiven the pulse-transfer function \n\\[ H(z) = \\frac{b_1 z^2 + b_2 z + b_3}{z^3 + a_1z^2 + a_2z + a_3}.\\] \nFind a representation in state-space form\n\\begin{align*}\n x(k+1) &= \\Phi x(k) + \\Gamma u(k) \\\\\n y(k) &= C x(k)\n \\end{align*}\n\n\\pause\n\n\\begin{itemize}\n\\item Controllable canonical form\n\\item Observable canonical form\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[label={sec:org8bf8145}]{Controllable canonical form}\nGiven the pulse-transfer function \n\\[ H(z) = \\frac{b_1 z^2 + b_2 z + b_3}{z^3 + a_1z^2 + a_2z + a_3}.\\] \n\n\\begin{align*}\n x(k+1) &= \\begin{bmatrix} -a_1 & -a_2 & -a_3\\\\1 & 0 & 0\\\\0 & 1 & 0\\end{bmatrix} x(k) + \\begin{bmatrix}1\\\\0\\\\0\\end{bmatrix} u(k) \\\\\n y(k) &= \\begin{bmatrix} b_1 & b_2 & b_3 \\end{bmatrix} x(k)\n \\end{align*}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orga3d2da4}]{Observable canonical form}\nGiven the pulse-transfer function \n\\[ H(z) = \\frac{b_1 z^2 + b_2 z + b_3}{z^3 + a_1z^2 + a_2z + a_3}.\\] \n\n\\begin{align*}\n x(k+1) &= \\begin{bmatrix} -a_1 & 1 & 0\\\\-a_2 & 0 & 1\\\\-a_3 & 0 & 0\\end{bmatrix} x(k) + \\begin{bmatrix}b_1\\\\b_2\\\\b_3\\end{bmatrix} u(k) \\\\\n y(k) &= \\begin{bmatrix} 1 & 0 & 0 \\end{bmatrix} x(k)\n \\end{align*}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgcb7a0f5}]{Canonical forms}\n\\alert{Activity} Find the controllable and observable canonical forms for the pulse-transfer function of the motor. Answer on Canvas (questions 1 and 2 on today's exercises).\n\n\\[ H(z) = \\frac{6.91z^2 + 16.48z -17.87}{z(z^2 - 1.766z + 0.7665)} = \\frac{6.91(z+3.19)(z-0.81)}{z(z-0.998)(z-0.768)}\\]\n\\end{frame}\n\n\\section{Apollo moon lander}\n\\label{sec:org0898f55}\n\\begin{frame}[label={sec:org0461feb}]{Discrete-time state space from continuous-time state space}\nA.k.a. 
discretization\n\\end{frame}\n\n\\begin{frame}[label={sec:orgb2a2f80}]{Example - the Apollo lunar module}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{fig-apollo}\n\\end{center}\n\\end{frame}\n\\begin{frame}[label={sec:orgacd27c5}]{Example - the Apollo lunar module}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{fig-apollo}\n\\end{center}\n\\alert{Activity} What is the transfer function of the system?\n\\[1: \\; G(s) = \\frac{k_1 k_2}{s^2}\\qquad 2: \\; G(s) = \\frac{k_1 k_2}{s(s^2 + 1)} \\qquad 3: \\; G(s) = \\frac{k_1 k_2}{s^3}\\]\n\\end{frame}\n\n\\begin{frame}[label={sec:orgd29c635}]{Example - the Apollo lunar module}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{fig-apollo}\n\\end{center}\n\\alert{Activity} What sensors are needed by the control system?\n\\end{frame}\n\n\\begin{frame}[label={sec:org1d01b3c}]{Example - the Apollo lunar module}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{fig-apollo}\n\\end{center}\n\nState variables: \\(x = \\begin{bmatrix} x_1 & x_2 & x_3 \\end{bmatrix}^T = \\begin{bmatrix} \\dot{\\theta} & \\theta & \\dot{z} \\end{bmatrix}^T\\). With the dynamics\n\\[ \\begin{cases} \\dot{x}_1 =  \\ddot{\\theta} = k_1 u\\\\ \\dot{x}_2 = \\dot{\\theta} = x_1\\\\ \\dot{x}_3 = \\ddot{z} = k_2\\theta = k_2x_2 \\end{cases} \\]\n\\end{frame}\n\n\\begin{frame}[label={sec:org380c798}]{Example - the Apollo lunar module}\nState variables: \\(x = \\begin{bmatrix} x_1 & x_2 & x_3 \\end{bmatrix}^T = \\begin{bmatrix} \\dot{\\theta} & \\theta & \\dot{z} \\end{bmatrix}^T\\). With dynamics\n\\[ \\begin{cases} \\dot{x}_1 =  \\ddot{\\theta} = k_1 u\\\\ \\dot{x}_2 = \\dot{\\theta} = x_1\\\\ \\dot{x}_3 = \\ddot{z} = k_2\\theta = k_2x_2 \\end{cases} \\]\n\n\\alert{Activity} Fill in the matrix \\(A\\) and vector \\(B\\).\n\n\\[ \\dot{x} = \\begin{bmatrix} \\dot{x}_1\\\\\\dot{x}_2\\\\\\dot{x}_3\\end{bmatrix} = \\underbrace{\\begin{bmatrix} \\textcolor{white}{0} & \\textcolor{white}{0} &\\textcolor{white}{0} \\\\\\textcolor{white}{1} & \\textcolor{white}{0}& \\textcolor{white}{0}\\\\ \\textcolor{white}{0}& \\textcolor{white}{k_2} &\\textcolor{white}{0} \\end{bmatrix}}_{A} \\begin{bmatrix} x_1\\\\x_2\\\\x_3\\end{bmatrix} + \\underbrace{\\begin{bmatrix} \\textcolor{white}{k_1} \\\\ \\textcolor{white}{0} \\\\\\textcolor{white}{0}  \\end{bmatrix}}_{B} u \\]\n\\end{frame}\n\n\\begin{frame}[label={sec:orgd8545c5}]{Example - the Apollo lunar module}\n\\end{frame}\n\n\n\\section{Discretization}\n\\label{sec:orge61a8be}\n\n\\begin{frame}[label={sec:org8990824}]{Discretizing a continuous-time state-space model}\n\\end{frame}\n\\begin{frame}[label={sec:orgedf0d91}]{Discretization}\nThe general solution to a linear, continuous-time state-space system is\n\\begin{align*}\nx(t_k+\\tau)& = \\mathrm{e}^{A\\tau} x(t_k) + \\int_{0}^\\tau \\mathrm{e}^{As} B u\\big((t_k+\\tau)-s\\big) ds\n\\end{align*}\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\draw[->] (-3,0) -- (6,0) node[below] {$t$};\n    \\draw (-2, 0.2) -- ( -2, 0) node[below] {$t_k=kh$};\n    \\draw (1, 0.2) -- ( 1, 0) node[below] {$t_{k+1}=kh+h$};\n    \\draw (4, 0.2) -- ( 4, 0) node[below] {$kh+2h$};\n    \\draw[thick, orange!90!black] (-3,0.3) -- (-2, 0.3) -- (-2,1) -- (1, 1) -- (1,0.8) -- (4, 0.8) --(4, 0.5) --(5.5, 0.5) node[pos=0.1, coordinate, pin=30:{$u(t)$}] {} ; \n    \\draw[->] (-2, -0.7) -- (0, -0.7) node[below] {$\\tau$};\n  \\end{tikzpicture}\n\\end{center}\n\n \\begin{align*}\n  x(kh+h) &= \\mathrm{e}^{Ah} x(kh) + \\int_{0}^{h} \\mathrm{e}^{As} B u(kh+h-s) ds\\\\\n   &= 
\\underbrace{\\mathrm{e}^{Ah}}_{\\Phi(h)} x(kh) + \\underbrace{\\left(\\int_{0}^h \\mathrm{e}^{As} B ds \\right)}_{\\Gamma(h)} u(kh)\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgaa1f3f9}]{Discretization - The matrix exponential}\nSquare matrix \\(A\\). Scalar variable \\(t\\).\n\\[ \\mathrm{e}^{At} = I + At + \\frac{t^2}{2!}A^2 + \\frac{t^3}{3!} A^3 + \\cdots\\]\nLaplace transform\n\\[ \\laplace{\\mathrm{e}^{At}} = (sI - A)^{-1}\\]\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orge94101e}]{Discretization - example}\n \\begin{align*}\n  x(kh+h) &= \\mathrm{e}^{Ah} x(kh) + \\int_{0}^{h} \\mathrm{e}^{As} B u(kh+h-s) ds\\\\\n   &= \\underbrace{\\mathrm{e}^{Ah}}_{\\Phi(h)} x(kh) + \\underbrace{\\left(\\int_{0}^h \\mathrm{e}^{As} B ds \\right)}_{\\Gamma(h)} u(kh)\n\\end{align*}\n\\[ A = \\begin{bmatrix} 0 & 0 & 0\\\\1 & 0 & 0\\\\0 & k_2 & 0\\end{bmatrix}, \\quad A^2 = \\begin{bmatrix} 0 & 0 & 0\\\\1 & 0 & 0\\\\0 & k_2 & 0\\end{bmatrix}\\begin{bmatrix} 0 & 0 & 0\\\\1 & 0 & 0\\\\0 & k_2 & 0\\end{bmatrix}= \\begin{bmatrix} 0 & 0 & 0\\\\0 & 0 & 0\\\\k_2 & 0  & 0\\end{bmatrix}, \\quad A^3 = 0\\]\nSo,\n\\begin{align*}\n \\Phi(h) &= \\mathrm{e}^{Ah} = I + Ah + A^2 h^2/2  + \\cdots \\\\\n &= \\begin{bmatrix} 1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{bmatrix} + \\begin{bmatrix} 0 & 0 & 0\\\\1 & 0 & 0\\\\0 & k_2 & 0\\end{bmatrix}h + \\begin{bmatrix} 0 & 0 & 0\\\\0 & 0 & 0\\\\k_2 & 0 & 0\\end{bmatrix}\\frac{h^2}{2}= \\begin{bmatrix} 1 & 0 & 0\\\\h & 1 & 0\\\\\\frac{h^2k_2}{2} & hk_2 & 1\\end{bmatrix}\n \\end{align*}\n\\end{frame}\n\n\\begin{frame}[label={sec:orge5dc4ca}]{Discretization - example}\n \\begin{align*}\n  x(kh+h) &= \\mathrm{e}^{Ah} x(kh) + \\int_{0}^{h} \\mathrm{e}^{As} B u(kh+h-s) ds\\\\\n   &= \\underbrace{\\mathrm{e}^{Ah}}_{\\Phi(h)} x(kh) + \\underbrace{\\left(\\int_{0}^h \\mathrm{e}^{As} B ds \\right)}_{\\Gamma(h)} u(kh)\n\\end{align*}\n\\[\\mathrm{e}^{As}B =  \\begin{bmatrix} 1 & 0 & 0\\\\s & 1 & 0\\\\\\frac{s^2k_2}{2} & sk_2 & 1\\end{bmatrix} \\begin{bmatrix} k_1\\\\0\\\\0 \\end{bmatrix} = k_1 \\begin{bmatrix} 1\\\\s\\\\\\frac{k_2s^2}{2} \\end{bmatrix}\n  \\]\n\\begin{align*}\n\\Gamma (h) &= \\int_0^h \\mathrm{e}^{As}B ds = k_1 \\int_0^h \\begin{bmatrix} 1\\\\s\\\\\\frac{k_2s^2}{2} \\end{bmatrix}ds = k_1\\begin{bmatrix} h\\\\ \\frac{h^2}{2} \\\\ \\frac{k_2 h^3}{6} \\end{bmatrix} \n\\end{align*}\n\\end{frame}\n\n\\begin{frame}[label={sec:org2d3268c}]{Discretization - example}\n \\begin{align*}\n  x(kh+h) &= \\mathrm{e}^{Ah} x(kh) + \\int_{0}^{h} \\mathrm{e}^{As} B u(kh+h-s) ds\\\\\n   &= \\underbrace{\\mathrm{e}^{Ah}}_{\\Phi(h)} x(kh) + \\underbrace{\\left(\\int_{0}^h \\mathrm{e}^{As} B ds \\right)}_{\\Gamma(h)} u(kh)\\\\\n   &= \\begin{bmatrix} 1 & 0 & 0\\\\h & 1 & 0\\\\\\frac{h^2k_2}{2} & hk_2 & 1\\end{bmatrix} x(kh) + k_1 \\begin{bmatrix} h\\\\ \\frac{h^2}{2} \\\\ \\frac{k_2 h^3}{6} \\end{bmatrix} u(kh)\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgff35f9c}]{Discretization - exercise}\n\\alert{Activity} Discretize the system (question 3 on today's exercises on Canvas)\n\\[ \\dot{x} = Ax + Bu = \\begin{bmatrix} 0 & 1\\\\ 0 & 0 \\end{bmatrix} x + \\begin{bmatrix}0\\\\1\\end{bmatrix}u\\]\n\\end{frame}\n\\end{document}", "meta": {"hexsha": "ae317b701ac89e91ceef8e09db6f70ef26b86beb", "size": 12859, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "state-space/slides/lecture-state-space-models.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "state-space/slides/lecture-state-space-models.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "state-space/slides/lecture-state-space-models.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 41.6148867314, "max_line_length": 493, "alphanum_fraction": 0.630453379, "num_tokens": 5326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8175744828610095, "lm_q1q2_score": 0.571692571685389}}
{"text": "\\providecommand{\\main}{..}\n\\documentclass[\\main/thesis.tex]{subfiles}\n\\begin{document}\n\n\\graphicspath{ {images/}{../images/} }\n\n\\chapter{Introduction}\\label{introduction}\n\n\\section{Positional Numeral Systems}\n\nA numeral system is a writing system for expressing numbers, and humans have\ninvented various kinds of numeral systems throughout history.\nTake the number ``2016'' for example:\n\n\\begin{center}\n    \\begin{tabular}{ | l | r | }\n    \\textbf{Numeral system} & \\textbf{notation}  \\\\\n    \\hline\n    Chinese numerals    & \u5169\u5343\u96f6\u4e00\u5341\u516d    \\\\\n    Roman numerals      & MMXVI         \\\\\n    Egyptian numerals   & \\includegraphics[width=2em]{egyptian/2016.png} \\\\\n    \\end{tabular}\n\\end{center}\n\nEven so, most of the systems we are using today are positional notations\\cite{knuth1998art}\nbecause they can express infinite numbers with just a finite set of symbols called \\textbf{digits}.\n\n\\subsection{Digits}\n\nAny set of symbols can be used as digits as long as we know how to \\textit{assign}\neach digit to the value it represents.\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{ | l | *{16}{l} | }\n    \\textbf{Numeral system} & \\multicolumn{16}{c |}{\\textbf{Digits} } \\\\\n    \\hline\n    decimal         & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 &    &    &    &    &    &    \\\\\n    binary          & 0 & 1 &   &   &   &   &   &   &   &   &    &    &    &    &    &    \\\\\n    hexadecimal     & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & A  & B  & C  & D  & E  & F  \\\\\n    \\hline\n    \\textbf{Assigned value}  & \\textbf{0} & \\textbf{1} & \\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5} & \\textbf{6} & \\textbf{7} & \\textbf{8} & \\textbf{9} & \\textbf{10} & \\textbf{11} & \\textbf{12} & \\textbf{13} & \\textbf{14} & \\textbf{15} \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\nWe place a bar above a digit to indicate its assignment.\nFor instance, these are the assignments of hexadecimal digits.\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{ *{4}{c} }\n    $ \\bar{0} \\mapsto 0 $ & $ \\bar{1} \\mapsto 1 $ & $ \\bar{2} \\mapsto 2 $ & $ \\bar{3} \\mapsto 3 $ \\\\\n    $ \\bar{4} \\mapsto 4 $ & $ \\bar{5} \\mapsto 5 $ & $ \\bar{6} \\mapsto 6 $ & $ \\bar{7} \\mapsto 7 $ \\\\\n    $ \\bar{8} \\mapsto 8 $ & $ \\bar{9} \\mapsto 9 $ & $ \\bar{A} \\mapsto 10 $ & $ \\bar{B} \\mapsto 11 $ \\\\\n    $ \\bar{C} \\mapsto 12 $ & $ \\bar{D} \\mapsto 13 $ & $ \\bar{E} \\mapsto 14 $ & $ \\bar{F} \\mapsto 15 $ \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\nPositional numeral systems represent a number by lining up a series of digits:\n\n$$ \\xrightarrow{2016} $$\n\nIn this case, $ 6 $ is called the \\textit{least significant digit},\nand $ 2 $ is known as the \\textit{most significant digit}.\nExcept when writing decimal numbers,\nwe will write down numbers in reverse order,\nfrom the least significant digit to the most significant digit like this\n\n$$ \\xleftarrow{6102} $$\n\n\\subsection{Syntax and Semantics}\n\nSyntax bears no meaning;\nits semantics can only be expressed through the process of \\textit{converting} to some other syntax.\nNumeral systems are merely syntax.\nThe same notation can represent different numbers in different context.\n\nTake the notation ``11'' for example; it could have several meanings.\n\n\\begin{center}\n    \\begin{tabular}{ | l | r | }\n    \\textbf{Numeral system}      & 
\\textbf{number in decimal}  \\\\\n    \\hline\n    decimal             & 11    \\\\\n    binary              & 3     \\\\\n    hexadecimal         & 17    \\\\\n    \\end{tabular}\n\\end{center}\n\nTo make things clear, we call a sequence of digits a \\textbf{numeral}, or \\textbf{notation};\nthe number it expresses a \\textbf{value}, or simply a \\textbf{number};\nthe process that converts notations to values an \\textbf{evaluation}.\nFrom now on, \\textbf{numeral systems} will refer only to the positional ones.\nWe will not concern ourselves with other kinds of numeral systems.\n\n\\subsection{Evaluating Numerals}\n\nWhat we mean by a \\textit{context} in the previous section is the \\textbf{base}\nof a numeral system.\nThe ubiquitous decimal numeral system has base 10,\nwhile the binary numerals found in our machines nowadays have base 2.\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{ | l | l | *{16}{l} | }\n    \\textbf{Numeral system} & \\textbf{Base}  & \\multicolumn{16}{c |}{\\textbf{Digits} } \\\\\n    \\hline\n    decimal         & 10 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 &    &    &    &    &    &    \\\\\n    binary          & 2  & 0 & 1 &   &   &   &   &   &   &   &   &    &    &    &    &    &    \\\\\n    hexadecimal     & 16 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & A  & B  & C  & D  & E  & F  \\\\\n    \\hline\n    \\textbf{Assigned value}  & & \\textbf{0} & \\textbf{1} & \\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5} & \\textbf{6} & \\textbf{7} & \\textbf{8} & \\textbf{9} & \\textbf{10} & \\textbf{11} & \\textbf{12} & \\textbf{13} & \\textbf{14} & \\textbf{15} \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\n\nA numeral system of base $n$ has exactly $n$ digits, which are assigned values\nfrom $0$ to $n-1$.\n\nConventionally, the base of a system is annotated by subscripting it to the\nright of a numeral, like $ ({2016})_{10} $.\nWe replace the parentheses with a fancy pair of semantics brackets,\nlike $ [\\![ 2016 ]\\!]_{10} $, to emphasize its role as the evaluation function.\n\nTo evaluate a notation of a certain base:\n$$\n    [\\![d_0d_1d_2...d_n]\\!]_{base}\n    =\n    \\bar{d_0}\\times base^0 + \\bar{d_1}\\times base^1 + \\bar{d_2}\\times base^2 + ... + \\bar{d_n}\\times base^n\n$$\n%\nwhere $ d_{i} $ is a digit for all $ i $.\n\n\\section{Unary Numbers and Peano Numbers}\n\nSome computer scientists and mathematicians seem to be more comfortable with\nunary (base-1) numbers because they are isomorphic to the natural numbers \u00e0 la Peano.\n\n$$\n    [\\![1111]\\!]_{1} \\cong\n        \\overbrace{\\text{suc (suc (suc (suc}}^4 \\text{ zero)))}\n$$\n\nStatements established on such a construction can be proven using mathematical\ninduction. 
Moreover, people have implemented and proven a great number of functions\nand properties on these unary numbers because they are easy to work with.\n\n\\paragraph{Problem}\nIf we are to evaluate unary numerals with the model we have just settled on,\nthe only digit of the unary system would have to be assigned to $ 0 $ and\nevery numeral would evaluate to zero as a result.\n\nThe definition of digit assignments can be modified to allow unary digits to\nstart counting from $ 1 $.\n\n\\begin{center}\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{ | l | l | *{16}{l} | }\n    \\textbf{Numeral system} & \\textbf{Base}  & \\multicolumn{16}{c |}{\\textbf{Digits} } \\\\\n    \\hline\n    decimal         & 10 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 &    &    &    &    &    &    \\\\\n    binary          & 2  & 0 & 1 &   &   &   &   &   &   &   &   &    &    &    &    &    &    \\\\\n    hexadecimal     & 16 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & A  & B  & C  & D  & E  & F  \\\\\n    unary           & 1  &   & 1 &   &   &   &   &   &   &   &   &    &    &    &    &    &    \\\\\n    \\hline\n    \\textbf{Assigned value}  & & \\textbf{0} & \\textbf{1} & \\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5} & \\textbf{6} & \\textbf{7} & \\textbf{8} & \\textbf{9} & \\textbf{10} & \\textbf{11} & \\textbf{12} & \\textbf{13} & \\textbf{14} & \\textbf{15} \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\nHowever, the representation for numeral systems would have to be generalized to\nmanage the inconsistency of digit assignment among systems of different bases.\nThis generalization will be introduced in Chapter~\\ref{generalizations}.\n\nMoreover, theorems developed on natural numbers \u00e0 la Peano cannot be migrated\nto numbers of other representations, although the proofs are mostly identical.\nThis is because the relations and similarities between these two representations that we have\nobserved and described are expressed in a \\textit{metalanguage} (English, in\nthis case), while the proofs are written in some \\textit{object language}.\nWe will demonstrate how to \\textit{encode} propositions expressed in the\nmetalanguage to an object language that we have constructed and how to manipulate\nthem.\n\n\\section{Binary Numerals in Digital Circuits}\n\nSuppose we are asked to answer the questions below.\n\n\\vspace{15pt}\n\\noindent\\begin{minipage}{.45\\textwidth}\n    \\begin{center}\n        \\begin{tabular}{c@{\\,}c@{\\,}c@{\\,}c@{\\,}c@{\\,}c@{\\,}c@{\\,}c@{\\,}c@{\\,}c}\n          & 2 & 5 & 3 & 2 & 9 & 8 & 1 & 2 & 3 \\\\\n        + & 3 & 4 & 7 & 8 & 4 & 4 & 2 & 3 & 5 \\\\\n        \\hline\n          &   &   &   &   &   &   &   &   & ? \\\\\n        %   & 6 & 0 & 1 & 1 & 4 & 2 & 3 & 5 & 8 \\\\\n        \\end{tabular}\n    \\end{center}\n\\end{minipage}\\hfill\n\\begin{minipage}{.48\\textwidth}\n    \\begin{center}\n        \\begin{tabular}{c@{\\,}c@{\\,}c@{\\,}c}\n              & 1 & 2 & 3 \\\\\n            + &   & 3 & 4 \\\\\n            \\hline\n              &   &   & ? \\\\\n        \\end{tabular}\n    \\end{center}\n\\end{minipage}\n\\vspace{15pt}\n\nIt would take much more effort to perform long addition and answer the question\non the left, because it has greater numbers. 
The greater a number is,\nthe longer its notation will be, which in turn determines the time it takes to\nperform operations.\n\nSince a system can only have \\textbf{finitely many} digits, operations such as\naddition on these digits can be implemented in \\textbf{constant time}.\nConsequently, the time complexity of operations such as long addition on a\nnumeral would be $ O(lg n) $ at best.\nThe choice of the base is immaterial as long as it is not unary\n(which would degenerate to $ O(n) $).\n\nHowever, this is not the case for the binary numeral system implemented in\narithmetic logic units (ALUs). These digital circuits are designed to perform\nfast arithmetic. Addition, for instance, takes only \\textit{constant time}.\n\nIt seems that either we have been doing long addition wrong since primary school,\nor the chip manufacturers have been cheating all the time. But there's a catch:\nwe are capable of what is called \\textit{arbitrary-precision arithmetic},\ni.e., we can perform calculations on numbers of arbitrary size,\nwhile the binary numbers that reside in machines are bounded by the hardware,\nwhich can only perform \\textit{fixed-precision arithmetic}.\n\n\\paragraph{Problem}\nJudging from the time complexity of operations, the binary numerals running in\ndigital circuits are certainly different from the ordinary binary numerals we have\nknown.\n\n\\section{Numerical representation}\n\n\\paragraph{lists and unary numbers}\n\nOne may notice that the structure of unary numbers looks suspiciously similar\nto that of lists. Let's compare their definitions in Haskell.\n\n\\noindent\\begin{minipage}{.45\\textwidth}\n\\begin{lstlisting}\ndata Nat = Zero\n         | Suc Nat\n\\end{lstlisting}\n\\end{minipage}\\hfill\n\\begin{minipage}{.48\\textwidth}\n\\begin{lstlisting}\ndata List a = Nil\n            | Cons a (List a)\n\\end{lstlisting}\n\\end{minipage}\n\nIf we replace every {\\lstinline|Cons _|} with {\\lstinline|Suc|} and\n{\\lstinline|Nil|} with {\\lstinline|Zero|}, then a list becomes a unary number.\nThis is precisely what the {\\lstinline|length|} function,\na homomorphism from lists to unary numbers, does.\n\n\\tikzset{\n    one/.style = {draw, circle, inner sep=0pt, minimum size=10pt, fill=black},\n    zero/.style = {draw, circle, inner sep=0pt, minimum size=10pt},\n    cons/.style = {draw, circle, inner sep=0pt, minimum size=25pt, font=\\scriptsize},\n    nil/.style = {draw, rectangle, inner sep=0pt, minimum size=20pt, font=\\scriptsize},\n    txt/.style = {circle}\n}\n\n\\begin{center}\n    \\begin{tikzpicture}[edge from parent/.style = {draw, -latex}]\n        \\node[cons]{cons}\n            child[grow=right] {node[cons]{cons}\n                child[grow=right] {node[cons]{cons}\n                    child[grow=right] {node[cons]{cons}\n                        child[grow=right] {node[nil]{nil}\n                            child[grow=down] {node[txt]{zero} edge from parent[-, dashed]}\n                        }\n                        child[grow=down] {node[txt]{suc} edge from parent[-, dashed]}\n                    }\n                    child[grow=down] {node[txt]{suc} edge from parent[-, dashed]}\n                }\n                child[grow=down] {node[txt]{suc} edge from parent[-, dashed]}\n            }\n            child[grow=down] {node[txt]{suc} edge from parent[-, dashed]};\n    \\end{tikzpicture}\n\\end{center}\n\nNow let's compare addition on unary numbers and merge (append) on 
lists:\n\n\\noindent\\begin{minipage}{.45\\textwidth}\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nadd : Nat \u2192 Nat \u2192 Nat\nadd Zero    y = y\nadd (Suc x) y =\n    Suc (add x y)\n\\end{lstlisting}\n\\end{minipage}\\hfill\n\\begin{minipage}{.48\\textwidth}\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nappend : List a \u2192 List a \u2192 List a\nappend Nil         ys = ys\nappend (Cons x xs) ys =\n    Cons x (append xs ys)\n\\end{lstlisting}\n\\end{minipage}\n\nAside from having virtually identical implementations, operations on unary numbers\nand lists both have the same time complexity. Incrementing a unary number takes\n$ O(1) $, inserting an element into a list also takes $ O(1) $; adding two unary\nnumbers takes $ O(n) $, appending a list to another also takes $ O(n) $.\n\n\\paragraph{binomial heaps and binary numbers}\n\nIf we look at implementations and operations of binary numbers and binomial\nheaps, the resemblances are also uncanny.\n\n\\begin{center}\n    \\begin{tikzpicture}[\n        edge from parent/.style = {draw, -latex},\n        node distance=40pt\n    ]\n        \\node[one] (A) {}\n            child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n            child[grow=right] {node[zero] (B) {}\n                child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n                child[grow=right] {node[one, xshift=40pt] (C) {}\n                    child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                    child[grow=down] {node[one] (C-1) {}}\n                    child {node[one, left of = C-1] {}\n                        child[grow=down] {node[one]{}}\n                    }\n                    child[grow=right] {node[one, xshift=120pt] (D) {}\n                        child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                        child[grow=down] {node[one] (D-1) {}}\n                        child {node[one, left of = D-1] (D-2) {}\n                            child[grow=down] {node[one]{}}\n                        }\n                        child {node[one, left of = D-2] (D-3) {}\n                            child[grow=down] {node[one] (D-3-1) {}}\n                            child {node[one, left of = D-3-1] {}\n                                child[grow=down] {node[one]{}}\n                            }\n                        }\n                    }\n                }\n            };\n    \\end{tikzpicture}\n\\end{center}\n\nThe figure above is a binomial heap containing $ 13 $ elements.\n\\footnote{Nodes that contain elements are painted black.}\nFrom left to right, there are \\textit{binomial trees} of different \\textit{ranks}\nattached to the path that we call \\textit{``the spine''}.\nA binomial heap is composed of binomial trees just as a numeral is composed\nof digits. 
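\n\nTo preview the correspondence made precise below, here is a rough Haskell sketch of ours (the types and names are simplified and not taken from any particular library) showing that inserting into a binomial heap mirrors incrementing a binary numeral: linking two trees of the same rank is exactly a carry.\n\n\\begin{lstlisting}\ndata Digit = O | I\n\ninc :: [Digit] -> [Digit]      -- little-endian increment\ninc []       = [I]\ninc (O : ds) = I : ds\ninc (I : ds) = O : inc ds      -- 1 + 1 = 10: carry\n\ndata Tree a = Node a [Tree a]  -- binomial tree; rank = length of subtree list\ntype Heap a = [Maybe (Tree a)] -- digits: Nothing ~ 0, Just t ~ 1\n\n-- Two trees of rank r link into one tree of rank r + 1.\nlink :: Ord a => Tree a -> Tree a -> Tree a\nlink s@(Node x xs) t@(Node y ys)\n  | x <= y    = Node x (t : xs)\n  | otherwise = Node y (s : ys)\n\n-- Inserting a rank-0 tree is structurally identical to inc.\ninsert :: Ord a => a -> Heap a -> Heap a\ninsert x = carry (Node x [])\n  where\n    carry t []             = [Just t]\n    carry t (Nothing : ds) = Just t : ds\n    carry t (Just u : ds)  = Nothing : carry (link t u) ds\n\\end{lstlisting}\n\n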
If we read the nodes with binomial trees as $ 1 $ and those without\nas $ 0 $, then we get the numeral of $ 13 $ in binary.\n\n\\paragraph{building blocks}\n\nSingle cells in lists and binomial trees in binomial heaps are all different\nkinds of simple data structures, called \\textit{building blocks}.\nThere are also other kinds of building blocks, such as perfect leaf trees.\n\n\\begin{center}\n    \\begin{tikzpicture}[\n        edge from parent/.style = {draw, -latex},\n        level/.style={sibling distance = 80pt/#1},\n        level distance = 20pt\n    ]\n        \\node[zero]{}\n            child {node[zero]{}\n                child {node[zero]{}\n                    child {node[one]{}}\n                    child {node[one]{}}\n                }\n                child {node[zero]{}\n                    child {node[one]{}}\n                    child {node[one]{}}\n                }\n            }\n            child {node[zero]{}\n                child {node[zero]{}\n                    child {node[one]{}}\n                    child {node[one]{}}\n                }\n                child {node[zero]{}\n                    child {node[one]{}}\n                    child {node[one]{}}\n                }\n            };\n    \\end{tikzpicture}\n\\end{center}\n\nThese building blocks can have different ranks.\nA perfect leaf tree of rank $ n $, for instance, contains $ 2^n $ elements.\nThe data structures we have addressed so far are all composed of a series of\nbuilding blocks that are ordered by their ranks.\n\nHowever, these building blocks do not necessarily have to be binary based,\nas long as multiple building blocks of the same rank can be merged into a\nbuilding block of a higher rank or vice versa.\n\n\n\\paragraph{random access lists and binary numbers}\n\nAccessing an element on lists typically takes $ O(n) $.\nInstead of using single cells, \\textit{random access lists} adopt perfect\nleaf trees as building blocks.\nThis improves the time complexity of random access from $ O(n) $ to $ O(lg n) $,\nas a random access list has at most $ O(lg n) $ building blocks, and\nthe tallest perfect leaf tree takes at most $ O(lg n) $ steps to traverse.\n\n\\tikzset{\n    cltree/.style = {draw,\n        font=\\scriptsize,\n        isosceles triangle, shape border rotate=90}\n}\n\\begin{center}\n    \\begin{tikzpicture}[\n        edge from parent/.style = {draw, -latex},\n        node distance = 60pt\n    ]\n        \\node[one](A){}\n            child {node[zero, right of = A](B){}\n                child {node[one, right of = B](C){}\n                    child {node[one, right of = C](D){}\n                        child {\n                            node[cltree, scale=2.8, yshift=-5pt] {8} edge from parent[draw=none]\n                        }\n                        { node[cltree, scale=2] {4} }\n                    }\n                }\n                { node[cltree, scale=1, yshift=20pt] {1} }\n            };\n    \\end{tikzpicture}\n\\end{center}\n\nAs with binomial heaps, random access lists also have spines,\nand treating building blocks as digits again yields the binary numeral of the\ncontainer's size.\n\n\\subsection{The correspondence}\n\nThe strong analogy between data structures and positional numeral systems\nsuggests that numeral systems can serve as templates for designing containers.\nSuch data structures are called \\textbf{Numerical Representations}\\cite{okasaki1996purely}\n\\cite{hinze1998numerical}.\n\n\\begin{center}\n    \\begin{adjustbox}{max 
width=\\textwidth}\n    \\begin{tabular}{ l l l }\n    a container of size $ n $ & corresponds to & a numeral of $ n $ \\\\\n    a building block of rank $ n $ & corresponds to & a digit of weight $ base^n $ \\\\\n    inserting an element      & corresponds to & incrementing a numeral \\\\\n    merging two containers    & corresponds to & adding two numerals \\\\\n    \\end{tabular}\n    \\end{adjustbox}\n\\end{center}\n\n\\paragraph{Problem}\n\nRetrieving the first element ({\\lstinline|head|}) of a list typically takes\nonly constant time. On the other hand, it takes $ O(lg n) $ on random access lists.\nTo illustrate the problem, consider a random access list with $ 32 $ elements:\n\n\\begin{center}\n    \\begin{tikzpicture}[\n        edge from parent/.style = {draw, -latex},\n        node distance = 60pt\n    ]\n        \\node[zero]{}\n            child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n            child[grow=right] {node[zero]{}\n                child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n                child[grow=right] {node[zero]{}\n                    child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n                    child[grow=right] {node[zero]{}\n                        child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n                        child[grow=right] {node[zero]{}\n                            child[grow=up] {node[txt]{0} edge from parent[-, dashed]}\n                            child[grow=right] {node[one]{}\n                                child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                                { node[cltree, scale=1, yshift=-20pt] {32} }\n                            }\n                        }\n                    }\n                }\n            };\n    \\end{tikzpicture}\n\\end{center}\n\nTo access any element of the list above, we have to skip\nthrough five empty nodes before reaching the first building block.\nNodes that correspond to the digit ``0'' contain no elements.\nThey are not only wasteful but also hinder traversal.\n\nHowever, if we replace the digits ``0'' and ``1'' with ``1'' and ``2'',\nthen the number $ 32 $ can be represented as $ 21111 $ instead of $ 000001 $.\nBy doing so, we eliminate empty nodes and shorten the spine, thus improving\nthe performance of list traversal.\n\n\\begin{center}\n    \\begin{tikzpicture}[\n        edge from parent/.style = {draw, -latex},\n    ]\n        \\node[one](root){}\n            child[grow=up] {node[txt]{2} edge from parent[-, dashed]}\n            child[grow=right] {node[one]{}\n                child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                child[grow=right] {node[one]{}\n                    child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                    child[grow=right] {node[one]{}\n                        child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                        child[grow=right] {node[one]{}\n                            child[grow=up] {node[txt]{1} edge from parent[-, dashed]}\n                            { node[cltree, scale=2, yshift=-25pt] {16} }\n                        }\n                        { node[cltree, scale=1.68, yshift=-23pt] {8} }\n                    }\n                    { node[cltree, scale=1.41, yshift=-22pt] {4} }\n                }\n                { node[cltree, scale=1.19, yshift=-21pt] {2} }\n            }\n            child {node[txt, below right = 16pt and 11pt of root]{}\n                { node[cltree, scale=0.8, 
below right = 28pt and 5pt of root] {1} }\n            }\n            child {node[txt, below left = 16pt and 11pt of root]{}\n                { node[cltree, scale=0.8, below left = 28pt and 5pt of root] {1} }\n            };\n    \\end{tikzpicture}\n\\end{center}\n\nThe data structure introduced above is called the \\textit{1-2 random access list}.\nIts presence suggests that a binary numeral system with digits ``1'' and ``2''\nshould be admissible.\n\nHinze argues\\cite{hinze1998numerical} that if we add the digit ``0'' back to the\n\\textit{1-2 random access list}, then the resulting numerical representation,\nthe so-called \\textit{0-1-2 random access list},\nperforms even better on insertion and deletion in certain edge cases.\n\nTo accommodate these numerical representations, we need a more versatile\nrepresentation for numeral systems.\n\n\\section{Motivation and Outline}\n\nWe started off with numerical representations, but then we found that their\ncorresponding numeral systems alone are interesting in their own right.\n\n\\begin{itemize}\n    \\item How to capture the numeral systems behind these data structures?\n    \\item What are these numeral systems capable of?\n    \\item Which kinds of numeral systems are suitable for modelling numerical\n        representations? Do they support increment (insertion) or addition (merge)?\n    \\item What are the relations between these numeral systems and the natural numbers?\n    \\item How to translate propositions and proofs from the natural numbers to\n        our representation?\n\\end{itemize}\n\nThe remainder of the thesis is organized as follows.\n\n\\begin{description}\n    \\item[Chapter~\\ref{generalizations}]\n        resolves the problems we have addressed in this chapter by proposing\n        some generalizations to the conventional positional numeral systems.\n    \\item[Chapter~\\ref{agda}]\n        gives an introduction to \\textit{Agda}, the language we use to construct\n        and reason about the representation.\n    \\item[Chapter~\\ref{props}]\n        introduces \\textit{equational reasoning} and relevant properties\n        of natural numbers exercised in the coming chapters.\n    \\item[Chapter~\\ref{constructions}]\n        constructs the representation for these numeral systems,\n        searches for suitable systems for modeling data structures by\n        defining operations such as increment and addition,\n        and investigates the relationships between these systems and the natural numbers.\n    \\item[Chapter~\\ref{translation}]\n        demonstrates how to translate propositions and proofs from the natural\n        numbers \u00e0 la Peano to our representation of numbers with universe\n        constructions.\n    \\item[Chapter~\\ref{conclusion}] discusses some related topics and concludes\n        the thesis.\n\\end{description}\n\n\\end{document}\n", "meta": {"hexsha": "171eca083d490a3f4e6bd1e75b0a0c802ee298b5", "size": 24820, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/tex/introduction.tex", "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_issues_repo_path": "Thesis/tex/introduction.tex", "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", 
"max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/tex/introduction.tex", "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "avg_line_length": 41.8549747049, "max_line_length": 247, "alphanum_fraction": 0.6145850121, "num_tokens": 6958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8175744761936437, "lm_q1q2_score": 0.571692567023204}}
{"text": "\\chapter{Neutral Diversity and Population Structure.}\n%% this section was moved from the coalescent chapter\n\nHow does genetic differentiation build up between closely related\npopulations? How does migration act to reduce differentiation? These\nquestions are key to understand the conditions under which populations\n(and species) can start to genetically diverge from each other. To answer these\nquestions, we'll first consider this in the context of neutral alleles, and then return\nto think about selection and differentiation in later chapters. \nWe've considered neutral alleles drawn from a randomly-mating population, and divergence among alleles drawn from two distantly-related populations.\nWe'll now turn to consider divergence among more closely related\npopulations. In thinking about the coalescent within populations we\nmade the assumption that any pair of lineages is equally likely to\ncoalesce with each other. However, when there is population structure\nthis assumption is violated, as the parent for an allele is \nlikely to be found in the same population as it's child and so\nlineages in different populations are less likely to coalesce. \\\\\n\nTo develop models of about population structure we'll use the\nstatistic $\\fst$, which we introduced in Section \\ref{section:F_stats}\nof discussion of summarizing\npopulation structure in allele frequencies. We have previously written the measure of population structure\n$\\fst$ as\n\\begin{equation}\n\\fst = \\frac{H_T-H_S}{H_T}\n\\end{equation}\nwhere $H_S$ is the probability that two alleles sampled at random from a\nsubpopulation differ, and $H_T$ is the probability that two alleles\nsampled at random from the total population differ.\n\n\\section{A simple population split model}\nImagine a population of constant size of $N_e$ diploid individuals that\n$T$ generations in the past split into two daughter populations (sub-populations)\neach of size $N_e$ individuals, which do not subsequently exchange\nmigrants. In the current day we sample an equal number of alleles\nfrom both subpopulations.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width= 0.8 \\textwidth]{figures/drift_split.png}\n\\end{center}\n\\caption{Change in allele frequencies following a population split. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Loss_of_heterozyg_varying_pop.R}} \\label{fig:drift_split}\n\\end{figure}\n\nConsider a pair of alleles sampled within one of our\nsub-populations and think about their per site heterozygosity.\nThese alleles have experienced a population of size $N_e$\nand so the probability that they differ is $H_S \\approx 4N_e \\mu$ (assuming that $N_e \\mu \\ll 1$, using our equation  \\ref{eqn:hetero} for heterozygosity within a population ).\n\n%Consider a pair of alleles sampled within one of our\n%sub-populations, they have experienced a population of size $N_e$\n%and so the probability that they differ is $H_S = \\theta/(1+\\theta)$\n%(where $\\theta=4N_e\\mu$).\n\n\nThe heterozygosity in our total population is a little more tricky to\ncalculate. Assuming that we equally sample both sub-populations, when we draw two alleles from our total\nsample, $50\\%$ of the time they are drawn from the same\nsubpopulation and $50\\%$ of the time they are drawn from different\nsubpopulations. 
Therefore, our total heterozygosity is given by\n\\begin{equation}\nH_T = \\half H_S + \\half H_B\n\\end{equation}\nwhere $H_B$ is the probability that a pair of alleles drawn from our\ntwo different sub-populations differ from each other. A pair of\nalleles from different sub-populations cannot find a common ancestor with each other for at least $T$\ngenerations into the past as they are in distinct populations (not\nconnected by migration). Once our alleles find themselves back in the combined ancestral\npopulation it takes them on average $2N_e$ generations to coalesce. So the total opportunity for mutation between our pair of alleles sampled from different populations is $2 (T + 2N_e )$ generations of meioses, such that the probability that our pair of alleles is different is\n\\begin{equation}\nH_B \\approx 2\\mu ( T + 2 N_e) %\\left( 1-(1-\\mu)^{2T} \\right) + (1-\\mu)^{2T}\n  %\\frac{\\theta}{\\theta+1}\n\\end{equation}\n\n\n%The probability that one or other of them\n%mutates in this time is $1-(1-\\mu)^{2T}$. With probability\n%$(1-\\mu)^{2T} $ neither of our alleles mutate in the $T$ generations\n%back in time before they find themselves back in the combined ancestral\n%population. Conditional on failing to mutate before the combined ancestral\n%population, the probability that they do manage to mutate before\n%coalescing in that population of size $N_e$ is\n%$\\theta/(\\theta+1)$. Putting these components together\n%\\begin{equation}\n%H_B = \\left( 1-(1-\\mu)^{2T} \\right) + (1-\\mu)^{2T}\n % \\frac{\\theta}{\\theta+1}\n%\\end{equation}\nWe can plug this into our expression for $H_T$, and then that in turn\ninto $\\fst$. Doing so we find that\n\\begin{equation}\n\\fst \\approx \\frac{ \\mu T}{\\mu T +  4N_e\\mu }  = \\frac{ T}{ T +  4N_e\n} \\label{eqn:FST_split}\n\\end{equation}\nNote that $\\mu$ cancels out of this equation. In this simple toy model, $\\fst$\nis increasing because the amount of between-population diversity\nincreases with the divergence time of the two populations (initially\nlinearly with $T$). For $T \\ll 4N_e$, $\\fst \\approx \\nicefrac{T}{(4N_e)}$, so that differentiation will be higher\nbetween populations separated by long divergence times or with small\neffective population sizes.\n
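\nAs a quick numerical illustration of \\eqn \\eqref{eqn:FST_split} (the numbers here are ours, purely for intuition): two populations of effective size $N_e = 10,000$ each that split $T = 1000$ generations ago are barely differentiated, with $\\fst \\approx 1000/41,000 \\approx 0.024$, while the same pair separated for $T = 4N_e = 40,000$ generations reach $\\fst = \\nicefrac{1}{2}$.\n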
\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= 0.8 \\textwidth]{illustration_images/Genetic_drift/Orang/7971889392_d32b45668b_z.jpg}\n\\end{center}\n\\caption{Orangutan ({\\it Pongo}). \\BHLNC{Brehms thierleben, allgemeine kunde des thierreichs. Brehm, A. E.}{https://www.biodiversitylibrary.org/page/1008013\\#page/114/mode/1up}{MBLWHOI Library}} \\label{fig:Orang}\n\\end{marginfigure}\n\n\n\\begin{question}{}  %Pongo abelii (Sumatran) and Pongo pygmaeus\n  The genome-wide $F_{ST}$ between Bornean and Sumatran orangutan species samples  ({\\it Pongo pygmaeus} and {\\it Pongo abelii}) is $\\approx 0.37$ \\citep{locke2011}, representing a deep population split between the species (potentially with little subsequent gene flow). Within the populations the genome-wide average Watterson's $\\theta$ is $\\theta_W=1.4$kb$^{-1}$, estimated from the number of segregating sites. Assume a generation time of 20 years, and a mutation rate of $2 \\times 10^{-8}$ per base per generation. How far in the past did the two populations diverge?\n\\end{question}\n%To understand this better we can make a simple\n%approximation based on our mutation rate being very low, such that\n%$N_e \\mu \\ll 1$ so $H_S \\approx\n%4N_e\\mu$, and that $\\mu \\ll 1$ and $\\mu T \\ll 1$. Assuming this, then\n%\\begin{equation}\n%H_B \\approx 2 \\mu T + 4N_e\\mu.\n%\\end{equation}\n\n\n\n%\\begin{question}\n%The gorilla lineage split from the human-chimp lineage $\\sim$7 million years ago. Let\u2019s assume that this speciation event occurred instantaneously in allopatry with no subsequent gene flow. \\\\\n%{\\bf A)}\tWhat is the probability of that gorilla is not an outgroup to human and chimp at a single locus?\\\\\n%{\\bf B)}\tIt has been estimated that the gorilla lineage is not an outgroup at around ~30\\% of autosomal loci. What effective population size would you need to assume to explain this observation? Is that only plausible explanation?\\\\\n%{\\bf C)}\tThe gorilla lineage is an outgroup for large portions of the X chromosome, what is a plausible explanation for this finding?\n%\\end{question}\n\n\\section{A simple model of migration between an island and the mainland.}\nWe can also use the coalescent to think about patterns of\ndifferentiation under a simple model of migration-drift\nequilibrium. Let's consider a small island population that is relatively isolated\nfrom a large mainland population, where both of these populations\nare constant in size. We'll assume that the expected heterozygosity\nfor a pair of alleles sampled on the mainland is $H_M$.\n\nOur island has a population size\n$N_{I}$ that is very small compared to our mainland population.\nEach generation some low fraction $m$ of our individuals on the\nisland have migrant parents from the mainland the generation\nbefore. Our island may also send migrants back to the mainland, but\nthese are a drop in the ocean compared to the large population size on\nthe mainland and their effect can be ignored.\n\n\nIf we sample an allele on the island and trace its ancestral\nlineage backward in time, each generation our ancestral allele has a low\nprobability $m$ of being descended from the mainland in the preceding\ngeneration (if we go back far enough the allele eventually has to be\ndescended from an allele on the mainland). The probability that a pair of alleles sampled on the\nisland are descended from a shared recent common ancestral allele on the island is the\nprobability that our pair of alleles coalesces before either lineage\nmigrates. Our pair of lineages coalesces with probability\n$\\nicefrac{1}{2N_I} $ in a given generation and, assuming that the rate of migration is not\ntoo high ($m \\ll 1$), the probability that one or other lineage\nmigrates in a given generation is $2m$. So the probability that our\nlineages coalesce before they migrate is \n\\begin{equation}\n\\frac{\\nicefrac{1}{(2N_I)}}{\\nicefrac{1}{(2N_I)} +   2m},\n\\end{equation}\nwhich follows from an argument exactly analogous to the one for the probability that a pair of\nlineages coalesce before a mutation, \\eqn \\ref{eqn:coal_no_mut}, that we used in deriving the\nexpected heterozygosity.  \n\n%For example, the probability that our pair of alleles\n%coalesces $t+1$ generations back on the island is\n%\\begin{equation}\n%\\frac{1}{2N_I}(1-m)^{2(t+1)} \\left(1-\\frac{1}{2N_I} \\right)^{t} \\approx\n%\\frac{1}{2N_I} \\exp\\left( -t\\left (\\frac{1}{2N_I} + 2m\\right) \\right),\n%\\end{equation}\n%JRI: this is a long scary-looking equation. i think it's pretty straightforward but even I was momentarily surprised by it. i think many readers will see it and tune out and not try to work through it. i suggest you keep it but spend a few sentences working through it (1-m) not migrating times prob. 
of coalesceing t gens, etc.\n%with the approximation following from assuming that $m \\ll 1$ \\& $\\frac{1}{(2N_I)}\n% \\ll 1$ (note that this is very similar to our derivation of\n% heterozygosity above). The probability that our alleles coalesce before either one\n% of them migrates off the island, irrespective of the time, is\n% \\begin{equation}\n% \\int_0^{\\infty} \\frac{1}{2N_I} \\exp\\left( -t\\left (\\frac{1}{2N_I} +\n% 2m\\right) \\right) dt = \\frac{\\nicefrac{1}{(2N_I)}}{\\nicefrac{1}{(2N_I)} +\n%     2m}.\n% \\end{equation}\n\n% Let's assume that the mutation rate is very low such that it is very\n% unlikely that the pair of alleles mutate before they coalesce on the\n% island. Therefore, the only way that the alleles can be different from\n% each other is if one or other of them migrates to the mainland, which\n% happens with probability\n\nConditional on one or other of our alleles migrating to the mainland, both of our alleles represent independent draws from the mainland and so differ from each other with probability $H_M$. Therefore, the level of\nheterozygosity on the island is given by\n\\begin{equation}\n  H_I = \\left(1 - \\frac{\\nicefrac{1}{(2N_I)}}{\\nicefrac{1}{(2N_I)} + 2m} \\right)H_M\n\\end{equation}\nSo the reduction of heterozygosity on the island compared to the\nmainland is\n\\begin{equation}\n  F_{IM} = 1- \\frac{H_I}{H_M} = \\frac{\\nicefrac{\n      1}{(2N_I)}}{\\nicefrac{1}{(2N_I)} + 2m} = \\frac{ 1 }{1 + 4N_Im}. \\label{eqn:FIM}\n\\end{equation}\nThe level of inbreeding on the island compared to the mainland will be\nhigh if the migration rate is low and the effective population size of\nthe island is low, as allele frequencies on the island are drifting\nand diversity on the island is not being replenished by migration. The\nkey parameter here is the number of individuals on the island replaced by\nimmigrants from the mainland each generation ($N_I m$); even a few\nmigrants arriving on the island each generation are enough to prevent much\nallele frequency differentiation from building up.\n\nWe have framed this problem as being about the reduction in genetic diversity on the island compared to the mainland. However, if we consider collecting individuals on the island and mainland in proportion to their population\nsizes, the total level of heterozygosity would be $H_T=H_M$, as samples from our mainland would greatly outnumber those from our island. Therefore, considering the island as our sub-population, we have derived another simple model of $F_{ST}$.\n
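\nTo put rough numbers on \\eqn \\eqref{eqn:FIM} (an illustration of ours, not tied to any particular system): with one migrant arriving each generation, $N_Im = 1$ and $F_{IM} = \\nicefrac{1}{5} = 0.2$; with ten migrants a generation, $F_{IM} = \\nicefrac{1}{41} \\approx 0.024$; only when immigration drops to around one migrant every ten generations ($N_Im = 0.1$) does differentiation become substantial, with $F_{IM} = \\nicefrac{1}{1.4} \\approx 0.71$.\n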
\n\n\\begin{question}{}\nYou are investigating a small river population of sticklebacks, which receives infrequent migrants from a very large marine population. At a set of putatively neutral biallelic markers the freshwater population has frequencies:\\\\\n0.2, 0.7, 0.8\\\\\nat the same markers the marine population has frequencies:\\\\\n0.4, 0.5 and 0.7.\\\\\n From studying patterns of heterozygosity at a large collection of markers, you have estimated the long term effective size of your freshwater population is 2000 individuals.\\\\\nWhat is your estimate of the migration rate from the marine\npopulations into the river?\n\\end{question}\n\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{illustration_images/Genetic_drift/stickleback/threespine.jpg}\n\\end{center}\n\\caption{Three-spined stickleback ({\\it Gasterosteus aculeatus}).\n  \\newline \\noindent \\tiny{Jordan, David Starr (1907) Fishes, New York City, NY: Henry Holt and Company. Image from  \\href{https://commons.wikimedia.org/wiki/File:FMIB_51889_Three-spined_Stickleback,_Gasterosteus_aculeatus_L_Woods_Hole,_Mass.jpeg}{Wikimedia Commons} Public domain.}} \\label{fig:stickle}\n\\end{marginfigure}\n\n\\section{Incomplete Lineage Sorting}\n%Finally we turn to the interaction of the coalescent and\n%speciation and population splits. %%this is a guess\nOften when we're studying multiple populations, e.g. species, we're interested in the\nunderlying order in which populations split off from each other, and\nthe timing of these events. In the case where populations split off\nfrom each other with no subsequent gene flow, we can represent this\npattern of splitting by a population tree. Because it can take a long time for a polymorphism to drift up or down in frequency, multiple population splits may occur during the time an allele is still segregating. This can lead to incongruence between the overall population tree and the information about relationships present at\nindividual loci. \nAs we have seen in the previous chapters, the\nrelationships between sampled alleles at a locus are represented by\na coalescent tree, sometimes called a gene tree in the context of\nincomplete lineage sorting and more generally in phylogenetics.\nIn Figures \\ref{fig:NoILS_poly} and \\ref{fig:ILS_poly}\nwe show a simulation of three populations where the bottom population splits off from the other two first, followed by the subsequent splitting of the top and the middle populations. We start both simulations with a newly\nintroduced red allele being polymorphic in the combined ancestral population. The most likely fate of this allele is that it is\nquickly lost from the population, but sometimes the allele can drift up in frequency and be polymorphic when the populations split, as the allele in our two figures has done. If the allele is lost/fixed in the descendant populations before the next population split, our allele\nconfiguration will agree with the population tree, as it does in\nFigure  \\ref{fig:NoILS_poly}, and so too the gene tree will agree with the population tree (as shown in the left side of Figure \\ref{fig:ILS_cartoon}).\nHowever, if the allele persists as a polymorphism in the ancestral population until the top and the middle\npopulations split, then the allele could fix in one of these populations\nand not the other. Such an event leads to a substitution pattern\nthat disagrees with the population tree, as in Figure \\ref{fig:ILS_poly}.  If\nwe were to construct a phylogeny using the variation at this site we would see a disagreement between the gene tree and population tree. In Figure \\ref{fig:ILS_poly} alleles\ndrawn from the top and the bottom populations are\nnecessarily more closely related to each other than either is to an allele drawn from the middle population;\ntracing our allelic lineages from the top and bottom populations back through time, they must coalesce with each other before we reach the point where the red mutation arose; in contrast, a lineage from the middle population cannot have coalesced with either other lineage until past the time the red mutation arose. An example of this `incomplete lineage sorting' in terms of the underlying tree is shown on the right side of Figure \\ref{fig:ILS_cartoon}.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Genetic_drift/ILS/no_ILS.pdf}\n\\end{center}\n\\caption{An example of alleles assorting among three populations such that there is no incomplete lineage sorting. 
\\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Loss_of_heterozyg_varying_pop.R}} \\label{fig:NoILS_poly}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Genetic_drift/ILS/ILS.pdf}\n\\end{center}\n\\caption{An example of alleles assorting among three populations leading to incomplete lineage sorting. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Loss_of_heterozyg_varying_pop.R} } \\label{fig:ILS_poly}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Genetic_drift/ILS/ILS_coal_cartoon.pdf}\n\\end{center}\n\\caption{The population tree of three populations ((A, B), C), as the\n  white space blocked out between the black shapes. Two different coalescent trees\nrelating a single allele drawn from A, B, and C are shown with thinner lines.} \\label{fig:ILS_cartoon}\n\\end{figure}\n\nA natural pedigree analogy to incomplete lineage sorting is the fact that while two\nbiological siblings are more closely related to each other genealogically than\neither is to their cousin, at any given locus one of the siblings can\nshare an allele IBD with their cousin that they do not share with\ntheir own sibling, due to the randomness of Mendelian segregation down their\npedigree. In these cases, the average relatedness of the individuals/populations disagrees\nwith the patterns of relatedness at a particular locus.\n\n\n%\\begin{question}\n%Draw a pedigree of two full cousins, where one of the cousins has a full sibling. Label the four alleles in shared grandparents 1-4, and draw the situation where the cousins share 1 allele IBD, and the sibs share 0 alleles IBD.\n%\\end{question}\n%This analogy isn't perfect as full cousins dont share one parents, while all three of our populations share one ancestral, parental population far enough back. If we want to make this analogy more perfect we'd need to posit a bunch of inbreeding, but honestly drawinf that would make my head hurt a lot.\n\n%black-throated finch (Poephila cincta https://twitter.com/BioDivLibrary/status/1195096274048937984\n\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{illustration_images/Genetic_drift/Poephila_cincta_finch/Poephila_cincta_finch.png}\n\\end{center}\n\\caption{Banded grass Finch ({\\it P. cincta}). Illustration by\n  Elizabeth Gould.\n  \\newline \\noindent \\tiny{Birds of Australia Gould J. 1840. CC BY 4.0 uploaded to \\href{https://www.flickr.com/photos/vintage_illustration/41073432105/in/photolist-WpHYmR-WBF2Uc-Uoq2kW-QAW9bN-UL33uh-RmyMxt-PKmBjN-8xpKmQ-8xmHg2-H6mfAS-dVREvC-7Qepuh-222xkPQ-7QepzS-25zw6ak-W28Nyk-Qa6d32-ijh2uH-6V8Srd-25zw5ov-8xpKB7-nUQ33T-6V8TLy-nV88oH-25RuYmq-21TcFz1-R2Gdy4-GdWRgi-GdMmry-GdWAwD-9mjwfA-25oHWeJ-HMbQre-6CQWTH-7TjKu-e6SYV-25RuXU3-pdKtxc-9hYCzE-voj7Sj-zqSdQb-6SNmDE-a1datx-a1dasZ-PFfPXY-rR7QYw-GJ7rey-6SJmuB-7TjFh}{Flickr} by \\url{rawpixel.com}.}} \\label{fig:Poephila_cincta}\n\\end{marginfigure}\n\nAs an empirical example of incomplete lineage sorting, let's consider the work of\n\\citet{jennings:05} who sequenced a single allele from three\ndifferent species of Australian grass finches ({\\it Poephila}): two sister species\nof long-tailed finches ({\\it Poephila acuticauda} and {\\it P. hecki})\nand the black-throated finch ({\\it Poephila cincta}, see Figure \\ref{fig:Poephila_cincta}). 
They collected\nsequence data for 30 genes and constructed phylogenetic gene trees\nat each of these loci, resulting in 28 well-resolved gene trees.\nSixteen of the gene trees showed {\\it\n  P. acuticauda} and {\\it P. hecki} as sisters with {\\it P. cincta}\n(the tree ((A,H),C)~), while for twelve genes the gene tree was\ndiscordant with the population tree: for seven of their genes  {\\it P. hecki}\nfell as an outgroup to the other two and for five {\\it P. acuticauda} fell as an outgroup (the trees ((A,C),H) and\n((H,C),A) respectively).\n\n\nLet's use the coalescent to understand this discordance between gene\ntrees and species trees. Let's assume that two sister populations\n(A \\& B) split $t_1$ generations in the past, with a\ndeeper split from a third outgroup population (C) $t_2$ generations in the\npast. We'll assume that there's no gene flow among our populations\nafter each split. We can trace back the ancestral lineages of our three alleles. The\nfirst opportunity for the A \\& B lineages to coalesce is $t_1$\ngenerations ago. If they coalesce with each other in their shared\nancestral population before $t_2$ in the past (left side of\nFigure \\ref{fig:ILS_cartoon}) their gene tree will\ndefinitely agree with the population tree. So the only way for the gene\ntree to disagree with the population tree is for the A \\& B lineages\nto fail to coalesce in their shared ancestral population between $t_1$ and  $t_2$; this happens\nwith probability $\\left(1 - \\nicefrac{1}{2N}\\right)^{t_2-t_1}$. We'll\nget a discordant gene tree if A\n\\& B make it back to the shared ancestral population with C without\ncoalescing, and then one or the other of them coalesces with the C lineage before they coalesce with each other. This happens with probability $2/3$, as at the first\npairwise-coalescent event there are three possible pairs of lineages that could coalesce, two of\nwhich (A \\& C  and B \\& C ) result in a discordant tree. So the\nprobability that we get a coalescent tree that is discordant with the population tree is\n\\begin{equation}\n\\frac{2}{3} \\left(1 - \\nicefrac{1}{2N}\\right)^{t_2-t_1}. \\label{eqn:ILS_coal}\n\\end{equation}\nThis equation allows us to relate the fraction of loci showing\nincomplete lineage sorting to the population genetics parameters of\nthe ancestral population.\n%ancestral population\n\\begin{question}{}\nLet's return to \\citeauthor{jennings:05}'s Australian grass finches\nexample. They estimated that the ancestral population size of our two\nlong-tailed finches was four hundred thousand. What is your best\nestimate of the inter-speciation time, i.e. $t_2-t_1$?\n\\end{question}\n\nThe fraction of loci showing ILS, eqn \\eqref{eqn:ILS_coal}, depends on the time between\npopulation splits ($t_2-t_1$) relative to the ancestral population size ($N$). Thus we should expect gene-tree population-tree discordance when populations split in rapid succession and/or population sizes are large.\n
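\nTo get a feel for the numbers in \\eqn \\eqref{eqn:ILS_coal} (an illustration of ours): using the approximation $\\left(1 - \\nicefrac{1}{2N}\\right)^{t_2-t_1} \\approx e^{-(t_2-t_1)/(2N)}$, an ancestral population of size $N = 100,000$ whose splits come $t_2-t_1 = 100,000$ generations apart gives a discordance probability of $\\nicefrac{2}{3} \\times e^{-0.5} \\approx 0.40$, i.e. roughly $40\\%$ of loci.\n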
\n\n\\paragraph{Testing for gene flow.}\n\nWe often want to test whether gene flow has occurred between populations. For example, we might want to establish a case\nthat interbreeding between humans and Neanderthals occurred or demonstrate that\ngene flow occurred after two populations began to speciate.\nA broad range of methods have been designed to test for gene flow and\nto estimate gene flow rates based on neutral expectations. Here we'll briefly just discuss one method based on\nsome simple coalescent ideas.  Above we assumed that gene-tree population-tree discordance was due to\nincomplete lineage sorting arising from populations splitting in rapid\nsuccession. However, gene flow among populations can also lead to gene-tree discordance.\nWhile both ILS and gene flow can lead to discordance, under\nsimplifying assumptions, ILS implies more symmetry in how these\ndiscordances manifest themselves.\\\\\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Genetic_drift/ILS/ABBA_BABA_coal.pdf}\n\\end{center}\n\\caption{ Incomplete lineage sorting between our single lineages sampled from populations 1, 2, and 3. Population 4 is a distant outgroup\nsuch that the lineages from 1 through 3 always coalesce with each other before any coalesce with 4. The small dash on the branch indicates the mutation A$\\rightarrow$B occurring, giving rise to the\nABBA or BABA mutational pattern shown at the bottom. } \\label{fig:ABBA_BABA}\n\\end{figure}\n%JRI: here and in text you refer to populations ABCD but figure is labelled ``species'' and 1234. I think ABCD for populations and AB for alleles is confusing. I don't know if you need A/B for alleles. Why not use A1A2 or just 1/2 as you have previously? sure you miss the fun mnemonic, but it's clearer for a novice reader\n\nTake a look at Figure \\ref{fig:ABBA_BABA}. In both cases the lineages from 1 and 2 fail to coalesce in\ntheir initial shared ancestral population, and one or the other of them\ncoalesces with the lineage from 3 before they coalesce with each other. Each option is equally\nlikely; therefore the mutational patterns ABBA and BABA are equally\nlikely to occur under ILS, but differential gene flow will break the symmetry.  \\sidenote{Here we have to assume no structure in the ancestral population.}\n\nTo test for this effect of gene flow, we can sample a sequence from each of our 4 populations and count up the number of sites that show the two mutational patterns consistent with the gene-tree discordance $n_{ABBA}$ and\n$n_{BABA}$ and calculate\n\\begin{equation}\n  \\frac{n_{ABBA}-n_{BABA}}{n_{ABBA}+n_{BABA}} \\label{eqn:ABBA_BABA}\n\\end{equation}\nThis statistic will have expectation zero if the gene-tree discordance\nis due to ILS. If there is gene flow between 2 and 3 that\nexcludes 1 (see Figure \\ref{fig:ABBA_BABA_introgression}), there will be an excess of ABBAs and so the ABBA-BABA\nstatistic will be skewed positively (and conversely it'll skew negatively if gene\nflow occurred from\n3 into 1). In practice, whether this is significantly different from\nzero is judged by constructing a  Z statistic with a standard error\nfound by recalculating the statistic on computationally resampled\ndatasets of large genomic windows.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Genetic_drift/ILS/ABBA_BABA_introgression.pdf}\n\\end{center}\n\\caption{A similar scenario to Figure \\ref{fig:ABBA_BABA} but now gene\n  flow has occurred between populations 2 and 3, as depicted by the white gap\n  having opened up between 2 and 3. Under\n  this model there is an excess of ABBAs,  as they can arise both by\n  incomplete lineage sorting (left) and by the lineages moving between\n  2 and 3 by gene flow and coalescing before the ancestral 1-2-3\n  population (right). 
BABAs can still occur but only by incomplete\n  lineage sorting as in the right side of Figure \\ref{fig:ABBA_BABA}.} \\label{fig:ABBA_BABA_introgression}\n\\end{figure}\n\n\nThe big cat ({\\it Panthera}) clade is a recent radiation, with a\nconsiderable amount of shared genetic variation still\nsegregating across the group. \\citet{figueiro2017genome} examined\npatterns of genomic divergence, incomplete lineage sorting, and gene\nflow across this clade using  ABBA-BABA tests with a domestic cat sequence as the\noutgroup. One example, for snow leopard, tiger, and lion, is shown\nbelow. Snow leopards and tigers are known to be more closely related to each other than\neither is to lions. \\citeauthor{figueiro2017genome} counted SNPs where the tiger and lion\nsequences shared a derived allele to the exclusion of snow leopard (ABBA)\nand those where the snow leopard and lion\nsequences shared a derived allele to the exclusion of tiger\n(BABA) and found:\\\\\n\\begin{tabular}{ccccc}\nSnow leopard &Tiger &  Lion & Domestic cat & Counts \\\\\nA & B & B & A &   1,434,106 \\\\\nB & A & B & A &   1,250,134\\\\\n\\end{tabular}\\\\\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{illustration_images/Genetic_drift/Big_cat_ILS/Big_cat_ILS.pdf}\n\\end{center}\n\\caption{A simple schematic of the population history of snow leopard\n  ({\\it Panthera uncia}), tiger ({\\it Panthera tigris}),\n  and lion ({\\it Panthera leo}) species. The arrow shows gene flow. \\BHLNC{Images cropped from: The game animals of India, Burma, Malaya, and\n    Tibet (1907). Lydekker, R}{https://www.biodiversitylibrary.org/ia/gameanimalsofind00lyde/\\#page/311/mode/1up}{Smithsonian Libraries} } \\label{fig:Big_cat_ILS}\n\\end{marginfigure}\nThe calculated ABBA-BABA\nstatistic, \\eqn \\eqref{eqn:ABBA_BABA}, is $0.07 \\pm 0.0026~s.e.$,\nwhich  is highly significantly different from zero.  The direction of\nthis statistic, with a strong excess of derived SNPs where the tiger sequence is closer to\nthe lion sequence than to the snow leopard sequence, is consistent with gene flow\nbetween tigers and lions after tigers split off from snow\nleopards (Figure \\ref{fig:Big_cat_ILS}). Historically, lions had a large geographic range, and\nso this interbreeding deep in the past is plausible.\n
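\nSpelling out the arithmetic behind that value, plugging the counts in the table above into \\eqn \\eqref{eqn:ABBA_BABA}:\n\\begin{equation}\n\\frac{1,434,106 - 1,250,134}{1,434,106 + 1,250,134} = \\frac{183,972}{2,684,240} \\approx 0.069,\n\\end{equation}\nin line with the reported value of $0.07$.\n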
\n\n%Tiger Snow leopard Lion 1,250,134 1,434,106 -0.07 -0.07 0.0026 -26.0 \n\n% Lion Jaguar Snow leopard 1,219,639 1,921,731 -0.22 -0.22 0.0043 -52.5 0\n\n%https://twitter.com/arkivist/status/1187049284992143360\n\n%% Isolation by distance\n% https://en.wikipedia.org/wiki/Tobler%27s_first_law_of_geography\n%  Waldo Tobler, is "everything is related to everything else, but near things are more related than distant things."\n\n\\begin{ChapterSummary}\n\\item We developed simple models of neutral population structure and\n  developed expectations of allele frequency differentiation as\n  measured by $\\fst$ under these models. \n\\item Under a simple model of  population isolation, allele frequency\n  differentiation builds up due to genetic drift in proportion to the\n  split time divided by the population size.\n  \\item Only a small number of migrants between populations per generation is sufficient\n    to prevent the build up of neutral allele frequency\n    differentiation.\n  \\item Incomplete lineage sorting of ancestral variation is one source of disagreement\n    between population/species-trees and gene trees. It occurs when the split times between\n    populations are in quick enough succession that lineages do not\n    have time to coalesce in the shared ancestral populations of more closely related populations. \n    \\item Gene flow can also lead to patterns similar to incomplete\n      lineage sorting. We can test between a model of incomplete\n      lineage sorting and gene flow using tests such as ABBA-BABA.\n\\end{ChapterSummary}\n\n\n\\begin{question} {}\nYou are studying two species of fish (red fish \\& blue fish), and\nsequencing a set of pseudogenes.  Here are some facts you've collected:\n \\begin{itemize}\n \\item A third species of fish (black fish) diverged from\n the common ancestor of red/blue fish 3 million years ago. \n Assume 1 fish generation per year. Between red fish and black fish there is\n on average 1 substitution every 100 basepairs. \n \\item In these pseudogenes, you estimate that heterozygosity within\n red fish is $10^{-4}$ per basepair. \n \\item $F_{ST}$ between red fish and blue fish is 0.1.\n \\item There has been no gene flow among any of these species after\n   they split.\n \\end{itemize}\n{\\bf A)} What is the per base mutation rate?\n\n{\\bf B)} What is the effective population size of red fish?\\\\\n\n{\\bf C)} When did the red and blue fish populations split? Assume they\nhave equal population sizes. \n\\end{question}\n\n\n  \\begin{marginfigure}\n  \\begin{center}\n    \\includegraphics[width= \\textwidth]{figures/Genetic_drift/ILS/ABBA_chimp_out.pdf}\n  \\end{center}\n  \\caption{ A simple population tree diagram, not to scale, of human\n    populations and Neanderthals. Assume, for the sake of the question, that there is no gene flow\nbetween populations after they split. Assume that the African and European populations split $\\sim$100 thousand years\nago ($t_{OA}$). Neanderthals split from the modern human populations $\\sim$700\nthousand years ago ($t_N$). The population ancestral to humans and\nchimps split 6.5 million years ago ($t_C$).} \\label{fig:ABBA_Neanderthal}\n\\end{marginfigure}\n  \n\\begin{question}{}\nWith reference to the population tree shown in Figure \\ref{fig:ABBA_Neanderthal}:\n{\\bf A)} On the population tree the dashed lines show an\nincomplete gene phylogeny (for a single allele drawn from each population). At a locus, the Chimp lineage has the A allele. Complete\na gene genealogy in a way that would be consistent with Neanderthal and European lineages sharing a derived B allele, to the\nexclusion of the African lineage (ABBA). Mark the branch that a mutation from $A\n\\rightarrow B$  must occur on in order to generate this pattern\n(assuming a single mutation).\\\\\n{\\bf B)} What is the probability of observing a gene tree consistent\nwith the one you drew in part A under the coalescent model? Hint:\nRemember that incomplete lineage sorting is due to failing to coalesce\nwithin an ancestral population. \\\\\nAssume a generation time of 30 years, and an\neffective population size of 10,000 in all populations. 
Further, assume that lineages\nsampled from the\nNeanderthal and modern human populations will definitely coalesce with\neach other before the common ancestral population with chimp.\n\\end{question}\n\n\n%%D stats question in goose https://avianhybrids.wordpress.com/2019/11/09/d-statistics-for-dummies-a-simple-test-for-introgression/", "meta": {"hexsha": "b11be72872317e047f445b5c4c3524b29db2fdc2", "size": 33424, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/genetic_drift_pop_structure.tex", "max_stars_repo_name": "mtomasini/popgen-notes", "max_stars_repo_head_hexsha": "481a9fd5c6d90927e5fbcb006f6990f2f0804242", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 471, "max_stars_repo_stars_event_min_datetime": "2015-02-04T23:51:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-20T15:40:37.000Z", "max_issues_repo_path": "Chapters/genetic_drift_pop_structure.tex", "max_issues_repo_name": "mtomasini/popgen-notes", "max_issues_repo_head_hexsha": "481a9fd5c6d90927e5fbcb006f6990f2f0804242", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2015-12-03T23:14:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-03T18:10:54.000Z", "max_forks_repo_path": "Chapters/genetic_drift_pop_structure.tex", "max_forks_repo_name": "mtomasini/popgen-notes", "max_forks_repo_head_hexsha": "481a9fd5c6d90927e5fbcb006f6990f2f0804242", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 103, "max_forks_repo_forks_event_min_datetime": "2015-02-05T01:36:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T15:08:35.000Z", "avg_line_length": 60.9927007299, "max_line_length": 573, "alphanum_fraction": 0.7834191, "num_tokens": 8745, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7853085708384736, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.5716853593858495}}
{"text": "\\documentclass[twoside]{IEEEtran}\n\n\\usepackage{amsmath}\n\\usepackage{tikz}\n\n\\bibliographystyle{IEEEtran}\n\n\\newcommand{\\mult}{\\nonscript\\:}\n\\newcommand{\\opoff}{{\\operatorname{off}}}\n\\newcommand{\\opout}{{\\operatorname{out}}}\n\\newcommand{\\opth}{{\\operatorname{th}}}\n\\newcommand{\\optotal}{{\\operatorname{total}}}\n\\newcommand{\\opx}{{\\operatorname{x}}}\n\n\\begin{document}\n\n\\title{An Extension of the Construction of\\\\Power Conservative Equivalent Circuits for\\\\DC Networks containing Current Sources}\n\\author{Julian Ahrens}\n\\maketitle\n\n\\section{Introduction}\n\\label{section_introduction}\n\nBy the Th\\'{e}venin Theorem, any electrical network between two terminals, containing only voltage sources, current sources, and resistors, can be replaced by an equivalent circuit consisting of a voltage source $V_{\\opth}$ and a series resistance $R_{\\opth}$.\nHowever, the power dissipated by the series resistance is not necessarily the same as the combined power dissipated by the resistors of the original network.\nIn a recent paper~\\cite{Bar20}, it was shown that, in the case where the original network does not contain any current sources, a \\emph{power conservative} equivalent circuit can be constructed.\nThe term `power conservative' refers to the property of the equivalent circuit, that it exhibits the same amount of Joule heating as the original network under all conditions, i.e., the total power dissipated by the resistors of the equivalent circuit is the same as the total power dissipated by the resistors of the original network.\nIn the following, an extension of this construction to the case of networks containing both voltage and current sources is developed.\nThe construction is divided into two parts: in Section~\\ref{section_power} an analysis of the power dissipated in the original network is performed and in Section~\\ref{section_circuit} the equivalent circuit is introduced and the values of its elements determined.\n\n\\section{Power dissipation in a general network}\n\\label{section_power}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/loaded_network.tex}\n  \\caption{Electrical network with current load}\n  \\label{figure_loaded_network}\n\\end{figure}\n\nFor the analysis of the power dissipated by the resistors in an electrical network consisting only of voltage sources, current sources, and resistors, a current load $I_{\\opout}$ is attached to the two terminals of the network as shown in \\figurename~\\ref{figure_loaded_network}.\nIn the following, the voltage across this load is denoted by $V_{\\opout}$.\nFurthermore, let $R_{k}$, $k = 0, \\ldots, n - 1$, denote the values of the resistors in the network and let $V_{\\opth}$ and $R_{\\opth}$ denote the values of the voltage source and resistor of the Th\\'{e}venin equivalent of the network, respectively.\nThe output voltage of the network is thus given by\n\\begin{equation}\n    V_{\\opout}\n  = V_{\\opth} - I_{\\opout} \\mult R_{\\opth}\n  \\enskip .\n  \\label{eq_v_out}\n\\end{equation}\n\nIn order to simplify the analysis, only the case where $R_{\\opth}$ is non-zero will be treated.\nThe case where $R_{\\opth}$ is zero is fairly simple as in that case the power dissipated in the resistors of the network is independent of the load current $I_{\\opout}$.\n\nNow examine a modified version of the network where all the sources are switched off, i.e., all voltage sources are replaced by a short circuit and all current sources are replaced by an open circuit.\nThe current 
through each of the resistors $R_{k}$ in this modified network is proportional to the current flowing through the terminals of the network, i.e., for each $k$, there exists a unique constant $c_{k}$ such that the current through the resistor $R_{k}$ is equal to $I_{k, \\opoff} = I_{\\opout} \\mult c_{k}$.\nIn particular, the power dissipated by $R_{k}$ is equal to\n\\begin{equation}\n    P_{k, \\opoff}\n  = I_{k, \\opoff}^{2} \\mult R_{k}\n  = (I_{\\opout} \\mult c_{k})^{2} \\mult R_{k}\n  \\enskip .\n  \\label{eq_p_k_off}\n\\end{equation}\nFurthermore, since the modified network consists purely of resistors and its equivalent resistance is, by the Th\\'{e}venin theorem, equal to $R_{\\opth}$, the total power dissipated by the entire network is given by\n\\begin{equation}\n    P_{\\optotal, \\opoff}\n  = I_{\\opout}^{2} \\mult R_{\\opth}\n  \\enskip .\n  \\label{eq_p_total_off}\n\\end{equation}\nNow since the total power dissipated by the entire network has to be equal to the sum of all the power dissipated in each individual resistor, combining \\eqref{eq_p_k_off}~and~\\eqref{eq_p_total_off}, we obtain the equation\n\\begin{align*}\n         I_{\\opout}^{2} \\mult R_{\\opth}\n     & = P_{\\optotal, \\opoff}\n  \\\\ & = \\sum_{k} P_{k, \\opoff}\n       = \\sum_{k} (I_{\\opout} \\mult c_{k})^{2} \\mult R_{k}\n       = I_{\\opout}^{2} \\mult \\sum_{k} c_{k}^{2} \\mult R_{k}\n\\end{align*}\nand therefore\n\\begin{equation}\n    \\sum_{k} c_{k}^{2} \\mult R_{k}\n  = R_{\\opth}\n  \\enskip .\n  \\label{eq_leading_coefficient}\n\\end{equation}\n\nReturning to the original network, by the superposition principle, for each $k$, the current through the resistor $R_{k}$ is given by $I_{k} = I_{k, \\opoff} + I_{k, 0} = I_{\\opout} \\mult c_{k} + I_{k, 0}$, where $I_{k, 0}$ denotes the current through the resistor at zero load current $I_{\\opout}$.\nCorrespondingly, for each $k$, the power dissipated by the resistor $R_{k}$ is equal to\n\\begin{align*}\n         P_{k}\n       = I_{k}^{2} \\mult R_{k}\n     & = (I_{\\opout} \\mult c_{k} + I_{k, 0})^{2} \\mult R_{k}\n  \\\\ & = I_{\\opout}^{2} \\mult c_{k}^{2} \\mult R_{k} + I_{\\opout} \\mult c_{k} \\mult I_{k, 0} \\cdot 2 \\mult R_{k} + I_{k, 0}^{2} \\mult R_{k}\n  \\enskip .\n\\end{align*}\nSumming over all the resistors and applying~\\eqref{eq_leading_coefficient} yields the total power dissipated by the network:\n\\begin{align}\n         P_{\\optotal}\n     & = \\sum_{k} P_{k}\n  \\nonumber \\\\ & = \\sum_{k} \\bigl( I_{\\opout}^{2} \\mult c_{k}^{2} \\mult R_{k} + I_{\\opout} \\mult c_{k} \\mult I_{k, 0} \\cdot 2 \\mult R_{k} + I_{k, 0}^{2} \\mult R_{k} \\bigr)\n  \\nonumber \\\\ & = I_{\\opout}^{2} \\mult \\sum_{k} c_{k}^{2} \\mult R_{k} + I_{\\opout} \\cdot 2 \\mult \\sum_{k} c_{k} \\mult I_{k, 0} \\mult R_{k} \\nonumber \\\\ & \\qquad + \\sum_{k} I_{k, 0}^{2} \\mult R_{k}\n  \\nonumber \\\\ & = I_{\\opout}^{2} \\mult R_{\\opth} + I_{\\opout} \\cdot 2 \\mult \\sum_{k} c_{k} \\mult I_{k, 0} \\mult R_{k} + \\sum_{k} I_{k, 0}^{2} \\mult R_{k}\n  \\enskip .\n  \\label{eq_p_total}\n\\end{align}\nThe key observation to be made here is that the power dissipated in the network is a quadratic polynomial in $I_{\\opout}$ with leading coefficient $R_{\\opth}$.\n\n\\section{Power conservative equivalent circuit}\n\\label{section_circuit}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/equivalent_circuit.tex}\n  \\caption{Power conservative equivalent circuit with current load}\n  \\label{figure_equivalent_circuit}\n\\end{figure}\n\nIn order to obtain a 
power conservative equivalent circuit, a circuit exhibiting both the same electrical properties as the original network and the same power dissipation in its resistors has to be found.\nThe Th\\'{e}venin theorem reduces the problem of electrical equivalence to the problem of finding a circuit satisfying~\\eqref{eq_v_out}.\nAs shown in Section~\\ref{section_power}, the power dissipated by the resistors in the original network is given by~\\eqref{eq_p_total} and hence the problem of finding a power conservative equivalent circuit reduces to the problem of finding an electrically equivalent circuit such that the total power dissipated by its resistors is equal to the right-hand side of~\\eqref{eq_p_total}.\n\nOne possibility of obtaining a power conservative equivalent circuit is shown in \\figurename~\\ref{figure_equivalent_circuit}.\nThe resistor $R_{\\opx}$ contained in this circuit does not contribute anything to the electrical properties of the circuit, its sole purpose being the dissipation of a fixed amount of power which in the following will be referred to as $P_{\\opx}$.\nThe current flowing through the resistor $R_{\\opth}$ is given by $I_{\\opout} - I_{\\opx}$, resulting in an output voltage of\n\\begin{equation}\n    V_{\\opout}\n  = V_{\\opx} - (I_{\\opout} - I_{\\opx}) \\mult R_{\\opth}\n  = V_{\\opx} - I_{\\opout} \\mult R_{\\opth} + I_{\\opx} \\mult R_{\\opth}\n  \\enskip .\n  \\label{eq_equivalent_v_out}\n\\end{equation}\nFurthermore, the power dissipated by $R_{\\opth}$ is given by $(I_{\\opout} - I_{\\opx})^{2} \\mult R_{\\opth}$ and therefore the total power dissipated by the resistors $R_{\\opth}$ and $R_{\\opx}$ amounts to\n\\begin{align}\n                   P_{\\optotal}\n               & = (I_{\\opout} - I_{\\opx})^{2} \\mult R_{\\opth} + P_{\\opx}\n  \\nonumber \\\\ & = I_{\\opout}^{2} \\mult R_{\\opth} - I_{\\opout} \\mult I_{\\opx} \\cdot 2 \\mult R_{\\opth} + I_{\\opx}^{2} \\mult R_{\\opth} + P_{\\opx}\n  \\enskip .\n  \\label{eq_equivalent_p_total}\n\\end{align}\n\nIn order to achieve power conservative equivalence to the original network analyzed in Section~\\ref{section_power}, the values of the elements $V_{\\opx}$, $I_{\\opx}$, and $R_{\\opx}$ have to be chosen such that the right-hand sides of equations \\eqref{eq_v_out}~and~\\eqref{eq_equivalent_v_out} and the right-hand sides of equations \\eqref{eq_p_total}~and~\\eqref{eq_equivalent_p_total} become equal.\nThe right-hand sides of \\eqref{eq_p_total}~and~\\eqref{eq_equivalent_p_total} are both quadratic polynomials in $I_{\\opout}$ with leading coefficient $R_{\\opth}$ and achieving equality reduces to equating the remaining coefficients, yielding\n\\begin{displaymath}\n    2 \\mult \\sum_{k} c_{k} \\mult I_{k, 0} \\mult R_{k}\n  = -I_{\\opx} \\cdot 2 \\mult R_{\\opth}\n\\end{displaymath}\nand hence\n\\begin{equation}\n    I_{\\opx}\n  = -\\sum_{k} c_{k} \\mult I_{k, 0} \\mult \\frac{R_{k}}{R_{\\opth}}\n  \\label{eq_i_x}\n\\end{equation}\nand\n\\begin{displaymath}\n    \\sum_{k} I_{k, 0}^{2} \\mult R_{k}\n  = I_{\\opx}^{2} \\mult R_{\\opth} + P_{\\opx}\n\\end{displaymath}\nand hence\n\\begin{equation}\n    P_{\\opx}\n  = \\sum_{k} I_{k, 0}^{2} \\mult R_{k} - I_{\\opx}^{2} \\mult R_{\\opth}\n  \\enskip .\n  \\label{eq_p_x}\n\\end{equation}\nFinally, in view of \\eqref{eq_v_out}~and~\\eqref{eq_equivalent_v_out}, $V_{\\opx}$ has to be chosen such that\n\\begin{displaymath}\n    V_{\\opth} - I_{\\opout} \\mult R_{\\opth}\n  = V_{\\opx} - I_{\\opout} \\mult R_{\\opth} + I_{\\opx} \\mult R_{\\opth}\n  \\enskip 
,\n\\end{displaymath}\ni.e.,\n\\begin{equation}\n    V_{\\opx}\n  = V_{\\opth} - I_{\\opx} \\mult R_{\\opth}\n  \\enskip .\n  \\label{eq_v_x}\n\\end{equation}\n\nAll that remains is to choose the resistor $R_{\\opx}$ such that the power dissipated by it is equal to $P_{\\opx}$.\nHowever, this is only possible in the case where $P_{\\opx}$ and $V_{\\opx}$ are both non-zero, as then $R_{\\opx}$ can be chosen to be $V_{\\opx}^{2} / P_{\\opx}$, leading to a power dissipation of $V_{\\opx}^{2} / R_{\\opx} = P_{\\opx}$.\nIn the remaining cases, the circuit has to be modified slightly.\nIn the case where $P_{\\opx}$ is zero, $R_{\\opx}$ can simply be replaced by an open circuit.\nIn the case where $P_{\\opx}$ is non-zero and $V_{\\opx}$ is zero, $R_{\\opx}$ and $V_{\\opx}$ can be replaced by a short circuit and a separate source with a suitable resistor connected across its terminals added to the circuit so that the power dissipated in that resistor is equal to $P_{\\opx}$.\nNote that none of these modifications increase the number of elements in the circuit, i.e., a power conservative equivalent circuit can always be constructed using at most four elements.\n\n\\bibliography{main}\n\n\\end{document}\n\n", "meta": {"hexsha": "96eb0c3193f80d5d55555a3d230b7e13071aca46", "size": 11315, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "julian-ahrens/thevenin_extension", "max_stars_repo_head_hexsha": "e5fa806efc300588c68c06c618219ccb38733cca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.tex", "max_issues_repo_name": "julian-ahrens/thevenin_extension", "max_issues_repo_head_hexsha": "e5fa806efc300588c68c06c618219ccb38733cca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "julian-ahrens/thevenin_extension", "max_forks_repo_head_hexsha": "e5fa806efc300588c68c06c618219ccb38733cca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.2408376963, "max_line_length": 397, "alphanum_fraction": 0.7140963323, "num_tokens": 3602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5716853574322169}}
{"text": "%-----------------------------------------------------------------------------------------------\n\\section{Reversible Stirling Engine Model}\n\n    In     this     section,     Cheng     and     Yang's     reversible     Stirling     engine\n    model~\\cite{2012-ChengCH+YangHS-ApEnergy} is presented. Only a concise subset of  the  model\n    that is useful for the goal of the present work is being presented.\n\n    Moreover, the model nomenclature  is  kept  the  same  as  the  one  used  in  the  original\n    reference~\\cite{2012-ChengCH+YangHS-ApEnergy}, so as to allow readers to  easily  lookup  in\n    the original work the parts herein abridged. There is only a minute change in  the  employed\n    nomenclature for conciseness, which consists in annotating the Greek letter  of  the  engine\n    configuration above a symbol, so as to specify that symbol's expression for that  particular\n    engine configuration, starting on Eq.~(\\ref{eq:Ve}).\n\n    \\begin{figure}[ht]\n        \\centering\n        \\includegraphics[width=0.7\\columnwidth]{fig/stirling_engine_alpha_color.pdf}\n        \\caption{Schematic representation of an ideal Stirling engine of the $\\alpha$-type.  The\n            `eHX'  and  the  `cHX'  are  the  `expansion'  and  `compression'  heat  exchangers,\n            respectively of high- and low- temperatures. The power pistons and  the  regenerator\n            are indicated. Volumes are not to scale.}\n        \\label{fig:alpha}\n    \\end{figure}\n\n    \\begin{figure}[ht]\n        \\centering\n        \\includegraphics[width=0.7\\columnwidth]{fig/stirling_engine_beta_color.pdf}\n        \\caption{Schematic representation of an ideal Stirling engine of the  $\\beta$-type.  The\n            `eHX'  and  the  `cHX'  are  the  `expansion'  and  `compression'  heat  exchangers,\n            respectively of high- and low- temperatures. The power and displacement pistons  and\n            the regenerator are indicated. Volumes are not to scale.}\n        \\label{fig:beta}\n    \\end{figure}\n\n    Let $V_{cs}$ and $V_{es}$ be piston and displacer  sweep  volumes  in  the  operation  of  a\n    reversible Stirling engine of either $\\alpha$-,  $\\beta$-,  or  $\\gamma$-type---in  Stirling\n    engine nomenclature, it is common to name high- and low-  temperature  spaces  and  adjacent\n    moving   parts   as    `expansion',    and    `compression'    ones,    respectively,    see\n    Figures~\\ref{fig:alpha}--\\ref{fig:gamma} that illustrate the `expansion', and  `compression'\n    heat exchangers; hence the `$c$' and `$e$' subscripts.\n\n    Let further $V_d$ be the so-called engine ``dead volume'', which is composed by  piston  and\n    displacer minimum clearances and regenerator volumes, i.e., $V_d =  V_{emin}  +  V_{cmin}  +\n    V_r$, respectively.\n\n    \\begin{figure}[ht]\n        \\centering\n        \\includegraphics[width=0.7\\columnwidth]{fig/stirling_engine_gamma_color.pdf}\n        \\caption{Schematic representation of an ideal Stirling engine of the $\\gamma$-type.  The\n            `eHX'  and  the  `cHX'  are  the  `expansion'  and  `compression'  heat  exchangers,\n            respectively of high- and low- temperatures. The power and displacement pistons  and\n            the regenerator are indicated. 
Volumes are not to scale.}\n        \\label{fig:gamma}\n    \\end{figure}\n\n    Let $\\phi$ be the engine's crankshaft angle, and $\\alpha$ be the expansion-to-compression piston-piston (for the $\\alpha$-type) or piston-displacer (for the $\\beta$- and $\\gamma$-types) phase angle. For all engine configurations the expansion volume, $V_e(\\phi)$, is:\n    %\n    \\begin{align}\n        \\label{eq:Ve}\n        V_e(\\phi) &= V^{\\alpha}_e(\\phi) = V^{\\beta}_e(\\phi) = V^{\\gamma}_e(\\phi) \\nonumber\\\\\n                  &= V_{emin} + \\frac{1}{2} V_{es}(1 + \\sin(\\phi + \\alpha)).\n    \\end{align}\n\n    The compression volume, $V_c(\\phi)$, is configuration dependent:\n    %\n    \\begin{align}\n        \\label{eq:Vca}\n        V^{\\alpha}_c(\\phi) &= V_{cmin} + \\frac{1}{2} V_{es} \\kappa (1 + \\sin(\\phi)), \\\\\n        \\label{eq:Vcb}\n        V^{\\beta}_c(\\phi)  &= V_{cmin} +\n            \\frac{1}{2} V_{es} (\\sqrt{1 - 2\\kappa\\cos\\alpha + \\kappa^2} \\nonumber\\\\\n                           &+ \\kappa\\sin\\phi - \\sin(\\phi + \\alpha)), \\\\\n        \\label{eq:Vcg}\n        V^{\\gamma}_c(\\phi) &= V_{cmin} + \\frac{1}{2} V_{es}\n            (1 + \\kappa + \\kappa\\sin\\phi \\nonumber\\\\\n                           &- \\sin(\\phi + \\alpha)),\n    \\end{align}\n    %\n    \\noindent where $\\kappa \\equiv V_{cs}/V_{es}$ is the engine sweep volume ratio.\n\n    It is worth noting that each engine has its very own distinct volume dynamics, making these engines suitable for verifying Carnot's general proposition from the point of view of ``different internal details''.\n\n    Let the total engine volume---the instantaneous volume occupied by the engine's working fluid,\n    %\n    \\begin{equation}\n        V(\\phi) = V_e(\\phi) + V_c(\\phi) + V_r,\n    \\end{equation}\n    %\n    \\noindent be written in terms of a configuration-dependent dimensionless function most generally written as $\\Phi(\\phi | \\chi, \\kappa, \\alpha)$, i.e., in terms of parameters $\\chi$, $\\kappa$ and $\\alpha$, so that\n    %\n    \\begin{equation}\n        V(\\phi) = V_{es}\\Phi(\\phi | \\chi, \\kappa, \\alpha),\n    \\end{equation}\n    %\n    \\noindent with:\n    %\n    \\begin{align}\n        \\label{eq:Phia}\n        \\Phi^{\\alpha}(\\phi | \\chi, \\kappa, \\alpha) &= \\chi + \\frac{1}{2}(1 + \\kappa) \\nonumber\\\\\n            &+ \\frac{1}{2}(\\sin(\\phi + \\alpha) + \\kappa\\sin\\phi),\\\\\n        \\label{eq:Phib}\n        \\Phi^{\\beta}(\\phi | \\chi, \\kappa, \\alpha) &= \\chi + \\frac{1}{2}\\kappa\\sin\\phi +\n            \\frac{1}{2}(1 \\nonumber\\\\\n            &+ \\sqrt{1 - 2\\kappa\\cos\\alpha + \\kappa^2}),\\\\\n        \\label{eq:Phig}\n        \\Phi^{\\gamma}(\\phi | \\chi, \\kappa) &= \\chi + \\frac{1}{2}(2 + \\kappa)\n            + \\frac{1}{2}\\kappa\\sin\\phi,\n    \\end{align}\n    %\n    \\noindent where $\\chi \\equiv V_{d}/V_{es}$ is the engine dead volume ratio. It is worth noting that for the $\\gamma$-type engine, parameter $\\alpha$ is absent from the $\\Phi$ function, thus showing that different engine configurations may also lead to different mathematical ``signatures'' of intermediate functions.\n\n    The working fluid pressure, $p$, is assumed to be uniform throughout the engine cavity---as pressure-drops introduce irreversibilities that would prevent one from verifying Carnot's general proposition in such models. 
Assuming, for the sake of simplicity, while still allowing for different fluids to be used, ideal gas $p$-$V$-$T$ behavior of the working fluid, one has:\n    %\n    \\begin{equation}\n        \\label{eq:P}\n        p = \\frac{mR}{\\frac{V_e}{T_e} + \\frac{V_c}{T_c} + \\frac{V_d}{T_d}}\n          = \\frac{mRT_e}{V_{es}}\\Psi(\\phi | \\alpha, \\kappa, \\tau, \\chi),\n    \\end{equation}\n    %\n    \\noindent where $T_e$, $T_c$, and $T_d \\equiv (T_e + T_c)/2$ are the expansion, compression, and dead space temperatures, respectively, with $T_e > T_c$ for heat engine operation; and $\\Psi(\\phi | \\alpha, \\kappa, \\tau, \\chi)$ is the dimensionless engine pressure function of $\\phi$ with parameters $\\alpha, \\kappa, \\tau, \\chi$, in which $\\tau \\equiv T_c / T_e$ is the compression-to-expansion reservoir temperature ratio. It is worth noting that if Carnot's general proposition is to hold, the thermal efficiency of the exemplified Stirling engines must be a function of $\\tau$ only.\n\n    The dimensionless pressure function is a multi-term, engine-configuration-dependent expression:\n    %\n    \\begin{align}\n        \\label{eq:Psia}\n        \\frac{1}{\\Psi^{\\alpha}(\\phi)} &=\n                \\frac{2\\chi}{1 + \\tau} +\n                \\frac{\\kappa + \\tau}{2\\tau} +\n                \\frac{\\kappa\\sin\\phi}{2\\tau} \\nonumber\\\\\n            &+  \\frac{\\sin(\\phi + \\alpha)}{2},\\\\\n        \\label{eq:Psib}\n        \\frac{1}{\\Psi^{\\beta}(\\phi)}  &=\n                \\frac{2\\chi}{1 + \\tau} +\n                \\frac{\\tau + \\sqrt{1 - 2\\kappa\\cos\\alpha + \\kappa^2}}{2\\tau} \\nonumber\\\\\n            &+  \\frac{\\kappa\\sin\\phi}{2\\tau} +\n                \\frac{(\\tau - 1)\\sin(\\phi + \\alpha)}{2\\tau},\\\\\n        \\label{eq:Psig}\n        \\frac{1}{\\Psi^{\\gamma}(\\phi)} &=\n                \\frac{2\\chi}{1 + \\tau} +\n                \\frac{1 + \\tau + \\kappa}{2\\tau} +\n                \\frac{\\kappa\\sin\\phi}{2\\tau} \\nonumber\\\\\n            &+  \\frac{(\\tau - 1)\\sin(\\phi + \\alpha)}{2\\tau}.\n    \\end{align}\n\n    The $RT_e$-normalized (i)~specific net cycle work, $\\bar{W}$, and (ii)~specific inlet cycle heat, $\\bar{Q}_{in}$, are dimensionless quantities that can be expressed as:\n    %\n    \\begin{align}\n        \\label{eq:W}\n        \\bar{W} &= \\frac{\\oint p\\,\\difff(V/m)}{RT_e} \\nonumber\\\\\n                &= \\int_0^{2\\pi} \\Psi(\\phi)\n                   \\frac{\\partial\\Phi(\\phi | \\chi, \\kappa, \\alpha)}{\\partial\\phi}\\,\\difff\\phi,\\\\\n        \\label{eq:Q}\n        \\bar{Q}_{in} &= \\frac{\\oint p\\,\\difff(V_e/m)}{RT_e} \\nonumber\\\\\n                     &= \\frac{1}{2} \\int_0^{2\\pi} \\Psi(\\phi)\n                        \\cos(\\phi + \\alpha) \\,\\difff\\phi,\n    \\end{align}\n    %\n    \\noindent recalling that $\\Phi$ has a different mathematical signature for $\\gamma$-type engines, with respect to the $\\alpha$- and $\\beta$-types, Eqs.~(\\ref{eq:Phia})--(\\ref{eq:Phig}).\n\n    Equations~(\\ref{eq:W}) and (\\ref{eq:Q}) embody the argument made in the Introduction section of this work, meaning that both cycle net work and cycle inlet heat expressions are \\emph{complicated} and \\emph{different} functions of the engine's internal details---whether in terms of $\\alpha$-, $\\beta$-, or of $\\gamma$-type configurations, but also of\n    
construction/tuning parameters such as $\\alpha$, $\\kappa$, and $\\chi$---and of the thermal reservoir temperature ratio, $\\tau$.\n\n%-----------------------------------------------------------------------------------------------\n\n", "meta": {"hexsha": "0da2d9fbaa4c79c597d8e93e969e2257ac25d687", "size": 10049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "01-02-Section-02.tex", "max_stars_repo_name": "cnaak/man-CarnotPrinciple", "max_stars_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-02-Section-02.tex", "max_issues_repo_name": "cnaak/man-CarnotPrinciple", "max_issues_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-02-Section-02.tex", "max_forks_repo_name": "cnaak/man-CarnotPrinciple", "max_forks_repo_head_hexsha": "b258fa7508d8049d2c1a958ffab58ff9b17c78ff", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.7989690722, "max_line_length": 96, "alphanum_fraction": 0.5832421136, "num_tokens": 3090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245953120233, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.571618557349232}}
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n      \\subsection{lossQuad.m}\n\n\\begin{par}\n\\textbf{Summary:} Compute expectation and variance of a quadratic cost $(x-z)'*W*(x-z)$ and their derivatives, where $x \\sim N(m,S)$\n\\end{par} \\vspace{1em}\n\n\\begin{verbatim}function [L, dLdm, dLds, S, dSdm, dSds, C, dCdm, dCds] = lossQuad(cost, m, S)\\end{verbatim}\n    \\begin{par}\n\\textbf{Input arguments:}\n\\end{par} \\vspace{1em}\n\n\\begin{verbatim}  cost\n    .z:     target state                                              [D x 1]\n    .W:     weight matrix                                             [D x D]\n  m         mean of input distribution                                [D x 1]\n  s         covariance matrix of input distribution                   [D x D]\\end{verbatim}\n    \\begin{par}\n\\textbf{Output arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}L               expected loss                                  [1   x    1 ]\ndLdm            derivative of L wrt input mean                 [1   x    D ]\ndLds            derivative of L wrt input covariance           [1   x   D^2]\nS               variance of loss                               [1   x    1 ]\ndSdm            derivative of S wrt input mean                 [1   x    D ]\ndSds            derivative of S wrt input covariance           [1   x   D^2]\nC               inv(S) times input-output covariance           [D   x    1 ]\ndCdm            derivative of C wrt input mean                 [D   x    D ]\ndCds            derivative of C wrt input covariance           [D   x   D^2]\\end{verbatim}\n\\begin{par}\nCopyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.\n\\end{par} \\vspace{1em}\n\\begin{par}\nLast modified: 2013-05-30\n\\end{par} \\vspace{1em}\n\n\n\\subsection*{High-Level Steps} \n\n\\begin{enumerate}\n\\setlength{\\itemsep}{-1ex}\n   \\item Expected cost\n   \\item Variance of cost\n   \\item inv(s)* cov(x,L)\n\\end{enumerate}\n\n\\begin{lstlisting}\nfunction [L, dLdm, dLds, S, dSdm, dSds, C, dCdm, dCds] = lossQuad(cost, m, S)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\nD = length(m); % get state dimension\n\n% set some defaults if necessary\nif isfield(cost,'W'); W = cost.W; else W = eye(D); end\nif isfield(cost,'z'); z = cost.z; else z = zeros(D,1); end\n\n% 1. expected cost\nL = S(:)'*W(:) + (z-m)'*W*(z-m);\n\n% 1a. derivatives of expected cost\nif nargout > 1\n  dLdm = 2*(m-z)'*W; % wrt input mean\n  dLds = W';         % wrt input covariance matrix\nend\n\n% 2. variance of cost\nif nargout > 3\n  S = trace(W*S*(W + W')*S) + (z-m)'*(W + W')*S*(W + W')*(z-m);\n  if S < 1e-12; S = 0; end % for numerical reasons\nend\n\n% 2a. derivatives of variance of cost\nif nargout > 4\n  % wrt input mean\n  dSdm = -(2*(W+W')*S*(W+W)*(z-m))';\n  % wrt input covariance matrix\n  dSds = W'*S'*(W + W')'+(W + W')'*S'*W' + (W + W')*(z-m)*((W + W')*(z-m))';\nend\n\n% 3. 
inv(s) times IO covariance with derivatives\nif nargout > 6\n    C = 2*W*(m-z);\n    dCdm = 2*W;\n    dCds = zeros(D,D^2);\nend\n\\end{lstlisting}\n", "meta": {"hexsha": "51e21a929957ef2281390e9c97d2f9cc45e31bd8", "size": 3082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/lossQuad.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/lossQuad.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/lossQuad.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 31.1313131313, "max_line_length": 132, "alphanum_fraction": 0.5483452304, "num_tokens": 1009, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245953120233, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5716185519992051}}
{"text": "\n\\subsection{The Feasible Generalised Least Squares (FGLS) estimator}\n\n\\subsubsection{Introduction}\n\nWe do OLS to get a consistent estimate of \\(\\Omega \\), \\(\\hat \\Omega \\).\n\nWe then plug this into the GLS estimator.\n\n", "meta": {"hexsha": "10641c1a43c1261443016dbe7e124ee3ea063685", "size": 218, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/gls/02-01-fgls.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/gls/02-01-fgls.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/gls/02-01-fgls.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.8, "max_line_length": 72, "alphanum_fraction": 0.7385321101, "num_tokens": 58, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8333245870332531, "lm_q2_score": 0.6859494550081925, "lm_q1q2_score": 0.571618546320387}}
{"text": "\\documentclass[notitlepage]{simple}\n\n\\author{Matt McCarthy}\n\\title{Box Counting Dimension of the Middle-$\\lambda$ Cantor Set}\n\\date{April 2016}\n\n\\def\\card{\\operatorname{card}}\n\\def\\dim{\\operatorname{dim}}\n\\def\\diam{\\operatorname{diam}}\n\n\\begin{document}\n\t\\maketitle\n\n\t\\begin{thm*}\n\t\tLet $C$ denote the middle-$\\lambda$ Cantor set with $0 < \\lambda < 1$.\n\t\tThen the box counting dimension of $C$ is\n\t\t\\[\n\t\t\t\\dim_B C = \\frac{\\ln 2}{\\ln\\paren{\\frac{2}{1-\\lambda}}}.\n\t\t\\]\n\t\\end{thm*}\n\n\t\\section{Background}\n\n\t\\subsection{Fractal Analysis}\n\n\tWe begin by defining a $\\delta$-cover of a set.\n\t\\begin{definition}\n\t\tLet $X\\subset\\RR^n$ and $\\delta>0$.\n\t\tThen a \\textit{$\\delta$-cover} of $X$ is a set $D=\\set{D_i}_{i=1}^{n_D}$ such that $X\\subseteq \\cup_{D_i\\in D} D_i$ and $\\diam D_i \\leq \\delta$ for all $D_i\\in D$.\n\t\tWe denote the collection of all such covers as $\\mathcal{D}_\\delta(X)$.\n\t\\end{definition}\n\n\tNow that we have $\\delta$-covers, we can define the \\textit{box-counting dimension} of a set.\n\n\t\\begin{definition}\n\t\tLet $X$ be a subset of $\\RR^n$.\n\t\tThen the \\textit{box-counting dimension} of $X$, denoted $\\dim_B(X)$, is defined as\n\t\t\\[\n\t\t\t\\lim\\limits_{\\delta\\rightarrow0} \\frac{\\ln N_\\delta(X)}{-\\ln\\delta}\n\t\t\\]\n\t\twhere\n\t\t\\[\n\t\t\tN_\\delta(X) = \\min\\limits_{D\\in\\mathcal{D}_\\delta(X)} \\card{D}.\n\t\t\\]\n\t\\end{definition}\n\tNote that the box-counting dimension of a set does not necessarily exist, however when it does exist it is usually easier to find.\n\n\t\\begin{thm}[Squeeze Theorem]\n\t\tLet $(a_n),(b_n),(c_n)$ be real valued sequences such that $\\lim a_n = \\lim c_n = x$ and $a_n\\leq b_n\\leq c_n$ for all $n$ greater than some $n\\in\\NN$.\n\t\tThen $b_n$ converges to $x$.\n\t\\end{thm}\n\n\t\\subsection{The Cantor Set}\n\n\tWe will now talk about the middle third Cantor set.\n\tBegin by defining $C_0=[0,1]$.\n\tWe now remove the middle third from $C_0$ yielding, $C_1=[0,1/3]\\cup[2/3,1]$.\n\tWe then remove the middle third from \\textit{each} subinterval of $C_1$, yielding $C_2 = [0,1/9]\\cup[2/9,3/9]\\cup [6/9,7/9]\\cup[8/9,1]$.\n\tWe iterate this process by removing the middle-third from each subinterval of $C_n$ and labeling the remaining set $C_{n+1}$.\n\tFinally, we define the Cantor set as $C=\\cap_{n\\in\\NN} C_n$.\n\tWe can similarly construct the middle-$\\lambda$ Cantor set by performing the same construction but removing $\\lambda$ instead of one third in each step.\n\tFurthermore, $C$ is an uncountable set that has no length left to it.\n\n\t\\section{Solution}\n\n\t\\begin{thm}\n\t\tLet $C$ denote the middle-$\\lambda$ Cantor set with $0 < \\lambda < 1$.\n\t\tThen the box counting dimension of $C$ is\n\t\t\\[\n\t\t\t\\dim_B C = \\frac{\\ln 2}{\\ln\\paren{\\frac{2}{1-\\lambda}}}.\n\t\t\\]\n\t\\end{thm}\n\t\\begin{proof}\n\t\tConsider removing an interval of length $a\\lambda$ from the middle of the interval $[0,a]$ for some $a>0$.\n\t\tSince we have removed $a\\lambda$ from the interval, the length of this new set is exactly $a-a\\lambda=a(1-\\lambda)$.\n\t\tMoreover, since we removed it from the middle of the interval, this length is equally distributed among the two resultant subintervals.\n\t\tThus the subintervals must have length $a((1-\\lambda)/2)$.\n\t\tFurthermore, the leftmost interval must be $[0,a((1-\\lambda)/2)]$.\n\n\t\tWe now need to find the length of any subinterval of $C_n$.\n\t\tSince $C_0=[0,1]$, 
$l_0=1$.\n\t\tThis tells us that $l_1=(1-\\lambda)/2$ and that the leftmost interval in $C_1$ is $[0,(1-\\lambda)/2]$.\n\t\tWe know from the above logic that $l_{n+1}=l_n(1-\\lambda)/2$ and its leftmost interval will be $[0,l_n(1-\\lambda)/2]$.\n\t\tSolving this recursion yields that\n\t\t\\[\n\t\t\tl_n = \\paren{\\frac{1-\\lambda}{2}}^n\n\t\t\\]\n\t\tand the leftmost interval of $C_n$ is $[0,((1-\\lambda)/2)^n]$.\n\t\tThus if we were to cover $C_n$ we would need $2^n$ sets of diameter $((1-\\lambda)/2)^n$.\n\n\t\tSuppose $\\delta >0$.\n\t\tThus, we can find an $n$ such that $l_{n+1}\\leq \\delta < l_n$.\n\t\tSince $\\delta < l_n$, we need no fewer than $2^n$ sets of diameter $\\delta$ to cover $C_n\\supset C$.\n\t\tSince $\\delta \\geq l_{n+1}$, we need no more than $2^{n+1}$ sets of diameter $\\delta$ to cover $C_{n+1}\\supset C$.\n\t\tThus, we can glean the following inequality.\n\t\t\\[\n\t\t\t2^{n} \\leq N_\\delta(C) \\leq 2^{n+1}.\n\t\t\\]\n\t\tSince $\\ln$ is a monotone increasing function, we get\n\t\t\\begin{equation}\\label{ln-num}\n\t\t\tn\\ln 2 \\leq \\ln N_\\delta(C) \\leq (n+1)\\ln 2.\n\t\t\\end{equation}\n\t\tFurthermore, since $l_{n+1}\\leq \\delta < l_n$,\n\t\t\\begin{equation}\\label{ln-den}\n\t\t\t\\frac{1}{-\\ln l_{n+1}} \\leq \\frac{1}{-\\ln\\delta} \\leq \\frac{1}{-\\ln l_n}.\n\t\t\\end{equation}\n\n\t\tTake \\autoref{ln-num} and multiply it by $1/(-\\ln\\delta)$.\n\t\t\\[\n\t\t\t\\frac{n\\ln 2}{-\\ln\\delta} \\leq \\frac{\\ln N_\\delta(C)}{-\\ln\\delta} \\leq \\frac{(n+1)\\ln 2}{-\\ln\\delta}\n\t\t\\]\n\t\tBy \\autoref{ln-den}, we have\n\t\t\\[\n\t\t\t\\frac{n\\ln 2}{-\\ln l_{n+1}} \\leq \\frac{\\ln N_\\delta(C)}{-\\ln\\delta} \\leq \\frac{(n+1)\\ln 2}{-\\ln l_n}.\n\t\t\\]\n\t\tConsider $\\ln l_n$.\n\t\t\\[\n\t\t\t\\ln l_n = \\ln\\paren{\\paren{\\frac{1-\\lambda}{2}}^n} = -n\\ln\\paren{\\frac{2}{1-\\lambda}}\n\t\t\\]\n\t\tThus, our inequality becomes,\n\t\t\\[\n\t\t\t\\frac{n\\ln 2}{(n+1)\\ln\\paren{\\frac{2}{1-\\lambda}}} \\leq \\frac{\\ln N_\\delta(C)}{-\\ln\\delta} \\leq \\frac{(n+1)\\ln 2}{n\\ln\\paren{\\frac{2}{1-\\lambda}}}.\n\t\t\\]\n\t\tIf we want to find $\\lim\\limits_{\\delta\\rightarrow 0} \\ln N_\\delta(C)/(-\\ln\\delta)$, we need to find the limits of the far left and far right as $n\\rightarrow\\infty$.\n\t\tOn the lefthand side we get\n\t\t\\[\n\t\t\t\\frac{n\\ln 2}{(n+1)\\ln\\paren{\\frac{2}{1-\\lambda}}} = \\frac{n\\ln 2}{n\\ln\\paren{\\frac{2}{1-\\lambda}}+\\ln\\paren{\\frac{2}{1-\\lambda}}}\\rightarrow\\frac{\\ln 2}{\\ln\\paren{\\frac{2}{1-\\lambda}}}\n\t\t\\]\n\t\tby l'H\\^opital's rule.\n\t\tOn the righthand side we get\n\t\t\\[\n\t\t\t\\frac{(n+1)\\ln 2}{n\\ln\\paren{\\frac{2}{1-\\lambda}}} = \\frac{n\\ln 2+\\ln 2}{n\\ln\\paren{\\frac{2}{1-\\lambda}}}\\rightarrow\\frac{\\ln 2}{\\ln\\paren{\\frac{2}{1-\\lambda}}}\n\t\t\\]\n\t\tagain by l'H\\^opital's rule.\n\t\tThus by squeeze theorem, $\\dim_B (C)=\\frac{\\ln 2}{\\ln\\paren{\\frac{2}{1-\\lambda}}}$.\n\t\\end{proof}\n\\end{document}\n", "meta": {"hexsha": "71255ab2ffbeca120c2f38b2a7724e9be54ab621", "size": 5878, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016-spring/box-counting-cantor/box-counting-cantor.tex", "max_stars_repo_name": "matt-mccarthy/problem-solving", "max_stars_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2016-spring/box-counting-cantor/box-counting-cantor.tex", 
"max_issues_repo_name": "matt-mccarthy/problem-solving", "max_issues_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016-spring/box-counting-cantor/box-counting-cantor.tex", "max_forks_repo_name": "matt-mccarthy/problem-solving", "max_forks_repo_head_hexsha": "8014f517e5290f2904cfb49f3831f05e484d59ec", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5942028986, "max_line_length": 188, "alphanum_fraction": 0.6483497788, "num_tokens": 2269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5715558361138996}}
{"text": "%!TEX root = ../notes.tex\n\\section{March 22, 2022}\n\\subsection{Discriminants of bases, Vandermonde determinants}\nWe have some review from last time:\n\nLet $K = \\QQ(\\theta)$ be a number field of degree $n$, let $\\{\\alpha_1, \\dots, \\alpha_n\\}$ be a $\\QQ$-basis of $K$, and let $\\sigma_i : K\\hookrightarrow \\CC$, $1\\leq i\\leq n$ be the embeddings of $K$ into $\\CC$.\n\nThe discriminant of $\\{\\alpha, \\dots, \\alpha_n\\}$ is\n\\[\\Delta[\\alpha_i, \\dots, \\alpha_n] = \\det\\begin{pmatrix}\n        \\sigma_1(\\alpha_1) & \\sigma_1(\\alpha_2) & \\cdots & \\sigma_1(\\alpha_n) \\\\\n        \\sigma_2(\\alpha_1) & \\sigma_2(\\alpha_2) & \\cdots & \\sigma_2(\\alpha_n) \\\\\n        \\vdots             & \\vdots             & \\ddots & \\vdots             \\\\\n        \\sigma_n(\\alpha_1) & \\sigma_n(\\alpha_2) & \\cdots & \\sigma_n(\\alpha_n) \\\\\n    \\end{pmatrix}^2\\]\n\nIf $\\{\\beta_1, \\dots, \\beta_n\\}$ is another basis, then for all $1\\leq k\\leq n$,\n\\[\\beta_k = \\sum_{i=1}^n c_{ik}\\alpha_i, \\quad c_{ik}\\in \\QQ,\\]\nwhere $\\det(c_{ik})\\neq 0$. Fact from homework is that\n\\[\\Delta[\\beta_i, \\dots, \\beta_n] = \\det(c_{ik})^2\\cdot \\Delta[\\alpha_i, \\dots, \\alpha_n]\\]\n\n\\begin{theorem*}[2.7, p.42 \\cite{stewart2015algebraic}]\n    The discriminant of any $\\QQ$-basis for $K$ is rational and nonzero.\n\\end{theorem*}\n\\begin{proof}\n    It suffices to prove this for $\\{1, \\theta, \\theta^2, \\dots, \\theta^{n-1}\\}$.\n\\end{proof}\n\nWe have the following observation:\n\\begin{definition*}[Vandermonde Matrix]\n    A (square) \\ul{Vandermonde matrix} is a matrix of the form\n    \\[V = \\begin{pmatrix}\n            1      & t_1    & t_1^2  & \\dots  & t_1^{n-1} \\\\\n            1      & t_2    & t_2^2  & \\dots  & t_2^{n-1} \\\\\n            \\vdots & \\vdots & \\vdots & \\ddots & \\vdots    \\\\\n            1      & t_n    & t_n^2  & \\dots  & t_n^{n-1} \\\\\n        \\end{pmatrix}\\]\n\\end{definition*}\n\nWe then claimed (without proof) that\n\\begin{claim*}\n    The determinant of $V$ is\n    \\[\\prod_{1\\leq i < j\\leq n}(t_j - t_i).\\]\n\\end{claim*}\n\\begin{proof}\n    We know that $\\det(V) = 0$ when $t_i = t_j$ for some $i\\neq j$. So, $\\det(V)$ (as a polynomial in $t_1, \\dots, t_n$) is divisible by $t_i - t_j$ for $i < j$. We have that the total degree of $\\det(V)$ as a polynomial in $t_1, \\dots, t_n$ is\n    \\[\\sum_{i=1}^{n-1} i = \\frac{n(n-1)}{2}.\\]\n    On the other hand, the total degree of $D$ is also $\\binom{n}{2} = \\frac{n(n-1)}{2}$. Hence, $\\det(V)$ is a scalar multiple of $D$ (since ).\n\n    But $\\det(V)$ and $D$ are both monic as polynomials (\\emph{kinda}) in $\\QQ(t_2, \\dots, t_n)[t_1]$. Thus $\\det(V) = D$. \\emph{Or something like that}.\n\\end{proof}\n\nGoing back to the proof of \\cref{thm:2.7}, we take $t_i = \\theta_i := \\sigma_i(\\theta)$ to get\n\\begin{align*}\n    \\Delta[1, \\theta, \\dots, \\theta^{n-1}] & = \\prod_{i < j}(theta_i - \\theta_j)^2  \\\\\n                                           & = \\mathsf{disc}(\\minpoly_\\QQ(\\theta)).\n\\end{align*}\nwhich is clearly rational and nonzero in $\\QQ^+$.\n\n\\begin{example}\n    Let\n    \\[K = \\QQ(\\sqrt{5})\\]\n    with the obvious basis $\\{1, \\sqrt{5}\\}$. 
We have\n    \\begin{align*}\n        \\Delta[1, \\sqrt{5}] & = \\left|\\begin{array}{cc}\n                                          1 & \\sqrt{5}  \\\\\n                                          1 & -\\sqrt{5}\n                                      \\end{array}\\right|^2 \\\\\n                            & = (-2\\sqrt{5})^2             \\\\\n                            & = 20\n    \\end{align*}\n    and another basis is $\\left\\{1, \\frac{1+\\sqrt{5}}{2}\\right\\}$ and\n    \\begin{align*}\n        \\Delta\\left[1, \\frac{1+\\sqrt{5}}{2}\\right] & = \\left|\\begin{array}{cc}\n                                                                 1 & \\frac{1+\\sqrt{5}}{2}  \\\\\n                                                                 1 & \\frac{1-\\sqrt{5}}{2}\n                                                             \\end{array}\\right|^2 \\\\\n                                                   & = (-\\sqrt{5})^2                      \\\\\n                                                   & = 5\n    \\end{align*}\n\\end{example}\n\n\\begin{example}\n    Let\n    \\[K = \\QQ(\\sqrt[3]{2})\\]\n    A basis is $\\{1, \\sqrt[3]{2}, (\\sqrt[3]{2})^2\\} =: B$, and, writing $\\zeta_3 = e^{2\\pi i/3}$,\n    \\begin{align*}\n        \\Delta(B) = \\left|\\begin{array}{ccc}\n                              1 & \\sqrt[3]{2}          & (\\sqrt[3]{2})^2          \\\\\n                              1 & \\zeta_3\\sqrt[3]{2}   & (\\zeta_3\\sqrt[3]{2})^2   \\\\\n                              1 & \\zeta_3^2\\sqrt[3]{2} & (\\zeta_3^2\\sqrt[3]{2})^2\n                          \\end{array}\\right|^2\n    \\end{align*}\n    \\emph{computation left as exercise\\dots}\n\\end{example}\n\n\\subsection{Algebraic Integers}\n\\begin{definition}[Algebraic Integer]\n    A complex number is an \\ul{algebraic integer} if it is a root of a \\emph{monic} polynomial with integer coefficients.\n\\end{definition}\n\nWe denote the set of algebraic integers by $\\overline{\\ZZ}$. By definition, $\\overline{\\ZZ}\\subseteq\\overline{\\QQ}$.\n\n\\begin{example}\n    The following are some algebraic integers: \n    \\begin{itemize}\n        \\item $\\sqrt{2}\\in\\overline{\\ZZ}$ since $\\sqrt{2}$ is a root of $x^2 - 2$. \n        \\item $\\tau = \\frac{1}{2}(1 + \\sqrt{5})$, since $\\tau^2 - \\tau - 1 = 0$. \n    \\end{itemize}\n    Non-examples: \n    \\begin{itemize}\n        \\item $\\frac{22}{7}$ is not an algebraic integer. \\emph{Why?} We look at the $7$-adic valuation of the monic polynomial when we plug $\\frac{22}{7}$ in. What are some other ways to reason about this? \n    \\end{itemize}\n\\end{example}\nKey algebra fact (Gauss's Lemma): If $f(x)\\in \\ZZ[x]$ is a monic polynomial with $f(x) = g(x)\\cdot h(x)$ where $g(x)$, $h(x)$ are monic polynomials in $\\QQ[x]$, then $g(x), h(x)\\in \\ZZ[x]$. \n\nHence, if $\\frac{22}{7}$ were an algebraic integer, it would be a root of some monic $f(x)\\in\\ZZ[x]$. But its minimal polynomial $g(x) = x - \\frac{22}{7}$ divides $f(x)$, so $f(x) = g(x)\\cdot h(x)$ with both factors monic in $\\QQ[x]$, forcing $g(x)$ to be in $\\ZZ[x]$, which is false. So the fact that $\\minpoly_\\QQ\\left(\\frac{22}{7}\\right) = x - \\frac{22}{7}$ implies that $\\frac{22}{7}\\not\\in\\overline{\\ZZ}$. \n\n\\begin{definition*}[Algebraic Integer']\n    An algebraic number $\\theta$ is an algebraic integer iff $\\minpoly_\\QQ(\\theta)\\in\\ZZ[x]$. 
\n\\end{definition*}", "meta": {"hexsha": "21de19903537b2e341898a6e2d8dcd3abaca9878", "size": 6154, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-03-22.tex", "max_stars_repo_name": "jchen/math1560-notes", "max_stars_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-02T15:41:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T20:28:48.000Z", "max_issues_repo_path": "lectures/2022-03-22.tex", "max_issues_repo_name": "jchen/math1560-notes", "max_issues_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-03-22.tex", "max_forks_repo_name": "jchen/math1560-notes", "max_forks_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.2833333333, "max_line_length": 343, "alphanum_fraction": 0.5224244394, "num_tokens": 2114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7690802370707281, "lm_q1q2_score": 0.571555828247068}}
{"text": "\\section{Pattern Recognition Basics}\n\n\\subsection{Classification of Simple Patterns}\n\n\\begin{frame}\n\t\\frametitle{Classification of Simple Patterns}\n\n\tThe system for the classification of simple patterns has the following generic structure\\\\\n\t\\vspace{2cm}\n\t\\pause\n%\\begin{centering}\n\t$\\hspace{1.03cm} \\overset{\\vec f}{\\longrightarrow}\\fbox{Preprocessing}\\overset{\\vec g}{\\longrightarrow}\\fbox{Feature Extraction} \\overset{\\vec c}{\\longrightarrow}\\fbox{Classification} \\overset{y}{\\longrightarrow}$\\\\\n\t$\\hspace{1.03cm}\\hspace{8.5cm} \\uparrow $\\\\\n\t$\\hspace{1.03cm}\\hspace{4.3cm} \\fbox{Training Samples}\\longrightarrow \\fbox{Learning}$\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Classification of Simple Patterns \\cont}\n\n\t\\begin{itemize}\n\t\t\\item {\\em \\structure{Supervised learning:}}\n\n\t\t      $m$ training samples include feature and associated class number\n\t\t      \\begin{displaymath}\n\t\t\t      S = \\{ (\\vec x_1, y_1), (\\vec x_2, y_2), (\\vec x_3, y_3), \\dots, (\\vec x_m, y_m) \\}\n\t\t      \\end{displaymath}\n              where $\\vec x_i \\in \\mathcal{X}$ denotes the feature vector and $y_i\\in Z$ denotes the class number of sample $i$.\n\t      If nothing special is mentioned ${\\mathcal{X}}\\subseteq \\mathbb{R}^d$.\n\t\t      \\pause\n\t\t      \\vspace{0.5cm}\n\t\t\\item {\\em \\structure{Unsupervised learning:}}\n\n\t\t      $m$ training samples just include features, no class assignments and even the number of classes is (not always) known\n\t\t      \\begin{displaymath}\n\t\t\t      S = \\{ \\vec x_1, \\vec x_2, \\vec x_3, \\dots, \\vec x_m \\}\n\t\t      \\end{displaymath}\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsection{Bayesian Classifier}\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier}\n\n\t\\structure{Notation:}\n\n\t\\begin{center}\n\t\t\\begin{minipage}{0.7\\textwidth}\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item[ $ \\vec x \\in \\mathbb{R}^d:$]  $d$-dimensional feature vector\n\t\t\t\t\\item[ $y:$] class number \\\\\n\t\t\t\t      (usually $y\\in\\{0,1\\}$ or $y\\in\\{-1,+1\\}$)\n\t\t\t\t\\item[$p(y):$] prior probability of pattern class $y$\n\t\t\t\t\\item[$p(\\vec x):$] evidence\\\\\n\t\t\t\t      (distribution of features in $d$-dimensional feature space)\n\t\t\t\t\\item[$p(\\vec x , y):$] joint probability density function (pdf)\n\t\t\t\t\\item[$p(\\vec x |y):$] class conditional density\n\t\t\t\t\\item[$p(y| \\vec x):$] posterior probability\n\t\t\t\\end{itemize}\n\t\t\\end{minipage}\n\t\\end{center}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\\vspace{-.52cm}\n  \\begin{figure}\n    \\resizebox{.85\\linewidth}{!}{\n      \\alt<25->{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie25.\\png}\n      }{\\alt<24>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie24.\\png}\n      }{\\alt<23>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie23.\\png}\n      }{\\alt<22>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie22.\\png}\n      }{\\alt<21>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie21.\\png}\n      }{\\alt<20>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie20.\\png}\n      }{\\alt<19>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie19.\\png}\n      }{\\alt<18>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie18.\\png}\n      }{\\alt<17>{\n        
\\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie17.\\png}\n      }{\\alt<16>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie16.\\png}\n      }{\\alt<15>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie15.\\png}\n      }{\\alt<14>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie14.\\png}\n      }{\\alt<13>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie13.\\png}\n      }{\\alt<12>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie12.\\png}\n      }{\\alt<11>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie11.\\png}\n      }{\\alt<10>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie10.\\png}\n      }{\\alt<9>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie9.\\png}\n      }{\\alt<8>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie8.\\png}\n      }{\\alt<7>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie7.\\png}\n      }{\\alt<6>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie6.\\png}\n      }{\\alt<5>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie5.\\png}\n      }{\\alt<4>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie4.\\png}\n      }{\\alt<3>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie3.\\png}\n      }{\\alt<2>{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie2.\\png}\n      }{\n        \\includegraphics[width=.85\\linewidth]{\\pngdir/condProb/Folie1.\\png}\n      }}}}}}}}}}}}}}}}}}}}}}}}\n    }\n  \\end{figure}\n\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\t\\structure{Bayes rule:}\\\\[.5cm]\n\n\t\\begin{eqnarray*}\n\t\t\\underbrace{p(\\vec x , y)}_{joint\\ pdf}\n\t\t&=& \\pause \\underbrace{p(y)}_{prior} \\cdot \\underbrace{p(\\vec x | y)}_{class\\ conditional\\ pdf} \\\\[.5cm] \\pause\n\t\t&=& \\underbrace{ p(\\vec x)}_{evidence} \\cdot \\underbrace{ p(y| \\vec x)}_{posterior}\n\t\\end{eqnarray*}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\tNow we get the posterior as follows:\n\n\t\\begin{eqnarray*}\n\t\tp(y| \\vec x)\n\t\t&=& \\pause \\frac{p(y) \\cdot p(\\vec x | y)}{p(\\vec x)} \\\\ \\pause\n\t\t&=& \\frac{p(y) \\cdot p(\\vec x | y)}{\\sum\\limits_{y'}p(\\vec x , y')} \\\\ \\pause\n\t\t&=& \\frac{p(y) \\cdot p(\\vec x | y)}{\\sum\\limits_{y'}p(y') \\cdot p(\\vec x | y')}\\\\\n\t\\end{eqnarray*}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\t\\structure{Note:}\n\t\\begin{displaymath}\n\t\tp(\\vec x) = \\sum\\limits_{y}p(y) \\cdot p(\\vec x | y)\n\t\\end{displaymath}\n\tis a \\structure{marginal} of $p(\\vec x , y)$.\n\n\t\\begin{itemize}\n\t\t\\item We get $p(\\vec{x})$ by marginalizing $p(\\vec{x}, y)$ over $y$.\n\t\t\\item Accordingly we get $p(y)$ by marginalizing $p(\\vec{x}, y)$ over $\\vec{x}$, i.\\,e.\n\t\t      \\begin{eqnarray*}\n\t\t\t      p(y)&=& \\int p(\\vec{x}, y) \\mathsf{d}\\vec{x}\n\t\t      \\end{eqnarray*}\n\t\\end{itemize}\n\n\t\\alert{Did you notice:} $y$ is a discrete random variable whereas $\\vec{x}$ is a continuous random vector (summation vs.\\ integration).\n\\end{frame}\n\n\n\\input{nextTime.tex}\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\tNow let us summarize the Bayesian decision rule:\\\\[.3cm]\n\tWe decide for the class $y^*$ 
according to the decision rule\n\t\\begin{eqnarray*}\n\t\ty^* &=& \\pause \\argmax\\limits_{y} p(y | \\vec x)\\ \\\\[.3cm] \\pause\n\t\t&=& \\argmax\\limits_{y} \\frac{p(y) \\cdot p(\\vec x | y)}{p(\\vec x)} \\\\[.3cm] \\pause\n\t\t&=& \\argmax\\limits_{y} p(y) \\cdot p(\\vec x | y) \\\\[.3cm] \\pause\n\t\t&=& \\argmax\\limits_{y} \\{\\log p(y)\\ + \\log p(\\vec x | y)\\}\n\t\\end{eqnarray*}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\t\\structure{Notes:} \\\\[.3cm]\n\n\t\\begin{itemize}\n\t\t\\item The key aspect in designing a classifier is to find a good model \\\\\n\t\t      for the posterior $p(y|\\vec x)$. \\\\[.3cm]\n\t\t\\item Feature vectors $\\vec x$ usually have fixed dimensions $d$ in simple classification schemes, \\\\[.3cm]\n\t\t\\item but ${\\mathcal{X}}$ is not necessarily a subset of $\\mathbb{R}^d$: \\\\\n\t\t      features of varying dimension, sequences and sets of features\n\t\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Bayesian Classifier \\cont}\n\n\t\\begin{itemize}\n\t\t\\item \\structure{Generative modeling:} \\\\\n\t\t      modeling and estimation of $p(y)$ and $p(\\vec x | y)$. \\\\[.5cm]\n\t\t\\item \\structure{Discriminative modeling:} \\\\\n\t\t      straight modeling and estimation of $p(y|\\vec x)$.\n\t\\end{itemize}\n\\end{frame}\n\n\n\\subsection{Optimality of the Bayesian Classifier}\n\n\\begin{frame}\n\t\\frametitle{Optimality of the Bayesian Classifier}\n\n\t%  \\begin{citeblock}{Definition}\n\t\\begin{definition}\n\t\t$l(y_{1},y_{2})$ is the \\structure{loss} if a feature vector belonging to class $y_{2}$\n\t\tis assigned to class $y_{1}$. The $(0,1)$-loss function is defined by\n\t\t\\begin{eqnarray*}\n\t\t\tl(y_{1},y_{2}) &= &\\left\\{ \\begin{array}{cc}\n\t\t\t\t0 & ,\\ if\\ y_{1}=y_{2} \\\\\n\t\t\t\t1 & ,\\ otherwise\n\t\t\t\\end{array} \\right.\n\t\t\\end{eqnarray*}\n\t\t%  \\end{citeblock}\n\t\\end{definition}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Optimality of the Bayesian Classifier \\cont}\n\n\tThe \\structure{best (or optimal) decision rule} according to classification loss minimizes the average loss L:\n\n\t\\begin{eqnarray*}\n\t\t\\mathsf{AL}(\\vec x , y) &=& \\sum\\limits_{y'}l(y,y')p(y'|\\vec x)\n\t\\end{eqnarray*}\n\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Optimality of the Bayesian Classifier \\cont}\n\n\tUsing the $(0,1)$-loss function, the class decision is based on:\n\n\t\\begin{eqnarray*}\n\t\ty^* &=& \\argmin\\limits_{y} \\mathsf{AL}(\\vec x, y)\\\\\n\t\t\\pause  &=& \\argmin\\limits_{y} \\sum\\limits_{y'} l(y,y') \\cdot p(y'|\\vec x)\\\\\n\t\t%           \\pause  &=& \\argmin\\limits_{y} (1-p(y|\\vec x))\\\\\n\t\t\\pause  &=& \\argmax\\limits_{y} p(y|\\vec x)\\\\\n\t\\end{eqnarray*}\n\\end{frame}\n\n\n\\begin{frame}\n\t\\frametitle{Optimality of the Bayesian Classifier \\cont}\n\n\t\\structure{Conclusion:}\n\n\t\\begin{itemize}\n\t\t\\item The optimal classifier w.\\,r.\\,t.\\ the (0,1)-loss function applies the Bayesian decision rule.\n\t\t\\item This classifier is called \\structure{Bayesian classifier}. \\\\[.75cm]\n\t\\end{itemize}\n\t\\spread\n\n\t\\vorsicht The loss function is {\\bf NOT} convex. \\vfill\n\\end{frame}\n\n\n\\subsection{Lessons Learned}\n\n\\begin{frame}\n\t\\frametitle{Lessons Learned}\n\n\t\\begin{itemize}\n\t\t\\item General structure of a classification system \\\\[.5cm] \\pause\n\t\t\\item Supervised and unsupervised learning \\\\[.5cm] \\pause\n\t\t\\item Basics on probabilities (probability, pdf, Bayes rule, etc.) 
\\\\[.5cm] \\pause\n\t\t\\item Optimality of Bayes classifier and the role of the loss function \\\\[.5cm] \\pause\n\t\t\\item Discriminative and generative approach to model a posteriori probability\n\t\\end{itemize}\n\\end{frame}\n\n\\input{nextTime.tex}\n\n\\subsection{Further Readings}\n\n\\begin{frame}\n\t\\frametitle{Further Readings}\n\n\t\\begin{itemize}\n\t\t\\item Heinrich Niemann: \\\\\n\t\t      \\structure{Pattern Analysis}, \\\\\n\t\t      Springer Series in Information Sciences 4, Springer, Berlin, 1982. \\\\[.15cm]\n\t\t\\item Heinrich Niemann: \\\\\n\t\t      \\structure{Klassifikation von Mustern}, \\\\\n\t\t      Springer Verlag, Berlin, 1983. \\\\[.15cm]\n\t\t\\item Richard O. Duda, Peter E. Hart, David G. Stork: \\\\\n\t\t      \\structure{Pattern Classification}, 2nd Edition, \\\\\n\t\t      John Wiley \\& Sons, New York, 2000.\n\t\\end{itemize}\n\\end{frame}\n\n", "meta": {"hexsha": "3b4a435eb51e5f9204143bbcb9772bcf1c4d53b1", "size": 10404, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "02_bayes_classifier.tex", "max_stars_repo_name": "akmaier/pr-slides", "max_stars_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2021-01-11T07:27:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-31T19:21:31.000Z", "max_issues_repo_path": "02_bayes_classifier.tex", "max_issues_repo_name": "akmaier/pr-slides", "max_issues_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "02_bayes_classifier.tex", "max_forks_repo_name": "akmaier/pr-slides", "max_forks_repo_head_hexsha": "c322ad388993ff7891b959764e4481a05a444dff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-21T06:06:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-18T18:47:28.000Z", "avg_line_length": 33.2396166134, "max_line_length": 216, "alphanum_fraction": 0.6468665898, "num_tokens": 3511, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7690802476562641, "lm_q1q2_score": 0.5715558273643058}}
{"text": "\\documentclass{amsart}\n\n\\usepackage[english]{babel}\n\\usepackage{enumerate}\n\\usepackage{graphicx}\n\n\\title{The game of Risk analysed}\n\\author{Hidde Wieringa}\n\\date{\\today}\n\n\\begin{document}\n\t\n\t\\begin{abstract}\n\t\tIn the game of Risk, the fights between two countries dominate the game. After explaining the rules of the game, these fights are examined rather closely. To do that, we need an optimal defend strategy, which will be used to calculate the worst case scenarios for the attacker. The possibilities of losing a certain number of units for the two opposing forces will be calculated. Because of these possibilities, it is possible to calculate the expected \\emph{gain} in a certain state of the fight, as well as the probability for the attacker to win the fight.\n\t\\end{abstract}\n\t\n\t\\maketitle\n\t\n\t\\section{The game of Risk}\n\t\n\tThe game of Risk is played by three to six players. Each player gets assigned a number of areas on a world map, and gets an objective. Once a player has fulfilled this objective, the game ends. These objectives are either defeating another player, or conquering certain areas, usually in the form of continents. \n\t\n\tTo do that, a player will attack other players, in neighboring areas. A player can attack if he has more than one unit in the area. To attack, the attacker throws a minimum of one die, and a maximum of three dice. The defender then chooses whether he will defend with one or two dice. If the defender has only one unit, the player may only defend with one die. \n\t\n\tOnce the dice rolls are known, the maximum of each set is compared. If the attacker has a strict greater roll, the defender loses one unit. Else, the attacker loses one unit. This is repeated, until either the attacker or defender has rolled no more dice. \n\t\n\t\\section{The optimal defend strategy}\n\t\n\tThe defender has the unique position be able to choose the amount of dice that will be used to defend the attacked area, after the attacker has rolled the attacking dice. This is a great advantage, because the defender can choose whether one or two armies must be put into the fight. Generally speaking, it is preferred to roll one die when the attacker rolled high, and two dice when the attacker rolled low. We will now make this strategy exact, to optimize the chances for the defender. \n\t\n\tTo do that, we first define the \\emph{gain} of a defending roll as\n\t$$\n\t\t\\text{gain}_A := E(G_A),\n\t$$ where $G$ is the stochastic value of the difference between the amount of attacking units destroyed and the amount of defending units lost, given the attacking roll $A$.\n\t\n\tTwo find the optimal defending strategy, we look at each possible die roll for the three attacking dice. For two attacking dice we get the same results, as only the best two dice of the three count. When the attacker only rolls one die, it is always better for the defender to roll two dice when possible.\n\t\n\tWe calculate the \\emph{gain} for both strategies by calculating the mean gain when throwing with one die and when throwing with two dice. The results are shown in figure \\ref{fig:1}. Only the two maximum dice are shown, as the minimum die does not influence the outcome of the fight. The chances of rolling such a roll for the attacker are also noted. 
\n\t\n\t\\begin{figure}\n\t\n\t\\makebox[\\textwidth]{\n\t\\begin{tabular}{c||cc|cc|cc|cc|cc|cc}\n\t$A_1$ \\,\\textbackslash \\, $A_2$ & \\multicolumn{2}{c}{1} & \\multicolumn{2}{c}{2} & \\multicolumn{2}{c}{3} & \\multicolumn{2}{c}{4} & \\multicolumn{2}{c}{5} & \\multicolumn{2}{c}{6} \\\\\n\t & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 \\\\ \n\t\t\n\t\\hline\t\n1 & 1.00 & 2.00 & & & & & & & & & &  \\\\ \n2 & 0.67 & 1.94 & 0.67 & 1.33 & & & & & & & &  \\\\ \n3 & 0.33 & 1.78 & 0.33 & 1.17 & 0.33 & 0.67 & & & & & &  \\\\ \n4 & 0.00 & 1.50 & 0.00 & 0.89 & 0.00 & 0.39 & 0.00 & 0.00 & & & &  \\\\ \n5 & -0.33 & 1.11 & -0.33 & 0.50 & -0.33 & 0.00 & -0.33 & -0.39 & -0.33 & -0.67 & &  \\\\ \n6 & -0.67 & 0.61 & -0.67 & 0.00 & -0.67 & -0.50 & -0.67 & -0.89 & -0.67 & -1.17 & -0.67 & -1.33  \\\\ \n\t\\end{tabular}\n\t}\n\t\n\t\\vspace{10px}\n\t\n\t\\makebox[\\textwidth]{\n\t\\begin{tabular}{c||cc|cc|cc|cc|cc|cc}\n\t$A_1$ \\,\\textbackslash \\, $A_2$ & \\multicolumn{2}{c}{1} &  \\multicolumn{2}{c}{2} & \\multicolumn{2}{c}{3} & \\multicolumn{2}{c}{4} & \\multicolumn{2}{c}{5} & \\multicolumn{2}{c}{6} \\\\\n\t\t\n\t\\hline\t\n1 & \\textbullet\\textbullet & 0.00  & & & & & & & & & &  \\\\ \n2 & \\textbullet\\textbullet & 0.01  & \\textbullet\\textbullet & 0.02  & & & & & & & &  \\\\ \n3 & \\textbullet\\textbullet & 0.01  & \\textbullet\\textbullet & 0.04  & \\textbullet\\textbullet & 0.03  & & & & & &  \\\\ \n4 & \\textbullet\\textbullet & 0.01  & \\textbullet\\textbullet & 0.04  & \\textbullet\\textbullet & 0.07  & \\textbullet & 0.05  & & & &  \\\\ \n5 & \\textbullet\\textbullet & 0.01  & \\textbullet\\textbullet & 0.04  & \\textbullet\\textbullet & 0.07  & \\textbullet & 0.10  & \\textbullet & 0.06  & &  \\\\ \n6 & \\textbullet\\textbullet & 0.01  & \\textbullet\\textbullet & 0.04  & \\textbullet\\textbullet & 0.07  & \\textbullet & 0.10  & \\textbullet & 0.12  & \\textbullet & 0.07   \\\\ \n\t\\end{tabular}\n\t}\n\t\n\t\\caption{First table: the \\emph{gain} when defending with either one or two dice. Second table: the optimal number of dice to defend with, as well as the probability that this situation occurs.} \\label{fig:1}\n\t\\end{figure}\n\t\n\tFrom this result we can conclude that the following simple rules (evaluated from the top) give us an optimal defending strategy, given attacking roll $A$ (sorted descending), attacking armies $X$ and defending armies $Y$:\n\t\\begin{enumerate}\n\t\t\\item If $Y = 1$, use one die,\n\t\t\\item If $|A| = 1$, use two dice,\n\t\t\\item If $A_2 > 3$, use one die,\n\t\t\\item Use two dice.\n\t\\end{enumerate}\n\t\n\tWe will use this set of rules to calculate the worst-case scenario for the attacker's chances of winning a fight. \n\t\n\t\\section{Destroyed units}\n\t\n\tNow that we have found an optimal defending strategy, let us look at the attacking force. In a single roll, the attacker can lose one or two units, and so can the defender. Depending on the dice rolls and the choices made, some of the probabilities of losing a certain number of units can be $0$. These events will be our event space \n\t$$\n\tE = \\{(2,0), (1,0), (1,1), (0,1), (0,2)\\}.\n\t$$\n\t\n\tFrom now on we will assume the defender defends his area in an optimal way, to find a lower bound for the probabilities involved. 
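\n\t\n\tFor reference, the four rules above translate directly into code. A minimal Python sketch (ours, purely illustrative):\n\\begin{verbatim}\ndef defend_dice(attack_roll, defending_units):\n    # attack_roll: the attacking dice sorted in descending order.\n    # Returns the number of dice the defender should use,\n    # evaluating the four rules from the top.\n    if defending_units == 1:\n        return 1\n    if len(attack_roll) == 1:\n        return 2\n    if attack_roll[1] > 3:  # second-highest attacking die\n        return 1\n    return 2\n\\end{verbatim}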
\n\t\n\tMuch like the \\emph{gain} which was calculated in the previous section, we will now calculate the probabilities of the different events which can take place after one roll of the attacker and defender. To do this, for each attacking roll the (optimal) defending roll was determined, and the probability of those rolls occurring was added to the probability of the event they produce. The results are listed in figure \\ref{fig:2}. \n\t\n\t\\begin{figure}\n\t\n\t\\makebox[\\textwidth]{\n\t\\begin{tabular}{c||cccc}\n\t\t$X$ \\,\\textbackslash \\, $Y$ & 1 & 2 & 3 & 4 \\\\\n\t\t\\hline\n\t\t1 & \\begin{tabular}{ll}\n(0, 0) 1.00\\end{tabular}& \\begin{tabular}{ll}\n(0, 0) 1.00\\end{tabular}& \\begin{tabular}{ll}\n(0, 0) 1.00\\end{tabular}& \\begin{tabular}{ll}\n(0, 0) 1.00\\end{tabular} \\\\ \\hline\n2 & \\begin{tabular}{ll}\n(0, 1) 0.42\\\\(1, 0) 0.58\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.25\\\\(1, 0) 0.75\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.25\\\\(1, 0) 0.75\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.25\\\\(1, 0) 0.75\\end{tabular} \\\\ \\hline\n3 & \\begin{tabular}{ll}\n(0, 1) 0.58\\\\(1, 0) 0.42\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.35\\\\(2, 0) 0.31\\\\(1, 0) 0.15\\\\(1, 1) 0.14\\\\(0, 2) 0.05\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.35\\\\(2, 0) 0.31\\\\(1, 0) 0.15\\\\(1, 1) 0.14\\\\(0, 2) 0.05\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.35\\\\(2, 0) 0.31\\\\(1, 0) 0.15\\\\(1, 1) 0.14\\\\(0, 2) 0.05\\end{tabular} \\\\ \\hline\n4 & \\begin{tabular}{ll}\n(0, 1) 0.66\\\\(1, 0) 0.34\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.37\\\\(2, 0) 0.20\\\\(1, 0) 0.13\\\\(1, 1) 0.17\\\\(0, 2) 0.14\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.37\\\\(2, 0) 0.20\\\\(1, 0) 0.13\\\\(1, 1) 0.17\\\\(0, 2) 0.14\\end{tabular}& \\begin{tabular}{ll}\n(0, 1) 0.37\\\\(2, 0) 0.20\\\\(1, 0) 0.13\\\\(1, 1) 0.17\\\\(0, 2) 0.14\\end{tabular} \\\\\n\t\\end{tabular}\n\t}\n\t\n\t\\caption{For given $X$ and $Y$, the possible events $e \\in E$ (the units destroyed for the attacker and defender) which can occur, with their respective probabilities.} \\label{fig:2}\n\t\\end{figure}\n\t\n\tThese probabilities are the same for any larger number of units on either side. This is logical, since neither the attacking nor the defending strategy changes when more units are added to either side.\n\t\n\t\\section{Expected gain}\n\t\n\tJust as we calculated the \\emph{gain} for the defender in different scenarios, we now calculate the gain for the attacker, given the number of attacking units $X$ and defending units $Y$. The results are shown in figure \\ref{fig:3}.\n\t\n\t\\begin{figure}\n\t\n\t\\makebox[\\textwidth]{\n\t\\begin{tabular}{c||ccccc}\n\t\t$X$ \\,\\textbackslash \\, $Y$ & 1 & 2 & 3 & 4 & 5 \\\\\n\t\t\\hline\n\t\t1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00  \\\\ \n\t\t2 & -0.17 & -0.49 & -0.49 & -0.49 & -0.49  \\\\ \n\t\t3 & 0.16 & -0.30 & -0.30 & -0.30 & -0.30  \\\\ \n\t\t4 & 0.32 & 0.11 & 0.11 & 0.11 & 0.11  \\\\ \n\t\t5 & 0.32 & 0.11 & 0.11 & 0.11 & 0.11  \\\\ \n\t\\end{tabular}\n\t}\n\t\n\t\\caption{The gain of the attacker, given $X$ and $Y$, with the defender defending optimally.} \\label{fig:3}\n\t\\end{figure}\n\t\n\t\\section{Winning probabilities}\n\t\n\tFinally we calculate the winning probabilities for the attacker, given the attacking units $X$ and defending units $Y$. Let $S = (X, Y)$ be the state of the fight. This state can change in any of the ways in the event set $E$. For any $e \\in E$, there is a probability $P_{e,X,Y}$ that this event occurs. 
Once such an event occurs, the state changes to a new state, for which the same things can be said.\n\t\n\tThis chain of events can be viewed as a Discrete Time Markov Chain, as no previous state influences the future of the current state. The chain will always enter an absorbing state which is of the form $(X, 0)$ or $(1, Y)$. These states are absorbing, because the attacker has won when the defender has no more units, and the defender wins once the attacker is left with one unit.\n\t\n\tTo calculate the probability that the attacker wins, we set up the recurrence relation\n\t$$\n\tP_\\text{win}(X, Y) = \\sum_{e \\in E} P_\\text{win}(X - e_\\text{att}, Y-e_\\text{def}) \\cdot P_{e, X, Y}, \n\t$$\n\tusing \n\t$$\n\tP_\\text{win}(X,0) = 1, P_\\text{win}(1, Y) = 0\n\t$$\n\tfor any $X$ and $Y$. The quantities $e_\\text{att}$ and $e_\\text{def}$ are the number of units that are destroyed of respectively the attacker and defender once event $e$ occurs.\n\t\n\tThis recurrence relation can be solved recursively for any $X$ and $Y$, giving the results in figure \\ref{fig:4}.\n\t\n\t\\begin{figure}\n\t\\makebox[\\textwidth]{\n\t\\begin{tabular}{c||cccccccccccccccc}\nX \\,\\textbackslash \\, Y &   0   &   1   &   2    &  3   &   4   &   5   &   6   &   7    &  8    &  9   &   10  &   11   &  12 &    13    & 14&     15  \\\\\n\\hline\n1    &    - &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 \\\\\n2    &    1.00 &   0.42 &   0.11 &   0.03 &   0.01 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 \\\\ \n3    &    1.00 &   0.75 &   0.39 &   0.20 &   0.09 &   0.04 &   0.02 &   0.01 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 &   0.00 \\\\ \n4    &    1.00 &   0.92 &   0.67 &   0.47 &   0.31 &   0.20 &   0.12 &   0.08 &   0.05 &   0.03 &   0.02 &   0.01 &   0.01 &   0.00 &   0.00 &   0.00 \\\\ \n5    &    1.00 &   0.97 &   0.81 &   0.64 &   0.48 &   0.35 &   0.25 &   0.17 &   0.12 &   0.08 &   0.05 &   0.03 &   0.02 &   0.01 &   0.01 &   0.01 \\\\ \n6    &    1.00 &   0.99 &   0.90 &   0.78 &   0.64 &   0.51 &   0.39 &   0.29 &   0.21 &   0.15 &   0.11 &   0.08 &   0.05 &   0.04 &   0.02 &   0.02 \\\\ \n7    &    1.00 &   1.00 &   0.95 &   0.86 &   0.76 &   0.64 &   0.52 &   0.41 &   0.32 &   0.25 &   0.18 &   0.14 &   0.10 &   0.07 &   0.05 &   0.04 \\\\ \n8    &    1.00 &   1.00 &   0.97 &   0.92 &   0.84 &   0.74 &   0.64 &   0.53 &   0.44 &   0.35 &   0.27 &   0.21 &   0.16 &   0.12 &   0.09 &   0.07 \\\\ \n9    &    1.00 &   1.00 &   0.99 &   0.95 &   0.90 &   0.82 &   0.74 &   0.64 &   0.55 &   0.45 &   0.37 &   0.30 &   0.24 &   0.18 &   0.14 &   0.11 \\\\ \n10   &    1.00 &   1.00 &   0.99 &   0.97 &   0.94 &   0.88 &   0.81 &   0.73 &   0.64 &   0.56 &   0.47 &   0.39 &   0.32 &   0.26 &   0.20 &   0.16 \\\\ \n11   &    1.00 &   1.00 &   1.00 &   0.98 &   0.96 &   0.92 &   0.87 &   0.80 &   0.73 &   0.65 &   0.56 &   0.48 &   0.41 &   0.34 &   0.28 &   0.22 \\\\ \n12   &    1.00 &   1.00 &   1.00 &   0.99 &   0.98 &   0.95 &   0.91 &   0.86 &   0.80 &   0.73 &   0.65 &   0.57 &   0.50 &   0.42 &   0.36 &   0.30 \\\\ \n13   &    1.00 &   1.00 &   1.00 &   0.99 &   0.99 &   0.97 &   0.94 &   0.90 &   0.85 &   0.79 &   0.72 &   0.65 &   0.58 &   0.51 &   0.44 &   0.37 \\\\ \n14   &    1.00 &   1.00 &   1.00 &   1.00 &   0.99 &   0.98 &   0.96 &   0.93 &   0.89 &   0.84 &   0.79 &   0.72 &   0.66 &   0.59 &   0.52 &   0.45 \\\\ \n15   &    1.00 
&   1.00 &   1.00 &   1.00 &   0.99 &   0.99 &   0.97 &   0.95 &   0.92 &   0.89 &   0.84 &   0.78 &   0.72 &   0.66 &   0.59 &   0.53 \\\\\n\t\\end{tabular}}\n\t\n\t\\includegraphics[scale=0.6]{Plot}\n\t\n\t\\caption{The probability of winning a fight with $X$ attacking and $Y$ defending units, or $P_\\text{win}(X,Y)$. Below that is a plot of the same data.} \\label{fig:4}\n\t\\end{figure}\n\t\n\t\\section{conclusion}\n\t\n\tFrom the results of the last sections we can deduce two conclusions:\n\t\\begin{enumerate}\n\t\t\\item It is always better to have more units than your opponent. The more units you have while the opponents units stays the same amount, the more your winning probabilities increase.\n\t\n\t\t\\item It is better for the attacker to save up units, even if the defender is doing the same. For equal $X$ and $Y$, the winning probabilities increase for larger $X$ and $Y$. \n\t\\end{enumerate}\n\t\n\t\\section{Further ideas}\n\t\n\tThis analysis is made solely for the game of Risk. Changing the rules of the game, any found results can be expanded easily. For example, one might change the number of states the dice can assume or the rules when and how many dice can be used to attack and defend. One can also create an more interesting problem by adding certain states in which units are added to the existing ones, instead of removed.\n\t\n\\end{document}", "meta": {"hexsha": "a6c47aae3fffdebea9b1f0ff65c944eca36039a5", "size": 14450, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Risk.tex", "max_stars_repo_name": "hiddewie/Risk", "max_stars_repo_head_hexsha": "816008d1c33590a92dbb6713ad1c4c666abd2730", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Risk.tex", "max_issues_repo_name": "hiddewie/Risk", "max_issues_repo_head_hexsha": "816008d1c33590a92dbb6713ad1c4c666abd2730", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Risk.tex", "max_forks_repo_name": "hiddewie/Risk", "max_forks_repo_head_hexsha": "816008d1c33590a92dbb6713ad1c4c666abd2730", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.8095238095, "max_line_length": 561, "alphanum_fraction": 0.6219377163, "num_tokens": 5628, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7690802264851918, "lm_q1q2_score": 0.571555820380236}}
{"text": "\\documentclass[acmsmall]{acmart}\n\n\\usepackage{algorithm2e}\n\\usepackage{amsthm}\n\n\\usepackage{todonotes}\n\\newcommand{\\sotodo}{\\todo[color=green]}\n\\newcommand{\\sotodoinline}{\\todo[color=green,inline=true]}\n\n\n\n\\AtBeginDocument{%\n\t\\providecommand\\BibTeX{{%\n\t\t\t\\normalfont B\\kern-0.5em{\\scshape i\\kern-0.25em b}\\kern-0.8em\\TeX}}}\n\n\\begin{document}\n\t\n\\title{PowerNumbers.jl: a fast approach to automatic asymptotics}\n\n\\author{Sheehan Olver}\n\\email{s.olver@imperial.ac.uk}\n\\affiliation{%\n\t\\institution{Imperial College}\n\t\\city{London}\n\t\\state{England}\n\t\\postcode{SW7 2AZ}\n}\n\n\\author{Matthew Rees}\n\\email{matthew.rees.16@ucl.ac.uk}\n\\affiliation{%\n\t\\institution{University College London}\n\t\\city{London}\n\t\\state{England}\n\t\\postcode{WC1E 6BT}\n}\n\n\\begin{abstract}\n\tWe develop a scheme for quick arithmetic on asymptotic series, evaluated to just one or two terms. This generalizes the concept of dual numbers in automatic differentiation to support general algebraic powers, including negative powers to capture behaviour at infinity.  We give a full description of the algebraic rules for these newly defined {\\it power numbers}, with justification of guaranteed equivalence to asymptotic algebra. Some example applications follow, including evaluation of complicated rational functions at infinity defined in terms of linear algebra, hypergeometric functions, and Stieltjes transforms over intervals for functions with power-like singularities and the endpoints.\n\\end{abstract}\n\n\\begin{CCSXML}\n\t<ccs2012>\n\t<concept>\n\t<concept_id>10002950.10003705.10003707</concept_id>\n\t<concept_desc>Mathematics of computing~Solvers</concept_desc>\n\t<concept_significance>500</concept_significance>\n\t</concept>\n\t</ccs2012>\n\\end{CCSXML}\n\n\\ccsdesc[500]{Mathematics of computing~Solvers}\n\n\\keywords{none}\n\n\\maketitle\n\n\\section{Introduction}\n\n{\\it Dual numbers} are a simple algebraic structure that underlies forward mode automatic-differentiation, with a convenient representation in terms of $2 \\times 2$ matrices:\n\n\\begin{definition}\nLet $K$ be a field. The set of matrices\n$$\\mathbf{D}^K = \\left\\{\\begin{pmatrix}a & b \\\\ 0 & a \\end{pmatrix} : a,b \\in K\\right\\}$$ \nare called the set of \\textbf{dual numbers}. The elements are typically denoted by\n$$\na + b \\epsilon \\qquad \\hbox{where} \\qquad \\epsilon = \\begin{pmatrix} 0 & 1 \\\\ & 0 \\end{pmatrix}.\n$$\n\\end{definition}\n\nThe property that $\\epsilon^2 = 0 $ suffices to guarantee that general analytic matrix functions applied to dual numbers encode the derivative at the point $a$, namely:\n$$\nf(a + b \\epsilon) = f(a) + b f'(b) \\epsilon\n$$\n\nThus computations using dual numbers preserve the most important information  encoded in the Taylor Expansion. \n\nIn analogy to dual numbers, we seek a simple algebraic structure that encodes the leading-order information of Hahn-form asymptotic expansions. Asymptotic expansions are important computational tools in applied mathematics, that can reduce complicated functions to simple forms with specified accuracy. Many such expansions are of the Hahn form:\n\\begin{equation}\nS(z) = \\sum_{n=1}^{\\infty}a_nz^{\\alpha_n} \\quad as \\quad z \\rightarrow 0\n\\end{equation}\nwhere the $\\alpha_n \\in \\mathbb{R}$ are strictly increasing. 
We call the set of all such expansions $\\mathbb{S}$.\n\nBy restricting these series to the terms most significant in the limit, we can define the set of \\textbf{power numbers}. Specifically, we will be restricting to the first two terms with $\\alpha \\leq \\beta$, so that we are essentially considering:\n$$S(z) = az^\\alpha + bz^\\beta + o(z^\\beta)$$\n\nBy giving algebraic rules that are consistent with their counterparts in $\\mathbb{S}$, we will allow for fast manipulation without compromising on accuracy or losing important order information. \n\nIn \\textbf{Section 2} we will rigorously describe the algorithms that govern the basic operations of addition, subtraction, multiplication and division on power numbers. These have been defined to correspond with what we expect from similar operations on $\\mathbb{S}$, and we provide a theorem guaranteeing this correspondence. Furthermore, this guarantee extends to exponentiation and analytic functions of power numbers.\n\nIn \\textbf{Section 3} we will give some of the possible applications of the construction, including the evaluation of rational functions at infinity and finding limiting expressions of hypergeometric functions. We also provide an implementation written in the Julia language, \"PowerNumbers.jl\".\n\n\\section{Formal Description}\n\n\\begin{definition}\nLet $K$ be a field. The set of functions of $\\epsilon \\in [0,\\infty)$,\n$$\\mathbf{PN}^K = \\{a\\epsilon^\\alpha + b\\epsilon^\\beta : a,b \\in K;\\alpha,\\beta\\in\\mathbb{R}\\cup\\{\\infty\\}; \\alpha\\leq\\beta\\}$$ \nis called the set of \\textbf{power numbers}. \n\\end{definition}\n\nIn the case $\\alpha=\\beta$, we write only one term $(a+b)\\epsilon^\\beta$. Notationally, $\\epsilon^0$ is omitted. While we do not have a one-to-one correspondence between $\\mathbf{PN}^K$ and $\\mathbb{S}$ (there is usually a choice between one or two terms), this is not a major concern. Power numbers are not considered to be equal unless the values $(a,b,\\alpha,\\beta)$ are all equal.\n\nWe enforce the equality $0\\epsilon^\\alpha + b\\epsilon^\\beta = b\\epsilon^\\beta$. 
However, in general, $a\\epsilon^\\alpha + 0\\epsilon^\\beta \\neq a\\epsilon^\\alpha$.\nBy defining $(+,*)$ below, we acquire the double monoid $$\\mathbb{PN}^K = (\\mathbf{PN}^K,+,*)$$\n\n\\subsection{Basic Operations}\n\\begin{definition}\n\\textbf{Addition} $+:\\mathbf{PN}^K \\times \\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ is described by an algorithm.\n\n\\begin{algorithm}[H]\n\t\\SetAlgoLined\n\t\\KwData{$a\\epsilon^\\alpha + b\\epsilon^\\beta, c\\epsilon^\\gamma + d\\epsilon^\\delta \\in \\mathbf{PN}^K$; assume WLOG that $\\beta \\leq \\delta$}\n\t\\KwResult{$(a\\epsilon^\\alpha + b\\epsilon^\\beta) + (c\\epsilon^\\gamma + d\\epsilon^\\delta) = p\\epsilon^\\zeta + q\\epsilon^\\eta \\in \\mathbf{PN}^K$}\n\t\\uIf{$\\beta=\\delta$}{\n\t\t\\uIf{$\\gamma=\\beta$}{\n\t\t\t$p = a$, $q = b + c + d$, $\\zeta = \\alpha$, $\\eta = \\beta$\n\t\t}\n\t\t\\uElseIf{$\\alpha<\\gamma<\\beta$}{\n\t\t\t$p = a$, $q = c$, $\\zeta = \\alpha$, $\\eta = \\gamma$\n\t\t}\n\t\t\\uElseIf{$\\gamma=\\alpha$}{\n\t\t\t$p = a + c$, $q = b + d$, $\\zeta = \\alpha$, $\\eta = \\beta$\n\t\t}\n\t\t\\Else{\n\t\t\t$p = c$, $q = a$, $\\zeta = \\gamma$, $\\eta = \\alpha$\n\t\t}\n\t}\n\t\\Else{\n\t\t\\uIf{$\\beta<\\gamma$}{\n\t\t\t$p = a$, $q = b$, $\\zeta = \\alpha$, $\\eta = \\beta$\n\t\t}\n\t\t\\uElseIf{$\\gamma=\\beta$}{\n\t\t\t$p = a$, $q = b + c$, $\\zeta = \\alpha$, $\\eta = \\beta$\n\t\t}\n\t\t\\uElseIf{$\\alpha<\\gamma<\\beta$}{\n\t\t\t$p = a$, $q = c$, $\\zeta = \\alpha$, $\\eta = \\gamma$\n\t\t}\n\t\t\\uElseIf{$\\gamma=\\alpha$}{\n\t\t\t$p = a + c$, $q = b$, $\\zeta = \\alpha$, $\\eta = \\beta$\n\t\t}\n\t\t\\Else{\n\t\t\t$p = c$, $q = a$, $\\zeta = \\gamma$, $\\eta = \\alpha$\n\t\t}\n\t}\n\t\n\t\\caption{Summing Power Numbers}\t\n\\end{algorithm}\n\nThe additive identity for Power Numbers is $0\\epsilon^\\infty$.\n\\end{definition}\n\n\\begin{definition}\n\\textbf{Multiplication} $*:\\mathbf{PN}^K \\times \\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ can be most simply expressed as addition of Power Numbers:\n\\begin{displaymath}\n(a\\epsilon^\\alpha + b\\epsilon^\\beta) * (c\\epsilon^\\gamma + d\\epsilon^\\delta)\n= (ac\\epsilon^{\\alpha+\\gamma} + ad\\epsilon^{\\alpha+\\delta}) + (bc\\epsilon^{\\beta+\\gamma}+ bd\\epsilon^{\\beta+\\delta})\n= p\\epsilon^\\zeta + q\\epsilon^\\eta \\in \\mathbf{PN}^K\n\\end{displaymath}\n\nThe multiplicative identity for Power Numbers is $1 + 0\\epsilon^\\infty$.\n\\end{definition}\n\t\n\nWhile $\\mathbb{PN}^K_\\epsilon$ is a monoid under both addition and multiplication, we can define two operations that have some of the properties we expect for subtraction and division. 
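\n\nBefore giving those two operations, here is a minimal Python sketch (ours, purely illustrative; not the PowerNumbers.jl API) of the addition and multiplication just defined. It merges the term lists, combines equal exponents, discards terms finer than the coarser of the two second exponents, and keeps the two leading terms; zero coefficients are pruned, which is slightly stronger than the convention above.\n\\begin{verbatim}\nINF = float('inf')\n\nclass PowerNumber:\n    # a*eps^alpha + b*eps^beta with alpha <= beta\n    def __init__(self, a, alpha, b, beta):\n        assert alpha <= beta\n        self.terms = [(a, alpha), (b, beta)]\n\n    def __add__(self, other):\n        # Resolution: the coarser of the two second exponents.\n        resolution = min(self.terms[1][1], other.terms[1][1])\n        combined = {}\n        for coeff, exp in self.terms + other.terms:\n            if coeff != 0 and exp <= resolution:\n                combined[exp] = combined.get(exp, 0) + coeff\n        kept = sorted(combined.items())[:2]\n        while len(kept) < 2:\n            kept.append((INF, 0))   # pad with 0*eps^inf\n        (e1, c1), (e2, c2) = kept\n        return PowerNumber(c1, e1, c2, e2)\n\n    def __mul__(self, other):\n        # Multiplication reduces to addition, as in the definition.\n        (a, al), (b, be) = self.terms\n        (c, ga), (d, de) = other.terms\n        return (PowerNumber(a * c, al + ga, a * d, al + de)\n                + PowerNumber(b * c, be + ga, b * d, be + de))\n\nz = PowerNumber(1, -1, 0, INF)   # z = eps^(-1)\nprint((z * z + z).terms)         # [(1, -2), (1, -1)]\n\\end{verbatim}\nSubtraction and the pseudo-inverse defined next extend this sketch in the same spirit.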
\n\n\\begin{definition} \n\\textbf{Subtraction} $-:\\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ is defined as:\n$$-(a\\epsilon^\\alpha + b\\epsilon^\\beta) = (-a)\\epsilon^\\alpha + (-b)\\epsilon^\\beta$$\nNaturally, $-:\\mathbf{PN}^K \\times \\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ is defined as $A + (-(B))$, where $A, B \\in \\mathbf{PN}^K$.\n\\end{definition}\n\n\\begin{definition}\nThe \\textbf{multiplicative pseudo-inverse} $inv:\\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ is slightly more complicated.\n\n\\begin{algorithm}[H]\n\t\\SetAlgoLined\n\t\\KwData{$a\\epsilon^\\alpha + b\\epsilon^\\beta \\in \\mathbf{PN}^K$}\n\t\\KwResult{$$\\frac{1}{a\\epsilon^\\alpha + b\\epsilon^\\beta} = p\\epsilon^\\zeta + q\\epsilon^\\eta \\in \\mathbf{PN}^K$$}\n\t\\uIf{$\\alpha=\\infty$}{\n\t\tNot defined.\n\t}\n\t\\uElseIf{$\\alpha=\\beta$}{\n\t\t$p = 0$, $q = \\frac{1}{a+b}$, $\\zeta = -\\alpha$, $\\eta = -\\alpha$\n\t}\n\t\\Else{\n\t\t$p = \\frac{1}{a}$, $q = -\\frac{b}{a^2}$, $\\zeta = -\\alpha$, $\\eta = \\beta-2\\alpha$\n\t}\n\t\n\t\\caption{Multiplicative Inversion}\t\n\\end{algorithm}\n\n\\textbf{Division} $\\div:\\mathbf{PN}^K \\times \\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ is defined as $A \\div B = A * (inv(B))$, where $A, B \\in \\mathbf{PN}^K$.\n\\end{definition}\n\nThis matches the dual numbers case, where $\\alpha = 0$, $\\beta = 1$.\n\nThere is a special case wherein the pseudo-negative and -inverse are the true negative and inverse, which motivates the following\n\\begin{proposition}\nFor $A = a\\epsilon^\\alpha + b\\epsilon^\\beta \\in \\mathbf{PN}^K$, we have that $A-A=0\\epsilon^\\infty$ and $A \\div A = 1 + 0\\epsilon^\\infty$ if and only if $\\beta = \\infty$.\n\\end{proposition}\n\n\\subsection{Exponentiation, Analytic Functions \\& Asymptotic Series}\n\nThe definition of $A^m$ for $A \\in \\mathbf{PN}^K$, $m \\in \\mathbb{Z}$ follows naturally from the above definitions of multiplication and division. \\\\\nSince we will generally consider $\\epsilon$ to be small, we can extend any analytic function $h:K \\rightarrow K$ to $h:\\mathbf{PN}^K \\rightarrow \\mathbf{PN}^K$ for $\\alpha \\geq 0$. This is via Taylor series:\n\n$$h(a\\epsilon^0+b\\epsilon^\\beta) = h(a+b\\epsilon^\\beta) = h(a) + b\\epsilon^\\beta h'(a)$$\n$$h(a\\epsilon^\\alpha+b\\epsilon^\\beta) = h(0) + a\\epsilon^\\alpha h'(0) \\quad \\text{where} \\quad \\alpha > 0$$\n\nNow, consider an asymptotic series $S(z)$ that is also a Hahn series, \\textit{i.e.}\\ $S \\in \\mathbb{S}$ as in (1). The primary significance of power numbers comes from the following \n\\begin{proposition}\n\tLet $f(z) = az^\\alpha + bz^\\beta + o(z^\\beta)$,\t$g(z) = cz^\\gamma + dz^\\delta + o(z^\\delta)$ be series of the form in $(1)$.\n\tDefine the associated power numbers, $F = a\\epsilon^\\alpha + b\\epsilon^\\beta$, $G = c\\epsilon^\\gamma + d\\epsilon^\\delta$.\n\tThen, where $\\cdot$ is one of $(+$, $-$, $*$, $/)$ we have that \n\t$$f(z) \\cdot g(z) = pz^\\zeta + qz^\\eta + o(z^\\eta)$$\n\t$$\\Leftrightarrow$$\n\t$$F \\cdot G = p\\epsilon^\\zeta + q\\epsilon^\\eta$$\n\tFurthermore, for any analytic function $h:K \\rightarrow K$ we have\n\t$$h(f(z)) = pz^\\zeta + qz^\\eta + o(z^\\eta)$$ \n\t$$\\Leftrightarrow$$\n\t$$h(F) = p\\epsilon^\\zeta + q\\epsilon^\\eta$$\n\\end{proposition}\n\n\\section{Examples}\n\n\\subsection{Rational Functions at Infinity}\nWe can get a simple system for evaluating a rational function in the infinite limit. 
For example, consider:\n$$f(z) = \\frac{z^3+z^2+1}{z^3+z}$$\nTake the power number $z = \\epsilon^{-1}$. By the arithmetic described above we have:\n$$\\frac{(\\epsilon^{-3})+(\\epsilon^{-2})+1}{(\\epsilon^{-3})+(\\epsilon^{-1})} = \\frac{\\epsilon^{-3}}{\\epsilon^{-3}}=\\epsilon^0$$\nThis gives the correct limit as $\\epsilon \\rightarrow 0$.\n\n\n\\section{Conclusion}\n\n\n\\end{document}\n", "meta": {"hexsha": "c0c0759afb6a5ab8207f27538809134a8ebf45f7", "size": 11174, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acmdraft/powernumbers.tex", "max_stars_repo_name": "idlemat/PowerNumbers.jl", "max_stars_repo_head_hexsha": "6eb42d554982d139f7a4826ceb9bb88540596f25", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "acmdraft/powernumbers.tex", "max_issues_repo_name": "idlemat/PowerNumbers.jl", "max_issues_repo_head_hexsha": "6eb42d554982d139f7a4826ceb9bb88540596f25", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-08-16T11:44:02.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-29T18:03:44.000Z", "max_forks_repo_path": "acmdraft/powernumbers.tex", "max_forks_repo_name": "idlemat/PowerNumbers", "max_forks_repo_head_hexsha": "6eb42d554982d139f7a4826ceb9bb88540596f25", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-16T14:37:31.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-16T14:37:31.000Z", "avg_line_length": 46.9495798319, "max_line_length": 700, "alphanum_fraction": 0.6958117057, "num_tokens": 3541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7690802317779601, "lm_q1q2_score": 0.5715558155640584}}
{"text": "\\documentclass[namecite, fleqn]{goose-article}\n\n\\title{Non-linear elasticity}\n\n\\author{Tom W.J.\\ de Geus}\n\n\\hypersetup{pdfauthor={T.W.J. de Geus}}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nThis constitutive model encompasses a non-linear, but history independent,\nrelation between the Cauchy stress, $\\bm{\\sigma}$,\nand the linear strain tensor, $\\bm{\\varepsilon}$, i.e.:\n\\begin{equation*}\n    \\bm{\\sigma} = f \\left( \\bm{\\varepsilon} \\right)\n\\end{equation*}\nThe model is implemented in 3-D, hence it can directly be used for either 3-D or\n2-D plane strain problems.\n\\end{abstract}\n\n\\section{Constitutive model}\n\nThe following strain-energy is defined:\n\\begin{equation}\n    U ( \\bm{\\varepsilon} )\n    = \\frac{9}{2} K \\varepsilon_\\mathrm{m}\n    + \\frac{ \\sigma_0 \\, \\varepsilon_0 }{ n+1 }\n    \\left( \\frac{\\varepsilon_\\mathrm{eq}}{\\mathrm{\\varepsilon_0}} \\right)^{n+1}\n\\end{equation}\nwhere $K$ is the bulk modulus, $\\varepsilon_0$ and $\\sigma_0$ are a reference strain and\nstress respectively, and $n$ is an exponent that sets the degree of non-linearity.\nFinally $\\varepsilon_\\mathrm{m}$ and $\\varepsilon_\\mathrm{eq}$ are the hydrostatic and\nequivalent strains (see \\cref{sec:ap:strain}).\n\nThis leads to the following stress--strain relation:\n\\begin{equation}\n\\label{eq:stress}\n    \\bm{\\sigma}\n    = \\frac{\\partial U}{\\partial \\bm{\\varepsilon}}\n    = 3 K \\varepsilon_\\mathrm{m} \\, \\bm{I}\n    + \\frac{2}{3} \\frac{\\sigma_0}{\\varepsilon_0^n} \\,\n    \\varepsilon_\\mathrm{eq}^{n-1} \\, \\bm{\\varepsilon}_\\mathrm{d}\n\\end{equation}\nsee \\cref{sec:ap:nomenclature} for nomenclature.\n\n\\section{Consistent tangent}\n\nThe consistent tangent maps a variation in strain, $\\delta \\bm{\\varepsilon}$,\nto a variation in stress, $\\delta \\bm{\\sigma}$, as follows\n\\begin{equation}\n  \\delta \\bm{\\sigma} = \\mathbb{C} : \\delta \\bm{\\varepsilon}\n\\end{equation}\nThe tangent, $\\mathbb{C}$, thus corresponds to the derivative of \\cref{eq:stress} w.r.t.\\ strain.\nFor this, the chain rule is employed:\n\\begin{equation}\n    \\mathbb{C}\n    = \\frac{\\partial}{\\partial \\bm{\\varepsilon}}\n    \\bigg[\\;\n        3 K \\varepsilon_\\mathrm{m} \\, \\bm{I}\n    \\;\\bigg]\n    + \\frac{\\partial}{\\partial \\bm{\\varepsilon}_\\mathrm{d}}\n    \\bigg[\\;\n        \\frac{2}{3} \\frac{\\sigma_0}{\\varepsilon_0^n} \\,\n        \\varepsilon_\\mathrm{eq}^{n-1} \\, \\bm{\\varepsilon}_\\mathrm{d}\n    \\;\\bigg]\n    : \\frac{\\partial \\bm{\\varepsilon}_\\mathrm{d}}{\\partial \\bm{\\varepsilon}}\n\\end{equation}\nWhere:\n\\begin{itemize}\n\n    \\item the derivative of the volumetric part reads\n    \\begin{equation}\n        \\frac{\\partial}{\\partial \\bm{\\varepsilon}}\n        \\bigg[\\;\n            3 K \\varepsilon_\\mathrm{m} \\, \\bm{I}\n        \\;\\bigg]\n        = K \\bm{I} \\otimes \\bm{I}\n    \\end{equation}\n\n    \\item the chain rule for the deviatoric part reads\n    \\begin{align}\n        \\frac{\\partial}{\\partial \\bm{\\varepsilon}_\\mathrm{d}}\n        \\bigg[\\;\n            \\varepsilon_\\mathrm{eq}^{n-1} \\, \\bm{\\varepsilon}_\\mathrm{d}\n        \\;\\bigg]\n        &=\n        \\frac{\n            \\partial \\big[ \\varepsilon_\\mathrm{eq}^{n-1} \\big]\n        }{\n            \\partial \\bm{\\varepsilon}_\\mathrm{d}\n        } \\otimes \\bm{\\varepsilon}_\\mathrm{d}\n        + \\varepsilon_\\mathrm{eq}^{n-1}\n        \\frac{\n            \\partial 
\\bm{\\varepsilon}_\\mathrm{d}\n        }{\n            \\partial \\bm{\\varepsilon}_\\mathrm{d}\n        }\n        \\\\\n        &=\n        \\tfrac{2}{3} (n-1) \\, \\varepsilon_\\mathrm{eq}^{n-3} \\,\n        \\bm{\\varepsilon}_\\mathrm{d} \\otimes \\bm{\\varepsilon}_\\mathrm{d}\n        + \\varepsilon_\\mathrm{eq}^{n-1} \\, \\mathbb{I}\n    \\end{align}\n\n    \\item and it has been used that\n    \\begin{align}\n        \\frac{\\partial}{\\partial \\bm{\\varepsilon}_\\mathrm{d}}\n        \\bigg[\\;\n            \\varepsilon_\\mathrm{eq}^{n-1}\n        \\;\\bigg]\n        &= (n-1)\\, \\varepsilon_\\mathrm{eq}^{n-2} \\,\n        \\frac{2}{3} \\frac{\\bm{\\varepsilon}_\\mathrm{d}}{\\varepsilon_\\mathrm{eq}}\n        \\\\\n        &= \\tfrac{2}{3} (n-1) \\,\n        \\varepsilon_\\mathrm{eq}^{n-3} \\, \\bm{\\varepsilon}_\\mathrm{d}\n    \\end{align}\n\n\\end{itemize}\n\nCombining the above yields:\n\\begin{align}\n    \\mathbb{C}\n    &= K \\bm{I} \\otimes \\bm{I} +\n    \\frac{2}{3} \\frac{\\sigma_0}{\\varepsilon_0^n}\n    \\bigg(\n        \\tfrac{2}{3} (n-1) \\, \\varepsilon_\\mathrm{eq}^{n-3}\n        \\bm{\\varepsilon}_\\mathrm{d} \\otimes \\bm{\\varepsilon}_\\mathrm{d}\n        + \\varepsilon_\\mathrm{eq}^{n-1} \\mathbb{I}\n    \\bigg) : \\mathbb{I}_\\mathrm{d} \\\\\n    &= K \\bm{I} \\otimes \\bm{I} +\n    \\frac{2}{3} \\frac{\\sigma_0}{\\varepsilon_0^n}\n    \\bigg(\n        \\tfrac{2}{3} (n-1) \\, \\varepsilon_\\mathrm{eq}^{n-3}\n        \\bm{\\varepsilon}_\\mathrm{d} \\otimes \\bm{\\varepsilon}_\\mathrm{d}\n        + \\varepsilon_\\mathrm{eq}^{n-1} \\, \\mathbb{I}_\\mathrm{d}\n    \\bigg)\n\\end{align}\n\n\\section{Consistency check}\n\nTo check if the derived tangent $\\mathbb{C}$ is correct, a \\emph{consistency check} can be performed.\nA (random) perturbation $\\delta \\bm{\\varepsilon}$ is applied,\nand the resulting change in stress is compared to that predicted by the tangent.\nFor the general case of linearisation, the following holds:\n\\begin{equation}\n    \\bm{\\sigma}\\big( \\bm{\\varepsilon}_\\star + \\delta \\bm{\\varepsilon} \\big) =\n    \\bm{\\sigma}\\big( \\bm{\\varepsilon}_\\star \\big) +\n    \\mathbb{C} \\big( \\bm{\\varepsilon}_\\star \\big) : \\delta \\bm{\\varepsilon} +\n    \\mathcal{O}(\\delta \\bm{\\varepsilon}^2)\n\\end{equation}\nor\n\\begin{equation}\n    \\underbrace{\n        \\bm{\\sigma}\\big( \\bm{\\varepsilon}_\\star + \\delta \\bm{\\varepsilon} \\big) -\n        \\bm{\\sigma}\\big( \\bm{\\varepsilon}_\\star \\big)\n    }_{\n        \\displaystyle \\delta \\bm{\\sigma}\n    } -\n    \\mathbb{C} \\big( \\bm{\\varepsilon}_\\star \\big) : \\delta \\bm{\\varepsilon} =\n    \\mathcal{O}(\\delta \\bm{\\varepsilon}^2)\n\\end{equation}\nThis allows the introduction of a relative error\n\\begin{equation}\n  \\eta =\n  \\Big|\\Big|\n        \\delta \\bm{\\sigma} -\n        \\mathbb{C}(\\bm{\\varepsilon}_\\star) : \\delta \\bm{\\varepsilon}\n  \\Big|\\Big|\n  /\n  \\Big|\\Big| \\delta \\bm{\\sigma} \\Big|\\Big|\n\\end{equation}\nThis \\emph{truncation error} thus scales as $\\eta \\sim || \\delta \\bm{\\varepsilon} ||^2$\nas depicted in \\cref{fig:consistency:expected}.\nAs soon as the error becomes sufficiently small, the numerical \\emph{rounding error}\nbecomes dominant; its scaling is also included in \\cref{fig:consistency:expected}.\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=.5\\textwidth]{figures/consistency}\n    \\caption{Expected behaviour of the consistency check, see \\citet[p.~9]{Heath2002}.}\n    
\\label{fig:consistency:expected}\n\\end{figure}\n\nThe measurement of $\\eta$ as a function of $|| \\delta \\bm{\\varepsilon} ||$, as depicted in\n\\cref{fig:consistency}, indeed matches the prediction in \\cref{fig:consistency:expected}.\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=.5\\textwidth]{examples/consistency}\n    \\caption{Measured consistency check, cf.\\ \\cref{fig:consistency:expected}.}\n    \\label{fig:consistency}\n\\end{figure}\n\n\\bibliography{library}\n\n\\appendix\n\\vfill\\newpage\n\n\\section{Nomenclature}\n\\label{sec:ap:nomenclature}\n\n\\paragraph{Tensor products}\n\\vspace*{.5em}\n\n\\begin{itemize}\n\n    \\item Dyadic tensor product\n    \\begin{align}\n        \\mathbb{C} &= \\bm{A} \\otimes \\bm{B} \\\\\n        C_{ijkl} &= A_{ij} \\, B_{kl}\n    \\end{align}\n\n    \\item Double tensor contraction\n    \\begin{align}\n        C &= \\bm{A} : \\bm{B} \\\\\n        &= A_{ij} \\, B_{ji}\n    \\end{align}\n\n\\end{itemize}\n\n\\paragraph{Tensor decomposition}\n\\vspace*{.5em}\n\n\\begin{itemize}\n\n    \\item Deviatoric part $\\bm{A}_\\mathrm{d}$ of an arbitrary tensor $\\bm{A}$:\n    \\begin{equation}\n        \\mathrm{tr}\\left( \\bm{A}_\\mathrm{d} \\right) \\equiv 0\n    \\end{equation}\n    and thus\n    \\begin{equation}\n        \\bm{A}_\\mathrm{d} = \\bm{A} - \\tfrac{1}{3} \\mathrm{tr}\\left( \\bm{A} \\right) \\, \\bm{I}\n    \\end{equation}\n\n\\end{itemize}\n\n\\paragraph{Fourth order unit tensors}\n\\vspace*{.5em}\n\n\\begin{itemize}\n\n    \\item Unit tensor:\n    \\begin{equation}\n        \\bm{A} \\equiv \\mathbb{I} : \\bm{A}\n    \\end{equation}\n    and thus\n    \\begin{equation}\n        \\mathbb{I} = \\delta_{il} \\delta_{jk}\n    \\end{equation}\n    \\item Right-transposition tensor:\n    \\begin{equation}\n        \\bm{A}^T \\equiv \\mathbb{I}^{RT} : \\bm{A} = \\bm{A} : \\mathbb{I}^{RT}\n    \\end{equation}\n    and thus\n    \\begin{equation}\n        \\mathbb{I}^{RT} = \\delta_{ik} \\delta_{jl}\n    \\end{equation}\n    \\item Symmetrisation tensor:\n    \\begin{equation}\n        \\mathrm{sym} \\left( \\bm{A} \\right) \\equiv \\mathbb{I}_\\mathrm{s} : \\bm{A}\n    \\end{equation}\n    whereby\n    \\begin{equation}\n        \\mathbb{I}_\\mathrm{s} = \\tfrac{1}{2} \\left( \\mathbb{I} + \\mathbb{I}^{RT} \\right)\n    \\end{equation}\n    This follows from the following derivation:\n    \\begin{align}\n        \\mathrm{sym} \\left( \\bm{A} \\right) &= \\tfrac{1}{2} \\left( \\bm{A} + \\bm{A}^T \\right)\n        \\\\\n        &= \\tfrac{1}{2} \\left( \\mathbb{I} : \\bm{A} + \\mathbb{I}^{RT} : \\bm{A} \\right)\n        \\\\\n        &= \\tfrac{1}{2} \\left( \\mathbb{I} + \\mathbb{I}^{RT} \\right) : \\bm{A}\n        \\\\\n        &= \\mathbb{I}_\\mathrm{s} : \\bm{A}\n    \\end{align}\n\n    \\item Deviatoric and symmetric projection tensor\n    \\begin{equation}\n        \\mathrm{dev} \\left( \\mathrm{sym}\n        \\left( \\bm{A} \\right) \\right) \\equiv \\mathbb{I}_\\mathrm{d} : \\bm{A}\n    \\end{equation}\n    from which it follows that:\n    \\begin{equation}\n        \\mathbb{I}_\\mathrm{d}\n        = \\mathbb{I}_\\mathrm{s} - \\tfrac{1}{3} \\bm{I} \\otimes \\bm{I}\n    \\end{equation}\n\n\\end{itemize}\n\n\\section{Strain measures}\n\\label{sec:ap:strain}\n\n\\begin{itemize}\n\n    \\item Mean strain\n    \\begin{equation}\n        \\varepsilon_\\mathrm{m}\n        = \\tfrac{1}{3} \\, \\mathrm{tr} ( \\bm{\\varepsilon} )\n        = \\tfrac{1}{3} \\, \\bm{\\varepsilon} : \\bm{I}\n    \\end{equation}\n    \\item Strain 
deviator\n    \\begin{equation}\n        \\bm{\\varepsilon}_\\mathrm{d}\n        = \\bm{\\varepsilon} - \\varepsilon_\\mathrm{m} \\, \\bm{I}\n        = \\mathbb{I}_\\mathrm{d} : \\bm{\\varepsilon}\n    \\end{equation}\n    \\item Equivalent strain\n    \\begin{equation}\n        \\varepsilon_\\mathrm{eq}\n        = \\; \\sqrt{\n            \\tfrac{2}{3} \\, \\bm{\\varepsilon}_\\mathrm{d} : \\bm{\\varepsilon}_\\mathrm{d}\n        }\n    \\end{equation}\n\n\\end{itemize}\n\n\\section{Variations}\n\\label{sec:ap:variations}\n\n\\begin{itemize}\n\n    \\item Strain deviator\n    \\begin{equation}\n        \\delta \\bm{\\varepsilon}_\\mathrm{d}\n        = \\left( \\mathbb{I}_\\mathrm{s} - \\tfrac{1}{3} \\bm{I} \\otimes \\bm{I} \\right) :\n        \\delta \\bm{\\varepsilon}\n        = \\mathbb{I}_\\mathrm{d} : \\delta \\bm{\\varepsilon}\n    \\end{equation}\n\n    \\item Mean strain\n    \\begin{equation}\n        \\delta \\varepsilon_\\mathrm{m}\n        = \\tfrac{1}{3} \\bm{I} : \\delta \\bm{\\varepsilon}\n    \\end{equation}\n\n    \\item Von Mises equivalent strain\n    \\begin{align}\n        \\delta \\varepsilon_\\mathrm{eq}\n        &= \\frac{1}{3} \\frac{1}{\\varepsilon_\\mathrm{eq}}\n        \\left( \\bm{\\varepsilon}_\\mathrm{d} : \\delta \\bm{\\varepsilon}_\\mathrm{d} +\n        \\delta \\bm{\\varepsilon}_\\mathrm{d} : \\bm{\\varepsilon}_\\mathrm{d} \\right) \\\\\n        &= \\frac{2}{3} \\frac{1}{\\varepsilon_\\mathrm{eq}}\n        \\left( \\bm{\\varepsilon}_\\mathrm{d} : \\delta \\bm{\\varepsilon}_\\mathrm{d} \\right) \\\\\n        &= \\frac{2}{3} \\frac{\\bm{\\varepsilon}_\\mathrm{d}}{\\varepsilon_\\mathrm{eq}} :\n        \\delta \\bm{\\varepsilon}_\\mathrm{d}\n    \\end{align}\n\n\\end{itemize}\n\n\\end{document}\n", "meta": {"hexsha": "6b71e935187c76910f5bd03572599b7ccbcf5aa7", "size": 11242, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/readme.tex", "max_stars_repo_name": "tdegeus/GMatNonLinearElastic", "max_stars_repo_head_hexsha": "265463cd772ac318bee33a7705b7126ee53d59df", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/readme.tex", "max_issues_repo_name": "tdegeus/GMatNonLinearElastic", "max_issues_repo_head_hexsha": "265463cd772ac318bee33a7705b7126ee53d59df", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2019-04-11T14:16:36.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-30T07:09:50.000Z", "max_forks_repo_path": "docs/readme.tex", "max_forks_repo_name": "tdegeus/GMatNonLinearElastic", "max_forks_repo_head_hexsha": "265463cd772ac318bee33a7705b7126ee53d59df", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7570621469, "max_line_length": 97, "alphanum_fraction": 0.6063867639, "num_tokens": 4041, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711908591638, "lm_q2_score": 0.6723317123102956, "lm_q1q2_score": 0.5714625861647628}}
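Editorial addendum to the preceding notes (not part of the original document): the consistency check can be reproduced numerically. A minimal NumPy sketch with illustrative material parameters; it verifies that the relative error shrinks with the size of the perturbation until rounding error dominates.\n\\begin{verbatim}\nimport numpy as np\n\nK, sig0, eps0, n = 1.0, 1.0, 1.0, 3.0   # illustrative parameters\nI = np.eye(3)\n\ndef split(eps):\n    # Hydrostatic / deviatoric / equivalent parts of a strain tensor.\n    eps_m = np.trace(eps) / 3.0\n    eps_d = eps - eps_m * I\n    eps_eq = np.sqrt(2.0 / 3.0 * np.tensordot(eps_d, eps_d))\n    return eps_m, eps_d, eps_eq\n\ndef sigma(eps):\n    eps_m, eps_d, eps_eq = split(eps)\n    return (3.0 * K * eps_m * I\n            + 2.0 / 3.0 * sig0 / eps0**n * eps_eq**(n - 1) * eps_d)\n\ndef tangent_apply(eps, deps):\n    # C : deps, using the closed form derived above.\n    _, eps_d, eps_eq = split(eps)\n    _, deps_d, _ = split(deps)\n    return (K * np.trace(deps) * I\n            + 2.0 / 3.0 * sig0 / eps0**n * (\n                2.0 / 3.0 * (n - 1) * eps_eq**(n - 3)\n                * np.tensordot(eps_d, deps_d) * eps_d\n                + eps_eq**(n - 1) * deps_d))\n\nrng = np.random.default_rng(0)\nE = rng.standard_normal((3, 3)); E = 0.5 * (E + E.T)\ndE = rng.standard_normal((3, 3)); dE = 0.5 * (dE + dE.T)\nfor h in (1e-1, 1e-2, 1e-3):\n    dsig = sigma(E + h * dE) - sigma(E)\n    eta = (np.linalg.norm(dsig - tangent_apply(E, h * dE))\n           / np.linalg.norm(dsig))\n    print(h, eta)   # truncation error shrinks with h\n\\end{verbatim}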
{"text": "\n\\section{The \\rotatecopy algorithm}\n\\Label{sec:rotatecopy}\n\nThe \\rotatecopy algorithm of the \\cxx Standard Library \\cite[\\S 28.6.11]{cxx-17-draft} copies, in a particular way,\nthe elements of one sequence of length~$n$ into a separate sequence.\nMore precisely,\n\n\\begin{itemize}\n\\item the first~$m$ elements of the first sequence become the last~$m$ elements of the second sequence, and\n\\item the last~$n-m$ elements of the first sequence become the first~$n-m$ elements of the second sequence.\n\\end{itemize}\n\nFigure~\\ref{fig:rotatecopy} illustrates the effects of \\rotatecopy\nby highlighting how the initial and final segments of the array~\\inl{a[0..n-1]} are mapped\nto corresponding subranges of the array~\\inl{b[0..n-1]}.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.62\\textwidth]{Figures/rotate_copy.pdf}\n\\caption{\\Label{fig:rotatecopy} Effects of \\rotatecopy}\n\\end{figure}\n\nFor our purposes we have modified the generic implementation\nto that of a range of type \\valuetype.\n%\nThe signature now reads:\n\n\\begin{lstlisting}[style=acsl-block]\n\n  void rotate_copy(const value_type* a, size_type m, size_type n, value_type* b);\n\\end{lstlisting}\n\n\n%\\clearpage\n\n\\subsection{Formal specification of \\rotatecopy}\n\nThe specification of \\rotatecopy is shown in the following listing.\nNote that we require explicitly that both ranges do not overlap and that we are only\nable to \\emph{read} from the range~\\inl{a[0..n-1]}.\n\n\\input{Listings/rotate_copy.h.tex}\n\n\\subsection{Implementation of \\rotatecopy}\n\nThe following listing shows an implementation of the \\rotatecopy function.\nThe implementation simply calls the function \\copyi twice.\n\n\\input{Listings/rotate_copy.c.tex}\n\n%\\clearpage\n\n", "meta": {"hexsha": "399c7e67a48998d572ad349ee0ca1fd1d026eb14", "size": 1691, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/mutating/rotate_copy.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/mutating/rotate_copy.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/mutating/rotate_copy.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 31.3148148148, "max_line_length": 115, "alphanum_fraction": 0.7735068007, "num_tokens": 456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056295505783, "lm_q2_score": 0.828938799869521, "lm_q1q2_score": 0.571392181302961}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CS624: Analysis of Algorithms\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/beacon\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section*{Question 8}\n\nThe \\textsc{Quicksort} algorithm contains two recursive calls to itself.\nAfter \\textsc{Quicksort} calls \\textsc{Partition}, it recursively sorts the left subarray and then it recursively sorts the right subarray.\nThe second recursive call in \\textsc{Quicksort} is not really necessary; we can avoid it by using an iterative control structure.\nThis technique, called \\textit{tail recursion}, is provided automatically by good compilers.\nConsider the following version of quicksort, which simulates tail recursion.\n\n\\begin{algorithm}[H]\n\\caption{\\textsc{Tail-Recursive-Quicksort($A$,$p$,$r$)}}\n\\begin{algorithmic}[1]\n\\While {$p < r$}\n\\State q = \\textsc{Partition}($A$,$p$,$r$)\n\\State \\textsc{Tail-Recursive-Quicksort($A$,$p$,$q-1$)}\n\\State $p$ = $q$ + 1\n\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{enumerate}[label=(\\alph*)]\n\\item Argue that \\textsc{Tail-Recursive-Quicksort}($A$,1,r) correctly sorts array $A$.\n\n\\item Describe a scenario in which \\textsc{Tail-Recursive-Quicksort}'s stack depth is $\\Theta(n)$ on an $n$-element input array.\n\n\\item Modify the code for \\textsc{Tail-Recursive-Quicksort} so that the worst-case stack depth is $\\Theta(\\log n)$.\nMaintain the $\\mathcal{O}(n\\log n)$ expected running time of the algorithm.\n\\end{enumerate}\n\n\\subsection*{Solution}\n\n\\begin{enumerate}[label=(\\alph*)]\n\\item It is assumed that the original \\textsc{Quicksort} algorithm sorts correctly.\nGiven that the original \\textsc{Quicksort} makes use of the \\textsc{Partition} algorithm itself, it is claimed that the statement $q = \\textsc{Partition}(A,p,r)$ in second line of the algorithm works properly.\nAs the algorithm calls itself at each iteration, proof of correctness is given by induction on the length $r$ of the array to be sorted.\n\n\\begin{itemize}\n\\item[] \\emph{Base Step}: When length of array $r = 1$, the array is already sorted and \\textsc{Tail-Recursive-Quicksort} algorithm sorts correctly.\n\\item[] \\emph{Induction Step}: Inductive hypothesis is formed as \\textsc{Tail-Recursive-Quicksort} works fine for any array of length $r \\leq r_0$.\nUsing this hypothesis, we show the algorithm is correct for $r = r_0 + 1$.\nThis can simply be proven by following statements of the program.\nWhen $r = r_0 + 1$, the algorithm calls itself with arguments ($A$, $p$, $q-1$).\nSince $q - 1 < r_0+1$ for any $q$, all recursive calls are to an algorithm, correctness of which is already proven by the inductive hypothesis.\n\\end{itemize}\n\n\\item The number of iterations of \\textit{while} loop in \\textsc{Tail-Recursive-Quicksort} determines the stack depth.\nTo make stack depth bound by $\\Theta(n_0)$ for an $n_0$-element array, it suffices to make the algorithm to call itself for sorting an array ($n-1$)-element for any $n \\leq n_0$.\nThis means argument $q-1$ of the \\textsc{Tail-Recursive-Quicksort} algorithm should be decremented for each iteration.\nThis goal will be achieved, if we have an already-sorted array of length $n_0$.\nIn this case, \\textsc{Partition}($A$,$p$,$r$) will always be $r$ for any $r \\leq r_0$.\n\n\\item To maintain the 
expected runtime of the \\textsc{Tail-Recursive-Quicksort} algorithm at $\\Theta(n \\log n)$, we cannot change the arguments of the \\textsc{Partition} and recursive \\textsc{Tail-Recursive-Quicksort} calls.\nTherefore the only way to change the worst-case scenario is to change the order in which the partitioned sub-arrays are sorted, so that the smaller sub-array is sorted first.\nThis is done by inserting a statement between lines $2$ and $3$ that compares the lengths of the sub-arrays and passes the shorter one to the recursive \\textsc{Tail-Recursive-Quicksort} call, as sketched below.\n\n\\end{enumerate}\n", "meta": {"hexsha": "2a92e499d2570fff75f2aa0940a45c732df65fac", "size": 3983, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q08.tex", "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_issues_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q08.tex", "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "umb-cs624-2015s/src/tex/hw01/hw01q08.tex", "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "avg_line_length": 63.2222222222, "max_line_length": 220, "alphanum_fraction": 0.7313582727, "num_tokens": 1089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056040203135, "lm_q2_score": 0.8289388083214156, "lm_q1q2_score": 0.5713921659658723}}
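Editorial addendum to part (c) above (not part of the original solution): a runnable Python sketch of the modified algorithm; the helper partition is the standard Lomuto scheme and all names are ours. Recursing on the shorter sub-array and iterating on the longer one bounds the stack depth by $\\Theta(\\log n)$ while keeping the expected $\\mathcal{O}(n\\log n)$ running time.\n\\begin{verbatim}\ndef partition(A, p, r):\n    # Lomuto partition around A[r]; returns the pivot's final index.\n    x, i = A[r], p - 1\n    for j in range(p, r):\n        if A[j] <= x:\n            i += 1\n            A[i], A[j] = A[j], A[i]\n    A[i + 1], A[r] = A[r], A[i + 1]\n    return i + 1\n\ndef quicksort_logstack(A, p, r):\n    while p < r:\n        q = partition(A, p, r)\n        if q - p < r - q:                 # left part is shorter\n            quicksort_logstack(A, p, q - 1)\n            p = q + 1                     # iterate on the right part\n        else:                             # right part is shorter\n            quicksort_logstack(A, q + 1, r)\n            r = q - 1                     # iterate on the left part\n\nA = [5, 2, 9, 1, 5, 6]\nquicksort_logstack(A, 0, len(A) - 1)\nprint(A)   # [1, 2, 5, 5, 6, 9]\n\\end{verbatim}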
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n% \\begin{figure}[ht]\n%     \\centering\n%     \\incfig{riemmans-theorem}\n%     \\caption{Riemmans theorem}\n%     \\label{fig:riemmans-theorem}\n% \\end{figure}\n\n\\begin{document}\n% \\section{}\t\n\nNow we consider functions on the complex field. Namely, we are characterizing functions \\(f:\\C\\to \\C\\). It is immediate that any function \\(f\\) can be parametrized by\n\\begin{align*}\n\tf:\\C\\to \\C\\\\\n\t(x,y) \\mapsto (u(x,y),v(x,y))\n\\end{align*}\nfor \\(u,v \\) real-valued functions. Sometimes we might write \\(f = u + iv\\) for shorthand of the above parametrization. This notation arises because\n\\begin{align*}\n\tf(x,y) = (u(x,y),v(x,y)) &\\implies\\\\\n\tf &= (u(x,y),0) + (0,v(x,y)) \\implies\\\\\n\tf &= u + iv.\n\\end{align*}\nNow we construct the equivalent analytic structures on complex functions.\n\\begin{defn}[Complex Continuity]\n\tLet \\(\\Omega\\subset \\C\\). A function \\(f:\\Omega\\to \\C\\) is \\textbf{continuous} if for all \\(\\varepsilon >0\\), for all \\(z \\in \\Omega\\), there exists a \\(\\delta>0\\) such that, for all \\(w \\in \\Omega\\),\n\t\\begin{align*}\n\t\t|w-z| < \\delta \\implies \\left| f(w)-f(z) \\right| < \\varepsilon\n\t\\end{align*}\nEquivalently, if \\((z_n) \\subset \\Omega\\) is an arbitrary sequence in \\(\\Omega\\) converging to \\(z \\in \\C\\), then \\(f(z_n) \\to f(z)\\).\\\\\n\nWe denote the space of complex continuous functions on \\(\\Omega\\) by \\(C(\\Omega)\\).\n\\end{defn}\n\n\\begin{anki}\nTARGET DECK\nComplex Qual::Complex Analysis\n\nSTART\nMathJaxCloze\nText: Let \\(\\Omega\\subset \\C\\). A function \\(f:\\Omega\\to \\C\\) is **continuous** if {{c2::for all \\(\\varepsilon >0\\)}}, {{c2::for all \\(z \\in \\Omega\\)}}, {{c2::there exists a \\(\\delta>0\\)}} such that, {{c2::for all \\(w \\in \\Omega\\)}},\n{{c1::\\(\\begin{align*}\n        \t|w-z| < \\delta \\implies \\left| f(w)-f(z) \\right| < \\varepsilon\n        \\end{align*}\\)}} \nEquivalently, if \\((z_n) \\subset \\Omega\\) is an arbitrary sequence in \\(\\Omega\\) converging to \\(z \\in \\C\\), then {{c1::\\(f(z_n) \\to f(z)\\)}}.\nExtra: Of course, we can alternatively define continuity of a function at a point, then refer to a continuous function as one that's continuous at all points.\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1624399037000-->\nEND\n\\end{anki}\n\nOf course, we can alternatively define continuity of a function at a point, then refer to a continuous function as one that's continuous at all points.\\\\\n\nThe sum, difference, product, ratio, and composition of continuous functions is continuous. We leave the proof of this as an exercise to the reader.\n\\begin{defn}[Uniform Continuity]\n\tLet \\(\\Omega \\subset \\C\\). A function \\(f:\\Omega\\to \\C\\) is \\textbf{uniformly continuous} if for all \\(\\varepsilon\\), there exists a \\(\\delta>0\\) such that for all \\(z,w \\in \\Omega\\),\n\t\\begin{align*}\n\t\t|w-z| < \\delta \\implies \\left| f(w)-f(z) \\right| < \\varepsilon\n\t\\end{align*}\n\\end{defn}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\(\\Omega \\subset \\C\\). 
A function \\(f:\\Omega\\to \\C\\) is **uniformly continuous** if {{c1::for all \\(\\varepsilon\\)}}, {{c1::there exists a \\(\\delta>0\\)}} such that {{c1::for all \\(z,w \\in \\Omega\\)}},\n{{c2::\\(\\begin{align*}\n        \t|w-z| < \\delta \\implies \\left| f(w)-f(z) \\right| < \\varepsilon\n        \\end{align*}\\)}}\nExtra: In other words, we choose \\(\\delta \\) first before being given the point \\(z \\in \\Omega\\). On unbounded domains, this is a much stronger property than continuity. \nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1624399037047-->\nEND\n\\end{anki}\n\nIn other words, we choose \\(\\delta \\) first before being given the point \\(z \\in \\Omega\\). On unbounded domains, this is a much stronger property than continuity. However, under many circumstances the notions are equivalent:\n\\begin{thm}\n\tOn a compact set in \\(\\C\\), every continuous function is uniformly continuous.\n\\end{thm}\n\\begin{anki}\nSTART\nMathJaxCloze\nText: On a compact set in \\(\\C\\), every continuous function is {{c1::uniformly continuous}}.\nTags: analysis complex_analysis complex_analyticity\n<!--ID: 1624399037095-->\nEND\n\\end{anki}\n\nMany of the theorems here are equivalent to the real-valued version, and hence unless the proof differs, proofs will be omitted as they can be found in a standard real analysis textbook.\\\\\n\n\\begin{thm}\n\tThe continuous image of a compact set in \\(\\C\\) is compact.\n\\end{thm}\nThis is actually a corollary via point-set topology.\n\n\\begin{cor}\n\tA continuous real-valued function on a compact set has a maximum and a minimum.\n\\end{cor}\nThis doesn't seem very relevant immediately, as the range of a complex function is not well-ordered. However, by considering the modulus of complex functions, we can obtain a similar result.\n\n\\begin{defn}[Maximum and Minimum of Complex Functions]\n\tLet \\(f:\\C\\to \\C\\) be a complex-valued function. We say that \\(f\\) attains a \\textbf{maximum} at \\(z_0 \\in \\Omega \\) if\n\t\\begin{align*}\n\t\t\\left| f(z) \\right| \\leq \\left| f(z_0) \\right| \n\t\\end{align*}for all \\(z \\in \\Omega \\).\\\\\n\n\tIf instead\n\t\\begin{align*}\n\t\t\\left| f(z) \\right| \\geq \\left| f(z_0) \\right| \n\t\\end{align*} for all \\(z\\in \\Omega \\) then we say \\(z_0\\) is a \\textbf{minimum} for \\(f\\).\n\\end{defn}\nNotice that if \\(f\\) is continuous, then \\(z\\mapsto \\left| f(z) \\right| \\) is continuous (by the triangle inequality).\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: Let \\(f:\\C\\to \\C\\) be a complex-valued function. 
We say that \\(f\\) attains a **maximum** at \\(z_0 \\in \\Omega \\) if\n{{c1::\\(\\begin{align*}\n        \t\\left| f(z) \\right| \\leq \\left| f(z_0) \\right| \n        \\end{align*}\\)}}\nfor all \\(z \\in \\Omega \\).\n\nIf instead\n{{c1::\\(\\begin{align*}\n      \\left| f(z) \\right| \\geq \\left| f(z_0) \\right| \n      \\end{align*}\\)}}\nfor all \\(z\\in \\Omega \\) then we say \\(z_0\\) is a **minimum** for \\(f\\).\nExtra: Notice that if \\(f\\) is continuous, then \\(z\\mapsto \\left| f(z) \\right| \\) is continuous (by the triangle inequality).\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1624399037133-->\nEND\n\\end{anki}\n\n\\begin{cor}\n\tA continuous (real or complex-valued) function on a compact set \\(\\Omega\\) is bounded and attains a maximum and minimum on \\(\\Omega\\).\n\\end{cor}\n\n\\begin{anki}\nSTART\nMathJaxCloze\nText: A continuous (real or complex-valued) function on a compact set \\(\\Omega \\) is {{c1::bounded}} and {{c1::attains a maximum and minimum on \\(\\Omega\\)}}.\nTags: analysis complex_analysis complex_analyticity\n<!--ID: 1624399037173-->\nEND\n\\end{anki}\n\n\\subsection{Curves in \\(\\C\\)}\n\\label{sub:curves_in_c}\n\n\\begin{defn}[Continuous Curve]\n\tLet \\(\\Omega\\subset \\C\\). A \\textbf{continuous curve} in \\(\\Omega\\) is a continuous function \\(\\gamma :[t_0,t_1]\\to \\Omega\\) where \\([t_0,t_1]\\subset \\R\\).\n\\end{defn}\n\n\\begin{anki}\n% Up to 5 consequences\nSTART\nDefinition\nName: Continuous Curve in \\(\\C\\)\nPremise 1: \\(\\Omega\\subset \\C\\), \\([t_0,t_1] \\subset \\R\\)\nConsequence 1: \\(\\gamma \\) is a continuous curve if \\(\\gamma:[t_0,t_1]\\to \\Omega\\) is a continous function\nTags: analysis complex_analysis complex_analyticity defn\n<!--ID: 1624399037216-->\nEND\n\\end{anki}\n\nBecause every complex function can be parametrized by \\(F(x,y) = (u(x,y),v(x,y))\\), it follows that we can write a curve \\(\\gamma \\) by\n\\begin{align*}\n\t\\gamma (t) = (x(t),y(t))\n\\end{align*}\nfor real-valued curves \\(x,y\\). This will prove useful later when we introduce complex differentiation.\n\n\\begin{defn}[Path-Connected]\n\tLet \\(\\Omega\\subset \\C\\). 
Then \\(\\Omega\\) is path-connected if every two points in \\(\\Omega\\) can be joined by a continuous curve in \\(\\Omega\\).\n\\end{defn}\n\n\\begin{anki}\n% Up to 5 consequences\nSTART\nDefinition\nName: Path-Connected Set in \\(\\C\\)\nPremise 1: \\(\\Omega\\subset \\C\\)\nConsequence 1: \\(\\Omega\\) is path-connected if \\(\\forall z_0,z_1 \\in \\Omega\\), \\(\\exists \\gamma:[t_0,t_1]\\to \\Omega\\) with \\(\\gamma(t_i)=z_i\\)\nTags: analysis complex_analysis complex_topology defn\n<!--ID: 1624399037262-->\nEND\n\\end{anki}\n\n\n\\begin{thm}\n\tLet \\(\\Omega\\subset \\C\\) be open. Then the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item \\(\\Omega\\) is connected\n\t\t\\item Every two points in \\(\\Omega\\) can be joined by a broken line consisting of (finitely many) horizontal and vertical line segments within \\(\\Omega\\)\n\t\t\\item \\(\\Omega\\) is path-connected\n\t\\end{itemize}\n\\end{thm}\n\n\\begin{anki}\n% Up to 4 premises\n% Up to 5 equivalences\nSTART\nEquivalence\nName: Connectedness of open \\(\\Omega\\subset \\C\\)\nPremise 1:  \\(\\Omega\\subset \\C\\) open set\nEquivalence 1: \\(\\Omega\\) is connected\nEquivalence 2: Every \\(z_0,z_1 \\in \\Omega\\) is joined by a broken line consisting of (finitely many) horizontal and vertical line segments in \\(\\Omega\\)\nEquivalence 3: \\(\\Omega\\) is path-connected\nTags: analysis complex_analysis complex_topology\n<!--ID: 1624399037300-->\nEND\n\\end{anki}\n\n\n\\end{document}\n", "meta": {"hexsha": "a290d9a3c6e538816dbde6728737aca63beb360e", "size": 8657, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Complex Analysis/Notes/source/2020-02-20-ComplexContinuity.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Complex Analysis/Notes/source/2020-02-20-ComplexContinuity.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Complex Analysis/Notes/source/2020-02-20-ComplexContinuity.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.2651162791, "max_line_length": 233, "alphanum_fraction": 0.6825690193, "num_tokens": 2802, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.640635854839898, "lm_q2_score": 0.8918110511888302, "lm_q1q2_score": 0.5713261351340243}}
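A standard worked example of the gap between the two notions on an unbounded domain (a sketch, not taken from the notes above): take \(f(z) = z^2\) on all of \(\C\). For any fixed \(\delta > 0\), the point \(w = z + \delta/2\) satisfies \(|w - z| < \delta\), yet
\begin{align*}
	\left| w^2 - z^2 \right| = \frac{\delta}{2}\left| 2z + \frac{\delta}{2} \right| \geq |z|\,\delta - \frac{\delta^2}{4},
\end{align*}
which exceeds any fixed \(\varepsilon\) once \(|z|\) is large enough. Hence no single \(\delta\) works for every \(z\): \(f\) is continuous on \(\C\) but not uniformly continuous.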
{"text": "\\subsection{part b}\nNow design PID controller with following methods.\n\\newpage\n\\begin{itemize}\n    \\item ziegler nichols\n    $$\n    G_c =   \\dfrac{4.756 s^2 + 6.423 s + 2.27}{0.1904 s^2 + 2.76 s}\n    $$\n    \\begin{figure}[H]\n        \\caption{step responde with ziegler nichols PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/zn.png}\n    \\end{figure}\n    \\item refined ziegler nichols\n    $$\n    G_c =     \\dfrac{3.099 s + 2.27}{1.629 s}, \\qquad H =    \n    \\dfrac{1.051 s^2 + 1.698 s + 1}{0.09418 s^2 + 1.434 s + 1}\n    $$\n    \\begin{figure}[H]\n        \\caption{step responde with refined ziegler nichols PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/rzn.png}\n    \\end{figure}\n    \\item modified ziegler nichols\n    $$\n    r_1 = 1.0, \\qquad p_b = 5:5:35\n    $$\n    \\begin{figure}[H]\n        \\caption{step responde with modified ziegler nichols PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/mzn.png}\n    \\end{figure}\n    \\item Cohen Coon\n    $$\n    G_c =   \\dfrac{3.223 s^2 + 7.463 s + 3.189}{ 0.09188 s^2 + 2.3 s}\n    $$\n    \\begin{figure}[H]\n        \\caption{step responde with Cohen Coon PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/cc.png}\n    \\end{figure}\n    \\newpage\n    \\item Cohen Coon revisited\n    $$\n    G_c =   \\dfrac{3.374 s^2 + 7.744 s + 3.202}{0.09579 s^2 + 2.378 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with Cohen Coon revisited PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/ccr.png}\n    \\end{figure} \n    \\item Astrom Hagglund\n    $$\n    G_c =   \\dfrac{0.9211 s^2 + 1.794 s + 1.385}{0.06048 s^2 + 1.247 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with Astrom Hagglund PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/ah.png}\n    \\end{figure}  \n    \\item Frequency based Astrom Hagglund\n    $$\n    G_c =   \\dfrac{1.025 s^2 + 1.666 s + 1.355}{0.0688 s^2 + 1.171 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with Frequency based Astrom Hagglund PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/fah.png}\n    \\end{figure} \n    \\item CHR set point $0\\%$ overshoot\n    $$\n    G_c =   \\dfrac{0.8439 s^2 + 1.19 s + 1.135}{0.06758 s^2 + 0.9794 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with CHR set point $0\\%$ overshoot PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/chr0.png}\n    \\end{figure}\n    \\item CHR set point $20\\%$ overshoot\n    $$\n    G_c =   \\dfrac{1.758 s^2 + 2.581 s + 1.797}{0.08893 s^2 + 1.371 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with CHR set point $20\\%$ overshoot PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/chr20.png}\n    \\end{figure}\n    \\item WJC\n    $$\n    G_c =   \\dfrac{9.298 s^2 + 21.39 s + 12.51}{0.4048 s^2 + 10 s}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step responde with WJC PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/wjc.png}\n    \\end{figure}  \n    \\item optimum set point PID ISTE\n    $$\n    G_c =   \\dfrac{1.993 s^2 + 3.738 s + 2.496}{0.07258 s^2 + 1.447 s}\n    $$ \n    \\begin{figure}[H]\n     
   \\caption{step response with optimum PID controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/optpid.png}\n    \\end{figure}  \n    \\item optimum set point PI-D ISTE\n    $$\n    G_c =   \\dfrac{4.219 s + 2.41}{1.751 s} \\qquad H = \\frac{0.9836 s^2 + 4.328 s + 2.41}{0.191 s^2 + 4.328 s + 2.41}\n    $$ \n    \\begin{figure}[H]\n        \\caption{step response with optimum PI-D controller}\n        \\centering\n        \\includegraphics[width=10cm]{../Figure/Q1/b/optpi-d.png}\n    \\end{figure}  \n\\end{itemize}\n\\subsection{conclusion}\nZiegler-Nichols is very slow, but refined Ziegler-Nichols is better and faster. Modified Ziegler-Nichols is also very slow.\nCohen-Coon is slow and has high overshoot, but Cohen-Coon revisited is faster.\nAstrom-Hagglund is very good: it has no overshoot and the system is fast, but the frequency-based variant is a little slow. CHR is fast, but its overshoot settings behave strangely: the 0\\% overshoot method shows some overshoot, while the 20\\% overshoot method actually has lower overshoot.\nWJC works fast and has very small overshoot.\nThe optimum PID works well, but PI-D is a little slow.\nSince we do not know what the system is, we cannot select a single best controller; the choice depends on the plant.\n", "meta": {"hexsha": "17d1a36a381ab94c93ff5c07f168998cdb5d3924", "size": 4515, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW VI/Report/Q1/b.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW VI/Report/Q1/b.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW VI/Report/Q1/b.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.7073170732, "max_line_length": 222, "alphanum_fraction": 0.6119601329, "num_tokens": 1573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5712987774545981}}
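For reference, the classic closed-loop Ziegler-Nichols rule behind the first controller can be sketched in a few lines of Python (a minimal illustration; the ultimate gain \(K_u\) and ultimate period \(T_u\) below are placeholder values, not identified from this plant):
\begin{verbatim}
def ziegler_nichols_pid(Ku, Tu):
    # Classic closed-loop Ziegler-Nichols table for a PID controller.
    Kp = 0.6 * Ku   # proportional gain
    Ti = Tu / 2.0   # integral time
    Td = Tu / 8.0   # derivative time
    return Kp, Ti, Td

# Placeholder ultimate gain and oscillation period:
Kp, Ti, Td = ziegler_nichols_pid(Ku=4.0, Tu=2.0)
# Ideal-form controller: Gc(s) = Kp * (1 + 1/(Ti*s) + Td*s)
print(f"Kp={Kp:.3f}, Ti={Ti:.3f}, Td={Td:.3f}")
\end{verbatim}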
{"text": "\\section{Distributions}\n\\label{sec:distributions}\n\\newcommand{\\distname}[1]{\\textbf{#1}}\n\\newcommand{\\distattrib}[1]{\\textit{#1}}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% If you are confused by the input of this document, please make sure you see\n% these defined commands first. There is no point writing the same thing over\n% and over and over and over and over again, so these will help us reduce typos,\n% by just editing a template sentence.\n\\newcommand{\\nameDescription}{\\xmlAttr{name},\n  \\xmlDesc{required string attribute}, user-defined name of this distribution.\n  %\n  \\nb As with other objects, this identifier can be used to reference this\n  specific entity from other input blocks in the XML.}\n\\newcommand{\\specBlock}[2]{The specifications of this distribution must be\n  defined within #1 \\xmlNode{#2} XML block.}\n\\newcommand{\\attrIntro}{This XML node accepts one attribute:}\n\\newcommand{\\attrsIntro}{This XML node accepts the following attributes:}\n\\newcommand{\\subnodeIntro}{This distribution can be initialized with the\n  following child node:}\n\\newcommand{\\subnodesIntro}{This distribution can be initialized with the\n  following children nodes:}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%\\maljdan{Do we want to provide the equations of each distribution?}\n%\\alfoa{TBH I do not think so. It is not a theory manual... I put few equations in the ROM section just to explain better the meaning of some parameters.}\n\nRAVEN provides support for several probability distributions.\n%\nCurrently, the user can choose among several 1-dimensional distributions and\n$N$-dimensional ones, either custom or multidimensional normal.\n\nThe user will specify the probability distributions, that need to be used during\nthe simulation, within the \\xmlNode{Distributions} XML block:\n\\begin{lstlisting}[style=XML]\n<Simulation>\n   ...\n  <Distributions>\n    <!-- All the necessary distributions will be listed here -->\n  </Distributions>\n  ...\n</Simulation>\n\\end{lstlisting}\n\nIn the next two sub-sections, the input requirements for all of the\ndistributions are reported.\n\n%%%%%% 1-Dimensional Probability distributions\n\\subsection{1-Dimensional Probability Distributions}\n\\label{subsec:1dDist}\nThis sub-section is organized in two different parts: 1) continuous 1-D\ndistributions and 2) discrete 1-D distributions.\n%\nThese two paragraphs cover all the requirements for using the different\ndistribution entities.\n%\n%%%%%% paragraph 1-Dimensional Continuous Distributions.\n\\subsubsection{1-Dimensional Continuous Distributions}\n\\label{subsubsec:1DContinuous}\nIn this paragraph all the 1-D distributions currently available in RAVEN are\nreported.\n\nFirstly, all the probability distributions functions in the code can be\ntruncated by using the following keywords:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n   ...\n   <aDistributionType>\n      ...\n      <lowerBound>aFloatValue</lowerBound>\n      <upperBound>aFloatValue</upperBound>\n      ...\n   </aDistributionType>\n</Distributions>\n\\end{lstlisting}\nEach distribution has a pre-defined, default support (domain) based on its\ndefinition, however these domains can be shifted/stretched using the appropriate\n\\xmlNode{low} and \\xmlNode{high} parameters where applicable, and/or truncated\nusing the nodes in the example above, namely \\xmlNode{lowerBound} and\n\\xmlNode{upperBound}.\nFor example, the 
Normal distribution domain is $(-\\infty,+\\infty)$, and thus\ncannot be shifted or stretched, as it is already unbounded, but can be\ntruncated.\n%\nRAVEN currently provides support for the 12 1-dimensional continuous distributions described below.\n%\nIn the following paragraphs, all the input requirements are reported and\ncommented.\n\n%%%%%% Beta\n\\paragraph{Beta Distribution}\n\\label{Beta}\nThe \\distname{Beta} distribution is parameterized by two positive shape parameters, denoted by\n$\\alpha$ and $\\beta$, that appear as exponents of the random variable. Its default support\n(domain) is $x \\in [0, 1]$.\n%\nThe distribution domain can be changed by specifying new boundaries to fit the user's needs.\n%\nThe user can specify a \\distname{Beta} distribution in two ways.  The standard\nis to provide the parameters \\xmlNode{low}, \\xmlNode{high}, \\xmlNode{alpha},\nand \\xmlNode{beta}.  Alternatively, to approximate a normal\ndistribution that falls to 0 at the endpoints, the user may provide\nthe parameters \\xmlNode{low}, \\xmlNode{high}, and \\xmlNode{peakFactor}. The\npeak factor is a value between 0 and 1 that determines the peakedness of the\ndistribution.  At 0 it is dome-like ($\\alpha=\\beta=4$) and at 1 it is very\nstrongly peaked around the mean ($\\alpha=\\beta=100$).  A reasonable approximation\nto a Gaussian normal is a peak factor of 0.5.\n\n\\specBlock{a}{Beta}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item Standard initialization:\n    \\begin{itemize}\n    \\item \\xmlNode{alpha}, \\xmlDesc{float, conditional required parameter}, first shape\n     parameter.  If specified, \\xmlNode{beta} must also be provided and\n     \\xmlNode{peakFactor} cannot be specified.\n%\n     \\item \\xmlNode{beta}, \\xmlDesc{float, conditional required parameter}, second shape\n      parameter.  If specified, \\xmlNode{alpha} must also be provided and\n      \\xmlNode{peakFactor} cannot be specified.\n%\n       \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n       boundary. \\default{0.0}\n %\n       \\item \\xmlNode{high}, \\xmlDesc{float, optional parameter}, upper domain\n         boundary. \\default{1.0}\n      \\end{itemize}\n %\n     \\item Alternative initialization:\n     \\begin{itemize}\n        \\item \\xmlNode{peakFactor}, \\xmlDesc{float, optional parameter}, alternative\n         to specifying \\xmlNode{alpha} and \\xmlNode{beta}.  Acceptable values range from\n        0 to 1.\n%\n       \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n       boundary. \\default{0.0}\n %\n       \\item \\xmlNode{high}, \\xmlDesc{float, optional parameter}, upper domain\n         boundary. 
\\default{1.0}\n      \\end{itemize}\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Beta name='aUserDefinedName'>\n     <low>aFloatValue</low>\n     <high>aFloatValue</high>\n     <alpha>aFloatValue</alpha>\n     <beta>aFloatValue</beta>\n  </Beta>\n  <Beta name='aUserDefinedName2'>\n     <low>aFloatValue</low>\n     <high>aFloatValue</high>\n     <peakFactor>aFloatValue</peakFactor>\n  </Beta>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Exponential\n\\paragraph{Exponential Distribution}\n\\label{Exponential}\nThe \\distname{Exponential} distribution has a default support of\n$x \\in [0, +\\infty)$.\n\n\\specBlock{an}{Exponential}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodeIntro\n\\begin{itemize}\n  \\item \\xmlNode{lambda}, \\xmlDesc{float, required parameter}, rate parameter.\n  \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n     boundary. \\default{0.0}\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Exponential name='aUserDefinedName'>\n    <lambda>aFloatValue</lambda>\n    <low>aFloatValue</low>\n  </Exponential>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Gamma\n\\paragraph{Gamma Distribution}\n\\label{Gamma}\nThe \\distname{Gamma} distribution is a two-parameter family of continuous\nprobability distributions.\n%\nThe common exponential distribution and $\\chi^2$ distribution are special\ncases of the gamma distribution.\n%\nIts default support is $x \\in [0,+\\infty)$.\n\n\\specBlock{a}{Gamma}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{alpha}, \\xmlDesc{float, required parameter}, shape parameter.\n  %\n  \\item \\xmlNode{beta}, \\xmlDesc{float, optional parameter}, 1/scale or the\n  inverse scale parameter. \\default{1.0}\n  \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n  boundary. \\default{0.0}\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Gamma name='aUserDefinedName'>\n    <alpha>aFloatValue</alpha>\n    <beta>aFloatValue</beta>\n    <low>aFloatValue</low>\n  </Gamma>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Laplace\n\\paragraph{Laplace Distribution}\n\\label{Laplace}\nThe \\distname{Laplace} distribution is a two-parameter continuous\nprobability distribution.  
It is the distribution of the differences\nbetween two independent random variables with identical exponential\ndistributions.\n%\nIts default support is $x \\in (-\\infty,+\\infty)$.\n\n\\specBlock{a}{Laplace}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n\\item \\xmlNode{location}, \\xmlDesc{float, required parameter},\n  determines the location or shift of the distribution.\n\\item \\xmlNode{scale}, \\xmlDesc{float, required parameter}, must be\n  greater than 0, and determines how spread out the distribution is.\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Laplace name='aUserDefinedName'>\n    <location>aFloatValue</location>\n    <scale>aFloatValue</scale>\n  </Laplace>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Logistic\n\\paragraph{Logistic Distribution}\n\\label{Logistic}\nThe \\distname{Logistic} distribution is similar to the\nnormal distribution with a CDF that is an instance of a logistic function ($CDF(x) = \\frac{1}{1+e^{-(x-location)/scale}}$).\n%\nIt resembles the normal distribution in shape but has heavier tails (higher\nkurtosis).\n%\nIts default support is $x \\in (-\\infty,+\\infty)$.\n\n\\specBlock{a}{Logistic}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{location}, \\xmlDesc{float, required parameter}, the\n  distribution\n  mean.\n  %\n  \\item \\xmlNode{scale}, \\xmlDesc{float, required parameter}, scale parameter\n  that\n  is proportional to the standard deviation ($\\sigma^{2}=\\frac{1}{3}\\pi^{2}\\,scale^{2}$).\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Logistic name='aUserDefinedName'>\n    <location>aFloatValue</location>\n    <scale>aFloatValue</scale>\n  </Logistic>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% LogNormal\n\\paragraph{LogNormal Distribution}\n\\label{LogNormal}\nThe \\distname{LogNormal} distribution is a distribution with the\nlogarithm of the random variable being normally distributed.\n%\nIts default support is $x \\in [0, +\\infty)$.\n\n\\specBlock{a}{LogNormal}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{mean}, \\xmlDesc{float, required parameter}, the mean of the log of the\n  distribution.\n  %\n  \\item \\xmlNode{sigma}, \\xmlDesc{float, required parameter}, the standard\n  deviation of the log of the distribution.\n  \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n  boundary. \\default{0.0}\n  %\n\\end{itemize}\n\\nb The \\xmlNode{mean} and \\xmlNode{sigma} listed above are NOT the mean and standard deviation of the\ndistribution; they are the mean and standard deviation of the log of the distribution.  
Using the following\nnotation:\n\\begin{itemize}\n  \\item $\\mu_\\ell$: the $\\mu$ parameter of the lognormal distribution, which RAVEN expects in the\n    \\xmlNode{mean} node;\n  \\item $\\sigma_\\ell$: the $\\sigma$ parameter of the lognormal distribution, which RAVEN expects in the\n    \\xmlNode{sigma} node;\n  \\item $M$: the user-desired mean value of the distribution;\n  \\item $S$: the user-desired standard deviation of the distribution;\n\\end{itemize}\na conversion is defined to translate from mean $M$ and standard deviation $S$ into the parameters RAVEN\nexpects:\n\\begin{equation}\n  \\mu_\\ell = \\log\\left(\\frac{M}{\\sqrt{1+\\frac{S^2}{M^2}}}\\right),\n\\end{equation}\n\\begin{equation}\n  \\sigma_\\ell = \\sqrt{\\log\\left(1+\\frac{S^2}{M^2}\\right)}.\n\\end{equation}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <LogNormal name='aUserDefinedName'>\n    <mean>aFloatValue</mean>\n    <sigma>aFloatValue</sigma>\n    <low>aFloatValue</low>\n  </LogNormal>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% LogUniform\n\\paragraph{LogUniform Distribution}\n\\label{LogUniform}\nThe \\distname{LogUniform} distribution is the distribution of a variable\n$y=h(x)=e^{x}$ where the variable $x$ is uniformly distributed.\nThis distribution supports not only the case $y=h(x)=e^{x}$ (natural case) but also\nthe case where $y=h(x)=10^{x}$ (decimal case).\n%\n\nIts default support is $y \\in [h(lowerBound),h(upperBound)]$.\n\n\\specBlock{a}{LogUniform}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{lowerBound}, \\xmlDesc{float, required parameter}, domain lower boundary.\n  %\n  \\item \\xmlNode{upperBound}, \\xmlDesc{float, required parameter}, domain upper boundary.\n  %\n  \\item \\xmlNode{base}, \\xmlDesc{string, required parameter}, case type (decimal or natural).\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <LogUniform name=\"x_dist\">\n    <upperBound>3.0</upperBound>\n    <lowerBound>1.0</lowerBound>\n    <base>natural</base>\n  </LogUniform>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Normal\n\\paragraph{Normal Distribution}\n\\label{Normal}\nThe \\distname{Normal} distribution is an extremely\nuseful continuous distribution.\n%\nIts utility is due to the central limit theorem, which states that, under mild\nconditions, the mean of many random variables independently drawn from the same\ndistribution is distributed approximately normally, irrespective of the form of\nthe original distribution.\n%\nIts default support is $x \\in (-\\infty, +\\infty)$.\n\n\\specBlock{a}{Normal}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{mean}, \\xmlDesc{float, required parameter}, the distribution\n  mean\n  or expected value.\n  %\n  \\item \\xmlNode{sigma}, \\xmlDesc{float, required parameter}, the standard\n  deviation.\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Normal name='aUserDefinedName'>\n    <mean>aFloatValue</mean>\n    <sigma>aFloatValue</sigma>\n  </Normal>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Triangular\n\\paragraph{Triangular Distribution}\n\\label{Triangular}\nThe \\distname{Triangular} distribution is a continuous distribution that has a\ntriangular shape for its PDF.\n%\n%It 
is often used where the distribution is only vaguely known.\n%\nLike the uniform distribution, upper and lower limits are ``known,'' but a\n``best guess'' of the mode or center point is also added.\n%\nIt has been recommended as a ``proxy'' for the beta distribution.\n%\nIts default support is $x \\in [min,max]$.\n\n\\specBlock{a}{Triangular}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{apex}, \\xmlDesc{float, required parameter}, peak location.\n  %``best guess'', also called, peak factor.\n  %\n  \\item \\xmlNode{min}, \\xmlDesc{float, required parameter}, domain lower\n  boundary.\n  %\n  \\item \\xmlNode{max}, \\xmlDesc{float, required parameter}, domain upper\n  boundary.\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Triangular name='aUserDefinedName'>\n    <apex>aFloatValue</apex>\n    <min>aFloatValue</min>\n    <max>aFloatValue</max>\n  </Triangular>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Uniform\n\\paragraph{Uniform Distribution}\n\\label{Uniform}\nThe \\distname{Uniform} distribution is a continuous distribution with a\nrectangular-shaped PDF.\n%\nIt is often used where the distribution is only vaguely known, but upper and\nlower limits are known.\n%\nIts default support is $x \\in [lower,upper]$.\n\n\\specBlock{a}{Uniform}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{lowerBound}, \\xmlDesc{float, required parameter}, domain lower\n  boundary.\n  %\n  \\item \\xmlNode{upperBound}, \\xmlDesc{float, required parameter}, domain upper\n  boundary.\n  %\n\\end{itemize}\n\\nb Since the Uniform distribution has a rectangular-shaped PDF, truncation does not have any effect;\n this is the reason why its child nodes are the same ones generally used to truncate other distributions.\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Uniform name='aUserDefinedName'>\n    <lowerBound>aFloatValue</lowerBound>\n    <upperBound>aFloatValue</upperBound>\n  </Uniform>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Weibull\n\\paragraph{Weibull Distribution}\n\\label{Weibull}\nThe \\distname{Weibull} distribution is a continuous distribution that is often\nused in the field of failure analysis; in particular, it can mimic distributions\nwhere the failure rate varies over time.\n%\nIf the failure rate is:\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item constant over time, then $k = 1$, suggesting that items fail from\n  random events;\n  \\item decreasing over time, then $k < 1$, suggesting ``infant mortality'';\n  \\item increasing over time, then $k > 1$, suggesting ``wear out'': items become more likely\n  to fail as time goes by.\n  %\n\\end{itemize}\n\\vspace{-5mm}\nIts default support is $x \\in [0, +\\infty)$.\n\n\\specBlock{a}{Weibull}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{k}, \\xmlDesc{float, required parameter}, shape parameter.\n  %\n  \\item \\xmlNode{lambda}, \\xmlDesc{float, required parameter}, scale parameter.\n  \\item \\xmlNode{low}, \\xmlDesc{float, optional parameter}, lower domain\n  boundary. 
\\default{0.0}\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Weibull name='aUserDefinedName'>\n    <lambda>aFloatValue</lambda>\n    <k>aFloatValue</k>\n    <low>aFloatValue</low>\n  </Weibull>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Custom1D\n\\paragraph{Custom1D Distribution}\n\\label{Custom1D}\nThe \\distname{Custom1D} distribution is a custom continuous distribution that can be initialized from a dataObject\ngenerated by RAVEN.\nThis distribution cannot be initialized directly from a dataObject but only through a .csv file.\nThis file must contain the values of either the CDF or the PDF of the desired random variable, sampled along its range.\nIn the distribution block of the RAVEN input file, the user needs to specify which file (including its working directory)\nneeds to be used to initialize the distribution. In addition, the user is required to specify which type (CDF or PDF) of values\nis contained in the file and also the IDs of both the random variable and the CDF/PDF.\nThus the csv file contains a set of points that samples the function $pdf(x)$ or $cdf(x)$ for several values of the stochastic variable $x$. \nThe user needs to specify which variable IDs correspond to $x$ and $pdf(x)$ (or $cdf(x)$).\nThe distribution creates a fourth-order spline interpolation from the provided input points.\n%\nNote that the support of this distribution is set between the minimum and maximum values of the random variable which are \nspecified in the distribution input file.\n\nRefer to the test example (\\texttt{tests/framework/test\\_distributionCustom1D.xml}) for more clarification.\n\n\\specBlock{a}{Custom1D}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{dataFilename}, \\xmlDesc{string, required parameter}, file name to be used to initialize the distribution.\n  \\item \\xmlNode{workingDir}, \\xmlDesc{string, optional parameter}, relative working directory that contains the input file.\n  \\item \\xmlNode{functionType}, \\xmlDesc{string, required parameter}, type of initialization values specified in the input file (pdf or cdf).\n  \\item \\xmlNode{variableID}, \\xmlDesc{string, required parameter}, ID of the variable contained in the input file.\n  \\item \\xmlNode{functionID}, \\xmlDesc{string, required parameter}, ID of the function associated to the variableID contained in the input file.\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n    <Custom1D name=\"pdf_custom\">\n      <dataFilename>PointSetFile2_dump.csv</dataFilename>\n      <functionID>pdf_values</functionID>\n      <variableID>x</variableID>\n      <functionType>pdf</functionType>\n      <workingDir>custom1D/</workingDir>\n    </Custom1D>\n    <Custom1D name=\"cdf_custom\">\n      <dataFilename>PointSetFile3_dump.csv</dataFilename>\n      <functionID>cdf_values</functionID>\n      <variableID>x</variableID>\n      <functionType>cdf</functionType>\n      <workingDir>custom1D/</workingDir>\n    </Custom1D>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n\nThe example above initializes two distributions from two .csv files. 
\nFor example, the first distribution retrieves the PDF values, located in the column with label \\texttt{pdf\\_values}, for several locations of the variable located in the column \nwith label \\texttt{x} in the file \\texttt{PointSetFile2\\_dump.csv}.\n\n\n%%%%%% paragraph 1-Dimensional Discrete Distributions.\n\\subsubsection{1-Dimensional Discrete Distributions}\n\\label{subsubsec:1DDiscrete}\nRAVEN currently supports the 7 discrete distributions described below.\n%\nIn the following paragraphs, the input requirements are reported.\n\n%%%%%% Bernoulli\n\\paragraph{Bernoulli Distribution}\n\\label{Bernoulli}\nThe \\distname{Bernoulli} distribution is a discrete distribution of the outcome\nof a single trial with only two results, 0 (failure) or 1 (success), with a\nprobability of success \\distattrib{p}.\n%\nIt is the simplest building block on which other discrete distributions of\nsequences of independent Bernoulli trials can be based.\n%\nBasically, it is the binomial distribution with only one trial ($n = 1$, probability \\distattrib{p}).\n%\nIts default support is $k \\in \\{0, 1\\}$.\n\n\\specBlock{a}{Bernoulli}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodeIntro\n\\begin{itemize}\n  \\item \\xmlNode{p}, \\xmlDesc{float, required parameter}, probability of\n  success.\n  %\n \\end{itemize}\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Bernoulli name='aUserDefinedName'>\n    <p>aFloatValue</p>\n  </Bernoulli>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Binomial\n\\paragraph{Binomial Distribution}\n\\label{Binomial}\nThe \\distname{Binomial} distribution is the discrete probability distribution of\nthe number of successes in a sequence of \\distattrib{n} independent yes/no\nexperiments, each of which yields success with probability \\distattrib{p}.\n%\nIts default support is $k \\in \\{0, 1, 2, \\ldots, n\\}$.\n\n\\specBlock{a}{Binomial}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n  \\item \\xmlNode{p}, \\xmlDesc{float, required parameter}, probability of\n  success.\n  %\n  \\item \\xmlNode{n}, \\xmlDesc{integer, required parameter}, number of\n  experiments.\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Binomial name='aUserDefinedName'>\n    <n>aIntegerValue</n>\n    <p>aFloatValue</p>\n  </Binomial>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Geometric\n\\paragraph{Geometric Distribution}\n\\label{Geometric}\nThe \\distname{Geometric} distribution is a one-parameter discrete probability distribution.\n%\nThe distribution uses the probability $p$ that a trial will be\nsuccessful.  
The geometric distribution gives the probability of\nobserving $k$ failures before the first success.\n%\nIts support is $k \\in \\{0, 1, 2, \\ldots\\}$.\n\n\\specBlock{a}{Geometric}\n\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n\\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodesIntro\n\\begin{itemize}\n\\item \\xmlNode{p}, \\xmlDesc{float, required parameter}, the success fraction for the trials.\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Geometric name='aUserDefinedName'>\n    <p>aFloatValue</p>\n  </Geometric>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n%%%%%% Poisson\n\\paragraph{Poisson Distribution}\n\\label{Poisson}\nThe \\distname{Poisson} distribution is a discrete probability distribution that\nexpresses the probability of a given number of events occurring in a fixed\ninterval of time and/or space if these events occur with a known average rate\nand independently of the time since the last event.\n%\nIts default support is $k \\in \\{0, 1, 2, \\ldots\\}$.\n\n\\specBlock{a}{Poisson}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodeIntro\n\\begin{itemize}\n  \\item \\xmlNode{mu}, \\xmlDesc{float, required parameter}, mean rate of\n  events/time.\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <Poisson name='aUserDefinedName'>\n    <mu>aFloatValue</mu>\n  </Poisson>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n\n%%%%%% Categorical\n\\paragraph{Categorical Distribution}\n\\label{Categorical}\nThe \\distname{Categorical} distribution is a discrete distribution that describes the result of a random variable that can have $K$ possible outcomes. \nThe probability of each outcome is separately specified.\nThe possible outcomes must be only numerical values (either integer or float numbers). 
No string can be assigned to any outcome.\n%\nThere is not necessarily an underlying ordering of these outcomes, but labels are assigned in describing the distribution (in the range $1$ to $K$).\n%\n\\specBlock{a}{Categorical}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodeIntro\n\\begin{itemize}\n  \\item \\xmlNode{state}, \\xmlDesc{float, required parameter}, probability for outcome 1\n  \\begin{itemize}\n          \\item \\xmlAttr{outcome}, \\xmlDesc{float, required parameter}, outcome value.\n  \\end{itemize}\n  \\item \\xmlNode{state}, \\xmlDesc{float, required parameter}, probability for outcome 2\n  \\begin{itemize}\n          \\item \\xmlAttr{outcome}, \\xmlDesc{float, required parameter}, outcome value.\n  \\end{itemize}\n  \\item ...\n  \\item \\xmlNode{state}, \\xmlDesc{float, required parameter}, probability for outcome K\n  \\begin{itemize}\n          \\item \\xmlAttr{outcome}, \\xmlDesc{float, required parameter}, outcome value.\n  \\end{itemize}\n \\end{itemize}\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n    <Categorical name='testCategorical'>\n        <state outcome=\"10\">0.1</state>\n        <state outcome=\"20\">0.2</state>\n        <state outcome=\"50\">0.15</state>\n        <state outcome=\"60\">0.4</state>\n        <state outcome=\"90\">0.15</state>\n    </Categorical>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n\\paragraph{Uniform Discrete Distribution}\n\\label{subsec:UniformDiscrete}\n\nThe \\textbf{UniformDiscrete} distribution is a discrete distribution which describes a random variable \nthat can take $N$ values, each having equal probability.\nThis distribution allows the user to choose between two sampling strategies: with or without replacement.\nIn case the ``without replacement'' strategy is used, the distribution samples from the set of $N$ specified values \nreduced by the previously sampled values. 
\nAfter the sampler has generated values for all variables, the distribution is \nreset (i.e., the set of values that can be sampled is restored to the full $N$).\nIn case the ``with replacement'' strategy is used, the distribution always samples from the complete set of $N$ specified values.\n\n\\specBlock{a}{UniformDiscrete}\n%\n\\attrIntro\n\\vspace{-5mm}\n\\begin{itemize}\n  \\itemsep0em\n  \\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\\subnodeIntro\n\\begin{itemize}\n  \\item \\xmlNode{lowerBound}, \\xmlDesc{float, required parameter}, lower bound.\n  \\item \\xmlNode{upperBound}, \\xmlDesc{float, required parameter}, upper bound.\n  \\item \\xmlNode{nPoints},    \\xmlDesc{integer, optional parameter}, number of points between the lower and upper bounds.\n  \\item \\xmlNode{strategy},   \\xmlDesc{string,  required parameter}, type of sampling strategy \n  (withReplacement or withoutReplacement).\n  %\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n    <UniformDiscrete name=\"UD_dist\">\n      <lowerBound>3</lowerBound>\n      <upperBound>8</upperBound>\n      <strategy>orderedWithReplacement</strategy>\n    </UniformDiscrete>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n\\paragraph{Markov Categorical Distribution}\n\\label{subsec:markovCategorical}\n\nThe \\textbf{MarkovCategorical} distribution is a specific discrete categorical distribution that describes\na random variable with $K$ possible outcomes, based on the steady-state probabilities provided by a\nMarkov model.\n%\n\\begin{itemize}\n  \\item \\xmlNode{transition}, \\xmlDesc{float, optional field}, the transition matrix of the given Markov model.\n  \\item \\xmlNode{dataFile}, \\xmlDesc{string, optional xml node}. The path of the given data file, i.e. the transition matrix.\n    In this node, the following attribute should be specified: \n    \\begin{itemize}\n      \\item \\xmlAttr{fileType}, \\xmlDesc{string, optional field}, the type of the given data file, default is `csv'.\n    \\end{itemize}\n  \\nb Either \\xmlNode{transition} or \\xmlNode{dataFile} is required to provide the transition matrix.\n  \\item \\xmlNode{workingDir}, \\xmlDesc{string, optional field}, the path of the working directory.\n  \\item \\xmlNode{state}, \\xmlDesc{required xml node}. The output from this state indicates\n    the probability for outcome 1.\n    In this node, the following attributes should be specified:\n    \\begin{itemize}\n      \\item \\xmlAttr{outcome}, \\xmlDesc{float, required field}, outcome value. \n      \\item \\xmlAttr{index}, \\xmlDesc{integer, required field}, the index of the steady-state probabilities corresponding to the transition matrix. \n    \\end{itemize}\n  \\item \\xmlNode{state}, \\xmlDesc{required xml node}. The output from this state indicates\n    the probability for outcome 2.\n    In this node, the following attributes should be specified:\n    \\begin{itemize}\n      \\item \\xmlAttr{outcome}, \\xmlDesc{float, required field}, outcome value. \n      \\item \\xmlAttr{index}, \\xmlDesc{integer, required field}, the index of the steady-state probabilities corresponding to the transition matrix. \n    \\end{itemize}\n  \\item ...\n  \\item \\xmlNode{state}, \\xmlDesc{required xml node}. The output from this state indicates\n    the probability for outcome K.\n    In this node, the following attributes should be specified:\n    \\begin{itemize}\n      \\item \\xmlAttr{outcome}, \\xmlDesc{float, required field}, outcome value. 
\n      \\item \\xmlAttr{index}, \\xmlDesc{integer, required field}, the index of the steady-state probabilities corresponding to the transition matrix. \n    \\end{itemize}\n\n\\end{itemize}\n\n\\textbf{Example:}\n\n\\begin{lstlisting}[style=XML]\n<Simulation>\n ...\n  <Distributions>\n    ...\n    <MarkovCategorical name=\"x_dist\">\n        <!--dataFile fileType='csv'>transitionFile</dataFile-->\n        <transition>\n            -1.1   0.8   0.7\n            0.8    -1.4  0.2\n            0.3    0.6   -0.9\n        </transition>\n        <state outcome='1' index='1'/>\n        <state outcome='2' index='2'/>\n        <state outcome='4' index='3'/>\n    </MarkovCategorical>\n    ...\n  </Distributions>\n ...\n</Simulation>\n\\end{lstlisting}\n\n\n\n%%%%%% N-Dimensional Probability distributions\n\\subsection{N-Dimensional Probability Distributions}\n\\label{subsec:NdDist}\nThe group of $N$-dimensional distributions allows the user to model stochastic dependencies between parameters. Thus, instead of using $N$ distributions for $N$ parameters, the user can define a single distribution lying in an $N$-dimensional space.\nThe following $N$-dimensional probability distributions are available within RAVEN:\n\\begin{itemize}\n\\item MultivariateNormal: Multivariate normal distribution (see Section~\\ref{MultivariateNormal})\n\\item NDInverseWeight: ND Inverse Weight interpolation distribution (see Section~\\ref{NDInverseWeight})\n\\item NDCartesianSpline: ND spline interpolation distribution (see Section~\\ref{NDCartesianSpline})\n\\end{itemize}\nFor the NDInverseWeight and NDCartesianSpline distributions, the user provides the sampled values of either the CDF or the PDF of the distribution. The sampled values can be scattered (for NDInverseWeight) or lie on a Cartesian grid (for NDCartesianSpline).\n\nThe user can specify, for each $N$-dimensional distribution, the parameters of the random number generator function within the \\xmlNode{samplerInit} block of the sampler (see Section~\\ref{sec:Samplers}):\n\\begin{itemize}\n\\item \\xmlNode{initialGridDisc}, \\xmlDesc{positive integer, optional field}, user-defined initial grid discretization. This parameter specifies the number of discretizations that need to be performed, initially, for each dimension in\norder to find the $N$-dimensional coordinate that corresponds to the CDF value represented by a random number (0-1);\n\\item \\xmlNode{tolerance}, \\xmlDesc{float, optional field}, user-defined tolerance in order to find the N-D coordinates corresponding to a random number. 
This tolerance is expressed in terms of CDF.\n\\end{itemize}\n\n\\subsubsection{MultivariateNormal Distribution}\n\\label{MultivariateNormal}\nThe multivariate normal distribution, or multivariate Gaussian distribution, is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.\nThe multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.\nThe multivariate normal distribution of a $k$-dimensional random vector $\\mathbf{x} = [x_1, x_2, \\ldots, x_k]$ can be written in the following notation:\n$ \\mathbf{x}\\ \\sim\\ \\mathcal{N}(\\boldsymbol\\mu,\\, \\boldsymbol\\Sigma)$\nwith $k$-dimensional mean vector\n\n$\\boldsymbol\\mu= [E[x_1], E[x_2], \\ldots, E[x_k]]$\n\nand $k \\times k$ covariance matrix\n\n$\\boldsymbol\\Sigma = [\\mathrm{Cov}[x_i,x_j]], \\; i,j=1,2,\\ldots,k$\n\nThe probability distribution function for this distribution is the following:\n\n$\nf_{\\mathbf x}(x_1,\\ldots,x_k) =\n\\frac{1}{\\sqrt{(2\\pi)^k|\\boldsymbol\\Sigma|}}\n\\exp\\left(-\\frac{1}{2}({\\mathbf x}-{\\boldsymbol\\mu})^\\mathrm{T}{\\boldsymbol\\Sigma}^{-1}({\\mathbf x}-{\\boldsymbol\\mu})\n\\right).\n$\n\nThe specifications of this distribution must be defined within the xml block \\xmlNode{MultivariateNormal}.\nThis XML node needs to contain the following attributes:\n\\vspace{-5mm}\n\\begin{itemize}\n\\itemsep0em\n\\item \\xmlAttr{name}, \\xmlDesc{required string attribute}, user-defined identifier of this multivariate normal distribution.\n%\n\\nb As with other objects, this is the name that can be used to refer to this specific entity from other input XML blocks.\n\\item \\xmlAttr{method}, \\xmlDesc{required string attribute}, defines which method is used to generate the multivariate normal distribution.\nThe only allowable methods are \\xmlString{spline} and \\xmlString{pca}.\n%\n\\end{itemize}\n\\vspace{-5mm}\n\nIn RAVEN the MultivariateNormal distribution can be initialized through the following keywords:\n\\begin{itemize}\n  \\item \\xmlNode{mu}, list of the mean values of each dimension.\n  \\item \\xmlNode{covariance}, list of element values in the covariance matrix. There are two types of \\xmlNode{covariance}, based on the \\xmlAttr{type}:\n  \\begin{itemize}\n    \\item \\xmlAttr{type}, \\xmlDesc{string, optional field}, specifies the type of covariance, the default \\xmlAttr{type} is \\xmlString{abs}. Possible values for \\xmlAttr{type} are \\xmlString{abs} and \\xmlString{rel}. \\nb \\xmlString{abs} indicates the covariance is a normal covariance matrix, while \\xmlString{rel} indicates the covariance is a relative covariance matrix. In addition, the method \\xmlString{pca} can be combined with both types, while the method \\xmlString{spline} only accepts the type \\xmlString{abs}.\n  \\end{itemize}\n  \\item \\xmlNode{transformation}, \\xmlDesc{XML node, optional field}, option to enable input parameter transformation using the principal component analysis (PCA) approach. If this node is provided, PCA will be used to compute the principal components of the input covariance matrix. The subnode \\xmlNode{rank} is used to indicate the number of principal components that will be used for the input transformation. 
The following subnode must be specified:\n  \\begin{itemize}\n    \\item \\xmlNode{rank}, \\xmlDesc{positive integer, required field}, user-defined dimensionality reduction.\n  \\end{itemize}\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n    <MultivariateNormal name='MultivariateNormal_test' method='spline'>\n        <mu>0.0 60.0</mu>\n        <covariance>\n\t1.0 0.7\n\t0.7 1.0\n        </covariance>\n    </MultivariateNormal>\n    <MultivariateNormal name='MultivariateNormal_abs' method='pca'>\n        <mu>0.0 60.0</mu>\n        <covariance type='abs'>\n\t1.0 0.7\n\t0.7 1.0\n        </covariance>\n    </MultivariateNormal>\n    <MultivariateNormal name='MultivariateNormal_rel' method='pca'>\n        <mu>0.0 60.0</mu>\n        <covariance type='rel'>\n\t1.0 0.7\n\t0.7 1.0\n        </covariance>\n    </MultivariateNormal>\n  ...\n</Distributions>\n\\end{lstlisting}\n\nIn the following, we define a distribution with a transformation node using the PCA method. The number of principal components is\ndefined in \\xmlNode{rank}. In this distribution, PCA is employed to restructure the multivariate normal distribution. In addition,\nthe number of uncorrelated variables is also determined by \\xmlNode{rank}.\n\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n    <MultivariateNormal name='MultivariateNormal_test' method='pca'>\n        <mu>0.0 10.0 20.0</mu>\n        <covariance type=\"abs\">\n           1.0   0.7   -0.2\n           0.7   1.0   0.4\n           -0.2  0.4   1.0\n        </covariance>\n        <transformation>\n          <rank>2</rank>\n        </transformation>\n    </MultivariateNormal>\n  ...\n</Distributions>\n\\end{lstlisting}\n\n\\subsubsection{NDInverseWeight Distribution}\n\\label{NDInverseWeight}\nThe NDInverseWeight distribution creates an $N$-dimensional distribution given a set of\nscattered points. These points sample the PDF of the original distribution.\nDistribution values (PDF or CDF) are calculated using the inverse weight\ninterpolation scheme.\n\n\\specBlock{a}{NDInverseWeight}\n%\n\\attrsIntro\n\\vspace{-5mm}\n\\begin{itemize}\n\\itemsep0em\n\\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\n\nIn RAVEN the NDInverseWeight distribution can be initialized through the following nodes:\n\\begin{itemize}\n\\item \\xmlNode{p}, \\xmlDesc{float, required parameter}, power parameter. 
Greater values of $p$ assign greater influence to values closest to the interpolated point.\n\\item \\xmlNode{dataFilename}, \\xmlDesc{string, required parameter}, name of the data file containing the scattered values (file type '.txt').\n\\begin{itemize}\n\\item \\xmlAttr{type}, \\xmlDesc{required string attribute}, indicates whether the data in the indicated file are PDF or CDF values.\n\\end{itemize}\n\\item \\xmlNode{workingDir}, \\xmlDesc{string, required parameter}, folder location of the data file.\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <NDInverseWeight name='...'>\n        <p>...</p>\n        <dataFilename type='...'>...</dataFilename>\n        <workingDir>...</workingDir>\n  </NDInverseWeight>\n  ...\n</Distributions>\n\\end{lstlisting}\n\nThe data file referenced by \\xmlNode{dataFilename} is listed row by row, as follows:\n\\begin{itemize}\n\\item number of dimensions\n\\item number of sampled points\n\\item ND coordinate of each sampled point\n\\item value of each sampled point\n\\end{itemize}\n\nAs an example, the following shows the data entries contained in \\xmlNode{dataFilename} for a 3-dimensional data set that contains two sampled CDF values: ([0.0, 0.0, 0.0], 0.1) and ([1.0, 1.0, 0.0], 0.8).\n\nExample scattered data file:\n\\begin{lstlisting}\n3\n2\n0.0\n0.0\n0.0\n1.0\n1.0\n0.0\n0.1\n0.8\n\\end{lstlisting}\n\n\\subsubsection{NDCartesianSpline Distribution}\n\\label{NDCartesianSpline}\n\nThe NDCartesianSpline distribution creates an $N$-dimensional distribution given a set of points\nregularly distributed on a Cartesian grid. These points sample the PDF of the original distribution.\nDistribution values (PDF or CDF) are calculated using the ND spline\ninterpolation scheme.\n\n\n\\specBlock{a}{NDCartesianSpline}\n%\n\\attrsIntro\n\\vspace{-5mm}\n\\begin{itemize}\n\\itemsep0em\n\\item \\nameDescription\n\\end{itemize}\n\\vspace{-5mm}\n\n\nIn RAVEN the NDCartesianSpline distribution can be initialized through the following nodes:\n\\begin{itemize}\n\\item \\xmlNode{dataFilename}, \\xmlDesc{string, required parameter}, name of the data file containing the sampled values (file type '.txt').\n\\begin{itemize}\n\\item \\xmlAttr{type}, \\xmlDesc{required string attribute}, indicates whether the data in the indicated file are PDF or CDF values.\n\\end{itemize}\n\\item \\xmlNode{workingDir}, \\xmlDesc{string, required parameter}, folder location of the data file.\n\\end{itemize}\n\nExample:\n\\begin{lstlisting}[style=XML]\n<Distributions>\n  ...\n  <NDCartesianSpline name='...'>\n        <dataFilename type='...'>...</dataFilename>\n        <workingDir></workingDir>\n  </NDCartesianSpline>\n  ...\n</Distributions>\n\\end{lstlisting}\n\nThe data file referenced by \\xmlNode{dataFilename} is listed row by row, as follows:\n\\begin{itemize}\n\\item number of dimensions\n\\item number of discretizations for each dimension\n\\item discretization values for each dimension\n\\item value of each sampled point\n\\end{itemize}\n\nAs an example, the following shows the data entries contained in \\xmlNode{dataFilename} for a 2-dimensional CDF data set on the following grid $(x,y)$:\n\\begin{itemize}\n\\item first dimension (x): -0.5, 0.5\n\\item second dimension (y): 1.0, 2.0, 3.0\n\\end{itemize}\n\nExample grid data file:\n\\begin{lstlisting}\n2\n2\n3\n-0.5\n0.5\n1.0\n2.0\n3.0\nCDF value of (-0.5,1.0)\nCDF value of (+0.5,1.0)\nCDF value of (-0.5,2.0)\nCDF 
value of (+0.5,2.0)\nCDF value of (-0.5,3.0)\nCDF value of (+0.5,3.0)\n\\end{lstlisting}\n", "meta": {"hexsha": "ea95c87bec58371f0d89b18d7c9161196e59cb97", "size": 42101, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/ProbabilityDistributions.tex", "max_stars_repo_name": "FlanFlanagan/raven", "max_stars_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-10T18:54:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T18:54:09.000Z", "max_issues_repo_path": "doc/user_manual/ProbabilityDistributions.tex", "max_issues_repo_name": "FlanFlanagan/raven", "max_issues_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/ProbabilityDistributions.tex", "max_forks_repo_name": "FlanFlanagan/raven", "max_forks_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.0203921569, "max_line_length": 508, "alphanum_fraction": 0.7359445144, "num_tokens": 11400, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7401743677704878, "lm_q1q2_score": 0.5712987730307989}}
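To make the grid-file layout above concrete, here is a minimal Python sketch that writes a file in this format (the function name and the constant placeholder CDF are illustrative assumptions, not part of RAVEN):
\begin{lstlisting}[language=Python]
import itertools

def write_nd_spline_file(path, grids, cdf):
    # Layout: number of dimensions, then the discretization count of each
    # dimension, then the grid values of each dimension, then one CDF value
    # per grid point, with the first dimension varying fastest (as in the
    # example above).
    with open(path, "w") as f:
        f.write(f"{len(grids)}\n")
        for g in grids:
            f.write(f"{len(g)}\n")
        for g in grids:
            for v in g:
                f.write(f"{v}\n")
        for point in itertools.product(*reversed(grids)):
            f.write(f"{cdf(*reversed(point))}\n")

# 2-D grid matching the example: x in {-0.5, 0.5}, y in {1.0, 2.0, 3.0}
write_nd_spline_file("grid.txt",
                     [[-0.5, 0.5], [1.0, 2.0, 3.0]],
                     cdf=lambda x, y: 0.5)  # placeholder CDF values
\end{lstlisting}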
{"text": "\\subsection{Influence of the topology}\n\n\\begin{figure}\n  \\includegraphics[width=\\columnwidth]{figure-topology-influence.pdf}\n  %\n  \\caption{%\n  %\n  \\textbf{Influence of topology on the self organization.}\n  %\n  The same initial set of 1024 neurons has been equiped with 2-nearest neighbors, 3 nearest neighbors and 4-nearest neighbors induced topology (panels \\textbf{A}, \\textbf{B} and \\textbf{C} respectively) and trained on 25,000 random RGB colors. This lead to qualitatively different self-organization as shown on panels \\textbf{D}, \\textbf{E} and \\textbf{F} respectively, with major discontinuities in the 2-nearest neighbors case. ).\n  %\n  }\n  %\n  \\label{fig:topology-influence}\n\n\\end{figure}\n", "meta": {"hexsha": "c33ba28b67717c254611c129bac86b13fd4fefd7", "size": 701, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article-overleaf/05-appendix-D.tex", "max_stars_repo_name": "rougier/VSOM", "max_stars_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-11-20T06:27:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:20:28.000Z", "max_issues_repo_path": "article-overleaf/05-appendix-D.tex", "max_issues_repo_name": "rougier/VSOM", "max_issues_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article-overleaf/05-appendix-D.tex", "max_forks_repo_name": "rougier/VSOM", "max_forks_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-03T04:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T04:41:57.000Z", "avg_line_length": 41.2352941176, "max_line_length": 432, "alphanum_fraction": 0.7517831669, "num_tokens": 196, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5712987647224005}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% UMB-CS110-2015S: Introduction to Computing\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/UMB-CS110-2015S\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def \\topDirectory {.}\n\\def \\texDirectory {\\topDirectory/src/main/tex}\n\n\\documentclass[12pt,letterpaper,twoside]{article}\n\\usepackage{\\texDirectory/template/style/directives}\n\\usepackage{\\texDirectory/template/style/assignment}\n\\input{\\texDirectory/template/config}\n\n\\begin{document}\n\n\\doc{title}{Midterm Exam}\n\\doc{points}{20}\n\n\\prepare{header}\n\n\\section*{Question 1}\n\nThe following code snippets either do not compile or do not run as expected. There are five \\textbf{distinct} problems in each code snippet. You are expected to find and fix all errors so that given command-line arguments will lead to expected output as indicated.\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\\item Execution Script \\hfill Expected Output\\\\\n\\texttt{java HelloWorld President} \\hfill \\texttt{Hello President!}\n\\begin{lstlisting}\npublic class HelloWorld {\n\tpublic static void main(String args) {\n\t\tSystem.out.printn(\"Hello\" + args[0]!)\n\t}\n}\n\\end{lstlisting}\n\n\\item Execution Script \\hfill Expected Output\\\\\n\\texttt{java Divisible 15 4} \\hfill \\texttt{15 not divisible by 4}\n\n\\begin{lstlisting}\npublic class Divisible.java {\n\tpublic static void main(String args) {\n\t\tint b;\n\t\tint a = args[0];\n\t\tb = args[1];\n\t\tif (a % b = 0) {\n\t\t\tSystem.out.println(a + \" divisible by \" + b);\n\t\t}\n\t\telseif {\n\t\t\tSystem.out.print(a + \" not divisible by \" + b);\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\n\\item Execution Script \\hfill Expected Output\\\\\n\\texttt{java Quadratic 1 -2 -4} \\hfill \\texttt{-1.236 and 3.237}\n\n\\begin{lstlisting}\npublic class Quadratic {\n\tpublic static void main(String[] args) {\n\t\tint a = Integer.parseInt(args[0]);\n\t\tint b = Integer.parseInt(args[1]);\n\t\tint c = Integer.parseInt(args[2]);\n\t\tdouble discriminant = b^2 - 4*a*c;\n\t\tif (double discriminant > 0) {\n\t\t\tdouble 1sol = (- b + sqrt(discriminant))/(2*a);\n\t\t\tdouble 2sol = (- b - sqrt(discriminant))/(2*a);\n\t\t\tSystem.out.printf(\"%f and %f\\n\", sol1, sol2);\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\n\\item Execution Script \\hfill Expected Output\\\\\n\\texttt{java IdentityMatrix 2} \\hfill \\texttt{1 0}\\\\ \\raggedleft \\texttt{0 1}\n\n\\begin{lstlisting}\npublic class IdentityMatrix {\n\tpublic static void main(String[] args) {\n\t\tint i, j;\n\t\tint row = Integer.parseInt(args[0]);\n\t\tint[][] matrix = int[row][row];\n\t\tfor (i = 1; i < row; i++) {\n\t\t\tmatrix[i,i] = 1;\n\t\t}\n\t\tfor (i = 1; i < row; i++) {\n\t\t\tfor (j = 1; j < row; j++) {\n\t\t\t\tSystem.out.print(matrix[i,j] + \" \");\n\t\t\t}\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\\newpage\n\\item Execution Script \\hfill Expected Output\\\\\n\\texttt{java Factorial 5} \\hfill \\texttt{5! is 120}\n\n\\begin{lstlisting}\npublic class Factorial {\n\tpublic static void main(String[] args) {\n\t\tlong product = 1;\n\t\tlong a = Long.parseLong(args[0]);\n\t\tfor (i = a: i >= 0: i--) {\n\t\t\tproduct *= a;\n\t\t}\n\t\tSystem.out.println(a! 
+ \" is \" + product );\n\t}\n}\n\\end{lstlisting}\n\n\\end{enumerate}\n\n\\section*{Question 2}\n\nDetermine the output of the following code snippet and support your answer by explaining how the program works. No assumption about user input can be made except that the input is an integer.\n\n\\begin{enumerate}[label=\\textbf{(\\alph*)}]\n\\item Execution Script \\hfill \\texttt{java CoolArray}\n\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class CoolArray {\n\tpublic static void main(String[] args) {\n\t\tScanner input = new Scanner(System.in);\n\t\tSystem.out.print(\"Enter array size: \");\n\t\tint size = input.nextInt();\n\t\tinput.close();\n\t\tdouble array[] = new double[2*size];\n\t\tfor (int i = 0; i < size; i++) {\n\t\t\tint num = (int) (Math.random() * 10);\n\t\t\tarray[i] = num;\n\t\t\tarray[2*size - i - 1] = 10 - num;\n\t\t}\n\t\tdouble sum = 0;\n\t\tfor (int i = 0; i < 2*size; i++) {\n\t\t\tsum += array[i];\n\t\t}\n\t\tSystem.out.println(sum/size/2);\n\t\tSystem.out.println();\n\t}\n}\n\n\\end{lstlisting}\n\n\\end{enumerate}\n\n\\section*{Question 3}\n\nWrite a program \\texttt{GPAcalculator.java} that asks a student for \\texttt{N} grades - where \\texttt{N} is given by student - and number of credits corresponding to each and prints his GPA based on entered information. You are expected to use Class Scanner to get input from user.\n\n\\prepare{footer}\n\n\\end{document}\n", "meta": {"hexsha": "8a64a49cc580acf26dba88bca60be87fad1addee", "size": 4414, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/main/tex/exams/m01.tex", "max_stars_repo_name": "UMB-CS110-2015S/Assignments", "max_stars_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:40.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:40.000Z", "max_issues_repo_path": "src/main/tex/exams/m01.tex", "max_issues_repo_name": "UMB-CS110-2015S/Assignments", "max_issues_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2015-08-22T15:44:45.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-17T16:39:11.000Z", "max_forks_repo_path": "src/main/tex/exams/m01.tex", "max_forks_repo_name": "UMB-CS110-2015S/Assignments", "max_forks_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4774193548, "max_line_length": 281, "alphanum_fraction": 0.6658359764, "num_tokens": 1276, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760727, "lm_q2_score": 0.7718434978390746, "lm_q1q2_score": 0.5712987597594014}}
{"text": "\\lab{Applications}{Optimization}{Transportation}\n\\label{lab:Transportation}\n\\objective{.}\n\n\n\\section*{Transportation}\n\nThe following data is from the Seattle Transportation Traffic Management Division.\n\nThe graph represents a section of one way roads in Seattle, with the exception of Third street going both east and west.\n \nWe can model the flow of traffic using a system of equations. For example, if the rate of flow into an intersection is $x$ and $20$ and the flow out of the same intersection is $y$ and $45$, then the flow through the intersection can be represented by $x-y = 25$.\n\n \nCreate the system of equations that models these streets.\n \nNow solve the system using linalg.lsts(). %Part 1a\n\\begin{lstlisting}[style=python]\n\\end{lstlisting}\n\nWas this the kind of answer you expected? How can you have a negative number of cars? Since this is a closed system, the number of cars leaving is the same entering, we'll get infinitely many answers.\nThis explains the negative entries of x.\nWe can get further information by measuring the number of cars passing through each independent street in this system. Find the rank of the matrix to know how many streets we should be looking for and identify which streets we need to place counters on. \n\n\n \nThe counters revealed the following data.\n\n\\begin{table}\n $x_4$ =& 2303\\\\\n $x_9$ =& 5302\\\\\n $x_{10}$ =& 13412\\\\\n $x_{15}$ =& 3052\\\\\n $x_{16}$ =& 4270\\\\\n $x_{17}$ =& 1243\\\\\n\\end{table}\n\nSolve the system with these parameters.\n\nSo we've determined the amount of traffic on each road. Interpreting, we have that $9969$ cars travel east from Pike to Union along Fifth Street.\n\nWe can now identify the intersections with significant traffic in one direction and less traffic in the other.\n \nIntersection A has lots of cars travelling east. $18144$ cars approach from the west and and $9972$ leave from the east. Comparatively few cars travel north, with $2303$ cars approaching the intersection from the south. This indicates intersection A would a good candidate for a light. Intersection E would also be a good choice. \n\nWhich intersections would you recommend putting lights in first?\n \n%Part 2\nNow we want to find the fastest path through from A to L. This can be represented by a a Minimum Cost Linear Program for this network flow.\nWe'll define $c_i$ to be the cost of travelling along road $x_i$. The cost will represent the amount of traffic, the numbers you found in Part 1.\n\n$A$, an $m\\times n$ matrix, is called the arc-node incidence matrix where $m$ is the number of nodes and $n$ is the number of arcs. In our case, there are $12$ intersections and $17$ streets. $A$ is determined by the formula\n\\begin{displaymath}\n   f(x) = \\left\\{\n     \\begin{array}{lr}\n       1 & : arc j starts at node i\\\\\n       -1 & : arc j ends at node i\\\\\n       0 & : else\n     \\end{array}\n   \\right.\n\\end{displaymath}\n \nNotice that each column must sum to $0$.\nArc 1, or $x_1$, starts at node, intersection, $A$ and ends at node $B$. So the first column of $A$ is $[1,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]^T$.\n\n$b$ is the external supply to the system. So $b$ is the same as in part 1, but be sure to remove the independent numbers we found in part 1. Unlike other optimization problems, $Ax=b$.  
\n\n\n\n% TODO: look at network-flow Python packages\n", "meta": {"hexsha": "66ca9497af54bb2644245d9def880d06ebabcc8f", "size": 3279, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Applications/NetworkFlow/NetworkFlow.tex", "max_stars_repo_name": "abefrandsen/numerical_computing", "max_stars_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2016-10-18T19:54:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-09T20:12:38.000Z", "max_issues_repo_path": "Applications/NetworkFlow/NetworkFlow.tex", "max_issues_repo_name": "abefrandsen/numerical_computing", "max_issues_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Applications/NetworkFlow/NetworkFlow.tex", "max_forks_repo_name": "abefrandsen/numerical_computing", "max_forks_repo_head_hexsha": "90559f7c4f387885eb44ea7b1fa19bb602f496cb", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-05-14T16:07:59.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-20T09:05:06.000Z", "avg_line_length": 46.8428571429, "max_line_length": 330, "alphanum_fraction": 0.736810003, "num_tokens": 856, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6548947290421275, "lm_q2_score": 0.8723473730188542, "lm_q1q2_score": 0.5712956964837943}}
{"text": "\\subsection{Comparison of Accuracy vs Order}\n\nIt would be interesting to investigate whether the accuracy changes depending on\nthe transfer function's order $n$.\n\nFor each  order $n$ 100 random PTn transfer functions $G_n(s)$ were generated in\nthe form of:\n\n\\begin{equation}\n    G(s) = \\prod_{k=1}^n \\frac{1}{s+T_k}\n    \\label{eq:random_ptn}\n\\end{equation}\n\nwhere $n$ was a random integer in  the  range of $[2..8]$ and $T_k$ was a random\nnumber in the range of $[1..9]$.\n\nThe step response of each $G_n(s)$  was fed into each method. The step responses\nof each resulting transfer  function  was  then  compared  to the original input\nfunction's step  response  and the root mean square error (RMSE) was calculated.\nThe  100 errors were then averaged, resulting in a final error  value  for  each\nmethod, for each order.\n\n\\begin{figure}\n    \\includegraphics[width=\\linewidth]{images/error_order}\n    \\caption{The mean square error (MSE) of each method to a randomly generated step response, in function of filter order}\n    \\label{fig:error_order}\n\\end{figure}\n\\begin{figure}\n    \\includegraphics[width=\\linewidth]{images/sani_interpolation_vs_lookup}\n    \\caption{Interpolation formulae proposed by L. Sani compared to ``brute force'' calculating the individual t10, t50, t90 parameters and creating lookup curves}\n    \\label{fig:interpolation_vs_lookup}\n\\end{figure}\n\nThe data portrayed in figure \\ref{fig:error_order} shows these errors.\n\n\\subsubsection*{Conclusions}\n\nAs expected, the fitted methods yield  the  most  accurate results. The Sani fit\nappears to be better than the Hudzovic fit.\n\nThe  most  accurate  non-fitted  method  is  Hudzovic's   transfer  function  in\ncombination with Sani's characterisation (orange curve).\n\nAn interesting observation is that the method proposed by L. Sani\\cite{ref:sani}\nusing the $t_{10}$, $t_{50}$,  $t_{90}$ characterisation (purple curve) performs\nworse and worse the higher the order, whereas all  other  methods perform better\nand  better.  The exact reason as to why this happens is a consequence of  using\nthe    interpolation     formulae     proposed     by    L.    Sani    (equation\n\\ref{eq:sani_interpolation}) rather than using lookup curves.\n\nTo  see  if  this  is indeed the cause, the MATLAB code was adapted such that L.\nSani's  method  for determining $r$, $T$, $n$ based on the input data  $t_{10}$,\n$t_{50}$, $t_{90}$ used  a  lookup  curve  instead  of  using  the interpolation\nformulae. The same simulation was performed again with the modified code.\n\nThe result can be seen in figure \\ref{fig:interpolation_vs_lookup}.  The  purple\nline is the same  purple  curve  from  figure  \\ref{fig:error_order}. 
The result can be seen in figure \\ref{fig:interpolation_vs_lookup}. The purple line is the same purple curve from figure \\ref{fig:error_order}. The dashed purple line shows the result of the modification.\n\nIt appears that by using lookup curves the resulting transfer function is less accurate than if interpolation formulae were used.\n\n", "meta": {"hexsha": "faa27ae4d28a904e1ed33adc7d60bc730210fa4c", "size": 2869, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "versuche/mlab/sections/results/accuracy_vs_order.tex", "max_stars_repo_name": "TheComet93/laborjournal", "max_stars_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "versuche/mlab/sections/results/accuracy_vs_order.tex", "max_issues_repo_name": "TheComet93/laborjournal", "max_issues_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "versuche/mlab/sections/results/accuracy_vs_order.tex", "max_forks_repo_name": "TheComet93/laborjournal", "max_forks_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5396825397, "max_line_length": 163, "alphanum_fraction": 0.7507842454, "num_tokens": 751, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146848, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5712086483519256}}
{"text": "% To be compiled by XeLaTeX, preferably under TeX Live.\n% LaTeX source for ``Yanqi Lake Lectures on Algebra'' Part III.\n% Copyright 2019  \u674e\u6587\u5a01 (Wen-Wei Li).\n% Permission is granted to copy, distribute and/or modify this\n% document under the terms of the Creative Commons\n% Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)\n% https://creativecommons.org/licenses/by-nc/4.0/\n\n% To be included\n\\chapter{Integral dependence, Nullstellensatz and flatness}\n\nThe materials below largely come from \\cite[\\S 4]{Eis95}, \\cite[V.4]{AK70} and \\cite[\\S\\S 5--6]{Mat80}. In what follows, any ring $R$ and the $R$-algebras are assumed to be commutative. For elements $a,b, \\ldots$ in an $R$-algebra, denote by $R[a,b, \\ldots]$ the $R$-subalgebra generated by them.\n\n\\section{Integral extensions}\nConsider an $R$-algebra $A$. Recall that to give an $R$-algebra $A$ is the same as to give a ring homomorphism $\\varphi: R \\to A$. We shall switch freely between algebra and homomorphisms, omitting $\\varphi$ if necessary. In many concrete circumstances $R$ is simply a subring of $A$.\n\n\\begin{definition}\\index{integrality}\n\tAn element $x \\in A$ is said to be \\emph{integral} over $R$ if there exists a monic polynomial $P(X) = X^n + a_{n-1} X^{n-1} + \\cdots + a_0 \\in R[X]$ (with $n \\geq 1$) such that $P(x)=0$. If every $x \\in A$ is integral over $R$, we say $A$ is integral over $R$.\n\\end{definition}\nNote that elements from $R$ are trivially integral: take $P(X)$ with $n=1$. In the case $A = \\CC$ and $R = \\Z$, we recover the notion of \\emph{algebraic integers}.\n\nWhen $R$ is a field, we usually say \\emph{algebraic} instead of \\emph{integral}. The monic assumption on $P$ is crucial when $R$ is not a field, as illustrated by the following proof.\n\n\\begin{proposition}\n\tAn element $x \\in A$ is integral over $R$ if and only if there exists an $R$-submodule $M \\subset A$ such that\n\t\\begin{compactitem}\n\t\t\\item $M$ is a finitely generated $R$-module;\n\t\t\\item $xM \\subset M$, thus $M$ is an $R[x]$-module;\n\t\t\\item $M$ is a faithful $R[x]$-module i.e. $\\mathrm{ann}_{R[x]}(M) = \\{0\\}$, and\n\t\\end{compactitem}\n\tIf $x$ is integral, $M := R[x]$ satisfies the conditions listed above.\n\\end{proposition}\n\\begin{proof}\n\tThis is a familiar application of Cayley-Hamilton theorem, which we recall below. If $x$ is integral, say $x^n + a_{n-1} x^{n-1} + \\cdots + a_0 = 0$, a straightforward induction shows that\n\t\\[ R[x] = \\sum_{i=0}^{n-1} Rx^i. \\]\n\tIn particular we may take $M := R[x]$ to be the required submodule, which is faithful as $1 \\in R[x]$. Conversely, given a submodule $M$ as above, with generators $x_1, \\ldots, x_m$, we make $M$ into an $R[X]$-module by letting the variable $X$ act as multiplication by $x$. Writing $x \\cdot x_i = \\sum_{j=1}^m a_{ij} x_j$, there is the matrix equation\n\t\\[ (X \\cdot 1_{m \\times m} - A) \\begin{pmatrix} x_1 \\\\ \\vdots \\\\ x_m \\end{pmatrix} = 0, \\quad A = (a_{ij})_{1 \\leq i,j \\leq m} \\in \\text{Mat}_{m \\times m}(R) \\]\n\tover $R[X]$. Now multiply both sides by the cofactor matrix $(X \\cdot 1_{m \\times m} - A)^\\vee$, we get $P(X) x_i = 0$ for all $i$, where $P \\in R[X]$ is the characteristic polynomial of $A$, i.e. $P(x)M = \\{0\\}$. Since $M$ is faithful as an $R[x]$-module, we get $P(x)=0$.\n\\end{proof}\n\n\\begin{corollary}\n\tThe integral elements in an $R$-subalgebra $A$ form a subalgebra. 
\\begin{corollary}\n\tThe integral elements in an $R$-algebra $A$ form a subalgebra. In particular, $A$ is integral over $R$ if and only if it has a set of integral generators.\n\\end{corollary}\n\\begin{proof}\n\tLet $a, b \\in A$ be integral elements. One readily checks that\n\t\\begin{itemize}\n\t\t\\item $R[a,b]$ is finitely generated as an $R$-module (say by certain monomials $a^i b^j$);\n\t\t\\item $R[a,b]$ is faithful (as an $R[a,b]$-module) because it contains $1$.\n\t\\end{itemize}\n\tThus $R[a,b]$ witnesses the integrality of $a+b$ and $ab$, since they both stabilize $R[a,b]$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:integrality-tower}\n\tConsider ring homomorphisms $R \\to A \\to B$ such that $A$ is integral over $R$. If $y \\in B$ is integral over $A$, then it is integral over $R$. \n\\end{proposition}\n\\begin{proof}\n\tAssume $y^n + a_{n-1} y^{n-1} + \\cdots + a_0 = 0$ with $a_0, \\ldots, a_{n-1} \\in A$ integral over $R$. The $R[a_0, \\ldots, a_{n-1}]$-module $R[a_0, \\ldots, a_{n-1}][y]$ is also finitely generated over $R$, faithful and preserved by $y$, hence it witnesses the integrality of $y$ over $R$.\n\\end{proof}\n\nThis subalgebra of integral elements in $A$ is called the \\emph{integral closure}\\index{integral closure} of $R$ in $A$. If the integral closure equals the image of $R$ in $A$, we say $R$ is \\emph{integrally closed} in $A$. The integral closure is automatically integrally closed by virtue of Proposition \\ref{prop:integrality-tower}.\n\n\\begin{definition}\\index{normal}\n\tLet $R$ be an integral domain and denote by $K$ its field of fractions. The integral closure of $R$ in $K$ is called the \\emph{normalization} of $R$. The domain $R$ is said to be \\emph{normal} if $R$ is integrally closed in $K$.\n\\end{definition}\n\nThe first examples of normal domains come from the unique factorization domains (UFDs) that you have seen in undergraduate algebra, including $\\Z$, $\\Q[X,Y]$, etc.\n\n\\begin{proposition}\n\tUnique factorization domains are normal.\n\\end{proposition}\n\\begin{proof}\n\tLet $x = r/s \\in K$ with $r,s \\in R$ coprime. If there is an integral dependence relation $x^n + a_{n-1} x^{n-1} + \\cdots + a_0 = 0$, we will have $r^n + a_{n-1} r^{n-1}s + \\cdots + a_0 s^n = 0$, hence $s \\mid r^n$. As $r$ is coprime to $s$, we see $s \\in R^\\times$ and $x \\in R$.\n\\end{proof}\n\nLet us show that taking integral closure commutes with localizations. Geometrically, this means that taking integral closure is a local operation on $\\Spec(R)$.\n\n\\begin{lemma}\n\tLet $A$ be an $R$-algebra, $S$ be a multiplicative subset of $R$. Denote by $\\tilde{R}$ the integral closure of $R$ in $A$. Then the integral closure of $R[S^{-1}]$ in $A[S^{-1}]$ equals $\\tilde{R}[S^{-1}]$.\n\\end{lemma}\n\\begin{proof}\n\tSuppose that $x \\in A$ satisfies $x^n + \\sum_{i=0}^{n-1} b_i x^i = 0$ with $b_0, \\ldots, b_{n-1} \\in R$. For all $s \\in S$ we deduce $\\left(\\frac{x}{s}\\right)^n + \\sum_{i=0}^{n-1} \\frac{b_i}{s^{n-i}} \\cdot \\left( \\frac{x}{s} \\right)^i = 0$ in $A[S^{-1}]$. Therefore $\\tilde{R}[S^{-1}]$ is in the integral closure of $R[S^{-1}]$ in $A[S^{-1}]$.\n\n\tConversely, suppose that $x/s \\in A[S^{-1}]$ satisfies\n\t\\[ \\left(\\frac{x}{s}\\right)^n + \\sum_{i=0}^{n-1} \\frac{a_i}{s_i} \\left( \\frac{x}{s} \\right)^i = 0, \\quad a_i \\in R, \\; s_i \\in S. \\]\n\tClearing denominators, we see that $xt$ is integral over $R$, where $t := s_0 s_1 \\cdots s_{n-1}$; hence $\\frac{x}{s} = \\frac{xt}{st} \\in \\tilde{R}[S^{-1}]$.\n\\end{proof}\n\nIn particular, localizations of a normal domain are still normal; a converse is stated in the exercise following the next example.\n\n
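\\begin{example}\n\tA standard example of failure of normality: the domain $R = \\Z[\\sqrt{5}]$ is not normal. Indeed, its fraction field is $K = \\Q(\\sqrt{5})$, and $x := (1+\\sqrt{5})/2 \\in K$ satisfies the monic equation $x^2 - x - 1 = 0$ while $x \\notin R$. The normalization of $R$ turns out to be $\\Z[(1+\\sqrt{5})/2]$, the ring of integers of $K$.\n\\end{example}\n\n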
\\begin{exercise}\n\tLet $R$ be a domain. If $R_{\\mathfrak{m}}$ is normal for every maximal ideal $\\mathfrak{m}$ of $R$, then so is $R$ itself. Hint: recall that $R = \\bigcap_{\\mathfrak{m}} R_{\\mathfrak{m}}$ inside the fraction field $K$.\n\\end{exercise}\n\n\\begin{exercise}\n\tLet $R := \\CC[X,Y]/(Y^2-X^3)$. Show that\n\t\\begin{compactenum}[(i)]\n\t\t\\item $R$ is a domain, or equivalently, $Y^2 - X^3$ is an irreducible polynomial;\n\t\t\\item the element $x := \\bar{Y}/\\bar{X} \\in \\text{Frac}(R)$ does not lie in $R$, where $\\bar{X},\\bar{Y}$ denote the images of $X,Y \\in \\CC[X,Y]$;\n\t\t\\item $x^3 \\in R$, so that $x$ is a root of the monic polynomial $T^3 - x^3 \\in R[T]$; therefore $R$ is not a normal ring.\n\t\\end{compactenum}\n\\end{exercise}\n\n\\begin{remark}\n\tThe rationale behind the previous Exercise comes from the study of singularities. Consider the \\emph{cuspidal curve} $C := \\{ (x,y) \\in \\CC^2: y^2=x^3\\}$. It has an isolated singularity at $(0,0)$ since $\\nabla (y^2-x^3) = (0,0)$ at the origin. On the other hand, it admits a (non-injective) parametrization from $\\CC$ by\n\t\\[ t \\mapsto (x=t^2, y=t^3). \\]\n\tFollowing Milnor \\cite{Mil68}, a nice way to understand the ``cusp'' at $(0,0)$ is to cut $C$ by a $3$-sphere $S := \\{(x,y) \\in \\CC^2: |x|^2 + |y|^2 = \\epsilon \\}$ enclosing the origin, with $0 < \\epsilon \\ll 1$. Writing the parametrization above as $t=re^{i\\theta}$ in polar coordinates, we get the equation $r^4 + r^6 = \\epsilon$ which has a unique positive root $\\rho$. Hence $S \\cap C$ is described by the closed curve $\\theta \\mapsto (\\rho^2 e^{2i\\theta}, \\rho^3 e^{3i\\theta})$ in $S \\simeq \\mathbb{S}^3$. We call $S \\cap C$ the \\emph{link of singularity} at $(0,0)$; in this case it is equivalent to the $(2,3)$-torus knot, i.e. the \\emph{trefoil} shown below.\n\t\n\t\\begin{center}\\vspace{1.5em}\n\t\t% Draw the trefoil knot using the knots library in TikZ\n\t\t\\begin{tikzpicture}\n\t\t\t\\begin{knot}[\n\t\t\t\tconsider self intersections,\n\t\t\t\tflip crossing = 2,\n\t\t\t\tclip width = 7]\n\t\t\t\\strand[ultra thick, black]\n\t\t\t\t(90:2) to[out=180, in=-120, looseness=2]\n\t\t\t\t(-30:2) to[out=60, in=120, looseness=2]\n\t\t\t\t(210:2) to[out=-60, in=0, looseness=2] (90:2);\n\t\t\t\\end{knot}\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\n\tIf we started from a non-singular curve in $\\CC^2$, the result would then be an unknotted $\\mathbb{S}^1$ inside $\\mathbb{S}^3$. In this sense the link provides a measure for the singularity.\n\\end{remark}\n\n\nOur study of normality has to be paused here. We refer to \\cite[\\S\\S 8--9]{Mum99} for a beautiful discussion on the algebro-geometric content of normality, as well as important consequences such as Zariski's Main Theorem. Roughly speaking, normality means there is only one branch through each point of the corresponding variety.\n\n\\section{Nullstellensatz}\nOur aim is to present a generalization of the celebrated \\emph{Nullstellensatz}, which is one of the cornerstones of algebraic geometry. We shall also write the nilpotent radical of a ring $R$ as\n\\[ \\text{nil}(R) := \\sqrt{0_R}. \\]\n
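For instance, in $\\Z/12\\Z$ one has\n\\[ \\text{nil}(\\Z/12\\Z) = \\sqrt{(0)} = 6\\Z/12\\Z, \\]\nsince $6^2 = 36 \\equiv 0 \\pmod{12}$, whereas a nilpotent class must be divisible by both $2$ and $3$.\n\n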
For an ideal $\\mathfrak{a} \\subset R$, we write $\\mathfrak{a}[X]$ for the ideal of $R[X]$ formed by polynomials with all coefficients lying in $\\mathfrak{a}$.\n\n\\begin{definition}\\label{def:Jacobson-ring}\\index{Jacobson ring}\n\tA ring $R$ is called a \\emph{Jacobson ring} if every prime ideal $\\mathfrak{p}$ satisfies\n\t\\begin{gather}\\label{eqn:Hilbert}\n\t\t\\mathfrak{p} = \\bigcap_{\\mathfrak{m}: \\text{maximal ideal } \\supset \\mathfrak{p}} \\mathfrak{m}.\n\t\\end{gather}\n\tEquivalently, we require that the Jacobson radical satisfy $\\text{rad}(R/\\mathfrak{p}) = \\text{nil}(R/\\mathfrak{p}) = \\{0\\}$ for all $\\mathfrak{p}$. Note that $\\supset$ always holds.\n\\end{definition}\nObservations:\n\\begin{compactitem}\n\t\\item Quotients of Jacobson rings are still Jacobson.\n\t\\item Fields are trivially Jacobson.\n\\end{compactitem}\n\n\\begin{exercise}\n\tProve that every principal ideal domain (a domain in which every ideal is generated by one element) with infinitely many maximal ideals is Jacobson.\n\\end{exercise}\n\n\n\\begin{theorem}[E.\\ Snapper]\\label{prop:Snapper}\n\tFor any $R$, the polynomial algebra $R[X]$ satisfies $\\mathrm{rad}(R[X]) = \\mathrm{nil}(R[X])$.\n\\end{theorem}\n\\begin{proof}\n\tThe inclusion $\\text{nil}(R[X]) \\subset \\text{rad}(R[X])$ is automatic, since maximal ideals are prime. To show $\\text{nil}(R[X]) \\supset \\text{rad}(R[X])$, let $f(X) = \\sum_i a_i X^i \\in \\text{rad}(R[X])$; then $1+Xf(X) = 1 + \\sum_i a_i X^{i+1} \\in R[X]^\\times$. For every prime ideal $\\mathfrak{p}$ of $R$, the reduction of $1+Xf(X)$ modulo $\\mathfrak{p}[X]$ remains a unit; since $R/\\mathfrak{p}$ is a domain, the units of $(R/\\mathfrak{p})[X]$ are constants, so $f(X)$ reduces to zero. Hence $a_i \\in \\bigcap \\mathfrak{p} = \\text{nil}(R)$ for all $i$, and $f(X) \\in \\text{nil}(R)[X] \\subset \\text{nil}(R[X])$.\n\\end{proof}\n\n
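For example, take $R = \\Bbbk\\llbracket t\\rrbracket$: here $\\text{rad}(R) = (t)$ is nonzero while $\\text{nil}(R) = \\{0\\}$, yet the theorem forces $\\text{rad}(R[X]) = \\text{nil}(R[X]) = \\{0\\}$. Concretely, $1 + tX$ is not a unit of $R[X]$, hence lies in some maximal ideal $\\mathfrak{m}$, and then $t \\notin \\mathfrak{m}$; thus $t$ escapes the Jacobson radical of $R[X]$ although it lies in that of $R$.\n\n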
\\begin{lemma}\n\tLet $R \\subset A$ be integral domains such that $A$ is a finitely generated $R$-algebra. If $\\mathrm{rad}(R)=\\{0\\}$, then $\\mathrm{rad}(A)=\\{0\\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe may assume that $A$ is generated by a single element $a \\in A$ over $R$. If $a$ is transcendental over $K := \\text{Frac}(R)$, Theorem \\ref{prop:Snapper} above can be applied as $\\mathrm{nil}(R[X]) = \\{0\\}$. Let us assume that $a$ satisfies $f(a)=0$ for some $f(X) = \\sum_{i=0}^n r_i X^i \\in R[X]$ with $r_n \\neq 0$. Let $b \\in \\text{rad}(A)$ and suppose $b \\neq 0$. Hereafter we embed everything into the $K$-algebra $\\text{Frac}(A)$. Since $a$ is algebraic over $K$, so is every element from $R[a]$ or even $K[a] \\subset \\text{Frac}(A)$. Hence $b$ is algebraic over $K$ as well. By clearing denominators, we arrive at $g(b)=0$ for some $g(X) = \\sum_{i=0}^m s_i X^i \\in R[X]$ with the smallest possible degree $m$. Since $A$ is a domain, we have $s_0 \\neq 0$.\n\n\tUsing $\\text{rad}(R) =\\{0\\}$, there exists a maximal ideal $\\mathfrak{m}$ of $R$ such that $r_n s_0 \\not\\in \\mathfrak{m}$. Taking localization at $\\mathfrak{m}$, we get the subring $A' := A \\otimes_R R_{\\mathfrak{m}} \\subset \\text{Frac}(A)$ containing $R_{\\mathfrak{m}}$, and $A'$ is also a finitely generated $R_{\\mathfrak{m}}$-module since we inverted $r_n$. Nakayama's Lemma for $R_{\\mathfrak{m}}$-modules implies $\\mathfrak{m} A' \\subsetneq A'$, thus $\\mathfrak{m} A \\subsetneq A$. Now choose a maximal ideal $\\mathfrak{m}_A$ of $A$ over $\\mathfrak{m}$. We must have $\\mathfrak{m}_A \\cap R = \\mathfrak{m}$, which entails $s_0 \\not\\in \\mathfrak{m}_A$. This is a contradiction since $s_0 = -\\sum_{i=1}^m s_i b^i \\in \\text{rad}(A)$.\n\\end{proof}\n\n\\begin{theorem}[Nullstellensatz]\\label{prop:Nullstellensatz-gen}\\index{Nullstellensatz}\n\tLet $A$ be a finitely generated $R$-algebra. Assume that $R$ is a Jacobson ring; then the following statements hold.\n\t\\begin{enumerate}[(i)]\n\t\t\\item $A$ is a Jacobson ring.\n\t\t\\item Let $\\mathfrak{n} \\in \\Spec(A)$ be maximal; then its image $\\mathfrak{m} \\in \\Spec(R)$ is maximal as well, and $A/\\mathfrak{n}$ is a finite extension of the field $R/\\mathfrak{m}$.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\tWe start with (i). One may assume $R \\subset A$ from the outset. Condition \\eqref{eqn:Hilbert} for $A$ amounts to $\\text{rad}(A/\\mathfrak{p})=0$ for every $\\mathfrak{p} \\in \\Spec(A)$. Apply the previous Lemma to the integral domains $R/R \\cap \\mathfrak{p} \\subset A/\\mathfrak{p}$ to prove (i).\n\t\n\tNow turn to (ii). Using (i) and induction, we may assume that $A=R[a]$ for some $a \\in A$. By considering the homomorphism $R/\\mathfrak{m} \\hookrightarrow A/\\mathfrak{n}$ between Jacobson rings, we may further reduce to the case $\\mathfrak{n}=\\{0\\}$ and $\\mathfrak{m}=\\{0\\}$, so that $R$ is a domain embedded in the field $A$. In particular $a$ cannot be transcendental (as $R[X]$ is not a field) and must satisfy $\\sum_{i=0}^n c_i a^i = 0$ for some $c_0, \\ldots, c_n \\in R$ with $c_n \\neq 0$. Let $\\mathfrak{k}$ be any maximal ideal of $R$ not containing $c_n$, which exists since $\\text{rad}(R) = \\{0\\}$.\n\t\n\tAs in the proof of the previous Lemma, $a$ becomes integral over $R_{\\mathfrak{k}}$ and Nakayama's Lemma for $R_{\\mathfrak{k}}$-modules entails $\\mathfrak{k}A \\neq A$, hence $\\mathfrak{k}=0$ because $A$ is a field. This implies that $R$ is a field and $A$ is a finite extension of $R$.\n\\end{proof}\n\n\\begin{corollary}\n\tLet $\\Bbbk$ be an algebraically closed field, and $A := \\Bbbk[X_1, \\ldots, X_n]$. The maximal ideals of $A$ are in bijection with $\\Bbbk^n$ by attaching to each $x := (x_1, \\ldots, x_n) \\in \\Bbbk^n$ the ideal\n\t\\[ \\mathfrak{m}_x = \\{f \\in A : f(x)=0 \\} = (X_1 - x_1, \\ldots, X_n - x_n). \\]\n\\end{corollary}\n\\begin{proof}\n\tSince $A/\\mathfrak{m}_x \\rightiso \\Bbbk$ by evaluation at $x$, we see $\\mathfrak{m}_x$ is indeed maximal. It is routine to show that $x = y \\iff \\mathfrak{m}_x = \\mathfrak{m}_y$. It remains to show that every maximal ideal $\\mathfrak{n}$ equals some $\\mathfrak{m}_x$. Indeed, Theorem \\ref{prop:Nullstellensatz-gen} implies the field $A/\\mathfrak{n}$ is algebraic over $\\Bbbk$, hence $A/\\mathfrak{n} \\simeq \\Bbbk$ as $\\Bbbk$-algebras. Let $x_i$ be the image of $X_i$ under $A \\twoheadrightarrow A/\\mathfrak{n} \\rightiso \\Bbbk$ and set $x := (x_1, \\ldots, x_n)$; then $\\mathfrak{n} \\supset \\mathfrak{m}_x$, hence $\\mathfrak{n} = \\mathfrak{m}_x$ by the maximality of $\\mathfrak{m}_x$.\n\\end{proof}\n\n
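For instance, with $A = \\CC[X]$ and $\\mathfrak{a} = (X^2)$, the common zero locus of $\\mathfrak{a}$ in $\\CC$ is $\\{0\\}$, and the polynomials vanishing at $0$ form the ideal $(X) = \\sqrt{(X^2)}$; this is the simplest instance of the relation $IZ(\\mathfrak{a}) = \\sqrt{\\mathfrak{a}}$ established below.\n\n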
\\begin{corollary}\n\tKeep the notations above and set\n\t\\begin{align*}\n\t\tZ(\\mathfrak{a}) & := \\{x \\in \\Bbbk^n: \\forall f \\in \\mathfrak{a}, \\; f(x)=0 \\}, \\\\\n\t\tI(\\mathcal{X}) & := \\{f \\in A: \\forall x \\in \\mathcal{X}, \\; f(x)=0 \\}\n\t\\end{align*}\n\tfor ideals $\\mathfrak{a} \\subset A$ and subsets $\\mathcal{X} \\subset \\Bbbk^n$, then we have $IZ(\\mathfrak{a}) = \\sqrt{\\mathfrak{a}}$ for all $\\mathfrak{a}$.\n\\end{corollary}\nIf we identify $\\Bbbk^n$ with $\\MaxSpec(A)$ by the previous Corollary, then $Z(\\mathfrak{a})$ is just the intersection of $V(\\mathfrak{a}) \\subset \\Spec(A)$ and $\\MaxSpec(A)$. Details are left to the readers.\n\\begin{proof}\n\tIf $f \\in A$ and $f^n \\in \\mathfrak{a}$ for some $n$, the vanishing of $f^n$ on $Z(\\mathfrak{a})$ will entail that of $f$, hence the inclusion $\\supset$ holds. Assume conversely that $f \\in A$ vanishes on $Z(\\mathfrak{a})$. This means: for every maximal ideal $\\mathfrak{m}_x$ we have\n\t\\[ \\mathfrak{m}_x \\supset \\mathfrak{a} \\iff x \\in Z(\\mathfrak{a}) \\implies f(x)=0 \\iff f \\in \\mathfrak{m}_x. \\]\n\tHence\n\t\\[ f \\in \\bigcap_{\\mathfrak{m}_x \\supset \\mathfrak{a}} \\mathfrak{m}_x = \\sqrt{\\mathfrak{a}}, \\]\n\tthe last equality being based on Definition \\ref{def:Jacobson-ring} since $A/\\mathfrak{a}$ is a Jacobson ring. This proves the inclusion $\\subset$.\n\\end{proof}\n\n\\begin{remark}\n\tHow about $Z I(\\mathcal{X})$? Unwinding definitions, it is seen to equal the set of points that ``satisfy the algebraic equations that $\\mathcal{X}$ satisfies.'' The Zariski topology on $\\Bbbk^n$ is defined by stipulating the subsets $\\{ x \\in \\Bbbk^n: f(x)=0 \\}$ to be closed, for all $f \\in A$, so we obtain $ZI(\\mathcal{X}) = \\bar{\\mathcal{X}}$, the Zariski-closure of $\\mathcal{X}$. The reader is invited to verify that by identifying $\\Bbbk^n$ with $\\MaxSpec(A)$, the foregoing topology is induced from the Zariski topology on the prime spectrum $\\Spec(A)$.\n\\end{remark}\n\nAn (algebraic, closed) subvariety of $\\Bbbk^n$ is the vanishing locus $f_1 = \\cdots = f_m = 0$ for some $f_1, \\ldots, f_m \\in \\Bbbk[X_1, \\ldots, X_n]$; it is determined by the ideal $\\mathfrak{a} = (f_1, \\ldots, f_m)$, in fact it depends only on $\\sqrt{\\mathfrak{a}}$. An ideal $\\mathfrak{a}$ is called \\emph{radical} if $\\sqrt{\\mathfrak{a}}=\\mathfrak{a}$. To recap, we obtain a dictionary:\n\\begin{center}\\begin{tabular}{c|c}\n\tSubvariety $\\mathcal{X}$ in $\\Bbbk^n$ & Radical ideal $\\mathfrak{a}$ in $\\Bbbk[X_1, \\ldots, X_n]$ \\\\\n\tPoints of $\\mathcal{X}$ & $\\MaxSpec(A)$, \\;$A = A_{\\mathcal{X}} := \\Bbbk[X_1, \\ldots, X_n]/\\mathfrak{a}$ \\\\\n\tUnion of two varieties & product or intersection of two ideals \\\\\n\tIntersection of varieties & sum of ideals \\\\\n\tPolynomial map $\\mathcal{X} \\to \\mathcal{Y}$ & $\\Bbbk$-homomorphism $A_{\\mathcal{Y}} \\to A_{\\mathcal{X}}$ \\\\\n\t\\vdots & \\vdots\n\\end{tabular}\\end{center}\n\nIf one allows arbitrary rings $A$ instead of just $\\Bbbk[X_1, \\ldots, X_n]/\\mathfrak{a}$, and considers $\\Spec(A)$ instead of $\\MaxSpec(A)$ (the latter is well-behaved only for Jacobson rings), the result is the category of \\emph{affine schemes}. A proper treatment of these ideas should be left to the Algebraic Geometry course, if it exists\\dots\n\n\\section{Flatness: the first glance}\nTo begin with, we consider a ring $A$ and a module $N$. It has been observed that the additive functor $N \\dotimes{A} -: A\\dcate{Mod} \\to A\\dcate{Mod}$ is \\emph{right exact}, namely it preserves the exactness of sequences like\n\\[ \\bullet \\to \\bullet \\to \\bullet \\to 0. \\]\nIf we consider exactness of sequences like $0 \\to \\bullet \\to \\bullet \\to \\bullet$, the corresponding notion is \\emph{left exactness}. The same applies to any additive functor $F$ instead of $N \\dotimes{A} -$. Being both left and right exact is equivalent to requiring that $F$ preserve all exact sequences; in this case we say $F$ is an \\emph{exact functor}.\n\n
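The failure of left exactness is easy to witness: applying $(\\Z/2\\Z) \\dotimes{\\Z} -$ to the exact sequence $0 \\to \\Z \\xrightarrow{\\times 2} \\Z$ yields the map $\\Z/2\\Z \\xrightarrow{\\times 2 = 0} \\Z/2\\Z$, which is no longer injective; compare the exercise on $\\Z/n\\Z$ below.\n\n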
\\begin{definition}\\index{flat}\\index{faithfully flat}\n\tWe say $N$ is a \\emph{flat} $A$-module if $N \\dotimes{A} -$ is exact. We say $N$ is \\emph{faithfully flat} if for every sequence $M_\\bullet = [\\cdots \\to M_i \\to M_{i-1} \\to \\cdots]$ of $A$-modules, we have that $M_\\bullet \\dotimes{A} N$ is exact if and only if $M_\\bullet$ is.\n\\end{definition}\n\n\\begin{remark}\n\tIn view of the right-exactness of $\\otimes$, to verify flatness of $N$ it suffices to check that $N \\dotimes{A} -$ preserves kernels.\n\\end{remark}\n\nNow we consider a ring homomorphism $A \\to B$, which makes $B$ into an $A$-algebra. The tensor product now gives an additive functor, often called the \\emph{base change}:\n\\[ B \\dotimes{A} -: A\\dcate{Mod} \\to B\\dcate{Mod}. \\]\nThus we can also talk about flatness and faithful flatness of $B$ over $A$. Since $B$ is naturally an $A$-module, this notion is compatible with the previous one.\n\\begin{example}\\label{eg:localization-flatness}\n\tLet $S$ be a multiplicative subset of $A$; then $A[S^{-1}]$ is flat over $A$. It is not faithfully flat in general, however; see Theorem \\ref{prop:faithfully-flat-criterion}.\n\\end{example}\n\n\\begin{example}\n\tA routine fact is that for any family $\\left( M^{(i)}_\\bullet \\right)_{i \\in I}$ of complexes of $A$-modules, we have\n\t\\[ \\forall i \\in I, \\; M^{(i)}_\\bullet \\text{ is exact} \\iff \\bigoplus_{i \\in I} M^{(i)}_\\bullet \\text{ is exact}. \\]\n\tRecall that $\\otimes$ preserves direct sums. It follows that a direct sum of modules is flat if and only if each summand is flat. From this we deduce the flatness of free modules since $A \\dotimes{A} M \\simeq M$ functorially for each $M$. Furthermore, projective modules are flat as they are direct summands of free modules.\n\\end{example}\n\n\\begin{exercise}\n\tShow that $\\Z/n\\Z$ is not flat over $\\Z$ for $n > 1$.\n\\end{exercise}\n\nWe list some basic properties below.\n\\begin{description}\n\t\\item[Tensor products] If $M,N$ are flat (resp. faithfully flat) $R$-modules, then so is $M \\dotimes{R} N$. This follows from the associativity constraint of tensor products: $(- \\otimes M) \\otimes N \\simeq - \\otimes (M \\otimes N)$.\n\t\\item[Transitivity] Given ring homomorphisms $A \\to B \\to C$, if $B$ is flat (resp. faithfully flat) over $A$ and $C$ is flat (resp. faithfully flat) over $B$, then $C$ is also flat (resp. faithfully flat) over $A$. This follows from the transitivity of base change, namely there is an isomorphism of functors $A\\dcate{Mod} \\to C\\dcate{Mod}$\n\t\\[ (- \\dotimes{A} B) \\dotimes{B} C \\rightiso - \\dotimes{A} C. \\]\n\t\\item[Base change] Suppose $N$ is a flat (resp. faithfully flat) $A$-module and $B$ is any $A$-algebra; then $N \\dotimes{A} B$ is a flat (resp. faithfully flat) $B$-module. Again, any $B$-module $M$ can be viewed as an $A$-module, and there is a functorial isomorphism\n\t\\[ (N \\dotimes{A} B) \\dotimes{B} M \\rightiso N \\dotimes{A} M. \\]\n\\end{description}\n\n\\begin{remark}\\label{rem:exactness-local}\n\tA sequence $[ \\cdots \\to M_i \\xrightarrow{d_i} M_{i-1} \\to \\cdots]$ of $R$-modules is a complex (resp. exact) if and only if so is its localization at $\\mathfrak{m}$, for every maximal ideal $\\mathfrak{m}$. Indeed:\n\t\\begin{compactitem}\n\t\t\\item $(M_\\bullet, d_\\bullet)$ is a complex if and only if $\\Image(d_{i-1}d_i)=0$ for all $i$. Since localization is an exact functor, it preserves images, and we know a module $N$ is zero if and only if $N_{\\mathfrak{m}}=0$ for all $\\mathfrak{m}$.\n\t\t\\item A complex $(M_\\bullet, d_\\bullet)$ is exact if and only if $\\Hm_i(M_\\bullet)=0$ for all $i$. 
The same reasoning applies since localization preserves $\\Hm_i$.\n\t\\end{compactitem}\n\\end{remark}\n\n\\begin{proposition}\n\tThe following are equivalent for an $R$-module $N$.\n\t\\begin{inparaenum}[(i)]\n\t\t\\item $N$ is flat over $R$,\n\t\t\\item $N_{\\mathfrak{p}}$ is flat over $R_{\\mathfrak{p}}$ for all prime ideals $\\mathfrak{p}$,\n\t\t\\item $N_{\\mathfrak{m}}$ is flat over $R_{\\mathfrak{m}}$ for all maximal ideals $\\mathfrak{m}$.\n\t\\end{inparaenum}\n\\end{proposition}\n\\begin{proof}\n\tThis is a direct consequence of the exactness of localization and Remark \\ref{rem:exactness-local}.\n\\end{proof}\n\n\\begin{lemma}\\label{prop:flat-ring-localizations}\n\tLet $\\varphi: R \\to R'$ be a ring homomorphism, and suppose $\\mathfrak{p}' \\in \\Spec(R')$ maps to $\\mathfrak{p} \\in \\Spec(R)$ under $\\varphi^\\sharp$. If $\\varphi$ is flat, so is the induced homomorphism $R_{\\mathfrak{p}} \\to R'_{\\mathfrak{p}'}$.\n\\end{lemma}\n\\begin{proof}\n\tSet $S = R \\smallsetminus \\mathfrak{p}$ so that $\\varphi(S) \\subset R' \\smallsetminus \\mathfrak{p}'$. Factorize $R_{\\mathfrak{p}} \\to R'_{\\mathfrak{p}'}$ as\n\t\\[ R_{\\mathfrak{p}} \\to \\underbracket{R'[\\varphi(S)^{-1}]}_{\\text{as a ring}} \\to R'_{\\mathfrak{p}'}. \\]\n\tThe first arrow is also the base-change to $R_{\\mathfrak{p}}$ of $\\varphi$ (as a homomorphism of $R$-modules), whereas the second one is a localization of $R'$. Their composite is therefore flat.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:flatness-localized}\n\tLet $\\varphi: R \\to R'$ be a ring homomorphism. The following are equivalent:\n\t\\begin{enumerate}[(i)]\n\t\t\\item $R'$ is flat over $R$,\n\t\t\\item $R'_{\\mathfrak{p}'}$ is flat over $R_{\\mathfrak{p}}$ for all $\\mathfrak{p}' \\in \\Spec(R')$ with $\\mathfrak{p} = \\varphi^\\sharp(\\mathfrak{p}')$;\n\t\t\\item \\emph{Idem}, but for $\\mathfrak{p}' \\in \\MaxSpec(R')$.\n\t\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n\t(i) $\\implies$ (ii): Set $S := R \\smallsetminus \\mathfrak{p}$. Base change implies $R'[S^{-1}]$ is flat over $R[S^{-1}] = R_{\\mathfrak{p}}$. Since $R'_{\\mathfrak{p}'}$ is a localization of $R'[S^{-1}]$ (exercise), we conclude by transitivity.\n\t\n\t(ii) $\\implies$ (iii): Trivial.\n\t\n\t(iii) $\\implies$ (i): By Remark \\ref{rem:exactness-local} applied to complexes of $R'$-modules, it suffices to show the exactness of the functor $- \\dotimes{R} R'_{\\mathfrak{p}'}$ for all $\\mathfrak{p}' \\in \\MaxSpec(R')$. In view of Lemma \\ref{prop:flat-ring-localizations}, the factorization of $R \\to R'_{\\mathfrak{p}'}$ into $R \\to R_{\\mathfrak{p}} \\to R'_{\\mathfrak{p}'}$ and the flatness of $R \\to R_{\\mathfrak{p}}$ show that $R'_{\\mathfrak{p}'}$ is indeed flat over $R$.\n\\end{proof}\n\n\\begin{lemma}[Equational criterion of flatness]\\label{prop:equational-flatness}\n\tAn $R$-module $N$ is flat if and only if for all $r \\geq 1$, $a_1, \\ldots, a_r \\in R$, $x_1, \\ldots, x_r \\in N$ verifying $\\sum_{i=1}^r a_i x_i = 0$, there exist $s \\in \\Z_{\\geq 1}$, an $R$-valued matrix $B = (b_{ij})_{\\substack{1 \\leq i \\leq r \\\\ 1 \\leq j \\leq s }}$ and $y_1, \\ldots, y_s \\in N$ such that\n\t\\[ \\begin{pmatrix} x_1 \\\\ \\vdots \\\\ x_r \\end{pmatrix} = B \\begin{pmatrix} y_1 \\\\ \\vdots \\\\ y_s \\end{pmatrix}, \\quad \\begin{pmatrix} a_1 & \\cdots & a_r \\end{pmatrix} B = 0. \\]\n\\end{lemma}\n\\begin{proof}\n\tSuppose $N$ is flat and consider the exact sequence $0 \\to \\Ker(f) \\to R^{\\oplus r} \\xrightarrow{f} R$ where $f(t_1, \\ldots, t_r) = \\sum_i a_i t_i$. 
We obtain an exact sequence\n\t\\[ 0 \\to \\Ker(f) \\dotimes{R} N \\to N^{\\oplus r} \\xrightarrow{(x_i)_i \\mapsto \\sum_i a_i x_i} N. \\]\n\tThus if $(x_1, \\ldots, x_r) \\mapsto 0$ under the arrow above, we can express it as $\\sum_{j=1}^s (b_{1j}, \\ldots, b_{rj}) \\otimes y_j \\in \\Ker(f) \\dotimes{R} N$, as required.\n\t\n\tTo show the converse, we invoke the fact that flatness is equivalent to the injectivity of $\\mathfrak{a} \\otimes N \\to \\mathfrak{a} N$ for all finitely generated ideals $\\mathfrak{a}$. See Proposition \\ref{prop:flat-module}.\n\\end{proof}\n\nIt will be useful to rephrase the condition in Lemma \\ref{prop:equational-flatness} as follows: for every\n\\begin{compactitem}\n\t\\item homomorphism $x: R^{\\oplus r} \\to N$, where $r \\in \\Z_{\\geq 1}$, and\n\t\\item submodule $K \\subset \\Ker(x)$ generated by one element,\n\\end{compactitem}\nthere exist some $s \\in \\Z_{\\geq 1}$ and a commutative diagram\n\\[\\begin{tikzcd}[column sep=small]\n\tR^{\\oplus r} \\arrow[rd, \"x\"'] \\arrow[rr, \"B\"] & & R^{\\oplus s} \\arrow[ld, \"y\"] \\\\\n\t& N &\n\\end{tikzcd} \\quad \\text{s.t. } K \\subset \\Ker(B). \\]\nIndeed, $x$ (resp. $y$) corresponds to $(x_1, \\ldots, x_r)$ via $x: (t_1, \\ldots, t_r) \\mapsto \\sum_i t_i x_i$ (resp. $y: (u_1, \\ldots, u_s) \\mapsto \\sum_j u_j y_j$), and $K \\subset \\Ker(x)$ corresponds to $R \\cdot (a_1, \\ldots, a_r)$. The homomorphism $B$ corresponds to the matrix $(b_{ij})_{i,j}$.\n\n\\begin{remark}\\label{rem:equational-flatness-ext}\n\tFor flat $N$, the equational criterion so rephrased is applicable to any finitely generated $K \\subset \\Ker(x)$. Indeed, one may iterate the construction for $R^{\\oplus s} \\xrightarrow{y} N$, etc. to make every generator of $K$ map to $0$.\n\\end{remark}\n\n\\section{Structure of flat modules}\nRecall from homological algebra that the right exact functor $N \\dotimes{R} -: R\\dcate{Mod} \\to R\\dcate{Mod}$ has $(\\Tor_i^R(N, -))_{i \\geq 0}$ as its left derived functors.\n\\begin{proposition}\\label{prop:flat-module}\n\tThe following are equivalent for an $R$-module $N$:\n\t\\begin{enumerate}[(i)]\n\t\t\\item $N$ is flat,\n\t\t\\item $\\Tor_i^R(N,-)=0$ for all $i > 0$,\n\t\t\\item $\\Tor_1^R(N,-)=0$,\n\t\t\\item $\\Tor_1^R(N, R/\\mathfrak{a})=0$ for all finitely generated ideals $\\mathfrak{a}$, or equivalently $\\mathfrak{a} \\dotimes{R} N \\to N$ is injective.\n\t\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n\tFirst, the equivalence mentioned in (iv) is a consequence of the exact sequence\n\t\\[ \\underbracket{\\Tor_1^R(N, R)}_{=0} \\to \\Tor_1^R(N, R/\\mathfrak{a}) \\to N \\dotimes{R} \\mathfrak{a} \\to N \\to N \\dotimes{R} (R/\\mathfrak{a}) \\to 0 \\]\n\tdeduced from $0 \\to \\mathfrak{a} \\to R \\to R/\\mathfrak{a} \\to 0$.\n\n\tClearly (i) $\\implies$ (ii) $\\implies$ (iii) $\\implies$ (iv). To show (iii) $\\implies$ (i), note that if $0 \\to M' \\to M \\to M'' \\to 0$ is exact, then $\\Tor_1^R(N,M'') \\to N \\otimes M' \\to N \\otimes M \\to N \\otimes M'' \\to 0$ is exact.\n\t\n\tWe show (iv) $\\implies$ (i), (ii) or (iii) as follows: $\\Tor_1^R(N,M)=0$ can be tested for finitely generated $M$ only, since $\\Tor^R_i(N,-)$ preserves filtered inductive limits such as\n\t\\[ M = \\varinjlim \\left\\{ \\text{f.g. submodules} \\right\\}. \\]\n\tWe do induction on the minimal number $n$ of generators of $M$. 
If $M = Rx_1 + \\cdots + Rx_n$, put $M' := \\sum_{i < n} Rx_i$ so that we have a short exact sequence $0 \\to M' \\to M \\to M'' \\to 0$ where $M''$ is generated by the image of $x_n$, hence isomorphic to some $R/\\mathfrak{a}$. Since $\\Tor_1^R(N,M') \\to \\Tor_1^R(N, M) \\to \\Tor_1^R(N, M'')$ is exact, we are reduced to the $n=1$ case, i.e. $M = R/\\mathfrak{a}$. It boils down to ensuring $\\mathfrak{a} \\otimes N \\hookrightarrow N$. Again, using the exactness of filtered $\\varinjlim$ and the fact that $\\otimes$ respects $\\varinjlim$, it suffices to test this on finitely generated $\\mathfrak{a}$.\n\t\n\tFor a down-to-earth approach, see \\cite[\\S 6.3]{Eis95}.\n\\end{proof}\n\n\\begin{corollary}\n\tIf $r \\in R$ is not a zero divisor, then $r$ is not a zero divisor on any flat $R$-module $N$.\n\\end{corollary}\n\\begin{proof}\n\tTake $\\mathfrak{a} := Rr$, which is $\\simeq R$, and contemplate $N \\simeq \\mathfrak{a} \\dotimes{R} N \\hookrightarrow N$.\n\\end{proof}\n\n\\begin{exercise}\n\tSuppose $R$ is a principal ideal domain. Show that $N$ is flat if and only if $N$ is torsion-free, i.e. $tx = 0$ with $t \\in R \\smallsetminus \\{0\\}$ implies $x = 0$. Hint: the ideals take the form $\\mathfrak{a} = (t)$, so the condition (iv) amounts precisely to this.\n\\end{exercise}\n\n\\begin{exercise}\n\tFor every field $\\Bbbk$, show that\n\t\\begin{compactenum}[(i)]\n\t\t\\item $\\Bbbk[X,Y]/(XY-X)$ is not flat over $\\Bbbk[X]$,\n\t\t\\item $\\Bbbk\\llbracket t \\rrbracket[Y,Z]/(YZ-t)$ is flat over $\\Bbbk\\llbracket t\\rrbracket$.\n\t\\end{compactenum}\n\\end{exercise}\n\n\\begin{lemma}\n\tSuppose $A \\to B$ is flat. Write $M_B := B \\dotimes{A} M$ for any $A$-module $M$, and similarly for $N_B$. Then there are natural isomorphisms $\\Tor_i^B(M_B, N_B) \\simeq B \\dotimes{A} \\Tor_i^A(M,N)$ for all $i$.\n\\end{lemma}\n\\begin{proof}\n\tTake a projective resolution $0 \\leftarrow M \\leftarrow P_\\bullet$ of $A$-modules. Since $B$ is flat over $A$, its base-change $0 \\leftarrow M_B \\leftarrow P_{\\bullet,B}$ to $B$ is still a projective resolution; we are using the fact that base-change preserves projectivity. Hence $\\Hm_i(P_{\\bullet,B} \\dotimes{B} N_B)$ computes $\\Tor_i^B(M_B, N_B)$. On the other hand, by flatness the homology groups are equal to $\\Hm_i(P_\\bullet \\dotimes{A} N) \\dotimes{A} B$, that is, $\\Tor_i^A(M,N) \\dotimes{A} B$. We leave it to the reader to convince him- or herself that the isomorphism so constructed is natural.\n\t\n\tAnother way is to use the associativity and commutativity of tensor products on the derived level, namely the flatness of $A \\to B$ implies\n\t\\[ M_B \\otimesL_B N_B \\simeq (M \\otimesL_A B) \\otimesL_B (N \\otimesL_A B) \\simeq (M \\otimesL_A N) \\otimesL_A B \\]\n\tin the derived categories, and it remains to take $\\Hm_i$.\n\\end{proof}\nIn particular, let $S \\subset R$ be a multiplicative subset. By Example \\ref{eg:localization-flatness} we infer that\n\\[ \\Tor_i^{R[S^{-1}]}\\left( M[S^{-1}], N[S^{-1}] \\right) \\simeq \\Tor_i^R(M,N)[S^{-1}]. \\]\n\n
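As a worked example: for $R = \\Z$ and $m, n \\geq 1$, the free resolution $0 \\to \\Z \\xrightarrow{\\times m} \\Z \\to \\Z/m\\Z \\to 0$ gives, after applying $- \\dotimes{\\Z} \\Z/n\\Z$ and taking homology,\n\\[ \\Tor_1^{\\Z}(\\Z/m\\Z, \\Z/n\\Z) \\simeq \\Ker\\left( \\Z/n\\Z \\xrightarrow{\\times m} \\Z/n\\Z \\right) \\simeq \\Z/\\gcd(m,n)\\Z, \\]\nwhich is nonzero whenever $\\gcd(m,n) > 1$; by Proposition \\ref{prop:flat-module} this certifies again that $\\Z/m\\Z$ is not flat over $\\Z$ for $m > 1$ (take $n = m$).\n\n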
\\begin{theorem}\\label{prop:free-proj}\n\tLet $R$ be a local ring with maximal ideal $\\mathfrak{m}$. Let $M$ be a finitely generated $R$-module. The following are equivalent:\n\t\\begin{compactitem}\n\t\t\\item $M$ is free,\n\t\t\\item $M$ is projective.\n\t\\end{compactitem}\n\tIf we assume moreover that $M$ is finitely presented, both conditions are equivalent to the flatness of $M$.\n\\end{theorem}\n\\begin{proof}\n\tFree modules are known to be projective. Now let $M$ be finitely generated projective, and take a basis $\\bar{x}_1, \\ldots, \\bar{x}_n$ of the $R/\\mathfrak{m}$-vector space $M/\\mathfrak{m}M$, together with liftings $M \\ni x_i \\mapsto \\bar{x}_i$. Nakayama's Lemma then implies the surjectivity of\n\t\\begin{align*}\n\t\t\\Phi: R^{\\oplus n} & \\longrightarrow M \\\\\n\t\t(a_1, \\ldots, a_n) & \\longmapsto a_1 x_1 + \\cdots + a_n x_n.\n\t\\end{align*}\n\tAs $M$ is projective, $\\Phi$ admits a section so that we may identify $M$ with a direct summand of $R^{\\oplus n}$, namely $M \\oplus N = R^{\\oplus n}$ for some $N$. Taking $- \\otimes_R R/\\mathfrak{m}$ leads to\n\t\\[ (R/\\mathfrak{m})^{\\oplus n} = M/\\mathfrak{m}M \\oplus N/\\mathfrak{m}N \\quad \\text{as vector spaces over } R/\\mathfrak{m}, \\]\n\tand by comparing dimensions we see $N/\\mathfrak{m}N = \\{0\\}$, which in turn gives $N = \\{0\\}$ by Nakayama's Lemma ($N$ is finitely generated since $R^{\\oplus n} \\twoheadrightarrow N$). Hence $M \\simeq R^{\\oplus n}$ is free.\n\t\n\tNow turn to the second assertion. Projective modules are flat since they are direct summands of free modules, and it remains to show that every flat $M$ with finite presentation $R^{\\oplus q} \\to R^{\\oplus r} \\xrightarrow{x} M \\to 0$ is a direct summand of a free module. Let's plug $x: R^{\\oplus r} \\twoheadrightarrow M$ and $K := \\Ker(x)$ into the equational criterion of flatness (Lemma \\ref{prop:equational-flatness}), rephrased as in Remark \\ref{rem:equational-flatness-ext}. Let $N$ be the image of $B: R^{\\oplus r} \\to R^{\\oplus s}$. One readily sees that $y$ induces $N \\rightiso M$. This furnishes a section $M \\to R^{\\oplus s}$ of $y$, exhibiting $M$ as a direct summand of $R^{\\oplus s}$.\n\\end{proof}\n\nWe deduce the following result characterizing finitely presented projective modules: in geometric language, they correspond to \\emph{vector bundles} over the \\emph{affine scheme} $\\Spec(R)$.\n\\begin{corollary}\n\tLet $M$ be a finitely presented $R$-module. Then $M$ is projective if and only if $M_{\\mathfrak{m}}$ is free for every maximal ideal $\\mathfrak{m}$.\n\\end{corollary}\n\\begin{proof}\n\tIn view of Theorem \\ref{prop:free-proj}, it suffices to show $M$~is projective if and only if $M_{\\mathfrak{m}}$ is for every maximal ideal $\\mathfrak{m}$. One direction is easy: if $M$ is a direct summand of a free module, then so is $M_{\\mathfrak{m}}$.\n\n\tConversely, the assumption on finite presentation entails an isomorphism between additive functors $R\\dcate{Mod} \\to R_{\\mathfrak{m}}\\dcate{Mod}$\n\t\\[ \\Hom_R(M, -) \\dotimes{R} R_{\\mathfrak{m}} \\simeq \\Hom_{R_{\\mathfrak{m}}}(M_{\\mathfrak{m}}, (-)_{\\mathfrak{m}}). \\]\n\tHence if $M_{\\mathfrak{m}}$ is projective for all $\\mathfrak{m}$, then $M$ is projective, by Remark \\ref{rem:exactness-local} and the exactness of localizations.\n\\end{proof}\n\nNote that for finitely presented $M$, the equivalence between projectivity and flatness holds over any ring $R$. The arguments are verbatim, and this can also be deduced from the local case.\n\nWe close this section with a stronger result, whose proof we refer to \\cite[Theorem A6.6]{Eis95}.\n\\begin{theorem}[Govorov--Lazard]\n\tAn $R$-module is flat if and only if it is a filtered $\\varinjlim$ of free $R$-modules.\n\\end{theorem}\n\n
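For instance, $\\Q = \\bigcup_{n \\geq 1} \\frac{1}{n!}\\Z$ is a filtered union of free $\\Z$-modules of rank one, hence flat over $\\Z$; on the other hand $\\Q$ is not projective, since projective $\\Z$-modules are free. Thus flat modules need not be projective without finiteness hypotheses.\n\n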
\\section{Faithful flatness and surjectivity}\nWe begin by contemplating the definition of faithful flatness. Recall that $R\\dcate{Mod}$ is the template of \\emph{abelian categories}, in which one can talk about the zero object $0$, direct sums, kernels, cokernels, images and exactness.\n\nWe say a functor between categories is \\emph{faithful} if it induces injections on $\\Hom$-sets. A functor between additive categories is called \\emph{additive} if it induces group homomorphisms between $\\Hom$-sets. An additive functor between abelian categories is \\emph{exact} if it preserves all exact sequences. Exact functors preserve kernels, cokernels and images.\n\n\\begin{lemma}\\label{prop:faithful-functor} \\index{faithfully flat}\\index{faithful functor}\n\tLet $F: \\mathcal{C} \\to \\mathcal{C}'$ be an additive functor between abelian categories. The following are equivalent.\n\t\\begin{enumerate}[(i)]\n\t\t\\item $F$ is exact and faithful;\n\t\t\\item $F$ is exact and $(FM=0 \\iff M=0)$ for every object $M$ of $\\mathcal{C}$;\n\t\t\\item a sequence $M' \\to M \\to M''$ is exact in $\\mathcal{C}$ if and only if $FM' \\to FM \\to FM''$ is exact in $\\mathcal{C}'$.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\t(i) $\\implies$ (ii): If $FM=0$ then $F(\\identity_M)=\\identity_{FM}=0$, and the faithfulness implies $\\identity_M=0$ in $\\End_{\\mathcal{C}}(M)$; this is possible only when $M=0$.\n\n\t(ii) $\\implies$ (i): Suppose $u: N \\to M$ is mapped to $0$ under $F$. Then we have $F(\\Image(u))=0$, thereby $\\Image(u)=0$ and $u=0$.\n\n\t(i) $\\implies$ (iii): The ``only if'' direction is just the exactness of $F$. Conversely, suppose $M' \\xrightarrow{u} M \\xrightarrow{v} M''$ induces an exact sequence $FM' \\xrightarrow{Fu} FM \\xrightarrow{Fv} FM''$. From $F(vu) = F(v)F(u) = 0$ we get $vu=0$. Thus it makes sense to define $C := \\Ker(v)/\\Image(u)$. One has an exact sequence\n\t\\[ \\Image(u) \\to \\Ker(v) \\to C \\to 0. \\]\n\tSince $F$ is exact, we deduce $FC=0$, which implies $C=0$ by (i) $\\implies$ (ii).\n\n\t(iii) $\\implies$ (i): Suppose $u: N \\to M$ is mapped to $0$ under $F$. Consider $v: M \\to \\Coker(u)$. Since $F$ preserves exact sequences and $Fv$ is an isomorphism, we see $v$ is also an isomorphism, therefore $u=0$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:faithful-flatness-crit}\n\tThe following are equivalent for an $R$-module $N$.\n\t\\begin{compactenum}[(i)]\n\t\t\\item $N$ is faithfully flat.\n\t\t\\item The functor $- \\dotimes{R} N$ is exact and faithful.\n\t\t\\item $N$ is flat and for every maximal ideal $\\mathfrak{m}$ of $R$, we have $N/\\mathfrak{m}N \\simeq N \\dotimes{R} R/\\mathfrak{m} \\neq 0$.\n\t\\end{compactenum}\n\\end{proposition}\n\\begin{proof}\n\tThe equivalence (i) $\\iff$ (ii) has just been established. An $R$-module $M$ is nonzero if and only if there exist exact sequences\n\t\\[ 0 \\to R/\\mathfrak{a} \\to M, \\quad R/\\mathfrak{a} \\to R/\\mathfrak{m} \\to 0 \\]\n\twhere $\\mathfrak{a}$ is a proper ideal and $\\mathfrak{m}$ is a maximal over-ideal of $\\mathfrak{a}$. Next, let's show (iii) $\\implies$ (i) or (ii): for such an $M$ there are exact sequences\n\t\\[ 0 \\to T \\to M \\dotimes{R} N, \\quad T \\to (R/\\mathfrak{m}) \\dotimes{R} N \\to 0 \\]\n\twhere $T := (R/\\mathfrak{a}) \\dotimes{R} N$; since $(R/\\mathfrak{m}) \\dotimes{R} N \\neq 0$ by (iii), we get $T \\neq 0$, therefore $M \\dotimes{R} N \\neq 0$ and we apply Lemma \\ref{prop:faithful-functor}. Finally, (i) or (ii) $\\implies$ (iii) is clear.\n\\end{proof}\n\n
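\\begin{example}\n\tFor a prime $p$, the localization $\\Z_{(p)}$ is flat over $\\Z$ but not faithfully flat: for any prime $q \\neq p$ one has $\\Z_{(p)} \\dotimes{\\Z} \\Z/q\\Z = 0$ because $q$ is invertible in $\\Z_{(p)}$, so criterion (iii) fails at the maximal ideal $(q)$. This substantiates the caveat in Example \\ref{eg:localization-flatness}.\n\\end{example}\n\n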
Then as an $R$-module, $M$ is faithfully flat if and only if it is flat.\n\\end{corollary}\n\\begin{proof}\n\tLet $\\mathfrak{m}, \\mathfrak{m}'$ be the maximal ideals of $R, R'$, respectively. Since $\\mathfrak{m}$ is mapped into $\\mathfrak{m}' = \\text{rad}(R')$, the assertion follows from Proposition \\ref{prop:faithful-flatness-crit} together with Nakayama's Lemma.\n\\end{proof}\n\nThis is often applied to the cases $R=R'$ or $M=R'$.\n\n\\begin{exercise}[Faithfully flat descent for flatness]\n\tGiven a faithfully flat homomorphism $\\varphi: R \\to R'$. If $M$ is an $R$-module such that $M \\dotimes{R} R'$ is a flat (resp. faithfully flat) $R'$-module, then so is $M$ over $R$.\n\\end{exercise}\n\n\\begin{proposition}\\label{prop:faithfully-flat-surj}\n\tSuppose $\\varphi: A \\to B$ is a faithfully flat ring homomorphism and regard $B$ as an $A$-algebra.\n\t\\begin{itemize}\n\t\t\\item For any $A$-module $M$, the natural map $M \\to M \\dotimes{A} B$ is injective; in particular, $\\varphi$ is seen to be injective by taking $M=A$.\n\t\t\\item For any ideal $\\mathfrak{a} \\subsetneq A$ we have $\\varphi^{-1}(\\mathfrak{a}B)=\\mathfrak{a}$.\n\t\t\\item The map $\\varphi^\\sharp: \\Spec(B) \\to \\Spec(A)$ is surjective.\n\t\\end{itemize}\n\\end{proposition}\n\\begin{proof}\n\tLet $N$ be the kernel of $M \\to M \\dotimes{A} B$. The $B$-module homomorphism $N \\dotimes{A} B \\to M \\dotimes{A} B$ is injective by flatness; it is zero on $N \\otimes 1$, hence identically zero and we obtain $N = \\{0\\}$ by faithful flatness (Lemma \\ref{prop:faithful-functor}). \n\n%\tTake $x \\in M$. Flatness implies $Ax \\otimes B \\hookrightarrow M \\dotimes{A} B$. Note that $Ax \\otimes B$ is generated by $x \\otimes 1$. If $x \\neq 0$ then $Ax \\neq \\{0\\}$ and $Ax \\otimes B \\neq \\{0\\}$ since $\\varphi$ is faithfully flat. This implies $x \\mapsto x \\otimes 1 \\neq 0$.\n\t\n\tLet $\\mathfrak{a} \\subset A$ be a proper ideal. The previous step with $M := A/\\mathfrak{a}$ implies that $A/\\mathfrak{a} \\to (A/\\mathfrak{a}) \\dotimes{A} B \\simeq B/\\mathfrak{a}B$ is injective, hence $\\varphi^{-1}(\\mathfrak{a}B) = \\mathfrak{a}$.\n\t\n\tTo show the surjectivity of $\\varphi^\\sharp$, consider a given $\\mathfrak{p} \\in \\Spec(A)$ and form the faithfully flat ring homomorphism\n\t\\[ \\varphi_{\\mathfrak{p}}: A_{\\mathfrak{p}} \\to B \\dotimes{A} A_{\\mathfrak{p}} = B_{\\mathfrak{p}} \\]\n\tby base change. The previous step implies $\\varphi_{\\mathfrak{p}}^{-1}(\\mathfrak{p}B_{\\mathfrak{p}}) = \\mathfrak{p} A_{\\mathfrak{p}}$, hence $\\mathfrak{p}B_{\\mathfrak{p}}$ is a proper ideal of $B_{\\mathfrak{p}}$. Any maximal over-ideal $\\mathfrak{m}_0$ of $\\mathfrak{p}B_{\\mathfrak{p}}$ satisfies $\\varphi^{-1}_{\\mathfrak{p}}(\\mathfrak{m}_0) = \\mathfrak{p} A_{\\mathfrak{p}}$. Take $\\mathfrak{m} \\in \\Spec(B)$ mapping to $\\mathfrak{m}_0$. 
From the commutative diagrams\n\t\\[\\begin{tikzcd}\n\t\tB \\arrow[r] & B_{\\mathfrak{p}} \\\\\n\t\tA \\arrow[u, \"\\varphi\"] \\arrow[r] & A_{\\mathfrak{p}} \\arrow[u, \"\\varphi_{\\mathfrak{p}}\"']\n\t\\end{tikzcd} \\qquad \\begin{tikzcd}\n\t\t\\Spec(B) \\arrow[d, \"\\varphi^\\sharp\"'] & \\Spec(B_{\\mathfrak{p}}) \\arrow[d, \"\\varphi_{\\mathfrak{p}}^\\sharp\"] \\arrow[l] \\\\\n\t\t\\Spec(A) & \\Spec(A_{\\mathfrak{p}}) \\arrow[l]\n\t\\end{tikzcd}\\]\n\tone infers $\\varphi^\\sharp(\\mathfrak{m}) = \\mathfrak{p}$.\n\\end{proof}\n\n\\begin{theorem}\\label{prop:faithfully-flat-criterion}\n\tThe following are equivalent for a ring homomorphism $\\varphi: A \\to B$.\n\t\\begin{enumerate}[(i)]\n\t\t\\item $\\varphi$ is faithfully flat;\n\t\t\\item $\\varphi$ is flat and $\\varphi^\\sharp: \\Spec(B) \\to \\Spec(A)$ is surjective;\n\t\t\\item $\\varphi$ is flat and for any maximal ideal $\\mathfrak{p} \\subset A$ there exists a maximal ideal $\\mathfrak{m} \\subset B$ such that $\\varphi^{-1}(\\mathfrak{m}) = \\mathfrak{p}$.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\t(i) $\\implies$ (ii) is contained in Proposition \\ref{prop:faithfully-flat-surj}. As for (ii) $\\implies$ (iii), take any $\\mathfrak{q} \\in \\Spec(B)$ that pulls back to $\\mathfrak{p} \\in \\Spec(A)$, then any maximal over-ideal $\\mathfrak{m}$ of $\\mathfrak{q}$ also pulls back to $\\mathfrak{p}$. To show (iii) $\\implies$ (i), apply the criterion of Proposition \\ref{prop:faithful-flatness-crit}: for any $\\mathfrak{p} \\in \\MaxSpec(A)$, the existence of $\\mathfrak{m} \\mapsto \\mathfrak{p}$ implies $\\mathfrak{m} \\supset \\varphi(\\mathfrak{p}) \\cdot B = \\mathfrak{p}B$, hence $\\mathfrak{p}B \\neq B$.\n\\end{proof}\n\nThe notion of flatness was first introduced by J.-P. Serre in \\cite{Se55}. The surjections $\\varphi^\\sharp: \\Spec(B) \\to \\Spec(A)$ (or rather their global avatars) for faithfully flat $\\varphi: A \\to B$ are often employed as candidates of ``coverings'' in algebraic geometry, leading up to the well-known \\emph{fpqc} (faithfully flat + quasi-compact) and \\emph{fppf} (faithfully flat + finitely presented) topologies in the sense of Grothendieck. 
They have been indispensable tools for contemporary geometers; \\cite{Vi05} serves as a readable introduction to this circle of ideas.", "meta": {"hexsha": "11178ef12fcecfb52504b2b795037fb6a93423cf", "size": 44076, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "YAlg3-3.tex", "max_stars_repo_name": "wenweili/Yanqi-Algebra-3", "max_stars_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2019-07-09T06:22:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-05T14:44:14.000Z", "max_issues_repo_path": "YAlg3-3.tex", "max_issues_repo_name": "wenweili/Yanqi-Algebra-3", "max_issues_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "YAlg3-3.tex", "max_forks_repo_name": "wenweili/Yanqi-Algebra-3", "max_forks_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-07-10T23:47:58.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-21T03:32:08.000Z", "avg_line_length": 83.7946768061, "max_line_length": 762, "alphanum_fraction": 0.6800299483, "num_tokens": 15786, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.8056321913146127, "lm_q1q2_score": 0.5712086483422079}}
{"text": "\\newcommand{\\vect}[1]{\\mathbf{#1}}\n\\newcommand{\\mat}[1]{\\mathbf{#1}}\n\n\\newcommand{\\pos}{\\vect{x}}\n\\newcommand{\\dx}{\\vect{\\Delta x}}\n\\newcommand{\\xcur}{\\vect{x}_{n}}\n\\newcommand{\\xnext}{\\vect{x}_{n+1}}\n\\newcommand{\\vel}{\\vect{v}}\n\\newcommand{\\dv}{\\vect{\\Delta v}}\n\\newcommand{\\vcur}{\\vect{v}_{n}}\n\\newcommand{\\vnext}{\\vect v_{n+1}}\n\\newcommand{\\acc}{\\vect{a}}\n\\newcommand{\\force}{\\vect{f}}\n\\newcommand{\\forcext}{\\vect{f}_{ext}} \n\\newcommand{\\lam}{\\vect{\\lambda}}\n\\newcommand{\\lcur}{\\lam_{n}}\n\\newcommand{\\lnext}{\\lam_{n+1}}\n\\newcommand{\\avlam}{\\bar{\\lam}}\n\\newcommand{\\fcur}{\\vect{f}_{n}}\n\\newcommand{\\fnext}{\\vect f_{n+1}}\n\\newcommand{\\M}{\\mat M}\n\\newcommand{\\Minv}{\\mat M^{-1}}\n\\newcommand{\\Mbinv}{\\bar{\\mat M}^{-1}}\n\\renewcommand{\\P}{\\mat P}\n\\newcommand{\\I}{\\mat I}\n\\newcommand{\\J}{\\mat J}\n\\newcommand{\\Jt}{\\mat J^T}\n\\newcommand{\\C}{\\mat C}\n\\newcommand{\\D}{\\mat D}\n\\newcommand{\\K}{\\mat K}\n\\newcommand{\\violation}{ \\phi}\n\\newcommand{\\dviolation}{\\dot \\violation}\n\\newcommand{\\violcur}{\\violation_{n}}\n\\newcommand{\\dviolcur}{\\dot \\violcur}\n\\newcommand{\\violnext}{\\violation_{n+1}}\n\\newcommand{\\dviolnext}{\\dot \\violation_{n+1}}\n\\newcommand{\\cmp}{c}\n\\newcommand{\\dampingratio}{d}\n\n\\section{Introduction}\nThis plugin serves two purposes:\n\\begin{itemize}\n \\item to provide a new, more complete mechanisme for assembling the dynamical system matrices such as mass $\\M$ and stiffness $\\K$, as well as constraint Jacobians $\\J$,\n \\item and experiment a unified approach of soft and hard constraints, inspired from~\\cite{servin2006interactive}.\n\\end{itemize}\nContrary with other implementations of matrix assembly available in SOFA, this one handles mappings, and even multimappings.\n\nSections \\ref{sec constraint forces} and \\ref{sec:time integration} explain the unified constraint approach, while Section~\\ref{sec matrix assembly} presents the matrix assembly process based on an example. An overview of the corresponding API is given in Section~\\ref{sec implementation}. \n\n\\section{Constraint forces} \\label{sec constraint forces}\nThis approach unifies soft and hard constraints, by providing constraints with compliance. \n\\subsection{Hard constraints}\nFor a good introduction on hard constraints, see~\\cite{witkinconstraints}.\nHard constraints are usually implemented using Lagrange multipliers $\\lambda$ in the following equation:\n\\begin{equation} \\label{eq acc hard}\n\\left( \\begin{array}{cc}\n\\M & -\\Jt \\\\\n \\J &  \\end{array}\\right)\n\\left( \\begin{array}{c}\n\\acc \\\\ \\lam\n\\end{array}\\right) = \\left( \\begin{array}{c}\n\\forcext  \\\\\n-\\ddot \\violation\n\\end{array}\\right) \n\\end{equation}\nwhere $\\M$ is the mass matrix (or, more generally, a dynamics matrix such as $\\M-h\u00b2\\K$ used in implicit time integration), $\\J$ is the Jacobian matrix of the constraint(s), $\\acc$ is the acceleration, $\\forcext$ is the net external force applied to the system, $\\lam$ is the constraint force $\\violation$ is the constraint violation and $\\ddot \\violation$ is the second time derivative of the violation (i.e. the error on accelerations).\nThis form of equation system is typically called \\textit{Karush-Kuhn-Tucker (KKT)}.\n$\\lam$ and $\\ddot \\violation$ are vectors with as many entries as scalar constraints. 
The equation system is typically solved using a Schur complement to compute the constraint forces:\n\\begin{equation}\\label{eq schur rigid}\n\\J \\Minv \\Jt \\lam = -\\ddot \\violation - \\J \\Minv \\forcext\n\\end{equation}\nand then the acceleration is computed as $\\acc = \\Minv( \\forcext + \\Jt \\lam)$.\n\n\\subsection{Projective constraints}\nWhen the constraints are simple enough, such as keeping points fixed or in some hyperplane, they are more easily handled by straightforwardly projecting forces and displacements.\nThe previous equation becomes:\n\\begin{equation} \\label{eq acc hard projected}\n\\left( \\begin{array}{cc}\n\\P\\M & -\\P\\Jt \\\\\n \\J\\P &  \\end{array}\\right)\n\\left( \\begin{array}{c}\n\\acc \\\\ \\lam\n\\end{array}\\right) = \\left( \\begin{array}{c}\n\\P\\forcext  \\\\\n-\\ddot \\violation\n\\end{array}\\right) \n\\end{equation}\n where $\\P$ is the orthogonal projection matrix corresponding to the constraint.\n\n\\subsection{Generalized constraints}\nIn the generalized approach, the constraint forces are considered directly proportional to the constraint violations $\\violation$ and their first time derivative (error on velocity) $\\dviolation$:\n\\begin{equation}\\label{eq generalized constraint}\n\\lam = - \\frac{1}{\\cmp} \\left( \\violation + \\dampingratio \\dviolation \\right)\n\\end{equation}\nwhere the positive real number $\\cmp$ is the compliance of the constraint and $\\dampingratio$ is its damping ratio.\nCombined with a time discretization scheme, this leads to an equation system similar to \\eqref{eq schur rigid} as shown in Section~\\ref{sec:time integration}.\n% This leads to equation system:\n% \\begin{equation} \\label{eq acc compliant}\n% \\left( \\begin{array}{cc}\n% \\M & -\\Jt \\\\\n%  \\J &  \\C \\end{array}\\right)\n% \\left( \\begin{array}{c}\n% \\acc \\\\ \\lam\n% \\end{array}\\right) = \\left( \\begin{array}{c}\n% \\forcext  \\\\\n% -\\violation - \\dampingratio \\dviolation\n% \\end{array}\\right) \n% \\end{equation}\n% where $\\C$ is the compliance matrix, typically diagonal, which is null for hard constraints. The corresponding Schur complement is well-conditioned for hard constraints:\n% \\begin{equation}\\label{eq schur compliant}\n% \\left( \\J \\Minv \\Jt  +   \\C \\right) \\lam = -\\violation - \\dampingratio\\dviolation - \\J \\Minv \\forcext\n% \\end{equation}\n% contrary to the standard implicit integration approach, where the indefinite stiffness matrix grows to infinity for hard constraints.\n\n\\subsection{Force or constraint ?} \\label{sec force or constraint}\nForceFields usually contribute the top line of Equation~\\ref{eq acc hard}, by accumulating force in the right-hand term. In implicit integration, their stiffness matrix is also accumulated to the left-hand side.\nWhen a ForceField has an invertible stiffness matrix, it can alternatively be handled as a constraint with non-null compliance, rather than a standard force. In this case, it contributes to the bottom of Equation~\\ref{eq acc hard} rather than to the top.\n\nDue to matrix conditioning issues, and depending on which linear solver is used, it may be a good idea to process very stiff forces as low compliances rather than large stiffnesses. \nIn this case, we call them \\textit{compliant} or \\textit{compliance forces}. 
Here the term compliant does not convey information on the magnitude of the stiffness (or compliance) but only on the way it is processed by the solver, namely in the bottom row of the KKT equation system.\nNote, however, that each force handled as a constraint increases the size of the equation system, since it adds lines to the bottom (and columns to the right) of the equation system.\nConversely, it may be more efficient to handle soft forces as stiffnesses, since the size of the stiffness matrix depends on the number of independent DOFs, which does not increase along with the number of forces. We call such forces \\textit{stiff} or \\textit{stiffness forces}, though here again, the term does not convey information on the magnitude of the stiffness but only on the way it is handled by the solver, namely in the first row of the KKT equation system.\n\nNote that geometric stiffness is not (yet) handled in the compliance view, which may result in instabilities in case of large displacements.\n\n\n\\section{Time integration} \\label{sec:time integration}\nOur integration scheme is:\n\\begin{eqnarray}\n \\vnext &=& \\vcur + h \\Minv \\left(\\alpha \\fnext + (1-\\alpha) \\fcur\\right) \\label{eq ode velocity} \\\\\n  \\xnext &=& \\xcur + h \\left( \\beta \\vnext + (1-\\beta) \\vcur \\right) \\label{eq ode position}\n\\end{eqnarray}\nwhere index $n$ denotes current values while index $n+1$ denotes next values, $\\alpha$ is the implicit velocity factor, and $\\beta$ is the implicit position factor. Let\n\\begin{eqnarray}\n \\dv = \\vnext -    \\vcur  &=& h \\Minv \\left(\\alpha \\fnext + (1-\\alpha) \\fcur\\right) \\\\\n\\dx = \\xnext -  \\xcur    &=& h (v + \\beta  \\dv)\n\\end{eqnarray}\nbe the velocity and position changes across the time step.\nWe have not carefully studied the influence of these parameters, but it seems that $\\alpha=0.5$ and $\\beta=1$ corresponds to an \\textbf{energy-conserving integration scheme}.\n\n\nThe constraint violation $\\violation$ and its Jacobian $\\J$ are:\n\\begin{eqnarray}\n \\J &=& \\frac{\\partial \\vect \\violation}{\\partial \\pos} \\\\\n \\violnext &\\simeq& \\violcur + \\J \\dx = \\violcur + h   \\dviolation + \\J h \\beta \\Delta \\vel  \\label{eq violnext}\\\\\n\\dviolnext &\\simeq& \\dviolcur + \\J \\dv \\label{eq dviolnext}\n\\end{eqnarray}\nThe corresponding forces are:\n\\begin{eqnarray}\n \\force &=& \\forcext + \\Jt \\lam \\\\\n \\lam_i &=& -\\frac{1}{c_i} (  \\violation_i + d \\dviolation_i ) \\label{eq lambda}\n\\end{eqnarray}\nwhere the subscript $i$ denotes a scalar constraint.\n\nThe average constraint forces are computed  using equations \\ref{eq lambda}, \\ref{eq violnext} and \\ref{eq dviolnext}:\n\\begin{eqnarray*}\n \\avlam_i &=& \\alpha \\lnext + (1-\\alpha) \\lcur \\\\\n&=& -\\frac{1}{c_i} ( \\alpha \\violation + \\alpha h \\dviolation  + \\alpha h \\beta \\J \\Delta \\vel + \\alpha d \\dviolation + \\alpha d \\J \\dv + (1-\\alpha)\\violation + (1-\\alpha) d \\dviolation  ) \\\\\n&=& -\\frac{1}{c_i} ( \\violation + d\\dviolation + \\alpha h \\dviolation + \\alpha(h\\beta+d)\\J \\dv )\n\\end{eqnarray*}\nWe can rewrite the previous equation as:\n\\begin{equation} \\label{eq bottom}\n \\J \\dv + \\frac{1}{\\alpha(h\\beta+d)}\\C\\avlam = - \\frac{1}{\\alpha(h\\beta+d)} (\\violation + (d+\\alpha h)\\dviolation)\n\\end{equation}\nwhere values without indices denote current values.\nThe complete equation system has the following generalized KKT form:\n\\begin{equation} \\label{eq:kktode}\n \\left( \\begin{array}{cc}\n\\frac{1}{h}\\P\\M 
& -\\P\\Jt \\\\\n\\J & \\frac{1}{l} \\C \\end{array}\\right)\n\\left( \\begin{array}{c}\n\\dv \\\\ \\avlam\n\\end{array}\\right) = \\left( \\begin{array}{c}\n\\P\\forcext  \\\\\n- \\frac{1}{l} (\\violation +(d+\\alpha h) \\dviolation)\n\\end{array}\\right) \n\\end{equation}\nwhere $ l=\\alpha(h \\beta + d) $.\nThe matrix in equation~\\ref{eq:kktode} is semi-definite when matrix $\\C$ is null, or when matrix $\\P $ removes independent degrees of freedom without reducing the size of the mass matrix accordingly.\nThis equation system can be solved using iterative solvers, such as CG or other Krylov methods.\nAn other option is to create a reduced equation system on constraint forces only:\n\\begin{equation} \\label{eq schur complement}\n \\left( h\\J\\Mbinv\\Jt + \\frac{1}{l}\\C \\right) \\avlam = -\\frac{1}{l} \\left(\\violation + (\\dampingratio+h\\alpha)\\dviolation \\right) - h\\J \\P^T \\Mbinv \\forcext\n\\end{equation}\nwhere matrix $ h\\J\\Mbinv\\Jt + \\frac{1}{l}\\C $ is called the Schur complement of the KKT equation system.\nMatrix $\\bar\\M= \\P \\M \\P^T + diag(\\epsilon)$ is optionally regularized by the linear solver using a very small number $\\epsilon$. \nThis makes the matrix PSD, and does not significantly change the value of the equation solution thanks to the small magnitude of the perturbation.\n\n% Note that the system is singular due to matrix $\\P $, which removes independent degrees of freedom without reducing the size of the mass matrix accordingly.\n% To remedy this, we set an option to replace it with \n% \\begin{equation} \\label{eq mbar}\n% \\bar\\M = \\P \\M \\P^T + diag(\\epsilon),\n% \\end{equation}\n%  where $\\epsilon$ is a very small number. \n% This makes the matrix PSD, and does not significantly change the value of the equation solution thanks to the small magnitude of the perturbation.\n% \n% % \\[ \\begin{array}{ccc}\n% % \\left( h\\J\\P\\Minv\\P\\Jt + \\frac{1}{l}\\C \\right) \\avlam &=& -\\frac{1}{l} \\left(\\violation + (\\dampingratio+h\\alpha)\\dviolation \\right) - h\\J \\Minv \\forcext \\\\\n% % \\dv &=& h\\P\\Minv(\\forcext +\\Jt \\avlam ) \\\\\n% % \\dx &=& h( \\vel + \\beta \\dv )\n% % \\end{array} \\]\n% \n% By inverting $\\bar\\M$, we can insert the first line of the equation system and get:\n% \\begin{equation} \\label{eq schur complement}\n%  \\left( h\\J\\Mbinv\\Jt + \\frac{1}{l}\\C \\right) \\avlam = -\\frac{1}{l} \\left(\\violation + (\\dampingratio+h\\alpha)\\dviolation \\right) - h\\J \\P^T \\Mbinv \\forcext\n% \\end{equation}\n% where matrix $ h\\J\\Mbinv\\Jt + \\frac{1}{l}\\C $ is called the Schur complement of the KKT equation system.\nSolving for $\\avlam$ allows us to update the velocities and the positions:\n\\begin{eqnarray}\n \\dv &=& h\\P\\Mbinv(\\forcext +\\Jt \\avlam ) \\label{eq update velocities}\\\\\n\\dx &=& h( \\vel + \\beta \\dv ) \\label{eq update positions}\n\\end{eqnarray}\n\n\n\n\\section{Matrix assembly} \\label{sec matrix assembly}\nThe equation system, in its most general form, can be written as:\n\\begin{equation}\n \\label{eq abstract system}\n \\left( \\begin{array}{cc}\n\\M & -\\Jt \\\\\n\\J &  \\C \\end{array}\\right)\n\\left( \\begin{array}{c}\n\\pos \\\\ \\lam\n\\end{array}\\right) = \\left( \\begin{array}{c}\n\\force  \\\\\n\\violation\n\\end{array}\\right) \n\\end{equation}\nWe assemble the $7$ terms of equation~\\ref{eq abstract system} separately. 
\n\nFigure~\\ref{fig system graph} shows an example of a mechanical system.\nThe independent DOFs are $X_{a}$ and $X_d$.\nState $X_b$ is attached to $X_a$ using a simple mapping, and a mass matrix $M_{bb}$ is defined at this level.\nState $X_c$ is attached to $X_b$ using a simple mapping, and a compliance matrix $C_{\\alpha \\alpha}$ (possibly a deformation force) is applied to these DOFs.\nState $X_e$ is attached to $X_a$ and $X_d$ at the same time, using a MultiMapping. A compliance matrix $C_{\\beta \\beta}$, possibly an interaction force, is applied to these DOFs, \nwhile a mass $M_{dd}$ is applied to $X_d$.\n\\begin{figure}\n\\centering\n%  \\includegraphics[width=0.49\\linewidth]{system-graph.png}\n\\includegraphics[clip,trim= 0mm 255mm 145mm 0mm,width=0.49\\linewidth]{system-graph.pdf}\n\\caption{Example of a mechanical system structure. The $X$ represent mechanical states, while the arrows represent the kinematic hierarchy, and the plain lines represent components acting on the states.}\n\\label{fig system graph}\n\\end{figure}\n\n\nThe corresponding equation system has the block structure shown on the right of Figure~\\ref{fig system matrix}.\nThe main blocks of the equation system are highlighted in grey rectangles. The $J$ matrices are the mapping matrices. The bottom row has two mappings, since the state $X_e$ impacted by compliance $\\beta$ depends on two parent states.\n\\begin{figure}\n\\centering\n \\includegraphics[clip,trim=5mm 185mm 40mm 0mm,width=0.7\\linewidth]{system-bigmatrix.pdf}\n\\caption{Block view of equation~\\ref{eq abstract system} applied to the system of Figure~\\ref{fig system graph}, with non-null blocks highlighted in yellow.}\n\\label{fig system matrix}\n\\end{figure}\n\n\nThe assembly of local matrices and vectors into the global matrices and vectors which compose the system is performed using shift matrices, as illustrated in Figure~\\ref{fig shifting}.\nShift matrices $J_{*0}$, shown in the top of the figure, are composed of identity matrices (represented with a diagonal in a block) and null blocks. They can be used to implement the shifting of vector entry indices from local to global. \n\\begin{figure}\n\\centering\n%  \\includegraphics[width=0.69\\linewidth]{system-shift.png}\n \\includegraphics[clip,trim=0mm 40mm 0mm 115mm,width=0.89\\linewidth]{system-bigmatrix.pdf}\n\\caption{Shift matrices $J_{*0}$ are used to place blocks within global vectors and matrices. }\n\\label{fig shifting}\n\\end{figure}\nThe assembly is thus easily expressed as a sum of products of matrices, as illustrated in Figure~\\ref{fig assembly}. \nWhile shifting values in dense arrays using matrix products is not efficient (vector assembly is actually implemented using shifted copies), the sum of matrix products is a reasonable implementation of sparse matrix assembly.\n\\begin{figure}\n\\centering\n%  \\includegraphics[width=0.6\\linewidth]{system-assembly.png}\n \\includegraphics[clip,trim=5mm 185mm 70mm 5mm,width=0.6\\linewidth]{system-assembly.pdf}\n\\caption{The assembly is performed by summing the global vectors and matrices obtained by shifting local vectors and matrices.}\n\\label{fig assembly}\n\\end{figure}\n\n\n\\section{Implementation} \\label{sec implementation}\n\n\\subsection{Dependencies}\nThis plugin uses \\texttt{Eigen} and \\texttt{suitesparse}. 
Moreover, the test suite requires the \\texttt{boost unit test framework}, see https://wiki.sofa-framework.org/wiki/UnitTesting.\n\n\\subsection{Functions}\n\nThe virtual functions used for setting up the equation system are declared in BaseForceField.h, BaseMapping.h, BaseMass.h, BaseProjectiveConstraint.h, MechanicalParams.h.\nThey are documented in the ``Experimental Compliance API'' documentation blocks of these files.\n\n\n\\subsection{BaseForceField functions}\nForceFields may be handled as forces or as soft constraints, as explained in Section~\\ref{sec force or constraint}. \nThe default behavior is to handle them as forces, and in this case their force is accumulated using a classical \\texttt{MechanicalOperations::computeForce} operation, while their stiffness matrix is accumulated using \\texttt{getStiffnessMatrix}, briefly described below.\n\\begin{itemize}\n \\item \\texttt{getComplianceMatrix} returns NULL if the ForceField is to be handled as a traditional force function, while it returns a pointer to a compliance matrix if it is to be handled as a constraint.\n \\item \\texttt{addForce} is used to accumulate the force in the right-hand term. This function is not specific to the \\textit{Compliant} plugin. It should do nothing when the force must be handled as a constraint.\n \\item \\texttt{getStiffnessMatrix} is the complement of \\texttt{getComplianceMatrix}. If one returns NULL, then the other must return a pointer to a matrix.\n \\item \\texttt{writeConstraintValue} is used to write the constraint violation $\\violation$ of Eq.~\\ref{eq generalized constraint} when the ForceField is handled as a constraint.\n \\item \\texttt{getDampingRatio} is used in the constraint case, and corresponds to parameter $\\dampingratio$ in Eq.~\\ref{eq generalized constraint}.\n\\end{itemize}\n\n\\subsubsection{BaseMapping functions}\n\\begin{itemize}\n \\item \\texttt{getJs} returns a list of Jacobian matrices, one for each parent of the mapping. Typical mappings have only one parent, but MultiMappings have several.\n \\item \\texttt{getKs} returns a list of stiffness matrices, one for each parent. These correspond to geometric stiffness, i.e. the change of forces due to mapping nonlinearity, like \\texttt{addDJT} in the traditional API. \\textbf{TODO:} refine this to also return off-diagonal terms in MultiMappings. \n\\end{itemize}\n\n\\subsubsection{BaseMass functions}\n\\begin{itemize}\n \\item \\texttt{addMToMatrix} to create the mass matrix. This function is not specific to the \\textit{Compliant} plugin.\n\\end{itemize}\n\n\\subsection{BaseProjectiveConstraint functions}\n\\begin{itemize}\n \\item \\texttt{projectMatrix} replaces a matrix with one corresponding to the same linear operator, projected to cancel out the forbidden directions. 
\n\\end{itemize}\n\n\\subsection{MechanicalParams functions}\n\\begin{itemize}\n \\item \\texttt{implicitVelocity} is the $\\alpha$ parameter in the implicit velocity integration of Eq.~\\ref{eq ode velocity}.\n \\item \\texttt{implicitPosition} is the $\\beta$ parameter in the implicit position integration of Eq.~\\ref{eq ode position}.\n\\end{itemize}\n\n\n\\section{Alternative model} \\label{sec alternative model}\nTo allow more flexibility in the definition of damping in the compliance approach, the following variant is possible.\nWe replace eq.~\\ref{eq generalized constraint} with the following:\n\\begin{equation}\\label{eq generalized damped constraint}\n\\lam = - \\C^{-1} \\violation - \\D \\dviolation\n\\end{equation}\nwhere $\\violation$ and $\\dviolation$ are vectors, and $\\C$ and $\\D$ are symmetric matrices.\nThis allows us to define non-uniform damping coefficients.\nWe rewrite eq.~\\ref{eq bottom} as:\n\\begin{equation}\n \\label{eq bottom alt}\n \\alpha( h \\beta \\I + \\C \\D) \\J \\dv + \\C \\avlam = -\\violation - \\alpha(h\\I + \\C\\D )\\dviolation\n\\end{equation}\nwhere $\\I$ is the identity matrix of the appropriate size.\nUnfortunately, the diagonal matrix to the left of $\\J$ in the $\\dv$ term now makes the Schur complement unsymmetric. Therefore, we have not implemented this approach.\n\n\n\n\\section{Additional Notes}\n\\label{sec:additional notes}\n\n\\newcommand{\\component}[1]{\\texttt{#1}}\n\\newcommand{\\compopt}[2]{\\texttt{#1}=''#2''\\\\}\n\nThe initial implementation of the compliance framework is provided by\n\\component{ComplianceSolver}, which performs system assembly, system\nsolves and time-stepping. It uses a sparse Cholesky solver and is able\nto handle bilateral constraints only.\n\nWith time, this component has been broken down into the following\ncomponents:\n\n\\begin{itemize}\n\\item \\component{CompliantImplicitSolver}, which performs system assembly\n  (through an internal \\component{AssemblyVisitor}) and\n  time-stepping.\n\\item \\component{KKTSolver}, which performs the KKT solve associated\n  with an \\component{AssembledSystem}, itself provided by the\n  \\component{CompliantImplicitSolver} through its internal\n  \\component{AssemblyVisitor}. \n\\end{itemize}\n\n\\component{KKTSolver} is a virtual base class implemented in\n\\component{MinresSolver} and \\component{LDLTSolver} for bilateral\nconstraints. The CompliantDev plugin contains additional KKT solvers\nfor unilateral and frictional constraints.\n\nWe now give quick user documentation for the above components.\n\n\\subsection{CompliantImplicitSolver}\n\nThis component implements a linearized implicit Euler solver. The KKT\nsystem is first assembled by an \\component{AssemblyVisitor}, and\nfactorized by the \\component{KKTSolver}.\n\nThe right-hand side is then computed depending on constraint\nstabilization and order of resolution, and the \\component{KKTSolver}\nprovides system solves.\n\nAt the end of the time-step, velocities are updated and the system is\nstepped forward. Optionally, one can propagate the computed Lagrange\nmultipliers upwards in the scene graph at the end of the time step in\nthe \\emph{force} state vector for easy access.\n\n\\subsubsection{Order of Resolution}\n\n\\compopt{use\\_velocity}{\\textbf{true}, false}\n\nFormulates the system at the velocity level. Acceleration is used\notherwise. 
This is mostly for debugging purpose, unless warm-starts\nare disabled (see below).\n\n\\subsubsection{Warm Starts}\n\n\\compopt{warm\\_start}{\\textbf{true}, false}\n\nWarm starts iterative solvers with previous velocity/lambda (velocity-level). At the\nacceleration level, only the lambda are used.\n\nWhen disabled and used with iterative solvers and premature exit, this\nbiases the solution towards zero, thus this results in numerical\ndamping with velocity-level.\n\nThis is mostly for debugging purpose.\n\n\\subsubsection{Constraint Stabilization}\n\n\\compopt{stablization}{\\textbf{false}, true}\n\nPerforms simple constraint stabilization. This causes an additional\nKKT solve during the time step.\n\nYou need to add a \\component{Stabilization} component next to any\ncompliance you wish to stabilize. Only use with zero-compliance\n(holonomic constraints).\n\nTODO more, example\n\n\\subsection{Lagrange Multiplier Propagation}\n\n\\compopt{propagate\\_lambdas}{\\textbf{false}, true}\n\nOptionally propagate/aggregate Lagrange multipliers from compliant\nDOFs to independent DOFs at the end of the time step. This erases the\n\\emph{force} state vector.\n\nTODO example\n\n\\subsection{LDLTSolver}\n\nno options \n\nTODO more\n\n\\subsection{MinresSolver}\n\n\\compopt{iterations}{\\textbf{false}, true}\n\niteration count\n\n\\compopt{precision}{\\textbf{1e-8}}\n\nresidual norm threshold\n\n\\compopt{relative}{\\textbf{true}, false}\n\nuse relative precision, otherwise absolute\n\n\nTODO parallel ?\nTODO schur complement ?\n\nTODO better description\n\n", "meta": {"hexsha": "39fc499eaa4fcad8699d3dd0d49a2aeda4b071d4", "size": 23745, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "applications/plugins/Compliant/doc/compliant_body.tex", "max_stars_repo_name": "sofa-framework/issofa", "max_stars_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_stars_repo_licenses": ["OML"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "applications/plugins/Compliant/doc/compliant_body.tex", "max_issues_repo_name": "sofa-framework/issofa", "max_issues_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_issues_repo_licenses": ["OML"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "applications/plugins/Compliant/doc/compliant_body.tex", "max_forks_repo_name": "sofa-framework/issofa", "max_forks_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_forks_repo_licenses": ["OML"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.0723684211, "max_line_length": 553, "alphanum_fraction": 0.7447883765, "num_tokens": 6447, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812552, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5712086417343455}}
{"text": "\\subsection{Evaluation}\n\\label{sec:eval}\n\nIn the evaluation phase of the CRISP-DM process, the model(s) produced in the\nearlier step is carefully examined and evaluated; i.e. does it solve the\nbusiness problem? By the end of this phase a decision should be made wether the\ndata mining results can be used, or if more iteration is needed. Even if the\nmodel is deployed, it is still possible to continue the process of iteration to\ncreate a better model or discover more things about the data or business\nproblem.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{Dataset Bias}\n\nWhile running the tests that produced the data in Table \\ref{tab:formulacompare}\nI noticed that while the accuracy stayed around $93 - 94$ \\% for all the runs,\nthe model did not have the same precision when predicting \\texttt{TRUE} as it\ndoes when predicting \\texttt{FALSE}. The model would be wrong when predicting a\ncustomer to be \\textit{retained} one time out of three, giving it an accuracy\nfor predicting \\texttt{TRUE} of around $66.666$ \\%. When guessing \\texttt{FALSE}\non the other hand; the model would have an accuracy of around $95$ \\% of the\ntime. The data for these calculations can be seen in Table\n\\ref{app:tab:confusion1}, Table \\ref{app:tab:confusion2} and Table\n\\ref{app:tab:confusion3} in Appendix \\ref{app:confusion}.\n\nLooking at the datasets it looks like this trend is caused by the number of\nobservations for each class. Both the training and test dataset contains more\nthan ten times as many unretained customers as retained, the exact numbers can\nbe seen in Table \\ref{tab:datasetretention}. This essentially returned me to the\ndata understanding phase of the CRISP-DM process, but for the sake of continuity\nin this reportm the data preparation, modelling and evaluation of this extra\niteration will be written in this section.\n\n\\begin{table}[H]\n  \\centering\n  \\begin{tabular}{lll}\n    \\textbf{Dataset}  & \\texttt{TRUE} & \\texttt{FALSE} \\\\ \\hline\n    \\textit{Training} & $30358$       & $433358$       \\\\\n    \\textit{Test}     & $40731$       & $454659$       \\\\\n    \\textit{Equal}    & $30358$       &  $30358$\n  \\end{tabular}\n  \\caption{The distribution of the \\textit{iscjretained} target variable classes\n    in the different datasets.}\n  \\label{tab:datasetretention}\n\\end{table}\n\nIn order to try and combat the bias in the dataset I constructed a new dataset I\nwill refer to as dataset \\textit{equal}. This new dataset contains all of the\nobservations from the training dataset who is \\textit{retained}, and a random\nsample, of the same size, of the observations who is not.\n\nLike in Section \\ref{sec:formcompare} I created a new set of tests (the source\ncode for these tests is available in Figure \\ref{app:code:equal} in Appendix\n\\ref{app:equal}) which would test the new accuracy of the model with varying\ntree depths. Using this new test, the overall accuracy for drops to around $82$\n\\% ($81.6753$ \\%, $81.9569$ \\% and $81.9438$ \\%) for maximum depths of 4, 6 and\n8, for the detailed numbers, please refer to Table \\ref{app:tab:confusion11},\nTable \\ref{app:tab:confusion22} and Table \\ref{app:tab:confusion33} in Appendix\n\\ref{app:confusion}.\n\nThe new dataset and model does better at guessing retained customers with a mean\naccuracy of around $82$ \\% for both \\texttt{TRUE} and \\texttt{FALSE}. 
While this\nincrease in accuracy for retained customers is roughly equal to the decrease in\naccuracy for unretained customers when looking at the percentages, the data that\nthe model should predict on contains more unretained customers than retained. So\nthe drop from $95$ \\% to $82$ \\% in unretained accuracy represents a lot more\nusers than the increase in retained accuracy from $66.666$ \\% to $82$ \\%, given\nthe distribution in Table \\ref{tab:datasetretention}. This means\nthat this model will misclassify a lot more unretained users as retained,\ncompared to the improved ability to correctly classify retained customers as\nretained.\n", "meta": {"hexsha": "2d69588b2dfd949e0f0a3e57c08bd44742e4d446", "size": 3964, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/03_03_evaluation.tex", "max_stars_repo_name": "martinnj/CompanyProject2015", "max_stars_repo_head_hexsha": "9fcedf95a0194400257417f98fc22b8b184c0d4f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/03_03_evaluation.tex", "max_issues_repo_name": "martinnj/CompanyProject2015", "max_issues_repo_head_hexsha": "9fcedf95a0194400257417f98fc22b8b184c0d4f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/03_03_evaluation.tex", "max_forks_repo_name": "martinnj/CompanyProject2015", "max_forks_repo_head_hexsha": "9fcedf95a0194400257417f98fc22b8b184c0d4f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.0555555556, "max_line_length": 80, "alphanum_fraction": 0.7530272452, "num_tokens": 1025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5712086318274108}}
{"text": "%! Author = tstreule\n\n\\section{Computed Tomography}\n\nQuantitative with \\textbf{Houndsfield unit} (normed to water):\\\\\n\\highlight{$\\displaystyle \\textrm{CT value} = \\frac{\\mu - \\mu\\ped{water}}{\\mu\\ped{water}}\\cdot \\unit[1000]{HU}$}\n\nBeam types: Pencil, Fan, Parallel fan, cone.\n\n\\textbf{Algebraic Reconstruction}:\nOut of a $n \\times n$-Image, make a $n^2 \\times n^2$ equation system:\n$\\vec{P} = W \\vec{\\mu}$ (P are the measurements, W are the weightings, how much a beam crossed a box/pixel)\\\\\n\\highlight{$\\displaystyle \\vec{\\mu} = (W^H W)^{-1} W^H \\vec{P}$} $~^H$: hermetisch transponiert.\n\n\\textbf{Projection Slice Theorem}: Project a 2D function onto a line and do a 1D Fourier transform \\quad $\\iff$\\\\\nDo a 2D Fourier transform of that 2D function and slice it through its origin (parallel to the line above).\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Backprojection}\n%\nWhen doing simple Backprojection: $PSF = \\frac{1}{|r|}$\n\n\\includegraphics[width = .8\\linewidth]{CT_Sinogram}\\\\\n$P_\\phi(r) = -R[\\mu](r, \\phi) = -\\iint \\limits_{-\\infty}^\\infty \\mu(x, y) \\delta(\\underbrace{x \\cos \\phi + y \\sin \\phi - r\\vspace{-1mm}}_{\\text{line formula}\\vspace{-1mm}})dxdy$\n\nMath. Deduction: \\textbf{1st Reconstruction formula}:\\\\\nProjection $P_\\phi(r) = -\\int \\mu(r, s) ds$\\\\\n$\\mathcal{F}\\{P_\\phi(r)\\}(u,\\phi) = \\int P_\\phi(r) e^{-jur}dr =$\\\\\n$= - \\iint \\mu(r, s) e^{-jur} dr ds =$ (with $r = x\\cos \\phi + y \\sin \\phi$)\\\\\n$= -\\iint \\mu(x, y) \\exp(-jx\\underbrace{u \\cos \\phi\\vspace{-1mm}}_{\\vspace{-2mm}p \\hat{=} k_x} - jy\\underbrace{u\\sin\\phi\\vspace{-1mm}}_{\\vspace{-2mm}q \\hat{=} k_y}) dxdy = $\\\\\n$= - \\iint \\mu(x, y) e^{-jxp - jyq} dx dy$ (until here just proj. slice th.)\\\\\n\\highlight{$\\displaystyle \\implies \\mu(x, y) = \\frac{-1}{4\\pi^2}\\iint \\mathcal{F}\\{P_\\phi(r)\\}(p, q) \\eu^{\\iu xp + \\iu yq} \\diff p\\diff q$}\\\\\nFill k-space with $\\mathcal{F}$ of the proj.s $\\hat{=}$ Backproj. of $\\mathcal{F}$\\\\\nThen do 2D-backtransformation\n\n\\textbf{2nd variant}: Change of integration variables:\\\\\n\\fbox{$u(x,y) = \\frac{-1}{4\\pi^2} \\int \\limits_0^\\pi \\int \\limits_{-\\infty}^\\infty \\mathcal{F}\\{P_\\phi(u)\\}\\eu^{\\iu ur} \\abs{u} \\diff u \\diff\\phi$}\n\n\\textbf{Filtered Backprojection}:\\\\\n$\\highlight{\\hat{u}(x, y) = \\int_0^\\pi P_\\phi \\cdot \\mathcal{F}^{-1} \\{|u|\\} d\\phi} = \\sum_{j=0}^n \\big( p(r,\\varphi_j)*h(r) \\big) \\textrm{d}\\varphi$.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Effects that occur in practice \\& Spiral CT}\n``Streak artifacts'': Too few projections have been acquired. Correction: Additional low-pass $\\implies$ SNR rises.\n\n``data grabbing'': Convert cartesian $\\leftrightarrow$ polar coordinaates\n\nFilter in image-domain: $|u| \\quad \\laplace \\quad \\frac{1}{(2\\pi r)^2}$\\\\\nIf Fourier-domain is band-limited, the filter becomes a \\textbf{Ram-Lak}: $\\mathrm{sinc}(r) * \\frac{1}{(2\\pi r)^2}$\n\\quad\\textcolor{gray}{(don't forget: $\\mathrm{rect}(r) \\quad\\laplace\\quad \\mathrm{sinc}(r)$)}\n\nHave other filter, in order to not amplify noise: \\textbf{Shepp-Logan, Cosine or Hann-Filter}, i.e. combine with low-pass.\n\n\\textbf{Spiral CT}: Before backprojection, do linear interpolation of data. 
\\textbf{Pitch}=1: The neighboring slices touch, Pitch=2: slice distance = 1 slice thickness\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{(Quantum) Noise}\n%\nDistribution of photon counts is $P_\\lambda(x)$.\n$\\textrm{E}\\{P_\\lambda\\} = \\lambda \\equiv \\bar{N}$, $\\sigma = \\sqrt{\\lambda}$.\\\\\n$\\!\\implies \\textrm{SNR} = \\frac{\\textrm{average}}{\\textrm{std. deviation}} \\!=\\! \\frac{\\overline{N}}{\\sqrt{\\overline{N}}} = \\sqrt{N}$\\\\\n\\phantom{$\\!\\implies$} where $N$: \\#X-rays, $\\lambda$: \\#Photons registered.\\\\\n$\\!\\implies \\textrm{Image SNR} = \\frac{\\overline{I}}{\\sqrt{\\textrm{Var}\\{I\\}}}$ where $\\sigma_I \\equiv \\sqrt{\\textrm{Var}\\{I\\}} \\propto \\sqrt{\\frac{1}{I_\\textrm{A}\\,t\\,\\Delta z}}$\n\nPixel noise = $\\sigma = \\sqrt{\\frac{1}{M-1}\\sum_{i=1}^M (I_i - \\overline{I})^2} = \\highlight{\\sqrt{1 / (Q \\Delta z) }}$\\\\\nwhere Q: Tube current $I_\\textrm{A}$ $\\times$ scan time $t$, \\quad $\\Delta z$: detector thickness\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Dose considerations}\n%\n\\textbf{Absorbed dose}: $\\unit[1]{Gray} = \\unit[1]{Gy} = \\unitfrac[1]{J}{Kg}$\\\\\n\\textbf{Equivalent dose}: $1 \\unit{Sievert} = \\unit[1]{Sv} = Q \\cdot N \\unit{Gy}$\\\\\nwith the \\textbf{Quality factor Q} (mostly 1) and the \\\\\\textbf{Pertinent factor N} (bone, lung, stomach: 0.12 - skin: 0.1)\n\n\\textbf{Excess Relative Risk}:\\\\\n$RR = \\frac{\\text{Cancer Deaths/y}}{\\text{Cancer Deaths control pop./y}}\n\\hfill\\highlight{ERR = \\frac{RR-1}{\\text{Additional dose}}}$\n\nIt is higher for women and children. $2.5/\\unit{Sv} - 0.25/\\unit{Sv}$\n", "meta": {"hexsha": "fa6dbdadf1a42dc02bc89f241baa70b5228ed779", "size": 4673, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/BMI18/sections/04_computed_tomography.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/BMI18/sections/04_computed_tomography.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/BMI18/sections/04_computed_tomography.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.4868421053, "max_line_length": 179, "alphanum_fraction": 0.6257222341, "num_tokens": 1694, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127603871312, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5711653895018709}}
{"text": " \\chapter[Computing the occurrence of a process]\n   {Computing the occurrence of a process} \n\n\\section{Particle transport using the {\\em differential} approach}\n\\subsection{The interaction length}\n\\begin{itemize}\n\\item[*]\n         In a simple material the number of atoms per volume is:\n         \\[n  = \\frac{\\mathcal{N}\\rho}{A}\\]\n         where:\n         \\begin{eqnarray*}\n          \\mathcal{N} &  & \\mbox{Avogadro's number} \\\\\n          \\rho        &  & \\mbox{density of the medium} \\\\\n          A           &  & \\mbox{mass of mole} \n         \\end{eqnarray*}\n\\item[*]\n         In a compound material the number of atoms of Element elm per volume is:\n         \\[n_{elm}  = \\frac{\\mathcal{N}\\rho w_{elm}}{A_{elm}}\\]\n         where:\n         \\begin{eqnarray*}\n          \\mathcal{N} &  & \\mbox{Avogadro's number} \\\\\n          \\rho        &  & \\mbox{density of the medium} \\\\\n          w_{elm}     &  & \\mbox{proportion by mass of the Element elm}\\\\\n          A_{elm}     &  & \\mbox{mass of mole of the Element elm} \n         \\end{eqnarray*} \n\\item[*] \n         The mean free path, $\\lambda$, of a process can be given in terms of its \n total cross section. \n         \\[\n           \\lambda(E) \\equiv \\frac{1}{\\Sigma (E)} \n             = \\frac{1}{\\sum_{elm}{\\lbrack n_{elm} \\sigma(Z_{elm},E)\\rbrack}}\n         \\]\n         where $\\sigma(Z,E)$  is the total cross section per atom of the\n         process and\n          $\\sum_{elm}$ runs over all Elements the material is made of.\n\\end{itemize}\n\n\\noindent\nCross sections per atom and mean free path values are tabulated during \ninitialisation.\n\n\\subsection{Determination of the interaction point}\nThe mean free path of a particle for a given process,\n$\\lambda$, depends on the medium and cannot be used\ndirectly to sample the probability of an interaction in a heterogeneous\ndetector. The number of mean free paths which a particle travels is:\n\n\\begin{equation}\n\\label{int.c}\nn_\\lambda =\\int \\frac{dx}{\\lambda(x)}\n\\end{equation}\n\nand it is independent of the material traversed. If $n_r$ is\na random variable denoting the number of mean free paths from a given\npoint until the point of interaction, it can be shown that $n_r$ has the\ndistribution function\n\\begin{equation}\n\\label{int.d}\nP( n_r < n_\\lambda ) = 1-e^{-n_\\lambda}\n\\end{equation}\nThe total number of mean free paths the particle travels before\nthe interaction point, $n_\\lambda$, is sampled at the beginning\nof the trajectory as:\n\\begin{equation} \n\\label{int.e}\nn_\\lambda = -\\log \\left ( \\eta \\right )\n\\end{equation}   \nwhere $\\eta$ is a random number uniformly distributed\nin the range $(0,1)$.\n$n_\\lambda$ is updated after each step $\\Delta x$ according the formula:\n\\begin{equation}\n\\label{int.f}\nn'_\\lambda=n_\\lambda -\\frac{\\Delta x }{\\lambda(x)}\n\\end{equation}\nuntil the step originating from $s(x) = n_\\lambda \\lambda(x)$ is\nthe shortest and this triggers the specific process.\n\nThe short description given above is the {\\em differential approach} \n of the particle transport , which is used in most of the simulation\ncodes( e.g. \\cite{int.geant3},\\cite{int.egs4}).\n\n In this approach besides the other ({\\em discrete}) processes  \n the continuous energy loss imposes a limit on the stepsize, too.\n The reason of this is the\nenergy dependence of the cross sections. 
In this approach, besides the other ({\\em discrete}) processes, the continuous energy loss imposes a limit on the stepsize, too. The reason for this is the energy dependence of the cross sections. It is assumed in this approach that\nthe cross sections of the particles are constant during a step, i.e.\nthe step size should be small enough that the relative difference of the cross sections\nat the beginning of the step and at the end remains small. In principle\none\nhas to use very small steps in order to have an accurate simulation, but the\ncomputing\ntime increases as the stepsize decreases. As a good compromise the stepsize is\nlimited in GEANT4 by the requirement that the stopping range of the particle can\ndecrease by not more than 20 \\% during the step. This condition works fine for\na particle of kinetic energy above $0.5$--$1\\ \\mathrm{MeV}$, but at low energies it\ngives very short step sizes.\n To cure this problem a lower limit on the stepsize is also introduced.\nThe lower limit of the stepsize is\n the {\\em cut in range} parameter of the program. \n\n There is another disadvantage of the usual differential algorithm,\nwhich can be a serious problem in the case of certain processes. If the\n interaction process has a cross section with narrow peaks in it,\nin this approach the particle can easily skip over these peaks. (This can happen\n in the case of a number of hadronic processes.)\n\n To overcome these shortcomings the {\\em integral approach} has been implemented\n in GEANT4 as an option.\n\n\\section{Particle transport with the {\\em integral} approach}\n\n In this algorithm the integral in eq. \\ref{int.c} is actually computed for every\n process and another equation is used instead of eq. \\ref{int.f} when \n $n_\\lambda$ is updated. This section gives a short overview of the formulae\n used in this approach.\n\n  The change in the number of mean free paths after a {\\em small} step\n can be written as\n\n\\begin{equation}\n\\label{int.g}\n   \\Delta n_\\lambda = \\frac{\\Delta x}{\\lambda (x)} .\n\\end{equation} \n\nThis formula can be rewritten in the following form\n\n\\begin{equation}\n\\label{int.h}\n   \\Delta n_\\lambda = \\frac{dx}{dT}\\cdot \\frac{1}{\\lambda (T)} \\cdot dT  ,\n\\end{equation} \n\nwhere \n\\begin{tabular}[t]{ll}\n$T$         & kinetic energy of the particle \\\\  \n$\\frac{dx}{dT} = \\frac{1}{\\frac{dT}{dx}}$ & inverse of the well-known quantity {\\em $\\frac{dE}{dx}$}. \\\\\n\\end{tabular}\n\nAfter these preliminaries the number of mean free paths (eq. \\ref{int.c})\n can be rewritten as \n\\begin{equation}\n\\label{int.i}\nn_\\lambda (T) = \\int_{0}^{T}\\frac{d\\tau}{f(\\tau)\\cdot \\lambda (\\tau)}   \n\\end{equation}\n\n or\n\n\\begin{equation}\n\\label{int.j}\nn_\\lambda (T) = \\int_{0}^{T}\\frac{\\sigma (\\tau ) \\cdot d\\tau}{f(\\tau) }  \n\\end{equation}\n\nwhere $\\frac{dE}{dx}$ has been denoted by $f()$. Eqs. \\ref{int.i} and \\ref{int.j} are\nthe basic equations of the integral approach. The meaning of the integrals\non the right hand sides of the equations is the number of mean free paths\n for a given process between the initial state of energy $T$ and the stopping\n of the particle (energy $0$). \n\n The steps of the algorithm using the integral approach are the following:\n\\begin{itemize}\n\\item the number of mean free paths the particle travels to the interaction\n      point, $\\Delta n_\\lambda$, is sampled using eq. 
\\ref{int.e}\n\\item the initial value of the quantity $n_\\lambda (T_0)$ is computed \n\\item the (possible) final energy $T$ is calculated from \n  \\begin{equation}\n  \\label{int.k}\n   \\Delta n_\\lambda = n_\\lambda (T_0) - n_\\lambda (T)\n  \\end{equation}\n\n\\item the step limitation originating from the process is\n   \\begin{equation}\n  \\label{int.l}\n   steplimit = r(T_0) - r(T)\n  \\end{equation}\n  where the function $r(T)$ gives the range of the particle for a kinetic\n energy value $T$.\n\\end{itemize}\n \n If the interaction that occurred at the end of the step was some other\nprocess, or the step was limited by\nthe geometry (at a medium boundary), the number of mean free\npaths $\\Delta n_\\lambda$ should be updated. This update can be easily done\n using the function $n_\\lambda (T)$:\n\\begin{equation}\n\\label{int.m}\n \\Delta n_{\\lambda,updated} =  \\Delta n_\\lambda - [n_\\lambda (T_0)-\n                                    n_\\lambda (T) ]  \n\\end{equation}\n  \n\\section{Updating of the proper and laboratory time of the particle}\n\n The proper and laboratory time of the particle should be updated after each step.\nThis update is done differently in the two approaches.\n\n In the differential approach the following formula is used to update the\nparticle time in the laboratory system:\n\n\\begin{equation}\n\\label{int.n}\n  \\Delta t_{lab} = \\frac{\\Delta x}{\\frac{v_0 + v}{2}}\n\\end{equation}\nwhere \n\\[\n\\begin{array}{ll}\n\\Delta x    & \\mbox{step travelled by the particle,} \\\\\n v_0  &\n \\mbox{particle velocity at the beginning of the step} \\\\\n v  &\n \\mbox{particle velocity at the end of the step.} \\\\\n\\end{array}\n\\]\nThis expression is a good approximation for use in the differential\napproach, because in this scheme the velocity is not allowed to\nchange too much during the step. This is not the case in the integral approach:\nthe kinetic energy and the velocity of the particle can change a lot here,\ntherefore the update of the laboratory time is done by using the integral\nexpression\n\\begin{equation}\n\\label{int.o}\n t = \\int_{0}^{s} \\frac{dx}{v(x)}\n\\end{equation}\nwhich can be written as\n\\begin{equation}\n\\label{int.p}\n t = \\int_{0}^{T} \\frac{f(\\tau ) \\cdot d\\tau }{v(\\tau )}.\n\\end{equation}\nIn eqs. \\ref{int.o} and \\ref{int.p}\n\\[\n\\begin{array}{ll}\nt  &  \\mbox{time}  \\\\\nv  &  \\mbox{velocity of the particle} \\\\\ns  &  \\mbox{path length that can be travelled by the particle before stopping} \\\\\nT  &  \\mbox{kinetic energy} \\\\\nf(T) & dE/dx \\\\\n\\end{array}\n\\]\n\n Using eq. \\ref{int.p} the expression for $\\Delta t_{lab}$ is\n\\begin{equation}\n\\label{int.q}\n \\Delta t_{lab} = t(T_0)-t(T)  .\n\\end{equation}\n\n The update of the proper time is performed similarly in the two approaches.\n\n\\section{Status of this document}\n  9.10.98   created by L. Urb\\'an.\n\n\\begin{thebibliography}{99}\n\\bibitem[EGS4]{int.egs4} W.R. 
Nelson et al.: The EGS4 Code System.\n   {\\em SLAC-Report-265, December 1985}\n\\bibitem[GEANT3]{int.geant3}\n  GEANT3 manual, CERN Program Library Long Writeup W5013 (October 1994).\n\\end{thebibliography}\n", "meta": {"hexsha": "890e057d19540c9848509e14826dfb4d8888af7c", "size": 9380, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geant4/generalities/integral.tex", "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_issues_repo_path": "geant4/generalities/integral.tex", "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geant4/generalities/integral.tex", "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.4980544747, "max_line_length": 106, "alphanum_fraction": 0.6962686567, "num_tokens": 2652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127641048444, "lm_q2_score": 0.6688802537704064, "lm_q1q2_score": 0.5711653863522375}}
{"text": "\\chapter{The Programming Language \\textsl{Python}}\nWe have started our lecture with an introduction to set theory.  In my experience, the notions of\nset theory are difficult to master for many students because the concepts introduced in set theory\nare quite abstract.  Fortunately, there is a programming language that supports sets as a basic data type and\nthus enables us to experiment to experiment with set theory.  This is the programming language\n\\href{https://en.wikipedia.org/wiki/Python_(programming_language)}{Python}, which has its own website at\n\\href{http://www.python.org}{\\texttt{python.org}}.\nBy programming in \\textsl{Python}, students can get acquainted with set theory in a playful manner.\nFurthermore, as many interesting problems have a straightforward solution as \\textsl{Python} programs,\nstudents can appreciate the usefulness of abstract notions from set theory by programming in \\textsl{Python}.\nFurthermore, according to \n\\href{https://cacm.acm.org/blogs/blog-cacm/176450-python-is-now-the-most-popular-introductory-teaching-language-at-top-u-s-universities/fulltext}{Philip Guo},\n8 of the top 10 US universities teach \\textsl{Python} in their introductory computer science courses.\n\nThe easiest way to install python and its libraries is via \\href{https://www.anaconda.com/download/}{Anaconda}.\nOn many computers, \\textsl{Python} is already preinstalled.  Nevertheless, even on those systems it is easiest\nto use the \\blue{Anaconda} distribution.  The reason is that Anaconda make it very easy to use different\nversions of python with different libraries.  In this lecture, we will be using the version 3.6 of \\textsl{Python}.\n\n\\section{Starting the Interpreter}\nMy goal is to introduce \\textsl{Python} via a number of rather simple examples.  I will present more\nadvanced features of \\textsl{Python} in later sections, but this section is intended to provide a first\nimpression of the language.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = none,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.0cm,\n                  xrightmargin  = 0.0cm,\n                ]\n    Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33) \n    [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin\n    Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n    >>> \n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{The \\textsl{Python} welcome message.}\n\\label{fig:python}\n\\end{figure}\n\nThe language \\textsl{Python} is an \\blue{interpreted} language.  Hence, there is no need to \\blue{compile} a\nprogram.  Instead, \\textsl{Python} programs can be executed via the interpreter.  The interpreter is started\nby the command:\\footnote{\n  While I am usually in the habit of terminating every sentence with either a full stop, a question\n  mark or an exclamation mark, I refrain from doing so when the sentence ends in a \\textsl{Python} command\n  that is shown on a separate line.  
The reason is that I want to avoid confusion as it can\n  otherwise be hard to understand which part of the line is the command that has to be typed\n  verbatim.\n}\n\\[0.2cm]\n\hspace*{1.3cm}\n\texttt{python}\n\\[0.2cm]\nAfter the interpreter is started, the user sees the output that is shown in Figure \n\ref{fig:python} on page \pageref{fig:python}.  The string\n``\texttt{>>>}'' is the \blue{prompt}.  It signals that the interpreter is waiting for input.\nIf we input the string\n\\[0.2cm]\n\hspace*{1.3cm}\n\texttt{1 + 2}\n\\[0.2cm]\nand press enter, we get the following output:\n\begin{verbatim}\n    3\n    >>> \n\end{verbatim}\nThe interpreter has computed the sum $1+2$, returned the result, and printed another prompt, waiting\nfor more input.  Formally, the command ``\texttt{1 + 2}''\nis a \blue{script}.  Of course, this is a very small script as it consists only of a single expression.\nThe command\n\\[0.2cm]\n\hspace*{1.3cm}\n\texttt{exit()}\n\\[0.2cm]\nterminates the interpreter.  The nice thing about \textsl{Python} is that we can run \textsl{Python} even in a\nbrowser in so-called \href{https://en.wikipedia.org/wiki/Project_Jupyter}{Jupyter notebooks}.  If you have\ninstalled \textsl{Python} by means of the \href{https://www.anaconda.com/download/}{Anaconda} distribution,\nthen you have already installed Jupyter.  The following subsection contains the \blue{Jupyter notebook} \n\href{https://github.com/karlstroetmann/Logik/blob/master/Python/Introduction.ipynb}{\texttt{Introduction.ipynb}}.\nYou should download this notebook from my GitHub page and try the examples on your own computer.  Of course,\nfor this to work you first have to install \href{https://jupyter.org}{Jupyter}.\n\n\section{\texorpdfstring{An Introduction to\n\textsl{Python}}{An Introduction to Python}}\label{an-introduction-to-python}\nThis \textsl{Python} notebook gives a short introduction to \textsl{Python}.\nWe will start with the basics, but as the main goal of this introduction\nis to show how \textsl{Python} supports \blue{sets}, we will quickly move\nto more advanced topics. In order to show off the features of\n\textsl{Python} we will give some examples that are not fully explained at\nthe point where we introduce them. However, rest assured that they will\nbe explained eventually.\n\n\subsection{Evaluating expressions}\label{evaluating-expressions}\nAs \textsl{Python} is an interactive language, expressions can be evaluated\ndirectly. In a \blue{Jupyter} notebook we just have to type Ctrl-Enter\nin the cell containing the expression. Instead of Ctrl-Enter we can also\nuse Shift-Enter.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}1}]:} \PY{l+m+mi}{1} \PY{o}{+} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}1}]:} 3\n\end{Verbatim}\n            \nIn \textsl{Python}, the precision of integers is not bounded. 
Hence, the\nfollowing expression does not cause an overflow.\n\n    \\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In[{\\color{incolor}2}]:}\\PY{l+m+mi}{1}\\PY{o}{*}\\PY{l+m+mi}{2}\\PY{o}{*}\\PY{l+m+mi}{3}\\PY{o}{*}\\PY{l+m+mi}{4}\\PY{o}{*}\\PY{l+m+mi}{5}\\PY{o}{*}\\PY{l+m+mi}{6}\\PY{o}{*}\\PY{l+m+mi}{7}\\PY{o}{*}\\PY{l+m+mi}{8}\\PY{o}{*}\\PY{l+m+mi}{9}\\PY{o}{*}\\PY{l+m+mi}{10}\\PY{o}{*}\\PY{l+m+mi}{11}\\PY{o}{*}\\PY{l+m+mi}{12}\\PY{o}{*}\\PY{l+m+mi}{13}\\PY{o}{*}\\PY{l+m+mi}{14}\\PY{o}{*}\\PY{l+m+mi}{15}\\PY{o}{*}\\PY{l+m+mi}{16}\\PY{o}{*}\\PY{l+m+mi}{17}\\PY{o}{*}\\PY{l+m+mi}{18}\\PY{o}{*}\\PY{l+m+mi}{19}\\PY{o}{*}\\PY{l+m+mi}{20}\\PY{o}{*}\\PY{l+m+mi}{21}\\PY{o}{*}\\PY{l+m+mi}{22}\\PY{o}{*}\\PY{l+m+mi}{23}\\PY{o}{*}\\PY{l+m+mi}{24}\\PY{o}{*}\\PY{l+m+mi}{25}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}2}]:} 15511210043330985984000000\n\\end{Verbatim}\n            \n    The next \\blue{cell} in this notebook shows how to compute the\n\\blue{factorial} of 1000, i.e. it shows how to compute the product\n\\[ 1000! = 1 * 2 * 3 * \\cdots * 998 * 999 * 1000 \\] It uses some\nadvanced features from \\blue{functional programming} that will be\ndiscussed at a later stage of this introduction.\n\n    \\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}3}]:} \\PY{k+kn}{import} \\PY{n+nn}{functools} \n        \n        \\PY{n}{functools}\\PY{o}{.}\\PY{n}{reduce}\\PY{p}{(}\\PY{k}{lambda} \\PY{n}{x}\\PY{p}{,} \\PY{n}{y}\\PY{p}{:} \\PY{p}{(}\\PY{n}{x}\\PY{o}{*}\\PY{n}{y}\\PY{p}{)}\\PY{p}{,} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{l+m+mi}{1001}\\PY{p}{)}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}3}]:}\n402387260077093773543702433923003985719374864210714632543799910429938512\n398629020592044208486969404800479988610197196058631666872994808558901323\n829669944590997424504087073759918823627727188732519779505950995276120874\n975462497043601418278094646496291056393887437886487337119181045825783647\n849977012476632889835955735432513185323958463075557409114262417474349347\n553428646576611667797396668820291207379143853719588249808126867838374559\n731746136085379534524221586593201928090878297308431392844403281231558611\n036976801357304216168747609675871348312025478589320767169132448426236131\n412508780208000261683151027341827977704784635868170164365024153691398281\n264810213092761244896359928705114964975419909342221566832572080821333186\n116811553615836546984046708975602900950537616475847728421889679646244945\n160765353408198901385442487984959953319101723355556602139450399736280750\n137837615307127761926849034352625200015888535147331611702103968175921510\n907788019393178114194545257223865541461062892187960223838971476088506276\n862967146674697562911234082439208160153780889893964518263243671616762179\n168909779911903754031274622289988005195444414282012187361745992642956581\n746628302955570299024324153181617210465832036786906117260158783520751516\n284225540265170483304226143974286933061690897968482590125458327168226458\n066526769958652682272807075781391858178889652208164348344825993266043367\n660176999612831860788386150279465955131156552036093988180612138558600301\n435694527224206344631797460594682573103790084024432438465657245014402821\n885252470935190620929023136493273497565513958720559654228749774011413346\n962715422845862377387538230483865688976461927383814900140767310446640259\n899490222221765904339901886018566526485
061799702356193897017860040811889\n729918311021171229845901641921068884387121855646124960798722908519296819\n372388642614839657382291123125024186649353143970137428531926649875337218\n940694281434118520158014123344828015051399694290153483077644569099073152\n433278288269864602789864321139083506217095002597389863554277196742822248\n757586765752344220207573630569498825087968928162753848863396909959826280\n956121450994871701244516461260379029309120889086942028510640182154399457\n156805941872748998094254742173582401063677404595741785160829230135358081\n840096996372524230560855903700624271243416909004153690105933983835777939\n410970027753472000000000000000000000000000000000000000000000000000000000\n000000000000000000000000000000000000000000000000000000000000000000000000\n000000000000000000000000000000000000000000000000000000000000000000000000\n000000000000000000000000000000000000000000000000\n\end{Verbatim}\n            \nThe following command will stop the interpreter if executed. It is not\nuseful inside a \blue{Jupyter} notebook. \nHence, the next line should not be evaluated. Therefore, I have put a\ncomment character \"\#\" in the first column of this line.\n\nHowever, if you do remove the comment character and then evaluate the\nline, nothing bad will happen as the interpreter is just restarted by\n\blue{Jupyter}.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}4}]:} \PY{c+c1}{\PYZsh{} exit()}\n\end{Verbatim}\n\nIn order to write something to the screen, we can use the function\nprint. This function can print objects of any type. In the following\nexample, this function prints a string. In \textsl{Python} any character\nsequence enclosed in single quotes is a string.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}5}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Hello, World!}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\nHello, World!\n\end{Verbatim}\n\nInstead of using single quotes we can also use double quotes as seen in\nthe next example.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}6}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Hello, World!}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\nHello, World!\n\end{Verbatim}\n\nThe function print accepts any number of arguments. For example, to\nprint the string \"36 * 37 / 2 = \" followed by the value of the\nexpression \(36 \cdot 37 / 2\) we can use the following print statement:\n\n    \begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}7}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{36 * 37 / 2 =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+m+mi}{36} \PY{o}{*} \PY{l+m+mi}{37} \PY{o}{/}\PY{o}{/} \PY{l+m+mi}{2}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n36 * 37 / 2 = 666\n\end{Verbatim}\n\nIn the expression \"36 * 37 // 2\" we have used the operator \"//\" in order\nto enforce \blue{integer division}. 
If we had used the operator \"/\"\ninstead, \textsl{Python} would have used \blue{floating point division}\nand therefore would have printed the floating point number 666.0 instead\nof the integer 666.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}8}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{36 * 37 / 2 =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{l+m+mi}{36} \PY{o}{*} \PY{l+m+mi}{37} \PY{o}{/} \PY{l+m+mi}{2}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n36 * 37 / 2 = 666.0\n\end{Verbatim}\n\n\noindent\nThe following script reads a natural number \(n\) and computes the sum\n\(\sum\limits_{i=1}^n i\).\n\begin{enumerate}\n\item The function \texttt{input} prompts the user to enter a string.\n\item This string is then converted into an integer using the function \texttt{int}.\n\item Next, the \blue{set} \texttt{s} is created such that \n      $$\texttt{s} = \{1, \cdots, n\}. $$  \n      The set \texttt{s} is constructed using the function \texttt{range}.  A function call \n      of the form $\texttt{range}(a, b + 1)$ returns a \blue{generator} that produces the natural numbers \n      from $a$ to $b$.  By using this generator as an argument to the function \texttt{set}, a set is created \n      that contains all the natural numbers from $a$ up to and including $b$.\n      The precise mechanics of \blue{generators} will be explained later.\n\item The \texttt{print} statement uses the function \texttt{sum} to add up all the elements of the\n      set \texttt{s} and print the resulting sum.\n\end{enumerate}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}9}]:} \PY{n}{n} \PY{o}{=} \PY{n+nb}{input}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Type a natural number and press return: }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n        \PY{n}{n} \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n}{n}\PY{p}{)}\n        \PY{n}{s} \PY{o}{=} \PY{n+nb}{set}\PY{p}{(}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\n        \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{The sum 1 + 2 + ... + }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{n}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ is equal to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n+nb}{sum}\PY{p}{(}\PY{n}{s}\PY{p}{)}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{sep}\PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nType a natural number and press return: 36\nThe sum 1 + 2 + {\ldots} + 36 is equal to 666.\n\end{Verbatim}\n\nThe following example shows how \blue{functions} can be defined in\n\textsl{Python}. The function \(\texttt{sum}(n)\) is supposed to compute\nthe sum of all the numbers in the set \(\{1, \cdots, n\}\). Therefore, we have\n \[\texttt{sum}(n) = \sum\limits_{i=1}^n i. \]\nThe function sum is defined \blue{recursively}. 
The recursive\nimplementation of the function sum can best be understood if we observe\nthat it satisfies the following two equations:\n\begin{enumerate}\n\item $\texttt{sum}(0) = 0$, \n\item $\texttt{sum}(n) = \texttt{sum}(n-1) + n \quad$  provided that $n > 0$.\n\end{enumerate}\n\n    \begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}10}]:} \PY{k}{def} \PY{n+nf}{sum}\PY{p}{(}\PY{n}{n}\PY{p}{)}\PY{p}{:}\n             \PY{k}{if} \PY{n}{n} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{:}\n                 \PY{k}{return} \PY{l+m+mi}{0}\n             \PY{k}{return} \PY{n+nb}{sum}\PY{p}{(}\PY{n}{n}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{+} \PY{n}{n}\n\end{Verbatim}\n Let us discuss the implementation of the function sum line by line:\n\n\begin{enumerate}\n\item The keyword \texttt{def} starts the \blue{definition} of the function. It is followed by the \blue{name} of the function that is defined.  The name is followed by the list of the \blue{parameters} of the function.  This list is enclosed in parentheses. If there is more than one parameter, the parameters have to be separated by commas.  Finally, there needs to be a colon at the end of the first line.\n\item The \blue{body} of the function is indented.   \textbf{Contrary} to most other programming languages, \n     \textsl{Python} \blue{\underline{is space sensitive}}.\n    \n     The first statement of the body is a \blue{conditional} statement, which starts with the keyword\n     \texttt{if}.  The keyword is followed by a test.  In this case we test whether the variable $n$ \n     is equal to the number $0$.  Note that this test is followed by a colon.\n\item The next line contains a \texttt{return} statement.  Note that this statement is again indented.\n     All statements indented by the same amount that follow an \texttt{if}-statement are considered to be\n     the \blue{body} of this \texttt{if}-statement, i.e. they get executed if the test of the\n     \texttt{if}-statement is true.  In this case the body contains only a single statement.\n\n\item The last line of the function definition contains the recursive invocation of the function \texttt{sum}.\n\end{enumerate}\nUsing the function sum, we can compute the sum \(\sum\limits_{i=1}^n i\)\nas follows:\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}11}]:} \PY{n}{n}     \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n+nb}{input}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Enter a natural number: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{)}\n         \PY{n}{total} \PY{o}{=} \PY{n+nb}{sum}\PY{p}{(}\PY{n}{n}\PY{p}{)}\n         \PY{k}{if} \PY{n}{n} \PY{o}{\PYZgt{}} \PY{l+m+mi}{2}\PY{p}{:}\n             \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{0 + 1 + 2 + ... + }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{n}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ = }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{total}\PY{p}{,} \PY{n}{sep}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n         \PY{k}{else}\PY{p}{:} \n             \PY{n+nb}{print}\PY{p}{(}\PY{n}{total}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\nEnter a natural number: 100\n0 + 1 + 2 + {\ldots} + 100 = 5050\n\end{Verbatim}\n\n\subsection{\texorpdfstring{Sets in\n\textsl{Python}}{Sets in Python}}\label{sets-in-python}\n\textsl{Python} supports \blue{sets} as a \textbf{native} datatype. 
This is one of the reasons that have\nled me to choose \textsl{Python} as the programming language for this\ncourse. To get a first impression of how sets are handled in \textsl{Python},\nlet us define two simple sets \(A\) and \(B\) and print them:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}12}]:} \PY{n}{A} \PY{o}{=} \PY{p}{\PYZob{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{\PYZcb{}}\n         \PY{n}{B} \PY{o}{=} \PY{p}{\PYZob{}}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{\PYZcb{}}\n         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{A = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{A}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{, B = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{B}\PY{p}{,} \PY{n}{sep}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\nA = \{1, 2, 3\}, B = \{2, 3, 4\}\n\end{Verbatim}\n\nThe last argument \texttt{sep=''} prevents the print statement from separating its arguments with space characters.\nWhen defining the empty set, there is a caveat, as we cannot define the empty set using the\nexpression \texttt{\{\}}.  The reason is that this expression creates the empty \blue{dictionary} instead. (We will\ndiscuss the data type of \blue{dictionaries} later.) To define the empty set, we therefore have\nto use the following expression:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}13}]:} \PY{n+nb}{set}\PY{p}{(}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}13}]:} set()\n\end{Verbatim}\nNote that the empty set is also printed as \texttt{set()} in \textsl{Python} and not as \texttt{\{\}}.\nNext, let us compute the union \(A \cup B\). This is done using the\nfunction \texttt{union}.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}14}]:} \PY{n}{A}\PY{o}{.}\PY{n}{union}\PY{p}{(}\PY{n}{B}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}14}]:} \{1, 2, 3, 4\}\n\end{Verbatim}\n            \nAs the function union really acts like a \blue{method}, you might\nsuspect that it does change its first argument. 
Fortunately, this is not\nthe case: \(A\) is unchanged, as you can see in the next line:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}15}]:} \PY{n}{A}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}15}]:} \{1, 2, 3\}\n\end{Verbatim}            \nTo compute the intersection \(A \cap B\), we use the function \texttt{intersection}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}16}]:} \PY{n}{A}\PY{o}{.}\PY{n}{intersection}\PY{p}{(}\PY{n}{B}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}16}]:} \{2, 3\}\n\end{Verbatim}       \nAgain \(A\) is not changed.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}17}]:} \PY{n}{A}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}17}]:} \{1, 2, 3\}\n\end{Verbatim}       \nThe difference \(A \backslash B\) is computed using the operator \"\texttt{-}\":\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}18}]:} \PY{n}{A} \PY{o}{\PYZhy{}} \PY{n}{B}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}18}]:} \{1\}\n\end{Verbatim}           \nIt is easy to test whether \(A \subseteq B\) holds:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}19}]:} \PY{n}{A} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{B}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}19}]:} False\n\end{Verbatim}           \nTesting whether an object \(x\) is an element of a set \(M\), i.e. to\ntest whether \(x \in M\) holds, is straightforward:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}20}]:} \PY{l+m+mi}{1} \PY{o+ow}{in} \PY{n}{A}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}20}]:} True\n\end{Verbatim}          \nOn the other hand, the number \(1\) is not an element of the set \(B\),\ni.e. we have \(1 \not\in B\):\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}21}]:} \PY{l+m+mi}{1} \PY{o+ow}{not} \PY{o+ow}{in} \PY{n}{B}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}21}]:} True\n\end{Verbatim}\n            \n\subsection{Defining Sets via Selection and\nImages}\label{defining-sets-via-selection-and-images}\nRemember that we can define subsets of a given set \(M\) via the axiom of\nselection. If \(p\) is a property such that for any object \(x\) from\nthe set \(M\) the expression \(p(x)\) is either True or False, the\nsubset of all those elements of \(M\) such that \(p(x)\) is True can be\ndefined as\n \[ \{ x \in M \mid p(x) \}. \] \nFor example, if \(M\) is the\nset \(\{1, \cdots, 100\}\) and we want to compute the subset of this set\nthat contains all numbers from \(M\) that are divisible by \(7\), then\nthis set can be defined as \n\[ \{ x \in M \mid x \;\texttt{\%}\; 7 == 0 \}. 
\\]\nIn \\textsl{Python}, the definition of this set can be given as follows:\n\n    \\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}22}]:} \\PY{n}{M} \\PY{o}{=} \\PY{n+nb}{set}\\PY{p}{(}\\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{l+m+mi}{101}\\PY{p}{)}\\PY{p}{)}\n         \\PY{p}{\\PYZob{}} \\PY{n}{x} \\PY{k}{for} \\PY{n}{x} \\PY{o+ow}{in} \\PY{n}{M} \\PY{k}{if} \\PY{n}{x} \\PY{o}{\\PYZpc{}} \\PY{l+m+mi}{7} \\PY{o}{==} \\PY{l+m+mi}{0} \\PY{p}{\\PYZcb{}}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}22}]:} \\{7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98\\}\n\\end{Verbatim}\nIn general, in \\textsl{Python} the set\n \\[ \\{ x \\in M \\mid p(x) \\} \\] \nis computed by the expression \n\\[ \\{\\; x\\; \\texttt{for}\\; x\\; \\texttt{in}\\; M\\; \\texttt{if}\\; p(x)\\; \\}. \\]\n\\blue{Image} sets can be computed in a similar way. If \\(f\\) is a\nfunction defined for all elements of a set \\(M\\), the image set\n\\[ \\{ f(x) \\mid x \\in M \\} \\] can be computed in \\textsl{Python} as\nfollows: \\[ \\{\\; f(x)\\; \\texttt{for}\\; x\\; \\texttt{in}\\; M\\; \\}. \\] For\nexample, the following expression computes the set of all squares of\nnumbers from the set \\(\\{1,\\cdots,10\\}\\):\n\n    \\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}23}]:} \\PY{n}{M} \\PY{o}{=} \\PY{n+nb}{set}\\PY{p}{(}\\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,}\\PY{l+m+mi}{11}\\PY{p}{)}\\PY{p}{)}\n         \\PY{p}{\\PYZob{}} \\PY{n}{x}\\PY{o}{*}\\PY{n}{x} \\PY{k}{for} \\PY{n}{x} \\PY{o+ow}{in} \\PY{n}{M} \\PY{p}{\\PYZcb{}}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}23}]:} \\{1, 4, 9, 16, 25, 36, 49, 64, 81, 100\\}\n\\end{Verbatim}\n            \nThe computation of image sets and selections can be combined. If \\(M\\)\nis a set, \\(p\\) is a property such that \\(p(x)\\) is either True or False\nfor elements of \\(M\\), and \\(f\\) is a function such that \\(f(x)\\) is\ndefined for all \\(x \\in M\\) then we can compute set\n\\[ \\{ f(x) \\mid  x \\in M \\wedge p(x) \\} \\] of all images \\(f(x)\\) from\nthose \\(x\\in M\\) that satisfy the property \\(p(x)\\) via the expression\n\\[ \\{\\; f(x)\\; \\texttt{for}\\; x\\; \\texttt{in}\\; M\\; \\texttt{if}\\; p(x)\\; \\}. \\]\nFor example, to compute the set of those squares of numbers from the set\n\\(\\{1,\\cdots,10\\}\\) that are even we can write\n\n    \\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}24}]:} \\PY{n}{M} \\PY{o}{=} \\PY{n+nb}{set}\\PY{p}{(}\\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,}\\PY{l+m+mi}{11}\\PY{p}{)}\\PY{p}{)}\n         \\PY{p}{\\PYZob{}} \\PY{n}{x}\\PY{o}{*}\\PY{n}{x} \\PY{k}{for} \\PY{n}{x} \\PY{o+ow}{in} \\PY{n}{M} \\PY{k}{if} \\PY{n}{x} \\PY{o}{\\PYZpc{}} \\PY{l+m+mi}{2} \\PY{o}{==} \\PY{l+m+mi}{0} \\PY{p}{\\PYZcb{}}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}24}]:} \\{4, 16, 36, 64, 100\\}\n\\end{Verbatim}\n            \nWe can iterate over more than one set. For example, let us define the\nset of all products \\(p \\cdot q\\) of numbers \\(p\\) and \\(q\\) from the\nset \\(\\{2, \\cdots, 10\\}\\), i.e. we intend to define the set\n\\[ \\bigl\\{ p \\cdot q \\bigm| p \\in \\{2,\\cdots,10\\} \\wedge q \\in \\{2,\\cdots,10\\} \\bigr\\}. 
\\]\nIn \\textsl{Python}, this set is defined as follows:\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}25}]:} \\PY{n+nb}{print}\\PY{p}{(}\\PY{p}{\\PYZob{}} \\PY{n}{p} \\PY{o}{*} \\PY{n}{q} \\PY{k}{for} \\PY{n}{p} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{,}\\PY{l+m+mi}{11}\\PY{p}{)} \\PY{k}{for} \\PY{n}{q} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{,}\\PY{l+m+mi}{11}\\PY{p}{)} \\PY{p}{\\PYZcb{}}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n\\{4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 25, 27, 28, 30, 32, 35,\n 36, 40, 42, 45, 48, 49, 50, 54, 56, 60, 63, 64, 70, 72, 80, 81, 90, 100\\}\n\\end{Verbatim}\nWe can use this set to compute the set of \\blue{prime numbers}. After\nall, the set of prime numbers is the set of all those natural numbers\nbigger than \\(1\\) that can not be written as a proper product, that is a\nnumber \\(x\\) is \\blue{prime} if\n\n\\begin{enumerate}\n\\item $x$ is bigger than $1$ and \n\\item there are no natural numbers $x$ and $y$ both bigger than $1$ such that $x = p * q$ holds.\n\\end{enumerate}\nMore formally, the set \\(\\mathbb{P}\\) of prime numbers is defined as\nfollows:\n\\[ \\mathbb{P} = \\bigl\\{ x \\in \\mathbb{N} \\;\\bigm|\\; x > 1 \\wedge \\neg \\exists p, q \\in \\mathbb{N}: (x = p \\cdot q \\wedge p > 1 \\wedge q > 1)\\bigr\\}. \\]\nHence the following code computes the set of all primes less than 100:\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}26}]:} \\PY{n}{s} \\PY{o}{=} \\PY{n+nb}{set}\\PY{p}{(}\\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{,}\\PY{l+m+mi}{100}\\PY{p}{)}\\PY{p}{)}\n         \\PY{n+nb}{print}\\PY{p}{(}\\PY{n}{s} \\PY{o}{\\PYZhy{}} \\PY{p}{\\PYZob{}} \\PY{n}{p} \\PY{o}{*} \\PY{n}{q} \\PY{k}{for} \\PY{n}{p} \\PY{o+ow}{in} \\PY{n}{s} \\PY{k}{for} \\PY{n}{q} \\PY{o+ow}{in} \\PY{n}{s} \\PY{p}{\\PYZcb{}}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n\\{ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,\n  67, 71, 73, 79, 83, 89, 97\n\\}\n\\end{Verbatim}\nAn alternative way to compute primes works by noting that a number \\(p\\)\nis prime iff there is no number \\(t\\) other than \\(1\\) and \\(p\\) that\ndivides the number \\(p\\). 
The function \texttt{dividers} given below computes the\nset of all numbers dividing a given number \(p\) evenly:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}27}]:} \PY{k}{def} \PY{n+nf}{dividers}\PY{p}{(}\PY{n}{p}\PY{p}{)}\PY{p}{:}\n             \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Compute the set of numbers that divide the number p.}\PY{l+s+s2}{\PYZdq{}}\n             \PY{k}{return} \PY{p}{\PYZob{}} \PY{n}{t} \PY{k}{for} \PY{n}{t} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{p}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{k}{if} \PY{n}{p} \PY{o}{\PYZpc{}} \PY{n}{t} \PY{o}{==} \PY{l+m+mi}{0} \PY{p}{\PYZcb{}}\n         \n         \PY{n}{n}      \PY{o}{=} \PY{l+m+mi}{100}\n         \PY{n}{primes} \PY{o}{=} \PY{p}{\PYZob{}} \PY{n}{p} \PY{k}{for} \PY{n}{p} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{n}\PY{p}{)} \PY{k}{if} \PY{n}{dividers}\PY{p}{(}\PY{n}{p}\PY{p}{)} \PY{o}{==} \PY{p}{\PYZob{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{p}\PY{p}{\PYZcb{}} \PY{p}{\PYZcb{}}\n         \PY{n+nb}{print}\PY{p}{(}\PY{n}{primes}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n\{2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97\}\n\end{Verbatim}\n\n\subsection{Computing the Power Set}\label{computing-the-power-set}\nUnfortunately, there is no operator to compute the power set \(2^M\) of\na given set \(M\). Since the power set is needed frequently, we have to\nimplement a function \texttt{power} to compute this set ourselves. The easiest\nway to compute the power set \(2^M\) of a set \(M\) is to implement the\nfollowing recursive equations:\n\begin{enumerate}\n\item The power set of the empty set contains only the empty set:\n    $$2^{\{\}} = \bigl\{\{\}\bigr\}$$\n\item If a set $M$ can be written as $M = C \cup \{x\}$, where the element $x$ does not occur in the set $C$, then the power set $2^M$ consists of two sets:\n      \begin{itemize}\n        \item Firstly, all subsets of $C$ are also subsets of $M$.\n        \item Secondly, if $A$ is a subset of $C$, then the set $A \cup\{x\}$ is also a subset of $M$.\n      \end{itemize}\n    If we combine these parts, we get the following equation:\n    $$2^{C \cup \{x\}} = 2^C \cup \bigl\{ A \cup \{x\} \bigm| A \in 2^C \bigr\}$$\n\end{enumerate}\nBut there is another problem: In \textsl{Python} we can't create a set\nthat has elements that are sets themselves! The reason is that in\n\textsl{Python} sets are implemented via \blue{hash tables} and therefore\nthe elements of a set need to be \blue{hashable}. (The notion of an\nelement being \blue{hashable} will be discussed in more detail in the\nlecture on \blue{Algorithms}.) However, sets are \blue{mutable} and\n\blue{mutable} objects are not \blue{hashable}. Fortunately, there is a\nworkaround: \textsl{Python} provides the data type of \blue{frozen sets}.\nThese sets behave like sets but lack certain functions that modify sets and\nhence are immutable. So if we use \blue{frozen sets} as elements of the\npower set, we can compute the power set of a given set. 
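\nAs a quick illustration of this point (a small interpreter sketch, not part of the original notebook; the traceback is shortened): nesting an ordinary set raises a \texttt{TypeError}, while a frozen set is accepted:\n\begin{verbatim}\n    >>> {{1, 2}}              # an ordinary set is mutable, hence not hashable\n    TypeError: unhashable type: 'set'\n    >>> {frozenset({1, 2})}   # a frozen set is immutable and hashable\n    {frozenset({1, 2})}\n\end{verbatim}\n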
The function\n\\texttt{power} given below shows how this works.\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}28}]:} \\PY{k}{def} \\PY{n+nf}{power}\\PY{p}{(}\\PY{n}{M}\\PY{p}{)}\\PY{p}{:}\n             \\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{This function computes the power set of the set M.}\\PY{l+s+s2}{\\PYZdq{}}\n             \\PY{k}{if} \\PY{n}{M} \\PY{o}{==} \\PY{n+nb}{set}\\PY{p}{(}\\PY{p}{)}\\PY{p}{:}\n                 \\PY{k}{return} \\PY{p}{\\PYZob{}} \\PY{n+nb}{frozenset}\\PY{p}{(}\\PY{p}{)} \\PY{p}{\\PYZcb{}}\n             \\PY{k}{else}\\PY{p}{:}\n                 \\PY{n}{C}  \\PY{o}{=} \\PY{n+nb}{set}\\PY{p}{(}\\PY{n}{M}\\PY{p}{)}  \\PY{c+c1}{\\PYZsh{} C is a copy of M as we don\\PYZsq{}t want to change the set M}\n                 \\PY{n}{x}  \\PY{o}{=} \\PY{n}{C}\\PY{o}{.}\\PY{n}{pop}\\PY{p}{(}\\PY{p}{)} \\PY{c+c1}{\\PYZsh{} pop removes the element x from the set C}\n                 \\PY{n}{P1} \\PY{o}{=} \\PY{n}{power}\\PY{p}{(}\\PY{n}{C}\\PY{p}{)}\n                 \\PY{n}{P2} \\PY{o}{=} \\PY{p}{\\PYZob{}} \\PY{n}{A}\\PY{o}{.}\\PY{n}{union}\\PY{p}{(}\\PY{p}{\\PYZob{}}\\PY{n}{x}\\PY{p}{\\PYZcb{}}\\PY{p}{)} \\PY{k}{for} \\PY{n}{A} \\PY{o+ow}{in} \\PY{n}{P1} \\PY{p}{\\PYZcb{}}\n                 \\PY{k}{return} \\PY{n}{P1}\\PY{o}{.}\\PY{n}{union}\\PY{p}{(}\\PY{n}{P2}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}29}]:} \\PY{n}{power}\\PY{p}{(}\\PY{n}{A}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}29}]:} \\{frozenset(),\n          frozenset(\\{3\\}),\n          frozenset(\\{1\\}),\n          frozenset(\\{2\\}),\n          frozenset(\\{1, 2\\}),\n          frozenset(\\{2, 3\\}),\n          frozenset(\\{1, 3\\}),\n          frozenset(\\{1, 2, 3\\})\\}\n\\end{Verbatim}\nLet us print this in a more readable way. 
To this end we implement a\nfunction \texttt{prettify} that turns a set of frozen sets into a string that\nlooks like a set of sets.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}30}]:} \PY{k}{def} \PY{n+nf}{prettify}\PY{p}{(}\PY{n}{M}\PY{p}{)}\PY{p}{:}\n             \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}Turn the set of frozen sets M into a string that looks like a set of sets.}\n         \PY{l+s+sd}{       M is assumed to be the power set of some set.}\n         \PY{l+s+sd}{    \PYZdq{}\PYZdq{}\PYZdq{}}\n             \PY{n}{result} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZob{}\PYZob{}}\PY{l+s+s2}{\PYZcb{}, }\PY{l+s+s2}{\PYZdq{}}   \PY{c+c1}{\PYZsh{} The empty set is always an element of a power set.}\n             \PY{k}{for} \PY{n}{A} \PY{o+ow}{in} \PY{n}{M}\PY{p}{:}\n                 \PY{k}{if} \PY{n}{A} \PY{o}{==} \PY{n+nb}{set}\PY{p}{(}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} The empty set has already been taken care of.}\n                     \PY{k}{continue}\n                 \PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{n+nb}{str}\PY{p}{(}\PY{n+nb}{set}\PY{p}{(}\PY{n}{A}\PY{p}{)}\PY{p}{)} \PY{o}{+} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{, }\PY{l+s+s2}{\PYZdq{}} \PY{c+c1}{\PYZsh{} A is converted from a frozen set to a set}\n             \PY{n}{result} \PY{o}{=} \PY{n}{result}\PY{p}{[}\PY{p}{:}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{]} \PY{c+c1}{\PYZsh{} remove the trailing substring \PYZdq{}, \PYZdq{}}\n             \PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZcb{}}\PY{l+s+s2}{\PYZdq{}}\n             \PY{k}{return} \PY{n}{result}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}31}]:} \PY{n}{prettify}\PY{p}{(}\PY{n}{power}\PY{p}{(}\PY{n}{A}\PY{p}{)}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}31}]:} '\{\{\}, \{3\}, \{1, 2\}, \{2, 3\}, \{1\}, \{1, 3\}, \{1, 2, 3\}, \{2\}\}'\n\end{Verbatim}\n            \n\subsection{Pairs and Cartesian\nProducts}\label{pairs-and-cartesian-products}\nIn \textsl{Python}, pairs can be created by enclosing the components of\nthe pair in parentheses. For example, to compute the pair\n\(\langle 1, 2 \rangle\) we can write:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}32}]:} \PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}32}]:} (1, 2)\n\end{Verbatim}\nIt is not even necessary to enclose the components of a pair in\nparentheses. 
For example, to compute the pair \(\langle 1, 2 \rangle\)\nwe can use the following expression:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}33}]:} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}33}]:} (1, 2)\n\end{Verbatim}\nThe Cartesian product \(A \times B\) of two sets \(A\) and \(B\) can now\nbe computed via the following expression:\n\[ \{\; (x, y) \;\texttt{for}\; x \;\texttt{in}\; A\; \texttt{for}\; y\; \texttt{in}\; B\; \} \]\nFor example, as we have defined \(A\) as \(\{1,2,3\}\) and \(B\) as\n\(\{2,3,4\}\), the Cartesian product of \(A\) and \(B\) is computed as\nfollows:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}34}]:} \PY{p}{\PYZob{}} \PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{y}\PY{p}{)} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n}{A} \PY{k}{for} \PY{n}{y} \PY{o+ow}{in} \PY{n}{B} \PY{p}{\PYZcb{}}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}34}]:} \{(1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 2), (3, 3), (3, 4)\}\n\end{Verbatim}\n            \n\subsection{Tuples}\label{tuples}\nThe notion of a tuple is a generalization of the notion of a pair. For example, to compute the tuple\n\(\langle 1, 2, 3 \rangle\) we can use the following expression:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}35}]:} \PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}35}]:} (1, 2, 3)\n\end{Verbatim}\nLonger tuples can be built using the function \texttt{range} in combination with the function \texttt{tuple}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}36}]:} \PY{n+nb}{tuple}\PY{p}{(}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{)}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}36}]:} (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)\n\end{Verbatim}\nTuples can be \blue{concatenated} using the operator \"\texttt{+}\":\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}37}]:} \PY{n}{T1} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{)}\n         \PY{n}{T2} \PY{o}{=} \PY{p}{(}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{)}\n         \PY{n}{T3} \PY{o}{=} \PY{n}{T1} \PY{o}{+} \PY{n}{T2}\n         \PY{n}{T3}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}37}]:} (1, 2, 3, 4, 5, 6)\n\end{Verbatim}\nThe \blue{length} of a tuple is computed using the function \texttt{len}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}38}]:} \PY{n+nb}{len}\PY{p}{(}\PY{n}{T3}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}38}]:} 6\n\end{Verbatim}\n            \nThe components of a tuple can be extracted using square brackets. Note\nthat the first component actually has the index \(0\)! 
This is similar\nto the behaviour of \blue{arrays} in the programming language C.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}39}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[0] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\n         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[1] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\n         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[2] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nT3[0] = 1\nT3[1] = 2\nT3[2] = 3\n\end{Verbatim}\nIf we use negative indices, then we index from the back of the tuple, as\nshown in the following example:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}40}]:} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[\PYZhy{}1] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} last element}\n         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[\PYZhy{}2] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} penultimate element}\n         \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{T3[\PYZhy{}3] =}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{T3}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} third last element }\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nT3[-1] = 6\nT3[-2] = 5\nT3[-3] = 4\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}41}]:} \PY{n}{T3}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}41}]:} (1, 2, 3, 4, 5, 6)\n\end{Verbatim}\n            \nThe \blue{slicing} operator extracts a subtuple from a given tuple. If\n\(L\) is a tuple and \(a\) and \(b\) are natural numbers such that\n\(a \leq b\) and \(a, b \in \{0, \cdots, \texttt{len}(L)\}\), then the syntax of\nthe slicing operator is as follows: \[ L[a:b] \] The expression\n\(L[a:b]\) extracts the subtuple that starts with the element \(L[a]\)\nand extends up to, but excluding, the element \(L[b]\). The following shows an\nexample:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{L} \PY{o}{=} \PY{n+nb}{tuple}\PY{p}{(}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{11}\PY{p}{)}\PY{p}{)}\n         \PY{n}{L}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{:}\PY{l+m+mi}{6}\PY{p}{]}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}42}]:} (3, 4, 5, 6)\n\end{Verbatim}\n            \nSlicing works with negative indices, too:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}43}]:} \PY{n}{L}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{:}\PY{o}{\PYZhy{}}\PY{l+m+mi}{2}\PY{p}{]}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}43}]:} (3, 4, 5, 6, 7, 8)\n\end{Verbatim}\n            \n
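As a brief side note (a small interpreter sketch, not part of the original notebook): if one of the bounds of a slice is omitted, \textsl{Python} uses the start or the end of the tuple as the default, respectively:\n\begin{verbatim}\n    >>> L = tuple(range(1, 11))\n    >>> L[:3]    # everything up to, but excluding, L[3]\n    (1, 2, 3)\n    >>> L[7:]    # everything from L[7] onwards\n    (8, 9, 10)\n\end{verbatim}\n\n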
\subsection{Lists}\label{lists}\nNext, we discuss the data type of lists. Lists are a lot like tuples,\nbut in contrast to tuples, lists are \blue{mutable}, i.e. we can\nchange lists. To construct a list, we use square brackets:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}44}]:} \PY{n}{L} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{]}\n         \PY{n}{L}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}44}]:} [1, 2, 3]\n\end{Verbatim}\nTo change the first element of a list, we can use the \blue{index operator}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{L}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{7}\n         \PY{n}{L}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}45}]:} [7, 2, 3]\n\end{Verbatim}\nThis last operation would not be possible if \(L\) had been a tuple instead\nof a list. Lists support concatenation in the same way as tuples:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}46}]:} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{]} \PY{o}{+} \PY{p}{[}\PY{l+m+mi}{4}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{,}\PY{l+m+mi}{6}\PY{p}{]}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}46}]:} [1, 2, 3, 4, 5, 6]\n\end{Verbatim}\nThe function \texttt{len} computes the length of a list:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}47}]:} \PY{n+nb}{len}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{,}\PY{l+m+mi}{6}\PY{p}{]}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}47}]:} 3\n\end{Verbatim}\n            \nLists and tuples both support the functions \texttt{max} and \texttt{min}. 
The expression\n\(\texttt{max}(L)\) computes the maximum of all the elements of the list\n(or tuple) \(L\), while \(\texttt{min}(L)\) computes the smallest\nelement of \(L\).\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}48}]:} \PY{n+nb}{max}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}48}]:} 3\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}49}]:} \PY{n+nb}{min}\PY{p}{(}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}49}]:} 1\n\end{Verbatim}\n            \n\subsection{Boolean Operators}\label{boolean-operators}\n\nIn \textsl{Python}, the Boolean values are written as \texttt{True} and \texttt{False}.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}50}]:} \PY{k+kc}{True}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}50}]:} True\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}51}]:} \PY{k+kc}{False}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}51}]:} False\n\end{Verbatim}\n            \n    These values can be combined using the Boolean operators \(\wedge\),\n\(\vee\), and \(\neg\). In \textsl{Python}, these operators are denoted as\n\texttt{and}, \texttt{or}, and \texttt{not}. The following output shows how the operator \texttt{and} is\ndefined:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}52}]:} \PY{n}{B} \PY{o}{=} \PY{p}{(}\PY{k+kc}{True}\PY{p}{,} \PY{k+kc}{False}\PY{p}{)}\n         \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n}{B}\PY{p}{:}\n             \PY{k}{for} \PY{n}{y} \PY{o+ow}{in} \PY{n}{B}\PY{p}{:}\n                 \PY{n+nb}{print}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{and}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{=}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{x} \PY{o+ow}{and} \PY{n}{y}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nTrue and True = True\nTrue and False = False\nFalse and True = False\nFalse and False = False\n\end{Verbatim}\n\nThe disjunction of two Boolean values is only \texttt{False} if both values are\n\texttt{False}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}53}]:} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n}{B}\PY{p}{:}\n             \PY{k}{for} \PY{n}{y} \PY{o+ow}{in} \PY{n}{B}\PY{p}{:}\n                 \PY{n+nb}{print}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{or}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{y}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{=}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{x} \PY{o+ow}{or} \PY{n}{y}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\nTrue or True = True\nTrue or False = True\nFalse or True = True\nFalse or False = False\n\end{Verbatim}\nFinally, the negation operator \texttt{not} works as expected:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In 
[{\color{incolor}54}]:} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n}{B}\PY{p}{:}\n             \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{not}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{=}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{o+ow}{not} \PY{n}{x}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nnot True = False\nnot False = True\n\end{Verbatim}\nBoolean values are created by comparing numbers using the following comparison operators:\n\begin{enumerate}\n\item $a\;\texttt{==}\;b$ is true iff $a$ is equal to $b$.\n\item $a\;\texttt{!=}\;b$ is true iff $a$ is different from $b$.\n\item $a\;\texttt{<}\;b$ is true iff $a$ is less than $b$.\n\item $a\;\texttt{<=}\;b$ is true iff $a$ is less than or equal to $b$.\n\item $a\;\texttt{>=}\;b$ is true iff $a$ is bigger than or equal to $b$.\n\item $a\;\texttt{>}\;b$ is true iff $a$ is bigger than $b$.\n\end{enumerate}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}55}]:} \PY{l+m+mi}{1} \PY{o}{==} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}55}]:} False\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}56}]:} \PY{l+m+mi}{1} \PY{o}{\PYZlt{}} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}56}]:} True\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}57}]:} \PY{l+m+mi}{1} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}57}]:} True\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}58}]:} \PY{l+m+mi}{1} \PY{o}{\PYZgt{}} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}58}]:} False\n\end{Verbatim}\n            \n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}59}]:} \PY{l+m+mi}{1} \PY{o}{\PYZgt{}}\PY{o}{=} \PY{l+m+mi}{2}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}59}]:} False\n\end{Verbatim}\nComparison operators can be \blue{chained} as shown in the following example:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}60}]:} \PY{l+m+mi}{1} \PY{o}{\PYZlt{}} \PY{l+m+mi}{2} \PY{o}{\PYZlt{}} \PY{l+m+mi}{3}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{outcolor}Out[{\color{outcolor}60}]:} True\n\end{Verbatim}\n            \n
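To make the meaning of chaining explicit (a small interpreter sketch, not part of the original notebook): the chained expression \texttt{1 < 2 < 3} behaves like the conjunction \texttt{1 < 2 and 2 < 3}:\n\begin{verbatim}\n    >>> 1 < 2 and 2 < 3    # the unchained form of 1 < 2 < 3\n    True\n    >>> 3 < 2 < 5          # False, since 3 < 2 already fails\n    False\n\end{verbatim}\n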
If \\(L\\) is\na list of Boolean values, then we can check whether all elements of\n\\(L\\) are true by writing \\[ \\texttt{all}(L) \\] For example, to check\nwhether all elements of a list \\(L\\) are even we can write the\nfollowing:\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}61}]:} \\PY{n}{L} \\PY{o}{=} \\PY{p}{[}\\PY{l+m+mi}{2}\\PY{p}{,} \\PY{l+m+mi}{4}\\PY{p}{,} \\PY{l+m+mi}{6}\\PY{p}{]}\n         \\PY{n+nb}{all}\\PY{p}{(}\\PY{p}{[}\\PY{n}{x} \\PY{o}{\\PYZpc{}} \\PY{l+m+mi}{2} \\PY{o}{==} \\PY{l+m+mi}{0} \\PY{k}{for} \\PY{n}{x} \\PY{o+ow}{in} \\PY{n}{L}\\PY{p}{]}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}61}]:} True\n\\end{Verbatim}\n            \n    \\subsection{Control Structures}\\label{control-structures}\n\n    First of all, \\textsl{Python} supports branching statements. The following\nexample is taken from the \\textsl{Python} tutorial at \\href{https://python.org}{https://python.org}:\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}62}]:} \\PY{n}{x} \\PY{o}{=} \\PY{n+nb}{int}\\PY{p}{(}\\PY{n+nb}{input}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{Please enter an integer: }\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\\PY{p}{)}\n         \\PY{k}{if} \\PY{n}{x} \\PY{o}{\\PYZlt{}} \\PY{l+m+mi}{0}\\PY{p}{:}\n            \\PY{n+nb}{print}\\PY{p}{(}\\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{The number is negative!}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{)}\n         \\PY{k}{elif} \\PY{n}{x} \\PY{o}{==} \\PY{l+m+mi}{0}\\PY{p}{:}\n            \\PY{n+nb}{print}\\PY{p}{(}\\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{The number is zero.}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{)}\n         \\PY{k}{elif} \\PY{n}{x} \\PY{o}{==} \\PY{l+m+mi}{1}\\PY{p}{:}\n            \\PY{n+nb}{print}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{It}\\PY{l+s+s2}{\\PYZsq{}}\\PY{l+s+s2}{s a one.}\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\n         \\PY{k}{else}\\PY{p}{:}\n            \\PY{n+nb}{print}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{It}\\PY{l+s+s2}{\\PYZsq{}}\\PY{l+s+s2}{s more than one.}\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\nPlease enter an integer: 42\nIt's more than one.\n\\end{Verbatim}\n\\blue{Loops} can be used to iterate over sets, lists, tuples, or\ngenerators. 
    \subsection{Control Structures}\label{control-structures}\n\n    First of all, \textsl{Python} supports branching statements. The following\nexample is taken from the \textsl{Python} tutorial at \href{https://python.org}{https://python.org}:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{x} \PY{o}{=} \PY{n+nb}{int}\PY{p}{(}\PY{n+nb}{input}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Please enter an integer: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{)}\n         \PY{k}{if} \PY{n}{x} \PY{o}{\PYZlt{}} \PY{l+m+mi}{0}\PY{p}{:}\n            \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{The number is negative!}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n         \PY{k}{elif} \PY{n}{x} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{:}\n            \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{The number is zero.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\n         \PY{k}{elif} \PY{n}{x} \PY{o}{==} \PY{l+m+mi}{1}\PY{p}{:}\n            \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{It}\PY{l+s+s2}{\PYZsq{}}\PY{l+s+s2}{s a one.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\n         \PY{k}{else}\PY{p}{:}\n            \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{It}\PY{l+s+s2}{\PYZsq{}}\PY{l+s+s2}{s more than one.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\nPlease enter an integer: 42\nIt's more than one.\n\end{Verbatim}\n\blue{Loops} can be used to iterate over sets, lists, tuples, or\ngenerators. The following example prints the numbers from 1 to 10.\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}63}]:} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{)}\PY{p}{:}\n             \PY{n+nb}{print}\PY{p}{(}\PY{n}{x}\PY{p}{)}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n\end{Verbatim}\nThe same can be achieved with a \texttt{while} loop:\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{1}\n         \PY{k}{while} \PY{n}{x} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{l+m+mi}{10}\PY{p}{:}\n             \PY{n+nb}{print}\PY{p}{(}\PY{n}{x}\PY{p}{)}\n             \PY{n}{x} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}\n\end{Verbatim}\n\n\n\begin{Verbatim}[commandchars=\\\{\}]\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n\end{Verbatim}\nThe following program computes the prime numbers according to an\nalgorithm given by Eratosthenes.\n\n\begin{enumerate}\n\item We set $n$ equal to 100 as we want to compute the set of all prime numbers less than or equal to 100.\n\item \texttt{primes} is the list of numbers from 0 up to $n$, i.e. we have initially\n      $$ \texttt{primes} = [0,1,2,\cdots,n] $$\n      Therefore, we have\n      $$ \texttt{primes}[i] = i \quad \mbox{for all $i \in \{0,1,\cdots,n\}$.} $$\n      The idea is to set \texttt{primes[$i$]} to zero iff $i$ is a proper product of two numbers.\n\n\item To this end we iterate over all $i$ and $j$ from the set $\{2,\cdots,n\}$\n  and set the entry $\texttt{primes}[i*j]$ to zero.  This is achieved by the two \texttt{for} loops below.\n\item Note that we have to check that the product $i * j$ is not bigger than $n$ for otherwise we would get an\n  \blue{out of range error}  when trying to assign \texttt{primes[i*j]}.\n\item After the iteration, all entries of the list \texttt{primes} that are greater than one and not prime have been set to zero.\n\item Finally, we compute the set of primes by collecting those elements that have not been set to $0$.\n\end{enumerate}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{n}      \PY{o}{=} \PY{l+m+mi}{100}\n         \PY{n}{primes} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\n         \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}\n             \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}\n                 \PY{k}{if} \PY{n}{i} \PY{o}{*} \PY{n}{j} \PY{o}{\PYZlt{}}\PY{o}{=} \PY{n}{n}\PY{p}{:}\n                     \PY{n}{primes}\PY{p}{[}\PY{n}{i} \PY{o}{*} \PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{0}\n         \PY{n+nb}{print}\PY{p}{(}\PY{n}{primes}\PY{p}{)}\n         \PY{n+nb}{print}\PY{p}{(}\PY{p}{\PYZob{}} \PY{n}{i} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{n}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{k}{if} \PY{n}{primes}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{0} \PY{p}{\PYZcb{}}\PY{p}{)}\n\end{Verbatim}\n\n\begin{Verbatim}[commandchars=\\\{\}]\n[0, 1, 2, 3, 0, 5, 0, 7, 0, 0, 0, 11, 0, 13, 0, 0, 0, 17, 0, 19, 0, 0, 0, 23,\n 
0, 0, 0, 0, 0, 29, 0, 31, 0, 0, 0, 0, 0, 37, 0, 0, 0, 41, 0, 43, 0, 0, 0, 47,\n 0, 0, 0, 0, 0, 53, 0, 0, 0, 0, 0, 59, 0, 61, 0, 0, 0, 0, 0, 67, 0, 0, 0, 71,\n 0, 73, 0, 0, 0, 0, 0, 79, 0, 0, 0, 83, 0, 0, 0, 0, 0, 89, 0, 0, 0, 0, 0, 0,\n 0, 97, 0, 0, 0]\n\\{ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71,\n  73, 79, 83, 89, 97\n\\}\n\\end{Verbatim}\nThe algorithm given above can be improved by using the following observations:\n\\begin{enumerate}\n\\item If a number $x$ can be written as a product $a * b$, then at least one of the numbers $a$ or $b$ has to\n      be less than or equal to $\\sqrt{x}$.  Therefore, the \\texttt{for} loop below only iterates as long as $i \\leq\n      \\sqrt{n}$.\n      The function \\texttt{ceil} is needed to cast the square root of $n$ to a natural number.  In\n      order to use the functions \\texttt{sqrt} and \\texttt{ceil} we have to import them from the module\n      \\texttt{math}.  This is done in line 1 of the program shown below.  \n\\item When we iterate over $j$ in the inner loop, it is sufficient if we start with $j = i$ since all products\n      of the form $i * j$ where $j < i$ have already been eliminated earlier, namely when the multiples of $j$\n      were eliminated. \n\\item If \\texttt{primes[$i$] = 0}, then $i$ is not a prime and hence it has to be a product of two numbers $a$\n      and $b$ both of which are smaller than $i$.  However, since all the multiples of $a$ and $b$ have already\n      been eliminated, there is no point in eliminating the multiples of $i$ since these are also multiples of both\n      $a$ and $b$ and hence have already been eliminated.  Therefore, if \\texttt{primes[$i$] = 0} we can\n      immediately jump to the next value of $i$.  This is achieved by the \\texttt{continue} statement in line 7\n      below. 
\n\\end{enumerate}\nThe program shown below is easily capable of computing all prime numbers less than a million.\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}66}]:} \\PY{k+kn}{from} \\PY{n+nn}{math} \\PY{k}{import} \\PY{n}{sqrt}\\PY{p}{,} \\PY{n}{ceil}\n         \n         \\PY{n}{n} \\PY{o}{=} \\PY{l+m+mi}{1000}\n         \\PY{n}{primes} \\PY{o}{=} \\PY{n+nb}{list}\\PY{p}{(}\\PY{n+nb}{range}\\PY{p}{(}\\PY{n}{n}\\PY{o}{+}\\PY{l+m+mi}{1}\\PY{p}{)}\\PY{p}{)}\n         \\PY{k}{for} \\PY{n}{i} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{,} \\PY{n}{ceil}\\PY{p}{(}\\PY{n}{sqrt}\\PY{p}{(}\\PY{n}{n}\\PY{p}{)}\\PY{p}{)}\\PY{p}{)}\\PY{p}{:}\n             \\PY{k}{if} \\PY{n}{primes}\\PY{p}{[}\\PY{n}{i}\\PY{p}{]} \\PY{o}{==} \\PY{l+m+mi}{0}\\PY{p}{:}\n                 \\PY{k}{continue}\n             \\PY{n}{j} \\PY{o}{=} \\PY{n}{i}\n             \\PY{k}{while} \\PY{n}{i} \\PY{o}{*} \\PY{n}{j} \\PY{o}{\\PYZlt{}}\\PY{o}{=} \\PY{n}{n}\\PY{p}{:}\n                 \\PY{n}{primes}\\PY{p}{[}\\PY{n}{i} \\PY{o}{*} \\PY{n}{j}\\PY{p}{]} \\PY{o}{=} \\PY{l+m+mi}{0}\n                 \\PY{n}{j} \\PY{o}{+}\\PY{o}{=} \\PY{l+m+mi}{1}\\PY{p}{;}\n         \\PY{n+nb}{print}\\PY{p}{(}\\PY{p}{\\PYZob{}} \\PY{n}{i} \\PY{k}{for} \\PY{n}{i} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{,} \\PY{n}{n}\\PY{o}{+}\\PY{l+m+mi}{1}\\PY{p}{)} \\PY{k}{if} \\PY{n}{primes}\\PY{p}{[}\\PY{n}{i}\\PY{p}{]} \\PY{o}{!=} \\PY{l+m+mi}{0} \\PY{p}{\\PYZcb{}}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n\\{ 2, 3, 5, 7, 521, 11, 523, 13, 17, 19, 23, 29, 541, 31, 547, 37, 41, 43, 557,\n  47, 563, 53, 569, 59, 571, 61, 577, 67, 71, 73, 587, 79, 593, 83, 599, 89,\n  601, 607, 97, 101, 613, 103, 617, 107, 619, 109, 113, 631, 127, 641, 131,\n  643, 647, 137, 139, 653, 659, 149, 661, 151, 157, 673, 163, 677, 167, 683,\n  173, 179, 691, 181, 701, 191, 193, 197, 709, 199, 719, 211, 727, 733, 223,\n  227, 739, 229, 743, 233, 239, 751, 241, 757, 761, 251, 257, 769, 773, 263,\n  269, 271, 787, 277, 281, 283, 797, 293, 809, 811, 307, 821, 311, 823, 313,\n  827, 317, 829, 839, 331, 337, 853, 857, 347, 859, 349, 863, 353, 359, 877,\n  367, 881, 883, 373, 887, 379, 383, 389, 907, 397, 911, 401, 919, 409, 929,\n  419, 421, 937, 941, 431, 433, 947, 439, 953, 443, 449, 967, 457, 971, 461,\n  463, 977, 467, 983, 479, 991, 997, 487, 491, 499, 503, 509\n\\}\n\\end{Verbatim}\n\n\\subsection{Numerical Functions}\\label{numerical-functions}\n\n\\textsl{Python} provides all of the mathematical functions that you have\ncome to learn at school. A detailed listing of these functions can be\nfound at\n\\href{https://docs.python.org/3.6/library/math.html}{https://docs.python.org/3.6/library/math.html}. We just\nshow the most important functions and constants. 
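Before we look at them, let us briefly package the sieve developed above as a reusable function.  The following sketch is ours and is not part of the original notebook; the name \\texttt{sieve} and its interface are our own choice.\n\n\\begin{Verbatim}[frame=lines, framesep=0.3cm, xleftmargin=0.8cm]\n    from math import sqrt, ceil\n\n    def sieve(n):\n        "compute the set of all primes p with 2 <= p <= n"\n        primes = list(range(n + 1))\n        for i in range(2, ceil(sqrt(n)) + 1):\n            if primes[i] == 0:     # i is composite, its multiples are already gone\n                continue\n            for j in range(i, n // i + 1):\n                primes[i * j] = 0  # here i * j <= n is guaranteed\n        return { i for i in range(2, n + 1) if primes[i] != 0 }\n\\end{Verbatim}\n\n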
In order to make the module \\texttt{math}\navailable, we use the following \\texttt{import} statement:\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}67}]:} \\PY{k+kn}{import} \\PY{n+nn}{math}\n\\end{Verbatim}\nThe mathematical constant \\href{https://en.wikipedia.org/wiki/Pi}{Pi}, which is most often written as \\(\\pi\\), is\navailable as \\texttt{math.pi}.\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}68}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{pi}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}68}]:} 3.141592653589793\n\\end{Verbatim}\nThe \\blue{sine} function is called as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}69}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{sin}\\PY{p}{(}\\PY{n}{math}\\PY{o}{.}\\PY{n}{pi}\\PY{o}{/}\\PY{l+m+mi}{6}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}69}]:} 0.49999999999999994\n\\end{Verbatim}           \nThe \\blue{cosine} function is called as follows: \n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}70}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{cos}\\PY{p}{(}\\PY{l+m+mf}{0.0}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}70}]:} 1.0\n\\end{Verbatim}\nThe \\blue{tangent} function is called as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}71}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{tan}\\PY{p}{(}\\PY{n}{math}\\PY{o}{.}\\PY{n}{pi}\\PY{o}{/}\\PY{l+m+mi}{4}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}71}]:} 0.9999999999999999\n\\end{Verbatim}\nThe \\blue{arc sine}, \\blue{arc cosine}, and \\blue{arc tangent} are called by prefixing the\ncharacter '\\texttt{a}' to the name of the function as seen below:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}72}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{asin}\\PY{p}{(}\\PY{l+m+mf}{1.0}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}72}]:} 1.5707963267948966\n\\end{Verbatim}\n            \n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}73}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{acos}\\PY{p}{(}\\PY{l+m+mf}{1.0}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}73}]:} 0.0\n\\end{Verbatim}\n            \n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}74}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{atan}\\PY{p}{(}\\PY{l+m+mf}{1.0}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}74}]:} 0.7853981633974483\n\\end{Verbatim} \n\\href{https://en.wikipedia.org/wiki/E_(mathematical_constant)}{Euler's number} \\(e\\) can be computed as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}75}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{e}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}75}]:} 2.718281828459045\n\\end{Verbatim}\nThe \\blue{exponential} function \\(\\mathrm{exp}(x) := e^x\\) is computed as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}76}]:} 
\\PY{n}{math}\\PY{o}{.}\\PY{n}{exp}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}76}]:} 2.718281828459045\n\\end{Verbatim}\nThe \\blue{natural logarithm} \\(\\ln(x)\\), which is defined as the inverse function of the function \\(\\exp(x)\\), is called \\texttt{log} (instead of \\texttt{ln}):\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}77}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{log}\\PY{p}{(}\\PY{n}{math}\\PY{o}{.}\\PY{n}{e} \\PY{o}{*} \\PY{n}{math}\\PY{o}{.}\\PY{n}{e}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}77}]:} 2.0\n\\end{Verbatim}\nThe \\blue{square root} \\(\\sqrt{x}\\) of a number \\(x\\) is computed using the\nfunction \\texttt{sqrt}:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}78}]:} \\PY{n}{math}\\PY{o}{.}\\PY{n}{sqrt}\\PY{p}{(}\\PY{l+m+mi}{2}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{outcolor}Out[{\\color{outcolor}78}]:} 1.4142135623730951\n\\end{Verbatim}\n\n\n\\subsection{Selection Sort}\nIn order to see a practical application of the concepts discussed so far, we present a sorting\nalgorithm that is known as \\href{https://en.wikipedia.org/wiki/Selection_sort}{\\blue{selection sort}}.\nThis algorithm sorts a given list \\texttt{L} and works as follows:\n\\begin{enumerate}\n\\item If \\texttt{L} is empty, \\texttt{sort(L)} is also empty:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{sort([])} = \\texttt{[]}$.\n\\item Otherwise, we first compute the minimum of \\texttt{L}.  Clearly, the minimum needs to be the\n      first element of the sorted list.  We remove this minimum from \\texttt{L}, sort the remaining\n      elements recursively, and finally attach the minimum at the front of this list:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{sort(L)} = \\texttt{[min(L)] + sort([}x \\in \\texttt{L} \\texttt{|} x \\not= \\texttt{min}(L)\\texttt{])}$.\n\\end{enumerate}\nFigure \\ref{fig:min-sort.py} on page \\pageref{fig:min-sort.py} shows the program\n\\href{https://github.com/karlstroetmann/Logik/blob/master/Python/min-sort.py}{\\texttt{min-sort.py}}\nthat implements selection sort  in \\textsl{Python}. 
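To see how this specification works, let us unfold it on a small example of our own (the notes proceed directly to the implementation):\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{sort([13,5,7])} $=$ \\texttt{[5] + sort([13,7])} $=$ \\texttt{[5] + [7] + sort([13])} $=$ \\texttt{[5] + [7] + [13] + sort([])} $=$ \\texttt{[5,7,13]}.\n\\\\[0.2cm]\n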
\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    def minSort(L):\n        if L == []:\n            return []\n        m = min(L)\n        return [m] + minSort([x for x in L if x != m])\n    \n    L = [ 2, 13, 5, 13, 7, 2, 4 ]\n    print('minSort(', L, ') = ', minSort(L), sep='')\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{Implementing selection sort in \\textsl{Python}.}\n\\label{fig:min-sort.py}\n\\end{figure}\nNote that the list comprehension \\texttt{[x for x in L if x != m]} removes \\emph{every} occurrence of the minimum, while only a single copy of \\texttt{m} is put back at the front.  Hence duplicate elements are lost: for the list \\texttt{L} defined in the program, \\texttt{minSort(L)} yields \\texttt{[2, 4, 5, 7, 13]} rather than a permutation of \\texttt{L}.\n\n\\section{Loading a Program}\nThe \\textsl{Python} interpreter can \\blue{load} programs interactively into a running session.\nIf \\textsl{file} is the base name of a file, then the command\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{import }\\textsl{file}\n\\\\[0.2cm]\nloads the program from  \\textsl{file}\\texttt{.py} and executes the statements given in this program.\nFor example, the command\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{import min\\_sort}\n\\\\[0.2cm]\nexecutes the program shown in Figure\n\\ref{fig:min-sort.py} on page \\pageref{fig:min-sort.py}.  If we want to call a function defined in the file\n\\texttt{min\\_sort.py},  then we have to prefix this function as shown below:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{min\\_sort.minSort([2, 13, 5, 13, 7, 2, 4])},\n\\\\[0.2cm]\ni.e.~we have to prefix the name of the function that we want to call with the base name of the file defining\nthis function followed by a dot character.\n\n\\section{Strings}\n\\textsl{Python} supports \\blue{strings}.  \\href{https://en.wikipedia.org/wiki/String_(computer_science)}{Strings} are\nnothing but sequences of characters.  In \\textsl{Python}, these have to be \nenclosed either in double quotes or in single quotes.  The operator ``\\texttt{+}'' can be used to concatenate\nstrings.  For example, the expression \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{\\squote{abc} + \\squote{uvw}}\n\\\\[0.2cm]\nreturns the result\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\squote{abcuvw}.\n\\\\[0.2cm]\nFurthermore, a natural number \\texttt{n} can be multiplied with a string \\texttt{s}.  The expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{n * s}\n\\\\[0.2cm]\nreturns a string consisting of \\texttt{n} concatenations of \\texttt{s}.  For example,\nthe result of\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{3 * \\squote{abc}}\n\\\\[0.2cm]\nis the string \\squote{abcabcabc}.  When multiplying a string with a number, the order of the\narguments does not matter. Hence, the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{\\squote{abc} * 3}\n\\\\[0.2cm]\nalso yields the result \\squote{abcabcabc}.  In order to extract substrings from a given string, we can use the same\nslicing operator that also works for lists and tuples.  
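The following short sketch (ours, not part of the original notes) summarizes these string operations:\n\n\\begin{Verbatim}[frame=lines, framesep=0.3cm, xleftmargin=0.8cm]\n    s = 'abc' + 'uvw'   # concatenation, yields 'abcuvw'\n    t = 3 * 'abc'       # repetition, yields 'abcabcabc'\n    u = 'abc' * 3       # same result, the order of the arguments does not matter\n\\end{Verbatim}\n\n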
If $s$ is a string and $k$ and $l$ are numbers, then\nthe expression \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{$s$[$k$:$l$]}\n\\\\[0.2cm]\nextracts the substring from $s$ that starts with the $(k+1)$-th character of $s$ and ends with the $l$-th character.\nFor example, if \\texttt{s} is defined by the assignment\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{s = "abcdefgh"}\n\\\\[0.2cm]\nthen the expression \\texttt{s[2:5]} returns the substring\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{"cde"}.\n\n\\section{Computing with Unlimited Precision}\n\\textsl{Python} provides the module \\texttt{fractions}, which implements\n\\blue{rational numbers} through the function \\texttt{Fraction}. We can load this function as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}1}]:} \\PY{k+kn}{from} \\PY{n+nn}{fractions} \\PY{k}{import} \\PY{n}{Fraction}\n\\end{Verbatim}\nThe function \\texttt{Fraction} expects two arguments, the \\blue{numerator} and\nthe \\blue{denominator}. Mathematically, we have\n\\[ \\texttt{Fraction}(p, q) = \\frac{p}{q}. \\] For example, we can compute\nthe sum \\(\\frac{1}{2} + \\frac{1}{3}\\) as follows:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}2}]:} \\PY{n+nb}{sum} \\PY{o}{=} \\PY{n}{Fraction}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{l+m+mi}{2}\\PY{p}{)} \\PY{o}{+} \\PY{n}{Fraction}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{l+m+mi}{3}\\PY{p}{)}\n        \\PY{n+nb}{print}\\PY{p}{(}\\PY{n+nb}{sum}\\PY{p}{)}\n\\end{Verbatim}\n\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n5/6\n\\end{Verbatim}\nLet us compute Euler's number \\(e\\). The easiest way to compute \\(e\\) is\nas an infinite series. We have that\n\\[ e = \\sum\\limits_{n=0}^\\infty \\frac{1}{n!} \\] Here \\(n!\\) denotes the\n\\blue{factorial} of \\(n\\), which is defined as follows:\n\\[ n! = 1 \\cdot 2 \\cdot 3 \\cdot {\\dots} \\cdot n. \\]\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}3}]:} \\PY{k}{def} \\PY{n+nf}{factorial}\\PY{p}{(}\\PY{n}{n}\\PY{p}{)}\\PY{p}{:}\n            \\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{compute the factorial of n}\\PY{l+s+s2}{\\PYZdq{}}\n            \\PY{n}{result} \\PY{o}{=} \\PY{l+m+mi}{1}\n            \\PY{k}{for} \\PY{n}{i} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{n}{n}\\PY{o}{+}\\PY{l+m+mi}{1}\\PY{p}{)}\\PY{p}{:}\n                \\PY{n}{result} \\PY{o}{*}\\PY{o}{=} \\PY{n}{i}\n            \\PY{k}{return} \\PY{n}{result}\n\\end{Verbatim}\nLet's check that our definition of the factorial works as expected.\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}4}]:} \\PY{k}{for} \\PY{n}{i} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{l+m+mi}{10}\\PY{p}{)}\\PY{p}{:}\n            \\PY{n+nb}{print}\\PY{p}{(}\\PY{n}{i}\\PY{p}{,} \\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{! = }\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{,} \\PY{n}{factorial}\\PY{p}{(}\\PY{n}{i}\\PY{p}{)}\\PY{p}{,} \\PY{n}{sep}\\PY{o}{=}\\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{)}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n0! = 1\n1! = 1\n2! = 2\n3! = 6\n4! = 24\n5! = 120\n6! = 720\n7! = 5040\n8! = 40320\n9! 
= 362880\n\\end{Verbatim}\nLet us approximate \\(e\\) by the following finite sum:\n\\[ e \\approx \\sum\\limits_{i=0}^n \\frac{1}{i!}. \\] Setting \\(n=100\\) should be\nsufficient to compute \\(e\\) to a hundred decimal places.\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}5}]:} \\PY{n}{n} \\PY{o}{=} \\PY{l+m+mi}{100}\n\\end{Verbatim}\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}6}]:} \\PY{n}{e} \\PY{o}{=} \\PY{l+m+mi}{0}\n        \\PY{k}{for} \\PY{n}{i} \\PY{o+ow}{in} \\PY{n+nb}{range}\\PY{p}{(}\\PY{n}{n}\\PY{o}{+}\\PY{l+m+mi}{1}\\PY{p}{)}\\PY{p}{:}\n            \\PY{n}{e} \\PY{o}{+}\\PY{o}{=} \\PY{n}{Fraction}\\PY{p}{(}\\PY{l+m+mi}{1}\\PY{p}{,} \\PY{n}{factorial}\\PY{p}{(}\\PY{n}{i}\\PY{p}{)}\\PY{p}{)}\n\\end{Verbatim}\nMultiply \\(e\\) by \\(10^{100}\\) and round so that we get the first 100\ndecimal places of \\(e\\):\n\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}7}]:} \\PY{n}{eTimesBig} \\PY{o}{=} \\PY{n}{e} \\PY{o}{*} \\PY{l+m+mi}{10} \\PY{o}{*}\\PY{o}{*} \\PY{n}{n}\n        \\PY{n}{s} \\PY{o}{=} \\PY{n+nb}{str}\\PY{p}{(}\\PY{n+nb}{round}\\PY{p}{(}\\PY{n}{eTimesBig}\\PY{p}{)}\\PY{p}{)}\n\\end{Verbatim}\nInsert a ``\\texttt{.}'' after the first digit:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n{\\color{incolor}In [{\\color{incolor}8}]:} \\PY{n+nb}{print}\\PY{p}{(}\\PY{n}{s}\\PY{p}{[}\\PY{l+m+mi}{0}\\PY{p}{]}\\PY{p}{,} \\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{.}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{,} \\PY{n}{s}\\PY{p}{[}\\PY{l+m+mi}{1}\\PY{p}{:}\\PY{p}{]}\\PY{p}{,} \\PY{n}{sep}\\PY{o}{=}\\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{)}\n\\end{Verbatim}\nAnd there we go. Ladies and gentlemen, lo and behold: Here are the first 100 digits of $e$:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n2.718281828459045235360287471352662497757247093699959574966967627724076630353547\n5945713821785251664274\n\\end{Verbatim}\n\n\\section{Other References}\nFor reasons of time and space, this lecture has just scratched the surface of what is possible with\n\\textsl{Python}.  If you want to attain a deeper understanding of \\textsl{Python}, here are three places that \nI would recommend:\n\\begin{enumerate}\n\\item First, there is the official \\textsl{Python} tutorial, which is available at\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\href{https://docs.python.org/3.6/tutorial/index.html}{\\texttt{https://docs.python.org/3.6/tutorial/index.html}}.\n\n      Furthermore, there are a number of good books available.  I would like to suggest the following two\n      books.  Both of these books should be available electronically in our library:\n\\item \\emph{The Quick Python Book} written by Naomi R.~Ceder \\cite{ceder:2018} is up to date and gives a\n      concise introduction to \\textsl{Python}.  The book assumes that the reader has some prior programming\n      experience.  I would assume that most of our students have the necessary background to feel comfortable\n      with this book.\n\\item \\emph{Learning Python} by Mark Lutz \\cite{lutz:2013} is aimed at the complete novice.  It discusses\n      everything in minute detail, albeit at the cost of 1648 pages.\n\\end{enumerate}\nSince \\textsl{Python} is not the primary objective of these lecture notes, there is no requirement to read\neither the \\textsl{Python} tutorial or any of the books mentioned above.  
The primary objective of these\nlecture notes is to introduce the main ideas of both \\blue{propositional logic} and \\blue{predicate logic}.\n\\textsl{Python} is merely used to illustrate the most important notions from set theory and logic.  You should\nbe able to pick up enough knowledge of \\textsl{Python} by closely inspecting the \\textsl{Python} programs\ndiscussed in these lecture notes.  \n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"logic\"\n%%% End: \n\n", "meta": {"hexsha": "082b30cf4d6c1fcedf7c1a8a7bc313ae0728aa28", "size": 75342, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-Notes-Python/python.tex", "max_stars_repo_name": "AbdalrohmanGitHub/Logik", "max_stars_repo_head_hexsha": "62270c224061f38b637cb6920a0fbe5a56495bb9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture-Notes-Python/python.tex", "max_issues_repo_name": "AbdalrohmanGitHub/Logik", "max_issues_repo_head_hexsha": "62270c224061f38b637cb6920a0fbe5a56495bb9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture-Notes-Python/python.tex", "max_forks_repo_name": "AbdalrohmanGitHub/Logik", "max_forks_repo_head_hexsha": "62270c224061f38b637cb6920a0fbe5a56495bb9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9946914399, "max_line_length": 622, "alphanum_fraction": 0.6269544212, "num_tokens": 28938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.7879311956428946, "lm_q1q2_score": 0.5711479033940008}}
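For convenience, the computation of the digits of $e$ can be collected into a single self-contained program.  The following sketch is ours; it merely restates the notebook cells shown in the notes above.\n\n\\begin{Verbatim}[frame=lines, framesep=0.3cm, xleftmargin=0.8cm]\n    from fractions import Fraction\n\n    def factorial(n):\n        "compute the factorial of n"\n        result = 1\n        for i in range(1, n + 1):\n            result *= i\n        return result\n\n    n = 100\n    e = sum(Fraction(1, factorial(i)) for i in range(n + 1))\n    s = str(round(e * 10 ** n))  # shift by 10**100, then round to an integer\n    print(s[0], '.', s[1:], sep='')\n\\end{Verbatim}\n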
{"text": "\\documentclass[12pt]{article}\r\n% page layout\r\n\\usepackage[margin=1in]{geometry}\r\n\\usepackage[doublespacing]{setspace}\r\n\r\n% useful package\r\n\\def\\studynote{}\r\n\\usepackage{../cmd}\r\n\r\n% only for this document\r\n\\setcounter{section}{-1}\r\n\\newcommand{\\Deltabmbar}{\\bar{\\Deltabm}}\r\n\\newcommand{\\gbmhat}{\\hat{\\gbm}}\r\n\\newcommand{\\gbmbar}{\\bar{\\gbm}}\r\n\\newcommand{\\gbmbarbar}{\\bar{\\bar{\\gbm}}}\r\n\\newcommand{\\fhat}{\\hat{f}}\r\n\r\n% document starts here\r\n\\title{Optimization Study Note}\r\n\\author{Long Wang}\r\n\\begin{document}\r\n\\maketitle\r\n\\tableofcontents\r\n\r\n\\newpage\r\n\r\n\\section{Preliminary}\r\n\r\n\\subsection{Optimal Condition}\r\n\\textbf{First order:} $ \\gbm(\\thetabm) = \\zerobm $\r\n\\begin{itemize}\r\n\t\\item Descent direction: $ \\gbm(\\thetabm)^T \\dbm < 0 $\r\n\\end{itemize}\r\n\r\n\\textbf{Second order:} $ \\Hbm(\\thetabm) $\r\n\\begin{itemize}\r\n\t\\item $ \\Hbm(\\thetabm) \\succ 0 \\Rightarrow $ local min \r\n\t\\item $ \\Hbm(\\thetabm) \\succeq 0 \\Rightarrow $ check higher-order derivatives\r\n\t\\item Indefinite $ \\Hbm(\\thetabm) \\Rightarrow $ saddlepoint (local min/max for different components)\r\n\\end{itemize}\r\n\r\n\\subsection{Continuity}\r\n\\textbf{Notation}\r\n\\begin{itemize}\r\n\t\\item $\\Ccal^0$ (continuous), $ \\Ccal^1 $ (continuously differentiable), $ \\dots $\r\n\t\\item $ \\Ccal^{k,1} $ ($ \\Ccal^k $ + Lipschitz), $ \\Ccal^{k,\\alpha} $ ($ \\Ccal^k $ + Holder)\r\n\t\\item $ \\Ccal^0 \\supset $ uniformly cont $\\supset$ absolutely cont $\\supset \\Ccal^{0,1} \\supset \\Ccal^1 $\r\n\\end{itemize}\r\n\r\n\\textbf{$ \\Ccal^0 $ (cont):} $ \\norm{\\thetabm_1 - \\thetabm_2} < \\delta(\\varepsilon, \\thetabm) \\Rightarrow |f(\\thetabm_1) - f(\\thetabm_2)| < \\varepsilon $\r\n\\begin{itemize}\r\n\t\\item $ \\lim_{\\thetabm \\to \\thetabm_0} f(\\thetabm) = f(\\thetabm_0) $\r\n\\end{itemize}\r\n\r\n\\textbf{Uniformly cont:} $ \\norm{\\thetabm_1 - \\thetabm_2} < \\delta(\\varepsilon) \\Rightarrow |f(\\thetabm_1) - f(\\thetabm_2)| < \\varepsilon $\r\n\\begin{itemize}\r\n\t\\item \\textbf{Heine-Cantor:} cont + compact = uniformly cont\r\n\t\\item \\textbf{Weierstrass:} uniformly cont, but no where differentiable\r\n\\end{itemize}\r\n\r\n\\textbf{Absolutely cont:} $ \\sum_k (\\thetabm_1^{(k)} - \\thetabm_2^{(k)}) \\leq \\delta(\\varepsilon) \\Rightarrow \\sum_k | f(\\thetabm_1^{(k)}) - f(\\thetabm_2^{(k)}) | \\leq \\varepsilon $\r\n\\begin{itemize}\r\n\t\\item \\textbf{Bounded variation:} $ V(f) = \\sup \\sum_k |f(\\thetabm_{k+1}) - f(\\thetabm_k)| < \\infty $\r\n\t\\item Differentiable a.e. 
$\\supset$ bounded variation $\\supset$ absolutely cont\r\n\\end{itemize}\r\n\r\n\\textbf{$ \\Ccal^{0,1} $ (Lipschitz):} $ |f(\\thetabm_1) - f(\\thetabm_2)| \\le L \\norm{\\thetabm_1 - \\thetabm_2} $\r\n\\begin{itemize}\r\n\t\\item \\textbf{$ \\Ccal^{0, \\alpha} $ ($\\alpha$-Holder):} $ |f(\\thetabm_1) - f(\\thetabm_2)| \\le C \\norm{\\thetabm_1 - \\thetabm_2}^\\alpha, 0 < \\alpha \\le 1 $\r\n\t\\item $ \\alpha$-Holder $\\supset$ Lipschitz $ \\iff \\norm{\\gbm(\\thetabm)} \\le M $\r\n\t\\item If $ \\Ccal^{1,\\alpha} $, then $ |f(\\thetabm_1) - f(\\thetabm_2) - \\gbm(\\thetabm_2)^T(\\thetabm_1 - \\thetabm_2)| \\leq \\frac{L}{\\alpha + 1} \\norm{\\thetabm_1 - \\thetabm_2}^{\\alpha + 1}$\r\n\\end{itemize}\r\n\r\n\\textbf{$ \\Ccal^1 $ (cont diff):} $ \\gbm(\\thetabm) $ exists and cont\r\n\r\n\\subsection{Convex}\r\n\\textbf{Definition}\r\n\\begin{itemize}\r\n\t\\item $ f(\\alpha \\thetabm_1 + (1 - \\alpha) \\thetabm_2) \\leq \\alpha f(\\thetabm_1) + (1 - \\alpha) f(\\thetabm_2) $\r\n\t\\item If $ \\gbm(\\thetabm) $ exists: $ f(\\thetabm_2) \\ge f(\\thetabm_1) + \\gbm(\\thetabm_1)^T (\\thetabm_2 - \\thetabm_1) $\r\n\t\\item If $ \\Hbm(\\thetabm) $ exists: $ \\Hbm(\\thetabm) \\succeq \\zerobm $\r\n\\end{itemize}\r\n\r\n\\textbf{Strong convex}\r\n\\begin{itemize}\r\n\t\\item $ f(\\alpha \\thetabm_1 + (1 - \\alpha) \\thetabm_2) \\leq \\alpha f(\\thetabm_1) + (1 - \\alpha) f(\\thetabm_2) - \\alpha (1 - \\alpha) \\frac{L}{2} \\norm{\\thetabm_1 - \\thetabm_2}^2 $\r\n\t\\item $ f(\\thetabm_2) \\ge f(\\thetabm_1) + \\gbm(\\thetabm_1)^T(\\thetabm_2 - \\thetabm_1) + \\frac{L}{2} \\norm{\\thetabm_2 - \\thetabm_1}^2 $\r\n\t\\begin{itemize}\r\n\t\t\\item $\\alpha$-strongly convex and $\\beta$-strongly smooth\r\n\t\t\r\n\t\t$ \\frac{\\alpha}{2} \\norm{\\thetabm_2 - \\thetabm_1}^2 \\leq f(\\thetabm_2) - f(\\thetabm_1) - \\gbm(\\thetabm_1)^T(\\thetabm_2 - \\thetabm_1) \\leq \\frac{\\beta}{2} \\norm{\\thetabm_2 - \\thetabm_1}^2 $\r\n\t\\end{itemize}\r\n\t\\item $ (\\gbm(\\thetabm_1) - \\gbm(\\thetabm_2))^T (\\thetabm_1 - \\thetabm_2) \\ge L \\norm{\\thetabm_1 - \\thetabm_2}^2 $\r\n\t\\item If $ f \\in \\Ccal^2 $: $ \\Hbm(\\thetabm) \\succeq L \\Ibm $\r\n\t\\item $ f(\\thetabm) - \\frac{L}{2} \\norm{\\thetabm}^2 $ is convex\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item Convex + open convex set $ \\Rightarrow $ continuous\r\n\\end{itemize}\r\n\r\n\\subsection{Convergence}\r\n\\textbf{$q$-linearly:} $ \\norm{\\thetabm_{k+1} - \\thetabm^*} \\leq c \\norm{\\thetabm_k - \\thetabm^*} $\r\n\r\n\\textbf{$q$-superlinearly:} $ \\norm{\\thetabm_{k+1} - \\thetabm^*} \\leq c_k \\norm{\\thetabm_k - \\thetabm^*} $ with $ c_k \\to 0 $\r\n\r\n\\textbf{$q$-quadratically:} $ \\norm{\\thetabm_{k+1} - \\thetabm^*} \\leq c \\norm{\\thetabm_k - \\thetabm^*}^2 $\r\n\r\n\\newpage\r\n\\section{Direct Algorithm}\r\n\\subsection{Random Search}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k^{\\text{cand}}~\\iftext~f(\\thetabm_k^{\\text{cand}}) < f(\\thetabm_k) $\r\n\r\n\\textbf{Extension}\r\n\\begin{itemize}\r\n\t\\item (Enhanced) Local Random Search: $ \\thetabm_k^{\\text{cand}} = \\thetabm_k + \\bbm_k (\\pm \\dbm_k) $\r\n\t\\item Nonlinear Simplex Nelder-Mead: reflection, expansion, contraction and shrink\r\n\t\\item Stochastic: $ \\fhat(\\thetabm_k^{\\text{cand}}) < \\fhat(\\thetabm_k) - \\tau_k $\r\n\\end{itemize}\r\n\r\n\\subsection{Simulated Annealing (SA)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k^{\\text{cand}}~\\iftext~U \\le \\exp \\paren{- \\frac{f(\\thetabm_k^{\\text{cand}}) - 
f(\\thetabm_k)}{c_b T_k}} $\r\n\\begin{itemize}\r\n\t\\item Global optimization\r\n\\end{itemize}\r\n\r\n\\subsection{Genetic Algorithm (GA)}\r\n\\textbf{Algo:} initialization, mixing, evaluation\r\n\r\n\\newpage\r\n\\section{Model-based Algorithm}\r\n\\begin{itemize}\r\n\t\\item Polynomial model: $ m(\\xbm) = \\sum a_i \\phi_i(\\xbm) $\r\n\t\\item Quadratic interpolation model\r\n\t\\item Regression model\r\n\t\\item RBF interpolation models: $ m(\\xbm) = \\sum_i b_i \\psi(\\norm{\\xbm - \\ybm_i}) + \\abm^T \\phi(\\xbm) $\r\n\\end{itemize}\r\n\r\n\\subsection{Trust-Region}\r\n\\textbf{Algo:} build model $ m_k(\\xbm) $\r\n\\begin{itemize}\r\n\t\\item Generate $ \\sbm_k $ that makes $ \\xbm_k + \\sbm_k $ minimize $ m_k(\\xbm) $ on $ \\Bcal_{r_k}(\\xbm) $\r\n\t\\item Compute $ \\rho_k = \\frac{f(\\xbm_k) - f(\\xbm_k + \\sbm_k)}{m_k(\\xbm_k) - m_k(\\xbm_k + \\sbm_k)} $ and update $ r_{k+1} $\r\n\t\\item $ \\xbm_{k+1} = \\xbm_k + \\sbm_k $ if $ \\rho_k \\geq \\eta_0 $\r\n\\end{itemize}\r\n\r\n\\newpage\r\n\\section{First-Order-Based Algorithm}\r\n\\subsection{Gradient-Based Descent}\r\n\\subsubsection{Gradient Descent (GD)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item $ f(\\thetabm) \\approx f^L(\\thetabm) = f(\\thetabm_k) + \\gbm(\\thetabm_k)^T (\\thetabm - \\thetabm_k) + \\frac{1}{2} (\\thetabm - \\thetabm_k)^T (\\frac{1}{a_k} \\Ibm) (\\thetabm - \\thetabm_k) $\r\n\t\\begin{itemize}\r\n\t\t\\item Matching $ f^L(\\thetabm_k) = f(\\thetabm_k) $ and $ \\gbm^L(\\thetabm_k) = \\gbm(\\thetabm_k) $\r\n\t\t\\item Minimize $ f^L(\\thetabm) \\Rightarrow \\thetabm_{k+1} = \\thetabm_k - a_k \\gbm(\\thetabm_k) $\r\n\t\t\\item Smaller $ a_k \\Rightarrow $ more local movement\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Coordinate Gradient Descent (CGD)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k g_i(\\thetabm_k) $\r\n\r\n\\textbf{Application}\r\n\\begin{itemize}\r\n\t\\item \\textbf{Least Squares:} Solve $ \\Abm \\xbm = \\bbm $ by minimizing $ f(\\xbm) = \\frac{1}{2} \\xbm^T \\Abm \\xbm - \\bbm^T \\xbm $\r\n\t\\begin{itemize}\r\n\t\t\\item Too expensive to do a decomposition\r\n\t\t\\item CGD: $ \\xbm_{i+1} = \\xbm_i + \\alpha_i \\ebm_i $ (good if $ \\Abm $ is diagonal)\r\n\t\t\\item $\\Abm$-conjugate descent: $ f(\\ybm) = \\frac{1}{2} \\ybm^T (\\Sbm^T \\Abm \\Sbm) \\ybm - (\\Sbm^T \\bbm)^T \\ybm $\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item $ \\xbm_{i+1} = \\Sbm \\ybm_{i+1} = \\Sbm (\\ybm_i + \\alpha_i \\ebm_i) = \\xbm_i + \\alpha_i \\sbm_i  $\r\n\t\t\t\\item Find $\\Sbm^T \\Abm \\Sbm = \\Lambdabm $ cheaply and update $ \\sbm_k \\to \\sbm_{k+1} $ cheaply\r\n\t\t\t\\item $ \\alpha_k = - \\frac{\\sbm_k^T \\rbm_k}{\\sbm_k^T \\Abm \\sbm_k}, \\rbm_{k+1} = \\Abm \\xbm_{k+1} - \\bbm, \\sbm_{k+1} = - \\rbm_{k+1} + \\frac{\\sbm_k^T \\Abm \\rbm_{k+1}}{\\sbm_k^T \\Abm \\sbm_k} \\sbm_k $, init $ \\sbm_0 = -\\rbm_0 $\r\n\t\t\\end{itemize}\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Projected Gradient Descent (PGD)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\Proj{\\Ccal}{\\thetabm_k - a_k \\gbm(\\thetabm_k)} $\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item Bounded $ \\gbm(\\thetabm) $: $ \\frac{1}{K} \\sum f(\\thetabm_k) \\leq f^* + \\varepsilon $ with $ K = O(\\frac{1}{\\varepsilon^2}), \\alpha_k = \\frac{1}{\\sqrt{K}} $\r\n\t\\item $\\alpha/\\beta$-strongly convex/smooth: $ f(\\thetabm_K) \\leq f^* + \\varepsilon $ with $ K = O(\\frac{\\beta}{\\alpha} \\log \\frac{\\beta}{\\varepsilon}), \\alpha_k = 
\\frac{1}{\\beta} $\r\n\\end{itemize}\r\n\r\n\\textbf{Non-convex constraint set}\r\n\\begin{itemize}\r\n\t\\item \\textbf{Algo:} $ \\thetabm_{k+1} = \\Proj{\\Ncal\\Ccal}{\\thetabm_k - a_k \\gbm(\\thetabm_k)} $\r\n\t\\begin{itemize}\r\n\t\t\\item Projection is generally NP-hard\r\n\t\\end{itemize}\r\n\t\\item \\textbf{Theory}\r\n\t\\begin{itemize}\r\n\t\t\\item $\\alpha/\\beta$-strongly restricted convex/smooth ($\\beta < 2\\alpha$)\r\n\t\t\r\n\t\t$ f(\\thetabm_K) \\leq f^* + \\varepsilon $ with $ K = O(\\frac{\\alpha}{2\\alpha - \\beta} \\log \\frac{1}{\\varepsilon}), \\alpha_k = \\frac{1}{\\beta} $\r\n\t\\end{itemize}\r\n\t\\item \\textbf{Application}\r\n\t\\begin{itemize}\r\n\t\t\\item Sparse regression: $ \\hat{\\betabm} = \\argmin_{\\norm{\\betabm}_0 \\leq s} \\norm{\\ybm - \\Xbm \\betabm}^2 $\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item $ \\Proj{\\norm{\\betabm}_0 \\leq s}{\\betabm} = $ set coordinates with small magnitude to 0\r\n\t\t\\end{itemize}\r\n\t\t\\item Low-rank: $ \\hat{\\Abm} = \\argmin_{\\rk(\\Xbm) \\leq r} \\norm{\\Abm - \\Xbm}_F^2 $\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item Eckart-Young-Mirsky Theorem: $ \\hat{\\Abm} = \\Ubm_{(r)} \\Sigmabm_{(r)} \\Vbm_{(r)}^T $\r\n\t\t\\end{itemize}\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Subgradient Descent}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\partial f(\\thetabm_k) $\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item Convergence: $ O(\\varepsilon^{-2}) $\r\n\\end{itemize}\r\n\r\n\\subsubsection{Proximal Gradient Descent (ProxGD)}\r\n\\textbf{Model:} $ \\ell(\\thetabm) = f(\\thetabm) + h(\\thetabm) $\r\n\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item $ f(\\thetabm) \\approx f^L(\\thetabm) = f(\\thetabm_k) + \\gbm(\\thetabm_k)^T (\\thetabm - \\thetabm_k) + \\frac{1}{2} (\\thetabm - \\thetabm_k)^T (\\frac{1}{a_k} \\Ibm) (\\thetabm - \\thetabm_k) $\r\n\t\\item $ \\thetabm_{k+1} = \\argmin f^L(\\thetabm) + h(\\thetabm) = \\Prox{h}{\\thetabm_k - a_k \\gbm(\\thetabm_k)} $\r\n\t\\begin{itemize}\r\n\t\t\\item If $ h(\\thetabm) = \\lambda \\norm{\\thetabm}_1 $, then $ \\Prox{h}{\\cdot} = S_\\lambda(\\cdot) $\r\n\t\t\\item If $ h(\\thetabm) = \\infty \\ind{\\thetabm \\notin \\Thetabm} $, then $ \\Prox{h}{\\cdot} = \\Proj{\\Thetabm}{\\cdot} $\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item Convergence: $ f(\\thetabm_k) - f^* \\leq \\frac{1}{2 a k} \\norm{\\thetabm_0 - \\thetabm^*}^2 $\r\n\t\\begin{itemize}\r\n\t\t\\item $f(\\thetabm)$: convex, diff, Lipschitz $ \\gbm(\\thetabm) $ with $L$; $h(\\thetabm)$: convex; $ a_k = a \\leq \\frac{1}{L} $\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\textbf{Application}\r\n\\begin{itemize}\r\n\t\\item Matrix completion: $ \\min \\norm{\\Ybm - \\Xbm}_F + \\lambda \\norm{\\Xbm}_{\\tr} $\r\n\\end{itemize}\r\n\r\n\\subsubsection{Stochastic Gradient Descent (SGD)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbmhat_k(\\thetabm_k) $, where $ \\gbmhat_k(\\thetabm_k) = \\gbm(\\thetabm_k) + \\varepsilon_k $\r\n\\begin{itemize}\r\n\t\\item $ \\sum a_k = \\infty $ and $ \\sum a_k^2 < \\infty $\r\n\t\\item $ \\sup_k \\norm{\\thetabm_k} < \\infty $\r\n\\end{itemize} \r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item Normality: $ k^{\\alpha/2} (\\thetabm_k - \\thetabm^*) \\todist \\Ncal(0, \\Sigmabm) $\r\n\t\\begin{itemize}\r\n\t\t\\item $ a_k = O(1/k^\\alpha) $, optimal $ a_k = \\Hbm(\\thetabm^*)^{-1} / (k+1) $\r\n\t\\end{itemize}\r\n\t\\item Iterate 
averaging: $ k^{\\alpha/2} (\\bar{\\thetabm}_k - \\thetabm^*) \\todist \\Ncal(0, \\Sigmabm_{\\min}) $\r\n\t\\begin{itemize}\r\n\t\t\\item $ a_{k+1} / a_k = 1 - o(a_k) $, decay slower than $ O(1/k) $\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Acceleration}\r\n\\textbf{Momentum:} $ \\thetabm_{k+1} = \\thetabm_k - \\vbm_k $\r\n\\begin{itemize}\r\n\t\\item $ \\vbm_k = \\gamma \\vbm_{k-1} + \\eta \\gbm(\\thetabm_k), \\gamma = 0.9 $\r\n\t\\item \\textbf{Pros:} avoid oscillations\r\n\\end{itemize}\r\n\r\n\\textbf{Nesterov Accel Grad (NAG):} $ \\thetabm_{k+1} = \\thetabm_k - \\vbm_k $\r\n\\begin{itemize}\r\n\t\\item $ \\vbm_k = \\gamma \\vbm_{k-1} + \\eta \\gbm(\\thetabm_k - \\gamma \\vbm_{k-1}) $\r\n\t\\item Use gradient at approx future position: $ \\thetabm_k - \\gamma \\vbm_{k-1} $\r\n\\end{itemize}\r\n\r\n\\textbf{Adagrad:} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\sqrt{\\gbmbar_k^2 + \\varepsilon}} \\odot \\gbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item Second moment: $ \\gbmbar_k^2 = \\sum_{t=0}^{k} \\gbm(\\thetabm_t)^2 $ (element-wise), $ \\eta = 0.01 $\r\n\t\\item \\textbf{Pros:} adaptive learning rate\r\n\t\\item \\textbf{Cons:} $ \\gbmbar_k^2 \\to \\infty $ as $ k \\to \\infty $\r\n\\end{itemize}\r\n\r\n\\textbf{RMSprop:} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\sqrt{\\gbmbar_k^2 + \\varepsilon}} \\odot \\gbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item Decaying second moment: $ \\gbmbar_k^2 = \\gamma \\gbmbar_{k-1}^2 + (1 - \\gamma) \\gbm(\\thetabm_k)^2 $\r\n\t\\item \\textbf{Adadelta:} Replace $ \\eta $ by $ \\sqrt{\\Deltabmbar_{k-1}^2 + \\varepsilon} $ with $ \\Deltabmbar_{k-1}^2 = \\rho \\Deltabmbar_{k-2}^2 + (1 - \\rho) \\Deltabm_{k-1}^2 $\r\n\t\\item $ \\gamma = 0.9 $ and $ \\eta = 0.001 $\r\n\\end{itemize}\r\n\r\n\\textbf{Adaptive Moment (Adam):} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\sqrt{\\gbmbarbar_k^2 + \\varepsilon}} \\gbmbarbar_k $\r\n\\begin{itemize}\r\n\t\\item Decaying Moment: $ \\gbmbar_k = \\beta_1 \\gbmbar_{k-1} + (1 - \\beta_1) \\gbm(\\thetabm_k) $ and $ \\gbmbarbar_k = \\frac{\\gbmbar_k}{1 - \\beta_1^{k+1}} $\r\n\t\\item Decaying second moment: $ \\gbmbar_k^2 = \\beta_2 \\gbmbar_{k-1}^2 + (1 - \\beta_2) \\gbm(\\thetabm_k)^2 $ and $ \\gbmbarbar_k^2 = \\frac{\\gbmbar_k^2}{1 - \\beta_2^{k+1}} $\r\n\t\\item $ \\beta_1 = 0.9, \\beta_2 = 0.999 $ and $ \\varepsilon = 10^{-8} $\r\n\\end{itemize}\r\n\r\n\\textbf{AdaMax:} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\gbmbar_k^{\\max}} \\gbmbarbar_k $\r\n\\begin{itemize}\r\n\t\\item $ \\gbmbar_k^{\\max} =  \\beta_2^\\infty \\gbmbar_{k-1}^{\\max} + (1 - \\beta_2^\\infty) |\\gbm_k|^\\infty = \\max(\\beta_2 \\gbmbar_{k-1}^{\\max}, |\\gbm_k|) $\r\n\t\\item $ \\eta = 0.002, \\beta_1 = 0.9, \\beta_2 = 0.999 $\r\n\\end{itemize}\r\n\r\n\\textbf{Nesterov-Adam (Nadam):} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\sqrt{\\gbmbarbar_k^2 + \\varepsilon}} \\paren{\\beta_1 \\gbmbarbar_k + (1 - \\beta_1) \\frac{\\gbm_k}{1 - \\beta_1^{k+1}}} $\r\n\r\n\\textbf{AMSGrad:} $ \\thetabm_{k+1} = \\thetabm_k - \\frac{\\eta}{\\sqrt{\\gbmbarbar_k^2 + \\varepsilon}} \\gbmbar_k $\r\n\\begin{itemize}\r\n\t\\item $ \\gbmbarbar_k^2 = \\max(\\gbmbarbar_{k-1}^2, \\gbmbar_k^2) $\r\n\t\\item Non-increasing step size\r\n\\end{itemize}\r\n\r\n\\subsection{Gradient-Free Descent}\r\n\\subsubsection{Finite-Difference (FDSA)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbmhat_k(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item $ \\ghat_{ki}(\\thetabm_k) = \\frac{\\fhat(\\thetabm_k + c_k \\ebm_i) 
- \\fhat(\\thetabm_k - c_k \\ebm_i)}{2c_k} $\r\n\t\\item $ \\sum a_k = \\infty, \\sum a_k c_k < \\infty $ and $ \\sum a_k^2 / c_k^2 < \\infty $\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item $ \\gbmhat_k(\\thetabm_k): $ bias $= O(c_k^2) $, variance $= O(c_k^{-2}) $ or $ O(1) $ if controlled noise\r\n\t\\item Normality: $ k^{(\\alpha-2\\gamma)/2} (\\thetabm_k - \\thetabm^*) \\todist \\Ncal(\\mubm, \\Sigmabm) $\r\n\t\\begin{itemize}\r\n\t\t\\item $ a_k = O(1/k^\\alpha), c_k = O(1/k^\\gamma) $ and $ 2\\gamma < \\alpha \\le 6\\gamma $ ($ \\mubm = \\zerobm $ if $ \\alpha < 6\\gamma $)\r\n\t\t\\item Fastest: $ \\alpha = 1, \\gamma = 1/6 $, Practical: $ \\alpha = 0.602, \\gamma = 0.101 $\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Simultaneous Perturbation (SPSA)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbmhat_k(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item \\red{General: $ \\gbmhat_k(\\thetabm_k) = \\frac{\\fhat(\\thetabm_k + c_k \\Ubm_k) - \\fhat(\\thetabm_k - c_k \\Ubm_k)}{2c_k} \\Vbm_k $ with $ \\E{\\Ubm_k \\Vbm_k} = \\Ibm $ and $ \\E{\\Vbm_k} = \\zerobm $}\r\n\t\\item $ \\gbmhat_k(\\thetabm_k) = \\frac{\\fhat(\\thetabm_k + c_k \\Deltabm_k) - \\fhat(\\thetabm_k - c_k \\Deltabm_k)}{2c_k\\Deltabm_k} $ with $ \\Deltabm_k \\sim \\text{Unif}\\{\\pm1\\}^p $\r\n\t\\item $ \\sum a_k = \\infty, \\sum a_k^2 < \\infty $ and $ \\sum a_k^2 / c_k^2 < \\infty $\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item $ \\gbmhat_k(\\thetabm_k): $ bias $= O(c_k^2) $, variance $= O(c_k^{-2}) $ or $ O(1) $ if controlled noise\r\n\t\\item Normality: $ k^{(\\alpha-2\\gamma)/2} (\\thetabm_k - \\thetabm^*) \\todist \\Ncal(\\mubm, \\Sigmabm) $\r\n\t\\begin{itemize}\r\n\t\t\\item $ a_k = O(1/k^\\alpha), c_k = O(1/k^\\gamma) $ and $ 2\\gamma < \\alpha \\le 6\\gamma $ ($ \\mubm = \\zerobm $ if $ \\alpha < 6\\gamma $)\r\n\t\t\\item Fastest: $ \\alpha = 1, \\gamma = 1/6 $, Practical: $ \\alpha = 0.602, \\gamma = 0.101 $\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsubsection{Random Direction (RDSA)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbmhat_k(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item Directional derivative: $ \\gbm(\\thetabm, \\ubm) = \\lim_{c \\to 0} \\frac{f(\\thetabm + c \\ubm) - f(\\thetabm)}{c} $\r\n\t\\item $ \\gbm_0(\\thetabm_k) = \\gbm(\\thetabm_k, \\ubm_k) \\ubm_k $\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item $ \\gbmhat_k(\\thetabm_k): $ bias $= O(c_k^2) $, variance $= O(c_k^{-2}) $ or $ O(1) $ if controlled noise\r\n\\end{itemize}\r\n\r\n\\subsubsection{Smoothed Functional (SFSA)}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - a_k \\gbmhat(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item $ \\Ubm_k = \\Ncal(\\zerobm, \\Ibm), \\Vbm_k = \\Ubm_k $\r\n\\end{itemize}\r\n\r\n\r\n\\newpage\r\n\\section{Second-Order-Based Algo}\r\n\\subsection{Newton's Method}\r\n\\textbf{Algo:} $ \\thetabm_{k+1} = \\thetabm_k - \\Hbm(\\thetabm_k)^{-1} \\gbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item $ f(\\thetabm) \\approx f^Q(\\thetabm) = f(\\thetabm_k) + \\gbm(\\thetabm_k)^T (\\thetabm - \\thetabm_k) + \\frac{1}{2} (\\thetabm - \\thetabm_k)^T \\Hbm(\\thetabm_k) (\\thetabm - \\thetabm_k) $\r\n\t\\begin{itemize}\r\n\t\t\\item $ \\Hbm(\\thetabm_k) $ must be PD\r\n\t\t\\item Matching: $ f^Q(\\thetabm_k) = f(\\thetabm_k), \\gbm^Q(\\thetabm_k) = \\gbm(\\thetabm_k) $ and $ \\Hbm^Q(\\thetabm_k) = \\Hbm(\\thetabm_k) $\r\n\t\t\\item Minimize $ f^Q(\\thetabm) \\Rightarrow \\thetabm_{k+1} = 
\\thetabm_k - \\Hbm(\\thetabm_k)^{-1} \\gbm(\\thetabm_k) $\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item Practice: $ \\Hbm(\\thetabm_k) \\sbm_k = -\\gbm(\\thetabm_k) $ where $ \\sbm_k = \\thetabm_{k+1} - \\thetabm_k $\r\n\t\t\\end{itemize}\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\textbf{Theory}\r\n\\begin{itemize}\r\n\t\\item $ \\Ccal^1 $ + open convex $ \\Rightarrow $ local convergence \r\n\t\\item $ \\Ccal^1 $ + open convex + Lipschitz $ \\Hbm(\\thetabm) \\Rightarrow $ quadratic convergence\r\n\\end{itemize}\r\n\r\n\\subsubsection{Modified Newton}\r\n\\textbf{Algo:} Make $ \\Hbm(\\thetabm_k) \\succ \\zerobm $\r\n\\begin{itemize}\r\n\t\\item $ \\Hbm = \\Vbm \\Lambdabm \\Vbm^T $ or $ \\Hbm = \\Lbm \\Dbm \\Lbm^T $ with block diag $ \\Dbm = \\Vbm \\Lambdabm \\Vbm^T $\r\n\\end{itemize}\r\n\r\n\\subsubsection{Symmetric Rank One (SR1)}\r\n\\textbf{Algo:} $ \\Bbm_k \\approx \\Hbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item Secant: $ \\Bbm_{k+1} \\sbm_k = \\ybm_k $ with $ \\sbm_k = \\thetabm_{k+1} - \\thetabm_k, \\ybm_k = \\gbm_{k+1} - \\gbm_k $\r\n\t\\item Update (symmetric, not always PD): $ \\Bbm_{k+1} = \\Bbm_k + \\frac{(\\ybm_k - \\Bbm_k \\sbm_k) (\\ybm_k - \\Bbm_k \\sbm_k)^T}{(\\ybm_k - \\Bbm_k \\sbm_k)^T \\sbm_k} $\r\n\t\\item Matrix inverse lemma: $ \\Bbm_k^{-1} \\to \\Bbm_{k+1}^{-1} $\r\n\\end{itemize}\r\n\r\n\\subsubsection{BFGS}\r\n\\textbf{Algo:} $ \\Bbm_k \\approx \\Hbm(\\thetabm_k) $\r\n\\begin{itemize}\r\n\t\\item Update (symmetric and PD): $ \\Bbm_{k+1} = \\Bbm_k - \\frac{\\Bbm_k \\sbm_k (\\Bbm_k \\sbm_k)^T}{\\sbm_k^T \\Bbm_k \\sbm_k} + \\frac{\\ybm_k \\ybm_k^T}{\\ybm_k^T \\sbm_k} $\r\n\t\\item Matrix inverse lemma: $ \\Bbm_k^{-1} \\to \\Bbm_{k+1}^{-1} $\r\n\\end{itemize}\r\n\r\n\\textbf{L-BFGS:} $m$ BFGS updates from seed matrix $ \\Bbm^0 $\r\n\r\n\\section{Trust-Region Algorithm}\r\n\r\n\\section{Programming}\r\n\\subsection{Convex Programming}\r\n\\subsection{Integer Programming}\r\n\\textbf{Branch-and-bound}\r\n\r\n\\textbf{Branch-and-cut}\r\n\r\n\\textbf{Cutting plane}\r\n\r\n\\subsection{Nonlinear Programming}\r\n\\end{document}", "meta": {"hexsha": "fd68996e5edb1dad20a9af79014bbda2f0fb236a", "size": 20006, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Optimization/optimization.tex", "max_stars_repo_name": "longwang-jhu/Study", "max_stars_repo_head_hexsha": "456569366514e73b936168d867b33d56e9517495", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-07T07:03:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-07T07:03:11.000Z", "max_issues_repo_path": "Optimization/optimization.tex", "max_issues_repo_name": "longwang-jhu/Study", "max_issues_repo_head_hexsha": "456569366514e73b936168d867b33d56e9517495", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Optimization/optimization.tex", "max_forks_repo_name": "longwang-jhu/Study", "max_forks_repo_head_hexsha": "456569366514e73b936168d867b33d56e9517495", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.8524590164, "max_line_length": 225, "alphanum_fraction": 0.6287613716, "num_tokens": 8957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7879311956428946, "lm_q1q2_score": 0.5711479033940008}}
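To make the SPSA recursion from the study note above concrete, here is a minimal NumPy sketch of our own (not part of the note).  The loss oracle \\texttt{f}, the gain constants \\texttt{a} and \\texttt{c}, and the use of the practical exponents $0.602$ and $0.101$ quoted in the note are assumptions on our part.\n\\begin{verbatim}\nimport numpy as np\n\ndef spsa(f, theta, iters=1000, a=0.1, c=0.1, seed=0):\n    # Simultaneous Perturbation Stochastic Approximation:\n    # ghat = (f(theta + c_k*Delta) - f(theta - c_k*Delta)) / (2*c_k*Delta)\n    rng = np.random.default_rng(seed)\n    theta = np.asarray(theta, dtype=float).copy()\n    for k in range(iters):\n        a_k = a / (k + 1) ** 0.602   # step-size gain sequence\n        c_k = c / (k + 1) ** 0.101   # perturbation gain sequence\n        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Unif{+-1}^p\n        ghat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2 * c_k * delta)\n        theta -= a_k * ghat\n    return theta\n\\end{verbatim}\n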
{"text": "\\documentclass[aspectratio=149]{beamer}\n\n\\input{../../shared_slides.tex}\n\n% reference: https://yuxinchen2020.github.io/ele522_optimization/lectures/variance_reduction.pdf\n% or\n% https://ieeexplore-ieee-org.uaccess.univie.ac.at/stamp/stamp.jsp?tp=&arnumber=9226504\n\n\\usepackage{booktabs}\n\n\\title{Duality, Gradient-free, in Application}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\\frame{\\tableofcontents}\n\n\n\\section{Duality}%\n\n\\begin{frame}\n  \\frametitle{Duality}\n  \\begin{center}\n    Establishes some relation between two classes of objects.\n  \\end{center}\n  \\begin{definition}[``Legendre transform'' or ``Fenchel conjugate'']\n    Given a function $f: \\R^d \\to \\R \\cup \\{+\\infty\\}$, we define its \\textcolor{blue}{\\textbf{conjugate}} $f^*:\\R^d \\to \\R \\cup\\{+\\infty\\}$ by\n    \\begin{equation}\n      f^*(y) = \\sup_x \\{\\langle y,x \\rangle - f(x)\\}\n    \\end{equation}\n  \\end{definition}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Convex conjugate}\n\n  \\begin{block}{}\n    \\begin{equation}\n      f^*(y) = \\sup_x \\{\\langle y,x \\rangle - f(x)\\}\n    \\end{equation}\n  \\end{block}\n  \\begin{minipage}{0.48\\textwidth}\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\textwidth,keepaspectratio]{conjugate}\n  \\end{figure}\n  \\end{minipage}\n  \\begin{minipage}{0.48\\textwidth}\n    \\vspace{2cm}\n    \\textcolor{blue}{Figure:} maximum gap between linear function $x \\mapsto \\langle y, x \\rangle$ and $f$\n  \\end{minipage}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Properties}\n\n  \\begin{itemize}\n    \\item $f^*$ is always convex. \\hfill{\\textcolor{gray}{point-wise max of affine function}}\n    \\item \\textcolor{blue}{Fenchel's inequality:}\n          \\begin{equation}\n            f(x) + f^*(y) \\ge \\langle x, y \\rangle.\n          \\end{equation}\n    \\item Hence the biconjugate $f^{**}:={(f^*)}^*$ satisfies $f^{**}\\le f$.\n    \\item If f is convex an lsc. 
then $f^{**}=f$.\n    \\item etc.\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Examples}\n  \\begin{itemize}\n    \\item \\textcolor{blue}{Norm:} If $f(x) = \\Vert x \\Vert$, then\n          \\begin{equation}\n            f^*(y) = \\mathbbm{1}(\\Vert y \\Vert_* \\le 1),\n          \\end{equation}\n          i.e.\\ the indicator of the dual norm ball.\n          Recall the definition of the dual norm:\n          \\begin{equation}\n            \\Vert y \\Vert_* :=  \\max_{\\Vert x \\Vert \\le 1} \\{\\langle y, x \\rangle\\}.\n          \\end{equation}\n          In particular: $\\Vert \\cdot \\Vert_1 \\leftrightarrow \\Vert \\cdot \\Vert_\\infty$\n  \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{More examples}\n    \\begin{equation}\n      \\hspace{-4cm}\\text{\\textcolor{blue}{Generalized linear models:}}  \\qquad \\min_{x\\in \\R^d} \\, f(Ax) + g(x).\n    \\end{equation}\n    Two approaches to reformulate:\n    \\begin{equation}\n      \\min_x \\underbrace{\\max_y \\, \\langle y, Ax \\rangle - f^*(y)}_{= f(Ax)} + g(x).\n    \\end{equation}\n    Switch min and max\n    \\begin{equation}\n      \\max_y \\min_x \\, - f^*(y) + \\langle y, Ax \\rangle + g(x).\n    \\end{equation}\n    Change the sign to turn the inner min into a max, which is the conjugate of $g$\n    \\begin{equation}\n      \\max_y \\, - f^*(y) - \\underbrace{\\max_x \\, \\langle -A^T y, x \\rangle - g(x)}_{= g^*(-A^T y)}.\n    \\end{equation}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Generalized linear models continued}\n  Or reformulate\n  \\begin{equation}\n    \\min_{x\\in \\R^d} \\, f(Ax) + g(x)\n  \\end{equation}\n  as\n  \\begin{equation}\n    \\min_{x\\in\\R^d, w\\in \\R^m} \\, f(w)+ g(x) \\quad \\text{s.t.}\\quad w = Ax\n  \\end{equation}\n  Use Lagrange function\n  \\begin{equation}\n    \\mathcal{L}(x,w, u) :=  \\, f(w) + g(x) + \\langle u, w - Ax \\rangle,\n  \\end{equation}\n  then the dual function is given by\n  \\begin{equation}\n    \\varphi(u) = \\min_{x\\in\\R^d, w\\in \\R^m} \\mathcal{L}(x,w,u).\n  \\end{equation}\n  Dual problem\n  \\begin{equation}\n    \\max_{u \\in \\R^m} [\\varphi(u) = - f^*(-u) - g^*(A^T u)].\n  \\end{equation}\n\\end{frame}\n\n\n\\begin{frame}\n  \\frametitle{Example: Lasso}\n  $\\ell_1$ regularized regression\n  \\begin{equation}\n    \\min_{x\\in \\R^d}\\, \\frac12 \\Vert Ax-b \\Vert^2 + \\lambda \\Vert x \\Vert_1\n  \\end{equation}\n  fits this template with\n  \\begin{equation}\n    f(w) = \\frac12 \\Vert w-b \\Vert^2 \\quad \\text{and} \\quad g(x) = \\lambda \\Vert x \\Vert_1\n  \\end{equation}\n  Computation gives\n  \\begin{equation}\n    f^*(u) = \\frac12 \\Vert b+u \\Vert^2 - \\frac12 \\Vert b \\Vert^2 \\quad \\text{and} \\quad g^*(v)= \\mathbbm{1}(\\Vert v/\\lambda \\Vert_\\infty \\le 1).\n  \\end{equation}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Dual of Lasso}\n  We had\n  \\begin{equation}\n    f^*(u) = \\frac12 \\Vert b+u \\Vert^2 - \\frac12 \\Vert b \\Vert^2 \\quad \\text{and} \\quad g^*(v)= \\mathbbm{1}(\\Vert v/\\lambda \\Vert_\\infty \\le 1).\n  \\end{equation}\n  So the dual is\n  \\begin{equation}\n    \\begin{aligned}\n      &\\max_{u\\in \\R^m} - f^*(-u) - g^*(A^T u) \\\\\n      \\Leftrightarrow& \\min_{u\\in\\R^m} \\Vert b-u \\Vert^2 \\quad \\text{s.t.} \\quad \\Vert A^T u \\Vert_{\\infty} \\le \\lambda\n    \\end{aligned}\n  \\end{equation}\n\n  Similarly, for least squares, ridge/logistic regression, SVM, etc.\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{But why?}\n  \\begin{itemize}\n    \\item Duality gap gives a \\textcolor{blue}{certificate} of current 
optimization quality\n          \\begin{equation}\n            \\begin{aligned}\n              f(A \\bar{x}) + g(\\bar{x}) &\\ge \\min_{x\\in\\R^d} \\, f(Ax) + g(x) \\\\\n              & \\ge \\max_{u\\in\\R^m} - f^*(-u) - g^*(A^T u) \\\\\n              &\\ge -f^*(-\\bar{u}) - g^*(A^T \\bar{u})\n            \\end{aligned}\n          \\end{equation}\n    \\item Stopping criterion\n    \\item Dual problem is sometimes easier to solve\n  \\end{itemize}\n\\end{frame}\n\n\n\\section{Derivative free}%\n\n\n\\begin{frame}\n  \\frametitle{}\n\n  \\begin{center}\n    \\Huge\\textcolor{blue}{Zero-order Optimization}\\\\\n    $\\Leftrightarrow$ \\textcolor{blue}{Derivative-Free}\\\\\n    $\\Leftrightarrow$ \\textcolor{blue}{Blackbox}\n  \\end{center}\n\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Look, no gradients!}\n  Can we solve $\\min_{x\\in \\R^d}\\, f(x)$ without access to gradients?\n\n  \\begin{algorithm}[H]\n    \\caption{Random search}\n    \\begin{algorithmic}[1]\n      \\For{$k = 1,2, \\dots$}\n      \\State{pick a random direction $d_k \\in \\R^d$}\n      \\State{$\\gamma_k:= \\argmin_{\\gamma\\in \\R} f(x_k+ \\gamma d_k)$}\n      \\State{$x_{k+1} := x_k+ \\gamma_k d_k$}\n      \\EndFor{}\n    \\end{algorithmic}\n  \\end{algorithm}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Convergence rate for derivative-free random search}\n  Converges like gradient descent, up to a slow-down factor of $d$.\n\n  \\textcolor{blue}{Proof.}\n  \\begin{equation}\n    f(x_k+ \\gamma d_k ) \\le f(x_k) + \\gamma \\langle d_k, \\nabla f(x_k) \\rangle + \\frac{\\gamma^2L}{2} \\Vert d_k \\Vert^2\n  \\end{equation}\n  Minimizing the upper bound (RHS), there exists a step $\\bar{\\gamma}$ for which\n  \\begin{equation}\n    f(x_k + \\bar{\\gamma}d_k) \\le f(x_k) - \\frac{1}{2L} \\left\\langle \\frac{d_k}{\\Vert d_k \\Vert}, \\nabla f(x_k)\\right\\rangle^2.\n  \\end{equation}\n  So our (exact line-search) step can only be better:\n  \\begin{equation}\n    f(x_k + \\gamma_k d_k) \\le f(x_k + \\bar{\\gamma}d_k)\n  \\end{equation}\n  Taking expectation and using $\\E_r \\langle r, g \\rangle^2 = 1/d \\Vert g \\Vert^2$ for $r$ uniform on the unit sphere, gives\n  \\begin{equation}\n    \\E [f(x_k+ \\gamma_k d_k)] \\le \\E[f(x_k)] - \\frac{1}{2 L d} \\E\\left[ \\Vert \\nabla f(x_k) \\Vert^2\\right].\n  \\end{equation}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Convergence rate for derivative-free random search}\n  Same as what we obtained for \\textcolor{blue}{gradient descent},\\\\\n  now with an \\textcolor{blue}{extra factor of $d$}. $d$ can be huge!!!\\\\\n  \\bigskip\n  Similarly for other function classes\n  \\begin{itemize}\n    \\item For convex functions, we get a rate of $\\mathcal{O}(\\frac{d L}{\\epsilon})$.\n    \\item For $\\mu$-strongly convex functions, we get a rate of $\\mathcal{O}(d \\kappa \\log(1/\\epsilon))$.\n  \\end{itemize}\n  Always $d$ times the complexity of gradient descent on the function class.\\\\\n  \\bigskip\n  But this assumed differentiability. One can also approximate the gradient instead.\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Applications for derivative-free random search}\n\n  \\textcolor{blue}{Applications}\n  \\begin{itemize}\n    \\item competitive method for \\textcolor{blue}{reinforcement learning}\n    \\item No need to store a gradient\n    \\item hyperparameter optimization, and other difficult settings, e.g. 
discrete optimization\nproblems, black-box or noisy objectives\n  \\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}\n  \\frametitle{Reinforcement learning}\n  \\begin{equation}\n    s_{k+1} = f(s_k, a_k, e_k),\n  \\end{equation}\n  where $s_k$ is the \\textcolor{blue}{state} of the system, $a_k$ is the control \\textcolor{blue}{action}, and $e_k$ is some random \\textcolor{blue}{noise}. We assume existence of $f$, but it is unknown.\\\\\n  \\medskip\n  We search for a ``policy''\n  \\begin{equation}\n    a_k := \\pi(a_1, \\dots, a_{k-1}, s_0, \\dots, s_k)\n  \\end{equation}\n\n  \\medskip\n  \\textcolor{blue}{Goal:} Maximize reward\n  \\begin{equation}\n    \\begin{aligned}\n       \\max_{a_1, \\dots, a_N}\\, &\\E_{e_k} \\left[ \\sum_{k=1}^N R_k(s_k, a_k)\\right]\\\\\n       \\text{s.t.} & s_{k+1}=f(s_k, a_k, e_k)\n    \\end{aligned}\n  \\end{equation}\n\\end{frame}\n\n\\section{Methods in practice}%\n\n\\begin{frame}\n  \\frametitle{}\n  \\begin{center}\n    \\Huge\\textcolor{blue}{Adaptive \\& other SGD methods}\\\\\n  \\end{center}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Adagrad}\n  An adaptive variant of SGD\n  \\begin{algorithm}[H]\n    \\caption{Adagrad}\n    \\begin{algorithmic}[1]\n      \\For{$k = 1, \\dots$}\n      \\State{pick stochastic gradient $g_k$}\n      \\State{update $[G_k]_i = \\sum_{l=1}^{k}([g_l]_i)^2, \\quad \\forall i$}\n      \\State{update $[x_{k+1}]_i =[x_{k}]_i - \\frac{\\gamma}{\\sqrt{[G_k]_i}}[g_k]_i,  \\quad \\forall i$}\n      \\EndFor{}\n    \\end{algorithmic}\n  \\end{algorithm}\n  \\begin{itemize}\n    \\item chooses an \\textcolor{blue}{adaptive, coordinate-wise} learning rate\n    \\item strong performance in practice\n    \\item many variants: Adadelta, Adam, RMSprop\n  \\end{itemize}\n\n\n\\end{frame}\n\n\n\\end{document}\n", "meta": {"hexsha": "c31a426d9f93056a89259dd6524235046960b9d1", "size": 9875, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/15_misc/misc_topics.tex", "max_stars_repo_name": "AxelBohm/optimization-for-DS-lecture", "max_stars_repo_head_hexsha": "4e1a4b208e45d6cb26ff2e90156da0c1450553bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-10-03T14:40:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:34:36.000Z", "max_issues_repo_path": "slides/15_misc/misc_topics.tex", "max_issues_repo_name": "AxelBohm/optimization-for-DS-lecture", "max_issues_repo_head_hexsha": "4e1a4b208e45d6cb26ff2e90156da0c1450553bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-10-21T13:02:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-06T19:50:32.000Z", "max_forks_repo_path": "slides/15_misc/misc_topics.tex", "max_forks_repo_name": "AxelBohm/optimization-for-DS-lecture", "max_forks_repo_head_hexsha": "4e1a4b208e45d6cb26ff2e90156da0c1450553bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-10-05T21:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T15:38:30.000Z", "avg_line_length": 30.859375, "max_line_length": 205, "alphanum_fraction": 0.6299746835, "num_tokens": 3546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.5711479033940008}}
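To complement the Adagrad pseudocode on the last slide, here is a minimal NumPy sketch of our own.  The stochastic-gradient oracle \\texttt{grad} is an assumed user-supplied function, and the small constant \\texttt{eps} guarding the division is a standard safeguard not shown on the slide.\n\\begin{verbatim}\nimport numpy as np\n\ndef adagrad(grad, x0, gamma=0.1, iters=100, eps=1e-8):\n    x = np.asarray(x0, dtype=float).copy()\n    G = np.zeros_like(x)   # per-coordinate sum of squared gradients\n    for _ in range(iters):\n        g = grad(x)\n        G += g * g         # [G_k]_i = sum over l <= k of ([g_l]_i)^2\n        x -= gamma / np.sqrt(G + eps) * g   # adaptive coordinate-wise step\n    return x\n\\end{verbatim}\n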
{"text": "\\documentclass[a4paper,12pt]{article}\n\\usepackage{amsmath,amssymb}\n\\usepackage[margin=1in]{geometry}\n\n\\begin{document}\n\n\\paragraph{Problem 4 Hints}\nSo for this problem just assume that $X = \\mathbb{R}^n$.\nThen $a\\cdot x = a_1 x_1 + \\cdots + a_n x_n$.\nGiven two vectors $x,y \\in \\mathbb{R}^n$ and scalars $\\alpha, \\beta \\in \\mathbb{R}$ we have\n\\begin{align}\n\ta \\cdot (\\alpha x + \\beta y) &= a\\cdot (\\alpha x) + a\\cdot (\\beta y) \\nonumber \\\\\n\t&= \\alpha(a\\cdot x) + \\beta (a\\cdot y) \\nonumber \n\\end{align}\ntherefore its a linear map.\n\nFor part (ii), this is basically the proof that linear transformations between real vector spaces can be represented by real valued matrices.\nIn this case the matrices will all be $n\\times 1$.\nConsider a function $f\\in X^*$.\nBy definition $f$ is a linear map from $X = \\mathbb{R}^n$ to $\\mathbb{R}$.\nBecause $X$ is a linear subspace and $\\mathbb{R}$ is a linear subspace and $f$ is a linear map we know that $f(X)$ is a linear subspace in $\\mathbb{R}$.\nBecause $X$ and $f(X)$ are subspaces they have a basis.\nLet $(\\mathbf{e}_1,\\dots, \\mathbf{e}_n)$ be the coordinate vectors for $\\mathbb{R}^n$ (which are a basis for $X$).\nThen for any $x\\in X$ we can write $\\mathbf{x} = x_1 \\mathbf{e}_1 + \\cdots + x_n \\mathbf{e}_n$ (a linear combination of the basis vectors).\nApplying our function $f$ to $x$ gives us\n\\begin{align}\n\tf(\\mathbf{x}) &= f( x_1 \\mathbf{e}_1 + \\cdots + x_n \\mathbf{e}_n ) \\nonumber \\\\\n\t&= x_1 f(\\mathbf{e}_1) + \\cdots + x_n f(\\mathbf{e}_n) \\nonumber\n\\end{align}\nSince $\\mathbf{e}_i$, for $i=1,\\dots,n$ is a vector we know $f(\\mathbf{e}_i) \\in \\mathbb{R}$.\nFor all $i = 1,\\dots, n$ define $f(\\mathbf{e}_i) = a_i$.\nThen \n\\begin{align}\n\tf(\\mathbf{x}) &= x_1 a_1 + \\cdots + x_n a_n \\nonumber \\\\\n\t&= a_1 x_n + \\cdots + a_n x_n \\nonumber \\\\\n\t&= \\mathbf{a} \\cdot \\mathbf{x} \\nonumber\n\\end{align}\nso we can define the $\\mathbf{a} = (f(\\mathbf{e}_1), \\dots, f(\\mathbf{e}_n))$.\n\nFor part (iii), you should use the defined operations to show that the vector space axioms hold for this set so that its a vector space.\nThe key operations are that if $f$ and $g$ are linear functions in the set, then these will be our vectors.\nSo $f + g$ is defined as $f(x) + g(x)$ where $x$ is a vector in $X$.\nIf $\\alpha$ is a real number, then we scale the vectors in $X^*$ as $\\alpha f = \\alpha f(x)$.\nAll the axioms should be tested.\n\nFor part (iv) the idea is to use what we just learned in part (ii), that every linear functional $f \\in X^*$ can be uniquely determined by a vector $\\mathbf{a} = (f(\\mathbf{e}_1),\\dots,f(\\mathbf{e}_n))$.\nWe can think about finding a set of such $\\mathbf{a}$ vectors.\nFor example suppose we have a set of $n$ vectors, say the coordinate vectors again, $(\\mathbf{e}_1,\\dots, \\mathbf{e}_n)$, do each of these represent their own linear functional? 
Yes.\nNow that we are calling them functionals, let's represent the set as $\\{f_1,\\dots, f_n\\}$.\nAt this point we establish that any linear functional is just a linear combination of these $n$ linear functionals and that the set is linearly independent.\nOnce you've done that you will have shown that this set of $n$ vectors is a basis for $X^*$ and therefore $\\dim\\left(X^*\\right) = n$.\n\n\\paragraph{Problem 10 Hints}\nDefine the profit function $\\pi(p) = \\max\\{p\\cdot y : y\\in Y\\}$.\nThis function tells you the maximum profit over technology $Y$ when prices are $p$.\nLet's suppose we have some fixed $p_0 \\in \\mathbb{R}^n$.\nThen what do we know about $\\pi(p_0)$?\nWell we know that for all $y\\in Y$ it must be the case that $p_0\\cdot y \\leq \\pi(p_0)$, right?\nBy definition of the maximum, either $y$ achieves the maximum and $p_0\\cdot y = \\pi(p_0)$ or else it must be that $p_0\\cdot y < \\pi(p_0)$.\nSo what if we look at all the vectors $y \\in \\mathbb{R}^n$ (notice, not just in $Y$) whose dot product with $p_0$ equals $\\pi(p_0)$?\n\\[\n  \\{ y \\in \\mathbb{R}^n: p_0\\cdot y = \\pi(p_0) \\}\n\\]\nYou should notice from previous exercises that this is a hyperplane in $\\mathbb{R}^n$ and we have associated with it two half-spaces.\n\\[\n  H^{\\leq}_{p_0}(\\pi(p_0)) = \\{ y \\in \\mathbb{R}^n : p_0 \\cdot y \\leq \\pi(p_0) \\} \\qquad H_{p_0}^{\\geq}(\\pi(p_0)) = \\{ y \\in \\mathbb{R}^n : p_0 \\cdot y \\geq \\pi(p_0) \\}\n\\]\nThe left half-space could be referred to as the ``lower'' half-space and the right one as the ``upper'' half-space.\nOf course we get the hyperplane by simply taking the intersection $H_{p_0}^{\\leq}(\\pi(p_0)) \\cap H_{p_0}^{\\geq}(\\pi(p_0))$.\n\nIf it's the case that $Y \\subseteq H_{p_0}(\\pi(p_0))$ what should we be able to show?\nAs you know from the proofs material the subset relation is an implication of the form $y\\in Y \\implies y \\in H_{p_0}(\\pi(p_0))$.\nAs discussed we know that for any $y \\in Y$ it must be the case that $p_0\\cdot y \\leq \\pi(p_0)$ so that rules out the ``upper'' half-space, right?\nSo we focus on the ``lower'' half-space.\nLet $y \\in Y$; then by definition of $\\pi(p_0)$ we know that $p_0 \\cdot y \\leq \\pi(p_0)$, and since $y \\in Y \\implies y \\in \\mathbb{R}^n$ we know that $y \\in \\{ y \\in \\mathbb{R}^n : p_0 \\cdot y \\leq \\pi(p_0) \\} = H_{p_0}^{\\leq}(\\pi(p_0))$.\nSince our choice of $y \\in Y$ was arbitrary, it holds for all $y \\in Y$ and thus $Y\\subseteq H_{p_0}^{\\leq}(\\pi(p_0))$.\n\nRemember that the dot product in this case is a function mapping $\\mathbb{R}^n$ into $\\mathbb{R}$.\nHence, $p\\cdot y$ and $\\pi(p)$ are both real numbers.\nThe ``normed distance'' or metric distance in $\\mathbb{R}$ is the absolute value, like we used in the calculus proofs on assignment 2.\nSo we will look at the difference $|p\\cdot y - \\pi(p)|$ over the points in a hyperplane and in $Y$.\nAs previously discussed we know that it is always true for elements of $Y$ that $p_0 \\cdot y \\leq \\pi(p_0)$, implying that $p_0\\cdot y  - \\pi(p_0) \\leq 0$.\nThe only vectors in $Y$ for which the difference will actually be zero are those vectors $y^*$ that are the profit maximizing production plans.\nOkay so now consider again the hyperplane $\\{ y \\in \\mathbb{R}^n: p_0 \\cdot y = \\pi(p_0)\\}$.\nIf any point in the hyperplane is also an element of $Y$, then would it be a profit maximizing production plan? 
It would be.\nSo saying the hyperplane ``touches'' $Y$ is essentially saying they have an element in common (technically there are topological arguments that slightly loosen this, but we haven't covered those yet, so...).\nIn our case, as long as $\\exists y^* \\in Y$ such that $p_0\\cdot y^* = \\pi(p_0)$, that same point $y^*$ is also in the hyperplane and $|p_0 \\cdot y^* - \\pi(p_0)|=0$.\nAs for the question about $F(y)$, just ignore it; I realize now I was asking too much on this.  The point here is that all points on the ``boundary'' of $Y$ will satisfy $F(y)=0$ and that the profit maximizing production plans, if any, will be on the boundary.\n\\textbf{So what}? Well we have now established that $Y$ is contained in $H_{p_0}^{\\leq}(\\pi(p_0))$ and that the hyperplane ``touches'' the boundary of $Y$, and all this essentially makes the hyperplane \\emph{tangent} to the set $Y$, so we call it a \\textbf{supporting hyperplane}.\n\nFor part (iii) remember what you learned about what it means for two sets to be equal\n\\[\n  A = B \\iff [A \\subseteq B \\wedge B \\subseteq A]\n\\]\nNow for all the previous discussion we were working with $p_0$; was there anything special about $p_0$?\nNo, we just picked it arbitrarily from the price set, which should consist of strictly positive vectors -- no negative or zero prices -- sorry for a typo in the assignment.\nAnyways, since we picked $p_0$ arbitrarily all our previous results should hold for all elements of the set that we drew $p_0$ from.\nSo for any $p \\in \\mathbb{R}^n$ we have a supporting hyperplane that contains $Y$ in its ``lower'' half-space and ``touches'' $Y$ on its boundary.\nRemember that for any $p$ we can define $\\pi(p) = \\max\\{p\\cdot y: y \\in Y\\}$ and we could also create the hyperplane $H(p) = \\{ y \\in \\mathbb{R}^n: p\\cdot y = \\pi(p)\\}$, which means we could also create the ``lower'' half-space that we know will contain $Y$, or $H_{p}^{\\leq}(\\pi(p))=\\{ y \\in \\mathbb{R}^n : p\\cdot y \\leq \\pi(p) \\}$.\nHopefully you remember from your reading on sets that we can represent \\emph{indexed collections} of sets (sets of sets), so that the set of all ``lower'' half-spaces could be represented as $\\{ H_{p}^{\\leq}(\\pi(p)): p \\in \\mathbb{R}^n \\}$ -- which is the same as $\\{ A(p) : p \\in \\mathbb{R}^n\\}$ from the problem.\n\nOkay so we want to show the following\n\\[\n\tY \\subseteq \\cap_{p \\in \\mathbb{R}^n} A(p) \\text{ and } \\cap_{p \\in \\mathbb{R}^n} A(p) \\subseteq Y\n\\]\nLet $y \\in Y$. Then since $Y$ is contained in \\emph{every} half-space of the form $A(p)$ we know that it is in the intersection of all half-spaces of the form $A(p)$, right? 
Why?\nThis means that $y$ is also in $\\cap_p A(p)$.\n\nThe other direction is a little more involved.\nOne way to go about this would be proof by contradiction.\nEssentially, we assume, to the contrary, that $y \\notin Y$.\nIf we can show that this implies that $\\exists p$ such that $p\\cdot y > \\pi(p)$, then this means there exists some half-space $A(p)$ that doesn't contain $y$, which contradicts our primary assumption.\n\nLet $y \\in \\cap_p A(p)$ and to the contrary let $y \\notin Y$; what do we know?\nWell this means that $y$ is in every half-space of the form $\\{ x \\in \\mathbb{R}^n : p\\cdot x \\leq \\pi(p)\\}$, so we know that $p\\cdot y \\leq \\pi(p)$ for all $p$.\nEven though $y \\notin Y$ we can pick a point $z\\in Y$ that is ``closest'' to $y$ in the sense that $z$ minimizes the normed distance $\\| y - z \\|$.\nNow the basic idea is that we can form a price vector, $p = y - z$, and think about the value $\\pi(p)$.\nLet's create a value $c = p\\cdot z$.\nFrom this we can think of a hyperplane $H_{p}(c) = \\{ x \\in \\mathbb{R}^n : p\\cdot x = c \\}$.\nBecause $p\\cdot z = c$ and $z \\in Y$ we know $z \\in H_p(c)$ and that $H_p(c)$ touches $Y$.\n\nWithout going into all the details this ends up being the point.\nYou end up establishing that our vector $z \\in Y$ that is closest to $y \\notin Y$ ends up achieving the maximum $\\pi(p)$ and that for all other vectors $v \\in Y$, we have $p\\cdot v \\leq \\pi(p)$, so that $Y$ is contained by the ``lower'' half-space of the hyperplane we created.\nHowever, we find the following holds for the vector $y \\notin Y$.\n\\begin{align}\n\tp\\cdot y - c &= p\\cdot y - p\\cdot z \\nonumber \\\\\n\t&= p\\cdot (y - z) \\nonumber \\\\\n\t&= (y - z)\\cdot (y - z) \\nonumber \\\\\n\t&= \\| y - z \\|^2  \\nonumber \\\\\n\t&> 0 \\text{ since } y \\neq z \\nonumber \n\\end{align}\nWhat this implies is that $p\\cdot y > \\pi(p)$.\nSo essentially what you'll end up showing, after more work than I now expect you to do, is that we created a price $p$ with a hyperplane $\\{ x \\in \\mathbb{R}^n : p\\cdot x = \\pi(p)\\}$ that \\textbf{separates} the set $Y$ from the point $y\\notin Y$.\nThis implies that $y$ is not in the lower half-space and hence couldn't be in the intersection $\\cap_p A(p)$, which is where we get our contradiction.\n\n\\textbf{Duality}  The idea of duality is that given technology $Y$ you can use the space of prices to create the profit (value) function $\\pi(p)$, which tells you the profit of the profit maximizing firm with technology $Y$ and prices $p$.\nIf instead we know $\\pi(p)$, but not $Y$, we can construct $Y$ (or at least the convex hull of it) using our knowledge of $\\pi(p)$ and the half-spaces it generates for us.\n\n\\textbf{You'll encounter hyperplanes, half-spaces, convex sets, duality and so on several times in consumer theory, producer theory and general equilibrium, as they play a large role in convex optimization problems.}\n\n\\end{document}", "meta": {"hexsha": "35f5ce27d908a72c2c54d1c77fa02a7e9d11ab7e", "size": 11554, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/pdfs/math_bootcamp/2016/assignment2_hints.tex", "max_stars_repo_name": "joepatten/joepatten.github.io", "max_stars_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/pdfs/math_bootcamp/2016/assignment2_hints.tex", "max_issues_repo_name": 
"joepatten/joepatten.github.io", "max_issues_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-08-09T16:28:31.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-10T14:48:57.000Z", "max_forks_repo_path": "assets/pdfs/math_bootcamp/2016/assignment2_hints.tex", "max_forks_repo_name": "joepatten/joepatten.github.io", "max_forks_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.9432624113, "max_line_length": 331, "alphanum_fraction": 0.6872078934, "num_tokens": 3787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.72487026428967, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5711478904180044}}
{"text": "\\documentclass{standalone}\n\\begin{document}\n\t\\chapter{Complex Numbers}\n\t\\section{Loci}\n\t$\\quad$ Let $z=x+yi$ where the complex number $z$ is represented on the Argand diagram by the line $OP$ where $P$ is the point $(x,y)$. In general, as $x$ and $y$ are variable $P$ can be anywhere on the Argand diagram. Suppose however that a condition is imposed on $z$. Consider the case where $\\abs{z} = 4$. In this case the position of $P$ is restricted such that the line $OP$ is of a constant length of 4 units. Thus, $P$ can lie anywhere on the circumference of a circle centre $(0,0)$ with radius 4.\\\\\\\\\n\t\\quad Thus, the locus of $P$ is the circle with equation:\n\t\\begin{center}\n\t\t\\begin{tcolorbox}[before = \\centering, center title,hbox,    %%<<---- here\n\t\t\tlifted shadow={1mm}{-2mm}{3mm}{0.1mm}%\n\t\t\t{black!50!white}]\n\t\t\t\\begin{varwidth}{300pt}\n\t\t\t\t\\begin{align*}\t&&x^2+y^2 &= 4^2 \\qquad\\text{Cartesian form}\\\\\n\t\t\t\t\t&   & \\abs{z} & = 4 \\qquad\\text{Complex form} \n\t\t\t\t\\end{align*}\n\t\t\t\\end{varwidth}\n\t\t\\end{tcolorbox} \n\t\\end{center}\n\t\n\t\n\t\n\tAlternatively, we can say that if $\\abs{z} = r^2$\n\t\n\t\\begin{example}\n\t\tIf $z=x+yi$ and $P$ is the point $(x,y)$ , find the locus of $P$ such that:\\\\\n\t\t$\\qquad\\qquad$a) $\\abs{x-4} = 5$\\\\\n\t\t$\\qquad\\qquad$ b) $\\abs{x+2-i} = 7$\n\t\\end{example}\n\t\\begin{example}\n\t\tFind the locus of $z$ if $\\abs{z-1} = k\\abs{z+4}$ when $k=1, k=3$\n\t\\end{example}\n\tLet $z=x+yi$\n\t\\begin{multicols}{2}\n\t\t\\begin{center}\n\t\t\t\\begin{alignat*}{2}\n\t\t\t\t&                  & \\abs{x+yi-1}         & = \\abs{x+yi+4}         \\\\\n\t\t\t\t& \\implies\\quad    & \\abs{(x-1) + yi}     & = \\abs{(x+4) + yi}     \\\\\n\t\t\t\t& \\implies\\quad    & \\sqrt{(x-1)^2 + y^2} & = \\sqrt{(x+4)^2 + y^2} \\\\\n\t\t\t\t& \\implies\\quad    & (x-1)^2 + y^2        & = (x+4)^2 + y^2        \\\\\n\t\t\t\t& \\implies \\quad   & x^2 - 2x + 1         & = x^2+8x+16            \\\\\n\t\t\t\t& \\therefore \\quad & x                    & = \\frac{-3}{2}         \n\t\t\t\\end{alignat*}\n\t\t\\end{center}\n\t\t\n\t\t\\begin{center}\n\t\t\t\\includegraphics[scale=0.7]{ex1a complex loci}\n\t\t\\end{center}\n\t\\end{multicols}\n\t\n\t\\begin{example}\n\t\tGiven $\\abs{\\frac{z-1}{z+4i}} = 2 $ , find the locus of $z$.\n\t\\end{example}\n\tRemembering: $\\abs{\\frac{z_1}{z_2}} = \\frac{\\abs{z_1}}{\\abs{z_2}}$\n\t\n\t\n\tLet $z = x+yi$\n\t\\begin{alignat*}{2}\n\t\t&                & \\abs{\\frac{z-1}{z+4i}} & = 2                   \\\\\n\t\t& \\implies \\quad & \\abs{z-1}              & = 2\\abs{z+4i}         \\\\\n\t\t& \\implies \\quad & \\abs{(x-1)+yi}         & = 2\\abs{x+(y+4)i}     \\\\\n\t\t& \\implies \\quad & \\sqrt{(x-1)^2+y^2}     & = 2\\sqrt{x^2+(y+4)^2} \\\\\n\t\t& \\implies \\quad & \\sqrt{(x-1)^2+y^2}     & = 2\\sqrt{x^2+(y+4)^2} \\\\\n\t\\end{alignat*}\n\t\\hrulefill\n\t\\begin{example}\n\t\tGiven that $z_A = \\frac{1}{10} (-1+i)$ and $z_B = -\\frac{1}{500} (11+127i)$ , find in the form a+bi the complex numbers $\\frac{z_A}{z_B}$.\n\t\tIf $P(x,y)$ is the point on the Argand diagram representing $z=x+yi$, determine the equation of the locus of $P$ where $\\abs{z-z_A} = \\abs{z-z_B}$.\n\t\\end{example}\n\t$$\\frac{z_A}{z_B}  =  \\frac{1}{10} (-1+i) \\div -\\frac{1}{500} (11+127i)$$\n\t\\begin{alignat*}{2}\n\t\t&   &                                     & =-5\\cdot \\frac{-1+i}{11+127i}                         \\\\\n\t\t&   &                                     & = 
\\frac{50-50i}{11+127i} \\cdot  \\frac{11-127i}{11-127i} \\\\\n\t\t&   &                                     & = a+bi                                                \\\\\n\t\t&   & \\therefore \\quad a=-\\frac{116}{325} & ,\\quad  b=-\\frac{138}{325}                            \\\\\n\t\t&   & \\abs{z-z_A}                         & = \\abs{z-z_B}                                         \n\t\\end{alignat*}\n\t\\newpage\n\t\\begin{example}\n\t\tIf the real part of $\\dfrac{z+2}{z+2i}$ is equal to 1, show that the point $z$ lies on a straight line. Hence find the point $z_0$ on this line such that  $\\abs{z_0} = \\sqrt{2}$. Find also the quadratic equation with real coefficients which has $z_0$ as one of the roots.\n\t\\end{example}\n\t\n\tLet $z = x+yi$\\\\\n\t\\textbf{Showing that $\\boldsymbol{z}$ lies on a straight line: }\n\t\\begin{align*}\n\t\t&   & Re(\\frac{z+2}{z+2i})           & = 1                                                                       \\\\\n\t\t&   & \\implies \\quad 1               & = Re(\\frac{x+2+yi}{x+2i+yi})                                              \\\\\n\t\t&   & \\implies \\quad 1               & = Re\\left(\\frac{(x+2)+yi}{x+(y+2)i}\\cdot \\frac{x-(y+2)i}{x-(y+2)i}\\right) \\\\\n\t\t&   & \\implies \\quad 1               & = Re\\left(\\frac{x(x+2) - i(x+2)(y+2) + ixy + y(y+2)}{x^2 + (y+2)^2}\\right)  \\\\\n\t\t&   & \\implies \\quad 1               & = \\frac{x(x+2) + y(y+2)}{x^2 + (y+2)^2}                                   \\\\\n\t\t&   & \\implies \\quad x(x+2) + y(y+2) & = x^2  + y^2+4y+4                                                         \\\\\n\t\t&   & \\implies \\quad 2y              & = 2x-4                                                                    \\\\\n\t\\end{align*}\n\t$$\\therefore \\quad y = x-2 \\qed$$\n\t$$\\text{Point } z \\text{ lies on the straight line } y = x-2$$\n\t\n\t\\begin{example}\n\t\tShade on an Argand diagram the area represented by $\\abs{z+i} < 4$.\n\t\\end{example}~\\\\\n\t\\setlength{\\columnseprule}{0pt}\n\t\\textbf{Finding the loci: }\n\t\\begin{multicols}{2}\n\t\t\\begin{center}\n\t\t\t\\begin{align*}\n\t\t\t\t&   & \\abs{z+i} & = \\abs{x+(y+1)i}                                   \\\\\n\t\t\t\t&   &           & =\\sqrt{x^2  + (y+1)^2}                             \\\\\n\t\t\t\t&   &           & =\\sqrt{(x-0)^2 + (y-(-1))^2}                       \\\\\n\t\t\t\t&   &           & \\text{ is the distance from point (0,-1) to (x,y)} \n\t\t\t\\end{align*}\n\t\t\\end{center}\n\t\t\\begin{center}\n\t\t\t\\includegraphics[scale=0.4]{complex_loci_ex5_circle}\n\t\t\\end{center}\n\t\\end{multicols}\n\n\n\t\\textbf{Finding $\\boldsymbol{z_0= (a,b) = a+bi\\colon}$}\n\t\\reqnomode\n\t\n\t\\begin{alignat*}{2}\n\t\t&               & z_0           & = a+bi           \\\\\n\t\t& \\implies      & \\abs{z_0}     & = \\sqrt{a^2+b^2} \\\\\n\t\t\\intertext{Given:  $\\abs{z_0}= \\sqrt{2}$}\n\t\t& \\implies\\quad & \\sqrt{a^2+b^2} & = \\sqrt{2}       \\\\\n\t\t& \\implies\\quad & a^2+b^2       & = 2\\tag{1}       \\\\\n\t\t\\intertext{We also know $z_0$ is on the line $y=x-2$}\n\t\t\\therefore    b  = a-2\\tag{2}\t\n\t\t\\intertext{Substituting (2) into (1): }\n\t\t&               & a^2 + (a-2)^2 & = 2              \\\\\n\t\t& \\implies\\quad & 2a^2 - 4a+4   & = 2              \\\\\n\t\t& \\implies\\quad & a^2-2a+1      & =0               \\\\\n\t\t& \\implies\\quad & a             & = 1, \\quad b = a-2 = -1 \\\\\n\t\\end{alignat*}\n\t$$\t\\therefore\\quad z_0 = 1-i \\qed$$\\\\\n\t\\textbf{Finding quadratic equation with roots: $\\boldsymbol{z_0, \\bar{z_0}}$}\n\t\\begin{center}\n\t\t$$x^2 - (\\text{sum of 
roots})x + (\\text{product of roots}) = 0$$\n\t\t$$\\implies x^2 - (1-i + 1+i)x + (1-i)(1+i)=0$$\n\t\t$$\\implies x^2-2x+2 = 0\\qed$$\n\t\\end{center}\n\t\n\\end{document}", "meta": {"hexsha": "1d39edee97cf11afa714efb34aee1fd4b173ba52", "size": 6567, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pure Mathematics/Complex_Numbers.tex", "max_stars_repo_name": "Girogio/My-LaTeX", "max_stars_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-12T11:45:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-30T21:47:25.000Z", "max_issues_repo_path": "Pure Mathematics/Complex_Numbers.tex", "max_issues_repo_name": "Girogio/My-LaTeX", "max_issues_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pure Mathematics/Complex_Numbers.tex", "max_forks_repo_name": "Girogio/My-LaTeX", "max_forks_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.2446043165, "max_line_length": 511, "alphanum_fraction": 0.472057256, "num_tokens": 2545, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347362, "lm_q2_score": 0.8244619306896955, "lm_q1q2_score": 0.5710569466920041}}
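The worked answer $z_0 = 1 - i$ can be sanity-checked with Python's built-in complex numbers (a verification sketch, not part of the original notes):

\begin{verbatim}
# Sanity check of the worked example: z0 = 1 - i.
z0 = 1 - 1j

assert abs(abs(z0) - 2**0.5) < 1e-12          # |z0| = sqrt(2)
assert z0.imag == z0.real - 2                 # z0 lies on the line y = x - 2

# Quadratic with real coefficients having roots z0 and its conjugate:
s = z0 + z0.conjugate()                       # sum of roots = 2
p = z0 * z0.conjugate()                       # product of roots = 2
print(f"x^2 - {s.real:g}x + {p.real:g} = 0")  # x^2 - 2x + 2 = 0
assert z0**2 - s*z0 + p == 0                  # z0 really is a root
\end{verbatim}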
{"text": "\\section{Notes of The Hundred Page Machine Learning Book}\n\\label{section:100PageNotes}\n\nThis book was chosen as it is a very popular read, and is described as a good introduction to machine learning. Its concise nature allows a general understanding, and was selected to be my first read in my deeper endeavour of ML\\\\\n\nMachine learning algorithms are often described as \"supervised\", or \"unsupervised\". There are also \"semi-supervised\" and \"reinforcement\" machine learning algorithms.\\\\\n\\newpage\n\\begin{itemize}\n\t\\item \\textbf{Supervised Models} uses labeled sets of data to train an algorithm, and by iterating can predict more accurately. They often take feature vectors in, and output information accordingly. It is the most commonly used learning method.  The data is supervised to (input, output). \\\\\n\t\\item \\textbf{Unsupervised Models} are in charge of identifying patterns of unlabeled data. There is also a feature vector given into this learning alternative. \\\\\n\t\\item \\textbf{Semi-Supervised Models} is a mix of labelled and unlabelled data. The end goal is to build a strong algorithm. \\\\\n\t\\item \\textbf{Reinforcement Learning} is interpreted as a state, that can execute actions at every state. Depending on actions, a reward system is engaged accordingly. This helps a computer algorithm decipher policy. This is measured through excepted average reward. \\\\\n\\end{itemize}\n\nClassification and Regression are often mixed terms. In our terms, \\textbf{classification} entails automatically assigning labels to unlabeled data, where the then-labeled data can help develop a model. For example spam detection. For \\textbf{regression} is the pursuit of predicting a label based off an unlabeled example. For example, judging a house price, based off location, number of bedrooms, area etc. \\\\\n\nMachine Learning algorithms can be manually developed, however the industry standard is known to leverage libraries to source their work. \\\\\n\nA \\textbf{Neural Network} is a nested function,\n\n\\begin{equation}\n    y = f_N_N(x) = f_3(f_2(f_1(x)))\n\\end{equation}\n\\\\\nhence the common statement of \"layers\" within a neural network. The vector functions follow the form,\n\n\\begin{equation}\n    f_l(z)=g_l(W_lz + b_l)\n\\end{equation}\n\nwhere $l$ is the the layer index (spans from 1 to any number of layers), and $g_l$ is the activation function. 
The parameter $W_l$ is a matrix and $b_l$ is a vector.", "meta": {"hexsha": "0c1a9e5902a7e90233dc1462248964d92bc0b063", "size": 2389, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LatexCode/sections/100PageNeuralNetworkReview.tex", "max_stars_repo_name": "tylertraviss/MLIndependentProject", "max_stars_repo_head_hexsha": "3d56a81046e4396dde405993853654006a960e57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-27T18:01:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-27T18:01:35.000Z", "max_issues_repo_path": "LatexCode/sections/100PageNeuralNetworkReview.tex", "max_issues_repo_name": "tylertraviss/MLIndependentProject", "max_issues_repo_head_hexsha": "3d56a81046e4396dde405993853654006a960e57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LatexCode/sections/100PageNeuralNetworkReview.tex", "max_forks_repo_name": "tylertraviss/MLIndependentProject", "max_forks_repo_head_hexsha": "3d56a81046e4396dde405993853654006a960e57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.064516129, "max_line_length": 412, "alphanum_fraction": 0.7810799498, "num_tokens": 565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5710569437045527}}
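The nested-function view above is easy to make concrete. In the following NumPy sketch the layer sizes, the random weights, and the choice of activation $g_l = \tanh$ are illustrative assumptions, not from the book:

\begin{verbatim}
import numpy as np

# Nested-function view of a 3-layer network: y = f3(f2(f1(x))).
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]                        # input -> two hidden -> output
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]

def g(z):                                   # activation g_l (here: tanh)
    return np.tanh(z)

def f(l, z):                                # f_l(z) = g_l(W_l z + b_l)
    return g(Ws[l] @ z + bs[l])

x = rng.standard_normal(4)
y = f(2, f(1, f(0, x)))                     # the nested composition
print(y)
\end{verbatim}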
{"text": "\\documentclass{article}\n\\usepackage{amsmath, amsfonts}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\begin{document}\n\\section{Spring Loaded Inverted Pendulum Dynamics}\nFor a \\textit{spring loaded inverted pendulum} (SLIP) model, it has two modes, the flight phase and the stance phase. Its equation of motion is\n\\begin{itemize}\n\t\\item Flight phase, state being $\\mathbf{x}_{fl} = [x, z, \\dot{x}, \\dot{z}]^T$.\n\t\t\\begin{align}\n\t\t\t\\frac{d}{dt}\n\t\t\t\\begin{bmatrix} x\\\\z\\\\\\dot{x}\\\\\\dot{z} \\end{bmatrix} = f_{fl}(\\mathbf{x}_{fl})\n\t\t\t=\\begin{bmatrix}\\dot{x}\\\\\\dot{z}\\\\0\\\\-g\\end{bmatrix}\n\t\t\\end{align}\n\t\\item Stance phase, state being $\\mathbf{x}_{st}=[r, \\theta, \\dot{r}, \\dot{\\theta}, x_{foot}]^T$.\n\t\t\\begin{align}\n\t\t\t\\frac{d}{dt}\\begin{bmatrix}r\\\\\\theta\\\\\\dot{r}\\\\\\dot{\\theta}\\\\\\dot{x}_{foot}\\end{bmatrix} = f_{st}(\\mathbf{x}_{st})\n\t\t\t=\\begin{bmatrix}\\dot{r}\\\\\\dot{\\theta}\\\\k(l_0-r)/m - g\\cos\\theta+r\\dot{\\theta}^2\\\\g/r\\sin\\theta - 2\\dot{r}\\dot{\\theta}/r\\\\0\\end{bmatrix}\n\t\t\\end{align}\n\t\\item touch down guard\n\t\t\\begin{align}\n\t\t\tg_{TD}(\\mathbf{x}, \\mathbf{u}) = y -l_0\\cos\\theta \\ge 0\n\t\t\\end{align}\n\t\\item lift off guard\n\t\t\\begin{align}\n\t\t\tg_{LO}(\\mathbf{x}) = l_0 - r \\ge 0\n\t\t\\end{align}\n\t\\item touch down transition maps a pre impact state $\\mathbf{x}_{TD}^-$ to a post-impact state $\\mathbf{x}_{TD}^+$\n\t\t\\begin{align}\n\t\t\t\\mathbf{x}_{TD}^+ = \\Delta_{TD}(\\mathbf{x}_{TD}^-, \\mathbf{u}) = \\begin{bmatrix} l_0\\\\\\theta\\\\ -\\dot{x}\\sin\\theta+\\dot{y}\\cos\\theta\\\\(-\\dot{x}\\cos\\theta-\\dot{y}\\sin\\theta)/l_0\\\\x+l_0\\sin\\theta\\end{bmatrix}\n\t\t\\end{align}\n\t\\item lift off transition maps a pre-lift-off state $\\mathbf{x}_{LO}^-$ to a post-lift-off state $\\mathbf{x}_{LO}^+$\n\t\t\\begin{align}\n\t\t\t\\mathbf{x}_{LO}^+ = \\Delta_{LO}(\\mathbf{x}_{LO}^-) = \\begin{bmatrix}x_{foot} - l_0\\sin\\theta\\\\ l_0\\cos\\theta\\\\-l_0\\cos\\theta\\dot{\\theta}-\\dot{r}\\sin\\theta \\\\ -l_0\\sin\\theta\\dot{\\theta}+\\dot{r}\\cos\\theta\\end{bmatrix}\n\t\t\\end{align}\n\\end{itemize}\n\n\\section{Return map}\nWe will build the apex-to-apex Poincar\\'e return map. We denote $\\mathbf{x}_{apex}[n]$ as the n'th state at the apex of the flight phase, and the apex map as $\\mathbf{x}_{apex}[n+1] = \\phi(\\mathbf{x}_{apex}[n], \\mathbf{u}[n])$. This return map is nonlinear, and we will linearize this return map around a nominal apex state $\\mathbf{x}^*_{apex}$ and leg angle $\\mathbf{u}^*$ to get a linear dynamic model. 
We will then build the linearized model at many linearization points, and approximate the nonlinear return map as hybrid linear systems.\n\nIn order to linearize the return map, we will use the following lemma\n\\begin{lemma}\n\t\\label{lemma:gradient_x0}\n\tFor a dynamical system $\\dot{\\mathbf{x}} = f(\\mathbf{x})$ with initial state $\\mathbf{x}_0$, at a fixed time $T$, the gradient $\\partial{\\mathbf{x}(T)}/\\partial \\mathbf{x}_0$ can be computed through integrating the following ODE on the matrix $\\partial\\mathbf{x}/\\partial\\mathbf{x}_0$\n\t\\begin{align}\n\t\t\\frac{d}{dt}\\frac{\\partial\\mathbf{x}}{\\partial \\mathbf{x}_0} = \\frac{\\partial f}{\\partial\\mathbf{x}}\\frac{\\partial \\mathbf{x}}{\\partial \\mathbf{x}_0}\n\t\\end{align}\n\twith the initial condition $\\partial \\mathbf{x}(0)/\\partial\\mathbf{x}_0 = I$.\n\\end{lemma}\n\\subsection{Linearization of return map}\nIn order to compute $\\frac{\\partial\\phi}{\\partial{\\mathbf{x}}_{apex}}$ and $\\frac{\\partial\\phi}{\\partial \\mathbf{u}}$, we first compute the gradient of the pre-touch-down state. The pre-touchdown state $\\mathbf{x}_{TD}^-$ is computed as\n\\begin{align}\n\t\\mathbf{x}_{TD}^- = \\mathbf{x}_{apex} + \\int_0^{t_{TD}} f_{fl}(\\mathbf{x}_{fl}(\\tau)) d\\tau\n\\end{align}\nUsing Lemma \\ref{lemma:gradient_x0}, we know\n\\begin{align}\n\t\\frac{\\partial \\mathbf{x}_{TD}^-}{\\partial \\mathbf{x}_{apex}} = \\underbrace{I + \\int_0^{t_{TD}} \\frac{\\partial f_{fl}}{\\partial \\mathbf{x}}\\frac{\\partial \\mathbf{x}}{\\partial \\mathbf{x}_{apex}} d\\tau}_{\\left.\\frac{\\partial \\mathbf{x}(t)}{\\partial x_{apex}}\\right|_{t=t_{TD}}} + f_{fl}(\\mathbf{x}_{TD}^-)\\frac{\\partial t_{TD}}{\\partial \\mathbf{x}_{apex}}\\\\\n\t\\frac{\\partial \\mathbf{x}_{TD}^-}{\\partial \\mathbf{u}} = f_{fl}(\\mathbf{x}_{TD}^-)\\frac{\\partial t_{TD}}{\\partial \\mathbf{u}}\n\\end{align}\nNote that we have the term $f_{fl}(\\mathbf{x}_{TD}^-)\\frac{\\partial t_{TD}}{\\partial \\mathbf{x}_{apex}}$ because the touch down time $t_{TD}$ is also a function of the initial apex state (and the control).\n\nTo compute the terms $\\frac{\\partial t_{TD}}{\\partial \\mathbf{x}_{apex}}$ and $\\frac{\\partial t_{TD}}{\\partial \\mathbf{u}}$, we take a full derivative of the guard function $g_{TD}$. 
Since the touchdown always happens at $g_{TD} = 0$, we have\n\\begin{align}\n\tdg_{TD} = \\frac{\\partial g_{TD}}{\\partial \\mathbf{x}}\\left.\\frac{\\partial \\mathbf{x}(t)}{\\partial \\mathbf{x}_{apex}}\\right|_{t=t_{TD}}d\\mathbf{x}_{apex} + \\frac{\\partial g_{TD}}{\\partial\\mathbf{x}}f_{fl}(x^-_{TD})dt_{TD} + \\frac{\\partial g_{TD}}{\\partial\\mathbf{u}}d\\mathbf{u} = 0\n\\end{align}\nHence we can compute $\\frac{\\partial t_{TD}}{\\partial \\mathbf{x}_{apex}}$ as\n\\begin{align}\n\t\\frac{\\partial t_{TD}}{\\partial \\mathbf{x}_{apex}} =-\\left(\\frac{\\partial{g}_{TD}}{\\partial \\mathbf{x}}f_{fl}(\\mathbf{x}_{TD}^-)\\right)^{-1}\\frac{\\partial g_{TD}}{\\partial \\mathbf{x}}\\left.\\frac{\\partial \\mathbf{x}(t)}{\\partial \\mathbf{x}_{apex}}\\right|_{t = t_{TD}}\n\\end{align}\nand  $\\frac{\\partial t_{TD}}{\\partial \\mathbf{u}}$ as\n\\begin{align}\n\t\\frac{\\partial t_{TD}}{\\partial \\mathbf{u}} = -\\left(\\frac{\\partial{g}_{TD}}{\\partial \\mathbf{x}}f_{fl}(\\mathbf{x}_{TD}^-)\\right)^{-1}\\frac{\\partial g_{TD}}{\\partial \\mathbf{u}}\n\\end{align}\n\nWe can then compute the gradient of the post-touch-down state $\\mathbf{x}_{TD}^+$ w.r.t the initial apex state using chain rule as\n\\begin{align}\n\t\\frac{\\partial \\mathbf{x}_{TD}^+}{\\partial \\mathbf{x}_{apex}} = \\frac{\\partial \\Delta_{TD}}{\\partial x}\\frac{\\partial \\mathbf{x}_{TD}^-}{\\partial \\mathbf{x}_{apex}}\\\\\n\t\\frac{\\partial \\mathbf{x}_{TD}^+}{\\partial \\mathbf{u}} = \\frac{\\partial \\Delta_{TD}}{\\partial x}\\frac{\\partial \\mathbf{x}_{TD}^-}{\\partial \\mathbf{u}} + \\frac{\\partial \\Delta_{TD}}{\\partial \\mathbf{u}}\n\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "ca4240d70efafa2eb2839133719616779edd531f", "size": 6012, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/linear_slip.tex", "max_stars_repo_name": "hongkai-dai/neural-network-lyapunov-1", "max_stars_repo_head_hexsha": "8843c13f69f7f39cbb939ab250413e76f61843f6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 58, "max_stars_repo_stars_event_min_datetime": "2021-06-21T08:59:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T14:35:23.000Z", "max_issues_repo_path": "doc/linear_slip.tex", "max_issues_repo_name": "StanfordASL/neural-network-lyapunov", "max_issues_repo_head_hexsha": "9e5db1c7f91b42df729026c9aa8575bc126f66b6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2021-08-22T05:31:23.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T03:47:07.000Z", "max_forks_repo_path": "doc/linear_slip.tex", "max_forks_repo_name": "StanfordASL/neural-network-lyapunov", "max_forks_repo_head_hexsha": "9e5db1c7f91b42df729026c9aa8575bc126f66b6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2021-06-21T04:29:59.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T05:54:43.000Z", "avg_line_length": 73.3170731707, "max_line_length": 542, "alphanum_fraction": 0.6684963407, "num_tokens": 2193, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765706, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5710569437045526}}
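Lemma 1 can be checked numerically on the flight phase, where $\partial f_{fl}/\partial \mathbf{x}$ is constant. The following sketch integrates the state and the sensitivity matrix together with forward Euler (the horizon, step count, initial state, and tolerance are illustrative choices) and compares one column of the result against a finite difference:

\begin{verbatim}
import numpy as np

# Lemma 1 sketch for the flight state x = (x, z, xdot, zdot):
# integrate S(t) = dx(t)/dx0 via dS/dt = (df/dx) S, S(0) = I.
g = 9.81

def f_fl(x):
    return np.array([x[2], x[3], 0.0, -g])

A = np.array([[0, 0, 1, 0],      # df_fl/dx is constant for ballistic flight
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0.0]])

def integrate(x0, T=0.3, n=3000):
    x, S, dt = x0.astype(float), np.eye(4), T / n
    for _ in range(n):           # forward Euler; accuracy is illustrative
        x, S = x + dt * f_fl(x), S + dt * (A @ S)
    return x, S

x0 = np.array([0.0, 1.0, 1.5, 2.0])
xT, S = integrate(x0)

eps = 1e-6                       # finite-difference check of column 0
col0 = (integrate(x0 + eps * np.eye(4)[0])[0] - xT) / eps
print(np.allclose(S[:, 0], col0, atol=1e-4))   # True
\end{verbatim}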
{"text": "\\documentclass{article}\n\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amssymb}\n\\usepackage{parskip}\n\\usepackage{graphicx}\n\n% Margins\n\\usepackage[top=2.5cm, left=3cm, right=3cm, bottom=4.0cm]{geometry}\n% Colour table cells\n\\usepackage[table]{xcolor}\n\n% Get larger line spacing in table\n\\newcommand{\\tablespace}{\\\\[1.25mm]}\n\\newcommand\\Tstrut{\\rule{0pt}{2.6ex}}         % = `top' strut\n\\newcommand\\tstrut{\\rule{0pt}{2.0ex}}         % = `top' strut\n\\newcommand\\Bstrut{\\rule[-0.9ex]{0pt}{0pt}}   % = `bottom' strut\n\n%%%%%%%%%%%%%%%%%\n%     Title     %\n%%%%%%%%%%%%%%%%%\n\\title{Research Assistant Crypto Response}\n\\author{Haile Lagi}\n\\date{September 26, 2020}\n\n\\begin{document}\n\\maketitle\n\n%%%%%%%%%%%%%\n% Finance   %\n%%%%%%%%%%%%%\n\n\\section{Finance}\nYara Inc is listed on the NYSE with a stock price of 40 - the company is not known\nto pay dividends. We need to price a call option with a strike of \\$45 maturing in\n4 months.\nThe continuously-compounded risk-free rate is 3\\%/year, the mean return on the\nstock is 7\\%/year, and the standard deviation of the stock return is 40\\%/year.\nWhat is the Black-Scholes call price?\n\nParameters given are (assuming the stock doesn't pay dividend it is assumed an American call option is equivalent to an European call option):\n\nConverting all parameters w.r.t. to 1 year:\n\nGiven that:\n\\begin{itemize}\n\\item Black-Scholes call price = $(C)$\n\\item  stock price$(S)$ = \\$40\n\\item call option strike$(X)$ = \\$45\n\\item  maturity$(T)$ = 4 months / 12 = 1/3\n\\item continuously-compounded risk-free rate$(r_{f})$ = 3/100 = 0.03\n\\item stock mean return$(\\mu)$ = 7/100 = 0.07\n\\item standard deviation($\\sigma$) = 40/100 = 0.4\n\\end{itemize}\n\nthe linear black-scholes general form for an European style call option is:\n\\begin{align}\n    \\label{eq:call_option}\n    C(S, T) = SN(x_{1}) - BN(x_{2})\n\\end{align}\n\nWhere:\n\\begin{align}\n   Bond price (B) = Xe^-r_{f} * T\n\\end{align}\n\nand $x_{1}$ is given by:\n\n\\begin{align}\n  x_{1} = \\frac{log(S/B)}{\\sigma \\sqrt{T}} + \\frac{1}{2}\\sigma \\sqrt{T}\n\\end{align}\n\n\nand to find $x_{2}$ from the value of $x_{1}$:\n\n\\begin{align}\n  x_{2} = x_{1} - \\sigma * \\sqrt{T}\n\\end{align}\n\nTo find the value of $N(x)$ which is the cumulative normal distribution we use the integral form:\n\n\\begin{align}\n    \\label{eq:gaussian}\n    N(x) = \\frac{1}{\\sqrt(2\\pi)}\\int_{-\\infty }^{x}e^{\\frac{-u^2}{2}}du\n\\end{align}\n\nFirst we find $B = Xe^-r_{f} * T = 40e^-0.03 * \\frac{1}{3} $\n$B = 44.552243$\n\nUsing this we find $x_{1}$\n\n\\begin{align}\n  x_{1} = \\frac{log(40/44.552243)}{0.4 \\sqrt{1/3}} + \\frac{1}{2} * 0.4 * \\sqrt{1/3}\n\\end{align}\n\n\\begin{align}\n  = \\frac{-0.10778304645914898}{0.2309401076758503} + (0.5 * 0.4 * 0.2309401076758503)\n\\end{align}\n\n\n\\begin{align}\n  x_{1}  \\approx\n  -0.3512439362402078046572242 \\approx -0.3512439\n\\end{align}\n\n\\begin{align}\n  x_{2}  = -0.3512439362402078046572242 - 0.4 * \\sqrt{1/3} \\approx\n  -0.5821840439160581104608837 \\approx -0.5821840\n\\end{align}\n\nNow we must find the Gaussian probability distribution functions of $N(x_{1})$ and $N(x_{2})$\n\n\\begin{align}\n    \\label{eq:gaussian_x_one}\n    N(x_{1}) = \\frac{1}{\\sqrt(2\\pi)}\\int_{-\\infty }^{-0.3512439}e^{\\frac{-u^2}{2}}du\n\\end{align}\n\n\\begin{align}\n    \\label{eq:gaussian_x_two}\n    N(x_{1}) = \\frac{1}{\\sqrt(2\\pi)}\\int_{-\\infty 
}^{-0.5821840}e^{\\frac{-u^2}{2}}du\n\\end{align}\n\nThe cumulative distribution function of a normal distribution has no closed form and cannot be evaluated analytically; the accuracy of the values obtained depends on the estimation method. Here I'll use a $Z$-table (standard normal table):\n\n\\begin{align}\n  N(x_{1}) = 0.3627026, \\quad N(x_{2}) = 0.2802213\n\\end{align}\n\nThen finally, substituting into the general form:\n\\begin{align}\n    C(S, T) = 40 \\times 0.3627026 - 44.552243 \\times 0.2802213\n\\end{align}\n\nThe Black-Scholes call price is $\\approx \\$2.0236$.\n\\end{document}\n", "meta": {"hexsha": "e2d03edf8906b2debdb45242ac96a0fec00e3148", "size": 3876, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "equations/Finanace.tex", "max_stars_repo_name": "obsessedyouth/Research-Assistant-Crypto", "max_stars_repo_head_hexsha": "3d9b42c3e7a745a591938544de43ea757384c907", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "equations/Finanace.tex", "max_issues_repo_name": "obsessedyouth/Research-Assistant-Crypto", "max_issues_repo_head_hexsha": "3d9b42c3e7a745a591938544de43ea757384c907", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "equations/Finanace.tex", "max_forks_repo_name": "obsessedyouth/Research-Assistant-Crypto", "max_forks_repo_head_hexsha": "3d9b42c3e7a745a591938544de43ea757384c907", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0869565217, "max_line_length": 184, "alphanum_fraction": 0.6664086687, "num_tokens": 1390, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.824461932846258, "lm_q1q2_score": 0.5710569377239861}}
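The worked example above can be reproduced end-to-end with the exact normal CDF via Python's math.erf instead of a Z-table; only the problem's own parameters are used, and the tiny difference from table lookups washes out in the fourth decimal:

\begin{verbatim}
from math import erf, exp, log, sqrt

def Phi(x):                               # exact standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

S, X, r, sigma, T = 40.0, 45.0, 0.03, 0.40, 1.0 / 3.0

B = X * exp(-r * T)                       # bond price, = 44.552243...
x1 = log(S / B) / (sigma * sqrt(T)) + 0.5 * sigma * sqrt(T)
x2 = x1 - sigma * sqrt(T)

C = S * Phi(x1) - B * Phi(x2)             # Black-Scholes call price
print(round(x1, 7), round(x2, 7))         # matches the rounded values above
print(round(C, 4))                        # ~ 2.0236, the worked answer
\end{verbatim}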
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%2345678901234567890123456789012345678901234567890123456789012345678901234567890\n%        1         2         3         4         5         6         7         8\n\n\\documentclass[letterpaper, 10 pt, conference]{ieeeconf}  % Comment this line out if you need a4paper\n\n%\\documentclass[a4paper, 10pt, conference]{ieeeconf}      % Use this line for a4 paper\n\n\\IEEEoverridecommandlockouts                              % This command is only needed if \n                                                          % you want to use the \\thanks command\n\n\\overrideIEEEmargins                                      % Needed to meet printer requirements.\n\n% See the \\addtolength command later in the file to balance the column lengths\n% on the last page of the document\n\n% The following packages can be found on http:\\\\www.ctan.org\n\\usepackage{graphics} % for pdf, bitmapped graphics files\n\\usepackage{epsfig} % for postscript graphics files\n\\usepackage{mathptmx} % assumes new font selection scheme installed\n\\usepackage{times} % assumes new font selection scheme installed\n\\usepackage{amsmath} % assumes amsmath package installed\n\\usepackage{amssymb}  % assumes amsmath package installed\n\\usepackage{amsthm}\n\n\\newtheorem{myTheo}{Theorem}\n\n\\theoremstyle{remark}\n\\newtheorem{myrem}{Remark}\n\n\\title{\\LARGE \\bf\nGeometric Part*\n}\n\n\n\\author{Xiao Ming Xiu$^{1}$  \\\\% <-this % stops a space\n\\\\\n\\footnotesize\\textit{\\today}\n\\thanks{*This work was supported by Standard-Robots Ltd.,co.}% <-this % stops a space\n\\thanks{$^{1}$Xiao Ming Xiu is with Visual SLAM Algorithm Group, Standard-Robots Ltd.,co. Shenzhen, China. \n        {\\tt\\small stevenxiu@aliyun.com}}%\n}\n\n\n\\begin{document}\n\n\n\n\\maketitle\n\\thispagestyle{empty}\n\\pagestyle{empty}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{abstract}\nGiven 3 2-D points' coordinates in two different Reference Systems, Barcode System (BS) and Camera System (CS), output two things:  $\\vec{t}$ and $\\theta$, defined as the CS origin's coordinates in BS and the rotated angle of CS relative to BS.  \n\n\\end{abstract}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Problem Setting}\n\\subsection{Given}\nNotation: $$ BS --- S $$  $$ CS --- S' $$\n\nThree Points' coordinates in both systems are known:\n$$ P1:  (x_1,y_1), (x_1',y_1') $$\n$$ P2:  (x_2,y_2), (x_2',y_2') $$\n$$ P3:  (x_3,y_3), (x_3',y_3') $$\n\n\\subsection{Want}\n$$\\vec{t} = (t_x, t_y)$$\n$$ \\theta $$\n\nwhere, \\\\$\\vec{t}$ is defined by S' origin in S.\\\\\n$\\theta$ is defined by\n\n\\begin{equation}\n\\begin{aligned}\n(\\epsilon_x',\\epsilon_y')&= (\\epsilon_x,\\epsilon_y)*R\\\\\n\t\t\t\t\t\t&= (\\epsilon_x,\\epsilon_y)*\n\\begin{bmatrix}\n\\cos{\\theta}&-\\sin{\\theta}\\\\\n\\sin{\\theta}&cos{\\theta}\n\\end{bmatrix}\n\\end{aligned}\n\\end{equation}\n\n($\\epsilon_x, \\epsilon_y$ denotes the basis vector in S.  $\\epsilon_x', \\epsilon_y'$ denotes the basis vector in S'.)\n\n\\subsection{Relationship Equation}\n\nDenote $\\vec{P}$ (column vector)as vector coordinate representation in S and $\\vec{P}'$ in S'. 
We have\n\n\\begin{equation}\n\\vec{P} = \\vec{t} + R*\\vec{P}' \n\\end{equation}\n\nWrite in explicit form:\n\n\\begin{equation}\n\\begin{bmatrix}\nx\\\\\ny\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nt_x \\\\\nt_y\n\\end{bmatrix}\n+\n\\begin{bmatrix}\n\\cos{\\theta}&-\\sin{\\theta}\\\\\n\\sin{\\theta}&\\cos{\\theta}\n\\end{bmatrix}\n*\n\\begin{bmatrix}\nx' \\\\\ny'\n\\end{bmatrix}\n\\end{equation}\n\nEq. (3) is necessary and sufficient.\\\\ \nYou have 3 unknown variables, $t_x, t_y, \\theta$. Given 1 pair of coordinates $\\vec{P}$ and $\\vec{P}'$, you get 2 independent equations. So at least 2 points are needed to solve all 3 variables, though 1 equation is then redundant (the system is overdetermined). \\\\\n\n\\section{Question: what is the minimal set of inputs sufficient to solve?}\n\\begin{myTheo}\nThe 6 values $x_1,y_1,x_1',y_1',x_2,x_2'$ are sufficient.\n\\end{myTheo}\n3 independent equations can already be set up from Eq. (3); we are then only left with a two-fold ambiguity.\\\\\n\n\\begin{myrem}\nThe set of 1.5 points, not 2,  is the minimum input.  \n\\end{myrem}\n\n\\begin{myrem}\nDoes there exist a case where the given input format cannot exactly match the necessary and sufficient conditions of the problem, so that some of the input is an unavoidably wasted, overdetermined condition?\n\\end{myrem}\n\n\n\\section{Solution}\nEliminate $\\theta$ first from Eq. (3) by taking squared norms (based on $R^T R=I$); Eq. (3) represents 2 equations:\n\\begin{equation}\n(x-t_x)^2 + (y-t_y)^2 = {x'}^2 + {y'}^2\n\\end{equation}\n\nRelying on points P1 and P2, we instantiate Eq. (4) as:\n\\begin{equation}\n(x_1-t_x)^2 + (y_1-t_y)^2 = {x_1'}^2 + {y_1'}^2 \n\\end{equation}\n\\begin{equation}\n(x_2-t_x)^2 + (y_2-t_y)^2 = {x_2'}^2 + {y_2'}^2\n\\end{equation}\n\nSubtracting Eq. (6) from Eq. (5), we get\n\n\\begin{equation}\nt_y=k t_x + b\n\\end{equation}\n\nwhere\n\\begin{equation}\nk=-\\frac{x_1 - x_2}{y_1 - y_2}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\nb=&{\\frac{1}{2}}*\\{ y_1 + y_2 \\\\\n&- {\\frac{1}{y_1 - y_2}}[x'_1^2 + y'_1^2 - x'_2^2 -y'_2^2 -x_1^2 + x_2^2]\\}\n\\end{aligned}\n\\end{equation}\n\nSubstitute $t_y$ in Eq. (5):\n\\begin{equation}\nAt_x^2 + Bt_x + C =0\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nA&=1+k^2 \\\\\nB&=-2[x_1 + k(y_1 -b)]\\\\\nC&= x_1^2 + (y_1 -b)^2 - x_1'^2 - y_1'^2\n\\end{aligned}\n\\end{equation}\n\nSolve this quadratic equation with discriminant\n$$\\Delta = B^2 - 4AC$$\n\n\\begin{equation}\nt_x= \\frac{-B \\pm \\sqrt{\\Delta} }{2A}\n\\end{equation}\n\nThe corresponding $t_y$ can be derived from Eq. (7).\n\nHere is the two-fold ambiguity: only one of the roots is correct. Check them with the third point P3:\n\n\\begin{equation}\n(x_3-t_x)^2 + (y_3 - t_y)^2 = x'_3^2 + y'_3^2\n\\end{equation}\n\nDefine\n\\begin{equation}\n\\delta_{check} = |(x_3-t_x)^2 + (y_3 - t_y)^2  - ( x'_3^2 + y'_3^2 )|\n\\end{equation}\nand test whether it vanishes.\n\n\nThe right root will satisfy Eq. (13). 
In practice we can set a threshold on $\\delta_{check}$.\n\n\\section{Experiments of Oct.17}\nIn the evening of Oct. 17, a 2 cm translation experiment was conducted.\n\n    \\begin{figure}  \n    \\begin{minipage}[t]{0.5\\linewidth}  \n    \\centering  \n    \\includegraphics[width=1.6in]{0cm1017.jpg}  \n    \\caption{0cm Case No Lean}  \n    \\label{fig:side:a}  \n    \\end{minipage}%  \n    \\begin{minipage}[t]{0.5\\linewidth}  \n    \\centering  \n    \\includegraphics[width=1.6in]{2cm1017.jpg}  \n    \\caption{2cm Case No Lean}  \n    \\label{fig:side:b}  \n    \\end{minipage}  \n    \\end{figure}  \n\n\\subsection{2cm case data}\n\nNotice that you need to transform the S' pixel units to cm.\n \n\\begin{table}[h]\n\\caption{2cm1017.jpg's P1,P2,P3}\n\\label{table_2cm}\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\n\\hline\n2cmNoLean & BS (cm)& CS (pixel)\\\\\n\\hline\n\nP1 & (0,0)& (267,455) \\\\\n\\hline\n\nP2 & $(3\\cos 30^\\circ,\\,3\\sin 30^\\circ)$ & (271,256) \\\\\n\n\\hline\nP3 & $(-3\\cos 30^\\circ,\\,3\\sin 30^\\circ)$ & (466,460)\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nOutput:\n\n$$t_x=7.94855$$\n$$t_y=-0.215983$$\n$$\\theta=120^\\circ$$\n\n\n\\subsection{0cm case data}\n\n \n\\begin{table}[h]\n\\caption{0cm1017.jpg's P1,P2,P3}\n\\label{table_0cm}\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\n\\hline\n0cmNoLean & BS (cm)& CS (pixel)\\\\\n\\hline\n\nP1 & (0,0)& (399,458) \\\\\n\\hline\n\nP2 & $(3\\cos 30^\\circ,\\,3\\sin 30^\\circ)$ & (406,257) \\\\\n\n\\hline\nP3 & $(-3\\cos 30^\\circ,\\,3\\sin 30^\\circ)$ & (599,463)\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nOutput:\n\n$$t_x=8.82632$$\n$$t_y=-2.04685$$\n$$\\theta=118.005^\\circ$$\n\n\\subsection{Translation Calculation}\n\\begin{equation}\n\\begin{aligned}\n\\text{Translation}&=\\sqrt{ (7.94855-8.82632)^2 +(-0.215983 + 2.04685)^2}\\\\\n\t\t   &=2.03\\ \\mathrm{cm}\n\\end{aligned}\n\\end{equation}\n\nThe real translation is exactly 2 cm, so the attained precision is about 0.3 mm. 
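Before moving on, the solver of Section 3 can be verified with a synthetic round trip. In the sketch below the ground-truth $\vec{t}$ and $\theta$ are made up for the test (the experimental pixel-to-cm scale is not restated here); P1--P3 mirror the BS layout of Table 1, and the correct root is selected with the $\delta_{check}$ test:

\begin{verbatim}
import numpy as np

# Synthetic round-trip check of the Section 3 solver; t and theta
# below are made-up ground truth, not the experimental values.
th, t = np.deg2rad(120.0), np.array([7.9, -0.2])
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

P = np.array([[0.0, 0.0],
              [3 * np.cos(np.pi / 6), 1.5],
              [-3 * np.cos(np.pi / 6), 1.5]])      # P1, P2, P3 in S (BS)
Pp = (P - t) @ R                                   # P' = R^T (P - t) in S'

(x1, y1), (x2, y2), (x3, y3) = P
r1, r2, r3 = (Pp**2).sum(axis=1)                   # x'^2 + y'^2 per point

k = -(x1 - x2) / (y1 - y2)                                    # Eq. (8)
b = 0.5 * (y1 + y2 - (r1 - r2 - x1**2 + x2**2) / (y1 - y2))   # Eq. (9)
A = 1 + k**2                                                  # Eq. (11)
B = -2 * (x1 + k * (y1 - b))
C = x1**2 + (y1 - b)**2 - r1
disc = np.sqrt(B**2 - 4 * A * C)                              # Eq. (12)

for tx in ((-B + disc) / (2 * A), (-B - disc) / (2 * A)):
    ty = k * tx + b                                           # Eq. (7)
    delta = abs((x3 - tx)**2 + (y3 - ty)**2 - r3)             # Eq. (14)
    if delta < 1e-6:
        print("recovered t =", np.round([tx, ty], 6))         # (7.9, -0.2)
\end{verbatim}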
\n\n\n\\section{Experiments of Oct.19}\nIn the evening of Oct. 19, we reused the data collected in September to check the algorithm again in the Lean case.\n\n    \\begin{figure}  \n    \\begin{minipage}[t]{0.5\\linewidth}  \n    \\centering  \n    \\includegraphics[width=1.6in]{0cm96Lean.jpg}  \n    \\caption{0cm Case, Lean}  \n    \\label{fig:side:c}  \n    \\end{minipage}%  \n    \\begin{minipage}[t]{0.5\\linewidth}  \n    \\centering  \n    \\includegraphics[width=1.6in]{1cm96Lean.jpg}  \n    \\caption{1cm Case, Lean}  \n    \\label{fig:side:d}  \n    \\end{minipage}  \n    \\end{figure}  \n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "657a67695e6b5461662c757c8233f92d2c035c96", "size": 8174, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "root.tex", "max_stars_repo_name": "xiaomingxiu/DirectMethod-Geometric-Part-of-Barcode", "max_stars_repo_head_hexsha": "e08af4da5a578ec5f818ca56055993f851d3edee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "root.tex", "max_issues_repo_name": "xiaomingxiu/DirectMethod-Geometric-Part-of-Barcode", "max_issues_repo_head_hexsha": "e08af4da5a578ec5f818ca56055993f851d3edee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "root.tex", "max_forks_repo_name": "xiaomingxiu/DirectMethod-Geometric-Part-of-Barcode", "max_forks_repo_head_hexsha": "e08af4da5a578ec5f818ca56055993f851d3edee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1507692308, "max_line_length": 246, "alphanum_fraction": 0.6367751407, "num_tokens": 2817, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455588, "lm_q2_score": 0.8244619199068831, "lm_q1q2_score": 0.5710569287616319}}
{"text": "\\section{Result}\n\n\\section{Results}\n\n\\subsection{Measurement for natural angular frequency}\nWe calculate the angular frequency from Table~\\ref{data_omega} by the formula\n\\[\n\\omega_0=\\frac{2\\pi}{T}.\n\\]\n\\begin{table}[H] \n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n& $10T[s] \\pm 0.001[s]$\\\\\\hline\n1 & 15.790  \\\\\\hline\n2 & 15.801  \\\\\\hline\n3 & 15.778  \\\\\\hline\n4 & 15.800  \\\\\\hline\n\\end{tabular}\n\\caption{Measurement of ten periods for the natural frequency}\n\\label{data_omega}\n\\end{table}\n\nHence the average value of $10T$ should be calculated as\n\n\\[\n\\overline{10T}=\\frac{1}{4}\\sum_{i=1}^{4}(10T)_i=15.79225 \\pm 0.0107 \\ s, \\quad\nu_{10T,r}=0.07\\%. \n\\]\n\nThe value of $\\omega_0$ is\n\n\\[\n\\overline{\\omega_0}=\\frac{20\\pi}{10T}=\\frac{20\\times3.1416}{15.79225}= 3.9787\n\\pm 0.006 \\  rad/s,\\quad u_{\\omega_0,r}=0.15\\%. \n\\]\n\n\\subsection{Measurement for damping coefficient}\n\nThe damping coefficient can be calculated by the following formula.\n\n\\[\n\\beta=\\frac{1}{5T}\\ln\\frac{\\theta_i}{\\theta_{i+5}}.\n\\]\n\n\nIn the experiment, we choose Damping Selection 2.\n\n\\begin{table}[H] \n\\centering\n\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\n\\multicolumn{2}{|c}{Amplitude[$\\degree$]$\\pm$1[$\\degree$]} & \n\\multicolumn{2}{|c|}{Amplitude[$\\degree$]$\\pm$1[$\\degree$]} &\n$\\ln(\\theta_i/\\theta_{i+5})$\\\\\\hline\n\n$\\theta_0$ & 149 & $\\theta_5$ & 95 & 0.4501 \\\\\\hline\n$\\theta_1$ & 136 & $\\theta_6$ & 86 & 0.4583 \\\\\\hline\n$\\theta_2$ & 124 & $\\theta_7$ & 79 & 0.4508 \\\\\\hline\n$\\theta_3$ & 114 & $\\theta_8$ & 72 & 0.4595 \\\\\\hline\n$\\theta_4$ & 103 & $\\theta_9$ & 65 & 0.4603 \\\\\\hline\n\\multicolumn{4}{|c|}{The average value of $\\ln(\\theta_i/\\theta_{i+5})$}\n    & 0.4558 \\\\\\hline \n\\end{tabular}\n\n\\caption{Measurement of the damping coefficient for Damping Selection 2}\n\\label{data_damping}\n\\end{table}\n\nThe experimental value of $\\ln(\\theta_i/\\theta_{i+5})$ is shown below\n\n\\[\n\\ln(\\theta_i/\\theta_{i+5})= 0.4558 \\pm 0.0050 , \\quad u_r=1.09\\%\n\\]\n\nHere, $T= 15.79225 /10= 1.5792 \\pm 0.0001s$. Then we can easily obtain $\\beta$\nas well, \n\n\\[\n\\beta=\\frac{1}{5\\times1.5792}\\times0.4558=  0.0577 \\pm 0.003s^{-1}, \\quad\nu_{\\beta,r}=5.2\\% \n\\]\n\n\\subsection{Measurement for $\\theta_{st}$ vs. $\\omega$ and $\\varphi$ vs.\n  $\\omega$}  \n\nTo study the relation between $\\varphi$ and $\\omega/\\omega_0$, \n\nfirst we get the raw data of 10T, $\\phi$ and $\\theta$,\nwe then process the raw data and list them in Table~\\ref{data_phi2} and\nTable~\\ref{data_phi3}. 
\n\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n& $10T [s] \\pm 0.001 [s]$ &  $\\varphi  [\\degree]  \\pm 1 [\\degree]$ & $ \\theta [\\degree] \\pm 1 [\\degree]$ \\\\ \\hline \n\n1  & 15.098 & -162  &  38   \\\\ \\hline\n2  & 15.123 & -163  &  39   \\\\ \\hline\n3  & 15.542 & -143  &  87   \\\\ \\hline\n4  & 15.672 & -118  &  130  \\\\ \\hline\n5  & 15.707 & -109  &  138  \\\\ \\hline\n6  & 15.724 & -103  &  141  \\\\ \\hline\n7  & 15.736 & -100  &  143  \\\\ \\hline\n8  & 15.755 & -94   &  145  \\\\ \\hline\n9  & 15.763 & -92   &  146  \\\\ \\hline\n10 & 15.774 & -89   &  144  \\\\ \\hline\n11 & 15.788 & -86   &  144  \\\\ \\hline\n12 & 15.797 & -84   &  144  \\\\ \\hline\n13 & 15.810 & -82   &  144  \\\\ \\hline\n14 & 15.820 & -79   &  142  \\\\ \\hline\n15 & 15.841 & -75   &  140  \\\\ \\hline\n16 & 15.907 & -64   &  130  \\\\ \\hline\n17 & 15.946 & -57   &  124  \\\\ \\hline\n18 & 16.050 & -46   &  105  \\\\ \\hline\n19 & 16.131 & -40   &  92   \\\\ \\hline\n20 & 16.237 & -33   &  78   \\\\ \\hline\n21 & 16.472 & -22   &  54   \\\\ \\hline\n22 & 16.603 & -18   &  46   \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\theta$ vs. $10T$ and $\\varphi$ vs. $10T$ raw data for Damping\n  selection 2} \n\\end{table}\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n& $10T [s] \\pm 0.001 [s]$ &  $\\varphi  [\\degree]  \\pm 1 [\\degree]$ & $ \\theta [\\degree] \\pm 1 [\\degree]$ \\\\ \\hline\n\n1  & 15.057   & -163 &  35   \\\\ \\hline\n2  & 15.302   & -155 &  50   \\\\ \\hline\n3  & 15.526   & -142 &  79   \\\\ \\hline\n4  & 15.657   & -123 &  108  \\\\ \\hline\n5  & 15.708   & -111 &  120  \\\\ \\hline\n6  & 15.745   & -104 &  125  \\\\ \\hline\n7  & 15.760   & -97  &  127  \\\\ \\hline\n8  & 15.794   & -93  &  128  \\\\ \\hline\n9  & 15.789   & -92  &  128  \\\\ \\hline\n10 & 15.803   & -89  &  128  \\\\ \\hline\n11 & 15.814   & -86  &  128  \\\\ \\hline\n12 & 15.830   & -83  &  128  \\\\ \\hline\n13 & 15.849   & -80  &  126  \\\\ \\hline\n14 & 15.883   & -74  &  124  \\\\ \\hline\n15 & 15.903   & -69  &  120  \\\\ \\hline\n16 & 15.933   & -62  &  114  \\\\ \\hline\n17 & 16.013   & -55  &  106  \\\\ \\hline\n18 & 16.102   & -47  &  93   \\\\ \\hline\n19 & 16.168   & -41  &  83   \\\\ \\hline\n20 & 16.290   & -33  &  70   \\\\ \\hline\n21 & 16.452   & -25  &  55   \\\\ \\hline\n22 & 16.585   & -18  &  46   \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\theta$ vs. $10T$ and $\\varphi$ vs. 
$10T$ raw data for Damping\n  selection 3} \n\\end{table}\n\n\nThen, since $\\omega / \\omega_0 = T_0 / T$, we obtain the following processed data.\n\n\n\\begin{figure}\n  \\centering\n\\begin{minipage}{0.45\\textwidth}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n& $\\omega/\\omega_0$ &  $\\varphi \\pm 1 [\\degree] $ \\\\ \\hline\n1  & 1.0460  & -162  \\\\ \\hline\n2  & 1.0443  & -163  \\\\ \\hline\n3  & 1.0161  & -143  \\\\ \\hline\n4  & 1.0077  & -118  \\\\ \\hline\n5  & 1.0054  & -109  \\\\ \\hline\n6  & 1.0043  & -103  \\\\ \\hline\n7  & 1.0036  & -100  \\\\ \\hline\n8  & 1.0024  & -94   \\\\ \\hline\n9  & 1.0019  & -92   \\\\ \\hline\n10 & 1.0012  & -89   \\\\ \\hline\n11 & 1.0003  & -86   \\\\ \\hline\n12 & 0.9997  & -84   \\\\ \\hline\n13 & 0.9989  & -82   \\\\ \\hline\n14 & 0.9982  & -79   \\\\ \\hline\n15 & 0.9969  & -75   \\\\ \\hline\n16 & 0.9928  & -64   \\\\ \\hline\n17 & 0.9904  & -57   \\\\ \\hline\n18 & 0.9839  & -46   \\\\ \\hline\n19 & 0.9790  & -40   \\\\ \\hline\n20 & 0.9726  & -33   \\\\ \\hline\n21 & 0.9587  & -22   \\\\ \\hline\n22 & 0.9512  & -18   \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\varphi$ vs. $\\omega/\\omega_0$,\\\\ Damping selection\n  2}\\label{data_phi2} \n\\end{table}\n\\end{minipage}\n%\n\\begin{minipage}{0.45\\textwidth}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n& $\\omega/\\omega_0$ &  $\\varphi \\pm 1 [\\degree] $ \\\\ \\hline\n1  & 1.0488   & -163 \\\\ \\hline\n2  & 1.0320   & -155 \\\\ \\hline\n3  & 1.0171   & -142 \\\\ \\hline\n4  & 1.0086   & -123 \\\\ \\hline\n5  & 1.0054   & -111 \\\\ \\hline\n6  & 1.0030   & -104 \\\\ \\hline\n7  & 1.0020   &  -97 \\\\ \\hline\n8  & 0.9999   &  -93 \\\\ \\hline\n9  & 1.0002   &  -92 \\\\ \\hline\n10 & 0.9993   &  -89 \\\\ \\hline\n11 & 0.9986   &  -86 \\\\ \\hline\n12 & 0.9976   &  -83 \\\\ \\hline\n13 & 0.9964   &  -80 \\\\ \\hline\n14 & 0.9943   &  -74 \\\\ \\hline\n15 & 0.9930   &  -69 \\\\ \\hline\n16 & 0.9912   &  -62 \\\\ \\hline\n17 & 0.9862   &  -55 \\\\ \\hline\n18 & 0.9808   &  -47 \\\\ \\hline\n19 & 0.9768   &  -41 \\\\ \\hline\n20 & 0.9694   &  -33 \\\\ \\hline\n21 & 0.9599   &  -25 \\\\ \\hline\n22 & 0.9522   &  -18 \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\varphi$ vs. $\\omega/\\omega_0$,\\\\ Damping selection\n  3}\\label{data_phi3} \n\\end{table}\n\\end{minipage}\n\\vspace*{5cm}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=1\\textwidth]{matlab/p1}\n\\caption{Phase shift $\\varphi$ vs. 
$\\omega/\\omega_0$}\n\\label{phi}\n\\end{figure}\n\n\nTo study the relation between $\\theta_{st}$ and $\\omega/\\omega_0$, we process the raw data and list them in Table~\\ref{data_t2} and Table~\\ref{data_t3}.\n\n\n\\begin{figure}\n  \\centering\n\\begin{minipage}{0.45\\textwidth}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n& $\\omega/\\omega_0$ &  $\\theta \\pm 1 [\\degree] $ \\\\ \\hline\n1  & 1.0460  & 38   \\\\ \\hline\n2  & 1.0443  & 39   \\\\ \\hline\n3  & 1.0161  & 87   \\\\ \\hline\n4  & 1.0077  & 130  \\\\ \\hline\n5  & 1.0054  & 138  \\\\ \\hline\n6  & 1.0043  & 141  \\\\ \\hline\n7  & 1.0036  & 143  \\\\ \\hline\n8  & 1.0024  & 145  \\\\ \\hline\n9  & 1.0019  & 146  \\\\ \\hline\n10 & 1.0012  & 144  \\\\ \\hline\n11 & 1.0003  & 144  \\\\ \\hline\n12 & 0.9997  & 144  \\\\ \\hline\n13 & 0.9989  & 144  \\\\ \\hline\n14 & 0.9982  & 142  \\\\ \\hline\n15 & 0.9969  & 140  \\\\ \\hline\n16 & 0.9928  & 130  \\\\ \\hline\n17 & 0.9904  & 124  \\\\ \\hline\n18 & 0.9839  & 105  \\\\ \\hline\n19 & 0.9790  & 92   \\\\ \\hline\n20 & 0.9726  & 78   \\\\ \\hline\n21 & 0.9587  & 54   \\\\ \\hline\n22 & 0.9512  & 46   \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\theta$ vs. $\\omega/\\omega_0$,\\\\ Damping selection 2}\\label{data_t2}\n\\end{table}\n\\end{minipage}\n%\n\\begin{minipage}{0.45\\textwidth}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n& $\\omega/\\omega_0$ &  $\\theta \\pm 1 [\\degree] $ \\\\ \\hline\n1  & 1.0488   & 35   \\\\ \\hline\n2  & 1.0320   & 50   \\\\ \\hline\n3  & 1.0171   & 79   \\\\ \\hline\n4  & 1.0086   & 108  \\\\ \\hline\n5  & 1.0054   & 120  \\\\ \\hline\n6  & 1.0030   & 125  \\\\ \\hline\n7  & 1.0020   & 127  \\\\ \\hline\n8  & 0.9999   & 128  \\\\ \\hline\n9  & 1.0002   & 128  \\\\ \\hline\n10 & 0.9993   & 128  \\\\ \\hline\n11 & 0.9986   & 128  \\\\ \\hline\n12 & 0.9976   & 128  \\\\ \\hline\n13 & 0.9964   & 126  \\\\ \\hline\n14 & 0.9943   & 124  \\\\ \\hline\n15 & 0.9930   & 120  \\\\ \\hline\n16 & 0.9912   & 114  \\\\ \\hline\n17 & 0.9862   & 106  \\\\ \\hline\n18 & 0.9808   & 93   \\\\ \\hline\n19 & 0.9768   & 83   \\\\ \\hline\n20 & 0.9694   & 70   \\\\ \\hline\n21 & 0.9599   & 55   \\\\ \\hline\n22 & 0.9522   & 46   \\\\ \\hline\n\\end{tabular}    \n\\caption{$\\theta$ vs. $\\omega/\\omega_0$,\\\\ Damping selection 3}\\label{data_t3}\n\\end{table}\n\\end{minipage}\n\\vspace*{5cm}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=1\\textwidth]{matlab/p2}\n\\caption{Amplitude $\\theta_{st}$ vs. 
$\\omega/\\omega_0$}\n\\label{theta}\n\\end{figure}", "meta": {"hexsha": "b6483a725dc7355ea4e16d0cb0c9c2408e49040b", "size": 9204, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "E5/part/5r.tex", "max_stars_repo_name": "iamwrm/VP141", "max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z", "max_issues_repo_path": "E5/part/5r.tex", "max_issues_repo_name": "iamwrm/VP141", "max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "E5/part/5r.tex", "max_forks_repo_name": "iamwrm/VP141", "max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.8909090909, "max_line_length": 152, "alphanum_fraction": 0.5532377227, "num_tokens": 4373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.7981867873410141, "lm_q1q2_score": 0.5710419710950381}}
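A note on the conversion step referenced above: since $\omega/\omega_0 = T_0/T$ and the tables record $10T$, each ratio is obtained by dividing ten times the natural period by the measured $10T$. A minimal sketch of this processing in Python; the value of \verb@T0@ below is a placeholder standing in for the separately measured natural period, not a reported result:
\begin{verbatim}
# Convert measured 10T values [s] to omega/omega_0 = T0/T.
# T0 is a PLACEHOLDER for the separately measured natural period.
T0 = 1.58                            # [s], assumed value, not measured here
ten_T = [15.098, 15.123, 15.542]     # first rows of the damping-2 raw table

for tT in ten_T:
    ratio = (10.0 * T0) / tT         # T0/T, with both periods scaled by 10
    print(f"10T = {tT:.3f} s -> omega/omega_0 = {ratio:.4f}")
\end{verbatim}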
{"text": "\\documentclass{article}\n\\usepackage{theorem}\n\n\\DeclareMathAlphabet{\\mathsc}{OT1}{cmr}{m}{sc}\n\n\\newcommand{\\implies}{\\Rightarrow}\n\\newcommand{\\have}{{.~}}\n\\newcommand{\\bstate}{{\\mathcal{S}}}\n\\newcommand{\\blocks}{{\\mathcal{B}}}\n\\newcommand{\\tabtop}{{\\mathsc{t}}}\n\\newcommand{\\tblocks}{{\\blocks_\\tabtop}}\n\\newcommand{\\bclear}{{\\mathsc{clear}}}\n\\newcommand{\\bon}{{\\mathsc{on}}}\n\\newcommand{\\babove}{{\\mathsc{above}}}\n\\newcommand{\\bstack}{{\\mathsc{stack}}}\n\\newcommand{\\bbase}{{\\mathsc{base}}}\n\\newcommand{\\bflexed}{{\\mathsc{flexed}}}\n\\newcommand{\\st}{~|}\n\n\\begin{document}\n\n\\section{Definitions}\n\n\\begin{itemize}\n\\item Let\n  $$ \\blocks = \\{ 1, \\ldots, N \\} $$\nrepresent a set of $N$ blocks in a blocks-world problem.\nWe represent the table top with the symbol $\\tabtop$, and\ndefine $$ \\tblocks \\equiv \\blocks \\cup \\{ \\tabtop \\} $$\n  \n\\item Call any \n  $$ \\bstate \\subseteq \\blocks \\times \\tblocks $$\na {\\em part-state} of the blocks-world problem. The pairs of blocks in\n$\\bstate$ represent {\\em on} relations.\n\n\\item A part-state $\\bstate$ is {\\em legal} iff\n  \\begin{itemize}\n  \\item No block is on two things. $$\n    \\forall i \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not\\exists j' \\in \\tblocks \\have\n\tj \\neq j' \\wedge \\langle i , j' \\rangle \\in \\bstate\n  $$\n  \\item No two blocks are on the same block. $$\n    \\forall j \\in \\blocks \\have\n      \\langle i , j \\rangle \\in \\bstate \\implies\n      \\not\\exists i' \\in \\blocks \\have\n\ti \\neq i' \\wedge \\langle i' , j \\rangle \\in \\bstate\n  $$\n  \\end{itemize}\n\n\\item Define the partial functions\n$\\bon_i(\\bstate)$ mapping legal part-states to\nblocks such that\n$\\bon_i(\\bstate) = j$ when $\\langle i, j \\rangle \\in \\bstate$;\n$\\bon_i$ is undefined elsewhere.\n(In other words, $\\bon_i$ says\nwhich block block $i$ is on in a given state.)\nBy the definition of a legal part-state, these functions are well-defined.\n\n\\item A part-state is a {\\em total state} or just a {\\em state}\niff it is legal and every block is on something. $$\n    \\forall i \\in \\blocks \\have\n      \\exists j \\in \\tblocks \\have\n\t\\langle i , j \\rangle \\in \\bstate\n$$\n\n\\item The {\\em above} relation is the transitive closure of\nthe {\\em on} relation: A block $i$ is above a block $j$ in a\nlegal part-state $\\bstate$ if either $i$ is on $j$, or $i$ is on\na block which is above $j$.  $$\n  \\babove_i(\\bstate) = \\left \\{ \\begin{array}{ll}\n    \\{ j \\} \\cup \\babove_j(\\bstate) &\n      j = \\bon_i(\\bstate) \\wedge j \\neq \\tabtop \\\\\n    \\emptyset & \\mbox{otherwise}\n  \\end{array} \\right .\n$$\n\n\\item A block $j$ is {\\em clear} in a legal part-state $\\bstate$ iff no block\nis above it. $$\n  \\bclear(\\bstate) = \\{ j \\in \\bstate \\st\n  \\not\\exists i \\have \\langle i , j \\rangle \\in \\bstate \\}\n$$\n\n\\item A legal part-state $\\bstate$ is {\\em realizable} iff\nno block is above itself. 
$$\n  \\not\\exists i \\in \\blocks \\have\n    i \\in \\babove_i(\\bstate)\n$$\n\n\\item {\\em Moves} are functions $m_{ij}$ which take a realizable part-state in\nwhich blocks $i$ and $j$ are clear to a\nrealizable part-state in which $i$ is on $j$.\n\nLet $i \\in \\blocks$ and $j \\in \\tblocks$, and let $\\mathcal{C}_{ij}$ be the set\nof realizable part-states\nin which $i$ is clear, and either $j = \\tabtop$ or $j$ is clear.\nThen the domain of $m_{ij}$ is\n$\\mathcal{C}_{ij}$, and the range is the set of all realizable part-states.\n\nWe define the partial function $m_{ij}$ by $$\n  m_{ij}(\\bstate) \\equiv\n    \\left ( \\bstate - \\{ \\langle i, \\bon_i(\\bstate) \\rangle \\} \\right )\n    \\cup \\{ \\langle i, j \\rangle \\}\n$$\nwhen $i$ and $j$ are clear in $\\bstate$; $m_{ij}$ is undefined\nelsewhere.  A move is {\\em legal} iff $m_{ij}$ is defined.\n\n\\item A {\\em move sequence} $M$\nof length $k$ from state $\\bstate^1$ to state $\\bstate^2$ consists of\na sequence of pairs $$\n  \\langle i_1, j_1 \\rangle , \\langle i_2 , j_2 \\rangle , \\ldots ,\n  \\langle i_k , j_k \\rangle \\in \\left ( \\blocks \\times \\tblocks \\right ) ^ \\ast\n$$\nsuch that $$\n  \\left ( m_{i_k j_k} \\circ \\cdots \\circ m_{i_2 j_2} \\circ\n  m_{i_1 j_1} \\right ) (\\bstate^1) = \\bstate^2\n$$\nWe allow ourselves the liberty of writing $M(\\bstate^1) = \\bstate^2$.\nA move sequence $M$ is {\\em legal} for a state $\\bstate$ iff\n$M(\\bstate)$ is defined.\n\n\\item A {\\em primitive-blocks-world} (PBW) {\\em problem} consists of two\nrealizable states: an\ninitial state $I$ and a goal state $G$.\nA {\\em solution} to the problem is a move sequence $M$ such that $M(I) = G$.\nWe say that a solution is {\\em optimal} if there is no solution\nof shorter length.\n\n\\item A {\\em stack} of blocks starting at $i$ in a\nlegal part-state $\\bstate$ is just\n$i$ plus the set of blocks $i$ is above. $$\n  \\bstack_i(\\bstate) = \\{ i \\} \\cup \\babove_i(\\bstate)\n$$\nWe call $i$ the {\\em top} of the stack.\nIf $\\bstate$ is a realizable part-state, the {\\em base}\nof the stack is the block in the stack which is\nabove no other blocks in the stack. $$\n  \\bbase_i(\\bstate) = j \\in \\bstack_i(\\bstate) \\have\n    \\babove_j(\\bstate) \\subseteq \\{ \\tabtop \\}\n$$\n(The lemma that every stack in any realizable part state\nhas a base is proved by tracing through the definitions.)\n\n\\item A {\\em tower} is a stack which is a subset of no other\nstack.  \n\n\\end{itemize}\n\n\\section{Basics}\n\nWe first prove that two states are the same exactly when\nall the blocks have the same $\\babove$ relation.\n\\begin{lemma}{same-above}{$\\babove$ As A State Characterization}\n\\given\nStates $\\bstate^1$ and $\\bstate^2$.\n\\prove\n$\\bstate^1 = \\bstate^2$ iff $$\n  \\forall i \\in \\blocks \\have\n    \\babove_i(\\bstate^1) = \\babove_i(\\bstate^2)\n$$\n\\proof  By forward and backward implication.\n\n\\forwardcase\nBy definition, $\\babove$ is a function.  
Thus, \n$\\babove_i$ applied to any identical pair of states\nmust yield identical answers.\n\n\\backwardcase\nSuppose that $\\babove_i(\\bstate^1) = \\babove_i(\\bstate^2)$ for every\nblock $i$; we show that $\\bon_i(\\bstate^1) = \\bon_i(\\bstate^2)$ for\nevery $i$, which gives $\\bstate^1 = \\bstate^2$.\nFix a block $i$ and let $A = \\babove_i(\\bstate^1)$.\nIf $A = \\emptyset$, then $\\bon_i = \\tabtop$ in both states.\nOtherwise, by the definition of $\\babove$, $\\bon_i(\\bstate)$ is a\nblock $j \\in A$ with $\\babove_j(\\bstate) = A \\setminus \\{ j \\}$.\nSuch a $j$ is unique: were there distinct $j$ and $j'$ both qualifying,\neach would be above the other, hence (by transitivity) above itself,\nwhich is impossible in a realizable state.\nSince the $\\babove$ relations agree in the two states, the same $j$ is\nselected in both, so $\\bon_i(\\bstate^1) = \\bon_i(\\bstate^2)$.\n\\end{lemma}\n\n\\section{Flexibility}\n\nA legal part-state $\\bstate^2$ is {\\em as flexible} as\na legal part-state $\\bstate^1$ iff no block in $\\bstate^2$\nis above some block in $\\bstate^2$ which it is not above in\n$\\bstate^1$, i.e. $$\n  \\babove_i (\\bstate^2) \\subseteq \\babove_i(\\bstate^1)\n$$\nWe notate this by writing $\\bstate^2 \\sqsubseteq \\bstate^1$.\n\nAn observation about states related by flexibility is in order.\n\\begin{lemma}{flexible-clear}{Flexibility And Clear Blocks}\n\\given\nStates $\\bstate^1$ and $\\bstate^2$ with $\\bstate^2 \\sqsubseteq \\bstate^1$.\n\\prove\nAny block clear in $\\bstate^1$ is clear in $\\bstate^2$: $$\n  \\bclear(\\bstate^2) \\supseteq \\bclear(\\bstate^1)\n$$\n\\proof\nConsider some block $j$ in $\\bstate^2$.  If it is not clear,\nthen there is a block $i$ above it in $\\bstate^2$.  But,\nby definition of flexibility, this means that $i$ is above $j$\nin $\\bstate^1$, so $j$ is not clear in $\\bstate^1$, either.\n\\end{lemma}\n\nWe now justify the terminology ``as flexible as''\nby means of Theorem \\ref{flexible-move}:\n\\begin{theorem}{flexible-move}{Flexibility And Move Sequences}\n\\given\nStates $\\bstate^1$ and $\\bstate^2$ with $\\bstate^2 \\sqsubseteq \\bstate^1$.\nA legal move sequence $M$ starting at $\\bstate^1$.\n\\prove\n\\begin{description}\n\\item[i)~] $M$ is also a legal move sequence starting at $\\bstate^2$.\n\\item[ii)~] $M(\\bstate^2) \\sqsubseteq M(\\bstate^1)$.\n\\end{description}\n\\proof\nBy induction on prefixes of $M$.\n\n\\basecase\nThe theorem is true for the empty prefix of $M$.\n\n\\indcase\nDecompose $M$ such that the prefix $M^1$ before\nsome move $m = \\langle i, j \\rangle$ satisfies the lemma $$\n  M = M^1 \\cdot \\langle i, j \\rangle \\cdot M^2\n$$\nand let\n\\begin{eqnarray*}\n  \\bstate^1_1 &=& M^1(\\bstate^1) \\\\\n  \\bstate^1_2 &=& \\langle i, j \\rangle(\\bstate^1_1) \\\\\n  \\bstate^2_1 &=& M^1(\\bstate^2) \\\\\n  \\bstate^2_2 &=& \\langle i, j \\rangle(\\bstate^2_1)\n\\end{eqnarray*}\n\nBy hypothesis we have $\\bstate^2_1 \\sqsubseteq \\bstate^1_1$.\nBy Lemma \\ref{flexible-clear}, this means that $$\n  \\bclear(\\bstate^1_1) \\subseteq \\bclear(\\bstate^2_1)\n$$\nThis means that since $\\langle i, j \\rangle$ is a legal\nmove in $\\bstate^1_1$, it must also be a legal move in\n$\\bstate^2_1$.  Thus, condition (i) of the lemma holds\nfor the prefix $M^1 \\cdot \\langle i, j \\rangle$ of $M$.\n\nTo see that condition (ii) of the Lemma holds on the extended\nprefix, we note that the only block possibly above other blocks in\n$\\bstate^2_2$ but not in $\\bstate^2_1$ is $i$.  
We then note that\nby definition of flexibility\n\\begin{eqnarray*}\n  \\babove_i(\\bstate^2_2) &=& \\babove_j(\\bstate^2_1) \\cup \\{j\\} \\\\\n  &\\subseteq& \\babove_j(\\bstate^1_1) \\cup \\{j\\} \\\\\n  &=& \\babove_i(\\bstate^1_2)\n\\end{eqnarray*}\n\\end{theorem}\n\nIf two states are related by flexibility, legal sequences of\nmoves tend to make the two states the same.  The next theorem\nquantifies this observation.\n\nGiven a block $i$ and states $\\bstate$ and $\\bstate'$ such\nthat $\\bstate' \\sqsubseteq \\bstate$, we will say that block\n$i$ is {\\em flexed} iff $i$ is above fewer blocks in\n$\\bstate'$ than in $\\bstate$. $$\n  \\babove_i(\\bstate') \\subset \\babove_i(\\bstate)\n$$\nWe will write $\\bflexed_{\\bstate}(\\bstate')$ for the\nset of blocks in $\\bstate'$ flexed with respect to $\\bstate$.\n\n\\begin{theorem}{flexed-blocks}{Flexed Blocks}\n\\given States $\\bstate^1_1$ and $\\bstate^2_1$ such that\n$\\bstate^1_1 \\sqsubseteq \\bstate^2_1$.  A\nmove sequence $M$ such that $M(\\bstate^1_1) = \\bstate^1_2$\nand $M(\\bstate^2_1) = \\bstate^2_2$.\n\\prove\nThe final states are the same whenever all the flexed blocks\nin the initial state have moved, i.e.,\n$\\bstate^1_2 = \\bstate^2_2$ iff $$\n  \\forall i \\in \\bflexed_{\\bstate^2_1}(\\bstate^1_1) \\have\n    \\exists j \\in \\tblocks \\have\n      \\langle i, j \\rangle \\in M\n$$\n\\proof  By forward and backward implication.\n\n\\forwardcase\nWe first prove that when the final states are the same, all\nflexed blocks have moved.\nBy Lemma \\ref{same-above} two states are the same exactly when the\n$\\babove$ relations of all of their blocks are the same.\nNow consider any block $i$ \nwhich is flexed in a state $\\bstate^1$ WRT a state $\\bstate^1_1$. $$\n  i \\in \\bflexed_{\\bstate^1_1}(\\bstate^1)\n$$\nAssume that a sequence $M$ of moves takes both\ninitial states $\\bstate^1$ and $\\bstate^1_1$\nto the same final state $\\bstate^2$. $$\n  \\bstate^2 = M(\\bstate^1) = M(\\bstate^1_1)\n$$\nBy the definition of flexed\nblock, the $\\babove$ relation of $i$ in the\nmore flexible initial state $\\bstate^1$ was a subset\nof that in the less flexible initial state $\\bstate^1_1$. $$\n   \\babove_i(\\bstate^1) \\subset \\babove_i(\\bstate^1_1)\n$$\nSince the final\nstates are the same at the end of the sequence of moves, then\nthe sequence of moves must have changed the $\\babove_i$ relation\nWRT at least one of the initial states $\\bstate^1$ and $\\bstate^1_1$.\nBut, by definition of $\\babove_i$, this can happen only if block $i$ moves.\n\n\\backwardcase\nWe next prove that when all flexed blocks have moved, the states are\nidentical.  
We prove this case by induction.\n\n\\end{theorem}\n  \n\\section{Solution Optimization}\n\n\\end{document}\n", "meta": {"hexsha": "5e8ce5b0883d2ea0d2b4d87429486761d6c4d83f", "size": 11309, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/thm/thm.tex", "max_stars_repo_name": "BartMassey/blocks", "max_stars_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/thm/thm.tex", "max_issues_repo_name": "BartMassey/blocks", "max_issues_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/thm/thm.tex", "max_forks_repo_name": "BartMassey/blocks", "max_forks_repo_head_hexsha": "6dbb39186595b6e2b80c9a5dcd616056f6cb3117", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-15T18:45:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-15T18:45:13.000Z", "avg_line_length": 35.230529595, "max_line_length": 79, "alphanum_fraction": 0.6820231674, "num_tokens": 3868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.5710419690689931}}
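As an addendum to the preceding blocks-world notes: the part-state and move definitions translate almost verbatim into executable form. A minimal sketch in Python (names are hypothetical; a part-state is a set of (block, support) pairs and the table top is the string 'T', mirroring the definitions of on, clear and the moves $m_{ij}$):
\begin{verbatim}
# A part-state is a set of (block, support) pairs; 'T' is the table top.
TABLE = 'T'

def on(state, i):
    """on_i: the thing block i rests on, or None where undefined."""
    supports = [j for (b, j) in state if b == i]
    return supports[0] if supports else None

def clear(state, j):
    """j is clear iff no block rests on it (equivalently, none is above
    it); the table top counts as always clear."""
    return j == TABLE or all(s != j for (_, s) in state)

def move(state, i, j):
    """m_ij: defined only when i and j are clear; puts i on j."""
    if not (clear(state, i) and clear(state, j)):
        raise ValueError("m_ij undefined: i and j must both be clear")
    return {(b, s) for (b, s) in state if b != i} | {(i, j)}

# Example: two blocks on the table, then stack block 1 on block 2.
s = {(1, TABLE), (2, TABLE)}
print(move(s, 1, 2))   # {(1, 2), (2, 'T')}
\end{verbatim}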
{"text": "\\documentclass[11pt,a4paper]{article}\r\n\r\n% \\usepackage[TDBbrev]{UUbrev}\r\n\\usepackage[latin1]{inputenc}\r\n\\usepackage[swedish]{babel}\r\n\\usepackage{graphicx}\r\n%\\usepackage[dvips]{color}\r\n\\usepackage{float}\r\n\\usepackage{enumitem}\r\n\\usepackage{hyperref}\r\n\r\n%\\beteckning{Mini project}\r\n%\\mottagare{Scientific Computing, Bridging Course}\r\n\r\n\\begin{document}\r\n\\section*{Mini project 1: Water Pipe Network\\footnote{Part~1 is based on\r\nProblem~5.1 in A.~Quarteroni \\& F.~Saleri, \\emph{Scientific Computing with MATLAB},\r\nSpringer-Verlag Berlin Heidelberg, 2003}\r\n}\r\n\r\nThis mini project is inspired by a research article:\r\n\\begin{quotation}\\noindent\r\nH. Jia, W. Wei, K. Xin, \\emph{Hydraulic model for multi-sources reclaimed water pipe network based on EPANET and its applications in Beijing, China}, Front. Environ. Sci. Engin. China 2008, 2(1):57--62 (doi:10.1007/s117783-008-0013-0)\r\n\\end{quotation}\r\nThe article is about the design of a water pipe network in Beijing, for distribution of reclaimed water. Several issues are discussed, one of which is the water pressure at the network nodes.\r\n\\\\\r\n\\\\\r\nIn this mini project you will do a similar computation as is discussed in the article, but you will focus on the computation of the node pressure. You will, however, not use the real network data from Beijing, but a simpler model network. To simplify the task, it has been subdivided into three parts. \r\n\\bigskip \\bigskip\r\n\\begin{figure}[hp]\r\n\\centering\r\n\\includegraphics[width=0.9\\textwidth]{./network.pdf}\r\n\\caption{The water pipe system in part 1}\r\n \\end{figure}\r\n\r\n\\newpage\r\n\r\n\\subsection*{Part~1}\r\nTo understand the principle we will start off with a very small water pipe network with six nodes only, see the figure on first page. The network nodes are enumerated 1, 2, $\\ldots$, 6.  A water reservoir is connected to node 1, and water taps to nodes 5 and 6, and the problem here is to find the pressure in the inner nodes (2, 3 \\& 4). The pressure value for node $k$ is denoted by $p_k$. The pressure is given as the difference between the water pressure and the surrounding atmospheric pressure. Consequently, the pressure value 0 corresponds to the atmospheric pressure.\r\n\\\\\r\n\\\\\r\nThe following relations are used to compute the pressure:\r\n\\begin{enumerate}\r\n \\item In pipe $j$ the water flow speed $Q_j$ (m$^3$/s) can be expressed as:\r\n\\begin{equation}\r\n Q_j = k L \\left(p_{in}-p_{out}\\right)  \\label{eq:Qj}\r\n\\end{equation}\r\nwhere $1/k$ is the hydraulic resistance in the pipe, $L$ is the pipe length, $p_{in}$ is the pressure at inflow to the pipe and $p_{out}$ is the pressure at outflow from the same pipe. The inverse hydraulic resistance $k$ is measured in m$^2$/(bar\\ s), the pressure in bar and the length in meter. In this case $k=0.001$ and constant. \r\n\\item The total inflow to a node equals the total outflow from the same node.\r\n\\end{enumerate}\r\n\r\n\\noindent The table contains values of $L$ for the pipes in our network:\r\n\\par\r\n\\begin{center}\r\n\\begin{tabular}{|cl || cl |}\\hline\r\npipe  & $L$   & pipe & $L$    \\\\ \\hline\r\n1       & 300   & 4      & 600   \\\\ \\hline\r\n2       & 500   & 5      & 500    \\\\ \\hline\r\n3       & 500   & 6      & 500    \\\\ \\hline\r\n\\end{tabular}\r\n\\end{center}\r\n\\par\r\n\\noindent\r\nMoreover, the pressure in the water reservoir is 10~bar and the pressure  $p$ at the water outlets is ca. 
0~bar.\r\n\\\\\r\n\\noindent Relation~2 above can be expressed as follows for the inner nodes:\r\n\\begin{eqnarray*}\r\n\\rm{Node\\ 2:} &\\ & Q_1 = Q_2 + Q_{3}\\\\\r\n\\rm{Node\\ 3:} &\\ & Q_3 = Q_4 + Q_6\\\\\r\n\\rm{Node\\ 4:} &\\ & Q_2 + Q_4  = Q_5\r\n\\end{eqnarray*}\r\n\r\n\\noindent Inserting relationship (1) into these formulas and using the values from the table yields the following system of equations for the six pressure values  $p_1,\\ldots, p_6$:\r\n\r\n\\begin{equation}\r\n\\left(\\begin{array}{cccccc}\r\n    1.0   &0        & 0      &   0      &   0    & 0     \\\\\r\n    0.3   & -1.3  &  0.5  &   0.5   &   0    & 0     \\\\\r\n    0      &   0.5  & -1.6  &  0.6   &   0    & 0.5  \\\\\r\n    0      &   0.5  &   0.6 & -1.6   &   0.5 & 0     \\\\\r\n  0        &   0     &   0     &   0      &  1.0  & 0     \\\\\r\n  0        &   0     &   0     &   0     &   0     & 1.0 \r\n \\end{array}\\right)\r\n\\left(\\begin{array}{c} \r\np_1  \\\\ \r\np_2  \\\\\r\np_3  \\\\ \r\np_4  \\\\\r\np_5   \\\\\r\np_6\r\n\\end{array}\\right) =\r\n\\left(\\begin{array}{c} \r\n10 \\\\\r\n0    \\\\\r\n0    \\\\\r\n0    \\\\\r\n0    \\\\\r\n0\r\n\\end{array}\\right)\r\n\\label{eq:matsys}\r\n\\end{equation}\\\\\r\n\\noindent All equations except the first one have right-hand side equal to 0, and the right-hand side of the first equation is the pressure in the water reservoir $p_1$.\r\n\\\\\r\n\r\n\\noindent Your task is to write a Matlab/Python script that solves this system and displays the result (a minimal sketch of this solve appears at the end of this document). Also, generate a bar graph showing the pressures of different nodes. It's important that output and figures are easy to understand even for someone not very involved in the project. Finally, change the program so that the user can input the pressure in the water reservoir, $p_1$, when running the program.   \r\n\r\n\\subsection*{Part~2}\r\nOn the course website you can download the files \\verb@water4720.mat@ and \\verb@water6.mat@. The former contains data for a network with 4720 nodes, and the latter is data for the network in Part 1. Due to the size of the 4720 node network it has several water reservoirs. The data stored in the \\verb@.mat@-files are two variables: the coefficient matrix $A$, and a vector called $sources$. The vector $sources$ contains the node indices corresponding to the water reservoirs. \r\n\r\nAs in Part 1 the elements in the right-hand-side $b$ are equal to the pressure in the water reservoir when there is a reservoir in that node, and 0 elsewhere. Thus, you simply assign the pressure in the reservoir nodes into the corresponding entries in $b$, and the index number of these entries can be found in the vector $sources$. \r\n\\\\\r\nYour task now is to write a program, a Matlab-function, that has the following structure:\r\n\\begin{enumerate}[label=(\\roman*)]\r\n\\item Input the  water pipe network (i.e. load the mat-file, see information below)\r\n\\item Input pressure in the water reservoir nodes and create $b$\r\n\\item Compute the pressures in the network\r\n\\item Calculate and display the average pressure\r\n\\item Ask if the user wants to try again (choosing other reservoir pressure values). If the user types \\textless Return\\textgreater \\thinspace on the keyboard the program will repeat from item (ii) above,  and the user can choose new pressure values. If any other key on the keyboard is chosen, the program will exit. \r\n\\end{enumerate}\r\nIt is important that the same program works for different pipeline networks, i.e. for different \\verb@.mat@-files. 
Thus, the same program should work for both \\verb@water4720.mat@ and \\verb@water6.mat@. \\\\\r\nTo solve this in a good way, let the name of the \\verb@.mat@-file be an input parameter to your function. To solve point (v) above, you can use a while-loop:\r\n\\begin{verbatim}\r\n  loop = ' ' ;  % Assign empty string to enter while-loop\r\n  while isempty(loop)\r\n         ...\r\n  end\r\n\\end{verbatim} \r\n\r\n\\noindent Implement the algorithm above in a Matlab/Python function and test that the program works the way it is supposed to. Through trial and error, try to find as low a pressure as possible in the water reservoirs, but with an average pressure larger than 20 bar. \\\\\r\n\r\n\\medskip\r\n\\noindent {\\bf The \\verb@load@ command}\\\\\r\nIf the name of the input file (the .mat-file) is stored as a text string in the variable \\verb@namein@, the file can be read via the command \\verb@load(namein)@. The suffix .mat is implicit in calls to \\verb@load@, so there is no need to include the suffix in the file name text strings. See the Matlab help for more information.\r\n\r\n\\medskip\r\n\\noindent {\\bf The \\verb@scipy.io.loadmat()@ function}\\\\\r\nFor Python users, you can load the \\verb@.mat@ file with \\verb@scipy.io.loadmat()@.\r\nMore detailed examples and documentation are given in the official tutorial of  \\verb@scipy@ at \\url{https://docs.scipy.org/doc/scipy/reference/tutorial/io.html}.\r\n\r\n\r\n\\subsection*{Part~3}\r\nIt takes a little bit of time to solve the problem in Part 2 when the network size gets big. When you run the 4720 node problem, there is a slight waiting time when the equation system is solved. As the matrix is the same all the time, there is some room there for improvements that would make the program more efficient. Implement these improvements.\r\n\r\n\\end{document}\r\n\r\n\r\n", "meta": {"hexsha": "732fb3bf83a55f66011e4c081036731cac22ab99", "size": 8409, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Projects/Mini1/waterpipenetwork.tex", "max_stars_repo_name": "enigne/ScientificComputingBridging", "max_stars_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-04T01:15:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T15:08:27.000Z", "max_issues_repo_path": "Projects/Mini1/waterpipenetwork.tex", "max_issues_repo_name": "enigne/ScientificComputingBridging", "max_issues_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Projects/Mini1/waterpipenetwork.tex", "max_forks_repo_name": "enigne/ScientificComputingBridging", "max_forks_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.2040816327, "max_line_length": 577, "alphanum_fraction": 0.7037697705, "num_tokens": 2399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.7154239957834734, "lm_q1q2_score": 0.5710419670429477}}
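As referenced in Part 1 above, the linear system (2) can be solved in a few lines. A minimal sketch in Python/NumPy; the matrix is entered exactly as printed in equation (2), and the variable names are illustrative only:
\begin{verbatim}
import numpy as np

# Coefficient matrix from equation (2) of the mini project.
A = np.array([
    [1.0,  0.0,  0.0,  0.0, 0.0, 0.0],
    [0.3, -1.3,  0.5,  0.5, 0.0, 0.0],
    [0.0,  0.5, -1.6,  0.6, 0.0, 0.5],
    [0.0,  0.5,  0.6, -1.6, 0.5, 0.0],
    [0.0,  0.0,  0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0,  0.0,  0.0, 0.0, 1.0],
])
b = np.zeros(6)
b[0] = 10.0            # reservoir pressure p1 [bar]

p = np.linalg.solve(A, b)
for k, pk in enumerate(p, start=1):
    print(f"p{k} = {pk:.3f} bar")
# A bar graph could follow, e.g. matplotlib.pyplot.bar(range(1, 7), p).
\end{verbatim}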
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 7, 2014}\n\\maketitle\n\\section*{4.2 5a,6}\n\\subsection*{5a}\n$\\gcd(f(x),f'(x))=d(x)\\ne 1$\nassume $f(x)$ does not have repeatable factors\n\nconsider $p(x)$ to be an irreduceable factor of $d(x)$.\n\nthen we can say $f(x)=a(x)p(x), f'(x)=b(x)p(x)$ where $g(x),b(x)\\in F[x]$.\n\n$f'(x)=a'(x)p(x)+a(x)p'(x)$ and the following holds. $a(x)p'(x)=b(x)p(x)-a'(x)p(x)$\n\n$p(x)|a(x)p'(x)\\to p(x)|a(x) $ because $p(x)|h_1(x)h_2(x)\\to p(x)|h_1(x)$ or $p(x)|h_2(x)$\n\n$\\exists cc(x)\\in F[x]$ where $a(x)=c(x)p(x)$ and $f(x)=a(x)p(x)=c(x)p(x)p(x)=c(x)p(x)^2$ which is a contradiction\n\nnote that $p(x)|p'(x)$ is possible. $p(x)\\in \\mathbb{Z}_p[x], p(x)=x^{p^2}-x^{p}-1$ and $p'(x)=0$.\n\n\\section*{4.3 existence of roots}\n$p(x)\\in K[x]\\setminus\\{0\\}$.\n\nconstruction $m$ $K[x]$. for $f(x),g(x)\\in K[x]$ $f\\equiv g(x)\\mod p(x)\\Leftrightarrow p(x)|[f(x)-g(x)]$. this is an equivalence relation on $K[x].$\n\nnow we examine the equivalence class of $f(x)$. It consists of all polynomials in $K[x]$ such that $[f(x)]=\\{g(x)|g(x)\\in K[x], f(x)\\equiv g(x)\\mod p(x)\\}$. we write $f(x)=p(x)q(x)+r(x)$ where $r(x)=0$ or $\\deg r<\\deg p$. and then $[f(x)]=[r(x)]$. the set of equivalence classes is denoted $K[x]/\\langle p(x)\\rangle$\n\n\\subsection*{properties}\n\\begin{enumerate}\n\\item\n$f(x)\\equiv g(x)\\mod p(x)$ and $h(x)\\equiv l(x)\\mod p(x)$ then $f(x)+h(x)\\equiv g(x)+l(x)\\mod p(x)$ and $f(x)h(x)\\equiv g(x)l(x)\\mod p(x)$\n\\item\n$f(x)h(x)\\equiv g(x)h(x)\\mod p(x)$ and $\\gcd(p(x),h(x))=1$ then $f(x)\\equiv g(x)\\mod p(x)$.\n\\end{enumerate}\n\\subsubsection*{on $K[x]/\\langle p(x)\\rangle$}\ndefine\n\\begin{enumerate}\n\\item\n$[f(a)]+[g(x)]=[f(a)+g(a)]$\n\\item\n$[f(x)]\\cdot [g(x)]=[f(x)\\cdot g(x)]$\n\\end{enumerate}\nthese are well defined operations, check all details\n\n$[f(x)+g(x)]=[f'(x)+g'(x)]$?\n\n\\subsubsection*{example}\n$K=\\mathbb{R}, p(x)=x^2+1\\in \\mathbb{R}[x]$.\n\n$\\mathbb{R}[x]/\\langle p\\rangle =\\mathbb{R}[x]/\\langle x^2+1\\rangle$\n\n$[x]^2=-[1]$.\n\\end{document}\n\n\n", "meta": {"hexsha": "8412a8968d623dc97172efe38056df273633df00", "size": 2159, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-11-07.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-11-07.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-11-07.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.223880597, "max_line_length": 316, 
"alphanum_fraction": 0.6002779064, "num_tokens": 898, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484144, "lm_q2_score": 0.7981867801399695, "lm_q1q2_score": 0.5710419659432381}}
{"text": "\\section{What does the first principal component capture?}\nWe have described the score in the first principal component as a proxy for diversity of the export basket, where products are weighted by their complexity or sophistication. We explore here more in detail what this first component captures.\n\nFrom the definition of what PCA does, we know that the first principal component $\\scoreFirstPC$ captures the largest amount of variability across export baskets (the latter transformed using our measure $R_{cpt}$ of scaled absolute advantage). As shown before, the first principal component explains almost sixty percent of the variance, and the loadings of the vector are positive and approximately of the same magnitude. This necessarily means that the scores in this $\\scoreFirstPC$ must capture an \\emph{average scaled absolute advantage}. Given the formula of scaled absolute advantage, we therefore expect $\\scoreFirstPC$ to be highly correlated with the logarithm of exports per capita.\n\nBut, for similar reasons, we also expect $\\scoreFirstPC$ to be correlated with the diversification of the export basket. To see this let us drop the index $t$ for clarity of exposition. From the definition of Principal Component Analysis as a Singular Value Decomposition, we get that the matrix can be factored as $\\mathrm{R}=\\mathrm{H}\\mathrm{V}^T$. Here, $H_{ck}$ is the score in the $k$-th principal component for country $c$ (i.e., the $c$-th element in the vector $\\scorePC{k}$ in our notation), and $V_{pk}$ is how much product $p$ weights (or loads) on component $k$. The matrix $\\mathrm{V}$ is orthogonal, and thus $\\mathrm{V}^T\\mathrm{V}=\\mathrm{I}$ is the identity matrix. Given this, the 2-norm length of the export vector of country $c$ is \n\\begin{align*}\n\t\\left\\|R_c\\right\\|^2 &= [\\mathrm{R}\\mathrm{R}^T]_{cc}, \\\\\n\t&= [\\mathrm{H}\\mathrm{H}^T]_{cc}, \\\\\n\t&= \\left\\|H_c\\right\\|^2, \\\\\n\t&= \\sum_k \\scorePC{k}^2_c.\n\\end{align*}\nIn other words, the norm of the export basket vector of country $c$ is equal to the norm of the $c$'s vector in the space of principal components. \n\nNow, the norm of country's $c$ export basket is related to its diversification, since diversity is the norm of the export basket if we \\emph{discretize} the elements of the vector: \n\\begin{align*}\n\tM_{cpt}=\n\t\\begin{cases}\n\t\t1,\\quad\\text{if $R_{cpt}>0$}\\\\\n\t\t0,\\quad\\text{if $R_{cpt}\\leq 0$}\n\t\\end{cases}.\n\\end{align*}\nHaving $R_{cpt}>0$ would mean that the country $c$ has an absolute advantage larger than the mean absolute advantage that countries have in that product $p$ in that year $t$ (we could discretize the matrix in other ways, but the result is qualitatively the same). Given this, diversity is $d_{c} = \\sum_p M_{cp}$, but it is also $d_c = [\\mathrm{M}\\mathrm{M}^T]_{cc}$. 
All together, we can conclude that when there is a first principal component that explains most of the variation, then\n\\begin{align*}\n\td_c \\approx \\left\\|R_c\\right\\|^2\\approx \\scoreFirstPC(c)^2 + \\scorePC{1}(c)^2.\n\\end{align*}\nThus, we would expect diversity to be correlated with the square of the first principal component score.\n\nBut beyond the observation that because (i) the first component explains a large share of the variance and (ii) that the vector of loadings is positive and weights approximately the same across products, then $\\scoreFirstPC$ has to be a measure of both log-exports per capita and diversification, we would like to know whether it captures information beyond these two quantities. \n\nWe use the following datasets:\n\\begin{description}\n\t\\item[Worldwide Governance Indicators (WGI)] from \\url{http://info.worldbank.org/governance/wgi/index.aspx#home}. According to the source these report ``aggregate and individual governance indicators for over 200 countries and territories over the period 1996\u20132016, for six dimensions of governance: Voice and Accountability, Political Stability and Absence of Violence, Government Effectiveness, Regulatory Quality, Rule of Law, and Control of Corruption''.\n\t\\item[Barro-Lee Educational Attainment Data] from \\url{http://barrolee.com/data/Lee_Lee_v1.0/LeeLee_v1.dta} or \\url{http://www.barrolee.com/data/BL_v2.2/BL2013_MF1599_v2.2.csv}, which reports ``educational attainment data for 146 countries in 5-year intervals from 1950 to 2010. It also provides information about the distribution of educational attainment of the adult population over age 15 and over age 25 by sex at seven levels of schooling: no formal education, incomplete primary, complete primary, lower secondary, upper secondary, incomplete tertiary, and complete tertiary. Average years of schooling at all levels---primary, secondary, and tertiary---are also measured for each country and for regions in the world. \n\t\\item[International Data on Cognitive Skills] from \\url{http://hanushek.stanford.edu/sites/default/files/publications/hanushek\\%2Bwoessmann.cognitive.xls}. [Do Better Schools Lead to More Growth? Cognitive Skills, Economic Outcomes, and Causation, Eric A. Hanushek, Ludger Woessmann, Journal of Economic Growth]\n\\end{description}\nThe question is, how much do the quantities and indicators in these datasets explain $\\scoreFirstPC$?\n\nFigures\\ \\ref{fig:regcoefs_univariate}, \\ref{fig:regcoefs_smallmultivariate}, \\ref{fig:regcoefs_multivariate}, and \\ref{fig:regcoefs_acrossyears} show the results of the standardized coefficients for different univariate and multivariate regressions.\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth]{C:/Users/agomez/Dropbox/Identify laws and dynamical systems in socioeconomic data/machine_learned_patterns_in_economic_development/notebooks/regression_figures/PC0_regressions_coefficients_univariate.pdf}\n\\caption{\n\\textbf{Coefficients of the predictors from univariate regressions.} \n All regressions included year-specific fixed-effects, errors are clustered by country and error bars reflect $95\\%$ confidence intervals. 
The estimates are for standardized coefficients.}\n\\label{fig:regcoefs_univariate}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth]{C:/Users/agomez/Dropbox/Identify laws and dynamical systems in socioeconomic data/machine_learned_patterns_in_economic_development/notebooks/regression_figures/PC0_regressions_coefficients_smallmultivariate.pdf}\n\\caption{\n\\textbf{Coefficients of the predictors from a multivariate regression only including exports per capita, diversity and economic complexity index.} \n All regressions included year-specific fixed-effects, errors are clustered by country and error bars reflect $95\\%$ confidence intervals. The estimates are for standardized coefficients.}\n\\label{fig:regcoefs_smallmultivariate}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.7\\columnwidth]{C:/Users/agomez/Dropbox/Identify laws and dynamical systems in socioeconomic data/machine_learned_patterns_in_economic_development/notebooks/regression_figures/PC0_regressions_coefficients_multivariate.pdf}\n\\caption{\n\\textbf{Coefficients of the predictors from a multivariate regression including all predictors.} \n All regressions included year-specific fixed-effects, errors are clustered by country and error bars reflect $95\\%$ confidence intervals. The estimates are for standardized coefficients.}\n\\label{fig:regcoefs_multivariate}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{C:/Users/agomez/Dropbox/Identify laws and dynamical systems in socioeconomic data/machine_learned_patterns_in_economic_development/notebooks/regression_figures/PC0_acrossyears_regressions_coefficients_subset_multivariate.pdf}\n\\caption{\n\\textbf{Coefficients of the predictors from multivariate regressions carried out separately by year.} \n The estimates are for standardized coefficients.}\n\\label{fig:regcoefs_acrossyears}\n\\end{center}\n\\end{figure}\n\n\nAs can be seen, while all regressors predict to some extent $\\scoreFirstPC$ when we carry out univariate regressions, when all are put together only exports per capita, diversity, government effectiveness, and rule of law survive. 
In the multivariate regressions done per year, the coefficients for exports per capita and diversity are consistently significant and positive, and have similar magnitudes.\n", "meta": {"hexsha": "0a62571a875c405a2f1c4cea13a58d3c9fc5503f", "size": 8264, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notebooks/scriptsR/WhatDoesPC1Capture.tex", "max_stars_repo_name": "cbrummitt/machine_learned_patterns_in_economic_development", "max_stars_repo_head_hexsha": "1da4c55977a27dd5be101b1c5dfe489d14794b34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-03-05T14:54:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-11T05:49:07.000Z", "max_issues_repo_path": "notebooks/scriptsR/WhatDoesPC1Capture.tex", "max_issues_repo_name": "cbrummitt/machine_learned_patterns_in_economic_development", "max_issues_repo_head_hexsha": "1da4c55977a27dd5be101b1c5dfe489d14794b34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/scriptsR/WhatDoesPC1Capture.tex", "max_forks_repo_name": "cbrummitt/machine_learned_patterns_in_economic_development", "max_forks_repo_head_hexsha": "1da4c55977a27dd5be101b1c5dfe489d14794b34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-02T17:41:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T09:02:26.000Z", "avg_line_length": 99.5662650602, "max_line_length": 753, "alphanum_fraction": 0.7971926428, "num_tokens": 2031, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5709904610747024}}
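The norm identity used in the derivation above ($\|R_c\|^2 = \|H_c\|^2$) is easy to check numerically. A minimal sketch, with a NumPy SVD standing in for the PCA factorization $\mathrm{R}=\mathrm{H}\mathrm{V}^T$; the matrix below is a random placeholder, not the export data:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(50, 200))   # placeholder for the R_{cp} matrix

# SVD: R = U S V^T, so the score matrix is H = U S and R = H V^T.
U, S, Vt = np.linalg.svd(R, full_matrices=False)
H = U * S                        # scores in the principal components

# Row norms agree: ||R_c||^2 == sum_k H_{ck}^2 for every country c.
print(np.allclose((R**2).sum(axis=1), (H**2).sum(axis=1)))   # True
\end{verbatim}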
{"text": " \\documentclass [12pt]{article} \n\n\\usepackage {amsmath}\n\\usepackage {amsthm}\n\\usepackage {amssymb}\n\\usepackage {graphicx} \n\\usepackage {float}\n\\usepackage {multirow}\n\\usepackage {xcolor}\n\\usepackage [ruled,vlined,commentsnumbered,titlenotnumbered]{algorithm2e} \\usepackage {array} \n\\usepackage {booktabs} \n\\usepackage {url} \n\\usepackage {parskip} \n\\usepackage [margin=1in]{geometry} \n\\usepackage [T1]{fontenc} \n\\usepackage {cmbright} \n\\usepackage [many]{tcolorbox} \n\\usepackage [colorlinks = true,\n            linkcolor = blue,\n            urlcolor  = blue,\n            citecolor = blue,\n            anchorcolor = blue]{hyperref} \n\\usepackage {enumitem} \n\\usepackage {xparse} \n\\usepackage {verbatim}\n\\usepackage{algpseudocode}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\\lstset { %\n    language=C++,\n    backgroundcolor=\\color{black!5}, % set backgroundcolor\n    basicstyle=\\footnotesize,% basic font setting\n}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{remark}{Remark}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{claim}{Claim}\n\\newtheorem{proposition}{Proposition}\n\n\n\n\n\n\n\\DeclareTColorBox {Solution}{}{breakable, title={Solution}} \\DeclareTColorBox {Solution*}{}{breakable, title={Solution (provided)}} \\DeclareTColorBox {Instruction}{}{boxrule=0pt, boxsep=0pt, left=0.5em, right=0.5em, top=0.5em, bottom=0.5em, arc=0pt, toprule=1pt, bottomrule=1pt} \\DeclareDocumentCommand {\\Expecting }{+m}{\\textbf {[We are expecting:} #1\\textbf {]}} \\DeclareDocumentCommand {\\Points }{m}{\\textbf {(#1 pt.)}} \n\n\\begin {document} \n\n\\vspace {1em} \n\\begin {Instruction} \nAdapted From Virginia Williams'lecture notes.\n\\end {Instruction}  \n\n{\\LARGE \\textbf {COMP 285 (NC A\\&T, Spr `22)}\\hfill \\textbf {Lecture 32} } \n\n\\begin{centering}\n\\section*{MSTs II, Max Flow}\n\\end{centering}\n\n\\section{Kruskal's Algorithm} \n\nAt a high level, the set A maintained by Kruskal's algorithm is a set of disjoint trees. During update step $i$, if the ith smallest edge connects different trees, merge the two trees connected by this edge. The algorithm progresses until eventually only one tree remains at which point the set A represents an MST of the graph.\n\nKruskal's algorithm utilizes the union-find (aka disjoint set) data structure in order to handle the merging of the disjoint trees maintained by the algorithm. 
The union-find data structure supports disjoint sets with the following operations:\n\n\\begin{itemize}\n\\item \\textbf{makeset}$(u)$: creates a new set containing $u$ provided that $u$ is not in any other set\n\\item \\textbf{find}$(u)$: returns the name of the set containing $u$\n\\item \\textbf{union}$(u, v )$: merge the set containing $u$ and the set containing $v$ into one set\n\\end{itemize}\n\n(A compact code sketch of these operations appears at the end of these notes.) The algorithm itself can be structured as follows:\n\n\\begin{algorithm}\n\\caption{Kruskal($G$)}\n\\label{alg:kruskal_algorithm}\n\\begin{algorithmic}\n\\State $A \\gets \\emptyset$\n\\State $E' \\gets$ sort edges by weight in non-decreasing order\n\\State \\For{ $v \\in V$} {\n    \\State makeset(v)\n}\n\\State \\For{ $(u,v) \\in E'$} {\n    \\If{find($u) \\neq $ find(v)} {\n        \\State $A \\gets A \\cup \\{(u,v)\\}$\n        \\State union(u,v)\n    }\n}\n\\State \\Return $A$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\textbf{Correctness} The correctness follows from Theorem \\ref{thm:lemma}.\n\n\n\\textbf{Running time} \n\nThe runtime of Kruskal's algorithm depends on two factors: the time to sort the edges by weight and the runtime of the union-find data structure operations. While $\\Omega(m \\log n)$ time is required for sorting the edges if we use comparison-based sorting, in many cases, we may be able to sort the edges in linear time. (Recall that RadixSort can be used to sort the edges in $O(m)$ time if the weights are given by integers bounded by a polynomial in $m$.) In this case, the runtime is bounded by the runtime of the union-find operations and is given by $O(nT(\\text{makeset}) + mT(\\text{find}) +nT(\\text{union}))$. The best known data structure supporting the union-find operations runs in amortized time $O(\\alpha(n))$ where $\\alpha(n)$ is the inverse Ackermann function. Interestingly, the value of the inverse Ackermann is tiny for all practical purposes: \n$$\n\\alpha(n) \\leq 4, \\forall n < \\text{\\# atoms in the universe}\n$$\n\nand thus for all practical purposes, the union-find operations run in constant time. Thus, in many settings, the runtime of Kruskal's algorithm is nearly linear in the number of edges. \n\nThe actual definition of $\\alpha(n)$ is $\\alpha(n) = \\min\\{k \\mid A(k) \\geq n\\}$, where $A(k)$ is the Ackermann function evaluated at $k$. $A(k)$ itself is defined using the more general Ackermann function as $A(k) = A_k(2)$. $A_k(x)$ is defined recursively: \n\n\\begin{align*}\nA_m(x) = \n    \\begin{cases}\n        x + 1 & m = 0 \\\\\n        A_{m-1}(1) & m > 0, x = 0  \\\\\n        A_{m-1}(A_m(x - 1)) & \\text{ else }\n    \\end{cases}\n\\end{align*}\n\nFor example\n\\begin{itemize}\n    \\item $A_0(x) = 1 + x$, so $A_0(2) = 3$\n    \\item To compute $A_1(x)$, we see:\n    $$\n    A_1(x) = A_0(A_1(x-1)) = A_0(A_0(A_1(x-2))) = \\cdots = A_0(A_0(\\cdots (A_0(A_1(0))))) = A_0(A_0(\\cdots(A_0(2))))\n    $$\n    that is, we have ``iterated'' $A_0$ $x$ times. If we work it out, we get:\n    $$\n    A_1(x) = 2x\n    $$\n    Thus, $A_1(2) = 4$\n    \\item $A_2(x) = 2^x \\cdot x$ ($A_1$ iterated $x$ times), so $A_2(2) = 8$ \n    \\item $A_3(x) \\geq 2^{2^{2^{2\\cdots}}}$, a \"tower\" of $x$ 2s ($A_2$ iterated $x$ times); it turns out that $A_3(2) \\geq 2^{11}$\n    \\item $A_4(x)$ is larger than the total number of atoms in the known universe, and also greater than the number of nanoseconds since the Big Bang. 
(Thus, $\\alpha(n) \\leq 4$ for all practical purposes)\n\\end{itemize}\n\n\\textbf{Example} In this example (Figure \\ref{fig:kruskal0}) we will run through the steps of Kruskal's algorithm in order to find an MST for the following graph:\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal0.png}\n\\caption{}\n\\label{fig:kruskal0}\n\\end{figure}\n \nWe begin by creating a new set for each node in the graph. We then begin iterating over the edges in non-decreasing order. The first edge we examine is $(g, h)$. This edge connects nodes $g$ and $h$ which are currently not part of the same set. We thus include this edge in $A$ and union the sets containing $g$ and $h$ as shown in Figure \\ref{fig:kruskal1}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal1.png}\n\\caption{}\n\\label{fig:kruskal1}\n\\end{figure}\n\nThe next edge in the sorted order is a tie between edges $(c, i)$ and $(f , g)$. Picking either one will yield a correct result, so let's say the algorithm picks $(c, i)$. Since $c$ and $i$ are not part of the same set we include this edge in $A$ and union the sets containing $c$ and $i$ as shown in Figure \\ref{fig:kruskal2}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal2.png}\n\\caption{}\n\\label{fig:kruskal2}\n\\end{figure}\n\n\n\nThe next edge in the sorted order is $(f , g)$. We union the sets containing $f$ and $g$ and add edge $(f , g)$ to $A$ as shown in Figure \\ref{fig:kruskal3}.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal3.png}\n\\caption{}\n\\label{fig:kruskal3}\n\\end{figure}\n\nThe next edge in the sorted order is a tie between edges $(a, b)$ and $(c, f )$. Let's say the\nalgorithm picks $(a, b)$. We union the sets containing $a$ and $b$ and add edge $(a, b)$ to $A$ as shown in Figure \\ref{fig:kruskal4}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal4.png}\n\\caption{}\n\\label{fig:kruskal4}\n\\end{figure}\n\n\nThe next edge in the sorted order is $(c, f )$. We union the sets containing $c$ and $f$ and add edge $(c, f )$ to $A$ as shown in Figure \\ref{fig:kruskal5}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal5.png}\n\\caption{}\n\\label{fig:kruskal5}\n\\end{figure}\n\nThe next edge in the sorted order is $(i, g)$. Note that $i$ and $g$ are contained in the same set which means that adding the edge $(i, g)$ would lead to a cycle in $A$. We therefore ignore $(i, g)$ and pick the next edge in sorted order which is a tie between $(c, d)$ and $(h, i)$. Let's say the algorithm picks $(c, d)$. We union the sets containing $c$ and $d$ and add edge $(c, d)$ to $A$ as shown in Figure \\ref{fig:kruskal6}.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal6.png}\n\\caption{}\n\\label{fig:kruskal6}\n\\end{figure}\n\nThe next edge in the sorted order is $(h, i)$, but $h$ and $i$ are contained in the same set so we ignore it. The next edge is a tie between edges $(a, h)$ and $(b, c)$. Let's say the algorithm picks $(a, h)$. We union the sets containing $a$ and $h$ and add edge $(a, h)$ to $A$ as shown in Figure \\ref{fig:kruskal7}.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal7.png}\n\\caption{}\n\\label{fig:kruskal7}\n\\end{figure}\n\nThe next edge in the sorted order which has both vertices in different sets is $(d, e)$. We union the sets containing $d$ and $e$ and add edge $(d, e)$ to $A$ as shown in Figure \\ref{fig:kruskal8}. 
At this point all nodes are contained in the same set, so no further edges are added to $A$.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.8]{kruskal8.png}\n\\caption{}\n\\label{fig:kruskal8}\n\\end{figure}\n\n\n\\section{The Latest and Greatest Algorithms}\n \nWhile the greedy algorithms mentioned above are reasonably efficient ways to compute a minimum spanning tree of a graph, recent research has yielded more efficient algorithms. In 1995, Karger, Klein, and Tarjan discovered a randomized linear time ($O(E + V )$) algorithm based on Bor\u016fvka's algorithm and the reverse-delete algorithm. In 2000, Chazelle discovered the current fastest deterministic algorithm which runs in time $O(E \\alpha(V ))$ using soft heaps where $\\alpha$ is the inverse Ackermann function.\n\n\n\\section{History of Flows and Cuts}\n\nToday we will study a very interesting problem called the max flow problem. Before we get to the algorithm and math, we briefly discuss the interesting history behind max flow! During the Cold War, the US Air Force at that time was very interested in the Soviet train networks. In reports that were declassified in 1999, it was revealed that the Air Force collected enough information about the train network that they were able to determine how resources were shipped from the Soviet Union to Europe. The Air Force was very interested in determining how much resources can be transported from the Soviet Union to Europe, and what needed to be done to destroy this movement of resources. What this translates to is the min cut problem, i.e., cutting the minimum number of train tracks so that nothing goes to Europe. Here, cutting an edge meant dropping a bomb. Nowadays, however, there are much more peaceful applications of this problem, for instance, understanding the flow of information through the Internet.\n\n\n\\section{Formulation of the Maximum Flow Problem}\n\nYou are given an input graph $G = (V, E)$, where the edges are directed. There is a function $c : E \\to \\mathbb{R}_{\\geq 0}$ that defines the capacity of each edge. We also label two nodes in $G$, $s$ and $t$, as the source and destination, respectively. The task is to output a flow of maximum value. We will shortly define what a flow is and what a flow of maximum value means. \n\nA flow $f$ is a function $f : E \\to \\mathbb{R}_{\\geq 0}$ such that \n\\begin{enumerate}\n    \\item Capacity constraints are satisfied: \n    $$\n        \\forall (u, v ) \\in E : 0 \\leq f (u, v ) \\leq c(u, v )\n    $$\n    \\item Flow conservation constraints are satisfied: \n    $$\n    \\forall v \\in V \\setminus \\{s, t\\} : \\sum_{x \\in N_{\\text{in}}(v)} f (x, v ) = \\sum_{y \\in N_{\\text{out}}(v)} f (v, y )\n    $$\n    Here $N_{\\text{in}}(v )$ denotes the set of nodes with an edge that points to $v$ and $N_{\\text{out}}(v )$ denotes the set of nodes that $v$ points to.\n\\end{enumerate}\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[scale=0.6]{max_flow_ex.png}\n    \\caption{(Left) Graph $G$ with edge capacities (Right) Graph $G$ with a sample flow.}\n    \\label{fig:max_flow_ex}\n\\end{figure}\n \n\nSuppose that there are no edges going into $s$ and no edges coming out of $t$. From the above, you can verify yourself that $\\sum_{x \\in N_{\\text{out}}(s)} f (s, x) = \\sum_{y \\in N_{\\text{in}}(t)} f (y, t)$. We define the value $\\sum_{x \\in N_{\\text{out}}(s)} f (s, x)$ to be the value of the flow $f$. We usually denote the value of a flow $f$ as $|f |$. 
If there are edges going into $s$ and out of $t$, then the value of $f$ is $$\n|f | = \\sum_{x \\in N_{\\text{out}}(s)} f (s, x) - \\sum_{ y \\in N_{\\text{in}}(s)} f (y, s)\n$$\n\nThe max flow problem is to find some flow $f$ such that $|f |$ is maximized.\n\n\\begin{remark}\nIn the analysis below we consider graphs with a single source $s$ and a single sink $t$. However, if we need to work with a graph with multiple sources, we can do so by adding a new source node, and then adding edges with capacity infinity from it to each of the multiple sources. Similarly, if we want to have multiple sinks, we add a new sink node and add edges from the multiple sinks to that sink with capacity infinity.\n\\end{remark}\n\n\\section{Example}\n\nIn Fig. \\ref{fig:max_flow_ex}, we have a graph $G$ and a sample flow $f$. Observe that the two constraints for a flow are satisfied. There can be multiple other flows possible that can satisfy the constraints. For our given flow, $|f | = 16$. The max flow for this graph is actually $18$, as we will see shortly.\n\n\n\\section{Formulation of the Minimum Cut Problem}\n\nNow, we give a formulation of the min cut problem defined for directed graphs with source and destination nodes $s$ and $t$. Note that there is also a version of the min cut problem without a source and sink node, though we won't discuss that now. An s-t cut is a partition $V = S \\cup T$ where $S$ and $T$ are disjoint and $s \\in S, t \\in T$, and the size/cost of an s-t cut is $c(S, T) := \\sum_{x\\in S, y\\in T} c(x, y )$.\n\nFor our graph $G$ shown above, if we set $S = \\{s, a, c\\}$ and $T = \\{b, t\\}$, then the cost of the cut is $c(a, b) + c(c, b) + c(c, t) = 5 + 3 + 10 = 18$. If we take another cut $S' = \\{s, c\\}, T' = \\{a, b, t\\}$, then $c(S', T') = c(s, a) + c(c, b) + c(c, t) = 10 + 3 + 10 = 23$. Note that we do\nnot consider the edge $\\{a, c\\}$ as it is in the wrong direction (we only consider edges from $S'$ to $T'$).\n\n\n\\section{The Max-Flow Min-Cut Theorem}\n\n\\begin{lemma}\nFor any flow $f$ and any s-t cut $(S, T)$ of $G$, we have $|f | \\leq c(S, T)$. 
In particular, the value of the max flow is at most the value of the min cut.\n\\end{lemma}\n\n\\begin{proof}\n\n\\begin{align*}\n|f| &= \\sum_{x \\in N_{\\text{out}}(s)} f(s,x) - \\sum_{y \\in N_{\\text{in}}(s)} f(y,s) \\\\\n&=\\sum_{v \\in S} \\left(\\sum_{x \\in N_{\\text{out}}(v)} f(v,x) - \\sum_{y \\in N_{\\text{in}}(v)} f(y,v) \\right) \\tag{by the flow conservation constraint for $v \\neq s$} \\\\\n&=\\sum_{v \\in S} \\left(\\sum_{x \\in N_{\\text{out}}(v) \\cap S} f(v,x) - \\sum_{y \\in N_{\\text{in}}(v) \\cap S} f(y,v) \\right) + \\sum_{v \\in S} \\left(\\sum_{x \\in N_{\\text{out}}(v) \\cap T} f(v,x) - \\sum_{y \\in N_{\\text{in}}(v) \\cap T} f(y,v) \\right) \\\\\n&=\\sum_{v \\in S} \\left(\\sum_{x \\in N_{\\text{out}}(v) \\cap T} f(v,x) - \\sum_{y \\in N_{\\text{in}}(v) \\cap T} f(y,v) \\right) \\tag{first term sums to $0$} \\\\\n&\\leq \\sum_{v \\in S, x \\in T, x \\in N_{\\text{out}}(v)} f(v,x) \\\\\n&\\leq \\sum_{v \\in S, x \\in T, x \\in N_{\\text{out}}(v)} c(v,x) \\\\\n&= c(S,T)\n\\end{align*}\n\nIn the proof, $\\sum_{v \\in S} \\left(\\sum_{x\\in N_{\\text{out}}(v) \\cap S} f(v,x) - \\sum_{y\\in N_{\\text{in}}(v) \\cap S} f(y,v)\\right) = 0$ since we add and subtract the flow $f(u, v)$ for every $u, v \\in S$ such that $(u, v) \\in E$.\n\\end{proof}\n\nWe get the following consequence.\n\n\\begin{corollary}\nIf we can find $f$ and $(S,T)$ such that $|f| = c(S,T)$ then $f$ is a max-flow and $(S,T)$ is a min s-t cut.\n\\end{corollary}\n\nIt turns out we can always find such an $f$ and $(S,T)$ for any graph.\n\n\\begin{theorem}[Max-Flow Min-Cut Theorem]\nFor any graph $G$, source $s$ and destination $t$, the value of the max flow is equal to the cost of the min cut.\n\\end{theorem}\n\nWe will show this by coming up with an algorithm. The algorithm will take the graph $G$ and some flow $f$ that has already been constructed, and create a new graph that is called the residual graph. In this new graph, the algorithm will try to find a path from $s$ to $t$. If no such path exists, we will show that the value of the flow we started with is the value of the maximum flow. If not, we show how to increase the value of our flow by pushing some flow on that path. 
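\n\nTo preview the algorithm, here is a minimal Python sketch of the augmenting-path idea (a standard breadth-first variant, i.e. Edmonds--Karp; this code is our own illustration, not from the notes, and it keeps the residual capacities in a dictionary rather than building a separate residual graph object):\n\n\\begin{verbatim}\nfrom collections import deque\n\ndef max_flow(nodes, cap, s, t):\n    # residual capacities; reverse edges start at 0\n    r = {}\n    for (u, v), c in cap.items():\n        r[(u, v)] = r.get((u, v), 0) + c\n        r.setdefault((v, u), 0)\n    value = 0\n    while True:\n        # BFS for an s-t path with positive residual capacity\n        parent = {s: None}\n        queue = deque([s])\n        while queue and t not in parent:\n            u = queue.popleft()\n            for v in nodes:\n                if v not in parent and r.get((u, v), 0) > 0:\n                    parent[v] = u\n                    queue.append(v)\n        if t not in parent:\n            return value  # no augmenting path: the flow is maximum\n        # collect the path, find its bottleneck, and push that much flow\n        path, v = [], t\n        while parent[v] is not None:\n            path.append((parent[v], v))\n            v = parent[v]\n        b = min(r[e] for e in path)\n        for (u, v) in path:\n            r[(u, v)] -= b\n            r[(v, u)] += b\n        value += b\n\\end{verbatim}\n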
\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.6]{max_flow_res.png}\n\\caption{The residual network for the flow presented in Figure \\ref{fig:max_flow_ex}}\n\\label{fig:max_flow_res}\n\\end{figure}\n\n\\end{document}", "meta": {"hexsha": "0fac0279b27b951c10c7c19d73a8fc8b21c739e5", "size": 16701, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/lectures/lecture32.tex", "max_stars_repo_name": "facebookEIR/algorithms-course", "max_stars_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-16T02:47:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-16T02:47:46.000Z", "max_issues_repo_path": "assets/lectures/lecture32.tex", "max_issues_repo_name": "facebookEIR/algorithms-course", "max_issues_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/lectures/lecture32.tex", "max_forks_repo_name": "facebookEIR/algorithms-course", "max_forks_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-20T21:52:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-21T03:00:16.000Z", "avg_line_length": 47.7171428571, "max_line_length": 1013, "alphanum_fraction": 0.6897790551, "num_tokens": 5279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764747, "lm_q2_score": 0.8577681049901037, "lm_q1q2_score": 0.5707679426337869}}
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{numprint}\n\n\\author{Daniel Fernandes Martins (danielfmt)}\n\\title{Question \\#9 Solution}\n\n\\begin{document}\n\n\\maketitle\n\n\\textbf{Disclaimer.} This is the reasoning I used to solve the problem; it\nmay be wrong though. This is intended just as food for thought.\n\n\\section{Finding The Lower Bound}\n\nLet's start with two hypothesis sets with break point $k=2$ and $N=2$. This\nmeans that these hypothesis sets cannot shatter any combination of 2 points,\nwhich implies that $d_{vc}=1$ for both of them.\n\nThese are the dichotomies realized by $\\mathcal{H}_1$ on these 2 points:\n\n\\begin{equation*}\n\\begin{split}\n\\{a=-1, b=+1\\} \\\\\n\\{a=+1, b=-1\\}\n\\end{split}\n\\end{equation*}\n\nFor $\\mathcal{H}_2$:\n\n\\begin{equation*}\n\\begin{split}\n\\{a=-1, b=-1\\} \\\\\n\\{a=+1, b=+1\\}\n\\end{split}\n\\end{equation*}\n\nNotice that the dichotomies realized by $\\mathcal{H}_1$ are different from the\nones realized by $\\mathcal{H}_2$, which means the intersection\n$\\mathcal{H}_1\\cap\\mathcal{H}_2$ is empty, so $d_{vc}=0$.\n\n\\section{Finding The Upper Bound}\n\nLet's say you have two hypothesis sets $\\mathcal{H}_1$ and $\\mathcal{H}_2$. The\nbiggest set you can get from the intersection of these two sets arises when\n$\\mathcal{H}_2\\subseteq\\mathcal{H}_1$, in which case the intersection is the\nsmaller set itself. Therefore, the upper bound can be at most the VC dimension\nof the smallest $\\mathcal{H}_k$.\n\n\\section{Solution}\n\n\\begin{equation*}\n0 \\leq\n  d_{vc}(\\bigcap_{k=1}^K\\mathcal{H}_k) \\leq\n  \\min\\{d_{vc}(\\mathcal{H}_k)\\}_{k=1}^K\n\\end{equation*}\n\n\\end{document}\n", "meta": {"hexsha": "e66dcf4ab579f92a1c6c1f4528876ee5da6f802d", "size": 1548, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week-04/math/q09.tex", "max_stars_repo_name": "danielfm/edx-learning-from-data", "max_stars_repo_head_hexsha": "1675e14c20fc1b7ad54d2704b9c8a941e043cbcb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 98, "max_stars_repo_stars_event_min_datetime": "2015-04-27T06:55:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-05T06:09:19.000Z", "max_issues_repo_path": "week-04/math/q09.tex", "max_issues_repo_name": "danielfm/edx-learning-from-data", "max_issues_repo_head_hexsha": "1675e14c20fc1b7ad54d2704b9c8a941e043cbcb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-05-14T19:33:33.000Z", "max_issues_repo_issues_event_max_datetime": "2017-08-12T13:07:41.000Z", "max_forks_repo_path": "week-04/math/q09.tex", "max_forks_repo_name": "danielfm/edx-learning-from-data", "max_forks_repo_head_hexsha": "1675e14c20fc1b7ad54d2704b9c8a941e043cbcb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2015-01-10T08:18:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-28T08:46:22.000Z", "avg_line_length": 25.8, "max_line_length": 79, "alphanum_fraction": 0.7118863049, "num_tokens": 527, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5706310153883375}}
{"text": "\\chapter{Inner model theory}\nModel theory is \\emph{really} meta, so you will have to pay attention here.\n\nRoughly, a ``model of $\\ZFC$'' is a set with a binary relation that satisfies the $\\ZFC$ axioms,\njust as a group is a set with a binary operation that satisfies the group axioms.\nUnfortunately, unlike with groups, it is very hard for me to give interesting examples of models,\nfor the simple reason that we are literally trying to model the entire universe.\n\n\\section{Models}\n\\prototype{$(\\omega, \\in)$ obeys $\\PowerSet$, $V_\\kappa$ is a model for $\\kappa$ inaccessible (later).}\n\\begin{definition}\n\tA \\vocab{model} $\\MM$ consists of a set $M$ and a\n\tbinary relation $E \\subseteq M \\times M$.\n\t(The $E$ relation is the ``$\\in$'' for the model.)\n\\end{definition}\n\\begin{remark}\n\tI'm only considering \\emph{set-sized} models where $M$ is a set.\n\tExperts may be aware that I can actually play with $M$ being a class,\n\tbut that would require too much care for now.\n\\end{remark}\nIf you have a model, you can ask certain things about it,\nsuch as ``does it satisfy $\\EmptySet$?''.\nLet me give you an example of what I mean and then make it rigorous.\n\\begin{example}\n\t[A stupid model]\n\tLet's take $\\MM = (M,E) = \\left( \\omega, \\in \\right)$.\n\tThis is not a very good model of $\\ZFC$, but let's see if we can\n\tmake sense of some of the first few axioms.\n\t\\begin{enumerate}[(a)]\n\t\t\\ii $\\MM$ satisfies $\\Extensionality$, which is the sentence\n\t\t\\[ \\forall x \\forall y \\forall a :\n\t\t\t\\left( a \\in x \\iff a \\in y \\right) \\implies x = y. \\]\n\t\tThis just follows from the fact that $E$ is actually $\\in$.\n\n\t\t\\ii $\\MM$ satisfies $\\EmptySet$, which is the sentence\n\t\t\\[ \\exists a : \\forall x \\; \\neg (x \\in a). \\]\n\t\tNamely, take $a = \\varnothing \\in \\omega$.\n\n\t\t\\ii $\\MM$ does not satisfy $\\Pairing$, since $\\{1,3\\}$ is not in $\\omega$,\n\t\teven though $1, 3 \\in \\omega$.\n\n\t\t\\ii Miraculously, $\\MM$ satisfies $\\Union$, since for any $n \\in \\omega$,\n\t\t$\\cup n$ is $n-1$ (unless $n=0$).\n\t\tThe Union axiom states that\n\t\t\\[ \\forall a \\exists U \\quad \\forall x \\;\n\t\t\\left[ (x \\in U) \\iff (\\exists y : x \\in y \\in a) \\right]. \\]\n\t\tAn important thing to notice is that the ``$\\forall a$'' ranges only over\n\t\tthe sets in the model of the universe, $\\MM$.\n\t\\end{enumerate}\n\\end{example}\n\\begin{example}\n\t[Important: this stupid model satisfies $\\PowerSet$]\n\tMost incredibly of all: $\\MM = (\\omega, \\in)$ satisfies $\\PowerSet$.\n\tThis is a really important example.\n\t\n\tYou might think this is ridiculous. Look at $2 = \\{0,1\\}$.\n\tThe power set of this is $\\{0, 1, 2, \\{1\\}\\}$ which is not in the model, right?\n\n\tWell, let's look more closely at $\\PowerSet$. It states that:\n\t\\[ \\forall x \\exists a \\forall y (y \\in a \\iff y \\subseteq x). 
\\]\n\tWhat happens if we set $x = 2 = \\{0,1\\}$?\n\tWell, actually, we claim that $a = 3 = \\{0,1,2\\}$ works.\n\tThe key point is ``for all $y$'' -- this \\emph{only ranges over the objects in $\\MM$}.\n\tIn $\\MM$, the only subsets of $2$ are $0 = \\varnothing$,\n\t$1 = \\{0\\}$ and $2 = \\{0,1\\}$.\n\tThe ``set'' $\\{1\\}$ in the ``real world'' (in $V$) is not a set in the model $\\MM$.\n\n\tIn particular, you might say that in this strange new world,\n\twe have $2^n = n+1$, since $n = \\{0,1,\\dots,n-1\\}$ really does\n\thave only $n+1$ subsets.\n\\end{example}\n\n\\begin{example}\n\t[Sentences with parameters]\n\tThe sentences we ask of our model are allowed to have ``parameters'' as well.\n\tFor example, if $\\MM = (\\omega, \\in)$ as before then $\\MM$ satisfies the sentence\n\t\\[ \\forall x \\in 3 (x \\in 5). \\]\n\\end{example}\n\n\\section{Sentences and satisfaction}\nWith this intuitive notion, we can define what it means for a model to satisfy a sentence.\n\\begin{definition}\nNote that any sentence $\\phi$ can be written in one of five forms:\n\\begin{itemize}\n\t\\ii $x \\in y$\n\t\\ii $x = y$\n\t\\ii $\\neg \\psi$ (``not $\\psi$'') for some shorter sentence $\\psi$\n\t\\ii $\\psi_1 \\lor \\psi_2$ (``$\\psi_1$ or $\\psi_2$'')\n\tfor some shorter sentences $\\psi_1$, $\\psi_2$\n\t\\ii $\\exists x \\psi$ (``exists $x$'') for some shorter sentence $\\psi$.\n\\end{itemize}\n\\end{definition}\n\\begin{ques}\n\tWhat happened to $\\land$ (and) and $\\forall$ (for all)?\n\t(Hint: use $\\neg$.)\n\\end{ques}\nOften (almost always, actually) we will proceed by so-called ``induction on formula complexity'',\nmeaning that we define or prove something by induction using this.\nNote that we require all formulas to be finite.\n\nNow suppose we have a sentence $\\phi$, like $a = b$ or $\\exists a \\forall x \\neg (x \\in a)$,\nplus a model $\\MM = (M,E)$.\nWe want to ask whether $\\MM$ satisfies $\\phi$.\n\nTo give meaning to this, we have to designate certain variables as \\vocab{parameters}.\nFor example, if I asked you \n\\begin{quote}\n\t``Does $a=b$?''\n\\end{quote}\nthe first question you would ask is what $a$ and $b$ are.\nSo $a$, $b$ would be parameters: I have to give them values for this sentence to make sense.\n\nOn the other hand, if I asked you\n\\begin{quote}\n\t``Does $\\exists a \\forall x \\neg (x \\in a)$?''\n\\end{quote}\nthen you would just say ``yes''.\nIn this case, $x$ and $a$ are \\emph{not} parameters.\nIn general, parameters are those variables whose meaning is not given by some $\\forall$ or $\\exists$.\n\nIn what follows, we will let $\\phi(x_1, \\dots, x_n)$ denote a formula $\\phi$,\nwhose parameters are $x_1$, \\dots, $x_n$.\nNote that possibly $n=0$, for example all $\\ZFC$ axioms have no parameters.\n\n\\begin{ques}\n\tTry to guess the definition of satisfaction before reading it below.\n\t(It's not very hard to guess!)\n\\end{ques}\n\n\\begin{definition}\n\tLet $\\MM=(M,E)$ be a model.\n\tLet $\\phi(x_1, \\dots, x_n)$ be a sentence, and let $b_1, \\dots, b_n \\in M$.\n\tWe will define a relation\n\t\\[ \\MM \\vDash \\phi[b_1, \\dots, b_n] \\]\n\tand say $\\MM$ \\vocab{satisfies} the sentence $\\phi$ with parameters $b_1, \\dots, b_n$.\n\n\tThe relationship is defined by induction on formula complexity as follows:\n\t\\begin{itemize}\n\t\t\\ii If $\\phi$ is ``$x_1=x_2$'' then $\\MM \\vDash \\phi[b_1, b_2] \\iff b_1 = b_2$.\n\t\t\\ii If $\\phi$ is ``$x_1\\in x_2$'' then $\\MM \\vDash \\phi[b_1, b_2] \\iff b_1 \\; E \\; b_2$. 
\\\\\n\t\t(This is what we mean by ``$E$ interprets $\\in$''.)\n\t\t\\ii If $\\phi$ is ``$\\neg \\psi$'' then\n\t\t$\\MM \\vDash \\phi[b_1, \\dots, b_n] \\iff \\MM \\not\\vDash \\psi[b_1, \\dots, b_n]$.\n\t\t\\ii If $\\phi$ is ``$\\psi_1 \\lor \\psi_2$'' then $\\MM \\vDash \\phi[b_1, \\dots, b_n]$\n\t\tmeans $\\MM \\vDash \\psi_i[b_1, \\dots, b_n]$ for some $i=1,2$.\n\t\t\\ii Most important case: suppose $\\phi$ is $\\exists x \\psi(x,x_1, \\dots, x_n)$.\n\t\tThen $\\MM \\vDash \\phi[b_1, \\dots, b_n]$ if and only if \n\t\t\\[ \\exists b \\in M \\text{ such that } \\MM \\vDash \\psi[b, b_1, \\dots, b_n]. \\]\n\t\tNote that $\\psi$ has one extra parameter.\n\t\\end{itemize}\n\\end{definition}\nNotice where the information of the model actually gets used.\nWe only ever use $E$ in interpreting $x_1 \\in x_2$; unsurprising.\nBut we only ever use the set $M$ when we are running over $\\exists$ (and hence $\\forall$).\nThat's well-worth keeping in mind:\n\\begin{moral}\n\tThe behavior of a model essentially comes from $\\exists$ and $\\forall$,\n\twhich search through the entire model $M$.\n\\end{moral}\n\nAnd finally,\n\\begin{definition}\n\tA \\vocab{model of $\\ZFC$} is a model $\\MM = (M,E)$ satisfying all $\\ZFC$ axioms.\n\\end{definition}\n\nWe are especially interested in models of the form $(M, \\in)$, where $M$ is a \\emph{transitive} set.\n(We want our universe to be transitive,\notherwise we would have elements of sets which are not themselves\nin the universe, which is very strange.)\nSuch a model is called a \\vocab{transitive model}.\n\\begin{abuse}\n\tIf $M$ is a transitive set, the model $(M, \\in)$ will be abbreviated to just $M$.\n\\end{abuse}\n\\begin{definition}\n\tAn \\vocab{inner model} of $\\ZFC$ is a transitive model satisfying $\\ZFC$.\n\\end{definition}\n\n\\section{The Levy hierarchy}\n\\prototype{$\\mathtt{isSubset}(x,y)$ is absolute. The axiom\n$\\EmptySet$ is $\\Sigma_1$, $\\mathtt{isPowerSetOf}(X,x)$ is $\\Pi_1$.}\nA key point to remember is that the behavior of a model is largely determined by $\\exists$ and $\\forall$.\nIt turns out we can say even more than this.\n\nConsider a formula such as\n\\[ \\mathtt{isEmpty}(x) : \\neg \\exists a (a \\in x) \\]\nwhich checks whether a given set $x$ has an element in it.\nTechnically, this has an ``$\\exists$'' in it.\nBut somehow this $\\exists$ does not really search over the entire model,\nbecause it is \\emph{bounded} to search in $x$.\nThat is, we might informally rewrite this as\n\\[ \\neg (\\exists a \\in x) \\]\nwhich doesn't fit into the strict form,\nbut points out that we are only looking over $a \\in x$.\nWe call such a quantifier a \\vocab{bounded quantifier}.\n%and write it\n%in the way we see it in most mathematics, such as\n%\\[ (\\exists x \\in a) \\quad\\text{or}\\quad (\\forall x \\in a). \\]\n%To be painfully explicit,\n%$\\exists x \\in a \\psi$ is short for $\\exists x (x \\in a \\land \\psi)$,\n%while $\\forall x \\in a \\psi$ is short for $\\forall x (x \\in a \\implies \\psi)$.\n\nWe like sentences with bounded quantifiers because they designate\nproperties which are \\vocab{absolute} over transitive models.\nIt doesn't matter how strange your surrounding model $M$ is.\nAs long as $M$ is transitive, \n\\[ M \\vDash \\mathtt{isEmpty}(\\varnothing) \\]\nwill always hold.\nSimilarly, the sentence\n\\[ \\mathtt{isSubset}(x,y) : x \\subseteq y \\text { i.e. 
} \\forall a \\in x (a \\in y) \\]\nis absolute.\nSentences with this property are called $\\Sigma_0$ or $\\Pi_0$.\n\nThe situation is different with a sentence like\n\\[\n\t\\mathtt{isPowerSetOf}(y,x) :\n\t\\forall z \\left( z \\subseteq x \\iff z \\in y  \\right)\n\\]\nwhich in English means ``$y$ is the power set of $x$'', or just $y = \\PP(x)$.\nThe $\\forall z$ is \\emph{not} bounded here.\nThis weirdness is what allows things like\n\\[ \\omega \\vDash \\text{``$\\{0,1,2\\}$ is the power set of $\\{0,1\\}$''} \\]\nand hence\n\\[ \\omega \\vDash \\PowerSet \\]\nwhich was our stupid example earlier.\nThe sentence $\\mathtt{isPowerSetOf}$ consists of an unbounded $\\forall$ followed\nby an absolute sentence, so we say it is $\\Pi_1$.\n\nMore generally, the \\vocab{Levy hierarchy} keeps track of how bounded our\nquantifiers are.\nSpecifically,\n\\begin{itemize}\n\t\\ii Formulas which have only bounded quantifiers\n\tare $\\Delta_0 = \\Sigma_0 = \\Pi_0$.\n\t\\ii Formulas of the form $\\exists x_1 \\dots \\exists x_k \\psi$\n\twhere $\\psi$ is $\\Pi_n$ are considered $\\Sigma_{n+1}$.\n\t\\ii Formulas of the form $\\forall x_1 \\dots \\forall x_k \\psi$\n\twhere $\\psi$ is $\\Sigma_n$ are considered $\\Pi_{n+1}$.\n\\end{itemize}\n(A formula which is both $\\Sigma_n$ and $\\Pi_n$ is called $\\Delta_n$, but we won't\nuse this except for $n=0$.)\n\n\\begin{example}\n\t[Examples of $\\Delta_0$ sentences]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The sentences $\\mathtt{isEmpty}(x)$, $x \\subseteq y$, as discussed above.\n\t\t\\ii The formula ``$x$ is transitive'' can be expanded as a $\\Delta_0$ sentence.\n\t\t\\ii The formula ``$x$ is an ordinal'' can be expanded as a $\\Delta_0$ sentence.\n\t\\end{enumerate}\n\\end{example}\n\\begin{exercise}\n\tWrite out the expansions for ``$x$ is transitive'' and ``$x$ is ordinal''\n\tin a $\\Delta_0$ form.\n\\end{exercise}\n\\begin{example}\n\t[More complex formulas]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The axiom $\\EmptySet$ is $\\Sigma_1$; it is $\\exists a (\\mathtt{isEmpty}(a))$,\n\t\tand $\\mathtt{isEmpty}(a)$ is $\\Delta_0$.\n\t\t\\ii The formula ``$y = \\PP(x)$'' is $\\Pi_1$, as discussed above.\n\t\t\\ii The formula ``$x$ is countable'' is $\\Sigma_1$.\n\t\tOne way to phrase it is ``$\\exists f$ an injective map $x \\injto \\omega$'',\n\t\twhich necessarily has an unbounded ``$\\exists f$''.\n\t\t\\ii The axiom $\\PowerSet$ is $\\Pi_3$:\n\t\t\\[ \\forall y \\exists P \\forall x (x\\subseteq y \\iff x \\in P). \\]\n\t\\end{enumerate}\n\\end{example}\n\n\\section{Substructures, and Tarski-Vaught}\nLet $\\MM_1 = (M_1, E_1)$ and $\\MM_2 = (M_2, E_2)$ be models.\n\\begin{definition}\n\tWe say that $\\MM_1 \\subseteq \\MM_2$ if $M_1 \\subseteq M_2$ and\n\t$E_1$ agrees with $E_2$; we say $\\MM_1$ is a \\vocab{substructure} of $\\MM_2$.\n\\end{definition}\n\nThat's boring. 
The good part is:\n\\begin{definition}\n\tWe say $\\MM_1 \\prec \\MM_2$, or $\\MM_1$ is an \\vocab{elementary substructure} of $\\MM_2$,\n\tif $\\MM_1 \\subseteq \\MM_2$ and for \\emph{every} sentence $\\phi(x_1, \\dots, x_n)$\n\tand parameters $b_1, \\dots, b_n \\in M_1$, we have\n\t\\[\n\t\t\\MM_1 \\vDash \\phi[b_1, \\dots, b_n] \\iff\n\t\t\\MM_2 \\vDash \\phi[b_1, \\dots, b_n].\n\t\\]\n\\end{definition}\nIn other words, $\\MM_1$ and $\\MM_2$ agree on every sentence possible.\nNote that the $b_i$ have to come from $\\MM_1$; if the $b_i$ came from $\\MM_2$ then\nasking something of $\\MM_1$ wouldn't make sense.\n\nLet's ask now: how would $\\MM_1 \\prec \\MM_2$ fail to be true?\nIf we look at the possible sentences, none of the atomic formulas,\nnor the ``$\\land$'' and ``$\\neg$'', are going to cause issues.\n\nThe intuition you should be getting by now is that things go\nwrong once we hit $\\forall$ and $\\exists$.\nThey won't go wrong for bounded quantifiers.\nBut unbounded quantifiers search the entire model, and that's where things go wrong.\n\nTo give a ``concrete example'':\nimagine $\\MM_1$ is MIT, and $\\MM_2$ is the state of Massachusetts.\nIf $\\MM_1$ thinks there exist hackers at MIT,\ncertainly there exist hackers in Massachusetts.\nWhere things go wrong is something like:\n\\[ \\MM_2 \\vDash \\text{``$\\exists x$ : $x$ is a course numbered $> 50$''}. \\]\nThis is true for $\\MM_2$ because we can\ntake the witness $x = \\text{Math 55}$, say.\nBut it's false for $\\MM_1$,\nbecause at MIT all courses are numbered $18.701$ or something similar.\n\\begin{moral}\n\tThe issue is that the \\emph{witnesses}\n\tfor statements in $\\MM_2$ do not necessarily propagate\n\tdown to witnesses for $\\MM_1$.\n\\end{moral}\n\nThe Tarski-Vaught test says this is the only impediment:\nif every witness in $\\MM_2$ can be replaced by one in $\\MM_1$ then $\\MM_1 \\prec \\MM_2$.\n\\begin{lemma}\n\t[Tarski-Vaught]\n\tLet $\\MM_1 \\subseteq \\MM_2$.\n\tThen $\\MM_1 \\prec \\MM_2$ if and only if:\n\tFor every sentence $\\phi(x, x_1, \\dots, x_n)$ and parameters\n\t$b_1, \\dots, b_n \\in M_1$:\n\tif there is a witness $\\tilde b \\in M_2$\n\tto $\\MM_2 \\vDash \\phi[\\tilde b, b_1, \\dots, b_n]$\n\tthen there is a witness $b \\in M_1$ to $\\MM_1 \\vDash \\phi[b, b_1, \\dots, b_n]$.\n\\end{lemma}\n\\begin{proof}\n\tEasy after the above discussion.\n\tTo formalize it, use induction on formula complexity.\n\\end{proof}\n\n\\section{Obtaining the axioms of $\\ZFC$}\nWe now want to write down conditions for $M$ to satisfy $\\ZFC$ axioms.\nThe idea is that almost all the $\\ZFC$ axioms are just $\\Sigma_1$\nclaims about certain desired sets,\nand so verifying an axiom reduces to checking some appropriate ``closure'' condition:\nthat the witness to the axiom is actually in the model.\n\nFor example, the $\\EmptySet$ axiom is ``$\\exists a (\\mathtt{isEmpty}(a))$'',\nand so we're happy as long as $\\varnothing \\in M$, which is of course\ntrue for any nonempty transitive set $M$.\n\n\\begin{lemma}[Transitive sets inheriting $\\ZFC$]\n\t\\label{lem:transitive_ZFC}\n\tLet $M$ be a nonempty transitive set. 
Then\n\t\\begin{enumerate}[(i)]\n\t\t\\ii $M$ satisfies $\\Extensionality$, $\\Foundation$, $\\EmptySet$.\n\t\t\\ii $M \\vDash \\Pairing$ if $x,y \\in M \\implies \\{x,y\\} \\in M$.\n\t\t\\ii $M \\vDash \\Union$ if $x \\in M \\implies \\cup x \\in M$.\n\t\t\\ii $M \\vDash \\PowerSet$ if $x \\in M \\implies \\PP(x) \\cap M \\in M$.\n\t\t\\ii $M \\vDash \\Replacement$ if for every $x \\in M$\n\t\tand every function $F : x \\to M$\n\t\twhich is $M$-definable with parameters,\n\t\twe have $F``x \\in M$ as well.\n\t\t\\ii $M \\vDash \\Infinity$ as long as $\\omega \\in M$.\n\t\\end{enumerate}\n\\end{lemma}\nHere, a set $X \\subseteq M$ is \\vocab{$M$-definable with parameters}\nif it can be realized as\n\\[ X = \\left\\{ x \\in M \\mid \\phi[x, b_1, \\dots, b_n] \\right\\} \\]\nfor some (fixed) choice of parameters $b_1,\\dots,b_n \\in M$.\nWe allow $n=0$, in which case we say $X$ is \\vocab{$M$-definable without parameters}.\nNote that $X$ need not itself be in $M$!\nAs a trivial example, $X = M$ is $M$-definable without parameters\n(just take $\\phi[x]$ to always be true), and certainly we do not have $X \\in M$.\n\\begin{exercise}\n\tVerify (i)-(iv) above.\n\\end{exercise}\n\\begin{remark}\n\tConverses to the statements of \\Cref{lem:transitive_ZFC}\n\tare true for all claims other than (vi).\n\\end{remark}\n\n\\section{Mostowski collapse}\nUp until now I have been only talking about transitive models,\nbecause they were easier to think about.\nHere's a second, better reason we might only care about transitive models.\n\n\\begin{lemma}\n\t[Mostowski collapse lemma]\n\tLet $X = (X, \\in)$ be a model,\n\twhere $X$ is a set (possibly not transitive).\n\tThen there exists an isomorphism $\\pi \\colon X \\to M$ for\n\ta transitive model $M = (M,\\in)$.\n\\end{lemma}\n\nThis is also called the \\emph{transitive collapse}.\nIn fact, both $\\pi$ and $M$ are unique.\n\n\\begin{proof}\n\tThe idea behind the proof is very simple.\n\tSince $\\in$ is well-founded and extensional\n\t(satisfies $\\Foundation$ and $\\Extensionality$, respectively),\n\twe can look at the $\\in$-minimal element $x_\\varnothing$\n\tof $X$ with respect to $\\in$.\n\tClearly, we want to send that to $0 = \\varnothing$.\n\n\tThen we take the next-smallest set under $\\in$, and send it to $1 = \\{\\varnothing\\}$.\n\tWe ``keep doing this''; it's not hard to see this does exactly what we want.\n\n\tTo formalize, define $\\pi$ by transfinite recursion:\n\t\\[ \\pi(x) \\defeq \\left\\{ \\pi(y) \\mid y \\in x \\right\\}. 
\\]\n\tThis $\\pi$, by construction, does the trick.\n\\end{proof}\n\n\\begin{remark}\n\t[Digression for experts]\n\tEarlier versions of Napkin claimed this was true for general models\n\t$\\mathscr X = (X, E)$ with $\\mathscr X \\vDash \\Foundation + \\Extensionality$.\n\tThis is false; it does not even imply $E$ is well-founded,\n\tbecause there may be infinite descending chains of subsets of $X$\n\twhich do not live in $X$ itself.\n\tAnother issue is that $E$ may not be set-like.\n\\end{remark}\n\nThe picture of this is ``collapsing'' the elements of $M$ down\nto the bottom of $V$, hence the name.\n\\missingfigure{Picture of Mostowski collapse}\n\n\n\\section{Adding an inaccessible}\n\\prototype{$V_\\kappa$}\nAt this point you might be asking, well, where's my model of $\\ZFC$?\n\nI unfortunately have to admit now: $\\ZFC$ can never prove that there is a model of $\\ZFC$\n(unless $\\ZFC$ is inconsistent, but that would be even worse).\nThis is a result called G\\\"odel's incompleteness theorem.\n\nNonetheless, with some very modest assumptions added,\nwe can actually show that a model \\emph{does} exist:\nfor example, assuming that there exists a strongly inaccessible cardinal $\\kappa$ would do the trick;\n$V_\\kappa$ will be such a model (\\Cref{prob:inaccessible_model}).\nIntuitively you can see why: $\\kappa$ is so big that any set of rank lower than it can't escape it\neven if we take power sets, or apply any other construction that $\\ZFC$ lets us do.\n\nMore pessimistically,\nthis shows that it's impossible to prove in $\\ZFC$ that such a $\\kappa$ exists.\nNonetheless, we now proceed under $\\ZFC^+$ for convenience,\nwhich adds the existence of such a $\\kappa$\nas a final axiom.\nSo we now have a model $V_\\kappa$ to play with. Joy!\n\nGreat. Now we do something \\emph{really} crazy.\n\\begin{theorem}[Countable transitive model]\n\tAssume $\\ZFC^+$. Then there exists a transitive model $X$ of $\\ZFC$\n\tsuch that $X$ is a \\emph{countable} set.\n\\end{theorem}\n\\begin{proof}\n\tFasten your seat belts.\n\n\tFirst, since we assumed $\\ZFC^+$,\n\twe can take $V_\\kappa = (V_\\kappa, \\in)$ as our model of $\\ZFC$.\n\tStart with the set $X_0 = \\varnothing$.\n\tThen for every integer $n$, we do the following to get $X_{n+1}$.\n\t\\begin{itemize}\n\t\t\\ii Start with $X_{n+1}$ containing every element of $X_n$.\n\t\t\\ii Consider a formula $\\phi(x, x_1, \\dots, x_n)$\n\t\tand $b_1, \\dots, b_n$ in $X_n$.\n\t\tSuppose that $V_\\kappa$ thinks there is a $b \\in V_\\kappa$ for which\n\t\t\\[ V_\\kappa \\vDash \\phi[b, b_1, \\dots, b_n]. \\]\n\t\tWe then add in the element $b$ to $X_{n+1}$.\n\t\t\\ii We do this for \\emph{every possible formula in the language of set theory}.\n\t\tWe also have to put in \\emph{every possible set of parameters} from the previous set $X_n$.\n\t\\end{itemize}\n\tAt every step $X_n$ is countable.\n\tReason: there are countably many possible finite sets of parameters in $X_n$,\n\tand countably many possible formulas, so in total we only ever add in countably many things\n\tat each step.\n\tThis exhibits an infinite nested sequence of countable sets\n\t\\[ X_0 \\subseteq X_1 \\subseteq X_2 \\subseteq \\dots \\]\n\tNone of these is an elementary substructure of $V_\\kappa$,\n\tbecause each $X_n$ relies on witnesses in $X_{n+1}$.\n\tSo we instead \\emph{take the union}:\n\t\\[ X = \\bigcup_n X_n. 
\\]\n\tThis satisfies the Tarski-Vaught test, and is countable.\n\n\tThere is one minor caveat: $X$ might not be transitive.\n\tWe don't care, because we just take its Mostowski collapse.\n\\end{proof}\n\nPlease take a moment to admire how insane this is.\nIt hinges irrevocably on the fact that there are countably many sentences we can write down.\n\n\\begin{remark}\n\tThis proof relies heavily on the Axiom of Choice\n\twhen we add in the element $b$ to $X_{n+1}$.\n\tWithout Choice, there is no way of making these decisions all at once.\n\n\tUsually, the right way to formalize the Axiom of Choice usage is,\n\tfor every formula $\\phi(x, x_1, \\dots, x_n)$, to pre-commit (at the very beginning)\n\tto a function $f_\\phi(x_1, \\dots, x_n)$, such that given any $b_1, \\dots, b_n$,\n\t$f_\\phi(b_1, \\dots, b_n)$ will spit out the suitable value of $b$ (if one exists).\n\tPersonally, I think this is hiding the spirit of the proof, but it does\n\tmake it clear how exactly Choice is being used.\n\n\tThese $f_\\phi$'s have a name: \\vocab{Skolem functions}.\n\\end{remark}\n\nThe trick we used in the proof works in more general settings:\n\\begin{theorem}\n\t[Downward L\\\"owenheim-Skolem theorem]\n\tLet $\\MM = (M,E)$ be a model, and $A \\subseteq M$.\n\tThen there exists a set $B$ (called the \\vocab{Skolem hull} of $A$)\n\twith $A \\subseteq B \\subseteq M$,\n\tsuch that $(B,E) \\prec \\MM$, and\n\t\\[ \\left\\lvert B \\right\\rvert =\n\t\t\\max \\left\\{ \\omega, \\left\\lvert A \\right\\rvert \\right\\}. \\]\n\\end{theorem}\nIn our case, what we did was simply take $A$ to be the empty set.\n\\begin{ques}\n\tProve this. (Exactly the same proof as before.)\n\\end{ques}\n\n\n\\section{FAQ's on countable models}\nThe most common one is ``how is this possible?'',\nwith runner-up ``what just happened''.\n\nLet me do my best to answer the first question.\nIt seems like there are two things running up against each other:\n\\begin{enumerate}[(1)]\n\t\\ii $M$ is a transitive model of $\\ZFC$, but its universe is countable.\n\t\\ii $\\ZFC$ tells us there are uncountable sets!\n\\end{enumerate}\n(This has confused so many people it has a name, Skolem's paradox.)\n\nThe reason this works I actually pointed out earlier:\n\\emph{countability is not absolute, it is a $\\Sigma_1$ notion}.\n\nRecall that a set $x$ is countable if\n\\emph{there exists} an injective map $x \\injto \\omega$.\nThe first statement just says that \\emph{in the universe $V$},\nthere is an injective map $F: M \\injto \\omega$.\nIn particular, for any $x \\in M$ (hence $x \\subseteq M$, since $M$ is transitive),\n$x$ is countable \\emph{in $V$}.\nThis is the content of the first statement.\n\nBut for $M$ to be a model of $\\ZFC$, $M$ only has to think statements in $\\ZFC$ are true.\nMore to the point, the fact that $\\ZFC$ tells us there are uncountable sets means\n\\[ M \\vDash \\text{``$\\exists x$ uncountable''}. \\]\nIn other words,\n\\[ M \\vDash \\exists x \\forall f\n\t\\text{ If $f : x \\to \\omega$ then $f$ isn't injective}. 
\\]\nThe key point is that the $\\forall f$ searches only over functions in our tiny model $M$.\nIt is true that in the ``real world'' $V$, there are injective functions $f : x \\to \\omega$.\nBut $M$ has no idea they exist!\nIt is a brain in a vat: $M$ is oblivious to any information outside it.\n\nSo in fact, every ordinal which appears in $M$ is countable in the real world.\nIt is just not countable in $M$.\nSince $M \\vDash \\ZFC$, $M$ is going to think there is some smallest uncountable cardinal,\nsay $\\aleph_1^M$.\nIt will be the smallest (infinite) ordinal in $M$\nwith the property that there is no bijection \\emph{in the model $M$}\nbetween $\\aleph_1^M$ and $\\omega$.\nHowever, we necessarily know that such a bijection is going to exist in the real world $V$.\n\nPut another way, cardinalities in $M$ can look vastly different from those in the real world,\nbecause cardinality is measured by bijections, which I guess is inevitable, but leads to chaos.\n\n\\section{Picturing inner models}\nHere is a picture of a countable transitive model $M$.\n\n\\begin{center}\n\t\\begin{asy}\n\t\tsize(14cm);\n\t\tpair A = (12,30);\n\t\tpair B = -conj(A);\n\t\tpair M = midpoint(A--B);\n\t\tpair O = origin;\n\t\tMP(\"V\", A, dir(10));\n\t\tdraw(A--O--B);\n\n\t\tfill(A--O--B--cycle, opacity(0.3)+palecyan);\n\t\tMP(\"M\", 0.7*M, 3*dir(0)+dir(45));\n\n\t\tMP(\"V_0 = \\varnothing\", origin, dir(-20));\n\t\tMP(\"V_1 = \\{\\varnothing\\}\", 0.05*A, dir(0));\n\t\tMP(\"V_2 = \\{\\varnothing, \\{\\varnothing\\} \\}\", 0.10*A, dir(0));\n\n\t\tpair A1 = 0.4*A;\n\t\tpair B1 = 0.4*B;\n\t\tdraw(MP(\"V_\\omega\", A1, dir(0))--B1);\n\t\tdraw(MP(\"V_{\\omega+1} = \\mathcal P(V_\\omega)\", 0.45*A, dir(0))--0.45*B);\n\t\tDrawing(\"\\omega\", 0.45*M, dir(45));\n\n\t\tfilldraw(O--A1--(A1+0.15*M)..(0.7*M)..(B1+0.15*M)--B1--cycle,\n\t\t\topacity(0.3)+lightgreen, heavygreen+1);\n\t\tdraw(O--0.7*M, heavygreen+1);\n\n\t\tDrawing(\"\\aleph_1^V\", 0.80*M, dir(0));\n\t\tDrawing(\"\\aleph_2^V\", 0.90*M, dir(0));\n\t\tDrawing(\"\\aleph_1^M\", 0.55*M, dir(0));\n\t\tDrawing(\"\\aleph_2^M\", 0.60*M, dir(0));\n\n\t\tpair F = 0.6*B+0.15*A;\n\t\tDrawing(\"f\", F, dir(135));\n\t\tdraw(F--0.55*M, dotted, EndArrow, Margins);\n\t\tdraw(F--0.45*M, dotted, EndArrow, Margins);\n\n\t\tdraw(0.7*M--M);\n\t\tMP(\"\\mathrm{On}^V\", M, dir(90));\n\t\tMP(\"\\mathrm{On}^M\", 0.7*M, dir(135));\n\t\\end{asy}\n\\end{center}\n\nNote that $M$ and $V$ must agree on finite sets,\nsince every finite set has a formula that can express it.\nHowever, past $V_\\omega$ the model and the true universe start to diverge.\n\nThe entire model $M$ is countable, so it only occupies a small\nportion of the universe, below the first uncountable cardinal $\\aleph_1^V$\n(where the superscript means ``of the true universe $V$'').\nThe ordinals in $M$ are precisely the ordinals of $V$ which happen to live inside the model,\nbecause the sentence ``$\\alpha$ is an ordinal'' is absolute.\nOn the other hand, $M$ has only a portion of these ordinals, since it is only\na lowly set, and a countable set at that.\nTo denote the ordinals of $M$, we write $\\On^M$, where the superscript means\n``the ordinals as computed in $M$''.\nSimilarly, $\\On^V$ will now denote the ``set of true ordinals''.\n\nNonetheless, the model $M$ has its own version of the first uncountable\ncardinal $\\aleph_1^M$.\nIn the true universe, $\\aleph_1^M$ is countable (below $\\aleph_1^V$),\nbut the necessary bijection witnessing this might not be inside $M$.\nThat's why $M$ can think $\\aleph_1^M$ is uncountable, \neven 
if it is a countable cardinal in the original universe.\n\nSo our model $M$ is a brain in a vat.\nIt happens to believe all the axioms of $\\ZFC$, and so every\nstatement that is true in $M$ could conceivably be true in $V$ as well.\nBut $M$ can't see the universe around it; it has no idea that what it believes is\nthe uncountable $\\aleph_1^M$ is really just an ordinary countable cardinal.\n\n\\section\\problemhead\n\\begin{sproblem}\n\tShow that for any transitive model $M$, the set of ordinals in $M$\n\tis itself some ordinal.\n\\end{sproblem}\n\n\\begin{dproblem}\n\tAssume $\\MM_1 \\subseteq \\MM_2$. Show that\n\t\\begin{enumerate}[(a)]\n\t\t\\ii If $\\phi$ is $\\Delta_0$,\n\t\tthen $\\MM_1 \\vDash \\phi[b_1, \\dots, b_n] \\iff \\MM_2 \\vDash \\phi[b_1, \\dots, b_n]$.\n\t\t\\ii If $\\phi$ is $\\Sigma_1$,\n\t\tthen $\\MM_1 \\vDash \\phi[b_1, \\dots, b_n] \\implies \\MM_2 \\vDash \\phi[b_1, \\dots, b_n]$.\n\t\t\\ii If $\\phi$ is $\\Pi_1$,\n\t\tthen $\\MM_2 \\vDash \\phi[b_1, \\dots, b_n] \\implies \\MM_1 \\vDash \\phi[b_1, \\dots, b_n]$.\n\t\\end{enumerate}\n\t(This should be easy if you've understood the chapter.)\n\\end{dproblem}\n\n\\begin{dproblem}[Reflection]\n\t\\gim\n\tLet $\\kappa$ be an inaccessible cardinal such that $|V_\\alpha| < \\kappa$ for all $\\alpha < \\kappa$.\n\tProve that for any $\\delta < \\kappa$ there exists $\\delta < \\alpha < \\kappa$\n\tsuch that $V_\\alpha \\prec V_\\kappa$; in other words,\n\tthe set of $\\alpha$ such that $V_\\alpha \\prec V_\\kappa$ is \\emph{unbounded} in $\\kappa$.\n\tThis means that properties of $V_\\kappa$ reflect down to properties of $V_\\alpha$.\n\t\\begin{hint}\n\t\tThis is very similar to the proof of L\\\"owenheim-Skolem.\n\t\tFor a sentence $\\phi$, let $f_\\phi$\n\t\tsend $\\alpha$ to the least $\\beta < \\kappa$ such that for all $\\vec b \\in V_\\alpha$, if there exists $a \\in V_\\kappa$ such that $V_\\kappa \\vDash \\phi[a, \\vec b]$ then $\\exists a \\in V_\\beta$ such that $V_\\kappa \\vDash \\phi[a, \\vec b]$.\n\t\t(To prove this $\\beta$ exists, use the fact that $\\kappa$ is regular.)\n\t\tThen, take the supremum over the countably many sentences for each $\\alpha$.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tFor a sentence $\\phi$ let \\[ f_\\phi : \\kappa \\to \\kappa \\]\n\t\tsend $\\alpha$ to the least $\\beta < \\kappa$ such that for all $\\vec b \\in V_\\alpha$, if there exists $a \\in V_\\kappa$ such that $V_\\kappa \\vDash \\phi[a, \\vec b]$ then $\\exists a \\in V_\\beta$ such that $V_\\kappa \\vDash \\phi[a, \\vec b]$.\n\n\t\tWe claim this is well-defined.\n\t\tThere are only $\\left\\lvert V_\\alpha \\right\\rvert^n$ many possible choices of $\\vec b$,\n\t\tand in particular there are fewer than $\\kappa$ of these\n\t\t(since we know that $\\left\\lvert V_\\alpha \\right\\rvert < \\kappa$; compare \\Cref{prob:strongly_inaccessible}).\n\t\tOtherwise, we can construct a cofinal map from $\\left\\lvert V_\\alpha^n \\right\\rvert$ \n\t\tinto $\\kappa$ by mapping each vector $\\vec b$ into a $\\beta$ for which the proposition fails.\n\t\tAnd that's impossible since $\\kappa$ is regular!\n\n\t\tIn other words, what we've done is fix $\\phi$ and then use Tarski-Vaught on all the $\\vec b \\in V_\\alpha^n$.\n\t\tNow let $g : \\kappa \\to \\kappa$ be defined by\n\t\t\\[ \\alpha \\mapsto \\sup_{\\phi} f_\\phi(\\alpha). 
\\]\n\t\tSince $\\kappa$ is regular and there are only countably many formulas, $g(\\alpha)$ is well-defined.\n\n\t\tCheck that if $\\alpha$ has the property that $g$ maps $\\alpha$ into itself (in other words, $\\alpha$ is closed under $g$), then by the Tarski-Vaught test, we have $V_\\alpha \\prec V_\\kappa$.\n\n\t\tSo it suffices to show there are arbitrarily large $\\alpha < \\kappa$ which are closed under $g$.\n\t\tFix $\\alpha_0$. Let $\\alpha_1 = g(\\alpha_0)$, et cetera, and define\n\t\t\\[ \\alpha = \\sup_{n < \\omega} \\alpha_n. \\]\n\t\tThis $\\alpha$ is closed under $g$, and by making $\\alpha_0$ arbitrarily large we can make $\\alpha$ as large as we like.\n\t\\end{sol}\n\\end{dproblem}\n\n\\begin{sproblem}\n\t[Inaccessible cardinals produce models]\n\t\\label{prob:inaccessible_model}\n\t\\gim\n\tLet $\\kappa$ be an inaccessible cardinal.\n\tProve that $V_\\kappa$ is a model of $\\ZFC$.\n\\end{sproblem}\n", "meta": {"hexsha": "4b279528f848c78f1a6802c2c78d628897718c89", "size": 30401, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/set-theory/models.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/set-theory/models.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/set-theory/models.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.3062678063, "max_line_length": 237, "alphanum_fraction": 0.6867537252, "num_tokens": 9878, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.5706310153883375}}
{"text": "\\section{Measuring distance between vectors}\n\n", "meta": {"hexsha": "e0ea6dde65013b895f79014d404ed7ecbd629944", "size": 46, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/distance/01-00-vectors.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/distance/01-00-vectors.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/distance/01-00-vectors.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 15.3333333333, "max_line_length": 44, "alphanum_fraction": 0.8260869565, "num_tokens": 9, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5706310018235445}}
{"text": "\n    \\foldertitle{trec}{Time-Recursive Subscript Objects for Time-Recursive Expressions}{trec/Contents}\n\n\tTime subscript objects make it possible to create and evaluate time-recursive\nexpressions based on tseries objects. Read carefully the notes below on when IRIS\nfails to evaluate time-recursive expressions correctly.\n\nTrec methods:\n\n\\paragraph{Constructor}\\label{constructor}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\href{trec/trec}{\\texttt{trec}} - Create new recursive time subscript\n  object.\n\\end{itemize}\n\n\\paragraph{Creating lags and leads}\\label{creating-lags-and-leads}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\href{trec/plus}{\\texttt{plus}} - Create time-recursive lead of\n  tseries object.\n\\item\n  \\href{trec/minus}{\\texttt{minus}} - Create time-recursive lag of\n  tseries object.\n\\end{itemize}\n\n\\paragraph{Using Time-Recursive\nSubscripts}\\label{using-time-recursive-subscripts}\n\nTime-recursive expressions are expressions that are evaluated period by\nperiod, with each result assigned immediately to the left-hand side\ntseries variable and used in the subsequent periods.\n\nTo construct and evaluate time-recursive expressions, use tseries\nreferenced by a trec object, or a lag or lead created from a trec\nobject. Every tseries object on both the left-hand side (i.e.~the\nvariable under construction) and the right-hand side (i.e.~the variables\nin the expression that is being evaluated) must be referenced by a trec\nobject (or possibly a lag or lead). When referencing a tseries object by\na trec, you can use either curly braces, \\texttt{\\{...\\}}, or round\nbrackets, \\texttt{(...)}; there is no difference between them in\ntime-recursive expressions.\n\n$\\attention$ See the description of circumstances under which IRIS fails\nto evaluate time-recursive expressions correctly, and how to avoid/fix\nsuch situations.\n\n\\paragraph{Example}\\label{example}\n\nConstruct an autoregressive sequence starting from 10 with an\nautoregressive coefficient of 0.8:\n\n\\begin{verbatim}\n T = trec(qq(2010,1):qq(2020,4));\n X = tseries(qq(2009,4),10);\n X{T} = 0.8*X{T-1};\n\\end{verbatim}\n\n\\paragraph{Example}\\label{example-1}\n\nConstruct a first-order autoregressive process, \\texttt{x}, with\nnormally distributed innovations, \\texttt{e}:\n\n\\begin{verbatim}\nT = trec(qq(2010,1):qq(2020,4));\nx = tseries(qq(2009,4),10);\ne = tseries(qq(2010,1):qq(2020,4),@randn);\nx{T} = 10 + 0.8*x{T-1} + e{T};\n\\end{verbatim}\n\n\\paragraph{Example}\\label{example-2}\n\nConstruct a second-order log-autoregressive process going backward:\n\n\\begin{verbatim}\nT = trec(yy(2020):-1:yy(2000));\nB = tseries();\nB(yy(2022)) = 1.56;\nB(yy(2021)) = 1.32;\nB{T} = B{T+1}^1.2 * B{T+2}^0.6;\n\\end{verbatim}\n\n\\paragraph{Example}\\label{example-3}\n\nConstruct the Fibonacci sequence:\n\n\\begin{verbatim}\n T = trec(3:20);\n F = tseries(1:2,1);\n F{T} = F{T-1} + F{T-2};\n\\end{verbatim}\n\n\\paragraph{When IRIS Fails to Evaluate Time-Recursive Expressions\nCorrectly}\\label{when-iris-fails-to-evaluate-time-recursive-expressions-correctly}\n\nIRIS fails to evaluate time-recursive expressions correctly (without any\nindication of an error) when the following two circumstances occur at\nthe same time:\n\n\\begin{itemize}\n\\item\n  The time series used in the expression are within a database (struct),\n  or a cell array;\n\\item\n  At least one tseries object on the right-hand side has been created by\n  copying the left-hand side 
tseries object with no further\n  manipulation.\n\\end{itemize}\n\nUnder these circumstances, the right-hand side tseries variable will be\nassigned (updated with) the results calculated in each iteration as if it\nwere the tseries variable on the left-hand side.\n\n\\paragraph{Example}\\label{example-4}\n\nCreate a database with two tseries. Create one of the tseries by simply\ncopying the other (i.e.~plain assignment with no further manipulation).\n\n\\begin{verbatim}\nd = struct();\nd.x = tseries(1:10,1);\nd.y = d.x;\n\nT = trec(2:10);\nd.x{T} = 0.8*d.y{T-1}; % Fails to evaluate correctly!\n\\end{verbatim}\n\nThe above time-recursive expression will be incorrectly evaluated as if\nit were \\texttt{d.x\\{T\\} = 0.8*d.x\\{T-1\\}}.\n\nNote that when the tseries objects are not stored within a database\n(struct), the expression will evaluate correctly:\n\n\\begin{verbatim}\nx = tseries(1:10,1);\ny = x;\n\nT = trec(2:10);\nx{T} = 0.8*y{T-1}; % Evaluates correctly.\n\\end{verbatim}\n\n\\paragraph{Workaround}\\label{workaround}\n\nTo evaluate the expression correctly, simply apply any kind of operator\nor function to the tseries \\texttt{d.y} before it enters the\ntime-recursive expression. Below are examples of some simple\nmanipulations that do the job without changing the tseries \\texttt{d.y}:\n\n\\begin{verbatim}\nd = struct();\nd.x = tseries(1:10,1);\nd.y = 1*d.x;\n\\end{verbatim}\n\nor\n\n\\begin{verbatim}\nd = struct();\nd.x = tseries(1:10,1);\nd.y = d.x{:};\n\\end{verbatim}\n\nor\n\n\\begin{verbatim}\nd = struct();\nd.x = tseries(1:10,1);\nd.y = d.x;\nd.y = d.y + 0;\n\\end{verbatim}\n\n\n\n", "meta": {"hexsha": "e315d2fb6777dcc49ff26761580e4760b3e0ae86", "size": 4905, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/trec/Contents.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/trec/Contents.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/trec/Contents.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 27.5561797753, "max_line_length": 102, "alphanum_fraction": 0.7429153925, "num_tokens": 1439, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.5706309987706074}}
{"text": "\\section{Partially Observable Monte Carlo Planning (POMCP)}\n\\frame{\\tableofcontents[currentsection, hideothersubsections]}\n\n\\begin{frame}\n\\frametitle{POMCP: Key ideas}\nPOMCP extends \\textbf{MCTS} to \\textbf{partially observable} environments\n\\pause\n\\begin{itemize}\n    \\item sample start states from the belief state \\\\\n    (break the curse of dimensionality) \\pause\n    \\item sample histories using a black box simulator \\\\\n    (break the curse of history) \\pause\n    \\item evaluate each state in a search tree by the \\textbf{average outcome} of simulations from that state\n\\end{itemize}\n\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{POMCP: $|S|= 50$, $|A|=2$, $|Z|=2$, {\\small no intermediate rewards}}\n\\begin{columns}\n  \\column{0.5\\textwidth}\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.3,trim={0 0 30cm 0},clip]{pomcp_illustration}\n    \\end{figure}\n  \\column{0.5\\textwidth}\n    \\begin{itemize}\n        \\item construct a search tree from multiple simulations, and\n        \\item evaluate each history by its mean return\n    \\end{itemize}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{POMCP: $|S|= 50$, $|A|=2$, $|Z|=2$, {\\small no intermediate rewards}}\n\\begin{columns}\n  \\column{0.5\\textwidth}\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.3,trim={20cm 0 15cm 0},clip]{pomcp_illustration}\n    \\end{figure}\n  \\column{0.5\\textwidth}\n    \\begin{itemize}\n        \\item select a real action $a$, and\n        \\item observe a real observation $o$\n    \\end{itemize}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{POMCP: $|S|= 50$, $|A|=2$, $|Z|=2$, {\\small no intermediate rewards}}\n\\begin{columns}\n  \\column{0.5\\textwidth}\n    \\begin{figure}\n        \\centering\n        \\includegraphics[scale=0.3,trim={35cm 0 0 0},clip]{pomcp_illustration}\n    \\end{figure}\n  \\column{0.5\\textwidth}\n    \\begin{itemize}\n        \\item prune the tree and\n        \\item begin a new search from the updated history $hao$\n    \\end{itemize}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}\n\\frametitle{POMCP: algorithm}\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.4]{pomcp_algo}\n\\end{figure}\n\\end{frame}\n", "meta": {"hexsha": "1c5f4d55a6bace4fd8fed4cba4eee35bb6c72260", "size": 2142, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "talk/tor/online-pomdp-planning/src/pomcp.tex", "max_stars_repo_name": "tttor/robot-foundation", "max_stars_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "talk/tor/online-pomdp-planning/src/pomcp.tex", "max_issues_repo_name": "tttor/robot-foundation", "max_issues_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "talk/tor/online-pomdp-planning/src/pomcp.tex", "max_forks_repo_name": "tttor/robot-foundation", "max_forks_repo_head_hexsha": "779b0d9583fe0f4c582f03b808dd2b7027088493", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.3424657534, "max_line_length": 109, 
"alphanum_fraction": 0.6778711485, "num_tokens": 695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7341195152660688, "lm_q1q2_score": 0.5706309973019469}}
{"text": "% Chapter Template\n\n\\chapter{Approaches} % Main chapter title\n\n\\label{ch:03} % Change X to a consecutive number; for referencing this chapter elsewhere, use \\ref{ChapterX}\n\n%----------------------------------------------------------------------------------------\n%\tSECTION 1\n%----------------------------------------------------------------------------------------\n\n%% What is surface normal\n\\label{sec:surface-normal}\nIn three-dimensional geometry, a surface normal at the point $ P $ is a vector $ n $ perpendicular to the tangent plane of the surface at point $ P $. The length of a normal is usually one, with a sign to distinguish the two sides (interior or exterior).%% add a picture to illustrate\n\n\n% ----------------------- neighbor based method -------------------------------------------\n\\section{Neighbor based surface normal estimation}\n\nFor a point $ P $ on a surface, its normal $ n $ is a vector perpendicular to the tangent plane $ \\Pi $ of the surface at this point $ P $. As long as we find $ k $ vectors $ \\textbf{v}_1, ..., \\textbf{v}_k \\in \\mathbb{R}^3 $ on the tangent plane, the normal can be calculated immediately from the equation $ \\textbf{v} \\cdot \\textbf{n} = 0 $.\n\nLet $ P_1, ..., P_k \\in \\mathbb{R}^3 $ be $ k $ neighbors of point $ P $ in the point cloud. In order to find the normal $ n $, we can assume the neighbor points and $ P $ are in the same tangent plane. Then \n\\begin{equation}\n\t\\textbf{v}_i = P_i - P \\quad \\text{for} \\ 1 \\le i \\le k\n\\end{equation}\nare $ k $ vectors on the tangent plane. Since they are all perpendicular to the normal $ n $, we have \n\\begin{equation}\n\t\\textbf{v}_i \\cdot \\textbf{n} = 0 \\quad \\text{for} \\ 1 \\le i \\le k\n\\end{equation}\nThe equation system can be further simplified as \n\\begin{equation}\\label{eq:normal-overdetermined-equation}\n\tM \\cdot \\textbf{n} = 0\n\\end{equation}\nwhere $ M\\in \\mathbb{R}^{k\\times 3} $ denotes the matrix whose rows are the $ \\textbf{v}_i $.\nIn order to avoid the trivial solution, one more constraint should be added,\n\\[ \\| \\bn \\|_2^2 = 1, \\]\nwhich also forces the normal to be a unit vector.\n\nIn order to calculate a valid normal, at least 3 points are required. For the sake of robustness, more points can be used to reduce the measurement error. In this case, the equation system is over-determined, and it can be converted to the following optimization problem\n\\begin{equation}\n\t\\begin{array}{rrclcl}\n\t\t\\displaystyle \\min & \\multicolumn{3}{l}{\\| M  \\bn \\|^2} \\\\\n\t\t\\\\\n\t\t\\textrm{s.t.} & \\| \\bn \\|^2 & = & 1 \n\t\\end{array}\n\\end{equation}\nwhich can be solved by singular value decomposition (SVD). Let $ M=U\\Sigma V^T $ be the decomposition; the solution, i.e.\\ the normal, is the last column of $ V $.\n\nAt last, all the normals should point to the viewpoint $ S $; thus the direction of a normal should be inverted if \n\\begin{equation}\\label{eq:normal-invertion}\n\t\\textbf{n}\\cdot (P-S)>0\n\\end{equation}
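\n\nTo make this procedure concrete, the following minimal NumPy sketch (our own illustration, not code from the thesis; the function and argument names are ours) builds $ M $ from the neighbors, takes the last right-singular vector, and orients it toward the viewpoint $ S $:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef estimate_normal(p, nbrs, viewpoint):\n    # rows of M are the tangent vectors v_i = P_i - P\n    M = np.asarray(nbrs) - np.asarray(p)\n    # the last right-singular vector minimizes ||M n|| s.t. ||n|| = 1\n    _, _, Vt = np.linalg.svd(M)\n    n = Vt[-1]\n    # invert the normal if it points away from the viewpoint S\n    if np.dot(n, np.asarray(p) - np.asarray(viewpoint)) > 0:\n        n = -n\n    return n\n\\end{verbatim}\n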
\n\n\\newpage\n% ---------------------------- gated convolution neural network -------------------------------------------------\n\\section{Gated Convolution neural network for surface normal estimation}\n\nThe standard convolution layer computes\n\\begin{equation}\n\tO(x,y) = \\sum_{i}^{k}\\sum_{j}^{k} W(i,j) \\cdot I(x-i,y-j)\n\\end{equation}\nwhere each filter is applied as a sliding window that walks through the whole matrix and calculates the output matrix. Every entry in the matrix counts in the operation. This is reasonable for image processing tasks with fully dense input, since no missing pixels exist. \nHowever, depth maps and point clouds are only semi-dense. The valid and invalid pixels will be treated equally if we still apply standard convolution layers. The aim of the network is not to learn the pattern of the noise; noise with constantly changing patterns will confuse the network and make the normal inference fail, so a mask is required to distinguish the two kinds of pixels. \n\n\n% why we need a mask\n\\cite{pncnn0} uses a binary mask to indicate valid pixels, and further uses normalized convolution to predict the output. The normalized convolution is defined as follows\n\n\\begin{equation}\n\tO(x,y) = \n\t\\begin{cases}\n\t\t\\dfrac{\\sum_i^k\\sum_j^k W(i,j) \\cdot I(x-i,y-j) \\cdot M(x-i,y-j)}{\\sum_i^k\\sum_j^k W(i,j) \\cdot M(x-i,y-j)}, & \\text{if}\\ \\sum_{i}^k\\sum_{j}^k M(x-i,y-j)>0 \\\\\n\t\t0, & \\text{otherwise}\n\t\\end{cases}\n\\end{equation}\n\nwhere $ k $ is the kernel size, $ (x,y) $ is the position in the input, $ (i,j) $ is the displacement in the kernel, and $ M $ is the corresponding mask. A binary mask uses 1 to indicate valid pixels and 0 otherwise.\n\nThe normalized convolution layer adds weights to the mask. However, an initialization for the mask is still required, and the propagation of the mask remains a tricky task. \n\\subsection{Gated Convolution}\n\n% introduce gconv\n\\cite{gconv} proposed gated convolution for the image inpainting task. \n\nThe structure is shown in Figure \\ref{fig:gconvLayer}. Instead of using a mask as input to indicate valid pixels, it employs a standard convolution layer to learn this mask directly from data. The valid pixels are then activated by a Sigmoid function. Then it applies element-wise multiplication with the feature map. 
\subsection{Gated Convolution}

% introduce gconv
\cite{gconv} proposed gated convolution for the image inpainting task. 

The structure is shown in Figure \ref{fig:gconvLayer}. Instead of taking a mask as input to indicate valid pixels, it employs a standard convolution layer to learn this mask directly from the data. The learned mask is activated by a Sigmoid function and then multiplied element-wise with the feature map. 
Formally, the gated convolution is described as follows, for a layer with input size $ (N, C_{in}, H, W) $ and output size $ (N, C_{out}, H_{out}, W_{out}) $:
\begin{equation}\label{gconv}
	o(N_i, C_{o_j}) = \sigma(\sum_{k=0}^{C_{in}-1}w_g(C_{o_j}, k) \star i(N_i,k) + b_g(C_{o_j})) * 
	\phi (\sum_{k=0}^{C_{in}-1}w_f(C_{o_j}, k) \star i(N_i,k) + b_f(C_{o_j}))
\end{equation}
where $ \phi $ is the LeakyReLU function and $ \sigma $ is the Sigmoid function, so the gating values lie in the range $ [0,1] $. $ \star $ is the valid 2D cross-correlation operator, $ N $ is the batch size, $ C $ denotes the number of channels, $ H $ is the height of the input planes in pixels, and $ W $ is the width in pixels. $ w(C_{o_j},k) $ denotes the weight of the $ j $-th output channel corresponding to the $ k $-th input channel, $ i(N_i, k) $ denotes the input of the $ i $-th batch element for the $ k $-th input channel, and $ b(C_{o_j}) $ denotes the bias of the $ j $-th output channel.


% Gated Convolution Layer
\begin{figure}[h!]
	\centering
	\begin{tikzpicture}
		\tikzstyle{rect} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, draw=black, fill=blue!20]
		\tikzstyle{arrow} = [thick,->,>=stealth]
		\node (output) [rect] {Output};
		\node (oplus) [below of=output, yshift=-.5cm] {$\Huge\oplus $};
		\node (LeakyReLU) [rect,below of=oplus, yshift=-1cm] {Feature};
		\node (sigmoid) [rect, below of=oplus, yshift=-1cm, xshift=-4cm] {Mask};
		
		\node (conv1) [rect,below of=sigmoid, yshift=-1cm] {Mask};
		\node (conv2) [rect,below of=LeakyReLU, yshift=-1cm] {Feature};
		\node (input) [below of=oplus] at (0,-8) {\includegraphics[width=.25\textwidth]{./Figures/train-input.png}};
		
		\draw [arrow] (oplus) -- (output);
		\draw [arrow] (sigmoid) |- (oplus);
		\draw [arrow] (LeakyReLU) -- (oplus);
		\draw [arrow] (conv2) --  node [text width=2.5cm, midway, right=1em]{LeakyReLU} (LeakyReLU);
		\draw [arrow] (conv1) --  node [text width=2.5cm, midway, right=1em]{Sigmoid} (sigmoid);
		\draw [arrow] (input) -|  node [text width=2.5cm, midway, below=1em]{Conv2D} (conv1);
		\draw [arrow] (input) -- node [text width=2.5cm, midway, right=1em]{Conv2D} (conv2);
	\end{tikzpicture}
	\caption{Gated Convolution Layer, where $ \oplus $ denotes element-wise multiplication.}
	\label{fig:gconvLayer}
\end{figure}

%\section{Canny Edge Detection for Detail Enhancement}
%The inaccuracy part is usually concentrate in the coarse surface or drastic changed surface parts of the object. The corresponding part can be extracted separately via edge detector algorithms, like Canny Edge detector. Feed the edges to a special net for normal prediction might improve the accuracy further. 

\subsection{Architecture}
\label{sec:architecture}
Based on the gated convolution layer introduced above, the architecture roughly follows the UNet proposed by \cite{unet}, as shown in Figure \ref{fig:gcnn-archi}. 

Instead of using pooling layers for down- and upsampling, gated convolution layers with stride $ (2,2) $ are used. The gated convolution uses a Sigmoid function for the gating layer and a LeakyReLU function for the feature layer. All layers are gated convolution layers with the exception of the last two, which are standard convolution layers that scale the output into the range $\left[-1, 1\right]$. Other than the last two layers, which use $ 1\times 1 $ kernels, all gated convolution layers use $ 3\times 3 $ kernels with $ 1\times1 $ padding. 
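The following PyTorch-style sketch shows one gated convolution layer as described by Equation \ref{gconv}; the class name \texttt{GatedConv2d} is an illustrative choice, not the reference implementation of \cite{gconv}.

\begin{verbatim}
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch gated by a learned soft mask.
    Sketch of the gated convolution equation, not the original code."""
    def __init__(self, c_in, c_out, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # w_f, b_f: feature branch; w_g, b_g: gating branch.
        self.feature = nn.Conv2d(c_in, c_out, kernel_size, stride, padding)
        self.gate = nn.Conv2d(c_in, c_out, kernel_size, stride, padding)
        self.phi = nn.LeakyReLU()   # feature activation
        self.sigma = nn.Sigmoid()   # gating activation, values in [0, 1]

    def forward(self, x):
        # Element-wise product of activated features and the learned mask.
        return self.phi(self.feature(x)) * self.sigma(self.gate(x))
\end{verbatim}

With stride $ (2,2) $ in both branches, the same block also serves as the downsampling layer of the architecture.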
The input is a 3D vertex map of size $ 512\times512\times3 $, and the output is a $ 512\times512\times3 $ normal map of the same resolution. The input is downsampled 3 times; each scale has 3 gated convolution layers, of which the third has stride 2. The upsampling part interpolates the feature maps 3 times, with 1 gated convolution layer in each scale. 

It keeps the skip connections of UNet to retain the fine detail features. 
 
% GCNN Architecture
\begin{figure}[!h]
	\centering
	%% https://tex.stackexchange.com/questions/12020/what-is-the-easiest-way-to-draw-a-3d-cube-with-tikz
	\begin{tikzpicture}
		%% -------------------------------------- parameters ------------------------------------------------
		\pgfmathsetmacro{\vdist}{0.4}
		
		\pgfmathsetmacro{\boxsizea}{3}	%% width 512
		\pgfmathsetmacro{\boxsizeb}{1.5}	%% width 256
		\pgfmathsetmacro{\boxsizec}{1}	%% width 128
		\pgfmathsetmacro{\boxsized}{0.7}	%% width 64
		
		
		\pgfmathsetmacro{\boxwidthd}{0.1}	%% width 1
		\pgfmathsetmacro{\boxwidtha}{0.2}	%% width 3
		\pgfmathsetmacro{\boxwidthb}{\boxwidtha*1.3}	%% width 32
		\pgfmathsetmacro{\boxwidthc}{\boxwidtha*3}		%% width 64
		
		\pgfmathsetmacro{\convwshift}{9}
		\pgfmathsetmacro{\preprocessingshift}{4}
		\pgfmathsetmacro{\labelshift}{1.5}
		\pgfmathsetmacro{\gconvwshift}{0}
		
		\pgfmathsetmacro{\secrowshift}{-6}
		
		\pgfmathsetmacro{\convrowstart}{4}
		\pgfmathsetmacro{\secondrowstart}{4}
		
		%% https://www.tug.org/pracjourn/2007-4/walden/color.pdf
		\definecolor{gconvcolor}{rgb}{0.5,0.7,0.7}
		\definecolor{convcolor}{rgb}{0.5,0.7,0.3}
		\definecolor{gconvdilatedcolor}{rgb}{0.6,0,0.3}
		
		
		%% ---------------------------------- preprocessing --------------------------------------------
		%%  img_in 1x512x512
		\pgfmathsetmacro{\disttimes}{3.5}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-input.png}};
		\node[text width=3.5cm] at (\vdist*\disttimes+1,\yschift-1.5) {3D Vertex};
		
		\draw [-stealth](1.45,\yschift-1.8) -- (1.45,\yschift-2.6);
		
		\draw [-stealth]  (\vdist*\disttimes+1.6,\yschift) -- (\vdist*\disttimes+1.1,\yschift);
		
		\pgfmathsetmacro{\disttimes}{10}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-depth-noise}};
		\node[text width=3.5cm] at (\vdist*\disttimes+1,\yschift-1.5) {Add Noise};
		
		\draw [-stealth] (\vdist*\disttimes+1.6,\yschift) -- (\vdist*\disttimes+1.1,\yschift);
		
		\pgfmathsetmacro{\disttimes}{16.5}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-depth.png}};
		\node[text width=2cm] at (\vdist*\disttimes+0.2,\yschift-1.5) {Depth Map};
		
		
		%% ----------------------------------- output normal map -------------------------------------
		\pgfmathsetmacro{\disttimes}{25}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at 
(\\vdist*\\disttimes,\\yschift)\n\t\t{\\includegraphics[width=.15\\textwidth]{./Figures/train-normal-gt.png}};\n\t\t\\node[text width=2.5cm] at (\\vdist*\\disttimes,\\yschift-1.5) {Normal Map};\n\t\t\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+0.2,\\yschift-2.7) -- (\\vdist*\\disttimes+0.2,\\yschift-1.8);\n\t\t\n\t\t\n\t\t%% ------------------------------------------- vertex input ----------------------------------------------\n\t\t%% \td_in\t\t\t\t\t\t\t3x512x512\n\t\t%%\tdconv1: \td_in-->x1 \t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{1}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\n\t\t\\node[text width=1cm] at (\\vdist*\\disttimes+0.1,\\yschift-3.5) {32};\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%%\tdconv2:\t\tx1-->x1\t\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{2}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%%\tdconv3:\t\tx1-->x1\t\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{3}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.2,\\yschift-3.2) -- (\\vdist*\\disttimes-0.2,\\yschift-4);\n\t\t\\draw (\\vdist*\\disttimes-0.2,\\yschift-4) -- (\\vdist*\\disttimes+6.5,\\yschift-4);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+6.5,\\yschift-4) -- (\\vdist*\\disttimes+6.5,\\yschift-3.2);\n\t\t\n\t\t\n\t\t%% downsample 1\n\t\t%%\tdconv4:\t\tx1-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{4}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- 
++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{5}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{6}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.6) -- (\\vdist*\\disttimes-0.1,\\yschift-2.7);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-2.7) -- (\\vdist*\\disttimes+4.2,\\yschift-2.7);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+4.2,\\yschift-2.7) -- (\\vdist*\\disttimes+4.2,\\yschift-1.7);\n\t\t\n\t\t\n\t\t%% downsample 2\n\t\t%%\tdconv4:\t\tx2-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{7}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{8}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{9}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% 
size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.2) -- (\\vdist*\\disttimes-0.1,\\yschift-1.8);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.8) -- (\\vdist*\\disttimes+1.8,\\yschift-1.8);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+1.8,\\yschift-1.8) -- (\\vdist*\\disttimes+1.8,\\yschift-1.2);\n\t\t\n\t\t\n\t\t%% downsample 3\n\t\t%%\tdconv4:\t\tx3-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{10}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{11}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{12}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%\t\t%% dilated\n\t\t%\t\t%%\tdilated1:\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{13}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] 
(\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t%%\tdilated2:\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{14}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t%%\tdilated3:\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{15}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t%%\tdilated4:\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{16}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvdilatedcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\n\t\t%\t\t\n\t\t%\t\t\n\t\t%\t\t\n\t\t%\t\t%%\tdconv2:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{17}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t%%\tdconv3:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t%\t\t\\pgfmathsetmacro{\\disttimes}{18}\n\t\t%\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 
64\n\t\t%\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t%\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t%\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% ----------------------------------- upsampling -------------------------------------\n\t\t\n\t\t%% upsample 1\n\t\t%% interpolate\tx4-->x3_us\t\t\t64x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{13}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% concatenate x3_us\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% uconv1\t\tx3_us-->x3\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{15}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% upsample 2\n\t\t%% interpolate\tx3-->x2_us\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{16}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- 
++(\\boxwidth,0,0) -- cycle;\n\t\t%% concatenated x2\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% uconv2 \t\tx2,x2_us-->x2\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{18}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% upsample 3\n\t\t%% interpolate\tx2-->x1_us\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{19}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%% concatenated x1 and x1_us\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% uconv3\t\tx1,x1_us-->x1\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{21}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- 
++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% conv1\t\tx1-->xout\t\t\t3x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{22}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\node[text width=1cm] at (\\vdist*\\disttimes+0.2,\\yschift-3.3) {32};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\n\t\t%% conv2\t\txout-->xout\t\t\t3x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{23}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthd}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\node[text width=1cm] at (\\vdist*\\disttimes+0.5,\\yschift-3.3) {3};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% ------------------------------------- label ----------------------------------------------\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{10}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Conv2d};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{15}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Gated};\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\n\t\\end{tikzpicture}\n\t\n\t\\caption{Basic Normal Neural Network model based on Gated Convolution layer and UNet architecture. 
}
	\label{fig:gcnn-archi}
\end{figure}

\subsection{Loss Function}

\subsubsection{Perceptual Loss}

\cite{perceptual-loss} proposed the perceptual loss.

\subsubsection{L2 Loss}
The loss function, based on the mean square error, is described as follows: 

\begin{equation}\label{gcnn-loss}
	\begin{array}{ll}
		l(x,y) = L &= \{l_1, ..., l_N\}^T\\ 
		l_n &= \operatorname{mean}\left(Mask_{ol}\cdot(x_n - y_n)^2\cdot p + (x_n - y_n)^2\cdot Mask_{nol}\right)\\
	\end{array}
\end{equation}

where $ x $ is the input, $ y $ is the target, and $ N $ is the batch size. $ Mask_{ol} $ is the mask of the outlier pixels, $ Mask_{nol} $ is the mask of the remaining (non-outlier) pixels, and $ p $ is the penalty on the outliers, set to 1.4.
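A minimal PyTorch sketch of this masked L2 loss is given below; the function name \texttt{masked\_l2\_loss} is an illustrative choice, assuming the outlier mask is given per pixel.

\begin{verbatim}
import torch

def masked_l2_loss(x, y, outlier_mask, p=1.4):
    """Mean-square loss with an extra penalty p on outlier pixels.
    x: prediction, y: target, outlier_mask: 1 on outliers, 0 elsewhere."""
    sq = (x - y) ** 2
    non_outlier_mask = 1.0 - outlier_mask
    # Outlier pixels are weighted by the penalty p, the rest by 1.
    return torch.mean(outlier_mask * sq * p + non_outlier_mask * sq)
\end{verbatim}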
\newpage
\section{Guided normal inference using GCNN}


\subsection{Image Guided normal inference}
The normal inference can be guided by an RGB or gray-scale image. Since the image is captured by a passive method, it is fully dense compared to the depth map and hence provides a complete view of the scene. 
The architecture is shown in Figure \ref{fig:ng-archi}. The upper branch is similar to the GCNN model but has 4 additional concatenation layers; furthermore, 1 gated convolution layer is added before the concatenation with the image branch. The image branch takes a single grayscale image as input, followed by 3 downsamplings with 3 standard convolution layers in each scale. In the upsampling part, the feature map is upsampled 3 times and concatenated with the last layer in the downsampling part before interpolation.


% architecture of NG
\begin{figure}[!h]
	\centering
	%% https://tex.stackexchange.com/questions/12020/what-is-the-easiest-way-to-draw-a-3d-cube-with-tikz
	\begin{tikzpicture}
		%% -------------------------------------- parameters ------------------------------------------------
		\pgfmathsetmacro{\vdist}{0.4}
		
		\pgfmathsetmacro{\boxsizea}{3}	%% width 512
		\pgfmathsetmacro{\boxsizeb}{1.5}	%% width 256
		\pgfmathsetmacro{\boxsizec}{1}	%% width 128
		\pgfmathsetmacro{\boxsized}{0.7}	%% width 64
		
		
		\pgfmathsetmacro{\boxwidthd}{0.05}	%% width 1
		\pgfmathsetmacro{\boxwidtha}{0.1}	%% width 3
		\pgfmathsetmacro{\boxwidthb}{\boxwidtha*2}	%% width 32
		\pgfmathsetmacro{\boxwidthc}{\boxwidtha*3}		%% width 64
		
		\pgfmathsetmacro{\preprocessingshift}{13}
		\pgfmathsetmacro{\labelshift}{10.5}
		\pgfmathsetmacro{\gconvwshift}{9}
		\pgfmathsetmacro{\imgshift}{3.5}
		\pgfmathsetmacro{\convwshift}{0}
		
		\pgfmathsetmacro{\secrowshift}{-6}
		
		\pgfmathsetmacro{\convrowstart}{4}
		\pgfmathsetmacro{\secondrowstart}{4}
		
		%% https://www.tug.org/pracjourn/2007-4/walden/color.pdf
		\definecolor{gconvcolor}{rgb}{0.5,0.8,0.8}
		\definecolor{convcolor}{rgb}{0.9,0.5,0.2}
		\definecolor{interpolatecolor}{rgb}{0.3,0.9,0.5}
		\definecolor{catcolor}{rgb}{0.9,0.3,0.9}
		
		%% ---------------------------------- preprocessing --------------------------------------------
		%%  img_in 1x512x512
		\pgfmathsetmacro{\disttimes}{3.5}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-input.png}};
		\node[text width=3.5cm] at (\vdist*\disttimes+1,\yschift-1.5) {3D Vertex};
		
		\draw [-stealth](1.45,\yschift-1.8) -- (1.45,\yschift-2.6);
		
		\draw [-stealth]  (\vdist*\disttimes+1.6,\yschift) -- (\vdist*\disttimes+1.1,\yschift);
		
		\pgfmathsetmacro{\disttimes}{10}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-depth-noise.png}};
		\node[text width=3.5cm] at (\vdist*\disttimes+1,\yschift-1.5) {Add Noise};
		
		\draw [-stealth] (\vdist*\disttimes+1.6,\yschift) -- (\vdist*\disttimes+1.1,\yschift);
		
		\pgfmathsetmacro{\disttimes}{16.5}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-depth.png}};
		\node[text width=2cm] at (\vdist*\disttimes+0.2,\yschift-1.5) {Depth Map};
		
		%% ------------------------------------ img preprocessing -------------------------------------
		\pgfmathsetmacro{\disttimes}{3.5}
		\pgfmathsetmacro{\yschift}{\imgshift}
		\node[inner sep=0pt] (image) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-image.png}};
		\node[text width=3.5cm] at (\vdist*\disttimes+1,\yschift-1.5) {Image};
		
		\draw [-stealth](1.45,\yschift-1.8) -- (1.45,\yschift-2.2);
		
		
		
		%% ----------------------------------- output normal map -------------------------------------
		\pgfmathsetmacro{\disttimes}{32.5}
		\pgfmathsetmacro{\yschift}{\preprocessingshift}
		\node[inner sep=0pt] (depthmap) at (\vdist*\disttimes,\yschift)
		{\includegraphics[width=.15\textwidth]{./Figures/train-normal-gt.png}};
		\node[text width=2.5cm] at (\vdist*\disttimes,\yschift-1.5) {Normal Map};
		
		\draw [-stealth] (\vdist*\disttimes+0.2,\yschift-2.7) -- (\vdist*\disttimes+0.2,\yschift-1.8);
		
		
		%% ------------------------------------------- vertex input ----------------------------------------------
		%% 	d_in							3x512x512
		%%	dconv1: 	d_in-->x1 			32x512x512
		\pgfmathsetmacro{\disttimes}{1}	%% width 32
		\pgfmathsetmacro{\boxsize}{\boxsizea}	%% size 512
		\pgfmathsetmacro{\boxwidth}{\boxwidthb}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift}
		\node[text width=1cm] at (\vdist*\disttimes+0.1,\yschift-3.5) {32};
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;
		
		%%	dconv2:		x1-->x1				32x512x512
		\pgfmathsetmacro{\disttimes}{2}	%% width 32
		\pgfmathsetmacro{\boxsize}{\boxsizea}	%% size 512
		\pgfmathsetmacro{\boxwidth}{\boxwidthb}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift}
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- 
cycle;\n\t\t\n\t\t%%\tdconv3:\t\tx1-->x1\t\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{3}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-3) -- (\\vdist*\\disttimes-0.1,\\yschift-3.7);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-3.7) -- (\\vdist*\\disttimes+9.2,\\yschift-3.7);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+9.2,\\yschift-3.7) -- (\\vdist*\\disttimes+9.2,\\yschift-3);\n\t\t\n\t\t%% downsample 1\n\t\t%%\tdconv4:\t\tx1-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{4}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{5}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{6}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.6) -- (\\vdist*\\disttimes-0.1,\\yschift-2.7);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-2.7) -- (\\vdist*\\disttimes+6,\\yschift-2.7);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+6,\\yschift-2.7) -- (\\vdist*\\disttimes+6,\\yschift-1.7);\n\t\t\n\t\t\n\t\t%% downsample 
2\n\t\t%%\tdconv4:\t\tx2-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{7}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{8}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{9}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.2) -- (\\vdist*\\disttimes-0.1,\\yschift-1.8);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.8) -- (\\vdist*\\disttimes+2.8,\\yschift-1.8);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+2.8,\\yschift-1.8) -- (\\vdist*\\disttimes+2.8,\\yschift-1.2);\n\t\t\n\t\t\n\t\t%% downsample 3\n\t\t%%\tdconv4:\t\tx3-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{10}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{11}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- 
++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{12}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% concatenated x4 and x_img_4\n\t\t\\pgfmathsetmacro{\\disttimes}{13}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-1}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% ----------------------------------- upsampling -------------------------------------\n\t\t\n\t\t%% upsample 1\n\t\t%% interpolate\tx3_us-->x3_mus\t\t\t64x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{14}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t% uconv 1\n\t\t\\pgfmathsetmacro{\\disttimes}{15}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% concatenated x1 and x1_us\n\t\t\\pgfmathsetmacro{\\disttimes}{15.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 
32
		\pgfmathsetmacro{\yschift}{\gconvwshift-0.9}	%% width 32
		\draw[black, fill=catcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=catcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=catcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;
		
		%% uconv2		x3_us-->x3			32x128x128
		\pgfmathsetmacro{\disttimes}{17}
		\pgfmathsetmacro{\boxsize}{\boxsizec}	%% size 128
		\pgfmathsetmacro{\boxwidth}{\boxwidthb}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift-0.9}	%% width 32
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;
		
		%% upsample 2
		%% cat x3, x_img_3-->x3			64x256x256
		\pgfmathsetmacro{\disttimes}{18}
		\pgfmathsetmacro{\boxsize}{\boxsizec}	%% size 256
		\pgfmathsetmacro{\boxwidth}{\boxwidthc}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift-0.9}	%% width 32
		\draw[black, fill=catcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=catcolor] (\vdist*\disttimes,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=catcolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;
		
		%% interpolate x2_us 64x256x256
		\pgfmathsetmacro{\disttimes}{18.5}
		\pgfmathsetmacro{\boxsize}{\boxsizeb}	%% size 256
		\pgfmathsetmacro{\boxwidth}{\boxwidthc}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift-0.6}	%% width 32
		\draw[black, fill=interpolatecolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=interpolatecolor] (\vdist*\disttimes,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=interpolatecolor] (\vdist*\disttimes,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;		
		
		%% uconv3 x2_us -->x2_mus
		\pgfmathsetmacro{\disttimes}{19}
		\pgfmathsetmacro{\boxsize}{\boxsizeb}	%% size 256
		\pgfmathsetmacro{\boxwidth}{\boxwidthb}	%% width 32
		\pgfmathsetmacro{\yschift}{\gconvwshift-0.6}	%% width 32
		\draw[black, fill=gconvcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,-\boxsize,0) -- ++(\boxwidth,0,0) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(0,0,-\boxsize) -- ++(0,-\boxsize,0) -- ++(0,0,\boxsize) -- cycle;
		\draw[black, fill=gconvcolor] (\vdist*\disttimes+\boxwidth,\yschift,0) -- ++(-\boxwidth,0,0) -- ++(0,0,-\boxsize) -- ++(\boxwidth,0,0) -- cycle;
		
		% cat x2, x2_mus 
-->x2\n\t\t\\pgfmathsetmacro{\\disttimes}{20}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% uconv4 \t\tx2,x2_us-->x2\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{21.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% upsample 3\n\t\t%% cat x2, x_img_2-->x2\t\t\t64x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{22.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% interpolate\tx2-->x1_us\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{23.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% uconv5 x1_us-->x1_mus\n\t\t\\pgfmathsetmacro{\\disttimes}{24}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- 
++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t%% cat\t\tx1,x1_mus-->x1\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{26}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% uconv6\t\tx1\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{27}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% cat\t\tx1,x1_mus-->x1\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{28}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% conv1\t\txout\t\t\t3x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{29}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidtha}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%% conv2\t\txout\t\t\t3x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{30}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidtha}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\gconvwshift}\t%% width 32\n\t\t\\node[text width=1cm] at (\\vdist*\\disttimes+0.3,\\yschift-3.3) {3};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] 
(\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t% -------------------------- image branch ----------------------------------------------\n\t\t\n\t\t%% ------------------------------------------- image input ----------------------------------------------\n\t\t%% \td_in\t\t\t\t\t\t\t3x512x512\n\t\t%%\tdconv1: \td_in-->x1 \t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{1}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\n\t\t\\node[text width=1cm] at (\\vdist*\\disttimes+0.1,\\yschift-3.5) {32};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%%\tdconv2:\t\tx1-->x1\t\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{2}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%%\tdconv3:\t\tx1-->x1\t\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{3}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-3) -- (\\vdist*\\disttimes-0.1,\\yschift-3.8);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-3.8) -- (\\vdist*\\disttimes+6.8,\\yschift-3.8);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+6.8,\\yschift-3.8) -- (\\vdist*\\disttimes+6.8,\\yschift-3);\n\t\t\n\t\t\n\t\t%% downsample 1\n\t\t%% dconv4:\t\tx1-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{4}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- 
++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{5}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx2-->x2\t\t\t\t32x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{6}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.5}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.5) -- (\\vdist*\\disttimes-0.1,\\yschift-2.3);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-2.3) -- (\\vdist*\\disttimes+4.3,\\yschift-2.3);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+4.3,\\yschift-2.3) -- (\\vdist*\\disttimes+4.3,\\yschift-1.6);\n\t\t\n\t\t%% downsample 2\n\t\t%%\tdconv4:\t\tx2-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{7}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{8}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx3-->x3\t\t\t\t32x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{9}\t%% width 32\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 
32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.8}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1) -- (\\vdist*\\disttimes-0.1,\\yschift-1.5);\n\t\t\\draw (\\vdist*\\disttimes-0.1,\\yschift-1.5) -- (\\vdist*\\disttimes+1.8,\\yschift-1.5);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+1.8,\\yschift-1.5) -- (\\vdist*\\disttimes+1.8,\\yschift-1.1);\n\t\t\n\t\t\n\t\t%% downsample 3\n\t\t%%\tdconv4:\t\tx3-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{10}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-1}\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv2:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{11}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-1}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t%%\tdconv3:\t\tx4-->x4\t\t\t\t32x64x64\n\t\t\\pgfmathsetmacro{\\disttimes}{12}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 64\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-1}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+0.1,\\yschift+0.3) -- (\\vdist*\\disttimes+0.1,\\yschift+8);\n\t\t%% ----------------------------------- upsampling -------------------------------------\n\t\t\n\t\t%% upsample 1\n\t\t%% interpolate\tx3_us-->x3_mus\t\t\t64x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{13}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- 
++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% concatenate\tx_img_3, x3_img_us\t\t\t64x128x128\n\t\t\\pgfmathsetmacro{\\disttimes}{14}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t% img_uconv 1\n\t\t\\pgfmathsetmacro{\\disttimes}{15}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizec}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.9}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes+0.2,\\yschift+0.3) -- (\\vdist*\\disttimes+0.2,\\yschift+5.7);\n\t\t\\draw (\\vdist*\\disttimes+0.2,\\yschift+5.7) -- (\\vdist*\\disttimes+0.9,\\yschift+5.7);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+0.9,\\yschift+5.7) -- (\\vdist*\\disttimes+0.9,\\yschift+8);\n\t\t\n\t\t\n\t\t%% upsample 2\n\t\t%% interpolate x_img_3-->x2_img_us 64x256x256\n\t\t\\pgfmathsetmacro{\\disttimes}{15.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\t\t\n\t\t\n\t\t%% img_uconv2 x_img_2, x2_img_us -->x_img_2\n\t\t\\pgfmathsetmacro{\\disttimes}{16}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) 
-- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t% img_uconv 2\n\t\t\\pgfmathsetmacro{\\disttimes}{17.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizeb}\t%% size 128\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift-0.6}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\n\t\t\\draw (\\vdist*\\disttimes+0.2,\\yschift+0.3) -- (\\vdist*\\disttimes+0.2,\\yschift+5.3);\n\t\t\\draw (\\vdist*\\disttimes+0.2,\\yschift+5.3) -- (\\vdist*\\disttimes+1.6,\\yschift+5.3);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+1.6,\\yschift+5.3) -- (\\vdist*\\disttimes+1.6,\\yschift+7.3);\n\t\t\n\t\t\n\t\t%% upsample 3\n\t\t%% interpolate\tx_img_1-->x1_img_us\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{19}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\t%% width 32\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% concatenate\tx_img_1, x1_img_us\t\t\t32x512x512\n\t\t\\pgfmathsetmacro{\\disttimes}{20}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\t%% width 32\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t%% img_uconv3 x1_us-->x1_mus\n\t\t\\pgfmathsetmacro{\\disttimes}{20.5}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsizea}\t%% size 256\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthb}\t%% width 32\n\t\t\\pgfmathsetmacro{\\yschift}{\\convwshift}\t%% width 32\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes+\\boxwidth,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\draw (\\vdist*\\disttimes+1.3,\\yschift+1.2) --  (\\vdist*\\disttimes+1.3,\\yschift+4.7);\n\t\t\\draw (\\vdist*\\disttimes+1.3,\\yschift+4.7) -- (\\vdist*\\disttimes+2.6,\\yschift+4.7);\n\t\t\\draw [-stealth] (\\vdist*\\disttimes+2.6,\\yschift+4.7) -- 
(\\vdist*\\disttimes+2.6,\\yschift+6);\n\t\t\n\t\t%% ------------------------------------- label ----------------------------------------------\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{9}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Conv2d};\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=convcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{13}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Gated};\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=gconvcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\n\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{17}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Interp.};\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=interpolatecolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\t\t\n\t\t\n\t\t\n\t\t\\pgfmathsetmacro{\\disttimes}{21}\n\t\t\\pgfmathsetmacro{\\boxsize}{\\boxsized}\t%% size 512\n\t\t\\pgfmathsetmacro{\\boxwidth}{\\boxwidthc}\t%% width 3\n\t\t\\pgfmathsetmacro{\\yschift}{\\labelshift}\t%% width 32\n\t\t\\node[text width=3.5cm] at (\\vdist*\\disttimes+1,\\yschift-1) {Concat.};\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,-\\boxsize,0) -- ++(\\boxwidth,0,0) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(0,0,-\\boxsize) -- ++(0,-\\boxsize,0) -- ++(0,0,\\boxsize) -- cycle;\n\t\t\\draw[black, fill=catcolor] (\\vdist*\\disttimes,\\yschift,0) -- ++(-\\boxwidth,0,0) -- ++(0,0,-\\boxsize) -- ++(\\boxwidth,0,0) -- cycle;\t\t\n\t\t\n\t\\end{tikzpicture}\n\t\n\t\\caption{Guided Gated Convolution Neural Network for normal estimation. The normal branch, shown on the upper side, takes the point cloud as input. The image branch, shown on the lower side, takes the image as input. There are four fusions in total between the two branches. 
The output is the corresponding normal map.}\n\t\\label{fig:ng-archi}\n\\end{figure}\n\n\n\n\\subsection{Add the light information}\n\nThe light direction map stores for every pixel the direction of incoming light.\nWe find a mapping $H$ such that $ l_{in} = H(v, l_{s}) $ for a pixel $ v $, where $ l_s $ is the position of the light source and $ l_{in} $ is the direction of incoming light at pixel $ v $. Iterating over all pixels, we obtain the light direction map $ L $.\n\nLet $ I $ denote the image, $ N $ the normal map, and $ L $ the light direction map.\n\nThe Lambertian reflection model states\n\\[I = \\rho (N*L)\\]\nwhere $ * $ denotes the scalar product, so\nthe albedo $ \\rho $ can be computed by\n\\[\\rho = \\frac{I}{N*L}\\]\n\nIn the network, this becomes\n\\[\\rho = \\mathit{conv2d}\\left(\\frac{I}{N*L}\\right)\\]\n\nThe corresponding loss term is\n\\[\\|I-\\rho(N*L)\\|_{F}^{2}\\]\nWith this loss term, the estimated normals are expected to be crisper than in the previous implementation.\n\nIn a first step, we should compute initial albedos from the predicted\nnormals and check that everything is correct. (The normals need to be in\nworld space again, not in tangent space, and we need to use the camera\nextrinsics $R$ and $t$ from the data file to triangulate the vertex maps\ncorrectly.)\n\n\n\n\n", "meta": {"hexsha": "1c75a220a3972e72dcf5a1d1187909aed9705dcd", "size": 79911, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/Chapters/Chapter03.tex", "max_stars_repo_name": "akweury/improved_normal_inference", "max_stars_repo_head_hexsha": "a10ed16f43362c15f2220345275be5c029f31198", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/Chapters/Chapter03.tex", "max_issues_repo_name": "akweury/improved_normal_inference", "max_issues_repo_head_hexsha": "a10ed16f43362c15f2220345275be5c029f31198", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/Chapters/Chapter03.tex", "max_forks_repo_name": "akweury/improved_normal_inference", "max_forks_repo_head_hexsha": "a10ed16f43362c15f2220345275be5c029f31198", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.6529126214, "max_line_length": 550, "alphanum_fraction": 0.6262341855, "num_tokens": 33066, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5706115988941003}}
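A minimal NumPy sketch of the initial albedo computation described in the section above (purely illustrative; the array shapes and function names are our assumptions, not the project's actual code):\n\\begin{verbatim}\nimport numpy as np\n\ndef initial_albedo(I, N, L, eps=1e-6):\n    # Per-pixel albedo rho = I / <N, L> from the Lambertian model.\n    # I: (H, W) intensities, N: (H, W, 3) unit normals,\n    # L: (H, W, 3) unit directions of incoming light.\n    ndotl = np.sum(N * L, axis=-1)     # scalar product <N, L>\n    ndotl = np.clip(ndotl, eps, None)  # clamp away from zero for stability\n    return I / ndotl\n\ndef lambertian_loss(I, rho, N, L):\n    # Squared Frobenius reconstruction error ||I - rho * <N, L>||_F^2.\n    ndotl = np.sum(N * L, axis=-1)\n    return float(np.sum((I - rho * ndotl) ** 2))\n\\end{verbatim}\n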
{"text": "\\chapter{Combining Turing Machines}\n\\label{chap:combining}\n\nRecall that Turing machines are unstructured -- the execution of a machine could continue from one state to any other state of the machine.  There is\nalso no trivial way to sequentially compose two machines.  We fix these problems and introduce \\textit{control-flow} operators, like ``if then\nelse'', ``sequential composition'', and ``while''.  Thus, we do not need to define transition functions $\\delta$ or even states $Q$ for complex\nmachines, because we only combine machines using these operators.  The transition function of the sequential composition of $M_1$ and $M_2$ is derived\nfrom the transition functions of $M_1$ and $M_2$.  But maybe more importantly, this also makes correctness of machines compositional: From the\ncorrectness of $M_1$ and $M_2$, we can conclude the correctness of the sequential composition of $M_1$ and $M_2$.  Also note that the number of\nmachine states can be \\textit{huge} for complex machines, so it is important to not refer to any concrete machine state, either in the definition or\nin the verification of a machine.\\footnote{Using this framework, we define and verify a machine with 11537 states.}\n\nWe define control-flow operators as \\textit{first-class citizens} in Coq's type theory.  As a result, we get a \\textit{shallow-embedded} language for\nprogramming multi-tape Turing machines in an imperative way.  The primitive machines defined in Chapter~\\ref{chap:basic} serve as the ``primitive\ninstructions'' of our language.  Another possible approach, called \\textit{deep embedding}, is to define a fixed syntax of a language as an inductive\ndata type.  We would need a compiler from syntax trees of Turing machine programs to actual Turing machines.  This compiler would have to be extended\nwhenever we add a new feature to our language.  In our approach, we can just ``extend'' our language by defining a function, and we use the same\nverification techniques (i.e.\\ realisation and termination) on each stage of the development.\n\n\n\\section{$\\Switch$}\n\\label{sec:switch}\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.Switch}\n\nAsperti and Ricciotti~\\cite{asperti2015} define the control-flow operators sequential composition, conditional, and while.  Their conditional operator\ntakes a concrete machine state as a parameter.  We noticed that this is not acceptable.  We now introduce a generalised operator on labelled machines,\ncalled $\\MS{Switch}$, and derive sequential composition and conditional from this operator.\n\n\\begin{definition}[$\\MS{Switch}~M~f$][Switch]\n  \\label{def:Switch}\n  Let $M : \\TM_\\Sigma^n(L)$ and $f : L \\to \\TM_\\Sigma^n(L')$.  
We define the machine $\\MS{Switch}~M~f : \\TM_\\Sigma^n(L')$ with the following\n  components:\n  \\begin{alignat*}{3}\n    & Q                  &~:=~& Q_M +  \\sum_{l:L} Q_{f(l)} \\\\\n    & start              &~:=~& \\inl start_M \\\\\n    \\delta ~&(\\inl q, s) &~:=~&\n    \\begin{cases}\n      \\bigl(\\inr (lab_M~q, start_{f (lab_M~q)}), (\\None, N)^n \\bigr) & halt_M(q) \\\\\n      \\Let{(q', act) := \\delta_M(q, s)}{\\left(\\inl q', act \\right)} & \\lnot halt_M(q)\n    \\end{cases} \\\\\n    \\delta ~&(\\inr q, s) &~:=~& \\Let{(q', act) := \\delta_{f(\\pi_1 q)} (\\pi_2 q, s)}{\\bigl( \\inr (\\pi_1 q, q'), act \\bigr)} \\\\\n    halt   ~&(\\inl  q)   &~:=~& \\false \\\\\n    halt   ~&(\\inr  q)   &~:=~& halt_{f(\\pi_1~q)} (\\pi_2~q) \\\\\n    lab   ~&(\\inl  q)   &~:=~& \\_ \\\\\n    lab   ~&(\\inr  q)   &~:=~& lab_{f(\\pi_1~q)} (\\pi_2~q)\n  \\end{alignat*}\n\\end{definition}\n\nIn Definition~\\ref{def:Switch}, the $lab$ value for $\\inl$ is unimportant, because the lifted states of $M$ are not terminating states for\n$\\MS{Switch}~M~f$.  We just use a canonical value.\n\n\\begin{figure}\n  \\center\n  \\input{fig-SwitchExample}\n  \\caption{Example of a $\\MS{Switch}$.  The left box stands for the first machine $M_1:\\TM_\\Sigma^n(\\Bool)$.  The final states $q_1$ and $q_2$ are\n    mapped to $\\true$ and $q_3$ to $\\false$.  After $\\MS{Switch}$ reaches one of the injections of the terminal states $q_1, q_2, q_3$ of $M_1$, it\n    continues its execution either in the top case-machine $M_2$ or in the bottom case-machine $M_3$.  The halting states of $\\MS{Switch}$ are exactly\n    the injections of the halting states of the case-machines.}\n  \\label{fig:match}\n\\end{figure}\n\n$\\MS{Switch}~M~f$ first executes a copy of $M$.  When it reaches a final state $q$ of $M$, it does a ``nop'' action (i.e.\\ $(\\None,N)^n$) and changes\nto the injection of the start state of the machine $f(lab_M~q)$.  When $\\MS{Switch}~M~f$ reaches a state that is the injection of a final state of a\nmachine $f~l$, it terminates.  The correctness part of the semantics can be expressed using the following lemma:\n\n\\begin{lemma}[Correctness of $\\MS{Switch}~M~f$][Switch_Realise]\n  \\label{lem:Switch_Realise}\n  Let $R \\subseteq \\Tape_\\Sigma^n \\times L \\times \\Tape_\\Sigma^n$ and $R'~l \\subseteq \\Tape_\\Sigma^n \\times L' \\times \\Tape_\\Sigma^n$ for all $l:L$.\n  If $M \\Realise R$ and $f~l \\Realise R'~l$ for all $l:L$, then\n  \\[\n    \\MS{Switch}~M~f \\Realise SwitchRel~R~R'\n  \\]\n  with\n  \\[\n    SwitchRel~R~R' := \\bigcup_{l:L} \\bigl( R\\at l \\circ R'~l \\bigr)\n  \\]\n\\end{lemma}\n\nNote that in the correctness relation, we compose the unlabelled relation $R \\at l \\subseteq \\Tape_\\Sigma^n \\times \\Tape_\\Sigma^n$ with the labelled\nrelation $R'~l \\subseteq \\Tape_\\Sigma^n \\times L' \\times \\Tape_\\Sigma^n$.  This means that $\\Switch~M~f$ terminates in a state with a label of $f~l$,\nwhich has type $L'$.\n\nTo specify the running time of $\\MS{Switch}~M~f$, we need to know the running time relation in which $M$ terminates, and for each $l:L$ the running\ntime relation of $f~l$.  We also need to know the correctness relation of $M$, because the running time of $f~l$ depends on the output of $M$, which\nis the input of $f~l$.  
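Before stating the running time lemma, it may help to see the intended behaviour operationally.  The following Python sketch is purely illustrative (it is not the Coq development; the representation of a labelled machine as a record of start state, step function, halting predicate, and labelling function is our assumption):\n\\begin{verbatim}\nfrom dataclasses import dataclass\nfrom typing import Any, Callable\n\n@dataclass\nclass LabelledMachine:\n    start: Any      # initial state\n    step: Callable  # (state, tapes) -> (state, tapes)\n    halt: Callable  # state -> bool\n    lab: Callable   # state -> label\n\ndef run(M, tapes):\n    # Run M from its start state to a halting state and return the\n    # label of that state (an unbounded version of Loop; may diverge).\n    q = M.start\n    while not M.halt(q):\n        q, tapes = M.step(q, tapes)\n    return M.lab(q), tapes\n\ndef switch_run(M, f, tapes):\n    # Behaviour of Switch M f: run M to completion, then dispatch on\n    # the final label and run the chosen case-machine.\n    l, tapes = run(M, tapes)\n    return run(f(l), tapes)\n\\end{verbatim}\n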
Also, the choice of the case-machine depends on the label of the terminating state of $M$.\n\n\\begin{lemma}[Running Time of $\\MS{Switch}~M~f$][Switch_TerminatesIn]\n  \\label{lem:Switch_Terminates}\n  Let $R \\subseteq \\Tape_\\Sigma^n \\times L \\times \\Tape_\\Sigma^n$, $T \\subseteq \\Tape_\\Sigma^n \\times \\Nat$, and\n  $T'~l \\subseteq \\Tape_\\Sigma^n \\times \\Nat$ for all $l:L$.  If $M \\Realise R$, $M \\TerminatesIn T$, and $f~l \\TerminatesIn T'~l$ for all $l:L$, then\n  $\\MS{Switch}~M~f \\TerminatesIn SwitchT~R~T~T'$, where\n  \\[\n    SwitchT~R~T~T' :=\n    \\lambda~t~k.~ \\exists~k_1~k_2.~T~t~k_1 \\land 1+k_1+k_2 \\le k \\land\n      \\forall~l~t'.~ R~t~(l,t') \\rightarrow T'(l)~t'~k_2\n  \\]\n\\end{lemma}\n\nWe can combine the correctness and running time lemmas in case $M$ and every $f~l$ terminate in constant time.\n\\begin{lemma}[Correctness of $\\MS{Switch}~M~f$ in constant time][Switch_RealiseIn]\n  \\label{lem:Switch_RealiseIn}\n  Let $k_1, k_2:\\Nat$.\n  If $M \\RealiseIn{k_1} R$ and $f~l \\RealiseIn{k_2} R'~l$ for every $l:L$, then\n  \\[ \\MS{Switch}~M~f \\RealiseIn{1+k_1+k_2} SwitchRel~R~R' \\]\n\\end{lemma}\n\\begin{proof}\n  Follows with Lemmas~\\ref{lem:Realise_total},~\\ref{lem:Switch_Realise}, and~\\ref{lem:Switch_Terminates}.\n\\end{proof}\n\n\n\\subsection{Derived Operators}\n\\label{sec:match-derived-operators}\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.SequentialComposition}%\n\nAs mentioned above, conditional and sequential composition can be defined as instances of the Switch operator.  For the sequential composition\n$M_1 \\Seq M_2$ with $M_1 : \\TM_\\Sigma^n(L)$ and $M_2 : \\TM_\\Sigma^n(L')$, the function $f : L \\to \\TM_\\Sigma^n(L')$ maps all labels of $M_1$ to the\nmachine $M_2$.\n\\begin{definition}[Sequential composition][Seq]\n  \\label{def:Seq}\n  Let $M_1 : \\TM_\\Sigma^n(L)$ and $M_2 : \\TM_\\Sigma^n(L')$.\n  \\[\n    M_1 \\Seq M_2 := \\Switch~M_1~\n    \\bigl(\n    \\lambda~\\_.~M_2\n    \\bigr)\n  \\]\n\\end{definition}\n\nThis means that, regardless of the state in which the machine $M_1$ terminates, the sequential composition $M_1 \\Seq M_2$ continues its execution in the start\nstate of $M_2$.  The following lemma is the correctness lemma of sequential composition for constant-time termination.  The version without\nconstant-time termination, i.e.\\ the lemma we get notationally by removing the step numbers, holds as well.\n\n\\begin{lemma}[Correctness of sequential composition][Seq_RealiseIn]\n  \\label{lem:Seq_RealiseIn}\n  If $M_1 \\RealiseIn{k_1} R_1$ and $M_2 \\RealiseIn{k_2} R_2$, then\n  $M_1 \\Seq M_2 \\RealiseIn{1+k_1+k_2} \\bigcup_{l:L} \\left(R_1\\at l \\circ R_2 \\right)$.\n\\end{lemma}\n\nWe often have the case that the first machine $M_1$ is labelled over the unit-type $L = \\Unit$.  In this case, the relation is\n$\\left(\\bigcup_{l:L} R_1\\at l\\right) \\circ R_2 \\equiv R_1 \\at \\unit \\circ R_2 = R_1 \\circ R_2$ by the convention that we identify unit-labelled\nrelations with unlabelled relations.  
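In the illustrative Python sketch above, both derived operators are one-liners (again, this only mirrors the definitions; it is not the Coq code):\n\\begin{verbatim}\ndef seq_run(M1, M2, tapes):\n    # M1 ;; M2 -- every label of M1 dispatches to the same machine M2.\n    return switch_run(M1, lambda _: M2, tapes)\n\ndef if_run(M1, M2, M3, tapes):\n    # If M1 Then M2 Else M3 -- M1 is labelled over booleans.\n    return switch_run(M1, lambda b: M2 if b else M3, tapes)\n\\end{verbatim}\n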
Sequential composition of two machines thus amounts to composing their correctness relations.\n\nIn case either $M_1$ or $M_2$ does not have constant running time, we need the running time lemma, which can be derived from\nLemma~\\ref{lem:Switch_Terminates}.\n\\begin{lemma}[Running Time of sequential composition][Seq_TerminatesIn]\n  \\label{lem:Seq_TerminatesIn}\n  If $M_1 \\Realise R_1$, $M_1 \\TerminatesIn T_1$, and $M_2 \\TerminatesIn T_2$, then\n  \\[\n    M_1 \\Seq M_2 \\TerminatesIn\n    \\left(%\n      \\lambda t~k.~\\exists k_1~k_2.~T_1~t~k_1 ~\\land~ 1 + k_1 + k_2 \\leq k ~\\land~ \\forall t'~l.~R_1~t~(l,t') \\rightarrow T_2~t'~k_2%\n    \\right)\n  \\]\n\\end{lemma}\n\n\nFor the conditional $\\If{M_1}{M_2}{M_3}$ with $M_1 : \\TM_\\Sigma^n(\\Bool)$, $M_2, M_3 : \\TM_\\Sigma^n(L)$, the function $f : \\Bool \\to \\TM_\\Sigma^n(L)$\nsimply maps $\\true$ to $M_2$ and $\\false$ to $M_3$.  The conditional, as defined below, first executes $M_1$.  If $M_1$ terminates in a state with\nlabel $\\true$, it continues the execution in $M_2$, else in $M_3$.\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.If}%\n\\begin{definition}[Conditional][If]\n  \\label{def:If}\n  Let $M_1 : \\TM_\\Sigma^n(\\Bool)$, $M_2, M_3 : \\TM_\\Sigma^n(L)$.\n  \\[\n    \\If{M_1}{M_2}{M_3} := \\Switch~M_1\n    \\left(\\lambda b.~\\cond{b}{M_2}{M_3} \\right)\n  \\]\n\\end{definition}\n\n\\begin{lemma}[Correctness of conditional][If_RealiseIn]\n  \\label{lem:If_RealiseIn}\n  If $M_1 \\RealiseIn{k_1} R_1$, $M_2 \\RealiseIn{k_2} R_2$, and $M_3 \\RealiseIn{k_3} R_3$, then\n  $\\If{M_1}{M_2}{M_3} \\RealiseIn{1+k_1+\\max(k_2,k_3)} (R_1\\at\\true \\circ R_2) \\cup (R_1\\at\\false \\circ R_3)$.\n\\end{lemma}\n\nThe correctness relation $(R_1\\at\\true \\circ R_2) \\cup (R_1\\at\\false \\circ R_3)$ captures the idea of the following case-distinction: If the\nconditional machine terminates, then the copy of $M_1$ terminates in a state with label $\\true$ or $\\false$.  In the first case, the\nconditional proceeds in $M_2$, else in $M_3$.\n\nThe running time lemma of the conditional can also be derived from Lemma~\\ref{lem:Switch_Terminates}.\n\\begin{lemma}[Running Time of the conditional][If_TerminatesIn]\n  \\label{lem:If_TerminatesIn}\n  If $M_1 \\Realise R_1$, $M_1 \\TerminatesIn T_1$, $M_2 \\TerminatesIn T_2$, and $M_3 \\TerminatesIn T_3$, then\n  \\begin{align*}\n    & \\If{M_1}{M_2}{M_3} \\TerminatesIn \\\\\n    &\\quad \\bigl( \\lambda t~k.~\\exists k_1~k_2.~T_1~t~k_1 ~\\land~ 1+k_1+k_2 \\leq k ~\\land~ \\\\\n    &\\quad \\phantom{\\bigl( \\lambda t~k.~\\exists k_1~k_2.}~\\forall b~t'.~R_1~t~(b, t') \\rightarrow \\cond{b}{T_2~t'~k_2}{T_3~t'~k_2} \\bigr)\n  \\end{align*}\n\\end{lemma}\n\n\nAsperti and Ricciotti~\\cite{asperti2015} also define sequential composition and a conditional operator.  However, there are two fundamental\ndifferences.  First, they define and verify each operator separately.  Second, they do not consider state labelling.  Their conditional operator\ntakes one concrete ``negative'' state, i.e.\\ the machine $M_3$ is only executed if $M_1$ terminates in this particular state.  This makes programming\nand reasoning about concrete machines tedious, because complex machines also have many states.  They also have to introduce a separate notion of\ncorrectness, because their original one does not refer to states.  By introducing state-labelled machines and by implementing the more general\n$\\Switch$ operator, we solve both problems.  
We no longer have to specify concrete machine states in the definition and verification of machines.\nMoreover, we often exploit the convenient generality of the $\\Switch$ operator.  For example, it can be used to implement a machine that copies a\nsymbol from one tape to another tape, as demonstrated in Chapter~\\ref{chap:combining}.  Also note that Asperti and Ricciotti~\\cite{asperti2015} have\nno notion of time complexity; their strong notion of realisation only implies termination in an uncertain number of steps.  The idea of\nstate-labelling and $\\Switch$ is due to Y.~Forster and F.~Kunze.\n\n\n\\subsection{Proof of $\\Switch$}\n\\label{sec:match-proofs}\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.Prelim}\n\nThe idea of the proofs of Lemma~\\ref{lem:Switch_Realise} and Lemma~\\ref{lem:Switch_Terminates} is to abstract two features of the machine: lifting of\nconfigurations from one abstract machine to another abstract machine, and sequencing of two executions.  We formalise these two concepts for abstract\nmachines, i.e.\\ we argue on the abstract $\\Loop$ function.\n\nFor the first feature, \\textit{lifting}, we assume two types $A$, $B$ for abstract configurations, a function $lift : A \\to B$, two step functions\n$f : A \\to A$, $f' : B \\to B$, and two halting functions $h : A \\to \\Bool$, $h' : B \\to \\Bool$.  We assume that the step functions $f$ and $f'$ are\ncompatible with $lift$ in non-halting states of $A$.  Formally, this means:\n\\begin{equation}\n  \\label{eq:loop_lift_assumption1}\n  \\forall a:A.~ h(a)=\\false \\rightarrow f' (lift~a) = lift (f~a)\n\\end{equation}\n\nThe second assumption is that $h'$ and $h$ are compatible w.r.t.\\ $lift$; formally:\n\\begin{equation}\n  \\label{eq:loop_lift_assumption2}\n  \\forall a:A.~h'(lift~a)=h(a)\n\\end{equation}\n\nUnder these two assumptions we can show two lemmas that essentially say that the second abstract machine $B$ simulates the first machine $A$.\n\\begin{lemma}[Loop lifting][loop_lift]\n  \\label{lem:loop_lift}\n  Under the assumptions~(\\ref{eq:loop_lift_assumption1}) and~(\\ref{eq:loop_lift_assumption2}):\n  \\begin{alignat*}{1}\n    & \\forall (k:\\Nat)~(a~a' : A). \\\\\n    & \\quad \\Loop~f ~h ~a        ~k = \\Some{a'} \\rightarrow \\\\\n    & \\quad \\Loop~f'~h'~(lift~a) ~k = \\Some {lift~a'}\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  By induction on $k:\\Nat$.\n\\end{proof}\n\\begin{lemma}[Loop unlifting][loop_unlift]\n  \\label{lem:loop_unlift}\n  Under the assumptions~(\\ref{eq:loop_lift_assumption1}) and~(\\ref{eq:loop_lift_assumption2}):\n  \\begin{alignat*}{1}\n    & \\forall (k:\\Nat)~(a:A)~(b':B). 
\\\n    & \\quad \\Loop~f'~h'~(lift~a)~k = \\Some{b'} \\rightarrow \\\\\n    & \\quad \\exists (a':A).~\\Loop~f~h~a~k = \\Some{a'} \\land b'=lift~a'\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  By induction on $k:\\Nat$.\n\\end{proof}\n\n\\begin{figure}\n  \\center\n  \\begin{tikzcd}\n    a'                    \\arrow[r, \"lift\"] & \\cdot \\\\\n    \\cdot \\arrow[u, \"f\"]  \\arrow[r, \"lift\"] & \\cdot \\arrow[u, \"g\", swap] \\\\\n    a     \\arrow[u, \"f\"]  \\arrow[r, \"lift\"] & \\cdot \\arrow[u, \"g\", swap] \\\\\n  \\end{tikzcd}\n  \\hspace{1cm}\n  \\begin{tikzcd}\n    a'                            \\arrow[r, \"lift\", dotted] & b' \\\\\n    \\cdot \\arrow[u, \"f\", dotted]  \\arrow[r, \"lift\", dotted] & \\cdot \\arrow[u, \"g\", swap] \\\\\n    a     \\arrow[u, \"f\", dotted]  \\arrow[r, \"lift\"]         & \\cdot \\arrow[u, \"g\", swap] \\\\\n  \\end{tikzcd}\n  \\caption{Instances of Lemma~\\ref{lem:loop_lift} (left) and Lemma~\\ref{lem:loop_unlift} (right) for $k=2$ as commuting diagrams.  Dotted lines denote\n    existentials.  Note that the rectangles correspond to condition (\\ref{eq:loop_lift_assumption1}).  Only the top ``states'' are terminating\n    states.}\n  \\label{fig:loop_lift_lemmas}\n\\end{figure}\n\nLemmas~\\ref{lem:loop_lift} and~\\ref{lem:loop_unlift} are visualised in Figure~\\ref{fig:loop_lift_lemmas}.\n\nFor the second feature, \\textit{sequential execution}, we assume another type $A$ with a step function $f \\from A \\to A$ and two halting functions\n$h, h' \\from A \\to \\Bool$.  We assume that if $a$ is a non-halting state w.r.t.\\ $h$, then $a$ is also a non-halting state w.r.t.\\ $h'$:\n\\begin{equation}\n  \\label{eq:loop_merge_assumption}\n  \\forall(a:A).~h~a = \\false \\rightarrow h'~a = \\false\n\\end{equation}\n\n\\begin{lemma}[Loop merging][loop_merge]\n  \\label{lem:loop_merge}\n  Under assumption~(\\ref{eq:loop_merge_assumption}):\n  \\begin{align*}\n    & \\forall (k_1~k_2:\\Nat)~(a_1~a_2~a_3 : A).\\\\\n    & \\quad \\Loop~f~h ~a_1~k_1       = \\Some{a_2} \\rightarrow\\\\\n    & \\quad \\Loop~f~h'~a_2~k_2       = \\Some{a_3} \\rightarrow\\\\\n    & \\quad \\Loop~f~h'~a_1~(k_1+k_2) = \\Some{a_3}\n  \\end{align*}\n\\end{lemma}\n\\begin{proof}\n  By induction on $k_1:\\Nat$, using Lemma~\\ref{lem:loop}~(\\ref{lem:loop_monotone}).\n\\end{proof}\n\\begin{lemma}[Loop splitting][loop_split]\n  \\label{lem:loop_split}\n  Under assumption~(\\ref{eq:loop_merge_assumption}):\n  \\begin{align*}\n    & \\forall (k:\\Nat)~(a_1~a_3 : A).\\\\\n    & \\quad \\Loop~f~h'~a_1~k = \\Some{a_3} \\rightarrow \\\\\n    & \\quad \\exists (k_1~k_2:\\Nat)~(a_2:A).\\\\\n    & \\quad \\quad \\Loop~f~h ~a_1~k_1 = \\Some{a_2} \\land\\\\\n    & \\quad \\quad \\Loop~f~h'~a_2~k_2 = \\Some{a_3} \\land\\\\\n    & \\quad \\quad k_1 + k_2 \\leq k\n  \\end{align*}\n\\end{lemma}\n\\begin{proof}\n  By complete induction on $k:\\Nat$.\n\\end{proof}\n\n\nBack to the verification of $\\Switch~M~f$.  In the following, we simply write $\\Switch$.  For a configuration $c_k$, we write $q_k$ and $t_k$ for the\nstate and tapes of $c_k$.\n\nAn execution of $\\Switch$ essentially consists of, first, the lifted steps of an execution of $M$; second, a ``nop'' transition; and\nthird, the lifted steps of an execution of $f~l$.  
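The first and third parts are runs of the abstract $\\Loop$ function, which can be pictured as fuel-bounded iteration (an illustrative Python rendering of the specification, not the Coq definition):\n\\begin{verbatim}\ndef loop(f, h, a, k):\n    # Perform at most k steps of f starting from a; return the first\n    # configuration satisfying h, or None ("undefined") if the fuel\n    # runs out before a halting configuration is reached.\n    for _ in range(k):\n        if h(a):\n            return a\n        a = f(a)\n    return a if h(a) else None\n\\end{verbatim}\nIn this picture, Lemma~\\ref{lem:loop_merge} says that a run with the stricter halting predicate $h$, continued with the weaker predicate $h'$, merges into a single run with $h'$ whose fuel is the sum of the two.\n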
We give the concrete lifting functions from $M$ to $\\Switch$ and from $f~l$ to $\\Switch$:\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.Switch}%\n\\begin{definition}[Liftings of $\\Switch$][lift_confL]\n  We define the functions \\\\$liftL : \\Conf_M \\to \\Conf_\\Switch$ and $liftR_l : \\Conf_{f(l)} \\to \\Conf_\\Switch$ for all $l:L$:\n  \\begin{alignat*}{2}\n    & liftL   & (q, t) &:= (\\inl q,      t) \\\\\n    & liftR_l & (q, t) &:= (\\inr (l, q), t)\n  \\end{alignat*}\n\\end{definition}\n\nFor the sequencing, we also have to define the lifted halting function \\\\$haltConfL : \\Conf_\\Switch \\to \\Bool$:\n\\begin{definition}[Lifted halting function][halt_liftL]\n  \\begin{alignat*}{2}\n    & haltConfL & (\\inl~q, t) &:= halt_M(q) \\\\\n    & haltConfL & (\\inr~q, t) &:= \\true\n  \\end{alignat*}\n\\end{definition}\n\nUsing Lemmas~\\ref{lem:loop_merge} and~\\ref{lem:loop_lift}, we can show the lemma we need for running time:\n\\begin{lemma}[Merging parts of executions of $\\Switch$][Switch_merge]\n  \\label{lem:Switch_merge}\n  Let $t : \\Tape_\\Sigma^n$, $k_1, k_2:\\Nat$, $c_1 : \\Conf_M$ and $c_2 : \\Conf_{f(lab_M~q_1)}$.\n  \\begin{alignat*}{1}\n    & M(t) \\terminates^{k_1} c_1 \\rightarrow\\\\\n    & \\bigl( f~(lab_M~q_1) \\bigr) (t_1) \\terminates^{k_2} c_2 \\rightarrow\\\\\n    & \\Switch(t) \\terminates^{1+k_1+k_2} liftR(c_2)\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  We apply Lemma~\\ref{lem:loop_merge} and have to show:\n  \\begin{enumerate}\n  \\item $\\forall a:\\Conf_\\Switch.~ haltConfL~a = \\false \\rightarrow \\MS{haltConf}~a = \\false$.  This holds trivially by case-analysis over $a$.\n  \\item $\\Loop~step_\\Switch~haltConfL~(\\MS{initConf}_\\Switch~t)~k_1 =\n    \\Some{liftL~c_1}$.  By definition, we have $\\MS{initConf}_\\Switch~t=liftL(\\MS{initConf}_M~t)$.  The claim follows with Lemma~\\ref{lem:loop_lift}.\n  \\item $\\Loop~step_\\Switch~haltConf_\\Switch~(liftL~c_1)~(1+k_2) =\n    \\Some{liftR(c_2)}$.  By definition, we know that the first step must be a ``nop'' transition from\n    $liftL~c_1$ to\\\\\n    $liftR~\\bigl(\\MS{initConf}_{f(lab_M~q_1)}~t_1 \\bigr)$.  It remains to show that:\n    $$\\Loop~step_\\Switch~haltConf_\\Switch~liftR \\bigl(\\MS{initConf}_{f(lab_M~q_1)}~t_1 \\bigr)~k_2 = \\Some{liftR(c_2)}$$\n    This follows with Lemma~\\ref{lem:loop_lift}.\n  \\end{enumerate}\n\\end{proof}\nThe running time Lemma~\\ref{lem:Switch_Terminates} follows directly from Lemma~\\ref{lem:Switch_merge}.\n\n\\begin{lemma}[Splitting execution of $\\Switch$][Switch_split]\n  \\label{lem:Switch_split}\n  Let $t : \\Tape_\\Sigma^n$, $k : \\Nat$, $c : \\Conf_\\Switch$.\n  \\begin{align*}\n    & \\Switch(t) \\terminates^k c \\rightarrow \\\\\n    & \\exists (k_1~k_2 : \\Nat)~(c_1 : \\Conf_M)~(c_2 : \\Conf_{f(lab_M~q_1)}). 
\\\n    & \\quad M(t) \\terminates^{k_1} c_1 ~\\land~ \\\\\n    & \\quad \\bigl( f (lab_M~q_1) \\bigr) (t_1) \\terminates^{k_2} c_2 ~\\land~ \\\\\n    & \\quad c = liftR(c_2)\n  \\end{align*}\n\\end{lemma}\n\\begin{proof}\n  Analogous to the proof of Lemma~\\ref{lem:Switch_merge}, using Lemmas~\\ref{lem:loop_split} and~\\ref{lem:loop_unlift}.\n\\end{proof}\nThe correctness Lemma~\\ref{lem:Switch_Realise} follows directly from Lemma~\\ref{lem:Switch_split}.\n\n\nAsperti and Ricciotti~\\cite{asperti2015} have a version of Lemma~\\ref{lem:loop_lift} and Lemma~\\ref{lem:loop_merge}.\n\n\\section{$\\MS{While}$}\n\\label{sec:While}\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.While}%\n\nThe machine $\\While~M$ essentially behaves like a ``do-while'' loop in imperative languages like C.  At the end of the execution of the loop body $M$,\n$M$ decides either to continue or to break out of the loop.  If $M$ terminates in a state with label $\\None$, then the loop is continued, and if $M$\nterminates in $\\Some{l}$, the loop breaks and $\\While~M$ terminates in a state with label $l$.\n\n\\begin{definition}[$\\MS{While}~M$][While]\n  \\label{def:While}\n  Let $M : \\TM_\\Sigma^n(\\Option(L))$ and $def:L$.  We define $\\MS{While}~M : \\TM_\\Sigma^n(L)$ with the following components.\n  \\begin{alignat*}{3}\n    & Q              &~:=~& Q_M \\\\\n    & start          &~:=~& start_M \\\\\n    \\delta ~& (q, s) &~:=~&\n    \\begin{cases}\n      (start_M, (\\None, N)^n) & halt_M(q) \\\\\n      \\delta_M (q,s)    & \\lnot halt_M(q)\n    \\end{cases} \\\\\n    halt ~& q      &~:=~& halt_M(q) \\land lab_M(q) \\neq \\None \\\\\n    lab ~& q      &~:=~&\n    \\begin{cases}\n      l   & lab_M(q) = \\Some l \\\\\n      def & lab_M(q) = \\None\n    \\end{cases}\n  \\end{alignat*}\n\\end{definition}\n\n\\begin{figure}\n  \\center\n  \\input{fig-WhileExample}\n  \\caption{Example for $\\While~M$ with $M:\\TM_\\Sigma^n(\\Option(\\Bool))$.  When $M$ reaches the state $q_1$, the loop is continued, because $q_1$ is\n    assumed to have label $\\None$.  The other halting states $q_2$ and $q_3$ have the labels $\\Some{\\true}$ and $\\Some{\\false}$.  Therefore $\\While~M$\n    terminates in a state with label $\\true$ or $\\false$, when $M$ terminates in $q_2$ or $q_3$, respectively.}\n  \\label{fig:while-example}\n\\end{figure}\n\nIn Definition~\\ref{def:While}, we have to assume that $L$ is inhabited.  
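Operationally, $\\While~M$ behaves like the following Python loop (illustrative only, in the style of the earlier sketches; the label $\\None$ becomes Python's None):\n\\begin{verbatim}\ndef while_run(M, tapes):\n    # Behaviour of While M: repeat M as long as its final label is\n    # None; any label Some l (here: a non-None value l) breaks the\n    # loop, and While M yields l.\n    while True:\n        l, tapes = run(M, tapes)\n        if l is not None:\n            return l, tapes\n\\end{verbatim}\n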
However, the choice of $def:L$ is semantically irrelevant, because $\\While~M$\nonly halts in states where $lab_M(q) \\neq \\None$.\n\nThe correctness of $\\MS{While}$ can be expressed using the following lemma:\n\n\\begin{lemma}[Correctness of $\\MS{While}$][While_Realise]\n  \\label{lem:While_Realise}\n  Let $R \\subseteq \\Tape_\\Sigma^n \\times \\Option(L) \\times \\Tape_\\Sigma^n$.\n  If $M \\Realise R$, then $\\While~M \\Realise WhileRel~R$, where\n  $WhileRel~R \\subseteq \\Tape_\\Sigma^n \\times L \\times \\Tape_\\Sigma^n$\n  is inductively defined by the following two rules:\n  \\[\n    \\inferrule{R~t~(\\Some{l}, t')}{WhileRel~R~t~(l, t')}\n    \\quad\n    \\inferrule{R~t~(\\None, t') \\and WhileRel~R~t'~(l, t'')}{WhileRel~R~t~(l, t'')}\n  \\]\n\\end{lemma}\n\nWe can also express the correctness relation of $\\While$ using the Kleene star:\n\\begin{fact}[Alternative correctness relation for $\\While~M$][OtherWhileRel]\n  ~\n  \\[\n    WhileRel~R \\equiv (R \\at \\None)^* \\circ \\Bigl( \\bigcup_{l:L} \\bigl( R \\at {\\Some{l}} \\bigr) \\att l \\Bigr)\n  \\]\n\\end{fact}\n\nBoth definitions of $WhileRel$ should make clear what $\\While~M$ does: It repeats the execution of $M$ as long as it terminates in $\\None$, and after\n$M$ terminates in $\\Some{l}$, $\\While~M$ terminates in a state with label $l$.  This is visualised in Figure~\\ref{fig:while-example}.\n\nWhen we want to prove $\\While~M \\Realise R$ for some machine $M:\\TM_\\Sigma^n(\\Option(L))$ and relation\n$R \\subseteq \\Tape_\\Sigma^n \\times L \\times \\Tape_\\Sigma^n$, we must of course have already proven that $M \\Realise R'$ for some relation\n$R' \\subseteq \\Tape_\\Sigma^n \\times \\Option(L) \\times \\Tape_\\Sigma^n$.  Then we apply the monotonicity Lemma~\\ref{lem:Realise_monotone} and the above\ncorrectness Lemma~\\ref{lem:While_Realise}, and have to show $WhileRel~R' \\subseteq R$.  We can prove this by induction on the inductive predicate,\nwhich is equivalent to applying the following lemma:\n\\begin{lemma}[Induction for $WhileRel$][WhileInduction]\n  \\label{lem:WhileInduction}\n  ~\n  \\begin{alignat*}{1}\n    & \\left(\\forall t~t'~l.~R'~t~(\\Some{l}, t') \\rightarrow R~t~(l,t')\\right) \\rightarrow \\\\\n    & \\left(\\forall t~t'~t''~l.~R'~t~(\\None, t') \\rightarrow R~t'~(l, t'') \\rightarrow R~t~(l,t'')\\right) \\rightarrow \\\\\n    & WhileRel~R' \\subseteq R\n  \\end{alignat*}\n\\end{lemma}\n\n\nThe running time lemma of $\\While$ is dual.  We define a co-inductive termination relation $WhileT~R~T$, where $R$ is the relation that $M$ realises\nand $T$ is the running time relation in which $M$ terminates.\n\\begin{lemma}[Running Time of $\\While~M$][While_TerminatesIn]\n  \\label{lem:While_TerminatesIn}\n  If $M \\Realise R$ and $M \\TerminatesIn T$, then $\\While~M \\TerminatesIn WhileT~R~T$, where $WhileT~R~T \\subseteq \\Tape_\\Sigma^n \\times \\Nat$ is\n  defined as the following co-inductive running time relation:\n  \\[\n    \\inferrule{T~t~k_1 \\\\\n      \\forall t'~l.~R~t~(\\Some{l}, t') \\rightarrow k_1 \\leq k \\\\\n      \\forall t'.~R~t~(\\None,t') \\rightarrow \\exists k_2.~WhileT~R~T~t'~k_2 \\land 1+k_1+k_2 \\leq k}%\n    {WhileT~R~T~t~k}\n  \\]\n\\end{lemma}\n\nWhen we want to show $\\While~M \\TerminatesIn T$, this is dual to showing $\\While~M \\Realise R$.  We apply the anti-monotonicity\nLemma~\\ref{lem:TerminatesIn_monotone} and the above running time lemma (where $T'$ is the running time relation in which $M$ terminates), and have to\nshow $T \\subseteq WhileT~R~T'$.  
For that, we use the co-induction lemma of $WhileT$:\n\\begin{lemma}[$WhileT$ co-induction][WhileCoInduction]\n  \\label{lem:WhileCoInduction}\n  To show $T \\subseteq WhileT~R~T'$, it suffices to show:\n  \\begin{alignat*}{1}\n    &\\forall t~k.~T~t~k \\rightarrow \\\\\n    &\\quad \\exists k_1.~T'~t~k_1 ~\\land~ \\\\\n    &\\quad\\quad \\left( \\forall l~t'.~R~t~(\\Some{l}, t') \\rightarrow k_1 \\leq k \\right) ~\\land~ \\\\\n    &\\quad\\quad \\left( \\forall t'.~R~t~(\\None, t') \\rightarrow \\exists k_2.~T~t'~k_2 ~\\land~ 1+k_1+k_2 \\leq k \\right).\n  \\end{alignat*}\n\\end{lemma}\nThe number $k_1$ is the number of steps needed for the first iteration of the loop.  We have to consider all possible outputs of the first iteration.  If\n$M$ terminates in a state with label $\\Some{l}$ after $k_1$ steps, then $\\While~M$ also needs only $k_1$ steps.  However, if $M$ terminates in $\\None$, and $\\While~M$\nneeds $k_2$ steps for the remaining iterations, then $\\While~M$ needs $1+k_1+k_2$ steps in total.  The one additional step comes from the ``nop''-transition\nback to the starting state.\n\n\n\n\\subsection{Proof of $\\While$}\n\\label{sec:while-proofs}\n\nThe running time and correctness proofs are similar to the proofs of $\\Switch$, as explained in Section~\\ref{sec:match-proofs}.  The configurations of\n$\\While$ are exactly the configurations of $M$, so the lifting function is the identity function.  However, the fundamental difference between\n$\\Switch$ and $\\While$ is that $\\While$ can execute $M$ arbitrarily often; it could also diverge.  As a consequence, we also need complete induction\non step-numbers, in addition to the loop-splitting and loop-merging lemmas.  We present the key lemmas here and the most important parts of the\nproofs.\n\nWe simply write $\\While$ instead of $\\While~M$ in this section.  Since we have $\\Conf_M = \\Conf_\\While$, we also only write $\\Conf$.\n\nThe first lemma says that an execution of $\\While$ consists of an execution of $M$ and a (possibly empty) continuation of $\\While$:\n\\begin{lemma}[Splitting the execution of $\\While$][While_split]\n  \\label{lem:While_split}\n  Let $c_1, c_3 : \\Conf$ and $k:\\Nat$.  Then\n  \\begin{alignat*}{1}\n    & \\While(c_1) \\terminates^k c_3 \\rightarrow \\\\\n    & \\exists (k_1~k_2 : \\Nat)~c_2. 
\\\n    & \\quad M(c_1) \\terminates^{k_1} c_2 ~\\land~ \\\\\n    & \\quad \\While(c_2) \\terminates^{k_2} c_3 ~\\land~\\\\\n    & \\quad k_1 + k_2 \\leq k\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  Follows with Lemma~\\ref{lem:loop_split} and Lemma~\\ref{lem:loop_lift}.\n\\end{proof}\n\nWe have two more splitting lemmas, one for the case that $\\While$ terminates immediately, and one for the case that $\\While$ continues the loop.\n\\begin{lemma}[Splitting, break case][While_split_term]\n  \\label{lem:While_split_term}\n  Let $c_1$, $c_2 : \\Conf$, $k:\\Nat$, and $l:L$.\n  \\begin{alignat*}{1}\n    & \\While(c_1) \\terminates^k c_2 \\rightarrow \\\\\n    & haltConf_M~c_1 \\rightarrow \\\\\n    & lab_M(q_1) = \\Some l \\rightarrow \\\\\n    & c_1 = c_2\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  By Lemma~\\ref{lem:loop}~(\\ref{lem:loop_eq_0}), because $c_1$ is a halting state of $\\While$.\n\\end{proof}\n\\begin{lemma}[Splitting, continue case][While_split_repeat]\n  \\label{lem:While_split_repeat}\n  Let $c_1$, $c_2 : \\Conf$, $k:\\Nat$.\n  \\begin{alignat*}{1}\n    & \\While(c_1) \\terminates^k c_2 \\rightarrow \\\\\n    & haltConf_M~c_1 \\rightarrow \\\\\n    & lab_M(q_1) = \\None \\rightarrow \\\\\n    & \\exists k'.~k = 1+k' ~\\land~ \\While(t_1) \\terminates^{k'} c_2\n  \\end{alignat*}\n\\end{lemma}\n\\begin{proof}\n  $\\While$ must have taken the ``nop''-transition from $c_1$ to $initConf~t_1$, because $c_1$ is a halting configuration of $M$ but not of $\\While$.\n\\end{proof}\n\nWe can now prove the correctness Lemma~\\ref{lem:While_Realise} of $\\While$.\n\\begin{proof}\n  We assume $\\While(t_1) \\terminates^k c_3$ and have to show $WhileRel~t_1~(lab_\\While~q_3, t_3)$.  We use complete induction on $k:\\Nat$.  By\n  Lemma~\\ref{lem:While_split}, we have $M(t_1) \\terminates^{k_1} c_2$ and\\\\ $\\While(c_2) \\terminates^{k_2} c_3$, for $k_1+k_2 \\leq k$.  We do a case analysis.\n  \\begin{enumerate}\n  \\item $lab_M(q_2) = \\Some{l}$.  Then we know by Lemma~\\ref{lem:While_split_term} that $c_2=c_3$.  It remains to show $WhileRel~t_1~(l, t_2)$.  By\n    applying the first constructor, it is enough to show $R~t_1~(\\Some{l}, t_2)$.  This follows from the realisation of $M$.\n  \\item $lab_M(q_2) = \\None$.  By Lemma~\\ref{lem:While_split_repeat}, we know that $\\While(t_2) \\terminates^{k'_2} c_3$ for $k_2 = 1 + k'_2$.  The\n    inductive hypothesis gives $WhileRel~t_2~(lab_\\While~q_3, t_3)$.  
The goal follows by applying the second constructor and the realisation of $M$.\n  \\end{enumerate}\n\\end{proof}\n\n\nFor the running time proofs, we again have lemmas that ``merge'' executions together.\n\n\\begin{lemma}[Merging, break case][While_merge_term]\n  Let $k : \\Nat$, $c_1, c_2 : \\Conf$, and $l:L$.\n  \\begin{alignat*}{1}\n    & M(c_1) \\terminates^k c_2 \\rightarrow \\\\\n    & lab_M(q_2) = \\Some{l} \\rightarrow \\\\\n    & \\While(c_1) \\terminates^k c_2\n  \\end{alignat*}\n\\end{lemma}\n\\begin{lemma}[Merging, continue case][While_merge_repeat]\n  Let $k_1, k_2 : \\Nat$, $c_1, c_2, c_3 : \\Conf$.\n  \\begin{alignat*}{1}\n    & M(c_1) \\terminates^{k_1} c_2 \\rightarrow \\\\\n    & lab_M(q_2) = \\None \\rightarrow \\\\\n    & \\While(t_2) \\terminates^{k_2} c_3 \\rightarrow \\\\\n    & \\While(c_1) \\terminates^{1+k_1+k_2} c_3\n  \\end{alignat*}\n\\end{lemma}\n\nThe running time Lemma~\\ref{lem:While_TerminatesIn} follows similarly, by complete induction on $k:\\Nat$.\n\n\n\n\\section{$\\MS{Mirror}$}\n\\label{sec:mirror}\n\nWe can define a machine operator that ``mirrors'' a machine $M$.  Whenever $M$ makes a transition with a move to $L$, $\\MS{Mirror}~M$ moves the head\nto the right instead.  For example, we define a machine $\\MS{MoveToSymbol}$ below that moves the head of the tape to the right until it reads a certain\nsymbol.  Using this operator, we get a machine ``for free'' that moves the head to the left instead.  However, we still have to copy or parametrise\nthe correctness and running time relations.\n\n\\setCoqFilename{ProgrammingTuringMachines.TM.TM}%\nUsing the proof techniques developed in the previous sections, verifying the $\\MS{Mirror}$ operator is very easy.  The function\n$\\coqlink[mirror_tape]{\\MS{mirror}} : \\Tape_\\Sigma \\to \\Tape_\\Sigma$ swaps the left and right part of a tape.  %\n% \\begin{definition}[Mirror tape][mirror_tape]\n%   \\label{def:mirror_tape}\n%   Let $t : \\Tape_\\Sigma$.\n%   \\begin{alignat*}{2}\n%     & \\MS{mirror}~(\\MS{niltape})         &&~:=~ \\MS{niltape} \\\\\n%     & \\MS{mirror}~(\\MS{leftof}~l~ls)     &&~:=~ \\MS{rightof}~l~ls \\\\\n%     & \\MS{mirror}~(\\MS{rightof}~l~ls)    &&~:=~ \\MS{leftof}~l~ls \\\\\n%     & \\MS{mirror}~(\\MS{midtape}~ls~m~rs) &&~:=~ \\MS{midtape}~rs~m~ls\n%   \\end{alignat*}\n% \\end{definition}\n% \\begin{lemma}[Correctness of $\\MS{mirror}$]\n%   \\label{lem:mirror}\n%   Let $t:\\Tape_\\Sigma$.\n%   \\begin{enumerate}\n%   \\coqitem[mirror_tape_injective] \\label{lem:mirror_tape_injective}\n%     $\\MS{mirror}$ is injective,\n%   \\coqitem[mirror_tape_involution] \\label{lem:mirror_tape_involution}\n%     $\\MS{mirror}$ is an involution, i.e.\\ $\\MS{mirror}(\\MS{mirror}(t))=t$,\n%   \\coqitem[mirror_tape_left] \\label{lem:mirror_left}\n%     $\\MS{left}(\\MS{mirror}(t)) = \\MS{right}(\\MS{mirror}(t))$,\n%   \\coqitem[mirror_tape_right] \\label{lem:mirror_right}\n%     $\\MS{right}(\\MS{mirror}(t)) = \\MS{left}(\\MS{mirror}(t))$,\n%   \\coqitem[mirror_tape_current] \\label{lem:mirror_current}\n%     $\\MS{current}(\\MS{mirror}(t)) = \\MS{current}(t)$.\n%   \\end{enumerate}\n% \\end{lemma}\n% % \\begin{proof}\n% %   By case analysis over the tape(s).\n% % \\end{proof}\nFurthermore, we define an injective and involutive function $\\coqlink[mirror_move]{\\MS{swap}}:\\MS{Move} \\to \\MS{Move}$ that simply swaps the movements\n$L$ and $R$.  
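Concretely, mirroring only post-processes the actions produced by the transition function, as in the following illustrative Python sketch (the representation of $\\delta$ as a function returning a list of (write, move) actions is our assumption); the formal definition follows:\n\\begin{verbatim}\nSWAP = {"L": "R", "R": "L", "N": "N"}  # the involution swap on moves\n\ndef mirror_delta(delta):\n    # Wrap delta(q, symbols) -> (q2, actions), swapping every move.\n    def delta_mirrored(q, symbols):\n        q2, actions = delta(q, symbols)\n        return q2, [(w, SWAP[m]) for (w, m) in actions]\n    return delta_mirrored\n\\end{verbatim}\n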
\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.Mirror}%\n\\begin{definition}[$\\MS{Mirror}~M$][Mirror]\n  \\label{def:Mirror}\n  Let $M:\\TM_\\Sigma^n(L)$.  The machine $\\MS{Mirror}~M : \\TM_\\Sigma^n(L)$ has the same components as $M$, except:\n  \\[\n    \\delta(q, s) :=~ \\Let{(q', a) := \\delta_M(q,s)}{\\bigl(q', \\map{(\\lambda(w, m).~(w, \\MS{swap}~m))}{a} \\bigr)}\n  \\]\n\\end{definition}\n\nThe correctness and termination proofs are similar to the proofs of $\\MS{Switch}$ and $\\MS{While}$.  The ``lifting'' between configurations of $M$ and\n$\\MS{Mirror}$ is the injective and involutive function $mirrorConf : \\Conf \\to \\Conf$ that simply mirrors the tapes.\n\n\\begin{definition}[Mirror configuration][mirrorConf]\n  \\label{def:mirrorConf}\n  $mirrorConf(q,t) := (q, \\map{\\MS{mirror}}{t}).$\n\\end{definition}\n\n\n\\begin{lemma}[Mirroring steps][mirror_step]\n  \\label{lem:mirror_step}\n  Let $c_1 : \\Conf$.  Then\n  \\[ step_M~(mirrorConf~c_1) = mirrorConf~(step_{\\MS{Mirror}}~c_1) \\]\n\\end{lemma}\n\n\\begin{lemma}[Mirroring executions]\n  \\label{lem:mirror_loop}\n  Let $c_1, c_2 : \\Conf$.\n  \\begin{enumerate}\n  \\item \\label{lem:mirror_lift}\n    \\coqlink[mirror_lift]{$\\MS{Mirror} (c_1) \\terminates^k c_2 \\rightarrow M (mirrorConf~c_1) \\terminates^k (mirrorConf~c_2)$}\n  \\item \\label{lem:mirror_unlift}\n    \\coqlink[mirror_unlift]{$M (mirrorConf~c_1) \\terminates^k (mirrorConf~c_2) \\rightarrow \\MS{Mirror} (c_1) \\terminates^k c_2$}\n  \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n  Claim~\\ref{lem:mirror_lift} follows with Lemma~\\ref{lem:loop_lift} and Lemma~\\ref{lem:mirror_step}.  Claim~\\ref{lem:mirror_unlift} follows with\n  Lemma~\\ref{lem:loop_unlift} and Lemma~\\ref{lem:mirror_step}.\n\\end{proof}\n\n\\begin{lemma}[Correctness of $\\MS{Mirror}$]\n  Let $M \\Realise R$.  Then $\\MS{Mirror}~M \\Realise MirrorRel~R$ with\n  \\[\n    MirrorRel~R := \\lambda t~(l, t').~ R~(\\map{\\MS{mirror}}{t})~(l, \\map{\\MS{mirror}}{t'})\n  \\]\n\\end{lemma}\n\\begin{proof}\n  Follows from Lemma~\\ref{lem:mirror_loop}~(\\ref{lem:mirror_lift}).\n\\end{proof}\n\\begin{lemma}[Running Time of $\\MS{Mirror}$]\n  Let $M \\TerminatesIn T$.  Then $\\MS{Mirror}~M \\TerminatesIn MirrorT~T$ with\n  \\[\n    MirrorT~T := \\lambda t~k.~ T~(\\map{\\MS{mirror}}{t})~k\n  \\]\n\\end{lemma}\n\\begin{proof}\n  Follows from Lemma~\\ref{lem:mirror_loop}~(\\ref{lem:mirror_unlift}).\n\\end{proof}\n\n\\section{Relabelling Operators}\n\\label{sec:labelling-op}\n\\setCoqFilename{ProgrammingTuringMachines.TM.Combinators.Combinators}%\n\nThe operators of the above sections of this Chapter modify the behaviour of the machine.  We can also define simple operators that modify the\nlabelling function $lab : Q_M \\to L$.  This is for example useful if we want a machine to terminate in one particular label.\n\n\\begin{definition}[$\\MS{Relabel}$][Relabel]\n  Let $M:\\TM_\\Sigma^n(L)$ and $g : L \\to L'$.\n  \\[\n    \\MS{Relabel}~M~g := \\left(M, g \\circ lab_M\\right)\n  \\]\n\\end{definition}\n\n\\begin{definition}[$\\MS{Return}$][Return]\n  Let $M:\\TM_\\Sigma^n(L)$ and $l':L'$.\n  \\[\n    \\MS{Return~M~l'} := \\MS{Relabel}~M~(\\lambda \\_.~l')\n  \\]\n\\end{definition}\n\nThe correctness for these simple operators is obvious.  Note that we do not need lemmas for running time, because Definition~\\ref{def:TerminatesIn} of\nrunning time is defined over the bare machine without labelling function.  
So the running time lemmas for $M$ also apply to $\\MS{Relabel}$ and\n$\\MS{Return}$.\n\\begin{lemma}[Correctness of $\\MS{Relabel}$ and $\\MS{Return}$][Relabel_Realise]\n  If $M \\Realise R$, then\n  \\begin{alignat*}{1}\n    \\MS{Relabel}~M~g & \\Realise \\bigcup_{l:L} \\left( \\left(R \\at l\\right) \\att {g(l)} \\right) \\\\\n    \\MS{Return}~M~l'         & \\Realise \\Bigl( \\bigcup_{l:L} R \\at {l} \\Bigr) \\att {l'}\n  \\end{alignat*}\n\\end{lemma}\n\n\n%%% Local Variables:\n%%% TeX-master: \"thesis\"\n%%% End:\n", "meta": {"hexsha": "e3d8c6d83b0c55e1379020054f61840ffc1deaa4", "size": 37370, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/thesis/Combining.tex", "max_stars_repo_name": "mwuttke97/CoqTM", "max_stars_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-08-30T14:58:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-27T15:44:28.000Z", "max_issues_repo_path": "tex/thesis/Combining.tex", "max_issues_repo_name": "mwuttke97/CoqTM", "max_issues_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-10T09:16:49.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-10T09:16:49.000Z", "max_forks_repo_path": "tex/thesis/Combining.tex", "max_forks_repo_name": "mwuttke97/CoqTM", "max_forks_repo_head_hexsha": "f4d2aab2008e2158e2c7ca88ebb53b42808a0778", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-04-09T19:01:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-29T15:39:53.000Z", "avg_line_length": 51.6874135546, "max_line_length": 150, "alphanum_fraction": 0.6812416377, "num_tokens": 12947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933271118221, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5706115814724523}}
{"text": "\n\\subsection{Representing finite groups}\n\nFinite groups can all be represented with square matrices.\n\n", "meta": {"hexsha": "8322ffec1d1446856593953f3193d244bacadda6", "size": 102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/endomorphisms/04-03-finite.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/endomorphisms/04-03-finite.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/endomorphisms/04-03-finite.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.0, "max_line_length": 58, "alphanum_fraction": 0.8137254902, "num_tokens": 19, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8198933271118221, "lm_q2_score": 0.6959583124210896, "lm_q1q2_score": 0.5706115763020561}}
{"text": "\\chapter{The Seifert-Van Kampen theorem}\n\n\\begin{thm}\nSuppose\n\\begin{equation*}\n\\begin{tikzcd}\nS \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & B \\arrow[d,\"j\"] \\\\\nA \\arrow[r,swap,\"i\"] & X\n\\end{tikzcd}\n\\end{equation*}\nis a cocartesian square, in which $S$, $A$, and $B$ are assumed to be $0$-connected. Then \n\\begin{equation*}\n\\begin{tikzcd}\n\\pi_1(S) \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & \\pi_1(B) \\arrow[d,\"j\"] \\\\\n\\pi_1(A) \\arrow[r,swap,\"i\"] & \\pi_1(X)\n\\end{tikzcd}\n\\end{equation*}\nis a pushout square of groups.\n\\end{thm}\n", "meta": {"hexsha": "4ac655f9f746daf59b6ebf8fe682d39b3c15691e", "size": 508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/van_kampen.tex", "max_stars_repo_name": "tadejpetric/HoTT-Intro", "max_stars_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 333, "max_stars_repo_stars_event_min_datetime": "2018-09-26T08:33:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T23:50:15.000Z", "max_issues_repo_path": "Book/van_kampen.tex", "max_issues_repo_name": "tadejpetric/HoTT-Intro", "max_issues_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-06-18T04:16:04.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-16T15:27:01.000Z", "max_forks_repo_path": "Book/van_kampen.tex", "max_forks_repo_name": "tadejpetric/HoTT-Intro", "max_forks_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2018-09-26T09:08:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-16T00:33:50.000Z", "avg_line_length": 25.4, "max_line_length": 90, "alphanum_fraction": 0.6358267717, "num_tokens": 218, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8807970654616711, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5705787693835188}}
{"text": "\\documentclass[a4paper,11pt,article]{memoir}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{fourier}\n\\usepackage{amsmath,amssymb}\n\\usepackage{xstring,ifthen,xcolor}\n\\usepackage{xspace}\n\n\\usepackage{tikz}\n\\usepackage{pgfmath}\n\\usetikzlibrary{calc,shapes,positioning}\n\n\\title{Mini-Mapper 8:\\,Photoencoder signals}\n\\author{Ian~Ross}\n\n\\newdimen\\holecircleradius{}\n\\pgfmathsetlength{\\holecircleradius}{10mm}\n\n\\newdimen\\offsetangle{}\n\n\\newcommand{\\fulldisk}[1]{%\n\\pgfmathsetlength{\\offsetangle}{#1}\n\\begin{tikzpicture}[scale=1.75]\n  \\fill[fill=red]\n  (-0.25mm,\\the\\holecircleradius-1mm) --\n  (-0.25mm,\\the\\holecircleradius+1mm) --\n  (0.25mm,\\the\\holecircleradius+1mm) --\n  (0.25mm,\\the\\holecircleradius-1mm) --\n  (-0.25mm,\\the\\holecircleradius-1mm);\n\n  \\draw (0,0) -- (0,\\the\\holecircleradius+1.5mm);\n  \\draw (0,0) -- (90+\\the\\offsetangle:\\the\\holecircleradius+1.5mm);\n\n  \\draw[dashed] (0,0) circle [radius=\\the\\holecircleradius];\n\n  \\foreach \\angle in {0,1,...,31}\n    \\draw (360/32*\\angle+90+\\the\\offsetangle:\\the\\holecircleradius) circle [radius=0.5mm];\n\\end{tikzpicture}}\n\n\\graphicspath{{figs/}}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Introduction}\n\nThe idea here is to calculate the form of signal that the\nphototransistor in the photointerrupter on the motor encoder disk will\ngenerate as the encoder disk rotates. This isn't strictly necessary,\nbut it's mildly interesting, and a fun geometrical exercise.\nFigure~\\ref{fig:setup} shows the basic setup, with the ring of holes\nin the encoder disk rotating past the beam of the photointerrupter.\n\nThe important factor for the purposes of determining the current\nsignal generated by the phototransistor in the photointerrupter is the\narea of overlap between the hole in the encoder disk and the beam of\ninfrared radiation emitted by the photodiode in the photointerrupter.\n\nThe following parameters are used in the analysis below:\n\\begin{itemize}\n  \\item{$d$: the diameter of a single hole in the encoder disk;}\n  \\item{$D$: the diameter of the circle of centres of the holes in the\n    encoder disk;}\n  \\item{$w$ and $h$: the width and height of the beam from the\n    photodiode in the photointerrupter --- for the photointerrupters\n    I've been looking at, the beam aperture is rectangular, with the\n    narrower dimension horizontal.}\n  \\item{$\\phi$: the angle (anticlockwise positive) between the line\n    between the centre of rotation of the encoder disk and the center\n    of the photointerrupter beam, and the line betwen the centre of\n    rotation of the encoder disk and the centre of the hole under\n    consideration.}\n\\end{itemize}\n\nWe make a few assumptions:\n\\begin{enumerate}\n  \\item{$d \\leq h$ and $d \\geq w$: these assumptions give the\n    geometrical relations between the optical beam and the hole as\n    shown in Figure~\\ref{fig:setup}, i.e., the outline of the hole has\n    either zero, two or four intersections with the outline of the\n    optical beam, with all intersections occurring on the vertical\n    sides of the beam.}\n  \\item{The gaps between the holes in the encoder disk are large\n    enough that only one hole can intersect with the optical beam at a\n    time.}\n\\end{enumerate}\n\n\\begin{figure}\n  \\begin{center}\n    \\fulldisk{0}\\hspace{0.5cm}\\fulldisk{1}\\hspace{0.5cm}\\fulldisk{2}\n  \\end{center}\n\n  \\begin{center}\n    \\fulldisk{3}\\hspace{0.5cm}\\fulldisk{4}\\hspace{0.5cm}\\fulldisk{5}\n  
\\end{center}\n\n  \\caption{Photointerrupter beam (red, $w = 0.5\\,\\mathrm{mm} \\times h\n    = 2\\,\\mathrm{mm}$) interacting with holes in the motor encoder disk.\n    Dashed circle shows line of centres of encoder disk holes\n    (diameter $D = 20\\,\\mathrm{mm}$), with each hole being of diameter\n    $d = 1\\,\\mathrm{mm}$. The images here show the positions of the\n    encoder disk $1^\\circ$ apart, starting with a hole centrally\n    located in front of the photointerrupter beam ($\\phi = 0$) and\n    showing positions of $\\phi = 1^\\circ, 2^\\circ, 3^\\circ, 4^\\circ,\n    5^\\circ$.}\\label{fig:setup}\n\\end{figure}\n\n\\section*{Intersections of holes with optical aperture}\n\nLet's think about the overlap of a single hole with the aperture of\nthe photointerrupter. We'll consider only $\\phi \\geq 0$ (the situation\nfor negative $\\phi$ is symmetrical). Writing $R = D / 2$ for the\nradius of the circle of centres, and $r = d / 2$ for the radius of the\nholes, the centre of the hole we're looking at is at $(x_c, y_c)$:\n\\begin{equation*}\n  x_c = -R \\sin \\phi, \\; y_c = R \\cos \\phi\n\\end{equation*}\nand its equation is\n\\begin{equation*}\n  {(x - x_c)}^2 + {(y - y_c)}^2 = r^2\n\\end{equation*}\nor\n\\begin{equation*}\n  {(x + R \\sin \\phi)}^2 + {(y - R \\cos \\phi)}^2 = r^2\n\\end{equation*}\nPoints of intersection of this circle with the outline of the optical\nbeam are given by solving this equation for $y$ when $x = \\pm w/2$:\n\\begin{equation*}\n  R^2 \\sin^2 \\phi \\pm w R \\sin \\phi + w^2/4 + y^2 - 2 R y \\cos \\phi +\n  R^2 \\cos^2 \\phi - r^2 = 0\n\\end{equation*}\nor, simplifying\n\\begin{equation}\n  \\label{eq:intersection}\n  y^2 - 2 R y \\cos \\phi + R^2 - r^2 + w^2/4 \\pm w R \\sin \\phi = 0.\n\\end{equation}\nThe discriminant of this equation is\n\\begin{equation}\n  \\label{eq:discriminant}\n  \\Delta_{\\pm} = 4 R^2 \\cos^2 \\phi \\mp 4 w R \\sin \\phi - 4 (R^2 - r^2)\n  - w^2.\n\\end{equation}\nFigure~\\ref{fig:discriminant} shows values of $\\Delta_{\\pm}$\nfrom~\\eqref{eq:discriminant} as a function of $\\phi$. When the hole\nextends right across the width of the optical beam aperture, there are\nfour intersections between the circle defining the outline of the hole\nand the outline of the aperture. In this case, $\\Delta_{+} \\geq 0,\n\\Delta_{-} \\geq 0$. When the outline of the hole intersects with only\none vertical side of the optical beam aperture, then there are two\nintersections between the circle and the outline of the aperture and\n$\\Delta_{+} < 0, \\Delta_{-} \\geq 0$. When there are no intersections\nbetween the circle and the outline of the aperture then both\ndiscriminants are negative.\n\nThe limits, in terms of the angular displacement $\\phi$, of the ranges\nin which solutions exist are given by $\\Delta_{\\pm} = 0$.\nFrom~\\eqref{eq:discriminant}, this gives\n\\begin{equation*}\n  4 R^2 \\sin^2 \\phi \\pm 4 w R \\sin \\phi - 4 r^2 + w^2 = 0\n\\end{equation*}\nor\n\\begin{align*}\n  \\sin \\phi &= \\frac{\\mp 4 w R \\pm \\sqrt{16 w^2 R^2 - 16 R^2 (w^2 - 4 r^2)}}{8 R^2} \\\\\n  \\sin \\phi &= \\frac{\\mp w \\pm \\sqrt{w^2 - (w^2 - 4 r^2)}}{2 R} \\\\\n  \\sin \\phi &= \\frac{\\mp w \\pm 2 r}{2 R}\n\\end{align*}\nthat is,\n\\begin{equation*}\n  \\sin \\phi = \\frac{\\mp w/2 \\pm r}{R}.\n\\end{equation*}\nThe four limiting values for $\\phi$ derived from this equation provide\nthe limits of the two intersection and four intersection cases in the\npositive and negative $\\phi$ directions. 
Putting $w =\n0.5\\,\\mathrm{mm}$, $r = 0.5\\,\\mathrm{mm}$ and $R = 10\\,\\mathrm{mm}$,\nwe get limits of $|\\phi| \\leq 4.30^\\circ$ for the two intersections\ncase, and $|\\phi| \\leq 1.43^\\circ$ for the four intersections case,\nwhich matches with the limits seen in Figure~\\ref{fig:discriminant}.\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{intersection-discriminant}\n  \\end{center}\n\n  \\caption{Discriminant for hole/beam intersection equation as a\n    function of angular displacement $\\phi$. The boundary of the ``two\n    intersections'' region is $\\Delta_{-}(\\phi)$, while that of the\n    ``four intersections'' region is $\\Delta_{+}(\\phi)$.}\\label{fig:discriminant}\n\\end{figure}\n\nSolutions to the positive and negative versions\nof~\\eqref{eq:intersection} exist when $\\Delta_{\\pm} \\geq 0$ and are\ngiven by:\n\\begin{equation}\n  \\label{eq:solutions}\n  y_{\\color{black}\\pm}^{\\color{red}\\pm} = R \\cos \\phi {\\color{red}\\pm} \\frac12\n  \\sqrt{\\Delta_{\\pm}} = y_c {\\color{red}\\pm} \\frac12\n  \\sqrt{\\Delta_{\\pm}}\n\\end{equation}\n(The sign of the discriminant $\\Delta_{\\pm}$ matches the sign on the\nleft hand side. The other plus-or-minus sign is for the usual pair of\nsolutions to a quadratic equation, and corresponds to the two members\nof the pairs of intersection points.) Figure~\\ref{fig:solutions} shows\nthese solutions as a function of $\\phi$.\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{intersection-solutions}\n  \\end{center}\n\n  \\caption{Solutions for hole/beam intersection equation as a function\n    of angular displacement $\\phi$. The $y_{-}$ values show\n    intersections with the left-hand edge of the optical aperture\n    (remember positive $\\phi$ means movement of the hole to the left\n    from top dead centre), and $y_{+}$ values are for intersection\n    with the right-hand edge of the aperture. The ``(pos)'' and\n    ``(neg)'' labels indicate the two signs in~\\eqref{eq:solutions}\n    for the two solutions to each quadratic\n    equation.}\\label{fig:solutions}\n\\end{figure}\n\n\\section*{Hole/aperture overlap area}\n\nGiven the intersection information from~\\eqref{eq:solutions}, we now\nneed to calculate the area of the overlap between the encoder disk\nhole and the photointerrupter optical aperture. There are three cases\nto consider, two requiring calculation and one trivial:\n\\begin{enumerate}\n  \\item{There are two intersections between the outline of the hole\n    and the optical aperture. In this case, the overlap between the\n    hole and aperture is formed by a line segment of the vertical side\n    of the aperture and a segment of the circle forming the outline of\n    the hole --- this segment lies inside the optical aperture and has\n    endpoints given by the intersection points between the hole and\n    the vertical side of the aperture (Figure~\\ref{fig:overlaps}a).}\n  \\item{There are four intersections between the outline of the hole\n    and the optical aperture. In this case, the overlap is the encoder\n    disk hole minus two segments of the circle forming the outline of\n    the hole --- these segments lie outside the optical aperture and\n    have endpoints given by the intersection points between the hole\n    and the vertical sides of the aperture\n    (Figure~\\ref{fig:overlaps}b).}\n  \\item{There are no intersections between the outline of the encoder\n    disk hole and the optical aperture. 
In this trivial case, the\n    overlap area is identically zero.}\n\\end{enumerate}\nIn each of the two cases requiring calculation, the calculation can be\nperformed on the basis of an expression for the area of a segment of a\ncircle. For a circle with radius $R$, the area $A$ of a segment\nsubtending an angle $\\theta$ at the centre of the circle is\n\\begin{equation}\n  \\label{eq:segment}\n  A = \\frac{R^2}{2} (\\theta - \\sin \\theta).\n\\end{equation}\n\n\\begin{figure}\n  \\begin{center}\n    \\begin{tabular}{cc}\n    \\includegraphics[height=0.6\\textheight]{overlap-2} &\n    \\includegraphics[height=0.6\\textheight]{overlap-4} \\\\\n    (a) Two intersections & (b) Four intersections\n    \\end{tabular}\n  \\end{center}\n\n  \\caption{Overlap cases: (a) two intersections, so overlap is a\n    segment of a circle; (b) four intersections, so overlap is a\n    circle minus two segments.}\\label{fig:overlaps}\n\\end{figure}\n\n\\subsection*{Case 1: two intersections}\n\nFigure~\\ref{fig:overlaps}a shows this case. The intersection points of\nthe outline of the encoder disk hole and the optical aperture are\n$(-w/2, y_{-}^{+})$ and $(-w/2, y_{-}^{-})$. We know the radius of the\nencoder disk hole, so in order to apply~\\eqref{eq:segment} to\ncalculate the area of overlap, we need to find the angle $\\theta$. To\ndo this, we construct vectors $\\mathbf{v}_{+}$ and $\\mathbf{v}_{-}$\nbetween the centre of the encoder disk hole and the intersection\npoints:\n\\begin{equation}\n  \\mathbf{v}_{+} = \\begin{pmatrix}\n    -w/2 - x_c \\\\\n    y_{-}^{+} - y_c\n  \\end{pmatrix}, \\;\\;\\;\\;\\;\\;\n  \\mathbf{v}_{-} = \\begin{pmatrix}\n    -w/2 - x_c \\\\\n    y_{-}^{-} - y_c\n  \\end{pmatrix},\n\\end{equation}\nand then\n\\begin{equation*}\n  \\cos \\theta = \\hat{\\mathbf{v}}_{+} \\cdot \\hat{\\mathbf{v}}_{-}.\n\\end{equation*}\n(Here $\\hat{\\mathbf{v}}$ is the unit vector in the direction of vector\n$\\mathbf{v}$.) 
First we calculate $\\mathbf{v}_{+} \\cdot\n\\mathbf{v}_{-}$, making use of the fact that $y_{-}^{\\pm} - y_c = \\pm\n\\frac12 \\sqrt{\\Delta_{-}}$ (from~\\eqref{eq:solutions}):\n\\begin{equation}\n  \\mathbf{v}_{+} \\cdot \\mathbf{v}_{-} = {\\left(\\frac{w}{2} +\n    x_c\\right)}^2 + (y_{-}^{+} - y_c) (y_{-}^{-} - y_c) =\n         {\\left(\\frac{w}{2} + x_c\\right)}^2 - \\frac{\\Delta_{-}}{4}.\n\\end{equation}\nNow, we also find that\n\\begin{align*}\n  |\\mathbf{v}_{\\pm}|^2 = {\\left(\\frac{w}{2} + x_c\\right)}^2 + {(y_{-}^{\\pm} -\n  y_c)}^2 = {\\left(\\frac{w}{2} + x_c\\right)}^2 + \\frac{\\Delta_{-}}{4},\n\\end{align*}\nso that\n\\begin{equation*}\n  |\\mathbf{v}_{+}| |\\mathbf{v}_{-}| = {\\left(\\frac{w}{2} +\n    x_c\\right)}^2 + \\frac{\\Delta_{-}}{4}\n\\end{equation*}\nand we finally arrive at\n\\begin{equation*}\n  \\cos \\theta = \\frac{\\mathbf{v}_{+} \\cdot\n    \\mathbf{v}_{-}}{|\\mathbf{v}_{+}| |\\mathbf{v}_{-}|} =\n  \\frac{{\\left(\\frac{w}{2} + x_c\\right)}^2 -\n    \\frac{\\Delta_{-}}{4}}{{\\left(\\frac{w}{2} + x_c\\right)}^2 +\n    \\frac{\\Delta_{-}}{4}}\n\\end{equation*}\nwhich gives an area for the two intersection case of\n\\begin{equation*}\n  A_2 = \\frac{r^2}{2} (\\theta - \\sin \\theta)\n\\end{equation*}\n\n\\subsection*{Case 2: four intersections}\n\nThe four intersections case (Figure~\\ref{fig:overlaps}b) is similar,\nexcept that we have two angles, $\\theta_{-}$ and $\\theta_{+}$, one on\neach side of the optical aperture, and the area of interest is the\ntotal area of the encoder disk hole minus the area of the two segments\nsubtended by $\\theta_{-}$ and $\\theta_{+}$.\n\nThe left-hand side of Figure~\\ref{fig:overlaps}b can be treated the\nsame way as the two intersections case, with vectors $\\mathbf{w}_{\\pm}$\nconstructed from the left-edge intersection points just as the\n$\\mathbf{v}_{\\pm}$ were above:\n\\begin{equation}\n  \\cos \\theta_{-} = \\frac{\\mathbf{w}_{+} \\cdot\n    \\mathbf{w}_{-}}{|\\mathbf{w}_{+}| |\\mathbf{w}_{-}|} =\n  \\frac{{\\left(\\frac{w}{2} + x_c\\right)}^2 -\n    \\frac{\\Delta_{-}}{4}}{{\\left(\\frac{w}{2} + x_c\\right)}^2 +\n    \\frac{\\Delta_{-}}{4}}.\n\\end{equation}\nThe right-hand side goes the same way, with vectors\n$\\mathbf{v}_{+}$ and $\\mathbf{v}_{-}$ redefined as:\n\\begin{equation}\n  \\mathbf{v}_{+} = \\begin{pmatrix}\n    w/2 - x_c \\\\\n    y_{+}^{+} - y_c\n  \\end{pmatrix}, \\;\\;\\;\\;\\;\\;\n  \\mathbf{v}_{-} = \\begin{pmatrix}\n    w/2 - x_c \\\\\n    y_{+}^{-} - y_c\n  \\end{pmatrix},\n\\end{equation}\nso that\n\\begin{equation}\n  \\mathbf{v}_{+} \\cdot \\mathbf{v}_{-} = {\\left(\\frac{w}{2} -\n    x_c\\right)}^2 + (y_{+}^{+} - y_c) (y_{+}^{-} - y_c) =\n         {\\left(\\frac{w}{2} - x_c\\right)}^2 - \\frac{\\Delta_{+}}{4}\n\\end{equation}\nand\n\\begin{align*}\n  |\\mathbf{v}_{\\pm}|^2 = {\\left(\\frac{w}{2} - x_c\\right)}^2 + {(y_{+}^{\\pm} -\n  y_c)}^2 = {\\left(\\frac{w}{2} - x_c\\right)}^2 + \\frac{\\Delta_{+}}{4},\n\\end{align*}\nso that\n\\begin{equation*}\n  |\\mathbf{v}_{+}| |\\mathbf{v}_{-}| = {\\left(\\frac{w}{2} -\n    x_c\\right)}^2 + \\frac{\\Delta_{+}}{4}\n\\end{equation*}\nand we arrive at\n\\begin{equation*}\n  \\cos \\theta_{+} = \\frac{\\mathbf{v}_{+} \\cdot\n    \\mathbf{v}_{-}}{|\\mathbf{v}_{+}| |\\mathbf{v}_{-}|} =\n  \\frac{{\\left(\\frac{w}{2} - x_c\\right)}^2 -\n    \\frac{\\Delta_{+}}{4}}{{\\left(\\frac{w}{2} - x_c\\right)}^2 +\n    \\frac{\\Delta_{+}}{4}}.\n\\end{equation*}\nThe total area in the four intersections case is then:\n\\begin{equation*}\n  A_4 = \\pi r^2 - \\frac{r^2}{2} (\\theta_{+} - \\sin \\theta_{+}) -\n  \\frac{r^2}{2} (\\theta_{-} - \\sin 
\\theta_{-}).\n\\end{equation*}\n\n\\section*{Final results}\n\nThe functional dependence of the overlap area between an encoder disk\nhole and the optical beam of the interrupter is shown in\nFigure~\\ref{fig:overlap-areas}, representing the overlap as the\nfraction of the photointerrupter optical beam covered by the encoder\ndisk hole. The assumption is that the current generated by the\nphototransistor in the photointerrupter will be proportional to this\nfractional overlap.\n\nWe can now use these results to generate simulated photocurrent\nsignals for use in simulations.\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{overlap-areas}\n  \\end{center}\n\n  \\caption{Dependence of fraction of area of photointerrupter optical\n    beam covered by encoder disk hole as a function of angular\n    displacement $\\phi$, based on $w = 0.5\\,\\mathrm{mm}$, $r =\n    0.5\\,\\mathrm{mm}$ and $R = 10\\,\\mathrm{mm}$.}\\label{fig:overlap-areas}\n\\end{figure}\n\n\\section*{Generation of simulated signals}\n\nBased on the results so far, we want to be able to generate PWL\n(piecewise linear) data files for use in LTSpice simulations. We'll do\nthis with a Python script. Basically, the script just duplicates the\ncalculations performed above, and produces a time series of simulated\nphototransistor current values, based on a time profile of wheel\nvelocity. We assume that the phototransistor current is proportional\nto the fraction of the optical aperture that is overlapped by the\nmotor encoder disk hole.\n\nThe calculations go something like this, stepping the simulation time\nby a constant timestep (I've been using $\\Delta t = 0.1\\,\\mathrm{ms}$);\na minimal sketch of the core overlap computation follows at the end of\nthis document:\n\\begin{enumerate}\n  \\item{Determine the current linear velocity from the current\n    simulation time. The example I've been using starts from rest,\n    accelerates to a constant speed, has a constant velocity segment,\n    then decelerates back to rest. 
Acceleration and deceleration are\n    both eased using a parabolic easing function.}\n  \\item{Step the current distance, wheel angle and encoder disk angle\n    using simple Euler integration.}\n  \\item{Normalise the encoder disk angle into the range $|\\phi| \\leq\n    \\Delta \\phi / 2$, where $\\Delta \\phi$ is the angular separation\n    between adjacent holes in the encoder disk.}\n  \\item{We can then apply the calculations in the first part of this\n    document to the angle $\\phi$ to get the area overlap fraction\n    between the optical aperture and the hole in the encoder disk.}\n  \\item{Multiply the area overlap fraction by the maximum light\n    current generated by the phototransistor (I'm using\n    $I_L(\\mathrm{max}) = 2\\,\\mathrm{mA}$, which is based on the\n    ``typical'' value for the Vishay TCST1230 photointerrupter I'm\n    planning to use.)}\n\\end{enumerate}\nThe results of this procedure are shown in\nFigure~\\ref{fig:simulation}.\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{simulation-1}\n  \\end{center}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{simulation-2}\n  \\end{center}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{simulation-3}\n  \\end{center}\n  \\begin{center}\n    \\includegraphics[width=\\textwidth]{simulation-4}\n  \\end{center}\n\n  \\caption{Simulated phototransistor current based on using a Vishay\n    TCST1230 photointerrupter, an encoder disk with 32 holes of 1\\,mm\n    diameter on a 20\\,mm diameter circle, and a linear velocity\n    profile starting from rest, increasing to 50\\,mm/s in 3 seconds,\n    travelling at constant speed for 2 seconds, then decelerating back\n    to rest in 3 seconds.}\\label{fig:simulation}\n\\end{figure}\n\n\n\\end{document}\n", "meta": {"hexsha": "a3d6e6420794cfdf2a790e1a7d9fe380c0f5e308", "size": 18712, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "prototypes/motor-board/doc/photoencoder-signal/photoencoder-signal.tex", "max_stars_repo_name": "ian-ross/mini-mapper", "max_stars_repo_head_hexsha": "797db70b9b619a0bdd4f1bf0def101dcfaa57452", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "prototypes/motor-board/doc/photoencoder-signal/photoencoder-signal.tex", "max_issues_repo_name": "ian-ross/mini-mapper", "max_issues_repo_head_hexsha": "797db70b9b619a0bdd4f1bf0def101dcfaa57452", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prototypes/motor-board/doc/photoencoder-signal/photoencoder-signal.tex", "max_forks_repo_name": "ian-ross/mini-mapper", "max_forks_repo_head_hexsha": "797db70b9b619a0bdd4f1bf0def101dcfaa57452", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.8976545842, "max_line_length": 90, "alphanum_fraction": 0.6937793929, "num_tokens": 5960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.570559891603594}}
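To make the overlap step concrete, here is a minimal Python sketch of the overlap-fraction computation (step 4 above), transcribing the $\Delta_{\pm}$, $\theta$, $A_2$ and $A_4$ formulas from the document. The function name and the defaults (the document's example values, in mm) are our own choices, not the author's script:
\begin{verbatim}
import numpy as np

def overlap_fraction(phi, w=0.5, h=2.0, r=0.5, R=10.0):
    """Fraction of the w x h aperture covered by a hole at angle phi (radians)."""
    phi = abs(phi)                        # geometry is symmetric in phi
    xc = -R * np.sin(phi)                 # x-coordinate of the hole centre
    # Discriminants for the left (x = -w/2) and right (x = +w/2) edges.
    base = 4 * R**2 * np.cos(phi)**2 - 4 * (R**2 - r**2) - w**2
    d_minus = base + 4 * w * R * np.sin(phi)
    d_plus = base - 4 * w * R * np.sin(phi)

    def segment_angle(a, delta):
        # cos(theta) = (a^2 - delta/4) / (a^2 + delta/4)
        c = (a**2 - delta / 4) / (a**2 + delta / 4)
        return np.arccos(np.clip(c, -1.0, 1.0))

    if d_minus < 0:                       # no intersections
        return 0.0
    if d_plus < 0:                        # two intersections: one segment
        th = segment_angle(w / 2 + xc, d_minus)
        area = 0.5 * r**2 * (th - np.sin(th))
    else:                                 # four intersections: disc minus two segments
        thm = segment_angle(w / 2 + xc, d_minus)
        thp = segment_angle(w / 2 - xc, d_plus)
        area = (np.pi * r**2
                - 0.5 * r**2 * (thm - np.sin(thm))
                - 0.5 * r**2 * (thp - np.sin(thp)))
    return area / (w * h)

# phi = 1 degree lies inside the four-intersection region (|phi| <= 1.43 deg).
print(overlap_fraction(np.radians(1.0)))
\end{verbatim}
Sweeping $\phi$ according to the velocity profile and multiplying by $I_L(\mathrm{max})$ then yields the simulated current traces described in steps 1--5.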
{"text": "\\section{Formulation\\label{sec:formulation}}\n\\subsection{Problem Setup}\n\nWe restrict our focus to interior\n%\\note[DZ]{the pictures usually show an exterior problem?}\n%\\note[MJM]{added a comment that formulation applies to both problems}\nDirichlet boundary value problems of the form\n\\begin{align}\n  L u(\\vx) =  0,& \\quad \\vx \\in \\Omega,\\\\\n  u(\\vx) =  f(\\vx),& \\quad \\vx \\in \\partial\\Omega = \\Gamma,\n  \\label{eq:pde}\n\\end{align}\nwith multiply- or singly-connected domain $\\Omega$ of arbitrary genus. \nOur approach applies directly to standard integral equation formulations of exterior Dirichlet and Neumann problems; we include results for an exterior Dirichlet problem in \\cref{sec:results-torii}.\nHere $L$ is a linear elliptic operator and $f$ is at least $C^k$.\nWhile our method can be applied to any non-oscillatory elliptic \\pde, we use the following equations in our examples: \n\\begin{equation}\n  Lu =\\begin{cases}\n    \\Delta u & \\text{Laplace}\\\\\n    \\Delta u  - \\nabla p, \\quad \\nabla \\cdot u = 0 & \\text{Stokes}\\\\\n    \\Delta u  + \\frac{1}{1-2\\nu}\\nabla \\nabla\\cdot u & \\text{Navier (linear elasticity)}\\\\\n  \\end{cases}\n  \\label{eq:pdes}\n\\end{equation}\n\nWe follow the approach of \\cite{YBZ}. \nWe can express  the solution at a point $\\vx \\in \\Omega$ in terms of the double-layer potential\n\\begin{equation}\n  u(\\vx) = D[\\phi](\\vx) = \\int_\\Gamma \\frac{\\partial G(\\vx,\\vy)}{\\partial \\vn(\\vy)} \\phi(\\vy) d\\vy_\\Gamma,\n  \\label{eq:double_layer}\n\\end{equation}\nwhere $G(\\vx,\\vy)$ is the \\textit{fundamental solution} or \\textit{kernel} of \\cref{eq:pde}, $\\vn(\\vy)$ is the normal at $\\vy$ on $\\Gamma$ pointing into the exterior of $\\Omega$, and $\\phi$ is an unknown function, or \\textit{density}, defined on $\\Gamma$.\nWe list the kernels associated with the \\abbrev{PDE}s\\xspace in \\cref{eq:pdes} in \\cite[Section 1]{morse2020bsupplementary}.\nUsing the jump relations for the interior and exterior limits of $u(\\vx)$ as $\\vx$ tends towards $\\Gamma$ \\cite{K,mikhlin2014integral,pozrikidis1992boundary,parton1982integral}, we know that \\cref{eq:double_layer} is a solution to \\cref{eq:pde} if $\\phi$ satisfies \n\\begin{equation}\n  \\left(\\frac{1}{2}I + D + M\\right)[\\phi](\\vx) = f(\\vx), \\vx \\in \\Gamma\n  \\label{eq:int-eq}\n\\end{equation}\nwith identity operator $I$. \nWe will refer to $\\phi$ as the \\textit{density} and $u(\\vx)$ as the \\textit{potential} at $\\vx$.\nThe double-layer integrals in this equation are \\textit{singular}, due to the singularity in the integrand of \\cref{eq:double_layer}. \nAdditionally, as $\\vx$ approaches $\\Gamma$, \\cref{eq:double_layer} becomes a \\textit{nearly singular} integral.\n\nThe operator $M$ completes the rank of $\\frac{1}{2}I + D$ to ensure invertibility of \\cref{eq:int-eq}. \nIf $\\frac{1}{2}I + D$ is full-rank, $M = 0$.\nWhen $\\frac{1}{2}I + D$ has a non-trivial null space, $M$ accounts for the additional constraints to complete the rank of the left-hand side of \\cref{eq:int-eq}.\nFor example, for the exterior Laplace problem on $\\ell$ multiply-connected domains, the null space of $\\frac{1}{2}I + D$ has dimension $\\ell$ \\cite{ST}.\nThe full set of cases for each kernel is considered in this work and their corresponding values of $M$ have been detailed in \\cite{YBZ}. 
\n\n%We assume that $f(\\vx)$ is $C^k$ on $\\Gamma$, since the convergence order of our integration scheme depends on smoothness of $f$.\n%This implies that the convergence order is at most $k$ (see \\cref{sec:quad_error}).\n\n\n\\subsection{Geometry representation \\label{sec:geom-def}}\n\n\\begin{figure}[!htb]\n  \\centering\n  %\\setlength\\figureheight{1.9in}\n  %\\setlength\\figurewidth{2.1in}\n  \\begin{minipage}{\\textwidth}\n      \\includegraphics[width=\\linewidth]{figs/quadrisection.pdf}\n  \\end{minipage}\\hfill\n  \\mcaption{fig:geometry-representation}{Patch Quadrisection}{\n      Right: the standard domain $\\mathcal{I}^2$ of a single surface or quadrature patch.\n    Middle: a collection of subdomains $\\mathcal{D}_i$ of $E_r$, produced by quadrisection. \n    Each $\\mathcal{D}_i$ corresponds to a map $\\eta_i$ such that $\\mathcal{D}_i = \\eta_i(\\mathcal{I}^2)$; a single $\\mathcal{D}_i$ is highlighted in bold.\n    Left: the image of $E_r$ under the patch $\\gamma_r$. \n    The final image of each subdomain is outlined, with the image of $\\mathcal{D}_i$ in bold.\n  }\n\\end{figure}\nWe assume that the smooth domain boundary $\\Gamma$ is given by a \\textit{quadrilateral mesh} consisting of quadrilateral faces $Q_r$, referred to as \\textit{quads}.\nEach quad is associated with a parametric domain $\\I^2 =  [-1,1]^2 = E_r$, along with embeddings $\\gamma_r : E_r \\to \\mathbb{R}^3$ for each quad such that $Q_r = \\gamma_r(E_r)$.  \nWe assume that the quad mesh is \\textit{conforming}, i.e., two non-disjoint faces either share a whole edge or a single vertex; examples of this are shown in \\Cref{fig:greens-id-test-cases,fig:solver-conv-test-cases}.\nWe assume that no two images $\\gamma_r(E_r)$ intersect, except along the shared edge or vertex.\nThe surface $\\Gamma$ is the union of patches $\\cup_r \\gamma_r(E_r) = \\cup_r Q_r$.\nWe also assume that $\\Gamma$ is sufficiently smooth to recover the solution of \\cref{eq:pde} up to the boundary \\cite{K} and is at least $C^k$. \n\nTo represent the surface geometry, we approximate $\\Gamma$ with a collection of \\emph{B\\'ezier patches}, given by a linear combination of tensor-product Bernstein polynomials\n\\begin{equation}\n    \\vP_i(s,t) = \\sum_{\\ell =0}^n\\sum_{m =0}^n \\vector{a}^{(i)}_{\\ell m }B_\\ell^n(s)B_m^n(t),\n  \\label{eq:tensor-product}\n\\end{equation}\nwhere  $B_\\ell^n(t) = \\binom{n}{\\ell} t^{n-\\ell}(1-t)^{\\ell}$ for each $\\ell$ are the $n$-th degree Bernstein polynomials, $i$ denotes the index of a patch in the collection and $\\vector{a}_{\\ell m}^{(i)} \\in \\mathbb{R}^3$.\nEach patch $\\vP$ is a vector function from $\\I^2$ to $\\mathbb{R}^3$, so $s,t\\in [-1,1]$.\nWe will refer to this approximation of $\\Gamma$ as $\\hat{\\Gamma}$. % and detail its construction in \\cref{sec:admissible}.\n\\edit{}{This representation is advantageous because the B\\'ezier coefficients provide a direct connection to surface geometry, as shown in \\cref{fig:patch-coeffs-bbox}.}\n%More specifically, we use a \\emph{forest of quad trees of B\\'ezier patches}. \n\nThe domain $E_r$ of each embedding function $\\gamma_r$ is adaptively refined using \\emph{quadrisection}, i.e., splitting a square domain into four square subdomains of equal size. %This yields a quad tree of subdomains for each face of the quad mesh.\nQuadrisection induces a \\textit{quadtree} structure on each $E_r$. 
\nThe root of the quadtree is the original domain $\\I^2$ and each node of the tree is related by a single quadrisection of a subdomain of $E_r$. \nThe leaves of the quadtree form a collection of subdomains $\\mathcal{D}_i$ whose union equals $E_r$, as shown in \\cref{fig:geometry-representation}-middle.\nGiven an indexing scheme of all $\\mathcal{D}_i$'s over all $E_r$'s, we define the function $r(i)$ that maps the leaf node index $i$ to its root node index $r$ in the quadtree forest, indicating that $\\mathcal{D}_i \\subset E_r$. \nFor each $r$, $E_r$ can have a distinct sequence of associated quadrisections and therefore a distinct quadtree structure.\nWe refer to the process of \\textit{refinement} or \\textit{refining a patch $\\vP$} as the construction of such quadtrees for each $E_r$ subject to some set of criteria.\n\nOn each $\\mathcal{D}_i$ at the quadtree leaves, we define a B\\'ezier patch and reparametrize each patch over $\\I^2$ by defining the affine map $\\eta_i: \\I^2 \\to E_{r(i)}$ such that $\\eta_i(\\I^2) = \\mathcal{D}_i \\subseteq E_{r(i)}$.\n%On each $\\mathcal{D}_i$ at the quadtree leaves, we define a \\emph{B\\'ezier patch} given by a linear combination of  tensor-product Bernstein polynomials on $\\mathcal{D}_i$:\n%\\begin{equation}\n%    P(s,t) = \\sum_{\\ell =0}^n\\sum_{m =0}^n \\vector{a}_{\\ell m }B_\\ell^n(s)B_m^n(t),\n%  \\label{eq:tensor-product}\n%\\end{equation}\n%where  $B_k^n(t) = \\binom{n}{k} t^{n-k}(1-t)^{k}$ is $n$-th degree Bernstein polynomials.\nIt follows that the set of subdomains $\\{ \\eta_i(\\I^2) \\,| \\,r(i) = \\kappa\\}$ form a cover of $E_\\kappa$ and $\\{ \\gamma_\\kappa(\\eta_i(\\I^2))\\, | \\,r(i) = \\kappa\\}$ likewise covers $\\gamma_\\kappa(E_\\kappa)$.\nWe summarize this setup in \\Cref{fig:geometry-representation}; examples of surfaces of this form can be seen in \\Cref{fig:greens-id-test-cases,fig:solver-conv-test-cases,fig:torii,fig:vessel}.\n%Since we have made no assumptions on the explicit form of $\\gamma_r$, this approximation stage constructs a consistent surface representation that we can leverge in our algorithms in \\cref{sec:admissible} without sacrificing geometric fidelity.\n%At each step of quadrisection, there is a parent-child relationship formed between the subdomain and its four resulting subdomains. \n%In \\Cref{fig:geometry-representation}-right, we have a single leaf of this quadtree, denoted $D_i$.\n\n%The coefficients or \\emph{control points} $\\vector{a}_{\\ell m}$ in \\cref{eq:tensor-product} are computed to fit the B\\'ezier patches to the input embeddings $\\gamma_r$.\n%Each patch $P_i$ in $\\Pcoarse$ is reparametrized on $\\I^2$ and each domain is associated with a leaf of the forest of quad trees.\n%We refer to the domain of $P_i$ by $D_{i}$, noting that $D_i = \\I^2$. \n%Each such domain $D_i$ corresponds to a subdomain of $E_{r(i)}$, where $r(i)$ is the index of the embedding $\\gamma_{r(i)}$ from which\n%$P_i$ was obtained by refinement.\n%Define the  map $\\eta_i: D_i \\rightarrow E_{r(i)}$, which embeds the domain $D_i$ into the copy of $\\I^2$ corresponding to $\\gamma_{r(i)}$. 
\n%The set of maps $\\{\\eta_i \\mid r(i) = k\\}$ cover $\\I^2$ for each embedding map $\\gamma_k$.\n%We summarize this setup in \\cref{fig:geometry-representation}.\n\n%In the simplest case, the input is already in B\\'ezier form and no additional processing is necessary; the overall accuracy of our method is limited by therefore by the smoothness of the given polynomial surface.\n%In general, $\\Gamma$ can be defined by arbitrary functions. \n%The domain on which the complete approximate surface $\\hat{\\Gamma}$ is defined is the union of $E_{r(i)}$ identified along shared edges. \n%The complete set of B\\'ezier patches defined on these domains is denoted $\\Pcoarse$.\n\n\\subsection{Problem discretization \\label{sec:discretization}}\nWe use two collections of patches in the form described above: $\\Pcoarse$ and $\\Pfine$.\nThe patches in $\\Pcoarse$, called \\emph{surface patches}, determine $\\Gammah$ from $\\Gamma$, and the set of patches $\\Pfine$, called \\emph{quadrature patches}, are obtained by further quadrisection of the surface patches in $\\Pcoarse$.\nThe geometry of $\\Gammah$ is not changed by this additional refinement of $\\Pcoarse$, but the total number of subdomains $E_{r(i)}$ is increased.\nWe will detail the geometric criteria that $\\Pcoarse$ and $\\Pfine$ must satisfy in \\cref{sec:geom_criteria}.\nDiscretizing $\\Gammah$ with a quadrature rule based on $\\Pfine$ results in a denser sampling of $\\Gammah$ than a similar discretization of $\\Pcoarse$.\nWe will refer to $\\Pcoarse$ as the \\textit{coarse discretization} of $\\hat{\\Gamma}$ and $\\Pfine$ as the \\textit{upsampled} or \\textit{fine discretization} of $\\hat{\\Gamma}$.\n\nWe index the patches $\\vP_i \\in \\Pcoarse$ by $i = 1,\\hdots N$; we can then rewrite \\cref{eq:double_layer} as a sum of integrals over surface patches:\n\\begin{equation}\n  u(\\vx) = \\sum_{i=1}^N\\int_{\\vP_i} \\frac{\\partial G(\\vx,\\vy)}{\\partial \\vn(\\vy)} \\phi(\\vy) d\\vy_{\\vP_i}.\n  \\label{eq:double_layer_patches} \n\\end{equation}\n\n%The coefficients $a_{ij}$ of $P(u,v)$, or \\textit{control points}, have geometric relevance: the convex hull of the control points contains $P$, yielding a simple algorithm for computing a bounding box for a patch image.\n\n%The control points of B\\'ezier patches also admit an efficient algorithm to compute control points on subdomains of the domain of $P$ via a \\emph{subdivision} \\cite{F}. This fact is used in \\cref{sec:adaptive_upsampling,sec:mark_near}. Subdivision is used for two purposes: obtain an accurate surface approximation with B\\'ezier patches, and obtain admissible \\emph{quadrature patches} from surface patches, as explained below. \n\n%In the case of surfaces specified by a black-box evaluator on the initial quad mesh,  we fit tensor-product B\\'ezier patches to each $\\gamma_i$, adaptively subdividing each original patch as necessary, until a desired absolute error in function values is achieved. In the former case, the accuracy of the solution is restricted by the accuracy of the given surface approximation. 
\n\n\n%Ideally, each surface patch is defined as in \\cref{eq:tensor-product}; however, we do not assume that this is the case in practice.\n%If surface patches are defined in another tensor-product polynomial basis or splines, one can perform a change of variables to express them in the form of \\cref{eq:tensor-product}.\n\n%An important property of the method presented in this work is that we make minimal assumptions on the input geometry definition and boundary data; the accuracy our method can achieve is ultimately bounded by their smoothness.\n\n\nWe discretize functions defined on $\\Gammah$, such as\n\\cref{eq:double_layer_patches}, at $q$-node composite tensor-product\nClenshaw-Curtis quadrature points on $\\I^2$ of patches in $\\Pcoarse$.  \nWe refer to these points and weights on a single patch $\\vP_i$ as $x_j$ and $w_j^{\\lbl{CC}}$ respectively, for $j = 1\\ldots q^2$. \n%The weights are given by $w_j = \\sqrt{g_j}w_j^{\\lbl{CC}}$, with $g_j$ being the determinant of the metric tensor of $\\vP$ at $x_j$ and $w_j^{\\lbl{CC}}$ is the Clenshaw-Curtis weight at $x_j$.\nThe quadrature point $\\vy_{ij}$ from $\\vP_i$ is defined as $\\vy_{ij} = \\vP_{i}(\\eta_i(x_j))$. \nWe assume that the boundary condition $f$ is given by a black-box evaluator on $\\mathbb{R}^3$ that can be used to obtain values at $\\vy_{ij}$.\nFor clarity, we reindex the surface points by a global index $I = 1, \\hdots, q^2N$.\n%For the remainder of this work, we will suppress the explicit dependence of $P_i$ on the parametrization $\\eta_i$. \\note[MJM]{check that this is needed}\nWe discretize the double layer integral \\cref{eq:double_layer_patches} on $\\Pcoarse$ to approximate the solution $u(\\vx)$:\n%\\begin{equation}\n%  \\left(\\frac{1}{2}I + \\hat{D} + \\hat{M}\\right)[\\phi](\\vy_\\ell) = f(\\vy_\\ell), \\quad \\ell=1,\\hdots N\n%  \\label{eq:int-eq-disc}\n%\\end{equation}\n%with the discretized double layer operator $\\hat{D}[\\phi](\\vy)$ defined as \n\\begin{equation}\n    u(\\vx,\\Pcoarse) \\approx \\hat{u}(\\vx,\\Pcoarse) = \\sum_{i=1}^N\\sum_{j=1}^{q^2} \\frac{\\partial G(\\vx,\\vy_{ij})}{\\partial \\vn(\\vy_{ij})} \\phi_{ij} \\sqrt{g_{ij}}w_j^{\\lbl{CC}} \n    = \\sum_{I=1}^{q^2N}\\frac{\\partial G(\\vx,\\vy_I)}{\\partial \\vn(\\vy_I)} \\phi_I \\hat{w}_I\n  \\label{eq:double_layer_disc}\n  %\\label{eq:double_layer_disc}\n\\end{equation}\nwith $g_{ij}$ being the determinant of the metric tensor of $\\vP_i$ at $x_j$ and $\\hat{w}_{i\\cdot q^2+j} = \\sqrt{g_{ij}}w_{j}^{\\lbl{CC}}$.\nIn other words, $\\hat{u}(\\vx, \\Pcoarse) = \\hat{D}[\\phi](\\vx)$, where $\\hat{D}[\\phi](\\vx) \\approx D[\\phi](\\vx)$.\n\nWe can also discretize functions with tensor-product Clenshaw-Curtis nodes on the domains of patches in $\\Pfine$.\nThe values of functions on $\\Pfine$ are \\emph{interpolated} from their values on the quadrature nodes of $\\Pcoarse$ rather than being computed directly on $\\Pfine$.\nWe call this interpolation from $\\Pcoarse$ to $\\Pfine$ \\textit{upsampling}.\nWe denote the quadrature nodes and weights on $\\Pfine$ by $\\tilde{x}_j$ and $\\tilde{w}_j$ with a similar global index $J$ and refer to them as the \\textit{upsampled} nodes and weights.\nIdentical formulas are used for computing quadrature on $\\Pfine$ with the nodes and weights $\\tilde{x}_j$, $\\tilde{w}_j$ on $\\Pfine$, denoted $u(\\vx,\\Pfine)$ and $\\hat{u}(\\vx,\\Pfine)$, repsectively.\n\nIn the next section, we describe the algorithm to compute an accurate approximation to the singular/near-singular double-layer integral in 
\\cref{eq:double_layer}, using a quadrature rule for smooth functions (\\cref{eq:double_layer_disc}) as a building block. \nThis algorithm allows us to compute the matrix-vector products $A\\phi$, for a vector of values $\\phi$ defined at the quadrature points $\\vy_I$, where $A$ is the discrete operator obtained from the left-hand side of \\cref{eq:int-eq} after approximating $D[\\phi](\\vy)$ with the singular integration scheme.\nAs a result, we can solve the linear system\n\\begin{equation}\n  A\\phi = f,\n  \\label{eq:linear_system}\n\\end{equation}\nwhere $f$ is the boundary condition sampled at the points $\\vy_I$, using \\gmres, which only requires matrix-vector products with $A$. The evaluation of these integrals is accelerated in a standard manner using the fast multipole method (\\fmm)\\cite{MB,ying2004kernel,greengard1987fast}.\n%The index in the sum in \\cref{eq:double_layer_disc} ranges over all quadrature nodes on all quadrature patches. \n%This results in a linear system for the values of the density at the quadrature nodes on the boundary:\n%\\begin{equation}\n%  A\\phi = f, \\quad A_{ij} = \\frac{\\delta_{ij}}{2} + \\frac{\\partial G(\\vy_i, \\vy_j)}{\\partial \\vn(\\vy_j)} + M_{ij},\n%  \\label{eq:linear_system}\n%\\end{equation}\n%where $\\delta_{ij}$ is the Kronecker delta. \\note[DZ]{need to define $M_{ij}$.} \n%$A$ is dense, but we can apply $A$ to a vector in linear time,  due to the low-rank structure of off-diagonal blocks, using fast-multipole method (\\fmm). The integral equation is well-conditioned and \n%in conjunction with GMRES to solve \\cref{eq:linear_system}.\n%However, the entries of this system matrix become singular as they approach the diagonal; singular quadrature rules are need to compute these values accurately.\n\n\n", "meta": {"hexsha": "ab69ef94409150f3d8c8b5c907e7eac07bf969b8", "size": 17646, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hedgehog/formulation.tex", "max_stars_repo_name": "mmorse1217/nyu-thesis-template", "max_stars_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hedgehog/formulation.tex", "max_issues_repo_name": "mmorse1217/nyu-thesis-template", "max_issues_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hedgehog/formulation.tex", "max_forks_repo_name": "mmorse1217/nyu-thesis-template", "max_forks_repo_head_hexsha": "dbbef3f00a1e91d6f481b4c6cb480d40960b13c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.7910447761, "max_line_length": 429, "alphanum_fraction": 0.7293437606, "num_tokens": 5270, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.7905303285397349, "lm_q1q2_score": 0.570559884532799}}
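The discretization in the excerpt above is built on composite tensor-product Clenshaw--Curtis rules. As a self-contained illustration (the function names and the test integrand are ours, not code from the paper), a minimal numpy sketch of the $q$-point rule and its tensor product on $[-1,1]^2$:
\begin{verbatim}
import numpy as np

def clenshaw_curtis(q):
    """Nodes and weights of the q-point Clenshaw-Curtis rule on [-1, 1]."""
    n = q - 1
    theta = np.pi * np.arange(q) / n        # Chebyshev angles
    x = np.cos(theta)                       # nodes (Chebyshev extrema)
    w = np.zeros(q)
    for j in range(q):
        s = sum((1.0 if 2 * k == n else 2.0) * np.cos(2 * k * theta[j])
                / (4 * k**2 - 1) for k in range(1, n // 2 + 1))
        c = 1.0 if j in (0, n) else 2.0
        w[j] = (c / n) * (1.0 - s)
    return x, w

# Tensor-product rule on [-1,1]^2 applied to a smooth test integrand.
x, w = clenshaw_curtis(16)
X, Y = np.meshgrid(x, x)
approx = np.sum(np.outer(w, w) * np.exp(X + Y))
exact = (np.e - 1 / np.e) ** 2              # integral of exp(s + t)
print(approx - exact)                       # near machine precision
\end{verbatim}
On a surface patch, each weight additionally picks up the metric factor $\sqrt{g_{ij}}$, as in the $\hat{w}_I$ of the excerpt above.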
{"text": "\\subsection{Multi-valued logic}\n\nWe can have logic with more than two states.\n\n", "meta": {"hexsha": "bf8da74a18c8cb846b93873e38ceca86a3fd46c0", "size": 79, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/logic/propositionalLogic/03-01-multi_valued_logic.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/logic/propositionalLogic/03-01-multi_valued_logic.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/logic/propositionalLogic/03-01-multi_valued_logic.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 15.8, "max_line_length": 44, "alphanum_fraction": 0.7721518987, "num_tokens": 18, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5705598821406316}}
{"text": "% \\tableofcontents\n\\cleardoublepage\n\\pagenumbering{roman}\n  \\maketitle\n\\tableofcontents\n$\\: $\n\\vspace{1cm}\n\\newpage\n\\setlength{\\epigraphwidth}{\\textwidth}\n\\epigraph{\\it Perhaps I can best describe my experience of doing mathematics in terms of a journey through a dark unexplored mansion. You enter the first room of the mansion and it\u2019s completely dark. You stumble around bumping into the furniture, but gradually you learn where each piece of furniture is. Finally after six months or so, you find the light switch, you turn it on, and suddenly it\u2019s all illuminated. You can see exactly where you were. Then you move into the next room and spend another six months in the dark. So each of these breakthroughs, while sometimes they\u2019re momentary, sometimes over a period of a day or two, they are the culmination of\u2014and couldn\u2019t exist without\u2014the many months of stumbling around in the dark that precede them.}{Andrew Wiles}\n\\vspace{2cm}\n\n\\section*{Introduction}\nIn this class, our goal is to understand how the simple operation of adding multi-digit numbers using the ``carrying process'' is really an example of a much general structure, called a group extension, which in turn is related to group cohomology.\n\nWe will start with the very concrete world of arithmetic, and gradually increase the level of abstraction and eventually define some form of group cohomology.\nWe will mostly work with abelian groups, and time permitting will switch to non-abelian groups toward the end.\\\\\\\\\n\nToday, we want to understand how the following are equivalent to each other.\n\\begin{mdframed}\n  \\adjustbox{scale=1,center}{%\n    \\begin{tikzcd}\n      &\\mbox{Adding 2-digit numbers}\n        \\ar[ddddl, leftrightarrow]\n        \\ar[ddddr, leftrightarrow, end anchor={[xshift=0ex]}]\n        &\n      \\\\\\\\\\\\\\\\\n      \\mbox{2-cocycle condition}\\ar[rr, dashed, leftrightarrow]& & \\text{Group extensions}\n    \\end{tikzcd}\n  }\n\\end{mdframed}\n\n\\cleardoublepage\n\\pagenumbering{arabic}\n", "meta": {"hexsha": "7538310c47a8d68035a5af1494f6ff8ba722d564", "size": 1960, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "00.tex", "max_stars_repo_name": "apurvnakade/mc2019-group-cohomology", "max_stars_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "00.tex", "max_issues_repo_name": "apurvnakade/mc2019-group-cohomology", "max_issues_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "00.tex", "max_forks_repo_name": "apurvnakade/mc2019-group-cohomology", "max_forks_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.0, "max_line_length": 770, "alphanum_fraction": 0.7607142857, "num_tokens": 494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7905303137346446, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5705598785788071}}
{"text": "\\chapterimage{head2.png} % Chapter heading image\n\\chapter{Bayesian Inference Framework}\n\\section{Graphical Model}\n\\begin{definition}[$(X(t), Y(t))$]\nWe denote a particular configuration as $(X(t), Y(t))$\n\\begin{equation}\n        (X(t), Y(t))=(x_{t_0}, x_{t_1}, \\cdots, x_{t_{N_p}}, y_{t_1}, \\cdots, y_{t_{N_p}})\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[The complete likelihood function]\nThe complete likelihood function is the joint probability distribution\n\\begin{equation}\n        P(X(t), Y(t); \\bm{\\theta})= p(x_{t_0})\\prod_{t=1}^{N_p} p(x_{t_{\\tau}}|x_{t_{\\tau-1}}) p(y_{t_{\\tau}}|x_{t_{\\tau}})\n\\end{equation}\nwhere $\\bm{\\theta}=\\{F(x), D_1,D_2,...,D_{N_v}\\}$\n\\begin{center}\n        \\includegraphics[scale=0.8]{ch2/graphical_model.pdf}   \n\\end{center}\n\\end{definition}\n\n\\section{Define Alpha and Beta}\n\\begin{definition}[$p(x_t|Y(t))$]\n\\begin{equation}\n        p(x_t|Y) = \\frac{P(Y|x_t)p(x_t)}{P(Y)}\n\\end{equation}\nwhere we use Bayes theorem $P(A|B)=\\frac{P(B|A)P(A)}{P(B)}$. And we can further decompose $P(Y|x_t)$\n\\begin{align*}\n        P(Y|x_t) &= p(y_1,...,y_t|x_t)p(y_{t+1},...,y_{N_p}|x_t) \\\\\n        p(x_t) &= p_{\\text{eq}}(x) = \\sqrt{p_{\\text{eq}}(x)}\\sqrt{p_{\\text{eq}}(x)}\n\\end{align*}\nTherefore, we get\n\\begin{equation}\n        p(x_t|Y) = \\frac{p(y_1,...,y_t|x_t)p(y_{t+1},...,y_{N_p}|x_t) p_{\\text{eq}}(x)}{P(Y)}\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[$\\alpha(x_t)$]\n\\begin{align}\n        \\alpha(x_t) &= p(y_1,...,y_t|x_t)\\sqrt{p_{\\text{eq}}(x)} = \\left< \\alpha_t | x_t \\right> \\\\\n        \\left< \\alpha_{t_0} | x_{t_0} \\right> & = \\rho_{\\text{eq}}(x_{t_0}) = \\psi_1(x_{t_0})\n\\end{align}\n\\end{definition}\n\n\\begin{definition}[$\\beta(x_t)$]\n\\begin{align}\n        \\beta(x_t) &= p(y_{t+1},...,y_{N_p}|x_t)\\sqrt{p_{\\text{eq}}(x)} = \\left< x_t | \\beta_t \\right> \\\\\n        \\left< x_t | \\beta_{t_{N_p}} \\right> &= \\rho_{\\text{eq}}(x_{t})= \\psi_1(x_{t})\n\\end{align}\n\\end{definition}\n\n\\section{Time Propagation}\n\\begin{definition}[Fokker-Planck Equation]\n\\begin{equation}\n        \\frac{\\partial \\rho(x,t)}{\\partial t} = -\\textbf{H} \\rho(x,t)\n\\end{equation}\nwhere $\\rho(x,t)=\\frac{p(x,t)}{\\sqrt{p_{\\rm{eq}}(x)}}$, and\n\\begin{align}\n        \\textbf{H} &= -D \\frac{\\partial^2}{\\partial x^2} + \\frac{1}{2}D\\frac{d F(x)}{dx} + \\frac{1}{4} D F^{2}(x) \\\\\n        p_{\\rm{eq}}(x) &= \\exp{(-V(x))} \\\\\n        F(x) &= - \\frac{dV(x)}{dx}\n\\end{align}\nFurthermore, we can get the eigenbasis $\\{\\psi_i(x)\\}$\n\\begin{equation}\n        \\textbf{H} \\left| \\psi_i \\right> = \\lambda_i \\left| \\psi_i \\right>\n\\end{equation}\nAnd the formal solution of this Hermitian PDE can be written as:\n\\begin{equation}\n        \\rho(x,\\Delta t) = e^{-\\textbf{H} \\Delta t} \\rho(x,0) = \\bm{\\Psi} e^{-\\bm{\\Lambda} \\Delta t}  \\bm{\\Psi}^{\\dagger} \\rho(x,0)\n\\end{equation}\nIf $\\rho(x,0)$ can be expressed by eigenbasis:\n\\begin{equation}\n        \\rho(x,0) = a_1 \\psi_1(x)+\\cdots+a_{N_v} \\psi_{N_v}(x)\n\\end{equation}\n$\\rho(x,\\Delta t)$ becomes\n\\begin{equation}\n        \\rho(x,\\Delta t) = a_1  e^{(-\\lambda_{1}\\Delta t)} \\psi_1(x)+ \\cdots + a_{N_v} e^{(-\\lambda_{N_v}\\Delta t)}\\psi_{N_v}(x)\n\\end{equation}\n\\end{definition}\n\n\\section{Photon Operator}\n\\begin{definition}[Build Photon Matrix $\\left<\\psi_i|\\textbf{y}|\\psi_j\\right>$]\n\\begin{center}\n        
\\includegraphics[scale=0.4]{ch2/build_photon_mat.pdf}   \n\\end{center}\n\\end{definition}\n\n\n\\section{Forward algorithm}\n\\begin{definition}[$\\left< \\alpha_{t_{\\tau}} \\right|$]\n\\begin{align}\n        \\left< \\alpha_{t_{\\tau}} \\right| &= \\left< \\alpha_{t_0} \\right|e^{-\\textbf{H}\\Delta t}\\textbf{y}_1 ... e^{-\\textbf{H}\\Delta t} \\textbf{y}_{\\tau} \\\\\n        \\left< \\alpha_{t_{\\tau}} \\right| &= \\left< \\alpha_{t_{\\tau-1}} \\right|e^{-\\textbf{H}\\Delta t} \\textbf{y}_{\\tau}\n\\end{align}\n\\begin{center}\n        \\includegraphics[scale=0.75]{ch2/alpha_example.pdf}\n\\end{center}\n\\end{definition}\n\n\\begin{definition}[$P(Y(t))$]\n\\begin{equation}\n        P(Y(t))= \\langle \\alpha_{t_0} |e^{-\\textbf{H}\\Delta t}\\textbf{y}_1 ... e^{-\\textbf{H}\\Delta t} \\textbf{y}_{t_{N_p}} | \\beta_{t_{N_p}} \\rangle = \\left< \\alpha_{t_{N_p}} | \\beta_{t_{N_p}} \\right>\n\\end{equation}\n\\begin{align}\n        P(Y(t)) &= \\int \\left< \\alpha_{t_{N_p}} | x_{N_p} \\right> \\left< x_{N_p}| \\beta_{t_{N_p}} \\right> dx_{N_p} \\\\\n        &= \\int \\rho_{\\text{eq}}(x_{N_p}) p(y_1,...,y_{N_p}|x_{N_p}) \\rho_{\\text{eq}}(x_{N_p}) dx_{N_p} \\\\\n        &= \\int p(y_1,...,y_{N_p}|x_{N_p}) p_{\\text{eq}}(x_{N_p}) dx_{N_p}\n\\end{align}\n\\end{definition}\n\n\\begin{definition}[Scaling Factor: $p(y_1)$]\n\\begin{align*}\n        p(y_1) &= \\left<\\alpha_{t_1}|\\psi_1\\right> = \\int \\rho_{\\text{eq}}(x_{1}) p(y_1|x_1) \\rho_{\\text{eq}}(x_{1}) dx_1 \\\\\n        &= \\int p(y_1|x_1) p_{\\text{eq}}(x_{1}) dx_1\n\\end{align*}\n\\begin{align}\n        \\left< \\hat{\\alpha}_{t_1} \\right| &= \\frac{\\left< \\alpha_{t_1} \\right|}{\\left<\\alpha_{t_1}|\\psi_1\\right>} \\\\\n        \\left< \\hat{\\alpha}_{t_1} | x_1 \\right> &= \\frac{\\rho_{\\text{eq}}(x_{1}) p(y_1|x_1)}{p(y_1)}\n\\end{align}\n\\end{definition}\n        \n\\begin{definition}[Scaling Factor: $p(y_t|y_1,...,y_{t-1})$]\n\\begin{align*}\n        &\\left< \\hat{\\alpha}_{t_{\\tau-1}} | e^{-\\textbf{H}\\Delta t}\\textbf{y}_\\tau | \\psi_1 \\right> \\\\\n        &=  \\int \\rho_{\\text{eq}}(x_{\\tau-1}) p(y_1,...,y_{\\tau-1}|x_{\\tau-1}) p(x_{\\tau}|x_{\\tau-1}) p(y_{\\tau}|x_{\\tau}) \\rho_{\\text{eq}}(x_{\\tau-1}) dx_{\\tau-1} dx_{\\tau} \\\\ \n        &= \\int p_{\\text{eq}}(x_{\\tau-1}) p(y_1,...,y_{\\tau-1}|x_{\\tau-1}) p(x_{\\tau}|x_{\\tau-1}) p(y_{\\tau}|x_{\\tau}) dx_{\\tau-1} dx_{\\tau} \\\\\n        &= \\int p(y_1,...,y_{\\tau-1},x_{\\tau-1}) p(x_{\\tau}|x_{\\tau-1}) p(y_{\\tau}|x_{\\tau}) dx_{\\tau-1} dx_{\\tau} \\\\\n        &= \\int p(y_1,...,y_{\\tau-1},y_{\\tau}, x_{\\tau-1}, x_{\\tau}) dx_{\\tau-1} dx_{\\tau} \\\\\n        &= \\frac{p(y_1,...,y_{\\tau-1},y_{\\tau})}{p(y_1,...,y_{\\tau-1})} \\\\\n        &= p(y_{\\tau}|y_1,...,y_{\\tau-1}) = c_{\\tau}\n\\end{align*}\n\\end{definition}\n\n\\begin{definition}[$\\left< \\hat{\\alpha}_{t_\\tau} \\right|$]\n\\begin{equation}\n        \\left< \\hat{\\alpha}_{t_\\tau} \\right| = \\frac{\\left< \\hat{\\alpha}_{t_{\\tau-1}} \\right| e^{-\\textbf{H}\\Delta t}\\textbf{y}_{\\tau} }{\\left< \\hat{\\alpha}_{t_{\\tau-1}} | e^{-\\textbf{H}\\Delta t}\\textbf{y}_{\\tau} | \\psi_1 \\right>}\n\\end{equation}\n\\end{definition}\n\n\\section{Backward algorithm}\n\\begin{definition}[$\\left| \\beta_{t_\\tau} \\right> $]\n\\begin{align}\n        \\left| \\beta_{t_\\tau} \\right> &= e^{-\\textbf{H}\\Delta t} \\textbf{y}_{\\tau+1} ... 
e^{-\\textbf{H}\\Delta t} \\textbf{y}_{N_p} \\left| \\beta_{t_{N_p}} \\right> \\\\\n        \\left| \\beta_{t_{N_p-1}} \\right> &= e^{-\\textbf{H}\\Delta t} \\textbf{y}_{N_p} \\left| \\beta_{t_{N_p}} \\right> \\\\\n        \\left| \\beta_{t_{\\tau}} \\right> &= e^{-\\textbf{H}\\Delta t} \\textbf{y}_{\\tau+1} \\left| \\beta_{t_{\\tau+1}} \\right>\n\\end{align}\n\\begin{center}\n        \\includegraphics[scale=0.6]{ch2/beta_example.pdf}\n\\end{center}\n\\end{definition}\n\n\\begin{definition}[$\\left| \\hat{\\beta}_{t_\\tau} \\right> $]\n\\begin{equation}\n        \\left| \\hat{\\beta}_{t_{\\tau}} \\right> = \\frac{e^{-\\textbf{H}\\Delta t} \\textbf{y}_{\\tau+1} \\left| \\beta_{t_{\\tau+1}} \\right>}{c_{\\tau}}\n\\end{equation}\n\\end{definition}\n\n\\section{Likelihood Function}\n\\begin{definition}[Simple Example: Algebra]\nThe likelihood for the following example is\n\\begin{equation}\\label{eq:likelihood}\n    P(Y(t);\\bm{\\theta}) = L(\\bm{\\theta})= \\langle \\alpha_{t_0} |e^{-\\textbf{H}\\Delta t}\\textbf{y}_1 e^{-\\textbf{H}\\Delta t} \\textbf{y}_2 | \\beta_{t_2} \\rangle\n\\end{equation}\n\\begin{center}\n    \\includegraphics[scale=0.8]{ch2/graphical_model_simple_example.pdf}   \n\\end{center}\nWe insert $\\textbf{1}=\\sum_i |\\psi_i\\left>\\right<\\psi_i|$ in between each $e^{-\\textbf{H}\\Delta t}$ in the eq.(\\ref{eq:likelihood}),\n\\begin{align}\n    P(Y(t)) &= \\sum_{\\{i,j,m,l\\}}\\langle \\alpha_{t_0} |\\psi_i\\left>\\right<\\psi_i| e^{-\\textbf{H}\\Delta t_1} |\\psi_j\\left>\\right<\\psi_j| \\textbf{y}_1 |\\psi_m\\left>\\right<\\psi_m| e^{-\\textbf{H}\\Delta t_2} |\\psi_l\\left>\\right<\\psi_l| \\textbf{y}_2 | \\beta_{t_2} \\rangle \\\\\n    &= \\sum_{\\{i,j,m,l\\}} P(X(t),Y(t))\n\\end{align}\nwhere\n\\begin{equation}\n    P(X(t),Y(t)) = \\langle \\alpha_{t_0} |\\psi_i\\left>\\right<\\psi_i| e^{-\\textbf{H}\\Delta t_1} |\\psi_j\\left>\\right<\\psi_j| \\textbf{y}_1 |\\psi_m\\left>\\right<\\psi_m| e^{-\\textbf{H}\\Delta t_2} |\\psi_l\\left>\\right<\\psi_l| \\textbf{y}_2 | \\beta_{t_{2}} \\rangle\n\\end{equation}\nand we then take a derivative with respect to $\\theta$\n\\begin{align*}\n    &\\frac{\\delta P(X,Y)}{\\delta \\theta} = \\left<\\alpha_{t_0} |\\psi_i \\right> \\frac{\\delta \\left<\\psi_i| e^{-\\textbf{H}\\Delta t_1} |\\psi_j \\right>}{\\delta \\theta} \\left<\\psi_j| \\textbf{y}_1 |\\psi_m \\right> \\left<\\psi_m| e^{-\\textbf{H}\\Delta t_2} |\\psi_l \\right> \\left<\\psi_l| \\textbf{y}_2 | \\beta_{t_{2}} \\right> \\\\\n    &+ \\left<\\alpha_{t_0} |\\psi_i \\right> \\left<\\psi_i| e^{-\\textbf{H}\\Delta t_1} |\\psi_j \\right> \\left<\\psi_j| \\textbf{y}_1 |\\psi_m \\right> \\frac{\\delta \\left<\\psi_m| e^{-\\textbf{H}\\Delta t_2} |\\psi_l \\right>}{\\delta \\theta} \\left<\\psi_l| \\textbf{y}_2 | \\beta_{t_{2}} \\right>\n\\end{align*}\ntherefore\n\\begin{equation}\n    \\frac{\\delta P(Y)}{\\delta \\theta} = \\sum_{\\{i,j,m,l\\}} \\frac{\\delta P(X,Y)}{\\delta \\theta} = \\sum_{\\tau=1}^{2} \\left<\\alpha^k_{t_{\\tau-1}} \\right| \\frac{\\delta e^{-\\textbf{H}\\Delta t_{\\tau}}}{\\delta \\theta} \\textbf{y}_{\\tau} \\left| \\beta^k_{t_{\\tau}}\\right>\n\\end{equation}\nand\n\\begin{equation}\n    \\frac{\\delta \\ln{P(Y)}}{\\delta \\theta} = \\frac{1}{P(Y;\\theta^k)}\\frac{\\delta P(Y)}{\\delta \\theta} = \\frac{1}{P(Y;\\theta^k)} \\sum_{\\tau=1}^{2} \\left<\\alpha^k_{t_{\\tau-1}} \\right| \\frac{\\delta e^{-\\textbf{H}\\Delta t_{\\tau}}}{\\delta \\theta} \\textbf{y}_{\\tau} \\left| \\beta^k_{t_{\\tau}}\\right>\n\\end{equation}\nAs you can see, $Q$ appear as 
\n\\begin{equation}\n    Q(\\theta) = \\frac{1}{P(Y;\\theta^k)} \\sum_{\\tau=1}^{2} \\left<\\alpha^k_{t_{\\tau-1}} | e^{-\\textbf{H}\\Delta t_{\\tau}} \\textbf{y}_{\\tau} | \\beta^k_{t_{\\tau}}\\right>\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[General Case: $Q$]\nBecause\n\\begin{align}\n        \\left< \\hat{\\alpha}_t \\right| &= \\frac{\\left< \\alpha_t \\right|}{p(y_1,...,y_t)} \\\\\n        \\left| \\hat{\\beta}_t \\right> &= \\frac{\\left| \\beta_t \\right>}{p(y_t,...,y_{N_p}|y_1,...,y_t)}\n\\end{align}\nso\n\\begin{equation}\n        Q(\\theta) = \\sum_{t=1}^{N_p} \\frac{\\left<\\hat{\\alpha}^k_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_{t} | \\hat{\\beta}_{t}\\right> }{c_t} \n\\end{equation}\nbecause \n\\begin{equation}\n        p(y_1,...,y_{t-1})p(y_t|y_1,...,y_{t-1})p(y_t,...,y_{N_p}|y_1,...,y_t) = P(Y(t))\n\\end{equation}       \n\\end{definition}\n\n\\section{The decomposition of likelihood function}\n\\begin{definition}[$\\left< \\hat{\\alpha}_{t} | \\hat{\\beta}_{t}\\right>$]\n\\begin{equation}\n        \\left< \\hat{\\alpha}_{t} | \\hat{\\beta}_{t}\\right> = \\frac{\\left< \\alpha_{t} | \\beta_{t}\\right>}{P(Y)} = 1\n\\end{equation}\nwhere $\\left< \\alpha_{t} | \\beta_{t}\\right>=P(Y)$\n\\end{definition}\n\n\\begin{definition}[Likelihood Function 1]\n\\begin{equation}\n        P(Y) = \\prod_{t=1}^{N_p} p(y_t|y_1,...,y_{t-1})=\\prod_{t=1}^{N_p}c_t\n\\end{equation}\n\\begin{equation}\n        \\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}_{t}\\right> = c_t \\left< \\hat{\\alpha}_{t} | \\hat{\\beta}_{t}\\right> = c_t \n\\end{equation}\nwhere $\\left< \\hat{\\alpha}_{t} \\right| = \\frac{\\left< \\hat{\\alpha}_{t-1} \\right| e^{-\\textbf{H}\\Delta t} \\textbf{y}_t}{c_t}$.\n\\end{definition}\n\n\\begin{definition}[Likelihood Function 2]\n\\begin{equation}\n        \\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\psi_1\\right> = c_t = p(y_t|y_1,...,y_{t-1})\n\\end{equation}\nBecause\n\\begin{align*}\n        &\\int \\rho_{\\text{eq}}(x) p(y_{t-1}|x_{t-1},y_1,...,y_{t-2}) p(x_t|x_{t-1}) p(y_t|x_t) \\rho_{\\text{eq}}(x) dx_{t-1}dx_{t} \\\\ \n        & = \\int p_{\\text{eq}}(x_{t-1}) p(y_{t-1}|x_{t-1},y_1,...,y_{t-2}) p(x_t|x_{t-1}) p(y_t|x_t)  dx_{t-1}dx_{t}\\\\ \n        &= p(y_t|y_1,...,y_{t-1})\n\\end{align*}\n\\end{definition}\n\n\\begin{definition}[Decomposition of $\\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}_{t}\\right>$]\n\\begin{align*}\n        \\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}_{t}\\right> &= \\sum_{\\{i,j\\}} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i |  e^{-\\textbf{H}\\Delta t}| \\psi_j \\right> \\left< \\psi_j | \\textbf{y}_t | \\hat{\\beta}_{t} \\right> \\\\\n        &= \\sum_{i=1}^{Nv}  e^{-D_i \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>\n\\end{align*}\n\\end{definition}\n\n\\begin{definition}[Decomposition of $\\tilde{Q}$ and $\\ln{P(Y)}$]\n\\begin{align}\n        \\tilde{Q} &= \\sum_{t=1}^{N_p} \\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}_{t}\\right> = \\sum_{t=1}^{N_p} \\sum_{i=1}^{Nv}  e^{-D_i \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>\\\\\n        \\ln{P(Y)} &= \\sum_{t=1}^{N_p} \\ln{\\sum_{i=1}^{Nv}  e^{-D_i \\lambda'_i \\Delta t} \\left< 
\\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>}\n\\end{align}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch2/lnpy_Q.pdf}   \n\\end{center}        \n\\end{definition}\n\n\\section{Optimize D}\n\\begin{definition}[$\\tilde{Q}^k(\\textbf{D})$]\n\\begin{align}\n        \\textbf{D}&=\\begin{bmatrix}D_1, D_2, D_3, D_4,...,D_{N_v}\\end{bmatrix} \\\\\n        \\tilde{Q}^k(\\textbf{D}) &= \\sum_{t=1}^{N_p} \\left< \\hat{\\alpha}^k_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}^k_{t}\\right> = \\sum_{t=1}^{N_p} \\sum_{i=1}^{Nv}  e^{-D_i \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right> \\\\\n        \\tilde{Q}^k(\\textbf{D}) &= \\sum_{t=1}^{N_p} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_1 \\right> \\left< \\psi_1 | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right> + \\sum_{t=1}^{N_p} \\sum_{i=2}^{Nv}  e^{-D_i \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>\n\\end{align}\nAs you can see, $\\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>$ are calculated by using previous $\\textbf{D}^k$, this is the E-step. Note $D_1$ is redundant. For all new $\\textbf{D}$, $\\sum_{t=1}^{N_p} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_1 \\right> \\left< \\psi_1 | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>$ is the same, so the first eigenvector do not have contribution.\n\\end{definition}\n\n\\begin{definition}[$\\frac{d\\tilde{Q}}{dD_i}$]\n\\begin{equation}\n        \\frac{d\\tilde{Q}^k}{dD_i} = -\\lambda'_i \\Delta t e^{-D_i \\lambda'_i \\Delta t} \\sum_{t=1}^{N_p}  \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>\n\\end{equation}\nOnly when $D_i$ is very large or $\\Delta t$ very large, we can get $\\frac{d\\tilde{Q}}{dD_i}=0$.     
\n\\end{definition}\n\n\\begin{definition}[$\\ln{P(Y)}$: Constant D]\n\\begin{align}\n        \\ln{P(Y)} &= \\sum_{t=1}^{N_p} \\ln{\\left( \\sum_{i=1}^{Nv}  e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right> \\right)} \\\\\n        &= \\sum_{t=1}^{N_p} \\ln{\\left( \\left< \\hat{\\alpha}^k_{t-1} | \\psi_1 \\right> \\left< \\psi_1 | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right> + \\sum_{i=2}^{Nv}  e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right> \\right)}\n\\end{align}\n\\begin{align}\n        \\frac{d \\ln{P(Y)}}{dD} &=\\sum_{t=1}^{N_p} \\frac{\\sum_{i=1}^{Nv} -\\lambda'_i \\Delta t e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>}{\\sum_{i=1}^{Nv}  e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>} \n\\end{align}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch2/dldD_analytical_numerical.pdf}   \n\\end{center}    \n\\end{definition}\n\n\\begin{definition}[$\\frac{d \\ln{P(Y)}}{dD}$: Detail 1]\n\\begin{align*}\n        \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>\n\\end{align*}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch2/alpha_beta_in_dldD.pdf}   \n\\end{center}\n\\begin{align*}\n        e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>\n\\end{align*}\nwhere $D=500$\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch2/edt_alpha_beta_in_dldD.pdf}   \n\\end{center}\n\\begin{align*}\n        -\\lambda'_i \\Delta t e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}_{t} \\right>\n\\end{align*}\n\\begin{center}\n        \\includegraphics[scale=0.45]{ch2/LQ_edt_alpha_beta_in_dldD.pdf}   \n\\end{center}\n\\end{definition}\n\n\\begin{definition}[$\\frac{d \\ln{P(Y)}}{dD}$: Detail 2]\nThe contribution from each time stamp can be split into the numerator\n\\begin{align*}\n        \\sum_{i=1}^{Nv} -\\lambda'_i \\Delta t e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>\n\\end{align*}\nand the denominator\n\\begin{align*}\n        \\sum_{i=1}^{Nv}  e^{-D \\lambda'_i \\Delta t} \\left< \\hat{\\alpha}^k_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t | \\hat{\\beta}^k_{t} \\right>\n\\end{align*}\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch2/D_300.pdf}   \n\\end{center}\nWhen $D=300$\n\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch2/D_500.pdf}   \n\\end{center}\nWhen $D=500$\n\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch2/D_800.pdf}   \n\\end{center}     \nWhen $D=800$\n\\end{definition}\n\n\\section{Position-Dependent Diffusion}\n\\begin{definition}[D(x)]\nWe express the position dependent diffusity as\n\\begin{equation}\n        D(x) = D + \\eta(x)\n\\end{equation}\nThe domain $x \\in [a,b]$, and the integration of $\\eta(x)$ over whole domain is 0\n\\begin{align*}\n        \\int_a^b \\eta(x) dx = 0\n\\end{align*}\nand \n\\begin{align*}\n        \\int_a^b D(x) dx &= \\int_a^b D dx + \\int_a^b \\eta(x) dx \\\\\n        &= D \\int_a^b dx = 
(b-a)D\n\\end{align*}\n\\end{definition}\n\n\\begin{definition}[Goal]\nThe goal for position-dependent diffusion coefficient is\n\\begin{itemize}\n        \\item Given $p_{\\text{eq}}(x)$\n        \\item Learn constant $D$\n        \\item Try to learn $\\eta(x)$?\n\\end{itemize}\n\\begin{center}\n        \\includegraphics[scale=0.35]{ch2/learn_position_D.pdf}   \n\\end{center}  \n\\end{definition}\n\n\\begin{definition}[Example]\nTest\n\\begin{center}\n        \\includegraphics[scale=0.4]{ch2/posi_depe_learn_1.pdf}   \n\\end{center}\nWhen $\\zeta$ is greater 1800, the numerical instability occurs.\n\\begin{center}\n        \\includegraphics[scale=0.4]{ch2/zeta_2000_numerical_instability.pdf}   \n\\end{center}     \n\\end{definition}\n\n\\begin{definition}[Time Propagation]\n\\begin{equation}\n        \\frac{\\partial \\rho(x,t)}{\\partial t} = -\\textbf{H} \\rho(x,t) = -(\\textbf{H}^0 +\\textbf{H}^D) \\rho(x,t)\n\\end{equation}\nwhere\n\\begin{align*}\n        \\textbf{H} &= -D(x) \\frac{\\partial^2}{\\partial x^2} + \\frac{1}{2}D(x)\\frac{d F(x)}{dx} + \\frac{1}{4} D(x) F^{2}(x) -\\frac{\\partial D(x)}{\\partial x}\\frac{\\partial }{\\partial x} + \\frac{1}{2} \\frac{\\partial D(x)}{\\partial x} F(x)\\\\\n        \\textbf{H}^0 &= -D\\frac{\\partial^2}{\\partial x^2} + \\frac{1}{2}D\\frac{d F(x)}{dx} + \\frac{1}{4} D F^{2}(x) \\\\\n        \\textbf{H}^D &= - \\eta(x)\\frac{\\partial^2}{\\partial x^2} + \\frac{1}{2}\\eta(x)\\frac{d F(x)}{dx} + \\frac{1}{4} \\eta(x) F^{2}(x) -\\frac{\\partial \\eta(x)}{\\partial x}\\frac{\\partial }{\\partial x} + \\frac{1}{2} \\frac{\\partial \\eta(x)}{\\partial x} F(x)\n\\end{align*}        \n\\end{definition}\n\n\\begin{definition}[Eigenvector and Eigenvalue]\n\\begin{align}\n        \\textbf{H} \\left| \\psi_i \\right> &= \\lambda_i \\left| \\psi_i \\right> \\\\\n        \\left( \\textbf{H}^0 + \\textbf{H}^D \\right) \\left| \\psi_i \\right> &= \\lambda_i \\left| \\psi_i \\right> \\\\\n        \\textbf{H}^0 \\left| \\psi_i \\right> + \\textbf{H}^D \\left| \\psi_i \\right> &= \\lambda_i \\left| \\psi_i \\right>\n\\end{align}\nRecall that\n\\begin{equation}\n        \\left| \\psi_i \\right> = c_{i1}| \\psi_1^0 \\rangle + c_{i2}| \\psi_2^0 \\rangle +  c_{i3}| \\psi_3^0 \\rangle+\\cdots + c_{i,N_v}| \\psi_{N_v}^0 \\rangle\n\\end{equation}\nand\n\\begin{align}\n        \\textbf{H}^0 \\left| \\psi_i \\right> &= c_{i1} \\lambda^0_1| \\psi_1^0 \\rangle + c_{i2}\\lambda^0_2| \\psi_2^0 \\rangle +  c_{i3}\\lambda^0_3| \\psi_3^0 \\rangle+\\cdots + c_{i,N_v}\\lambda^0_{N_v}| \\psi_{N_v}^0 \\rangle \\\\\n        \\textbf{H}^D \\left| \\psi_i \\right> &= c_{i1} \\textbf{H}^D| \\psi_1^0 \\rangle + c_{i2}\\textbf{H}^D| \\psi_2^0 \\rangle +  c_{i3}\\textbf{H}^D| \\psi_3^0 \\rangle+\\cdots + c_{i,N_v}\\textbf{H}^D| \\psi_{N_v}^0 \\rangle \\\\\n        \\lambda_i \\left| \\psi_i \\right> &= c_{i1} \\lambda_i| \\psi_1^0 \\rangle + c_{i2}\\lambda_i| \\psi_2^0 \\rangle +  c_{i3}\\lambda_i| \\psi_3^0 \\rangle+\\cdots + c_{i,N_v}\\lambda_i| \\psi_{N_v}^0 \\rangle\n\\end{align}\nthen\n\\begin{align*}\n        \\textbf{H}^D \\left| \\psi_i \\right> &=  \\lambda_i \\left| \\psi_i \\right> - \\textbf{H}^0 \\left| \\psi_i \\right> \\\\\n        &= \\left(\\lambda_i - \\lambda_1^0 \\right) \\left| \\psi_1^0 \\right> + \\left(\\lambda_i - \\lambda_2^0 \\right) \\left| \\psi_2^0 \\right> + ... 
+ \\left(\\lambda_i - \\lambda_{N_v}^0 \\right) \\left| \\psi_{N_v}^0 \\right>\n\\end{align*}\nIt becomes\n\\begin{align*}\n        \\textbf{H}^D \\left| \\psi_i \\right> &= c_{i1} \\textbf{H}^D| \\psi_1^0 \\rangle + c_{i2}\\textbf{H}^D| \\psi_2^0 \\rangle +  c_{i3}\\textbf{H}^D| \\psi_3^0 \\rangle+\\cdots + c_{i,N_v}\\textbf{H}^D| \\psi_{N_v}^0 \\rangle \\\\\n        &= c_{i1} \\left(\\lambda_i - \\lambda_1^0 \\right) \\left| \\psi_1^0 \\right> + c_{i2} \\left(\\lambda_i - \\lambda_2^0 \\right) \\left| \\psi_2^0 \\right> + ... + c_{i,N_v} \\left(\\lambda_i - \\lambda_{N_v}^0 \\right) \\left| \\psi_{N_v}^0 \\right>\n\\end{align*}\nso\n\\begin{equation}\n        \\textbf{H}^D \\left| \\psi_j^0 \\right> = \\left(\\lambda_i - \\lambda_j^0 \\right) \\left| \\psi_j^0 \\right>\n\\end{equation}\nand $\\textbf{H}^D$ can be written as a matrix whose element is\n\\begin{equation}\n        \\left(\\lambda_i - \\lambda_j^0 \\right)\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[Likelihood Function 1]\n\\begin{align}\n        \\left< \\hat{\\alpha}_{t-1} | e^{-\\textbf{H}\\Delta t} \\textbf{y}_t| \\hat{\\beta}_{t}\\right> &= c_t \\\\\n        \\sum_{\\{i,j\\}} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right>\\left< \\psi_i | e^{-\\textbf{H}\\Delta t} | \\psi_j \\right>\\left< \\psi_j | \\textbf{y}_t| \\hat{\\beta}_{t}\\right> &= c_t \\\\\n        \\sum_{i} e^{-\\lambda_i \\Delta t} \\left< \\hat{\\alpha}_{t-1} | \\psi_i \\right> \\left< \\psi_i | \\textbf{y}_t| \\hat{\\beta}_{t}\\right> &= c_t \n\\end{align}\n\\end{definition}", "meta": {"hexsha": "7033a6e159294634a67d869862a176152fdecf48", "size": 22373, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/chapter2.tex", "max_stars_repo_name": "yizaochen/em_theory", "max_stars_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/chapter2.tex", "max_issues_repo_name": "yizaochen/em_theory", "max_issues_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/chapter2.tex", "max_forks_repo_name": "yizaochen/em_theory", "max_forks_repo_head_hexsha": "a9260f17ff59d7a265dd9e629607376b8d909ae6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.1515151515, "max_line_length": 424, "alphanum_fraction": 0.5953157824, "num_tokens": 9498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8397339596505965, "lm_q2_score": 0.6791787121629465, "lm_q1q2_score": 0.5703294292749838}}
{"text": "\\chapter{Rigid body Dynamics}\n\\section{Rigid body}\nA body is called a rigid body if the distance between any two points in the body does  not  change  in  time . i.e.,a rigid body  do  not  stretch, compress, or shear. Rigid  bodies,  unlike  point  masses,  can  have  forces  applied  at different points in the body.\n\\section{Fixed axis rotations}\nThe simplest motion of a rigid body  is the rotation of a rigid body about an axis fixed in space. So the axis is neither translating nor rotating.\n\n\\begin{figure}[H]\n\\begin{minipage}{0.45\\textwidth}\n\t\\centering\n\t\\includegraphics[height=6cm,width=6cm]{rigidbody angularmomentum}\n\t\\end{minipage}\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[height=4cm,width=4cm]{rigidbody angularmomentum 1}\n\\end{minipage}\n\\caption{Fixed axis rotation of rigid body}\n\\label{Fixed axis rotation}\n\\end{figure}\nThe rigid body pivoted at the origin and rotates with angular speed $\\omega$ around the $z$ axis, in the counterclockwise direction (as viewed from above). Consider a little piece of the body, with mass $d m$ and position $(x, y)$. This little piece travels in a circle around the origin with speed\n\\begin{align*}\nv&=\\omega r\\\\\n\\text{Where,}\\ r&=\\sqrt{x^{2}+y^{2}} \\qquad \\because x=r\\cos\\theta \\ ;\\  y=r\\sin\\theta\n\\intertext{Then, the angular momentum of this piece (relative to the origin) equals}\n\\mathrm{J}&=\\mathrm{r} \\times \\mathrm{p}\\\\&=r(v d m) \\hat{\\mathrm{z}}\\\\&=d m r^{2} \\omega \\hat{\\mathrm{z}}\n\\intertext{Then the angular momentum of the entire body is therefore}\n\\mathrm{J}&=\\int r^{2} \\omega \\hat{\\mathrm{z}} d m\\\\&=\\int\\left(x^{2}+y^{2}\\right) \\omega \\hat{\\mathrm{z}} d m\n\\intertext{Where the integration runs over the area of the body. If the density of the object is constant, as is usually the case, then we have,}\nd m&=\\rho d x d y\n\\intertext{If we define the moment of inertia around the $z$ axis to be}\nI_{z} &= \\int r^{2} d m\\\\&=\\int\\left(x^{2}+y^{2}\\right) d m\n\\intertext{Then the $z$ component of the angular momentum}\nJ_{z}&=I_{z} \\omega\\\\\n\\text{And in general, the moment of inertia,}\\quad I_{z}&= \\sum_{i} m_{i} r_{i}^{2}\n\\end{align*}\n\\begin{center}\n\t\\framebox{\n\t\t\\parbox[t][2.5cm]{4cm}{\n\t\t\t\n\t\t\t\\addvspace{0.2cm} \\centering \n\t\t\t\n\t\n\t\\begin{align*}\n\tI&= \\sum_{i} m_{i} r_{i}^{2}\\\\\n\tJ&= I\\omega\n\t\\end{align*}} }\n\\end{center}\n\\subsection{Torque}\nTorque about a point is defined as the rate of change of angular momentum about the same point.\n\\begin{align*}\n\\tau&=\\frac{dJ}{dt}\n\\intertext{For a rigid body rotating about it's axis of symmetry}\n\\tau&=\\frac{d (I\\omega)}{dt}\\\\&=I \\frac{d \\omega}{dt}\n\\intertext{$\\frac{d \\omega}{dt}$\\ is the angular acceleration.}\n\\intertext{\\textbf{Case-1: } If the axis of rotation and axis of symmetry of the bbody are not one. $J$ and $\\omega $ may not along the same direction. Then,}\n\\tau_{0}&=\\frac{d (I\\omega)}{dt}\n\\intertext{\\textbf{Case-2: } If the axis of rotation fixed relative to the body . $I$ will be constant. Then,}\n\\tau_{0}&=I\\frac{d \\omega}{dt}\n\\intertext{In the absence of external torque $\\tau=0$. Then the angular momentum about the axis of rotation is conserved. 
}\nI\\omega&= \\text{Constant.}\n\\end{align*}\n\\section{Moment of Inertia tensor}\nparticle $P$ of the body, having the position vector $r_{i}$ with respect to $O$, has an instantaneous velocity $v_{i}$ relative to $O$, given by\n\\begin{align}\n\\mathbf{v}_{i}&=\\omega \\times \\mathbf{r}_{i}\n\\intertext{The angular velocity $\\omega$ has components $\\omega_{x}, \\omega_{y}, \\omega_{z}$ along $x$, $y$ and  $z$ axes . }\n\\omega&=\\omega_{x} \\hat{\\mathrm{i}}+\\omega_{y} \\hat{\\mathrm{j}}+\\omega_{z} \\hat{\\mathrm{k}}\n\\intertext{Then the angular momentum is given by,}\n\\mathrm{J}_{p}&=\\mathrm{r}_{i} \\times m_{i} \\mathrm{v}_{i} \\quad \\text{$\\mathrm{m}_{i}$ is the  mass of the particle.}\n\\intertext{Then the angular momentum ,}\n\\mathrm{J}&=\\sum_{i=1}^{N} \\mathrm{r}_{i} \\times m_{i} \\mathrm{v}_{i}\\\\&=\\sum_{i} m_{i} \\mathrm{r}_{i} \\times\\left({\\omega} \\times \\mathrm{r}_{i}\\right)\\\\\n&=\\sum_{i} m_{i}\\left[\\left(\\mathrm{r}_{i} \\cdot \\mathrm{r}_{i}\\right) \\omega-\\left(\\mathrm{r}_{i} \\cdot \\omega\\right) \\mathrm{r}_{i}\\right] \\\\\n&=\\sum_{i} m_{i}\\left[\\mathrm{r}_{i}^{2} \\omega-\\left(\\mathrm{r}_{i} \\cdot \\omega\\right) \\mathrm{r}_{i}\\right]\n\\intertext{Whose direction is not along that of angular velocity If $J_{x}, J_{y}, J_{z}$ are the components of angular momentum along $X, Y, Z$ axes respectively, then}\nJ_{x}&=\\sum_{i} m_{i}\\left[r_{i}^{2} \\omega_{x}-\\left(x_{i} \\omega_{x}+y_{i} \\omega_{y}+z_{i} \\omega_{z}\\right) x_{i}\\right]\n\\intertext{It can be written as,}\n\\begin{split}\nJ_{x}&=\\omega_{x} \\sum_{i} m_{i}\\left(r_{i}^{2}-x_{i}^{2}\\right)-\\omega_{y} \\sum_{i} m_{i} x_{i} y_{i}-\\omega_{z} \\sum_{i} m_{i} x_{i} z_{i} \\\\\nJ_{y}&=-\\omega_{x} \\sum_{i} m_{i} x_{i} y_{i}+\\omega_{y} \\sum_{i} m_{i}\\left(r_{i}^{2}-y_{i}^{2}\\right)-\\omega_{z} \\sum m_{i} y_{i} z_{i} \\\\\nJ_{z}&=-\\omega_{x} \\sum_{i} m_{i} x_{i} z_{i}-\\omega_{y} \\sum_{i} m_{i} y_{i} z_{i}+\\omega_{z} \\sum_{i} m_{i}\\left(r_{i}^{2}-z_{i}^{2}\\right)\n\\end{split}\\\\\\\\\n\\begin{split}\nJ_{x}&=I_{x x} \\omega_{x}+I_{x y} \\omega_{y}+I_{x z} \\omega_{z} \\\\\nJ_{y}&=I_{y x} \\omega_{x}+I_{y y} \\omega_{y}+I_{y z} \\omega_{z} \\\\\nJ_{z}&=I_{z x} \\omega_{x}+I_{z y} \\omega_{y}+I_{z z} \\omega_{z}\n\\end{split}\n\\intertext{Where,}\n\\begin{split}\nI_{x x}&=\\sum_{i} m_{i}\\left(r_{i}^{2}-x_{i}^{2}\\right)=\\sum_{i} m_{i}\\left(y_{i}^{2}+z_{i}^{2}\\right) \\\\\nI_{y y}&=\\sum_{i} m_{i}\\left(r_{i}^{2}-y_{i}^{2}\\right)=\\sum_{i} m_{i}\\left(\\dot{x}_{i}^{2}+z_{i}^{2}\\right)\\\\\nI_{z z}&=\\sum_{i} m_{i}\\left(r_{i}^{2}-z_{i}^{2}\\right)=\\sum_{i} m_{i}\\left(x_{i}^{2}+y_{i}^{2}\\right)\n\\end{split}\n\\intertext{And,}\\\n\\begin{split}\nI_{x y}&=-\\sum_{i} m_{i} x_{i} y_{i}=I_{y x} \\\\\nI_{x z}&=-\\sum_{i} m_{i} x_{i} z_{i}=I_{z x} \\\\\nI_{y z}&=-\\sum_{i} m_{i} y_{i} z_{i}=I_{z y}\n\\end{split}\n\\intertext{Then any component of the angular momentum vector can be written as,}\nJ_{\\alpha}&=\\sum_{\\beta} I_{\\alpha \\beta} \\omega_{\\beta} \\quad \\text { Where } \\alpha, \\beta=x, y, z\\\\\n\\text{Or }\\ J&= I\\omega\n\\intertext{In matrix notation,}\n\\left[\\begin{array}{c}\nJ_{x} \\\\\nJ_{y} \\\\\nJ_{z}\n\\end{array}\\right]&=\\left[\\begin{array}{lll}\nI_{x x} & I_{x y} & I_{x z} \\\\\nI_{y x} & I_{y y} & I_{y z} \\\\\nI_{z x} & I_{z y} & I_{x z}\n\\end{array}\\right] \\left[\\begin{array}{l}\n\\omega_{x} \\\\\n\\omega_{y} \\\\\n\\omega_{z}\n\\end{array}\\right]\n\\intertext{The nine elements $I_{x x}, I_{x y}, \\ldots, I_{x z}$ of the 
$(3 \\times 3)$ matrix may be regarded as components of a single entity $I$. This entity $I$ is called inertia tensor. Since $I_{x y}=I_{y x}$ etc., $I$ is a symmetric tensor.}\nI&= \\left[\\begin{array}{lll}\nI_{x x} & I_{x y} & I_{x z} \\\\\nI_{y x} & I_{y y} & I_{y z} \\\\\nI_{z x} & I_{z y} & I_{x z}\n\\end{array}\\right] \n\\end{align}\n\\section{Principle axes and Principle moment of inertia}\nIf  a body  has  symmetries  with  respect  to  some  of  the  axis,  then  some  of  the  products  of  inertia  become  zero  and  we  can  identify  the  principal  axes.   If  a  body  is  symmetric  with  respect  to  the plane $x^{\\prime}=0$ then, we will have $I_{x^{\\prime} y^{\\prime}}=I_{y^{\\prime} x^{\\prime}}=I_{x^{\\prime} z^{\\prime}}=I_{z^{\\prime} x^{\\prime}}=0$ and $x^{\\prime}$ will be a principal axis. \n\\\\If we choose the axes of the coordinate system fixed in the body with respect to which \\textbf{off-diagonal elements disappear and only the diagonal elements remain in the inertia tensor, then such axes are called the principal axes} of the body and the corresponding moments of inertia as the principal moments of inertia. In general the directions of the principal axes are different to those of arbitrary axes fixed in the body. If $x^{\\prime}$ , $y^{\\prime}$ and $z^{\\prime}$ are the principle axes then the principle moment of inertia can be written as, \n\\begin{equation}\n{I}=\\left[ \\begin{array}{ccc}\nI_{1} & 0 & 0 \\\\\n0 & I_{2} & 0 \\\\\n0 & 0 & I_{3}\n\\end{array}\\right]\n\\end{equation}\nWhere, $I_{x^{\\prime} x^{\\prime}}=I_{1}, I_{y^{\\prime} y^{\\prime}}=I_{2} \\text { and } I_{z z^{\\prime}}=I_{3}$\n\\\\If $\\omega_{1}, \\omega_{2}, \\omega_{3}$ be the components of angular velocity and $J_{1}, J_{2}, J_{3}$ those of angular momentum about the principal axes, then  for the principal axes  we obtain the angular momenta as,\n\\begin{align}\n\\left(\\begin{array}{l}\nJ_{1} \\\\\nJ_{2} \\\\\nJ_{3}\n\\end{array}\\right)&=\\left(\\begin{array}{lll}\nI_{1} & 0 & 0 \\\\\n0 & I_{2} & 0 \\\\\n0 & 0 & I_{3}\n\\end{array}\\right)\\left(\\begin{array}{l}\n\\omega_{1} \\\\\n\\omega_{2} \\\\\n\\omega_{3}\n\\end{array}\\right)\n\\end{align}\n\n\\section{Parallel and perpendicular axes theorem}\n\\subsection{ Theorem of Parallel Axis }\nConsider a rigid body of mass $M$ undergoing fixed-axis rotation. \n\\begin{theorem}\nAccording to the theorem of parallel axis, the moment of inertia of a body about any axis is equal to the sum of moment of inertia about a parallel axis through it's center of mass and the product of the mass of the body and the square of the distance between two axes.\n\\begin{equation}\nI= I_{cm}+M d^{2}\n\\end{equation}\n\\begin{align*}\n\\text{Where,}\\qquad I&= \\text{Moment of inertia about any axis.}\\\\\nI_{cm}&= \\text{Moment of inertia of the body about a parallel axis passing through it's center of mass.}\\\\\nm&= \\text{Mass of the body.}\\\\\nd&= \\text{Distance between the two axes.}\\\\\n\\end{align*}\n\\end{theorem}\n\\subsection{Theorem of Perpendicular Axis}\n\\begin{theorem}\nAccording to the theorem of perpendicular axis, the moment of inertia of a plane lamina about an axis perpendicular to it's plane is equal to the sum of moments of inertia of the lamina about two axes  at right angles to each other, in it's own plane, and intersecting each other at the point where the perpendicular axis passes through it.\\\\\nIf  $I_{\\mathrm{x}}$  and $I_{\\mathrm{y}}$ be the moment of inertia of a plane lamina about $x$ and $y$ axes . 
Which lie in the palne of the lamina  and mutually peroendicular to each other intersecting at the origin. The n the moment of inertia  $I$ about an axis which is passing through the origin and perpendicular to the oplane of the lamina is ,\n\\begin{equation}\nI=I_{\\mathrm{x}}+I_{\\mathrm{y}}\n\\end{equation}\n\\end{theorem}\n\n\\section{Moment of Inertia of some simple objects}\n\n\\subsection{Moment of Inertia of a thin uniform rod }\n\\subsubsection{1. About an axis passing through the center of mass and perpendicular to it's length.}\nConsider a thin uniform rod of length $L$ and mass $m$ and uniform mass density $\\lambda$.  Choose Cartesian coordinates, with the origin at the center of mass of the rod, which is midway between the endpoints since the rod is uniform. Choose the $x$-axis to lie along the length of the rod, with the positive $x$-direction to the right, as in the figure.\\ref{Uniform rod}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=1.8cm,width=4cm]{MI rod 1}\n\t\\caption{Uniform rod with an axis passing through the center.}\n\t\\label{Uniform rod}\n\\end{figure}\n\\begin{align*}\n\\intertext{Consider an infinitesimal mass element  $dm$ ,located at a displacement  $x$  from the \n\tcenter  of  the  rod,, } \\ dm&= \\lambda dx \\qquad \\text{Where,}\\ \\lambda=\\frac{m}{L}\n\\intertext{The moment of inertia of a continuos mass distribution is given by,}\\ I&= \\int r^{2} d m\\\\\n\\text{Then,}\\ I&=\\int_{-L/2}^{L/2} x^{2} \\frac{m}{L}dx\\\\&= \\frac{m}{L} \\int_{-L/2}^{L/2} x^{2} dx\\\\&=\n\\frac{m}{L} \\left[\\frac{x^{3}}{3} \\right]_{-L/2}^{L/2} =\\frac{m}{3L} \\frac{2L^{3}}{8} \\\\&=\\frac{1}{12}mL^{2}\n\\end{align*}\n\\subsubsection{2.  About an axis passing  through  the  endpoint. }\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=1.8cm,width=4cm]{MI rod 2}\n\t\\caption{Uniform rod with an axis passing through the end point.}\n\t\\label{Uniform rod}\n\\end{figure}\nAccording to the theorem of parallel axis, the moment of inertia about any axis is given by,\n\\begin{align*}\nI&= I_{cm}+M d^{2}\\quad  \\text{Where $d$ is the distance from the axis to center of mass .}\\\\\n\\text{Here,}\\quad d&=\\frac{L}{2}\\\\\nI&=\\frac{1}{12} m L^{2}+\\frac{1}{4} m L^{2}\\\\&=\\frac{1}{3} m L^{2} \n\\end{align*}\n\\subsection{Moment of Inertia of a Uniform Disc}\n\\subsubsection{1. About an axis passing through the center of mass and perpendicular to the plane of the disc.}\n\n\\begin{figure}[H]\n\\begin{minipage}{0.45\\textwidth}\n   \\centering\n\t\\includegraphics[height=4cm,width=4cm]{MI disc 1}\n\\end{minipage}\n\\begin{minipage}{0.45\\textwidth}\n \\centering\n\\includegraphics[height=3cm,width=3cm]{MI disc 2}\n\\end{minipage}\n\\caption{Uniform Disc with an axis passing through the center of mass}\n\\label{}\n\\end{figure}\nChoose cylindrical coordinates with the coordinates $(r, \\theta)$ in the plane and the $z$-axis perpendicular to the plane. 
The area element,\n\\begin{align*}\nd a&=\\rho d \\rho d \\theta\\\\\n\\sigma &= \\frac{M}{A}=\\frac{M}{\\pi R^{2}}\n\\intertext{ The infinitesimal mass }\\ \nd m &= \\frac{M}{A} d a\\\\&=\\frac{M}{\\pi R^{2}} \\rho d \\rho d \\theta \n\\intertext{Then the moment of inertia ,} \\ I &=\\int \\rho^{2} \\sigma d a \\\\\n&=\\left(\\frac{M}{\\pi R^{2}}\\right) \\int \\rho^{2} d a \\\\\n&=\\left(\\frac{M}{\\pi R^{2}}\\right) \\int_{0}^{R} \\int_{0}^{2 \\pi} \\rho^{2} \\rho d \\rho d \\theta \\\\\n&=\\left(\\frac{2 M}{R^{2}}\\right) \\int_{0}^{R} \\rho^{3} d \\rho \\\\\n&=\\frac{1}{2} M R^{2}\n\\end{align*}\n\\subsubsection{2. About an axis passing through the diameter.}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3cm,width=4cm]{MI disc 4}\n\t\\caption{Uniform disc with an axis passing through the diameter.}\n\t\\label{}\n\\end{figure}\nBy the theorem of perpendicular axis,\n\\begin{align*}\nI_{z}&=I_{x}+I_{y}\n\\intertext{The moment of inertia of the disc is the same about any diameter. Let I be the moment of inertia about any diameter $xx^{\\prime}$. Then it will be the moment of inertia  about a perpendicular diameter  $yy^{\\prime}$}\n\\text{Here,}\\ I_{z}&=\\frac{M R^{2}}{2}\\\\\n\\text{And ,}\\ I_{x}&=I_{y}=I\\\\\n\\frac{M R^{2}}{2}&= 2I\\\\\n I&= \\frac{M R^{2}}{4}\n\\end{align*}\n\\subsection{Moment of inertia of a solid sphere .}\n\\subsubsection{1. About an axis passing through the diameter.}\n\\begin{align*}\n\\text{The volume element,}\\ dv&= r^{2}dr \\sin \\theta  d\\theta d\\phi \\\\\\text{The infinitesimal mass element,} \\ dm &=\\frac{M}{V} dv\\\\\n&=\\frac{M}{\\frac{4}{3} \\pi R^{3}} r^{2}dr \\sin \\theta  d\\theta d\\phi\\\\\n\\text{Here, }\\ r_{\\perp}&=r\\sin\\theta\\\\\nI&=\\int r_{\\perp}^{2}+d m\\\\&=\\iiint r^{2} \\sin ^{2} \\theta \\cdot \\frac{M}{\\frac{4}{3} \\pi R^{3}} r^{2} d r \\sin \\theta d \\theta d \\phi\\\\&=\\frac{3 M}{4 \\pi R^{3}} \\int_{0}^{R} r^{4} d r \\int_{0}^{\\pi} \\sin ^{3} \\theta d \\theta \\int_{0}^{2 \\pi} d \\phi\\\\\n&=\\frac{3 M}{4 \\pi R^{3}} \\cdot \\frac{R^{5}}{5} \\cdot \\frac{4}{3} \\cdot 2 \\pi\\\\&=\\frac{2}{5} M R^{2}\n\\end{align*}\n\\subsubsection{2. About an axis passing through it's tangent.}\n\\begin{minipage}{0.45\\textwidth}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=5cm,width=3cm]{mi solid 1}\n\t\\caption{About an axis passing through it's tangent.}\n\t\\label{About an axis passing through it's tangent.}\n\\end{figure}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.45\\textwidth}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=4cm,width=3cm]{mi solid 2}\n\t\t\\caption{About an axis passing through it's diameter.}\n\t\t\\label{About an axis passing through it's diameter.}\n\t\\end{figure}\n\\end{minipage}\\\\\\\\\nAny tangent to the sphere at any point is parallel to one of it's diameter and is at  a distance equal to $R$ from the centre.  Now by the application of the theorem of parallel axis  the moment of inertia becomes,\n\\begin{align*}\n\\intertext{According to the theorem of parallel axis, the moment of inertia about any axis is given by,}\nI&= I_{cm}+M d^{2}\\quad  \\text{Where $d$ is the distance from the axis to center of mass .}\\\\\n\\text{Here,}\\quad d&=R\\\\\nI&=\\frac{2}{5} m R^{2}+m R^{2}\\\\&=\\frac{7}{5} m R^{2} \n\\end{align*}\n\\subsection{Moment of inertia of a spherical shell.}\n\\subsubsection{1. 
About an axis passing through it's diameter.}\nOn the spherical shell the mass element is ,\n\\begin{align*}\nda&= R^{2} \\sin \\theta d\\theta d \\phi\\\\\n\\sigma&= \\frac{M}{a}\\\\\nd m&=\\sigma R \\sin \\theta d \\theta R d \\phi\n\\intertext{where $\\sigma=M / 4 \\pi R^{2}$ is the surface mass density, and the distance from the rotational axis is $r=R \\sin \\theta$. Hence the moment of inertia to be calculated is}\nI&=\\int r_{\\perp}^{2}+d m\\\\&=\\iint R^{2} \\sin ^{2} \\theta \\cdot \\frac{M}{{4}\\pi R^{2}} R^{2}  \\sin \\theta d \\theta d \\phi\\\\&=\\frac{M R^{2}}{4 \\pi }  \\int_{0}^{\\pi} \\sin ^{3} \\theta d \\theta \\int_{0}^{2 \\pi} d \\phi\\\\\n&=\\frac{ M R^{2}}{4 \\pi} \\cdot  \\frac{4}{3} \\cdot 2 \\pi\\\\&=\\frac{2}{3} M R^{2}\n\\end{align*}\n\\begin{minipage}{0.45\\textwidth}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=4cm,width=3cm]{mi spherical shell}\n\t\\caption{About an axis passing through it's diameter.}\n\t\\label{About an axis passing through it's diameter.}\n\\end{figure}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.45\\textwidth}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=4cm,width=3cm]{mi hollow spherical 2}\n\t\\caption{About an axis passing through it's tangent.}\n\t\\label{About an axis passing through it's tangent.}\n\\end{figure}\n\\end{minipage}\n\\subsubsection{2. About an axis passing through it's tangent.}\nAny tangent to the sphere at any point is parallel to one of it's diameter and is at  a distance equal to $R$ from the centre.  Now by the application of the theorem of parallel axis  the moment of inertia becomes,\n\\begin{align*}\n\\intertext{According to the theorem of parallel axis, the moment of inertia about any axis is given by,}\nI&= I_{cm}+M d^{2}\\quad  \\text{Where $d$ is the distance from the axis to center of mass .}\\\\\n\\text{Here,}\\quad d&=R\\\\\nI&=\\frac{2}{3} m R^{2}+m R^{2}\\\\&=\\frac{5}{3} m R^{2} \n\\end{align*}\n\\section{Kinetic energy of a rotating body}\nThe kinetic energy of a rotating body about an axis depends not only upon it's mass and angular velocity but also depends upon the position of the axis and the distribution of mass about that axis.\nSuppose a body of mass $M$ is rotating about an axis . Assume that the body is a system of small particles . \n\\begin{align*}\n\\intertext{Let us consider a particle $P$ of elementary mass $m$ at a distance $r$ from the axis.}\n\\text{The angular velocity}&=\\omega\\\\\n\\text{The linear velocity }&=r \\omega\n\\intertext{Then the kinetic energy of rotation,}\n\\text{K.E}&=\\frac{1}{2} m r^{2} \\omega^{2} \n\\intertext{Hence the kinetic energy (K.E.) 
of rotation of the whole body is given by,}\nE&=\\Sigma \\frac{1}{2} m r^{2} \\omega^{2}\\\\&=\\frac{1}{2} \\omega^{2} \\Sigma m r^{2}\\\\\n\\text{Now,} \\  \\Sigma m r^{2}&=I \\quad \\text{ The moment of inertia about the  given axis}\n\\intertext{Then the kinetic energy of rotation,} E&=\\frac{1}{2} I \\omega^{2}\\\\\n\\text{If} \\ \\omega&=1, \\text{Obviously}\\ I=2  E\n\\intertext{Hence the moment of inertia of a body may also be defined as twice its kinetic energy of rotation,}\n\\text{But angular momentum} J&=I \\omega \\quad \\text{where $J$ is along the axis of rotation}\\\\\n\\therefore \\quad J&=I\\sqrt{2 E / I}=\\sqrt{2 E I} \\\\\n\\text { or } E&=J^{2} / 2 I\n\\intertext{This is the relation between rotational kinetic energy and angular momentum on the same aaxis}\n\\end{align*} \n\\newpage\n\\colorlet{ocre1}{ocre!70!}\n\\colorlet{ocrel}{ocre!30!}\n\\setlength\\arrayrulewidth{1pt}\n\\begin{table}[H]\n\t\\centering\n\t\\arrayrulecolor{ocre}\n\t\\renewcommand*{\\arraystretch}{1.5}\n\t\\begin{tabular}{|p{4cm}|p{6cm}|p{3cm}|}\n\t\t\\hline\n\t\t\\multicolumn{3}{|c|}{\\textbf{Center of mass of uniform systems}}\\\\\\hline\\hline\n\t\t\\rowcolor{ocrel}Uniform body& & Moment of Inertia\\\\\\hline\n\t\tThin Rod (About center)&\n\t\t\\includegraphics[height=1.5cm,width=3.5cm]{moment i 1}\n\t\t&\\textbf{$\\frac{1}{12}ML^{2}$}   \\\\\\hline \n\t\tThin Rod (About End)& \t\t\t\\includegraphics[height=1.5cm,width=3.5cm]{moment i 2} & \\textbf{$\\frac{1}{2}ML^{3}$}   \\\\\\hline \n\t\tRectangular plane (About center)& \\includegraphics[height=2.5cm,width=4cm]{ moment i 3} &\\textbf{$\\frac{1}{12}Ma^{2}$}   \\\\\\hline \n\t\tRectangular plane (About Edge)&\t\\includegraphics[height=2.5cm,width=4cm]{ moment i 4} & \\textbf{$\\frac{1Ma^{2}}{2}$}   \\\\\\hline \n\t\tCylinder& \\includegraphics[height=2.5cm,width=3cm]{moment i 5} &\\textbf{$\\frac{1}{2}MR^{2}$}   \\\\\\hline\n\t\tCylindrical hoop& \\includegraphics[height=2.5cm,width=3cm]{moment i 6} &\\textbf{$MR^{2}$}   \\\\\\hline\n\t\tSolid sphere(About diameter)& \\includegraphics[height=2.5cm,width=3.2cm]{moment i 7} &\\textbf{$\\frac{2}{3}MR^{2}$}   \\\\\\hline\n\t\tSpherical shell (About diameter)& \\includegraphics[height=2.5cm,width=3.2cm]{moment i 8} &\\textbf{$\\frac{2}{5} MR^{2}$}   \\\\\\hline\n\t\\end{tabular}\n\\end{table}\n\\newpage\n\\begin{abox}\nPractice set 1\n\\end{abox}\n\\begin{enumerate}\n\\begin{minipage}{\\textwidth}\n\t\\item An annulus of mass $M$ made of a material of uniform density has inner and outer radii $a$ and $b$ respectively. Its principle moment of inertia along the axis of symmetry perpendicular to the plane of the annulus is:\n\t\\exyear{NET DEC 2011}\n\\end{minipage}\n\\begin{tasks}(2)\n\t\\task[\\textbf{A.}] $\\frac{1}{2} M \\frac{\\left(b^{4}+a^{4}\\right)}{\\left(b^{2}-a^{2}\\right)}$\n\t\\task[\\textbf{B.}]$\\frac{1}{2} M \\pi\\left(b^{2}-a^{2}\\right)$\n\t\\task[\\textbf{C.}]$\\frac{1}{2} M\\left(b^{2}-a^{2}\\right)$\n\t\\task[\\textbf{D.}]$\\frac{1}{2} M\\left(b^{2}+a^{2}\\right)$\n\\end{tasks}\n\\begin{minipage}{\\textwidth}\n\t\\item Two bodies of equal mass $m$ are connected by a massless rigid rod of length $l$ lying in the $x y$-plane with the centre of the rod at the origin. 
If this system is rotating about the $z$-axis with a frequency $\\omega$, its angular momentum is\n\t\\exyear{NET DEC 2012}\n\\end{minipage}\n\\begin{tasks}(2)\n\t\\task[\\textbf{A.}] $m l^{2} \\omega / 4$\n\t\\task[\\textbf{B.}]$m l^{2} \\omega / 2$\n\t\\task[\\textbf{C.}]$m l^{2} \\omega$\n\t\\task[\\textbf{D.}]$2 m l^{2} \\omega$\n\\end{tasks}\n\\begin{minipage}{\\textwidth}\n\t\\item Two masses $m$ each, are placed at the points $(x, y)=(a, a)$ and $(-a,-a)$ and two masses, $2 m$ each, are placed at the points $(a,-a)$ and $(-a, a)$. The principal moments of inertia of the system are\n\t\\exyear{NET DEC 2015}\n\\end{minipage}\n\\begin{tasks}(2)\n\t\\task[\\textbf{A.}] $2 m^{2}, 4 m a^{2}$\n\t\\task[\\textbf{B.}]$4 m a^{2}, 8 m a^{2}$\n\t\\task[\\textbf{C.}] $4 m a^{2}, 4 m a^{2}$\n\t\\task[\\textbf{D.}] $8 m a^{2}, 8 m a^{2}$\n\\end{tasks}\n\\begin{minipage}{\\textwidth}\n\t\\item A disc of mass $m$ is free to rotate in a plane parallel to the $x y$ plane with an angular velocity $-\\omega \\hat{z}$ about a massless rigid rod suspended from the roof of a stationary car (as shown in the figure below). The rod is free to orient itself along any direction.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=4cm,width=5cm]{diagram-20210926(37)-crop}\n\t\\end{figure}\n\tThe car accelerates in the positive $x$-direction with an acceleration $a>0 .$ Which of the following statements is true for the coordinates of the centre of mass of the disc in the reference frame of the car?\n\t\\exyear{NET DEC 2017}\n\\end{minipage}\n\\begin{tasks}(2)\n\t\\task[\\textbf{A.}] only the $x$ and the $z$ coordinates change\n\t\\task[\\textbf{B.}]only the $y$ and the $z$ coordinates change\n\t\\task[\\textbf{C.}]only the $x$ and the $y$ coordinates change\n\t\\task[\\textbf{D.}]all the three coordinates change\n\\end{tasks}\n\\end{enumerate}\n\n\n\\newpage\n\\begin{abox}\n\tPractice set 2\n\t\\end{abox}\n\\begin{enumerate}\n\\begin{minipage}{\\textwidth}\n\t\\item A uniform solid cylinder is released on a horizontal surface with speed $5 \\mathrm{~m} / \\mathrm{s}$ without any rotation (slipping without rolling). The cylinder eventually starts rolling without slipping. If the mass and radius of the cylinder are $10 \\mathrm{gm}$ and $1 \\mathrm{~cm}$ respectively, the final linear velocity of the cylinder is.............. $\\mathrm{m} / \\mathrm{s}$. (up to two decimal places).\n\t\\exyear{GATE 2017}\n\\end{minipage}\n\\begin{minipage}{\\textwidth}\n\t\\item A uniform circular disc of mass $m$ and radius $R$ is rotating with angular speed $\\omega$ about an axis passing through its centre and making an angle $\\theta=30^{\\circ}$ with the axis of the disc. If the kinetic energy of the disc is $\\alpha m \\omega^{2} R^{2}$, the value of $\\alpha$ is (up to two decimal places).\n\t\\exyear{GATE 2018}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3cm,width=3cm]{diagram-20210915(17)-crop}\n\t\\end{figure}\n\\end{minipage}\n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\newpage\n\\begin{abox}\n\tPractise set-3\n\\end{abox}\n\\begin{enumerate}[label=\\color{ocre}\\textbf{\\arabic*.}]\n\t\\item Show that the centre of mass of a rod of mass $M$ and length L lies midway between its ends, assuming the rod has a uniform mass per unit length.\\\\\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[height=3cm,width=5cm]{fm14}\n\t\\end{figure}\n\t\\begin{answer}\n\t\tBy symmetry, we see that $y_{C M}=z_{C M}=0$ if the rod is placed along the $x$ axis. 
Furthermore, if we call the mass per unit length $\\lambda$ (the linear mass density), then $\\lambda=\\mathrm{M} / \\mathrm{L}$ for a uniforr rod.\n\t\tIf we divide the rod into elements of length $d x$, then the mass of each element is $\\mathrm{dm}=? \\mathrm{dx}$. Since an arbitrary element of each element is at a distance $x$ from the origin, equation gives,\n\t\t\\begin{align*}\n\t\tx_{C M}&=\\frac{1}{M} \\int_{0}^{L} x d m\\\\&=\\frac{1}{M} \\int_{0}^{L} x \\lambda d x\\\\&=\\frac{\\lambda L^{2}}{2 M}\\\\\n\t\t\\text{Because }\\lambda&=M / L \\ \\text{ this reduces to,}\\ x_{C M}=\\frac{L^{2}}{2 M}\\left(\\frac{M}{L}\\right)=\\frac{L}{2}\n\t\t\\intertext{One can also argue that by symmetry,} \\ x_{C M}&=\\frac{L }{2}\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item $A$ body of radius $R$ and mass $M$ is rolling horizontally without slipping with speed $v$, it then rolls $u p$ a hill to a maximum height $h .$ If $h=3 v^{2} / 4 \\mathrm{~g}$, \\\\(a) what is the moment of inertia of the body? \\\\(b) what might be the shape of the body?\n\\begin{answer}\n\t\\begin{align*}\n\t\\text{(a)}\\quad \\mathrm{K}_{\\text {total }}&=\\mathrm{K}_{\\text {trans. }}+\\mathrm{K}_{\\text {rot. }}=\\frac{1}{2} \\mathrm{M} \\mathrm{v}^{2}+\\frac{1}{2} \\mathrm{I} \\omega^{2}\\\\\n\t&=\\frac{1}{2} M v^{2}+\\frac{1}{2} I\\left(\\frac{v^{2}}{R^{2}}\\right)=\\frac{v^{2}}{2}\\left[M+\\frac{I}{R^{2}}\\right]\n\t\\intertext{When it rolls up a hill to height $\\mathrm{h}$, the entire kinetic energy is converted into potential energy $\\mathrm{M} \\mathrm{g} \\mathrm{h}$}\n\t\\text{Thus} \\frac{\\mathrm{v}^{2}}{2}\\left[\\mathrm{M}+\\frac{\\mathrm{I}}{\\mathrm{R}^{2}}\\right]&=\\mathrm{Mgh}=\\mathrm{Mg}\\left[3 \\frac{\\mathrm{v}^{2}}{4 \\mathrm{~g}}\\right]\\\\\n\t\\text{or }\\left[\\mathrm{M}+\\frac{\\mathrm{I}}{\\mathrm{R}^{2}}\\right]&=\\frac{3}{2} \\mathrm{M} \\\\ \\therefore \\mathrm{I}&=\\frac{\\mathrm{MR}^{2}}{2}\n\t\\intertext{\\text{(b)}\\quad The body may be a circular disc or a solid cylinder.}\n\t\\end{align*}\n\\end{answer}\n\t\\item Let $g$ be the acceleration due to gravity at earth's surface and $K$ be the rotational kinetic energy of the earth. Suppose the earth's radius decrease by $2 \\%$, keeping all other quantities same, then\n\\begin{tasks}(1)\n\t\\task[\\textbf{A.}] g decreases by $2 \\%$ and $\\mathrm{K}$ decreases by $4 \\%$\n\t\\task[\\textbf{B.}] g decreases by $4 \\%$ and k increases by $2 \\%$\n\t\\task[\\textbf{C.}] $\\mathrm{g}$ increases by $4 \\%$ and $\\mathrm{K}$ decreases by $4 \\%$\n\t\\task[\\textbf{D.}] g decreases by $4 \\%$ and $\\mathrm{K}$ increases by $4 \\%$\n\\end{tasks}\n\\begin{answer}\n\t\\begin{align*}\n\t\\text{We know that, }g&=\\frac{G M}{R^{2}}\\\\\n\t\\text{Differentiating, }\\frac{\\mathrm{dg}}{\\mathrm{g}}&=-\\left(\\frac{2 \\mathrm{dR}}{\\mathrm{R}}\\right)\\\\\n\t\\text{Further, }\\mathrm{K}&=\\frac{1}{2} \\mathrm{I} \\omega^{2}=\\frac{1}{2}\\left[\\frac{3}{5} \\mathrm{M} \\mathrm{R}^{2}\\right] \\omega^{2}\\\\\n\t\\text{\tor }\\frac{\\mathrm{dK}}{\\mathrm{K}}&=\\frac{3}{10} \\mathrm{M} \\omega^{2} \\times\\left(\\frac{2 \\mathrm{dR}}{\\mathrm{R}}\\right)\n\t\\intertext{When radius decreases by $2 \\%$, then $\\mathrm{g}$ increases by $4 \\%$ and K decreases by $4 \\%$.}\n\t\\end{align*}\n\tSo the correct answer is \\textbf{Option (C)}\n\\end{answer}\n\t\\item A cubical block of side $a$ is moving with velocity $v$ on a horizontal smooth plane as shown in fig. It hits a ridge at point $O$. 
The angular speed of the block after it hits $O$ is\\\\\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=1.5cm,width=5cm]{fm16}\n\\end{figure}\n\\begin{tasks}(4)\n\t\\task[\\textbf{A.}] $3 \\mathrm{v} /(4 \\mathrm{a})$\n\t\\task[\\textbf{B.}] $3 \\mathrm{v} /(2 \\mathrm{a})$\n\t\\task[\\textbf{C.}] $\\sqrt{3} \\mathrm{v} /(\\sqrt{2} \\mathrm{a})$\n\t\\task[\\textbf{D.}] Zero\n\\end{tasks}\n\\begin{answer}$\\left. \\right. $\\\\\n\t\\begin{minipage}{0.65\\textwidth}\n\t\t\\begin{align*}\n\t\t\\intertext{By conservation of angular momentum, we have}\n\t\tI \\omega&=M \\vee(a / 2)\\\\\n\t\t\\text{\tHere, }I&=\\frac{M a^{2}}{6}+M\\left(\\frac{a}{\\sqrt{2}}\\right)^{2}\\\\&=\\frac{2 M a^{2}}{3} \\\\\n\t\t\\therefore \\frac{2 \\mathrm{Ma}^{2}}{3} \\omega&=\\frac{\\mathrm{Mva}}{2} \\\\ \\omega&=\\frac{3 \\mathrm{v}}{4 \\mathrm{a}}\n\t\t\\end{align*}\n\t\\end{minipage}\n\t\\begin{minipage}{0.35\\textwidth}\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2cm,width=5cm]{fm17}\n\t\t\\end{figure}\n\t\\end{minipage}\n\\end{answer}\n\t\\item A sphere is spinned in a clockwise direction by angular velocity $\\omega$ and then it is released to a rough surface. Find the time elapsed by the sphere, till it starts pure rolling (coefficient of friction is $\\mu$ ) and radius of sphere is $R$.\\\\\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3.3cm,width=5cm]{fm18}\n\\end{figure}\n\\begin{answer}\n\t\\begin{align}\n\t\\mathbf{N}&=\\mathrm{mg}\\notag\n\t\\intertext{$\\mathrm{f}=\\mu \\mathrm{mg}$ (friction opposes relative motion, so it is in forward direction)}\\notag\\\\\n\ta_{c m}&=\\frac{f}{m}=\\mu g\\label{fm01}\\\\\n\t\\text{Also }\\mathrm{f} \\times \\mathrm{R}&=\\mathrm{I} \\alpha\\notag\\\\\n\t\\Rightarrow \\mu \\mathrm{mgR}&=\\mathrm{I} \\alpha \\Rightarrow \\alpha=\\frac{\\mu \\mathrm{mgR}}{\\mathrm{I}}=\\frac{\\mu \\mathrm{mgR}}{2 / 5 \\mathrm{mR}^{2}}=\\frac{5}{2} \\frac{\\mu \\mathrm{g}}{\\mathrm{R}}\\notag\\\\\n\t\\omega&=\\omega_{\\mathrm{o}}-\\alpha \\mathrm{t}=\\omega_{\\mathrm{o}}-\\frac{5}{2} \\frac{\\mu \\mathrm{gt}}{\\mathrm{R}}\\label{fm02}\n\t\\intertext{At the instant, $\\mathrm{m}$ starts pure rolling}\\notag\n\t\\omega R&=v\\text{ (lowest point is at rest)}\\notag\\\\\n\tv_{c m}&=0+\\mu g t\\label{fm03}\n\t\\intertext{$[$ from eqns. $(\\ref{fm02})$ and $(\\ref{fm03})$ and $\\omega R=v]$}\\notag\n\t\\Rightarrow \\omega \\mathrm{R}&=\\frac{7}{2} \\mu \\mathrm{gt} \\Rightarrow \\mathrm{t}=\\frac{2}{7} \\frac{\\omega \\mathrm{R}}{\\mu \\mathrm{g}}\\notag\n\t\\end{align}\n\\end{answer}\n\\item  A uniform solid cube has mass $M$ and side ' $a$ '. Calculate moment of inertia of the cube about an axis which coincides with one of it's sides\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3cm,width=3cm]{Rigid body pset-3}\n\\end{figure}\n\\begin{answer}\n\t\\begin{align*}\n\t\\intertext{Moment of inertia of a cube about an axis passing through it's  centre}\n\tI_{c}&=\\frac{Ma^{2}}{6}\n\t\\intertext{Seperation between two axes}\n\td&=\\frac{a}{\\sqrt{2}}\n\t\\intertext{Then according to the parellel axis theorem}\n\tI&=I_{c}+Md^{2}\\\\\n\t\t&=\\frac{Ma^{2}}{6}+M\\left( {\\frac{a}{\\sqrt{2}}}\\right) ^{2}\\\\\n\t\t&=\\frac{2 Ma^{2}}{3}\n\t\\end{align*}\n\\end{answer}\n \\item A $1 \\mathrm{~kg}$ mass of clay, moving with a velocity of 10 $\\mathrm{m} / \\mathrm{s}$, strikes a stationary wheel and sticks to it. 
The solid wheel has a mass of $20 \\mathrm{~kg}$ and a radius of $1 \\mathrm{~m}$.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=2cm,width=5cm]{mi 1}\n\t\n\\end{figure}\n Assuming that the wheel and the ground are both rigid and that the wheel is set into pure rolling motion, the angular velocity of the wheel immediately after the impact is, approximately,\n \\begin{tasks}(2)\n \t\\task[\\textbf{a.}]Zero. \n \t\\task[\\textbf{b.}]$1 / 3 \\ \\mathrm{rad} / \\mathrm{s}$\n \t\\task[\\textbf{c.}]$\\sqrt{10 / 3}\\  \\mathrm{rad} / \\mathrm{s}$ \n \t\\task[\\textbf{d.}]$10 / 3 \\ \\mathrm{rad} / \\mathrm{s}$   \n \\end{tasks}\n\\begin{answer}\n\tThe moment of inertia of wheel of mass $m$ and radius $r$ w.r.t. a normal axis passing through circumference is\n\t\\begin{align*}\n\tI &=\\frac{m r^{2}}{2}+m r^{2} \\\\\n\t&=\\frac{3}{2} m r^{2}\n\t\\end{align*}\n\tIf $\\omega$ is the velocity of pure rolling of wheel after the impact, total kinetic energy of both the masses before and after the impact will be equal, therefore,\n\t\\begin{align*}\n\t\\frac{1}{2} \\times 1 \\times 10^{2} &=\\frac{1}{2}\\left(\\frac{3}{2} \\times 20 \\times 1^{2}\\right) \\omega^{2} \\\\\n\t\\omega &=\\sqrt{\\frac{10}{3}}\n\t\\end{align*}\n\tSo the correct answer is \\textbf{option(c)}\n\\end{answer}\n\\end{enumerate}\n\n\n\n", "meta": {"hexsha": "e1ca7f6759c64ce560db2ad893ba7118e46a2713", "size": 32045, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Classical Mechanics  -CSIR/chapter/Rigid body dynamics.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Classical Mechanics  -CSIR/chapter/Rigid body dynamics.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Classical Mechanics  -CSIR/chapter/Rigid body dynamics.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.7667785235, "max_line_length": 561, "alphanum_fraction": 0.6706194414, "num_tokens": 11578, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8397339736884712, "lm_q1q2_score": 0.5703294279031196}}
{"text": "\\documentclass[letterpaper, twoside, 12pt]{book}\n\\usepackage{packet}\n\n\n\\begin{document}\n\n\\setcounter{chapter}{1}\n\n\\chapter{Part 2.2: Sections 14.1-14.3}\n\n\\setcounter{chapter}{14}\n\\setcounter{section}{0}\n\n\\section{Functions of Several Variables} %14.1\n\n\\begin{definition}\n  A \\textbf{function $f$ of two variables} is a rule which assigns a\n  real number $f(x,y)$ to each pair of real numbers $(x,y)$ for which\n  that rule is defined.\n  The collection of such well-defined pairs is called the\n  \\textbf{domain} $\\text{dom}(f)$ of the function, and the set of\n  real numbers which\n  can possiblely be produced by the function is called its\n  \\textbf{range} $\\text{ran}(f)$.\n\\end{definition}\n\n\\begin{definition}\n  The \\textbf{level curve} for each $k\\in\\text{ran}(f)$ is given by the\n  equation $f(x,y)=k$.\n  The \\textbf{graph} of $f$ is a surface in 3D space which visualizes the function, given by the equation $z=f(x,y)$.\n\\end{definition}\n\n\\begin{definition}\n  A \\textbf{function $f$ of three variables} is a rule which assigns a\n  real number $f(x,y,z)$ to each triple of real numbers $(x,y,z)$ for which\n  that rule is defined.\n  The collection of such well-defined triples is called the\n  \\textbf{domain} $\\text{dom}(f)$ of the function, and the set of\n  real numbers which\n  can possiblely be produced by the function is called its\n  \\textbf{range} $\\text{ran}(f)$.\n\\end{definition}\n\n          \\begin{problem}\n            Let $f(x,y)=x\\sin(x+y)$. Give the value of $f(\\pi,\\frac{\\pi}{2})$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Let $f(x,y)=-x-y+2$. In the $xy$-plane, plot the domain of $f$,\n            as well as its level curves for $k=-3,0,3$. Then plot the graph\n            of $f$ in $xyz$ space.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Let $f(x,y)=\\sqrt{4-x^2-y^2}$. In the $xy$-plane,\n            plot the domain of $f$,\n            as well as its level curves for $k=0,\\frac{1}{\\sqrt{2}},1$.\n            Then plot the graph of $f$ in $xyz$ space.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{definition}\n  The \\textbf{level surface} for each $k\\in\\text{ran}(f)$ is given by the\n  equation $f(x,y,z)=k$.\n  (Since the graph of a three variable function would require four\n  variables and therefore is a four-dimensional object, we typically\n  don't consider it.)\n\\end{definition}\n\n          \\begin{problem}\n            Let $f(x,y,z)=\\frac{x+3y^2}{z-2x}$. Give the value of $f(3,-2,1)$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Let $f(x,y,z)=-x^2+y-z^2$. 
In $xyz$ space, plot the level\n            surfaces for $k=-2,0,2$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  If $P=(x,y)$, then we assume that $f(x,y)=f(P)=f(\\vect{P})$.\n  If $P=(x,y,z)$, then we assume that $f(x,y,z)=f(P)=f(\\vect{P})$.\n\\end{remark}\n\n\n\\section{Limits and Continuity} %14.2\n\n\\begin{definition}\n  If the value of the function $f(P)$ becomes arbitrarily close to the number\n  $L$ as points $P$ close to $P_0$ are plugged into the function, then the\n  \\textbf{limit of $f(P)$ as $P$ approaches $P_0$} is $L$:\n  \\[\\lim_{P\\to P_0} f(P) = L\\]\n\\end{definition}\n\n\\begin{theorem}\n  Let $f(x,y)$ be a function of two variables.\n  If there exists a curve $y=g(x)$ passing through the\n  point $(x_0,y_0)$ such that $\\lim_{x\\to x_0}f(x,g(x))$ does not exist,\n  then $\\lim_{(x,y)\\to(x_0,y_0)}f(x,y)$ does not exist.\n\\end{theorem}\n\n          \\begin{problem}\n            Prove that\n              \\[\n                \\lim_{(x,y)\\to(0,0)} \\frac{x+y}{|x+y|}\n              \\]\n            does not exist by considering the function $g(x)=x$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{theorem}\n  Let $f(x,y)$ be a function of two variables.\n  If there exist curves $y=g(x)$ and $y=h(x)$ passing through the\n  point $(x_0,y_0)$ such that\n  $\\lim_{x\\to x_0}f(x,g(x))\\not=\\lim_{x\\to x_0}f(x,h(x))$,\n  then $\\lim_{(x,y)\\to(x_0,y_0)}f(x,y)$ does not exist.\n\\end{theorem}\n\n          \\begin{problem}\n            Prove that\n              \\[\n                \\lim_{(x,y)\\to(0,0)} \\frac{x^6+y^2}{x^3y+x^6}\n              \\]\n            does not exist by considering the functions\n            $g(x)=x^3$ and $h(x)=2x^3$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{theorem}\n  The ``Limit Laws'' for single-variable functions also hold for\n  multi-variable functions.\n    \\[\n      \\lim_{P\\to P_0}(f(P)\\pm g(P))\n        =\n      \\lim_{P\\to P_0}f(P) \\pm \\lim_{P\\to P_0}g(P)\n    \\]\n    \\[\n      \\lim_{P\\to P_0}(f(P)\\cdot g(P))\n        =\n      \\lim_{P\\to P_0}f(P) \\cdot \\lim_{P\\to P_0}g(P)\n    \\]\n    \\[\n      \\lim_{P\\to P_0}(kf(P))\n        =\n      k\\lim_{P\\to P_0}f(P)\n    \\]\n    \\[\n      \\lim_{P\\to P_0}\\frac{f(P)}{g(P)}\n        =\n      \\frac{\\ds \\lim_{P\\to P_0}f(P)}{\\ds \\lim_{P\\to P_0}g(P)}\n    \\]\n    \\[\n      \\lim_{P\\to P_0}(f(P))^{r/s}\n        =\n      \\left(\\lim_{P\\to P_0}f(P)\\right)^{r/s}\n    \\]\n\\end{theorem}\n\n\\begin{theorem}\n  Let $P_0=(x_0,y_0,z_0)$. 
Multi-variable limits which only use one\n  variable may be reduced to a single-variable limit.\n    \\[\n      \\lim_{P\\to P_0}f(x) = \\lim_{x\\to x_0}f(x)\n    \\]\n    \\[\n      \\lim_{P\\to P_0}g(y) = \\lim_{y\\to y_0}g(y)\n    \\]\n    \\[\n      \\lim_{P\\to P_0}h(z) = \\lim_{z\\to z_0}h(z)\n    \\]\n\\end{theorem}\n\n          \\begin{problem}\n            Use the above theorems to rigorously prove that\n              \\[\n                \\lim_{(x,y)\\to(1,2)}\n                \\frac{2x+y}{y^2}\n                  =\n                1\n              \\]\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  Due to the limit laws, the ``just plug it in'' rule applies when\n  plugging in does not result in an undefined operation.\n\\end{remark}\n\n          \\begin{problem}\n            Compute the limit\n              \\[\n                \\lim_{(x,y,z)\\to(3,0,-1)}\n                \\frac{x\\cos y}{z+x}\n              \\]\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  There is no L'Hopital rule for multi-variable limits.\n  However, you may still use it once the limit has been reduced\n  to a single-variable limit.\n\\end{remark}\n\n          \\begin{problem}\n            Compute the limit\n              \\[\n                \\lim_{(x,y)\\to(3,0)}\n                \\frac{xy+\\sin(2y)}{y}\n              \\]\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{remark}\n  Factoring and canceling (including conjugation tricks) are\n  also effective for computing multi-variable limits.\n\\end{remark}\n\n          \\begin{problem}\n            Compute the limit\n              \\[\n                \\lim_{(x,y,z)\\to(1,2,4)}\n                \\frac{\\sqrt{z}-xy}{z-x^2y^2}\n              \\]\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{definition}\n  A function $f(P)$ is \\textbf{continuous} if\n  $\\ds \\lim_{P\\to P_0}f(P) = f(P_0)$ for all points $P_0$ in its domain.\n\\end{definition}\n\n\\begin{theorem}\n  If a multi-variable function is composed of continuous single-variable\n  functions, then it is also continuous.\n\\end{theorem}\n\n\n\\section{Partial Derivatives} %14.3\n\n\\begin{definition}\n  The \\textbf{partial derivative of $f$ with respect to a variable} is\n  the rate of change of $f$ as that variable changes and all other variables\n  are held constant. 
For example:\n  \\[\\frac{\\p f}{\\p x}=f_x(x,y)=\\lim_{h\\to0}\\frac{f(x+h,y)-f(x,y)}{h}\\]\n  \\[\\frac{\\p g}{\\p z}=g_z(x,y,z)=\\lim_{h\\to0}\\frac{g(x,y,z+h)-g(x,y,z)}{h}\\]\n\\end{definition}\n\n          \\begin{problem}\n            Let $f(x,y,z)=xy^2+2z$.\n            Use the definition of a partial derivative to prove that\n            $\\frac{\\p f}{\\p y} = 2xy$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{theorem}\n  Partial derivatives may be computed in the usual way by treating\n  all other variables as constants.\n\\end{theorem}\n\n          \\begin{problem}\n            Compute both partial derivatives of $f(x,y)=4x^2-5y^3+xy-1$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Compute both partial derivatives of $f(x,y)=\\sin(x+3y)$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n          \\begin{problem}\n            Compute both partial derivatives of $f(x,y)=e^{xy^2}$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\begin{definition}\n  \\textbf{Second-order partial derivatives} are the result of taking the\n  partial derivative of a partial derivative. For example:\n  \\[\n    f_{xy} =\n    (f_x)_y =\n    \\frac{\\p}{\\p y}\\left(\\frac{\\p f}{\\p x}\\right) =\n    \\frac{\\p^2 f}{\\p y\\p x}\n  \\]\n  \\[\n    g_{zz} =\n    (g_z)_z =\n    \\frac{\\p}{\\p z}\\left(\\frac{\\p g}{\\p z}\\right) =\n    \\frac{\\p^2 g}{\\p z^2}\n  \\]\n\\end{definition}\n\n\\begin{theorem}\n  When computing the second-order partial derivative for a sufficiently\n  well-behaved function, the order in which the partial\n  derivatives are taken is irrelevant. (This is sometimes called the\n  \\textbf{Mixed Derivative Theorem}.)\n\\end{theorem}\n\n          \\begin{problem}\n            Verify the Mixed Derivative Theorem for\n            $f(x,y)=3x^2y^2-x^3+y^4-7$.\n          \\end{problem}\n\n          \\begin{solution}\n\n          \\end{solution}\n\n\\end{document}", "meta": {"hexsha": "1008f39097b9ec8ac230facd8d992298a88d68a6", "size": 9475, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packet2_2.tex", "max_stars_repo_name": "StevenClontz/teaching-2015-spring", "max_stars_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packet2_2.tex", "max_issues_repo_name": "StevenClontz/teaching-2015-spring", "max_issues_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packet2_2.tex", "max_forks_repo_name": "StevenClontz/teaching-2015-spring", "max_forks_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.9176136364, "max_line_length": 117, "alphanum_fraction": 0.5614775726, "num_tokens": 3001, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8397339736884712, "lm_q1q2_score": 0.5703294279031196}}
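The Mixed Derivative Theorem in the final problem above can also be checked symbolically. A minimal sketch, assuming SymPy is available:
\begin{lstlisting}[language=Python]
# Verify the Mixed Derivative Theorem for f(x,y) = 3x^2 y^2 - x^3 + y^4 - 7.
import sympy as sp

x, y = sp.symbols('x y')
f = 3*x**2*y**2 - x**3 + y**4 - 7

f_xy = sp.diff(f, x, y)  # differentiate in x first, then y
f_yx = sp.diff(f, y, x)  # differentiate in y first, then x
print(f_xy, f_yx, f_xy == f_yx)  # both equal 12*x*y
\end{lstlisting}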
{"text": "%Electrodynamics Homework_2\n\\documentclass[10pt,a4paper]{article}\n\\usepackage[UTF8]{ctex}\n\\usepackage{bm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\title{Electrodynamics Homework\\_2}\n\\author{\u9648\u7a3c\u9716 \\and 45875852}\n\\date{2019.2.25}\n\\begin{document}\n\\maketitle\n\\section{}\nLet\n\\[\nr=\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}\n\\]\nbe the distance between the point of the source $\\vec{x}'$ and the point to measure the field $\\vec{x}$. Furthermore, let us define $\\vec{r}=\\vec{x}'-\\vec{x}$. Show the following relations\n\\subsection{}\n\\[\n\\nabla r=-\\nabla'r=\\frac{\\vec{r}}{r}\n\\]\n\\textbf{pf:}\n\\footnotesize\\begin{align*}\n\\nabla r=&\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial x}\\vec{i}+\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial y}\\vec{j}\\\\\n&+\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial z}\\vec{k}\\\\\n=&\\frac{x-x'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{i}+\\frac{y-y'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{j}\\\\\n&+\\frac{z-z'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{k}\\\\\n=&\\frac{(x-x')\\vec{i}+(y-y')\\vec{j}+(z-z')\\vec{k}}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\\\\n=&\\frac{\\vec{r}}{r}\n\\end{align*}\n\\begin{align*}\n-\\nabla'r=&-\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial x'}\\vec{i}-\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial y'}\\vec{j}\\\\\n&-\\frac{\\partial\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}{\\partial z'}\\vec{k}\\\\\n=&\\frac{x-x'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{i}+\\frac{y-y'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{j}\\\\\n&+\\frac{z-z'}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\vec{k}\\\\\n=&\\frac{(x-x')\\vec{i}+(y-y')\\vec{j}+(z-z')\\vec{k}}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}\\\\\n=&\\frac{\\vec{r}}{r}\n\\end{align*}\\normalsize\n\\[\n\\therefore\\nabla r=-\\nabla'r=\\frac{\\vec{r}}{r}\n\\]\n\\subsection{}\n\\[\n\\nabla\\frac{1}{r}=-\\nabla'\\frac{1}{r}=-\\frac{\\vec{r}}{r^3}\n\\]\n\\textbf{pf:}\n\\footnotesize\\begin{align*}\n\\nabla\\frac{1}{r}=&\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial x}\\vec{i}+\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial y}\\vec{j}\\\\\n&+\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial z}\\vec{k}\\\\\n=&-\\frac{x-x'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{i}-\\frac{y-y'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{j}\\\\\n&-\\frac{z-z'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{k}\\\\\n=&-\\frac{(x-x')\\vec{i}+(y-y')\\vec{j}+(z-z')\\vec{k}}{[(x-x')^2+(y-y')^2+(z-z')^2]^{3/2}}\\\\\n=&-\\frac{\\vec{r}}{r^3}\n\\end{align*}\n\\begin{align*}\n-\\nabla'\\frac{1}{r}=&-\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial x'}\\vec{i}-\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial y'}\\vec{j}\\\\\n&-\\frac{\\partial\\frac{1}{\\sqrt{(x-x')^2+(y-y')^2+(z-z')^2}}}{\\partial 
z'}\\vec{k}\\\\\n=&-\\frac{x-x'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{i}-\\frac{y-y'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{j}\\\\\n&-\\frac{z-z'}{[(x-x')^2+(y-y')^2+(z-z')^2]^{\\frac{3}{2}}}\\vec{k}\\\\\n=&-\\frac{(x-x')\\vec{i}+(y-y')\\vec{j}+(z-z')\\vec{k}}{[(x-x')^2+(y-y')^2+(z-z')^2]^{3/2}}\\\\\n=&-\\frac{\\vec{r}}{r^3}\n\\end{align*}\\normalsize\n\\[\n\\therefore\\nabla\\frac{1}{r}=-\\nabla'\\frac{1}{r}=-\\frac{\\vec{r}}{r^3}\n\\]\n\\subsection{}\n\\[\n\\nabla\\times\\frac{\\vec{r}}{r^3}=0\n\\]\n\\textbf{pf:}\n\\begin{align*}\n\\nabla\\times\\frac{\\vec{r}}{r^3}=&(\\nabla\\frac{1}{r^3})\\times\\vec{r}+\\frac{1}{r^3}\\nabla\\times\\vec{r}\\\\\n\\end{align*}\nwhere\n\\begin{align*}\n\\nabla\\frac{1}{r^3}=&\\frac{\\partial\\frac{1}{[(x-x')^2+(y-y')^+(z-z')^2]^{3/2}}}{\\partial x}\\vec{i}+\\frac{\\partial\\frac{1}{[(x-x')^2+(y-y')^+(z-z')^2]^{3/2}}}{\\partial y}\\vec{j}\\\\\n&+\\frac{\\partial\\frac{1}{[(x-x')^2+(y-y')^+(z-z')^2]^{3/2}}}{\\partial z}\\vec{k}\\\\\n=&-\\frac{3(x-x')}{[(x-x')^2+(y-y')^+(z-z')^2]^{5/2}}\\vec{i}-\\frac{3(y-y')}{[(x-x')^2+(y-y')^+(z-z')^2]^{5/2}}\\vec{j}\\\\\n&-\\frac{3(z-z')}{[(x-x')^2+(y-y')^+(z-z')^2]^{5/2}}\\vec{k}\\\\\n=&-3\\frac{(x-x')\\vec{i}+(y-y')\\vec{j}+(z-z')\\vec{k}}{[(x-x')^2+(y-y')^+(z-z')^2]^{5/2}}\\\\\n=&-\\frac{3\\vec{r}}{r^5}\n\\end{align*}\nand\n\\begin{align*}\n\\nabla\\times\\vec{r}=&(\\frac{\\partial(z-z')}{\\partial y}-\\frac{\\partial(y-y')}{\\partial z})\\vec{i}+(\\frac{\\partial(x-x')}{\\partial z}-\\frac{\\partial(z-z')}{\\partial x})\\vec{j}+(\\frac{\\partial(y-y')}{\\partial x}-\\frac{\\partial(x-x')}{\\partial y})\\vec{k}\\\\\n=&\\vec{0}\n\\end{align*}\nTherefore,\n\\begin{align*}\n\\nabla\\times\\frac{\\vec{r}}{r^3}=&-3\\frac{\\vec{r}}{r^5}\\times\\vec{r}+\\vec{r}\\\\\n=&\\vec{0}\n\\end{align*}\n\\subsection{}\n\\[\n\\nabla\\cdot\\frac{\\vec{r}}{r^3}=-\\nabla'\\cdot\\frac{\\vec{r}}{r^3}=0\n\\]\n\\textbf{pf:}\n\\begin{align*}\n\\nabla\\cdot\\frac{\\vec{r}}{r^3}=&(\\nabla\\frac{1}{r^3})\\cdot\\vec{r}+\\frac{1}{r^3}\\nabla\\cdot\\vec{r}\\\\\n=&-\\frac{3\\hat{r}}{r^4}\\cdot\\vec{r}+\\frac{3}{r^3}\\\\\n=&0\n\\end{align*}\nwhere\n\\[\n\\nabla\\frac{1}{r^3}=-\\frac{3\\vec{r}}{r^5}\n\\]\nand\n\\[\n\\nabla\\cdot\\vec{r}=\\frac{\\partial(x-x')}{\\partial x}+\\frac{\\partial(y-y')}{\\partial y}+\\frac{\\partial(z-z')}{\\partial z}=3\n\\]\nSo we have\n\\begin{align*}\n\\nabla\\cdot\\frac{\\vec{r}}{r^3}=&-\\frac{3\\hat{r}}{r^5}\\cdot\\vec{r}+\\frac{3}{r^3}\\\\\n=&0\n\\end{align*}\nSimilarly,\n\\begin{align*}\n-\\nabla'\\cdot\\frac{\\vec{r}}{r^3}=&-(\\nabla'\\frac{1}{r^3})\\cdot\\vec{r}-\\frac{1}{r^3}\\nabla'\\cdot\\vec{r}\\\\\n=&-\\frac{3\\hat{r}}{r^5}\\cdot\\vec{r}+\\frac{3}{r^3}\\\\\n=&0\n\\end{align*}\nTherefore,\n\\[\n\\nabla\\cdot\\frac{\\vec{r}}{r^3}=-\\nabla'\\cdot\\frac{\\vec{r}}{r^3}=0\n\\]\n\\section{}\nShow that the interaction between two fixed current loops obeys Newton's third law.\n\n\\textbf{pf:} Suppose the two fixed current loops are marked as $1$ and $2$ respectively.\nThe Ampere force on a short part of the current loop $2$ is\n\\[\nd\\vec{F}_{12}=I_2d\\vec{l}_2\\times\\vec{B}_1=I_2d\\vec{l}_2\\times(\\frac{\\mu_0}{4\\pi}\\oint_{L_1}\\frac{I_1d\\vec{l}_1\\times\\vec{r}_{12}}{r_{12}^3})\n\\]\nThe total Ampere force on the current loop $2$ 
is\n\\begin{align*}\nF_{12}=&\\oint_{L_2}d\\vec{F}_2=\\frac{\\mu_0I_1I_2}{4\\pi}\\oint_{L_2}\\oint_{L_1}\\frac{d\\vec{l}_2\\times(d\\vec{l}_1\\times\\vec{r}_{12})}{r_{12}^3}\\\\\n=&\\frac{\\mu_0I_1I_2}{4\\pi}\\oint_{L_2}\\oint_{L_1}\\frac{(d\\vec{l}_2\\cdot\\vec{r}_{12})d\\vec{l}_1-(d\\vec{l}_2\\cdot d\\vec{l}_1)\\vec{r}_{12}}{r_{12}^3}\n\\end{align*}\nwhere\n\\begin{align*}\n\\oint_{L_2}\\oint_{L_1}\\frac{(d\\vec{l}_2\\cdot\\vec{r}_{12})d\\vec{l}_1}{r_{12}^3}=&\\oint_{L_1}d\\vec{l}_1\\oint_{L_2}\\frac{\\vec{r}_{12}\\cdot d\\vec{l}_2}{r_{12}^3}\\\\\n=&\\oint_{L_1}d\\vec{l}_1\\iint_{S_2}\\nabla\\times(\\frac{\\vec{r}_12}{r_{12}^3})\\cdot d\\vec{S}_2\\\\\n=&\\oint_{L_1}d\\vec{l}_1\\iint_{S_2}\\vec{0}\\cdot d\\vec{S}_2=0\n\\end{align*}\nSo we have\n\\[\nF_{12}=-\\frac{\\mu_0I_1I_2}{4\\pi}\\oint_{L_2}\\oint_{L_1}\\frac{(d\\vec{l}_2\\cdot d\\vec{l}_{1})\\vec{r}_{12}}{r_{12}^3}\n\\]\nSimilarly, the Ampere force on  current loop $1$ is\n\\[\nF_{21}=-\\frac{\\mu_0I_2I_1}{4\\pi}\\oint_{L_1}\\oint_{L_2}\\frac{(d\\vec{l}_1\\cdot d\\vec{l}_2)\\vec{r}_{21}}{r_{21}^3}=\\frac{\\mu_0I_1I_2}{4\\pi}\\oint_{L_2}\\oint_{L_1}\\frac{(d\\vec{l}_2\\cdot d\\vec{l}_{1})\\vec{r}_{12}}{r_{12}^3}\n\\]\nTherefore,\n\\[\n\\vec{F}_{12}=-\\vec{F}_{21}\n\\]\nwhich means the interaction between two fixed current loops obeys Newton's third law.\n\\section{}\nUse the equation below to find the related equation for the conduction current $\\vec{J}=n_fe\\vec{v}$. Solve this equation for $\\vec{E}(t)=\\vec{E}_0\\delta(t)$ if $\\vec{J}(t<0)=0$. What is $\\vec{J}$ immediately after $t=0$? Connect this with the sum rule.\n\\[\n\\frac{d\\vec{v}}{dt}=-\\gamma\\vec{v}+\\frac{e}{m}\\vec{E}\n\\]\n\\textbf{Sol}: Multiply both side of the equation with $e^{\\gamma t}$ and move the items with $\\vec{v}$ to the left side to get\n\\begin{align*}\n&e^{\\gamma t}\\frac{d\\vec{v}}{dt}+\\gamma e^{\\gamma t}\\vec{v}=\\frac{e}{m}e^{\\gamma t}\\vec{E}\\\\\n\\Longrightarrow&\\frac{d(e^{\\gamma t}\\vec{v})}{dt}=\\frac{e}{m}e^{\\gamma t}\\vec{E}\n\\end{align*}\nIntegrate both side of the equation about $t$ from $-\\infty$ to $t$ to get\n\\begin{align*}\n&e^{\\gamma t}\\vec{v}(t)=\\frac{e}{m}\\int_{-\\infty}^{t}e^{\\gamma\\tau}\\vec{E}(\\tau)d\\tau\\\\\n\\Longrightarrow&\\vec{v}(t)=\\frac{e}{m}\\cdot e^{-\\gamma t}\\int_{-\\infty}^{t}e^{\\gamma\\tau}\\vec{E}(\\tau)d\\tau\n\\end{align*}\nThe related equation for the conduction current is\n\\[\n\\vec{J}(t)=n_fe\\vec{v}(t)=\\frac{n_fe^2}{m}\\cdot e^{-\\gamma t}\\int_{-\\infty}^{t}e^{\\gamma\\tau}\\vec{E}(\\tau)d\\tau\n\\]\nThe electricity density immediately after $t=0$ is\n\\[\n\\vec{J}(0^+)=\\frac{n_fe^2}{m}\\cdot e^{-\\gamma t}\\int_{-\\infty}^{0^+}e^{\\gamma\\tau}E_0\\delta(\\tau)d\\tau=\\frac{n_fe^2E_0}{m}\n\\]\n\\end{document}\n", "meta": {"hexsha": "3d21b353c2d41065014bdc30cab65315ba7249d2", "size": 8170, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework_2/Homework_2.tex", "max_stars_repo_name": "Chen-Jialin/Electrodynamics-Assignments", "max_stars_repo_head_hexsha": "d05976fb6560e1b87affc19cb3cee69b4391ad89", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework_2/Homework_2.tex", "max_issues_repo_name": "Chen-Jialin/Electrodynamics-Assignments", "max_issues_repo_head_hexsha": "d05976fb6560e1b87affc19cb3cee69b4391ad89", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework_2/Homework_2.tex", "max_forks_repo_name": "Chen-Jialin/Electrodynamics-Assignments", "max_forks_repo_head_hexsha": "d05976fb6560e1b87affc19cb3cee69b4391ad89", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6448087432, "max_line_length": 253, "alphanum_fraction": 0.5722154223, "num_tokens": 3955, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7057850278370111, "lm_q1q2_score": 0.570321740811513}}
{"text": "% This is a model template for the solutions in computational science. You can find a very useful documentation for LaTeX in Finnish at ftp://ftp.funet.fi/pub/TeX/CTAN/info/lshort/finnish/ or in English at ftp://ftp.funet.fi/pub/TeX/CTAN/info/lshort/english/. The section List of mathematical symbols in Chapter 3 is especially useful for the typesetting of mathematical formulas.\n\n% Compile the document to PDF by command 'pdflatex model.tex' in the terminal. The command must be run twice for the references in the text to be correct.\n\n\\documentclass[a4paper,11pt]{article}\n\\usepackage[utf8]{inputenc}\n% This includes letters such as \ufffd and \ufffd\n\\usepackage[T1]{fontenc}\n% Use here 'Finnish' for Finnish hyphenation. You may have to compile the code twice after the change. \n\\usepackage[english]{babel}\n\\usepackage{graphicx}\n% Some math stuff\n\\usepackage{amsmath,amsfonts,amssymb,amsbsy,commath,booktabs,hyperref}  \n% This is just to include the urls\n\\usepackage{hyperref}\n\\usepackage[margin=2cm]{geometry}\n\n\\setlength{\\parindent}{0mm}\n\\setlength{\\parskip}{1.0\\baselineskip}\n\n\\usepackage{listings}\n\\usepackage{color}\n\\usepackage{pdfpages}\n\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\n\\lstset{frame=tb,\n\tlanguage=Python,\n\taboveskip=3mm,\n\tbelowskip=3mm,\n\tshowstringspaces=false,\n\tcolumns=flexible,\n\tbasicstyle={\\tiny\\ttfamily},\n\tnumbers=none,\n\tnumberstyle=\\tiny\\color{gray},\n\tkeywordstyle=\\color{blue},\n\tcommentstyle=\\color{dkgreen},\n\tstringstyle=\\color{mauve},\n\tbreaklines=true,\n\tbreakatwhitespace=true,\n\ttabsize=4\n}\n\n\\begin{document}\n\n\\title{Becs-114.1100 Computational Science -- exercise round 6} % Replace the exercise round number\n\\author{Kunal Ghosh, 546247} % Replace with your name and student number\n\\maketitle\n\\section{Solution to Question 4}\n\\subsection{Determining the natural cubic interpoland S(x) and its Derivative S'(x) from experimental data}\\label{prob2a}\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{plotS.png}\n    \\caption{Plot showing the natural cubic interpoland S(X). Values of $S(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.}\n\t\\label{fig:s}\n\\end{figure}\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{plotS_dash.png}\n    \\caption{Plot showing the first derivative of the natural cubic interpoland S'(X). 
Values of $S'(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.} \n\t\\label{fig:sdash}\n\\end{figure}\n\n\n% \\begin{table}[ht]\n% \\centering\n% \\label{my-label}\n% \\begin{tabular}{|c|c|c|}\n% \\hline\n%  \\textbf{n}&\\textbf{RMS Error}&\\textbf{Condition Number}  \\\\ \\hline\n%  2&0.73833521&15.0167409881  \\\\ \\hline\n%  5&1.02602254&282901.77002 \\\\ \\hline\n%  8&1.09505307&7657562245.89 \\\\ \\hline\n%  12&1.16783884&5.89342127254e+15 \\\\ \\hline\n%  15&2.11656192&1.74359790918e+17 \\\\ \\hline\n% \\end{tabular}\n% \\caption{Table showing the values of n and the RMS error after solving the system of linear equations with \\textbf{hilbert(n)} as the coefficients.}\n% \\end{table}\nThe corresponding python code can be found at \\ref{code:problem2a}\n\n\\subsection{Determining the natural cubic interpoland S(x) and its Derivative S'(x) from experimental data}\\label{prob2b}\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{matlabS.png}\n    \\caption{Plot showing the natural cubic interpoland S(X) as calculated using Matlab. Values of $S(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.}\n\t\\label{fig:smatlab}\n\\end{figure}\n\\begin{figure}[ht]\n\t\\center\n    \\includegraphics[scale=0.75]{matlabS_dash.png}\n    \\caption{Plot showing the first derivative of the natural cubic interpoland S'(X) as calculated using Matlab. Values of $S'(X_i)$ for $X_i$ in the range of (min(knot values), max(knot values)) are shown as red circles.} \n\t\\label{fig:sdash_matlab}\n\\end{figure}\nThe corresponding plots created by matlab are much more smoother because matlab implements an additional constraint that the third derivatives of the piecewise polynomials are also equal at the second and the last knots. This are apparently called the \"Not-A-Knot\" end conditions. This is only done when the length of \\textbf{t} and \\textbf{y} are same.\n\nNatural splines are a good choice only when the functions have 0 second derivative at the end points. I read it up from here. \\url{http://www.mathworks.com/matlabcentral/newsreader/view_thread/172988} would really appreciate some more information about why using the Not-A-Knot condition is better. 
\n\\\\\nThe corresponding matlab code can be found at \\ref{code:problem2b}\n\\clearpage\n\n\\section{Appendix A}\\label{code:problem2a}\nPython source code for \\ref{prob2a}.\n{\\footnotesize\n\\begin{lstlisting}\nfrom __future__ import division\nimport numpy as np\nimport pylab as pl\nimport string\n\ndef get_data(filename):\n    retVal = None\n    with open(filename) as f:\n        retVal = f.readlines()\n        retVal = map(string.strip, retVal)\n        retVal = map(string.split, retVal)\n        retVal = zip(*retVal)\n        # Assuming data just has columns of floats\n        for idx,_ in enumerate(retVal):\n            retVal[idx] = map(float, retVal[idx])\n    return retVal\n\ndef evaluate(t,y,z,x):\n    i = -1\n    for idx in range(len(t)-1):\n        if x - t[idx] <= 0:\n            i = idx\n            break\n    # Reduce the number of list accesses\n    yi = y[i]\n    ti = t[i]\n    zi = z[i]\n    zi1 = z[i+1]\n    hi = t[i+1] - ti\n    # Evaluate the value of Spline at x\n    Bi = -(hi*zi1)/6 -(hi*zi)/3 +(y[i+1]-yi)/hi\n    #Common product being used in Ai and Ai_dash\n    Ai_dash_common = (x-ti)*(zi1-zi)\n    Ai = 0.5*zi + (1/(6*hi))*Ai_dash_common\n    Ri = Bi+(x-ti)*Ai\n    S = yi + (x-ti)*Ri \n    #Calculating the derivative now\n    Ai_dash = zi + (0.5/hi)*Ai_dash_common\n    S_dash = Bi + (x-ti)*Ai_dash    \n    return S,S_dash\n\nif __name__ == \"__main__\":\n    t,y = get_data(\"titanium.dat\")\n    h = [t[i+1] - t[i] for i in range(len(t)-1)]\n    b = [(1/h[i])*(y[i+1]-y[i]) for i in range(len(t)-1)]\n    u = [2*(h[0]+h[1])]\n    v = [6*(b[1]-b[0])]\n    # intermediate points\n    for i in range(1,len(y)-1):\n        u.append(2*(h[i]+h[i-1]) - (h[i-1]*h[i-1])/u[i-1])\n        v.append(6*(b[i] - b[i-1]) - (h[i-1]*v[i-1])/u[i-1])\n    z = np.zeros(len(y))\n    try:\n        for i in range(len(y)-2,0,-1):\n            z[i] = (v[i] - h[i]*z[i])/u[i]\n    except Exception,e:\n        print i,len(y),len(z),len(v),len(u)\n\n    minx,maxx = min(t),max(t)\n    x_range = np.linspace(minx,maxx,100)\n    S,S_dash = zip(*[evaluate(t,y,z,x) for x in x_range])\n\n    pl.figure()\n    pl.plot(x_range,S)\n    pl.scatter(x_range,S,c=\"r\")\n    pl.grid()\n    pl.xlabel(\"x\")\n    pl.ylabel(\"S(x)\")\n    pl.savefig(\"plotS.png\")\n    pl.figure()\n    pl.plot(x_range,S_dash,c=\"b\")\n    pl.scatter(x_range,S_dash,c=\"r\")\n    pl.grid()\n    pl.xlabel(\"x\")\n    pl.ylabel(\"S'(x)\")\n    pl.savefig(\"plotS_dash.png\")\n    pl.show()\n    pl.close()\n\\end{lstlisting}\n}\n\\clearpage\n\n\\section{Appendix B}\\label{code:problem2b}\nMatlab source code for \\ref{prob2b}.\n{\\footnotesize\n\\begin{lstlisting}\n% Getting the data\na = load('titanium.dat')\ny = a(:,2)\nt = a(:,1)\nx = linspace(min(t),max(t),100)\n\n% Calculating and Plotting the spline\nplot(x,spline(t,y,x))\nhold on\nscatter(x,spline(t,y,x),'red')\nxlabel('X')\nylabel('S(X)')\n\n% Now Calculating and Plotting derivative of the Spline\nf = fnder(spline(t,y))\nfnplt(fnder(spline(t,y)))\nhold on\nscatter(x,ppval(f,x),'r')\nxlabel('x')\nylabel('S'(x)')\n\\end{lstlisting}\n}\n\\end{document}\n\n", "meta": {"hexsha": "9578d2a2fa2d9ab8ae14e31ddf6c4c174cd2515e", "size": 7519, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercise6/report.tex", "max_stars_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_stars_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercise6/report.tex", "max_issues_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_issues_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercise6/report.tex", "max_forks_repo_name": "kunalghosh/BECS-114.1100-Computational-Science", "max_forks_repo_head_hexsha": "ca91ac59cb5276d213c1aec50ae7786efe72ae43", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6497695853, "max_line_length": 380, "alphanum_fraction": 0.6854634925, "num_tokens": 2361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.8080672112416737, "lm_q1q2_score": 0.5703217391803806}}
{"text": "\\section{Contour Integration of a Complex-valued Function}\n\\textit{(This problem is probably out of reach for students who not yet taken a course in complex analysis. No worries, we will cover this topic during the first few weeks of Math 583A.)}\\\\\n\n\\noindent Consider the functions $f:\\mathbb{C} \\to \\mathbb{C}$ and $g:\\mathbb{C} \\to \\mathbb{C}$ defined as\n\\begin{equation*}\nf(z) = z^2 \\quad \\quad \\text{and} \\quad \\quad g(z) = z^{-1}\n\\end{equation*}\nIn this project, you will write a first-order numerical approximation scheme to compare the integrals of $f$ and of $g$ along two different contours.\n\\begin{enumerate}[(a)]\n    \\item Plotting:\n    \\begin{enumerate}[i.] \n        \\item Make a surface plot showing $Re(f(z))$ and a surface plot showing $Im(f(z))$\n        \\item Make a surface plot showing $Re(g(z))$ and a surface plot showing $Im(g(z))$\n    \\end{enumerate}\n    \\textit{Since $g$ is undefined at $z = 0$ and since it is radially symmetric, the surface plots for $g$ can look scary when done in cartesian co-ordinates. You may try switching to polar coordinates to make prettier plots}\n    \\item     We can approximate the line integral (or contour integral) of a function along a curve by discretizing the curve into $N+1$ points $z(s) \\to \\{z_0, z_1, z_2, \\dots, z_N\\}$ and then computing the sum:\n    \\begin{equation*}\n    \\int_{z_o}^{z_t} f(z) dz \\approx \\sum_{k = 1}^{N} f(z_k) \\ (z_{k} - z_{k-1})\n    \\end{equation*}\n    \\begin{minipage}{.7\\textwidth}\n    Define the two curves:\n    \\begin{align*}\n    C_1:&  \\quad \\text{The upper half circle of radius 1 centered at $z=0$.}\\\\\n    C_2:&  \\quad \\text{The lower half circle of radius 1 centered at $z=0$.}\n    \\end{align*}\n    Approximate the line integrals of $f$ and of $g$ along the curves $C_1$ and $C_2$.\n    \\end{minipage}\n    \\begin{minipage}{.29\\textwidth}\n    \\begin{center}\n    \\begin{tikzpicture}\n    \\draw[ultra thick,->] (-1.2,0) -- (2.2, 0) node[below] {$Re(z)$};\n    \\draw[ultra thick,->] (0,-1.2) -- (0,1.6) node[left] {$Im(z)$};\n    \\draw[thick,blue!80!black,domain = 175:10,variable=\\t,->] plot ({cos(\\t)},{ sin(\\t)}) node[above right] {$C_1$}; \n    \\draw[thick,red!80!black,domain = 185:350,variable=\\t,->] plot ({cos(\\t)},{ sin(\\t)}) node[below right] {$C_2$}; \n    \\end{tikzpicture}\n    \\end{center}\n    \\end{minipage}\n    \\item[($\\ast$)] \\textit{Bonus:} Repeat parts (a) and (b) for the function $h(z) = z^{1/2}$.\\\\\n    \\textit{Note:} You may need to make some `executive decisions' to make sure your problem is well-posed.\n\\end{enumerate}\n\n", "meta": {"hexsha": "75b6ea244a36ab213f8e68fd87e01632b84d1242", "size": 2532, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "complex-contour-integrals.tex", "max_stars_repo_name": "colinlclark/integration_workshop", "max_stars_repo_head_hexsha": "16845412db0bd18469db67075d31a6189d968c56", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "complex-contour-integrals.tex", "max_issues_repo_name": "colinlclark/integration_workshop", "max_issues_repo_head_hexsha": "16845412db0bd18469db67075d31a6189d968c56", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "complex-contour-integrals.tex", "max_forks_repo_name": 
"colinlclark/integration_workshop", "max_forks_repo_head_hexsha": "16845412db0bd18469db67075d31a6189d968c56", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-25T18:18:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-27T01:10:02.000Z", "avg_line_length": 60.2857142857, "max_line_length": 226, "alphanum_fraction": 0.654028436, "num_tokens": 858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835330070839, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5702791097697844}}
{"text": "\n\\subsection{Affine connections}\n\nIf we have a tangent vector at one point of the manifold, we can map it to a tangent vector at a nearby point on the manifold.\n\nWe can use chain rule. so we can have coordinate maps where there is no overlap.\n\n\\subsubsection{Smooth connections}\n\n\\subsubsection{Affine connection}\n\nWe have a vector in a tangent space\n\nWe have a curve on the manifold from the start point\n\nAs we \"roll\" the tangent, there is a unique vector in each new tangent, determined by transition map\n\nThese are affine transformations\n\nGiven two points, what path? what transfomration? if curuved then different paths will given different transformation.\n\n", "meta": {"hexsha": "192c272ac252f02b13acf053df64559553028b62", "size": 662, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-03-connection.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-03-connection.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsDifferentiable/03-03-connection.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0909090909, "max_line_length": 126, "alphanum_fraction": 0.7915407855, "num_tokens": 143, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8354835371034368, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.5702791071709838}}
{"text": "% corrected LN 86\n\\subsubsection{Implementation of classification models}\\label{implementation}~\\\\\n\nIn this section, the implementation of the four ANN models are presented.\\\\\n\nFirst of all, we initialize the training and testing sets:\n\\lstinputlisting{sections/technical/fr1/input.py}\n\n\\begin{enumerate}[label=\\arabic*.]\n  \\item Feedforward neural networks:\n    \\lstinputlisting{sections/technical/fr1/feedforward.py}\n    The sequential model is used to stack four \\textit{Dense} layers, which are\n    fully connected layers, meaning that all neurons in one layer are connected\n    to all neurons in the subsequent layer. This model is composed of one input\n    layer, two hidden layers and one output layer. The first dense layer takes\n    as input a vector with the same dimensions as a row in the dataset. The\n    first three layers are using \\textit{ReLU} as activation function whereas\n    the output layer is using \\textit{softmax}, which is a standard choice for\n    multi-class classification problems. In between the dense layers, dropout\n    layers are stacked. These dropout layers are dropping units from the model\n    to prevent overfitting. The parameter \\textit{loss} of the\n    \\textit{.compile()} function is set to\n    \\textit{'sparse\\_categorical\\_crossentropy'} since the output should be an\n    integer tensor instead of a binary vector.\\\\\n\n  \\begin{normalize}\n    Before continuing with the recurrent models, the input data has to be\n    reshaped since RNN, LSTM and GRU are taking in 3D shaped input data:\n\n    \\lstinputlisting{sections/technical/fr1/inputchange.py}\n  \\end{normalize}\n\n  \\item RNN:\n    \\lstinputlisting{sections/technical/fr1/rnn.py}\n    The RNN model is implemented in the same manner as the feedforward model.\n    Although, this model is only stacked with one hidden layer.  Instead of\n    using dense layers, this model uses \\textit{SimpleRNN} layers.  The\n    parameter \\textit{return\\_sequences} is set to \\textit{True} so that the\n    next simpleRNN layer has access to all the hidden states from the previous\n    layer.  The output layer is again a dense layer with \\textit{softmax} as the\n    activation function. As input, it takes in a matrix, which is an instance of\n    the 3D dataset.~\\\\\n\n\\newpage\n\n  \\item LSTM:\n    \\lstinputlisting{sections/technical/fr1/lstm.py}\n    The LSTM model uses \\textit{LSTM} layers and is composed of one hidden layer\n    as well. Its parameter \\textit{return\\_sequences} is set to \\textit{True}\n    with the same reason as for the RNN model. The LSTM model has the same\n    output layer as the two previous ones.\\\\\n  \\item GRU:\n    \\lstinputlisting{sections/technical/fr1/gru.py}\n    The GRU model uses \\textit{GRU} layers and is composed of one hidden layer.\n    Its parameters are set to the same values as the RNN and LSTM model. 
It uses\n    the same dense layer activated by the \\textit{softmax} function as output\n    layer.\n\\end{enumerate}\n", "meta": {"hexsha": "9b7fbee9a6f09ae2efc164963f6b3fabaf595960", "size": 2930, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/technical/fr1/implementation.tex", "max_stars_repo_name": "Lemswasabi/bsps3-report", "max_stars_repo_head_hexsha": "ca3f7bee2d4740c5c7ad9f586766ab04a0e5f58b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/technical/fr1/implementation.tex", "max_issues_repo_name": "Lemswasabi/bsps3-report", "max_issues_repo_head_hexsha": "ca3f7bee2d4740c5c7ad9f586766ab04a0e5f58b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/technical/fr1/implementation.tex", "max_forks_repo_name": "Lemswasabi/bsps3-report", "max_forks_repo_head_hexsha": "ca3f7bee2d4740c5c7ad9f586766ab04a0e5f58b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.6610169492, "max_line_length": 80, "alphanum_fraction": 0.7515358362, "num_tokens": 727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835248143776, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5702791041776583}}
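Since the files pulled in by \lstinputlisting are not reproduced here, the following is a minimal sketch of the feedforward model as described above; the layer widths, dropout rate, optimizer and data dimensions are illustrative assumptions:
\begin{lstlisting}[language=Python]
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential

n_features, n_classes = 40, 6  # hypothetical input width and number of classes

model = Sequential([
    Dense(128, activation='relu', input_shape=(n_features,)),  # input layer
    Dropout(0.2),
    Dense(64, activation='relu'),            # hidden layer 1
    Dropout(0.2),
    Dense(32, activation='relu'),            # hidden layer 2
    Dropout(0.2),
    Dense(n_classes, activation='softmax'),  # output layer
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # integer class labels
              metrics=['accuracy'])
\end{lstlisting}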
{"text": "\\appendixpage\n\\begin{appendix}\n\\section{Mathematical notation}\n\\label{sec:notation}\nIn this paper, there is extensive use of mathematical notation following some union of standards from various fields of mathematics. What follows is an attempt to define as much of this jargon as possible, such that a reader with a moderate background in the field could be reminded of its meaning.\n\n%\\newcommand{\\braces}[1]{\\ensuremath{\\left\\lbrace #1 \\right\\rbrace}}\n%\\newcommand{\\angles}[1]{\\ensuremath{\\left\\langle #1 \\right\\rangle}}\n%\\newcommand{\\integers}{\\ensuremath{\\mathbb{Z}}}\n%\\newcommand{\\wholes}{\\ensuremath{\\mathbb{N}}}\n%\\newcommand{\\reals}{\\ensuremath{\\mathbb{R}}}\n%\\renewcommand{\\vec}{\\mathbf}\n%\\newcommand{\\abs}[1]{\\ensuremath{\\left|#1\\right|}}\n\n\\subsection{Glossary}\n\\begin{itemize}\n\\item set: A collection of things. Typically indicated by \\braces{\\cdot}.\n\\item tuple: An ordered set of a finite number of elements, typically indicated by \\parens{\\cdot}. Similar in structure to a vector.\n\\end{itemize}\n\n\\subsection{Variables}\n\\begin{itemize}\n\\item $a$: Index of a bin in a histogram.\n\\item $b$: Time bin as specified by lower and upper bounds. When subscripted, this can indicate bin bounds or reference the bin by index (made clear by context).\n\\item $\\channel$: Detection channel, typically represented by some whole number.\n\\item $\\channels$: The set of all detection channels in an experiment.\n\\item $\\Channel$: Function of a photon which returns the detection channel the photon arrived on.\n\\item $\\delta$: A change in some variable.\n\\item $\\resolution$: Time resolution of a detection channel.\n\\item $f$: Frequency.\n\\item $\\gn{n}$: Correlation function of $n$th order.\n\\item $\\photon$: Photon, a tuple consisting of a detection channel and some number of arrival time properties.\n\\item $\\photons$: The set of all photons in an experiment. 
A subscript typically indicates some subset, such as photons on a detection channel or in some specified time window.\n\\item $\\Index$: Function returning the index of a histogram, given the set of channels and a tuple of elements of that set.\n%\\item $\\Histogram$: The histogram function, which accepts bin definitions and outputs the counts in those bins.\n%\\item $\\iota$: A ``real'' physical signal being sampled in an experiment.\n\\item $I$: Signal, a real-valued function of time.\n\\item $n$: The order of a correlation.\n%\\item $N$: The number of elements in a set, the length of a stream or vector, or the number of detection channels.\n\\item $p$: Pulse number.\n\\item $\\Pulse$: Function of a photon which returns the pulse the photon arrived after.\n\\item $\\rho$: Pulse delay.\n\\item $t$: Time.\n\\item $\\Time$: Function of a photon which returns the arrival time of the photon.\n\\item $\\timedelay$:  Time delay.\n\\item $\\timewindow$: Time window, some subset of time in the experiment.\n\\item $\\integrationtime$: Integration time for the experiment.\n%\\item $\\emptyset$: The empty set, or a lack of a value.\n\\end{itemize}\n\n\\subsection{Sets}\n\\begin{itemize}\n\\item $\\cap$ ($\\bigcap$): Intersection of two (some number of) sets.\n\\item $\\cup$ ($\\bigcup$): Union of two (some number of) sets.\n\\item $\\in$: Inclusion, indication that an element is a member of a set.\n\\item $\\abs{\\cdot}$: Magnitude of a scalar or vector, or the number of elements in a set.\n\\item $\\max,\\min$: The maximum or minimum elements of a set, given some rule for sorting made clear explicitly or by context.\n\\item $\\subset$: The set on the left is a proper subset of the set on the right: its members are elements of the right-hand set, but the two sets are not equal.\n\\item $\\subseteq$: The set on the left is a subset of the set on the right: its members are elements of the right-hand set, and the two sets may be equal.\n\\end{itemize}\n\n\\subsection{Named sets}\nThere are a number of important sets which arise frequently in this paper:\n\\begin{itemize}\n\\item \\reals: The set of all real numbers.\n\\item \\integers: The set of all integers (\\braces{\\ldots-1,0,1,\\ldots}).\n\\item \\wholes: The set of all positive integers, and zero (\\braces{0,1,\\ldots}).\n\\end{itemize}\nAdditionally, there are modifiers used to indicate special subsets:\n\\begin{itemize}\n\\item $\\cdot^{*}$: As a superscript, this indicates that the set only contains non-negative numbers. For example, $\\reals^{*}$ contains all elements $x$ such that $\\abs{x}=x$.\n\\item $\\cdot^{+}$: As a superscript, this indicates that the set only contains positive numbers. \n\\item $\\cdot_{n}$: As a subscript, this indicates a cyclic group of order $n$. For example, $\\integers_{n}=\\braces{0,1,\\ldots,n-1}=\\integers/n$. \n\\end{itemize}\n\n\\subsection{Set builder notation}\nFrequently, this paper contains expressions in the following form:\n\\begin{equation}\n\\photons_{\\channel}=\\braces{\\photon\\left|\\photon\\in\\photons;~\\Channel(\\photon)=\\channel\\right.}\n\\end{equation}\nTo read this notation, start to the left of the pipe to see what elements of the set look like. In this case, all elements are photons (\\photon). To the right side of the pipe are the properties these elements must have: that the photon exists in the set \\photons{} of photons, and that its arrival channel is $\\channel$. 
\n\nOther examples of this notation include:\n\\begin{align}\n\\reals^{*} &= \\setbuilder{x}\n                        {x\\in\\reals; \\abs{x}=x} \\\\\n\\rationals &= \\setbuilder{(p,q)}\n                        {p\\in\\integers;~q\\in\\integers^{+}} \\\\\nA\\times B &= \\setbuilder{(a,b)}\n                       {a\\in A;~b\\in B}                       \n\\end{align}\n\n\\subsection{Ranges}\nSquare brackets and parentheses are used to indicate some continuous subset of values of a set. For example, the real numbers between 0 and 1, including 0 and 1, are denoted\n\\begin{equation}\n\\brackets{0,1} = \\setbuilder{x}{x\\in\\reals;~x\\ge 0;~x\\le 1}\n\\end{equation}\nTo indicate that one of the endpoints is not included in the set, a parenthesis is used:\n\\begin{equation}\n\\left[0,1\\right) = \\setbuilder{x}{x\\in\\reals;~x\\ge 0;~x<1}\n\\end{equation}\nThe source set is typically made clear by context.\n\n\\subsection{Other functions and relations}\n\\begin{itemize}\n\\item \\angles{\\cdot}: Average of the bracketed quantity over some variable or number of variables.\n\\item $\\times$: Cartesian product of two sets.\n\\item $\\sum$: Sum of some number of quantities.\n\\item $\\prod$: Product of some number of quantities.\n\\item $\\leftarrow,\\rightarrow$: Assignment of some value to a variable, or association in a function.\n\\item $\\ceil{\\cdot}$: The smallest integer at least as large as the indicated quantity.\n\\item $\\abs{\\cdot}$: Magnitude of a scalar or vector, or the number of elements in a set.\n\\item $\\equiv$: Indicates two quantities are equivalent. Typically used to denote a change in notation.\n\\item $\\propto$: Indicates proportionality; the two sides of the relation are equal upon multiplication of one side by some scalar factor, which could be unity.\n\\end{itemize}\n\n\\section{C API Documentation}\n\n\\input{versions.tex}\n\n\\end{appendix}\n", "meta": {"hexsha": "3936903f0c1f056b6451444de5aee80c886d62bf", "size": 6922, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/appendix.tex", "max_stars_repo_name": "mktt2897/photon_correlation", "max_stars_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-10-24T11:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-25T06:02:21.000Z", "max_issues_repo_path": "doc/tex/appendix.tex", "max_issues_repo_name": "mktt2897/photon_correlation", "max_issues_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2017-08-15T14:42:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T07:35:13.000Z", "max_forks_repo_path": "doc/tex/appendix.tex", "max_forks_repo_name": "mktt2897/photon_correlation", "max_forks_repo_head_hexsha": "b5cb9376f883ff25f81521d4270d8dd7f750efb1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-04-14T16:27:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-15T05:32:12.000Z", "avg_line_length": 56.737704918, "max_line_length": 308, "alphanum_fraction": 0.733458538, "num_tokens": 1812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.5702786203301857}}
{"text": "This chapter is dedicated to describing the Filtered Poisson Process (FPP), a stochastic model used for describing the intermittent fluctuations in single point measurements obtained in the boundary of fusion experiments. The basis of this model was already developed in 1909 \\cite{campbell1909study} and was further extended in the 1940s to describe noise in vacuum tubes \\cite{rice1944mathematical,rice1945mathematical}. Since then, the model has been extended and used to describe fluctuations in numerous academic fields, including neuroscience, fluid dynamics and nuclear fission \n\\cite{segal1985miniature,fesce1986fluctuation,jang2004martingale,claps2005advances,lefebvre2008generalized,daly2010effect,elter2015performance}. The FPP has first been introduced as a model describing SOL fluctuations in 2012 \\cite{garcia2012stochastic} and has since then shown excellent agreement with the statistical properties of fluctuations in various fusion experiments \\cite{garcia2013intermittent,garcia2013burst,garcia2015intermittent,kube2016fluctuation,garcia2017sol,kube2018intermittent,garcia2018intermittent,theodorsen2018universality}. \n\n\\section{Filtered Poisson Process}\nThe FPP is a stochastic process, given by a superposition of uncorrelated pulses which are distributed according to a Poisson process. For a given time $t \\in [0,T]$ the process $\\Phi_k(t)$ can be written as \\cite{garcia2012stochastic,garcia2016stochastic}\n\\begin{equation}\n\t\\Phi_k(t) = \\sum_{k=1}^{K(T)} A_k \\,\\phi\\left(\\frac{t-t_k}{\\tau_\\mathrm{d}}\\right).\n\\end{equation}\nHere, the random variables are defined as follows: $K(T)$ stands for the number of pulses arriving in the time interval $[0,T]$, $A_k$ is the pulse amplitude and $t_k$ the pulse arrival time. It is further assumed that all pulses have the same pulse  shape $\\phi$ and duration time $\\tau_\\mathrm{d}$.\n\nAlternatively, the process can be expressed as a convolution of the pulse shape and a delta pulse train\n\\begin{equation}\\label{conv}\n\t\\Phi_k(t) = [\\phi * f_K] \\left(\\frac{t}{\\tau_\\mathrm{d}}\\right),\n\\end{equation}\nwhere $f_K$ is the forcing defined as a train of delta pulses\n\\begin{equation}\n\tf_K(\\theta) = \\sum_{k=1}^{K(T)} A_k \\,\\delta\\left(\\theta - \\frac{t_k}{\\tau_\\mathrm{d}}\\right).\n\\end{equation}\nAs the process can be expressed as a train of delta pulses filtered through the pulse shape, it is called a \\textit{Filtered Poisson Process}. \n\nFor a given time interval $[0,T]$, the number of pulses $K(T)$ follows a Poisson distribution \\begin{equation}\n\tP_K(K|T) = \\frac{1}{K!} \\left(\\frac{T}{\\tau_\\mathrm{w}}\\right)^K \\textrm{exp}\\left(-\\frac{T}{\\tau_\\mathrm{w}}\\right),\n\\end{equation}\nwith intensity $T/\\tau_\\mathrm{w}$. The waiting times between two consecutive pulses are independent and exponentially distributed with mean value $\\tau_\\mathrm{w}$ and the arrival times $t_k$ are independent and uniformly distributed in the interval $[0,T]$. These properties of the process are consistent with experimental measurements, showing exponentially distributed waiting times \\cite{garcia2015intermittent,kube2018intermittent,garcia2017sol,garcia2018intermittent}. Time series with quasi-periodic arrival times are observed in SOL simulations utilizing Rayleigh\u2013B\u00e9nard like convection models \\cite{decristoforo2020intermittent} and are discussed in Paper II in further detail. 
The amplitudes are chosen to be exponentially distributed, as this is observed in experimental measurements \\cite{kube2018intermittent,garcia2017sol,garcia2018intermittent}.\n\nIn the following, we will discuss two pulse shapes that are most relevant for SOL fluctuation measurements and corresponding numerical simulations. Firstly, the pulse shape of an asymmetric, two-sided exponential pulse is defined as \n\\begin{equation}\n\t\\phi(\\theta,\\lambda) = \\begin{cases} \\mathrm{exp}\\left(-\\frac{\\theta}{1-\\lambda}\\right), &\\theta \\geq 0,\\\\\n\t\t\\mathrm{exp}\\left(\\frac{\\theta}{\\lambda}\\right), &\\theta < 0.\n\t\\end{cases}\n\\end{equation}\nHere, $\\theta$ is a dimensionless variable and $\\lambda$ stands for the asymmetry parameter with $\\lambda \\in (0,1)$. In some cases, a one-sided exponential pulse is applied with $\\lambda = 0$, which refers to the limit $\\lim_{\\lambda\\to 0} \\phi(\\theta,\\lambda)$. Exponential pulses stand in good agreement with experimental measurements \\cite{antar2003universality,boedo2005edge,garcia2007fluctuations,garcia2007collisionality,garcia2013intermittent,garcia2015intermittent,garcia2017sol,kube2018intermittent,garcia2018intermittent} and numerical SOL simulations \\cite{garcia2007fluctuations,decristoforo2021numerical}.  Secondly, Lorentzian pulses are considered which are defined as \n\\begin{equation}\n\t\\psi(\\theta) = \\frac{1}{\\pi}\\frac{1}{1+\\theta^2}. \n\\end{equation}\nThese can also be generalized to a skewed Lorentzian, however no closed analytical form is known and require a definition via the inverse Fourier transform \\cite{garcia2018skewed}. Indications for Lorentzian pulses in time series in the edge region \\cite{maggs2011generality,van2012relevance,maggs2015chaotic,zhu2017chaotic} and corresponding numerical simulations \\cite{decristoforo2020intermittent} have been reported. Exponential pulses consist of a discontinuous peak and exponential tales, whereas Lorentzian pulses have a smooth peak and algebraic tails. The consequences of these properties will be apparent in the following discussions. \n\nCalculating moments of distributions of the process requires the integrals of the pulse shapes, defined as\n\\begin{equation}\n\tI_n = \\int_{-\\infty}^{\\infty} d\\theta [\\phi(\\theta)]^n. \n\\end{equation}\nFor exponential pulses this results in $I_{\\phi,n} = 1/n$, independent of the pulse asymmetry $\\lambda$. For Lorentzian pulses the first four integrals are given by $I_{\\psi,1} = 1$, $I_{\\psi,2} = 1/2\\pi$, $I_{\\psi,3} = 3/8\\pi^2$ and $I_{\\psi,4} = 5/16\\pi^3$. \n\nThe main property of the FPP is given by the ratio of the pulse duration and average waiting time, \n\\begin{equation}\n\t\\gamma = \\frac{\\tau_\\mathrm{d}}{\\tau_\\mathrm{w}},\n\\end{equation}\nwhich is referred to as the intermittency parameter. For short waiting times and long duration times, $\\gamma \\gg 1$, the level of pulse overlap is high, resulting in a large mean value and small relative variation around the mean. In the opposite limit, $\\gamma \\ll 1$, the signal is dominated by individual, isolated pulses, resulting in a small mean value and large relative fluctuations. Numerical realizations of the FPP with different intermittency parameters are shown for one-sided exponential pulses in \\Figref{Fig:garcia_realizations} and for symmetric Lorentzian pulses in \\Figref{Fig:garcia_realizations_lorentz} displaying the features of these processes. 
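Realizations such as those shown in the following figures can be generated directly from the definition of the process. A minimal sketch for one-sided exponential pulses, where the values of $\tau_\mathrm{d}$, $\tau_\mathrm{w}$, $T$ and the sampling step are illustrative choices:
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(seed=0)
tau_d, tau_w, T, dt = 1.0, 0.1, 1000.0, 0.01  # gamma = tau_d/tau_w = 10
mean_A = 1.0

K = rng.poisson(T / tau_w)             # Poisson-distributed number of pulses
t_k = rng.uniform(0.0, T, size=K)      # uniformly distributed arrival times
A_k = rng.exponential(mean_A, size=K)  # exponentially distributed amplitudes

t = np.arange(0.0, T, dt)
Phi = np.zeros_like(t)
for tk, Ak in zip(t_k, A_k):
    mask = t >= tk
    Phi[mask] += Ak * np.exp(-(t[mask] - tk) / tau_d)  # one-sided exponential pulse

# Expect <Phi> = gamma*<A> = 10 and Phi_rms = sqrt(gamma*<A^2>*I_2) ~ 3.16 here.
print(Phi.mean(), Phi.std())
\end{lstlisting}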
\n\\begin{figure}\n\t\\centering\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_realizations.png}\n\t\t\\caption{Realizations of the FPP with one-sided exponential pulses and different intermittency parameters. Reprinted from \\cite{garcia2016stochastic}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_realizations}\n\t\\end{minipage}\n\t\\hfill\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_lorentz.png}\n\t\t\\caption{Realizations of the FPP with Lorentzian pulses and different intermittency parameters. Reprinted from \\cite{garcia2017power}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_realizations_lorentz}\n\t\\end{minipage}\n\\end{figure}\n\n\\section{Moments and PDFs}\nThe four lowest order moments of the FPP take the form \n\\begin{subequations}\n\t\\begin{align}\n\t\t\\langle\\Phi\\rangle  &= \\gamma \\langle A \\rangle I_1,\\\\\n\t\t\\Phi^2_\\mathrm{rms} &= \\gamma \\langle A^2\\rangle I_2,\\\\\n\t\tS_\\Phi &= \\frac{1}{\\gamma^{1/2}}\\frac{\\langle A^3\\rangle I_3}{\\langle A^2\\rangle^{3/2} I_2^{3/2} },\\\\\n\t\tF_\\Phi &= 3 + \\frac{1}{\\gamma} \\frac{\\langle A^4\\rangle I_4}{\\langle A^2\\rangle^2 I_2^2}, \n\t\\end{align}\n\\end{subequations}\nwhere $S_\\Phi$ stands for the skewness of the process and $F_\\Phi$ is the kurtosis or flatness \\cite{garcia2012stochastic}. The last two moments exhibit the parabolic relationship \n\\begin{equation}\\label{parabolic}\n\tF_\\Phi = 3 + \\frac{\\langle A^2\\rangle \\langle A^4\\rangle}{\\langle A^3\\rangle^2}\\frac{I_2 I_4}{I^2_3} S_\\Phi^2.\n\\end{equation}\nInserting the expressions for the integrals of the pulse shapes for two-sided exponential and Lorentzian pulses and assuming exponentially distributed amplitudes simplifies these expressions further. For exponential pulses the expression for the relative fluctuation level becomes\n\\begin{equation}\n\t\\frac{\\Phi_{\\mathrm{rms}}}{\\langle \\Phi\\rangle} = \\gamma^{-1/2},\n\\end{equation}\nand the universal parabolic relationship of \\Eqref{parabolic} reduces to\n\\begin{equation}\n\tF_\\Phi = 3 + \\frac{3}{2} S_\\Phi^2,\n\\end{equation} \nwhich is in good agreement with experimental measurements \\cite{labit2007universal,sattin2009statistics,sattin2009parabolic}. Notably, these expressions do not depend on the pulse asymmetry parameter, $\\lambda$. \n\nThe PDF of the process with exponential pulses is given by a Gamma distribution with shape parameter $\\gamma$ and scale parameter $\\langle A \\rangle$ \\cite{theodorsen2018probability},\n\n\\begin{equation}\n\tP_{\\Phi,\\phi}(\\Phi) = \\frac{\\Phi^{\\gamma-1}}{\\langle A\\rangle^\\gamma \\Gamma(\\gamma)}\\mathrm{exp}\\left(-\\frac{\\Phi}{\\langle A\\rangle}\\right).\n\\end{equation}\nTypically, the realization of the process is normalized to have zero mean and unit standard deviation,\n\\begin{equation}\\label{norm}\n\t\\widetilde{\\Phi} = \\frac{\\Phi - \\langle\\Phi\\rangle}{\\Phi_{\\mathrm{rms}}},\n\\end{equation}\nwith the corresponding PDF \\cite{theodorsen2018probability},\n\\begin{equation}\n\tP_{\\widetilde{\\Phi},\\phi}(\\widetilde{\\Phi}) = \\frac{\\gamma^{\\gamma/2}}{\\Gamma(\\gamma)}\\left(\\widetilde{\\Phi} + \\gamma^{1/2} \\right)^{\\gamma-1}\\mathrm{exp}\\left(-\\gamma^{1/2}\\widetilde{\\Phi} - \\gamma\\right).\n\\end{equation}\nThis expression is used as a fit in \\Figref{Fig:theodorsen_pdf}. For an FPP consisting of Lorentzian pulses, no closed expression for the PDF is known. 
However, it can be derived by taking the inverse Fourier transform of the corresponding characteristic function, resulting in \\cite{garcia2018lorentz}\n\\begin{equation}\n\t\\begin{split}\n\tP_{\\widetilde{\\Phi},\\psi}(\\widetilde{\\Phi}) = \\left(\\frac{\\pi}{\\gamma}\\right)^{1/2}\\int_{0}^{\\infty}\\textrm{d}w\\,\\mathrm{exp}\\left(-\\frac{\\gamma\\pi w\\, \\mathrm{sin}\\left(1/2\\, \\mathrm{arctan}\\, w\\right)}{(1+w^2)^{1/4}}\\right)\\\\\n\t\\times \\mathrm{cos}\\left(\\pi\\gamma w + \\sqrt{\\pi\\gamma}\\widetilde{\\Phi}w-\\frac{\\gamma\\pi w\\, \\mathrm{cos}\\left(1/2\\, \\mathrm{arctan}\\, w\\right)}{(1+w^2)^{1/4}}\\right).\n\t\\end{split}\n\\end{equation}\nFor both exponential and Lorentzian pulses, the PDF of the process is characterized by the intermittency parameter. The PDFs are shown for a range of different $\\gamma$ in \\Figref{Fig:garcia_pdf} and \\Figref{Fig:garcia_pdf_lorentz}. The PDFs are unimodal for all values of $\\gamma$ and have an exponential tail towards large fluctuation amplitudes for small values of $\\gamma$. In the opposite limit, the PDFs approach a normal distribution with vanishing mean and unit standard deviation for $\\widetilde{\\Phi}$.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_pdf.png}\n\t\t\\caption{PDFs of an FPP with exponential pulses for different intermittency parameters $\\gamma$. Reprinted from \\cite{garcia2016stochastic}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_pdf}\n\t\\end{minipage}\n\t\\hfill\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_pdf_lorentz.png}\n\t\t\\caption{PDFs of a normalized FPP with Lorentzian pulses for different intermittency parameters $\\gamma$. Reprinted from \\cite{garcia2018lorentz}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_pdf_lorentz}\n\t\\end{minipage}\n\\end{figure}\n\\section{Second order statistics}\n\nIn order to calculate the second order moments, namely the power spectral density (PSD) and the auto-correlation function (ACF), we consider the FPP as a convolution of a pulse train $f_K$ and a pulse shape $\\phi$. The Fourier transform of the FPP is given by the product of the Fourier transforms of $f_K$ and $\\phi$. The power spectrum of $\\Phi$ can therefore be expressed as the product of the power spectra of $f_K$ and $\\phi$. The power spectrum of $f_K$ is flat due to the uncorrelated delta pulses, so that the frequency dependence of the spectrum of $\\Phi$ is determined solely by $\\phi$. The ACF is given by the Fourier transform of the PSD. 
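\n\nThese second order statistics are also straightforward to estimate numerically. The following hedged Python sketch (assuming only \\texttt{numpy}; all names are illustrative) builds a realization directly as the convolution of a delta pulse train with the pulse shape, cf. \\Eqref{conv}, and computes a periodogram estimate of the PSD:\n\\begin{verbatim}\nimport numpy as np\n\n# Hedged sketch: FPP as a convolution of a delta pulse train with a\n# one-sided exponential pulse shape, followed by a periodogram.\nrng = np.random.default_rng(1)\nT, dt, tau_d, gamma = 10000.0, 0.01, 1.0, 1.0\nt = np.arange(0.0, T, dt)\n\nforcing = np.zeros_like(t)             # delta pulse train f_K\nK = rng.poisson(gamma * T / tau_d)     # T/tau_w with tau_w = tau_d/gamma\nidx = rng.integers(0, t.size, size=K)\nnp.add.at(forcing, idx, rng.exponential(1.0, size=K) / dt)\n\npulse = np.exp(-t / tau_d)             # one-sided exponential pulse\nPhi = dt * np.convolve(forcing, pulse)[:t.size]\n\nPhi_t = (Phi - Phi.mean()) / Phi.std() # normalization as in the text\nfreq = np.fft.rfftfreq(t.size, d=dt)\npsd = np.abs(np.fft.rfft(Phi_t))**2 * dt / t.size\n# For lambda -> 0 this estimate should follow the Lorentzian shape\n# 2*tau_d / (1 + tau_d**2 * w**2) with w = 2*pi*freq.\n\\end{verbatim}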
For two-sided exponential pulses, the PSD of an FPP normalized according to \\Eqref{norm} takes the form \\cite{garcia2017auto}\n\\begin{equation}\n\t\\Omega_{\\widetilde{\\Phi},\\phi}(\\omega;\\lambda) = \\frac{2 \\tau_\\mathrm{d}}{[1+(1-\\lambda)^2\\tau_\\mathrm{d}^2\\omega^2][1+\\lambda^2\\tau_\\mathrm{d}^2\\omega^2]},\n\\end{equation}\nwith the corresponding ACF\n\\begin{equation}\n\tR_{\\widetilde{\\Phi},\\phi}(r;\\lambda) = \\frac{1}{1-2\\lambda}\\left[\\left(1-\\lambda\\right)\\mathrm{exp}\\left(-\\frac{|r|}{(1-\\lambda)\\tau_\\mathrm{d}}\\right) -\\lambda\\,\\mathrm{exp}\\left(-\\frac{|r|}{\\lambda\\tau_\\mathrm{d}}\\right)\\right].\n\\end{equation}\nThe ACF and PSD are displayed for a range of $\\lambda$-values in \\Figsref{Fig:garcia_acf} and \\ref{Fig:garcia_psd}.\n\\begin{figure}\n\t\\centering\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_acf.png}\n\t\t\\caption{Auto-correlation function of a normalized FPP consisting of two-sided exponential pulses with different asymmetry parameters $\\lambda$. Reprinted from \\cite{garcia2017auto}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_acf}\n\t\\end{minipage}\n\t\\hfill\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/garcia_psd.png}\n\t\t\\caption{Power spectral density of a normalized FPP consisting of two-sided exponential pulses with different asymmetry parameters $\\lambda$. Reprinted from \\cite{garcia2017auto}, with the permission of AIP Publishing.}\n\t\t\\label{Fig:garcia_psd}\n\t\\end{minipage}\n\\end{figure}\nIn contrast to the PDFs, the second order statistics are independent of $\\gamma$ but change for different $\\lambda$. In the limit of a one-sided exponential pulse, the ACF is purely exponential and the PSD is Lorentzian shaped. For $\\lambda$ close to zero or one, the spectrum has an intermediate range where it falls as $\\omega^{-2}$ before falling as $\\omega^{-4}$ in the high frequency limit. This expression is in excellent agreement with the experimental measurements shown in \\Figref{Fig:theodorsen_psd} with $\\lambda=0.1$. \n\nFor Lorentzian pulses the PSD of a normalized FPP takes an exponential form \\cite{garcia2018skewed}\n\\begin{equation}\n\t\\Omega_{\\widetilde{\\Phi},\\psi}(\\omega) = 2\\pi\\tau_\\mathrm{d}\\, \\mathrm{exp}\\left(-2\\tau_\\mathrm{d}|\\omega|\\right),\n\\end{equation}\nand the ACF is Lorentzian shaped,\n\\begin{equation}\n\tR_{\\widetilde{\\Phi},\\psi}(r) = \\frac{4}{4+(r/\\tau_\\mathrm{d})^2}.\n\\end{equation}\nThese expressions can be generalized to skewed Lorentzian pulses; however, no closed expressions are known. An alternative formulation is presented in \\cite{garcia2018skewed}.\n\n\\section{Excess time statistics}\nExpressions for excess time statistics, specifically the rate of level crossings above a given threshold and the average time spent above this threshold, can be derived for an FPP. In the context of fluctuations in the SOL of fusion experiments, these quantities are crucial for assessing the energy of particles incident on the vessel walls relative to the energy threshold of physical sputtering. The number of sputtered particles per incoming particle is specified by the modified Bohdansky yield function \\cite{marandet2016assessment}. The mean yield as a function of the energy of an incoming deuterium particle on a tungsten wall is plotted for a range of relative fluctuation levels in \\Figref{Fig:theodorsen_yield}. 
For constant energy, no sputtering occurs below 200 eV. For realistic scenarios of $E_\\mathrm{rms}/\\langle E\\rangle > 0$, sputtering already occurs at significantly lower mean energies. An accurate description of excess time statistics is therefore of importance for fusion experiments. \n\nFor an FPP the number of level crossings is given by Rice's formula \\cite{rice1945mathematical}\n\\begin{equation}\n\tX(\\Phi) = T\\int_{0}^{\\infty} \\mathrm{d}\\dot{\\Phi}\\, \\dot{\\Phi}P_{\\Phi,\\dot{\\Phi}}\\left(\\Phi,\\dot{\\Phi}\\right).\n\\end{equation}\nHere $\\dot{\\Phi}$ stands for the derivative of the process $\\Phi$ and $P_{\\Phi,\\dot{\\Phi}}\\left(\\Phi,\\dot{\\Phi}\\right)$ for the joint PDF of $\\Phi$ and $\\dot{\\Phi}$. This formulation requires the process to be differentiable; hence, an FPP consisting of one-sided exponential pulses cannot be considered this way. For two-sided exponential pulses with $\\lambda \\in (0,1)$ the rate of up-crossings is given by \\cite{theodorsen2018level}\n\\begin{equation}\\label{rate}\n\t\\frac{\\tau_\\mathrm{d}}{T}X(\\Phi) = \\frac{\\lambda^{\\gamma\\lambda-1}(1-\\lambda)^{\\gamma(1-\\lambda)-1}}{\\gamma\\Gamma(\\gamma\\lambda)\\Gamma(\\gamma(1-\\lambda))}\\left(\\frac{\\gamma\\Phi}{\\langle\\Phi\\rangle}\\right)^\\gamma\\mathrm{exp}\\left(-\\frac{\\gamma\\Phi}{\\langle\\Phi\\rangle}\\right).\n\\end{equation}\nFor this expression, the limit $\\lambda\\rightarrow0$ exists. \\Eqref{rate} is plotted for exponential pulses with $\\lambda=0$ and $\\lambda=1/2$ and a range of $\\gamma$-values in \\Figref{Fig:theodorsen_avtime}. From this, the PDF of the time as well as the mass above a given threshold can be determined analytically for the limits $\\gamma\\rightarrow 0$ and $\\gamma\\rightarrow \\infty$ and numerically with Monte Carlo simulations for general $\\gamma$ \\cite{theodorsen2018level}. \n\\begin{figure}\n\t\\centering\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/yield_fun_mE_W.eps}\n\t\t\\caption{Mean yield function for a range of relative fluctuation levels. Image courtesy of A. Theodorsen \\cite{theodorsen2018statistical}.}\n\t\t\\label{Fig:theodorsen_yield}\n\t\\end{minipage}\n\t\\hfill\n\t\\begin{minipage}{.48\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/theodorsen_rate.png}\n\t\t\\caption{Rate of up-crossings for an FPP consisting of exponential pulses with $\\lambda=0$ (dashed line) and $\\lambda=1/2$ (full line) for various $\\gamma$-values. Reprinted figure with permission from \\cite{theodorsen2018level}. Copyright (2018) by the American Physical Society.}\n\t\t\\label{Fig:theodorsen_avtime}\n\t\\end{minipage}\n\\end{figure}\n\nAt the time of writing this thesis, excess time statistics of an FPP consisting of Lorentzian pulses have not been investigated.\n\n\\section{Density profiles}\\label{n_prof}\nThe FPP can be extended to include a spatial variable $x$, resulting in a model of advecting single pulses and corresponding profiles. In the following, this model is discussed in the context of filament motion in SOL plasmas. The presented notation is consistent with \\cite{garcia2016stochastic}. 
Alternative formulations based on a Lagrangian approach to filament dynamics result in equivalent expressions \\cite{militello2016scrape,militello2016relation,walkden2017interpretation,militello2018two}.\n\nThe model is given by a superposition of pulses \n\\begin{equation}\n\t\\Phi_K(x,t) = \\sum_{k=1}^{K} \\phi_k(x,t).\n\\end{equation}\nIn contrast to previous sections, $\\phi_k$ here comprises both the amplitude and the pulse shape. For simplicity, we keep $K$ constant in the following derivation. The evolution of individual pulses, neglecting pulse interaction, is given by the modified advection equation,\n\\begin{equation}\\label{phi_equation}\n\t\\frac{\\partial \\phi_k}{\\partial t} + v_\\perp\\frac{\\partial \\phi_k}{\\partial x} + \\frac{\\phi_k}{\\tau_\\parallel} = 0,\n\\end{equation}\nwith $v_\\perp$ as the radial velocity and $\\tau_\\parallel$ representing the parallel transit time, describing parallel losses along the magnetic field. Note that we assume $v_\\perp$ and $\\tau_\\parallel$ to be constant for all filaments. Following \\Eqref{phi_equation}, individual pulses can be written as the product of their amplitude and pulse shape, $\\phi_k(x,t) = A_k(t)\\varphi_k(x-x_k-v_\\perp t)$ with $x_k$ as the position of the pulse at $t=0$. The individual amplitudes are assumed to satisfy the expression \n\\begin{equation}\n\t\\frac{\\mathrm{d}A_k}{\\mathrm{d}t} = -\\frac{A_k}{\\tau_\\parallel}.\n\\end{equation}\nThe solution of this amplitude equation can be expressed by introducing the initial amplitude $A_{0k}$, resulting in\n\\begin{equation}\n\tA_k(t) = A_{0k}\\,\\mathrm{exp}\\left(-\\frac{t+x_k/v_\\perp}{\\tau_\\parallel}\\right),\n\\end{equation}\nwith the pulse $k$ being located at $x=0$ at time $-x_k/v_\\perp$. We further assume the pulse shape to take the form of an exponential function,\n\\begin{equation}\n\t\\varphi_k(x) = \\Theta\\left(-\\frac{x}{l_\\perp}\\right)\\mathrm{exp}\\left(\\frac{x}{l_\\perp}\\right),\n\\end{equation}\nwhere $\\Theta$ is the Heaviside function and $l_\\perp$ is the radial size of the pulse. Note that this pulse shape is consistent with the findings shown in \\Figref{Fig:garcia_1}. We now consider the signal at a reference position $\\xi$. With the arrival time $t_k = (\\xi - x_k)/v_\\perp$ of pulse $k$ at the position $\\xi$, the process takes the form\n\\begin{equation}\n\t\\Phi_K(\\xi, t) = \\sum_{k=1}^{K}A_{0k}\\,\\mathrm{exp}\\left(-\\frac{\\xi}{v_\\perp \\tau_\\parallel}\\right)\\Theta\\left(\\frac{t-t_k}{\\tau_\\perp}\\right)\\exp\\left(-\\frac{t-t_k}{\\tau_\\mathrm{d}}\\right),\n\\end{equation}\nwith $\\tau_\\perp = l_\\perp/v_\\perp$ and the pulse duration given by the harmonic mean of the perpendicular and parallel transit times, $\\tau_\\mathrm{d} = \\tau_\\parallel\\tau_\\perp/(\\tau_\\parallel+\\tau_\\perp)$. By averaging over uniformly distributed pulse arrivals, the resulting radial profile takes the exponential form\n\\begin{equation}\n\t\\langle\\Phi\\rangle(\\xi) = \\frac{\\tau_\\mathrm{d}}{\\tau_\\mathrm{w}}\\langle A_0\\rangle\\mathrm{exp}\\left(-\\frac{\\xi}{v_\\perp\\tau_\\parallel}\\right).\n\\end{equation}\nThe resulting scale length of the profile is governed by the radial velocity of the filaments and the parallel transit time. Multiple realizations at individual points in time and the corresponding mean profile are shown in \\Figref{Fig:militello}. \n\nThe application of this model exhibits numerous limitations. 
The assumption of constant radial velocities and pulse sizes is an oversimplified description of filament transport in the SOL. In addition, the assumption of a radially constant $\\tau_\\mathrm{w}$ and therefore radially constant $\\gamma$ does not hold in experimental observations. Interactions between individual filaments and two-dimensional motion are also not considered. However, this mathematical model still provides valuable insight into the relation between individual filaments and radial profiles in the SOL and can serve as a framework to relate isolated blob and filament studies to turbulence simulations and experimental measurements. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=9cm]{figures/militello_profile.png}\n\t\\caption{Radial exponential density profile (black dashed line) illustrated as the mean of individual density realizations (colored lines) given by a superposition of individual exponentially shaped pulses. Image courtesy of F. Militello \\cite{militello_profiles}.}\n\t\\label{Fig:militello}\n\\end{figure}\n\\section{Deconvolution method}\nSince the FPP can be expressed as a convolution of a pulse function $\\phi$ and a forcing $f_K$ consisting of delta-function pulses, as shown in \\Eqref{conv}, one can attempt to estimate the forcing if the pulse shape is known \\cite{theodorsen2018universality, decristoforo2020intermittent, kube2020comparison}. For a given forcing, one can determine the pulse amplitudes $\\{ A_k \\}_{k=1}^K$ and arrival times $\\{ t_k \\}_{k=1}^K$ directly. This method has the advantage of capturing pulses of all sizes, not only events above a certain threshold as is the case for conditional averaging. Additionally, the problem of pulse overlap is less severe. In order to estimate the forcing, a modified Richardson-Lucy deconvolution algorithm can be used \\cite{richardson1972bayesian,lucy1974iterative}. The algorithm is initialized with a first guess for the forcing, $f_K^{(1)}$. This guess is updated iteratively, with the $(n+1)$-th iterate given by\n\\begin{equation}\n\tf_K^{(n+1)} = f_K^{(n)} \\frac{D*\\widehat{\\phi}}{f_K^{(n)}*\\phi*\\widehat{\\phi}}.\n\\end{equation}\nHere, $\\widehat{\\phi}(t) = \\phi(-t)$ and $D$ denotes the investigated time series. This algorithm converges to the least squares solution \\cite{dell2007model}. The initial guess for the forcing matters little, as it only determines the number of iterations required until the algorithm converges. \n\nThis deconvolution algorithm thereby provides a versatile tool for analyzing time series of SOL fluctuations in experiments and simulations, as it yields clear results even for relatively short time series.
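\n\nA minimal Python sketch of this iteration (a hedged illustration assuming \\texttt{numpy}; the function and variable names are not from any reference implementation) reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef rl_deconvolve(D, pulse, iterations=1000):\n    # Modified Richardson-Lucy iteration: D is the measured time\n    # series and pulse the known, sampled pulse shape phi.\n    pulse_hat = pulse[::-1]            # phi_hat(t) = phi(-t)\n    def conv(a, b):\n        return np.convolve(a, b, mode='same')\n    f = np.ones_like(D)                # flat first guess f_K^(1)\n    for _ in range(iterations):\n        denom = conv(conv(f, pulse), pulse_hat) + 1e-12  # avoid 0/0\n        f = f * conv(D, pulse_hat) / denom\n    return f\n\\end{verbatim}\nThe pulse amplitudes and arrival times can then be read off as the weights and positions of the localized spikes in the estimated forcing.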
\n\n\n", "meta": {"hexsha": "92f4912b8b3cad54fc8b5c1e870855ccf72b1cb9", "size": 24720, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "third_chapter.tex", "max_stars_repo_name": "gregordecristoforo/PhD-thesis", "max_stars_repo_head_hexsha": "4d5b4e7230ee88b69adda73cf28a7673e6707975", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "third_chapter.tex", "max_issues_repo_name": "gregordecristoforo/PhD-thesis", "max_issues_repo_head_hexsha": "4d5b4e7230ee88b69adda73cf28a7673e6707975", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "third_chapter.tex", "max_forks_repo_name": "gregordecristoforo/PhD-thesis", "max_forks_repo_head_hexsha": "4d5b4e7230ee88b69adda73cf28a7673e6707975", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 102.5726141079, "max_line_length": 998, "alphanum_fraction": 0.7730582524, "num_tokens": 6947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8615382165412808, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5702718539476797}}
{"text": "% \\documentclass[a4paper, 10px]{ctexart}\n\\documentclass{ctexart}\n\\usepackage{amsmath}\n\\usepackage[left=1in, right=1in, top=1.2in, bottom=1.2in]{geometry}\n\\usepackage{ctex}\n\\usepackage[utf8]{inputenc}\n\\usepackage{boxproof}\n\\usepackage{fontspec}\n\\usepackage{a4wide}\n\\usepackage{listings}\n% \\setmainfont[Scale = 1]{Palatino}\n% \\setCJKmainfont{Songti SC}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\fancypagestyle{plain}{\n    \\fancyhead[L]{East China Normal University}\n    \\fancyhead[R]{}\n    \\fancyfoot[C]{\\thepage}\n}\n\\def\\premise{\\mathrm{premise}}\n\\def\\assumption{\\mathrm{assumption}}\n\\def\\MT{\\mathrm {MT\\ }}\n\\def\\LEM{\\mathrm{LEM}}\n\\def\\intro{\\mathrm{i\\ }}\n\\def\\elim{\\mathrm{e\\ }}\n\\def\\introa{\\mathrm{i_1\\ }}\n\\def\\elima{\\mathrm{e_1\\ }}\n\\def\\introb{\\mathrm{i_2\\ }}\n\\def\\elimb{\\mathrm{e_2\\ }}\n\n\n\n\\def\\n{\\neg}\n\\def\\d{\\vee}\n\\def\\c{\\wedge}\n\\def\\NNF{\\texttt{NNF}}\n\\def\\IMPLFREE{\\texttt{IMPL\\_FREE}}\n\\def\\CNF{\\texttt{CNF}}\n\\def\\DISTR{\\texttt{DISTR}}\n\n\\title{Logic in Computer Science Assignment 2}\n\\author{10185101210 \u9648\u4fca\u6f7c}\n\\date{October 2020}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{}\n\n\\subsection{Convert the following formula into CNF.}\n\n$\\neg(r \\rightarrow(\\neg((p \\vee q) \\wedge(\\neg p \\rightarrow(q \\wedge r)))))\\\\$\n\nLet $\\phi = \\neg(r \\rightarrow(\\neg((p \\vee q) \\wedge(\\neg p \\rightarrow(q \\wedge r)))))$.\n\nFirst apply \\IMPLFREE:\n\n$$\n\\begin{aligned}\n    \\IMPLFREE(\\phi) &= \\n(r\\to (\\n((p\\d q) \\c (\\n p \\to (q \\c r))))) \\\\\n    &= \\n(\\IMPLFREE(r \\to (\\n ((p \\d q) \\c  (\\n p \\to (q \\c r)))))) \\\\\n    &= \\n(\\n r \\d \\IMPLFREE (\\n((p \\d q) \\c (\\n p \\to (q \\c r))))) \\\\\n    &= \\n(\\n r \\d (\\n(\\IMPLFREE (p \\d q) \\c \\IMPLFREE (\\n p \\to (q \\c r))))) \\\\\n    &= \\n(\\n r \\d (\\n((p \\d q) \\c (\\IMPLFREE (\\n p \\to (q \\c r)))))) \\\\\n    &= \\n(\\n r \\d (\\n((p \\d q) \\c (\\n \\n p \\d \\IMPLFREE(q \\c r))))) \\\\\n    &= \\n(\\n r \\d (\\n((p \\d q) \\c (\\n \\n p \\d (\\IMPLFREE q \\c \\IMPLFREE r))))) \\\\\n    &= \\n(\\n r \\d (\\n((p \\d q) \\c (\\n \\n p \\d(q \\c r))))) \\\\\n\\end{aligned}\n$$\n\nThen apply \\NNF:\n\n$$\n\\begin{aligned}\n    \\NNF(\\IMPLFREE(\\phi)) &= \\NNF(\\n(\\n r \\d (\\n((p \\d q) \\c (\\n \\n p \\d(q \\c r)))))) \\\\\n    &= \\NNF(\\n \\n r) \\c \\NNF (\\n (\\n ((p \\d q) \\c (\\n \\n p \\d (q \\c r))))) \\\\\n    &= r \\c \\NNF ((p \\d q) \\c (\\n \\n p \\d (q \\c r))) \\\\\n    &= r \\c (\\NNF(p \\d q) \\c \\NNF (\\n \\n p \\d (q \\c r))) \\\\\n    &= r \\c ((\\NNF p \\d \\NNF q) \\c (\\NNF (\\n \\n p) \\d \\NNF (q \\c r)))\\\\\n    &= r \\c ((p \\d q) \\c (p \\d (q \\c r)))\n\\end{aligned}\n$$\n\n\\newpage\n\nThen apply \\CNF:\n\n$$\n\\begin{aligned}\n    \\CNF(\\NNF(\\IMPLFREE(\\phi))) &= \\CNF(r \\c ((p \\d q) \\c (p \\d (q \\c r)))) \\\\\n    &= r \\c \\CNF((p \\d q) \\c (p \\d (q \\c r)))\\\\\n    &= r \\c \\CNF(p \\d q) \\c \\CNF(p \\d (q \\c r))\\\\\n    &= r \\c \\DISTR(p, q) \\c \\DISTR(\\CNF(p), \\CNF((q \\c r))) \\\\\n    &= r \\c \\DISTR(p, q) \\c \\DISTR(p, (q \\c r)) \\\\\n    &= r \\c (p \\d q) \\c \\DISTR(p, q) \\c \\DISTR(p, r)\\\\\n    &= r \\c (p \\d q) \\c (p \\d q) \\c (p \\d r)\\\\\n    &= r \\c (p \\d q) \\c (p \\d r) \\\\\n\\end{aligned}\n$$\n\nAbove is the result of \\CNF \\ algorithm. 
Furthermore, this result can be simplified, since $r \\c (p \\d r) = r$ by the absorption law:\n\n$$\n\\begin{aligned}\n    r \\c (p \\d q) \\c (p \\d r) &= r \\c (p \\d q) \\\\\n\\end{aligned}\n$$\n\n\\subsection{Determine the satisfiability of the following formula with the Horn algorithm.}\n\n$\n(p \\wedge q \\wedge s \\rightarrow \\perp) \\wedge(q \\wedge s \\rightarrow p) \\wedge(\\top \\rightarrow s) \\wedge(s \\rightarrow q)\\\\\n$\n\nMarked initially: $\\top$\n\nSince $\\top \\to s$, mark $s$. (Now marked: $s$, $\\top$)\n\nSince $s \\to q$, mark $q$. (Now marked: $s$, $q$, $\\top$)\n\nSince $q \\c s \\to p$, mark $p$. (Now marked: $s$, $p$, $q$, $\\top$)\n\nSince $p \\c q \\c s \\to \\bot$, mark $\\bot$. (Now marked: $s$, $p$, $q$, $\\top$, $\\bot$)\n\nSince $\\bot$ is marked, the Horn algorithm concludes that this formula is not satisfiable.\n\\end{document}\n", "meta": {"hexsha": "441f75fff3f5f5931ff1926538ac5610eeffacef", "size": 3661, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Course/CSLogic/Assignment2/Assignment2.tex", "max_stars_repo_name": "AixMoon/LearnigRepo", "max_stars_repo_head_hexsha": "ee98fb352735e2b4f97304847b6c0311bc30195e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2020-05-02T20:06:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T10:01:29.000Z", "max_issues_repo_path": "Course/CSLogic/Assignment2/Assignment2.tex", "max_issues_repo_name": "AixMoon/LearnigRepo", "max_issues_repo_head_hexsha": "ee98fb352735e2b4f97304847b6c0311bc30195e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Course/CSLogic/Assignment2/Assignment2.tex", "max_forks_repo_name": "AixMoon/LearnigRepo", "max_forks_repo_head_hexsha": "ee98fb352735e2b4f97304847b6c0311bc30195e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-04T04:29:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-15T08:15:01.000Z", "avg_line_length": 28.6015625, "max_line_length": 125, "alphanum_fraction": 0.5432941819, "num_tokens": 1573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228758499942, "lm_q2_score": 0.8615382129861583, "lm_q1q2_score": 0.5702718515944627}}
{"text": "\\documentclass{article}[11pt]\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n\\usepackage{mathtools}\n\\usepackage{mathrsfs}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{graphicx}\n\\usepackage{adjustbox}\n\\usepackage{fixmath}\n\\usepackage{xcolor}\n\\usepackage{float}\n\\usepackage{caption}\n\\newcommand*{\\Z}{\\mathbb{Z}}\n\\newcommand*{\\D}{\\mathbb{D}}\n\\newcommand*{\\Q}{\\mathbb{Q}}\n\\newcommand*{\\R}{\\mathbb{R}}\n\\newcommand*{\\N}{\\mathbb{N}}\n\\newcommand*{\\I}{\\mathbb{I}}\n\\newcommand*{\\C}{\\mathbb{C}}\n\\newcommand*{\\Prime}{\\mathbb{P}}\n\\newcommand*{\\E}{\\textrm{E}}\n\\newcommand*{\\Var}{\\textrm{Var}}\n\\newcommand*{\\m}{\\textrm{mod }}\n\\newcommand*{\\osc}{\\textrm{osc}}\n\\newcommand*{\\vol}{\\textrm{vol}}\n\\newcommand*{\\prt}{\\mathcal{P}}\n\\newcommand*{\\intr}{\\textrm{int}}\n\\newcommand*{\\cl}{\\textrm{cl}}\n\\newcommand*{\\bd}{\\textrm{bd}}\n\\newcommand*{\\tr}{\\textrm{Tr}}\n\\newcommand*{\\img}{\\textrm{im}}\n\\newcommand*{\\rank}{\\textrm{rank}}\n\\newcommand*{\\spn}{\\textrm{span}}\n\\newcommand*{\\Trig}{\\textrm{Trig}}\n\\newcommand*{\\col}{\\textrm{col}}\n\\newcommand*{\\iu}{\\mathbold{i}}\n\\newcommand*{\\re}{\\textrm{Re}}\n\\newcommand*{\\im}{\\textrm{Im}}\n\\newcommand*{\\e}{\\textrm{e}}\n\\newcommand*{\\arcsinh}{\\textrm{arcsinh}}\n\\newcommand*{\\ros}{\\widehat{\\mathcal{R}}}\n\\newcommand*{\\defeq}{\\vcentcolon=}\n\\newcommand*{\\eqdef}{=\\vcentcolon}\n\\DeclareMathOperator*{\\diam}{diam}\n\\DeclareMathOperator*{\\Bor}{Bor}\n\\linespread{1.3}\n\n\\renewcommand{\\dot}{\\;\\begin{picture}(-1,1)(-1,-3)\\circle*{3}\\end{picture}\\;\\,\\, }\n\\newcommand\\norm[1]{\\left\\lVert#1\\right\\rVert}\n\\newcommand*{\\QED}{\\hfill\\ensuremath{\\square}}%\n\n\\newtheoremstyle{dotless}{}{}{}{}{\\bfseries}{}{12pt}{}\n\n\\theoremstyle{dotless}\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{cor}[thm]{Corollary}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{defn}[thm]{Definition}\n\\newtheorem{rem}[thm]{Remark}\n\\newtheorem{exa}[thm]{Example}\n\\newtheorem{exe}[thm]{Exercise}\n\\newtheorem{aside}[thm]{Aside}\n\n% stuff from the percent sign to end of line is a comment, ignored by LaTeX\n\n\\usepackage[margin=1in]{geometry} %set margins\n\\begin{document}\n\n\\nocite{*} % this command forces all references in template.bib to be printed in the bibliography\n\n\\title{Hausdorff Measure and Dimension}\n\n\\author{Name \\\\\nCourse Number \u2013 Course Name \\\\\nUniversity Name}\n\n\\maketitle\n\n\\section*{Introduction}\n\nThe \\textbf{Hausdorff measure} is a measure on metric spaces which generalizes the Lebesgue measure on $\\R^n$.\nAs such, one can imagine how useful this measure can be in areas such as differential geometry, where it\ncan be used to define ``volumes'' of submanifolds (such as the unit sphere in $\\R^3$).\nIn \\textbf{Assignment 3}, we've seen how to construct the Hausdorff measure on a metric space. 
We shall now look at\nsome further properties of the measure (such as its \\textit{measurable} sets) as well as introduce \n\\textbf{Hausdorff dimension}, which generalizes the notion of the usual (topological) dimension.\n\n\n\\section{Measurable sets of the Hausdorff measure}\n\nLet $(X, \\rho)$ be a metric space.\nRecall (from \\textbf{Assignment 3}) that if $d > 0$ and $\\delta > 0$, we define $H_\\delta^d : \\mathcal{P}(X) \\to [0,\\infty]$ by:\n\\[ H_\\delta^d(E) = \\inf \\left\\{ \\sum_{i \\geqslant 1} \\diam(U_i)^d : E \\subseteq \\bigcup_{i \\geqslant 1} U_i, \\; \\diam(U_i) < \\delta \\right\\} \\]\n\n\\noindent\nfor all $E \\subseteq X$,\nwhere $\\diam(U) = \\sup \\{ \\rho(x,y) : x,y \\in U \\}$. Then the $d$-dimensional Hausdorff \\textit{outer} measure $H^d$ on $(X, \\rho)$ is defined by:\n\\[ H^d(E) = \\lim_{\\delta \\to 0^+} H_\\delta^d(E) \\]\n\n\\noindent\nfor all $E \\subseteq X$. That is, $H^d(E)$ is approximated by\nthe sum of $d$\\textsuperscript{th} powers of the diameters of the sets of countable covers of $E$ where each diameter is less than $\\delta$, \nand a smaller value of $\\delta$ means a better\napproximation. We wish to find the \\textit{measurable} sets of $H^d$.\n\\bigskip\n\nWe know that a set $E \\subseteq X$ is $H^d$-measurable when $H^d(A) = H^d(E \\cap A) + H^d(A \\setminus E)$\nfor all $A \\subseteq X$. We therefore have: \n\\[ \\mathcal{B} = \\{ E \\subseteq X : H^d(A) = H^d(E \\cap A) + H^d(A \\setminus E) \\; \\forall A \\subseteq X \\} \\]\n\n\\noindent\nis the set of all $H^d$-measurable sets. In particular, the Carath\\'{e}odory theorem tells us that $\\mathcal{B}$ is a $\\sigma$-algebra\nand that $H^d$ restricted to $\\mathcal{B}$ is a \\textit{complete} measure. Of course, this definition doesn't tell us much about exactly \nwhich sets are $H^d$-measurable. If we wish for the Hausdorff measure to extend the notion of the Lebesgue measure, it's reasonable\nto ask that\nthe Borel sets in $X$ be $H^d$-measurable. Recall (again from \\textbf{Assignment 3}) that two sets $A, B \\subseteq X$ in\na metric space $(X,\\rho)$ are said to be \\textbf{positively separated} when $\\inf \\{ \\rho(x,y) : x \\in A, \\; y \\in B \\} > 0$.\nFor brevity, we will define $\\rho(A,B) = \\inf \\{ \\rho(x,y) : x \\in A, \\; y \\in B \\}$. We have seen that if $A$ and $B$ are\npositively separated, then $H^d(A \\cup B) = H^d(A) + H^d(B)$.\n\n\\begin{defn}\n\tIn general, if $\\mu^*$ is an outer measure on a metric space $(X, \\rho)$ such that $\\mu^*(A \\cup B) = \\mu^*(A) + \\mu^*(B)$\n\twhenever $A, B \\subseteq X$ are positively separated, then $\\mu^*$ is called a \\textbf{metric outer measure} (or a \n\t\\textbf{Carath\\'{e}odory outer measure}).\n\\end{defn}\n\\bigskip\n\nHence, the Hausdorff outer measure $H^d$ is a metric outer measure. We have the following general result about metric outer\nmeasures that states that the Borel sets must be measurable.\n\n\\begin{prop}\n\tLet $(X, \\rho)$ be a metric space and $\\mu^*$ a metric outer measure on $(X, \\rho)$. Then \n\tevery Borel set in $X$ is $\\mu^*$-measurable.\n\\end{prop}\n\\begin{proof}\n\t(Adapted from \\cite{Fol}).\n\t\\bigskip\n\t\n\t\\noindent\n\tLet $\\Bor(X)$ denote the set of Borel sets, which we know is a $\\sigma$-algebra. In particular, we have that\n\t$\\Bor(X) = \\sigma(\\{ F \\subseteq X : F \\text{ is closed} \\})$. Let $\\mathcal{B}$ be the $\\sigma$-algebra of $\\mu^*$-measurable\n\tsets in $X$. 
Then to show that $\\Bor(X) \\subseteq \\mathcal{B}$, it suffices to prove that every closed set is $\\mu^*$-measurable.\n\t\\bigskip\n\t\n\t\\noindent\n\tIndeed, let $F \\subseteq X$ be closed and let $A \\subseteq X$ be any set. It's immediate that\n\t$\\mu^*(A) \\leqslant \\mu^*(A \\cap F) + \\mu^*(A \\setminus F)$ by subadditivity. Moreover, if\n\t$\\mu^*(A) = \\infty$, equality clearly holds. Thus, we may assume $\\mu^*(A) < \\infty$.\n\tFor each $n \\in \\N$, let $A_n = \\{ x \\in A \\setminus F : \\rho(\\{x\\}, F) = \\inf\\{ \\rho(x,y) : y \\in F \\} \\geqslant \\frac{1}{n} \\}$.\n\tObserve that if $x \\in A_n$, then $\\rho(\\{x\\}, F) \\geqslant \\frac{1}{n} > \\frac{1}{n+1}$, so $x \\in A_{n+1}$.\n\tHence, $A_1 \\subseteq A_2 \\subseteq \\cdots \\subseteq A \\setminus F$.\n\t\\bigskip\n\t\n\t\\noindent\n\tNow for any $x \\in A \\setminus F$, suppose we had that $\\rho(\\{x\\}, F) = \\inf\\{ \\rho(x,y) : y \\in F \\} = 0$. \n\tThen for all $n \\in \\N$, we can find some $y_n \\in F$ for which $\\rho(x,y_n) < \\frac{1}{n}$. As $\\rho$ is a metric\n\ton $X$, it follows that $y_n$ converges to $x$. By closure of $F$, we have that in fact $x \\in F$, which is a contradiction.\n\tIt thus follows that $\\rho(\\{x\\}, F) > 0$ for any $x \\in A \\setminus F$, which means we can find a $k \\in \\N$ for which\n\t$\\rho(\\{x\\}, F) \\geqslant \\frac{1}{k}$ and consequently $x \\in A_k \\subseteq \\bigcup_{n \\geqslant 1} A_n$. This proves\n\tthat $\\bigcup_{n \\geqslant 1} A_n = A \\setminus F$.\n\t\\bigskip\n\t\n\t\\noindent\n\tAs $A = (A \\cap F) \\cup (A \\setminus F) \\supseteq (A \\cap F) \\cup A_n$ for all $n \\in \\N$, we have by monotonicity that:\n\t\\[ \\mu^*(A) \\geqslant \\mu^*((A \\cap F) \\cup A_n) \\]\n\t\n\t\\noindent\n\tMoreover, we have by construction that $\\rho(\\{x\\}, F) \\geqslant \\frac{1}{n}$ whenever $x \\in A_n$, and thus\n\t(in particular) $\\rho(A_n, A \\cap F) \\geqslant \\frac{1}{n} > 0$, so $A_n$ and $A \\cap F$ are positively separated. \n\tSince $\\mu^*$ is a \\textit{metric} outer measure:\n\t\\[ \\mu^*((A \\cap F) \\cup A_n) = \\mu^*(A \\cap F) + \\mu^*(A_n) \\]\n\t\n\t\\noindent\n\tNow consider $D_n = A_{n+1} \\setminus A_n = \\{ x \\in A \\setminus F : \\frac{1}{n+1} \\leqslant \\rho(\\{x\\}, F) < \\frac{1}{n} \\}$.\n\tBy construction, we have that the $D_n$'s are disjoint and $A \\setminus F = A_1 \\cup \\bigcup_{n \\geqslant 1} D_n$.\n\tLet $x \\in D_{n+1}$ and suppose $y \\in X$ satisfies $\\rho(x,y) < \\frac{1}{n(n+1)}$. Since $x \\in D_{n+1} = A_{n+2} \\setminus\n\tA_{n+1}$, we have $\\rho(\\{x\\}, F) < \\frac{1}{n+1}$, so we can find some $z \\in F$ with $\\rho(x,z) < \\frac{1}{n+1}$, and the triangle inequality gives:\n\t\\[ \\rho(y,z) \\leqslant \\rho(y,x) + \\rho(x,z) < \\frac{1}{n(n+1)} + \\frac{1}{n+1} = \\frac{n+1}{n(n+1)} = \\frac{1}{n} \\]\n\t\n\t\\noindent\n\tand thus $\\rho(\\{y\\},F) < \\frac{1}{n} \\implies y \\not\\in A_n$. This means if $y \\in A_n$, then we must have\n\t$\\rho(x,y) \\geqslant \\frac{1}{n(n+1)}$ so that $\\rho(x, A_n) \\geqslant \\frac{1}{n(n+1)}$. 
As our choice of $x \\in D_{n+1}$ was arbitrary,\n\tit follows that $\\rho(D_{n+1}, A_n) \\geqslant \\frac{1}{n(n+1)} > 0$ so that $D_{n+1}$ and $A_n$ are positively separated.\n\tThus, for all $n \\in \\N$:\n\t\\[ \\mu^*(A_{2n+1}) = \\mu^*(D_{2n} \\cup A_{2n}) \\geqslant \\mu^*(D_{2n} \\cup A_{2n-1})\n\t= \\mu^*(D_{2n}) + \\mu^*(A_{2n-1}) \\]\n\t\\[ \\mu^*(A_{2n}) = \\mu^*(D_{2n-1} \\cup A_{2n-1}) \\geqslant \\mu^*(D_{2n-1} \\cup A_{2n-2})\n\t= \\mu^*(D_{2n-1}) + \\mu^*(A_{2n-2}) \\]\n\t\n\t\\noindent\n\twhere we can set $A_0 = \\emptyset$. It hence follows that (inductively):\n\t\\[ \\sum_{k=1}^n \\mu^*(D_{2k}) \\leqslant \\sum_{k=1}^n \\mu^*(D_{2k}) + \\mu^*(A_1) \\leqslant \\mu^*(A_{2n+1})\n\t\\leqslant \\mu^*(A) < \\infty \\]\n\t\\[ \\sum_{k=1}^n \\mu^*(D_{2k-1}) = \\sum_{k=1}^n \\mu^*(D_{2k-1}) + \\mu^*(A_0) \\leqslant \\mu^*(A_{2n})\n\t\\leqslant \\mu^*(A) < \\infty \\]\n\t\n\t\\noindent\n\tAs the above holds for all $n \\in \\N$, we have in particular that $\\sum_{k=1}^\\infty \\mu^*(D_{2k})$\n\tand $\\sum_{k=1}^\\infty \\mu^*(D_{2k-1})$ are convergent series. Thus, $\\sum_{k=1}^\\infty \\mu^*(D_k)$ is also a\nconvergent series. By subadditivity, we have for all $n \\in \\N$ that:\n\t\\[ \\mu^*(A \\setminus F) = \\mu^*\\left( \\bigcup_{k \\geqslant 1} A_k \\right)\n\t= \\mu^*\\left( A_n \\cup \\bigcup_{k \\geqslant n} D_k \\right)\n\t\\leqslant \\mu^*(A_n) +  \\sum_{k \\geqslant n} \\mu^*(D_k) \\]\n\t\n\t\\noindent\n\tBy convergence of the series:\n\t\\[ \\mu^*(A \\setminus F) \\leqslant \\liminf_{n \\to \\infty} \\left[ \\mu^*(A_n) +  \\sum_{k \\geqslant n} \\mu^*(D_k) \\right]\n\t= \\liminf_{n \\to \\infty} \\mu^*(A_n) + \\liminf_{n \\to \\infty} \\sum_{k \\geqslant n} \\mu^*(D_k)\n\t= \\liminf_{n \\to \\infty} \\mu^*(A_n) \\]\n\t\n\t\\noindent\n\tMoreover, $A_n \\subseteq A \\setminus F$ means $\\mu^*(A_n) \\leqslant \\mu^*(A \\setminus F)$ by monotonicity, so in fact:\n\t\\[ \\mu^*(A \\setminus F) \\leqslant \\liminf_{n \\to \\infty} \\mu^*(A_n) \\leqslant \\limsup_{n \\to \\infty} \\mu^*(A_n)\n\t\\leqslant \\mu^*(A \\setminus F) \\]\n\t\\[ \\implies \\lim_{n \\to \\infty} \\mu^*(A_n) = \\mu^*(A \\setminus F) \\]\n\t\n\t\\noindent\n\tFinally, it follows that:\n\t\\[ \\mu^*(A) \\geqslant \\mu^*(A \\cap F) + \\lim_{n \\to \\infty} \\mu^*(A_n) = \\mu^*(A \\cap F) + \\mu^*(A \\setminus F) \\]\n\t\n\t\\noindent\n\tand we hence get that $F$ is $\\mu^*$-measurable. As $F$ was an arbitrarily-chosen closed set, it follows that every Borel set is \n\t$\\mu^*$-measurable.\n\\end{proof}\n\nThe following result is therefore immediate (following the fact that $H^d$ is a metric outer measure):\n\n\\begin{cor}\n\tIn a metric space $(X, \\rho)$, every Borel set is $H^d$-measurable.\n\\end{cor}\n\n\\begin{rem}\n\tAs $\\Bor(X)$ is a $\\sigma$-algebra and each $A \\in \\Bor(X)$ is $H^d$-measurable, it follows that\n\t$(X, \\Bor(X), H^d)$ is a measure space (with $H^d$ restricted to $\\Bor(X)$). \n\\end{rem}\n\nThe next step is to justify that the Hausdorff measure can indeed be considered a generalization of the Lebesgue\nmeasure (though we will not go into too much detail, as it would take quite some time). \n\\bigskip\n\nRecall that the Lebesgue outer measure on $\\R$ is \\textit{translation-invariant}. That is, $m^*(x + A) = m^*(A)$ for any $A \\subseteq \\R$\nand any $x \\in \\R$. We also know that $m^*(\\lambda A) = |\\lambda| m^*(A)$ for $\\lambda \\in \\R$. 
\nThe following analogous result holds for the Hausdorff measure:\n\n\\begin{lem}\\label{isom}\n\tOn a metric space $(X, \\rho)$, the Hausdorff outer measure $H^d$ is invariant under isometries of $(X, \\rho)$.\n\tMoreover, for any set $Y$, any $\\lambda \\in \\R$, and functions $f,g : Y \\to X$ satisfying $\\rho(f(y_1), f(y_2)) \\leqslant |\\lambda| \n\t\\rho(g(y_1), g(y_2))$ for all $y_1, y_2 \\in Y$, we have $H^d(f(B)) \\leqslant |\\lambda|^d H^d(g(B))$ for all $B \\subseteq Y$.\n\\end{lem}\n\\begin{proof}\n\t(Adapted from \\cite{Fol}).\n\t\\bigskip\n\t\n\t\\noindent\n\tLet $h : X \\to X$ be an isometry, so that $\\rho(x,y) = \\rho(h(x), h(y))$ for all $x,y \\in X$.\n\tIn particular, for any $U \\subseteq X$, the fact that $h$ is isometric and bijective implies:\n\t\\[ \\diam(U) = \\sup \\{ \\rho(x,y) : x,y \\in U \\} = \\sup \\{ \\rho(h(x), h(y)) : x,y \\in U \\} \\] \n\t\\[ = \\sup \\{ \\rho(h(x), h(y)) : h(x),h(y) \\in h(U) \\} = \\diam(h(U))  \\]\n\t\n\t\\noindent\n\tand likewise, $\\diam(h^{-1}(U)) = \\diam(U)$ (in other words, $\\diam$ is invariant under isometries).\n\tLet $A \\subseteq X$. Then for any $\\delta > 0$:\n\t\\[ H_\\delta^d(h(A)) = \\inf \\left\\{ \\sum_{i \\geqslant 1} \\diam(U_i)^d : h(A) \\subseteq \\bigcup_{i \\geqslant 1} U_i, \\; \\diam(U_i) < \\delta \\right\\} \\]\n\t\\[ = \\inf \\left\\{ \\sum_{i \\geqslant 1} \\diam(h^{-1}(U_i))^d : A \\subseteq \\bigcup_{i \\geqslant 1} h^{-1}(U_i), \\; \\diam(h^{-1}(U_i)) < \\delta \\right\\}\n\t= H_\\delta^d(A) \\]\n\t\n\t\\noindent\n\tand it follows by letting $\\delta \\to 0$ that $H^d(h(A)) = H^d(A)$, so that $H^d$ is invariant under isometries.\n\t\\bigskip\n\t\n\t\\noindent\n\tFor the second statement, we may assume WLOG that $\\lambda \\neq 0$ (the case where $\\lambda = 0$ is clear). Let $B \\subseteq Y$ \n\tand let $\\delta, \\varepsilon > 0$ be arbitrary. Then we can find a cover \n\t$V_1, V_2, ... \\subseteq X$ of $g(B)$ such that $\\diam(V_i) < \\frac{\\delta}{|\\lambda|}$ and:\n\t\\[ \\sum_{i=1}^\\infty \\diam(V_i)^d \\leqslant H_\\delta^d(g(B)) + \\frac{\\varepsilon}{|\\lambda|^d} \\]\n\t\n\t\\noindent\n\tLet $U_i = f(g^{-1}(V_i))$. Then we have by construction that $U_1, U_2, ... \\subseteq X$ is a cover for $f(B)$,\n\tand by the assumption we have:\n\t\\[ \\diam(U_i) = \\sup \\{ \\rho(x_1, x_2) : x_1, x_2 \\in U_i \\} = \\sup \\{ \\rho(f(y_1),f(y_2)) : y_1, y_2 \\in g^{-1}(V_i) \\}\\]\n\t\\[ \\leqslant |\\lambda| \\sup \\{ \\rho(g(y_1),g(y_2)) : y_1, y_2 \\in g^{-1}(V_i) \\} \n\t= |\\lambda| \\sup \\{ \\rho(x_1,x_2) : x_1, x_2 \\in V_i \\} = |\\lambda| \\diam(V_i) \\]\n\t\n\t\\noindent\n\tHence, it follows that:\n\t\\[ H_\\delta^d(f(B)) \\leqslant \\sum_{i=1}^\\infty \\diam(U_i)^d \\leqslant |\\lambda|^d \\sum_{i=1}^\\infty \\diam(V_i)^d\n\t\\leqslant |\\lambda|^d H_\\delta^d(g(B)) + \\varepsilon \\]\n\t\n\t\\noindent\n\tNow since $\\delta$ and $\\varepsilon$ were arbitrary, letting $\\delta, \\varepsilon \\to 0$ gives:\n\t\\[ H^d(f(B)) \\leqslant |\\lambda|^d H^d(g(B)) \\]\n\t\n\t\\noindent\n\twhich completes the proof.\n\\end{proof}\n\nThe following result justifies that the Hausdorff measure can be considered a generalization of the Lebesgue measure:\n\n\\begin{prop}\\label{leb}\n\tFor $\\R^n$ equipped with the usual Euclidean metric and $m_n$ the Lebesgue measure on $\\R^n$, there exists\n\ta constant $\\gamma_n > 0$ for which $H^n = \\gamma_n m_n$.\n\\end{prop}\n\\begin{proof}\n\tWe will \\textit{sketch} the general idea of the proof. Consider the unit cube $Q = [0,1]^n$ and define $\\gamma_n = H^n(Q)$. 
\n\tWe can show that $\\gamma_n < \\infty$ by choosing $\\delta > 0$, writing $Q$ as a union of cubes of length \n\t$< \\frac{\\delta}{\\sqrt{n}}$, and verifying that the sum of the $n$\\textsuperscript{th} powers of the diameters of the cubes is $n^{n/2}$. It will follow that\n\t$\\gamma_n = H^n(Q) < \\infty$ (as this bound does not depend on $\\delta$). \n\t\\bigskip\n\t\n\t\\noindent\n\tWe can similarly show that $\\gamma_n > 0$ by choosing an arbitrary countable cover $U_1, U_2, ...$ of $Q$ and covering\n\teach $U_i$ by a cube $Q_i$ of side length $\\diam(U_i)$. Monotonicity and subadditivity will then give\n\tthat $m_n(Q) = 1 \\leqslant \\sum_{i=1}^\\infty m_n(Q_i) = \\sum_{i=1}^\\infty (\\diam(U_i))^n$, and taking the infimum over all such\n\tcovers leads to $1 \\leqslant H^n(Q)$ (so that in particular, $\\gamma_n = H^n(Q) > 0$).\n\t\\bigskip\n\t\n\t\\noindent\n\tFinally, we can make use of the uniqueness of the Carath\\'{e}odory theorem\n\tin the construction of $m_n$ to show that $H^n(B) = \\gamma_n m_n(B)$ for all open \\textit{boxes} $B \\subseteq \\R^n$.\n\tThis is done by approximating the boxes by ``almost disjoint'' cubes (as done in \\textbf{A1 Q5 (2)}) and using \\textbf{Lemma \\ref{isom}} \n\tto deal with the scaling and translating of each cube to $[0,1]^n$. It will then follow that in fact $H^n(A) = \\gamma_n m_n(A)$\n\tfor all Lebesgue-measurable $A \\subseteq \\R^n$.\n\\end{proof}\n\n\\begin{rem}\\label{vol_rem}\n\tThe proof doesn't require the computation of the value of $\\gamma_n$, but only that it's a (strictly) positive finite value. It turns out that\n\t$\\gamma_n = \\frac{2^n}{\\vol(B_1^n)}$ where $\\vol(B_1^n)$ is the volume of the $n$-ball $B_1^n \\subseteq \\R^n$ of radius 1 \n\t(diameter 2). In particular, $\\gamma_1 = 1$ so that $H^1$ is exactly the 1-dimensional Lebesgue measure $m_1$. We will prove this\n\tlatter fact formally (as it will help later).\n\\end{rem}\n\n\\begin{lem}\\label{gamma_1}\n\t$\\gamma_1 = 1$. Hence, $H^1 = m_1$ (that is, the 1-dimensional Hausdorff measure on $\\R$ is exactly the Lebesgue measure on $\\R$).\n\\end{lem}\n\\begin{proof}\n\tSince $H^1([0,1]) = \\gamma_1 m_1([0,1]) = \\gamma_1$, we must show that $H^1([0,1]) = 1$.\n\tIndeed, let $\\delta > 0$ and choose $N \\in \\N$ for which $\\frac{1}{N} < \\delta$. Then for each $1 \\leqslant n \\leqslant N$, set \n\t$U_n = [\\frac{n-1}{N}, \\frac{n}{N}]$ so that $U_1, ..., U_N$ is a cover for $[0,1]$ with $\\diam(U_n) = \\frac{1}{N} < \\delta$.\n\tThen:\n\t\\[ H_\\delta^1([0,1]) \\leqslant \\sum_{n=1}^N \\diam(U_n) \\leqslant \\sum_{n=1}^N \\frac{1}{N} = 1 \\]\n\t\n\t\\noindent\n\tand so letting $\\delta \\to 0$, we have $H^1([0,1]) \\leqslant 1$. On the other hand, let $\\delta > 0$ and\n\tchoose any cover $U_1, U_2, ... \\subseteq \\R$ for $[0,1]$ satisfying $\\diam(U_i) < \\delta$. We can see that $\\diam(U_i) \\geqslant m_1^*(U_i)$\n\t(where $m_1^*$ is the Lebesgue \\textit{outer} measure)\n\tbecause $\\diam(U_i) < \\delta$ implies we can find $\\alpha_i = \\inf(U_i)$ and $\\beta_i = \\sup(U_i)$ so that\n\t$\\diam(U_i) = \\beta_i - \\alpha_i$ and $J_i = [\\alpha_i, \\beta_i] \\supseteq U_i$ so that $\\diam(U_i) = \\beta_i - \\alpha_i\n\t= m_1(J_i) \\geqslant m_1^*(U_i)$ by monotonicity. It then follows (by subadditivity and monotonicity) that:\n\t\\[ \\sum_{i=1}^\\infty \\diam(U_i) \\geqslant \\sum_{i=1}^\\infty m_1^*(U_i) \\geqslant m_1^*\\left( \\bigcup_{i=1}^\\infty U_i \\right)\n\t\\geqslant m_1([0,1]) = 1 \\]\n\t\n\t\\noindent\n\tso that (since our choice of cover was arbitrary) $H_\\delta^1([0,1]) \\geqslant 1$. 
Letting $\\delta \\to 0$ then gives\n\t$H^1([0,1]) \\geqslant 1$, and thus $H^1([0,1]) = \\gamma_1 = 1$.\n\\end{proof}\n\n\\section{Hausdorff dimension}\n\nBefore defining the Hausdorff dimension of a set, we first need the following lemma:\n\n\\begin{lem}\\label{dim}\n\tLet $(X, \\rho)$ be a metric space, let $E \\subseteq X$, and let $\\alpha, \\beta \\in \\R$ with $0 < \\alpha < \\beta$.\n\tIf $H^\\alpha(E) < \\infty$, then $H^\\beta(E) = 0$.\n\\end{lem}\n\\begin{proof}\n\t(Adapted from \\cite{Roy}).\n\t\\bigskip\n\t\n\t\\noindent\n\tLet $\\delta > 0$. As $H^\\alpha(E) < \\infty$, we can choose $U_1, U_2, ... \\subseteq X$ for which $E \\subseteq \\bigcup_{i \\geqslant 1} U_i$,\n\teach $\\diam(U_i) < \\delta$, and:\n\t\\[ \\sum_{i \\geqslant 1} \\diam(U_i)^\\alpha < H_\\delta^\\alpha(E) + 1  \\]\n\t\n\t\\noindent\n\tWe then see that as $\\alpha < \\beta$ and $\\diam(U_i) < \\delta$:\n\t\\[ H_\\delta^\\beta(E) \\leqslant \\sum_{i \\geqslant 1} \\diam(U_i)^\\beta = \\sum_{i \\geqslant 1} \\diam(U_i)^{\\beta - \\alpha + \\alpha}\n\t\\leqslant \\delta^{\\beta - \\alpha} \\sum_{i \\geqslant 1} \\diam(U_i)^\\alpha\n\t\\leqslant \\delta^{\\beta - \\alpha} \\left( H_\\delta^\\alpha(E) + 1 \\right) \\]\n\t\n\t\\noindent\n\tMoreover, since $H_\\delta^\\alpha(E)$ is increasing as $\\delta \\to 0$ (as proved in \\textbf{A3}):\n\t\\[ H_\\delta^\\beta(E) \\leqslant \\delta^{\\beta - \\alpha} \\left( H_\\delta^\\alpha(E) + 1 \\right) \\leqslant \\delta^{\\beta - \\alpha} \n\t\\left( H^\\alpha(E) + 1 \\right) \\]\n\t\n\t\\noindent\n\tFinally, we have:\n\t\\[ H^\\beta(E) = \\lim_{\\delta \\to 0} H_\\delta^\\beta(E) \\leqslant \\lim_{\\delta \\to 0} \\delta^{\\beta - \\alpha} \\left( H^\\alpha(E) + 1 \\right)\n\t= 0 \\]\n\t\n\t\\noindent\n\twhich forces $H^\\beta(E) = 0$.\n\\end{proof}\n\nThe lemma makes the following definition well-defined:\n\n\\begin{defn}\n\tLet $(X, \\rho)$ be a metric space and let $E \\subseteq X$. The \\textbf{Hausdorff dimension} $\\dim_H(E)$ of $E$ is defined to be:\n\t\\[ \\dim_H(E) = \\inf \\{ \\alpha \\geqslant 0 : H^\\alpha(E) = 0 \\} \\]\n\\end{defn}\n\n\\paragraph{}\nUsing \\textbf{Lemma \\ref{dim}} and \\textbf{Proposition \\ref{leb}}, we can compute the Hausdorff dimensions of certain shapes\n(subsets of $\\R^n$). The following are examples in $\\R^2$.\n\n\\begin{exa}\n\tConsider the closed unit disk $\\D = \\{ x \\in \\R^2 : \\norm{x}_2 \\leqslant 1 \\}$.\n\tWe claim that its Hausdorff dimension $\\dim_H(\\D) = 2$. By using \\textbf{Proposition \\ref{leb}}, we have that:\n\t\\[ H^2(\\D) = \\gamma_2 m_2(\\D) = \\gamma_2 \\cdot \\pi > 0 \\]\n\t\n\t\\noindent\n\twhere $m_2(\\D) = \\pi$ is just the $\\R^2$-Lebesgue measure (area) of $\\D$. Then, as $H^2(\\D) = \\gamma_2 \\pi < \\infty$, \\textbf{Lemma \\ref{dim}} tells us that\n\t$H^\\alpha(\\D) = 0$ for all $\\alpha > 2$, while $H^\\alpha(\\D) = 0$ for some $\\alpha \\leqslant 2$ would force $H^2(\\D) = 0$, a contradiction. Hence:\n\t\\[ \\dim_H(\\D) = \\inf\\{ \\alpha \\geqslant 0 : H^\\alpha(\\D) = 0 \\} = 2 \\]\n\t\n\t\\noindent\n\tWe used \\textbf{Proposition \\ref{leb}} to prove that $H^2(\\D) > 0$, but we could also have shown this directly with an infimum argument. The former method saves us time, however.\n\\end{exa}\n\nIt can be more difficult to compute Hausdorff dimensions of more general submanifolds of $\\R^n$, as \n\\textbf{Proposition \\ref{leb}} would not guarantee that $H^n$ gives a non-zero value. 
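\n\nBefore turning to such submanifolds, the definition can be sanity-checked on the unit interval, using only \\textbf{Lemma \\ref{gamma_1}} and \\textbf{Lemma \\ref{dim}}:\n\n\\begin{exa}\n\tBy \\textbf{Lemma \\ref{gamma_1}}, $H^1([0,1]) = m_1([0,1]) = 1$, which is positive and finite. \\textbf{Lemma \\ref{dim}} then gives $H^\\alpha([0,1]) = 0$ for all $\\alpha > 1$. On the other hand, $H^\\alpha([0,1]) = 0$ for some $\\alpha \\leqslant 1$ would force $H^1([0,1]) = 0$ (directly if $\\alpha = 1$, and by \\textbf{Lemma \\ref{dim}} if $\\alpha < 1$), a contradiction. Hence $\\dim_H([0,1]) = \\inf \\{ \\alpha \\geqslant 0 : H^\\alpha([0,1]) = 0 \\} = 1$, matching the usual notion of dimension.\n\\end{exa}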
In the next example, we compute\nthe Hausdorff dimension of the submanifold $S^1$ (the unit circle in $\\R^2$), and the following lemma proves useful to do so:\n\n\\begin{lem}\\label{conn}\n\tLet $(X, \\rho)$ be a metric space and let $A \\subseteq X$ be a \\textit{connected} set. Then $H^1(A) \\geqslant \\diam(A)$.\n\\end{lem}\n\\begin{proof}\n\t(Adapted from \\cite{Sem})\n\t\\bigskip\n\t\n\t\\noindent\n\tFor each $a \\in X$, consider the map $d_a : X \\to \\R$ defined by $d_a(x) = \\rho(a,x)$. We can see that for $x,y \\in X$\n\t(assuming WLOG $\\rho(a,x) \\geqslant \\rho(a,y)$):\n\t\\[ |d_a(x) - d_a(y)| = |\\rho(a,x) - \\rho(a,y)| = \\rho(a,x) - \\rho(a,y) \\leqslant \\rho(a,y) + \\rho(x,y) - \\rho(a,y) = \\rho(x,y) \\]\n\t\n\t\\noindent\n\twhere the inequality comes from the triangle inequality on $\\rho$. This shows that $d_a$ is 1-Lipschitz, thus (in particular) continuous. \n\tWe hence get that:\n\t\\[ \\diam(d_a(U)) = \\sup\\{ |d_a(x)-d_a(y)| : x,y \\in U \\} \\leqslant \\sup\\{ \\rho(x,y) : x,y \\in U \\} = \\diam(U) \\]\n\t\n\t\\noindent\n\tfor any $U \\subseteq X$, which implies for any $\\delta > 0$:\n\t\\[ H_\\delta^1(d_a(A)) = \\inf\\left\\{ \\sum_{i \\geqslant 1} \\diam(J_i) : d_a(A) \\subseteq \\bigcup_{i \\geqslant 1} J_i, \\; \\diam(J_i) < \\delta \\right\\} \\]\n\t\\[ \\leqslant \\inf\\left\\{ \\sum_{i \\geqslant 1} \\diam(d_a(U_i)) : A \\subseteq \\bigcup_{i \\geqslant 1} U_i, \\; \\diam(d_a(U_i)) \\leqslant \n\t\\diam(U_i) < \\delta \\right\\} \\]\n\t\\[ \\leqslant \\inf\\left\\{ \\sum_{i \\geqslant 1} \\diam(U_i) : A \\subseteq \\bigcup_{i \\geqslant 1} U_i, \\; \\diam(U_i) < \\delta \\right\\} \n\t= H_\\delta^1(A) \\]\n\t\n\t\\noindent\n\tso that $H^1(d_a(A)) \\leqslant H^1(A)$ for all $a \\in X$. Finally since $d_a$ is continuous and $A$ is connected, it follows that\n\t$d_a(A) \\subseteq \\R$ is connected (i.e. an interval), which means $H^1(d_a(A)) = \\gamma_1 m_1(d_a(A)) = \\gamma_1 \\diam(d_a(A))$.\n\tBy \\textbf{Lemma \\ref{gamma_1}}, $\\gamma_1 = 1$ and so $H^1(d_a(A)) = \\diam(d_a(A)) \\leqslant H^1(A)$ for all $a \\in X$.\n\tFinally:\n\t\\[ \\sup_{a \\in A} \\diam(d_a(A)) = \\sup_{a \\in A} \\sup \\{ |d_a(x)-d_a(y)| : x,y \\in A \\}\n\t= \\sup_{a \\in A} \\sup \\{ |\\rho(x,a)-\\rho(y,a)| : x,y \\in A \\} \\]\n\t\\[ = \\sup_{a \\in A} \\sup \\{ \\rho(x,a)-\\rho(y,a) : x,y \\in A \\} \n\t\\geqslant \\sup \\{ \\rho(x,y)-\\rho(y,y) : x,y \\in A \\}\\]\n\t\\[ = \\sup \\{ \\rho(x,y) : x,y \\in A \\} = \\diam(A)   \\]\n\n\t\\noindent\n\tso that $\\diam(A) \\leqslant \\sup_{a \\in A} \\diam(d_a(A)) \\leqslant H^1(A)$, which completes the proof.\n\\end{proof}\n\n\\begin{exa}\n\tConsider the unit circle $S^1 = \\{ x \\in \\R^2 : \\norm{x}_2 = 1 \\}$.\n\tWe claim that its Hausdorff dimension $\\dim_H(S^1) = 1$. We know that $S^1$ has $\\R^2$-Lebesgue measure zero, so that:\n\t\\[ H^2(S^1) = \\gamma_2 m_2(S^1) = \\gamma_2 \\cdot 0 = 0 \\]\n\t\n\t\\noindent\n\tBut this only tells us that $\\dim_H(S^1) \\leqslant 2$. To show that $\\dim_H(S^1) = 1$, we observe that\n\t$S^1 \\subseteq \\R^2$ is \\textit{connected}, and so by \\textbf{Lemma \\ref{conn}}:\n\t\\[ H^1(S^1) \\geqslant \\diam(S^1) = 2 > 0 \\]\n\t\n\t\\noindent\n\tThis rules out $H^\\alpha(S^1) = 0$ for any $\\alpha \\leqslant 1$. Moreover, covering $S^1$ by $N$ arcs of length $2\\pi/N$ shows that $H^1(S^1) \\leqslant 2\\pi < \\infty$, so \\textbf{Lemma \\ref{dim}} gives $H^\\alpha(S^1) = 0$ for all $\\alpha > 1$, and hence $\\dim_H(S^1) = 1$.\n\\end{exa}\n\n\\newpage\n\n\\begin{thebibliography}{MMMMM} \n\\bibitem[Folland]{Fol} Folland, G. B. (1999). \\textit{Real analysis: Modern techniques and their applications}. New York: Wiley.\n\\bibitem[Royden]{Roy} Royden, H. L. \\& Fitzpatrick, P. (2010). \\textit{Real analysis}. 
Boston: Prentice Hall.\n\\bibitem[Semmes]{Sem} Semmes, S. (2010). \\textit{Some elementary aspects of Hausdorff measure and dimension}. arXiv:1008.2637v1.\n\\end{thebibliography}\n\n\\bibliographystyle{plain}\n\\bibliography{template}\n\n\\end{document}\n", "meta": {"hexsha": "6317215484ffec12b19d36998184b4c5bad1d29e", "size": 24918, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW Template 2.tex", "max_stars_repo_name": "Prabhav10/LaTeX", "max_stars_repo_head_hexsha": "ac256bb388682faf6a609bd5b7337a2a147a4265", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 52, "max_stars_repo_stars_event_min_datetime": "2021-12-23T17:48:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:01:42.000Z", "max_issues_repo_path": "HW Template 2.tex", "max_issues_repo_name": "Prabhav10/LaTeX", "max_issues_repo_head_hexsha": "ac256bb388682faf6a609bd5b7337a2a147a4265", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-25T23:17:11.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-26T10:21:59.000Z", "max_forks_repo_path": "HW Template 2.tex", "max_forks_repo_name": "Prabhav10/LaTeX", "max_forks_repo_head_hexsha": "ac256bb388682faf6a609bd5b7337a2a147a4265", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-12-27T07:26:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-06T15:20:50.000Z", "avg_line_length": 50.5436105477, "max_line_length": 151, "alphanum_fraction": 0.6361264949, "num_tokens": 9969, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300048, "lm_q2_score": 0.7799928900257127, "lm_q1q2_score": 0.5702204935237072}}
{"text": "\\lab{Policy Function Iteration}{Policy Function Iteration}\n\\objective{Iterative methods can be powerful ways to solve dynamic optimization problems without computing the exact solution.\nOften we can iterate very quickly to the true solution, or at least within some $\\epsilon$ error of the solution.\nThese methods are significantly faster than computing the exact solution using dynamic programming.\nWe demonstrate two iterative methods, value iteration (VI) and policy iteration (PI), and use them to solve a deterministic Markov decision process.\n}\n\n\\section*{Dynamic Optimization}\n\nMany dynamic optimization problem take the form of a \\emph{Markov decision process}.\nThey are formulated as follows.\n\n$\\mathbb{T}$ is a set of discrete time periods.\nIn this lab, $\\mathbb{T} = {0,1,\\ldots, T}$.\n$S$ is the set of possible states.\nThe set of allowable actions for each state $s$ is $A_s$.\n$s_{t+1}=g(s_t,a_t)$ is a transition function that describes how the state changes, based on the previous state and action.\nThe reward $u_t(s,a)$ is the reward for taking action $a$ while in state $s$ at time $t$.\nIf the Markov process is stochastic, $p_t(s,a)$ is probability of taking action $a$ at time $t$ while in state $s$.\nA deterministic Markov process has $p_t(s,a) = 1$ $\\forall s,a$.\nThe time discount factor $\\beta \\in [0,1]$ determines how much less a reward is worth in the future.\nThen the dynamic optimization problem is\n\n\\begin{align}\n\\label{eq:policyiter-dynopt1}\n\\max_a  & \\sum_{t=0}^T \\beta^t u(s_t,a_t) \\\\\n\\mbox{subject to } & s_{t+1}= g(s_t,a_t)\\ \\forall t.\n\\end{align}\n\nThe cake eating problem described in the previous lab follows this format where $S$ consists of the possible amounts of remaining cake ($\\frac{i}{W}$), $c_t$ is the amount of cake we can eat, and the amount of cake remaining $s_{t+1}=g(s_t,a_t)$ is $w_t-c_t$, the amount of cake we have, $w_i$, minus the cake we eat, $c_t$.\n\n\n\\subsection*{Moving on a Grid}\n\nNow consider an $N \\times N$ grid.\nAssume that a robot moves around the grid, one space at a time, until it reaches the lower right hand corner and stops.\nEach square is a state, $S = \\{0, 1, \\ldots, N^2-1\\}$, and the set of actions is $\\{Left, Down, Right, Up\\}$.\nFor this lab, $Left = 0$, $Down = 1$, $Right = 2$, and $Up = 3$.\n\n\\begin{warn}\nIt is important to remember that the actions do not correspond to the states the robot is in after the action.\nWhen the robot is in state $0$ and takes action $1$, he is then in state $2$.\n\\end{warn}\n\n$A_s$ is the set of actions that keep the robot on the grid.\nIf the robot is in the top left hand corner, the only allowed actions are $Down$ and $Right$ so $A_0 = \\{1,2\\}$.\nThe transition function $g(s_t,a_t) = s_{t+1}$ can be explicitly defined for each $s, a$ where $s_{t_1}$ is the new state after moving.\nFor this lab, we define a dictionary $P$ to represent the decision process.\nP[state][action]=[(prob, nextstate, reward, is\\_terminal),...]\nThis dictionary contains all of the information about the states, actions, probabilities, and rewards.\nThe final entry in the tuple, is\\_terminal, indicates if the new state is a stopping point.\n\nLet $N=2$ and label the squares as displayed below.\nIn this example, we define the reward to be $-1$ if the robot moves into 2, $-1$ if the robot moves into $0$ from $1$, and $1$ when it reaches the end, $3$.\nThis simplifies to $u_t(s,a) = u(a)$ except when moving into state $0$.\nSimilarly, $p_t(s,a) = p(a) = 
\begin{center}
\begin{tabular}{|c|c|}
\hline
0 & 1 \\ \hline
\cellcolor{red!20}2 & \cellcolor{green!20}3 \\ \hline
\end{tabular}
\end{center}

All of this information is encapsulated in $P$.
We define $P[state][action]$ for all states and actions, even those that are not possible.
This simplifies coding the algorithm, but is not necessary.

\begin{center}
\begin{tabular}{ll}
\li{P[0][0] = [(0, 0, 0, False)]}
    & \li{P[2][0] = [(0, 2, -1, False)]}\\
\li{P[0][1] = [(1, 2, -1, False)]}
    & \li{P[2][1] = [(0, 2, -1, False)]}\\
\li{P[0][2] = [(1, 1, 0, False)]}
    & \li{P[2][2] = [(1, 3, 1, True)]}\\
\li{P[0][3] = [(0, 0, 0, False)]}
    & \li{P[2][3] = [(1, 0, 0, False)]}\\
\li{P[1][0] = [(1, 0, -1, False)]}
    & \li{P[3][0] = [(0, 0, 0, True)]} \\
\li{P[1][1] = [(1, 3, 1, True)]}
    & \li{P[3][1] = [(0, 0, 0, True)]}\\
\li{P[1][2] = [(0, 0, 0, False)]}
    & \li{P[3][2] = [(0, 0, 0, True)]}\\
\li{P[1][3] = [(0, 0, 0, False)]}
    & \li{P[3][3] = [(0, 0, 1, True)]}
\end{tabular}
\end{center}
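The same table can be written down directly as a nested Python dictionary.
The following is a minimal sketch containing exactly the entries above:

\begin{lstlisting}
# The 2x2 grid example as a nested dictionary.
# P[state][action] = [(prob, nextstate, reward, is_terminal), ...]
P = {
    0: {0: [(0, 0, 0, False)],  1: [(1, 2, -1, False)],
        2: [(1, 1, 0, False)],  3: [(0, 0, 0, False)]},
    1: {0: [(1, 0, -1, False)], 1: [(1, 3, 1, True)],
        2: [(0, 0, 0, False)],  3: [(0, 0, 0, False)]},
    2: {0: [(0, 2, -1, False)], 1: [(0, 2, -1, False)],
        2: [(1, 3, 1, True)],   3: [(1, 0, 0, False)]},
    3: {0: [(0, 0, 0, True)],   1: [(0, 0, 0, True)],
        2: [(0, 0, 0, True)],   3: [(0, 0, 1, True)]},
}
\end{lstlisting}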
We define the \emph{value function} $V(s)$ to be the maximum possible reward of starting in state $s$.
Then using Bellman's optimality equation,
\begin{equation}
\label{eq:policyiter-val-func}
V(s) = \max_{a \in A_s} \{\Sigma[p(a)\left( u(a) + \beta V(a)\right)]\}.
\end{equation}

Here $V(a)$ is shorthand for $V(s')$, where $s'$ is the state reached by taking action $a$ in state $s$.
The summation occurs when $p(s,a)<1$, so that $P[s][a]$ consists of more than one tuple.
For example, if the robot is in the top left corner and moves right, we could have that the probability the robot actually moves right is $.5$.
In this case, $P[0][2] = [(.5, 1, 0, False), (.5, 2, -1, False)]$.
This will occur later in the lab.

\section*{Value Iteration}

In the previous lab, we used dynamic programming to solve for the value function.
This was a recursive method where we calculated all possible values for each state and time period.
\emph{Value iteration} is another algorithm that solves for the value function by taking an initial value function and iteratively computing new ones.
Since we are not calculating all possible values, it is typically faster than dynamic programming.

\subsection*{Convergence of Value Iteration}

A function $f$ that is a contraction mapping has a \emph{fixed point} $p$ such that $f(p) = p$.
Blackwell's contraction theorem can be used to show that Bellman's equation is a ``fixed point'' (it actually acts more like a fixed function in this case)
for an operator $T: L^{\infty}(X;\mathbb{R}) \to L^{\infty}(X;\mathbb{R})$, where $L^{\infty}(X;\mathbb{R})$ is the set of all bounded functions:
\begin{equation}
\label{eq:policyiter-blackwell}
[T(f)](s) = \max_{a \in A_s} \{\Sigma [p(a)\left( u(a) + \beta f(a)\right)]\}.
\end{equation}
It can be shown that \eqref{eq:policyiter-val-func} is the fixed ``point'' of our operator $T$.
A result of contraction mappings is that there exists a unique solution to \eqref{eq:policyiter-blackwell}.
Moreover, the fixed point of a contraction mapping can be found by applying the operator repeatedly to any initial guess, so we can converge to the true value function by iterating
\begin{equation}
\label{eq:policyiter-val-iteration}
V_{k+1}(s) = [T(V_k)](s) = \max_{a \in A_s} \{\Sigma[p(a)\left( u(a) + \beta V_k(a)\right)]\},
\end{equation}
where an initial guess for $V_0(s)$ is used.
As $k \to \infty$, it is guaranteed that $V_k(s) \to V^*(s)$.
Because of the contraction mapping, if $V_{k+1}(s) = V_k(s) \, \, \forall \, s$, we have found the true value function, $V^*(s)$.
Using this information, we define the value iteration algorithm to find $V^*$:

\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{Value Iteration Function}{$P, S, A, \beta, \varepsilon$, maxiter}
    \State $V_0 \gets [V_0(s_0),V_0(s_1),\ldots,V_0(s_N)] $
     \Comment{Common choice is $V_0(s_i)=0$}
    \For{$k=1,2,\dots,$\ \li{maxiter}}
        \Comment Iterate only \li{maxiter} times at most.
        \For{$s \in S$}
             \State $V_{k+1}(s) = \max_{a \in A_s}\{\Sigma[p(a)(u(a) + \beta V_k(a))]\}$
        \EndFor
        \If{ $||V_{k+1} - V_k|| < \varepsilon$}
            \State \texttt{break}
             \Comment{Stop iterating if the approximation stops changing enough.}
        \EndIf
    \EndFor
    \State \pseudoli{return} $V_k$
\EndProcedure
\end{algorithmic}
\caption{Value Function Iteration}
\label{alg:ValueIteration}
\end{algorithm}

Let $V_0 = [0,0,0,0]$ and $\beta = 1$.
We calculate $V_1(s)$ from the example above.
\begin{align*}
V_1(0) &= \max_{a \in A_0} \{\Sigma[p(a)(u(a)+V_0(a))]\} \\
&= \max \{p(1)(u(1)+V_0(1)), p(2)(u(2)+V_0(2))\} \\
&= \max \{1(-1+0), 1(0+0)\} \\
&= \max \{-1,0\} \\
&= 0\\
V_1(1) &= \max \{p(0)(u(0)+V_0(0)), p(1)(u(1)+V_0(1))\}\\
&= \max \{1(-1+0), 1(1+0)\} \\
&= \max \{-1,1\} \\
&= 1
\end{align*}

Calculating $V_1(2)$ and $V_1(3)$ gives $V_1 = [0, 1, 1, 0]$.
Repeating the process, $V_2 = [1, 1, 1, 0]$, which is the solution.
It means that the maximum reward the robot can achieve by starting on square $i$ is $V_2[i]$.

Most iterative algorithms have a \li{maxiter} parameter that will terminate the algorithm after \li{maxiter} iterations regardless of whether or not it has converged.
This is because even though we have guaranteed convergence, we might have a convergence rate that is too slow to be useful.
However, generally this algorithm will converge much faster than computing the true value function using dynamic programming.

\begin{problem}
\label{prob:policyiter-value1}
Write a function called \li{value_iteration()} that will accept a dictionary $P$ representing the decision process, the number of states, the number of actions, a discount factor $\beta \in (0,1]$ defaulting to $1$,
a tolerance amount $\epsilon$ defaulting to $10^{-8}$, and a maximum number of iterations \li{maxiter} defaulting to $3000$.
Perform value iteration until $\|V_{k+1} - V_{k}\| < \epsilon$ or $k > $ \li{maxiter}.
Return the final vector representing $V^*$ and the number of iterations.
Test your code on the example given above.
\end{problem}
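The following is a minimal sketch of what such a function might look like, assuming the nested-dictionary format for \li{P} shown earlier.
The name and signature mirror the problem statement, but this is only one possible implementation:

\begin{lstlisting}
import numpy as np

def value_iteration(P, nS, nA, beta=1.0, tol=1e-8, maxiter=3000):
    """Sketch of Algorithm 1: iterate the Bellman operator on V."""
    V = np.zeros(nS)                          # initial guess V_0 = 0
    for k in range(maxiter):
        V_new = np.zeros(nS)
        for s in range(nS):
            # Expected value of each action: sum p*(u + beta*V[s'])
            # over the tuples (p, s', u, is_terminal) in P[s][a].
            q = [sum(p * (u + beta * V[s2]) for p, s2, u, _ in P[s][a])
                 for a in range(nA)]
            V_new[s] = max(q)
        if np.linalg.norm(V_new - V) < tol:   # stop once V stops changing
            V = V_new
            break
        V = V_new
    return V, k + 1
\end{lstlisting}

On the $2\times 2$ example, \li{value_iteration(P, 4, 4)} should return the vector $[1, 1, 1, 0]$ computed above, along with the iteration count.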
\subsection*{Calculating the Policy}

While knowing the maximum expected value is helpful, it is usually more important to know the policy that generates the most value.
Value iteration tells the robot what reward he can expect, but not how to get it.
The \emph{policy function} $\pi$ maps each state to an action: $\pi(s)$ is the action we should take while in state $s$ to maximize reward, and the policy vector $\mathbf{c}$ stores $\pi(s)$ for each state.
We can modify the Bellman equation using $V^*$ to find $\pi$:
\begin{equation}
\label{eq:policyiter-pol_func}
\pi(s) = \argmax_{a \in A_s} \{\Sigma [p(a)(u(a) + \beta V^*(a))]\}
\end{equation}

Using value iteration, we found $V^*  = [1, 1, 1, 0]$ in the example above.
We find $\pi(0)$ by
\begin{align*}
\pi(0) &= \argmax_{a \in \{1,2\}} \{p(1)(u(1) + V^*(1)), p(2)(u(2)+V^*(2))\} \\
&= \argmax \{1(-1+1), 1(0+1)\} \\
&= \argmax\{0, 1\}\\
&= 2.
\end{align*}

So when the robot is in state $0$, he should take action $2$, moving $Right$.
This avoids the $-1$ penalty for moving $Down$ into square $2$.
Similarly,
\begin{align*}
\pi(1) &= \argmax_{a \in \{0,1\}} \{1(-1+1), 1(1+0)\} = \argmax \{0, 1\} = 1.
\end{align*}
The policy corresponding to the optimal reward is $[2,1,2,0]$.
The robot should move to square $3$ if possible, avoiding $2$ because it has a negative reward.
Since $3$ is terminal, it does not matter what $\pi(3)$ is.
We set it to $0$ for convenience.


\begin{problem}
\label{prob:policyiter-value2}
Write a function called \li{extract_policy()} that will accept a dictionary $P$ representing the decision process, the number of states, the number of actions, an array representing the value function, and a discount factor $\beta \in (0,1]$ defaulting to $1$.
Return the policy vector corresponding to $V^*$.
Test your code on the example with $\beta = 1$.
\end{problem}
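A matching sketch for this problem, under the same assumptions as the \li{value_iteration} sketch above:

\begin{lstlisting}
import numpy as np

def extract_policy(P, nS, nA, v, beta=1.0):
    """Sketch: recover the greedy policy from a value function v."""
    policy = np.zeros(nS, dtype=int)
    for s in range(nS):
        # One-step lookahead using v, as in the equation for pi(s).
        q = [sum(p * (u + beta * v[s2]) for p, s2, u, _ in P[s][a])
             for a in range(nA)]
        policy[s] = int(np.argmax(q))
    return policy
\end{lstlisting}

With $V^* = [1, 1, 1, 0]$, this sketch returns the policy $[2, 1, 2, 0]$ found above; \li{np.argmax} breaks the tie in state $2$ in favor of the smaller action.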
\section*{Policy Iteration}

For dynamic programming problems, it can be shown that the convergence rate of value function iteration depends on the discount factor $\beta$.
As $\beta$ approaches $1$, the number of iterations increases dramatically.
In practice, $\beta$ is usually close to $1$, which means this algorithm can converge slowly.
In value iteration, we used an initial guess for the value function, $V_0$, and used \eqref{eq:policyiter-val-iteration} to iterate towards the true value function.
Once we achieved a good enough approximation for $V^*$, we recovered the true policy function $\pi$.
Instead of iterating on our value function, we can make an initial guess for the policy function, $\pi_0$, and use it to iterate toward the true policy function; this approach is called \emph{policy iteration}.
We do so by taking advantage of the definition of the value function.

That is, given a specific policy function $\pi_k$, we can modify \eqref{eq:policyiter-val-func} by assuming that $\pi_k(s)$ is the optimal choice in state $s$.
This process, called \emph{policy evaluation}, evaluates the value function for a given policy.
\begin{equation}
\label{eq:policyiter-val_from_policy}
V_k(s) = \max_{a \in A_s} \{\Sigma[p(a)(u(a) + \beta V_k(a))]\} = \Sigma[p(\pi_k(s))(u(\pi_k(s)) + \beta V_k(\pi_k(s)))]
\end{equation}
The last equality holds because in state $s$, the robot should choose the action that maximizes reward, which is $\pi_k(s)$ by definition.


\begin{problem}
\label{prob:policyiter-value3}
Write a function called \li{compute_policy_v()} that accepts a dictionary $P$ representing the decision process, the number of states, the number of actions, an array representing a policy, a discount factor $\beta \in (0,1]$ defaulting to $1$, and a tolerance amount $\epsilon$ defaulting to $10^{-8}$.
Return the value function corresponding to the policy.
Test your code on the policy vector generated from \li{extract_policy} for the example.
The result should be the same value function array from \li{value_iteration}.
\end{problem}

Now that we have the value function for our policy, we can use it to find a better policy.
This step, called \emph{policy improvement}, uses the same method we used in value iteration to find the policy.

Thus, given an initial guess for our policy function, $\pi_0$, we calculate the corresponding value function using \eqref{eq:policyiter-val_from_policy}, and then use \eqref{eq:policyiter-pol_func} to improve our policy function.
The algorithm for policy function iteration can be summarized as follows:


\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{Policy Iteration Function}{$P, nS, nA, \beta, tol,$ maxiter}
    \State $\pi_0 \gets [\pi_0(s_0),\pi_0(s_1),\ldots,\pi_0(s_N)] $
     \Comment{Common choice is an arbitrary allowable action for each state}
    \For{$k=1,2,\dots,$\ \li{maxiter}}
        \Comment Iterate only \li{maxiter} times at most.
        \For{$s \in S$}
        \Comment{Policy evaluation.}
            \State $V_{k}(s) = \Sigma[p(\pi_k(s))(u(\pi_k(s)) + \beta V_k(\pi_k(s)))]$
            \Comment{compute\_policy\_v.}
        \EndFor
        \For{$s \in S$}
        \Comment{Policy improvement.}
            \State $\pi_{k+1}(s) =\argmax_{a \in A_s} \{\Sigma[p(a)(u(a) + \beta V_k(a))]\}$
            \Comment{extract\_policy.}
        \EndFor
        \If{ $||\pi_{k+1} - \pi_k|| < tol$}
            \State \texttt{break}
             \Comment{Stop iterating if the policy doesn't change enough.}
        \EndIf
    \EndFor
    \State \pseudoli{return} $V_k, \pi_{k+1}$
\EndProcedure
\end{algorithmic}
\caption{Policy Iteration}
\label{alg:PolicyIteration}
\end{algorithm}

\begin{problem}
\label{prob:policyiter-value4}
Write a function called \li{policy_iteration()} that will accept a dictionary $P$ representing the decision process, the number of states, the number of actions, a discount factor $\beta \in (0,1]$ defaulting to $1$, a tolerance amount $\epsilon$ defaulting to $10^{-8}$, and a maximum number of iterations \li{maxiter} defaulting to $3000$.
Perform policy iteration until $\|\pi_{k+1} - \pi_{k}\| < \epsilon$ or $k > $ \li{maxiter}.
Return the final vector representing $V_k$, the optimal policy $\pi_k$, and the number of iterations.
Test your code on the example given above and compare your answers to the results from Problems \ref{prob:policyiter-value1} and \ref{prob:policyiter-value2}.
\end{problem}
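One possible shape for Problems \ref{prob:policyiter-value3} and \ref{prob:policyiter-value4}, sketched under the same assumptions as before.
Here policy evaluation is done by fixed-point iteration (one of several valid approaches), and the improvement step reuses the \li{extract_policy} sketch:

\begin{lstlisting}
import numpy as np

def compute_policy_v(P, nS, nA, policy, beta=1.0, tol=1e-8):
    """Sketch of policy evaluation: iterate
    V(s) = sum p*(u + beta*V(s')) with a = policy[s] until V settles."""
    v = np.zeros(nS)
    while True:  # a contraction for beta < 1; also terminates on this example
        v_new = np.array([
            sum(p * (u + beta * v[s2]) for p, s2, u, _ in P[s][policy[s]])
            for s in range(nS)])
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new

def policy_iteration(P, nS, nA, beta=1.0, tol=1e-8, maxiter=3000):
    """Sketch of Algorithm 2: alternate evaluation and improvement."""
    policy = np.zeros(nS, dtype=int)          # arbitrary initial policy
    for k in range(maxiter):
        v = compute_policy_v(P, nS, nA, policy, beta, tol)   # evaluation
        new_policy = extract_policy(P, nS, nA, v, beta)      # improvement
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return v, policy, k + 1
\end{lstlisting}

Since the policies here are integer vectors, testing $\|\pi_{k+1} - \pi_k\| < \epsilon$ amounts to testing exact equality.
On the $2\times 2$ example, this sketch reproduces $V^* = [1, 1, 1, 0]$ and the policy $[2, 1, 2, 0]$.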
\section*{The Frozen Lake Problem}

For the rest of the lab, we will be using the OpenAI Gym environments.
Gym can be installed using conda or pip.
\begin{lstlisting}[language=Bash]
$ pip install gym[all]
\end{lstlisting}

In the Frozen Lake problem, you and your friends tossed a frisbee onto a mostly frozen lake.
The lake is divided into an $N \times N$ grid where the top left hand corner is the start, the bottom right hand corner is the end, and the other squares are either frozen or holes.
To retrieve the frisbee, you must successfully navigate around the melted ice without falling.
The possible actions are left, right, up, and down.
Since ice is slippery, you won't always move in the intended direction.
If you fall, your reward is $0$.
If you succeed, your reward is $1$.
There are two scenarios: $N=4$ and $N=8$.
To run the $4\times 4$ scenario, use \li{env_name='FrozenLake-v0'}.
For the $8\times 8$ scenario, use \li{env_name='FrozenLake8x8-v0'}.

\subsection*{Using Gym}

To use gym, we import it and create an environment based on the built-in gym environment.
The FrozenLake environment has $3$ important attributes: $P$, $nS$, and $nA$.
$P$ is a dictionary where $P[s][a]=[(probability, nextstate, reward, is\_terminal), \ldots]$.
Notice that this is the same $P$ we used in the previous problems.
$nS$ and $nA$ are the number of states and actions, respectively.

We can calculate the optimal policy using value iteration or policy iteration.
Since the ice is slippery, this policy will not always result in a reward of $1$.
The gym environments have built-in functions that allow us to simulate each step of the environment.
Before running a scenario in gym, always put it in the starting state by calling \li{env.reset()}.
To simulate moving, call \li{env.step()}.

\begin{lstlisting}
>>> import gym
>>> from gym import wrappers
>>> env_name = 'FrozenLake-v0'
>>> env = gym.make(env_name).env
>>> number_of_states = env.nS
>>> obs = env.reset()
>>> obs, reward, done, _ = env.step(int(policy[obs]))
\end{lstlisting}

The step function returns four values: observation, reward, done, info.
The observation is an environment-specific object representing the observation of the environment.
For FrozenLake, this is the current state.
When we step, or take an action, we get a new observation, or state, as well as the reward for taking that action.
If we fall into a hole or reach the frisbee, the simulation is over, so we are done.
The info value is a dictionary of diagnostic information.
It will not be used in this lab.


\begin{problem}
\label{prob:policyiter-value5}
Write a function \li{frozen_lake()} that runs \li{value_iteration} and \li{policy_iteration} on FrozenLake.
It should accept a boolean \li{basic_case} defaulting to \li{True} that indicates whether to run the $4\times 4$ or $8\times 8$ scenario and an integer $n$ defaulting to $1000$ that indicates how many times to run the simulation.
Calculate the value function and policy for the environment using both value iteration and policy iteration.
Return the policy generated by value iteration and the policy and value function generated by policy iteration.
For now, set the mean total rewards to $0$ and return them as well (they will be computed in Problem \ref{prob:policyiter-value6}).
\end{problem}

\begin{problem}
\label{prob:policyiter-value6}
Write a function \li{run_simulation()} that takes in the environment \li{env}, a policy \li{policy}, a discount factor $\beta$ defaulting to $1$, and a boolean \li{render} defaulting to \li{False}.
Calculate the total reward for the policy for one simulation using \li{env.reset()} and \li{env.step()}.
Stop the simulation when \li{done} is \li{True}.
(Hint: When calculating the reward, discount the reward earned at step $k$ by $\beta^k$.)

Modify \li{frozen_lake()} to run \li{run_simulation} for both the value iteration policy and the policy iteration policy $n$ times.
Return the policy generated by value iteration, the mean total reward for the policy generated by value iteration, the policy generated by policy iteration, and the mean total reward for the policy generated by policy iteration.
\end{problem}
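A sketch of one way \li{run_simulation()} might look, using the classic gym API shown above (where \li{env.step} returns the four values \li{obs, reward, done, info}):

\begin{lstlisting}
def run_simulation(env, policy, beta=1.0, render=False):
    """Sketch of Problem 6: play one episode following `policy`
    and return the discounted total reward."""
    obs = env.reset()                  # start from the initial state
    total, k, done = 0.0, 0, False
    while not done:
        if render:
            env.render()
        obs, reward, done, _ = env.step(int(policy[obs]))
        total += beta**k * reward      # reward at step k is discounted by beta^k
        k += 1
    return total
\end{lstlisting}

The mean total reward in \li{frozen_lake()} would then be the average of $n$ such episodes for each policy.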
%following.\n%\\begin{equation}\n%\\label{eq:policyiter-bellman2}\n%V(w_i) = \\max_{w_j \\in [0,w_i]} u(w_i-w_j) + \\beta V(w_j).\n%\\end{equation}\n%\n%Notice that \\eqref{eq:policyiter-bellman2} is similar to the value function given in the dynamic programming lab.\n%%Note that \\eqref{eq:policyiter-bellman1} requires finding $c$, how much cake to consume in each period.\n%%The variable $c$ is commonly described as the \\emph{action} to take when we have $w_i$ cake.\n%\n%\n%\\subsection*{Convergence of Value Iteration}\n%\n%A function $f$ that is a contraction mapping has a fixed point $p$ such that $f(p) = p$.\n%Blackwell's contraction theorem can be used to show that Bellman's equation is a ``fixed point'' (it actually acts more like a fixed function in this case)\n%for an operator $T: L^{\\infty}(X;\\mathbb{R}) \\to L^{\\infty}(X;\\mathbb{R})$ where $L^{\\infty}(X;\\mathbb{R})$ is the set of all bounded functions:\n%\\begin{equation}\n%\\label{eq:policyiter-blackwell}\n%[T(f)](w) = \\max_{w_j \\in [0,w]} u(w-w_j) + \\beta f(w_j).\n%\\end{equation}\n%It can be shown that (\\ref{eq:policyiter-bellman2}) is the \"fixed point\" of our operator $T$.\n%A result of contraction mappings is that there exists a unique solution to (\\ref{eq:policyiter-blackwell}).\n%\n%A powerful property of contraction mappings is that the fixed point can be found by applying the function repeatedly to some initial point in our domain.\n%That is, if $f:X \\to X$ is our contraction mapping, with fixed point $p \\in X$, we can find $p$ by randomly choosing $x \\in X$ and repeatedly applying the function $f$ to our point.\n%This can be expressed as\n%\\begin{align*}\n%f^{M}(x) = f \\circ f \\cdots \\circ f(x) = p\n%\\end{align*}\n%for some M.  Here M is dependent on both the initial guess $x$ and the discount factor of the contraction mapping, which correspond to $V_0$ and $\\beta$ in dynamic optimization.\n%\n%In the case of dynamic optimization, this implies that we can converge to the true value function $V^*(w)$ by using the following equation:\n%\\begin{equation}\n%\\label{eq:policyiter-val_iteration}\n%V_{k+1}(w_i) = [T(V_k)](w_i) = \\max_{w_j \\in [0,w_i]} u(w_i-w_j) + \\beta V_k(w_j) \\, \\, \\forall \\, w_i,\n%\\end{equation}\n%\n%where an initial guess for $V_0(w)$ is used.\n%As $k \\to \\infty$, it is guaranteed that $(V_k(w)) \\to V^*(w)$.\n%Because of the contraction mapping, if $V_{k+1}(w) = V_k(w) \\, \\, \\forall \\, w$, we have found the true value function, $V^*(w)$.\n%Using this information, we define the value iteration algorithm to find $V^*$:\n%\n%\n%\n%\n%\\begin{algorithm}[H]\n%\\begin{algorithmic}[1]\n%\\Procedure{Value Iteration Function}{$\\beta, N, W, u(x), \\varepsilon$, maxiter}\n%    \\State $w \\gets [0, \\frac{W}{N},\\ldots, W]$\n%      \\Comment{Divide $W$ into $N$ equally sized pieces}\n%    \\State $V_0 \\gets [V_0(w_0),V_0(w_1),\\ldots,V_0(w_N)] $\n%     \\Comment{Common choice is $V_0(w_i)=u(w_i)$}\n%    \\For{$i=1,2,\\dots,$\\ \\li{maxiter}}\n%        \\Comment Iterate only \\li{maxiter} times at most.\n%        \\For{$w_i \\in w$}\n%            \\State $V_{k+1}(w_i) = \\max_{w_j \\in \\mathbf{w} \\, | \\, w_j \\leq w_i} u(w_i-w_j) + \\beta V_k(w_j)$\n%        \\EndFor\n%        \\If{ $||V_{k+1} - V_k|| < \\varepsilon$}\n%            \\State \\texttt{break}\n%             \\Comment{Stop iterating if the approximation stops changing enough.}\n%        \\EndIf\n%    \\EndFor\n%    \\State \\pseudoli{return} $V_k$\n%\\EndProcedure\n%\\end{algorithmic}\n%\\caption{Value Function 
Iteration}\n%\\label{alg:ValueIteration}\n%\\end{algorithm}\n%\n%Most iterative algorithms have a \\li{max\\_iter} parameter that will terminate the algorithm after \\li{max\\_iter} iterations regardless of whether or not it has converged.\n%This is because even though we have guaranteed convergence, we might have a convergence rate that is too slow to be useful.\n%However, generally this algorithm will converge much faster than computing the true value function using dynamic programming as in Finite Value Iteration.\n%\n%\\begin{warn} % Notation warning\n%Note that $w_j \\leq w_i$, because we can never have more cake at a later period; cake can only be consumed, not created.\n%\\end{warn}\n%\n%\\begin{problem}\n%\\label{prob:policyiter-value1}\n%Write a function called \\li{value_iteration()} that will accept a numpy array representing the initial vector $V_0$, a discount factor $\\beta \\in (0,1)$,\n%the number of states to split $\\mathbf{w}$ into $N$, the amount of initial cake $W$, a utility function $u(x)$,\n%the tolerance amount $\\epsilon$, and the maximum number of iterations \\li{max\\_iter}.\n%Perform value iteration until $\\|V_{k+1} - V_{k}\\| < \\epsilon$ or $k > $ \\li{max\\_iter}.\n%Return the final vector representing $V^*$.\n%\n%It is useful for our function to accept $V_0$ as a parameter instead of calculating an initial guess inside our function, so that we can try different initial states for $V_0$.\n%\n%Test your function with the parameters $N=400$, $\\beta=.995$, $u(x)=\\sqrt{x}$, $W=1$.\n%Try different values for $V_0$ and see if you get the same value for $V^*(W)$ (The value should be approximately 9.4988).\n%% Question to consider:\n%How do different initial guesses for $V_0$ affect the number of iterations required for convergence?\n%\\end{problem}\n%\n%\n%\\subsection*{Policy Vector}\n%\n%The value function $V^*(w_i)$ that we found describes how much utility $w_i$ will yield over time, thus $V^*(W)$ is the optimal value for our problem:\n%\\begin{align*}\n%  V^*(W) = \\max_{c_t}  & \\sum_{t=0}^T \\beta^t u(c_t), \\\\\n%  \\mbox{subject to } & \\sum_{t=0}^T c_t = W ,\\\\\n%  & c_t \\geq 0.\n%\\end{align*}\n%Although $V^*(W)$ is the solution, it is usually more important to know which sequence of $(c_t)_{t=0}^T$ yields the solution.\n%This sequence is known as the \\emph{policy vector} $\\mathbf{c} = [c_0, c_1, \\ldots, c_T]$ that corresponds to eating $c_i$ cake at time $t=i$.\n%\n%If this were a truly infinite problem, $\\mathbf{c}$ would be impossible to calculate; there would be an infinite number of time periods.\n%Fortunately, in the cake-eating problem, we will never have more than $N$ time periods.\n%This happens because it is never optimal to eat $0$ pieces of cake in a single time period.\n%The discount factor $\\beta$ means at least $1$ piece must be eaten at each step.\n%It is important to note that the length of $\\mathbf{c}$ changes depending on what $\\beta$ is.\n%When $\\beta = 0$, for example, only the first time period matters because all the rest will yield zero utility.\n%So when $\\beta = 0$, $T = 0$ and the length of $\\mathbf{c}$ is $1$.\n%As the value of $\\beta$ gets closer to $1$, $T$ increases as well.\n%\n%The policy vector, $\\mathbf{c}$, is found by using the policy function: $\\pi : \\mathbb{R} \\to \\mathbb{R}$ which has the constraint $0 \\leq \\pi(w_i) \\leq w_i \\, \\, \\forall \\, i.$\n%$\\pi(w_i)$ is the amount of cake to save for the next period, given we started with $w_i$ cake.\n%In \\eqref{eq:policyiter-val_iteration}, 
this was represented by $w_j$.\n%\n%Because $\\pi(w_i)$ is the amount of cake saved until next period, we can use $V^*(W)$ and modify the Bellman equation to find $\\pi$:\n%\\begin{equation}\n%\\label{eq:policyiter-pol_func}\n%\\pi(w_i) = \\argmax_{w_j \\in \\mathbf{w} \\, | \\, w_j\\leq w_i} u(w_i-w_j) + \\beta V^*(w_j) \\, \\, \\forall \\, i.\n%\\end{equation}\n%For our purposes, the policy function will be represented as a vector $\\mathbf{\\pi}$.\n%This is convenient because in practice it is infeasible to code a functional representation for $\\pi$.\n%$\\mathbf{\\pi}_i$ dictates how many pieces of cake to save for the next period if we currently have $i$ pieces of cake.\n%\n%Once we have a vectorized representation of the policy function, $\\pi$, we can use it to calculate the policy vector $\\mathbf{c}$ using the relationship:\n%\\begin{align*}\n%  c_t = w^{(t)} - \\pi(w^{(t)})), \\\\\n%  \\mbox{where } w^{(t+1)} = \\pi(w^{(t)}), \\\\\n%  \\mbox{and } w^{(0)} = W.\n%\\end{align*}\n%\n%For example, assume $w = [0,.1,.2,\\ldots,1]$ and $\\pi = [0, 0, 1, 2, 2, 3, 3, 4, 4, 5, 5]$.\n%This means that if we have $7$ pieces of cake remaining, $\\pi(7) = 4$ so we should save $4$ pieces of cake for the next time period.\n%At $t=0$, $w^0=1=w_{10}$, so there are $10$ pieces of cake.\n%$\\pi(10) = 5$, so at time $0$, we want to save $5$ pieces of the cake.\n%Then $c_0=10-5=5$, and we consume half of the cake.\n%To calculate $c_1$, observe that $w^1= \\pi(10)=5$.\n%$\\pi(w^1) = \\pi(5) = 3$.\n%So at time $2$, we eat $5-3=2$ pieces of cake, or $w_3=.2$ of the total cake.\n%The resulting policy vector is $\\mathbf{c} = [0.5,0.2,0.1,0.1,0.1]$.\n%\n%\\begin{warn}\n%$w^{(t)}$ represents a time index, how much cake we have at time $t$, not to be confused with $w_i$, the numeric value of having $i$ pieces.\n%\\end{warn}\n%\n%\\begin{problem}\n%Write a helper function called \\li{extract_policy_vector()} that will accept an array of discretized values $\\mathbf{w}$, and a vector representing a policy function $\\pi$.\n%Return the policy vector $\\mathbf{c}$ that determines how much cake should be eaten at each time step.\n%\n%Test your function with $\\mathbf{w} = [0,.1,.2,\\ldots,1]$ and $\\pi = [0, 0, 1, 2, 2, 3, 3, 4, 4, 5, 5]$, as in the example above.\n%\\end{problem}\n%\n%\\begin{problem}\n%Modify \\li{value_iteration()} to return the true value function $V_{k+1}$ and the corresponding policy vector $\\mathbf{c}$.\n%\\\\(Hint: Use \\eqref{eq:policyiter-pol_func} to find the policy function and then call\n%\\li{extract_policy_vector()} inside of \\li{value_iteration()}).\n%\\end{problem}\n%\n%\\section*{Policy Function Iteration}\n%For infinite horizon dynamic programming problems, it can be shown that value function iteration converges relative to the discount factor, $\\beta$.\n%As $\\beta$ approaches $1$, the number of iterations increases dramatically.\n%As mentioned earlier $\\beta$ is usually close to $1$, which means this algorithm can converge slowly.\n%In Problem \\ref{prob:policyiter-value1} you should have noticed that runtime was significantly longer to run for larger $N$ or\n%$\\beta$ closer to 1.\n%\n%In value iteration, we used an initial guess for the value function, $V_0$ and used (\\ref{eq:policyiter-bellman2}) to iterate towards the true value function.\n%Once we achieved a good enough approximation for $V^*$, we recovered the true policy function $\\pi$.\n%Instead of iterating on our value function, we can instead make an initial guess for the policy function, $\\pi_0$, and use 
this to iterate toward the true policy function, called policy iteration.\n%We do so by taking advantage of the definition of the value function, where we assume that our policy function yields the most optimal result.\n%\n%That is, given a specific policy function $\\pi_k(W)$, we can modify \\eqref{eq:policyiter-bellman2} by assuming that the policy function is the optimal choice.\n%\\begin{align*}\n%  V_k(w_i) = \\max_{w_j\\in [0,w_i]} u(w_i-w_j) + \\beta V_k(w_j) = u(w_i - \\pi_k(w_i)) + \\beta V_k(\\pi_k(w_i)). \\\\\n%\\end{align*}\n%This is because $w_j$ is the amount of cake we should have left over after $t=i$, which is $\\pi(w_i)$.\n%Because the value function is defined recursively,\n%\\begin{equation}\n%  \\label{eq:policyiter-val_from_policy}\n%V_k(W) = \\sum_{t=0}^{\\infty}\\beta^t u(\\pi_k^t(W) - \\pi_k^{t+1}(W)), \\\\\n%\\end{equation}\n%where $\\pi_k^t(W)$ means applying $\\pi_k$ t times, and $\\pi_k^0(W) = W$.\n%Recall that \\eqref{eq:policyiter-val_from_policy} will terminate in a finite number of steps (because we will eventually run out of cake to eat).\n%Fortunately, in cake-eating, \\eqref{eq:policyiter-val_from_policy} can alternatively be calculated by using dynamic programming, \\eqref{eq:policyiter-bellman2} which defines the relationship as:\n%\\begin{equation}\n%  \\label{eq:policyiter-fast_val_from_policy}\n%  V_k(W) = u(\\pi_k^t(W) - \\pi_k^{t+1}(W)) + \\beta V_k(\\pi_k^{t+1}(W)).\n%\\end{equation}\n%\n%This happens because $\\pi_k^{t+1}(W) < W$, with the initial condition that $V_k(0) = 0$.\n%Thus, given an initial guess for our policy function, $\\pi_0$, we calculate the corresponding value function using (\\ref{eq:policyiter-val_from_policy}), and then use \\eqref{eq:policyiter-pol_func} to improve our policy function.\n%The algorithm for policy function iteration can be summarized as follows:\n%\n%\n%\n%\\begin{algorithm}[H]\n%\\begin{algorithmic}[1]\n%\\Procedure{Policy Iteration Function}{$\\beta, N, W, u(x), \\varepsilon$, maxiter}\n%    \\State $w \\gets [0, \\frac{W}{N},\\ldots, W]$\n%      \\Comment{Divide $W$ into $N$ equally sized pieces}\n%    \\State $\\pi_0 \\gets [\\pi_0(w_0),\\pi_0(w_1),\\ldots,\\pi_0(w_N)] $\n%     \\Comment{Common choice is $\\pi_0(w_i)=w_{i-1}$ with $\\pi_0(0)=0$}\n%    \\For{$i=1,2,\\dots,$\\ \\li{maxiter}}\n%        \\Comment Iterate only \\li{maxiter} times at most.\n%        \\For{$w_i \\in \\mathbf{w}$}\n%            \\State $V_{k}(w_i) = u(\\pi_k^t(w_i) - \\pi_k^{t+1}(w_i)) + \\beta V_k(\\pi_k^{t+1}(w_i))$\n%            \\State $\\pi_{k+1}(w_i) = \\argmax_{w_j \\in \\mathbf{w} \\, | \\, w_j \\leq w_i} u(w_i-w_j) + \\beta V_k(w_j)$\n%        \\EndFor\n%        \\If{ $||\\pi_{k+1} - V_k|| < \\varepsilon$}\n%            \\State \\texttt{break}\n%             \\Comment{Stop iterating if the approximation stops changing enough.}\n%        \\EndIf\n%    \\EndFor\n%    \\State \\pseudoli{return} $V_k, \\pi$\n%\\EndProcedure\n%\\end{algorithmic}\n%\\caption{Policy Iteration}\n%\\label{alg:PolicyIteration}\n%\\end{algorithm}\n%\n\n\n\n\n% TODO: This may be a more efficient way of finding the value function $V_k$. 
Should we implement this?\n% In order to compute the value function, $V_k$ corresponding to a given policy $\\pi_k$, we must solve\n% \\begin{equation}\n% \\label{Val_Fun}\n% V_k(W) = u(W-W') + \\beta V_k(W')\n% \\end{equation}\n% for $V_k$.\n%\n% As always, we take a discrete approximation to $W$, obtaining a length $N$ vector\n% \\[\n% w = (w_0, w_1, w_2, \\ldots, w_N)\n% \\]\n% giving the possible cake quantities.\n% The variable $W'$ becomes $w' = \\pi_k(w)$.\n% Once we have done this, equation \\eqref{Val_Fun} is a linear system which we can rewrite as\n% \\begin{equation*}\n% V_k(w) = u(w-w') + \\beta QV_k(w)\n% \\end{equation*}\n% where $Q$ is the $N\\times N$ matrix\n% \\begin{equation*}\n% Q_{ij} = \\left\\{\n%      \\begin{array}{ll}\n%        1 & \\text{if} \\quad  w_i' = w_j\\\\\n%        0 & \\text{otherwise.}\n%      \\end{array}\n%    \\right.\n% \\end{equation*}\n% Solving this system of equations, we have $V_k(w) = (I-\\beta Q)^{-1}u(w-w')$.\n% Although $Q$ may be large, we can take advantage of the fact that it is sparse, containing only $N$ nonzero entries out of $N^2$ total entries.\n\n%\\begin{problem}\n%\\label{prob:cake_eating_policyfun}\n%Write a function called \\li{policy_iteration()} that will accept a numpy array representing the initial vector $\\pi_0$, a discount factor $\\beta \\in (0,1)$,\n%the number of states to split $\\mathbf{w}$ into $N$, the initial amount of initial cake $W_{max}$, a utility function $u(x)$, the convergence tolerance $\\epsilon$, and the maximum number of iterations \\li{max_iter}.\n%Perform Policy Iteration until $\\|\\pi_{k+1} - \\pi_{k}\\|_\\infty < \\epsilon$ or $n > $ \\li{max_iter}.\n%Return the final vector representing $V_k$ as well as the policy vector $\\mathbf{c}$.\n%\n%Test your policy iteration by calling \\li{value_iteration()} and \\li{policy_iteration()} using the same values for $\\beta$, $N$, $W_{max}$, and $u(x)$.\n%The value functions returned should be close to equal (use \\li{np.allclose()}), and the policy vectors $\\mathbf{c}$ should be identical.\n\n% How many iterations does Policy Iteration take to converge?\n% How does this compare to the number of iterations Value Iteration takes?\n\n% TODO: Is this a more efficient way to do it?\n% You will still need to pre-compute all values of $u(W-W')$, storing these in an $N \\times N$ array. Make sure, as before,\n% that the upper triangular entries are set to large negative values, so that we don't choose to consume more cake than is\n% available. In the code snippets below, we will refer to this array as \\li{U}.\n%\n% You may also find it convenient to not track the approximated policy function $\\psi_k$ directly during the iteration, but\n% rather to have a length $N$ array \\li{psi_ind}, whose $i$-th entry gives the index of $w_i'$ relative to the discrete approximation\n% $w$. 
Thus, rather than initializing a policy function $\\psi_0$ directly, you can initialize it indirectly by setting, for example,\n% \\begin{lstlisting}\n% >>> psi_ind = np.arange(N)\n% \\end{lstlisting}\n% which corresponds to an initial policy $\\psi_0(W) = W$.\n% The policy function vector can be obtained from \\li{psi_ind} by simply slicing \\li{w} as follows:\n% \\begin{lstlisting}\n% >>> psi = w[psi_ind]\n% \\end{lstlisting}\n%\n% In order to take advantage of the sparse matrices $I$ and $Q$, use the following imports from the SciPy \\li{sparse} library.\n% \\begin{lstlisting}\n% >>> from scipy import sparse\n% >>> from scipy.sparse import linalg\n% \\end{lstlisting}\n% and the following code to initialize $I$ (outside the loop)\n% \\begin{lstlisting}\n% >>> I = sparse.identity(N, format='csr')\n% >>> rows = np.arange(0, N)\n% \\end{lstlisting}\n% and $Q$ (inside the loop)\n% \\begin{lstlisting}\n% >>> columns = psi_ind\n% >>> data = np.ones(N)\n% >>> Q = sparse.coo_matrix((data, (rows,columns)), shape=(N,N))\n% >>> Q = Q.tocsr()\n% \\end{lstlisting}\n% Rather than compute $(I-\\beta Q)^{-1}$ directly, use Scipy's sparse solver\n% \\begin{lstlisting}\n% V = linalg.spsolve(I-beta*Q, u(w-w[psi_ind]))\n% \\end{lstlisting}\n% where \\li{u} is the square root function.\n%\n% In each iteration, we update \\li{psi_indices} much as we did in the value function iteration algorithm:\n% \\begin{lstlisting}\n% >>> psi_ind = np.argmax(U + beta*V, axis=1)\n% \\end{lstlisting}\n% where we assume \\li{V} has shape \\li{(1,N)}.\n%\n% Use the 2-norm when computing $\\|\\psi_{k+1} - \\psi_k\\|$, and set the tolerance to $\\delta = 1e-9$.\n%\n% Take $N = 1000$ and $\\beta = .95$.\n%\\end{problem}\n\n%%%%%%%%%%%%%% MPI %%%%%%%%%%%%%%%%%%%% Uncomment when we fix it.\n% \\section*{Modified Policy Function Iteration}\n% Although policy iteration converges in fewer iterations, computing $V_k$ using (\\ref{eq:policyiter-val_from_policy}) at each iteration can be quite costly, especially for problems with large $N$.\n% Thus policy iteration converges slowly relative primarily to $N$, and value iteration converges slowly relative primarily to $\\beta$.\n% It turns out there is a hybrid method called modified policy function iteration that seeks to simultaneously minimize these disadvantages by performing policy iteration and value iteration together.\n%\n% Modified policy iteration is identical to policy iteration, with the addition of one extra step.\n% Before \\ref{alg:computing_pi_k} of the policy iteration algorithm, take the $V_k$ computed in step \\ref{alg:computing_V_k} and perform value iteration with \\li{max_iter}$=m$ using $V_k$ as the initial guess.\n% This is faster than solving for the exact value function for large state spaces at every single iteration.\n% There is no strict rule on the value of $m$, but typically $m \\in [5,15]$ work well.\n%\n% Thus the algorithm for modified policy iteration can be summarized as follows:\n% \\begin{enumerate}\n%   \\item Discretize the space into $N$ equal sized pieces: $\\mathbf{w} = \\left[w_0, w_1, \\ldots, w_N\\right]$ where $w_0 = 0, \\, \\, w_N = W_{max}$.\n%   \\item Choose an initial vector to represent $\\pi_0$: $\\left[\\pi_0(w_0), \\pi_0(w_1), \\ldots, \\pi_0(w_N)\\right]$\n%             A common initial choice for $\\pi_0$ is $\\pi_0(w_i) = w_{i-1}$, meaning we save $w_{i-1}$ pieces, and $\\pi_0(w_0) = 0$.\n%   \\item For $k = 1, 2, \\ldots,$ \\li{max\\_iter}:\n%   \\begin{enumerate}\n%     \\item \\label{alg:computing_V_k2} For each $w_i \\in 
\\mathbf{w}$ compute the value function $V_k(w_i)$ using (\\ref{eq:policyiter-val_from_policy}).\n%     \\item \\label{alg:value_iteration2} Call \\li{value_iteration()} with $V_k$ as the initial guess, and use $max\\_iter = m$, for some $m$.\n%         Use the value function that it returns ($\\hat{V_k}$) as the value function to use in \\ref{alg:computing_pi_k2}.\n%     \\item \\label{alg:computing_pi_k2} For each $w_i \\in \\mathbf{w}$ calculate $\\pi_{k+1}(w_i) = \\argmax_{W' \\in \\mathbf{w} \\, | \\, W' \\leq w_i} u(w_i-W') + \\beta \\hat{V_k}(W')$.\n%     \\item If $\\| \\pi_{k+1} - \\pi_k \\| < \\epsilon,$ terminate early.\n%   \\end{enumerate}\n% \\end{enumerate}\n% Note that our previous methods, value iteration, and policy iteration are like two sides of our algorithm: iterating on the value function or iterating on the policy function.\n% Modified policy function does a combination of both, attempting to take advantage of both methods while trying to avoid their disadvantages.\n% Because modified policy iteration takes only slightly more work to code compared to value iteration and policy iteration, it is often preferred in practice.\n% Modified policy iteration usually performs better than value iteration or policy iteration.\n%\n% \\begin{problem}\n% Write a function \\li{modified_policy_iteration()} that takes the same arguments as \\li{policy_iteration()}, along\n% with an additional keyword argument $m$, which indicates the number of iterations of value iteration to do at each iteration of policy iteration.\n% Return the converged value function and policy vector.\n%\n% Much of the code in \\li{policy_iteration()} can be re-used, you just need to implement the extra step introduced by modified policy iteration.\n% This is why the \\li{value_iteration()} function accepts $V_0$ as a parameter, it makes implementing MPI much simpler.\n\n\n% TODO: Based on other TODOS, do we want to solve the linear system?\n% The key differences are as follows.\n% First, you do not need to initialize the sparse arrays $I$ and $Q$ in the modified policy function iteration.\n% Secondly, rather than solving a linear system of equations to obtain the value function, you will loop\n% the equation\n% \\begin{lstlisting}\n% >>> V = u(w - w[psi_ind]) + beta*V[psi_ind]\n% \\end{lstlisting}\n% $m$ times in each iteration. 
Notice how this line of code corresponds with Equation \\eqref{Val_Fun}, where\n% \\li{w} represents $W$, \\li{w[psi_ind]} represents $W'$, and \\li{V[psi_ind]} represents $V(W')$.\n%\n% The remainder of the code should be unchanged.\n%\\end{problem}\n%%%%%%%%%%%%%%%%%%%% END OF MPI SECTION AND PROBLEM %%%%%%%%%%%%%%%%%%%%%%\n\n%\\begin{problem}\n%Solve the cake eating problem with both value iteration and policy iteration for various values of $\\beta$ and compare how long each method takes.\n%% Each final $V^*$ should be \\emph{np.allclose} and your policy vectors $\\mathbf{c}$ should be identical for both methods.\n%Use $N=1000$ as the number of grid points for $\\mathbf{w}$ and $\\beta = [.95, .975, .99, .995]$.\n%\n%It is important to use feasible initial guesses in each case in order to make the results comparable.\n%A good initial guess greatly affects the number of iterations required for convergence.\n%Use $V_0(w_i) = u(w_i)$ for value iteration, and $\\pi_0(w_i) = w_{i-1}$, with $\\pi_0(w_0) = 0$ for policy iteration.\n%\\\\(Hint: set \\li{max\\_iter} high enough for each method so that the functions actually converge; large values of $\\beta$ may require several hundred iterations for value iteration.)\n%\n%Graph the results for each method with $\\beta$ on the x-axis and time on the y-axis.\n%Compare your results to the following figure.\n%\n%\\begin{figure}[H] % This figure plots the time each method takes to converge\n%\\centering        % use width=.7\\textwidth to size the figure.\n%    \\includegraphics[width=.7\\textwidth]{figures/comparing_methods.pdf}\n%    % \\caption{Comparing each method for various values of $\\beta$.}\n%    \\label{fig:basic1}\n%\\end{figure}\n%\n%% You were not asked to compare these functions to the \\li{find_policy()} function written in Finite Value Iteration because it is significantly slower than these iterative methods.\n%\\end{problem}\n\n\\begin{comment}\n\\newpage\n\n\\section*{Additional Material}\nPerhaps here we can talk a bit about truly infinite MDPs such as choosing an optimal path through some kind of grid.\n\nOr we could spend time talking about Stochastic MDPs where we introduce probabilities into our decision making process.\n\nOr we could try and answer a question like this:\nGiven the following grid, we want to know if it's going to be more valuable to stop and get the flags or if it'll be better to head straight to the goal.\nThe answer is dependent on what kind of reward we get for touching the flags, the reward for reaching the goal, and the discount factor.\n\\begin{figure}[H] % Describe the figure here.\n\\centering        % use width=.7\\textwidth to size the figure.\n    \\includegraphics[width=.7\\textwidth]{figures/flag_maze.pdf}\n    \\caption{Source: \\url{http://www.samyzaf.com/ML/rl/qmaze.html}}\n    \\label{fig:basic1}\n\\end{figure}\n\n\\end{comment}\n\n\n\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Remnants from a Previous Version of the 
lab\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\n\n% \\section*{Discrete Choice (Threshold) Problems}\\label{SecDiscrChoice}\n%\n% One powerful application of dynamic programming is that we can make\n% models that have both continuous and discrete state variables. These models are sometimes referred to as\n% discrete choice problems or optimal stopping problems.  Examples include models of employment that involve both\n% the choice of whether to work and how much to work, models of firm entry and exit that involve the choice of both\n% whether to produce and how much to produce, and models of marriage that involve the choice of whether to date\n% (get married or keep dating) and how much to date.\n% This application illustrates the versatility of dynamic programming as a dynamic solution method\n%\n% In this lab, we follow a simple version of a standard job search model.\n% Assume that workers live infinitely long.\n% We will split a worker's life into discrete time periods, and in each period the worker is either\n% employed or unemployed, and receives a job offer. The worker must make a choice between discrete actions\n% (such as accepting or rejecting a job offer), with the goal of maximizing some utility function (hence,\n% this is a \\emph{discrete choice} problem).\n%\n% We can state this problem in terms of dynamic programming by defining an appropriate value function.\n% Let the value of entering a period with most recent wage $w$,\n% current job offer wage $w'$, and employment status $s$ be given by the following value function,\n% \\begin{equation}\\label{EqV}\n%    V(w,w',s) = \\begin{cases}\n%                   V^E(w)    \\quad&\\text{if}\\quad s = E \\\\\n%                   V^U(w,w') \\quad&\\text{if}\\quad s = U \\\\\n%                \\end{cases}\n% \\end{equation}\n% where employment status is a binary variable $s\\in\\{E,U\\}$ ($E$ indicates ``employed\" and $U$ indicates ``unemployed\");\n% a person can be either employed or unemployed.\n%\n% As in the cake eating problem, the value function is calculated as the sum of some reward (based on the\n% current state) and the discounted value of entering the next period in some particular state.\n% The reward function, denoted (as usual) by $u$, gives the utility of spending available funds.\n% Assuming that a worker receives some wage $x$ in a given period and spends all available money in the period,\n% the utility of consumption is given by\n% \\[\n% u(x).\n% \\]\n% Calculating the value for the next period depends on the employment status in the current period, so we address\n% this separately for each case.\n%\n% Let us first consider the case where the individual is unemployed ($s = U$). As is customary, let $s'$ denote the\n% employment status of the worker in the next period.\n% In this unemployed state, the worker receives unemployment benefits equal to a fraction of her most recent wage,\n% i.e. 
$\\alpha w$, where $\\alpha \\in (0, 1)$.\n% Hence, the utility of consumption in the current unemployed state is given by\n% \\[\n% u(\\alpha w).\n% \\]\n%\n% The worker also receives one wage offer ($w'$) per period, and will obtain\n% this wage in the next period provided that she chooses to accept employment, i.e. provided $s' = E$.\n% The worker must decide whether to accept the current wage offer $w'$ or to remain unemployed in the next period,\n% i.e. she must decide on the value of $s'$. How\n% does she make this choice? She must weigh the value of entering the next period as an employed worker with wage\n% $w'$ (given by $V^E(w')$) versus the value of entering the next period as an unemployed worker with\n% previous wage $w$ and unknown wage offer $w''$ (given by $V^U(w,w'')$). Because the worker cannot know\n% what the future wage offer $w''$ will be, it is treated as a random variable with a particular probability\n% distribution. Hence, the worker must actually compute the \\emph{expected} value of entering the next\n% period unemployed. This term is simply\n% \\[\n% \\mathbb{E}_{w''}V^U(w,w''),\n% \\]\n% where $\\mathbb{E}_{w''}$ denotes the expectation operator with respect to the probability distribution of future\n% wage offers $w''$.\n% To sum up, the worker chooses to accept the wage offer $w'$ or remain unemployed in the next period based on\n% which option gives the greater expected value, and the value of this decision is given by\n% \\[\n% \\max\\Bigl\\{V^E(w'), \\,\\, \\mathbb{E}_{w''}V^U(w,w'')\\Bigr\\}.\n% \\]\n%\n% The overall value of the current unemployed state with previous wage $w$ and current wage offer $w'$ is\n% just the utility of consumption plus the discounted value of the next period, i.e.\n% \\begin{equation}\\label{EqVu}\n% V^U(w,w') = u(\\alpha w) + \\beta \\max\\Bigl\\{V^E(w'), \\,\\, \\mathbb{E}_{w''}V^U(w,w'')\\Bigr\\},\n% \\end{equation}\n% where $\\beta$ is the discount factor.\n%\n% Now we turn to the case where the job status is employed ($s = E$).\n% In this case, the worker receives a wage $w$ in the current period, and so the utility of consumption is\n% just\n% \\[\n% u(w).\n% \\]\n% In the next period, the worker will have most recent wage $w$, she will receive wage offer $w''$, and will\n% have employment status $s'$. As in the unemployed case, $w''$ is unknown and treated as a random variable.\n% Unlike the unemployed case, however, the worker's future employment status $s'$ is not under her control,\n% but rather is also a random variable. The reason for this is that the worker will remain employed\n% until she loses the job, a random event that occurs with some fixed probability in each time period.\n% Hence, we must calculate the expected value of the next period with respect to both $w''$ and $s'$.\n% We may write the entire value function for the employed case as\n% \\begin{equation}\\label{EqVe1}\n%    V^E(w) = u(w) + \\beta \\mathbb{E}_{w'',s'}V(w,w'',s').\n% \\end{equation}\n%\n% To calculate the expectation term, we need to know the joint probability distribution over $w''$ and $s'$.\n% This can be characterized in the following way.\n% We assume that $s'$ and $w''$ are independent. 
Hence, we can split the joint expectation\n% operator into the composition of the two individual expectation operators:\n% \\[\n% \\mathbb{E}_{w'',s'} = \\mathbb{E}_{w''}\\mathbb{E}_{s'}.\n% \\]\n% Let $\\gamma$ represent the probability that an employed worker becomes unemployed in the next period,\n% so that $1-\\gamma$ is the probability of remaining employed in the next period.\n% If the worker stays employed in the next period ($s' = E$), then next period's wage equals the current\n% period's wage, and the term inside the expectation is\n% \\[\n% V(w,w'',E) = V^E(w).\n% \\]\n% We then have\n% \\begin{align*}\n% \\mathbb{E}_{s'}V(w,w'',s') &= (1-\\gamma)V(w,w'',E) + \\gamma V(w,w'',U)\\\\\n% &= (1-\\gamma)V^E(w) + \\gamma V^U(w,w'').\n% \\end{align*}\n% Notice that the term $(1-\\gamma)V^E(w)$ is constant with respect to $w''$. Then\n% \\begin{align*}\n% \\mathbb{E}_{w''}\\mathbb{E}_{s'}V(w,w'',s') &= \\mathbb{E}_{w''}\\left[(1-\\gamma)V^E(w) + \\gamma V^U(w,w'')\\right]\\\\\n% &= \\mathbb{E}_{w''}(1-\\gamma)V^E(w) + \\mathbb{E}_{w''}\\gamma V^U(w,w'')\\\\\n% &= (1-\\gamma)V^E(w) + \\gamma \\mathbb{E}_{w''}V^U(w,w'').\n% \\end{align*}\n% Hence, we can rewrite \\eqref{EqVe1} as follows:\n% \\begin{equation}\\label{EqVe2}\n%    V^E(w) = u(w) + \\beta \\Bigl[(1-\\gamma)V^E(w) + \\gamma \\mathbb{E}_{w''}V^U(w,w'')\\Bigr].\n% \\end{equation}\n%\n% We have now completely described the value function. What about the policy function?\n% The policy function for the unemployed worker gives her decision on whether to accept the job $s'=E$\n% or to reject the job $s'= U$.\n% This will be a function of both the most recent wage $w$ and the current wage offer $w'$.\n% The employment status $s'$ in the next period is determined by the policy function $\\psi$:\n% \\[\n% s' = \\psi(w,w').\n% \\]\n%\n% These discrete choice problems are often called threshold\n% problems because the policy choice depends on whether the state variable is greater than or less than\n% some threshold level. That is, an unemployed worker will accept a job if and only if the offer wage is\n% above some set amount that depends on the most recent wage $w$. In the labor search model,\n% the threshold level is called the ``reservation wage'' $w_R'$. The reservation wage $w_R'$ is defined as\n% the wage offer such that the worker is indifferent between accepting the job $s' = E$ and\n% staying unemployed $s' = U$. Hence, this reservation wage satisfies the equation\n% \\begin{equation}\\label{EqWR}\n%    V^E(w_R') = E_{w''}\\left[V^U(w,w'')\\right].\n% \\end{equation}\n% The policy function will then take the form of accepting the job if $w' \\geq w_R'$ or\n% rejecting the job offer and remaining unemployed if $w' < w_R'$:\n% \\begin{equation}\\label{EqSprime}\n%    s' = \\psi(w,w') = \\begin{cases}\n%                       E \\quad\\text{if}\\quad w' \\geq w_R' \\\\\n%                       U \\quad\\text{if}\\quad w' < w_R'.\n%                    \\end{cases}\n% \\end{equation}\n% Figure \\ref{fig:disc_policy} shows an example of the discrete policy function.\n%\n% \\begin{figure}\n% \\includegraphics[width=\\textwidth]{figures/disc_policy.pdf}\n% \\caption{Here is the policy function for fixed $w = 50$.  Numerically we let 0 represent unemployment, $U$,\n% and 1 represent employment, $E$.  Thus we see that an individual will choose to take a new job, given\n% their old wage was 50, at a wage of roughly 35.  
Thus for a previous wage of 50, we say the reservation wage is 35.}\n% \\label{fig:disc_policy}\n% \\end{figure}\n%\n% In summary, the labor search discrete choice problem is characterized by the value functions \\eqref{EqV}, \\eqref{EqVu},\n% and \\eqref{EqVe2}, the reservation wage \\eqref{EqWR}, and the policy function \\eqref{EqSprime}. Because wage offers\n% are distributed according to some given probability distribution (denote the cdf by $F(w')$),\n% and because the policy function takes the form of \\eqref{EqSprime},\n% the probability that the unemployed worker receives a wage offer that she will reject is $F(w_R')$ and the probability\n% that she receives a wage offer that she will accept is $1 - F(w_R')$. Just like the continuous-choice cake eating\n% problems, this problem can be solved by value function, policy function, or modified policy function iteration.\n%\n% The value function iteration solution method for the equilibrium in the labor search problem is analogous to the\n% value function iteration from the previous labs. The only difference is that two value functions ($V^E$ and $V^U$)\n% must converge to a fixed point in this problem instead of just one value function converging in the previous problems.\n% Although there are two value functions to consider, there is only one policy function, since decisions are only made\n% in the unemployed state.  Thus, there is only one policy function on which to iterate in the case of policy or modified\n% policy iteration.\n%\n% In the following problems, you will solve the job search problem using value function iteration and modified policy\n% function iteration. Assume that the consumption utility function $u$ is given by\n% \\[\n% u(w) = \\sqrt{w}.\n% \\]\n% Assume that the probability of becoming unemployed in a given period is $\\gamma = 0.10$, the fraction of wages paid\n% in unemployment benefits is $\\alpha = 0.5$, and the discount factor is $\\beta = 0.9$.\n%\n% Assume that the log of wage offers are distributed normally.  We then say that offers are distributed\n% lognormally and write\n% \\[\n% w'\\sim \\text{LogN}(\\mu,\\sigma).\n% \\]\n% This is a convenient choice for the distribution of\n% wage offers.  Among other things, it guarantees that wage offers will be positive.\n% A mean of $20$ and variance of $200$ are typical parameters of such a wage distribution,\n% and we will use these parameters in the following problems.\n%\n% As usual when dealing with continuous variables, we form a discrete approximation of\n% the possible wage values. In particular, approximate the wage values by a vector of\n% length $N = 500$ of equally-spaced values from $w_{min} = 0$ to $w_{max} = 100$, inclusive.\n% We then form a corresponding discrete approximation of the probability density function\n% $f(w')$ for the lognormal wage offers using code provided in the file \\li{discretelognorm.py},\n% as follows, where \\li{w} is the length-$N$ vector of wage values, $m$ is the mean, and $v$ is\n% the variance as specified above:\n% \\begin{lstlisting}\n% >>> from discretelognorm import discretelognorm\n% >>> f = discretelognorm(w, m, v)\n% \\end{lstlisting}\n% The function \\li{discretelognorm} computes the discrete pdf of the specified lognormal distribution in much\n% the same way that you calculate the discrete normal pdf when solving the stochastic cake eating problem.\n%\n%\n% \\begin{problem}\n% Solve the job search problem using value function iteration. 
\n% \\begin{problem}\n% Solve the job search problem using value function iteration. Return the converged value functions\n% $V^E$ and $V^U$, as well as the converged policy function $\\psi$. The following steps provide detailed\n% instructions. Note that there are multiple ways to proceed, and the following is simply one (fairly good)\n% possibility.\n% \\begin{enumerate}\n%\n%    \\item As described above, represent the possible wage values by an array \\li{w} of length\n%    $N$. Denote this array entrywise by\n%    \\[\n%    w = (w_1,w_2,\\ldots,w_N).\n%    \\]\n%    Calculate the corresponding discrete lognormal pdf \\li{f}, exactly as shown above.\n%\n%    \\item Note that $u(w)$ and $u(\\alpha w)$ are needed when computing the value functions.\n%    Since these quantities do not change from one iteration to another, it is smart to\n%    compute them once at the outset. Denote $u(w)$ by \\li{uw} and $u(\\alpha w)$ by \\li{uaw}.\n%    These are easily calculated as follows:\n% \\begin{lstlisting}\n% >>> uw = u(w)\n% >>> uaw = u(alpha*w).reshape((N,1))\n% \\end{lstlisting}\n%     where \\li{u} is the square root function. We must reshape \\li{uaw} because of array broadcasting\n%     issues that arise in the code snippets below.\n%\n%    \\item Since $V^E$ is a function of only $w$, it will be represented by a vector of length $N$, where\n%    the $i$-th entry gives $V^E(w_i)$. The unemployed value function $V^U$, however,\n%    is a function of both $w$ and $w'$, so it will be represented by an $N \\times N$ array,\n%    where the $(i,j)$-th entry gives $V^U(w_i,w_j)$. Initialize the entries of these arrays to 0:\n% \\begin{lstlisting}\n% >>> VE = np.zeros(N)        #employed value function\n% >>> VU = np.zeros((N,N))    #unemployed value function\n% \\end{lstlisting}\n%\n%    \\item Note that $\\mathbb{E}_{w''}V^U(w,w'')$  is needed to calculate both $V^E$ and $V^U$.\n%    This expectation depends on $w$, and so can be represented by a length $N$ array, where the\n%    $i$-th entry is $\\mathbb{E}_{w''}V^U(w_i,w'')$.\n%    It is convenient to assign a variable to this array to keep track of it throughout the iterations.\n%    We denote the expectation by \\li{EVU}, and initialize it to zeros:\n% \\begin{lstlisting}\n% >>> EVU = np.zeros(N)\n% \\end{lstlisting}\n%\n%    \\item For reasons that will soon become apparent, we will need to create an $N\\times N$ helper array\n%    whose rows are equal to \\li{VE} (call this array \\li{MVE}), and an $N \\times N$ helper array whose columns\n%    are equal to \\li{EVU} (call this array \\li{MEVU}).\n%    At the outset, simply initialize these arrays to zeros.\n%\n%    \\item Because job status is a binary variable, the policy function returns one of two possible values. It is\n%    convenient to represent ``employed'' by $1$ and ``unemployed'' by $0$. Now the policy function depends\n%    on $w$ and $w'$, so it will also be represented by an $N\\times N$ array \\li{PSI} of zeros and ones,\n%    where the $(i,j)$-th entry gives $\\psi(w_i, w_j)$.\n%\n%    \\item Now we are ready to begin the iteration.\n%    A single iteration involves computing the updated value functions $V^E$ and $V^U$ from\n%    equations \\eqref{EqVe2} and \\eqref{EqVu} and then calculating the $2$-norm distance between\n%    both pairs of old and updated value functions to test for convergence. If both of these\n%    $2$-norm distances are less than $10^{-9}$, terminate the iteration.\n%    (One possible arrangement of the full loop is sketched after this problem.)\n%\n%    Before calculating the updated value functions, we first update our helper arrays \\li{MVE} and\n%    \\li{MEVU}. The rows of \\li{MVE} need to equal \\li{VE}. 
We can use array broadcasting:\n% \\begin{lstlisting}\n% >>> MVE[:,:] = VE.reshape((1,N))\n% \\end{lstlisting}\n%    The columns of \\li{MEVU} need to equal \\li{EVU}, so use a similar technique:\n% \\begin{lstlisting}\n% >>> MEVU[:,:] = EVU.reshape((N,1))\n% \\end{lstlisting}\n%\n%    Now let us address how to compute the updated $V^U$, which we denote by \\li{VU1}.\n%    Equation \\eqref{EqVu} shows that it is the sum of\n%    two terms. The first, $u(\\alpha w)$, we have already computed and stored in the variable \\li{uaw}.\n%    The second term involves a maximization between two alternatives. One can imagine writing\n%    a double for loop ranging over the values of $w'$ and $w$ to compute each individual\n%    $\\max\\{V^E(w'), \\mathbb{E}_{w''}V^U(w,w'')\\}$, but we can take advantage of the helper\n%    arrays \\li{MVE} and \\li{MEVU} to do this computation in one efficient line of code.\n%    Note that the $(i,j)$-th entry of \\li{MVE} is just $V^E(w_j)$ and the $(i,j)$-th\n%    entry of \\li{MEVU} is $\\mathbb{E}_{w''}V^U(w_i, w'')$, and\n%    \\[\n%    V^U(w_i,w_j) = u(\\alpha w_i) + \\beta\\max\\{V^E(w_j), \\mathbb{E}_{w''}V^U(w_i,w'')\\}.\n%    \\]\n%    Hence, taking the entrywise maximum of the arrays \\li{MVE} and \\li{MEVU} gives us the appropriate\n%    max term for $V^U$. To get the entrywise maximum of two arrays, stack the arrays along a new\n%    axis using \\li{np.dstack}, and maximize along that axis. The computation for \\li{VU1}, then, is\n%    \\begin{lstlisting}\n% >>> VU1 = uaw + beta*np.max(np.dstack([MEVU, MVE]), axis=2)\n%    \\end{lstlisting}\n%\n%    Calculating the updated $V^E$, denoted by \\li{VE1}, is more straightforward.\n%    Equation \\eqref{EqVe2} shows that it is just\n%    a particular linear combination of the arrays \\li{uw}, \\li{VE}, and \\li{EVU}:\n%    \\begin{lstlisting}\n% >>> VE1 = uw + beta*((1-gamma)*VE + gamma*EVU)\n%    \\end{lstlisting}\n%\n%    We can now calculate the 2-norm distances between old and updated value functions.\n%    It remains to update \\li{VE}, \\li{VU}, and \\li{EVU}.\n%    The first two updates are trivial, and calculating \\li{EVU} is equivalent to the\n%    matrix-vector multiplication of \\li{VU} with \\li{f}. This is similar to how we computed\n%    expectations in previous labs:\n%    \\begin{lstlisting}\n% >>> EVU = np.dot(VU,f).ravel()\n%    \\end{lstlisting}\n%    We use the \\li{ravel} function to ensure that \\li{EVU} is a flat array.\n%\n%    \\item Notice that it is not necessary to iteratively update the policy function, as it is not needed\n%    to update the value functions. Thus, we need only compute the policy function once, after convergence\n%    of the value functions has been achieved. This is done in a manner similar to calculating\n%    $\\max\\{V^E(w'), \\mathbb{E}_{w''}V^U(w,w'')\\}$ as described above, except we need to take the \\emph{argmax}:\n% \\begin{lstlisting}\n% >>> PSI = np.argmax(np.dstack([MEVU,MVE]), axis=2)\n% \\end{lstlisting}\n%\n%    \\item Compute the reservation wage $w_R'$ as a function of the current wage $w$. It will be represented\n%    by a length $N$ array called \\li{wr}. The reservation wage is the\n%    value of $w'$ where the policy function changes from zeros to ones (the optimal choice changes from remaining\n%    unemployed to accepting the job offer). We can calculate this as follows:\n%    \\begin{lstlisting}\n% >>> wr_ind = np.argmax(np.diff(PSI), axis=1)\n% >>> wr = w[wr_ind]\n%    \\end{lstlisting}\n%\n%    \\item Plot the equilibrium reservation wage $w_R'$ of the converged problem as a function of the current\n%    wage $w$ with the current wage on the $x$-axis and the reservation wage $w_R'$ on the $y$-axis. This is\n%    the most common way to plot discrete choice policy functions. The reservation wage represents the wage\n%    that makes the unemployed worker indifferent between taking a job offer and rejecting it. So any wage\n%    above the reservation wage line represents $s' = E$ and any wage below the reservation wage line represents\n%    $s' = U$. Your plot should resemble that in Figure \\ref{fig:res_wage}.\n%\n% \\end{enumerate}\n% \\end{problem}\n%
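\n% As promised above, here is one possible arrangement of the value function iteration loop.\n% It is only a sketch, not the only correct solution, and it assumes the arrays \\li{w}, \\li{f},\n% \\li{uw}, \\li{uaw}, \\li{VE}, \\li{VU}, \\li{EVU}, \\li{MVE}, and \\li{MEVU} have been initialized\n% as in the steps above:\n% \\begin{lstlisting}\n% delta = 1.\n% while delta >= 1e-9:\n%     MVE[:,:] = VE.reshape((1,N))            # update helper arrays\n%     MEVU[:,:] = EVU.reshape((N,1))\n%     VU1 = uaw + beta*np.max(np.dstack([MEVU, MVE]), axis=2)\n%     VE1 = uw + beta*((1-gamma)*VE + gamma*EVU)\n%     delta = max(np.linalg.norm(VU1 - VU),   # test both value functions\n%                 np.linalg.norm(VE1 - VE))\n%     VU, VE = VU1, VE1\n%     EVU = np.dot(VU, f).ravel()             # expectation of the new VU\n% PSI = np.argmax(np.dstack([MEVU, MVE]), axis=2)\n% \\end{lstlisting}\n%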
\n% \\begin{figure}\n% \\includegraphics[width=\\textwidth]{figures/reservation_wage.pdf}\n% \\caption{The reservation wage as a function of previous wage for the job search problem.}\n% \\label{fig:res_wage}\n% \\end{figure}\n%\n% In the previous problem, it was necessary to iterate on two value functions.\n% Consequently, the convergence is relatively slow.\n% We can improve upon this situation by using modified policy function iteration.\n%\n% \\begin{problem}\n% Solve the same problem, this time using modified policy function iteration with $15$ value function iterations\n% within each policy iteration.  You should be able to re-use much of your code from the previous problem.\n%\n% Start off by initializing all of the same variables. Additionally, initialize your policy function array \\li{PSI},\n% say\n% \\begin{lstlisting}\n% >>> PSI = 2*np.ones((N,N))\n% \\end{lstlisting}\n%\n% Next comes the iteration.\n% Essentially, the iteration will consist of an outer while-loop (which terminates once the 2-norm distance\n% between successive policy functions passes below $10^{-9}$), and an inner for-loop (of $15$ iterations).\n%\n% The first step in the while-loop is to calculate the new policy function \\li{PSI1}, just as in the previous\n% problem. Next, perform the inner for-loop, which consists simply of the value function iteration, but this\n% time using the current policy function. This means the line of code\n% \\begin{lstlisting}\n% >>> VU = uaw + beta*np.max(np.dstack([MEVU, MVE]), axis=2)\n% \\end{lstlisting}\n% is no longer valid, as it does not use the policy function. We must instead have\n% \\begin{lstlisting}\n% >>> VU = uaw + beta*(MVE*PSI1 + MEVU*(1 - PSI1))\n% \\end{lstlisting}\n% Why is this code correct?\n%\n% Finally, after exiting the for-loop, calculate the 2-norm distance between the old and the new policy function,\n% and then update your old policy function, i.e.\n% \\begin{lstlisting}\n% >>> PSI = PSI1\n% \\end{lstlisting}\n%\n% After convergence is achieved, once again compute the reservation wage array, and plot it as in the previous\n% problem. 
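\n% For reference, these pieces might be assembled as follows.  Again this is only a sketch under\n% the same assumptions as the sketch for the previous problem; the helper-array updates inside\n% the for-loop mirror the value function iteration:\n% \\begin{lstlisting}\n% delta = 1.\n% while delta >= 1e-9:\n%     PSI1 = np.argmax(np.dstack([MEVU, MVE]), axis=2)   # policy step\n%     for k in range(15):                                # partial value iteration\n%         MVE[:,:] = VE.reshape((1,N))\n%         MEVU[:,:] = EVU.reshape((N,1))\n%         VU = uaw + beta*(MVE*PSI1 + MEVU*(1 - PSI1))\n%         VE = uw + beta*((1-gamma)*VE + gamma*EVU)\n%         EVU = np.dot(VU, f).ravel()\n%     delta = np.linalg.norm(PSI1 - PSI)\n%     PSI = PSI1\n% \\end{lstlisting}\n% 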
Then return the converged policy function.\n% \\end{problem}\n%\n% \\begin{problem}\n% How many iterations did the value function iteration method take?\n% How many iterations did the modified policy function iteration method take?\n% Which was faster?\n% \\end{problem}\n", "meta": {"hexsha": "3323b9f941bbaf8d519b182551ad4606432c4c08", "size": 68080, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acme-material/Labs/Volume2/PolicyFunctionIteration/PolicyFunctionIteration.tex", "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_issues_repo_path": "acme-material/Labs/Volume2/PolicyFunctionIteration/PolicyFunctionIteration.tex", "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_forks_repo_path": "acme-material/Labs/Volume2/PolicyFunctionIteration/PolicyFunctionIteration.tex", "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.6209150327, "max_line_length": 344, "alphanum_fraction": 0.6917303173, "num_tokens": 19693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6859494678483918, "lm_q2_score": 0.8311430562234877, "lm_q1q2_score": 0.5701221371223875}}
{"text": "%!TEX root = ../notes.tex\n\\section{February 10, 2022}\n\n\\subsection{Congruences \\emph{continued}}\n\\recall that for $m\\in\\ZZ_+$, $a,b\\in \\ZZ$, the linear congruence\n\\[ax\\equiv b\\mod{m}\\]\nhas a solution if and only if $(a, m)\\mid b$. Unwinding this gives Bezout's identity.\n\n\\textbf{Q: How do we actually find a solution?}\n\n\\textbf{A:} Either guess and check; or apply the following algorithm:\n\\begin{enumerate}[1)]\n    \\item Divide all terms in the congruence by $d = (a, m)$.\n    \\item If step $1$ yields\n          \\[a'x\\equiv b'\\mod m'\\]\n          with $(a', m') = 1$, then $d:= (a', b')$ is a \\ul{unit} mod $m'$, so we can divide both sides by $d'$.\n          \\[a'd'^{-1}x\\equiv b'd'^{-1}\\mod m'\\]\n    \\item Let $a''x\\equiv b''\\mod m$ be the result so far. We replace $b''$ by some $b''+km'$\\footnote{Since made $a'$ and $m'$ coprime, we have $(a', m')=1$ so we can indeed solve congruence $a''q \\equiv b''+km'$ gives noncoprime pairs.} such that $(a'', b''+km')>1$ allows us to repeat step $2$. This results in some $a'''$ such that $|a'''| < |a''|$.\n\n          Given that we repeat this process, this must eventually terminate, since the absolute values of the $a$ terms are strictly decreasing each time.\n\\end{enumerate}\n\\begin{example}\n    Let $10x\\equiv 6\\mod{14}$.\n\n    \\begin{enumerate}[1)]\n        \\item $(a, m) = (10, 14) = 2$ so we divide through by $2$. \\[5x\\equiv 3\\mod{7}\\]\n        \\item Irrelevant since $(5, 3)$ are coprime.\n        \\item Consider integers of form $3+7k$, and see which are divisible by $5$. We can take $k=1$. We get\n              \\[5x\\equiv 10\\mod 7\\]\n        \\item[2)] Divide by $(5, 10) = 5$ so we have $x\\equiv 2\\mod{7}$.\n    \\end{enumerate}\n\\end{example}\n\n\\subsection{Simultaneous Linear Congruences}\n\\recall the CRT/Sun-tzu's theorem.\n\\begin{theorem}[Sun-tzu's Theorem / Chinese Remainder Theorem]\\label{thm:crt}\n    Suppose that $m = m_1m_2\\cdots m_t$ with $(m_i, m_j) = 1 \\forall i\\neq j$.\n\n    Let $b_1, b_2\\dots, b_t$ be integers, and consider the system of congruences\n    \\begin{align}\n        x_1 & \\equiv b_1\\mod{m_1}        \\label{eqn:crt-congruences} \\tag{$*$} \\\\\n        x_2 & \\equiv b_2\\mod{m_2} \\nonumber                                    \\\\\n            & \\qquad \\vdots \\nonumber                                          \\\\\n        x_t & \\equiv b_t\\mod{m_t} \\nonumber\n    \\end{align}\n    Then this system has a unique solution modulo $m$\\footnote{That is, we have at least one solution, and we can shift it by \\emph{any} multiple of $m$.}.\n\\end{theorem}\n\\begin{proof}\n    Let $n_i = m_1m_2\\cdots \\cancel{m_i}\\cdots m_t = \\frac{m}{m_i}$ for each $i$. Since $m_i$ is coprime to $m_j,\\ \\forall j\\neq i$, we have $(n_i, m_i) = 1 \\forall i$. Then, there exists solutions $r_i, s_i\\in \\ZZ$ such that\n    \\[r_im_i + s_in_i = 1\\]\n    Let $e_i = s_in_i$. Then for each $i$,\n    \\[e_i\\equiv 1\\mod m_i\\]\n    and $e_i\\equiv 0\\mod m_j,\\ \\forall j\\neq i$.\n\n    Our goal is to ultimately show\n    \\[\\ZZ/m\\ZZ\\simeq \\ZZ/m_1\\ZZ\\times \\ZZ/m_2\\ZZ\\times\\cdots\\times\\ZZ/m_t\\ZZ\\]\n    with each $e_i$ generating the ``$\\ZZ/m_i\\ZZ$ piece''.\n\n    Set \\[x_0 = \\sum_{i=1}^t b_ie_i\\] so that $x_0\\equiv b_i\\pmod{m_i}\\ \\forall i$, so $x_0$ is a solution to \\cref{eqn:crt-congruences}.\n\n    Suppose $x_1$ is another solution. 
\n\n\\subsection{Structure of Unit Groups}\n\\recall in Math 1530, we learned Lagrange's theorem:\n\\begin{theorem}[Lagrange's Theorem]\n    If $G$ is a finite group, then for every subgroup $H$ of $G$, we have $|H| \\bigm\\vert |G|$.\n\\end{theorem}\n\\begin{corollary}\n    If $G$ is a finite group of order $n$, and $a\\in G$, then $a^n = e$, where $e$ is the identity of the group.\n\\end{corollary}\nWe've seen that $\\left\\vert U(m) \\right\\vert = \\phi(m)$.\\footnote{For notation, we use $U(m) := (\\ZZ/m\\ZZ)^\\times$.} Applying Lagrange's theorem, we have Euler's theorem:\n\\begin{theorem}[Euler's Theorem]\\label{thm:eulers-thm}\n    For any $a\\in \\ZZ$ with $(a, m) = 1$, we have $a^{\\phi(m)}\\equiv 1\\mod{m}$.\n\\end{theorem}\n\\begin{definition}\n    A subset $R$ of $\\ZZ$ is said to be a \\ul{reduced set of residues mod $m$} if $R$ contains exactly one element from each of the $\\phi(m)$ congruence classes that are units mod $m$.\n\\end{definition}\n\\begin{proof}[Alternate proof of \\cref{thm:eulers-thm}]\n    Let $R = \\{r_1, r_2, \\dots, r_{\\phi(m)}\\}$ be a reduced set of residues mod $m$. If $(a, m) = 1$, then $aR$ is also a reduced set of residues mod $m$. Thus, if $x_1, x_2, \\dots, x_{\\phi(m)}$ are the (pairwise distinct) elements of $aR$, then\n    \\begin{align*}\n        x_1x_2\\cdots x_{\\phi(m)}               & \\equiv r_1r_2\\cdots r_{\\phi(m)}\\mod{m}  \\\\\n        (ar_1)(ar_2)\\cdots (ar_{\\phi(m)})      & \\equiv r_1r_2\\cdots r_{\\phi(m)}\\mod{m}  \\\\\n        a^{\\phi(m)} (r_1r_2\\cdots r_{\\phi(m)}) & \\equiv r_1r_2\\cdots r_{\\phi(m)} \\mod{m} \\\\\n        \\intertext{since all the $r_i$ are units mod $m$, we divide through}\n        a^{\\phi(m)}                            & \\equiv 1\\mod{m}\n    \\end{align*}\n    which is as desired.\n\\end{proof}\n\nWe'll be studying roots of polynomials over $\\ZZ/m\\ZZ$, especially polynomials of the form $x^d - a$.\n\nBy Sun-tzu's theorem, the case of $m$ being a prime power is especially important. This turns out to have a lot to do with the case that $m=p$ is a prime itself.\n\nFirst something further \\emph{afield}:\n\\begin{proposition}\\label{prop:d-roots-in-fp}\n    If $p$ is a prime and $p\\nmid d$ for $d\\in\\ZZ_+$, then the polynomial\n    \\[x^d - a\\in (\\ZZ/p\\ZZ)[x],\\qquad a\\not\\equiv 0\\mod{p}\\]\n    has exactly $d$ roots in some extension of $\\FF_p$.\n\n    Conversely, if $p\\mid d$, then there are fewer than $d$ roots in any extension of $\\FF_p = \\ZZ/p\\ZZ$.\n\\end{proposition}\n\nThe proof uses the following proposition:\n\\begin{proposition*}\n    A nonzero polynomial $f\\in K[x]$ is \\ul{separable} if and only if it is relatively prime to its derivative $f'$. (A polynomial is \\ul{separable} when its roots in the algebraic closure $\\overline{K}$ are all distinct.)\n\\end{proposition*}\n\\begin{proof}\n    ~\\begin{description}\n        \\item[$\\Rightarrow$ Forward Direction:]\n            Suppose $f$ is separable and let $\\alpha$ be any root of $f$. Then $f(x) = (x-\\alpha)h(x)$, where $h(\\alpha)\\neq 0$ since $\\alpha$ is a non-repeated root.\n\n            We have $f'(\\alpha) = h(\\alpha)\\neq 0$, so $\\alpha$ is not a root of $f'$. Thus $f$ and $f'$ have no common roots, so they are coprime.\n\n        \\item[$\\Leftarrow$ Reverse Direction:]\n            We prove the contrapositive. Suppose $f$ is not separable, i.e. it has some repeated root, which we call $\\alpha$.\n\n            Then $f(x) = (x-\\alpha)^2g(x)$, so $f'(x) = (x-\\alpha)^2 g'(x) + 2(x-\\alpha)g(x)$. We see that $x-\\alpha$ divides both $f$ and $f'$ so $(f, f')\\neq 1$.\n    \\end{description}\n    This completes both directions.\n\\end{proof}
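\n\\begin{example}\n    Over $\\FF_3$, the polynomial $f(x) = x^2 - 2$ has $f'(x) = 2x$, whose only root is $0$; since $f(0) = -2\\neq 0$, we get $(f, f') = 1$ and $f$ is separable, with two distinct roots $\\pm\\sqrt{2}$ in $\\FF_9$. By contrast, $g(x) = x^3 - 2 = (x+1)^3$ in $\\FF_3[x]$ has $g'\\equiv 0$, so $g$ is not separable: its only root in any extension is $x = 2$.\n\\end{example}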
\n\\begin{proof}[Proof of \\cref{prop:d-roots-in-fp}]\n    The polynomial $f(x) = x^d - a$ with $a\\not\\equiv 0\\pmod{p}$ has $d$ distinct roots in some extension of $\\FF_p = \\ZZ/p\\ZZ$ because, since $p\\nmid d$,\n    \\[f'(x) = dx^{d-1}\\not\\equiv 0\\pmod{p}\\]\n    has $0$ as its only root, and $0$ is not a root of $f$. Hence $(f, f') = 1$, and by the proposition above $f$ is separable.\n\n    Conversely, if $p\\mid d$, then\n    \\[f'(x)\\equiv 0\\pmod{p},\\]\n    so $(f, f')\\neq 1$, meaning that $f$ is not separable.\n\\end{proof}\n\n\\begin{proposition}[4.1.2 of text]\\label{prop:d-roots}\n    If $p$ is a prime and if $d\\mid p-1$, then the polynomial\n    \\[x^{d} - 1\\in (\\ZZ/p\\ZZ)[x]\\] has exactly $d$ roots in the base field $\\FF_p = \\ZZ/p\\ZZ$.\n\\end{proposition}\n\\begin{proof}\n    We know this is true in the case of $d = p-1$ because of Fermat's Little Theorem (a special case of Euler's Theorem).\n\n    We also note that $(x^d - 1)\\mid (x^{p-1} - 1)$ when $d\\mid p-1$; write $x^{p-1} - 1 = (x^d - 1)g(x)$ with $\\deg g = p-1-d$. By FLT, $x^{p-1} - 1$ has $p-1$ distinct roots, all lying in the base field. Since $x^d - 1$ can contribute at most $d$ of these roots and $g$ at most $p-1-d$, the factor $x^d - 1$ must have exactly $d$ roots, all in the base field $\\FF_p$.\n\\end{proof}", "meta": {"hexsha": "67c0fba76673203256ebb96788d078ca26ff7fa3", "size": 7886, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-02-10.tex", "max_stars_repo_name": "jchen/math1560-notes", "max_stars_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-02T15:41:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T20:28:48.000Z", "max_issues_repo_path": "lectures/2022-02-10.tex", "max_issues_repo_name": "jchen/math1560-notes", "max_issues_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-02-10.tex", "max_forks_repo_name": "jchen/math1560-notes", "max_forks_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.3285714286, "max_line_length": 353, "alphanum_fraction": 0.6214811058, "num_tokens": 2812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8311430436757312, "lm_q1q2_score": 0.5701221285152606}}
{"text": "\\section{Attitude} \\label{Ch:Attitude}\r\n\r\nThe attitude of a spacecraft can be defined qualitatively as how the\r\nspacecraft is oriented in inertial space, and how that orientation\r\nchanges in time.  GMAT has the ability to model the orientation and\r\nrate of rotation of a spacecraft using several different\r\nmathematical models. Currently, GMAT assumes that a spacecraft is a\r\nrigid body.\r\n\r\nThere are many ways to quantitatively describe the orientation and\r\nrate of rotation of a spacecraft, just like there are many ways we\r\ncan quantitatively describe an orbit state.  Let's define any set of\r\nnumbers that can uniquely define the spacecraft attitude as an\r\nattitude parameterization. GMAT allows the users to use several\r\ncommon attitude parameterizations including quaternions, Euler\r\nangles, the Direction Cosine Matrix (DCM), Euler angle rates, and\r\nthe angular velocity vector. Given an initial attitude state, GMAT\r\ncan propagate the attitude using one of several kinematic attitude\r\npropagation models.\r\n\r\nIn this chapter, we discuss the attitude parameterizations supported\r\nin GMAT, and how to convert between the different types.  We discuss\r\nthe internal state parameterization that GMAT uses.   Next we\r\ninvestigate the types of attitude modes in GMAT and discuss in\r\ndetail how GMAT propagates the spacecraft attitude in all of the\r\nKinematic attitude modes.  We conclude the chapter with a discussion\r\nof how GMAT converts between different attitude parameterizations.\r\n\r\n\\subsection{Attitude Propagation}\r\n\r\nGiven a set of initial conditions that define the attitude, GMAT can\r\npropagate the attitude using several methods.  Currently, GMAT only\r\nsupports kinematic attitude propagation.  In Kinematic mode, the\r\nattitude is defined by describing the desired orientation with\r\nrespect to other objects such as spacecraft or celestial bodies.\r\nWith this information, GMAT can calculate the required attitude to\r\nsatisfy the desired geometrical configuration.  This section\r\npresents the different Kinematic attitude modes, and how GMAT\r\ncalculates the attitude state in each mode.  Let's begin by looking\r\nat the internal attitude state representation and how the user can\r\ndefine initial conditions.\r\n\r\n\\vspace{- .1 in} \\subsubsection{Internal State Representation and\r\nAttitude Initial Conditions}\r\n\r\nCertain attitude parameterizations are more useful for attitude\r\npropagation, while other attitude parameterizations are more\r\nintuitive for providing attitude initial conditions or output. GMAT\r\nuses different internal parameterizations of the attitude\r\norientation depending upon the attitude mode.  The type of\r\nparameterization is chosen to make the attitude propagation\r\nalgorithms natural and convenient.  For the kinematic modes, GMAT\r\nuses the DCM that represents the rotation from the inertial system\r\nto the body axes as the attitude orientation parameterization. In\r\nthe future, when 6 degree of freedom attitude modelling is\r\nimplemented, GMAT will use the quaternion that represents the\r\nrotation from the inertial system to the body axes. GMAT uses the\r\nangular velocity of the body with respect to the inertial frame,\r\nexpressed in the body frame, $\\{\\boldsymbol\\omega_{IB}\\}_B$, as the\r\nrate portion of the state vector.\r\n\r\nFor convenience, the user can choose a coordinate system in which to\r\ndefine the initial attitude state.  Let's call this system\r\n$\\mathcal{F}_i$.  
The user can define the initial attitude\r\norientation with respect to $\\mathcal{F}_i$ using Euler angles, the\r\nDCM, or quaternions.  The user can define the body rate with respect\r\nto $\\mathcal{F}_i$ by defining the angular velocity in\r\n$\\mathcal{F}_i$, $\\{\\boldsymbol\\omega_{IB}\\}_i$, or by defining the\r\nEuler angle rates.  Note that not all attitude modes require these\r\nthree pieces of information.  The specific inputs for each attitude\r\nmode are discussed below, along with details about how attitude\r\npropagation is performed in each mode.\r\n\r\n\\subsubsection{Kinematic Attitude Propagation}\r\n\r\nThe Kinematic attitude mode allows a user to define a geometrical\r\nconfiguration based on the relative position of a spacecraft with\r\nrespect to other spacecraft or celestial bodies, and with respect to\r\ndifferent coordinate systems.  In Kinematic mode, GMAT does not\r\nintegrate the attitude equations of motion, but rather calculates\r\nthe attitude based on the geometrical definition provided by the\r\nuser.  There are several Kinematic modes to choose from.  The\r\ndifferent modes allow the user to conveniently define the spacecraft\r\nattitude depending on the type of attitude profile needed for a\r\nspecific mission.  To begin, let's look at how GMAT calculates the\r\nattitude state in the Coordinate System Fixed attitude mode\r\n(CSFixed).\r\n\r\n\\subsubsection{Coordinate System Fixed Mode}\r\n\r\nIn the CSFixed attitude mode, the user supplies two pieces of\r\ninformation. They first specify a coordinate system in which to fix\r\nthe attitude, $\\mathcal{F}_i$.  $\\mathcal{F}_i$ can be any of the\r\ndefault coordinate systems or any user-defined coordinate system.\r\nSecondly, the user specifies how the body axis system\r\n$\\mathcal{F}_B$ is oriented with respect to $\\mathcal{F}_i$ by\r\ndefining $\\mathbf{R}_{Bi}$ or an equivalent parameterization. With\r\nthis information, GMAT calculates the rotation from the inertial to\r\nthe body axes and the angular velocity of the body with respect to\r\nthe inertial frame, expressed in the body frame,\r\n$\\{\\mathbf{\\boldsymbol\\omega}_{IB}\\}_B$.\r\n\r\nGMAT calculates the rotation matrix from $\\mathcal{F}_i$ to\r\n$\\mathcal{F}_B$, $\\mathbf{R}_{Bi}$, from the initial conditions\r\nprovided by the user.  For CSFixed mode, $\\mathbf{R}_{Bi}$ is\r\nconstant and is stored for use in the equations below.  Knowing\r\n$\\mathbf{R}_{Bi}$, we can calculate the rotation matrix from the\r\ninertial frame to the body frame, $\\mathbf{R}_{BI}$, using the\r\nfollowing equation:\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R}_{BI} = \\mathbf{R}_{Bi}\\mathbf{R}_{iI}\r\n     \\label{Eq:CSFixedRotationMatrix}\r\n\\end{equation}\r\n%\r\n$\\mathbf{R}_{iI}$ is the rotation matrix from $\\mathcal{F}_I$ to\r\n$\\mathcal{F}_i$ and GMAT knows how to calculate this matrix for all\r\nallowable $\\mathcal{F}_i$.  
For details on the calculation of this\r\nmatrix for all coordinate systems in GMAT, see\r\nCh.~\\ref{Ch:CoordinateSystems}.\r\n\r\nTo calculate $\\{\\mathbf{\\boldsymbol\\omega}_{IB}\\}_B$, we start from\r\nEuler's equation:\r\n%\r\n\\begin{equation}\r\n   \\dot{\\mathbf{R}}_{BI} =\r\n   -\\{\\mathbf{\\boldsymbol\\omega^\\times}_{IB}\\}_B\\mathbf{R}_{BI}\r\n   \\label{Eq:CSFixedKinematics}\r\n\\end{equation}\r\n%\r\nwhere\r\n%\r\n\\begin{equation}\r\n       \\{ \\mathbf{\\boldsymbol\\omega^\\times}_{IB}\\}_B = \\begin{pmatrix}\r\n     0 & -\\omega_3 & \\omega_2\\\\\r\n     \\omega_3 & 0 & -\\omega_1\\\\\r\n     -\\omega_2 & \\omega_1 & 0\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nand $\\{\\mathbf{\\boldsymbol\\omega}_{IB}\\}_B$ is the angular velocity of\r\n$\\mathcal{F}_B$ with respect to $\\mathcal{F}_I$, expressed in\r\n$\\mathcal{F}_B$.\r\n%\r\nSolving Eq.~(\\ref{Eq:CSFixedKinematics}) for\r\n$\\{\\mathbf{\\boldsymbol\\omega}_{IB}^\\times\\}_B$ we obtain\r\n%\r\n\\begin{equation}\r\n   \\{\\mathbf{\\boldsymbol\\omega^\\times}_{IB}\\}_B =  -\\dot{\\mathbf{R}}_{BI}\r\n   \\mathbf{R}_{BI}^{T}\r\n   \\label{Eq:CSFixedKinematics2}\r\n\\end{equation}\r\n%\r\nTaking the derivative of Eq.~(\\ref{Eq:CSFixedRotationMatrix}) with\r\nrespect to time yields\r\n%\r\n\\begin{equation}\r\n     {\\mathbf{\\dot R}_{BI}} = \\mathbf{R}_{Bi}{\\mathbf{\\dot\r\n     R}_{iI}}\\label{Eq:CSFixedTimeDerivative}\r\n\\end{equation}\r\n%\r\nbecause by definition, for the CSFixed mode,  $ \\mathbf{\\dot R}_{Bi}\r\n= \\mathbf{0}$. Substituting Eq.~(\\ref{Eq:CSFixedTimeDerivative})\r\ninto Eq.~(\\ref{Eq:CSFixedKinematics2}) we obtain\r\n%\r\n%\r\n\\begin{equation}\r\n   \\{\\mathbf{\\boldsymbol\\omega^\\times}_{IB}\\}_B =  -\\mathbf{R}_{Bi}{\\mathbf{\\dot\r\n     R}_{iI}} \\mathbf{R}_{BI}^{T}\r\n   \\label{Eq:CSFixedKinematics3}\r\n\\end{equation}\r\n%\r\nwhere $\\mathbf{R}_{Bi}$ is known from user input, and\r\n$\\mathbf{R}_{BI}$ is known from Eq.~(\\ref{Eq:CSFixedRotationMatrix}).\r\nGMAT knows how to calculate $\\mathbf{\\dot{R}}_{iI}$ for all\r\nallowable $\\mathcal{F}_i$ and details are contained in\r\nCh.~\\ref{Ch:CoordinateSystems}.\r\n\r\nIn summary, in CSFixed mode, Eq.~(\\ref{Eq:CSFixedRotationMatrix}) is\r\nused to calculate $\\mathbf{R}_{BI}$, and\r\nEq.~(\\ref{Eq:CSFixedKinematics3}) is used to calculate\r\n$\\{\\mathbf{\\boldsymbol\\omega}^{\\times}_{IB}\\}_B$.  If another attitude\r\nparameterization is required, GMAT uses the algorithms in\r\nSec.~\\ref{Sec:AttitudeParameterizations} to transform from\r\n$\\mathbf{R}_{BI}$ and $\\{\\mathbf{\\boldsymbol\\omega}_{IB}\\}_B$ to the\r\nrequired parameterization.
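\r\n\r\nAs a simple sanity check of these formulas, suppose $\\mathcal{F}_i$\r\nis itself the inertial frame, so that $\\mathbf{R}_{iI} = \\mathbf{I}_3$\r\nand $\\mathbf{\\dot R}_{iI} = \\mathbf{0}$.  Then\r\nEq.~(\\ref{Eq:CSFixedRotationMatrix}) gives $\\mathbf{R}_{BI} =\r\n\\mathbf{R}_{Bi}$, a constant matrix, and\r\nEq.~(\\ref{Eq:CSFixedKinematics3}) gives\r\n$\\{\\mathbf{\\boldsymbol\\omega}^{\\times}_{IB}\\}_B = \\mathbf{0}$: the\r\nspacecraft is inertially fixed, as expected.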
\r\n\r\nNow let's look at the spinning spacecraft mode.\r\n\r\n\\subsubsection{Spinning Spacecraft Mode}\r\n\r\nIn spinning spacecraft mode, GMAT propagates the attitude by\r\nassuming the spin axis direction is fixed in inertial space.  The\r\nspacecraft attitude at some time, $t$, is determined from the\r\nattitude initial conditions, the angular velocity vector, and the\r\nelapsed time from the initial spacecraft epoch.  Let's take a closer\r\nlook at the calculations.\r\n\r\nIn the spinning spacecraft mode, the user provides three pieces of\r\ninformation.  They first choose a coordinate system,\r\n$\\mathcal{F}_i$, in which to define the initial conditions.\r\nSecondly, they define the initial orientation with respect to\r\n$\\mathcal{F}_i$ by providing $\\mathbf{R}_{Bi}$ or an equivalent\r\nparameterization that is then converted to the DCM.  The user also\r\nprovides the angular velocity of the body axes with respect to the\r\ninertial axes expressed in $\\mathcal{F}_i$, $\\{\r\n\\boldsymbol\\omega_{IB}\\}_i$.\r\n\r\nTo calculate $\\mathbf{R}_{BI}(t)$ where $t$ is an arbitrary epoch,\r\nwe begin by calculating $\\mathbf{R}_{B_{o}I}$ where\r\n$\\mathbf{R}_{B_{o}I} = \\mathbf{R}_{BI}(t_o)$.  We calculate\r\n$\\mathbf{R}_{B_{o}I}$ using\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R}_{B_{o}I} =  \\mathbf{R}_{Bi}\\mathbf{R}_{iI}(t_o)\r\n\\end{equation}\r\n%\r\nwhere $\\mathbf{R}_{Bi}$ comes from user provided data, and\r\n$\\mathbf{R}_{iI}(t_o)$ is calculated by GMAT and is dependent upon\r\n$\\mathcal{F}_i$.  See Ch.~\\ref{Ch:CoordinateSystems} for details on\r\nhow GMAT calculates $\\mathbf{R}_{iI}$ for all allowable coordinate\r\nsystems in GMAT.\r\n\r\nBefore calculating $\\mathbf{R}_{BI}(t)$ we must determine the spin\r\naxis in the body frame, $\\{\\boldsymbol\\omega_{IB}\\}_B$.  The user\r\nprovides $\\{\\boldsymbol\\omega_{IB}\\}_i$.  In spinning mode we assume\r\nthe spin axis direction is constant in inertial space and in the\r\nbody frame, so $\\{ \\boldsymbol\\omega_{IB} \\}_B (t) = \\{\r\n\\boldsymbol\\omega_{IB} \\}_B (t_o) = \\{ \\boldsymbol\\omega_{IB} \\}_B$.\r\nWe can find the spin axis in the body frame using\r\n$\\mathbf{R}_{Bi}$ as follows\r\n%\r\n\\begin{equation}\r\n      \\{ \\boldsymbol\\omega_{IB}\\}_B = \\mathbf{R}_{Bi} \\{ \\boldsymbol\\omega_{IB}\\}_i\r\n\\end{equation}\r\n%\r\nOnce calculated, GMAT saves $\\mathbf{R}_{B_{o}I}$ and $\\{\r\n\\boldsymbol\\omega_{IB}\\}_B$ for use in calculating the attitude\r\norientation and rate at other epochs.\r\n\r\nGMAT calculates $\\mathbf{R}_{BI}(t)$ using the Euler axis/angle\r\nrotation algorithm in Sec. \\ref{Sec:AttitudeParameterizations}. The\r\nEuler axis is simply the unitized angular velocity vector, or\r\n%\r\n\\begin{equation}\r\n     \\mathbf{a} =   \\frac{  \\{ \\boldsymbol\\omega_{IB} \\}_B  }{\\omega_{IB} }\r\n\\end{equation}\r\n%\r\nwhere\r\n%\r\n\\begin{equation}\r\n     \\omega_{IB} = \\| \\{\\boldsymbol{\\omega}_{IB} \\}_B \\|\r\n\\end{equation}\r\n%\r\nThe Euler angle $\\phi$ is calculated using\r\n%\r\n\\begin{equation}\r\n    \\phi(t) = \\omega_{IB}(t -t_o)\r\n\\end{equation}\r\n%\r\nwhere $t$ is the current epoch, and $t_o$ is the spacecraft's\r\ninitial epoch.  Let's define the rotation matrix that results from\r\nthe Euler axis/angle rotation using $\\mathbf{a}$ and $\\phi(t)$, as\r\n$\\mathbf{R}_{BB_{o}}(t)$.  We can calculate $\\mathbf{R}_{BI}(t)$\r\nusing\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R}_{BI}(t) =\r\n     \\mathbf{R}_{BB_{o}}(t)\\mathbf{R}_{B_{o}I}\r\n\\end{equation}\r\n%\r\n\r\nTo summarize, in spinning mode the user provides $\\mathbf{R}_{Bi}$\r\nand  $\\{ \\boldsymbol\\omega_{IB}\\}_i$.  
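As a simple special case, if $\\mathcal{F}_i$ is the inertial frame,\r\n$\\mathbf{R}_{Bi} = \\mathbf{I}_3$, and $\\{\\boldsymbol\\omega_{IB}\\}_i =\r\n(0,\\ 0,\\ \\omega)^T$, then $\\mathbf{a} = (0,\\ 0,\\ 1)^T$ and $\\phi(t) =\r\n\\omega(t - t_o)$, and the Euler axis/angle algorithm of\r\nSec.~\\ref{Sec:EulerAxis/AngletoDCM} reduces to a simple rotation\r\nabout the body $z$-axis:\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R}_{BI}(t) = \\begin{pmatrix}\r\n     \\cos{\\phi} & \\sin{\\phi} & 0\\\\\r\n     -\\sin{\\phi} & \\cos{\\phi} & 0\\\\\r\n     0 & 0 & 1\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n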
GMAT assumes that the\r\nspin axis direction is constant, and uses the Euler axis/angle\r\nmethod to propagate the attitude to find $\\mathbf{R}_{BI}$.\r\n\r\nNow let's look at how GMAT performs conversions between the\r\ndifferent attitude parameterizations.\r\n\r\n\\subsection{6 DOF Modelling}\r\n\r\nWhen 6 degree of freedom attitude modelling is implemented, the\r\nstate vector and dynamics will take the following form, where\r\n$\\mathbf{h}$ is the spacecraft angular momentum, $\\mathbf{T}$ is the\r\nnet external torque, and $\\mathbf{I}$ is the moment of inertia\r\ntensor:\r\n\r\n\\begin{equation}\r\n   \\dot{\\mathbf{X}} = \\left[ \\hspace{.05 in} \\dot{\\mathbf{r}}^T \\hspace{.1 in} \\dot{\\mathbf{v}}^T \\hspace{.1 in}  \\dot{\\mathbf{h}}^T \\hspace{.1 in}  \\dot{\\mathbf{q}}^T  \\hspace{.05 in} \\right]^T\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n   \\dot{\\mathbf{h}} = \\mathbf{T} - \\mathbf{\\boldsymbol\\omega}\\times\\mathbf{h}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n   \\dot{\\mathbf{q}} = \\frac{1}{2}\\mathbf{\\Omega}\\mathbf{q}\r\n\\end{equation}\r\n%\r\nwhere\r\n%\r\n\\begin{equation}\r\n    \\mathbf{\\Omega} = \\left(\\begin{array}{cccc}\r\n     0 & \\omega_z  & -\\omega_y & \\omega_x\\\\\r\n     -\\omega_z & 0 & \\omega_x & \\omega_y\\\\\r\n     \\omega_y & -\\omega_x & 0 & \\omega_z\\\\\r\n     -\\omega_x & -\\omega_y & -\\omega_z & 0\\\\\r\n  \\end{array}\\right)\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\mathbf{\\boldsymbol\\omega} = \\mathbf{I}^{-1}\\mathbf{h}\r\n\\end{equation}\r\n\r\n\\subsection{Attitude Parameterizations \\\\ and Conversions}\r\n\\label{Sec:AttitudeParameterizations}\r\n\r\nThis section details how GMAT converts between different attitude\r\nparameterizations.  For each conversion type, any singularities that\r\nmay occur are addressed.  The orientation parameterizations in GMAT\r\ninclude the DCM, Euler Angles, quaternions, and Euler axis/angle.\r\nThe body rate parameterizations include Euler angle rates and\r\nangular velocity.  We begin with the algorithm to transform from the\r\nquaternions to the DCM.\r\n\r\n\\subsubsection{Conversion:  Quaternions to DCM}\\label{sec:AttQuattoR}\r\n\\index{Attitude Parameterization!Quaternions to DCM}\r\n\r\nGiven:  $\\mathbf{q}$, $q_4$\r\n\r\n\\noindent Find:  $\\mathbf{R}$\r\n\r\n\\noindent Name:  \\emph{ToCosineMatrix}\r\n\r\n\\begin{equation}\r\n    \\mathbf{q} = \\left( q_1 \\hspace{.1 in} q_2 \\hspace{.1 in} q_3 \\right)^T\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n     \\mathbf{q}^{\\times} = \\begin{pmatrix}\r\n     0 & -q_3 & q_2\\\\\r\n     q_3 & 0 & -q_1\\\\\r\n     -q_2 & q_1 & 0\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    c = \\frac{1}{q_1^2 + q_2^2 + q_3^2 + q_4^2}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R} = c\\left[ (q_4^2 - \\mathbf{q}^T\\mathbf{q})\\mathbf{I}_3 +\r\n      2\\mathbf{q}\\mathbf{q}^T -2q_4\\mathbf{q}^{\\times}\\right]\r\n\\end{equation}\r\n\r\n
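As a quick check of this formula, consider the quaternion\r\n$\\mathbf{q} = (0,\\ 0,\\ \\sin(\\phi/2))^T$, $q_4 = \\cos(\\phi/2)$.  Then\r\n$c = 1$, and the formula above yields\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R} = \\begin{pmatrix}\r\n     \\cos{\\phi} & \\sin{\\phi} & 0\\\\\r\n     -\\sin{\\phi} & \\cos{\\phi} & 0\\\\\r\n     0 & 0 & 1\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nwhich is the same rotation about the $3$-axis obtained from the Euler\r\naxis/angle formula with $\\mathbf{a} = (0,\\ 0,\\ 1)^T$.\r\n\r\n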
\\subsubsection{Conversion:  DCM to Quaternions} \\label{sec:DCMtoQuat}\r\n\\index{Attitude Parameterization!DCM to Quaternions}\r\n\r\nGiven:  $\\mathbf{R}$\r\n\r\n\\noindent Find: $\\mathbf{q}$, $q_4$\r\n\r\nDefine the following vector\r\n%\r\n\\begin{equation}\r\n   \\mathbf{v} = [ \\hspace{.02 in} R_{11} \\hspace{.1 in} R_{22}\\hspace{.1 in}\r\n   R_{33} \\hspace{.1 in}  \\mbox{trace}(\\mathbf{R}) \\hspace{.02 in}]\r\n\\end{equation}\r\n%\r\nand define $i_m$ as the index of the maximum component of $\\mathbf{v}$.\r\n%\r\n\r\n\\noindent if $i_m = 1$\r\n%\r\n\\begin{equation}\r\n     \\mathbf{q}''  = \\begin{pmatrix}\r\n     2v_{i_m} + 1 - \\mbox{trace}(\\mathbf{R})\\\\\r\n     R_{12} + R_{21}\\\\\r\n     R_{13} + R_{31}\\\\\r\n     R_{23} - R_{32}\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nif $i_m = 2$\r\n%\r\n\\begin{equation}\r\n     \\mathbf{q}''  = \\begin{pmatrix}\r\n     R_{21} + R_{12}\\\\\r\n     2v_{i_m} + 1 - \\mbox{trace}(\\mathbf{R})\\\\\r\n     R_{23} + R_{32}\\\\\r\n     R_{31} - R_{13}\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nif $i_m = 3$\r\n%\r\n\\begin{equation}\r\n     \\mathbf{q}''  = \\begin{pmatrix}\r\n     R_{31} + R_{13}\\\\\r\n     R_{32} + R_{23}\\\\\r\n     2v_{i_m} + 1 - \\mbox{trace}(\\mathbf{R})\\\\\r\n     R_{12} - R_{21}\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nif $i_m = 4$\r\n%\r\n\\begin{equation}\r\n     \\mathbf{q}''  = \\begin{pmatrix}\r\n     R_{23} - R_{32}\\\\\r\n     R_{31} - R_{13}\\\\\r\n     R_{12} - R_{21}\\\\\r\n     1 + \\mbox{trace}(\\mathbf{R})\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nWe normalize $\\mathbf{q}''$ using\r\n%\r\n\\begin{equation}\r\n    \\mathbf{q}' = \\frac{\\mathbf{q}''}{\\| \\mathbf{q}'' \\|}\r\n\\end{equation}\r\n%\r\nFinally,\r\n%\r\n\\begin{equation}\r\n   \\mathbf{q} = [\\hspace{.05 in} q_{1}' \\hspace{.2 in} q_{2}' \\hspace{.2 in} q_3'\r\n   \\hspace{.05 in}]^T\r\n\\end{equation}\r\n%\r\nand\r\n%\r\n\\begin{equation}\r\n     q_4 = q_4'\r\n\\end{equation}\r\n%Note: There is not a unique quaternion for a given DCM.  GMAT\r\n%assumes that the ``+" sign is used in Eq.~(\\ref{Eq:q_4}).\r\n\r\n\\subsubsection{Conversion:  DCM to Euler\\\\ Axis/Angle}\r\n\r\n\\index{Attitude Parameterization!DCM to Axis/Angle}\r\n\r\nGiven:  $\\mathbf{R}$\r\n\r\n\\noindent Find: $\\mathbf{a}$, $\\phi$\r\n\r\n\\begin{equation}\r\n     \\mathbf{R}  = \\begin{pmatrix}\r\n     R_{11} & R_{12} & R_{13}\\\\\r\n     R_{21} & R_{22} & R_{23}\\\\\r\n     R_{31} & R_{32} & R_{33}\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n   \\phi = \\cos^{-1}\\left( \\frac{1}{2}\\left(\\mbox{trace}(\\mathbf{R}) -\r\n   1 \\right)\\right)\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\mathbf{a} = \\frac{1}{2\\sin{\\phi}}\\begin{pmatrix}\r\n     R_{23} - R_{32}\\\\\r\n     R_{31} - R_{13}\\\\\r\n     R_{12} - R_{21}\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nIf $\\left|\\sin{\\phi}\\right| < 10^{-14}$, then we assume\r\n%\r\n\\begin{equation}\r\n    \\mathbf{a} = \\left[\\hspace{.05 in} 1 \\hspace{.1 in} 0 \\hspace{.1 in}\r\n    0 \\hspace{.05 in} \\right]^T\r\n\\end{equation}\r\n%\r\nNote that $\\left|\\sin{\\phi}\\right| < 10^{-14}$ occurs when $\\phi \\approx 0$,\r\nin which case $\\cos{\\phi} \\approx 1$, the DCM is approximately\r\n$\\mathbf{I}_3$, and the choice of axis is arbitrary, or when\r\n$\\phi \\approx 180^{\\circ}$.\r\n\r\n\r\n\\subsubsection{Conversion:  Euler Axis/Angle to DCM} \\index{Attitude\r\nParameterization!Axis/Angle to DCM} \\label{Sec:EulerAxis/AngletoDCM}\r\n\r\nGiven:  $\\mathbf{a}$, $\\phi$\r\n\r\n\\noindent Find: $\\mathbf{R}$\r\n\r\n\\begin{equation}\r\n     \\mathbf{a}^{\\times} = \\begin{pmatrix}\r\n     0 & -a_3 & a_2\\\\\r\n     a_3 & 0 & -a_1\\\\\r\n     -a_2 & a_1 & 0\\\\\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\mathbf{R} = \\cos{\\phi}\\mathbf{I}_3 + (1 -\r\n    \\cos{\\phi})\\mathbf{a}\\mathbf{a}^T -\r\n    \\sin{\\phi}\\mathbf{a}^{\\times}\r\n\\end{equation}\r\n\r\n\\subsubsection{Conversion:  Euler Angles to DCM}\r\n\\label{sec:AttEulerAnglestoDCM}\r\n\r\nGiven:  Sequence order (i.e.\\ 123, 121, \\dots, 313),  $\\theta_1$,\r\n$\\theta_2$, $\\theta_3$\r\n\r\n\\noindent Find: $\\mathbf{R}$\r\n\r\nWe'll give an example for a 321 rotation, and then present results\r\nfor the remaining 11 Euler angle sequences.  
First, let's define\r\n$\\mathbf{R}_3(\\theta_1)$, $\\mathbf{R}_2(\\theta_2)$, and\r\n$\\mathbf{R}_1(\\theta_3)$.\r\n%\r\n\\begin{equation}\r\n    \\mathbf{R}_3(\\theta_1) =  \\begin{pmatrix}\r\n      \\cos{\\theta_1}  & \\sin{\\theta_1} & 0    \\\\\r\n      -\\sin{\\theta_1} & \\cos{\\theta_1} & 0    \\\\\r\n      0               & 0              & 1\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\mathbf{R}_2(\\theta_2) =  \\begin{pmatrix}\r\n      \\cos{\\theta_2}  & 0              & -\\sin{\\theta_2}    \\\\\r\n      0               & 1              & 0    \\\\\r\n      \\sin{\\theta_2}  & 0              & \\cos{\\theta_2}\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\mathbf{R}_1(\\theta_3) =  \\begin{pmatrix}\r\n      1               & 0                & 0    \\\\\r\n      0               & \\cos{\\theta_3}   & \\sin{\\theta_3}    \\\\\r\n      0               & -\\sin{\\theta_3}  & \\cos{\\theta_3}\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nNow we can write\r\n%\r\n\\begin{eqnarray}\r\n       &\\mathbf{R}_{321} =  \\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1) = \\nonumber\\\\\r\n      &=         \\begin{pmatrix}\r\n      1               & 0                & 0    \\\\\r\n      0               & c_3   & s_3    \\\\\r\n      0               & -s_3  & c_3\r\n     \\end{pmatrix}\r\n     %\r\n      \\begin{pmatrix}\r\n      c_2  & 0              & -s_2    \\\\\r\n      0               & 1              & 0    \\\\\r\n      s_2  & 0              & c_2\r\n     \\end{pmatrix}\r\n     %\r\n     \\begin{pmatrix}\r\n      c_1   & s_1 & 0    \\\\\r\n      -s_1  & c_1 & 0    \\\\\r\n      0               & 0              & 1\r\n     \\end{pmatrix}\\nonumber\\\\\r\n\\end{eqnarray}\r\n%\r\nwhere $c_1 =\\cos{\\theta_1}$, $s_1 = \\sin{\\theta_1}$ etc.  We can\r\nrewrite $\\mathbf{R}_{321} $ as\r\n%\r\n\\begin{equation}\r\n       \\mathbf{R}_{321}  =\\begin{pmatrix}\r\n      c_2c_1              &  c_2s_1             & -s_2 \\\\\r\n     -c_3s_1 + s_3s_2c_1  &  c_3c_1 + s_3s_2s_1 & s_3c_2 \\\\\r\n     s_3s_1 + c_3s_2c_1   &  -s_3c_1 +c_3s_2s_1 & c_3c_2\r\n     \\end{pmatrix} \\label{Eq:R321}\r\n\\end{equation}\r\n%\r\n\r\nThe approach is similar for the remaining 11 Euler angle sequences.\r\nRather than derive each of these DCM matrices,\r\nwe present them in Table \\ref{table:EulerAnglestoDCM}.\r\n\r\n\\subsubsection{Conversion:  DCM to Euler Angles}\r\n\\label{sec:AttDCMtoEulerAngles}\r\n\r\nGiven:  Sequence order (i.e.\\ 123, 121, \\dots, 313), $\\mathbf{R}$\r\n\r\n\\noindent Find:  $\\theta_1$, $\\theta_2$, $\\theta_3$\r\n\r\nWe'll give an example for a 321 rotation, and then present results\r\nfor the remaining 11 Euler angle sequences.  Examining\r\nEq.~(\\ref{Eq:R321}), we see that\r\n%\r\n\\begin{equation}\r\n     \\frac{ R_{12} }  { R_{11}  } = \\frac{  \\cos{\\theta_2}\\sin{\\theta_1}     }\r\n                                 {  \\cos{\\theta_2}\\cos{\\theta_1}    }\r\n\\end{equation}\r\n%\r\nFrom this we can see that\r\n%\r\n\\begin{equation}\r\n    \\theta_1 =  \\tan^{-1}{\\frac{ R_{12} }  { R_{11}  }}\r\n\\end{equation}\r\n%\r\nFurther inspection of Eq.~(\\ref{Eq:R321}) shows us that\r\n%\r\n\\begin{equation}\r\n    \\theta_2 = \\sin^{-1}{(-R_{13})}\r\n\\end{equation}\r\n%\r\nAt first glance, we may choose to calculate $\\theta_3$ using\r\n$\\theta_3 = \\tan^{-1}{(R_{23}/R_{33})}$.  However, in the case that
However, in the case that\r\n$\\theta_2 = 90^\\circ$, this would result in the indeterminate case,\r\n$\\theta_3 =$ $\\tan^{-1}(R_{23}/R_{33})$ $= \\tan^{-1}(0/0)$.  An\r\nimproved method, found in the ADEAS mathematical specifications\r\ndocument, is to determine $\\theta_3$ using\r\n%\r\n\\begin{equation}\r\n    \\theta_3 = \\tan^{-1} \\left(\\frac{ R_{31} \\sin{\\theta_1} - R_{32} \\cos{\\theta_1} }\r\n    { -R_{21} \\sin{\\theta_1} + R_{22} \\cos{\\theta_1}} \\right)\r\n    \\label{Eq:Rtotheta3}\r\n\\end{equation}\r\n%\r\nSubstituting values from Eq.~(\\ref{Eq:R321}) into\r\nEq.~(\\ref{Eq:Rtotheta3}), and using abbreviated notation, we see\r\nthat\r\n%\r\n\\begin{equation}\r\n     \\theta_3 = \\tan^{-1} \\left(  \\frac{ s_1( s_3s_1 + c_3s_2c_1) - c_1(-s_3c_1 + c_3s_2s_1 )}\r\n    { s_1(c_3s_1 - s_3s_2c_1  ) + c_1( c_3c_1 + s_3s_2s_1 ) }  \\right)\r\n\\end{equation}\r\n%\r\nNow, if $\\theta_2 = 90^\\circ$, and we substitute $c_2 = 0$ and $s_2\r\n= 1$ into the above equation, we see we get a determinate form.\r\nResults for all twelve Euler Sequences are shown in Table\r\n\\ref{table:DCMtoEulerAngles}.\r\n\r\n\\noindent Note:  All $\\tan^{-1}$ use a quadrant check ( equaivalent\r\nto atan2 ) to make sure the the correct quadrant is chosen.\r\n\r\n\\subsubsection{Conversion:  Angular Velocity to \\\\ Euler Angles\r\nRates}\r\n\r\nGiven:  Sequence ( i.e. 123, 121, .... 313), $\\theta_2$,\r\n$\\theta_3$ $\\boldsymbol\\omega$\r\n\r\n\\noindent Find: $\\dot\\theta_1$, $\\dot\\theta_2$, $\\dot\\theta_3$\r\n%\r\n\\begin{equation}\r\n    \\begin{pmatrix}\r\n         \\dot\\theta_1\\\\\r\n         \\dot\\theta_2\\\\\r\n         \\dot\\theta_3\r\n    \\end{pmatrix}\r\n    %\r\n    = \\mathbf{S}^{-1}(\\theta_2,\\theta_3)\\boldsymbol\\omega\r\n\\end{equation}\r\n%\r\n$\\mathbf{S}^{-1}(\\theta_2,\\theta_3)$ is dependent upon the Euler\r\nsequence.  Table \\ref{table:EulerAngleKinematics} contains the\r\ndifferent expressions for $\\mathbf{S}^{-1}(\\theta_2,\\theta_3)$ for\r\neach of the 12 unique Euler sequences.\r\n\r\nNote:  Each of the forms of $\\mathbf{S}^{-1}$ have a possible\r\nsingularity due to the appearance of either $\\sin{\\theta_2}$ or\r\n$\\cos{\\theta_2}$ in the denominator.  If GMAT encounters a\r\nsingularity, an error message is thrown, and the zero vector is\r\nreturned.\r\n\r\n\\subsubsection{Conversion:  Euler Angles Rates to Angular Velocity}\r\n\r\n\\noindent Given: Sequence ( i.e. 123, 121, .... 313), $\\theta_2$,\r\n$\\theta_3$, $\\dot\\theta_1$, $\\dot\\theta_2$, $\\dot\\theta_3$\r\n\r\n\\noindent Find: $\\boldsymbol\\omega$\r\n%\r\n\\begin{equation}\r\n    \\boldsymbol\\omega = \\mathbf{S}(\\theta_2,\\theta_3)\r\n    %\r\n        \\begin{pmatrix}\r\n         \\dot\\theta_1\\\\\r\n         \\dot\\theta_2\\\\\r\n         \\dot\\theta_3\r\n    \\end{pmatrix}\r\n\\end{equation}\r\n%\r\n$\\mathbf{S}(\\theta_2,\\theta_3)$ is dependent upon the Euler\r\nsequence.  Table \\ref{table:EulerAngleKinematics} contains the\r\ndifferent expressions for $\\mathbf{S}^{-1}(\\theta_2,\\theta_3)$ for\r\neach of the 12 unique Euler sequences.\r\n\r\n\\subsubsection{Conversion:  Quaternions to Euler Angles}\r\n\r\n\\noindent Given: $\\mathbf{q}$, $q_4$, Euler Sequence\r\n\r\n\\noindent Find: $\\theta_1$, $\\theta_2$, and $\\theta_3$\r\n\r\nThere is not a direct transformation to convert from the quaternions\r\nto the Euler Angles.  GMAT first converts from the quaternion to the\r\nDCM using the algorithm in Sec. \\ref{sec:AttQuattoR}.  
The DCM is\r\nthen used to calculate the Euler Angles for the given Euler angle\r\nsequence using the algorithm in Sec. \\ref{sec:AttDCMtoEulerAngles}.\r\n%\\section{Attitude Initial Conditions}\r\n\r\n\\subsubsection{Conversion:  Euler Angles to Quaternions}\r\n\r\n\\noindent Given: $\\theta_1$, $\\theta_2$, and $\\theta_3$, Euler\r\nSequence\r\n\r\n\\noindent Find: $\\mathbf{q}$, $q_4$\r\n\r\nThere is not a direct transformation to convert from Euler Angles to\r\nquaternions.  GMAT first converts from the Euler Angles to the DCM\r\nusing the algorithm in Sec. \\ref{sec:AttEulerAnglestoDCM}.  The DCM\r\nis then used to calculate the quaternions using the algorithm in\r\nSec. \\ref{sec:DCMtoQuat}.\r\n%\\section{Attitude Initial Conditions}\r\n\r\n\r\n\r\n\\subsubsection{Euler Angles to $\\mathbf{R}$ Matrix}\r\n\r\n\r\n\r\n\\onecolumn\r\n \\begin{table}[h]\r\n        \\centering\r\n        \\vspace{0 pt}\r\n        \\caption{Rotation Matrices for 12 Unique Euler Angle Rotation Sequences}\r\n        \\begin{tabular}{clccccccc}  \\hline \\hline\r\n        \\\\\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_3c_2    & c_3 s_2 s_1 + s_3 c_1  & -c_3 s_2 c_1 + s_1 s_3\\\\\r\n     -s_3 c_2  & -s_3 s_2 s_1 + c_3 c_1 & s_3 s_2 c_1 + c_3 s_1\\\\\r\n     s_2       & -c_2 s_1               & c_2 c_1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_3c_2   &   c_3 s_2 c_1 + s_1 s_3  &  c_3s_2s_1-s_3c_1 \\\\\r\n      -s_2    &   c_2c_1                 &  c_2s_1\\\\\r\n      s_3c_2  &   s_3s_2c_1 - c_3s_1     &  s_3s_2s_1 + c_3c_1 \\\\\r\n     \\end{pmatrix}   \\vspace{.1 in}$\\\\\r\n     %\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_2c_1               &    s_2     &   -c_2s_1\\\\\r\n     -c_3s_2c_1 + s_3 s_1 & c_3c_2     & c_3s_2s_1 + s_3c_1     \\\\\r\n     s_3s_2c_1 +c_3s_1    &  -s_3c_2   &  -s_3s_2s_1 + c_3 c_1  \\\\\r\n    \\end{pmatrix}   \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_3c_1 + s_3s_2s_1     &  s_3c_2    & -c_3s_1 + s_3s_2c_1    \\\\\r\n      -s_3c_1 + c_3 s_2s_1   &  c_3c_2    & s_3s_1 + c_3s_2c_1    \\\\\r\n      c_2s_1                 &  -s_2      & c_2c_1\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_3c_1 - s_3s_2s_1     & c_3s_1 + s_3s_2c_1 & -s_3c_2 \\\\\r\n      -c_2s_1                & c_2c_1             &  s_2  \\\\\r\n      s_3c_1 + c_3s_2s_1     & s_3s_1 - c_3s_2c_1 & c_3c_2\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_2c_1              &  c_2s_1             & -s_2 \\\\\r\n     -c_3s_1 + s_3s_2c_1  &  c_3c_1 + s_3s_2s_1 & 
s_3c_2 \\\\\r\n     s_3s_1 + c_3s_2c_1   &  -s_3c_1 +c_3s_2s_1 & c_3c_2\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_2                  & s_2s_1              & -s_2c_1 \\\\\r\n     s_3s_2               & c_3c_1 - s_3c_2s_1  & c_3s_1 + s_3c_2c_1\\\\\r\n     c_3s_2               & -s_3c_1 - c_3c_2s_1 & -s_3s_1 +c_3c_2c_1\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n          %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_2                  & s_2c_1              & s_2s_1 \\\\\r\n     -c_3s_2              & c_3c_2c_1 - s_3s_1  & c_3c_2s_1 + s_3c_1\\\\\r\n     s_3s_2               & -s_3c_2c_1 - c_3s_1 & -s_3c_2s_1 +   c_3c_1\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n               %\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_3c_1 - s_3c_2s_1   &  s_3s_2    &  -c_3s_1 -s_3c_2c_1\\\\\r\n     s_2s_1               &  c_2       &  s_2c_1            \\\\\r\n     s_3c_1 + c_3c_2s_1   &  -c_3s_2   &  -s_3s_1 + c_3c_2c_1\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                    %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n     c_3c_2c_1 - s_3s_1   &  c_3s_2  & -c_3c_2s_1 - s_3c_1\\\\\r\n     -s_2c_1              &  c_2     & s_2s_1             \\\\\r\n     s_3c_2c_1 + c_3s_1   &  s_3s_2  & -s_3c_2s_1 + c_3c_1\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                         %\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_3c_1 - s_3c_2s_1 & c_3s_1 + s_3c_2c_1  &  s_3s_2\\\\\r\n     -s_3c_1 - c_3c_2s_1 & -s_3s_1 + c_3c_2c_1 &  c_3s_2\\\\\r\n      s_2s_1             &  -s_2c_1            &  c_2\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                              %\r\n     %------------------------------------\r\n     %------------------------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $=\\begin{pmatrix}\r\n      c_3c_2c_1-s_3s_1     & c_3c_2s_1 + s_3c_1  &  -c_3s_2\\\\\r\n      -s_3c_2c_1 - c_3s_1  & -s_3c_2s_1 + c_3c_1 &  s_3s_2\\\\\r\n      s_2c_1               & s_2s_1              &  c_2\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n         \\hline \\hline\r\n        \\end{tabular}\r\n        \\label{table:EulerAnglestoDCM}\r\n\\end{table}\r\n\\twocolumn\r\n\r\n\\subsubsection{Euler Rates and Angular Velocity} \\onecolumn\r\n \\begin{table}[h]\r\n        \\centering\r\n        \\vspace{0 pt}\r\n        \\caption{Kinematics of Euler Angle Rotation Sequences}\r\n        \\begin{tabular}{cllcccccc}  \\hline \\hline\r\n        Euler Sequence & $\\mathbf{S}(\\theta_2,\\theta_3)$ &\r\n        $\\mathbf{S}^{-1}(\\theta_2,\\theta_3)$\\\\\r\n        \\hline\r\n        \\\\\r\n        %------------------\r\n        %------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n        c3c2&          s3&                0\\\\\r\n       -s3c2&          c3&                0\\\\\r\n  
        s2&                0&                1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n                  c3/c2&         -s3/c2&                        0\\\\\r\n                  s3&                  c3&                        0\\\\\r\n                  -s2c3/c2&  s3s2/c2& 1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n        %------------------\r\n        %------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n         c3c2&        -s3&               0\\\\\r\n        -s2&               0&               1\\\\\r\n         s3c2&         c3&               0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n         c3/c2&                       0&         s3/c2\\\\\r\n           -s3&                       0&                 c3\\\\\r\n          s2c3/c2&                       1& s3s2/c2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n        s2&                0&                1\\\\\r\n       c3c2&          s3&                0\\\\\r\n      -s3c2&          c3&                0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n                        0&          c3/c2&         -s3/c2\\\\\r\n                        0&                  s3&                  c3\\\\\r\n                        1& -s2c3/c2&  s3s2/c2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n         s3c2&         c3&               0\\\\\r\n         c3c2&        -s3&               0\\\\\r\n        -s2&               0&               1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n         s3/c2&         c3/c2&                       0\\\\\r\n                 c3&                -s3&                       0\\\\\r\n s3s2/c2& s2c3/c2&                       1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n       -s3c2&          c3&                0\\\\\r\n          s2&                0&                1\\\\\r\n        c3c2&          s3&                0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n         -s3/c2&                        0&          c3/c2\\\\\r\n                  c3&                        0&                  s3\\\\\r\n         s3s2/c2&                        1& -s2c3/c2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n                -s2&               0&               1\\\\\r\n       s3c2&         c3&               0\\\\\r\n       c3c2&        -s3&               0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n                       0&         s3/c2&         c3/c2\\\\\r\n                       0&                 c3&                -s3\\\\\r\n                       
1& s3s2/c2& s2c3/c2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n          %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n                c2&               0&               1\\\\\r\n       s3s2&         c3&               0\\\\\r\n      c3s2&        -s3&               0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n                        0&          s3/s2&          c3/s2\\\\\r\n                        0&                  c3&                 -s3\\\\\r\n                        1& -s3c2/s2& -c3c2/s2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                    %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n          c2&                0&                1\\\\\r\n       -c3s2&          s3&                0\\\\\r\n        s3s2&          c3&                0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n                        0&         -c3/s2&          s3/s2\\\\\r\n                        0&                  s3&                  c3\\\\\r\n                        1&  c3c2/s2& -s3c2/s2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                    %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n        s3s2&          c3&                0\\\\\r\n          c2&                0&                1\\\\\r\n       -c3s2&          s3&                0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n          s3/s2&                        0&         -c3/s2\\\\\r\n                  c3&                        0&                  s3\\\\\r\n         -s3c2/s2&                        1&  c3c2/s2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                              %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n       c3s2&        -s3&               0\\\\\r\n         c2&               0&               1\\\\\r\n       s3s2&         c3&               0\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n          c3/s2&                        0&          s3/s2\\\\\r\n                 -s3&                        0&                  c3\\\\\r\n        -c3c2/s2&                        1& -s3c2/s2\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n                              %\r\n     %------------------\r\n     %------------------\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n       s3s2&         c3&               0\\\\\r\n       c3s2&        -s3&               0\\\\\r\n         c2&               0&               1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n          s3/s2&          c3/s2&                        0\\\\\r\n                  c3&                 -s3&                        0\\\\\r\n      -s3c2/s2& -c3c2/s2&                        1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n     %\r\n          %------------------\r\n     %------------------\r\n     
$\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\begin{pmatrix}\r\n        -c3s2&          s3&                0\\\\\r\n         s3s2&          c3&                0\\\\\r\n           c2&                0&                1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\r\n     &\r\n     $\\begin{pmatrix}\r\n         -c3/s2&          s3/s2&                        0\\\\\r\n                  s3&                  c3&                        0\\\\\r\n        c3c2/s2& -s3c2/s2&                        1\\\\\r\n     \\end{pmatrix}  \\vspace{.1 in}$\\\\\r\n         \\hline \\hline\r\n        \\end{tabular}\r\n        \\label{table:EulerAngleKinematics}\r\n\\end{table}\r\n\r\n\\begin{table}[h]\r\n        \\centering\r\n        \\vspace{0 pt}\r\n        \\caption{ Computation of Euler Angles from DCM}\r\n        \\begin{tabular}{llll}  \\hline \\hline \\\\\r\n        Euler Sequence & \\multicolumn{3}{l}{Euler Angle Computations} \\\\\r\n        \\hline \\\\\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     \\hspace{.1 in}\r\n     &\r\n      $\\theta_1 =  \\tan^{-1}(-R_{32}/R_{33})$ &\r\n      $\\theta_2 =  \\sin^{-1}(R_{31})$ &\r\n      $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{13}\\sin{\\theta_1}+\r\n      R_{12}\\cos{\\theta_1}}{R_{23}\\sin{\\theta_1}+R_{22}\\cos{\\theta_1}}\\right )$\\vspace{.15 in}\r\n     \\\\\r\n     %\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$ \\hspace{.1 in}\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{23}/R_{22})$ &\r\n         $\\theta_2 =  \\sin^{-1}(-R_{21})$ &\r\n         $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{12}\\sin{\\theta_1}- R_{13}\\cos{\\theta_1}}{-R_{32}\\sin{\\theta_1}+\r\n         R_{33}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\r\n     \\\\\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n    $\\theta_1 =  \\tan^{-1}(-R_{13}/R_{11})$ &\r\n         $\\theta_2 =  \\sin^{-1}(R_{12})$ &\r\n         $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{21}\\sin{\\theta_1}+ R_{23}\\cos{\\theta_1}}{R_{31}\\sin{\\theta_1}+\r\n         R_{33}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\r\n     \\\\\r\n     %\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n         $\\theta_1 =  \\tan^{-1}(R_{31}/R_{33})$ &\r\n         $\\theta_2 =  \\sin^{-1}(-R_{32})$ &\r\n         $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{23}\\sin{\\theta_1}- R_{21}\\cos{\\theta_1}}{-R_{13}\\sin{\\theta_1}+\r\n         R_{11}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\r\n     \\\\\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(-R_{21}/R_{22})$ &\r\n     $\\theta_2 =  \\sin^{-1}(R_{23})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{32}\\sin{\\theta_1}+ R_{31}\\cos{\\theta_1}}{R_{12}\\sin{\\theta_1}+\r\n     R_{11}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\r\n     \\\\\r\n     %\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{12}/R_{11})$ &\r\n     $\\theta_2 =  \\sin^{-1}(-R_{13})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{R_{31}\\sin{\\theta_1}- R_{32}\\cos{\\theta_1}}{-R_{21}\\sin{\\theta_1}+\r\n     R_{22}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     
$\\theta_1 =  \\tan^{-1}(R_{12}/(-R_{13}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{11})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{33}\\sin{\\theta_1}- R_{32}\\cos{\\theta_1}}{R_{23}\\sin{\\theta_1}+\r\n     R_{22}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_1(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_1(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{13}/(R_{12}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{11})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{22}\\sin{\\theta_1} + R_{23}\\cos{\\theta_1}}{-R_{32}\\sin{\\theta_1}+\r\n     R_{33}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{21}/(R_{23}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{22})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{33}\\sin{\\theta_1} + R_{31}\\cos{\\theta_1}}{-R_{13}\\sin{\\theta_1}+\r\n     R_{11}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_2(\\theta_3)\\mathbf{R}_3(\\theta_2)\\mathbf{R}_2(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{23}/(-R_{21}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{22})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{11}\\sin{\\theta_1} - R_{13}\\cos{\\theta_1}}{R_{31}\\sin{\\theta_1}+\r\n     R_{33}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_1(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{31}/(-R_{32}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{33})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{22}\\sin{\\theta_1} - R_{21}\\cos{\\theta_1}}{R_{12}\\sin{\\theta_1}+\r\n     R_{11}\\cos{\\theta_1}}\\right)$ \\vspace{.15 in}\\\\\r\n     %\r\n     $\\mathbf{R}_3(\\theta_3)\\mathbf{R}_2(\\theta_2)\\mathbf{R}_3(\\theta_1)$\r\n     &\r\n     $\\theta_1 =  \\tan^{-1}(R_{32}/(R_{31}))$ &\r\n     $\\theta_2 =  \\cos^{-1}(R_{33})$ &\r\n     $\\theta_3  = \\tan^{-1}\\left(\\displaystyle\\frac{-R_{11}\\sin{\\theta_1} + R_{12}\\cos{\\theta_1}}{-R_{21}\\sin{\\theta_1}+\r\n     R_{22}\\cos{\\theta_1}} \\right)$ \\vspace{.15 in}\\\\\r\n         \\hline \\hline\r\n        \\end{tabular}\r\n        \\label{table:DCMtoEulerAngles}\r\n\\end{table}\r\n", "meta": {"hexsha": "f9bb43673cbd9bc2e7e1da296bbb6e5173998bfd", "size": 42764, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/SystemDocs/MathematicalSpecification/Attitude.tex", "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_issues_repo_path": "doc/SystemDocs/MathematicalSpecification/Attitude.tex", "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_licenses": ["NASA-1.3"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_forks_repo_path": "doc/SystemDocs/MathematicalSpecification/Attitude.tex", "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": ["NASA-1.3"], "max_forks_count": 
3, "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "avg_line_length": 37.6112576957, "max_line_length": 195, "alphanum_fraction": 0.5486858105, "num_tokens": 15104, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5701221207122605}}
{"text": "\\documentclass[preview, border=12pt, varwidth]{standalone}\n\\usepackage{array,tabularx, mathtools, amsmath}\n\n\\newenvironment{conditions*}\n  {\\par\\vspace{\\abovedisplayskip}\\noindent\n   \\tabularx{\\columnwidth}{>{$}l<{$} @{\\ : } >{\\raggedright\\arraybackslash}X}}\n  {\\endtabularx\\par\\vspace{\\belowdisplayskip}}\n\n\\begin{document}\n\n\\section*{(3) Evaporative cooling:}\n  \n\\begin{multline}\n  \\tag{4.1} \\label{eq:4.1}\n    (bd \\rho_w C_w \\frac{\\partial T_w}{\\partial t} + \\dot{m}_w C_w \\frac{\\partial T_w}{\\partial y}) dy = [\\tau_1 S(t) \\\\ -Q_r -Q_c -Q_e + h_1 (\\left. \\theta_1 \\right\\vert_{x=0} - T_w)]bdy \n\\end{multline}\n\n\\begin{align}\n  \\tag{4.2} \\label{eq:4.2}\n    Q_r ={}& h_r(T_w-T_A)  \\\\\n  \\tag{4.3} \\label{eq:4.3}\n    Q_c ={}& h_c(T_w-T_A)  \\\\\n  \\tag{4.4} \\label{eq:4.4}\n    Q_e ={}& 0.013 h_e [p(\\bar{T}_w - \\gamma p(\\bar{T}_A)]\n\\end{align}\n\nwhere:\n\\begin{conditions*}\n    \\enspace \\tau_1  &  fraction of solar radiation absorbed by water \\\\\n    \\enspace Q_r  &  radiative heat \\\\\n    \\enspace Q_c  &  convective heat flux \\\\\n    \\enspace \\rho  &  saturated partial pressure \\\\\n    \\left. \n        \\begin{array}{l} \n            h_r \\\\ h_e \\\\ h_c \\\\ \n        \\end{array} \n    \\right \\}  &  respective heat transfer coefficient \n\\end{conditions*} \\medskip\n\n\\begin{align}\n  \\tag{4.5} \\label{eq:4.5}\n    P(T) &= R_{1}T + R\n\\end{align}\n\non simplification,\n\\begin{align*}\n  \\tag{4.6} \\label{eq:4.6}\n    M_w \\frac{\\partial T_w}{\\partial t} + \\dot{m_w} C_w \\frac{\\partial T_w}{\\partial y} &= bH(T_S-T_w) + bh_1 (\\left. \\theta_1 \\right\\vert_{x=0} - T_w) \\\\\n  \\tag{4.7} \\label{eq:4.7}\n    T_s &= \\frac{1}{H} (\\tau_1S(t) + H_1T_A(t) - R_0R_2(1-r)) \\\\\n  \\tag{4.8} \\label{eq:4.8}\n    H &= h_r + h_c + R_0R_1 \\\\\n  \\tag{4.9} \\label{eq:4.9}\n    H_1 &= h_r+h_c+rR_0R_1 \\\\\n  \\tag{4.10} \\label{eq:4.10}\n    R_0 &= 0.013h_c \\\\\n  \\tag{4.11} \\label{eq:4.11}\n    M_w &= bdC_wS_w \\\\\n  \\tag{4.12} \\label{eq:4.12}\n    \\dot{Q}(y,t) &= h_i[\\theta|_{x=x_4} - T_R]\n\\end{align*}\n\n\\end{document}", "meta": {"hexsha": "20637e7e3a729af4c4583575a8125da2b2784956", "size": 1961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "img/equations/eq4.tex", "max_stars_repo_name": "relaxxpls/Optimize-Heating-Indoors", "max_stars_repo_head_hexsha": "76bea58707337f1daf67de946812a383af12e6e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "img/equations/eq4.tex", "max_issues_repo_name": "relaxxpls/Optimize-Heating-Indoors", "max_issues_repo_head_hexsha": "76bea58707337f1daf67de946812a383af12e6e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "img/equations/eq4.tex", "max_forks_repo_name": "relaxxpls/Optimize-Heating-Indoors", "max_forks_repo_head_hexsha": "76bea58707337f1daf67de946812a383af12e6e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.126984127, "max_line_length": 188, "alphanum_fraction": 0.5915349312, "num_tokens": 843, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8311430394931456, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.570122120310197}}
{"text": "%************************************************\n\\chapter{Methods}\\label{ch:methods}\n%************************************************\n\nThis thesis relies on a previously described model of a neuron. It is a single-compartment, conductance-based neuron which is based on the Hogkin-Huxley formalism\\cite{hodgkin_components_1952,hodgkin_measurement_1952,hodgkin_quantitative_1952}. Previous work has utilized this model neuron as a backbone for exploration of homeostatic compensatory mechanisms\\cite{oleary_correlations_2013,oleary_cell_2014,liu_model_1998}.\nThis model represents the neuron as a circuit\\cite{hodgkin_quantitative_1952}. Figure \\ref{fig:hhcircuit} details the circuit diagram of the neuron as a representation of the cell membrane acting as a capacitor with various ion channels as resistors. The membrane takes in current from the outside, which can be described in more detail as the neuron's neuromodulatory environment.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[angle=0.75,width=\\linewidth]{gfx/hodgkinhuxleycircuit.png}\n    \\caption[Circuit diagram representing membrane of a neuron.]{Circuit diagram representing the membrane of a neuron. \\ac{Cm} represents the total capacitance of the neuron. Resistors represent \\(1/\\acs{gNaV}\\), \\(1/\\acs{gK}\\), and \\(1/\\ac{gleak}\\)\\cite{hodgkin_quantitative_1952}.}\n    \\label{fig:hhcircuit}\n\\end{figure}\n\nThe model neuron has eight transmembrane currents and associated conductances: \\acf{INaV}, \\acf{IA}, \\acf{IH}, \\acf{IKCa}, \\acf{IKd}, \\acf{ICaS}, \\acf{ICaT}, and \\acf{Ileak}.\nThe associated maximal conductances of the first seven transmembrane currents are regulated by the integral control scheme mentioned in Section \\ref{homeostaticrules}. \\ac{gleak} is not regulated in this model.\n\n\\section{Neurodynamics} \\label{neurodynamics}\n\nNeurons are considered to consist of a single compartment, acting as a capacitor, with various ionic currents acting as resistors in parallel, and a specific surface area (in units $mm^2$).\nThe compartment has a \\acf{Vm}, which evolves according to the equation:\n\n\\begin{equation} \\label{eq:voltage}\nC_m \\frac{\\mathrm{d}V_m}{\\mathrm{d}t} = \\sum_{i} I_i\n\\end{equation}\n\nwhere $\\acs{Cm}$ is the specific capacitance of the membrane and $I_i$ is each ionic current in the neuron\\cite{gorur-shandilya_xolotl_2018}. Current is given by the equation\n\n\\begin{equation} \\label{eq: current}\n    I_i = g_i(V)(V - E_i)\n\\end{equation}\n\nwhere $g_i(V)$ is instantaneous conductance and $E_i$ is the reversal potential (Nernst potential) for each ion channel\\cite{gorur-shandilya_xolotl_2018}. Reversal potentials for each ion channel, excluding Ca\\textsuperscript{2+}-dependent channels, are considered constant and are described in Section \\ref{modelparameters}. This simplification is justified by the law of large numbers -- intracellular and extracellular ion concentration are saturated, resulting in effectively constant flux of ions in and out of the cell\\cite{liu_model_1998}. 
Instantaneous conductance $g_i(V)$ is defined by the equation:\n\n\\begin{equation} \\label{eq: conductance}\n    g_i(V) = \\bar{g_i} m_i^{p_i} h_i^{q_i}\n\\end{equation}\n\nwhere\n\n\\begin{tabular}{ll}\n    $\\bar{g_i}$     & maximal conductance for each ion channel \\\\\n    $m$             & activation gating variable \\\\\n    $h$             & inactivation gating variable \\\\\n    $p, q$          & integers \\\\\n\\end{tabular}\n\nThe integers $p$ and $q$ are bound by an interval $[0,1]$\\cite{gorur-shandilya_xolotl_2018}. The activation and inactivation variables are themselves defined by an ordinary differential equation which depends on \\ac{Vm}, and is specific to each conductance. The general form is:\n\n\\begin{align}\n    \\tau_m \\frac{\\mathrm{d}m}{\\mathrm{d}t} = m_\\infty - m \\\\\n    \\tau_h \\frac{\\mathrm{d}h}{\\mathrm{d}t} = h_\\infty - h\n\\end{align}\n\nwhere $\\tau_m$ and $\\tau_h$ are time constants and $m_\\infty$ and $h_\\infty$ are steady-state values of the activation and inactivation variables\\cite{prinz_alternative_2003}. Each conductance has a characteristic steady-state density and time constant which represent its function in the overall neurodynamics of the neuron.\n\n\\section{Homeostatic Rules}\\label{homeostaticrules}\n\nRegulation of firing rate activity in this thesis will rely on an activity-dependent feedback mechanism for regulating ion channel maximal conductances\\cite{oleary_correlations_2013,oleary_cell_2014,liu_model_1998,prinz_alternative_2003}.\nThis mechanism relies on calcium as a sensor, by which neurons may regulate their activity. Section \\ref{corr_homeostasis} provides an explanation of the use of calcium ions as an indirect representation of \\ac{Vm} activity.\nCalcium dynamics in this model evolve according to the equation:\n\n\\begin{equation} \\label{eq:calcium}\n    \\tau_{Ca} \\frac{\\mathrm{d}[Ca^{2+}]}{\\mathrm{d}t} = -f(I_{CaT} + I_{CaS}) - [Ca^{2+}] + [Ca^{2+}]_0\n\\end{equation}\n\nwhere $\\tau_{Ca}$ is the time constant, $f$ translates $Ca^{2+}$ current into an intracellular concentration change, and $[Ca^{2+}]_0$ is the steady state intracellular $Ca^{2+}$ concentration if there is no flux of calcium ions across the membrane\\cite{prinz_alternative_2003,liu_model_1998}.\n\nThe activity-dependent feedback mechanism is based on regulation by a calcium-dependent homeostatic rule, first implemented by O'Leary \\textit{et al.}\\cite{oleary_cell_2014}.\nRegulation of maximal conductances is simulated by two integral control equations:\n\n\\begin{align} \\label{eq:integralregulation}\n    \\tau_i \\dot{m_i} &= [Ca^{2+}] - Ca_{tgt} \\\\\n    \\label{eq:integralregulation2}\n    \\tau_g \\dot{g_i} &= m_i - g_i\n\\end{align}\n\nwhere $\\tau_i$ represents the transcription timescale for each ion channel, $\\tau_g$ represents global transcription timescale, $m_i$ represents mRNA levels for each ion channel, and $Ca_{tgt}$ represents global Ca\\textsuperscript{2+} target concentration. Variables are bounded at 0 to ensure that conductances do not become negative.\n\nIn this model, $Ca^{2+}$ acts as a feedback control signal. It is purposefully simple, focusing on the essential biological principles underlying formation and degradation of ion channel proteins. 
Obviously, neurons possess a significant number of other mechanisms of regulation, such as co-trafficking of ion channels, cotranslational interaction, RNA interference, and the use of promoters\\cite{oleary_cell_2014,shi_subunits_1996,vanoye_kcnq1/kcne1_2010,arcangeli_ion_2011,frank_endogenous_2011,zhang_recovery_2011}. Occam's razor dictates that \"the simplest solution is most likely the right one,\" and we will use this as a guideline moving forward.\n\nSteady-state values of $m_i$ are found to be dependent on the time integral of average $[Ca^{2+}]$ and scaled by the inverse of the time constant for the channel $\\tau_i$. We can then estimate steady state $g_i$ for very small initial conductance values and positive time constants. We can see this by taking the integral of Equation \\ref{eq:integralregulation}:\n\n\\begin{equation} \\label{eq:steadystateg}\n    g_i \\approx m_i = \\frac{1}{\\tau_i} \\int_{0}^{t_{ss}} ([Ca^{2+}] - Ca_tgt)dt\n\\end{equation}\n\nFigure \\ref{eq:steadystateg} also shows how correlations between ion channels emerge when $m_i$ converges to the value dependent on the time integral of average $[Ca^{2+}]$. When one takes ratios between channels, the integrals cancel out, so that:\n\n\\begin{equation}\n    \\frac{g_i}{g_j} \\approx \\frac{\\tau_j}{\\tau_i}\n\\end{equation}\n\nIt can be seen here that different ratios of the time constants determine the correlations between ion channel conductances. These ratios can determine the electrophysiological character of the neuron, as is further discussed in Section \\ref{corr_homeostasis}.\n\n\\section{Model Parameters} \\label{modelparameters}\n\nModel parameters are adapted from previous work, and are used as the basis for variation in each experiment presented in this thesis\\cite{prinz_alternative_2003,oleary_cell_2014,liu_model_1998}.\n\nEach ionic current has its own reversal potential. These values dictate the membrane potential at which ions begin to move in the opposite direction.\n\n\\begin{table}[h]\n    \\centering\n    \\begin{tabular}{ccccccccc}\n         \\textsc{Current} & \\ac{INaV} & \\ac{IA} & \\ac{IH} & \\ac{IKd} & \\ac{IKCa} & \\ac{Ileak} \\\\\n         \\hline\n         $E \\mathrm{(mV)}$ & +50 & -80 & -20 & -80 & -80 & -50\n    \\end{tabular}\n    \\caption[Reversal potentials for each ion channel.]{Reversal potential for each ionic current. Reversal potential of \\ac{ICaT} and \\ac{ICaS} are not constant and derived from the Nernst potential.\\cite{prinz_alternative_2003}}\n    \\label{tab:revpotential}\n\\end{table}\n\n\\ac{Vm} evolves with time according to Equation \\ref{eq:voltage} where specific membrane capacitance $C_m = 0.628nF$\\cite{hodgkin_quantitative_1952}.\n\nCa\\textsuperscript{2+} concentration is dynamic, evolving according to Equation \\ref{eq:calcium}, where\n\n   \\begin{tabular}{l}\n     $\\tau_{Ca} = 200ms$ \\\\\n     $f = 14.96 \\mu M/nA$\n    \\end{tabular}\n    \n$\\tau_{Ca}$ is the calcium buffering time constant, and $f$ is the factor which converts calcium current into intracellular calcium concentration change. This factor depends on the ratio of the surface area of the cell to the volume wherein Ca\\textsuperscript{2+} concentration is measured\\cite{liu_model_1998}. In this case, volume is considered to be a narrow shell inside the membrane and approximates the neuron by a cylinder $50 \\mu m$ in diameter and $400 \\mu m$ long. 
The surface area of the cell is 0.0628 mm\\textsuperscript{2} ($\\approx \\pi \\times 0.05\\,mm \\times 0.4\\,mm$)\\cite{liu_model_1998}.\n\nReversal potentials of Ca\\textsuperscript{2+} current types are dependent on voltage and calcium, and are dynamically updated by the Nernst equation\\cite{liu_model_1998}:\n\n\\begin{equation}\n    E_{Ca} = \\frac{RT}{zF} \\ln\\left(\\frac{[Ca^{2+}]_{ext}}{[Ca^{2+}]_{in}}\\right)\n\\end{equation}\n\nwhere $E_{Ca}$ refers to reversal potentials for \\ac{ICaS} and \\ac{ICaT}, gas constant $R = 8.314\\ J/(mol \\cdot K)$, temperature $T = 284.15 K$, ion charge $z = 2$ for calcium ions, Faraday constant $F = 96485 C/mol$, $[Ca^{2+}]_{in}$ is the current intracellular calcium concentration, and the extracellular calcium concentration $[Ca^{2+}]_{ext} = 0.05 \\mu M$\\cite{prinz_alternative_2003}.\n\nVoltage-dependent equations for the model parameters for each current were adapted from \\cite{prinz_alternative_2003}. Each ionic current has its own specific $p$, steady-state activation variable $m_\\infty$, steady-state inactivation variable $h_\\infty$, and time constants $\\tau_m$ and $\\tau_h$, respectively. Ionic currents which are not inactivating do not have values for $h_\\infty$ or $\\tau_h$.\n\nMaximal conductances are dependent on Ca\\textsuperscript{2+} concentration and are regulated by Equations \\ref{eq:integralregulation} and \\ref{eq:integralregulation2}. The time constant for transcriptional rate $\\tau_{m_i}$ is unique for each maximal conductance and defined by:\n\n\\begin{equation}\n    \\tau_{m_i} = \\frac{5e6}{g_{\\infty_i}}\n\\end{equation}\n\nwhere $\\tau_{m_i}$ is in milliseconds and $g_{\\infty_i}$ represents the steady-state maximal conductance for each ionic channel. Steady-state maximal conductances were originally adapted from Prinz \\textit{et al.}\\cite{prinz_alternative_2003}.\n\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{ccccccccc}\n        \\textsc{Ion channel} & NaV & CaT & CaS & A & H & KCa & Kd & Leak \\\\\n        \\hline\n    $g_\\infty$ $(\\mu S/mm^2)$ & $1e3$ & $25$ & $60$ & $5e2$ & $0.1$ & $50$ & $1e3$ & $*$ \\\\\n        \\hline\n        $\\tau_m$ $(ms)$ & $5e3$ & $2e5$ & $\\approx 8.33e4$ & $1e4$ & $5e7$ & $1e5$ & $5e3$ & $*$ \\\\\n        \\hline\n        $\\tau_g$ $(ms)$ & $5e3$ & $5e3$ & $5e3$ & $5e3$ & $5e3$ & $5e3$ & $5e3$ & $*$\n    \\end{tabular}\n    \\caption[Integral control model parameters]{Default integral control model parameters. $g_\\infty$ adapted from Prinz \\textit{et al.} to determine transcriptional rate time constants $\\tau_m$ for each ionic current\\cite{prinz_alternative_2003}. Translation rate time constant $\\tau_g$ is equal across all ionic currents. *Leak is unregulated in this model and thus does not have $\\tau_m$ or $\\tau_g$ time constants.}\n    \\label{tab:integralparameters}\n\\end{table}\n\nTable \\ref{tab:integralparameters} describes the regulation time constants used in both Equations \\ref{eq:integralregulation} and \\ref{eq:integralregulation2}. Steady state maximal conductances ($g_\\infty$) were computed by simulating a known working set of eight channel maximal conductances adapted from Prinz \\textit{et al.} for $t = 400s$ with a time-step of $dt = 0.1ms$ to ensure all maximal conductances converged at their respective steady state densities\\cite{prinz_alternative_2003}. These values are only used to compute the transcription and translation rate time constants, and are not used at any point after these values are calculated. 
Note that these time constants are the default for all experiments, except in cases where the time constants are being varied themselves. In that case, these default values are used as a median point upon which to build a distribution. Target Ca\\textsuperscript{2+} concentration was established for a bursting neuron through this same method. Average Ca\\textsuperscript{2+} concentration, averaged over 200ms, was obtained from the model after simulation. Initialization of Ca\\textsuperscript{2+} target concentration and regulation time constants was repeated at the beginning of each experiment. \n\n\\section{Simulating Neurons}\n\nModel neurons are simulated according to the exponential Euler method,\\cite{gorur-shandilya_xolotl_2018,dayan2001theoretical} which approximates \\ac{Vm} at each discrete time step by the equation:\n\n\\begin{equation} \\label{eq:expeuler}\nV(t + \\mathrm{d}t) = V_\\infty + (V(t) - V_\\infty)\\exp\\left(-\\frac{\\mathrm{d}t}{\\tau_V}\\right)\n\\end{equation}\n\nNeurons are simulated for $t = 200s$ with a timestep of $dt = 0.1ms$. Voltage traces used for figures and metrics are computed by simulating model neurons for $t = 6s$ with a timestep of $dt = 0.1ms$.\n\nThe homeostatic mechanism is simulated by the Euler method\\cite{gorur-shandilya_xolotl_2018}:\n\n\\begin{align} \\label{eq:integraleuler}\n    Ca_{err} &= Ca_{tgt} - Ca_{prev} \\\\\n    m_i(t+\\mathrm{d}t) &= m_i + \\frac{\\mathrm{d}t}{\\tau_{m_i}}Ca_{err} \\\\\n    \\bar{g}_i(t+\\mathrm{d}t) &= \\bar{g}_i + \\frac{\\mathrm{d}t}{\\tau_{g_i}}(m_i - \\bar{g}_i A)\n\\end{align}\n\nwhere $m_i$ represents ion channel mRNA concentration for each current, $\\bar{g}_i$ represents maximal conductance density for each current, $Ca_{tgt}$ represents target average Ca\\textsuperscript{2+} concentration, $Ca_{prev}$ represents Ca\\textsuperscript{2+} concentration at the previous time-step, and $A$ represents the surface area of the model neuron.\n\n\\section{Initial Conditions} \\label{initialconditions}\n\nModel compartments are initialized with parameters $\\ac{Vm} = -60 mV$ and $[Ca^{2+}_{in}] = 0.05 \\mu M$.\nInitial maximal conductances ($\\mu S/mm^2$) for each channel (excluding \\ac{gleak}) and compartment mRNA concentration ($\\mu M$) are varied across all experiments. \\ac{gleak} is held fixed at $\\bar{g}_{leak} = 0.05$, except in Figure \\ref{fig:integralvariation_Leak}.\nValues are generated for all simulations at the beginning of each experiment by the use of a random number generator scaled by a noise factor.\nFor the first experiment, where initial conditions are varied alone, the noise factor is $20$ for channel maximal conductances and $0.004$ for compartment mRNA concentration. For all other experiments, the noise factor is $5$ for channel maximal conductances and $0.001$ for mRNA levels.\n\n\\section{Activity Metrics} \\label{electrophys}\n\nFor each simulation, metrics were computed on model parameters to determine if models were working or not. Models were considered to have converged if\n\n\\begin{equation}\n    \\left| \\frac{[Ca_{tgt}] - [Ca_{avg}]}{[Ca_{tgt}]} \\right| < 0.1\n\\end{equation}\n\nwhere $Ca_{tgt}$ is the target Ca\\textsuperscript{2+} concentration and $Ca_{avg}$ is the average Ca\\textsuperscript{2+} concentration after homeostatic regulation.\n\nModels were then checked against a reference model with the parameters described in Table \\ref{tab:integralparameters}. Burst periods and duty cycles were compared against this reference. 
If the burst period of the tested model deviated from the reference by more than 20\\% or the duty cycle deviated by more than 10\\%, the model was considered nonfunctional.\n\n\\section{Model Implementation} \\label{implementation}\n\nModel neurons were implemented using \\texttt{xolotl} v.20.4.22 and v.20.2.26, an open-source neuron simulator written in C++ for MATLAB\\cite{gorur-shandilya_xolotl_2018}. Credit to Srinivas Gorur-Shandilya for development and implementation of the modeling platform. Simulation data were retrieved using MATLAB R2020a.\n\n\n\n\n", "meta": {"hexsha": "707128c02be924cd5523ebfd577d2c475e375abc", "size": 16776, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Thesis/Chapters/Chapter02.tex", "max_stars_repo_name": "btromm/thesis", "max_stars_repo_head_hexsha": "057526cd442187a4b39891109806a88541d18150", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-14T13:30:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-14T13:30:14.000Z", "max_issues_repo_path": "thesis/Thesis/Chapters/Chapter02.tex", "max_issues_repo_name": "btromm/thesis", "max_issues_repo_head_hexsha": "057526cd442187a4b39891109806a88541d18150", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-04-14T13:26:26.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-20T03:35:05.000Z", "max_forks_repo_path": "thesis/Thesis/Chapters/Chapter02.tex", "max_forks_repo_name": "btromm/neuralcorr-thesis", "max_forks_repo_head_hexsha": "057526cd442187a4b39891109806a88541d18150", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.8341463415, "max_line_length": 1268, "alphanum_fraction": 0.7535765379, "num_tokens": 4657, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430436757312, "lm_q2_score": 0.6859494485880927, "lm_q1q2_score": 0.5701221125071969}}
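The update rules in Equations \ref{eq:expeuler} and \ref{eq:integraleuler} amount to a few lines per time-step. The following is a minimal sketch of one combined step, assuming NumPy arrays over channels; it is not the \texttt{xolotl} implementation, and the function and argument names are illustrative only.

\begin{verbatim}
import numpy as np

def step(V, V_inf, tau_V, m, gbar, Ca_prev, Ca_tgt, tau_m, tau_g, A, dt=0.1):
    # Exponential Euler for the membrane potential (Eq. expeuler).
    V = V_inf + (V - V_inf) * np.exp(-dt / tau_V)
    # Forward Euler for the integral controller (Eq. integraleuler).
    Ca_err = Ca_tgt - Ca_prev
    m = np.maximum(m + (dt / tau_m) * Ca_err, 0.0)                # mRNA levels
    gbar = np.maximum(gbar + (dt / tau_g) * (m - gbar * A), 0.0)  # conductances
    return V, m, gbar
\end{verbatim}

Clamping $m_i$ and $\bar{g}_i$ at zero mirrors the constraint that conductances may not become negative.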
{"text": "\\section{The Divergence Theorem}\\label{sec:DivergenceTheorem}\n\nThe third version of Green's Theorem (Equation~\\ref{eq:greens theorem third form}) we saw was:\n$$\\int_{\\partial D} \\vect{f}\\cdot\\vect{N}\\,ds=\\iint_{D} \\nabla\\cdot\\vect{f}\\,dA.$$ With minor changes this turns into another equation, the\nDivergence Theorem:\n\n\\begin{theorem}{Divergence Theorem}{}\nUnder suitable conditions, if $E$ is a\nregion of three dimensional space and $D$ is its boundary surface,\noriented outward, then\n$$\\iint_{D} \\vect{f}\\cdot\\vect{N}\\,dS=\\iiint_{E} \\nabla\\cdot\\vect{f}\\,dV.$$\\index{divergence!theorem}\n\\end{theorem}\n\\goodbreak\n\\begin{proof}\nAgain this theorem is too difficult to prove here, but a special case\nis easier. In the proof of a special case of Green's Theorem, we\nneeded to know that we could describe the region of integration in\nboth possible orders, so that we could set up one double integral\nusing $dx\\,dy$ and another using $dy\\,dx$. Similarly here, we need to\nbe able to describe the three-dimensional region $E$ in different\nways.\n\nWe start by rewriting the triple integral:\n$$\\iiint_{E} \\nabla\\cdot\\vect{f}\\,dV = \\iiint_{E} ({\\partial f_1 \\over \\partial x}+{\\partial f_2 \\over \\partial y}+{\\partial f_3 \\over \\partial z})\\,dV\n=\\iiint_{E} {\\partial f_1 \\over \\partial x}\\,dV + \\iiint_{E} {\\partial f_2 \\over \\partial y}\\,dV + \\iiint_{E} {\\partial f_3 \\over \\partial z}\\,dV.$$\nThe double integral may be rewritten:\n$$\\iint_{D} \\vect{f}\\cdot\\vect{N}\\,dS\n=\\iint_{D} (f_1\\vect{i}+f_2\\vect{j}+f_3\\vect{k})\\cdot\\vect{N}\\,dS\n=\\iint_{D} f_1\\vect{i}\\cdot\\vect{N}\\,dS+\\iint_{D} f_2\\vect{j}\\cdot\\vect{N}\\,dS+\n\\iint_{D} f_3\\vect{k}\\cdot\\vect{N}\\,dS.$$\nTo prove that these give the same value it is sufficient to prove that\n\\begin{align}\\label{eq:divergence proof}\n\\iint_{D} f_1\\vect{i}\\cdot\\vect{N}\\,dS&=\\iiint_{E} {\\partial f_1 \\over \\partial x}\\,dV,\t\\\\\n\\iint_{D} f_2 \\vect{j}\\cdot\\vect{N}\\,dS&=\\iiint_{E} {\\partial f_2 \\over \\partial y}\\,dV,\\;\\hbox{and}\t\\\\\n\\iint_{D} f_3 \\vect{k}\\cdot\\vect{N}\\,dS&=\\iiint_{E} {\\partial f_3 \\over \\partial z}\\,dV.\n\\end{align}\n% $$\n% \\iint{D} P\\vect{i}\\cdot\\vect{N}\\,dS=\\iiint{E} P_x\\,dV,\\;\n% \\iint{D} Q\\vect{j}\\cdot\\vect{N}\\,dS=\\iiint{E} Q_y\\,dV,\\;\\hbox{and}\\;\n% \\iint{D} R\\vect{k}\\cdot\\vect{N}\\,dS=\\iiint{E} R_z\\,dV.$$\nNot surprisingly, these are all pretty much the same; we'll do the\nfirst one.\n\nWe set the triple integral up with $dx$ innermost:\n$$\\iiint_{E} {\\partial f_1 \\over \\partial x}\\,dV=\\iint_{B}\\int_{g_1(y,z)}^{g_2(y,z)} {\\partial f_1 \\over \\partial x}\\,dx\\,dA=\n\\iint_{B} f_1(g_2(y,z),y,z)-f_1(g_1(y,z),y,z)\\,dA,$$\nwhere $B$ is the region in the $y$-$z$ plane over which we integrate.\nThe boundary surface of $E$ consists of a ``top'' $x=g_2(y,z)$, a\n``bottom'' $x=g_1(y,z)$, and a ``wrap-around side'' that is vertical\nto the $y$-$z$ plane. To integrate over the entire boundary surface,\nwe can integrate over each of these (top, bottom, side) and add the\nresults. Over the side surface, the vector $\\vect{N}$ is perpendicular to\nthe vector $\\vect{i}$, so\n$$\n\\iint_{\\hbox{\\scriptsize side}} f_1\\vect{i}\\cdot\\vect{N}\\,dS = \n\\iint_{\\hbox{\\scriptsize side}}\n0\\,dS=0.$$\nThus, we are left with just the surface integral over the top plus the\nsurface integral over the bottom. 
For the top, we use the vector\nfunction\n$\\vect{r}=\\langle g_2(y,z),y,z\\rangle$ which gives \n$\\vect{r}_y\\times\\vect{r}_z=\\langle 1,-g_{2y},-g_{2z}\\rangle$; the dot\nproduct of this with $\\vect{i}=\\langle 1,0,0\\rangle$ is 1. Then\n$$\n\\iint_{\\hbox{\\scriptsize top}} f_1\\vect{i}\\cdot\\vect{N}\\,dS\n=\\iint_{B} f_1(g_2(y,z),y,z)\\,dA.$$\nIn almost identical fashion we get\n$$\n\\iint_{\\hbox{\\scriptsize bottom}} f_1\\vect{i}\\cdot\\vect{N}\\,dS\n=-\\iint_{B} f_1(g_1(y,z),y,z)\\,dA,$$\nwhere the negative sign is needed to make $\\vect{N}$ point in the\nnegative $x$ direction. Now\n$$\\iint_{D} f_1\\vect{i}\\cdot\\vect{N}\\,dS\n=\\iint_{B} f_1(g_2(y,z),y,z)\\,dA-\\iint_{B}\n    f_1(g_1(y,z),y,z)\\,dA,$$ which is the same as the value of the\n    triple integral above.\n\\end{proof}\n\nIt is worth noting that this theorem is also referred to as \\textit{Gauss' Theorem}. We now compute an example. \\index{Gauss' Theorem}\n\n\\begin{example}{}{}\nLet $\\vect{f}=\\langle 2x,3y,z^2\\rangle$, and consider the\nthree-dimensional volume inside the cube with faces parallel to the\nprincipal planes and opposite corners at\n$(0,0,0)$ and $(1,1,1)$. Compute the two integrals of the\ndivergence theorem.\n\\end{example}\n\n\\begin{solution}\nThe triple integral is the easier of the two:\n$$\\int_0^1\\int_0^1\\int_0^1 2+3+2z\\,dx\\,dy\\,dz=6.$$\nThe surface integral must be separated into six parts, one for each\nface of the cube. One face is $z=0$ or $\\vect{r}=\\langle u,v,0\\rangle$, $0\\le\nu,v\\le 1$. Then $\\vect{r}_u=\\langle 1,0,0\\rangle$, \n$\\vect{r}_v=\\langle 0,1,0\\rangle$, and $\\vect{r}_u\\times\\vect{r}_v=\n\\langle 0,0,1\\rangle$. We need this to be oriented downward (out of\nthe cube), so we use\n$\\langle 0,0,-1\\rangle$ and the corresponding integral is\n$$\\int_0^1\\int_0^1 -z^2\\,du\\,dv=\\int_0^1\\int_0^1 0\\,du\\,dv=0.$$\n\nAnother face is $y=1$ or $\\vect{r}=\\langle u,1,v\\rangle$. Then $\\vect{r}_u=\\langle 1,0,0\\rangle$, $\\vect{r}_v=\\langle 0,0,1\\rangle$, and\n$\\vect{r}_u\\times\\vect{r}_v= \\langle 0,-1,0\\rangle$. We need a normal in\nthe positive $y$ direction, so we convert this to $\\langle\n0,1,0\\rangle$, and the corresponding integral is\n$$\\int_0^1\\int_0^1 3y\\,du\\,dv=\\int_0^1\\int_0^1 3\\,du\\,dv=3.$$\n\nThe remaining four integrals have values 0, 0, 2, and 1, and the sum\nof these is 6, in agreement with the triple integral.\n\\end{solution}\n\n\\begin{example}{}{}\nLet $\\vect{f}=\\langle x^3,y^3,z^2\\rangle$, and consider the\ncylindrical volume $x^2+y^2\\le9$, $0\\le z\\le2$.\nCompute the two integrals of the divergence theorem.\n\\end{example}\n\n\\begin{solution}\nThe triple integral (using cylindrical coordinates) is \n$$\\int_0^{2\\pi}\\int_0^3\\int_0^2 (3r^2+2z)r\\,dz\\,dr\\,d\\theta=279\\pi.$$\n\nFor the surface we need three integrals. The top of the cylinder can\nbe represented by\n$\\vect{r}=\\langle v\\cos u,v\\sin u,2\\rangle$; \n$\\vect{r}_u\\times\\vect{r}_v=\\langle 0,0,-v\\rangle$, which points down\ninto the cylinder,\nso we convert it to $\\langle 0,0,v\\rangle$. 
Then\n$$\\int_0^{2\\pi}\\int_0^3 \\langle v^3\\cos^3u,v^3\\sin^3u,4\\rangle\\cdot\n\\langle 0,0,v\\rangle\\,dv\\,du=\n\\int_0^{2\\pi}\\int_0^3 4v\\,dv\\,du=36\\pi.$$\nThe bottom is \n$\\vect{r}=\\langle v\\cos u,v\\sin u,0\\rangle$; \n$\\vect{r}_u\\times\\vect{r}_v=\\langle 0,0,-v\\rangle$ and\n$$\\int_0^{2\\pi}\\int_0^3 \\langle v^3\\cos^3u,v^3\\sin^3u,0\\rangle\\cdot\n\\langle 0,0,-v\\rangle\\,dv\\,du=\n\\int_0^{2\\pi}\\int_0^3 0\\,dv\\,du=0.$$\nThe side of the cylinder is $\\vect{r}=\\langle 3\\cos u,3\\sin u,v\\rangle$;\n$\\vect{r}_u\\times\\vect{r}_v=\\langle 3\\cos u,3\\sin u,0\\rangle$ which does\npoint outward, so\n\\begin{align*}\n\\int_0^{2\\pi}\\int_0^2 &\\langle 27\\cos^3 u,27\\sin^3 u,v^2\\rangle\\cdot\n\\langle 3\\cos u,3\\sin u,0\\rangle \\,dv\\,du\t\\\\\n&=\\int_0^{2\\pi}\\int_0^2 81\\cos^4 u+81\\sin^4u\\,dv\\,du=243\\pi.\n\\end{align*}\nThe total surface integral is thus $36\\pi+0+243\\pi=279\\pi$.\n\\end{solution}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:DivergenceTheorem}}\n\n\\begin{enumialphparenastyle}\n\n\\begin{ex}\nUsing $\\ds\\vect{f}=\\langle 3x,y^3,-2z^2\\rangle$ and the\nregion bounded by $\\ds x^2+y^2=9$, $z=0$, and $z=5$, compute both\nintegrals from the Divergence Theorem.\n\\begin{sol}\n\tboth are $-45\\pi/4$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the volume described by \n$0\\le x\\le a$, $0\\le y\\le b$, $0\\le z\\le c$, and \n$\\vect{f}= \\langle x^2,y^2,z^2\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$a^2bc+ab^2c+abc^2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the volume described by \n$0\\le x\\le 1$, $0\\le y\\le 1$, $0\\le z\\le 1$, and \n$\\vect{f}= \\langle 2xy,3xy,ze^{x+y}\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$e^2-2e+7/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the volume described by \n$0\\le x\\le 1$, $0\\le y\\le x$, $0\\le z\\le x+y$, and \n$\\vect{f}= \\langle x,2y,3z\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$3$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the volume described by \n$x^2+y^2+z^2\\le 4$, and \n$\\vect{f}= \\langle x^3,y^3,z^3\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$384\\pi/5$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the hemisphere described by \n$0\\le z\\le \\sqrt{1-x^2-y^2}$, and \n$\\vect{f}= \\langle \\sqrt{x^2+y^2+z^2},\\sqrt{x^2+y^2+z^2},\n\\sqrt{x^2+y^2+z^2}\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$\\pi/3$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the volume described by \n$x^2+y^2\\le1$, $0\\le z\\le4$, and \n$\\vect{f}= \\langle xy^2,yz,x^2z\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$10\\pi$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nLet $E$ be the solid cone above the $x$-$y$ plane and\ninside $z=1-\\sqrt{x^2+y^2}$, and \n$\\vect{f}= \\langle x\\cos^2 z,y\\sin^2z,\\sqrt{x^2+y^2}z\\rangle$. Compute\n$\\ds\\iint_{\\partial E} \\vect{f}\\cdot \\vect{N}\\,dS$.\n\\begin{sol}\n\t$\\pi/2$\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nProve the other two equations in the \ndisplay~\\ref{eq:divergence proof}.\n\\end{ex}\n\n\\begin{ex}\nSuppose $D$ is a closed surface, and that $D$ and $F$ are\nsufficiently nice. 
Show that \n$$\\iint_{D} (\\nabla \\times \\vect{f}) \\cdot \\vect{N}\\, dS = 0$$\nwhere $\\vect{N}$ is the outward pointing unit normal.\n\\end{ex}\n\n\\begin{ex}\nSuppose $D$ is a closed surface, $D$ is sufficiently nice,\nand $F=\\langle a,b,c\\rangle$ is a constant vector field.\nShow that \n$$\\iint_{D} \\vect{f} \\cdot \\vect{N}\\, dS = 0$$\nwhere $\\vect{N}$ is the outward pointing unit normal.\n\\end{ex}\n\n\\begin{ex}\nWe know that the volume of a region $E$ may often be computed as\n$\\ds \\iiint_{E}dx\\,dy\\,dz$. Show that this volume may also be computed as\n$\\ds{1\\over3} \\iint_{\\partial E} \\langle x,y,z\\rangle \\cdot \\vect{N}\\,dS$\nwhere $\\vect{N}$ is the outward pointing unit normal to $\\partial E$.\n\\end{ex}\n\n\\end{enumialphparenastyle}\n", "meta": {"hexsha": "7276d3c548b9b20c272370fec9ed93323e8094d5", "size": 9931, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "16-vector-calculus/16-9-divergence-theorem.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "16-vector-calculus/16-9-divergence-theorem.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "16-vector-calculus/16-9-divergence-theorem.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7604562738, "max_line_length": 151, "alphanum_fraction": 0.6693182962, "num_tokens": 4043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587586, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.5700549255091313}}
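Both sides of the theorem in the first example are easy to verify symbolically. The sketch below, assuming the \texttt{sympy} library is available, recomputes the triple integral of $\nabla\cdot\vect{f}$ over the unit cube and the flux of $\vect{f}=\langle 2x,3y,z^2\rangle$ through its six faces; both come out to 6, matching the worked solution.

\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([2*x, 3*y, z**2])

# Volume side: integrate div f over the unit cube.
div_f = sum(sp.diff(f[i], v) for i, v in enumerate((x, y, z)))
vol = sp.integrate(div_f, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Surface side: f . N over the six faces, with outward normals.
surf = 0
for i, v in enumerate((x, y, z)):
    a, b = [s for s in (x, y, z) if s != v]
    surf += sp.integrate(f[i].subs(v, 1), (a, 0, 1), (b, 0, 1))  # face v = 1
    surf -= sp.integrate(f[i].subs(v, 0), (a, 0, 1), (b, 0, 1))  # face v = 0

assert vol == surf == 6
\end{verbatim}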
{"text": "\n\\documentclass[a5paper]{book}\n\n\\usepackage{matex}\n\n\n\\begin{document}\n\n\\titlePage{ $ Algebraic Equations $ }{ $ Martin Azpillaga $ }\n\n\n\n\n\n\\part{ $ Definitions $ }\n\n\\chapter{ $ Roots of unity, Extensions and Galois Group $ }\n\n\\introduction\n{ \n\t$ \n\tIn this first chapter, we discuss the definitions involved in the subsequent three sections:\\\\\n\t1. Roots of the unity who form cyclotomic polynomials and cyclotomic fields.\\\\\n\t2. Algebraic and Transcendent extensions over fields. Normal extensions.\\\\\n\t3. Galois Group of an algebraic extension.\n\t$ \n}\n\n\n\n\n\\section{ $ Roots of unity $ }\n{\n\t\\subsection{ $ nth root of unity $ }\n\n\t\\letbe\n\t{\n\t\tK $ field $.\n\t\tn \\in \\N.\n\t\tx \\in \\K\n\t}\n\t\\then{ x }{ $ a nth root of unity in $ K }\n\t{\n\t\tx^n = 1\n\t}\n\t\\denote\n\t{\n\t\t\\set{ x \\in \\K }[ x^n = 1 ] \\as \\mu_n(\\K).\n\t\t\\set{ x \\in \\K }[ \\ex{ n \\in \\N }{ x^n = 1 } ] \\as \\mu(K)\n\t}\n\n\t\\subsection{ $ nth Primitive root of unity $ }\n\n\t\\letbe\n\t{\n\t\tK $ field $.\n\t\tx \\in \\mu_n(K)\n\t}\n\t\\then{ x }{ $ a primitive root of unity in $ K }\n\t{\n\t\t\\order{ x } = n\n\t}\n\t\\denote\n\t{\n\t\t\\set{ x \\in \\mu_n(\\K) }[ \\order{ x } = n ] \\as \\mu_n^*(K)\n\t}\n\n\n\t\\subsection{ $ nth Cyclotomic Polynomial $ }\n\n\t\\letbe\n\t{\n\t\tn \\in \\N\n\t}\n\t\\call{ $ nth Cyclotomic Polynomial $ }\n\t{\n\t\tf(X) = \\productoryoverset{ (X - \\zeta ) }{ \\zeta }{ \\mu_n^*(\\K) } \\in \\C[X]\n\t}\n\t\\denote\n\t{\n\t\tf(X) \\as \\Phi_n(X)\n\t}\n\n\n\n\t\\subsection{ $ Mobius' function $ }\n\n\t\\call{ $ Mobius' function $ }\n\t{\n\t\t\\definedfunction{ f }{ \\N }{ \\set{ 0,+1,-1 } }{ x }{ \\multiple[ \\{ ]\n\t\t{\n\t\t\t+1, \\qquad n = 1.\n\t\t\t+0, \\qquad \\ex{ p \\in \\N }{ p $ prime $ \\logicand a^2 | n }.\n\t\t\t-1, \\qquad * \n\t\t} }\n\t}\n\t\\denote\n\t{\n\t\tf \\as \\mu\n\t}\n\n\t\\newpage\n\n}\n\n\\section{ $ Classification of Field Extensions $ }\n{\n\t\\subsection{ $ Field Extension $ }\n\n\t\\letbe\n\t{\n\t\tK, k $ fields $\n\t}\n\t\\then{ K }{ $ an extension of $ k }\n\t{\n\t\tk \\subset K\n\t}\n\t\\denote\n\t{\n\t\tk \\subset K \\as K|k\n\t}\n\n\t\\subsection{ $ Algebraic extension $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ field extension $\n\t}\n\t\\then{ \\theta \\in K }{ $ an algebraic element over $ k }\n\t{\n\t\t\\ex f(X) \\in k[X] \\suchthat f(\\theta) = 0 \n\t}\n\t\\then{ K|k }{ $ an algebraic extension $ }\n\t{\n\t\t\\all{ \\theta \\in K }\n\t\t{\n\t\t\t\\theta $ algebraic over $ k\n\t\t}\n\t}\n\t\\denote\n\t{\n\t\tK|k $ no algebraic extension $ \\as K|k $ transcendent extension $\n\t}\n\n\t\\subsection{ $ Cyclotomic Extension $ }\n\n\t\\letbe\n\t{\n\t\tn \\in \\N.\n\t\t\\zeta \\in \\mu_n^*(\\C)\n\t}\n\t\\call{ $ nth cyclotomic field $ }\n\t{\n\t\tQ( \\zeta ) \\extends \\Q\n\t}\n\n\n\t\\subsection{ $ Quadratic extension $ }\n\n\t\\letbe\n\t{\n\t\tk $ field $ \\suchthat car(k) \\neq 2.\n\t\tK|k $ field extension $\n\t}\n\t\\then{ K|k }{ $ a quadratic extension $ }\n\t{\n\t\t[ K : k ] = 2\n\t}\n\n\n\t\\subsection{ $ Normal Extension $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ algebraic extension $\n\t}\n\t\\then{ K|k }{ $ a normal extension over $ k }\n\t{\n\t\t\\ex{ f(X) \\in k(X) $ irreductible $ }{ f(X) $ splits in $ K }\n\t}\t\n\n\n\t\\subsection{ $ Separable extension $ }\n\t\n\t\\letbe\n\t{\n\t\tK|k $ algebraic extension $\n\t}\n\t\\then{ \\theta \\in K }{ $ a separable element over $ k }\n\t{\n\t\t\\ex{ f(X) \\in k[X] }{ f(\\theta) = 0 \\logicand f(X) $ separable in $ K[X] }\n\t}\n\t\\then{ K|k }{ 
$ a separable extension $ }\n\t{\n\t\t\\all{ \\theta \\in K }\n\t\t{\n\t\t\t\\theta $ separable over $ k\n\t\t}\n\t}\n\t\n\n\t\\subsection{ $ Simple extension $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ field extension $.\n\t}\n\t\\then{ K \\extends k }{ $ a simple extension $ }\n\t{\n\t\t\\ex{ \\theta \\in K }{ K = k(\\theta) }\n\t}\n\t\\denote\n\t{\n\t\t\\theta \\as $ primitive element of $ K \\extends k \n\t}\n\n\n\t\\subsection{ $ Galois extension $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ algebraic extension $\n\t}\n\t\\then{ K \\extends k }{ $ a Galois extension $ }\n\t{\n\t\tK \\extends k $ normal extension $.\n\t\tK \\extends k $ separable extension $\n\t}\n\n}\n\n\n\n\\section{ $ Properties of field extensions $ }\n{\n\t\\subsection{ $ Evaluation function $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ field extension $.\n\t\t\\theta \\in K\n\t}\n\t\\call{ $ Evaluation function of $ \\theta $ over $ k }\n\t{\n\t\t\\definedfunction{ f }{ k[X] }{ K }{ p(X) }{ p(\\theta) }\n\t}\n\t\\denote\n\t{\n\t\tf \\as \\Psi_\\theta.\n\t\tIm \\; \\psi_\\theta \\as k[\\theta].\n\t\t$ fraction field of $ k[\\theta] \\as k(\\theta)\n\t}\n\n\n\t\\subsection{ $ Minimal Polynomial $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ field extension $.\n\t\t\\theta \\in K $ algebraic over $ k\n\t}\n\t\\call{ $ Minimal Polynomial of $ \\theta $ in $ K|k }\n\t{\n\t\tf(X) \\in k[X] \\suchthat \\logicand \\multiple[ \\{ ]\n\t\t{\n\t\t\tf(X) \\neq 0.\n\t\t\tf(X) $ irreducible $.\n\t\t\tf(\\theta) = 0 \n\t\t}\n\t}\n\t\\denote\n\t{\n\t\tf(X) \\as Irr(\\theta,k)(X)\n\t}\n\n\t\\subsection{ $ Extension Degree $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ field extension $\n\t}\n\t\\call{ $ degree of $ K|k }\n\t{\n\t\t\\dim_k K \\in \\N \\cup \\set{ \\infty }\n\t}\n\t\\denote\n\t{\n\t\t\\dim_k K \\as [ K : k ].\n\t\t[K:k] \\in \\N : K|k $ finite extension $\n\t}\n\n\n\t\\newpage\n\n\n\t\\subsection{ $ Splitting field $ }\n\n\t\\letbe\n\t{\n\t\tk $ field $.\n\t\tf \\in k[X].\n\t\t\\family{ \\theta_i }{ i }[ 1 ][ n ] $ roots of $ f\n\t}\n\t\\call{ $ splitting field of $ f }\n\t{\n\t\tk \\vect{ \\theta_i }{ i }[ 1 ][ n ]\n\t}\n\n\n\t\\subsection{ $ Field Composition $ }\n\n\t\\letbe\n\t{\n\t\tK_1|k, K_2|k $ field extensions $ \n\t}\n\t\\call{ $ the composition of $ K_1 $ and $ K_2 }\n\t{\n\t\t<K_1,K_2>\n\t}\n\t\\denote\n\t{\n\t\t<K_1,K_2> \\as K_1K_2\n\t}\n\t\n\t\n\n\t\\subsection{ $ Separability degree $ }\n\t\n\t\\letbe\n\t{\n\t\tk|k $ algebraic extension $.\n\t\t\\closed{k} $ algebraic closure of $ k\n\t}\n\t\\call{ $ separability degree of $ K|k }\n\t{\n\t\t\\# \\set{ \\sigma \\in \\functionspace{ K }{ \\closed{k} } }[ \\sigma \\; k$-immersion$ ]\n\t}\n\t\\denote\n\t{\n\t\t\\# \\set{ \\sigma \\in \\functionspace{ K }{ \\closed{k} } }[ \\sigma \\; k$-immersion$ ] \\as [K:k]_s\n\t}\n}\n\n\n\\section{ $ Galois Group $ }\n{\n\t\n\n\n\t\\subsection{ $ Fixed Field $ }\n\n\t\\letbe\n\t{\n\t\tK_1, K_2 $ fields $.\n\t\t\\function{ \\sigma }{ K_1 }{ K_2 }\n\t}\n\t\\call{ $ the fixed field of $ k_1 $ by $ \\sigma }\n\t{\n\t\t\\set{ x \\in K_1 }[ \\sigma(x) = x ]\n\t}\n\t\\denote\n\t{\n\t\t\\set{ x \\in K_1 }[ \\sigma(x) = x ] \\as K_1^\\sigma\n\t}\n\n\t\\newpage\n\n\n\t\\subsection{ $ k-Immersion $ }\n\n\t\\letbe\n\t{\n\t\tK_1|k, K_2|k $ field extensions $.\n\t\t\\function{ \\sigma }{ K_1 }{ K_2 }\n\t}\n\t\\then{ \\sigma }{ $ a $k$-immersion $ }\n\t{\n\t\tk \\subset K_1^\\sigma\n\t}\n\t\\denote\n\t{\n\t\t\\sigma \\; k $-immersion $ \\logicand \\sigma $ automorphism $ \\as \\sigma $ $ k $-automorphism $\n\t}\n\n\n\t\\subsection{ $ k-Automorphism $ 
}\n\n\t\\letbe\n\t{\n\t\tK \\extends k $ field extensions $.\n\t\t\\function{ \\sigma }{ K }{ K }\n\t}\n\t\\then{ \\sigma }{ $ a $k$-automorphism $ }\n\t{\n\t\t\\sigma \\; k$-immersion$.\n\t\t\\sigma $ isomorphism $\n\t}\n\t\\denote\n\t{\n\t\t\\sigma \\; k $-immersion $ \\logicand \\sigma $ automorphism $ \\as \\sigma $ $ k $-automorphism $\n\t}\n\n\t\\subsection{ $ Galois Group $ }\n\n\t\\letbe\n\t{\n\t\tK|k $ field extension $\n\t}\n\t\\call{ $ Galois group of $ K|k }\n\t{\n\t\t\\set{ \\sigma \\in \\functionspace{ K }{ K } }[ \\sigma $ $ k $-autmorphism $ ]\n\t}\n\t\\denote\n\t{\n\t\t\\set{ \\sigma \\in \\functionspace{ K }{ K } }[ \\sigma $ $ k $ autmorphism $ ] \\as Gal(K|k)\n\t}\n\n\t\\newpage\n\n}\n\n\n\n\\section{ $ Finite Fields $ }\n{\n\n\t\\subsection{ $ Finite Field $ }\n\n\t\\letbe\n\t{\n\t\tk $ field $\n\t}\n\t\\then{ k }{ $ a finite field $ }\n\t{\n\t\t\\# k \\in \\N\n\t}\n\t\\denote\n\t{\n\t\t\\ex{ p,n \\in \\N  }{ p $ prime $ \\logicand \\# k = p^n } \\as \\F_{p^n}\n\t}\n\n\n\n\t\\subsection{ $ Frobenius' Automorphism $ }\n\n\t\\letbe\n\t{\n\t\t\\F_{p^n} $ finite field $\n\t}\n\t\\call{ $ Frobenius' automorphism of $ \\F_{p^n} }\n\t{\n\t\t\\definedfunction{ f }{ \\F_{p^n} }{ \\F_{p^n} }{ x }{ x^p }\n\t}\n\t\\denote\n\t{\n\t\tf \\as \\phin_p\n\t}\n\n\t\\newpage\n\n}\n\n\n\n\\section{ $ Separability $ }\n{\n\t\n\t\n\n\t\n\n\t\n\t\n}\n\n\n\n\\section{ $ Norm and trace $ }\n{\n\t\n\t\\subsection{ $ Norm $ }\n\t\n\t\\letbe\n\t{\n\t\tK|k $ finite extension $\n\t}\n\t\\call{ $ Norm of $ K|k }\n\t{\n\t\t\\definedfunction{ f }{ K }{ k }{ \\theta }{ det( m_\\theta ) }\n\t}\n\t\\denote\n\t{\n\t\tf \\as N_{K|k}\n\t}\n\n\n\t\\subsection{ $ Trace $ }\n\t\n\t\\letbe\n\t{\n\t\tK|k $ finite extension $\n\t}\n\t\\call{ $ trace of $ K|k }\n\t{\n\t\t\\definedfunction{ f }{ K }{ k }{ \\theta }{ tr( m_\\theta ) }\n\t}\n\t\\denote\n\t{\n\t\tf \\as Tr_{K|k}\n\t}\n\n\t\n\n\n\t\\subsection{ $ Cyclic extension $ }\n\t\n\t\\letbe\n\t{\n\t\tK|k $ Galois extension $\n\t}\n\t\\then{ K|k }{ $ a cyclic extension $ }\n\t{\n\t\tGal(K|k) $ cyclic $\n\t}\n\n\n\t\\subsection{ $ Abelian extension $ }\n\t\n\t\\letbe\n\t{\n\t\tK|k $ Galois extension $\n\t}\n\t\\then{ K|k }{ $ an abelian extension $ }\n\t{\n\t\tGal(K|k) $ abelian $\n\t}\n\n\n\t\\subsection{ $ Resoluble by radicals $ }\n\t\n\t\\letbe\n\t{\n\t\tk $ field $.\n\t\tf(X) \\in k[X]\n\t}\n\t\\then{ f(X) }{ $ resoluble by radicals $ }\n\t{\n\t\t\\ex{ K|k }{ K|k $radical extension $ }\n\t}\n\t\\denote\n\t{\n\t\tproperty \\as notation.\n\t}\n}\n\n\n\n\n\\part{ $ Propositions $ }\n\n\\chapter{ $ Roots of unity, Field extensions and Galois Group $ }\n\n\\introduction\n{\n\t$\n\tBasic propositions over:\\\\\n\t1. Roots of unity. \\\\\n\t2. Field extensions. \\\\\n\t3. 
Galois Group\n\t$\n}\n\n\n\\section{ $ Roots of unity $ }\n{\n\t\n\t\\subsection{ $ The Group of Roots of Unity $ }\n\t\n\t\\letbe\n\t{\n\t\tK $ field $\n\t}\n\t\\proposition\n\t{\n\t\t\\mu(K) \\substructure K^*\n\t}\n\t\\demonstration\n\t{\n\t\t\\all{ x,y \\in \\mu(K) }\n\t\t{\n\t\t\t1\n\t\t}\n\t\t\\mu(K) \\substructure K^*\n\t}\n\t\\newpage\n\n\n\n\t\\subsection{ $ The group of nth roots $ }\n\t\n\t\\letbe\n\t{\n\t\tK $ field $.\n\t\tn \\in N\n\t}\n\t\\proposition\n\t{\n\t\t\\mu_m(K) \\normal K^*\n\t}\n\t\\demonstration\n\t{\n\t\t$ Follow 3 steps $.\n\t\t\n\t\t\\step{ 1 }{ \\mu_n(K) \\subset K^* }\n\t\t{\n\t\t\t\\mu_n(K) \\subset \\mu(K) \\subset K^*\n\t\t}.\n\n\t\t\\step{ 2 }{ $ Define a group morphism $ }\n\t\t{\n\t\t\t\\definedfunction{ \\phi_n }{ K^* }{ K^* }{ x }{ x^n }.\n\n\t\t\t\\all{ x,y \\in K^* }\n\t\t\t{\n\t\t\t\t\\phi_n(x)\\phi_n(y) = x^ny^n = (xy)^n = \\phi_n(xy)\n\t\t\t}\n\t\t}\n\t\t\n\t\t\\step{ 3 }{ $ Identify $ \\mu_n(K) $ with the kernel $ }\n\t\t{\n\t\t\tKer \\phi_n = \\mu_n(K)\n\t\t}\n\t\t\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Finite subgroups of the multiplicative group $ }\n\t\n\t\\letbe\n\t{\n\t\tK $ field $.\n\t\tG \\substructure K^* $ finite $\n\t}\n\t\\proposition\n\t{\n\t\t\\ex{ m \\in \\N }{ G = \\mu_m(K) }.\n\t\tG $ cyclic $\n\t}\n\t\\demonstration\n\t{\n\t\t$ Follow 2 steps $.\n\t\t\n\t\t\\step{ 1 }{ $ Find $ m }\n\t\t{\n\t\t\t\\be{ m }{ \\max \\set{ \\order{ x } }[ x \\in G ] } \\in \\N.\n\n\t\t\t\\all{ x \\in G }\n\t\t\t{\n\t\t\t\t\\order{x} \\divides m \\imp x^m = 1.\n\n\t\t\t\tx \\in \\mu(K)\n\t\t\t}\n\n\t\t\tG = \\mu_m(K)\n\n\t\t}\n\n\t\t\\step{ 2 }{ G $ cyclic $ }\n\t\t{\n\t\t\t\\ex{ \\zeta \\in G }{ \\order{\\zeta} = m }.\n\n\t\t\t<\\zeta> \\subset G = \\mu_m(K).\n\n\t\t\t\\multiple[ \\{ ][ \\} ]\n\t\t\t{\n\t\t\t \t\\card{ G } \\geq \\order{\\zeta} = m.\n\t\t\t \t\\card{ G } \\leq \\card{\\mu_m(K)} \\leq m \n\t\t\t} \\imp \\card{ G } = m.\n\n\t\t\tG = <\\zeta> \\imp G $ cyclic $\n\t\t}\n\t}\n\t\\newpage\n\n\t\\subsection{ $ Characterization of nth roots by cyclotomic polynomials $ }\n\t\n\t\\letbe\n\t{\n\t\tn \\in \\N\n\t}\n\t\\proposition\n\t{\n\t\tX^n-1 = \\prod_{ d \\divides n } \\Phi_d(X)\n\t}\n\t\\demonstration\n\t{\n\t\t\\be{ f(X) }{ X^n-1 } \\in \\C[X].\n\n\t\t\\be{ g(X) }{ \\prod_{d \\divides n} \\Phi_d(X) } \\in \\C[X].\n\n\t\t$ Follow 3 steps $.\n\t\t\n\t\t\\step{ 1 }{ f(X) $ and $ g(X) $ have the same roots $ }\n\t\t{\n\t\t\t\\all{ x \\in \\C }[ f(x) = 0 ]\n\t\t\t{\n\t\t\t\tx \\in \\mu_n(\\C).\n\n\t\t\t\t\\be{ d }{ \\order{x} } \\in \\N.\n\n\t\t\t\tx \\in \\mu_d^*(\\C) \\imp \\Phi_d(x) = 0 \\imp g(x) = 0\n\t\t\t}\n\n\t\t\t\\all{ y \\in \\C }[ g(y) = 0 ]\n\t\t\t{\n\t\t\t\t\\ex{ d \\in \\N }{ d|n \\logicand \\Phi_d(y) = 0 }.\n\n\t\t\t\ty \\in \\mu_d^*(\\C) \\imp y^d = 1 \\imp y^n = 1 \\imp f(y) = 06\n\t\t\t}\n\t\t}\n\t\t\n\t\t\\step{ 2 }{ f(X) $ and $ g(X) $ are monic $ }\n\t\t{\n\t\t\tf $ monic $.\n\n\t\t\t\\all{ d \\in \\N }[ d \\divides n ]\n\t\t\t{\n\t\t\t\t\\Phi_d(X) $ monic $\n\t\t\t}\n\n\t\t\tg(X) $ monic $\n\t\t}\n\n\t\t\\step{ 3 }{ f(X) $ and $ g(X) $ are separable $ }\n\t\t{\n\t\t\tD(f(X)) = nX^{n-1}.\n\n\t\t\tmcd \\set{ X^n-1, nX^{n-1} } = 1.\n\n\t\t\tf(X) $ separable $.\n\n\t\t\t\\all{ x \\in \\C }[ x \\in \\mu_d^*(\\C) ]\n\t\t\t{\n\t\t\t\t\\all{ d' \\in \\N }[ d \\divides n \\logicand d' \\neq d ]\n\t\t\t\t{\n\t\t\t\t\tx \\nin \\mu_{d'}^*(\\C)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tg(X) $ separable $\n\t\t}\n\n\t\tf(X) = g(X)\n\t\t\n\t}\n\t\\newpage\n\n\n\n\t\\subsection{ $ Mobius' lemma $ }\n\t\n\t\\letbe\n\t{\n\t\tn \\in 
\\N \\suchthat n > 1\n\t}\n\t\\proposition\n\t{\n\t\t\\sum_{d \\divides n } \\mu(d) = 0\n\t}\n\t\\demonstration\n\t{\n\t\t\\ex{ \\family{ p_i}{ i }[ 1 ][ r ], \\family{ a_i }{ i }[ 1 ][ r ] \\subset \\N }{ \\productory{ p_i^{a_i} }{ i }[ 1 ][ r ] $ prime factorization of $ n}.\n\n\t\t\\all{ d \\in \\N }[ d \\divides n ]\n\t\t{\n\t\t\t\\ex{ \\family{ b_i }{ i }[ 1 ][ r ] \\subset \\N }{ \\all{ i \\in \\indexes{1}{r} }\n\t\t\t{\n\t\t\t\tb_i \\leq a_i.\n\t\t\t\t\\productory{ p_i^{b_i} }{ i }[ 1 ][ r ] $ prime factorization of $ d\n\t\t\t} }\n\n\t\t\t\\mu(d) \\neq 0 \\ifandonlyif \\all{ i \\in \\indexes{1}{r} }\n\t\t\t{\n\t\t\t\tb_i \\in \\set{ 0,1 }\n\t\t\t}\n\t\t}\n\n\t\t\\sum_{d \\divides n } \\mu(d) = \\sumatory{ \\combinatory{ r }{ k } (-1)^k }{ k }[ 0 ][ r ] = \\sumatory{ \\combinatory{ r }{ k } (-1)^k1^{r-k} }{ k }[ 0 ][ r ] = (-1+1)^r = 0\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Recursive formula for cyclotomic polynomials $ }\n\t\n\t\\letbe\n\t{\n\t\tn \\in \\N\n\t}\n\t\\proposition\n\t{\n\t\t\\Phi_n(X) = \\prod_{d \\divides n} (X^d - 1)^{\\mu(\\frac{n}{d})}\n\t}\n\t\\demonstration\n\t{\n\t\t\\prod_{d|n}(X^d-1)^{\\mu(\\frac{n}{d})} =.\n\n\t\t= \\prod_{d|n}(X^{\\frac{n}{d}}-1)^{\\mu(d)} =.\n\n\t\t= \\prod_{d|n} \\prod_{\\delta|\\frac{n}{d}} \\Phi_{\\delta}(X)^{\\mu(d)} =.\n\n\t\t= \\prod_{\\delta|n} \\prod_{d|\\frac{n}{\\delta}} \\Phi_{\\delta}(X)^{\\mu(d)} =.\n\n\t\t= \\prod_{\\delta|n} \\Phi_{\\delta}(X)^{\\sum_{d|\\frac{n}{\\delta}} \\mu(d)}.\n\n\t\t\\sum_{d|\\frac{n}{\\delta}} \\mu(d) \\neq 0 \\ifandonlyif \\delta = n.\n\n\t\t\\prod_{\\delta|n} \\Phi_{\\delta}(X)^{\\sum_{d|\\frac{n}{\\delta}} \\mu(d)} = \\Phi_n(X)^1 = \\Phi_n(X)\n\t}\n\t\\newpage\n\n\t\\subsection{ $ Classification of cyclotomic polynomials $ }\n\t\n\t\\letbe\n\t{\n\t\tn \\in \\N\n\t}\n\t\\proposition\n\t{\n\t\t\\Phi_n(X) \\in \\Z[X]\n\t}\n\t\\demonstration\n\t{\n\t\tX^n - 1 = \\prod_{d \\divides n} \\Phi_d(X) \\imp \\Phi_n(X) = \\frac{ X^n - 1 }{ \\prod_{d \\divides n, d < n} \\Phi_d(X) }.\n\n\t\t\\be{ f(X) }{ X^n - 1 }.\n\n\t\t\\be{ g(X) }{ \\prod_{d \\divides n, d < n} \\Phi_d(X) }.\n\n\t\t\\Phi_1(X) = X - 1 \\in \\Z[X] \\imp $ induction $ \\imp g(X) \\in \\Z[X].\n\n\t\t\\multiple[ \\{ ][ \\} ]\n\t\t{\n\t\t\tf(X), g(X) $ monic $.\n\t\t\tf(X), g(X) \\in \\Z[X] \n\t\t}\\imp \\Phi_n(X) = \\frac{f(X)}{g(X)} \\in \\Z[X]\n\t}\n\t\\newpage\n\n\t\\subsection{ $ Irreducibility of cyclotomic polynomials $ }\n\t\n\t\\letbe\n\t{\n\t\tn \\in \\N\n\t}\n\t\\proposition\n\t{\n\t\t\\Phi_n(X) $ irreducible over $ \\Q[X]\n\t}\n\t\\demonstration\n\t{\n\t\t$ Proof omitted $\n\t}\n\t\\newpage\n}\n\n\n\n\\section{ $ Field extensions $ }\n{\n\n\t\\subsection{ $ Classification of the image of the evaluation function $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ field extension $.\n\t\t\\theta \\in K\n\t}\n\t\\proposition\n\t{\n\t\t\\theta $ algebraic over $ k \\imp k[\\theta] = k(\\theta).\n\t\t\\theta $ transcendental over $ k \\imp k[\\theta] \\isomorph k[X]\n\t}\n\t\\demonstration\n\t{\n\t\t$ Separate 2 cases: $.\n\t\t\n\t\t\\case{ \\theta $ algebraic over $ k }\n\t\t{\n\t\t\tk $ field $ \\imp k[X] $ PID $.\n\n\t\t\tKer(\\Phi_\\theta) $ principal $ \\imp \\ex{ f(X) \\in k[X] }{ Ker(\\Phi_\\theta) = (f(X)) }.\n\n\t\t\tK $ field $ \\imp K $ integral domain $.\n\n\t\t\tk[\\theta] \\subset K \\imp k[\\theta] $ integral domain $.\n\n\t\t\t$ Isomorphism theorem: $ k[X]/(f(X)) \\isomorph k[\\theta].\n\n\t\t\t\\multiple[ \\{ ][ \\} ]\n\t\t\t{\n\t\t\t\tk[X]/(f(X)) \\isomorph 
k[\\theta].\n\t\t\t\tk[\\theta] $ integral domain $ \n\t\t\t} \\imp (f(X)) $ prime ideal $.\n\n\t\t\t\\theta $ algebraic over $ k \\imp (f(X)) \\neq (0).\n\n\t\t\tk[X] $ PID $ \\logicand (f(X)) $ nonzero prime ideal $ \\imp (f(X)) $ maximal ideal $.\n\n\t\t\tk[X]/(f(X)) $ field $ \\imp k[\\theta] $ field $.\n\n\t\t\tk[\\theta] = k(\\theta)\n\t\t}\n\t\t\\case{ \\theta $ transcendental over $ k }\n\t\t{\n\t\t\tKer( \\Phi_\\theta ) = (0).\n\n\t\t\t$ Isomorphism theorem: $ k[\\theta] \\isomorph k[X]/(0) \\isomorph k[X]\n\t\t}\n\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Inclusion of finite extensions in algebraic extensions $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ finite field extension $\n\t}\n\t\\proposition\n\t{\n\t\tK \\extends k $ algebraic extension $\n\t}\n\t\\demonstration\n\t{\n\t\t\\all{ \\theta \\in K }\n\t\t{\n\t\t\t\\be{ n }{ dim_k K } \\in \\N.\n\n\t\t\t\\family{ \\theta^i }{ i }[ 0 ][ n ] $linearly dependent$.\n\n\t\t\t\\ex{ \\family{ a_i }{ i }[ 0 ][ n ] \\subset k }{ \\ex{ i \\in \\indexes{0}{n} }{ a_i \\neq 0 } \\logicand \\sumatory{ a_i\\theta^i }{ i }[ 0 ][ n ] = 0 }.\n\n\t\t\t\\be{ f(X) }{ \\sumatory{ a_iX^i }{ i }[ 0 ][ n ] } \\in k[X].\n\n\t\t\tf(X) \\neq 0 \\logicand f(\\theta) = 0 \\imp \\theta $ algebraic over $ k \n\t\t}\n\n\t\tK \\extends k $ algebraic extension $\n\t}\n\t\\newpage\t\n\n\n\n\t\\subsection{ $ Degree behaviour in extension towers $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k, L \\extends K $ field extensions $\n\t}\n\t\\proposition\n\t{\n\t\t[L:k] = [L:K][K:k]\n\t}\n\t\\demonstration\n\t{\n\t\t\\exists \\; \\family{ \\theta_i }{ i } \\subset K \\; k$-base of $ K.\n\n\t\t\\exists \\; \\family{ \\theta'_j }{ j } \\subset L \\; K$-base of $ L.\n\n\t\t\\all{ \\theta \\in L }\n\t\t{\n\t\t\t\\ex{ \\family{ a'_j }{ j } \\subset K }{ \\theta = \\sumatory{ a'_j\\theta'_j }{ j } }.\n\n\t\t\t\\all{ j \\in J }\n\t\t\t{\n\t\t\t\t\\ex{ \\family{ a_{i,j} }{ i } \\subset k }{ a'_j = \\sumatory{ a_{i,j}\\theta_i }{ i } }\n\t\t\t}\n\n\t\t\t\\theta = \\sumatory{ a_{i,j}\\theta_i\\theta'_j }{ (i,j) }\n\t\t\t\n\t\t}\n\n\t\t\\set{ \\theta_i\\theta'_j}_{(i,j) \\in I \\times J } \\; k$-base of $ L.\n\n\t\t[L:k] = dim_k L = \\# (I \\times J) = \\#I \\cdot \\#J = [K:k][L:K]\n\t}\n\t\\newpage\n\n\n\t\n\t\\subsection{ $ Degree behaviour in base exchanges $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k, L \\extends k $ finite extensions $\n\t}\n\t\\proposition\n\t{\n\t\t[KL:K] \\leq [L:k]\n\t}\n\t\\demonstration\n\t{\n\t\t\\be{ n }{ dim_k L } \\in \\N.\n\n\t\t\\ex{ \\family{ \\theta_i }{ i }[ 1 ][ n ] \\subset L }{ L = k \\vect{ \\theta_i }{ i }[ 1 ][ n ] }.\n\n\t\tKL = K \\vect{ \\theta_i }{ i }[ 1 ][ n ].\n\n\t\t[KL:K] = \\productory{ [K \\vect{ \\theta_i }{ i }[ 1 ][ r+1 ] : K \\vect{ \\theta_i }{ i }[ 1 ][ r ] ] }{ r }[ 1 ][ n-1 ].\n\n\t\t[L:k] = \\productory{ [k \\vect{ \\theta_i }{ i }[ 1 ][ r+1 ] : k \\vect{ \\theta_i }{ i }[ 1 ][ r ] ] }{ r }[ 1 ][ n-1 ].\n\n\t\t\\all{ r \\in \\indexes{1}{n-1} }\n\t\t{\n\t\t\tk \\subset K \\imp k \\vect{ \\theta_i }{ i }[ 1 ][ r ] \\subset K \\vect{ \\theta_i }{ i }[ 1 ][ r ].\n\n\t\t\t[K \\vect{ \\theta_i }{ i }[ 1 ][ r+1 ] : K \\vect{ \\theta_i }{ i }[ 1 ][ r ] ] \\leq [k \\vect{ \\theta_i }{ i }[ 1 ][ r+1 ] : k \\vect{ \\theta_i }{ i }[ 1 ][ r ] ]\n\t\t}\n\n\t\t[KL:K] \\leq [L:k]\n\t}\n\t\\newpage\n}\n\n\n\n\\section{ $ Classification of field extensions $ }\n{\n\t\n\t\\subsection{ $ Invariance of Cyclotomic prime extensions by composition and intersection $ }\n\t\n\t\\letbe\n\t{\n\t\tn,m \\in \\N \\suchthat gcd(n,m) = 1.\n\t\t\\Q(\\zeta_n), \\Q(\\zeta_m) \\extends \\Q $ cyclotomic extensions 
$\n\t}\n\t\\proposition\n\t{\n\t\t\\Q(\\zeta_n,\\zeta_m) = \\Q(\\zeta_n\\zeta_m)\n\n\t\t\\Q(\\zeta_n) \\cap \\Q(\\zeta_m) = \\Q\n\t}\n\t\\demonstration\n\t{\n\t\t$ Follow 2 steps $.\n\t\t\n\t\t\\step{ 1 }{ \\Q(\\zeta_n,\\zeta_m) $ cyclotomic extension $ }\n\t\t{\n\t\t\t\\rightinclusion\n\t\t\t{\n\t\t\t\tgcd(m,n) = 1 \\imp \\order{ \\zeta_n\\zeta_m } = \\order{ \\zeta_n }\\order{ \\zeta_m } = nm.\n\n\t\t\t\t\\zeta_n\\zeta_m $ primitive $ nm$-th root of unity $ \\imp \\zeta_n\\zeta_m \\in \\mu_{nm}(\\C).\n\n\t\t\t\t\\be{ \\zeta }{ \\zeta_n\\zeta_m } \\in \\mu_{nm}(\\C).\n\n\t\t\t\t(\\zeta^m)^n = 1 \\imp \\zeta^m \\in \\mu_n(\\C).\n\n\t\t\t\t\\zeta^m = \\zeta_n^m \\logicand gcd(m,n) = 1 \\imp \\ex{ r \\in \\N }{ (\\zeta^m)^r = \\zeta_n }.\n\n\t\t\t\t\\zeta_n \\in \\Q(\\zeta).\n\n\t\t\t\t\\similarly{ \\zeta_m \\in \\Q(\\zeta) }.\n\n\t\t\t\t\\Q(\\zeta_n,\\zeta_m) \\subset \\Q(\\zeta) \n\t\t\t}\n\t\t\t\\leftinclusion\n\t\t\t{\n\t\t\t\t\\zeta_n, \\zeta_m \\in \\Q(\\zeta_n,\\zeta_m) \\imp \\zeta_n\\zeta_m \\in \\Q(\\zeta_n,\\zeta_m)\n\t\t\t}\n\t\t}\n\t\t\n\t\t\\step{ 2 }{ \\Q(\\zeta_n) \\cap \\Q(\\zeta_m) = \\Q }\n\t\t{\n\t\t\t[\\Q(\\zeta_n,\\zeta_m):\\Q] = [\\Q(\\zeta_m,\\zeta_n):\\Q(\\zeta_n)][ \\Q(\\zeta_n) : \\Q].\n\n\t\t\t\\phin(mn) = [\\Q(\\zeta_m,\\zeta_n) : \\Q(\\zeta_n)]\\phin(n).\n\n\t\t\t[\\Q(\\zeta_m,\\zeta_n): \\Q(\\zeta_n) ] = \\phin(m).\n\n\t\t\t[\\Q(\\zeta_m,\\zeta_n): \\Q(\\zeta_n) ] \\leq [ \\Q(\\zeta_m) : \\Q(\\zeta_m) \\cap \\Q(\\zeta_n) ] \\leq [\\Q(\\zeta_m) : \\Q ].\n\n\t\t\t\\phin(m) \\leq [ \\Q(\\zeta_m) : \\Q(\\zeta_m) \\cap \\Q(\\zeta_n) ] \\leq \\phin(m).\n\n\t\t\t[ \\Q(\\zeta_m) : \\Q(\\zeta_m) \\cap \\Q(\\zeta_n) ] = \\phin(m).\n\n\t\t\t[\\Q(\\zeta_m) \\cap \\Q(\\zeta_n) : \\Q ] = \\frac{ \\phin(m) }{ \\phin(m) } = 1.\n\n\t\t\t\\Q(\\zeta_m) \\cap \\Q(\\zeta_n) = \\Q \n\t\t}\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Inclusion of Quadratic extensions in Simple extensions $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ quadratic extension $\n\t}\n\t\\proposition\n\t{\n\t\tK \\extends k $ simple extension $\n\t}\n\t\\demonstration\n\t{\n\t\t[K : k ] = 2 \\imp \\ex{ \\theta \\in K }{ \\theta \\nin k }.\n\n\t\tk \\subset k(\\theta) \\subset K.\n\n\t\t2 = [ K : k ] = [ K : k(\\theta) ][k(\\theta):k].\n\n\t\t\\theta \\nin k \\imp [k(\\theta):k] \\geq 2.\n\n\t\t[K : k(\\theta) ] = 1 \\imp K = k(\\theta).\n\n\t\tK \\extends k $ simple extension $\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Inclusion of Quadratic extensions in Normal extensions $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ quadratic extension $\n\t}\n\t\\proposition\n\t{\n\t\tK \\extends k $ normal extension $\n\t}\n\t\\demonstration\n\t{\n\t\tK \\extends k $ quadratic extension $ \\imp K \\extends k $ simple extension $.\n\n\t\t\\ex{ \\theta \\in K }{ \\theta \\nin k \\logicand K = k(\\theta) }.\n\n\t\t\\set{ 1,\\theta } k$-base of $ K.\n\n\t\t\\set{ 1,\\theta, \\theta^2 } $ linearly dependent $ \\imp \\ex{ b,c \\in k }{ \\theta^2 + b\\theta + c = 0 }.\n\n\t\t\\be{ \\eta }{ 2\\theta + b } \\in K.\n\n\t\t\\eta^2 = 4\\theta^2 + 4b\\theta + b^2 = b^2 - 4c \\in k.\n\n\t\t\\be{ f(X) }{ X^2 - \\eta^2 } \\in k[X].\n\n\t\t\\eta, -\\eta \\nin k \\imp f(X) $ irreducible over $ k[X].\n\n\t\t\\eta, -\\eta \\in K \\imp f(X) $ splits over $ K[X].\n\n\t\tk(\\theta) $ splitting field of $ f(X).\n\n\t\tK \\extends k $ normal extension $\n\n\t}\n\t\\newpage\n\n}\n\n\n\n\\section{ $ Galois Group $ }\n{\n\t\n\t\\subsection{ $ Determination of automorphisms by minimal polynomial root images $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ algebraic extension $.\n\t\tf(X) $ irreducible over $ 
k[X].\n\t\t\\theta, \\theta' \\in K $ roots of $ f(X) \n\t}\n\t\\proposition\n\t{\n\t\t\\ex{ ! \\; \\function{ \\sigma }{ k(\\theta) }{ k(\\theta') } }{ \\sigma \\; k$-isomorphism $ \\logicand \\sigma(\\theta) = \\theta' }\n\t}\n\t\\demonstration\n\t{\n\t\t$ Follow 2 steps $.\n\t\t\n\t\t\\step{ 1 }{ $ Existence of $ \\sigma }\n\t\t{\n\t\t\tIrr(\\theta,k)(X) = Irr(\\theta',k)(X) = f(X).\n\n\t\t\t\\exists \\rho: k[X]/(f(X)) \\to k(\\theta) $ isomorphism $.\n\n\t\t\t\\exists \\rho': k[X]/(f(X)) \\to k(\\theta') $ isomorphism $.\n\n\t\t\t\\be{ \\sigma }{ \\rho' \\circ \\rho^{-1} } \\in \\functionspace{ k(\\theta) }{ k(\\theta') }.\n\n\t\t\t\\sigma $ isomorphism $ \\logicand \\sigma(\\theta) = \\theta' \t\t\t\n\t\t}\n\t\t\n\t\t\\step{ 2 }{ \\sigma $ determined by $ \\theta $'s image $ }\n\t\t{\n\t\t\t\\all{ \\function{ \\sigma' }{ k(\\theta) }{ k(\\theta') } }[ \\sigma' \\; k$-isomorphism $ \\logicand \\sigma'(\\theta) = \\theta' ]\n\t\t\t{\n\t\t\t\t\\all{ x \\in k(\\theta) }\n\t\t\t\t{\n\t\t\t\t\t\\ex{ \\family{ a_i }{ i }[ 0 ][ r ] \\subset k }{ x = \\sumatory{ a_i\\theta^i }{ i }[ 0 ][ r ] }.\n\n\t\t\t\t\t\\sigma'(x) = \\sumatory{ \\sigma'(a_i)\\sigma'(\\theta)^i }{ i }[ 0 ][ r ] = \\sumatory{ a_i\\theta'^i }{ i }[ 0 ][ r ].\n\n\t\t\t\t\t\\sigma(x) = \\sigma'(x)\n\t\t\t\t}\n\n\t\t\t\t\\sigma' = \\sigma\n\t\t\t}\n\t\t}\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Inclusion of immersions in automorphisms $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ algebraic extension $.\n\t\t\\function{ \\sigma }{ K }{ K } k$-immersion$\n\t}\n\t\\proposition\n\t{\n\t\t\\sigma \\in Aut(K)\n\t}\n\t\\demonstration\n\t{\n\t\t\\sigma $ field morphism $ \\imp \\sigma $ injective $.\n\n\t\t$ Separate 2 cases: $.\n\n\t\t\\case{ K \\extends k $ finite $ }\n\t\t{\n\t\t\t\\sigma $ injective endomorphism of a finite dimensional $ k$-vector space $ \\imp \\sigma $ surjective $\n\t\t}\n\t\t\\case{ K \\extends k $ not finite $ }\n\t\t{\n\t\t\t\\all{ \\theta' \\in K }\n\t\t\t{\n\t\t\t\t\\be{ f(X) }{ Irr(\\theta',k)(X) }.\n\n\t\t\t\t\\be{ K' }{ k( \\set{ \\theta \\in K }[ f(\\theta) = 0 ] ) }.\n\n\t\t\t\tK' \\extends k $ finite $ \\logicand \\sigma(K') \\subset K' \\imp \\sigma|_{K'} $ $ k$-automorphism of $ K'.\n\n\t\t\t\t\\ex{ \\theta \\in K' }{ \\sigma(\\theta) = \\theta' }\n\t\t\t}\n\n\t\t\t\\sigma $ surjective $\n\t\t}\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Fundamental theorem of Galois theory $ }\n\t\n\t\\letbe\n\t{\n\t\tK \\extends k $ Galois extension $.\n\t\tS(Gal(K \\extends k)) $ set of all subgroups of $ Gal(K \\extends k).\n\t\tC(K \\extends k) $ set of all subfields between $ K \\extends k\n\t}\n\t\\proposition\n\t{\n\t\t\\ex{ \\phi \\in \\functionspace{ S(Gal(K \\extends k)) }{ C(K \\extends k) } }{ \\phi $ bijective $ }\n\t}\n\t\\demonstration\n\t{\n\t\tdemonstration.\n\t}\n\t\\newpage\n\n}\n\\section{ $ Finite Fields $ }\n{\n\n\t\\subsection{ $ Fermat's little theorem $ }\n\t\n\t\\letbe\n\t{\n\t\t\\F_{p^n} $ finite field $\n\t}\n\t\\proposition\n\t{\n\t\t\\all{ x \\in \\F_{p^n} }[ x \\neq 0 ]\n\t\t{\n\t\t\tx^{p^n-1} = 1\n\t\t}.\n\n\t\t\\all{ x \\in \\F_{p^n} }\n\t\t{\n\t\t\tx^{p^n} = x\n\t\t}\n\t}\n\t\\demonstration\n\t{\n\t\t$ Separate 2 cases: $.\n\t\t\n\t\t\\case{ x = 0 }\n\t\t{\n\t\t\t0^{p^n} = 0\n\t\t}\n\t\t\\case{ x \\neq 0 }\n\t\t{\n\t\t\t\\order{ x } \\divides \\card{ \\F_{p^n}^* } = p^n-1.\n\n\t\t\tx^{p^n-1} = 1 \\imp x^{p^n} = x\n\t\t}\n\t}\n\t\\newpage\n\n\t\\subsection{ $ Polynomials over finite prime fields $ }\n\t\n\t\\letbe\n\t{\n\t\t\\F_p $ finite field $.\n\t\tf(X) \\in \\F_p[X]\n\t}\n\t\\proposition\n\t{\n\t\tf(X)^p = f(X^p)\n\t}\n\t\\demonstration\n\t{\n\t\t\\all{ i \\in \\indexes{1}{p-1} }\n\t\t{\n\t\t\tp \\divides \\combinatory{p}{i} \\imp \\combinatory{p}{i} \\congruent{p} 0 \n\t\t}\n\n\t\t\\all{ a,b \\in \\F_p }\n\t\t{\n\t\t\t(a+b)^p = \\sumatory{ \\combinatory{p}{i} a^ib^{p-i} }{ i }[ 0 ][ p ] \\congruent{p} a^p + b^p\n\t\t}\n\n\t\t\\be{ n }{ deg(f(X)) } \\in \\N.\n\n\t\t\\ex{ \\family{ a_i }{ i }[ 0 ][ n ] \\subset \\F_p }{ f(X) = \\sumatory{ a_iX^i }{ i }[ 0 ][ n ] }.\n\n\t\tf(X)^p = \\sumatory{ a_i^pX^{pi} }{ i }[ 0 ][ n ].\n\n\t\t$Fermat's little theorem$ \\imp f(X)^p \\congruent{p} \\sumatory{ a_i(X^p)^i }{ i }[ 0 ][ n ] \\congruent{p} f(X^p) \n\t}\n\t\\newpage\n\t\n\n\t\\subsection{ $ Classification of 
characteristic and order $ }\n\t\n\t\\letbe\n\t{\n\t\t\\F $ finite field $\n\t}\n\t\\proposition\n\t{\n\t\t\\ex{ p \\in \\N }{ p $ prime $ \\logicand char(\\F) = p }.\n\n\t\t\\ex{ r \\in \\N }{ \\# \\F = p^r }\n\t}\n\t\\demonstration\n\t{\n\t\t\\ex{ C \\substructure \\F }{ C \\isomorph \\Z/p\\Z \\logicand p $ prime $ } $ (prime subfield) $.\n\n\t\tchar(\\F) = p.\n\n\t\t\\F $ $ C$-vector space $.\n\n\t\t\\be{ r }{ dim_C \\F } \\in \\N.\n\n\t\t\\# \\F = p^r\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Cyclotomic finite fields $ }\n\t\n\t\\letbe\n\t{\n\t\tp \\in \\N $ prime $.\n\t\td \\in \\N \\suchthat p \\ndivides d.\n\t\t\\zeta_d \\in \\mu_d(\\cerr{\\F_p})\n\t}\n\t\\proposition\n\t{\n\t\t\\ex{ r \\in \\N }{ \\F_p(\\zeta_d) = \\F_{p^r} }.\n\n\t\t\\all{ s \\in \\N }[ d \\divides p^s - 1 ]\n\t\t{\n\t\t\tr \\divides s\n\t\t}\n\t}\n\t\\demonstration\n\t{\n\t\t\\be{ r }{ dim_{\\F_p} \\F_p(\\zeta_d) } \\in \\N.\n\n\t\t\\# \\F_p(\\zeta_d) = p^r \\imp \\F_p(\\zeta_d) = \\F_{p^r}.\n\n\t\t\\F_{p^r}^* $ cyclic $ \\logicand \\# \\F_{p^r}^* = p^r - 1 \\imp d \\divides p^r - 1.\n\n\t\t\\all{ s \\in \\N }[ d \\divides p^s - 1 ]\n\t\t{\n\t\t\t\\zeta_d \\in \\mu_{p^s-1}(\\cerr{\\F_p}) \\imp \\zeta_d \\in \\F_{p^s}.\n\n\t\t\t\\F_{p^r} \\subset \\F_{p^s} \\imp r \\divides s\n\t\t}\n\t}\n\t\\newpage\n\n\n\t\\subsection{ $ Determination of Galois group over Finite fields $ }\n\t\n\t\\letbe\n\t{\n\t\tq \\in \\N $ prime power $.\n\t\tn \\in \\N\n\t}\n\t\\proposition\n\t{\n\t\tGal( \\F_{q^n} \\extends \\F_q ) = < \\phin_q >\n\t}\n\t\\demonstration\n\t{\n\t\t[\\F_{q^n} : \\F_q] = n \\imp \\# Gal( \\F_{q^n} \\extends \\F_q ) \\leq n.\n\n\t\t\\ex{ \\zeta \\in \\mu_{q^n-1}(\\cerr{\\F_q}) }{ \\F_{q^n} = \\F_q(\\zeta) }.\n\n\t\t\\all{ k \\in \\N }\n\t\t{\n\t\t\t\\phin_q^k(\\zeta) = \\zeta^{q^k}.\n\n\t\t\t\\phin_q^k = Id \\ifandonlyif \\zeta^{q^k} = \\zeta \\ifandonlyif n \\divides k\n\t\t}\n\n\t\t\\order{ \\phin_q } = n.\n\n\t\t\\# Gal( \\F_{q^n} \\extends \\F_q ) = n \\imp Gal( \\F_{q^n} \\extends \\F_q ) = < \\phin_q >\n\t}\n\t\\newpage\n}\n\n% \\subsection{Cyclotomic characters are group isomorphisms}\n\n% \\sean\n\n% \\item $\\f{\\chi_n}{Gal(\\Q(\\zeta_n) | \\Q ) }{ \\Z/n\\Z }$ $n$-th cyclotomic character\n\n% \\teorema \n\n% \\item $\\chi_n$ group isomorphism\n\n% \\demostracion\n\n% \\item Well defined\n\n% $\\sigma|_\\Q(\\zeta_n)^*$ group automorphism\n\n% $\\order{ \\sigma( \\zeta_n ) } = \\order{ \\zeta_n } \\rightarrow \\sigma( \\zeta_n ) \\in \\mu_n(\\Q(\\zeta_n))$\n\n% $\\nex{ r }{ (\\Z/n\\Z)^* }{ \\sigma(\\zeta_n) = \\zeta_n^r }$\n\n% $\\chi(\\sigma) \\in \\Z/n\\Z$\n\n% \\item Independent of the chosen $n$-th root\n\n% \\nall{ \\zeta_n' }{ \\mu_n(\\C) }{\n\n% $\\nex{ r }{ (\\Z/n\\Z)^* }{ \\zeta_n' = \\zeta_n^r }$\n\n% $\\sigma(\\zeta_n') = \\sigma(\\zeta_n^r) = \\sigma(\\zeta_n)^r = \\zeta_n^{\\chi(\\sigma)r} = \\zeta_n'^{\\chi(\\sigma)}$\n\n% }\n\n% \\item Group morphism\n\n% \\nall{ \\sigma, \\tau }{ Gal( \\Q(\\zeta_n) | \\Q ) }{\n\n% $(\\zeta_n)^{\\chi(\\sigma\\circ\\tau)} = (\\sigma\\circ\\tau)(\\zeta_n) = \\sigma(\\tau(\\zeta_n)) = \\sigma(\\zeta_n^{\\chi(\\tau)}) = \\sigma(\\zeta_n)^{\\chi(\\tau)} = \\zeta_n^{\\chi(\\tau)\\chi(\\sigma)}$\n\n% $ \\chi( \\sigma \\circ \\tau ) = \\chi(\\sigma) \\chi(\\tau)$\n\n% }\n\n% \\item Injective\n\n% \\nall{ \\sigma, \\tau }{ Gal( \\Q(\\zeta_n) | \\Q ) }{\n\n% $\\chi(\\sigma) = \\chi(\\tau)$\n\n% $\\sigma(\\zeta_n) = \\tau(\\zeta_n)$\n\n% \\nall{ q }{ \\Q }{\n\n% $\\sigma( q ) = \\tau( q ) = q$\n\n% }\n\n% $\\sigma = \\tau$\n\n% }\n\n% \\item Surjective: \\nall{ r }{ (\\Z/n\\Z)^* }{\n\n% $\\ex \\f{\\rho}{\\Q(\\zeta_n)}{\\Q(\\zeta_n)} \\tq \\rho(\\zeta_n) = \\zeta_n^r$\n\n% }\n\n% \\fin{}\n\n% \\subsection{The subfield of cyclotomic fields fixed by every automorphism}\n\n% \\sean\n\n% \\item $\\zeta_n \\in \\mu_n(\\C)$\n% \\item $K < \\Q(\\zeta_n)$\n\n% \\teorema \n\n% \\item $K$ fixed by every automorphism of $\\Q(\\zeta_n) \\leftrightarrow K = \\Q$\n\n% \\demostracion\n\n% \\item $\\sea{ \\zeta }{\\zeta_n}$\n\n% $\\sea{ G }{ Gal( \\Q(\\zeta) | \\Q ) }$\n\n% $\\sea{K}{ \\Q(\\zeta)^G }$\n\n% $\\Q \\subset K \\rightarrow K(\\zeta) = \\Q(\\zeta)$\n\n% $\\sea{ f(X) }{ Irr(\\zeta,K)(X) }$\n\n% \\nall{ \\sigma }{ G }{\n\n% $f(\\sigma(\\zeta)) = \\sigma(f(\\zeta)) = \\sigma(0) = 0$\n\n% }\n\n% \\nall{ \\zeta' }{ \\mu_n(\\C) }{\n\n% $\\zeta'$ root of $f(X)$\n\n% }\n\n% $deg(f(X)) > \\phin(n) \\rightarrow [\\Q(\\zeta) : K ] > \\phin(n)$\n\n% $[\\Q(\\zeta) : K ] \\leq [\\Q(\\zeta) : \\Q ] \\rightarrow [\\Q(\\zeta) : K ] = \\phin(n)$\n\n% $[K:\\Q] = \\frac{[\\Q(\\zeta) : \\Q]}{[\\Q(\\zeta) : K ]} = 1$\n\n% $K = \\Q$\n\n% \\fin{\\newpage}\n\n% \\section{Galois group}\n\n% \\subsection{There exists a unique k-isomorphism sending one root of an extension to the other 
root}\n\n% \\sean\n\n% \\item $K \\ext k$, $L \\ext k$ algebraic field extensions\n% \\item $f(X) \\in k[X]$ irreducible\n% \\item $\\theta \\in K$, $\\theta' \\in L$ roots of f(X)\n\n% \\teorema \n\n% \\item $ \\ex ! \\f{ \\sigma }{ k(\\theta) }{ k(\\theta') } \\tq \\sigma k$-isomorphism $\\y \\sigma(\\theta) = \\theta'$\n\n% \\demostracion\n\n% \\item Existence\n\n% $\\ex \\f{\\sigma}{k[X]/(Irr(\\theta,k)(X))}{k(\\theta)}$ isomorphism\n\n% $\\ex \\f{\\sigma'}{k[X]/(Irr(\\theta',k)(X))}{k(\\theta')}$ isomorphism\n\n% $\\sea{\\rho}{\\sigma' \\circ \\sigma^{-1}}$\n\n% $\\rho$ isomorphism from $k(\\theta)$ to $k(\\theta')$\n\n% \\item Uniqueness\n\n% $\\sigma$ $k$-immersion $\\rightarrow \\all a \\in k: \\sigma(a) = a$\n\n% $\\threepartfunction{ \\sigma : K}{ K }{ a \\in k }{ a }{ \\theta }{ \\theta' }$\n\n% It is the unique $k$-immersion\n\n% \\fin{\\newpage}\n\n% \\subsection{Every k-immersion of K into K is a k-automorphism}\n\n% \\sean\n\n% \\item $K \\ext k$ algebraic field extension\n% \\item $\\f{\\sigma}{ K }{ K } k-$immersion\n\n% \\teorema \n\n% \\item $\\sigma$ $k$-automorphism\n\n% \\demostracion\n\n% \\item $\\sigma$ ring morphism $\\rightarrow \\sigma$ injective\n\n% Case $K | k$ finite:\n\n% $\\sigma$ injective endomorphism of a finite dimensional vector space $\\rightarrow \\sigma$ surjective\n\n% Case $K | k$ not finite:\n\n% \\nall{ \\theta' }{ K }{\n\n% $\\sea{ f(X) }{ Irr(\\theta',k)(X) }$\n\n% $\\sea{ K' }{ <k, \\theta_i>_{i=1}^r} \\tq \\theta_i $ root of $f(X) \\y \\theta_i \\in K$\n\n% $K'|k$ algebraic $\\rightarrow K'|k$ finite\n\n% $\\sea{ \\sigma' }{ \\sigma|_{K'} }$\n\n% $\\sigma'$ $k$-immersion of $K'$ into $K$\n\n% $\\sigma(K') \\subset K' \\rightarrow \\sigma$ $k$-immersion of $K'$ into $K'$\n\n% $K' \\ext k$ finite $\\rightarrow \\sigma'$ $k$-automorphism\n\n% $\\nex{ \\theta }{ K' }{ \\sigma(\\theta) = \\theta' }$\n\n% }\n\n% $\\sigma$ surjective\n\n% \\fin{\\newpage}\n\n% \\section{ Finite fields }\n\n% \\subsection{The characteristic of a finite field is prime and its cardinality a power of that prime}\n\n% \\sean\n\n% \\item $\\F$ finite field\n\n% \\teorema \n\n% \\item $\\nex{ p }{ \\N }{ p $ prime $\\y char(F) = p}$\n\n% \\item $\\nex{ r }{ \\N }{ \\#F = p^r }$\n\n% \\demostracion\n\n% \\item $\\ex C $ field $\\tq C \\iso \\Z/p\\Z \\y C < \\F$\n\n% $char(\\F) = p$\n\n% $\\F$ is a $C$-vector space\n\n% $\\sea{ r }{ dim_C \\F }$\n\n% $\\#F = p^r$\n\n% \\fin{}\n\n% \\subsection{Finite fields of prime order extended by a root of unity of order not a multiple of p are finite fields of order a power of that prime}\n\n% \\sean\n\n% \\item $p \\in \\N$ prime\n% \\item $d \\in \\N \\tq p\\nmid d$\n% \\item $\\zeta_d \\in \\mu_d(\\cerr{\\F_p})$\n\n% \\teorema \n\n% \\item $\\nex{ r }{ \\Nm }{ \\F_p(\\zeta_d) = \\F_{p^r}}$\n\n% \\item $\\all s \\in \\N \\tq d | p^s -1$: $r|s$\n\n% \\demostracion\n\n% \\item $\\F_p(\\zeta) \\ext \\F_p$\n\n% $\\sea{ r }{ dim_{\\F_p} \\F_p(\\zeta) }$\n\n% $\\# F_p(\\zeta) = p^r$\n\n% $\\F_{p^r}^*$ cyclic $\\y \\#\\F_{p^r}^* = p^r -1 $\n\n% $d | p^r - 1$\n\n% \\item \\alltq{ s }{ \\N }{ d | p^s -1 }{\n\n% $\\zeta \\in \\mu_{p^s-1}(\\cerr{F_p}) \\rightarrow \\zeta \\in \\F_{p^s}$\n\n% $\\F_{p^r} \\subset \\F_{p^s} \\rightarrow p^r | p^s$\n\n% $r|s$\n\n% }\n\n\n% \\fin{\\newpage}\n\n% \\subsection{Galois groups over finite fields are generated by the Frobenius automorphism}\n\n% \\sean\n\n% \\item $q \\in \\N$ prime\n% \\item $n \\in 
\\N$\n\n% \\teorema \n\n% \\item $Gal( \\F_{q^n} | \\F_q ) = < \\phin_q>$\n\n% \\demostracion\n\n% \\item $[\\F_{q^n} : \\F_q ] = n$\n\n% $\\# Gal( \\F_{q^n} | \\F_q ) \\leq [\\F_{q^n} : \\F_q ] = n$\n\n% \\item $\\order{ \\phin_q } = n$:\n\n% $\\ex{ \\zeta }{ \\mu_{q^n-1}(\\cerr{\\F_q}) }{ \\F_{q^n} = \\F_q(\\zeta)}$\n\n% \\nall{ k }{ \\Nm }{\n\n% $\\phin_{q}^k(\\zeta) = \\zeta^{q^k} = \\phin_{q^k}(\\zeta)$\n\n% $\\phin_q^k = Id \\leftrightarrow \\zeta^{q^k} = \\zeta$\n\n% $\\order{ \\zeta } = q^n-1 \\rightarrow n | k$\n\n% }\n\n% $\\order{\\phin_q} = n$\n\n% $\\# Gal( \\F_{q^n} | \\F_q ) = n$\n\n% $Gal( \\F_{q^n} | \\F_q ) =  < \\phin_q>$\n% \\fin{\\newpage}\n\n% \\subsection{Galois theory over finite fields}\n\n% \\sean\n\n% \\item $\\sea{ \\mathcal{S}(q^n,q) }{ \\conj{ H \\subset Gal( \\F_{q^n} \\ext \\F_q ) }{ H \\sub Gal ( \\F_{q^n} \\ext \\F_q )}}$\n% \\item $\\sea{ \\mathcal{C}(q^n,q) }{ \\conj{ K \\ext \\F_q }{ \\F_{q^n} \\ext K }}$ \n\n% \\teorema \n\n% \\item $\\ex \\f{ \\phi }{ \\mathcal{S}(q^n, q) }{ \\mathcal{C}(q^n,q)} \\tq \\phi$ bijective\n\n% \\demostracion\n\n% \\item \\alltq{ d }{ \\N }{ d | n}{\n\n% $Gal(\\F_{q^n} \\ext \\F_{q^d} ) < Gal(\\F_{q^n} \\ext \\F_{q} )$\n\n% }\n\n% $\\sea{ \\phi(\\F_{q^d} \\ext \\F_{q}) }{ Gal(\\F_{q^n} \\ext \\F_{q^d} ) }$\n\n% \\item $\\all H < G:$\n\n% $\\sea{ \\phi^{-1}(H) }{ \\F_{q^n}^H | \\F_q }$\n\n% \\item Mutual inverses\n\n% $\\all H < G:$\n\n% $\\ex d \\in \\N \\tq d|n \\y H = <\\phin_q^d>$\n\n% $\\F_{q^n}^{<\\phin_q^d>} = \\F_{q^d}$\n\n% $\\phi^{-1}(<\\phin_q^d>) = \\F_{q^d} \\ext \\F_q$\n\n% $\\phi(\\F_{q^d} \\ext \\F_q) = Gal(\\F_{q^n} \\ext \\F_{q^d}) = <\\phin_q^d>$\n\n% \\item $\\phi \\circ \\phi^{-1} = Id$\n\n% \\fin{}\n\n\\end{document}\n", "meta": {"hexsha": "d101646dd890acf98ed122fd8036396861c84970", "size": 30922, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Samples/Algebra/Algebraic Equations/New/algebraic equations.tex", "max_stars_repo_name": "martin-azpillaga/Matex", "max_stars_repo_head_hexsha": "ed1d6ee40dbb7c15eb27e5296fbd61feeb13fc58", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Samples/Algebra/Algebraic Equations/New/algebraic equations.tex", "max_issues_repo_name": "martin-azpillaga/Matex", "max_issues_repo_head_hexsha": "ed1d6ee40dbb7c15eb27e5296fbd61feeb13fc58", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Samples/Algebra/Algebraic Equations/New/algebraic equations.tex", "max_forks_repo_name": "martin-azpillaga/Matex", "max_forks_repo_head_hexsha": "ed1d6ee40dbb7c15eb27e5296fbd61feeb13fc58", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.9570267131, "max_line_length": 187, "alphanum_fraction": 0.5210853114, "num_tokens": 13410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.6992544210587586, "lm_q1q2_score": 0.5700549223701233}}
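As a quick numerical sanity check of M\u00f6bius' lemma above, a minimal pure-Python sketch; the helper names \texttt{mobius} and \texttt{divisors} are ad hoc, and the check only covers a finite range:

\begin{verbatim}
# Check sum_{d|n} mu(d) = 0 for n > 1, the identity behind the
# recursive formula for cyclotomic polynomials.

def mobius(n):
    """0 if n has a squared prime factor, else (-1)^(#distinct primes)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor
                return 0
            result = -result
        p += 1
    if n > 1:                   # one prime factor left over
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in range(2, 200):
    assert sum(mobius(d) for d in divisors(n)) == 0
print("sum over d|n of mu(d) = 0 holds for 1 < n < 200")
\end{verbatim}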
{"text": "\\section{Clustering}\n\\smallskip \\hrule height 2pt \\smallskip\n\n\\begin{itemize}\n\t\\item Unsupervised learning: detect patterns in unlabeled data. \n\t\tSometimes labels are too expensive, unclear, etc. to get them.\n\t\tExamples:\n\t\t\\begin{itemize}\n\t\t\t\\item group e-mails or search results\n\t\t\t\\item find categories of customers\n\t\t\t\\item detect anomalous program execuations\n\t\t\\end{itemize}\n\t\\item Useful when you don't know what you are looking for. \n\t\\item Requires a definition of \"similar\".  One option: small (squared) euclidean distance. \n\t\\item You can label then use the clusters, or use the clusters for the next level of analysis.\n\\end{itemize}\n\n\\subsection{K-Means}\n\\begin{itemize}\n\t\\item An iterative clustering algorithm.  \\hfill \\\\\n\t\\item No step size.  Discrete optimization. \n\t\\item Hard assignments.  Each point gets classified by one and only one cluster. \n\t\\item will converge, but may converge on local (not global) optimum.\n\t\t\\begin{itemize}\n\t\t\t\\item Every time you start the algorithm, you could end up n a different place.\n\t\t\t\\item Can run it a bunch of times. % week 9 audio\n\t\t\t\\item you are running a non-convex optimization: your final output is dependent on your initialization.\n\t\t\\end{itemize}\n\t\\item you have to chose a number of clusters. \n\t\\item Objective: minimize the distances between each point and closest center. \n\t\\item You want your output to have a large distance between clusters and a small distance between points in a cluster.  (intra vs inter cluster distance).\n\tYou want it to latch onto clumps of the data that are far apart from each other. % week 9 audio\n\t\t\\begin{itemize}\n\t\t\t\\item intra: \\hfill \\\\\n\t\t\t\tE.g. measure $|x_i - c_i|_2^2$ for each cluster. \n\t\t\t\\item inter:  \\hfill \\\\\n\t\t\t\tDist between closest two points in different clusters. \\hfill \\\\\n\t\t\t\tDistance between means.  \\hfill \\\\\n\t\t\t\tStandard deviation of cluster distances. \\hfill \\\\\n\t\t\\end{itemize}\n\\end{itemize}\t\n\t\nPick K random points as cluster means: $c^1, \\dots, c^k$.  \\hfill \\\\\nAlternate:\n\\begin{itemize}\n\t\\item Assign each example $x^i$ to the mean $c^i$ that is closest to it\n\t\\item Set each mean $c^i$ to the average of its assigned points. \n\\end{itemize}\nStop when no points' assignments change. \\hfill \\\\\n\nMinimizing a loss that is a function of the points, assignments, and means:\n$$ L( \\{ x*i \\},  \\{ a*j \\},   \\{ c*k \\}) = \\sum_i dist(x^i, c^{a^i})$$\nCoordinate gradient descent on L.  
\\hfill \\\\\n\nMore formally: \n\\begin{itemize}\n\t\\item Data: $\\{ x^j | j = 1 \\dots n \\}$\n\t\\item For $ t = 1 \\dots T$:  (or stop if assignments don't change): \\hfill \\\\\n\t\tFix means ($c$) while you change the assignments ($a$): \\hfill \\\\\n\t\t\\begin{itemize}\n\t\t\t\\item for $ j = 1 \\dots n$: (recompute cluster assignments):\n\t\t\t\t$$ a^j = \\argmin_i dist(x^j, c^i)  $$ \n\t\t\\end{itemize}\n\t\\item fix assignments ($a$) while you change the means ($c$): \\hfill \\\\\n\t\tfor $j = 1 \\dots k$: (recompute cluster centers)\n\t\t$$ c^j = \\frac{1}{|\\{ i | a^i = j \\}|}  \\sum_{\\{ i | a^i = j \\}} x^i$$\n\\end{itemize}\nNote:  the point $y$ with minimum squared Euclidean distance to a set of points $\\{x\\}$ is their mean.\n\n\\includegraphics[width=3.6in]{figures/kmeans_algorithm_example.pdf}\n\n\\subsubsection{K-Means gets stuck in local optima.}\n\n\\includegraphics[width=1.8in]{figures/k-means_gets_stuck.pdf}\n\n\\subsection{Agglomerative Clustering}\nWill not converge to a global optimum. \\hfill \\\\  % TA 3/15/2016\nWill not always find the true patterns in the data.  \\hfill \\\\  % TA 3/15/2016\nIt has to deal with the issue of local optima just like k-means (which can prevent you from finding the "true" patterns).\n\n\nFirst merge very similar instances. \\hfill \\\\\nThen incrementally build larger clusters out of smaller clusters. \\hfill \\\\\nBy choosing the distance at which we stop merging, we can control the number of clusters:\n\\begin{itemize} \n\t\\item Distance = 0 means each point is its own cluster\n        \\item Distance = infinity --> all points in one cluster. \n\\end{itemize}\n\n\\includegraphics[width=1.0in]{figures/agg_clustering.pdf}\n\nAlgorithm (a single-linkage code sketch appears at the end of this subsection):\n\\begin{itemize}\n\t\\item Maintain a set of clusters.\n\t\\item Initially each instance is its own cluster\n\t\\item Repeat:\n\t\t\\begin{itemize}\n\t\t\t\\item pick the two closest clusters\n\t\t\t\\item merge them into a new cluster\n\t\t\t\\item stop when there is only one cluster left. \n\t\t\\end{itemize}\n\t\\item produces not one clustering, but a family of clusterings represented by a dendrogram.\n\t\t\n\t\t\\includegraphics[width=1.0in]{figures/dendogram.pdf}\n\\end{itemize}\n\n\\includegraphics[width=2.7in]{figures/agglomerative_clustering_distance_options.pdf}\n\nThe intra- and inter-cluster distances give a metric of how well the clustering algorithm works:\n$$ S_1 = \\sum_{j=1}^{k} \\sum_{i : a^i = j} \\|x^i - c^j\\|_2^2 + \\sum_{i \\neq j} \\|c^i - c^j\\|_2^2 $$ % wk 9 audio\n\nYou have to be extremely lucky to find a data set where the result isn't dependent on the start,\nso you can run it a bunch of times. % week 9 audio\nFor each pair of points, we have a vote:  \\hfill \\\\\nDo they belong to the same cluster? \\hfill \\\\\nLook at the score from the clustering algorithm. \\hfill \\\\\nWe get a full graph where edge scores are "do they belong to the same \t\t\t\t\tcluster"  (and more audio I missed?) \\hfill \\\\\nThen you need to find out which components are connected.  \\hfill \\\\\n\nFor each pair of points, we have an edge distance.  \\hfill \\\\\nWithin-cluster edges should be strong edges.    \\hfill \\\\\nThis strength should be common across clustering results.  \\hfill \\\\\nCut the graph into three pieces.    \\hfill \\\\\nThe score of the cut is the summation of the edges you break.   \\hfill \\\\\nCutting edges with small scores is good.   
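The promised sketch of the merge loop above, assuming single linkage (cluster distance = closest pair of points between clusters, one of the distance options pictured); it is a naive $O(n^3)$ version, for illustration only:

\begin{verbatim}
import numpy as np

def single_linkage(X, num_clusters):
    """Merge the two closest clusters until num_clusters remain."""
    clusters = [[i] for i in range(len(X))]   # each point starts alone
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    while len(clusters) > num_clusters:
        best_a, best_b, best_d = 0, 1, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: minimum pairwise point distance.
                d = D[np.ix_(clusters[a], clusters[b])].min()
                if d < best_d:
                    best_a, best_b, best_d = a, b, d
        clusters[best_a] += clusters.pop(best_b)   # merge the closest pair
    return clusters
\end{verbatim}

Recording the merge order and merge distances instead of stopping early yields the dendrogram.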
\\hfill \\\\\n\n\n\\subsection{Probabilistic Clustering}\n\\begin{itemize}\n\t\\item Can use a probabilistic model that allows cluster overlaps, clusters of different sizes, etc.\n\t\\item You can tell a generative story for the data.  \\hfill \\\\\n\t\t$P(X|Y) P(Y)$ is common. \n\t\\item The challenge: estimate model parameters without labeled data. \n\\end{itemize}\n\n\\subsection{Gaussian Mixture Models}\n\\begin{itemize}\n\t\\item We have clumps of data. Each clump is described with a Gaussian.  % wk 10 audio\n\t\\item Like softening k-means.  You belong to cluster 1 with a score of 0.1, cluster 2 with score of 0.3, cluster 3 with score of 0.6\n\t\\item Think of clusters as probabilistic. \n\t\\item Assume m-dimensional data points.\n\t\\item P(Y) is still multinomial, with k classes.\n\t\\item $P(\\mathbb{X} | Y=i), i=1 \\dots k$ are $k$ multivariate Gaussians.\n\t\t\\begin{itemize}\n\t\t\t\\item mean $\\mu_i$ is an m-dimensional vector.\n\t\t\t\\item variance $\\Sigma_i$ is an $m$ by $m$ matrix.\n\t\t\t\\item $|x|$ is the determinant of matrix x. \n\t\t\\end{itemize}\n\\end{itemize}\n\n$$ P(X=x | Y=i) = \\frac{1}{\\sqrt{(2 \\pi)^m | \\Sigma_i |}} \\exp \\left(  -\\frac{1}{2}(x - \\mu_i)^T \\Sigma_i^{-1} (x- \\mu_i) \\right) $$\n\n\\subsubsection{GMM is not Gaussian Naive Bayes}\n(We did GNB before logistic regression) \\hfill \\\\\nGaussian Naive Bayes: multinomial over clusters $Y$, Gaussian over each $X_i$ given $Y$:\n$$ P(Y_i = y_k) = \\theta_k $$\n(Again, $\\theta$ is the model parameters)\n$$ P(X_i = x | Y = y_k) = \\frac{1}{\\sigma_{ik} \\sqrt{2 \\pi}} \\exp \\left(  \\frac{-(x - \\mu_{ik})^2}{2 \\sigma_{ik}^2} \\right) $$\nGNB would assume the input dimensions $X_i$ do not co-vary. \\hfill \\\\\n\nIf the input dimensions $X_i$ do co-vary, we can use Gaussian Mixture Models. \n\n\\subsubsection{Gaussian Mixture Model Assumption}\nWe want to do something like MLE but now we have multiple Gaussians. \\hfill \\\\\nYou don't know which label should be used for each data point (which is red, blue, green). \\hfill \\\\\nNeed to guess $k$ Gaussians without knowing the $\\mu$s.  \\hfill \\\\\n\nYou can marginalize: \\hfill \\\\\nModel probability without knowing who belongs to who: marginalize over all possible y values.   \\hfill \\\\\nYou are estimating $P(X|Y)$, but you don't know $Y$.   \\hfill \\\\\nYou can get rid of $Y$ and get $P(X)$ by summing over $y_i$.  \\hfill \\\\\nGet $Y$ out of the equation by summing over all possible values. \\hfill \\\\\nIf it were a probability table, we would be losing a column. \\hfill \\\\\n\n\n\\begin{itemize}\n\t\\item $P(Y)$: there are $k$ components\n\t\\item $P(X | Y)$: each component generates data from a Gaussian with mean $\\mu_i$ and covariance matrix $\\Sigma_i$\n\t\\item Assume each of the features are independent of each other. \\hfill \\\\ % week 10 audio\n\t\tThen can write down $P(X|Y)$ as prod of $P(X_i | Y)$.  \\hfill \\\\ % week 10 audio\n        \t\tEach of them will be a Gaussian distribution. \\hfill \\\\  % week 10 audio\n\t\\item  Can encode the whole $P(X|Y)$ with a multi-dimensional Gaussian. \\hfill \\\\  % week 10 audio\n\t\tFor 2D data, this gives a circle.  \\hfill \\\\  % week 10 audio\n\t\tFor 3D data, this gives a bump.  \\hfill \\\\  % week 10 audio\n\t\t When we go to 100-dim space, we also have a mu.   \\hfill \\\\  % week 10 audio\n\t\t \tDistribution is 100-dimensional. \\hfill \\\\  % week 10 audio\n\t\t\tSigma in 100-dim space is 100 by 100 covariance matrix. 
\\hfill \\\\  % week 10 audio\n\\end{itemize}\n\nEach data point is sampled from a \\textbf{generative process}\n\\begin{itemize}\n\t\\item Pick a component at random: \\hfill \\\\\n\t\tchoose component $i$ with probability $P(y=i)$\n\t\\item Datapoint $\\sim N(\\mu_i, \\Sigma_i)$\n\\end{itemize}\n\n\\includegraphics[width=1.0in]{figures/GMM_cartoon.pdf}\n\n\\subsubsection{Supervised MLE for GMM}\n(Detour/review) \\hfill \\\\\n\nHow do we estimate parameters for Gaussian Mixtures with fully supervised data? \\hfill \\\\\nDefine objective and solve optimization: \\hfill \\\\\nFrom above: \n$$ P(X=x | Y=i) = \\frac{1}{\\sqrt{(2 \\pi)^m | \\Sigma_i |}} \\exp \\left(  -\\frac{1}{2}(x - \\mu_i)^T \\Sigma_i^{-1} (x- \\mu_i) \\right) $$\nAnd we know $ \\displaystyle \\mu_{ML} = \\frac{1}{n} \\sum_{i=1}^n x^i$ and $ \\displaystyle \\Sigma_{ML} = \\frac{1}{n} \\sum_{i=1}^n (x^i - \\mu_{ML}) (x^i - \\mu_{ML})^T$ \\hfill \\\\\n\nBut we don't know $Y$, so we can't do that.  \\hfill \\\\\n\nInstead, we maximize the marginal likelihood.  (marginal means a variable is integrated out).\n$$ \\argmax_{\\theta} \\prod_j P(x^j; \\theta) = \\argmax_{\\theta} \\prod_j \\sum_{i=1}^k P(y^j=i, x^j; \\theta) $$\n\nThis is always a hard problem.  \\hfill \\\\\nThere is usually no closed form solution.  \\hfill \\\\\nEven when $P(X, Y; \\theta)$ is convex, $P(X; \\theta)$ generally isn't.  \\hfill \\\\\nFor all but the simplest  $P(X; \\theta)$, we will also have to do gradient ascent, in a big messy space with lots of local optima.   \\hfill \\\\\n\n\\subsubsection{Simple GMM example: learn means only}\n\\includegraphics[width=2.5in]{figures/gmm_for_means_only.pdf}\n\nWe solve this using EM below. \n\n\\includegraphics[width=2.9in]{figures/learning_general_mixtures_of_Gaussians.pdf}\n\n\\subsubsection{EM for GMM: learn means of 1D data}\n\\includegraphics[width=2.5in]{figures/gmm--means_only-1.pdf}\n\n\\includegraphics[width=3.3in]{figures/gmm--means_only-2.pdf}\n\n\\subsubsection{EM for GMM in general}\n\n\\includegraphics[width=3.3in]{figures/EM_for_GMM.pdf}\n\n\\subsubsection{EM with hard assignments, and only learning means $\\rightarrow$ K-means}\n\n\\includegraphics[width=3.3in]{figures/em_to_kmeans.pdf}\n\n\n", "meta": {"hexsha": "e25c0b400e29bf18963107cbe0740cdb0357c741", "size": 10875, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/clustering.tex", "max_stars_repo_name": "JanetMatsen/Machine-Learning", "max_stars_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2016-02-07T23:35:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-26T05:13:33.000Z", "max_issues_repo_path": "tex/clustering.tex", "max_issues_repo_name": "JanetMatsen/Machine-Learning", "max_issues_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/clustering.tex", "max_forks_repo_name": "JanetMatsen/Machine-Learning", "max_forks_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2016-08-29T00:15:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-06T22:36:19.000Z", "avg_line_length": 45.3125, "max_line_length": 174, "alphanum_fraction": 0.7057471264, "num_tokens": 3358, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8152324848629214, "lm_q1q2_score": 0.5700549192311152}}
{"text": "\\chapter{Introduction}\n\\label{chap:chapter1}\n\\pagenumbering{arabic}\\hspace{3mm}\n\nThe basic job of a compiler is code translation from a high level \nlanguage to a target assembly language. But, compilers also perform\nmultiple optimizations in the intermediate stages of translation, \nso that the finally generated code performs better than just a normal \ntranslated code. There might be a one time overhead of \nrunning optimizations, but the performance gain visible over multiple \nexecutions of the code outweighs it.\n\nA typical modern day compiler performs a large number of optimizations \nlike induction variable analysis, loop interchange, loop invariant code\nmotion, loop unrolling, global value numbering, dead code \noptimizations, constant folding and propagation, common subexpression \nelimination etc. One common feature of most of these optimizations is \ndetecting equivalent program subexpressions. \n\nChecking semantic equivalence of program subexpressions has been shown \nto be an undecidable problem, even when all the conditional statements \nare considered as non deterministic. So in most of the cases, compilers \ntry to find some restricted form of expression equivalence. One such\nform of expression equivalence is \\textbf{Herbrand equivalence} (see \n\\autoref{sec:HerbrandEquivalence}).\nDetecting equivalence of program subexpressions can be used for \nvariety of applications. Compilers can use these to perform several \nof the optimizations mentioned earlier like constant propagation, \ncommon subexpression elimination etc. Program verification tools can \nuse these equivalences to discover loop invariants and to verify \nprogram assertions. This information is also important for discovering\nequivalent computations in different programs, which can be used by\nplagiarism detection tools and translation validation tools \\cite\n{Necula, Pnueli}, which compare a program with an optimized version\nin order to check correctness of the optimizer.\n\n\n\\section{Herbrand Equivalence}\n\\label{sec:HerbrandEquivalence}\nA formal definition of \\textbf{Herbrand equivalence} is given in \n\\cite{Ruthing}.\nInformally, two expressions are \\textbf{Herbrand equivalent at a program \npoint}, if and only if they have syntactically the same value at that \ngiven point, \\textbf{across all the execution paths} from the start \nof the program which reaches that point. For the purpose of analysis, \nthe operators themselves are treated as uninterprated functions with \nno semantic significance, only syntactic information is taken into \nconsideration (see example in \\autoref{sec:ASimpleExample}).\n\nFor \\textbf{Herbrand equivalence analysis}, the universe is the set \nof all possible expressions that can be formed using the constants, \nvariables and operators used in the program. And for each program \npoint, partition it such that two expressions are Herbrand equivalent \nat that point if and only if they belong to the same partition class \nof that point.\n\n\\section{A Simple Example}\n\\label{sec:ASimpleExample}\n\n\\begin{figure}[!ht]\n    \\centering {\n        \\setlength{\\fboxsep}{8pt}    \n        \\fbox{\\includegraphics[scale=0.55]{HerbrandEquivalenceTrans.png}}\n    }\n    \\caption{Example of Herbrand Equivalence}\n    \\label{fig:HerbrandEquivalenceTrans}\n\\end{figure}\n\n\\autoref{fig:HerbrandEquivalenceTrans} shows a simple example of Herbrand \nequivalence analysis. 
All the expressions that belong to the same set at \na program point are Herbrand equivalent at that point.\n\n\\begin{itemize}\n    \\item   Initially all the expressions are in separate sets, i.e. \n    they are inequivalent to each other. In particular, note that \n    $X + 2$ and $2 + X$ are inequivalent because the operators are \n    treated as uninterpreted, with no semantic information about them, \n    which means there is no knowledge of commutativity of $+$.\n    \\item   After assignment $X = 2$, any occurrence of $X$ in an\n    expression can equally be replaced with $2$. So, now all expressions \n    with $2$ in place of $X$ and vice versa are equivalent - that means\n    $2 + 2$, $2 + X$, $X + 2$, $X + X$ are all equivalent - this still \n    is just syntactic information because $X$ and $2$ are equivalent. \n    However, if $4$ is also in the universe of expressions, $2 + 2$ and \n    $4$ are not equivalent as their equality is semantic information \n    about the operator $+$.\n    \\item   After assignment $Y = X$, any occurrence of $Y$ in \n    an expression can be replaced with $X$. Because $X$ and $2$ are already \n    equivalent, it means now $2$, $X$, and $Y$ are all equivalent to \n    each other. And two expressions are equivalent if one can be \n    obtained from the other by replacing one of these three with any of the \n    other two. For this example, it means that any two expressions of \n    the same length are equivalent.\n\\end{itemize}\n\n\\begin{figure}[!ht]\n    \\centering {\n        \\setlength{\\fboxsep}{8pt}    \n        \\fbox{\\includegraphics[scale=0.54]{HerbrandEquivalenceConv.png}}\n    }\n    \\caption{Example of Herbrand Equivalence analysis at a confluence point}\n    \\label{fig:HerbrandEquivalenceConv}\n\\end{figure}\n\n\\autoref{fig:HerbrandEquivalenceConv} shows what happens at a \n\\textbf{confluence point} - a point where multiple execution paths meet. \nTwo expressions are Herbrand equivalent at the confluence point if and only if \nthey are Herbrand equivalent at all the predecessor points.\n\\begin{itemize}\n    \\item   In the left branch $2$, $Y$, $Z$ are equivalent and \n    so are expressions which are interconvertible by replacement \n    of any of these three, with the other two.\n    \\item   The case with the right branch is similar, except $X$ is \n    equivalent to $2$ and $Z$ instead of $Y$.\n    \\item   At the confluence point, only $2$ and $Z$ are equivalent \n    because they are equivalent at both predecessor points. $X$ \n    is equivalent to $2$ and $Z$ at the right predecessor but not \n    the left one and $Y$ is equivalent to $2$ and $Z$ at the left \n    predecessor but not the right. As before, expressions obtained by \n    replacing $2$ with $Z$ and vice versa are equivalent.\n\\end{itemize}\n\n\\section{Goal of the Project}\n\\label{sec:GoalOfTheProject}\nBabu, Krishnan and Paleri \\cite{Babu} have given an algorithm for \nHerbrand equivalence analysis restricted to program expressions. The \nbasic goal of this project is to refine this general algorithm and \nthen implement it for the \\href{https://llvm.org/}{Clang/LLVM compiler}. \nThe implementation is also to be benchmarked using \\href{https://www.spec.org/}\n{SPEC} CPU benchmarks. Finally, a proof of correctness of the \nalgorithm based on the theoretical work of Babu et al. 
\\cite{Babu} \nhas to be presented.\n\n\\section{Organization of the Report}\n\\label{sec:OrganizationOfTheReport}\n\\autoref{chap:chapter2} gives an overview of the previous works \nrelated to Herbrand equivalence; then a summary of the work of Babu et al. \n\\cite{Babu} is presented in \\autoref{chap:chapter3} as \nit is closely related to the project. The updated algorithm for \nHerbrand equivalence analysis is given in \\autoref{chap:chapter4}. \n\\autoref{chap:chapter5} presents some important test cases on which \nthe Herbrand analysis algorithm can be checked for correctness. \n\\autoref{chap:chapter6} discusses the Clang/LLVM compiler and also \nprovides a tutorial for writing LLVM optimization passes. \n\\autoref{chap:chapter7} and \\autoref{chap:chapter8} explain the \nimplementations of the algorithm for the Clang/LLVM compiler and for a \ntoy language, respectively. Finally, \\autoref{chap:chapter9} details \nthe attempt made at proving the correctness of the \nalgorithm. \\autoref{chap:chapter10} concludes the report.\n", "meta": {"hexsha": "9787e2f73bd18497373821a29e98a8453b607b01", "size": 7661, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/Rep_End_8/chapter1.tex", "max_stars_repo_name": "himanshu520/HerbrandEquivalence", "max_stars_repo_head_hexsha": "bfe056d9d370d9e5fe2782381b872bf102a960ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/Rep_End_8/chapter1.tex", "max_issues_repo_name": "himanshu520/HerbrandEquivalence", "max_issues_repo_head_hexsha": "bfe056d9d370d9e5fe2782381b872bf102a960ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/Rep_End_8/chapter1.tex", "max_forks_repo_name": "himanshu520/HerbrandEquivalence", "max_forks_repo_head_hexsha": "bfe056d9d370d9e5fe2782381b872bf102a960ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.4161073826, "max_line_length": 79, "alphanum_fraction": 0.7690901971, "num_tokens": 1841, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.569921494506616}}
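As an illustrative toy (this is not the algorithm of Babu et al.\ \cite{Babu}, just the confluence rule from \autoref{fig:HerbrandEquivalenceConv}): if the equivalence information at a program point is stored as a partition of the expression universe, the partition after a confluence point is the blockwise intersection of the predecessor partitions. A minimal sketch, with partitions as lists of sets:

\begin{verbatim}
def confluence(partition1, partition2):
    """Expressions stay equivalent after a confluence point only if they
    were equivalent along every incoming path (blockwise intersection)."""
    meet = []
    for b1 in partition1:
        for b2 in partition2:
            common = b1 & b2
            if common:
                meet.append(common)
    return meet

# The figure's example: left branch has 2, Y, Z equivalent;
# the right branch has 2, X, Z equivalent.
left = [{"2", "Y", "Z"}, {"X"}]
right = [{"2", "X", "Z"}, {"Y"}]
print(confluence(left, right))  # [{'2', 'Z'}, {'Y'}, {'X'}]: only 2 and Z
\end{verbatim}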
{"text": "\\newpage\\section{Pedal Triangles}\n\n\n\t\\den{Pedal Triangles}{Let $ P $ be an arbitrary point, let $ \\triangle A_1B_1C_1 $ be its pedal triangle wrt $ \\triangle ABC $. Let $ A', B', C' $ and $ A_0, B_0, C_0 $ be the feet of the altitudes and the midpoints of $ \\triangle ABC $. \\\\\n\t\t\n\t\t\\[ B_1C_1\\cap B_0C_0=A_2,\\ C_1A_1\\cap C_0A_0 = B_2,\\ A_1B_1\\cap A_0B_0 = C_2 \\]\n\t\t\\[ B'C'\\cap B_0C_0=A_3,\\ C'A'\\cap C_0A_0=B_3,\\ A'B'\\cap A_0B_0=C_3 \\]}\n\t\n\t\n\t\n\t\n\t\\theo{https://www.awesomemath.org/wp-pdf-files/math-reflections/mr-2015-02/article_1_bocanu.pdf}{Fontene's First Theorem}{$ A_1A_2, B_1B_2, C_1C_2 $ are concurrent at the intersection of $ \\odot A_1B_1C_1 $ and $ \\odot A_0B_0C_0 $}\n\t\n\t\n\t\n\t\n\t\\lem{}{$ A'A_3, B'B_3, C'C_3 $ and $ A_0A_3, B_0B_3, C_0C_3 $ concur at the nine point circle of $ \\triangle ABC. $}\n\t\n\t\n\t\n\t\\theo{https://www.awesomemath.org/wp-pdf-files/math-reflections/mr-2015-02/article_1_bocanu.pdf}{Fontene's Second Theorem}{Let the concurrency point in the first theorem be $ Q $. Then, if the line $ OP $ is fixed and $ P $ moves along that line, $ Q $ will stay fixed.}\n\t\n\tThe previous result leads to another beautiful result:\n\t\n\t\n\t\n\t\n\t\n\t\\lem{}{Suppose a varying point $ P $ is chosen on the Euler Line of $ \\triangle ABC $. Then the pedal circle of $ P $ wrt $ \\triangle ABC $ intersects the 9p circle at a fixed point which is the Euler Reflection Point of the median triangle.}\n\t\n\n\n\n\t\t\n", "meta": {"hexsha": "69614314cb4f49d225e09e376a0fac1fdbd53b12", "size": 1401, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geo/sec11_pedal.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "geo/sec11_pedal.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geo/sec11_pedal.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 40.0285714286, "max_line_length": 271, "alphanum_fraction": 0.6837972877, "num_tokens": 522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7931059609645724, "lm_q1q2_score": 0.5699214867043224}}
{"text": "\\documentclass[12pt,letterpaper]{article}\n\n\\usepackage[margin=1.9cm]{geometry}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\\usepackage{pgfplots}\n\\usepackage{siunitx}\n\n\\usepgfplotslibrary{fillbetween}\n\n\\pgfplotsset{compat=1.16}\n\n\\begin{document}\n\\section*{Part 1 High Pass Filter}\n\\subsection*{1.}\n\\begin{tikzpicture}\n    \\begin{axis}[\n        title = {Gain vs frequency},\n        xlabel = $\\log{f}$,\n        ylabel = $\\frac{V_out}{V_in}$,\n        xmin = 2, \n        xmax = 5,\n        ymin = 0,\n        ymax = 1,\n        axis lines=left,\n        grid = both,\n        height = 6cm,\n        width = 17cm,\n        legend pos=south west\n    ]\n    \\addplot table [x=logf, y=VoutVin, col sep=comma, mark=none] {Part1.csv} ;\n    \\addplot [name path=passregion, mark=none, black, domain=2:5, opacity=0.5] {0.7071067812} node[above,pos=0.9] {Pass Region};\n    \n    \\path[name path=axis] (axis cs:2,1) -- (axis cs:5,1);\n    \\addplot [\n        thick,\n        color=black,\n        fill=black, \n        fill opacity=0.05\n    ]\n    fill between[\n        of=passregion and axis,\n        soft clip={domain=2:5},\n    ];\n\n\\end{axis}\n\\end{tikzpicture}\n\\subsection*{2.}\n    Pass region calculation:\n    \\begin{align*}\n    \\text{Half Power Voltage} &= \\frac{10}{\\sqrt{2}} = \\SI{7.071067812}{\\volt}\\\\\n    \\text{Pass Region} &= \\frac{\\text{Half Power Voltage}}{\\SI{10}{\\volt}} = 0.707\n    \\end{align*}\n\\subsection*{3.}\n    Lowest frequency that passes occurs at $\\log(f) \\simeq 3.8$, therefore $f = \\SI{e3.8}{\\hertz} = \\SI{6309.57}{\\hertz}$\n\\section*{Part 2 Low Pass Filter}\n\\subsection*{4.}\n\\begin{tikzpicture}\n    \\begin{axis}[\n        title = {Gain vs frequency},\n        xlabel = $\\log{f}$,\n        ylabel = $\\frac{V_out}{V_in}$,\n        xmin = 2, \n        xmax = 5,\n        ymin = 0,\n        ymax = 1,\n        axis lines=left,\n        grid = both,\n        height = 6cm,\n        width = 17cm,\n        legend pos=south west\n    ]\n    \n    \\addplot table [x=logf, y=VoutVin, col sep=comma, mark=none] {Part2.csv} ;\n    \\addplot [name path=passregion, mark=none, black, domain=2:5, opacity=0.5] {0.7071067812} node[above,pos=0.9] {Pass Region};\n    \n    \\path[name path=axis] (axis cs:2,1) -- (axis cs:5,1);\n    \\addplot [\n        thick,\n        color=black,\n        fill=black, \n        fill opacity=0.05\n    ]\n    fill between[\n        of=passregion and axis,\n        soft clip={domain=2:5},\n    ];\n\n\\end{axis}\n\\end{tikzpicture}\n\\subsection*{7.}\n    Pass region calculation:\n    \\begin{align*}\n    \\text{Half Power Voltage} &= \\frac{2}{\\sqrt{2}} = \\SI{1.414213562}{\\volt}\\\\\n    \\text{Pass Region} &= \\frac{\\text{Half Power Voltage}}{\\SI{2}{\\volt}} = 0.707\n    \\end{align*}\n\\subsection*{6.}\n    Lowest frequency that passes occurs at $log(f) \\simeq 3.75$, or $f = \\SI{e3.75}{\\hertz} = \\SI{5623.41}{\\hertz}$\n\\section*{Part 3 Band Pass Filter}\n\\subsection*{7.}\n\\begin{tikzpicture}\n    \\begin{axis}[\n        title = {Gain vs frequency},\n        xlabel = $\\log{f}$,\n        ylabel = $\\frac{V_out}{V_in}$,\n        xmin = 2, \n        xmax = 5,\n        ymin = 0,\n        ymax = 1,\n        axis lines=left,\n        grid = both,\n        height = 6cm,\n        width = 17cm,\n        legend pos=south west\n    ]\n    \n    \\addplot table [x=logf, y=VoutVin, col sep=comma, mark=none] {Part3.csv} ;\n    \\addplot [name path=passregion, mark=none, black, domain=2:5, opacity=0.5] 
{0.7071067812} node[above,pos=0.9] {Pass Region};\n    \n    \\draw (3.502836639, 0) -- (3.502836639, 1);\n    \n    \\path[name path=axis] (axis cs:2,1) -- (axis cs:5,1);\n    \\addplot [\n        thick,\n        color=black,\n        fill=black, \n        fill opacity=0.05\n    ]\n    fill between[\n        of=passregion and axis,\n        soft clip={domain=2:5},\n    ];\n\\end{axis}\n\\end{tikzpicture}\n\\subsection*{8.}\n    The resonant (centre) frequency occurs at $\\log(f) \\simeq 3.503$ (the vertical line in the plot), or $f \\simeq \\SI{3183}{\\hertz}$\n\\subsection*{9.}\n    The lower bound occurs at $\\log(f) \\simeq 3.15$, or $f = \\SI{e3.15}{\\hertz} = \\SI{1412.54}{\\hertz}$. The upper bound occurs at $\\log(f) \\simeq 3.85$, or $f = \\SI{e3.85}{\\hertz}= \\SI{7079.46}{\\hertz}$.\n    \n\\subsection*{10.}\n    The resonant frequency obtained in the lab was $\\SI{3183}{\\hertz}$. This is in close agreement with the value calculated in the prelab assuming ideal conditions. \n    \n    The percent error calculated was:\n    \\begin{align}\n        \\frac{|3183-3183.1|}{3183.1} \\times 100\\% = \\num[round-precision=3, round-mode = figures]{3.14159153d-3}\\%\n    \\end{align}\n\n\\end{document}", "meta": {"hexsha": "0e4b8dde37a794ab407dbf7e291f61e28944a044", "size": 4482, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LabReports/ECE203/Lab2/main.tex", "max_stars_repo_name": "n30phyte/SchoolDocuments", "max_stars_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LabReports/ECE203/Lab2/main.tex", "max_issues_repo_name": "n30phyte/SchoolDocuments", "max_issues_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LabReports/ECE203/Lab2/main.tex", "max_forks_repo_name": "n30phyte/SchoolDocuments", "max_forks_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.2837837838, "max_line_length": 202, "alphanum_fraction": 0.5780901383, "num_tokens": 1570, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7931059438487663, "lm_q1q2_score": 0.5699214839643398}}
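As a cross-check of the half-power points above, a short sketch; the report does not list $R$ and $C$, so the values below are hypothetical, picked so that $f_c = 1/(2\pi RC)$ lands near the Part 1 reading:

\begin{verbatim}
import numpy as np

R, C = 2.5e3, 1e-8             # hypothetical: 2.5 kOhm, 10 nF
f_c = 1 / (2 * np.pi * R * C)  # first-order RC cutoff frequency

f = np.logspace(2, 5, 1000)                        # 100 Hz .. 100 kHz
gain_hp = (f / f_c) / np.sqrt(1 + (f / f_c) ** 2)  # high-pass gain
gain_lp = 1 / np.sqrt(1 + (f / f_c) ** 2)          # low-pass gain

# At f = f_c both gains equal 1/sqrt(2), about 0.707: the pass-region edge.
print(f"f_c = {f_c:.0f} Hz, log10(f_c) = {np.log10(f_c):.2f}")  # ~3.80
\end{verbatim}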
{"text": "% !TEX root = thesis.tex\n\n\\chapter{A short dictionary of dimension-based GIS terms}\n\\label{ch:dictionary}\n\n\\begin{multicols}{2}\n\n\\begin{description}\n\\footnotesize\n\n\\item[ambient space]\nthe space in which objects are embedded.\n\n\\item[area]\n1.\\ measure of the 2D extent of an object;\n2.\\ 2D combinatorial element;\n{\\color{gray} 3.\\ 2D geometric object}.\n\n\\item[ball]\n1.\\ ($\\rightarrow$\\ same as 3-\\emph{ball}) topological definition of the space within a 2-sphere (\\ie\\ a sphere), often defined as the space within a certain distance (the radius) from a point in $\\mathbb{R}^3$ (the centre);\n2.\\ ($\\rightarrow$\\ often $n$-\\emph{ball}) topological definition of the space within an $(n-1)$-sphere, often defined as the space within a certain distance (the radius) from a point in $\\mathbb{R}^n$ (the centre);\n3.\\ geometric definition of a perfectly round filled 3D object.\n\n\\item[body]\n$n$-dimensional combinatorial element in an $n$D context.\n\n\\item[box]\n1.\\ cuboid;\n2.\\ orthotope of any dimension.\n\n\\item[cavity]\n3D hole.\n\n\\item[cell]\n1.\\ ($\\rightarrow$\\ sometimes $n$-\\emph{cell}) $n$D combinatorial element;\n{\\color{gray} 2.\\ 3D combinatorial element}.\n\n\\item[cell complex]\ntopological space formed by a set of cells glued along common faces, all faces of a cell in the complex should be in the complex.\n\n\\item[circle]\n1.\\ topological definition of the space at a certain distance (the radius) from a point in $\\mathbb{R}^2$ (the centre);\n2.\\ geometric definition of a perfectly round hollow 2D object.\n\n\\item[closed]\n1.\\ topological definition of an object that includes its boundary;\n2.\\ geometric definition of an object in $n$D, usually defined using boundary representation, that encloses an $n$D subspace.\n\n\\item[congruent]\nhaving the same shape and size.\n\n\\item[cube]\n1.\\ ($\\rightarrow$\\ same as $3$-\\emph{cube}) 3D polyhedron with 6 square facets;\n2.\\ ($\\rightarrow$\\ usually $n$-\\emph{cube}) $n$-orthotope with identical $(n-1)$-cube facets, akin to a square in 2D and a cube in 3D.\n\n\\item[cuboid]\n1.\\ box-shaped 3D polyhedron with 6 rectangular facets, its opposite facets are congruent and parallel;\n{\\color{gray} 2.\\ parallelotope}.\n\n\\item[curve]\n1.\\ 1D geometric object, often with a non-linear geometry;\n2.\\ 1D combinatorial element;\n3.\\ 1-manifold.\n\n\\item[cut-line]\ndegenerate part of a 2D shape forming a curve and protruding inwards from its boundary.\n\n\\item[disk]\ntopological definition of the space within a circle, often defined as the space within a certain distance (the radius) from a point in $\\mathbb{R}^2$ (the centre).\n\n\\item[edge]\n1D combinatorial element.\n\n\\item[face]\n1.\\ 2D combinatorial element;\n2.\\ ($\\rightarrow$\\ sometimes \\emph{$n$-face of $X$}) $n$D combinatorial element on the boundary of $X$;\n{\\color{gray} 3.\\ ($\\rightarrow$\\ usually \\emph{face of $X$}) $(n-1)$D combinatorial element on the boundary of an $n$D combinatorial element $X$}.\n\n\\item[facet]\n1.\\ $(n-1)$D combinatorial element in an $n$D context;\n2.\\ ($\\rightarrow$\\ often \\emph{facet of $X$}) $(n-1)$D combinatorial element on the boundary of an $n$D combinatorial element $X$;\n{\\color{gray} 3.\\ 2D combinatorial element}.\n\n\\item[flat]\nunbounded geometric object with linear geometry of any dimension, such as a point, line or plane.\n\n\\item[hole]\n1.\\ 2D void region;\n2.\\ $n$D void region.\n\n\\item[hyperball]\nhigher-dimensional 
ball.\n\n\\item[hypercell]\n1.\\ higher-dimensional combinatorial element;\n2.\\ 4D combinatorial element.\n\n\\item[hypercube]\n1.\\ higher-dimensional cube;\n2.\\ 4-cube.\n\n\\item[hyperplane]\n1.\\ $(n-1)$D linear subspace in an $n$D context;\n{\\color{gray} 2.\\ plane in a higher-dimensional context.}\n\n\\item[hyperrectangle]\nhigher-dimensional orthotope.\n\n\\item[hypersphere]\nhigher-dimensional sphere.\n\n\\item[hypersurface]\n$(n-1)$D subspace in an $n$D context, often curved.\n\n\\item[interval]\ntopological definition of the space between two points in $\\mathbb{R}$ (the endpoints).\n\n\\item[Lebesgue measure]\nmeasure of the $n$D extent of an object, akin to length in 1D, area in 2D and volume in 3D.\n\n\\item[length]\nmeasure of the 1D extent of an object.\n\n\\item[line segment]\n1D geometric object.\n\n\\item[manifold]\n($\\rightarrow$\\ sometimes $n$-\\emph{manifold}) topological space that resembles $\\mathbb{R}^n$ at every point.\n\n\\item[manifold with boundary]\n($\\rightarrow$\\ sometimes $n$-\\emph{manifold with boundary}) topological space that resembles $\\mathbb{R}^n$ at every point in its interior.\n\n\\item[node]\n0D combinatorial element.\n\n\\item[open]\n1.\\ topological definition of an object that does not include its boundary;\n2.\\ geometric definition of an object in $n$D, usually defined using boundary representation, that does not enclose any $n$D subspace.\n\n\\item[orthotope]\n($\\rightarrow$\\ sometimes $n$-\\emph{orthotope}) $n$-polytope with congruent parallel facets forming right angles to each other, akin to a rectangle in 2D and a cuboid in 3D.\n\n\\item[parallelepiped]\n3D polyhedron bounded by 6 parallelogram-shaped 2D faces.\n\n\\item[parallelogram]\npolygon bounded by 4 edges, whose opposite edges are parallel and have the same length.\n\n\\item[parallelotope]\na polytope whose opposite facets are parallel and have the same shape, akin to a parallelogram in 2D and a parallelepiped in 3D.\n\n\\item[peak]\n1.\\ $(n-3)$D combinatorial element in an $n$D context;\n2.\\ ($\\rightarrow$\\ \\emph{peak of $X$}) $(n-3)$D combinatorial element on the boundary of an $n$D combinatorial element $X$.\n\n\\item[plane]\n2D unbounded linear subspace.\n\n\\item[point]\n1.\\ 0D geometric element;\n2.\\ topological definition of the space at one location.\n\n\\item[polychoron]\n4D geometric object with linear geometry.\n\n\\item[polygon]\n2D geometric object with linear geometry, possibly with holes.\n\n\\item[polygonal curve]\ncurve formed by a sequence of line segments joined at their endpoints.\n\n\\item[polyhedron]\n1.\\ 3D geometric object with linear geometry, possibly with holes;\n{\\color{gray} 2.\\ an $n$D geometric object with linear geometry}.\n\n\\item[polyline]\ncurve formed by a sequence of line segments joined at their endpoints.\n\n\\item[polyteron]\n5D geometric object with linear geometry.\n\n\\item[polytope]\n$n$D geometric object with linear geometry.\n\n\\item[puncture]\n0D (point) hole, often formed from a degenerate polyline or polygon.\n\n\\item[ridge]\n1.\\ $(n-2)$D combinatorial element in an $n$D context;\n2.\\ ($\\rightarrow$\\ \\emph{ridge of $X$}) $(n-2)$D combinatorial element on the boundary of an $n$D combinatorial element $X$.\n\n\\item[line string]\ncurve formed by joining a sequence of vertices by straight line segments, represented by these vertices.\n\n\\item[linear ring]\n2D geometric object with linear geometry represented as a sequence of vertices.\n\n\\item[ring]\n2D combinatorial element represented by its 1D boundary.\n\n\\item[subfacet]\n1.\\ 
$(n-2)$D combinatorial element in an $n$D context;\n2.\\ ($\\rightarrow$\\ \\emph{subfacet of $X$}) $(n-2)$D combinatorial element on the boundary of an $n$D combinatorial element $X$.\n\n\\item[shell]\n3D geometric object with linear geometry without 3D holes, usually defined as the volume enclosed by its 2D boundary, often represented as a set of 2D faces.\n\n\\item[simplex]\n1.\\ ($\\rightarrow$\\ sometimes \\emph{$n$-simplex}) combinatorial element with $n+1$ vertices and $n+1$ facets;\n2.\\ ($\\rightarrow$\\ sometimes \\emph{$n$-simplex}) geometric object based on $n+1$ affinely independent vertices, akin to a point in 0D, a line segment in 1D, a triangle in 2D, or a tetrahedron in 3D.\n\n\\item[simplicial complex]\n1.\\ ($\\rightarrow$\\ sometimes \\emph{abstract simplicial complex}) topological space formed by a set of simplices glued along common faces; all faces of a simplex in the complex should be in the complex;\n2.\\ ($\\rightarrow$\\ sometimes \\emph{geometric simplicial complex}) abstract simplicial complex where simplices are embedded into Euclidean space; their interiors should not intersect geometrically.\n\n\\item[solid]\n1.\\ 3D geometric object with linear geometry, possibly with 3D holes, often represented as a set of shells;\n2.\\ object defined based on solid modelling, often as opposed to one based on boundary representation;\n3.\\ an object containing its interior (as opposed to \\emph{hollow}).\n\n\\item[spike]\ndegenerate part of a 2D shape forming a curve, often protruding outwards from its boundary.\n\n\\item[sphere]\n1.\\ ($\\rightarrow$\\ same as $2$-\\emph{sphere}) topological definition of the space at a certain distance (the radius) from a point in $\\mathbb{R}^3$ (the centre);\n2.\\ ($\\rightarrow$\\ often $n$-\\emph{sphere}) topological definition of the space at a certain distance (the radius) from a point in $\\mathbb{R}^{n+1}$ (the centre);\n3.\\ geometric definition of a perfectly round hollow 3D object.\n\n\\item[surface]\n1.\\ 2-manifold;\n2.\\ 2D combinatorial element.\n\n\\item[tesseract]\n4-cube: polychoron with 8 cubical facets.\n\n\\item[vertex]\n0D combinatorial element.\n\n\\item[volume]\n1.\\ measure of the 3D extent of an object;\n2.\\ 3D combinatorial element.\n\n\\item[wire]\n2D combinatorial element represented by its 1D boundary.\n\n\\end{description}\n\n\\end{multicols}\n\n% \\begin{table}\n% \\scriptsize\n% \\begin{tabular}{ccccccc}\n% \\toprule\n% \\multicolumn{7}{c}{dimension} \\\\\n% $n$ & 0 & 1 & 2 & 3 & 4 \\\\\n% \\midrule\n% \\multicolumn{7}{c}{topological} \\\\\n% $n$-simplex & vertex & edge & face & volume \\\\\n% $n$-cell & vertex & edge & face & volume \\\\\n% $n$-sphere & pair of points & circle & sphere \\\\\n% $n$-ball & & interval & disk & ball \\\\\n% open $n$-ball & & open interval & open disk & open ball \\\\\n% $n$-manifold & discrete space & curve & surface \\\\\n% \\midrule\n% \\multicolumn{7}{c}{geometric} \\\\\n% $n$-simplex & point & line segment & triangle & tetrahedron & pentachoron \\\\\n% $n$-cube & point & square & cube & tesseract \\\\\n% $n$-orthotope & point & line segment & rectangle & cuboid/box \\\\\n% $n$-parallelotope & point & line segment & parallelogram & parallelepiped \\\\\n% prismatic $n$-polytope & & line segment & rectangle & prism & prismatic polychoron \\\\\n% $n$-polytope & point & line segment & polygon & polyhedron & polychoron \\\\\n% \\midrule\n% \\multicolumn{7}{c}{other} \\\\\n% $n$-th Lebesgue measure & length & area & volume \\\\\n% & & collinear & coplanar \\\\\n% \\bottomrule\n% 
\\end{tabular}\n% \\end{table}\n", "meta": {"hexsha": "2734a3892636786ca6e8834a68622e6536b46966", "size": 10103, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dictionary.tex", "max_stars_repo_name": "kenohori/thesis", "max_stars_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2016-03-04T13:55:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T07:28:24.000Z", "max_issues_repo_path": "dictionary.tex", "max_issues_repo_name": "kenohori/thesis", "max_issues_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-02-23T16:34:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-02-27T10:12:11.000Z", "max_forks_repo_path": "dictionary.tex", "max_forks_repo_name": "kenohori/thesis", "max_forks_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2018-10-11T04:08:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T23:58:06.000Z", "avg_line_length": 36.2114695341, "max_line_length": 224, "alphanum_fraction": 0.7325546867, "num_tokens": 2916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5698966498851018}}
{"text": "\\input{coredef.tex}\n\n%opening\n\\def\\ntitle{Graph Theory}\n\\def\\nauthor{mostanes}\n\\def\\npart{II}\n\\def\\nterm{Michelmas}\n\\def\\nyear{2018}\n\\def\\nlecturer{Paul Russel}\n\\begin{document}\n\t\n\t\\mktitlepage\n\t\n\t\\newpage\n\t\n\t\\setcounter{section}{-1}\n\t\n\\section{Preface}\nThe course is self-contained and has almost no prerequisites (the only ones are mainly IA material). The book recommended for a more in-depth study of the material is I. B. Bollobas's \"Modern Graph Theory\" (published by Springer).\n\n\\newpage\n\n\\section{Introduction}\nA preliminary definition of graphs:\n\\begin{definition}\n\tA graph consists of some 'vertices' with soem pairs of vertices joined by 'edges'\n\\end{definition}\n\n\\subsection{Types of graph problems:}\n\\subsubsection{Bridges of Konigsberg}\n\\begin{question}Is it possible to walk round the city crossing each bridge precisely once and returning to the starting point?\n\\end{question}\n\\begin{question}\nIs it possible to walk round the multigraph traversing each edge precisely once and finishing at the starting vertex?\n\\end{question}\n\\subsubsection{Four Colour Problem}\n\\begin{question}\n\tHow many colours are needed to colour a map?\n\\end{question}\n\\subsubsection{Simultaneous coset representation}\nLet $G$ be a finite group and $H \\leq G$. Let $n = \\left|G:H\\right|$. Then we know $\\exists a_i, b_i \\in G$ s.t. $a_i H$ are the left cosets and $H b_i$ are the right cosets of $H$ in $G$.\n\\begin{question}\n\t$\\exists ? c_i $ s.t. $c_i H$ are the left cosets and $H c_i$ are the right cosets of $H$ in $G$\n\\end{question}\nBy considering $X$ the set of left cosets and $Y$ the set of right cosets of $H$ in $G$, let $E = X \\cup Y$ and $V = \\left\\lbrace (gH, Hg) : g \\in G \\right\\rbrace$.\n\\begin{question}\n\t$\\exists ? \\epsilon \\subset E$ s.t. $\\forall v \\in V$, $\\exists$ unique $e \\in \\epsilon$ s.t. $e$ has $v$ as an endpoint\n\\end{question}\n\\subsubsection{Fermat equation mod p}\n$x^n + y^n = z^n$ has no non-trivial solutions in $\\Z$ if $n \\geq 3$.\n\\begin{question}\n\tDoes $x^n + y^n = z^n$ have any non-trivial solutions in $\\Z_p$?\n\\end{question}\n\\begin{theorem}\n\tLet $n \\in \\N$. Then for $\\forall$ sufficiently large $p$, there are $x,y,z \\neq 0 \\pmod{p}$ with $x^n + y^n \\equiv z^n \\pmod{p}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $G = \\Z_p^\\times$, the multiplicative group of residues modulo $p$. Let $H = \\left\\lbrace g^n : g \\in G \\right\\rbrace \\leq G$. We want $x, y, z \\in H$ s.t. $x+y=z$.\\\\\n\tLooking at the cosets of $H$, we have $\\left|H\\right| \\geq \\frac{\\left|G\\right|}{n}$. We also note that $\\forall g \\in G$, if $\\exists u,v,w \\in gH$, then $g^{-1}u + g^{-1}v = g^{-1}w$ and $g^{-1}u, g^{-1}, g^{-1}w \\in H$. We have reduced the problem to the following combinatorial statement:\n\\end{proof}\n\\begin{theorem}[Schur's theorem]\n\tLet $n$ be a positive integer. Then for $\\forall$ sufficiently large $k$, if $\\left[k\\right] = \\left\\lbrace 1, 2, ..., k \\right\\rbrace$ is partitioned in $n$ parts, $\\exists u, v, w$ in the same part s.t. 
$u+v=w$.\t\n\\end{theorem}\n\n\\end{document}", "meta": {"hexsha": "9e15951d020e029fe396bf4c7dd4168273aefc0f", "size": 2983, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Graph Theory.tex", "max_stars_repo_name": "mostanes/cam-maths-tripos", "max_stars_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Graph Theory.tex", "max_issues_repo_name": "mostanes/cam-maths-tripos", "max_issues_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Graph Theory.tex", "max_forks_repo_name": "mostanes/cam-maths-tripos", "max_forks_repo_head_hexsha": "bf40a5250be7c15037a937e1521c5aa7e0cb0841", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.8923076923, "max_line_length": 293, "alphanum_fraction": 0.693261817, "num_tokens": 978, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7549149978955811, "lm_q1q2_score": 0.5698966457225183}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n\\begin{document}\nIf $\\text{Span}(v_1,\\ldots,v_m) = V$, we say that $v_1,\\ldots,v_m$ \"span\" the space $V$. \\\\\n\nWe call the set of unit vectors of a space the \\textbf{standard basis}.\n\\begin{defn}\n\tA vector space $V$ is \\textbf{finite-dimensional} if $V = span(v_1,\\ldots,v_m)$.\n\\end{defn}\nIf a vector space is not finite-dimensional, then we say it is \\textbf{infinite-dimensional}. To maintain convention, if \\(V = F[x]\\), then \\(\\textrm{deg}(0) = -\\infty\\).\n\\begin{thm}\n\t\\(F[x]\\) is infinite-dimensional.\n\\end{thm}\n\n\\subsection{Linear Independence}\n\\begin{defn}[Linear independence]\n\tA list $v_1,\\ldots,v_m$ of vectors is \\textbf{linearly independent} if and only if\n\t\\begin{align*}\n\ta_1v_1+\\ldots+a_mv_m = \\vec{0}\\implies a_1,\\ldots,a_m = 0\n\t\\end{align*}\n\t\n\\end{defn}\n\tA list of vectors in $V$ is \\textbf{linearly dependent} if it is not linearly independent.\\\\\n\n\tWe can also say that a list in $V$ is linearly dependent if there exist $a_1,\\ldots,a_m$ not all zero such that $a_1v_1+\\ldots+a_mv_m = 0$.\n\n\\begin{lemma}\n\tIf \\(v_1,\\ldots,v_m\\) are linearly independent, then there is exactly one way to write $v \\in \\text{Span}(v_1,\\ldots,v_m)$ as a linear combination of the vectors.\n\\end{lemma}\n\\begin{thm}\nLet $V$ be a finite-dimensional vector space. If \\(v_1,\\ldots,v_m\\) is an arbitrary set of linearly independent vectors in \\(V\\) and \\(w_1,\\ldots,w_k\\) is an arbitrary set of vectors that span \\(V\\), then \\(m\\leq k\\).\n\n\\end{thm}\n\\begin{lemma}[Linear Dependence Lemma]\n\tLet $v_1,\\ldots,v_m \\in V$ be a set of linearly dependent vectors. Then there exists an index $j$ such that \\begin{itemize}\n\t\t\\item $v_j \\in \\text{Span}(v_1,\\ldots,v_{j-1})$ \n\t\t\\item $\\text{Span}(v_1,\\ldots,v_m) = \\text{Span}(v_1,\\ldots,v_{j-1},v_{j+1},\\ldots,v_m)$ \n\t\\end{itemize}\n\\end{lemma}\n\\begin{lemma}[Finite-dimensional subspaces]\n\tEvery subspace of a finite-dimensional vector space is finite-dimensional.\n\\end{lemma}\n\n\\end{document}\n", "meta": {"hexsha": "2f8817acd2ce65448bd18acb1b91f2a004b985b1", "size": 1971, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Algebra/Notes/source/09-11-19-LinIndep2.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Linear Algebra/Notes/source/09-11-19-LinIndep2.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Linear Algebra/Notes/source/09-11-19-LinIndep2.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.847826087, "max_line_length": 217, "alphanum_fraction": 0.7072552004, "num_tokens": 681, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145997, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5698709079616052}}
{"text": "\\subsubsection{Entity Relationship Model}\\label{sec:EntityRelationModel}\nThe entity relationship model is a graphical representation of how database relations are structured.\nThis is done in an \\textit{entity relationship} (ER) diagram, and is commonly used facilitate database design from specifications of enterprise schemas\\cite{DBSBook}.\nIn the ER model, we differentiate between \\textit{entity sets}, representing domain elements, and \\textit{relationships sets}, representing the relationships between the domain elements. \nThe entity sets are represented graphically with a rectangle, and relationship sets connecting the entity sets are represented with a diamond\\cite{DBSBook}.\nThe attributes associated with the entity- and relationship sets can be modelled using ovals\\cite{KatjaFirstPP}, however other alternatives exists.\\footnote{Our style of choice is presented in Chapter 6.10 of \\citetitle{DBSBook} \\textit{\\citefield[]{DBSBook}[]{edition}}}\nSimilar to the relational model, the entities (tuples) must be uniquely identified by one or more attributes. This is denoted by underlining the attributes of the entity sets. \n\nThe ER model has notations that denote the participation of each entity in a connecting relationship\\cite{DBSBook}.\nIn this project, we will use \\text{min-max} notation; this notation denotes the minimum and maximum amount of entities participating in a relationship. \n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[scale=0.5]{Images/book_example_w_cardinality.png}\n    \\caption{ER diagram of the book and book owner example.}\n    \\label{fig:ER_Book_Example}\n\\end{figure}\n\nFigure \\ref{fig:ER_Book_Example} shows an ER diagram equivalent to the $book$, $owns$, and $book\\_owner$ relations from equation \\ref{eq:relational_schema} and \\ref{eq:bookOwnerExample}.\nIn figure \\ref{fig:ER_Book_Example} we also see the participation cardinalities of the two entity sets. \nThe relationship from $book\\_owner$ to $owns$ is one of total participation. That is, each book owner entity must own at least one book.\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.5]{Images/cardinalities.png}\n    \\caption{Participation ratios for ER relationships.}\n    \\label{fig:ERDiagram_Cardinality}\n\\end{figure}\nThe participation from $book$ to $owns$ is partial. \nThus, the model represents that books can be in the database, even if no one owns it, and that a book can be owned by multiple book owners.\nGraphical notation of participation connections can be seen in figure \\ref{fig:ERDiagram_Cardinality}.\n\n\nOther forms of entity sets exist. For instance, a weak entity set, denoted by an oval with doubled edges, is an entity set whose existence is based on another entity set. \nRelationship sets connecting a weak entity set to its identifying entity set will not have any attributes\\cite{DBSBook}.\nA weak entity set is identified by its \\textit{identifying entity set}'s primary key along with some of its own attributes. \nThe relationship between a weak entity set and its identifying set is always many-to-one with total participation of the weak entity set.\n\nThe ER model can be converted into the \\textit{relational model}\\cite{DBSBook}. The relational model is described using relations, thus we can describe operations that can be performed on the relations using set notation and relational algebra. 
\nTherefore, it is useful to be able to convert an ER model into an equivalent relational model\\cite{DBSBook}, where domains of attributes can be defined and operations on the entity sets described mathematically.\n\n\\subsubsection*{Converting ER Models to Relations}\nWhen converting an ER diagram into an equivalent relational model, we must consider the domains of the attributes in the ER model since these need to be described in the relational model\\cite{DBSBook}. Having the domain of the relations described makes it possible to establish well-defined operations that can be performed on the relations.\\\\\\\\\n\\textbf{Strong entity sets}\\\\\\\\\nStrong entity sets are converted by taking each attribute of the entity set and creating a single relation with corresponding attributes. \nThe primary key of the entity set is chosen as primary key for the relation\\cite{DBSBook}.\\\\\nAs an example of converting a strong entity, we can use the book owner example from figure \\ref{fig:ER_Book_Example}. In this example, \\texttt{Book} is a strong entity. The relation created from the book entity can be seen in equation \\ref*{eq:StrongEntity}. The attributes from the entity are added to the relation and the primary key of the book entity, \\texttt{ISBN}, is used as the primary key for the relation.\n\n\\begin{equation}\\label{eq:StrongEntity}\n    \\begin{split}\n        Book(\\underline{ISBN : text} , title : text , author\\_name : text , number\\_of\\_pages : \\mathbb{Z}^+)\n    \\end{split}\n\\end{equation}\n\\\\\n\\textbf{Many-to-many relationships}\\\\\\\\\nWhen converting many-to-many relationships to a relation, one creates a single relation with primary key attributes from the participating entity sets\\cite{DBSBook}. These keys are used as foreign keys to reference the converted entities participating in the relationship. \nThe remaining attributes of the relationship set are similarly mapped to the relation.\\\\\nAs an example, in the book owner example, there is a many-to-many relationship between a \\texttt{book\\_owner} and a \\texttt{book} called \\texttt{Owns}. As can be seen in equation \\ref{eq:ManyToMany}, to convert the relationship set into a relation, the keys of the participating entities must be put into the relation. \n\n\\begin{equation}\\label{eq:ManyToMany}\n    \\begin{split}\n        owns(\\underline{owner\\_id \\rightarrow book\\_owner}, \\underline{ISBN \\rightarrow book})\n    \\end{split}\n\\end{equation}\n\\\\\n\\textbf{Many-to-one relationships}\\\\\\\\\nIf we have a many-to-one relationship between two sets, we can combine the relationship set and the relation created from the entity set on the 'many' side into a single relation. This relation will use the primary key of the entity set as its primary key, and have a foreign key referencing the relation created from the 'one' side of the relationship\\cite{DBSBook}.\\\\\nAs an example, if we instead assume that a book can have only one owner, the book owner example becomes a many-to-one relationship. To convert the relationship set to a relation, the relationship set should be merged with the 'many' side of the relations created, which in this case would be the book, as it has only one owner while an owner can have many books. 
The relation created from this can be seen in equation \\ref*{eq:ManyToOne}, which has the attributes of the initial book relation and a foreign key to the \\texttt{book\\_owner}.\n\n\\begin{equation}\\label{eq:ManyToOne}\n    \\begin{split}\n        Book(\\underline{ISBN : text} , title : text , author\\_name : text , \\\\number\\_of\\_pages : \\mathbb{Z}^+, owner\\_id \\rightarrow Book\\_owner)\n    \\end{split}\n\\end{equation}\n\\\\\n\\textbf{One-to-one relationships}\\\\\\\\\nWhen converting one-to-one relationships with total participation, we simply create a union of the attributes of the participating entity sets and the relationship set\\cite{DBSBook}. If there is not total participation, two relations are created from the entity sets, and a foreign key is placed on one of the sets.\\\\\nAs an example, assume that a book can only be owned by one book owner, and that a book owner has only one book. It would then be a one-to-one relationship. Assuming total participation between these entities, a relation can be formed from the union of the two relations' attributes. The union is seen in equation \\ref*{eq:TotalOneToOne}.\n\n\\begin{equation}\\label{eq:TotalOneToOne}\n    \\begin{split}\n        r(\\underline{owner\\_id:\\mathbb{Z}^+},\\underline{ISBN: text}, name:text, title:text, author\\_name:text, \\\\ number\\_of\\_pages:\\mathbb{Z}^+).\n    \\end{split}\n\\end{equation}\n\\\\\nIf instead the entities have partial participation, a foreign key is placed on either of the relations. In equation \\ref{eq:PartialOneToOne} the foreign key is placed on the book owner.\n\n\\begin{equation}\\label{eq:PartialOneToOne}\n    \\begin{split}\n        Book(\\underline{ISBN : text} , title : text , author\\_name : text , number\\_of\\_pages : \\mathbb{Z}^+) \\\\\n        book\\_owner(\\underline{owner\\_id:\\mathbb{Z}^+} , name:text , ISBN \\rightarrow Book)\n    \\end{split}\n\\end{equation}\n\\\\\n\\textbf{Weak entity sets}\\\\\\\\\nWhen representing weak entity sets, one creates a relation containing the attributes of the weak entity set as well as the attributes of the identifying set's primary key.\nThe primary key of the identifying relation will also serve as a foreign key to the identifying relation\\cite{DBSBook}.\nThe relationship set is simply ignored, as the relationship set will not have any attributes\\cite{DBSBook}. \\\\\nAs an example, assume that a book owner is a weak entity which uses the book entity set as its identifying entity set. 
As can be seen in equation \\ref{eq:WeakEntity}, the relation for the weak entity book owner gains a foreign key to a book as part of its own primary key.\n\n\\begin{equation}\\label{eq:WeakEntity}\n    \\begin{split}\n        Book(\\underline{ISBN : text} , title : text , author\\_name : text , number\\_of\\_pages : \\mathbb{Z}^+) \\\\\n        book\\_owner(\\underline{ISBN \\rightarrow Book}, \\underline{owner\\_id:\\mathbb{Z}^+} , name:text)\n    \\end{split}\n\\end{equation}\n\\\\", "meta": {"hexsha": "064e8b7c111de3fbc309d43d1556944ba46497e5", "size": 9354, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/Chapter 3 Sprint 1/ER_Model.tex", "max_stars_repo_name": "chhoumann/p5", "max_stars_repo_head_hexsha": "ea51a218806e552691538e33c16f9cf067be52d4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-14T05:43:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-16T12:25:11.000Z", "max_issues_repo_path": "sections/Chapter 3 Sprint 1/ER_Model.tex", "max_issues_repo_name": "chhoumann/p5", "max_issues_repo_head_hexsha": "ea51a218806e552691538e33c16f9cf067be52d4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-11-10T09:54:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-09T07:47:18.000Z", "max_forks_repo_path": "sections/Chapter 3 Sprint 1/ER_Model.tex", "max_forks_repo_name": "chhoumann/p5", "max_forks_repo_head_hexsha": "ea51a218806e552691538e33c16f9cf067be52d4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.2452830189, "max_line_length": 547, "alphanum_fraction": 0.7723968356, "num_tokens": 2230, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5698708998328131}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 3.6 Commutation of $\\nabla$ on the Riemann tensor -- simple computation}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   DD{#}::Derivative.\n   \\nabla{#}::Derivative.\n\n   RabcdF := R_{a b c d} -> A_{a} B_{b} C_{c} D_{d}.      # cdb(RabcdF.000,RabcdF)\n   RabcdB := A_{a} B_{b} C_{c} D_{d} -> R_{a b c d}.      # cdb(RabcdB.000,RabcdB)\n\n   derivDD := DD_{b c}{V?_{a}}} -> R^{d}_{a b c} V?_{d}.  # cdb(derivDD.000,derivDD)\n\n   nablaDD := \\nabla_{f}{\\nabla_{e}{R_{a b c d}}}\n            - \\nabla_{e}{\\nabla_{f}{R_{a b c d}}} -> DD_{e f}{R_{a b c d}}.\n\n   # product rule for DD acting on A_{a} B_{b} C_{c} D_{d}\n   pruleDD := DD_{e f}{A_{a} B_{b} C_{c} D_{d}} -> DD_{e f}{A_{a}} B_{b} C_{c} D_{d}\n                                                 + A_{a} DD_{e f}{B_{b}} C_{c} D_{d}\n                                                 + A_{a} B_{b} DD_{e f}{C_{c}} D_{d}\n                                                 + A_{a} B_{b} C_{c} DD_{e f}{D_{d}}.\n                                                          # cdb(pruleDD.000,pruleDD)\n\n   expr :=   \\nabla_{f}{\\nabla_{e}{R_{a b c d}}}\n           - \\nabla_{e}{\\nabla_{f}{R_{a b c d}}}.         # cdb (ex-0306.100, expr)\n\n   substitute   (expr,nablaDD)                            # cdb (ex-0306.101, expr)\n   substitute   (expr,RabcdF)                             # cdb (ex-0306.102, expr)\n   substitute   (expr,pruleDD)                            # cdb (ex-0306.103, expr)\n   substitute   (expr,derivDD)                            # cdb (ex-0306.104, expr)\n   sort_product (expr)                                    # cdb (ex-0306.105, expr)\n   substitute   (expr,RabcdB)                             # cdb (ex-0306.106, expr)\n\n\\end{cadabra}\n\n\\begin{dgroup*}[spread={3pt}]\n   \\Dmath*{\\cdb{ex-0306.100} = \\Cdb*{ex-0306.101}\n                             = \\Cdb*{ex-0306.102}\n                             = \\Cdb*{ex-0306.103}\n                             = \\Cdb*{ex-0306.104}\n                             = \\Cdb*{ex-0306.105}\n                             = \\Cdb*{ex-0306.106}}\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "300a6969fd42c0e95e4bfbc470ff2342c3c9a260", "size": 2334, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0306.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0306.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0306.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, 
"max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 43.2222222222, "max_line_length": 94, "alphanum_fraction": 0.4271636675, "num_tokens": 801, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5698708955229298}}
{"text": "% Created 2017-08-28 lun 15:25\n\\documentclass[a4paper]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage{khpreamble}\n\\author{Kjartan Halvorsen}\n\\date{Due Thursday 2017-08-24 at midnight}\n\\title{Control Engineering - homework 1}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs 24.5.1 (Org mode 8.2.10)}}\n\\begin{document}\n\n\\maketitle\n\n\\section*{Proportional control of a DC-motor}\n\\label{sec-1}\n\nConsider a simple model of a DC-motor, where the inductance in the motor is neglected:\n\\begin{center}\n\\includegraphics[width=0.3\\linewidth]{dc-motor-circuit}\n\\end{center}\nA model of the motor is given by\n\\[ Y(s) = \\frac{K_f}{s(1+sT)} \\left( U(s) + D(s)\\right), \\]\nwhere $y(t)$ is the angle of the motor shaft, $u(t)$ is the voltage to the motor and $d(t)$ is a load disturbance (from whatever the motor is trying to move).\n\n\\begin{enumerate}\n\\item Determine the motor's \\emph{weighting function} $g(t)$.\n\\item In order to achieve a position servo, proportional feedback control is applied: \\[ u(t) = K(y_{ref} - y). \\] Draw a block-diagram of the closed-loop system with a block for the controller ($F(s)=K$), a block for the DC-motor, $y$ as output signal, and $y_{ref}$ and $d$ as input signals. Name all the signals in the diagram.\n\\item The closed-loop system can be written\n\\[ Y(s) = G_c(s) Y_{ref}(s) + G_d(s) D(s). \\]\nDetermine the transfer functions $G_c(s)$ and $G_d(s)$, and determine the poles of the closed-loop system (note that the two transfer functions will have the same denominator).\n\\item Determine the final values of the output signal, i.e.\n\\[ \\lim_{t \\to \\infty} y(t) \\]\nfor the two cases\n\\begin{enumerate}\n\\item The reference signal $y_{ref}(t)$ is a unit step and there is no disturbance $d(t)=0$.\n\\item The reference signal is zero and the disturbance $d(t)$ is a unit step.\n\\end{enumerate}\n\\item Determine the final value of the control error \\(e(t) = y_{ref}(t) - y(t)\\) when the reference signal $y_{ref}(t)$ is a unit ramp\n\\[ y_{ref}(t) = \\begin{cases} t & t\\ge 0\\\\ 0 & t<0 \\end{cases} \\]\nand there is no disturbance.\n\\end{enumerate}\n\n\n\n\\section*{Solutions}\n\\label{sec-2}\n\\begin{enumerate}\n\\item The motor's \\emph{weighting function} $g(t)$. This is the inverse laplace transform of the transfer function.\n\\begin{align*}\ng(t) &= \\laplaceinv{G(s)} = \\laplaceinv{\\frac{K_f}{s(1+sT)}} = \\laplaceinv{\\frac{A}{s} + \\frac{B}{1+sT}}\\\\\n&= \\laplaceinv{\\frac{A}{s}} + \\laplaceinv{\\frac{B}{1+sT}} = A u_h(t) + \\frac{B}{T}\\mexp{-\\frac{t}{T}}u_h(t),\n\\end{align*}\nwhere $u_h(t)$ is the Heaviside step function. The constants A and B are determined by writing\n\\[ \\frac{K_f}{s(1+sT)} = \\frac{A}{s} + \\frac{B}{1+sT}. \\]\n$A$ is found by multiplying each side first by $s$ and letting $s\\to 0$, which gives \n\\[ A = K_f \\]\n$B$ is found by the same method to be \n\\[ B = - K_f T \\]\nSo, the weighting function is\n\\[ g(t) = K_f \\left( 1 - \\mexp{-\\frac{t}{T}}\\right)u_h(t). 
\\]\nIt is OK to write the solution without the step-function, since we are considering only causal systems, and are using the uni-lateral Laplace transform.\n\\item Block-diagram of the closed-loop system\n     \\begin{center}\n\\begin{tikzpicture}[scale = 0.8, node distance=20mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n\\node[coordinate] (refinput) {};\n\\node[sumnode, right of=refinput, node distance=20mm] (sumerr) {\\tiny $\\sum$};\n\\node[block, right of=sumerr] (controller) {$F(s)$};\n\\node[above of=controller, node distance=6mm] {controller};\n\\node[sumnode, right of=controller, node distance=16mm] (sum) {\\tiny $\\sum$};\n\\node[block, right of=sum, node distance=20mm] (tank) {$\\frac{K_f}{s(1+sT)}$};\n\\node[above of=tank, node distance=6mm] {DC-motor};\n\\node[coordinate, right of=tank, node distance=20mm] (output) {};\n\\node[coordinate, above of=sum, node distance=12mm] (disturbance) {};\n\n\\draw[->] (refinput) -- node[above, pos=0.3] {$y_{ref}(t)$} (sumerr);\n\\draw[->] (sumerr) -- node[above] {$e(t)$} (controller);\n\\draw[->] (controller) -- node[above] {$u(t)$} (sum);\n\\draw[->] (sum) -- node[above] {} (tank);\n\\draw[->] (tank) -- node[coordinate] (measure) {} node[above, pos=0.8] {$y(t)$} (output);\n\\draw[->] (disturbance) -- node[right, pos=0.2] {$d(t)$} (sum);\n\\draw[->] (measure) -- ++(0,-14mm) -| node[right, pos=0.95] {$-$} (sumerr);\n\n\n\\end{tikzpicture}\n\\end{center}\n\n\\item The closed-loop system\n\\[ Y(s) = G_c(s) Y_{ref}(s) + G_d(s) D(s) \\] is found by block-diagram calculations to be\n\\[ Y(s) = \\frac{\\frac{K_f}{s(1+sT)}F(s)}{1 + \\frac{K_f}{s(1+sT)}F(s)}Y_{ref}(s) + \n                    \\frac{\\frac{K_f}{s(1+sT)}}{1 + \\frac{K_f}{s(1+sT)}F(s)}D(s)\\]\n\\[ Y(s) = \\frac{K_f F(s)}{s(1+sT) + K_fF(s)}Y_{ref}(s) + \\frac{K_f}{s(1+sT) + K_fF(s)} D(s). \\]\nWith proportional controller \\( F(s) = K \\)\n\\[ Y(s) = \\frac{K_f K}{s(1+sT) + K_fK}Y_{ref}(s) + \\frac{K_f}{s(1+sT) + K_fK} D(s). \\]\nThe characteristic polynomial is\n\\[ s^2 + \\frac{1}{T}s + \\frac{K_fK}{T} \\] and the poles are the roots of this polynomial\n\\[ s = -\\frac{1}{2T} ( 1 \\pm \\sqrt{1 - 4K_fKT} ). \\]\nNote that for high gain (\\( K \\) large) we get an oscillatory closed-loop system. This is a common effect of systems with proportional control.\n\\item Determine the final values of the output signal, i.e.\n\\[ \\lim_{t \\to \\infty} y(t) \\]\nfor the two cases\n\\begin{enumerate}\n\\item The reference signal $y_{ref}(t)$ is a unit step \\( Y_{ref}(s) = \\frac{1}{s}\\)  and there is no disturbance $d(t)=0$.\n\\[  \\lim_{t \\to \\infty} y(t) = \\lim_{s \\to 0} sY(s) = \\lim_{s \\to 0} s\\frac{K_f K}{s(1+sT) + K_fK}\\frac{1}{s} = 1 \\]\n\\item The reference signal is zero and the disturbance $d(t)$ is a unit step.\n\\[\\lim_{t \\to \\infty} y(t) = \\lim_{s \\to 0} sY(s) = \\lim_{s \\to 0} s\\frac{K_f}{s(1+sT) + K_fK}\\frac{1}{s} = \\frac{1}{K}.\\]\nNote that with higher gain $K$ we get better disturbance rejection, but we are not able to completely eliminate the disturbance. This is common when only P-control is used.\n\\end{enumerate}\n\\item Final value of the control error \\(e(t)\\) when the reference signal is a unit ramp. 
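Recall that the unit ramp has Laplace transform $Y_{ref}(s) = \\frac{1}{s^2}$; the final value theorem applied below is valid here because, for $K_f, K, T > 0$, both poles of $sE(s)$ lie strictly in the left half-plane. 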
We have \n\\begin{equation*}\n\\begin{split}\nE(s) &= Y_{ref}(s) - Y(s) = \\frac{1}{s^2} - \\frac{K_fK}{s(1+sT) + K_fK} \\frac{1}{s^2} = \\frac{s(1+sT) + K_fK - K_fK}{\\left(s(1+sT) + K_fK\\right)s^2}\\\\\n&= \\frac{1 + sT}{\\left(s(1+sT) + K_fK\\right)s}\n\\end{split}\n\\end{equation*}\nso\n\\[ \\lim_{t\\to\\infty} e(t) = \\lim_{s\\to 0} s \\frac{1 + sT}{\\left(s(1+sT) + K_fK\\right)s} = \\frac{1}{K_fK}. \\]\n\\end{enumerate}\n% Emacs 24.5.1 (Org mode 8.2.10)\n\\end{document}", "meta": {"hexsha": "e469dffe936a62dbb37f070d0c07c1c0bc66b700", "size": 6770, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "homework/dc-motor-part-one.tex", "max_stars_repo_name": "alfkjartan/automatic-control", "max_stars_repo_head_hexsha": "67e8701e1b3ef59a69fa774d0313d2a92aaa976f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homework/dc-motor-part-one.tex", "max_issues_repo_name": "alfkjartan/automatic-control", "max_issues_repo_head_hexsha": "67e8701e1b3ef59a69fa774d0313d2a92aaa976f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework/dc-motor-part-one.tex", "max_forks_repo_name": "alfkjartan/automatic-control", "max_forks_repo_head_hexsha": "67e8701e1b3ef59a69fa774d0313d2a92aaa976f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.4160583942, "max_line_length": 330, "alphanum_fraction": 0.6607090103, "num_tokens": 2464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746404, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5698708955229297}}
{"text": "\\input{_preamble}\n\n\\title{Introduction to complex networks}\n\n\\newcommand{\\set}[1]{\\ensuremath{\\mathcal{#1}}}\n\n\\renewcommand{\\vec}[1]{\\ensuremath{\\mathbf{#1}}}\n\\newcommand{\\mat}[1]{\\ensuremath{\\vec{#1}}}\n\\newcommand{\\tr}{\\ensuremath{\\intercal}}\n\n\\newcommand{\\R}{\\ensuremath{\\mathbb{R}}}\n\\newcommand{\\N}{\\ensuremath{\\mathcal{N}}}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{frame}{Contents}\n    \\tableofcontents[hideallsubsections]\n\\end{frame}\n\n\\section{Definitions}\n\n\\begin{frame}{Graph theory}\n    \\begin{description}\n        \\item[1736] Euler writes the first paper\n        \\item[1959] Erd\u0151s and R\u00e9nyi introduce probabilistic methods\n    \\end{description}\n    \\vfill\n    \\begin{center}\n        \\includegraphics[height=10em]{figures/seven_bridges} \\\\\n        {\\scriptsize%\n         From Junker and Schreiber (2008)}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Network theory}\n    \\begin{description}\n        \\item[1970s] Sociologists (`social network analysis')\n        \\item[2000s] Physicists and Computer Scientists\n    \\end{description}\n    \\vfill\n    \\begin{center}\n        \\includegraphics[height=10em]{figures/network}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Graphs and their representation}\n    A graph $\\set{G} = \\left( \\set{V}, \\set{E} \\right)$ consists of:\n    \\begin{enumerate}\n        \\item a set of \\textbf{vertices}\n              $\\set{V} = \\left\\{ v_{1}, \\ldots, v_{n} \\right\\}$\n        \\item a binary relation $\\set{E} \\subseteq \\set{V}^{2}$ that represents\n              \\textbf{edges}\n    \\end{enumerate}\n    \\vfill\\pause\n    \\begin{block}{Alternative representation of $\\set{E}$}\n        In terms of the \\textbf{adjacency matrix}\n        $\\mat{A} \\in \\{ 0, 1 \\}^{n \\times n}$, with elements\n        \\[\n            a_{ij} =\n            \\begin{cases}\n                1 & \\text{if} \\left( v_{i}, v_{j} \\right) \\in \\set{E} \\\\\n                0 & \\text{otherwise}\n            \\end{cases}\n        \\]\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{Example}\n    \\begin{columns}\n        \\begin{column}{0.5\\textwidth}\n            \\begin{center}\n                \\includegraphics[height=0.85\\textheight]{figures/graph}\n            \\end{center}\n        \\end{column}\n        \\begin{column}{0.5\\textwidth}\n            \\vspace{1em}\n            \\begin{align*}\n                \\set{V} &= \\left\\{ v_{1}, \\ldots, v_{6} \\right\\} \\\\\n                \\set{E} &= \\left\\{ (v_{1}, v_{1}), (v_{1}, v_{2}), \\ldots \\right\\} \\\\[1em]\n                \\mat{A} &=\n                \\begin{pmatrix}\n                    1 & 1 & 0 & 0 & 1 & 0\\\\\n                    1 & 0 & 1 & 0 & 1 & 0\\\\\n                    0 & 1 & 0 & 1 & 0 & 0\\\\\n                    0 & 0 & 1 & 0 & 1 & 1\\\\\n                    1 & 1 & 0 & 1 & 0 & 0\\\\\n                    0 & 0 & 0 & 1 & 0 & 0\\\\\n                \\end{pmatrix}\n            \\end{align*}\n        \\end{column}\n    \\end{columns}\n\\end{frame}\n\n\\begin{frame}{Weights}\n    \\begin{itemize}\n        \\item Assigned to elements of \\set{E}\n        \\item Represent the \\textbf{strength} of the relationship\n    \\end{itemize}\n    \\vfill\n    \\begin{block}{Examples}\n        \\begin{itemize}\n            \\item Signed edges:\n                  $\\mat{A} \\in \\{ -1, 0, 1 \\}^{n \\times n}$\n            \\item Weights on the unit interval:\n                  $\\mat{A} \\in [ 0, 1 
]^{n \\times n}$\n            \\item Weights on the positive real line:\n                  $\\mat{A} \\in [ 0, \\infty )^{n \\times n}$\n        \\end{itemize}\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{Properties of $\\set{E}$}\n    \\begin{block}{Irreflexivity}\n        \\centering\n        $\\forall\\, v_{i} \\in \\set{V}.\\, (v_{i}, v_{i}) \\not \\in \\set{E}$\n        \\begin{itemize}\n            \\item[$\\rightarrow$] No `self\\hyp{}loops' allowed\n        \\end{itemize}\n    \\end{block}\n    \\vfill\n    \\begin{block}{Symmetry}\n        \\centering\n        $\\forall\\, v_{i}, v_{j} \\in \\set{V}.\\, (v_{i}, v_{j}) \\in \\set{E} \\Rightarrow (v_{j}, v_{i}) \\in \\set{E}$\n        \\begin{itemize}\n            \\item[$\\rightarrow$] Direction of the relationship does not matter\n                                 ($\\set{G}$ is \\textbf{undirected})\n            \\item[$\\rightarrow$] $\\mat{A} = \\mat{A}^{\\tr}$ is also symmetric\n        \\end{itemize}\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{Properties of vertices}\n    \\begin{block}{Neighbourhood}\n        \\centering\n        $\\N(v_{i}) = \\left\\{ v_{j} : (v_{i}, v_{j}) \\in \\set{E} \\right\\} \\subseteq \\set{V}$\n        \\begin{itemize}\n            \\item[$\\rightarrow$] \\textbf{Set} of vertices to which $v_{i}$ is\n                                 connected by edges\n        \\end{itemize}\n    \\end{block}\n    \\vfill\n    \\begin{block}{Degree}\n        \\centering\n        $k_{i} = k(v_{i}) = \\left| \\, \\N(v_{i}) \\, \\right|$\n        \\begin{itemize}\n            \\item[$\\rightarrow$] \\textbf{Number} of vertices to which $v_{i}$ is\n                                 connected by edges\n        \\end{itemize}\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{In- and out-degree}\n    \\begin{columns}\n        \\begin{column}{0.5\\textwidth}\n            \\begin{align*}\n                k^{\\text{in}}_{i} &= \\left| \\left\\{ \\left( v_{j}, v_{i} \\right)\\;\\middle|\\;\\left( v_{j}, v_{i} \\right) \\in \\set{E} \\right\\} \\right| \\\\[1em]\n                k^{\\text{out}}_{i} &= \\left| \\left\\{ \\left( v_{i}, v_{j} \\right)\\;\\middle|\\;\\left( v_{i}, v_{j} \\right) \\in \\set{E} \\right\\} \\right|\n            \\end{align*}\n        \\end{column}\n        \\begin{column}{0.5\\textwidth}\n            \\begin{center}\n                \\includegraphics[width=\\textwidth]{figures/degree}\n            \\end{center}\n        \\end{column}\n    \\end{columns}\n\\end{frame}\n\n\\begin{frame}{Paths}\n    \\only<1>{%\n        A \\textbf{path} on $\\set{G} = ( \\set{V}, \\set{E} )$ is a graph\n        $\\set{P} = ( \\set{V}_{\\set{P}}, \\set{E}_{\\set{P}} )$ with:\n        \\begin{enumerate}\n            \\item $\\set{V}_{\\set{P}} = \\{ v_{(0)}, \\ldots, v_{(l)} \\} \\subseteq \\set{V}$\n            \\item $\\set{E}_{\\set{P}} = \\{ (v_{(0)}, v_{(1)}), \\ldots, (v_{(l-1)}, v_{(l)}) \\} \\subseteq \\set{E}$\n        \\end{enumerate}\n        \\begin{description}\n            \\item[Ends] Vertices $v_{(0)}$ and $v_{(l)}$\n            \\item[Length] Number of edges\n                          $\\left| \\, \\set{E}_{\\set{P}} \\, \\right| = l$\n        \\end{description}}\n    \\only<2>{%\n        \\begin{center}\n            \\includegraphics[width=\\textwidth]{figures/path} \\\\[1em]\n            {\\scriptsize%\n             From Diestel (2010)}\n        \\end{center}}\n\\end{frame}\n\n\\begin{frame}{Shortest paths}\n    \\begin{center}\n        
\\includegraphics[height=0.85\\textheight]{figures/shortest_path}\n    \\end{center}\n\\end{frame}\n\n\\section{Local properties}\n\n\\begin{frame}{Centrality}\n    \\begin{center}\n        \\Large%\n        Which are the most important \\\\\n        (or `central') vertices?\n    \\end{center}\n    \\vfill\n    \\only<1>{%\n        \\begin{block}{Degree centrality}\n            How many connections does a vertex have to other vertices?\n        \\end{block}}\n    \\only<2>{%\n        \\begin{block}{Closeness centrality}\n            How `far' (in terms of shortest paths) is a vertex with respect to\n            all other vertices?\n        \\end{block}}\n    \\only<3>{%\n        \\begin{block}{Betweenness centrality}\n            How many times does a vertex act as a `bridge' along the shortest\n            paths between all pairs of vertices?\n        \\end{block}}\n    \\only<4>{%\n        \\begin{block}{Eigenvector centrality (PageRank)}\n            How many times do we stumble upon a vertex while randomly walking on\n            the graph?\n        \\end{block}}\n\\end{frame}\n\n\\begin{frame}{Local clustering coefficient}\n    \\[\n        C_{i} = C(v_{i}) = \\frac{| \\, \\{ (v_{j}, v_{k}) : \\{ (v_{i}, v_{j}), (v_{i}, v_{k}), (v_{j}, v_{k}) \\} \\subseteq \\set{E} \\} \\, |}{k_{i} \\, (k_{i} - 1) / 2} \\in [0, 1]\n    \\]\n    \\vfill\n    \\begin{itemize}\n        \\item Probability that a pair of neighbours of $v_{i}$ are connected\n        \\item Empirically: $k_{i} \\nearrow$, $C_{i} \\searrow$\n        \\item[$\\rightarrow$] Can be used to identify `structural holes'\n    \\end{itemize}\n\\end{frame}\n\n\\section{Large-scale structure}\n\n\\begin{frame}{`Small-world' effect}\n    \\vfill\n    \\begin{quotation}\n        `\\ldots Milgram's letter\\hyp{}passing experiment in the 1960s, in which\n        people were asked to get a letter from an initial holder to a distant\n        target person by passing it from acquaintance to acquaintance through\n        the social network.\n        The letters that made it to the target did so in a remarkably small\n        number of steps, around six on average.'\n\n        \\begin{flushright}\n            \\scriptsize%\n            From Newman (2010)\n        \\end{flushright}\n    \\end{quotation}\n    \\vfill\n\\end{frame}\n\n\\begin{frame}{Degree distribution}\n    \\[\n        \\Pr\\!\\left[ K = k \\,\\right]\n        = \\Pr\\!\\left[ \\text{`a randomly selected vertex $v_{i} \\in \\set{V}$ has degree $k \\geq 0$'} \\right]\n    \\]\n    \\begin{center}\n        \\includegraphics[height=0.65\\textheight]{figures/degree_dist}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Power laws}\n    \\begin{align*}\n        \\ln y &= - \\alpha \\ln x + c \\\\\n        y &= C\\,x^{\\,-\\alpha}, \\quad C = e^{\\,c}\n    \\end{align*}\n    \\vfill\n    \\begin{itemize}\n        \\item Sizes of city populations and wars\n        \\item Frequency of use of words in human languages\n        \\item Occurrence of personal names in most cultures\n        \\item Number of papers scientists write\n        \\item Number of hits on Web pages\n        \\item Sales of almost every branded commodity\n        \\item Number of species in biological taxa\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Scale-free networks}\n    \\only<1>{%\n        \\begin{itemize}\n            \\setlength{\\itemsep}{\\bigskipamount}\n            \\item $\\Pr\\!\\left[ K = k \\,\\right] \\propto k^{\\,-\\alpha}$\n                  (at least asymptotically), usually $2 \\le 
\\alpha \\le 3$\n            \\item \\textbf{Self\\hyp{}similar} (scale invariance)\n            \\item Presence of \\textbf{hubs} (Milgram's `sociometric superstars')\n                  \\begin{itemize}\n                      \\item[$\\rightarrow$] Fault tolerance\n                      \\item[$\\rightarrow$] `Small\\hyp{}world' effect\n                  \\end{itemize}\n        \\end{itemize}}\n    \\only<2>{%\n        \\begin{block}{Examples}\n            \\begin{itemize}\n                \\setlength{\\itemsep}{\\medskipamount}\n                \\item Social and collaboration networks\n                \\item Many kinds of communication networks (e.g.\\ the Internet)\n                \\item \\textbf{Many biological networks}\n            \\end{itemize}\n        \\end{block}}\n\\end{frame}\n\n\\section{Models of network formation}\n\n\\begin{frame}{What is complexity?}\n    \\only<1>{%\n        \\begin{block}{Complex}\n            Many interacting parts, not necessarily `difficult'\n        \\end{block}\n        \\vfill\n        \\begin{block}{Complicated}\n            Difficult, not necessarily `complex'\n        \\end{block}}\n    \\only<2>{%\n        \\begin{center}\n            \\includegraphics[height=0.8\\textheight]{figures/onion} \\\\\n            {\\scriptsize%\n             Via \\textit{Wikimedia Commons}}\n        \\end{center}}\n\\end{frame}\n\n\\begin{frame}{Self-organisation}\n    \\only<1>{%\n        \\begin{center}\n            \\Large%\n            Spontaneous emergence of global structure \\\\\n            out of local interactions\n        \\end{center}\n        \\begin{itemize}\n            \\setlength{\\itemsep}{\\medskipamount}\n            \\item Robust to damage and perturbations\n                  \\begin{itemize}\n                      \\item Not controlled by (internal or external) agents\n                      \\item Collective, distributed process\n                  \\end{itemize}\n            \\item Outcome is not arbitrary, but `prefers' certain situations \\\\\n                  $\\rightarrow$ Natural selection\n        \\end{itemize}}\n    \\only<2>{%\n        \\begin{block}{Examples}\n            \\setlength{\\itemsep}{\\medskipamount}\n            \\begin{itemize}\n                \\item Protein folding\n                \\item Pattern formation and morphogenesis\n                \\item Social structures and herd behaviour\n            \\end{itemize}\n        \\end{block}}\n    \\only<3>{%\n        \\begin{center}\n            \\Large%\n            Topology is crucial to understanding \\\\\n            stochastic processes on networks\n        \\end{center}}\n\\end{frame}\n\n\\begin{frame}{Models of network formation}\n    \\begin{center}\n        \\includegraphics[width=\\textwidth]{figures/models}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Erd\u0151s--R\u00e9nyi model}\n    \\begin{block}{Algorithm}\n        Construct a graph with $n$ vertices and include each edge with\n        probability $p$ (independently from every other edge)\n    \\end{block}\n    \\vfill\\pause\n    \\begin{block}{Degree distribution}\n        \\[\n            \\Pr\\left[ K = k \\,\\right] = \\binom{n-1}{k} \\, p^{\\,k} \\left( 1-p \\right)^{\\,n-1-k}\n        \\]\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{Barab\u00e1si--Albert model}\n    \\begin{block}{Algorithm}\n        \\begin{itemize}\n            \\item Start with an initial connected graph with $n$ vertices\n            \\item Add a new vertex and connect it to $n^{\\star} \\leq n$ existing\n                
  vertices with probability proportional to their degree \\\\\n                  $\\rightarrow$ \\textbf{Preferential attachment}\n        \\end{itemize}\n    \\end{block}\n    \\vfill\\pause\n    \\begin{block}{Degree distribution}\n        \\[\n            \\Pr\\left[ K = k \\,\\right] \\propto k^{\\,-3}\n        \\]\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{What about `--omics'?}\n    \\begin{itemize}\n        \\setlength{\\itemsep}{\\bigskipamount}\n        \\item Very few longitudinal datasets \\\\\n              $\\rightarrow$ Difficult to understand evolution\n        \\item Few cross\\hyp{}platform studies \\\\\n              $\\rightarrow$ Can understand interactions between complexity\n                            levels\n        \\item Many single\\hyp{}platform studies \\\\\n              $\\rightarrow$ Can characterise ``co\\hyp{}'' networks\n    \\end{itemize}\n\\end{frame}\n\n\\end{document}\n\n", "meta": {"hexsha": "09be0bb07404718648a7bfb6bec382c674457cdb", "size": 14002, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/intro_complex_networks.tex", "max_stars_repo_name": "estimand/intro-to-complex-networks", "max_stars_repo_head_hexsha": "85da17c6d978e8ad9ee09150f38b4b38aa9ee146", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-04-23T08:47:18.000Z", "max_stars_repo_stars_event_max_datetime": "2018-04-23T08:47:18.000Z", "max_issues_repo_path": "slides/intro_complex_networks.tex", "max_issues_repo_name": "estimand/intro-to-complex-networks", "max_issues_repo_head_hexsha": "85da17c6d978e8ad9ee09150f38b4b38aa9ee146", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/intro_complex_networks.tex", "max_forks_repo_name": "estimand/intro-to-complex-networks", "max_forks_repo_head_hexsha": "85da17c6d978e8ad9ee09150f38b4b38aa9ee146", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-15T03:46:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-15T03:46:04.000Z", "avg_line_length": 33.8212560386, "max_line_length": 174, "alphanum_fraction": 0.5441365519, "num_tokens": 4161, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5698708913112412}}
{"text": "\\chapter{Methods}\\label{ch:methods}\n\n\\section{Mini-introduction to LaTeX}\\label{sec:latex}\n\nThis chapter needs to be filled out. For now, it contains some very basic examples.\n\n\n\\subsection{Equations}\\label{sec:latex:equations}\n\nEquations and mathematical formulas can be included in-line, within the text, for example when explaining that $\\norm{\\vec x}_2$ signifies the $\\ell_2$ norm of a vector, as long as the included math formula or symbols are short (less than half a line). For longer formulas, or formulas that need to be referenced later in the text, one should use the \\texttt{equation} macro or related macros, e.g., \\texttt{align}. For example, the dot-product of two vectors, $\\vec x$ and $\\vec y$, is defined as\n\\begin{equation}\\label{eq:dot-product}\n    \\langle \\vec x, \\vec y \\rangle = \\vec{x}^T\\vec{y} = \\sum_{j=1}^m x_j\\times y_j.\n\\end{equation}\n\nNote that equations can be assigned labels and referenced throughout the text. However, unlike figures and tables (see next sections), the equation should be defined prior to its reference. Now that we can defined the formula for the vector dot-product, we can now refer to Equation~\\ref{eq:dot-product} to remind readers of its definition.\n\n\\subsection{Figures}\\label{sec:latex:figures}\n\nFigures should contain a caption at the bottom. The caption is a sentence or phrase that describes the content of the figure. As such, it should start with a capital letter and end with a period. The caption is not a title, so subsequent words after the first word should not be capitalized unless they are proper names. A figure should be introduced in the text prior to its inclusion in the text. When referring to the figure, make use of the  \\texttt{{\\textbackslash}figurename} macro, provided by the IEEE style, to ensure style compliance. Additionally, use a non-breaking space character to connect the \\figurename~text with the number assigned to the figure reference. This will ensure that \\figurename~and the reference will be included on the same line. For example, \\figurename~\\ref{fig:proj-arch} introduces our project architecture. Note that font sizes in figures and tables may be smaller than the rest of the thesis. However, they should be readable.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.7\\textwidth]{img/proj-arch.pdf}\n  \\caption{Project architecture.}\n  \\label{fig:proj-arch}\n\\end{figure}\n\n\n\\subsection{Tables}\\label{sec:latex:tables}\n\nTables should contain a caption at the top. Unlike figures, the caption in the table is the table title, and it should be capitalized accordingly. The table should be presented in the text prior to its inclusion in the text. Moreover, references to tables should follow the same directions as those for figures, except use the \\texttt{{\\textbackslash}tablename} macro. For example, \\tablename~\\ref{tbl:dataset} introduces statistics about the dataset used in our analysis.\n\n\\begin{table}[!htb]\n\\caption{Dataset Characteristics}\n\\label{tbl:dataset}\n\\centering\n\\input{tables/dataset}\n\\end{table}\n\n\\subsection{Algorithms}\\label{sec:latex:algorithms}\n\nLaTeX provides powerful macros for writing algorithms. Note that labels can be included within the algorithm, which allows the writer to refer to specific lines in the algorithm. The algorithm should be introduced first. Note that the label referring to the whole algorithm is placed right after the caption. 
Similar to a table, the caption is a title and should be capitalized accordingly. For example, Algorithm~\\ref{alg:proj-motif} describes the process for producing a normalized 2-hop left-projection of a bipartite graph described by the vertex sets $V^1$ and $V^2$ and the edge set $E$. Long algorithms (greater than half a page) should be relegated to the Appendix.\n\n\\begin{algorithm}[!htb]\n\\caption{The Normalized 2-hop Left-Projection Algorithm}\\label{alg:proj-motif}\n\\small\n\\begin{algorithmic}[1]\n\\Procedure{Project}{$V^1,V^2,E$}\\Comment{Construct the left projection $G^1$}\n%%\n\\item[] \\quad\\; // Initialize and compute degree vectors.\n\\For {$(v^1_i, v^2_j) \\in E$} \\Comment{Count neighbors}\n\\State $\\lambda_n(i) \\gets \\lambda_n(i) + 1$\n\\State $\\lambda_m(j) \\gets \\lambda_m(j) + 1$\n\\EndFor\n\\item[] \\quad\\; // SpGEMM\n%%\n\\For {$v^1_i \\in V^1$} \\label{alg:proj-motif-spgemm1}\n\\State $acc(:) \\gets 0$ \\Comment{Initialize output row accumulator data structure}\n\\For {$v^2_k \\in \\Gamma(v^1_i)$}\n\\State $l \\gets (1/\\lambda_n(i)) \\times w_{i,k} \\times (1/\\lambda_m(k))$ \\label{alg:proj-motif-lk}\n\\For {$v^1_j \\in \\Gamma(v^2_k)$}\n\\If {$v^1_j \\ne v^1_i$} \\Comment{Avoid computing output diagonal}\n\\State $acc(j) \\gets acc(j) + l \\times w_{j,k}$ \\label{alg:proj-motif-accum}\n\\EndIf\n\\EndFor\n\\EndFor\n\\State $C(i,:) = acc(:)$\n\\EndFor\\label{alg:proj-motif-spgemm2}\n\\item[] \\quad\\; // Normalize rows of $C$\n%%\n\\For {$i = 1, \\ldots, |V^1|$}\\label{alg:proj-motif-norm1}\n\\State $n_i \\gets 0$\n\\For {$j = 1, \\ldots, |V^1| \\text{ s.t. } C(i,j) > 0$}\n\\State $n_i \\gets n_i + \\text{abs}(C(i,j))$\n\\EndFor\n\\For {$j = 1, \\ldots, |V^1| \\text{ s.t. } C(i,j) > 0$}\n\\State $C(i,j) \\gets C(i,j) / n_i$\n\\EndFor\n\\EndFor\\label{alg:proj-motif-norm2}\n\\State \\textbf{return} $C$\\Comment{Edge set weight matrix for $G^1$}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\nAfter introducing the algorithm, one can then refer to sections of the algorithm in the process of describing it in the text. For example, note that line~\\ref{alg:proj-motif-accum} denotes the accumulation of projection weights for a vertex $v_i^1$ in $V^1$. For theses that contain many mathematical symbols, it is a good idea to provide the reader with a reference for the meaning of these symbols. This table is generally included in the preliminary chapters, after the introduction. We will include an example here. \\tablename~\\ref{tbl:notation-g} provides a reference for notation used throughout the paper that the algorithm above is included in.\n\n\\begin{table}[!htb]\n\\caption {Graph and Projection Notation} \\label{tbl:notation-g}\n\\centering\n\\small\n \\begin{tabular}{l l}\n \\hline\n \\textbf{Symbol} & \\textbf{Meaning} \\\\ \n \\hline\n $G$ & A graph topology. \\\\\n $V^1$, $V^2$ & The left and right node partitions in a bipartite graph. \\\\\n $E$ & The set of edges in $G$. \\\\\n $|X|$ & The cardinality of set $X$. \\\\\n $v^1_i \\in V^1$ & A vertex in the left node partition $V^1$. \\\\\n $v^2_j \\in V^2$ & A vertex in the right node partition $V^2$. \\\\\n $(v^1_i, v^2_j, w_{i,j})\\in E$ & The edge connecting nodes $v^1_i$ and $v^2_j$ with weight $w_{i,j}$ in graph $G$. \\\\\n $\\Gamma(v^1_i)$ & The set of vertices in $V^2$ adjacent to $v^1_i$, i.e., its neighborhood. 
\\\\\n $\\lambda_n,\\lambda_m$ & Node degree vectors for the left and right partitions.\\\\\n $A \\in \\mathbb{R}^{n\\times m}$ & The edge weight matrix for the bipartite graph $G$; $A(i,j) = w_{i,j}$. \\\\\n $B$ & Weighted adjacency matrix for graph $G$.\\\\\n \\hline\n $G^1 = (V^1, E^1)$ & The left projection graph.\\\\\n $G^2 = (V^2, E^2)$ & The right projection graph.\\\\\n $G^b = (V, E^b)$ & The biprojection graph.\\\\\n $(v^1_i, v^1_j, s^1_{i,j})\\in E^1$ & The edge connecting $v^1_i$ and $v^1_j$ with weight $s^1_{i,j}$ in the left projection graph. \\\\\n $W^1$ & Edge weight matrix for the left projection graph; $W^1(i,j) = s^1_{i,j}$.\\\\\n $W^2$ & Edge weight matrix for the right projection graph; $W^2(i,j) = s^2_{i,j}$.\\\\\n $W$ & Weighted adjacency matrix for the bi-projection graph of $G$.\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\subsection{Location of Floats}\\label{sec:latex:floats}\n\nFloats (e.g., algorithms, tables and figures) should not follow the IEEE standard of being placed at the top or bottom of the page. In a thesis, they must appear right after their reference and not break any paragraphs, i.e., they should be placed in the text, after the paragraph in question. Use the location hint \\texttt{[!htb]} to let LaTeX know the float should be placed in between paragraphs. If the float takes up an entire page and the last paragraph on the current page does not fit in its entirety, it should be moved after the float page to avoid breaking the continuity of the paragraph, leaving some blank lines at the end of the current page.\n\nPages that contain only floats will have floats vertically centered, which will produce additional blank lines between the floats in the page or between a float and the top and bottom of the page. The thesis guidelines specifically prohibit additional white space in the thesis. This case and the one discussed in the previous paragraph are the only acceptable reasons for having extra blank lines in the text.\n\n\\subsection{Citations}\\label{sec:latex:citations}\nWhen citing papers, it is advisable to collect BibTeX records for the papers that you wish to cite, which will be automatically processed into the correct citation style by the LaTeX/BibTeX compiler. Citing a work by Anastasiu and Karypis~\\cite{anastasiu-dsaa2016}, for example, is as simple as writing ``Anastasiu and Karypis\\textasciitilde{\\textbackslash}cite\\{anastasiu-dsaa2016\\},'' where \\textit{anastasiu-dsaa2016} is the key associated with the paper's BibTeX record in the \\textit{references.bib} file. Note the non-breaking space character (\\textasciitilde) between the word before the citation and the \\textit{{\\textbackslash}cite} command. It gets translated into a space, and does not allow LaTeX to print the citation on a different line than the connecting word. This, of course, does not work if you put a space between the word and the non-breaking space character, e.g., ``Karypis \\textasciitilde{\\textbackslash}cite\\{anastasiu-dsaa2016\\}''.\n\nWhen citing papers with a single author, one should refer to the work by citing the author's last name. For example, Anastasiu~\\cite{anastasiu-idsc2017} used a combination of heuristic candidate selection and search space filtering to efficiently identify approximate nearest neighbors. When the work has two authors, both authors should be referenced by their last name, as in the previous example of the work by Anastasiu and Karypis~\\cite{anastasiu-dsaa2016}. 
Finally, if there are more than two authors, one should reference the first author's last name, followed by the phrase \\textit{et al.}, which is short for the Latin \\textit{et alia} and means ``and others''. Note the placement of the period. As an example, Kapoor et al.~\\cite{kapoor-fie2018} discuss the benefits of competitive active learning in engineering education. Note that it is not necessary to mention authors by last name when listing multiple works as examples or evidence of a statement. For example, many recent works have developed filtering-based methods for nearest neighbor graph construction~\\cite{anastasiu-icde2014, anastasiu2015, anastasiu-sc2015, anastasiu-sc2016}.\n\n", "meta": {"hexsha": "330bc81c85deaa7291eb5729d8ea79470c45c77a", "size": 10693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "methods.tex", "max_stars_repo_name": "davidanastasiu/SCU-SOE-thesis", "max_stars_repo_head_hexsha": "978ba34c64da48a7b886d017225bbab8d7169059", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "methods.tex", "max_issues_repo_name": "davidanastasiu/SCU-SOE-thesis", "max_issues_repo_head_hexsha": "978ba34c64da48a7b886d017225bbab8d7169059", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "methods.tex", "max_forks_repo_name": "davidanastasiu/SCU-SOE-thesis", "max_forks_repo_head_hexsha": "978ba34c64da48a7b886d017225bbab8d7169059", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.8914728682, "max_line_length": 1151, "alphanum_fraction": 0.7480594782, "num_tokens": 2994, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5698708913112412}}
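As a companion to Algorithm \\ref{alg:proj-motif} above, a rough Python transcription (a sketch only: plain dictionaries stand in for the sparse-matrix structures, and the helper name \\texttt{project} is our own choice, not part of the template).
\\begin{lstlisting}
from collections import defaultdict

def project(edges):
    """Normalized 2-hop left-projection; edges holds (i, j, w) triples."""
    lam_n = defaultdict(float)  # left degree vector, lambda_n
    lam_m = defaultdict(float)  # right degree vector, lambda_m
    adj1 = defaultdict(list)    # i -> [(k, w_ik)]
    adj2 = defaultdict(list)    # k -> [(j, w_jk)]
    for i, j, w in edges:
        lam_n[i] += 1
        lam_m[j] += 1
        adj1[i].append((j, w))
        adj2[j].append((i, w))
    C = {}
    for i in adj1:                       # SpGEMM-style row expansion
        acc = defaultdict(float)
        for k, w_ik in adj1[i]:
            l = w_ik / (lam_n[i] * lam_m[k])
            for j, w_jk in adj2[k]:
                if j != i:               # avoid the output diagonal
                    acc[j] += l * w_jk
        n_i = sum(abs(v) for v in acc.values())
        # L1-normalize the row, as in the final loop of the algorithm
        C[i] = {j: v / n_i for j, v in acc.items()} if n_i else {}
    return C
\\end{lstlisting}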
{"text": "\\chapter{Dipole radiation}\nRadiation means the generation of electromagnetic waves, which then propagate away from the source to infinity. Classically, electromagnetic waves are created by charged particles accelerating, or, equivalently, currents varying in time. There is no radiation in case of electric and magnetic fields of a point charge moving with constant velocity. The energy in the electromagnetic field is transported along with the particle, but does not propagate away toward infinity. But if the charge velocity is not constant there must be radiation.\\\\\nAs we start the study of radiation, it is interesting to remember that the classical field theory was developed from experiments and observations on macroscopic objects such as charged bodies, insulators, conductors, current-carrying wires, and magnets. The classical theory is therefore at its best when applied to macroscopic electromagnetic phenomena. So, for example, the theory applies to radiation of radio waves by an antenna, or microwaves by a cavity resonator. In this book we concentrate mainly on systems of that kind.\\\\\n However, much of the radiation around us-in fact much of the radiation in the Universe-comes from microscopic systems and must be described by quantum electrodynamics. For example, sunlight, light from the filament of an incandescent bulb, fluorescent light, or laser light can only be explained by quantum considerations of individual atoms, molecules, or systems of atoms. For these systems the classical theory does not really apply, except perhaps in a qualitative, heuristic way. Only quantum electrodynamics can properly describe radiation by an atom. Despite its limited applicability, the classical theory of radiation is an important part of electromagnetism. Quantum electrodynamics, which has limitations of its own, relies on a foundation of classical electrodynamics.\\\\\n \\newpage\n \\section{Electric dipole radiation}\n Picture two tiny metal spheres separated by a distance $d$ and connected by a fine wire (figure below); at time $t$ the charge on the upper sphere is $q(t)$, and the charge on the lower sphere is $-q(t)$. 
Suppose that we drive the charge back and forth through the wire, from one end to the other, at an angular frequency $\\omega$:\n $$\n q(t)=q_{0} \\cos (\\omega t)\n $$\n \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[height=5cm,width=8cm]{dipole-crop}\n \t\\caption{}\n \t\\label{}\n \\end{figure}\n The result is an oscillating electric dipole:\n $$\n \\mathbf{p}(t)=p_{0} \\cos (\\omega t) \\hat{\\mathbf{z}}\n $$\n where\n $$\n p_{0}=q_{0} d\n $$\n is the maximum value of the dipole moment.\n With the approximation of\n $$\n d \\ll \\frac{\\lambda}{2 \\pi} \\ll r\n $$\n (recall that $\\frac{\\lambda}{2 \\pi}=\\frac{c}{\\omega}$) we can calculate the potentials\\\\\n $$\n V(r, \\theta, t)=-\\frac{p_{0} \\omega}{4 \\pi \\epsilon_{0} c}\\left(\\frac{\\cos \\theta}{r}\\right) \\sin [\\omega(t-r / c)]\n $$\n and\n $$\n \\mathbf{A}(r, \\theta, t)=-\\frac{\\mu_{0} p_{0} \\omega}{4 \\pi r} \\sin [\\omega(t-r / c)] \\hat{\\mathbf{z}}\n $$\n Hence we get the fields\n $$\n \\mathbf{E}=-\\nabla V-\\frac{\\partial \\mathbf{A}}{\\partial t}=-\\frac{\\mu_{0} p_{0} \\omega^{2}}{4 \\pi}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)] \\hat{\\boldsymbol{\\theta}}\n $$\n and\n $$\n \\mathbf{B}=\\nabla \\times \\mathbf{A}=-\\frac{\\mu_{0} p_{0} \\omega^{2}}{4 \\pi c}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)] \\hat{\\phi}\n $$\n The energy radiated by an oscillating electric dipole is determined by the Poynting vector:\n $$\\mathbf{S}(\\mathbf{r}, t)=\\frac{1}{\\mu_{0}}(\\mathbf{E} \\times \\mathbf{B})=\\frac{\\mu_{0}}{c}\\left\\{\\frac{p_{0} \\omega^{2}}{4 \\pi}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)]\\right\\}^{2} \\hat{\\mathbf{r}}$$\n The intensity is obtained by averaging (in time) over a complete cycle:\n $$\\langle\\mathbf{S}\\rangle=\\left(\\frac{\\mu_{0} p_{0}^{2} \\omega^{4}}{32 \\pi^{2} c}\\right) \\frac{\\sin ^{2} \\theta}{r^{2}} \\hat{\\mathbf{r}}$$\n The total power radiated is found by integrating $\\langle\\mathbf{S}\\rangle$ over a sphere of radius $r$:\n $$ \\langle P\\rangle=\\int\\langle\\mathbf{S}\\rangle \\cdot d \\mathbf{a}=\\frac{\\mu_{0} p_{0}^{2} \\omega^{4}}{32 \\pi^{2} c} \\int \\frac{\\sin ^{2} \\theta}{r^{2}} r^{2} \\sin \\theta d \\theta d \\phi=\\frac{\\mu_{0} p_{0}^{2} \\omega^{4}}{12 \\pi c}$$\n \\section{Magnetic dipole radiation}\n Suppose now that we have a wire loop of radius $b$ (figure below) around which we drive an alternating current:\n $$I(t)=I_{0} \\cos (\\omega t)$$\n This is a model for an oscillating magnetic dipole,\n $$\n \\mathbf{m}(t)=\\pi b^{2} I(t) \\hat{\\mathbf{z}}=m_{0} \\cos (\\omega t) \\hat{\\mathbf{z}}\n $$\n \\begin{figure}[H]\n \t\\centering\n \t\\includegraphics[height=4.5cm,width=7cm]{m dipole-crop}\n \t\\caption{}\n \t\\label{}\n \\end{figure}\n where the maximum value of the magnetic dipole moment is\\\\\n $$m_{0}=\\pi b^{2} I_{0}$$\n Again with the assumption of\n $$\n b \\ll \\frac{\\lambda}{2 \\pi} \\ll r\n $$\n we get the potentials and fields \\\\\n $$\\mathbf{A}(r, \\theta, t)=-\\frac{\\mu_{0} m_{0} \\omega}{4 \\pi c}\\left(\\frac{\\sin \\theta}{r}\\right) \\sin [\\omega(t-r / c)] \\hat{\\phi}$$\n There is no electric scalar potential as there is no static charge. 
Hence the fields\\\\\n $$\\mathbf{E}=-\\frac{\\partial \\mathbf{A}}{\\partial t}=\\frac{\\mu_{0} m_{0} \\omega^{2}}{4 \\pi c}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)] \\hat{\\phi}$$\n and\n $$\\mathbf{B}=\\nabla \\times \\mathbf{A}=-\\frac{\\mu_{0} m_{0} \\omega^{2}}{4 \\pi c^{2}}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)] \\hat{\\boldsymbol{\\theta}}$$\n The Poynting vector is\n $$\\mathbf{S}(\\mathbf{r}, t)=\\frac{1}{\\mu_{0}}(\\mathbf{E} \\times \\mathbf{B})=\\frac{\\mu_{0}}{c}\\left\\{\\frac{m_{0} \\omega^{2}}{4 \\pi c}\\left(\\frac{\\sin \\theta}{r}\\right) \\cos [\\omega(t-r / c)]\\right\\}^{2} \\hat{\\mathbf{r}}$$\n the time average of the Poynting vector is\n $$\n \\langle\\mathbf{S}\\rangle=\\left(\\frac{\\mu_{0} m_{0}^{2} \\omega^{4}}{32 \\pi^{2} c^{3}}\\right) \\frac{\\sin ^{2} \\theta}{r^{2}} \\hat{\\mathbf{r}}\n $$\n and the total radiated power is\n $$\n \\langle P\\rangle=\\frac{\\mu_{0} m_{0}^{2} \\omega^{4}}{12 \\pi c^{3}}\n $$\n \\begin{note}\n \tIf you compare the power formulas for electric and magnetic dipole radiation you get\n \t$$\\frac{P_{\\text {magnetic }}}{P_{\\text {electric }}}=\\left(\\frac{m_{0}}{p_{0} c}\\right)^{2}$$\n \\end{note}\n \n\\section{Radiation from an arbitrary source}\nThe power radiated by an arbitrary electric source whose electric dipole moment changes with time is\\\\\n$$P_{\\operatorname{rad}}\\left(t_{0}\\right) \\cong \\frac{\\mu_{0}}{6 \\pi c}\\left[\\ddot{p}\\left(t_{0}\\right)\\right]^{2}$$\nThe power radiated by an arbitrary magnetic source whose magnetic dipole moment changes with time is\n$$P_{\\mathrm{rad}}\\left(t_{0}\\right) \\cong \\frac{\\mu_{0}}{6 \\pi c^{3}}\\left[\\ddot{m}\\left(t_{0}\\right)\\right]^{2}$$\n\\begin{exercise}\n A particle moves in a straight line with velocity $v=v_{0}+\\alpha t$. At what rate does it radiate energy?\n\\end{exercise}\n\\begin{answer}\nThe position of the particle is\n$$\nx=x_{0}+v_{0} t+\\frac{1}{2} \\alpha t^{2}\n$$\nNow the dipole moment is\n$$\n\\mathbf{p}=q x \\hat{i}\n$$\n$$\\begin{aligned}\n\t&\\Rightarrow p=q\\left(x_{0}+v_{0} t+\\frac{1}{2} \\alpha t^{2}\\right) \\\\\n\t&\\Rightarrow \\ddot{p}=q \\alpha \\\\\n\t&\\text { Power }=\\frac{\\mu_{0}}{6 \\pi c} q^{2} \\alpha^{2}\n\\end{aligned}$$\nThis is the famous Larmor formula.\\\\\nIt can be used for any point charge moving with acceleration $\\alpha$.\t\n\\end{answer}\n\\begin{exercise}\n\t A parallel-plate capacitor $C$, with plate separation $d$, is given an initial charge $(\\pm) Q_{0}$. It is then connected to a resistor $R$, and discharges, $Q(t)=Q_{0} e^{-t / R C}$. 
What fraction of its initial energy $\\left(Q_{0}^{2} / 2 C\\right)$ does it radiate away?\n\\end{exercise}\n\\begin{answer}\n\tPower radiated is\n\t$$\n\t\\frac{\\mu_{0}}{6 \\pi c} \\ddot{p}^{2}\n\t$$\n\tIn our case\n\t$$\n\tp=Q d, \\text { so } \\ddot{p}=\\ddot{Q} d=Q_{0}\\left(\\frac{1}{R C}\\right)^{2} e^{-t / R C} d\n\t$$\n\tSo power radiated\n\t$$\n\t\\frac{d W_{r}}{d t}=\\frac{\\mu_{0}}{6 \\pi c} \\frac{\\left(Q_{0} d\\right)^{2}}{(R C)^{4}} e^{-2 t / R C}\n\t$$\n\tThe total energy radiated is\n\t$$\n\t\\begin{aligned}\n\tW_{r} &=\\frac{\\mu_{0}}{6 \\pi c} \\frac{\\left(Q_{0} d\\right)^{2}}{(R C)^{4}} \\int_{0}^{\\infty} e^{-2 t / R C} d t \\\\\n\t&=\\left.\\frac{\\mu_{0}}{6 \\pi c} \\frac{\\left(Q_{0} d\\right)^{2}}{(R C)^{4}}\\left[-\\frac{R C}{2} e^{-2 t / R C}\\right]\\right|_{0} ^{\\infty} \\\\\n\t&=\\frac{\\mu_{0}}{6 \\pi c} \\frac{\\left(Q_{0} d\\right)^{2}}{(R C)^{4}} \\frac{R C}{2}=\\frac{\\mu_{0}}{12 \\pi c} \\frac{\\left(Q_{0} d\\right)^{2}}{(R C)^{3}}\n\t\\end{aligned}\n\t$$\n\\end{answer}\n\\begin{exercise}\n An electron is released from rest and falls under the influence of gravity. In the first centimeter, what fraction of the potential energy lost is radiated away?\n\\end{exercise}\n\\begin{answer}\nLet the $y$ direction point downwards; then\\\\\n$\\mathbf{p}=-e y \\hat{\\mathbf{y}}, y=\\frac{1}{2} g t^{2}, \\text { so } \\mathbf{p}=-\\frac{1}{2} g e t^{2} \\hat{\\mathbf{y}} ; \\ddot{\\mathbf{p}}=-g e \\hat{\\mathbf{y}}$\\\\\nHence the power radiated is\n$$\nP=\\frac{\\mu_{0}}{6 \\pi c}(g e)^{2}\n$$\nNow the time the charge takes to fall a distance $h$ is given by $h=\\frac{1}{2} g t^{2} \\Rightarrow t=\\sqrt{2 h / g}$, so the energy radiated in falling a distance $h$ is\n$$\nU_{\\mathrm{rad}}=P t=\\frac{\\mu_{0}(g e)^{2}}{6 \\pi c} \\sqrt{2 h / g}\n$$\nThe potential energy lost is $U_{\\text {pot }}=m g h$. So the fraction is\n$$\n\\begin{aligned}\nf &=\\frac{U_{\\mathrm{rad}}}{U_{\\mathrm{pot}}}=\\frac{\\mu_{0} g^{2} e^{2}}{6 \\pi c} \\sqrt{\\frac{2 h}{g}} \\frac{1}{m g h}=\\frac{\\mu_{0} e^{2}}{6 \\pi m c} \\sqrt{\\frac{2 g}{h}} \\\\\n&=\\frac{\\left(4 \\pi \\times 10^{-7}\\right)\\left(1.6 \\times 10^{-19}\\right)^{2}}{6 \\pi\\left(9.11 \\times 10^{-31}\\right)\\left(3 \\times 10^{8}\\right)} \\sqrt{\\frac{(2)(9.8)}{(0.01)}}=2.76 \\times 10^{-22}\n\\end{aligned}\n$$\n\\end{answer}\n\\begin{exercise}\n\t In Bohr's theory of hydrogen, the electron in its ground state was supposed to travel in a circle of radius $5 \\times 10^{-11} \\mathrm{~m}$, held in orbit by the Coulomb attraction of the proton. According to classical electrodynamics, this electron should radiate, and hence spiral in to the nucleus. Show that $v \\ll c$ for most of the trip (so you can use the Larmor formula), and calculate the lifespan of Bohr's atom. 
(Assume each revolution is essentially circular.)\n\\end{exercise}\n\\begin{answer}\n$$\nF=\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{r^{2}}=m a=m \\frac{v^{2}}{r} \\Rightarrow v=\\sqrt{\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{m r}}\n$$\nAt the beginning $\\left(r_{0}=0.5\\,\\text{\\AA}\\right)$\n$$\n\\begin{aligned}\n\\frac{v}{c} &=\\left[\\frac{\\left(1.6 \\times 10^{-19}\\right)^{2}}{4 \\pi\\left(8.85 \\times 10^{-12}\\right)\\left(9.11 \\times 10^{-31}\\right)\\left(5 \\times 10^{-11}\\right)}\\right]^{\\frac{1}{2}} \\frac{1}{3 \\times 10^{8}} \\\\\n&=0.0075\n\\end{aligned}\n$$\nand when the radius is one hundredth of this $v / c$ is only 10 times greater $(0.075)$, so for most of the trip the velocity is safely nonrelativistic. From the Larmor formula,\n$$P=\\frac{\\mu_{0} q^{2}}{6 \\pi c}\\left(\\frac{v^{2}}{r}\\right)^{2}=\\frac{\\mu_{0} q^{2}}{6 \\pi c}\\left(\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{m r^{2}}\\right)^{2}$$\nsince $a=v^{2} / r$, and $P=-d U / d t$ where $U$ is the (total) energy of the electron\n$$\n\\begin{aligned}\n\tU &=U_{\\mathrm{kin}}+U_{\\mathrm{pot}}=\\frac{1}{2} m v^{2}-\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{r} \\\\\n\t&=\\frac{1}{2}\\left(\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{r}\\right)-\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{r} \\\\\n\t&=-\\frac{1}{8 \\pi \\epsilon_{0}} \\frac{q^{2}}{r}\n\\end{aligned}$$\nSo\n$$\n-\\frac{d U}{d t}=-\\frac{1}{8 \\pi \\epsilon_{0}} \\frac{q^{2}}{r^{2}} \\frac{d r}{d t}=P=\\frac{q^{2}}{6 \\pi \\epsilon_{0} c^{3}}\\left(\\frac{1}{4 \\pi \\epsilon_{0}} \\frac{q^{2}}{m r^{2}}\\right)^{2}\n$$\nand hence\n$$\n\\frac{d r}{d t}=-\\frac{1}{3 c}\\left(\\frac{q^{2}}{2 \\pi \\epsilon_{0} m c}\\right)^{2} \\frac{1}{r^{2}}\n$$\nor\n$$\n\\begin{aligned}\nd t &=-3 c\\left(\\frac{2 \\pi \\epsilon_{0} m c}{q^{2}}\\right)^{2} r^{2} d r \\Rightarrow t \\\\\n&=-3 c\\left(\\frac{2 \\pi \\epsilon_{0} m c}{q^{2}}\\right)^{2} \\int_{r_{0}}^{0} r^{2} d r \\\\\n&=c\\left(\\frac{2 \\pi \\epsilon_{0} m c}{q^{2}}\\right)^{2} r_{0}^{3}\n\\end{aligned}\n$$\nor\n$$\n\\begin{aligned}\nt=&\\left(3 \\times 10^{8}\\right) \\times \\\\\n&\\left[\\frac{2 \\pi\\left(8.85 \\times 10^{-12}\\right)\\left(9.11 \\times 10^{-31}\\right)\\left(3 \\times 10^{8}\\right)}{\\left(1.6 \\times 10^{-19}\\right)^{2}}\\right]^{2}\\left(5 \\times 10^{-11}\\right)^{3} \\\\\n=& 1.3 \\times 10^{-11} \\mathrm{~s}\n\\end{aligned}\n$$\t\n\\end{answer}\n\n\\section{Lagrangian and Hamiltonian of a charged particle in an electromagnetic field}\nThe force experienced by a particle of charge $q$ at rest in an electric field of intensity $E$ is given by\n$$F_1=qE$$\nThe force experienced by a moving charge $q$ in a magnetic field $B$ is given by\\\\\n$$F_2=q(v\\times B)$$\nwhere $v$ is the velocity of the particle. The direction of $F_2$ is perpendicular to both $v$ and $B$.\\\\\nThe total force on a charged particle of charge $q$ is the sum of $F_1$ and $F_2$:\\\\\n$$F=F_1+F_2=qE+q(v\\times B)$$\nThe above equation is known as the Lorentz force law.\\\\\nMaxwell's equations are given by\\\\\n$\\nabla \\cdot E=\\frac{\\rho}{\\epsilon_0}$\\\\\n$\\nabla \\cdot B=0$\\\\\n$\\nabla \\times E=-\\frac{\\partial B}{\\partial t}$\\\\\n$\\nabla \\times B=\\mu_0(J+\\epsilon_0\\frac{\\partial E}{\\partial t})$\\\\\nWe know that if $\\nabla \\cdot B=0$, then $B$ can be expressed as the curl of some vector $A$:\\\\\n$$B=\\nabla \\times A$$\nSubstitute this equation in $\\nabla \\times E=-\\frac{\\partial B}{\\partial t}$\\\\\n$\\nabla \\times E=-\\frac{\\partial 
B}{\\partial t}=-\\frac{\\partial}{\\partial t}(\\nabla \\times A)=-\\nabla \\times \\frac{\\partial A}{\\partial t}$\\\\\n$\\therefore E=-\\frac{\\partial A}{\\partial t}-\\nabla s$, where $s$ is a scalar function (this works because the curl of a gradient is always zero)\\\\\nThe Lorentz force in terms of the scalar potential $s$ and vector potential $A$ is given by\\\\\n$$F=q\\left[ E+v\\times B\\right] $$\n$$F=q\\left[ -\\frac{\\partial A}{\\partial t}-\\nabla s+(v\\times \\nabla \\times A)\\right] $$\nLet us consider the last part $v\\times(\\nabla \\times A)$\\\\\n$$v\\times(\\nabla \\times A)=\\nabla (A\\cdot v)-(v\\cdot \\nabla)A$$\nAlso $A=A(x,y,z,t)$\\\\\nTherefore the total time derivative of $A$ is given by\\\\\n$\\frac{dA}{dt}=\\frac{\\partial A}{\\partial x} \\dot{x}+\\frac{\\partial A}{\\partial y} \\dot{y}+\\frac{\\partial A}{\\partial z} \\dot{z}+\\frac{\\partial A}{\\partial t}=\\frac{\\partial A}{\\partial x} v_x+\\frac{\\partial A}{\\partial y} v_y+\\frac{\\partial A}{\\partial z} v_z+\\frac{\\partial A}{\\partial t}$\\\\\n$=(\\hat{i}v_x+\\hat{j}v_y+\\hat{k}v_z)\\cdot \\left( \\hat{i}\\frac{\\partial A}{\\partial x}+\\hat{j}\\frac{\\partial A}{\\partial y}+\\hat{k}\\frac{\\partial A}{\\partial z}\\right) +\\frac{\\partial A}{\\partial t}=v \\cdot \\nabla \\mathbf{A}+\\frac{\\partial \\mathbf{A}}{\\partial t}$ \\\\\n$$v \\cdot\\nabla \\mathbf{A}=\\frac{d \\mathbf{A}}{d t}-\\frac{\\partial \\mathbf{A}}{\\partial t}$$\nSubstituting this value in \n$$v\\times(\\nabla \\times A)=\\nabla (A\\cdot v)-(v\\cdot \\nabla)A$$\nwe will get \\\\\n$$\\mathrm{v} \\times(\\nabla \\times \\mathbf{A})=\\nabla(\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d \\mathbf{A}}{d t}+\\frac{\\partial \\mathbf{A}}{\\partial t}$$\nThen F becomes\\\\\n$$F=q\\left[ -\\frac{\\partial A}{\\partial t}-\\nabla s+(v\\times \\nabla \\times A)\\right] $$\n$$\\begin{aligned}\n\\mathbf{F} &=q\\left[-\\frac{\\partial \\mathbf{A}}{\\partial t}-\\nabla s+\\left\\{\\nabla(\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d \\mathbf{A}}{d t}+\\frac{\\partial \\mathbf{A}}{\\partial t}\\right\\}\\right] \\\\\n&=q\\left[-\\nabla s+\\left\\{\\nabla(\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d \\mathbf{A}}{d t}\\right\\}\\right] \\\\\n&=q\\left[-\\nabla(s-\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d \\mathbf{A}}{d t}\\right]\n\\end{aligned}$$\nThe $x$-component of the force is given by\n$$\n\\begin{aligned}\nF_{x} &=q\\left[\\{-\\nabla(s-\\mathbf{A} \\cdot \\mathrm{v})\\}_{x}-\\frac{d \\mathbf{A}_{x}}{d t}\\right] \\\\\n&=q\\left[-\\frac{\\partial}{\\partial x}(s-\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d \\mathbf{A}_{x}}{d t}\\right] \\\\\n\\text { or } \\quad F_{x} &=q\\left[-\\frac{\\partial}{\\partial x}(s-\\mathbf{A} \\cdot \\mathrm{v})-\\frac{d}{d t}\\left\\{\\frac{\\partial}{\\partial \\mathrm{v}_{x}}(\\mathbf{A} \\cdot \\mathrm{v})\\right\\}\\right]\n\\end{aligned}\n$$\n$q s \\text { is independent of velocity } \\mathrm{v}_{x} \\text { i.e. 
} \\frac{\\partial(q s)}{\\partial \\mathrm{v}_{x}}=0$\nSo we can add that term to the above equation; it will not make any change.\\\\\nAfter rewriting,\\\\\n$$\\begin{aligned}\n&F_{x}=-\\frac{\\partial U}{\\partial x}-\\frac{d}{d t} \\frac{\\partial U}{\\partial \\mathrm{v}_{\\mathrm{x}}} \\\\\n&U=q s-q(\\mathbf{A} \\cdot \\mathbf{v})\n\\end{aligned}$$\nFrom this it is clear that $U$ is a function of $x$ and $v$, i.e., of $q_k$ and $\\dot{q}_k$.\nHere $U$ is called a generalized potential or velocity-dependent potential.\\\\\n\\paragraph{Lagrangian }\n$L=T-U=T-\\{q s-q(\\mathbf{A} \\cdot \\mathbf{v})\\}=T-q s+q(\\mathbf{A} \\cdot \\mathbf{v})$\\\\\nwhere $T$ is the kinetic energy, $T=\\frac{1}{2}mv^2$:\n$$L=\\frac{1}{2}mv^2-q s+q(\\mathbf{A} \\cdot \\mathbf{v})$$\n\\textbf{Momentum} $p_k=\\frac{\\partial L}{\\partial \\dot{q}_k}=\\frac{\\partial L}{\\partial v_k}=mv_k+qA_k$, i.e., $\\mathbf{p}=m\\mathbf{v}+q\\mathbf{A}$\\\\\n\\paragraph{Hamiltonian}\n$$\\begin{aligned}\nH &=\\sum_{k} p_{k} \\dot{q}_{k}-L \\\\\n&=\\sum_{k} p_{k} \\dot{r}_{k}-L \\\\\n&=\\sum_{k} p_{k} v_{k}-L \\\\\n&=\\sum_{k}\\left(m v_{k}+q A_{k}\\right) v_{k}-\\left[\\frac{1}{2} m v^{2}-q s+q(\\mathbf{v} \\cdot \\mathbf{A})\\right] \\\\\n&=\\sum_{k}\\left(m v_{k}^{2}+q A_{k} v_{k}\\right)-\\frac{1}{2} m v^{2}+q s-q(\\mathbf{v} \\cdot \\mathbf{A}) \\\\\n&=m v^{2}+q(\\mathbf{A} \\cdot \\mathbf{v})-\\frac{1}{2} m v^{2}+q s-q(\\mathbf{v} \\cdot \\mathbf{A}) \\\\\n&=\\frac{1}{2} m v^{2}+q s\n\\end{aligned}$$\n\n\\newpage\n\\begin{abox}\n\tPrevious Year solutions\n\\end{abox}\n\\begin{enumerate}\n\t\\begin{minipage}{\\textwidth}\n\t\t\\item An electron is decelerated at a constant rate starting from an initial velocity $u$ (where $u \\ll c$) to $u / 2$ during which it travels a distance $s$. The fraction of the initial energy lost to radiation is\n\t\t\\exyear{NET 2017}\n\t\\end{minipage}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $\\frac{\\mu_{0} e^{2} u^{2}}{3 \\pi m c^{2} s}$\n\t\t\\task[\\textbf{B.}]$\\frac{\\mu_{0} e^{2} u^{2}}{6 \\pi m c^{2} s}$\n\t\t\\task[\\textbf{C.}]$\\frac{\\mu_{0} e^{2} u}{8 \\pi m c s}$\n\t\t\\task[\\textbf{D.}]$\\frac{\\mu_{0} e^{2} u}{16 \\pi m c s}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t$\\text { Total power radiated } P=\\frac{\\mu_{0} q^{2} a^{2}}{6 \\pi c}$\\\\\n\t\t$\\text { Total energy radiated in time } t \\text { is } E=P \\cdot t=\\frac{\\mu_{0} e^{2} a^{2}}{6 \\pi c} \\cdot t=\\frac{\\mu_{0} e^{2} a^{2}}{6 \\pi c} \\times \\frac{u}{2 a}$\\\\\n\t\t$$\\begin{aligned}\n\t\t&{\\left[\\because v=u-a t \\Rightarrow \\frac{u}{2}=u-a t \\Rightarrow t=\\frac{u}{2 a}\\right]} \\\\\n\t\t&\\Rightarrow E=\\frac{\\mu_{0} e^{2} a u}{12 \\pi c}\n\t\t\\end{aligned}$$\n\t\tFraction of initial $K . 
E .$ lost due to radiation $=\\frac{E}{\\frac{1}{2} m u^{2}}=\\frac{2 E}{m u^{2}}$\n\t\t$$\n\t\t\\begin{aligned}\n\t\t&=\\frac{2}{m u^{2}} \\times \\frac{\\mu_{0} e^{2} a u}{12 \\pi c}=\\frac{\\mu_{0} e^{2} a}{6 \\pi m c u} \\\\\n\t\t&{\\left[\\therefore s=u t-\\frac{1}{2} a t^{2}=u \\times \\frac{u}{2 a}-\\frac{1}{2} a \\times \\frac{u^{2}}{4 a^{2}}=\\frac{u^{2}}{2 a}-\\frac{u^{2}}{8 a}=\\frac{3 u^{2}}{8 a} \\Rightarrow a=\\frac{3 u^{2}}{8 s}\\right]} \\\\\n\t\t&=\\frac{\\mu_{0} e^{2}}{6 \\pi m c u} \\times \\frac{3 u^{2}}{8 s}=\\frac{\\mu_{0} e^{2} u}{16 \\pi m c s}\n\t\t\\end{aligned}\n\t\t$$\t\n\t\\end{answer}\n\\end{enumerate}\n\n \n", "meta": {"hexsha": "02b6286c64e4a4506ee96264118e90054549d33f", "size": 18988, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Electrodynamics- CSIR/chapter/Dipole radiation.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Electrodynamics- CSIR/chapter/Dipole radiation.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Electrodynamics- CSIR/chapter/Dipole radiation.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.3375, "max_line_length": 783, "alphanum_fraction": 0.6386138614, "num_tokens": 7265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706735, "lm_q2_score": 0.6893056231680122, "lm_q1q2_score": 0.569857088392231}}
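The numerical answers in the exercises above are easy to double-check; a short Python sketch (SI constants rounded exactly as in the worked answers):
\\begin{lstlisting}
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability
eps0 = 8.85e-12            # vacuum permittivity
e = 1.6e-19                # elementary charge
m = 9.11e-31               # electron mass
c = 3e8                    # speed of light
g = 9.8                    # gravitational acceleration

# Falling electron: fraction of potential energy radiated over h = 1 cm
h = 0.01
f = mu0 * e**2 / (6 * math.pi * m * c) * math.sqrt(2 * g / h)
print(f)   # ~2.76e-22

# Bohr atom: classical inspiral time from r0 = 5e-11 m
r0 = 5e-11
t = c * (2 * math.pi * eps0 * m * c / e**2) ** 2 * r0**3
print(t)   # ~1.3e-11 s
\\end{lstlisting}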
{"text": "%\n%% Author: Jeffrey Leung\n%% Last edited: 2015-07-09\n%%\n%% This document contains section 4 (Boolean Algebra) of a course overview of CMPT 150.\n%\n\n\\section{Boolean Algebra}\n\t\\subsection{Introduction}\n\t\t\\begin{easylist}[itemize]\n\n\t\t\t& A specific input creates a specific output\n\t\t\t& Values can be 0 (false) or 1 (true)\n\t\t\t& Boolean expressions:\n\t\t\t\t&& Evaluate to 0 or 1\n\t\t\t\t&& Function table: Representation of all possible inputs and their corresponding outputs displayed in a table\n\t\t\t\t\n\t\t\t\t\t\\Deactivate\n\t\t\t\t\t\\begin{table}[!htb]\n\t\t\t\t\t\t\\centering\n\t\t\t\t\t\t\\caption{Example of a function table}\n\t\t\t\t\t\t\\begin{tabular}{ c c | c }\n\t\t\t\t\t\t\t$I_{2}$ & $I_{1}$ & Sum \\\\\n\t\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t\t0 & 0 &  0 \\\\\n\t\t\t\t\t\t\t0 & 1 &  1 \\\\\n\t\t\t\t\t\t\t0 & 2 &  2 \\\\\n\t\t\t\t\t\t\t\\vdots & \\vdots & \\vdots \\\\\n\t\t\t\t\t\t\t9 & 8 & 17 \\\\\n\t\t\t\t\t\t\t9 & 9 & 18\n\t\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\\end{table}\n\t\t\t\t\t\n\t\t\t%TODO CMPT 150 notes, page 16\n\n\t\t\\end{easylist}", "meta": {"hexsha": "d5c56428159ec59a0237a9864fc69285314df904", "size": 904, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cmpt-150-introduction-to-computer-design_partial/tex/section_4.tex", "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_issues_repo_path": "cmpt-150-introduction-to-computer-design_partial/tex/section_4.tex", "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmpt-150-introduction-to-computer-design_partial/tex/section_4.tex", "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "avg_line_length": 25.1111111111, "max_line_length": 113, "alphanum_fraction": 0.5796460177, "num_tokens": 304, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5698570877828438}}
{"text": "\\documentclass{article}\n\\usepackage{lmodern}\n\\usepackage{amsmath}\n\\usepackage{systeme}\n\\usepackage{amssymb}\n\\usepackage{listings}\n\\usepackage[T1]{fontenc}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{Anirudhan J. Rajagopalan --- ajr619 --- N18824115}\n\n\\begin{document}\n\n\\title{Web Search Engines --- Problem Set 2}\n\\date{March 7, 2016}\n\\author{Anirudhan J. Rajagopalan\\\\ N18824115\\\\ ajr619}\n\\maketitle\n\\newpage\n\\section[A]{Problem 1}\nGiven term-document matrix:\n\\[\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Doc1}\n & \\multicolumn{1}{c}{Doc2}\n & \\multicolumn{1}{c}{Doc3}\n & \\multicolumn{1}{c}{Doc4} \\\\\n\\cline{2-5}\nWalrus & 10 & 0 & 0 & 10 \\\\\n\\cline{2-5}\nCarpenter & 8 & 0 & 40 & 0 \\\\\n\\cline{2-5}\nBread & 4 & 24 & 0 & 20 \\\\\n\\cline{2-5}\nButter & 1 & 16 & 0 & 0 \\\\\n\\cline{2-5}\n\\end{tabular}\n\\]\n\n\\[ w(t,d) = \\begin{cases} \n      1 + \\log_2 f(t,d) & if f(t,d) > 0 \\\\\n      0 & if f(t,d) = 0 \\\\\n   \\end{cases}\n\\]\n\n\\[ i(t) = 1 + \\log_2(c/o(t))\\]\n\\[ \\vec{d} = w(t,d) * i(t) \\]\n\\textbf{Calculating $f(t,d)$, $w(t,d)$, $\\vec{d}$ for each of the terms given -}\n\n\\textbf{Walrus}\n$o(t) = 2$, $c = 4$, $i(t) = 2$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{d}$} \\\\\n\\cline{2-4}\nDoc1 & 10 & 4.32 & 8.64 \\\\\n\\cline{2-4}\nDoc2 & 0 & 0 & 0 \\\\\n\\cline{2-4}\nDoc3 & 0 & 0 & 0 \\\\\n\\cline{2-4}\nDoc4 & 10 & 4.32 & 8.64 \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\vspace{5mm}\n\\textbf{Carpenter}\n$o(t) = 2$, $c = 4$, $i(t) = 2$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{d}$} \\\\\n\\cline{2-4}\nDoc1 & 8 & 4 & 8 \\\\\n\\cline{2-4}\nDoc2 & 0 & 0 & 0 \\\\\n\\cline{2-4}\nDoc3 & 40 & 6.32 & 12.64 \\\\\n\\cline{2-4}\nDoc4 & 0 & 0 & 0 \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\vspace{5mm}\n\\textbf{Bread}\n$o(t) = 3$, $c = 4$, $i(t) = \\log_2(\\frac{4}{3}) + 1$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{d}$} \\\\\n\\cline{2-4}\nDoc1 & 4 & 3 & 5.656 \\\\\n\\cline{2-4}\nDoc2 & 24 & 5.58 & 7.89 \\\\\n\\cline{2-4}\nDoc3 & 0 & 0 & 0 \\\\\n\\cline{2-4}\nDoc4 & 20 & 5.32 & 7.52 \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\vspace{5mm}\n\\textbf{Butter}\n$o(t) = 2$, $c = 4$, $i(t) = 2$\n\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{d}$} \\\\\n\\cline{2-4}\nDoc1 & 1 & 1 & 2 \\\\\n\\cline{2-4}\nDoc2 & 16 & 5 & 10 \\\\\n\\cline{2-4}\nDoc3 & 0 & 0 & 0 \\\\\n\\cline{2-4}\nDoc4 & 0 & 0 & 0 \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\vspace{5mm}\n\\textbf{So, the Document vectors with each of these terms as a dimension is as follows:}\n\n\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Doc1}\n & \\multicolumn{1}{c}{Doc2}\n & \\multicolumn{1}{c}{Doc3}\n & \\multicolumn{1}{c}{Doc4} \\\\\n\\cline{2-5}\nWalrus & 8.64 & 0 & 0 & 8.64 \\\\\n\\cline{2-5}\nCarpenter & 8 & 0 & 12.64 & 0 \\\\\n\\cline{2-5}\nBread & 5.65 & 7.89 & 0 & 7.52 \\\\\n\\cline{2-5}\nButter & 2 & 10 & 0 & 0 \\\\\n\\cline{2-5}\n\\end{tabular}\n\n\\vspace{5mm}\n\\textbf{Normalized document vector is as follows-}\n\n\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Doc1}\n & 
\\multicolumn{1}{c}{Doc2}\n & \\multicolumn{1}{c}{Doc3}\n & \\multicolumn{1}{c}{Doc4} \\\\\n\\cline{2-5}\nWalrus & 0.654 & 0 & 0 & 0.754 \\\\\n\\cline{2-5}\nCarpenter & 0.605 & 0 & 1 & 0 \\\\\n\\cline{2-5}\nBread & 0.427 & 0.619 & 0 & 0.656 \\\\\n\\cline{2-5}\nButter & 0.151 & 0.785 & 0 & 0 \\\\\n\\cline{2-5}\n\\end{tabular}\n\n\\subsection{Query --- Document Rankings}\n\n\\subsubsection{Query --- ``Walrus''}\n$\\vec{q} = <1,0,0,0>$\n  \\begin{tabular}{r c c}\n  \\multicolumn{1}{r}{}\n   & \\multicolumn{1}{c}{$sim(\\vec{d},\\vec{q})$}\n   & \\multicolumn{1}{c}{Rank} \\\\\n  \\cline{2-3}\n  Doc1 & 0.654 & 2 \\\\\n  \\cline{2-3}\n  Doc2 & 0 & 3 \\\\\n  \\cline{2-3}\n  Doc3 & 0 & 3 \\\\\n  \\cline{2-3}\n  Doc4 & 0.754 & 1 \\\\\n  \\cline{2-3}\n  \\end{tabular}\n\n\\subsubsection{Query --- ``Walrus Carpenter''}\n$\\vec{q}$ = <0.707,0.707,0,0>\n\\begin{tabular}{r c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$sim(\\vec{d},\\vec{q})$}\n & \\multicolumn{1}{c}{Rank} \\\\\n\\cline{2-3}\nDoc1 & 0.89 & 1 \\\\\n\\cline{2-3}\nDoc2 & 0 & 4 \\\\\n\\cline{2-3}\nDoc3 & 0.707 & 2 \\\\\n\\cline{2-3}\nDoc4 & 0.533 & 3 \\\\\n\\cline{2-3}\n\\end{tabular}\n\n\\subsubsection{Query --- ``Walrus Bread Butter''}\n$\\vec{q}$ = <0.57,0,0.57,0.57>\n\\begin{tabular}{r c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$sim(\\vec{d},\\vec{q})$}\n & \\multicolumn{1}{c}{Rank} \\\\\n\\cline{2-3}\nDoc1 & 0.702 & 3 \\\\\n\\cline{2-3}\nDoc2 & 0.800 & 2 \\\\\n\\cline{2-3}\nDoc3 & 0 & 4 \\\\\n\\cline{2-3}\nDoc4 & 0.803 & 1 \\\\\n\\cline{2-3}\n\\end{tabular}\n\n\\section[B]{Problem 2}\n\\subsection{Document Similarity}\n\\begin{align*}\n  sim(\\vec{d_1},\\vec{d_2}) =& 0.427*0.619 \\\\\n  =& 0.264 \\\\\n  sim(\\vec{d_1},\\vec{d_3}) =& 0.605 * 1\\\\\n  =& 0.605 \\\\\n  sim(\\vec{d_1},\\vec{d_4}) =& 0.654*0.754 + 0.427*0.656 \\\\\n  =& 0.773 \\\\\n\\end{align*}\n\n\\subsection{Word Similarity}\n\\subsubsection*{Doc1}\n$o(t) = 4$, $c = 4$, $i(t) = 1$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{w}$} \\\\\n\\cline{2-4}\nWalrus & 10 & 4.32 & 4.32 \\\\\n\\cline{2-4}\nCarpenter & 8 & 4 & 4 \\\\\n\\cline{2-4}\nBread & 4 & 3 & 3 \\\\\n\\cline{2-4}\nButter & 1 & 1 & 1 \\\\\n\\cline{2-4}\n\\end{tabular} \\\\\n\n\n\\subsubsection*{Doc2}\n$o(t) = 2$, $c = 4$, $i(t) = 2$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{w}$} \\\\\n\\cline{2-4}\nWalrus & 0 & 0 & 0 \\\\\n\\cline{2-4}\nCarpenter & 0 & 0 & 0 \\\\\n\\cline{2-4}\nBread & 24 & 5.58 & 11.16 \\\\\n\\cline{2-4}\nButter & 16 & 5 & 10 \\\\\n\\cline{2-4}\n\\end{tabular} \\\\\n\n\\subsubsection*{Doc3}\n$o(t) = 1$, $c = 4$, $i(t) = 3$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{w}$} \\\\\n\\cline{2-4}\nWalrus & 0 & 0 & 0 \\\\\n\\cline{2-4}\nCarpenter & 40 & 6.32 & 18.96 \\\\\n\\cline{2-4}\nBread & 0 & 0 & 0 \\\\\n\\cline{2-4}\nButter & 0 & 0 & 0 \\\\\n\\cline{2-4}\n\\end{tabular} \\\\\n\n\\subsubsection*{Doc4}\n$o(t) = 2$, $c = 4$, $i(t) = 2$\n\n\\begin{tabular}{r c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{$f(t,d)$}\n & \\multicolumn{1}{c}{$w(t,d)$}\n & \\multicolumn{1}{c}{$\\vec{w}$} \\\\\n\\cline{2-4}\nWalrus & 10 & 4.32 & 8.64 \\\\\n\\cline{2-4}\nCarpenter & 0 & 0 & 0 \\\\\n\\cline{2-4}\nBread & 20 & 5.32 & 10.64 \\\\\n\\cline{2-4}\nButter & 0 & 0 & 0 
\\\\\n\\cline{2-4}\n\\end{tabular} \\\\\n\nThe cumulative word-document matrix is as follows:\n\n\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Walrus}\n & \\multicolumn{1}{c}{Carpenter}\n & \\multicolumn{1}{c}{Bread}\n & \\multicolumn{1}{c}{Butter} \\\\\n\\cline{2-5}\nDoc1 & 4.32 & 4 & 3 & 1 \\\\\n\\cline{2-5}\nDoc2 & 0 & 0 & 11.16 & 10 \\\\\n\\cline{2-5}\nDoc3 & 0 & 18.96 & 0 & 0 \\\\\n\\cline{2-5}\nDoc4 & 8.64 & 0 & 10.64 & 0 \\\\\n\\cline{2-5}\n\\end{tabular} \\\\\n\nThe normalized word vectors are as follows: \\\\\n\n\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Walrus}\n & \\multicolumn{1}{c}{Carpenter}\n & \\multicolumn{1}{c}{Bread}\n & \\multicolumn{1}{c}{Butter} \\\\\n\\cline{2-5}\nDoc1 & 0.447 & 0.206 & 0.184 & 0.01 \\\\\n\\cline{2-5}\nDoc2 & 0 & 0 & 0.685 & 0.996 \\\\\n\\cline{2-5}\nDoc3 & 0 & 0.978 & 0 & 0 \\\\\n\\cline{2-5}\nDoc4 & 0.014 & 0 & 0.653 & 0 \\\\\n\\cline{2-5}\n\\end{tabular} \\\\\n\n\\subsubsection*{Similarity of the word bread with other words}\n\\begin{align*}\n  sim(\\vec{bread},\\vec{walrus}) =& 0.447*0.184 + 0.014 * 0.653\\\\\n  =& 0.091 \\\\ \n  sim(\\vec{bread},\\vec{Carpenter}) =& 0.206 * 0.184 \\\\\n  =& 0.037 \\\\\n  sim(\\vec{bread},\\vec{Butter}) =& 0.184*0.099 + 0.685 * 0.996 \\\\\n  =& 0.700 \\\\\n\\end{align*}\n\n\\section[C]{Problem 3}\n\\subsection{\\textbf{Property A\\@: Invariance under irrelevant words}}\nThe similarity measure is given by the formula:\n\\begin{align*}\n  sim(\\vec{d},\\vec{q}) = \\frac{\\vec{d}\\cdot\\vec{q}}{\\vert d \\vert \\vert q \\vert}\n\\end{align*}\n\nIf the documents contain different sets of words with different weights, then the document vectors represented by `d' and `e' will be different.\nSince the similarity is the dot product of the normalized document vector and the query vector, we can get different similarities for the two documents even though $f(t,d) = f(t,e)$ for every query term $t$.\nIn this case this property will not hold.\n\n\\subsubsection{Example}\nLet the query term be ``bing'' for the term-document matrix given in Table~\\ref{tab:inv}. 
The search term has the same $f(t,d)$ for documents Doc1 and Doc4:\n\n\\begin{table}\n  \\caption{Invariance under irrelevant words \\label{tab:inv}}\n  \\centering\n\\begin{tabular}{r c c c c}\n  \\multicolumn{1}{r}{}\n   & \\multicolumn{1}{c}{Doc1}\n   & \\multicolumn{1}{c}{Doc2}\n   & \\multicolumn{1}{c}{Doc3}\n   & \\multicolumn{1}{c}{Doc4} \\\\\n  \\cline{2-5}\n  bing & 4 & 0 & 0 & 4 \\\\\n  \\cline{2-5}\n  chandler & 17 & 0 & 17 & 0 \\\\\n  \\cline{2-5}\n  monica & 5 & 5 & 5 & 15 \\\\\n  \\cline{2-5}\n  geller & 9 & 4 & 1 & 2 \\\\\n  \\cline{2-5}\n\\end{tabular}\n\\end{table}\nBut it is trivial to show that this property does not hold.\n\n\\subsection{\\textbf{Property B\\@: Invariance under scaling}}\nThis property holds true for the ranking algorithm in problem one.\nWhen a term occurs frequently across all documents or in the complete collection, taking the inverse document frequency helps in reducing those weights.\nEven though a higher weight is given to the dimensions of the more verbose document, they are penalized by a factor of $1 + \\log_2(\\frac{c}{o(t)})$.\n\n\\subsubsection{Example:}\nConsider the following term-document matrix, in which $f(t,Doc3) = 2 * f(t,Doc1)$ for every term $t$:\n\n\\begin{tabular}{r c c c c}\n\\multicolumn{1}{r}{}\n & \\multicolumn{1}{c}{Doc1}\n & \\multicolumn{1}{c}{Doc2}\n & \\multicolumn{1}{c}{Doc3}\n & \\multicolumn{1}{c}{Doc4} \\\\\n\\cline{2-5}\nbing & 1 & 0 & 2 & 1 \\\\\n\\cline{2-5}\nchandler & 2 & 0 & 4 & 3 \\\\\n\\cline{2-5}\nmonica & 3 & 8 & 6 & 6 \\\\\n\\cline{2-5}\ngeller & 4 & 0 & 8 & 10 \\\\\n\\cline{2-5}\n\\end{tabular}\nWhen we calculate similarities against Doc1 and Doc3, we will get the same values, since their normalized vectors are identical.\n\n\\subsection{\\textbf{Property C\\@: Order invariance under Collection}} \n\nThe ranking of a document depends on the tf-idf formulation, and the idf in turn depends on\n\\begin{enumerate}\n\t\\item Number of documents in each collection $(c)$\n\t\\item Number of documents in which each term is found. ($o(t)$)\n\\end{enumerate} \n\nIf the value of $ \\frac{c}{o(t)} $ increases, the ranking may be higher, and vice versa. Hence, this property does not hold at all times.\n\n\\section[D]{Problem 4}\n\\subsection{$N = 9$, $e = 0.3$, $f = 1 - e \\Rightarrow f = 1 - 0.3 \\Rightarrow f = 0.7$, $E = (e/N) \\Rightarrow E = 0.033$}\n\\[\n\\begin{array}{rcl}A & = & 0.033 + 0.7(0)\\\\ B & = & 0.033 + 0.7(A/4 + C/3) \\\\ C & = & 0.033 + 0.7(A/4 + I/2 + B/2) \\\\ D & = & 0.033 + 0.7(A/4 + H/1) \\\\ E & = &0.033 + 0.7(A/4+B/2 + C/3 + F/2 + D/2) \\\\ F & = & 0.033 + 0.7(C/3 + E/2) \\\\ G & = & 0.033 + 0.7(D/2) \\\\ H & = & 0.033 + 0.7(E/2 + G/1 + I/2) \\\\ I & = & 0.033 + 0.7(F/2)\n\\end{array}\n\\]\n\\subsection{\\textbf{Page Rank computation}}\n\\[\n\\begin{array}{lcl}Q & = &\n\\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.175 & 0 & 0.233 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.175 & 0.
35 & 0 & 0 & 0 & 0 & 0 & 0 & 0.35 \\\\ 0.175 & 0 & 0 & 0 & 0& 0 & 0 & 0.7 & 0 \\\\ 0.175 & 0.35 & 0.233 & 0.35 & 0 & 0.35 & 0 & 0 & 0 \\\\ 0 & 0 & 0.233 & 0 & 0.35 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0.35 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0.35 & 0 & 0.7 & 0 & 0.35 \\\\ 0 & 0 & 0 & 0 & 0 & 0.35 & 0 & 0 & 0\n\\end{bmatrix}\n\\end{array}\n\\]\n\nTo solve this system of equations, we represent it in the form $\\vec{c} = B\\vec{p}$\nTherefore the system of equations can be represented as:\n\\[\n\\begin{array}{rcl}A & = & 0.033 \\\\ -0.175A + B - 0.233C & = & 0.033 \\\\ -0.175A - 0.35B + C - 0.35I & = & 0.033 \\\\ -0.175A + D - 0.7H & = & 0.033 \\\\ -0.175A - 0.35B - 0.233C -0.35D + E - 0.35F & = &0.033 \\\\ -0.233C - 0.35E + F & = & 0.033 \\\\ -0.35D + G & = & 0.033 \\\\ -0.35E - 0.7G + H - 0.35I & = & 0.033 \\\\ -0.35F + I & = & 0.033 \n\\end{array}\n\\]\n\\begin{lstlisting}\nq = [0,0,0,0,0,0,0,0,0;\n0.175,0,0.233,0,0,0,0,0,0;\n0.175,0.35,0,0,0,0,0,0,0.35;\n0.175,0,0,0,0,0,0,0.7,0;\n0.175,0.35,0.233,0.35,0,0.35,0,0,0;\n0,0,0.233,0,0.35,0,0,0,0;\n0,0,0,0.35,0,0,0,0,0;\n0,0,0,0,0.35,0,0.7,0,0.35;\n0,0,0,0,0,0.35,0,0,0];\n\nb = eye(9) - q;\n\nc = ones(9,1);\n\nc = c * 0.033;\n\np = b \\ c;\n\np =\n\n    0.0330\n    0.0586\n    0.0849\n    0.1686\n    0.1784\n    0.1152\n    0.0920\n    0.1855\n    0.0733\n\\end{lstlisting}\n\n\\section[E]{Problem 5}\n\\subsection{$N = 9$, $e = 0.99$, $f = 1 - e \\Rightarrow f = 1 - 0.99 \\Rightarrow f = 0.01$, $E = (e/N) \\Rightarrow E = 0.11$}\n\\[\n\\begin{array}{rcl}A & = & 0.11 + 0.01(0)\\\\ B & = & 0.11 + 0.01(A/4 + C/3) \\\\ C & = & 0.11 + 0.01(A/4 + I/2 + B/2) \\\\ D & = & 0.11 + 0.01(A/4 + H/1) \\\\ E & = &0.11 + 0.01(A/4+B/2 + C/3 + F/2 + D/2) \\\\ F & = & 0.11 + 0.01(C/3 + E/2) \\\\ G & = & 0.11 + 0.01(D/2) \\\\ H & = & 0.11 + 0.01(E/2 + G/1 + I/2) \\\\ I & = & 0.11 + 0.01(F/2)\n\\end{array}\n\\]\n\\subsection{\\textbf{Page Rank Computation}}\n\\[\n\\begin{array}{lcl}Q & = &\n\\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.0025 & 0 & 0.0033 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.0025 & 0.005 & 0 & 0 & 0 & 0 & 0 & 0 & 0.005 \\\\ 0.0025 & 0 & 0 & 0 & 0& 0 & 0 & 0.01 & 0 \\\\ 0.0025 & 0.005 & 0.0033 & 0.005 & 0 & 0.005 & 0 & 0 & 0 \\\\ 0 & 0 & 0.0033 & 0 & 0.005 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0.005 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0.005 & 0 & 0.01 & 0 & 0.005 \\\\ 0 & 0 & 0 & 0 & 0 & 0.005 & 0 & 0 & 0\n\\end{bmatrix}\n\\end{array}\n\\]\n\nTo solve this system of equations, we represent it in the form $\\vec{c} = B\\vec{p}$\nTherefore the system of equations can be represented as:\n\\[\n\\begin{array}{rcl}A & = & 0.11 \\\\ -0.0025A + B - 0.0033C & = & 0.11 \\\\ -0.0025A - 0.005B + C - 0.005I & = & 0.11 \\\\ -0.0025A + D - 0.01H & = & 0.11 \\\\ -0.0025A - 0.005B - 0.0033C -0.005D + E - 0.005F & = &0.11 \\\\ -0.0033C - 0.005E + F & = & 0.11 \\\\ -0.005D + G & = & 0.11 \\\\ -0.005E - 0.01G + H - 0.005I & = & 0.11 \\\\ -0.005F + I & = & 0.11\n\\end{array}\n\\]\n\\begin{lstlisting}\nq = [0,0,0,0,0,0,0,0,0;\n0.0025,0,0.0033,0,0,0,0,0,0;\n0.0025,0.005,0,0,0,0,0,0,0.005;\n0.0025,0,0,0,0,0,0,0.01,0;\n0.0025,0.005,0.0033,0.005,0,0.005,0,0,0;\n0,0,0.0033,0,0.005,0,0,0,0;\n0,0,0,0.005,0,0,0,0,0;\n0,0,0,0,0.005,0,0.01,0,0.005;\n0,0,0,0,0,0.005,0,0,0];\n\nb = eye(9) - q;\n\nc = ones(9,1);\n\nc = c * 0.11;\n\np = b \\ c\n\np =\n\n    0.1100\n    0.1106\n    0.1114\n    0.1114\n    0.1123\n    0.1109\n    0.1106\n    0.1122\n    0.1106\n\\end{lstlisting}\n\n\\subsection{$N = 9$, $e = 0.01$, $f = 1 - e \\Rightarrow f = 1 - 0.01 \\Rightarrow f = 0.99$, 
$E = (e/N) \\Rightarrow E = 0.001$}\n\\[\n\\begin{array}{rcl}A & = & 0.001 + 0.99(0)\\\\ B & = & 0.001 + 0.99(A/4 + C/3) \\\\ C & = & 0.001 + 0.99(A/4 + I/2 + B/2) \\\\ D & = & 0.001 + 0.99(A/4 + H/1) \\\\ E & = &0.001 + 0.99(A/4+B/2 + C/3 + F/2 + D/2) \\\\ F & = & 0.001 + 0.99(C/3 + E/2) \\\\ G & = & 0.001 + 0.99(D/2) \\\\ H & = & 0.001 + 0.99(E/2 + G/1 + I/2) \\\\ I & = & 0.001 + 0.99(F/2)\n\\end{array}\n\\]\n\\subsection{\\textbf{Page Rank Computation}}\n\\[\n\\begin{array}{lcl}Q & = &\n\\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.2475 & 0 & 0.33 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0.2475 & 0.495 & 0 & 0 & 0 & 0 & 0 & 0 & 0.495 \\\\ 0.2475 & 0 & 0 & 0 & 0& 0 & 0 & 0.99 & 0 \\\\ 0.2475 & 0.495 & 0.33 & 0.495 & 0 & 0.495 & 0 & 0 & 0 \\\\ 0 & 0 & 0.33 & 0 & 0.495 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0.495 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0.495 & 0 & 0.99 & 0 & 0.495 \\\\ 0 & 0 & 0 & 0 & 0 & 0.495 & 0 & 0 & 0\n\\end{bmatrix}\n\\end{array}\n\\]\n\nTo solve this system of equations, we represent it in the form $\\vec{c} = B\\vec{p}$\nTherefore the system of equations can be represented as:\n\\[\n\\begin{array}{rcl}A & = & 0.001 \\\\ -0.2475A + B - 0.33C & = & 0.001 \\\\ -0.2475A - 0.495B + C - 0.495I & = & 0.001 \\\\ -0.2475A + D - 0.99H & = & 0.001 \\\\ -0.2475A - 0.495B - 0.33C -0.495D + E - 0.495F & = &0.001 \\\\ -0.33C - 0.495E + F & = & 0.001 \\\\ -0.495D + G & = & 0.001 \\\\ -0.495E - 0.99G + H - 0.495I & = & 0.001 \\\\ -0.495F + I & = & 0.001\n\\end{array}\n\\]\n\\begin{lstlisting}\nq = [0,0,0,0,0,0,0,0,0;\n0.2475,0,0.33,0,0,0,0,0,0;\n0.2475,0.495,0,0,0,0,0,0,0.495;\n0.2475,0,0,0,0,0,0,0.99,0;\n0.2475,0.495,0.33,0.495,0,0.495,0,0,0;\n0,0,0.33,0,0.495,0,0,0,0;\n0,0,0,0.495,0,0,0,0,0;\n0,0,0,0,0.495,0,0.99,0,0.495;\n0,0,0,0,0,0.495,0,0,0];\n\nb = eye(9) - q;\n\nc = ones(9,1);\n\nc = c * 0.001;\n\np = b \\ c\n\np =\n\n    1.0000\n   11.4695\n   30.9758\n  215.7781\n  171.5447\n   96.1366\n  107.8101\n  216.6975\n   48.5876\n\\end{lstlisting}\n\\end{document}\n", "meta": {"hexsha": "9987521741242a3e8e1cca333b78cdd1be165413", "size": 16265, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writeups/assignment2.tex", "max_stars_repo_name": "rajegannathan/Web-Search-Engines-Project", "max_stars_repo_head_hexsha": "8101f146b636bbec60d04da05bd5f5d186eda213", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writeups/assignment2.tex", "max_issues_repo_name": "rajegannathan/Web-Search-Engines-Project", "max_issues_repo_head_hexsha": "8101f146b636bbec60d04da05bd5f5d186eda213", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writeups/assignment2.tex", "max_forks_repo_name": "rajegannathan/Web-Search-Engines-Project", "max_forks_repo_head_hexsha": "8101f146b636bbec60d04da05bd5f5d186eda213", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.7559726962, "max_line_length": 425, "alphanum_fraction": 0.5436212727, "num_tokens": 8116, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5698570778391454}}
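The MATLAB/Octave listings above translate directly to Python; a sketch for the $e = 0.3$ system of Problem 4 (NumPy only, with $Q$ copied from the first listing):
\\begin{lstlisting}
import numpy as np

# Q for the e = 0.3 case, copied from the first listing above
Q = np.array([
    [0,     0,    0,     0,    0,    0,    0,   0,   0],
    [0.175, 0,    0.233, 0,    0,    0,    0,   0,   0],
    [0.175, 0.35, 0,     0,    0,    0,    0,   0,   0.35],
    [0.175, 0,    0,     0,    0,    0,    0,   0.7, 0],
    [0.175, 0.35, 0.233, 0.35, 0,    0.35, 0,   0,   0],
    [0,     0,    0.233, 0,    0.35, 0,    0,   0,   0],
    [0,     0,    0,     0.35, 0,    0,    0,   0,   0],
    [0,     0,    0,     0,    0.35, 0,    0.7, 0,   0.35],
    [0,     0,    0,     0,    0,    0.35, 0,   0,   0],
])

c = np.full(9, 0.033)                    # source term E = e/N
p = np.linalg.solve(np.eye(9) - Q, c)    # solve (I - Q) p = c
print(np.round(p, 4))                    # matches the MATLAB output above
\\end{lstlisting}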
{"text": "% ----- CHAPTER 4: AN ALGORITHM TO COMPUTE RANK ----- %\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Proof of the Main Theorem}\\label{sec:main_thrm_proof}\n\nIn this section we prove Theorem \\ref{thm:main_theorem}: specifically, that Algorithm \\ref{algo:compute_rank} is guaranteed, assuming BSD and ABC and optionally GRH, to correctly output an elliptic curve's rank in time $\\softO(\\sqrt{N_E})$, where $N_E$ is the conductor of the input curve. Note that the proof will quote certain results established later in this work. \\\\\n\nThe following precision theorem establishes how many bits precision are needed to provably determine if a given $L$-function Taylor coefficient is zero or not:\n\\begin{theorem}[BSD, ABC, (GRH)]\n\\label{prop:k_bits_for_leading_coeff}\nLet $E$ have $L$-function $\\Les$, conductor $N_E$ and real period $\\Omega_E$, and let\n\\begin{equation}\\label{eqn:num_bits_without_GRH}\nk = \\left\\lceil 34 + 3.86 \\log_2 N_E + \\log_2(\\Gamma(1.8 + 1.25\\log_2 N_E)) - \\log_2 \\Omega_E \\right\\rceil.\n\\end{equation}\nAssuming BSD and ABC, we have the following:\n\\begin{enumerate}\n\\item $k = O((\\log N_E)^{1+\\epsilon})$ for any $\\epsilon>0$.\n\\item If $L_E^{(m)}(1)=0$ for all $0 \\le m < n$ and $\\frac{L_E^{(n)}(1)}{n!}$ is zero to $k$ bits precision, then $L_E^{(n)}(1)$ is identically zero.\n\\end{enumerate}\n\\end{theorem}\nIf one further assumes GRH, we may instead let\n\\begin{equation}\\label{eqn:num_bits_with_GRH}\nk = \\left\\lceil 22 + 2.47 \\log_2 N_E + \\log_2(\\Gamma(1.25 + 0.87 \\log_2 N_E)) - \\log_2 \\Omega_E \\right\\rceil\n\\end{equation}\nfor the same results to hold.\n\\begin{proof}\nThe first statement follows from Theorem \\ref{thm:real_period_lower_bound}, which states that $\\Omega_E$ is bounded away from zero by a negative power of $N_E$. Also note that $\\Gamma(s) = O(e^{s \\log s})$, so \\\\\n$\\log_2(\\Gamma(1.8 + 1.25\\log_2 N_E)) = O(\\log N_E \\log\\log N_E)$, from which the result follows.\\\\\n\nTo prove the second statement, observe that by BSD the leading non-zero Taylor coefficient of $\\Les$ at the central point is given by Equation \\ref{eqn:BSD_formula}. We thus have that\n\\begin{equation}\nC_E\\pr \\ge \\frac{1}{256} \\cdot  \\Reg_E \\cdot\\, \\Omega_E,\n\\end{equation}\nsince $\\prod_p c_p \\ge 1$, $\\# \\Sha_E \\ge 1$, and by Mazur's Theorem $\\# E_{\\text{Tor}} \\le 16$. Thus\n\\begin{equation}\n\\log_2 C_E\\pr \\ge -16 + \\log_2 \\Reg_E + \\log_2 \\Omega_E.\n\\end{equation}\nNow by Theorem \\ref{thm:regulator_lower_bound} we have that $\\Reg_E \\ge 4.36 \\times 10^{-6} \\cdot (N_E)^{-3.86} \\cdot \\Gamma(1.8+0.25 \\log N_E)^{-1}$. Thus \n$\\log_2 \\Reg_E \\ge  -17.81 - 3.86 \\log_2 N_E - \\log_2(\\Gamma(1.8 + 1.25\\log_2 N_E))$, where we have changed the log inside the Gamma factor to base to for consistency with the rest of the logs). We therefore have that\n\\begin{equation}\n\\log_2 C_E\\pr \\ge -33.81 - 3.86 \\log_2 N_E - \\log_2(\\Gamma(1.8 + 1.25\\log_2 N_E)) + \\log_2 \\Omega_E > -k,\n\\end{equation}\nwhere $k$ is as defined above. Hence if the $n$th taylor coefficient of $\\Les$ at the central point is zero to $k$ bits precision and all preceding Taylor coefficients are zero, then it {\\it cannot} be the leading BSD coefficient, and so must be identically zero. \\\\\n\nIf we assume GRH, we instead have $\\Reg_E \\ge 2.11 \\times 10^{-2} \\cdot (N_E)^{-2.47} \\cdot \\Gamma(1.25+0.16 \\log N_E)^{-1}$. 
Repeat as before to obtain the required precision stated in Equation \ref{eqn:num_bits_with_GRH}.
\end{proof}

In other words, when $k$ is defined as above, the leading Taylor coefficient of $\Les$ at $s=1$ must be greater than $2^{-k}$ in magnitude. Note that the $-16$ appearing in the right hand side of the above inequalities comes from bounding the order of the torsion group of $E$; this constant can therefore be reduced or eliminated by computing the torsion order of $E$ explicitly, which is quick to do. This should therefore be done in any optimized implementation of Algorithm \ref{algo:compute_rank}. \\

We now prove {\bf Theorem \ref{thm:main_theorem}}: that, assuming BSD and ABC and optionally GRH, Algorithm \ref{algo:compute_rank} computes the rank of an elliptic curve in $\softO\left(\sqrt{N_E}\right)$ time.
\begin{proof}
Let $k$ be defined according to either Equation \ref{eqn:num_bits_without_GRH} or \ref{eqn:num_bits_with_GRH}, depending on whether GRH is assumed or not. That the algorithm terminates with correct output is a direct corollary of Theorem \ref{prop:k_bits_for_leading_coeff}: if $\frac{L_E^{(r)}(1)}{r!}$ is computed to $k$ bits precision and some of those bits are nonzero {\it and} all preceding Taylor coefficients have been shown to be zero, then $r$ must be the rank of $E$; hence the output of rank$=r$ is correct. \\

In terms of time complexity, observe that $k$ is $\softO(\log N_E)$ in magnitude and, by Corollary \ref{cor:real_period_time_complexity}, can be computed in time $O((\log N_E)^m)$ for some $m$. By Corollary \ref{cor:logderiv_rank_bound}, we need to evaluate at most $\frac{1}{2}\log N_E +1.6$ central Taylor coefficients of $\Les$ to $k$ bits precision; Proposition \ref{prop:L_E_time_complexity} states that each of these can be done in $\softO\left(k\cdot \sqrt{N_E}\right)$ time. Hence the algorithm is guaranteed to terminate in time at most 
\begin{equation}
O((\log N_E)^m) + \left(\frac{1}{2}\log N_E + 1.6\right) \cdot \softO\left(k\cdot \sqrt{N_E}\right) = \softO\left(\sqrt{N_E}\right),
\end{equation}
since $k$ is sub-polynomial in $N_E$. The $\frac{1}{2}\log N_E + 1.6$ in the above statements may be replaced with $0.32\log N_E + 0.5$ if GRH is assumed, but the resulting time complexity remains $\softO\left(\sqrt{N_E}\right)$.
\end{proof}

\begin{figure}[!h]
    \centering
    \includegraphics[width=1.0\textwidth]{graphics/rank_algorithm_timings.png}
    \caption{A scatter plot of the time in seconds taken to compute the rank of an elliptic curve using a Sage implementation of Algorithm \ref{algo:compute_rank} (without assuming GRH) vs. conductor, plotted on a log/log scale, for 100 curves drawn randomly from the Cremona database.}
    \label{fig:rank_algorithm_timings}
\end{figure}

For evidence supporting the validity of Theorem \ref{thm:main_theorem}, I wrote a na\"{i}ve implementation of Algorithm \ref{algo:compute_rank} in Sage and collected timings of the algorithm's runtime on SageMathCloud. 100 curves were drawn from the Cremona database according to a log-uniform distribution on their conductors; a log/log scatter plot of timings vs. conductors can be seen in Figure \ref{fig:rank_algorithm_timings} (the algorithm produced the correct output in all cases). \\

The red line in the figure is the best fit straight line, which has slope $0.503$; the predicted slope of $0.5$ is well within the sample error of $0.016$.
This is therefore good computational evidence that the runtime of the rank algorithm does indeed scale with $\sqrt{N_E}$. \\

Using this best fit line, we can make predictions as to how long the algorithm will take to run on curves of larger conductor. For example, the curve of largest known rank is a rank 28 curve found by Elkies (as discussed in \cite{Bob-2011}); it has conductor $N_E$ with $\log N_E \sim 325.9$. For this curve we estimate Algorithm \ref{algo:compute_rank} to take roughly $1.1\times 10^{62}$ years, which is about $8\times 10^{51}$ times the age of the universe. \\

Clearly then, the $\softO(\sqrt{N_E})$ time complexity of Algorithm \ref{algo:compute_rank} limits its usefulness when it comes to curves of large rank. For a method that can be used on such curves, see Chapter 5.

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Real Period}\label{sec:real_period}

The real period of a rational elliptic curve $E$ is a measure of the ``size'' of the set of {\it real} points on $E$. \\

Recall that $E(\CC)$, the group of complex points on $E$, is isomorphic via the (inverse of the) Weierstrass $\wp$-function to $\CC$ modulo a lattice under addition; that is, $E(\CC) \simeq \CC/\Lambda$, where $\Lambda = \ZZ\omega_1 + \ZZ\omega_2$, and $\omega_1,\omega_2 \in \CC$.
If $E$ is defined over the real numbers (as rational elliptic curves are), then we may always take $\omega_1$ to be positive real. The second generator $\omega_2$ may be taken to be positive imaginary when $E$ has positive discriminant, or in the upper half plane with real part $\frac{\omega_1}{2}$ when $E$ has negative discriminant. [Note: some texts normalize $\omega_2$ to have real part equal to $-\frac{\omega_1}{2}$ when $D_E<0$, as this sometimes makes the presentation more natural. However, for the work below we will always assume that $\Re(\frac{\omega_2}{\omega_1}) = 0$ or $\frac{1}{2}$.] See \cite[Ch. VI]{Sil-1985} and \cite[Ch. I]{Sil-1994} for more detailed expositions of the complex theory of elliptic curves and of elliptic and modular functions respectively.

\begin{definition}\label{defn:real_period}
Let $E/\QQ$ have discriminant $D_E$, and $E(\CC) \simeq \CC/\Lambda$, where $\Lambda = \ZZ\omega_1 + \ZZ\omega_2$ and $\omega_1 \in \RR$. The {\it real period} of $E$ is defined to be
\begin{equation}
\Omega_E = \begin{cases} 2\omega_1 & D_E > 0 \\ \omega_1 & D_E < 0 \end{cases}.
\end{equation}
\end{definition}

We are interested in answering the question: for a curve of a given discriminant, how big and how small can $\Omega_E$ be? Can the real period be arbitrarily small or large, or does it scale in some meaningful way with the discriminant? For our purposes, establishing a lower bound on $\Omega_E$ is what is needed to bound the central leading Taylor coefficient of $\Les$ from below. However, we include the result giving an upper bound on $\Omega_E$, as we find its implication -- that $\Omega_E$ goes to zero as $N_E$ goes to infinity -- an interesting one. \\

First, an upper bound. To this end, we have the following result from the complex theory of elliptic curves:
\begin{proposition}
Let $\Delta(z)$ be the Ramanujan Delta function on the complex upper half plane, i.e.
\begin{equation}
\Delta(z) = q \prod_{n=1}^{\infty} (1-q^n)^{24},
\end{equation}
where $q = e^{2\pi i z}$. Let $E/\CC$ have discriminant $D_E$ and lattice basis $(\omega_1,\omega_2)$ as defined above.
Set $z = \frac{\omega_2}{\omega_1}$ (such that $\Im(z) > 0$). Then
\begin{equation}\label{eqn:modular_discriminant}
D_E = \left(\frac{2\pi}{\omega_1}\right)^{12} \Delta\left(z \right).
\end{equation}
\end{proposition}
A proof of the above can be found in Chapter I of \cite{Sil-1994}. Using this result we can readily establish an upper bound on $\Omega_E$:
\begin{proposition}\label{prop:real_period_upper_bound_discriminant}
Let $E/\QQ$ have discriminant $D_E$ and real period $\Omega_E$. Then
\begin{equation}\label{eqn:real_period_upper_bound_discriminant}
\Omega_E < 8.82921517\ldots \cdot |D_E|^{-\frac{1}{12}}.
\end{equation}
\end{proposition}
\begin{proof}
Equation \ref{eqn:modular_discriminant} yields
\begin{equation}
\omega_1 = 2\pi |D_E|^{-\frac{1}{12}} \left| \Delta\left(\frac{\omega_2}{\omega_1}\right)\right|^{\frac{1}{12}}.
\end{equation}
Recall that since $E$ is defined over $\QQ$, we may choose a lattice basis $(\omega_1, \omega_2)$ such that $\omega_1 \in \RR_{+}$ and the real part of $\omega_2$ equals either $0$ or $\frac{\omega_1}{2}$. Thus we may take $z = \frac{\omega_2}{\omega_1}$ to have real part either equal to 0 or $\frac{1}{2}$. Moreover, $\Re(z)=0$ if $D_E>0$ and $\Re(z)=\frac{1}{2}$ if $D_E<0$. See Chapter I of \cite{Sil-1994} for proofs of these statements. \\

Thus $D_E>0$ corresponds to $q = e^{2\pi i z}$ being positive real lying in the open interval $q \in (0,1)$, while $D_E<0$ corresponds to $q \in (-1,0)$. Now $\Delta$ is a cuspidal modular form on $\SL_2(\ZZ)$, so as a function of $q$, $\Delta(q)$ is continuous on $(-1,1)$, zero at the origin, and decaying to zero at $q=-1$ and $q=1$. It must therefore achieve a maximum magnitude on both open intervals $q \in (-1,0)$ and $q \in (0,1)$. \\

The critical points of $\Delta$ have been studied in their own right -- see for example \cite{IJA-2013} and \cite{WoYo-2013}. We see that $\Delta(q)$ has precisely one critical point on each of the intervals $q \in (0,1)$ and $q \in (-1,0)$; these occur at $q = 0.03727681\ldots$ and $q =-0.43929305\ldots$ respectively (corresponding to $z=0.52352170\ldots i$ and $z=\frac{1}{2}+0.13091903\ldots i$ in the upper half plane respectively). At these two values we have $|\Delta(q)|^{\frac{1}{12}}$ equal to $0.70258935\ldots$ and $1.40521323\ldots$ respectively. We conclude that
\begin{equation}
\Omega_E \le 2\pi\cdot 1.40521323\ldots \cdot  |D_E|^{-\frac{1}{12}} = 8.82921517\ldots \cdot |D_E|^{-\frac{1}{12}}.
\end{equation} 
\end{proof}

Note that the real period is {\it not} invariant under isomorphism over $\QQ$. As the elliptic discriminant $D_E$ varies by a twelfth power of an integer as one considers $\QQ$-isomorphic models of $E$, the real period varies by the negative first power of that same integer. This is an immediate consequence of the following statement:
\begin{lemma}
Let $E$ be the global minimal model of a rational elliptic curve, and let $D_E$ and $\Omega_E$ be its discriminant and real period respectively. Let $E\pr$ be isomorphic to $E$ over $\QQbar$, and let $D_{E^{\pr}}$ and $\Omega_{E^{\pr}}$ be defined analogously.
Then there exists a $u$ such that $u^{12}\in \ZZ$, where $D_{E^{\pr}}=u^{12} D_E$ and $\Omega_{E^{\pr}}=\frac{1}{u}\Omega_E$.
\end{lemma}
A proof of the result regarding the discriminant can be found on pages 48--49 of \cite{Sil-1985}; the result regarding the real period follows, for example, from Equation \ref{eqn:modular_discriminant} by chasing through how $D_E$ and $\Omega_E$ vary under $\CC$-isomorphism. \\

Another consequence is that the bound given in Equation \ref{eqn:real_period_upper_bound_discriminant} is optimal, in the sense that the $\frac{1}{12}$ in the negative power of the discriminant cannot be replaced with any larger value. As an explicit example, consider the family of CM elliptic curves given by Weierstrass equations
\begin{equation}
E^d: \;\; y^2 = x^3 - d^2 x,
\end{equation}
for $d \in\ZZ$ positive squarefree. This is just the family of curves related to the congruent number problem, one of the oldest open problems in mathematics (for an excellent treatment of congruent numbers and how they relate to elliptic curves, see \cite{Kob-2012}). Then $E^{d}$ is just the quadratic twist by $d$ (hence the notation) of the curve
\begin{equation}
E: \;\; y^2 = x^3-x,
\end{equation}
which has discriminant 64 and conductor 32. For this family we may actually write down $\Omega_{E^d}$ in terms of a special value of the Gamma function:
\begin{equation}
\Omega_{E^d} = \frac{\Gamma(\frac{1}{4})^2}{\sqrt{2\pi d}}.
\end{equation}
This follows from the fact that $\frac{\omega_2}{\omega_1} = i$ for any of the $E^d$, and that
\begin{equation}
\Delta(i) = \frac{\Gamma(\frac{1}{4})^{24}}{2^{24}\pi^{18}}
\end{equation}
(first shown by Ramanujan in his second notebook -- see \cite{BeZh-1992}). Furthermore, $E^{d}$ has discriminant $D_{E^{d}} = (2d)^6$. So using Equation \ref{eqn:modular_discriminant}, for congruent number curves we obtain
\begin{equation}
\Omega_{E^d} = \frac{\Gamma(\frac{1}{4})^2}{\sqrt{\pi}} \cdot (D_{E^d})^{-\frac{1}{12}} = 7.416\ldots \cdot (D_{E^d})^{-\frac{1}{12}}.
\end{equation}
Since $7.416\ldots < 8.829\ldots$, this result conforms with the bound given in Equation \ref{eqn:real_period_upper_bound_discriminant}. It should also be clear from the example above that any family of quadratic twists of a given curve will have the real period scale with the $-\frac{1}{12}$th power of the discriminant. Furthermore, since the $j$-invariant is surjective, we can find $E/\QQ$ with $z=j^{-1}(E)$ arbitrarily close to $\frac{1}{2}+0.13091903\ldots i$, so the constant $8.82921517\ldots$ obtained in Proposition \ref{prop:real_period_upper_bound_discriminant} is also optimal. \\

\begin{corollary}\label{cor:real_period_upper_bound}
For $E/\QQ$ with real period $\Omega_E$ and conductor $N_E$,
\begin{equation}
\Omega_E < 8.82921517\ldots \cdot (N_E)^{-\frac{1}{12}}.
\end{equation}
That is, the real period goes to zero as the conductor of the curve goes to infinity.
\end{corollary}
This follows immediately from Proposition \ref{prop:real_period_upper_bound_discriminant}, as the conductor of an elliptic curve always divides its discriminant. Again, by the same reasoning as before this bound should be optimal. Note that this result is unconditional.
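As a quick numeric sanity check of the above (an illustrative sketch only, using nothing beyond the Python standard library), the following snippet confirms that for the congruent number curves the ratio $\Omega_{E^d} \cdot (D_{E^d})^{\frac{1}{12}}$ equals $\Gamma(\frac{1}{4})^2/\sqrt{\pi} = 7.416\ldots$ for every $d$, safely below the constant $8.829\ldots$ of Proposition \ref{prop:real_period_upper_bound_discriminant}:
\begin{verbatim}
# Sanity check: real periods of congruent number curves E^d
# against the bound 8.82921517... * |D|^(-1/12).
from math import gamma, pi, sqrt

bound_const = 2 * pi * 1.40521323    # = 8.8292... (see the proposition)

for d in (1, 2, 3, 5, 6, 7, 10):     # a few squarefree values of d
    omega = gamma(0.25) ** 2 / sqrt(2 * pi * d)  # Omega_{E^d}
    disc = (2 * d) ** 6                          # D_{E^d} = (2d)^6
    ratio = omega * disc ** (1 / 12)
    assert ratio < bound_const
    print(d, ratio)                  # prints 7.4163... for every d
\end{verbatim}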
\\

Equation \ref{eqn:modular_discriminant} gives us a way to compute the real period of $E$, but it is in general not the most efficient means of doing so (as $z = \frac{\omega_2}{\omega_1}$ must be found by, for example, inverting the $j$-invariant of $E$). Instead, $\omega_1$ may be computed using the (real version of the) Gauss arithmetic-geometric mean. Recall the definition thereof: let $a,b \in \RR_{\ge0}$. Set $a_0 = a$ and $b_0 = b$, and for $n\ge 0$ let $a_{n+1} = \frac{1}{2}(a_{n}+b_{n})$ and $b_{n+1} = \sqrt{a_{n}b_{n}}$. Then $\AGM(a,b)$ is defined to be the common limit of both the $a_n$ and the $b_n$. Moreover, the convergence is quadratic -- precision roughly doubles with every iteration -- and is thus very quick. A deeper exposition of the AGM, including a proof of convergence and of the convergence rate, can be found in \cite{Cox-2000}.

\begin{proposition}\label{prop:real_period_by_AGM}
Let $E/\Q$ have minimal Weierstrass equation $y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$. Write the equation in the form
\begin{equation}\label{eqn:weierstrass_with_bn}
\left(y + \frac{a_1x + a_3}{2}\right)^2 = x^3 + \frac{b_2}{4} x^2 + \frac{b_4}{2} x + \frac{b_6}{4} = (x-e_1)(x-e_2)(x-e_3),
\end{equation}
where $e_1,e_2,e_3$ are the 3 complex roots of the polynomial in $x$ on the right hand side, and $b_2$, $b_4$ and $b_6$ are as defined in Section \ref{subsec:notation}.
\begin{enumerate}
\item If $D_E > 0$, then $e_1,e_2,e_3 \in \RR$, so without loss of generality we may order them as $e_3 > e_2 > e_1$. Then
\begin{equation}\label{eqn:omega_D_pos}
\omega_1 = \frac{\pi}{\AGM(\sqrt{e_3-e_1},\sqrt{e_3-e_2})}.
\end{equation}
\item If $D_E < 0$, then the RHS polynomial has only one real root; we may write $e_3 \in \RR$ and $e_1 = \conj{e_2}$. Let $z = \sqrt{e_3-e_1} = s + it$; choose the root such that $s>0$. Then
\begin{equation}\label{eqn:omega_D_neg}
\omega_1 = \frac{\pi}{\AGM(|z|,s)}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Cremona and Cremona-Thongjunthug give good explanations and derivations of this formula in \cite{Cre-1997} and \cite{Cre-2013} respectively.
\end{proof}

To get a lower bound on $\Omega_E$ from the above definitions, we will need the following technical result.
\begin{definition}
For a given $E/\QQ$, let 
\begin{equation}
d(E) = \max\set{|e_i - e_j|: \; e_i,e_j \text{ are roots of }4x^3 + b_2  x^2 + 2 b_4 x + b_6, i\ne j}
\end{equation}
be the maximum root separation of the cubic polynomial on the RHS of Equation \ref{eqn:weierstrass_with_bn}
(i.e. $b_2, b_4$ and $b_6$ are the $b$-invariants of $E$).
\end{definition}
Observe that for both the positive and negative discriminant cases, the AGM in the denominators of Equations \ref{eqn:omega_D_pos} and \ref{eqn:omega_D_neg} is at most $\sqrt{d(E)}$. It is useful therefore to have a bound on the magnitude of $d(E)$ in terms of the $b$-invariants:
\begin{lemma}\label{lem:root_separation_bound}
Given the above setup,
\begin{equation}
d(E) < 2+\frac{1}{2}\max\set{|b_2|,2|b_4|,|b_6|}.
\end{equation}
\end{lemma}
\begin{proof}
We bound the magnitudes of the roots of the polynomial $x^3 + \frac{b_2}{4} x^2 + \frac{b_4}{2} x + \frac{b_6}{4}$ via the classical Cauchy root bound.
Observe that any root $e$ of the cubic satisfies $|e|^3 \le \max\set{\frac{|b_2|}{4},\frac{|b_4|}{2},\frac{|b_6|}{4}}\left(|e|^2+|e|+1\right)$, from which the Cauchy bound gives $|e| <  1+\frac{1}{4}\max\set{|b_2|,2|b_4|,|b_6|}$. The result follows.
\end{proof}

\begin{corollary}[ABC]\label{cor:real_period_time_complexity}
The real period of an elliptic curve can be computed to a specified precision in polynomial time and space in the number of bits of the curve's conductor.
\end{corollary}
\begin{proof}
We see from Proposition \ref{prop:real_period_by_AGM} that $\Omega_E$ can be computed by a) finding the roots of a cubic polynomial related to the Weierstrass equation for $E$, and then b) applying the AGM to a certain simple function of that cubic's roots. \\

Step a) can be achieved in time polynomial in the log of the maximum magnitude of the $a$-invariants, which means it can be done in time polynomial in the logs of the $c$-invariants. By Modified Szpiro (Conjecture \ref{conj:modified_szpiro}) the conductor of a curve is bounded in magnitude by a polynomial in the $c$-invariants; chaining this all together gives us that step a) can be computed in time polynomial in $\log N_E$, i.e. sub-polynomial in $N_E$. \\

In step b), Lemma \ref{lem:root_separation_bound} implies that the inputs to the AGM are bounded by a polynomial in the $b$-invariants of $E$, so again by Szpiro they are bounded by a power of $N_E$. The AGM converges quadratically when both inputs are positive real; therefore it will converge to the specified precision in time bounded by a polynomial in $\log N_E$. Thus altogether we see that the real period can be computed to a given precision in time polynomial in $\log N_E$, i.e. sub-polynomial in $N_E$ itself.
\end{proof}

The take-away from the above result is that computing the real period is quick, and will never be the computational bottleneck when it comes to running Algorithm \ref{algo:compute_rank}. \\

\begin{corollary}[S.]\label{ineq:Omega_bn_bound}
Let $E/\Q$, with (not necessarily minimal) Weierstrass equation \\
$y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$, have real period $\Omega_E$, and define $b_2$, $b_4$ and $b_6$ as at the beginning of Section \ref{sec:defs_background}. Then
\begin{equation}
\Omega_E > \frac{\alpha \pi}{\sqrt{1+\frac{1}{4}\max\set{|b_2|,2|b_4|,|b_6|}}},
\end{equation}
where $\alpha = 1$ if $E$ has positive discriminant and $\frac{1}{2}$ if $E$ has negative discriminant.
\end{corollary}
\begin{proof}
This follows immediately from the definition of $\Omega_E$ given in Proposition \ref{prop:real_period_by_AGM} and Lemma \ref{lem:root_separation_bound}.
\end{proof}

Again, this bound is optimal in the sense that the square root sign in the denominator cannot be replaced with any smaller exponent. To see this, consider the family of elliptic curves
\begin{equation}
E_n: \;\; y^2 = x^3 - (nx-1)^2.
\end{equation}
For a given $n$, $E_n$ has $b_2,b_4$ and $b_6$ equal to $-4n^2,4n$ and $-4$ respectively.

The polynomial $x^3 + \frac{b_2}{4} x^2 + \frac{b_4}{2} x + \frac{b_6}{4} = x^3 -n^2 x^2 + 2n x - 1$ has a single root at $n^2 - O(\frac{1}{n})$ and two roots very close to the origin with magnitude $O(\frac{1}{n})$.
Hence the real period for $E_n$ is
\begin{equation}
\Omega_{E_n} = \frac{2\pi}{n} + o\left(\frac{1}{n}\right).
\end{equation}

On the other hand, for a given $n \ge 2$ we have
\begin{equation}
1+\frac{1}{4}\max\set{|b_2|,2|b_4|,|b_6|} = 1+n^2.
\end{equation}
So for this family of curves the lower bound given by Inequality \ref{ineq:Omega_bn_bound} is $\frac{\pi}{\sqrt{1+n^2}}$. Since $\Omega_{E_n}$ asymptotes to twice this value, it is clear that the bound would be violated for sufficiently large $n$ if the square root were replaced with a smaller power. \\

Finally, if we assume the Szpiro conjecture, Corollary \ref{ineq:Omega_bn_bound} allows us to bound the real period of a minimal model of $E$ from below in terms of that curve's conductor. We will invoke a slight reformulation of Modified Szpiro (Conjecture \ref{conj:modified_szpiro}): Suppose the minimal short Weierstrass model of $E$ is $y^2 = x^3+Ax+B$, i.e. there does not exist any prime $p$ such that $p^4|A$ and $p^6|B$. Then for any $\epsilon>0$ there is a constant $K_{\epsilon}$ independent of $E$ such that
\begin{equation}
\max\set{|A|^3,|B|^2} \le K_{\epsilon}\cdot (N_E)^{6+\epsilon}.
\end{equation}
(Since for a curve in short Weierstrass form $c_4 = -48A$ and $c_6=-864B$, we see that the above statement and Modified Szpiro are equivalent.) Using this, we obtain the following:
\begin{theorem}[ABC]
\label{thm:real_period_lower_bound}
Let $E$ have conductor $N_E$, and let $\Omega_E$ be the real period of a minimal model of $E$. Then, assuming ABC, for any $\epsilon>0$ there is a constant $K_{\epsilon}$ independent of $E$ such that 
\begin{equation}
\Omega_E > K_{\epsilon} \cdot (N_E)^{-\frac{3}{2}-\epsilon}.
\end{equation}
\end{theorem}
\begin{proof}
Let $E$ be given by its minimal short Weierstrass equation $y^2 = x^3+Ax+B$. $E$ then has $b$-invariants $b_2=0$, $b_4 = 2A$ and $b_6 = 4B$, so by Corollary \ref{ineq:Omega_bn_bound} the real period of $E$ obeys
\begin{equation}\label{eqn:Omega_bound_short_weierstrass}
\Omega_E > \frac{\pi}{2\sqrt{1+\max\set{|A|,|B|}}}.
\end{equation}
Now by the aforementioned version of Szpiro, for any $\epsilon>0$ we have
\begin{align*}
\sqrt{1+\max\set{|A|,|B|}} &<  1 + \max\set{|A|^2,|B|^2}^\frac{1}{4} \\
&\le 1 + \max\set{|A|^3,|B|^2}^\frac{1}{4} \quad\quad\mbox{since $A \in \ZZ$} \\
&\le 1 +\left[K_{\epsilon} (N_E)^{6+\epsilon} \right]^\frac{1}{4} \\
\Longrightarrow \sqrt{1+\max\set{|A|,|B|}} &< K_{\epsilon} (N_E)^{\frac{3}{2}+\epsilon},
\end{align*}
where to achieve the last line we absorb the 1 into $K$ and relabel as necessary to account for the $\frac{1}{4}$th power, and relabel $\frac{\epsilon}{4} \mapsto \epsilon$. Again, after absorbing the factor of $\frac{\pi}{2}$ into $K$ in Equation \ref{eqn:Omega_bound_short_weierstrass}, the result follows.
\end{proof}

\begin{figure}[!ht]
    \centering
    \includegraphics[width=0.92\textwidth]{graphics/real_periods_vs_conductors_loglog}
    \caption{A scatter plot of $\log \Omega_E $ on the vertical axis vs. $\log N_E$ on the horizontal axis, for all curves up to conductor 350000. The upper red line is the proven upper bound $\Omega_E < 8.82921517\ldots \cdot (N_E)^{-\frac{1}{12}}$, which can be seen to be sharp. The lower red line corresponds to the bound $\Omega_E > (N_E)^{-1}$.
Empirically this appears to hold easily, lending credence to the validity of the weaker assertion in Conjecture \ref{conj:Omega_lower_bound_explicit}.}
    \label{fig:real_period_vs_conductor_loglog}
\end{figure}

Thankfully, we do not need to assume a specific value of $K_{\epsilon}$ for a given $\epsilon$ for Theorem \ref{thm:main_theorem} to hold. However, empirical data suggests that for $\epsilon=\frac{1}{2}$ we can easily get away with choosing $K = 1$. We formalize this with the following conjecture:
\begin{conjecture}\label{conj:Omega_lower_bound_explicit}
Let $E$ have conductor $N_E$, and let $\Omega_E$ be the real period of a minimal model of $E$. Then 
\begin{equation}
\Omega_E > (N_E)^{-2}.
\end{equation}
\end{conjecture}

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Regulator}

To define the regulator of a rational elliptic curve, we must first define the na\"ive logarithmic height, the N\'eron-Tate canonical height and the N\'eron-Tate pairing on points on $E$. \\

Let $E$ be an elliptic curve over $\QQ$ and $P \in E(\QQ)$ a rational point on $E$. The {\it na\"ive logarithmic height} of $P$ is a measure of the ``size'' of the coordinates of $P$. 
\begin{definition}
Define $h(\cO) = 0$. For $P\ne \cO$, we may write $P = (x,y) \in \QQ^2$, with $x = \frac{a}{b}$, $a,b \in \ZZ$, $b>0$ and $\gcd(a,b) = 1$. We then define the na\"ive height of $P$ to be
\begin{equation}
	h(P) := \max\set{\log|a|,\log|b|}.
\end{equation}
\end{definition}
If you compute the na\"ive heights of a number of points on an elliptic curve, you'll notice that the na\"ive height function is ``almost a quadratic form'' on $E$. That is, $h(nP) \sim n^2 h(P)$ for integers $n$, up to some constant that doesn't depend on $P$. We can turn $h$ into a true quadratic form as follows:

\begin{definition}
The {\it N\'eron-Tate height} function $\hat{h}: E(\QQ) \to \RR$ is defined as
\begin{equation}
	\hat{h}(P) := \lim_{n \to \infty} \frac{h(2^n P)}{(2^n)^2},
\end{equation}
where $h$ is the na\"ive logarithmic height defined above.
\end{definition}

\begin{theorem}[N\'eron-Tate]
The N\'eron-Tate height defines a canonical quadratic form on $E(\QQ)$ modulo torsion. That is,
\begin{enumerate}
	\item For all $P,Q \in E(\QQ)$,
	\begin{equation}
		\hat{h}(P+Q) + \hat{h}(P-Q) = 2\left[ \hat{h}(P) + \hat{h}(Q)\right],
	\end{equation}
	i.e. $\hat{h}$ obeys the parallelogram law;
	\item For all $P \in E(\QQ)$ and $n \in \ZZ$,
	\begin{equation}
		\hat{h}(nP) = n^2 \hat{h}(P).
	\end{equation}
	\item $\hat{h}$ is even, and the pairing $\langle\;,\;\rangle: E(\QQ)\times E(\QQ) \to \RR$ given by
	\begin{equation}
		\langle P,Q \rangle = \frac{1}{2}\left(\hat{h}(P+Q) - \hat{h}(P) - \hat{h}(Q)\right)
	\end{equation}
	is bilinear;
	\item $\hat{h}(P) = 0$ iff $P$ is torsion;
	\item We may replace $h$ with another height function on $E(\QQ)$ that is ``almost quadratic'' without changing $\hat{h}$.
\end{enumerate}
\end{theorem}

For a proof of this theorem and elaboration on the last point, see \cite[pp.
227-232]{Sil-1985}.

\begin{definition}
The {\it N\'eron-Tate pairing} on $E/\QQ$ is the bilinear form $\langle\;,\;\rangle: E(\QQ)\times E(\QQ) \to \RR$ given by
	\begin{equation}
		\langle P,Q \rangle = \frac{1}{2}\left(\hat{h}(P+Q) - \hat{h}(P) - \hat{h}(Q)\right).
	\end{equation}
\end{definition}
Note that this definition may be extended to all pairs of points over $\QQbar$, but the definition above suffices for our purposes. \\

If $E(\QQ)$ has rank $r$, then $E(\QQ)/E_{\text{tor}}(\QQ) \hookrightarrow \RR^r$ as a rank $r$ lattice via the height pairing map. Specifically, if $\set{P_1,\ldots, P_r}$ is a basis for $E(\QQ)$ modulo torsion, then we send $Q \in E(\QQ)$ to the vector $\left( \langle Q,P_1 \rangle, \ldots, \langle Q,P_r \rangle \right)$. Note that the image of a given point under this embedding obviously depends on the choice of basis. However, any two lattices arising as the image of $E(\QQ)$ for two different choices of basis are related by a unimodular change of basis, and thus always have the same covolume.

\begin{definition}
The {\it regulator} $\Reg_E$ of $E/\QQ$ is the covolume of the lattice that is the image of $E(\QQ)$ under the above pairing map. That is, if $\set{P_1,\ldots,P_r}$ generates $E(\QQ)$, then
\begin{equation}
	\Reg_E = \det\left(\langle P_i,P_j\rangle \right)_{1 \le i,j \le r},
\end{equation}
where $\left(\langle P_i,P_j\rangle \right)_{1 \le i,j \le r}$ is the matrix whose $(i,j)$th entry is the value of the pairing $\langle P_i,P_j\rangle$. If $E/\QQ$ has rank zero, then $\Reg_E$ is defined to be 1.
\end{definition}
Note that for any $P \in E(\QQ)$, $\langle P, P \rangle = \hat{h}(P)$. Thus the regulator of any rank 1 curve is just the smallest height of a non-torsion point on that curve. \\

Loosely, the regulator measures the ``density'' of rational points on $E$: positive rank elliptic curves with small regulators have many points with small coordinates, while those with large regulators have few such points.\\

There are conjectural bounds on how large a curve's regulator can be in terms of its conductor -- see for example Conjecture 6.3 in Lang's Survey of Diophantine Geometry \cite[p. 99]{Lang-1997}. This is a topic we hope to investigate more fully in future work, but the question that is relevant to this thesis is not how large the regulator can be, but how small. Specifically, given $E/\QQ$ with (minimal) discriminant $D_E$, what is the smallest $\Reg_E$ can be as a function of $D_E$?

This is an open question. However, recall Lang's Height Conjecture (Conjecture 1.4 in \cite[pp. 73-74]{Lang-1997}):
\begin{quotedconjecture}{\ref{conj:Lang}}
Let $E/\QQ$ have minimal discriminant $D_E$. There exists an absolute constant $M_0 >0$ independent of $E$ such that any non-torsion point $P \in E(\QQ)$ satisfies
\begin{equation}
\hat{h}(P) \ge M_0 \log |D_E| .
\end{equation}
\end{quotedconjecture}
That is, the minimum height of a non-torsion point on $E$ scales with the log of the absolute value of the curve's minimal discriminant.
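Before turning to such lower bounds, it may help to see the definitions above in action. The following minimal Python sketch (an illustration only; the curve $y^2 = x^3 - 2$ with non-torsion point $P=(3,5)$ is our own choice of example, and the truncation at $n=5$ doublings is arbitrary) approximates $\hat{h}(P)$ directly from the limit definition:
\begin{verbatim}
# Crude approximation of the canonical height via h(2^n P)/4^n
# for P on y^2 = x^3 + A*x + B over Q (short Weierstrass form).
from fractions import Fraction
from math import log

def double(P, A):
    # Standard duplication formula; assumes 2P is not the identity.
    x, y = P
    lam = (3 * x * x + A) / (2 * y)
    x2 = lam * lam - 2 * x
    return (x2, lam * (x - x2) - y)

def naive_height(P):
    # h(P) = log max(|a|, |b|), where x(P) = a/b in lowest terms.
    x = P[0]
    return log(max(abs(x.numerator), abs(x.denominator)))

def canonical_height(P, A, n=5):
    # hhat(P) ~ h(2^n P) / 4^n; exact rational arithmetic throughout,
    # so only the truncation of the limit introduces error.
    for _ in range(n):
        P = double(P, A)
    return naive_height(P) / 4**n

P = (Fraction(3), Fraction(5))    # P = (3, 5) on y^2 = x^3 - 2
print(canonical_height(P, A=0))   # approximates hhat(P)
\end{verbatim}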
Hindry and Silverman in \cite{HiS-1988} show that the ABC conjecture implies Lang's height conjecture and, better yet, give an explicit lower bound on $M_0$:
\begin{equation}
M_0 \ge 6\times 10^{-11}.
\end{equation}
The bound was further improved by Elkies (albeit still contingent on ABC) in the early 2000s \cite{Elk-2006} to
\begin{equation}
M_0 \ge 3.9479\times 10^{-5}.
\end{equation}
Note that Theorem \ref{thm:main_theorem} already requires the assumption of ABC for the results regarding the real period of $E$; quoting the above result therefore introduces no further unproven assumptions. \\

There is general agreement in the literature that the value of $M_0$ above is not optimal; however, there is no strong consensus as to how much larger $M_0$ could be. A survey by Elkies \cite{ElSt-2002} reveals 54 known cases of points $P$ on curves over $\QQ$ where $\hat{h}(P) < \frac{1}{100}$, and the smallest value of $\hat{h}/\log|D_E|$ is $\sim 8.46 \times 10^{-5}$, achieved by a point on a curve of conductor $N=3476880330$. Given the evidence in this and other compiled data, it seems quite likely that there are as-yet undiscovered instances of points with low height driving the observed lower bound down closer to the value of $3.9479\times 10^{-5}$. \\

One can therefore make the following more conservative observation:
\begin{corollary}[ABC]\label{conj:point_height_lower_bound}
There exists an absolute constant $M_1 >0$ independent of $E$ such that any non-torsion point $P$ on any elliptic curve $E(\QQ)$ satisfies
\begin{equation}\label{eqn:Elkies_height_lower_bound}
\hat{h}(P) \ge M_1 .
\end{equation}
\end{corollary}
The smallest absolute point height found in the aforementioned survey by Elkies is $\hat{h}(P) = 8.914\times 10^{-3}$, achieved by the point $P = (7107,-602054)$ and its negative on the curve with Cremona label {\tt 3990v1}, given by the equation $E: y^2+xy+y=x^3+x^2-125615x+61201397$. (Note that Elkies' table uses a slightly different definition of height, equal to half the value of the height as defined above.) One can see in this table that the known points of smallest height all belong to curves with small conductor, so it is perhaps believable that this is indeed the point of smallest height on any rational elliptic curve -- for such a point is guaranteed to exist, assuming ABC. \\

Even though the Elkies bound above may seem too small to be of much use in practical applications, we can use it to bound a curve's regulator from below in terms of an inverse power of its conductor. For this we will need the following geometric lemma:
\begin{lemma}\label{lem:covolume_lower_bound}
Let $L$ be a lattice in $\RR^r$ with covolume $V_L$. If $h$ is the minimum nonzero vector length in $L$, then
\begin{equation}
V_L \ge \left(\frac{\sqrt{\pi}}{2}\cdot h\right)^{r} \cdot \frac{1}{\Gamma(1+\frac{r}{2})},
\end{equation}
where $\Gamma(s)$ is the usual Gamma function on $\CC$.
\end{lemma}
\begin{proof}
Recall Minkowski's Theorem: Let $L$ be a lattice in $\RR^r$ with covolume $V_L$, and let $S$ be a convex symmetric subset of $\RR^r$ with volume $\Vol(S)$. If $\Vol(S)>2^r \cdot V_L$, then $S$ contains a nonzero element of $L$ -- see \cite[p. 80]{st-2012} for a proof. \\

So let $S = B(0,h)$, i.e. the open ball of radius $h$ centered at the origin, where $h$ is the minimum nonzero vector length in $L$.
By construction $S$ contains no nonzero lattice elements, so by Minkowski's theorem we must have that
\begin{equation}
\Vol(S) \le 2^r V_L.
\end{equation}
The volume of the $r$-dimensional ball of radius $h$ is given by
\begin{equation}
\Vol(S) = \frac{\pi^{\frac{r}{2}}}{\Gamma(1+\frac{r}{2})} \cdot h^r;
\end{equation}
combining the above two statements and solving for $V_L$ completes the proof.
\end{proof}

With the above lemma we can then prove the following:
\begin{theorem}[BSD, ABC, (GRH)]\label{thm:regulator_lower_bound}
Let $E/\QQ$ have conductor $N_E$. Assuming BSD and ABC, we have that
\begin{equation}
\Reg_E \ge 4.36 \times 10^{-6} \cdot (N_E)^{-3.86} \cdot \frac{1}{\Gamma(1.8+0.25 \log N_E)}.
\end{equation}
If one further assumes GRH, then one has the improved bound
\begin{equation}
\Reg_E \ge 2.11 \times 10^{-2} \cdot (N_E)^{-2.47} \cdot \frac{1}{\Gamma(1.25+0.16 \log N_E)}.
\end{equation}
\end{theorem}
\begin{proof}
For curves of conductor $\le 350000$, we consulted Cremona's tables and verified numerically that the above statements hold. Thus without loss of generality we may assume $D_E \ge N_E > 350000$. Hence for any non-torsion point $P \in E(\QQ)$, by Conjecture \ref{conj:Lang} and Elkies' bound in Equation \ref{eqn:Elkies_height_lower_bound} we have that
\begin{equation}
\hat{h}(P) \ge 3.9479\times 10^{-5} \cdot \log |D_E| \ge 3.9479\times 10^{-5} \cdot \log(350000) = 5.0397 \times 10^{-4}.
\end{equation}
Let $h = 5.0397 \times 10^{-4}$, and let $L$ be the rank $r$ lattice that is the image of $E(\QQ)$ under the height pairing map (for a given choice of basis of $E(\QQ)$), where $r$ is the rank of $E$. It follows that any nonzero vector in $L$ has length at least $h$. Thus by Lemma \ref{lem:covolume_lower_bound} we must then have that
\begin{equation*}
\Reg_E \ge \left(\frac{\sqrt{\pi}}{2}\cdot h\right)^{r} \cdot \frac{1}{\Gamma(1+\frac{r}{2})}.
\end{equation*}
By BSD, the algebraic and analytic rank are equal, so we have that
\begin{equation}
r < a\log N_E + b,
\end{equation}
where by Corollary \ref{cor:logderiv_rank_bound} we may take $a=0.5, b=1.6$ if we aren't assuming GRH, and by Corollary \ref{cor:better_an_bound} $a=0.32, b=0.5$ if we are. Thus
\begin{align*}
\Reg_E &\ge \left(\frac{\sqrt{\pi}}{2}\cdot h\right)^{a \log N_E + b} \cdot \frac{1}{\Gamma\left(1+\frac{a \log N_E + b}{2}\right)} \\
&= \left(\frac{\sqrt{\pi}}{2}\cdot h\right)^{b} \cdot \left(N_E\right)^{a\log \left(\frac{\sqrt{\pi}}{2}\cdot h\right)} \cdot \frac{1}{\Gamma\left((1+\frac{b}{2})+\frac{a}{2} \log N_E \right)}.
\end{align*}
[Note that replacing $r$ with $a\log N_E +b$ inside the Gamma factor is only valid in the region where the Gamma function is monotonically increasing, i.e. for $a\log N_E + b \ge 1$. However, we are in this case in both the non-GRH and GRH versions of the proof, since we are assuming $N_E > 350000$.] \\

Substituting the respective values of $a$ and $b$ and simplifying produces the two inequalities stated in the theorem.
\end{proof}
There are a few things worth pointing out about this result. Firstly, the Gamma factor means that the proven lower bound on the regulator eventually decreases more rapidly than any negative power of the conductor. However, since $\Gamma(s) = O(e^{s\log s})$, we see that $\Gamma\left((1+\frac{b}{2})+\frac{a}{2} \log N \right) = O(N^{c\log\log N})$ for some constant $c$.
That is, the exponent in the negative power of $N_E$ coming from the Gamma factor grows, albeit very slowly. \\

Note that (without assuming GRH) the number of extra bits of precision needed in Algorithm \ref{algo:compute_rank} to compensate for the Gamma factor is $\log_2(\Gamma(1.8+0.25\log N_E))$. Even though this quantity grows faster than $\log N_E$, the constant in front of $\log N_E$ inside the Gamma factor is small enough that for all practical purposes the number of bits of precision needed to account for the Gamma factor grows linearly with the log of the conductor over the range of conductors for which the rank algorithm is practical. For example, when $N_E=350000$ the number of extra bits of precision needed to account for the Gamma factor is just 5 (even without assuming GRH). And even for $N_E = 10^{20}$ -- which is about the upper limit for what is practical on modern architecture -- the number of extra bits needed is 30. Either way, the number of bits needed to account for the regulator isn't an issue in any way, since the computational bottleneck in the rank algorithm is the $\sqrt{N_E}$ dependence coming from evaluating $L_E(s)$, which grows faster than any power of $\log N_E$. \\

Also note that the first step in the proof -- manual verification for all curves below conductor $350000$ -- isn't strictly necessary; it only improves the constants in the bounds by a small amount. However, it serves to highlight that the power of $N_E$ in the two bounds can theoretically be improved further by exhaustively checking all curves up to a higher conductor bound. If, for example, we believe that $8.914\times 10^{-3}$ is a global minimum point height over all rational elliptic curves, then (assuming GRH) we would instead get
\begin{equation}
\Reg_E \ge 8.89 \times 10^{-2} \cdot (N_E)^{-1.55} \cdot \frac{1}{\Gamma(1.25+0.16 \log N_E)}.
\end{equation}
Even so, we do {\it not} expect either bound to be anywhere close to optimal; almost certainly more careful analysis could further reduce the negative exponent of $N_E$ or increase the size of the constant in front of it -- or better yet, eliminate the Gamma factor. In practice, we see the smallest regulators tend to {\it grow} with conductor, further highlighting that the above bound is rather crude. However, the statement in Theorem \ref{thm:regulator_lower_bound} is good enough for our purposes: it will help establish that the central leading coefficient of $\Les$ cannot be exponentially small in $N_E$. \\

We invite the interested reader to improve upon this result, and thus ultimately speed up the runtime of Algorithm \ref{algo:compute_rank}.
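The bit counts quoted above are easy to reproduce; the following short Python check (a sketch using only the standard library; the implementation referred to elsewhere in this work is in Sage) evaluates $\log_2(\Gamma(1.8+0.25\log N_E))$ directly:
\begin{verbatim}
# Extra bits of precision needed for the Gamma factor (non-GRH case).
from math import lgamma, log

def extra_bits(N):
    # log2 Gamma(1.8 + 0.25 log N); lgamma is log Gamma in natural log.
    return lgamma(1.8 + 0.25 * log(N)) / log(2)

print(round(extra_bits(350000)))  # 5
print(round(extra_bits(1e20)))    # 30
\end{verbatim}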
", "meta": {"hexsha": "7750f188d2a166a85c74054b2c285dbff32edfab", "size": 41470, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/4_main_theorem.tex", "max_stars_repo_name": "haikona/thesis", "max_stars_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/4_main_theorem.tex", "max_issues_repo_name": "haikona/thesis", "max_issues_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/4_main_theorem.tex", "max_forks_repo_name": "haikona/thesis", "max_forks_repo_head_hexsha": "d20302fc3aaf5a075329ebd36b233965b719c7d1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.3436123348, "max_line_length": 1106, "alphanum_fraction": 0.7189293465, "num_tokens": 13132, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056167854461, "lm_q2_score": 0.8267117855317474, "lm_q1q2_score": 0.5698570772297585}}
{"text": "\\problemname{Treasure Spotting}\nFor Timmy's birthday his parents threw him a pirate themed party! A treasure is buried in the yard and now it is up to Timmy and his pirate crew to find it. Help the pirates find the treasure by letting them know who can see where the treasure is buried.\n\nTo make the game interesting, there are walls placed in the yard to\nobscure vision. Each pirate has a field of view that determines what\nthey can see. Each pirate can see a certain distance away and can only\nsee in a semi-circle based on the direction they are looking (see\nimage below). A point cannot be seen in a pirate's field of view if\neither another pirate or some part of a wall is directly between the\npoint that is being looked at and the pirate that is looking.  Each\npirate is a single point, and each wall is an infinitely thin line.\n\nWhich pirates can see where the treasure is buried?\n\n\\begin{figure}[h]\n\\begin{center}\n \\includegraphics[width=0.7\\textwidth]{sample.pdf}\n \\caption{The left picture illustrates Sample Input 1 where the right-most pirate is the only one who can see the location of the buried treasure. The right picture illustrates Sample Input 2 where the middle pirate is the only one who can see the buried treasure.}\n\\end{center}\n\\end{figure}\n\n\\section*{Input}\n\nThe first line of input contains two integers $W$~($0 \\leq W \\leq 1 \\, 000$), which is the number of walls and $P$~($1 \\leq P \\leq 1 \\, 000$), which is the number of pirates.\n\nThe second line contains the coordinates of the treasure.\n\nThe next $W$ lines describe the walls. Each of these lines contains two coordinates $(x,y)$ and $(x',y')$ which are the two (distinct) endpoints of this wall.\n\nThe next $P$ lines describe the pirates. The $i$th of these lines contains two (distinct) coordinates $(x_i,y_i)$, which is the position of the $i$th pirate, and $(x_i',y_i')$, which is the furthest point that this pirate can see in the direction they are looking. That is, the radius of the semi-circle for this pirate is the distance between $(x_i, y_i)$ and $(x_i',y_i')$.\n\nAll coordinates are an $(x,y)$ integer pair with $|x|,|y| \\leq 10^9$. No two pirates will have the same coordinate position, the treasure will not share a coordinate position with any pirate and no part of any wall will touch a pirate or the treasure. Note that walls can overlap in any way with other walls.\n\n\\section*{Output}\n\nDisplay $P$ lines, one per pirate. 
The $i$th of these lines should display \texttt{visible} if the $i$th pirate can see where the treasure is buried and \texttt{not visible} otherwise.
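One natural reading of the visibility test is sketched below in Python (an illustration only, not part of the official statement or a reference solution: it checks a single pirate na\"ively against every wall, the semi-circle is taken to be the half-disc facing the look direction, and exact integer arithmetic is used throughout):
\begin{verbatim}
# Is the treasure visible to one pirate? (illustrative sketch)

def cross(o, a, b):
    # 2D cross product of vectors OA and OB.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def on_segment(a, b, r):
    # r lies on segment ab (collinear and inside the bounding box).
    return (cross(a, b, r) == 0
            and min(a[0], b[0]) <= r[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= r[1] <= max(a[1], b[1]))

def segments_intersect(p, q, a, b):
    # Touching counts: any part of a wall on the sight line blocks it.
    d1, d2 = cross(a, b, p), cross(a, b, q)
    d3, d4 = cross(p, q, a), cross(p, q, b)
    if d1 * d2 < 0 and d3 * d4 < 0:
        return True
    return (on_segment(a, b, p) or on_segment(a, b, q)
            or on_segment(p, q, a) or on_segment(p, q, b))

def sees(pirate, look, treasure, walls, other_pirates):
    (px, py), (tx, ty) = pirate, treasure
    dx, dy = tx - px, ty - py
    rx, ry = look[0] - px, look[1] - py
    if dx*dx + dy*dy > rx*rx + ry*ry:    # beyond the viewing radius
        return False
    if dx*rx + dy*ry < 0:                # behind the facing semi-circle
        return False
    if any(segments_intersect(pirate, treasure, a, b) for a, b in walls):
        return False                     # a wall blocks the view
    return not any(on_segment(pirate, treasure, o) for o in other_pirates)
\end{verbatim}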
{"text": "\\clearpage\n\\section{Radio Simulation}\\label{sec:radiomodel}\nTo simulate loss of packets during radio communication, we introduce the packet error probability. The packet\nerror probability is the probability for any form of error occurring during the transmission and reception of\na packet through wireless radio communication. The probability for packet error is calculated using the\n\\gls{rssi} on a link between two nodes, the size of the packet, as well as the \\gls{snr}, including\ninterference from nearby transmitting nodes. The computations in this section are derived from\n\\cite{massoud2007digital}, as well as personal communication with the author of \\cite{paper:linkmodel}.\n\n\\subsection{Probability for Packet Error}\\label{sec:pep}\nThe first step for computing the probability for packet error is to compute the level of background noise\naffecting the wireless communication, the noise power $P_{N,\\mathit{dB}}$. This noise is calculated with the\nthermal noise and noise figure $P_{N,\\mathit{dB}} = \\mathit{thermalnoise}\\ +\\ \\mathit{noisefigure}$. For the\nReachi devices, we assume that the $\\mathit{thermalnoise} = -119.66$ \\acrshort{db} and the\n$\\mathit{noisefigure} = 4.2$ \\acrshort{db}. Next, we need to add the noise from interfering transmissions\nhappening at the same time. This is done by adding the sum of the \\gls{rssi} from interfering transmitters to\nthe noise power $P_{N,\\mathit{dB}}$, giving us the noise power with interference $P_{NI,\\mathit{dB}}$ on the\nspecific link between a receiving node $n_r$ and a transmitting node $n_t$ at a given time $t$. The set of\ncurrently transmitting and interfering nodes are denoted by $\\mathit{nodes}_i$ and the function\n$\\mathit{RSSI}_{\\mathit{dBm}}(n, m, t)$ denotes the \\gls{rssi}, in \\acrshort{dbm}, on the link between nodes\n$n$ and $m$ at time $t$. We assume the \\gls{rssi} on a link to be reciprocated, which means that\n$\\mathit{RSSI}_{\\mathit{dBm}}(n, m, t) = \\mathit{RSSI}_{\\mathit{dBm}}(m, n, t)$.\n\n\\begin{eq}\\label{eq:noisepower}\n    P_{NI,\\mathit{dB}}(n_r, m_t, \\mathit{nodes}_i, t) = 10 \\log_{10}\\left( 10^{\\frac{P_{N,\\mathit{dB}}}{10}} +\n    \\mathlarger{\\sum}\\limits_{m \\in \\mathit{nodes}_i}  10^{\\frac{\\mathit{RSSI}_{\\mathit{dBm}}(n_r, m, t)}{10}}\n    \\right)\n\\end{eq}\n\nNote that as both the noise power $P_{N,\\mathit{dB}}$ and the \\gls{rssi} is in \\acrshort{db} (a logarithmic\nscale), we first need to convert the values to a linear scale, before we can compute the sum of the background\nnoise and the interfering noise, and then finally convert the value back into a logarithmic scale. \\medbreak\n\nWith the noise and interference power $P_{NI,\\mathit{dB}}$, we can compute the \\gls{snir},\n$\\gamma_{\\mathit{dB}}$. The \\gls{snir} compares the \\gls{rssi} of the signal to the level of the background\nnoise, as well as the noise from interfering transmitters. 
The ratio is computed by subtracting the noise power $P_{NI,\mathit{dB}}$ from the \gls{rssi} of the link.

\begin{eq}
    \gamma_{\mathit{dB}}(n_r, m_t, \mathit{nodes}_i, t) = \mathit{RSSI}_{\mathit{dBm}}(n_r, m_t, t) -
    P_{NI,\mathit{dB}}(n_r, m_t, \mathit{nodes}_i, t)
\end{eq}

We use the \gls{snir} $\gamma_{\mathit{dB}}$ to compute the bit error probability $P_b$:

\begin{eq}
    P_b(n_r, m_t, \mathit{nodes}_i, t) = \frac{1}{2}\mathit{erfc} \left( \sqrt{ \left(
    \frac{10^{\frac{\gamma_{\mathit{dB}}(n_r, m_t, \mathit{nodes}_i, t)}{10}}}{2} \right)} \right)
\end{eq}

Finally, with the bit error probability $P_b$, we can compute the packet error probability $P_p$. The packet error probability is the probability that we experience a bit error for any of the bits in the transmitted packet. The \textit{packetsize} parameter is in bytes.

\begin{eq}\label{eq:pep}
    P_p(n_r, m_t, \mathit{nodes}_i, \mathit{packetsize}, t) = 1 - \left( 1 - P_b(n_r, m_t, \mathit{nodes}_i,
    t) \right) ^{\mathit{packetsize} \cdot 8}
\end{eq}

\subsection{Example}
Assume that a node $n_2$ is currently listening, and that nodes $n_1$ and $n_3$ are transmitting at the same time $t$. What is the probability of a packet error on the link between nodes $n_1$ and $n_2$ with interference $\mathit{nodes}_i = \{ n_3 \}$? For this example we assume the \gls{rssi} for the link between $n_1$ and $n_2$ to be $\mathit{RSSI}_{\mathit{dBm}}(n_2, n_1, t) = -63.750$, the \gls{rssi} between $n_2$ and $n_3$ to be $\mathit{RSSI}_{\mathit{dBm}}(n_2, n_3, t) = -74.042$, and the size of the transmitted packet to be 20 bytes (which is the size of a header packet for the Reachi protocol). First, we compute the noise power 
$P_{\mathit{NI}, \mathit{dB}}$:
\begin{eq}
    P_{\mathit{NI},\mathit{dB}}(n_2, n_1, \mathit{nodes}_i, t) = 10 \log_{10}\left( 10^{\frac{(-119.66 + 4.2)}{10}} + 10^{\frac{-74.042}{10}} \right) = -74.041
\end{eq}

We subtract the noise power $P_{NI,\mathit{dB}}$ from the \gls{rssi} to get the \gls{snir} $\gamma_{\mathit{dB}}$:
\begin{eq}
    \gamma_{\mathit{dB}}(n_2, n_1, \mathit{nodes}_i, t) = -63.750 - (-74.041) = 10.291
\end{eq}

With this we can compute the bit error probability:
\begin{eq}
    P_b(n_2, n_1, \mathit{nodes}_i, t) = \frac{1}{2}\mathit{erfc} \left( \sqrt{ \left( \frac{10^{\frac{10.291}{10}}}{2} \right)} \right) = 0.000537
\end{eq}

Finally, we can compute the packet error probability using the bit error probability:
\begin{eq}
    P_p(n_2, n_1, \mathit{nodes}_i, 20, t) = 1 - \left( 1 - 0.000537 \right) ^{20 \cdot 8} = 0.082
\end{eq}

This gives us an 8.2 \% probability that we will experience a packet error during the transmission from $n_1$ to $n_2$ with interference from $n_3$, which is a significant difference relative to the same transmission with no interfering transmitters. To demonstrate the difference, \autoref{plot:radiomodel:no-interference} shows the probability for packet error with no interfering transmitters. According to the figure, an \gls{rssi} of approximately $-103.0$ \acrshort{dbm} would have a probability for packet error close to zero, and an \gls{rssi} of approximately $-110.0$ \acrshort{dbm} would have a probability for packet error very close to 100.0 \%.
Recall that for the link between $n_1$ and $n_2$ at time $t$, we had an \\gls{rssi} of\n$-63.750$ \\acrshort{dbm}, which is significantly better than the $-103.0$ \\acrshort{dbm} we see in\n\\autoref{plot:radiomodel:no-interference} for a close to zero probability, but with just a single interfering\ntransmitter, the probability for packet error increases to 8.2 \\%, which corresponds to what we see in\n\\autoref{plot:radiomodel:one-interference}, where an \\gls{rssi} of approximately $-62.0$ \\acrshort{dbm} is \nrequired for a probability for packet error close to zero, with a single interfering transmitter.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[t]{.9\\textwidth}\n        \\begin{tikzpicture}\n            \\begin{axis}[\n                    title={$\\mathit{packetsize} = 20$, no interference},\n                    height=8cm, width=0.95\\textwidth,\n                    ylabel={Probability (close to 0 is better)},\n                    xlabel={\\gls{rssi} in \\acrshort{dbm} (close to 0 is better)},\n                    axis lines*=left,\n                    enlargelimits=false,\n                    xtick={-112, -110, -108, -106, -104, -102},\n                    ymajorgrids=true,\n                    xmajorgrids=true,\n                    grid style=dashed,\n                ]\n    \n                \\addplot[very thick, solid, cyan!50!black] coordinates {(-112,0.9999876428126551)(-111.9,0.9999818533805854)(-111.8,0.9999735236168629)(-111.7,0.9999616235974554)(-111.6,0.9999447451717537)(-111.5,0.999920980149257)(-111.4,0.9998877661533198)(-111.3,0.999841694145744)(-111.2,0.9997782714374498)(-111.1,0.9996916341768787)(-111.0,0.9995742039839494)(-110.9,0.999416284715221)(-110.8,0.9992055974422288)(-110.7,0.9989267547148308)(-110.6,0.9985606791379564)(-110.5,0.9980839762217061)(-110.4,0.9974682772917275)(-110.3,0.9966795747809701)(-110.2,0.995677579153797)(-110.1,0.9944151335993024)(-110.0,0.9928377289131765)(-109.9,0.9908831660120753)(-109.8,0.9884814165838162)(-109.7,0.9855547327690343)(-109.6,0.9820180538712832)(-109.5,0.977779751436127)(-109.4,0.9727427433945262)(-109.3,0.966805993404494)(-109.2,0.9598663934709013)(-109.1,0.9518210071670542)(-109.0,0.9425696284608098)(-108.9,0.9320175886859855)(-108.8,0.9200787232079811)(-108.7,0.9066783914790133)(-108.6,0.8917564310398185)(-108.5,0.8752699189322612)(-108.4,0.8571956138868371)(-108.3,0.8375319599914338)(-108.2,0.8163005472222238)(-108.1,0.7935469455391158)(-108.0,0.769340856001552)(-107.9,0.7437755528934127)(-107.8,0.7169666232111204)(-107.7,0.6890500419860338)(-107.6,0.6601796517461043)(-107.5,0.6305241401480931)(-107.4,0.6002636299578492)(-107.3,0.5695860091061149)(-107.2,0.5386831349945436)(-107.1,0.5077470465831848)(-107.0,0.4769663105493268)(-106.9,0.4465226148609678)(-106.8,0.4165877056515772)(-106.7,0.3873207426887495)(-106.6,0.35886612643390514)(-106.5,0.3313518270787408)(-106.4,0.3048882242599883)(-106.3,0.27956744643051556)(-106.2,0.25546318188223016)(-106.1,0.23263091968407745)(-106.0,0.21110856856704174)(-105.9,0.19091739506360728)(-105.8,0.17206321880579134)(-105.7,0.15453780245757087)(-105.6,0.13832037585666812)(-105.5,0.12337923806141704)(-105.4,0.10967338661475745)(-105.3,0.09715412994657713)(-105.2,0.08576664597225181)(-105.1,0.07545145721250357)(-105.0,0.06614579982676905)(-104.9,0.057784870567569646)(-104.8,0.050302941646255594)(-104.7,0.04363433873657152)(-104.6,0.037714281778700065)(-104.5,0.032479591872253466)(-104.4,0.027869270393728107)(-104.3,0.023824958598936963)(-104.2,0.0202912874470752)(-
104.1,0.017216128297656508)(-104.0,0.014550755569625706)(-103.9,0.012249932504283412)(-103.8,0.010271930917551964)(-103.7,0.00857849534048738)(-103.6,0.007134761292770797)(-103.5,0.005909136667815562)(-103.4,0.004873154377310507)(-103.3,0.004001303543972212)(-103.2,0.003270845672304956)(-103.1,0.002661621391885305)(-103.0,0.00215585257009554)(-102.9,0.0017379438436174732)(-102.8,0.0013942869261214241)(-102.7,0.0011130704181615547)(-102.6,0.0008840972739669883)(-102.5,0.000698611570752572)(-102.4,0.0005491357749322079)(-102.3,0.0004293193051855271)(-102.2,0.00033379885272988297)(-102.1,0.0002580706275865374)(-102.0,0.0001983744574920454)};\n    \n            \\end{axis}\n        \\end{tikzpicture}\n       \\caption{Probability for packet error on a link with no interfering transmitters.}\\label{plot:radiomodel:no-interference}\n    \\end{subfigure}\n    \\begin{subfigure}[t]{.9\\textwidth}\n        \\begin{tikzpicture}\n            \\begin{axis}[\n                 title={$\\mathit{packetsize} = 20$, one interfering transmitter with \\gls{rssi} $= -74.042$},\n                    height=8cm, width=0.95\\textwidth,\n                    ylabel={Probability (close to 0 is better)},\n                    xlabel={\\gls{rssi} in \\acrshort{dbm} (close to 0 is better)},\n                    axis lines*=left,\n                    enlargelimits=false,\n                    xtick={-70, -68, -66, -64, -62, -60},\n                    ymajorgrids=true,\n                    xmajorgrids=true,\n                    grid style=dashed,\n                ]\n                \n                \\addplot[very thick, solid, cyan!50!black] coordinates\n                {(-70,0.9998946969201213)(-69.9,0.999851280175236)(-69.8,0.9997914288482014)(-69.7,0.9997095541645235)(-69.6,0.9995984200856673)(-69.5,0.9994487512548663)(-69.4,0.999248779311367)(-69.3,0.9989837280265265)(-69.2,0.9986352414918488)(-69.1,0.9981807643451096)(-69.0,0.997592888694345)(-68.9,0.9968386888236752)(-68.8,0.9958790716514758)(-68.7,0.9946681778438871)(-68.6,0.9931528749262344)(-68.5,0.9912723890419786)(-68.4,0.9889581254816121)(-68.3,0.9861337290346813)(-68.2,0.9827154329627786)(-68.1,0.9786127394483093)(-68.0,0.973729464463402)(-67.9,0.9679651661382458)(-67.8,0.9612169582453458)(-67.7,0.9533816900793851)(-67.6,0.9443584518788336)(-67.5,0.9340513423828241)(-67.4,0.9223724137305926)(-67.3,0.9092446903599295)(-67.2,0.894605144450226)(-67.1,0.8784075021721859)(-67.0,0.8606247535759753)(-66.9,0.841251244920013)(-66.8,0.8203042456090053)(-66.7,0.7978249020903394)(-66.6,0.7738785169287781)(-66.5,0.74855412125948)(-66.4,0.7219633410028916)(-66.3,0.6942385895436469)(-66.2,0.6655306499723479)(-66.1,0.6360057365895722)(-66.0,0.6058421466303818)(-65.9,0.5752266279786769)(-65.8,0.5443505964027997)(-65.7,0.5134063364756047)(-65.6,0.48258331425256296)(-65.5,0.4520647177980315)(-65.4,0.422024324919833)(-65.3,0.392623777345657)(-65.2,0.36401031848479315)(-65.1,0.33631502926640344)(-65.0,0.3096515746076566)(-64.9,0.2841154529163329)(-64.8,0.25978372350145207)(-64.7,0.2367151724125871)(-64.6,0.2149508663479005)(-64.5,0.19451503691409577)(-64.4,0.17541623353097346)(-64.3,0.1576486823314981)(-64.2,0.14119379008160704)(-64.1,0.1260217359331075)(-64.0,0.11209309920507793)(-63.9,0.09936047785157087)(-63.8,0.08777005934655302)(-63.7,0.07726311298604938)(-63.6,0.06777737973327802)(-63.5,0.05924834244541988)(-63.4,0.05161036543000741)(-63.3,0.04479769765769026)(-63.2,0.038745338542438446)(-63.1,0.03338976897292556)(-63.0,0.02866955326566123)(-62.9,0.024525819961693007)(-62.8,0
.02090263097826983)(-62.7,0.017747249636844042)(-62.6,0.015010318608602913)(-62.5,0.012645958934183965)(-62.4,0.010611801070009363)(-62.3,0.008868958463135512)(-62.2,0.007381953529277174)(-62.1,0.006118605158747625)(-62.0,0.00504988605383716)(-61.9,0.004149757343889449)(-61.8,0.0033949870645402225)(-61.7,0.0027649582456965582)(-61.6,0.002241471548095064)(-61.5,0.0018085466304298414)(-61.4,0.0014522257270703776)(-61.3,0.0011603822729023827)(-61.2,0.0009225368308002357)(-61.1,0.0007296820553771566)(-61.0,0.0005741179660628815)(-60.9,0.0004492983977033571)(-60.8,0.0003496891470554653)(-60.7,0.0002706380340258274)(-60.6,0.00020825684500058728)(-60.5,0.0001593149176219999)(-60.4,0.00012114395831475111)(-60.3,9.15535532985956e-05)(-60.2,6.875673521467007e-05)(-60.1,5.130489908311553e-05)(-60.0,3.803131895474543e-05)};\n    \n            \\end{axis}\n        \\end{tikzpicture}\n       \\caption{Probability for packet error on a link with a single interfering transmitter.}\\label{plot:radiomodel:one-interference}\n    \\end{subfigure}\n    \\caption{Probability for packet error with and without interfering transmitters.}\n    \\label{figure:pepegraphs}\n\\end{figure}\n", "meta": {"hexsha": "a7dc1260f538527f3275069df87c57eb4bafa309", "size": 14237, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/p10/sections/02-radiophysics/05-radiomodel.tex", "max_stars_repo_name": "Joklost/masters", "max_stars_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/p10/sections/02-radiophysics/05-radiomodel.tex", "max_issues_repo_name": "Joklost/masters", "max_issues_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/p10/sections/02-radiophysics/05-radiomodel.tex", "max_forks_repo_name": "Joklost/masters", "max_forks_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.1959459459, "max_line_length": 2872, "alphanum_fraction": 0.7143358854, "num_tokens": 5565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624840223698, "lm_q2_score": 0.727975460709318, "lm_q1q2_score": 0.5697590823860839}}
{"text": "% !Mode:: \"TeX:UTF-8\"\n% !TEX program  = xelatex\n\\section{Introduction}\nLPPL Model is used to describe a super-exponential (power law) accelerating behavior of asset price when just before the bubbles burst and crushes or rebounds appearing. The characteristic of abnormal increasing bubble price is a strong upward curvature with seriously unstable oscillation, which implies the price goes upward by growth rates at any time, instead of a constant growth rate which is the simple exponential increasing process\\cite{R:1}. This model is closer to the real market because the asset price in the real market will never grow with an ideal constant rate and this model accounts for the imitation and herding mechanisms and positive feedback among investors, so it can be a more accurate prediction indicator\\cite{R:2}.\n\\subsection{Definition of Bubbles}\nThe asset price will be over-estimated and become irrational high after a series of price increases because more and more investors follow the trend to buy this popular asset, as a result, the market price of the asset will beyond its real price. The more the asset's owner expects to earn from holding or selling that asset, the higher its price. When the price reaches a relatively highest position, there is no follower who is willing to hold the asset with a higher price, then the holders start to lost confidence and leave the asset gradually, as the result, the difference between the bid and call becomes too large to remain balance, finally the bubble burst. If this unbalance between selling and holding is broken, the bubble will disrupt and a crash follows which means the price will suddenly decay after reaching the highest price. Many factors may contribute to economic bubbles, such as speculation, easy credit, finance innovation.\n\n\n\\subsection{Positive Feedback}\nIn the stock market, any deviation of asset price should be ultimately traced back to the behavior of investors. it is the buying and selling decisions of investors that push prices up and down, the mechanisms of positive feedback on price is that if the recent observation of the asset price is moved up, then price will follow the tendency to keep on moving up, as a result, the asset price will be grow super-exponentially.\n\nThe positive feedback leads to speculative behaviors which means that the more and more trends followers emerge in the market along with large investments accumulated by Youssefmir (1998) \\cite{R:3}. It will cause the market value larger than its real value presenting a greater-exponential upward tendency. At the same time, the system will become increasingly sensible until the crash.\n\n\n\\subsection{Imitation and Herding Behavior}\nHerding effect theorem is the basis of positive feedback which is popular in many economics phenomena. It happens because investors are not able to attain the whole picture of a market, they can just make a buying or selling decisions by analyzing recent observations or following those expert and rational investors. The less information gained from the market, the larger the probability to follow others like a herd.\n\nThere was a mimetic contagion model of investors in the stock markets developed by Orlean (1989)\\cite{R:4}. The simplest version is called the Urn model. 
\n\n\n\\subsection{Formula}\nThe original formula of the LPPL model, due to Didier Sornette, is\n\\begin{equation}\\label{F:lPPL}\n\\ln p(t) = A - B(t_c - t)^m + C(t_c - t)^m\\cos(\\omega\\ln(t_c - t) + \\varphi)\n\\end{equation}\nThere are seven unknown parameters in the formula above, where $p(t)$ denotes the price of the asset at time $t$:\n\\begin{itemize}\n    \\item $A$ is the logarithm of the greater-than-exponential bubble price as it approaches $t_c$, $A > 0$.\n    \\item $B$ denotes whether the direction of the price change is an upward gain ($B < 0$) or a downward loss ($B > 0$).\n    \\item $C$ captures the degree of the log-periodic oscillation.\n    \\item $m$ indicates the extent of the acceleration, which depends on the proportion of rational investors and herding followers ($0 < m < 1$).\n    \\item $t$ is a certain time in the past, that is, any time before the bubble bursts.\n    \\item $\\omega$ is the angular frequency of the oscillation during the bubble; it exists when investors show consistency in an investment strategy.\n    \\item $\\varphi$ is the initial phase, $0 < \\varphi < 2\\pi$.\n\\end{itemize}\n\nA more detailed explanation of the seven parameters: in the following study, we suppose that the price remains finite even at $t_c$, that $B < 0$ for an upward accelerating price while $B > 0$ for a downward accelerating price, and that the acceleration exponent near the critical point satisfies $0 < m < 1$. If $m$ is too close to 0, it implies a relatively stationary bubble that accelerates suddenly when approaching the critical time; if $m$ is too close to 1, it presents no price-increasing tendency. As for $\\omega$, spectral analysis of the residuals by Press and Teukolsky (1994) gives a log-frequency of about 1.1, so the corresponding angular frequency is about 7 \\cite{R:5}. We combine these theoretical analyses and experimental results as presumptions and put them into the fitting model to ensure an explainable super-exponential acceleration of the bubble price.
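\n\nAs a quick numerical illustration of Equation~\\ref{F:lPPL}, the following Python sketch (our illustration; the parameter values are hand-picked and not fitted to any data) evaluates $\\ln p(t)$:\n\\begin{verbatim}\nimport numpy as np\n\ndef lppl_log_price(t, tc, A, B, C, m, omega, phi):\n    dt = tc - t  # time remaining until the critical time\n    return A - B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)\n\nt = np.linspace(0.0, 99.0, 100)\nlog_p = lppl_log_price(t, tc=100.0, A=7.0, B=-0.5,\n                       C=0.05, m=0.5, omega=7.0, phi=1.0)\n\\end{verbatim}\nThe sign of $B$ selects the direction of the accelerating trend, while the cosine term superimposes the log-periodic oscillation, which becomes faster and faster as $t$ approaches $t_c$.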
\n\n\n\\subsection{Application}\nMany systems present similar super-exponential growth regimes, which can be described mathematically by power-law growth: for example, planet formation in solar systems by runaway accretion of planetesimals, rupture and material failure, the nucleation of earthquakes modeled with slip-and-velocity laws, models of micro-organisms interacting through chemotaxis and aggregating to form fruiting bodies, the Euler rotating disk, and so on.\n\nThe LPPL model has provided quite good examples of predicting crisis risk, such as the oil price bubble of early July 2008 and the bubble burst on the Shanghai stock market in early August 2009\\cite{R:6}. It is therefore of great significance for forecasting, with a certain probability, the critical time of a bubble burst, and nowadays the LPPL model is commonly used in fields such as finance, seismology, and biochemistry.\n\n\n\\subsection{Conclusion}\nWe would like to use and revise the LPPL model by analyzing the possible factors and mechanisms behind its high-frequency oscillation and super-exponential behavior. In this sense, we might take other interactions or factors into account to revise the LPPL model in order to provide a more accurate and precise indicator of crisis risk.\n", "meta": {"hexsha": "e1f10ce9c365d71808b93423c110c64bdd71648f", "size": 6606, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MA216/sections/project-1/1.tex", "max_stars_repo_name": "iydon/homework", "max_stars_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-10-20T08:18:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-11T12:14:56.000Z", "max_issues_repo_path": "MA216/sections/project-1/1.tex", "max_issues_repo_name": "AllenYZB/homework", "max_issues_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2022-01-13T03:04:10.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:10.000Z", "max_forks_repo_path": "MA216/sections/project-1/1.tex", "max_forks_repo_name": "AllenYZB/homework", "max_forks_repo_head_hexsha": "65bd3372df197bec5e152a37cdc1f6f5432b7f3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-11-02T05:46:01.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-12T23:11:28.000Z", "avg_line_length": 137.625, "max_line_length": 947, "alphanum_fraction": 0.7942779292, "num_tokens": 1412, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5697590768384744}}
{"text": "\\documentclass[12pt, titlepage, oneside]{article}\n\n\\input{settings}\n\n\\begin{document}\n\t\n\t\\textbf{ELECENG 3TQ3}\\\\\n\t\\textbf{Elston A.}\n\t\n\\section{Lecture 6}\n\n\\subsection{Discrete Random Variables}\n\nWe assign number to the outcomes in sample space. Once we observe a number, we refer to that observation by a random variable.\nEach random variable has a range we denote as $S_X$.\n\n\n\nA probability model always begins with an experiment. Each random variable is directly related to this experiment. There are 3 types of relationships (how random variables can be related to your experiment):\n\n\\items\n\\item The random variable is the observation\n\\item The random variable is a function of the observation\n\\item The random variable is a function of another random variable\n\\eitems\n\n\\b{Random variable is the observation:} The experiment is to attach a photo detector to an optical fiber and count the number of photons arriving in a one microsecond time interval. Each observation is a random variable $X$. The range of $X$ is $S_X = \\{0,1,2,\\dots\\}$. In this case $S_X$ and the range $X$ is identical.\n\nA better example is if we wanted to determine the number of cars passing through a particular intersection at a particular time during a particular time interval range.\n\n\\b{Random variable is a function of an observation}: If we had a class of 30 students and wanted to determine the number of students who got an A- or better. Then the sample space would be the final grades, but the random variable is the number of students who achieved an A- or better. \n\n\\b{Random variable is  is a function of another random variable}: number of new daily COVID cases is a random variable $X$, we introduce a new random variable $Y$ as estimated cost to OHIP. Obviously $Y$ depends on $X$ hence $Y=f(X)$. \n\t\n\t\n\\b{Definition}: A random variable consists of an experiment with a probabilistic measure $P[\\cdot]$ defined on a sample space $S$ and a function that assigns a real number to each outcome in the sample space of the experiment.\n\n\\subsection{Discrete vs. Continuous}\n\nExample of a discrete random variable are the number of cars in a parking lot, or the number of lights in a room. Since we have a number for each value in the range $S_X$ as there cannot be $0.51323\\dots$ of a car, we understand these variables as discrete.\n\nExample of a continuous random variable is the amount of time it takes to travel from Square One to McMaster University. Since the range $S_X$ is time, time has an infinite range and is continuous. So if the range is continuous, then the random variable is continuous. \n\n\\b{Definition:} $X$ is a discrete random variable if the range of $X$ is a countable set.\n\n\\b{Definition:} $X$ is a finite random variable if the range is a finite set.\n\n\\subsection{Probability Mass Function}\n\nThe probability mass function (PMF) of the discrete random variable $X$\n\\begin{align}\nP_X(x) = P[X=x]\n\\end{align}\nThe above means, what is the probability the random variable $X$ takes the value $x$.\n\n\\subsection{Equivalent Properties to PMF}\n\n$\\forall x \\in S_X$, $P_X(x) \\geq 0$, means that the probability of any outcome cannot be a negative number.\n\n$\\sum_{x\\in S_X} P_X(x) = 1$, means the sum of all outcomes in the range of where the probability is define is equal to 1. 
\n\n\\subsection{Bernoulli Distribution}\n\n$X$ is a Bernoulli ($p$) random variable if the PMF of $X$ has the following form\n\n\\begin{align}\nP_X(x) = \\begin{cases} 1-p, & x = 0 \\\\ p, & x = 1\\\\ 0 & \\mbox{otherwise}  \\end{cases}\n\\end{align}\n\nThe parameter $p \\in (0,1)$ or simply $ 0 < p < 1$.\n\n\\subsection{Geometric Random Variables}\n\n$X$ is a geometric ($p$) random variable if it has the distribution \n\\begin{align}\nP_X(x) = \\begin{cases} p(1-p)^{(x-1)} & x = 1,2,\\dots \\\\ 0 & \\mbox{otherwise}\\end{cases}\n\\end{align}\n\nThe parameter $p$ is in the range $0 < p < 1$.\n\n\\subsection{Pascal Distribution}\nSuppose we conduct an experiment consisting of independent trials, each with success probability $p$, and let $X$ be the number of trials up to and including the $k$-th success. Then $X$ has a Pascal distribution, with PMF of the following form\n\n\\begin{align}\nP_X(x) = {x-1 \\choose k-1}p^k (1-p)^{x-k}, \\quad x = k, k+1, \\dots\n\\end{align}\n\nExample: find the PMF of the random variable described as the number of tests until we get $k$ failed tests. \n\n\n\\subsection{Discrete Uniform Random Variable}\n\nIf all the outcomes have the same probability, then the PMF can be defined as follows. \n\nThe distribution is a function of the limits of its range. \n\nLet $X$ be in the range $K \\leq X \\leq L$. In other words, $S_X = \\{K, \\dots, L\\}$. The PMF of a uniform $U(K,L)$ random variable is\n\\begin{align}\nP[X = x]  = \\frac{1}{L-K+1}, \\quad x \\in S_X\n\\end{align}\n\n\\subsection{Poisson Random Variable}\n\nModels phenomena occurring randomly in time. Each instant of occurrence is random, but there is a known average number of occurrences in a given time.\n\n\\begin{align}\nP_X(x) = \\begin{cases} \\frac{ \\alpha^x e^{-\\alpha}}{x!}, & x=0,1,2,\\dots \\\\ 0 & \\mbox{otherwise} \\end{cases}\n\\end{align}\n\nAn example would be the number of customers arriving at the Ministry of Transportation per time interval.
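\n\nAs a short illustration (a sketch of ours, not from the lecture), the Poisson PMF is easy to evaluate with the standard library:\n\\begin{verbatim}\nfrom math import exp, factorial\n\ndef poisson_pmf(x, alpha):\n    # alpha is the average number of arrivals per interval\n    return alpha**x * exp(-alpha) / factorial(x)\n\n# average of 4 customers per interval: P[X = x] for x = 0..5\nprint([round(poisson_pmf(x, 4.0), 4) for x in range(6)])\n\\end{verbatim}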
\n\n\\subsection{Cumulative Distribution Function}\n\nThe cumulative distribution function (CDF) is denoted as follows:\n$F_X(x) = P[X \\leq x]$. For any discrete random variable $X$ with range $S_X = \\{x_1, x_2, \\dots\\}$ satisfying $x_1 \\leq x_2 \\leq \\dots$ the following properties hold.\n\n\\items\n\\item $F(-\\infty) = 0$ and $F(\\infty) = 1$\n\\item For all $x' \\geq x$, $F(x') \\geq F(x)$\n\\item For $x_i \\in S_X$, letting $\\epsilon$ be an arbitrarily small positive number, $F(x_i) - F(x_i - \\epsilon) = P_X(x_i)$\n\\item $F_X(x) = F_X(x_i)$ for all $x$ such that $x_i \\leq x < x_{i+1}$\n\\eitems\n\\end{document}\n", "meta": {"hexsha": "128c33f618df70573ec6048cca929fa0a01dcc4f", "size": 5883, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture6/lec6.tex", "max_stars_repo_name": "elston-jja/EE3TQ3", "max_stars_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture6/lec6.tex", "max_issues_repo_name": "elston-jja/EE3TQ3", "max_issues_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture6/lec6.tex", "max_forks_repo_name": "elston-jja/EE3TQ3", "max_forks_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.6904761905, "max_line_length": 320, "alphanum_fraction": 0.7326194119, "num_tokens": 1632, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.7826624738835052, "lm_q1q2_score": 0.5697590750052393}}
{"text": "\\section{Unconstrained Problems}\n\\label{sec:a21unconstrained}\n\n\\disableornamentsfornextheadingtrue\n\\vspace{-5mm}\n\\subsection{Bivariate Unconstrained Problems}\n\\label{sec:a211bivariateUnconstrained}\n\n\\paragraph{Branin02}\n\nThe function originates from \\cite{Munteanu98Global}.\nCompared to \\cite{Munteanu98Global},\nwe changed the domain from $\\clint{-5, 10} \\times \\clint{0, 15}$\nto $\\clint{-5, 15}^2$,\nwhich seems more common in recent literature \\cite{Gavana13Global}.\nIn addition, \\cite{Munteanu98Global} uses the reciprocal function value,\nwhile searching for the maximum instead of the minimum.\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n    \\centertestfunline{\n      \\vspace*{-10mm}\n      \\testobjfunscaled{Bra02}(\\xscaled)\n      \\ceq \\paren*{\n        -\\,\\frac{51\\xse{1}^2}{40\\pi^2} +\n        \\frac{5\\xse{1}}{\\pi} + \\xse{2} - 6\n      }^2 +\n      \\paren*{10 - \\frac{5}{4\\pi}} \\cos(\\xse{1}) \\cos(\\xse{2})\n    }\\\\\n    \\centertestfunline{\n      {} + \\ln(\\xse{1}^2 + \\xse{2}^2 + 1) + 10,\\hspace*{35mm}\n    }\\notag\\\\\n    \\centertestfunline{\n      \\xscaled \\in \\clint{-5, 15}^2,\\quad\n      \\xoptscaled = (-3.196988424804, 12.52625788532),\n    }\\\\\n    \\centertestfunline{\n      \\testobjfunscaled{Bra02}(\\xoptscaled) = 5.558914403894\n    }\n  \\end{gather}\n\\end{subequations}\n\n\\pagebreak\n\n\\paragraph{GoldsteinPrice}\n\nThis function originates from \\cite{Goldstein71Descent},\nwhere the function was stated without bounds for the optimization domain.\nWe took the domain $\\clint{-2, 2}^2$ from \\cite{Gavana13Global}.\nIn addition, we scaled the function values by the factor $10^{-4}$\nfor the sake of plotting.\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n    \\centertestfunline{\n      \\testobjfunscaled{GoP}(\\xscaled)\n      \\ceq 10^{-4} \\cdot \\paren*{\n        1 + (\\xse{1} + \\xse{2} + 1)^2\n        (19 - 14\\xse{1} + 3\\xse{1}^2 - 14\\xse{2} +\n        6\\xse{1}\\xse{2} + 3\\xse{2}^2)\n      }\n    }\\\\\n    \\centertestfunline{\n      \\hspace*{25mm}\n      {} \\cdot\n      \\paren*{\n        30 + (2\\xse{1} - 3\\xse{2})^2\n        (18 - 32\\xse{1} + 12\\xse{1}^2 + 48\\xse{2} -\n        36\\xse{1}\\xse{2} + 27\\xse{2}^2)\n      },\n    }\\notag\\\\\n    \\centertestfunline{\n      \\xscaled \\in \\clint{-2, 2}^2,\\quad\n      \\xoptscaled = (0, -1),\n    }\\\\\n    \\centertestfunline{\n      \\testobjfunscaled{GoP}(\\xoptscaled) = 3 \\cdot 10^{-4}\n    }\n  \\end{gather}\n\\end{subequations}\n\n\\paragraph{Schwefel06}\n\nThis function originates from \\cite{Schwefel77Numerische}.\nWe changed the domain from $\\clint{-3, 5} \\times \\clint{-1, 7}$ to\n$\\clint{-6, 4}^2$, such that the optimum point is not located at the\ncenter of the optimization domain.\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n    \\centertestfunline{\n      \\testobjfunscaled{Sch06}(\\xscaled)\n      \\ceq \\max(\n        \\abs{\\xse{1} + 2\\xse{2} - 7},\n        \\abs{2\\xse{1} + \\xse{2} - 5}\n      ),\n    }\\\\\n    \\centertestfunline{\n      \\xscaled \\in \\clint{-6, 4}^2,\\quad\n      \\xoptscaled = (1, 3),\\quad\n      \\testobjfunscaled{Sch06}(\\xoptscaled) = 0\n    }\n  \\end{gather}\n\\end{subequations}\n\n\\subsection{\\texorpdfstring{$d$}{d}-Variate Unconstrained Problems}\n\\label{sec:a212dvariateUnconstrained}\n\n\\paragraph{Ackley}\n\nThe form of this function originates from \\cite{Ackley87Connectionist},\nwhere it was stated only for two variables.\nWe 
use the generalization to $d$ variables from \\cite{Gavana13Global}.\nThe optimization domain $\\clint{1.5, 6.5}^d$\nwas chosen such that it does not contain $\\*0$,\nwhere the gradient of the objective function becomes singular.\nOtherwise, the function would not be continuously differentiable,\nwhich would be a disadvantage for spline-based approaches\n(see Schwefel06 and Schwefel22 for functions with discontinuous derivatives).\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n   \\centertestfunline{\n      \\testobjfunscaled{Ack}(\\xscaled)\n      \\ceq -20 \\exp\\paren*{-\\frac{\\norm[2]{\\xscaled}}{5\\sqrt{d}}} -\n      \\exp\\paren*{\\frac{1}{d} \\sum_{t=1}^d \\cos(2\\pi \\xse{t})} +\n      20 + \\econst,\n    }\\\\\n    \\centertestfunline{\n      \\xscaled \\in \\clint{1.5, 6.5}^d,\\quad\n      \\xoptscaled = 1.974451986484 \\cdot \\*1,\\quad\n      \\testobjfunscaled{Ack}(\\xoptscaled) = 6.559645375628\n    }\n  \\end{gather}\n\\end{subequations}\n\n\\paragraph{Alpine02}\n\nThis function originates from \\cite{Clerc99Swarm}.\nWe changed the domain from $\\clint{0, 10}^d$ to $\\clint{2, 10}^d$\nto exclude the singularities of the derivative of the objective function\nat $\\xse{t} = 0$.\nIn addition, the author of \\cite{Clerc99Swarm} searched for maximal points.\nFor minimization, we changed the sign of the objective function.\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n    \\centertestfunline{\n      \\testobjfunscaled{Alp02}(\\xscaled)\n      \\ceq -\\prod_{t=1}^d \\sqrt{\\xse{t}} \\sin(\\xse{t}),\\qquad\n      \\xscaled \\in \\clint{2, 10}^d,\n    }\\\\\n    \\centertestfunline{\n      \\xoptscaled = 7.917052684666 \\cdot \\*1,\\quad\n      \\testobjfunscaled{Alp02}(\\xoptscaled) = -2.808131180070^d\n    }\n  \\end{gather}\n\\end{subequations}\n\n\\paragraph{Schwefel22}\n\nThis function originates from \\cite{Schwefel77Numerische}.\nWe changed the domain from $\\clint{-10, 10}^d$ to\n$\\clint{-3, 7}^d$, such that the optimum point is not located at the\ncenter of the optimization domain.\n\\vspace{-1.6em}\n\n\\begin{subequations}\n  \\begin{gather}\n    \\centertestfunline{\n      \\testobjfunscaled{Sch22}(\\xscaled)\n      \\ceq \\sum_{t=1}^d \\abs{\\xse{t}} +\n      \\prod_{t=1}^d \\abs{\\xse{t}},\\qquad\n      \\xscaled \\in \\clint{-3, 7}^d,\n    }\\\\\n    \\centertestfunline{\n      \\xoptscaled = \\*0,\\quad\n      \\testobjfunscaled{Sch22}(\\xoptscaled) = 0\n    }\n  \\end{gather}\n\\end{subequations}\n", "meta": {"hexsha": "759a0c80e399281fbe63d795ce172782761afa16", "size": 5599, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/document/a21unconstrained.tex", "max_stars_repo_name": "valentjn/thesis-arxiv", "max_stars_repo_head_hexsha": "ae30179e67cd6a7813385e140b609546fd65b897", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-10-12T09:28:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T21:07:17.000Z", "max_issues_repo_path": "tex/document/a21unconstrained.tex", "max_issues_repo_name": "valentjn/thesis", "max_issues_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/document/a21unconstrained.tex", "max_forks_repo_name": "valentjn/thesis", "max_forks_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_forks_repo_licenses": 
["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.9411764706, "max_line_length": 77, "alphanum_fraction": 0.6524379353, "num_tokens": 2048, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5697590676004421}}
{"text": "\\documentclass[9pt, a4paper, oneside]{amsart}\n\n\n\\usepackage{enumitem}\n\\usepackage{parskip}\n\\usepackage{fancyhdr}\n\\usepackage{color}\n\\usepackage{multicol}\n\\pagestyle{fancy}\n\n\\newlist{questions}{enumerate}{1}\n\\setlist[questions, 1]{label = \\bf Q.\\arabic*., itemsep=1em}\n\n\\lhead{\\scshape Apurva Nakade}\n\\rhead{\\scshape Honors Single Variable Calculus}\n\\renewcommand*{\\thepage}{\\small\\arabic{page}}\n\\title{Problem Set 04}\n\n\\begin{document}\n\n\\maketitle\n\\thispagestyle{fancy}\n\n\n\\section*{Part 1 - Limits and Continuity}\n\\begin{questions}\n\t\\item Proofs are not required for this problem. Describe your answers as precisely as possible and provide some explanation.\n\n\t\\begin{enumerate}\n\t\t\\item Find a function which is discontinuous at $1, \\frac{1}{2}, \\frac{1}{3},\\frac{1}{4}, \\ldots$ but continuous at all other points.\n\t\t\\item Find a function which is discontinuous at $1, \\frac{1}{2}, \\frac{1}{3},\\frac{1}{4}, \\ldots$ and 0 but continuous at all other points.\n\t\\end{enumerate}\n\t% If $ \\lim \\limits _ {x \\rightarrow a} f(x)$ exists but is $ \\neq f(a)$ then $ f$ is said to have a \\textbf{removable singularity} at $ a$.\n\t% \\begin{enumerate}[resume]\n\t% \t\\item Give an example of a function $ f$ with a removable discontinuity at 0.\n\t% \t\\item Given an example of a function $ f$ with a discontinuity at 0\twhich is not removable.\n\t% \\end{enumerate}\n\n\t\\item For a non-empty set $ A$ let $ -A$ denote the set of all $ -x$ for $ x$ in $ A$.\n\t\\begin{enumerate}\n\t\t\\item Prove that if $ x$ is a lower bound of $ A$ then $ -x$ is an upper bound of $ -A$.\n\t\t\\item Prove that if $ x$ is the greatest lower bound of $ A$ then $ -x$ is the lowest upper bound of $ -A$.\n\t\t\\item Similarly prove that $ \\sup (-A) = - \\inf A$.\n\t\\end{enumerate}\n\n\t\\item We say that a subset $ A$ of $ \\mathbb{R}$ is \\textbf{open} if for every number $ x$ in $A$ the interval $ (x-r,x+r)$ is a subset of $ A$ for some $ r>0$.\n\t\\begin{enumerate}\n\t\t\\item Determine, with proof, which of the sets $ [0,1]$, $(0,1)$, $(0,1]$, and $\\mathbb{R}$ are open?\n\t\t\t\\item Is the empty set open?\n\t\t\\end{enumerate}\n\t\tFor a set $ A$ and a function $ f: \\mathbb{R} \\rightarrow \\mathbb{R}$\tdefine $ f^{-1}(A)$ to be the set of real numbers $ x$ which are mapped to $ A$ by $ f$. The following is a very fundamental theorem about continuity, we'll verify it for a few functions.\n\t\t\\begin{quote}\n\t\t\t\\textbf{Theorem.} $ f$ is continuous iff $ f^{-1}(A)$ is open for every open set $ A$.\n\t\t\\end{quote}\n\t\t\\begin{enumerate}[resume]\n\t\t\t\\item For the function $ f(x) = x^2$ find the set $ f^{-1}(A)$ when $ A$ is one of the following sets: $ (0,1)$, $ (-1,0)$, $ (-1,1)$, $ \\mathbb{R}$, and the empty set.\n\t\t\t\\item For $ f(x) = x^2$ find a set $ A$ such that $ A$ is not open but $ f^{-1}(A)$ is open. 
How does this not contradict the \\textbf{Theorem}?\n\t\t\t\\item For the function\n\t\t\t      \\begin{align*}\n\t\t\t      \tg(x) = \\begin{cases} 1 & \\mbox{ if } x \\ge 0 \\\\ 0 &\\mbox{ otherwise }\\end{cases}\n\t\t\t      \\end{align*}\n\t\t\t      Find a set $ A$ such that $ A$ is open but $g^{-1}(A)$ is not open.\n\t\t\t\\item For the function\n\t\t\t      \\begin{align*}\n\t\t\t      \th(x) = \\begin{cases} 1/x & \\mbox{ if $x \\neq 0$} \\\\ 1 &\\mbox{ if $x = 0$} \\end{cases}\n\t\t\t      \\end{align*}\n\t\t\t      Find a set $ A$ such that $ A$ is open but $h^{-1}(A)$ is not open.\n\t\t\t\\item (Optional) Prove the \\textbf{Theorem}. The proof is easy but requires you to reason \\emph{very} precisely. It is good exercise to do if you have time.\n\t\t\\end{enumerate}\n\n\t\\end{questions}\n\n\n\n\n\n\n\n\n\n\t\\section*{Part 2 - Differentiation}\n\t\\begin{questions}[resume]\n\t\t\\item\n\t\t\\begin{enumerate}\n\t\t\t\\item Using the definition prove that if $ f(x)=1/x$ then $ f'(a) = -1/a^2$ for $ a \\neq 0$.\n\t\t\t\\item Prove that the tangent line to the graph of $ f$ at $ (a,f(a))$ does not intersect the graph of $ f$ at any other point.\n\t\t\\end{enumerate}\n\n\t\t\\item\n\t\t\\begin{enumerate}\n\t\t\t\\item Using the definition prove that if $ f(x)=1/x^2$ then $ f'(a) = -2/a^3$ for $ a \\neq 0$.\n\t\t\t\\item Prove that the tangent line to the graph of $ f$ at $ (a,f(a))$ intersects the graph of $ f$ at one other point.\n\t\t\\end{enumerate}\n\n\n\t\t\\item Suppose the function $ f$ is differentiable at $ a$ and let $ c , d \\neq 0$ be constants. Determine in terms of $ f'(a)$ the following limits.\n\t\t\\begin{enumerate}\n\t\t\t\\item \\begin{align*}\n\t\t\t      \\lim \\limits_{h \\rightarrow 0} \\dfrac{f(a+ch) - f(a)}{h}\n\t\t\t\\end{align*}\n\t\t\t\\item \\begin{align*}\n\t\t\t      \\lim \\limits_{h \\rightarrow 0} \\dfrac{f(a+ch) - f(a+dh)}{h}\n\t\t\t\\end{align*}\n\t\t\\end{enumerate}\n\n\n\n\n\t\t\\item\n\t\tA function $ f: \\mathbb{R} \\rightarrow \\mathbb{R}$ is said to be \\textbf{even} if $ f(-x) = f(x)$ for all $ x$. $ f$ is said to be an \\textbf{odd} function if $ f(-x) = -f(x)$ for all $ x$.\n\t\t\\begin{enumerate}\n\t\t\t\\item Show that if $ f$ is an even function then $ f'(-a) = -f'(a)$.\n\t\t\t      (Draw a picture.)\n\t\t\t\\item Show that if $ f$ is an odd function then $ f'(-a) = f'(a)$.\n\t\t\t      (Draw a picture.)\n\t\t\\end{enumerate}\n\n\t\t\\item Suppose that $ f(x) \\le g(x) \\le h(x)$ for all $ x$ and that $ \\lim \\limits_{x \\rightarrow a} f(x) = L = \\lim \\limits_{x \\rightarrow a} h(x)$. Prove that $ \\lim \\limits_{x \\rightarrow a} g(x) = L$. (This is usually called the \\textbf{Squeeze theorem}.)\n\n\t\t\\item\n\t\t\\begin{enumerate}\n\t\t\t\\item Suppose that $ f(a) = g(a) = h(a)$, and that $ f(x) \\le g(x) \\le h(x)$ for all $ x$ and that $ f'(a) = h'(a)$. Prove that $ g$ is differentiable at $ a$, and that $ f'(a) = g'(a) = h'(a)$.\n\n\t\t\t\\item Show that the conclusion does not follow if we omit the hypothesis $ f(a) = g(a) = h(a)$.\n\t\t\\end{enumerate}\n\n\t\\end{questions}\n\n\n\t\\newpage\n\t\\section*{Part 3 - Differentiation}\n\t\\begin{questions}[resume]\n\n\t\t\\item Using the definition find $ f'(a)$ for $ f(x) = \\sqrt{x}$ and $ x > 0$.\n\n\t\t\\item Find $ f'(x)$ if $ f(x) = |x|^3$. Find $ f''(x)$. 
Does $ f'''(x)$ exist for all $ x$?\n\n\t\t\\item\n\t\t\\begin{enumerate}\n\t\t\t\\item Prove that the following function is differentiable at $ 0$.\n\t\t\t      \\begin{align*}\n\t\t\t      \tf(x) = \\begin{cases}\n\t\t\t      \tx^2 & \\mbox{ if $ x$ is rational} \\\\\n\t\t\t      \t0   & \\mbox{ otherwise }\n\t\t\t      \t\\end{cases}\n\t\t\t      \\end{align*}\n\n\t\t\t\\item More generally prove that if a function $ f(x)$ satisfies $ |f(x)| < x^2$  then $ f(x) $ is differentiable at $ 0$.\n\t\t\\end{enumerate}\n\n\t\t\\item Suppose that $ f(a) = g(a)$ and that the left-hand derivative of $ f$ at $ a$ equals the right hand derivative of $ g$ at $ a$. Define \\begin{align*}\n\t\th(x) = \\begin{cases}\n\t\tf(x) & \\mbox{ if $ x \\le a$} \\\\\n\t\tg(x) & \\mbox{ if $ x > a$}\n\t\t\\end{cases}\n\t\t\\end{align*}\n\t\tProve that $ h$ is differentiable at $ a$.\n\n\n\n\t\\end{questions}\n\n\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "1baa48bf8663f197980a37540bfc3e950b14a3e0", "size": 6584, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2017/PSet04.tex", "max_stars_repo_name": "apurvnakade/jhu2017-18-honors-single-variable-calculus", "max_stars_repo_head_hexsha": "5b6cb3dde364990abe868ce155a697dce78302fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2017/PSet04.tex", "max_issues_repo_name": "apurvnakade/jhu2017-18-honors-single-variable-calculus", "max_issues_repo_head_hexsha": "5b6cb3dde364990abe868ce155a697dce78302fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2017/PSet04.tex", "max_forks_repo_name": "apurvnakade/jhu2017-18-honors-single-variable-calculus", "max_forks_repo_head_hexsha": "5b6cb3dde364990abe868ce155a697dce78302fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5029239766, "max_line_length": 260, "alphanum_fraction": 0.6140643985, "num_tokens": 2328, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6584175139669997, "lm_q2_score": 0.865224072151174, "lm_q1q2_score": 0.56967868261018}}
{"text": "%********************************************************************\n% Appendix\n%*******************************************************\n% If problems with the headers: get headings in appendix etc. right\n%\\markboth{\\spacedlowsmallcaps{Appendix}}{\\spacedlowsmallcaps{Appendix}}\n\\chapter{Appendix: Python implementation}\n\\label{ch:appendix:python:implementation}\n\n\\section{Engineering Python's modules}\n\nIn this appendix we talk about how we implemented a subset of the \\emph{Riordan\ngroup} theory using the Python programming language.  Although our code is pure\nPython code, we rest on the \\emph{Sage} \\cite{sage} mathematical framework for\nsymbolic computations\\marginpar{pure Python classes, delegating to \\emph{Sage}}.\n\nWe start from the abstract world of mathematics and, gradually, introduce the\narchitecture, principles, design choices and, finally, hierarchies of classes\nthat allow us to fullfil our implementation.\n\n\\subsection{From mathematics to code}\n\\label{subsection:python:appendix:from:math:to:code}\n\nMathematically, a function $colouring$ has been implemented. Let $\\mathcal{M}$\nbe a Riordan array and $m_{nk} \\in \\mathcal{M}$ its generic element, moreover\ndefine a set of $k$ colours $\\lbrace c_0, \\ldots, c_{k-1} \\rbrace$. So,\nfunction $colouring$ has the following type: \n\\begin{displaymath} \n    colouring : \\mathbb{N} \\times\\mathbb{N} \\rightarrow \\lbrace c_0, \\ldots, c_{k-1} \\rbrace\n\\end{displaymath}\nLet $p\\in\\mathbb{N}$, usually a prime number, the following definitions have been implemented:\n\\begin{itemize}\n    \\item colour $\\mathcal{M}$ associating to each remainder class \n        $r \\in \\lbrace[0],\\ldots,[p-1]\\rbrace$ a different colour $c_r$:\n        \\begin{displaymath}\n            colouring_{p}(n,k) = c_{r} \\leftrightarrow m_{nk} \\equiv_{p} r\n        \\end{displaymath}\n    \\item \\emph{bi}-colour $\\mathcal{M}$ with $c_0$ if $m_{nk}$ \n        is a multiple of $p$, otherwise use a colour $c_1$:\n        \\begin{displaymath}\n            colouring_{p}(n,k) = c_{0} \\leftrightarrow p | m_{nk}\n        \\end{displaymath}\n    \\item a less used one, \\emph{bi}-colour $\\mathcal{M}$ with $c_0$ \n        if $m_{nk}$ is a prime, otherwise use a colour $c_1$\n            \\marginpar{``prime'' definition of $colouring$ \n                seems to produce triangles coloured at random\\ldots}:\n        \\begin{displaymath}\n            colouring(n,k) = c_{0} \\leftrightarrow \n                \\nexists s\\in\\lbrace 2,\\ldots,m_{nk}-1\\rbrace.s|m_{nk} \n        \\end{displaymath}\n\\end{itemize}\n\nWe choose the \\emph{Python} language to implement such a function and we rest\non the \\emph{Sage} \\cite{sage} framework to do hard computational stuff, like\nTaylor expansion of a function or solving equations \\emph{symbolically}.\n\nOur implementation aims at a subset of the \\emph{Riordan group} theory,\nfocusing on simplicity and sound design principles. The interest of the author\nfor the \\emph{Smalltalk} programming language \\cite{Goldberg:1983:SLI:273} has\n    influenced the design of the code base, with a strong focus on the concept\n    of \\emph{messaging} between objects as the core programming paradigm. 
\n\nWe choose the \\emph{Python} language to implement such a function and we rest\non the \\emph{Sage} \\cite{sage} framework to do the hard computational work, like\nTaylor expansion of a function or solving equations \\emph{symbolically}.\n\nOur implementation aims at a subset of the \\emph{Riordan group} theory,\nfocusing on simplicity and sound design principles. The author's interest in\nthe \\emph{Smalltalk} programming language \\cite{Goldberg:1983:SLI:273} has\ninfluenced the design of the code base, with a strong focus on the concept\nof \\emph{messaging} between objects as the core programming paradigm. We\nbelieve that by keeping to such an approach, the classes are structured to be\neasily extensible, while the whole implementation remains limited to a few core\nconcepts. In addition to this orientation, we have integrated some particular\nfeatures supplied by the Python programming language in order to ease the\nexperience of playing with our objects.\n\nWe \\marginpar{all Python code and \\LaTeX\\ files of this work are under \\emph{Git}\nversion control} won't describe all the ``nitty-gritty'' details of the\nimplementation; instead, we focus on the architecture, pointing out\nthe concepts and some design principles and patterns used to implement it.\nFinally, we review a set of core classes, pointing the interested reader to the\ncomplete implementation, which is freely available online in the \\emph{Git}\nrepository \\cite{nocentini:git:repository}.\n\n\\subsection{Architecture}\n\nTalking about \\emph{an architecture} to describe the implementation of a single\n\\emph{functionality}, namely the $colouring$ function, may seem wasteful.\nHowever, \\emph{Smalltalkers} like to introduce a little language for each\nproblem they tackle, which is the same as saying that they introduce \\emph{an\narchitecture}, eventually. What is important is the spirit and the principle\nbehind their approach: \\emph{messaging}. As \\citeauthor{kay:on:messaging}\nstates in \\cite{kay:on:messaging}, what is really important is the relation and\nthe net of messages that a set of objects send to each other. Pairing this\nprinciple with the one shown in\n\\cite{friedman:felleisen:few:java:few:patterns}, about looking at behavior as\nan object, we have a solid track to follow.\n\nOur objects are instances of a few class hierarchies, many of which can be seen\n\\marginpar{dispatching and action objects} as \\emph{dispatching objects}, while\nthe actual behavior is performed by a very restricted set of them, which we call\n\\emph{action objects}. So, the whole $colouring$ functionality is taken apart\ninto a set of small hierarchies, such that each one of them captures a\n\\emph{context} that can occur in a study session; example contexts are the\ndesired type of partitioning, or whether the representation should have a plain\nlayout or a centered one\\ldots{} An object which is an instance of a little\nhierarchy doesn't know how to do hard symbolic computation or how to generate\na resulting \\TeX\\,file. Its only responsibility is to inform that \\emph{the\nrequested functionality should be performed under the context it represents}.\nThis is pretty similar to the approach advised by \\emph{functional\nprogramming}, where \\emph{pattern matching} over type constructors is a compact\nway to express the same computation as we do with a net of \\emph{messages}.\n\nOnly a small set of \\emph{action objects}\\marginpar{an action object is the\nrecipient of a chain of dispatching messages}, also known as \\emph{visitors} if\nwe look at them from the design patterns \\cite{Gamma:1995:DPE:186897} point of\nview, code the actual implementation, because only they know the \\emph{entire}\ncontext. 
With this approach, if we would like to implement a new\n\\emph{functionality}, we have to \\emph{create} a new \\emph{action object}; on\nthe other hand, if we would like to augment the context with new information,\nwe are required to \\emph{refine} methods already implemented in \\emph{action\nobjects}, without updating the remaining classes.\n\nFinally, this approach leads quite naturally to\\marginpar{favor\n\\emph{composition} over \\emph{inheritance}} favoring \\emph{composition} over\n\\emph{inheritance}. Thanks to the \\emph{dynamic} typing of Python, which does not\nconstrain us to \\emph{explicitly} assign a type to every expression, the\nproposed methodology has let us build a code base that is highly polymorphic,\nwithout constraining objects in rigid inheritance structures.\n\n\\subsection{Core classes and hierarchies}\n\n\\subsubsection{\\emph{RiordanArray} class and \\emph{Characterization} hierarchy}\n\nThe \\emph{RiordanArray} class is the first of a set of \\emph{interface} classes\nthat the client will work with.  Its responsibility is to denote the\n\\emph{mathematical} concept of a Riordan array but, kept alone, it couldn't do\nmany things. It can be \\emph{indexed} to supply a coefficient $m_{nk}$ it\ncontains, but asking for raw matrix expansion or doing group operations, such as\nmultiplication or inversion, depends on how the Riordan array has been\n\\emph{characterized}, and this context information is caught by the\n\\emph{Characterization} hierarchy\\marginpar{instances built from a class in the\nCharacterization hierarchy are dispatching objects}. We provide the following\ncharacterizations: \n\\begin{itemize}\n    \\item by \\emph{sequences}, that is using $A$ and $Z$ sequences, together with\n        coefficient $m_{00}$;\n    \\item by \\emph{matrices}, that is using $A$-matrices, where each $A$-matrix can \n        be expressed as a table of coefficients $\\lbrace a_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$ \n        or as a sequence of \\ac{gf}, one for each matrix's row;\n    \\item by \\emph{subgroup}, which essentially allows the client to use \\emph{analytic\n        functions} $d$ and $h$ to define the desired array. Up to now, only one\n        subgroup is implemented, the \\emph{vanilla} subgroup, which resembles the\n        formal way to define a Riordan array $\\mathcal{M}$ as a pair of functions $(d(t),h(t))$.\n        We will extend the set of \\emph{subgroups} toward the ones studied in the literature:\n        this will allow the client to play with our objects having a \\emph{mathematical}\n        experience; for example, if an array in the \\emph{Renewal} subgroup is desired,\n        the client should only define function $d$, since function $h$ is implicitly\n        defined as $h(t)=t\\,d(t)$.\n\\end{itemize}\n\nTherefore, with the introduction of the \\emph{characterization} context, denoted by\nan object instance of the previous classes, it is possible for a\n\\emph{RiordanArray} instance to \\emph{dispatch} messages on such a\ncharacterization that it cannot fulfil by itself, for example messages about\ndoing group operations, raw matrix expansion, or building a formal\n\\emph{mathematical} name, formatted as a pair of functions $(d(t),h(t))$.\n\n\\subsubsection{\\emph{TriangleColouring} class and auxiliary hierarchies}\n\nThe \\emph{TriangleColouring} class is the second \\emph{interface} class, which\ndenotes the desired colouring, namely a representation of a Riordan array\nunder a congruence relation $\\equiv_{p}$, for some modulus $p$. 
An object, an\ninstance of this class, builds context objects about the desired layout of the\ntriangle (\\emph{centered} or \\emph{left-aligned}), or about whether negative coefficients in\nan inverse array should be associated with a \\emph{lighter} colour variant\nwith respect to positive coefficients, when both belong to the same remainder class of\nthe $\\equiv_{p}$ relation. Such additional context objects are instances of\n\\emph{auxiliary hierarchies}, such as \\emph{TriangleShape} and\n\\emph{NegativeChoices}.\n\nThe Python language allows us to \\emph{override} some \\emph{special} methods that the\nvirtual machine calls on an object when it is used in a particular\n\\emph{syntactic context}. \\marginpar{\\mintinline{python}|__getitem__(index)| is\nanother message that the vm sends to an object $obj$ when it is indexed, as in\n$obj[n,k]$} One such method is \\mintinline{python}|__call__|, which is sent\nto an object \\mintinline{python}|obj| when it is used like a function, for\nexample: \\mint{python}|obj(array=anArray, partitioning=aPartitioning)| for some\nobjects \\mintinline{python}|anArray| and \\mintinline{python}|aPartitioning|. We\nhave \\emph{overridden} such a method in order to provide a more \\emph{fluent}\ninterface: recall that, eventually, our interest is to implement the function\n$colouring$, so it is nice to be able to use the object that denotes such\nfunctionality \\emph{like} a function, as we are able to do in \\emph{mathematics}. \n\n\\subsubsection{\\emph{Partitioning} hierarchy}\n\nThe \\emph{Partitioning} hierarchy denotes how the \\emph{modular transformation} is\napplied to an array $\\mathcal{M}$. It is composed of three classes, one for\neach implementation of the \\emph{math} function $colouring$ described in\n\\autoref{subsection:python:appendix:from:math:to:code}.\n\n\\subsubsection{\\emph{ActionUsingSubgroupCharacterization} hierarchy}\n\nThe previous classes, hierarchies and auxiliaries build instances that belong to a\nset of objects that we call \\emph{dispatching objects}, since their responsibility\nis to dispatch messages, where each such object augments the message with the\ncontext information it is responsible for. \n\nClasses in the \\emph{ActionUsingSubgroupCharacterization} hierarchy, on the other hand,\nbuild instances that belong to a set of objects that we call \\emph{action objects},\nsince they \\emph{can} perform a functionality because they \\emph{know} the entire context,\nhaving collected all the context information \\emph{dispatched} from the remaining objects.\nThis hierarchy contains the following classes, each one denoting a \\emph{major}\nfunctionality that is necessary to fulfil the implementation of the $colouring$ \nfunction\\marginpar{action objects}:\n\\begin{itemize}\n    \\item the \\emph{ExpansionActionUsingSubgroupCharacterization} class denotes\n        the operation of doing a raw matrix expansion of an array $\\mathcal{M}$; \n    \\item the \\emph{FormalDefLatexCodeUsingSubgroupCharacterization} class denotes\n        the operation of producing a formal description of the array, \n        formatted as $\\mathcal{M}=(d(t),h(t))$;\n    \\item the \\emph{BuildInverseActionUsingSubgroupCharacterization} class denotes\n        the operation of \\emph{inverting} an array $\\mathcal{M}$ to produce \n        the array $\\mathcal{M}^{-1}$.\n\\end{itemize}\n\nAll the above classes strictly depend on how the Riordan array has been\ncharacterized, therefore they code a different implementation for each such\ncharacterization. 
Looking at them from this point of view, it is easy to\nrecognize that the set of \\emph{action objects} is actually a set of\n\\emph{visitors}. It is interesting how the\\marginpar{Visitor pattern}\n\\emph{Visitor pattern} can be generalized to a net of \\emph{dispatching\nmessages}, without limiting its application to hierarchies denoting a recursive\ndomain.\n\n\n\\section{A \\emph{Sage} study file}\n\nIn this section we show a typical session where we study the \\emph{Motzkin}\narray $\\mathcal{M}$.  First we report the complete script that we have used to\ngenerate the raw matrix expansion and coloured representations of\n$\\mathcal{M}_{\\equiv_{p}}$, for some prime $p$; afterwards we take it apart,\ndescribing the meaning of each chunk of code.\n\n\\subsection{The complete \\emph{study} script for Motzkin array $\\mathcal{M}$...}\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\n\\subsection{...and taking it apart, bite after bite}\n\nHaving seen the complete session script for array $\\mathcal{M}$, in the\nfollowing paragraphs we take it apart, describing each little chunk in greater\ndetail.\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=1,\n    lastline=16\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n    \nIn this chunk we do some boring \\emph{imports} and prepare a variable\n\\mintinline{python}|tex_parent_prefix| in order to localize the destination\npath for the files that will be generated. Finally, variable \\mintinline{python}|t|\nis defined in order to denote a \\emph{math} indeterminate variable\n$t$\\marginpar{\\mintinline{python}|t| denotes a \\emph{Python object}, $t$\ndenotes a \\emph{math variable}}.\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=19,\n    lastline=25\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\nHere we define the desired partitioning, in this case we want\n$\\mathcal{M}_{\\stackrel{\\circ}{\\equiv_{7}}}$, and the colouring object. 
It\nkeeps track of the number of rows, namely $127$; whether the representation should\nbe centered or aligned to the left, as a raw matrix expansion is; and,\nfinally, whether negative coefficients in the inverse array $\\mathcal{M}^{-1}$\nshould get lighter colour variants than the ones that positive coefficients in\n$\\mathcal{M}$ get, under the congruence relation $\\equiv_{7}$.\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=32,\n    lastline=45\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\nHere we define function $d$ and function $h$ of $\\mathcal{M}$ and build an\nobject that denotes the \\emph{mathematical} concept of a Riordan array, composed\nof the following slots\\marginpar{we provide strong integration with \\LaTeX}:\n\\begin{itemize}\n    \\item a \\emph{subgroup}; in this case we use the \\emph{plain vanilla} definition by\n        providing functions $d$ and $h$ directly;\n    \\item a \\emph{name}, which will be used to build unique filenames for \n        the generated \\TeX\\,files;\n    \\item a \\emph{mathematical name}, like the ones used in this document, such as $\\mathcal{M}$ and so on,\n        used within \\emph{caption} environments, for example;\n    \\item an \\emph{additional text} in order to augment the array's description; it will be attached\n        to \\emph{caption} environments again. In this text \\emph{any} \\LaTeX\\, code can be used.\n\\end{itemize}\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=48,\n    lastline=56\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\nHere some computation actually happens: the application of the\n\\mintinline{python}|colouring| object expands $\\mathcal{M}$ as a matrix\n$\\lbrace m_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$, up to row $126$ included, starting\nfrom row $0$. Finally, it writes a bunch of files containing the results, split\naccording to their content.\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=58,\n    lastline=62\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\nHere we tackle the settings for computing\n$\\mathcal{M}^{-1}$\\marginpar{computing $\\mathcal{M}^{-1}$ by providing a helper\nlambda expression}. Mathematically, it is necessary to compute the\n\\emph{compositional inverse} $\\hat{h}$ of function $h$, and this is the main\ndifficulty to solve, for \\emph{Sage} \\cite{sage} too.  The \\emph{Sage}\nframework is pretty powerful; for instance, it can compute function\n$\\hat{h}_{\\mathcal{P}}$ for the Pascal array $\\mathcal{P}$, but for the Motzkin\narray things get complicated because a radical appears in function $h$. \n\nFor this reason, the \\emph{message} \\mintinline{python}|inverse| sent to the object\ndenoting $\\mathcal{M}$ is a curious one, because it asks for a \\emph{help}\nfunction which will allow \\emph{Sage} to solve an equation, yielding function\n$\\hat{h}$ eventually. Formally, to find $\\hat{h}$ we could solve a\n\\emph{functional} equation $h(\\hat{h}(t))=t$ because $h$ is known.  
Otherwise,\nwe could use the \\emph{change of variable} trick, yielding $\\hat{h}(y)$ if we\ncan solve $h(t)=y$: \n\\begin{displaymath}\n    \\left.\\left[\\hat{h}(y)=t\\right|y=h(t)\\right]\n\\end{displaymath}\nso, in the end, we have to solve another equation with respect to \nvariable $t$, obtaining it as a function of variable $y$.\n\nThis is exactly the aim of the helper function. It receives three arguments, \nnamely variable $y$, variable $t$ and an equation over function $h$ such that:\n\\begin{displaymath}\n    y=h(t)\n\\end{displaymath}\nwhich in the case of array $\\mathcal{M}$ rewrites as:\n\\begin{displaymath}\n    y=\\frac{1-t-\\sqrt{1-2t-3t^{2}}}{2t}\n\\end{displaymath}\nOur help consists of manipulating the equation in order to remove the radical. Therefore\nwe can do the following steps, in the given order:\n\\begin{displaymath}\n    \\begin{split}\n        y&=\\frac{1-t-\\sqrt{1-2t-3t^{2}}}{2t} &\\quad\\text{given}\\\\\n        2ty &=1-t-\\sqrt{1-2t-3t^{2}} &\\quad\\text{multiply by } 2t\\\\\n        -2ty &=-1+t+\\sqrt{1-2t-3t^{2}} &\\quad\\text{multiply by } -1\\\\\n        1-t-2ty &=\\sqrt{1-2t-3t^{2}} &\\quad\\text{add } -(t-1)\\\\\n        \\left(1-t-2ty\\right)^{2} &=1-2t-3t^{2} &\\quad\\text{square}\\\\\n    \\end{split}\n\\end{displaymath}\nwhich is exactly the meaning of the supplied lambda: \\mint{python}|lambda y, t,\nequation: (equation*2*t*(-1) -(t-1))**2| \n\n\\emph{Sage} is now able to solve the last equation with respect to variable $t$, and it\ncan find the desired function $\\hat{h}$ as:\n\\begin{displaymath}\n    \\left.\\left[\\hat{h}(y)=\\frac{y}{1+y+y^{2}}\\right|\n        y=\\frac{1-t-\\sqrt{1-2t-3t^{2}}}{2t} \\right]\n\\end{displaymath}\n\nTo be truly precise,\\marginpar{the supplied helper works as a certificate as\nwell} the above \\emph{helper} lambda expression is both a help to \\emph{Sage}\nto build the inverse of an array and a \\emph{certificate} for the\ncomputed compositional inverse function $\\hat{h}$. Under the hood, it is also used\nat the end of the search for $\\hat{h}$, to check that it really satisfies\n$\\hat{h}(h(t))=t$.
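\n\nThe same resolution can be reproduced interactively; the following snippet is an illustration of ours, not part of the study script, showing that once the radical has been removed \\emph{Sage} solves the squared equation directly:\n\\begin{minted}{python}\nvar('t y')\nlhs = (1 - t - 2*t*y)^2  # the squared, radical-free equation\nrhs = 1 - 2*t - 3*t^2\n# returns the spurious t == 0 together with t == y/(y^2 + y + 1)\nprint(solve(lhs == rhs, t))\n\\end{minted}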
\n\n\\inputminted[\n    mathescape,\n    linenos,\n    %numbersep=5pt,\n    %gobble=2,\n    frame=lines,\n    framesep=2mm,\n    breaklines,\n    firstline=65,\n    lastline=74\n    ]{python}{../sympy/motzkin/motzkin-for-document-inclusion.sage}\n\nFinally, as before, we actually do the computation that expands\n$\\mathcal{M}^{-1}$ as a matrix of coefficients and produces the desired\n\\LaTeX\\,files, composed of the code necessary to build the modular representation\n$\\left(\\mathcal{M}^{-1}\\right)_{\\equiv_{7}}$.\n\\\\\\\\\nThis is the most complete example our simple implementation allows us to do and,\nalthough simple, it generates \\emph{objects of beauty}.\n", "meta": {"hexsha": "36ac60c4c9b53af702b29202a0fb47f3f523c305", "size": 21012, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/appendix-sage-sessions.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/appendix-sage-sessions.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/appendix-sage-sessions.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.2084309133, "max_line_length": 105, "alphanum_fraction": 0.7395773844, "num_tokens": 5735, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837581726991, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5696650068281958}}
{"text": "\\chapter{Expression Manipulation} \\label{chap:manipulation}\n\nThe basic description of the types in Chap.~\\ref{chap:basics} allows for defining expressions and evaluating them.\nHowever, for most algorithms expressions need to be manipulated in one way or another, which is the topic of this chapter.\nUnless otherwise noted, all manipulations considerered in the following can be used for both compiletime and runtime expressions using the same interface.\n\n\\TIP{Manipulation functionality resides in folder \\lstinline|viennamath/manipulation/|.\n The respective header files are not included automatically with \\lstinline|viennamath/expression.hpp| and need to be included as required.}\n\n \\section{Evaluation}\nAll {\\ViennaMath} expressions can be evaluated to a floating point number using the parenthesis operator. \nA vector of values needs to be passed for the evaluation. In the special case that only the first entry of a vector is required for evaluation, it suffices to\npass the value directly.\n%Depending on the number of variables in the expression, either a scalar or a vector needs to be passed for evaluation. For conv\n\nUsing \\lstinline|operator()|, however, is possibly not an option for a generic interface with non-ViennaMath types.\nFor this reason, \\lstinline|viennamath::eval()| provides a generic evaluation interface. \nThe first argument is the expression to be evaluated, and the second argument is the tuple with the values to be substituted for the variables.\nFor example, the expression $x^2$ is defined and evaluated at $x=2$ as follows:\n\\begin{lstlisting}\n ct_constant<2> c2;\n ct_variable<0> x;\n eval( x*x, 2.0 ); //     runtime evaluation\n eval( x*x,  c2 ); // compiletime evaluation\n\\end{lstlisting}\nNote that compiletime evaluation is only performed when both arguments are fully compiletime compatible.\nAs soon as one part of the expression cannot be handled at compile time, a fallback to runtime evaluation is carried out.\nA hybrid evaluation in such cases is postponed to future releases of {\\ViennaMath}.\n\nA vector of values needs to be passed as second argument, if a variable formally refers to any other than the first coordinate in a vector.\nLet us consider several use-cases of \\lstinline|eval()| consisting of various combinations of compiletime and runtime expressions:\n\\begin{lstlisting}\n ct_constant<3> c3;\n ct_variable<1> y;\n expr f = x*x + y;     // conversion to runtime expression\n eval( f, 2.0 );       // runtime exception: insufficient values provided\n eval( x*x + y, c2 );  // compilation error: insufficient values provided\n eval( x*x + y,\n       make_vector(2.0, 3.0) ); //    runtime evaluation\n eval( x*x + y,\n       make_vector(c2, c3) );  // compiletime evaluation\n eval( 2.0, 0.0 );             // possible thanks to overloading\n\\end{lstlisting}\nSince the runtime wrapper \\lstinline|expr| hides information from the compiler, an exception is thrown at runtime if insufficient values are provided for evaluation.\nFor a full compiletime evaluation, insufficient parameters are already detected at an earlier stage.\nThe helper function \\lstinline|make_vector() | generates a suitable vector type both for the runtime and the compiletime case.\nInstead of using \\lstinline|make_vector()|, a STL vector (\\lstinline|std::vector<double>|) or any compatible type can also be passed.\nAlso note that the last line in the code snippet shows the benefit of using \\lstinline|eval()| instead of \\lstinline|operator()|:\nScalars can also 
be 'evaluated' and are thus reinterpreted as constant functions.\n\n \\section{Substitution} \\label{sec:substitute}\nFormally, the evaluation of an expression can be seen as a substitution of the variables with values.\nA generalization is to replace arbitrary expressions with another expression in third expression.\nThis is accomplished by the function \\lstinline|substitute()| defined in \\lstinline|viennamath/manipulation/substitute.hpp|:\n\\begin{lstlisting}\n expr f = (x + y) * (x - y);\n substitute(x, y, f);         // returns (y + y) * (y - y) = 0\n substitute(x, 1, f);         // returns (1 + y) * (1 - y)\n substitute(x + y, x - y, f); // returns (x - y) * (x - y)\n\\end{lstlisting}\nAs with \\lstinline|eval|, substitutions are carried out at compiletime if all parameters are compiletime expressions.\n\n\n \\section{Expansion}\nIt is often desired to expand an expression given as a product of other terms. For example, instead of $2(x+y)$ one may want to have $2x - 2y$.\nSuch a functionality is provided by the function \\lstinline|expand()| defined in \\lstinline|viennamath/manipulation/expand.hpp|:\n\\begin{lstlisting}\n expand( c2 * (x+y) );\n\\end{lstlisting}\nwhere \\lstinline|c2| is a compiletime constant and \\lstinline|x|, \\lstinline|y| are compiletime variables.\nNote that {\\ViennaMathversion} does not support the expansion of runtime expressions yet.\n\n\\NOTE{ {\\ViennaMathversion} supports compiletime expansion only. }\n\n\n\n \\section{Simplification}\nIn the course of manipulating expressions, simple operations such as $x+0$ or $x/1$ may appear.\nHowever, such terms constitute unnecessary overhead for later evaluations, thus it is desirable to have these operations dropped.\nSuch a simplification of the expression can be achieved with the function \\lstinline|simplify()| defined in \\lstinline|viennamath/manipulation/simplify.hpp|:\n\\begin{lstlisting}\n simplify( x + 1.0 * y - 0.0 );  // returns x + y\n simplify( x * (2.0 + 3.0) + (y * 0) / (x * 1) ); // returns 5x\n\\end{lstlisting}\n\\lstinline|simplify()| is available both for runtime and compiletime manipulation.\n\n\n \\section{Differentiation}\nThe differentiation of an expression with respect to one or several variables is central to many algorithms, among which the Newton scheme is presumably the\nmost widely used. Differentiation routines in {\\ViennaMath} reside in \\lstinline|viennamath/manipulation/diff.hpp| and are used in a canonical way by passing\nthe expression to be differentiated as first argument and the differentiation variable as second argument:\n\\begin{lstlisting}\n diff( x + y, x );              // returns 1\n diff( (2.0 - x)*(3.0 + y), y); // returns 2.0 - x\n\\end{lstlisting}\nAs usually, compiletime expressions are differentiated at compiletime, while runtime expressions are differentiated at runtime.\nAn exemplary Newton-solver demonstrating the use of differentiation routines can be found in \\lstinline|examples/newton_solve.cpp|.\n\n\n \\section{Integration}\nThe integration of an expression over an interval can be accomplished in two ways: The first option is by analytical integration, provided that an\nantiderivative of the integrand can be found easily. 
The second option is by numerical quadrature using a suitable quadrature rule.\n\n %% Compiletime: Analytic\nIn {\\ViennaMathversion}, analytical integration is available for compiletime types with polynomial integrands only.\nThe file \\lstinline|viennamath/manipulation/integrate.hpp| provides the function \\lstinline|integrate()|, which takes the integration interval\nas first argument, the integrand as second argument and the integration variable as third argument. For example, the integral\n\\begin{align}\n \\int_0^1 x^2 \\: \\mathrm{d} x\n\\end{align}\nis computed analytically at compile time using the lines\n\\begin{lstlisting}\n integrate( make_interval(c0, c1),\n            x * x,\n            x);              //returns the expression 1/3\n\\end{lstlisting}\nwhere \\lstinline|c0| and \\lstinline|c1| denote the compiletime constants $0$ and $1$ and are passed to the helper function \\lstinline|make_interval()|\nto generate a suitable compiletime interval with the provided lower and upper bound. Note that analytic integration can also be nested and use polynomial lower\nand upper bounds. For example, integration of $x^2$ over the unit triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$ is achieved via\n\\begin{lstlisting}\n integrate( make_interval(c0, c1),\n            integrate( make_interval(c0, c1 - x),\n                       x * x,\n                       y),\n            x);              //returns 1/12\n\\end{lstlisting}\n\n\n\\NOTE{ Analytic integration in {\\ViennaMathversion} is available for polynomial integrands at compiletime only. }\n\n\n\n %% Runtime: Quadrature\n\\begin{table}\n\\centering\n\\begin{tabular}{|l|l|l|l|}\n\\hline\nName & {\\ViennaMath} Type   & Shortcut & Accuracy \\\\\n\\hline\n$1$-point Gauss   & \\lstinline|rt_gauss_quad_1| & \\lstinline|gauss_quad_1| & $1$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Overview of numerical quadrature rules in {\\ViennaMath}. \\label{tab:quadrature}}\n\\end{table}\n\nIn order to compute an integral numerically using a quadrature rule, the respective rule from Tab.~\\ref{tab:quadrature} needs to be instantiated first.\nFor a $1$-point Gauss rule, this is accomplished by\n\\begin{lstlisting}\n rt_numerical_quadrature<InterfaceType> \n    integrator(new rt_gauss_quad_1<InterfaceType>());\n\\end{lstlisting}\nfor a suitable runtime interface type. If the default interface type is to be used, the shortcut types can be used for convenience:\n\\begin{lstlisting}\n numerical_quadrature integrator(new gauss_quad_1());\n\\end{lstlisting}\nTo carry out the numerical quadrature, two options exist: The first is to pass the integration interval, the integrand, and\nthe integration variable as separate arguments to \\lstinline|operator()| of the \\lstinline|integrator| object:\n\\begin{lstlisting}\n integrator( make_interval(0.0, 1.0),\n             x * x,\n             x);             // returns 0.25 (1-point Gauss approximation of 1/3)\n\\end{lstlisting}\nThe second option for numerical quadrature is to encode the integral directly in the expression:\n\\begin{lstlisting}\n  expr my_integral = integral( make_interval(0, 1), x * x, x );\n\\end{lstlisting}\nwhich encodes $\\int_0^1 x^2 \\: \\mathrm{d} x$ directly in an expression. For numerical quadrature, only the encoded form then needs to be passed to the quadrature rule:\n\\begin{lstlisting}\n integrator(my_integral);    // returns the value 0.25 (1-point Gauss)\n\\end{lstlisting}\n\n\n\n \\section{Extract Coefficient}\nGiven a polynomial $p(x,y) = 1 + x + y + 2xy$ it can be of interest to extract individual parts of the polynomial.\nFor example, one may wish to extract the coefficient of $x$, which is $(1+2y)$. In such a case, \nthe function \\lstinline|coefficient()| defined in \\lstinline|viennamath/manipulation/coefficient.hpp| can be used.\nThe first parameter is the variable or expression for which the coefficient should be returned, and the second argument is the expression from which the\ncoefficient is to be extracted:\n\\begin{lstlisting}\n coefficient(x, c1 + x + y + c2*x*y);  //returns 1+2y\n\\end{lstlisting}\nNote that higher-order terms in the variable are also returned. For example, the coefficient of $x$ in $x+x^2$ is obtained as $(1+x)$.\n\n\\NOTE{ \\lstinline|coefficient()| is in {\\ViennaMathversion} available for compiletime types only. }\n\n\n \\section{Drop Dependent Terms}\nIn order to drop all terms in an expression which depend on a certain expression type (not necessarily a variable), the convenience function\n\\lstinline|drop_dependent_terms()| from \\lstinline|viennamath/manipulation/drop_dependent_terms.hpp| can be used. As the name suggests, all terms with a\ndependency on the expression passed as the first parameter are dropped in the expression passed as the second argument.\nFor example, all terms depending on $x$ in $1+x+y+2xy$ are dropped using the line\n\\begin{lstlisting}\n drop_dependent_terms(x, c1 + x + y + c2*x*y);  //returns 1+y\n\\end{lstlisting}\nin order to obtain $1+y$.\n\n\n\n\\NOTE{ \\lstinline|drop_dependent_terms()| is in {\\ViennaMathversion} available for compiletime types only. }\n\n\n", "meta": {"hexsha": "74fa0d86d74d580da14502bc8e657fa7fc49c0ad", "size": 11541, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/manual/manipulation.tex", "max_stars_repo_name": "viennamath/viennamath-dev", "max_stars_repo_head_hexsha": "e238b40f52b8c3fe7de773625439d5de8d96ad39", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2015-09-13T03:51:48.000Z", "max_stars_repo_stars_event_max_datetime": "2017-03-20T10:35:43.000Z", "max_issues_repo_path": "doc/manual/manipulation.tex", "max_issues_repo_name": "viennamath/viennamath-dev", "max_issues_repo_head_hexsha": "e238b40f52b8c3fe7de773625439d5de8d96ad39", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/manual/manipulation.tex", "max_forks_repo_name": "viennamath/viennamath-dev", "max_forks_repo_head_hexsha": "e238b40f52b8c3fe7de773625439d5de8d96ad39", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.5735294118, "max_line_length": 165, "alphanum_fraction": 0.7546139849, "num_tokens": 2882, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5696650065337279}}
{"text": "\\newpage\\section{Research Stuffs for later}\n\n\n\n\n\n\t\\prob{https://artofproblemsolving.com/community/c6h1499703p8903397}{AoPS}{}{Let $ABC$ be a triangle with incenter $I.$ $L_a,$ $L_b,$ $L_c$ are symmedian points of triangles $IBC,$ $ICA,$ $IAB.$ Let $X,Y,Z$ be the reflections of $I$ through $L_a,$ $L_b,$ $L_c.$\n\t\t\n\t\t\\begin{itemize}\n\t\t\t\n\t\t\t\\item Prove that $AX,$ $BY,$ $CZ$ and $OI$ are concurrent.\n\t\t\t\n\t\t\t\\item Let $I_a,$ $I_b,$ $I_c$ be the excenters of $ABC.$ Prove that $I_aX,$ $I_bY,$ $I_cZ$ are concurrent at a point $P$ and isogonal conjugate of $P$ with respect to triangle $I_aI_bI_c$ lies on Euler line of $ABC.$\n\t\t\t\n\t\\end{itemize}}\n\t\n\t\n\t\\prob{https://artofproblemsolving.com/community/c6h1180223}{buratinogigle Tough P1}{}{Let $ABC$ be a triangle inscribed in circle $(O)$ with $A$-excircle $(J)$. Circle passing through $A,B$ touches $(J)$ at $M$. Circle passing through $A,C$ touches $(J)$ at $N$. $BM$ cuts $CN$ at $P$. Prove that $AP$ passes through tangent point of $A$-mixtilinear incircle with $(O)$.}\n\t\n\n\n\n\n", "meta": {"hexsha": "68941b9af64712547c1618165db536b8a055f7fa", "size": 1025, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geo/sec14_research4later.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "geo/sec14_research4later.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geo/sec14_research4later.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 42.7083333333, "max_line_length": 372, "alphanum_fraction": 0.6682926829, "num_tokens": 357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542925, "lm_q2_score": 0.746138993030751, "lm_q1q2_score": 0.5696650065337279}}
{"text": "\\documentclass[./\\jobname.tex]{subfiles}\n\\begin{document}\n\n\\section{State of the Art}\n\\label{chap:state_of_the_art}\n\n\\subsection{Finite Element Method}\nCurrently, the Finite Element Method is the go-to approach to solve partial differential equations. The domain $\\Omega$ on which the \\gls{pde} is posed, is discretised into multiple smaller elements - as the name suggests. Thus, \\gls{fem} counts to the category of meshed methods. The underlying solution function $u(\\mathbf{x})$ to the PDE is then approximated by so called ``basis-functions'' $\\Phi(\\mathbf{x})$ limited to these finite elements. This thesis uses the open-source Netgen/NGSolve \\gls{fem} package \\cite{schoberl_ngsolvengsolve_2020}.\\\\\nThe general steps taken to solve a PDE with an FEM solver are: \n\\begin{enumerate}\n\t\\item \\underline{Step: Strong Form} \\\\\n\t\t  This is the standard formulation of the linear \\gls{pde}. $\\mathbf{L}$ and $\\mathbf{B}$ are linear differential operators that include the derivatives. \\\\\n\t\t  \\begin{equation}\n\t\t  \\label{eq: strong form}\n\t\t\t  \\begin{split}\n\t\t\t  \t\\mathbf{x} \\in \\mathbb{R}^2 \\\\\n\t\t\t  \tu(\\mathbf{x}), f(\\mathbf{x}), g(\\mathbf{x}): \\Omega \\rightarrow \\mathbb{R} \\\\\n\t\t\t\t\\mathbf{L} u(\\mathbf{x}) = f(\\mathbf{x}) \\text{ on $\\Omega$} \\\\\n\t\t\t\t\\mathbf{B} u(\\mathbf{x}) = g(\\mathbf{x}) \\text{ on $\\partial \\Omega$}\n\t\t\t  \\end{split}\n\t\t  \\end{equation}\n\t\t  Further, only Dirichlet boundary conditions are considered, thus the boundary operator is always the identity matrix $\\mathbf{B} = \\mathbb{I}$. Therefore, the linear operator on the boundary $\\mathbf{B}$ can be disregarded, resulting in  \\\\\n\t\t  \\begin{equation}\n\t\t  \tu(\\mathbf{x})|_{\\partial \\Omega} = g(\\mathbf{x}) .\n\t\t  \\end{equation}\n\t\t  \n\t\\item \\underline{Step: Weak Form} \\\\\n\t\t  The next step is to reformulate the strong form into a usable weak form. This is equivalent to the strong form but written in an integral notation. In this equation, the $A$, $b$ and $c$ correspond to the constant factors of the derivatives in strong form. For the sake of completeness, this is kept abstract. In the \\gls{pde}s considered in this work, $A = \\mathbb{I}$ and $b=\\mathbf{0}$, $c = 0$. Currently, the so-called test-function $v(\\mathbf{x})$ is an arbitrary function, but it has to be 0 on the boundary $v(\\mathbf{x})|_{\\Omega} = 0$. The choice of the test-function correspond to different \\gls{fem} types (\\cite[p. 6f]{shen_spectral_2011}).\\\\\n\t\t  \\begin{equation}\n\t\t  \\label{eq: weak form}\n\t\t  \\begin{split}\n\t\t\t  a(u,v) &\n\t\t\t  \\begin{cases}\n\t\t\t\t  & ~ \\int_{\\Omega} - (\\nabla^T A \\nabla) u(\\mathbf{x}) v(\\mathbf{x}) dV \\\\ \n\t\t\t\t  & - \\int_{\\Omega} b^T \\nabla u(\\mathbf{x}) v(\\mathbf{x}) dV \\\\\n\t\t\t\t  & + \\int_{\\Omega} c u(\\mathbf{x}) v(\\mathbf{x}) dV\n\t\t\t  \\end{cases} \\\\\n\t\t\t  F(v) &\n\t\t\t  \\begin{cases} \n\t\t\t  \t& = \\int_{\\Omega} f(\\mathbf{x}) v(\\mathbf{x}) dV\n\t\t\t  \\end{cases}\n\t\t  \\end{split}\n\t\t  \\end{equation} \n\t\\item \\underline{Step: Discretisation of $\\Omega$} \\\\\n\t\t  Create a mesh of finite elements that span the whole domain. Usually these are triangles. 
Thus, this step is sometimes called ``triangulation''.\n\t\\item \\underline{Step: Basis functions} \\\\\n\t\t  Choose a basis function $\\Phi(\\mathbf{x})$ that can be used to approximate the solution $u(\\mathbf{x}) \\approx u_{h}(\\mathbf{x}) = \\sum_{i = 1}^{N} u_i \\Phi_i(\\mathbf{x})$. A common choice are Lagrange or Chebyshev polynomials. In the Galerkin type \\gls{fem}, the test-function $v(\\mathbf{x})$ is the same as the trail-function, thus $v(\\mathbf{x}) = \\sum_{j = 1}^{N} v_j \\Phi_j(\\mathbf{x})$. The choice of the basis function $\\Phi(\\mathbf{x})$ largely influences the computational effort.  $\\Phi(\\mathbf{x})$ should have a small support, to produce a thinly populated matrix $A$ in the linear system of equations \\eqref{eq:linear_system_of_equations} below.\n\t\\item \\underline{Step: Solution} \\\\\n\t\t  In the weak form, as seen in equation \\eqref{eq: weak form}, $a(u,v)$ is a continuous bilinear form and $F(v)$ is a continuous linear functional. Substituting $u$ and $v$ with their corresponding approximation from \\mbox{step 4} results in \n\t\t  \\begin{equation}\n\t\t  \\sum_{j=1}^{N} v_j \\sum_{i=1}^{N} u_i a(\\Phi_i, \\Phi_j) = \\sum_{j=1}^{N} v_j F(\\Phi_j).\n\t\t  \\end{equation} \n\t\t  Dividing by the $v_j$ values on both sides results in a linear system of equations, where the constant factors $u_i$ need to be determined.  \n\t\t  \\begin{equation}\n\t\t  \\label{eq:linear_system_of_equations}\n\t\t  \\underbrace{\\sum_{i=1}^{N} u_i a(\\Phi_i, \\Phi_j)}_{\\mathbf{A u}} = \\underbrace{F(\\Phi_j)}_{\\mathbf{b}} \\text{ for $j=1,...N$}\n\t\t  \\end{equation}\n\\end{enumerate}\n\nModern solvers include more complex and advanced techniques to further improve the solution error and the computation time. Some of the most important concepts that are also available in NGSolve are listed here. \n\n\\begin{itemize}\n\t\\item \\underline{Static Condensation}: \\\\\n\t\t  Depending on the number of discrete elements, the $\\mathbf{A}$ matrix can be very large. Inverting large matrices is very time consuming. Static condensation, also called Guyan reduction (\\cite{guyan_reduction_1965}), reduces this dimensionality by exploiting the structure of $\\mathbf{A}$. \n\t\\item \\underline{Preconditioner}: \\\\\n\t\t  Instead of solving the $\\mathbf{A}^{-1}$ exactly, this can also be approximated by a matrix that is similar to $\\mathbf{A}^{-1}$. The actual inverse can be iteratively approximated. NGSolve implements multiple different preconditioners and it even allows to create your own method. \n\t\\item \\underline{Adaptive Mesh Refinement}: \\\\\n\t\tThe accuracy of a FEM-approximated solution mainly depends on the density of the mesh. Typically, finer meshes tend to produce more accurate solutions, but the computation time is longer. This trade-off can be overcome by a self-adaptive mesh. NGSolve implements that in an adaptive loop that executes: \n\t\t\\begin{itemize}\n\t\t\t\\item Solve PDE (with coarse mesh)\n\t\t\t\\item Estimate Error (for every element)\n\t\t\t\\item Mark Elements (that have the greatest error)\n\t\t\t\\item Refine Elements (that were previously marked)\n\t\t\t\\item Repeat until \\gls{dof} exceed a specified $N$\n\t\t\\end{itemize}\n\\end{itemize}\n\n\n\\subsection{Computational Intelligence Methods} \n\\label{chap:literature_overview}\nThe following table \\ref{tab:literature_research} gives a brief overview of these papers and sorts them historically. 
In general, all of the papers from the table use the \\gls{wrm}, or some variant of that concept, to transform their differential equation into an optimisation problem. This serves as the fitness function and is necessary to evaluate a possible candidate solution and perform the evolutionary selection. The fitness function is also called the objective function, and these terms are used interchangeably in this paper. In short, the residual $R$ is defined through the differential equation itself and can be calculated by $R(u(\\mathbf{x})) = \\mathbf{L}u(\\mathbf{x}) - f(\\mathbf{x})$. The residual can be thought of as a functional that substitutes $u(\\mathbf{x})$ with an approximate solution $u_{apx}(\\mathbf{x})$ and returns a numerical score. \\\\\nThis paper mainly builds on the work presented in \\cite{chaquet_using_2019}. In their paper they describe an algorithm that approximates a solution with a linear combination of Gaussian \\gls{rbf} as kernels:\n\\begin{equation}\nu_{apx}(\\mathbf{x}) = \\sum_{i=1}^{N} \\omega_i e^{\\gamma_i \\left\\| \\mathbf{x} - \\mathbf{c}_i \\right\\|^2}\n\\end{equation}\nThe approximated function $u_{apx}(\\mathbf{x})$ can be fully determined by a finite number of parameters: $\\omega_i, \\gamma_i, \\mathbf{c}_i$. These are stacked together into a vector $\\mathbf{p_{apx}}$ and called the decision variables, which are optimised by the algorithm. The objective function can be seen in equation \\eqref{eq:fit_func_chaquet}. \n\\begin{equation}\n\\label{eq:fit_func_chaquet}\n\\begin{split}\nF(u_{apx}(\\mathbf{x})) = \\frac{1}{m (n_C + n_B)} \\\\ \\left[ \\sum_{i=1}^{n_C} \\right. \\xi (\\mathbf{x}_i) || \\mathbf{L}u_{apx}(\\mathbf{x}_i) - f(\\mathbf{x}_i)||^2 \\\\ + \\left. \\phi \\sum_{j=1}^{n_B} || \\mathbf{B}u_{apx}(\\mathbf{x}_j) - g(\\mathbf{x}_j)||^2 \\right] \n\\end{split}\n\\end{equation}\nThe multipliers $\\xi(\\mathbf{x}_i)$ and $\\phi$ are weighting factors for the inner and the boundary term, respectively. The whole term is normalised with the number of collocation points. The parameters of the kernels are determined via a \\gls{cma_es} \\cite{hansen_reducing_2003}. To further improve the solution, the evolutionary algorithm is coupled with a \\gls{ds} method to carry out the local search. The authors show empirically that the local search significantly improves the performance by testing the algorithm on a set of 32 differential equations. 
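\n\nTo make the structure of \\eqref{eq:fit_func_chaquet} concrete, the following minimal NumPy sketch is our own simplification (scalar equations, $m=1$, constant weight $\\xi$), not the implementation of \\cite{chaquet_using_2019}; the callables \\texttt{residual} and \\texttt{g} are assumed to be supplied by the problem at hand:\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef u_apx(x, w, gamma, c):\n    # linear combination of Gaussian RBF kernels; c has shape (N, dim)\n    return np.sum(w * np.exp(gamma * np.sum((x - c) ** 2, axis=1)))\n\ndef fitness(w, gamma, c, residual, g, inner_pts, boundary_pts,\n            xi=1.0, phi=100.0):\n    # residual(u, x) is assumed to return L u(x) - f(x) for the PDE at hand\n    u = lambda x: u_apx(x, w, gamma, c)\n    inner = sum(xi * residual(u, x) ** 2 for x in inner_pts)\n    bound = sum((u(x) - g(x)) ** 2 for x in boundary_pts)\n    return (inner + phi * bound) / (len(inner_pts) + len(boundary_pts))\n\\end{lstlisting}\nA candidate is then simply the stacked parameter vector of all $(\\omega_i, \\gamma_i, \\mathbf{c}_i)$, and the evolutionary algorithm minimises this score.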
\n\n\\begin{table}[h]\n\t\\centering\n\t\\noindent\\adjustbox{max width=\\linewidth}{\n\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\n\t\t\t\\hline\n\t\t\t\\rowcolor[HTML]{\\farbeTabA}\n\t\t\t\n\t\t\tPaper & Algorithm & Representation & Problems \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{howard_genetic_2001}} & \\multilinecell{GP} & \\multilinecell{polynomial of \\\\ arbitrary length} & \\multilinecell{one-dimensional \\\\ steady-state \\\\ model of \\\\ convection-diffusion \\\\ equation} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{kirstukas_hybrid_2005}} & \\multilinecell{GP} & \\multilinecell{algebraic \\\\ expression} & \\multilinecell{heating of thin rod \\\\ heating by current} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{tsoulos_solving_2006}} & \\multilinecell{GE} & \\multilinecell{algebraic term} & \\multilinecell{set of ODEs \\\\ system of ODEs \\\\ and PDEs} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{mastorakis_unstable_2006}} & \\multilinecell{GA\\\\(global); \\\\ DS\\\\(local)} & \\multilinecell{5th order \\\\ polynomial}& \\multilinecell{unstable \\\\ ODEs} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{sobester_genetic_2008}} & \\multilinecell{GP \\\\ and \\\\ RBF-NN} & \\multilinecell{algebraic term \\\\ for inner; \\\\ RBF for boundary} & elliptic PDEs \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{howard_genetic_2011}} & \\multilinecell{GP} & function value grid & \\multilinecell{convection--diffusion \\\\ equation \\\\ at different \\\\ Peclet numbers } \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{chaquet_solving_2012}} & \\multilinecell{ES} & \\multilinecell{partial sum \\\\ of Fourier series} & \\multilinecell{testbench of \\\\ ODEs \\\\ system of ODEs \\\\ and PDEs} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{babaei_general_2013}} & \\multilinecell{PSO} & \\multilinecell{partial sum\\\\of Fourier series} & \\multilinecell{integro-differential equation\\\\system of linear ODEs \\\\ Brachistochrone \\\\ nonlinear Bernoulli} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{panagant_solving_2014}} & \\multilinecell{DE} & \\multilinecell{polynomial of \\\\ unspecified order} & \\multilinecell{set of 6 \\\\ different PDEs}  \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{sadollah_metaheuristic_2017}} & \\multilinecell{PSO\\\\HS\\\\WCA} & \\multilinecell{partial sum\\\\of Fourier series} & \\multilinecell{singular BVP} \\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{chaquet_using_2019}} & \\multilinecell{CMA-ES\\\\(global); \\\\ DS\\\\(local)} & \\multilinecell{linear combination \\\\ of Gaussian kernels} & \\multilinecell{testbench of \\\\ ODEs \\\\ system of ODEs \\\\ and PDEs}\\\\ \\hline\n\t\t\t\n\t\t\t\\multilinecell{\\cite{fateh_differential_2019}} & \\multilinecell{DE} & \\multilinecell{function value\\\\grid} & elliptic PDEs \\\\ \\hline\n\t\t\t\n\t\t\\end{tabular}\n\t}\n\t\\unterschrift{Literature research on the general topic of stochastic solvers and their application. The papers are sorted by date of release. }{}{}\n\t\\label{tab:literature_research}\n\\end{table}\n\n\n\\subsection{Differential Evolution}\n\nThe differential evolution framework was first introduced in \\cite{storn_differential_1997}. Due to its simple and flexible structure, it quickly became one of the most successful evolutionary algorithms. Over the years, several adaptations to the original framework have been proposed. 
Some of them currently count among the best-performing algorithms, as the 100-Digit Challenge at GECCO 2019 \\cite{suganthan_suganthancec2019_2020} shows. The main \\gls{de} framework consists of three necessary steps that continuously update a population of possible solutions. The population can be interpreted as a matrix, where each row-vector $\\mathbf{x}_i$, also called an individual, represents a point within the search domain and has a fitness according to the fitness function $f(\\mathbf{x}_i): \\mathbb{R}^n \\rightarrow \\mathbb{R}$. The goal is to minimise the fitness function. These steps are performed in a loop until a predefined termination condition is reached. Each step is controlled by a user-defined parameter: \n\\begin{itemize}\n\t\\item \\underline{Mutation}: \\\\\n\t\t  Mutation strength parameter F;\\\\\n\t\t  The mutation uses the information from within the population to create a mutant vector $v_i$. This is done by scaling the difference between some vectors in the population. The \\textit{/current-to-pbest/1} mutation operator can be seen in equation \\eqref{eq:mut_rand_1}, where $x_i$ is the current individual, $x_{best}^p$ is a random vector among the top $p$\\% vectors, $x_{r1}$ is a random vector from the population, while $\\tilde{x}_{r2}$ is randomly chosen from the union of the population and the archive. $x_{r1}$ and $\\tilde{x}_{r2}$ must not describe the same individual.\n\t\t  \\begin{equation}\n\t\t  \\label{eq:mut_rand_1}\n\t\t  v_i = x_{i} + F_i(x_{best}^p - x_{i}) + F_i(x_{r1} - \\tilde{x}_{r2})\n\t\t  \\end{equation}\n\t\\item \\underline{Crossover}: \\\\\n\t\t  Crossover probability parameter CR;\\\\\n\t\t  The crossover procedure randomly mixes the information between the mutant vector $v_i$ and the current candidate $x_{i}$ to create a new trial vector $u_i$. The binomial crossover from equation \\eqref{eq:crs_bin} randomly takes elements from both vectors, where $K$ is a random index to ensure that at least one element from the mutant vector $v_i$ is taken.\n\t\t  \\begin{equation}\n\t\t  \\label{eq:crs_bin}\n\t\t  u_{ij}=\\begin{cases}\n\t\t  v_{ij}, &\\text{if $j = K \\lor rand[0,1] \\leq CR$}\\\\\n\t\t  x_{ij}, &\\text{otherwise}\n\t\t  \\end{cases}\n\t\t  \\end{equation}\n\t\\item \\underline{Selection}: \\\\\n\t\t  Population size N;\\\\\n\t\t  The selection replaces the old candidate $x_i$ if the trial candidate $u_i$ is better as measured by the fitness function. This is performed for every individual in the population, then the next generation is started.\n\\end{itemize}  ~\\\\\nIn modern \\gls{de} variants, these parameters are self-adapted during the evolutionary process. This means that the algorithms can balance between exploration of the search-space and exploitation of promising locations. A prominent example of a modern \\gls{de} with self-adaptation is JADE, which is described in \\cite{zhang_jade_2009}. The adaptation is performed by taking successful F and CR values of the last generation into account. If a certain setting is successful in generating better candidates, newly selected F and CR values gravitate towards that setting. 
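\n\nTo illustrate how the three steps interact, the following minimal NumPy sketch of ours performs one generation of classic DE (\\textit{rand/1} mutation with binomial crossover and fixed F and CR); it is a simplified illustration, not JADE nor the \\textit{current-to-pbest} operator shown above:\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef de_generation(pop, fit, f_obj, F=0.5, CR=0.9, rng=None):\n    rng = rng or np.random.default_rng()\n    N, n = pop.shape\n    for i in range(N):\n        # mutation: scaled difference of two random vectors added to a third\n        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3,\n                                replace=False)\n        v = pop[r1] + F * (pop[r2] - pop[r3])\n        # binomial crossover: take at least one component from v\n        mask = rng.random(n) <= CR\n        mask[rng.integers(n)] = True\n        u = np.where(mask, v, pop[i])\n        # selection: keep the better of trial and current candidate\n        fu = f_obj(u)\n        if fu <= fit[i]:\n            pop[i], fit[i] = u, fu\n    return pop, fit\n\\end{lstlisting}\nRunning this function in a loop until a termination condition is met yields the basic DE optimiser described above.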
\n\n\n\\end{document}\n", "meta": {"hexsha": "826f2b65098c33bb35b7951e1b83aeac2d559b9a", "size": 14773, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "master_thesis_paper/tex/State_of_the_Art.tex", "max_stars_repo_name": "nicolai-schwartze/Masterthesis", "max_stars_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-13T10:02:02.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-13T10:02:02.000Z", "max_issues_repo_path": "master_thesis_paper/tex/State_of_the_Art.tex", "max_issues_repo_name": "nicolai-schwartze/Masterthesis", "max_issues_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "master_thesis_paper/tex/State_of_the_Art.tex", "max_forks_repo_name": "nicolai-schwartze/Masterthesis", "max_forks_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.5333333333, "max_line_length": 1009, "alphanum_fraction": 0.7288973127, "num_tokens": 4200, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467738423874, "lm_q2_score": 0.6477982315512489, "lm_q1q2_score": 0.5695097253690843}}
{"text": "\\documentclass[a4paper]{article}\r\n\r\n%% Language and font encodings\r\n\\usepackage[english]{babel}\r\n\\usepackage[utf8x]{inputenc}\r\n\\usepackage[T1]{fontenc}\r\n\r\n%% Sets page size and margins\r\n\\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}\r\n\r\n%% Useful packages\r\n\\usepackage{amsmath}\r\n\\usepackage{amsfonts}\r\n\\usepackage{graphicx}\r\n\\usepackage[colorinlistoftodos]{todonotes}\r\n\\usepackage[colorlinks=true, allcolors=blue]{hyperref}\r\n\r\n\\title{Judson's Abstract Algebra: Chapter 9}\r\n\\date{}\r\n\r\n\\begin{document}\r\n\\maketitle\r\n\r\n\\section*{1}\r\n\r\nProve that $\\mathbb{Z} \\cong n \\mathbb{Z}$ for $n \\neq 0$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nLet $n \\in \\mathbb{Z}, n \\neq 0$. Consider the function $f : \\mathbb{Z} \\rightarrow n \\mathbb{Z}$\r\n\r\n$$f(x) = nx.$$\r\n\r\nLet $x,y \\in \\mathbb{Z}$. Note that \r\n\r\n$$f(x+y) = n(x+y) = nx + ny = f(x) + f(y).$$\r\n\r\nAssume $f(x) = f(y)$. Then\r\n\r\n$$f(x) = f(y)$$\r\n$$nx = ny$$\r\n$$x = y$$\r\n\r\nso $f$ is injective. Note that $f$ is surjective by definition. Since $f$ is a bijection and preserves the group operation $f$ is an isomorphism.\r\n\r\n\r\n\\section*{2} \r\n\r\nProve that $\\mathbb{C}^*$ is isomorphic to the subgroup of $GL_2(\\mathbb{R}$ consisting of matrices of form\r\n\r\n$$\r\n  \\begin{pmatrix}\r\n    a & b \\\\\r\n    -b & a\r\n  \\end{pmatrix}.$$\r\n  \r\nWe will call the subgroup described above $M$. Consider a function $f : \\mathbb{C}^* \\rightarrow M$ defined by \r\n\r\n$$f(a+bi) = \\begin{pmatrix}\r\n    a & b \\\\\r\n    -b & a\r\n  \\end{pmatrix}.$$\r\n  \r\nLet $a,b,c,d \\in \\mathbb{R}$ where $ab \\neq 0$ and $cd \\neq 0$. First note that the function preserves the group operation since\r\n\r\n\\begin{align*}\r\nf((a+bi)(c+di)) &= f(ac-db + (db + da)i) \\\\\r\n&= \\begin{pmatrix}\r\n    ac-db & cb + da \\\\\r\n    cb + da & ac - db\r\n  \\end{pmatrix} \\\\\r\n&= = \\begin{pmatrix}\r\n    a & b \\\\\r\n    -b & a\r\n  \\end{pmatrix}\r\n  = \\begin{pmatrix}\r\n    c & d \\\\\r\n    -d & c\r\n  \\end{pmatrix} \\\\\r\n&= f(a+bi)f(c+di)\r\n\\end{align*}\r\n\r\nNote that $f$ is surjective because $a,b$ are arbitrary real numbers. Notice that $f$ is injective because of element-wise equality of matrices.\r\n\r\n\r\n\\section*{7}\r\n\r\nShow that any cyclic group of order $n$ is isomorphic to $\\mathbb{Z}_n$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nLet $G$ be a cyclic group of order $n$ generated by $a$ and define a function $\\phi : \\mathbb{Z}_n \\rightarrow G$ by $\\phi(k) = a^k$ where $0 \\leq k < n$. Let $x,y \\in \\mathbb{Z}_n$. \r\nNotice that since\r\n\r\n$$\\phi(x + y) = a^{x+y} = a^x a^y = \\phi(x) \\phi(y)$$\r\n\r\nthat $\\phi$ preserves the group operation. Assume $\\phi(x) = \\phi(y)$. Then\r\n\r\n$$\\phi(x) = \\phi(y)$$\r\n$$a^x = a^y$$\r\n$$a^x a^{-y} = e$$\r\n$$a^{x-y} = e.$$\r\n\r\nHence $x - y \\equiv 0 \\mod n$ and therefore $x \\equiv y \\mod n$ and $\\phi$ is injective. Let $g \\in G$. Since $G$ is cyclic, there exists $k \\in \\mathbb{Z}, 0 \\leq k < n$ such that\r\n\r\n$$g = a^k = \\phi(k)$$\r\n\r\ntherefore $\\phi$ is surjective. This proves that $\\phi$ is an isomorphism, hence any cyclic group of order $n$ is isomorphic to $\\mathbb{Z}_n$.\r\n\r\n\r\n\\section*{9}\r\n\r\nLet $G = \\mathbb{R} \\setminus \\{ -1 \\}$ and define a binary operation on $G$ by \r\n\r\n$$a * b = a + b + ab.$$\r\n\r\nProve that $G$ is a group under this operation. 
Show that $(G, *)$ is isomorphic to the multiplicative group of nonzero real numbers.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nLet $a,b,c \\in G$. Note that $G$ contains an identity since\r\n\r\n$$a * 0 = a + 0 + 0a = a.$$\r\n\r\nNote $G$ contains inverses since\r\n\r\n\\begin{align*}\r\na * \\frac{-a}{1+a} &= a + \\frac{-a}{1+a} + \\frac{-a^2}{1+a} \\\\\r\n&= \\frac{a + a^2}{1+a} + \\frac{-a}{1+a} + \\frac{-a^2}{1+a} \\\\ \r\n&= 0.\r\n\\end{align*}\r\n\r\nConsider\r\n\r\n\\begin{align*}\r\n(a*b) * c &= (a + b + ab) * c \\\\\r\n&= a + b + ab + c + c (a + b + ab) \\\\\r\n&= a + b + c + ab + ac + bc + abc \\\\\r\n&= b + c + bc + a + a (b + c + bc) \\\\\r\n&= a * (b + c + bc) \\\\\r\n&= a * (b*c)\r\n\\end{align*}\r\n\r\nshows that $*$ is associative. We must now find a bijection $f : G \\rightarrow \\mathbb{R} \\setminus \\{ 0 \\}$. Let $f(x) = 1 + x$. Consider\r\n\r\n\\begin{align*}\r\nf(a*b) &= f(a + b + ab) \\\\ \r\n&= (1 + a + b + ab) \\\\\r\n&= (1 + a) (1 + b) \\\\\r\n&= f(a) f(b).\r\n\\end{align*}\r\n\r\nHence $f$ preserves the group operation. It is clear that $f$ is injective since\r\n\r\n\\begin{align*}\r\nf(a) &= f(b) \\\\\r\n1 + a &= 1 + b \\\\\r\na &= b.\r\n\\end{align*}\r\n\r\nIt is clear that $f$ is surjective since for $x \\in \\mathbb{R} \\setminus \\{ 0 \\}$\r\n\r\n$$f(x - 1) = x$$\r\n\r\nand $(x - 1) \\in G$.\r\n\r\n\r\n\r\n\\section*{27}\r\n\r\nLet $G \\equiv H$. Show that if $G$ is cyclic, then so is $H$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nLet $a$ be a generator of $G$. Then for all $g \\in G$ there exists $k \\in \\mathbb{Z}$ such that $a^k = g$. Let $f: G \\rightarrow H$ be an isomorphism. Let $h \\in H$. There exists $g \\in G$ such that\r\n\r\n\\begin{align*}\r\nh &= f(g) \\\\\r\n&= f(a^k) \\\\\r\n&= f(a)^k.\r\n\\end{align*}\r\n\r\nNotice that $f(a)$ is a generator of $H$ since $h$ is arbitrary. This proves that $H$ is a cyclic group.\r\n\r\n\r\n\\section*{28}\r\n\r\nProve that any group $G$ of order $p$, $p$ a prime, must be isomorphic to $\\mathbb{Z}_p$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nRecall Theorem 9.3: If $G$ is a cyclic group of order $n$, then $G$ is isomorphic to $\\mathbb{Z_n}$. Also recall Corollary 6.7: Let $|G| = p$ with $p$ a prime number. Then $G$ is cyclic and any $g \\in G$ such that $ \\neq e$ is a generator. The desired result follows immediately from combining these two results.\r\n\r\n\r\n\\section*{34}\r\n\r\nAn \\textit{automorphism} of a group $G$ is an isomorphism with itself. Prove that complex conjugation is an automorphism of the additive group of complex numbers; that is, show that the map $\\phi(a+bi) = a - bi$ is an isomorphism from $\\mathbb{C}$ to $\\mathbb{C}$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nNote that since\r\n\r\n\\begin{align*}\r\n\\phi((a+bi) + (c+di)) &= \\phi((a+c) + (b+d)i) \\\\\r\n&= (a+c) - (b+d)i \\\\\r\n&= (a-bi) + (c-di) \\\\\r\n&= \\phi(a+bi) + \\phi(c+di)\r\n\\end{align*}\r\n\r\nwe know that $\\phi$ preserves the group operation. The proof that $\\phi$ is surjective is trivial (to get any element apply $\\phi$ to the element's inverse). To show that $\\phi$ is injective consider\r\n\r\n$$\\phi(a+bi) = \\phi(c+di)$$\r\n$$\\overline{\\phi(a+bi)} = \\overline{\\phi(c+di)}$$\r\n$$a + bi = c + di.$$\r\n\r\n\r\n\\section*{35}\r\n\r\nProve that $a + ib \\rightarrow a- ib$ is an automorphism of $\\mathbb{C}^*$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nCall the map $f$. Note that by problem 34 we know that $f$ is a bijection. 
To show that $f$ preserves the group operation consider\r\n\r\n\\begin{align*}\r\nf((a+bi) (c+di)) &= f((ac - bd) + (ad + bc)i) \\\\\r\n&= (ac - bd) - (ad + bc)i \\\\\r\n&= (a-bi)(c-di) \\\\\r\n&= f(a+bi) f(c+di).\r\n\\end{align*}\r\n\r\n\r\n\\section*{37}\r\n\r\nWe will denote the set of all automorphisms of $G$ by Aut$(G)$. Prove that Aut$(G)$ is a subgroup of $S_G$, the group of permutations of $G$.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nNote that the identity automorphism is the function $x \\rightarrow x$. Clearly any automorphism composed with another automorphism results in an automorphism. Since automorphisms are bijections clearly inverses are contained within Aut$(G)$. This proves Aut$(G)$ is a subgroup of $S_G$.\r\n\r\n\r\n\\section*{50}\r\n\r\nProve that $A \\times B$ is abelian if and only if $A$ and $B$ are abelian.\r\n\r\n\\vspace{\\baselineskip}\r\n\r\nAssume $A$ and $B$ are abelian. Let $(a_1, b_1), (a_2, b_2) \\in A \\times B$. Note that since\r\n\r\n\\begin{align*}\r\n(a_1, b_1) (a_2, b_2) &= (a_1 a_2, b_1 b_2) \\\\\r\n&= (a_2 a_1, b_2 b_1) \\\\\r\n&= (a_2, b_2) (a_1, b_1)\r\n\\end{align*}\r\n\r\nit is true that $A \\times B$ is abelian. Assume that $A \\times B$ is abelian. Let $(a_1, b_1), (a_2, b_2) \\in A \\times B$. Consider\r\n\r\n\\begin{align*}\r\n(a_1, b_1) (a_2, b_2) &= (a_2, b_2) (a_1, b_1) \\\\ \r\n(a_1 a_2, b_1 b_2) &= (a_2 a_1, b_2 b_1).\r\n\\end{align*}\r\n\r\nThis shows that $a_1 a_2 = a_2 a_1$ and that $b_1 b_2 = b_2 b_1$, therefore $A$ and $B$ are abelian.\r\n\r\n\r\n\\end{document}", "meta": {"hexsha": "b0882236c1683e8d3e3029fe43872c7d393404ad", "size": 7722, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "judson-solutions/Chapter09.tex", "max_stars_repo_name": "agdenadel/judson-abstract-algebra-solutions", "max_stars_repo_head_hexsha": "7e9e9c7126741f31c32bed97a8278b3866afbd63", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-10-20T22:41:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-12T10:11:45.000Z", "max_issues_repo_path": "judson-solutions/Chapter09.tex", "max_issues_repo_name": "agdenadel/judson-abstract-algebra-solutions", "max_issues_repo_head_hexsha": "7e9e9c7126741f31c32bed97a8278b3866afbd63", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 24, "max_issues_repo_issues_event_min_datetime": "2017-10-19T17:09:07.000Z", "max_issues_repo_issues_event_max_datetime": "2017-10-26T03:44:24.000Z", "max_forks_repo_path": "judson-solutions/Chapter09.tex", "max_forks_repo_name": "agdenadel/judson-abstract-algebra-solutions", "max_forks_repo_head_hexsha": "7e9e9c7126741f31c32bed97a8278b3866afbd63", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-11-12T10:11:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-25T20:25:48.000Z", "avg_line_length": 29.030075188, "max_line_length": 313, "alphanum_fraction": 0.5953120953, "num_tokens": 2844, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8558511451289038, "lm_q1q2_score": 0.5694923886843245}}
{"text": "\\section{Accuracy of the merge algorithm}\n\nThe overall numerical accuracy of any algorithm, is of crucial\nrelevance in the area of Numerical Analysis; in particular in the\nsubarea we care about in this project (Numerical Linear\nAlgebra). \\Rehurek offers detailed and promising accuracy comparisons\nbetween his proposal and several other available implementations,\nthough he does that mainly for the serial executions (single node) of\nsmall/medium corpus sizes (not the Wikipedia experiment described in\nprevious section). Another pending task to verify the accuracy against\na golden standard, could be to perform experiments on a supercomputer\nwith enough RAM to hold the large corpus matrix; using an standard\nSVD software like \\cite{svdlibc}. The authors of these lines have added\nsuch pending task, to the TODO list for further stages of\nthis project. \\\\\n\nDespite of the above, an interesting analysis about the effect of\nnested truncation that \\cref{alg:svd-dist} introduces, is exposed in\n\\Rehurek Phd thesis \\cite{rehurek11a}. Citing the work of Zha et al\n(\\cite{zha00}), he remarks that his distributed SVD algorithm meets the conditions to\nbe an stable algorithm (on the numerical sense), though no longer\nexact. This should not surprise us, as the almost perfect parallelism\nachieved can not come without a price: every merge of two SVD\nfactorizations, as produced by \\cref{alg:merge-svd}, introduces some\nerror, in the sense that the following equality does not hold: \\\\\n\n\\begin{equation}\n\\label{eq:merge-svd-eq}\n\\func{SVD_k}(\\begin{bmatrix}A_1 \\mid A_2\\end{bmatrix}) =\n\\func{SVD_k}(\\begin{bmatrix}\\func{SVD_k}(A_1) \\mid \\func{SVD_k}(A_2)\\end{bmatrix})\n\\end{equation}\n\\hfill\n\nIn other words, calculating truncated SVD against the original input\nmatrices $A_1$ and $A_2$ (concatenated by columns), is not the same as\ncalculating the same over their truncated $SVD_k$ approximations. Let\nus recall that the matrix produced by $SVD_k(A)$ is just an approximation\nof original matrix $A$. \\\\\n\nThe precision lost by accepting as inputs rank-$k$ approximations,\ninstead of the original matrices, is not that bad though; it is shown in\n\\cite{zha00} and reused by Rehurek in \\cite{rehurek11a}, that the\ntypical matrix $A$ that emerges from Natural Language \nApplications like LSI, ``do indeed possess the necessary structure and\nthat in this case, a rank-$k$ approximation of A can be expressed as a\ncombination of rank-k approximations of its submatrices without a\nserious loss of precision''. \\\\\n\nThe above quote means, that the equality \\cref{eq:merge-svd-eq} can\nbe considered to hold in practice. In strict theory we shall replace the\nequality sign by an approximation sign though, as the equality sign\ncan be stated only on the idealistic case of exact arithmetic. Hence,\nwe can claim that: \\\\\n\n\\begin{equation}\n\\label{eq:merge-svd-app}\n\\func{SVD_k}(\\begin{bmatrix}A_1 \\mid A_2\\end{bmatrix}) \\approx\n\\func{SVD_k}(\\begin{bmatrix}\\func{SVD_k}(A_1) \\mid \\func{SVD_k}(A_2)\\end{bmatrix})\n\\end{equation}\n\\hfill\n\nThis is in part, because the $SVD_k$ factorization of a matrix $A$ is\nnot actually just an approximation (as we claimed paragraphs above),\nit is  ``the best'' approximation by a matrix of rank $k$, per the\nEckart-Young Theorem \\cite{eckart36}. 
Equation \\cref{eq:merge-svd-app},\nderived from the work of \\cite{zha00}, can be considered the\ncornerstone of the divide-and-conquer strategy of\nthe distributed \\cref{alg:svd-dist}. Without it, we would not know if\nit is valid to use the $\\func{SVD}_k$ calculation itself as a way of\ncombining two already calculated $\\func{SVD}_k$ factorizations. A quite\ninteresting research path for this project could be to seek other\nalternatives for doing the merge, or to confirm that the scheme\nproposed by \\Rehurek is the optimal way of merging two truncated SVD\nfactorizations. \n\n\n", "meta": {"hexsha": "199b2479dec306ee31f9a598dff8fe50bf1bcd73", "size": 3836, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "svd-dist-accu.tex", "max_stars_repo_name": "rzavalet/svd-lsi-project-master", "max_stars_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "svd-dist-accu.tex", "max_issues_repo_name": "rzavalet/svd-lsi-project-master", "max_issues_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "svd-dist-accu.tex", "max_forks_repo_name": "rzavalet/svd-lsi-project-master", "max_forks_repo_head_hexsha": "3db2aed30f124e79d60dd7aa6c012ddd05bdce7f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.8181818182, "max_line_length": 85, "alphanum_fraction": 0.7818039625, "num_tokens": 1030, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.5694923874610631}}
{"text": "% Appendix A\n\n\\chapter{CFR pseudo-code} % Main appendix title\n\n\\label{AppendixCFR} % For referencing this appendix elsewhere, use \\ref{AppendixA}\n\n\\lhead{Appendix B. \\emph{CFR pseudo-code}} % This is for the header on each page - perhaps a shortened title\n\n\\begin{lstlisting}[caption={CFR pseudo-code}, label={lst:cfr}]\nClass PokerEnv {...}\n% A class representing the poker env, it's implementation\n% will not be shown in this pseudo-code\n\nStruct StategyInfo {\n    float value;\n    string action;\n}\n\nStruct NodeInfo {\n    % you should store an hash key of this env intead of\n    % the full PokerEnv, we are doing this for readability\n    PokerEnv env\n    float regret\n    Map<PokerEnv, strategys> strategys\n}\n\nnodes = {}\n\nvoid runOnce(env, proba):\n    currentNode = nodes[env]\n\n    if (env.isOver()):\n        currentNode.regret = env.reward * proba\n        return\n\n    actions = env.getAvailableActions()\n    leadingNodes = {}\n    total = 0\n\n    % explore and compute values of the leading nodes\n    FOR action in actions:\n        copy = env.copy()\n        copy.play(action)\n        if (!currentNode.strategys[copy]):\n            currentNode.strategys[copy] = {\n                value: 1 / actions.size\n                action: action\n            }\n        runOnce(copy, currentNode.strategys[copy].value * proba)\n        leadingNodes.append(nodes[copy])\n\n    sumNegativeRegret = 0\n    sumPositiveRegret = 0\n    for FOR node in leadingNodes:\n        regret = -node.regret\n        if (regret > 0):\n            sumPositiveRegret += regret;\n        else:\n            sumNegativeRegret += regret;\n\n    % update strategies\n    FOR node IN leadingNodes:\n        regret = -node.regret\n        if (sumPositiveRegret > 0):\n            currentNode.strategys[node.env].value = regret > 0 ? regret / sumPositiveRegret : 0;\n        else if (sumNegativeRegret != 0):\n            currNode.strategys[node.env].value = regret == 0 ? 
1.0f : 1 - abs(regret / sumNegativeRegret);\n        else:\n            currNode.strategys[node.env].value = 1;\n\n    % weight the stratgies so it sum up to 1\n    float totalStrategy = 0\n    FOR node in leadingNodes:\n        totalStrategy += currentNode.strategys[node.env].value\n    FOR node in leadingNodes:\n        currNode.strategys[node.env].value /= totalStrategy;\n\n    % compute the regret of this node\n    FOR node in leadingNodes:\n        currentNode.regret += proba * -node.regret * currentNode.strategies[node.env].value\n\nenv = PokerEnv(public_cards,private_cards,...)\n% Initial environemnt of which you want the CFR Value\n% It takes all the necessary information to create a moment in the game.\nfor _ in NB_ITERATION:\n    runOnce(env, 1)\n\n% CFR values are in nodes[env].strategies\nFOR strategy in nodes[env].strategies:\n    env.generateRandomCardsFor2ndPlayer()\n    print(\"Action {strategy.action} -> strategy.value\")\n\\end{lstlisting}", "meta": {"hexsha": "f9f51f0678af25ab958b55e3692c153b035e5c22", "size": 2857, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/Appendices/B-CFR.tex", "max_stars_repo_name": "noeRls/opponent-exploitation", "max_stars_repo_head_hexsha": "989217afffcca7b13d6bb177bbca59f42e308a25", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/Appendices/B-CFR.tex", "max_issues_repo_name": "noeRls/opponent-exploitation", "max_issues_repo_head_hexsha": "989217afffcca7b13d6bb177bbca59f42e308a25", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/Appendices/B-CFR.tex", "max_forks_repo_name": "noeRls/opponent-exploitation", "max_forks_repo_head_hexsha": "989217afffcca7b13d6bb177bbca59f42e308a25", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-10-20T17:28:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-20T17:28:19.000Z", "avg_line_length": 31.0543478261, "max_line_length": 108, "alphanum_fraction": 0.6597829891, "num_tokens": 719, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511322604133, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.5694923858002123}}
{"text": "\\lab{Multi-Armed Bandit Problems}{Multi-Armed Bandit Problems}\n\\objective{This lesson explains Markov decision processes --\nspecifically multi-armed bandit problems -- and how to solve them by\nThompson Sampling. An application to web page experiments is addressed.}\n\n\\section*{Markov Decision Processes}\nPreviously we considered dynamic programming problems.\nThese problems involve making sequential decisions in some optimal way,\npossibly under uncertainty.  Dynamic programming problems are closely\nrelated to a class of problems called Markov Decision Processes.\n\nA Markov Decision Process (MDP) involves the following elements:\n\n\\begin{itemize}\n\\item   A set of decision times,\n\\item   A set of states,\n\\item   A set of actions,\n\\item   A set of rewards dependent on the state and action, and\n\\item   Transition probabilities dependent on states and actions.\n\\end{itemize}\nIn this lab we will consider discrete time problems, so that our\nset of decision times is $t = 1,2,3,\\ldots$.\n\\begin{comment}\nIn the dynamic programming problems considered in the previous labs,\nthe set of states was the set of all possible levels of wealth and\nthe actions were how much wealth to save for the next period.\nThe rewards were given by the utility function $u(\\cdot)$,\nand we considered both deterministic and stochastic transitions.\n\\end{comment}\n\\section*{Bandit Problems}\nIn particular, we will consider what is called the multi-armed bandit\nproblem.  The name comes from the following example.\nSuppose there is a row of $N$ slot machines (``one-armed bandits\")\nthat each pays out with probability $p_i$, $i= 1,2,\\ldots,N$,\nwhere the probabilities are unknown to the gambler.\nThe gambler seeks to determine the sequence of levers to pull in\norder to maximize her winnings.\n\nBandit problems have a wide range of applications.\nOne can consider the ``arms\" to be the different treatments in a\nclinical trial, the different forms of advertising a product,\nor the different research projects a company might invest in.\n\nWe now formulate the multi-armed bandit problem precisely.\nFor simplicity, suppose there are $2$ arms, though the\ndiscussion is easily extended to $N$ arms.\nIn each time period $t= 1,2,\\ldots$, one arm can be pulled.\nWith unknown probability $p_i$, the $i$-th arm gives\nreward $1$ and with probability $1-p_i$ it gives reward $0$.\nNote we have now defined a set of decision times, actions, and rewards.\nWe define the state to be a 4-tuple giving the number of successful and unsuccessful\npulls on each arm up to that point, written as\n\\begin{equation}\\label{state}\n(a_1,b_1,a_2,b_2),\n\\end{equation}\nwhere $a_i$ is the number of successful pulls on arm $i$ and $b_i$\nis the number of unsuccessful pulls on arm $i$.\nAs we pull the arms, we must balance between pulling the arm that has\nthe highest expected payoff and pulling all arms in order to gain\ninformation about the probabilities $p_i$.\nThis trade-off is often referred to as exploring versus exploiting.\nIn essence, while gaining rewards, we will also come up with\nestimates of the $p_i$ that improve our decision making.\nWe will do so by Bayesian updating.\n\n\\section*{Bayesian Updating}\nWhile a full exposition of Bayesian inference is well beyond the\nscope of this lab, the essential concepts are fairly straightforward.\nWe recognize that we do not know the values of the $p_i$,\nbut given our past history of successful and unsuccessful\npulls, we can say something about what range 
we think they might be in.\nThis is different than guessing a specific value for the $p_i$.\nInstead of specifying what we think is the most likely value of $p_i$,\nwe can encode our knowledge of and uncertainty about $p_i$ in a\nprobability distribution,\nlike those seen in Figure \\ref{fig:priors}.\n\\begin{figure}\n\n\\centering\n\\includegraphics[width=\\textwidth]{priors.pdf}\n\\caption{Bayesian priors}\n\\label{fig:priors}\n\\end{figure}\n\nIf you have had $1$ success and $2$ failures, you might\nthink that the success probability\n$p_i$ is around $\\frac{1}{3}$.\nHowever, you have very little information at this point,\nso rather than assuming the exact value for $p_i$,\nyou might represent your belief about $p_i$\nby the blue probability distribution in Figure \\ref{fig:priors}.\nWe call the distribution that describes our belief about $p_i$\nthe \\emph{prior distribution} of $p_i$.  If we incorporate new information,\nwe get a new distribution, called the \\emph{posterior distribution}.\nAs we collect even more information, the posterior distribution becomes\nthe new prior distribution. This framework allows us to fluidly incorporate\nnew information as we seek to determine the value of $p_i$.\n\nAs we get more information, the probability curve gets narrower.\nIn the figure, we see that the distributions have expected\nvalue of $1/3$ and become tighter around that value.\nThis is fitting since the parameter values have success $\\alpha$ one\nthird of the time and the tighter distributions correspond to having\nmore prior information.  If we need an estimate for $p_i$ we can use\nthe expected value of our distribution corresponding to $p_i$.\nWe will denote this estimate by $\\overline{p_i}$.\n\nWe will use this Bayesian framework in our approach to the multi-armed bandit problem.\nWe do not know the $p_i$, but each time we pull an arm, we update our distribution of\nwhere we think it might be.\nIn particular, we use a beta distribution for each $p_i$.\nA beta distribution is a continuous probability distribution on the interval $[0,1]$,\nwhich corresponds nicely to the possible values of the $p_i$.  Beta distributions have\ntwo parameters (given by $\\alpha$ and $\\beta$ in the figure).\n\nIn our problem, the state (given by \\eqref{state}) can be thought of as representing two\nbeta distributions -- one for each $p_i$ with parameters $a_i$ and $b_i$.\nThe details of the updating process are unimportant for now, except\nfor this important property: the two parameters for the beta distribution\ncorrespond exactly with successes and failures.\nSo if we start with a prior distribution Beta$(a,b)$ and the next pull is success,\nour posterior distribution is Beta$(a+1,b)$. If the next pull is a failure, however, then\nthe posterior distribution is Beta$(a,b+1)$.\nObserve how this corresponds with the way our state evolves.\n\n\\section*{Simulation and Sampling in a Bayesian Framework}\nAlong with the estimate $\\overline{p_i}$, we can also compute many other useful\nquantities in our Bayesian problem via random sampling.  
Essentially, we can take\nrandom draws from our distribution and estimate the mean, median, or other\nquantities based on the value of those quantities in the random sample.\nThis is a very powerful concept that will be explored in more detail in future labs.\n\n\\begin{problem}\nWrite a function \\li{sim\\_data} that accepts an $n\\times 2$ array, where each\nrow contains the two parameters of a beta distribution, and a positive integer $k$\nthat represents the number of random draws to return.\nThe function should return a $k\\times n$ matrix where each row has a random sample\nfrom each of the $n$ arms.  This can be accomplished with ease using the NumPy function\n\\li{numpy.random.beta}.\n\\label{prob:simdata}\n\\end{problem}\n\n\\begin{comment}\n\\begin{problem}\nSuppose one of the arms in a bandit problem has the state (or prior distribution of $p_i$)\nBeta$(100,200)$ corresponding to $100$ successes and $200$ failures.\nSimulate 10,000 data points for the distribution Beta$(100,200)$.\nCompute $\\overline{p_i}$ by finding the mean of the simulated points.\nCompute the median.  Also compute the $95$th percentile using the command\n\\li{scipy.stats.mstats.mquantiles(data, .95)} where \\li{data} is the array\ncontaining the 10,000 simulated data points.\n\\end{problem}\n\\end{comment}\n\\section*{Direct Dynamic Programming Solution}\nThis framework lends itself well to a dynamic programming type solution.\nWe introduce the value function $R$ that depends on the state, and gives\nthe optimal expected value that can be achieved\nstarting from the state.  Then we have\n\\begin{equation}\n\\label{recurs}\n\\begin{aligned}\nR(a_1,b_1,&a_2,b_2) =\\\\\n \\max&\\left\\{\\overline{p}_1\\cdot[1 + \\beta R(a_1+1,b_1,a_2,b_2)] +\n (1-\\overline{p}_1)\\beta R(a_1,b_1+1,a_2,b_2)\\right. ,\\\\\n&  \\left.\\overline{p}_2\\cdot[1 + \\beta R(a_1,b_1,a_2+1,b_2)] +\n(1-\\overline{p}_2)\\beta R(a_1,b_1,a_2,b_2+1)\\right\\}\n\\end{aligned}\n\\end{equation}\nThe two terms in the maximization represent the expected value of pulling lever\none or lever two, respectively.  For example, by pulling lever one, we expect\nto get a reward of 1 with probability $\\overline{p_1}$.\nWe must also account for the expected value of future rewards\n(discounted by $\\beta$) moving to a state with one more success on arm 1.\nWith probability $1-\\overline{p_1}$, lever one does not give a reward, and so\nrewards are simply the discounted expected reward starting in the next state.\n\nNotice that the expressions inside $R(\\cdot)$ on the right side have parameter\nvalues that sum to one more than those of the $R(\\cdot)$ on the left side.\nSo if we want to compute $R(1,1,1,1)$ we need to know\n$R(2,1,1,1)$, $R(1,2,1,1)$, $R(1,1,2,1)$ and $R(1,1,1,2)$\n(all possible combinations of parameters that add up to 5).\nTo compute these, we need to know the reward corresponding to all possible\ncombinations of parameters that add up to 6, and so on.  Consequently, we could\nmake a guess for $R$ at all parameter combinations that add up to some large $N$,\nthen work backward until we get to $R(1,1,1,1)$.  This is backward induction,\njust as we saw in dynamic programming.  
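\n\nFor concreteness, the following is a minimal sketch of this backward induction for two arms. The horizon \\li{N}, the discount factor \\li{beta}, and the terminal guess $R=0$ are illustrative assumptions, not prescribed values:\n\\begin{lstlisting}\nfrom itertools import product\n\ndef backward_induction(N=20, beta=0.9):\n    # R maps a state (a1, b1, a2, b2) to its estimated value.\n    # Terminal guess: R = 0 whenever the parameters sum to N.\n    R = {s: 0.0 for s in product(range(1, N), repeat=4) if sum(s) == N}\n    # Work backward from parameter sum N-1 down to 4 = 1+1+1+1.\n    for total in range(N - 1, 3, -1):\n        for s in product(range(1, total), repeat=4):\n            if sum(s) != total:\n                continue\n            a1, b1, a2, b2 = s\n            p1 = a1 / float(a1 + b1)  # expected value of Beta(a1, b1)\n            p2 = a2 / float(a2 + b2)  # expected value of Beta(a2, b2)\n            pull1 = (p1 * (1 + beta * R[(a1 + 1, b1, a2, b2)])\n                     + (1 - p1) * beta * R[(a1, b1 + 1, a2, b2)])\n            pull2 = (p2 * (1 + beta * R[(a1, b1, a2 + 1, b2)])\n                     + (1 - p2) * beta * R[(a1, b1, a2, b2 + 1)])\n            R[s] = max(pull1, pull2)\n    return R[(1, 1, 1, 1)]\n\\end{lstlisting}\n\n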
However, the number of computations in this\nproblem grows much too quickly because of the branching nature of having multiple arms.\nIn fact, if there are more than two arms, this method of computation is infeasible!\n\n\n\\begin{comment}\n\\section*{Gittins Index Solution}\nOne way we might hope to solve a bandit problem is by computing some sort of ``index''\nfor each arm.  That is, we want a number associated with each arm that in some sense\ncaptures the value of pulling that arm.   Ideally such an index would depend only on that arm,\nand not on the others.  We could then compare the indices for all of the arms\nand pull the arm with the highest index.  It turns out that bandit problems can be\nsolved optimally by such methods, which are computationally more feasible than the\ndynamic programming approach we saw above.\n\nTo compute an index for this problem, we consider comparing an arm with unknown\npayoff probability $p_i$ to an arm with known payoff probability $p$.  Then equation \\eqref{recurs}\nbecomes\n\\begin{equation}\\label{index}\n\\begin{aligned}\nR(p,a_i,b_i) = \\max&\\left\\{\\frac{p}{1-\\beta} \\right. ,\\\\\n&  \\left.\\hat{p}_i\\cdot[1 + \\beta R(p,a_i+1,b_i)] + (1-\\hat{p}_i)\\beta R(p, a_i,b_i+1)\\right\\}.\\\\\n\\end{aligned}\n\\end{equation}\n\nWe determine the expected value of pulling the first arm to be $\\frac{p}{1-\\beta}$,\nnoting that if we pull the deterministic arm once, we will continue pulling it\nforever as we gain no new information.  In this case the expected reward from pulling\nthe known arm is $p + \\beta p + \\beta^2 p + \\cdots = \\frac{p}{1-\\beta}$.\n\n\nIf we can find the $p$ such that we are indifferent between the deterministic\narm and the unknown arm, this will give us an index that quantifies the value of arm $i$\n(this can be proved).\n\nPutting all of this together, our algorithm for solving the multi-armed bandit\nproblem with two arms is as follows.  For each arm $i$, compute \\eqref{index}\nover a range of $p$ values to find the $p$ such that you would be indifferent\nbetween the arm with probability $p$ and arm $i$. Store this as the index $\\lambda_i$.\nCompare the $\\lambda_i$ and pull the arm with the largest index.\n\nIn order to compute \\eqref{index} we use dynamic programming, starting with a guess\nfor $R$ for parameters that add up to some large $N$, and use backward induction.\nIn the process, we find $R$ and $\\lambda_i$ for each combination of parameters\nthat adds up to any $n\\leq N$. Thus we do not have to compute new $\\lambda_i$ after\neach pull; we can just look them up.\n\n\\section*{Algorithm Outline}\nThis section will guide you through creating a function that will compute the indices\nfor a given arm.  It will involve writing a number of functions that you can save in the same\n.py file.\n\nFirst, we need a function that will compute all the pairs of $a,b$ that add up\nto some $N$.  We also want the user to be able to input the minimum values of\n$a$ and $b$ of interest to avoid unnecessary computation.  
For example if in practice\nall of the arms have $a_i \\geq 5$ and $b_i \\geq 10$, then we are uninterested in smaller $a$, $b$.\nThe following code accepts the value of $N$ and a minimum value of a and b and\nreturns an $N$ by 2 array with the $a$'s and $b$'s such that $a + b = N$.\n\\begin{lstlisting}\n# computes pairs of numbers starting with mina and minb that\n# add up to N\n\ndef compute_indices(N,mina,minb):\n    import scipy as sp\n    avec = sp.arange(mina,N-minb+1)\n    avec = sp.reshape(avec,(avec.shape[0],1))\n    bvec  = sp.arange(N-mina, minb-1,-1)\n    bvec = sp.reshape(bvec,(bvec.shape[0],1))\n    values = sp.hstack((avec,bvec))\n\n    return values\n\\end{lstlisting}\n\nIn order to perform the backward induction, we need to be able to estimate the value\n$R(p,a,b)$ for $a+b = N$.  To do so we have to estimate the second quantity in \\eqref{index}.\nWe will estimate it as $(\\frac{a}{a+b})/(1-\\beta)$.  This is the value one would get by\npulling an arm with $p_i$ equal to the expected value of $Beta(a,b)$ forever.\n\\begin{problem}\nWrite a function ``end\\_reward\" that accepts $p,a,b,$ and $\\beta$ and returns the\nestimated value of $R(p,a,b)$ for $a,b$ such that $a+b=N$.\n\\end{problem}\n\nFor convenience it will be nice to have a function that will compute $R(p,a,b)$\ngiven values of $p,\\overline{p},a,b,\\beta,N$ as well as values of $R(p,a+1,b)$ and $R(p,a,b+1)$.\nThis should follow directly from \\eqref{index}.\n\\end{comment}\n\n\\section*{Thompson Sampling}\n\\begin{comment}\nThere is a method for computing the optimal solution to the multi-armed bandit problem\nbased on what is commonly known as the Gittins Index Theorem.\nComputationally it is similar to the dynamic programming approach, but it is significantly less\ncostly.\nUnfortunately, for large scale problems, it can still be too costly.\n\\end{comment}\nThere are, however, many heuristic methods of solving the multi-armed bandit problem.\nIn particular we will use a method known as Thompson Sampling, or Randomized Probability Matching.\nThe idea is that we should choose arm $i$ with probability equal to the probability\nthat arm $i$ is the best arm.  So if we believe there is an $80\\%$ probability that\narm 2 is the best arm we will pull it $80\\%$ of the time.  The other $20\\%$ of the\ntime we would pull arm one which will help give more information about the true value of $p_1$.\nIn this way, we will pull most often the arms from which we expect the most rewards;\nhowever, we will also pull other arms with some probability so that we accomplish some\nexploration and gain information on all arms until we are confident we have found the best arm.\n\n\\begin{problem}\nWrite a function \\li{arm_probs} that accepts a $k \\times n$ array of data computed by the\nfunction from Problem \\ref{prob:simdata}.\n\nThis function should return a vector of the probabilities that each of the $n$ arms is optimal.\nThis can be computed for each arm by determining in how many of the $k$ simulations\nthat arm had the highest value.  Dividing this number by $k$ will give the proportion\nof the simulations for which this arm had the greatest probability of success.\nThis proportion is interpreted as the probability that the arm is the optimal arm.\n\\end{problem}\n\nIn some applications, we might want to run computations once, then pull many arms instead\nof computing before each pull.  
In these cases, rather than computing these probabilities,\nchoosing an arm, and then recomputing, it can be more convenient to view the probabilities as weights.\nFor example, we can view the probabilities as the weights of how to distribute the next 100 pulls.\nSuppose the probabilities resulting from the previous function (with two arms) are $0.4$ and $0.6$.\nThen we would allot 40 of the next 100 pulls to arm 1 and 60 of the next 100 pulls to arm 2.\nThen we could compute new weights for the next 100 pulls.\n\n\\begin{problem}\nUsing the results from the previous problems, write a function \\li{get_pulls} that determines how many\ntimes each arm should be pulled in the next $M$ pulls, where $M$ is an input to the function.\nThe function should accept a vector of probabilities of the form returned by the\nfunction in the previous problem, and a number of pulls $M$.\nNote that you will have to round since the number of pulls for each arm must be an integer.\nReturn a vector of length $n$ (where $n$ is the number of arms) that gives the number of\npulls out of the next $M$ for each arm.  Make sure the entries sum to $M$.\n\\end{problem}\n\nYou now have code that solves the version of the multi-armed bandit problem described here.\nWe start all arms with the state (or prior) Beta$(1,1)$,\nwhich is the uniform distribution, meaning we have no information on the $p_i$.\nThen we compute the weights for the next $M$ pulls.\nAfter those $M$ pulls we compute new weights and continue on in this pattern.\nIn some applications we may not continue this forever, but might instead have some stopping\ncriterion for when we think we have identified the best arm.\nA common stopping criterion is to stop when one of the arms has a $95\\%$ probability\n(or some other specified probability) of being the optimal arm.\nIn the framework we have set up, this is already computed at each step and, thus, is easy to check for.\n\nWe now investigate how this process can be applied\nin web page testing.\n\n\\section*{Web Page Experiments}\nBandit problems provide a way to compare the success of different variations of a web page.\nSuppose a business wants to test new versions of a web page.\nThe goal of the page might be to get the user to click a certain link, make a purchase, etc.\nWhen the user does this, we call it a conversion.  The proportion of web page visits\nthat result in a conversion is called the conversion rate, or CvR.\nThe website designer wants to determine which variation of the web page has the best CvR.\n\nWe can model this situation as a bandit problem by considering each page as a different arm.\nEach page has some unknown probability (the CvR) that a user will perform the desired action.\nThe company then wants to experiment with giving different users different versions of\nthe page in order to determine which variation is most successful.\n\nThis method is the same as the one used by Google Analytics.  Take a moment to skim their description\nlocated here:\n\\url{http://analytics.blogspot.com/2013/01/multi-armed-bandit-experiments.html}.\n\nWe will apply the Thompson Sampling method described above (the same method\nused by Google)\nto solve this problem.  We will simulate the results and attempt to replicate Google's results\nfound at the website above.\n\n\\section*{The Experiment}\nHere we describe how we will design our web page experiment using bandits and how we will\nsimulate it.\nWe will have some number of variations of a web page, $n$.  
Each day, the web-page will receive\n100 visitors\n(for the purposes of our simulation).  Twice each day, at the beginning and after 50 visits,\nwe will compute the number of each variation to deliver to the next 50 visitors using the\nmethod described above.\n\n\\begin{problem}\nWrite a function \\li{sim_day} that simulates the experiment for one day.\nThe function should accept a vector of length $n$ of the true probabilities (CvR)\nfor different web page variations.  It should also accept the state of the variations\nat the beginning of the day; this is an $n \\times 2$ array with the number of previous\nsuccesses plus one in the first column and the number of previous failures plus one in\nthe second column (so if it is the first day, the state for any arm would be $(1,1)$,\ncorresponding to a Beta$(1,1)$ distribution).\n\nThe function can be outlined like this: first compute how many times each variation\nshould be used in the next 50 visits (refer to these numbers as the ``pulls\").\nWhen determining the pulls, you will need to use\nthe \\li{sim_data} function.  Use 100 as the number of draws here and throughout this lab.\n\nNext, ``visit'' each page the number of times given by the pulls.\nUse the NumPy function \\li{numpy.random.binomial} to compute the outcome of each visit.\nTo illustrate, suppose \\li{cvr} is the length $n$ vector giving the true CvR values\nand \\li{pulls} is the length $n$ vector giving the number of times to visit each page\nin the next 50 visits. We calculate the outcome of visiting page \\li{i} the\nspecified number of times as follows:\n\\begin{lstlisting}\n>>> outcome = np.random.binomial(pulls[i], cvr[i])\n\\end{lstlisting}\n\nAs you make these 50 visits, update the state of each arm by adding the outcome of the visits\nto each page to the corresponding entry of the state vector.\nTo illustrate, suppose \\li{state} is the $n \\times 2$ vector giving the beta parameters\nfor each web page. After we compute \\li{outcome} as shown above, we update the\nstate of the $i$-th web page as follows:\n\\begin{lstlisting}\n>>> state[i,0] += outcome\n>>> state[i,1] += pulls[i] - outcome\n\\end{lstlisting}\n\nOnce you have finished these 50 visits and updated the state of each arm,\nrepeat the process for the next 50 visits (i.e. recompute the pulls,\nvisit the web pages, and update the state of each arm).\n\nAfter these 100 total visits,\nthe function should return the resulting $n\\times 2$ array giving the states of the web pages,\nas well as the length $n$ array giving the number of pulls for each arm used for the final\n50 visits.\n\\end{problem}\n\nIn this manner we continue from day to day, always updating the state of each arm\n(the beta distribution for the CvR of each arm).  
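\n\nFor reference, here is one possible sketch (in NumPy) of the helper functions described in the problems above. The function names match the problems; the proportional rounding rule in \\li{get_pulls}, which hands any leftover pulls to the current best arm, is just one reasonable choice:\n\\begin{lstlisting}\nimport numpy as np\n\ndef sim_data(state, k):\n    # One row per simulation: k draws from each arm's beta distribution.\n    return np.random.beta(state[:, 0], state[:, 1], size=(k, state.shape[0]))\n\ndef arm_probs(data):\n    # Fraction of simulations in which each arm had the largest draw.\n    k, n = data.shape\n    best = np.argmax(data, axis=1)\n    return np.bincount(best, minlength=n) / float(k)\n\ndef get_pulls(probs, M):\n    # Allocate M pulls proportionally; leftover pulls go to the best arm.\n    pulls = np.floor(M * probs).astype(int)\n    pulls[np.argmax(probs)] += M - pulls.sum()\n    return pulls\n\ndef sim_day(cvr, state, n_sim=100):\n    # Two batches of 50 visits, re-weighting before each batch.\n    for _ in range(2):\n        pulls = get_pulls(arm_probs(sim_data(state, n_sim)), 50)\n        outcomes = np.random.binomial(pulls, cvr)\n        state[:, 0] += outcomes            # successes\n        state[:, 1] += pulls - outcomes    # failures\n    return state, pulls\n\\end{lstlisting}\n\n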
We have three criteria to determine\nwhen to stop the experiment.\nFirst, the experiment must run for at least two weeks to make sure the\nresults are not overly influenced by a small number of random draws.\n\nThe second stopping criterion is that there be a $95\\%$ probability that one\nof the variations is the best variation.\nThis is the same as saying that, of the weights for each variation, the largest\nis greater than $.95$.\n\nIt may seem that these two criteria should be enough; however, in some cases, the\nexperiment could last a very long time using just these criteria.\nFor example, consider the case where two of the web page variations have nearly the same CvR.\nIn this case it will be very difficult to determine which is best.\nIt will also not be very important, since the results are so similar.\nThus we use a measure called the \\emph{potential value remaining}\nin the experiment as the third criterion.  The potential value remaining is computed by\nsimulating many draws for each arm.  Using this data, the potential value remaining\nfor arm $i$ is obtained by computing the following for each simulated data point:\n\\begin{equation}\\label{valrem}\n\\frac{\\theta_{\\max} - \\theta^*}{\\theta^*},\n\\end{equation}\nwhere $\\theta_{\\max}$ is the largest value in the random draw and $\\theta^*$ is the\nvalue of the arm that is currently believed to be the best\n(the arm with the highest weighting, or probability of being optimal).\n\nThe result is some distribution of numbers between $0$ and $1$ that we can think\nof as the distribution of value remaining.  For example, if $50\\%$ of the numbers are 0,\nthen about $50\\%$ of the time the arm that is currently believed optimal will perform the best.\nThe potential value remaining is the $95$th percentile of this distribution.\nIf the potential value remaining were $.2$, we could interpret it as meaning\nthat there is about a $5\\%$ chance that another arm beats the current best arm by $.2$ or more.\nWe stop the experiment if this value is less than $1\\%$ of the current best arm's CvR.\nThis way we stop the experiment if there seems to be little chance of improvement over\nthe current best arm, regardless of whether we've met the $95\\%$ tolerance for the weights.\n\nThe value remaining can be computed using the following code (note that both NumPy and the \\li{mstats} module from SciPy must be imported):\n\\begin{lstlisting}\nimport numpy as np\nfrom scipy.stats import mstats\n\ndef val_remaining(data, prob):\n    # index of the arm currently believed best\n    champ_ind = np.argmax(prob)\n    # largest sampled value in each simulated draw\n    thetaM = np.amax(data, 1)\n    # value remaining for each simulated data point\n    valrem = (thetaM - data[:,champ_ind])/data[:,champ_ind]\n    # potential value remaining: the 95th percentile\n    pvr = mstats.mquantiles(valrem, .95)\n    return pvr\n\\end{lstlisting}\nwhere \\li{data} is simulated using the \\li{sim_data} function\nand \\li{prob} is a vector containing the probabilities that each arm is optimal\n(obtained from the \\li{arm_probs} function).\n\n
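As a quick illustration of how these pieces fit together (the beta parameters below are hypothetical):\n\\begin{lstlisting}\n>>> state = np.array([[60., 40.], [55., 45.]])  # two arms, made-up posteriors\n>>> data = sim_data(state, 100)\n>>> probs = arm_probs(data)\n>>> pvr = val_remaining(data, probs)\n\\end{lstlisting}\n\n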
\\begin{problem}\nWrite a function \\li{sim_experiment} that simulates the experiment described above,\nusing the three stopping criteria just described.\nThe function should accept a vector of the true probabilities of the arms.\n\nYour code can be structured as follows.\nFirst, initialize the state vector to reflect our Beta$(1,1)$ prior:\n\\begin{lstlisting}\n>>> state = np.ones((n,2))\n\\end{lstlisting}\n(here \\li{n} is the number of arms, as usual).\nIt will also be helpful to have a list \\li{pull_list} that stores\nthe pull vectors from each day of simulation.\n\\begin{lstlisting}\n>>> pull_list = []\n\\end{lstlisting}\n\nNow we are ready to launch into the iteration.\nWrite a while-loop that checks for the stopping criteria.\nIt might look something like this:\n\\begin{lstlisting}\nwhile (days<14) or (max_p <= .95 and max_pbar/100 <= pvr):\n\\end{lstlisting}\nHere, \\li{days} is a count variable keeping track of how many days (iterations) the experiment\nhas been running. The variable \\li{max_p} gives the maximal entry of the vector of\nprobabilities returned by \\li{arm_probs}. The variable \\li{max_pbar} gives the value\n$a_i/(a_i+b_i)$, where $i$ is the index corresponding to the arm with the highest\nprobability of being the best arm. Finally, \\li{pvr} is the value returned by\n\\li{val_remaining}. You will need to initialize these four variables to appropriate\nvalues before the while loop.\n\nIn each iteration, run your \\li{sim_day} function to simulate one day of the experiment:\n\\begin{lstlisting}\n>>> state, pulls = sim_day(cvr, state)\n\\end{lstlisting}\nWe next need to update our stopping criteria variables.\nAs described above, simulate data from the current state, and calculate the\nprobability vector of each arm being the best:\n\\begin{lstlisting}\n>>> data = sim_data(state, n_sim)\n>>> probs = arm_probs(data)\n\\end{lstlisting}\nCompute the maximum probability, as well as the corresponding maximal $\\bar{p_i}$:\n\\begin{lstlisting}\n>>> max_ind = np.argmax(probs)\n>>> max_p = probs[max_ind]\n>>> max_pbar = state[max_ind, 0]/(state[max_ind,:].sum())\n\\end{lstlisting}\nFinally, compute the potential value remaining:\n\\begin{lstlisting}\n>>> pvr = val_remaining(data, probs)\n\\end{lstlisting}\nDon't forget to increment your \\li{days} variable, as well as store the pulls from\nthat day:\n\\begin{lstlisting}\n>>> days += 1\n>>> pull_list.append(pulls)\n\\end{lstlisting}\n\nOnce the iterations terminate, return the state vector (\\li{state}),\nreturn the pulls from each day in the form of an array (\\li{np.array(pull_list)}),\nreturn the optimal arm (given by \\li{max_ind}), and finally the total number\nof days that the simulation ran (\\li{days}).\n\\end{problem}\n\nNow let's see how our bandit performs with specific examples.\n\n\\begin{problem}\nSuppose a web page has two variations and the true CvR of the original is\n$.04$ and the true CvR of the new variation is $.05$.\nCreate a plot similar to Figure~\\ref{fig:weights1} that shows how the proportion of visits\nassigned to the pages changes from day to day until the optimal page is chosen and the experiment\nstops:\n\\begin{lstlisting}\n>>> cvr = np.array([.04,.05])\n>>> n = cvr.size\n>>> state, pulls, max_ind, days = sim_experiment(cvr)\n>>> d = np.arange(days)\n>>> for i in xrange(n):\n>>>     plt.plot(d, pulls[:,i])\n>>> plt.show()\n\\end{lstlisting}\n\nNext, run the same simulation 200 times and keep track of how many days\nthe experiment took in each case in an array called \\li{dayvec}.\nCreate a histogram that shows the\nnumber of days it takes to complete the experiment.\nThe following code will create such a histogram:\n\\begin{lstlisting}\n>>> hist, bins = np.histogram(dayvec, bins = 12)\n>>> width = (bins[1]-bins[0])\n>>> center = (bins[:-1]+bins[1:]) / 2\n>>> plt.bar(center, hist, align = 'center', width = width, color = 'g')\n>>> plt.show()\n\\end{lstlisting}\nAlso track which arm is determined to be optimal in each simulation.\nWhat percent of the time did the bandit find the optimal arm?\n\nCreate the same two types of plots, this time with six variations having\ntrue CvR values $.04,.02,.03,.035,.045,.05$.  
This time only run the simulation 100 times.\nWhat percent of the time did the bandit find the optimal arm in this case?\n\\end{problem}\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[t]{.49\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{weights1.pdf}\n\\caption{Optimal arm probabilities for two arms.}\n\\label{fig:weights1}\n\\end{subfigure}\n\\begin{subfigure}[t]{.49\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{weights2.pdf}\n\\caption{Optimal arm probabilities for six arms.}\n\\label{fig:weights2}\n\\end{subfigure}\n\\end{figure}\n\n% \\begin{figure}[h]\n% \\centering\n% \\includegraphics[width=\\textwidth]{weights2.pdf}\n% \\caption{Optimal arm probabilities in the six arm case}\n% \\label{fig:weights2}\n% \\end{figure}\n\n\n\\section*{Comparison with Classical Tests}\nA more classical approach to this problem would be to split traffic between each\nvariation for a predetermined amount of time, which should give enough data to\ndetermine the best arm with some level of confidence.\nUsing the bandit approach described here has significant advantages over a classical test.\nThere are two main reasons why the bandit approach is more efficient.\n\nThe first reason is that the bandit method generally converges more quickly.\nA standard test would require splitting the web page views between the different\nvariations over a long period of time.  According to Google's explanation in the\nwebsite mentioned at the beginning of this lab, the two arm case would take 223 days\nand the 6 arm case would take 919 days.  The results from the simulations you performed\nshould show that on average the bandit method finishes much faster.\nThere are other ways we could choose our stopping criteria that may result in even shorter experiment times.\nIn general, we can always adjust the tolerance of our stopping criteria to shorten experiment time or increase accuracy.\n\nThe second reason the bandit approach is more efficient is that, as we gain more information,\nwe allocate more visits to the variation that we believe has a better CvR.\nIn the classical method we would split the visits evenly until the end of the experiment.\nThis way we gain many more conversions during testing than we would using classical tests.\n", "meta": {"hexsha": "ed2d1d9809f5974517568592dab08fb7e663b70d", "size": 30891, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Labs/MultiArmedBandit/Bandits.tex", "max_stars_repo_name": "rachelwebb/numerical_computing", "max_stars_repo_head_hexsha": "e7416b43b97976060f6875fa46c7dca20a9f635f", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Labs/MultiArmedBandit/Bandits.tex", "max_issues_repo_name": "rachelwebb/numerical_computing", "max_issues_repo_head_hexsha": "e7416b43b97976060f6875fa46c7dca20a9f635f", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Labs/MultiArmedBandit/Bandits.tex", "max_forks_repo_name": "rachelwebb/numerical_computing", "max_forks_repo_head_hexsha": "e7416b43b97976060f6875fa46c7dca20a9f635f", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-08T01:19:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-08T01:19:23.000Z", "avg_line_length": 50.8912685338, 
"max_line_length": 120, "alphanum_fraction": 0.766210223, "num_tokens": 7697, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5694849334289454}}
{"text": "%* glpk01.tex *%\n\n\\chapter{Introduction}\n\nGLPK (\\underline{G}NU \\underline{L}inear \\underline{P}rogramming\n\\underline{K}it) is a set of routines written in the ANSI C programming\nlanguage and organized in the form of a callable library. It is\nintended for solving linear programming (LP), mixed integer programming\n(MIP), and other related problems.\n\n\\section{LP problem}\n\\label{seclp}\n\nGLPK assumes the following formulation of {\\it linear programming (LP)}\nproblem:\n\n\\medskip\\noindent\n\\hspace{.5in} minimize (or maximize)\n$$z = c_1x_{m+1} + c_2x_{m+2} + \\dots + c_nx_{m+n} + c_0 \\eqno (1.1)$$\n\\hspace{.5in} subject to linear constraints\n$$\n\\begin{array}{r@{\\:}c@{\\:}r@{\\:}c@{\\:}r@{\\:}c@{\\:}r}\nx_1&=&a_{11}x_{m+1}&+&a_{12}x_{m+2}&+ \\dots +&a_{1n}x_{m+n} \\\\\nx_2&=&a_{21}x_{m+1}&+&a_{22}x_{m+2}&+ \\dots +&a_{2n}x_{m+n} \\\\\n\\multicolumn{7}{c}\n{.\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .\\ \\ .} \\\\\nx_m&=&a_{m1}x_{m+1}&+&a_{m2}x_{m+2}&+ \\dots +&a_{mn}x_{m+n} \\\\\n\\end{array} \\eqno (1.2)\n$$\n\\hspace{.5in} and bounds of variables\n$$\n\\begin{array}{r@{\\:}c@{\\:}c@{\\:}c@{\\:}l}\nl_1&\\leq&x_1&\\leq&u_1 \\\\\nl_2&\\leq&x_2&\\leq&u_2 \\\\\n\\multicolumn{5}{c}{.\\ \\ .\\ \\ .\\ \\ .\\ \\ .}\\\\\nl_{m+n}&\\leq&x_{m+n}&\\leq&u_{m+n} \\\\\n\\end{array} \\eqno (1.3)\n$$\n\n\\medskip\\noindent\nwhere: $x_1, x_2, \\dots, x_m$ are auxiliary variables;\n$x_{m+1}, x_{m+2}, \\dots, x_{m+n}$ are structural variables;\n$z$ is the objective function;\n$c_1, c_2, \\dots, c_n$ are objective coefficients;\n$c_0$ is the constant term (``shift'') of the objective function;\n$a_{11}, a_{12}, \\dots, a_{mn}$ are constraint coefficients;\n$l_1, l_2, \\dots, l_{m+n}$ are lower bounds of variables;\n$u_1, u_2, \\dots, u_{m+n}$ are upper bounds of variables.\n\nAuxiliary variables are also called {\\it rows}, because they correspond\nto rows of the constraint matrix (i.e. a matrix built of the constraint\ncoefficients). Similarly, structural variables are also called\n{\\it columns}, because they correspond to columns of the constraint\nmatrix.\n\nBounds of variables can be finite as well as infinite. Besides, lower\nand upper bounds can be equal to each other. 
Thus, the following types\nof variables are possible:\n\n\\begin{center}\n\\begin{tabular}{r@{}c@{}ll}\n\\multicolumn{3}{c}{Bounds of variable} & Type of variable \\\\\n\\hline\n$-\\infty <$ &$\\ x_k\\ $& $< +\\infty$ & Free (unbounded) variable \\\\\n$l_k \\leq$ &$\\ x_k\\ $& $< +\\infty$  & Variable with lower bound \\\\\n$-\\infty <$ &$\\ x_k\\ $& $\\leq u_k$  & Variable with upper bound \\\\\n$l_k \\leq$ &$\\ x_k\\ $& $\\leq u_k$   & Double-bounded variable \\\\\n$l_k =$ &$\\ x_k\\ $& $= u_k$         & Fixed variable \\\\\n\\end{tabular}\n\\end{center}\n\n\\noindent\nNote that the types of variables shown above are applicable to\nstructural as well as to auxiliary variables.\n\nTo solve the LP problem (1.1)---(1.3) is to find values of all the\nstructural and auxiliary variables which:\n\n\\vspace*{-10pt}\n\n\\begin{itemize}\\setlength{\\itemsep}{0pt}\n\\item satisfy all the linear constraints (1.2), and\n\n\\item are within their bounds (1.3), and\n\n\\item provide the smallest (in case of minimization) or largest (in case of\nmaximization) value of the objective function (1.1).\n\\end{itemize}\n\n\\section{MIP problem}\n\nA {\\it mixed integer linear programming (MIP)} problem is an LP problem\nin which some variables are additionally required to be integer.\n\nGLPK assumes that a MIP problem has the same formulation as an ordinary\n(pure) LP problem (1.1)---(1.3), i.e. it includes auxiliary and structural\nvariables, which may have lower and/or upper bounds. However, in the case\nof a MIP problem some variables may be required to be integer. This\nadditional constraint means that the value of each {\\it integer variable}\nmust be an integer number. (Note that GLPK allows only\nstructural variables to be of integer kind.)\n\n\\section{Using the package}\n\n\\subsection{Brief example}\n\nIn order to understand what GLPK is from the user's standpoint,\nconsider the following simple LP problem:\n\n\\medskip\n\n\\noindent\n\\hspace{.5in} maximize\n$$z = 10 x_1 + 6 x_2 + 4 x_3$$\n\\hspace{.5in} subject to\n$$\n\\begin{array}{r@{\\:}c@{\\:}r@{\\:}c@{\\:}r@{\\:}c@{\\:}r}\nx_1 &+&x_2 &+&x_3 &\\leq 100 \\\\\n10 x_1 &+& 4 x_2 & +&5 x_3 & \\leq 600 \\\\\n2 x_1 &+& 2 x_2 & +& 6 x_3 & \\leq 300 \\\\\n\\end{array}\n$$\n\\hspace{.5in} where all variables are non-negative\n$$x_1 \\geq 0, \\ x_2 \\geq 0, \\ x_3 \\geq 0$$\n\nFirst, this LP problem should be transformed to the standard form\n(1.1)---(1.3). This can be easily done by introducing auxiliary\nvariables, one for each original inequality constraint. 
Thus, the\nproblem can be reformulated as follows:\n\n\\medskip\n\n\\noindent\n\\hspace{.5in} maximize\n$$z = 10 x_1 + 6 x_2 + 4 x_3$$\n\\hspace{.5in} subject to\n$$\n\\begin{array}{r@{\\:}c@{\\:}r@{\\:}c@{\\:}r@{\\:}c@{\\:}r}\np& = &x_1 &+&x_2 &+&x_3 \\\\\nq& = &10 x_1 &+& 4 x_2 &+& 5 x_3 \\\\\nr& = &2  x_1 &+& 2 x_2 &+& 6 x_3 \\\\\n\\end{array}\n$$\n\\hspace{.5in} and bounds of variables\n$$\n\\begin{array}{ccc}\n\\nonumber -\\infty < p \\leq 100 && 0 \\leq x_1 < +\\infty \\\\\n\\nonumber -\\infty < q \\leq 600 && 0 \\leq x_2 < +\\infty \\\\\n\\nonumber -\\infty < r \\leq 300 && 0 \\leq x_3 < +\\infty \\\\\n\\end{array}\n$$\n\n\\medskip\n\nwhere $p, q, r$ are auxiliary variables (rows), and $x_1, x_2, x_3$ are\nstructural variables (columns).\n\nThe example C program shown below uses GLPK API routines in order to\nsolve this LP problem.\\footnote{If you just need to solve LP or MIP\ninstance, you may write it in MPS or CPLEX LP format and then use the\nGLPK stand-alone solver to obtain a solution. This is much less\ntime-consuming than programming in C with GLPK API routines.}\n\n\\begin{footnotesize}\n\\begin{verbatim}\n/* sample.c */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <glpk.h>\n\nint main(void)\n{     glp_prob *lp;\n      int ia[1+1000], ja[1+1000];\n      double ar[1+1000], z, x1, x2, x3;\ns1:   lp = glp_create_prob();\ns2:   glp_set_prob_name(lp, \"sample\");\ns3:   glp_set_obj_dir(lp, GLP_MAX);\ns4:   glp_add_rows(lp, 3);\ns5:   glp_set_row_name(lp, 1, \"p\");\ns6:   glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 100.0);\ns7:   glp_set_row_name(lp, 2, \"q\");\ns8:   glp_set_row_bnds(lp, 2, GLP_UP, 0.0, 600.0);\ns9:   glp_set_row_name(lp, 3, \"r\");\ns10:  glp_set_row_bnds(lp, 3, GLP_UP, 0.0, 300.0);\ns11:  glp_add_cols(lp, 3);\ns12:  glp_set_col_name(lp, 1, \"x1\");\ns13:  glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);\ns14:  glp_set_obj_coef(lp, 1, 10.0);\ns15:  glp_set_col_name(lp, 2, \"x2\");\ns16:  glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);\ns17:  glp_set_obj_coef(lp, 2, 6.0);\ns18:  glp_set_col_name(lp, 3, \"x3\");\ns19:  glp_set_col_bnds(lp, 3, GLP_LO, 0.0, 0.0);\ns20:  glp_set_obj_coef(lp, 3, 4.0);\ns21:  ia[1] = 1, ja[1] = 1, ar[1] =  1.0; /* a[1,1] =  1 */\ns22:  ia[2] = 1, ja[2] = 2, ar[2] =  1.0; /* a[1,2] =  1 */\ns23:  ia[3] = 1, ja[3] = 3, ar[3] =  1.0; /* a[1,3] =  1 */\ns24:  ia[4] = 2, ja[4] = 1, ar[4] = 10.0; /* a[2,1] = 10 */\ns25:  ia[5] = 3, ja[5] = 1, ar[5] =  2.0; /* a[3,1] =  2 */\ns26:  ia[6] = 2, ja[6] = 2, ar[6] =  4.0; /* a[2,2] =  4 */\ns27:  ia[7] = 3, ja[7] = 2, ar[7] =  2.0; /* a[3,2] =  2 */\ns28:  ia[8] = 2, ja[8] = 3, ar[8] =  5.0; /* a[2,3] =  5 */\ns29:  ia[9] = 3, ja[9] = 3, ar[9] =  6.0; /* a[3,3] =  6 */\ns30:  glp_load_matrix(lp, 9, ia, ja, ar);\ns31:  glp_simplex(lp, NULL);\ns32:  z = glp_get_obj_val(lp);\ns33:  x1 = glp_get_col_prim(lp, 1);\ns34:  x2 = glp_get_col_prim(lp, 2);\ns35:  x3 = glp_get_col_prim(lp, 3);\ns36:  printf(\"\\nz = %g; x1 = %g; x2 = %g; x3 = %g\\n\",\n         z, x1, x2, x3);\ns37:  glp_delete_prob(lp);\n      return 0;\n}\n\n/* eof */\n\\end{verbatim}\n\\end{footnotesize}\n\nThe statement \\verb|s1| creates a problem object. Being created the\nobject is initially empty. 
The statement \\verb|s2| assigns a symbolic\nname to the problem object.\n\nThe statement \\verb|s3| calls the routine \\verb|glp_set_obj_dir| in\norder to set the optimization direction flag, where \\verb|GLP_MAX|\nmeans maximization.\n\nThe statement \\verb|s4| adds three rows to the problem object.\n\nThe statement \\verb|s5| assigns the symbolic name `\\verb|p|' to the\nfirst row, and the statement \\verb|s6| sets the type and bounds of the\nfirst row, where \\verb|GLP_UP| means that the row has an upper bound.\nThe statements \\verb|s7|, \\verb|s8|, \\verb|s9|, \\verb|s10| are used in\nthe same way in order to assign the symbolic names `\\verb|q|' and\n`\\verb|r|' to the second and third rows and set their types and bounds.\n\nThe statement \\verb|s11| adds three columns to the problem object.\n\nThe statement \\verb|s12| assigns the symbolic name `\\verb|x1|' to the\nfirst column, the statement \\verb|s13| sets the type and bounds of the\nfirst column, where \\verb|GLP_LO| means that the column has a lower\nbound, and the statement \\verb|s14| sets the objective coefficient for\nthe first column. The statements \\verb|s15|---\\verb|s20| are used in\nthe same way in order to assign the symbolic names `\\verb|x2|' and\n`\\verb|x3|' to the second and third columns and set their types,\nbounds, and objective coefficients.\n\nThe statements \\verb|s21|---\\verb|s29| prepare non-zero elements of the\nconstraint matrix (i.e. constraint coefficients). Row indices of the\nelements are stored in the array \\verb|ia|, column indices are stored in\nthe array \\verb|ja|, and numerical values of the corresponding elements are\nstored in the array \\verb|ar|. Then the statement \\verb|s30| calls\nthe routine \\verb|glp_load_matrix|, which loads information from these\nthree arrays into the problem object.\n\nNow all data have been entered into the problem object, and therefore\nthe statement \\verb|s31| calls the routine \\verb|glp_simplex|, which is\na driver to the simplex method, in order to solve the LP problem. This\nroutine finds an optimal solution and stores all relevant information\nback into the problem object.\n\nThe statement \\verb|s32| obtains a computed value of the objective\nfunction, and the statements \\verb|s33|---\\verb|s35| obtain computed\nvalues of the structural variables (columns), which correspond to the\noptimal basic solution found by the solver.\n\nThe statement \\verb|s36| writes the optimal solution to the standard\noutput. 
The printout may look as follows:\n\n\\begin{footnotesize}\n\\begin{verbatim}\n*     0:   objval =   0.000000000e+00   infeas =   0.000000000e+00 (0)\n*     2:   objval =   7.333333333e+02   infeas =   0.000000000e+00 (0)\nOPTIMAL SOLUTION FOUND\n\nz = 733.333; x1 = 33.3333; x2 = 66.6667; x3 = 0\n\\end{verbatim}\n\\end{footnotesize}\n\nFinally, the statement \\verb|s37| calls the routine\n\\verb|glp_delete_prob|, which frees all the memory allocated to the\nproblem object.\n\n\\subsection{Compiling}\n\nThe GLPK package has only one header file, \\verb|glpk.h|, which should\nbe available when compiling a C (or C++) program using GLPK API routines.\n\nIf the header file is installed in the default location\n\\verb|/usr/local/include|, the following typical command may be used to\ncompile, say, the example C program described above with the GNU C\ncompiler:\n\n\\begin{verbatim}\n   $ gcc -c sample.c\n\\end{verbatim}\n\nIf \\verb|glpk.h| is not in the default location, the corresponding\ndirectory containing it should be made known to the C compiler through\nthe \\verb|-I| option, for example:\n\n\\begin{verbatim}\n   $ gcc -I/foo/bar/glpk-4.15/include -c sample.c\n\\end{verbatim}\n\nIn any case, the compilation results in an object file \\verb|sample.o|.\n\n\\subsection{Linking}\n\nThe GLPK library is a single file \\verb|libglpk.a|. (On systems which\nsupport shared libraries there may also be a shared version of the\nlibrary \\verb|libglpk.so|.)\n\nIf the library is installed in the default\nlocation \\verb|/usr/local/lib|, the following typical command may be\nused to link, say, the example C program described above against\nthe library:\n\n\\begin{verbatim}\n   $ gcc sample.o -lglpk -lm\n\\end{verbatim}\n\nIf the GLPK library is not in the default location, the corresponding\ndirectory containing it should be made known to the linker through\nthe \\verb|-L| option, for example:\n\n\\begin{verbatim}\n   $ gcc -L/foo/bar/glpk-4.15 sample.o -lglpk -lm\n\\end{verbatim}\n\nDepending on the configuration of the package, linking against the GLPK\nlibrary may require optional libraries, in which case these libraries\nshould also be made known to the linker, for example:\n\n\\begin{verbatim}\n   $ gcc sample.o -lglpk -lgmp -lm\n\\end{verbatim}\n\nFor more details about configuration options of the GLPK package see\nAppendix \\ref{install}, page \\pageref{install}.\n\n%* eof *%\n", "meta": {"hexsha": "29ed67cce91c6721709f40cdf295ad44a6b309c9", "size": 12240, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sdk/mmitss/src/glpk-4.55/doc/glpk01.tex", "max_stars_repo_name": "OSADP/MMITSS_AZ_FIELD", "max_stars_repo_head_hexsha": "b4c870061c518eddfa0152938ab60abc2a31023f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sdk/mmitss/src/glpk-4.55/doc/glpk01.tex", "max_issues_repo_name": "OSADP/MMITSS_AZ_FIELD", "max_issues_repo_head_hexsha": "b4c870061c518eddfa0152938ab60abc2a31023f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sdk/mmitss/src/glpk-4.55/doc/glpk01.tex", "max_forks_repo_name": "OSADP/MMITSS_AZ_FIELD", "max_forks_repo_head_hexsha": "b4c870061c518eddfa0152938ab60abc2a31023f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": 
"2017-05-04T16:41:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-01T20:32:44.000Z", "avg_line_length": 35.5813953488, "max_line_length": 71, "alphanum_fraction": 0.6861928105, "num_tokens": 4330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5694849243185662}}
{"text": "\\subsection{Kaplan-Meier Survival Analysis}\n\\label{sec:kaplan-meier}\n\n\\noindent{\\bf Description}\n\\smallskip\n\n\nSurvival analysis examines the time needed for a particular event of interest to occur.\nIn medical research, for example, the prototypical such event is the death of a patient but the methodology can be applied to other application areas, e.g., completing a task by an individual in a psychological experiment or the failure of electrical components in engineering.   \nKaplan-Meier or (product limit) method is a simple non-parametric approach for estimating survival probabilities from both censored and uncensored survival times.\\\\\n\n \n\n\\smallskip\n\\noindent{\\bf Usage}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\it%\n{\\tt{}-f }path/\\/{\\tt{}KM.dml}\n{\\tt{} -nvargs}\n{\\tt{} X=}path/file\n{\\tt{} TE=}path/file\n{\\tt{} GI=}path/file\n{\\tt{} SI=}path/file\n{\\tt{} O=}path/file\n{\\tt{} M=}path/file\n{\\tt{} T=}path/file\n{\\tt{} alpha=}double\n{\\tt{} etype=}greenwood$\\mid$peto\n{\\tt{} ctype=}plain$\\mid$log$\\mid$log-log\n{\\tt{} ttype=}none$\\mid$log-rank$\\mid$wilcoxon\n{\\tt{} fmt=}format\n\n}\n\n\\smallskip\n\\noindent{\\bf Arguments}\n\\begin{Description}\n\\item[{\\tt X}:]\nLocation (on HDFS) to read the input matrix of the survival data containing: \n\\begin{Itemize}\n\t\\item timestamps,\n\t\\item whether event occurred (1) or data is censored (0),\n\t\\item a number of factors (i.e., categorical features) for grouping and/or stratifying\n\\end{Itemize}\n\\item[{\\tt TE}:]\nLocation (on HDFS) to read the 1-column matrix $TE$ that contains the column indices of the input matrix $X$ corresponding to timestamps (first entry) and event information (second entry) \n\\item[{\\tt GI}:]\nLocation (on HDFS) to read the 1-column matrix $GI$ that contains the column indices of the input matrix $X$ corresponding to the factors (i.e., categorical features) to be used for grouping\n\\item[{\\tt SI}:]\nLocation (on HDFS) to read the 1-column matrix $SI$ that contains the column indices of the input matrix $X$ corresponding to the factors (i.e., categorical features) to be used for grouping\n\\item[{\\tt O}:]\nLocation (on HDFS) to write the matrix containing the results of the Kaplan-Meier analysis $KM$\n\\item[{\\tt M}:]\nLocation (on HDFS) to write Matrix $M$ containing the following statistics: total number of events, median and its confidence intervals; if survival data for multiple groups and strata are provided each row of $M$ contains the above statistics per group and stratum.\n\\item[{\\tt T}:]\nIf survival data from multiple groups is available and {\\tt ttype=log-rank} or {\\tt ttype=wilcoxon}, location (on HDFS) to write the two matrices that contains the result of the (stratified) test for comparing these groups; see below for details.\n\\item[{\\tt alpha}:](default:\\mbox{ }{\\tt 0.05})\nParameter to compute $100(1-\\alpha)\\%$ confidence intervals for the survivor function and its median \n\\item[{\\tt etype}:](default:\\mbox{ }{\\tt \"greenwood\"})\nParameter to specify the error type according to \"greenwood\" or \"peto\"\n\\item[{\\tt ctype}:](default:\\mbox{ }{\\tt \"log\"})\nParameter to modify the confidence interval; \"plain\" keeps the lower and upper bound of the confidence interval unmodified,\t\"log\" corresponds to logistic transformation and \"log-log\" corresponds to the complementary log-log transformation\n\\item[{\\tt ttype}:](default:\\mbox{ }{\\tt \"none\"})\nIf survival data for multiple groups is available 
specifies which test to perform for comparing \nsurvival data across multiple groups: \"none\", \"log-rank\" or \"wilcoxon\" test\n\\item[{\\tt fmt}:] (default:\\mbox{ }{\\tt \"text\"})\nMatrix file output format, such as {\\tt text}, {\\tt mm}, or {\\tt csv};\nsee read/write functions in SystemML Language Reference for details.\n\\end{Description}\n\n\n\\noindent{\\bf Details}\n\\smallskip\n\nThe Kaplan-Meier estimate is a non-parametric maximum likelihood estimate (MLE) of the survival function $S(t)$, i.e., the probability of survival from the time origin to a given future time. \nAs an illustration, suppose that there are $n$ individuals with observed survival times $t_1,t_2,\\ldots,t_n$ out of which there are $r\\leq n$ distinct death times $t_{(1)}\\leq t_{(2)}\\leq \\ldots \\leq t_{(r)}$---since some of the observations may be censored, in the sense that the end-point of interest has not been observed for those individuals, and there may be more than one individual with the same survival time.\nLet $S(t_j)$ denote the probability of survival until time $t_j$, $d_j$ be the number of events at time $t_j$, and $n_j$ denote the number of individuals at risk (i.e., those who die at time $t_j$ or later). \nAssuming that the events occur independently, in the Kaplan-Meier method the probability of surviving from $t_j$ to $t_{j+1}$ is estimated from $S(t_j)$ and given by\n\\begin{equation*}\n\\hat{S}(t) = \\prod_{j=1}^{k} \\left( \\frac{n_j-d_j}{n_j} \\right),\n\\end{equation*}   \nfor $t_{(k)}\\leq t<t_{(k+1)}$, $k=1,2,\\ldots,r$, $\\hat{S}(t)=1$ for $t<t_{(1)}$, and $t_{(r+1)}=\\infty$. \nNote that the value of $\\hat{S}(t)$ is constant between event times and therefore\nthe estimate is a step function with jumps at observed event times.\nIf there are no censored data, this estimator would simply reduce to the empirical survivor function defined as $\\frac{n_j}{n}$. Thus, the Kaplan-Meier estimate can be seen as the generalization of the empirical survivor function that handles censored observations.\n\nThe methodology used in our {\\tt KM.dml} script closely follows~\\cite[Sec.~2]{collett2003:kaplanmeier}.\nFor completeness we briefly discuss the equations used in our implementation.\n\n% standard error of the survivor function\n\\textbf{Standard error of the survivor function.}\nThe standard error of the estimated survivor function (controlled by parameter {\\tt etype}) can be calculated as  \n\\begin{equation*}\n\\text{se} \\{\\hat{S}(t)\\} \\approx \\hat{S}(t) {\\bigg\\{ \\sum_{j=1}^{k} \\frac{d_j}{n_j(n_j - d_j)}\\biggr\\}}^{1/2},\n\\end{equation*}\nfor $t_{(k)}\\leq t<t_{(k+1)}$.\nThis equation is known as {\\it Greenwood's} formula.\nAn alternative approach is to apply {\\it Peto's} expression %~\\cite{PetoPABCHMMPS1979:kaplanmeier} \n\\begin{equation*}\n\\text{se}\\{\\hat{S}(t)\\}=\\frac{\\hat{S}(t)\\sqrt{1-\\hat{S}(t)}}{\\sqrt{n_k}},\n\\end{equation*}\nfor $t_{(k)}\\leq t<t_{(k+1)}$. \n%Note that this estimate is known to be conservative producing larger standard errors than they ought to be. The Greenwood estimate is therefore recommended for general use. \nOnce the standard error of $\\hat{S}$ has been found we compute the following types of confidence intervals (controlled by parameter {\\tt ctype}): \nThe ``plain'' $100(1-\\alpha)\\%$ confidence interval for $S(t)$ is computed using \n\\begin{equation*}\n\\hat{S}(t)\\pm z_{\\alpha/2} \\text{se}\\{\\hat{S}(t)\\}, \n\\end{equation*} \nwhere $z_{\\alpha/2}$ is the upper $\\alpha/2$-point of the standard normal distribution. 
\nAlternatively, we can apply the ``log'' transformation using \n\\begin{equation*}\n\\hat{S}(t)^{\\exp[\\pm z_{\\alpha/2} \\text{se}\\{\\hat{S}(t)\\}/\\hat{S}(t)]}\n\\end{equation*}\nor the ``log-log'' transformation using \n\\begin{equation*}\n\\hat{S}(t)^{\\exp [\\pm z_{\\alpha/2} \\text{se} \\{\\log [-\\log \\hat{S}(t)]\\}]}.\n\\end{equation*}\n\n% standard error of the median of survival times\n\\textbf{Median, its standard error and confidence interval.}\nDenote by $\\hat{t}(50)$ the estimated median of $\\hat{S}$, i.e.,\n$\\hat{t}(50)=\\min \\{ t_i \\mid \\hat{S}(t_i) < 0.5\\}$,\nwhere $t_i$ is the observed survival time for individual $i$.\nThe standard error of $\\hat{t}(50)$ is given by\n\\begin{equation*}\n\\text{se}\\{ \\hat{t}(50) \\} = \\frac{1}{\\hat{f}\\{\\hat{t}(50)\\}} \\text{se}[\\hat{S}\\{ \\hat{t}(50) \\}],\n\\end{equation*}\nwhere $\\hat{f}\\{ \\hat{t}(50) \\}$ can be found from\n\\begin{equation*}\n\\hat{f}\\{ \\hat{t}(50) \\} = \\frac{\\hat{S}\\{ \\hat{u}(50) \\} -\\hat{S}\\{ \\hat{l}(50) \\} }{\\hat{l}(50) - \\hat{u}(50)}. \n\\end{equation*}\nAbove, $\\hat{u}(50)$ is the largest survival time for which $\\hat{S}$ exceeds $0.5+\\epsilon$, i.e., $\\hat{u}(50)=\\max \\bigl\\{ t_{(j)} \\mid \\hat{S}(t_{(j)}) \\geq 0.5+\\epsilon \\bigr\\}$,\nand $\\hat{l}(50)$ is the smallest survival time for which $\\hat{S}$ is less than $0.5-\\epsilon$,\ni.e., $\\hat{l}(50)=\\min \\bigl\\{ t_{(j)} \\mid \\hat{S}(t_{(j)}) \\leq 0.5-\\epsilon \\bigr\\}$,\nfor small $\\epsilon$.\n\n\n% comparing two or more groups of data\n\\textbf{Log-rank test and Wilcoxon test.}\nOur implementation supports comparison of survival data from several groups using two non-parametric procedures (controlled by parameter {\\tt ttype}): the {\\it log-rank test} and the {\\it Wilcoxon test} (also known as the {\\it Breslow test}). \nAssume that the survival times in $g\\geq 2$ groups of survival data are to be compared. \nConsider the {\\it null hypothesis} that there is no difference in the survival times of the individuals in different groups. One way to examine the null hypothesis is to consider the difference between the observed numbers of deaths and the numbers expected under the null hypothesis.  
\nIn both tests we define the $U$-statistics ($U_{L}$ for the log-rank test and $U_{W}$ for the Wilcoxon test) to compare the observed and the expected number of deaths in groups $1,2,\\ldots,g-1$ as follows:\n\\begin{align*}\nU_{Lk} &= \\sum_{j=1}^{r}\\left( d_{kj} - \\frac{n_{kj}d_j}{n_j} \\right), \\\\\nU_{Wk} &= \\sum_{j=1}^{r}n_j\\left( d_{kj} - \\frac{n_{kj}d_j}{n_j} \\right),\n\\end{align*}\nwhere $d_{kj}$ is the number of deaths at time $t_{(j)}$ in group $k$, \n$n_{kj}$ is the number of individuals at risk at time $t_{(j)}$ in group $k$, and \n$k=1,2,\\ldots,g-1$ to form the vectors $U_L$ and $U_W$ with $(g-1)$ components.\nThe covariance between $U_{Lk}$ and $U_{Lk'}$ (the variance, when $k=k'$) is computed as\n\\begin{equation*}\nV_{Lkk'}=\\sum_{j=1}^{r} \\frac{n_{kj}d_j(n_j-d_j)}{n_j(n_j-1)} \\left( \\delta_{kk'}-\\frac{n_{k'j}}{n_j} \\right),\n\\end{equation*}\nfor $k,k'=1,2,\\ldots,g-1$, with\n\\begin{equation*}\n\\delta_{kk'} = \n\\begin{cases}\n1 & \\text{if } k=k'\\\\\n0 & \\text{otherwise.}\n\\end{cases}\n\\end{equation*}\nThese terms are combined in a {\\it variance-covariance} matrix $V_L$ (referred to as the $V$-statistic).\nSimilarly, the variance-covariance matrix for the Wilcoxon test $V_W$ is a matrix where the entry at position $(k,k')$ is given by\n\\begin{equation*}\nV_{Wkk'}=\\sum_{j=1}^{r} n_j^2 \\frac{n_{kj}d_j(n_j-d_j)}{n_j(n_j-1)} \\left( \\delta_{kk'}-\\frac{n_{k'j}}{n_j} \\right).\n\\end{equation*}\n\nUnder the null hypothesis of no group differences, the test statistics $U_L^\\top V_L^{-1} U_L$ for the log-rank test and  $U_W^\\top V_W^{-1} U_W$ for the Wilcoxon test have a Chi-squared distribution on $(g-1)$ degrees of freedom.\nOur {\\tt KM.dml} script also provides a stratified version of the log-rank or Wilcoxon test if requested.\nIn this case, the values of the $U$- and $V$-statistics are computed for each stratum and then combined over all strata.\n\n\n\\smallskip\n\\noindent{\\bf Returns}\n\\smallskip\n\n  \nBelow we list the results of the survival analysis computed by {\\tt KM.dml}. \nThe calculated statistics are stored in matrix $KM$ with the following schema:\n\\begin{itemize}\n\t\\item Column 1: timestamps \n\t\\item Column 2: number of individuals at risk\n\t\\item Column 3: number of events\n\t\\item Column 4: Kaplan-Meier estimate of the survivor function $\\hat{S}$ \n\t\\item Column 5: standard error of $\\hat{S}$\n\t\\item Column 6: lower bound of $100(1-\\alpha)\\%$ confidence interval for $\\hat{S}$\n\t\\item Column 7: upper bound of $100(1-\\alpha)\\%$ confidence interval for $\\hat{S}$\n\\end{itemize}\nNote that if survival data for multiple groups and/or strata is available, each collection of 7 columns in $KM$ stores the results per group and/or per stratum. \nIn this case $KM$ has $7g+7s$ columns, where $g\\geq 1$ and $s\\geq 1$ denote the number of groups and strata, respectively. \n\n\nAdditionally, {\\tt KM.dml} stores the following statistics in the 1-row matrix $M$ whose number of columns depends on the number of groups ($g$) and strata ($s$) in the data. Below $k$ denotes the number of factors used for grouping and $l$ denotes the number of factors used for stratifying. 
\n\\begin{itemize}\n\t\\item Columns 1 to $k$: unique combination of values in the $k$ factors used for grouping \n\t\\item Columns $k+1$ to $k+l$: unique combination of values in the $l$ factors used for stratifying  \n\t\\item Column $k+l+1$: total number of records \n\t\\item Column $k+l+2$: total number of events\n    \\item Column $k+l+3$: median of $\\hat{S}$\n    \\item Column $k+l+4$: lower bound of $100(1-\\alpha)\\%$ confidence interval for the median of $\\hat{S}$\n    \\item Column $k+l+5$: upper bound of $100(1-\\alpha)\\%$ confidence interval for the median of $\\hat{S}$. \n\\end{itemize}\nIf there is only 1 group and 1 stratum available $M$ will be a 1-row matrix with 5 columns where\n\\begin{itemize}\n\t\\item Column 1: total number of records\n\t\\item Column 2: total number of events\n\t\\item Column 3: median of $\\hat{S}$\n\t\\item Column 4: lower bound of $100(1-\\alpha)\\%$ confidence interval for the median of $\\hat{S}$\n\t\\item Column 5: upper bound of $100(1-\\alpha)\\%$ confidence interval for the median of $\\hat{S}$.\n\\end{itemize} \n\nIf a comparison of the survival data across multiple groups needs to be performed, {\\tt KM.dml} computes two matrices $T$ and $T\\_GROUPS\\_OE$ that contain a summary of the test. The 1-row matrix $T$ stores the following statistics: \n\\begin{itemize}\n\t\\item Column 1: number of groups in the survival data\n \t\\item Column 2: degree of freedom for Chi-squared distributed test statistic\n\t\\item Column 3: value of test statistic\n\t\\item Column 4: $P$-value.\n\\end{itemize}\nMatrix $T\\_GROUPS\\_OE$ contains the following statistics for each of $g$ groups:\n\\begin{itemize}\n\t\\item Column 1: number of events\n\t\\item Column 2: number of observed death times ($O$)\n\t\\item Column 3: number of expected death times ($E$)\n\t\\item Column 4: $(O-E)^2/E$\n\t\\item Column 5: $(O-E)^2/V$.\n\\end{itemize}\n\n\n\\smallskip\n\\noindent{\\bf Examples}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\tt\n\t\\hml -f KM.dml -nvargs X=/user/biadmin/X.mtx TE=/user/biadmin/TE\n\tGI=/user/biadmin/GI SI=/user/biadmin/SI O=/user/biadmin/kaplan-meier.csv\n\tM=/user/biadmin/model.csv alpha=0.01 etype=greenwood ctype=plain fmt=csv\n\t\n}\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\tt\n\t\\hml -f KM.dml -nvargs X=/user/biadmin/X.mtx TE=/user/biadmin/TE\n\tGI=/user/biadmin/GI SI=/user/biadmin/SI O=/user/biadmin/kaplan-meier.csv\n\tM=/user/biadmin/model.csv T=/user/biadmin/test.csv alpha=0.01 etype=peto \n\tctype=log ttype=log-rank fmt=csv\n\t\n}\n\n%\n%\\smallskip\n%\\noindent{\\bf References}\n%\\begin{itemize}\n%\t\\item\n%\tR.~Peto, M.C.~Pike, P.~Armitage, N.E.~Breslow, D.R.~Cox, S.V.~Howard, N.~Mantel, K.~McPherson, J.~Peto, and P.G.~Smith.\n%\t\\newblock Design and analysis of randomized clinical trials requiring prolonged observation of each patient.\n%\t\\newblock {\\em British Journal of Cancer}, 35:1--39, 1979.\n%\\end{itemize}\n\n%@book{collett2003:kaplanmeier,\n%\ttitle={Modelling Survival Data in Medical Research, Second Edition},\n%\tauthor={Collett, D.},\n%\tisbn={9781584883258},\n%\tlccn={2003040945},\n%\tseries={Chapman \\& Hall/CRC Texts in Statistical Science},\n%\tyear={2003},\n%\tpublisher={Taylor \\& Francis}\n%}\n", "meta": {"hexsha": "754f6c211f444883a6fa71be38e9c9c569ae1459", "size": 14996, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "system-ml/docs/Algorithms Reference/KaplanMeier.tex", "max_stars_repo_name": "alcedo/systemml", "max_stars_repo_head_hexsha": 
"4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-03-17T18:03:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-25T08:17:09.000Z", "max_issues_repo_path": "system-ml/docs/Algorithms Reference/KaplanMeier.tex", "max_issues_repo_name": "alcedo/systemml", "max_issues_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "system-ml/docs/Algorithms Reference/KaplanMeier.tex", "max_forks_repo_name": "alcedo/systemml", "max_forks_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-11-26T00:43:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-02T06:29:30.000Z", "avg_line_length": 55.7472118959, "max_line_length": 405, "alphanum_fraction": 0.7136569752, "num_tokens": 4667, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5694849163856504}}
{"text": "\\documentclass[12pt,letterpaper]{article}\n\\usepackage{fullpage}\n\\usepackage[top=2cm, bottom=4.5cm, left=2.5cm, right=2.5cm]{geometry}\n\\usepackage{amsmath,amsthm,amsfonts,amssymb,amscd}\n\\usepackage{lastpage}\n\\usepackage{enumerate}\n\\usepackage{fancyhdr}\n\\usepackage{mathrsfs}\n\\usepackage{xcolor}\n\\usepackage{graphicx}\n\\usepackage{listings}\n\\usepackage{mcode}\n\\usepackage{hyperref}\n\\usepackage{movie15}\n\\usepackage{hyperref}\n\n% \\hypersetup{%\n%   colorlinks=true,\n%   linkcolor=blue,\n%   linkbordercolor={0 0 1}\n% }\n \n% \\renewcommand\\lstlistingname{Algorithm}\n% \\renewcommand\\lstlistlistingname{Algorithms}\n% \\def\\lstlistingautorefname{Alg.}\n\n% \\lstdefinestyle{Python}{\n%     language        = Python,\n%     frame           = lines, \n%     basicstyle      = \\footnotesize,\n%     keywordstyle    = \\color{blue},\n%     stringstyle     = \\color{green},\n%     commentstyle    = \\color{red}\\ttfamily\n% }\n\n\\setlength{\\parindent}{0.0in}\n\\setlength{\\parskip}{0.05in}\n\n% Edit these as appropriate\n\\newcommand\\course{Mobile Robotics}\n\\newcommand\\hwnumber{1}                  % <-- homework number\n\\newcommand\\NetIDa{mk05198}           % <-- NetID of person #1\n\\newcommand\\NetIDb{}           % <-- NetID of person #2 (Comment this line out for problem sets)\n\n\\pagestyle{fancyplain}\n\\headheight 35pt\n\\lhead{\\NetIDa}\n\\lhead{\\NetIDa\\\\\\NetIDb}                 % <-- Comment this line out for problem sets (make sure you are person #1)\n\\chead{\\textbf{\\Large Homework \\hwnumber}}\n\\rhead{\\course \\\\ \\today}\n\\lfoot{}\n\\cfoot{}\n\\rfoot{\\small\\thepage}\n\\headsep 1.5em\n\n\\begin{document}\n\n\\section*{Task 1}\n\n\n\n\\begin{enumerate}\n  \\item\n   \\textbf{Velocities}: \\\\\n   Velocities for the robot are taken as derivative of its trajectory. The following listing shows generation of trajectory and velocity calculations. 
Note that \\texttt{gradient} returns vectors of the same length as its input, so the velocity vectors already match the size of the trajectory vectors (zero-padding would only be needed if \\texttt{diff} were used instead).\n\\begin{lstlisting}\nclc;close all \nN = 500; \nt = linspace(-pi, pi, N); \n\nx = 8*(sin(t)).^3; \ny = 8*(sin(2*t)).^3; \n\nvx = gradient(x, 2*pi/N);\nvy = gradient(y, 2*pi/N);\n\\end{lstlisting}\n  \\item\n\\textbf{Acceleration}:\\\\\nTaking time derivatives of the velocity vectors, we get the acceleration as: \n\n\\begin{lstlisting}\nax = gradient(vx, 2*pi/N);\nay = gradient(vy, 2*pi/N);\n\\end{lstlisting}\n\n\\item \n\\textbf{Robot Velocities}:\\\\\nUsing expressions (1) and (3) from the assignment prompt, we calculate the velocities as follows: \n\\begin{lstlisting}\n% orientation (heading) along the trajectory\nphi = atan2(vy, vx); \n\n% linear and angular robot velocities\nv = vx.*cos(phi) + vy.*sin(phi); \nomega = (vx.*ay - vy.*ax)./(vx.^2+vy.^2); \n\n%Plotting\nsubplot(2, 1, 1)\nplot(v, 'linewidth', 4) \nxlabel('time',  'FontSize', 14)\nylabel('velocity',  'FontSize', 14)\ntitle('Linear velocity', 'FontSize', 18)\n\nsubplot(2, 1, 2)\nplot(omega, 'linewidth', 4)\ntitle('Angular velocity', 'FontSize', 18)\nxlabel('time',  'FontSize', 14)\nylabel('velocity',  'FontSize', 14)\nprint -deps figures/task1 \n\\end{lstlisting}\n\nRunning the above code in order, we get the following velocity plots:\n\n\\begin{figure} [h]\n    \\centering\n    \\includegraphics[]{figures/task1_velocities.eps}\n    \\caption{Velocities}\n    \\label{fig:my_label}\n\\end{figure}\n\n\\item \n\\textbf{Trajectory traversal}:\\\\\nThe following code creates an animated GIF of the point robot traversing the given trajectory: \n\\begin{lstlisting}\nh = figure; \naxis tight manual % this ensures that getframe() returns a consistent size\nfilename = 'figures/task1_trajectory.gif'; \nplot(x, y, 'b', 'linewidth', 3)\nhold on \nfor i=1:10:N      \n    plot(x(1:i), y(1:i), 'g-', 'linewidth', 6)\n    legend('Given Trajectory', \"Robot's Path\"); \n    drawnow; \n    \n    %create GIF\n    frame = getframe(h); \n    im = frame2im(frame); \n    [imind,cm] = rgb2ind(im,256); \n    % Write to the GIF File \n    if i == 1\n      imwrite(imind,cm,filename,'gif', 'Loopcount',inf); \n    else \n      imwrite(imind,cm,filename,'gif','WriteMode','append'); \n    end \n\nend\nhold off\n\\end{lstlisting}\n\\end{enumerate}\nThe animation can be found here: \\href{https://github.com/mehhdiii/Robot-External-Kinematics/blob/main/figures/output.gif}{/mehhdiii/Robot-External-Kinematics/figures}\n\n\\section*{Task 2}\nWheel velocities are obtained using the following script: \n\\begin{lstlisting}\nW = 1/2; r = 1/4; T=0.1; \n\n%initialize Inverse kinematics velocities\nomega = zeros(1, N); v = zeros(1, N); \nvL = zeros(1, N); vR = zeros(1, N); \nomegaL = zeros(1, N); omegaR = zeros(1, N); \n\n\n%initialize resulting forward kinematic variables: \nx_f = zeros(1, N); y_f = zeros(1, N); phi_f = zeros(1, N); \n\nfor n = 2:N-1\n    %calculating inverse kinematics variables: \n    mu = 1/2*(sin(phi(n))*(y(n+1)-y(n))+cos(phi(n))*(x(n+1)-x(n)))...\n        /(cos(phi(n))*(y(n+1)-y(n))-sin(phi(n))*(x(n+1)-x(n))); \n    x_m = (x(n)+x(n+1))/2; \n    y_m = (y(n)+y(n+1))/2; \n    \n    x_star = x_m - mu/2 * (y(n+1) - y(n)); \n    y_star = y_m + mu/2 * (x(n+1)-x(n)); \n    \n    R_n = sqrt((x(n) - x_star)^2 + (y(n)-y_star)^2); \n    theta_1 = atan2((y(n)-y_star), (x(n)-x_star)); \n    theta_2 = atan2((y(n+1)-y_star), (x(n+1)-x_star)); \n    del_phi = wrapToPi(theta_1 - theta_2); \n    \n    %resulting Inv-Kinematics velocities: \n    omega(n) = del_phi/T; \n    v(n) = R_n*abs(omega(n));  \n    
vL(n) = (R_n-1/2 *W)*omega(n); \n    vR(n) = (R_n+1/2 *W)*omega(n); \n    omegaL(n) = vL(n)/r; \n    omegaR(n) = vR(n)/r; \n\nend\n\nfigure()\nsubplot 221\nplot(t,vL,'linewidth', 2)\nxlabel('time',  'FontSize', 10)\nylabel('velocity',  'FontSize', 10)\ntitle('Left Wheel velocity', 'FontSize', 14)\nxlim([-pi  pi])\n\nsubplot 222\nplot(t,vR, 'linewidth', 2)\nxlabel('time',  'FontSize', 10)\nylabel('velocity',  'FontSize', 10)\ntitle('Right Wheel velocity', 'FontSize', 14)\nxlim([-pi  pi])\n\nsubplot 223\nplot(t,omegaL, 'linewidth', 2)\nxlabel('time',  'FontSize', 10)\nylabel('velocity',  'FontSize', 10)\nxlim([-pi  pi])\n\ntitle('Left Wheel angular velocity', 'FontSize', 14)\nsubplot 224\nplot(t,omegaR, 'linewidth', 2)\nxlabel('time',  'FontSize', 10)\nylabel('velocity',  'FontSize', 10)\ntitle('Right Wheel angular velocity', 'FontSize', 14)\nxlim([-pi  pi])\n\nprint -deps figures/task2\n\\end{lstlisting}\n\\begin{figure} [h]\n    \\centering\n    \\includegraphics[]{figures/task2.eps}\n    \\caption{Wheel velocities}\n    \\label{fig:my_label}\n\\end{figure}\nComplete code can be found at: \\href{https://github.com/mehhdiii/Robot-External-Kinematics}{github.com/mehhdiii/Robot-External-Kinematics}\n\\end{document}\n", "meta": {"hexsha": "eae58aec136338a1e105c6037a1d2fe21823dfc0", "size": 6362, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "mehhdiii/Robot-External-Kinematics", "max_stars_repo_head_hexsha": "a68f7ae618d44c6488d0030c97f072c19121fc89", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.tex", "max_issues_repo_name": "mehhdiii/Robot-External-Kinematics", "max_issues_repo_head_hexsha": "a68f7ae618d44c6488d0030c97f072c19121fc89", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "mehhdiii/Robot-External-Kinematics", "max_forks_repo_head_hexsha": "a68f7ae618d44c6488d0030c97f072c19121fc89", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6608695652, "max_line_length": 242, "alphanum_fraction": 0.659069475, "num_tokens": 2099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7662936377487305, "lm_q1q2_score": 0.5694849080602467}}
{"text": "\\chapter{The Programming Language \\textsc{SetlX}}\nThe introductory lecture on mathematics starts with set theory.  In my experience, the notions of\nset theory are difficult to master for many students because the concepts introduced in set theory\nare quite abstract.  Fortunately, there is a programming language that is directly based on set\ntheory and logic.  This is the language \\href{http://www.randoom.org/Software/SetlX}{\\setl}.\nBy programming in \\setl, students can get acquainted with set theory in a playful manner.\nFurthermore, as many interesting problems have a straightforward solution as \\setl-programs,\nmy experience has shown that students can appreciate the usefulness of abstract notions from set\ntheory better by programming in \\setl.\n\n\\setl\\ is based on the language \\href{https://en.wikipedia.org/wiki/SETL}{\\textsc{Setl}} \\cite{setl86}, which\nwas introduced in the late sixties by the renowned mathematician \n\\href{https://en.wikipedia.org/wiki/Jacob_T._Schwartz}{Jacob T.~Schwartz}.  \nHowever, while the syntax of \\textsc{Setl} is similar to   \n\\href{https://en.wikipedia.org/wiki/ALGOL}{\\textsl{Algol}}, \\setl\\ has been designed to be\nsyntactically similar to the programming language\n\\href{https://en.wikipedia.org/wiki/C_(programming_language)}{\\texttt{C}}. \nThe language \\setl\\ can be downloaded from the website\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\href{http://www.randoom.org/Software/SetlX}{\\texttt{http://www.randoom.org/Software/SetlX}}.\n\\\\[0.2cm]\nI would like to mention that \\setl\\ runs on\n\\href{https://en.wikipedia.org/wiki/Android_(operating_system)}{Android}\nbased smart phones.  The version of \\setl\\ for Android  is available at\n\\href{https://play.google.com/store/apps/details?id=org.randoom.setlxUI.android&hl=en}{Google Play}.\n\n\\section{Introductory Examples}\nMy goal is to first introduce \\setl\\ via a number of rather simple examples.  I will present more\nadvanced features of \\setl\\ in later sections, but this section is intended to provide a first\nimpression of the language.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = none,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.0cm,\n                  xrightmargin  = 0.0cm,\n                ]\n-====================================setlX=============================v2.7.0=-\n\nWelcome to the setlX interpreter!\n\nOpen Source Software from http://setlX.randoom.org/\n(c) 2011-2017 by Herrmann, Tom\n\nYou can display some helpful information by using '--help' as parameter when\nlaunching this program.\n\nInteractive-Mode:\n  The 'exit;' statement terminates the interpreter.\n\n-===============================Interactive=Mode==============================-\n\n=> \n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{The \\textsc{SetlX}-Welcome message.}\n\\label{fig:setlx}\n\\end{figure}\n\nThe language \\textsc{SetlX} is an \\blue{interpreted} language.  Hence, there is no need to \\blue{compile} a\nprogram.  Instead, \\setl-programs can be executed via the interpreter.  
The interpreter is started\nwith the command:\\footnote{\n  While I am usually in the habit of terminating every sentence with either a full stop, a question\n  mark or an exclamation mark, I refrain from doing so when the sentence ends in a \\setl-command\n  that is shown on a separate line.  The reason is that I want to avoid confusion as it can\n  otherwise be hard to understand which part of the line is the command that has to be typed\n  verbatim.\n}\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{setlx}\n\\\\[0.2cm]\nAfter the interpreter is started, the user sees the output that is shown in Figure \n\\ref{fig:setlx} on page \\pageref{fig:setlx}.  The string\n``\\texttt{=>}'' is the \\blue{prompt}.  It signals that the interpreter is waiting for input.\nIf we input the string\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{1 + 2;}\n\\\\[0.2cm]\nand press enter, we get the following output:\n\\begin{verbatim}\n    ~< Result: 3 >~\n    \n    => \n\\end{verbatim}\nThe interpreter has computed the sum $1+2$, returned the result, and prints another prompt waiting\nfor more input.  The command ``\\texttt{1 + 2;}''\nis a script.  Of course, this is a very small script as it consists only of a single command.\nBy default, just the last result computed by a script is output to the screen.  Hence, if we feed\nthe commands\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{1+2; 3*4;}\n\\\\[0.2cm]\nto the interpreter, only the number 12 is printed.  In order to print arbitrary results to\nthe screen, we can use the function \\texttt{print}.  If we issue the command\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{print(\\symbol{34}Hello, World!\\symbol{34});}\n\\\\[0.2cm]\nthen the following output is produced:\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    Hello, World!\n    ~< Result: om >~\n    \n    => \n\\end{Verbatim}\nHere, the interpreter has first printed the string ``\\texttt{Hello, World!}''.  After that, the\nresult of the function \\texttt{print} is shown.  However, the function \\texttt{print} does not\nreturn any value and therefore its return value is \\blue{undefined}.  An undefined value is denoted using\nthe \\href{http://en.wikipedia.org/wiki/Greek_alphabet}{greek} letter\n\\href{https://en.wikipedia.org/wiki/Omega}{$\\Omega$.}  This letter is then\nabbreviated as the string ``\\texttt{om}''.\n\nThe function \\texttt{print} accepts any number of arguments.  
For example, printing\nthe string ``\\texttt{36 * 37 / 2}'' followed by\nthe value of the expression $36 \\cdot 37 / 2$ can be achieved via the following command:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{print(\\symbol{34}36 * 37 / 2 = \\symbol{34}, 36 * 37 / 2);}\n\\\\[0.2cm]\nThe \\textsc{SetlX} interpreter can also be run offline to execute programs.\nIf the program shown in Figure  \\ref{fig:sum.stlx} on page \\pageref{fig:sum.stlx} is stored in a\nfile with the file name  \n``\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/sum.stlx}{\\texttt{sum.stlx}}'',\nthen we can execute this program via the following command:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{setlx sum.stlx}\n\\\\[0.2cm] \nExecuting this command will first print the text\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{Type a natural number and press return: } \n\\\\[0.2cm]\nto the screen.  After entering a natural number $n$ and hitting the \\texttt{enter} key, the program will\ncompute the set $\\{1,\\cdots,n\\}$ of all positive natural numbers less than or equal to $n$, sum the elements\nof this set, i.e.~compute the sum\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds\\sum\\limits_{i=1}^n i$ \n\\\\[0.2cm]\nand then print the resulting number.\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    // This program reads a number n and computes the sum 1 + 2 + ... + n.\n    n := read(\"Type a natural number and press return: \");\n    s := +/ { 1 .. n };\n    print(\"The sum 1 + 2 + ... + \", n, \" is equal to \", s, \".\");\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{A simple program to compute $\\ds\\sum\\limits_{i=1}^n i$.}\n\\label{fig:sum.stlx}\n\\end{figure}\n\n\nLet us discuss the program shown in Figure \\ref{fig:sum.stlx} on page \\pageref{fig:sum.stlx} line by line.\nNote that the line numbers shown in this figure are not part of the program and have only been added\nso that I am able to refer to the different lines of the program more easily.\n\\begin{enumerate}\n\\item The first line is a comment.  In \\textsc{SetlX}, the string  ``\\texttt{//}'' starts a comment\n      that extends to the end of the line.  In order to have multi-line comments, we can use the \n      strings ``\\texttt{/*}'' and ``\\texttt{*/}''.  Every text that starts with the string ``\\texttt{/*}''\n      and ends with the string ``\\texttt{*/}'' is ignored.  Of course, this text must only contain the\n      terminating string ``\\texttt{*/}'' at the end.\n\n      Note that multi-line comments cannot be nested.\n\\item The second line is an assignment.  The expression  \\texttt{read($s$)}\n      first prints the string $s$ and then reads and returns the number that is input by the user.\n      This number is then assigned to the variable \\texttt{n}.  This is done using the assignment operator\n      ``\\texttt{:=}''.  
It is important to understand that the syntax of \\setl\\ differs from the\n      syntax of the programming language \\texttt{C} in one very important way:\n\n      \\begin{center}\n      \\colorbox{red}{\\framebox{\\colorbox{yellow}{\\framebox{\n      \\begin{minipage}{0.65\\linewidth}\n        \\texttt{SetlX} uses the operator ``\\texttt{:=}'' to \\blue{assign} a value to a variable, while\n        the programming language \\texttt{C} uses the operator ``\\texttt{=}'' instead.\n      \\end{minipage}}}}}\n      \\end{center}      \n\n      In contrast to the language \\texttt{C}, the language \\textsc{SetlX} is not\n      \\href{https://en.wikipedia.org/wiki/Type_system#STATIC}{statically typed} but rather is \n      \\href{https://en.wikipedia.org/wiki/Type_system#DYNAMIC}{dynamically typed}.\n      Hence, it is neither necessary nor possible to declare the variable \\texttt{n}.\n      Of course, in the given program, we expect the function \\texttt{read} to return a number.\n      If, instead of a number, the user inputs a string, the program would abort with an error\n      message once the third line is executed.\n\\item The third line shows how a set can be defined as an enumeration.  In general, if \n      $a$ and $b$ are integers such that  $a < b$, the expression\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\\{ $a$ .. $b$ \\}}\n      \\\\[0.2cm]\n      evaluates to the set \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\{ x \\in \\mathbb{Z} \\mid a \\leq x \\wedge x \\leq b \\}$.\n      \\\\[0.2cm]\n      The prefix operator ``\\texttt{+/}'' computes the sum of all elements of the set given to it as an\n      argument.  Since this set is equal to\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\{ 1, 2, \\cdots,  n \\}$,\n      \\\\[0.2cm]\n      the operator ``\\texttt{+/}'' computes the sum\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds 1 + 2 + \\cdots + n = \\sum\\limits_{i=1}^n i$.\n      \\\\[0.2cm]\n      This sum is then assigned to the variable \\texttt{s}.\n\\item The last line prints this variable together with some text.\n\\end{enumerate}\nThe program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/sum-recursive.stlx}{\\texttt{sum-recursive.stlx}},\nwhich is shown in Figure \\ref{fig:sum-recursive.stlx} on page \\pageref{fig:sum-recursive.stlx}\ncomputes the sum $\\ds\\sum\\limits_{i=0}^n i$ \\\\[-0.3cm]\nrecursively.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm\n                ]\n    sum := procedure(n) {\n        if (n == 0) { \n            return 0;\n        } else {\n            return sum(n-1) + n;\n        }\n    };\n    \n    n     := read(\"Enter a natural number: \"); \n    total := sum(n);\n    print(\"Sum 0 + 1 + 2 + ... + \", n, \" = \", total);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n  \\caption{A recursive program to compute $\\sum\\limits_{i=0}^ni$.}\n  \\label{fig:sum-recursive.stlx}\n\\end{figure} \n\n\\begin{enumerate}\n\\item The first seven lines define the procedure \\texttt{sum}.  In \\textsc{SetlX}, the definition of\n      a procedure is started with the \\blue{keyword} ``\\texttt{procedure}''.  This keyword is followed by\n      a list of the \\blue{formal arguments}.  
These arguments are separated by the character ``\\texttt{,}'' and\n      are enclosed in parentheses.\n      As in the programming language \\texttt{C}, the body of the procedure is enclosed in the curly braces\n      ``\\texttt{\\{}'' and ``\\texttt{\\}}''.  In general, the body of a procedure consists of a list\n      of commands.  In Figure \\ref{fig:sum-recursive.stlx} there is only a single command.  This\n      command is a case distinction.  The general form of a case distinction is as follows:\n\n      \\begin{Verbatim}[ codes         = {\\catcode`_=8\\catcode`^=7},\n                        frame         = lines, \n                        framesep      = 0.3cm, \n                        labelposition = bottomline,\n                        numbers       = left,\n                        numbersep     = -0.2cm,\n                        xleftmargin   = 0.8cm,\n                        xrightmargin  = 0.8cm,\n                        commandchars  = \\\\\\{\\}\n                      ]\n        if (test) \\{\n            body\\(_1\\)\n        \\} else \\{\n            body\\(_2\\)\n        \\}\n      \\end{Verbatim}\n      \\vspace*{-0.1cm}\n      A case distinction of this form is evaluated as follows:\n      \\begin{enumerate}\n      \\item First, the expression \\texttt{test} is evaluated.  The evaluation of \\texttt{test} must\n            either return the value ``\\texttt{true}'' or ``\\texttt{false}''.\n      \\item If \\texttt{test} evaluates as  ``\\texttt{true}'' then the statements in\n            \\texttt{body}$_1$ are executed.  Here,  \\texttt{body}$_1$ is a list of statements.\n      \\item Otherwise, the statements in \\texttt{body}$_2$ are executed.\n      \\end{enumerate}\n      \\textbf{Note} the following \\blue{differences} with respect to the programming language \\texttt{C}:\n      \\begin{enumerate}\n      \\item In \\textsc{SetlX}, we have to enclose  \\texttt{body}$_1$ and \\texttt{body}$_2$ in curly\n            braces even if they contain only a single statement.\n      \\item The definition of the procedure has to be terminated with the character ``\\texttt{;}''.\n            The reason is that syntactically the definition of the procedure is part of an\n            \\blue{assignment} and every assignment ends with a \\blue{semicolon}.  In case that we do not intend to\n            assign the procedure to a name, for example if a procedure is used as an argument to\n            another procedure, then the procedure is not terminated with a ``\\texttt{;}''.\n      \\end{enumerate}  \n\\item After defining the procedure \\texttt{sum}, line 9 reads a number that is assigned to the variable \\texttt{n}.\n\\item Next, line 10 calls the procedure \\texttt{sum} for the given value of \\texttt{n}. \n      This value is then assigned to the  variable \\texttt{total}.\n\\item Finally, the result is printed.\n\\end{enumerate}\nThe procedure \\texttt{sum} is an example of a\n\\href{https://en.wikipedia.org/wiki/Recursion_(computer_science)}{\\emph{recursive function}},\ni.e.~the function \\texttt{sum} calls itself.  
The logic of this recursion is captured by the\nfollowing equations:\n\\begin{enumerate}\n\\item $\\texttt{sum}(0) = 0$,\n\\item $n > 0 \\rightarrow \\texttt{sum}(n) = \\texttt{sum}(n-1) + n$.\n\\end{enumerate}\nThese equations become evident if we substitute the definition\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds \\texttt{sum}(n)= \\sum\\limits_{i=0}^n i$ \n\\\\[0.2cm]\nin these equations, since we have:\n\\begin{enumerate}\n\\item $\\ds \\texttt{sum}(0)= \\sum\\limits_{i=0}^0 i = 0$,\n\\item $\\ds \\texttt{sum}(n)= \\sum\\limits_{i=0}^n i = \\left(\\sum\\limits_{i=0}^{n-1} i\\right) + n = \\texttt{sum}(n-1) + n$. \n\n\\end{enumerate}\nThe first equation deals with the case that the procedure \\texttt{sum} does not call itself.  This\ncase is called the \\blue{base case}.  Every recursive function must have a base case, for otherwise\nthe recursion would never stop.\n\n\n\\section{Sets in \\setl}\nThe most prominent difference between the programming language \\setl\\ and the programming language\n\\texttt{C} is the fact that  \\textsc{SetlX} has language support for both \\blue{sets} and \\blue{lists}.\nIn order to demonstrate how sets are supported in \\textsc{SetlX} we present a simple program that\nshows how to compute the \\blue{union}, the \\blue{intersection}, and the \\blue{difference} of two sets.\nFurthermore, the program shows how to compute the \\href{https://en.wikipedia.org/wiki/Power_set}{power set} of a\ngiven set and it shows how to compare sets.  Figure \\ref{fig:simple.stlx} on page\n\\pageref{fig:simple.stlx} shows the file\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/simple.stlx}{\\texttt{simple.stlx}}.  \nWe discuss it line by line.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ codes         = {\\catcode`$=3\\catcode`_=8\\catcode`^=7},\n                  frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  commandchars  = \\\\\\{\\},\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm\n                ]\n    A := \\{ 1, 2, 3 \\};\n    B := \\{ 2, 3, 4 \\};\n    // compute the union             A $\\cup$ B \n    C := A + B;\n    print(A, \" + \", B, \" = \", C);\n    // compute the intersection      A $\\cap$ B\n    C := A * B;\n    print(A, \" * \", B, \" = \", C);\n    // compute the set difference    A $\\backslash$ B\n    C := A - B;\n    print(A, \" - \", B, \" = \", C);\n    // compute the power set        $\\displaystyle 2^\\texttt{A}$\n    C := 2 ** A;\n    print(\"2 ** \", A, \" = \", C);\n    // test the subset relation      A $\\subseteq$ B\n    print(\"(\", A, \" <= \", B, \") = \", (A <= B)); \n    // test, whether 1 $\\in$ A\n    print(\"1 in \", A, \" = \", 1 in A);\n    // compute the cartesian product\n    C := A >< B;\n    print(A, \" >< \", B, \" = \", C);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Computation of union, intersection, set difference, and power set.}\n  \\label{fig:simple.stlx}\n\\end{figure} %$\n\n\\begin{enumerate}\n\\item The first two lines show that sets can be defined as explicit \\blue{enumerations} of their elements.\n\\item Line 4, 7, and 10 compute the \\blue{union}, the \\blue{intersection}, and the set \\blue{difference} of the sets\n      \\texttt{A} and \\texttt{B} respectively.\n\n      Hence, the mathematical operator ``$\\cup$'' corresponds to ``\\texttt{+}'', ``$\\cap$''\n      corresponds 
to ``\\texttt{*}'', while ``$\\backslash$'' corresponds to ``\\texttt{-}''.\n\\item Line 13 computes the \\href{https://en.wikipedia.org/wiki/Power_set}{power set} of the set\n      \\texttt{A}.\n\\item Line 16 checks whether \\texttt{A} is a \\blue{subset} of \\texttt{B}.\n\\item Line 18 checks whether the number \\texttt{1} is an element of the set \\texttt{A}.\n\\item Line 20 computes the \\href{https://en.wikipedia.org/wiki/Cartesian_product}{Cartesian product}\n      of \\texttt{A} and \\texttt{B}.  The \\blue{Cartesian product} ``$\\times$'' is translated into the operator\n      ``\\texttt{><}'' in \\setl.\n\\end{enumerate}\nIf we execute this program, the following results are obtained:\n\\begin{verbatim}\n    {1, 2, 3} + {2, 3, 4} = {1, 2, 3, 4}\n    {1, 2, 3} * {2, 3, 4} = {2, 3}\n    {1, 2, 3} - {2, 3, 4} = {1}\n    2 ** {1, 2, 3} = {{}, {1}, {1, 2}, {1, 2, 3}, {1, 3}, {2}, {2, 3}, {3}}\n    ({1, 2, 3} <= {2, 3, 4}) = false\n    1 in {1, 2, 3} = true\n    {1, 2, 3} >< {2, 3, 4} = \n        {[1, 2], [1, 3], [1, 4], [2, 2], [2, 3], [2, 4], [3, 2], [3, 3], [3, 4]}\n\n\\end{verbatim}\nIn order to be able to present more interesting programs, we now discuss a number of ways to define\nmore complex sets in  \\textsc{SetlX}.\n\n\\subsubsection{Defining Sets as Arithmetic Progressions}\nIn the previous example we have defined sets as explicit enumerations of their elements.  Of course,\nthis approach is much too tedious when working with sets containing large numbers of elements.  An\nalternative way is to define a set as an \\blue{arithmetic progression}.  Let us consider an example.  The assignment\n\\begin{verbatim}\n        A := { 1 .. 100 };\n\\end{verbatim}\ndefines \\texttt{A} as the set of all positive natural numbers that are less than or equal to $100$.\nThe general form of an arithmetic progression is\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n\\texttt{A := \\{ \\textsl{start} .. \\textsl{stop} \\};} \n\\\\[0.2cm]\nThis definition assigns the set of all integer numbers from \\textsl{start} up to and including\n\\textsl{stop} to the variable \\texttt{A}, i.e.~we have\n \\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\texttt{A} = \\{ n \\in \\mathbb{Z} \\mid \\textsl{start} \\leq n \\wedge n \\leq\\textsl{stop} \\}$. \n\\\\[0.2cm]\nWe can define arithmetic progressions with a \\blue{step size} different from $1$.  For example,\nthe assignment\n\\begin{verbatim}\n       A := { 1, 3 .. 100 };\n\\end{verbatim}\nassigns the set of all odd natural numbers less than or equal to $100$ to \\texttt{A}.\nOf course, the number $100$ is not part of this set as $100$ is an even number.\nThe general form of this kind of progression is\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n\\texttt{A := \\{ start, second .. stop \\};} \n\\\\[0.2cm]\nIf we define $\\texttt{step} = \\texttt{second} - \\texttt{start}$ and if, furthermore,  \\texttt{step}\nis positive, then this set can be written as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\texttt{A} = \\{ \\texttt{start} + n \\cdot \\texttt{step} \\mid n \\in \\mathbb{Z} \\wedge n \\geq 0 \\wedge \\texttt{start} + n \\cdot \\texttt{step} \\leq\\texttt{stop} \\}$. \n\\\\[0.2cm]\n\\textbf{Note} that  $\\texttt{stop}$ does not have to be an element of the set\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{\\{} \\texttt{start}\\texttt{,} \\texttt{second} \\texttt{..} \\texttt{stop} \\texttt{\\}}.\n\\\\[0.2cm]\nFor example, we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{\\{ 1, 3 .. 6 \\} = \\{ 1, 3, 5 \\}}.\n\n\n\\subsubsection{Defining Sets via Iterators}\nWe can also define sets via \\blue{iterators}.  
Consider the following example:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n\\texttt{P := \\{ n * m : n in \\{2..10\\}, m in \\{2..10\\} \\};} \n\\\\[0.2cm]\nAfter this assignment, \\texttt{P} is the set of all \\blue{non-trivial} products $\\mathtt{n} * \\mathtt{m}$ such\nthat both \\texttt{m} and \\texttt{n} are at most 10.  \n(A product of the form $a \\cdot b$ is called \\blue{trivial} if and only if\neither of the factors  $a$ or $b$ is equal to $1$.)\nA mathematical definition for the set  \\texttt{P} is as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\mathtt{P} = \\bigl\\{ n \\cdot m \\mid n \\in \\mathbb{N} \\wedge m \\in \\mathbb{N} \\wedge \n                                 2 \\leq n \\wedge 2 \\leq m \\wedge n \\leq 10 \\wedge m \\leq 10 \n              \\bigr\\}\n$. \n\\\\[0.2cm]\nIterators can be quite useful.  For example, consider the program \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-difference.stlx}{\\texttt{primes-difference.stlx}}\nthat is shown in Figure \\ref{fig:primes-sieve.stlx} on page \\pageref{fig:primes-sieve.stlx}.\nThis program computes the set of all \\href{https://en.wikipedia.org/wiki/Prime_number}{prime\n  numbers} less than or equal to \\texttt{n}.  The underlying idea is that \na number is prime \\blue{iff}\\footnote{\n  Henceforth, the word ``iff'' is used as an abbreviation for ``if and only if''.\n}  \nit cannot be written as a non-trivial product.  Hence, if we take the set of all natural numbers less than\nor equal to \\texttt{n} and bigger than 1 and subtract the set of all non-trivial products from this set, then the\nremaining numbers must be prime.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm\n                ]\n    n := 100;\n    primes := {2 .. n} - { p * q : p in {2..n}, q in {2..n} };\n    print(primes);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{A program to compute prime numbers.}\n  \\label{fig:primes-sieve.stlx}\n\\end{figure} \n\nThe general form of the definition of a set via iterators is given as\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\{ \\textsl{expr} : x_1 \\;\\mathtt{in}\\; S_1,\\; \\cdots,\\; x_n \\;\\mathtt{in}\\; S_n \\}$.\n\\\\[0.2cm]\nHere,  $\\textsl{expr}$ is a term that makes use of the variables $x_1$, $\\cdots$, $x_n$.  Furthermore,\n$S_1$, $\\cdots$, $S_n$ are expressions that return sets (or lists) when they are evaluated.\nHere, an expression of the form ``\\texttt{$x_i$ in $S_i$}'' is called an \\blue{iterator} since\nthe variables $x_i$ \\blue{iterate} over the different elements of the sets $S_i$.\nThe mathematical interpretation of the expression given above is then given as\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\bigl\\{ \\textsl{expr} \\mid x_1 \\in S_1 \\wedge \\cdots \\wedge x_n \\in S_n \\bigr\\}$.\n\\\\[0.2cm]\nHence, the definition of a set via iterators is the same as the definition of a set as an \\blue{image set}\nin set theory.  \n\nIn addition to image sets we can use \\blue{selection} to define sets.\nThe syntax is: \n\\\\[0.2cm]\n\\hspace*{1.3cm}  \n$M := \\{ \\textsl{expr} : x_1 \\;\\mathtt{in}\\; S_1,\\; \\cdots,\\; x_n \\;\\mathtt{in}\\; S_n \\mid \\textsl{cond}\\, \\}$. 
\n\\\\[0.2cm]\nHere,   $\\textsl{expr}$ and $S_i$ are interpreted as above and  \n $\\textsl{cond}$ is an expression possibly containing the variables  $x_1$, $\\cdots$, $x_n$.  The\n evaluation of \\textsl{cond} has to return either  \\texttt{true} or \\texttt{false}.  The\n mathematical interpretation of the expression above is then given as \\\\[0.2cm]\n\\hspace*{1.3cm} \n$M = \\bigl\\{ \\textsl{expr} \\mid x_1 \\in S_1 \\wedge \\cdots \\wedge x_n \\in S_n \\wedge \\textsl{cond}\n\\,\\bigr\\}$, \n\\\\[0.2cm]\ni.e.~$M$ is defined as the set of all those values that we get when we substitute those values $x_i$\nfrom the sets $S_i$ into \\textsl{expr} that satisfy \\textsl{cond}.  An example will clarify this.\nAfter the assignment\n\\begin{alltt}\n  \\texttt{Primes := \\{ p : p in  \\{2..100\\} | \\{ x : x in \\{1..p\\} | p \\% x == 0 \\} == \\{1, p\\} \\};}\n\\end{alltt}\nthe variable \\texttt{Primes} is the set of all prime numbers that are less than 100.\nThe idea is that the number \\texttt{p} is prime iff 1 and \\texttt{p} are the only numbers that divide\n\\texttt{p} evenly.  In order to check whether \\texttt{p} is evenly dividable by some number\n\\texttt{x} we can use the operator \\texttt{\\%}\nin \\setl: The expression \\texttt{p \\% x}\ncomputes the remainder that is left over when \\texttt{p} is divided by \\texttt{x}.\nHence,\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n\\texttt{\\{ x : x in \\{1..p\\} | p \\% x == 0 \\}}\n\\\\[0.2cm]\nis the set of all those numbers that divide \\texttt{p} evenly and \\texttt{p} is prime if this set\nonly contains the numbers  $1$ and \\texttt{p}.  The program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-slim.stlx}{\\texttt{primes-slim.stlx}}\nshown in Figure\n\\ref{fig:primes-slim.stlx} on page \\pageref{fig:primes-slim.stlx} uses this method to compute prime numbers.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.4cm,\n                  xrightmargin  = 0.4cm\n                ]\n    dividers := procedure(p) {\n        return { t : t in {1..p} | p % t == 0 };\n    };\n    n      := 100;\n    primes := { p : p in {2..n} | dividers(p) == {1, p} };\n    print(primes);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Another program to compute prime numbers.  \\label{fig:primes-slim.stlx}}\n\\end{figure} \n\nIn this program we have first defined the procedure \\texttt{dividers} that takes a natural number\n\\texttt{p} and computes the set of all those natural numbers that divide \\texttt{p} evenly.\nThen, the set of prime numbers less than or equal to \\texttt{n} is the set of those natural numbers\n\\texttt{p} bigger than 1 that are only divided by 1 and themselves.\n\n\n\\section{Pairs, Relations, and Functions}\nIn \\setlx\\ the ordered pair $\\langle x, y \\rangle$ is represented as a list with two elements,\ni.e.~it is written as $[x,y]$, so in order to represent an ordered pair in \\setlx\\ we just have to\nexchange the angle brackets ``$\\langle$'' and ``$\\rangle$'' with the square brackets ``\\texttt{[}''\nand ``\\texttt{]}''.  
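\nFor example (a small toy relation of our own, using only constructs introduced above), a functional relation can be written down and queried directly:\n\\begin{verbatim}\n    square := { [n, n*n] : n in {1..5} };\n    print(square[3]);    // prints 9\n\\end{verbatim}\nHere, the pair $\\langle 3, 9 \\rangle$ is represented as the list \\texttt{[3, 9]}.\n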
In the\n\\href{https://github.com/karlstroetmann/Lineare-Algebra/blob/master/Script/lineare-algebra.pdf}{lecture\n  notes on mathematics}  it is shown that a relation that is both left-total and \nright-unique can be regarded as a function and hence is called a \\blue{functional relation}.  If $R$\nis a functional relation and $x \\in \\textsl{dom}(R)$, then in \\textsc{SetlX} the expression  $R[x]$\ndenotes the \\blue{unique} element $y$ such that $\\langle x, y \\rangle \\in R$ holds. The program  \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/function.stlx}{\\texttt{function.stlx}}\nin Figure \\ref{fig:function.stlx} on page \\pageref{fig:function.stlx} shows this more concretely.\nFurthermore, the program shows that for a binary relation $R$, in \\setlx\\ we compute\n$\\textsl{dom}(R)$ as $\\texttt{domain}(R)$ and $\\textsl{rng}(R)$ as $\\mathtt{range}(R)$.\nFurthermore, line 2 shows that we can even change the y-value that is associated with a given\nx-value in a relation.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    Q := { [n, n**2] : n in {1..10} };\n    Q[5] := 7;\n    print( \"Q[3]   = $Q[3]$\"      );\n    print( \"Q[5]   = $Q[5]$\"      );\n    print( \"dom(Q) = $domain(Q)$\" );\n    print( \"rng(Q) = $range(Q)$\"  );\n    print( \"Q      = $Q$\"         );\n\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Exercising a functional binary relation.}  \\label{fig:function.stlx}\n\\end{figure} \nAs a side note, Figure \\ref{fig:function.stlx} shows that \\setlx\\ supports \\blue{string interpolation}: \nInside a string that is enclosed in double quotes, any substring that is enclosed in dollar symbols is\n\\blue{evaluated} as an expression and the substring is then replaced by the result of its evaluation.\n\nThe relation \\texttt{Q} that is computed in line 1 of Figure \\ref{fig:function.stlx} represents the\nfunction $x \\mapsto x^2$ on the set $\\{ 1, \\cdots, 10 \\}$.  \nLine 2 changes the relation \\texttt{Q} for the argument $x=5$ so that  $\\mathtt{Q}[5]$ is\n$7$.   
After that, both the domain and the range of \\texttt{Q} are computed.\nThe program produces the following output:\n\\begin{verbatim}\n    Q[3]   = 9\n    Q[5]   = 7\n    dom(Q) = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}\n    rng(Q) = {1, 4, 7, 9, 16, 36, 49, 64, 81, 100}\n    Q      = {[1, 1], [2, 4], [3, 9], [4, 16], [5, 7], [6, 36], \n              [7, 49], [8, 64], [9, 81], [10, 100]\n             }\n\\end{verbatim}\nIt is interesting to ask what happens if we evaluate $R[x]$ but the set \n $\\{ y \\mid \\langle x, y \\rangle \\in R \\}$ is either empty or has more than one element.\nThe program \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/buggy-function.stlx}{\\texttt{buggy-function.stlx}}\nshown in Figure\n\\ref{fig:buggy-function.stlx} on page \\pageref{fig:buggy-function.stlx} provides us with an answer.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    R := { [1, 1], [1, 4], [3, 3] };\n    print( \"R[1] = $R[1]$\" );\n    print( \"R[2] = $R[2]$\" );\n    print( \"{ R[1], R[2] } = ${ R[1], R[2] }$\" );\n    print( \"R{1} = $R{1}$\" );\n    print( \"R{2} = $R{2}$\" );\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Exercising a non-functional binary relation.}  \\label{fig:buggy-function.stlx}\n\\end{figure} \n\nIf the set  $\\{ y \\mid \\langle x, y \\rangle \\in R \\}$ is either empty or contains more than one\nelement, the expression $R[x]$ is undefined in \\setlx.  Trying to insert an undefined expression\ninto a set fails.  Hence, line 4 in Figure \\ref{fig:buggy-function.stlx} returns the empty set.\nThere is a way to avoid undefined values when working with non-functional binary relations.\nThis is done by replacing the square brackets in the expression $R[x]$ with \\blue{curly brackets}, i.e.~we\nhave to write $R\\{x\\}$ instead of $R[x]$.\nFor a binary relation $R$, the expression  $R\\{x\\}$  is defined as the following set:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n $R\\{x\\} := \\{ y \\mid \\langle x, y \\rangle \\in R \\}$,\n\\\\[0.2cm]\ni.e.~$R\\{x\\}$ is the set of all values $y$ such that the pair $\\langle x, y \\rangle$ is in $R$.\nHence, the program shown in Figure  \\ref{fig:buggy-function.stlx} yields the following results:\n\\begin{verbatim}\n    R[1] = om\n    R[2] = om\n    { R[1], R[2] } = {}\n    R{1} = {1, 4}\n    R{2} = {}\n\\end{verbatim}\n\n\\section{Lists}\n\\setlx\\ supports the data type of a list.  Lists can be defined similarly to sets by replacing the curly brackets\nwith square brackets.  Then, we are able to define lists via arithmetic progressions, iterators, or selection.\nThe program \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-tuple.stlx}{\\texttt{primes-tuple.stlx}}\nin Figure \\ref{fig:primes-tuple.stlx} on page \\pageref{fig:primes-tuple.stlx} demonstrates how lists can be\nused instead of sets.  
This program computes the prime numbers similar to the program shown in Figure\n\\ref{fig:primes-slim.stlx} on page \\pageref{fig:primes-slim.stlx} but uses lists instead of sets.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    dividers := procedure(p) {\n        return [ t : t in [1..p] | p % t == 0 ];\n    };\n    \n    n := 100;\n    primes := [ p : p in [2 .. n] | dividers(p) == [1, p] ];\n    print(primes);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Computing prime numbers with lists.}  \n\\label{fig:primes-tuple.stlx}\n\\end{figure} \n\n\\section{Special Functions and Operators on Sets}\nThis section discusses various functions and operators that can be applied to both sets and lists.\n\n\\subsection{\\texttt{max} and \\texttt{min}}\nThe program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/sort.stlx}{\\texttt{sort.stlx}}\nthat is shown in Figure \\ref{fig:sort.stlx} on page \\pageref{fig:sort.stlx} demonstrates a simple algorithm \nto sort a list of natural numbers.  The expression \\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{max(L)}\n\\\\[0.2cm]\ncomputes the biggest element of the list \\texttt{L}.  Therefore, the variable \\texttt{n} in the iterator\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{n in [0 .. max(L)]}\n\\\\[0.2cm]\nruns from 0 up to the biggest number occurring in \\texttt{L}.  The condition ``\\texttt{n in L}''\nensures that the number \\texttt{n} is only inserted into the resulting list if \\texttt{n} is an\nelement of the list \\texttt{L} that is to be sorted.  Since the iterator\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{n in [0 .. max(L)]}\n\\\\[0.2cm]\ngenerates the numbers starting from 0 and increasing, the function \\texttt{sort} returns a sorted\nlist that contains exactly those elements, that are elements of \\texttt{L}.  Of course, the\nresulting list will contain every element exactly once, even if it occurs multiple times in \\texttt{L}.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    sort := procedure(L) {\n        return [n : n in [0 .. max(L)] | n in L];\n    };\n    L := [13, 5, 7, 2, 4];\n    print(\"sort($L$) = \", sort(L));\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{A simple program to sort a given list.}  \\label{fig:sort.stlx}\n\\end{figure} %\\$\n\nIt should be noted that, in general, the algorithm that is implemented in the procedure \\texttt{sort} is not very\nefficient.  We will discuss several more efficient algorithms later.  However, efficiency was not\nthe point of this example.  Rather, the point was to introduce the function \\texttt{max} via a\nuseful application.  Besides \\texttt{max}, \\setlx\\ provides the function \\texttt{min} that computes\nthe minimum of a list.  
Both \\texttt{max} and \\texttt{min} can also be applied to sets.\n\n\\subsection{\\texttt{+/} and \\texttt{*/}}\nThe operators ``\\texttt{+/}'' and ``\\texttt{*/}'' are unary prefix operators that can be applied to\neither a list or a set.  The expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n \\texttt{+/ S}\n\\\\[0.2cm]\ncomputes the sum of all elements of \\texttt{S} and, likewise, the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n \\texttt{*/ S}\n\\\\[0.2cm]\ncomputes the product of the elements of \\texttt{S}.  Hence, we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{+/ \\{1, 2, 3, 4\\}} = 1 + 2 + 3 + 4$ \\quad and \\quad $\\texttt{*/ \\{1, 2, 3, 4\\}} = 1 \\cdot 2 \\cdot 3 \\cdot 4$.\n\\\\[0.2cm]\nIf either ``\\texttt{+/}'' or ``\\texttt{*/}'' is applied to an empty set or an empty list, the result\nis the undefined value ``\\texttt{om}''.  To prevent these operators from returning the undefined\nvalue, the operators can also be used as binary infix operators. For a number \\texttt{x} and a set\nor list \\texttt{S}, the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{x +/ S}\n\\\\[0.2cm]\nreturns the result \\texttt{x} if \\texttt{S} is empty.  If \\texttt{S} is not empty, the value\nof \\texttt{x} is ignored and, instead, the sum of all elements of \\texttt{S} is returned.\nAn expression of the form \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{x */ S}\n\\\\[0.2cm]\nworks in a similar way:  If \\texttt{S} is empty, \\texttt{x} is returned.  Otherwise, the expression\nreturns the product of all elements of \\texttt{S}.\n\n\\subsection{\\texttt{first}, \\texttt{last}, \\texttt{from}, and \\texttt{arb}}\nIn \\setlx, all sets are ordered.  This is a notable difference from most other programming languages\nthat support sets.  Since sets are ordered, it makes sense to have functions that return the first\nor the last element of a set.  This is achieved via the functions \\texttt{first} and \\texttt{last}.\nNote that \\texttt{first} and \\texttt{last} are different from \\texttt{min} and \\texttt{max}.  The\nreason is, that the functions \\texttt{min} and \\texttt{max} can only be applied to sets that contain\nnumbers.  However, the functions \\texttt{first} and \\texttt{last} can be applied to any set.\nFor example, if we define\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{S := \\{ \"a\", \"b\", \"c\" \\};}\n\\\\[0.2cm]\nthen we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{first(S)} = \\texttt{\"a\"}$ \\quad and \\quad $\\texttt{last(S)} = \\texttt{\"c\"}$.\n\\\\[0.2cm]\nHowever, evaluating the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{min(S)}\n\\\\[0.2cm]\nyields the following error message:\n\\begin{verbatim}\n    Error in \"min(S)\":\n    The set {\"a\", \"b\", \"c\"} is not a set of numbers.\n    \n    Replay: \n    1.3: min(S) FAILED \n    1.2: S <~> {\"a\", \"b\", \"c\"}\n    1.1: min <~> procedure(collectionValue) { /* predefined procedure `min' */ }\n\\end{verbatim}\nThis error message tells us that the function \\texttt{min} is only defined for sets of numbers.\nThe same holds for the function \\texttt{max}.\n\nAnother function that can be used to extract elements from a set is the function \\texttt{from}.\nThis function is called as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n\\texttt{x := from(S);}\n\\\\[0.2cm]\nHere, \\texttt{S} is supposed to be a set, while \\texttt{x} is a variable.  If the assignment above\nis executed, the function \\texttt{from} takes some element from the set \\texttt{S} and assigns it to\n\\texttt{x}.   
Furthermore, this element is \\colorbox{amethyst}{removed} from the set \\texttt{S}.\nIf \\texttt{S} is empty, then the undefined value \\texttt{om} is assigned to the variable \\texttt{x}\nand \\texttt{S} remains empty.  \n\nAt this point you might ask: When we call \\texttt{from(S)}, how do we\nknow which element is taken from \\texttt{S}?  The answer is that we don't know which element is\ntaken by \\texttt{from} and hence our program should work regardless of which element is removed from\n\\texttt{S}. The program \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/from.stlx}{\\texttt{from.stlx}}\nin Figure \\ref{fig:from.stlx} on page\n\\pageref{fig:from.stlx} shows how \\texttt{from} can be used to print the elements of a set one by\none.  Here, every element of \\texttt{S} is printed in a separate row.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    printSet := procedure(S) {\n        if (S == {}) {\n            return;\n        }\n        x := from(S);\n        print(x);\n        printSet(S);\n    };\n    S := { 13, 5, 7, 2, 4 };\n    printSet(S);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Printing the elements of a set one by one.}  \\label{fig:from.stlx}\n\\end{figure} \n\nIn addition to the function \\texttt{from}, \\setlx\\ provides the function \\texttt{arb} that takes an\narbitrary element from a given set.  However, in contrast to \\texttt{from}, a call to \\texttt{arb}\ndoes not change the set.  For example, when executing the statements \n\\begin{verbatim}\n    S := {2, 3, 5, 7, 13};\n    x := arb(S);\n    print(\"x = $x$\");\n    print(\"S = $S$\");\n\\end{verbatim}\nwe get the following output:\n\\begin{verbatim}\n    x = 13\n    S = {2, 3, 5, 7, 13}\n\\end{verbatim}\n\n\\subsection{Concatenation of Lists}\nFor sets, the operator ``\\texttt{+}'' computes the union.  However, this operator can also be\napplied to lists.  If \\texttt{L1} and \\texttt{L2} are both lists, the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L1 + L2}\n\\\\[0.2cm]\n\\blue{concatenates} the lists \\texttt{L1} and \\texttt{L2}.  For example, the assignment\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L := [1, 2, 3] + [4, 5, 6];}\n\\\\[0.2cm]\ncreates the list \\texttt{[1, 2, 3, 4, 5, 6]} and stores it into \\texttt{L}.\n\n\\subsection{The Length Operator ``\\texttt{\\#}''}\nThe unary prefix operator  ``\\texttt{\\#}'' computes the length of a list when it is applied to a\nlist.  When applied to a set it returns the number of elements of this set.  For example, we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{\\# [2, 3, 5, 7]} = \\texttt{4}$ \\quad and \\quad \n$\\texttt{\\# \\{2, 3, 5, 3 \\}} = 3$.\n\n\\subsection{List Indexing}\nWe can access the \\texttt{i}-th elements of a list \\texttt{L} using the notation \\texttt{L[i]}, provided that \\texttt{i} is\na positive natural number that is less than or equal to the length of the list \\texttt{L}.\nThe list \\texttt{L} can\nalso be changed using this notation, so for example the assignment\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L[3] := 42;}\n\\\\[0.2cm]\nsets the third element of \\texttt{L} to the number \\texttt{42}.  
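\nAs a quick illustration (our own toy example, using only the indexing notation just described):\n\\begin{verbatim}\n    L := [10, 20, 30];\n    print(L[2]);    // prints 20\n    L[3] := 42;\n    print(L);       // prints [10, 20, 42]\n\\end{verbatim}\n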
Basically, the syntax is the same\nas the syntax of array access in the programming language \\texttt{C}.  However, there is one very\nimportant difference with respect to \\texttt{C}:\n\n      \\begin{center}\n      \\colorbox{red}{\\framebox{\\colorbox{yellow}{\\framebox{\n      \\begin{minipage}{0.45\\linewidth}\n        In \\textsc{SetlX}, list indexing starts with the number \\texttt{1}!\n      \\end{minipage}}}}}\n      \\end{center}      \n\n\\noindent\nTherefore, after executing the statements\n\\\\[0.2cm]\n\\hspace*{1.3cm} \\texttt{L := [1, 2, 3];} \\\\\n\\hspace*{1.3cm} \\texttt{x := L[1];}\n\\\\[0.2cm]\nthe variable  \\texttt{x} is set to 1.  \n\nIn order to retrieve the last element of a list \\texttt{L} we can use the expression ``\\texttt{L[\\#L]}'',\nbecause \\texttt{\\#L} returns the number of elements of the list  \\texttt{L}.  Alternatively, we can\nuse the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L[-1]}\n\\\\[0.2cm]\nto retrieve the last element of the list  \\texttt{L}.  Similarly, the expression\n\\texttt{L[-2]} returns the penultimate element of the list \\texttt{L},  while \\texttt{L[-3]} returns\nthe ante-penultimate element of \\texttt{L}. \n\n\\subsection{List Slicing}\nOften, we have to return a sublist of a given list.  This can be done using \\blue{slicing}.  If\n\\texttt{L} is a list and \\texttt{i} and \\texttt{j} are positive natural numbers that are not greater\nthan the length of \\texttt{L}, then\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L[i..j]}\n\\\\[0.2cm]\nis the sublist of \\texttt{L} that starts at the \\texttt{i}-th element of \\texttt{L} and extends to\nthe \\texttt{j}-th element of \\texttt{L}.  If \\texttt{j} is less than \\texttt{i}, the expression\n\\texttt{L[i..j]} returns the empty list instead.  For example, after defining\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{L := [5, 4, 3, 2, 1];}\n\\\\[0.2cm]\nwe have \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{L[2..4]} = \\texttt{[4, 3, 2]}$ \\quad and \\quad $\\texttt{L[4..2]} = \\texttt{[]}$.\n\n\n\\subsection{Selection Sort}\nIn order to see a practical application of the concepts discussed so far, we present a sorting\nalgorithm that is known as \\href{https://en.wikipedia.org/wiki/Selection_sort}{\\emph{selection sort}}.\nThis algorithm sorts a given list \\texttt{L} and works as follows:\n\\begin{enumerate}\n\\item If \\texttt{L} is empty, \\texttt{sort(L)} is also empty:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{sort([])} = \\texttt{[]}$.\n\\item Otherwise, we first compute the minimum of \\texttt{L}.  Clearly, the minimum needs to be the\n      first element of the sorted list.  We remove this minimum from \\texttt{L}, sort the remaining\n      elements recursively, and finally attach the minimum at the front of this list:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{sort(L)} = \\texttt{[min(L)] + sort([}x \\in \\texttt{L} \\texttt{|} x \\not= \\texttt{min}(L)\\texttt{])}$.\n\\end{enumerate}\nFigure \\ref{fig:min-sort.stlx} on page \\pageref{fig:min-sort.stlx} shows the program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/min-sort.stlx}{\\texttt{min-sort.stlx}}\nthat implements selection sort  in \\textsc{SetlX}. 
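\nTo see the recursion at work, here is a small trace of our own for the call \\texttt{minSort([3, 1, 2])}:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{minSort([3, 1, 2])} = \\texttt{[1] + minSort([3, 2])} = \\texttt{[1, 2] + minSort([3])} = \\texttt{[1, 2, 3]}$.\n\\\\[0.2cm]\nNote that the recursive call removes \\emph{every} occurrence of the minimum, so duplicate elements survive only once: the list \\texttt{[13, 5, 13, 7, 2, 4]} used in Figure \\ref{fig:min-sort.stlx} is sorted to \\texttt{[2, 4, 5, 7, 13]}.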
\begin{figure}[!ht]
\centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    minSort := procedure(L) {
        if (L == []) {
            return [];
        }
        m := min(L);
        // keep all occurrences of the minimum so that duplicates survive
        return [x : x in L | x == m] + minSort([x : x in L | x != m]);
    };   
    L := [ 13, 5, 13, 7, 2, 4 ];
    print("sort($L$) = $minSort(L)$");
\end{Verbatim}
\vspace*{-0.3cm}
\caption{Implementing selection sort in \setlx.}
\label{fig:min-sort.stlx}
\end{figure}

\section{Control Flow and Boolean Operators}
The language \textsc{SetlX} provides all those \blue{control flow} statements that are used in
contemporary programming languages like \texttt{C} or \textsl{Java}.  We have already seen \blue{if-then-else}
statements on several occasions.  The most general form of this kind of branching statement is shown
in Figure \ref{fig:if} on page \pageref{fig:if}.
\begin{figure}[!ht]
\begin{Verbatim}[ codes         = {\catcode`$=3\catcode`_=8\catcode`^=7},
                  frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  commandchars  = \\\{\},
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
      \texttt{if} (\textsl{test}\(_0\)) \texttt{\{}
          \textsl{body}\(_0\)
      \texttt{\} else if} (\textsl{test}\(_1\)) \texttt{\{}
          \textsl{body}\(_1\)
          \vdots
      \texttt{\} else if} (\textsl{test}\(_n\)) \texttt{\{}
          \textsl{body}\(_n\)
      \texttt{\} else \{}
          \textsl{body}\(_{n+1}\)
      \texttt{\}}
\end{Verbatim} 
\vspace*{-0.3cm}
\caption{The general form of a case distinction in \setlx.}  
\label{fig:if}
\end{figure}%$


Here,  $\texttt{test}_i$ denotes a \href{https://en.wikipedia.org/wiki/Boolean_expression}{Boolean expression}, 
i.e.~an expression that returns either ``\texttt{true}'' or ``\texttt{false}'' when evaluated, while
$\texttt{body}_i$ is a list of statements.  If the
evaluation of $\texttt{test}_i$ returns ``\texttt{true}'', then the statements in $\texttt{body}_i$ are
executed.  Otherwise, the next test $\texttt{test}_{i+1}$ is evaluated.  If all the tests
$\texttt{test}_0$, $\cdots$, $\texttt{test}_n$ 
fail, then the statements in $\texttt{body}_{n+1}$ are executed.
 
The tests $\texttt{test}_i$ can use the following relational infix operators:
\\[0.2cm]
\hspace*{1.3cm}
\texttt{==}, \quad
\texttt{!=}, \quad
\texttt{>},  \quad
\texttt{<},  \quad
\texttt{>=}, \quad
\texttt{<=}, \quad
\texttt{in}.
\\[0.2cm]
With the exception of the operator ``\texttt{in}'', these operators work the same way as they do in
the programming language \texttt{C}.  Hence,
the operator ``\texttt{==}'' compares two objects for equality and ``\texttt{!=}'' tests whether two objects
differ.  
For example, the Boolean expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{\{1, 2, 2\} == \{2, 1, 1\}}
\\[0.2cm]
returns ``\texttt{true}'', while the Boolean expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{[1, 2] == [2, 1]}
\\[0.2cm]
returns ``\texttt{false}''.  If \texttt{x} and \texttt{y} are numbers, the expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{x < y}
\\[0.2cm]
tests whether \texttt{x} is less than \texttt{y}.  Similarly, the expressions
\\[0.2cm]
\hspace*{1.3cm}
\texttt{x > y}, \quad \texttt{x <= y}, \quad and \quad \texttt{x >= y}
\\[0.2cm]
test whether \texttt{x} is bigger than, less than or equal to, or bigger than or equal to \texttt{y}, respectively.
If \texttt{x} and \texttt{y} are sets instead, then
\begin{enumerate}
\item the expression ``\texttt{x < y}'' is true if \texttt{x} is a \blue{proper subset} of \texttt{y}, 
      i.e.~if $\texttt{x} \subset \texttt{y}$ holds.

      A set $x$ is a proper subset of a set $y$ (written $x \subset y$) if $x$ is a subset of $y$, and,
      furthermore, $x$ is different from $y$, i.e.~we have
      \\[0.2cm]
      \hspace*{1.3cm}
      $x \subset y \;\stackrel{\mathrm{def}}{\Longleftrightarrow}\; x \subseteq y \wedge x \not= y$. 
\item The expression ``\texttt{x > y}'' is true if \texttt{y} is a proper subset of \texttt{x},
      i.e.~if $\texttt{y} \subset \texttt{x}$ holds.
\item The expression ``\texttt{x <= y}'' is true if \texttt{x} is a subset of \texttt{y},
      i.e.~if $\texttt{x} \subseteq \texttt{y}$ holds.
\item The expression ``\texttt{x >= y}'' is true if \texttt{y} is a subset of \texttt{x},
      i.e.~if $\texttt{y} \subseteq \texttt{x}$ holds.
\end{enumerate}
If \texttt{x} is an object and \texttt{S} is a set or a list, then the expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{x in S}
\\[0.2cm]
returns ``\texttt{true}'' if \texttt{x} is an element of \texttt{S}.

The comparison tests using the previously discussed relational operators can be combined into more
complex tests via the following logical operators:
\begin{enumerate}
\item ``\texttt{!}''  represents logical \blue{negation}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{!b}
      \\[0.2cm]
      is \texttt{true} iff the evaluation of the Boolean expression \texttt{b} returns \texttt{false}.
\item ``\texttt{\&\&}'' represents logical \blue{conjunction}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{a \&\& b}
      \\[0.2cm]
      is \texttt{true} iff the Boolean expressions \texttt{a} and \texttt{b} both evaluate to \texttt{true}.
\item ``\texttt{||}''  represents logical \blue{disjunction}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{a || b}
      \\[0.2cm]
      is \texttt{true} iff at least one of the Boolean expressions \texttt{a} or \texttt{b} evaluates to \texttt{true}.
\item ``\texttt{=>}''  represents logical \blue{implication}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{a => b}
      \\[0.2cm]
      is \texttt{true} iff \texttt{a} evaluates to \texttt{false} or \texttt{b} evaluates to
      \texttt{true}, i.e.~iff the truth of \texttt{a} implies the truth of \texttt{b}.
\item ``\texttt{<==>}''  represents logical \blue{equivalence}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{a <==> b}
      \\[0.2cm]
      is 
\texttt{true} iff  \texttt{a} has the same truth value as \texttt{b}.
\item ``\texttt{<!=>}''  represents logical \blue{antivalence}, i.e.~a Boolean expression of the form
      \\[0.2cm]
      \hspace*{1.3cm}
      \texttt{a <!=> b}
      \\[0.2cm]
      is \texttt{true} iff the truth values of \texttt{a} and \texttt{b} are different.  Hence, the expression
      ``\texttt{$a$ <!=> $b$}'' is an \href{https://en.wikipedia.org/wiki/Exclusive_or}{exclusive or} of the
      expressions $a$ and $b$, i.e.~it is true if either $a$ or $b$ is true but not both.
\end{enumerate}
Syntactically, the operators ``\texttt{<!=>}'' and ``\texttt{<==>}'' have the lowest \blue{precedence},
the \blue{precedence} of ``\texttt{=>}'' is lower than the \blue{precedence} of the operator
``\texttt{||}'', the \blue{precedence} of ``\texttt{||}'' is lower than the \blue{precedence} of the operator
``\texttt{\&\&}'', and the operator ``\texttt{!}'' has the highest \blue{precedence} among these
logical operators.  The relational operators discussed above bind more tightly than any of the
logical operators.  Hence, the expression  
\\[0.2cm]
\hspace*{1.3cm}
\texttt{!a == b \&\& b < c || x >= y}
\\[0.2cm]
is read as if it had been parenthesized as follows:
\\[0.2cm]
\hspace*{1.3cm}
\texttt{((!(a == b)) \&\& b < c) || x >= y}.
\\[0.2cm]
Note that the relative precedence of the operators ``\texttt{!}'', ``\texttt{\&\&}'', and
``\texttt{||}'' is the same as it is in the programming language \texttt{C}.

In addition to these operators, \textsc{SetlX} supports \blue{quantifiers}.  The \blue{universal quantifier} is
written as follows:
\\[0.2cm]
\hspace*{1.3cm}
\texttt{forall (x in S | b)}
\\[0.2cm]
Here, \texttt{x} is a variable, \texttt{S} is a set or list and \texttt{b} is a Boolean expression such
that the variable \texttt{x} occurs in \texttt{b}.  The expression above is to be interpreted as the formula
\\[0.2cm]
\hspace*{1.3cm}
$\forall \mathtt{x} \in \mathtt{S}: \mathtt{b}$.
\\[0.2cm]
The evaluation of ``\texttt{forall (x in S | b)}'' yields
\texttt{true} if evaluating the expression \texttt{b} yields \texttt{true} for every element \texttt{x} from
\texttt{S}.  For example, the expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{forall (x in \{1, 2, 3\} | x*x < 10)}
\\[0.2cm]
yields \texttt{true}, because we have
\\[0.2cm]
\hspace*{1.3cm}
$1 \cdot 1 < 10$, \quad $2 \cdot 2 < 10$, \quad and \quad $3 \cdot 3 < 10$.
\\[0.2cm]
A more interesting example is shown in Figure \ref{fig:primes-forall.stlx} on page
\pageref{fig:primes-forall.stlx}.  
The program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-forall.stlx}{\\texttt{primes-forall.stlx}}\ncomputes the set of prime numbers less than 100 by making use of a universal quantifier.\nFor a given natural number \\texttt{p}, the Boolean expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{forall (x in divisors(p) | x in \\{1, p\\})}\n\\\\[0.2cm]\nevaluates to \\texttt{true} iff every number \\texttt{x} that divides \\texttt{p} evenly is either the number\n\\texttt{1} or the number \\texttt{p}.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    isPrime := procedure(p) {\n        return p != 1 && forall (x in divisors(p) | x in { 1, p });\n    };\n    divisors := procedure(p) {\n        return { t : t in { 1 .. p } | p % t == 0 };\n    };\n    n := 100;\n    primes := [ p : p in [1 .. n] | isPrime(p) ];\n    print( primes );\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{Computing the prime numbers via a universal quantifier.}\n\\label{fig:primes-forall.stlx}\n\\end{figure}\n\nBesides the universal quantifier, \\setlx\\ supports the \\blue{existential quantifier}.  The syntax of this operator is\ngiven as follows: \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{exists (x in S | b)}\n\\\\[0.2cm]\nHere, \\texttt{x} is a variable, \\texttt{S} is a set or a list and \\texttt{b} is a Boolean expression such\nthat the variable \\texttt{x} occurs in \\texttt{b}.  Mathematically, this expression is interpreted\nas the formula\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\exists \\mathtt{x} \\in \\mathtt{S} : \\mathtt{b}$.\n\\\\[0.2cm]\nIf there is at least one value for \\texttt{x} in \\texttt{S} such that \\texttt{b} yields \\texttt{true}, \nthen the expression \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{exists (x in S | b)}\n\\\\[0.2cm]\nis evaluated as \\texttt{true}. \n\n\\remarkEng\nIf the evaluation of\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{exists (x in S | b)}\n\\\\[0.2cm]\nyields \\texttt{true}, then the variable \\texttt{x} is bound to the first value from \\texttt{S} such that the\nevaluation of \\texttt{b} returns \\texttt{true}.  Otherwise, \\texttt{x} is set to the undefined value\n\\texttt{om}.  For example, evaluating the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{exists (x in [1..10] | 2**x < x**2)}\n\\\\[0.2cm]\nreturns \\texttt{true} and, furthermore, assigns the value \\texttt{3} to the variable \\texttt{x}.  On the other\nhand, if the evaluation of \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{exists (x in S | b)}\n\\\\[0.2cm]\nyields \\texttt{false}, then the variable \\texttt{x} is set to the undefined value.\n\nSimilarly, if the evaluation of \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{forall (x in S | b)}\n\\\\[0.2cm]\nyields \\texttt{false}, then the variable \\texttt{x} is set to the first value from \\texttt{S} that falsifies \\texttt{b}.\nFor example, after evaluating the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{forall(x in [1 .. 10] | x**2 <= 2**x);}\n\\\\[0.2cm]\nthe variable \\texttt{x} is set to the number \\texttt{3} since this is the first number from the set\n$\\{1,\\cdots,10\\}$ such that the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{x**2 <= 2**x}\n\\\\[0.2cm]\nis false.  
On the other hand, if the evaluation of an expression of the form
\\[0.2cm]
\hspace*{1.3cm}
\texttt{forall (x in S | b)}
\\[0.2cm]
yields \texttt{true}, then the variable \texttt{x} is set to the undefined value. \eox

\subsection{\texttt{Switch}-Statements}
Instead of using \texttt{if}-\texttt{else}-statements, it is sometimes more convenient to use a
\texttt{switch}-statement. The syntax of a \texttt{switch}-statement is shown in Figure
\ref{fig:case} on page \pageref{fig:case}.  Here, 
\texttt{test}$_1$, $\cdots$, \texttt{test}$_n$ are Boolean expressions, while
\texttt{body}$_1$, $\cdots$, \texttt{body}$_n$, \texttt{body}$_{n+1}$ are lists of statements.
When this \texttt{switch}-statement is executed, the Boolean expressions
\texttt{test}$_1$, $\cdots$, \texttt{test}$_n$ are evaluated one by one until we find an expression 
\texttt{test}$_i$ that is \texttt{true}.  Then the corresponding statements in \texttt{body}$_i$ are executed and
the \texttt{switch}-statement ends.   The block \texttt{body}$_{n+1}$ following the keyword \texttt{default} is
only executed if all of the tests \texttt{test}$_1$, $\cdots$, \texttt{test}$_n$ fail.
A \texttt{switch}-statement can be rewritten as a long chain of \texttt{if}-\texttt{else-if} $\cdots$
\texttt{else-if} statements, but often a \texttt{switch}-statement is easier to understand.


\begin{figure}[!ht]
  \centering
\begin{Verbatim}[ codes         = {\catcode`_=8\catcode`^=7},
                  frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  commandchars  = \\\{\},
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm
                ]
      \underline{switch} \{
          \underline{case} test\(_1\) : body\(_1\) 
          \vdots
          \underline{case} test\(_n\) : body\(_n\)
          \underline{default}    : body\(_{n+1}\)
      \texttt{\}}
\end{Verbatim}
\vspace*{-0.3cm}
\caption{The general form of a \texttt{switch}-statement.}  \label{fig:case}
\end{figure} 

Figure \ref{fig:switch.stlx} shows the program 
\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/switch.stlx}{\texttt{switch.stlx}}.
The purpose of this program is to print a message that depends on the last digit of a number that is input by
the user.  In this program, the \texttt{switch}-statement  results in code that is much clearer than
it would be if we had
used \texttt{if-else}-statements instead.  
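For comparison, here is a sketch of how the first cases of the program from Figure
\ref{fig:switch.stlx} would have to be written with an \texttt{if}-\texttt{else-if} chain; the
remaining digits would be handled by seven more branches of the same shape:
\begin{verbatim}
    if (m == 0) {
        print("The last digit is 0.");
    } else if (m == 1) {
        print("The last digit is 1.");
    } else if (m == 2) {
        print("The last digit is 2.");
    // ... seven more branches for the digits 3, ..., 9 ...
    } else {
        print("The impossible happened!");
    }
\end{verbatim}
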
Later, the chapter on propositional logic will present examples of the\n\\texttt{switch}-statement that are even more convincing.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    print(\"Input a natural number:\");\n    n := read();\n    m := n % 10;\n    switch {\n        case m == 0 : print(\"The last digit is 0.\");\n        case m == 1 : print(\"The last digit is 1.\");\n        case m == 2 : print(\"The last digit is 2.\");\n        case m == 3 : print(\"The last digit is 3.\");\n        case m == 4 : print(\"The last digit is 4.\");\n        case m == 5 : print(\"The last digit is 5.\");\n        case m == 6 : print(\"The last digit is 6.\");\n        case m == 7 : print(\"The last digit is 7.\");\n        case m == 8 : print(\"The last digit is 8.\");\n        case m == 9 : print(\"The last digit is 9.\");\n        default     : print(\"The impossible happened!\");\n    }\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{A simple example of a \\texttt{switch}-statement.}\n\\label{fig:switch.stlx}\n\\end{figure}\n\\remarkEng\nThe programming language \\texttt{C} has a \\texttt{switch}-statement that is syntactically similar to the\n\\texttt{switch}-statement in \\setlx.  However, the \\texttt{switch}-statement is executed\n\\colorbox{amethyst}{differently} in \\texttt{C}.  In \\texttt{C}, if $\\texttt{body}_i$ is executed and $\\texttt{body}_i$ does not contain a\n\\texttt{break}-statement, then the following block $\\texttt{body}_{i+1}$ is also executed.  In contrast,\n\\setlx\\ will \\colorbox{amethyst}{never} execute more than one of the blocks $\\texttt{body}_i$.\n\n\n\\subsection{\\texttt{while}-Loops}\nThe syntax of  \\texttt{while}-loops is shown in Figure \\ref{fig:while} on page\n\\pageref{fig:while}.  Here,  \\texttt{test} is a Boolean expression and \\texttt{body} is a list of statements.  \nThe evaluation of \\texttt{test} must\nreturn either  \\texttt{true} or \\texttt{false}.\nIf the evaluation of \\texttt{test} yields  \\texttt{false}, then the loop is terminated.\nOtherwise, the statements in \\texttt{body} are executed.  After that, the \\texttt{while}-loop starts over\nagain, i.e.~the Boolean expression \\texttt{test} is evaluated again and depending on the result of this evaluation\nthe statements in  \\texttt{body} are executed again.  This is repeated until the evaluation of  \\texttt{test}\nfinally yields  \\texttt{false}.  It should be noted that in \\setlx\\ \\texttt{while}-loops work in exactly\nthe same way as they work in the programming language \\texttt{C}.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{alltt}\n      while (test) \\{\n          body\n      \\}\n\\end{alltt}\n\\vspace*{-0.3cm}\n\\caption{The general form of a \\texttt{while}-loop.  \\label{fig:while}}\n\\end{figure} \n\nFigure \\ref{fig:primes-while.stlx} on page \\pageref{fig:primes-while.stlx} shows the program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-while.stlx}{\\texttt{primes-while.stlx}}.\nThis program computes prime numbers using a  \\texttt{while}-loop.  
The main idea is that a number \texttt{p} is
prime if there is no prime number \texttt{t} less than \texttt{p} that divides \texttt{p} evenly.


\begin{figure}[!ht]
  \centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    n := 100;
    primes := {};
    p := 2;
    while (p <= n) {
        if (forall (t in primes | p % t != 0)) {
            print(p);
            primes += { p };
        }
        p += 1;
    }
\end{Verbatim} 
\vspace*{-0.3cm}
\caption{Iterative computation of prime numbers.}  \label{fig:primes-while.stlx}
\end{figure} %\$

\pagebreak

\subsection{\texttt{for}-Loops}
The syntax of  \texttt{for}-loops is shown in Figure \ref{fig:for} on page \pageref{fig:for}.  
Here \texttt{S} is either a set or a list, while \texttt{x} is the name of a variable.  Finally, \texttt{body}
is a list of statements.
If \texttt{S} contains $n$ elements, then the \texttt{for}-loop is executed $n$ times.  Every time the loop is
executed, a different value from \texttt{S} is assigned to the variable \texttt{x} and the statements in
\texttt{body} are executed using the current value of \texttt{x}.  A \texttt{for}-loop also works if \texttt{S}
is a string.  In this case, the loop iterates over the different characters of the string \texttt{S}.

\begin{figure}[!ht]
  \centering
\begin{alltt}
      for (x in S) \{
          body
      \}
\end{alltt}
\vspace*{-0.3cm}
\caption{General form of \texttt{for}-loops.}  \label{fig:for}
\end{figure} 
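As a first, simple example of a \texttt{for}-loop (a sketch of my own, not taken from the programs
discussed below), the following statements add up the numbers from \texttt{1} to \texttt{100}:
\begin{verbatim}
    sum := 0;
    for (x in [1 .. 100]) {
        sum += x;
    }
    print(sum);    // prints 5050
\end{verbatim}
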
Figure  \ref{fig:primes-for.stlx} on page \pageref{fig:primes-for.stlx} shows the program
\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-for.stlx}{\texttt{primes-for.stlx}}.
This program computes the prime numbers using a  \texttt{for}-loop.  The algorithm implemented here is known as
the \href{https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes}{\emph{sieve of Eratosthenes}}.
This algorithm works as follows:  If \texttt{n} is a natural number and we intend to compute all primes less than or
equal to \texttt{n}, then we first compute a list of length \texttt{n} such that the \texttt{i}-th entry of
this list is the number \texttt{i}.  This list is called \texttt{primes} and is computed in line 2.  The basic
idea is now that for every index $\texttt{k} \leq  \texttt{n}$ that is not prime we set \texttt{primes[k]} to
\texttt{0}.  We know that \texttt{k} is not prime if it can be written as a product of the form \texttt{i*j}
where both \texttt{i} and \texttt{j} are natural numbers bigger than \texttt{1}.
In order to set \texttt{primes[k]} to \texttt{0} for non-prime numbers \texttt{k} we need two loops,
where the outer loop iterates over all possible values of \texttt{i}, while the inner loop iterates
over \texttt{j}. The smallest value that a proper factor of any number less than or equal to \texttt{n}
can take is \texttt{2}, while the largest value is \texttt{n/2}.  Hence, it would suffice for
the outer \texttt{for}-loop to iterate over the values of \texttt{i} from \texttt{2} to \texttt{n/2};
for simplicity, the program lets \texttt{i} run all the way up to \texttt{n}, which does no harm
because for $\texttt{i} > \texttt{n/2}$ the body of the inner loop is never executed.
The inner \texttt{while}-loop takes a given \texttt{i} and iterates over all \texttt{j}
such that $\texttt{2} \leq \texttt{j}$ and $\texttt{i} \cdot \texttt{j} \leq \mathtt{n}$ is
satisfied.  Finally, the last line prints all \texttt{i} such that  $\texttt{primes[i]} \not= \mathtt{0}$, as these are
the prime numbers.


\begin{figure}[!ht]
  \centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    n := 100;
    primes := [1 .. n];
    for (i in [2 .. n]) {
        j := 2;
        while (i * j <= n) {
            primes[i * j] := 0;
            j += 1;
        }
    }
    print({ i : i in [2 .. n] | primes[i] != 0 });
\end{Verbatim} 
\vspace*{-0.3cm}
\caption{The algorithm of Eratosthenes.}  \label{fig:primes-for.stlx}
\end{figure} 

The algorithm shown in Figure \ref{fig:primes-for.stlx} can be refined if we make use of the following observations:
\begin{enumerate}
\item It is sufficient if \texttt{j} is initialized with \texttt{i} because once we start eliminating the
      multiples of \texttt{i}, all multiples of \texttt{i} of the form $\texttt{i}\cdot\texttt{j}$ where
      $\texttt{j} < \texttt{i}$ have already been eliminated from the list \texttt{primes}. 
\item If \texttt{i} is not a prime, then it can be written as $\texttt{i} = \texttt{i}' \cdot\texttt{j}$ where
      $\texttt{i}' < \texttt{i}$.  Hence, any multiples of \texttt{i} are also multiples of $\texttt{i}'$.
      Therefore, if \texttt{i} is not prime, then there is no need to eliminate the multiples of
      \texttt{i} as these multiples have already been eliminated at the time when the multiples of
      $\texttt{i}'$ were eliminated.  For example, there is no point in eliminating the multiples of
      \texttt{6} as these are also multiples of \texttt{2} and hence have already been eliminated
      by the time \texttt{i} is set to \texttt{6}.
\end{enumerate}
Furthermore, since \texttt{j} is now initialized with \texttt{i}, the first multiple to be
eliminated for a given \texttt{i} is $\texttt{i} \cdot \texttt{i}$.  If $\texttt{i} > \sqrt{\mathtt{n}}$,
then $\texttt{i} \cdot \texttt{i} > \texttt{n}$ and the inner loop does nothing at all.  Therefore,
it suffices to let the outer loop run from \texttt{2} up to \texttt{floor(sqrt(n))}, which is the
bound used in line 3 of the refined program.
Figure \ref{fig:primes-eratosthenes.stlx} on page \pageref{fig:primes-eratosthenes.stlx} shows the program
\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/primes-eratosthenes.stlx}{\texttt{primes-eratosthenes.stlx}}, 
which makes use of these ideas.  In order to skip the inner \texttt{while}-loop if \texttt{i} is not
a prime number we have used the statement ``\texttt{continue}''.  This statement terminates the
current iteration of the innermost loop and proceeds to the next iteration.  This works in the same way as in
the programming language \texttt{C}. 


\begin{figure}[!ht]
  \centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    n := 10000;
    primes := [1 .. n];
    for (i in [2 .. floor(sqrt(n))]) {
        if (primes[i] == 0) {
            continue;
        }
        j := i;
        while (i * j <= n) {
            primes[i * j] := 0;
            j += 1;
        }
    }
    print({ i : i in [2 .. n] | primes[i] > 0 });
\end{Verbatim} 
\vspace*{-0.3cm}
\caption{A more efficient version of the algorithm of Eratosthenes.}  \label{fig:primes-eratosthenes.stlx}
\end{figure}


\section{Loading a Program}
The \setlx\ interpreter can \blue{load} programs interactively into a running session.
If \textsl{file} is the name of a file, then the command
\\[0.2cm]
\hspace*{1.3cm}
\texttt{load("}\textsl{file}\texttt{");}
\\[0.2cm]
loads the program from  \textsl{file} and executes the statements given in this program.
For example, the command
\\[0.2cm]
\hspace*{1.3cm}
\texttt{load(\symbol{34}primes-forall.stlx\symbol{34});}
\\[0.2cm]
executes the program shown in Figure
\ref{fig:primes-forall.stlx} on page \pageref{fig:primes-forall.stlx}.
After loading the program, the command
\\[0.2cm]
\hspace*{1.3cm}
\texttt{print(isPrime);}
\\[0.2cm]
shows the following output:
\\[0.2cm]
\hspace*{1.3cm}
\texttt{procedure (p) \{ return p != 1 \&\& forall (x in divisors(p) | x in \{1, p\}); \}}.
\\[0.2cm]
This shows that the definitions of user-defined functions are available at run time once the file defining them
has been loaded.

\section{Strings}
\setlx\ supports \blue{strings}.  \href{https://en.wikipedia.org/wiki/String_(computer_science)}{Strings} are
nothing more than sequences of characters.  In \textsc{SetlX}, these have to be 
enclosed either in double quotes or in single quotes.  The operator ``\texttt{+}'' can be used to concatenate
strings.  For example, the expression 
\\[0.2cm]
\hspace*{1.3cm}
\texttt{\squote{abc} + \texttt{'uvw'};}
\\[0.2cm]
returns the result
\\[0.2cm]
\hspace*{1.3cm}
\squote{abcuvw}.
\\[0.2cm]
Furthermore, a natural number \texttt{n} can be multiplied with a string \texttt{s}.  The expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{n * s;}
\\[0.2cm]
returns a string consisting of \texttt{n} concatenations of \texttt{s}.  For example,
the result of
\\[0.2cm]
\hspace*{1.3cm}
\texttt{3 * \squote{abc};}
\\[0.2cm]
is the string \squote{abcabcabc}.  When multiplying a string with a number, the order of the
arguments does not matter. Hence, the expression
\\[0.2cm]
\hspace*{1.3cm}
\texttt{\squote{abc} * 3}
\\[0.2cm]
also yields the result \squote{abcabcabc}.  In order to extract substrings from a given string, we can use the same
slicing operator that also works for lists.  Therefore, if $s$ is a string and $k$ and $l$ are numbers, then
the expression 
\\[0.2cm]
\hspace*{1.3cm}
\texttt{$s$[$k$..$l$]}
\\[0.2cm]
extracts the substring from $s$ that starts with the $k$th character of $s$ and ends with the $l$th character.
For example, if \texttt{s} is defined by the assignment
\\[0.2cm]
\hspace*{1.3cm}
\texttt{s := "abcdefgh";}
\\[0.2cm]
then the expression \texttt{s[2..5]} returns the substring
\\[0.2cm]
\hspace*{1.3cm}
\texttt{"bcde"}.
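Since strings can be indexed and concatenated just like lists, many list algorithms carry over to
strings.  As a small sketch of my own, the following procedure reverses a string by iterating over
its characters with a \texttt{for}-loop, as discussed in the section on \texttt{for}-loops:
\begin{verbatim}
    reverse := procedure(s) {
        r := "";
        for (c in s) {
            r := c + r;    // put every character in front of the result
        }
        return r;
    };
    print(reverse("abcdefgh"));    // prints hgfedcba
\end{verbatim}
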
\section{Numerical Functions}
In order to support numerical computations, \setlx\ provides
\href{https://en.wikipedia.org/wiki/Floating-point_arithmetic}{floating point numbers}.  These are internally
stored as 64 bit numbers according to the \href{https://en.wikipedia.org/wiki/IEEE_754}{IEEE standard 754}.  In
order to work with floating point numbers,  \textsc{SetlX} provides the following functions: 
\begin{enumerate}
\item $\texttt{sin}(x)$ computes the \href{https://en.wikipedia.org/wiki/Sine}{sine} of $x$.
      Furthermore, the \href{https://en.wikipedia.org/wiki/Trigonometric_functions}{trigonometric functions}
      $\texttt{cos}(x)$ and $\texttt{tan}(x)$ are supported.  The 
      \href{https://en.wikipedia.org/wiki/Inverse_trigonometric_functions}{inverse trigonometric functions}
      are written as $\texttt{asin}(x)$, $\texttt{acos}(x)$ and $\texttt{atan}(x)$.    
\item $\texttt{sinh}(x)$ computes the \href{https://en.wikipedia.org/wiki/Hyperbolic_function}{hyperbolic sine} of $x$.
       Similarly,  $\mathtt{cosh}(x)$ returns the
       \href{https://en.wikipedia.org/wiki/Hyperbolic_function#Hyperbolic_cosine}{hyperbolic cosine} of $x$, while
      $\mathtt{tanh}(x)$ returns the
      \href{https://en.wikipedia.org/wiki/Hyperbolic_function#Hyperbolic_tangent}{hyperbolic tangent} of $x$. 
\item $\texttt{exp}(x)$ computes the \href{https://en.wikipedia.org/wiki/Exponential_function}{exponential
      function}, i.e.~we have 
      \\[0.2cm]
      \hspace*{1.3cm}
      $\texttt{exp}(x) = \mathrm{e}^x$.
      \\[0.2cm]
      Here, $\mathrm{e}$ denotes \href{https://en.wikipedia.org/wiki/E_(mathematical_constant)}{Euler's number}.
\item $\texttt{log}(x)$ computes the \href{https://en.wikipedia.org/wiki/Natural_logarithm}{natural logarithm} of  $x$.
      The logarithm base 10 of $x$ is computed as $\mathtt{log10}(x)$.
\item $\texttt{abs}(x)$ computes the \href{https://en.wikipedia.org/wiki/Absolute_value}{absolute value} of $x$.
\item $\mathtt{signum}(x)$ computes the \href{https://en.wikipedia.org/wiki/Sign_function}{sign function} of $x$.
\item $\texttt{sqrt}(x)$ computes the \href{https://en.wikipedia.org/wiki/Square_root}{square root} of $x$; we have
      \\[0.2cm]
      \hspace*{1.3cm}
      $\ds\texttt{sqrt}(x) = \sqrt{x}$ \quad and \quad $\ds \texttt{sqrt}(x)^2 = x$.
\item $\texttt{cbrt}(x)$ computes the \href{https://en.wikipedia.org/wiki/Cube_root}{cube root} of $x$; we have
      \\[0.2cm]
      \hspace*{1.3cm}
      $\ds\texttt{cbrt}(x) = \sqrt[\mbox{\scriptsize$3$}]{x}$ \quad and \quad $\ds\texttt{cbrt}(x)^3 = x$.
\item $\texttt{ceil}(x)$ computes the \href{https://en.wikipedia.org/wiki/Floor_and_ceiling_functions}{ceiling
      function} of $x$, i.e.~$\mathtt{ceil}(x)$ is the smallest integer that is at least as big as $x$.  We have
      \\[0.2cm]
      \hspace*{1.3cm}
      $\texttt{ceil}(x) = \min(\{ z \in \mathbb{Z} \mid z \geq x \})$.
      \\[0.2cm]
      Hence the function \texttt{ceil} rounds up.
\item $\texttt{floor}(x)$ computes the \href{https://en.wikipedia.org/wiki/Floor_and_ceiling_functions}{floor function},
      that is $\texttt{floor}(x)$ is the biggest integer not exceeding $x$; we have
      \\[0.2cm]
      \hspace*{1.3cm}
      $\texttt{floor}(x) = \max(\{ z \in \mathbb{Z} \mid z \leq x \})$.
      \\[0.2cm]
      Hence, the function \texttt{floor} rounds down.
\item $\texttt{round}(x)$ returns the nearest integer.  
Floating point numbers of the form $x.5$ are rounded up.\n      For example, $\\texttt{round(1.5)}=2$ and $\\texttt{round(-1.5)}=-1$.\n\\end{enumerate}\n\\textsc{SetlX} supports \\blue{unlimited precision}\\footnote{\n  In this context, \\blue{unlimited precision} means\n  that the precision is only limited by the available memory.\n}\nvia rational numbers.  For example, the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{1/2 + 1/3;}\n\\\\[0.2cm]\nreturns the result \\texttt{5/6}.  There is no over- or underflow when working with rational numbers, nor are\nthere any rounding errors.  For example, to compute\n\\href{https://en.wikipedia.org/wiki/E_(mathematical_constant)}{Euler's number} $\\mathrm{e}$ we can use the formula\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\displaystyle \\mathrm{e} = \\sum\\limits_{n=0}^\\infty \\frac{1}{n!}$.\n\\\\[0.2cm]\nIn \\textsc{SetlX} the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{nDecimalPlaces(+/ [ 1/n! : n in \\{0..50\\} ], 50);}\n\\\\[0.2cm]\ncomputes $\\mathrm{e}$ to a precision of 50 decimal digits.  The value returned is\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{2.71828182845904523536028747135266249775724709369995}.\n\\\\[0.2cm]\nThe function $\\texttt{nDecimalPlaces}(x, n)$ takes a rational number $x$ and converts this number into a\ndecimal number.  In this process, the first $n$ decimal places following the decimal point are retained.\n\n\n\\section{An Application: Fixed-Point Algorithms}\nSuppose we want to solve the equation \\\\[0.2cm]\n\\hspace*{1.3cm} $x = \\cos(x)$. \\\\[0.2cm]\nHere, $x$ is a real number that we seek to compute.  A simple approach that works in this case is to use a\n\\href{https://en.wikipedia.org/wiki/Fixed-point_iteration}{fixed-point iteration}.  To this end, we\ndefine the sequence $\\bigl(x_n\\bigr)_{n\\in\\mathbb{N}}$ inductively as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$x_0 := 0$ \\quad and \\quad $x_{n+1} := \\mathtt{cos}(x_n)$ \\quad for all $n \\in \\mathbb{N}$. \n\\\\[0.2cm]\nWith the help of the \n\\href{https://en.wikipedia.org/wiki/Banach_fixed-point_theorem}{Banach fixed-point theorem}\\footnote{\n  The Banach fixed-point theorem is discussed in the lecture on\n  \\href{https://en.wikipedia.org/wiki/Differential_calculus}{differential calculus}.  
This lecture is part of the
  second semester.
}
it can be shown that this sequence converges to a solution of the equation $x = \cos(x)$, i.e.~if we define
\\[0.2cm]
\hspace*{1.3cm}
$\bar{x} := \lim\limits_{n\rightarrow\infty} x_n$,
\\[0.2cm]
then we have
\\[0.2cm]
\hspace*{1.3cm}
$\cos\bigl(\bar{x}\bigr) = \bar{x}$.
\\[0.2cm]
Figure \ref{fig:solve.stlx} on page \pageref{fig:solve.stlx} shows the program
\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/solve.stlx}{\texttt{solve.stlx}}
that uses this approach to solve the equation $x = \cos(x)$.


\begin{figure}[!ht]
  \centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    x := 0.0;
    while (true) {
        old_x := x;
        x := cos(x);    
        print(x);
        if (abs(x - old_x) < 1.0e-13) {
            print("x = ", x);
            break;
        }   
    }
\end{Verbatim} 
\vspace*{-0.3cm}
\caption{Solving the equation $x = \cos(x)$ via fixed-point iteration.}  \label{fig:solve.stlx}
\end{figure} %\$

In this program, the iteration stops as soon as the difference between the variables \texttt{x} and 
\texttt{old\_x} is less than $10^{-13}$.  Here, \texttt{x} corresponds to $x_{n+1}$, while \texttt{old\_x}
corresponds to $x_n$.  Once the values of $x_{n+1}$ and $x_n$ are sufficiently close, the execution of the \texttt{while}-loop
is stopped using the \texttt{break} statement.  This statement works the same way as in the programming language
\texttt{C}, i.e.~it terminates the execution of the innermost loop containing the \texttt{break} statement. 
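A small variation of this program (my own sketch, not part of \texttt{solve.stlx}) also counts how
many iterations are needed; the limit of the iteration is the unique solution of $x = \cos(x)$,
which is approximately $0.739085$:
\begin{verbatim}
    x := 0.0;
    n := 0;
    while (true) {
        old_x := x;
        x     := cos(x);
        n     += 1;
        if (abs(x - old_x) < 1.0e-13) {
            break;
        }
    }
    print("x = ", x, " found after ", n, " iterations");
\end{verbatim}
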
\begin{figure}[!ht]
\centering
\begin{Verbatim}[ frame         = lines, 
                  framesep      = 0.3cm, 
                  firstnumber   = 1,
                  labelposition = bottomline,
                  numbers       = left,
                  numbersep     = -0.2cm,
                  xleftmargin   = 0.8cm,
                  xrightmargin  = 0.8cm,
                ]
    solve := procedure(f, x0) {
        x := x0;
        for (n in [1 .. 10000]) {
            oldX := x;
            x := f(x);
            if (abs(x - oldX) < 1.0e-12) {
                return x;
            }
        }
    };
    print("solution to x = cos(x):  ", solve(cos, 0));
    print("solution to x = 1/(1+x): ", solve(x |-> 1.0/(1+x), 0));
\end{Verbatim}
\vspace*{-0.3cm}
\caption{A generic implementation of the fixed-point algorithm.}
\label{fig:fixpoint.stlx}
\end{figure}

Figure \ref{fig:fixpoint.stlx} on page \pageref{fig:fixpoint.stlx} shows the program
\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/fixpoint.stlx}{\texttt{fixpoint.stlx}}.
In this program we have implemented a function \texttt{solve} that takes two arguments.
\begin{enumerate}
\item \texttt{f} is a unary function.  The purpose of \texttt{solve} is to compute the solution of the equation
      \\[0.2cm]
      \hspace*{1.3cm}
      $f(x) = x$.
      \\[0.2cm]
      This equation is solved with the help of a fixed-point algorithm.
\item \texttt{x0} is used as the initial value for the fixed-point iteration.
\end{enumerate}
Line 11 calls \texttt{solve} to compute the solution of the equation $x = \cos(x)$.
Line 12 solves the equation 
\\[0.2cm]
\hspace*{1.3cm}
$\ds x = \bruch{1}{1+x}$. 
\\[0.2cm]
This equation is equivalent to the quadratic equation $x^2 + x = 1$.  Note that we have defined the function
 $\ds x \mapsto \frac{1}{1+x}$ via the expression
 \\[0.2cm]
\hspace*{1.3cm}
\texttt{x |-> 1.0/(1+x)}.
\\[0.2cm]
This expression is called an \blue{anonymous function} since we haven't given a name to the function.  It is
also important to note that we have used the floating point number  $1.0$ instead of the integer \texttt{1}.
The reason is that otherwise \setlx\ would use rational numbers when doing the fixed-point iteration.  Although this would
work, arithmetic using rational numbers is considerably less efficient than arithmetic using floating point
numbers. 

\remarkEng
The function \texttt{solve} is only able to solve the equation $f(x) = x$ if the function $f$ is a 
\href{https://en.wikipedia.org/wiki/Contraction_mapping}{contraction mapping}.  A function 
$f:\mathbb{R} \rightarrow \mathbb{R}$
is called a \blue{contraction mapping} iff 
\\[0.2cm]
\hspace*{1.3cm}
$|f(x) - f(y)| < |x - y|$ \quad for all $x,y \in \mathbb{R}$ such that $x \not= y$.
\\[0.2cm]
This notion will be discussed in more detail in the lecture on 
\href{https://github.com/karlstroetmann/Analysis/blob/master/Script/analysis.pdf}{differential
  calculus} in the second semester. \eox 

\section{Case Study: Computation of Poker Probabilities}
In this short section we are going to show how to compute probabilities for the
\href{https://en.wikipedia.org/wiki/Texas_hold_%27em}{\textsl{Texas Hold'em}} variation of 
\href{https://en.wikipedia.org/wiki/Poker}{poker}.   Texas Hold'em poker is played with a deck of 52
cards.  Every card has a \blue{value}.  This value is an element of the set
\\[0.2cm]
\hspace*{1.3cm} 
$\textsl{values} = \{ 2, 3, 4, 5, 6, 7, 8, 9, 10, \textsl{Jack}, \textsl{Queen}, \textsl{King}, \textsl{Ace} \}$.
\\[0.2cm]
Furthermore, every card has a \blue{suit}.  This suit is an element of the set
\\[0.2cm]
\hspace*{1.3cm} 
$\textsl{suits} = \{ \club, \mbox{$\color{red}{\heart}$}, \mbox{$\color{red}{\diamondsuit}$}, \spade \}$.
\\[0.2cm]
These suits are pronounced \blue{club}, \blue{heart}, \blue{diamond}, and \blue{spade}.
As a card is determined by its value and its suit, a card can be represented as a pair $\pair(v,s)$, where $v$
denotes the value while $s$ is the suit of the card.  Hence, the set of all cards can be represented as the set
\\[0.2cm]
\hspace*{1.3cm} 
$\textsl{deck} = \bigl\{ \pair(v,s) \mid v \in \textsl{values} \wedge \textsl{s} \in \textsl{suits} \bigr\}$.
\\[0.2cm]
At the start of a game of Texas Hold'em, every player receives two cards.  These two cards are known
as the \blue{preflop} or the \blue{hole}.  Next, there is a \blue{bidding phase} where players can bet on their
cards.   After this bidding phase, the dealer puts three cards open on the table.  These three cards are
known as \blue{flop}.  
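Before we work through an example, it is useful to know how large the search space is.  From the
point of view of a player who has been dealt her two hole cards, a flop is a 3-element subset of the
remaining 50 cards.  The number of such subsets is
$(50 \cdot 49 \cdot 48)/(3 \cdot 2 \cdot 1) = 19600$, which we can confirm with a one-line
computation:
\begin{verbatim}
    print(50 * 49 * 48 / (3 * 2 * 1));    // prints 19600
\end{verbatim}
The program discussed below arrives at the same number by constructing the set of all flops
explicitly.
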
Let us assume that a player has been dealt the set of cards\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\{ \\pair(3, \\club), \\pair(3, \\spade) \\}$.\n\\\\[0.2cm]\nThis set of cards is known as a \\blue{pocket pair}.  Then the player would like to know the probability\nthat the flop will contain another card with value $3$, as this would greatly increase her chance of\nwinning the game.  In order to compute this probability we have to compute the number of possible\nflops that contain a card with the value $3$ and we have to divide this number by the number of all\npossible flops:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds \\frac{\\;\\mbox{number of flops containing a card with value $3$}\\;}{\\mbox{number of all possible flops}}$\n\\\\[0.2cm]\nThe program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/poker-triple.stlx}{poker-triple.stlx}\nshown in Figure \\ref{fig:poker-triple.stlx} performs this computation.  We proceed to discuss this\nprogram line by line.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.0cm,\n                  xrightmargin  = 0.0cm,\n                ]\n    values := { \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"T\", \"J\", \"Q\", \"K\", \"A\" }; \n    suits  := { \"c\", \"h\", \"d\", \"s\" };\n    deck   := { [ v, s ] : v in values, s in suits };\n    hole   := { [ \"3\", \"c\" ], [ \"3\", \"s\" ] };\n    rest   := deck - hole;\n    flops  := { { k1, k2, k3 } : k1 in rest, k2 in rest, k3 in rest \n                               | #{ k1, k2, k3 } == 3 \n              };\n    trips  := { f : f in flops | [ \"3\", \"d\" ] in f || [ \"3\", \"h\" ] in f };\n    print(1.0 * #trips / #flops);\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{Computing a probability in poker.}\n\\label{fig:poker-triple.stlx}\n\\end{figure}\n\n\\begin{enumerate}\n\\item In line 1 the set \\texttt{values} is defined to be the set of all possible values that a card\n      can take.  In defining this set we have made use of the following abbreviations:\n      \\begin{enumerate}\n      \\item ``\\texttt{T}'' is short for ``\\blue{Ten}'',\n      \\item ``\\texttt{J}'' is short for ``\\blue{Jack}'',\n      \\item ``\\texttt{Q}'' is short for ``\\blue{Queen}'',\n      \\item ``\\texttt{K}'' is short for ``\\blue{King}'', and\n      \\item ``\\texttt{A}'' is short for ``\\blue{Ace}''.\n      \\end{enumerate}\n\\item In line 2 the set \\texttt{suits} represents the possible suits of a card.  Here, we have used\n      the following abbreviations:\n      \\begin{enumerate}\n      \\item ``\\texttt{c}'' is short for $\\club$, which is pronounced as \\blue{club},\n      \\item ``\\texttt{h}'' is short for \\mbox{\\color{red}{$\\heart$}}, which is pronounced as \\blue{heart}, \n      \\item ``\\texttt{d}'' is short for \\mbox{\\color{red}{$\\diamondsuit$}}, which is pronounced as \\blue{diamond}, and \n      \\item ``\\texttt{s}'' is short for $\\spade$, which is pronounced as \\blue{spade}. \n      \\end{enumerate} \n\\item Line 3 defines the set of all cards.  This set is stored as the variable \\texttt{deck}.  Every\n      card is represented as a pair of the form $[v,s]$. Here, $v$ is the value of the card, while $s$ is its suit.\n\\item Line 4 defines the set \\texttt{hole}.  
This set represents the two cards that have been given to our player.\n\\item The remaining cards are defined as the variable  \\texttt{rest} in line 5.\n\\item Line 6 computes the set of all possible flops.  Since the order of the cards in the flop does\n      not matter, we use sets to represent these flops.  However, we have to take care that the flop\n      does contain three \\colorbox{amethyst}{different} cards.  Hence, we have to ensure that the three\n      cards \\texttt{k1}, \\texttt{k2}, and \\texttt{k3} that make up the flop satisfy the inequalities \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\mathtt{k1} \\not= \\mathtt{k2}$, \\quad $\\mathtt{k1} \\not= \\mathtt{k3}$,  \\quad and \\quad $\\mathtt{k2} \\not= \\mathtt{k3}$.\n      \\\\[0.2cm]\n      These inequalities are satisfied if and only if the set \n      $\\{ \\mathtt{k1}, \\mathtt{k2}, \\mathtt{k3} \\}$ contains exactly three elements.  Hence, when\n      choosing \\texttt{k1}, \\texttt{k2}, and \\texttt{k3} we have to make sure that the condition\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\\#\\{ k1, k2, k3 \\} == 3 }\n      \\\\[0.2cm]\n      holds.\n\\item Line 9 computes the subset of flops that contain at least one card with a value of 3.\n      As the 3 of clubs and the 3 of spades have already been dealt to our player, the only cards\n      with value 3 that are left in the deck are the 3 of diamonds and the 3 of hearts.  Therefore, we are looking for\n      those flops that contain one of these two cards.\n\\item Finally, the probability for obtaining another card with a value of 3 in the flop is computed as\n      the ratio of the number of flops containing a card with a value of 3 to the number of all possible flops.\n\n      However, we have to be careful here:  The evaluation of the expressions\n      \\texttt{\\#trips} and \\texttt{\\#flops} produces \\blue{integer} numbers.  Therefore, the division\n      \\texttt{\\#trips / \\#flops} yields a \\blue{rational} number.  As we intend to compute a \\blue{floating point}\n      number we have to convert the result into a floating point number by multiplying the result\n      with the floating point number $1.0$.\n\\end{enumerate}\nWhen we run the program we see that the probability of improving a \\blue{pocket pair} on the flop to \\blue{trips} or better\nis about  $11.8\\%$.\n\n\\remarkEng\nThe method to compute probabilities that has been sketched above only works if the sets that have to\nbe computed are small enough to be retained in memory.  If this condition is\nnot satisfied we can use the \\href{https://en.wikipedia.org/wiki/Monte_Carlo_method}{\\emph{Monte Carlo method}} \nto compute the probabilities instead.  This method will be discussed in the lecture on \n\\href{https://github.com/karlstroetmann/Algorithms/blob/master/Lecture-Notes/algorithms.pdf}{algorithms}.\n\n\n\\section{Case Study: Finding a Path in a Graph}\nIn the following section, I will present an application that is more interesting since it is practically\nrelevant.  In order to prepare for this, we will now discuss the problem of finding a \\blue{path} in a\n\\href{https://en.wikipedia.org/wiki/Directed_graph}{directed graph}. \nAbstractly, a graph consists of \\blue{vertices} and \\blue{edges} that connect these vertices.  In an application, the\nvertices could be towns and villages, while the edges would be interpreted as streets connecting these\nvillages.  
To simplify matters, let us assume for now that the vertices are given as natural numbers, while the
edges are represented as pairs of natural numbers.  Then, the graph can be represented as the set of its edges,
as the set of vertices is implicitly given once the edges are known.  To make things concrete, let us consider
an example.  In this case, the set of edges is called \texttt{R} and is defined as follows: 
\\[0.2cm]
\hspace*{1.3cm}
$\texttt{R}\; \mathtt{:=}\; \bigl\{ \pair(1,2), \pair(2,3), \pair(1,3), \pair(2,4), \pair(4,5) \bigr\}$.
\\[0.2cm]
In this graph, the set of vertices is given as
\\[0.2cm]
\hspace*{1.3cm}
$\{ 1, 2, 3, 4, 5 \}$.
\\[0.2cm]
This graph is shown in Figure \ref{fig:graph0} on page \pageref{fig:graph0}.  You should note that the
connections between vertices that are given in this graph are unidirectional:  While there is a connection from
vertex $1$ to vertex $2$, there is no connection from vertex $2$ to vertex $1$.

 
\begin{figure}[!ht]
  \centering
  \epsfig{file=Figures/graph0,scale=0.6}

  \caption{A simple graph.}
  \label{fig:graph0}
\end{figure}



\noindent
The graph given by the relation \texttt{R} contains only the direct connections of vertices.  For example, in
the graph shown in Figure \ref{fig:graph0}, there is a direct connection from vertex $1$ to vertex $2$ and
another direct connection from vertex $2$ to vertex $4$.  Intuitively, vertex $4$ is reachable from vertex $1$,
since from vertex $1$ we can first reach vertex $2$ and from vertex $2$ we can then reach vertex $4$.  However,
there is no direct connection between the vertices $1$ and $4$.  To make this more formal, we define
a \colorbox{amethyst}{\blue{path}} 
of a graph $R$ as a list of vertices
\\[0.2cm]
\hspace*{1.3cm}
$[x_1, x_2, \cdots, x_n]$ \quad such that \quad $\pair(x_i,x_{i+1}) \in R$ \quad for all $i=1,\cdots,n-1$.
\\[0.2cm]
In this case, the path $[x_1, x_2, \cdots, x_n]$ is written as
\\[0.2cm]
\hspace*{1.3cm}
$x_1 \mapsto x_2 \mapsto \cdots \mapsto x_n$
\\[0.2cm]
and has the \blue{length} $n-1$.  It is important to note that the length of a path
$[x_1,x_2,\cdots,x_n]$ is defined as the number of edges connecting the vertices and not as the
number of vertices appearing in the path.

Furthermore,  two vertices $a$ and $b$ are said to be \colorbox{amethyst}{\blue{connected}} iff there exists a path
\\[0.2cm]
\hspace*{1.3cm}
$[x_1,\cdots,x_n]$ \quad such that \quad $a = x_1$ \quad and \quad $b = x_n$.
\\[0.2cm]
The goal of this section is to develop an algorithm that checks whether two vertices $a$ and $b$ are connected.
Furthermore, we want to be able to compute the corresponding path connecting the vertices $a$ and $b$.
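The definition of a path can be translated into \setlx\ almost literally.  The following procedure
is a sketch of my own (the name \texttt{isPath} is not used in the programs below); it checks
whether a given list of vertices is a path of the relation \texttt{R}:
\begin{verbatim}
    isPath := procedure(p, R) {
        return forall (i in [1 .. #p - 1] | [ p[i], p[i+1] ] in R);
    };
    R := { [1,2], [2,3], [1,3], [2,4], [4,5] };
    print(isPath([1, 2, 4, 5], R));    // prints true
    print(isPath([1, 4], R));          // prints false
\end{verbatim}
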
\nIn the \\href{https://github.com/karlstroetmann/Lineare-Algebra/blob/master/Script/lineare-algebra.pdf}{math lecture}\nwe have seen that the transitive closure $R^+$ can be computed as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$R^+ = \\bigcup\\limits_{n=1}^{\\infty} R^n = R^1 \\cup R^2 \\cup R^3 \\cup \\cdots$  \n\\\\[0.2cm]\nInitially, this formula might look intimidating as it suggests an infinite computation.\nFortunately, it turns out that we do not have to compute all powers of the form $R^n$.  Let me\nexplain the reason that allows us to cut the computation short.  \n\\begin{enumerate}\n\\item $R$ is the set of direct connections between two vertices.\n\\item $R^2$ is the same as $R \\circ R$ and this relational product is defined as\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n       $R \\circ R = \\{ \\pair(x,z) \\mid \\exists y \\colon \\pair(x,y) \\in R \\wedge \\pair(y,z) \\in R \\}$.\n      \\\\[0.2cm]\n      Hence, $R \\circ R$ contains those pairs $\\pair(x,z)$ that are connected via one intermediate vertex $y$,\n      i.e.~there is a path of the form $x \\mapsto y \\mapsto z$ that connects $x$ and $z$.  This path\n      has length 2.  In general, we can show by induction that $R^n$ connect those pairs that are\n      connected by a path of length $n$.  The induction step of this proof runs as follows:\n\\item $R^{n+1}$ is defined as $R \\circ R^{n}$ and therefore we have\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $R \\circ R^n = \\{ \\pair(x,z) \\mid \\exists y \\colon \\pair(x,y) \\in R \\wedge \\pair(y,z) \\in R^n \\}$.\n      \\\\[0.2cm]\n      As $\\pair(y,z) \\in R^n$, the induction hypothesis guarantees that the vertices $y$ and $z$ are\n      connected by a path of length $n$.  Hence, this \n      path has the form\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\underbrace{y \\mapsto \\cdots \\mapsto z}_{\\mbox{\\scriptsize path of length $n$.}}$\n      \\\\[0.2cm]\n      Adding $x$ at the front of this path will produce the path\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $x \\mapsto y \\mapsto \\cdots \\mapsto z$.\n      \\\\[0.2cm]\n      This path has a length of $1 + n = n + 1$ and, furthermore, connects $x$ and $z$.  Hence $R^{n+1}$\n      contains those pairs $\\pair(x, z)$ that are connected by a path of length $n+1$.\n\\end{enumerate}\nNow the important observation is the following. The set of all vertices is finite.  For the arguments sake, let\nus assume there are $k$ vertices.  But then every path that has a length of  $k$ or greater must contain one\nvertex that is visited at least twice and hence this path is longer than necessary, i.e.~there is a shorter path that\nconnects the same vertices.  Therefore, for a finite graph with $k$ vertices, the formula to compute the\ntransitive closure can be simplified as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\ds R^+ = \\bigcup\\limits_{i=1}^{k-1} R^i$.\n\\\\[0.2cm]\nWhile we could use this formula as its stands, it is more efficient to use a \\blue{fixed-point iteration} instead.\nTo this end, we prove that the transitive closure $R^+$ satisfies the following equation:\n\\begin{equation}\n  \\label{fixpunkt}\n  R^+ = R \\cup R \\circ R^+. \n\\end{equation}\nLet me remind you that the precedence of the operator $\\circ$ \nis higher than the precedence of the operator $\\cup$.  Therefore, the expression $R \\cup R \\circ R^+$ is parenthesized\nas $R \\cup (R \\circ R^+)$.  Equation \\ref{fixpunkt} can be proven algebraically.  
We have:
\\[0.2cm]
\hspace*{1.3cm}
$
\begin{array}{cll}
    & R \cup R \circ R^+ \\[0.2cm]
  = & R \cup R \circ \bigcup\limits_{i=1}^{\infty} R^i \\[0.4cm]
  = & R \cup R \circ \bigl(R^1 \cup R^2 \cup R^3 \cup \cdots \bigr) \\[0.2cm]
  = & R \cup \bigl(R \circ R^1 \cup R \circ R^2 \cup R \circ R^3 \cup \cdots \bigr) \\[0.2cm]
  = & R \cup \bigl(R^2 \cup R^3 \cup  R^4 \cup \cdots \bigr)  \\[0.2cm]
  = & R^1 \cup \bigl(R^2 \cup R^3 \cup  R^4 \cup \cdots \bigr) \\[0.2cm]
  = & \bigcup\limits_{i=1}^{\infty} R^i \\[0.4cm]
  = & R^+.
\end{array}
$
\\[0.2cm]
Equation  \ref{fixpunkt} can now be used to compute $R^+$ via a fixed-point iteration.
To this end, let us define a sequence of relations $(T_n)_{n \in \mathbb{N}}$ by induction on $n$:
\begin{enumerate}
\item[I.A.] $n = 0$: 

            $T_0 := R$
\item[I.S.] $n \mapsto n+1$:

            $T_{n+1} := R \cup R \circ T_n$. 
\end{enumerate}
The relation  $T_n$ can be expressed via the relation $R$; we have
\begin{enumerate}
\item $T_0 = R$.
\item $T_1 = R \cup R \circ T_0 = R \cup R \circ R = R^1 \cup R^2$.
\item $\begin{array}[t]{lcl}
       T_2  & = & R \cup R \circ T_1 \\
            & = & R \cup R \circ (R^1 \cup R^2) \\
            & = & R^1 \cup R^2 \cup R^3. \\
       \end{array}
      $
\end{enumerate}
In general, we can show by induction that
\\[0.2cm]
\hspace*{1.3cm}
$T_n = \bigcup\limits_{i=1}^{n+1} R^i$
\\[0.2cm]
holds for all $n \in \mathbb{N}$.  The base case of this proof is immediate from the definition of $T_0$.
In the induction step we observe the following:
\\[0.2cm]
\hspace*{1.3cm}
$
 \begin{array}{lcll}
   T_{n+1} & = & \ds R \cup R \circ T_n & \mbox{(by definition)} \\[0.2cm]
           & = & \ds R \cup R \circ \biggl(\bigcup\limits_{i=1}^{n+1} R^i\biggr) &
                 \mbox{(by induction hypothesis)} \\[0.4cm]
           & = & \ds R \cup R \circ \left(R \cup \cdots \cup R^{n+1}\right) \\[0.2cm] 
           & = & \ds R \cup R^2 \cup \cdots \cup R^{n+2}  &
                 \mbox{(by the distributivity of $\circ$ over $\cup$)} \\[0.2cm]
           & = & \ds \bigcup\limits_{i=1}^{n+2} R^i & \Box 
   \end{array}
$
\\[0.2cm]
The sequence $(T_n)_{n\in\mathbb{N}}$ has another useful property:  It is 
\blue{monotonically increasing}.  In general, a sequence of sets $(X_n)_{n\in\mathbb{N}}$ is called
\blue{monotonically increasing} iff we have
\\[0.2cm]
\hspace*{1.3cm}
$\forall n \in \mathbb{N}: X_n \subseteq X_{n+1}$,
\\[0.2cm]
i.e.~the sets $X_n$ get bigger with growing index $n$.
The monotonicity of the sequence  $(T_n)_{n \in \mathbb{N}}$ is an immediate consequence of the equation
\\[0.2cm]
\hspace*{1.3cm}
$\ds T_n = \bigcup\limits_{i=1}^{n+1} R^i$ 
\\[0.2cm]
because we have:
\\[0.2cm]
\hspace*{1.3cm}
$
\begin{array}[t]{llcl}
                & \ds T_n \subseteq T_{n+1} \\[0.2cm]
\Leftrightarrow & \ds \bigcup\limits_{i=1}^{n+1} R^i \subseteq \bigcup\limits_{i=1}^{n+2} R^i \\[0.5cm]
\Leftrightarrow & \ds \bigcup\limits_{i=1}^{n+1} R^i \subseteq \bigcup\limits_{i=1}^{n+1} R^i \cup R^{n+2} \\
\end{array}
$
\\[0.2cm]
The last inclusion obviously holds, since $A \subseteq A \cup B$ is true for all sets $A$ and $B$.
If the relation  $R$ is finite, then the transitive closure $R^+$ is finite, too.  
The sets $T_n$ \nare all subsets of $R^+$ because we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds T_n = \\bigcup\\limits_{i=1}^{n+1} R^i \\subseteq \\bigcup\\limits_{i=1}^{\\infty} R^i = R^+$ \\quad for all $n \\in \\mathbb{N}$.\n\\\\[0.2cm]\nHence the sets $T_n$ cannot grow indefinitely.  Because of the monotonicity of the sequence \n$(T_n)_{n\\in\\mathbb{N}}$ it follows that there exists an index  $k \\in \\mathbb{N}$ such that the sets $T_n$ do\nnot grow any further once $n$ has reached $k$, i.e.~we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds \\forall n \\in \\mathbb{N}:( n \\geq k \\rightarrow T_n = T_k)$.\n\\\\[0.2cm]\nBut this implies that\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds T_n = \\bigcup\\limits_{i=1}^{n+1} R^i = \\bigcup\\limits_{i=1}^{\\infty} R^i = R^+$ \n\\quad holds for all $n \\geq k$.\n\\\\[0.2cm]\nTherefore, the algorithm for computing  $R^+$ iterates the equation \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\ds T_{n+1} := R \\cup R \\circ T_n$\n\\\\[0.2cm]\nuntil the equation  $T_{n+1} = T_n$ is satisfied, since this implies that $T_n = R^+$.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    transClosure := procedure(R) {\n        T := R;\n        while (true) {\n            oldT := T;\n            T    := R + product(R, T);\n            if (T == oldT) {\n                return T;\n            }\n        }\n    };\n    product := procedure(R1, R2) {\n        return { [x,z] : [x,y] in R1, [y,z] in R2 };\n    };\n    R := { [1,2], [2,3], [1,3], [2,4], [4,5] };\n    print( \"R = \", R );\n    print( \"Computing the transitive closure of R:\" );\n    T := transClosure(R);\n    print( \"R+ = \", T );\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Computing the transitive closure.}  \n\\label{fig:transitive-closure.stlx}\n\\end{figure} %\\$\n\n\\noindent\nThe program \n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/transitive-closure.stlx}{\\texttt{transitive-closure.stlx}}\nthat is shown in Figure\n\\ref{fig:transitive-closure.stlx} on page \\pageref{fig:transitive-closure.stlx} implements this idea.\nThe program produces the following output:\n\\begin{verbatim}\n    R = {[1, 2], [2, 3], [1, 3], [2, 4], [4, 5]}\n    Computing the transitive closure of R:\n    R+ = {[1, 2], [1, 3], [1, 4], [1, 5], [2, 3], [2, 4], [2, 5], [4, 5]}\n\\end{verbatim}\nThe transitive closure $R^+$ of a relation $R$ has a very intuitive interpretation:\nit contains all pairs $\\pair(x,y)$ such that there is a path leading from \n$x$ to $y$.  \nThe function $\\texttt{product}(R_1, R_2)$ computes the relational product $R_1\\circ R_2$ \naccording to the formula\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$R_1 \\circ R_2 = \\{ \\pair(x, z) \\mid \\exists y: \\pair(x,y) \\in R_1 \\wedge \\pair(y,z) \\in R_2 \\}$.\n\\\\[0.2cm]\nThe implementation of the procedure \\texttt{product} shows the most general way to define a set in\n\\textsc{SetlX}.  
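\nFor example, assuming the relation \\texttt{R} from the program above, the following two comprehensions are written in the same style (a small sketch for illustration; the variable names are ours):\n\\begin{verbatim}\n    // all pairs that are connected by a path of exactly two steps\n    R2      := { [x,z] : [x,y] in R, [y,z] in R };\n    // all vertices that have at least one outgoing edge\n    sources := { x : [x,y] in R };\n\\end{verbatim}\n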
In general, a set can be defined via an expression of the form\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\{\\; \\textsl{expr} \\;\\texttt{:}\\; [x^{(1)}_1, \\cdots, x^{(1)}_{n(1)}] \\;\\texttt{in}\\; s_1,\n     \\cdots, [x^{(k)}_1, \\cdots, x^{(k)}_{n(k)}] \\;\\texttt{in}\\; s_k \\;\\texttt{|}\\;\n     \\textsl{cond} \\;\\}\n$.\n\\\\[0.2cm]\nHere, for all $i=1, \\cdots, k$ the variable $s_i$ denotes a set of lists of length $n(i)$.  When the\nexpression given above is evaluated, the variables $x^{(i)}_1, \\cdots, x^{(i)}_{n(i)}$ are replaced\nby the corresponding values in the lists from the sets  $s_i$.  For example, if we define\n\\begin{verbatim}\n    s1 := { [ 1, 2, 3 ], [ 5, 6, 7 ] };\n    s2 := { [ \"a\", \"b\" ], [ \"c\", \"d\" ] };\n    m := { [ x1, x2, x3, y1, y2 ] : [ x1, x2, x3 ] in s1, [ y1, y2 ] in s2 };\n\\end{verbatim}\nthen the set  \\texttt{m} has the following value:\n\\begin{verbatim}\n    { [1, 2, 3, \"a\", \"b\"], [5, 6, 7, \"c\", \"d\"],  \n      [1, 2, 3, \"c\", \"d\"], [5, 6, 7, \"a\", \"b\"] }\n\\end{verbatim}\n\n\n\\subsection{Computing the Paths}\nSo far, given a graph represented by a relation $R$ and two vertices $x$ and $y$, we can only check\nwhether there is a path leading from $x$ to $y$, but we cannot compute this path.  In this\nsubsection we will extend the procedure \\texttt{transClosure} so that it will also compute the\ncorresponding path.  The main idea is to extend the notion of a relational product to the notion of\na \\blue{path product}, where a \\blue{path product} is defined on sets of paths.  In order to do so,\nwe introduce three functions for lists.\n\\begin{enumerate}\n\\item Given a list $p$, the function $\\texttt{first}(p)$ returns the first element of $p$: \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{first}\\bigl([x_1,\\cdots,x_m]\\bigr) = x_1$.\n\\item Given a list $p$, the function $\\texttt{last}(p)$ returns the last element of $p$: \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{last}\\bigl([x_1,\\cdots,x_m]\\bigr) = x_m$.\n\\item If $p = [ x_1, \\cdots, x_m ]$ and $q = [ y_1, \\cdots, y_n ]$ are two paths such that\n      $\\texttt{first}(q) = \\texttt{last}(p)$, we define the \\blue{join} of $p$ and $q$ as \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $p \\oplus q := [x_1, \\cdots, x_m, y_2, \\cdots, y_n ]$.\n      \\\\[0.2cm]\n      For example, we have $[1,2,4] \\oplus [4,5] = [1,2,4,5]$.\n\\end{enumerate}\nIf $P_1$ and $P_2$ are sets of paths, we define the  \\blue{path product} of\n$P_1$ and $P_2$ as follows: \\\\[0.2cm]\n\\hspace*{1.3cm} \n$P_1 \\bullet P_2 := \n\\bigl\\{\\; p_1 \\oplus p_2 \\mid p_1 \\in P_1 \\wedge p_2 \\in P_2 \\wedge \\texttt{last}(p_1) =\n\\texttt{first}(p_2) \\;\\bigr\\}\n$.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    transClosure := procedure(R) {\n        P := R;\n        while (true) {\n            oldP := P;\n            P    := R + pathProduct(R, P);\n            print(P);\n            if (P == oldP) {\n                return P;\n            }\n        }\n    };\n    pathProduct := procedure(P, Q) {\n        return { join(x, y) : x in P, y in Q | x[-1] == y[1] };\n    };    \n    join := procedure(p, q) {\n        return p + q[2..];\n    };\n    R := { [1,2], [2,3], [1,3], [2,4], [4,5] };\n    print( \"R = \", R );\n    print( \"computing all paths\" 
);\n    P := transClosure(R);\n    print( \"P = \", P );\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Computing all connections.}  \\label{fig:path.stlx}\n\\end{figure} %\\$\n\n\\begin{figure}[!ht]\n  \\centering\n  \\vspace*{-9cm}\n\n  \\epsfig{file=Figures/graph-zykl,scale=0.5}\n  \\vspace*{-1cm}\n\n  \\caption{A graph with a cycle.}\n  \\label{fig:graph-zykl}\n\\end{figure}\n\nUsing the notion of a \\blue{path product} we are able to extend the program shown in Figure\n\\ref{fig:transitive-closure.stlx} such that it computes all paths between two vertices.\nThe resulting program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/path.stlx}{\\texttt{path.stlx}}\nis shown in Figure \\ref{fig:path.stlx} on page \\pageref{fig:path.stlx}.\nUnfortunately, the program does not work any more if the graph is \\blue{cyclic}.  A graph is defined\nto be \\blue{cyclic} if there is a path of length greater than $1$ that starts and ends at the same\nvertex.  This path is then called a \\blue{cycle}.\nFigure \\ref{fig:graph-zykl} on page \\pageref{fig:graph-zykl} shows a cyclic graph.  This graph is\ncyclic because it contains the path\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{[1, 2, 4, 1]}\n\\\\[0.2cm]\nand this path is a cycle.\nThe problem with this graph is that it contains an infinite number of paths that connect the vertex\n1 with the vertex 2: \\\\[0.2cm]\n\\hspace*{1.3cm}\n$[ 1, 2 ]$, $[ 1, 2, 4, 1, 2 ]$, \n$[ 1, 2, 4, 1, 2, 4, 1, 2 ]$, \n$[ 1, 2, 4, 1, 2, 4, 1, 2, 4, 1, 2 ]$, $\\cdots$\n\\\\[0.2cm]\nOf course, there is no point in computing a path that visits a vertex more than once as these paths\ncontain cycles.  Our goal is to eliminate all those paths that contain cycles.\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ numbers       = left,\n                  numbersep     = -0.2cm,\n                  frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  xleftmargin   = 0.0cm,\n                  xrightmargin  = 0.0cm,\n                ]\n    pathProduct := procedure(P, Q) {\n        return { join(x,y) : x in P, y in Q | x[-1] == y[1] && noCycle(x, y) };\n    };\n    noCycle := procedure(L1, L2) {\n        return #({ x : x in L1 } * { x : x in L2 }) == 1;\n    };\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Computing the connections in a cyclic graph.}  \n\\label{fig:path-cyclic.stlx}\n\\end{figure} %\\$\n\nFigure \\ref{fig:path-cyclic.stlx} on page \\pageref{fig:path-cyclic.stlx} shows how the implementation of the function\n\\texttt{pathProduct} has to be changed so that the resulting program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/path-cyclic.stlx}{\\texttt{path-cyclic.stlx}}\nalso works for cyclic graphs. \n\\begin{enumerate}\n\\item In line 2, we compute only those paths that are not cyclic.\n\\item Line 5 tests whether the join  $\\texttt{L1} \\oplus \\texttt{L2}$ is cyclic.  The join\n      of \\texttt{L1} and \\texttt{L2} is cyclic iff the lists \\texttt{L1} and \\texttt{L2} have more\n      than one common element. 
\n      The lists \\texttt{L1} and \\texttt{L2} will always have at least one common element, as we join\n      these lists only if the last element of \\texttt{L1} is equal to the first element of  \\texttt{L2}.\n      If there were another vertex common to \\texttt{L1} and \\texttt{L2}, then the path\n      $\\texttt{L1} \\oplus \\texttt{L2}$ would be cyclic.\n\\end{enumerate}\n\nIn general, we are not really interested in computing all possible paths between two given vertices\n\\texttt{x} and \\texttt{y}.  Instead, we just want to compute the shortest path leading from \\texttt{x} to \\texttt{y}.\nFigure \\ref{fig:find-path.stlx} on page \\pageref{fig:find-path.stlx} shows the procedure \\texttt{reachable}. \nThis procedure takes three arguments:\n\\begin{enumerate}\n\\item \\texttt{x} and \\texttt{y} are vertices of a graph.\n\\item \\texttt{R} is a binary relation representing a directed graph.\n\\end{enumerate}\nThe call  \\texttt{reachable(x, y, R)} checks whether \\texttt{x} and \\texttt{y} are connected and, furthermore,\ncomputes the shortest path from \\texttt{x} to \\texttt{y}, provided such a path exists.\nThe complete program can be found in the file\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/find-path.stlx}{\\texttt{find-path.stlx}}.\nNext, we discuss the implementation of the procedure  \\texttt{reachable}.\n\\begin{enumerate}\n\\item Line 2 initializes the set \\texttt{P}.  After $n$ iterations, this set will contain all paths\n      that start in the vertex \\texttt{x} and that have a length of at most $n$.\n\n      Initially, there is just the trivial path \\texttt{[x]} that starts in \\texttt{x} and has\n      length $0$.\n\\item Line 5 tries to extend all previously computed paths by one step.\n      If we are lucky, the set \\texttt{P} is increased in this step.\n\\item Line 6 selects all those paths from the set \\texttt{P} that lead to the vertex \\texttt{y}.\n      These paths are stored in the set \\texttt{Found}.\n\\item Line 7 checks whether we have indeed found a path ending at \\texttt{y}.  This is the case if\n      the set \\texttt{Found} is not empty.  \n      In this case, we return any of these paths.\n\\item If we have not yet found the vertex \\texttt{y} and, furthermore, we have not been able to find\n      any new paths during this iteration,  the procedure returns in line 11.\n      As the \\texttt{return} statement in line 11 does not return a value, the procedure will\n      instead return the undefined value $\\Omega$.\n\\end{enumerate}\nThe procedure call \\texttt{reachable(x, y, R)} will compute the \\textbf{shortest} path connecting\n\\texttt{x} and \\texttt{y} because it computes paths of increasing length.  The first iteration\ncomputes all paths starting in \\texttt{x} that have a length of at most 1, the second iteration\ncomputes all paths starting in \\texttt{x} that have a length of at most 2, and in general the $n$-th\niteration computes all paths starting in \\texttt{x} that have a length of at most $n$.  Hence, if\nthere is a path of length $n$, then this path will be found in the $n$-th iteration unless a shorter path has\nalready been found in a previous iteration.  \n\n\\remarkEng\nThe algorithm described above is known as \n\\href{https://en.wikipedia.org/wiki/Breadth-first_search}{breadth first search}. 
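\nFor instance, for the relation $R$ from the earlier examples one would expect the following behaviour (a sketch; the shortest path given in the comment was determined by hand):\n\\begin{verbatim}\n    R := { [1,2], [2,3], [1,3], [2,4], [4,5] };\n    print( reachable(1, 5, R) );   // expected output: [1, 2, 4, 5]\n\\end{verbatim}\n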
\\eox \n\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    reachable := procedure(x, y, R) {\n        P := { [x] };\n        while (true) {\n            oldP  := P;\n            P     := P + pathProduct(P, R);\n            Found := { l : l in P | l[-1] == y };\n            if (Found != {}) {\n                return arb(Found);\n            }\n            if (P == oldP) {\n                return;\n            }\n        }\n    };\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Finding the shortest path between two vertices.}  \n\\label{fig:find-path.stlx}\n\\end{figure}\n\n\\subsection{The Wolf, the Goat, and the Cabbage}\nNext, we present an application of the theory developed so far.  We solve a problem that has puzzled\nthe greatest agricultural economists for centuries.  The puzzle we want to solve is known as the \n\\href{http://jeux.lulu.pagesperso-orange.fr/html/anglais/loupChe/loupChe1.htm}{wolf-goat-cabbage puzzle}:  \n\\vspace*{0.3cm}\n\n\\begin{minipage}[c]{14cm}\n{\\sl\nAn agricultural economist has to sell a wolf, a goat, and a cabbage on a market place.  In order to\nreach the market place, she has to cross a river.  The boat that she can use is so small that it can\nonly accommodate either the goat, the wolf, or the cabbage in addition to the agricultural economist.\nNow if the agricultural economist leaves the wolf alone with the goat, the wolf will eat the goat.\nIf, instead, the agricultural economist leaves the goat with the cabbage, the goat will eat the cabbage.\nIs it possible for the agricultural economist to develop a schedule that allows her to cross the river\nwithout either the goat or the cabbage being eaten?\n}\n\\end{minipage}\n\\vspace*{0.3cm}\n\n\\noindent\nIn order to compute a schedule, we first have to model the problem.  The various \\blue{states} of the problem will\nbe regarded as \\blue{vertices} of a graph and this graph will be represented as a binary relation.\nTo this end we define the set\n\\\\[0.2cm]\n\\hspace*{1.3cm} \n$\\texttt{All} := \\{ \\squote{farmer}, \\squote{wolf}, \\squote{goat},\\squote{cabbage} \\}$.\n\\\\[0.2cm]\nEvery node will be represented as a subset \\texttt{S} of the set \\texttt{All}.  The idea is that the set \\texttt{S}\nspecifies those objects that are on the left side of the river.  We assume that initially the farmer\nis on the left side of the river. \nTherefore, the set of all possible states can be defined as the set\n\\begin{verbatim}\n        P := { S : S in 2 ** All | !problem(S) && !problem(All - S) };\n\\end{verbatim}\nHere, we have used the procedure \\texttt{problem} to check whether a given set \\texttt{S} has a problem. 
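\nIn this definition, the expression \\texttt{2 ** All} denotes the power set of \\texttt{All}.  For example, the following sketch should print the number 16, since a set with 4 elements has $2^4 = 16$ subsets:\n\\begin{verbatim}\n    print( #(2 ** All) );   // expected output: 16\n\\end{verbatim}\n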
\nNote that since \\texttt{S} is the set of objects on the left side, the expression $\\texttt{All - S}$\ncomputes the set of objects on the right side of the river.\n\nNext, a set \\texttt{S} of objects has a problem if both of the following conditions\nare satisfied:\n\\begin{enumerate}\n\\item The farmer is not an element of \\texttt{S} and\n\\item either \\texttt{S} contains both the goat and the cabbage or \\texttt{S} contains both the wolf and the goat.\n\\end{enumerate}\nTherefore, we can implement the function \\texttt{problem} as follows:\n\n\\begin{verbatim}\n    problem := procedure(S) {\n        return !(\"farmer\" in S)                                     && \n               ({\"goat\", \"cabbage\"} <= S || {\"wolf\", \"goat\"} <= S);\n    };\n\\end{verbatim}\nWe proceed to compute the relation \\texttt{R} that contains all possible transitions between\ndifferent states.  We will compute \\texttt{R} using the formula:\n\\\\[0.2cm]\n\\hspace*{0.75cm}\n\\texttt{R := R1 + R2;}\n\\\\[0.2cm]\nHere \\texttt{R1} describes the transitions that result from the farmer crossing the river from left\nto right, while \\texttt{R2} describes the transitions that result from the farmer crossing the river\nfrom right to left.  We can define the relation \\texttt{R1} as follows:\n\\begin{verbatim}\n    R1  := { [S, S - B]: S in P, B in 2 ** S\n                       | S - B in P && \"farmer\" in B && #B <= 2\n           };\n\\end{verbatim}\nLet us explain this definition in detail:\n\\begin{enumerate}\n\\item Initially, \\texttt{S} is the set of objects on the left side of the river.  Hence, \\texttt{S}\n      is an element of the set of all states that we have defined as \\texttt{P}.\n\\item \\texttt{B} is the set of objects that are put into the boat and that do cross the river.  Of\n      course, for an object to go into the boat it has to be on the left side of the river to begin\n      with.  Therefore, \\texttt{B} is a subset of \\texttt{S} and hence an element of the power set\n      of \\texttt{S}. \n\\item Then  \\texttt{S-B} is the set of objects that are left on the left side of the river after\n      the boat has crossed.  Of course, the new state \\texttt{S-B} has to be a state that does not\n      have a problem.  Therefore, we check that \\texttt{S-B} is an element of \\texttt{P}.\n\\item Furthermore, the farmer has to be in the boat.  This explains the condition \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\\symbol{34}farmer\\symbol{34} in B}.\n\\item Finally, the boat can only have two passengers.  Therefore, we have added the condition\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\\#B <= 2}.\n\\end{enumerate}\nNext, we have to define the relation \\texttt{R2}.  However, as crossing the river from right to left\nis just the reverse of crossing the river from left to right, \\texttt{R2} is just the inverse of\n\\texttt{R1}.   Hence we define:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{R2  := \\{ [y, x] : [x, y] in R1 \\};}\n\\\\[0.2cm]\nFinally, the start state has all objects on the left side.  Therefore, we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{start := All;}\n\\\\[0.2cm]\n
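For example, in the start state we have $\\texttt{S} = \\texttt{All}$, and choosing $\\texttt{B} = \\{\\squote{farmer}, \\squote{goat}\\}$ yields the successor state $\\texttt{S - B} = \\{\\squote{wolf}, \\squote{cabbage}\\}$.  This transition is exactly the first crossing in the schedule shown in Figure \\ref{fig:wolf-ziege-solution}.\n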
In the end, all objects have to be on the right side of the river.  That means that nothing is left\non the left side.  Therefore, we define\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{goal := \\{\\};}\n\\\\[0.2cm]\nFigure \\ref{fig:wolf-ziege} on page \\pageref{fig:wolf-ziege} shows the program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/wolf-goat-cabbage.stlx}{\\texttt{wolf-goat-cabbage.stlx}}\nthat combines the statements shown so far.  The solution computed by this program is shown in Figure\n \\ref{fig:wolf-ziege-solution}.\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ codes         = {\\catcode`$=3\\catcode`_=8\\catcode`^=7},\n                  frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    problem := procedure(S) {\n        return !(\"farmer\" in S)                                     && \n               ({\"goat\", \"cabbage\"} <= S || {\"wolf\", \"goat\"} <= S);\n    };\n    \n    All := { \"farmer\", \"wolf\", \"goat\", \"cabbage\" };\n    P   := { S : S in 2 ** All | !problem(S) && !problem(All - S) };\n    R1  := { [S, S - B]: S in P, B in 2 ** S\n                       | S - B in P && \"farmer\" in B && #B <= 2\n           };\n    R2  := { [y, x] : [x, y] in R1 };\n    R   := R1 + R2;\n    \n    start := All;\n    goal  := {};\n    \n    path  := reachable(start, goal, R);\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{Solving the wolf-goat-cabbage problem.}  \n\\label{fig:wolf-ziege}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ codes         = {\\catcode`$=3\\catcode`_=8\\catcode`^=7},\n                  frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    {\"cabbage\", \"farmer\", \"goat\", \"wolf\"}                                 {}\n                             >>>> {\"farmer\", \"goat\"} >>>> \n    {\"cabbage\", \"wolf\"}                                   {\"farmer\", \"goat\"}\n                             <<<< {\"farmer\"} <<<< \n    {\"cabbage\", \"farmer\", \"wolf\"}                                   {\"goat\"}\n                             >>>> {\"farmer\", \"wolf\"} >>>> \n    {\"cabbage\"}                                   {\"farmer\", \"goat\", \"wolf\"}\n                             <<<< {\"farmer\", \"goat\"} <<<< \n    {\"cabbage\", \"farmer\", \"goat\"}                                   {\"wolf\"}\n                             >>>> {\"cabbage\", \"farmer\"} >>>> \n    {\"goat\"}                                   {\"cabbage\", \"farmer\", \"wolf\"}\n                             <<<< {\"farmer\"} <<<< \n    {\"farmer\", \"goat\"}                                   {\"cabbage\", \"wolf\"}\n                             >>>> {\"farmer\", \"goat\"} >>>> \n    {}                                 {\"cabbage\", \"farmer\", \"goat\", \"wolf\"}\n\\end{Verbatim} \n\\vspace*{-0.3cm}\n\\caption{A schedule for the agricultural economist.}  \n\\label{fig:wolf-ziege-solution}\n\\end{figure}\n
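\nAs a small sanity check of this model, note that \\texttt{All} has $2^4 = 16$ subsets.  Counting by hand, exactly three of these subsets have a problem, namely $\\{\\squote{goat}, \\squote{cabbage}\\}$, $\\{\\squote{wolf}, \\squote{goat}\\}$, and $\\{\\squote{wolf}, \\squote{goat}, \\squote{cabbage}\\}$.  A set \\texttt{S} is excluded from \\texttt{P} if either \\texttt{S} or \\texttt{All - S} has a problem, so $6$ of the $16$ subsets are excluded and the set \\texttt{P} should therefore contain $10$ states.\n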
\n\n\\section{Terms and Matching}\nSo far we have seen the basic data structures of \\setlx\\ like numbers, strings, sets, and lists.\nThere is one more data structure that is supported by \\setlx.  This is the data structure of\n\\colorbox{amethyst}{\\blue{terms}}.  \nThis data structure is especially useful when we develop programs that deal with mathematical formulas.\nFor example, in this section we will develop a program that reads a string like \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n``\\texttt{x * exp(x)}'',\n\\\\[0.2cm]\ninterprets this string as describing the real valued function \n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$x \\mapsto x \\cdot \\exp(x)$, \n\\\\[0.2cm]\nand then takes the derivative of this function with respect to the variable $x$.  This program is\neasy to implement if real valued functions are represented as terms.  The reason is that \\setlx\\ provides \n\\colorbox{amethyst}{\\blue{matching}} for terms.  We will define this notion later.  Matching\nis one of the main ingredients of the programming language \\href{https://en.wikipedia.org/wiki/Prolog}{Prolog}.\nThis programming language was quite popular in artificial intelligence during the eighties and has\ninspired the matching that is available in \\textsc{SetlX}.\n\n\n\\subsection{Constructing and Manipulating Terms}\nIn order to build terms, we first need \\colorbox{amethyst}{\\blue{functors}}.  It is important not to confuse functors with\nfunction symbols.  Therefore, functors have to be preceded by the character  \n``\\texttt{@}''.\nFor example, the following strings can be used as functors:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{@f}, \\quad \\texttt{@FabcXYZ}, \\quad \\texttt{@sum}, \\quad \\texttt{@Hugo\\_}.\n\\\\[0.2cm]\nHowever, in the expression ``\\texttt{@f}'', the string ``\\texttt{f}'' is the functor.  The\ncharacter ``\\texttt{@}'' is only used as an escape character that tells us that ``\\texttt{f}'' is\nnot a function symbol but rather a functor.  Next, we define \\colorbox{amethyst}{\\blue{terms}}.  If $F$ is a functor and \n$t_1$, $t_2$, $\\cdots$, are any values, i.e.~they could be numbers, strings, lists, sets, or terms\nthemselves, then\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{@}F(t_1, t_2, \\cdots, t_n)$\n\\\\[0.2cm]\nis a term.  Syntactically, terms look very similar to function calls.  The only difference between a function call\nand a term is the following: \n\\begin{enumerate}\n\\item A function call starts with a function symbol. \n\\item A term starts with a functor. \n\\end{enumerate}\n\n\n\\examplesEng\n\\begin{enumerate}\n\\item \\texttt{@Address(\\symbol{34}Coblitzallee 1-9\\symbol{34}, 68163, \\symbol{34}Mannheim\\symbol{34})}\n\n      is a term that represents an address.\n\\item \\texttt{@product(@variable(\\symbol{34}x\\symbol{34}), @exp(@variable(\\symbol{34}x\\symbol{34})))}\n\n      is a term that represents the  function $x \\mapsto x \\cdot \\exp(x)$.  \n      \\eox\n\\end{enumerate}\nAt this point you might ask how terms are evaluated.  The answer is that terms\n\\colorbox{amethyst}{are not evaluated!}  \nTerms are used to represent data in a way that is both concise and readable.  Hence, terms are values like\nnumbers, sets or strings.  As terms are values, they don't need to be evaluated.\n
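\nThe following sketch illustrates this point (the functor name \\texttt{sum} is chosen arbitrarily):\n\\begin{verbatim}\n    t := @sum(1, 2);\n    print(t);   // prints the term @sum(1, 2); it is not evaluated to 3\n\\end{verbatim}\n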
\nLet us demonstrate a very simple application of terms.  Imagine that \\setlx\\ wouldn't provide lists as a native data\ntype.  Then, we could implement lists via terms.  First, we would use a functor to represent the empty list.\nLet us choose the functor \\texttt{nil} for this purpose.  Hence, we have\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{@nil()} \\;\\widehat{=}\\; \\texttt{[]}$,\n\\\\[0.2cm]\nwhere we read the symbol ``$\\widehat{=}$'' as ``corresponds to''.\nNote that the parentheses after the functor  \\texttt{nil} are \\colorbox{amethyst}{necessary!}  Next, in order to represent\na list with first element $x$ and a list $r$ of remaining elements we use the functor \\texttt{cons}.\nThen we have the correspondence\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{@cons}(x, r) \\;\\widehat{=}\\; \\texttt{[}x\\texttt{]} + r$. \n\\\\[0.2cm]\nConcretely, the list \\texttt{[1,2,3]} is represented as the term\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{@cons(1, @cons(2, @cons(3, @nil())))}.\n\\\\[0.2cm]\nThe programming language \\textsl{Prolog} represents lists internally in a similar form.\n\n\\setlx\\ provides two functions that allow us to extract the components of a term.  Furthermore, there is a\nfunction for constructing terms.  These functions are described next.\n\\begin{enumerate}\n\\item The function \\texttt{fct} returns the functor of a given term.\n      If  $t$ is a term of the form $\\at F(s_1,\\cdots,s_n)$, then the result returned by the expression\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{fct}(\\at F(s_1,\\cdots,s_n))$\n      \\\\[0.2cm]\n      is the functor $F$ of this term.  For example, the expression\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{fct(@cons(1, @cons(2, @cons(3, @nil()))))}\n      \\\\[0.2cm]\n      returns the string  \\texttt{\\symbol{34}cons\\symbol{34}} as its result.\n\\item The function \\texttt{args} returns the arguments of a term.\n      If  $t$ is a term of the form $\\at F(s_1,\\cdots,s_n)$, then\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\mathtt{args}(\\at F(s_1,\\cdots,s_n))$\n      \\\\[0.2cm]\n      returns the list $[s_1, \\cdots, s_n]$. For example, the expression\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{args(\\at cons(1, \\at cons(2, \\at cons(3, \\at nil()))))}\n      \\\\[0.2cm]\n      is evaluated as\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{[1, \\at cons(2, \\at cons(3, \\at nil()))]}.\n\\item If $f$ is the name of a functor and  $l$ is a list, then the function \\texttt{makeTerm} can be invoked as\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $t \\;\\mathtt{:=}\\; \\texttt{makeTerm}(f,l)$.\n      \\\\[0.2cm]\n      This expression generates a term $t$ such that $f$ is the functor and $l$ is the list of its\n      arguments.  
Therefore we have\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\mathtt{fct}(t) = f$  \\quad and \\quad $\\mathtt{args}(t) = l$.\n      \\\\[0.2cm]\n      For example, the expression\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{makeTerm(\\symbol{34}cons\\symbol{34}, [ 1, \\at nil() ])}\n      \\\\[0.2cm]\n      returns the result\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\\at cons(1, \\at nil())}.\n\\end{enumerate}\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    append := procedure(l, x) {\n        if (fct(l) == \"nil\") {\n            return @cons(x, @nil());  \n        }\n        [head, tail] := args(l);\n        return @cons(head, append(tail, x));\n    };\n    l := @cons(1, @cons(2, @cons(3, @nil()))); // corresponds to [1,2,3]\n    print(append(l, 4));\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{Appending an element at the end of a list.}\n\\label{fig:append.stlx}\n\\end{figure}\n\nFigure \\ref{fig:append.stlx} on page \\pageref{fig:append.stlx} shows the\nprogram \\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/append.stlx}{\\texttt{append.stlx}}.\nThis program implements the function \\texttt{append}.  As its first argument, this function takes a list \\texttt{l}\nthat is represented as a term.  As its second argument,  it takes an object \\texttt{x}.  The purpose of the expression\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{append}(\\texttt{l}, \\texttt{x})$\n\\\\[0.2cm]\nis to append the object \\texttt{x} at the end of the list \\texttt{l}.  The implementation of the function \\texttt{append}\nassumes that the list \\texttt{l} is represented as a term using the functors ``\\texttt{cons}'' and ``\\texttt{nil}''.\nFor the list \\texttt{l} defined in the program, the call \\texttt{append(l, 4)} therefore yields the term\n\\texttt{@cons(1, @cons(2, @cons(3, @cons(4, @nil()))))}.\n\\begin{enumerate}\n\\item Line 2 checks whether the list  \\texttt{l} is empty. The list \\texttt{l} is empty iff we have\n      $\\texttt{l} = \\texttt{\\at nil()}$.  In the program we merely check the functor of the term \\texttt{l}.  If the name of this functor is\n      \\texttt{\\symbol{34}nil\\symbol{34}}, then \\texttt{l} is the empty list.\n\\item If \\texttt{l} is not empty, then it must be a term of the form\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\texttt{l} = \\texttt{\\at cons(\\textsl{head}, \\textsl{tail})}$.\n      \\\\[0.2cm]     \n      Then, conceptually \\texttt{head} is the first element of the list \\texttt{l} and \\texttt{tail} is the list of\n      the remaining elements.  In this case, we need to recursively append \\texttt{x} at the end of the list \\texttt{tail}.\n      Finally, the first element of the list \\texttt{l}, which is called \\texttt{head} in line 5, needs\n      to be prepended to the list that is returned from the recursive invocation of \\texttt{append}.\n      This is done in line 6 by constructing the term \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{@cons(head, append(tail, x))}.\n\\end{enumerate}\n
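\nIn the same style, further list functions can be defined on this term representation.  As an illustration, here is a sketch of a function that computes the length of such a list (the name \\texttt{length} is ours and the sketch is untested):\n\\begin{verbatim}\n    length := procedure(l) {\n        if (fct(l) == \"nil\") {\n            return 0;                  // the empty list has length 0\n        }\n        [head, tail] := args(l);       // split into first element and rest\n        return 1 + length(tail);\n    };\n\\end{verbatim}\n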
\n\\subsection{Matching}\nIt would be quite tedious if the functions \\texttt{fct} and \\texttt{args} were the only means to extract the\ncomponents of a term.  Figure \\ref{fig:append-match.stlx} on page \\pageref{fig:append-match.stlx}\nshows the program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/append-match.stlx}{\\texttt{append-match.stlx}}, \nwhich uses \\blue{matching} to implement the function \\texttt{append}.  \nLine 3 checks whether the list  \\texttt{l} is empty, i.e.~whether \\texttt{l} is identical to the term \n\\texttt{@nil()}.  Line 4 is more interesting, as it combines two actions.\n\\begin{enumerate}\n\\item It checks whether the list \\texttt{l} is a term that starts with the functor \\texttt{cons}.\n\\item If \\texttt{l} does indeed start with the functor \\texttt{cons}, the arguments of this functor are\n      extracted and assigned to the variables \\texttt{head} and \\texttt{tail}.\n\\end{enumerate}\nHence, if the \\texttt{match} statement in line 4 is successful, the equation\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$\\texttt{l} = \\texttt{@cons(head, tail)}$\n\\\\[0.2cm]\nholds afterwards.\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    append := procedure(l, x) {\n        match (l) {\n            case @nil():            return @cons(x, @nil());\n            case @cons(head, tail): return @cons(head, append(tail, x));\n        }\n    };\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{Implementing \\texttt{append} using a \\texttt{match} statement.}\n\\label{fig:append-match.stlx}\n\\end{figure}\nIn general, a \\texttt{match} statement has the structure that is shown in Figure \\ref{fig:match}.\nHere, $e$ is any expression that yields a term when evaluated.  The expressions \n$t_1$, $\\cdots$, $t_n$ are so-called \\blue{patterns} that contain variables.  When the \\texttt{match} statement\nis executed, \\textsc{SetlX} tries to bind the variables occurring in the pattern $t_1$ such that the resulting\nexpression is equal to $e$.  If this succeeds, the statements in  $\\textsl{body}_1$ are executed and the\nexecution of the \\texttt{match} statement ends.\nOtherwise, the patterns $t_2$, $\\cdots$, $t_n$ are tried one by one.  If the pattern $t_i$ is successfully\nmatched to $e$, the statements in $\\textsl{body}_i$ are executed and the execution of the \\texttt{match}\nstatement ends.  If none of the patterns $t_1$, $\\cdots$, $t_n$ can be matched with $e$, the statements in\n$\\textsl{body}_{n+1}$ are executed.\n
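\nAs a further illustration of this control structure, consider the following sketch (the procedure name \\texttt{classify} is ours and the sketch is untested):\n\\begin{verbatim}\n    classify := procedure(t) {\n        match (t) {\n            case @nil():         return \"the empty list\";\n            case @cons(x, r):    return \"a nonempty list\";\n            default:             return \"something else\";\n        }\n    };\n\\end{verbatim}\n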
\n\n\\begin{figure}[!ht]\n  \\centering\n\\begin{Verbatim}[ codes         = {\\catcode`_=8\\catcode`^=7},\n                  frame         = lines, \n                  framesep      = 0.3cm, \n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  commandchars  = \\\\\\{\\},\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm\n                ]\n      \\texttt{\\underline{match} (\\(e\\)) \\{}\n          \\texttt{\\underline{case}} \\(t_1\\) : \\textsl{body}\\(_1\\) \n          \\vdots\n          \\texttt{\\underline{case}} \\(t_n\\) : \\textsl{body}\\(_n\\)\n          \\texttt{\\underline{default}:} \\textsl{body}\\(_{n+1}\\)\n      \\texttt{\\}}\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{The structure of a \\texttt{match} statement.}  \\label{fig:match}\n\\end{figure} \n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{Verbatim}[ frame         = lines, \n                  framesep      = 0.3cm, \n                  firstnumber   = 1,\n                  labelposition = bottomline,\n                  numbers       = left,\n                  numbersep     = -0.2cm,\n                  xleftmargin   = 0.8cm,\n                  xrightmargin  = 0.8cm,\n                ]\n    loadLibrary(\"termUtilities\");  \n\n    diff := procedure(t, x) {\n        match (t) {\n            case a + b :\n                return diff(a, x) + diff(b, x);\n            case a - b :\n                return diff(a, x) - diff(b, x);\n            case a * b :\n                return diff(a, x) * b + a * diff(b, x);\n            case a / b :\n                return ( diff(a, x) * b - a * diff(b, x) ) / (b * b);\n            case a ** b :\n                return diff( @exp(b * @ln(a)), x);\n            case @ln(a) :\n                return diff(a, x) / a;\n            case @exp(a) :\n                return diff(a, x) * @exp(a);\n            case v | v == x :\n                return 1;\n            case y | isVariable(y) :  // must be different from x\n                return 0;\n            case n | isNumber(n):   \n                return 0;  \n         }\n    };\n    test := procedure(s) {\n        t := parseTerm(s);\n        v := parseTerm(\"x\");\n        d := diff(t, v);\n        print(\"d/dx($s$) = $d$\\n\");\n    };\n    test(\"x ** x\");\n\\end{Verbatim}\n\\vspace*{-0.3cm}\n\\caption{A function to perform symbolic differentiation.}\n\\label{fig:diff.stlx}\n\\end{figure}\n\n\\noindent\nWe close this section by showing an example that demonstrates the power of matching.\nThe function \\texttt{diff} that is shown in Figure \\ref{fig:diff.stlx} on page \\pageref{fig:diff.stlx} is part\nof the program\n\\href{https://github.com/karlstroetmann/Logic/blob/master/SetlX/diff.stlx}{\\texttt{diff.stlx}}.\nThis function is called with two arguments.\n\\begin{enumerate}\n\\item The first argument \\texttt{t} is a term that represents an arithmetical expression.\n\\item The second argument \\texttt{x} is a term that represents a variable.\n\\end{enumerate}\nThe function \\texttt{diff} interprets its argument \\texttt{t} as a function of the variable\n\\texttt{x}.  We take the \\href{https://en.wikipedia.org/wiki/Derivative}{derivative} of this\nfunction with respect to the variable \\texttt{x}.  
For example, in order to compute the derivative of\nthe function\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n$x \\mapsto x^x$,\n\\\\[0.2cm]\nwe can call the function  \\texttt{diff} as follows:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\texttt{diff(parseTerm(\\symbol{34}x ** x\\symbol{34}), parseTerm(\\symbol{34}x\\symbol{34}));}\n\\\\[0.2cm]\nHere, the function \\texttt{parseTerm} is a function that is defined in the library \\texttt{termUtilities}.\nThis function takes a string as input and converts this string into a term.  In order to use the function\n\\texttt{parseTerm}, we have to load the library that defines it.  This happens in line 1 of Figure\n\\ref{fig:diff.stlx}. \n\nLet us now discuss the implementation of the function \\texttt{diff} in more detail.  \n\\begin{enumerate}\n\\item Line 5 makes use of the fact that the operator ``\\texttt{+}'' can be applied to terms.\n      The result is a term that has the functor ``\\texttt{@@@sum}''.  However, this functor is hidden from the\n      user and only becomes visible when we use the function \\texttt{fct} to expose it.  For example, we can\n      define a term \\texttt{t} as follows:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{t := @f(1) + @g(2);}\n      \\\\[0.2cm]\n      Then \\texttt{t} is a term that is displayed as ``\\texttt{@f(1) + @g(2)}'', but the expression\n      \\texttt{fct(t)} returns the string\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{\"@@@sum\"}.\n      \\\\[0.2cm]\n      There is no need to remember that the internal representation of the operator ``\\texttt{+}'' as a functor\n      is given as the string ``\\texttt{@@@sum}''.  The only thing that you have to keep in mind is\n      the fact that the operator ``\\texttt{+}'' can be applied to terms.  The same is true for the\n      other arithmetical operators ``\\texttt{-}'', ``\\texttt{*}'', ``\\texttt{/}'',\n      ``\\texttt{\\symbol{37}}'', and ``\\texttt{**}''.  Similarly, the logical operators\n      ``\\texttt{\\&\\&}'', ``\\texttt{||}'', ``\\texttt{!}'', ``\\texttt{=>}'', and ``\\texttt{<==>}'' can\n      be used as functors.  Note, however, that the relational operators ``\\texttt{<}'',\n      ``\\texttt{>}'', ``\\texttt{<=}'', ``\\texttt{>=}'' \\colorbox{amethyst}{cannot be used} to\n      combine terms.  Finally, the operators ``\\texttt{==}'' and ``\\texttt{!=}'' can be used to\n      check whether two terms are identical or different, respectively.  Hence, while these\n      operators can be applied to terms, they return a Boolean value, not a term!\n      \n      As the operator ``\\texttt{+}'' can be used as a functor, it can also be used in a pattern.  The\n      pattern \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{a + b}\n      \\\\[0.2cm]\n      matches any term that can be written as a sum.  The derivative of a sum is computed by summing the\n      derivatives of the components of the sum, i.e.~we have\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff\\bigl(f(x) + g(x)\\bigr) = \\diff f(x) + \\diff g(x)$.\n      \\\\[0.2cm]\n      Therefore, the case where the term \\texttt{t} has the form \\texttt{a + b} can be dealt with by\n      recursively computing the derivatives of \\texttt{a} and \\texttt{b} and adding them.  This\n      happens in line 6.\n\\item Line 7 deals with the case where \\texttt{t} is a difference.  
Mathematically, the rule to take the\n      derivative of a difference is\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff\\bigl(f(x) - g(x)\\bigr) = \\diff f(x) - \\diff g(x)$.\n      \\\\[0.2cm]\n      This rule is implemented in line 8.\n\\item Line 9 deals with the case where \\texttt{t} is a product.  The \n      \\href{https://en.wikipedia.org/wiki/Product_rule}{product rule} is\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff\\bigl(f(x) \\cdot g(x)\\bigr) = \\left(\\diff f(x)\\right)\\cdot g(x) + f(x) \\cdot \\left(\\diff g(x)\\right)$.\n      \\\\[0.2cm]\n      This rule is implemented in line 10.\n\\item Line 11 deals with the case where \\texttt{t} is a quotient.  The\n      \\href{https://en.wikipedia.org/wiki/Quotient_rule}{quotient rule} is \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff\\left(\\frac{f(x)}{g(x)}\\right) = \\frac{\\left(\\diff f(x)\\right)\\cdot g(x) - f(x)\n        \\cdot \\left(\\diff g(x)\\right)}{g(x) \\cdot g(x)}$.\n      \\\\[0.2cm]\n      This rule is implemented in line 12.\n\\item Line 13 deals with the case where \\texttt{t} is a power.  Now in order to take the derivative of an\n      expression of the form\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds f(x)^{g(x)}$\n      \\\\[0.2cm]\n      we first need to rewrite it using the following trick:\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds f(x)^{g(x)} = \\exp\\bigl(\\ln\\bigl(f(x)^{g(x)}\\bigr)\\bigr) = \\exp\\bigl(g(x) \\cdot \\ln(f(x))\\bigr)$.\n      \\\\[0.2cm]\n      Then, we can recursively call \\texttt{diff} for this expression.  This works, because the function\n      \\texttt{diff} can deal with both the exponential function $x \\mapsto \\exp(x)$ and with the natural\n      logarithm $x \\mapsto \\ln(x)$.  This rewriting is done in line 14.\n\\item Line 15 deals with the case where \\texttt{t} has the form $\\ln\\bigl(f(x)\\bigr)$.  \n      In order to take the derivative of this expression, we first need to know the derivative of the natural\n      logarithm.  This derivative is given as \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff \\ln(x) = \\frac{1}{x}$.\n      \\\\[0.2cm]\n      Then, using the \\href{https://en.wikipedia.org/wiki/Chain_rule}{chain rule} we have that\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff \\ln\\bigl(f(x)\\bigr) = \\frac{\\diff f(x)}{f(x)}$.\n      \\\\[0.2cm]\n      This rule is used in line 16.\n\\item Line 17 deals with the case where \\texttt{t} has the form $\\exp\\bigl(f(x)\\bigr)$.  \n      In order to take the derivative of this expression, we first need to know the derivative of the \n      \\href{https://en.wikipedia.org/wiki/Exponential_function}{exponential function}.  This derivative is given as \n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff \\exp(x) = \\exp(x)$.\n      \\\\[0.2cm]\n      Then, using the \\href{https://en.wikipedia.org/wiki/Chain_rule}{chain rule} we have that\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\diff \\exp\\bigl(f(x)\\bigr) = \\left(\\diff f(x)\\right) \\cdot \\exp\\bigl(f(x)\\bigr)$.\n      \\\\[0.2cm]\n      This rule is used in line 18.\n\\item Line 19 deals with the case where \\texttt{t} is a variable and happens to be the same variable as\n      \\texttt{x}.  This is checked using the condition\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      \\texttt{v == x}\n      \\\\[0.2cm]\n      that is attached using the \\blue{condition operator} ``\\texttt{|}''.   
Since we have\n      \\\\[0.2cm]\n      \\hspace*{1.3cm}\n      $\\ds \\frac{\\mathrm{d}x}{\\mathrm{d}x} = 1$,\n      \\\\[0.2cm]\n      the function \\texttt{diff} returns \\texttt{1} in this case.\n\\item Line 21 deals with the case where \\texttt{t} is a variable.  As line 19 has already covered the case that\n      \\texttt{t} and \\texttt{x} are the same variable, in this case the variable \\texttt{x} must be different\n      from \\texttt{t}.  Therefore, with respect to \\texttt{x} the term \\texttt{t} can be seen as a constant and\n      the derivative is \\texttt{0}.\n\\item Line 23 covers the case where \\texttt{t} is a number.  Note how we call \\texttt{isNumber}\n      after the condition operator ``\\texttt{|}''.  As a number is a constant, the derivative is \\texttt{0}.\n\\item Line 27 defines the procedure \\texttt{test}.  This procedure takes a string \\texttt{s} and transforms it\n      into the term \\texttt{t} via the function \\texttt{parseTerm} defined in the library\n      \\texttt{termUtilities}.  Similarly, the string \\texttt{\"x\"} is transformed into the term \\texttt{v} that\n      represents this variable.\\footnote{Internally, this variable is represented as the term\n      ``\\texttt{@@@variable(\"x\")}''.}\n      Line 30 calls the function \\texttt{diff} using the term \\texttt{t} and the variable \\texttt{v}\n      as arguments.  The resulting term is printed in line 31.\n\\item Line 33 shows how the function \\texttt{test} can be called to compute the derivative $\\diff x^x$.\n      The term that is printed is not simplified, but it is mathematically equivalent to the expected result\n      $\\diff x^x = x^x \\cdot (\\ln(x) + 1)$.\n\\end{enumerate}\n\\pagebreak\n\n\\section{Outlook}\nThis introductory chapter covers only a small part of the programming language  \\textsc{SetlX}.  There are some\nadditional features of \\setlx\\ that will be discussed in the following chapters as we need them.\nFurthermore,  \\textsc{SetlX} is discussed in depth in the tutorial that can be found at the following address:\n\\\\[0.2cm]\n\\hspace*{1.3cm}\n\\href{http://download.randoom.org/setlX/tutorial.pdf}{\\texttt{http://download.randoom.org/setlX/tutorial.pdf}}\n\n\n\\remarkEng\nMost of the algorithms that were presented in this chapter are not very efficient.  The main purpose of these\nalgorithms is to serve as examples; they were presented for two reasons:\n\\begin{enumerate}\n\\item My first intention was to make the abstract notions introduced in set theory more accessible.  For\n      example, the program to compute the transitive closure serves to illustrate both the notion of the\n      relational product and the transitive closure.  Furthermore, it shows how these notions are useful in\n      solving real world problems.\n\\item Second, these programs serve to introduce the programming language \\setlx.\n\\end{enumerate}\nLater, the lecture on\n\\href{https://github.com/karlstroetmann/Algorithms/blob/master/Lecture-Notes/algorithms.pdf}{algorithms}\nwill show how to develop algorithms that are more efficient.\n\\vspace*{0.3cm}\n\n\n\\section{Reflection}\nAfter having completed this chapter, you should be able to answer the following questions.\n\\begin{enumerate}[(a)]\n\\item Which data types are supported in \\textsc{SetlX}?\n\\item What are the different methods to define a set in \\textsc{SetlX}?\n\\item Do you understand how to construct lists via iterators? 
\n\\item How can lists be defined in \\textsc{SetlX}?\n\\item How does \\textsc{SetlX} support binary relations?\n\\item How do list slicing and list indexing work?\n\\item How does \\textsc{SetlX} support terms?\n\\item How does a fixed-point algorithm work?\n\\item What type of control structures are supported in \\textsc{SetlX}?\n\\item How can terms be defined and how does matching for terms work?\n\\end{enumerate}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"logic\"\n%%% End: \n\n", "meta": {"hexsha": "6605ac1cf29abc0eee257fe1e1150bb1425ae260", "size": 148139, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-Notes-SetlX/setlx.tex", "max_stars_repo_name": "BuserLukas/Logic", "max_stars_repo_head_hexsha": "cc0447554cfa75b213a10a2db37ce82c42afb91d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2019-10-03T13:25:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-26T11:49:25.000Z", "max_issues_repo_path": "Lecture-Notes-SetlX/setlx.tex", "max_issues_repo_name": "BuserLukas/Logic", "max_issues_repo_head_hexsha": "cc0447554cfa75b213a10a2db37ce82c42afb91d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2015-01-14T15:36:24.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-21T02:13:23.000Z", "max_forks_repo_path": "Lecture-Notes-SetlX/setlx.tex", "max_forks_repo_name": "BuserLukas/Logic", "max_forks_repo_head_hexsha": "cc0447554cfa75b213a10a2db37ce82c42afb91d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:05:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-10T19:44:15.000Z", "avg_line_length": 45.4135499693, "max_line_length": 147, "alphanum_fraction": 0.6380899021, "num_tokens": 47124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8221891370573388, "lm_q1q2_score": 0.5694827144032325}}
{"text": "\n\\subsection{Using IVs}\n\nLet's say the demand function is:\n\n\\(Q_d=\\alpha+\\beta P + \\epsilon \\)\n\nHow can we estimate this?\n\nOLS will give biased results if \\(P\\) is correlated with \\(\\epsilon \\).\n\nWe can estimate it if we have an instrumental variable for \\(P\\).\n\n\\subsection{Power of IVs}\n\nWe need variation in the instrument.  If a factor is very important, there may be little price movement, and it may therefore be hard to estimate that effect.\n\n", "meta": {"hexsha": "05c8cb402ce83fe5cc9c66a1a5766afcdb3e4bdf", "size": 386, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/economics/econometricsAggregate/03-01-exogeneity.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/economics/econometricsAggregate/03-01-exogeneity.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/economics/econometricsAggregate/03-01-exogeneity.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.4444444444, "max_line_length": 96, "alphanum_fraction": 0.7202072539, "num_tokens": 98, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8031738057795403, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5694656009910383}}
{"text": "\\subsection{Implementing Statistics Calculator in C} % (fold)\n\\label{sub:implementing_statistics_calculator_in_c}\n\n\\sref{sec:arrays_using_these_concepts} of this Chapter introduced the Statistics Calculator. A partial implementation of this program is shown in Listing \\ref{lst:c-stats-calc}, with the logic in the \\texttt{max} and \\texttt{variance} functions still to be implemented. This program reads a number of values from the user into an array, and then calculates and outputs the \\textbf{sum}, \\textbf{mean}, \\textbf{variance}, and \\textbf{maximum} value from this data.\n\n\\straightcode{\\ccode{lst:c-stats-calc}{C code for the Statistics Calculator}{code/c/array/simple-stats.c}}\n\n\\mynote{\n\\begin{itemize}\n  \\item \\texttt{strings.h} is included to give access to the various functions needed to manipulate string values. See the comments associated with \\lref{clst:populate_array}.\n  \\item \\texttt{math.h} is included to give access to the \\texttt{pow} function that will be needed in the implementation of the \\texttt{variance} function.\n  \\item Arrays in C are always passed by reference.\n  \\item C does not keep track of the size of an array, the \\texttt{size} parameter in each function call carries this data along with the array.\n  \\item The \\texttt{DATA\\_SIZE} constant stores the number of values that will be stored in the array. This can easily be changed to allow the program to read a different number of values.\n\\end{itemize}\n}\n\n% subsection implementing_statistics_calculator_in_c (end)\n", "meta": {"hexsha": "500f61baecba4bfa94d4a107679a78ad019d6535", "size": 1508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "topics/arrays/c/c-stats-calc.tex", "max_stars_repo_name": "thoth-tech/programming-arcana", "max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z", "max_issues_repo_path": "topics/arrays/c/c-stats-calc.tex", "max_issues_repo_name": "thoth-tech/programming-arcana", "max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z", "max_forks_repo_path": "topics/arrays/c/c-stats-calc.tex", "max_forks_repo_name": "macite/programming-arcana", "max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z", "avg_line_length": 79.3684210526, "max_line_length": 463, "alphanum_fraction": 0.7871352785, "num_tokens": 370, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5694655910420631}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[intlimits]{amsmath}\n\\usepackage{MnSymbol}\n%\\usepackage{fullpage}\n\\usepackage[top=1in, bottom=1in, left=0.8in, right=1in]{geometry}\n\\usepackage{multicol}\n\\usepackage{wrapfig}\n\n\\newcommand{\\Sp}{\\text{ }}\n\\newcommand{\\Var}{\\text{Var}}\n\\newcommand{\\Bin}{\\text{Bin}}\n\\newcommand{\\Poi}{\\text{Poi}}\n\\newcommand{\\HypG}{\\text{HypG}}\n\\newcommand{\\Geo}{\\text{Geo}}\n\\newcommand{\\Exp}{\\text{Exp}}\n\\newcommand{\\Norm}{\\text{N}}\n\\newcommand{\\Where}{\\Sp\\text{ where }}\n\n\\setlength{\\columnsep}{0.1pc}\n\n\\title{Random Variables}\n\\author{Stephen Koo}\n\n\\begin{document}\n\\maketitle\n\\vspace{-0.3in}\n\\rule{\\linewidth}{0.4pt}\n\n\\section{Discrete Random Variables}\n\\subsection{Bernoulli}\nAn experiment that results in \"success\" or \"failure.\"\n\\begin{align*}\n    X& \\sim \\text{Ber}(p) \\\\\n    P(X = 0)& = 1 - p \\\\\n    P(X = 1)& = p \\\\\n        E[X]& = p \\\\\n     \\Var(X)& = p(1-p) \\\\\n        M(t)& = e^t p + 1 - p\n\\end{align*}\n\n\\subsection{Binomial}\nThe number of successes in an experiment with $n$ trials and $p$ probability of success on each trial.\n\\begin{align*}\n    X& \\sim \\Bin(n,p) \\\\\n    P(X = i) = p(i)& = \\binom{n}{i} p^i (1-p)^{n-i} \\Where i = 0,1,\\ldots, n\\\\\n               E[X]& = np \\\\\n            \\Var(X)& = np(1-p) \\\\\n               M(t)& = (pe^t + 1 - p)^n\n\\end{align*}\n\\\\\nIf $X_i \\sim \\Bin(n_i, p)$ for $1 \\leq i \\leq N$ are independent, then\n\\[\n    \\left( \\sum_{i=1}^N X_i \\right) \\sim \\Bin\\left(\\sum_{i=1}^N n_i, p \\right)\n\\]\nNote that the binomial distribution is a generalization of the Bernoulli distribution, since $\\text{Ber}(p) \\sim \\Bin(1, p)$.\n\n\\subsection{Poisson}\nApproximates the binomial random variable when $n$ is large and $p$ is small enough to make $np$ \"moderate\"---generally when $n > 20$ and $p < 0.05$---and the approximation becomes exact as $n \\rightarrow \\infty$ and $p \\rightarrow 0$ with $np$ held constant.\n\\begin{align*}\n    X& \\sim \\Poi(\\lambda) \\Where \\lambda = np \\\\\n    P(X = i)& = e^{-\\lambda} \\frac{\\lambda^i}{i!} \\Where i = 0,1,2,\\ldots\\\\\n        E[X]& = \\lambda \\\\\n     \\Var(X)& = \\lambda \\\\\n        M(t)& = e^{\\lambda (e^t - 1)}\n\\end{align*}\nThe approximation also works to a certain extent when the successes in the trials are not entirely independent, and when the probability of success in each trial varies slightly. 
\\\\\n\\\\\nIf $X_i \\sim \\Poi(\\lambda_i)$ for $1 \\leq i \\leq N$, then\n\\[\n    \\left( \\sum_{i=1}^N X_i \\right) \\sim \\Poi\\left(\\sum_{i=1}^N \\lambda_i \\right)\n\\]\n\n\\subsection{Geometric}\nThe number of independent trials until a success, where the probability of success is $p$.\n\\begin{align*}\n    X& \\sim \\Geo(p) \\\\\n    P(X = n)& = (1-p)^{n-1} p \\Where n = 1, 2, \\ldots\\\\\n        E[X]& = 1/p \\\\\n     \\Var(X)& = (1-p)/p^2\n\\end{align*}\n\n\\subsection{Negative Binomial}\nThe number of independent trials until $r$ successes, with probability $p$ of success.\n\\begin{align*}\n    X& \\sim \\text{NegBin}(r, p) \\\\\n    P(X = n)& = \\binom{n-1}{r-1} p^r (1-p)^{n-r} \\Where n = r, r+1, \\ldots \\\\\n        E[X]& = r / p \\\\\n     \\Var(x)& = r(1-p)/p^2 \\\\\n     \\Geo(p)& \\sim \\text{NegBin}(1, p)\n\\end{align*}\nNote that the negative binomial distribution generalizes the geometric distribution, with $\\Geo(p) \\sim \\text{NegBin}(1, p)$.\n\n\\subsection{Hypergeometric}\nThe number of white balls drawn after drawing $n$ balls (without replacement) from an urn containing $N$ balls, with $m$ white balls and $N-m$ other (\"black\") balls.\n\\begin{align*}\n    X& \\sim \\HypG(n,N,m) \\\\\n    P(X = i)& = \\frac{\\binom{m}{i} \\binom{N-m}{n-i}}{\\binom{N}{n}} \\Where i = 0, 1, \\ldots, n \\\\\n        E[X]& = n(m/N) \\\\\n     \\Var(X)& = \\frac{nm(N-n)(N-m)}{N^2(N-1)} \\\\\n\\HypG(n,N,m)& \\rightarrow Bin(n, m/N) \\text{ , as } N \\rightarrow \\infty \\text{ and } m/N \\text{ stays constant}\n\\end{align*}\n\n\\subsection{Multinomial}\nThe multinomial distribution further generalizes the binomial distribution: given an experiment with $n$ independent trials, where each trial results in one of $m$ outcomes, with respective probabilities $p_1, p_2, \\ldots, p_m$ such that $\\sum_{i=1}^{m} p_i = 1$, then if $X_i$ denotes the number of trials with outcome $i$ we have\n$$P(X_1 = c_1, X_2 = c_2, \\ldots, X_m = c_m) = \\binom{n}{c_1, c_2, \\ldots, c_m} p_1^{c_1} p_2^{c_2} \\cdots p_m^{c_m}$$\nwhere $\\sum_{i=1}^{m} c_i = n$ and $\\binom{n}{c_1, c_2, \\ldots, c_m} = \\frac{n!}{c_1! c_2! \\cdots c_m!}$.\n\n\\section{Continuous Random Variables}\nIf $Y$ is a non-negative continuous random variable\n$$E[Y] = \\int_{0}^{\\infty} P(Y > y) dy$$\n\\subsection{Uniform}\n\\begin{align*}\n    X& \\sim \\text{Uni}(\\alpha, \\beta) \\\\\n f(x)& = \\left\\{\n        \\begin{array}{ll}\n            \\frac{1}{\\beta - \\alpha} & \\alpha \\leq x \\leq \\beta \\\\\n                                   0 &\\text{otherwise}\n        \\end{array}\n    \\right. 
\\\\\n    E[X]& = \\frac{\\alpha + \\beta}{2} \\\\\n \\Var(X)& = \\frac{(\\beta - \\alpha)^2}{12}\n\\end{align*}\n\n\\subsection{Normal}\nFor values in common natural phenomena, especially when resulting from the sum of multiple variables.\n\\begin{align*}\n    X& \\sim \\Norm(\\mu, \\sigma^2) \\\\\n f(x)& = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}} \\Where -\\infty < x < \\infty \\\\\n E[X]& = \\mu \\\\\n    \\Var(X)& = \\sigma^2 \\\\\n       M(t)& = e^{\\left( \\frac{\\sigma^2 t^2}{2} + \\mu t\\right)}\n\\end{align*}\nLetting $X \\sim N(\\mu, \\sigma^2)$ and $Y = aX + b$, we have\n\\begin{align*}\n    Y& \\sim \\Norm(a\\mu + b, a^2\\sigma^2) \\\\\n     F_Y(x)& = F_X(\\frac{x - b}{a})\n\\end{align*}\nThe Standard (Unit) Normal Random Variable $Z \\sim N(0, 1)$ has a cumulative distribution function (CDF) commonly labeled $\\Phi(z) = P(Z \\leq z)$ that has some useful properties.\n\\begin{align*}\n    \\Phi(z)& = \\int_{-\\infty}^{z} \\frac{1}{\\sqrt{2\\pi}} e^{-x^2/2} dx \\\\\n   \\Phi(-z)& = 1 - \\Phi(z)\\\\\nP(Z \\geq -z)& = P(Z > z)\n\\end{align*}\nGiven $X \\sim N(\\mu, \\sigma^2)$ where $\\sigma > 0$, we can then compute the CDF of $X$ using the CDF of the standard normal variable.\n$$F_X(x) = \\Phi(\\frac{x - \\mu}{\\sigma})$$\nBy the de Moivre-Laplace Limit Theorem, the normal variable can approximate the binomial when $\\Var(X) = np(1-p) \\geq 10$. If we let $S_n$ denote the number of successes (with probability $p$) in $n$ independent trials, then\n$$ P \\left( a \\leq \\frac{S_n - np}{\\sqrt{np(1-p)}} \\leq  b \\right) \\overset{n \\rightarrow \\infty}{\\rightarrow} \\Phi(b) - \\Phi(a)$$\n\\\\\nIf $X_i \\sim \\Norm(\\mu_i, \\sigma_i^2)$ for $i = 1, 2, \\ldots, n$, then\n\\[\n    \\left( \\sum_{i=1}^n X_i \\right) \\sim \\Norm\\left( \\sum_{i=1}^n \\mu_i, \\sum_{i=1}^n \\sigma_i^2 \\right)\n\\]\n\n\\subsection{Exponential}\nRepresents time until some event, with rate $\\lambda > 0$.\n\\begin{align*}\n    X& \\sim \\Exp(\\lambda) \\\\\n f(x)& = \\left\\{\n        \\begin{array}{ll}\n            \\lambda e^{-\\lambda x} &\\text{if } x \\geq 0 \\\\\n                                 0 &\\text{if } x < 0\n        \\end{array}\n    \\right. \\\\\n    E[X]& = \\frac{1}{\\lambda} \\\\\n \\Var(X)& = \\frac{1}{\\lambda^2} \\\\\n    F(x)& = 1 - e^{-\\lambda x} \\Where x \\geq 0\n\\end{align*}\nExponentially distributed random variables are memoryless.\n$$P(X > s + t | X > s) = P(X > t)$$\n\n\\subsection{Beta}\n\\begin{align*}\n    X& \\sim \\text{Beta}(a, b) \\\\\n f(x)& = \\left\\{\n        \\begin{array}{ll}\n            \\frac{1}{B(a,b)}x^{a-1}(1-x)^{b-1} &0 < x < 1 \\\\\n                                             0 &\\text{otherwise}\n        \\end{array}\n    \\right. 
\\\\\n    B(a,b)& = \\int_{0}^{1} x^{a-1} (1-x)^{b-1} dx \\\\\n      E[X]& = \\frac{a}{a+b} \\\\\n   \\Var(X)& = \\frac{ab}{(a+b)^2(a+b+1)}\n\\end{align*}\nIf $X \\sim \\text{Uni}(0, 1)$ and $N$ denotes the number of heads resulting from a number of coin flips with some unknown probability of getting heads, then\n$$X|(N = n, m+n \\text{ trials}) \\sim \\text{Beta}(n + 1, m+ 1)$$\n\n\\end{document}\n\n", "meta": {"hexsha": "eccebbb7d5ecc0558fb182b9a3b3f51aec86e35e", "size": 7581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "distros.tex", "max_stars_repo_name": "kashizui/Stanford-CS109-Notes", "max_stars_repo_head_hexsha": "e0e0ab8fbc1afbe0711d23b8be4b0c3d4be09cb2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-02-24T23:32:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T14:54:10.000Z", "max_issues_repo_path": "distros.tex", "max_issues_repo_name": "kashizui/Stanford-CS109-Notes", "max_issues_repo_head_hexsha": "e0e0ab8fbc1afbe0711d23b8be4b0c3d4be09cb2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "distros.tex", "max_forks_repo_name": "kashizui/Stanford-CS109-Notes", "max_forks_repo_head_hexsha": "e0e0ab8fbc1afbe0711d23b8be4b0c3d4be09cb2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2015-06-06T06:07:22.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-25T19:57:35.000Z", "avg_line_length": 40.1111111111, "max_line_length": 331, "alphanum_fraction": 0.586202348, "num_tokens": 2812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879991, "lm_q2_score": 0.8031738057795403, "lm_q1q2_score": 0.5694655861759824}}
{"text": "\\vfill \\eject\n\\par\n\\section{Serial Solution of $A X = Y$ using an $QR$ factorization}\n\\label{section:QR-serial}\n\\par\nLet us review the steps is solving $A X = Y$ using a $QR$\nfactorization.\n\\begin{itemize}\n\\item \n{\\bf communicate} the data for the problem as $A$, $X$ and $Y$.\n\\item \n{\\bf reorder} as\n${\\widetilde A} {\\widetilde X} = Y$, where\n${\\widetilde A} = A P^T$ and\n${\\widetilde X} = P X$.\nand $P$ is a permutation matrix.\n\\item \n{\\bf factor} $ {\\widetilde A} = Q R$,\nwhere $Q$ is orthogonal and $R$ is upper triangular.\n\\item \n{\\bf solve}  $R^T R (P X) = A^T Y$ (if real)\nor {\\bf solve}  $R^H R (P X) = A^H Y$ (if complex).\n\\end{itemize}\n\\par\nA complete listing of a sample program \nis found in Section~\\ref{section:QR-serial-driver}.\nWe will now begin to\nwork our way through the program to illustrate the use \nof {\\bf SPOOLES} to solve a system of linear equations.  \n\\par\n\\subsection{Reading the input parameters}\n\\label{subsection:QR:input-data}\n\\par\nThe input parameters are identical to those of the serial $LU$\ndriver program described in\nSection~\\ref{subsection:serial:input-data}\nwith the exception that the {\\tt symmetryflag} is not present.\n\\par\n\\subsection{Communicating the data for the problem}\n\\label{subsection:QR:communicating-data}\n\\par\nThis step is identical to the serial code, as described in\nSection~\\ref{subsection:serial:communicating-data}\n\\par\n\\subsection{Reordering the linear system}\n\\label{subsection:QR:reordering}\nFor the $LU$ factorization of $A$, we used the graph of $A + A^T$.\nFor the $QR$ factorization of $A$, we need the graph of $A^TA$.\nThe only difference between the two orderings is how we create\nthe {\\tt IVL} object for the graph.\nFor the $QR$ factorization, we use \n{\\tt InpMtx\\_adjForATA()}, as we see below.\n\\begin{verbatim}\nadjIVL = InpMtx_adjForATA(mtxA) ;\nnedges = IVL_tsize(adjIVL) ;\ngraph = Graph_new() ;\nGraph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,\n            NULL, NULL) ;\nfrontETree = orderViaMMD(graph, seed, msglvl, msgFile) ;\n\\end{verbatim}\nThe minimum degree method is the simplest of the ordering methods\nprovided in the {\\bf SPOOLES} library.\nFor more information on ordering, please see the user document\n{\\it ``Ordering Sparse Matrices and Transforming Front Trees''}.\n\\par\n\\subsection{Non-numeric work}\n\\label{subsection:QR:non-numeric}\n\\par\nThe next phase is to obtain the permutation matrix $P$, (stored\nimplicitly in a permutation vector), and apply it to the matrix $A$.\nThis is done by the following code fragment.\n\\begin{verbatim}\noldToNewIV = ETree_oldToNewVtxPerm(frontETree) ;\noldToNew   = IV_entries(oldToNewIV) ;\nnewToOldIV = ETree_newToOldVtxPerm(frontETree) ;\nnewToOld   = IV_entries(newToOldIV) ;\nInpMtx_permute(mtxA, NULL, oldToNew)) ;\nInpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ;\n\\end{verbatim}\nThe {\\tt oldToNewIV} and {\\tt newToOldIV} variables are {\\tt IV}\nobjects that represent an integer vector.\nThe {\\tt oldToNew} and {\\tt newToOld} variables are pointers to\n{\\tt int}, which point to the base address of the {\\tt int} vector\nin an {\\tt IV} object.\nOnce we have the permutation vector, we apply it to the front tree,\nby the {\\tt ETree\\_permuteVertices()} method. 
\nWe need $A P^T$, so we permute the {\\tt InpMtx}\nobject using a {\\tt NULL} pointer for the row permutation (which\nmeans do not permute the rows) and the {\\tt oldToNew} vector for\nthe column permutation.\nAt this point the {\\tt InpMtx} object holds $AP^T$ in the form\nrequired by the factorization.\n\\par\nThe final steps are to compute the symbolic factorization,\nwhich is stored in an {\\tt IVL} object, and to permute the vertices\nin the front tree.\nThe symbolic factorization differs slightly from the $LU$ case.\n\\begin{verbatim}\nsymbfacIVL = SymbFac_initFromGraph(frontETree, graph) ;\nIVL_overwrite(symbfacIVL, oldToNewIV) ;\nIVL_sortUp(symbfacIVL) ;\nETree_permuteVertices(frontETree, oldToNewIV) ;\n\\end{verbatim}\nWe do not have the $A^TA$ matrix object, so we constuct the\nsymbolic factorization using the front tree and the {\\tt Graph} object.\nNote, at this point in time, both the graph and front tree are in\nterms of the original ordering, so after the {\\tt IVL} object is\ncreated, its vertices must be mapped into the new permutation and\nsorted into ascending order.\nThen the vertices in the front tree are mapped into the new ordering.\n\\par\n\\subsection{The Matrix Factorization}\n\\label{subsection:QR:factor}\n\\par\nThe numeric factorization step begins by initializing the {\\tt\nFrontMtx} object with the {\\tt frontETree} and {\\tt symbacIVL} \nobjects created in early steps.\nThe {\\tt FrontMtx} object holds the actual factorization.\nThe code segment for the initialization is found below.\n\\begin{verbatim}\nfrontmtx = FrontMtx_new() ;\nmtxmanager = SubMtxManager_new() ;\nSubMtxManager_init(mtxmanager, NO_LOCK, 0) ;\nif ( type == SPOOLES_REAL ) {\n   FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, \n                 SPOOLES_SYMMETRIC, FRONTMTX_DENSE_FRONTS, \n                 SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,\n                 mtxmanager, msglvl, msgFile) ;\n} else {\n   FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, \n                 SPOOLES_HERMITIAN, FRONTMTX_DENSE_FRONTS, \n                 SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,\n                 mtxmanager, msglvl, msgFile) ;\n}\n\\end{verbatim}\nThis differs little from the initialization in\nSection~\\ref{subsection:serial:factor}, except that the matrix type\nis symmetric or Hermitian, and no pivoting is used for stability.\n\\par\nThe numeric factorization is performed by the \n{\\tt FrontMtx\\_QR\\_factor()} method.  
\nThe code segment from the sample program for the numerical\nfactorization step is found below.\n\\begin{verbatim}\nchvmanager = ChvManager_new() ;\nChvManager_init(chvmanager, NO_LOCK, 1) ;\nDVzero(10, cpus) ;\nfacops = 0.0 ;\nFrontMtx_QR_factor(frontmtx, mtxA, chvmanager, cpus, &facops, msglvl, msgFile) ;\nChvManager_free(chvmanager) ;\n\\end{verbatim}\nWorking storage used during the factorization is found in the form\nof block {\\it chevrons}, in a {\\tt Chv} object, which hold the partial \nfrontal matrix for a front.\nMuch as with the {\\tt SubMtx} object, the {\\tt FrontMtx} object does\nnot concern itself with managing working storage, instead it relies\non a {\\tt ChvManager} object to manage the {\\tt Chv} objects.\nOn return {\\tt facops} contains the number of floating point\noperations performed during the factorization.\n\\par\nThe factorization is performed using a one-dimensional\ndecomposition of the factor matrices.\nKeeping the factor matrices in this form severely limits the amount\nof parallelism for the forward and backsolves.\nWe perform a post-processing step to convert the one-dimensional\ndata structures to submatrices of a two-dimensional block\ndecomposition of the factor matrices.\nThe following code fragment performs this operation.\n\\begin{verbatim}\nFrontMtx_postProcess(frontmtx, msglvl, msgFile) ;\n\\end{verbatim}\n\\par\n\\subsection{Solving the linear system}\n\\label{subsection:QR:solve}\n\\par\nThe following code fragment solves the linear system \n$R^T R {\\widehat X} = {\\widehat A}^T Y$ if real\nor\n$R^H R {\\widehat X} = {\\widehat A}^H Y$ if complex.\n\\begin{verbatim}\nmtxX = DenseMtx_new() ;\nDenseMtx_init(mtxX, type, 0, 0, neqns, nrhs, 1, neqns) ;\nFrontMtx_QR_solve(frontmtx, mtxA, mtxX, mtxB, mtxmanager,\n                  cpus, msglvl, msgFile) ;\n\\end{verbatim}\nLast, we permute the rows of ${\\tt widehat X}$ back into $X$.\n\\begin{verbatim}\nDenseMtx_permuteRows(mtxX, newToOldIV) ;\n\\end{verbatim}\n\\par\n\\subsection{Sample Matrix and Right Hand Side Files}\n\\label{subsection:QR:input-files}\n\\par\nImmediately below are two sample files:\n{\\tt qr.matrix.input} holds the matrix input\nand {\\tt qr.rhs.input} holds the right hand side.\nThis simple example is an $8 \\times 6$ matrix $A$\nand a single right hand side.\nThe solution is the vector of all ones.\nNote how the indices are zero-based as for C, instead of one-based\nas for Fortran.\n\\begin{center}\n\\begin{tabular}{|l|}\n\\multicolumn{1}{c}{\\tt matrix.input} \\\\ \\hline\n\\begin{minipage}[t]{0.5 in}\n\\begin{verbatim}\n8 6 18\n0 1 1.0\n0 3 2.0\n1 2 3.0\n1 3 1.0\n1 5 1.0\n2 0 1.0\n2 2 2.0\n3 0 3.0\n3 2 4.0\n3 4 2.0\n4 3 1.0\n5 1 2.0\n5 4 3.0\n5 5 1.0\n6 0 2.0\n6 3 3.0\n7 1 1.0\n7 4 3.0\n\\end{verbatim}\n\\end{minipage}\n\\\\ \\hline\n\\end{tabular}\n\\qquad\n\\begin{tabular}{|l|}\n\\multicolumn{1}{c}{\\tt rhs.input} \\\\ \\hline\n\\begin{minipage}[t]{0.5 in}\n\\begin{verbatim}\n8 1\n0 3.0\n1 5.0\n2 3.0\n3 9.0\n4 1.0\n5 6.0\n6 5.0\n7 4.0\n\\end{verbatim}\n\\end{minipage}\n\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\par\n", "meta": {"hexsha": "b339a633348f49f74c32ec33fe2a03d2c332dbbf", "size": 8492, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial.tex", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial.tex", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial.tex", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "avg_line_length": 33.968, "max_line_length": 80, "alphanum_fraction": 0.7390485163, "num_tokens": 2650, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569016, "lm_q2_score": 0.7090191214879991, "lm_q1q2_score": 0.5694655794951514}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{Sheet 8}\n\n\\subsection{Acceleration in Rindler coordinates}\n\nRindler coordinates for the Minkowski (1, 1) space, originally parametrized with \\((t, x)\\), are defined by \n%\n\\begin{subequations}\n\\begin{align}\n  \\begin{cases}\n      t = \\rho \\sinh \\eta \\\\\n      x = \\rho \\cosh \\eta \n  \\end{cases}\n\\,.\n\\end{align}\n\\end{subequations}\n\nThe metric is given by \n%\n\\begin{align}\n  \\dd{s^2} = - \\dd{t^2} + \\dd{x^2} =\n  - \\rho^2 \\dd{\\eta^2} + \\dd{\\rho^2}\n\\,.\n\\end{align}\n\nWe defined Rindler coordinates so that our uniformly accelerated observer would have velocity only along \\(\\eta \\), so we can find the general expression of the 4-velocity by computing the normalization: since it will look like \\(u^{\\alpha } = (N, 0)\\) for some \\(N\\) and we want \\(u^{\\alpha } u_{\\alpha } = -1\\), we get that \\(N^2(-\\rho^2) = -1 \\), which means \\(N = 1/ \\rho \\). \n\nThe only nonzero Christoffel symbols are \\(\\Gamma^{\\eta }_{\\eta \\rho } = 1/\\rho \\) and \\(\\Gamma^{\\rho }_{\\eta \\eta }= \\rho \\). \n\nThe computation of the acceleration then can be started: \n%\n\\begin{subequations}\n\\begin{align}\n  \\dv{}{\\tau } u^{\\mu } &= u^{\\nu } \\nabla_{\\nu } u^{\\mu }  \\\\\n  &= u^{\\eta } \\qty(\\cancelto{}{\\partial_{\\eta } u^{\\mu }} + \\Gamma^{\\mu }_{\\eta \\nu } u^{\\nu })  \\\\\n  &= \\frac{1}{\\rho^2} \\Gamma^{\\mu }_{\\eta \\eta }\n\\,,\n\\end{align}\n\\end{subequations}\n%\nso the only nonzero component is the one where \\(\\mu = \\rho \\), which is \\(a^{\\rho } = 1/\\rho \\), or in other words\n%\n\\begin{subequations}\n\\begin{align}\n  a^{\\mu } = \\left[\\begin{array}{c}\n  0 \\\\ \n  \\kappa \n  \\end{array}\\right]\n\\,,\n\\end{align}\n\\end{subequations}\n%\nwhere \\(\\kappa = 1/ \\rho \\) is the modulus of the 4-acceleration. Indeed \n%\n\\begin{align}\n  a^{\\mu } a^{\\nu } g_{\\mu \\nu } = \\kappa^2 g_{\\rho \\rho } = \\kappa^2\n\\,.\n\\end{align}\n%\n\n\\subsection{Acceleration in Schwarzschild motion}\n\n\\subsubsection{Acceleration computation}\n\nThe acceleration is given in general by \n%\n\\begin{align}\n  a^{\\mu } = u^{\\nu } \\qty(\\partial_{\\nu } u^{\\mu } + \\Gamma^{\\mu  }_{\\nu \\rho } u^{\\rho })\n\\,,\n\\end{align}\n%\nand we can see that in this formula we can only have a nonzero result if \\(\\mu = t, r, \\varphi \\), \\(\\nu = t, \\varphi \\) and \\(\\rho = t, \\varphi \\). 
\n\nNow we use the following facts: the velocity is both stationary (\\(\\partial_{t} u^{\\mu } = 0\\)), rotationally symmetric (\\(\\partial_{\\varphi } u^{\\mu } = 0\\)), and the the Christoffel symbols with the upper index different from \\(r\\) must have at least one \\(r\\) between the lower ones in order to be nonzero, therefore the terms containing them must be zero since they must then be contracted with \\(u^{r} =0 \\).\nThis gives us that the only nonzero component of the acceleration is \n%\n\\begin{align}\n  a^{r } = u^{\\nu } u^{\\nu } \\Gamma^{r}_{\\nu \\nu }\n  = \\qty(u^{t})^2 \\Gamma^{r}_{tt} + \\qty(u^{\\varphi })^2 \\Gamma^{r}_{\\varphi \\varphi }\n\\,.\n\\end{align}\n\nThis already confirms that \\(u^{\\mu } a_{\\mu } =0\\), since the Schwarzschild metric is diagonal.\n\nThe symbols which appear are \n%\n\\begin{align}\n  \\Gamma^{r}_{tt} = \\frac{1}{2} g^{rr} \\qty(- g_{tt, r})\n  = \\qty(1 - \\frac{2GM}{r_{*}}) \\frac{GM}{r^2_{*}}\n\\,\n\\end{align}\n%\nand \n%\n\\begin{align}\n  \\Gamma^{r}_{\\varphi  \\varphi } = \n  \\frac{1}{2} g^{rr} \\qty(-g_{\\varphi \\varphi , r})\n  = - \\qty(1 - \\frac{2GM}{r_{*}}) r_{*} \n\\,.\n\\end{align}\n\nWhat are the components of the \\(4-\\)velocity? We can use the normalization \\(u^{\\mu } u_{\\mu } = -1\\); inserting the variable name \\(u^{\\varphi } = \\omega \\) this gives us \n%\n\\begin{align}\n  u^{t} u^{t} g_{tt} + \\omega^2 g_{\\varphi \\varphi } = -1\n\\,,\n\\end{align}\n%\nwhich can be written as \n%\n\\begin{align}\n  u^{t} u^{t} = \\frac{1 + \\omega^2 r_{*}^2}{1 - \\frac{2GM}{r_*}}\n\\,.\n\\end{align}\n%\nLet us plug this expression into the acceleration one: \n%\n\\begin{align}\n  a^{r} = \\frac{1 + \\omega^2 r_{*}^2}{1 - \\frac{2GM}{r_*}}\n  \\qty(1 - \\frac{2GM}{r_{*}}) \\frac{GM}{r_{*}^2} - \\omega^2 \\qty(1 - \\frac{2GM}{r_{*}}) r_{*}\n\\,,\n\\end{align}\n%\nwhich simplifies to \n%\n\\begin{subequations}\n\\begin{align}\n  a^{r} &= \\frac{GM}{r_{*}^2} + \\omega^2 GM - \\omega^2 r_{*} + 2GM \\omega^2  \\\\\n  &= \\frac{GM}{r_{*}^2} + 3GM \\omega^2 - \\omega^2 r_{*}\n\\,.\n\\end{align}\n\\end{subequations}\n%\nIf we set it to zero we find a relation which we can express in terms of \\(l = g_{\\varphi \\varphi } u^{\\varphi } = r^2_{*} \\omega \\). We get \n%\n\\begin{subequations}\n\\begin{align}\n  0 &= \\frac{GM}{r_{*}^2} + 3GM \\frac{l^2}{r^{4}} - \\frac{l^2}{r_{*}^{3}}  \\\\\n  &= GMr_{*}^2 +3GMl^2 - l^2 r_{*}\n\\,,\n\\end{align}\n\\end{subequations}\n%\nwhich  is the relation we know for the angular momentum in circular orbits. \n\n\\subsubsection{Acceleration modulus limits}\n\nThe modulus of the acceleration is given by \\(\\abs{\\vec{a}}^2 = a^{r} a^{r} g_{rr}\\), so we will have \n%\n\\begin{align}\n  \\abs{\\vec{a}}^2 = \\frac{1}{1- \\frac{2GM}{r_{*}}} \\qty(\n  \\frac{GM}{r_{*}^2} + 3GM \\omega^2 - \\omega^2 r_{*})^2\n\\,,\n\\end{align}\n%\nwhich can be written in terms of the adimensionalized variables \\(R = r / 2GM\\) and \\(\\widetilde{\\omega} = 2GM \\omega \\): \n%\n\\begin{align}\n  \\abs{\\vec{a}}^2 = \\frac{1}{(4GM)^2} \\frac{R_{*}}{R_{*} - 1}\n  \\qty(\\frac{1}{R_{*}^2} + \\widetilde{\\omega}^2 \\qty(3 - 2R_{*}))^2\n\\,.\n\\end{align}\n%\nThis diverges as \\(R_{*} \\rightarrow 1\\), as we might expect: a \\emph{stationary} observer with respect to the Schwarzschild radial coordinate must have ever more acceleration in order to stay on their non-geodesic path. 
\n\nOf we set \\(\\omega = 0\\) it becomes \n%\n\\begin{align}\n  \\abs{\\vec{a}}^2 = \\frac{1}{(4GM)^2} \\frac{R_{*}}{R_{*}- 1} \\frac{1}{R_{*}^4}\n\\,,\n\\end{align}\n%\nwhile for \\(R_{*} \\gg 1\\) it becomes \n%\n\\begin{align}\n  \\abs{\\vec{a}}^2 \\sim \\frac{1}{(4GM)^2} \\qty(\\frac{1}{R_{*}^2}\n   - 2\\widetilde{\\omega}^2 R_{*})^2\n\\,,\n\\end{align}\n%\nwhich can become 0 (that is, we have an orbit, a geodesic) if we set \\(2\\widetilde{\\omega}^2 R_{*}^3= 1\\), or \\(\\omega^2 r_{*}^3 = GM\\), (the spherical orbit formulation of) Kepler's third law. \n\nWe neglected only the \\(3 \\widetilde{\\omega}^2 \\) term since it is the smallest one: in terms of powers of \\((GM)\\) and of \\(c\\) we have: \\(R_{*}^{-2} \\sim (GM)^{-2}\\), \\(\\widetilde{\\omega}^2\\sim (GM)^{-2} c^{-2}\\) while \\(\\widetilde{\\omega}^2 R_{*}^2 \\sim (GM) c^{-2}\\).\n\n\\subsubsection{Some comments on orbits (complement)}\n\nAnother interesting thing to note is the fact that at \\(R_{*} = 3/2\\) the acceleration becomes independent of \\(\\omega \\), and at \\(R_{*} < 3/2\\) the effect of the rotation is to \\emph{increase} the acceleration instead of decreasing it; this corresponds to the fact that there are no orbits (not even unstable ones) for \\(R_{*}<3/2\\). \n\nAn interesting fact which was not mentioned in class: there exist orbits for any \\(R_{*}\\) between \\(3/2\\) and \\(3\\), although they are unstable. \n\nWe found that the radius of a circular orbit is given by  \n%\n\\begin{align}\n  R = L^2 \\qty(1 \\pm \\qty(\\sqrt{1 - \\frac{3}{L^2}}))\n\\,,\n\\end{align}\n%\nwhere \\(L = l / 2GM\\) and \\(R = r / 2GM\\). \n\nThe two branches of this expression correspond to stable and unstable orbits: in both cases we have \\(\\sqrt{3} < L < \\infty \\), and for the stable (plus sign) branch we find \\(R > 3\\) and \\(\\dv*{R}{L} > 0\\) everywhere, while for the unstable branch we have \\(3/2 < R < 3\\) and \\(\\dv*{R}{L} < 0 \\) everywhere. \n\n\\subsection{Perturbed rotating metrics}\n\nWe want to prove that, to first order in \\(\\delta g_{\\mu \\nu }\\), the inverse of \\(g_{\\mu \\nu } = \\overline{g}_{\\mu \\nu } + \\delta g_{\\mu \\nu }\\) is \n%\n\\begin{align}\n  g^{\\mu \\nu } = \n  \\overline{g}^{\\mu \\nu } + \\overline{g}^{\\mu \\alpha } \\overline{g}^{\\nu \\beta } \\delta g_{\\alpha \\beta } + O((\\delta g_{\\mu \\nu })^2)\n\\,.\n\\end{align}\n\nThis can be proved directly by verifying \\(g_{ \\mu \\nu } g^{\\nu \\rho } = \\delta_{\\mu }^{\\rho } + O((\\delta g)^2)\\): it comes out to be \n%\n\\begin{subequations}\n\\begin{align}\n  &\\qty(\\overline{g}_{\\mu \\nu } + \\delta g_{\\mu \\nu })\n  \\qty(\\overline{g}^{\\nu \\rho } - \\overline{g}^{\\nu  \\alpha } \\overline{g}^{\\rho  \\beta } \\delta g_{\\alpha \\beta } + O((\\delta g)^2)) \\\\\n  &= \\delta_{\\mu }^{\\rho } + \\delta g_{\\mu \\nu } \\overline{g}^{\\nu \\rho } - \\overline{g}_{\\mu \\nu }\\overline{g}^{\\nu \\alpha } \\overline{g}^{\\rho \\beta } \\delta g_{\\alpha \\beta } + O((\\delta g)^2)  \\\\\n  &=  \\delta_{\\mu }^{\\rho } + \\delta g_{\\mu \\nu } \\overline{g}^{\\nu \\rho } - \\delta_{\\mu}^{\\alpha } \\delta g_{\\alpha \\beta } \\overline{g}^{\\rho \\beta }  + O((\\delta g)^2)\n\\,,\n\\end{align}\n\\end{subequations}\n%\nso we can notice that the first order terms cancel: we have the inverse, up to first order. 
\n\n\\subsubsection{Inverse perturbed Schwarzschild}\n\nThe Schwarzschild metric is \n%\n\\begin{subequations}\n\\begin{align}\n  \\overline{g}_{\\mu \\nu } = \\left[\\begin{array}{cccc}\n  -\\qty(1 - \\frac{2GM}{r}) & 0 & 0 & 0 \\\\ \n  0 & \\qty(1 - \\frac{2GM}{r})^{-1} & 0 & 0 \\\\ \n  0 & 0 & r^2 & 0 \\\\ \n  0 & 0 & 0 & r^2 \\sin^2 \\theta \n  \\end{array}\\right]\n\\,,\n\\end{align}\n\\end{subequations}\n%\nand its inverse is \n%\n\\begin{subequations}\n\\begin{align}\n  \\overline{g}^{\\mu \\nu } = \\left[\\begin{array}{cccc}\n  -\\qty(1 -\\frac{2GM}{r})^{-1}  & 0 & 0 & 0 \\\\ \n  0 & \\qty(1- \\frac{2GM}{r}) & 0 & 0 \\\\ \n  0 & 0 & r^{-2} & 0 \\\\ \n  0 & 0 & 0 & r^{-2} \\sin^{-2} \\theta \n  \\end{array}\\right]\n\\,.\n\\end{align}\n\\end{subequations}\n\nWe want to compute \\(\\overline{g}^{\\mu \\alpha } \\overline{g}^{\\nu \\beta } \\delta g_{\\alpha \\beta }\\), and we know that \\(\\delta g_{\\alpha \\beta } \\) has only one independent component: \n%\n\\begin{align}\n  \\delta g_{t \\varphi } = - \\frac{2 GJ  \\sin^2\\theta }{r}\n\\,.\n\\end{align}\n\nThis, combined with the fact that the background metric is diagonal, gives us the result that we only have one entry in the sum: \n%\n\\begin{align}\n  \\overline{g}^{\\mu \\alpha } \\overline{g}^{\\nu \\beta } \\delta g_{\\alpha \\beta }\n  = \\overline{g}^{tt} \\overline{g}^{\\varphi \\varphi }\n  \\delta g_{\\varphi  t}\n  =(-)^2 \\frac{2GJ \\sin^2 \\theta /r }{(1 - \\frac{2GM}{r}) r^2 \\sin^2 \\theta }\n  = \\frac{2GJ}{(r - 2GM) r^2}\n  \\,,\n\\end{align}\n%\nso the  full inverse metric to first order in \\(J\\) is given by subtracting this off of the regular Schwarzschild inverse's \\(t \\varphi \\) components: \n%\n\\begin{subequations}\n\\begin{align}\n  g^{\\mu \\nu } = \\left[\\begin{array}{cccc}\n    -\\qty(1 -\\frac{2GM}{r})^{-1}  & 0 & 0 & -\\frac{2GJ}{(r - 2GM)r^2} \\\\ \n    0 & \\qty(1- \\frac{2GM}{r}) & 0 & 0 \\\\ \n    0 & 0 & r^{-2} & 0 \\\\ \n    -\\frac{2GJ}{(r - 2GM)r^2} & 0 & 0 & r^{-2} \\sin^{-2} \\theta \n    \\end{array}\\right]  \n\\,.\n\\end{align}\n\\end{subequations}\n\n\\subsubsection{Ricci component computation}\n\nWe want to compute the 00 component of the Ricci tensor, \\(R_{00} = R^{\\alpha }_{0 \\alpha 0}\\). It is given by \n%\n\\begin{align}\n  R_{00} = \\Gamma^{\\alpha }_{00, \\alpha }\n  - \\Gamma^{\\alpha }_{0 \\alpha, 0}\n  + \\Gamma^{\\alpha }_{\\alpha \\lambda }\n  \\Gamma^{\\lambda }_{00}\n  - \\Gamma^{\\alpha }_{0\\lambda } \n  \\Gamma^{\\lambda }_{0 \\alpha }\n\\,,\n\\end{align}\n%\nand we will show that each of these 4 terms is either constant with respect to \\(J\\) or quadratic in \\(J\\), when computed with respect to the metric \n%\n\\begin{subequations}\n\\begin{align}\n  g_{\\mu \\nu } = \n  \\left[\\begin{array}{cccc}\n  -\\qty(1- \\frac{2GM}{r}) & 0 & 0 & - \\frac{2GJ}{r} \\sin^2\\theta  \\\\ \n  0 & \\qty(1 - \\frac{2GM}{r})^{-1} & 0 & 0 \\\\ \n  0 & 0 & r^2 & 0 \\\\ \n  - \\frac{2GJ}{r} \\sin^2\\theta  & 0 & 0 & r^2 \\sin^2 \\theta \n  \\end{array}\\right]\n\\,.\n\\end{align}\n\\end{subequations}\n\nFirst of all, note that the only derivatives of the metric which can give \\(J\\)-dependent contributions are \\(g_{03,1}\\), \\(g_{03, 2}\\) or these terms with \\(0\\) and \\(3\\) exchanged. 
\n\nThe sum \\(\\Gamma^{\\alpha }_{00, \\alpha } \\) is independent of \\(J\\): the only terms which can contribute in the sum are \\(\\alpha = r, \\theta \\) and then the three indices in the Christoffel symbol are either \\(001\\) or \\(002\\): in either case we cannot form the \\(J\\)-dependent metric component \\(g_{03}\\).\n\nMore explicitly: the expression is \n%\n\\begin{align}\n  \\Gamma^{\\alpha }_{00} = \\frac{1}{2} g^{\\alpha \\beta } \\qty(2g_{\\beta 0,0}- g_{00, \\beta } )\n\\,,\n\\end{align}\n%\nand the time derivatives vanish, terms with \\(\\beta = 1, 2\\) could contribute but in neither case would we have the possibility to form the \\(g^{03}\\) component. \n\nThe term \\(\\partial_{0} \\Gamma^{\\alpha }_{0\\alpha }\\) is zero by stationarity: the metric is \\(t\\)-independent. \n\nIn the term \\(\\Gamma^{\\alpha }_{\\alpha \\lambda } \\Gamma^{\\lambda }_{00}\\) we ask ourselves: where can the metric component \\(g_{03}\\) or \\(g^{03}\\) appear? As we saw above it cannot appear in the second symbol, while for it to appear in the first symbol we would need to have \\(\\alpha, \\lambda = 0, 3\\) (in either order), but then the third index in that symbol would also be \\(0\\) or \\(3\\), so the metric component \\(g_{03}\\) would be differentiated with respect to \\(t\\) or \\(\\varphi \\), and it is independent of both. So, in order to have the symbol \\(\\Gamma^{\\lambda }_{00} \\) not be zero we need to have \\(\\lambda = 1, 2\\). \nThen, in the expression \n%\n\\begin{align}\n  \\Gamma^{\\alpha }_{\\alpha \\lambda }  = \\frac{1}{2}\n  g^{\\alpha \\beta } \\qty(g_{\\beta \\alpha , \\lambda }\n  + g_{\\beta \\lambda , \\alpha }\n  - g_{ \\alpha \\lambda , \\beta })\n\\,\n\\end{align}\n%\nthe last two terms of the sum cannot depend on \\(J\\) since they have an index which is neither 0 nor 3. \nTerms such as those with \\(\\alpha , \\beta = 0,3\\) or the inverse can contribute: however in these terms we have to multiply a \\(g^{03}\\), which is linear in \\(J\\), with a \\(g_{03}\\), which also is. So in the end the term is quadratic in \\(J\\). \n\nThe term \\(\\Gamma^{\\alpha }_{0 \\lambda } \\Gamma^{\\lambda}_{0 \\alpha }\\) is also \\(O(J^2)\\); to see this, let us write a symbol \\(\\Gamma^{\\alpha }_{0 \\lambda }\\): it is \n%\n\\begin{align}\n  \\Gamma^{\\alpha }_{0 \\lambda } = \n  \\frac{1}{2} g^{\\alpha \\beta } \\qty(g_{\\beta 0, \\lambda } +  \\cancelto{}{g_{\\beta \\lambda, 0 } }\n  - g_{0 \\lambda , \\beta })\n\\,,\n\\end{align}\n%\nand we ask ourselves: how can this depend on \\(J\\)?\nWe could have \\(\\alpha , \\beta = 0, 3\\): then the term \\(g_{0 \\lambda , \\beta }\\) vanishes and we have \\(g^{03} g_{30, \\lambda }\\) which is already quadratic in \\(J\\). \n\nWe could have \\(\\alpha , \\beta = 3, 0\\): then \\(g_{0 \\lambda , \\beta }\\) vanishes. We could have a nonzero term \\(g^{30} g_{00, \\lambda }\\) if \\(\\lambda = 1, 2\\): in this case the whole term would look like \\(\\Gamma^{3}_{0 1} \\Gamma^{1}_{03}\\) or  \\(\\Gamma^{3}_{0 2} \\Gamma^{2}_{03}\\). In either case, in both symbols the sum of derivatives of the metric the only nonvanishing term will be necessarily linear in \\(J\\) since it will be differentiated with respect to the 1 or 2 index. Therefore, the product of the two Christoffels will be at least quadratic in \\(J\\). \n\nThe final case for the single Christoffel is \\(\\alpha = \\beta \\): in that case, to have \\(J\\)-dependence we need to have either \\(\\beta =3\\) which means \\(\\alpha = 3\\) or \\(\\lambda =3\\). 
When we set one of the two indices \\(\\alpha , \\lambda \\) to \\(3\\) the other one must necessarily be \\(1\\) or \\(2\\), since otherwise the derivatives would vanish.\nThen we get back to the case in which only one term in the sum of the three derivatives of the metric survives, which means that the symbol is linear in \\(J\\) as a whole, but we have two symbols for which this holds multiplied together, so on the whole the term is quadratic in \\(J\\). \n\nIn the end then the sum, when expressed as a function of \\(J\\), looks like: \n%\n\\begin{align}\n  R_{00} = \\const(J)+  O(J^2)\n\\,,\n\\end{align}\n%\nand we know that if \\(J=0\\) then \\(R_{00} = 0\\), so we are done: \\(R_{00} = O(J^2)\\).\n\n\\end{document}", "meta": {"hexsha": "39d3eb1b1e2792c8b1d0cc7ea7bda9cf84999e4b", "size": 15447, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_first_semester/gr_exercises/sheet8.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_first_semester/gr_exercises/sheet8.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_first_semester/gr_exercises/sheet8.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 41.3021390374, "max_line_length": 629, "alphanum_fraction": 0.6127403379, "num_tokens": 5707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8031737869342623, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5694655777526719}}
{"text": "\\chapter*{Appendix B: Noise Modeling the Wavelet Space}\n\\addcontentsline{toc}{chapter}{Appendix B: Noise Modeling the Wavelet Space}\n \n\n\\subsection*{Gaussian noise}\n\\index{noise}\nWe have the probability density\n\\begin{eqnarray}\np(w_j(x,y)) = \\frac{1}{\\sqrt{2\\pi} \\sigma_j} e^{{-w_j(x,y)^2}/2\\sigma^2_j} \n\\end{eqnarray}\n\nSo we need to estimate, in the case of Gaussian noise models,\n the noise standard deviation at each scale.  These standard deviations can \nbe determined analytically in the case of some transforms, including the \n\\`a trous transform, but the calculations can become complicated.  \n\nThe appropriate value of $\\sigma_j$ \nin the succession of wavelet planes is assessed \nfrom the standard deviation of the noise $\\sigma_I$ in the original image\n$I$, \nand from study of the noise in the wavelet space.  This study consists of \nsimulating an image containing Gaussian noise with a standard deviation \nequal to 1, and taking the wavelet transform of this image.  Then we\ncompute the standard deviation $\\sigma^e_j$ at each scale.  We get a curve \n$\\sigma^e_j$ as a function of $j$, giving the behavior of the noise in the \nwavelet space.\n(Note that if we had used an orthogonal wavelet transform, this curve would\n\\index{wavelet transform}\nbe linear.)  Due to the properties of the wavelet transform, we have \n$ \\sigma_j = \\sigma_I \\sigma^e_j $.\nThe standard deviation of the noise at a scale $j$ of the image is equal to\nthe standard deviation of the noise of the image multiplied by the \nstandard deviation of the noise of  scale $j$ of the wavelet transform.\n \n\\subsection*{Poisson noise}\n\\index{noise}\n\nIf the noise in the data $I$ is Poisson, the Anscombe transform \n\\begin{eqnarray}\nt(I(x,y)) = 2\\sqrt{I(x,y) + \\frac{3}{8}}\n\\label{eqn_noise_anscombe}\n\\end{eqnarray}\nacts as if the data arose from a\nGaussian white noise model (Anscombe, \\cite{rest:anscombe48}), \nwith $\\sigma = 1$, under the\nassumption that the mean value of $I$ is large.\n\\index{Anscombe transformation}\n\nFor Poisson parameter values under about 20, the Anscombe transformation \nlooses control over the bias.  In this case, an alternative approach to \nvariance stabilization is needed. An approach for very small numbers of \ncounts, including frequent zero cases, has been described in \nin \\cite{astro:slezak93}, \\cite{astro:bijaoui94} and \\cite{astro:bury95},\nand  will be described below. \nSmall numbers of detector counts will most likely be associated  with the \nimage background. \nNote that errors related to small values carry the risk of \nremoving real objects, but not of amplifying noise.   \n\n\\subsection*{Gaussian and Poisson Noise}\n\\index{variance stabilization}\n\\index{noise}\n\nThe arrival of photons, and their expression by electron counts, on CCD\ndetectors may be modeled by a Poisson distribution.  In addition, there is \nadditive Gaussian read-out noise.  \nThe Anscombe \ntransformation (eqn.\\ \\ref{eqn_noise_anscombe}) has been extended \\cite{starck:mur95_2}\nto take this combined noise into account.\nAs an\napproximation, consider the signal's value, $I(x,y)$, as a sum of a Gaussian\nvariable, $\\gamma$, of mean $g$ and standard-deviation $\\sigma$; and a\nPoisson variable, $n$, of mean $m_0$: we set \n$I(x,y) = \\gamma + \\alpha n $ where $\\alpha$\nis the gain.  
\n\nThe generalization of the variance stabilizing Anscombe formula is:\n\\begin{eqnarray}\nt = \\frac{2}{\\alpha} \\sqrt{\\alpha I(x,y) + \\frac{3}{8} \\alpha^2 + \\sigma^2 -\n\\alpha g}\n\\label{eqn_noise_bijaoui}\n\\end{eqnarray}\nWith appropriate values of $\\alpha$, $\\sigma$ and $g$, this reduces to \nAnscombe's transformation (eqn.\\ \\ref{eqn_noise_anscombe}).  \n\\index{Anscombe transformation}\n\n\n\\subsection*{Poisson Noise with Few Photons or Counts}\n\\index{noise}\n\\label{noise_few_photons}\n\nA wavelet coefficient at a given position and at a given scale $j$ is\n\\begin{eqnarray}\nw_j(x,y) =  \\sum_{k \\in K} n_k \\psi(\\frac{x_k - x}{2^j} , \\frac{y_k - y}{2^j})\n\\end{eqnarray}\nwhere $K$ is the support of the wavelet function  $\\psi$ and $n_k$ is the \nnumber  of events which contribute to the calculation of $w_j(x,y)$ (i.e.\\ the \nnumber of \nphotons included in the support of the dilated wavelet centered at ($x$,$y$)).\n\\index{wavelet transform}\n\nIf a wavelet coefficient $w_j(x,y)$ is due to the noise, it can be considered\nas a realization of the sum $\\sum_{k \\in K} n_k$ of \nindependent random variables \nwith the same distribution as that of the wavelet function ($n_k$\nbeing the number of photons or events used for the calculation of $w_j(x,y)$).\nThen we compare the wavelet coefficient of the data to the values \nwhich can be taken by the sum of $n$ independent variables.\n\nThe distribution of one event in the wavelet space is directly \ngiven by the histogram $H_1$ of the wavelet $\\psi$. Since \nindependent events are considered, the distribution of the random variable \n$W_n$ (to be associated with a wavelet coefficient) related to $n$\nevents is given by $n$ autoconvolutions of $H_1$\n\\begin{eqnarray}\nH_n = H_1 \\otimes  H_1 \\otimes ... \\otimes H_1\n\\end{eqnarray}\nFor a large number of events, $H_n$ converges to a Gaussian. \n\nIn order to facilitate the comparisons, the variable $W_n$ of distribution\n $H_n$ \n is reduced by\n\\begin{eqnarray}\nc = \\frac{W_n - E(W_n)}{\\sigma(W_n)}\n\\end{eqnarray}\nand the cumulative distribution  function is \n\\begin{eqnarray}\nF_n(c) = \\int_{-\\infty}^{c} H_n(u) du\n\\end{eqnarray}\n\nFrom $F_n$, we derive $c_{min}$ and $c_{max}$ such \nthat $F(c_{min}) = \\epsilon$\nand $F(c_{max}) = 1 - \\epsilon$.\n\nTherefore a reduced wavelet coefficient $w^r_j(x,y)$, calculated from\n$w_j(x,y)$, and  resulting \nfrom $n$ photons or counts is\nsignificant if:\n\\begin{eqnarray}\nF(w^r) > c_{max}\n\\end{eqnarray}\nor\n\\begin{eqnarray}\nF(w^r) < c_{min}\n\\end{eqnarray}\nand $w^r_j(x,y)$ is obtained by\n\\begin{eqnarray}\nw^r_j(x,y)  & = &   \\frac{w_j(x,y)}{\\sqrt{n} \\sigma_{\\psi_j}} \\\\\n             & =  & \\frac{w_j(x,y)}{\\sqrt{n} \\sigma_{\\psi}} 4^j\n\\end{eqnarray}\nwhere $\\sigma_{\\psi}$ is the standard deviation of the wavelet function, and\n$\\sigma_{\\psi_j}$ is the standard deviation of the dilated wavelet function\n($\\sigma_{\\psi_j} = \\sigma_{\\psi}/4^j$).\n\n\\subsection*{Root mean square map}\nIf, associated to the data $I(x,y)$, we have the root mean square map  \n$R_{\\sigma}(x,y)$, the noise in $I$  is non homogeneous.\nFor each wavelet coefficient $w_j(x,y)$ of $I$, the \nexact standard deviation $\\sigma_j(x,y)$ have to be calculated from \n $R_{\\sigma}$. 
A wavelet coefficient $w_j(x,y)$ is obtained by\nthe correlation product between the image $I$ and a function \n$g_j$:\n\\begin{eqnarray}\n w_j(x,y) = \\sum_k \\sum_l I(x,y)  g_j(x+k,y+l)\n\\end{eqnarray}\n\n \nThen we have\n\\begin{eqnarray}\n\\sigma_j^2(x,y) =  \\sum_k \\sum_l R_{\\sigma}^2(x,y) g_j^2(x+k,y+l).\n\\end{eqnarray}\n\nIn the case of the \\`a trous algorithm, the coefficients $g_j(x,y)$\n are not known exactly, but they can easily be computed by taking the\nwavelet transform of a Dirac $w^{\\delta}$. The map $\\sigma_j^2$ is calculated \nby correlating the square of the wavelet scale $j$ of  $w^{\\delta}$\nby $R^2_\\sigma(x,y)$.\n \n\\subsection*{Speckle Noise}\nSpeckle occurs in all types of coherent imagery such as synthetic aperture\n radar (SAR) imagery,  acoustic imagery and laser illuminated imagery. \n The probability density function (pdf) of the modulus of a homogeneous\n scene is a Rayleigh distribution:\n\\begin{eqnarray*}\np(\\rho)={\\rho\\over \\sigma^2}e^{-{\\rho^2\\over 2\\sigma^2}} \n\\quad  M_{\\rho} = \\sqrt{ \\frac{\\pi}{2} } \\sigma \\quad \\sigma_{\\rho}=\\sqrt {\\frac{4-\\pi}{2}}\\sigma\n \\end{eqnarray*}\nThe ratio $\\sigma_{\\rho}/M_{\\rho}$ is a constant of value $\\sqrt{\\frac{4-\\pi}{\\pi}}$.\nThis \nmeans that the speckle is a multiplicative noise. The pdf of the  modulus of a log-transformed \n speckle noise is:\n\\begin{eqnarray*}\np(\\ell) =  \\frac{e^{2\\ell}}{ \\sigma^2} e^{-\\frac{e^{2\\ell}}{2 \\sigma^2} } \\quad \nM_{\\ell} = 0.058 + \\log(\\sigma) \\quad \\sigma_{\\ell}= \\sqrt{\\frac{\\pi^2}{24}} = 0.641\n\\end{eqnarray*}\n\nA better estimator for Rayleigh distribution is the  energy ($I=\\rho^2$)\nwhich is known to have an exponential ({\\it i.e.} Laplace) \ndistribution of parameter $a=2\\sigma^2$\n\\begin{eqnarray*}\np(I)={1\\over a}e^{-\\frac{I}{a}}\n\\end{eqnarray*}\n\n\\subsection*{Other Types of Noise}\nFor any type of noise, an analogous study can be carried out in order to find\nthe detection level at each scale and at each position. The \ntypes of noise considered so far in this chapter correspond to the general \ncases in astronomical imagery. \nWe now describe briefly methods which can be used for non-uniform and\nmultiplicative noise. \n\n\\subsubsection*{Additive Non-Uniform Noise}\n\\label{noise_addi_uni}\nIf the noise is additive, but non-uniform, we cannot estimate a standard\ndeviation for the whole image. However, we can often assume that the noise\nis locally Gaussian, and we can compute a local standard deviation of the \nnoise for each pixel. In this way, we obtain a standard deviation \nmap of the noise, $I_{\\sigma}(x,y)$. \nA given wavelet coefficient $w_j(x,y)$ is calculated\nfrom the pixels of the input image $I$ in the range $I(x-l \\dots x+l, y-l \n\\dots y+l)$\nwhere $l$ is dependent on the wavelet transform algorithm, the wavelet \nfunction,\nand the scale $j$. An upper limit $u_j(x,y)$ for the noise associated with\n$w_j(x,y)$ is \nfound by just considering the maximum value in \n$I_{\\sigma}(x-l \\dots x+l,y-l \\dots y+l)$ and\n by multiplying this value by the constant $\\sigma_j^e$ (defined in the \nsubsection ``Gaussian noise'' at the beginning of this Appendix).\n\\begin{eqnarray}\nu_j(x,y) = \\max(I_{\\sigma}(x-l \\dots x+l, y-l \\dots y+l)) \\sigma_j^e\n\\end{eqnarray}\nThe detection level is not constant over each scale. \n \n\\subsubsection*{Multiplicative Noise}\nIf the noise is multiplicative, the image can be transformed by taking\nits logarithm. 
In the resulting image, the noise is additive, and \na hypothesis of Gaussian noise can be used in order to find the \ndetection level at each scale.\n\n\\subsubsection*{Multiplicative Non-Uniform Noise}\nIn this case, we take the logarithm of the image, and the resulting image\nis treated as for additive non-uniform noise above.\n\n\\subsubsection*{Unknown Noise}\nIf the noise does not follow any known distribution, \nwe can consider as significant\nonly wavelet coefficients which are greater than their local standard deviation\nmultiplied by a constant: $w_j(x,y)$ is significant if \n\\begin{eqnarray}\n\\mid w_j(x,y) \\mid \\ > \\ k \\sigma(w_j(x-l \\dots x+l, y-l \\dots y+l)) \n\\end{eqnarray}\n\n\n \n \n", "meta": {"hexsha": "64d1d61d93c71ad792e26f5256470c099a014d88", "size": 10445, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_mra/doc_mr2/annex_noise.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_mra/doc_mr2/annex_noise.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_mra/doc_mr2/annex_noise.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.6420233463, "max_line_length": 97, "alphanum_fraction": 0.7309717568, "num_tokens": 3094, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789040926008, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.569385796057551}}
{"text": "\\documentclass[12pt]{article}\n% \\usepackage[margin=1in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage[]{algorithm2e}\n% \\usepackage[table]{xcolor}          % table alternate background shading\n% \\usepackage[LGRgreek]{mathastext}   % Dirty solution: change all math font to upright\n\\usepackage[style=alphabetic]{biblatex}\n\n\\newcommand{\\norm}[1]{\\left\\lVert #1 \\right\\rVert}\n\n\\addbibresource{rec-notes.bib}\n\n\\title{Recommender System Notes}\n\\author{Tiangang Chen, Mengqiao Zhang}\n\\begin{document}\n\\maketitle\n\\section{Intro}\nIn our project, we implemented a naive version of Twitter's preliminary recommender system \\cite{wtf} for recommending who to follow. We also implemented a very simple cosine-distance based content-filtering algorithm as a complement.\n\nThese two algorithms are written in Python scripts, intended to be executed by the OS scheduler on fixed time interval. So the recommendation is not reactive immediately to user input.\n\nThis document serves to record the details of our understanding and implementation, primarily as our self-reference.\n\n\\section{Content Filtering}\n\nWe collect the following raw personal information: gender, major, and tags. The tags are predefined predicates of a person's hobbies/traits that the user may select as he/she sees fit.\n\nWe model the tags as a boolean vector with dimension of number of tags available.\n\nThe rank/similarity score between two users $u$ and $v$ is calculated as follows:\n\n\\[\\frac{u \\cdot v}{\\norm{u}_2 \\norm{v}}_2\\]\n\nWhich is readily available via \\texttt{1 - scipy.spatial.distance.cosine(u,v)}.\n\n\\section{Collaborative Filtering}\n\nThe following illustrates an overview of the algorithm described in \\cite{wtf}.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{wtf-algorithm}\n\\caption{overview of the algorithm}\n\\end{center}\n\\end{figure}\n\n\\subsection{PageRank}\n\nThe setup for PageRank \\cite{pagerank} is as follows: let $u$ be a web page, $F_u$ be the set of pages that $u$ points to, and $B_u$ be the set of pages that points to $u$. Let $N_u = |F_u|$, which is the number of links originating from $u$. $c$ is the factor for normalization.\n\nThe simplified PageRank:\n\n\\[R(u) = c \\sum_{v \\in B_u}{\\frac{R(v)}{N_v}}\\]\n\n\\[R = cAR\\]\n\nWhere $A$ is the matrix representation of the graph. $A_{u,v} = \\frac{1}{N_u}$ if there is an edge from $u$ to $v$, otherwise $0$.\n\nSimplified PageRank suffers from ``rank sink'' when we encounter situation similar to: two pages pointing only to each other, then some other page points to one of them. Then the rank will accumulate in the two pages but never distribute.\n\nThe improved PageRank includes a source of rank $E(u)$:\n\n\\[R(u) = c \\sum_{v \\in B_u}{\\frac{R(v)}{N_v} + cE(u)}\\]\n\n\\[R=c(AR+E)\\]\n\nWe aim to maximize $c$ and keep $\\norm{R}_1 = 1$. Because $\\norm{R}_1 = 1$, we can rewrite $R = c(A+E \\cdot 1)R$ where $1$ is a vector of all ones. The introduction of vector $E$ remedies the rank sink. Generally, we can populate $E$ with uniform value. 
By giving $E$ some bias we can obtain a ``personalized'' ranking focusing on a particular webpage.\n\nWe simply implement PageRank as follows, identical to the original \\cite{pagerank} algorithm:\n\n\\begin{center}\n\\begin{algorithm}[h!]\n\\KwData{$R$ the rank vector\\;\n$A$ matrix representation of the graph\\;\n$E$ the initialization vector\\;\n$\\epsilon$ convergence limit\\;}\ninitialize: $R_0 \\gets E$ \\;\n \\While{$\\delta > \\epsilon$}{\n  $R_{i+1} \\gets AR_i$\\;\n  $d \\gets \\norm{R_i}_1 - \\norm{R_{i+1}}_1$\\;\n  $R_{i+1} \\gets R_{i+1} + dE$\\;\n  $\\delta \\gets \\norm{R_{i+1} - R_i}_1$\\;\n }\n \\caption{PageRank}\n\\end{algorithm}\n\\end{center}\n\n\\subsection{SALSA}\n\nSALSA \\cite{salsa} operates on a bipartite graph, traversing while assigning scores to nodes on both sides. The two sets of nodes are called hubs and authorities. In our case, the set of hub nodes consist of top results from personalized PageRank for a user, called the ``circle of trust'', and the set of authority nodes consist of direct (one degree) neighbors of those hub nodes.\n\nWe set up SALSA as follows: initially we have a collection (set) of pages, denoted by $\\mathcal{C}$, edges are directed in this collection. Now we convert $\\mathcal{C}$ to an undirected bipartite $G = (V_h, V_a, E)$, where\n\n\\[V_h = \\{ s_h | s \\in \\mathcal{C}, \\; \\mathrm{outDegree(s)} > 0 \\}\\]\n\\[V_a = \\{ s_a | s \\in \\mathcal{C}, \\; \\mathrm{inDegree(s)} > 0 \\}\\]\n\\[E = \\{ (s_h, r_a) | s \\rightarrow r \\; in \\; \\mathcal{C} \\}\\]\n\nSo a page with both nonzero in-degree and out-degree will appear on both sides of the bipartite. Our $\\mathcal{C}$ consists of the previously mentioned ``circle of trust'' and the nodes that they point to, which is naturally a bipartite.\n\nNow we perform two distinct random walks on $G$. The two walks start from opposite side of the bipartite, traversing two edges in each iteration. So they will end up at the same side where they started. This process is represented as two Markov chains. The transition matrices are defined as follows:\n\n\\[h_{i,j} = \\sum_{\\{k | (i_h, k_a), (j_h, k_a) \\in G\\}}{\\frac{1}{deg(i_h)} \\cdot \\frac{1}{deg(k_a)}}\\]\n\\[a_{i,j} = \\sum_{\\{k | (k_h, i_a), (k_h, j_a) \\in G\\}}{\\frac{1}{deg(i_a)} \\cdot \\frac{1}{deg(k_h)}}\\]\n\nOur goal is to find the pricipal eigenvectors of the hub and autority matrix, which are eigenvectors with eigenvalues of the largest magnitude. It has been argued in \\cite{salsa} that such eigenvalue/eigenvector exists and is unique (has multiplicity of one). The values within are the assigned scores for the hub and authority nodes. 
Higher values stand for higher probability of the random walk ends up at the nodes.\n\n\\printbibliography\n\\end{document}\n", "meta": {"hexsha": "ea20fb8aac4c3e3ca72d1d07d6e61e86c84f44b8", "size": 5658, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "two-red-hearts/2rh-server/recommend-notes/rec-notes.tex", "max_stars_repo_name": "cPolaris/school-is-fun", "max_stars_repo_head_hexsha": "423495df43803fc98e0adccb24ecc26eaceb419f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "two-red-hearts/2rh-server/recommend-notes/rec-notes.tex", "max_issues_repo_name": "cPolaris/school-is-fun", "max_issues_repo_head_hexsha": "423495df43803fc98e0adccb24ecc26eaceb419f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "two-red-hearts/2rh-server/recommend-notes/rec-notes.tex", "max_forks_repo_name": "cPolaris/school-is-fun", "max_forks_repo_head_hexsha": "423495df43803fc98e0adccb24ecc26eaceb419f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.4363636364, "max_line_length": 418, "alphanum_fraction": 0.72834924, "num_tokens": 1617, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104788995148791, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.5693857877937909}}
{"text": "\\begin{document}\n\\section{Difficulty from Beatmaps}\n\nWe turn our attention to how we can figure out difficulty from the map itself, the expected output we want would be:\n\n$$ difficulty := \\lbrace(offset_1, difficulty_1), (offset_2, difficulty_2), ..., (offset_n, difficulty_n)\\rbrace $$\n\nWhereby we estimate difficulty at offset \\textbf{n} from the map itself:\n\n$$ difficulty_n \\approx model \\left( reading_n, \\sum_{k=1}^{keys} \\left(strain_k \\right), ... \\right)$$\n\nThere are more factors (denoted by $...$) that contribute to difficulty, but we will regard them as noise in this research and fine tune this equation later.\n\n\\paragraph{Reading} This denotes how hard is it to read all the patterns on the screen. We can draw similarities between this and density, however density focuses a lot more on its previous and future surroundings where reading looks at the future ones only. \n\n\\paragraph{Density} This focuses on the \\textbf{imminent} density of the offset. contrary to strain, it disregards the global trends of patterns. We will not use this in our network as strain does a better job in calculation.\n\n\\paragraph{Strain} This is reliant on $density$ whereby continuous high values of $ density$ will result in a high $strain$. This has an additional hyperparameter, $decay$, where it denotes how fast the player can recover from $strain_n$. Finger $strain$ on the same hand will likely affect the other $strain$ values of the other fingers.\n\n\\subsection{Note Type Weights}\n\nThis will define the \\textbf{weightages} of each note type.\n\\paragraph{$weight_{NN}$} defines for normal notes\n\\paragraph{$weight_{LNh}$} defines for long notes heads\n\\paragraph{$weight_{LNt}$} defines for long notes tails\n\\paragraph{$weight_{SSh}$} defines for \\textbf{strain shift} for hands \\textbf{(explained later)}\n\\paragraph{$weight_{SSb}$} defines for \\textbf{strain shift} for body \\textbf{(explained later)}\n\n\\subsection{Reading}\n\n$$ reading_{(n,n+\\theta)} := \ncount(NN), count(LNh), count(LNt)\n= \\lbrace n \\leq offset \\leq (n+ \\theta) \\rbrace$$\n\nWhere,\n\n\\paragraph{$n$} is the initial offset\n\\paragraph{$\\theta$} is the hyperparameter for length.\nWe will not take into consideration the length of $note_{long}$\n\n\\subsection{Density}\n\nWe will look into density before strain as it's derived from this.\n\nConsidering the notes on the $k$ column\n$$ \\lbrace ..., n-2, n-1, n, n+1, n+2, ... \\rbrace $$\n$$ \\Delta_{nx}^k = \\frac{1}{|n - x|}$$\n$$ density_n^k =\n\\sum_{N=n-\\sigma}^{n+\\sigma}\n\\left(\n\\Delta_{nN}^k\n\\right)$$\n\nSo for $\\sigma = 2$ and \n$$ column_k := \\lbrace a, b, n, d, e\\rbrace$$\n$$ density_n^k = \\Delta_{na}^k + \\Delta_{nb}^k + \\Delta_{nd}^k + \\Delta_{ne}^k $$\n$$ density_n^k = \\frac{1}{|n-a|} + \\frac{1}{|n-b|} + \\frac{1}{|n-d|} + \\frac{1}{|n-e|}$$\n\n\\paragraph{$\\Delta_{nx}^k$} will be the the inverse of the (ms) distance between notes $n$ and $x$ on column $k$. Notes that are further away will be penalized more heavily.\n\n\\paragraph{$\\sigma$} defines the range, front and back of the search. 
\n\n\\subsection{Strain}\n\nThis works in relationship with $density$: $strain$ is a cumulative function of $density$ with a \\textbf{linear decay function}.\n\nNotes:\n\\begin{enumerate}\n\t\\item Better players have \\textbf{higher decay gradients}\n\t\\item If $decay > density$, $strain$ will \\textbf{decrease}\n\t\\item If $decay < density$, $strain$ will \\textbf{increase}\n\t\\item There will be a point where $strain$ is high enough to affect physical performance, indirectly affecting accuracy.\n\\end{enumerate}\n\n\\subsubsection{Strain Shift}\n\nStrain will not only affect one finger; over time it will affect the whole hand and the body, just on a smaller scale.\n\n\\paragraph{Hand} We will denote the strain shift hyperparameter from one finger to another on the same hand as $SS_H$\n\\paragraph{Body} Likewise, for the body, we will denote it as $SS_B$\n\n\\subsubsection{Strain Example}\n\nConsider the case without \\textbf{Strain Shift}, where\n$$ weight_{NN} = 1, \\qquad \\sigma = 2 $$\n\\begin{center}\n\t\\begin{tabular}{|c|c|c|c|c|c|c|} \n\t\\hline\n\t2500 & 0 \t\t\t& 0 & 0 &       & 0.022 & 0.016\\\\ \\hline\n\t2000 & $weight_{NN}$& 0 & 0 & 0.003 & 0.022 & 0.017\\\\\t\\hline\n\t1500 & $weight_{NN}$& 0 & 0 & 0.005 & 0.019 & 0.015\\\\\t\\hline\n\t1000 & $weight_{NN}$& 0 & 0 & 0.006 & 0.014 & 0.011\\\\\t\\hline\n\t 500 & $weight_{NN}$& 0 & 0 & 0.005 & 0.008 & 0.006\\\\\t\\hline\n\t   0 & $weight_{NN}$& 0 & 0 & 0.003 & 0.003 & 0.002\\\\\t\\hline\n    -500 & 0 \t\t\t& 0 & 0 & \t\t& 0\t \t& 0\t\\\\\t\\hline\n   -1000 & 0 \t\t\t& 0 & 0 & \t\t& 0\t \t& 0\t\\\\\n\t\\hline\n\tOffset(ms) & k=1 & k=2 & k=3 & $\\approx Density$ & Strain (dec=0) & Strain (dec=0.001) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{center}\n\nConsider the case with \\textbf{Strain Shift}\n\\begin{center}\n\t\\begin{tabular}{|c|c|c|c|} \n\t\\hline\n\t2500 & 0\t\t\t  & 0 \t\t& 0 \t\\\\ \\hline\n\t2000 & $weight_{NN}$  & $weight_{SSh}$ \t& $weight_{SSb}$\\\\\t\\hline\n\t1500 & $weight_{NN}$  & $weight_{SSh}$ \t& $weight_{SSb}$\\\\\t\\hline\n\t1000 & $weight_{NN}$  & $weight_{SSh}$ \t& $weight_{SSb}$\\\\\t\\hline\n\t 500 & $weight_{NN}$  & $weight_{SSh}$ \t& $weight_{SSb}$\\\\\t\\hline\n\t   0 & $weight_{NN}$  & $weight_{SSh}$ \t& $weight_{SSb}$\\\\\t\\hline\n    -500 & 0\t\t\t  & 0 \t\t& 0 \t\\\\\t\\hline\n   -1000 & 0\t\t\t  & 0 \t\t& 0 \t\\\\\n\t\\hline\n\tOffset(ms) & k=1 & k=2 & k=3\\\\ \n\t\\hline\n\\end{tabular}\n\\end{center}\n\nIt's hard to include the calculations in the table, so we'll look at $density_{1000}$ as an example.\n\n$$density_{1000}^1 =\n(\\Delta_{(1000,0)}^{1}) +\n(\\Delta_{(1000,500)}^{1}) +\n(\\Delta_{(1000,1500)}^{1}) +\n(\\Delta_{(1000,2000)}^{1})$$\n\n$$density_{1000}^1 =\n\\frac{1}{1000} +\n\\frac{1}{500} +\n\\frac{1}{500} +\n\\frac{1}{1000} = 0.006$$\n\n$$density_{1000}^2 = \n(\\Delta_{(1000,0)}^{2}) +\n(\\Delta_{(1000,500)}^{2}) +\n(\\Delta_{(1000,1500)}^{2}) +\n(\\Delta_{(1000,2000)}^{2})$$\n\n$$ density_{1000}^2 = \n(\\frac{weight_{SSh}}{1000}) +\n(\\frac{weight_{SSh}}{500}) +\n(\\frac{weight_{SSh}}{500}) +\n(\\frac{weight_{SSh}}{1000}) =\n\\frac{3 * weight_{SSh}}{500} $$\n\n$$ density_{1000}^3 =\n\\frac{3 * weight_{SSb}}{500} $$\n\n$$ density_{1000} :=\n\\lbrace density_{1000}^1, density_{1000}^2, density_{1000}^3 \\rbrace =\n\\lbrace0.006,\n\\frac{3 * weight_{SSh}}{500},\n\\frac{3 * weight_{SSb}}{500}\\rbrace $$\n\n\\subsection{Density Generalization}\n\nIn the case where we want to find $density_n$, where $n$ is the offset index and $k$ is the key count:
\n\n\\[ \t\n\\begin{bmatrix}\n\tweight_{(n+\\sigma,1)} & weight_{(n+\\sigma,2)} & \\dots  & weight_{(n+\\sigma,k)} \\\\\n\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\tweight_{(n+1,1)} & weight_{(n+1,2)} & \\dots  & weight_{(n+1,k)} \\\\\n\tweight_{(n,1)} & weight_{(n,2)} & \\dots  & weight_{(n,k)} \\\\\n    weight_{(n-1,1)} & weight_{(n-1,2)} & \\dots  & weight_{(n-1,k)} \\\\\n    \\vdots & \\vdots & \\ddots & \\vdots \\\\\n    weight_{(n-\\sigma,1)} & weight_{(n-\\sigma,2)} & \\dots  & weight_{(n-\\sigma,k)}\n\\end{bmatrix}\n\\]\n$$ * $$\n\\[\n\\begin{bmatrix}\n\toffset_{n+\\sigma} & \\dots & offset_{n+1} & offset_{n} & offset_{n-1} & \\dots & offset_{n-\\sigma} \n\\end{bmatrix}\n\\]\n$$ = $$\n\\[\n\\begin{bmatrix}\n\tdensity_{n+\\sigma} & \\dots & density_{n+1} & density_{n} & density_{n-1} & \\dots & density_{n-\\sigma} \n\\end{bmatrix}\n\\]\n\n$$\ndensity_n :=\n\\begin{bmatrix}\n\tdensity_{n+\\sigma} & \\dots & density_{n+1} & density_{n} & density_{n-1} & \\dots & density_{n-\\sigma} \n\\end{bmatrix}\n$$ \n\nFrom here, we can calculate the strain by running this through Python code; a sketch follows.
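\n\nThe following is a minimal sketch of strain accumulation, reusing the hypothetical \\texttt{density\\_at} helper from the earlier sketch; the update rule (add the offset's density, subtract a linear decay per step, floor at zero) is inferred from the worked table above, not stated explicitly in the text.\n\n\\begin{verbatim}\n# Sketch: strain as cumulative density with a linear decay per step.\ndef strain_series(notes_k, step=500, decay=0.001, sigma=2):\n    dens = {notes_k[i]: density_at(notes_k, i, sigma)\n            for i in range(len(notes_k))}\n    strain, out = 0.0, []\n    for t in range(min(notes_k), max(notes_k) + step + 1, step):\n        strain = max(0.0, strain + dens.get(t, 0.0) - decay)\n        out.append((t, round(strain, 3)))\n    return out\n\n# Reproduces the Strain (dec=0.001) column of the table above:\n# [(0, 0.002), (500, 0.006), (1000, 0.011), (1500, 0.015),\n#  (2000, 0.017), (2500, 0.016)]\nprint(strain_series([0, 500, 1000, 1500, 2000]))\n\\end{verbatim}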
\n\n\\subsection{Allocating Notes to Fingers}\n\nWe cannot assume that $column_1$ where $keys = 4$ is the same as $column_1$ where $keys = 7$. This is due to how \\textbf{different fingers interact with the same column}.\n\nTo counter this, we need to find out the \\textbf{most common set-up} for players.\n\n\\begin{center}\n\t\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \n\t\\hline\n\tkey & LP & LR & LM & LI & S  & RI & RM & RR & RP\\\\\n\t\\hline\n\t4   & {} & {} & 1  & 2  & {} & 3  & 4  & {} & {}\\\\\n\t5   & {} & {} & 1  & 2  & 3  & 4  & 5  & {} & {}\\\\\n\t6   & {} & 1  & 2  & 3  & {} & 4  & 5  & 6  & {}\\\\\n\t7   & {} & 1  & 2  & 3  & 4  & 5  & 6  & 7  & {}\\\\\n\t8   & 1  & 2  & 3  & 4  & {} & 5  & 6  & 7  & 8\\\\\n\t8S  & 1  & 2  & 3  & 4  & 5  & 6  & 7  & 8  & {}\\\\\n\t9   & 1  & 2  & 3  & 4  & 5  & 6  & 7  & 8  & 9\\\\\n\t\\hline\n\\end{tabular}\n\\end{center}\n\n\\textbf{L} represents left, \\textbf{R} represents right, and the second letter names the finger (LP: Left Pinky, LR: Left Ring, and so on...)\n\nThis allocation will give us a consistent result for all beatmaps, so $key=1$ will always mean \\textbf{Left Pinky}, $key=2$ \\textbf{Left Ring}, and so on...\n\n\\subsubsection{8 Key Scratch Bias}\nThe issue with 8 Key maps is that maps will usually have a \\textit{scratch column}; this creates a setup that \\textbf{excludes a pinky but includes the thumb}. Due to this, we will shift the configuration to left pinky and right thumb, excluding the right pinky. See \\textbf{8S} above.\n\n\\subsection{Assigning Hyperparameters}\n\nIn this section alone, we have used quite a few hyperparameters. To recap:\n\n\\paragraph{(Reading) $\\theta$} is the hyperparameter for reading length.\n\\paragraph{(Density) $\\sigma$} defines the range, front and back, of the density search. Higher values of $\\sigma$ may prove to be of little use, as the farther $\\Delta_{nx}^k$ terms become too small.\n\n\\paragraph{(Density) $weight_{NN}$} defines the weight for normal notes\n\\paragraph{(Density) $weight_{LNh}$} defines the weight for long note heads\n\\paragraph{(Density) $weight_{LNt}$} defines the weight for long note tails\n\\paragraph{(Density) $weight_{SSh}$} defines the weight for \\textbf{strain shift} for hands\n\\paragraph{(Density) $weight_{SSb}$} defines the weight for \\textbf{strain shift} for body \n\nNow we need to assign reasonable values to these, and run the model to find our:\n$$ difficulty := \\lbrace(offset_1, difficulty_1), (offset_2, difficulty_2), ..., (offset_n, difficulty_n)\\rbrace $$\n\nWhereby we estimate difficulty at offset \\textbf{n} from the map itself:\n\n$$ difficulty_n \\approx model \\left( reading_n, \\sum_{k=1}^{keys} \\left(strain_k \\right), ... \\right)$$\n\nWe can further expand this to:\n\n$$ difficulty_n \\approx model \\left( count(NN), count(LNh), count(LNt), \\sum_{k=1}^{keys} \\left(strain_k \\right)  \\right)  $$\n\nwhere $strain_k$ is computed as described above.\n\n\\subsubsection{Values of Hyperparameters}\n\nThere will definitely be issues when it comes to assuming hyperparameters, because of how they affect the accuracy of the model. However, as long as we \\textbf{reasonably} assign them, most of the errors will be offset by the neural network learning. We just need to focus on whether a certain value should be \\textbf{larger/smaller} or \\textbf{negative/positive}.\n\n\\paragraph{(Reading) $\\theta$} 1000 (ms)\n\n\\paragraph{(Density) $\\sigma$} 2\n\n\\paragraph{(Density) $weight_{NN}$} 1\n\\paragraph{(Density) $weight_{LNh}$} 0.75 (we will follow Reading)\n\\paragraph{(Density) $weight_{LNt}$} 0.75 (we will follow Reading)\n\\paragraph{(Density) $weight_{SSh}$} 0.25\n\\paragraph{(Density) $weight_{SSb}$} 0.1\n\n\\end{document}\n", "meta": {"hexsha": "bd3e47cad45de2764cb525f40c408628760b4933", "size": 10777, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research_tex/process/difficulty_from_beatmaps.tex", "max_stars_repo_name": "Eve-ning/ppshift_ml", "max_stars_repo_head_hexsha": "d693aaef9a224ad335a04867965f797fe887f1fc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-15T10:56:09.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-15T10:56:09.000Z", "max_issues_repo_path": "research_tex/process/difficulty_from_beatmaps.tex", "max_issues_repo_name": "Eve-ning/ppshift_ml", "max_issues_repo_head_hexsha": "d693aaef9a224ad335a04867965f797fe887f1fc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "research_tex/process/difficulty_from_beatmaps.tex", "max_forks_repo_name": "Eve-ning/ppshift_ml", "max_forks_repo_head_hexsha": "d693aaef9a224ad335a04867965f797fe887f1fc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.9771863118, "max_line_length": 357, "alphanum_fraction": 0.6590888002, "num_tokens": 3758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772286044094, "lm_q2_score": 0.6513548646660543, "lm_q1q2_score": 0.5693344549453049}}
{"text": "\\documentclass{jhwhw}\n\n\\usepackage{listings}\n\n\\author{}\n\\title{PHYS 7364 -- Homework \\#2}\n\\date{28 January 2022}\n\n\\begin{document}\n\n\\problem{Gauge freedom in Bloch wave functions}\n\nWave functions generally have a freedom of what is chosen in the phase.\nThis is also true of Bloch wave functions $\\psi_{n\\mathbf k}(\\mathbf r)$ which we explore here.\nDefine a new set of Bloch wave functions by\n\\begin{equation}\n  \\label{eq:1}\n  \\ket{\\tilde \\psi_{n\\mathbf k}} = e^{-i\\beta_{n}(\\mathbf k)}\\ket{\\psi_{n\\mathbf k}},\n\\end{equation}\nwhere $\\beta_{n}$ is a real function of $\\mathbf k$. The transformation from the old set to the new set is what we will call a ``gauge transformation.''\n\\begin{enumerate}\n  \\item A ``periodic gauge'' is one for which $\\ket{\\psi_{n,\\mathbf k+ \\mathbf G}} = \\ket{\\psi_{n\\mathbf k}}$ (i.e., equal with the same phase). What has to be true about $\\beta_{n}(\\mathbf k)$ if it is to preserve the periodicity of the gauge?\n  \\item Show that the $\\ket{u_{n\\mathbf k}}$ transform in the same way as the $\\ket{\\psi_{n\\mathbf k}}$ under a gauge transform. (recall $\\psi_{n\\mathbf k}(\\mathbf r) = e^{i \\mathbf k\\cdot \\mathbf r} u_{n\\mathbf k}(\\mathbf r)$ where $u_{n\\mathbf k}(r)$ is periodic with respect to a lattice vector $\\mathbf R$.)\n  \\item We will soon be introducing the concept of ``Berry connection'' $\\mathbf A_{n}(\\mathbf k) = i \\braket{u_{n\\mathbf k} | \\nabla_{\\mathbf k} u_{n\\mathbf k}}$. Derive an expression that describes how this transforms under a gauge transformation. That is, express $\\tilde{\\mathbf A}_{n}(\\mathbf k) = i \\braket{\\tilde u_{n\\mathbf k} | \\nabla_{\\mathbf k} \\tilde u_{n\\mathbf k}}$ in terms of $\\mathbf A_{n}(\\mathbf k)$ plus a correction.\n  \\item Show that $\\nabla_{\\mathbf k} \\times \\mathbf A_{n}$ is gauge-invariant.\n\\end{enumerate}\n\n\\problem{Bloch's theorem, finite lattices, and flux}\n\nConsider a finite lattice given by\n\\begin{equation}\n  \\label{eq:5}\n  H = -t \\sum_{n=0}^{L-1} (\\ket{(n+1)a}\\bra{na} + \\ket{na}\\bra{(n+1)a}),\n\\end{equation}\n\\begin{enumerate}\n  \\item For the boundary conditions $\\ket{L} = \\ket{0}$ (periodic boundary conditions), solve this Hamiltonian for eigenvectors and eigenvalues.\n  \\item For the boundary conditions $\\ket{L} = e^{i\\vartheta} \\ket{0}$ (``twisted'' boundary conditions), solve the Hamiltonian for eigenvectors and eigenvalues.\n  \\item Returning to $\\ket{L} =\\ket{0}$ (periodic), we can thread magnetic flux through the system, this is often included by \\emph{Peierls substitution}, where we transform the hopping terms in the Hamiltonian $\\ket{m}\\bra{n} \\rightarrow e^{i \\int_{na}^{m a} A(x) dx} \\ket{m}\\bra{n}$ in the Hamiltonian, for a vector potential $A(x)$ (defined as the angular component). If this is a ring then $\\oint A(x) dx = \\Phi$ for an enclosed flux $\\Phi$. Assuming that $A(x)$ is constant in $x$, find the eigenvalues and eigenvectors of this ring. How is $\\Phi$ related to $\\vartheta$ from the previous question?\n  \\item Now, assume we have the simple, infinite tight-binding model\n        \\begin{equation}\n          \\label{eq:7}\n            H = -t \\sum_{n=-\\infty}^{\\infty} (\\ket{(n+1)a}\\bra{na} + \\ket{na}\\bra{(n+1)a}),\n        \\end{equation}\n        we can solve this, inefficiently, by picking a ``unit cell'' of size $L$. 
In particular, first show that the Hamiltonian can be rewritten as\n        \\begin{multline}\n          \\label{eq:8}\n          H = -t \\sum_{m=-\\infty}^{\\infty}\\bigg[ \\sum_{n=0}^{L-2}( \\ket{m L + n + 1}\\bra{m L + n} + \\ket{mL + n}\\bra{mL + n + 1} ) \\\\ + \\ket{(m+1)L}\\bra{mL + L-1} + \\ket{mL + L-1} \\bra{(m+1)L}\\bigg].\n        \\end{multline}\n        If we take as our translation operator $\\hat T_{L} = e^{-i \\hat k L}$, and its eigenvalues $0 \\leq kL < 2\\pi$, this defines a Brillouin zone of size $2\\pi/L$. What is the $L$-dimensional $k$-space Hamiltonian we get by restricting to an eigenspace of $\\hat T_{L}$? Relate it back to what we found in (b) and (c). How is $k$ related to $\\Phi$ and $\\vartheta$? (Hint: $\\ket{mL + n} = \\hat T_{L}^{m} \\ket{n}$)\n\\end{enumerate}\n\nWhile we explored this in a simple 1D model, the ideas here easily generalize to higher dimensions and to more complicated unit cells.\n\n\n\\problem{Constructing a ``protected'' edge state}\n\nIn many models, we can pick a point in the Brillouin zone where we ``expand'' in small wave vectors and get effective ``continuum'' models.\nOne such case is the SSH model when $|t_{2}-t_{1}|\\ll t_{1,2}$.\n\nFor this problem, we can begin with the $k$-space Hamiltonian\n\\begin{equation}\n  \\label{eq:2}\n  H(\\mathbf k) =\n  \\begin{pmatrix}\n    0 & t_{1} + t_{2} e^{2 i  k a} \\\\\nt_{1} + t_{2} e^{-2 i k a}  & 0\n  \\end{pmatrix}.\n\\end{equation}\n\\begin{enumerate}\n  \\item In the limit of $|t_{2}-t_{1}|\\ll t_{1,2}$, what $\\mathbf k$ vector has the smallest gap between energies? If this happens at $\\mathbf k_{0}$, expand the Hamiltonian $H(\\mathbf k_{0} + \\mathbf q)$  for small $|\\mathbf q|$, keeping only terms linear in $\\mathbf q$.\n  \\item The above problem should take the form\n        \\begin{equation}\n          \\label{eq:3}\n          h = v_{\\mathrm F} q \\sigma_{y} + m \\sigma_{x}.\n        \\end{equation}\n        Find the energy eigenvalues and eigenvectors of this Hamiltonian and how they depend on $q$.\n  \\item To make this a continuum model let $q = -i\\partial_{x}$. Further, let $m$ depend on space such that\n        \\begin{equation}\n          \\label{eq:4}\n          h = -i v_{\\mathrm F} \\sigma_{y} \\partial_{x} + m_{0} \\tanh(x) \\sigma_{x}.\n        \\end{equation}\n        Find a solution to this equation with zero eigenvalue when $m_{0}>0$ and when $m_{0}<0$;  plot its magnitude in space. Finally, relate $m_{0}$ back to $t_{2}$ and $t_{1}$; what is the difference between $x<0$ and $x>0$ in the original chain? This continuum solution arises from the ``domain wall'' in the mass $m$ and is due to Jackiw and Rebbi. It is a topologically protected edge state.\n\\end{enumerate}\n\n\n\\problem{An intro to {\\sc PythTB.py}}\n\nWe will use this software package to explore tight-binding models, you can find it at \\url{https://www.physics.rutgers.edu/pythtb/}.\nTo install it on your local machine, you will need python, which can be found at \\url{https://conda.io}. Once python is installed (either 2.7 or 3.x), you can install PythTB at the command prompt (terminal) with\n\n\\vspace{10pt}\n{\\tt pip install pythtb --upgrade}\n\\vspace{10pt}\n\nor if you do not have root permissions\n\n\\vspace{10pt}\n{\\tt pip install pythtb --upgrade --user}\n\\vspace{10pt}\n\nYou will also need some standard python packages like {\\sc numpy} (numerics) and {\\sc matplotlib} (plotting).\nThis problem is to help get you acquainted with this Python package. 
Download \\texttt{benzene.py} from the course website (or see code on the next page).\nIf you need any help, please do not hesitate to contact me.\n\nIn this example, we will look at the tight-binding model of benzene, which is made with $p_{z}$ orbitals of carbon atoms. The Hamiltonian is very much like the one we considered in Problem 2\n\\begin{equation}\n  \\label{eq:9}\n  H = E_{p} \\sum_{j} \\ket{j}\\bra{j} + t \\left(\\sum_{j=0}^{5} \\ket{j+1} \\bra{j} + \\mathrm{h.c.}\\right), \\quad \\ket{6} = \\ket{0}.\n\\end{equation}\nHowever, these atoms exist in a two-dimensional plane at points $\\mathbf r_{j} = r (\\cos(j \\pi/3), \\sin(j \\pi/3)) $. We will need this for considering external fields.\n\n\\begin{enumerate}\n  \\item Identify the parameters in the code, and run the code. It should output the energies and eigenvectors (real part of them). Do these match what you found in Problem 2a?\n  \\item Now, we mess with the code! Modify it to have a uniform electric field in the $\\hat x$ or $\\hat y$ direction (your choice) by raising or lowering site energies in a way that is linear in their spatial coordinates. Make a plot of the six eigenvalues versus electric field strength. Does the behavior make sense? Explain why.\n  \\item Instead of an external field, now modify the model by making the hopping strength alternate between bonds, such that hoppings between 0-1, 2-3, and 4-5 have strength $t+\\delta$ and 1-2, 3-4, and 5-0 have strength $t-\\delta$ (a sketch of setting such alternating hoppings appears below). Make a plot of the six eigenvalues versus $\\delta$. Does anything special happen at $\\delta=t$? Explain why.\n\\end{enumerate}
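\n\nFor part (c), the following is a minimal sketch (not a solution) of how the alternating hoppings might be set with PythTB's \\texttt{set\\_hop}, assuming the \\texttt{my\\_model} and \\texttt{t} defined in \\texttt{benzene.py} on the next page; \\texttt{delta} is a hypothetical parameter, and \\texttt{mode=\"reset\"} overwrites the hoppings already set.\n\n\\begin{lstlisting}[language=Python]\n# Sketch: alternate hopping strengths t+delta / t-delta around the ring\ndelta = 0.05                      # hypothetical dimerization strength\nfor j in range(6):\n    tj = t + delta if j % 2 == 0 else t - delta\n    my_model.set_hop(tj, j, (j + 1) % 6, mode=\"reset\")\n\\end{lstlisting}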
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/tex/phys7364_hw2.tex", "max_issues_repo_name": "jhwilson/jhwilson-website", "max_issues_repo_head_hexsha": "76e93a25b4652ff2a6d7599f9bc8ff879ffd4ca6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/tex/phys7364_hw2.tex", "max_forks_repo_name": "jhwilson/jhwilson-website", "max_forks_repo_head_hexsha": "76e93a25b4652ff2a6d7599f9bc8ff879ffd4ca6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.8035714286, "max_line_length": 603, "alphanum_fraction": 0.6801565235, "num_tokens": 3000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7690802423634961, "lm_q1q2_score": 0.5692534777482128}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 4.1 Differentiate a polynomial -- a limited method}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   def deriv (poly):\n\n       \\delta^{a}::Weight(label=\\epsilon).\n\n       bah := @(poly).\n\n       substitute     (bah,$x^{a} -> x^{a} + \\delta^{a}$)\n       distribute     (bah)\n\n       foo := @(bah) - @(poly).\n\n       keep_weight    (foo, $\\epsilon = 1$)\n       sort_product   (foo)\n       rename_dummies (foo)\n       factor_out     (foo, $\\delta^{a?}$)\n       substitute     (foo, $\\delta^{a} -> 1$)\n\n       return foo\n\n   # ---------------------------------------------------------------\n\n   poly := c^{a}\n         + c^{a}{}_{b} x^b\n         + c^{a}{}_{b c} x^b x^c.    # cdb (ex-0401.100,poly)\n\n   dpoly = deriv (poly)              # cdb (ex-0401.101,dpoly)\n\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\Dmath*{  p = \\Cdb*{ex-0401.100} }\n   \\Dmath*{ dp = \\Cdb*{ex-0401.101} }\n\\end{dgroup*}\n\n\\clearpage\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 4.1 Differentiate a polynomial -- a better method}\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   def deriv (poly):\n\n       \\partial{#}::PartialDerivative.\n       \\delta^{a}_{b}::KroneckerDelta.\n\n       x^{a}::Depends(\\partial{#}).\n\n       bah := \\partial_{b}{@(poly)}.\n\n       distribute     (bah)\n       unwrap         (bah)  # drop all terms that don't explicitly depend on a derivative operator\n       product_rule   (bah)\n       distribute     (bah)\n       substitute     (bah,$\\partial_{b}{x^{a}}->\\delta^{a}_{b}$)\n       eliminate_kronecker (bah)\n\n       sort_product   (bah)\n       rename_dummies (bah)\n\n       return bah\n\n   poly := c^{a}\n         + c^{a}{}_{b} x^b\n         + c^{a}{}_{b c} x^b x^c.    
# cdb (ex-0401.200,poly)\n\n   dpoly = deriv (poly)              # cdb (ex-0401.201,dpoly)\n\n\\end{cadabra}\n\n\\begin{dgroup*}\n   \\Dmath*{  p = \\Cdb*{ex-0401.200} }\n   \\Dmath*{ dp = \\Cdb*{ex-0401.201} }\n\\end{dgroup*}\n\n\\end{document}\n", "meta": {"hexsha": "1de2a1f62e8e9700b4c9289ef7d9d3afb38711e6", "size": 2289, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0401.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0401.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0401.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 25.1538461538, "max_line_length": 99, "alphanum_fraction": 0.4783748362, "num_tokens": 721, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.7853085808877581, "lm_q1q2_score": 0.5692468572472239}}
{"text": "\n\n\\chapter{$\\lambda$-Calculus}\n\\thispagestyle{empty}\nIn this chapter we define the $\\lambda$-calculus as a formal language of computation, and on this we shall define the concept of typing, ubiquitous in programming. The main references  for the general approach of this chapter are \\cite{selinger2008lecture} and \\cite{hindley2008lambda}. For the historical motivation for the development of the theory  we refer to \\cite{cardone2006history}.  \\\\\n\nIn the 19th century, mathematicians developed the concept of function. In this time it was question whether two functions were the same. The most widespread view is the \\emph{extensional} approach. This approach consider two functions to be the same whenever for the same input they have the same outputs. A function $f$ as a pairing of an X domain to an Y codomain. In other words, a function $f$ is seen as a pairing $f\\subset X \\times Y$.\\\\\n\nFor example, let $p$ be a prime big enough, and let $f$ and $g$ be two endomorphisms of  $\\mathbb{Z}/p\\mathbb{Z}$. One could argue that the endomorphism of  $f(x)\\to x^2$ is different to $g(x) \\to \\log_a a^{x+2}$. Despite having the same output for the same input, they are clearly different in \\emph{complexity}, with complexity understood as the cost of computing a function. It even involve the resolution of a discrete logarithm, that is a highly non-trivial task. \\\\\n\nThis view, in which not only is the result of the function important but how is that result obtained, is called the \\emph{intensional} approach. In early 1930 mathematicians such as Church\\cite{church1932set}, G\u00f6del\\cite{adams2011early} or Turing\\cite{turing1938computable} started formalizing what is to be computable. Nowadays, after the wake of computation, the intensional approach have come to be as relevant as the extenisional approach.\n\n% (to the point on which correctness is often partially dropped to easy time complexity, for example \\cite{hofmeister2002probabilistic}). \\\\\n\n% In $\\lambda$-calculus we consider function as formulas, and thus, we consider an intensional approach. This intensional approach was also backed by the constructivist, to whom we will later on relate with the study of the Curry-Howard isomorphism  .\\\\\n\n\n\\section{Untyped $\\lambda$-Calculus } \\label{lambda-calc-untyped-section}\nWe will start defining the most simple version of $\\lambda$-calculus: untyped $\\lambda$-calculus. Let us define some concepts:\n\n\n\\begin{enumerate}\n\\item An alphabet $A$ is an arbitrary, maybe infinite, non-empty set.\n\\item A symbol $a$ is an element of the alphabet.\n\\item A word is an ordered finite sequence of symbols.\n\\item The collection of all possible finite words over an alphabet $A$ is denoted by $A^*$.\n\\item A language $L$ over $A$  is a subset of $A^*.$\n\\end{enumerate}\n\nThere are a lot of languages. For example, Spanish is a language, with a well-known alphabet $L$, with a proper subset of words over $L^*$. In the same fashion, we define $\\lambda$-calculus as a formal language, defining its syntax, that is, what words are valid.\n\n\n\\subsubsection{Syntax of untyped $\\lambda$-calculus}\nWe start with the basic building blocks, which collectively form what is\ncalled the alphabet:\n\n\\begin{itemize}\n\\item We use $x, y, z,...$ to denote variables. 
As more variables become necessary, sub-indexes will be used, giving countably many variables.\n\\item We consider an abstract connector $\\lambda$.\n\\item In addition, we consider the auxiliary characters $\".\", \"(\"$ and $\")\"$.\n\\end{itemize}\nNow, we are ready to formally define the untyped $\\lambda$-calculus:\n\n\\begin{definition}[Syntax of Untyped $\\lambda$-calculus (section 2.1 \\cite{selinger2008lecture})]\\label{def:untyped-lambda-calc}\n  A $\\lambda$-calculus term (sometimes called a formula) is defined inductively:\n  \\begin{itemize}\n  \\item Every variable $x,y,z...$ is a valid formula.\n  \\item If $A$ and $B$ are formulas, then $AB$ is a valid formula.\n  \\item If $A$ is a formula and $x$ is a variable, then $\\lambda x.A$ is a valid formula.\n  \\end{itemize}\n  The set of all variables is denoted by $\\mathcal{V}$ and the set of all $\\lambda$-formulas is denoted by $\\Lambda$.\n\\end{definition}\n\\begin{remark}\n  $\\Lambda$ is countable.\n\\end{remark}\n\nWhen dealing with formal languages we make use of similar inductive statements more often than not, so we find it useful to introduce the Backus-Naur Form notation \\cite{knuth1964backus}, BNF for short. A BNF specification is a set of derivation rules, written as\n\n$$\\operatorname{word1}, \\operatorname{word2} ... ::= \\operatorname{expression1} | \\operatorname{expression2} |...,$$\n\nwhere each $\\operatorname{word}$ is a generic valid word from the language, and each expression consists of derived valid formulas. Expressions are separated by the vertical bar: $|$. For example, we can revisit definition \\ref{def:untyped-lambda-calc} as follows:\n\n\\begin{definition}\n  The formulas of  $\\lambda$-calculus are built via the BNF:\n  $$A,B ::= x\\ |\\ (AB)\\ |\\ (\\lambda x.A) ,$$\n  where $x$ denotes any variable in $\\mathcal{V}$.\n\\end{definition}\n\\begin{remark}\n  Note that, in this case, there is no need to specify the alphabet aside from the set $\\mathcal{V}$.\n\\end{remark}\n\nFrom this point on we know how $\\lambda$-terms are constructed. This is better understood with some examples using natural numbers. For now, we ask the reader to trust in their existence and to use a naive intuition of what a natural number is when reading these examples. The notion of natural numbers is formalized in definition \\ref{def:untyped-natural} and revisited in section \\ref{section:natural-revisited}.\\\\\n\n\\subsection{Reductions in untyped $\\lambda$-calculus}\nLet us now explain the idea behind the formalism. Consider the expression $\\lambda x.(x+1)$. This expression represents the idea of the function $f(x)=x+1$ that takes a variable $x$ and returns $x+1$. The formula $\\lambda x.M$ is called the \\emph{abstraction} of $M$.\\\\\n\nFrom the notion of abstraction naturally arises the second one: \\emph{application}. Consider terms $M  =\\lambda x. x+1$ and $N = 3$. Then $MN = (\\lambda x. x+1)(3)$ represents the application of $M$ to $N$. In untyped $\\lambda$-calculus, $N$ can be any term. Thus, for example, the term $\\lambda f.\\lambda g. fg$ simply represents the application of one term to another.   \n\n\\begin{example} The term $$(\\lambda g.(\\lambda f.(\\lambda x.\\ g(f x)))) (\\lambda y.y+1)(\\lambda y.y+2),$$\n  can be understood as $(g\\circ f) (x)$ where $g(y) = y+1$ and $f(y)=y+2$.\n\\end{example}\n\nIn an expression $M = (\\lambda x.N)$ we say that the variable $x$ is \\emph{bound} in $M$.
\n\n\nThe idea of $\\alpha$-equivalence, $=_{\\alpha}$, is that expressions such as $\\lambda x.x$ and $\\lambda y.y$ are essentially the same. That is, we consider terms up to \\emph{renaming} of variables. To formalise this, we need the concepts of free and bound variables, and of renaming.\n\n\\begin{definition}\n  We have a \\emph{free variable} function $FV:\\Lambda \\to \\mathcal{P}(\\mathcal{V})$ defined recursively as:\n  \\begin{itemize}\n  \\item $FV(x) = \\{x\\}$ for every $x\\in \\mathcal{V}$.\n  \\item $FV(MN) = FV(M)\\cup FV(N)$ for every $M,N\\in \\Lambda$.\n  \\item $FV(\\lambda x.M) = FV(M)\\backslash \\{x\\}$ for every $M\\in \\Lambda, x\\in \\mathcal{V}$.\n  \\end{itemize}\n  Given a term $M$, if $x\\in FV(M)$ we say that $x$ is a free variable in $M$ or that $M$ has a free variable $x$.\n\\end{definition}\n\n\\begin{definition}\n  We say that a term is \\emph{closed} if it has no free variables.\n\\end{definition}\nWe can now define the process of \\emph{substitution}, with renaming as a particular case. This process is the one behind the intuitive idea of evaluation: for example, when we consider $(\\lambda y. y^2+y)(4) = 4^2+4 = 20$, we are replacing the variable $y$ by the term $4$. \n\\begin{definition}\\label{def:substition}\n  The substitution of $N$ for free occurrences of $x$ in $M$, denoted by $M[N/x]$ is defined recursively on the structure of $\\lambda$-terms by:\n  \\begin{align*}\n    x[N/x]& \\equiv N,\\\\\n    y[N/x]& \\equiv y, &  \\text{if } x\\ne y&,\\\\\n    (MP)[N/x]& \\equiv (M[N/x])(P[N/x]),\\\\\n    (\\lambda x.M)[N/x] & \\equiv \\lambda x.M,\\\\\n    (\\lambda y.M)[N/x] & \\equiv \\lambda y.(M[N/x]), & \\text{if } x\\ne y \\text{ and } y \\not \\in FV(N)&,\\\\\n    (\\lambda y.M)[N/x] & \\equiv \\lambda y'.((M[y'/y])[N/x]), & \\text{if } x\\ne y \\text{ and } y\\in FV(N),\\\\\n          & & \\text{ with } y' \\not \\in FV(N) \\cup \\{x\\}&.\n  \\end{align*}\n  When $N = y$ is a variable, we say that $[N/x] = [y/x]$ is a renaming. \n\\end{definition}\n\\begin{remark}\n  In the equality $  (\\lambda x.M)[N/x]  \\equiv \\lambda x.M$, note that, although both occurrences involve the variable $x$, no substitution is carried out, as $x$ is bound. This is called \\emph{capture-avoiding substitution}.\n\\end{remark}\n\n\\begin{definition}\n  We define the $\\alpha$-equivalence $=_\\alpha$ as the smallest congruence relation on $\\Lambda$ such that:\n  $$\\lambda x. M =_\\alpha \\lambda y. M[y/x], \\qquad \\forall y \\in \\mathcal{V} \\backslash FV(M).$$\n\\end{definition}\nIn other words, it is the congruence that identifies all renamings of a bound variable. This type of property-oriented definition is commonplace in $\\lambda$-calculus, as it allows for synthetic and goal-oriented definitions. 
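\n\nBefore moving on, note that the definitions of $FV$ and of capture-avoiding substitution translate almost line by line into code. The following Python sketch is ours (the constructors \\texttt{Var}, \\texttt{App}, \\texttt{Lam} and the helper \\texttt{fresh} are assumptions of the sketch, not part of the calculus):\n\n\\begin{verbatim}\n# Lambda-terms, FV, and capture-avoiding substitution M[N/x].\nfrom dataclasses import dataclass\n\n@dataclass\nclass Var: name: str\n@dataclass\nclass App: fun: object; arg: object\n@dataclass\nclass Lam: var: str; body: object\n\ndef fv(t):\n    # FV(x), FV(MN), FV(lambda x. M), as in the definition above\n    if isinstance(t, Var): return {t.name}\n    if isinstance(t, App): return fv(t.fun) | fv(t.arg)\n    return fv(t.body) - {t.var}\n\ndef fresh(avoid):\n    # pick a renamed variable y' outside `avoid`\n    i = 0\n    while \"y\" + str(i) in avoid: i += 1\n    return \"y\" + str(i)\n\ndef subst(m, n, x):\n    # M[N/x], case by case as in the recursive definition\n    if isinstance(m, Var):\n        return n if m.name == x else m\n    if isinstance(m, App):\n        return App(subst(m.fun, n, x), subst(m.arg, n, x))\n    if m.var == x:                  # x is bound here: no substitution\n        return m\n    if m.var not in fv(n):\n        return Lam(m.var, subst(m.body, n, x))\n    y = fresh(fv(n) | fv(m.body) | {x})   # rename to avoid capture\n    return Lam(y, subst(subst(m.body, Var(y), m.var), n, x))\n\\end{verbatim}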
\n\n\nUsing this same tool we are going to define the $\\beta$-reduction. This is not going to be an equivalence relation, but rather a directed one. It abstracts the notion of  $(\\lambda x. 2+x)(2)$  being computed as $4$. Formally:\n\n\\begin{definition}[Section 2.5 \\cite{selinger2008lecture}]\n  We define the \\emph{single-step} $\\beta$\\emph{-reduction} as the smallest relationship $\\to_\\beta$ such that:%\\footnote{Should i explain the notation ${A \\over B}$}\n  \\begin{align*}\n    &(\\beta) \\qquad\\ \\ \\  \\ {\\ \\over(\\lambda x.M)N \\to_\\beta M[N/x]},\\\\\n    &(\\operatorname{cong}_1)\\qquad\\qquad{ M \\to_\\beta M' \\over MN \\to_\\beta M'N },\\\\\n    &(\\operatorname{cong}_2)\\qquad\\qquad{ N \\to_\\beta N' \\over MN \\to_\\beta MN' },\\\\\n    &(\\zeta) \\qquad\\qquad \\ \\  \\ {M\\to_\\beta M' \\over(\\lambda x.M) \\to_\\beta \\lambda x.M'}.\\\\\n  \\end{align*}\n\\end{definition}\n\n\\begin{remark}\n  In this definition we can see that rule $(\\beta)$ is the main objective, while the others propagate it through the structure of terms.\n\\end{remark}\n\\begin{definition}\n  We define the \\emph{multiple-step} $\\beta$\\emph{-reduction} $\\twoheadrightarrow_\\beta$ as the reflexive, transitive closure of $\\to_\\beta$.\n\\end{definition}\n\\begin{definition}\n  We define the $\\beta$-equivalence $=_\\beta$ as the smallest equivalence relation containing $\\twoheadrightarrow_\\beta$.\n\\end{definition}\n\nUp to this point the focus of the system was to define an intensional language for computation. $\\beta$-reduction encapsulates the concept of computation as evaluation.\n\nShould we want to consider a way of having an extensional approach, for example, to generate normal forms for the terms, we would need more machinery. The $\\eta$-equivalence provides us with the tools to consider $\\lambda$-calculus in an extensional way. \\\\\n\n\\begin{definition}\n  We define the single-step $\\eta$-reduction $\\to_\\eta$ as the smallest relationship such that: \n  \\begin{align*}\n    &(\\eta) \\qquad\\qquad \\ \\ \\ \\  \\ {\\ \\over(\\lambda x.Mx) \\to_\\eta M} \\qquad \\forall x \\not  \\in FV(M),\\\\\n    &(\\operatorname{cong}_1)\\qquad\\qquad{ M \\to_\\eta M' \\over MN \\to_\\eta M'N },\\\\\n    &(\\operatorname{cong}_2)\\qquad\\qquad{ N \\to_\\eta N' \\over MN \\to_\\eta MN' },\\\\\n    &(\\zeta) \\qquad\\qquad \\ \\  \\ {M\\to_\\eta M' \\over(\\lambda x.M) \\to_\\eta \\lambda x.M'}.\\\\\n  \\end{align*}\n  Similarly, we define the multiple-step $\\eta$-reduction $\\twoheadrightarrow_\\eta$ as the transitive reflexive closure of $\\to_\\eta$, and the $\\eta$-equivalence $=_\\eta$ as the smallest equivalence relation containing $\\twoheadrightarrow_\\eta$.\n\\end{definition}\n\\begin{definition}\n  We define the single-step $\\beta\\eta$-reduction $\\to_{\\beta\\eta}$ as the union of $\\to_\\beta$ and $\\to_\\eta$.  
We define the multiple-step $\\beta\\eta$-reduction $\\twoheadrightarrow_{\\beta\\eta}$ as the transitive reflexive closure of $\\to_{\\beta\\eta}$, and $=_{\\beta\\eta}$ as the smallest equivalence relation containing $\\twoheadrightarrow_{\\beta\\eta}$.\n\\end{definition}\n\n% \\begin{proposition}\n%   In the presence of all other axioms defined in $\\beta$ and $\\eta$ equivalences, $\\eta$ rule is equivalent to:\\footnote{TODO: Check whether this is an equivalence or a reduction.}\n%   $$(ext) \\qquad\\ \\ \\  \\ {\\ (Mx=M'x)\\text{ where } x\\not\\in FV(M)\\cup FV(M') \\over M \\to_\\eta M' \\land M' \\to_\\eta M} $$\n% \\end{proposition}\n% \\begin{proof}\n%   By extenisionality we have that $M' = $.\n% \\end{proof}\n\n\n% \\begin{remark}\n%   This proof that the $\\eta$-reduction maintain extensionality in some sense. Also, note that we need to consider $\\eta$-equivalence and not only $\\eta$-reduction.\n% \\end{remark}\n\n\\begin{table}[h]\n  \\begin{center}\n    \\begin{tabular}{|l|l|l|}\n      \\hline\n      Name & Main rule & Equivalence \\\\\n      \\hline\n      $\\alpha$ & $ (\\lambda x. M) =_\\alpha \\lambda y. M[y/x]$& Yes\\\\\n      $\\beta$ & $(\\lambda x.M)N \\to_\\beta M[N/x]$& No\\\\\n      $\\eta$ & ${\\displaystyle (\\lambda x.Mx) \\to_\\eta M }$& No\\\\\n      \\hline\n    \\end{tabular}\n  \\end{center}\n  \\caption{\\label{tab:reductions}Considered reductions in $\\lambda$-calculus.}\n\\end{table}\n\n\n\\subsection{Church-Rosser Theorem}\n\nIn this subsection we present an important result for $\\lambda$-calculus: the \\emph{Church-Rosser} theorem. The idea behind this theorem is to prove that every reduction (either $\\beta$, $\\eta$ or a mix) provides a unified sense of reduction. First, we present some definitions.\n\n\\begin{definition}\n  Consider a relation $\\to$ and let $\\twoheadrightarrow$ be its reflexive transitive closure. We can define three properties:\n\n  \\begin{enumerate}\n  \\item The Church-Rosser Property: $$M\\twoheadrightarrow N, M\\twoheadrightarrow P \\implies \\exists Z : N\\twoheadrightarrow Z, P\\twoheadrightarrow Z.$$\n  \\item The Quasidiamond Property: $$M\\to N, M\\to P \\implies \\exists Z : N\\twoheadrightarrow Z, P\\twoheadrightarrow Z.$$\n  \\item The Diamond Property: $$M\\to N, M\\to P \\implies \\exists Z : N\\to Z,P\\to Z.$$\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{remark} \n  Note that, while $3)\\implies 1)$, it is not necessary that $2)\\implies 1)$.\n\\end{remark}\n\n\nWith this notation, we introduce the Church-Rosser Theorem, proved following the method of Martin-L\u00f6f.\n\n\\begin{theorem}[Church-Rosser]The reductions in untyped $\\lambda$-calculus satisfy the following properties:\\label{theo:church-rosser}\n  \\begin{enumerate}\n  \\item $\\twoheadrightarrow_{\\beta\\eta}$ satisfies the Church-Rosser property.\n  \\item $\\twoheadrightarrow_{\\beta}$ satisfies the Church-Rosser property.\n  \\end{enumerate}\n\\end{theorem}\n\n\n\n\nWe prove the theorem only for the $ \\twoheadrightarrow_{\\beta\\eta}$ case. We refer to  \\cite[Theorem 1.32]{hindley2008lambda} for a full proof for $\\twoheadrightarrow_\\beta$. 
The first step in the proof is an alternative characterization of $\\twoheadrightarrow_{\\beta\\eta}$ as the closure of a new reduction $\\triangleright$.\n\\begin{definition}[parallel one-step reduction]\n  We define the parallel one-step reduction  $\\triangleright$ as the smallest relationship such that,\\\\\n  \\begin{align*}\n    (1)&& {\\displaystyle \\over x \\triangleright x}\\qquad\\qquad\\qquad\\qquad\\qquad\\\\\n    (2)&& { M \\triangleright M'\\qquad N\\triangleright N'  \\over MN \\triangleright M'N' }\\qquad\\qquad\\qquad\\ \\  \\ \\\\\n    (3)&&{M \\triangleright M' \\over(\\lambda x.M) \\triangleright \\lambda x.M'}\\qquad\\qquad\\qquad\\qquad\\\\\n    (4)&& {M \\triangleright M'\\qquad N \\triangleright N' \\over (\\lambda x.M)N \\triangleright M'[N'/x]}\\qquad\\qquad\\qquad\\\\\n    (5)&& {\\ M\\triangleright M'\\over(\\lambda x.Mx) \\triangleright M'} \\qquad \\forall x \\not  \\in FV(M)\\qquad\n  \\end{align*}\n\n\n\n  where $x$ is any variable and $M,N,M',N'$ any term.\n\\end{definition}\n\n\\begin{lemma}[Characterization of $\\triangleright$]\\label{lemma:cr1}\n  Let $M,M'$ be terms, then:\n  \\begin{enumerate}\n  \\item $M\\to_{\\beta\\eta} M'\\implies M \\triangleright M'$\n  \\item $M\\triangleright M'\\implies M \\twoheadrightarrow_{\\beta\\eta} M'$\n  \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n  \\begin{enumerate}\n  \\item  We apply induction on the structure of $\\to_{\\beta\\eta}$: the statement $M \\to_{\\beta\\eta} M'$ must be derived following a series of rules, and we use induction on the last rule used, concluding that every rule used to derive $M \\to_{\\beta\\eta} M'$ implies $M \\triangleright M'$. Note first that $\\triangleright$ is reflexive, by an easy induction on terms. At every point, $x$ denotes some variable, and $M,N,M',N',P,Q,Q'$ denote terms.\n\n    \\begin{itemize}\n    \\item[($\\beta$)] In this case $M=(\\lambda x.Q)N$ and $M' = Q[N/x]$. By reflexivity and (4), $M \\triangleright M'$.\n    \\item[($\\eta$)] In this case $M=(\\lambda x.Qx)$ and $M' = Q$. By reflexivity and (5), $M \\triangleright M'$.\n    \\item[($\\operatorname{cong}_1$)] In this case $M=QN$ and $M' = Q'N$ for some $Q\\to_{\\beta\\eta} Q'$. Using induction we get $Q\\triangleright Q'$, and using reflexivity and (2), $M \\triangleright M'$. \n    \\item[($\\operatorname{cong}_2$)]  Analogous.\n    \\item[($\\zeta$)] In this case $M=\\lambda x.Q$ and $M' = \\lambda x.Q'$ for some $Q\\to_{\\beta\\eta} Q'$. Using induction we get $Q\\triangleright Q'$ and using (3) $M \\triangleright M'$.  \n    \\end{itemize}\n\n  \\item Similarly, every rule by which $M\\triangleright M'$ can be derived implies $M \\twoheadrightarrow_{\\beta\\eta} M'$:\n    \\begin{itemize}\n    \\item[(1)] By reflexivity of $ \\twoheadrightarrow_{\\beta\\eta}$.\n    \\item[(2)] By $\\operatorname{cong}_1,\\operatorname{cong}_2$ in either definition of $\\to_\\beta$ and $\\to_\\eta$, together with the induction hypothesis.\n    \\item[(3)] By $\\zeta$ in either definition of $\\to_\\beta$ and $\\to_\\eta$.\n    \\item[(4)] Then we have $(\\lambda x.M)N \\triangleright M'[N'/x]$ with $M \\triangleright M'\\qquad N \\triangleright N'$. By induction $M \\twoheadrightarrow_{\\beta\\eta} M'$ and $N \\twoheadrightarrow_{\\beta\\eta} N'$. By transitive closure:\n      $$(\\lambda x.M)N \\twoheadrightarrow_{\\beta\\eta}(\\lambda x.M')N \\twoheadrightarrow_{\\beta\\eta} (\\lambda x.M')N' \\twoheadrightarrow_{\\beta\\eta} (M')[N'/x].$$\n    \\item[(5)] Then we have $(\\lambda x.Mx) \\triangleright M'$, for some $M\\triangleright M'$ and for some $x \\not  \\in FV(M)$. 
By induction $M \\twoheadrightarrow_{\\beta\\eta} M'$; finish using $(\\eta)$, since $(\\lambda x.Mx) \\to_\\eta M \\twoheadrightarrow_{\\beta\\eta} M'$.\n    \\end{itemize}\n\n  \\end{enumerate}\n\n\\end{proof}\n\n\\begin{lemma}[Substitution Lemma]If $M \\triangleright M'$ and $U \\triangleright U'$ , then $M [U/y] \\triangleright M' [U' /y]$.\\label{lemma:overconstrains}\n\\end{lemma}\n\\begin{proof}\n  We have defined in definition \\ref{def:substition} a capture-avoiding substitution, in the sense that bound variables are never substituted. Similarly to the previous lemma, we proceed by induction on the last rule applied to $M\\triangleright M'$.\n\n  \\begin{itemize}\n  \\item[(1)] In this case $M=M'=x$. If $x\\ne y$ the substitution does not alter $M$ so we have finished. If $x = y$, the claim reduces to $U\\triangleright U'$, which we know by hypothesis.\n  \\item[(2)] In this case $M=NP, M'=N'P'$ for some terms $N,N',P,P'$ such that $N\\triangleright N'$, $P\\triangleright P'$. Proceed by induction on these implications and apply (2).\n  \\item[(3)] In this case $M=\\lambda x.N$ and $M'=\\lambda x.N'$, for some $N\\triangleright N'$. Apply induction on $N$ and end by (3).\n  \\item[(4)] Then we have $(\\lambda x.M)N \\triangleright M'[N'/x]$ with $M \\triangleright M'$ and $N \\triangleright N'$. By induction on both relationships, and applying (4).\n  \\item[(5)] Then we have $(\\lambda x.Mx) \\triangleright M'$, for some $M\\triangleright M'$ and for some $x \\not  \\in FV(M)$. Finish using induction on $M\\triangleright M'$ and by (5).\n  \\end{itemize}\n\\end{proof}\n\nWhile the proof is rather straightforward, one can see that it does require a long case analysis, hence its length. \n\n\\begin{definition}[Maximal parallel one-step reduction]\n  Given a term $M$, the \\emph{maximal parallel one-step reduction} $M^*$ is defined inductively:\n  \\begin{itemize}\n  \\item $(x)^*  = x$, \n  \\item $(PN)^* = P^*N^*$, provided $PN$ is not a $\\beta$-redex,\n  \\item $((\\lambda x.P)N)^* = P^*[N^*/x]$,\n  \\item $(\\lambda x.N)^* = \\lambda x.N^*$, provided $\\lambda x.N$ is not an $\\eta$-redex, \n  \\item $(\\lambda x.Nx)^* = N^*$, if $x \\not\\in FV(N)$, \n  \\end{itemize}\n  where $x$ is any variable, and $N,P$ is any term.\n\\end{definition}\n\n\n\\begin{lemma}[Maximal Parallel one-step]\\label{lemma:cr2}\n  If $M\\triangleright M'$ then $M'\\triangleright M^*$.\n\\end{lemma}\n\\begin{proof}\n  We proceed by induction on $M$ again:\n  \\begin{itemize}\n\n  \\item[(1)] In this case $M=M'=x$. Then $M^*=x$.\n  \\item[(2)] In this case $M=NP, M'=N'P'$ for some terms $N,N',P,P'$ such that $N\\triangleright N'$, $P\\triangleright P'$.\n    \\begin{itemize}\n    \\item If $NP$ is not a $\\beta$-redex, then $M^* = N^*P^*$; by the induction hypothesis $N'\\triangleright N^*$ and $P'\\triangleright P^*$, and we conclude by (2).\n    \\item If $NP$ is a $\\beta$-redex, then $N = \\lambda x. Q$ and $M^* = Q^*[P^*/x]$. The step $N \\triangleright N'$ could be derived using congruence of $\\lambda$-abstraction (3) or extensionality (5). In the first case, $N' = \\lambda x.Q'$ with $Q\\triangleright Q'$; use induction on $Q$ and $P$ and apply (4). In the second case $Q=Rx$; use induction on $R$ and apply the substitution lemma.\n    \\end{itemize}\n\n  \\item[(3)] In this case $M=\\lambda x.N$ and $M'=\\lambda x.N'$, for some $N\\triangleright N'$. If $M$ is not an $\\eta$-redex, then use the induction hypothesis and finish with (3). 
Otherwise we have that $N = Px$ with $x \\not\\in FV(P)$, and distinguish two cases on the last rules applied to $N$:\n    \\begin{itemize}\n    \\item[(2)\\to (3)]\\footnote{Notation $(2)\\to (3)$ means that the last two steps in the derivation have been $(2)$ and $(3)$.} Then apply the induction hypothesis to $P$ and end using (5).\n    \\item[(4)\\to (3)] We have that $N = Px$ with $P = \\lambda y.Q$, and $N' = Q'[x/y]$ for some $Q \\triangleright Q'$. Apply the induction hypothesis, using that $M' = \\lambda x.N' = \\lambda x.Q'[x/y] = \\lambda y. Q'$.\n    \\end{itemize}\n  \\item[(4)] Then we have $(\\lambda x.P)N \\triangleright P'[N'/x]$ with $P \\triangleright P'$ and $N \\triangleright N'$. The result follows by the substitution lemma \\ref{lemma:overconstrains}.\n  \\item[(5)] Then we have $(\\lambda x.Px) \\triangleright P'$, for some $P\\triangleright P'$ and for some $x \\not  \\in FV(P)$. Finish using induction on $P\\triangleright P'$ and rule (5).\n  \\end{itemize}\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{theo:church-rosser}]   If a relation $\\to$ satisfies the diamond property, its reflexive and transitive closure $\\twoheadrightarrow$ satisfies the Church-Rosser property. As a consequence of lemma \\ref{lemma:cr2}, $\\triangleright$ satisfies the diamond property: if $M\\triangleright N$ and $M\\triangleright P$, then $N\\triangleright M^*$ and $P\\triangleright M^*$.  By lemma \\ref{lemma:cr1} we know that the reflexive and transitive closure of $\\triangleright$ is $\\twoheadrightarrow_{\\beta\\eta}$.\n\\end{proof}\n\nNow let us take a moment to comment on the usefulness of each reduction. We can say that $\\beta$ reduction is the soul of computation  while $\\eta$ is useful to clean up the results.  This can be used to define \\emph{normal} forms for untyped $\\lambda$-terms. More information on normalization can be found in \\cite[Section 7]{selinger2008lecture} and \\cite{baader1999term}. \\\\\n\n\n\\subsection{Fixed points and Programming}\nThis section is an interlude: we pause the analysis of the theoretical properties of the $\\lambda$-calculus and instead ponder how it actually captures the notion of programming. In particular, we explain how to define the basic rudiments for programming in untyped $\\lambda$-calculus: booleans, natural numbers and a recursion operator, thus showing its computational potential. Let us start with the booleans.\n\n\\begin{definition}[Boolean in untyped $\\lambda$-calculus] \\label{def:untyped-boolean} \n  We define the two values $T$ and $F$, usually referred to as true and false, as:\n  $$\\operatorname{True} = \\lambda xy.x,\\qquad    \\operatorname{False} = \\lambda xy.y.$$\n\n  We also define the operations:\n  \\begin{align*}\n    \\operatorname{Not} &= \\lambda a.a\\operatorname{False}\\operatorname{True},\\\\\n    \\operatorname{And} &= \\lambda ab.ab\\operatorname{False},\\\\\n    \\operatorname{Or} &= \\lambda ab.a\\operatorname{True}b.\n  \\end{align*}\n\\end{definition}\n\nWe can check that this construction leaves us with the basis of a boolean logic, after checking the truth tables of the different operations provided. This definition of truth values is really convenient, as it allows us to easily implement control flow in programs.\n\n\\begin{definition}\n  We define the $\\operatorname{If} = \\lambda x.x$.  \n\\end{definition}\n\nWe can see that in this case, $\\operatorname{If} \\operatorname{True} M N = M$ and $\\operatorname{If} \\operatorname{False} M N = N$. This construction is very natural and widely used in computer programs.
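\n\nAs a quick sanity check, the Church booleans can be tried directly with Python lambdas; the upper-case names and the \\texttt{to\\_bool} decoder are hypothetical conveniences of this sketch, not standard notation.\n\n\\begin{verbatim}\n# Church booleans: a boolean simply selects one of its two arguments.\nTRUE  = lambda x: lambda y: x             # True picks the first\nFALSE = lambda x: lambda y: y             # False picks the second\nNOT   = lambda a: a(FALSE)(TRUE)\nAND   = lambda a: lambda b: a(b)(FALSE)\nOR    = lambda a: lambda b: a(TRUE)(b)\nIF    = lambda x: x                       # If is the identity\n\nto_bool = lambda b: b(True)(False)        # decode for printing\nprint(to_bool(NOT(TRUE)))                 # False\nprint(to_bool(AND(TRUE)(NOT(FALSE))))     # True\nprint(IF(OR(FALSE)(TRUE))(\"then\")(\"else\"))  # then\n\\end{verbatim}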
\\\\\n\nNext, we define the natural numbers. \n\n\\begin{definition}[Natural numbers in untyped $\\lambda$-calculus] \\label{def:untyped-natural} \n  Let $f,x$ be fixed $\\lambda$-terms, and write $f^nx = f(f(f(...(f(x))...)))$. Then, for each $n \\in \\mathbb N$, we define the \\emph{nth Church numeral} as $\\overline n=\\lambda f x.f^nx$.\n\\end{definition}\n\\begin{remark}\n  We abuse the notation with $\\lambda f x.f^nx$, which means $\\lambda f.\\lambda x.f^nx$\n\\end{remark}\n\nFinally, we consider the notion of recursive function. This is done with an elegant artefact, based on the idea of fixed points. Let's begin:\n\n\n\n\\begin{theorem}[Fixed points]\n  Every term $F$ in untyped $\\lambda$-calculus has a fixed point.\n\\end{theorem}\n\\begin{proof}\n  Let $A=\\lambda xy.y(xxy)$ and $\\Theta =AA$. Then, for every $\\lambda$-term $F$ we have that $N=\\Theta F =_\\beta FN$, thus being a fixed point. In fact:\n  \\begin{align*}\n    N = \\Theta F = AAF = (\\lambda xy.y(xxy))AF \\twoheadrightarrow_\\beta F(AAF) = F(\\Theta F) = FN.\n  \\end{align*}\n\\end{proof}\nThe proof of this theorem leads to a new definition:\n\n\\begin{definition}\n  The term $\\Theta$ used in the previous theorem is called \\emph{Turing fixed point combinator}.\n\\end{definition}\n\nThis fixed-point theorem is really useful, as we can now define \\emph{recursion}. The idea is to define a function as a term taking itself as a parameter, and then take a fixed point. Let us first present an example.\n\n\\begin{definition}\n  We define the terms:\n  \\begin{align*}\n    \\operatorname{add} &= \\lambda nm f x.nf (mf x),\\\\\n    \\operatorname{mult} &= \\lambda nm f.n(mf),\\\\\n    \\operatorname{iszero} &= \\lambda nxy.n(\\lambda z.y)x,\\\\\n    \\operatorname{predecessor} &=\\lambda n.\\lambda f.\\lambda x. n (\\lambda g.\\lambda h. h (g f)) (\\lambda u.x) (\\lambda u.u). \n  \\end{align*}\n  \n\\end{definition}\nSuppose that we want to define the factorial. We want that:\n$$\\operatorname{fact} n = \\operatorname{If}(\\operatorname{iszero} n)(\\overline 1)(\\operatorname{mult}(n)(\\operatorname{fact} (\\operatorname{pred}(n)))),$$\nin order to do that, we look for a fixed point of:\n$$\\lambda f. \\lambda n.\\operatorname{If}(\\operatorname{iszero}n)(\\overline 1)(\\operatorname{mult}(n)(f (\\operatorname{pred}(n)))),$$\nand thus $\\operatorname{fact}=\\Theta (\\lambda f.\\lambda n. \\operatorname{If}(\\operatorname{iszero}n)(\\overline 1)(\\operatorname{mult}(n)(f (\\operatorname{pred}(n)))))$. In general:\n\n\\begin{definition}\n  Given a stop condition $g$, a stop value $s$ and a recursive step $h$, we define the recursive term $F$ that computes $(g,s,h)$ as\n  $$F = \\Theta\\, (\\lambda f. \\lambda n.\\operatorname{If}(g n)(s)(h\\, n\\, (f (\\operatorname{pred}(n))))).$$\n\\end{definition}
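\n\nThese constructions can also be tested concretely. One caveat for the sketch below: Python evaluates eagerly, so Turing's $\\Theta$ would loop forever; we therefore use the strict fixed-point combinator $Z$ (a deliberate substitution, not the operator from the text) and guard the two branches of $\\operatorname{If}$ with thunks.\n\n\\begin{verbatim}\n# Church numerals and the recursive factorial, with a strict fixed point.\ndef church(n):                  # nth numeral: f applied n times to x\n    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))\n\nto_int = lambda n: n(lambda k: k + 1)(0)\n\nMULT   = lambda n: lambda m: lambda f: n(m(f))\nISZERO = lambda n: lambda x: lambda y: n(lambda z: y)(x)\nPRED   = (lambda n: lambda f: lambda x:\n          n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))\n\n# Strict fixed-point combinator: Z(F) behaves as F(lambda v: Z(F)(v))\nZ = lambda f: (lambda x: f(lambda v: x(x)(v)))(\n              lambda x: f(lambda v: x(x)(v)))\n\nFACT = Z(lambda fact: lambda n:\n         ISZERO(n)(lambda _: church(1))\n                  (lambda _: MULT(n)(fact(PRED(n))))(None))\n\nprint(to_int(FACT(church(4))))  # 24\n\\end{verbatim}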
\n\n\\begin{remark}\nAnother implementation of the fixed point operator is \\emph{Curry's paradoxical fixed point operator} $\\operatorname{Y}$, defined as:\n  $$\\operatorname{Y}=\\lambda f.(\\lambda x.f(x x)) (\\lambda x.f(x x)).$$\nThis operator is used in Curry's paradox \\cite{sep-curry-paradox}.\\\\\n\\end{remark}\n\n\n\n% \\subsection{Rosser-Kleene and Curry's Paradoxes}\n\n% In this subsection we explain the deficits in untyped $\\lambda$-calculus, that was the original Church construction, that eventually leads into the definition of \\emph{typed} $\\lambda$-\\emph{calculus}.\\\\\n\n\n% We can start by having a quick reasoning, as shown in \\cite{sep-lambda-calculus}.\n\n\n% {TODO This Summer: expand this subsection properly.}\n\n% An aproximation of this argument can be done if we consider domains of functions to be sets. In particular untyped $\\lambda$-calculus allow functions to be applied to themselves. This is going to be troublesome, considering domains as sets, as this will be a clear infringement of \\emph{ZF Axiom Theory}\\cite{kunen2014set}. Namely, the infinite descending sequence\n% $$f\\ni \\{f,f(f)\\}\\ni f \\ni \\{f,f(f)\\}...$$\n% in contradiction with the \\emph{regularity axiom}. An approach that seek solving this problems is presented in the next section, starring the introduction of types.  \\\\\n\n% This last idea, despite being clear and loud to the intuition, is not a proof of inconsistency as there is no need for domains to be sets. Nonetheless, there was another problem that is to be deal that was noticed before. $\\lamdba$-calculus was proposed to be a deductive system. The  \\emph{Kleene-Rosser paradox}, first exhibited in \\cite{kleene1935inconsistency} proved that simply typed $\\lambda$-calculus is inconsistent. This paradox was perfected in 1958 by Curry \\cite{curry1958combinatory} with the so called \\emph{Curry's paradox}. \\\\\n\n% To solve these problems, a very natural idea is included: the use of types in our computation system. This idea is very natural nowadays, as nearly everyone learns to program priors to a deep academic career. However, we must not forget that the origin was not to use types because we liked them initially, but because they really allow us to avoid logical problems.  \n\n\n\n\\section{Typed $\\lambda$-Calculus}\n\\emph{Typed $\\lambda$-calculus} is a refinement of the untyped $\\lambda$-calculus, in which the concept of typing is introduced. We are going to present three approximations to simple typing: \\emph{minimal simply typed }$\\lambda$\\emph{-calculus}, \\emph{basic simply typed }$\\lambda$\\emph{-calculus} and \\emph{extended simply typed }$\\lambda$\\emph{-calculus}. \n\n\\subsection{Definition}\n\nWe synthesize the definitions of typing presented in \\cite{lambek1988introduction} and \\cite{selinger2008lecture}. We structure this definition in three steps: types, terms and equations.\n\n\n\\begin{definition}\n  The types of basic simply typed $\\lambda$-calculus are built via the BNF:\n  $$A,B ::= \\iota\\ |\\ A\\to B\\ |\\ A \\times B  \\ |\\ 1,$$\n  where $\\iota$ denotes a basic type. \n\\end{definition}\n\\begin{remark}\n  We usually refer to the basic simply typed $\\lambda$-calculus just as simply typed $\\lambda$-calculus.\n\\end{remark}\n\\begin{remark}\nLet us have a word on the intuitive notion of $A\\to B$. Just as with sets, where we can consider the set of functions between two sets, here we can consider the type of functions between two types. 
% Lets start laying some bricks useful for the intuition. $\\iota$ type is foundational type. We consider also the type $1$. Others author, such as \\cite{lambek1988introduction} prefer to include a type for every natural. Nonetheless, we consider that we can repeat the construction realized in untyped lambda-calculus to consider this typing. \\\\\n\n\n\n\n\n\n\\begin{definition}\n  The \\emph{raw terms} of basic simply typed $\\lambda$-calculus are built via the BNF:\n  $$A,B ::= x\\ |\\ AB\\ |\\ \\lambda x^t.A \\ |\\ \\langle A,B \\rangle\\ |\\ \\pi_1A\\ |\\ \\pi_2A\\ |\\ *,$$\n  where $x$ denotes any variable and $t$ denotes any type. \n\\end{definition}\n\n\\begin{remark}\n  We avoid the meticulous redefinition of free and bound variables and use the notions that naturally translate from untyped $\\lambda$-calculus, as explained in Section \\ref{lambda-calc-untyped-section}.\n\\end{remark}\n\n\nThe main difference with the untyped calculus is that every bound variable has a type. That is, for the expression $(\\lambda x^t.M)N$ to be coherent we require the term $N$ to be of type $t$. This leads us to formulate a condition for $N$ to be of type $t$.\\\\\n\nNonetheless, among the raw terms there are other meaningless ones, such as the projection of a non-pair, $\\pi_1(\\lambda x^t.x)$. We solve all these problems at once with the \\emph{typing rules}. These rules restrict which raw terms are meaningful. We start by defining the \\emph{typing context}.\n\n\\begin{definition}\n  We state by $M:t$  that a term $M$ is of type $t$. A typing context is a set of assumptions $\\Gamma = \\{x_1:A_1,\\dots,x_n:A_n\\}$, in which we assume each variable $x_i$ to be of type $A_i$.\n\\end{definition}\n\n\\begin{remark}\n  From this definition, we can state the typing rules, from which we will be able to provide typing. We use the notation $\\Gamma \\vdash M:A$ to state that the typing context $\\Gamma$ derives that term $M$ is of type $A$. \n\\end{remark}\n\n\n\\begin{definition}[Typing Rules]\\label{def:typing-rules}\n  We define the following typing rules, for each typing context $\\Gamma$.\n  \\begin{itemize}\n  \\item Every variable is of the type marked by the context, namely:\n    $$  (var)\\qquad  {\\displaystyle \\over \\Gamma \\vdash x_i:A_i},\\qquad  i=  1,..,n.$$\n    In addition,  $*$ is of type 1.\n    $$  (*)\\qquad  {\\displaystyle \\over \\Gamma\\vdash *:1}.$$\n\n  \\item From a term of type $A\\to B$ and a term of type $A$ we can derive a term of type $B$:\n    $$(app)\\qquad  {\\displaystyle \\Gamma \\vdash M:A\\to B\\qquad \\Gamma \\vdash N:A      \\over \\Gamma \\vdash MN:B}.$$\n    Conversely, \n    $$(abs)\\qquad  {\\displaystyle \\Gamma, x: A\\vdash M: B  \\over \\Gamma \\vdash \\lambda x^A.M:A\\to B}.$$\n  \\item The projections take a pair to each of its components:\n    $$(\\pi_i) \\qquad {\\displaystyle \\Gamma\\vdash M: A_1\\times A_2 \\over \\Gamma \\vdash \\pi_i M: A_i},\\qquad i=1,2,$$\n    and conversely:\n    $$(pair) \\qquad {\\displaystyle \\Gamma\\vdash M: A\\qquad \\Gamma\\vdash N: B \\over \\Gamma \\vdash \\langle M, N\\rangle:  A\\times B}.$$\n  \\end{itemize}\n\\end{definition}\n\n\n\n\\begin{remark}\n  Not every term can be typed, namely the two previous examples of bad behaviour: $\\pi_1(\\lambda x^t. x)$ and $(\\lambda x^t.M)N$ where $N$ is not of type $t$. Thus, in simply typed $\\lambda$-calculus, we only work with typed terms.\n\\end{remark}\n\n
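To make the rules concrete, here is a minimal type checker sketched in Python (an illustration only: the tuple encoding of terms and all names are our own choices, not part of the calculus):\n\\begin{verbatim}\nfrom dataclasses import dataclass\n\n# Types, following the BNF  A, B ::= iota | A -> B | A x B | 1.\n@dataclass(frozen=True)\nclass Base:  name: str                     # a basic type iota\n@dataclass(frozen=True)\nclass Arrow: dom: object; cod: object      # function type A -> B\n@dataclass(frozen=True)\nclass Prod:  left: object; right: object   # product type A x B\n@dataclass(frozen=True)\nclass Unit:  pass                          # the type 1\n\ndef typeof(ctx, term):\n    # ctx maps variable names to types; terms are tagged tuples.\n    tag = term[0]\n    if tag == 'var':                       # (var): the context gives the type\n        return ctx[term[1]]\n    if tag == 'star':                      # (*): * is of type 1\n        return Unit()\n    if tag == 'app':                       # (app): M : A -> B applied to N : A\n        f, a = typeof(ctx, term[1]), typeof(ctx, term[2])\n        if isinstance(f, Arrow) and f.dom == a:\n            return f.cod\n        raise TypeError('ill-typed application')\n    if tag == 'abs':                       # (abs): lambda x^A. M\n        _, x, ann, body = term\n        return Arrow(ann, typeof({**ctx, x: ann}, body))\n    if tag == 'pair':                      # (pair): <M, N>\n        return Prod(typeof(ctx, term[1]), typeof(ctx, term[2]))\n    if tag == 'proj':                      # (pi_i): pi_i M with M : A1 x A2\n        i, t = term[1], typeof(ctx, term[2])\n        if isinstance(t, Prod):\n            return t.left if i == 1 else t.right\n        raise TypeError('projection of a non-pair')\n    raise TypeError('unknown term')\n\n# lambda x^iota. x  is typed  iota -> iota; pi_1(lambda x^iota. x) is rejected.\nassert typeof({}, ('abs', 'x', Base('i'), ('var', 'x'))) == Arrow(Base('i'), Base('i'))\n\\end{verbatim}\n\n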
\\begin{definition}\n  A term of \\emph{basic simply typed $\\lambda$-calculus} is a raw term that, together with a typing context for its variables, can be typed in the sense of Definition \\ref{def:typing-rules}.\n\\end{definition}\n\\begin{remark}\n  Typing rules are often called \\emph{term-forming operations}, since the only terms that we consider here are those that can be typed. In this sense,  we can see these typing rules as rules to construct terms inductively.\n\\end{remark}\n\nBy assigning a type we mean assigning types to the free and bound variables. From here on, when talking about terms we will refer to simply typed $\\lambda$-terms. Once a context is fixed, we can define substitution just as we defined it in the untyped $\\lambda$-calculus.\n\n\n\\subsection{Reductions in simply typed $\\lambda$-calculus}\n\nTo talk about Church-Rosser properties, we have to talk about reduction. Reductions in the typed lambda calculus are basically the same, except that they must respect the additional structure: \n\n\\begin{definition}[Section 2.5 \\cite{selinger2008lecture}]\n  We define the \\emph{single-step} $\\beta$\\emph{-reduction} as the smallest relationship $\\to_\\beta$ such that:\n  \\begin{itemize}\n  \\item[]$(\\beta)$: $ (\\lambda x^t.M)N \\to_\\beta M[N/x]$.\\\\\n  \\item[]$(\\beta_{\\times,i})$: $\\pi_i\\langle M_1,M_2\\rangle \\to_\\beta M_i$ for $i=1,2$.\\\\\n  \\item[]$(\\operatorname{cong}_1)$: If $ M \\to_\\beta M'$ then $MN \\to_\\beta M'N$.\\\\\n  \\item[]$(\\operatorname{cong}_2)$: If $ N \\to_\\beta N'$ then $ MN \\to_\\beta MN'$.\\\\\n  \\item[]$(\\zeta)$: If $M\\to_\\beta M'$ then $(\\lambda x^t.M) \\to_\\beta \\lambda x^t.M'$.\\\\\n  \\end{itemize}\n  We define the \\emph{multiple-step} $\\beta$\\emph{-reduction} $\\twoheadrightarrow_\\beta$ as the reflexive, transitive closure of $\\to_\\beta$ and $\\beta$-equality $=_\\beta$ as the symmetric closure of $\\twoheadrightarrow_\\beta$.\n\\end{definition}\n\n
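As a small worked example combining $(\\beta)$ and $(\\beta_{\\times,1})$, we have\n$$(\\lambda x^{A\\times B}.\\pi_1 x)\\langle M, N\\rangle \\to_\\beta \\pi_1\\langle M,N\\rangle \\to_\\beta M,$$\nwhere the first step substitutes the pair for $x$ and the second projects its first component; hence $(\\lambda x^{A\\times B}.\\pi_1 x)\\langle M,N\\rangle \\twoheadrightarrow_\\beta M$.\n\n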
\\begin{definition}\\label{def:eta_x}\n  We define the single-step $\\eta$-reduction $\\to_\\eta$ as the smallest relationship such that: \n  \\begin{itemize}\n  \\item[]$(\\eta)$: $(\\lambda x^t.Mx) \\to_\\eta M$, for all $ x \\not  \\in FV(M)$.\\\\\n\\item[]$(\\eta_\\times)$: $\\langle\\pi_1 M, \\pi_2 M\\rangle\\to_\\eta M$.\\\\\n\\item[]$(\\eta_1)$: If $M:1$ then $M \\to_\\eta *$ . \\\\\n\\item[]$(\\operatorname{cong}_1)$: If $ M \\to_\\eta M'$ then $MN \\to_\\eta M'N$.\\\\\n\\item[]$(\\operatorname{cong}_2)$: If $ N \\to_\\eta N'$ then $ MN \\to_\\eta MN'$.\\\\\n\\item[]$(\\zeta)$: If $M\\to_\\eta M'$ then $(\\lambda x^t.M) \\to_\\eta \\lambda x^t.M'$.\\\\\n\\end{itemize}\n\nSimilarly, we define the multiple-step $\\eta$-reduction $\\twoheadrightarrow_\\eta$ as the reflexive, transitive closure of $\\to_\\eta$, and $\\eta$-equivalence as the symmetric closure of $\\twoheadrightarrow_\\eta$.\n\\end{definition}\n\n\\begin{remark} Note that we basically just add new rules so that $\\beta$ and $\\eta$ reductions match the new syntax provided by the product.\n\\end{remark} \nWe can now talk about Church-Rosser properties. Let us check that the property fails for $\\eta$-reduction. For example, if $x: A\\times 1$ then we can consider $M=\\langle \\pi_1x, \\pi_2x\\rangle$. Because of the rule $\\eta_1$, applied to $\\pi_2x$ (which has type $1$), we have that $M \\to_{\\eta}\\langle \\pi_1 x, *\\rangle$, but taking $\\eta_\\times$ into account we can check that $M \\to_{\\eta} x$. \\\\\n\nAlthough the Church-Rosser property is not satisfied for $\\eta$-reduction, it is satisfied for $\\beta$-reduction. This has an important significance. Recall that $\\beta$-reduction is an abstraction of the idea of computation, while $\\eta$-reduction is an abstraction of the idea of equivalence. So, in a simply typed system, we can perform computations all with the same end, because of the Church-Rosser property of $\\beta$-reduction. We cannot do the same with $\\eta$-reduction, thus having no universal normal forms.\n\n\\subsection{Minimal and Expanded Typing}\n\\subsubsection{Minimal Typing}\nThe already explained $\\lambda$-calculus can be reduced to a slimmer version. In fact, the only types truly necessary are the function types. We present it succinctly.\n\n\\begin{definition}\n  The types of minimal simply typed $\\lambda$-calculus are here built via the BNF:\n  $$A,B ::= \\iota |\\ A\\to B,$$\n  where $\\iota$ denotes a basic type. \n\\end{definition}\n\n\n\n\\begin{definition}\n  The raw terms of minimal simply typed $\\lambda$-calculus are built via the BNF:\n  $$A,B ::= x\\ |\\ AB\\ |\\ \\lambda x^t.A,$$\n  where $x$ denotes any variable and $t$ denotes any type. \n\\end{definition}\n\n\n\n\\begin{definition}[Typing Rules]\\label{def:typing-rules-minimal}\n  We define the following typing rules, for each typing context $\\Gamma$.\n  \\begin{itemize}\n  \\item Every variable is of the type marked by the context, namely:\n    $$  (var)\\qquad  {\\displaystyle \\over \\Gamma \\vdash x_i:A_i},\\qquad  i=  1,..,n.$$\n\n  \\item From a term of type $A\\to B$ and a term of type $A$ we can derive a term of type $B$:\n    $$(app)\\qquad  {\\displaystyle \\Gamma \\vdash M:A\\to B\\qquad \\Gamma \\vdash N:A      \\over \\Gamma \\vdash MN:B}.$$\n    Conversely, we can deduce the abstraction rule:\n    $$(abs)\\qquad  {\\displaystyle \\Gamma, x: A\\vdash M: B  \\over \\Gamma \\vdash \\lambda x^A.M:A\\to B}.$$\n  \\end{itemize}\n\\end{definition}\n\n\\begin{definition}\n  The terms of minimal simply typed $\\lambda$-calculus are the raw terms that can be typed under a certain typing context.\n\\end{definition}\n\\begin{remark}\n  We have simply trimmed the basic definition down to its fundamental part.\n\\end{remark}\n\\subsubsection{Expanded Typing}\nIn a different direction than minimal typing, expanded typing seeks to enlarge the definition of typing by considering the sum type. The idea is to have two types $A$ and $B$ and be able to consider a term such as $\\lambda x^{A+B}. x+1$; that is, a function can accept arguments of either of two types. \n\n\\begin{definition}\n  The types of expanded simply typed $\\lambda$-calculus are built via the BNF:\n  $$A,B ::= \\iota |\\ A\\to B\\ |\\ A \\times B \\ |\\ A + B  \\ |\\ 1\\ |\\ 0,$$\n  where $\\iota$ denotes a basic type. \n\\end{definition}\n\nWe proceed to define the raw typed $\\lambda$-terms.\n\n\\begin{definition}\n  The raw terms of expanded simply typed $\\lambda$-calculus are built via the BNF:\n  \\begin{align*}\n    A,B, C ::= x\\ &|\\ AB\\ |\\ \\lambda x^t.A \\ |\\ \\langle A,B \\rangle\\ |\\ \\pi_1A\\ |\\ \\pi_2A\\ |\\ * \\\\\n                  &|\\ \\iin_1 A\\ |\\ \\iin_2 A \\ |\\ \\ccase A ; x^{t_1}.B\\oo x^{t_2}.C \\ |\\ \\square^t, \n  \\end{align*}\n  where $x$ denotes any variable and $t$ denotes any type. \n\\end{definition}\n\nThe term $\\ccase A ; x^{t_1}.B\\oo x^{t_2}.C$ abstracts the idea that a function on a sum term behaves differently on each of the included types. In the typing rules we will see that both branches of such a case analysis must return the same type (the $case$ rule).\\\\\n\n
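To fix intuitions, the following Python sketch mimics how $\\iin_i$ and $\\ccase$ evaluate, reading a sum term as a tagged value (the tags and helper names are our own):\n\\begin{verbatim}\n# in_1 / in_2 tag a value with the side it comes from; case dispatches on the\n# tag. Both branches must return values of the same type (the case rule).\nin1 = lambda a: ('in1', a)\nin2 = lambda b: ('in2', b)\n\ndef case(m, f, g):\n    tag, value = m\n    return f(value) if tag == 'in1' else g(value)\n\nassert case(in1(3), lambda n: n + 1, lambda s: len(s)) == 4\nassert case(in2('abc'), lambda n: n + 1, lambda s: len(s)) == 3\n\\end{verbatim}\n\n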
For this typing, three new rules are included for type derivation, in addition to those previously defined in Definition \\ref{def:typing-rules}:\n\\begin{definition}[Typing Rules]\n  We define the following typing rules, for each typing context $\\Gamma$.\n  \\begin{itemize}\n  \\item Every type is part of a sum type including it:\n    $$  (\\iin_i)\\qquad  {\\displaystyle\\Gamma \\vdash M:A_i \\over \\Gamma \\vdash \\iin_i M:A_1+A_2},\\qquad  i=  1,2.$$\n    Conversely, a case analysis on a term of sum type must produce the same type in both branches:\n    $$(\\ccase) \\qquad {\\displaystyle \\Gamma\\vdash M: A+B\\qquad \\Gamma, x:A\\vdash N: C\\qquad \\Gamma, x:B\\vdash P: C \\over \\Gamma \\vdash \\ccase M ; x^{A}.N\\oo x^{B}.P:C}.$$\n  \\item The $\\square^\\cdot$ operator extracts a term of any type from the empty type:\n    $$  (\\square)\\qquad  {\\displaystyle\\Gamma \\vdash M:0 \\over \\Gamma \\vdash \\square^A M:A}.$$\n  \\end{itemize}\n\\end{definition}\n\nWe have the same consideration as previously: only those raw terms that can be typed will be considered terms. The unification of typing discussed below also holds here.\n\\begin{definition}\n  The terms of expanded simply typed $\\lambda$-calculus are the raw terms that can be typed under a certain typing context.\n\\end{definition}\n\\subsubsection{Pure and impure typed $\\lambda$-calculus}\n\\label{section:pureandimpure}\nWe have seen that a typing context can vary which terms can be formed and which cannot. This consideration will be important later in our discussion, when we consider the category of $\\lambda$-calculus. In this sense, we can assume that, associated with each typing context, a different $\\lambda$-computation is generated. This distinction inspires the following definition.\n\n\n\\begin{definition}\n  A basic (resp. minimal, extended) $\\lambda$-calculus is a triple $(\\Xi, \\Lambda, \\Gamma)$ such that:\n  \\begin{itemize}\n  \\item $\\Xi$ is the collection of types of the basic (resp. minimal, extended) lambda-calculus,\n  \\item $\\Lambda$ is the collection of terms of the basic (resp. minimal, extended) lambda-calculus,\n  \\item $\\Gamma$ is a typing context,\n  \\end{itemize}\n  where all the rules of the basic (resp. minimal, extended) $\\lambda$-calculus are satisfied. When there\n  exist no more types and terms than those derived from the definition, and $\\Gamma=\\{\\}$, it is called a \\emph{pure basic (resp. minimal, extended) $\\lambda$-calculus}.   \n\\end{definition}\n\\begin{remark}\n  Often, in a $\\lambda$-calculus, it is usual to consider terms up to $\\alpha\\beta\\eta$-equivalence. \n\\end{remark}\n\nThe way to generate a new $\\lambda$-calculus is to add a new type $t$ (often based on other types) and to create a (possibly abstract) new term $M$ with typing context $\\Gamma = \\{M:t\\}$.\n\\subsection{Unification of typing}\n\nWe can consider two standards in typing: \\emph{Church Style} and \\emph{Curry Style}.\n\\begin{itemize}\n\\item Church style,  first shown in \\cite{church1940formulation}, is an explicit system, as we only consider expressions with well defined types, in the same way that a function is considered with regard to its domain and codomain. In summary, the typing of a function is the first part of its definition: we first declare $f:(0,1)\\to (0,1)$ and only then define $f(x)=1/x$. 
This domain is not the only possible one, but the one we choose it to be. \n\\item Curry style, on the other hand, retains some of the notions of untyped $\\lambda$-calculus and considers expressions as functions in themselves. To continue the real function analogy, it considers that the function $f(x)=x^2$ just exists, and one can then check that it is well defined on $\\R$. \\\\\n\\end{itemize}\nMore information about the typing styles can be found in Chapters 10 and 11 of \\cite{hindley2008lambda}. \\\\\n\nDespite these apparent differences, both typing styles can be \\emph{unified}. A unifier is a pair of substitutions that makes two typed templates equal. Such a unifier is computed by an algorithm based on type inference. This is instrumental to languages  as relevant as \\texttt{python}, which work with implicitly typed variables. This algorithm for unification is shown in great detail in Chapter 9  of \\cite{selinger2008lecture}. Due to this unification, we can consider only explicitly typed terms without any remorse.\n\n\\section{Curry-Howard bijection}\n\\subsection{Natural deduction}\nIn this section we succinctly introduce the notation of propositional intuitionistic logic, in order to work with it further in this chapter. As we will discuss deduction systems in more depth in the next chapter, we spare our readers yet another comprehensive introduction to the widely known intuitionistic propositional logic. Should it be required, more information on propositional logic can be found in, for example, \\cite{marek2009introduction} and \\cite{wadler2015propositions}. Natural deduction originally appeared in the works of \\cite{gentzen1935untersuchungen}. In this section, we present it in its most general form. \\\\\n\nThroughout this section we replicate the notation and process used when we defined typed $\\lambda$-calculus. This is done purposely, as our aim is to prove an equivalence between both systems. We start by considering the alphabet consisting of countably many variables $x,y,z,...$, as done previously in $\\lambda$-calculus, together with two new symbols: $\\top$ and $\\bot$. \n\n\\begin{definition}\n  The formulas of propositional intuitionistic logic are built via the BNF:\n  $$A,B ::= x |\\ A\\to B\\ |\\ A \\land B \\ |\\ A \\lor B \\ |\\ \\top \\ |\\ \\bot ,$$\n  where $x$ denotes any variable.\n\\end{definition}\n\n\n\n\nWith the syntax done, it is time to provide meaning. We want $\\top$ to be the truth value. That a formula $A$ is true is encoded in the formula $\\top \\to A$. Conversely, that a formula is \\emph{not-true} or \\emph{false} (sometimes denoted as $\\neg A$) is encoded as $ A \\to \\bot$. In addition, we want every formula that can be derived from true formulas to be true. For this we will define, analogously to the typing context from the previous section, what a \\emph{truth-assumption} is and  what the rules of deduction are.\n\n\\begin{definition}\n  A truth-assumption is a set of variables $\\Gamma = \\{x_1,...,x_n\\}$ that we assume to be true. A \\emph{judgement} $\\Gamma \\vdash B$ states that from the truth assumption $\\Gamma$, the formula $B$ can be deduced to be true. \n\\end{definition}\n\nSometimes we write a truth assumption $\\Gamma, A_1, ..., A_n$, denoting that additionally each $A_i$ is assumed to be true. 
Most times we will only need to know whether a formula $A$ is true, without any interest in the exact variable configuration that made this possible, so by abuse of notation we allow $\\Gamma = \\{x_1, ..., x_n, A_1, ..., A_m\\}$ to contain both variables and formulas.  \n\\begin{definition}[Deduction Rules]\n  We define the following deduction rules, for each truth assumption $\\Gamma=\\{x_1,...,x_n\\}$.\n  \\begin{itemize}\n  \\item Every variable assumed to be true is true:\n    $$  (ax)\\qquad  {\\displaystyle \\over \\Gamma \\vdash x_i},\\qquad \\forall i \\in 1,..,n.$$\n    In addition, $\\top$ is always true:\n    $$  (\\top)\\qquad  {\\displaystyle \\over \\Gamma \\vdash \\top}.$$\n  \\item From a true formula $A\\to B$ and a true formula $A$, $B$ can be derived:\n    $$(\\to_1)\\qquad  {\\displaystyle \\Gamma \\vdash A \\to B\\qquad \\Gamma \\vdash A      \\over \\Gamma \\vdash B}.$$\n    Conversely, if assuming $A$ we deduce $B$, then $A\\to B$ is true.\n    $$(\\to_2)\\qquad  {\\displaystyle \\Gamma, A \\vdash B      \\over \\Gamma \\vdash (A\\to B)}.$$\n    \n  \\item The conjunction being true implies each component being true:\n    $$(\\land_1) \\qquad {\\displaystyle \\Gamma\\vdash A_1\\land A_2 \\over \\Gamma \\vdash A_i},\\qquad i=1,2,$$\n    and conversely:\n    $$(\\land_2) \\qquad {\\displaystyle \\Gamma\\vdash A\\qquad \\Gamma\\vdash B \\over \\Gamma \\vdash  A \\land B}.$$\n  \\item An element being true implies the disjunction being true:\n    $$  (\\lor_1^i)\\qquad  {\\displaystyle\\Gamma \\vdash A_i \\over \\Gamma \\vdash A_1\\lor A_2},\\qquad  i=  1,2.$$\n    Conversely, everything that can be deduced from both disjuncts can be deduced from the disjunction:\n    $$(\\lor_2) \\qquad {\\displaystyle \\Gamma\\vdash A\\lor B\\qquad \\Gamma, A\\vdash C\\qquad \\Gamma, B\\vdash C \\over \\Gamma \\vdash C}.$$\n  \\item The \\emph{ex falso quodlibet} (i.e. everything is derivable from falsity)  holds:\n    $$  (\\bot)\\qquad  {\\displaystyle\\Gamma \\vdash \\bot \\over \\Gamma \\vdash A},$$\n    for every formula $A$.\n  \\end{itemize}\n\\end{definition}\n\n\n\nNote that we consider a logic without the law of excluded middle. This approach, although somewhat demod\u00e9 in modern mathematics, was highly popular at the beginning of the twentieth century, as it sought to solve the problems of mathematical foundations. \\\\\n\n\nTo work with this logic, we usually perform \\emph{truth derivations}. That is, to deduce a formula we use the previously introduced rules in order to see whether it can be derived from the truth assumption. More often than not, we consider the empty assumption, and therefore we are working with \\emph{tautologies}.\n\n\\begin{example}\n  We can consider the formula $F=((x\\to y) \\land (y\\to z)) \\to (x \\to z)$ and $\\Gamma$ the empty assumption. 
Then we can deduce:\n  \\begin{align*}\n    \\Gamma=\\{\\}& \\vdash    ((x\\to y) \\land (y\\to z)) \\to (x\\to z),\\\\\n    \\Gamma=\\{(x\\to y)\\land (y\\to z)\\} & \\vdash    (x \\to z),\\\\\n    \\Gamma=\\{(x\\to y), (y\\to z)\\} & \\vdash    (x \\to z),\\\\\n    \\Gamma=\\{(x\\to y), (y\\to z), x\\} & \\vdash    z,\n  \\end{align*}\n  where the first steps unfold the implications with $(\\to_2)$ and the conjunction with $(\\land_1)$, and the last judgement holds by applying $(\\to_1)$ twice: from $x$ and $x\\to y$ we derive $y$, and from $y$ and $y\\to z$ we derive $z$. Therefore the formula $F$ is true.\n\\end{example}\n\n\n\n\n\nAs a final note, the definitions of intuitionistic logic and $\\lambda$-calculus introduced in this work are made to match each other, but they are not the only possible ones. For example, other sources consider only the formulas generated by:\n$$A,B ::= x |\\ A\\to B\\ |\\ A \\land B \\ |\\ \\top .$$\n\nThis is called \\emph{positive intuitionistic calculus}. In the next chapter we are going to define the concept of \\emph{deduction system} and, from it, grow into the different approaches to logic.\n\n% We decided not to follow this line, which is equally valid, for consistency with \\cite{seely1984locally} However, it is useful to have this consideration resolved from the beginning, especially for engineering applications.\n\n\n\\subsection{Bijection}\nThe first approach to $\\lambda$-calculus seen as a deduction system was observed by Curry in 1934 \\cite{curry1934functionality}, early in the development of this area, and was completed by Howard in 1980 \\cite{howard1980formulae}. \\\\% {\\color{red} A\u00f1adir un poquito de introducci\u00f3n hist\u00f3rica que no cuesta na.}\\\\\n\n\nHaving already asked ourselves when a term has a type, a new question naturally arises: given a type, when does there exist a term of that type? This is in fact the fundamental idea of the Curry-Howard isomorphism. For example, considering whether there exists a term of the type $(A \\times B) \\to B$ is analogous to considering whether the formula $(A\\land B)\\to B$ is a tautology. Moreover, the term $\\lambda x^{A\\times B}. \\pi_2 x$ can be seen as a proof of the tautology! We have thus arrived at the dream of a constructivist mathematician: algorithms are proofs, and proofs are nothing but algorithms. Let us formalize this intuition.\\\\\n\n\nWe can create a pairing between the types of a simply typed lambda calculus and the formulas of intuitionistic logic  by pairing variables with variables and types with formulas as described in Table \\ref{tabla:curry-h}:\n\\begin{table}[!h]\\label{tabla:curry-h}\n  \\begin{center}\n    \\begin{tabular}{|l|c|c|}\n      \\hline\n      Typing name & Types  & Formulas  \\\\\n      \\hline\n      Minimal     & Function type $\\to$   & Implication $\\to$  \\\\\n      \\hline \n      Basic      & Type 1 & $\\top$ \\\\\n                  & Product type $\\times$ & Conjunction $\\land$ \\\\\n      \\hline\n      Expanded   & Type 0 & $\\bot$ \\\\\n                  & Sum type $+$     & Disjunction $\\lor$ \\\\\n      \\hline\n    \\end{tabular}\n    \\caption*{Pairing of formulas and terms.}\n  \\end{center}\n\\end{table}\n\nAnd we can see that this pairing matches perfectly with the typing and deduction rules since: \n\n\\begin{itemize}\n\\item We pair the concept of a formula being true with the concept of a type having a term. We formalize this by pairing the truth assumption $\\Gamma=\\{A\\}$ with the typing context $\\Gamma=\\{x:A\\}$. \n\\item We can pair $(var)$ with $(ax)$ and $(*)$ with $(\\top)$.\n\\item We can pair $(app)$ with $(\\to_1)$ and $(abs)$ with $(\\to_2)$.\n\\item We can pair $(\\pi_i)$ with $(\\land_1)$ and $(\\land_2)$ with $(pair)$.\n\\item We can pair $(\\iin_i)$ with $(\\lor_1^i)$ and $(\\lor_2)$ with $(\\ccase)$.\n\\item We can pair $0$ with $\\bot$.\n\\end{itemize}\n\n\nFinally, we pair the terms  with the truth derivations. Consider a term $M$ of type $C$ with associated formula $F$. By replacing each typing rule needed to deduce $M:C$  with the associated deduction rule, we get a truth derivation of $F$; conversely, from a truth derivation of $F$ from $\\Gamma=\\{\\}$ we can recover a term of type $C$.\\\\\n\n
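As an illustration, here is the typing derivation of $\\lambda x^{A\\times B}.\\pi_2 x$ written side by side with the corresponding deduction of $(A\\land B)\\to B$, where each typing rule on the left is replaced by its paired deduction rule on the right:\n\\begin{align*}\n  x:A\\times B \\vdash x : A\\times B \\ (var) &\\qquad\\longleftrightarrow\\qquad A\\land B \\vdash A\\land B \\ (ax)\\\\\n  x:A\\times B \\vdash \\pi_2 x : B \\ (\\pi_2) &\\qquad\\longleftrightarrow\\qquad A\\land B \\vdash B \\ (\\land_1)\\\\\n  \\vdash \\lambda x^{A\\times B}.\\pi_2 x : (A\\times B)\\to B \\ (abs) &\\qquad\\longleftrightarrow\\qquad \\vdash (A\\land B)\\to B \\ (\\to_2)\n\\end{align*}\n\n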
\n\\section{Lambda calculus as a computation system}\n% TODO: Expand this section properly. \nAround 1930, people started thinking about what it means for a function to be computable. There were three major answers to this question (Section 4, \\cite{cardone2006history}):\n\\begin{itemize}\n\\item In the 1930s Alonzo Church introduced the notion of $\\lambda$-calculus, in which a function is computable if we can write it as a $\\lambda$-term acting on Church numerals, that is, if we can deduce the result after a (hopefully finite) number of reduction steps.\n\\item Later, Alan Turing, who would go on to be a doctoral student of Alonzo Church, developed the concept of the Turing machine, in which a function is computable if a tape machine with a limited set of operations is capable of reproducing it.\n\\item Meanwhile, G\u00f6del defined the computable functions as the minimal set of functions that includes some basic functions (such as the successor function) and is closed under certain operations. \n\\end{itemize}\n\nThe goal of these three mathematicians was to solve the Entscheidungsproblem \\cite{hilbert1999principles}.\\\\\n\nThe interest in this type of problem was clear to the mathematical community, as the goal was to propose a computable algorithm that could decide every theorem. These three independently developed notions were formally proved to be equivalent \\cite{copeland1997church}.\\\\\n\n\nThe Church-Turing thesis is formulated on a philosophical background: it states that every effectively computable function system is equivalent to any of these three. While it cannot be formally proven, it enjoys near-universal acceptance to date. Any computing system that can replicate, and is thus equivalent to, either Turing machines or $\\lambda$-calculus is said to be \\emph{Turing Complete}.\\\\\n\nNonetheless, after the formalization of the solution of this problem, one last problem remained unaddressed: the finiteness of time. SAT is the problem that takes as input a propositional logic formula and decides whether it is satisfiable. This problem is NP-complete, quite famously the first of these problems \\cite{cook1971complexity}. If we consider the analogous problem for quantified Boolean formulas, we have an even harder PSPACE-complete problem, almost intractable in the worst case to this date. As a happy consequence for us, much work is left to be done in this area \\cite{cook2006p}. 
\n\n", "meta": {"hexsha": "105528f464a84f93b20edd236f7a94ea11313f84", "size": 56768, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapters/Chapter3.tex", "max_stars_repo_name": "pedrobn23/Master-thesis", "max_stars_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-02T13:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-20T16:41:28.000Z", "max_issues_repo_path": "thesis/Chapters/Chapter3.tex", "max_issues_repo_name": "pedrobn23/Master-thesis", "max_issues_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapters/Chapter3.tex", "max_forks_repo_name": "pedrobn23/Master-thesis", "max_forks_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.9769137303, "max_line_length": 673, "alphanum_fraction": 0.712373168, "num_tokens": 16577, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.5692468525796817}}
{"text": "\n\\section{Shapley Value approximation}\\label{sec:shapley}\n\nThe Shapley Value is a cornerstone measure in cooperative game theory (introduced in section \\ref{subsec:the_shapley_value}). \nIt is an axiomatic approach to allocating a divisible reward or cost between participants where there is a clearly defined notion of how much surplus or profit a coalition of participants could achieve by themselves.\nIt has many applications, \nincluding analysing the power of voting blocks in weighted voting games \\citep{Bachrach2009ApproximatingPI}, \nin cost and surplus division problems  \\citep{AzizEtal2016,archie_paper1}, \nand as a measure of network centrality \\citep{Michalak:2013}.\nBut primarily, is useful to us as a method of allocating financial payments in electricity network contexts (see section \\ref{the_value_def}).\n\n%Formally, a \\textit{cooperative game}, $\\langle N,v\\rangle\\in\\mathbb{G}_N$, comprises a set of $n$ players, $N=\\{1,2,\\dots,n\\}$, and a \\textit{characteristic function}, $v:S\\subset N\\rightarrow \\mathbb{R}$, which is a function specifying the reward which can be achieved if a subset of the players $S\\subset N$ cooperate, where $v(\\emptyset)=0$.\n%In this context the Shapley value $\\varphi$ is a unique mapping from cooperative games to the player rewards $\\mathbb{G}_N\\rightarrow\\mathbb{R}^n$ which satisfies axioms:\n\n%\\begin{itemize}\n%\\item\t\n%\\textbf{Efficiency}: That the total reward is divided up: $\\sum_i\\varphi_i(\\langle N,v\\rangle) = v(N)$\n%\\item\t\n%\\textbf{Symmetry}: If two players $i$ and $j$ are totally equivalent `substitutes' then the receive the same reward: ie. if $v(S\\cup i)=v(S\\cup j)~~\\forall S\\subseteq N\\setminus\\{i,j\\}$, then $\\varphi_i(\\langle N,v\\rangle) = \\varphi_i(\\langle N,v\\rangle)$\n%\\item\t\n%\\textbf{Null Player}: If the addition of a player $i$ to any coalition brings nothing, and is a `null player', then it receives reward of zero: i.e if $v(S\\cup i)=v(S)~~\\forall S\\subseteq N$ then $\\varphi_i(\\langle N,v\\rangle)=0$\n%\\item\t\n%\\textbf{Additivity}: That for any two games the reward afforded each player is each is the sum of the games considered together: i.e. for any $v_1$ and $v_2$, that: $\\varphi(\\langle N,v_1+v_2\\rangle)=\\varphi(\\langle N,v_1 \\rangle) + \\varphi(\\langle N,v_2\\rangle)$\n%\\end{itemize}\n\n%Specifically, the Shapley value is expressed as:\n%\\begin{equation}\\label{shap1_X}\\varphi_i(\\langle N,v\\rangle) = \\sum_{S\\subset N, i\\notin S}\\frac{(n-|S|-1)!\\,|S|!}{n!}(v(S\\cup\\{i\\})-v(S))\\end{equation}\n%That is, under the Shapley value each player is afforded their average marginal contribution across every possible sequence of player join orderings.  
\n%Or, if $v_{i,k}$ is the average marginal contribution which player $i$ can make across coalitions of size $k$:\n%\\begin{equation}\n%v_{i,k} = \\frac{1}{\\binom{n-1}{k}}\\sum_{S\\subset N\\setminus \\{ i\\} , |S|=k} %\\frac{(n-|S|-1)!\\,|S|!}{(n-1)!}\n%(v(S\\cup\\{i\\})-v(S))\n%\\end{equation}\n%then the Shapley value can be expressed as an average:\n%\\begin{equation}\\label{shap2_X} \\varphi_i(\\langle N,v\\rangle) = \\frac{1}{n}\\sum_{k=0}^{n-1}v_{i,k} \\end{equation}\n\nThough the Shapley Value is conceptually simple, its use is hampered by the fact that its total expression involves exponentially many evaluations of the characteristic function (there are $n\\times 2^{n-1}$ possible marginal contributions between $n$ players).\nHowever, since the Shapley Value is expressible as an average over averages by Equation~\\eqref{shap2}, \nit is possible to approximate these inner averages via sampling techniques, and these averages are naturally stratified by coalition size, forming an instance of stratified sampling.\nFor a given inner average in the Shapley Value expression, we approximate it by randomly selecting marginal contributions and calculating the sample average.\nThe question then becomes how many samples we should compute for each stratum to attain the best estimate of the Shapley Value.\n\n
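As a concrete illustration of this stratified scheme, the following Python sketch estimates a single player's Shapley Value using a uniform per-stratum budget (an illustrative choice only; the allocation methods compared below differ precisely in how they choose the per-stratum sample counts, and the small game at the end is our own toy example):\n\\begin{verbatim}\nimport random\n\n# Stratified estimate of player i's Shapley Value: one stratum per coalition\n# size k, with a uniform sampling budget per stratum.\ndef shapley_estimate(v, players, i, samples_per_stratum=1000):\n    others = [p for p in players if p != i]\n    n = len(players)\n    stratum_means = []\n    for k in range(n):                      # coalitions of size k, i excluded\n        total = 0.0\n        for _ in range(samples_per_stratum):\n            S = frozenset(random.sample(others, k))\n            total += v(S | {i}) - v(S)      # one sampled marginal contribution\n        stratum_means.append(total / samples_per_stratum)\n    return sum(stratum_means) / n           # the average over averages\n\n# A four-player airport-style game: v(S) is the largest weight in S.\nw = {1: 1, 2: 2, 3: 5, 4: 10}\nv = lambda S: max((w[j] for j in S), default=0)\nprint(shapley_estimate(v, list(w), 4))\n\\end{verbatim}\n\n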
In previously published literature, other techniques have been used to allocate samples in this context of Shapley Value sampling approximation, particularly simple sampling \\citep{DBLP:journals/cor/CastroGT09}, Neyman allocation \\citep{CASTRO2017180,DBLP:journals/tsg/OBrienGR15}, and allocation to minimise Hoeffding's inequality \\citep{2013arXiv1306.4265M}.\n\nWe assess the benefits of using our bound by comparing its performance to the approaches above in the context of some example cooperative games, with results analysed in Section~\\ref{sec:discussion}.\nThe example games are described below:\n\n\\begin{example_game}[Airport Game]\nAn $n=15$ player game with characteristic function:\n$$v(S)=\\max_{i\\in S}w_i$$\nwhere\n$w=\\{w_1,\\dots,w_{15}\\} %\\scriptstyle\\scriptsize\n=\\{ 1, 1, 2, 2, 2, 3, 4, 5, 5, 5, 7, 8, 8, 8, 10\\}$.\nThe maximum marginal contribution is $10$, so we assign $D_i=10$ for all $i$.\n\\end{example_game}\n\n\\begin{example_game}[Voting Game]\nAn $n=15$ player game with characteristic function:\n$$v(S)=\\begin{cases}\n       1, &\\quad\\text{if}\\quad \\sum_{i\\in S}w_i>\\sum_{j\\in N}w_j/2\\\\\n       0, &\\quad\\text{otherwise}\\\\\n     \\end{cases}$$\nwhere \n$w=\\{w_1,\\dots,w_{15}\\} %\\scriptstyle\\scriptsize\n=\\{ 1, 3, 3, 6, 12, 16, 17, 19, 19, 19, 21, 22, 23, 24, 29\\}$.\nThe maximum marginal contribution is $1$, so we assign $D_i=1$ for all $i$.\n\\end{example_game}\n\n\\begin{example_game}[Simple Reward Division]\nAn $n=15$ player game with characteristic function:\n$$v(S)=\\frac{1}{2}\\left(\\sum_{i\\in S}\\frac{w_i}{100}\\right)^2$$\nwhere\n$w=\\{w_1,\\dots,w_{15}\\} = \\{ 45, 41, 27, 26, 25, 21, 13, 13, 12, 12, 11, 11, 10, 10, 10 \\}$\\\\\nThe maximum marginal contribution is $1.19025$, so we assign $D_i=1.19025$ for all $i$.\n\\end{example_game}\n\n\\begin{example_game}[Complex Reward Division]\nAn $n=15$ player game with characteristic function:\n$$v(S)=\\left(\\sum_{i\\in S}\\frac{w_i}{50}\\right)^2 - \\floor[\\Bigg]{\\left(\\sum_{i\\in S}\\frac{w_i}{50}\\right)^2}$$\nwhere\n$w=\\{w_1,\\dots,w_{15}\\} = \\{ 45, 41, 27, 26, 25, 21, 13, 13, 12, 12, 11, 11, 10, 10, 10 \\}$\\\\\nIn this game, we assign $D_i=2$ for all $i$.\n\\end{example_game}\n\n%\\input{table-2.tex}\n\n\\input{figs/shapley_table.tex}\n\n\nFor each game, we compute the exact Shapley Value, and then the average absolute errors in the approximated Shapley Value for a given budget $m$ of marginal-contribution samples across multiple computational runs.\nThe results are shown in Table \\ref{Table2}, \nwhere $e^{Ma}$ denotes the average absolute error in the Shapley Value when sampling with Maleki's method \\citep{2013arXiv1306.4265M}, $e^{sim}$ the error with Castro's stratified simple sampling method \\citep{DBLP:journals/cor/CastroGT09}, $e^{Ca}$ the error with Castro's Neyman sampling method \\citep{CASTRO2017180}, and $e^{SEBM}$ the error associated with our method, SEBM. \nThe results in Table~\\ref{Table2} show that our method performs well across the benchmarks. \nA discussion of all of the results is given in the next section. \n\n\n\n\n\n\n", "meta": {"hexsha": "19e0ffa7d34e518168052c13c3e4ebfc9372e618", "size": 7116, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/diff_chapters/statistics_shapley.tex", "max_stars_repo_name": "Markopolo141/Thesis_code", "max_stars_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Thesis/diff_chapters/statistics_shapley.tex", "max_issues_repo_name": "Markopolo141/Thesis_code", "max_issues_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/diff_chapters/statistics_shapley.tex", "max_forks_repo_name": "Markopolo141/Thesis_code", "max_forks_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.1320754717, "max_line_length": 404, "alphanum_fraction": 0.7366498033, "num_tokens": 2252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5692468442699258}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage{fullpage}\n\\usepackage{titling}\n\\usepackage{indentfirst}\n\\usepackage{amsmath, amssymb}\n\\usepackage{tikz}\n\\usepackage[colorlinks,linkcolor=blue]{hyperref}\n\n\\setlength{\\parindent}{0em}\n\\setlength{\\droptitle}{-3em}\n\n\\begin{document}\n\n\\title{Factor Graph}\n\\date{}\n\\maketitle\n\\vspace{-5em}\n\n\\section{Variables and Factors}\nDeepDive uses factor graphs to perform learning and inference. A factor graph is a type of probabilistic graphical model.\nThere are two types of nodes in a factor graph, (random) variables and factors. A random variable can be used to quantitatively describe an event. For example, we can use a random variable to denote if John smokes. If John smokes, the random variable takes a value of 1, and 0 if John does not smoke. For now, DeepDive only supports boolean variables, so we will constrain our discussion to boolean variables.\n\nA factor is a function of variables, and is used to evaluate the relations among variable(s). For example, a function imply(A, B) means if A, then B. Now suppose we have relation that ``if John smokes then he has cancer\". Here we have two variables, one indicating if John smokes, the other indicating if John has cancer. Thus, imply(smoke, cancer) expresses the rule above.\n\nThe figure shows an example of a factor graph, where $v_1$ and $v_2$ are two variables, and $f_1, f_2$ are two factors. Factor $f_1$ is connected with $v_1$ and $v_2$, while $f_2$ is connected with $v_2$. We will use this example to illustrate some basic concepts about factor graphs.\n\n\\begin{center}\n\\includegraphics[width=2in]{factor_graph.png}\n\\end{center}\n\n\\section{Possible Worlds and Probabilities}\nA possible world is a particular possible assignment to every variable, denoted by $I$. We can also think of it as each variable taking a particular value.\n\\begin{itemize}\n\\item How many possible worlds in the factor graph above? Each variable can take value 0 or 1, and there are two variables. So we have four possible worlds. The possible worlds are shown in the table below, with each column representing a possible world.\n    \\begin{center}\n    \\begin{tabular}{|l|llll|}\n      \\hline\n      $v_1$ & 0 & 0 & 1 & 1\\\\\n      \\hline\n      $v_2$ & 0 & 1 & 0 & 1\\\\\n      \\hline\n    \\end{tabular}\n    \\end{center}\n\\end{itemize}\nHow do we define the probability of a possible world? We define it through factor functions. We give different weight to factor functions, to express the relative influence of each factor on the probability. Factors with larger weight have greater impacts on the probability. The probability of a possible world graph is then defined to be proportional to some measure of weighted combination of factor functions (for how to define such a measure, please refer to [Factor Graphs and the Sum-Product Algorithm] \\url{http://www.comm.utoronto.ca/~frank/papers/KFL01.pdf}), i.e., for the above graph,\n\\[ \\text{Pr}(I) \\propto \\text{measure}\\{w_1 f_1(v_1, v_2) + w_2 f_2(v_2)\\}. \\]\nHere, $w_1, w_2$ are weights associated with factor functions.\n\\begin{itemize}\n\\item If $f_1$ is imply function with weight $w_1 = 1$, and $f_2$ is isTrue with weight $w_2 = 0.5$ (for explaination on types of factor functions, see \\url{http://deepdive.stanford.edu/inference_rule_functions.html}). 
What is the probability of the possible world $v_1 = 1, v_2 = 0$ proportional to (in terms of the measure)?\n\n    Here, $f_1(v_1, v_2)$ = imply(1, 0) = 0, and $f_2(v_2)$ = isTrue(0) = 0. Thus, the answer is easily computed as measure$(w_1 f_1(v_1, v_2) + w_2 f_2(v_2))$ = measure$(1 \\cdot 0 + 0.5 \\cdot 0)$ = measure(0).\n\\end{itemize}\n\nIt is not convenient to express the probability as proportional to something rather than as an absolute value. To define absolute probabilities of possible worlds, we can simply normalize the probabilities above against all possible worlds. That is, we define the probability of a possible world $I$ as\n\\[ \\text{Pr}(I) = \\frac{\\text{measure}\\{w^T f(I)\\}}{\\sum_{J} \\text{measure}\\{w^T f(J)\\}}, \\]\nwhere the sum is over all possible worlds.\n\\begin{itemize}\n\\item What's the probability of the possible world $v_1=1,v_2=0$?\n\\end{itemize}\n\n\\section{Marginal Inference and Weight Learning}\n\nNow, we can perform marginal inference on factor graphs. Marginal inference infers the probability of one variable taking a particular value. For example, if we would like to infer whether John has cancer, and this is expressed using a variable $v_1$, it means we would like to infer the probability of $v_1 = 1$. It is straightforward to define this probability as just the sum of the probabilities of the possible worlds that contain the specific value for that variable. This is the usual relation between marginal probability and joint probability. The marginal inference for the event $\\{v_1 = 1\\}$ is expressed as\n\\[ \\text{Pr}\\{v_1 = 1\\} = \\sum_{I:v_1=1} \\text{Pr}(I). \\]\n\\begin{itemize}\n\\item What is the result of the marginal inference $\\text{Pr}\\{v_1 = 1\\}$?\n\\end{itemize}\n\n
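These quantities are small enough to enumerate directly. A minimal Python sketch answers both questions, assuming the common log-linear choice measure$(x) = e^x$ (the measure is left abstract above, and all names here are ours):\n\\begin{verbatim}\nfrom itertools import product\nfrom math import exp\n\n# Enumerate the four possible worlds and normalise, taking measure(x) = exp(x).\nw1, w2 = 1.0, 0.5\nimply = lambda a, b: 0 if (a == 1 and b == 0) else 1\nis_true = lambda a: a\n\ndef weight(v1, v2):\n    return exp(w1 * imply(v1, v2) + w2 * is_true(v2))\n\nZ = sum(weight(v1, v2) for v1, v2 in product((0, 1), repeat=2))\npr = {(v1, v2): weight(v1, v2) / Z for v1, v2 in product((0, 1), repeat=2)}\n\nprint(pr[(1, 0)])                 # probability of the world v1 = 1, v2 = 0\nprint(sum(p for (v1, _), p in pr.items() if v1 == 1))  # marginal Pr(v1 = 1)\n\\end{verbatim}\n\n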
In DeepDive, you can assign factor weights manually, or you can let DeepDive learn weights automatically. In order to learn weights automatically, you must have enough training data available. DeepDive chooses the weights that agree most with the training data. Formally, the training data is just a set of possible worlds, and we choose weights by maximizing the probabilities of these possible worlds.\n\n\\end{document}\n", "meta": {"hexsha": "394868d7e65b54cc4ea32cec5e7fe2ff8f357650", "size": 5375, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/assets/factor_graph.tex", "max_stars_repo_name": "onthway/deepdive", "max_stars_repo_head_hexsha": "f0b1f355446b169bc8a01061f773df598117a1b7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1729, "max_stars_repo_stars_event_min_datetime": "2015-01-01T02:40:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T09:08:29.000Z", "max_issues_repo_path": "doc/assets/factor_graph.tex", "max_issues_repo_name": "onthway/deepdive", "max_issues_repo_head_hexsha": "f0b1f355446b169bc8a01061f773df598117a1b7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 426, "max_issues_repo_issues_event_min_datetime": "2015-01-02T10:28:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-05T02:37:40.000Z", "max_forks_repo_path": "doc/assets/factor_graph.tex", "max_forks_repo_name": "onthway/deepdive", "max_forks_repo_head_hexsha": "f0b1f355446b169bc8a01061f773df598117a1b7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 603, "max_forks_repo_forks_event_min_datetime": "2015-01-01T08:05:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T07:45:40.000Z", "avg_line_length": 74.6527777778, "max_line_length": 608, "alphanum_fraction": 0.7534883721, "num_tokens": 1464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7956581073313276, "lm_q1q2_score": 0.569232902424494}}
{"text": "\\documentclass{article}\n\\usepackage{geometry}\n \\geometry{\n a4paper,\n total={170mm,257mm},\n left=20mm,\n top=20mm,\n }\n\n \\usepackage{mathptmx}\n \\usepackage{amsmath}\n \\usepackage{diagbox}\n \\usepackage{booktabs}\n\\usepackage{colortbl}\n\\usepackage{amssymb}\n\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks=True,\n    linkcolor={blue!20!black},\n    filecolor=magenta,      \n    urlcolor=cyan,\n}\n\n\n \\usepackage{caption}\n\\usepackage[export]{adjustbox} %% for picture frame\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\\usepackage{fancyhdr}\n\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{{Signal Analysis}}\n\\lhead{{Amrit Prasad Phuyal (PUL074BEX004)}}\n\\rfoot{\\thepage}\n\n%%% format and command for lab ans vhdl\n\\input{./Matlab.tex}\n%%%%>>>>>>>........\n\\input{./CoverPage.tex}\n\\DeclareUnicodeCharacter{2212}{-}\n\n\\begin{document}\n\n%----------------------------------------------------------------------------------------\n%\tTITLE PAGE\n%----------------------------------------------------------------------------------------\n\\CP{Signal Analysis}{All in One}\n{Visualization of Signals, Fourier Series \\& Transform, Convolution and Frequency Response}\n{Department of Electronics and Computer Engineering}\n\n\n\n\n%----------------------------------------------------------------------------------------\n\\pagenumbering{gobble}\n\\tableofcontents\n\\vspace{1in}\n%\\pagebreak\n\\listoffigures\n\\vspace{1in}\n%\\pagebreak\n\\lstlistoflistings\n\\pagebreak\n\\pagenumbering{arabic}\n\n\n% \\anscode{testing matolahb cod4e}{./CODES/sino.m}\n\n\n% \\begin{figure}[H]\n%     \\centering\n%     \\includegraphics[scale=1,cframe=blue 0.5pt 3pt]{./FIG/sino}\n%     \\caption{..sad gd }\n% \\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Signal Visualization}\nSignal is a function of one or more independent variables, which contain some information.Analog signal is a continuous signal in which one time-varying quantity represents another time-based variable and denoted as $x(t), y(t)$.A digital signal is a signal that is used to represent data as a sequence of separate values at any point in time denoted as $ x[n], y[n]$.\n\n\n\\subsection{Sinusoidal Signal}\nSinusoidal signal is in the form of x(t) = A cos(${w}_{0}\\,\\pm \\phi$) or A sin(${w}_{0}\\,\\pm \\phi$).\nIn matlab \\textbf{sin} and \\textbf{cos} function are used to calculate sine and cosine values and \\textbf{plot} function to plot continuous Sinusoidal signal. \\textbf{hold on} \\& \\textbf{hold off} command is used to display both signal in single plot. Similarly \\textbf{xlabel}, \\textbf{ylabel} and \\textbf{title} are used for Labeling purposes.\n\n\\anscode{Visualization of signal x(t) \\& y(t)}{./CODES/sinu.m}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.2,cframe=blue 0.5pt 3pt]{./FIG/sinu}\n    \\caption{Visualization of Sinusoidal signal }\n\\end{figure}\n\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection{Ramp Signal}\nRamp signal is denoted by $r[n]=an $and $r(t)=at$ for discrete and continuous signal. 
To plot a discrete signal the \\textbf{stem} function is used in MATLAB.\n\n\\anscode{Discrete and Continuous ramp Signal}{./CODES/ramp.m}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.4,cframe=blue 0.5pt 3pt]{./FIG/ramp}\n    \\caption{Discrete and Continuous ramp Signal when a=2}\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Exponential Signal}\nThe continuous time exponential signal $y(t)=ce^{at}$ and the discrete time exponential signal $y[n]=ce^{an}$ were plotted using MATLAB.\n\n\\anscode{Discrete and Continuous Exponential Signal}{./CODES/exp.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.4,cframe=blue 0.5pt 3pt]{./FIG/exp}\n    \\caption{Discrete and Continuous Exponential Signal when c=1.5 \\& a=0.15 }\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Unit Step Signal}\n$u(t)$ \\& $u[n]$ are the continuous and discrete unit step functions, which take the value 1 for $t$ (or $n$) $\\geq 0$.\n\\anscode{Discrete and Continuous Unit Step Signal}{./CODES/unit.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.35,cframe=blue 0.5pt 3pt]{./FIG/unit}\n    \\caption{Discrete and Continuous Unit Step Signal}\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%\n\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\section{Fourier Series}\nA continuous time signal $x(t)$ with period $T$ is represented by the Fourier series as\n\n\\begin{equation*}x(t)=\\sum_{k = -\\infty}^{\\infty} a_ke^{jkw_ot},\n\\end{equation*}\n\nwhere $w_o=2\\pi/T$ and\\\\\n\\begin{equation*}\n    a_k=\\frac{1}{T}\\int_{T}x(t)e^{-jkw_ot}  \\,dt.\n\\end{equation*}\n\nThe Fourier series representation of a square wave with period T and amplitude a is given by:\n\\begin{equation*}\n    x(t)=\\frac{4a}{\\pi} \\sum_{k = 1}^{\\infty}\\frac{\\sin((2k-1)w_ot)}{2k-1}\n\\end{equation*}\n\nThe sum of the odd harmonics of a sinusoid forms an approximation to the square wave, so we vary the number of terms in the summation.\n\n\\anscode{Fourier Series Representation of a Square Wave }{./CODES/FouSer.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.83,cframe=blue 0.5pt 3pt]{./FIG/FouSer.pdf}\n    \\caption{Fourier Series Representation of a Square Wave}\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%\n\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\section{Fourier Transform}\nThe Fourier transform of a continuous time signal $x(t)$ is mathematically given as,\n$$\n    X(j\\omega) = \\int_{-\\infty}^{\\infty} x(t) e^{-j\\omega t} dt\n$$\nLikewise, the Fourier transform of a discrete time signal $x[n]$ is mathematically given as,\n$$\n    X(e^{j\\omega})=\\sum_{n=-\\infty}^{\\infty} x[n] e^{-j\\omega n}\n$$\nThe \\textbf{fft} function in MATLAB returns the fast Fourier transform of the input argument; its real and imaginary parts are plotted separately.\n\\anscode{Fourier transform of x[n] = [0,1,2,3] }{./CODES/FouTra.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.55,cframe=blue 0.5pt 3pt]{./FIG/FouTra}\n    \\caption{Fourier transform of x[n] = [0,1,2,3]}\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\section{Convolution}\nThe convolution of two continuous time signals $x(t)$ and $h(t)$, called the convolution integral, is mathematically given as,\n$$\n    y(t)=x(t)*h(t)=\\int_{-\\infty}^{\\infty}x(u)h(t-u)du\n$$\nLikewise, the convolution of two discrete time signals $x[n]$ and $h[n]$, called the convolution sum, is mathematically given as,\n\n$$\n    y[n]=x[n]*h[n]=\\sum_{k=-\\infty}^{\\infty} x[k]h[n-k]\n$$\nThe \\textbf{conv} function returns the convolution of two discrete sequences in MATLAB.\n\n
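As a hand check of the first example that follows, evaluating the convolution sum for $x[n] = [1,3,2,1,1]$ and $h[n] = [1,2,-1,1]$ gives\n$$\n    y[n] = [1,\\ 5,\\ 7,\\ 3,\\ 4,\\ 3,\\ 0,\\ 1];\n$$\nfor instance, $y[2]=x[0]h[2]+x[1]h[1]+x[2]h[0]=1\\cdot(-1)+3\\cdot 2+2\\cdot 1=7$. This can be compared against the \\textbf{conv} output plotted below.\n\n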
\\anscode{Convolution for two discrete sequences $x[n]$ = [1,3,2,1,1] and $h[n]$ = [1,2,\u22121,1] }{./CODES/Conv1.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.55,cframe=blue 0.5pt 3pt]{./FIG/conv1}\n\n    \\caption{Convolution for two discrete sequences $x[n]$ = [1,3,2,1,1] and $h[n]$ = [1,2,\u22121,1]}\n\\end{figure}\n\nConvolution for two discrete sequences $x[n]$ = 0.5 n and $h[n] = u[n]$:\n\n\\anscode{Convolution for two discrete sequences $x[n]$ = 0.5 n and $h[n] = u[n]$ }{./CODES/Conv2.m}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.55,cframe=blue 0.5pt 3pt]{./FIG/Conv2}\n    \\caption{Convolution for two discrete sequences $x[n]$ = 0.5 n and $h[n] = u[n]$ }\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%%%%%\n\\section{Frequency Response of a System}\nFor a system with $h(t)$ as the impulse response and $x(t)$ as the input signal, the output $y(t)$ is related to the input by the convolution,\n$$\n    y(t)=x(t)*h(t)=\\int_{-\\infty}^{\\infty}x(u)h(t-u)du\n$$\nAccording to the convolution property of Fourier transforms, the Fourier transforms of the signals are related as,\n$$\n    Y(j\\omega)=H(j\\omega)X(j\\omega)\n$$\nwhere $H(j\\omega)$, the Fourier transform of the impulse response of the system, is the frequency response of the system. Likewise, for a discrete time input signal $x[n]$ to a system with the impulse response $h[n]$, the output in the frequency domain is mathematically given as,\n$$\n    Y(e^{j\\omega})=H(e^{j\\omega})X(e^{j\\omega})\n$$\nwhere $H(e^{j\\omega})$, the Fourier transform of the impulse response of the system, is the frequency response of the system.\nDuring the lab experiment, we plotted the frequency response given as,\n$$\n    H(z)=\\frac{0.008-0.033z+0.05z^2-0.033z^3+0.008z^4}{1+2.37z+2.7z^2+1.6z^3+0.5z^4}\n$$\nThe \\textbf{freqz} function returns the frequency response of a system, whose amplitude and phase were plotted separately in MATLAB.\n\n\\anscode{ Frequency response of $H(z)$ }{./CODES/FreqRes.m}\n\n\n\\begin{figure}[H]\n\n    \\centering\n    \\includegraphics[scale=0.55,cframe=blue 0.5pt 3pt]{./FIG/FreqRes}\n    \\caption{ Frequency response of $H(z)$ }\n\\end{figure}\n\n% %%%%%%%%%%%%%%%%%%%\n% %%%%%%%%%%%%%%%%%%%\n\n\n\\section{Conclusion}\nIn this lab we became familiar with various MATLAB functions. We were able to plot some basic signals such as the unit step, exponential, sinusoidal and ramp signals. 
We also computed and plotted the Fourier series and Fourier transform, performed convolution with the \\textbf{conv} function, and viewed the frequency response of a system. Thus all the lab exercises were completed.\n\n\n\\end{document}\n\n\n\n\n\n", "meta": {"hexsha": "7abfdd41dd8390830b8b5e7a250dd47540f5ccd3", "size": 9266, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Signal Analysis all in one LAB/Signal Analysis ( Amrit Prasad Phuyal) All Lab .tex", "max_stars_repo_name": "amritphuyal/LATEX", "max_stars_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-01T08:20:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-01T08:20:34.000Z", "max_issues_repo_path": "Signal Analysis all in one LAB/Signal Analysis ( Amrit Prasad Phuyal) All Lab .tex", "max_issues_repo_name": "amritphuyal/LATEX", "max_issues_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Signal Analysis all in one LAB/Signal Analysis ( Amrit Prasad Phuyal) All Lab .tex", "max_forks_repo_name": "amritphuyal/LATEX", "max_forks_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-03-19T09:04:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T12:19:26.000Z", "avg_line_length": 33.4512635379, "max_line_length": 368, "alphanum_fraction": 0.6413770775, "num_tokens": 2762, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7956581024858786, "lm_q1q2_score": 0.5692328989579436}}
{"text": "\\chapter{Project Design}\n\nThis chapter outlines the design behind each component of the text classification model and the respected process each component may entail, several components will execute more than one step to achieve the desired result. The design is reflected within the final model and each component is broken down to display the functionality and theory within this project. Within section \\ref{section:FunctionalRequirements}, it is detailed there won\u2019t be a GUI for the interaction and thus the design section relates to the inner workings of the model itself, that being: system architecture, logistics and theory.\n\n\\section{Classical vs Modern}\n\nAs originally intended, this project would have seen two differing implementations of the same concept, one being of a classical nature and the other being a machine learning variation, as previously mentioned this project experienced time management issues due to unexpected issue, to which resulted in only focusing on the machine learning implementation of a novel approach. The design for the first model would have been of the following:\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/chapter-5/ClassicalNLP.pdf}\n    \\caption[ClassicalNLP]{Classical NLP pipeline for text classification.\n    \\label{fig:ClassicalNLP}}\n\\end{figure}\n\nClassical or \"traditional\" methods for NLP include: N-Gram, Hidden Markov Models using Markov Chains and Part-Of-Speech Tagging. The developer had originally intended to implement a traditional approach within a machine learning model. The combination of Part-Of-Speech with a machine learning model to calculate a word\u2019s vector based on its TF-IDF value would have been the start, such that:\n\n\\begin{equation}\n    tf(t,d) = \\frac{f_t, _d}{\\sum{t \\in_d} f _t {_`}, _d}\n\\end{equation}\n\nIt would have also used the inverse TF-IDF value as the project model covers multiple datasets, such that:\n\n\\begin{equation}\n    idf(t, D) = \\log \\frac{N}{|d \\in D : t \\in d|}\n\\end{equation}\n\nWhere the traditional aspect of the Bag-of-Words would produce a vector for each item in a corpus, it\u2019s TF-IDF value would have been calculate through a series of CNN nodes.\n\nMachine learning concepts can be classed as a \u201cblack-box\u201d of functionality as the user does not necessarily see what is being executed within the hidden layers, a high-level abstraction for this project can be generalised into the following diagram:\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/chapter-5/MLNLP.pdf}\n    \\caption[MachineLearningNLP]{Machine Learning Model for an NLP pipeline for text classification.\n    \\label{fig:MLNLP}}\n\\end{figure}\n\n\\section{Planning the ML Model Design}\n\nInitial prototyping of the machine learning model for a new amalgamation of NLP techniques helped to indicate what the best route of development could be, the planning stage piggybacked off existing work flowchart diagrams in-order to apply the most appropriate method and technique combination. 
The sequence flowchart is as follows:\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/chapter-5/GooglePlan.pdf}\n    \\caption[GooglePlan]{Text Classification Flowchart \\parencite{google2021TCF}.\n    \\label{fig:GooglePlan}}\n\\end{figure}\n\n\\section{Supervision}\n\nThe development of the project model is based on a supervised approach due to the datasets located: the raw datasets have no labels or lexical categories to train the model on, so labels have been added manually and algorithmically, with user input accounting for any labels that remain missing. The supervision for this project can be represented as the following diagram:\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/chapter-5/SupervisedLearningChart.pdf}\n    \\caption[SupervisedLearning]{Sequence control for Supervised learning.\n    \\label{fig:SupervisedLearningChart}}\n\\end{figure}\n\n\\section{Pipeline Design}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/chapter-5/Pipeline.pdf}\n    \\caption[MLTCPipeline]{Pipeline for model classification.\n    \\label{fig:MLTCPipeline}}\n\\end{figure}\n\nThere are five main steps for a text classification pipeline:\n\n\\begin{enumerate}\n    \\item \\textbf{\\textit{Pre-processing}}: prepare the raw dataset to be trained.\n    \\item \\textbf{\\textit{Splitting}}: split the processed dataset to be trained and validated.\n    \\item \\textbf{\\textit{Tuning}}: identify valuable parameters within the trained data.\n    \\item \\textbf{\\textit{Training}}: train the current iteration of the model with updated hyper-parameters.\n    \\item \\textbf{\\textit{Testing}}: test and collect statistics for analysis to make further predictions.\n\\end{enumerate}\n\n\\section{Data Preparation and Pre-processing}\n\nData Preparing\n\n\\section{Model}\n\n% Skip-gram rather than Continuous Bag-Of-Words (CBOW) as it yields better results with large datasets - such as student feedback - skip gram can also be context aware as it converts neighboring lexemes to vectors\n\n% \\subsection{Skip-Gram}\n\n% \\begin{figure}[H]\n%     \\centering\n%     \\includegraphics[width=0.49\\textwidth]{figures/chapter-5/SkipGramModel.pdf}\n%     \\caption[SkipGramModel]{Input flow of Skip-Gram model.\n%     \\label{fig:SkipGramModel}}\n% \\end{figure}\n\n% The skip-gram model can be seen as the inverse of the bag-of-words model as it attempts to vectorize neighboring words first to identify corpus context, whereas, bag-of-words takes each lexeme first, produces a vector sum and then categorises each word.\n\n% https://machinelearningmastery.com/gentle-introduction-bag-words-model/\n% https://towardsdatascience.com/pos-tagging-using-crfs-ea430c5fb78b#1c6a\n% https://nathanrooy.github.io/posts/2018-03-22/word2vec-from-scratch-with-python-and-numpy/\n% https://stackoverflow.com/questions/37394970/tensorflow-word2vec-cbow-model\n% https://www.azlifa.com/morphology-%E2%80%93-the-structure-of-words/\n\n\\subsection{The Artefact}\n\nPOS TAGGING + Machine Learning\n\n\"POS tags are another way of labeling text data, in particular we are labeling the tokens/words in a text. ... 
In a way POS tags are in themselves features of words; they give classification and context to words that allow us to better understand the purpose of word choice and the meaning of sentences as a whole.\"\n\n\n% ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n", "meta": {"hexsha": "0875cfacdeada8056a61c38444842cc8e9465c88", "size": 6549, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter-v.tex", "max_stars_repo_name": "WillGreen98/University-PJE40-Dissertation", "max_stars_repo_head_hexsha": "fdd2efcbbac83b02cd9c7970c605db2babe46174", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/chapter-v.tex", "max_issues_repo_name": "WillGreen98/University-PJE40-Dissertation", "max_issues_repo_head_hexsha": "fdd2efcbbac83b02cd9c7970c605db2babe46174", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter-v.tex", "max_forks_repo_name": "WillGreen98/University-PJE40-Dissertation", "max_forks_repo_head_hexsha": "fdd2efcbbac83b02cd9c7970c605db2babe46174", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.9557522124, "max_line_length": 607, "alphanum_fraction": 0.7611849137, "num_tokens": 1510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580952177051, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5692328937581178}}
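As a minimal illustration of the \u201cPOS tagging + machine learning\u201d artefact idea quoted above, the following sketch tags tokens and appends the tags as extra features for a linear classifier; the example texts, labels, and the helper \\texttt{pos\\_features} are illustrative assumptions, not the project\u2019s code.\n\\begin{verbatim}
# POS tags as additional bag-of-words features for a linear classifier.
import nltk  # requires the punkt and averaged_perceptron_tagger models
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great clear lectures", "poorly organised module"]  # assumed data
labels = [1, 0]

def pos_features(text):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    return text + " " + " ".join(tags)   # append POS tags as pseudo-words

vec = CountVectorizer()
X = vec.fit_transform(pos_features(t) for t in texts)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform([pos_features("great module")])))
\\end{verbatim}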
{"text": "\\documentclass[fleqn]{article}\n\\usepackage[a4paper, left=25.4mm, top=25.4mm, right=25.4mm, bottom=25.4mm]{geometry}\n\\usepackage[shortlabels]{enumitem}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{authoraftertitle}\n\\usepackage{blindtext}\n\\usepackage{bm}\n\\usepackage{color}\n\\usepackage{graphicx}\n\\usepackage{newpxtext}\n%\\usepackage{newpxmath}\n\\usepackage{textcomp}\n\\newcommand\\tab[1][1cm]{\\hspace*{#1}}\n\n\\author{Victor Zhao\\\\xz398@cam.ac.uk}\n\n\\begin{document}\n\\centering\n\\section*{NST Part IA Mathematics}\n\\MyAuthor\n\n\\begin{enumerate}\n    \\item Parallel and perpendicular components of a vector:\\\\\n        $\\mathbf{a}_\\parallel=(\\mathbf{a}\\cdot\\hat{\\mathbf{n}})\\hat{\\mathbf{n}}$\\\\\n        $\\mathbf{a}_\\perp=\\mathbf{a}-(\\mathbf{a}\\cdot\\hat{\\mathbf{n}})\\hat{\\mathbf{n}}$\n    \\item Vector triple product:\\\\\n        $[\\mathbf{a},\\mathbf{b},\\mathbf{c}]=\\mathbf{a}\\cdot(\\mathbf{b}\\times\\mathbf{c})$\\\\\n        $[\\mathbf{a},\\mathbf{b},\\mathbf{c}]=[\\mathbf{b},\\mathbf{c},\\mathbf{a}]=[\\mathbf{c},\\mathbf{a},\\mathbf{b}]$\\\\\n        $[\\mathbf{a},\\mathbf{b},\\mathbf{c}]=-[\\mathbf{a},\\mathbf{c},\\mathbf{b}]$\n    \\item $\\mathbf{a}\\times(\\mathbf{b}\\times\\mathbf{c})=(\\mathbf{a}\\cdot\\mathbf{c})\\mathbf{b}-(\\mathbf{a}\\cdot\\mathbf{b})\\mathbf{c}$\n    \\item Plane equation:\\\\\n        $\\mathbf{r}\\cdot\\hat{\\mathbf{n}}=\\mathbf{a}\\cdot\\hat{\\mathbf{n}}=d$\\\\\n        $|$$d$$|$ is perpendicular distance of plane from origin for unit normal $\\hat{\\mathbf{n}}$\n    \\item Polar coordinates:\\\\\n        $\\hat{\\mathbf{r}}=\\cos\\phi\\mathbf{i}+\\sin\\phi\\mathbf{j}$\\\\\n        $\\hat{\\bm{\\phi}}=-\\sin\\phi\\mathbf{i}+\\cos\\phi\\mathbf{j}$\\\\\n        $dS=rdrd\\phi$\n    \\item Cylindrical coordinates:\\\\\n        $x=r\\cos\\phi$\\\\\n        $y=r\\sin\\phi$\\\\\n        $z=z$\\\\\n        $dV=rdrd\\phi dz$\n    \\item Spherical coordinates:\\\\\n        $x=r\\sin\\theta\\cos\\phi$\\\\\n        $y=r\\sin\\theta\\sin\\phi$\\\\\n        $z=r\\cos\\theta$\\smallbreak\n        $\\hat{\\mathbf{r}}=\\sin\\theta\\cos\\phi\\mathbf{i}+\\sin\\theta\\sin\\phi\\mathbf{j}+\\cos\\theta\\mathbf{k}$\\\\\n        $\\hat{\\bm{\\theta}}=\\cos\\theta\\cos\\phi\\mathbf{i}+\\cos\\theta\\sin\\phi\\mathbf{j}-\\sin\\theta\\mathbf{k}$\\\\\n        $\\hat{\\bm{\\phi}}=-\\sin\\phi\\mathbf{i}+\\cos\\phi\\mathbf{j}$\\smallbreak\n        $dV=r^2\\sin\\theta dr d\\theta d\\phi$\\\\\n        $dS=r^2\\sin\\theta d\\theta d\\phi$\n    \\item Leibnitz's formula:\\smallbreak\n        $\\dfrac{d^n(fg)}{dx^n}=\\displaystyle\\sum_{i=0}^{n}\\binom{n}{i}f^{(n-i)}g^{(i)}$\n    \\item Limits: \\smallbreak\n        $\\displaystyle\\lim_{x\\to a}f(x)=K$ means that\n        $\\forall\\epsilon>0.\\exists\\delta>0.\\;(0<$ $|$$x-a$$|$ $<\\delta)\\implies($ $|$$f(x)-K$$|$ $<\\epsilon)$\\smallbreak\n        $\\displaystyle\\lim_{x\\to a+}f(x)=K$ means that\n        $\\forall\\epsilon>0.\\exists\\delta>0.\\;(0<x-a<\\delta)\\implies($ $|$$f(x)-K$$|$ $<\\epsilon)$\\smallbreak\n        $\\displaystyle\\lim_{x\\to a-}f(x)=K$ means that\n        $\\forall\\epsilon>0.\\exists\\delta>0.\\;(0<a-x<\\delta)\\implies($ $|$$f(x)-K$$|$ $<\\epsilon)$\\smallbreak\n        $\\displaystyle\\lim_{x\\to\\infty}f(x)=K$ means that\n        $\\forall\\epsilon>0.\\exists X>0.\\;(x>X)\\implies($ $|$$f(x)-K$$|$ $<\\epsilon)$\n    \\item Continuity: $f(x)$ is continuous at $a$ if:\n        \\begin{itemize}\n            \\item $f(a)$ exists;\n            \\item 
$\\displaystyle\\lim_{x\\to a}f(x)$ exists and equals $f(a)$.\n        \\end{itemize}\n    \\newpage\n    \\item Taylor's series:\\smallbreak\n        $e^x=1+x+\\dfrac{x^2}{2!}+\\cdots+\\dfrac{x^n}{n!}+\\cdots$\\smallbreak\n        $\\ln(1+x)=x-\\dfrac{x^2}{2}+\\dfrac{x^3}{3}-\\cdots+(-1)^{n+1}\\dfrac{x^n}{n}+\\cdots$\\smallbreak\n        $\\sin x=x-\\dfrac{x^3}{3!}+\\dfrac{x^5}{5!}-\\cdots+(-1)^n\\dfrac{x^{2n+1}}{(2n+1)!}+\\cdots$\\smallbreak\n        $\\cos x=1-\\dfrac{x^2}{2!}+\\dfrac{x^4}{4!}-\\cdots+(-1)^n\\dfrac{x^{2n}}{(2n)!}+\\cdots$\\smallbreak\n        $\\tan^{-1}x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\cdots+(-1)^n\\dfrac{x^{2n+1}}{2n+1}+\\cdots$\\smallbreak\n        $\\sinh x=x+\\dfrac{x^3}{3!}+\\dfrac{x^5}{5!}+\\cdots+\\dfrac{x^{2n+1}}{(2n+1)!}+\\cdots$\\smallbreak\n        $\\cosh x=1+\\dfrac{x^2}{2!}+\\dfrac{x^4}{4!}+\\cdots+\\dfrac{x^{2n}}{(2n)!}+\\cdots$\\smallbreak\n        $\\tanh^{-1}x=x+\\dfrac{x^3}{3}+\\dfrac{x^5}{5}+\\cdots+\\dfrac{x^{2n+1}}{2n+1}+\\cdots$\n    \\item Hyperbolic functions:\\smallbreak\n        $\\cosh^2 x-\\sinh^2 x=1$\\smallbreak\n        $\\cosh2x=\\cosh^2 x+\\sinh^2 x$\\smallbreak\n        $\\cosh^{-1}x=\\ln(x+\\sqrt{x^2-1})$\\smallbreak\n        $\\sinh^{-1}x=\\ln(x+\\sqrt{x^2+1})$\\smallbreak\n        $\\tanh^{-1}x=\\dfrac{1}{2}\\ln\\left(\\dfrac{1+x}{1-x}\\right)$\n    \\item Differentiation of integrals wrt parameters:\\smallbreak\n        $\\dfrac{d}{dt}\\displaystyle\\int_{a(t)}^{b(t)}f(x,t)dx=\\displaystyle\\int_{a(t)}^{b(t)}\\dfrac{\\partial f}{\\partial t}dx+\\dfrac{db}{dt}f(b,t)-\\dfrac{da}{dt}f(a,t)$\n    \\item Schwarz's inequality:\\smallbreak\n        $\\left(\\displaystyle\\int_a^b f(x)g(x)dx\\right)^2\\leq\\left(\\displaystyle\\int_a^b f^2(x)dx\\right)\\left(\\displaystyle\\int_a^b g^2(x)dx\\right)$\n    \\item Gaussian integral:\\smallbreak\n        $\\displaystyle\\int_{-\\infty}^{+\\infty}e^{-x^2}dx=\\sqrt{\\pi}$\n    \\item Bayes' theorem:\\\\\n        $P(A|B)=\\dfrac{P(B|A)P(A)}{P(B)}$\n    \\item Poisson distribution:\\\\\n        $P(X=r)=e^{-\\lambda}\\dfrac{\\lambda^r}{r!}$\\\\\n        mean = variance = $\\lambda$\n    \\item Lifetime distribution:\\\\\n        $f(t)=\\lambda e^{-\\lambda t}$\\\\\n        mean = $\\frac{1}{\\lambda}$, variance = $\\frac{1}{\\lambda^2}$\\\\\n    \\item Gaussian (normal) distribution:\\\\\n        $f(x)=\\dfrac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$\\\\\n        mean = $\\mu$, variance = $\\sigma^2$\n    \\item Linear 1st-order ODE:\\smallbreak\n        $\\dfrac{dy}{dx}+p(x)y=f(x)$:\\smallbreak\n        $y=\\dfrac{1}{\\mu(x)}\\displaystyle\\int\\mu(x)f(x)dx$, where $\\mu(x)=e^{\\int p(x)dx}$\n    \\item Linear 2nd-order ODE (constant coefficients):\\smallbreak\n        $\\dfrac{d^2y}{dx^2}+a\\dfrac{dy}{dx}+by=f(x)$:\\smallbreak\n        Complementary function: consider the auxiliary equation $\\lambda^2+a\\lambda+b=0$:\n        \\begin{itemize}[noitemsep, topsep=0pt]\n            \\item Case of real, distinct roots $\\lambda_1$, $\\lambda_2$: $y_c=Ae^{\\lambda_1x}+Be^{\\lambda_2x}$\n            \\item Case of equal roots $\\alpha$: $y_c=(A+Bx)e^{\\alpha x}$\n            \\item Case of complex roots $\\alpha\\pm\\beta i$: $y_c=e^{\\alpha x}(A\\sin\\beta x+B\\cos\\beta x)$\n        \\end{itemize}\n        Particular integral: \n        \\begin{itemize}[noitemsep, topsep=0pt]\n            \\item Case $f(x)$ is polynomial: try polynomial with same degree (or higher if needed)\n            \\item Case $f(x)=Ce^{kx}$: try $y_p=De^{kx}$ (or $Dxe^{kx}$, or $Dx^2e^{kx}$)\n        
    \\item Case $f(x)=C_1\\sin{kx}+C_2\\cos{kx}$: try $y_p=D_1\\sin{kx}+D_2\\cos{kx}$ (or $D_1x\\sin{kx}+D_2x\\cos{kx}$)\n        \\end{itemize}\n    \\item Partial differentiation:\n        \\begin{itemize}[topsep=0pt]\n            \\item Differential relations:\\smallbreak\n                $df=\\left(\\dfrac{\\partial f}{\\partial x}\\right)_ydx+\\left(\\dfrac{\\partial f}{\\partial y}\\right)_xdy$\n            \\item Chain rule:\\smallbreak\n                $\\left(\\dfrac{\\partial f}{\\partial u}\\right)_v=\\left(\\dfrac{\\partial f}{\\partial x}\\right)_y\\left(\\dfrac{\\partial x}{\\partial u}\\right)_v+\\left(\\dfrac{\\partial f}{\\partial y}\\right)_x\\left(\\dfrac{\\partial y}{\\partial u}\\right)_v$\\smallbreak\n                $\\left(\\dfrac{\\partial f}{\\partial v}\\right)_u=\\left(\\dfrac{\\partial f}{\\partial x}\\right)_y\\left(\\dfrac{\\partial x}{\\partial v}\\right)_u+\\left(\\dfrac{\\partial f}{\\partial y}\\right)_x\\left(\\dfrac{\\partial y}{\\partial v}\\right)_u$\n            \\item Reciprocity:\\smallbreak\n                $\\left(\\dfrac{\\partial x}{\\partial y}\\right)_z\\left(\\dfrac{\\partial y}{\\partial x}\\right)_z=1$\n            \\item Cyclic relations:\\smallbreak\n                $\\left(\\dfrac{\\partial x}{\\partial z}\\right)_y\\left(\\dfrac{\\partial y}{\\partial x}\\right)_z\\left(\\dfrac{\\partial z}{\\partial y}\\right)_x=-1$\n            \\item Exact differential:\\\\\n                $P(x,y)dx+Q(x,y)dy$ is an exact differential iff $\\dfrac{\\partial P}{\\partial y}=\\dfrac{\\partial Q}{\\partial x}$.\n            \\item Taylor series:\\smallbreak\n                $\\begin{aligned}\n                    f(x,y) &= f(x_0,y_0)\\\\\n                           &+ f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)\\\\\n                           &+ \\dfrac{1}{2!}\\big(f_{xx}(x_0,y_0)(x-x_0)^2+2f_{xy}(x_0,y_0)(x-x_0)(y-y_0)+f_{yy}(x_0,y_0)(y-y_0)^2\\big)+\\cdots\n                \\end{aligned}$\n        \\end{itemize}\n    \\item Stationary points of multi-variable functions: $f$ has a stationary point if $f_x=f_y=0$\n        \\begin{itemize}[noitemsep, topsep=0pt]\n            \\item Local minimum: $f_{xx}f_{yy}>f_{xy}^2$ with $f_{xx}>0$ and $f_{yy}>0$\n            \\item Local maximum: $f_{xx}f_{yy}>f_{xy}^2$ with $f_{xx}<0$ and $f_{yy}<0$\n            \\item Saddle point: $f_{xx}f_{yy}<f_{xy}^2$\n        \\end{itemize}\n    \\item Lagrange multipliers and Lagrangian function:\\\\\n        To find the stationary points of $f(x,y)$ subject to the constraint $g(x,y)=0$:\\\\\n        define the Lagrangian function\n            \\[L(x,y,\\lambda)=f(x,y)-\\lambda g(x,y)\\]\n        and solve $L_x=L_y=L_\\lambda=0$.\n    \\item Gradient of a scalar field:\\\\\n        $\\bm{\\nabla}\\Phi=(\\Phi_x,\\Phi_y,\\Phi_z)$\\\\\n        $d\\Phi=(\\bm{\\nabla}\\Phi)\\cdot d\\mathbf{x}$\n    \\item Divergence of a vector field:\\smallbreak\n        div $\\mathbf{F}=\\bm{\\nabla}\\cdot\\mathbf{F}=\\left(\\dfrac{\\partial}{\\partial x},\\dfrac{\\partial}{\\partial y},\\dfrac{\\partial}{\\partial z}\\right)\\cdot(F_x,F_y,F_z)=\\dfrac{\\partial F_x}{\\partial x}+\\dfrac{\\partial F_y}{\\partial y}+\\dfrac{\\partial F_z}{\\partial z}$\\smallbreak\n        $\\bm{\\nabla}\\cdot(\\bm{\\nabla}\\Phi)=\\nabla^2\\Phi=\\dfrac{\\partial^2\\Phi}{\\partial x^2}+\\dfrac{\\partial^2\\Phi}{\\partial y^2}+\\dfrac{\\partial^2\\Phi}{\\partial z^2}$\n    \\item Curl of a vector field:\\smallbreak\n        curl $\\mathbf{F}=\\bm{\\nabla}\\times\\mathbf{F}=\\left(\\dfrac{\\partial}{\\partial 
x},\\dfrac{\\partial}{\\partial y},\\dfrac{\\partial}{\\partial z}\\right)\\times(F_x,F_y,F_z)=\\left(\\dfrac{\\partial F_z}{\\partial y}-\\dfrac{\\partial F_y}{\\partial z},\\dfrac{\\partial F_x}{\\partial z}-\\dfrac{\\partial F_z}{\\partial x},\\dfrac{\\partial F_y}{\\partial x}-\\dfrac{\\partial F_x}{\\partial y}\\right)$\\\\\n        $\\bm{\\nabla}\\times(\\bm{\\nabla}\\Phi)=\\mathbf{0}$\n    \\item Normal to a surface:\\smallbreak\n        $\\mathbf{n}=\\dfrac{\\bm{\\nabla}\\Phi}{|\\bm{\\nabla}\\Phi|}$\n    \\item Line integral of a scalar field:\\smallbreak\n        $\\displaystyle\\int_\\Gamma\\Phi ds=\\displaystyle\\int_{s_1}^{s_2}\\Phi(\\mathbf{x}(s))ds=\\displaystyle\\int_{t_1}^{t_2}\\Phi(\\mathbf{x}(t))\\left|\\dfrac{d\\mathbf{x}}{dt}\\right|dt$\n    \\item Line integral of a vector field:\\smallbreak\n        $\\displaystyle\\int_\\Gamma\\mathbf{F}(\\mathbf{x})\\cdot d\\mathbf{x}=\\displaystyle\\int_{t_1}^{t_2}\\mathbf{F}(\\mathbf{x}(t))\\cdot\\dfrac{d\\mathbf{x}}{dt}dt$\n    \\item Conservative vector fields: $\\mathbf{F}=\\bm{\\nabla}\\Phi$\\smallbreak\n        $\\displaystyle\\int_\\Gamma\\mathbf{F}\\cdot d\\mathbf{x}=\\displaystyle\\int_\\Gamma(\\bm{\\nabla}\\Phi)\\cdot d\\mathbf{x}=\\Phi(\\mathbf{x}_2)-\\Phi(\\mathbf{x}_1)$ for $\\Gamma$ from $\\mathbf{x}_1$ to $\\mathbf{x}_2$\\smallbreak\n        $\\displaystyle\\oint_\\Gamma\\mathbf{F}\\cdot d\\mathbf{x}=\\displaystyle\\oint_\\Gamma(\\bm{\\nabla}\\Phi)\\cdot d\\mathbf{x}=0$\n    \\item Surface integral (flux):\\smallbreak\n        $\\displaystyle\\int_S\\mathbf{F}\\cdot d\\mathbf{S}=\\displaystyle\\int_S\\mathbf{F}\\cdot\\mathbf{n}dS$\n    \\item Gauss's theorem (divergence theorem):\\smallbreak\n        $\\displaystyle\\int_V(\\bm{\\nabla}\\cdot\\mathbf{F})dV=\\displaystyle\\int_S\\mathbf{F}\\cdot d\\mathbf{S}$, where $S$ is the bounding surface of $V$\n    \\item Stokes' theorem (curl theorem):\\smallbreak\n        $\\displaystyle\\int_S(\\bm{\\nabla}\\times\\mathbf{F})\\cdot d\\mathbf{S}=\\displaystyle\\int_C\\mathbf{F}\\cdot d\\mathbf{x}$, where $C$ is the boundary of $S$ (also called $\\partial S$)\n    \\newpage\n    \\item Decomposing matrix $\\mathbf{M}$ as the sum of a symmetric matrix $\\mathbf{S}$ and an anti-symmetric matrix $\\mathbf{A}$: \\\\\n        $\\mathbf{S}=\\frac{1}{2}(\\mathbf{M}+\\mathbf{M}^\\top)$, $\\mathbf{A}=\\frac{1}{2}(\\mathbf{M}-\\mathbf{M}^\\top)$\\\\\n        For an anti-symmetric matrix $\\mathbf{A}$, $\\mathbf{x}^\\top\\mathbf{Ax}=0$ for any column vector $\\mathbf{x}$.\n    \\item Hermitian conjugation:\\\\\n        If $\\mathbf{A}=(a_{ij})$, then the Hermitian conjugate is $\\mathbf{A}^\\dagger=(\\mathbf{A}^\\top)^*=(\\mathbf{A}^*)^\\top=(a^*_{ji})$\\\\\n        Hermitian matrix: $\\mathbf{A}^\\dagger=\\mathbf{A}$\n    \\item Trace: \n        For an $n\\times n$ matrix $\\mathbf{A}$, trace($\\mathbf{A}$)=$\\displaystyle\\sum_{i=1}^n a_{ii}$\\\\\n        The trace of the product of a symmetric and an antisymmetric matrix is 0.\\\\\n        trace($\\mathbf{AB}$)=trace($\\mathbf{BA}$); this result generalises to any cyclic permutation of the order of multiplication.\n    \\item Minors and cofactors:\\\\\n        For an $n\\times n$ matrix $\\mathbf{A}=(a_{ij})$, let $\\mathbf{M}_{ij}$ be the $(n-1)\\times(n-1)$ submatrix obtained by deleting row $i$ and column $j$ of $\\mathbf{A}$:\n        \\begin{itemize}[noitemsep, topsep=0pt]\n            \\item Minor of the element $a_{ij}$ of $\\mathbf{A}$: $|\\mathbf{M}_{ij}|$\n            \\item Cofactor of $a_{ij}$: $A_{ij}=(-1)^{i+j}|\\mathbf{M}_{ij}|$\n                \\[\\begin{pmatrix}\n                    + & - & + & 
\\dots \\\\\n                    - & + & - & \\dots \\\\\n                    + & - & + & \\dots \\\\\n                    \\vdots & \\vdots & \\vdots & \\ddots\n                \\end{pmatrix}\\]\n            \\item Classical adjoint: (adj$\\mathbf{A}$)$_{ij}=A_{ji}$\n                \\[\\begin{pmatrix}\n                    A_{11} & A_{21} & \\dots  & A_{j1} & \\dots  & A_{n1}\\\\\n                    A_{12} & A_{22} & \\dots  & A_{j2} & \\dots  & A_{n2}\\\\\n                    \\vdots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots\\\\\n                    A_{1i} & A_{2i} & \\dots  & A_{ji} & \\dots  & A_{ni}\\\\\n                    \\vdots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots\\\\\n                    A_{1n} & A_{2n} & \\dots  & A_{jn} & \\dots  & A_{nn}\\\\\n                \\end{pmatrix}\\]\n        \\end{itemize}\n    \\item Determinants:\\smallbreak\n        $|\\mathbf{A}|=\\displaystyle\\sum_{j=1}^n a_{ij}A_{ij}$ for any fixed $i$; or\\\\\n        $|\\mathbf{A}|=\\displaystyle\\sum_{i=1}^n a_{ij}A_{ij}$ for any fixed $j$; or\\smallbreak\n        product of the elements on the diagonal if the matrix is triangular.\n        \\begin{itemize}[topsep=0pt]\n            \\item $\\mathbf{A}(\\text{adj}\\mathbf{A})=(\\text{det}\\mathbf{A})\\mathbf{I}$\n            \\item Interchanging any two rows or columns of $\\mathbf{A}$ changes the sign of $\\text{det}\\mathbf{A}$\n            \\item $\\text{det}\\mathbf{A}=0$ if any two rows or columns are the same\n            \\item Multiplying all the elements of any one row or column of $\\mathbf{A}$ by $\\lambda$ multiplies $\\text{det}\\mathbf{A}$ by $\\lambda$\n            \\item Adding a multiple of one row (column) to another row (column) leaves $\\text{det}\\mathbf{A}$ unchanged\n            \\item $\\text{det}\\mathbf{AB}=(\\text{det}\\mathbf{A})(\\text{det}\\mathbf{B})$\n            \\item $\\text{det}\\mathbf{A}=\\text{det}\\mathbf{A}^\\top$\n        \\end{itemize}\n    \\item Eigenvalues:\n        \\begin{itemize}[topsep=0pt]\n            \\item $\\text{det}\\mathbf{A}=\\displaystyle\\prod_{i=1}^n \\lambda_i$\n            \\item trace$(\\mathbf{A})=\\displaystyle\\sum_{i=1}^n \\lambda_i$\n        \\end{itemize}\n    \\item Diagonalisation of real symmetric matrices: if $\\mathbf{A}$ is real symmetric, then:\n        \\begin{itemize}[topsep=0pt]\n            \\item $\\mathbf{A}$ has $n$ real eigenvalues $\\lambda_1$, $\\lambda_2$, ..., $\\lambda_n$ (not necessarily distinct);\n            \\item $\\mathbf{A}$ has $n$ linearly independent eigenvectors $\\mathbf{e}_1$, $\\mathbf{e}_2$, ..., $\\mathbf{e}_n$ that form an orthonormal basis;\n            \\item $\\mathbf{A}$ can be diagonalised by setting $\\mathbf{X}=\\begin{pmatrix}\n                    \\mathbf{e}_1 & \\mathbf{e}_2 & \\dots & \\mathbf{e}_n\n                \\end{pmatrix}$, then \\smallbreak\n                $\\mathbf{A}'=\\mathbf{X}^\\top\\mathbf{AX}=\n                \\begin{pmatrix}\n                    \\lambda_1 & 0 & \\dots & 0\\\\\n                    0 & \\lambda_2 & \\dots & 0\\\\\n                    \\vdots & \\vdots & \\ddots & \\vdots\\\\\n                    0 & 0 & \\dots & \\lambda_n \n                \\end{pmatrix}$\n        \\end{itemize}\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "d0304e2e97c0a7f7d2a1a01a2fd3ea3495e1f60f", "size": 16120, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "NST Maths Formulae.tex", "max_stars_repo_name": "VictorZXY/nst-part-ia-maths-formulae-cheat-sheet", "max_stars_repo_head_hexsha": 
"b2f29b1fe1829ed6ac1d27151e75eddcc4eb88ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "NST Maths Formulae.tex", "max_issues_repo_name": "VictorZXY/nst-part-ia-maths-formulae-cheat-sheet", "max_issues_repo_head_hexsha": "b2f29b1fe1829ed6ac1d27151e75eddcc4eb88ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NST Maths Formulae.tex", "max_forks_repo_name": "VictorZXY/nst-part-ia-maths-formulae-cheat-sheet", "max_forks_repo_head_hexsha": "b2f29b1fe1829ed6ac1d27151e75eddcc4eb88ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.6115702479, "max_line_length": 385, "alphanum_fraction": 0.5826923077, "num_tokens": 5936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580903722561, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.5692328854638884}}
{"text": "\\chapter{Lattice Systems}\n\n\\section{Free Fermion Systems}\nIn this section, we consider the system whose Hamiltonian composed of quadratic fermionic operators, i.e.,\n\\begin{equation}\n\t\\hat H_{\\mathrm{free}} = \\sum_{i,j=1}^N A_{ij} c_i^\\dagger c_j + \\frac{1}{2}\\sum_{i,j=1}^N B_{ij} c_i c_j + \\frac{1}{2}\\sum_{i,j=1}^N B_{ij}^* c_j^\\dagger c_i^\\dagger, \\label{eq:lattice-free-fermion-hamiltonian}\n\\end{equation}\nwhere $t_{ij}$ is a Hermitian matrix, and $\\Delta_{ij}$ is anti-symmetric.\nIn the Nambu basis \n\\begin{equation}\n\t\\Psi = (c_1,\\dots,c_N,c_1^\\dagger,\\dots,c_N^\\dagger)^T,\n\\end{equation}\nthe Hamiltonian has the form\\footnote{Without loss of generality, in the following we always assume that the sum of chemical potential is zero, i.e., $\\mathrm{Tr} A=0$.}\n\\begin{equation}\n\t\\hat H_{\\mathrm{free}} = \\frac{1}{2} \\sum_{i,j=1}^{2N} \\Psi^\\dagger_i H_{ij}^{\\Psi} \\Psi_j + \\frac{1}{2}\\mathrm{Tr}A,\n\\end{equation}\nwhere the single-body matrix $H^{\\Psi}$ is a $2N\\times 2N$ Hermitian matrix\n\\begin{equation}\n\tH^{\\Psi} = \\left[\\begin{array}{cc} \n\t\tA & B \\\\\n\t\t-B^* & -A^* \n\t\\end{array}\\right].\n\\end{equation}\n\n\\subsection{Majorana Representation}\n\nThe Majorana operators are defined as:\n\\begin{equation}\n\t\\left[\\begin{array}{c} \\omega_{i} \\\\ \\omega_{i+N} \\end{array}\\right]\n\t= \\left[\\begin{array}{cc} \n\t\t1 & 1 \\\\ \n\t\ti & -i \n\t\\end{array}\\right] \\left[\\begin{array}{c} \n\t\tc_i \\\\ c_i^\\dagger \n\t\\end{array}\\right], \\quad \n\t\\left[\\begin{array}{c} c_i \\\\ c_i^\\dagger \\end{array}\\right]\n\t= \\frac{1}{2} \\left[\\begin{array}{cc} \n\t\t1 & -i \\\\ \n\t\t1 & i \n\t\\end{array}\\right] \\left[\\begin{array}{c} \n\t\t\\omega_{i} \\\\ \\omega_{i+N}\n\t\\end{array}\\right].\n\\end{equation}\nThe fermionic bilinear in the Majorana basis has the form\n\\begin{equation}\n\t\\hat H = -\\frac{i}{4} \\sum_{i,j=1}^{2N} H_{ij} \\omega_i \\omega_j\n\\end{equation}\nwhere the single-body matrix $H$ is a $2N \\times 2N$ real anti-symmetric matrix:\n\\begin{equation}\n\tH = \\left[\\begin{array}{cc} \n\t\t-A^I - B^I & A^R - B^R \\\\\n    \t-A^R - B^R &  -A^I + B^I \n\t\\end{array}\\right].\n\\end{equation}\nwhere we have define $A^{R/I} = \\mathrm{Re} A / \\mathrm{Im} A$ and $B^{R/I} = \\mathrm{Re} B / \\mathrm{Im} B$.\nConversely, if we have a Majorana bilinear \n\\begin{equation}\n\t\\frac{i}{2} \\sum_{i,j=1}^{2N} M_{ij}\\omega_i \\omega_j, \\quad\n\tM = \\left[\\begin{array}{cc}\n\t\tM^{11} & M^{12} \\\\ M^{21} & M^{22}\n\t\\end{array} \\right],\n\\end{equation}\nit can be transformed back to ordinary fermionic bilinear (\\ref{eq:lattice-free-fermion-hamiltonian}) where\n\\begin{equation}\n\\begin{aligned}\n\tA &= M^{21} - M^{12} + i M^{11} + i M^{22}, \\\\\n\tB &= M^{21} + M^{12} + i M^{11} - i M^{22}.\n\t\\label{eq:lattice-majorana-bilinear-to-fermion}\n\\end{aligned}\n\\end{equation}\nA real anti-symmetric matrix can be transformed to standard form by an orthogonal transformation $O$:\n\\begin{equation}\n\\begin{aligned}\n\tH &= O \\cdot \\Sigma(\\bm \\lambda) \\cdot O^T, \\\\\n\t\\Sigma(\\bm \\lambda) &= i\\sigma_y \\otimes \\mathrm{diag}(\\lambda_1,\\cdots,\\lambda_n).\n\\end{aligned}\n\\end{equation}\nMake the basis transformation\n\\begin{equation}\n\t\\gamma_n = \\sum_{j=1}^{2N} O_{jn} \\omega_j,\n\\end{equation}\nthe Hamiltonian becomes the standard form:\n\\begin{equation}\n\\begin{aligned}\n\tH &= -\\frac{i}{4} \\sum_{i=1}^N \\lambda_i (\\gamma_i \\gamma_{i+N}-\\gamma_{i+N} \\gamma_i) 
\\\\\n\t&= -\\frac{i}{2} \\sum_{i=1}^N \\lambda_i \\gamma_i \\gamma_{i+N}.\n\\end{aligned}\n\\end{equation}\nEach $\\gamma_i \\gamma_{i+N}$ pair can then transforms to independent fermion mode:\n\\begin{equation}\n\\begin{aligned}\n\t-\\frac{i}{2}\\gamma_i \\gamma_{i+N} \n\t&= -\\frac{i}{2}(d_i + d_i^\\dagger)(id_i-id_i^\\dagger) \\\\ \n\t&= d_i^\\dagger d_i-\\frac{1}{2}.\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsection{Gaussian States}\nThe Fermionic Gaussian states are those states with Gaussian form density operator:\n\\begin{equation}\n\t\\hat \\rho \\propto \\exp \\left(\\frac{i}{2}\\sum_{i,j=1}^{2N}M_{ij}\\omega_i \\omega_j \\right),\n\\end{equation}\nwhere the matrix $M$ is real and anti-symmetric.\\footnote{In particular, any thermal state has this form, with $M = \\beta H/2$. The ground state of the free fermion system, though being pure state, can be regarded as the Gaussian state in the limit $M = \\lim_{\\beta \\rightarrow \\infty} \\beta H$.}\nIf we expand the Gaussian form, the density operator becomes a Majorana polynomial:\\footnote{Note that the coefficient $\\Gamma$ in each order is not the direct expansion of the matrix $M$, since the direct expansion contains identical Majorana operators. That is, the $n$-th order expansion of the Majorana Gaussian form may contribute to the ($n-2m$)-th order term in the Majorana polynomial.}\n\\begin{equation}\n\t\\hat{\\rho} = \\frac{\\mathbb{I}}{2^N} + \\sum_{n=1}^{N}\\frac{i^n}{2^N}\\sum_{1\\le i_{1}<\\cdots<i_{2n} \\le 2N}\\Gamma_{i_{1}\\cdots i_{2n}} \\omega_{i_1}\\cdots\\omega_{i_{2n}},\n\\end{equation}\nwhere the coefficient $\\Gamma_{i_1 \\cdots i_{2n}}$ is the $2n$-point correlation function:\n\\begin{equation}\n\t\\Gamma_{i_1 \\cdots i_{2n}} = i^n \\langle \\omega_{i_1} \\cdots \\omega_{i_{2n}}\\rangle, \\quad i_m \\ne i_n.\n\\end{equation}\nIn particular, the 2-point function \n\\begin{equation}\n\t\\Gamma_{ij} = i\\langle \\omega_i \\omega_j\\rangle - i\\delta_{ij} = \\frac{i}{2}\\langle [\\omega_i, \\omega_j]\\rangle\n\\end{equation}\nis also called the \\textit{covariance matrix}. 
\nFor a Gaussian state, all $2n$-point correlations are determined by the covariance matrix via Wick's theorem.\n\\begin{framedrmk}[Two-point Correlation Function]\nWe are usually more familiar with the ordinary fermionic two-point correlation functions $\\langle c^\\dagger_i c_j\\rangle$ or $\\langle c_i c_j\\rangle$, which are related to the Majorana covariance matrix by:\n\\begin{equation}\n\\begin{aligned}\n\t\\langle c_i^\\dagger c_j\\rangle &= \\frac{1}{4}(\n\t\t\\Gamma^{21}_{ij} - \\Gamma^{12}_{ij} + \n\t\ti \\Gamma^{11}_{ij} + i \\Gamma^{22}_{ij})\n\t\t+\\frac{1}{2}\\delta_{ij}, \\\\\n\t\\langle c_i c_j\\rangle &= \\frac{1}{4}(\n\t\t\\Gamma^{21}_{ij} + \\Gamma^{12}_{ij} + \n\t\ti \\Gamma^{11}_{ij} - i \\Gamma^{22}_{ij}), \\\\\n\t\\langle c_i^\\dagger c_j^\\dagger\\rangle &= \\frac{1}{4}(\n\t\t-\\Gamma^{21}_{ij} - \\Gamma^{12}_{ij} + \n\t\ti \\Gamma^{11}_{ij} - i \\Gamma^{22}_{ij}).\n\\end{aligned}\n\\end{equation}\n\\end{framedrmk}\n\nThe relations between the correlations at each order can be neatly captured by the Grassmannian Gaussian form:\n\\begin{equation}\n\\begin{aligned}\n\t\\omega(\\hat \\rho, \\theta) \n\t&= \\frac{1}{2^N} \\exp \\left(\\frac{i}{2} \\sum_{i,j=1}^{2N}\\Gamma_{ij}\\theta_i \\theta_j \\right) \\\\\n\t&=\\frac{1}{2^N} + \\sum_{n=1}^{N}\\frac{i^n}{2^N}\\sum_{1\\le i_{1}<\\cdots<i_{2n} \\le 2N}\\Gamma_{i_{1}\\cdots i_{2n}} \\theta_{i_1} \\cdots \\theta_{i_{2n}}.\n\\end{aligned}\n\\end{equation}\nWhen the covariance matrix is obtained, we can use the same routine to canonicalize the skew-symmetric matrix $\\Gamma$:\n\\begin{equation*}\n\t\\Gamma = O \\cdot \\Sigma(\\bm \\lambda) \\cdot O^T, \\quad\n\t\\tilde\\theta_n = \\sum_i O_{in} \\theta_i,\n\\end{equation*}\nand the density matrix in the Grassmann representation is\n\\begin{equation}\n\t\\omega(\\hat \\rho, \\theta) \n\t= \\prod_{n=1}^N \\left(\\frac{1}{2} e^{i \\lambda_n \\tilde\\theta_n \\tilde\\theta_{n+N}} \\right)\n\t= \\prod_{n=1}^N \\left(\\frac{1+i\\lambda_n \\tilde\\theta_n\\tilde\\theta_{n+N}}{2}  \\right).\n\\end{equation}\nThis state corresponds to a product state $\\rho = \\otimes_n \\rho_n$ where\n\\begin{equation}\n\t\\rho_n = \\frac{1}{2} \\left[\\begin{array}{cc}\n\t\t1 + \\lambda_n & 0 \\\\\n\t\t0 & 1 - \\lambda_n\n\t\\end{array} \\right].\n\\end{equation}\nThe entanglement entropy is then\n\\begin{equation}\n\tS=\\sum_n S_n = -\\sum_n \\left[\n\t\\left(\\frac{1+\\lambda_n}{2}\\right)\\ln\\left(\\frac{1+\\lambda_n}{2}\\right)\n\t+ \\left(\\frac{1-\\lambda_n}{2}\\right)\\ln\\left(\\frac{1-\\lambda_n}{2}\\right)\\right].\n\\end{equation}\n\n\n\n\\subsection{Lindblad Master Equation}\nFor the Lindblad equation\n\\begin{equation}\n\t\\frac{d}{dt} \\hat\\rho = -i[\\hat H, \\hat \\rho] + \\sum_{\\mu=1}^{m} \\hat L_\\mu \\hat\\rho \\hat L_\\mu^\\dagger -\\frac{1}{2} \\sum_{\\mu=1}^{m} \\{\\hat L_\\mu^\\dagger \\hat L_\\mu, \\hat \\rho\\},\n\\end{equation}\nwhen the \\textit{jump operators} $\\hat L_\\mu$ contain only linear Majorana operators, the Lindblad equation preserves Gaussianity. \nFor \\textit{jump operators} containing up to quadratic Majorana terms, the evolution will break the Gaussian form; however, the $2n$-point correlations are still solvable for a free fermion system.\n\n\\subsubsection*{Dynamics of Covariance Matrix}\n\nWe assume that the jump operators have up to quadratic Majorana terms. 
\nIn particular, we denote the linear terms and the Hermitian quadratic terms as\n\\begin{equation}\n\t\\hat L_r = \\sum_{j=1}^{2N} L^r_{j} \\omega_j, \\quad\n\t\\hat L_s = \\sum_{j,k=1}^{2N} M^s_{jk} \\omega_j \\omega_k.\n\\end{equation}\nNow consider the dynamics of the expectation value $\\langle\\hat O\\rangle$:\n\\begin{equation}\n\\begin{aligned}\n\t\\frac{d}{dt}\\langle \\hat O\\rangle\n\t&= -i \\mathrm{Tr} [\\hat O (\\hat H \\hat\\rho-\\hat\\rho \\hat H)] \n\t\t+ \\sum_\\mu \\mathrm{Tr}[\\hat O \\hat L_\\mu \\hat\\rho \\hat L_\\mu^\\dagger]\n\t\t- \\frac{1}{2}\\sum_\\mu \\mathrm{Tr}[\\hat O \\hat L_\\mu^\\dagger \\hat L_\\mu \\hat\\rho\n\t\t+ \\hat O \\hat\\rho \\hat L_\\mu^\\dagger \\hat L_\\mu] \\\\\n\t&= \\left\\langle\n\t\ti[\\hat H, \\hat O] + \\sum_\\mu \\hat L_\\mu^\\dagger \\hat O\\hat L_\\mu - \\frac{1}{2} \\sum_\\mu\\{\\hat L_\\mu^\\dagger \\hat L_\\mu, \\hat O \\}\n\t\t\\right\\rangle.\n\\end{aligned}\n\\end{equation}\nWe can express the dynamics of an operator in the Heisenberg picture:\n\\begin{equation}\n\t\\frac{d\\hat O}{dt} = i[\\hat H, \\hat O] + \\mathcal D_r[\\hat O] + \\mathcal D_s[\\hat O],\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal D_r[\\hat O] \n\t&= \\sum_r \\hat L_r^\\dagger \\hat O\\hat L_r - \\frac{1}{2} \\sum_r\\{\\hat L_r^\\dagger \\hat L_r, \\hat O \\}\n\t= \\frac{1}{2}\\sum_r [\\hat L_r^\\dagger \\hat L_r, \\hat O] - \\sum_r \\hat L_r^\\dagger[\\hat L_r,\\hat O],  \\\\\n\t\\mathcal D_s[\\hat O] \n\t&= \\sum_s \\hat L_s \\hat O\\hat L_s - \\frac{1}{2} \\sum_s\\{\\hat L_s^2, \\hat O \\}\n\t= -\\frac{1}{2} \\sum_s [\\hat L_s,[\\hat L_s,\\hat O]].\n\\end{aligned}\n\\end{equation}\nThe equation of motion can be further simplified to:\n\\begin{equation}\n\t\\frac{d\\hat O}{dt} \n\t= i[\\hat H_{\\mathrm{eff}}, \\hat O] - \\sum_r \\hat L_r^\\dagger[\\hat L_r,\\hat O] -\\frac{1}{2} \\sum_s [\\hat L_s,[\\hat L_s,\\hat O]],\n\\end{equation}\nwhere the effective Hamiltonian is\n\\begin{equation}\n\t\\hat H_{\\mathrm{eff}} = \\sum_{ij} \\left(-\\frac{i}{4}H_{ij}-\\frac{1}{2} B^I_{ij}\\right)\\omega_i\\omega_j,\n\\end{equation}\nwhere we have defined $B_{ij} = \\sum_r L^r_i L^{r*}_j$.\nUsing the anticommutation relation $\\{\\omega_i, \\omega_j\\} = 2\\delta_{ij}$, we have the following relations\n\\begin{equation}\n\\begin{aligned}[]\n\t[\\omega_k,\\omega_i \\omega_j] &= 2(\\delta_{ki}\\omega_j-\\delta_{kj}\\omega_i), \\\\\n\t[\\omega_k \\omega_l, \\omega_i \\omega_j] \n\t&= 2(\\delta_{ki}\\omega_j \\omega_l-\\delta_{kj} \\omega_i \\omega_l + \\delta_{li}\\omega_k \\omega_j - \\delta_{lj}\\omega_k\\omega_i),\n\\end{aligned}\n\\end{equation}\nand let $\\hat O_{ij} = \\omega_i\\omega_j - \\delta_{ij}\\mathbb I$.\nThe first term of the EOM is:\n\\begin{equation*}\n\\begin{aligned}[]\n\ti\\langle[\\hat H_{\\mathrm{eff}}, \\hat O_{ij}]\\rangle_t\n\t&= \\sum_{kl}\\left(\\frac{1}{4}H-\\frac{i}{2}B^I \\right)_{kl} \\langle[\\omega_k \\omega_l, \\omega_i \\omega_j]\\rangle_t \\\\\n\t&= \\sum_{kl} \\left(\\frac{1}{2}H-i B^I\\right)_{kl} \\langle \n\t\t\\delta_{ki}\\omega_j \\omega_l-\\delta_{kj} \\omega_i \\omega_l + \n\t\t\\delta_{li}\\omega_k \\omega_j - \\delta_{lj}\\omega_k\\omega_i\n\t\\rangle_t \\\\\n\t&= \\left[\n\t\t(H-2iB^I)^T \\cdot \\langle\\hat O\\rangle_t + \n\t\t\\langle\\hat O\\rangle_t \\cdot (H-2iB^I)\n\t\\right]_{ij}.\n\\end{aligned}\n\\end{equation*}\nThe second term is\n\\begin{equation*}\n\\begin{aligned}\n\t-\\sum_r \\langle \\hat L_r^\\dagger[\\hat L_r, \\hat O_{ij}] \\rangle_t\n\t&= -\\sum_{kl} B_{kl}^* \\langle \\omega_k [\\omega_l, \\omega_i 
\\omega_j] \\rangle_t \\\\\n\t&= -2\\sum_{kl} B_{kl}^* \\langle \n\t\t\\delta_{li} \\omega_k \\omega_j - \n\t\t\\delta_{lj} \\omega_k \\omega_i\n\t\\rangle_t \\\\\n\t&= -\\left[2B\\cdot \\langle\\hat O\\rangle_t + 2\\langle\\hat O\\rangle_t\\cdot B^* + 4i B^I \\right]_{ij}.\n\\end{aligned}\n\\end{equation*}\nAnd the third term is\n\\begin{equation*}\n\\begin{aligned}\n\t-\\frac{1}{2}\\sum_s \\langle[\\hat L_s,[\\hat L_s, \\hat O_{ij}]]\\rangle_t\n\t&= -\\frac{1}{2} \\sum_s \\sum_{kl} M^s_{kl}\\langle[\\hat L_s,[\\omega_k \\omega_l, \\omega_i \\omega_j]]\\rangle_t \\\\\n\t&= 2\\sum_s \\sum_{k} \\left\\langle M^s_{ik}[\\hat L_s,\\omega_k \\omega_j]-[\\hat L_s,\\omega_i \\omega_k]M^s_{kj} \\right\\rangle_t \\\\\n\t&= 8\\sum_{s,kl} \\left\\langle M^s_{ik}[-M^s_{kl}\\omega_l\\omega_j+\\omega_k\\omega_l M^s_{lj}]+[M^s_{il}\\omega_l\\omega_k-\\omega_i\\omega_l M^s_{lk}]M^s_{kj} \\right\\rangle_t \\\\\n\t&= 8\\sum_s \\left[2 M^s \\cdot \\langle\\hat O\\rangle_t\\cdot M^s-(M^s)^2 \\cdot \\langle\\hat O\\rangle_t - \\langle\\hat O\\rangle_t\\cdot(M^s)^2 \\right]_{ij}.\n\\end{aligned}\n\\end{equation*}\nPutting these together, we obtain the EOM of the covariance matrix $\\Gamma_{ij}(t)=i\\langle\\hat O_{ij}\\rangle_t$:\n\\begin{equation}\n\t\\partial_t \\Gamma = X^T\\cdot\\Gamma + \\Gamma \\cdot X + \\sum_s (Z^s)^T \\cdot \\Gamma\\cdot Z^s + Y,\n\\end{equation}\nwhere\n\\begin{equation}\n\tX = H - 2B^R + 8 \\sum_s (\\mathrm{Im} M^s)^2, \\quad\n\tY = 4B^I, \\quad \n\tZ^s = 4 \\mathrm{Im} M^s.\n\\end{equation}\n\n\n\n\n\n\n", "meta": {"hexsha": "690d0099da9687f77ca870f796f3690fe2a9d556", "size": 12783, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/LatticeSystem.tex", "max_stars_repo_name": "jayren3996/Notes_on_QFT", "max_stars_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/LatticeSystem.tex", "max_issues_repo_name": "jayren3996/Notes_on_QFT", "max_issues_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/LatticeSystem.tex", "max_forks_repo_name": "jayren3996/Notes_on_QFT", "max_forks_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.23183391, "max_line_length": 394, "alphanum_fraction": 0.6698740515, "num_tokens": 4911, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744939732856, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5689977805714934}}
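A forward-Euler sketch of this covariance-matrix EOM; the matrices below are placeholder assumptions rather than a physical model.\n\\begin{verbatim}
# Integrate d(Gamma)/dt = X^T Gamma + Gamma X + sum_s (Z^s)^T Gamma Z^s + Y.
import numpy as np

def euler_step(Gamma, X, Y, Zs, dt):
    dG = X.T @ Gamma + Gamma @ X + sum(Z.T @ Gamma @ Z for Z in Zs) + Y
    return Gamma + dt * dG

n = 4                                   # 2N Majorana operators
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
Gamma = (A - A.T) / 2                   # antisymmetric initial covariance
X, Y, Zs = -np.eye(n), np.zeros((n, n)), []

for _ in range(100):
    Gamma = euler_step(Gamma, X, Y, Zs, dt=0.01)
print(np.max(np.abs(Gamma)))            # decays toward zero for X = -I
\\end{verbatim}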
{"text": "\\documentclass[a4paper,11pt]{article}\n\\usepackage{a4wide}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\begin{document}\n\\begin{center}\n  {\\LARGE\\bf Supplemental note for Week 4 Part 1}\n  \\end{center}\n\\begin{flushright}\n  {\\large\\bf ver. 20170416-01}\\\\\n \\ \\\\\n{\\large\\bf Ryoichi Yamamoto}\\\\\n\\end{flushright}\n\n\\section{Average and variance of the cumulative impulse $\\Delta\\mathbf{W}_i$}\n\nLet us start with the Langevin equation\n\\begin{equation}\nm\\frac{d\\mathbf{V}(t)}{dt}={-\\zeta\\mathbf{V}(t)}+{\\mathbf{F}(t)},\n\\tag{F2}\n\\end{equation}\nwhere the random force $\\mathbf{F}(t)$ satisfies the following conditions\n\\begin{equation}\n\\langle \\mathbf{F}(t)\\rangle=\\mathbf{0}\n\\tag{F3}\n\\end{equation}\n\\begin{equation}\n\\langle \\mathbf{F}(t)\\mathbf{F}(0)\\rangle = {2k_B T\\zeta}\\mathbf{I}\\delta(t),\\tag{F4}\n\\end{equation}\nwith\n$\\mathbf{0}\\equiv(0,0,0)$ and $\\mathbf{I}\\equiv\\begin{bmatrix}1&0&0\\\\0&1&0\\\\0&0&1\\end{bmatrix}$.\\\\\nWe now discretize the time $t$ using an increment $\\Delta t$, such that\n$t_i\\equiv i\\Delta t$, and define the cumulative impulse during the\ninterval $t_i\\le t\\le t_{i+1} = t_i + \\Delta t$, as\n\\begin{equation}\n\\Delta\\mathbf{W}_i\n\\equiv\\int_{t_i}^{t_{i+1}} dt\\mathbf{F}(t).\n\\tag{F8}\n\\end{equation}\nFrom Eq.(F3), it is straightforward to show\n\\begin{equation}\n\\langle\\Delta\\mathbf{W}_i\\rangle\n=\\int_{t_i}^{t_{i+1}} dt\\langle\\mathbf{F}(t)\\rangle=\\mathbf{0}.\n\\tag{F10}\n\\end{equation}\nAlso from Eq.(F4), for $j\\ne i$\n\\begin{eqnarray}\n\\langle\\Delta\\mathbf{W}_i\\Delta\\mathbf{W}_{j\\ne i}\\rangle\n&=&\\int_{t_i}^{t_{i+1}} dt\\int_{t_j}^{t_{j+1}} dt'\\langle\\mathbf{F}(t)\\mathbf{F}(t')\\rangle\\\\\n&=&2k_B T\\zeta\\mathbf{I} \\int_{t_i}^{t_{i+1}} dt\\int_{t_j}^{t_{j+1}} dt'\n\\delta(t-t')\\\\\n&=&\\mathbf{O},\n\\end{eqnarray}\nwhere $\\mathbf{O}\\equiv\\begin{bmatrix}0&0&0\\\\0&0&0\\\\0&0&0\\end{bmatrix}$.\\\\\nFor $j=i$\n\\begin{eqnarray}\n\\langle\\Delta\\mathbf{W}_i\\Delta\\mathbf{W}_i\\rangle\n&=&\\int_{t_i}^{t_{i+1}} dt\\int_{t_i}^{t_{i+1}} dt'\\langle\\mathbf{F}(t)\\mathbf{F}(t')\\rangle\\\\\n&=&2k_B T\\zeta\\mathbf{I} \\int_{t_i}^{t_{i+1}} dt\\int_{t_i}^{t_{i+1}} dt'\n\\delta(t-t')\\\\\n&=&2k_B T\\zeta\\Delta t\\mathbf{I}.\n\\end{eqnarray}\nCombining Eqs.(3) and (6), we obtain\n\\begin{equation}\n\\langle \\Delta \\mathbf{W}_i\\Delta \\mathbf{W}_j\\rangle = {2k_B T\\zeta}\\Delta t\\mathbf{I}\\delta_{ij} .\n\\tag{F11}\n\\end{equation}\n\n%\\newpage\n\\section{Distribution of $\\Delta\\mathbf{W}_i$}\n\nHere we further divide $\\Delta t$ into $n$ segments ($n\\gg1$) of a\nvery small time span $\\epsilon$, {\\it i.e.}, $\\Delta t \\equiv n\\epsilon$, and \ndefine a new cumulative impulse over $\\epsilon$\n\\begin{equation}\n\\mathbf{W}^m_i\n\\equiv\\int_{t_i+(m-1)\\epsilon}^{t_{i}+m\\epsilon} dt\\mathbf{F}(t),\n\\end{equation}\nwhere $1\\le m\\le n$.\\\\\nRepeating the same procedure performed in the previous section, the following conditions are derived.\n\\begin{eqnarray}\n\\langle\\mathbf{W}^m_i\\rangle&=&\\mathbf{0}\\\\\n\\langle \\mathbf{W}^m_i\\mathbf{W}^l_j\\rangle &=& {2k_B T\\zeta}\\epsilon\\mathbf{I}\\delta_{ij}\\delta_{ml}\n\\end{eqnarray}\nEqs.(8) and (9) show that the mean and variance of the random numbers $W^m_{\\alpha,i}$ $(\\alpha\\in x,y,z)$ are zero and $2k_B T\\zeta\\epsilon$, respectively. 
\\\\\nFrom Eqs.(F8) and (7), we should notice that\n\\begin{equation}\n\\Delta\\mathbf{W}_i\n=\\mathbf{W}^1_i+\\mathbf{W}^2_i+\\cdots+\\mathbf{W}^n_i.\n\\end{equation}\nTherefore, from the central limit theorem Eqs.(D7)-(D9) introduced in\nPart 3 of Week 2, one realizes that the $\\Delta W_{\\alpha,i}$ should be\ndrawn from a \\textit{Gaussian} distribution, with average and variance\nequal to zero and $2k_B T\\zeta\\Delta t$, respectively, regardless of the\ndistribution of the $W^m_{\\alpha,i}$.\n\\end{document}\n\n", "meta": {"hexsha": "30ee946c649d9e4cb8d46eb644bb46005d6f6159", "size": 3587, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "edx-stochastic-data-analysis/downloaded_files/04/Supplemental_note_4-1.tex", "max_stars_repo_name": "mirandagil/extra-courses", "max_stars_repo_head_hexsha": "51858f5089b10b070de43ea3809697760aa261ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "edx-stochastic-data-analysis/downloaded_files/04/Supplemental_note_4-1.tex", "max_issues_repo_name": "mirandagil/extra-courses", "max_issues_repo_head_hexsha": "51858f5089b10b070de43ea3809697760aa261ec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "edx-stochastic-data-analysis/downloaded_files/04/Supplemental_note_4-1.tex", "max_forks_repo_name": "mirandagil/extra-courses", "max_forks_repo_head_hexsha": "51858f5089b10b070de43ea3809697760aa261ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.6020408163, "max_line_length": 159, "alphanum_fraction": 0.690270421, "num_tokens": 1453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458152, "lm_q2_score": 0.817574478416099, "lm_q1q2_score": 0.5689977749001126}}
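The statements of Eqs.\\,(F10) and (F11) and the central-limit argument can be checked numerically, as in the following sketch; the parameter values are arbitrary assumptions chosen only for the test.\n\\begin{verbatim}
# Build each Delta W as a sum of n non-Gaussian (uniform) impulses W^m
# with variance 2 kB T zeta eps, then check the mean and variance.
import numpy as np

kBT, zeta, dt, n = 1.0, 1.0, 0.01, 100
eps = dt / n
var_m = 2 * kBT * zeta * eps            # variance of each W^m
a = np.sqrt(3 * var_m)                  # Var[U(-a, a)] = a^2 / 3

rng = np.random.default_rng(1)
W = rng.uniform(-a, a, size=(20000, n)).sum(axis=1)  # Delta W samples

print(W.mean())                         # ~0                (Eq. F10)
print(W.var(), 2 * kBT * zeta * dt)     # ~2 kB T zeta dt   (Eq. F11)
\\end{verbatim}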
{"text": "\\section{Risk Importance Measures}\nAssociated test: \\texttt{tests/framework/PostProcessors/InterfacedPostProcessor/\\\\test-riskMeasuresDiscreteMultipleIE.xml}\nRisk Importance Measures (RIMs) are originally defined for each basic event in a Event-Tree/Fault-Tree analysis.\nIn a simulation based environment similar calculations can employed for boolean models.\n\nFor each component $i$ and for each IE the following quantities are calculated:\n\\begin{itemize}\n  \\item $R^0$ = probability of system failure\n  \\item $R^i_+$ = probability of system failure given component $i$ has failed\n  \\item $R^i_-$ = probability of system failure given component $i$ is prefectly reliable\n\\end{itemize}\n\nFor each component $i$, four RIMs indexes can be computed:\n\\begin{itemize}\n  \\item $RAW^i = R^i_-/R^0$\n  \\item $RAW^i = R^0/R^i_+$\n  \\item $B^i = R^i_- R^i_+$\n  \\item $FV^i = (R^0 -  R^i_-)/R^0$\n\\end{itemize}\n\nIn the asscoiated test, a system composed by four components (i.e., A, B, C and D) is analyzed for 2 Initiating Events (IEs), IE1 and IE2.\nData associated for each IE is as follows:\n\\begin{itemize}\n  \\item IE1 (probability $p1=0.01$); 1 single MCS, $MCS1 = A+BC$\n  \\item IE2 (probability $p1=0.02$); 1 single MCS, $MCS2 = BCD$\n\\end{itemize}\n\nIn the associated test, the following probabilities are provided:\n\\begin{itemize}\n  \\item $p_A = 0.01$\n  \\item $p_B = 0.05$\n  \\item $p_C = 0.1 $\n  \\item $p_D = 0.02$\n\\end{itemize}\n\nFor each IE, the symbolic expressions and the numerical expressions of $R^0$, $R^i_+$ and $R^i_-$ are calculated\n(see Tables~\\ref{tab:RIM_IE1_symb}, \\ref{tab:RIM_IE1_num}, \\ref{tab:RIM_IE2_symb} and \\ref{tab:RIM_IE2_num}).\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE1: symbolic expressions of $R^0$, $R^i_+$ and $R^i_-$}\n  \\label{tab:RIM_IE1_symb}\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    & $R0$ & $R^i_+$ & $R^i_-$ \\\\\n    \\midrule\n    A & A+BC & []   & BC   \\\\\n    B & A+BC & A+C  & A    \\\\\n    C & A+BC & A+B  & A    \\\\\n    D & A+BC & A+BC & A+BC \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE1: numerical value of $R^0$, $R^i_+$ and $R^i_-$}\n  \\label{tab:RIM_IE1_num}\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    & $R0$ & $R^i_+$ & $R^i_-$ \\\\\n    \\midrule\n    A & 0.01495 & 1.0     & 0.005   \\\\\n    B & 0.01495 & 0.109   & 0.01    \\\\\n    C & 0.01495 & 0.0595  & 0.01    \\\\\n    D & 0.01495 & 0.01495 & 0.01495 \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE2: symbolic expressions of $R^0$, $R^i_+$ and $R^i_-$}\n  \\label{tab:RIM_IE2_symb}\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    & $R0$ & $R^i_+$ & $R^i_-$ \\\\\n    \\midrule\n    A & BCD & BCD & BCD  \\\\\n    B & BCD & CD  & -    \\\\\n    C & BCD & BD  & -    \\\\\n    D & BCD & BC  & BCD  \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE2: numerical value of $R^0$, $R^i_+$ and $R^i_-$}\n  \\label{tab:RIM_IE2_num}\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    & $R0$ & $R^i_+$ & $R^i_-$ \\\\\n    \\midrule\n    A & 0.0001 & 0.0001 & 0.0001 \\\\\n    B & 0.0001 & 0.002  & 0.0    \\\\\n    C & 0.0001 & 0.001  & 0.0    \\\\\n    D & 0.0001 & 0.005  & 0.0    \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\nGiven the values provided above it is possible to linearly weight them with the probability associated to each IE (see 
Table~\\ref{tab:RIM_IE12}).\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE1+IE2: numerical values of $R^0$, $R^i_+$ and $R^i_-$}\n  \\label{tab:RIM_IE12}\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    & $R^0$ & $R^i_+$ & $R^i_-$ \\\\\n    \\midrule\n    A & 0.0001515 & 0.010002  & 0.000052  \\\\\n    B & 0.0001515 & 0.00113   & 0.0001    \\\\\n    C & 0.0001515 & 0.000615  & 0.0001    \\\\\n    D & 0.0001515 & 0.0002495 & 0.0001495 \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\nThen, it is possible to obtain the values of each of the four RIMs for each component (see Table~\\ref{tab:table1}):\n\n\\begin{table}[h!]\n  \\centering\n  \\caption{IE1+IE2: numerical values of the four RIMs for each component}\n  \\label{tab:table1}\n  \\begin{tabular}{c|cccc}\n    \\toprule\n    & RAW & RRW & FV & B \\\\\n    \\midrule\n    A & 66.01980198 & 2.913461538 & 0.656765677 & 0.00995  \\\\\n    B & 7.458745875 & 1.515       & 0.339933993 & 0.00103  \\\\\n    C & 4.059405941 & 1.515       & 0.339933993 & 0.000515 \\\\\n    D & 1.646864686 & 1.013377926 & 0.01320132  & 0.0001   \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n", "meta": {"hexsha": "6b0106d66a175e42d8af4b6399bc1194196bc0d7", "size": 4343, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tests/rims.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "doc/tests/rims.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "doc/tests/rims.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 31.2446043165, "max_line_length": 145, "alphanum_fraction": 0.605111674, "num_tokens": 1771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744850834648, "lm_q2_score": 0.6959583250334527, "lm_q1q2_score": 0.5689977692287757}}
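The weighted values and the RIM table above can be reproduced with a short script; the numbers are copied from the tables in this section, and the formulas are the four RIM definitions given earlier.\n\\begin{verbatim}
# Reproduce the IE1+IE2 RIM table from the weighted R0, R+, R- values.
R0 = 0.01 * 0.01495 + 0.02 * 0.0001     # p1*R0(IE1) + p2*R0(IE2) = 0.0001515
Rp = {"A": 0.010002, "B": 0.00113, "C": 0.000615, "D": 0.0002495}
Rm = {"A": 0.000052, "B": 0.0001,  "C": 0.0001,   "D": 0.0001495}

for i in Rp:
    RAW = Rp[i] / R0                    # Risk Achievement Worth
    RRW = R0 / Rm[i]                    # Risk Reduction Worth
    FV  = (R0 - Rm[i]) / R0             # Fussell-Vesely importance
    B   = Rp[i] - Rm[i]                 # Birnbaum importance
    print(i, round(RAW, 6), round(RRW, 6), round(FV, 6), round(B, 6))
\\end{verbatim}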
{"text": "% !TeX root = ../main.tex\n% Add the above to each chapter to make compiling the PDF easier in some editors.\n\n\\chapter{Theory}\\label{chapter:theory}\n\nIn this chapter, we introduce the theoretical foundations for our subsequent work. We begin by formally introducing the metrics we use to assess the performance of the discussed algorithms. We then introduce the problem of smoothed convex optimization and related variants that the examined algorithms address.\n\n\\section{Performance Metrics}\\label{section:theory:performance_metrics}\n\nWe say that an algorithm is \\emph{optimal}\\index{optimal algorithm} with respect to some performance metric if no algorithm can achieve a better score in the given metric given the same information. Crucially, optimality depends on the information given to the algorithm. We thus say that an offline algorithm of a minimization problem is optimal if its result always incurs the smallest possible cost while satisfying the given constraints. In contrast, an optimal online algorithm must not necessarily return the optimal offline solution. In fact, in many cases, online algorithms must necessarily perform worse than optimal offline algorithms due to the lack of provided information (in our case, the convex cost functions arrive over time). Naturally, these performance metrics also inform parts of our experimental analysis performed in \\cref{chapter:case_studies}.\n\n\\subsection{Approximations}\n\nWe begin by considering the offline case. In most cases, we seek to find optimal solutions to the offline problem. However, for some problems where the computational complexity of optimal solutions is high for large instances, it is beneficial to consider efficient algorithms that achieve close to optimal performance, motivating the definition of approximation algorithms. Note that we limit our definitions of performance metrics to minimization problems.\n\n\\begin{definition}\\index{approximation ratio}\n\\cite{Williamson2011} An $\\alpha$-approximation algorithm for a minimization problem $c$ is a polynomial-time algorithm $ALG$ that for all instances of the problem produces a solution whose value is within a factor of $\\alpha$ of the value of an optimal solution $OPT$, i.e., $c(ALG) \\leq \\alpha \\cdot c(OPT)$.\n\\end{definition}\n\nIn other words, an $\\alpha$-approximation guarantees that its results are at most a factor of $\\alpha$ worse than the optimal solution. The integral smoothed convex optimization problem is an example where \\citeauthor{Kappelmann2017}~\\cite{Kappelmann2017} and \\citeauthor{Albers2021_2}~\\cite{Albers2021_2} recently made substantial progress on approximation algorithms.\n\n\\subsection{Competitiveness}\n\nFor online algorithms, it is natural to consider an adaptation of the idea of approximation algorithms. Here, we compare the result of an online algorithm with the optimal offline solution.\n\n\\begin{definition}\\index{competitive ratio}\nAn $\\alpha$-competitive online algorithm for a minimization problem $c$ is an algorithm $ALG$ that for all instances of the problem produces a solution whose value is within a factor of $\\alpha$ of the value of an optimal offline solution $OPT$, i.e. $c(ALG) \\leq \\alpha \\cdot c(OPT)$.\n\\end{definition}\n\nWe observe that this definition is analogous to our earlier definition of approximation algorithms in the offline case. 
However, in contrast to approximation algorithms, where the limiting factor was the algorithm's complexity, the competitiveness of online algorithms is fundamentally restricted by the information available to an online algorithm compared to its offline variants. In smoothed convex optimization, considerable work has focused on finding online algorithms with a constant competitive ratio in the number of dimensions $d$.\n\nCrucially, the competitiveness of an online algorithm depends on the assumed adversary model. Commonly, three adversary models are used in the literature, which are described by \\citeauthor{Borodin1990}~\\cite{Borodin1990}. First, the \\emph{oblivious adversary}\\index{oblivious adversary} is the weakest adversary and only knows the algorithm's code but needs to construct the request sequence before any moves are made. Second, the \\emph{adaptive online adversary}\\index{adaptive online adversary} makes the next request based on the algorithm's previous answers but serves it immediately. Third, the \\emph{adaptive offline adversary}\\index{adaptive offline adversary} is the strongest adversary that serves the requests based on the algorithm's previous answers but, in the end, can choose the optimal request sequence among all possible request sequences. Note that as all adversaries know the algorithm's code, they are equivalent in the case of a deterministic algorithm. Also, note that randomization is not helpful when playing against an adaptive offline adversary. In the case of many smoothed online convex optimization problems, including right-sizing data centers, it is reasonable to assume an oblivious adversary as typically incoming requests arrive independently from previous server configurations in a data center.\n\n\\subsection{Regret}\n\nRegret is another approach to measuring the performance of online algorithms.\n\n\\begin{definition}\\index{regret (static)}\nThe (static) regret of an online algorithm $ALG$ for a minimization problem $c$ is $\\rho(T)$ if for all instances of the problem the difference between the result of the algorithm and the static optimal offline solution $OPT_s$ does not exceed $\\rho(T)$, i.e. $c(ALG) - c(OPT_s) \\leq \\rho(T)$.\n\\end{definition}\n\nCommonly, the literature considers this definition of regret where the online algorithm is compared against a static offline solution, i.e., a solution where the agent is not allowed to move in the decision space. We say that an algorithm achieves \\emph{no-regret}\\index{no-regret} if $\\rho$ is sublinear in the time horizon $T$. Observe that ideally, an algorithm achieves negative regret, in which case it performs better than the static optimum.\n\nIdeally, online algorithms perform well with respect to the competitive ratio and regret. In other words, our online algorithms should both perform well compared against an agent that is moving in the decision space with perfect knowledge of the future (competitive ratio) and perform well against an agent that picks one optimal location in the decision space. In practice, for the example of dynamically right-sizing a data center, our algorithms are required to outperform a static number of servers to be viable alternatives. 
In contrast, to minimize energy waste and revenue loss, the strategies proposed by our algorithms must be as close as possible to the optimal dynamic strategies.\n\nHowever, \\citeauthor{Andrew2015}~\\cite{Andrew2015} proved that no online algorithm for smoothed convex optimization can simultaneously achieve a constant competitive ratio and no-regret even when $d = 1$ and cost functions are linear. The competitive ratio of a no-regret algorithm can be made arbitrarily poor by oscillating the dynamic optimal solution between two points in the decision space. The no-regret algorithm will approach a static optimum, which can be arbitrarily worse than the dynamic optimum. In contrast, a constant-competitive algorithm generally sticks to a point in the decision space until it knows that the cost of movement is outweighed by the reduced cost of some other point that it then moves to. Hence, for constant-competitive algorithms, the regret can be arbitrarily large: the algorithm oscillates between the two points in the decision space, in each step keeping a positive distance to the static optimum, so that the accumulated difference grows linearly in $T$. An illustrative example with one dimension and a periodic sequence of two linear cost functions is depicted in \\cref{fig:incompatibility_of_competitive_ratio_and_regret}.\n\n\\begin{figure}\n    \\centering\n    \\input{thesis/figures/incompatibility_of_competitive_ratio_and_regret}\n    \\caption{Incompatibility of competitive ratio and regret in one dimension. Consider an adversary playing two linear cost functions $f_1$ and $f_2$ with different minimizers. Then, the dynamic optimum oscillates between the two minimizers while the static optimum is given by the intersection of the two cost functions. Therefore, a no-regret algorithm may be arbitrarily far away from the dynamic offline optimum. In contrast, a competitive algorithm which sticks to either end of the decision space may exceed the static offline optimum by a delta that is linear in $T$ and hence not in $o(T)$ \\cite{Wierman2019}.}\n    \\label{fig:incompatibility_of_competitive_ratio_and_regret}\n\\end{figure}\n\nMany variants of regret and of the competitive ratio have thus been proposed to bridge the gap between the two metrics. One approach considers an additive variant of the competitive ratio, which is called the \\emph{competitive difference}.\n\n\\begin{definition}\\index{competitive difference}\n\\cite{Chen2015} The competitive difference of an online algorithm $ALG$ for a minimization problem $c$ is $\\rho(T)$ if for all instances of the problem, the difference between the result of the algorithm and the dynamic optimal offline solution $OPT$ does not exceed $\\rho(T)$, i.e., $c(ALG) - c(OPT) \\leq \\rho(T)$.\n\\end{definition}\n\nThis definition is also known as \\emph{dynamic regret}\\index{dynamic regret}~\\cite{Chen2018}. We next define a variant of regret that bridges between static and dynamic regret.\n\n\\begin{definition}\\index{constrained dynamic regret}\n\\cite{Chen2018} The $L$-constrained dynamic regret of an online algorithm $ALG$ for a minimization problem $c$ is $\\rho(T)$ if for all instances of the problem the difference between the result of the algorithm and the $L$-constrained optimal offline solution $OPT_L$ does not exceed $\\rho(T)$, i.e., 
$c(ALG) - c(OPT_L) \\leq \\rho(T)$.\n\nThe $L$-constrained optimal offline solution minimizes $c$ subject to the additional constraint \\begin{align*}\n    \\sum_{t=1}^T \\norm{X_t - X_{t-1}} \\leq L\n\\end{align*} for $X_t, X_{t-1} \\in \\mathcal{X}$.\n\\end{definition}\n\nWe now observe that given the optimal offline solution $OPT$ with schedule $\\hat{X}_t$, the $L$-constrained dynamic regret is equivalent to dynamic regret for $L = \\sum_{t=1}^T \\norm{\\hat{X}_t - \\hat{X}_{t-1}}$. In contrast, given the static optimal offline solution $OPT_s$, the $L$-constrained dynamic regret is equivalent to static regret for $L = \\norm{OPT_s - \\mathbf{0}}$ which is the initial (and only) step of $OPT_s$~\\cite{Chen2018}.\n\nAnother metric used to bridge the gap between the competitive ratio and regret is the \\emph{$\\alpha$-unfair competitive ratio} which penalizes movement in the decision space by an additional factor $\\alpha$~\\cite{Andrew2015}.\n\n\\begin{definition}\\index{unfair competitive ratio}\n\\cite{Andrew2015} The $\\alpha$-unfair competitive ratio of an online algorithm $ALG$ for a minimization problem $c$ is $\\beta$ if for all instances of the problem the ratio of the result of the algorithm and the dynamic $\\alpha$-unfair offline solution $OPT_{\\alpha}$ does not exceed $\\beta$, i.e. $c(ALG) \\leq \\beta \\cdot c(OPT_{\\alpha})$.\n\nHere, the $\\alpha$-unfair optimal offline solution $OPT_{\\alpha}$ is defined as the minimizer of \\begin{align*}\n    \\sum_{t=1}^T f_t(X_t) + \\alpha \\norm{X_t - X_{t-1}}.\n\\end{align*}\n\\end{definition}\n\nNote that for $\\alpha = 1$, the $\\alpha$-unfair competitive ratio is equivalent to the competitive ratio. For large $\\alpha$, the $\\alpha$-unfair optimal offline solution $OPT_{\\alpha}$ is similar to the $L$-constrained optimal offline solution $OPT_L$ in that the movement in the decision space is restricted.\n\n\\section{Problems}\n\nNow that we have an overview of the commonly used performance metrics, we introduce the problems we consider in this work. We initially state the problems as offline problems, but as all problems follow the same structure, their corresponding online variant is obtained by deferring the convex cost functions. All other problem variables --- except for the time horizon $T$ --- such as the movement cost that penalizes movement in the decision space are known from the beginning.\n\n\\subsection{Smoothed Convex Optimization}\n\nWe begin by formally introducing the most general problem we consider and which we already motivated in \\cref{chapter:introduction}.\n\n\\begin{problem}[Smoothed Convex Optimization (SCO)]\\index{smoothed convex optimization}\\label{problem:smoothed_convex_optimization}\nGiven a time horizon $T \\in \\mathbb{N}$, a convex decision space $\\mathcal{X} \\subset \\mathbb{R}^d$, a norm $\\norm{\\cdot}$ on $\\mathbb{R}^d$, and a sequence $F$ of non-negative convex functions $f_t$ for $t \\in [T]$ with $f_t(x) = \\infty$ for all $x \\not\\in \\mathcal{X}$, find $X \\in \\mathcal{X}^T$ minimizing \\begin{align*}\n    c_{\\text{SCO}}(X) = \\sum_{t=1}^T f_t(X_t) + \\norm{X_t - X_{t-1}}\n\\end{align*}\nwhere $X_0 = \\mathbf{0}$.\n\\end{problem}\n\nIn many practical applications of smoothed convex optimization, we seek to find integral solutions minimizing hitting and movement costs. This is especially true within the context of resource allocation, for example, for right-sizing data centers, where our resources are discrete. 
This observation motivates the definition of the following variant of SCO.\n\n\\begin{problem}[Integral Smoothed Convex Optimization (Int-SCO)]\nWe define integral smoothed convex optimization analogously to SCO with the added restriction that the points $x$ in $d$-dimensional space must be discrete, that is $\\mathcal{X} \\subset \\mathbb{Z}^d$.\n\\end{problem}\n\nIn this work, we often refer to the convex cost functions of fractional problems as hitting costs, whereas we generally refer to them as operating costs in the context of integral problems.\n\nIn \\cref{chapter:introduction}, we have seen that metrical task systems subsume Int-SCO. However, it was shown that, in general, the competitiveness of deterministic and randomized algorithms for metrical task systems must be proportional to the size of the decision space~\\cite{Blum1992, Borodin1992}. Further, \\citeauthor{Chen2018}~\\cite{Chen2018} have shown that the competitiveness of any online algorithm for SCO is lower bounded by $\\Omega(\\sqrt{d})$. Therefore, many of the online algorithms for SCO that we examine in \\cref{chapter:online_algorithms} further restrict hitting and movement costs.\n\nAnother similar problem is the ski rental problem. In the \\emph{ski rental problem}\\index{ski rental problem}, skis can be bought for a cost of $b$ units or rented for a cost of one unit per day. Each day of the ski season, the agent has to decide whether to rent the skis or end the sequence of decisions by buying the skis, without knowing how long the ski season will last~\\cite{Shah2021}. Consider the uni-dimensional decision space $\\{0,b\\}$, the $\\ell_2$ norm as movement cost, and the sequence of hitting costs $f_t(0) = 1$ and $f_t(b) = 0$. The solution to this instance of SCO is a solution to the corresponding ski rental problem, showing that the ski rental problem is a special case of SCO. \\citeauthor{Karlin1990}~\\cite{Karlin1990} showed that the best competitive ratio attainable by a randomized algorithm is $e/(e-1) \\approx 1.58$, giving a lower bound for the competitive ratio of online algorithms for SCO.\n\n\\citeauthor{Goel2019}~\\cite{Goel2019} proved that for $\\alpha$-strongly convex hitting costs with respect to the $\\ell_2$ norm and $\\ell_2$-squared movement costs, the optimal competitiveness of any online algorithm is $\\mathcal{O}(1/\\sqrt{\\alpha})$ as $\\alpha \\downarrow 0$. We discuss hitting costs and movement costs of this shape in greater detail in \\cref{section:theory:beyond_convexity}. \\citeauthor{Bansal2015}~\\cite{Bansal2015} have shown that in the uni-dimensional setting, the optimal competitive ratio that a deterministic memoryless algorithm for SCO can attain is three.\n\n\\subsubsection{Complexity of the Offline Problem}\n\nWe now want to examine the complexity of Int-SCO in the offline case. That is, we know all arriving convex cost functions $f_t$ in advance. We prove Int-SCO NP-hard for varying $d$ by giving a polynomial-time reduction from the Knapsack problem. In \\cref{section:theory:simplified_smoothed_convex_optimization}, we extend this proof of NP-hardness to the integral simplified smoothed convex optimization problem, further restricting the decision space and movement cost.\n\nGiven a set of items, each with an associated value and weight, and an upper bound on the total weight, Knapsack is the problem of determining the number of copies of each item such that the total value is maximized while the total weight conforms to the given upper bound. 
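To make this concrete, consider a small illustrative instance (with hypothetical values): three items with values $v = (6, 10, 12)$, weights $w = (1, 2, 3)$, and weight bound $W = 5$. Selecting the second and third item is feasible with total weight $5$ and attains the optimal total value $22$, whereas additionally selecting the first item would exceed $W$. 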
Formally, we define Knapsack as follows.\n\n\\begin{problem}[Knapsack (KP)]\\index{knapsack problem}\nGiven a number of items $n \\in \\mathbb{N}$, a value of each item $v \\in \\mathbb{N}^n$, a weight of each item $w \\in \\mathbb{N}^n$, and an upper bound on the total weight $W \\in \\mathbb{N}$, find $x \\in \\{0,1\\}^n$ satisfying $\\sum_{i = 1}^n w_i x_i \\leq W$ and maximizing $\\sum_{i=1}^n v_i x_i$.\n\\end{problem}\n\nThis variant of Knapsack is commonly called \\emph{0-1 Knapsack} and restricts the number of copies of each item to zero or one. It is, however, easy to see that our proof can be generalized to a setting where we allow $x_i \\in [m_i]_0$ for $m \\in \\mathbb{N}^n$. \\citeauthor{Williamson2014}~\\cite{Williamson2014} gives a proof of the NP-completeness of the Knapsack decision problem. It immediately follows that the Knapsack optimization problem is NP-hard.\n\nBefore reducing to Int-SCO, we reduce Knapsack to a related problem called Minimum Knapsack.\n\n\\begin{problem}[Minimum Knapsack (Min-KP)]\\index{minimum knapsack problem}\nGiven a number of items $n \\in \\mathbb{N}$, a cost of each item $c \\in \\mathbb{N}^n$, a utility of each item $u \\in \\mathbb{N}^n$, and a lower bound on the total utility $U \\in \\mathbb{N}$, find $x \\in \\{0,1\\}^n$ satisfying $\\sum_{i = 1}^n u_i x_i \\geq U$ and minimizing $\\sum_{i=1}^n c_i x_i$.\n\\end{problem}\n\n\\begin{lemma}\nMin-KP is NP-hard.\n\\end{lemma}\n\\begin{proof}\nWe prove the lemma by giving a reduction from KP.\n\nLet $\\mathcal{I}_{\\text{KP}} = (n, v, w, W)$ be an instance of KP. Let $\\mathcal{I}_{\\text{Min-KP}}(U) = (n, c, u, U)$ be an instance of Min-KP with $c = w$ and $u = v$. Hence, $\\mathcal{I}_{\\text{Min-KP}}(U)$ minimizes the total weight $\\sum_{i=1}^n w_i x_i$ such that $\\sum_{i=1}^n v_i x_i \\geq U$.\n\nBy finding solutions to $\\mathcal{I}_{\\text{Min-KP}}(U)$ repeatedly for varying $U$, we determine the maximal $U$ such that $\\sum_{i=1}^n w_i x_i \\leq W$. We observe that $U$ is upper bounded by $n \\cdot v_{\\text{max}}$. If $U$ were greater than $n \\cdot v_{\\text{max}}$, we would have $\\sum_{i=1}^n v_{\\text{max}} x_i \\geq \\sum_{i=1}^n v_i x_i > n \\cdot v_{\\text{max}}$ which contradicts $x \\in \\{0,1\\}^n$. Hence, we can use binary search to find $U$ in $\\mathcal{O}(\\log n + \\log v_{\\text{max}})$ iterations. The other direction works analogously.\n\nWe have seen a total, polynomial-time reduction from KP to Min-KP. Hence, Min-KP is NP-hard.\n\\end{proof}\n\nNext, we prove our central reduction from Min-KP to Int-SCO. To motivate this reduction, we first prove that the following (convex) integer optimization problem is, in fact, equivalent to Min-KP.\n\n\\begin{lemma}\n\\label{lemma:integer_minimization}\nLet $\\mathcal{I}_{\\text{Min-KP}} = (n, c, u, U)$ be an instance of Min-KP. $x$ is the solution to $\\mathcal{I}_{\\text{Min-KP}}$ if and only if $x$ minimizes \\begin{align*}\n    c_{\\text{SCO}}'(x) = \\sum_{i=1}^n c_i x_i + M\\left(U - \\sum_{i=1}^n u_i x_i\\right)^+\n\\end{align*} subject to $x \\in \\{0,1\\}^n$ for some $M > \\frac{n c_{\\text{max}}}{u_{\\text{min}}}$.\n\\end{lemma}\n\\begin{proof}\nSuppose $x$ minimizes $c_{SCO}'(x)$. Now suppose $(U - \\sum_{i=1}^n u_i x_i)^+ > 0$. Then $\\sum_{i=1}^n u_i x_i < U$ follows immediately. It is easy to see that if $x \\equiv 1$, $\\mathcal{I}_{\\text{Min-KP}}$ has no solution because the lower bound on the utility $U$ is not met. Henceforth, we assume that $x$ can be further increased. Then, $(U - \\sum_{i=1}^n u_i x_i)^+ \\geq u_{\\text{min}}$. 
Therefore, $c_{SCO}'(x) > \\sum_{i=1}^n c_i x_i + c_{\\text{max}}$. We observe that $x$ is not optimal as $c_{SCO}'(x)$ could be minimized further by increasing $x$ such that $(U - \\sum_{i=1}^n u_i x_i)^+ = 0$ since $\\sum_{i=1}^n c_i x_i \\leq n c_{\\text{max}}$ holds for all $x$.\n\nBy leading our previous assumption to a contradiction, we conclude $(U - \\sum_{i=1}^n u_i x_i)^+ = 0$ and therefore $U \\leq \\sum_{i=1}^n u_i x_i$. Further, minimizing $c_{SCO}'$ over the remaining candidates for $x$ minimizes $\\sum_{i=1}^n c_i x_i$. Hence, $x$ is the solution of $\\mathcal{I}_{\\text{Min-KP}}$.\n\nOn the other hand, suppose that $x$ is the solution to $\\mathcal{I}_{\\text{Min-KP}}$. Then $(U - \\sum_{i=1}^n u_i x_i)^+ = 0$ and $\\sum_{i=1}^n c_i x_i$ is minimized. Hence, $x$ minimizes $c_{SCO}'(x)$.\n\\end{proof}\n\nFor our construction, we need $c_{SCO}'$ to be convex.\n\n\\begin{lemma}\n\\label{lemma:integer_minimization_convexity}\n$c_{SCO}'$ is convex on $\\{0,1\\}^n$.\n\\end{lemma}\n\\begin{proof}\nIt is easy to see that $c_{SCO}'$ is continuous. Therefore, to show the convexity of $c_{SCO}'$ it suffices to prove midpoint-convexity, i.e., $c_{SCO}'\\left(\\frac{x+y}{2}\\right) \\leq \\frac{c_{SCO}'(x)+c_{SCO}'(y)}{2}$ for all $x, y \\in \\mathbb{R}^n$.\n\nTo simplify the notation let $C(x) = \\sum_{i=1}^n c_i x_i$ and let $U(x) = \\sum_{i=1}^n u_i x_i$. To further simplify the notation we define $\\frac{x+y}{2}$ to be applied component-wise to elements $i \\in [n]$ of $x$ and $y$. We then obtain \\small{\n\\begin{align*}\n         &c_{SCO}'\\left(\\frac{x+y}{2}\\right) \\leq \\frac{c_{SCO}'(x)+c_{SCO}'(y)}{2} \\\\\n    \\iff &C\\left(\\frac{x+y}{2}\\right) + M\\left(U - U\\left(\\frac{x+y}{2}\\right)\\right)^+ \\leq \\frac{C(x) + M(U - U(x))^+ + C(y) + M(U - U(y))^+}{2} \\\\\n    \\iff &C(x) + C(y) + 2M\\left(U - U\\left(\\frac{x+y}{2}\\right)\\right)^+ \\leq C(x) + M(U - U(x))^+ + C(y) + M(U - U(y))^+ \\\\\n    \\iff &2\\left(U - U\\left(\\frac{x+y}{2}\\right)\\right)^+ \\leq (U - U(x))^+ + (U - U(y))^+.\n\\end{align*}\n}\\normalsize\n\nWe immediately get the linearity of $U(\\cdot)$ from the following computation. \\begin{align*}\n    U\\left(\\frac{x+y}{2}\\right) &= \\sum_{i=1}^n u_i \\frac{x_i + y_i}{2} \\\\\n                                &= \\frac{\\sum_{i=1}^n u_i x_i + \\sum_{i=1}^n u_i y_i}{2} \\\\\n                                &= \\frac{U(x) + U(y)}{2}.\n\\end{align*}\n\nNow, we consider three cases separately.\n\n\\begin{enumerate}\n    \\item If $U(x) > U$ and $U(y) > U$, then $U\\left(\\frac{x+y}{2}\\right) > U$. Hence \\begin{align*}\n        2\\left(U - U\\left(\\frac{x+y}{2}\\right)\\right)^+ = 0 = (U - U(x))^+ + (U - U(y))^+.\n    \\end{align*}\n    \\item If $U(x) \\leq U$ and $U(y) \\leq U$, then $U\\left(\\frac{x+y}{2}\\right) \\leq U$. Hence \\begin{align*}\n        2\\left(U - U\\left(\\frac{x+y}{2}\\right)\\right)^+ &= 2U - 2U\\left(\\frac{x+y}{2}\\right) \\\\\n                                                        &= 2U - U(x) - U(y) \\\\\n                                                        &= (U - U(x))^+ + (U - U(y))^+.\n    \\end{align*}\n    \\item For the only remaining case we assume w.l.o.g. that $U(x) \\leq U$ and $U(y) > U$. If $U - U(x) < U(y) - U$, then $U\\left(\\frac{x+y}{2}\\right) > U$, so the left-hand side vanishes and the inequality holds trivially. 
If, on the other hand, $U - U(x) \\geq U(y) - U$, then $U\\left(\\frac{x+y}{2}\\right) \\leq U$, so the left-hand side equals $2U - U(x) - U(y)$, which is at most $(U - U(x))^+ + (U - U(y))^+ = U - U(x)$ since $U(y) > U$.\\qedhere\n\\end{enumerate}\n\\end{proof}\n\nWe now have everything in place to prove our main result of this section.\n\n\\begin{theorem}\nInt-SCO is NP-hard.\n\\end{theorem}\n\\begin{proof}\nWe now give our reduction from Min-KP to Int-SCO.\n\nLet $\\mathcal{I}_{\\text{Min-KP}} = (n, c, u, U)$ be an instance of Min-KP and set $d = n$. We define $\\mathcal{I}_{\\text{Int-SCO}} = (T, \\mathcal{X}, \\norm{\\cdot}, f)$ as an instance of Int-SCO with $T = 1$, $\\mathcal{X} = \\{0,1\\}^n$, $\\norm{\\cdot} = 0$, and $f_1(x) = c_{\\text{SCO}}'(x)$. It is easy to see that $f_1$ is non-negative. By \\cref{lemma:integer_minimization_convexity}, $\\mathcal{I}_{\\text{Int-SCO}}$ is a valid instance of Int-SCO.\n\nThe correctness of our construction follows from \\cref{lemma:integer_minimization}. \\begin{align*}\n         &X \\text{ is a solution to } \\mathcal{I}_{\\text{Int-SCO}} \\\\\n    \\iff &X \\text{ minimizes } \\sum_{t=1}^T f_t(X_t) + \\norm{X_t - X_{t-1}} \\text{ such that } X_t \\in \\mathcal{X}. \\\\\n    \\iff &X \\text{ minimizes } c_{\\text{SCO}}'(X_1) \\text{ such that } X_1 \\in \\{0,1\\}^n. \\\\\n    \\iff &X_1 \\text{ is a solution to } \\mathcal{I}_{\\text{Min-KP}}.\n\\end{align*}\n\nOur construction is total and polynomial in the size of $\\mathcal{I}_{\\text{Min-KP}}$. Hence, Int-SCO is NP-hard.\n\\end{proof}\n\nWe observe that the above reduction can be extended to Knapsack with arbitrary bounds $m_i$ by setting $\\mathcal{X}$ of $\\mathcal{I}_{\\text{Int-SCO}}$ to $[m_1]_0 \\times \\dots \\times [m_n]_0$.\n\n\\subsection{Simplified Smoothed Convex Optimization}\\label{section:theory:simplified_smoothed_convex_optimization}\n\nIn many applications, for example, for right-sizing data centers where we are interested in determining the optimal number of servers to run at a particular time, it suffices to restrict $\\mathcal{X}$ to $[m_1]_0 \\times \\dots \\times [m_d]_0$ for upper bounds $m \\in \\mathbb{N}^d$ in each dimension and the switching cost $\\norm{\\cdot}$ to a Manhattan norm which is scaled in each dimension, independently of time. To that end, we first define a restricted variant of (fractional) SCO, which we term \\emph{simplified smoothed convex optimization}.\n\n\\begin{problem}[Simplified Smoothed Convex Optimization (SSCO)]\\index{simplified smoothed convex optimization}\\label{problem:simplified_smoothed_convex_optimization}\nGiven a time horizon $T \\in \\mathbb{N}$, upper bounds $m \\in \\mathbb{N}^d$, switching costs $\\beta \\in \\mathbb{R}_{>0}^d$, and a sequence $F$ of non-negative convex functions $f_t$ for $t \\in [T]$, find $X \\in (\\mathbb{R}_{\\geq 0, \\leq m_1} \\times \\dots \\times \\mathbb{R}_{\\geq 0, \\leq m_d})^T$ minimizing \\begin{align}\\label{eq:simplified_smoothed_convex_optimization}\n    c_{\\text{SSCO}}(X) = \\sum_{t=1}^T f_t(X_t) + \\sum_{k=1}^d \\beta_k (X_{t,k} - X_{t-1,k})^+\n\\end{align}\nwhere $X_0 = \\mathbf{0}$.\n\\end{problem}\n\nWe observe that $c_{\\text{SSCO}}$ pays the switching cost whenever $x$ increases. Decreasing $x$ does not increase the paid switching cost. 
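For a small numerical illustration (with hypothetical values), take $d = 1$, $\\beta = 1$, and the schedule $X = (2, 1, 3)$ with $X_0 = 0$: the switching cost paid is $(2-0)^+ + (1-2)^+ + (3-1)^+ = 2 + 0 + 2 = 4$, counting only the upward movements. Extending the schedule to return to $X_4 = 0$, the total decrease $(2-1)^+ + (1-3)^+ + (3-0)^+ = 1 + 0 + 3 = 4$ matches the total increase. 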
This observation motivates the following lemma that shows that we could equivalently pay the switching cost for decreasing $x$.\n\n\\begin{lemma}\n\\label{lemma:inverse_switching_cost}\nFor all $T \\in \\mathbb{N}$ and $X_t \\in \\mathbb{R}_{\\geq 0}$ where $X_0 = X_{T+1} = 0$, the following equality holds:\n\\begin{align*}\n    \\sum_{t=1}^T (X_t - X_{t-1})^+ = \\sum_{t=1}^T (X_t - X_{t+1})^+.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe left side of the equation sums all increases in $x$ along the schedule starting from $X_0 = 0$. The right side of the equation sums all decreases in $x$ along the schedule ending with $X_{T+1} = 0$. As the schedule $X$ is non-negative and begins and ends at $0$, the total increase equals the total decrease, so the two sums are equal.\n\\end{proof}\n\nTo complete the proof that any instance of SSCO is an instance of SCO, we have to show that our switching cost is indeed a valid norm. Given an instance $\\mathcal{I}_{\\text{SSCO}} = (T, m, \\beta, F)$ with $F = (f_1, \\dots, f_T)$, we define the corresponding instance of SCO as $\\mathcal{I}_{\\text{SCO}} = (T, \\mathcal{X}, \\norm{\\cdot}, \\widetilde{F})$ where $\\mathcal{X} = \\mathbb{R}_{\\geq 0, \\leq m_1} \\times \\dots \\times \\mathbb{R}_{\\geq 0, \\leq m_d}$, $\\widetilde{F}$ is a slightly modified version of $F$ which is formally defined in the following, and $\\norm{x} = \\sum_{k=1}^d \\frac{\\beta_k}{2} |x_k|$ as the dimension-dependently scaled Manhattan norm of $x$. It is easy to see that $\\norm{\\cdot}$ is indeed a valid norm. The next lemma proves that $X \\in \\mathcal{X}^T$ is a solution to $\\mathcal{I}_{\\text{SCO}}$ if and only if it is a solution to $\\mathcal{I}_{\\text{SSCO}}$.\n\n\\begin{lemma}\\label{lemma:switching_cost_l1_norm_vs_pos_movement}\nFor any $T \\in \\mathbb{N}, \\beta \\in \\mathbb{R}_{>0}^d$, and $X_t \\in \\mathbb{R}_{\\geq 0}^d$ with $X_0 = X_{T+1} = \\mathbf{0}$, the following equality holds:\n\\begin{align}\\label{eq:switching_cost_l1_norm_vs_pos_movement}\n    \\sum_{t=1}^{T+1} \\norm{X_t - X_{t-1}} = \\sum_{t=1}^T \\sum_{k=1}^d \\beta_k (X_{t,k} - X_{t-1,k})^+.\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nBy \\cref{lemma:inverse_switching_cost}, the above equality holds iff \\begin{align*}\n    \\sum_{t=1}^{T+1} \\sum_{k=1}^d \\beta_k |X_{t,k} - X_{t-1,k}| = \\sum_{t=1}^T \\sum_{k=1}^d \\beta_k ((X_{t,k} - X_{t-1,k})^+ + (X_{t,k} - X_{t+1,k})^+).\n\\end{align*}\nThis always holds for the non-negative schedules considered here: each $|X_{t,k} - X_{t-1,k}|$ decomposes into the increase $(X_{t,k} - X_{t-1,k})^+$ and the decrease $(X_{t-1,k} - X_{t,k})^+$, and shifting the index of the decrease terms by one yields the right side, where the boundary term $(X_{0,k} - X_{1,k})^+$ vanishes as $X_0 = \\mathbf{0}$ and $X_1 \\geq \\mathbf{0}$.\n\\end{proof}\n\nNote that the last summand of the left side of \\cref{eq:switching_cost_l1_norm_vs_pos_movement} is $\\norm{X_{T+1} - X_T} = \\norm{X_T}$ which is not considered in the cost function of SCO. To correct for this under-approximation of the switching cost and to ensure that the cost of a schedule $X$ is equivalent between $\\mathcal{I}_{\\text{SSCO}}$ and $\\mathcal{I}_{\\text{SCO}}$ we slightly modify the hitting cost $f_T$ at time $T$ to \\begin{align*}\n    \\widetilde{f}_T(x) := f_T(x) + \\norm{x} = f_T(x) + \\sum_{k=1}^d \\frac{\\beta_k}{2} |x_k|.\n\\end{align*} The remaining hitting costs remain the same, i.e., $\\widetilde{f}_t := f_t$ for all $t \\in [T-1]$. We set $\\widetilde{F} = (\\widetilde{f}_1, \\dots, \\widetilde{f}_T)$. 
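As a quick sanity check of this construction in one dimension (with hypothetical values $d = 1$, $\\beta = 1$, $T = 1$): a schedule ending at $X_1 = x \\geq 0$ incurs $f_1(x) + (x - 0)^+ = f_1(x) + x$ under $c_{\\text{SSCO}}$, while under $\\mathcal{I}_{\\text{SCO}}$ it incurs $\\widetilde{f}_1(x) + \\norm{x - 0} = f_1(x) + \\frac{x}{2} + \\frac{x}{2} = f_1(x) + x$, as intended. 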
We also observe that $\\norm{\\cdot}$ is convex and non-negative, implying that this slight modification maintains the invariant that the hitting costs are convex and non-negative. The modification of the final hitting cost is only relevant in the offline setting where the time horizon is known.\n\nWith the same motivation we used for the restriction of SCO to Int-SCO, we now restrict SSCO to an integral variant.\n\n\\begin{problem}[Integral Simplified Smoothed Convex Optimization (Int-SSCO)]\nWe define integral simplified smoothed convex optimization analogously to SSCO with the added restriction that the points $x$ in $d$-dimensional space must be discrete, that is $x \\in [m_1]_0 \\times \\dots \\times [m_d]_0$.\n\\end{problem}\n\n\\citeauthor{Albers2018}~\\cite{Albers2018} have shown for Int-SSCO in the uni-dimensional setting that the optimal competitive ratio is 3 for deterministic algorithms and 2 for randomized algorithms. As Int-SSCO is subsumed by Int-SCO, these bounds also hold for Int-SCO.\n\n\\subsubsection{Complexity of the Offline Problem}\n\nWe next extend our proof of NP-hardness of Int-SCO for varying $d$ to Int-SSCO. We cannot reuse our original proof as the switching cost of SSCO is required to be positive.\n\n\\begin{theorem}\n\\label{theorem:int_ssco_np_hardness}\nInt-SSCO is NP-hard.\n\\end{theorem}\n\\begin{proof}\nAgain, we use a reduction from Min-KP.\n\nLet $\\mathcal{I}_{\\text{Min-KP}} = (n, c, u, U)$ be an instance of Min-KP and set $d = n$. We define $\\mathcal{I}_{\\text{Int-SSCO}} = (T, m, \\beta, f)$ as an instance of Int-SSCO with $T = 1$, $m \\equiv 1$, $\\beta \\equiv 1$, and $f_1(x) = c_{\\text{SCO}}'(x) + n - \\sum_{i=1}^n x_i$.\n\nIt is easy to see that $f_1$ is non-negative. In \\cref{lemma:ssco_reduction_convexity}, we prove that $f_1$ is convex. Assuming the convexity of $f_1$, $\\mathcal{I}_{\\text{Int-SSCO}}$ is a valid instance of Int-SSCO.\n\nWe now prove the correctness of our construction. Again, we use \\cref{lemma:integer_minimization}. \\begin{align*}\n         &X \\text{ is a solution to } \\mathcal{I}_{\\text{Int-SSCO}} \\\\\n    \\iff &X \\text{ minimizes } \\sum_{t=1}^T f_t(X_t) + \\sum_{k=1}^d \\beta_k (X_{t,k} - X_{t-1,k})^+ \\text{ such that } X_t \\in [m_1]_0 \\times \\dots \\times [m_d]_0. \\\\\n    \\iff &X \\text{ minimizes } f_1(X_1) + \\sum_{i=1}^n X_{1,i} \\text{ such that } X_1 \\in \\{0,1\\}^n. \\\\\n    \\iff &X \\text{ minimizes } c_{\\text{SCO}}'(X_1) + n - \\sum_{i=1}^n X_{1,i} + \\sum_{i=1}^n X_{1,i} \\text{ such that } X_1 \\in \\{0,1\\}^n. \\\\\n    \\iff &X \\text{ minimizes } c_{\\text{SCO}}'(X_1) \\text{ such that } X_1 \\in \\{0,1\\}^n. \\\\\n    \\iff &X_1 \\text{ is a solution to } \\mathcal{I}_{\\text{Min-KP}}.\n\\end{align*}\n\nOur construction is still total and polynomial in the size of $\\mathcal{I}_{\\text{Min-KP}}$. Hence, Int-SSCO is NP-hard.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:ssco_reduction_convexity}\n$f_1$ from \\cref{theorem:int_ssco_np_hardness} is convex on $\\{0,1\\}^n$.\n\\end{lemma}\n\\begin{proof}\nTo show convexity of $f_1$, it suffices to show that $h(x) = n - \\sum_{i=1}^n x_i$ is convex as the convexity of $c_{SCO}'$ was already established in \\cref{lemma:integer_minimization_convexity} and $f_1(x) = c_{SCO}'(x) + h(x)$. Further, it is enough to prove $h$ midpoint-convex as $h$ is continuous. 
We observe that \\begin{align*}\n         &&h\\left(\\frac{x + y}{2}\\right) &\\leq \\frac{h(x) + h(y)}{2} \\\\\n    \\iff &&n - \\sum_{i=1}^n \\frac{x_i + y_i}{2} &\\leq n - \\frac{\\sum_{i=1}^n x_i + \\sum_{i=1}^n y_i}{2}\n\\end{align*} holds for any $x, y \\in \\{0,1\\}^n$, proving the lemma.\n\\end{proof}\n\n\\subsection{Smoothed Balanced Load Optimization}\n\nWe now turn to a variant of Int-SSCO introduced by \\citeauthor{Albers2021_2}~\\cite{Albers2021_2} that further restricts the structure of the convex cost functions. This restriction is motivated by the usual cost model of heterogeneous data centers with homogeneous loads we examined in detail in \\cref{section:application:dispatching:optimal_load_balancing}, where the incoming load (or set of jobs) is distributed equally among all active servers.\n\nGiven a sequence of convex, increasing, non-negative costs $g_{t,k}(l)$ incurred by a single instance in dimension $k$ under load $l$ during time slot $t$, the overall cost for dimension $k$ during time slot $t$ is given as \\begin{align*}\n    h_{t,k}(x,z) := \\begin{cases} \n        x g_{t,k}\\left(\\frac{l_{t,k}}{x}\\right) & x > 0 \\\\\n        \\infty                                  & x = 0 \\land l_{t,k} > 0 \\\\\n        0                                       & x = 0 \\land l_{t,k} = 0\n    \\end{cases}\n\\end{align*} where $l_{t,k} = \\lambda_t z$ for some sequence of load profiles $\\lambda_t \\in \\mathbb{N}_0$. Here, $x$ is the position in the decision space in dimension $k$, and $z \\in [0,1]$ is the fraction of the load $\\lambda_t$ that is assigned to dimension $k$~\\cite{Albers2021_2}. Given the set of all possible assignments to $d$ dimensions $\\mathcal{Z} := \\{z \\in [0,1]^d \\mid \\sum_{k=1}^d z_k = 1\\}$, the overall hitting cost is defined by the convex optimization problem \\begin{align}\n\\label{eq:sblo_hitting_cost}\n    f_t(x) := \\min_{z \\in \\mathcal{Z}} \\sum_{k=1}^d h_{t,k}(x_k,z_k).\n\\end{align} Intuitively, the load profiles $\\lambda_t$ are balanced across all dimensions so as to minimize cost. We also observe that the formulation of $f_t$ from \\cref{eq:sblo_hitting_cost} is equivalent to our formulation from \\cref{eq:heterogeneous_load_balancing}.\n\n\\begin{problem}[Smoothed Balanced Load Optimization (SBLO)]\\index{smoothed balanced load optimization}\\label{problem:sblo}\nGiven a time horizon $T \\in \\mathbb{N}$, upper bounds $m \\in \\mathbb{N}^d$, switching costs $\\beta \\in \\mathbb{R}_{>0}^d$, a sequence $\\Lambda$ of load profiles $\\lambda_t \\in \\mathbb{N}_0$, and a sequence $G$ of convex increasing non-negative functions $g_{t,k}$ for $t \\in [T], k \\in [d]$, find $X \\in ([m_1]_0 \\times \\dots \\times [m_d]_0)^T$ minimizing \\begin{align*}\n    c_{\\text{SBLO}}(X) = \\sum_{t=1}^T f_t(X_t) + \\sum_{k=1}^d \\beta_k (X_{t,k} - X_{t-1,k})^+\n\\end{align*}\nwhere $X_0 = \\mathbf{0}$ and $f_t$ is given by \\cref{eq:sblo_hitting_cost}.\n\\end{problem}\n\nIn the online variant of SBLO (and in its variants), load profiles and convex cost functions arrive over time. It is easy to see that any instance of SBLO is in fact an instance of Int-SSCO.\n\n\\subsection{Smoothed Load Optimization}\n\nLastly, we consider an even simpler problem proposed by \\citeauthor{Albers2021}~\\cite{Albers2021} where we assume that each instance can only handle a single job during each time slot. Instead of using convex functions to model cost, we assume that cost increases linearly with the number of active servers, independently of time. 
In addition, we impose the constraint that the number of servers must still be enough to handle the incoming load. Without this restriction, the optimal strategy would always be not to run any servers at all.\n\n\\begin{problem}[Smoothed Load Optimization (SLO)]\\index{smoothed load optimization}\\label{problem:slo}\nGiven a time horizon $T \\in \\mathbb{N}$, upper bounds $m \\in \\mathbb{N}^d$, switching costs $\\beta \\in \\mathbb{R}_{>0}^d$, a sequence $\\Lambda$ of load profiles $\\lambda_t \\in \\mathbb{N}_0$, and the non-negative operating costs $c \\in \\mathbb{R}_{\\geq 0}^d$, find $X \\in ([m_1]_0 \\times \\dots \\times [m_d]_0)^T$ minimizing \\begin{align*}\n    c_{\\text{SLO}}(X) = \\sum_{t=1}^T \\sum_{k=1}^d c_k X_{t,k} + \\beta_k (X_{t,k} - X_{t-1,k})^+\n\\end{align*}\nwhere $X_0 = \\mathbf{0}$ such that for all $t \\in [T]$ \\begin{align*}\n    \\sum_{k=1}^d X_{t,k} \\geq \\lambda_t.\n\\end{align*}\n\\end{problem}\n\nIn contrast to our definition of SBLO, SLO balances the load implicitly among all active servers, which is possible because we assume that each active server can only handle a single job during one time slot. Again, it is easy to see that SLO is an instance of SBLO by setting $g_{t,k}(l) := c_k$ for $l \\leq 1$ and $g_{t,k}(l) := \\infty$ otherwise.\n\n\\citeauthor{Albers2021}~\\cite{Albers2021} show that an online algorithm for SLO cannot attain a competitive ratio smaller than $2d$.\n\n\\section{Beyond Convexity}\\label{section:theory:beyond_convexity}\n\nWe have seen that the optimal competitiveness of online algorithms for smoothed convex optimization is fundamentally limited to be dimension-dependent as long as arbitrary convex hitting costs and arbitrary norms as movement costs are allowed. In the literature, many promising approaches are based on restricting the class of hitting costs (and movement costs) to achieve a dimension-independent competitive ratio. This section focuses mainly on hitting costs, introduces these restrictions, and investigates how they relate to our data center model.\n\n\\subsection{Continuity and Differentiability}\\label{section:theory:beyond_convexity:continuity_and_differentiability}\n\nA first natural restriction is to assume that hitting costs are continuous. In fact, Theorem 10.1 of~\\cite{Rockafellar1970} proves that given a convex function $f : \\mathcal{X} \\to \\mathbb{R}$, $f$ is continuous on the interior of its domain, $\\mathcal{X}^{\\circ}$. Throughout this work, we will thus assume the hitting costs to be continuous on the interior of the decision space $\\mathcal{X}$ without explicit mention.\n\nAs continuity does not represent a limitation, we investigate the continuous differentiability of the hitting costs. We call a function \\emph{smooth}\\index{smooth function} if it is infinitely many times continuously differentiable. However, to obtain a smooth hitting cost in the application of right-sizing data centers, one would have to drastically reduce the complexity of the model we discussed in \\cref{chapter:application}. For example, \\citeauthor{Bansal2015}~\\cite{Bansal2015} focus entirely on the energy cost, which is a good approximation in practice as energy represents the largest fraction of the overall cost. Still, in general, the assumption of smoothness is too strong.\n\n\\subsection{Stronger Assumptions}\n\nSome of the algorithms we discuss require a more restricted class of convex cost functions. 
We thus introduce some terminology that is commonly used in convex optimization to describe properties of well-behaved functions.\n\n\\paragraph{Lipschitz Continuity} We begin with the fundamental notion of Lipschitz continuity.\n\n\\begin{definition}\\index{Lipschitz continuity}\n\\cite{Gupta2020} A function $f : K \\to \\mathbb{R}$ is called $L$-Lipschitz over a convex set $K \\subseteq \\mathbb{R}^d$ with respect to the norm $\\norm{\\cdot}$ if \\begin{align*}\n    |f(x) - f(y)| \\leq L \\norm{x - y}\n\\end{align*} holds for all $x, y \\in K$.\n\\end{definition}\n\nIntuitively, the absolute slope of an $L$-Lipschitz function cannot be greater than $L$; in other words, the function values cannot change arbitrarily fast.\n\n\\paragraph{Lipschitz Smoothness} In the context of convex optimization, there is a notion of smoothness that is distinct from the smoothness that we discussed in \\cref{section:theory:beyond_convexity:continuity_and_differentiability}. For descent methods, it is beneficial if the difference in gradients of two points shrinks with the distance between the points. Formally, we define smoothness as follows.\n\n\\begin{definition}\\index{Lipschitz smoothness}\nA function $f : K \\to \\mathbb{R}$ is called $\\beta$-Lipschitz smooth over a convex set $K \\subseteq \\mathbb{R}^d$ with respect to the norm $\\norm{\\cdot}$ if its gradient $\\nabla f$ is $\\beta$-Lipschitz over $K$. Thus, $f$ is $\\beta$-Lipschitz smooth if \\begin{align*}\n    \\norm{\\nabla f(x) - \\nabla f(y)} \\leq \\beta \\norm{x - y}\n\\end{align*} holds for all $x, y \\in K$. This is equivalent to saying that \\begin{align*}\n    f(y) \\leq f(x) + \\langle\\nabla f(x), y-x\\rangle + \\frac{\\beta}{2}\\norm{x-y}^2\n\\end{align*} holds for all $x, y \\in K$~\\cite{Gupta2020}.\n\\end{definition}\n\nIntuitively, this ensures that at any point $x \\in K$, a quadratic can be fit above the curve of $f$. As a consequence, a descent method does not ``overshoot'' when approaching the minimum because the gradient decreases as the minimum is approached.\n\n\\paragraph{Strong Convexity} In contrast, descent methods converge faster if gradients are large far away from the optimal solution. The notion of strong convexity describes this property.\n\n\\begin{definition}\\index{strong convexity}\n\\cite{Gupta2020} A function $f : K \\to \\mathbb{R}$ is called $\\alpha$-strongly convex over a convex set $K \\subseteq \\mathbb{R}^d$ with respect to the norm $\\norm{\\cdot}$ if $g(x) = f(x) - \\frac{\\alpha}{2}\\norm{x}^2$ is convex over $K$. Equivalently, $f$ is $\\alpha$-strongly convex if \\begin{align*}\n    f(y) \\geq f(x) + \\langle\\nabla f(x), y-x\\rangle + \\frac{\\alpha}{2}\\norm{x-y}^2\n\\end{align*} holds for all $x, y \\in K$.\n\\end{definition}\n\nIntuitively, at any point $x \\in K$, a quadratic can be fit under the curve of $f$. In other words, $f$ grows at least quadratically as one moves away from the minimizer. This contrasts with our definition of $\\beta$-Lipschitz smoothness. When a function is $\\alpha$-strongly convex and $\\beta$-Lipschitz smooth, descent methods converge quickly as the gradient is large when far away and small when close to the optimal solution.\n\nNote that constant and even linear functions are not strongly convex. Recall that the energy consumption models we discussed in \\cref{eq:energy_model:1} and \\cref{eq:energy_model:2} were linear in the utilization, implying that the overall operating cost of a server is not strongly convex. 
Even the non-linear energy consumption model from \\cref{eq:energy_model:3} is not strongly convex as its first-order derivative is zero for $s = 0$.\n\n\\paragraph{Local Polyhedrality} Still, strong convexity represents a significant restriction. A similar but not quite as strong is the property of local polyhedrality.\n\n\\begin{definition}\\index{local polyhedrality}\n\\cite{Goel2018} A function $f : K \\to \\mathbb{R}$ with minimizer $\\hat{x}$ is called $\\alpha$-locally polyhedral over a convex set $K \\subseteq \\mathbb{R}^d$ with respect to the norm $\\norm{\\cdot}$ if there exists some $\\epsilon > 0$ such that \\begin{align*}\n    f(x) - f(\\hat{x}) \\geq \\alpha \\norm{x - \\hat{x}}\n\\end{align*} holds for all $x \\in K$ with $\\norm{x - \\hat{x}} \\leq \\epsilon$.\n\\end{definition}\n\nLocal polyhedrality indicates that at any point $x \\in K$, a linear function with slope $\\alpha$ can be fit below the curve of $f$~\\cite{Goel2018}. In other words, $f$ grows at least linearly as one moves away from the minimizer. Similar to strong convexity, constant functions are not locally polyhedral. Nevertheless, local polyhedrality encompasses many functions, among others the cost functions we described in \\cref{chapter:application} modeling the cost of a data center~\\cite{Goel2018}.", "meta": {"hexsha": "fcfb9181076512edeb89fd6963cbf7e99cbe2091", "size": 44155, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapters/03_theory.tex", "max_stars_repo_name": "jonhue/bachelors-thesis", "max_stars_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/chapters/03_theory.tex", "max_issues_repo_name": "jonhue/bachelors-thesis", "max_issues_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-09-08T11:45:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-05T07:47:11.000Z", "max_forks_repo_path": "thesis/chapters/03_theory.tex", "max_forks_repo_name": "jonhue/bachelors-thesis", "max_forks_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-10-14T12:01:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-14T12:01:49.000Z", "avg_line_length": 108.2230392157, "max_line_length": 1331, "alphanum_fraction": 0.7301777828, "num_tokens": 12842, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8175744739711884, "lm_q1q2_score": 0.5689977614950944}}
{"text": "% ******************************* Thesis Appendix D ********************************\n\n\\chapter{Details of reliability analyses}\n\\label{ap:reli_details}\n\n\\section{First order reliability method, FORM}\n\nThe improved Hasofer-Lind-Rackwitz-Fiessler algortihm \\citep{Rackwitz1979, Zhang1995a} is used to solve the constrained optimization problem. For correlated random variables unless otherwise stated the Nataf transformation \\citep{Liu1986} with Cholesky decomposition is used to obtain independent, normally distributed random variables.\n\n\\section{Second order reliability method, SORM}\n\nThe asymptotic formula of \\citet{Breitung1984} is used as a correction term in SORM:\n\n\\begin{equation}\n\t{P_f} \\approx \\Phi ( - \\beta )\\left( {\\prod\\limits_{j = 1}^{n - 1} {\\frac{1}{{\\sqrt {1 + \\beta  \\cdot {\\kappa _j}} }}} } \\right).\n\\end{equation}\n\n\n\\section{Importance sampling Monte Carlo, isMC}\n\n\\begin{equation}\n\\label{eq:isMC_integral}\n\t{P_\\mathrm{f}} = \\int\\limits_{({\\mathbf{X}})} {{h_{\\mathbf{X}}}({\\mathbf{x}}) \\cdot \\frac{{{f_{\\mathbf{X}}}({\\mathbf{x}})}}{{{h_{\\mathbf{X}}}({\\mathbf{x}})}} \\cdot I({\\mathbf{x}})}  \\cdot \\mathrm{d}{\\mathbf{x}}\n\\end{equation}\nWhere:\n\n\\begin{tabular}{ll}\n\t$I(\\mathbf{x})$ & indicator function of the violation of the limit state; \\\\\n\t$f_{\\mathbf{X}}({\\mathbf{x}})$ & joint PDF of random variables, $\\mathbf{X}$; \\\\\n\t$h_{\\mathbf{X}}({\\mathbf{x}})$ & sampling PDF.\n\\end{tabular}\n\n\\noindent\nThe integral in Eq.\\ref{eq:isMC_integral} is replaced with summation for simulation based evaluation:\n\n\\begin{equation}\n\\label{eq:isMC_sum}\n\t{P_\\mathrm{f}} = \\mathop {\\lim }\\limits_{n \\to \\infty } \\frac{1}{n} \\cdot \\sum\\limits_{i = 1}^n {\\frac{{{f_{\\bf{X}}}({{\\bf{x}}_i})}}{{{h_{\\bf{X}}}({{\\bf{x}}_i})}} \\cdot I({{\\bf{x}}_i})}. \n\\end{equation}\n\n%\\nomenclature[a-RV]{$RV$}{Return value}\n%\\nomenclature[a-RP]{$RP$}{Return period}", "meta": {"hexsha": "47b72edffa83c13352e670859ea3bf1d0b8d43f0", "size": 1845, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendix4/appendix4.tex", "max_stars_repo_name": "rozsasarpi/Snow-extremes-and-structural-reliability", "max_stars_repo_head_hexsha": "712f826564a71b934f167ecc81da4fd1b3368a77", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Appendix4/appendix4.tex", "max_issues_repo_name": "rozsasarpi/Snow-extremes-and-structural-reliability", "max_issues_repo_head_hexsha": "712f826564a71b934f167ecc81da4fd1b3368a77", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendix4/appendix4.tex", "max_forks_repo_name": "rozsasarpi/Snow-extremes-and-structural-reliability", "max_forks_repo_head_hexsha": "712f826564a71b934f167ecc81da4fd1b3368a77", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9285714286, "max_line_length": 336, "alphanum_fraction": 0.6617886179, "num_tokens": 610, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.817574471748733, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.568997759948358}}
{"text": "\\documentclass[12pt,oneside,a4paper]{article}\n\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{geometry}\n\\usepackage{layout}\n\\usepackage{fancyhdr}\n\n\\usepackage{hyperref}\n%\\hypersetup{pdfcolorlinks=true,allcolors=blue}\n\n%\\pagestyle{fancy}\n%\\headheight = 14.5 pt\n%\\markright{Notes on \\textsl{volatilitytrade.pdf} \\hfill Peter Zsohar}\n\n\\voffset = -10 pt\n\\textheight = 650 pt\n\\parindent = 0 pt\n\n\\begin{document}\n\n\\section{Solve for Z, w and trade costs} % (fold)\n\\label{sec:solve_for_z_w_and_trade_cost}\n\n\\begin{align}\nP_{nt} &= \\prod_{j = 1}^J {\\alpha^j}^{- \\alpha^j} {P_{nt}^j}^{\\alpha^j} \\tag{15p}\\\\\n%\nP_{nt}^j &= \\xi^j {\\Phi_{nt}^j}^{-\\frac{1}{\\theta}} \\tag{16p}\\\\\n%\nd^j_{nmt} &= \\frac{{B^{j}}^{-\\theta} \\left(\\psi^j_{mt}\\right)^{\\theta\\beta^j} Z_{mt}^j \\left(\\kappa^j_{nmt}\\right)^{\\theta} \\left(y^j_{mt}\\right)^{-\\theta\\beta^j}}{{P_{mt}}^\\theta {\\Phi_{nt}^j}} \\tag{27p}\n\\end{align}\n\n\\begin{enumerate}\n\t\\item Solve for $P_{nt}(P^j_{nt})$ from (15p).\n\t\\item Solve for $\\Phi^j_{nt}(P^j_{nt})$ from (16p).\n\t\\item Rearrange (27p) to get\n\t\\begin{equation}\n\td^j_{nmt} {P_{mt}}^{\\theta(1 - \\beta^j)} {\\Phi_{nt}^j} {B^{j}}^{\\theta}  = Z_{mt}^j \\left(\\kappa^j_{nmt}\\right)^{\\theta} {w^j_{mt}}^{-\\theta\\beta^j} {\\underbrace{L_{mt}}_{1}}^{-\\theta\\beta^j}\t\n\t\\end{equation}\n\t\\item Divide by $d^j_{mmt} {P_{mt}}^{\\theta(1 - \\beta^j)} {\\Phi_{mt}^j}$ to get  $\\kappa$ as\n\t\\begin{equation}\n\t\t\\left(\\frac{d^j_{nmt} {\\Phi_{nt}^j}}{d^j_{mmt} {\\Phi_{mt}^j}}\\right)^{\\frac{1}{\\theta}} =\n\t  \\frac{\\kappa^j_{nmt}}{\\kappa^j_{mmt}} = \\kappa^j_{nmt}\n\t\\end{equation}\n\t\\item Denote the LHS of (1) as $G_{nmt}^j$ and express $w_{nt}^j$ from (1) as\n\t\\begin{equation}\n\t\tw_{nt}^j = {Z_{nt}^j}^{\\frac{1}{\\theta\\beta^j}} {G_{nnt}^j}^{-\\frac{1}{\\theta\\beta^j}}\n\t\\end{equation}\n\t\\item Divide $wL_{nt}^j$ (this is data) by $w_{nt}^j$ to express $L_{nt}^j$ as\n\t\\begin{equation}\n\t\tL_{nt}^j = {Z_{nt}^j}^{-\\frac{1}{\\theta\\beta^j}} {G_{nnt}^j}^{\\frac{1}{\\theta\\beta^j}}\n\t\\end{equation}\n\t\\item Calculate $L_{nt}^j$ from (21p) by approximating the expectation and using $wL_{nt}^j$\n\t\\item With $L_{nt}^j$ at hand calculate $Z_{nt}^j$ as\n\t\\begin{equation}\n\t\tZ_{nt}^j = {L_{nt}^j}^{-\\theta\\beta^j} G_{nnt}^j = {L_{nt}^j}^{-\\theta\\beta^j} d^j_{nnt} {P_{nt}}^{\\theta(1 - \\beta^j)} {\\Phi_{nt}^j} {B^{j}}^{\\theta} \n\t\\end{equation}\n\t\\item Now it is possible to get $w_{nt}^j$ as well.\n\\end{enumerate}\n\n\n\n% section solve_for_z_w_and_trade_cost (end)\n\n\\end{document}", "meta": {"hexsha": "537c58190ad0843a6b78b731211da32ad2cdb03f", "size": 2421, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/equations.tex", "max_stars_repo_name": "ceumicrodata/impvol", "max_stars_repo_head_hexsha": "1b57bff42701d8235d9624f0fd192dc36de6dfec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/equations.tex", "max_issues_repo_name": "ceumicrodata/impvol", "max_issues_repo_head_hexsha": "1b57bff42701d8235d9624f0fd192dc36de6dfec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-09-01T10:29:32.000Z", "max_issues_repo_issues_event_max_datetime": "2017-09-01T10:29:32.000Z", "max_forks_repo_path": "notes/equations.tex", "max_forks_repo_name": "ceumicrodata/impvol", 
"max_forks_repo_head_hexsha": "1b57bff42701d8235d9624f0fd192dc36de6dfec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2461538462, "max_line_length": 204, "alphanum_fraction": 0.6261875258, "num_tokens": 1089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5689977578860577}}
{"text": "\\documentclass{homework}\n\\course{Math 5522H}\n\\author{Alex Li}\n\\input{preamble}\n\n\\begin{document}\n\\maketitle\n\n\\begin{inspiration}\nWhat kind of metabolism do they have? What's their \\textbf{growth rate?}\n\\byline{the fictional Dr. Alan Grant in \\textit{Jurassic Park}, a well-known movie about complex dynamics}\n\\end{inspiration}\n\n\\textbf{This is a short week with an instructional break, so this\n  problem set is shorter.}\n\n  \\section{Terminology}\n\n  \\begin{problem}\n    What does it mean to say that an entire function is of\n      \\textbf{finite order}?\n      \\end{problem}\n      \\begin{solution}\n      A function $f:\\C\\to\\C$ is said to have order of growth at most $\\rho\\in\\R$ if there exist $A\\in\\R, B\\in\\R$ so that $\\forall z\\, \\in \\C$,\n      \\[\n      f(z) \\leq Ae^{B\\rho^{|z|}}\n      \\]\n      The order is finite if such $\\rho$ exists.\n      \\end{solution}\n      \\begin{problem}\n        What are \\textbf{canonical factors}?\n        \\end{problem}\n        \\begin{solution}\n        For $k\\geq0,$ the $k$th canonical factor is a function of $z$ given by the equation\n        \\[\n        \\left(1-z\\right)e^{z + \\frac{z^2}{2} + \\dots + \\frac{z^k}{k}}\n        \\]\n        Note that when $k=0$, the exponential term is $e^0$.\n        \\end{solution}\n        \\section{Numericals}\n\n        \\begin{problem}\n          Compute $\\displaystyle\\prod_{n=0}^\\infty \\left( 1 + z^{(2^n)} \\right)$.\n          \\end{problem}\n          \\begin{solution}\n          First we note that if $|z| < 1$, then $\\sum_{n=0}^\\oo |z^{(2^n)}| < \\infty$ by comparison to the geometric series $z^n$, so the product converges to something nonzero, and if $|z| > 1$, then each term grows unboundedly so the product diverges. If $|z|=1$ then the product diverges (to 0) since no matter how big $n$ is, there will a $\\delta$ so that there are terms of product farther than $\\delta$ from $1$.\n\n          Next let's consider this as a formal product and expand. We note that\n          for any term $z^k$, there is eactly one way to multiple terms to get it, corresponding to the binary expansion of $k$:\n          \\[\n          \\prod_{n=0}^\\infty \\left( 1 + z^{(2^n)} \\right) = \\sum_{n=0}^\\infty z^n\n          \\]\n          The rearrangement of the terms of the product are justified by the fact that the sum $\\sum_{n=0}^\\infty z^n$ converges absolutely for $|z|< 1$. 
This implies that the product is equal to \n          \\[\n          \\frac{1}{1-z}.\n          \\]\n          \\end{solution}\n          \\begin{problem}\n            Compute $\\displaystyle\\prod_{n=1}^\\infty \\left( 1 - \\frac{1}{(2n)^2} \\right)$.\n            \\end{problem}\n            \\begin{solution}\n            Recall that in an earlier assignment, we proved that \n            \\[\n            \\sin \\left( \\pi z \\right) = \\pi z \\prod_{n=1}^\\infty \\left( 1 - \\frac{z^2}{n^2} \\right)\n            \\]\n            Then take $z=\\frac{1}{2}$ to see that \n            \\[\n            \\frac{2}{\\pi} = \\frac{2\\sin \\left( \\pi/2 \\right)}{\\pi} = \\prod_{n=1}^\\infty \\left( 1 - \\frac{1}{(2n)^2} \\right)\n            \\]\n            \\end{solution}\n            \\begin{problem}\n              What is the order of growth of $f(z) = \\sin z$?\n              \\end{problem}\n              \\begin{solution}\n              \\begin{align*}\n              \\abs{\\sin z} = \\abs{\\frac{e^{iz} - e^{-iz}}{2}}\n              \\end{align*}\n              Both terms in the numerator are bounded in modulus by $e^{\\abs{z}}$, so \n              \\begin{align*}\n              \\abs{\\sin z} \\leq e^{\\abs{z}}\n              \\end{align*}\n              Thus the order of growth of $f$ is at most $1$.\n\n              Then considering the point $z=ir$ for $r>1$, \n              \\[\n              \\abs{e^{iz} - e^{-iz}} = e^{r} - e^{-r} \\geq \\frac{1}{2}e^{r}\n              \\]\n              so the order of growth of $f$ is at least $1$. Hence the order of growth is exactly 1.\n              \\end{solution}\n              \\begin{problem}\n                What is the order of growth of a polynomial?\n                \\end{problem}\n                \\begin{solution}\n                For fixed $A, B, \\rho>0$, the function $Ae^{B|z|^{\\rho}}$ grows at an exponential rate in $|z|$, faster than any polynomial, so every $\\rho > 0$ satisfies the conditions. Since $\\rho\\leq0$ will not work for any non-constant function, the infimum of the $\\rho$ satisfying the conditions is 0, so the order of growth of any polynomial is 0.\n                \\end{solution}\n                \\section{Exploration}\n\n                \\begin{problem}\n                  Suppose $f : \\C \\to \\C \\setminus \\{0, 1\\}$ is a holomorphic function\n                    of finite order.  What does Hadamard say about $f$ in this case?\n                    \\end{problem}\n                    \\begin{solution}\n                    $f$ has no zeros, just like the function $g(z) = 1$. Hence by Hadamard's theorem, $f(z) = e^{p(z)}g(z)$. Since $f$ has finite order, $p(z)$ is a polynomial. If $p(z)$ is constant, then $f(z)$ is a constant. Otherwise, it has a root at $z_0$ by the fundamental theorem of algebra. But $f(z_0) = e^{p(z_0)}g(z_0) = 1$, contradicting the range of $f$. 
Thus $f$ must be constant.\n                    \\end{solution}\n                    \\begin{problem}\n                      Consider the function\n                        \\[\n                            G(z) = z \\prod_{n=1}^\\infty \\left( 1 + \\frac{z}{n} \\right) e^{-z/n}\n                              \\]\n                                which has zeros at nonpositive integers, just like the function\n                                  $z \\mapsto 1/\\Gamma(z)$ we saw last week.\n\n                                    Is it the case that $1/\\Gamma(z) = G(z)$?\n                                    \\end{problem}\n                                    \\begin{solution}\n                                    Suppose that $G(z)\\Gamma(z) = 1$. Then $\\frac{G(2)}{G(1)} = \\frac{\\Gamma(1)}{\\Gamma(2)} = 1$. But\n                                    \\begin{align*}\n                                    \\frac{G(2)}{G(1)} &= 2\\prod_{n=1}^\\infty\\frac{1+\\frac{2}{n}}{1 + \\frac{1}{n}}e^{\\frac{-1}{n}}\\\\\n                                    &= 2\\prod_{n=1}^\\infty \\frac{n + 2}{n + 1}e^{-\\frac{1}{n}}\\\\\n                                    \\end{align*}\n                                    The partial product telescopes to $\\frac{N+2}{2}$ and $\\frac{N+2}{N} \\to 1$, so\n                                    \\begin{align*}\n                                    \\frac{G(2)}{G(1)} &= \\lim_{N\\to\\infty} N\\exp({-\\sum_{n=1}^N \\frac{1}{n}})\\\\\n                                    &= \\lim_{N\\to\\infty} N\\exp({-\\sum_{n=1}^N \\frac{1}{n}})e^{\\log(N^{-1}) + \\log(N)}\\quad\\color{purple} \\text{ multiply by 1}\\\\\n                                    &= \\lim_{N\\to\\infty} Ne^{\\log(N^{-1})}\\exp({-\\sum_{n=1}^N \\frac{1}{n} + \\log(N)})\\\\\n                                    &= e^{-\\gamma} \\quad\\color{purple} \\gamma \\text{ the Euler--Mascheroni constant} \\neq 1\n                                    \\end{align*}\n                                    Thus $\\frac{1}{\\Gamma(z)} \\neq G(z)$. One can also note that, since $1/\\Gamma$ has order 1 and shares the same zeroes as $G$, $G(z) = e^{a+bz}/\\Gamma(z)$. We showed $b=-\\gamma$.\n                                    \\end{solution}\n\n                                    \\section{Prove or Disprove and Salvage if Possible}\n\n                                    \\begin{problem}\n                                      A meromorphic function on the complex plane is the quotient of two\n                                        entire functions.\n                                        \\end{problem}\n                                        \\begin{proof}\n                                        The claim is true. Let $f$ be meromorphic. If $f=0$, then $f(z)=g(z)/h(z)$ with $g(z)=0$, $h(z)=1$. Otherwise, $f$ has a list of poles with repetition $z_1, z_2, \\dots$. Then there is an entire function $h$ with zeros at exactly the poles of $f$, given by a Weierstrass infinite product. The function $h\\cdot f = g$ then has removable singularities at $z_1, z_2, \\dots$, and we can regard it as entire. 
Dividing by $h$, we see that $f = \\frac{g}{h}$, so $f$ is the quotient of two entire functions, as claimed.\n                                        \\end{proof}\n                                        \\begin{problem}\n                                          If $f : \\C \\to \\C$ is an odd function of order 1 with $f(n) = 0$ for\n                                            $n \\in \\Z$, then $f(z) = \\sin (\\pi z)$.\n                                            \\end{problem}\n                                            \\begin{solution}\n                                            False. $f(z) = z^2\\sin (\\pi z)$ also works.\n\n\n                                            If we additionally assume that the zeros of $f$ are exactly the integers and that they are simple, we can get the result up to a constant:\n\n                                            By Hadamard's factorization theorem, if $f$ is of order 1, then we can write it as the product of $\\sin(\\pi z)$ and the exponential of a linear polynomial.\n                                            \\[\n                                            f(z) = e^{a+bz}\\sin(\\pi z)\n                                            \\]\n                                            If $b\\neq 0$, then the function will no longer be odd (e.g. consider $a=0, z=\\pm i$). Hence \n                                            \\[\n                                            f(z) = e^{a}\\sin(\\pi z)\n                                            \\]\n                                            so $f(z)$ can be any nonzero constant multiple of $\\sin(\\pi z)$.\n                                            \\end{solution}\n                                            \\end{document}\n", "meta": {"hexsha": "aa282a41c98cb81ed39ea985a129e6db481b8a0b", "size": 9593, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem-solutions/sol12.tex", "max_stars_repo_name": "Alex7Li/math5522h", "max_stars_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem-solutions/sol12.tex", "max_issues_repo_name": "Alex7Li/math5522h", "max_issues_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-solutions/sol12.tex", "max_forks_repo_name": "Alex7Li/math5522h", "max_forks_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.8527607362, "max_line_length": 513, "alphanum_fraction": 0.4626290003, "num_tokens": 2571, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210895, "lm_q2_score": 0.8175744828610095, "lm_q1q2_score": 0.5689977573704932}}
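A quick numerical spot-check of the first Numericals problem (our addition, not part of the assignment; the sample points and truncation depth are arbitrary) confirms that the partial products approach $1/(1-z)$ for $|z| < 1$:

\begin{verbatim}
# Verify prod_{n>=0} (1 + z^(2^n)) = 1 / (1 - z) for |z| < 1 by
# truncating once z^(2^n) has numerically underflowed to zero.
for z in [0.3, -0.5, 0.2 + 0.4j]:
    product = 1.0
    term = z                  # z^(2^0)
    for _ in range(60):
        product *= 1 + term
        term = term * term    # z^(2^n) by repeated squaring
    print(z, product, 1 / (1 - z))
\end{verbatim}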
{"text": "% !TEX root = main.tex\n\n%-------------------------------------------------\n\\section{Probability measures}\\label{sec:}\n\n%Let $\\Omega$ be the sample space of some random experiment.\n\n\\begin{definition}[Kolmogorov's axioms]\\label{def:prob_meas}\nA \\emph{probability measure} on a set $\\Omega$ is a function which maps subsets of $\\Omega$ to numbers in the interval $[0,1]$ such that $\\prob(\\emptyset) = 0$, $\\prob(\\Omega) = 1$ and for any sequence of pairwise disjoint events $A_1,A_2,\\ldots$,\n\\[\n\\prob\\left(\\bigcup_{i=1}^\\infty A_i\\right) = \\sum_{i=1}^{\\infty} \\prob(A_i)\n\\qquad\\text{(countable additivity)}.\n\\]\nThe pair $(\\Omega,\\prob)$ is called a \\emph{probability space}.\n\\end{definition}\n\n\\begin{theorem}[Properties of probability measures]\\label{thm:props_pmeas}\nProbability measures have the following properties.\n\\ben\n\\it Complementarity: $\\prob(A^c) = 1 - \\prob(A)$.\n\\it Monotonicity: if $A\\subseteq B$ then $\\prob(A)\\leq \\prob(B)$.\n\\it Addition rule: $\\prob(A\\cup B) = \\prob(A) + \\prob(B) - \\prob(A\\cap B)$.\n\\een\n \\end{theorem}\n\n% proof\n\\begin{proof}\n\\ben\n\\it Because $A\\cup A^c=\\Omega$ is a disjoint union and $\\prob(\\Omega)=1$, it follows by additivity that \n\\[\n1 = \\prob(\\Omega) = \\prob(A\\cup A^c) = \\prob(A) + \\prob(A^c).\n\\]\n\\it Let $A\\subseteq B$. We can write $B$ as the disjoint union $A\\cup(B\\setminus A)$, so by additivity\n\\[\n\\prob(B) = \\prob\\big[A\\cup (B\\setminus A)\\big] = \\prob(A) + \\prob(B\\setminus A).\n\\]\nThus because $\\prob(B\\setminus A)\\geq 0$ we have $\\prob(B) \\geq \\prob(A)$.\n\\it We can write the three sets as disjoint unions:\n\\bit\n\\it $A\\cup B = (A\\setminus B) \\cup (B\\setminus A) \\cup (A\\cap B)$\n\\it $A \t\t = (A\\setminus B) + (A\\cap B)$\n\\it $B \t\t = (B\\setminus A) + (A\\cap B)$\n\\eit\nBy additivity, \n\\bit\n\\it $\\prob(A\\cup B) = \\prob(A\\setminus B) + \\prob(B\\setminus A) + \\prob(A\\cap B)$\n\\it $\\prob(A) \t\t= \\prob(A\\setminus B) + \\prob(A\\cap B)$\n\\it $\\prob(B)\t\t= \\prob(B\\setminus A) + \\prob(A\\cap B)$\n\\eit\nHence $\\prob(A\\cup B) = \\prob(A) + \\prob(B) - \\prob(A\\cap B)$, as required.\n\\een\n\\end{proof}\n\n%-----------------------------\n%\\subsection{Continuity}\n\n\\begin{theorem}[Continuity of probability measures]\\label{thm:cont_pms}\nLet $\\prob$ be a probability measure on $\\Omega$.\n\\ben\n\\it If $A_1\\subseteq A_2\\subseteq \\ldots$ is an increasing sequence of subsets then \n$\\displaystyle\\prob\\left(\\medcup_{n=1}^{\\infty} A_n\\right) = \\lim_{n\\to\\infty}\\prob(A_n)$.\n\\it If $B_1\\supseteq B_2\\supseteq \\ldots$ is a decreasing sequence of subsets then\n$\\displaystyle\\prob\\left(\\medcap_{n=1}^{\\infty} B_n\\right) = \\lim_{n\\to\\infty}\\prob(B_n)$.\n\\een\n\\end{theorem}\n\n% proof\n\\begin{proof}\n\\ben\n\\it Let $A=\\bigcup_{n=1}^{\\infty} A_n$. We can write $A$ as a disjoint union:\n\\[\nA = A_1 \\cup (A_2\\setminus A_1) \\cup (A_3\\setminus A_2) \\cup \\ldots\n\\]\nBecause the sets $A_{n+1}\\setminus A_n$ are disjoint, by countable additivity we have\n\\begin{equation}\\label{eq:onion}\n\\prob(A) = \\prob(A_1) + \\prob(A_2\\setminus A_1) + \\prob(A_3\\setminus A_2) + \\ldots \\tag{*}\n\\end{equation}\n\nFurthermore, $A_n\\subseteq A_{n+1}$ means that $A_{n+1}=(A_{n+1}\\setminus A_n)\\cup A_n$ is a disjoint union, so\n\\[\n\\prob(A_{n+1}\\setminus A_n)=\\prob(A_{n+1})-\\prob(A_n).\n\\]\nSubstituting this into Eq. 
Eq.~(\\ref{eq:onion}),\n\\begin{align*}\n\\prob(A)\n\t& = \\prob(A_1) + \\big[\\prob(A_2) - \\prob(A_1)\\big] + \\big[\\prob(A_3) - \\prob(A_2)\\big] + \\ldots \\\\\n\t& = \\lim_{n\\to\\infty} \\Big(\\prob(A_1) + \\big[\\prob(A_2) - \\prob(A_1)\\big] + \\ldots + \\big[\\prob(A_n) - \\prob(A_{n-1})\\big]\\Big) \\\\\n\t& = \\lim_{n\\to\\infty} \\prob(A_n),\n\\end{align*}\nbecause the $n$th partial sum telescopes to $\\prob(A_n)$.\n\n\\it Let $B=\\bigcap_{i=1}^{\\infty} B_i$. Then $B_1^c\\subseteq B_2^c\\subseteq\\ldots$ and by De Morgan's laws,\n\\[\nB^c = \\medcup_{n=1}^{\\infty} B_n^c.\n\\]\nBy the first part of the theorem, $\\prob(B^c) = \\lim_{n\\to\\infty}\\prob(B_n^c)$, so\n\\[\n\\prob(B)\n\t= 1 - \\prob(B^c)\n\t= 1 - \\lim_{n\\to\\infty} \\prob(B_n^c)\n\t= \\lim_{n\\to\\infty} [1-\\prob(B_n^c)]\n\t= \\lim_{n\\to\\infty} \\prob(B_n)\n\\]\nas required.\n\\een\n\\end{proof}\n\n\\begin{example}\nA fair coin is tossed repeatedly. Show that a head eventually occurs with probability one.\n\\begin{solution}\nLet $A_n$ be the event that a head occurs during the first $n$ tosses, and let $A$ be the event that a head eventually occurs. Then $A_1\\subseteq A_2\\subseteq A_3\\subseteq\\ldots$ is an increasing sequence with\n\\[\nA=\\bigcup_{n=1}^{\\infty} A_n.\n\\]\nBy the continuity property of probability measures,\n\\[\n\\prob(A)\n\t= \\prob\\left(\\bigcup_{n=1}^{\\infty}A_n\\right)\n\t= \\lim_{n\\to\\infty} \\prob(A_n)\n\t= \\lim_{n\\to\\infty} \\left(1 - \\frac{1}{2^n}\\right) = 1,\n\\]\nwhere we have assumed that the tosses are independent, so that\n\\[\n\\prob(A_n) = 1 - \\prob(\\text{no heads in the first $n$ tosses}) = 1 - (1/2)^n.\n\\]\n\\end{solution}\n\\end{example}\n\n%-----------------------------\n\\begin{exercise}\n\\begin{questions}\n% inclusion-exclusion\n\\question\n\\begin{parts}\n\\part\nFor any finite collection of events $A_1,A_2,\\ldots,A_n$ (where $n\\geq 2$), use proof by induction to show that\n\\small\n\\[\n\\prob\\left(\\bigcup_{i=1}^{n} A_i\\right)\n\t= \\sum_i\\prob(A_i) - \\sum_{i<j}\\prob(A_i\\cap A_j) + \\sum_{i<j<k}\\prob(A_i\\cap A_j\\cap A_k)\n\t\t+ \\ \\ldots\\  + (-1)^{n+1}\\prob(A_1\\cap A_2\\cap\\ldots\\cap A_n).\n\\]\n\\normalsize\nThis is called the \\emph{inclusion-exclusion principle}.\n\\begin{answer}\nBy Theorem~\\ref{thm:props_pmeas} the result is true for $n=2$. Let $m\\geq 2$ and suppose the result is true for all $n\\leq m$.\nBecause the addition rule of Theorem~\\ref{thm:props_pmeas} holds for pairs of events, it holds for the pair $\\cup_{i=1}^{m} A_i$ and $A_{m+1}$, so\n\\begin{align*}\n\\prob\\left(\\bigcup_{i=1}^{m+1} A_i\\right)\n\t& = \\prob\\left(\\bigcup_{i=1}^{m} A_i\\right) +\\prob(A_{m+1}) - \\prob\\left[\\left(\\bigcup_{i=1}^{m} A_i\\right)\\cap A_{m+1}\\right] \\\\\n\t& = \\prob\\left(\\bigcup_{i=1}^{m} A_i\\right) +\\prob(A_{m+1}) - \\prob\\left[\\bigcup_{i=1}^{m} (A_i\\cap A_{m+1})\\right].\n\\end{align*}\nBy the inductive hypothesis, we can expand the first and last terms on the right-hand side to obtain the result.\n\\end{answer}\n\\part\nSuppose that at least one of the events $A_1,A_2,\\ldots,A_n$ is certain to occur, but that more than two of them cannot occur simultaneously. If $\\prob(A_i)=p$ and $\\prob(A_i\\cap A_j) = q$ for $i\\neq j$, use the inclusion-exclusion principle to show that $p\\geq 1/n$ and $q\\leq 2/n$.\n\\begin{answer}\nWe know that $\\prob(\\cup_{i=1}^{n} A_i) = 1$ and that $\\prob(A_i\\cap A_j\\cap A_k)=0$ whenever $i$, $j$ and $k$ are distinct.\n
By the inclusion-exclusion principle,\n\\[\n1 = \\prob\\left(\\bigcup_{i=1}^{n} A_i\\right)\n\t= \\sum_i\\prob(A_i) - \\sum_{i<j}\\prob(A_i\\cap A_j)\n\t= np - \\frac{1}{2}n(n-1)q.\n\\]\nThe second term is nonpositive, so we must have $np\\geq 1$, which shows that $p\\geq 1/n$. Hence, since $p\\leq 1$,\n\\[\n\\frac{1}{2}n(n-1)q = np - 1 \\leq n-1, \\text{ so } \\frac{nq}{2} \\leq 1 \\text{ and thus $q\\leq 2/n$, as required}.\n\\]\n\\end{answer}\n\\end{parts}\n\n% subadditivity\n\\question\nFor any countable collection of events $A_1,A_2,\\ldots$ show that\n\\[\n\\displaystyle\n\\mathbb{P}\\left(\\bigcup_{i=1}^{\\infty} A_i\\right) \\leq \\sum_{i=1}^{\\infty} \\mathbb{P}(A_i).\n\\]\nThis property is called (countable) \\emph{subadditivity}.\n\\begin{answer}\nWe will find a sequence of sets $B_1,B_2,\\ldots$ whose union coincides with the union of $A_1,A_2,\\ldots$, but which are disjoint so that the countable additivity property of probability measures can be applied.\n\nLet $B_1 = A_1$, $B_2 = A_2\\setminus A_1$, $B_3 = A_3\\setminus(A_1\\cup A_2)$, and in general\n\\[\nB_i = A_i \\setminus \\left(\\bigcup_{j=1}^{i-1} A_j\\right).\n\\]\n\n\\textbf{Claim 1}: $\\cup_{i=1}^{\\infty} A_i = \\cup_{i=1}^{\\infty} B_i$.\nTo see this, let $\\omega\\in\\cup_{i=1}^{\\infty} B_i$. Then there exists some $B_j$ with $\\omega\\in B_j$, which implies that $\\omega\\in A_j$ and hence $\\omega\\in\\cup_{i=1}^{\\infty} A_i$. Thus $\\cup_{i=1}^{\\infty} B_i \\subseteq \\cup_{i=1}^{\\infty} A_i$. Conversely, let $\\omega\\in\\cup_{i=1}^{\\infty} A_i$. Then there is a smallest index $j$ with $\\omega\\in A_j$; for this $j$ we have $\\omega\\in B_j$, and hence $\\omega\\in\\cup_{i=1}^{\\infty} B_i$. Thus $\\cup_{i=1}^{\\infty} A_i \\subseteq \\cup_{i=1}^{\\infty} B_i$.\n\\begin{itemize}\n\\item\nThis means that $\\mathbb{P}\\left(\\cup_{i=1}^{\\infty} A_i\\right) = \\mathbb{P}\\left(\\cup_{i=1}^{\\infty} B_i\\right)$.\n\\end{itemize}\n\n\\textbf{Claim 2}: $B_i \\subseteq A_i$. This follows because\n\\[\nB_i = A_i \\setminus \\left(\\bigcup_{j=1}^{i-1} A_j\\right) = A_i \\cap\\left(\\bigcup_{j=1}^{i-1} A_j\\right)^{c} \\subseteq A_i.\n\\]\nBy the monotonicity of probability measures, this means that $\\mathbb{P}(B_i)\\leq\\mathbb{P}(A_i)$ for all $i=1,2,\\ldots$.\n\n\\textbf{Claim 3}: the sets $B_1,B_2,\\ldots$ are pairwise disjoint. To see this, consider any two sets $B_i$ and $B_j$ and suppose, without loss of generality, that $i<j$. Then for any $\\omega\\in B_i$ it follows that $\\omega\\in A_i$, so $\\omega\\notin B_j$ (because $i<j$, and $B_j$ excludes all outcomes contained in $A_1,A_2,\\ldots,A_{j-1}$). Hence $B_i$ and $B_j$ are disjoint.\n\nThus, by the countable additivity of probability measures,\n\\[\n\\mathbb{P}\\left(\\bigcup_{i=1}^{\\infty} A_i\\right)\n\t= \\mathbb{P}\\left(\\bigcup_{i=1}^{\\infty} B_i\\right)\n\t= \\sum_{i=1}^{\\infty} \\mathbb{P}(B_i)%\t\\qquad\\text{by countable additivity}\n\t\\leq \\sum_{i=1}^{\\infty} \\mathbb{P}(A_i),\n\\]\nas required.\n\\end{answer}\n\n% continuity (GS 1.3.2)\n\\question\nA fair coin is tossed repeatedly. Using the continuity of probability measures show that\n\\begin{parts}\n\\part a head eventually occurs with probability one,\n\\begin{answer}\nLet $B_n$ be the event that no heads occur in the first $n$ tosses, and let $B$ be the event that no heads occur at all. Then $B_1\\supseteq B_2\\supseteq \\ldots$ is a decreasing sequence and $B=\\cap_{n=1}^{\\infty} B_n$.\n
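As a quick numerical sanity check (an illustrative Python sketch, not part of the formal argument; the helper name and trial count are ours), we can estimate $\\prob(B_n)$ by simulation and compare it with the exact value $2^{-n}$:\n\\begin{verbatim}\nimport random\n\ndef estimate_no_head(n, trials=100000):\n    # Fraction of trials in which the first n fair tosses are all tails.\n    hits = sum(all(random.random() < 0.5 for _ in range(n))\n               for _ in range(trials))\n    return hits / trials\n\nfor n in (1, 5, 10):\n    print(n, estimate_no_head(n), 0.5 ** n)  # estimate vs exact 2^(-n)\n\\end{verbatim}\n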
Hence, by continuity,\n\\[\n\\prob(B) = \\prob\\left(\\bigcap_{n=1}^{\\infty}B_n\\right)\n\t= \\lim_{n\\to\\infty} \\prob(B_n)\n\t= \\lim_{n\\to\\infty} \\left(\\frac{1}{2}\\right)^n = 0.\n\\]\nThus we are certain of eventually observing a head.\n\\end{answer}\n%--------------------\n\\part a sequence of 10 consecutive tails eventually occurs with probability one, and\n\\begin{answer}\nLet us think of the first $10n$ tosses as disjoint groups of consecutive outcomes, each group of length $10$. The probability that a given one of the $n$ groups consists of 10 consecutive tails is $2^{-10}$, independently of the other groups. The event that one of the groups consists of 10 consecutive tails is a subset of the event that a sequence of 10 consecutive tails appears anywhere in the first $10n$ tosses. Hence, using the continuity of probability measures,\n\\begin{align*}\n\\prob(\\text{$10T$ eventually appears})\n\t& = \\lim_{n\\to\\infty} \\prob(\\text{$10T$ occurs somewhere in the first $10n$ tosses}) \\\\\n\t& \\geq \\lim_{n\\to\\infty} \\prob(\\text{$10T$ occurs as one of the first $n$ groups of $10$}) \\\\\n\t& = 1 - \\lim_{n\\to\\infty} \\prob(\\text{$10T$ does not occur as one of the first $n$ groups of 10}) \\\\\n\t& = 1 - \\lim_{n\\to\\infty} \\left(1-\\frac{1}{2^{10}}\\right)^n = 1.\n\\end{align*}\nThus we are certain of eventually observing a sequence of 10 consecutive tails.\n\\end{answer}\n%--------------------\n\\part any finite sequence of heads and tails eventually occurs with probability one.\n\\begin{answer}\nLet $s$ be a fixed sequence of length $k$. As in the previous part, we think of the first $kn$ tosses as $n$ distinct groups of length $k$. The event that one of these groups is exactly equal to $s$ is a subset of the event that the first $kn$ tosses contain at least one instance of $s$.\n
Hence\n\\begin{align*}\n\\prob(\\text{$s$ eventually appears})\n\t& = \\lim_{n\\to\\infty} \\prob(\\text{$s$ occurs somewhere in the first $kn$ tosses}) \\\\\n\t& \\geq \\lim_{n\\to\\infty} \\prob(\\text{$s$ occurs as one of the first $n$ groups of $k$}) \\\\\n\t& = 1 - \\lim_{n\\to\\infty} \\prob(\\text{$s$ does not occur as one of the first $n$ groups of $k$}) \\\\\n\t& = 1 - \\lim_{n\\to\\infty} \\left(1-\\frac{1}{2^{k}}\\right)^n = 1.\n\\end{align*}\nThus we are certain of eventually observing the sequence $s$.\n\\bit\n\\it In an infinite sequence of coin tosses, anything that can happen, does happen!\n\\eit\n\\end{answer}\n\\end{parts}\n\n\\end{questions}\n\\end{exercise}\n\n", "meta": {"hexsha": "219fab1e036348660b44d0e5468ad7672a778686", "size": 11871, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2500/02A_probability_measures.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2500/02A_probability_measures.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2500/02A_probability_measures.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 45.833976834, "max_line_length": 518, "alphanum_fraction": 0.6649818886, "num_tokens": 4467, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947290421275, "lm_q2_score": 0.8688267677469952, "lm_q1q2_score": 0.5689900706482158}}
{"text": "In this section we will define a model for an algebra to show that the REA (Resource Entity Action) model can be reduced to a problem of linear algebra. We then show how games that use this model can be further simplified by linguistic constructs.\n\n\\subsection{Action algebra}\n\nAn action consists of a transfer of resources from a source entity to one or more target entities. We require that each entity has a resource vector, which contains the current amount of resources of the entity. The resource vector is sparse, since most actions involve only few resource types. An action is expressed by a transformation matrix $A$.\n\nConsider a set of target entities $T = \\left\\lbrace t_{1},t_{2},...,t_{n}\\right\\rbrace$, which are the targets of the action, and a source entity $e$. Each entity $t_{i}$ (including the source entity type) has a resource vector\n$\\mathbf{r_{i}}=\\left(r_{i_{1}},r_{i_{2}},...,r_{i_{m}} \\right)$. The source entity also has a transformation matrix $A$ of size $m \\times m$, which defines the interactions between all the resources of the source entity and all the resources of the target entities. We also consider an integrator $dt$ which contains the time difference between the current frame and the previous one. We then compute $\\mathbf{w_{e}} = \\left(  w_{e_{1}},w_{e_{2}},...,w_{e_{m}} \\right) = \\mathbf{r_{s}} \\times A \\cdot dt$. From the definition of matrix multiplication, it immediately follows that each component of $ \\mathbf{w_{e}}$ represents how a resource will change by applying the effect of all the other resources to it. We compute the vector $\\mathbf{r'_{i}} = \\mathbf{r_{i}} + \\mathbf{w_{e}} \\; \\forall e_{i} \\in E$\nwhich replaces the resource vector in each target entity.\n\nFor instance, consider the action of a spaceship (entity) using laser to damage (resource) an enemy spaceship (entity). This involves a  vector resource of two elements: laser and life points. The action must transfer laser points to subtract from the enemy life points. Suppose that the vector resource of the targeting ship is $r_{s} = (20,500)$ and the vector resource of the targeted ship is $r_{t} = (15,1000)$. Let the transformation matrix be\n$A =\n\\begin{bmatrix}\n0 & -1\\\\\n0 & 0\\\\\n\\end{bmatrix}\n$\nwhich means that the source entity will reduce the life of the target by the number of its laser points.\nThus $w_{e} = r_{s} \\times A  \\cdot dt = (20,500) \\times A  \\cdot dt = (0,-20) \\cdot dt$. At this point, assuming $dt = 1$ second, we have $r'_{t} = r_{t} + w_{e} = (20,1000) + (0,-20) \\cdot dt = (20,980)$.\n\n\\subsection{A declarative language extension}\n\nWe now describe a language extension that implements the REA design pattern and its associated algebra for the Casanova game programming language \\citep{Casanova}. The language extension is purely declarative. Its semantics are described using the SQL query language, which has the advantage of familiarity to most programmers.\n\n%Implementing the action algebra is done using an abstract class which contains an abstract method which performs the action. Each action is a class which extends the previous abstract class and implements the abstract method. This method will fetch the world looking for the information needed to find what entities are affected by the action execution. 
Each entity of the game will have a collection of actions it can perform, automatically run by Casanova.\n\nTo identify the set of target entities $T$ given a source entity and its action, we create a new type definition called \\textit{action}. An action is a declarative construct which is used to describe not only the resource exchange between entities, but also what kinds of entities participate in the exchange. The resource exchange is based on \\textit{transfers} (Add, Subtract, and Set), while the target determination is based on \\textit{predicates}: we filter the game world entities depending on their types, attributes and radius (specifying the distance beyond which the action is not applied). Some actions, called threshold actions, are not continuous and make use of special predicates to delay the execution (Output) until certain conditions are met.\n\nUsing actions it is possible to specify an exchange of resources in a fully declarative manner, so that the developer does not have to rewrite similar pieces of code ad hoc for each action. ", "meta": {"hexsha": "c9ef920e4b58778acb2ad9c44fc6587024d27b33", "size": 4331, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "11. RTS Framework/Sections/the_idea.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "11. RTS Framework/Sections/the_idea.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "11. RTS Framework/Sections/the_idea.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 149.3448275862, "max_line_length": 808, "alphanum_fraction": 0.7679519741, "num_tokens": 1021, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8688267728417087, "lm_q2_score": 0.6548947155710234, "lm_q1q2_score": 0.568990062280661}}
{"text": "\\section{Experiments}\n\nThe following experiments refer to \\emph{linearly} and \\emph{nonlinearly} separable generated datasets of size 100.\n\n\\subsection{Support Vector Classifier}\n\nBelow experiments are about the SVC for which I tested different values for the regularization hyperparameter $C$, i.e., from \\emph{soft} to \\emph{hard margin}, and in case of nonlinearly separable data also different \\emph{kernel functions} mentioned above.\n\n\\subsubsection{Hinge loss}\n\n\\paragraph{Primal formulation}\n\nThe experiments results shown in~\\ref{primal_svc_hinge_cv_results} referred to \\emph{Stochastic Gradient Descent} algorithm are obtained with $\\alpha$, i.e., the \\emph{learning rate} or \\emph{step size}, setted to 0.001 and $\\beta$, i.e., the \\emph{momentum}, equal to 0.4. The batch size is setted to 20. Training is stopped if after 5 iterations the training loss is not lower than the best found so far.\n\n\\input{experiments/primal_svc_hinge}\n\nThe results provided from the \\emph{custom} implementation, i.e., the SGD with different momentum settings, are strongly similar to those of \\emph{sklearn} implementation, i.e., \\emph{liblinear}~\\cite{fan2008liblinear} implementation, in terms of \\emph{accuracy} score. More training data points are selected as \\emph{support vectors} from the SGD solver but it always requires lower iterations, i.e., epochs, to achieve the same \\emph{numerical precision}. \\emph{Standard} or \\emph{Polyak} and \\emph{Nesterov} momentums always perform lower iterations as expected from the theoretical analysis of the convergence rate.\n\n\\pagebreak\n\n\\paragraph{Linear Dual formulations}\n\nThe experiments results shown in~\\ref{linear_lagrangian_dual_svc_cv_results} are obtained with $\\alpha$, i.e., the \\emph{learning rate} or \\emph{step size}, setted to 0.001 for the \\emph{AdaGrad} algorithm. Notice that the \\emph{qp} dual refers to the formulation~\\eqref{eq:svc_lagrangian_dual}, while the \\emph{bcqp} dual refers to the formulation~\\eqref{eq:svc_bcqp_lagrangian_dual}.\n\n\\input{experiments/linear_dual_svc}\n\nFor what about the linear \\emph{Wolfe dual} formulation we can immediately notice as higher \\emph{regularization hyperparameter} $C$ makes the model harder, so the \\emph{custom} implementation of the SMO algorithm and also the \\emph{sklearn} implementation, i.e., \\emph{libsvm}~\\cite{chang2011libsvm} implementation, needs to perform more iterations to achieve the same \\emph{numerical precision}; meanwhile the \\emph{cvxopt}~\\cite{vandenberghe2010cvxopt} seems to be insensitive to the increasing complexity of the model. The results in terms of \\emph{accuracy} and number of \\emph{support vectors} are strongly similar to each others.\n\n\\input{experiments/linear_lagrangian_dual_svc}\n\nFor what about the linear \\emph{Lagrangian dual} formulation we can see as it seems to be insensitive to the increasing complexity of the model in terms of number of \\emph{iterations} but it tends to select many training data points as \\emph{support vectors}.\n\n\\pagebreak\n\n\\paragraph{Nonlinear Dual formulations}\n\nThe experiments results shown in~\\ref{nonlinear_dual_svc_cv_results} and~\\ref{nonlinear_lagrangian_dual_svc_cv_results} are obtained with \\emph{d} and \\emph{r} hyperparameters equal to 3 and 1 respectively for the \\emph{polynomial} kernel; \\emph{gamma} is setted to \\emph{`scale`} for both \\emph{polynomial} and \\emph{gaussian RBF} kernels. 
The experimental results shown in~\\ref{nonlinear_lagrangian_dual_svc_cv_results} are likewise obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 for the \\emph{AdaGrad} algorithm.\n\n\\input{experiments/nonlinear_dual_svc}\n\n\\input{experiments/nonlinear_lagrangian_dual_svc}\n\nThe same considerations made for the previous linear \\emph{Wolfe dual} and \\emph{Lagrangian dual} formulations are confirmed in the nonlinearly separable case as well. In this setting, the additional model complexity that comes with higher $C$ regularization values does not seem to cost extra \\emph{iterations} of the algorithm; moreover, the \\emph{bcqp Lagrangian dual} formulation seems to perform better than the \\emph{qp} formulation, although both tend to select even more training data points as \\emph{support vectors}.\n\n\\subsubsection{Squared Hinge loss}\n\n\\paragraph{Primal formulation}\n\nThe experimental results shown in~\\ref{primal_svc_squared_hinge_cv_results} for the \\emph{Stochastic Gradient Descent} algorithm are obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 and the \\emph{momentum} $\\beta$ equal to 0.4. The batch size is set to 20. Training is stopped if after 5 iterations the training loss is not lower than the best found so far.\n\n\\input{experiments/primal_svc_squared_hinge}\n\nAgain, the results provided by the \\emph{custom} implementation, i.e., the SGD with different momentum settings, are strongly similar to those of the \\emph{sklearn} implementation, i.e., the \\emph{liblinear}~\\cite{fan2008liblinear} implementation, in terms of \\emph{accuracy} score. More training data points are selected as \\emph{support vectors} by the SGD solver, but it always requires even fewer iterations, i.e., epochs, to achieve the same \\emph{numerical precision}. \\emph{Standard} (or \\emph{Polyak}) and \\emph{Nesterov} momentum always lead to fewer iterations, as expected from the theoretical analysis of the convergence rate.\n\n\\pagebreak\n\n\\subsection{Support Vector Regression}\n\nThe experiments below concern the SVR, for which I tested different values of the regularization hyperparameter $C$, i.e., from \\emph{soft} to \\emph{hard margin}, of the $\\epsilon$ penalty value and, in the case of nonlinearly separable data, also the different \\emph{kernel functions} mentioned above.\n\n\\subsubsection{Epsilon-insensitive loss}\n\n\\paragraph{Primal formulation}\n\nThe experimental results shown in~\\ref{primal_svr_eps_cv_results} for the \\emph{Stochastic Gradient Descent} algorithm are obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 and the \\emph{momentum} $\\beta$ equal to 0.4. The batch size is set to 20. Training is stopped if after 5 iterations the training loss is not lower than the best found so far.\n\n\\input{experiments/primal_svr_eps}\n\nThe results provided by the \\emph{custom} implementation, i.e., the SGD with different momentum settings, are strongly similar to those of the \\emph{sklearn} implementation, i.e., the \\emph{liblinear}~\\cite{fan2008liblinear} implementation, in terms of \\emph{r2} score, except when the regularization hyperparameter $C$ equals 1, in which case the SGD scores are lower. Moreover, the SGD solver always requires fewer iterations, i.e., epochs, for higher $C$ regularization values, i.e., for $C$ equal to 10 or 100, to achieve the same \\emph{numerical precision}.\n
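The momentum variants referred to throughout are the classical updates; below is a minimal NumPy sketch of the $\\epsilon$-insensitive subgradient and one momentum SGD step (function and variable names are ours, not the project's API):\n\\begin{verbatim}\nimport numpy as np\n\ndef eps_insensitive_subgrad(w, X, y, C=1.0, eps=0.1):\n    # Subgradient of 0.5*||w||^2 + C * sum(max(0, |X w - y| - eps)).\n    r = X @ w - y\n    out = np.abs(r) > eps            # points outside the epsilon tube\n    return w + C * (np.sign(r[out])[:, None] * X[out]).sum(axis=0)\n\ndef momentum_step(w, v, g, alpha=0.001, beta=0.4):\n    v = beta * v - alpha * g         # Polyak momentum: velocity update\n    return w + v, v                  # Nesterov evaluates g at w + beta*v\n\\end{verbatim}\n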
Again, \\emph{Standard} (or \\emph{Polyak}) and \\emph{Nesterov} momentum always lead to fewer iterations, as expected from the theoretical analysis of the convergence rate. The results in terms of \\emph{support vectors} are strongly similar to each other.\n\n\\paragraph{Linear Dual formulations}\n\nThe experimental results shown in~\\ref{linear_lagrangian_dual_svr_cv_results} are obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 for the \\emph{AdaGrad} algorithm. Notice that the \\emph{qp} dual refers to the formulation~\\eqref{eq:svr_lagrangian_dual}, while the \\emph{bcqp} dual refers to the formulation~\\eqref{eq:svr_bcqp_lagrangian_dual}.\n\n\\input{experiments/linear_dual_svr}\n\nRegarding the linear \\emph{Wolfe dual} formulation, we can immediately notice that a higher \\emph{regularization hyperparameter} $C$ and lower $\\epsilon$ values make the model harder, so both the \\emph{custom} implementation of the SMO algorithm and the \\emph{sklearn} implementation, i.e., the \\emph{libsvm}~\\cite{chang2011libsvm} implementation, need to perform more iterations to achieve the same \\emph{numerical precision}; meanwhile, again, \\emph{cvxopt}~\\cite{vandenberghe2010cvxopt} seems to be insensitive to the increasing complexity of the model. The results in terms of \\emph{r2} and number of \\emph{support vectors} are strongly similar to each other.\n\n\\input{experiments/linear_lagrangian_dual_svr}\n\nRegarding the linear \\emph{Lagrangian dual} formulation, it seems to be insensitive to the increasing complexity of the model in terms of the number of \\emph{iterations}, although it requires many more \\emph{iterations} than the \\emph{Wolfe dual} formulation.\n\n\\paragraph{Nonlinear Dual formulations}\n\nThe experimental results shown in~\\ref{nonlinear_dual_svr_cv_results} and~\\ref{nonlinear_lagrangian_dual_svr_cv_results} are obtained with the \\emph{d} and \\emph{r} hyperparameters both equal to 3 for the \\emph{polynomial} kernel; \\emph{gamma} is set to \\emph{`scale`} for both the \\emph{polynomial} and \\emph{Gaussian RBF} kernels. The experimental results shown in~\\ref{nonlinear_lagrangian_dual_svr_cv_results} are obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 for the \\emph{AdaGrad} algorithm.\n\n\\input{experiments/nonlinear_dual_svr}\n\n\\input{experiments/nonlinear_lagrangian_dual_svr}\n\nThe same considerations made for the previous linear \\emph{Wolfe dual} and \\emph{Lagrangian dual} formulations are confirmed in the nonlinearly separable case as well. In this setting, the additional model complexity that comes with higher $C$ regularization values and lower $\\epsilon$ values costs a larger number of \\emph{iterations} of the algorithm.\n\n\\subsubsection{Squared Epsilon-insensitive loss}\n\n\\paragraph{Primal formulation}\n\nThe experimental results shown in~\\ref{primal_svr_squared_eps_cv_results} for the \\emph{Stochastic Gradient Descent} algorithm are obtained with the \\emph{learning rate} (or \\emph{step size}) $\\alpha$ set to 0.001 and the \\emph{momentum} $\\beta$ equal to 0.4. The batch size is set to 20.\n
Training is stopped if after 5 iterations the training loss is not lower than the best found so far.\n\n\\input{experiments/primal_svr_squared_eps}\n\nAgain, the results provided by the \\emph{custom} implementation, i.e., the SGD with different momentum settings, are strongly similar to those of the \\emph{sklearn} implementation, i.e., the \\emph{liblinear}~\\cite{fan2008liblinear} implementation, in terms of \\emph{r2} score. The SGD solver always requires even fewer iterations, i.e., epochs, for higher $C$ regularization values, i.e., for $C$ equal to 10 or 100, to achieve the same \\emph{numerical precision}. \\emph{Standard} (or \\emph{Polyak}) and \\emph{Nesterov} momentum always lead to fewer iterations, as expected from the theoretical analysis of the convergence rate.", "meta": {"hexsha": "5a066a15ecac6e9b5bc424b6f10968044c885c3d", "size": 10695, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notebooks/optimization/tex/experiments.tex", "max_stars_repo_name": "AF207/optiml", "max_stars_repo_head_hexsha": "f8860d90d4f5b6d35a3ed0ef3c1d014a2b517a72", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-06T13:59:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-06T13:59:03.000Z", "max_issues_repo_path": "notebooks/optimization/tex/experiments.tex", "max_issues_repo_name": "AF207/optiml", "max_issues_repo_head_hexsha": "f8860d90d4f5b6d35a3ed0ef3c1d014a2b517a72", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/optimization/tex/experiments.tex", "max_forks_repo_name": "AF207/optiml", "max_forks_repo_head_hexsha": "f8860d90d4f5b6d35a3ed0ef3c1d014a2b517a72", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.8910891089, "max_line_length": 808, "alphanum_fraction": 0.7935483871, "num_tokens": 2835, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5689707347039484}}
{"text": "\\section{Discussion \\label{sec:discussion}}\n\nWe have introduced \\limdd, a novel decision diagram-based method to simulate quantum circuits, which enables polynomial-size representation of a strict superset of stabilizer states and of the states represented by polynomially-large \\qmdds.\nTo prove this, we have shown the first lower bounds on the size of \\qmdds for stabilizer states:\nthey are exponential-size for certain families of stabilizer states.\n\\limdds achieve a more succinct representation by representing states up to \nlocal invertible maps which uses single qubit (local) operations from a group $G$.\nWe have investigated the choices $G=\\pauli$, $G=\\braket{Z}$ and $G=\\braket{X}$,\nand found that any choice suffices for an exponential advantage over \\qmdds;\nnotably, the choice $G=\\pauli$ allows us to succinctly represent stabilizer states.\nWe also defined reduction rules for Pauli-\\limdd and showed that for each set of \nquantum states which are equivalent under local Pauli operations,\nmodulo normalization factor, has a unique reduced Pauli-\\limdd as representative.\n\nFurthermore, we showed how to simulate arbitrary quantum circuits, encoded as Pauli-\\limdds.\nThe resulting algorithms are often faster than for \\qmdds.\nIn contrast to \\qmdds, Clifford circuits (initialized to $\\ket{0}$)\ncan be simulated by Pauli-\\limdds in polynomial time.\nThis in itself is not an interesting feat, since efficient simulation methods for\nthe stabilizer regime have been known since the Gottesman-Knill theorem.\nHowever, we also showed that Pauli-\\limdds can efficiently simulate\na circuit family we call Hamming weight-controlled Clifford circuits.\nAnd we provide empirical evidence that these circuits are hard for stabilizer-rank\nbased simulation methods.\n%we were strengthened in this exception by exploratory numerics.\n\nAn obvious next step is to investigate other choices for $G$.\nOf interest are both the representational capabilities of such diagrams\n(do they represent interesting states?), and the algorithmic capabilities\n(can we still find efficient algorithms which make use of these diagrams?).\nIn this vein, an important question is what the relationship is between \\glimdds\n(for various choices of $G$) and existing formalisms for the classical simulation of quantum circuits, such as those based on match-gates~\\cite{terhal2001classical,jozsa2008matchgates,hebenstreit2020computational} and tensor networks~\\cite{orus2014practical}.\nIt would also be interesting to compare \\limdds to graphical depictions of quantum computation, \nfollowing similar work for \\qmdds~\\cite{vilmart2021quantum}.\nWe leave empirical evaluations into the consequences of \\limdd methods for efficient\nquantum circuit simulation, for future work. 
This would obviously include a \ncomparison with stabilizer rank simulation~\\cite{bravyi2019simulation}.\n\nFinally, we note that the current definition of \\limdd imposes a strict total order\nover the qubits along every path from root to leaf.\nIt is known that the chosen order can greatly influence the size of the DD~\\cite{rudell1993dynamic,wegener2000branching},\nmaking it interesting to investigate variants of \\limdds with a flexible ordering.\n\n", "meta": {"hexsha": "89a08e2f265d94e7bf652d6e9831f9668478d764", "size": 3187, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Src/CS/sections/discussion.tex", "max_stars_repo_name": "Katafotic/latex_parsing", "max_stars_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Src/CS/sections/discussion.tex", "max_issues_repo_name": "Katafotic/latex_parsing", "max_issues_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Src/CS/sections/discussion.tex", "max_forks_repo_name": "Katafotic/latex_parsing", "max_forks_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.4318181818, "max_line_length": 259, "alphanum_fraction": 0.8161280201, "num_tokens": 697, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355188, "lm_q2_score": 0.7371581510799252, "lm_q1q2_score": 0.5689707335276539}}
{"text": "\\providecommand{\\main}{../..}\n\\documentclass[\\main/thesis.tex]{subfiles}\n\\begin{document}\n\n\\section{Incrementing a Numeral}\\label{increment}\n\nGiven a numeral \\lstinline|xs| of \\lstinline|Numeral b d o|\nand a proof of \\lstinline|\u00ac (Maximum xs)|,\nwith functions and theorems constructed in the last section, we can:\n\n\\begin{itemize}\n    \\item Find the next numeral of \\lstinline|xs| using \\lstinline|next-numeral|.\n    \\item Know that the next numeral will be greater than \\lstinline|xs|\n        by \\lstinline|next-numeral-is-greater|.\n    \\item Know that the next numeral will be the least numeral that is greater\n        than \\lstinline|xs| by \\lstinline|next-numeral-is-immediate|.\n\\end{itemize}\n\nHowever, none of the theorems above guarantees that the next numeral will be the\n\\textbf{successor} of \\lstinline|xs|, i.e., the next numeral and \\lstinline|xs|\ndiffers by only $ 1 $.\nFor example, we have seen from the previous section that numerals of\n\\lstinline|GappedEndpoint| of systems of \\lstinline|Proper| have no successors.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (5.5, -3) rectangle (17.5, 1.5);\n\n\n            % ticks\n            \\draw[ultra thick, loosely dotted] (6.5,-1.5) -- (6.5,1);\n            \\draw[ultra thick, loosely dotted] (8.5,-1.5) -- (8.5,1);\n            \\draw[ultra thick, loosely dotted] (14.5,-1.5) -- (14.5,1);\n            \\draw[ultra thick, loosely dotted] (16.5,-1.5) -- (16.5,1);\n\n            \\draw[ultra thick, decoration={brace,mirror,raise=10},decorate]\n                (6.5,-1.5) -- (8.5,-1.5);\n            \\draw[ultra thick, decoration={brace,mirror,raise=10},decorate]\n                (14.5,-1.5) -- (16.5,-1.5);\n            \\node[below=1] at (11.5, -1.5) {\\lstinline|differs by more than 1|};\n\n            % the body\n            \\foreach \\i in {0,...,2} {\n                \\draw[ultra thick, fill=gray] ({\\i*8+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+0.95}, {\\i*0.6-0.4});\n                \\foreach \\j in {1,...,5} {\n                    \\draw[ultra thick] ({\\i*8+\\j+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+\\j+0.95}, {\\i*0.6-0.4});\n                };\n                \\draw[ultra thick, fill=black] ({\\i*8+6.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+6.95}, {\\i*0.6-0.4});\n            };\n\n            \\coordinate (A) at (7.5, -2.1);\n            \\coordinate (B) at (9, -2.8);\n            \\coordinate (C) at (15.5, -2.1);\n            \\coordinate (D) at (14, -2.8);\n\n            \\path[->, ultra thick] (B) edge node {} (A);\n            \\path[->, ultra thick] (D) edge node {} (C);\n\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\caption{The cause of gaps}\n\\label{figure:26}\n\\end{figure}\n\nSuppose we are to define a function called \\lstinline|increment| that returns\nthe successor of a given numeral.\nFor those numerals that are eligible for increment,\nwe can simply compute them with \\lstinline|next-numeral|;\nand for those that are not eligible, we need a predicate for discriminating them.\n\n\\subsection{\\lstinline|Incrementable|}\n\nSimilar to the definition of \\lstinline|Bounded|, we define the predicate\n\\lstinline|Incrementable| as an existential proposition.\nTo prove that a numeral \\lstinline|xs| is \\lstinline|Incrementable|,\none is obliged to present the successor \\textit{and} a proof to justify 
it.\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nIncrementable : \u2200 {b d o} \u2192 (xs : Numeral b d o) \u2192 Set\nIncrementable {b} {d} {o} xs = \u03a3[ xs' \u2208 Numeral b d o ] \u27e6 xs' \u27e7 \u2261 suc \u27e6 xs \u27e7\n\\end{lstlisting}\n\nWe can then develop lemmata and theorems about \\lstinline|Incrementable|.\n\n\\subsubsection{Maximum Numerals}\n\nIf a numeral is a maximum, then there is no such a thing as the next numeral,\nmuch less a successor.\n\n\\begin{lstlisting}\nMaximum\u21d2\u00acIncrementable : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 (max : Maximum xs)\n    \u2192 \u00ac (Incrementable xs)\nMaximum\u21d2\u00acIncrementable xs max (incremented , claim)\n    = contradiction\n        (max incremented)\n        (>\u21d2\u2270 (m\u22611+n\u21d2m>n claim))\n%\n\\end{lstlisting}\nwhere \\lstinline|incremented| is the claimed successor\nand \\lstinline|claim : \u27e6 incremented \u27e7 \u2261 suc \u27e6 xs \u27e7|.\nThis is proven by contradicting these two propositions:\n\n\\begin{itemize}\n    \\item \\lstinline|max incremented : \u27e6 xs \u27e7 \u2265 \u27e6 incremented \u27e7|\n    \\item \\lstinline|>\u21d2\u2270 (m\u22611+n\u21d2m>n claim) : \u27e6 xs \u27e7 \u2270 \u27e6 incremented \u27e7|\n\\end{itemize}\n\n\\subsubsection{Numerals in Intervals}\n\nNumerals that are located in intervals of digit lines have successors.\n\n\\begin{figure}[H]\n    \\centering\n        \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (5.5, -1.5) rectangle (17.5, 1.5);\n\n            % the body\n            \\foreach \\i in {0,...,2} {\n                \\foreach \\j in {0,...,5} {\n                    \\draw[ultra thick, fill=black] ({\\i*8+\\j+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+\\j+0.95}, {\\i*0.6-0.4});\n                };\n                \\draw[ultra thick, fill=gray] ({\\i*8+6.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+6.95}, {\\i*0.6-0.4});\n            };\n\n            \\foreach \\i in {5,...,6} {\n                \\coordinate (\\i) at ({\\i + 0.5}, -0.9);\n            };\n            \\foreach \\i in {8,...,15} {\n                \\coordinate (\\i) at ({\\i + 0.5}, -0.3);\n            };\n            \\foreach \\i in {16,...,17} {\n                \\coordinate (\\i) at ({\\i + 0.5}, 0.3);\n            };\n\n            \\foreach \\i in {5,...,5} {\n                \\pgfmathsetmacro{\\j}{\\i + 1}\n                \\path[->, ultra thick] (\\i) edge[bend right=60] node {} ($ (\\j) + (-0.1, 0) $);\n            };\n            \\foreach \\i in {8,...,13} {\n                \\pgfmathsetmacro{\\j}{\\i + 1}\n                \\path[->, ultra thick] (\\i) edge[bend right=60] node {} ($ (\\j) + (-0.1, 0) $);\n            };\n            \\foreach \\i in {16,...,16} {\n                \\pgfmathsetmacro{\\j}{\\i + 1}\n                \\path[->, ultra thick] (\\i) edge[bend right=60] node {} ($ (\\j) + (-0.1, 0) $);\n            };\n\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\caption{The case of intervals}\n\\label{figure:27}\n\\end{figure}\n\nWe invoke \\lstinline|next-numeral-Proper-Interval-lemma| to establish the\nsuccessor relation between the given numeral and the next numeral.\n\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nInterval\u21d2Incrementable : \u2200 {b d o}\n    \u2192 (xs : Numeral (suc b) (suc d) o)\n    \u2192 (\u00acgreatest : \u00ac (Greatest (lsd xs)))\n    \u2192 (proper : 2 \u2264 suc (d + o))\n    \u2192 Incrementable 
xs\nInterval\u21d2Incrementable {b} {d} {o} xs \u00acgreatest proper\n    = (next-numeral-Proper xs proper) , (begin\n            \u27e6 next-numeral-Proper xs proper \u27e7\n        \u2261\u27e8 cong \u27e6_\u27e7 (next-numeral-Proper-refine xs proper\n            (Interval b d o \u00acgreatest))\n        \u27e9\n            \u27e6 next-numeral-Proper-Interval xs \u00acgreatest proper \u27e7\n        \u2261\u27e8 next-numeral-Proper-Interval-lemma xs \u00acgreatest proper \u27e9\n            suc \u27e6 xs \u27e7\n        \u220e)\n\\end{lstlisting}\n\n\\subsubsection{Numerals at Gapped Endpoints}\n\nNumerals that are located at gapped endpoints do not have successors.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (5.5, -1.5) rectangle (17.5, 1.5);\n\n            % the body\n            \\foreach \\i in {0,...,2} {\n                \\foreach \\j in {0,...,5} {\n                    \\draw[ultra thick] ({\\i*8+\\j+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+\\j+0.95}, {\\i*0.6-0.4});\n                };\n                \\draw[ultra thick, fill=black] ({\\i*8+6.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+6.95}, {\\i*0.6-0.4});\n            };\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\caption{The case of gapped endpoints}\n\\label{figure:27}\n\\end{figure}\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nGappedEndpoint\u21d2\u00acIncrementable : \u2200 {b d o}\n    \u2192 (xs : Numeral (suc b) (suc d) o)\n    \u2192 (greatest : Greatest (lsd xs))\n    \u2192 (proper : 2 \u2264 suc (d + o))\n    \u2192 (gapped : Gapped xs proper)\n    \u2192 \u00ac (Incrementable xs)\nGappedEndpoint\u21d2\u00acIncrementable xs greatest proper gapped (incremented , claim)\n    = contradiction \u27e6next\u27e7>\u27e6incremented\u27e7 \u27e6next\u27e7\u226f\u27e6incremented\u27e7\n\\end{lstlisting}\n\nThis is proven by contradicting two propositions as well.\n\nFirst, we show that the next numeral is greater than the claimed successor\nby rephrasing the lemma \\lstinline|next-numeral-Proper-GappedEndpoint-lemma|.\nAs a side note, \\lstinline|next-numeral-Proper| delegates tasks to other helper\nfunctions. 
\\lstinline|next-numeral-Proper-refine| is here to narrow the term\ncomputed by \\lstinline|next-numeral-Proper| down to\n\\lstinline|next-numeral-Proper-GappedEndpoint|\nby providing evidence that the computation is delegated that way.\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\n\u27e6next\u27e7>\u27e6incremented\u27e7 : \u27e6 next-numeral-Proper xs proper \u27e7 > \u27e6 incremented \u27e7\n\u27e6next\u27e7>\u27e6incremented\u27e7 =\n    start\n        suc \u27e6 incremented \u27e7\n    \u2248\u27e8 cong suc claim \u27e9\n        suc (suc \u27e6 xs \u27e7)\n    \u2264\u27e8 next-numeral-Proper-GappedEndpoint-lemma xs greatest proper gapped \u27e9\n        \u27e6 next-numeral-Proper-GappedEndpoint xs proper gapped \u27e7\n    \u2248\u27e8 cong \u27e6_\u27e7 (sym (next-numeral-Proper-refine\n        xs proper (GappedEndpoint b d o greatest gapped)))\n    \u27e9\n        \u27e6 next-numeral-Proper xs proper \u27e7\n    \u25a1\n\\end{lstlisting}\n\nHowever, the next numeral should only be greater than the given numeral\n\\lstinline|xs| and nothing else.\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\n\u27e6next\u27e7\u226f\u27e6incremented\u27e7 : \u27e6 next-numeral-Proper xs proper \u27e7 \u226f \u27e6 incremented \u27e7\n\u27e6next\u27e7\u226f\u27e6incremented\u27e7 = \u2264\u21d2\u226f (next-numeral-is-immediate-Proper\n    xs incremented proper (m\u22611+n\u21d2m>n claim))\n\\end{lstlisting}\n\n\\subsubsection{Numerals at Ungapped Endpoints}\n\nNumerals that are located at ungapped endpoints also possess successors.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{adjustbox}{max width=\\textwidth}\n        \\begin{tikzpicture}\n            % the frame\n            \\path[clip] (6.5, -1.5) rectangle (18.5, 1.5);\n\n            % the body\n            \\foreach \\i in {0,...,2} {\n                \\draw[ultra thick] ({\\i*8+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+0.95}, {\\i*0.6-0.4});\n                \\draw[ultra thick, fill=gray] ({\\i*8+1.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+1.95}, {\\i*0.6-0.4});\n                \\foreach \\j in {2,...,7} {\n                    \\draw[ultra thick] ({\\i*8+\\j+0.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+\\j+0.95}, {\\i*0.6-0.4});\n                };\n                \\draw[ultra thick, fill=black] ({\\i*8+8.05}, {\\i*0.6-0.8}) rectangle ({\\i*8+8.95}, {\\i*0.6-0.4});\n            };\n            % arrows\n            \\coordinate (A) at (9.1, -0.6);\n            \\coordinate (B) at (9.5, -0.3);\n            \\coordinate (C) at (17.1, 0);\n            \\coordinate (D) at (17.5, 0.3);\n\n            \\path[->, ultra thick] (A) edge[bend right] node {} (B);\n            \\path[->, ultra thick] (C) edge[bend right] node {} (D);\n        \\end{tikzpicture}\n    \\end{adjustbox}\n\\caption{The case of ungapped endpoints}\n\\label{figure:28}\n\\end{figure}\n\n\nSimilar to the case of \\lstinline|Interval|, the key to the proof lies in the\nhelp of \\lstinline|next-numeral-Proper-UngappedEndpoint-lemma|.\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nUngappedEndpoint\u21d2Incrementable : \u2200 {b d o}\n    \u2192 (xs : Numeral (suc b) (suc d) o)\n    \u2192 (greatest : Greatest (lsd xs))\n    \u2192 (proper : 2 \u2264 suc (d + o))\n    \u2192 (\u00acgapped : \u00ac (Gapped xs proper))\n    \u2192 Incrementable xs\nUngappedEndpoint\u21d2Incrementable {b} {d} {o} xs greatest proper \u00acgapped\n    = (next-numeral-Proper xs proper) , (begin\n          \u27e6 next-numeral-Proper xs proper \u27e7\n      \u2261\u27e8 cong 
\u27e6_\u27e7 (next-numeral-Proper-refine xs proper\n          (UngappedEndpoint b d o greatest \u00acgapped)) \u27e9\n          \u27e6 next-numeral-Proper-UngappedEndpoint xs greatest proper \u00acgapped \u27e7\n      \u2261\u27e8 next-numeral-Proper-UngappedEndpoint-lemma xs greatest proper \u00acgapped \u27e9\n          suc \u27e6 xs \u27e7\n      \u220e)\n\\end{lstlisting}\n\n\\subsection{Deciding Incrementable Numerals}\n\nWith all of the lemmata above, we can finish the most difficult part\nof the decider.\n\n\\begin{lstlisting}[basicstyle=\\ttfamily\\scriptsize]\nIncrementable?-Proper : \u2200 {b d o}\n    \u2192 (xs : Numeral (suc b) (suc d) o)\n    \u2192 (proper : 2 \u2264 suc (d + o))\n    \u2192 Dec (Incrementable xs)\nIncrementable?-Proper xs proper with nextView xs proper\nIncrementable?-Proper xs proper | Interval b d o \u00acgreatest\n    = yes (Interval\u21d2Incrementable xs \u00acgreatest proper)\nIncrementable?-Proper xs proper | GappedEndpoint b d o greatest gapped\n    = no (GappedEndpoint\u21d2\u00acIncrementable xs greatest proper gapped)\nIncrementable?-Proper xs proper | UngappedEndpoint b d o greatest \u00acgapped\n    = yes (UngappedEndpoint\u21d2Incrementable xs greatest proper \u00acgapped)\n\\end{lstlisting}\n\nWe can determine if numerals of other categories are incrementable as well.\n\n\\begin{lstlisting}\nIncrementable? : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 Dec (Incrementable xs)\nIncrementable? xs with Maximum? xs\nIncrementable? xs | yes max = no (Maximum\u21d2\u00acIncrementable xs max)\nIncrementable? {b} {d} {o} xs | no \u00acmax with numView b d o\nIncrementable? xs | no \u00acmax | NullBase d o\n    = yes ((next-numeral-NullBase xs \u00acmax) ,\n        (next-numeral-NullBase-lemma xs \u00acmax))\nIncrementable? xs | no \u00acmax | NoDigits b o\n    = yes (NoDigits-explode xs)\nIncrementable? xs | no \u00acmax | AllZeros b\n    = no (contradiction (Maximum-AllZeros xs) \u00acmax)\nIncrementable? 
xs | no \u00acmax | Proper b d o proper\n    = Incrementable?-Proper xs proper\n\\end{lstlisting}\n\nThe table below sums up whether a non-maximum numeral can be incremented in\nsystems of each category.\n\n\\begin{table}[H]\n    \\centering\n    \\begin{adjustbox}{max width=\\textwidth}\n    \\begin{tabular}{|c|c|c|c|c|c|}\n        \\hline\n        \\multirow{2}{*}{\\textbf{NullBase}} &\n        \\multirow{2}{*}{\\textbf{NoDigits}} &\n        \\multirow{2}{*}{\\textbf{AllZeros}} &\n        \\multicolumn{3}{c|}{\\textbf{Proper}} \\\\\n        \\cline{4 - 6}\n        & & & \\textbf{Interval} & \\textbf{Gapped} & \\textbf{Ungapped} \\\\\n        \\hline\n        yes & yes & no & yes & no & yes \\\\\n        \\hline\n    \\end{tabular}\n    \\end{adjustbox}\n\\caption{Summary of whether a non-maximum numeral can be incremented in\nsystems of each category}\n\\label{table:12}\n\\end{table}\n\n\nNote that the decision of making numerals of \\lstinline{NoDigits} incrementable\nis completely arbitrary as it will always hold vacuously.\n\n\\subsection{\\lstinline|increment|}\n\nWe can actually do without the previous section about \\lstinline|Incrementable?|\nand still be able to define a function that increments numerals.\nAll we have to do is to ask the user to prove that the numeral he or she has\ngiven is incrementable.\nBy doing so, the user is obliged to provide the actual successor,\nand then we can steal it and pretend that we have found the successor.\nHow outrageous!\n\n\\begin{lstlisting}\nincrement : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 (incr : Incrementable xs)\n    \u2192 Numeral b d o\nincrement xs incr = proj\u2081 incr\n\\end{lstlisting}\n\nObviously this is not the desired implementation of \\lstinline|increment|.\nThe reason why we construct \\lstinline|Incrementable?| is that it not only\ndoes the real work of computing successors for us (the credit also goes to \\lstinline|next-numeral|)\nbut also explains why a numeral is incrementable (or not).\n\nThe decidability of \\lstinline|Incrementable?| enables us to embed it in types.\nTo filter out numerals that are not eligible for increment, we use the same trick\nas when implementing safe \\lstinline|head| on lists.\n\n\\begin{lstlisting}\nTrue : {P : Set} \u2192 Dec P \u2192 Set\nTrue (yes _) = \u22a4\nTrue (no _) = \u22a5\n\\end{lstlisting}\n\n\\lstinline|True| translates positive results of a decidable predicate to\n\\lstinline|\u22a4| and negative results to \\lstinline|\u22a5|,\nwhereas \\lstinline|toWitness| reclaims the proof of the given proposition for us.\n\n\\begin{lstlisting}\ntoWitness : {P : Set} {Q : Dec P} \u2192 True Q \u2192 P\ntoWitness {Q = yes p} _  = p\ntoWitness {Q = no  _} ()\n\\end{lstlisting}\n\nFinally, the true implementation of \\lstinline|increment| is as follows:\n\n\\begin{lstlisting}\nincrement : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 (incr : True (Incrementable? xs))\n    \u2192 Numeral b d o\nincrement xs incr = proj\u2081 (toWitness incr)\n\\end{lstlisting}\n\n\\subsection{Properties of \\lstinline|increment|}\n\n\\begin{lstlisting}\nincrement-next-numeral : \u2200 {b d o}\n    \u2192 (xs : Numeral b d o)\n    \u2192 (\u00acmax : \u00ac (Maximum xs))\n    \u2192 (incr : True (Incrementable? xs))
\n    \u2192 increment xs incr \u2261 next-numeral xs \u00acmax\n\\end{lstlisting}\n\nThis property relates \\lstinline|increment| with \\lstinline|next-numeral|.\nIt may look trivial; however, the underlying implementation of \\lstinline|increment|\ndoes not actually involve \\lstinline|next-numeral|, but helper functions like\n\\lstinline|next-numeral-Proper| and \\lstinline|next-numeral-NullBase|.\nWe dispense with the proof of this property as it consists mostly of pattern\nmatching and seemingly meaningless \\lstinline|refl|s.\n\\end{document}\n", "meta": {"hexsha": "e75dc29c7955557a5ed3ebef73a48ac4f775d992", "size": 16771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/NCTU-CS/tex/constructions/increment.tex", "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_issues_repo_path": "Thesis/NCTU-CS/tex/constructions/increment.tex", "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/NCTU-CS/tex/constructions/increment.tex", "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "avg_line_length": 37.6031390135, "max_line_length": 123, "alphanum_fraction": 0.6238149186, "num_tokens": 5341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7718434873426302, "lm_q1q2_score": 0.5689707269664086}}
{"text": "\\chapter{The Klein-Gordon Equation}\n\n%%%%%%%\n%Section\n%%%%%%%\n\\section{Background}\n\n\\footnote{An incomplete but easily accessible mathematical introduction to this equation can be found at \\url{http://wiki.math.toronto.edu/DispersiveWiki/index.php/Semilinear_NLW}.}The focusing/defocusing nonlinear Klein-Gordon equation describes the evolution of a possible complex scalar field $u$ according to, \n\\begin{equation}\\label{eq:KleinGordon}\n\\frac{\\partial^2 u}{\\partial t^2} - \\Delta u +u = \\pm \\lvert u\\rvert^2u,\n\\end{equation}\nwhere $+$  is the focusing case and $-$ the defocusing case in a similar manner to the nonlinear Schr\\\"{o}dinger equation. Blow up of three dimensional radially symmetric real solutions to this equation have recently been numerically studied by Donninger and Schlag~\\cite{DonSch11}. Two dimensional simulations of the Klein-Gordon equation can be found in Yang~\\cite{Yan06}. The linear Klein-Gordon equation occurs as a modification of the linear Schr\\\"{o}dinger equation that is consistent with special relativity, see for example Landau~\\cite{Lan96} or Grenier~\\cite{Gre94}. At the present time, there have been no numerical studies of blow up of solutions to this equation without the assumption of radial symmetry. This equation has generated a large mathematical literature and is still poorly understood. Most of this mathematical literature has concentrated on analyzing the equation on an infinite three dimensional space with initial data that either decays exponentially as one tends to infinity or is nonzero on a finite set of the domain. Here, we will simulate this equation in a periodic setting. Since this equation is a wave equation, it has a finite speed of propagation of information, much as a sound wave in air takes time to move from one point to another. Consequently for short time simulations, a simulation of a solution that is only nonzero on a finite part of the domain is similar to a simulation on an infinite domain. However, over long times, the solution can spread out and interact with itself on a periodic domain, whereas on an infinite domain,  the interaction over long times is significantly reduced and the solution primarily spreads out. Understanding the interactions in a periodic setting is an interesting mathematical problem. The Klein-Gordon equation has a conserved energy given by\n\\begin{equation}\n\\int  \\frac{1}{2}\\left( \\frac{\\partial u}{\\partial t}\\right)^2 + \\frac{u^2}{2}+\\frac{1}{2}\\left\\lvert \\nabla u \\right\\rvert^2 \\mp \\frac{\\left\\lvert u \\right\\rvert^4}{4} \\mathrm{d}\\bm x.\n\\end{equation}\nThe equation is also time reversible. For long time simulations, one wants to construct numerical methods that approximately conserve this energy and are also time reversible. When using Fourier spectral methods, we primarily need to ensure that the time discretization preserves these properties, since the spectral spatial discretization will typically automatically satisfy these properties. Following Donninger and Schlag~\\cite{DonSch11}, we use two schemes. 
First, an implicit-explicit time stepping scheme which is time reversible but only conserves the energy approximately and is given by\n\\begin{equation}\\label{eq:KgImEx}\n\\frac{u^{n+1}-2u^n+u^{n-1}}{(\\delta t)^2} -\\Delta \\frac{u^{n+1}+2u^n+u^{n-1}}{4} + \\frac{u^{n+1}+2u^n+u^{n-1}}{4} = \\pm \\left\\lvert u^{n}\\right\\rvert^2u^n\n\\end{equation}\nand second, a fully implicit time stepping scheme with fixed-point iteration\n\\begin{align}\\label{eq:KgImp}\n&{}\\frac{u^{n+1,k+1}-2u^n+u^{n-1}}{(\\delta t)^2} -\\Delta \\frac{u^{n+1,k+1}+2u^n+u^{n-1}}{4} + \\frac{u^{n+1,k+1}+2u^n+u^{n-1}}{4} \\notag\n\\\\&{} = \\pm \\frac{\\left\\lvert u^{n+1,k}\\right\\rvert^4-\\left\\lvert u^{n-1}\\right\\rvert^4}{u^{n+1,k}-u^{n-1}}\n\\end{align}\nwhich conserves a discrete energy exactly\n\\begin{equation}\\label{eq:KgImEn}\n\\int\\frac{1}{2}\\left(\\frac{u^{n+1}-u^n}{\\delta t}\\right)^2 + \\frac{1}{2}\\left(\\frac{u^{n+1}+u^n}{2}\\right)^2+\\frac{1}{2}\\left\\lvert\\nabla\\frac{u^{n+1}+u^n}{2}\\right\\rvert^2 \\mp \\frac{\\left\\lvert{u}^{n+1}\\right\\rvert^4+\\left\\lvert{u}^{n}\\right\\rvert^4}{8}.\n\\end{equation}\nAs before, the superscript $n$ denotes the time step and $k$ denotes the iterate in the fixed-point iteration scheme. Iterations are stopped once the difference between two successive iterates falls below a certain tolerance. \n\n\\subsection{Matlab Programs}\n\nListings \\ref{lst:MatKg1D}, \\ref{lst:MatKg1Dimp}, \\ref{lst:MatKg2D} and \\ref{lst:MatKg3D} demonstrate Matlab implementations of these time stepping schemes. In one dimension, the Klein-Gordon equation has easily computable exact solutions (see for example Nakanishi and Schlag~\\cite[p.6]{NakSch11}), which can be used to test the accuracy of the numerical schemes. These equations seem to display three possibilities for the behavior of solutions, which are dependent on the initial conditions:\n\\begin{itemize}\n\\item the solutions could \\emph{disperse} or \\emph{thermalize}, that is, a given localized initial condition spreads out over the entire space;\n\\item the solutions blow up or become infinite;\n\\item a portion of the solution travels around as a localized particle while the rest of the solution disperses.\n\\end{itemize}\nSince the equations are reversible, there is also the possibility that a solution which is initially distributed over the spatial domain localizes itself. 
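\n\nBefore turning to the listings, the following minimal Python sketch (an illustration under assumed grid, time step and initial data, not one of this book's programs) shows how the implicit-explicit scheme in eq.\\ \\eqref{eq:KgImEx} can be advanced mode by mode in Fourier space:\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\nN, L, dt, Es = 256, 50.0, 0.01, 1.0  # modes, period, time step, +1/-1\nx = np.linspace(0, L, N, endpoint=False)\nk2 = (2*np.pi*np.fft.fftfreq(N, d=L/N))**2  # symbol of -Laplacian\nmult = 0.25*(k2 + 1.0)                      # multiplier of (-Delta+1)/4\nuold = 0.5*np.exp(-(x - 0.5*L)**2)          # u^{n-1}: illustrative data\nu = uold.copy()                             # u^n: start from rest\nfor _ in range(1000):\n    vold, v = np.fft.fft(uold), np.fft.fft(u)\n    nl = np.fft.fft(Es*np.abs(u)**2*u)      # explicit nonlinear term\n    vnew = ((2*v - vold)/dt**2 - mult*(2*v + vold) + nl)/(1/dt**2 + mult)\n    uold, u = u, np.real(np.fft.ifft(vnew))\n\\end{lstlisting}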
\n\n\\lstinputlisting[style=matlab_style,label=lst:MatKg1D,caption={A Matlab program to solve the 1-dimensional Klein Gordon equation \\eqref{eq:KleinGordon} using the time discretization in eq.\\ \\eqref{eq:KgImEx}.}]{./KleinGordon/Programs/KleinGordon1D.m}\n\n\\lstinputlisting[style=matlab_style,label=lst:MatKg1Dimp,caption={A Matlab program to solve the 1-dimensional Klein Gordon equation \\eqref{eq:KleinGordon} using the time discretization in eq.\\ \\eqref{eq:KgImp}.}]{./KleinGordon/Programs/KleinGordon1Dimp.m}\n\n\\lstinputlisting[style=matlab_style,label=lst:MatKg2D,caption={A Matlab program to solve the 2-dimensional Klein Gordon equation \\eqref{eq:KleinGordon} using the time discretization in eq.\\ \\eqref{eq:KgImp}.}]{./KleinGordon/Programs/KleinGordonImp2Db.m}\n\n\\lstinputlisting[style=matlab_style,label=lst:MatKg3D,caption={A Matlab program to solve the 3-dimensional Klein Gordon equation \\eqref{eq:KleinGordon} using the time discretization in eq.\\ \\eqref{eq:KgImEx}.}]{./KleinGordon/Programs/KleinGordon3Dsliceplot.m}\n\n\\subsection{A Two-Dimensional OpenMP Fortran Program}\n\nThe programs that we have developed in Fortran have become rather long. Here we add subroutines to make the programs shorter and easier to maintain. Listing \\ref{lst:For2dKgOmp} is the main Fortran program which uses OpenMP to solve the 2D Klein-Gordon equation. Notice that by using subroutines, we have made the main program significantly shorter and easier to read. It is still not as simple to read as the Matlab program, but is significantly better than some of the previous Fortran programs. It is also much easier to maintain, and once the subroutines have been written and debugged, they may be reused in other programs. The only drawback in using too many subroutines is that one may encounter a slight decrease in performance due to the overhead of calling a subroutine and passing data to it. The subroutines are in listings \\ref{lst:For2dKgOmpGrid}, \\ref{lst:For2dKgOmpIniDat}, \\ref{lst:For2dKgOmpSavDat}, \\ref{lst:For2dKgOmpStoOld}, \\ref{lst:For2dKgOmpEneCal}, \\ref{lst:For2dKgOmpSavRes} and an example makefile is in listing \\ref{lst:Makefile2dKgOmp}. Finally listing \\ref{lst:MatlabVideoKg} contains a Matlab program which produces pictures from the binary files that have been computed. 
One can then use another program to take the images and create a video\\footnote{At the present time, Matlab's video commands cannot reliably produce a single video from a very long simulation, so it is better to use Matlab to create still images.}.\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmp,caption={A Fortran program to solve the 2D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/KgSemiImp2d.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpGrid,caption={A Fortran subroutine to get the grid to solve the 2D Klein-Gordon equation on.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/getgrid.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpIniDat,caption={A Fortran subroutine to get the initial data to solve the 2D Klein-Gordon equation for.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/initialdata.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpSavDat,caption={A Fortran subroutine to save a field from the solution of the 2D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/savedata.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpStoOld,caption={A Fortran subroutine to update arrays when solving the 2D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/storeold.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpEneCal,caption={A Fortran subroutine to calculate the energy when solving the 2D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/enercalc.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For2dKgOmpSavRes,caption={A Fortran subroutine to save final results after solving the 2D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/saveresults.f90}\n\n\\lstinputlisting[style=make_style,language=make,label=lst:Makefile2dKgOmp,caption={An example makefile for compiling the OpenMP program in listing \\ref{lst:For2dKgOmp}.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/makefile}\n\n\\lstinputlisting[style=matlab_style,label=lst:MatlabVideoKg,caption={A Matlab program to plot the fields produced by listing \\ref{lst:For2dKgOmp}.}]{./KleinGordon/Programs/KleinGordon2dThreadFFT/video.m}\n\n\\subsection{A Three-Dimensional MPI Fortran Program using 2DECOMP\\&FFT}\n\nWe now give a program for the three-dimensional nonlinear Klein-Gordon equation. The program uses the same subroutine structure as the two-dimensional code. To make the program easy to reuse, the subroutine listed in listing \\ref{lst:For3dKgMpiReaInp} has been created to read an INPUTFILE which specifies the parameters to use for the program, so that the program does not need to be recompiled every time it is run. To enable the program to scale better, the arrays which hold the Fourier frequencies and grid points have also been decomposed so that only the portions of the arrays used on each processor are created and stored on the processor. A further addition is a short postprocessing program that creates header files in the BOV (brick of values) format, which allows one to use the parallel visualization software VisIt. The program is listed in listing \\ref{lst:For3dKgMpiBovCre}; to use it, simply compile it with gfortran (no special flags are required) and then run it in the directory in which the INPUTFILE and data are stored. 
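\n\nAs an illustration of the BOV format (a sketch, not a transcription of the Fortran postprocessing program; the header keywords follow the usual BOV layout, and the file names and grid size are made-up examples), such a header can also be written with a few lines of Python:\n\\begin{lstlisting}[language=Python]\n# Write a minimal BOV (brick of values) header that VisIt can read.\ndef write_bov_header(header, datafile, n, time):\n    with open(header, 'w') as f:\n        print('TIME:', time, file=f)\n        print('DATA_FILE:', datafile, file=f)\n        print('DATA_SIZE:', n, n, n, file=f)  # grid points in x, y, z\n        print('DATA_FORMAT: DOUBLE', file=f)\n        print('VARIABLE: u', file=f)\n        print('DATA_ENDIAN: LITTLE', file=f)  # assumed endianness\n        print('CENTERING: zonal', file=f)\n        print('BRICK_ORIGIN: 0. 0. 0.', file=f)\n        print('BRICK_SIZE: 1. 1. 1.', file=f)\n\nwrite_bov_header('u000001.bov', 'u000001.datbin', 64, 0.0)\n\\end{lstlisting}\n\n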
The program VisIt can be downloaded from \\url{https://wci.llnl.gov/codes/visit/home.html}. VisIt runs on laptops and desktops as well as on parallel computer clusters. Documentation on using VisIt is available at \\url{https://wci.llnl.gov/codes/visit/manuals.html} and at \\url{http://www.visitusers.org/index.php?title=Main_Page}. A short video tutorial on how to use VisIt remotely is available at \\url{http://cac.engin.umich.edu/resources/software/visit.html}.\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpi,caption={A Fortran program to solve the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/KgSemiImp3d.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiGrid,caption={A Fortran subroutine to get the grid to solve the 3D Klein-Gordon equation on.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/getgrid.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiIniDat,caption={A Fortran subroutine to get the initial data to solve the 3D Klein-Gordon equation for.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/initialdata.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiSavDat,caption={A Fortran subroutine to save a field from the solution of the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/savedata.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiStoOld,caption={A Fortran subroutine to update arrays when solving the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/storeold.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiEneCal,caption={A Fortran subroutine to calculate the energy when solving the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/enercalc.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiSavRes,caption={A Fortran subroutine to save final results after solving the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/saveresults.f90}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiReaInp,caption={A Fortran subroutine to read in the parameters to use when solving the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/readinputfile.f90}\n\n\\lstinputlisting[style=make_style,language=make,label=lst:Makefile3dKgMpi,caption={An example makefile for compiling the MPI program in listing \\ref{lst:For3dKgMpi}.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/makefile}\n\n\\lstinputlisting[style=fortran_style,language=Fortran,label=lst:For3dKgMpiBovCre,caption={A Fortran subroutine to create BOV (Brick of Values) header files after solving the 3D Klein-Gordon equation.}]{./KleinGordon/Programs/KleinGordon3dMpiFFT/bovcreate.f90}\n\n\\subsection{Exercises}\n\\begin{enumerate}\n\\item[1)] Compare the accuracy of the implicit and semi-implicit time stepping schemes in eqs.\\ \\eqref{eq:KgImEx} and \\eqref{eq:KgImp}. Which scheme produces the most accurate results in the least amount of real time? 
\n\\item[2)] Write serial Fortran programs to solve the two- and three-dimensional Klein-Gordon equations using the fully implicit time stepping scheme in eq.\\ \\eqref{eq:KgImp}.\n\\item[3)] Write OpenMP parallel Fortran programs to solve the two- and three-dimensional Klein-Gordon equations using the fully implicit time stepping scheme in eq.\\ \\eqref{eq:KgImp}.\n\\item[4)] The MPI command MPI\\_BCAST is used in the subroutine readinputfile, listed in listing \\ref{lst:For3dKgMpiReaInp}. Look up this command (possibly using one of the references listed in the introduction to programming section) and explain what it does. \n\\item[5)] Write an MPI parallel Fortran program to solve the two- and three-dimensional Klein-Gordon equations using the fully implicit time stepping scheme in eq.\\ \\eqref{eq:KgImp}.\n\\item[6)] Compare the results of fully three-dimensional simulations with periodic boundary conditions ($\\mathbb{T}^3$) with analytical predictions for blow up on the entire real space ($\\mathbb{R}^3$) summarized in Donninger and Schlag~\\cite{DonSch11}.\n\\item[7)] Grenier~\\cite[p.~18]{Gre94} explains that the linear Klein-Gordon equation can be written as two coupled Schr\\\"{o}dinger equations. One can extend this formulation to the nonlinear Klein-Gordon equation. If we let\n\\begin{equation}\\label{eq:KgSchDecomp1}\nu=\\phi+\\xi \\quad\\text{and}\\quad\\frac{\\partial u}{\\partial t}=\\phi-\\xi\n\\end{equation}\nthen the two coupled equations\n\\begin{align}\\label{eq:KgSchDecomp2}\n&{} i\\frac{\\partial }{\\partial t}\\begin{bmatrix}\\phi \\\\ \\xi \\end{bmatrix}= \\begin{bmatrix} -\\Delta  -1 & -\\Delta \\\\ \\Delta & \\Delta + 1 \\end{bmatrix}\\begin{bmatrix}\\phi \\\\ \\xi \\end{bmatrix} \\pm\\begin{bmatrix}1 \\\\ -1 \\end{bmatrix} \\frac{\\lvert \\phi+\\xi\\rvert^2(\\phi+\\xi)}{2}\n\\end{align}\n are equivalent to the nonlinear Klein-Gordon equation\n \\begin{align}\\label{eq:KgSchDecomp3}\n &{}\\frac{\\partial^2u}{\\partial t^2} - \\Delta u + u = \\pm u^3.\n \\end{align}\n \\begin{enumerate}\n \\item[a)] Fill in the details to explain why eqs.\\ \\eqref{eq:KgSchDecomp1} and \\eqref{eq:KgSchDecomp2} are equivalent to eq.\\ \\eqref{eq:KgSchDecomp3}. 
In particular show that by adding and subtracting the two equations in eqs.\\ \\eqref{eq:KgSchDecomp1} and \\eqref{eq:KgSchDecomp2}, we get\n \\begin{align*}\n &{} i\\frac{\\partial}{\\partial t}\\left(\\phi+\\xi\\right)= -\\left(\\phi-\\xi\\right)\n \\\\&{} i\\frac{\\partial}{\\partial t}\\left(\\phi-\\xi\\right)=-\\Delta \\left(\\phi+\\xi\\right) - \\left(\\phi+\\xi\\right) \\pm \\left\\lvert \\phi+\\xi \\right\\rvert^2\\left(\\phi+\\xi\\right).\n \\end{align*}\n Differentiating the first of these equations and substituting it into the second, then recalling that we defined $u=\\phi+\\xi$ in eq.\\ \\eqref{eq:KgSchDecomp1} gives us the Klein-Gordon equation in eq.\\ \\eqref{eq:KgSchDecomp3}.\n \\item[b)] Solve these two equations using either the implicit midpoint rule or the Crank Nicolson method.\n \\end{enumerate}\n\\end{enumerate}\n", "meta": {"hexsha": "105591a4d99b2ff067f43569b61df89da11af46a", "size": 17038, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "KleinGordon/KleinGordon.tex", "max_stars_repo_name": "bcloutier/PSNM", "max_stars_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_stars_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_stars_count": 40, "max_stars_repo_stars_event_min_datetime": "2015-01-05T14:22:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T23:51:25.000Z", "max_issues_repo_path": "KleinGordon/KleinGordon.tex", "max_issues_repo_name": "bcloutier/PSNM", "max_issues_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_issues_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-12-29T12:35:42.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-01T07:31:32.000Z", "max_forks_repo_path": "KleinGordon/KleinGordon.tex", "max_forks_repo_name": "bcloutier/PSNM", "max_forks_repo_head_hexsha": "1cd03f87f93ca6cb1a3cfbe73e8bc6106f497ddf", "max_forks_repo_licenses": ["CC-BY-3.0", "BSD-2-Clause"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2015-01-05T14:23:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-09T06:55:01.000Z", "avg_line_length": 136.304, "max_line_length": 1828, "alphanum_fraction": 0.7918182885, "num_tokens": 4944, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7371581510799253, "lm_q1q2_score": 0.568970718052575}}
{"text": "\\externaldocument{paper.tex}\r\n\t\\section{Conclusions}\r\n\t\t\\label{sec:conclusions}\r\n\t\t\r\n\t\tIn this work, we addressed the subject of intrinsic and isometric\r\n\tmanifold learning. We first showed that the need for methods, which\r\n\tpreserve the latent Euclidean geometric structure of data manifolds,\r\n\tarises naturally when inherently low-dimensional systems are observed\r\n\tvia high-dimensional non-linear measurements or observations. We presented\r\n\ta new manifold learning algorithm which uses local properties of the\r\n\tobservation function to estimate a local intrinsic metric (the ``push-forward''\r\n\tmetric of the observation function). This metric was then used to\r\n\testimate intrinsic geometric proprieties of the data directly from\r\n\tthe observed manifold as if they were calculated in the low-dimensional\r\n\tlatent space. We discussed a few settings under which estimation of\r\n\tthe required local observation function properties is possible from\r\n\tthe observed data itself. Unfortunately, we recognized that due to\r\n\ttheir local nature, these metric estimation methods are not sufficiently\r\n\trobust to high curvature of the observation function as well as to\r\n\tnoise. We suggested to overcome this by parameterizing all estimated\r\n\tintrinsic metrics on the observed manifold as the output of a single\r\n\t\\ac{ANN} and performing a signal optimization in order to estimate\r\n\tall local intrinsic metric simultaneously. This procedure has probabilistic\r\n\treasoning and was shown to be equivalent to maximum-likelihood estimation\r\n\tunder a certain statistical model. We showed that this couples the\r\n\tmetric estimations at different points on the manifold, regularizes\r\n\tthe estimation and imposes smoothness of the estimated metric. We\r\n\tdiscussed the possibility of additionally imposing regularization\r\n\ton the estimation by the net structure and weight decay terms, which\r\n\tis common practice in \\acp{ANN}. By combining a robust intrinsic\r\n\tmetric estimation method and an algorithm which can use these metrics\r\n\tto build an intrinsic and isometric embedding, we devised an algorithm,\r\n\twhich can automatically recover the geometric structure of a latent\r\n\tlow-dimensional manifold from its observations via an unknown non-linear\r\n\thigh-dimensional function. Finally we focused on the example of mapping\r\n\tand positioning an agent using a sensor network of unknown nature\r\n\tand modalities. We showed that our proposed algorithm can recover\r\n\tthe structure of the space in which the agent moves and can correctly\r\n\tposition the agent within that space. Due to the intrinsic nature\r\n\tof our method, it can perform this mapping and positioning without\r\n\tthe need for prior knowledge of a measurement model to explain the\r\n\tconnection between the observed measurements and the position of the\r\n\tagent. This invariance to the type of measurement used, was shown\r\n\tto be suitable in a setting such as indoor positioning where the exact\r\n\tmeasurement model is usually unknown.\r\n\t\r\n\tIt is evident that our method outperforms the local metric estimation\r\n\tapproach for all the tested examples and enables us to learn intrinsically-isometric\r\n\trepresentation for broader cases problems given very sparse sampling\r\n\tand observation noise of the same scale of the intrinsic variance\r\n\twhich is used to estimate the intrinsic metric. \r\n\t\r\n\tThe use of neural networks as regression functions. 
provides a powerful\r\n\tregularizing factor in intrinsic metric estimation. Their general\r\n\tstructure, together with the plethora of optimization algorithms, regularization\r\n\ttweaks and efficient implementation methods, makes them a very good\r\n\tcandidate for a parametric function family to be used in the problem\r\n\tof metric estimation.\r\n\t\r\n\tThe estimation method described in this chapter was proven to have\r\n\ta probabilistic justification when assuming an intrinsic-isometric \\ac{GMM};\r\n\thowever, it also has an intuitive interpretation in a more general\r\n\tcase. As can be observed in \\cref{eq:KL}, the network tries to approximate\r\n\tlocally calculated estimations of the intrinsic metric while adhering\r\n\tto a globally smooth and regularized model. This can serve as a good\r\n\tgeneral heuristic method to impose global smoothness and regularization\r\n\ton locally estimated intrinsic metrics under other probabilistic or\r\n\tnon-probabilistic settings such as the one presented in\r\n\t\\cref{ssec:Intrinsic-isotropic-GMM}.\r\n\t\r\n\tA somewhat similar method was presented in \\cite{bengio2004non}, where\r\n\tan \\ac{ANN} was also used in order to parametrize the spanning vectors\r\n\tof the local tangent plane to a manifold. The only criterion provided\r\n\tfor the estimation was minimization of the distance of observed points\r\n\tfrom the local tangent plane. The estimation was not given a probabilistic\r\n\tjustification, and the estimated spanning vectors had no physical meaning\r\n\tand did not provide any metric information. In our\r\n\twork, we not only retrieve the tangent plane to the manifold at each\r\n\tpoint but also an intrinsic metric, and we do so with probabilistic\r\n\treasoning.\r\n\t\r\n\tIn the case of neural networks, ``the devil is in the details'':\r\n\timplementation plays a large role in the final performance of the\r\n\ttrained network, and the subjects of neural network training and regularization\r\n\tare vast topics which are outside the scope of this work. Our main\r\n\tcontribution is to show that neural networks can, in principle, be\r\n\tused to regularize estimations, in this specific case intrinsic\r\n\tmetric estimation, and we do not attempt to claim that this architecture\r\n\tand optimization procedure achieve an optimal solution.\r\n\t\r\n\tThe global estimation approach presented in this chapter has some\r\n\tsecondary advantages in addition to providing additional robustness\r\n\tto the intrinsic metric estimation when compared to local metric estimation\r\n\tmethods. As opposed to local methods, which only estimate the intrinsic\r\n\tmetric for observed sample points, the regression approach produces\r\n\tan estimation of the intrinsic metric on the whole observed space.\r\n\tThis can possibly be used to generate additional points in between\r\n\texisting points, thus artificially increasing the sample density.\r\n\tThis can improve both the short-range intrinsic distance estimation\r\n\tdescribed in \\cref{ssec:Intrinsic-geometry-approximation} and\r\n\tthe geodesic distance estimation described in \\cref{ssec:Global-geometry-approximation}.\r\n\tBoth effects should improve the results of the algorithm presented\r\n\tin \\cref{sec:Intrinsic-isometric-manifold-learning}.\r\n\t\r\n\tIn \\cite{bengio2004non} it was shown that by learning the tangent\r\n\tspace to the manifold at each point, one can ``walk'' on the manifold\r\n\tby making infinitesimal steps each time in the tangent space. 
Since,\r\n\twith the method suggested in this chapter, we estimate not only\r\n\tthe tangent space but the intrinsic metric as well, we know how far\r\n\twe have gone in terms of distance, an ability that might be relevant\r\n\tto applications such as non-linear interpolation \\cite{bregler1995nonlinear}.\r\n\t\r\n\t\r\n\t\\subsection{Pre-processing}\r\n\t\t\\label{subsec:Pre-processing}\r\n\t\r\n\tSince our approach does not assume anything about the structure of\r\n\tthe observed data, one can perform pre-processing stages for reducing\r\n\tthe dimensionality of the data and possibly removing noise without\r\n\tworrying about maintaining the structure of the data. This has the\r\n\tadvantage of enabling the use of other existing dimensionality reduction\r\n\tmethods as pre-processing stages to our algorithm. For example, when\r\n\tlocalization and mapping were performed using image data, the algorithm\r\n\tdid not treat the data as images, which allowed us to use \\ac{PCA}\r\n\tto lower the dimensionality of the image data greatly. This would\r\n\tnot be possible for vision-based algorithms which exploit the image\r\n\tstructure, since the application of \\ac{PCA} would remove this structure\r\n\tand such algorithms could no longer be applied. For our method the\r\n\timage structure is unimportant and one can do without it.\r\n\t\r\n\t\\subsection{Practical application to indoor-positioning}\r\n\t\t\\label{subsec:Practical-application-to}\r\n\t\r\n\tThe ability to localize sets of measurements without knowledge of\r\n\tthe model connecting positions and measurements is a problem encountered\r\n\tin real-life indoor positioning due to the complexity of the models\r\n\tand their variations, as described in \\cref{sec:introduction}.\r\n\tLearning a model for each specific setting requires a large amount of\r\n\tlabeled data. Our algorithm might replace the need for labeled data\r\n\tacquisition with a simple prior assumption on the unlabeled measurement\r\n\tacquisition process. \r\n\t\r\n\tThis has been shown to work in theory; however, it has yet to be applied\r\n\tto a realistic indoor positioning system. Basic physical experiments\r\n\tusing a randomly walking agent in 2-dimensional space are underway.\r\n\tIn these experiments, a programmable iRobot Roomba robotic\r\n\tvacuum cleaner was controlled by a Raspberry-Pi mini computer and\r\n\tperformed a random walk inside a 2-dimensional region. Signal acquisition\r\n\twas performed either by measuring the \\ac{RSS} from WiFi stations\r\n\tor via a wide-angle lens which produced 360-degree images. Results\r\n\tfrom the experiments will be published when they are concluded. 
Images from\r\n\tthese experiments are shown in \\cref{fig:Roomba-experiments}.\r\n\t\r\n\t\\iffalse\r\n\t\\begin{figure}[h]\r\n\t\t\\begin{centering}\r\n\t\t\t\\begin{minipage}[t]{0.45\\columnwidth}%\r\n\t\t\t\t\\begin{flushleft}\r\n\t\t\t\t\t\\subfloat[\\ac{RSS} measurement]{\\begin{centering}\r\n\t\t\t\t\t\t\t\\includegraphics[width=1\\textwidth]{figures/Chapter_6/Roomba_wifi}\r\n\t\t\t\t\t\t\t\\par\\end{centering}\r\n\t\t\t\t\t}\r\n\t\t\t\t\t\\par\\end{flushleft}%\r\n\t\t\t\\end{minipage}\\hfill{}%\r\n\t\t\t\\begin{minipage}[t]{0.45\\columnwidth}%\r\n\t\t\t\t\\subfloat[Panoramic camera]{\\begin{centering}\r\n\t\t\t\t\t\t\\includegraphics[width=1\\textwidth]{figures/Chapter_6/Roomba_pano}\r\n\t\t\t\t\t\t\\par\\end{centering}\r\n\t\t\t\t}%\r\n\t\t\t\\end{minipage}\r\n\t\t\t\\par\\end{centering}\r\n\t\t\\caption{Roomba experiments \\label{fig:Roomba-experiments}}\r\n\t\\end{figure}\r\n\t\\fi\r\n\t\r\n\t\r\n\t\\subsection{Generalized localization problems}\r\n\t\r\n\tThroughout this work, and in \\cref{sec:introduction} and\r\n\t\\cref{ssec:localization} specifically, we discussed the problem\r\n\tof localization in sensor networks and presented it as an example\r\n\tof a problem which requires an intrinsic and isometric manifold learning\r\n\tmethod in order to fully utilize the observed data and recover a mapping\r\n\tof the physical space and a localization of an agent within this space.\r\n\tHowever, this should not be regarded as a specific problem (although\r\n\tit is by itself an important one) but as a general prototypical problem,\r\n\tfor which physical localization in 2-dimensional or 3-dimensional space\r\n\tis just one intuitive manifestation. Indeed, any problem where the\r\n\tuncovering of the global geometric structure of a low-dimensional\r\n\tparameter space is desirable can be regarded as a ``localization''\r\n\tproblem. Trivially, this can occur when structure recovery is\r\n\tby itself a goal, and one can think of many examples where the global\r\n\tstructure of the parameter space is important or has some special\r\n\tmeaning (for example for imaging or molecule structure recovery),\r\n\tyet this global structure is also crucial when one requires the intrinsic\r\n\tstructure of a low-dimensional vector space. The process of observation\r\n\tvia a non-linear function causes the data to lose its global structure,\r\n\tand the initially ``flat'' manifold becomes curved. This hinders\r\n\tour ability to perform operations on the parameter space which require\r\n\ta vector space structure, such as addition, subtraction, division and\r\n\tmultiplication. These are the basic operations required for more high-level\r\n\tprocessing such as averaging, clustering, interpolation, learning\r\n\tusing $k$-NN (especially when samples are sparse and neighbors are\r\n\tfar from each other) etc. Embedding the data back into a low-dimensional\r\n\tspace recovers the vector space structure of the original latent space\r\n\tand enables these operations.\r\n\t\r\n\t\\subsection{Automatic labeled data acquisition}\r\n\t\r\n\tSupervised machine learning uses labeled data (consisting of pairs\r\n\tof input objects and desired output values) and attempts to ``learn''\r\n\tor infer a functional connection between the two. Learning using more\r\n\tlabeled data increases the learner's ability to generalize to unseen examples and\r\n\tusually leads to better performance of the trained model. 
Unfortunately,\r\n\tacquisition of labeled data is usually non-trivial, expensive and\r\n\trequires an already existing method to correctly label data. Our proposed\r\n\tapproach can be seen as a method for automatic acquisition of labels,\r\n\tas it operates in a completely unsupervised manner and produces data\r\n\tlabeling as an output. The ability to produce labeled data can thus\r\n\tbe reduced to the task of exploring the parameter space of the system\r\n\tunder some restriction or statistical model which allows us to calculate\r\n\ta local intrinsic metric as described in this work. \r\n\t\r\n\tThis automatic acquisition of labeled data can be much easier in many\r\n\tcases than producing labeled data using an existing labeling method.\r\n\tAs an example, we return to the localization example discussed in depth\r\n\tin \\cref{ssec:localization}. To acquire labeled data in\r\n\tthe described setting, one would be required to perform a large set\r\n\tof observations (color images, depth maps, \\ac{RSS} values etc.)\r\n\tand provide for each such observation the 2-dimensional coordinates\r\n\tat which it was taken. This procedure requires time, patience and\r\n\tan existing way to measure the correct coordinates. This becomes even\r\n\tmore complicated if one considers localization in 3-dimensional space,\r\n\twhere accurate positioning requires special equipment. Alternatively,\r\n\twith our approach, one could use a randomly walking agent or a simple\r\n\tsensor array (as described in \\cref{sec:Intrinsic-Metric-Estimation})\r\n\tto acquire a large set of unlabeled data, and then use our proposed\r\n\talgorithm to retrieve an approximation of their labels. One can then\r\n\tuse a supervised machine learning algorithm in order to infer the\r\n\tlabels of new observations. In the application described in \\cref{ssec:localization}, once labeling of a large set of images\r\n\tfrom the apartment space is acquired using our algorithm, one can\r\n\tfeed these to a vision-based algorithm which can be robust to variation\r\n\tin appearance of the space, such as changes of illumination, obstructions,\r\n\tor even some changes in the structure of the apartment due to movement\r\n\tof some objects.\r\n\t\r\n\t\\subsection{Holistic approach}\r\n\t\r\n\tThe algorithm presented in this work suggests a two-stage solution\r\n\tfor intrinsic-isometric embedding of manifolds in low-dimensional\r\n\tEuclidean space:\r\n\t\\begin{enumerate}\r\n\t\t\\item Recovering local intrinsic metrics using prior-knowledge and assumptions\r\n\t\tabout the intrinsic process and using them to calculate intrinsic\r\n\t\tinter-point distances.\r\n\t\t\\item Constructing an embedding of the observed points into a low-dimensional\r\n\t\tspace, where the Euclidean distance respects the calculated intrinsic\r\n\t\tinter-point distances.\r\n\t\\end{enumerate}\r\n\tIf the first stage of this approach does not result in accurate enough\r\n\testimations of the intrinsic geometry, the second stage can potentially\r\n\tfail. Combining the two stages described above into a one-stage embedding\r\n\tmethod could be beneficial. A possible way to implement this is to\r\n\tdirectly learn an embedding function of the observed data into a low-dimensional\r\n\tspace such that the embedding best fits the prior-knowledge or assumptions\r\n\twe have about the intrinsic system. 
This could be implemented, for\r\n\texample, with an \\ac{ANN} for which the cost function is the\r\n\tlikelihood that the embedding originated from the statistical model\r\n\tof the intrinsic latent system. Such an implementation would, however,\r\n\trarely converge to minima which represent a good embedding of the\r\n\tdata without being given an initial solution which is close enough\r\n\tto the true structure. One can, however, use the embedding received\r\n\tvia the presented two-stage approach as an initial target embedding\r\n\tfor such an \\ac{ANN}, and then further maximize the likelihood of\r\n\tthe embedding directly on the embedding space. \r\n\t\r\n\t\\subsection{Sensor fusion}\r\n\t\r\n\tThe fact that our proposed algorithm is invariant to the observation\r\n\tfunction allows sensor fusion. Measurements from different sensor\r\n\tmodalities can simply be concatenated, creating a higher-dimensional\r\n\tobservation function. Although this should work in theory, it is clear\r\n\tthat if the sensors are improperly scaled with respect to each other\r\n\tthis will not be optimal. Further research should be invested in considering\r\n\tthe proper way to fuse different measurement modalities in a way that\r\n\toptimizes the results of the algorithm. \r\n\t\r\n\t\\subsection{Towards data-driven Takens' embedding}\r\n\t\r\n\tOur algorithm attempts to produce an embedding of the observed manifold\r\n\tinto a low-dimensional space, where similar observations are embedded\r\n\tto similar locations in the constructed embedding space. This causes\r\n\ta problem if different intrinsic points have identical or very similar\r\n\tobservation values. In this work we resolved this by assuming that\r\n\tthe observation function is invertible. One practical approach to\r\n\tdeal with this situation is to add observations until any such similar\r\n\tpoints are distinguished. However, when a set of observations is given\r\n\tand one cannot add additional ``physical'' observations, a different\r\n\tapproach is required. If temporal information about the dynamical\r\n\tobservation process is available, one can, in principle, distinguish\r\n\tpoints by their respective past and future observations. Such a separation\r\n\tbetween points would be especially beneficial in the presence of high\r\n\tlevels of observation noise, which might make originally distinct measurement\r\n\tvalues similar to each other. Our proposed algorithm currently uses\r\n\ttemporal side-information only for the stage of estimating the intrinsic\r\n\tmetric and does not incorporate this information for the purpose of\r\n\tthe final embedding into low-dimensional space. One possible approach\r\n\tfor incorporating this information naturally into our algorithm is\r\n\tby the observation of several lagged measurements simultaneously,\r\n\tlooking far ``enough'' into the past, such that any two points\r\n\tbecome distinguishable. This was proven for the case of deterministic\r\n\tlatent dynamical systems by Takens' embedding theorem \\cite{takens1981detecting}.\r\n\t\r\n\t\\subsection{Implementation using vector-diffusion maps}\r\n\t\r\n\tOur proposed manifold learning algorithm computes global intrinsic\r\n\tgeometric properties by using the shortest-path algorithm in order\r\n\tto approximate inter-point geodesic distances. The shortest-path algorithm\r\n\toperates by propagating the distance function from a source point\r\n\tacross the manifold. 
A similar process of propagating local vector\r\n\tinformation (instead of scalar distance information) could be used\r\n\tto allow for direct long-range intrinsic Euclidean distance estimation.\r\n\tA framework for vector diffusion on Riemannian manifolds has been presented\r\n\tin \\cite{singer2012vector} and could possibly be used in order to\r\n\timplement this. Euclidean distance estimation by vector diffusion\r\n\tshould be unaffected by non-convexity of the intrinsic manifold and\r\n\tcan allow for the use of global distances, making the final embedding more\r\n\trobust.\r\n\t\r\n\t\\subsection{Incorporating labeled data}\r\n\t\t\\label{subsec:Incorporating-labeled-data}\r\n\t\r\n\tThe algorithm proposed in this work operates in a completely unsupervised\r\n\tmanner. The only prior-knowledge used is the assumption about the\r\n\tintrinsic data structure, which allows for estimation of the local\r\n\tintrinsic metric. While this is a desired property of the algorithm,\r\n\tsince labeled data is usually harder to gather than unlabeled\r\n\tdata, it does not harness the advantages of labeled data if such data\r\n\tdoes exist. We mention two possible ways to incorporate existing labeled\r\n\tdata. Labeled data can be used as part of the intrinsic metric learning,\r\n\twhere similar settings have been used in existing works in order to\r\n\tgenerate metrics which best describe existing label variations on\r\n\tthe data manifold \\cite{yang2006distance,xing2003distance}. Alternatively,\r\n\tlabeled data can be used in the embedding procedure itself, where\r\n\tone can easily add additional distance restrictions known from labeled\r\n\tdata. In the localization example given in \\cref{ssec:localization},\r\n\tfor example, we notice a slight distortion in the embedding on a global,\r\n\tlong-range scale. This is because we only incorporate local distance\r\n\tinformation in the final embedding optimization, which does not fully\r\n\trestrict the global conformation in practical noisy situations. The\r\n\tshort-distance estimations have errors, which induce slight distortions\r\n\tin the embedding. These distortions are not significant on small scales,\r\n\tbut they accumulate on larger scales and longer distances. Adding\r\n\ta few robustly measured long distances can be extremely beneficial\r\n\tfor correcting this, and even a very low number of such constraints\r\n\tcould stabilize the embedding considerably, removing this accumulated\r\n\terror. 
Incorporating such robustly measured distances in the algorithm\r\n\tsuggested in \\cref{sec:Intrinsic-isometric-manifold-learning}\r\n\tcan be done easily by giving such distance constraints much larger\r\n\tweights in the stress function presented in \\cref{eq:w-intrinisc_stress}\r\n\tthan those of distances estimated from the observed data, which are\r\n\tless reliable.", "meta": {"hexsha": "cb1df1b4ffad4dd618399f7ffb20a397dccfd9db", "size": 21626, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Discussion.tex", "max_stars_repo_name": "sariel85/paper-SIIMS", "max_stars_repo_head_hexsha": "6ffdcf9eed2260fb96b4d197239d69ac553e68c5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Discussion.tex", "max_issues_repo_name": "sariel85/paper-SIIMS", "max_issues_repo_head_hexsha": "6ffdcf9eed2260fb96b4d197239d69ac553e68c5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Discussion.tex", "max_forks_repo_name": "sariel85/paper-SIIMS", "max_forks_repo_head_hexsha": "6ffdcf9eed2260fb96b4d197239d69ac553e68c5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.7403314917, "max_line_length": 125, "alphanum_fraction": 0.7969111255, "num_tokens": 4413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199633332891, "lm_q2_score": 0.6791786926816161, "lm_q1q2_score": 0.5688936316607266}}
{"text": "\\documentclass[12pt, a4paper]{article}\n\\usepackage[a4paper, bindingoffset=0.2in, %\nleft=0.5in,right=0.5in,top=0.5in,bottom=0.5in,%\nfootskip=.25in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\\usepackage{physics}\n\n\\title{PSet8 Report}\n\\author{Ali Abolhassanzadeh Mahani}\n\n\n\\begin{document}\n\t\\maketitle\n\t\\section{First Order Differential Equation}\n\tThe equation we are to solve is\n\t\\begin{equation}\n\t\tR \\dv{Q}{t} + \\frac{Q}{C} = V\n\t\\end{equation}\n\tfor the values $V= 10 volts$, $R = 3000 \\Omega$, $C = 1 \\mu F$ and the simulation time from zero to $t = 0.5\\, ms$.\\\\\n\tSince the numbers are a little overwhelming, we make parameter changes to come up with better numbers for our simulation.\n\t\n\tThe change of variables is as follows:\n\t\\begin{equation} \\label{eq:change}\n\t\t\\begin{aligned}\n\t\t\t&x \\equiv \\frac{Q}{R C} - \\frac{V}{R}, \\quad \\tau \\equiv \\frac{t}{R C}\\\\\n\t\t\t\\Rightarrow  &\\dv{x}{\\tau} + x = 0\n\t\t\\end{aligned}\n\t\\end{equation}\n\t\n\tUsing this change in variables, the time limit becomes: $\\tau = \\frac{0.5 ms}{10^{-6} F \\times 3000 \\Omega} = \\frac{1}{6} s$\n\tAlso, if we want the capacitor to start from being empty, the initial value for $x$ will be: $x_0 = \\frac{1}{300} \\frac{volts}{\\Omega}$\n\t\n\tThe analytical solution is obtained by integrating the equation $\\frac{\\dd{x}}{x} = \\dd{\\tau}$\n\twhich gives:\n\t\\begin{equation} \\label{eq:analytical}\n\t\tx = x_0 \\exp(- \\tau) \\Rightarrow Q = C V (1 - e^{- \\frac{t}{R C}})\n\t\\end{equation}\n\n\tI integrated numerically for $x$ and then used a reverse version of our change of variables, to find $Q$. Then, I plotted the analytical solution\n\tfor $Q$ and the integration in the same graph. (Fig.\\ref{fig:euler})\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{../p1/capacitor.jpg}\n\t\t\\label{fig:euler}\n\t\t\\caption{Analytical and numerical solution for the charging capacitor equation. The time step for the numerical solution here is $0.01$}\n\t\\end{figure}\n\n\tFor the second part, I found the difference in numeric and analytic method, $\\delta$, at $t = 0.5 ms$ for various values of the time step.\n\tThe data is available in table \\ref{tab:error}, and the plot, in Fig.\\ref{fig:error}\n\t\\begin{table}[h!]\n\t\t\\centering\n\t\t\\begin{tabular}{|c|c|}\n\t\t\t\\hline\n\t\t\t$h$ & $\\delta$\\\\\n\t\t\t\\hline\n\t\t\t$0.001$ & $0.00282761$ \\\\\n\t\t\t\\hline\n\t\t\t$0.005$ & $0.00284087$ \\\\\n\t\t\t\\hline\n\t\t\t$0.01$ & $0.002868396$ \\\\\n\t\t\t\\hline\n\t\t\t$0.015$ & $0.00286730$ \\\\\n\t\t\t\\hline\n\t\t\t$0.02$ & $0.002895286$ \\\\\n\t\t\t\\hline\n\t\t\t$0.03$ & $0.002952511$ \\\\\n\t\t\t\\hline\n\t\t\t$0.04$ & $0.002950655$ \\\\\n\t\t\t\\hline\n\t\t\t$0.05$ & $0.003009868$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\label{tab:error}\n\t\\caption{The values of $\\delta$ for different values of time step $h$.}\n\t\\end{table}\n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{../p1/error.jpg}\n\t\t\\label{fig:error}\n\t\t\\caption{Plot for \\emph{$\\log(\\delta)$} vs. \\emph{$\\log(h)$}. We can see that for a value of $h$ there's a minimum where the error is optimal and then the relation is linear.}\n\t\\end{figure}\n\n\t\\section{$2^{nd}$ Order ODE}\n\tI made all the functions to take variables \\texttt{x\\_init, acc, step, time}. 
\\texttt{x\\_init} is the initial position of the mass relative to the origin.\n\t\\texttt{acc} is the function that returns the \\emph{acceleration} as a function of position \\texttt{x}.\n\tThe initial conditions are as follows:\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\t\t\\dot{x} &= 0\\\\\n\t\t\tx & = 1\n\t\t\\end{aligned}\n\t\\end{equation}\n\t\n\tFor some methods like \\emph{Verlet} I had to define extra initial conditions due to the nature of the algorithms.\n\t\n\tI made a dictionary \\texttt{data} that stores the results for the different methods and makes the job of plotting and other tasks easier.\n\tI integrated for 60 time units with a time step of \\texttt{step = 0.01}. Then I plotted them in one graph. (Fig.~\\ref{fig:ODEs})\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{../p2/ode2_plots.jpg}\n\t\t\\caption{Plot of the solutions using different integration methods from $0$ to $t = 60$ with a time step of $0.01$.}\n\t\t\\label{fig:ODEs}\n\t\\end{figure}\n\tThen, I plotted $\\dot{x}$ as a function of $x$ for each method separately. (Fig.~\\ref{fig:energy})\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.45\\linewidth]{../p2/euler.jpg}\n\t\t\\includegraphics[width=0.45\\linewidth]{../p2/euler_cromer.jpg}\n\t\t\\includegraphics[width=0.45\\linewidth]{../p2/verlat.jpg}\n\t\t\\includegraphics[width=0.45\\linewidth]{../p2/velocity_verlat.jpg}\n\t\t\\includegraphics[width=0.45\\linewidth]{../p2/beeman.jpg}\n\t\t\\caption{$\\dot{x}$ vs. $x$ for the different methods.}\n\t\t\\label{fig:energy}\n\t\\end{figure}\n\t\n\tAs can be seen from Fig.~\\ref{fig:energy}, the \\emph{Euler-Cromer}, \\emph{Verlet}, \\emph{velocity Verlet}, and \\emph{Beeman} methods are stable for this\n\tproblem.\n\t\n\t\\section{Instability in algorithms}\n\tHere we use the same change of variables as in equation~\\ref{eq:change} to simulate the charging capacitor.\n\tThe algorithm we are going to use here is as follows:\n\t\\begin{equation} \\label{eq:instability}\n\t\ty_{n + 1} = y_{n - 1} + 2 \\dot{y}_n h\n\t\\end{equation}\n\twhere $h$ is the time step. Since this is a two-step algorithm, we need a second initial value, which we get using the Euler method. From that point\n\tforward, we use the algorithm in equation~\\ref{eq:instability}. Then, we plot both the analytical solution (eq.~\\ref{eq:analytical}) and the\n\tnumerical solution in one graph to compare them.\n\t\n\tFor short integration times the numerical solution seems to be stable, but if we wait long enough, we can see that\n\tfrom about $\\tau = 6.0$ the numerical integration shows periodic motion and oscillation around the analytical solution.\n\t(Fig.~\\ref{fig:instability})\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\linewidth]{../p3/instability.jpg}\n\t\t\\caption{Numerical solution using the algorithm from eq.~\\ref{eq:instability}; it shows oscillation about the analytical solution for\n\t\t\tlarge enough $\\tau$, which represents instability of this algorithm in this particular problem.}\n\t\t\\label{fig:instability}\n\t\\end{figure}\n\n\tIt can also be seen that for higher values of the time step, the instability manifests much sooner and explodes more rapidly.\n\n\t\\section{Chaos}\n\tHere I made a function \\texttt{stable\\_point(r)} which applies the function \\texttt{f(r, x)} 1500 times and returns the last 200 values.\n\tThis way we can cover the bifurcations up to the 128-periodic orbit. Here $f(r, x) = 4 r x (1 - x)$, so the iteration is $x_{n + 1} = 4 r x_{n} (1 - x_{n})$. 
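\n\tA minimal Python sketch of this procedure (the starting point $x_0 = 0.5$ is an arbitrary choice; any generic seed in $(0, 1)$ behaves similarly):\n\t\\begin{lstlisting}[language=Python]\ndef f(r, x):\n    return 4*r*x*(1 - x)\n\ndef stable_point(r, x0=0.5):\n    # apply f 1500 times and keep the last 200 iterates\n    x, values = x0, []\n    for i in range(1500):\n        x = f(r, x)\n        if i >= 1300:\n            values.append(x)\n    return values\n\t\\end{lstlisting}\n\t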
The bifurcation plot is available in Fig.~\\ref{fig:bifurcation}.\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{../p4/bifurcation.jpg}\n\t\t\\caption{Bifurcation plot for $f(r, x)$.}\n\t\t\\label{fig:bifurcation}\n\t\\end{figure}\n\n\tI failed to find the values for $\\delta, \\alpha$. :-(\n\\end{document}", "meta": {"hexsha": "a1c457656aac16a970ce4bef359284a0f0a8d5a4", "size": 6791, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PSet8/report/report.tex", "max_stars_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_stars_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-10T14:33:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T14:33:35.000Z", "max_issues_repo_path": "PSet8/report/report.tex", "max_issues_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_issues_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PSet8/report/report.tex", "max_forks_repo_name": "alpha-leo/ComputationalPhysics-Fall2020", "max_forks_repo_head_hexsha": "737769d4a046b4ecea885cafeaf26e26075f7320", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2547770701, "max_line_length": 177, "alphanum_fraction": 0.7041672802, "num_tokens": 2253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646140788307, "lm_q2_score": 0.8418256492357359, "lm_q1q2_score": 0.5688759849774482}}
{"text": "\\documentclass[a4paper,10pt]{article}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{url}\n\n\\newcommand{\\real}{\\mathbb{R}}\n\n\\title{Some notes on Johnson Lindenstrauss and generalization}\n\\author{Nazarov Ivan}\n\n\\begin{document}\n\\maketitle\n\n\n\\section{Bounding the tail probability of $\\chi^2_p$} % (fold)\n\\label{sec:bounding_the_tail_probability}\n\nIn this section we shall provide some bounds on the probability of deviations for\na $\\chi^2_p$ random variable, which reflects the distribution of the squared\n$\\ell_2$ norm of any unit-norm vector, transformed by an iid Gaussain matrix.\n\n% Chernoff's bound states that for any rv $y$ we have\n% $$\n%   \\mathbb{P}\\bigl( y \\geq t \\bigr)\n%     % = \\mathbb{P}\\bigl( e^{\\lambda y} \\geq e^{\\lambda t} \\bigr)\n%     \\leq \\inf_{\\lambda > 0} e^{-\\lambda t} \\mathbb{E} e^{\\lambda y}\n%     \\,. $$\n% For $y \\sim \\chi^2_p$ the moment generating function $\\mathbb{E} e^{\\lambda y}$\n% is given by $\\lambda \\mapsto (1 - 2\\lambda)^{-\\tfrac{p}2}$ for $\\lambda \\leq \\tfrac12$.\n\n\\subsection{A bound from subexponentiality} % (fold)\n\\label{sub:a_bound_from_subexponentiality}\n\nConsider a $\\chi^2_p$ random variable $x$. Then $x$ is subexponential with parameters\n$(\\sigma^2, b) = (4d, 4)$, i.e. for any $\\lvert\\lambda\\rvert < \\tfrac1b$ we have\n$$\n  \\log \\mathbb{E}_{x\\sim \\chi^2_p} e^{\\lambda x}\n    \\leq \\lambda \\mathbb{E} x + \\frac{\\sigma^2}2 \\lambda^2\n    \\,. $$\nTherefore, by Chernoff's inequality we have\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl( x \\geq t \\bigr)\n    % = \\mathbb{P}_{x \\sim \\chi^2_d}\\bigl(\n    %     e^{\\lambda x} \\geq e^{\\lambda t}\n    % \\bigr)\n    \\leq \\inf_{\\lambda > 0} e^{- \\lambda t} \\mathbb{E}_{x \\sim \\chi^2_p} e^{\\lambda x}\n    % \\leq \\inf_{0 < \\lambda < \\tfrac14} e^{- \\lambda t} e^{\\lambda p + \\frac{4 p}2 \\lambda^2}\n    \\leq \\inf\\Bigl\\{\n      e^{\\lambda (p - t) + 2p \\lambda^2}\n      \\colon 0 < \\lambda < \\tfrac14\n    \\Bigr\\}\n    \\,.\n\\end{equation*}\nThe optimal $\\lambda = \\tfrac{t - p}{4p} \\in (0, \\tfrac14)$ for $t > p$, whence\nfor $t = (1 + \\varepsilon) p$\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl( x \\geq t \\bigr)\n    \\leq \\exp{\\Bigl\\{\n        -\\tfrac{(t - p)^2}{4p} + 2p \\tfrac{(t - p)^2}{16 p^2}\n      \\Bigr\\}}\n    % = \\exp{\\Bigl\\{\n    %     -\\tfrac{(t - p)^2}{8 p}\n    %   \\Bigr\\}}\n    = e^{-\\tfrac{p \\varepsilon^2}8}\n    \\,.\n\\end{equation*}\nIn the opposite direction, we get the following:\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl( x \\leq t \\bigr)\n    % = \\mathbb{P}_{x \\sim \\chi^2_d}\\bigl(\n    %     e^{-\\lambda x} \\geq e^{-\\lambda t}\n    % \\bigr)\n    \\leq \\inf_{\\lambda > 0} e^{\\lambda t} \\mathbb{E}_{x \\sim \\chi^2_p} e^{- \\lambda x}\n    % \\leq \\inf_{0 < \\lambda < \\tfrac14} e^{\\lambda t} e^{-\\lambda p + \\frac{4 p}2 \\lambda^2}\n    \\leq \\inf\\Bigl\\{\n      e^{\\lambda (t - p) + 2p \\lambda^2}\n      \\colon 0 < \\lambda < \\tfrac14\n    \\Bigr\\}\n    \\,.\n\\end{equation*}\nThe optimal $\\lambda = \\tfrac{p - t}{4p} \\in (0, \\tfrac14)$ for $t < p$, whence\nfor $t = (1 - \\varepsilon) p$\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl( x \\leq t \\bigr)\n    \\leq \\exp{\\Bigl\\{\n        -\\tfrac{(p - t)^2}{4p} + 2p \\tfrac{(p - t)^2}{16 p^2}\n      \\Bigr\\}}\n    % = \\exp{\\Bigl\\{\n    %     -\\tfrac{(p - 
t)^2}{8 p}\n    %   \\Bigr\\}}\n    = e^{-\\tfrac{p \\varepsilon^2}8}\n    \\,.\n\\end{equation*}\nHence, for any $\\varepsilon \\in (0, 1)$ the union bound implies\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl(\n      \\lvert x - p\\rvert \\geq \\varepsilon p\n    \\bigr)\n    % \\leq \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl(\n    %     x \\geq (1 + \\varepsilon) p\n    %   \\bigr)\n    %   + \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl(\n    %     x \\leq (1 - \\varepsilon) p\n    %   \\bigr)\n    \\leq 2 e^{-\\tfrac{p}{12} \\tfrac{3 \\varepsilon^2}2}\n    \\,.\n\\end{equation*}\n% subsection a_bound_from_subexponentiality (end)\n\n\n\\subsection{A tighter bound from the mgf} % (fold)\n\\label{sub:a_tighter_bound_from_the_mgf}\n\nThese lectures\\footnotemark ~cite\n\\footnotetext{\\url{https://cs.stanford.edu/people/mmahoney/cs369m/Lectures/lecture1.pdf}}\nDasgupta, Gupta (1999)\\footnotemark ~and refer to\n\\footnotetext{\\url{http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3654}}\nthe moment generating function of a $\\chi^2_p$ variable when bounding the absolute\ndeviation of the sample mean from the theoretical. We shall follow the same steps,\nin order to derive the tail deviation bound for $\\varepsilon > 0$\n$$\n  \\mathbb{P}_{z \\sim \\mathcal{N}_p(0, I_p)}\\bigl(\n    \\lvert \\|z\\|^2 - p \\rvert \\geq \\varepsilon p\n  \\bigr)\n  \\,. $$\nIn the following $\\|\\cdot\\|$ denotes the $\\ell_2$ norm, unless specified otherwise.\n\nConsider a random variable $z\\sim \\mathcal{N}_p(0, I_p)$. Then $z_i$, $i=1,\\,\\ldots,\\,p$\nare iid $\\mathcal{N}(0, 1)$ and by definition $\\|z\\|^2 = \\sum_{i=1}^p z_i^2 \\sim \\chi^2_p$.\nThen for $\\lambda \\in \\mathbb{R}$ the moment generating function of $\\chi^2_p$ is:\n\\begin{align*}\n  \\mathbb{E}_{x \\sim \\chi^2_p} e^{\\lambda x}\n    &= \\mathbb{E}_{z\\sim \\mathcal{N}_p(0, I_p)} e^{\\lambda \\|z\\|^2}\n    = \\mathbb{E}_{z\\sim \\mathcal{N}_p(0, I_p)} e^{\\lambda \\sum_{i=1}^p z_i^2}\n    \\\\\n    &= \\prod_{i=1}^p \\mathbb{E}_{z_i\\sim \\mathcal{N}(0, 1)} e^{\\lambda z_i^2}\n    = \\biggl( \\mathbb{E}_{z\\sim \\mathcal{N}(0, 1)} e^{\\lambda z^2} \\biggr)^p\n    \\,.\n\\end{align*}\nNow for $2 \\lambda < 1$ we have\n\\begin{align*}\n  \\mathbb{E}_{z\\sim \\mathcal{N}(0, 1)} e^{\\lambda z^2}\n    &= \\tfrac1{\\sqrt{2 \\pi}} \\int_{- \\infty}^{+ \\infty}\n      e^{- (1 - 2 \\lambda) \\tfrac{z^2}2} dz\n    \\\\\n    &= \\tfrac1{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty}\n      e^{-\\tfrac{z^2}2} \\tfrac1{\\sqrt{1 - 2 \\lambda}} dz\n    = \\tfrac1{\\sqrt{1 - 2 \\lambda}}\n    \\,,\n\\end{align*}\nwhich implies that $\\mathbb{E}_{x\\sim \\chi^2_p} e^{\\lambda x} = \\bigl(1 - 2 \\lambda\\bigr)^{-\\tfrac{p}2}$.\n\nLet's apply Chernoff's bound to get a bound on the tail probability of $\\chi^2_p$.\n\\begin{equation*}\n  \\mathbb{P}_{z \\sim \\mathcal{N}_p(0, I_p)}\\bigl( \\|z\\|^2 \\geq t \\bigr)\n    % = \\mathbb{P}_{x \\sim \\chi^2_d}\\bigl(\n    %     e^{\\lambda x} \\geq e^{\\lambda t}\n    % \\bigr)\n    % \\leq \\inf_{\\lambda > 0} e^{- \\lambda t} \\mathbb{E}_{x \\sim \\chi^2_d} e^{\\lambda x}\n    % = \\inf_{0 < \\lambda < \\tfrac12} (1 - 2\\lambda)^{-\\tfrac{p}2} e^{- \\lambda t}\n    = \\inf_{0 < \\lambda < \\tfrac12} \\Bigl[(1 - 2\\lambda) e^{\\lambda \\tfrac{2 t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    \\,.\n\\end{equation*}\nThe expression under the infimum is lower the higher the expression in the square\nbrackets is. 
For $t > p$ its maximum is attained at $\\lambda = \\tfrac{t - p}{2 t} < \\tfrac12$,\nand the infimum is equal to\n$$\n  \\Bigl[(1 - 2\\lambda) e^{\\lambda \\tfrac{2 t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    % = \\Bigl[\\tfrac{p}{t} e^{\\tfrac{t - p}{p}} \\Bigr]^{-\\tfrac{p}2}\n    = \\Bigl[\\tfrac{t}{p} e^{\\tfrac{p - t}{p}} \\Bigr]^{\\tfrac{p}2}\n    = \\Bigl( \\exp{(\\log{(1 + \\varepsilon)} - \\varepsilon)} \\Bigr)^{\\tfrac{p}2}\n    \\,, $$\nwhere $\\varepsilon = \\tfrac{t-p}{p} > 0$. Note that for all $\\varepsilon > 0$ the exponent\ncan be bounded by\n$$\n  \\log{(1 + \\varepsilon)} - \\varepsilon\n    \\leq - \\tfrac{\\varepsilon^2}2 + \\tfrac{\\varepsilon^3}3\n    = \\tfrac16 \\bigl( 2\\varepsilon^3 - 3\\varepsilon^2 \\bigr)\n    \\,. $$\nTherefore, for $t = (1 + \\varepsilon) p$ we have\n$$\n  \\mathbb{P}_{x \\sim\\chi^2_p} \\bigl( x \\geq (1 + \\varepsilon) p \\bigr)\n    \\leq e^{\\tfrac{p}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,. $$\nThis bound is non-vacuous for $\\varepsilon < \\tfrac32$, which comfortably covers the region\nof interest, $\\varepsilon \\in (0, 1)$, although for $\\varepsilon \\approx 0$\nthe exponent is only slightly below $0$.\n\nThe bound for the opposite deviation can be derived in a similar fashion: for all\nadmissible $\\lambda > 0$ Chernoff's bound implies\n\\begin{equation*}\n  \\mathbb{P}_{z \\sim \\mathcal{N}_p(0, I_p)}\\bigl(\n      \\|z\\|^2 \\leq t\n  \\bigr)\n    % = \\mathbb{P}_{x \\sim \\chi^2_p}\\bigl(\n    %     e^{-\\lambda x} \\geq e^{- \\lambda t}\n    % \\bigr)\n    \\leq \\inf_{\\lambda > 0} \\frac1{e^{-\\lambda t}} \\mathbb{E}_{x \\sim \\chi^2_p} e^{-\\lambda x}\n    % \\leq \\inf_{\\lambda > 0} (1 + 2\\lambda)^{-\\tfrac{p}2} e^{\\lambda t}\n    = \\inf_{\\lambda > 0} \\Bigl[(1 + 2\\lambda) e^{- \\lambda \\tfrac{2 t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    \\,.\n\\end{equation*}\nTherefore for $\\lambda = \\tfrac{p - t}{2 t}$, $t < p$, the right-hand side evaluates to\n$$\n  \\Bigl[(1 + 2\\lambda) e^{- \\lambda \\tfrac{2 t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    % = \\Bigl[(1 + 2 \\tfrac{p-t}{2 t}) e^{- \\tfrac{p-t}{2 t} \\tfrac{2 t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    % = \\Bigl[\\tfrac{p}{t} e^{\\tfrac{t-p}{p}} \\Bigr]^{-\\tfrac{p}2}\n    = \\Bigl[\\tfrac{p}{t} e^{- \\tfrac{p - t}{p}} \\Bigr]^{-\\tfrac{p}2}\n    % = \\Bigl[ (1 - \\varepsilon) e^\\varepsilon \\Bigr]^{\\tfrac{p}2}\n    = \\Bigl( \\exp{(\\log{(1 - \\varepsilon)} + \\varepsilon)} \\Bigr)^{\\tfrac{p}2}\n    \\,, $$\nwhere $\\varepsilon = \\tfrac{p - t}{p} > 0$. For any $\\varepsilon \\in (0, 1)$ we have\n$$\n  \\log{(1 - \\varepsilon)} + \\varepsilon\n    \\leq - \\tfrac{\\varepsilon^2}2\n    \\,, $$\nwhence for $t = (1 - \\varepsilon) p$ we have\n$$\n  \\mathbb{P}_{x \\sim\\chi^2_p} \\bigl( x \\leq (1 - \\varepsilon) p \\bigr)\n    \\leq e^{\\tfrac{p}{12} (- 3 \\varepsilon^2)}\n    \\,. 
$$\nTherefore, for any $\\varepsilon \\in (0, 1)$ the union bound implies\n\\begin{align*}\n  \\mathbb{P}_{z \\sim \\mathcal{N}_p(0, I_p)}\\bigl(\n      \\lvert \\|z\\|^2 - p \\rvert \\geq \\varepsilon p\n  \\bigr)\n    &\\leq \\mathbb{P}_{z}\\bigl(\n        \\|z\\|^2 \\geq (1 + \\varepsilon) p\n      \\bigr)\n      + \\mathbb{P}_{z}\\bigl(\n        \\|z\\|^2 \\leq (1 - \\varepsilon) p\n      \\bigr)\n    \\\\\n    &\\leq e^{\\tfrac{p}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)} + e^{\\tfrac{p}{12} (- 3 \\varepsilon^2)}\n    \\leq 2 e^{\\tfrac{p}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,.\n\\end{align*}\n\n% subsection a_tighter_bound_from_the_mgf (end)\n\n% section bounding_the_tail_probability (end)\n\n\n\\section{Applying the bound to get a J-L result} % (fold)\n\\label{sec:applying_the_bound_to_get_a_j_l_result}\n\nSuppose $A \\subset \\real^d$ is a finite collection of vectors. Then for any $\\varepsilon > 0$\nthere exists a linear operator $L\\colon\\real^d\\to \\real^p$ with $p \\ll d$ such that\nthe following holds with high probability depending on $\\varepsilon$, $p$, and\n$\\lvert A \\rvert$:\n$$\n  (1 - \\varepsilon) \\|u - v\\|^2\n    \\leq \\bigl\\| L u - L v \\bigr\\|^2\n    \\leq (1 + \\varepsilon) \\|u - v\\|^2\n    \\,, $$\nfor all $u, v \\in A$.\n\n\\medskip\nConsider an iid sample $S = (z_i)_{i=1}^m$ from $\\mathcal{D} = \\mathcal{N}_d(0, I_d)$.\nThen for any $u \\in \\real^d$ with $\\|u\\| = 1$ the variables $z_i^\\top u \\sim \\mathcal{N}(0, u^\\top u)$\nare iid standard Gaussian, since linear combinations of Gaussians are Gaussian. Now, if the\nsample is collected into an $m\\times d$ matrix $Z_\\mathcal{S} = (z_i)_{i=1}^m$, then\nfor any $u \\in \\real^d$ the rv $Z_\\mathcal{S} u$ is $\\mathcal{N}_m(0, \\|u\\|^2 I_m)$.\nTherefore the squared norm of the image of a unit-norm $u$ under the random linear\ntransformation $Z_\\mathcal{S}$ is a $\\chi^2_m$ rv. For the random linear operator\n\\begin{equation} \\label{eq:random_operator}\n  L_\\mathcal{S} \\colon \\real^d \\to \\real^m\n    \\colon a \\mapsto \\tfrac1{\\sqrt{m}} Z_\\mathcal{S} a\n      = \\Bigl(\n        \\bigl\\langle \\tfrac{z_i}{\\sqrt{m}}, a \\bigr\\rangle\n      \\Bigr)_{i=1}^m\n    \\,,\n\\end{equation}\nwhich depends on the sample $S \\sim \\mathcal{D}^m$, and for any $v \\in \\real^d$ with\n$u = \\tfrac{v}{\\|v\\|}$ we have\n\\begin{align*}\n  \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\Bigl(\n    \\bigl\\lvert \\| L_\\mathcal{S} v \\|^2 - \\|v\\|^2 \\bigr \\rvert\n      \\geq \\varepsilon \\|v\\|^2\n  \\Bigr)\n    % = \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\biggl(\n    %   \\Bigl\\lvert \\bigl\\| L_\\mathcal{S} \\tfrac{v}{\\|v\\|} \\bigr\\|^2 - 1 \\Bigr \\rvert\n    %     \\geq \\varepsilon\n    % \\biggr)\n    % \\\\\n    &= \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\bigl(\n        \\lvert \\|Z_\\mathcal{S} u\\|^2 - m \\rvert \\geq \\varepsilon m\n      \\bigr)\n    \\\\\n    &= \\mathbb{P}_{x \\sim \\chi^2_m} \\bigl(\n        \\lvert x - m \\rvert \\geq \\varepsilon m\n      \\bigr)\n      % \\leq 2 e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,,\n\\end{align*}\nand the $\\chi^2_m$ distribution does not depend on $v$. 
Therefore, for a finite\ncollection $A \\subset \\real^d$ the union bound implies\n\\begin{align*}\n  \\mathbb{P}_\\mathcal{S}\\Bigl(\n    \\exists{a, b\\in A} \\colon\n    \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} (a - b) \\|^2}{\\|a - b\\|^2} - 1 \\Bigr\\rvert\n      \\geq \\varepsilon\n  \\Bigr)\n    &= \\mathbb{P}_\\mathcal{S}\\Bigl(\n      \\cup_{v\\in (A - A)} \\Bigl\\{\n        \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} v \\|^2}{\\|v\\|^2} - 1 \\Bigr\\rvert\n          \\geq \\varepsilon\n      \\Bigr\\}\n    \\Bigr)\n    \\\\\n    &\\leq \\sum_{v\\in (A - A)}\n      \\mathbb{P}_\\mathcal{S}\\Bigl(\n        \\Bigl\\lvert \\| L_\\mathcal{S} v \\|^2 -  \\|v\\|^2 \\Bigr\\rvert\n          \\geq \\varepsilon\\|v\\|^2\n      \\Bigr)\n    \\\\\n    &\\leq \\tfrac{n (n - 1)}2 \\mathbb{P}_{x \\sim \\chi^2_m} \\bigl(\n        \\lvert x - m \\rvert \\geq \\varepsilon m\n      \\bigr)\n    \\,,\n\\end{align*}\nwhere $A - B = \\{a - b\\colon a\\in A,\\,b\\in B\\}$ and $n = \\lvert A \\rvert$.\nSection~\\ref{sub:a_tighter_bound_from_the_mgf} implies\n\\begin{equation*}\n  \\mathbb{P}_{x \\sim \\chi^2_m} \\bigl(\n      \\lvert x - m \\rvert \\geq \\varepsilon m\n    \\bigr)\n    \\leq 2 e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,,\n\\end{equation*}\nwhence for all $\\varepsilon \\in (0, 1)$\n\\begin{equation*}\n  \\mathbb{P}_\\mathcal{S}\\Bigl(\n      \\exists{a, b\\in A} \\colon\n      \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} (a - b) \\|^2}{\\|a - b\\|^2} - 1 \\Bigr\\rvert\n        \\geq \\varepsilon\n    \\Bigr)\n    \\leq n (n - 1) e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,.\n\\end{equation*}\nThe bound from sec.~\\ref{sub:a_bound_from_subexponentiality} implies a slightly\nlooser bound: for all $\\varepsilon \\in (0, 1)$\n\\begin{equation*}\n  \\mathbb{P}_\\mathcal{S}\\Bigl(\n      \\exists{a, b\\in A} \\colon\n      \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} (a - b) \\|^2}{\\|a - b\\|^2} - 1 \\Bigr\\rvert\n        \\geq \\varepsilon\n    \\Bigr)\n    \\leq n (n - 1) e^{- \\tfrac{m}{8} \\varepsilon^2}\n    \\,.\n\\end{equation*}\n\nTherefore for $\\delta, \\varepsilon \\in (0, 1)$ the $(\\varepsilon, \\delta)$-PAC\nlower bounds on the required number of projected dimensions are given by\n\\begin{align*}\n  m &\\geq\n      \\biggl( \\log\\frac{n (n-1)}{\\delta} \\biggr)\n      \\frac{8}{\\varepsilon^2}\n    \\,, \\tag{sec.~\\ref{sub:a_bound_from_subexponentiality}}\n    \\\\\n  m &\\geq\n      \\biggl( \\log\\frac{n (n-1)}{\\delta} \\biggr)\n      \\frac{12}{3 \\varepsilon^2 - 2 \\varepsilon^3}\n    \\,. \\tag{sec.~\\ref{sub:a_tighter_bound_from_the_mgf}}\n\\end{align*}\nThese sample size lower bounds ensure that the linear operator $L_\\mathcal{S}$\nin~\\eqref{eq:random_operator} has the $\\varepsilon$-near isometry property for all\nvectors in $A$ with probability at least $1 - \\delta$. However, the bound based on\nsec.~\\ref{sub:a_tighter_bound_from_the_mgf} is tighter than the other one for all\n$\\varepsilon \\in (0, \\tfrac34)$.\n\n\\bigskip\\noindent\nFinally, for a unit-norm $u$ the quantity $\\| L_\\mathcal{S} u \\|^2$ is an empirical\nsecond moment over the sample $S$. The alternative forms of this sum are\n$$\n  \\hat{\\mathbb{E}}_{z\\sim S} \\frac{(u^\\top z)^2}{\\|u\\|^2}\n    = \\tfrac1{\\|u\\|^2} \\tfrac1m \\sum_{i=1}^m z_i^\\top u u^\\top z_i\n    = \\tfrac{u^\\top}{\\|u\\|}\n      \\Bigl( \\tfrac1m \\sum_{i=1}^m z_i z_i^\\top \\Bigr)\n      \\tfrac{u}{\\|u\\|} \n    = \\frac{\\langle u, \\hat{\\Sigma} u \\rangle}{\\|u\\|^2}\n    \\,, $$\nwhere $\\hat{\\Sigma} = \\tfrac1m \\sum_{i=1}^m z_i z_i^\\top$ is the sample covariance\nmatrix of $S$. 
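\nAs a quick numerical sanity check of the deviation bound used above (a sketch only, assuming \\texttt{numpy} is available; the sample sizes are illustrative), one can compare the empirical deviation frequency of $\\langle u, \\hat{\\Sigma} u \\rangle = \\|L_\\mathcal{S} u\\|^2$ for a unit-norm $u$ against $2 e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nm, eps, n_trials = 64, 0.5, 100_000\n\n# ||Z_S u||^2 ~ chi^2_m for any fixed unit-norm u, so sample it directly\nq = rng.chisquare(m, size=n_trials) / m  # = <u, Sigma_hat u> = ||L_S u||^2\nempirical = np.mean(np.abs(q - 1.0) >= eps)\nbound = 2 * np.exp(m / 12 * (2 * eps**3 - 3 * eps**2))\nprint(empirical, bound)  # the empirical frequency must stay below the bound\n\\end{verbatim}\n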
Note that $\\mathbb{E}_{z\\sim \\mathcal{D}} \\frac{(u^\\top z)^2}{\\|u\\|^2} = 1$.\nRepeating the argument above in this notation,\n\\begin{align*}\n  \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\Bigl(\n    \\bigl\\lvert \\| L_\\mathcal{S} a \\|^2 - \\|a\\|^2 \\bigr \\rvert\n      \\geq \\varepsilon \\|a\\|^2\n  \\Bigr)\n    % = \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\biggl(\n    %   \\Bigl\\lvert \\bigl\\| L_\\mathcal{S} \\tfrac{a}{\\|a\\|} \\bigr\\|^2 - 1 \\Bigr \\rvert\n    %     \\geq \\varepsilon\n    % \\biggr)\n    % \\\\\n    &= \\Bigl[u = \\tfrac{a}{\\|a\\|}\\Bigr]\n    = \\mathbb{P}_{S \\sim \\mathcal{D}^m} \\bigl(\n        \\lvert \\|Z_\\mathcal{S} u\\|^2 - m \\rvert \\geq \\varepsilon m\n      \\bigr)\n    \\\\\n    &= \\mathbb{P}_{x \\sim \\chi^2_m} \\bigl(\n        \\lvert x - m \\rvert \\geq \\varepsilon m\n      \\bigr)\n      \\leq 2 e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,.\n\\end{align*}\nTherefore, for a finite collection $A \\subset \\real^d$ the union bound again gives\n\\begin{align*}\n  \\mathbb{P}_\\mathcal{S}\\Bigl(\n    \\exists{a, b\\in A} \\colon\n    \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} (a - b) \\|^2}{\\|a - b\\|^2} - 1 \\Bigr\\rvert\n      \\geq \\varepsilon\n  \\Bigr)\n    &= \\mathbb{P}_\\mathcal{S}\\Bigl(\n      \\cup_{v\\in (A - A)} \\Bigl\\{\n        \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} v \\|^2}{\\|v\\|^2} - 1 \\Bigr\\rvert\n          \\geq \\varepsilon\n      \\Bigr\\}\n    \\Bigr)\n    \\\\\n    &\\leq \\sum_{v\\in (A - A)}\n      \\mathbb{P}_\\mathcal{S}\\Bigl(\n        \\Bigl\\lvert \\tfrac{\\| L_\\mathcal{S} v \\|^2}{\\|v\\|^2} - 1 \\Bigr\\rvert\n          \\geq \\varepsilon\n      \\Bigr)\n    \\\\\n    &\\leq n (n - 1) e^{\\tfrac{m}{12} (2 \\varepsilon^3 - 3 \\varepsilon^2)}\n    \\,.\n\\end{align*}\n\n% section applying_the_bound_to_get_a_j_l_result (end)\n\n\n\\section*{Appendices} % (fold)\n\\label{sec:appendices}\n\n\\subsection*{Appendix A: the $\\log(1+\\varepsilon)$ bound} % (fold)\n\\label{sub:appendix_a_the_log_bound}\n\nConsider the Taylor series expansion of $x\\mapsto\\log(1 + x)$ around $0$:\n$$\n    \\log(1 + x)\n        = \\sum_{n\\geq 1} (-1)^{n+1} \\frac{x^n}{n}\n    \\,. $$\nObserve that for any $k\\geq 1$ we have\n$$\n    \\sum_{n=1}^k (-1)^{n+1} \\tfrac{x^n}{n}\n        = \\sum_{n=1}^k (-1)^{n+1} \\int_0^x t^{n-1} dt\n        % = \\int_0^x \\sum_{n=1}^k (-1)^{n+1} t^{n-1} dt\n        % = \\int_0^x \\sum_{n=1}^k (-t)^{n-1} dt\n        % = \\int_0^x \\sum_{n=0}^{k-1} (-t)^n dt\n        = \\int_0^x \\frac{1 - (-1)^k t^k}{1 + t} dt\n    \\,. $$\nTherefore for $x\\geq 0$ we have\n\\begin{align*}\n    \\sum_{n=1}^k (-1)^{n+1} \\tfrac{x^n}{n}\n        &\n        % = \\int_0^x \\frac{1 - (-1)^k t^k}{1 + t} dt\n        \\leq \\int_0^x \\frac1{1 + t} dt\n        = \\log(1+x)\n        \\,, \\text{ for}~k~\\text{even}\n        \\,; \\\\\n    \\sum_{n=1}^k (-1)^{n+1} \\tfrac{x^n}{n}\n        &\n        % = \\int_0^x \\frac{1 - (-1)^k t^k}{1 + t} dt\n        \\geq \\int_0^x \\frac1{1 + t} dt\n        = \\log(1+x)\n        \\,, \\text{ for}~k~\\text{odd}\n        \\,.\n\\end{align*}\nThe Taylor series expansion for $x\\mapsto \\log(1 - x)$ around $0$ follows from the\none above, and readily yields an upper bound for all $k\\geq 1$ and $x \\in (0, 1)$\n$$\n    \\log(1 - x)\n        = \\sum_{n\\geq 1} (-1)^{n+1} \\frac{(-x)^n}{n}\n        = \\sum_{n\\geq 1} \\frac{- x^n}{n}\n        \\leq \\sum_{n=1}^k \\frac{- x^n}{n}\n    \\,. 
$$\n\n% subsection* appendix_a_the_log_bound (end)\n\n% section* appendices (end)\n\n\\end{document}\n", "meta": {"hexsha": "1abd4a1ac01f79fcda891085535036d76794442c", "size": 18311, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scribbles/on-johnson-lindenstrauss.tex", "max_stars_repo_name": "ivannz/general-scribbles", "max_stars_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-07T20:41:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-28T12:47:40.000Z", "max_issues_repo_path": "scribbles/on-johnson-lindenstrauss.tex", "max_issues_repo_name": "ivannz/general-scribbles", "max_issues_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scribbles/on-johnson-lindenstrauss.tex", "max_forks_repo_name": "ivannz/general-scribbles", "max_forks_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6769547325, "max_line_length": 105, "alphanum_fraction": 0.5732073617, "num_tokens": 7880, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8418256512199033, "lm_q1q2_score": 0.5688759753242179}}
{"text": "\\documentclass[onecolumn]{article}\n\\usepackage{amsmath}\n\\usepackage{lscape}\n% wide page for side by side figures, tables, etc\n\\usepackage{changepage}\n\\newlength{\\offsetpage}\n\\setlength{\\offsetpage}{2.5cm}\n\\newenvironment{widepage}{\\begin{adjustwidth}{-\\offsetpage}{-\\offsetpage}%\n    \\addtolength{\\textwidth}{2\\offsetpage}}%\n{\\end{adjustwidth}}\n\\begin{document}\n\\title{05 - Example 03 - Davies model}\n\\author{}\n\\date{}\n\\maketitle\nExample, 5 components (4 + solvent), 3 reactions: \\\\\n\\section{Variables}\n5 (components + solvent) compositions $n_0$, $n_1$, ..., $n_4$. \\\\\n5 (components + solvent) activity coefficients\n$\\gamma_0$, $\\gamma_1$, ..., $\\gamma_4$. \\\\\n3 reaction extents $\\xi_1$, $\\xi_2$, ..., $\\xi_{3}$. \\\\\n1 Ionic strength $I$. \\\\\nTot. $2(5) + 3 + 1 = 14$ \\\\\n\\\\\n$x =[$\n\\begin{tabular}{cccccccccccccc}\n$n_0$ & $m_1$ & $m_2$ & $m_3$ & $m_4$ & $\\xi_1$ & $\\xi_2$ & $\\xi_3$ &\n$\\gamma_0$ & $\\gamma_1$ & $\\gamma_2$ & $\\gamma_3$ & $\\gamma_4$ & $I$\n$]^T$\n\\end{tabular}\n\\section{Objective function}\n5 mole balances. \\\\\n5 activity coefficient expressions. \\\\\n3 equilibrium expresisons. \\\\\n1 Ionic strength expression. \\\\\nTot. $2(5) + 3 + 1 = 14$\\\\\n\\\\\n\\[\nf(x) = 0 = \\left(\n\\begin{tabular}{l}\n$-n_0 + n_{0,0} + \\nu_{01}\\xi_1 + \\nu_{02}\\xi_2 + \\nu_{03}\\xi_3  $\\\\\n$-m_1 n_0 M_0 + n_{1,0} + \\nu_{11}\\xi_1 + \\nu_{12}\\xi_2 + \\nu_{13}\\xi_3  $\\\\\n$-m_2 n_0 M_0 + n_{2,0} + \\nu_{21}\\xi_1 + \\nu_{22}\\xi_2 + \\nu_{23}\\xi_3  $\\\\\n$-m_3 n_0 M_0 + n_{3,0} + \\nu_{31}\\xi_1 + \\nu_{32}\\xi_2 + \\nu_{33}\\xi_3  $\\\\\n$-m_4 n_0 M_0 + n_{4,0} + \\nu_{41}\\xi_1 + \\nu_{42}\\xi_2 + \\nu_{43}\\xi_3  $\\\\\n$-K_1 + (\\frac{1}{m^{\\circ}})^{\\sum_j{\\nu_{j1}}}\\times\n\\gamma_0^{\\nu_{01}}\\gamma_1^{\\nu_{11}}\\gamma_2^{\\nu_{21}}\n\\gamma_3^{\\nu_{31}}\\gamma_4^{\\nu_{41}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{01}}m_1^{\\nu_{11}}m_2^{\\nu_{21}}\nm_3^{\\nu_{31}}m_4^{\\nu_{41}}$\\\\\n$-K_1 + (\\frac{1}{m^{\\circ}})^{\\sum_j{\\nu_{j2}}}\\times\n\\gamma_0^{\\nu_{02}}\\gamma_1^{\\nu_{12}}\\gamma_2^{\\nu_{22}}\n\\gamma_3^{\\nu_{32}}\\gamma_4^{\\nu_{42}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{02}}m_1^{\\nu_{12}}m_2^{\\nu_{22}}\nm_3^{\\nu_{32}}m_4^{\\nu_{42}}$\\\\\n$-K_1 + (\\frac{1}{m^{\\circ}})^{\\sum_j{\\nu_{j3}}}\\times\n\\gamma_0^{\\nu_{03}}\\gamma_1^{\\nu_{13}}\\gamma_2^{\\nu_{23}}\n\\gamma_3^{\\nu_{33}}\\gamma_4^{\\nu_{43}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{03}}m_1^{\\nu_{13}}m_2^{\\nu_{23}}\nm_3^{\\nu_{33}}m_4^{\\nu_{43}}$\\\\\n$-\\gamma_0 + e^{-1.0\\times M_0 \\sum_{j\\neq0}{m_j}}$ \\\\\n$-\\gamma_1 +10^{- 0.510z_1^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_1)^2)b_1 I}$ \\\\\n$-\\gamma_2 +10^{- 0.510z_2^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_2)^2)b_2 I}$ \\\\\n$-\\gamma_3 +10^{- 0.510z_3^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_3)^2)b_3 I}$ \\\\\n$-\\gamma_4 +10^{- 0.510z_4^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_4)^2)b_4 I}$ \\\\\n$-I + \\frac{1}{2}(m_0 z_0^2 +  m_1 z_1^2 + m_2 z_2^2 + m_3 z_3^2 + m_4 z_4^2)$\n\\end{tabular}\n\\right)\n\\]\nFor convenience, define the reaction quotient:\\\\\n\\begin{equation}\n\\label{eq:reaction_quotient}\nQ_j \\equiv 
\\left(\\frac{1}{m^{\\circ}}\\right)^{\\sum_i{\\nu_{ij}}}\\times\n\\gamma_0^{\\nu_{0j}}\\gamma_1^{\\nu_{1j}}\\gamma_2^{\\nu_{2j}}\n\\gamma_3^{\\nu_{3j}}\\gamma_4^{\\nu_{4j}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{0j}}m_1^{\\nu_{1j}}m_2^{\\nu_{2j}}\nm_3^{\\nu_{3j}}m_4^{\\nu_{4j}}\n\\end{equation}\nNon-electrolyte solvent: $z_0 = 0$ \\\\\nReference: $m^{\\circ} = 1\\frac{mol}{kg_{solvent}}$\n\\section{Jacobian}\n\\[\nJ(x) = \\left(\n\\begin{tabular}{llll}\n$\\frac{\\partial{f_1}}{\\partial{x_1}}$ &\n$\\frac{\\partial{f_1}}{\\partial{x_2}}$ & ... &\n$\\frac{\\partial{f_1}}{\\partial{x_{14}}}$\\\\\n$\\frac{\\partial{f_2}}{\\partial{x_1}}$ &\n$\\frac{\\partial{f_2}}{\\partial{x_2}}$ & ... &\n$\\frac{\\partial{f_2}}{\\partial{x_{14}}}$\\\\\n$...$ & $...$ & $...$ & $...$\\\\\n$\\frac{\\partial{f_{14}}}{\\partial{x_1}}$ &\n$\\frac{\\partial{f_{14}}}{\\partial{x_2}}$ & ... &\n$\\frac{\\partial{f_{14}}}{\\partial{x_{14}}}$\\\\\n\\end{tabular}\n\\right)\n\\]\\\\\n\\[\nJ_{1,1}(x) = \\frac{\\partial{f_{1}}}{\\partial{x_1}} =  -1\n\\]\n\\[\nJ_{1,2}(x) = \\frac{\\partial{f_{1}}}{\\partial{x_2}} =  0\n\\]\n\\[\nJ_{1,3}(x) = \\frac{\\partial{f_{1}}}{\\partial{x_3}} =  0\n\\]\n(...)\n\\[\nJ_{1,6}(x) = \\frac{\\partial{f_{1}}}{\\partial{x_6}} =  \\nu_{01}\n\\]\n(...)\n\\[\nJ_{2,1}(x) =  -m_1 M_0;\nJ_{2,2}(x) =  -n_0 M_0;\nJ_{2,3}(x) =  0;\nJ_{2,6}(x) = \\nu_{11}\n\\]\n(...)\n\\[\nJ_{3,1}(x) =  -m_2 M_0;\nJ_{3,2}(x) =  0;\nJ_{3,3}(x) =  -n_0 M_0;\nJ_{3,6}(x) = \\nu_{21}\n\\]\n(...)\n\\[\n\\begin{aligned}\nJ_{6,1}(x) = & 0 \\\\\nJ_{6,2}(x) = &\n\\left(\\frac{1}{m^{\\circ}}\\right)^{\\sum_i{\\nu_{i1}}}\\times\n\\gamma_0^{\\nu_{01}}\\gamma_1^{\\nu_{11}}\\gamma_2^{\\nu_{21}}\n\\gamma_3^{\\nu_{31}}\\gamma_4^{\\nu_{41}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{01}}\n\\nu_{11}m_1^{\\nu_{11}-1}m_2^{\\nu_{21}}\nm_3^{\\nu_{31}}m_4^{\\nu_{41}}\\\\\n= & \\frac{\\nu_{11}}{m_{1}} \\times Q_1\\\\\nJ_{6,3}(x) =& \\frac{\\nu_{21}}{m_2} \\times Q_1\\\\\n\\text{(...)}\\\\\nJ_{6,6}(x) =& 0\\\\\nJ_{6,7}(x) =& 0\\\\\nJ_{6,8}(x) =& 0\\\\\nJ_{6,9}(x) =&\n\\left(\\frac{1}{m^{\\circ}}\\right)^{\\sum_i{\\nu_{i1}}}\\times\n\\nu_{01}\\gamma_0^{\\nu_{01}-1}\\gamma_1^{\\nu_{11}}\\gamma_2^{\\nu_{21}}\n\\gamma_3^{\\nu_{31}}\\gamma_4^{\\nu_{41}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{01}}m_1^{\\nu_{11}}m_2^{\\nu_{21}}\nm_3^{\\nu_{31}}m_4^{\\nu_{41}}\\\\\n= & \\frac{\\nu_{01}}{\\gamma_{0}} \\times Q_1\\\\\nJ_{6,10}(x) =&\n\\left(\\frac{1}{m^{\\circ}}\\right)^{\\sum_i{\\nu_{i1}}}\\times\n\\gamma_0^{\\nu_{01}}\\nu_{11}\\gamma_1^{\\nu_{11}-1}\\gamma_2^{\\nu_{21}}\n\\gamma_3^{\\nu_{31}}\\gamma_4^{\\nu_{41}}\\times\n\\left(\\frac{1}{M_0}\\right)^{\\nu_{01}}m_1^{\\nu_{11}}m_2^{\\nu_{21}}\nm_3^{\\nu_{31}}m_4^{\\nu_{41}}\\\\\n= & \\frac{\\nu_{11}}{\\gamma_{1}} \\times Q_1\\\\\n\\text{(...)}\\\\\nJ_{6,14}(x) =&  0\\\\\n\\text{(...)}\\\\\n\\end{aligned}\n\\]\n\\[\n\\begin{aligned}\nJ_{9,1}(x) =&  0\\\\\nJ_{9,2}(x) =& -1.0 M_0 e^{-1.0\\times M_0 \\sum_{j\\neq0}{m_j}}\\\\\nJ_{9,3}(x) =& -1.0 M_0 e^{-1.0\\times M_0 \\sum_{j\\neq0}{m_j}}\\\\\n\\text{(...)}\\\\\nJ_{9,5}(x) =& -1.0 M_0 e^{-1.0\\times M_0 \\sum_{j\\neq0}{m_j}}\\\\\nJ_{9,6}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{9,9}(x) =& -1\\\\\nJ_{9,10}(x) =& 0\\\\\nJ_{9,11}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{9,14}(x) =& 0\\\\\nJ_{10,1}(x) =& 0\\\\\nJ_{10,2}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{10,9}(x) =& 0\\\\\nJ_{10,10}(x) =& -1\\\\\nJ_{10,11}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{10,14}(x) =&\nln(10)\\left[-0.510 z_1^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_1)^2)b_1\\right] 
\\times 10^{- 0.510z_1^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_1)^2)b_1 I}\\\\\n=& ln(10)\\left[-0.510 z_1^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_1)^2)b_1\\right] \\times (\\gamma_1+f_{10}(x)) \\\\\n\\text{(...)}\\\\\nJ_{11,1}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{11,11}(x) =& -1\\\\\nJ_{11,12}(x) =& 0\\\\\n\\text{(...)}\\\\\nJ_{11,14}(x) =&\nln(10)\\left[-0.510 z_2^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_2)^2)b_2\\right] \\times 10^{- 0.510z_2^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_2)^2)b_2 I}\\\\\n=& ln(10)\\left[-0.510 z_2^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_2)^2)b_2\\right] \\times (\\gamma_2+f_{11}(x)) \\\\\n\\text{(...)}\\\\\n\\end{aligned}\n\\]\nUsing the reaction quotients $Q_i$ for convenience as defined above (eq.\n\\ref{eq:reaction_quotient}), the Jacobian matrix results as follows,\nconsidering non-electrolyte solvent (charge $z_0=0$):\n\\begin{landscape}\n\\[\nJ(x) = \\left(\n\\begin{tabular}{llllllllllllll}\n$-1$ & 0 & 0 & 0 & 0 &\n$\\nu_{01}$ & $\\nu_{02}$ & $\\nu_{03}$ &\n0 & 0 & 0 & 0 & 0 &\n0\\\\\n$-m_1 M_0$ & $-n_0 M_0$ & 0 & 0 & 0 &\n$\\nu_{11}$ & $\\nu_{12}$ & $\\nu_{13}$ &\n0 & 0 & 0 & 0 & 0 &\n0\\\\\n$-m_2 M_0$ & 0 & $-n_0 M_0$  & 0 & 0 &\n$\\nu_{21}$ & $\\nu_{22}$ & $\\nu_{23}$ &\n0 & 0 & 0 & 0 & 0 &\n0\\\\\n$-m_3 M_0$ & 0 & 0 & $-n_0 M_0$ & 0 &\n$\\nu_{31}$ & $\\nu_{32}$ & $\\nu_{33}$ &\n0 & 0 & 0 & 0 & 0 &\n0\\\\\n$-m_4 M_0$ & 0 & 0 & 0 & $-n_0 M_0$ &\n$\\nu_{41}$ & $\\nu_{42}$ & $\\nu_{43}$ &\n0 & 0 & 0 & 0 & 0 &\n0\\\\\n0 &\n$\\frac{\\nu_{11}}{m_1}Q_1$ & $\\frac{\\nu_{21}}{m_2}Q_1$ &\n$\\frac{\\nu_{31}}{m_3}Q_1$ & $\\frac{\\nu_{41}}{m_4}Q_1$ &\n0 & 0 & 0 &\n$\\frac{\\nu_{01}}{\\gamma_0}Q_1$ & $\\frac{\\nu_{11}}{\\gamma_1}Q_1$ &\n$\\frac{\\nu_{21}}{\\gamma_2}Q_1$ & $\\frac{\\nu_{31}}{\\gamma_3}Q_1$ &\n$\\frac{\\nu_{41}}{\\gamma_4}Q_1$ &\n0\\\\\n0 &\n$\\frac{\\nu_{12}}{m_1}Q_2$ & $\\frac{\\nu_{22}}{m_2}Q_2$ &\n$\\frac{\\nu_{32}}{m_3}Q_2$ & $\\frac{\\nu_{42}}{m_4}Q_2$ &\n0 & 0 & 0 &\n$\\frac{\\nu_{02}}{\\gamma_0}Q_2$ & $\\frac{\\nu_{12}}{\\gamma_1}Q_2$ &\n$\\frac{\\nu_{22}}{\\gamma_2}Q_2$ & $\\frac{\\nu_{32}}{\\gamma_3}Q_2$ &\n$\\frac{\\nu_{42}}{\\gamma_4}Q_2$ &\n0\\\\\n0 &\n$\\frac{\\nu_{13}}{m_1}Q_3$ & $\\frac{\\nu_{23}}{m_2}Q_3$ &\n$\\frac{\\nu_{33}}{m_3}Q_3$ & $\\frac{\\nu_{43}}{m_4}Q_3$ &\n0 & 0 & 0 &\n$\\frac{\\nu_{03}}{\\gamma_0}Q_3$ & $\\frac{\\nu_{13}}{\\gamma_1}Q_3$ &\n$\\frac{\\nu_{23}}{\\gamma_2}Q_3$ & $\\frac{\\nu_{33}}{\\gamma_3}Q_3$ &\n$\\frac{\\nu_{43}}{\\gamma_4}Q_3$ &\n0\\\\\n0 & $J_{9,2}(x)$ & $J_{9,3}(x)$ & $J_{9,4}(x)$ & $J_{9,5}(x)$ &\n0 & 0 & 0 &\n-1 & 0 & 0 & 0 & 0 &\n0\\\\\n0 & 0 & 0 & 0 & 0 &\n0 & 0 & 0 &\n0 & -1 & 0 & 0 & 0 &\n$J_{10,14}(x)$\\\\\n0 & 0 & 0 & 0 & 0 &\n0 & 0 & 0 &\n0 & 0 & -1 & 0 & 0 &\n$J_{11,14}(x)$\\\\\n0 & 0 & 0 & 0 & 0 &\n0 & 0 & 0 &\n0 & 0 & 0 & -1 & 0 &\n$J_{12,14}(x)$\\\\\n0 & 0 & 0 & 0 & 0 &\n0 & 0 & 0 &\n0 & 0 & 0 & 0 & -1 &\n$J_{13,14}(x)$\\\\\n0 &\n$\\frac{1}{2}z_1^2$ & $\\frac{1}{2}z_2^2$ &\n$\\frac{1}{2}z_3^2$ & $\\frac{1}{2}z_4^2$ &\n0 & 0 & 0 &\n0 & 0 & 0 & 0 & 0 &\n-1\\\\\n\\end{tabular}\n\\right)\n\\]\n\\[\n\\begin{aligned}\nJ_{9,2}(x) =& -1.0 M_0 e^{-1.0\\times M_0 \\sum_{j\\neq0}{m_j}}\\\\\nJ_{9,3}(x) =& J_{9,4}(x) = J_{9,5}(x) = J_{9,2}(x)\\\\\nJ_{10,14}(x) =& ln(10)\\left[\n-0.510 z_1^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_1)^2)b_1 \\right] \\times 10^{- 0.510z_1^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_1)^2)b_1 
I}\\\\\nJ_{11,14}(x) =& ln(10)\\left[\n-0.510 z_2^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_2)^2)b_2 \\right]\n\\times 10^{- 0.510z_2^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_2)^2)b_2 I}\\\\\nJ_{12,14}(x) =& ln(10)\\left[\n-0.510 z_3^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_3)^2)b_3 \\right]\n\\times 10^{- 0.510z_3^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_3)^2)b_3 I}\\\\\nJ_{13,14}(x) =& ln(10)\\left[\n-0.510 z_4^2\\times\\left(\\frac{1}{2\\sqrt{I}(1+\\sqrt{I})^2} -0.3\\right)\n+ (1-sign(z_4)^2)b_4 \\right]\n\\times 10^{- 0.510z_4^2 \\times\n\\left(\\frac{\\sqrt{I}}{1+\\sqrt{I}}-0.3I\\right) + (1-sign(z_4)^2)b_4 I}\\\\\n\\end{aligned}\n\\]\n\\end{landscape}\nFor calculation, submatrices of the Jacobian matrix are identified:\n\\begin{itemize}\n\\item $J_{1,1}(x)$ to $J_{n+1,n+1}(x)$ combines a diagonal of mole-balance\nderivatives with respect to the molalities with a first column of derivatives\nwith respect to the solvent amount $n_0$ (cf. the first five rows of $J(x)$\nabove):\n\\[\nJ_{1,1}(x) \\text{ to } J_{n+1,n+1}(x) =\n\\left(\n\\begin{tabular}{ccccc}\n-1 & 0 & 0 & 0 & 0\\\\\n-$m_1 M_0$ & -$n_0 M_0$ & 0 & 0 & 0\\\\\n-$m_2 M_0$ & 0 & -$n_0 M_0$ & 0 & 0\\\\\n-$m_3 M_0$ & 0 & 0 & -$n_0 M_0$ & 0\\\\\n-$m_4 M_0$ & 0 & 0 & 0 & -$n_0 M_0$\\\\\n\\end{tabular}\n\\right)\n\\]\n\\item $J_{n+2+nr,n+2+nr}(x)$ to $J_{n+2+nr+n+1,n+2+nr+n+1}(x)$\nis the negative of an identity matrix, size $(n+1) \\times (n+1)$\n\\[\nJ_{n+2+nr,n+2+nr}(x) \\text{ to } J_{n+2+nr+n+1,n+2+nr+n+1}(x) =\n-I_{n+1} = -1\\times \\left(\n\\begin{tabular}{ccccc}\n1 & 0 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 1\\\\\n\\end{tabular}\n\\right)\n\\]\n\\item $J_{1,n+2}(x)$ to $J_{n+1,n+2+nr}(x)$ is the stoichiometric coefficients\nmatrix\n\\[\nJ_{1,n+2}(x) \\text{ to } J_{n+1,n+2+nr}(x) = \\nu_{ij} =\\left(\n\\begin{tabular}{ccc}\n$\\nu_{01}$ & $\\nu_{02}$ & $\\nu_{03}$\\\\\n$\\nu_{11}$ & $\\nu_{12}$ & $\\nu_{13}$\\\\\n$\\nu_{21}$ & $\\nu_{22}$ & $\\nu_{23}$\\\\\n$\\nu_{31}$ & $\\nu_{32}$ & $\\nu_{33}$\\\\\n$\\nu_{41}$ & $\\nu_{42}$ & $\\nu_{43}$\\\\\n\\end{tabular}\n\\right)\n\\]\n\\item $J_{n+2,1}(x)$ to $J_{n+2+nr,n+1}(x)$ is expressed as a product of the\ndiagonalized quotients matrix, the transposed coefficients matrix,\nand a diagonalized inverse molality matrix, with no variation resulting from\nsolvent:\n\\[\n\\begin{aligned}\nJ_{n+2,1}(x) & \\text{ to } J_{n+2+nr,n+1}(x) \\\\\n= & diag([Q_1 Q_2 Q_3])\\times \\nu_{ij}^T \\times\ndiag([0 \\frac{1}{m_1} \\frac{1}{m_2} \\frac{1}{m_3} \\frac{1}{m_4}]) \\\\\n= & \\left(\n\\begin{tabular}{ccc}\n$Q_1$ & 0 & 0\\\\\n0 & $Q_2$ & 0\\\\\n0 & 0 & $Q_3$\\\\\n\\end{tabular}\n\\right) \\times \\left(\n\\begin{tabular}{ccccc}\n$\\nu_{01}$ & $\\nu_{11}$ & $\\nu_{21}$ & $\\nu_{31}$ & $\\nu_{41}$\\\\\n$\\nu_{02}$ & $\\nu_{12}$ & $\\nu_{22}$ & $\\nu_{32}$ & $\\nu_{42}$\\\\\n$\\nu_{03}$ & $\\nu_{13}$ & $\\nu_{23}$ & $\\nu_{33}$ & $\\nu_{43}$\\\\\n\\end{tabular}\n\\right) \\times \\left(\n\\begin{tabular}{ccccc}\n0 & 0 & 0 & 0 & 0\\\\\n0 & $\\frac{1}{m_1}$ & 0 & 0 & 0\\\\\n0 & 0 & $\\frac{1}{m_2}$ & 0 & 0\\\\\n0 & 0 & 0 & $\\frac{1}{m_3}$ & 0\\\\\n0 & 0 & 0 & 0 & $\\frac{1}{m_4}$\\\\\n\\end{tabular}\n\\right)\\\\\n= & \\left(\n\\begin{tabular}{ccccc}\n0 &\n$\\frac{\\nu_{11}}{m_1}Q_1$ & $\\frac{\\nu_{21}}{m_2}Q_1$ &\n$\\frac{\\nu_{31}}{m_3}Q_1$ & $\\frac{\\nu_{41}}{m_4}Q_1$\\\\\n0 
&\n$\\frac{\\nu_{12}}{m_1}Q_2$ & $\\frac{\\nu_{22}}{m_2}Q_2$ &\n$\\frac{\\nu_{32}}{m_3}Q_2$ & $\\frac{\\nu_{42}}{m_4}Q_2$\\\\\n0 &\n$\\frac{\\nu_{13}}{m_1}Q_3$ & $\\frac{\\nu_{23}}{m_2}Q_3$ &\n$\\frac{\\nu_{33}}{m_3}Q_3$ & $\\frac{\\nu_{43}}{m_4}Q_3$\\\\\n\\end{tabular}\n\\right)\n\\end{aligned}\n\\]\n\\item $J_{n+2,n+2+nr}(x)$ to $J_{n+2+nr,n+2+nr+n+1}(x)$ is analogous for\nactivity coefficient partial variations:\n\\[\n\\begin{aligned}\nJ_{n+2,n+2+nr}(x) & \\text{ to } J_{n+2+nr,n+2+nr+n+1}(x) \\\\\n= & diag([Q_1 Q_2 Q_3])\\times \\nu_{ij}^T \\times\ndiag([\\frac{1}{\\gamma_0} \\frac{1}{\\gamma_1} \\frac{1}{m_2}\n\\frac{1}{m_3} \\frac{1}{m_4}]) \\\\\n= & \\left(\n\\begin{tabular}{ccc}\n$Q_1$ & 0 & 0\\\\\n0 & $Q_2$ & 0\\\\\n0 & 0 & $Q_3$\\\\\n\\end{tabular}\n\\right) \\times  \\\\\n& \\left(\n\\begin{tabular}{ccccc}\n$\\nu_{01}$ & $\\nu_{11}$ & $\\nu_{21}$ & $\\nu_{31}$ & $\\nu_{41}$\\\\\n$\\nu_{02}$ & $\\nu_{12}$ & $\\nu_{22}$ & $\\nu_{32}$ & $\\nu_{42}$\\\\\n$\\nu_{03}$ & $\\nu_{13}$ & $\\nu_{23}$ & $\\nu_{33}$ & $\\nu_{43}$\\\\\n\\end{tabular}\n\\right) \\times\n\\left(\n\\begin{tabular}{ccccc}\n$\\frac{1}{\\gamma_0}$ & 0 & 0 & 0 & 0\\\\\n0 & $\\frac{1}{\\gamma_1}$ & 0 & 0 & 0\\\\\n0 & 0 & $\\frac{1}{\\gamma_2}$ & 0 & 0\\\\\n0 & 0 & 0 & $\\frac{1}{\\gamma_3}$ & 0\\\\\n0 & 0 & 0 & 0 & $\\frac{1}{\\gamma_4}$\\\\\n\\end{tabular}\n\\right)\\\\\n= & \\left(\n\\begin{tabular}{ccccc}\n$\\frac{\\nu_{01}}{\\gamma_0}Q_1$ & $\\frac{\\nu_{11}}{\\gamma_1}Q_1$ &\n$\\frac{\\nu_{21}}{\\gamma_2}Q_1$ & $\\frac{\\nu_{31}}{\\gamma_3}Q_1$ &\n$\\frac{\\nu_{41}}{\\gamma_4}Q_1$\\\\\n$\\frac{\\nu_{02}}{\\gamma_0}Q_2$ & $\\frac{\\nu_{12}}{\\gamma_1}Q_2$ &\n$\\frac{\\nu_{22}}{\\gamma_2}Q_2$ & $\\frac{\\nu_{32}}{\\gamma_3}Q_2$ &\n$\\frac{\\nu_{42}}{\\gamma_4}Q_2$\\\\\n$\\frac{\\nu_{03}}{\\gamma_0}Q_3$ & $\\frac{\\nu_{13}}{\\gamma_1}Q_3$ &\n$\\frac{\\nu_{23}}{\\gamma_2}Q_3$ & $\\frac{\\nu_{33}}{\\gamma_3}Q_3$ &\n$\\frac{\\nu_{43}}{\\gamma_4}Q_3$\\\\\n\\end{tabular}\n\\right)\n\\end{aligned}\n\\]\n\\end{itemize}\n\\section{Solution}\nUse Newton-Rhapson method with objective function $f(x)=0$, Jacobian $J(x)$,\nwith LR=PDA factorization for optimal jacobian condition and line search\n(batcracking) to remain on valid steps fulfilling all $m_i>0$\n\\end{document}\n", "meta": {"hexsha": "c885899b66dbbf6eaff1d54a31c916aea3842090", "size": 14678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/example_davies.tex", "max_stars_repo_name": "santiago-salas-v/literature-implementations-py", "max_stars_repo_head_hexsha": "675155d77b59beae057aca377505da327dd9b963", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/example_davies.tex", "max_issues_repo_name": "santiago-salas-v/literature-implementations-py", "max_issues_repo_head_hexsha": "675155d77b59beae057aca377505da327dd9b963", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-07-06T02:11:43.000Z", "max_issues_repo_issues_event_max_datetime": "2016-08-14T19:21:33.000Z", "max_forks_repo_path": "docs/example_davies.tex", "max_forks_repo_name": "santiago-salas-v/literature-implementations-py", "max_forks_repo_head_hexsha": "675155d77b59beae057aca377505da327dd9b963", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8394793926, 
"max_line_length": 82, "alphanum_fraction": 0.5414225371, "num_tokens": 7613, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256551882382, "lm_q2_score": 0.6757645944891558, "lm_q1q2_score": 0.5688759725088477}}
{"text": "\n\n    \\filetitle{gamma}{Create function proportional to log of Chi-Squared distribution}{logdist/chisquare}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nF = logdist.chisquare(Df)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{Df} {[} integer {]} - Degrees of freedom of Chi-squared\n  distribution.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{F} {[} function\\_handle {]} - Function handle returning a\n  value proportional to the log of the gamma density.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nSee \\href{logdist/Contents}{help on the logdisk package} for details on\nusing the function handle \\texttt{F}.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "eaa5327a1deb815218b37db2887610b14b08745e", "size": 845, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/logdist/chisquare.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/logdist/chisquare.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/logdist/chisquare.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 22.8378378378, "max_line_length": 105, "alphanum_fraction": 0.7609467456, "num_tokens": 246, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256393148981, "lm_q2_score": 0.6757645944891559, "lm_q1q2_score": 0.5688759617822066}}
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage{amsmath, amssymb, amsthm, mathtools}\n\\usepackage{lmodern, microtype}\n\\usepackage{enumitem}\n\n% theorems\n\\usepackage{amsthm}\n\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}\n\\newtheorem{exercise}{Exercise}\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{corollary}{Corollary}\n\n\\newtheoremstyle{myremark}%                \n{}%                                    \n{}%              \n{}%          \n{}%            \n{\\itshape}%       \n{.}%                          \n{ }%     \n{\\thmname{#1}\\thmnumber{ #2}\\thmnote{ (#3)}}%  \n\n\\theoremstyle{myremark}\n\\newtheorem*{example}{Example}\n\\newtheorem*{remark}{Remark}\n\n\\numberwithin{equation}{section}\n\\renewcommand\\qedsymbol{$\\blacksquare$}\n\n\n% Github logo \n\\usepackage{fontawesome5}\n\n\n% hyperlinks \n\\usepackage{hyperref}\n\\hypersetup{\n\tcolorlinks=true,\n\tlinkcolor = {magenta}\n}\n\n\n% commands \n\\renewcommand{\\vec}[1]{\\boldsymbol{#1}}\n\\newcommand{\\mat}[1]{\\boldsymbol{#1}}\n\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\R}{\\mathbb{R}}\n\n\\DeclarePairedDelimiterX{\\innerp}[2]{\\langle}{\\rangle}{#1, #2}\n\\DeclarePairedDelimiterX{\\norm}[1]{\\lVert}{\\rVert}{#1}\n\n\n\\begin{document}\n\t\n\\begin{center}\n\t\\Large \n\tChapter 3 Notes for  \\\\\n\t\\textit{Mathematics for Machine Learning} \\\\\n\tby Deisenroth, Faisal, Ong\n\t\n\t\n\t\\vspace{1ex}\t\n\t{\t\n\t\t\\normalsize alstonlo \\href{https://github.com/alstonlo}{\\faGithub}\n\t}\n\\end{center}\n\n\\paragraph{Disclaimer} The following highlights some \\textit{key} ideas and examples from the text. It is by no means comprehensive!   \n\n\\paragraph{Notation} Let $V$ be an $n$-dimensional vector space over $\\R$. \n\n\\begin{definition}\nA norm on $V$ is a map $\\norm{\\cdot}\\colon V \\to \\R$ that is \n\\begin{itemize}\n\t\\item Absolutely homogeneous: \n\t$\\forall \\vec{x}\\in V, \\, \\forall \\lambda \\in \\R, \\, \\norm{\\lambda \\vec{x}} = |\\lambda|\\norm{\\vec{x}}$\n\t\n\t\\item Triangle inequality: \n\t$\\forall \\vec{x}, \\vec{y} \\in V, \\, \\norm{\\vec{x} + \\vec{y}} \\leq \\norm{\\vec{x}} + \\norm{\\vec{y}}$\n\t\n\t\\item Positive definite: \n\t$\\forall \\vec{x}\\in V, \\, \\norm{\\vec{x}} \\geq 0$ and $\\norm{\\vec{x}} = 0 \\iff \\vec{x} = \\vec{0}$\n\\end{itemize} \n\\end{definition}\n\n\\begin{definition}\nAn inner product on $V$ is a map $\\innerp{\\cdot}{\\cdot}\\colon V \\times V \\to \\R$ that is\n\\begin{itemize}\n\t\\item Bilinear, i.e.\\@ linear in its first and second slot \n\t\n\t\\item Symmetric: \n\t$\\forall \\vec{x}, \\vec{y} \\in V, \\, \\innerp{\\vec{x}}{\\vec{y}} = \\innerp{\\vec{y}}{\\vec{x}}$\n\t\n\t\\item Positive definite:\n\t$\\forall \\vec{x}\\in V, \\, \\innerp{\\vec{x}}{\\vec{x}} \\geq 0$ and $\\innerp{\\vec{x}}{\\vec{x}} = 0 \\iff \\vec{x} = \\vec{0}$\n\\end{itemize}\n\\end{definition}\n\n\\begin{theorem}\n\tGiven an ordered basis $B$ of $V$, a map $\\innerp{\\cdot}{\\cdot}$ is an inner product if and only if there exists a symmetric, positive-definite matrix $\\mat{A} \\in \\R^{n \\times n}$ with $\\innerp{\\vec{x}}{\\vec{y}} = \\hat{\\vec{x}}^\\top \\mat{A} \\hat{\\vec{y}}$ for all $\\vec{x}, \\vec{y} \\in V$. Here, $\\hat{\\vec{x}}, \\hat{\\vec{y}}$ are the coordinate vectors of $\\vec{x}, \\vec{y}$ with respect to $B$. \n\\end{theorem}\n\n\\begin{definition}\n\tAn inner product space is a pair $(V, \\innerp{\\cdot}{\\cdot})$. 
\n\\end{definition}\n\n\\begin{remark}\n\tAny inner product $\\innerp{\\cdot}{\\cdot}$ induces a norm $n(\\vec{x}) = \\innerp{\\vec{x}}{\\vec{x}}^{1/2}$. Herein, we will always assume the norm is the induced one, unless stated otherwise.\n\\end{remark}\n\n\\begin{example}\n\tThe dot product (i.e.\\@ $\\vec{x} \\cdot \\vec{y} = \\vec{x}^\\top \\vec{y}$) is an inner product that induces the $\\ell_2$ (Euclidean) norm $\\norm{\\vec{x}}_2 = (\\vec{x}^\\top \\vec{x})^{1/2}$. The inner product space $(V, \\cdot)$ is called a Euclidean vector space in particular. \n\\end{example}\n\n\\begin{theorem}[Cauchy-Schwarz Inequality]\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, the induced norm satisfies $|\\innerp{\\vec{x}}{\\vec{y}}| \\leq \\norm{\\vec{x}}\\norm{\\vec{y}}$ for all $\\vec{x}, \\vec{y} \\in V$.\n\\end{theorem}\n\n\\begin{remark}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, by the Cauchy-Schwarz inequality, \n\t\\[\\forall \\vec{x}, \\vec{y} \\in V \\setminus \\{\\vec{0}\\}, \\; -1 \\leq \\frac{\\innerp{\\vec{x}}{\\vec{y}}}{\\norm{\\vec{x}}\\norm{\\vec{y}}} \\leq 1\\]\n\tThen the angle between $\\vec{x}$ and $\\vec{y}$ is defined to be the unique $\\alpha \\in [0, \\pi]$ such that $\\cos \\alpha$ has the value above.\n\\end{remark}\n\n\\begin{definition}\n\tA metric on $V$ is a map $d\\colon V \\times V \\to \\R$ that is\n\t\\begin{itemize}\n\t\t\\item Positive definite: $\\forall \\vec{x}, \\vec{y} \\in V, \\, d(\\vec{x}, \\vec{y}) \\geq 0$ and $d(\\vec{x}, \\vec{y}) = 0 \\iff \\vec{x} = \\vec{y}$\n\t\t\n\t\t\\item Symmetric:  $\\forall \\vec{x}, \\vec{y} \\in V, \\, d(\\vec{x}, \\vec{y}) = d(\\vec{y}, \\vec{x})$ \n\t\t\n\t\t\\item Triangle inequality: \n\t\t$\\forall \\vec{x}, \\vec{y}, \\vec{z} \\in V, \\, d(\\vec{x}, \\vec{z}) \\leq d(\\vec{x}, \\vec{y}) + d(\\vec{y}, \\vec{z})$\n\t\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, the function $d(\\vec{x}, \\vec{y}) = \\norm{\\vec{x} - \\vec{y}}$ is a metric on $V$. The value $d(\\vec{x}, \\vec{y})$ is called the distance between $\\vec{x}$ and $\\vec{y}$, and in particular, the Euclidean distance if $(V, \\innerp{\\cdot}{\\cdot})$ is Euclidean. \n\\end{remark}\n\n\\begin{definition}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, if $\\innerp{\\vec{x}}{\\vec{y}} = 0$, then we call $\\vec{x}$ and $\\vec{y}$ orthogonal, denoted $\\vec{x} \\perp \\vec{y}$. If $\\norm{\\vec{x}} = \\norm{\\vec{y}} = 1$ as well, then $\\vec{x}$ and $\\vec{y}$ are orthonormal. \n\\end{definition}\n\n\\begin{definition}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, an orthonormal basis (ONB) of $V$ is a basis whose vectors are mutually orthonormal.\n\\end{definition}\n\n\\begin{definition}\n\tAn orthogonal matrix is a matrix $\\mat{A} \\in \\R^{n \\times n}$ that satisfies $\\mat{A} \\mat{A}^\\top = \\mat{A}^\\top \\mat{A} = \\mat{I}$, or equivalently, $\\mat{A}^{-1} = \\mat{A}^\\top$.\n\\end{definition}\n\n\\begin{theorem}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$ and an ordered ONB $B$, if $\\Phi\\colon V \\to V$ is a linear transformation such that $A_\\Phi$ (with respect to $B$) is orthogonal, then $\\Phi$ preserves inner products, i.e.\\@ $\\innerp{\\Phi\\vec{x}}{\\Phi\\vec{y}} = \\innerp{\\vec{x}}{\\vec{y}}$. Hence, $\\Phi$ also preserves distances and angles.\n\\end{theorem}\n\n\n\\begin{definition}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$, if $U$ is a $k$-dimensional subspace of $V$, then its orthogonal complement is $U^\\perp = \\{\\vec{w} \\in V \\mid \\vec{w} \\perp \\vec{u} \\text{ for all } \\vec{u} \\in U\\}$.  
\n\\end{definition}\n\n\\begin{remark}\n\t$U^\\perp$ is an $(n - k)$-dimensional subspace of $V$ with $U \\oplus U^\\perp = V$.\n\\end{remark}\n\n\\begin{definition}\n\tGiven $(V, \\innerp{\\cdot}{\\cdot})$ and subspace $U \\subseteq V$, an orthogonal projection is the linear transformation $\\pi_U \\colon V \\to U$ that maps $\\pi_U \\vec{v} = \\vec{u}$, where $\\vec{u} \\in U$ satisfies $\\vec{v} = \\vec{u} + \\vec{w}$ for some $\\vec{w} \\in U^\\perp$.  \n\\end{definition}\n\n\\begin{remark}\n\tIf $\\pi_U$ is an orthogonal projection, then $(\\pi_U)^2 = \\pi_U$, and for all $\\vec{v} \\in V$ and $\\vec{u} \\in U$, we have $\\norm{\\vec{v} - \\pi_U\\vec{v}} \\leq \\norm{\\vec{v} - \\vec{u}}$.\n\\end{remark}\n\n\\end{document}\n\n", "meta": {"hexsha": "8ac5e1604c2172683b06d534de9efaee3e1413a5", "size": 7119, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math_for_ml/notes/chapter_3_notes.tex", "max_stars_repo_name": "alstonlo/math-solution-sets", "max_stars_repo_head_hexsha": "b9e8cb1785753ee15ad08ded53696cac14b234a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-30T22:36:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-30T22:36:31.000Z", "max_issues_repo_path": "math_for_ml/notes/chapter_3_notes.tex", "max_issues_repo_name": "alstonlo/math-solution-sets", "max_issues_repo_head_hexsha": "b9e8cb1785753ee15ad08ded53696cac14b234a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math_for_ml/notes/chapter_3_notes.tex", "max_forks_repo_name": "alstonlo/math-solution-sets", "max_forks_repo_head_hexsha": "b9e8cb1785753ee15ad08ded53696cac14b234a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6902173913, "max_line_length": 399, "alphanum_fraction": 0.6250877932, "num_tokens": 2572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8333246015211009, "lm_q1q2_score": 0.5688054852379197}}
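\nA worked example complementing the orthogonal projection definition above (a standard supplementary formula, not stated in the notes themselves): in a Euclidean vector space, if $U = \\mathrm{span}\\{\\vec{b}_1, \\ldots, \\vec{b}_k\\}$ with basis matrix $\\mat{B} = (\\vec{b}_1 \\mid \\cdots \\mid \\vec{b}_k) \\in \\R^{n \\times k}$, then the orthogonal projection admits the closed form\n\\[\\pi_U \\vec{v} = \\mat{B}(\\mat{B}^\\top \\mat{B})^{-1}\\mat{B}^\\top \\vec{v}.\\]\nFor instance, projecting $\\vec{v} = (1, 1)^\\top$ onto $U = \\mathrm{span}\\{(1, 0)^\\top\\}$ gives $\\pi_U \\vec{v} = (1, 0)^\\top$.\n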
{"text": "\\chapter{Bayesian Data Analysis}\n\n\\begin{multicols}{2}[\\subsubsection*{Contents of this chapter}]\n   \\printcontents{}{1}{\\setcounter{tocdepth}{2}}\n\\end{multicols}\n\n\n\\section{Markov Chain Monte Carlo Methods}\nMarkov Chain Monte Carlo (MCMC) methods allow for the approximate solution of Bayesian inference problems by drawing samples from the posterior distribution.\n\nPlain vanilla MCMC works as follows:\n\n\\begin{enumerate}\n\\item Make an initial guess $\\theta_0$ for the value of the latent variables. This is the starting point for the Markov Chain, which could be picked randomly or, for example, the maximum a posteriori estimate.\n\\item Calculate the probability of observing the data based on these parameters ($p(\\{\\mathrm{data}\\}|\\theta_0)$).\n\\item Suggest values $\\theta$ where the Markov Chain might jump next. (The way guesses are generated is where optimized sampling might comes in.)\n\\item Calculate $p(\\{\\mathrm{data}\\}|\\theta)$.\n\\item Calculate a probability of jumping to the new values, $p_{\\mathrm{jump}} = \\min\\left(\\frac{p(\\{\\mathrm{data}\\}|\\theta)}{p(\\{\\mathrm{data}\\}|\\theta_0)}, 1\\right)$.\n\\item With probability $p_\\mathrm{jump}$, let the Markov Chain jump $\\theta_0 \\rightarrow \\theta$.\n\\item Repeat steps 3-6\n\\end{enumerate}\n\nIt can be shown that upon convergence, the probability of the Markov Chain reaching particular values of the latent variables is given by the posterior distribution. In other words, MCMC is a trick to use likelihood (and priors) to sample the posterior distribution. Certain pathological posteriors can make it difficult or impossible for the Markov Chain to sample the full posterior, and there is also no completely certain way to say that convergence has been achieved. I personally sample using multiple independent chains, and assume convergence when the posteriors sampled by all chains looks identical. I also test how robust the results are to changes in the priors.\n", "meta": {"hexsha": "e2883b6b3405cda5dfe42e1c26aa5c2a13f4be9f", "size": 1924, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/chapters/bayesiandataanalysis.tex", "max_stars_repo_name": "jpbm/probabilism", "max_stars_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/chapters/bayesiandataanalysis.tex", "max_issues_repo_name": "jpbm/probabilism", "max_issues_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/chapters/bayesiandataanalysis.tex", "max_forks_repo_name": "jpbm/probabilism", "max_forks_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.1666666667, "max_line_length": 674, "alphanum_fraction": 0.7785862786, "num_tokens": 453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246035907933, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5688054812697149}}
{"text": "\n\\section{Lake at rest with an immersed bump}\n\nThis is a simple test if the method is well-balanced. \nThe initial condition is a lake at rest with water depth $0.5$. The topography is\n\\begin{equation}\nz(x)= \\left\\{ \\begin{array}{ll}\n 0.2-0.05\\left(x-10\\right)^2& ~\\textrm{if}\\quad 8 \\leq x \\leq 12\\,,\\\\\n 0& ~\\textrm{otherwise,}\\\\\n\\end{array} \\right.\n\\end{equation}\nThe analytical solution is obviously a lake at rest, that is, $w=0.5$ and $u=v=0$.\n\n\n\\subsection{Results}\n\nSetting up the boundaries to be reflective, we should see excellent agreement between the analytical and numerical solutions if the method is well-balanced. Some oscillations may occur, but if the method is well-balanced, they should be very close to the order of the machine precision. The following three figures show the stage, $x$-momentum, and $x$-velocity after running \\anuga{} for some time.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{stage_plot.png}\n\\end{center}\n\\caption{Stage results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xmom_plot.png}\n\\end{center}\n\\caption{Xmomentum results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xvel_plot.png}\n\\end{center}\n\\caption{Xvelocity results}\n\\end{figure}\n\n\n\\endinput\n", "meta": {"hexsha": "d016ac75b910f291db65339b123423427aedfa81", "size": 1297, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "validation_tests/analytical_exact/lake_at_rest_immersed_bump/results.tex", "max_stars_repo_name": "samcom12/anuga_core", "max_stars_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_stars_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_stars_count": 136, "max_stars_repo_stars_event_min_datetime": "2015-05-07T05:47:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T03:07:40.000Z", "max_issues_repo_path": "validation_tests/analytical_exact/lake_at_rest_immersed_bump/results.tex", "max_issues_repo_name": "samcom12/anuga_core", "max_issues_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_issues_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-05-03T09:27:54.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-20T04:22:48.000Z", "max_forks_repo_path": "validation_tests/analytical_exact/lake_at_rest_immersed_bump/results.tex", "max_forks_repo_name": "samcom12/anuga_core", "max_forks_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_forks_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_forks_count": 70, "max_forks_repo_forks_event_min_datetime": "2015-03-18T07:35:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T07:07:29.000Z", "avg_line_length": 29.4772727273, "max_line_length": 399, "alphanum_fraction": 0.7471087124, "num_tokens": 393, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84594244507642, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5687539159520701}}
{"text": "\\section{Generalizable Robustness by Confidence Calibration of Adversarial Training}\n\\label{sec:main}\n\nTo start, we briefly review adversarial training on $L_\\infty$ adversarial examples \\cite{MadryICLR2018}, which has become standard to train robust models, \\cf \\secref{sec:main-at}.\nHowever, robustness does not generalize to larger perturbations or unseen attacks. We hypothesize this to be the result of enforcing high-confidence predictions on adversarial examples.\n\\ConfTrain addresses this issue with minimal modifications, \\cf \\secref{subsec:main-ccat} and \\algref{alg:main-ccat}, by encouraging low-confidence predictions on adversarial examples. During testing, adversarial examples can be rejected by confidence thresholding.\n\n\\textbf{Notation:}\n%\nWe consider a classifier $f:\\R^d \\rightarrow \\R^K$ with $K$ classes where $f_k$ denotes the confidence for class $k$. While we use the cross-entropy loss $\\cL$ for training, our approach also generalizes to other losses. Given $x \\in \\R^d$ with class $y{\\,\\in\\,}\\{1,\\ldots,K\\}$, we let $f(x) :=\\argmax_k f_k(x)$ denote the predicted class for notational convenience. For $f(x) = y$, an adversarial example $\\tilde{x} = x+\\delta$ is defined as a ``small'' perturbation $\\delta$ such that $f(\\tilde{x}) \\neq y$, \\ie, the classifier changes its decision. The strength of the change $\\delta$ is measured by some $L_p$-norm, $p \\in \\{0,1,2,\\infty\\}$. Here, $p=\\infty$ is a popular choice as it leads to the smallest perturbation per pixel.\n\n\\subsection{Problems of Adversarial Training}\n\\label{sec:main-at}\n\nFollowing \\cite{MadryICLR2018}, adversarial training is given as the following min-max problem:\n\\vspace*{0px}\n\\begin{align}\n    \\min\\limits_w \\Exp\\left[\\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon}\\, \\cL(f(x + \\delta; w), y)\\right]\\label{eq:adversarial-training}\n\\end{align}\n\\vskip -4px\nwith $w$ being the classifier's parameters. During mini-batch training the inner maximization problem,\n\\vspace*{0px}\n\\begin{align}\n    \\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\cL(f(x + \\delta; w), y),\\label{eq:attack}\n\\end{align}\n\\vskip -4px\nis approximately solved. In addition to the $L_\\infty$-constraint, a box constraint is enforced for images, \\ie, $\\tilde{x}_i = (x + \\delta)_i \\in [0,1]$. Note that maximizing the cross-entropy loss is equivalent to finding the adversarial example with \\emph{minimal} confidence in the true class. For neural networks, this is generally a non-convex optimization problem. In \\citep{MadryICLR2018} the problem is tackled using projected gradient descent (\\PGD), initialized using a random $\\delta$ with $\\|\\delta\\|_\\infty \\leq \\epsilon$.\n\nIn contrast to adversarial training as proposed in \\citep{MadryICLR2018}, which computes adversarial examples for the \\emph{full} batch in each iteration, others compute adversarial examples only for \\emph{half} the examples of each batch \\citep{SzegedyICLR2014}. Instead of training \\emph{only} on adversarial examples, each batch is divided into $50\\%$ clean and $50\\%$ adversarial examples. 
Compared to \\eqnref{eq:adversarial-training}, $50\\%$/$50\\%$ adversarial training effectively minimizes\n\\vspace*{0px}\n\\begin{align}\n    \\underbrace{\\Exp\\Big[\\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\cL(f(x + \\delta; w), y)\\Big]}_{\\text{50\\% adversarial training}} + \\underbrace{\\Exp\\big[\\cL(f(x; w), y)\\big]}_{\\text{50\\% ``clean'' training}}.\\label{eq:50-50-adversarial-training}\n\\end{align}\n\\vskip -4px\nThis improves test accuracy on clean examples compared to $100\\%$ adversarial training but typically leads to worse robustness. Intuitively, by balancing both terms in \\eqnref{eq:50-50-adversarial-training}, the trade-off between accuracy and robustness can already be optimized to some extent \\citep{StutzCVPR2019}.\n\n\\begin{figure}[t]\n    \\vspace*{-8px}\n    \n    \\centering\n    \\begin{minipage}{0.49\\textwidth}\n        \\hspace*{-0.2cm}\n        \\includegraphics[width=1\\textwidth]{fig_mmnist_advtrain_0_interpolation}\n        \n        \\hspace*{-0.2cm}\n        \\includegraphics[width=1\\textwidth]{fig_mmnist_ours10_0_interpolation}\n    \\end{minipage}\n    \\vspace*{-10px}\n    \n    \\caption{\\textbf{Extrapolation of Uniform Predictions.} We plot the confidence in each class along an interpolation between two test examples $x$ and $x'$, ``2'' and ``7'', on MNIST \\cite{LecunIEEE1998}: $(1 - \\kappa)x + \\kappa x'$ where $\\kappa$ is the interpolation factor. \\ConfTrain quickly yields low-confidence, uniform predictions in between both examples, extrapolating the behavior enforced within the $\\epsilon$-ball during training. Regular adversarial training, in contrast, consistently produces high-confidence predictions, even on unreasonable inputs.}\n    \\label{fig:interpolation}\n    \\vspace*{-2px}\n\\end{figure}\n\n\\textbf{Problems:}\n%\nTrained on $L_\\infty$ adversarial examples, the robustness of adversarial training does not generalize to previously unseen adversarial examples, including larger perturbations or other $L_p$ adversarial examples. We hypothesize that this is because adversarial training explicitly enforces high-confidence predictions on $L_\\infty$ adversarial examples within the $\\epsilon$-ball seen during training (``seen'' in \\figref{fig:introduction}). However, this behavior is difficult to extrapolate to arbitrary regions in a meaningful way. Thus, it is not surprising that adversarial examples can often be found right beyond the $\\epsilon$-ball used during training, \\cf \\figref{fig:introduction} (top left). This can be described as ``overfitting'' to the $L_\\infty$ adversarial examples used during training. Also, larger $\\epsilon$-balls around training examples might include (clean) examples from other classes. Then, \\eqnref{eq:attack} will focus on these regions and reduce accuracy as considered in our theoretical toy example, see Proposition \\ref{prop:toy-example}, and related work \\citep{JacobsenICLR2019,JacobsenARXIV2019}.\n\nAs suggested in \\figref{fig:introduction}, both problems can be addressed by enforcing low-confidence predictions on adversarial examples in the $\\epsilon$-ball. In practice, we found that the low-confidence predictions on adversarial examples within the $\\epsilon$-ball are extrapolated beyond the $\\epsilon$-ball, \\ie, to larger perturbations, unseen attacks or distal adversarial examples. This allows rejecting adversarial examples based on their low confidence. 
We further enforce this behavior by explicitly encouraging a ``steep'' transition from high-confidence predictions (on clean examples) to low-confidence predictions (on adversarial examples). As a result, the (low-confidence) prediction is almost flat close to the boundary of the $\\epsilon$-ball. Additionally, there is no incentive to deviate from the uniform distribution outside of the $\\epsilon$-ball. For example, as illustrated in \\figref{fig:interpolation}, the confidence stays low in between examples from different classes and only increases if necessary, \\ie, close to the examples.\n\n\\begin{algorithm}[t]\n\t\\caption{\\textbf{Confidence-Calibrated Adversarial Training (\\ConfTrain).}  The only changes compared to standard adversarial training are the attack (line \\ref{line:attack}) and the probability distribution over the classes (lines \\ref{line:lambda} and \\ref{line:label}), which becomes more uniform as distance $\\norm{\\delta}_\\infty$ increases. During testing, low-confidence (adversarial) examples are rejected.}\n\t\\label{alg:main-ccat}\n\t\\begin{algorithmic}[1]\n\t\t\\WHILE{true}\n\t\t\\STATE choose random batch $(x_1,y_1),\\ldots,(x_B,y_B)$.\n\t\t\\FOR{$b = 1,\\ldots,\\nicefrac{B}{2}$} \n\t\t\\STATE $\\delta_b:=\\argmax\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\max\\limits_{k \\neq y_b} f_k(x_b{+}\\delta)$\\label{line:attack} (\\eqnref{eq:conf-attack})\n\t\t\\STATE $\\tilde{x}_b := x_b + \\delta_b$\n\t\t\\STATE $\\lambda(\\delta_b) := (1 - \\min(1, \\nicefrac{\\|\\delta_b\\|_\\infty}{\\epsilon}))^\\rho$ (\\eqnref{eq:lambda})\\label{line:lambda}\n\t\t\\STATE $\\tilde{y}_b\\,{:=}\\,\\lambda(\\delta_b)\\,\\text{one\\_hot}(y_b)\\,{+}\\,(1\\,{-}\\,\\lambda(\\delta_b)) \\frac{1}{K}$ (\\eqnref{eq:distribution})\\label{line:label}\n\t\t\\ENDFOR\n\t\t\\STATE update parameters using \\eqnref{eq:50-50-adversarial-training}:\\\\\\hspace*{1cm}$\\sum_{b = 1}^{\\nicefrac{B}{2}} \\mathcal{L}(f(\\tilde{x}_b), \\tilde{y}_b) + \\sum_{b = \\nicefrac{B}{2}+1}^{B} \\mathcal{L}(f(x_b), y_b)$\n\t\t\\ENDWHILE\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Confidence-Calibrated Adversarial Training}\n\\label{subsec:main-ccat}\n\n\\textbf{Confidence-calibrated adversarial training (\\ConfTrain)} addresses these problems with minimal modifications, as outlined in \\algref{alg:main-ccat}. During training, we train the network to predict a convex combination of the (correct) one-hot distribution on clean examples and the uniform distribution on adversarial examples as the target distribution within the cross-entropy loss. During testing, adversarial examples can be rejected by confidence thresholding: adversarial examples receive near-uniform confidence while test examples receive high confidence. By extrapolating the uniform distribution beyond the $\\epsilon$-ball used during training, previously unseen adversarial examples such as larger $L_\\infty$ perturbations can be rejected, as well. In the following, we first introduce an alternative objective for generating adversarial examples. Then, we specifically define the target distribution, which becomes more uniform with larger perturbations $\\|\\delta\\|_\\infty$. In \\algref{alg:main-ccat}, these changes correspond to lines \\ref{line:attack}, \\ref{line:lambda} and \\ref{line:label}, requiring only a few lines of code in practice.\n\nGiven an example $x$ with label $y$, our adaptive attack during training maximizes the confidence in any other label $k \\neq y$. 
This results in effective attacks against \\ConfTrain, as \\ConfTrain will reject low-confidence adversarial examples:\n\\vspace*{0px}\n\\begin{align}\n \\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\max\\limits_{k\\neq y}f_k(x + \\delta;w) \\label{eq:conf-attack}\n\\end{align}\n\\vskip -4px\nNote that \\eqnref{eq:attack}, in contrast, minimizes the confidence in the true label $y$. Similarly, \\citep{GoodfellowOPENREVIEW2019} uses targeted attacks in order to maximize confidence, whereas ours is untargeted and, thus, our objective is the maximal confidence over all other classes.\n\nThen, given an adversarial example from \\eqnref{eq:conf-attack} during training, \\ConfTrain uses the following combination of uniform and one-hot distribution as target for the cross-entropy loss:\n\\vspace*{-8px}\n\\begin{align}\n\t\\tilde{y} = \\lambda(\\delta) \\,\\,\\text{one\\_hot}(y) + \\big(1-\\lambda(\\delta)\\big) \\frac{1}{K},\\label{eq:distribution}\n\\end{align}\n\\vskip -4px\nwith $\\lambda(\\delta) \\in [0,1]$ and $\\text{one\\_hot}(y) \\in \\{0,1\\}^K$ denoting the one-hot vector corresponding to class $y$. Thus, we enforce a convex combination of the original label distribution and the uniform distribution which is controlled by the parameter~$\\lambda = \\lambda(\\delta)$, computed given the perturbation~$\\delta$. We choose $\\lambda$ to decrease with the distance $\\|\\delta\\|_\\infty$ of the adversarial example to the attacked example $x$ with the intention of enforcing uniform predictions when $\\|\\delta\\|_\\infty = \\epsilon$. Then, the network is encouraged to extrapolate this uniform distribution beyond the used $\\epsilon$-ball. Even if extrapolation does not work perfectly, the uniform distribution is much more meaningful to extrapolate to arbitrary regions, as well as to regions between classes, than the high-confidence predictions encouraged by standard adversarial training, as demonstrated in \\figref{fig:interpolation}. For controlling the trade-off $\\lambda$ between one-hot and uniform distribution, we consider the following ``power transition'':\n\\vspace*{0px}\n\\begin{align}\n    \\begin{split}\n        \\lambda(\\delta) :=& \\Big(1 - \\min\\Big(1, \\frac{\\|\\delta\\|_\\infty}{\\epsilon}\\Big)\\Big)^\\rho\n    \\end{split}\\label{eq:lambda}\n\\end{align}\n\\vskip -4px\nThis ensures that for $\\delta = 0$ we impose the original (one-hot) label. For growing $\\delta$, however, the influence of the original label decays proportionally to $\\|\\delta\\|_\\infty$. The speed of decay is controlled by the parameter $\\rho$.\nFor $\\rho=10$, \\figref{fig:introduction} (top right) shows the transition as approximated by the network.\nThe power transition ensures that for $\\|\\delta\\|_\\infty \\geq \\epsilon$, \\ie, perturbations larger than those encountered during training, a uniform distribution is enforced since $\\lambda$ is $0$. We train on $50\\%$ clean and $50\\%$ adversarial examples in each batch, as in \\eqnref{eq:50-50-adversarial-training}, such that the network has an incentive to predict correct labels.\n\nThe convex combination of uniform and one-hot distribution in \\eqnref{eq:distribution} resembles the label smoothing regularizer introduced in \\cite{SzegedyCVPR2016}. In concurrent work, label smoothing has also been used as a regularizer for adversarial training \\cite{ChengARXIV2020}. However, in our case, $\\lambda = \\lambda(\\delta)$ from \\eqnref{eq:lambda} is not a fixed hyper-parameter as in \\cite{SzegedyCVPR2016,ChengARXIV2020}. 
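To make this dependence concrete, the following minimal NumPy sketch constructs the targets of \\eqnref{eq:lambda} and \\eqnref{eq:distribution} for a batch (an illustration with our own hypothetical names, not a reference implementation):\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef ccat_targets(delta, y, K, epsilon, rho=10):\n    # lambda(delta): power transition, 1 at delta = 0, 0 at the epsilon-ball boundary\n    norms = np.abs(delta).max(axis=1)                  # L_inf norm per example\n    lam = (1 - np.minimum(1, norms / epsilon)) ** rho\n    # convex combination of one-hot and uniform distribution\n    one_hot = np.eye(K)[y]\n    return lam[:, None] * one_hot + (1 - lam[:, None]) / K\n\\end{lstlisting}\n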
Unlike such a fixed hyper-parameter, $\\lambda$ depends on the perturbation $\\delta$ and reaches zero for $\\|\\delta\\|_\\infty = \\epsilon$ to encourage low-confidence predictions beyond the $\\epsilon$-ball used during training. Thereby, $\\lambda$ explicitly models the transition from one-hot to uniform distribution.\n\n\\subsection{Confidence-Calibrated Adversarial Training Results in Accurate Models}\n\\label{sec:main-proposition}\n\nProposition \\ref{prop:toy-example} discusses a problem where standard adversarial training is unable to reconcile robustness and accuracy while \\ConfTrain is able to obtain \\emph{both} robustness and accuracy:\n\n\\begin{proposition}\\label{prop:toy-example}\n    We consider a classification problem with two points $x=0$ and $x=\\epsilon$ in $\\mR$ with deterministic labels, \\ie,\n    $p(y=2|x=0)=1$ and $p(y=1|x=\\epsilon)=1$, such that the problem is fully determined by the probability $p_0=p(x=0)$. \n    The Bayes error of this classification problem is zero. Let the predicted probability distribution over classes be $\\tilde{p}(y|x)=\\frac{e^{g_y(x)}}{e^{g_1(x)}+e^{g_2(x)}}$, where $g:\\mR \\rightarrow \\R^2$ is the classifier and we assume that the function $\\lambda:\\mR_+ \\rightarrow [0,1]$ used in \\ConfTrain is monotonically decreasing and $\\lambda(0)=1$. Then, the error of the Bayes optimal classifier (with cross-entropy loss) for\n    \\vspace*{-8px}\n    \\begin{itemize}\n    \\item adversarial training on $100\\%$ adversarial examples, \\cf \\eqnref{eq:adversarial-training}, \n    is $\\min\\{p_0,1-p_0\\}$.\n    \\vspace*{-3px}\n    \\item adversarial training on $50\\%$/$50\\%$ adversarial/clean examples per batch, \\cf \\eqnref{eq:50-50-adversarial-training}, \n    is $\\min\\{p_0,1 - p_0\\}$.\n    \\vspace*{-3px}\n    \\item \\ConfTrain on $50\\%$ clean and $50\\%$ adversarial examples, \\cf \\algref{alg:main-ccat}, \n    is \\emph{zero} \n    if $\\lambda(\\epsilon) < \\min\\left\\{\\nicefrac{p_0}{1-p_0},\\nicefrac{1-p_0}{p_0}\\right\\}$.\n    \\vspace*{-6px}\n    \\end{itemize}\n\\end{proposition}\n\nHere, $100\\%$ and $50\\%$/$50\\%$ standard adversarial training are unable to obtain \\emph{both} robustness and accuracy: The $\\epsilon$-ball used during training contains examples of different classes such that adversarial training enforces high-confidence predictions in contradicting classes. \\ConfTrain addresses this problem by encouraging low-confidence predictions on adversarial examples within the $\\epsilon$-ball. Thus, \\ConfTrain is able to improve accuracy while preserving robustness.\n\n\\section{Detection and Robustness Evaluation with Adaptive Attack}\n\\label{sec:evaluation-attack}\n\n\\ConfTrain allows rejecting (adversarial) inputs by confidence-thresholding before classifying them. As we will see, this ``reject option'' is also beneficial for standard adversarial training (\\AdvTrain). Thus, evaluation requires two stages: First, we fix the confidence threshold at $99\\%$ true positive rate (TPR), where correctly classified clean examples are positives, such that at most $1\\%$ of (correctly classified) clean examples are rejected.\nSecond, on the non-rejected examples, we evaluate accuracy and robustness using \\emph{confidence-thresholded} (robust) test error. 
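\n\nThe first-stage threshold only involves clean examples; as a minimal sketch (assuming NumPy arrays of confidences and a boolean mask of correctly classified examples; the names are ours):\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef threshold_at_99tpr(conf_clean, correct):\n    # positives are correctly classified clean examples; rejecting\n    # at most 1% of them puts the threshold at their 1%-quantile\n    return np.quantile(conf_clean[correct], 0.01)\n\\end{lstlisting}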
\n\n\\subsection{Adaptive Attack}\n\\label{subsec:evaluation-attack}\n\nAs \\ConfTrain encourages low confidence on adversarial examples, we use \\PGD to maximize the confidence of adversarial examples, \\cf \\eqnref{eq:conf-attack}, as an effective adaptive attack against \\ConfTrain. In order to effectively optimize our objective, we introduce a simple but crucial improvement: after each iteration, the computed update is only applied if the objective is improved; otherwise the learning rate is reduced.\nAdditionally, we use momentum \\citep{DongCVPR2018} and run the attack for exactly $T$ iterations, choosing the perturbation corresponding to the best objective across all iterations. In addition to random initialization, we found that $\\delta = 0$ is an effective initialization against \\ConfTrain. We applied the same principles to \\cite{IlyasICML2018}, \\ie, \\PGD with approximated gradients, \\eqnref{eq:conf-attack} as objective, momentum and backtracking; we also use \\eqnref{eq:conf-attack} as objective for the black-box attacks of \\cite{AndriushchenkoARXIV2019,NarodytskaCVPRWORK2017,KhouryARXIV2018}.\n\n\\subsection{Detection Evaluation}\n\nIn the first stage, we consider a detection setting: adversarial examples are \\emph{negatives} and correctly classified clean examples are \\emph{positives}. The confidence threshold $\\tau$ is chosen extremely conservatively by requiring a \\textbf{$\\boldsymbol{99\\%}$ true positive rate (TPR)}: at most $1\\%$ of correctly classified clean examples can be rejected. As a result, the confidence threshold is determined \\emph{only} by correctly classified clean examples, independent of adversarial examples. Incorrectly rejecting a significant fraction of correctly classified clean examples is unacceptable. This is also the reason why we do not report the area under the receiver operating characteristic (ROC) curve as done in related work \\citep{LeeNIPS2018,MaICLR2018}.\nInstead, we consider the \\textbf{false positive rate (FPR)}.\nThe supplementary material includes a detailed discussion.\n\n\\subsection{Robustness Evaluation}\n\nIn the second stage, after confidence-thresholding, we consider the widely used robust test error (\\RTE) \\cite{MadryICLR2018}. It quantifies the model's test error in the case where all test examples are allowed to be attacked, \\ie, modified within the chosen threat model, \\eg, for $L_p$:\n\\begin{align}\n    \\text{``Standard'' }\\RTE = \\frac{1}{N}\\sum_{n = 1}^N \\;\\max\\limits_{\\|\\delta\\|_p\\leq \\epsilon} \\Id_{f(x_n + \\delta)\\neq y_n}\n\\end{align}\nwhere $\\{(x_n,y_n)\\}_{n = 1}^N$ are test examples and labels. In practice, \\RTE is computed empirically using adversarial attacks. Unfortunately, standard \\RTE does not take into account the option of rejecting (adversarial) examples.\n\nWe propose a generalized definition adapted to our confidence-thresholded setting where the model can reject examples. For fixed confidence threshold $\\tau$ at $99\\%$TPR, the \\textbf{confidence-thresholded \\RTE} is defined as\n\\begin{align}\n    \\RTE(\\tau) = \\frac{\n    \t\\sum\\limits_{n=1}^N \\;\\max\\limits_{\\|\\delta\\|_p\\leq \\epsilon, c(x_n + \\delta)\\geq \\tau} \\Id_{f(x_n + \\delta)\\neq y_n}\n    }\n    {\n    \t\\sum\\limits_{n=1}^N \\;\\max\\limits_{\\|\\delta\\|_p\\leq \\epsilon} \\Id_{c(x_n + \\delta)\\geq \\tau}\n    }\\label{eq:conf-rte}\n\\end{align}\nwith $c(x) = \\max_k f_k(x)$ and $f(x)$ being the model's confidence and predicted class on example $x$, respectively. 
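In practice, given per-example worst-case adversarial predictions and confidences (see below), \\eqnref{eq:conf-rte} can be evaluated as in the following simplified sketch, where the same worst-case example instantiates numerator and denominator (hypothetical NumPy array names):\n\\begin{lstlisting}[language=Python]\ndef conf_thresholded_rte(pred_adv, conf_adv, y, tau):\n    # inputs are NumPy arrays; pred_adv and conf_adv hold the worst-case\n    # adversarial prediction and confidence per test example\n    passed = conf_adv >= tau             # examples passing confidence thresholding\n    errors = passed & (pred_adv != y)    # non-rejected errors\n    return errors.sum() / max(passed.sum(), 1)\n\\end{lstlisting}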
Essentially, this is the \\textbf{test error on test examples that can be modified within the chosen threat model \\emph{and} pass confidence thresholding}. For $\\tau=0$ (\\ie, all examples pass confidence thresholding) this reduces to the standard \\RTE, comparable to related work. We stress that our adaptive attack in \\eqnref{eq:conf-attack} directly maximizes the numerator of \\eqnref{eq:conf-rte} by maximizing the confidence of classes not equal $y$. A (clean) \\textbf{confidence-thresholded test error ($\\TE(\\tau)$)} is obtained similarly. In the following, if not stated otherwise, we report \\emph{confidence-thresholded} \\RTE and \\TE as default and omit the confidence threshold $\\tau$ for brevity.\n\n\\textbf{FPR and \\RTE:}\n%\nFPR quantifies how well an adversary can perturb (correctly classified) examples while not being rejected. The confidence-thresholded \\RTE is more conservative as it measures \\emph{any} non-rejected error (adversarial or not). As result, \\RTE implicitly includes FPR \\emph{and} \\TE. Therefore, we report only \\RTE and include FPRs for all our experiments in the supplementary material.\n\n\\begin{figure}[t]\n    \\vspace*{0px}\n    \\centering\n    \n    \\hspace*{-0.3cm}\n    \\begin{minipage}[t]{0.23\\textwidth}\n        \\vspace*{0px}\n        \n        \\centering\n        \\includegraphics[width=1.05\\textwidth]{fig_msvhn_roc}\n    \\end{minipage}\n    \\begin{minipage}[t]{0.25\\textwidth}\n        \\vspace*{0px}\n        \n        \\centering\t\t\t\n        \\includegraphics[width=1.05\\textwidth]{fig_msvhn_rte}\n    \\end{minipage}\\\\\n    \n    \\hspace*{-0.4cm}\n    \\begin{minipage}[t]{0.45\\textwidth}\n        \\fbox{\n            \\hspace*{1.6cm}\\includegraphics[width=0.575\\textwidth]{fig_msvhn_legend}\\hspace*{1.6cm}\n        }\n    \\end{minipage}\n    \\vskip -4px\n    \\caption{\\textbf{ROC and \\RTE Curves.} On SVHN, we show ROC curves when distinguishing \\emph{correctly classified} test examples from adversarial examples by confidence (left) and (confidence-thresholded) \\RTE against confidence threshold $\\tau$ (right) for worst-case adversarial examples across $L_\\infty$ attacks with $\\epsilon = 0.03$. The confidence threshold $\\tau$ is chosen exclusively on correctly classified clean examples to obtain $99\\%$TPR. For \\ConfTrain, this results in $\\tau \\approx 0.6$. Note that \\RTE subsumes both \\TE and FPR.}\n    \\label{fig:experiments-evaluation}\n    \\vspace*{-2px}\n\\end{figure}\n\n\\textbf{Per-Example Worst-Case Evaluation:}\n%\nInstead of reporting average or per-attack results, we use a per-example \\emph{worst-case} evaluation scheme: For each individual test example, all adversarial examples from all attacks (and restarts) are accumulated. 
Subsequently, \\emph{per test example}, only the adversarial example with the highest confidence is considered, resulting in a significantly stronger robustness evaluation compared to related work.", "meta": {"hexsha": "82277dbba61c66c667ca180d4c6eba3e85127223", "size": 22482, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/top_main.tex", "max_stars_repo_name": "davidstutz/icml2020-confidence-calibrated-adversarial-training", "max_stars_repo_head_hexsha": "a8d0476f1b1986ff13280623f009fdc1a518b487", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-07-03T14:13:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-15T03:03:24.000Z", "max_issues_repo_path": "paper/top_main.tex", "max_issues_repo_name": "davidstutz/icml2020-confidence-calibrated-adversarial-training", "max_issues_repo_head_hexsha": "a8d0476f1b1986ff13280623f009fdc1a518b487", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/top_main.tex", "max_forks_repo_name": "davidstutz/icml2020-confidence-calibrated-adversarial-training", "max_forks_repo_head_hexsha": "a8d0476f1b1986ff13280623f009fdc1a518b487", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 107.0571428571, "max_line_length": 1149, "alphanum_fraction": 0.7566497643, "num_tokens": 6104, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424295406087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5687539055068518}}
{"text": "\\hypertarget{sparse}{%\n\\section{Sparse}\\label{sparse}}\n\nThe Sparse class provides support for sparse matrices. An empty sparse\nmatrix can be initialized with a given size,\n\n\\begin{lstlisting}\nvar a = Sparse(nrows,ncols)\n\\end{lstlisting}\n\nAlternatively, a matrix can be created from an array of triplets,\n\n\\begin{lstlisting}\nvar a = Sparse([[row, col, value] ...])\n\\end{lstlisting}\n\nFor example,\n\n\\begin{lstlisting}\nvar a = Sparse([[0,0,2], [1,1,-2]])\n\\end{lstlisting}\n\ncreates the matrix\n\n\\begin{lstlisting}\n[ 2 0 ]\n[ 0 -2 ]\n\\end{lstlisting}\n\nOnce a sparse matrix is created, you can use all the regular arithmetic\noperators with matrix operands, e.g.\n\n\\begin{lstlisting}\na+b\na*b\n\\end{lstlisting}\n", "meta": {"hexsha": "b19b719d27c22960a49144051066fb86e9a2478e", "size": 697, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/src/Reference/sparse.tex", "max_stars_repo_name": "mattsep/morpho", "max_stars_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-09-18T14:44:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T11:41:50.000Z", "max_issues_repo_path": "manual/src/Reference/sparse.tex", "max_issues_repo_name": "mattsep/morpho", "max_issues_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 79, "max_issues_repo_issues_event_min_datetime": "2021-10-05T17:33:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:06:10.000Z", "max_forks_repo_path": "manual/src/Reference/sparse.tex", "max_forks_repo_name": "mattsep/morpho", "max_forks_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-10-05T16:56:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-31T19:55:27.000Z", "avg_line_length": 18.8378378378, "max_line_length": 71, "alphanum_fraction": 0.7274031564, "num_tokens": 202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5686839993894557}}
{"text": "Trust Region refers to a family of optimization methods that operate by assuming a quadratic model is accurate within a local \"trust region\". The trust region's size is adjusted based on the quadratic model's performance in previous iterations. A summary of Trust Region, as implemented in DDogleg, is found in Algorithm \\ref{alg:trust_region}. This implementation\\footnote{The more traditional variant described in \\cite{numopt2006,fletcher1987} were considered but found to converge slower in test problems.} is primarily based on the description found in \\cite{IMM2004}.\n\n\\begin{algorithm}{}\n\\caption{\\label{alg:trust_region}Trust Region}\n\\begin{algorithmic}[1]\n\t\\State $k \\gets 0$, $\\Delta_0 \\in (0,\\Delta_{max})$\n\t\\State \\quad $\\Delta_{max}$ is the maximum trust region size\n\t\\State \\quad $\\Delta_{0}$ is the initial trust region size. \\Comment{Section \\ref{section:init_region_size}}\n\t\\While{$k < k_{\\mbox{max}}$ and not $done$}\n\t\\State $p_k$ update by optimizing Eq. \\ref{eq:trust_region_subproblem} \\Comment{Sections \\ref{section:cauchy} and \\ref{section:dogleg} }\n\t\\State $\\delta_f \\gets f(x_k) - f(x_k + p_k)$ \\Comment{Actual reduction in score}\n\t\\State $\\delta_m \\gets m_k(0)-m_k(p_k) = -g^T_k p - \\frac{1}{2}p^T B_k p$ \\Comment{Predicted reduction in score}\n\t\\State $\\nu \\gets \\delta_f / \\delta_f$ \\Comment{Score reduction ratio}\n\t\\If{ $\\delta_f \\le 0$ or $\\nu <\\frac{1}{4}$} \\Comment{Score got worse or the model poor?}\n\t\t\\State $\\Delta_{k+1} \\gets \\frac{1}{2}\\Delta_k$\n\t\\Else\n\t\t\\If{$\\nu>\\frac{3}{4}$}\n\t\t\t\\Comment{The model is good. Increase the region size?}\n\t\t\t\\State $\\Delta_{k+1} \\gets \\mbox{min}(\\mbox{max}(3\\norm{p_k},\\Delta_k),\\Delta_{\\mbox{max}})$\n\t\t\\Else\n\t\t\t\\State $\\Delta_{k+1} \\gets \\Delta_k$\n\t\t\\EndIf\n\t\\EndIf\n\t\\If{$\\delta_f > 0$ and $\\nu > 0$} \\Comment{Is the solution acceptable?}\n\t\t\\State $x_{k+1} \\gets x_k + p_k$ \\Comment{Update the state}\n\t\t\\State $done$ $\\gets$ $\\mbox{F-Test}$ or $\\mbox{G-Test}$ \\Comment{Convergence testing}\n\t\\Else\n\t\t\\State $x_{k+1} \\gets x_k$\n\t\\EndIf\n\n\t\\State $k \\gets k + 1$\n\t\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\nAt every iteration the Trust Region subproblem is solved for, either exactly or approximately:\n\\begin{equation}\n\\begin{array}{lr}\n\\min\\limits_{p\\in \\R^n} m_k(p) = f_k + g^T_k p + \\frac{1}{2}p^T B_k p & s.t. \\norm{p} \\le \\Delta_k\n\\end{array}\n\\label{eq:trust_region_subproblem}\n\\end{equation}\nwhere $m(p) \\in \\R$ is a quadratic model approximating $f(x_k)$, $p \\in \\R^N$ is the step or change in state, $B \\in \\R^{N \\times N}$ is a symmetric matrix representing the Hessian or an approximation, and $\\Delta_k \\in \\R^+$ is the trust region size. The unconstrained solution to Eq. \\ref{eq:trust_region_subproblem} is easily found by setting the first derivative to zero:\n\\begin{equation}\np = -B^{-1}_k g_k\n\\label{eq:TR_unconstrained_solution}\n\\end{equation}\nAn exact solution to (\\ref{eq:trust_region_subproblem}) is expensive to compute and approximate methods are typically used instead. The Cauchy Point and Dogleg are approximate methods and included in the DDogleg library.\n\n\\subsubsection{Cauchy Point}\n\\label{section:cauchy}\n\nThe Cauchy Point is the solution which minimizes (\\ref{eq:trust_region_subproblem}) along the steepest descent direction. 
It is defined as $p^s_k = \\tau_k \\hat{p}^s_k$ and is relative to $x_{k-1}$, where $\\hat{p}^s_k$ is a unit vector, and $\\tau_k$ is a scalar.\n\\begin{equation}\n\\begin{array}{lr}\n\\hat{p}^s_k = \\min\\limits_{p\\in \\R^n} f_k + g_k^T p & s.t. \\norm{p} \\le 1\n\\end{array}\n\\end{equation}\nThe length $\\tau_k$ is found by minimizing (\\ref{eq:trust_region_subproblem}) along direction $\\hat{p}^s_k$\n\\begin{equation}\n\\begin{array}{lr}\n\\tau_k = \\min\\limits_{\\tau \\ge 0} m_k(\\tau \\hat{p}^s_k) & s.t. \\norm{\\tau \\hat{p}^s_k} \\le \\Delta_k\n\\end{array}\n\\end{equation}\n\nThe solution (see Chapter 4 of \\cite{numopt2006} for details and diagrams) is as follows:\n\\begin{equation}\np^s_k = -\\tau_k \\frac{\\Delta_k}{\\norm{g_k}}g_k\n\\label{eq:cauchy_p}\n\\end{equation}\n\\begin{equation}\n\\tau_k =\n\t\\begin{cases}\n\t\t\\quad 1 & g_k^T B_k g_k \\le 0 \\\\\n\t\t\\quad \\min\\left(1,\\norm{g_k}^3/(\\Delta_k g_k^T B_k g_k)\\right) & g_k^T B_k g_k > 0\n\t\\end{cases}\n\t\\label{eq:cauchy_tau}\n\\end{equation}\n\nThe formulas in (\\ref{eq:cauchy_p}) and (\\ref{eq:cauchy_tau}) can be improved upon to avoid numerical issues by removing powers of three and division by $\\Delta_k$:\n\\begin{eqnarray}\n\\hat{g}_k &=& \\frac{g_k}{\\norm{g_k}} \\\\\np^s_k &=& -\\bar{\\tau}_k \\hat{g}_k\n\\end{eqnarray}\n\\begin{equation}\n\\bar{\\tau}_k = \\begin{cases}\n\t\t\\quad \\Delta_k & \\hat{g}_k^T B_k \\hat{g}_k\\le 0 \\\\\n\t\t\\quad \\min\\left(\\Delta_k,\\norm{g_k}/(\\hat{g}_k^T B_k \\hat{g}_k)\\right) & \\hat{g}_k^T B_k \\hat{g}_k > 0\n\t\\end{cases}\n\\end{equation}\nThe predicted reduction in score is found using:\n\\begin{equation}\nm_k(0)-m_k(p_k) = \\bar{\\tau}_k \\left(\\norm{g_k} - \\frac{\\bar{\\tau}_k \\hat{g}_k^T B_k \\hat{g}_k}{2} \\right)\n\\end{equation}\n\n\n\\subsubsection{Dogleg}\n\\label{section:dogleg}\n\nThe Dogleg method considers second order terms to provide a more accurate solution to Eq. \\ref{eq:trust_region_subproblem}. The optimal solution, as a function of region size, is a curved trajectory. The Dogleg method approximates this curved trajectory using two line segments. The first segment starts at $x_{k-1}$ and ends at the unconstrained Cauchy point. The second heads towards $p^b$, the solution to (\\ref{eq:TR_unconstrained_solution}), which is the Gauss-Newton solution. As with the Cauchy point equations, these equations are not the traditional forms (see \\cite{numopt2006,IMM2004}) and have been reformulated to avoid powers of three.\n\\begin{eqnarray}\n\\hat{g}_k &=& \\frac{g_k}{\\norm{g_k}} \\\\\np^u_k &=& -\\frac{g_k}{\\hat{g}_k^T B_k \\hat{g}_k} \\\\\np^b_k &=& -B^{-1}_k g_k \\\\\np^{dog}_k &=&\n\\begin{cases}\n\t\\tau p^u_k & 0 \\le \\tau < 1 \\\\\n\tp^u_k + (\\tau -1)(p^b_k-p^u_k) & 1 \\le \\tau \\le 2\n\\end{cases}\n\\end{eqnarray}\nwhere $B_k$ is positive definite, and $p^{dog}_k$ is the point selected by the Dogleg method. The value of $\\tau$ is easily found by solving along each line segment. 
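For illustration, the full selection logic fits in a few lines; a minimal NumPy sketch (a Cholesky attempt stands in for the positive-definiteness test; the function name is ours, not DDogleg's API):\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n\ndef dogleg_step(g, B, delta):\n    g_hat = g / np.linalg.norm(g)\n    try:\n        np.linalg.cholesky(B)       # raises LinAlgError if B is not positive definite\n    except np.linalg.LinAlgError:\n        return -delta * g_hat       # follow the gradient to the region boundary\n    p_b = -np.linalg.solve(B, g)    # Gauss-Newton step\n    if np.linalg.norm(p_b) <= delta:\n        return p_b                  # Gauss-Newton solution inside the trust region\n    p_u = -g / (g_hat @ B @ g_hat)  # unconstrained Cauchy point\n    if np.linalg.norm(p_u) >= delta:\n        return delta * p_u / np.linalg.norm(p_u)\n    # otherwise intersect the segment p_u -> p_b with the region boundary\n    d = p_b - p_u\n    a, b, c = d @ d, 2.0 * (p_u @ d), p_u @ p_u - delta ** 2\n    s = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)\n    return p_u + s * d\n\\end{lstlisting}\n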
If $B_k$ is not positive definite, the steepest descent direction is used instead.\n\n\\begin{algorithm}{}\n\\caption{\\label{alg:dogleg_step}Selection of Dogleg Step}\n\\begin{algorithmic}[1]\n  \\If{ $B$ is positive definite}\n    \\If{$\\norm{p^b} < \\Delta$} \\Comment{Gauss-Newton solution inside the trust-region?}\n      \\State $p^{dog} \\gets p^b$\n    \\ElsIf{$\\norm{p^u} \\geq \\Delta$} \\Comment{Cauchy point outside the trust-region?}\n    \\State $p^{dog} \\gets \\Delta \\frac{p^u}{\\norm{p^u}}$\n    \\Else\n    \\State $p^{dog} \\gets $ intersection of $p^u \\rightarrow p^b$ and trust-region\n    \\EndIf\n  \\Else\n   \\State $p^{dog} \\gets -\\Delta\\frac{g}{\\norm{g}}$ \\Comment{Follow gradient to end of trust region}\n  \\EndIf\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Initial Region Size}\n\\label{section:init_region_size}\n\nSelection of the initial trust region size $\\Delta_0$ is important but typically not discussed in detail in the reference material \\cite{fletcher1987,numopt2006,IMM2004}. The initial region size is typically considered a tuning parameter that the user is supposed to select through trial and error. While the trust region size is dynamically adjusted at each iteration, its initial value can significantly influence the final convergence.\n\nHere is an example of a possible failure mode when the trust region's size is poorly selected. With the dogleg method, if $\\Delta_0$ is too small then a Cauchy step is selected repeatedly. The Cauchy point takes much smaller steps, increasing the chances of getting stuck in a local minimum.\n\nDDogleg provides two automatic methods for selecting the initial region size, neither of which is reliable on every problem: 1) \\emph{Unconstrained initial step} and 2) \\emph{Cauchy initial step}. With the unconstrained method, the selected algorithm (e.g. Dogleg or Cauchy) selects a step when given a trust region of MAX\\_VALUE. The step it selects is used and the trust region is then set to the length of that step. This works well on many problems but can be overly aggressive and take a very large step into a distant plateau. The Cauchy initial step method computes the length of a Cauchy step, then sets the region size to be 10x that. This estimate tends to be conservative; it will in general converge, but can do so slowly.\n\nIf the automatic methods fail to produce acceptable results then manual tuning will be necessary. One possible manual tuning procedure is to start with $\\Delta_0=1$, then try $\\Delta_0=100$, and if results improve try $\\Delta_0=10000$. If results do not improve, try $0.1$ or other fractions of one.\n\nRecommended procedure for selection of the initial trust region size:\n\\begin{enumargin}{0.2}\n\\item Turn on verbose output and examine the progress\n\\item Start with automatic selection using \\emph{unconstrained initial step}\n\\item If this fails then try \\emph{Cauchy initial step}\n\\item If performance is still poor follow the manual tuning procedure\n\\end{enumargin}\nFor instructions on how to switch between the methods described here, consult the JavaDoc of ConfigTrustRegion.\n\nA comparison of different initial conditions for different ``toy'' problems is shown in Table \\ref{results:initial_region}. In these scenarios, the Automatic Unconstrained method correctly selected the best initial conditions while all the other methods either tied unconstrained's performance or clearly made a poor choice. 
Unfortunately, these results don't extrapolate to all problems and there are situations where the unconstrained method results in failure. For that reason, the default method is the more conservative Automatic Cauchy.\n\n\\begin{table}[h]\n\\centering\n\\begin{threeparttable}\n\\caption{\\label{results:initial_region}Comparison of Initial Region Size}\n\\begin{tabular}{|l||c|c|c||c|c|c||c|c|c||c|c|c|}\n\\hline\nProblem        & \\multicolumn{3}{c||}{Automatic} & \\multicolumn{3}{c||}{Automatic} & \\multicolumn{3}{c||}{Manual} & \\multicolumn{3}{c|}{Manual}\\\\\n               & \\multicolumn{3}{c||}{Unconstrained} & \\multicolumn{3}{c||}{Cauchy} & \\multicolumn{3}{c||}{1} & \\multicolumn{3}{c|}{100} \\\\\n\\hline\n               & Fit     & G  & B  & Fit     & G  & B   & Fit     & G  & B  & Fit     & G  & B\\\\\n\\hline\nPowell          & 2.0e-17 & 17 & 17 & 2.9e-17 & 30 & 26  & 4.9e-18 & 23 & 19 & 2.0e-17 & 17 & 17 \\\\\nPowell Singular & 7.3e-11 & 11 & 11 & 2.3e-10 & 13 & 13  & 6.7e-11 & 11 & 11 & 7.3e-11 & 11 & 11 \\\\\nHelical Valley & 2.5e-26 & 11 & 8  & 5.1e-18 & 11 & 11  & 4.1e-33 & 9  & 8  & 2.7e-26 & 16 & 8\\\\\nB.S. Powell    & 4.2e-31 & 25 & 18 & 3.7e-23 & 73 & 58  & 2.2e-31 & 25 & 18 & 0       & 62 & 43 \\\\\nBundle 2D      & 3.7e-10 & 111& 36 & 7.0e-18 & 111& 36  & 9.0e-10 & 251& 93 & 3.3e-10 & 112 & 37 \\\\\nBundle 2D [1]  & 7.0e-19 & 4  & 4  & 7.0e-18 & 4  & 4   & 1.5e-15 & 5  & 5  & 7.0e-18 & 4 & 4 \\\\ \\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\small\n\\item \\emph{Fit} is the final fit score where zero is a perfect fit. \\emph{G} is the number of times the gradient was computed. \\emph{B} is the number of times the Hessian was computed. B is by far the most expensive step.\n\\item Unless specified otherwise, all methods use Dogleg with a Cholesky solver.\n\\item [1] Uses QR with column pivots instead of Cholesky and can handle the nearly singular initial state.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}", "meta": {"hexsha": "4291629d1307e2fed46f1880b62d394c7774c733", "size": 11271, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/techreports/trust_region.tex", "max_stars_repo_name": "Iman/ddogleg", "max_stars_repo_head_hexsha": "91b9a0b05b278a2b4a097a688a6df2a42f7d4dd1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-05T20:41:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-05T20:41:27.000Z", "max_issues_repo_path": "docs/techreports/trust_region.tex", "max_issues_repo_name": "Iman/ddogleg", "max_issues_repo_head_hexsha": "91b9a0b05b278a2b4a097a688a6df2a42f7d4dd1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/techreports/trust_region.tex", "max_forks_repo_name": "Iman/ddogleg", "max_forks_repo_head_hexsha": "91b9a0b05b278a2b4a097a688a6df2a42f7d4dd1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.6779661017, "max_line_length": 705, "alphanum_fraction": 0.7143997871, "num_tokens": 3708, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7879312056025699, "lm_q1q2_score": 0.5686839946735307}}
{"text": "% Created 2020-04-08 mi\u00e9 12:37\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\usepackage{epigraph}\n\\definecolor{inputclr}{rgb}{1, 0.49, 0.0}\n\\definecolor{outputclr}{rgb}{0., 0.3, 0.6}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Process automation laboratory - linearization}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Process automation laboratory - linearization},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.3.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Intro}\n\\label{sec:org2a31e83}\n\\begin{frame}[label={sec:org5961aaf}]{Why linear systems?}\n\\setlength\\epigraphwidth{.8\\textwidth}\n\\epigraph{Finally, we make some remarks on why linear systems are so important. The answer is simple: because we can solve them!}{\\textit{Richard Feynman}\\\\\\url{https://www.feynmanlectures.caltech.edu/I_25.html}}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgce40fa2}]{Linear first-order system}\n\\[ x + \\tau \\dot{x} = u \\qquad \\Leftrightarrow \\qquad \\dot{x} = \\underbrace{-\\frac{1}{\\tau}x + \\frac{1}{\\tau} u}_{f(x, u)} \\]\n\n\\begin{columns}\n\\begin{column}{0.4\\columnwidth}\nStep response \\[u(t) = \\begin{cases} u_0, & t \\ge 0,\\\\ 0, & \\text{otherwise} \\end{cases}\\]\n\\end{column}\n\n\\begin{column}{0.6\\columnwidth}\n\\begin{center}\n  \\begin{tikzpicture}\n    \\pgfmathsetmacro{\\tconst}{2}\n    \\pgfmathsetmacro{\\kgain}{1}\n    \\pgfmathsetmacro{\\uconst}{1}\n    \\pgfmathsetmacro{\\xmax}{\\uconst}\n    \\pgfmathsetmacro{\\ymax}{\\uconst/\\tconst}\n    \\begin{axis}[\n      %yshift=-5cm,\n      clip=false,\n      axis lines=middle,\n      width = 8cm,\n      height = 6cm,\n      xlabel = {$x$},\n      ylabel = {$\\dot{x}$},\n      xtick={0, \\xmax},\n      xticklabels={0, $u_0$},\n      ytick={0, \\ymax},\n      yticklabels={0, $\\frac{u_0}{\\tau}$},\n      %title={$\\dot{x} = f(x) = -\\frac{1}{\\tau} x + \\frac{1}{\\tau}u_0$},\n      ]\n      \\addplot[outputclr, thick, no marks, domain=-0.2:\\xmax+0.2, samples=10] {-1.0/\\tconst * x + 1.0/\\tconst * \\uconst} node[coordinate, pin=0:{$\\dot{x}=f(x,u=u_0) = -\\frac{1}{\\tau} x + \\frac{1}{\\tau}u_0$}, pos=0.3] {};\n    \\end{axis}\n  \\end{tikzpicture}\n\\end{center}\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}[label={sec:org3599a14}]{Another relevant first-order system: Logistic growth}\n\\[ \\dot{x} = \\underbrace{a\\big(1 - \\frac{x}{x_{max}}\\big)x}_{f(x)} \\]\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\pgfmathsetmacro{\\gconst}{2}\n    \\pgfmathsetmacro{\\xmax}{1}\n    \\pgfmathsetmacro{\\ymax}{\\gconst*\\xmax/4}\n    \\begin{axis}[\n      %yshift=-5cm,\n      clip=false,\n      axis lines=middle,\n      width = 8cm,\n      height = 6cm,\n      xlabel = {$x$},\n      ylabel = {$\\dot{x}$},\n      xtick={0, \\xmax},\n      xticklabels={0, $x_{max}$},\n      ytick={0, \\ymax},\n      yticklabels={0, $a\\frac{x_{max}}{4}$},\n      ymax=\\ymax+0.1,\n      xmax=\\xmax+0.1,\n      %title={$\\dot{x} = f(x) = -\\frac{1}{\\tau} x + \\frac{1}{\\tau}u_0$},\n      ]\n      \\addplot[outputclr, thick, 
no marks, domain=-0.05:\\xmax+0.05, samples=100] {\\gconst * (1- x/\\xmax)*x} node[coordinate, pin=0:{$\\dot{x}=f(x) = a\\big(1 - \\frac{x}{x_{max}}\\big)x$}, pos=0.7] {};\n    \\end{axis}\n  \\end{tikzpicture}\n\\end{center}\n{\\footnotesize See 3Blue1Brown \\textit{Exponential growth and epidemics} \\url{https://youtu.be/Kas0tIxDvrg}}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgf8836be}]{Do in groups}\n\\begin{center}\n  \\begin{tikzpicture}\n    \\pgfmathsetmacro{\\gconst}{2}\n    \\pgfmathsetmacro{\\xmax}{1}\n    \\pgfmathsetmacro{\\ymax}{\\gconst*\\xmax/4}\n    \\begin{axis}[\n      %yshift=-5cm,\n      clip=false,\n      axis lines=middle,\n      width = 8cm,\n      height = 6cm,\n      xlabel = {$x$},\n      ylabel = {$\\dot{x}$},\n      xtick={0, \\xmax},\n      xticklabels={0, $x_{max}$},\n      ytick={0, \\ymax},\n      yticklabels={0, $a\\frac{x_{max}}{4}$},\n      ymax=\\ymax+0.1,\n      xmax=\\xmax+0.1,\n      %title={$\\dot{x} = f(x) = -\\frac{1}{\\tau} x + \\frac{1}{\\tau}u_0$},\n      ]\n      \\addplot[outputclr, thick, no marks, domain=-0.05:\\xmax+0.05, samples=100] {\\gconst * (1- x/\\xmax)*x} node[coordinate, pin=0:{$\\dot{x}=f(x) = a\\big(1 - \\frac{x}{x_{max}}\\big)x$}, pos=0.7] {};\n    \\end{axis}\n  \\end{tikzpicture}\n\\end{center}\n\\begin{enumerate}\n\\item In breakout rooms: One of you shares this slide (found on canvas, link in the program for the session)\n\\item Sketch the solution \\(x(t)\\) using the \"bead-on-a-wire\" idea for the initial value problem \\(x(0) = 0.1x_{max}\\).\n\\end{enumerate}\n\\end{frame}\n\n\\section{Linearization}\n\\label{sec:org14863e2}\n\\begin{frame}[label={sec:org43156cb}]{The general idea}\nGiven a dynamical system described by a nonlinear differential equation\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm, minimum height=12mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input, node distance=20mm] (plant)  {System};\n    \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3, color=inputclr] {$u(t)$} (plant);\n    \\draw[->] (plant) -- node[above, near end, color=outputclr] {$x(t)$} (output);\n  \\end{tikzpicture}\n\\end{center}   \n\\[ \\dot{\\textcolor{outputclr}{x}} = f(\\textcolor{outputclr}{x}, \\textcolor{inputclr}{u})\\]\n\nFind a linear approximation to the differential equation about an \\alert{operating point} \\((\\textcolor{outputclr}{x_0}, \\, \\textcolor{inputclr}{u_0})\\)\n\\end{frame}\n\n\\begin{frame}[label={sec:org5f5417f}]{The general picture}\n\\begin{center}\n  \\begin{tikzpicture}\n    \\pgfmathsetmacro{\\xnoll}{1.5}\n    \\pgfmathsetmacro{\\xmax}{2}\n    \\pgfmathsetmacro{\\fnoll}{sqrt(2-\\xnoll)}\n    \\begin{axis}[\n      %yshift=-5cm,\n      clip=false,\n      axis lines = middle,\n      width = 12cm,\n      height = 8cm,\n      %xlabel = {$x$},\n      %ylabel = {$\\dot{x}$},\n      xtick={0, \\xnoll, \\xmax},\n      xticklabels={0, $x_0$, $x_{max}$},\n      ytick={0},\n      %title={$\\dot{x} = f(x) = \\sqrt{x_{max} -  x}$},\n      ]\n      \\addplot[outputclr, thick, no marks, domain=0:\\xmax, samples=100] {sqrt(\\xmax-x)} node[coordinate, pin=-120:{$\\dot{x}=f(x)= \\sqrt{x_{max} -  x}$}, pos=0.1] {};\n      \\addplot[green!70!black, thick, no marks, domain=0.8:\\xmax, samples=10] {sqrt(\\xmax-\\xnoll) - 0.5/sqrt(\\xmax-\\xnoll)*(x-\\xnoll)} node[coordinate, pin=0:{$\\dot{x}\\approx f(x_0) +
\\frac{d}{dx}f|_{x_0}(x-x_0)$}, pos=0.1] {};\n      \\node[coordinate, pin=180:{$f(x_0)$},] at (axis cs: 0.02, \\fnoll) {};\n      \\node at (axis cs: -0.3, 1.5) {$\\dot{x}$};\n      \\node at (axis cs: 1.9, -0.3) {$x$};\n      %\\node[coordinate, pin=-90:{$x_0$},] at (axis cs: \\xnoll, -0.2) {};\n    \\end{axis}\n\n  \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgcb3dd14}]{Linearizing the tank-valve nonlinear model}\n\\[ \\dot{p} = a_0(u_v - 5)|p_s - p|^{a_1} = f(p, u_v), \\quad \\text{with} \\; a_0=1.1\\; \\text{and}\\; a_1 = 0.47\\]\n\\begin{enumerate}\n\\item Given operating pressure \\(p_0\\). Choose operating point \\(u_0\\) which gives equilibrium \\(f(p_0, u_0) = 0\\).\n\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org56a6724}]{Linearizing the tank-valve nonlinear model}\n\\[ \\dot{p} = a_0(u_v - 5)|p_s - p|^{a_1} = f(p, u_v), \\quad \\text{with} \\; a_0=1.1\\; \\text{and}\\; a_1 = 0.47\\]\n\\begin{enumerate}\n\\item Given operating pressure \\(p_0\\). Choose operating point \\(u_0\\) which gives equilibrium \\(f(p_0, u_0) = 0\\).\n\\item Introduce deviation variables: \\(u_v = 5 + \\textcolor{inputclr}{u}\\) and \\(p = p_0 + \\textcolor{outputclr}{y}\\).\n\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orga9fd2fc}]{Linearizing the tank-valve nonlinear model}\n\\[ \\dot{p} = a_0(u_v - 5)|p_s - p|^{a_1} = f(p, u_v), \\quad \\text{with} \\; a_0=1.1\\; \\text{and}\\; a_1 = 0.47\\]\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\item Determine partial derivatives\n\\begin{align*}\n\\frac{\\partial f}{\\partial p} &= a_0(u_v-5)a_1|p_s - p|^{a_1-1}(-1)\\\\\n\\frac{\\partial f}{\\partial u_v} &= a_0|p_s - p|^{a_1}\n\\end{align*}\n\\item Evaluate partial derivatives at the operating point \\((p_0, u_0)\\).\n\\begin{align*}\n\\frac{\\partial f}{\\partial p}\\big|_{p_0, u_0} &= 0\\\\\n\\frac{\\partial f}{\\partial u_v}\\big|_{p_0, u_0} &=  a_0|p_s - p_0|^{a_1}\n\\end{align*}\n\\end{enumerate}\n\\end{frame}\n\n\n\n\n\\begin{frame}[label={sec:orgadabd43}]{Linearizing the tank-valve nonlinear model}\n\\[ \\dot{p} = a_0(u_v - 5)|p_s - p|^{a_1} = f(p, u_v), \\quad \\text{with} \\; a_0=1.1\\; \\text{and}\\; a_1 = 0.47\\]\n\\begin{enumerate}\n\\setcounter{enumi}{3}\n\\item Evaluate partial derivatives at the operating point \\((p_0, u_0)\\).\n\\begin{align*}\n\\frac{\\partial f}{\\partial p}\\big|_{p_0, u_0} &= 0\\\\\n\\frac{\\partial f}{\\partial u_v}\\big|_{p_0, u_0} &=  a_0|p_s - p_0|^{a_1}\n\\end{align*}\n\\item Form the linearized model\n\\begin{equation}\n\\begin{aligned} \\dot{p} = \\dot{\\textcolor{outputclr}{y}} &= f(p, u_v) \\approx f(p_0, u_0) + \\frac{\\partial f}{\\partial p}|_{p_0, u_0}(p-p_0) + \\frac{\\partial f}{\\partial u_v}|_{p_0, u_0}(u_v - u_0)\\\\\n &=  a_0|p_s - p_0|^{a_1} \\textcolor{inputclr}{u}.\n\\end{aligned}\n\\end{equation}\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgccfd0c1}]{Linearizing the tank-valve nonlinear model}\nWe arrive at the linear model\n\\begin{equation*}\n\\begin{aligned}\n \\dot{\\textcolor{outputclr}{y}} &= a_0|p_s - p_0|^{a_1} \\textcolor{inputclr}{u}, \\qquad \\text{which in the Laplace domain is}\\\\[4mm]\n \\textcolor{outputclr}{Y(s)} &= \\frac{a_0|p_s - p_0|^{a_1}}{s} \\textcolor{inputclr}{U(s)}    \n\\end{aligned}\n\\end{equation*}\n\\end{frame}\n\n\n\n\\begin{frame}[label={sec:orgdf6bf5d}]{Do in groups}\n\\[ \\dot{p} = a_0(u_v - 5)|p_s - p|^{a_1}\\; \\textcolor{red!90!black}{-\\, v} = f(p, u_v)\\]\n\n\\begin{enumerate}\n\\item Given operating pressure \\(p_0\\). 
Choose operating point \\(u_0\\) which gives equilibrium \\(f(p_0, u_0) = 0\\).\n\\item Introduce deviation variables: \\(u_v = u_0 + \\textcolor{inputclr}{u}\\) and \\(p = p_0 + \\textcolor{outputclr}{y}\\).\n\\item Determine partial derivatives.\n\\item Evaluate partial derivatives at the operating point.\n\\item Form the linearized model\n\\end{enumerate}\n\\end{frame}\n\\end{document}", "meta": {"hexsha": "ec516f3681ce44f56bf11cd44a9dbc5879faf7e1", "size": 10304, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modules/tank-pid/linearization.tex", "max_stars_repo_name": "kjartan-at-tec/mr2015", "max_stars_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "modules/tank-pid/linearization.tex", "max_issues_repo_name": "kjartan-at-tec/mr2015", "max_issues_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modules/tank-pid/linearization.tex", "max_forks_repo_name": "kjartan-at-tec/mr2015", "max_forks_repo_head_hexsha": "1134f3a99ef72e4a17d44edb4d288daad84f3e70", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.7435897436, "max_line_length": 227, "alphanum_fraction": 0.6360636646, "num_tokens": 3947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5686839922011276}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% UMB-CS110-2015S: Introduction to Computing\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/UMB-CS110-2015S\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def \\topDirectory {.}\n\\def \\texDirectory {\\topDirectory/src/main/tex}\n\n\\documentclass[12pt,letterpaper,twoside]{article}\n\\usepackage{\\texDirectory/template/style/directives}\n\\usepackage{\\texDirectory/template/style/assignment}\n\\input{\\texDirectory/template/config}\n\n\\begin{document}\n\n\\doc{title}{Solution to Quiz 2(b)}\n\\doc{date-pub}{Mar 05, 2015 at 01:00 PM}\n\\doc{date-due}{Mar 05, 2015 at 11:00 PM}\n\\doc{points}{4}\n\n\\prepare{header}\n\n\\section*{Question 1}\n\nWrite a program \\texttt{PrimeCounter.java} that takes a command-line argument \\texttt{N} and finds the number of primes less than or equal to \\texttt{N}. Use it to \\textbf{efficiently} print out the number of primes less than or equal to 10 million.\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=2}\n\\begin{lstlisting}\npublic class PrimeCounter {\n\tpublic static void main(String args[]) {\n\t\tint i; // first loop (number counter)\n\t\tint j; // second loop (prime checker)\n\t\tint[] primeList = new int[100000]; // array of primes\n\t\tint counter = 0; // counter for array of primes\n\t\tboolean definitelyNotPrime; // flag for prime checking\n\t\t// args[0] is a string. We have to convert it to an integer first.\n\t\tint maxNumber = Integer.parseInt(args[0]);\n\t\tSystem.out.println(\"Prime numbers less than \" + args[0] + \" are: \");\n\t\tfor (i = 2; i <= maxNumber; i++) {\n\t\t\t// check if a number is prime\n\t\t\t// a number is prime if is not divisible by any smaller prime number\n\t\t\t// note that we can't afford to check if any smaller number divides our number\n\t\t\tdefinitelyNotPrime = false;\n\t\t\tfor (j = 0; j < counter; j++) {\n\t\t\t\tif (i % primeList[j] == 0) {\n\t\t\t\t\tdefinitelyNotPrime = true;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!definitelyNotPrime) {\n\t\t\t\tprimeList[counter++] = i;\n\t\t\t\tSystem.out.printf(\"Prime %3d: %d\\n\", counter, i);\n\t\t\t}\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\n\\section*{Question 2}\n\nWrite a program \\texttt{MatrixDeterminant.java} that asks for elements of a three-by-three matrix and prints its determinant.\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=2}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class MatrixMultiply {\n\tpublic static void main(String[] args) {\n\t\t// Declaration of Variables\n\t\tint i; // counter\n\t\tint j; // counter\n\t\tint p; // counter\n\t\tint q; // counter\n\t\tint k;\n\t\tint row; // row of matrix\n\t\tint col; // column of matrix\n\t\tdouble matrix1[][] = new double[3][3];\n\t\tdouble matrix2[][] = new double[3][3];\n\t\tdouble multiplied[][] = new double[3][3];\n\t\tScanner input = new Scanner(System.in);\n\t\t// ask for matrices\n\t\tfor ( i = 0; i < 2; i++) {\n\t\t\tfor ( j = 0; j < 9; j++) {\n\t\t\t\trow = j / 3;\n\t\t\t\tcol = j % 3;\n\t\t\t\tif (i == 0) {\n\t\t\t\t\tSystem.out.print(\"Enter matrix1[\" + row + \"][\" + col + \"]: \");\n\t\t\t\t\tmatrix1[row][col] = input.nextInt();\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\tSystem.out.print(\"Enter matrix2[\" + row + \"][\" + col + \"]: \");\n\t\t\t\t\tmatrix2[row][col] = 
input.nextInt();\n\t\t\t\t}\n\t\t\t}\n\t\t\tSystem.out.println(\"Thank you. Matrix initialized as follows:\");\n\t\t\tfor ( p = 0; p < 3; p++) {\n\t\t\t\tfor ( q = 0; q < 3; q++) {\n\t\t\t\t\tSystem.out.printf(\"%.1f \",matrix1[p][q]);\n\t\t\t\t}\n\t\t\t\tSystem.out.println(\"\");\n\t\t\t}\n\t\t\tSystem.out.println(\"\");\n\t\t}\n\t\t// Scanner input shall be closed\n\t\tinput.close();\n\t\tfor ( i = 0; i < 3; i++) {\n\t\t\tfor ( j = 0; j < 3; j++) {\n\t\t\t\tfor ( k = 0; k < 3; k++) {\n\t\t\t\t\tmultiplied[i][j] += matrix1[i][k] * matrix2[k][j];\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// Multiply Matrices\n\t\tSystem.out.println(\"Answer is: \");\n\t\tfor ( p = 0; p < 3; p++) {\n\t\t\tfor ( q = 0; q < 3; q++) {\n\t\t\t\tSystem.out.printf(\"%.1f \",multiplied[p][q]);\n\t\t\t}\n\t\t\tSystem.out.println(\"\");\n\t\t}\n\t}// end of main method\n}// end of class MatrixMultiply\n\\end{lstlisting}\n\nThe following code computes multiplication of two \\texttt{n}-by-\\texttt{n} matrices.\n\n\\lstset{language=Java,tabsize=2}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class MatrixMultiplyGeneral {\n\tpublic static void main(String[] args) {\n\t\t// Declaration of Variables\n\t\tint i; // counter\n\t\tint j; // counter\n\t\tint p; // counter\n\t\tint q; // counter\n\t\tint k;\n\t\tint row; // row of matrix\n\t\tint col; // column of matrix\n\t\tString str = \"\";\n\t\tdouble matrix1[][];\n\t\tdouble matrix2[][];\n\t\tdouble multiplied[][];\n\t\tint matrixSizes[][] = new int[2][2]; // contains number of rows and columns of matrix1 and matrix2\n\t\tScanner input = new Scanner(System.in);\n\t\t// ask for dimensions of two matrices\n\t\tfor (i = 0; i < 2; i++) {\n\t\t\tfor (j = 0; j < 2; j++) {\n\t\t\t\tswitch (j) {\n\t\t\t\t\tcase 0: str = \"row\"; break;\n\t\t\t\t\tcase 1: str = \"column\"; break;\n\t\t\t\t}\n\t\t\t\tSystem.out.print(\"Enter number of \"+ str +\"s of matrix \"+ (i+1) + \": \" );\n\t\t\t\tmatrixSizes[i][j] = input.nextInt();\n\t\t\t}\n\t\t}\n\t\t// initialize matrices according to their number of rows and columns\n\t\tmatrix1 = new double[matrixSizes[0][0]][matrixSizes[0][1]];\n\t\tmatrix2 = new double[matrixSizes[1][0]][matrixSizes[1][1]];\n\t\tmultiplied = new double[matrixSizes[0][0]][matrixSizes[1][1]];\n\t\t// ask for matrices\n\t\tfor ( i = 0; i < 2; i++) {\n\t\t\tfor ( j = 0; j < matrixSizes[i][0]*matrixSizes[i][1]; j++) {\n\t\t\t\trow = j / matrixSizes[i][0];\n\t\t\t\tcol = j % matrixSizes[i][1];\n\t\t\t\tif (i == 0) {\n\t\t\t\t\tSystem.out.println(\"Enter matrix1[\" + row + \"][\" + col + \"]\");\n\t\t\t\t\tmatrix1[row][col] = input.nextInt();\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\tSystem.out.println(\"Enter matrix2[\" + row + \"][\" + col + \"]\");\n\t\t\t\t\tmatrix2[row][col] = input.nextInt();\n\t\t\t\t}\n\t\t\t}\n\t\t\tSystem.out.println(\"Thank you. 
Matrix initialized as follows:\");\n\t\t\tfor ( p = 0; p < matrixSizes[i][0]; p++) {\n\t\t\t\tfor ( q = 0; q < matrixSizes[i][1]; q++) {\n\t\t\t\t\tSystem.out.printf(\"%.1f \",matrix1[p][q]);\n\t\t\t\t}\n\t\t\t\tSystem.out.println(\"\");\n\t\t\t}\n\t\t}\n\t\t// Scanner input shall be closed\n\t\tinput.close();\n\t\t// check if number of columns of first matrix matches number of rows of second matrix\n\t\tif (matrix1[0].length == matrix2.length) {\n\t\t\tfor ( i = 0; i < matrix2[0].length; i++) {\n\t\t\t\tfor ( j = 0; j < matrix1.length; j++) {\n\t\t\t\t\tfor ( k = 0; k < matrix1[0].length; k++) {\n\t\t\t\t\t\tmultiplied[i][j] += matrix1[i][k] * matrix2[k][j];\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse {\n\t\t\tSystem.out.println(\"Matrices Dimensions Mismatch.\");\n\t\t}\n\t\t// Multiply Matrices\n\t\tSystem.out.println(\"Answer is: \");\n\t\tfor ( p = 0; p < matrixSizes[0][0]; p++) {\n\t\t\tfor ( q = 0; q < matrixSizes[1][1]; q++) {\n\t\t\t\tSystem.out.printf(\"%.1f \",multiplied[p][q]);\n\t\t\t}\n\t\t\tSystem.out.println(\"\");\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\n\\end{document}\n", "meta": {"hexsha": "6c665168de13a62ca4921b033ceb9de91d302d0b", "size": 6634, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/main/tex/quizzes/q02bs.tex", "max_stars_repo_name": "UMB-CS110-2015S/Assignments", "max_stars_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:40.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:40.000Z", "max_issues_repo_path": "src/main/tex/quizzes/q02bs.tex", "max_issues_repo_name": "UMB-CS110-2015S/Assignments", "max_issues_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2015-08-22T15:44:45.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-17T16:39:11.000Z", "max_forks_repo_path": "src/main/tex/quizzes/q02bs.tex", "max_forks_repo_name": "UMB-CS110-2015S/Assignments", "max_forks_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.8558139535, "max_line_length": 249, "alphanum_fraction": 0.6009948749, "num_tokens": 2044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.721743200312399, "lm_q1q2_score": 0.5686839827692778}}
{"text": "\\subsection{Definition}\nAs the name implies, pseudo-random number generation (PRNG) does not produce a truly random, indeterminate value. The exact nature of this pseudo-randomness varies between languages and implementations, but essentially, PRNG is accomplished via an algorithm that uses mathematical formulae or precalculated tables to produce numerical values that appear random.\\autocite{20210517:haahr}\n\nHowever, a common characteristic of all methods of PRNG is that they are deterministic, meaning that they require an initial seed in the form of another numerical value, and that, if the seed is known and replicated exactly, the PRNG will produce the exact same results multiple times. These seeds can take the form of literal values or variable representation of other, unrelated values, such as system time.\n\nAnother property of PRNG is that the values it generates are periodic, meaning that, due to the finite amount of pseudo-random values a given implementation can generate, the output will eventually begin to repeat.\n\n\\subsection{Examples}\nMost common programming languages have built-in PRNG implementations. The method \\texttt{rand(}), defined in the C Standard General Utilities Library, is perhaps the most basic PRNG function in the C family, using a seed, provided by the developer as an argument to an earlier call to the method \\texttt{srand()}, to return an integer value between zero and the language-defined constant \\texttt{RAND\\_MAX}.\\autocite{20210517:cpp-rand} C++11 expands upon these PRNG capabilities in its \\texttt{<random>} header, which introduces numerous PRNG implementations, from uniform distributors to algorithmic implementations of various mathematical distributions.\\autocite{20210517:cpp-random}\n\nLikewise, Java implements a \\textit{random} series of methods, found in the \\texttt{Random} class of the \\texttt{java\\-.\\-util\\-.\\-Random} object, which function similarly to the C \\texttt{rand()} function but can be specified to accept as a seeds and output values of other numeric data types.\\autocite{20210517:java}\n\n\\subsection{Issues}\nBecause of the deterministic and periodic factors, the use of PRNG in a security context is typically discouraged, as the requirement of a defined seed and the finite amount of possible outcomes means that, even if an attacker is not aware of the initial condition, a supposedly random value can be realistically guessed by the use of brute force alone. Still, PRNG has its place in such contexts, as it is generally considered significantly more efficient and practical to implement than true random number generation (TRNG), as TRNG extracts randomness from captured aspects of physical phenomena, such as radioactive decay or atmospheric noise, which most commercially available computer hardware is simply not equipped to do.\n\nAn example of a use of PRNG in an identification and authentication scenario is the creation of a temporary session ID for a user. The viability of this implementation is entirely dependent on the source of the seed value used in the PRNG algorithm. One hypothetical source is a given user\u2019s authentication ID, but this value is the same each time the user logs in. Another option is that of the system time in Unix Time format, which, on one hand, is guaranteed to change each second. 
On the other hand, an attacker could replicate a Unix Time string corresponding to a date and time at which any given user is likely to be authenticating (say, a weekday at around 9:00\\textsc{AM}), and, with enough luck, could potentially spoof the session ID of an active, verified user simply by correctly guessing what time they logged on, a tactic that only increases in viability the more users a system supports.\n\nFrom a software assurance perspective, in the event of a scenario similar in scope to the previous example, reported issues will usually adhere to a vulnerability description enumerated as CWE-337 or CWE-338,\\autocites{20210517:cwe-337}{20210517:cwe-338} the former with respect to the use of the seeded value and the latter with respect to the PRNG function itself. Occasionally, both will occur concurrently for the same instance.\n\n\n\\subsection{Mitigations}\nIn general, the larger and more arbitrary the seed, the less likely it is that the PRNG implementation can be exploited by an attacker. Or, at least, it would result in exploitation attempts taking longer to accomplish, thus increasing the chance of the attack being noticed before the attacker succeeds. Regardless, the very presence of a PRNG implementation is overwhelmingly likely to be documented as an issue by a scan tool, but the security of such an implementation can still be ensured through a multilayered approach tailored to fit the specific context in which the pseudo-random value is being used.\n\nIn an authentication context, imposing a limit on the number of times a user can fail to supply valid credentials and/or placing a restriction on subsequent attempts can mitigate attacks based around exploiting PRNG to replicate a valid set of credentials or the privileges attached to a valid set of credentials.\n\nIn a cryptographic context, the use of algorithms whose PRNG has been sufficiently tested and independently verified is the best way to mitigate potential issues. A list of random number generators approved for use in cryptographic modules can be found in Annex C of the FIPS 140-2 publication from the Information Technology Laboratory of the National Institute of Standards and Technology.\\autocite{20210517:annexc}\n\n", "meta": {"hexsha": "e6433d27d99aa8df3f8b0e3c0058dd6b09b86e02", "size": 5541, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tips/20210517.tex", "max_stars_repo_name": "squinky86/SwATips", "max_stars_repo_head_hexsha": "e9dcbbd094fb023b7383c867b6b15f77bdd7b5cc", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-08T02:12:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-08T02:12:08.000Z", "max_issues_repo_path": "tips/20210517.tex", "max_issues_repo_name": "squinky86/SwATips", "max_issues_repo_head_hexsha": "e9dcbbd094fb023b7383c867b6b15f77bdd7b5cc", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tips/20210517.tex", "max_forks_repo_name": "squinky86/SwATips", "max_forks_repo_head_hexsha": "e9dcbbd094fb023b7383c867b6b15f77bdd7b5cc", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 197.8928571429, "max_line_length": 906, "alphanum_fraction": 0.8142934488, "num_tokens": 1112, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7879311856832191, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.5686839708650246}}
{"text": "\\section{Variational updates derived from gradient descent optimisation}\n\nIn this section we will show how the variational updates that were found via calculus of variations in section XX are equivalent to the updates derived from gradient descent optimisation. \n%This is part of the derivation of the variational updates for stochastic variational inference in section XX. \\\\\n\nThe original derivation can be found in \\cite{hoffmann}. Here we present simplified version for completeness. For consistency and with the notation presented in \\cite{hoffmann}, we will generalise all probability distributions to the exponential family form. See section X for details and nomenclature.\\\\\n\nAs a starting point, we classify the variables of the model into four different types:\n\\begin{itemize}\n\t\\itemsep-1.5em \n\t\\item observations: $N$ different vectors $\\bfy_{n}$ which contain the observed variables. \\\\\n\t\\item local variables: $N$ different vectors $\\bfz_{n}$ which contain all $K$ hidden variables associated with each sample $n$. \\\\\n\t\\item global variables: a vector $\\bbeta$ which contains all $B$ hidden variables not indexed by $n$. \\\\\n\t\\item parameters: a vector $\\balpha$ which contains all fixed parameters for the global variables.\n\\end{itemize}\nThe distinction between local and global variables lies on the conditional dependencies. Given the global variables $\\beta$, the $n$th local variable $z_n$ is conditionally independent from any other observation $y_{j}$ or local variable $z_{j}$ (where $j \\neq n$):\n\\[\np(\\bfy_n,\\bfz_n| \\bfy_j,\\bfz_j,\\bbeta,\\balpha) = p(\\bfy_n,\\bfz_n|\\bbeta,\\balpha)\n\\]\n%As an example, in the MOFA model the factors belong to the local variables whereas the weights belong to the global variables.\n\nAs in Equation XXX, the observations are assumed to be independent, which leads to a factorised likelihood:\n\\[\np(\\bfY,\\bfZ,\\beta,\\alpha) = p(\\beta|\\alpha) \\prod_{n=1}^{N} p(\\bfy_n,\\bfz_n|\\beta)\n\\]\n\nAdditionally, to obtain closed-form variational updates, we need to assume that the complete conditionals of the hidden variables are members of the exponential family:\n\\baln\np(\\beta_b|\\bfY,\\bfZ,\\alpha) = h(\\beta_b) \\exp\\{ \\eta_g(\\bfY,\\bfZ,\\balpha)^T t(\\beta_b) - a_g(\\eta_g(\\bfY,\\bfZ,\\alpha)) \\} \\\\\np(z_{nk}|y_{nj},z_{nj},\\bbeta) = h(\\beta) \\exp\\{ \\eta_l(y_{nj}, z_{nj},\\bbeta)^T t(z_{nk}) - a_g(\\eta_l(y_{nj},z_{nj},\\bbeta)) \\} \\\\\n\\ealn\nwhere $\\lambda_b$ are the parameters governing the global variable $\\beta_b$. Similarly, $\\phi_{nk}$ are the parameters governing the local variable $z_{nk}$. In some cases, this assumpion results naturally from the choice of conjugated likelihood and prior distributions (section XXX). Yet, even in the case of non-conjugated distributions, some approaches introduce local approximations to the likelihood to achieve conjugacy. See section XXX.\\\\\n\nTo set the inference framework, variational distributions are introduced for both the local variables and the global variables. As in XX, following the mean-field assumption, the variational distributions factorise:\n\\[\nq(\\bfZ,\\bbeta) = \\prod_{b=1}^{B} q(\\beta_b,\\lambda_b) \\prod_{n=1}^{N}\\prod_{k=1}^{K} p(z_{nk}|\\phi_{nk})\n\\]\n\nSo far no assumptions were made regarding the nature of the probability distributions. 
To derive the gradient descent coordinate update, we need to assume exponential family distributions for the variational distributions:\n\\baln\nq(\\beta_b|\\lambda_b) &= h(\\beta_b) \\exp\\{ \\lambda_b^T t(\\beta_b) - a_g(\\lambda_b) \\} \\\\\nq(z_{nk}|\\phi_{nk}) &= h(z_{nk}) \\exp \\{ \\phi_{nk}^T t(z_{nk}) - a_l(\\phi_{nk}) \\}\n\\ealn\n\nFrom the assumptions above, the ELBO factorises as:\n\\baln\n\\Lagr &= \\E_q[\\log p(\\bfY,\\bfZ,\\bbeta)] - \\E_q[\\log q(\\bfZ,\\bbeta)]  \\\\\n&= \\E_q[\\log p(\\bfY,\\bfZ,\\bbeta)] - \\sum_{b=1}^{B}\\E_{q(\\beta_b)}[\\log q(\\beta_b)] - \\sum_{n=1}^{N}\\sum_{k=1}^{K}  \\E_{q(z_{nk})}[\\log q(z_{nk})] \\\\\n\\ealn\n\n\\subsubsection{Computing the gradients}\nEquation X contains the objective function. First we derive the updates for the global parameters. As a function of $\\lambda$, the ELBO becomes:\n\\baln\n\t\\Lagr(\\lambda) &= \\E_{q(Z,\\beta)}[\\log p(\\beta|\\bfY,\\bfZ)] - \\E_{q(\\beta)}[\\log q(\\beta)] + \\const \\\\\n\t&= \\E_{q(Z,\\beta)}[\\eta_g(\\bfY,\\bfZ,\\alpha)^T t(\\beta)] - \\E_{q(\\beta)}[\\lambda^T t(\\beta) - a_g(\\lambda) ] + \\const \\\\\n\t&= \\E_{q(Z)}[\\eta_g(\\bfY,\\bfZ,\\alpha)^T] \\nabla a_g(\\lambda) - \\lambda^T \\nabla a_g(\\lambda) + a_g(\\lambda) + \\const\n\\ealn\nwhere we have used the exponential family identity $\\E_{q(\\beta)}[t(\\beta)] = \\nabla a_g(\\lambda)$. Taking the gradient with respect to $\\lambda$ leads to the solution:\n\\[\n\t\\lambda = \\E_{q(Z)}[\\eta_g(\\bfY,\\bfZ,\\alpha)]\n\\]\n\nTurning to the local parameters, as a function of $\\phi_{nk}$ the ELBO becomes:\n\\baln\n\\Lagr(\\phi_{nk}) &= \\E_{q(\\beta,\\bfz_{n})}[\\log p(z_{nk}|\\bfy_{n},\\bfz_{nj}, \\beta)] - \\E_{q(z_{nk})}[\\log q(z_{nk})] + \\const \\\\\n&= \\E_{q(\\beta,\\bfz_{n})}[\\eta_l(\\bfy_n,\\bfz_{nj},\\beta)^T t(z_{nk})] - \\E_{q(z_{nk})}[\\phi_{nk}^T t(z_{nk}) - a_l(\\phi_{nk}) ] + \\const \\\\\n&= \\E_{q(\\beta,\\bfz_{nj})}[\\eta_l(\\bfy_n,\\bfz_{nj},\\beta)^T] \\nabla a_l(\\phi_{nk}) - \\phi_{nk}^T \\nabla a_l(\\phi_{nk}) + a_l(\\phi_{nk}) + \\const\n\\ealn\nwhere the index $j$ runs over all local variables associated with sample $n$ except the $k$-th, and we have again used the identity $\\E_{q(z_{nk})}[t(z_{nk})] = \\nabla a_l(\\phi_{nk})$.\n\nTaking the gradient with respect to $\\phi_{nk}$ leads to the following solution:\n\\[\n\\phi_{nk} = \\E_{q(\\beta,\\bfz_{n,j})}[\\eta_l(\\bfy_{n},\\bfz_{n,j},\\beta)]\n\\]\nEquations XX and XX define the variational updates for the gradient ascent algorithm.\n\n\\subsubsection{Equivalency with calculus of variations}\nTO-DO\n\n", "meta": {"hexsha": "f877b636c6bf4fe06195c5dde3e130e99da36b93", "size": 5526, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter2/old/variational_gradient_descent.tex", "max_stars_repo_name": "rargelaguet/thesis", "max_stars_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-01-08T13:01:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T07:24:40.000Z", "max_issues_repo_path": "Chapter2/old/variational_gradient_descent.tex", "max_issues_repo_name": "rargelaguet/thesis", "max_issues_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter2/old/variational_gradient_descent.tex", "max_forks_repo_name": "rargelaguet/thesis", "max_forks_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", 
"max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-09T04:47:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T08:25:50.000Z", "avg_line_length": 68.2222222222, "max_line_length": 447, "alphanum_fraction": 0.7066594282, "num_tokens": 1724, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.5686367920336921}}
{"text": "\n\\documentclass[12pt]{amsart}\n\\usepackage{geometry} % see geometry.pdf on how to lay out the page. There's lots.\n\\usepackage{bsymb}\n\\usepackage{unitb}\n\\usepackage{calculational}\n\\usepackage{ulem}\n\\usepackage{hyperref}\n\\normalem\n\\geometry{a4paper} % or letter or a5paper or ... etc\n% \\geometry{landscape} % rotated page geometry\n\n% See the ``Article customise'' template for some common customisations\n\n\\title{}\n\\author{}\n\\date{} % delete this line to display the current date\n\n%%% BEGIN DOCUMENT\n\\setcounter{tocdepth}{4}\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\\newcommand{\\Train}{\\text{TRAIN}}\n\\newcommand{\\Blk}{\\text{BLK}}\n\n\\section{Initial model}\n\\begin{machine}{m0}\n\\input{train-station-set/machine_m0}\n\\begin{align*}\n\t\\false \\tag{default} \\label{default}\n\\end{align*}\n\n\\newset{\\Train}\n\n\\with{sets}\n\n\\begin{align*}\n\\variable{\tin : \\set [ \\Train ]}\n\\end{align*}\n\n\\begin{align*}\n\\dummy{\tt : \\Train}\n\\end{align*}\n\n\\newevent{m0:enter}{enter} \n\\newevent{m0:leave}{leave}\n\n\\begin{align*}\n\\indices{m0:leave}{\tt : \\Train}\n\\end{align*}\n\\begin{align*}\n\\indices{m0:enter}{\tt : \\Train}\n\\end{align*}\n\n% \\removecoarse{m0:leave}{default} % \\weakento{m0:leave}{default}{lv:c0}\n\n\\begin{align*}\n\\cschedule{m0:leave}{lv:c0}\n\t{\tt &\\in in } \\\\ \n\\evassignment{m0:leave}{lv:a0}\n\t{\tin' &= in \\setminus \\{ t \\} } \\\\\n\\evassignment{m0:enter}{a1}\n\t{\tin' &= in \\bunion \\{ t \\} }\n\\end{align*}\n\n\\begin{align*}\n&\\progress{m0:prog0}\n\t{\tt \\in in }{ \\neg t \\in in }\n\\refine{m0:prog0}{discharge}{m0:tr0}{}\n&\\transientB{m0:leave}{m0:tr0}{ \\index{t}{t' = t} }\n\t{\tt \\in in }\n\\end{align*}\n\n\\end{machine}\n\n\\section{First refinement}\n\\begin{machine}{m1}\n\n\n\n\\refines{m0}\n\n\\newset{\\Blk}\n\n% \\with{sets}\n\\with{functions}\n\n\\begin{align*}\n\\variable{\tloc : \\Train \\pfun \\Blk}\n\\end{align*}\n\n\\begin{align*}\n\\\\ \\initialization{in1}\n\t{ in = \\emptyset }\n\\end{align*}\n\n\\begin{align*}\n\\invariant{inv0}\n\t{\t\\dom.loc = in }\n\\end{align*}\n\n\\begin{align*}\n\\constant{\tent,ext : \\Blk} ; \\quad\n\\constant{\tplf : \\set [ \\Blk ]}\n\\end{align*}\n\n\\subsection{New requirements}\n\\begin{align*}\n\\safety{m1:saf0}\n\t{ \\neg t \\in in& }{ t \\in in \\land loc.t = ent }\n\\\\ \\safety{m1:saf1}\n\t{ t \\in in \\land loc.t = ent& }{ t \\in in \\land loc.t \\in plf }\n\\\\ \\safety{m1:saf2}\n\t{ t \\in in \\land loc.t \\in plf& }{ t \\in in \\land loc.t = ext }\n\\\\ \\safety{m1:saf3}\n\t{ t \\in in \\land loc.t = ext& }{ \\neg t \\in in }\n\\end{align*}\n\n\\subsection{Proofs}\n\n\\subsubsection{Invariant \\ref{inv0}}\n\n\\begin{align*}\n\\evassignment{m0:leave}{lv:a2}\n\t{ loc' &= \\{ t \\} \\domsub loc }\n\\\\ \\evassignment{m0:enter}{a3}\n\t{ loc' &= loc \\1| t \\fun ent }\n\\\\ \\initialization{in0}\n\t{ loc \\, &= \\emptyfun }\n\\end{align*}\n\n\\subsubsection{Safety \\ref{m1:saf1}, \\ref{m1:saf2}}\n\nThis takes care of \\eqref{m1:saf2}:\n\n\\begin{align*}\n\\evguard{m0:leave}{lv:grd1}\n\t{ loc.t = ext }\n\\\\ \\evguard{m0:enter}{ent:grd1}\n\t{ \\neg t \\in in }\n\\end{align*}\n\nThis takes care of \\eqref{m1:saf1}\n\\newcommand{\\compl}{-}\n\\begin{align*}\n\\evguard{m0:leave}{lv:grd0}\n\t{ t \\in in }\n\\\\ \\assumption{asm0}\n\t{ \\neg ext \\in plf \\1\\land \\neg ext = ent } \\\\\n% \n\\assumption{asm7}{ (\\{ ext \\} \\bunion plf = \\compl \\{ent\\}) \\land (\\{ ent \\} \\bunion plf = \\compl \\{ext\\}) \\land \\{ ext, ent \\} = 
\\compl plf } \n\\end{align*}\n\\subsubsection{Side conditions}\nIn order to take care of the schedulability of \\ref{m0:leave}, we need to strengthen the coarse schedule by adding \\ref{c1}. The side conditions for schedule replacement require us to prove \\ref{m1:prog0} and \\ref{m1:saf3} in order to prove refinement.\n\\replace{m0:leave}{lv:c1}{m1:prog0}{m1:saf3}\n\n\\begin{align*}\n\\cschedule{m0:leave}{lv:c1}\n\t{ loc.t = ext }\n\\\\ \\progress{m1:prog0}\n\t{ t \\in in }{ t \\in in \\land loc.t = ext }\n\\end{align*}\n\nThe first step in implementing \\eqref{m1:prog0} is to break it down into a few cases. The property says ``a train inside the station eventually reaches the exit''. Now, we're going to break the predicate ``the train is inside the station'' into ``the train is at the entrance'', ``the train is at a platform'' and ``the train is at the exit'', as described by the following assumption:\n\\hide{\n\t\\begin{align*}\n\t\\dummy{\t\tb : \\Blk}\n\t\\end{align*}\n}\n\\begin{align*}\n\\assumption{asm1}\n{\t\\qforall{b}{}{ b \\in \\Blk \\2\\equiv b \\in plf \\1\\lor b = ent \\1\\lor b = ext }\t}\n\\end{align*}\n\nThis allows us to apply \\emph{disjunction} to \\eqref{m1:prog0}.\n\n\\begin{align*}\n& { t \\in in }\\3\\mapsto{ t \\in in \\land loc.t = ext }\n\\refine{m1:prog0}{disjunction}{m1:prog1,m1:prog2,m1:prog3}{}\n& \\progress{m1:prog2}\n\t{ t \\in in \\land loc.t = ext }{ t \\in in \\land loc.t = ext }\n\\\\ & \\progress{m1:prog1}\n\t{ t \\in in \\land loc.t = ent }{ t \\in in \\land loc.t = ext }\n\\\\ & \\progress{m1:prog3}\n\t{ t \\in in \\land loc.t \\in plf }{ t \\in in \\land loc.t = ext }\n\\end{align*} \n%\n\\hide{ \\refine{m1:prog2}{implication}{}{} } %\n%\n\\eqref{m1:prog2} is true by implication. This leaves us with \\eqref{m1:prog1} and \\eqref{m1:prog3}. \\eqref{m1:prog1} says that a train at the entrance reaches the exit. 
However, \\eqref{m1:saf1} says that a train can only leave the entrance through a platform:\n\n\\begin{align*}\n\t& { t \\in in \\land loc.t = ent }\\3\\mapsto{ t \\in in \\land loc.t = ext } \\tag{\\ref{m1:prog1}}\n\\refine{m1:prog1}{transitivity}{m1:prog4,m1:prog3}{}\n& \\progress{m1:prog4}\n\t{ t \\in in \\land loc.t = ent }{ t \\in in \\land loc.t \\in plf } \n\\\\ & { t \\in in \\land loc.t \\in plf }\\3\\mapsto{ t \\in in \\land loc.t = ext } \\tag{\\ref{m1:prog3}}\n\\end{align*}\n\nWe introduce new events to satisfy \\eqref{m1:prog3} and \\eqref{m1:prog4}\n\\newevent{m1:movein}{move\\_in} \n\\newevent{m1:moveout}{move\\_out} \n\\hide{ \n\t\\refine{m1:prog3}{discharge}{m1:tr0,m1:saf2}{}\n\t\\refine{m1:prog4}{discharge}{m1:tr1,m1:saf1}{}\n}\n\\begin{align*} \n& \\transientB{m1:movein}{m1:tr1}{ \\index{t}{t' = t} }\n\t{ t \\in in \\1\\land loc.t = ent }\n\\\\ & \\transientB{m1:moveout}{m1:tr0}{ \\index{t}{t' = t} }\n\t{ t \\in in \\1\\land loc.t \\in plf }\n\\\\ & \\evguard{m1:movein}{mi:grd7}\n\t{ b \\in plf }\n\\end{align*}\n\\subsubsection{New events} \n\n\n\\begin{align*}\n\\indices{m1:moveout}{\tt : \\Train}\n\\end{align*}\n\nWe adjust \\ref{m1:moveout} to satisfy \\ref{m1:tr0}.\n\n% \\removecoarse{m1:moveout}{default} % \\weakento{m1:moveout}{default}{c1}{}\n\n\\begin{align*}\n\\evassignment{m1:moveout}{a2}{ loc' = loc \\1 | t \\fun ext }\n\\\\ \\assumption{asm2}\n\t{ \\qexists{b}{}{b \\in plf} }\n\\\\ \\cschedule{m1:moveout}{c1}{ t \\in in \\land loc.t \\in plf }\n\\\\ \\evguard{m1:moveout}{mo:g1}{ t \\in in }\n\\\\ \\evguard{m1:moveout}{mo:g2}{ loc.t \\in plf }\n\\end{align*}\n\nWe adjust \\ref{m1:movein} to satisfy \\ref{m1:tr1}.\n\n% \\removecoarse{m1:movein}{default} % \\weakento{m1:movein}{default}{mi:c1,mi:c2}{}\n\n\\begin{align*}\n\\indices{m1:movein}{\tt : \\Train} ; \\quad\n\\param{m1:movein}{ b : \\Blk }\n\\end{align*}\n\n\\begin{align*}\n\\evassignment{m1:movein}{mi:a2}\n\t{ loc' &= loc \\1| t \\fun b }\n\\\\ \\assumption{asm3}\n\t{ \\neg ent &\\in plf }\n\\\\ \\cschedule{m1:movein}{mi:c1}{ t &\\in in } \n\\\\ \\cschedule{m1:movein}{mi:c2}{ loc.t &= ent }\n\\\\ \\evguard{m1:movein}{mi:g1}{ t &\\in in }\n\\\\ \\evguard{m1:movein}{mi:grd0}\n\t{ loc.t &= ent } % m1:saf3\n\\end{align*}\n%\n\\end{machine}\n\n\\section{Second refinement}\n\n\\begin{machine}{m2}\n\n\\refines{m1} \n\n\\begin{align*}\n\\dummy{\tt_0,t_1 : \\Train}\n\\end{align*}\n\\newcommand{\\injective}{\\text{injective}}\n\\subsection{New Requirement}\n\\begin{align*}\n\\invariant{m2:inv0}\n\t{\t\\injective.loc } % \\qforall{t_0,t_1}{t_0 \\in in \\land t_1 \\in in \\land loc.t_0 = loc.t_1}{t_0 = t_1}\t\n\\end{align*}\n%\n\\subsection{Design}\n%\n\\replace{m1:movein}{mi:c0}{m2:prog0}{m2:saf0}\n\\begin{align*}\n\\progress{m2:prog0}\n\t{\\true}\n\t{\\neg plf \\subseteq \\ran.loc  }\n% \\\\ \\safetyB{m2:saf0}{m1:movein}{ \\neg ~ plf \\subseteq \\ran.loc }{\\false} % \\2{\\textbf{except}} \\text{\\ref{m1:movein}}\n\\end{align*}\n\n%The above property, \\eqref{m2:saf0}, should not discharge automatically.\n\n\\begin{align*} \n\\evguard{m1:movein}{mi:g0}\n\t{\t \\neg b \\in \\ran.loc  \t}\n\\\\ \\cschedule{m1:movein}{mi:c0}\n\t{\t\\neg ~ plf \\subseteq \\ran.loc \t}\n%\\\\ \\evguard{m1:movein}{mi:g3}\n%\t{ \\qforall{t}{t \\in in}{ \\neg loc.t = b} }\n\\\\ \\evguard{m0:enter}{et:g1}\n\t{\t\\neg ent \\in \\ran.loc } \n\\\\ \\evguard{m1:moveout}{mo:g3}\n\t{\t\\neg ext \\in \\ran.loc \t}\n\\end{align*}\n%\n\\begin{proof}{\\ref{m0:leave}/INV/\\ref{m2:inv0}} \n\\[ \\assert{goal0}{\\qforall{t_0,t_1}{t_0 \\in in' \\land 
t_1 \\in in' \\land loc'.t_0 = loc'.t_1}{t_0 = t_1}\t} \\]\n\\begin{subproof}{goal0} \\begin{free:var}{t_0}{t_0}\n\\begin{free:var}{t_1}{t_1}\n\t\\begin{align}\n\t\\assume{hyp0}{\\neg t_0 = t_1}\n\t\\hide{\n\t\t\\\\ \\assume{hyp1}{ t_0 \\in in' }\n\t\t\\\\ \\assume{hyp2}{ t_1 \\in in' }}\n\t\\\\ \\assert{hyp5}{ t_0 \\in in }\n\t\\\\ \\assert{hyp6}{ t_1 \\in in }\n\t\\\\ \\assert{hyp3}{ \\neg t_0 = t } % \\label{hyp7}\n\t\\\\ \\assert{hyp4}{ \\neg t_1 = t } % \\label{hyp8}\n\t\\\\ \\goal{\\neg (loc'.t_0 = loc'.t_1)} \\notag\n\t\\end{align}\n\t\\hide{\n\t\t\\assert{hyp7}{ t_0 \\in \\dom.loc \\setminus \\{ t \\} }\n\t\t\\assert{hyp8}{ t_1 \\in \\dom.loc \\setminus \\{ t \\} }\n\t\t\n\t\\begin{calculation}\n\t\tloc'.t_0 = loc'.t_1\n\t\\hint{=}{ \\ref{lv:a2} \\ref{m2:inv0} }\n\t\t(\\{ t \\} \\domsub loc).t_0 = (\\{ t \\} \\domsub loc).t_1 \n\t\\hint{=}{ \\hide{ \\eqref{hyp7} and \\eqref{hyp8} } \n\t\t\t\\igeqref{hyp3} and \\igeqref{hyp4} }\n\t\tloc.t_0 = loc.t_1 \n\t\\hint{=}{ \\eqref{hyp5}, \\eqref{hyp6}, \n\t\t\t\\eqref{hyp0} with \\ref{m2:inv0} \n\t\t\t} \n\t\t\\false\n\t\\end{calculation} }\n\t\\hide\n\t{\t\\begin{subproof}{hyp3} \\easy \\end{subproof}\n\t\t\\begin{subproof}{hyp4} \\easy \\end{subproof}\n\t\t\\begin{subproof}{hyp5} \\easy \\end{subproof}\n\t\t\\begin{subproof}{hyp6} \\easy \\end{subproof}\n\t\t}\n\t\\hide{\n\t\t\\begin{subproof}{hyp7} \n\t\t\\begin{calculation}\n\t\t\tt_0 \\in \\dom.loc \\setminus \\{ t \\}\n\t\t\\hint{=}{ \\ref{inv0} }\n\t\t\tt_0 \\in in \\setminus \\{ t \\} \n\t\t\\hint{=}{ \\ref{lv:a0} }\n\t\t\tt_0 \\in in'\n\t\t\\hint{=}{ \\eqref{hyp1} }\n\t\t\t\\true\n\t\t\\end{calculation}\n\t\t\\end{subproof}\n\t\t\\begin{subproof}{hyp8} \n\t\t\\begin{calculation}\n\t\t\tt_1 \\in \\dom.loc \\setminus \\{ t \\}\n\t\t\\hint{=}{ \\ref{inv0} }\n\t\t\tt_1 \\in in \\setminus \\{ t \\} \n\t\t\\hint{=}{ \\ref{lv:a0} }\n\t\t\tt_1 \\in in'\n\t\t\\hint{=}{ \\ref{hyp2} }\n\t\t\t\\true\n\t\t\\end{calculation}\n\t\t\\end{subproof}\n\t\t}\n\\end{free:var}\n\\end{free:var} \\end{subproof} \\easy\n\\end{proof}\n\n\\replacefine{m1:moveout}{m2:prog1} \n\n\\begin{align*}\n\\fschedule{m1:moveout}{mo:f0}{ \\neg ext \\in \\ran.loc } \n\\\\ \\progress{m2:prog1}{\\true}{ \\neg ext \\in \\ran.loc }\n\\end{align*}\n\n\\begin{align*}\n\t& \\true \\2\\mapsto \\neg ext \\in \\ran. loc \n\\refine{m2:prog1}{trading}{m2:prog2}{}\n\t& \\progress{m2:prog2}\n\t\t{ext \\in \\ran. loc }\n\t\t{ \\neg ext \\in \\ran. loc } \n\\refine{m2:prog2}{disjunction}{m2:prog3}{}\n\t& \\progress{m2:prog3}\n\t\t{ t \\in in \\land loc.t = ext }\n\t\t{\\neg ext \\in \\ran. loc  } \n%\\\\\t& \\progress{m2:prog3b}{ t \\in in \\land loc.t = ext }{ t \\in in \\land \\neg loc.t = ext  } \n\\end{align*}\n%\n\\hide{\n\t\\refine{m2:prog3}{discharge}{m2:tr0,m2:saf1}{}\n\t}\n%\nWe implement \\eqref{m2:prog3} with the following:\n%\t\n\\begin{align*}\n\t\\transientB{m0:leave}{m2:tr0}{ \\index{t}{t' = t} }\n\t\t{ t \\in in \\1\\land loc.t = ext }\n\\\\\t\\safety{m2:saf1}\n\t\t{  t \\in in \\land loc.t = ext }\n\t\t{ \\neg ext \\in \\ran. loc } \n\\end{align*}\n\n%\\begin{proof}{m1:movein/SCH/m2/0/REF/delay/prog/rhs}\n%\n%\\end{proof}\n\n\\begin{align*}\n\t& {\\true} \\1\\mapsto { \\neg ~ plf \\subseteq \\ran.loc } \n%\t\\tag{\\ref{m2:prog0}}\n\\refine{m2:prog0}{trading}{m2:prog4}{}\n%\t& {\\qforall{b}{b \\in plf}{ \\qexists{t}{t \\in in}{ loc.t = b}} \\!\\!} \\3\\mapsto\n%\t\t{\\!\\! 
\\qexists{b}{b \\in plf}{ \\qforall{t}{t \\in in}{ \\neg loc.t = b}} }  \n%\t\\notag \\\\\n\t& \\progress{m2:prog4}{plf \\subseteq \\ran.loc \\!\\!}\n\t\t{\\!\\! \\neg ~ plf \\subseteq \\ran.loc }\n\\refine{m2:prog4}{monotonicity}{m2:prog5}{}\n%\t& {\\qexists{b,t}{b \\in plf \\land t \\in in}{ loc.t = b}} \\3\\mapsto\n%\t\t{\\qexists{b}{b \\in plf}{ \\qforall{t}{t \\in in}{ \\neg loc.t = b}} } \\notag \\\\\n\t& \\progress{m2:prog5}{\\qexists{b}{b \\in plf}{ b \\in \\ran.loc }}\n\t\t{ \\qexists{b}{}{ b \\in plf \\1\\land \\neg b \\in \\ran.loc } }\n\\refine{m2:prog5}{disjunction}{m2:prog6}{}\n\t& \\progress{m2:prog6}{ \\qexists{t}{t \\in in}{ b \\in plf \\1\\land  loc.t = b } }\n\t\t{ b \\in plf \\land \\neg b \\in \\ran.loc }\n\\refine{m2:prog6}{disjunction}{m2:prog7}{}\n\t& \\progress{m2:prog7}{ t \\in in \\land b \\in plf \\1\\land  loc.t = b }\n\t\t{ b \\in plf \\land \\neg b \\in \\ran.loc }\n\\end{align*}\n\n\\hide{\n\t\\refine{m2:prog7}{discharge}{m2:tr1,m2:saf2}{}\n\t}\n\n\\begin{align*}\n\t\\transientB{m1:moveout}{m2:tr1}{ \\lt{m2:prog1} \\index{t}{t' = t} }\n\t\t{ b \\in plf \\land t \\in in \\land loc.t = b }\n\\\\ \t\\safety{m2:saf2}{ t \\in in \\land b \\in plf \\1\\land  loc.t = b }\n\t\t{ b \\in plf \\1\\land \\qforall{t}{ t \\in in }{ \\neg loc.t = b } }\n\\end{align*}\n\n\\input{train-station-set/m2_m1-moveout.tex}\n\nIn order to prove \\ref{m1:moveout}, we need a new progress property, to make sure that the fine schedule becomes true infinitely often. We can use the same one that we used to introduce the fine schedule \\eqref{m2:prog1}.\n\n\\subsection{Summary of the events} \n\n\\begin{block}\n\n\\item[{}] Here are the events\n\\item[{}]\n\\input{train-station-set/m2_m0-enter}\n\\item[{}]\n\\input{train-station-set/m2_m1-movein}\n\\item[{}]\n\\input{train-station-set/m2_m1-moveout}\n\\item[{}]\n\\input{train-station-set/m2_m0-leave}\n\\end{block}\n\\input{train-station-set/machine_m2}\n\\end{machine}\n\n\\section{Third refinement}\n\n\\begin{machine}{m3}\n\n\\refines{m2}\n\n\\[\t\\variable{isgn: \\Bool} ; ~\n\t\\variable{osgn: \\set [\\Blk] } \n\\]\n\n% \\with{sets}\n% \\with{sets}\n% \\with{functions}\n\n\n\\begin{align*}\n\t\\dummy{ b_0, b_1, p, p_0, p_1, p_2 : \\Blk }\n\\\\\t\\invariant{m3:inv0}{ osgn \\subseteq plf }\n\\\\\t\\invariant{m3:inv1}{ \\qforall{p_0,p_1}{ p_0 \\in osgn \\land p_1 \\in osgn }{ p_0 = p_1 } }\n\\end{align*}\n\\begin{align*}\n\t\\evassignment{m0:enter}{m3:ent:act0}{isgn' &= isgn}\n\\\\\t\\evassignment{m0:enter}{m3:ent:act1}{osgn' &= osgn}\n\\\\\t\\evassignment{m1:movein}{m3:mi:act0}{isgn' &= isgn}\n\\\\\t\\evassignment{m1:movein}{m3:mi:act1}{osgn' &= osgn}\n\\\\\t\\evassignment{m0:leave}{m3:ext:act0}{isgn' &= isgn}\n\\\\\t\\evassignment{m0:leave}{m3:ext:act1}{osgn' &= osgn}\n\\\\\t\\evassignment{m1:moveout}{m3:mo:act0}{isgn' &= isgn}\n\\\\\t\\evassignment{m1:moveout}{m3:mo:act1}{osgn'  &\\2 = osgn \n\t\\setminus \\{ loc.t \\} }\n\\\\\t\\initialization{m3:init0}{ osgn &= \\emptyset }\n\\\\\t\\initialization{m3:init1}{ isgn &= \\false }\n\\end{align*}\n\n\\begin{align*}\n\t\\evguard{m1:moveout}{m3:mo:grd0}{ loc.t \\in osgn } \\\\\n\t\\cschedule{m1:moveout}{m3:mo:sch0}{ loc.t \\in osgn }\n\\end{align*}\n\n\\replace{m1:moveout}{m3:mo:sch0}{m3:prog0}{m3:saf0}\n\t% it works because the PO is trivial p |-> true\n\\replacefine{m1:moveout}{m3:prog0} \\removefine{m1:moveout}{mo:f0}\n\\removeguard{m1:moveout}{mo:g3}\n\n\\begin{align*}\n\t\\progress{m3:prog0}{ t \\in in \n\t\t\\1\\land loc.t \\in plf }{ t \\in in \\land loc.t \\in osgn 
}\n\\\\\t\\safety{m3:saf0}{ t \\in in \\land loc.t \\in osgn }{\n\t\t \\, ext \\in \\ran.loc  }\n\\end{align*}\n\n\\begin{align*} \n\t\\invariant{m3:inv2}{ ext \\in \\ran.loc \\implies osgn = \\emptyset } % \\qforall{t}{}{t \\in in \\land loc.t \\in osgn \\1\\implies \\neg \\, ext \\in \\ran.loc } }\n\\\\\t\\invariant{m3:inv3}{ osgn \\subseteq \\ran.loc } %{ \\qforall{p}{}{ p \\in osgn \\1\\implies p \\in \\ran.loc } }\n\\end{align*}\n \n\\input{train-station-set/m3_m1-moveout}\n\n%\\begin{align*}\n\\newevent{m3:ctr:plf}{ctr\\_plf}\n%\\end{align*}\n\n\\begin{align*}\n\t&{  t \\in in \n\t\t\\1\\land loc.t \\in plf }\\2\\mapsto{ t \\in in \\land loc.t \\in osgn }\n\\refine{m3:prog0}{discharge}{m3:tr0,m3:saf1}{}\n\t&\\transientB{m3:ctr:plf}{m3:tr0}\n\t\t{ \\index{p}{p' = loc.t} }\n\t\t{  t \\in in \n\t\t\t\\1\\land loc.t \\in plf \\setminus osgn }\n\\\\ \t& \\safety{m3:saf1}{ t \\in in \n\t\t\\1\\land loc.t \\in plf }\n\t{\tt \\in in \\land loc.t \\in osgn }\n\\end{align*}\n\nWe should use intersection here:\n\\begin{align*}\n\\indices{m3:ctr:plf}{ p : \\Blk }\n\\\\ \\cschedule{m3:ctr:plf}{m3:cp:c0}{ p \\in plf \\land p \\in \\ran.loc \\land \\neg p \\in osgn }\n\\\\ \\evassignment{m3:ctr:plf}{m3:cp:act0}{ osgn' = osgn \\bunion \\{ p \\} }\n\\end{align*}\n% EN should not pass this way\n% \\removecoarse{m3:ctr:plf}{default}  %\\weakento{m3:ctr:plf}{}{m3:cp:c0}\n\\input{train-station-set/m3_m1-moveout}\n\\input{train-station-set/m3_m3-ctr-plf}\n\\input{train-station-set/machine_m3}\n% \\invariant{m3:inv4}{ ext \\in \\ran.loc \\2\\implies osgn = \\emptyset }\n\\end{machine}\n\n\\end{document}", "meta": {"hexsha": "857916d5d13a1318f6e0ab53d0a064146a752d01", "size": 15680, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tests/train-station-set.tex", "max_stars_repo_name": "literate-unitb/literate-unitb", "max_stars_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-07-27T11:05:56.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-20T14:53:33.000Z", "max_issues_repo_path": "Tests/train-station-set.tex", "max_issues_repo_name": "unitb/literate-unitb", "max_issues_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2017-06-25T03:53:02.000Z", "max_issues_repo_issues_event_max_datetime": "2017-06-25T04:28:38.000Z", "max_forks_repo_path": "Tests/train-station-set.tex", "max_forks_repo_name": "literate-unitb/literate-unitb", "max_forks_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6131386861, "max_line_length": 383, "alphanum_fraction": 0.6264668367, "num_tokens": 6723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5686367906622344}}
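The behaviour specified above can also be sanity-checked operationally; a minimal Python simulation of the events (our own sketch with assumed block names, not generated from the Unit-B model), asserting the injectivity invariant \eqref{m2:inv0} after every step:

\begin{verbatim}
PLF, ENT, EXT = {"p1", "p2"}, "ent", "ext"   # assumed block layout
loc = {}                                     # train -> block; dom(loc) plays `in`

def check():                                 # m2:inv0: loc stays injective
    assert len(set(loc.values())) == len(loc)

def enter(t):
    if t not in loc and ENT not in loc.values():              # guard et:g1
        loc[t] = ENT; check()

def move_in(t, b):
    if loc.get(t) == ENT and b in PLF and b not in loc.values():  # mi:g0
        loc[t] = b; check()

def move_out(t):
    if loc.get(t) in PLF and EXT not in loc.values():         # guard mo:g3
        loc[t] = EXT; check()

def leave(t):
    if loc.get(t) == EXT:                                     # guard lv:grd1
        del loc[t]; check()

enter("t0"); move_in("t0", "p1"); move_out("t0"); leave("t0")
assert loc == {}
\end{verbatim}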
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Information Content}\n\\begin{frame}\n  \\frametitle{XAFS Analysis: Information Content in XAFS }\n\n  The Number of Parameters we can measure from our data is limited:\n\n  \\vmm\n\n  \\begin{center}  {\\Red{\n        $\\displaystyle  N \\approx { { 2 \\Delta k \\Delta R} \\over{ \\pi}}  $\n      }}\n  \\end{center}\n\n  \\vmm\n\n  where $\\Red{ \\Delta k}$ and $\\Red{ \\Delta R}$ are the $k$- and\n  $R$-ranges of the data.\n\n  Typical:  $k = [2.0, 12.0] \\rm\\,\\AA^{-1}$ and $R =\n  [1.0, 3.0]\\rm\\,\\AA$, gives  $\\sim 12$ Parameters.\n\n  \\pause \\vmm \\hrule \\vmm\n\n  Fit statistics, and Error Bars  need to reflect this limit.\n\n  \\vmm\n  \\pause\n\n  Need to {\\Red{constrain}} Parameters $R$, $N$,\n  $\\sigma^2$ for different paths  and different data sets (different\n  edge elements, temperatures, etc)\n\n  \\vmm\n\n  \\begin{postitbox}{90mm}\n    Use as much outside information about the system as possible!\n  \\end{postitbox}\n\n   \\vmm \\pause\n   It's also possible to add {\\Red{restraints}} to describe external\n   knowledge of the system (crystallography, Bond Valence, etc).\n\n\n\\end{frame}\n\n\n\\subsection{Building Models}\n\\begin{frame}\n \\frametitle{XAFS Analysis: Building Models }\n\n  The basic difficulties in  EXAFS Analysis are\n\n  \\begin{enumerate}\n\n  \\item The scattering factors \\feffc{f(k)}, \\feffc{\\delta(k)} are\n    non-trivial (we use {\\feff}).\n\n \\item The basis functions (Paths) are not very well resolved, and their\n    number grows exponentially with $R$.\n\n  \\item There's not much information in a real measurement:\n    \\[ N_{\\rm idp} \\approx \\frac{2\\Delta k\\Delta R}{\\pi} \\]\n\n  \\end{enumerate}\n\n    \\hrule \\vmm \\pause\n\n    We address these with methods to:\n\n    \\begin{enumerate}\n    \\item {reduce} the number of Paths to consider (Fourier analysis).\n\n    \\item parameterize {\\emph{ab initio}} calculations of\n      \\feffc{f(k)}, \\feffc{\\delta(k)} (use {\\feff})\n\n    \\item reduce the number of  variables in the fit, while keeping a\n      meaningful analysis.\n\n    \\end{enumerate}\n\n    \\hrule \\vmm\n\n    We parameterize the EXAFS with a physical model and then\n    put {\\RedEmph{Constraints}} on the  %%% and {\\RedEmph{Restraints}} on the\n    parameters in a least-squares fit.\n\n    \\vmm\n\\end{frame}\n\n\\subsection{Constraints}\n\\begin{frame}[fragile]\n  \\frametitle{Constraints in {\\larch} /  {\\artemis} }\n\n  All Path Parameters written as expressions of  Variables refined in Fit.\n  \\vmm\n\n  \\setbeamertemplate{blocks}[rounded][shadow=true]\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{45mm}\n      \\begin{block}\n        {Parameter = Variable}\n{\\tiny{\\begin{alltt}\n\n{\\Red{guess e0    = 1.0 }}\npath(1,  e0 = e0)\npath(2,  e0 = e0)\n\\end{alltt}}}\n \\end{block}\n   \\vspace{1.75mm}\n   \\onslide+<2->\n   \\begin{block}\n{mixed coordination shell}\n\n{\\tiny{\\begin{alltt}\n set S02   = 0.80\n\n {\\Red{guess x   = 0.5}}\n\n path(1,  Amp= S02 * x )\n path(2,  Amp= S02 * (1-x))\n\n\\end{alltt}}}\n   \\end{block}\n   \\end{minipage}\n    & \\hspace{0.2mm}\n    \\begin{minipage}{65mm}\n      \\onslide+<3->   \\begin{block}\n { Fit Einstein Temperature }\n\n{\\tiny{\\begin{alltt}\n\n set   factor  = 24.254337   {\\Blue{#= (hbar*c)^2/(2 k_boltz)}}\n{\\Blue{# mass and reduced mass in amu}}\n set   mass1 = 63.54,  mass2 = 63.54\n set   r_mass =  1/ (1/mass1 +  1/mass2)\n\n{\\Blue{# the Einstein Temp will be adjusted in the 
fit!}}\n {\\Red{guess thetaE = 200}}\n{\\Blue{# use for data set 1, T=77}}\n set   temp1 = 77\n {\\Red{def ss2_path1 = factor*coth(thetaE/(2*temp1))/r_mass}}\n path(101,  sigma2 = ss2_path1   )\n\n{\\Blue{# use for data set 2, T=300}}\n set   temp2 = 300\n {\\Red{def ss2_path2 = factor*coth(thetaE/(2*temp2))/r_mass}}\n path(201,  sigma2 = ss2_path2   )\n\\end{alltt}}}\n        \\end{block}\n    \\end{minipage}\\\\\n  \\end{tabular}\n\n  \\vmm \\vmm\n\n  \\onslide+<4-> Other Examples:\n\n\\begin{itemize}\n\\item force one $R$ for the same bond for data taken from different  edges.\n\\item model complex distortions (height of a sorbed atom above a surface).\n\\end{itemize}\n\n\\end{frame}\n\n\n\\subsection{Fit statistics}\n\\begin{frame}\\frametitle{Fitting with {\\ifeffit} / {\\artemis} }\n\n  {\\ifeffit} optimizes the Fitting Parameters with a least-squares fit to the\n  Data\n\n  \\begin{postitbox}{90mm}\n    Find the variables that make the Model best match the Data\n  \\end{postitbox}\n\n\n  \\pause \\vmm\n\n\n  $\\chi^2$ (don't confuse with EXAFS $\\chi$!!) describes the fit:\n\n  \\[\n  \\chi^2  =   \\sum_i^{N_{\\rm fit}} \\frac{[\\chi_i^{\\rm data} - \\chi_i^{\\rm\n      model}({x})]^2}{\\epsilon^2}\n  \\]\n\n  ${N_{\\rm fit}} = $ number of data points, ${x} = $ set of variables,\n  $\\epsilon =$ noise level in the data. \\pause\n\n  We should consider only $ N_{\\rm idp} $ data points:\n\n  \\begin{postitbox}{88mm}\n    \\[   \\chi^2  =  \\frac{ N_{\\rm idp}}{\\epsilon^2 N_{\\rm fit}}\n    \\sum_i^{N_{\\rm fit}} [\\chi_i^{\\rm data} - \\chi_i^{\\rm model}({x})]^2 \\]\n  \\end{postitbox}\n\n  \\pause   Fitting is typically done in $R$-space to ignore higher shells.\n\n\\end{frame}\n\n\\subsection{More Fit Statistics}\n\\begin{frame}\n\\frametitle{Goodness of Fit and Error Bars}\n\n  Goodness-of-Fit statistics:\n  \\begin{itemize}[<+->]\n  \\item{\\Red{chi-square}}:     $  \\chi^2  =  \\frac{ N_{\\rm idp}}{\\epsilon^2 N_{\\rm fit}}\n    \\sum_i^{N_{\\rm fit}} [\\chi_i^{\\rm data} - \\chi_i^{\\rm model}({x})]^2   $\n\n  \\item{\\Red{reduced chi-square}}:  scale $\\chi^2$ by the  ``degrees of freedom''\n\n    $ \\chi^2_\\nu =  \\chi^2 / (N_{\\rm idp}-N_{\\rm varys}) $\n\n    A Good Fit should have $\\chi^2_\\nu \\sim 1$. 
This {\\RedEmph{never}}\n    happens!\n\n    $ \\chi^2_\\nu \\sim 10 $ or higher, typically.\n\n\n  \\item{\\Red{R-factor}}:  Fractional misfit.\n    \\[\n    {\\cal{R}} =\n    {\\sum_i^{N_{\\rm fit}}[\\chi_i^{\\rm data} - \\chi_i^{\\rm model}({x})]^2 }\n    /\n    { \\displaystyle{\\sum_i^{N_{\\rm fit}} [{\\chi_i^{\\rm data}}]^2}}\n    \\]\n\n  \\end{itemize}\n\n\\hrule \\vmm \\onslide+<4->\n\n  Error bars for the Fitting Parameters are found by increasing\n  $\\chi^2$ by $\\chi^2_\\nu$.\n\n  \\vmm\n  Correlations between parameters are also calculated.\n\n\\end{frame}\n", "meta": {"hexsha": "8bb306b69a2b9f216a1b0eb06741c46b2b2adf73", "size": 5919, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/modeling_info.tex", "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/modeling_info.tex", "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/modeling_info.tex", "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.6625, "max_line_length": 88, "alphanum_fraction": 0.6271329616, "num_tokens": 2035, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7745833737577158, "lm_q1q2_score": 0.5686367798877621}}
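The two headline formulas are easy to check numerically; a minimal Python sketch (the ranges are the example values quoted earlier, and the residual array is a placeholder):

\begin{verbatim}
import numpy as np

dk, dR = 12.0 - 2.0, 3.0 - 1.0       # k- and R-ranges from the example
n_idp = 2 * dk * dR / np.pi          # ~12.7 independent parameters

def chi2_stats(resid, eps, n_idp, n_varys):
    n_fit = len(resid)
    chi2 = (n_idp / (eps**2 * n_fit)) * np.sum(resid**2)
    return chi2, chi2 / (n_idp - n_varys)   # chi-square, reduced chi-square
\end{verbatim}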
{"text": "\\documentclass[12pt, letterpaper]{report}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{float}\n\\usepackage{subfig}\n\\graphicspath{ {./img/} }\n\\setlength\\parindent{0pt}\n\\renewcommand\\thesection{\\Roman{section}.}\n\\renewcommand{\\thesubsection}{\\alph{subsection}.}\n\n\n\\title{CS1675 - Assignment 5}\n\\author{Zachary M. Mattis}\n\n\n\\begin{document}\n\t\n\\maketitle\n\n\\section{Problem 1 - Logistic Regression Model}\n\n% A\n\\subsection{Normalization}\n\n\\begin{verbatim}\n    data_normalize.m\n\\end{verbatim}\n\n% B\n\\subsection{Batch-Mode Gradient}\n\n\\begin{verbatim}\n    Log_regression.m\n\\end{verbatim}\n\n\n% C\n\\subsection{Training Gradient}\n\n\\begin{verbatim}\n    main1.m\n\\end{verbatim}\n\n\n% D\n\\subsection{Misclassification, Confusion, Sensitivity, Specificity}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t\\textbf{Dataset} & \\textbf{Misclassification Error} \\\\\n\t\t\\hline\n\t\tTraining & 0.2988 \\\\\n\t\t\\hline\n\t\tTesting & 0.2722 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Misclassification Error}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |r|l|l| }\n\t\t\\hline\n\t\t\\textbf{Predict / Target} & \\textbf{1} & \\textbf{1} \\\\\n\t\t\\hline\n\t\t\\textbf{1} & 118 & 42 \\\\\n\t\t\\hline\n\t\t\\textbf{0} & 82 & 297 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Training Confusion Matrix}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |r|l|l| }\n\t\t\\hline\n\t\t\\textbf{Predict / Target} & \\textbf{1} & \\textbf{1} \\\\\n\t\t\\hline\n\t\t\\textbf{1} & 46 & 27 \\\\\n\t\t\\hline\n\t\t\\textbf{0} & 22 & 134 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Test Confusion Matrix}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\tSensitivity & 0.6765 \\\\\n\t\t\\hline\n\t\tSpecificity & 0.8323 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Test Sensitivity / Specificity}\n\\end{table}\n\n\\subsection{Efficiency}\n\nFor a $\\frac{2}{\\sqrt{k}}$ schedule, the gradient converged after 30,000 epochs. For a $\\frac{2}{k}$ schedule, the gradient converged after 200 epochs. 
Differing starting weights of $\\pm100$ also converged in the same amount of time, resulting in a test misclassification error of 0.2722 and training error of 0.2988.\n\n\\section{Problem 2.1 - Naive Bayes Model}\n\n% A\n\\subsection{Data Analysis}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{p2a0.png}\n\t\\caption{Class 0}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{p2a1.png}\n\t\\caption{Class 1}\n\\end{figure}\n\n% B\n\\subsection{Data Distribution}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t\\textbf{Attribute} & \\textbf{Distribution} \\\\\n\t\t\\hline\n\t\t1 & Exponential \\\\\n\t\t\\hline\n\t\t2 & Normal \\\\\n\t\t\\hline\n\t\t3 & Normal \\\\\n\t\t\\hline\n\t\t4 & Normal \\\\\n\t\t\\hline\n\t\t5 & Exponential\\\\\n\t\t\\hline\n\t\t6 & Normal \\\\\n\t\t\\hline\n\t\t7 & Exponential \\\\\n\t\t\\hline\n\t\t8 & Exponential \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Attribute Distribution}\n\\end{table}\n\n\\section{Problem 2.2 - Naive Bayes Learning}\n\n% A\n\\subsection{Naive Bayes Training Data}\n\n\\begin{verbatim}\n    main2_2.m\n\\end{verbatim}\n\n% B\n\\subsection{Naive Bayes Parameters}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l|l|l|l|l|l| }\n\t\t\\hline\n\t\t\\textbf{Attribute} & \\textbf{$\\mu$} & \\textbf{$\\sigma$} & \\textbf{CI $\\mu$ lower} & \\textbf{CI $\\mu$ upper} & \\textbf{CI $\\sigma$ lower} & \\textbf{CI $\\sigma$ upper} \\\\\n\t\t\\hline\n\t\t2 & 109.98 & 26.1412 & 107.6831 & 112.2769 & 24.6152 & 27.8705 \\\\\n\t\t\\hline\n\t\t3 & 68.184 & 18.0621 & 66.5969 & 69.7711 & 17.0086 & 19.258 \\\\\n\t\t\\hline\n\t\t4 & 19.664 & 14.8899 & 18.3557 & 20.9723 & 14.020 & 15.8749 \\\\\n\t\t\\hline\n\t\t6 & 30.3042 & 7.6899 & 29.6285 & 30.9799 & 7.2409 & 8.1986 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Class 0 Normal}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l|l|l|l|l|l| }\n\t\t\\hline\n\t\t\\textbf{Attribute} & \\textbf{$\\mu$} & \\textbf{$\\sigma$} & \\textbf{CI $\\mu$ lower} & \\textbf{CI $\\mu$ upper} & \\textbf{CI $\\sigma$ lower} & \\textbf{CI $\\sigma$ upper} \\\\\n\t\t\\hline\n\t\t2 & 141.2575 & 31.9396 & 137.4161 & 145.0988 & 29.4451 & 34.8995 \\\\\n\t\t\\hline\n\t\t3 & 70.8246 & 21.4918 & 68.2398 & 73.4094 & 19.8133 & 23.4835 \\\\\n\t\t\\hline\n\t\t4 & 22.1642 & 17.6797 & 20.0379 & 24.2905 & 16.2989 & 19.3181 \\\\\n\t\t\\hline\n\t\t6 & 35.1425 & 7.2630 & 34.269 & 36.0160 & 6.6957 & 7.9360 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Class 1 Normal}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l|l|l|l }\n\t\t\\hline\n\t\t\\textbf{Attribute} & \\textbf{$\\mu$} & \\textbf{CI $\\mu$ lower} & \\textbf{CI $\\mu$ upper} \\\\\n\t\t\\hline\n\t\t1 & 3.2980 & 3.0270 & 3.6073 \\\\\n\t\t\\hline\n\t\t5 & 68.792 & 63.139 & 75.2436 \\\\\n\t\t\\hline\n\t\t7 & 0.4297 & 0.3944 & 0.4700 \\\\\n\t\t\\hline\n\t\t8 & 31.190 & 28.627 & 34.1151 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Class 0 Exponential}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l|l|l|l }\n\t\t\\hline\n\t\t\\textbf{Attribute} & \\textbf{$\\mu$} & \\textbf{CI $\\mu$ lower} & \\textbf{CI $\\mu$ upper} \\\\\n\t\t\\hline\n\t\t1 & 4.8657 & 4.3319 & 5.5051 \\\\\n\t\t\\hline\n\t\t5 & 100.3358 & 89.3289 & 113.5215 \\\\\n\t\t\\hline\n\t\t7 & 0.5505 & 0.4901 & 0.6228 \\\\\n\t\t\\hline\n\t\t8 & 37.0672 & 33.0009 & 41.9384 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Class 1 
Exponential}\n\\end{table}\n\n\\section{Problem 2.3 - Naive Bayes Classification}\n\n% A\n\\subsection{NB Prediction}\n\n\\begin{verbatim}\n    main2_3.m\n\\end{verbatim}\n\n% B\n\\subsection{Misclassification, Confusion, Sensitivity, Specificity}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t\\textbf{Dataset} & \\textbf{Misclassification Error} \\\\\n\t\t\\hline\n\t\tTraining & 0.5356 \\\\\n\t\t\\hline\n\t\tTesting & 0.6241 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Misclassification Error}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |r|l|l| }\n\t\t\\hline\n\t\t\\textbf{Predict / Target} & \\textbf{1} & \\textbf{0} \\\\\n\t\t\\hline\n\t\t\\textbf{1} & 168 & 156 \\\\\n\t\t\\hline\n\t\t\\textbf{0} & 32 & 183 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Training Confusion Matrix}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |r|l|l| }\n\t\t\\hline\n\t\t\\textbf{Predict / Target} & \\textbf{1} & \\textbf{0} \\\\\n\t\t\\hline\n\t\t\\textbf{1} & 59 & 79 \\\\\n\t\t\\hline\n\t\t\\textbf{0} & 9 & 82 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Test Confusion Matrix}\n\\end{table}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\tSensitivity & 0.8676 \\\\\n\t\t\\hline\n\t\tSpecificity & 0.5093 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Test Sensitivity / Specificity}\n\\end{table}\n\n\n% C\n\\subsection{Logistic Regression vs. Naive Bayes}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l|l| }\n\t\t\\hline\n\t\t\\textbf{Model} & \\textbf{Training} & \\textbf{Testing} \\\\\n\t\t\\hline\n\t\tLogistic Regression & 0.2988 & 0.2722 \\\\\n\t\t\\hline\n\t\tNaive Bayes & 0.5356 & 0.6241 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Misclassification Error}\n\\end{table}\n\n\\section{Problem 3 - ROC Analysis}\n\n% A\n\\subsection{\\textit{perfcurve()}}\n\nThe MATLAB function \\texttt{perfcurve()} computes a receiver operating characteristic (ROC) curve, or another performance curve, for classifier output.\n\n% B\n\\subsection{ROC}\n\n\\begin{verbatim}\n    roc_analysis.m\n\\end{verbatim}\n\n\n% C\n\\subsection{AUC}\n\n\\begin{figure}[H]\n\t\\captionsetup[subfigure]{labelformat=empty}\n\t\\centering\n\t\\subfloat[Figure 3]{{\\includegraphics[width=18em]{p3lr.png} }}\n\t\\qquad\n\t\\subfloat[Figure 4]{{\\includegraphics[width=18em]{p3nb.png} }}\n\t\\label{fig:example}\n\\end{figure}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{ |l|l| }\n\t\t\\hline\n\t\t\\textbf{Model} & \\textbf{AUC} \\\\\n\t\t\\hline\n\t\tLogistic Regression & 0.8518 \\\\\n\t\t\\hline\n\t\tNaive Bayes & 0.4450 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{AUC}\n\\end{table}\n\nBased on the collected data for the ROC curves and the AUC statistics, logistic regression outperforms the Naive Bayes implementation, with a higher true positive rate and AUC.\n\n\n\\end{document}", "meta": {"hexsha": "58550631495c4d8a83141eb36a2d2f2f306b777b", "size": 7568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CS-1675/Homework5/homework5_analysis.tex", "max_stars_repo_name": "zmattis/University_of_Pittsburgh", "max_stars_repo_head_hexsha": "29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2017-07-21T17:56:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-05T09:25:12.000Z", "max_issues_repo_path": "CS-1675/Homework5/homework5_analysis.tex", "max_issues_repo_name": "zmattis/University_of_Pittsburgh", "max_issues_repo_head_hexsha": 
"29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CS-1675/Homework5/homework5_analysis.tex", "max_forks_repo_name": "zmattis/University_of_Pittsburgh", "max_forks_repo_head_hexsha": "29ba0f4686f34d633b474bb792cf0e6cee8b0f1c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-10-14T03:28:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-04T07:41:07.000Z", "avg_line_length": 20.6775956284, "max_line_length": 317, "alphanum_fraction": 0.6511627907, "num_tokens": 3163, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660688, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5686367746962375}}
{"text": "\\input{preamble}\n\n\\begin{document}\n    \\section{Subjects}\n    \\begin{itemize}\n        \\item VC Dimension\n        \\item Bias Variance\n        \\item Regularization\n        \\item Validation\n    \\end{itemize}\n    \\section{Notes}\n    \n    \\subsection{Preceding discussion}\n    \\textit{Chapter 2 in Learning from data.}\\\\\n    What we want to minimize, is the out-of-sample error:\n    \\begin{equation*}\n    E_{out}(g) = \\Ex_{x,y}\\left(e\\left(g(x), y\\right)\\right)\n    \\end{equation*}\n    We can't minimize this, however what we can minimize is the in-sample-error:\n    \\begin{equation*}\n    E_{in}(g)=\\frac{1}{\\|D\\|} \\sum_{(x,y) \\in D} e\\left(g(x), y\\right)\n    \\end{equation*}\n    So we are hoping, or aiming for, that the generalization error \n    $E_{in}-E_{out}$ should be small, so when we minimize $E_{in}$ it benefits \n    $E_{out}$.\\\\\n    Recall that the Hoeffding inequality provides a way to bound the \n    generalization error:\n    \\begin{equation*}\n    \\Pr[|E_{in}(g) - E_{out}(g)| > \\epsilon] \\leq 2Me^{-2\\epsilon^2N}\n    \\end{equation*}\n    We can rephrase this, by introducing a tolerance level $\\delta$ and assert \n    with probability at least $1 - \\delta$ that: \n    \\begin{equation*}\n    E_{out}(g) \\leq E_{in}(g) + \\sqrt{\\frac{1}{2N}\\ln\\frac{2M}{\\delta}}\n    \\end{equation*}\n    We notice here, that the bound depends on $M$, which is the size of the \n    hypothesis set. Unfortunately, most problems has infinite hypotheses, and \n    thus the bound will become meaningless as it goes towards infinity. So we \n    want to replace $M$ by something that stays meaningful as $M$ goes to \n    infinity. \n    \n    To this end we introduce the growth function, which will formalize the \n    effective number of hypotheses. Furthermore, a \\textit{dichotomy} is an \n    $N$-tuple, generated by a hypothesis, which splits the data into two \n    groups.\n    \n    The set of dichotomies generated by the hypothesis set $\\Hy$ on the points \n    $x_1,\\dots,x_n$ is defined by:\n    \\begin{equation}\n    \\Hy(x_1,\\dots,x_N)= \\{(h(x_1),\\dots,h(x_N)) \\,|\\, h \\in \\Hy)\\}\n    \\end{equation}\n    One can think of the dichotomies as being the hypothesis set as seen \n    through the eyes of just $N$ points. We can then define the growth function \n    as:\n    \\begin{equation}\n    m_\\Hy(N)=\\max\\limits_{x_1, \\dots, x_n \\in X} |\\Hy(x_1,\\dots, x_N)|\n    \\end{equation}\n    I.e. the maximum number of dichotomies that can be generated by $\\Hy$ on \n    any $N$ points. If we just look at binary classification, then the upper \n    limit on the amount of dichotomies for a data-set of size $N$ is: \n    \\begin{equation}\n    m_\\Hy(N) \\leq 2^N\n    \\end{equation}\n    If $\\Hy$ is capable of generating all possible dichotomies on the data-set, \n    then $m_\\Hy(n) = 2^N$ and we say that $\\Hy$ shatter $x_1,\\dots,x_n$. If \n    there is no such data-set of size $k$ that can be shattered by $\\Hy$ then \n    we say that $k$ is a breakpoint for $\\Hy$.\n    \n    If there is such a breakpoint $k$, then we know that $m_\\Hy(k) < 2^N$. We \n    define $B(N,k)$ as the maximum number of dichotomies on $N$ points, such \n    that no subset of size $k$ can be shattered by these dichotomies. 
We can \n    then see that:\n    \n    \\begin{equation*}\n    m_\\Hy(N) \\leq B(N,k)\\, \\text{ if $k$ is a breakpoint for }\\Hy\n    \\end{equation*}\n    \n    Sauer's lemma then states that:\n    \\begin{equation*}\n    B(N,k) \\leq \\sum_{i=0}^{k-1}\\begin{pmatrix}\n    N\\\\\n    i\n    \\end{pmatrix}\n    \\end{equation*}\n    \n    Which means that:\n    \\begin{equation*}\n        m_\\Hy(N) \\leq \\sum_{i=0}^{k-1} \\begin{pmatrix}\n        N\\\\i\n        \\end{pmatrix}\n    \\end{equation*}\n    \n    We then see that if $\\Hy$ has a breakpoint, then $m_\\Hy(N)$ has a \n    polynomial bound. This is important because when $m_\\Hy(N)$ has a \n    polynomial bound, the generalization error will go to zero as $N \n    \\rightarrow \\infty$.\n    \n    \\subsection{VC Dimension}\n    The Vapnik-Chervonenkis dimension of a hypothesis set $\\Hy$, denoted by \n    $d_{vc}(\\Hy)$ or simply $d_{vc}$, is the largest value of $N$ for which \n    $m_\\Hy(N)=2^N$. If $m_\\Hy(N)=2^N$ for all $N$, then $d_{vc}(\\Hy)=\\infty$.\n    \n    It follows then, that if $d_{vc}$ is the VC dimension for $\\Hy$, then $k = \n    d_{vc} + 1$ is a breakpoint and there are no smaller breakpoints. We can \n    therefore rewrite the previous sum in terms of the VC dimension:\n    \n    \\begin{equation*}\n        m_\\Hy(N) \\leq \\sum_{i=0}^{d_{vc}} \\begin{pmatrix}\n        N\\\\i\n        \\end{pmatrix}\n    \\end{equation*}\n    \n    We then arrive at the VC Generalization Bound:\n    \\begin{equation}\n        E_{out}(g) \\leq E_{in}(g) + \\sqrt{\\frac{8}{N} \\ln \n        \\frac{4m_\\Hy(2N)}{\\delta}}\n    \\end{equation}\n    with probability $\\geq 1-\\delta$.\n    \n    VC-Dimension captures the expressiveness/capacity of hypothesis spaces and \n    relates them to generalization. 
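To get a feel for how strong the polynomial bound is, a small illustrative computation (the values of $N$, $d_{vc}$ and $\delta$ are arbitrary):

\begin{verbatim}
from math import comb, log, sqrt

def growth_bound(N, d_vc):
    # Sauer: m_H(N) <= sum_{i=0}^{d_vc} C(N, i)
    return sum(comb(N, i) for i in range(d_vc + 1))

def vc_penalty(N, d_vc, delta):
    # sqrt((8/N) * ln(4 * m_H(2N) / delta))
    return sqrt(8 / N * log(4 * growth_bound(2 * N, d_vc) / delta))

print(growth_bound(100, 3))      # 166751, versus 2**100 possible dichotomies
print(vc_penalty(1000, 3, 0.05)) # the model-complexity penalty
\end{verbatim}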
This leads to a bound of the form out-of-sample error $\\leq$ \n    in-sample error $+$ model complexity:\n    \\begin{equation*}\n    E_{out}(h) \\leq E_{in}(h) + \\Omega(N,\\Hy, \\delta)\n    \\end{equation*}\n    \\begin{equation*}\n    \\Omega(N, \\Hy, \\delta) = \\sqrt{\\frac{8}{N}\\ln\\frac{4m_\\Hy(2N)}{\\delta}}\n    \\end{equation*}\n    \\todo{Sampling\tComplexity}\n    \n    \\subsection{Bias-Variance decomposition}\n    \\textit{Page 62}\\\\\n    The out-of-sample error for hypothesis $g^{(D)}$ learned on data $D$ is:\n    \\begin{equation*}\n        E_{out}(g^{(D)}) = \\Ex_x\\left[(g^{(D)}(x)-f(x))^2\\right]\n    \\end{equation*}\n    where $\\Ex_x$ denotes the expected value with respect to $x$ (based on the \n    probability distribution on the input space $X$).\n\n    We can generalize this to remove the dependence on a specific data-set, by \n    taking the expectation with respect to all data-sets:\n    \\begin{align*}\n        \\Ex_D\\left[E_{out}(g^{(D)})\\right] &=\n        \\Ex_D\\left[\\Ex_x\\left[(g^{(D)}(x)-f(x))^2\\right]\\right]\\\\\n            &= \\Ex_x\\left[\\Ex_D\\left[(g^{(D)}(x)-f(x))^2\\right]\\right]\\\\\n            &= \\Ex_x\\left[\\Ex_D\\left[g^{(D)}(x)^2\\right] - \n            2\\Ex_D\\left[g^{(D)}(x)\\right]f(x)+f(x)^2\\right]\n    \\end{align*}\n    \n    The term $\\Ex_D\\left[g^{(D)}(x)\\right]$ gives an 'average function', which \n    we denote by $\\bar{g}(x)$. It can be seen as the average of the functions \n    learned from many data-sets $D_1,\\dots,D_K$, where:\n    \\begin{equation}\n        \\bar{g}(x)\\approx \\frac{1}{K} \\sum_{k=1}^{K}g_k(x)\n    \\end{equation}\n    \n    We can now rewrite the expected out-of-sample error in terms of $\\bar{g}$:\n    \\begin{align*}\n        &\\Ex_D\\left[E_{out}(g^{(D)})\\right]\\\\\n        &=\\Ex_x\\left[\\Ex_D\\left[g^{(D)}(x)^2\\right] - 2\\bar{g}(x)f(x)+ \n        f(x)^2\\right]\\\\\n        &=\\Ex_x\\left[\\Ex_D\\left[g^{(D)}(x)^2\\right]-\\bar{g}(x)^2 + \\bar{g}(x)^2 \n        - 2\\bar{g}(x)f(x)+f(x)^2\\right]\\\\\n        &=\\Ex_x\\left[\\Ex_D\\left[(g^{(D)}(x) - \\bar{g}(x))^2\\right] + \n        (\\bar{g}(x)-f(x))^2\\right]\n    \\end{align*}\n    \n    The term to the right, $(\\bar{g}(x)-f(x))^2$, measures how much the average \n    function we would learn using the different data-sets deviates from the \n    target function; we call this term the bias:\n    \\begin{equation*}\n        \\text{bias}(x)=(\\bar{g}(x)-f(x))^2\n    \\end{equation*}\n    The other term, $\\Ex_D\\left[(g^{(D)}(x)-\\bar{g}(x))^2\\right]$, is what we \n    call the variance, which measures the variation in the final hypothesis:\n    \\begin{equation}\n        \\text{var}(x)=\\Ex_D\\left[(g^{(D)}(x)-\\bar{g}(x))^2\\right]\n    \\end{equation}\n    We thus arrive at the bias-variance decomposition of out-of-sample error:\n    \\begin{align*}\n        \\Ex_D\\left[E_{out}(g^{(D)})\\right] &= \\Ex_x\\left[\\text{bias}(x) + \n        \\text{var}(x)\\right]\\\\\n            &= \\text{bias} + \\text{var}\\\\\n        \\text{bias} &= \\Ex_x\\left[\\text{bias}(x)\\right]\\\\\n        \\text{var} &= \\Ex_x\\left[\\text{var}(x)\\right]\n    \\end{align*}\n    \\textit{Bias:} How well can we actually fit - on average\\\\\n    \\textit{Variance:} How much will data samples lead me astray - on average\n    \n    We cannot compute actual bias and variance in practice, since they depend \n    on the target function and input probability distribution. So it is a 
So it is a \n    conceptual tool, which is helpful when it comes to developing a model.\n    \n    There are two typical goals: we want to reduce the variance without \n    significantly increasing bias et vice versa. These goals are achieved \n    through heuristics, regularization being one them.\n    \n    \\textbf{Intuition}\n    The bias term is big if the model has too little capacity to fit the data.\n    \n    The variance term is big if the model has so much capacity, that it is good \n    at fitting noise (sampling errors) in any training set.\n    \n    \\subsection{Regularization}\n    In learning, we want to achieve two things:\n    \\begin{enumerate}\n        \\item Ensure that out-of-sample error is close to in-sample-error \\\\\n        $\\Pr\\left[|E_{in} - E_{out}| > \\epsilon\\right] \\leq \n        2Me^{-2\\epsilon^2N}$ or:\\\\\n        $E_{out}(h)\\leq E_{in}(h)+\\Omega(N,\\Hy,\\delta)$ \n        \\item Minimize in-sample-error $E_{in}$\n    \\end{enumerate}\n    So far we have looked at how to prevent underfitting, i.e. how do we fit \n    our data as well as possible. Regularization is about preventing \n    over-fitting, i.e. if we fit our data really well, then our in-sample-error \n    might be really low $E_{in}=0$, but it's no longer close to the \n    out-of-sample-error $E_{out}>>E_{in}$\n    \n    This happens because we fit the noise rather than the signal, if we look at \n    the target complexity, e.g. for some polynomial, then if we try to fit it \n    with a hypothesis of $h_{10}$, then as the target complexity increases \n    there will be more and more data that our hypothesis cannot capture, so \n    even though it is not stochastic noise, it wil look like noise to our \n    hypothesis. This is reflected in the fact that:\n    \\begin{itemize}\n        \\item Data increases $\\rightarrow$ Overfitting Decreases\n        \\item Noise increases $\\rightarrow$ Overfitting Increases\n        \\item Target Complexity Increases $\\rightarrow$ Overfitting Increases\n    \\end{itemize}\n    So increasing the target complexity seems to introduce noise similar to \n    increasing the noise. In order to avoid this, we want to fit it in a \n    ``simpler'' way, so we avoid the precise fitting we get from our current \n    techniques.\n    \n    In linear regression, this is what we currently minimize:\n    \\begin{equation*}\n        E_{in} = \\frac{1}{|D|} \\sum_{x,y \\in D}(w^Tx-y)^2\n    \\end{equation*}\n    If we instead, for different values of $\\alpha$, minimize:\n    \\begin{equation*}\n    E_{in}+\\Omega(h) = \\frac{1}{|D|} \\sum_{x,y \\in D}(w^Tx-y)^2 + \n    \\alpha\\|w\\|^2_2\n    \\end{equation*}\n    \n    Which gives us the constrained optimization:\n    \\begin{align*}\n        \\text{Minimize: }&\\frac{1}{|D|}\\sum_{x,y \\in D} (w^Tx - y)^2\\\\\n        \\text{Subject to: }&\\lambda\\|w\\|^2_2\\leq C\n    \\end{align*}\n    Then we get the constrained hypothesis set $\\Hy_{\\text{Constrained}}$ where:\n    \\begin{equation*}\n        \\Hy_\\text{Constrained} \\subseteq \\Hy \\implies \n        d_{vc}(\\Hy_\\text{Constrained}) \\leq d_{vc}(\\Hy)\n    \\end{equation*}\n    So we get a simpler hypothesis set (even though $d_{vc}$ might be the \n    same), which also means the variance should go down (as the regularization \n    favors $h$'s that look alike, and rules out the ones that are specialized \n    to a single data-set) although the bias should go up.\\todo{why?}\n    \n    Then for e.g. 
linear regression we get that:\n    \\begin{equation*}\n        \\nabla_w=\\frac{1}{n}(2X^TXw-2X^Ty) + \\frac{2\\lambda w}{n}\n    \\end{equation*}\n    Setting this to zero gives us that:\n    \\begin{equation*}\n        w = (X^TX+\\lambda I)^{-1}X^Ty\n    \\end{equation*}\n    \n    For gradient descent, we get that when we minimize $E_{in} + \\lambda \n    \\|w\\|^2_2$, the gradient descent step that previously looked like:\n    \\begin{equation*}\n        w_{t+1} = w_t - \\alpha \\nabla_w f(w_t)\n    \\end{equation*}\n    will now become:\n    \\begin{align*}\n        w_{t+1} &= w_t - \\alpha \\nabla_w E_{in}(w_t) - 2 \\alpha \\lambda w_t\\\\\n            &= w_t(1-2 \\alpha \\lambda) - \\alpha \\nabla_w E_{in}(w_t)\n    \\end{align*}\n    Thus, every round the $(1-2\\alpha \\lambda)$ factor will make $w$ decay \n    towards the zero vector.\n    \n    \\subsection{Validation}\n    Regularization introduces the need for validation in order to be able to \n    train on the data again and again with different values of $\\lambda$. The \n    need for validation also becomes apparent, even without regularization, in more \n    advanced models (like Neural Networks), but regularization is the first \n    technique we encounter.\n    \n    Validation simply means to take out $K$ points from the data-set and use \n    them to validate the result of the training on the remaining $N-K$ points.\n    \n    Validation has the issue that as you increase $K$, the $E_{val}$ estimate \n    tightens but it also increases, as you have less data to train on, so \n    $E_{out}$ increases with smaller $N-K$. Analytically:\n    \\begin{equation*}\n        E_{val} = E_{out} \\pm \\bigO{\\frac{\\sigma}{\\sqrt{K}}}\n    \\end{equation*}\n    As a rule of thumb: $K=\\frac{N}{5}$ (20\\% of data). If we validate and try \n    out different models with different $E_{val}(m_i)$, then we can pick the \n    best and retrain on all the data, hoping that the model we chose is still \n    the best.\n    \n    Usually, if we have few hyperparameters (like $\\lambda$), then we can just \n    perform grid search (exhaustively search through a range of parameters). If \n    we have many hyperparameters, we should instead do ``random sampling'' and \n    repeatedly search on promising areas.\n    \n    We do, however, risk that we now overfit to the validation set. So \n    what we really want is a training set, a validation set \\textit{and a test \n    set}! We will usually split the data-set by $50/25/25\\%$ or $60/20/20\\%$ \n    depending on the application and the data available.\n    \n    For small $K$ we get that $E_{out}(h_{all}) \\approx E_{out}(h_{train})$ and \n    for large $K$ we get $E_{out}(h_{train}) \\approx E_{val}(h_{train})$, but \n    we would like to have both of these.\n    \n    It turns out that we can get both by setting $K=1$ and computing:\n    \\begin{align*}\n        D_n &= D - \\{(x_n, y_n)\\}\\\\\n        e_n &= E_{val}(h_{D_n}) = e(h_{D_n}, (x_n, y_n))\\\\\n        E_{cv} &= \\frac{1}{N} \\sum_{n=1}^{N} e_n\n    \\end{align*}\n    This estimate is hard to analyze, since the errors $e_n$ are correlated, but \n    it has been used successfully in practice. However, doing this \n    cross-validation is slow, as we have to train $N$ times on $N-1$ points.
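A minimal sketch of this leave-one-out estimate for the regularized linear \n    regression above (illustrative only; it assumes data matrices $X$ and $y$ \n    are already loaded, and reuses the closed form derived earlier):\n    \\begin{verbatim}\nimport numpy as np\n\ndef ridge_fit(X, y, lam):\n    # w = (X^T X + lambda I)^{-1} X^T y\n    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)\n\ndef loocv_error(X, y, lam):\n    N = len(y)\n    errs = []\n    for n in range(N):                       # train N times on N-1 points\n        mask = np.arange(N) != n\n        w = ridge_fit(X[mask], y[mask], lam)\n        errs.append((X[n] @ w - y[n]) ** 2)  # e_n\n    return np.mean(errs)                     # E_cv\n    \\end{verbatim}\n    \n    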
To reduce this cost, we therefore use $K$-Fold Cross \n    Validation:\n    \\begin{itemize}\n        \\item Split the data into $K$ parts of size $\\frac{N}{K}$\n        \\item Run $K$ rounds of validation, each training on a data-set of size \n        $N-\\frac{N}{K}$, and take the mean error.\n    \\end{itemize}\n    Usually, we will run for $K=10$ or $K=5$, as we are impatient beings with \n    deadlines.\n\\end{document}", "meta": {"hexsha": "35b003b0b148ad27e8ef0344fab82db378fc01f1", "size": 14918, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ML/Exam/LearningTheory.tex", "max_stars_repo_name": "lukaspj/Uni-Notes", "max_stars_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-06-13T15:41:03.000Z", "max_stars_repo_stars_event_max_datetime": "2017-06-13T15:41:03.000Z", "max_issues_repo_path": "ML/Exam/LearningTheory.tex", "max_issues_repo_name": "lukaspj/Uni-Notes", "max_issues_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML/Exam/LearningTheory.tex", "max_forks_repo_name": "lukaspj/Uni-Notes", "max_forks_repo_head_hexsha": "cdaf3c70040fe2cd3f8edb4aa1914c1d2e021cd8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6646706587, "max_line_length": 80, "alphanum_fraction": 0.6311838048, "num_tokens": 4647, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660688, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5686367746962375}}
{"text": "\\vsssub\n\\subsubsection{~$S_{ice}$: Empirical/parametric damping by sea ice} \\label{sec:ICE4}\n\\vsssub\n\n\\opthead{IC4}{\\ws/NRL}{C. Collins and E. Rogers}\n\n\\noindent\nThe fourth option ({\\code IC4}) for damping of waves by sea ice was introduced by \\cite{rep:CR17}. It gives methods to implement one of several simple, empirical/parametric forms for the dissipation of wave energy by sea ice.  The motivation for {\\code IC4} is to provide a simple, flexible, and efficient source term which reproduces, albeit in a highly parameterized way, some basic physics of wave-ice interaction. The method is set by the integer value (presently 1 to 7) for {\\code IC4METHOD} namelist parameter: 1) an exponential fit to the field data of \\cite{art:WAD88}, 2) the polynomial fit in \\cite{art:MBK14}, 3) a quadratic fit to the calculations of \\cite{art:KM08} given in \\cite{art:HT15}, 4) Eq. 1 of \\cite{art:Ko14}, 5) a simple step function with up to 4 steps (may be nonstationary and non-uniform), and 6) a simple step function with up to 10 steps (must be stationary and uniform), and 7) a formula from \\cite{art:Dob15} which uses ice thickness. All but the fourth method of {\\code IC4} feature frequency-dependent attenuation. With the fourth method, attenuation varies with waveheight but is uniform in frequency space. \n\nIn the following discussion we use {\\code IC4M1} to denote {\\code IC4} method 1, and so forth. {\\code IC4} appears in the {\\file switch} and namelist {\\code IC4METHOD=1} (for example) appears in the file {\\file ww3\\_grid.inp}. Whereas in {\\code IC1}, ${C_{ice,1}}$ is the user-determined attenuation, for {\\code IC4M1}, {\\code IC4M2}, and {\\code IC4M4} ${C_{ice,n}}$ are constants of the equations. For {\\code IC4M3}, ${C_{ice,1}}$ is ice thickness. For {\\code IC4M5}, ${C_{ice,n}}$ controls the step function. Note that ${C_{ice,n}}$ may be provided by the user as non-stationary and non-uniform using methods analogous to methods used to input water levels.\n\n{\\code IC4M1}: an exponential equation was chosen to fit the data contained in table 2 of \\cite{art:WAD88} which results in preferential attenuation of high frequency waves. This parameterizes the well-known low-pass filtering effect of ice. The equation has the following form:\n\\begin{equation}\\label{eq:ice1}\n  {\\alpha} = \\exp\\left[\\frac{-{2\\pi}C_{ice, 1}}{{\\sigma}} - C_{ice, 2}\\right]\n\\end{equation}\n\n\\noindent Here, $\\alpha$ is the exponential decay rate for energy, which is twice that for amplitude: $\\alpha = 2k_i$. The values determined from the data are ${C_{ice,1...2}}=[0.18, 7.3]$, but these may be modified by the user. This method is described and applied in \\cite{rep:CR17}.\n\n{\\code IC4M2}: In this method, the dissipation is represented using a user-specified polynomial. It is a powerful method, since many shapes can be represented, e.g. by fitting to observation-based dissipation rates. The method is described and applied in \\cite{rep:CR17}. The equation is the following:\n\\begin{equation}\\label{eq:ice2}\n  {\\alpha} = C_{ice,1} + C_{ice,2}\\left[{\\frac{\\sigma}{2\\pi}}\\right] + C_{ice,3}\\left[{\\frac{\\sigma}{2\\pi}}\\right]^2 + C_{ice,4}\\left[{\\frac{\\sigma}{2\\pi}}\\right]^3 + C_{ice,5}\\left[{\\frac{\\sigma}{2\\pi}}\\right]^4\n\\end{equation}\n\n\\noindent\nIf a user wishes to follow \\cite{art:MBK14}, the suggested values for the coefficients are ${C_{ice,1...5}}=[0, 0, 2.12\\times 10^{-3}, 0, 4.59\\times 10^{-2}]$. 
Additional suggested polynomials can be found in \\cite{rep:RMK18}.\n\nWith appropriate coefficients, this polynomial method can be used to reproduce the so-called \u201croll-over effect\u201d where the attenuation is non-monotonic in frequency space. However, some recent studies do not indicate this effect, e.g. \\cite{art:RTS16} and \\cite{art:LK17}, and it may just be a spurious artifact in prior observational studies.\n\n{\\code IC4M3}: \\cite{art:HT15} fit a quadratic equation to the attenuation coefficient calculated by \\cite{art:KM08} as a function of wave period, $T$, and ice thickness, $h$. Attenuation increases for thicker ice and higher frequencies (lower periods). The number of coefficients of the quadratic equation was prohibitively large for them to be user-determined, so the equation is hard-wired in, and the tunable parameter, ${C_{ice,1}}$, is ice thickness $h$. This method is described and applied in \\cite{rep:CR17}. For reference, the equation is the following:\n\\begin{equation}\\label{eq:ice3}\n  {\\ln{\\alpha(T,h)}} = -0.3203 + 2.058h - 0.9375T - 0.4269h^2 + 0.1566hT + 0.0006T^2\n\\end{equation}\n\n\\noindent\nThere are two warnings to make about {\\code IC4M3}. First, the equation itself extrapolates beyond the original range of $h$ used to calculate the attenuation coefficients in \\cite{art:KM08}, which was between 0.5 and 3 m, see \\cite{art:HT15}. Second, in \\cite{art:KM08}, the predicted wave attenuation is based on scattering (a conservative process), whereas in {\\code IC4M3}, the wave attenuation is treated as dissipation (non-conservative). This is ad hoc and not recommended for general use. Most especially, users should think twice before using {\\code IC4M3} in combination with scattering routines {\\code IS1} or {\\code IS2}, since this essentially double-counts scattering.\n\n{\\code IC4M4}: \\cite{art:Ko14} found that attenuation was a function of significant wave height. Attenuation increases linearly with ${H_s}$ until ${H_s} = 3$ m, at which point attenuation is capped, thus:\n\\begin{equation}\n\\left \\{\n\\begin{array}{llrcl}\n{\\frac{\\partial H_s}{\\partial x}} = {C_{ice,1}}\\times {H_s}   & & \\text{for} \\> {H_s} \\leq 3 \\text{ m}  \\\\\n{\\frac{\\partial H_s}{\\partial x}} = {C_{ice,2}}               & & \\text{for} \\> {H_s} > 3 \\text{ m}     \\\\\n\\end{array} \\right .\n\\end{equation}\nwhere {$k_i=\\frac{\\partial H_s}{\\partial x}/H_s$}.\n\nThe values given in \\cite{art:Ko14} are ${C_{ice,1...2}}=[5.35\\times 10^{-6}, 16.05\\times 10^{-6}]$. See regression test {\\file ww3\\_tic1.1/input\\_IC4/M4} for examples. This method is described and applied in \\cite{rep:CR17}.\n\n{\\code IC4M5}: This is a simple step function with up to 4 steps. It is controlled by the optionally nonstationary and non-uniform parameters ${C_{ice,1...7}}$. Parameters ${C_{ice,1...4}}$ control the step levels, which are in terms of dissipation rate, ${k_i}$. Parameters ${C_{ice,5...7}}$ control the step boundaries (given in Hz). See regression test {\\file ww3\\_tic1.1/input\\_IC4/M5} for examples. This method is described in \\cite{rep:CR17}.\n\n{\\code IC4M6}: This is a simple step function with up to 10 steps. It is controlled by the stationary and uniform namelist parameters {\\code IC4KI} and {\\code IC4FC}. Array {\\code IC4KI} controls the step levels, which are in terms of dissipation rate, ${k_i}$, in radians per meter. Array {\\code IC4FC} controls the step boundaries (given in Hz). See regression test {\\file ww3\\_tic1.1/input\\_IC4/M6} for examples. 
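As an illustration of the step-function methods, a sketch (not part of the model code; the step levels and boundaries below are hypothetical values):\n\\begin{verbatim}\nimport numpy as np\n\ndef step_ki(freq_hz, ki, fc):\n    # ki: step levels (rad/m), one more entry than fc\n    # fc: increasing step boundaries (Hz)\n    return np.asarray(ki)[np.searchsorted(fc, freq_hz)]\n\nki = [1e-6, 5e-6, 2e-5, 1e-4]\nfc = [0.08, 0.12, 0.20]\nprint(step_ki(np.array([0.05, 0.10, 0.15, 0.30]), ki, fc))\n\\end{verbatim}\n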
\n\n{\\code IC4M7}: This is a formula for dissipation from \\cite{art:Dob15}, developed for a mixture of pancake and frazil ice, using data collected in the Weddell Sea (Antarctica). The formula depends on wave frequency and ice thickness:\n\\begin{equation}\\label{eq:ice7}\n  {\\alpha=0.2T^{-2.13}h} \\:\\:\\: .\n\\end{equation}\nThis method is described in \\cite{rep:RPLA18}.\n", "meta": {"hexsha": "7e4332ce5ccd66ec9a4d1876b588b1523926dd34", "size": 7206, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "WW3/manual/eqs/ICE4.tex", "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "WW3/manual/eqs/ICE4.tex", "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_forks_repo_path": "WW3/manual/eqs/ICE4.tex", "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "avg_line_length": 124.2413793103, "max_line_length": 1145, "alphanum_fraction": 0.7334165973, "num_tokens": 2137, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8289387914176259, "lm_q2_score": 0.6859494550081925, "lm_q1q2_score": 0.5686101122080702}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amsthm}\n\\usepackage{verbatim}\n\\usepackage{url}\n\\begin{document}\n\n\\title{APPM 4510 HW1}\n\\author{Zane Jakobs}\n\\date{}\n\\maketitle\nNote: I'm using your notation for pdfs since we aren't talking about Lie algebras, and it's pretty nice.\\\\\n\\subsection*{1} \n$$ [X,Y] = [X] [Y | X] = \\dfrac{1}{c} e^{-\\frac{x^2}{2\\sigma^2} - \\frac{(y-x)^2}{2\\gamma^2}}$$\n\n\\subsection*{2} Given that $X = 2.5$, it must be the case that $Y = 2$ exactly, so\n$$[Y] = \\delta(y - 2),$$ \nwhere $\\delta$ is the Dirac delta.\n\\subsection*{3(a)}\n\\begin{proof}\nTaking the expectation of this process at time $j$ gives\n\\[ \\mathbb{E}[v_{j}] = \\lambda \\mathbb{E}[v_{j-1}] + \\bar{\\epsilon}\\tag{1}.\\]\nIn order for the desired limit to exist (which it must, since $|\\lambda | < 1$ so the process is covariance-stationary), it must be the case that as $j\\to\\infty,$ the absolute value of the series of innovations, $|\\Delta V|$, must converge to $0$. Therefore, as $j\\to\\infty,$ $\\mathbb{E}][v_{j+1}] \\to \\mathbb{E}[v_{j}],$ and hence, in this limit, (1) becomes\n\\[ \n\\lim\\limits_{j\\to\\infty}\\left(\\mathbb{E}[v_{j}] = \\lambda \\mathbb{E}[v_{j}] + \\bar{\\epsilon} \\right)\\tag{2}.\n\\]\nThis can be solved to give the desired limit,\n\\[\n\\lim\\limits_{j\\to\\infty}\\mathbb{E}[v_{j}] = \\dfrac{\\bar{\\epsilon}}{1-\\lambda}.\\tag*{\\qedhere}\n\\] \n\\end{proof}\n\\subsection*{3(b)} \n\\begin{proof}\nUsing the same justification as above to let successive terms have equal moments in the large-time limit, we set up the system\n\\[\n\\lim\\limits_{j\\to\\infty} \\mathrm{Var}[v_{j}] = \\lambda^2\\mathrm{Var}[v_{j}] + \\sigma_{\\epsilon}^2 \\tag{3}.\n\\]\nThen, again like with the mean, we solve to get\n\\[\n\\lim\\limits_{j\\to\\infty} \\mathrm{Var}[v_{j}] = \\dfrac{\\sigma_{\\epsilon}^2}{1-\\lambda^2}.\\tag*{\\qedhere}\n\\]\n\\end{proof}\n\n\\subsection*{3(c)} \\begin{proof}The lag-1 autocovariance in the infinite-time limit is \n\\[ \\lim\\limits_{j\\to\\infty}\\mathrm{Cov}[v_{j+1},v_j] =\\lim\\limits_{j\\to\\infty}\\left( \\mathbb{E}[v_{j+1}v_j] -\\mathbb{E}[v_j]\\mathbb{E}[v_{j+1}]\\right). \\tag{4}\n\\] Since $v_{j+1} = \\lambda v_j + \\epsilon_j$, \n\n\\[\n\\mathbb{E}[v_{j+1}v_j] = \\mathbb{E}[\\lambda v_j^2 + \\epsilon_j v_j] = \\lambda\\mathbb{E}[v_j^2] + \\mathbb{E}[\\epsilon_j]\\mathbb{E}[v_j] = \\lambda (\\mathrm{Var}(v_j) + \\mathbb{E}[v_j]^2) + \\bar{\\epsilon}\\mathbb{E}[v_j],\\tag{5} \\]\nwhere the second equality follows from the independence of the $v_j$ and $\\epsilon_j$. 
We then have \n\\[\n\\begin{aligned}\n\\lim\\limits_{j\\to\\infty}\\mathrm{Cov}[v_{j+1},v_j] &= \\lambda\\left(\\dfrac{\\sigma_{\\epsilon}^2}{1-\\lambda^2} + \\dfrac{\\bar{\\epsilon}^2}{(1-\\lambda)^2}\\right) + \\dfrac{\\bar{\\epsilon}^2}{1-\\lambda} - \\dfrac{\\bar{\\epsilon}^2}{(1-\\lambda)^2}\\\\\n&= \\dfrac{(\\lambda - 1)\\bar{\\epsilon}^2}{(1-\\lambda)^2} + \\dfrac{\\bar{\\epsilon}^2}{1-\\lambda} + \\dfrac{\\lambda\\sigma_{\\epsilon}^2}{1-\\lambda^2}\\\\\n&= \\dfrac{\\lambda\\sigma_{\\epsilon}^2}{1-\\lambda^2}.\n\\end{aligned}\n\\] \\qedhere\n\\end{proof}\n\\subsection*{3(d)}\\begin{proof} Since we justified in 3(a) and 3(b) that the variance of $v_{j+1}$ approaches that of $v_j$ as $j\\to\\infty$, the autocorrelation in that limit is the autocovariance divided by the limiting variance:\n\\[\n\\begin{aligned}\n\\lim\\limits_{j\\to\\infty}\\mathrm{Corr}[v_{j+1},v_j]  &= \\lim\\limits_{j\\to\\infty}\\dfrac{\\mathrm{Cov}[v_{j+1},v_j]}{\\mathrm{Var}[v_j]}\\\\\n &= \\dfrac{\\lambda\\sigma_{\\epsilon}^2/(1-\\lambda^2)}{\\sigma_{\\epsilon}^2/(1-\\lambda^2)}\\\\\n &= \\lambda.\n \\end{aligned}\n\\]\n\\end{proof}\n\n\\subsection*{4} $\\mathrm{Rk}(\\mathbf{A}) = k-1$.\\ \\begin{proof} By the definition of a basis, the $\\mathbf{v}_i$ form a basis for $\\mathbb{R}^k$, thus the rank of $\\mathbf{V}$ is $\\mathrm{Rk}(\\mathbf{V}) = k$. Subtracting $\\frac{1}{k}\\mathbf{1}\\mathbf{1}^T$ from the $k\\times k$ identity gives a matrix, each of whose rows has one element that is 1 greater than the element in the row above/below it, allowing $k-1$ of those rows to be put into a linear combination that is equal to the $k$-th row; thus the matrix $\\mathbf{I} - \\frac{1}{k}\\mathbf{1}\\mathbf{1}^T$ must have rank $k-1$. Since a matrix product is one representation for composition of linear transformations, we note that $\\mathbf{A}$ can be viewed as the composition of a rank $k-1$ transformation and a rank $k$ transformation, and thus cannot have a rank greater than $k-1$ (in simpler language, a transformation that is composed of other transformations is \"no more invertible\" than its \"least invertible\" individual part). It is also true that $\\mathbf{A}$ can be constructed by \"gluing\" the $k\\times k$ submatrix of $\\mathbf{V}$ that spans $\\mathbb{R}^k$ multiplied against $\\mathbf{I} - \\frac{1}{k}\\mathbf{1}\\mathbf{1}^T$, with the result of the product of $\\mathbf{I} - \\frac{1}{k}\\mathbf{1}\\mathbf{1}^T$ and the remaining $(n-k)\\times k$ submatrix of $\\mathbf{V}$, and applying an (invertible) permutation matrix. The first product is the product of a full-rank matrix (of rank $k$) and a rank-deficient (rank $k-1$) matrix, and thus has rank $k-1$. The second product, when \"glued\" with the first, cannot add to the rank of $\\mathbf{A}$, since these $n-k$ rows were originally linear combinations of the other $k$ and they were composed with the exact same transformation as those other $k$, so those original linear combinations are preserved (that is, if row $k+3$ of $\\mathbf{V}$ were equal to the sum of rows $k-5$ and $k-143$ of $\\mathbf{V}$, the same will be true of $\\mathbf{A}$). 
Thus, $\\mathrm{Rk}(\\mathbf{A}) = k-1$ exactly.\n\\end{proof}\n\n\\subsection*{5}\nNewton's method entails the repeated computation of $\\mathbf{J}$, the Jacobian of $J(\\mathbf{x})$ (Note: since $J$ is a scalar function, its Jacobian would probably be more appropriately called its gradient, and is a vector, not a matrix), and the Hessian $\\mathbf{\\mathcal{H}}$ (NOT the same as $\\mathbf{H(x)}$), the solution of the equation\n\\[\n \\mathbf{\\mathcal{H}} \\cdot (\\mathbf{\\delta x}) = -\\mathbf{J} \\tag{6}\n \\]\nfor $\\mathbf{\\delta x}$, and updating $\\mathbf{x}\\to\\mathbf{x} + \\gamma\\mathbf{\\delta x}$ for some step size $\\gamma\\in (0,1] $ (the choice of a $\\gamma$ could be replaced with a line search for the optimal step size). We can compute $\\mathbf{J}$ as \n\\[\n\\begin{aligned}\n\\mathbf{J} &= \\dfrac{\\partial J}{\\partial\\mathbf{x}^T}\\\\\n&= 2\\mathbf{x}^T\\mathbf{C}^{-1} +  \\epsilon^2 \\dfrac{\\partial}{\\partial\\mathbf{x}^T}\\left(\\mathbf{y}^T\\mathbf{y} - \\mathbf{y}^T\\mathbf{H(x)} - \\mathbf{H(x)}^T\\mathbf{y} + \\mathbf{H(x)}^T\\mathbf{H(x)}\\right)\\\\\n&= 2\\mathbf{x}^T\\mathbf{C}^{-1} - \\epsilon^2\\left(\\mathbf{y}^T\\dfrac{\\partial\\mathbf{H(x)}}{\\partial\\mathbf{x}^T} + \\dfrac{\\partial\\mathbf{H(x)}^T}{\\partial\\mathbf{x}^T}\\mathbf{y} - \\dfrac{\\partial\\mathbf{H(x)}^T}{\\partial\\mathbf{x}^T}\\mathbf{H(x)} - \\mathbf{H(x)}^T\\dfrac{\\partial\\mathbf{H(x)}}{\\partial\\mathbf{x}^T}\\right).\n\\end{aligned}\n\\]\n\n\\subsection*{6(a)}\n\\begin{proof}\n By definition, we have\n \\[\n E = \\dfrac{1}{2}\\sum\\limits_j x_j^2\\implies \\dfrac{dE}{dt} =\\sum\\limits_j x_j \\dot{x}_j .\\tag{7}\n \\]\nExpanding the above sum from the Lorenz-'96 model, we have\n\\[\n\\begin{aligned}\n\\dfrac{dE}{dt} &= \\sum\\limits_j x_j (x_{j+1}x_{j-1} - x_{j-2}x_{j-1} - x_j + F)\\\\\n&= F\\sum\\limits_j x_j + \\sum\\limits_j x_j(x_{j+1}x_{j-1} - x_{j-2}x_{j-1} - x_j)\\\\\n&= F\\sum\\limits_j x_j + \\sum\\limits_j \\left(x_jx_{j+1}x_{j-1} - x_{j-2}x_{j-1}x_j - x_j^2\\right)\n\\end{aligned}\n\\]\nNow, since index addition and subtraction are performed over a space where we have identified $J$ and $0$, adding modulo $J$, we want to consider the terms $x_jx_{j+1}x_{j-1}$ and $- x_{j-2}x_{j-1}x_j$. Since each of these terms is a product of the three $x_j$ for consecutive values of $j$, their sums over all $j\\in \\{1,2,\\ldots , J\\}$ are equal, and hence they cancel out. This leaves us with\n\\[\n\\begin{aligned}\n\\dfrac{dE}{dt} &= F\\sum\\limits_j x_j - \\sum\\limits_j x_j^2\\\\\n&= -2E + F\\sum\\limits_j x_j,\n\\end{aligned}\n\\]\nwhich is the desired result.\n\\end{proof}\n\\subsection*{6(b)} Code is attached to the back of this assignment (integration routine is \\url{boost::numeric::odeint::runge_kutta54_cash_karp}). 
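For reference, a minimal Python sketch of the same integration (illustrative only; the original code used C++/boost, $J=10$ matches the reported output length, and $F=8$, the time span and the initial perturbation here are assumptions):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nJ, F = 10, 8.0\n\ndef lorenz96(t, x):\n    # dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F, indices mod J\n    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F\n\nx0 = F * np.ones(J)\nx0[0] += 0.01                     # small perturbation\nsol = solve_ivp(lorenz96, (0.0, 10.0), x0, rtol=1e-8, atol=1e-8)\nprint(np.round(sol.y[:, -1], 2))  # state at the final timestep\n\\end{verbatim}\n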
The final timestep has the following values for the $x_j$, in order from $j=1$ to $J$, to two digits of precision:\\\\\n\\[ \\mathbf{x} = (-1.5, 1.2, -0.54, 0.15, 1.1, 5.5, 1.1\\cdot 10^1, -2.3, -0.37, 4.2) \\]\n\\end{document}\n", "meta": {"hexsha": "ac46519c1e7dd6474c4c55fbbe0167a6c73defd9", "size": 8233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW-Text/4510_HW_1.tex", "max_stars_repo_name": "DiffeoInvariant/Data-Assimilation", "max_stars_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW-Text/4510_HW_1.tex", "max_issues_repo_name": "DiffeoInvariant/Data-Assimilation", "max_issues_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW-Text/4510_HW_1.tex", "max_forks_repo_name": "DiffeoInvariant/Data-Assimilation", "max_forks_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.8584070796, "max_line_length": 2011, "alphanum_fraction": 0.670836876, "num_tokens": 3098, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.805632181981183, "lm_q1q2_score": 0.568603141957335}}
{"text": "%%\n%%  appendixSVM.tex - Obstacle Detection and Planning for Autonomous Vehicles based on Computer Vision Techniques\n%%\n%%  Copyright 2014 N\u00e9stor Morales <nestor@isaatc.ull.es>\n%%\n%%  This work is licensed under a Creative Commons Attribution 4.0 International License.\n%%\n\n\\graphicspath{{./images/chapter06/bmps/}{./images/chapter06/vects/}{./images/chapter06/}}\n\n\\chapter{Support Vector Machines}\\label{ch:appendixSVM}\n\n\\acfp{SVM} are a set of linear classifiers usually employed both for classification and regression, which are based on methods of supervised learning. This technique is based on the Vapnik Chervonenkis (VC) dimension from the statistical learning theory and Structural Risk Minimization.\nThe original algorithm was developed by Vladimir Vapnik in 1963 as a linear classifier, but the most currently used version (soft margin) was proposed together by Vapnik and Corinna Cortes in 1995 \\citep{cortes1995support}.\n\n\\section{Margin Maximization}\\label{ch:appendixSVM_01}\n\nLet $(x_1,y_1),\\dots,(x_N,y_N)$, with $\\begin{array}{l c r}  \n  x_k \\in \\mathbb{R}^m & y_k \\in {-1,1} & k=1,\\dots,N \n\\end{array}$ be the set of samples separated by the hyperplane $w^T \\cdot x + b = 0$, where $x$ is a $N$-dimensional point and $w$ is a vector of weights. This hyperplane divides data, in the way that $w^T \\cdot x_k + b > 0$ for the $x_k$ in a class, and $w^T \\cdot x_k + b < 0$ for the $x_k$ in the other class. If data is separable, it is quite likely to have more than one way to do that. From all possible solutions, the \\ac{SVM} selects that with the biggest distance from the hyperplane to the nearest data points. The described method takes advantage of this fact.\nIf training points are labeled in the way that $y_k \\in \\{ -1, +1\\}$, where $+1$ is a positive sample, and $-1$ a negative sample, the following expression is obtained:\n\n\\begin{equation}\\label{eq:appendixSVM_hyperplane_scaled}\n  \\begin{array}{l r}\n    y_k (w^T \\cdot x_k + b) \\geq 1 & \\forall k \\in \\{1, \\dots, N \\}\n  \\end{array}\n\\end{equation}\n\nBased on this, both classes can be separated by two hyperplanes, $H_1: w^T \\cdot x_k + b = 1$ and $H_2: w^T \\cdot x_k + b = -1$, in the way that there are not data points inside the gap existing between $H_1$ and $H_2$. The distance between them is $2 / \\|w\\|$, where $\\|w\\|$ is the Euclidean norm of $w$. Therefore, the way in which the parameters that maximize this distance are obtained is by minimizing the following objective function:\n\n\\begin{equation}\\label{eq:appendixSVM_objective_function}\n L(w) = {{\\|w\\|^2} \\over 2}\n\\end{equation}\n\n, such that $\\begin{array}{l r} \ny_k (w^T \\cdot x_k + b) \\geq 1 & \\forall k \\in \\{1, \\dots, N \\}\n\\end{array} $.\n\nTo solve this, the dual problem must be obtained. After a few calculations, we obtain the following formulation:\n\n\\begin{equation}\\label{eq:appendixSVM_dual_formulation}\n L_D (\\alpha) = \\sum_{k=1}^N \\alpha_k - {1 \\over 2} \\sum_{k = 1, l=1}^N \\alpha_k \\alpha_l y_k y_l x_k^T x_l\n\\end{equation}\n\n$L_D$ must be maximized. The training vectors $x_k$ with positive Lagrange multipliers $\\alpha_k$ in any of the two hyperplanes are called Support Vectors (SV). SVs define the hyperplane parameters. 
Therefore, using the SVs it is possible to define a discriminative function as follows: \n\n\\begin{equation}\\label{eq:appendixSVM_discriminative_function}\n y = sign(w^T \\cdot x + b) = sign \\left(\\sum_{k \\in S} \\alpha_k y_k x_k^T x + b \\right)\n\\end{equation}\n\n, where $S$ is the set containing all SVs.\n\n\\section{Soft Margin}\\label{ch:appendixSVM_02}\n\nIn a real problem, it is very unlikely that a hyperplane separates the data exactly. In fact, most of the time it is not even desirable to separate them exactly, even when a non-linear decision boundary could do so. \nIn these cases, the objective function \\ref{eq:appendixSVM_objective_function} can be substituted by:\n\n\\begin{equation}\\label{eq:appendixSVM_soft_objective_function}\n L(w) = {{\\|w\\|^2} \\over 2} + C \\sum_{k=1}^N \\xi_k\n\\end{equation}\n\n, where $\\xi_k$ is the error allowed in the classification.\n\nThe solution of this problem can also be obtained through the dual formulation, as we did for the linear case. The non-zero $\\alpha_k$ correspond to the following two cases:\n\\begin{itemize}\n \\item For SVs on either of the two hyperplanes, $0 < \\alpha_k < C$.\n \\item For SVs inside the hyperplanes, $\\alpha_k = C $.\n\\end{itemize}\n\n\\section{Non-linear \\ac{SVM}}\\label{ch:appendixSVM_03}\n\n\\acp{SVM} can also be used to determine non-linear separating surfaces. To do that, it is possible to use kernel-based techniques. Using the appropriate kernel function, we can find the separating hyperplane in a higher-dimensional space. The kernel function replaces the problem of finding a non-linear separating surface in the original space with that of computing a separating hyperplane in the high-dimensional space, reducing the computational cost while still obtaining an optimal non-linear separating surface in the original space, as described in \\cite{burges1998tutorial}.\n\nLet $\\pi$ be the mapping of the data from the original space to higher dimensions. 
The kernel function $K$ is given in terms of this mapping by:\n\n\\begin{equation}\\label{eq:appendixSVM_kernel_func}\n K(x_1, x_2) = \\pi(x_1)^T \\cdot \\pi(x_2)\n\\end{equation}\n\nUsing this kernel function, the objective function in \\ref{eq:appendixSVM_dual_formulation} becomes:\n\n\\begin{equation}\\label{eq:appendixSVM_soft_formulation}\n L_d(\\alpha) = \\sum_{k=1}^N  \\alpha_k - {1 \\over 2} \\sum_{k = 1, l=1}^N \\alpha_k \\alpha_l y_k y_l K(x_k, x_l)\n\\end{equation}\n\nAnd the discriminative function:\n\n\\begin{equation}\\label{eq:appendixSVM_soft_discriminative_function}\n y = sign \\left( \\sum_{k \\in S} \\alpha_k y_k K(x_k, x) + b \\right)\n\\end{equation}\n\nFunction $K$ can take several forms; the most common ones are listed below:\n\\begin{itemize}\n \\item Linear: $K(x_k, x_l) = x_k^T \\cdot x_l$.\n \\item Polynomial: $K(x_k, x_l) = (\\gamma \\cdot x_k^T \\cdot x_l + r)^d$.\n \\item \\acf{RBF}: $K(x_k, x_l) = \\exp(-\\gamma \\cdot \\| x_k - x_l\\|^2)$.\n \\item Sigmoid: $K(x_k, x_l) = \\tanh(\\gamma \\cdot x_k^T \\cdot x_l + r)$\n\\end{itemize}\n\nIn the method described in chapter \\ref{ch:chapter06}, we use the Gaussian \\acf{RBF}, as it is able to produce continuous soft decision boundaries, making it the most suitable for the application.", "meta": {"hexsha": "523e0f5409f723ec86b139ac184e30b6a2949a60", "size": 6312, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendixSVM.tex", "max_stars_repo_name": "nestormh/thesis", "max_stars_repo_head_hexsha": "7e1d9c79d6cb456d98bb156ff2750ed70b179db2", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-11-21T08:28:23.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-21T08:28:23.000Z", "max_issues_repo_path": "appendixSVM.tex", "max_issues_repo_name": "nestormh/thesis", "max_issues_repo_head_hexsha": "7e1d9c79d6cb456d98bb156ff2750ed70b179db2", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendixSVM.tex", "max_forks_repo_name": "nestormh/thesis", "max_forks_repo_head_hexsha": "7e1d9c79d6cb456d98bb156ff2750ed70b179db2", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.12, "max_line_length": 603, "alphanum_fraction": 0.7387515843, "num_tokens": 1894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5686031303391326}}
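The kernel functions listed in the appendix above can be written compactly as follows (an illustrative sketch, not from the thesis; the gamma, r and d values are hypothetical hyperparameters):\n\\begin{verbatim}\nimport numpy as np\n\ndef linear(xk, xl):\n    return xk @ xl\n\ndef polynomial(xk, xl, gamma=1.0, r=1.0, d=3):\n    return (gamma * (xk @ xl) + r) ** d\n\ndef rbf(xk, xl, gamma=1.0):\n    # Gaussian RBF: exp(-gamma * ||xk - xl||^2)\n    return np.exp(-gamma * np.sum((xk - xl) ** 2))\n\ndef sigmoid(xk, xl, gamma=1.0, r=0.0):\n    return np.tanh(gamma * (xk @ xl) + r)\n\\end{verbatim}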
{"text": "\\chapter{Localization}\nLocalization of an integral happens when the integral is exactly equal to its saddle-point approximation.\n% The localization loci are then the saddle-points.\nA trivial example is therefore the Gaussian integration formula, which the saddle-point method is based on:\n\\begin{equation}\\label{GaussianIntegral}\n \\int_{\\mathbb{R}^n} d^n x \\, e^{-\\frac{1}{2} x^T A x} = \\sqrt{\\dfrac{(2\\pi)^n}{\\text{det}A}},\n\\end{equation}\nwhere $A$ is a positive-definite $n\\times n$ matrix.\n\n\nIn the case of path integrals in quantum field theories, which are of infinite dimension,\nlocalization, if applicable, reduces them to finite dimensional integrals.\n% This is achieved using functional generalizations of the Gaussian formula \\eqref{GaussianIntegral},\n% and the resulting integral is over the moduli space of the theory vacua.\nThis is an exact approach, in contrast to perturbative methods such as Feynman diagrams,\nvalid only in the weakly-coupled regime.\nUnfortunately, very few path integrals are solvable, \nbut there are very specific classes of theories where localization does apply.\nThese are supersymmetric theories and topological quantum field theories,\nfrom which we distinguish two types of localization:\nsupersymmetric localization and equivariant localization.\n% As a matter of language, and\n% this effect is often said to be \\emph{semiclassically-exact}, or \\emph{1-loop exact}.\n\nMany results of this thesis are based on the supersymmetric localization.\nWe will review its basic idea and illustrate it explicitly in an example with an ordinary integral. \nWe refer the reader to \\cite{Szabo:1996md} and \\cite{Pestun:2016zxk} for reviews of the topic.\n\n\n\\section{Supersymmetric localization}\n\nIn the path integral formulation of quantum theories, \nphysical observables are expectation values of operators.\nLet us consider the path integral (in the Euclidean signature and set $\\hbar = 1$):\n\\begin{equation}\n \\braket{O} = \\int D\\phi O \\, e^{-S[\\phi]},\n\\end{equation}\nwhere $O$ is an operator, built out of the fields of the theory, represented by $\\phi$.\n\nAssume we can deform the action $S$ in such a way that it does not affect the physical quantity, \nnamely\n\\begin{equation}\n \\braket{O}_t = \\int D\\phi e^{-S[\\phi]-t Q V[\\phi]}, \n\\end{equation}\nwhere \n% $QV\\geq 0$, and \n$Q$ is a fermionic symmetry of the theory, and require\n\\begin{equation}\n \\dfrac{d \\braket{O}_t}{d t} = 0.\n%  \\overset{!}{=} 0.\n\\end{equation}\nIntegrating by parts and assuming vanishing boundary terms, \nwe obtain the following necessary conditions:\n\\begin{equation}\\label{loc:susyCondition}\n Q^2 = 0, \\quad Q S = 0, \\quad Q O = 0,\n\\end{equation}\nand that the measure $D\\phi$ is also invariant (the symmetry must not be anomalous).\n\nIf this deformation were possible, then the deformation parameter $t$ could be of any value. 
\nThe most useful case is to take it to be infinitely large,\nin order to use the saddle-point approximation.\nSince by construction the integral is independent of $t$, the saddle-point approximation must be exact,\nhence the path integral localizes to a set of loci.\nThese are determined by\\footnote{ \nIf the bosonic part is positive-definite.\nThe fermionic contribution of the localization action is subleading.}\n\\begin{equation}\\label{loc:saddlePoint}\n (QV)_\\text{Bosonic} = 0\n\\end{equation}\nwhose solutions---let us denote them by $\\phi_*$---belong to the moduli space of vacua of the theory.\nThis means that some fields acquire a non-zero vacuum expectation value.\nAccounting for the fluctuations around the vacuum configuration,\nthe localized integral can be formally written as:\n\\begin{equation}\n \\braket{O}=\\int_\\text{Moduli} D\\phi_* O(\\phi_*) e^{-S(\\phi_*)} Z_\\text{1-loop},\n\\end{equation}\nwhere the integral over $\\phi_*$ is analogous to the sum over all the saddle-points, \nand $Z_\\text{1-loop}$ is the functional generalization of the Gaussian integral \\eqref{GaussianIntegral} for the fluctuations around the saddle-points. \nIt is thus a functional determinant, which is divergent in general.\nIn order to obtain a finite result, the theory is defined on a compact manifold, like a sphere, \nsuch that the spectrum of the operator is discrete, and supersymmetry guarantees the cancellation of divergences between bosons and fermions.\nDespite its conceptual simplicity, the main challenge of this technique is to find the localization action $QV$, since no general recipe exists. \n\n\n\n\\section{A toy example}\n\nConsider an ordinary integral\n\\begin{equation}\n I = \\int_m^M dx \\,p g'(x)\\,e^{p g(x)}, \n\\end{equation}\nwith $p$ being a constant.\nIt can be solved exactly as a total derivative:\n\\begin{equation}\n I = \\int_m^M  dx \\, \\dfrac{d}{d x}\\,e^{p g(x)} \n   = e^{p g(M)}-e^{p g(m)}.\n\\end{equation}\n\n\nIf we were to solve it using the saddle-point approximation for $p \\rightarrow \\infty$,\nthen the saddle-points must be the endpoints $x_* = \\{m, M\\}$\n(here we assume $g'(x_*)=0$ at the endpoints, so the linear term of the expansion vanishes).\nLet us consider this case and expand around the saddle-points:\n\\begin{equation}\n g(x_*+\\xi) = g(x_*)+\\frac{1}{2} g''(x_*) \\xi^2 +\\mathcal{O}(\\xi^3).\n\\end{equation}\nEach saddle-point contributes to the integral as follows:\n\\begin{eqnarray}\n I_m &=& e^{p g(m)} \\int_0^\\infty  d\\xi \\, p g''(m) \\xi\\, e^{p \\frac{1}{2} g''(m) \\xi^2 } \n      =- e^{p g(m)} \\\\\n I_M &=& e^{p g(M)} \\int_{-\\infty}^0 d\\xi \\, p g''(M) \\xi\\, e^{p\\frac{1}{2} g''(M) \\xi^2 } \n      = e^{p g(M)}   \n\\end{eqnarray}\nvalid for $\\Re(p g'') < 0$.\nThe sum $I_m + I_M$ indeed gives the exact result.\n% In conclusion, the integral $I$ is a trivial example of localization.\n\n\nNow, let us see how we can use supersymmetric localization to solve it. 
\nThe integral $I$ can be rewritten in the supersymmetric form:\n\\begin{equation}\n I = \\int_m^M \\, dx \\int da \\int db \\, e^{p(g(x)- a b g'(x))},\n\\end{equation}\nwhere $a$ and $b$ are Grassmann numbers (our fermions), \nwhich means they satisfy $a b = -b a$ and $a^2=b^2=0$, hence \n\\begin{equation} \\label{GrassmannIntegral}\n f(x) = \\int da \\int db \\, e^{-a\\, f(x) \\,b }.\n\\end{equation}\n\n\nThe action $S=p(g(x)-a b g'(x))$ is invariant under the supersymmetry transformation:\n\\begin{equation}\n \\delta_\\epsilon x = -\\epsilon a, \n \\quad\n \\delta_\\epsilon a = 0,\n \\quad \n \\delta_\\epsilon b = \\epsilon,\n\\end{equation}\nwhere $\\epsilon$ is a Grassmann number.\nThe supersymmetric operator, defined by $\\epsilon Q = \\delta_\\epsilon$,\nis explicitly\n\\begin{equation}\n Q = -a \\frac{\\partial }{\\partial x}+\\frac{\\partial }{\\partial b}.\n\\end{equation}\nWe can check that, indeed, \\eqref{loc:susyCondition} are fulfilled.\n\n\nLet us deform the integral with a $Q$-exact term\n\\begin{equation}\n I(t)=\\int_m^M \\, dx \\int da \\int db \\, e^{p(g(x)-a b g'(x))+ t Q V},\n\\end{equation}\nwhere \n\\begin{equation}\n V(x, a, b) = f(x) b\n\\end{equation}\nthat leads to a non-trivial localization action\n\\begin{equation}\n QV = f(x)-a b f'(x).\n\\end{equation}\nThis is actually the most general supersymmetric action we can write for this case.\nThus, $S$ is not just $Q$-closed (i.e. $QS=0$), but also $Q$-exact (i.e. $S=QV_s$).\n\n\nWe require the deformed integral to be independent of the deformation parameter $t$:\n\\begin{equation}\n I'(t)=0.\n\\end{equation}\nAn explicit and straightforward computation shows\n\\begin{eqnarray}\n I'(t) &=& \\int_m^M \\, dx \\int da \\int db \\, e^{S + t Q V} Q V \\\\\n       &=& \\int_m^M \\, dx \\int da \\int db \\, Q (e^{S + t Q V} V)\\\\\n       &=& \\left. e^{p g(x)+ t f(x)} f(x) \\right|^M_m\n\\end{eqnarray}\nwhere we used the supersymmetry condition and nilpotency. \nIn this case, we must additionally require vanishing boundary conditions\n\\begin{equation}\n f(m)=f(M)=0.\n\\end{equation}\n\nNow we can take large $t$ to solve the integral using the saddle-point method.\nThe boundary points here must be either global maxima or global minima of $f(x)$,\nfor $t>0$ or $t<0$, in order for the saddle-point integral to be convergent.\nIn other words, for $t$ negative (positive), \nthe bosonic part of the localization action is positive (negative) definite, \nand it vanishes at the saddle-points:\n\\begin{equation}\n (QV)_\\text{Bosonic}=0.\n\\end{equation}\n\nThe saddle-point approximation is exact, in the same fashion as for the large $p$ case we studied before.\nThis is to be expected, \nsince the deformed integral is still a total derivative when we integrate out the fermions:\n\\begin{equation}\n I(t) = \\int_m^M  dx \\, \\dfrac{d}{d x}\\,e^{p g(x) + t f(x)} \n      = e^{p g(M)+ t f(M)}-e^{p g(m)+ t f(m)}.\n%       = e^{p g(M)}-e^{p g(m)}.      
\n\\end{equation}\n\nWhen $g(x)=\\cos x$ and the integration region is extended to a sphere \nparametrized by the polar angle $ x\\in [0,\\pi]$ and the azimuthal angle $\\varphi \\in [0, 2\\pi]$,\nthis becomes a particular example of the Duistermaat-Heckman integration formula,\nwhich is the precursor of the localization of path integrals.\n\n% Witten used this example to illustrate the equivariant localization in the appendix of \\cite{Witten:1992xu}.\n\n% \\begin{equation}\n%   \\int_0^{2\\pi} d\\phi \\, \\int_0^{\\pi} \\, d\\theta \\sin\\theta \\, e^{t \\cos\\theta}\n% = \\dfrac{2\\pi}{t}\\left(e^{t} - e^{-t}\\right).\n% \\end{equation}\n% The localization loci are the north and south pole, namely $\\theta_*=\\{0, \\pi\\}$.\n\n\n% The theorem was later proved to be a special case of a more general localization property of equivariant cohomology\n% and Berline and Vergne derived a localization formula for general compact Riemannian manifolds.\n% The first infinite-dimensional generalization of the theorem,\n% in the setting of a supersymmetric path integral,\n% was worked out by Atiyah and Witten.\n\n\n", "meta": {"hexsha": "fe69f6052d9416b95f49d5d006b8acbceeafc315", "size": 9447, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/Localization.tex", "max_stars_repo_name": "yixinyi/PhDThesis", "max_stars_repo_head_hexsha": "fa5e6d89bf6e7658cebae8bab8a3d22fe4e53e29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/Localization.tex", "max_issues_repo_name": "yixinyi/PhDThesis", "max_issues_repo_head_hexsha": "fa5e6d89bf6e7658cebae8bab8a3d22fe4e53e29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/Localization.tex", "max_forks_repo_name": "yixinyi/PhDThesis", "max_forks_repo_head_hexsha": "fa5e6d89bf6e7658cebae8bab8a3d22fe4e53e29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.1369863014, "max_line_length": 153, "alphanum_fraction": 0.7193818143, "num_tokens": 2793, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711756575749, "lm_q2_score": 0.6688802735722128, "lm_q1q2_score": 0.568528952502334}}
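The toy integral from the localization chapter above is easy to check numerically (an illustrative sketch, not from the thesis; $p$, $m$, $M$ and $g=\\cos$ are chosen to match the Duistermaat-Heckman example just mentioned):\n\\begin{verbatim}\n# Check I = int_m^M p g'(x) exp(p g(x)) dx = exp(p g(M)) - exp(p g(m)),\n# i.e. only the endpoints contribute.\nimport numpy as np\nfrom scipy.integrate import quad\n\np, m, M = 10.0, 0.0, np.pi\ng = np.cos\ngp = lambda x: -np.sin(x)\n\nI_num, _ = quad(lambda x: p * gp(x) * np.exp(p * g(x)), m, M)\nI_exact = np.exp(p * g(M)) - np.exp(p * g(m))\nprint(I_num, I_exact)   # the two agree\n\\end{verbatim}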
{"text": "\\subsection{Burn}\r\n\\label{sec:20Burn}\r\n\r\nWe continue with the notation and indices from the prior sections.\\\\\r\n\\\\\r\nSuppose Bob is the owner of the token commitment $Z_e$ which represents a value of $e$ (denominated in the currency of a particular ERC-20 token).\\\\\r\n\\\\\r\nRecall that whilst the token commitment $Z_e$ exists (read: ``is spendable'') within the Shield contract, the Shield contract holds an equivalent value of $e$ `public' ERC-20 tokens; effectively `locked up' in escrow.\\\\\r\n\\\\\r\nSuppose Bob (now the owner of $Z_e$ because he knows the secret key $sk_B$) wishes to `release' a value of $e$ ERC-20 tokens from escrow.\r\nThen he will need to effectively `reveal' the contents of his token commitment $Z_e$ in order to convince the Shield contract that he is indeed entitled to withdraw $e$ from escrow.\r\nWe call this act of converting from a `private' token commitment back to its `public' counterpart a `\\textbf{burn}'.\r\n\\\\\r\nNote that by burning a token commitment, Bob is revealing information which was previously private; namely, the value $e$. Bob could continue to use an anonymous Ethereum address when calling the `burn' transaction, but analytics of public ERC-20 transactions thereafer will likely eventually reveal that it was Bob who burned $e$. We'll have Bob use his public Ethereum address `burn', for simplicity.\\\\\r\n\\\\\r\n\r\n\\noindent\r\nFor Bob to burn $Z_e$ within the Shield contract, under zero knowledge, he follows the steps in Figure~\\ref{fig:fBurnAlgorithm}.\r\n\r\n\\begin{figure}[htp]\r\n  \\ContinuedFloat*\r\n\t\\begin{center}\r\n\t\t\\begin{framed}\r\n      \\begin{tabular}{p{16cm}}\t\r\n        \\textbf{Fungible burn algorithm} \\\\\r\n        \\\\\r\n        \\midrule\r\n        \\textbf{Bob's steps:}\\\\\r\n        \\begin{enumerate}\r\n          \\setcounter{enumi}{\\value{ongoingEnumCounter}}\r\n          \\item Compute $N_e := h(\\;\\sigma_e\\;|\\;sk^Z_B\\;)$, the nullifier of Bob's commitment $Z_e$.\r\n          \\item Get $\\psi_{Z_e}$ -- the sister-path of $Z_e$ -- from the Shield contract (see Details below).\r\n          \\item Get the latest Merkle root from the Shield contract: $\\roott_{n+m+k+l}$ (see Details below).\r\n          \\item Set public inputs $x = (\\;e,\\;N_e,\\;\\roott_{n+m+k+l})$\r\n          \\item Set private inputs $\\omega = (\\psi_{Z_e},\\;sk_B,\\;\\sigma_e)$\r\n          \\item Select $C_{ft-burn}(\\;\\omega,\\;x\\;)$ -- the set of constraints which are satisfied if and only if:\r\n          \\begin{enumerate}\r\n            \\item $pk_B$ equals $h(\\;sk_B\\;)$; (Proof of knowledge of the secret key to $pk_B$) (see Details for why $pk_B$ isn't an input to $C$)\r\n            \\item $Z_e$ equals $h(\\;e\\;|\\;pk_B\\;|\\;\\sigma_e\\;)$ (Proof of the constituent values of $Z_c$)\r\n            (See Details for why $Z_e$ isn't an input to $C$)\r\n            \\item $\\roott_{n+m+k+l}$ equals $h\\br*{\\psi_{Z_e}(1)\\;|...|\\;h\\br*{\\psi_{Z_e}(d-2)\\;|\\;h\\br*{\\psi_{Z_e}(d-1)\\;|\\;Z_e}\\;}...}$ (Proof that $Z_e$ belongs to the on-chain Merkle Tree)\r\n            \\item $N_e$ equals $h(\\;\\sigma_e\\;|\\;sk^Z_B\\;)$ (Proof that $N_e$ is indeed the nullifier of $Z_e$)\r\n          \\end{enumerate}\r\n          \\item Generate $\\pi := P(\\;p_C\\;,\\;x,\\;\\omega\\;)$; a proof of knowledge of satisfying arguments $(\\omega, x)\\;s.t.\\;C(\\omega, x) = 1$. 
Recall: $p_C$ -- the proving key for $C$ -- will be stored on Bob's computer.\r\n           \r\n          The pair $(\\pi, x)$ is the zk-SNARK which attests to knowledge of private inputs $\\omega$ without revealing them.\r\n          \\item Send $(\\pi, x)$ to the Shield contract for verification.\r\n           \r\n          Using web3: \\texttt{fTokenShield.burn(payTo, proof, inputs, vkId)}\r\n\r\n          where \\texttt{payTo} is an Ethereum address, specified by Bob, into which he wishes for $e$ to be transferred (denominated in the currency of the linked ERC-20 contract).\r\n          %remember where the count (enumi) is up to and store it in ongoingEnumCounter:\r\n          \\setcounter{ongoingEnumCounter}{\\value{enumi}}\r\n        \\end{enumerate}\r\n        \\ \\\\\r\n        \\midrule\r\n        \\textbf{Shield contract's steps:}\\\\\r\n        \\begin{enumerate}\r\n          %resume counter\r\n          \\setcounter{enumi}{\\value{ongoingEnumCounter}}\r\n          \\item Verify the proof as correct: call a Verifier contract to verify the \\texttt{(proof, inputs)} pair against the verification key represented by \\texttt{vkId}.\r\n          \\setcounter{ongoingEnumCounter}{\\value{enumi}}\r\n        \\end{enumerate}\r\n        \\ \\\\\r\n        \\midrule\r\n        ... \r\n\t\t\t\\end{tabular}\r\n\t\t\\end{framed}\r\n\t\\end{center}\r\n\\caption{Fungible Burn Algorithm}\r\n\\label{fig:fBurnAlgorithm}\r\n\\end{figure}\r\n\r\n%continue on next page\r\n\\begin{figure}[htp]\r\n  \\ContinuedFloat %to continue\r\n\t\\begin{center}\r\n\t\t\\begin{framed}\r\n      \\begin{tabular}{p{16cm}}\r\n       \\textbf{ Verifier contract's steps:}\\\\\r\n        \\begin{enumerate}\r\n          \\setcounter{enumi}{\\value{ongoingEnumCounter}}\r\n          \\item Compute \\texttt{result = verify(proof, inputs, vkId)}.\r\n          \r\n          I.e. Verify the \\texttt{(proof, inputs)} pair against the verification key.\r\n          \\item Return \\texttt{result}$\\in$\\texttt{\\{false, true\\}} to the Shield contract.\r\n          \\setcounter{ongoingEnumCounter}{\\value{enumi}}\r\n        \\end{enumerate}\r\n        \\ \\\\\r\n        \\midrule\r\n        \\textbf{Shield contract's steps:}\\\\\r\n        \\begin{enumerate}\r\n          \\setcounter{enumi}{\\value{ongoingEnumCounter}}\r\n          \\item If \\texttt{result = false}, revert.\r\n          \\item Else:\r\n          \\begin{enumerate}\r\n            \\item Check $\\roott_{n+m+k+l}$ is in $\\rootsList$. (Revert if not).\r\n            \\item Check $N_e$ is not already in its list of `spent' nullifiers. 
(Revert if it is).\r\n            \\item Transfer ERC-20 tokens of value $e$ from the Shield contract (which has been holding the value in escrow) to Bob's \\texttt{payTo} Ethereum address.\r\n            \\item Append the nullifier $N_{e}$ to the ever-increasing array $\\bm N$.\r\n          \\end{enumerate}\r\n          \\setcounter{ongoingEnumCounter}{\\value{enumi}}\r\n        \\end{enumerate}\r\n        \\ \\\\\r\n        \\midrule\r\n        \\textbf{Bob's steps:}\\\\\r\n        \\begin{enumerate}\r\n          \\setcounter{enumi}{\\value{ongoingEnumCounter}}\r\n          \\item Check the ERC-20 contract to ensure his balance has increased by $e$.\r\n          \\item Store any relevant data in his local database.\r\n          \\setcounter{ongoingEnumCounter}{0} %reset for next figure\r\n        \\end{enumerate} \r\n\t\t\t\\end{tabular}\r\n\t\t\\end{framed}\r\n\t\\end{center}\r\n\\caption{Fungible Burn Algorithm} %same caption as the first part of this figure\r\n%\\label{fig:nfTransferAlgorithm} - no label in this second part of the figure\r\n\\end{figure}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\newpage\r\n\\subsubsection{Details}\r\n\\label{sec:20BurnDetails}\r\n\r\nWe refer to the numbered steps of Figure~\\ref{fig:fBurnAlgorithm}.\\\\\r\n\\\\\r\n\r\n\\textbf{Step $1$}\r\n\\ \\\\\r\nThis is handled within \\hyperref[sec:f-token-controller]{\\texttt{f-token-controller.js}}.\\\\\r\n\\\\\r\n\r\n\\textbf{Steps $2 - 3$}\r\n\\ \\\\\r\nThese calls to the Shield contract are handled within \\hyperref[sec:f-token-zkp]{\\texttt{f-token-zkp.js}}.\\\\\r\n\\\\\r\n\\noindent\r\nIt is important at this stage to note that there are an unknown number of other parties utilising the Shield smart contract.\r\nHence, the dynamic array of tokens $\\bm{Z}$ might have grown since Alice appended Bob's $Z_e$ and Alice's $Z_f$ as the $(n+m)^{th}$ and $(n+m+1)^{th}$ leaves of $M$.\r\nSuppose there have been $l-1$ additional tokens added to $\\bm{Z}$ since then.\r\nThat is,\\\\\r\n\\begin{align*}\r\n  \\bm{Z}_{n+m+k+l} = (Z_0, Z_1,...,Z_{n-1}, Z_c, ..., Z_{n+m-1}, Z_d,..., Z_{n+m+k-1}, Z_e, Z_f, Z_{n+m+k+2}, ... Z_{n+m+k+l})\r\n\\end{align*}\r\n\r\n\\noindent\r\nWe denote the corresponding Merkle Tree which holds tokens $\\bm{Z}_{n+m+k+l}$ by $M_{n+m+k+l}$. 
We denote its root by $\\roott_{n+m+k+l}$; an element of $\\rootsList$.\r\n\r\n\r\n\\begin{align*}\r\n  \\scalebox{0.85}{\r\n    \\begin{forest}\r\n      [{$\\roott_{n+m+k+l}:= h\\br*{\r\n                          h\\br*{\r\n                            h\\br*{\r\n                              h\\br*{\r\n                                Z_0,Z_1\r\n                              },\r\n                              ...\r\n                            },\r\n                            h\\br*{\r\n                              h\\br*{\r\n                                Z_{n-1},Z_c\r\n                              },\r\n                              h\\br*{\r\n                                ...,Z_{n+m-1}\r\n                              }\r\n                            }\r\n                          },\r\n                          h\\br*{\r\n                            h\\br*{\r\n                              h\\br*{\r\n                                Z_d, ...\r\n                              },\r\n                              h\\br*{\r\n                                Z_{n+m+k-1}, Z_e\r\n                              }\r\n                            },\r\n                            h\\br*{\r\n                              h\\br*{\r\n                                Z_f, ...\r\n                              },\r\n                              h\\br*{\r\n                                Z_{n+m+k+l}, 0\r\n                              }\r\n                            }\r\n                          }\r\n                        }\r\n                      $}\r\n        [{$ h\\br*{\r\n              h\\br*{\r\n                h\\br*{\r\n                  Z_0,Z_1\r\n                },\r\n                ...\r\n              },\r\n              h\\br*{\r\n                h\\br*{\r\n                  Z_{n-1},Z_c\r\n                },\r\n                h\\br*{\r\n                  ...,Z_{n+m-1}\r\n                }\r\n              }\r\n            }\r\n          $}\r\n          [{$ h\\br*{\r\n                h\\br*{\r\n                  Z_0,Z_1\r\n                },\r\n                ...\r\n              }\r\n            $}\r\n            [{$ h\\br*{\r\n                  Z_0,Z_1\r\n                }\r\n              $}\r\n              [{$Z_0$}][{$Z_1$}]\r\n            ]\r\n            [...\r\n              [...][...]\r\n            ]\r\n          ]\r\n          [{$ h\\br*{\r\n                h\\br*{\r\n                  Z_{n-1},Z_c\r\n                },\r\n                h\\br*{\r\n                  ...,Z_{n+m-1}\r\n                }\r\n              }\r\n            $}\r\n            [{$ h\\br*{\r\n                  Z_{n-1},Z_c\r\n                }\r\n              $}\r\n              [{$Z_{n-1}$}][{$Z_c$}]\r\n            ]\r\n            [{$ h\\br*{\r\n                  ...,Z_{n+m-1}\r\n                }\r\n              $}\r\n              [...][{$Z_{n+m-1}$}]\r\n            ]\r\n          ]\r\n        ]\r\n        [{$ h\\br*{\r\n              h\\br*{\r\n                h\\br*{\r\n                  Z_d, ...\r\n                },\r\n                h\\br*{\r\n                  Z_{n+m+k-1}, Z_e\r\n                }\r\n              },\r\n              h\\br*{\r\n                h\\br*{\r\n                  Z_f, ...\r\n                },\r\n                h\\br*{\r\n                  Z_{n+m+k+l}, 0\r\n                }\r\n              }\r\n            }\r\n          $}\r\n          [{$ h\\br*{\r\n                h\\br*{\r\n                  Z_d, ...\r\n                },\r\n                h\\br*{\r\n  
                Z_{n+m+k-1}, Z_e\r\n                }\r\n              }\r\n            $}\r\n            [{$ h\\br*{\r\n                  Z_d, ...\r\n                }\r\n              $}\r\n              [{$Z_d$}][...]\r\n            ]\r\n            [{$ h\\br*{\r\n                  Z_{n+m+k-1}, Z_e\r\n                }\r\n              $}\r\n              [{$Z_{n+m+k-1}$}][{$Z_e$}]\r\n            ]\r\n          ]\r\n          [{$ h\\br*{\r\n                h\\br*{\r\n                  Z_f, ...\r\n                },\r\n                h\\br*{\r\n                  Z_{n+m+k+l}, 0\r\n                }\r\n              }\r\n            $}\r\n            [{$ h\\br*{\r\n                  Z_f, ...\r\n                }\r\n              $}\r\n              [{$Z_f$}][...]\r\n            ]\r\n            [{$ h\\br*{\r\n                  Z_{n+m+k+l}, 0\r\n                }\r\n              $}\r\n              [{$Z_{n+m+k+l}$}][0]\r\n            ]\r\n          ]\r\n        ]\r\n      ]\r\n    \\end{forest}\r\n  }\r\n\\end{align*}\r\n\r\n\r\n\r\n\\noindent\r\nBob retrieves the value of the current Merkle root, $\\roott_{n+m+k+l}$, from the Shield contract.\\\\\r\n\\\\\r\nSince Bob knows that $Z_e$ is at leaf-index $n+m+k$ of $M_{n+m+k+l}$, Bob can also retrieve the path from the leaf $Z_{n+m+k}=Z_e$ to the root $\\roott_{n+m+k+l}$. Path computations are done in \\texttt{zkp/src/compute-vectors.js}.\\\\\r\n\\\\\r\nWe denote this path\r\n\\begin{align*}\r\n  \\phi_{Z_e} = [\\phi_{d-1}, \\phi_{d-2},..., \\phi_{1}, \\phi_0]\r\n\\end{align*}\r\nNote that $\\phi_0 = \\roott_{n+m+k+l}$.\\\\\r\n\\\\\r\nBob also retrieves the `sister-path' of this path:\r\n\\begin{align*}\r\n  \\psi_{Z_e} = [\\psi_{d-1}, \\psi_{d-2},..., \\psi_{1}, \\psi_0]\r\n\\end{align*}\r\nwhere $\\psi_0 = \\phi_0 = \\roott_{n+m+k+l}$.\\\\\r\n\\\\\r\nFor ease of reading, let's focus only on the nodes of $M_{n+m+k+l}$ which Bob cares about for the purposes of burning his token commitment $Z_e$:\r\n\r\n\r\n\\begin{align*}\r\n  \\scalebox{0.85}{\r\n    \\begin{forest}\r\n      [{$\\roott_{n+m+k+l}:= \\phi_{0} = \\psi_{0}$}\r\n        [{$\\psi_{1}$}\r\n          [...\r\n            [...\r\n              [...][...]\r\n            ]\r\n            [...\r\n              [...][...]\r\n            ]\r\n          ]\r\n          [...\r\n            [...\r\n              [...][...]\r\n            ]\r\n            [...\r\n              [...][...]\r\n            ]\r\n          ]\r\n        ]\r\n        [{$\\phi_{1}$}\r\n          [{$\\phi_{2}$}\r\n            [{$\\psi_{3}$}\r\n              [...][...]\r\n            ]\r\n            [{$\\phi_{3}$}\r\n              [{$\\psi_{4}$}][{$Z_e$}]\r\n            ]\r\n          ]\r\n          [{$\\psi_{2}$}\r\n            [...\r\n              [...][...]\r\n            ]\r\n            [...\r\n              [...][0]\r\n            ]\r\n          ]\r\n        ]\r\n      ]\r\n    \\end{forest}\r\n  }\r\n\\end{align*}\r\n\r\n\r\n\\noindent\r\nEquipped with $\\psi_{Z_e}$, Bob can prove that he owns a token commitment at one of the leaves of $M_{n+m+k+l}$, without revealing that it is \"$Z_{n+m+k}$ located at leaf-index $n+m+k$\".\\\\\r\n\\\\\r\n\r\n\\textbf{Steps $4-5$}\r\n\\ \\\\\r\nThese steps are handled within \\hyperref[sec:f-token-controller]{\\texttt{f-token-controller.js}}.\\\\\r\n\\\\\r\nAs a reminder, we let:\r\n\\begin{center}\r\n  \\begin{tabular}{l l}\r\n    $x = (e,\\\r\n          N_{e},\\\r\n          \\roott_{n+m+k+l})$ & Public Inputs used to generate the Proof\\\\\r\n    $\\omega = 
(\\psi_{Z_e},\\\r\n              sk_B,\\\r\n              \\sigma_e)$ & Private Inputs used to generate the Proof\\\\\r\n  \\end{tabular}\r\n\\end{center}\r\n\\ \\\\\r\n\r\n\r\n\r\n\r\n\\textbf{Steps $6 - 7$}\r\n\\ \\\\\r\nThese steps are handled within a \\hyperref[sec:zokrates]{ZoKrates} container.\\\\\r\n\\\\\r\nBob uses $C_{ft-burn}$ (or simply $C$) -- the set of constraints for a fungible burn, located in \\texttt{zkp/code/gm17/ft-burn} (see \\hyperref[sec:trustedSetup]{Trusted Setup}). $C_{ft-burn}(\\;\\omega,\\;x\\;)$ returns a value of $true$ if Bob provides a set of valid `satisfying' arguments $(\\omega, x)$ to $C$.\\\\\r\n\\\\\r\nLet's elaborate on each of the checks and calculations constraining the inputs to $C$ (we highlight public inputs in \\textbf{bold} below):\r\n\\begin{enumerate}\r\n  \\item Calculate $h(sk_B) =: pk_B'$.\\\\\r\n    Note that this newly calculated $pk_B'$ should equal $pk_B$ (Bob's public key), but we don't need to pass $pk_B$ as a private input and explicitly check that $pk_B'=pk_B$; a check on the correctness of $sk_B$ (and hence $pk_B'$) is implicitly achieved in the next two steps:\r\n  \\item Calculate $h(\\bm{e}\\;|\\;pk_B'\\;|\\;\\sigma_e) =: Z_e'$.\\\\\r\n    Note again that this newly calculated $Z_e'$ should equal $Z_e$ (Bob's token commitment), but we don't need to pass $Z_e$ as a private input and explicitly check that $Z_e'=Z_e$; a check on the correctness of $Z_e$ (and hence $Z_e'$) is implicitly achieved in the next step:\r\n  \\item Check inputs $\\psi_{Z_e}=[\\psi_{d-1}, \\psi_{d-2},..., \\psi_{1}, \\bm{\\psi_{0}=\\roott_{n+m+k+l}}]$ and the newly calculated $Z_e'$ satisfy:\\\\\r\n    $h\\br*{\\psi_{1}\\;|...|\\;h\\br*{\\psi_{d-2}\\;|\\;h\\br*{\\psi_{d-1}\\;|\\;Z_e'}\\;}...} = \\roott_{n+m+k+l} ( =: \\bm{\\psi_{0}})$\\\\\r\n    Given the one-way nature of our hashing function $h$, the only feasible way we could have arrived at the correct value of $\\roott_{n+m+k+l}$ is if the sister-path $\\psi_{Z_e}$ is correct, and if $Z_e'$ is correct, which (working backwards) must mean that $sk_B$ is correct.\r\n\r\n    How does the circuit know the value of $\\roott_{n+m+k+l}$ is correct? It doesn't; but it is a `public input', and we can rely upon the Shield smart contract to check the correctness of all public inputs.\\\\\r\n  \\\\\r\n  We've therefore shown, in the steps so far, that:\r\n  \\begin{itemize}\r\n    \\item[--] Bob is the owner of a token commitment (because he knows its secret key).\r\n    \\item[--] Said token commitment is indeed a leaf of the on-chain Merkle Tree $M_{n+m+k+l}$.\r\n    \\item[--] The token commitment does indeed represent a value of $e$ ERC-20 tokens (remember that $e$ is a public input to a `burn' zk-SNARK).\r\n  \\end{itemize}\r\n\r\n  Bob commits to burning his token $Z_e$ in the next step:\r\n  \\item Check inputs $\\sigma_e, sk_B, \\bm{N_e}$ satisfy:\r\n    $h(\\sigma_e\\;|\\;sk_B) = \\bm{N_e}$\\\\\r\n    $N_e$ is referred to as a `nullifier' because it is understood by all participants to be an indisputable commitment to spend (`nullify') a token commitment. Remember that the token commitment being spent isn't revealed; the earlier steps have allowed Bob to demonstrate hidden knowledge of the secret key $sk_B$ of a token commitment which does indeed exist. By including $sk_B$ in the nullifier's preimage, Bob is binding himself as the executor of this `burn'. 
By including $\\sigma_e$, Bob is specifying a serial number which is unique to the token $Z_e$ (thereby distinguishing this nullifier from those which would nullify any other token commitments he may own).\r\n\\end{enumerate}\r\nNotice how each stage is linked to the last, and that at each of the `Check' stages, private inputs are being reconciled against at least one public input (highlighted in \\textbf{bold} to help you notice). By structuring the circuit $C$ in this way, we are able to share only the public inputs with the Shield contract (along with a `proof' $\\pi_{C,x,\\omega}$). We'll see shortly that the Shield contract checks the correctness of each of the public inputs against its current states.\\\\\r\n\\\\\r\n\r\n\\noindent\r\nIf all of the above constraints are satisfied by the public and private inputs, ZoKrates will generate the proof $\\pi_{C,x,\\omega}$; a proof of knowledge of satisfying arguments $(\\omega, x) \\ s.t. \\ C(\\omega, x) = 1$.\\\\\r\n\\\\\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\textbf{Step $8$}\r\n\\ \\\\\r\nThis transaction is handled within \\hyperref[sec:f-token-zkp]{\\texttt{f-token-zkp.js}}.\\\\\r\n\\\\\r\nHaving generated $\\pi_{C,x,\\omega}$, Bob then sends the following to the Shield contract from his Ethereum address $E_B$:\r\n\\begin{align*}\r\n  &E_B\\\\\r\n  &\\pi_{C,x,\\omega}\\\\\r\n  &x = (e, N_{e}, \\roott_{n+m+k+l})\r\n\\end{align*}\r\n\\\\\r\nRecall that everyone knows the checks and calculations which have been performed in the circuit $C_{ft-burn}$, because it is a public file in the Nightfall repository. Further, everyone knows the verification key $vk_C$ which uniquely represents this circuit, because it has been publicly stored in the Verifier Registry contract. Therefore, when Bob shares the pair $(x, \\pi_{C,x,\\omega})$, and the `unique id' of the relevant verification key $vk_C$, everyone will interpret this information as Bob's intention to burn; and everyone will be convinced that he knows the secret key which permits him to transfer ownership of a token commitment; and everyone will be convinced that that token commitment represents a value of $e$ ERC-20 tokens.\\\\\r\n\\\\\r\n\r\n\r\n\\textbf{Steps $9 - 11$}\r\n\\ \\\\\r\nThe Verifier Registry contract already has stored within it the verification key $vk_C$.\r\nIt runs a verification function $V(vk_C, \\pi_{C,x,\\omega}, x)$.\r\n\\begin{align*}\r\n  V: (vk_C, \\pi_{C,x,\\omega}, x) \\to \\{0,1\\}\r\n\\end{align*}\r\nwhere:\r\n\\[\r\n    V=\r\n\\begin{cases}\r\n    1,& \\text{if } \\pi_{C,x,\\omega} \\text{ and } x \\text{ satisfy } vk_C\\\\\r\n    0,& \\text{otherwise}\r\n\\end{cases}\r\n\\]\r\n\\ \\\\\r\n\r\n\r\n\r\n\r\n\\textbf{Steps $12 - 13$}\r\n\\ \\\\\r\nIf the Verifier contract returns $1$ ($true$, i.e. verified) to the Shield contract, then the Shield contract will be satisfied that Bob's proof and public inputs represent his commitment to burning a token commitment, and to withdrawing its underlying value of $e$ ERC-20 tokens. 
If the Verifier contract returns $0$, then the transaction will revert.\\\\\r\n\\\\\r\nLet's suppose Bob's $(x, \\pi_{C,x,\\omega})$ pair is verified.\\\\\r\n\\\\\r\nFollowing verification of the proof, the Shield contract will do the following:\r\n\\begin{enumerate}\r\n  \\item Check $\\roott_{n+m+k+l}$ is in $\\rootsList$.\\\\\r\n    (If not, the burn will fail)\r\n  \\item Check $N_e$ is not already in the list of nullifiers, which we denote $\\bm{N}$.\\\\\r\n    (If $N_e$ is already in $\\bm{N}$, the burn will fail)\r\n  \\item Transfer a value of $e$ ERC-20 tokens from the Shield contract (i.e. from escrow) to Bob's Ethereum address.\r\n  \\item Append the nullifier $N_e$ to the ever-increasing array $\\bm N$.\r\n\\end{enumerate}\r\n\\ \\\\\r\n\r\n\\textbf{Steps $14 - 15$}\r\n\\ \\\\\r\nBob is now the owner of $e$ more ERC-20 tokens. The Nightfall UI queries the linked ERC-20 contract for tokens Bob owns.\r\nIf Bob ever wished to convert some or all of this value back into a token commitment, he would need to do a fungible `mint' (discussed earlier).", "meta": {"hexsha": "3711bfa21a788408deed30693647b9983fa14a92", "size": 21207, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/whitepaper/protocols/ERC20/burn20.tex", "max_stars_repo_name": "alexcoury/nightfall", "max_stars_repo_head_hexsha": "acb3b36d4a539efeaa6028a12e535027731d9491", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-17T10:09:02.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-17T10:09:02.000Z", "max_issues_repo_path": "doc/whitepaper/protocols/ERC20/burn20.tex", "max_issues_repo_name": "plugged37/nightfall", "max_issues_repo_head_hexsha": "0936a3f45df1462134e94816eb0882f041a4c7bf", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 16, "max_issues_repo_issues_event_min_datetime": "2020-05-06T10:27:23.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T07:57:41.000Z", "max_forks_repo_path": "doc/whitepaper/protocols/ERC20/burn20.tex", "max_forks_repo_name": "plugged37/nightfall", "max_forks_repo_head_hexsha": "0936a3f45df1462134e94816eb0882f041a4c7bf", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.5823529412, "max_line_length": 750, "alphanum_fraction": 0.5280803508, "num_tokens": 5789, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5684943189785584}}
{"text": "\\section{Models for the Microconnectome}\nIn this section, we study the Random Graph models with regards to how well they capture the various properties of the Bio-M MC. We fit the parameters of the Random Graph models to the Bio-M MC.\n\\subsection{\\ER Model}\n\\subsubsection{Construction}\nAs a starting point for the modelling of the MC, we first construct the \\ER random graph model. To be consistent with the statistics of the Bio-M MC, we restricted this model to contain 31,346 neurons and approximately 8 million synaptic connections. To achieve this for the \\ER random graph model, we set each potential connection between any pair of neurons to be ~0.8\\%, that is;\n\n\\begin{equation}\n    P(e) =\\frac{|\\textrm{E}|}{|\\textrm{V}|^{2}} = \\frac{7,822,274}{31,346^2} \\approx 0.8\\%\n\\end{equation}\nwhere e is an edge, $|E|$ is the total number of connections and $|V|$ is the total number of neurons in the Bio-M MC. We typically let the vertex set of labelled neurons to be $V = \\{0, 1, 2, \\dots, |V|-1\\}$\n\n\\begin{algorithm}[H]\n\\SetAlgoLined\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\Input{Vertex Set V, $p \\leftarrow 0.008$, random number generator \\textit{r} $\\in[0, 1)$}\n\\Output{Adjacency Matrix $M$}\n\\Init{}{\n$M[i,j] \\leftarrow 0~~\\forall~ (i,j) \\in V \\times V$}\n\\ForEach{$i,j \\in V \\times V$}{\\If{r $<$ p}{M[i,j] $\\leftarrow$ 1}}\n\\caption{$\\mathcal{G_{ER}} (V, p, r)$}\n\\end{algorithm}\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{ER/matrix_Erdos-Renyi.png}\n\\caption{Sample output from the \\ER model based on the statistics of the Bio-M MC}\n\\end{center}\n\\end{figure}\n\\subsubsection{Block Layout}\nGiven that each neuron pairing has an equal probability of connecting, it is unsurprising that a sample from $\\mathcal{G}_{ER}$, shown in Figure 10, gives a uniform distribution of connections contained within each block. To illustrate this fact further, in Figure 11, we see each block contains approximately $0.8\\%$ of the total potential connections within that block. This type of illustration where we speak purely in terms of the number of potential connections in each block is dealt with for this model only. The subsequent models in later sections will deal with Block-wise Edge Densities with respect to the proportion of connections observed in the functional graph created by the Bio-M MC. We look at this for the \\ER model now.  \nNow, given the distribution of edges in the blocks of the MC, it becomes clear that the block sizes are unequal. We can see this clearly from and 12(b), where we observe that each block contains a varying number of connections, ranging from around 900 to 1.2 million connections. Figure 12(a) gives the Block-wise Edge Densities. 
A breakdown of the TV distance of the Block-wise edge densities from the Bio-M MC can be found in Section 6.4.\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=9cm]{ER/heat_map_layer_Erdos-Renyi probability.png}\n\\caption{Density of edges in each block: $\\frac{\\text{Edges in block}}{\\text{Total number of potential edges}}$}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Block-wise Edge Densities ]{{\\includegraphics[width=7cm]{ER/heat_map_layer_er_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering Block-wise Edge Counts]{{\\includegraphics[width=7cm]{ER/heat_map_layer_Erdos_Renyi.png} }}%\n    \\caption{Connectivity by block of a random graph $\\mathcal{G}_{ER}$ given by the \\ER model}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\subsubsection{Distance distributions}\nTo get an idea of the distribution of distances between neurons under the \\ER model, we superimposed it onto that of the functional graph of the Bio-M MC. We computed the pairwise distances of the indexed neurons and, in Figure 13, plotted the histogram of these connections against that of the Bio-M MC. A metric that we use to ascertain differences here is the TV distance, described in the mathematical preliminaries, between the \\ER model and the Bio-M MC. Briefly, we can state that, over 100 realisations, the mean TV distance between the \\ER model and the Bio-M MC is 0.7092. This is the largest such distance of any model described here. Further details about this statistic are to be found in Section 6, Results.\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=9cm]{ER/Erdos_Renyi_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_{ER}$ \\\\if superimposed onto the functional graph of the Bio-M MC}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Signed Degree of Neurons}\nOne noteworthy difference between the \\ER random graph model and all the other models, as can be seen from the cumulative signed degree graphs in Figure 14, is that each neuron does not maintain the same in-degree and out-degree as in the Bio-M MC, unlike all of our other models, which preserve the in-degree and out-degree of the Bio-M MC. 
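If we take the signed degree of a neuron to be its in-degree minus its out-degree (an assumption on our part; the text's own definition governs), the cumulative curves of Figure 14 could be computed as in the following illustrative sketch (names are ours):
\\begin{verbatim}
import numpy as np

def cumulative_signed_degree(M):
    # M[i, j] = 1 encodes a directed edge from neuron i to neuron j
    in_deg  = M.sum(axis=0)  # column sums: in-degree per neuron
    out_deg = M.sum(axis=1)  # row sums: out-degree per neuron
    return np.cumsum(in_deg - out_deg)
\\end{verbatim}
A model that preserves every neuron's in-degree and out-degree reproduces the Bio-M MC's curve exactly, which is why the \\ER model stands out here.
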
\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Realisation 1 ]{{\\includegraphics[width=7cm]{ER/cumsum_degree_Erdos-Renyi.png} }}%\n    \\qquad\n    \\subfloat[\\centering Realisation 2]{{\\includegraphics[width=7cm]{ER/cumsum_degree_Erdos-Renyi1.png} }}%\n    \\qquad\n    \\subfloat[\\centering Realisation 3]{{\\includegraphics[width=7cm]{ER/cumsum_degree_Erdos-Renyi2.png}}}%\n    \\qquad\n    \\subfloat[\\centering Realisation 4]{{\\includegraphics[width=7cm]{ER/cumsum_degree_Erdos-Renyi3.png}}}\n    \\caption{Realisations of the Cumulative Signed Degree for the \\ER random graph model}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\newpage\n\n%--------------------------------------\n%--------------------------------------\n%--------------------------------------\n\\subsection{Configuration Model}\nThis model reconnects the Bio-M MC using the ``cut-permute-rewire'' algorithm proposed by Watts and Strogatz \\cite{WattsStrogatz1998}.\n\n\\subsubsection{Construction}\nWe use the ``cut-permute-rewire'' algorithm to produce samples from $\\mathcal{G}_{C}$, the Configuration random graph model. This model produces rewired connections by resampling the directed edges of the Bio-M MC. By adhering to the constraints of the algorithm, this model retains the same in-degree and out-degree for each vertex as the Bio-M MC. One difference from the Bio-M MC and the General Biological model (discussed later), however, is that multiple connections may occur between a pair of neurons in the same direction. We allow these connections to exist so as to ensure that each neuron does indeed maintain its in-degree and out-degree. This occurs for the subsequent three models as well, namely the Geometric Configuration model, the Block Configuration model and the Block Geometric Configuration model.\n\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Before configuration of small network]{{\\includegraphics[width=7cm]{configuration/config_before_rewire.png} }}%\n    \\qquad\n    \\subfloat[\\centering After configuration of network]{{\\includegraphics[width=7cm]{configuration/config_after_rewire.png} }}%\n    \\caption{Example of a configuration of a small version of the network}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\begin{algorithm}[H]\n\\DontPrintSemicolon\n\\SetAlgoLined\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Output}{output}\n\\SetKwData{$E$}{$(u, v)$}\n\\Input{$u$, $v$}\n\\Output{Adjacency Matrix $M$}\n\\Init{}{$u \\leftarrow$ Shuffle($u$)\\\\\n$M[i,j] \\leftarrow 0~\\forall (i,j) \\in [u][v]\n$}\n\n\\ForEach{$(i,j) \\in [u][v]$}\n{\\eIf{$i \\neq j$}{$M[i,j] \\leftarrow 1$}{$u \\leftarrow$ Shuffle($u$)}}\n\\caption{$\\mathcal{G_{C}}$(u, v)}\n\\end{algorithm}\n\nFor the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm to run, we first take the ordered set of edges in $E$, where we assign, for each $e \\in E$, $\\tau_1(e) = v_1$ as the pre-synaptic neurons, represented by $u$ in Algorithm 2, and assign $\\tau_2(e) = v_2$ as the post-synaptic neurons, represented as $v$ in Algorithm 2. We then initialise the algorithm by shuffling the order of $u$ so as to obtain a permutation of $u$, the pre-synaptic neurons. We then attempt to connect each vertex pairing provided we do not have the same vertex, that is $i \\neq j$, to avoid self-loops. 
If we happen upon an occurrence of this, then we reshuffle the remaining vertices in $u$ and continue. Upon completion, we acquire a new ordered set of vertices contained in $u^\\prime$. With this new ordered set of vertices in $u^\\prime$, and the vertices in $v$, we can then form the adjacency matrix $M$ that represents a realisation of the random graph model $\\mathcal{G}_{C}$. \n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{configuration/matrix_configuration.png}\n\\caption{Adjacency matrix representing a realisation of the random graph $\\mathcal{G}_{C}$}\n\\end{center}\n\\end{figure}\n\\subsubsection{Block Layouts}\nIn Figure 17, we have a breakdown of the Block-wise Edge connections, both in terms of counts and in terms of density of connections within the functional graph. The density of connections involving layer 1 amounts to less than $1\\%$ of all connections in a realisation of the random graph model $\\mathcal{G}_{C}$ of the MC. Similarly, in keeping with the structure of the Bio-M MC functional graph, the highest number of connections occurs in the self-contained block from layer 6 to layer 6. Further statistics on the TV distance of the Block-wise Edge densities between the models can be found in Section 6, Results.\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Block-wise Edge Densities]{{\\includegraphics[width=7cm]{configuration/heat_map_layer_c_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering  Block-wise Edge Counts]{{\\includegraphics[width=7cm]{configuration/heat_map_layer_configuration.png} }}%\n    \\caption{Connectivity by block of the random graph $\\mathcal{G}_{C}$ given by the Configuration Model}%\n    \\label{fig:example}%\n\\end{figure}\n\\subsubsection{Distance Distributions}\nIn Figure 18, we see a distance distribution similar to that of the \\ER model. As mentioned for the \\ER model, we have a TV distance to the Bio-M MC. For the Configuration model, this is 0.6595. This is an improvement on the TV distance given by the \\ER model. Further details are given in the Results section.\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{configuration/configuration_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_C$}\n\\end{center}\n\\end{figure}\n\\subsubsection{Signed Degree of Neurons}\nFrom Figure 19, we see that each neuron has maintained its in-degree and out-degree, thereby following the constraints of the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm. \n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Signed degree of each neuron ]{{\\includegraphics[width=7cm]{configuration/configuration_sd.png} }}%\n    \\qquad\n    \\subfloat[\\centering Cumulative sum of the Signed Degree of the neurons in a realisation of the random graph $\\mathcal{G}_{C}$]{{\\includegraphics[width=7cm]{configuration/cumsum_degree_configuration.png} }}%\n    \\caption{Neuron Statistics for a realisation of the random graph $\\mathcal{G}_C$}%\n    \\label{fig:example}%\n\\end{figure}\n\n\n\n%-------------------------------------\n%-------------------------------------\n%-------------------------------------\n\\newpage\n\\subsection{Geometric Configuration Model}\nWe introduce the Geometric Configuration (GC) model here. 
This model uses the distances between the connected neurons in the Bio-M MC. The GC model applies the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm to the connected neurons in the Bio-M MC, whilst also considering a geometric constraint of their distances from one another. \n\\subsubsection{Construction}\nThe GC model is a variation of the Configuration model. This model has the added constraint that we must also take into account the locations of the neurons involved in the Bio-M MC, in particular their pairwise distances. In Algorithm 3, we use a probability value $p$, which is determined by the distance distribution given in Figure 8.\n\n\\begin{algorithm}[H]\n\\SetAlgoLined\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Output}{output}\n\\SetKwData{$E$}{$(u, v)$}\n\\Input{$u$, $v$, $p$, random number generator $r$}\n\\Output{Adjacency Matrix $M$}\n\\Init{}{$u \\leftarrow$ Shuffle($u$)\\\\\n$M[i,j] \\leftarrow 0~\\forall (i,j) \\in [u][v]\n$}\n\n\\ForEach{$(i, j) \\in [u][v]$}\n{\\eIf{$i \\neq j$ \\textbf{and} $r < p$}\n{$M[i,j] \\leftarrow 1$}\n{$u \\leftarrow$ Shuffle($u$)}}\n\\caption{$\\mathcal{G}_{GC}$(u, v, p, r) using cut-permute-rewire}\n\\end{algorithm}\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[width=12cm]{GC/gc_example.png}\n\\caption{Example of a collection of neurons and their varying probabilities of connection with neuron 2}\n\\end{center}\n\\end{figure}\nAs a note on the random graph model $\\mathcal{G}_{GC}$, we compute the pairwise distance between each pair of vertices in $E$, whereby we obtain the distance distribution shown in Figure 8. This distribution shows the absolute counts of pairwise distances between neurons. This is then normalised and used as the empirical probability distribution for connection between two neurons based on their pairwise distance. \nThere are occasions where a pair of neurons may have multiple connections heading in the same direction; these are included in order to maintain the same in-degree and out-degree for each neuron.\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GC/matrix_GC.png}\n\\caption{Adjacency matrix $M$ of a realisation of the random graph $\\mathcal{G}_{GC}$}\n\\end{center}\n\\end{figure}\n\\subsubsection{Block Layouts}\nFigure 22 gives the breakdown of Block-wise Edge connections, again, both in terms of counts and in terms of densities of the functional graph. Once again, we can see that a small proportion of edges are contained in layer 1 and that the vast majority of edges lie within layer 6. The TV distances of the Block-wise Edge densities have, relatively speaking, a similar distribution of values to the Configuration model. However, the vast majority of blocks here exhibit smaller TV distances than the Configuration model, by a factor of 10 or more. 
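For concreteness, the TV distance used throughout these comparisons (as described in the mathematical preliminaries) reduces, for two histograms over the same bins, to the following sketch (the function name is ours):
\\begin{verbatim}
import numpy as np

def tv_distance(p, q):
    # Total variation distance between two discrete distributions:
    # half the L1 distance between the normalised histograms.
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return 0.5 * np.sum(np.abs(p - q))
\\end{verbatim}
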
Further details can be found in Section 6, Results.\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Block-wise Edge Densities]{{\\includegraphics[width=7cm]{GC/heat_map_layer_gc_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering Block-wise Edge Counts]{{\\includegraphics[width=7cm]{GC/heat_map_layer_GC.png} }}%\n    \\caption{Connectivity by block of the random graph $\\mathcal{G}_{GC}$ given by the GC Model}%\n    \\label{fig:example}%\n\\end{figure}\n\\subsubsection{Distance Distribution}\nWe have a comparison of the distance distributions for a realisation of the Geometric Configuration model and the Bio-M MC in Figure 23. We see from this graph that the TV distance is much lower here than with the \\ER model and the Configuration model, with a mean TV distance of 0.3169 after 100 realisations. This is to be expected given the construction of this model. More details of this statistic are found in Section 6, Results.\n\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GC/GC_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_{GC}$}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Signed Degree of Neurons}\nFigure 24 shows us that the in-degree and out-degree have been maintained, thereby showing again that the constraints of the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm have been followed and that the graph representing the network is indeed finite.\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Signed degree for each neuron ]{{\\includegraphics[width=7cm]{GC/GC_sd.png} }}%\n    \\qquad\n    \\subfloat[\\centering Cumulative sum of the Signed Degree of the neurons in a realisation of the random graph $\\mathcal{G}_{GC}$]{{\\includegraphics[width=7cm]{GC/cumsum_degree_GC.png} }}%\n    \\caption{Neuron statistics for the random graph $\\mathcal{G}_{GC}$}%\n    \\label{fig:example}%\n\\end{figure}\n\n\n \n\\newpage\n\\subsection{Block Configuration Model}\nIn this section, we describe the Block Configuration model. This model uses the biological constraint of splitting up the MC into blocks comprising pre-synaptic neurons in a layer, say $i$, to post-synaptic neurons in a layer, say $j$, and using the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm in each block so that the layer-specific in-degrees and out-degrees for each neuron in each block are preserved, thereby ensuring that this also remains the case for the entire random graph $\\mathcal{G}_{BC}$.\n\n\\subsubsection{Construction}\nThe Block Configuration model is a hybrid of both the General Biological model (discussed later) and the Configuration model. This model divides up the connectome by layer rather than by morphological type, as was the case in the General Biological model, and uses the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm. \n\nThe construction process of the Block Configuration model begins with dividing up the MC into its 25 blocks, as shown in Figure 6. This means that we take a subset of vertices $V$ and edges $E$ from the Bio-M MC for each block. For each subset, we apply Algorithm 2. The resulting ordered sets $u^\\prime$ and $v$ are then stored until the algorithm has been applied to all 25 blocks. 
On completion, these 25 blocks, containing the ordered sets of vertices, are then used to construct a realisation of the random graph $\\mathcal{G_{BC}}$.\n\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{BC/matrix_Block_Configuration.png}\n\\caption{Adjacency Matrix $M$ representing a realisation of the random graph $\\mathcal{G}_{BC}$}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Block Layouts}\nGiven that we have subdivided the MC into 25 blocks on a layer-by-layer basis, the number of edges contained in each block will be precisely the same as in the Bio-M MC. This is shown in Section 6, where we detail the TV distance for Block-wise Edge densities between this model and the Bio-M MC. As with the Configuration model and the Geometric Configuration model previously, we observe pairs of neurons with multiple connections in the same direction.\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Block-wise Edge Densities]{{\\includegraphics[width=7cm]{BC/heat_map_layer_bc_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering Block-wise Edge Counts ]{{\\includegraphics[width=7cm]{BC/heat_map_layer_Block.png} }}%\n    \\caption{Connectivity by block of the random graph $\\mathcal{G}_{BC}$ given by the BC model}%\n    \\label{fig:example}%\n\\end{figure}\n\\subsubsection{Distance Distribution}\nOver 100 realisations, the Block Configuration model yielded a mean TV distance of 0.3937. This is a large improvement on the \\ER model and the Configuration model; however, it is weaker than the GC model. Despite not imposing an explicit geometric constraint on how a pair of neurons connect, dividing up the MC into these blocks clearly plays a similar role in geometrically constraining the connections, since we are only able to rearrange the connections within these specific blocks. \n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{BC/Block_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_{BC}$}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Signed Degree of Neurons}\nAs mentioned at the head of this subsection, we have the same in-degrees and out-degrees for each neuron in each block, thereby ensuring the same for the whole random graph.\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering  Signed Degree of each Neuron]{{\\includegraphics[width=7cm]{BC/Block_Configuration_sd.png} }}%\n    \\qquad\n    \\subfloat[\\centering Cumulative sum of the Signed Degree of the neurons in a realisation of the random graph $\\mathcal{G}_{BC}$ ]{{\\includegraphics[width=7cm]{BC/cumsum_degree_Block_Configuration.png} }}%\n    \\caption{Neuron statistics for the random graph $\\mathcal{G}_{BC}$}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\newpage\n\n\\subsection{Block Geometric Configuration Model}\nThe Block Geometric Configuration model uses both the ``cut-permute-rewire'' \\cite{WattsStrogatz1998} algorithm and the geometric constraint mentioned in the GC model, with a slight variation here. \n\\subsubsection{Construction}\nThe final model that we have is the Block Geometric Configuration model. The first part of the construction process is to divide the MC into blocks, as was the case for the Block Configuration model. 
The second part of the process is then to compute the pairwise distances of connected neurons in each block. That is, we take the subset of vertices and edges in each block, and compute the distances between these neurons. We then apply Algorithm 2 to each block and return the corresponding subsets $u^\\prime$ and $v$. Upon completion of all 25 blocks, we can then use these ordered sets of vertices to construct a realisation of the random graph model $\\mathcal{G_{BGC}}$. Thus, $\\mathcal{G_{BGC}}$ not only preserves the in-degrees and out-degrees layer-specifically for each neuron but also the probability of connecting neurons based on the empirical pairwise distances of connected neurons.\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GBC/matrix_BGC.png}\n\\caption{Adjacency Matrix $M$ representing a realisation of the random graph $\\mathcal{G}_{BGC}$}\n\\end{center}\n\\end{figure}\n\n\n\\subsubsection{Block Layouts}\nJust as we saw for the Block Configuration model, each Block-wise Edge density gives a TV distance of 0 to the Bio-M MC. As you will see from Figure 30, the values match those of the Bio-M MC. This, like the Block Configuration model, is achieved because all edge shuffling occurs within the blocks into which we split the MC.\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Proportion of connections in each block ]{{\\includegraphics[width=7cm]{GBC/heat_map_layer_bgc_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering Total number of connections in each block ]{{\\includegraphics[width=7cm]{GBC/heat_map_layer_Block.png} }}%\n    \\caption{Connectivity by block of the random graph $\\mathcal{G}_{BGC}$ given by the BGC model}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\subsubsection{Distance Distribution}\n\nFigure 31 reveals a very similar distribution for the BGC model in comparison to the BC model. This perhaps suggests that the geometric constraint has already been taken care of by splitting up the MC into these layer-by-layer blocks. Over 100 realisations of the BGC model yielded a mean TV distance of 0.3937. This is only marginally better than the BC model. Details of these differences are contained in Section 6, Results.\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GBC/Block_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_{BGC}$}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Signed Degree of Neurons}\nJust as with the other configuration-based models, we have the same in-degree and out-degree for each neuron, as described by the plots in Figure 32. \n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Signed Degree of each Neuron]{{\\includegraphics[width=7cm]{GBC/BGC_sd.png} }}%\n    \\qquad\n    \\subfloat[\\centering Cumulative sum of the Signed Degree of the neurons in a realisation of the random graph $\\mathcal{G}_{BGC}$]{{\\includegraphics[width=7cm]{GBC/cumsum_degree_BGC.png} }}%\n    \\caption{Neuron statistics for the random graph $\\mathcal{G}_{BGC}$}%\n    \\label{fig:example}%\n\\end{figure}\n\n\n\n\\newpage\n\\subsection{General Biological Model}\n\\subsubsection{The Model}\nThe General Biological model is the model that is proposed in the Frontiers article \\cite{Reimann_2017}. 
\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GB/matrix_general_biol.png}\n\\caption{Adjacency Matrix $M$ representing a realisation of the random graph $\\mathcal{G}_{GB}$}\n\\end{center}\n\\end{figure}\n\nThis model subdivides the MC into 3025 sub-matrices which represent each morphological neuron type from pre-synaptic to post-synaptic neurons. That is, we have 55 different morphological neuron types, giving $55 \\times 55 = 3025$ ordered pairs of types. Now, within these sub-matrices, we compute the distances between all pairs of neurons. These distances get binned, as mentioned in Section 4.1.3, into bin sizes of 75$\\mu$m. Now, within these bins, we then shuffle the connections, so that distance is preserved up to the bin width. We can see this diagrammatically in Figure 35. When the shuffling is completed, the sub-matrices are reassembled to form a complete matrix which now represents a realisation of the random graph $\\mathcal{G}_{GB}$ given by the General Biological model. \n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Connections in each morphological block]{{\\includegraphics[width=7cm]{GB/heat_map_morph_layer_general_biol.png} }}%\n    \\qquad\n    \\subfloat[\\centering A closer look at the morphological block L1\\_DAC ]{{\\includegraphics[width=7cm]{GB/L1_DAC.png} }}%\n    \\caption{Connectivity of GB Model}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Sample of connected graph before shuffling]{{\\includegraphics[width=7cm]{GB/before.png} }}%\n    \\qquad\n    \\subfloat[\\centering The same sample after shuffling ]{{\\includegraphics[width=7cm]{GB/after.png} }}%\n    \\caption{An example of a block before shuffling connections and after}%\n    \\label{fig:example}%\n\\end{figure}\nFigure 34(a) represents the number of connections in each block. The blocks are split by morphological type. As we can see from the bar on the right-hand side, darker shades show blocks that are highly populated with connections, while lighter shades represent less populated blocks. \nFigure 34(b) gives a representation of an individual morphological block, in this case L1 DAC (Layer 1 Descending Axonal Collaterals), where we see pairwise neuron distances computed and binned according to a bin size of 75$\\mu$m. The leading diagonal is the brightest shade of yellow and represents the smallest bin, that is, a distance of 0$\\mu$m; these entries are discounted, since they represent self-loops, which do not exist. This leaves the remaining shades to be shuffled in each respective group. \nFigures 35(a) and 35(b) represent a sample shuffling of connections, whereby the connections and non-connections are shuffled within their respective bins as defined by the shades.\n\\subsubsection{Block Layouts}\nNow, by restricting the connection shuffling to morphological neuron type blocks, which are subsets of the layer-by-layer blocks, we are able to maintain a Block-wise Edge density that is consistent with the Bio-M MC, therefore giving us a TV distance for Block-wise Edge Densities of 0. 
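The within-bin shuffling described in the previous subsection can be sketched as follows for a single morphological sub-matrix (purely illustrative and hypothetical; the study's actual implementation may differ in detail):
\\begin{verbatim}
import numpy as np

def shuffle_within_bins(A, D, bin_width=75.0, seed=0):
    # A: 0/1 sub-matrix of connections; D: matching matrix of pairwise
    # distances. Entries are shuffled only among positions whose
    # distances fall in the same 75-micron bin, so the distance
    # distribution of connections is preserved up to the bin width.
    rng = np.random.default_rng(seed)
    A = A.copy()
    bins = np.floor(D / bin_width).astype(int)
    for b in np.unique(bins):
        idx = np.argwhere((bins == b) & (D > 0))  # skip 0-distance self-loops
        vals = A[idx[:, 0], idx[:, 1]]
        rng.shuffle(vals)
        A[idx[:, 0], idx[:, 1]] = vals
    return A
\\end{verbatim}
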
\n\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Block-wise Edge Densities]{{\\includegraphics[width=7cm]{GB/heat_map_layer_gb_test.png} }}%\n    \\qquad\n    \\subfloat[\\centering  Block-wise Edge Counts ]{{\\includegraphics[width=7cm]{GB/heat_map_layer_general_biol.png} }}%\n    \\caption{Connectivity by block of the random graph $\\mathcal{G}_{GB}$ given by the GB model}%\n    \\label{fig:example}%\n\\end{figure}\n\n\\subsubsection{Distance Distributions}\nFigure 37 shows that, with distance-dependence taken into account by keeping bin sizes to a maximum of $75\\mu$m, we are able to maintain the exact same distribution as that of the Bio-M MC. In fact, the mean TV distance over 100 realisations is some $\\epsilon$. That is to say, the difference is statistically insignificant: a non-zero value occurred only once and was of the order $\\epsilon < 1\\times10^{-7}$. The details of the exact value are given in Section 6, Results.\n\n\\begin{figure}[H]\n\\begin{center}\n\\captionsetup{justification=centering}\n\\includegraphics[width=12cm]{GB/general_biol_dist_distr.png}\n\\caption{Distance distribution of connected neurons for the random graph $\\mathcal{G}_{GB}$}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Signed Degree of Neurons}\nOne curiosity that occurred when constructing the random graph for the General Biological model was the directionality observed upon computing the cumulative sum. This exhibited a different path from that of all other models, which is why it is shown in Figure 38(b). The underlying reason for this is that, whilst the construction process restricted the MC into blocks of morphological type and shuffled connections there, the shuffling wasn't based on the pre-existing connections in the MC, but rather on the distances between all neurons within each block, therefore creating entirely new sets of pre-synaptic and post-synaptic neurons. 
This, therefore, does not preserve the in-degree and out-degree of each neuron.\n\\begin{figure}[H]%\n    \\centering\n    \\captionsetup{justification=centering}\n    \\subfloat[\\centering Cumulative Signed Degree]{{\\includegraphics[width=7cm]{GB/cumsum_degree_general_biol.png} }}%\n    \\qquad\n    \\subfloat[\\centering Directionality]{{\\includegraphics[width=7cm]{GB/overall_degree_general_biol_sd_2.png} }}%\n    \\caption{A sample of the cumulative signed degree and the directionality of the GB model}%\n    \\label{fig:example}%\n\\end{figure}", "meta": {"hexsha": "267485f6c79e9f5c897a020a880b6911d9ffdfc6", "size": 30427, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2020UUMScKieranBarber/tex_files/modelsForTheConnectome.tex", "max_stars_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork", "max_stars_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2020UUMScKieranBarber/tex_files/modelsForTheConnectome.tex", "max_issues_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork", "max_issues_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-02-12T15:21:35.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-12T15:51:42.000Z", "max_forks_repo_path": "2020UUMScKieranBarber/tex_files/modelsForTheConnectome.tex", "max_forks_repo_name": "lamastex/working-manuscript-TopologicalDataAnalysisOnABrainNetwork", "max_forks_repo_head_hexsha": "c43b0a79e6d17a069b3d1297a7de248a01050045", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.7604651163, "max_line_length": 967, "alphanum_fraction": 0.765274263, "num_tokens": 7744, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.5684943189785583}}
{"text": "\\problemname{Debugging}\n\n\\illustration{0.5}{bug}{Picture in public domain via \\href{https://en.wikipedia.org/wiki/File:H96566k.jpg}{Wikimedia Commons}}%\n\\noindent\nYour fancy debugger will not help you in this matter. There are many\nways in which code can produce different behavior between debug and\nrelease builds, and when this happens, one may have to resort to more\nprimitive forms of debugging.\n% -- a real coder uses them all!\n\nSo you and your printf are now on your own in the search for a line of\ncode that causes the release build to crash. Still you are lucky:\nadding printf statements to this program affects neither the bug (it\nstill crashes at the same original code line) nor the execution time\n(at least not notably).  So even the naive approach of putting a\nprintf statement before each line, running the program until it\ncrashes, and checking the last printed line, would work.\n\nHowever, it takes some time to add each printf statement to the code,\nand the program may have a lot of lines. So perhaps a better plan would involve\nputting a printf statement in the middle of the program, letting it run, seeing\nwhether it crashes before the added line, and then continuing the search in\neither the first or second half of the code.\n\nBut then again, running the program may take a lot of time, so the\nmost time-efficient strategy might be something in between.  Write a\nprogram that computes the minimum worst-case time to find the crashing\nline (no matter where it is), assuming you choose an optimal strategy\nfor placing your printf statements.\n\nWe're releasing the new version in five hours, so this issue is escalated and needs to be fixed ASAP.\n%Did I already mention this issue is escalated and needs to be fixed asap?\n\n\n\\section*{Input}\n\nThe input consists of one line with three integers:\n\\begin{itemize}\n\\item $n$ ($1 \\le n \\le 10^6$), the number of code lines;\n\\item $r$ ($1 \\le r \\le 10^9$), the amount of time it takes to compile and run the program until it crashes;\n\\item $p$ ($1 \\le p \\le 10^9$), the time it takes to add a single printf line.\n\\end{itemize}\n\nYou have already run the program once and therefore already know that it does crash somewhere.\n\n\\section*{Output}\n\nOutput the worst-case time to find the crashing line when using an\noptimal strategy.\n\n\n", "meta": {"hexsha": "c0f7edf3381b48a6e417002b6882def768523ca3", "size": 2290, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problems/debugging/problem_statement/problem.en.tex", "max_stars_repo_name": "stoman/CompetitiveProgramming", "max_stars_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-22T13:21:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-12T22:26:26.000Z", "max_issues_repo_path": "problems/debugging/problem_statement/problem.en.tex", "max_issues_repo_name": "stoman/CompetitiveProgramming", "max_issues_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/debugging/problem_statement/problem.en.tex", "max_forks_repo_name": "stoman/CompetitiveProgramming", "max_forks_repo_head_hexsha": "0000b64369b50e31c6f48939e837bdf6cece8ce4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0384615385, "max_line_length": 127, "alphanum_fraction": 0.772489083, "num_tokens": 539, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.7981867729389246, "lm_q1q2_score": 0.5684943184719106}}
{"text": "\\section{Model Optimization}\n\\label{sec:optimizerStrategies}\n\nWhen analyzing the range of values obtainable by a model, frequently a key question is ``what set of\nparameters result in the best response value?''  To answer this question, RAVEN uses the \\xmlNode{Optimizer},\na powerful sampler-like entity that searches the input space to find minimum or maximum values of a response.\n\nIn the remainder of this section, we will explore how to use the optimizer using a simple analytic problem,\nwith a two-dimensional input space and single response of interest.  After getting used to running with the\noptimizer, we will add increasing complexity, including changing adaptive step sizes, initial conditions,\nparallel trajectories, input space subdivision, input space constraints, and response constraints.\n\nTo demonstrate the operation of the Optimizer entities in RAVEN, the model we consider is the Beale function,\nwhich is documented in the analytic tests for RAVEN and replicated here:\n\n\\begin{itemize}\n  \\item Function: $f(x,y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2$\n  \\item Domain: $-4.5 \\leq x,y \\leq 4.5$\n  \\item Global Minimum: $f(3,0.5)=0$\n\\end{itemize}\n\nThe two inputs are the variables $x$ and $y$, and the response is a value we'll assign to $ans$, short for\n``answer''.  The model is an external model in RAVEN, and can be found at\n\\begin{verbatim}\n  raven/tests/framework/AnalyticModes/optimizing/beale.py.\n\\end{verbatim}\nThe function's values are distributed as in Fig. \\ref{fig:beale}.\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.7]{../../tests/framework/user_guide/optimizing/Beale_grid.png}\n  \\caption{Plot of Beale's function for Optimization}\n  \\label{fig:beale}\n\\end{figure}\n\nNote that throughout this example we use the SPSA optimizer by way of demonstration, since it is the first\nadvanced algorithm for optimization included in RAVEN; many of the options and parameters apply to other\noptimizers, and details can be found in the RAVEN user manual.\n\n\\subsection{Introduction: The Optimizer Input}\nAs with other entities, the Optimizer gets its own XML block in the RAVEN input.\nHere's an example of an input for a SPSA optimizer named \\xmlString{opter}:\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Optimizers}\nThis is the smallest amount of input needed to run an optimization problem, with the exception that we include\nthe \\xmlNode{initialSeed} to maintain consistent results.  Note the required blocks included\nto define the optimizer:\n\\begin{itemize}\n  \\item \\xmlNode{TargetEvaluation}, which declares the DataObject that the optimization search evaluations are\n    going to be placed in.  All of the optimal points found as part of the optimization, as well as any other\n    points evaluated as part of the algorithm, are placed in this object so the optimizer can retrieve this\n    information later.  When this data object is defined, it is critical that the objective variable is\n    defined in the output space, and the input variables in the input space, so the optimizer can collect the\n    results of its sampling.  The data object type should be ``PointSet'' for this data object.  In this\n    example, we use the self-descriptive \\emph{optOut} data object.\n  \\item \\xmlNode{variable}, which is where you can define the input space variables, one for each of these\n    nodes.  
Declaring a variable here informs the optimizer that you want it to find the optimal value for\n    this variable, along with the other variables declared in their own blocks.  Additionally, you define the\n    upper and lower bounds of the variable, which will give the optimizer some general expectations for\n    finding the optimal point; it will never try to sample a value smaller than the lower bound or larger than\n    the upper bound.  In the example we define variables \\emph{x} and \\emph{y} as our input variables, and\n    both of them coincidentally range between -4.5 and 4.5.  We set the initial values for both variables to 0\n    through the \\xmlNode{initial} block, which is required in most cases; the exception is when a\n    preconditioner sets them in multilevel optimization, but we're not concerned with that feature at this\n    point.\n  \\item \\xmlNode{objectVar}, which is where you indicate the variable for which you want to find the minimum (or,\n    if you change the default, maximum).  As listed here, we want to minimize the value of \\emph{ans} given a\n    range of possible values for \\emph{x} and \\emph{y}.\n\\end{itemize}\nThe other critical blocks in this input are as follows:\n\n\\subsubsection{Models}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Models}\nNote that we define the external model with the name \\xmlString{beale} and provide a path to the analytic\nmodel itself.  This model is set up with the \\texttt{run} method that allows RAVEN to run the model.  We also\nlist all our input/output variables, \\emph{x, y}, and \\emph{ans}.\n\n\\subsubsection{Data Objects}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{DataObjects}\nWe have three data objects: \\xmlString{dummyIN}, which is necessary to define the input space of our external\nmodel in the Steps; \\xmlString{optOut}, which will hold all of the samples taken by our optimizer; and\n\\xmlString{opt\\_export}, which will hold the actual solution path taken by our optimizer.  Note that while\n\\xmlString{optOut} is a Point Set, \\xmlString{opt\\_export} is a History Set.  This is because we store the\npath travelled by the optimization algorithm as a history, with \\emph{varsUpdate} keeping track of the\noptimization steps taken.  Note especially how the input of \\xmlString{opt\\_export} is set to \\emph{trajID},\nwhich is a special keyword for the Optimizer history tracking, as is the output variable \\emph{varsUpdate}.\nThere are several other special keyword outputs that can be written to the Solution Export data object, which\ncan be found in the user manual.\n\n\\subsubsection{Out Streams}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{OutStreams}\nHere we define the way to print the output of our optimization algorithm.  There's not much to note, except\nthat we'll be printing the optimization path as a History-based CSV.\n\n\\subsubsection{Steps}\n\\xmlExample{framework/user_guide/optimizing/simple.xml}{Steps}\nHere we put it all together into a workflow that RAVEN can follow.  We only need two steps: one to optimize,\nand one to print out the results.  To actually perform the optimization, we need a MultiRun step, which we\ncleverly name \\xmlString{optimize}.  For input we take the placeholder data object \\emph{dummyIN}, which sets\nup the input space of the model we defined, \\emph{beale}.  Where a \\xmlNode{Sampler} would normally go, we\ninclude the \\xmlNode{Optimizer} we defined earlier.  
We output to the same data object we indicated in the\nOptimizer's \\xmlNode{TargetEvaluation} node.  Finally, we note specifically the use of the\n\\xmlNode{SolutionExport} node.  The data object defined in this node is where the Optimizer will write the\noptimization path history, with the final entry being the last step taken by the optimizer.  The IOStep is\nunremarkable, used simply to write out the optimization path history to file.\n\n\\subsubsection{Conclusion}\nAfter reviewing the components (don't forget the RunInfo block!), you can run this example and see the\nresults.  In particular, we can view the history of the optimizer in \\texttt{Simple/opt\\_export\\_0.csv}.  Note\nthat \\texttt{opt\\_export} is the name of the \\xmlNode{Print} OutStream we defined in the input file, and the\n\\texttt{\\_0} indicates this is the first optimization path followed (we'll cover multiple paths later in\nsection \\ref{subsec:opt parallel traj}).\n\nWhen we open the file (preferably in a CSV reader such a spreadsheet viewer), we see a CSV with four headers,\nthe outputs defined in the data object in the input\nfile: \\emph{x}, \\emph{y}, \\emph{ans}, and \\emph{varsUpdate} (not necessarily in that order).  \\emph{x},\n\\emph{y}, and \\emph{ans} are the values of the variable at each optimization iteration, while\n\\emph{varsUpdate} gives the sequential order of the optimization iteration.\n\nIf we look at the last line, we converged around $f(1.61, -0.189) = 1.67$, which is okay but still quite a\nways from the analytic optimal point $f(3, 0.5) = 0$.  If we look at the output from the run, we can look at\nthe last time RAVEN was ``Checking convergence for Trajectory 0''.  Below that statement, there are a series\nof convergence criteria and their status.  We can see that the reason we converged at the end is the\n\\texttt{Relative Loss Diff}, which means the relative change in the response \\emph{ans} was sufficiently\nsmall between steps to cause convergence.  Clearly, we claimed convergence prematurely because of the default convergence\nvalues used by the optimizer.  Because these convergence criteria are very problem-specific, the default\nparameters will not work best for all problems.\n\nWe can improve this result by changing convergence\nparameters as well as step size growth and shrink factors, all of which can be found in the user manual, and\nmany of which we'll discuss in the rest of this section.\n\n\\subsection{Initial Conditions and Parallel Trajectories} \\label{subsec:opt parallel traj}\nNotice we set the optimization search to start at $(0,0)$.\n%By default, RAVEN chooses the center\n%point of the input space as an initial value.\nYou can change this initial value through\nthe \\xmlNode{initial} block within the \\xmlNode{variable} definition node.\n\nFurthermore, RAVEN offers the possibility to run multiple optimization paths in parallel.  Because many\n(perhaps most) optimization techniques get stuck in local minima, using multiple paths (or \\emph{trajectories} as\nthey are called in RAVEN) increases the likelihood that one of the trajectories will find the global minimum\npoint.  
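To see why multiple starting points help on a surface like Beale's function, the following RAVEN-independent Python sketch runs a plain normalised-gradient descent from several starts (illustrative only; RAVEN's SPSA instead uses stochastic gradient estimates and the adaptive gains discussed later):
\\begin{verbatim}
import numpy as np

def beale(v):
    x, y = v
    return ((1.5 - x + x*y)**2 + (2.25 - x + x*y**2)**2
            + (2.625 - x + x*y**3)**2)

def descend(start, step=1e-3, iters=20000, h=1e-6):
    v = np.array(start, dtype=float)
    for _ in range(iters):
        g = np.zeros(2)
        for i in range(2):  # central-difference gradient estimate
            e = np.zeros(2)
            e[i] = h
            g[i] = (beale(v + e) - beale(v - e)) / (2 * h)
        n = np.linalg.norm(g)
        if n < 1e-12:
            break
        v -= step * g / n  # fixed-length step along -gradient
    return v, beale(v)

for start in [(-2, -2), (-2, 2), (2, -2), (2, 2)]:
    print(start, descend(start))
\\end{verbatim}
Depending on the start, such a search may or may not end near the global minimum at $(3,0.5)$, which is exactly the motivation for running several trajectories.
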
You can request multiple trajectories by providing a variety of initial conditions in the\n\\xmlNode{initial} block, as shown in this Optimizer example:\n\\xmlExample{framework/user_guide/optimizing/multiple_trajectories.xml}{Optimizers}\nNote that the ordered pairs are split across the \\xmlNode{initial} nodes, so that the first trajectory will\nstart as a point made up of all the first entries, the second trajectory starts at all the second entries, and\nso on.  In this case, we've requested starting points at (-2,-2), (-2,2), (2,-2), and (2,2).  This (and\ndefining a new working directory in the \\xmlNode{RunInfo} block) is the only input change between the original\nfile and this one.\n\nWhen run, we can see the results in the working directory \\texttt{MultipleTraj}.  There, we see the same files\nas for the base case, plus \\texttt{opt\\_export} files 0-3 instead of just 0.  Each of these corresponds to the\npath followed from one of the initial points, as you can see at the top of each of these CSV files.  We can\nsee that trajectory 3 (which started at (2,2)) ended close to the analytic optimal point, while trajectory 0\nwas far from it.\n\nIn order to see a summary of the end point of each trajectory, we can use the additional RAVEN step described\nin \\ref{subsec:opt summarizing results}.\n\n\n\\subsection{Summarizing Results} \\label{subsec:opt summarizing results}\nWhile seeing the histories of the optimization paths taken in the \\xmlNode{SolutionExport} is informative,\nsometimes we just want to see the final optimal point found by each trajectory.  In this case, we can use an\n\\emph{InterfacedPostProcessor} called \\texttt{HistorySetSnapShot} to get the last point found by each optimizer.  The\npostprocessor looks like this:
The variable \\emph{trajID} gives the trajectory label, \\emph{x} and \\emph{y} give\nthe final location of the optimal point found by that trajectory, \\emph{ans} gives the value of the response\nat that location, and \\emph{varsUpdate} shows how many iterations were necessary to find that optimal point.\nThe trajectory with the lowest value for \\emph{ans} found the best point.\n\n\\subsection{Adjusting Adaptive Steps} \\label{subsec:opt stepsize}\nAs we've seen, some of the optimization paths are struggling to converge to meaningful optimal solutions.\nOne way to improve this is to tinker with the convergence tolerances defined in the user manual.  Another is\nto change the step size modifications used as part of the search process, which we discuss in this section.\nFirst, we briefly discuss how the SPSA chooses its step size, so we can make informed choices about what\nparameters to use.\n\nBecause SPSA is a gradient-based method, it operates by starting at a particular point, estimating the\ngradient at that point, then taking a step in the opposite direction of the gradient in order to follow a\ndownhill path.  It adaptively chooses how long a step to take based on its history.  If the gradient is in\nthe same direction twice in a row, the algorithm assumes there's further to travel, so it increases its step size\nmultiplicatively by the \\emph{gainGrowthFactor}, which by default is 2.0 in RAVEN.  If, on the other hand,\nthe gradient switches directions, then the step size is divided by the \\emph{gainShrinkFactor}, which again is\n2.0 by default in RAVEN.  This means that by default, if the gradient keeps going in the same direction, you\nalways double your step size, while if you're bouncing back and forth in a valley, the step size is halved at\neach iteration.\n\nBy way of note, in higher dimensions, the actual growth or shrink multiplier is scaled by a dot product\nbetween the two previous gradients, with a maximum of the gain growth factor when the dot product is 1 (exactly\naligned) and a minimum of the reciprocal of the gain shrink factor when the dot product is -1 (exactly opposite).  This means if\nthe gradient is at right angles with the past gradient, then the step size remains unchanged (dot product is\n0).\n\nThere are some additional considerations for the step size change, as well.  If the algorithm takes a step,\nthen discovers the new point has a worse response value than the point it's at, it will reject the new point,\nre-evaluate the gradient, and flag the step size to be divided by the gain shrink factor.  Because of this, if\nthe gain shrink factor is too large, false convergence can be obtained when the algorithm struggles to find a\nnew downhill point to move to.  As a result, in practice it is often beneficial to have a gain shrink factor\nthat is smaller than the gain growth factor.\n\nFor this particular example, we use a gain growth factor of 1.5 (meaning when the gradient continues in the same\ndirection our step grows to 150\\% of its old value) and a gain shrink factor of 1.25 (meaning when the\ngradient flips directions our step size shrinks to 80\\% of its old value).  We add this to the base case\n(\\texttt{simple.xml}) to get:\n\\xmlExample{framework/user_guide/optimizing/step_size.xml}{Optimizers}\nNote the definition of the gain growth and shrink factors in the \\xmlNode{convergence} block.  
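To make the adaptive rule concrete, the following is a minimal sketch of the gain logic described above; it is our illustration (in particular, the smooth interpolation between the two factors is an assumption), not RAVEN's actual source:\n\\begin{verbatim}\nimport numpy as np\n\n# Sketch of the adaptive gain rule described above (not RAVEN source).\n# The multiplier is the growth factor when consecutive gradients align\n# (cosine 1), the reciprocal of the shrink factor when they oppose\n# (cosine -1), and 1 when they are orthogonal (cosine 0).\ndef update_step(step, grad_old, grad_new, growth=1.5, shrink=1.25):\n    cos = np.dot(grad_old, grad_new) / (\n        np.linalg.norm(grad_old) * np.linalg.norm(grad_new))\n    factor = growth ** cos if cos >= 0.0 else shrink ** cos\n    return step * factor\n\\end{verbatim}\nWith these values, two aligned gradients grow the step to 150\\% of its old value, and two opposed gradients shrink it to 80\\%, matching the factors chosen above.\n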
Reviewing the\noutput in \\texttt{StepSize}, we can see that more steps were taken than in the case using default step sizes, but\nthe final solution was $f(2.65,0.403)=0.0286$, which is much closer to the analytical solution of $f(3,0.5)=0$\nthan the base case and within our convergence tolerance of the correct solution.\n\nIt is often challenging to find the best gain growth and shrink factors, and these can have a very significant\nimpact on the speed and accuracy of the convergence process.  Too large a shrink factor results in poor\nresolution of valleys, while too small a shrink factor results in many unnecessary evaluations of the model.\n\n\\subsection{Denoising Stochastic Problems}\nWhile many of the models we see in optimization are deterministic (meaning running the same inputs into the\nmodel yields the same results every time), there are quite a few stochastic models that we might desire to\noptimize.  For example, a model that includes Brownian motion or is solved using unseeded random numbers might\nbe considered stochastic.\n\nThe difficulty with optimizing noisy problems rests in the unreliability of a single sample.  If we send a\nsingle set of inputs into a stochastic model, we can't trust the results to be consistent.  One way to measure\nthe consistency of the results is through a signal-to-noise ratio (SNR).  There are many ways to define this\nvalue; for our purposes, we will use the ratio of the mean of the signal to the standard deviation of the\nsignal, $\\mu/\\sigma$.\n\nTo obtain an approximation of your SNR, you can use RAVEN to perform a Monte Carlo run on your model and then\nuse the BasicStatistics postprocessor to collect the mean (expectedValue) and standard deviation (sigma) of\nyour response.  It's important to make this sampling all at a single value in the input space, so replace your\nvariables with constants in the Sampler input.  Once you have the mean and sigma, you have an idea of how\nnoisy your model is.  An SNR of 1 means the signal is just as big as the noise, making it very difficult to\noptimize.  An SNR of less than 1 means the noise dominates the signal, and will make optimization almost\nimpossible without introducing denoising.  An SNR of more than 1 indicates the signal is stronger than the\nnoise, and perhaps denoising is not necessary.  If your standard deviation is 0, then you don't have any\ndiscernible noise!\n\nTo denoise a model in RAVEN SPSA optimization currently, we turn our attention to the \\xmlNode{Optimizer}\nsubnode \\xmlNode{parameter}, specifically the \\xmlNode{numGradAvgIterations} node.  This parameter instructs\nRAVEN to perform multiple gradient evaluations, including multiple evaluations of each optimal point, and use\nthe average to make decisions in optimization pathing.  By default, RAVEN takes one optimal point and one\nneighboring point to evaluate a gradient.  Increasing the \\xmlNode{numGradAvgIterations} will increase the\nnumber of times the optimal point is sampled, and how many neighbor points are sampled.  This serves to\ndenoise the model.\n\nHowever, this also raises the question: how many resamples do I need to denoise my model?  In a Wilks-like\napproach, we want to reduce the size of the confidence interval for our mean to be less than the noise.  The\nnumber of resamples required depends on the $z$-score $z$ of the desired confidence level and the confidence-to-noise ratio\n$\\xi$ we want to ultimately have for the optimization algorithm.  
We also assume the distribution of the\nresponse is roughly Gaussian (normal), which may not be the case.  The approximate condition ensuring the\nconfidence interval is smaller than the noise is\n\\begin{equation}\n  \\frac{z\\sigma}{\\sqrt{n}} \\leq \\xi\\sigma,\n\\end{equation}\nwhich rearranges to\n\\begin{equation}\n  n \\geq \\left(\\frac{z}{\\xi}\\right)^2.\n\\end{equation}\nThus, the number of resamples depends on the confidence level as well as the desired ratio of the confidence interval\nto the noise.\n\nA few values for varying ratios are given in Table \\ref{tab:confidence levels} for the 99\\% confidence level ($z=2.576$).\n\\begin{table}[htb]\n  \\centering\n  \\begin{tabular}{c c}\n    Confidence-to-noise $\\xi$ & Resamples necessary \\\\ \\hline\n    1.0 & 7 \\\\\n    0.9 & 9 \\\\\n    0.7 & 14 \\\\\n    0.5 & 27 \\\\\n    0.1 & 664 \\\\\n    0.05 & 2655\n  \\end{tabular}\n  \\caption{Estimate of the number of samples necessary to denoise models to varying confidence levels}\n  \\label{tab:confidence levels}\n\\end{table}\nThat is, if you want the noise and confidence interval to have the same magnitude, only 7 resamples are\nrequired.  If, on the other hand, you want the confidence interval to be half the level of the noise, 27\nresamples are required.\n\nNote these are only guidelines; individual models may behave differently and require more or fewer resamples to\nprovide a clear optimization path.\n\n\\subsection{Input Space Subdivision}\nIn higher-dimensional problems, sometimes the input space can be divided into characteristic subspaces.  For\nexample, potentially one set of inputs varies slowly with consistent gradient directions in the response, while\nfor another set of inputs the response fluctuates frequently with many local minima.  In this case, it might\nbe beneficial to optimize the subspaces in nested loops, slowly converging the smoothly-varying space while\nfrequently re-converging the undulating space.\n\nFor these cases, RAVEN provides \\emph{multilevel} operation.  To transition our base case to a multilevel\ncase, we change the optimizer as follows:\n\\xmlExample{framework/user_guide/optimizing/multilevel.xml}{Optimizers}\nNote the addition of the \\xmlNode{multilevel} node.  In this node, we need to define the subspaces, then the\nordering of the subspaces in terms of convergence priority.  We intuitively name the subspace containing \\emph{x}\nthe \\emph{xgroup}, and similarly the subspace containing \\emph{y} the \\emph{ygroup}.  Note that while in this example each subspace consists of only\none variable, in general many variables may exist in any given subspace.  Note also that it's a bad idea to\ninclude a variable in multiple subspaces.\n\nThe \\xmlNode{sequence} can be thought of as an order of nesting optimization, where the first subspace listed\nis optimized most slowly (but only once) and the last is optimized frequently and swiftly.  Each time a new\noptimal point is found, the innermost space (in our example \\emph{ygroup}) is converged completely, holding all\nthe variables in all other subspaces (e.g. \\emph{xgroup}) constant at their most recent value.  Once\n\\emph{ygroup} is converged, we take a single step in the subspace \\emph{xgroup}, then return to converging\n\\emph{ygroup} again.  
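Schematically, the nesting behaves like the following runnable sketch; this is our illustration on a simple quadratic with the same optimum location, not RAVEN internals:\n\\begin{verbatim}\n# Illustrative nested (multilevel) descent; not RAVEN code.\ndef f(x, y):\n    return (x - 3.0) ** 2 + 10.0 * (y - 0.5) ** 2\n\ndef converge_ygroup(x, y, lr=0.05, tol=1e-10):\n    # Inner subspace: converge y completely with x held constant.\n    while True:\n        y_new = y - lr * 20.0 * (y - 0.5)\n        if abs(f(x, y_new) - f(x, y)) < tol:\n            return y_new\n        y = y_new\n\nx, y = 0.0, 0.0\nfor _ in range(100):               # outer subspace: one step at a time\n    y = converge_ygroup(x, y)      # fully converge ygroup\n    x = x - 0.1 * 2.0 * (x - 3.0)  # then a single step in xgroup\nprint(x, y)                        # approaches the optimum at (3, 0.5)\n\\end{verbatim}\n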
Every step in \\emph{xgroup} triggers a full convergence of \\emph{ygroup}.\n\nIn the results in \\texttt{Multilevel/opt\\_export\\_0.csv}, you can observe how first one dimension then the\nother is perturbed in the search for the optimal point.\n\nIt should be noted as well that using multilevel optimization does not necessarily improve the optimization path;\nin fact, in many cases, the number of model evaluations increases significantly for little gain in final\noptimal value.  Multilevel is most useful when clustered subspaces share a particular feature in optimization\nthat is useful to isolate from the rest of the optimization procedure, or if some subsets of inputs act\nindependently from others in affecting the response.\n\n\n\\subsection{Functional Constraints} \\label{subsec:opt explicit constraint}\nSometimes an optimization problem has a constrained input space, possibly where there is a tradeoff between\ntwo inputs.  In this event, RAVEN allows the user to define a \\emph{constraint} function, which will cause\nRAVEN to treat this constraint as it would a boundary condition.\n\nFor example, we will introduce a void in the input space where we reject samples.  This void is defined by rejecting\nall samples within $(x-1)^2 + y^2 < 1$.  We'll also include the modified step growth and shrink parameters\ndiscussed in section \\ref{subsec:opt stepsize}.\n\nTo include a constraint function, we first have to define it in the RAVEN input as a \\xmlNode{Function}\nentity:\n\\xmlExample{framework/user_guide/optimizing/constrain.xml}{Functions}\nNote that the file \\texttt{./Constrain/constraint.py} is located relative to the working directory.  Currently,\nexternal functions are always Python files.  In that file,\nnote that the only method is \\texttt{constrain}, which is RAVEN's keyword to find the constraint function.\nRAVEN will pass in a \\texttt{self} object, which will have the function variables defined in the\n\\xmlNode{Functions} input available as members.  
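For the circular void above, the constraint file could be as simple as the following sketch; this is our rendering of the interface the text describes, and the distributed example may differ in details:\n\\begin{verbatim}\n# Sketch of ./Constrain/constraint.py for the void described above.\n# RAVEN passes in an object whose members include the function\n# variables defined in the Functions node (here x and y).\ndef constrain(self):\n    # True: the sample is acceptable (outside the void).\n    return (self.x - 1.0) ** 2 + self.y ** 2 >= 1.0\n\\end{verbatim}\n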
The method \\texttt{constrain} then returns a boolean which is \\texttt{True} if\nthe evaluation does not violate the constraint, or \\texttt{False} if the constraint is violated.\n\nTo attach the constraint to the optimizer, simply add it as an assembled \\xmlNode{Function}:\n\\xmlExample{framework/user_guide/optimizing/constrain.xml}{Optimizers}\n\nAfter running, looking through the path followed by trajectory 0 shows that instead of following the path from\nsection \\ref{subsec:opt stepsize}, the path moves to lower \\emph{y} values before swinging back up toward the\noptimal point.\n\n%\\subsection{Implicit Constraints (Penalties)}\n% TODO come up with a working example\n", "meta": {"hexsha": "a27404516886880424f6bb0ea07944b71908df7d", "size": 25256, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_guide/optimizing.tex", "max_stars_repo_name": "milljm/raven", "max_stars_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-11T15:59:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-08T18:23:57.000Z", "max_issues_repo_path": "doc/user_guide/optimizing.tex", "max_issues_repo_name": "milljm/raven", "max_issues_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-03-27T13:06:00.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-27T13:06:00.000Z", "max_forks_repo_path": "doc/user_guide/optimizing.tex", "max_forks_repo_name": "milljm/raven", "max_forks_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-08-29T16:09:13.000Z", "max_forks_repo_forks_event_max_datetime": "2017-08-29T16:09:13.000Z", "avg_line_length": 70.7450980392, "max_line_length": 127, "alphanum_fraction": 0.7862290149, "num_tokens": 6103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.7981867729389246, "lm_q1q2_score": 0.5684943184719106}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\DeclareRobustCommand{\\us}{\\rule{.2pt}{0pt}\\rule[-.8pt]{.4em}{.5pt}\\rule{.7pt}{0pt}}\n\\begin{document}\n\\begmath 4.7  Sparse Square Nonsingular Systems of Equations\n\n\\silentfootnote{$^\\copyright$ \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nFor a sparse square nonsingular matrix, $A$, of order N, and an\noptional vector or matrix B, this routine will factor A, solve $A X =\nB, \\ A^T X = B$, compute the reciprocal of the condition number, and\nobtain the determinant and the inverse.  This code has similar\nfunctionality to the dense codes described in Chapter 4-1.  The\nprogram in Chapter 19-6 can be used together with the code here to\nsolve problems with the matrix given in Matrix Market format.\n\n\\subsection{Usage}\nWith minimal options, this simply solves $A \\mathbf{x} = \\mathbf{b}$\nfor a single vector $\\mathbf{b}$, and does not preserve the\nfactorization.  Other features are available through the options in\nISPEC.\n\n\\paragraph{Program Prototype, Double Precision}\n\n\\begin{description}\n\n\\item[INTEGER] \\ {\\bf N, ISPEC(?), IA(?)}\n\n\\item[DOUBLE PRECISION] \\ {\\bf A(?), B(?), OPT(3)}\n\n\\end{description}\n\nAssign values to N, ISPEC(), IA(),  A(), and B().\n\n\\begin{center}\n\\fbox{\\begin{tabular}{@{\\bf }c}\nCALL DSPGE (N, ISPEC, IA, A, B, OPT)\\\\\n\\end{tabular}}\n\\end{center}\n\nComputed quantities are returned in IA(), A(), B(), and ISPEC.\n\n\\paragraph{Argument Definitions}\n\n\\begin{description}\n\n\\item[N] [in] The order of the matrix $A$ and the number of rows in\n  $B$ and $X$.\n\n\\item[ISPEC()] [inout] An array used to return a flag, specify\n  dimensions and specify options.  Locations are used as follows.\n  \\begin{description}\n  \\item[1] [inout] Must be 0 if the matrix is not factored.  The value\n    returned on the first call must be preserved on later calls when\n    the matrix is already factored.  If the returned value is $<$ 0,\n    then the matrix was not factored.  See Section~\\ref{sec:errors} on\n    page~\\pageref{sec:errors} for details.\n  \\item[2] [in] Declared dimension of IA() (or at least all that you\n    want this routine to know about).  This number must be at least the\n    space required for storing the factors of the matrix + twice the\n    space needed for the original matrix + 7N.\n  \\item[3] [in] Declared dimension of A(); this can be\n    4N less than that for IA.\n  \\item[4] [out]  The total free space in A at the solution.  If there\n    was insufficient free space this gives the negative of the number\n    of columns left to be factored.\n  \\item[5] [in] Start of options, see the next Section.\n  \\end{description}\n\n\\item[IA()] [inout] IA(1) gives the number of nonzeros in the first\n  column.  IA(2) to IA(1 + IA(1)) give the row indexes for rows in the\n  first column, which must be in increasing order.  IA(2+IA(1)) then\n  gives the number of nonzeros in the second column, etc.  Locations\n  after this input data are used internally.\n\n\\item[A()] [inout] For each row index given in IA, the corresponding\n  location in A contains the coefficient corresponding to that row and\n  column.  
The remaining locations in A are used internally.\n\n\\item[B()] [inout] Used to hold the right hand side(s) on entry and to\n  hold the solution on exit.\n\n\\item[OPT()] [out] Used only for some options.\\\\\n  OPT(1) = reciprocal condition number if requested.\\\\\n  OPT$(2) \\times 10^{\\mathrm{OPT}(3)}$ = the determinant if requested.\n\\end{description}\n\n\\subsubsection{Options}\n\nFirst, a summary of the options.\\vspace{2pt}\n\n\\begin{tabular}{|l@{}r|l|}\\hline\n  \\multicolumn{2}{|c|}{\\rule{0pt}{12pt}Name \\hfill Value}&\n  \\multicolumn{1}{|c|}{Brief Description}\n  \\\\[2pt]\\hline\\rule{0pt}{12pt}%\n  IPS & 0 & Solve $A \\mathbf{x} = \\mathbf{b}$\\\\\n  IPST & 1 & Solve $A^T \\mathbf{x} = \\mathbf{b}$\\\\\n  IPBDIM & 2 & Specifies dimensions on B\\\\\n  IPRCON & 3 & Compute reciprocal condition number\\\\\n  IPDET & 4 & Compute determinant\\\\\n\\hline\n\\end{tabular}\\vspace{5pt}\n\nStarting in ISPEC(5) there is an index for an action which we denote\nhere by $I_a$.  Each $I_a$ has associated with it a parameter name, a\nvalue for that parameter, and a (possibly empty) list of arguments,\nwhich follow immediately after $I_a$ in ISPEC.  After the last\nargument for an $I_a$, the next location in ISPEC contains the next\n$I_a$, with its associated arguments.  This continues until the last\n$I_a$ has been specified.  An option index $\\leq 1$ ends the option\nlist (other options must precede these).  Setting ISPEC(5) = 0 gives\nthe default action.\n\nIt is assumed that B is a one-dimensional array of length N if option\nIPBDIM is not used.\n\nAll the above options except IPBDIM take no arguments, and the\nabove table should be self-explanatory.  IPBDIM uses 3 locations in\nISPEC: the value 2 to indicate this option, then $I_2$ to indicate the\nnumber of columns (default is 1), and then $J_2$ to indicate the\ndistance between successive columns of B.  In addition one can get the\ninverse by setting $I_2 = -1$, in which case DSPGE will initialize B to the\nidentity matrix before computing the solution.  If no backsolve is desired, set\n$I_2$ to 0.  In the case of a B with 5 columns one might have\nISPEC(5)=2, ISPEC(6)=5, and ISPEC(7)=N.  Note that the code always\nassumes that B is dense.\n\nIf you want to compute the inverse matrix and are not an expert,\nchances are you should ask an expert, who is likely to tell you ---\ndon't.  Given the ability to backsolve using a previous factorization,\nthe inverse should very rarely be needed.\n\n\\paragraph{Program Prototype, Single Precision}\n\nUsage is the same as for double precision, except the ``DOUBLE\nPRECISION'' statement is changed to ``REAL'', and the name DSPGE is\nchanged to SSPGE.\n\n\\subsection{Examples and Remarks}\n\nProgram DRDSPGE illustrates the use of DSPGE to solve a system\nof linear equations and compute the reciprocal condition number of the\nmatrix of the system. Output is shown in ODDSPGE. The data for this problem\nwere chosen so the exact solution has components 2, $-$5, and~3.\n\n\\subsection{Functional Description}\n\nTo factor the matrix, Gaussian elimination is used on columns to get a\nfactorization of the form $A = U L^{-1}$.  
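Given this factored form, a solve can be read off directly (this is our reading of the factorization; the\ninternal ordering in the code may differ): $A \\mathbf{x} = \\mathbf{b}$ becomes\n$U L^{-1} \\mathbf{x} = \\mathbf{b}$, so one first solves $U \\mathbf{y} = \\mathbf{b}$ by back\nsubstitution and then forms $\\mathbf{x} = L \\mathbf{y}$.  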
Pivot selection permutes\nrows and columns independently, balancing a desire to keep an upper\ntriangular form of the matrix without sacrificing stability.\n\nInternal scaling first computes scale factors to make all the columns\nhave an $L_\\infty$ norm of 1, and follows this by doing the same to\nthe rows.\n\n\\subsubsection{Estimating reciprocal condition number}\n\nThe condition number of an $n\\times n$ nonsingular matrix, $A$, is defined\nas $\\kappa = \\|A\\|\\times \\|A^{-1}\\|$, a quantity that is never less than\nunity.  This is the largest factor by which the relative error in a\nvector, ${\\bf y}$, can be amplified as a result of multiplication by $A$\nor by $A^{-1}$.  Roughly speaking, if $\\kappa = 10^k$ and the components\nof the vector ${\\bf b}$ in the problem, $A{\\bf x} = {\\bf b}$, are known to\n$d$ decimal digits, and the components of $A$ are known to more than $d$\ndecimal digits, and the problem is solved using precision greater than $d$\ndecimal digits, then the solution will be known to about $d-k$ decimal\ndigits.  In particular, if $k \\geq d$, the solution may have no reliable\ndigits at all.\n\nWe use the procedure described in \\cite[pp.\\ 128--130]{Golub:1996:GVL}\n(based on work in \\cite{Dongarra:1979:LUG}), which requires on the\norder of $3n^2$ additional operations.  In a test with 1250~cases\nreported in \\cite{Dongarra:1979:LUG}, the maximum value for the\ncondition number / estimated condition number was just slightly more\nthan 16.\n\n\\subsubsection{Computing the determinant of $A$}\n\nWe have $Det(A) = Det(U) Det(L^{-1})$.  Since all row and column\npermutations are done on both $L$ and $U$, and the diagonal of $L$\ncontains 1's, the determinant is simply the product of the diagonals\nof $U$ times $-1$ if the sum of the number of exchanges done on rows\nand columns of A is odd.\n\n\nFor a matrix of large order it is not uncommon for the determinant to\nbe of extremely large or small magnitude. Consequently we compute and\nstore the determinant as a pair of numbers, permitting a very large\nexponent range.  The second number of the pair is always an integer.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\n\\subsection{Error Procedures and Restrictions}\n\\label{sec:errors}\nError messages are printed using the facility described in Chapter\n19-03.  Options there give one some control over actions taken on\nerrors, and on where the debug print goes.  Error indexes $<-4$ stop\nby default, those $>-4$ only return an error flag, and the $-4$ case\nwill return only if there was enough space to get started on the\nfactorization.  The error flags returned are\n\n\\begin{description}\n\\setlength{\\parsep}{0pt} \\setlength{\\itemsep}{-3pt}\n\\item[$-1$] Matrix appears singular.  (Sets OPT(1) = 0.)\n\\item[$-2$] Matrix has an empty column.\n\\item[$-3$] Matrix has an empty row.\n\\item[$-4$] No more space.\n\\item[$-5$] N $\\leq$ 0.\n\\item[$-6$] An unknown option.\n\\item[$-7$] Problem with column size.\n\\item[$-8$] Problem with row indexes.\n\\end{description}\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\nDesign and programming by Fred T. Krogh, Math \\`a la Carte, Inc.,\nMarch 2006.  
We hope to revisit this in the future and improve\nperformance.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDSPGE & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nDMESS, DSPGE, MESS\\rule[-5pt]{0pt}{8pt}}\\\\\nSSPGE & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nMESS, SMESS, SSPGE\\rule[-5pt]{0pt}{8pt}}\\\\\n\\ & \\ \\\\\\end{tabular}\n\n\\begcodenp\n\\vspace {10pt}\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRDSPGE}\\vspace{0pt}\n\n\\vspace{10pt}\n\\lstinputlisting{\\codeloc{dspge}}\n\n\\vspace{10pt}\\centerline{\\bf \\large ODDSPGE}\n\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{dspge}}\n\n\\end{document}\n", "meta": {"hexsha": "1ce1f7599ba54465f016ed00600484e6f9a84727", "size": 10011, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch04-07.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch04-07.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch04-07.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 39.7261904762, "max_line_length": 84, "alphanum_fraction": 0.7325941464, "num_tokens": 2914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5684943170156292}}
{"text": "\r\n\r\n\\section{PDMPs and generators} \\label{pdmpgenerator}\r\nThe main class of stochastic processes utilized in this thesis is the piecewise deterministic Markov process, or PDMP, whose theory is presented in Davis \\cite{dav84}.  For a PDMP, a Markov process is allowed deterministic drift along flow lines of a vector field before randomly jumping to a new state.  In addition to covering several examples from queueing theory, including the M/G/1 and GI/G/1 queues, the model also encapsulates Markov chains.  Our chief objects of interest are the infinitesimal generator and the martingale from Dynkin's formula associated with the PDMP.  In Chapters 3 and 4 we plan to use these martingales to provide an approximation to limiting kinetic equations. \r\n\r\n\r\nOur goal is to describe a generalized jump process that takes place on a disjoint union of manifolds, and follows a deterministic drift between jumps. To characterize the state space, we let $\\mathcal S$ be a countable set and $d:\\mathcal S\\rightarrow \\mathbb{N}$ a function, where for each $\\mathbf s \\in \\mathcal S$, $M_\\mathbf s \\subset \\mathbb{R}^{d(\\mathbf s)}$ is an open set.  The state space is then the disjoint union\r\n\\begin{equation}\r\nE =\\coprod_{\\mathbf s \\in \\mathcal S} M_\\mathbf s = \\left\\{(\\mathbf s, \\textbf x):\\mathbf s \\in \\mathcal S,  \\textbf x \\in M_\\mathbf s \\subset \\mathbb R^{d(\\mathbf s)}\\right\\}. \\end{equation}\r\nTo define the topology of $E$, let $\\iota_\\mathbf s:M_\\mathbf s \\rightarrow E$ be the canonical injection defined by $\\iota_\\mathbf s(\\textbf{x}) = (\\mathbf s, \\textbf{x})$.  A set $A$ in $E$ is open if for every $\\mathbf s$, $\\iota_\\mathbf s^{-1}(A)$ is open in $M_\\mathbf s$. We then define $\\mathcal E$ as the Borel sets of $E$. This makes $(E,\\mathcal E)$ a Borel space.  \r\n\r\nWe now define a stochastic process $X(t) = (\\mathbf s(t),\\zeta(t))$, with a law $(X(t))_{t  \\ge 0}$ based on the following:\r\n\r\n\\begin{enumerate}\r\n\\item \r\nVector fields $\\mathcal X_\\mathbf s$, $\\mathbf s \\in \\mathcal S$, defined on $M_\\mathbf s$.\r\n\\item\r\nA measurable function $\\lambda:E \\rightarrow \\mathbb R^+$.\r\n\\item\r\nA transition measure $Q: \\mathcal E \\times (E \\cup \\Gamma^*) \\rightarrow [0,1]$.\r\n\r\n\\end{enumerate}  \r\n\r\nWe will formally define $\\Gamma^*$, the exit boundary of a PDMP, shortly. To see how this fits in with a generalized jump process, points in $M_{\\textbf s}$ will travel according to flows defined by $\\mathcal X_\\mathbf s$ until either a Poisson clock with intensity $\\lambda$ rings, or the point hits the boundary of $M_\\mathbf s$. When such an event occurs, the point jumps to a new position in $E$, determined by $Q$.   \r\n\r\nThe vector fields $\\mathcal X_\\mathbf s$ are chosen so that for every $z \\in M_\\mathbf s$, there is a unique integral curve $\\phi_\\mathbf s(t,z)$ satisfying \r\n\\begin{eqnarray}\r\n \\frac d{dt}f(\\phi_\\mathbf s(t,z)) = \\mathcal X _{\\mathbf s}f(\\phi_\\mathbf s(t,z))\\\\\r\n \\phi_\\mathbf s(0,z) =  z \\nonumber \r\n\\end{eqnarray}\r\nfor any smooth function $f: \\mathbb{R}^{d(\\mathbf s)} \\rightarrow \\mathbb{R}$.  Here we also require that the vector fields are conservative, meaning that there exists a $t>0$ where $\\phi_\\mathbf s(r,z)$ is defined for $r \\in [0,t]$.   
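Before formalizing the boundary behavior, a minimal concrete instance may help; this example is our own illustration, not one from \\cite{dav84}.  Take a single label $\\mathbf s$ with $M_\\mathbf s = (0,\\infty)$, flow $\\dot z = -z$, a constant rate $\\lambda$, and jumps that add 1 to the state.  With a constant rate the survivor function is $e^{-\\lambda t}$, inter-jump times are exponential, and the flow never reaches a boundary, so a trajectory can be simulated directly:\r\n\\begin{verbatim}\r\nimport math, random\r\n\r\n# Simulate a 1-D PDMP: decay z' = -z between jumps; jumps arrive at\r\n# constant rate lam and add 1 to the state (illustrative sketch only).\r\ndef simulate(z0=1.0, lam=2.0, t_end=5.0, seed=0):\r\n    rng = random.Random(seed)\r\n    t, z, path = 0.0, z0, [(0.0, z0)]\r\n    while True:\r\n        tau = rng.expovariate(lam)  # Exp(lam) inter-jump time\r\n        if t + tau > t_end:\r\n            path.append((t_end, z * math.exp(-(t_end - t))))\r\n            return path\r\n        t, z = t + tau, z * math.exp(-tau) + 1.0  # flow, then jump\r\n        path.append((t, z))\r\n\r\nprint(simulate()[:3])\r\n\\end{verbatim}\r\n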
\r\n\r\n Now let $\\partial^*M_\\mathbf s$ be the exit boundary of $M_\\mathbf s$, defined as\r\n  \\begin{equation}\r\n \\partial^*M_\\mathbf s =\\left \\{y \\in \\partial M_\\mathbf s: \\phi_\\mathbf s(t^-, \\textbf{x}) = y \\quad \\hbox{ for some } (t, \\textbf{x}) \\in \\mathbb{R}_+ \\times M_\\mathbf s\\right \\}.\r\n\\end{equation}\r\n $\\Gamma^*$ is then defined as the exit boundary of our state space:\r\n\\begin{equation}\r\n\\Gamma^* = \\coprod_{\\mathbf s \\in \\mathcal S} \\partial ^{*}M_\\mathbf s.\r\n\\end{equation}\r\nAt a given state $\\mathbf x \\in E$, we define the exit time as \r\n\\begin{equation}\r\nt^*(\\mathbf x) = \\inf \\{t >0: \\phi_\\mathbf s(t, \\textbf{x}) \\in \\partial ^{*}M_\\mathbf s\\}.  \r\n\\end{equation}\r\nThe stochastic process $(X(t))_{t \\ge 0}$ with initial condition $X(0) =  (\\mathbf s,z)$ is then defined as follows.  For $\\mathbf x = (\\mathbf s, z)$, define a survivor function $\\mathcal F$ as \r\n\\begin{equation}\r\n\\mathcal F_{\\mathbf x}(t) = \\begin{cases} \\exp\\left(-\\int_0^t \\lambda(\\mathbf s,\\phi_\\mathbf s(r,z))dr\\right), & t<t^*(\\textbf{x}), \\\\ 0, & t \\ge t^*(\\textbf{x}).\r\n\\end{cases} \r\n\\end{equation}\r\nThe rate $\\lambda:E \\rightarrow \\mathbb R^+$ is a measurable function, where for every state $\\mathbf x = (\\mathbf s, z) \\in E$ there exists $\\varepsilon > 0$ such that the function $s \\rightarrow \\lambda(\\mathbf s, \\phi_\\mathbf s(s, z))$ is integrable for $s \\in [0,\\varepsilon)$.  We also define $Q(A;\\textbf{x})$ to be a measurable function of $\\textbf{x}$ for each fixed $A \\in \\mathcal E$ on $\\textbf{x} \\in E \\cup \\Gamma^*$, and a probability measure on $(E, \\mathcal E)$ for each $\\textbf{x} \\in E \\cup \\Gamma^*$.\r\n\r\n\r\nNow choose a random variable $T_1$ such that $\\mathbb{P}[T_1>t] = \\mathcal F_\\mathbf x(t)$.  Then independently choose an $E$-valued random variable $(L,Z)$ with distribution $Q(\\cdot ; \\phi_\\mathbf s(T_1,z))$.  The trajectory of $X(t)$ for $t \\le T_1$ is then\r\n\\begin{equation}\r\nX(t) = \\begin{cases} (\\mathbf s,\\phi_\\mathbf s(t,z)), & t<T_1 \\\\\r\n(L,Z), & t = T_1.\r\n\\end{cases}\r\n\\end{equation}\r\nFrom $X(T_1)$, we choose the next inter-jump time $T_2-T_1$ and $X(T_2)$ in a similar fashion. It can be shown that the process $X(t)$ is Markov, and in fact, strong Markov (Section 3 of \\cite{dav84}).  \r\n\r\nAs a Markov process, the PDMP has an associated infinitesimal generator $\\mathcal A$, acting on a domain $\\mathcal D(\\mathcal A)$, defined as the set of functions $f:E\\rightarrow \\mathbb R$ where the limit \r\n\\begin{equation}\r\n (\\mathcal Af)(\\mathbf x)=  \\lim_{t\\rightarrow 0^+} \\frac{\\mathbb E^\\mathbf x (f(X(t)))-f(\\mathbf x)}{t}\r\n\\end{equation}\r\nexists for all $\\mathbf x \\in E$. The infinitesimal generator then takes the form\r\n\r\n\\begin{equation}\\label{generator}\r\n\\mathcal Af(\\mathbf x) = \\mathcal X(f(\\mathbf x))   +\\lambda(\\mathbf x)\\int_E (f(y)-f(\\mathbf x))Q(dy;\\mathbf x), \\quad f \\in \\mathcal D(\\mathcal A).\r\n\\end{equation}\r\nFrom here, we can use Dynkin's formula to derive the  \r\nmartingale\r\n\\begin{equation}\\label{martingale}\r\nM_t^f := f(X(t)) - f(X(0))-\\int_0^t \\mathcal A f(X(s))ds,\\quad f \\in \\mathcal D(\\mathcal A).\r\n\\end{equation}\r\n\r\nThe following set of sufficient conditions for membership in $\\mathcal D(\\mathcal A)$ is given in Rolski et al. 
\\cite{rolski2009stochastic}.\r\n\\begin{theorem}\\label{fourcond} A function $f: E \\rightarrow \\mathbb{R}$ satisfies $f \\in \\mathcal D(\\mathcal A)$ if the following hold:\r\n  \r\n\\begin{enumerate}\r\n\\item \r\nThe function $t \\rightarrow f(\\phi(x,t))$ is absolutely continuous on $[0,s^*(x))$ for every $x \\in E$, where $s^*(x) = \\inf_t\\{t|\\mathcal F(t) =0\\}$. \\item\r\n$f(x)  = \\lim_{t \\rightarrow 0} f(\\phi(x,t))$ exists for all $x \\in  \\Gamma^*$. \\item\r\n(Boundary condition): $f(x)= \\int_E f(y) Q(dy;x)$ for $x \\in \\Gamma^*$. \\item\r\n(Finite expectation of number of jumps): for all $t \\in \\mathbb{R}^+, x\\in E$,\r\n\\begin{equation} \r\n\\mathbb{E}\\left[\\sum_{i = 1}^{m(t)} |f(X(T_i))-f(X(T_{i-1}))|\\right]<\\infty\r\n\\end{equation}\r\n\\end{enumerate}\r\n\\end{theorem}\r\n\r\n\\subsection{A note on Skorokhod topologies} \\label{j1m1}\r\n\r\n\\begin{figure}\r\n\\begin{centering}\r\n\\includegraphics[width=.5\\textwidth]{j1m1tops.png}\r\n\\caption{\\textbf{Topologies for cadlag functions.} \\textbf{Left:} Two functions that are ``close\" in the J1 topology.  Note that the magnitudes of their jumps are similar. \\textbf{Right:} Two functions close in the M1 topology.  Here the jumps are not required to be close; rather, the completed graphs of the two functions must be close in Hausdorff distance. }\\label{fig:j1m1}\r\n\\end{centering}\r\n\\end{figure}\r\n\r\nThroughout this thesis, our focus will be on stochastic processes which are cadlag, i.e. right continuous with left-hand limits. Let $(\\mathfrak M,d)$ denote a metric space with metric $d$.  In the following, we use the $J1$ Skorokhod topology, denoted $\\mathbb D([0,t],\\mathfrak M)$. For our purposes, $\\mathfrak M$ will either be $\\mathbb{R}^+$ or $\\mathcal M(\\mathbb{R}^+)$, the space of finite measures on $\\mathbb{R}^+$ with the Prohorov metric \\cite{billingsley2009convergence}.\r\nThe $J1$ topology allows for convergence of functions that ``wiggle\" in time as well as space, and has the following characterization (see \\cite{jac87}): \r\n\r\nLet $\\mathscr R$ be the set of continuous reparameterizations $r: [0,t]\\rightarrow [0,t]$: functions that are strictly increasing, where $r(0) = 0$ and $r(t)=t$.  A sequence of functions $\\alpha_n$ converges to $\\alpha$ in $\\mathbb{D}([0,t],\\mathfrak M)$ if and only if there is a sequence $r_n \\in \\mathscr R$ with \r\n\r\n\\begin{enumerate}\r\n\\item $\\sup_s |r_n(s)-s| \\rightarrow 0,$\r\n\\item $\\sup_{s \\le t} d(\\alpha_n(r_n(s)), \\alpha(s)) \\rightarrow 0,$ \r\n\\end{enumerate}\r\nas $n \\rightarrow \\infty$. If $\\alpha(t)$ is continuous, the functions actually converge in the local uniform topology. In general the local uniform topology is strictly stronger than the Skorokhod J1 topology, but this fact won't be needed, since the limiting functions of interest in this thesis will have no jumps (see Remark \\ref{unifremark}).  
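For instance (a standard example we add for illustration), if $t_n \\rightarrow t_0$ with $0 < t_n, t_0 < t$, the indicators $\\alpha_n = \\mathbf{1}_{[t_n,t]}$ converge to $\\alpha = \\mathbf{1}_{[t_0,t]}$ in the $J1$ topology: taking $r_n$ piecewise linear with $r_n(t_0) = t_n$ lines the jumps up exactly, and $\\sup_s|r_n(s) - s| \\rightarrow 0$.  They do not converge in the local uniform topology, since $\\sup_s |\\alpha_n(s) - \\alpha(s)| = 1$ whenever $t_n \\neq t_0$.\r\n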
\r\n\r\n\r\n", "meta": {"hexsha": "02f708d8d4c2a1e1e1d0a7724d11bc64b6e2bc11", "size": 9143, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Joe Thesis/Thesisfiles/pdmpreview.tex", "max_stars_repo_name": "Danie1Johnson/thesis", "max_stars_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Joe Thesis/Thesisfiles/pdmpreview.tex", "max_issues_repo_name": "Danie1Johnson/thesis", "max_issues_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Joe Thesis/Thesisfiles/pdmpreview.tex", "max_forks_repo_name": "Danie1Johnson/thesis", "max_forks_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.6339285714, "max_line_length": 696, "alphanum_fraction": 0.6921141857, "num_tokens": 2943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5684943170156292}}
{"text": "\\section{Extensions of the Base Logic}\n\nIn this section we discuss some additional constructions that we define within and on top of the base logic.\nThese are not ``extensions'' in the sense that they change the proof power of the logic, they just form useful derived principles.\n\n\\subsection{Derived Rules about Base Connectives}\nWe collect here some important and frequently used derived proof rules.\n\n\\begin{mathparpagebreakable}\n  \\infer{}\n  {\\prop \\Ra \\propB \\proves \\prop \\wand \\propB}\n\n  \\infer{}\n  {\\prop * \\Exists\\var.\\propB \\provesIff \\Exists\\var. \\prop * \\propB}\n\n  \\infer{}\n  {\\prop * \\All\\var.\\propB \\proves \\All\\var. \\prop * \\propB}\n\\end{mathparpagebreakable}\nVerifying that existential quantifiers commute with separating conjunction requires an intermediate step using a magic wand: From $P * \\exists x, Q \\vdash \\Exists x. P * Q$ we can deduce $\\Exists x. Q \\vdash P \\wand \\Exists x. P * Q$ and then proceed via $\\exists$-elimination.\n\n\\subsection{Derived Rules about Modalities}\n\nIris comes with 4 built-in modalities ($\\always$, $\\plainly$, $\\upd$ and $\\later$) and, as we will see, plenty of derived modalities.\nHowever, almost all of them fall into one of two categories (except for $\\later$, as we will see): they are either \\emph{always-style} modalities (``something holds in all/many (future) worlds'') or \\emph{eventually-style} modalities (``something holds in a possible (future) world'').\n\n\\emph{Eventually-style modalities} are characterized by being easy to ``add''/introduce, but hard to ``remove''/eliminate.\nConsider, for example, the basic update modality $\\upd$:\nwe have $\\prop \\proves \\upd\\prop$ (\\ruleref{upd-intro}), but the inverse direction does not hold.\nInstead, from \\ruleref{upd-mono} and \\ruleref{upd-trans}, we can derive the following elimination principle:\n\\begin{mathpar}\n  \\infer[upd-E]\n  {\\prop \\proves \\upd\\propB}\n  {\\upd\\prop \\proves \\upd\\propB}\n\\end{mathpar}\nIn other words, we can remove an $\\upd$ in front of an assumption \\emph{if} the goal is itself wrapped in $\\upd$.\nAnother way to view this rule is to think of it as a \\emph{bind rule}.\nIndeed, together with \\ruleref{upd-intro}, this rule shows that $\\upd$ forms a monad.\n\n\\emph{Always-style modalities}, on the other hand, are easy to ``remove''/eliminate, but hard to ``add''/introduce.\nThe most widely used example of that in Iris is the persistence modality $\\always$:\nwe have $\\always\\prop \\proves \\prop$ (\\ruleref{pers-elim}), but the inverse direction does not hold.\nInstead, from \\ruleref{pers-mono} and $\\always{\\prop} \\proves \\always\\always\\prop$, we can derive the following introduction principle:\n\\begin{mathpar}\n  \\infer[$\\always$-I]\n  {\\always\\prop \\proves \\propB}\n  {\\always\\prop \\proves \\always\\propB}\n\\end{mathpar}\nIn other words, we can remove an $\\always$ from the goal \\emph{if} all our assumptions are wrapped in $\\always$.\nThis matches the algebraic structure of a comonad.\n\nIn particular, both eventually-style and always-style modalities are \\emph{idempotent}: we have $\\upd\\upd\\prop \\provesIff \\upd\\prop$ and $\\always\\always\\prop \\provesIff \\always\\prop$.\n\nBeyond this, all modalities come with plenty of rules that show how they commute around other connectives and modalities.\nAnd, of course, they come with a few ``defining rules'' that give the modalities their individual meaning, \\ie for the update modality, that would be 
\\ruleref{upd-update}.\n\nIn the following, we briefly discuss each of the modalities.\n\n\\paragraph{Update modality.}\nAs already mentioned, the update modality is an eventually-style modality:\n\\begin{mathpar}\n  \\inferhref{upd-E}{upd-elim}\n  {\\prop \\proves \\upd\\propB}\n  {\\upd\\prop \\proves \\upd\\propB}\n\n  \\inferH{upd-idemp}\n  {}{\\upd\\upd\\prop \\provesIff \\upd\\prop}\n\\end{mathpar}\nBeyond this (and the obvious variant of \\ruleref{upd-frame} that exploits commutativity of separating conjunction), there are no outstandingly interesting derived rules.\n\n\\paragraph{Persistence modality.}\nAs already mentioned, the persistence modality is an always-style modality:\n\\begin{mathpar}\n  \\inferhref{$\\always$-I}{pers-intro}\n  {\\always\\prop \\proves \\propB}\n  {\\always\\prop \\proves \\always\\propB}\n\n  \\inferhref{$\\always$-idemp}{pers-idemp}\n  {}{\\always\\always\\prop \\provesIff \\always\\prop}\n\\end{mathpar}\nSome further interesting derived rules include:\n\\begin{mathparpagebreakable}  \n  \\infer{}\n  {\\always(\\prop\\land\\propB) \\provesIff \\always\\prop \\land \\always\\propB}\n\n  \\infer{}\n  {\\always(\\prop\\lor\\propB) \\provesIff \\always\\prop \\lor \\always\\propB}\n\n  \\infer{}\n  {\\always\\TRUE \\provesIff \\TRUE}\n\n  \\infer{}\n  {\\always\\FALSE \\provesIff \\FALSE}\n\\\\\n  \\infer{}\n  {\\always(\\prop*\\propB) \\provesIff \\always\\prop * \\always\\propB}\n\n  \\infer{}\n  {\\always\\prop*\\propB \\provesIff \\always\\prop \\land \\propB}\n\n  \\infer{}\n  {\\always(\\prop \\wand \\propB) \\provesIff \\always(\\prop \\Ra \\propB)}\n\\\\\n  \\infer{}\n  {\\always(\\prop \\Ra \\propB) \\proves \\always\\prop \\Ra \\always\\propB}\n\n  \\infer{}\n  {\\always(\\prop \\wand \\propB) \\proves \\always\\prop \\wand \\always\\propB}\n\n\\end{mathparpagebreakable}\nIn particular, the persistence modality commutes around conjunction, disjunction, separating conjunction as well as universal and existential quantification.\nCommuting around conjunction can be derived from the primitive rule that says it commutes around universal quantification (as conjunction is equivalent to a universal quantification of a Boolean), and similar for disjunction.\n$\\TRUE \\provesIff \\always\\TRUE$ (which is basically persistence ``commuting around'' the nullary operator $\\TRUE$) can be derived via $\\always$ commuting with universal quantification ranging over the empty type.\nA similar rule holds for $\\FALSE$.\n\nMoreover, if (at least) one conjunct is below the persistence modality, then conjunction and separating conjunction coincide.\n\n\\paragraph{Plainness modality.}\nThe plainness modality is very similar to the persistence modality (in fact, we have $\\plainly\\prop \\proves \\always\\prop$, but the inverse does not hold).\nIt is always-style:\n\\begin{mathpar}\n  \\infer[$\\plainly$-I]\n  {\\plainly\\prop \\proves \\propB}\n  {\\plainly\\prop \\proves \\plainly\\propB}\n\n  \\infer{}{\\plainly\\plainly\\prop \\provesIff \\plainly\\prop}\n\\end{mathpar}\nIt also commutes around separating conjunction, conjunction, disjunction, universal and existential quantification (and $\\TRUE$ and $\\FALSE$).\n\nThe key difference to the persistence modality $\\always$ is that $\\plainly$ provides a \\emph{propositional extensionality} principle:\n\\[ \\plainly ( ( P \\Ra Q) \\land (Q \\Ra P ) ) \\proves P =_{\\Prop} Q \\]\nIn contrast, $\\always$ permits using some forms of ghost state ($\\ownM\\melt \\proves \\always{\\ownM{\\mcore\\melt}}$).\n\nHaving both 
of these principles for the same modality would lead to a contradiction:\nimagine we have an RA with elements $\\melt$, $\\meltB$ such that $\\mcore\\melt$ is incompatible with $\\meltB$ (\\ie $\\neg\\mvalFull(\\mcore\\melt \\mtimes \\meltB)$).\nThen we can prove:\n\\[\n\\ownM{\\mcore\\melt} \\proves\n\\always\\ownM{\\mcore\\melt} \\proves\n\\always ( ( \\FALSE \\Ra \\ownM\\meltB ) \\land ( \\ownM\\meltB \\Ra \\FALSE ) )\n\\]\nThe first implication is trivial, the second implication follows because $\\always\\ownM{\\mcore\\melt} \\land \\ownM\\meltB \\proves \\ownM{\\mcore\\melt} * \\ownM\\meltB \\proves \\mval(\\mcore\\melt \\mtimes \\meltB)$.\n\nBut now, if we had propositional extensionality for $\\always$ the way we do for $\\plainly$, we could deduce $\\FALSE =_{\\Prop} \\ownM\\meltB$, and that is clearly wrong.\nThis issue arises because $\\always$, as we have seen, still lets us use some resources from the context, while propositional equality has to hold completely disregarding current resources.\n\n\\paragraph{Later modality.}\nThe later modality is the ``odd one out'' in the sense that it is neither eventually-style nor always-style, because it is not idempotent:%\n\\footnote{This means $\\later$ is neither a monad nor a comonad---it does form an applicative functor, though.}\nwith $\\later$, the number of times the modality is applied matters, and we can get rid of \\emph{exactly one} layer of $\\later$ in the assumptions only by doing the same in the conclusion (\\ruleref{later-mono}).\n\nSome derived rules:\n\\begin{mathparpagebreakable}\n  \\inferhref{L{\\\"o}b}{Loeb}\n  {}\n  {(\\later\\prop\\Ra\\prop) \\proves \\prop}\n\n  \\infer{}\n  {\\later(\\prop \\Ra \\propB) \\proves \\later\\prop \\Ra \\later\\propB}\n\n  \\infer{}\n  {\\later(\\prop \\wand \\propB) \\proves \\later\\prop \\wand \\later\\propB}\n\\\\\n  \\infer{}\n  {\\later(\\prop\\land\\propB) \\provesIff \\later\\prop \\land \\later\\propB}\n\n  \\infer{}\n  {\\later(\\prop\\lor\\propB) \\provesIff \\later\\prop \\lor \\later\\propB}\n\n  \\infer{\\text{$\\type$ is inhabited}}\n  {\\later(\\Exists x:\\type. \\prop) \\provesIff \\Exists x:\\type. 
\\later\\prop}\n\n  \\infer{}\n  {\\later\\TRUE \\provesIff \\TRUE}\n\n  \\infer{}\n  {\\later(\\prop*\\propB) \\provesIff \\later\\prop * \\later\\propB}\n\n  \\infer{}\n  {\\later\\always\\prop \\provesIff \\always\\later\\prop}\n\n  \\infer{}\n  {\\later\\plainly\\prop \\provesIff \\plainly\\later\\prop}\n\\end{mathparpagebreakable}\nNoteworthy here is the fact that L\u00f6b induction (\\ruleref{Loeb}) can be derived from $\\later$-introduction and the fact that we can take fixed-points of functions where the recursive occurrences are below $\\later$~\\cite{Loeb}.%\n\\footnote{Also see \\url{https://en.wikipedia.org/wiki/L\\%C3\\%B6b\\%27s_theorem}.}\nAlso, $\\later$ commutes over separating conjunction, conjunction, disjunction, universal quantification and \\emph{non-empty} existential quantification, as well as both the persistence and the plainness modality.\n\n\\subsection{Persistent Propositions}\nWe call a proposition $\\prop$ \\emph{persistent} if $\\prop \\proves \\always\\prop$.\nThese are propositions that ``do not own anything'', so we can (and will) treat them like ``normal'' intuitionistic propositions.\n\nOf course, $\\always\\prop$ is persistent for any $\\prop$.\nFurthermore, by the proof rules given in \\Sref{sec:proof-rules}, $\\TRUE$, $\\FALSE$, $t = t'$ as well as $\\ownGhost\\gname{\\mcore\\melt}$ and $\\mval(\\melt)$ are persistent.\nPersistence is preserved by conjunction, disjunction, separating conjunction as well as universal and existential quantification and $\\later$.\n\n\n\n\\subsection{Timeless Propositions and Except-0}\n\nOne of the troubles of working in a step-indexed logic is the ``later'' modality $\\later$.\nIt turns out that we can somewhat mitigate this trouble by working below the following \\emph{except-0} modality:\n\\[ \\diamond \\prop \\eqdef \\later\\FALSE \\lor \\prop \\]\nExcept-0 satisfies the usual laws of a ``monadic'' modality (similar to, \\eg the update modalities):\n\\begin{mathpar}\n  \\inferH{ex0-mono}\n  {\\prop \\proves \\propB}\n  {\\diamond\\prop \\proves \\diamond\\propB}\n\n  \\axiomH{ex0-intro}\n  {\\prop \\proves \\diamond\\prop}\n\n  \\axiomH{ex0-idem}\n  {\\diamond\\diamond\\prop \\proves \\diamond\\prop}\n\n\\begin{array}[c]{rMcMl}\n  \\diamond{(\\prop * \\propB)} &\\provesIff& \\diamond\\prop * \\diamond\\propB \\\\\n  \\diamond{(\\prop \\land \\propB)} &\\provesIff& \\diamond\\prop \\land \\diamond\\propB \\\\\n  \\diamond{(\\prop \\lor \\propB)} &\\provesIff& \\diamond\\prop \\lor \\diamond\\propB\n\\end{array}\n\n\\begin{array}[c]{rMcMl}\n  \\diamond{\\All x. \\prop} &\\provesIff& \\All x. \\diamond{\\prop}   \\\\\n  \\diamond{\\Exists x. \\prop} &\\provesIff& \\Exists x. 
\\diamond{\\prop} \\\\\n  \\diamond\\always{\\prop} &\\provesIff& \\always\\diamond{\\prop} \\\\\n  \\diamond\\later\\prop &\\proves& \\later{\\prop}\n\\end{array}\n\\end{mathpar}\nIn particular, from \\ruleref{ex0-mono} and \\ruleref{ex0-idem} we can derive a ``bind''-like elimination rule:\n\\begin{mathpar}\n  \\inferH{ex0-elim}\n  {\\prop \\proves \\diamond\\propB}\n  {\\diamond\\prop \\proves \\diamond\\propB}\n\\end{mathpar}\n\nThis modality is useful because there is a class of propositions which we call \\emph{timeless} propositions, for which we have\n\\[ \\timeless{\\prop} \\eqdef \\later\\prop \\proves \\diamond\\prop  \\]\nIn other words, when working below the except-0 modality, we can \\emph{strip\n  away} the later from timeless propositions (using \\ruleref{ex0-elim}):\n\\begin{mathpar}\n  \\inferH{ex0-timeless-strip}{\\timeless{\\prop} \\and \\prop \\proves \\diamond\\propB}\n  {\\later\\prop \\proves \\diamond\\propB}\n\\end{mathpar}\n\n In fact, it turns out that we can strip away later from timeless propositions even when working under the later modality:\n\\begin{mathpar}\n  \\inferH{later-timeless-strip}{\\timeless{\\prop} \\and \\prop \\proves \\later \\propB}\n  {\\later\\prop \\proves \\later\\propB}\n\\end{mathpar}\nThis follows from $\\later \\prop \\proves \\later\\FALSE \\lor \\prop$, and then by straightforward disjunction elimination.\n\nThe following rules identify the class of timeless propositions:\n\\begin{mathparpagebreakable}\n\\infer\n{\\vctx \\proves \\timeless{\\prop} \\and \\vctx \\proves \\timeless{\\propB}}\n{\\vctx \\proves \\timeless{\\prop \\land \\propB}}\n\n\\infer\n{\\vctx \\proves \\timeless{\\prop} \\and \\vctx \\proves \\timeless{\\propB}}\n{\\vctx \\proves \\timeless{\\prop \\lor \\propB}}\n\n\\infer\n{\\vctx \\proves \\timeless{\\prop} \\and \\vctx \\proves \\timeless{\\propB}}\n{\\vctx \\proves \\timeless{\\prop * \\propB}}\n\n\\infer\n{\\vctx \\proves \\timeless{\\prop}}\n{\\vctx \\proves \\timeless{\\always\\prop}}\n\n\\infer\n{\\vctx \\proves \\timeless{\\propB}}\n{\\vctx \\proves \\timeless{\\prop \\Ra \\propB}}\n\n\\infer\n{\\vctx \\proves \\timeless{\\propB}}\n{\\vctx \\proves \\timeless{\\prop \\wand \\propB}}\n\n\\infer\n{\\vctx,\\var:\\type \\proves \\timeless{\\prop}}\n{\\vctx \\proves \\timeless{\\All\\var:\\type.\\prop}}\n\n\\infer\n{\\vctx,\\var:\\type \\proves \\timeless{\\prop}}\n{\\vctx \\proves \\timeless{\\Exists\\var:\\type.\\prop}}\n\n\\axiom{\\timeless{\\TRUE}}\n\n\\axiom{\\timeless{\\FALSE}}\n\n\\infer\n{\\text{$\\term$ or $\\term'$ is a discrete OFE element}}\n{\\timeless{\\term =_\\type \\term'}}\n\n\\infer\n{\\text{$\\melt$ is a discrete OFE element}}\n{\\timeless{\\ownM\\melt}}\n\n\\infer\n{\\text{$\\melt$ is an element of a discrete camera}}\n{\\timeless{\\mval(\\melt)}}\n\\end{mathparpagebreakable}\n\n\n\\subsection{Dynamic Composeable Higher-Order Resources}\n\\label{sec:composeable-resources}\n\nThe base logic described in \\Sref{sec:base-logic} works over an arbitrary camera $\\monoid$ defining the structure of the resources.\nIt turns out that we can generalize this further and permit picking cameras ``$\\iFunc(\\Prop)$'' that depend on the structure of propositions themselves.\nOf course, $\\Prop$ is just the syntactic type of propositions; for this to make sense we have to look at the semantics.\n\nFurthermore, there is a composability problem with the given logic: if we have one proof performed with camera $\\monoid_1$, and another proof carried out with a \\emph{different} camera $\\monoid_2$, then 
the two proofs are actually carried out in two \\emph{entirely separate logics} and hence cannot be combined.\n\nFinally, in many cases just having a single ``instance'' of a camera available for reasoning is not enough.\nFor example, when reasoning about a dynamically allocated data structure, every time a new instance of that data structure is created, we will want a fresh resource governing the state of this particular instance.\nWhile it would be possible to handle this problem whenever it comes up, it turns out to be useful to provide a general solution.\n\nThe purpose of this section is to describe how we solve these issues.\n\n\\paragraph{Picking the resources.}\nThe key ingredient that we will employ on top of the base logic is to give some more fixed structure to the resources.\nTo instantiate the logic with dynamic higher-order ghost state, the user picks a family of locally contractive bifunctors $(\\iFunc_i : \\COFEs^\\op \\times \\COFEs \\to \\CMRAs)_{i \\in \\mathcal{I}}$.\n(This is in contrast to the base logic, where the user picks a single, fixed camera that has a unit.)\n\nFrom this, we construct the bifunctor defining the overall resources as follows:\n\\begin{align*}\n  \\GName \\eqdef{}& \\nat \\\\\n  \\textdom{ResF}(\\ofe^\\op, \\ofe) \\eqdef{}& \\prod_{i \\in \\mathcal I} \\GName \\fpfn \\iFunc_i(\\ofe^\\op, \\ofe)\n\\end{align*}\nWe will motivate both the use of a product and the finite partial function below.\n$\\textdom{ResF}(\\ofe^\\op, \\ofe)$ is a camera by lifting the individual cameras pointwise, and it has a unit (using the empty finite partial function).\nFurthermore, since the $\\iFunc_i$ are locally contractive, so is $\\textdom{ResF}$.\n\nNow we can write down the recursive domain equation:\n\\[ \\iPreProp \\cong \\UPred(\\textdom{ResF}(\\iPreProp, \\iPreProp)) \\]\nHere, $\\iPreProp$ is a COFE defined as the fixed-point of a locally contractive bifunctor, which exists and is unique up to isomorphism by \\thmref{thm:america_rutten}, so we obtain some object $\\iPreProp$ such that:\n\\begin{align*}\n  \\Res &\\eqdef \\textdom{ResF}(\\iPreProp, \\iPreProp) \\\\\n  \\iProp &\\eqdef \\UPred(\\Res) \\\\\n\t\\wIso &: \\iProp \\nfn \\iPreProp \\\\\n\t\\wIso^{-1} &: \\iPreProp \\nfn \\iProp \\\\\n  \\wIso(\\wIso^{-1}(x)) &\\eqdef x \\\\\n  \\wIso^{-1}(\\wIso(x)) &\\eqdef x\n\\end{align*}\nNow we can instantiate the base logic described in \\Sref{sec:base-logic} with $\\Res$ as the chosen camera:\n\\[ \\Sem{\\Prop} \\eqdef \\UPred(\\Res) \\]\nWe obtain that $\\Sem{\\Prop} = \\iProp$.\nEffectively, we just defined a way to instantiate the base logic with $\\Res$ as the camera of resources, while providing a way for $\\Res$ to depend on $\\iPreProp$, which is isomorphic to $\\Sem\\Prop$.\n\nWe thus obtain all the rules of \\Sref{sec:base-logic}, and furthermore, we can use the maps $\\wIso$ and $\\wIso^{-1}$ \\emph{in the logic} to convert between logical propositions $\\Sem\\Prop$ and the domain $\\iPreProp$ which is used in the construction of $\\Res$ -- so from elements of $\\iPreProp$, we can construct elements of $\\Sem{\\textlog M}$, which are the elements that can be owned in our logic.\n\n\\paragraph{Proof composability.}\nTo make our proofs composeable, we \\emph{generalize} our proofs over the family of functors.\nThis is possible because we made $\\Res$ a \\emph{product} of all the cameras picked by the user, and because we can actually work with that product ``pointwise''.\nSo instead of picking a \\emph{concrete} family, proofs will assume to be given 
an \\emph{arbitrary} family of functors, plus a proof that this family \\emph{contains the functors they need}.\nComposing two proofs is then merely a matter of conjoining the assumptions they make about the functors.\nSince the logic is entirely parametric in the choice of functors, there is no trouble reasoning without full knowledge of the family of functors.\n\nOnly when the top-level proof is completed we will ``close'' the proof by picking a concrete family that contains exactly those functors the proof needs.\n\n\\paragraph{Dynamic resources.}\nFinally, the use of finite partial functions lets us have as many instances of any camera as we could wish for:\nBecause there can only ever be finitely many instances already allocated, it is always possible to create a fresh instance with any desired (valid) starting state.\nThis is best demonstrated by giving some proof rules.\n\nSo let us first define the notion of ghost ownership that we use in this logic.\nAssuming that the family of functors contains the functor $\\Sigma_i$ at index $i$, and furthermore assuming that $\\monoid_i = \\Sigma_i(\\iPreProp, \\iPreProp)$, given some $\\melt \\in \\monoid_i$ we define:\n\\[ \\ownGhost\\gname{\\melt:\\monoid_i} \\eqdef \\ownM{(\\ldots, \\emptyset, i:\\mapsingleton \\gname \\melt, \\emptyset, \\ldots)} \\]\nThis is ownership of the pair (element of the product over all the functors) that has the empty finite partial function in all components \\emph{except for} the component corresponding to index $i$, where we own the element $\\melt$ at index $\\gname$ in the finite partial function.\n\nWe can show the following properties for this form of ownership:\n\\begin{mathparpagebreakable}\n  \\inferH{res-alloc}{\\text{$G$ infinite} \\and \\melt \\in \\mval_{M_i}}\n  {  \\TRUE \\proves \\upd \\Exists\\gname\\in G. \\ownGhost\\gname{\\melt : M_i}\n  }\n  \\and\n  \\inferH{res-update}\n    {\\melt \\mupd_{M_i} B}\n    {\\ownGhost\\gname{\\melt : M_i} \\proves \\upd \\Exists \\meltB\\in B. 
\\ownGhost\\gname{\\meltB : M_i}}\n\n  \\inferH{res-empty}\n  {\\text{$\\munit$ is a unit of $M_i$}}\n  {\\TRUE \\proves \\upd \\ownGhost\\gname\\munit}\n\n  \\axiomH{res-op}\n    {\\ownGhost\\gname{\\melt : M_i} * \\ownGhost\\gname{\\meltB : M_i} \\provesIff \\ownGhost\\gname{\\melt\\mtimes\\meltB : M_i}}\n\n  \\axiomH{res-valid}\n    {\\ownGhost\\gname{\\melt : M_i} \\Ra \\mval_{M_i}(\\melt)}\n\n  \\inferH{res-timeless}\n    {\\text{$\\melt$ is a discrete OFE element}}\n    {\\timeless{\\ownGhost\\gname{\\melt : M_i}}}\n\\end{mathparpagebreakable}\n\nBelow, we will always work within (an instance of) the logic as described here.\nWhenever a camera is used in a proof, we implicitly assume it to be available in the global family of functors.\nWe will typically leave the $M_i$ implicit when asserting ghost ownership, as the type of $\\melt$ will be clear from the context.\n\n\n\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"iris\"\n%%% End:\n", "meta": {"hexsha": "3d1ae27b4ead54cc55bf9239b6c34b392ca5f20f", "size": 20590, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/extended-logic.tex", "max_stars_repo_name": "SkySkimmer/iris", "max_stars_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/extended-logic.tex", "max_issues_repo_name": "SkySkimmer/iris", "max_issues_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/extended-logic.tex", "max_forks_repo_name": "SkySkimmer/iris", "max_forks_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.7142857143, "max_line_length": 399, "alphanum_fraction": 0.733997086, "num_tokens": 6020, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5684943141030662}}
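As a small worked illustration (a sketch derived from the rules above, not part of the original text): assuming elements $\\melt_1, \\melt_2 \\in M_i$ with $\\melt_1 \\mtimes \\melt_2 \\in \\mval_{M_i}$, the rule \\textsc{res-alloc} (instantiated with $G \\eqdef \\GName$, which is infinite) allocates a fresh name for the composite element, and \\textsc{res-op} then splits the ownership into two separately usable pieces:
\\[ \\TRUE \\proves \\upd \\Exists\\gname. \\ownGhost\\gname{\\melt_1 : M_i} * \\ownGhost\\gname{\\melt_2 : M_i} \\]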
{"text": "% !TEX root = cs1textbook.tex\n\n\\chapter{Introduction to Creating Classes}\n\\label{chapter:classes}\n\n\\minitoc\n\n%\\section{The \\texttt{Point} Class}\n\nWe will begin our geometry case study by creating a data type to represent the smallest unit of geometry -- the point -- and build up in complexity from there.  \\href{https://en.wiktionary.org/wiki/point}{Wiktionary defines a geometrical point} as ``A zero-dimensional mathematical object representing a location in one or more dimensions; something considered to have position but no magnitude or direction''.  We will define our \\mintinline{java}{Point} class to represent an object on a two-dimensional Cartesian plane, since this is a good analogy for a computer screen.  A detail that will become important later is that several computer graphics paradigms, including the one we'll be working under, fix the origin of the screen's Cartesian plane at the top left of the screen, so that $x$ values increase from right to left and $y$ values increase from the top of the screen to the bottom; this results in all $x$ and $y$ values being non-negative.  A monitor with 1024$\\times$768 resolution, therefore, would find the point $(1023,767)$ at the bottom right of the screen, as in Figure \\ref{fig:monitor}.\n\n\\begin{figure}[ht]\n    \\color{nccblue}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\draw [thick,nccblue,fill=nccorange] (0,0) -- (5,0) -- (5,4) -- (0,4) -- (0,0) {};\n            \\draw [thin,nccblue] (1,0) -- (0.5,-0.5) -- (4.5,-0.5) -- (4,0);\n\n            \\draw (5,0) node [circle,fill=nccblue] {};\n            \\draw (0,4) node [circle,fill=nccblue] {};\n\n            \\draw (-0.5,3.65) node {$(0,0)$};\n            \\draw (6,0.3) node {$(1023,767)$};\n        \\end{tikzpicture}\n    \\end{center}\n    \\caption{Cartesian Coordinates of a Computer Monitor with 1024$\\times$768 Resolution}\n    \\label{fig:monitor}\n\\end{figure}\n\n\\section{Class Header}\n\nEvery class definition begins with a class \\textit{header}.  The header defines the class' visibility and its name.  This is how a simple class header looks:\n\n\\begin{javaformat}{Class Header}\n\\begin{minted}{text}\n<visibility-modifier> class <identifier>\n\\end{minted}\n\\end{javaformat}\n\nThe visibility modifier for classes will be \\mintinline{java}{public} throughout this textbook.\n\nIn Section \\ref{sec:using-memory} we learned some rules that Java imposes on us regarding what we can choose as an identifier:\n\\bi\n\\item An identifier \\textbf{must} start with a letter.\n\\item An identifier \\textbf{must} only contain letters, digits, and the underscore character (\\lstinline{'_'}).\n\\item An identifier \\textbf{must not} be the same as a \\textit{reserved word} in Java (like \\lstinline{class} or \\lstinline{static}).\n\\ei\n\nSection \\ref{sec:using-memory} also lists some guidelines regarding naming a variable.  
The guidelines for naming a class are a bit different, however:\n\\bi\n\\item The name of a class should \\textit{start with a capital letter}.\n\\item Classes define \\textit{things}, so the name should be a \\textit{noun}.\n\\ei\n\nFinally, each class should have a JavaDoc comment above the header that explains what this class provides to the programmer.\n\nFor the \\texttt{Point} class, the header will look like the first line of this code:\n\n\\begin{minted}{java}\n/**\n * Provides a two-dimensional point on a Cartesian plane.\n * <p>\n * All x and y values are non-negative.\n */\npublic class Point {  // This line is the header\n    /* The body of the class will go here */\n}\n\\end{minted}\n\nNotice that I have put the opening curly brace of the body at the end of the header.  Some programmers choose instead to place an opening curly brace on the next line.\n\n\\section{Instance Variables}\n\nAfter the class header, the rest of the class' definition -- the \\textit{body} of the class -- appears inside curly braces.  The first thing that most programmers put at the top of the class' body is a list of the class' \\textit{properties}.  This list consists of variable declarations that describe the data an object of this class will contain.\n\nOne instance of the \\texttt{Point} class should contain enough information to complete an ordered pair in $(x,y)$ format.  We'll assume some knowledge about computer monitors -- specifically, that there are no partial pixels -- and specify that both the $x$ value and the $y$ value must be integers.\n\n\\begin{minted}{java}\npublic class Point {\n    int x;  // x component of Cartesian ordered pair\n    int y;  // y component of Cartesian ordered pair\n}\n\\end{minted}\n\nVariables like these, declared in class scope, are called \\textbf{instance variables}\\index{Variable!Instance variable}, because every instance of the \\texttt{Point} class that we create will store values in these variables.\n\nIn a different class, we will create a \\texttt{Point} object and place some data in these instance variables:\n\n\\begin{minted}{java}\npublic class TestPoint {\n    public static void main( String args[] ) {\n        Point p1 = new Point();  // p1 represents a point on the screen\n        p1.x = 10;   // p1's x coordinate is now 10\n        p1.y = 15;   // p1's y coordinate is now 15\n\n        System.out.println( \"The point is at (\" + p1.x + \"x\" + p1.y + \").\" );\n    }\n}\n\\end{minted}\n\nSo far, it seems like we've solved our problem.  The output from this program will be what you expect: \\texttt{The point is at (10x15).}  There exists, however, a new problem, illustrated here:\n\n\\begin{minted}{java}\npublic class TestPoint {\n    public static void main( String args[] ) {\n        Point p1 = new Point();  // p1 should represent a point on the screen\n        p1.x = -35;   // But can p1's x coordinate be negative?\n        p1.y = -17;   // Or its y coordinate?\n    }\n}\n\\end{minted}\n\nWhat happens here might surprise you.  We human readers of the English language know that a point on the screen isn't supposed to have negative values, but the Java compiler doesn't understand English.  Java \\textit{will not} interpret this as an error, because to Java it isn't an error.  It's a \\textit{semantic error}, because we know it's wrong, but it doesn't violate the rules of the Java language, so it will run and be wrong.\n\n\\begin{defn}{Semantic Error}\n    A \\textbf{semantic error}\\index{Error!Semantic error} is an error in logic or meaning.  
Typically, we notice semantic errors in code that has compiled and is running, because only a program that's running can generate incorrect output.\n\\end{defn}\n\n\\section{Visibility}\n\nThus, we learn the most important responsibility of Java programmers: We must protect the data in our classes, because relying on others to do the right thing will not work.\n\n\\begin{tip}{Data Protection}\n    When designing a Java class, you must ensure that the data in its instance variables will always be valid.  If you don't work to ensure this, then there will come a point where the data will become invalid.\n\\end{tip}\n\nThe path to ensuring data validity starts with changing the \\textit{visibility} of the instance variables.  We will mark \\texttt{x} and \\texttt{y} as \\mintinline{java}{private}, like this:\n\n\\begin{minted}{java}\npublic class Point {\n    private int x;  // x component of Cartesian ordered pair\n    private int y;  // y component of Cartesian ordered pair\n}\n\\end{minted}\n\nThe value of a \\mintinline{java}{private} variable cannot be retrieved or changed by any code outside the class.  This protects our data from being made negative by the \\texttt{TestPoint} class:\n\n\\begin{minted}{java}\npublic class TestPoint {\n    public static void main( String args[] ) {\n        Point p1 = new Point();  // p1 should represent a point on the screen\n        p1.x = -35;   // This is now a syntax error!\n        p1.y = -17;   // And so is this!  Success!\n    }\n}\n\\end{minted}\n\nBut we have now done too good a job of protecting our data, because we can't set the variables with good data.  We can't even print their values!\n\n\\begin{minted}{java}\npublic class TestPoint {\n    public static void main( String args[] ) {\n        Point p1 = new Point();  // p1 represents a point on the screen\n        p1.x = 10;   // This seemingly-valid request is now a syntax error\n        p1.y = 15;   // So is this\n\n        // And we can't even do this any more:\n        System.out.println( \"The point is at (\" + p1.x + \"x\" + p1.y + \").\" );\n    }\n}\n\\end{minted}\n\nThe instance variables still need to be \\mintinline{java}{private}; the way to fix this problem is to give limited \\mintinline{java}{public} access to viewing and changing the instance variables' values.  
We will explore how to grant this access in the next chapter.\n", "meta": {"hexsha": "c19f83591648024379de13595e31c3cc26a7011d", "size": 8582, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classes.tex", "max_stars_repo_name": "cmerlo441/cs1textbook", "max_stars_repo_head_hexsha": "203bd7e03ccc01470e420c40de43551f8bdaa04b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classes.tex", "max_issues_repo_name": "cmerlo441/cs1textbook", "max_issues_repo_head_hexsha": "203bd7e03ccc01470e420c40de43551f8bdaa04b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classes.tex", "max_forks_repo_name": "cmerlo441/cs1textbook", "max_forks_repo_head_hexsha": "203bd7e03ccc01470e420c40de43551f8bdaa04b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.3292682927, "max_line_length": 1110, "alphanum_fraction": 0.7166161734, "num_tokens": 2202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7981867801399695, "lm_q1q2_score": 0.5684943138497425}}
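As a preview of that idea (a sketch only -- the accessor names \\texttt{getX} and \\texttt{setX} below are one common convention, and the next chapter develops this properly), the plan is to pair each \\mintinline{java}{private} variable with \\mintinline{java}{public} methods that enforce our validity rules:

\\begin{minted}{java}
public class Point {
    private int x;  // x component of Cartesian ordered pair
    private int y;  // y component of Cartesian ordered pair

    // Anyone may read x ...
    public int getX() {
        return x;
    }

    // ... but a write succeeds only when the new value is valid
    public void setX( int newX ) {
        if( newX >= 0 ) {
            x = newX;
        }
    }
}
\\end{minted}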
{"text": "\\subsection{Blockchain}\n\n\\subsubsection{Definition}\n\nThis is how a block is defined in this work:\n\n\\agda{Blockchain}{block}\n\n\\emph{nextTXTree} assures that the second transaction tree is from the first transaction tree.\n\\emph{firstTreesInBlock} guarantees that the last transaction in the first transaction tree\nis the first in the block.\n\\emph{coinBaseTree} assures that the last transaction in the second transaction tree is\na coinbase transaction.\n\nBlockchain is a chain of valid blocks.\nEvery new block must be a continuation of the previous one.\nHere is the definition of the blockchain:\n\n\\agda{Blockchain}{blockchain}\n\nIn the first case, blockchain just has one block, called \\emph{fstBlock}.\nIn the second case, the blockchain is an addition of a valid block from a previous blockchain.\n\n\\subsubsection{Creation}\n\nIn this section, there will be explanation of how blockchain is created from blocks\nand how blocks are created from transaction trees.\nTo create a blockchain, it is first needed to create the last block.\nFrom the last block, it is possible to create all the chain.\n\n\\plabel{blockblockchain}\n\\agda{Blockchain}{blockblockchain}\n\nIn this \\hyperref[blockblockchain]{function}, if the first transaction tree of the block\nis a genesis tree,\nit will return a blockchain of just one block.\nIf it is a regular tree, it tries to find the first transaction tree of this block.\nUsing a recursive definition of block to blockchain,\nit is possible to generate all the rest of this blockchain from this block.\n\nIt is not always possible to generate a block from the transaction tree.\nIt is because the last transaction of a transaction tree must be a coinbase transaction.\nHere, the function that returns a decidable if it is possible to generate a block from\nthe transaction tree.\n\n\\agda{Blockchain}{treeblock}\n\nThe definition of the raw block gets just the coinbase transaction tree as an explicit type.\nThe other transaction tree can be founded opening the record.\n\n\\agda{Blockchain}{rawblock}\n\nThe code of the definition of what is a coinbase tree:\n\n\\agda{Blockchain}{coinbasetree}\n\nThe definition of a coinbase tree is the one that the last transaction is a coinbase.\n\nThe code verifies if the last transaction tree is a coinbase tree:\n\n\\agda{Blockchain}{iscoinbase}\n\nIf it is, it returns that it is possible to create a block from that with the block definition.\nIf it is not, it returns that it is impossible to create a block from this transaction tree.\n\nBut to create a block from this coinbase transaction tree, it is necessary to find the first tree\nof the block.\n\n\\agda{Blockchain}{fsttree}\n\nThe definition of \\emph{fstTree} is that it has a tree that is before this tree in the type.\nAnd this tree before is the first in the block.\n\n\\agda{Blockchain}{firsttreeinblock}\n\nThe decidable version of this \\emph{Set}:\n\n\\agda{Blockchain}{isfirsttreeinblock}\n\nIn this case, it pattern match trees that are genesis tree or if the last transaction was a coinbase\ntransaction.\n\n\\agda{Blockchain}{firsttree}\n\nTo find the first tree in the block, there are two cases.\nThe first case is that if the tree is a genesis tree, so the result is itself.\nThe second case is if it a regular tree, so it still has to divide it in many cases.\nIf this tree is already the first tree in the block, it will return itself.\nIf this tree is not, it has to verify if the block number of the tree is the same as this tree.\nIf the block number is 
equal, it can recursively find the first tree.\nIf it is not, it has to provide a proof that this tree must be\nthe first and that the block numbers are different.\n\nHere is the definition of what it means for one tree to be next to another:\n\n\\agda{Blockchain}{nexttree}\n\nBoth trees (\\emph{txTree1} and \\emph{txTree2}) can be equal or different.\nIf both trees are the same, they are next to each other.\nIf there is a proof that both trees are next to each other and\na further tree was generated from the last one,\nthen the first tree is next to that last tree.\n", "meta": {"hexsha": "cbc6e7002fae7adb0a4f853bcfec223d73644042", "size": 3972, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/blockchain.tex", "max_stars_repo_name": "guilhermehas/crypto-agda", "max_stars_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-02-13T16:56:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-22T19:27:12.000Z", "max_issues_repo_path": "docs/blockchain.tex", "max_issues_repo_name": "guilhermehas/cripto-agda", "max_issues_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-11-01T11:36:06.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-03T14:31:16.000Z", "max_forks_repo_path": "docs/blockchain.tex", "max_forks_repo_name": "guilhermehas/cripto-agda", "max_forks_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5631067961, "max_line_length": 100, "alphanum_fraction": 0.7862537764, "num_tokens": 917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5684943101772079}}
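To convey the shape of this backwards search outside of Agda, here is a rough Python sketch (an illustration only; the field names \\emph{previous}, \\emph{blockNumber}, and \\emph{lastIsCoinbase} are invented stand-ins for the Agda definitions above):

\\begin{verbatim}
def first_tree_in_block(tree):
    # A genesis tree is the first tree of its block.
    if tree.previous is None:
        return tree
    # A tree placed right after a coinbase transaction starts a new block.
    if tree.previous.lastIsCoinbase:
        return tree
    # Otherwise the tree before it belongs to the same block
    # (equal block numbers), so the search continues backwards.
    assert tree.previous.blockNumber == tree.blockNumber
    return first_tree_in_block(tree.previous)
\\end{verbatim}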
{"text": "\\section{Hierarchization on Dimensionally Adaptive Sparse Grids}\n\\label{sec:43dimAdaptive}\n\n\\minitoc{77mm}{7}\n\n\\noindent\nDimensionally adaptive sparse grids,\nwhich are sums of different hierarchical subspaces\nas described in \\cref{sec:232dimensionallyAdaptiveSG},\nhave the advantage over general spatially adaptive sparse grids\nthat algorithms can be formulated and applied more easily.\nIn this section, we describe two methods:\nfirst, the well-known combination technique, which was\nalready mentioned in \\cref{sec:232dimensionallyAdaptiveSG},\nand second, a new algorithm based on residual interpolation.\n\n\n\n\\subsection{The Combination Technique and Its Combinatorial Proof}\n\\label{sec:431combiTechniqueProof}\n\nThe combination technique was one of the first methods that\nwere developed by Griebel et al. in \\cite{Griebel92Combination}\n(for two and three dimensions)\nafter the term ``sparse grids'' was coined in 1991 \\cite{Zenger91Sparse}.\nHowever, the combination technique predates the development of sparse grids\nby decades, as it was already mentioned by Smolyak in 1963\n\\multicite{Smolyak63Quadrature,Hegland07Combination}.\nDelvos developed and proved the standard combination formula in the\nframework of Boolean interpolation operators in 1982\n\\multicite{Delvos82Dvariate,Delvos89Boolean}.\n\n\\paragraph{Formal description and outline of a combinatorial proof}\n\nIn the following, we give a formal description of the\nsparse grid combination technique, and we outline a new combinatorial proof\nof its correctness.\nWhile we discuss a high-level explanation of the proofs in this section,\nthe proofs themselves can be found in \\cref{sec:a131proofCombiTechnique},\nsince most of them are rather technical.\nFor simplicity,\nwe formulate the combination technique and its proof for regular\nsparse grids (see \\cref{sec:231regularSG}).\nHowever, the main ideas of the chain of proofs are also applicable\nto dimensionally adaptive sparse grids\n(see \\cref{sec:232dimensionallyAdaptiveSG}).\n\n\\begin{theorem}[sparse grid combination technique]\n  \\label{thm:combiTechnique}\n  Let $\\liset \\ceq \\{(\\*l, \\*i) \\mid\n  \\normone{\\*l} \\le n,\\; \\*i \\in \\hiset{\\*l}\\}$\n  correspond to the regular sparse grid\n  $\\regsgset{n}{d}$ and let $(\\fcnval{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$\n  be given function values on $\\regsgset{n}{d}$.\n  If we define\n  \\begin{itemize}\n    \\item\n    the combined sparse grid interpolant $\\regsgintp[\\ct]{n}{d}$ via\n    \\eqref{eq:combiTechnique}, i.e.,\n    \\begin{equation}\n      \\regsgintp[\\ct]{n}{d}\n      = \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\sum_{\\normone{\\*l'} = n-q}\n      \\fgintp{\\*l'},\n    \\end{equation}\n    where $\\fgintp{\\*l'} \\in \\ns{\\*l'}$ is the full grid interpolant\n    of $\\objfun$ with level $\\*l'$, and\n    \n    \\item\n    the hierarchical sparse grid interpolant $\\regsgintp{n}{d}$\n    via \\eqref{eq:hierarchizationProblem} and\n    \\eqref{eq:hierarchizationInterpolant}\n  \\end{itemize}\n  and if we assume that the hierarchical splitting equation\n  \\eqref{eq:hierSplittingMV} holds,\n  then the combined and the hierarchical sparse grid interpolants coincide:\n  \\begin{equation}\n    \\regsgintp[\\ct]{n}{d}\n    = \\regsgintp{n}{d}.\n  \\end{equation}\n\\end{theorem}\n\n\\begin{proof}[Proof (sketch)]\n  Let $\\gp{\\*l,\\*i} \\in \\regsgset{n}{d}$ be an arbitrary\n  point of the regular sparse grid.\n  First, we split the inner sum of 
$\\regsgintp[\\ct]{n}{d}(\\gp{\\*l,\\*i})$\n  into levels $\\*l'$ whose full grid sets $\\fgset{\\*l'}$\n  contain $\\gp{\\*l,\\*i}$ and levels whose full grid sets\n  do not contain $\\gp{\\*l,\\*i}$:\n  \\begin{equation}\n    \\regsgintp[\\ct]{n}{d}(\\gp{\\*l,\\*i})\n    = \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\cdot \\paren*{\n      \\sum_{\\substack{\\normone{\\*l'} = n - q\\\\\\fgset{\\*l'} \\ni \\gp{\\*l,\\*i}}}\n      \\fgintp{\\*l'}(\\gp{\\*l,\\*i}) +\n      \\sum_{\\substack{\\normone{\\*l'} = n - q\\\\\\fgset{\\*l'} \\notni \\gp{\\*l,\\*i}}}\n      \\fgintp{\\*l'}(\\gp{\\*l,\\*i})\n    }.\n  \\end{equation}\n  The summands $\\fgintp{\\*l'}(\\gp{\\*l,\\*i})$ of the first inner sum\n  each equal $\\fcnval{\\*l,\\*i}$ due to the full grid interpolation\n  property \\eqref{eq:interpFullGridMV}.\n  Therefore, the first inner sum is equal to the product of\n  $\\fcnval{\\*l,\\*i}$ with the number of summands:\n  \\begin{equation}\n    \\label{eq:combiTechniqueSplitSum}\n    \\begin{split}\n      \\regsgintp[\\ct]{n}{d}(\\gp{\\*l,\\*i})\n      &= \\fcnval{\\*l,\\*i} \\cdot \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\cdot\n      \\setsize{\n        \\{\\*l' \\mid \\normone{\\*l'} = n - q,\\; \\fgset{\\*l'} \\ni \\gp{\\*l,\\*i}\\}\n      }\\\\\n      &\\qquad{} {} + \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\cdot\n      \\sum_{\\substack{\\normone{\\*l'} = n - q\\\\\\fgset{\\*l'} \\notni \\gp{\\*l,\\*i}}}\n      \\fgintp{\\*l'}(\\gp{\\*l,\\*i}).\n    \\end{split}\n  \\end{equation}\n  After this sketch of proof,\n  we will prove that the first of the two summands in\n  \\cref{eq:combiTechniqueSplitSum}\n  equals one (see \\cref{prop:combiTechniqueOne})\n  and that the second of the two summands\n  equals zero (see \\cref{prop:combiTechniqueZero}).\n  Consequently, we infer\n  \\begin{equation}\n    \\regsgintp[\\ct]{n}{d}(\\gp{\\*l,\\*i})\n    = \\fcnval{\\*l,\\*i},\n  \\end{equation}\n  i.e., $\\regsgintp[\\ct]{n}{d}$ interpolates $\\objfun$ at $\\regsgset{n}{d}$.\n  Note that $\\regsgintp[\\ct]{n}{d}$ is contained in $\\regsgspace{n}{d}$,\n  if the hierarchical splitting equation \\eqref{eq:hierSplittingMV} holds,\n  as\n  \\begin{equation}\n    \\fgintp{\\*l'} \\in\n    \\ns{\\*l'}\n    = \\bigoplus_{\\*l''=\\*0}^{\\*l'} \\hs{\\*l''}\n    \\subset \\regsgspace{n}{d},\\quad\n    \\normone{\\*l'} \\le n,\n  \\end{equation}\n  due to $\\normone{\\*l''} \\le \\normone{\\*l'} \\le n$\n  for $\\*l'' \\le \\*l'$, i.e.,\n  $\\hs{\\*l''} \\subset \\regsgspace{n}{d}$ for\n  $\\*l'' = \\*0, \\dotsc, \\*l'$.%\n  \\footnote{%\n    This argumentation can be straightforwardly adapted\n    for general dimensionally adaptive sparse grids\n    with downward closed level sets as mentioned in\n    \\cref{sec:232dimensionallyAdaptiveSG}.%\n  }\n  As both $\\regsgintp[\\ct]{n}{d}$ and $\\regsgintp{n}{d}$\n  are contained in $\\regsgspace{n}{d}$ and\n  interpolate $\\objfun$ on $\\regsgset{n}{d}$, they coincide\n  due to the uniqueness of sparse grid interpolation\n  (linear independence of the hierarchical basis).\n\\end{proof}\n\n\\paragraph{Inclusion-exclusion principle}\n\nIt remains to prove that the first sum in \\eqref{eq:combiTechniqueSplitSum}\nis indeed one and that the second sum vanishes.\nThe first statement is a direct consequence of the\n\\term{inclusion-exclusion principle} \\cite{Hegland07Combination}.\nIn its simplest form, the idea of the principle is that the cardinality\nof the union of two finite subsets $A$ and $B$ of some set is given by\n\\begin{equation}\n  \\setsize{A \\cup B}\n  
= \\setsize{A} + \\setsize{B} - \\setsize{A \\cap B},\n\\end{equation}\ni.e., we first count (include)\nthe elements in $A$ and then in $B$,\nbut as we have counted the elements of $A \\cap B$ twice,\nwe have to subtract (exclude) its cardinality again.\n\nThe setting is similar for the combination technique.\nIf we add all grids in \\cref{fig:combinationTechnique}\non the green diagonal, then every point whose index is not odd\nwill be counted multiple times.\nBy subtracting the number of occurrences of the points on the\nred diagonal,\nthe result of the ``weighted counting'' is exactly one for every point.\nThe following proposition, whose proof is of purely combinatorial nature,\ngeneralizes this argument to higher dimensions:\n\n\\begin{restatable}[inclusion-exclusion principle]{%\n  proposition%\n}{%\n  propCombiTechniqueOne%\n}\n  \\label{prop:combiTechniqueOne}\n  For every $\\gp{\\*l,\\*i} \\in \\regsgset{n}{d}$, we have\n  \\begin{equation}\n    \\label{eq:combiTechniqueOne}\n    \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\cdot\n    \\setsize{\n      \\{\\*l' \\mid \\normone{\\*l'} = n - q,\\; \\fgset{\\*l'} \\ni \\gp{\\*l,\\*i}\\}\n    }\n    = 1.\n  \\end{equation}\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a131proofCombiTechnique}.\n\\end{proof}\n\n\\paragraph{Canceling out function values}\n\nThe second statement about the vanishing second sum in\n\\eqref{eq:combiTechniqueSplitSum} is much harder to prove.\nIt says that at every grid point $\\gp{\\*l,\\*i}$,\nthe contributions $\\fgintp{\\*l'}$ of levels $\\*l'$\nthat do not contain that point cancel out,\nwhich may seem quite surprising.\nThe key observation is as follows:\nThe values of $\\fgintp{\\*l'}, \\fgintp{\\*l''}$ for two levels\n$\\*l', \\*l''$ are the same at $\\gp{\\*l,\\*i}$,\nif all non-equal entries $l'_t, l''_t$ of the levels are\neach greater or equal to $l_t$.\n\nFor a higher-level explanation,\nnote that the statement $l'_t \\ge l_t$ is equivalent to\n$\\fgset{l'_t} \\ni \\gp{l_t,i_t}$.\nBoth $\\fgintp{\\*l'}, \\fgintp{\\*l''}$ interpolate at\n$\\gp{\\*l,\\*i}$ when projected onto the $t$-th dimension,\nso their contribution to $\\fgintp{\\*l'}(\\gp{\\*l,\\*i})$ and\n$\\fgintp{\\*l''}(\\gp{\\*l,\\*i})$ must be the same.\nAlthough there may be dimensions $t$ for which\n$\\fgset{l'_t} \\notni \\gp{l_t,i_t}$,\nthese dimensions do not matter if $l'_t = l''_t$,\nas the univariate restrictions of $\\fgintp{\\*l'}, \\fgintp{\\*l''}$\ninterpolate the same data, and they are evaluated at the same point\n$\\gp{l_t,i_t}$.\n\nOne can formalize these considerations by defining an\nequivalence relation on the set of levels such that the values of\n$\\fgintp{\\*l'}$ at $\\gp{\\*l,\\*i}$ are constant\non the equivalence classes.\n\n\\begin{definition}[%\n  equivalence relation for the proof of the combination technique%\n]\n  \\label{def:combiTechniqueEquivalenceRelation}\n  Let $\\gp{\\*l,\\*i} \\in \\regsgset{n}{d}$ be fixed and\n  \\begin{equation}\n    \\label{eq:combiTechniqueSpecialLevelSet}\n    L\n    \\ceq \\{\\*l' \\mid \\ex{q=0,\\dotsc,d-1}{\n      \\normone{\\*l'} = n - q,\\; \\fgset{\\*l'} \\notni \\gp{\\*l,\\*i}\n    }\\}\n  \\end{equation}\n  be the set of levels that do not contain $\\gp{\\*l,\\*i}$.\n  We define a relation $\\eq$ on $L$ as follows:\n  For $\\*l', \\*l'' \\in L$, we set $\\*l' \\eq \\*l''$ if and only if\n  \\begin{equation}\n    \\falarge{t \\notin T_{\\*l',\\*l''}}{\\min\\{l'_t, l''_t\\} \\ge l_t},\\quad\n    T_{\\*l',\\*l''}\n    \\ceq \\{t \\mid l'_t = l''_t < l_t\\}.\n  
\\end{equation}\n\\end{definition}\n\n\\begin{restatable}[identical values in equivalence classes]{%\n  shortlemma%\n}{%\n  lemmaCombiTechniqueIdenticalValues%\n}\n  \\label{lemma:combiTechniqueIdenticalValues}\n  Let $\\*l', \\*l'' \\in L$ with $\\*l' \\eq \\*l''$.\n  Then, $\\fgintp{\\*l'}(\\gp{\\*l,\\*i})\n  = \\fgintp{\\*l''}(\\gp{\\*l,\\*i})$.\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a131proofCombiTechnique}.\n\\end{proof}\n\nBy exploiting the tensor product structure of the basis functions,\nthe proof shows an even stronger version, which is shown\nin \\cref{fig:combiTechniqueProof}:\nThe components $\\fgintp{\\*l'}$ and $\\fgintp{\\*l''}$ are equal\non the $m$-dimensional affine subspace through $\\gp{\\*l,\\*i}$\nparallel to the $m$ coordinates in $T_{\\*l',\\*l''}$\n(where $m \\ceq \\setsize{T_{\\*l',\\*l''}}$).\nThe lemma allows us to group summands in the second sum of\n\\eqref{eq:combiTechniqueSplitSum} by function values.\nHence, it remains to count the number of levels in\neach equivalence class of $\\eq$.\nTherefore, we need a characterization of the equivalence classes:\n\n\\begin{SCfigure}\n  \\includegraphics{combiTechniqueProof_1}%\n  \\caption[%\n    Canceling out function values in the proof of the combination technique%\n  ]{%\n    Nodal subspaces $\\ns{\\*l}$ contributing to the combination\n    technique solution for the two-dimensional regular sparse grid\n    $\\regsgspace{n}{d}$ of level $n = 3$ \\emph{(bottom right).}\n    After picking a point $\\gp{\\*l,\\*i} \\in \\regsgset{n}{d}$\n    (\\emph{cross,} here $\\*l = (2, 1)$, $\\*i = (1, 1)$),\n    the set $L$ of levels whose grids do not contain $\\gp{\\*l,\\*i}$\n    \\emph{(colored subspaces)}\n    decomposes into three disjoint equivalence classes\n    \\emph{(colors)} given by the relation $\\eq$.\n    In every equivalence class $L_0 \\in \\eqclasses{L}{\\eq}$,\n    the interpolants $\\fgintp{\\*l'}$ ($\\*l' \\in L_0$)\n    are equal on an affine subspace\n    \\emph{(dark lines),} which contains $\\gp{\\*l,\\*i}$.\n    Due to the combination coefficients,\n    the contribution to the combined solution\n    vanishes per equivalence class.%\n  }%\n  \\label{fig:combiTechniqueProof}%\n\\end{SCfigure}\n\n\\begin{restatable}[characterization of equivalence classes]{%\n  lemma%\n}{%\n  lemmaCombiTechniqueCharacterization%\n}\n  \\label{lemma:combiTechniqueCharacterization}\n  Let $L_0 \\in \\eqclasses{L}{\\eq}$ be an equivalence class of $\\eq$.\n  If we define\n  \\begin{equation}\n    T_{L_0}\n    \\ceq \\{t \\mid \\exfa{l^\\ast_t < l_t}{\\*l' \\in L_0}{l'_t = l^\\ast_t}\\}\n  \\end{equation}\n  as the set of dimensions $t$ in which all levels in $L_0$\n  have the same entry $l^\\ast_t < l_t$, then\n  \\begin{equation}\n    L_0\n    = \\{\\*l' \\in L \\mid\n    \\fa{t \\in T_{L_0}}{l'_t = l^\\ast_t},\\;\n    \\fa{t \\notin T_{L_0}}{l'_t \\ge l_t}\\}.\n  \\end{equation}\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a131proofCombiTechnique}.\n\\end{proof}\n\nThe lemma states that every equivalence class $L_0$ is exactly the\nset of the levels whose entries are equal\nand smaller than $l_t$ in some dimensions\n(which are contained in $T_{L_0}$)\nand whose entries are greater than or equal to $l_t$ in all other dimensions.\nWhile this statement may seem intuitively correct,\nthe proof is rather technical.\nFinally, we are now able to show that the second sum in\n\\eqref{eq:combiTechniqueSplitSum} vanishes:\n\n\\begin{restatable}[function value cancellation]{%\n  proposition%\n}{%\n  
propCombiTechniqueZero%\n}\n  \\label{prop:combiTechniqueZero}\n  For every $\\gp{\\*l,\\*i} \\in \\regsgset{n}{d}$, we have\n  \\begin{equation}\n    \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\cdot\n    \\sum_{\\substack{\\normone{\\*l'} = n - q\\\\\\fgset{\\*l'} \\notni \\gp{\\*l,\\*i}}}\n    \\fgintp{\\*l'}(\\gp{\\*l,\\*i})\n    = 0.\n  \\end{equation}\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a131proofCombiTechnique}.\n\\end{proof}\n\nThe proof essentially first counts the number of possible levels in\nan equivalence class and then applies known combinatorial identities\nto prove that the sum must vanish.\nThis proves \\thmref{thm:combiTechnique}.\n\n\n\n\\subsection{Hierarchization with the Combination Technique}\n\\label{sec:432hierarchizationCombiTechnique}\n\nIt is straightforward to hierarchize function values\n$\\fcnval{\\*l,\\*i}$ on dimensionally adaptive sparse grids\nwith the combination technique.\nThe resulting hierarchization algorithm is given as \\cref{alg:combiTechnique}.\nIn \\cref{line:algCombiTechnique1},\nthe hierarchical surpluses corresponding to the full grid\ninterpolant $\\fgintp{\\*l'} \\in \\ns{\\*l'}$ have to be computed\n(see \\eqref{eq:interpFullGridMV}).\nAs shown in \\cref{sec:42fullGrids}, we can easily calculate these\nsurpluses with the unidirectional principle in\n\\cref{alg:unidirectionalPrinciple}.\nThe surpluses are then combined with the same combination formula\nas in \\thmref{thm:combiTechnique}.\nNote that it is imperative to employ the hierarchical basis functions\n$\\basis{\\*l,\\*i}$ with $\\*l = \\*0, \\dotsc, \\*l'$ and $\\*i \\in I_{\\*l}$\nand not the nodal basis,\ni.e., $\\basis{\\*l',\\*i'}$ with $\\*i' = \\*0, \\dotsc, \\*2^{\\*l'}$.\n\n\\begin{algorithm}\n  \\begin{algorithmic}[1]\n    \\Function{$\\vlinout = \\texttt{combinationTechnique}$}{%\n      $\\vlinin$, $n$, $d$%\n    }\n      \\For{$q = 0, \\dotsc, d - 1$}\n        \\For{$\\*l' \\in \\natz^d$ with $\\normone{\\*l'} = n - q$}\n          \\State{%\n            Let $(\\surplus[(\\*l')]{\\*l,\\*i})_{\n              \\*l = \\*0, \\dotsc, \\*l'\\!,\\, \\*i \\in \\hiset{\\*l}\n            }$ be such that\n            $\\sum_{\\*l=\\*0}^{\\*l'} \\sum_{\\*i \\in \\hiset{\\*l}}\n            \\surplus[(\\*l')]{\\*l,\\*i} \\basis{\\*l,\\*i} \\equiv\n            \\fgintp{\\*l'}$%\n          }\n          \\label{line:algCombiTechnique1}\n          \\State{%\n            $\\surplus[(\\*l')]{\\*l,\\*i} \\gets 0$\n            for all $(\\*l,\\*i) \\in \\liset$\n            with $\\lnot(\\*l \\le \\*l')$%\n          }\n          \\Comment{extend surpluses}%\n          \\label{line:algCombiTechnique2}\n        \\EndFor{}\n      \\EndFor{}\n      \\State{%\n        $\\linout{\\*l,\\*i}\n        = \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q}\n        \\sum_{\\normone{\\*l'} = n-q} \\surplus[(\\*l')]{\\*l,\\*i}$\n        for all $(\\*l, \\*i) \\in \\liset$%\n      }\n      \\Comment{combine surpluses}%\n    \\EndFunction{}\n  \\end{algorithmic}\n  \\caption[%\n    Hierarchization with the combination technique%\n  ]{%\n    Application of the hierarchization operator $\\linop = \\intpmatinv$\n    with the combination technique.\n    For simplicity,\n    the algorithm is described for regular sparse grids,\n    but it can be generalized to arbitrary dimensionally adaptive sparse grids.\n    Inputs are\n    the vector $\\vlinin = (\\linin{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$\n    of input data (function values $\\fcnval{\\*l,\\*i}$ at the grid points),\n    the level $n$, and the 
dimensionality $d$ of the regular sparse grid,\n    where $\\liset$ is the set of all feasible level-index pairs $(\\*l,\\*i)$,\n    i.e., $\\normone{\\*l} \\le n$, $\\*i \\in \\hiset{\\*l}$.\n    The output is the vector\n    $\\vlinout = (\\linout{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$\n    of output data (hierarchical surpluses $\\surplus{\\*l,\\*i}$).%\n  }%\n  \\label{alg:combiTechnique}%\n\\end{algorithm}\n\n\\paragraph{Correctness}\n\nOf course, the proof of the correctness of \\cref{alg:combiTechnique}\nrelies on the correctness of the combination technique\n(see \\cref{thm:combiTechnique}).\nIf the combination coefficients are determined correctly\n\\cite{Nobile16Adaptive}, the algorithm can even be applied to\nall dimensionally adaptive sparse grids.\nThe proof of the following proposition can be generalized accordingly.\n\n\\begin{proposition}[correctness of combination technique]\n  \\label{prop:correctnessAlgCombiTechnique}\n  \\Cref{alg:combiTechnique}\n  is correct for hierarchization on regular sparse grids.\n\\end{proposition}\n\n\\begin{proof}\n  According to \\cref{line:algCombiTechnique1} of \\cref{alg:combiTechnique},\n  the full grid interpolants $\\fgintp{\\*l'}$ can be written as\n  \\begin{equation}\n    \\fgintp{\\*l'}\n    = \\sum_{\\normone{\\*l} \\le n} \\sum_{\\*i \\in \\hiset{\\*l}}\n    \\surplus[(\\*l')]{\\*l,\\*i} \\basis{\\*l,\\*i}\n  \\end{equation}\n  where the surpluses have been extended\n  from $\\*l = \\*0, \\dotsc, \\*l'$ to all $\\*l$ with $\\normone{\\*l} \\le n$\n  by zero in \\cref{line:algCombiTechnique2}.\n  \\Cref{thm:combiTechnique} now allows us to write the hierarchical\n  interpolant $\\regsgintp{n}{d}$ in terms of the full grid components:\n  \\begin{subequations}\n    \\begin{align}\n      \\regsgintp{n}{d}\n      = \\regsgintp[\\ct]{n}{d}\n      &= \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\sum_{\\normone{\\*l'} = n-q}\n      \\fgintp{\\*l'}\\\\\n      &= \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\sum_{\\normone{\\*l'} = n-q}\n      \\sum_{\\normone{\\*l} \\le n} \\sum_{\\*i \\in \\hiset{\\*l}}\n      \\surplus[(\\*l')]{\\*l,\\*i} \\basis{\\*l,\\*i}\\\\\n      \\label{eq:propCorrectnessAlgCombiTechnique1}\n      &= \\sum_{\\normone{\\*l} \\le n} \\sum_{\\*i \\in \\hiset{\\*l}}\n      \\underbrace{\n        \\paren*{\n          \\sum_{q=0}^{d-1} (-1)^q \\binom{d-1}{q} \\sum_{\\normone{\\*l'} = n-q}\n          \\surplus[(\\*l')]{\\*l,\\*i}\n        }\n      }_{= \\linout{\\*l,\\*i}}\n      \\basis{\\*l,\\*i},\n    \\end{align}\n  \\end{subequations}\n  where $\\linout{\\*l,\\*i}$ is the $(\\*l,\\*i)$-th entry of the output vector\n  of \\cref{alg:combiTechnique}.\n  Note that the hierarchical interpolant $\\regsgintp{n}{d}$\n  can be written as\n  $\\regsgintp{n}{d} = \\sum_{\\normone{\\*l} \\le n} \\sum_{\\*i \\in \\hiset{\\*l}}\n  \\surplus{\\*l,\\*i} \\basis{\\*l,\\*i}$\n  (see \\eqref{eq:regularSGInterpolant}),\n  where the surpluses $\\surplus{\\*l,\\*i}$ are unique due to the\n  linear independence of the hierarchical basis.\n  As \\eqref{eq:propCorrectnessAlgCombiTechnique1}\n  equals $\\regsgintp{n}{d}$ and has the same form,\n  the coefficients $\\linout{\\*l,\\*i}$\n  must coincide with the surpluses $\\surplus{\\*l,\\*i}$.\n\\end{proof}\n\n\n\n\\subsection{Hierarchization with Residual Interpolation}\n\\label{sec:433residualInterpolation}\n\nAnother method to hierarchize function values on\ndimensionally adaptive sparse grids is the\n\\term{method of residual interpolation.}\nThe advantage over the combination technique is 
that\nit only needs to operate on so-called \\term{active nodal spaces.}\nIn contrast, the combination technique needs to perform computations\non additional non-active nodal subspaces\n(for the regular sparse grid case:\nsummands with $q \\ge 1$ in \\eqref{eq:combiTechnique}).\n\n\\paragraph{Active nodal spaces}\n\n\\Cref{alg:residualInterpolation} describes the procedure of\nthe method of residual interpolation,\ngiven the function values $\\vlinin$ corresponding to the grid points\nand the levels $L$ contained in the sparse grid\n(see \\eqref{eq:dimensionallyAdaptiveSG}).\nThe list $\\*l^{(1)}, \\dotsc, \\*l^{(m)}$ of active nodal spaces\nin \\cref{line:algResidualInterpolation1} is determined by the condition\n\\begin{equation}\n  \\label{eq:activeNodalSpaces}\n  \\bigcup_{j=1}^m \\{\\*l \\in \\natz^d \\mid \\*l \\le \\*l^{(j)}\\} = L,\\quad\n  \\falarge{j_1 \\not= j_2}{\\lnot(\\*l^{(j_1)} \\le \\*l^{(j_2)})}.\n\\end{equation}\nThis means that the corresponding sparse grid $\\sgset$\nis the (non-disjoint) union of the full grid sets $\\fgset{\\*l^{(j)}}$\n($j = 1, \\dotsc, m$)\nand no full grid set is contained in another, i.e.,\nno full grid set can be omitted without\nremoving points from the union $\\sgset$.\n\n\\begin{algorithm}\n  \\begin{algorithmic}[1]\n    \\Function{$\\vlinout = \\texttt{residualInterpolation}$}{%\n      $\\vlinin$, $\\levelset$%\n    }\n      \\State{%\n        $r^{(0)}(\\gp{\\*l,\\*i}) \\gets \\fcnval{\\*l,\\*i}$\n        for all $(\\*l,\\*i) \\in \\liset$%\n      }\n      \\State{%\n        Compute list $\\*l^{(1)}, \\dotsc, \\*l^{(m)}$\n        of active nodal spaces from $L$ (see \\eqref{eq:activeNodalSpaces})%\n      }\n      \\label{line:algResidualInterpolation1}\n      \\State{%\n        Sort $\\*l^{(1)}, \\dotsc, \\*l^{(m)}$ by decreasing level sum%\n      }\n      \\For{$j = 1, \\dotsc, m$}\n        \\State{%\n          Let $r_{\\*l^{(j)}}^{(j-1)} \\in \\ns{\\*l^{(j)}}$ be the\n          interpolant of $r^{(j-1)}$ on $\\fgset{\\*l^{(j)}}$%\n        }\n        \\label{line:algResidualInterpolation3}\n        \\State{%\n          Let $(\\surplus[(j)]{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$ be such that\n          $\\sum_{\\*l=\\*0}^{\\*l^{(j)}} \\sum_{\\*i \\in \\hiset{\\*l}}\n          \\surplus[(j)]{\\*l,\\*i} \\basis{\\*l,\\*i}\n          \\equiv r_{\\*l^{(j)}}^{(j-1)}$%\n        }\n        \\Comment{interpolation}%\n        \\label{line:algResidualInterpolation2}\n        \\State{%\n          $r^{(j)}(\\gp{\\*l,\\*i}) \\gets\n          r^{(j-1)}(\\gp{\\*l,\\*i}) - r_{\\*l^{(j)}}^{(j-1)}(\\gp{\\*l,\\*i})$\n          for all $(\\*l,\\*i) \\in \\liset$%\n        }\n        \\Comment{new residuals}%\n        \\label{line:algResidualInterpolation4}\n      \\EndFor{}\n      \\State{%\n        $\\vlinout \\gets \\sum_{j=1}^{m} \\vsurplus^{(j)}$\n        (where $\\surplus[(j)]{\\*l,\\*i} = 0$,\n        $(\\*l,\\*i) \\in \\liset$,\n        if $\\lnot(\\*l \\le \\*l^{(j)})$)%\n      }\n      \\Comment{combine surpluses}%\n    \\EndFunction{}\n  \\end{algorithmic}\n  \\caption[%\n    Hierarchization with residual interpolation%\n  ]{%\n    Application of the hierarchization operator $\\linop = \\intpmatinv$\n    with residual interpolation\n    for dimensionally adaptive sparse grids.\n    Inputs are\n    the vector $\\vlinin = (\\linin{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$\n    of input data (function values $\\fcnval{\\*l,\\*i}$ at the grid points) and\n    the set $\\levelset$ of levels that are part of\n    the sparse grid (see 
\\eqref{eq:dimensionallyAdaptiveSG}),\n    where $\\liset$ is the set of all feasible level-index pairs $(\\*l,\\*i)$,\n    i.e., $\\*l \\in \\levelset$, $\\*i \\in \\hiset{\\*l}$.\n    The output is the vector\n    $\\vlinout = (\\linout{\\*l,\\*i})_{(\\*l,\\*i) \\in \\liset}$\n    of output data (hierarchical surpluses $\\surplus{\\*l,\\*i}$).%\n  }%\n  \\label{alg:residualInterpolation}%\n\\end{algorithm}\n\n\\paragraph{Correctness}\n\nThe principle of \\Cref{alg:residualInterpolation} is maintaining\na vector $(r^{(j)}(\\gp{\\*l,\\*i}))_{(\\*l,\\*i) \\in \\liset}$ of residuals\nand interpolating the residual data subsequently on the active nodal spaces.\nAgain, note that it is necessary to compute the coefficients\n$\\surplus[(j)]{\\*l,\\*i}$ in the hierarchical basis, despite interpolating\non the full grid $\\fgset{\\*l^{(j)}}$.\nIn \\cref{chap:a10proofs}, we prove that the algorithm satisfies\nthe following invariant, which can be used to show its correctness:\n\n\\begin{restatable}[invariant of residual interpolation]{%\n  proposition%\n}{%\n  propInvariantResidualInterpolation%\n}\n  \\label{prop:invariantResidualInterpolation}\n  For $j = 1, \\dotsc, m$, it holds\n  \\begin{subequations}\n    \\label{eq:propInvariantResidualInterpolationStatements}\n    \\begin{alignat}{4}\n      \\label{eq:propInvariantResidualInterpolation1}\n      r_{\\*l^{(j)}}^{(j-1)}(\\gp{\\*l,\\*i})\n      &= 0,\\quad\n      &&\\*l \\le \\*l^{(j')},\\;\\;\n      &&\\*i \\in \\hiset{\\*l},\\quad\n      &j'\n      &= 1, \\dotsc, j - 1,\\\\\n      \\label{eq:propInvariantResidualInterpolation2}\n      r^{(j)}(\\gp{\\*l,\\*i})\n      &= 0,\\quad\n      &&\\*l \\le \\*l^{(j')},\\;\\;\n      &&\\*i \\in \\hiset{\\*l},\\quad\n      &j'\n      &= 1, \\dotsc, j,\\\\\n      \\label{eq:propInvariantResidualInterpolation3}\n      r^{(j)}(\\gp{\\*l,\\*i})\n      &= \\fcnval{\\*l,\\*i} - f^{\\sparse,(j)}(\\gp{\\*l,\\*i}),\\quad\n      &&\\*l \\in L,\\;\\;\n      &&\\*i \\in \\hiset{\\*l},&&\n    \\end{alignat}\n  \\end{subequations}%\n  \\setlength{\\abovedisplayskip}{0pt}%\n  \\begin{equation}\n    \\label{eq:propInvariantResidualInterpolation4}\n    \\text{where}\\quad\n    f^{\\sparse,(j)}\n    \\ceq \\sum_{\\*l' \\in \\levelset} \\sum_{\\*i' \\in \\hiset{\\*l'}}\n    \\paren*{\\sum_{j'=1}^{j} \\surplus[(j')]{\\*l',\\*i'}} \\basis{\\*l',\\*i'}.\n  \\end{equation}\n\\end{restatable}\n\n\\begin{proof}\n  See \\cref{sec:a132proofResidualInterpolation}.\n\\end{proof}\n\n\\begin{corollary}[correctness of residual interpolation]\n  \\label{cor:algResidualInterpolationCorrectness}\n  \\Cref{alg:residualInterpolation} is correct for hierarchization\n  on dimensionally adaptive sparse grids.\n\\end{corollary}\n\n\\begin{proof}\n  Let $\\*l \\in \\levelset$ and $\\*i \\in \\hiset{\\*l}$.\n  By construction of the active nodal spaces,\n  there exists some $j' \\in \\{1, \\dotsc, m\\}$ such that $\\*l \\le \\*l^{(j')}$.\n  By \\cref{prop:invariantResidualInterpolation}, we obtain\n  for $j = m$\n  \\begin{subequations}\n    \\label{eq:proofCorAlgResidualInterpolationCorrectness1}\n    \\begin{align}\n      \\sum_{\\*l' \\in L} \\sum_{\\*i' \\in \\hiset{\\*l'}}\n      \\smash{\n        \\underbrace{\n          \\paren*{\\sum_{j''=1}^{m} \\surplus[(j'')]{\\*l',\\*i'}}\n        }_{= \\linout{\\*l',\\*i'}}\n      }\n      \\basis{\\*l',\\*i'}(\\gp{\\*l,\\*i})\n      &\\quad\\;\n      \\mathclap{\\overset{\\eqref{eq:propInvariantResidualInterpolation4}}{=}}\n      \\quad\\;\n      f^{\\sparse,(m)}(\\gp{\\*l,\\*i})\n      
\\overset{\\eqref{eq:propInvariantResidualInterpolation3}}{=}\n      \\fcnval{\\*l,\\*i} - r^{(m)}(\\gp{\\*l,\\*i})\\\\\n      &\\quad\\;\n      \\mathclap{\\overset{\\eqref{eq:propInvariantResidualInterpolation2}}{=}}\n      \\quad\\;\n      \\fcnval{\\*l,\\*i}.\n    \\end{align}\n  \\end{subequations}\n  As the hierarchical interpolant $\\sgintp$\n  (see \\eqref{eq:hierarchizationInterpolant})\n  has the same form\n  $\\sum_{\\*l' \\in \\levelset} \\sum_{\\*i' \\in \\hiset{\\*l'}}\n  \\surplus{\\*l',\\*i'} \\basis{\\*l',\\*i'}$ as the \\lhs of\n  \\eqref{eq:proofCorAlgResidualInterpolationCorrectness1}\n  with unique surpluses $\\surplus{\\*l',\\*i'}$ such that the function values\n  are interpolated (see \\eqref{eq:hierarchizationProblem}),\n  the coefficients $\\linout{\\*l',\\*i'}$\n  (output of \\cref{alg:residualInterpolation})\n  coincide with the surpluses $\\surplus{\\*l',\\*i'}$.\n\\end{proof}\n\n\\vspace{1em}\n\n\\Cref{prop:invariantResidualInterpolation} shows that\n$r^{(j)}(\\gp{\\*l,\\*i})$ is the residual of the\ninterpolant $f^{\\sparse,(j)}$ of iteration~$j$\nto the objective function $\\objfun$ at the grid points $\\gp{\\*l,\\*i}$\n(\\cref{eq:propInvariantResidualInterpolation3}).\nAfter interpolating $r^{(j-1)}$ on the grid $\\fgset{\\*l^{(j)}}$\nto obtain the function $r_{\\*l^{(j)}}^{(j-1)}$\nand subtracting the resulting values from the old residual values,\nthe new residual values $r^{(j)}(\\gp{\\*l,\\*i})$ vanish\nnot only on the grid $\\{(\\*l^{(j)}, \\*i) \\mid \\*i \\in \\hiset{\\*l^{(j)}}\\}$,\nbut also on all previous grids\n$\\{(\\*l^{(j')}, \\*i) \\mid \\*i \\in \\hiset{\\*l^{(j')}}\\}$, $j' \\le j$\n(\\cref{eq:propInvariantResidualInterpolation2}).\nThe proof of \\cref{prop:invariantResidualInterpolation}\nshows this by exploiting the auxiliary statement of\n\\cref{eq:propInvariantResidualInterpolation1}\nand the tensor product structure of the hierarchical basis.\n\nAn example for the application of \\cref{alg:residualInterpolation}\non a two-dimensional sparse grid can be seen in\n\\cref{fig:residualInterpolation}.\nNote that\n$\\surplus[(j)]{\\*l,\\*i} \\not= 0$ can only be true if $\\*l \\le \\*l^{(j)}$.\nTherefore, if $(\\*l, \\*i)$ is not contained in one of the\ngrids that are processed in one of the remaining iterations $j+1, \\dotsc, m$,\nthen $\\linout[(j)]{\\*l,\\*i}$ is already equal to the correct surplus\n$\\surplus{\\*l,\\*i}$,\nwhere $\\linout[(j)]{\\*l,\\*i} \\ceq \\sum_{j'=1}^j \\surplus[(j')]{\\*l,\\*i}$\ndenotes the intermediate result obtained after $j$ iterations.\n\n\\begin{SCfigure}\n  \\includegraphics{residualInterpolation_1}%\n  \\caption[%\n    Hierarchization with residual interpolation%\n  ]{%\n    Hierarchization of function value data on the\n    two-dimensional regular sparse grid\n    $\\regsgspace{n}{d}$ of level $n = 3$ \\emph{(top left)}\n    using the method of residual interpolation.\n    In this figure, we use\n    $\\linout[(j)]{\\*l,\\*i} \\ceq \\sum_{j'=1}^j \\surplus[(j')]{\\*l,\\*i}$\n    as an abbreviation.\n    The order of the nodal spaces (here: bottom left to top right)\n    does not matter.\n    The data $\\linout[(j)]{\\*l,\\*i}$\n    corresponding to \\textcolor{C0}{blue grid points}\n    will not be modified in the remaining iterations\n    and, therefore, already equals the correct surpluses $\\surplus{\\*l,\\*i}$.\n    The data corresponding to \\textcolor{C1}{red grid points}\n    will be modified as the grid points appear in one of the remaining\n    nodal grids.%\n  }%\n  
\\label{fig:residualInterpolation}%\n\\end{SCfigure}\n", "meta": {"hexsha": "379e507742987a2b38afa8956a9aab3a6165465b", "size": 29766, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/document/43dimAdaptive.tex", "max_stars_repo_name": "valentjn/thesis-arxiv", "max_stars_repo_head_hexsha": "ae30179e67cd6a7813385e140b609546fd65b897", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-10-12T09:28:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T21:07:17.000Z", "max_issues_repo_path": "tex/document/43dimAdaptive.tex", "max_issues_repo_name": "valentjn/thesis-arxiv", "max_issues_repo_head_hexsha": "ae30179e67cd6a7813385e140b609546fd65b897", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/document/43dimAdaptive.tex", "max_forks_repo_name": "valentjn/thesis-arxiv", "max_forks_repo_head_hexsha": "ae30179e67cd6a7813385e140b609546fd65b897", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.3582474227, "max_line_length": 80, "alphanum_fraction": 0.6457367466, "num_tokens": 10204, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.798186784940666, "lm_q1q2_score": 0.5684943075179688}}
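To make the combination coefficients tangible, the following Python sketch (not from the thesis; it assumes levels starting at $\\*0$ and the componentwise containment criterion $\\fgset{\\*l'} \\ni \\gp{\\*l,\\*i} \\Leftrightarrow \\*l \\le \\*l'$ used above) enumerates the coefficients $(-1)^q \\binom{d-1}{q}$ from \\thmref{thm:combiTechnique} and numerically checks the inclusion-exclusion identity of \\cref{prop:combiTechniqueOne}:

\\begin{verbatim}
from itertools import product
from math import comb

def combi_coefficients(n, d):
    # Levels l' with |l'|_1 = n - q and their combination coefficients.
    coeffs = {}
    for q in range(d):  # q = 0, ..., d - 1
        c = (-1) ** q * comb(d - 1, q)
        for lp in product(range(n - q + 1), repeat=d):
            if sum(lp) == n - q:
                coeffs[lp] = coeffs.get(lp, 0) + c
    return coeffs

def weighted_count(n, d, l):
    # Weighted number of combination grids whose full grid contains a
    # point of level l; by inclusion-exclusion this should equal 1.
    return sum(c for lp, c in combi_coefficients(n, d).items()
               if all(a >= b for a, b in zip(lp, l)))

# Verify the identity for all point levels of the regular sparse grid
# with n = 5 and d = 3:
assert all(weighted_count(5, 3, l) == 1
           for l in product(range(6), repeat=3) if sum(l) <= 5)
\\end{verbatim}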
{"text": "\\section{Analysis}\n\n\\subsection{Algorithm selection}\n\nTo select algorithm I use the simulated dataset. This dataset contains the latent response variable (the Sharpe ratio of each stock). We evaluate algorithm by the lowest \\textit{mean squared error (mse)} of Sharpe ratios.\n\n\\begin{equation}\\label{eq:meansquarederror}\n    mse = \\sum_{t=1}^{T} \\sum_{i=1}^{k}\\lp \\hat{sr}_{i}^{(t)} - sr_{i}^{(t)} \\rp^2\n\\end{equation}\n\nThe algorithm with the lowest $mse$ is the algorithm selected to be tested on real data. Before testing each algorithm we need to tune, and in the case of the $LSTM$ train the algorithm. Training the algorithm consists of tuning the parameters that are endogenous to the algorithm, and tuning consists of finding the correct values for the hyper parameters.\n\nThe LSTM is trained by splitting the dataset into two separate parts (a training set and validation set). The LSTM is trained for 4 epochs, which is the number of times the LSTM goes through the entire training data.\n\nThe Naive Rolling Sharpe is tuned on 5000 observations, where a grid search over $h= \\{50, 100, 150, 200, 250, 300\\}$ is used. The $h$ which corresponds to lowest $mse$ is used for the final evaluation between the algorithms. We find that $h=200$ is the best hyper parameter value for the Naive Rolling Sharpe.\n\nThe Rolling Sharpe needs the three hyper parameters: $ h_{\\text{long term}}$, $h_{\\text{short term}}, \\tau$. We create a grid of $3^3=27$ different values as shown in equation \\ref{eq:rs_grid}. The algorithm is tuned over this grid, and we find the best hyper parameters to be $h_{\\text{short term}} = 30, h_{\\text{long term}} = 150, \\tau = 30 $.\n\n\\begin{equation}\\label{eq:rs_grid}\n    grid = \\underset{\\tau}{\\{0.3, 0.5, 0.8\\}} \\times \\underset{h_{\\text{long term}}}{\\{50, 100, 150\\}} \\times \\underset{h_{\\text{short term}}}{\\{15, 30, 45\\}}\n\\end{equation}\n\nComparing the three trained and tuned algorithms on the same $5000$ observations yields: $MSE_{\\text{LSTM}}= 0.0156$, $MSE_{\\text{RS}_{naive}}= 0.0181$, $MSE_{\\text{RS}}= 0.0131$. We conclude that the Rolling Sharpe algorithm performs the best. In the appendix figure \\ref{fig:rollingsharptest} shows the underlying true Sharpe ratio for a given stock and the corresponding prediction made by the rolling Sharpe algorithm.\n\n\n\\subsection{Performance on real data}\n\nBefore applying the Rolling Sharpe algorithm on the real data, a couple of benchmark is established. The \\textit{perfect stock pick} benchmark, is made by assuming that we as trader had perfect foresight over the next period, and was able to pick the stock in each period which yields the highest return. The second benchmark is the \\textit{Apple} stock (with ticker \\textbf{AAPL}). The Apple stock is chosen, since this is the stock with the highest individual Sharpe ratio of all the stocks throughout the period. The last benchmarks are two different tangency portfolios. One based on the first 1000 observations, and a tangency portfolio based on the entire period.\n\nEquation \\ref{eq:tangencyport} displays the formula for the tangency portfolio, where $\\w$ is the weights of the portfolio, and: $\\mu_{adj} = \\mu - \\bar{r}\\cdot \\mathbf{1}$. Which is the risk adjusted return. 
It should be noted that $\\w$ is a vector of length $k$ and $\\sum_{w \\in \\w} w = 1$.\n\n\\begin{equation}\\label{eq:tangencyport}\n    \\w = \\frac{\\Omega^{-1} \\cdot \\mu_{adj}}{\\mathbf{1}^{T} \\cdot \\Omega^{-1} \\cdot \\mu_{adj}}\n\\end{equation}\n\nLooking at table \\ref{tab:performance}, we find that the Rolling Sharpe algorithm performs very well. Compared to \\textbf{Apple}, its Sharpe ratio is 4 times higher, its expected return is over twice as high, and its standard deviation is somewhat lower. Compared to the tangency portfolio computed over the entire period, the Sharpe ratio of the Rolling Sharpe algorithm is twice as high. The standard deviation of the tangency portfolio is lower than that of the Rolling Sharpe algorithm, but the expected return of the Rolling Sharpe algorithm is almost 4 times higher. Lastly, we compare to the tangency portfolio based on only the first 1000 observations, which more realistically represents how an investor would generate portfolio weights, since the investor cannot use the future innovations of the data generating process to construct portfolio weights. Here again we find that the Rolling Sharpe algorithm has a 4 times higher Sharpe ratio.\n\n\\begin{table}[ht]\n\\centering\n\\caption{Summary statistics of performance of different portfolios}\n\\input{tables/analysis_portfolio_performance.tex}\n\\label{tab:performance}\n\\end{table}\n\nFigure \\ref{fig:loginvest} displays the (log-transformed) portfolio value one would have obtained by investing according to the different strategies, using the real data. We see again that the Rolling Sharpe algorithm is by far the best way to invest throughout the period. Interestingly, the poorest performing portfolio is the tangency portfolio calculated on the first 1000 observations.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.45]{figures/log_investment_experiment.png}\n\\caption{Counter factual portfolio performance (in logs)}\n\\label{fig:loginvest}\n\\end{figure}\n\nFigure \\ref{fig:logmontecarlo} uses Monte Carlo simulation to compare the different strategies. By assuming that returns are normally distributed, we simulate portfolios following each of the four strategies: \\textit{Rolling Sharpe, Apple Portfolio, Tangency portfolio (1000 obs), Tangency Portfolio (full)}. Since each portfolio strategy has both a mean and a standard deviation, we can use these distribution moments to randomly draw returns for a given strategy. For each strategy we simulate 10000 portfolios. We run each portfolio for 7013 time steps.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.45]{figures/boxplot_monte_carlo.png}\n\\caption{Monte Carlo simulation - performance (in logs)}\n\\label{fig:logmontecarlo}\n\\end{figure}\n\nHere we again find that the Rolling Sharpe algorithm performs best. Comparing means, the Apple portfolio and the full-period tangency portfolio perform approximately equally; however, the variance is considerably lower for the tangency portfolio. 
\n", "meta": {"hexsha": "e33485760f376ce313393182d5074ce1d4df89be", "size": 6200, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/analysis.tex", "max_stars_repo_name": "JakartaLaw/ACFS", "max_stars_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-04T01:20:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-04T01:20:46.000Z", "max_issues_repo_path": "chapters/analysis.tex", "max_issues_repo_name": "JakartaLaw/ACFS", "max_issues_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:21:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T20:09:54.000Z", "max_forks_repo_path": "chapters/analysis.tex", "max_forks_repo_name": "JakartaLaw/ACFS", "max_forks_repo_head_hexsha": "dd7e6107ae22e987923dd5b81a8605d88650fce9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-07T07:34:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-07T07:34:46.000Z", "avg_line_length": 96.875, "max_line_length": 971, "alphanum_fraction": 0.7761290323, "num_tokens": 1549, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.7981867849406659, "lm_q1q2_score": 0.5684943075179687}}
{"text": "\\section{Gram-Schmidt and Orthogonal Complement}\r\nHere we only interested in inner product spaces over $F=\\mathbb R$ or $\\mathbb C$.\r\n\\begin{lemma}[Cauchy-Schwartz Inequality]\r\n    $|\\langle u,v\\rangle|\\le\\|u\\|\\|v\\|$.\r\n\\end{lemma}\r\nIn particular, equality hold iff $u,v$ are linearly dependent.\r\n\\begin{proof}\r\n    For $t\\in F$, expanding $\\langle tu-v,tu-v\\rangle \\ge 0$ gives\r\n    $$0\\le|t|^2\\|u\\|^2-2\\operatorname{Re}(t\\langle u,v\\rangle)+\\|v\\|^2$$\r\n    Picking $t=\\overline{\\langle u,v\\rangle}/\\|u\\|^2$ ends the proof.\r\n\\end{proof}\r\n\\begin{corollary}[Triangle Inequality]\r\n    $\\|u+v\\|\\le \\|u\\|+\\|v\\|$.\r\n\\end{corollary}\r\nConsequently $\\|\\cdot\\|$ is a indeed a norm.\r\n\\begin{proof}\r\n    Square both sides and use Cauchy-Schwartz.\r\n\\end{proof}\r\n\\begin{definition}\r\n    Fix an inner product $\\langle\\cdot,\\cdot\\rangle$.\r\n    A set $\\{e_1,\\ldots,e_k\\}$ of vectors in $V$ is orthogonal if $\\langle e_i,e_j\\rangle=0$ for $i\\neq j$ and orthonormal if in addition they all have norm $1$, that is $\\langle e_i,e_j\\rangle=\\delta_{ij}$.\r\n\\end{definition}\r\nNote that both notion depends on our choice of inner product.\r\n\\begin{lemma}\r\n    A set of orthogonal vectors $e_1,\\ldots,e_k$ has to be linearly independent.\r\n    In fact, if $v=\\sum_i\\lambda_ie_i$ then $\\lambda_i=\\langle v,e_i\\rangle/\\|e_i\\|$.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Immediate from bilinearity.\r\n\\end{proof}\r\n\\begin{lemma}[Parseval's Identity]\r\n    If $V$ is a finite dimensional inner product space and $e_1,\\ldots,e_n$ is an orthonormal basis, then\r\n    $$\\langle u,v\\rangle=\\sum_{i=1}^n\\langle u,e_i\\rangle\\overline{\\langle v,e_i\\rangle}$$\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Obvious from the preceding lemma.\r\n\\end{proof}\r\nIn particular, in an orthogonal basis, $\\|v\\|^2=\\sum_i|\\langle v,e_i\\rangle|^2$.\r\nDoes an orthogonal basis always exist?\r\n\\begin{theorem}[Gram-Schmidt Orthogonalisation]\r\n    If we have an inner product space $V$ and a sequence of linearly independent vectors $(v_i)_{i\\in I}\\in V$ where $I=\\{1,2,\\ldots\\}$ (which may or may not terminate), then there exists a sequence $(e_i)_{i\\in I}$ of orthonormal vectors such that $\\langle v_1,\\ldots,v_k\\rangle=\\langle e_1,\\ldots,e_k\\rangle$ for any $k\\in I$.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    We shall define $(e_i)$ inductively on $k$.\r\n    For $k=1$, just take $e_1=v_1/\\|v_1\\|$.\r\n    Say we have found $e_1,\\ldots,e_k$, then define\r\n    $$e_{k+1}'=v_{k+1}-\\sum_{i=1}^k\\langle v_{k+1},e_i\\rangle e_i,e_{k+1}=\\frac{1}{|e_{k+1}'|}e_{k+1}'$$\r\n    This is well-defined as $(v_i)$ is linearly independent (so $e_{k+1}'\\neq 0$) and it is easy to verify that $\\langle v_1,\\ldots,v_{k+1}\\rangle=\\langle e_1,\\ldots,e_{k+1}\\rangle$.\r\n    This completes the proof.\r\n\\end{proof}\r\nSo not only does there exists such a set of orthonormal vectors, we also get an algorithm to compute it.\r\n\\begin{corollary}\r\n    Let $V$ be a finite dimensioanl inner product space, then any orthonormal set of vectors can be extend to an orthonormal basis of $V$.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    Extend it to a basis, then apply the Gram-Schmidt algorithm (which fixes the original set).\r\n\\end{proof}\r\n\\begin{note}\r\n    A matrix $A\\in M_{m,n}(F)$ has orthogonal columns if $A^\\top\\bar{A}=I$.\r\n\\end{note}\r\n\\begin{definition}\r\n    $A\\in M_n(\\mathbb R)$ is orthogonal if $A^\\top A=I$.\r\n  
\\begin{proposition}\r\n    Any nonsingular $A\\in M_n(\\mathbb R)$ (resp. $M_n(\\mathbb C)$) can be written as $A=RT$ where $T$ is upper-triangular and $R$ is orthogonal (resp. unitary).\r\n\\end{proposition}\r\n\\begin{proof}\r\n    Do Gram-Schmidt on the columns of $A$.\r\n\\end{proof}\r\n\\begin{definition}\r\n    Let $V$ be an inner product space and $V_1,V_2\\le V$.\r\n    We say $V$ is the orthogonal sum of $V_1,V_2$ (written as $V=V_1\\oplus^\\perp V_2$) if $V=V_1\\oplus V_2$ and $\\forall v_1\\in V_1,v_2\\in V_2$, we have $\\langle v_1,v_2\\rangle=0$.\r\n\\end{definition}\r\n\\begin{definition}\r\n    Let $V$ be an inner product space and $W\\le V$.\r\n    We define $W^\\perp=\\{v\\in V:\\forall w\\in W,\\langle v,w\\rangle=0\\}$.\r\n\\end{definition}\r\n\\begin{lemma}\r\n    $W\\oplus^\\perp W^\\perp=V$ if $V$ is finite dimensional.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Clearly $W^\\perp\\le V$ and by definition the sum $W+W^\\perp$ is direct and orthogonal.\r\n    So it suffices to show that $V=W+W^\\perp$, which follows since we can extend an orthonormal basis of $W$ to an orthonormal basis of $V$, and the added basis vectors all lie in $W^\\perp$.\r\n\\end{proof}", "meta": {"hexsha": "8bd0ee3c135761dce05857ae8c022b6d85a5d00b", "size": 4471, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "21/gram.tex", "max_stars_repo_name": "david-bai-notes/IB-Linear-Algebra", "max_stars_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "21/gram.tex", "max_issues_repo_name": "david-bai-notes/IB-Linear-Algebra", "max_issues_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "21/gram.tex", "max_forks_repo_name": "david-bai-notes/IB-Linear-Algebra", "max_forks_repo_head_hexsha": "5a499f7ed33ef0110facb27323e13f42883aa0c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6, "max_line_length": 329, "alphanum_fraction": 0.6741221203, "num_tokens": 1540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6442251064863697, "lm_q2_score": 0.8824278587245935, "lm_q1q2_score": 0.5684821812533905}}
{"text": "\\section{Project Overview}\n\nOne of the focus areas for DIANA~\\cite{DIANA-proposal-2014} is to ``establish infrastructure for a higher-level of collaborative analysis, building on the successful patterns used for the Higgs boson discovery''.\nA large component of this focus is statistical software.\n\\code{RooFit}~\\cite{Verkerke:2003ir} is one of the primary tools used now, but it is facing scalability challenges.\nTo address these issues, this fellowship project investigated the ability of software libraries for numerical computations using data flow graphs and automatic differentiation (e.g., TensorFlow~\\cite{tensorflow2015-whitepaper}, PyTorch~\\cite{paszke2017automatic}, and MXNet~\\cite{DBLP:journals/corr/ChenLLLWWXXZZ15}) to improve the performance of statistical fits through parallelism and GPU assisted speed up.\\\\\n\nUnder the mentorship of Gilles Louppe and Vince Croft, Matthew became familiar with and investigated the behavior and benefits of different computational graph frameworks.\nSpecifically, declarative frameworks (i.e., TensorFlow) and imperative frameworks (i.e., PyTorch and MXNet).\nDeclarative frameworks require the computational graph to be fully declared in advance of computation, allowing for the graph to be compiled in a sublanguage virtual machine and then run in separate sessions.\nThis offers the ability to efficiently reuse both memory and graphs.\nHowever, the declarative nature requires that for implementation of a new idea into code a graph must always be implemented first, potentially slowing the exploratory analysis stage.\nAlternatively, imperative frameworks eagerly run computational graphs within the defining language in which the graph is written (i.e., the programming language itself is the execution engine).\nImperative frameworks allow more easily for interactive and exploratory computing, though at the loss of reusable graphs.~\\cite{Chintala:2302087}\\\\\n\nAfter investigation, it was decided that the project scope should not be limited prematurely to a single framework, given that both declarative and imperative frameworks have clear use cases in high energy physics --- where the physicists writing the code tend to simultaneously embody the roles of software researcher, software engineer, and end user.\nInstead, comparisons of the performance of TensorFlow, PyTorch, and MXNet would be made in different scenarios.\\\\\n\nPartnering with Lukas Heinrich, who then assumed responsibility of further mentoring and work with Matthew, pyhf~\\cite{lukas_heinrich_2018_1172961} was developed.\npyhf is a Python based implementation of the \\texttt{HistFactory}~\\cite{Cranmer:2012sba} specification that allows different computational graph frameworks to be used as computational backends.\nThe pyhf project easily allows for testing of the performance of the frameworks in different scenarios, given its easily understandable Pythonic API and ability to switch between computational graph backends with a single function call.\nIn addition to supporting TensorFlow, PyTorch, and MXNet as backends, a NumPy based backend was also implemented.\nAuto differentiable best fit optimizers were also implemented for the respective backends.\nThe optimizers currently implement Newton's method, requiring the computationally expensive (in high dimensional space) inverse of the Hessian matrix.\nHowever, the use of the graph frameworks greatly mitigates this cost as the underlying libraries have been written for high performance in exactly these situations.\n", 
"meta": {"hexsha": "254c50bd6a9042ac69c5ecfda0c4c889ddff9e46", "size": 3496, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "summary_report/src/overview.tex", "max_stars_repo_name": "matthewfeickert/DIANA-Proposal-Feickert", "max_stars_repo_head_hexsha": "dac5181e7e87e747fbdb3c5a6d0201f723fa4670", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "summary_report/src/overview.tex", "max_issues_repo_name": "matthewfeickert/DIANA-Proposal-Feickert", "max_issues_repo_head_hexsha": "dac5181e7e87e747fbdb3c5a6d0201f723fa4670", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-04-10T20:42:05.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-11T14:33:40.000Z", "max_forks_repo_path": "summary_report/src/overview.tex", "max_forks_repo_name": "matthewfeickert/DIANA-Proposal-Feickert", "max_forks_repo_head_hexsha": "dac5181e7e87e747fbdb3c5a6d0201f723fa4670", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 134.4615384615, "max_line_length": 412, "alphanum_fraction": 0.8280892449, "num_tokens": 702, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.6992544335934765, "lm_q1q2_score": 0.5684010759211608}}
{"text": "\\subsection{Physically Meaningful Model Parameterization}\nTo accelerate the process of finding a near-optimal set of parameters, it is important to limit the search space of the parameterization algorithm. This is especially important when the number of parameters is large. We have found that it is beneficial for the parameters to have physical meaning in order to specify realistic bounds.\n\nThe elastic behaviour is parameterized by Young's modulus, $E$, and Poisson's ratio, $\\nu$. Bounds on these quantities are well known. The yield function and flow potential function are parameterized in terms of the friction angle, dilation angle, and the stress ratio $K$. Again, bounds on these quantities is relatively well known.\n\nThe hardening function (\\ref{eqn:param2-1}) is given in terms of two empirical coefficients $\\alpha$ and $\\beta$ and the initial compressive yield stress $\\sigma_c^{iy}$.  While it is possible to set bounds on $\\sigma_c^{iy}$, it is less straightforward to set bounds for $\\alpha$ and $\\beta$ as they do not have obvious physical meaning. The hardening function for the Barcelona model is shown in Figure \\ref{fig:barcelona}. The coefficients $\\alpha$ and $\\beta$ can be rewritten in terms of the peak compressive yield strength, $\\sigma_{c}^{p}$, the plastic strain at the peak compressive yield strength, $\\epsilon_c^{p}$, and the initial compressive yield stress $\\sigma_c^{iy}$:\n", "meta": {"hexsha": "ed4ee331fbc2d19d6cecf8b0c94d6dd5597595a4", "size": 1412, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "subsection_parameterization.tex", "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "subsection_parameterization.tex", "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "subsection_parameterization.tex", "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "avg_line_length": 201.7142857143, "max_line_length": 682, "alphanum_fraction": 0.7839943343, "num_tokens": 307, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8128673178375735, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5684010708266296}}
{"text": "\\documentclass[t,usenames,dvipsnames]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, tcolorbox, bm, tikz, pgfplots}\n\\pgfplotsset{compat = 1.16}\n\n\\everymath{\\displaystyle}\n\n\\title{Simplifying Radical Expressions}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Objectives}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}\n    \\maketitle\n\\end{frame}\n\n\\section{Write radicals as rational exponents and vice versa.}\n\n\\begin{frame}{Radicand}\n\\begin{tcolorbox}[colframe=green!20!black, colback = green!30!white,title=\\textbf{Radicand}]\nFor $\\sqrt[n]{a}$, the \\textbf{radicand} is $\\bm{a}$.\n\\end{tcolorbox}\n\\end{frame}\n\n\\begin{frame}{Radicals}\nWhat is $\\left(\\sqrt[3]{27}\\right)^2$? \\newline\\\\\n\nWell, if we evaluate $\\sqrt[3]{27}$ first, we get 3. Then when we square that, we get the final answer of 9. \n\\end{frame}\n\n\\begin{frame}{Radicals} \nTo put things visually, suppose the block below represents 27.  \n\n\\begin{center}\n    \\begin{tikzpicture}[scale=0.5]\n    \\draw (0,0) rectangle (15,1);\n    \\end{tikzpicture}\n\\end{center}\n\nSince we are taking the cube root, we can divide the large block up into 3 equal blocks where we multiply by 3 to get each new number. (\\emph{Note}: We can divide it up into 2 equal blocks for square root, 3 for cube root, 4 for fourth root, etc.):\n\\begin{center}\n    \\begin{tikzpicture}[scale=0.5]\n    \\draw (0,0) rectangle (15,1);\n    \\node at (0,0) [anchor=north] {1};\n    \\node at (15,0) [anchor=north] {27};\n    \\draw (5,0) -- (5,1);\n    \\node at (5,0) [anchor=north] {3};\n    \\draw (10,0) -- (10,1);\n    \\node at (10,0) [anchor=north] {9};\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Radicals}\nNow let's look what happens as we shade in the block from the left, up to our answer of 9:\n\\begin{center}\n    \\begin{tikzpicture}[scale=0.5]\n    \\draw (0,0) [fill=yellow, color=yellow] rectangle (5,1);\n    \\draw (5,0) [fill=yellow, color=yellow] rectangle (10,1);\n    \\draw (0,0) rectangle (15,1);\n    \\node at (0,0) [anchor=north] {1};\n    \\node at (15,0) [anchor=north] {27};\n    \\draw (5,0) -- (5,1);\n    \\node at (5,0) [anchor=north] {3};\n    \\draw (10,0) -- (10,1);\n    \\node at (10,0) [anchor=north] {9};\n    \\end{tikzpicture}\n\\end{center}\n\nNotice that out of the three equal areas, two of them are shaded. 
\\newline\\\\\n\nThis is a visual approach to the idea that $\\left(\\sqrt[3]{27}\\right)^2 = 27^{2/3}$   \\newline\\\\\n\nIn general,\n\\[\n\\sqrt[\\text{root}]{x^{\\text{power}}} = x^{\\text{power/root}}\n\\]\n\\end{frame}\n\n\\begin{frame}{Example 1}\nWrite each of the following using rational exponents.\t\\newline\\\\\n(a) \\quad $\\sqrt{6}$\t\t\\pause\t\\vspace{6pt}\n\\[ 6^{1/2} \\]\t\\pause\t\\vspace{6pt}\n(b) \\quad $\\sqrt[3]{8}$\t\\pause\t\\vspace{6pt}\n\\[ 8^{1/3} \\] \\pause\t\\vspace{6pt}\n(c) \\quad $\\sqrt[4]{x^3}$\t\\pause\t\\vspace{6pt}\n\\[ x^{3/4} \\]\n\\end{frame}\n\n\\begin{frame}{Example 2}\nWrite each of the following in radical form.\t\\newline\\\\\n(a)\t\\quad\t$5^{1/2}$\t\\pause \\vspace{6pt}\n\\[ \\sqrt{5} \\]\t\\pause \\vspace{6pt}\n(b)\t\\quad\t$(-9)^{5/3}$\t\\pause \\vspace{6pt}\n\\[\t\\sqrt[3]{(-9)^5} \\] \\pause \\vspace{6pt}\n(c)\t\\quad\t$x^{-1/3}$\t\\pause \\vspace{6pt}\n\\[ \\sqrt[3]{x^{-1}} \\text{\\quad or \\quad} \\sqrt[3]{\\frac{1}{x}} \\]\t\n\\end{frame}\n\n\\section{Simplify square root and higher root expressions.}\n\n\\begin{frame}{Even Roots and Exponents}\nWhen dealing with \\textbf{even} roots and exponents, keep in mind that the root and exponent don't ``cancel each other out.\"\n\\newline\\\\\n\n\nFor instance, $\\sqrt{5^2}=\\sqrt{25}=5$, but $\\sqrt{(-5)^2}=\\sqrt{25}=5$.\n\\end{frame}\n\n\\begin{frame}{Even Roots and Exponents}\nThe graphs of $y = \\sqrt{x^2}$ and $y=|x|$ are shown. Notice they are identical.\n\\newline\\\\\n\n\\begin{tabular}{cc}\n    {\\color{blue}$y = \\sqrt{x^2}$}    &   {\\color{red}$y = |x|$}   \\\\[11pt]\n    \\begin{tikzpicture}[scale=0.65]\n    \\begin{axis}\n    [\n        xmin = -5,\n        xmax = 5,\n        ymin = -5,\n        ymax = 5,\n        axis lines = middle,\n        grid,\n        minor tick num = 1\n    ]\n    \\addplot [<->, color = blue, line width = 1.5, domain=-4.5:4.5, samples = 300] {abs(x)};\n    \\end{axis}\n    \\end{tikzpicture}\n    &\n    \\begin{tikzpicture}[scale=0.65]\n    \\begin{axis}\n    [\n        xmin = -5,\n        xmax = 5,\n        ymin = -5,\n        ymax = 5,\n        axis lines = middle,\n        grid,\n        minor tick num = 1\n    ]\n    \\addplot [<->, color = red, line width = 1.5, domain=-4.5:4.5, samples = 300] {abs(x)};\n    \\end{axis}\n    \\end{tikzpicture}\n\\end{tabular}\n\\vspace{8pt}\n\nThus, for any real number $x$, $\\sqrt{x^2} = |x|$.\n\\end{frame}\n\n\\begin{frame}{Odd Roots and Exponents}\nOdd roots, such as $\\sqrt[3]{\\quad}$ do not follow the same rule as even roots.\n\\newline\\\\\n\nSo for any real number $x$, $\\sqrt[3]{x^3} = x$.\n\\newline\\\\\n\nAnd, in general, for any real number $x$,\t\\newline\\\\\n\\begin{enumerate}\n\t\\item If $n$ is even, $\\sqrt[n]{x^n} = |x|$.\t\\newline\\\\\n\t\\item If $n$ is odd, $\\sqrt[n]{x^n} = x$.\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}{Example 3}\nSimplify each of the following. Exact answers only. 
Use absolute value bars when necessary.\t\t\\newline\\\\\n(a) \\quad $\\sqrt{72x^2}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{72x^2} &= \\sqrt{72} \\cdot \\sqrt{x^2}} \\\\[8pt]\n\\onslide<3->{&= 6\\sqrt{2} \\cdot |x|} \\\\[8pt]\n\\onslide<4->{&= 6|x| \\sqrt{2}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(b) \\quad $\\sqrt{175x^3}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{175x^3} &= \\sqrt{175} \\cdot \\sqrt{x^3}} \\\\[8pt]\n\\onslide<3->{&= 5\\sqrt{7} \\cdot |x|\\sqrt{x}} \\\\[8pt]\n\\onslide<4->{&= 5|x|\\sqrt{7x}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(c) \\quad $\\sqrt{18x^4}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{18x^4} &= \\sqrt{18} \\cdot \\sqrt{x^4}} \\\\[8pt]\n\\onslide<3->{&= 3\\sqrt{2} \\cdot x^2} \\\\[8pt]\n\\onslide<4->{&= 3x^2\\cdot \\sqrt{2}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(d) \\quad $\\sqrt{65x^5}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{65x^5} &= \\sqrt{65} \\cdot \\sqrt{x^5}} \\\\[8pt]\n\\onslide<3->{&= \\sqrt{65} \\cdot x^2\\sqrt{x}} \\\\[8pt]\n\\onslide<4->{&= x^2\\cdot \\sqrt{65x}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(e) \\quad $\\sqrt[3]{27x^7}$\n\\begin{align*}\n\\onslide<2->{\\sqrt[3]{27x^7} &= \\sqrt[3]{27} \\cdot \\sqrt[3]{x^7}} \\\\[8pt]\n\\onslide<3->{&= 3 \\cdot x^2\\sqrt[3]{x}} \\\\[8pt]\n\\onslide<4->{&= 3x^2\\cdot \\sqrt[3]{x}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n(f) \\quad $\\sqrt[3]{128x^6}$\n\\begin{align*}\n\\onslide<2->{\\sqrt[3]{128x^6} &= \\sqrt[3]{128} \\cdot \\sqrt[3]{x^6}} \\\\[8pt]\n\\onslide<3->{&= 4\\cdot\\sqrt[3]{2} \\cdot x^2} \\\\[8pt]\n\\onslide<4->{&= 4x^2\\cdot \\sqrt[3]{2}}\n\\end{align*}\n\\end{frame}\n\n\\section{Perform operations with radical expressions.}\n\n\\begin{frame}{Performing Operations with Radical Expressions}\nOnce we have simplified the radicals, we can add and subtract radical expressions with the same radicands and roots. This is essentially combining like terms.\n\\end{frame}\n\n\\begin{frame}{Example 4}\nSimplify each of the following. Exact answers only.\t\\newline\\\\\n(a)\t\\quad\t$\\sqrt{98} + \\sqrt{8}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{98} + \\sqrt{8} &= 7\\sqrt{2} + 2\\sqrt{2}} \\\\[8pt]\n\\onslide<3->{&= 9\\sqrt{2}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(b)\t\\quad\t$\\sqrt{108x} - \\sqrt{300x}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{108x} - \\sqrt{300x} &= 6\\sqrt{3x} - 10\\sqrt{3x}} \\\\[8pt]\n\\onslide<3->{&= -4\\sqrt{3x}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n(c)\t\\quad\t$4\\sqrt{6} + 3\\sqrt{54} - 5\\sqrt{45}$\n\\begin{align*}\n\\onslide<2->{4\\sqrt{6} + 3\\sqrt{54} - 5\\sqrt{45} &= 4\\sqrt{6} + 3(3\\sqrt{6}) - 5(3\\sqrt{5})} \\\\[8pt]\n\\onslide<3->{&= 4\\sqrt{6} + 9\\sqrt{6} - 15\\sqrt{5}} \\\\[8pt]\n\\onslide<4->{&= 13\\sqrt{6} - 15\\sqrt{5}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Multiplying and Dividing Radical Expressions}\nWe can also multiply and divide radical expressions. It may be helpful to convert them to rational exponent form first, and then use your laws of exponents.\n\\end{frame}\n\n\\begin{frame}{Example 5}\nSimplify each of the following. 
Leave no radical expressions in a denominator.\t\\newline\\\\\n(a)\t\\quad $\\sqrt{x^5} \\cdot \\sqrt{x^2}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{x^5} \\cdot \\sqrt{x^2} &= x^{5/2} \\cdot x^{2/2}} \\\\[8pt]\n\\onslide<3->{&= x^{7/2}} \\\\[8pt]\n\\onslide<4->{&= \\sqrt{x^7}} \\\\[8pt]\n\\onslide<5->{&= |x^3|\\sqrt{x}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 5}\n(b)\t\\quad $\\sqrt{a^3} \\cdot \\sqrt[3]{a^2}$\n\\begin{align*}\n\\onslide<2->{\\sqrt{a^3} \\cdot \\sqrt[3]{a^2} &= a^{3/2} \\cdot a^{2/3}} \\\\[8pt]\n\\onslide<3->{&= a^{13/6}} \\\\[8pt]\n\\onslide<4->{&= \\sqrt[6]{a^{13}}} \\\\[8pt]\n\\onslide<5->{&= a^2 \\cdot \\sqrt[6]{a}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 5}\n(c)\t\\quad $\\sqrt[3]{y^6} \\cdot \\sqrt{y^3}$\n\\begin{align*}\n\\onslide<2->{\\sqrt[3]{y^6} \\cdot \\sqrt{y^3} &= y^{6/3} \\cdot y^{3/2}} \\\\[8pt]\n\\onslide<3->{&= y^{7/2}} \\\\[8pt]\n\\onslide<4->{&= \\sqrt{y^7}} \\\\[8pt]\n\\onslide<5->{&= |y^3| \\cdot \\sqrt{y}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 5}\n(d)\t\\quad $\\dfrac{\\sqrt{x^5}}{\\sqrt{x}}$\n\\begin{align*}\n\\onslide<2->{\\frac{\\sqrt{x^5}}{\\sqrt{x}} &= \\frac{x^{5/2}}{x^{1/2}}} \\\\[12pt]\n\\onslide<3->{&= x^{4/2}} \\\\[12pt]\n\\onslide<4->{&= x^2}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 5}\n(e)\t\\quad $\\dfrac{7}{\\sqrt{2}}$\n\\begin{align*}\n\\onslide<2->{\\frac{7}{\\sqrt{2}}&=\\frac{7}{2^{1/2}}} \\\\[12pt]\n\\onslide<3->{&= \\frac{7}{2^{1/2}} \\left(\\frac{2^{1/2}}{2^{1/2}} \\right)} \\\\[12pt]\n\\onslide<4->{&= \\frac{7\\cdot 2^{1/2}}{2}} \\\\[12pt]\n\\onslide<5->{&= \\frac{7\\sqrt{2}}{2}}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 5}\n(f) \\quad $\\dfrac{\\sqrt[3]{y^6}}{\\sqrt{y^3}}$\n\\begin{align*}\n\\onslide<2->{\\frac{\\sqrt[3]{y^6}}{\\sqrt{y^3}} &= \\frac{y^{6/3}}{y^{3/2}}}\t\\\\[12pt]\n\\onslide<3->{&= y^{1/2}} \\\\[12pt]\n\\onslide<4->{&= \\sqrt{y}}\n\\end{align*}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "9572ed8ca7b2030e1a471ffbaaba367b28a6a30c", "size": 9705, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Simplifying_Radical_Expressions(BEAMER).tex", "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Simplifying_Radical_Expressions(BEAMER).tex", "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Simplifying_Radical_Expressions(BEAMER).tex", "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "avg_line_length": 29.5884146341, "max_line_length": 248, "alphanum_fraction": 0.6051519835, "num_tokens": 3980, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8128673201042492, "lm_q1q2_score": 0.5684010673170813}}
{"text": "\n\n\n\\documentclass[10pt, handout]{beamer}\n\\setbeamertemplate{navigation symbols}{}\n\\usefonttheme{serif} \n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{cite}\n\\usepackage{color} \n\\usepackage{setspace}\n\\usepackage{hyperref}\n\n\\newcommand{\\xx}{{\\bf{x}}}\n\n\\begin{document}\n\\title{Machine Learning I Lecture VI:\\\\ Linear models for Classification}   \n\\author{Jakob H Macke\\\\ Max Planck Institute for Biological Cybernetics\\\\ Bernstein Center for Computational Neuroscience} \n\\date{XY.XY.2012} \n\n\\frame{\\titlepage} \n\n%\\frame{\\frametitle{Today: Back to basics of probability theory}} \n\n\n\\frame{\\frametitle{Plan for today}\\tableofcontents} \n\n\\section{Binary classification}\n\\frame{\\frametitle{Binary Classification: Assign each data point to one of two classes.} \n\n%\\multicolumn\n\\begin{columns}\n\\begin{column}{4cm}\n\\includegraphics[width=1.1\\textwidth]{LinClassification.pdf}\n\\end{column}\n\\begin{column}{7.5cm}\nExamples:\n\\begin{itemize}\n \\item Is there a face in this image?\n\\item Will this neuron spike in response to this stimulus?\n\\item Based on this brain-scan, does this patient have a given disease or not?\n\\item  Will this customer buy this product or not?\n\\item Is this person likely to be a democrat/republican? \n\\end{itemize}\n%\\item\n \\pause Notation: we have data $D=\\{(x_1, t_1),\\ldots, (x_N, t_N)\\}$, with $t_n=1$ if $x_n$ belongs to class $1$ and $t_n=-1$ if $x_n$ belongs to class $-1$.\n\\end{column}\n\\end{columns}\n}\n\n\n\n\n\n\\frame{\\frametitle{We focus on linear decision rules, also known as `linear discriminant functions'.} \n\n%\\multicolumn\n\n\\includegraphics[width=.5\\textwidth]{LinClassification.pdf}\n\\pause\n\\includegraphics[width=.5\\textwidth]{NonLinClassification.pdf}\n\n\\pause \n\nOf course, linear algorithms can be used together with \\alert{nonlinear feature spaces} or \\alert{nonlinear basis functions} in order to solve nonlinear classification problems!\n}\n\n\\frame{\\frametitle{Linear discriminants separate the space by a hyperplane, and the parameters define its normal vector.} \n\n%\\multicolumn\n\n\n%\\includegraphics[width=.4\\textwidth]{LinearSeparators.pdf}\n\\begin{itemize}\n\\item Decision function: $y(\\xx)=\\omega^\\top \\xx + \\omega_o$\n\\item \\pause Classification: \\begin{align}\n\\mbox{if~}y(\\xx)>0 & \\mbox{~say $\\xx$ belongs to class 1}\\\\\n\\mbox{if~}y(\\xx)<0 & \\mbox{~say $\\xx$ belongs to class -1}\\\\\n \\end{align}\n\\item The decision-surface has equation $y(\\xx)=0$, and is a hyperplane of dimensionality $D-1$.\n\\item \\pause $\\omega$ is the normal vector to the plane, and points into the positive class.\n\\item \\pause $\\omega_o$ determines the location of the decision-surface\n\\item \\pause $|y(\\xx)|$ is proproptional to the perpendicular distance to the decision-surface (with factor $1$ if $|| \\omega ||=1$).\n\\end{itemize}\n\n}\n\n\\frame{\\frametitle{Multiple algorithms and methods exist for finding a good $\\omega$.} \n\\begin{itemize}\n\\item Mis-classification rate $C(\\omega)= \\frac{1}{N} \\sum_n \\delta\\left[y(\\xx_n) =t_n\\right]$ (i.e. 
average number of errors) is difficult to optimize over $\\omega$, and might have multiple solutions.\n\\item \\pause Many algorithms can be derived by replacing $C$ by another cost-function which can be optimized.\n\\item \\pause Linear classification algorithms include least squares classification, Fisher's linear discriminant, logistic regression, support vector machines and Rosenblatt's perceptron.\n\\end{itemize}\n}\n\n\\section{Least Squares Classification}\n\\frame{\\frametitle{You already know one algorithm for linear classification: least squares classification.} \n\\begin{itemize}\n\\item We have to fit the function $y(\\xx)= \\omega^\\top \\xx+ \\omega_o $ to data.\n\\item \\pause Simply do a linear regression from $\\xx$ to $t$ by minimizing the sum-of-squared errors $\\sum_n (y(\\xx_n)-t_n)^2$.\n\\item \\pause $\\omega_{reg}=  \\left(\\sum_n x_n x_n^\\top  \\right)^{-1} \\sum_n x_n t_n$\n\\item \\pause Q: In what situations might this be a bad idea?\n\\end{itemize}\n\\pause\n\\includegraphics[width=.4\\textwidth]{Figure44a.pdf}\n\\pause\n\\includegraphics[width=.4\\textwidth]{Figure44b.pdf}\n\\\\\n\\tiny Bishop PRML Figure 4.4\n}\n\n\n\\section{Fisher's linear discriminant}\n\\frame[shrink=5]{\\frametitle{'Fisher's linear discriminant' is a classical and simple algorithm for linear classification} \n\\begin{columns}\n\\begin{column}{3.5cm}\n\n\\includegraphics[width=1.2\\textwidth]{Figure46a.pdf}\n\n\\includegraphics[width=1.2\\textwidth]{Figure46b.pdf}\n\n~\\\\\n\\tiny Bishop PRML Figure 4.6\n\\end{column}\n\\begin{column}{7.5cm}\n\\begin{itemize}\n\\item $\\mathbf{m_+}= \\frac{1}{N_+}\\sum_{n \\in C_+} x_n$ \\hspace{.5cm} $\\mathbf{m_{-}}= \\frac{1}{N_-}\\sum_{n \\in C_{-}} x_n$ \n\\item \\pause Maximize projection-distance of class means \n[projected mean/variance: on board]\n$\\omega_{simple} \\propto \\mathbf{m}_+-\\mathbf{m}_-$ \n\\item \\pause Maximizing distance between means ignores that the projected variances might also be big. \n\\item \\pause Fix: Maximize the ratio of between-class variance to within-class variance ('signal to noise'). Fisher criterion:\n\\begin{align}\nJ_\\omega = 2\\frac{(m_+-m_-)^2}{s_+^2+s_-^2}\n\\end{align}\n[Details and solution: on board; a sketch follows after the Gaussian aside]\n%\\item \n\\pause \n$\\omega_{lda}= \\Sigma_w^{-1} (\\mathbf{m}_+-\\mathbf{m}_-)$\n\\end{itemize}\n\\end{column}\n\\end{columns}\n\n}\n%http://users.informatik.uni-halle.de/~hinnebur/Lehre/BN_seminar_web/bn_05_ag.pdf\n\n\n\\frame{\\frametitle{Aside: The multivariate Gaussian} \n\\begin{itemize}\n\\item Probability density function of a $D$ dimensional Gaussian with mean $\\mu$ and covariance $\\Sigma$: \\begin{align}p(x| \\mu, \\Sigma)&= (2\\pi)^{-D/2}|\\Sigma|^{-1/2} \\exp \\left(-\\frac{1}{2} (x-\\mu)^\\top \\Sigma^{-1} (x-\\mu) \\right)\n \\end{align}\n\\item \\pause Maximum likelihood estimation of parameters: \\begin{align}\n\\hat\\mu= &\\frac{1}{N}\\sum_n x_n\\mbox{~~(empirical mean)}\\\\ \n\\hat \\Sigma= &\\frac{1}{N} \\sum_n x_n x_n^\\top- \\hat\\mu \\hat \\mu^\\top\\mbox{~~(empirical covariance)} \n \\end{align}\n\\end{itemize}\n\\begin{center}\n\\pause\n\\includegraphics[width=.25\\textwidth]{Figure28a.pdf}\n\\includegraphics[width=.25\\textwidth]{Figure28b.pdf}\n\\includegraphics[width=.25\\textwidth]{Figure28c.pdf}\n\\end{center}\n~\\\\\n\\tiny Bishop PRML Figure 2.8\n}\n\n
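% Sketch of the `on board' Fisher calculation; notation follows Bishop PRML 4.1.4.\n\\frame{\\frametitle{Sketch: the `on board' solution of the Fisher criterion} \n\\begin{itemize}\n\\item Write the criterion in terms of scatter matrices: with between-class scatter $S_B=(\\mathbf{m}_+-\\mathbf{m}_-)(\\mathbf{m}_+-\\mathbf{m}_-)^\\top$ and within-class scatter $S_W=\\sum_{n \\in C_+}(x_n-\\mathbf{m}_+)(x_n-\\mathbf{m}_+)^\\top+\\sum_{n \\in C_-}(x_n-\\mathbf{m}_-)(x_n-\\mathbf{m}_-)^\\top$,\n\\begin{align}\nJ(\\omega)= \\frac{\\omega^\\top S_B \\omega}{\\omega^\\top S_W \\omega}\n\\end{align}\n\\item \\pause Setting the gradient to zero gives $(\\omega^\\top S_B \\omega)\\, S_W \\omega = (\\omega^\\top S_W \\omega)\\, S_B \\omega$.\n\\item \\pause $S_B \\omega$ always points along $\\mathbf{m}_+-\\mathbf{m}_-$, and the scale of $\\omega$ is irrelevant, so $\\omega_{lda} \\propto S_W^{-1}(\\mathbf{m}_+-\\mathbf{m}_-)$.\n\\end{itemize}\n}\n\n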
\\frame[shrink=5]{\\frametitle{A (super brief) primer on covariance matrices. (more details/intuition in second half of course?)} \n%\\begin{columns}\n%\\begin{column}{4.5cm}\n\n%\\end{column}\n\\begin{center}\n\\includegraphics[width=.45\\textwidth]{Figure27.pdf}\n\\\\\n\\tiny Bishop PRML Figure 2.7\n%\\end{column}\n\\end{center}\n%\\begin{column}{8cm}\n\\begin{itemize}\n\\item \\pause Covariance matrices are symmetric.\n\\item \\pause Diagonal entries: variances along coordinate-axes\n\\item \\pause Eigenvectors: principal axes of ellipsoid\n\\item Eigenvalues:  variances along eigen-vectors\n\\item Eigenvector with maximal/minimal eigen-value: Direction of maximal/minimal variance \n\\item \\pause Covariance matrices are `positive semi-definite', i.e. all their eigenvalues are non-negative.\n\\item \\pause Most of this can be derived from $a^\\top \\mbox{Cov}(X) a= \\mbox{Var}(a^\\top X)$\n\\end{itemize}\n%\\end{column}\n%\\end{columns}\n\n\n}\n\n\n\\section{A generative model: Class-conditional Gaussians}\n\\frame{\\frametitle{A tale of two Gaussians: We can use a probabilistic model of the data for classification} \n\\begin{center}\n\\includegraphics[width=.35\\textwidth]{Figure410a.pdf}\n\\includegraphics[width=.35\\textwidth]{Figure410b.pdf}\n\\end{center}\n\n\\begin{itemize}\n\\item Suppose that each of the two classes is modelled by a Gaussian: $x | x \\in C_+ \\sim \\mathcal{N}\\left(\\mu_+, \\Sigma_+\\right)$, $x | x \\in C_- \\sim \\mathcal{N}\\left(\\mu_-, \\Sigma_-\\right)$, \n\\item ~[On board] Calculation of posterior class probabilities and decision criterion (sketched two slides ahead)\n\\item \\pause If we assume $\\Sigma_+=\\Sigma_-$, we get $\\omega_{gauss} \\propto \\Sigma_+^{-1} (\\mathbf{m}_+-\\mathbf{m}_-)$\n\\item Note: We take the $t_n$ as given and build a model of $x_n | t_n$; contrast with linear regression, where we took $x_n$ as given and modelled $t_n |x_n$.\n\\end{itemize}\n\\tiny Bishop PRML Figure 4.10\n}\n\n\\frame{\\frametitle{This approach directly generalizes to classification with unequal covariances and multi-class classification.}\n\\begin{center}\n\\includegraphics[width=.35\\textwidth]{Figure411a.pdf}\n\\includegraphics[width=.35\\textwidth]{Figure411b.pdf}\n\\end{center}\n\\begin{itemize}\n\\item Quadratic discriminant analysis: $\\Sigma_+ \\neq \\Sigma_-$, decision boundary is of form $y(\\xx) = \\xx^\\top A \\xx+\\omega^\\top \\xx +\\omega_o$\n\\item Multi-class: Assign each data-point to class with highest posterior probability (or calculate best assignment from cost-function). \n\\end{itemize}\n\\tiny Bishop PRML Figure 4.11\n}\n\n
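% Sketch of the `on board' posterior calculation; cf. Bishop PRML Section 4.2.\n\\frame{\\frametitle{Sketch: posterior class probabilities for two Gaussians} \n\\begin{itemize}\n\\item By Bayes' rule, with class priors $\\pi_{\\pm}$,\n\\begin{align}\nP(t=+1|x)= \\frac{p(x|t=+1)\\pi_+}{p(x|t=+1)\\pi_+ + p(x|t=-1)\\pi_-} = \\sigma(a(x)), \\quad a(x)=\\ln \\frac{p(x|t=+1)\\pi_+}{p(x|t=-1)\\pi_-}\n\\end{align}\nwhere $\\sigma(a)=1/(1+e^{-a})$ is the logistic sigmoid.\n\\item \\pause For Gaussians with shared covariance $\\Sigma$, the quadratic terms $x^\\top \\Sigma^{-1} x$ cancel in $a(x)$, leaving\n\\begin{align}\na(x)= \\omega^\\top x + \\omega_o, \\quad \\omega= \\Sigma^{-1}(\\mu_+-\\mu_-)\n\\end{align}\n\\item \\pause So the decision boundary $a(x)=0$ is linear, with exactly the $\\omega_{gauss}$ claimed on the previous slides.\n\\end{itemize}\n}\n\n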
\\frame{\\frametitle{A simple nonlinear classifier can be constructed from kernel density estimates of the probability densities.}\n\n\\begin{columns}\n\\begin{column}{4cm}\n\n\\includegraphics[width=\\textwidth]{NonLinClassification.pdf}\n\\end{column}\n\\begin{column}{7cm}\n\\begin{itemize}\n\\item Idea: Once we have an estimate of the class-conditional densities $P(x| t=\\pm 1)$, we can construct a rule from $d(x)=P(x| t=+ 1)-P(x| t=- 1)$.\n\\item \\pause Use \\alert{kernel density estimation} to estimate $P(x| t=\\pm 1)$, i.e. place a `Gaussian bump' on each data-point:\n\\begin{align}\nP(x| t=1) =\\frac{1}{Z}\\sum_{n \\in C_+} \\exp\\left(-\\frac{\\|x-x_n\\|^2}{\\sigma^2}\\right)\n\\end{align}\n\\end{itemize}\n\\end{column}\n\\end{columns}\n\\pause\nThis leads to a classifier of the form\n\\begin{align}\nd(x)= \\sum_{n} \\alpha  t_n \\exp\\left(-\\frac{\\|x-x_n\\|^2}{\\sigma^2}\\right)\n\\end{align}\n\\pause\n\\alert{Support vector machine with radial basis functions} has decision rule \\begin{align}\nd(x)=\\sum_{n}  \\alpha_n \\exp\\left(-\\frac{\\|x-x_n\\|^2}{\\sigma^2}\\right)\n\\end{align}\n}\n\n\n\n\n\\frame{\\frametitle{Summary: One for the price of three.} \n\\begin{itemize}\n\\item Today, you learned about three different algorithms for binary classification with linear decision rules. \n\\item \\pause One was based on a hack, the second one on a plausible (but ad-hoc) criterion, and the third one on a probabilistic model of the data.\n\\item \\pause All three algorithms are equivalent.\n\\item \\pause We showed that the Fisher discriminant and the probabilistic model based on two Gaussians have the same decision criterion. In fact, it can be shown that linear regression has the same weights (Bishop 4.1.5).\n%\\item \\pause The moral: Great motivations are great, but the actual algorithm matters, and it is important to check connections with other algorithms.\n\\item The third motivation had immediate extensions to nonlinear algorithms and multi-class classification, and yields posterior probabilities.\n\\item \\pause Next week, we will learn an algorithm which actually is different, and usually better than the ones discussed today.\n\\end{itemize}\n}\n\n\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "766064e81b2785945967bb15aaf0c23fc2c95e52", "size": 10596, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/lecture6_linearclassification/lecture6.tex", "max_stars_repo_name": "mackelab/machine-learning-I", "max_stars_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2015-07-31T15:08:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T17:07:23.000Z", "max_issues_repo_path": "slides/lecture6_linearclassification/lecture6.tex", "max_issues_repo_name": "cne-tum/msne_statsandprob_ss2018", "max_issues_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/lecture6_linearclassification/lecture6.tex", "max_forks_repo_name": "cne-tum/msne_statsandprob_ss2018", "max_forks_repo_head_hexsha": "fedd9ea0b9b257af5cd59036a3b49876aed5c77c", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2018-03-16T07:42:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T14:02:27.000Z", "avg_line_length": 38.8131868132, "max_line_length": 232, "alphanum_fraction": 0.7411287278, "num_tokens": 3167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8128673155708975, "lm_q1q2_score": 0.5684010641471151}}
{"text": "\\section{Operations on Arrays}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                                                                                    %\n%                                         C                                          %\n%                                                                                    %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\ifCPy\n\n\\cvCPyFunc{AbsDiff}\nCalculates absolute difference between two arrays.\n\n\\cvdefC{void cvAbsDiff(const CvArr* src1, const CvArr* src2, CvArr* dst);}\n\\cvdefPy{AbsDiff(src1,src2,dst)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\end{description}\n\nThe function calculates absolute difference between two arrays.\n\n\\[ \\texttt{dst}(i)_c = |\\texttt{src1}(I)_c - \\texttt{src2}(I)_c| \\]\n\nAll the arrays must have the same data type and the same size (or ROI size).\n\n\\cvCPyFunc{AbsDiffS}\nCalculates absolute difference between an array and a scalar.\n\n\\cvdefC{void cvAbsDiffS(const CvArr* src, CvArr* dst, CvScalar value);}\n\\cvdefPy{AbsDiffS(src,value,dst)-> None}\n\\ifC\n\\begin{lstlisting}\n#define cvAbs(src, dst) cvAbsDiffS(src, dst, cvScalarAll(0))\n\\end{lstlisting}\n\\fi\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array}\n\\cvarg{value}{The scalar}\n\\end{description}\n\nThe function calculates absolute difference between an array and a scalar.\n\n\\[ \\texttt{dst}(i)_c = |\\texttt{src}(I)_c - \\texttt{value}_c| \\]\n\nAll the arrays must have the same data type and the same size (or ROI size).\n\n\n\\cvCPyFunc{Add}\nComputes the per-element sum of two arrays.\n\n\\cvdefC{void cvAdd(const CvArr* src1, const CvArr* src2, CvArr* dst, const CvArr* mask=NULL);}\n\\cvdefPy{Add(src1,src2,dst,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}\n\\end{description}\n\nThe function adds one array to another:\n\n\\begin{lstlisting}\ndst(I)=src1(I)+src2(I) if mask(I)!=0\n\\end{lstlisting}\n\nAll the arrays must have the same type, except the mask, and the same size (or ROI size).\nFor types that have limited range this operation is saturating.\n\n\\cvCPyFunc{AddS}\nComputes the sum of an array and a scalar.\n\n\\cvdefC{void cvAddS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}\n\\cvdefPy{AddS(src,value,dst,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{value}{Added scalar}\n\\cvarg{dst}{The destination array}\n\\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}\n\\end{description}\n\nThe function adds a scalar \\texttt{value} to every element in the source array \\texttt{src1} and stores the result in \\texttt{dst}.\nFor types that have limited range this operation is saturating.\n\n\\begin{lstlisting}\ndst(I)=src(I)+value if mask(I)!=0\n\\end{lstlisting}\n\nAll the arrays must have the same type, except the mask, and the same size (or ROI size).\n\n\n\\cvCPyFunc{AddWeighted}\nComputes the weighted sum of two arrays.\n\n\\cvdefC{void  cvAddWeighted(const CvArr* src1, double alpha,\n                     const 
CvArr* src2, double beta,\n                     double gamma, CvArr* dst);}\n\\cvdefPy{AddWeighted(src1,alpha,src2,beta,gamma,dst)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{alpha}{Weight for the first array elements}\n\\cvarg{src2}{The second source array}\n\\cvarg{beta}{Weight for the second array elements}\n\\cvarg{dst}{The destination array}\n\\cvarg{gamma}{Scalar, added to each sum}\n\\end{description}\n\nThe function calculates the weighted sum of two arrays as follows:\n\n\\begin{lstlisting}\ndst(I)=src1(I)*alpha+src2(I)*beta+gamma\n\\end{lstlisting}\n\nAll the arrays must have the same type and the same size (or ROI size).\nFor types that have limited range this operation is saturating.\n\n\n\\cvCPyFunc{And}\nCalculates per-element bit-wise conjunction of two arrays.\n\n\\cvdefC{void cvAnd(const CvArr* src1, const CvArr* src2, CvArr* dst, const CvArr* mask=NULL);}\n\\cvdefPy{And(src1,src2,dst,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}\n\\end{description}\n\nThe function calculates per-element bit-wise logical conjunction of two arrays:\n\n\\begin{lstlisting}\ndst(I)=src1(I)&src2(I) if mask(I)!=0\n\\end{lstlisting}\n\nIn the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size.\n\n\\cvCPyFunc{AndS}\nCalculates per-element bit-wise conjunction of an array and a scalar.\n\n\\cvdefC{void cvAndS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}\n\\cvdefPy{AndS(src,value,dst,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{value}{Scalar to use in the operation}\n\\cvarg{dst}{The destination array}\n\\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}\n\\end{description}\n\nThe function calculates per-element bit-wise conjunction of an array and a scalar:\n\n\\begin{lstlisting}\ndst(I)=src(I)&value if mask(I)!=0\n\\end{lstlisting}\n\nPrior to the actual operation, the scalar is converted to the same type as that of the array(s). In the case of floating-point arrays their bit representations are used for the operation. 
All the arrays must have the same type, except the mask, and the same size.\n\n\\ifC\nThe following sample demonstrates how to calculate the absolute value of floating-point array elements by clearing the most-significant bit:\n\n\\begin{lstlisting}\nfloat a[] = { -1, 2, -3, 4, -5, 6, -7, 8, -9 };\nCvMat A = cvMat(3, 3, CV_32F, &a);\nint i, absMask = 0x7fffffff;\ncvAndS(&A, cvRealScalar(*(float*)&absMask), &A, 0);\nfor(i = 0; i < 9; i++ )\n    printf(\"%.1f \", a[i]);\n\\end{lstlisting}\n\nThe code should print:\n\n\\begin{lstlisting}\n1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0\n\\end{lstlisting}\n\\fi\n\n\\cvCPyFunc{Avg}\nCalculates average (mean) of array elements.\n\n\\cvdefC{CvScalar cvAvg(const CvArr* arr, const CvArr* mask=NULL);}\n\\cvdefPy{Avg(arr,mask=NULL)-> CvScalar}\n\n\\begin{description}\n\\cvarg{arr}{The array}\n\\cvarg{mask}{The optional operation mask}\n\\end{description}\n\nThe function calculates the average value \\texttt{M} of array elements, independently for each channel:\n\n\\[\n\\begin{array}{l}\nN = \\sum_I (\\texttt{mask}(I) \\ne 0)\\\\\nM_c = \\frac{\\sum_{I, \\, \\texttt{mask}(I) \\ne 0} \\texttt{arr}(I)_c}{N}\n\\end{array}\n\\]\n\nIf the array is \\texttt{IplImage} and COI is set, the function processes the selected channel only and stores the average to the first scalar component $S_0$.\n\n\\cvCPyFunc{AvgSdv}\nCalculates average (mean) and standard deviation of array elements.\n\n\\cvdefC{void cvAvgSdv(const CvArr* arr, CvScalar* mean, CvScalar* stdDev, const CvArr* mask=NULL);}\n\\cvdefPy{AvgSdv(arr,mask=NULL)-> (mean, stdDev)}\n\n\\begin{description}\n\\cvarg{arr}{The array}\n\\ifC\n\\cvarg{mean}{Pointer to the output mean value, may be NULL if it is not needed}\n\\cvarg{stdDev}{Pointer to the output standard deviation}\n\\fi\n\\cvarg{mask}{The optional operation mask}\n\\ifPy\n\\cvarg{mean}{Mean value, a CvScalar}\n\\cvarg{stdDev}{Standard deviation, a CvScalar}\n\\fi\n\n\\end{description}\n\nThe function calculates the average value and standard deviation of array elements, independently for each channel:\n\n\\[\n\\begin{array}{l}\nN = \\sum_I (\\texttt{mask}(I) \\ne 0)\\\\\nmean_c = \\frac{1}{N} \\, \\sum_{ I, \\, \\texttt{mask}(I) \\ne 0} \\texttt{arr}(I)_c\\\\\nstdDev_c = \\sqrt{\\frac{1}{N} \\, \\sum_{ I, \\, \\texttt{mask}(I) \\ne 0} (\\texttt{arr}(I)_c - mean_c)^2}\n\\end{array}\n\\]\n\nIf the array is \\texttt{IplImage} and COI is set, the function processes the selected channel only and stores the average and standard deviation to the first components of the output scalars ($mean_0$ and $stdDev_0$).\n\n\\cvCPyFunc{CalcCovarMatrix}\nCalculates covariance matrix of a set of vectors.\n\n\\cvdefC{\nvoid cvCalcCovarMatrix(\\par const CvArr** vects,\\par int count,\\par CvArr* covMat,\\par CvArr* avg,\\par int flags);}\n\\cvdefPy{CalcCovarMatrix(vects,covMat,avg,flags)-> None}\n\n\\begin{description}\n\\cvarg{vects}{The input vectors, all of which must have the same type and the same size. 
The vectors do not have to be 1D, they can be 2D (e.g., images) and so forth}\n\\ifC\n\\cvarg{count}{The number of input vectors}\n\\fi\n\\cvarg{covMat}{The output covariance matrix that should be floating-point and square}\n\\cvarg{avg}{The input or output (depending on the flags) array - the mean (average) vector of the input vectors}\n\\cvarg{flags}{The operation flags, a combination of the following values\n\\begin{description}\n\\cvarg{CV\\_COVAR\\_SCRAMBLED}{The output covariance matrix is calculated as:\n\\[\n \\texttt{scale} * [ \\texttt{vects} [0]- \\texttt{avg} ,\\texttt{vects} [1]- \\texttt{avg} ,...]^T \\cdot [\\texttt{vects} [0]-\\texttt{avg} ,\\texttt{vects} [1]-\\texttt{avg} ,...] \n\\],\nthat is, the covariance matrix is\n$\\texttt{count} \\times \\texttt{count}$.\nSuch an unusual covariance matrix is used for fast PCA\nof a set of very large vectors (see, for example, the EigenFaces technique\nfor face recognition). Eigenvalues of this \"scrambled\" matrix will\nmatch the eigenvalues of the true covariance matrix and the \"true\"\neigenvectors can be easily calculated from the eigenvectors of the\n\"scrambled\" covariance matrix.}\n\\cvarg{CV\\_COVAR\\_NORMAL}{The output covariance matrix is calculated as:\n\\[\n \\texttt{scale} * [ \\texttt{vects} [0]- \\texttt{avg} ,\\texttt{vects} [1]- \\texttt{avg} ,...] \\cdot [\\texttt{vects} [0]-\\texttt{avg} ,\\texttt{vects} [1]-\\texttt{avg} ,...]^T \n\\],\nthat is, \\texttt{covMat} will be a covariance matrix\nwith the same linear size as the total number of elements in each\ninput vector. One and only one of \\texttt{CV\\_COVAR\\_SCRAMBLED} and\n\\texttt{CV\\_COVAR\\_NORMAL} must be specified}\n\\cvarg{CV\\_COVAR\\_USE\\_AVG}{If the flag is specified, the function does not calculate \\texttt{avg} from the input vectors, but, instead, uses the passed \\texttt{avg} vector. This is useful if \\texttt{avg} has been already calculated somehow, or if the covariance matrix is calculated by parts - in this case, \\texttt{avg} is not a mean vector of the input sub-set of vectors, but rather the mean vector of the whole set.}\n\\cvarg{CV\\_COVAR\\_SCALE}{If the flag is specified, the covariance matrix is scaled. In the \"normal\" mode \\texttt{scale} is '1./count'; in the \"scrambled\" mode \\texttt{scale} is the reciprocal of the total number of elements in each input vector. By default (if the flag is not specified) the covariance matrix is not scaled ('scale=1').}\n\n\\cvarg{CV\\_COVAR\\_ROWS}{Means that all the input vectors are stored as rows of a single matrix, \\texttt{vects[0]}. \\texttt{count} is ignored in this case, and \\texttt{avg} should be a single-row vector of an appropriate size.}\n\\cvarg{CV\\_COVAR\\_COLS}{Means that all the input vectors are stored as columns of a single matrix, \\texttt{vects[0]}. \\texttt{count} is ignored in this case, and \\texttt{avg} should be a single-column vector of an appropriate size.}\n\n\\end{description}}\n\\end{description}\n\nThe function calculates the covariance matrix\nand, optionally, the mean vector of the set of input vectors. 
The function\ncan be used for PCA, for comparing vectors using Mahalanobis distance and so forth.\n\n\\cvCPyFunc{CartToPolar}\nCalculates the magnitude and/or angle of 2d vectors.\n\n\\cvdefC{void cvCartToPolar(\\par const CvArr* x,\\par const CvArr* y,\\par CvArr* magnitude,\\par CvArr* angle=NULL,\\par int angleInDegrees=0);}\n\\cvdefPy{CartToPolar(x,y,magnitude,angle=NULL,angleInDegrees=0)-> None}\n\n\\begin{description}\n\\cvarg{x}{The array of x-coordinates}\n\\cvarg{y}{The array of y-coordinates}\n\\cvarg{magnitude}{The destination array of magnitudes, may be set to NULL if it is not needed}\n\\cvarg{angle}{The destination array of angles, may be set to NULL if it is not needed. The angles are measured in radians $(0$ to $2 \\pi )$ or in degrees (0 to 360 degrees).}\n\\cvarg{angleInDegrees}{The flag indicating whether the angles are measured in radians, which is the default mode, or in degrees}\n\\end{description}\n\nThe function calculates either the magnitude, angle, or both of every 2d vector (x(I),y(I)):\n\n\\begin{lstlisting}\n\nmagnitude(I)=sqrt(x(I)^2+y(I)^2),\nangle(I)=atan(y(I)/x(I))\n\n\\end{lstlisting}\n\nThe angles are calculated with 0.1 degree accuracy. For the (0,0) point, the angle is set to 0.\n\n\\cvCPyFunc{Cbrt}\nCalculates the cubic root.\n\n\\cvdefC{float cvCbrt(float value);}\n\\cvdefPy{Cbrt(value)-> float}\n\n\\begin{description}\n\\cvarg{value}{The input floating-point value}\n\\end{description}\n\n\nThe function calculates the cubic root of the argument, and normally it is faster than \\texttt{pow(value,1./3)}. In addition, negative arguments are handled properly. Special values ($\\pm \\infty $, NaN) are not handled.\n\n\\cvCPyFunc{ClearND}\nClears a specific array element.\n\\cvdefC{void cvClearND(CvArr* arr, int* idx);}\n\\cvdefPy{ClearND(arr,idx)-> None}\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{idx}{Array of the element indices}\n\\end{description}\n\nThe function \\cvCPyCross{ClearND} clears (sets to zero) a specific element of a dense array or deletes the element of a sparse array. If the sparse array element does not exist, the function does nothing.\n
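\n\\ifC\nFor instance, the following sketch (our own illustration, not one of the reference samples) writes and then clears the element $(1,2)$ of a dense matrix and of a sparse matrix; for the sparse matrix the node is deleted again:\n\n\\begin{lstlisting}\nint idx[] = { 1, 2 };\nint dims[] = { 10, 10 };\nCvMat* m = cvCreateMat( 3, 3, CV_32FC1 );\nCvSparseMat* s = cvCreateSparseMat( 2, dims, CV_32FC1 );\ncvSetRealND( m, idx, 7 );  /* m(1,2) = 7 */\ncvClearND( m, idx );       /* m(1,2) = 0 again */\ncvSetRealND( s, idx, 7 );  /* creates the node (1,2) */\ncvClearND( s, idx );       /* deletes the node again */\ncvReleaseMat( &m );\ncvReleaseSparseMat( &s );\n\\end{lstlisting}\n\\fi\n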
\n\\cvCPyFunc{CloneImage}\nMakes a full copy of an image, including the header, data, and ROI.\n\n\\cvdefC{IplImage* cvCloneImage(const IplImage* image);}\n\\cvdefPy{CloneImage(image)-> copy}\n\n\\begin{description}\n\\cvarg{image}{The original image}\n\\end{description}\n\nThe returned \\texttt{IplImage*} points to the image copy.\n\n\\cvCPyFunc{CloneMat}\nCreates a full matrix copy.\n\n\\cvdefC{CvMat* cvCloneMat(const CvMat* mat);}\n\\cvdefPy{CloneMat(mat)-> copy}\n\n\\begin{description}\n\\cvarg{mat}{Matrix to be copied}\n\\end{description}\n\nCreates a full copy of a matrix and returns a pointer to the copy.\n\n\\cvCPyFunc{CloneMatND}\nCreates a full copy of a multi-dimensional array and returns a pointer to the copy.\n\n\\cvdefC{CvMatND* cvCloneMatND(const CvMatND* mat);}\n\\cvdefPy{CloneMatND(mat)-> copy}\n\n\\begin{description}\n\\cvarg{mat}{Input array}\n\\end{description}\n\n\n\\ifC % {\n\\cvCPyFunc{CloneSparseMat}\nCreates a full copy of a sparse array.\n\n\\cvdefC{CvSparseMat* cvCloneSparseMat(const CvSparseMat* mat);}\n\\cvdefPy{CloneSparseMat(mat) -> mat}\n\n\\begin{description}\n\\cvarg{mat}{Input array}\n\\end{description}\n\nThe function creates a copy of the input array and returns a pointer to the copy.\n\\fi % }\n\n\\cvCPyFunc{Cmp}\nPerforms per-element comparison of two arrays.\n\n\\cvdefC{void cvCmp(const CvArr* src1, const CvArr* src2, CvArr* dst, int cmpOp);}\n\\cvdefPy{Cmp(src1,src2,dst,cmpOp)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array. Both source arrays must have a single channel.}\n\\cvarg{dst}{The destination array, must have 8u or 8s type}\n\\cvarg{cmpOp}{The flag specifying the relation between the elements to be checked\n\\begin{description}\n \\cvarg{CV\\_CMP\\_EQ}{src1(I) \"equal to\" src2(I)}\n \\cvarg{CV\\_CMP\\_GT}{src1(I) \"greater than\" src2(I)}\n \\cvarg{CV\\_CMP\\_GE}{src1(I) \"greater or equal\" src2(I)}\n \\cvarg{CV\\_CMP\\_LT}{src1(I) \"less than\" src2(I)}\n \\cvarg{CV\\_CMP\\_LE}{src1(I) \"less or equal\" src2(I)}\n \\cvarg{CV\\_CMP\\_NE}{src1(I) \"not equal\" src2(I)}\n\\end{description}}\n\\end{description}\n\nThe function compares the corresponding elements of two arrays and fills the destination mask array:\n\n\\begin{lstlisting}\ndst(I)=src1(I) op src2(I),\n\\end{lstlisting}\n\n\\texttt{dst(I)} is set to 0xff (all \\texttt{1}-bits) if the specific relation between the elements is true and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size).\n
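\n\\ifC\nA short illustrative snippet (our own, not one of the reference samples) marking the positions where one matrix exceeds another:\n\n\\begin{lstlisting}\nCvMat* a    = cvCreateMat( 2, 2, CV_32FC1 );\nCvMat* b    = cvCreateMat( 2, 2, CV_32FC1 );\nCvMat* mask = cvCreateMat( 2, 2, CV_8UC1 );  /* destination must be 8u/8s */\ncvSet( a, cvRealScalar(3), 0 );\ncvSet( b, cvRealScalar(2), 0 );\ncvCmp( a, b, mask, CV_CMP_GT );  /* mask(I) = 0xff everywhere, since 3 > 2 */\n\\end{lstlisting}\n\\fi\n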
\cvCPyFunc{CmpS}
Performs per-element comparison of an array and a scalar.

\cvdefC{void cvCmpS(const CvArr* src, double value, CvArr* dst, int cmpOp);}
\cvdefPy{CmpS(src,value,dst,cmpOp)-> None}

\begin{description}
\cvarg{src}{The source array, must have a single channel}
\cvarg{value}{The scalar value to compare each array element with}
\cvarg{dst}{The destination array, must have 8u or 8s type}
\cvarg{cmpOp}{The flag specifying the relation between the elements to be checked
\begin{description}
 \cvarg{CV\_CMP\_EQ}{src1(I) "equal to" value}
 \cvarg{CV\_CMP\_GT}{src1(I) "greater than" value}
 \cvarg{CV\_CMP\_GE}{src1(I) "greater or equal" value}
 \cvarg{CV\_CMP\_LT}{src1(I) "less than" value}
 \cvarg{CV\_CMP\_LE}{src1(I) "less or equal" value}
 \cvarg{CV\_CMP\_NE}{src1(I) "not equal" value}
\end{description}}
\end{description}

The function compares the corresponding elements of an array and a scalar and fills the destination mask array:

\begin{lstlisting}
dst(I)=src(I) op scalar
\end{lstlisting}

where \texttt{op} is $=,\; >,\; \ge,\; <,\; \le$ or $\ne$.

\texttt{dst(I)} is set to 0xff (all \texttt{1}-bits) if the specific relation between the elements is true and 0 otherwise. All the arrays must have the same size (or ROI size).

\ifPy % {
\cvCPyFunc{Convert}
Converts one array to another.

\cvdefPy{Convert(src,dst)-> None}

\begin{description}
\cvarg{src}{Source array}
\cvarg{dst}{Destination array}
\end{description}

The conversion is done with rounding and saturation, that is, if the result cannot be represented exactly by a value of the destination array element type, it is set to the nearest representable value on the real axis.

All the channels of multi-channel arrays are processed independently.

\fi % }

\cvCPyFunc{ConvertScale}
Converts one array to another with optional linear transformation.

\cvdefC{void cvConvertScale(const CvArr* src, CvArr* dst, double scale=1, double shift=0);}
\cvdefPy{ConvertScale(src,dst,scale=1.0,shift=0.0)-> None}

\ifC
\begin{lstlisting}
#define cvCvtScale cvConvertScale
#define cvScale  cvConvertScale
#define cvConvert(src, dst )  cvConvertScale((src), (dst), 1, 0 )
\end{lstlisting}
\fi

\begin{description}
\cvarg{src}{Source array}
\cvarg{dst}{Destination array}
\cvarg{scale}{Scale factor}
\cvarg{shift}{Value added to the scaled source array elements}
\end{description}

The function has several different purposes, and thus has several different names. It copies one array to another with optional scaling, which is performed first, and/or optional type conversion, performed after:

\[
\texttt{dst}(I) = \texttt{scale} \cdot \texttt{src}(I) + (\texttt{shift}_0,\texttt{shift}_1,...)
\]

All the channels of multi-channel arrays are processed independently.

The conversion is done with rounding and saturation, that is, if the result of scaling + conversion cannot be represented exactly by a value of the destination array element type, it is set to the nearest representable value on the real axis.

In the case of \texttt{scale=1, shift=0} no prescaling is done. This is a specially optimized case and it has the appropriate \cvCPyCross{Convert} name. If the source and destination arrays have equal types, this is also a special case that can be used to scale and shift a matrix or an image; that variant is called \cvCPyCross{Scale}.
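A minimal sketch (the sizes are arbitrary): converting a floating-point array with values in $[0,1]$ to an 8-bit array:

\begin{lstlisting}
CvMat* src = cvCreateMat(100, 100, CV_32FC1);
CvMat* dst = cvCreateMat(100, 100, CV_8UC1);
/* ... fill src with values in [0,1] ... */
cvConvertScale(src, dst, 255, 0);  // dst(I) = 255*src(I), rounded and
                                   // saturated to the range 0..255
\end{lstlisting}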
\cvCPyFunc{ConvertScaleAbs}
Converts input array elements to 8-bit unsigned integers with optional linear transformation.

\cvdefC{void cvConvertScaleAbs(const CvArr* src, CvArr* dst, double scale=1, double shift=0);}
\cvdefPy{ConvertScaleAbs(src,dst,scale=1.0,shift=0.0)-> None}

\begin{description}
\cvarg{src}{Source array}
\cvarg{dst}{Destination array (should have 8u depth)}
\cvarg{scale}{ScaleAbs factor}
\cvarg{shift}{Value added to the scaled source array elements}
\end{description}

The function is similar to \cvCPyCross{ConvertScale}, but it stores absolute values of the conversion results:

\[
\texttt{dst}(I) = |\texttt{scale} \cdot \texttt{src}(I) + (\texttt{shift}_0,\texttt{shift}_1,...)|
\]

The function supports only destination arrays of 8u (8-bit unsigned integers) type; for other types the function can be emulated by a combination of the \cvCPyCross{ConvertScale} and \cvCPyCross{Abs} functions.

\cvCPyFunc{CvtScaleAbs}

Synonym for \cross{ConvertScaleAbs}.

\cvCPyFunc{Copy}
Copies one array to another.

\cvdefC{void cvCopy(const CvArr* src, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{Copy(src,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}

The function copies selected elements from an input array to an output array:

\[
\texttt{dst}(I)=\texttt{src}(I) \quad \text{if} \quad \texttt{mask}(I) \ne 0.
\]

If any of the passed arrays is of \texttt{IplImage} type, then its ROI and COI fields are used. Both arrays must have the same type, the same number of dimensions, and the same size. The function can also copy sparse arrays (mask is not supported in this case).
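A minimal sketch of masked copying (the image sizes are made up; the region of interest is assumed to be drawn into \texttt{mask} elsewhere):

\begin{lstlisting}
IplImage* src  = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 3);
IplImage* dst  = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
IplImage* mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
/* ... fill src and dst, draw the region of interest into mask ... */
cvCopy(src, dst, mask);   // dst(I) = src(I) wherever mask(I) != 0
\end{lstlisting}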
\cvCPyFunc{CountNonZero}
Counts non-zero array elements.

\cvdefC{int cvCountNonZero(const CvArr* arr);}
\cvdefPy{CountNonZero(arr)-> int}

\begin{description}
\cvarg{arr}{The array must be a single-channel array or a multi-channel image with COI set}
\end{description}

The function returns the number of non-zero elements in arr:

\[ \sum_I (\texttt{arr}(I) \ne 0) \]

In the case of \texttt{IplImage} both ROI and COI are supported.

\cvCPyFunc{CreateData}
Allocates array data.

\cvdefC{void cvCreateData(CvArr* arr);}
\cvdefPy{CreateData(arr) -> None}

\begin{description}
\cvarg{arr}{Array header}
\end{description}

The function allocates image, matrix or multi-dimensional array data. Note that in the case of matrix types OpenCV allocation functions are used, and in the case of IplImage they are used as well, unless \texttt{CV\_TURN\_ON\_IPL\_COMPATIBILITY} was called. In the latter case IPL functions are used to allocate the data.

\cvCPyFunc{CreateImage}
Creates an image header and allocates the image data.

\cvdefC{IplImage* cvCreateImage(CvSize size, int depth, int channels);}
\cvdefPy{CreateImage(size, depth, channels)->image}

\begin{description}
\cvarg{size}{Image width and height}
\cvarg{depth}{Bit depth of image elements. See \cross{IplImage} for valid depths.}
\cvarg{channels}{Number of channels per pixel. See \cross{IplImage} for details. This function only creates images with interleaved channels.}
\end{description}

\ifC
This call is a shortened form of
\begin{lstlisting}
header = cvCreateImageHeader(size, depth, channels);
cvCreateData(header);
\end{lstlisting}
\fi

\cvCPyFunc{CreateImageHeader}
Creates an image header but does not allocate the image data.

\cvdefC{IplImage* cvCreateImageHeader(CvSize size, int depth, int channels);}
\cvdefPy{CreateImageHeader(size, depth, channels) -> image}

\begin{description}
\cvarg{size}{Image width and height}
\cvarg{depth}{Image depth (see \cvCPyCross{CreateImage})}
\cvarg{channels}{Number of channels (see \cvCPyCross{CreateImage})}
\end{description}

\ifC
This call is an analogue of
\begin{lstlisting}
hdr=iplCreateImageHeader(channels, 0, depth,
                      channels == 1 ? "GRAY" : "RGB",
                      channels == 1 ? "GRAY" : channels == 3 ? "BGR" :
                      channels == 4 ? "BGRA" : "",
                      IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, 4,
                      size.width, size.height,
                      0,0,0,0);
\end{lstlisting}
but it does not use IPL functions by default (see the \texttt{CV\_TURN\_ON\_IPL\_COMPATIBILITY} macro).
\fi

\cvCPyFunc{CreateMat}\label{cvCreateMat}
Creates a matrix header and allocates the matrix data.

\cvdefC{CvMat* cvCreateMat(\par int rows,\par int cols,\par int type);}
\cvdefPy{CreateMat(rows, cols, type) -> mat}

\begin{description}
\cvarg{rows}{Number of rows in the matrix}
\cvarg{cols}{Number of columns in the matrix}
\cvarg{type}{The type of the matrix elements in the form \texttt{CV\_<bit depth><S|U|F>C<number of channels>}, where S=signed, U=unsigned, F=float.
For example, CV\_8UC1 means the elements are 8-bit unsigned and there is 1 channel, and CV\_32SC2 means the elements are 32-bit signed and there are 2 channels.}
\end{description}

\ifC
This is the concise form for:

\begin{lstlisting}
CvMat* mat = cvCreateMatHeader(rows, cols, type);
cvCreateData(mat);
\end{lstlisting}
\fi

\cvCPyFunc{CreateMatHeader}
Creates a matrix header but does not allocate the matrix data.

\cvdefC{CvMat* cvCreateMatHeader(\par int rows,\par int cols,\par int type);}
\cvdefPy{CreateMatHeader(rows, cols, type) -> mat}

\begin{description}
\cvarg{rows}{Number of rows in the matrix}
\cvarg{cols}{Number of columns in the matrix}
\cvarg{type}{Type of the matrix elements, see \cvCPyCross{CreateMat}}
\end{description}

The function allocates a new matrix header and returns a pointer to it. The matrix data can then be allocated using \cvCPyCross{CreateData} or set explicitly to user-allocated data via \cvCPyCross{SetData}.

\cvCPyFunc{CreateMatND}
Creates the header and allocates the data for a multi-dimensional dense array.

\cvdefC{CvMatND* cvCreateMatND(\par int dims,\par const int* sizes,\par int type);}
\cvdefPy{CreateMatND(dims, type) -> matND}

\begin{description}
\ifPy
\cvarg{dims}{List or tuple of array dimensions, up to 32 in length.}
\else
\cvarg{dims}{Number of array dimensions. This must not exceed CV\_MAX\_DIM (32 by default, but can be changed at build time).}
\cvarg{sizes}{Array of dimension sizes.}
\fi
\cvarg{type}{Type of array elements, see \cvCPyCross{CreateMat}.}
\end{description}

\ifC
This is a short form for:

\begin{lstlisting}
CvMatND* mat = cvCreateMatNDHeader(dims, sizes, type);
cvCreateData(mat);
\end{lstlisting}
\fi

\cvCPyFunc{CreateMatNDHeader}
Creates a new matrix header but does not allocate the matrix data.

\cvdefC{CvMatND* cvCreateMatNDHeader(\par int dims,\par const int* sizes,\par int type);}
\cvdefPy{CreateMatNDHeader(dims, type) -> matND}

\begin{description}
\ifPy
\cvarg{dims}{List or tuple of array dimensions, up to 32 in length.}
\else
\cvarg{dims}{Number of array dimensions}
\cvarg{sizes}{Array of dimension sizes}
\fi
\cvarg{type}{Type of array elements, see \cvCPyCross{CreateMat}}
\end{description}

The function allocates a header for a multi-dimensional dense array. The array data can further be allocated using \cvCPyCross{CreateData} or set explicitly to user-allocated data via \cvCPyCross{SetData}.

\ifC % {
\cvCPyFunc{CreateSparseMat}
Creates a sparse array.

\cvdefC{CvSparseMat* cvCreateSparseMat(int dims, const int* sizes, int type);}
\cvdefPy{CreateSparseMat(dims, type) -> cvmat}

\begin{description}
\ifC
\cvarg{dims}{Number of array dimensions. In contrast to the dense matrix, the number of dimensions is practically unlimited (up to $2^{16}$).}
\cvarg{sizes}{Array of dimension sizes}
\else
\cvarg{dims}{List or tuple of array dimensions.}
\fi
\cvarg{type}{Type of array elements. The same as for CvMat}
\end{description}

The function allocates a multi-dimensional sparse array. Initially the array contains no elements, that is, \cvCPyCross{Get} or \cvCPyCross{GetReal} returns zero for every index.
\fi % }
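Putting the creation functions above together, here is a minimal sketch of the usual create/use/release life cycle of a dense matrix (the 3x3 size is arbitrary):

\begin{lstlisting}
CvMat* m = cvCreateMat(3, 3, CV_32FC1);   // header + data in one call
cvSetIdentity(m, cvRealScalar(1));        // use the matrix
/* ... */
cvReleaseMat(&m);                         // frees data and header, sets m = NULL
\end{lstlisting}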
\cvCPyFunc{CrossProduct}
Calculates the cross product of two 3D vectors.

\cvdefC{void cvCrossProduct(const CvArr* src1, const CvArr* src2, CvArr* dst);}
\cvdefPy{CrossProduct(src1,src2,dst)-> None}

\begin{description}
\cvarg{src1}{The first source vector}
\cvarg{src2}{The second source vector}
\cvarg{dst}{The destination vector}
\end{description}

The function calculates the cross product of two 3D vectors:

\[ \texttt{dst} = \texttt{src1} \times \texttt{src2} \]
or:
\[
\begin{array}{l}
\texttt{dst}_1 = \texttt{src1}_2 \texttt{src2}_3 - \texttt{src1}_3 \texttt{src2}_2\\
\texttt{dst}_2 = \texttt{src1}_3 \texttt{src2}_1 - \texttt{src1}_1 \texttt{src2}_3\\
\texttt{dst}_3 = \texttt{src1}_1 \texttt{src2}_2 - \texttt{src1}_2 \texttt{src2}_1
\end{array}
\]

\subsection{CvtPixToPlane}

Synonym for \cross{Split}.

\cvCPyFunc{DCT}
Performs a forward or inverse Discrete Cosine transform of a 1D or 2D floating-point array.

\cvdefC{void cvDCT(const CvArr* src, CvArr* dst, int flags);}
\cvdefPy{DCT(src,dst,flags)-> None}

\begin{description}
\cvarg{src}{Source array, real 1D or 2D array}
\cvarg{dst}{Destination array of the same size and same type as the source}
\cvarg{flags}{Transformation flags, a combination of the following values
\begin{description}
\cvarg{CV\_DXT\_FORWARD}{do a forward 1D or 2D transform.}
\cvarg{CV\_DXT\_INVERSE}{do an inverse 1D or 2D transform.}
\cvarg{CV\_DXT\_ROWS}{do a forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms and so forth.}
\end{description}}
\end{description}

The function performs a forward or inverse transform of a 1D or 2D floating-point array:

Forward Cosine transform of a 1D vector of $N$ elements:
\[Y = C^{(N)} \cdot X\]
where
\[C^{(N)}_{jk}=\sqrt{\alpha_j/N}\cos\left(\frac{\pi(2k+1)j}{2N}\right)\]
and $\alpha_0=1$, $\alpha_j=2$ for $j > 0$.

Inverse Cosine transform of a 1D vector of $N$ elements:
\[X = \left(C^{(N)}\right)^{-1} \cdot Y = \left(C^{(N)}\right)^T \cdot Y\]
(since $C^{(N)}$ is an orthogonal matrix, $C^{(N)} \cdot \left(C^{(N)}\right)^T = I$)

Forward Cosine transform of a 2D $M \times N$ matrix:
\[Y = C^{(M)} \cdot X \cdot \left(C^{(N)}\right)^T\]

Inverse Cosine transform of a 2D $M \times N$ matrix:
\[X = \left(C^{(M)}\right)^T \cdot Y \cdot C^{(N)}\]


\cvCPyFunc{DFT}
Performs a forward or inverse Discrete Fourier transform of a 1D or 2D floating-point array.

\cvdefC{void cvDFT(const CvArr* src, CvArr* dst, int flags, int nonzeroRows=0);}
\cvdefPy{DFT(src,dst,flags,nonzeroRows=0)-> None}

\begin{description}
\cvarg{src}{Source array, real or complex}
\cvarg{dst}{Destination array of the same size and same type as the source}
\cvarg{flags}{Transformation flags, a combination of the following values
\begin{description}
\cvarg{CV\_DXT\_FORWARD}{do a forward 1D or 2D transform. The result is not scaled.}
\cvarg{CV\_DXT\_INVERSE}{do an inverse 1D or 2D transform. The result is not scaled.
\\texttt{CV\\_DXT\\_FORWARD} and \\texttt{CV\\_DXT\\_INVERSE} are mutually exclusive, of course.}\n\\cvarg{CV\\_DXT\\_SCALE}{scale the result: divide it by the number of array elements. Usually, it is combined with \\texttt{CV\\_DXT\\_INVERSE}, and one may use a shortcut \\texttt{CV\\_DXT\\_INV\\_SCALE}.}\n\\cvarg{CV\\_DXT\\_ROWS}{do a forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms and so forth.}\n\\cvarg{CV\\_DXT\\_INVERSE\\_SCALE}{same as \\texttt{CV\\_DXT\\_INVERSE + CV\\_DXT\\_SCALE}}\n\\end{description}}\n\\cvarg{nonzeroRows}{Number of nonzero rows in the source array\n(in the case of a forward 2d transform), or a number of rows of interest in\nthe destination array (in the case of an inverse 2d transform). If the value\nis negative, zero, or greater than the total number of rows, it is\nignored. The parameter can be used to speed up 2d convolution/correlation\nwhen computing via DFT. See the example below.}\n\\end{description}\n\nThe function performs a forward or inverse transform of a 1D or 2D floating-point array:\n\n\nForward Fourier transform of 1D vector of N elements:\n\\[y = F^{(N)} \\cdot x, where F^{(N)}_{jk}=exp(-i \\cdot 2\\pi \\cdot j \\cdot k/N)\\], \n\\[i=sqrt(-1)\\]\n\nInverse Fourier transform of 1D vector of N elements:\n\\[x'= (F^{(N)})^{-1} \\cdot y = conj(F^(N)) \\cdot y\nx = (1/N) \\cdot x\\]\n\nForward Fourier transform of 2D vector of M $\\times$ N elements:\n\\[Y = F^{(M)} \\cdot X \\cdot F^{(N)}\\]\n\nInverse Fourier transform of 2D vector of M $\\times$ N elements:\n\\[X'= conj(F^{(M)}) \\cdot Y \\cdot conj(F^{(N)})\nX = (1/(M \\cdot N)) \\cdot X'\\]\n\n\nIn the case of real (single-channel) data, the packed format, borrowed from IPL, is used to represent the result of a forward Fourier transform or input for an inverse Fourier transform:\n\n\\[\\begin{bmatrix}\nRe Y_{0,0} & Re Y_{0,1} & Im Y_{0,1} & Re Y_{0,2} & Im Y_{0,2} & \\cdots & Re Y_{0,N/2-1} & Im Y_{0,N/2-1} & Re Y_{0,N/2} \\\\\nRe Y_{1,0} & Re Y_{1,1} & Im Y_{1,1} & Re Y_{1,2} & Im Y_{1,2} & \\cdots & Re Y_{1,N/2-1} & Im Y_{1,N/2-1} & Re Y_{1,N/2} \\\\\nIm Y_{1,0} & Re Y_{2,1} & Im Y_{2,1} & Re Y_{2,2} & Im Y_{2,2} & \\cdots & Re Y_{2,N/2-1} & Im Y_{2,N/2-1} & Im Y_{1,N/2} \\\\\n\\hdotsfor{9} \\\\\nRe Y_{M/2-1,0} &  Re Y_{M-3,1}  & Im Y_{M-3,1} & \\hdotsfor{3} & Re Y_{M-3,N/2-1} & Im Y_{M-3,N/2-1}& Re Y_{M/2-1,N/2} \\\\\nIm Y_{M/2-1,0} &  Re Y_{M-2,1}  & Im Y_{M-2,1} & \\hdotsfor{3} & Re Y_{M-2,N/2-1} & Im Y_{M-2,N/2-1}& Im Y_{M/2-1,N/2} \\\\\nRe Y_{M/2,0}  &  Re Y_{M-1,1} &  Im Y_{M-1,1} & \\hdotsfor{3} & Re Y_{M-1,N/2-1} & Im Y_{M-1,N/2-1}& Re Y_{M/2,N/2}\n\\end{bmatrix}\n\\]\n\n\nNote: the last column is present if \\texttt{N} is even, the last row is present if \\texttt{M} is even.\nIn the case of 1D real transform the result looks like the first row of the above matrix.\n\nHere is the example of how to compute 2D convolution using DFT.\n\n\\ifC\n\\begin{lstlisting}\nCvMat* A = cvCreateMat(M1, N1, CVg32F);\nCvMat* B = cvCreateMat(M2, N2, A->type);\n\n// it is also possible to have only abs(M2-M1)+1 times abs(N2-N1)+1\n// part of the full convolution result\nCvMat* conv = cvCreateMat(A->rows + B->rows - 1, A->cols + B->cols - 1, \n\t\t\t   A->type);\n\n// initialize A and B\n...\n\nint dftgM = cvGetOptimalDFTSize(A->rows + B->rows - 1);\nint dftgN 
int dft_N = cvGetOptimalDFTSize(A->cols + B->cols - 1);

CvMat* dft_A = cvCreateMat(dft_M, dft_N, A->type);
CvMat* dft_B = cvCreateMat(dft_M, dft_N, B->type);
CvMat tmp;

// copy A to dft_A and pad dft_A with zeros
cvGetSubRect(dft_A, &tmp, cvRect(0,0,A->cols,A->rows));
cvCopy(A, &tmp);
cvGetSubRect(dft_A, &tmp, cvRect(A->cols,0,dft_A->cols - A->cols,A->rows));
cvZero(&tmp);
// no need to pad the bottom part of dft_A with zeros because of the
// nonzeroRows parameter in the cvDFT() call below

cvDFT(dft_A, dft_A, CV_DXT_FORWARD, A->rows);

// repeat the same with the second array
cvGetSubRect(dft_B, &tmp, cvRect(0,0,B->cols,B->rows));
cvCopy(B, &tmp);
cvGetSubRect(dft_B, &tmp, cvRect(B->cols,0,dft_B->cols - B->cols,B->rows));
cvZero(&tmp);
// no need to pad the bottom part of dft_B with zeros because of the
// nonzeroRows parameter in the cvDFT() call below

cvDFT(dft_B, dft_B, CV_DXT_FORWARD, B->rows);

cvMulSpectrums(dft_A, dft_B, dft_A, 0 /* or CV_DXT_MUL_CONJ to get
                correlation rather than convolution */);

cvDFT(dft_A, dft_A, CV_DXT_INV_SCALE, conv->rows); // calculate only
                                                   // the top part
cvGetSubRect(dft_A, &tmp, cvRect(0,0,conv->cols,conv->rows));

cvCopy(&tmp, conv);
\end{lstlisting}
\fi

\ifC

\cvCPyFunc{DecRefData}
Decrements an array data reference counter.

\cvdefC{void cvDecRefData(CvArr* arr);}

\begin{description}
\cvarg{arr}{Pointer to an array header}
\end{description}

The function decrements the data reference counter in a \cross{CvMat} or
\cross{CvMatND} if the reference counter pointer
is not NULL. If the counter reaches zero, the data is deallocated. In the
current implementation the reference counter is not NULL only if the data
was allocated using the \cvCPyCross{CreateData} function. The counter will be NULL in other cases such as:
external data was assigned to the header using \cvCPyCross{SetData}, the matrix
header is part of a larger matrix or image, or the header was converted from an image or n-dimensional matrix header.

\fi


\cvCPyFunc{Det}
Returns the determinant of a matrix.

\cvdefC{double cvDet(const CvArr* mat);}
\cvdefPy{Det(mat)-> double}

\begin{description}
\cvarg{mat}{The source matrix}
\end{description}

The function returns the determinant of the square matrix \texttt{mat}. The direct method is used for small matrices and Gaussian elimination is used for larger matrices. For symmetric positively-defined matrices, it is also possible to run
\cvCPyCross{SVD}
with $U = V = 0$ and then calculate the determinant as a product of the diagonal elements of $W$.

\cvCPyFunc{Div}
Performs per-element division of two arrays.

\cvdefC{void cvDiv(const CvArr* src1, const CvArr* src2, CvArr* dst, double scale=1);}
\cvdefPy{Div(src1,src2,dst,scale=1)-> None}

\begin{description}
\cvarg{src1}{The first source array. If the pointer is NULL, the array is assumed to be all 1's.}
\cvarg{src2}{The second source array}
\cvarg{dst}{The destination array}
\cvarg{scale}{Optional scale factor}
\end{description}

The function divides one array by another:

\[
\texttt{dst}(I)=\fork
{\texttt{scale} \cdot \texttt{src1}(I)/\texttt{src2}(I)}{if \texttt{src1} is not \texttt{NULL}}
{\texttt{scale}/\texttt{src2}(I)}{otherwise}
\]

All the arrays must have the same type and the same size (or ROI size).
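The \texttt{NULL} special case gives a compact per-element reciprocal; a minimal sketch (the array names are illustrative):

\begin{lstlisting}
// dst(I) = 1/src(I) for every element
cvDiv(NULL, src, dst, 1.0);
\end{lstlisting}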
\cvCPyFunc{DotProduct}
Calculates the dot product of two arrays in the Euclidean metric.

\cvdefC{double cvDotProduct(const CvArr* src1, const CvArr* src2);}
\cvdefPy{DotProduct(src1,src2)-> double}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\end{description}

The function calculates and returns the Euclidean dot product of two arrays.

\[
src1 \bullet src2 = \sum_I (\texttt{src1}(I) \texttt{src2}(I))
\]

In the case of multiple channel arrays, the results for all channels are accumulated. In particular, \texttt{cvDotProduct(a,a)}, where \texttt{a} is a complex vector, will return $||\texttt{a}||^2$.
The function can process multi-dimensional arrays, row by row, layer by layer, and so on.

\cvCPyFunc{EigenVV}
Computes eigenvalues and eigenvectors of a symmetric matrix.

\cvdefC{
void cvEigenVV(\par CvArr* mat,\par CvArr* evects,\par CvArr* evals,\par double eps=0,
\par int lowindex = -1, \par int highindex = -1);}
\cvdefPy{EigenVV(mat,evects,evals,eps,lowindex,highindex)-> None}

\begin{description}
\cvarg{mat}{The input symmetric square matrix, modified during the processing}
\cvarg{evects}{The output matrix of eigenvectors, stored as subsequent rows}
\cvarg{evals}{The output vector of eigenvalues, stored in descending order (the order of eigenvalues and eigenvectors is synchronized, of course)}
\cvarg{eps}{Accuracy of diagonalization. Typically, \texttt{DBL\_EPSILON} (about $ 10^{-15} $) works well.
THIS PARAMETER IS CURRENTLY IGNORED.}
\cvarg{lowindex}{Optional index of largest eigenvalue/-vector to calculate.
(See below.)}
\cvarg{highindex}{Optional index of smallest eigenvalue/-vector to calculate.
(See below.)}
\end{description}

The function computes the eigenvalues and eigenvectors of matrix \texttt{A}:

\begin{lstlisting}
mat*evects(i,:)' = evals(i)*evects(i,:)' (in MATLAB notation)
\end{lstlisting}

If either low- or highindex is supplied the other is required, too.
Indexing is 0-based. Example: To calculate the largest eigenvector/-value set
\texttt{lowindex=highindex=0}. To calculate all the eigenvalues, leave \texttt{lowindex=highindex=-1}.
For legacy reasons this function always returns a square matrix the same size
as the source matrix with eigenvectors and a vector the length of the source
matrix with eigenvalues. The selected eigenvectors/-values are always in the
first \texttt{highindex - lowindex + 1} rows.

The contents of matrix \texttt{A} are destroyed by the function.

Currently the function is slower than \cvCPyCross{SVD} yet less accurate,
so if \texttt{A} is known to be positively-defined (for example, it
is a covariance matrix) it is recommended to use \cvCPyCross{SVD} to find
eigenvalues and eigenvectors of \texttt{A}, especially if eigenvectors
are not required.
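A minimal sketch (the 2x2 matrix is made up; note that the input is overwritten):

\begin{lstlisting}
float a[] = { 2, 1,
              1, 2 };
CvMat A = cvMat(2, 2, CV_32FC1, a);       // destroyed by cvEigenVV
CvMat* evects = cvCreateMat(2, 2, CV_32FC1);
CvMat* evals  = cvCreateMat(2, 1, CV_32FC1);

cvEigenVV(&A, evects, evals, 0, -1, -1);  // eps is currently ignored
// evals now holds {3, 1}; the i-th row of evects is the eigenvector
// corresponding to evals(i)
\end{lstlisting}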
\cvCPyFunc{Exp}
Calculates the exponent of every array element.

\cvdefC{void cvExp(const CvArr* src, CvArr* dst);}
\cvdefPy{Exp(src,dst)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array, it should have \texttt{double} type or the same type as the source}
\end{description}

The function calculates the exponent of every element of the input array:

\[
\texttt{dst} [I] = e^{\texttt{src}(I)}
\]

The maximum relative error is about $7 \times 10^{-6}$. Currently, the function converts denormalized values to zeros on output.

\cvCPyFunc{FastArctan}
Calculates the angle of a 2D vector.

\cvdefC{float cvFastArctan(float y, float x);}
\cvdefPy{FastArctan(y,x)-> float}

\begin{description}
\cvarg{x}{x-coordinate of 2D vector}
\cvarg{y}{y-coordinate of 2D vector}
\end{description}

The function calculates the full-range angle of an input 2D vector. The angle is
measured in degrees and varies from 0 degrees to 360 degrees. The accuracy is about 0.1 degrees.

\cvCPyFunc{Flip}
Flips a 2D array around vertical, horizontal or both axes.

\cvdefC{void  cvFlip(const CvArr* src, CvArr* dst=NULL, int flipMode=0);}
\cvdefPy{Flip(src,dst=NULL,flipMode=0)-> None}

\begin{description}
\cvarg{src}{Source array}
\cvarg{dst}{Destination array.
If $\texttt{dst} = \texttt{NULL}$ the flipping is done in place.}
\cvarg{flipMode}{Specifies how to flip the array:
0 means flipping around the x-axis, positive (e.g., 1) means flipping around the y-axis, and negative (e.g., -1) means flipping around both axes.
See also the discussion below for the formulas:}
\end{description}

The function flips the array in one of three different ways (row and column indices are 0-based):

\[
dst(i,j) = \forkthree
{\texttt{src}(rows(\texttt{src})-i-1,j)}{if $\texttt{flipMode} = 0$}
{\texttt{src}(i,cols(\texttt{src})-j-1)}{if $\texttt{flipMode} > 0$}
{\texttt{src}(rows(\texttt{src})-i-1,cols(\texttt{src})-j-1)}{if $\texttt{flipMode} < 0$}
\]

Example scenarios of using the function are:
\begin{itemize}
  \item vertical flipping of the image (flipMode $=$ 0) to switch between top-left and bottom-left image origin, which is a typical operation in video processing under Win32 systems.
  \item horizontal flipping of the image with subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry (flipMode $>$ 0)
  \item simultaneous horizontal and vertical flipping of the image with subsequent shift and absolute difference calculation to check for a central symmetry (flipMode $<$ 0)
  \item reversing the order of 1d point arrays (flipMode $>$ 0)
\end{itemize}

\ifPy

\cvCPyFunc{fromarray}

Create a CvMat from an object that supports the array interface.

\cvdefPy{fromarray(object, allowND = False) -> CvMat}

\begin{description}
\cvarg{object}{Any object that supports the array interface}
\cvarg{allowND}{If true, will return a CvMatND}
\end{description}

If the object supports the
\href{http://docs.scipy.org/doc/numpy/reference/arrays.interface.html}{array interface},
return a \cross{CvMat} (\texttt{allowND = False}) or \cross{CvMatND} (\texttt{allowND = True}).

If \texttt{allowND = False}, then the object's array must be either 2D or 3D.  If it is 2D, then the returned CvMat has a single channel.  If it is 3D, then the returned CvMat will have N channels, where N is the last dimension of the array. In this case, N cannot be greater than OpenCV's channel limit, \texttt{CV\_CN\_MAX}.

If \texttt{allowND = True}, then \texttt{fromarray} returns a single-channel \cross{CvMatND} with the same shape as the original array.

For example, \href{http://numpy.scipy.org/}{NumPy} arrays support the array interface, so they can be converted to OpenCV objects:

\begin{lstlisting}
>>> import cv, numpy
>>> a = numpy.ones((480, 640))
>>> mat = cv.fromarray(a)
>>> print cv.GetDims(mat), cv.CV_MAT_CN(cv.GetElemType(mat))
(480, 640) 1
>>> a = numpy.ones((480, 640, 3))
>>> mat = cv.fromarray(a)
>>> print cv.GetDims(mat), cv.CV_MAT_CN(cv.GetElemType(mat))
(480, 640) 3
>>> a = numpy.ones((480, 640, 3))
>>> mat = cv.fromarray(a, allowND = True)
>>> print cv.GetDims(mat), cv.CV_MAT_CN(cv.GetElemType(mat))
(480, 640, 3) 1
\end{lstlisting}

\fi

\cvCPyFunc{GEMM}
Performs generalized matrix multiplication.

\cvdefC{void cvGEMM(\par const CvArr* src1, \par const CvArr* src2, double alpha,
              \par const CvArr* src3, \par double beta, \par CvArr* dst, \par int tABC=0);\newline
\#define cvMatMulAdd(src1, src2, src3, dst ) cvGEMM(src1, src2, 1, src3, 1, dst, 0 )\par
\#define cvMatMul(src1, src2, dst ) cvMatMulAdd(src1, src2, 0, dst )}

\cvdefPy{GEMM(src1,src2,alpha,src3,beta,dst,tABC=0)-> None}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\cvarg{src3}{The third source array (shift).
Can be NULL, if there is no shift.}
\cvarg{dst}{The destination array}
\cvarg{tABC}{The operation flags that can be 0 or a combination of the following values
\begin{description}
\cvarg{CV\_GEMM\_A\_T}{transpose src1}
\cvarg{CV\_GEMM\_B\_T}{transpose src2}
\cvarg{CV\_GEMM\_C\_T}{transpose src3}
\end{description}

For example, \texttt{CV\_GEMM\_A\_T+CV\_GEMM\_C\_T} corresponds to
\[
\texttt{alpha} \, \texttt{src1} ^T \, \texttt{src2} + \texttt{beta} \, \texttt{src3} ^T
\]}
\end{description}

The function performs generalized matrix multiplication:

\[
\texttt{dst} = \texttt{alpha} \, op(\texttt{src1}) \, op(\texttt{src2}) + \texttt{beta} \, op(\texttt{src3}) \quad \text{where $op(X)$ is $X$ or $X^T$}
\]

All the matrices should have the same data type, and their sizes must be consistent with the multiplication above. Real or complex floating-point matrices are supported.

\ifC  % {

\cvCPyFunc{Get?D}
Return a specific array element.

\cvdefC{
CvScalar cvGet1D(const CvArr* arr, int idx0);
CvScalar cvGet2D(const CvArr* arr, int idx0, int idx1);
CvScalar cvGet3D(const CvArr* arr, int idx0, int idx1, int idx2);
CvScalar cvGetND(const CvArr* arr, int* idx);
}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\cvarg{idx2}{The third zero-based component of the element index}
\cvarg{idx}{Array of the element indices}
\end{description}

The functions return a specific array element. In the case of a sparse array the functions return 0 if the requested node does not exist (no new node is created by the functions).
\else % }{

\cvCPyFunc{Get1D}
Return a specific array element.

\cvdefPy{Get1D(arr, idx) -> scalar}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx}{Zero-based element index}
\end{description}

Return a specific array element.  Array must have dimension 1.

\cvCPyFunc{Get2D}
Return a specific array element.

\cvdefPy{ Get2D(arr, idx0, idx1) -> scalar }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element row index}
\cvarg{idx1}{Zero-based element column index}
\end{description}

Return a specific array element.  Array must have dimension 2.

\cvCPyFunc{Get3D}
Return a specific array element.

\cvdefPy{ Get3D(arr, idx0, idx1, idx2) -> scalar }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element index}
\cvarg{idx1}{Zero-based element index}
\cvarg{idx2}{Zero-based element index}
\end{description}

Return a specific array element.  Array must have dimension 3.

\cvCPyFunc{GetND}
Return a specific array element.

\cvdefPy{ GetND(arr, indices) -> scalar }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{indices}{List of zero-based element indices}
\end{description}

Return a specific array element.  The length of \texttt{indices} must be the same as the dimension of the array.

\fi % }
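As a quick sketch of element access (here \texttt{img} is assumed to be a 3-channel 8-bit image and \texttt{map32f} a single-channel array, both created elsewhere; for tight per-pixel loops the lower-level access shown under \cvCPyCross{GetRawData} is much faster):

\begin{lstlisting}
CvScalar px = cvGet2D(img, y, x);        // px.val[0..2] hold the channels
double v    = cvGetReal2D(map32f, y, x); // single-channel shortcut
\end{lstlisting}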
\ifC % {
\cvCPyFunc{GetCol(s)}
Returns array column or column span.

\cvdefC{CvMat* cvGetCol(const CvArr* arr, CvMat* submat, int col);}
\cvdefPy{GetCol(arr,col)-> submat}
\cvdefC{CvMat* cvGetCols(const CvArr* arr, CvMat* submat, int startCol, int endCol);}
\cvdefPy{GetCols(arr,startCol,endCol)-> submat}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{submat}{Pointer to the resulting sub-array header}
\cvarg{col}{Zero-based index of the selected column}
\cvarg{startCol}{Zero-based index of the starting column (inclusive) of the span}
\cvarg{endCol}{Zero-based index of the ending column (exclusive) of the span}
\end{description}

The functions \texttt{GetCol} and \texttt{GetCols} return the header corresponding to a specified column span of the input array. \texttt{GetCol} is a shortcut for \cvCPyCross{GetCols}:

\begin{lstlisting}
cvGetCol(arr, submat, col); // ~ cvGetCols(arr, submat, col, col + 1);
\end{lstlisting}

\else % }{

\cvCPyFunc{GetCol}
Returns array column.

\cvdefPy{GetCol(arr,col)-> submat}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{col}{Zero-based index of the selected column}
\cvarg{submat}{resulting single-column array}
\end{description}

The function \texttt{GetCol} returns a single column from the input array.

\cvCPyFunc{GetCols}
Returns array column span.

\cvdefPy{GetCols(arr,startCol,endCol)-> submat}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{startCol}{Zero-based index of the starting column (inclusive) of the span}
\cvarg{endCol}{Zero-based index of the ending column (exclusive) of the span}
\cvarg{submat}{resulting multi-column array}
\end{description}

The function \texttt{GetCols} returns a column span from the input array.

\fi % }

\cvCPyFunc{GetDiag}
Returns one of array diagonals.

\cvdefC{CvMat* cvGetDiag(const CvArr* arr, CvMat* submat, int diag=0);}
\cvdefPy{GetDiag(arr,diag=0)-> submat}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{submat}{Pointer to the resulting sub-array header}
\cvarg{diag}{Array diagonal. Zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main, 1 corresponds to the diagonal below the main, and so forth.}
\end{description}

The function returns the header corresponding to a specified diagonal of the input array.

\ifC
\subsection{cvGetDims, cvGetDimSize}\label{cvGetDims}

Return number of array dimensions and their sizes or the size of a particular dimension.

\cvdefC{int cvGetDims(const CvArr* arr, int* sizes=NULL);}
\cvdefC{int cvGetDimSize(const CvArr* arr, int index);}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{sizes}{Optional output vector of the array dimension sizes. For
2d arrays the number of rows (height) goes first, number of columns
(width) next.}
\cvarg{index}{Zero-based dimension index (for matrices 0 means number
of rows, 1 means number of columns; for images 0 means height, 1 means
width)}
\end{description}

The function \texttt{cvGetDims} returns the array dimensionality and the
array of dimension sizes. In the case of \texttt{IplImage} or \cross{CvMat} it always
returns 2 regardless of the number of image/matrix rows. The function
\texttt{cvGetDimSize} returns the particular dimension size (number of
elements in that dimension).
For example, the following code calculates the total number of array elements in two ways:

\begin{lstlisting}
// via cvGetDims()
int sizes[CV_MAX_DIM];
int i, total = 1;
int dims = cvGetDims(arr, sizes);
for(i = 0; i < dims; i++ )
    total *= sizes[i];

// via cvGetDims() and cvGetDimSize()
int i, total = 1;
int dims = cvGetDims(arr);
for(i = 0; i < dims; i++ )
    total *= cvGetDimSize(arr, i);
\end{lstlisting}
\fi

\ifPy
\cvCPyFunc{GetDims}
Returns list of array dimensions

\cvdefPy{GetDims(arr)-> list}

\begin{description}
\cvarg{arr}{Input array}
\end{description}

The function returns a list of array dimensions.
In the case of \texttt{IplImage} or \cross{CvMat} it always
returns a list of length 2.
\fi


\cvCPyFunc{GetElemType}
Returns type of array elements.

\cvdefC{int cvGetElemType(const CvArr* arr);}
\cvdefPy{GetElemType(arr)-> int}

\begin{description}
\cvarg{arr}{Input array}
\end{description}

The function returns the type of the array elements
as described in the \cvCPyCross{CreateMat} discussion: \texttt{CV\_8UC1} ... \texttt{CV\_64FC4}.


\cvCPyFunc{GetImage}
Returns image header for arbitrary array.

\cvdefC{IplImage* cvGetImage(const CvArr* arr, IplImage* imageHeader);}
\cvdefPy{GetImage(arr) -> iplimage}

\begin{description}
\cvarg{arr}{Input array}
\ifC
\cvarg{imageHeader}{Pointer to \texttt{IplImage} structure used as a temporary buffer}
\fi
\end{description}

The function returns the image header for the input array
that can be a matrix - \cross{CvMat}, or an image - \texttt{IplImage*}. In
the case of an image the function simply returns the input pointer. In the
case of \cross{CvMat} it initializes an \texttt{imageHeader} structure
with the parameters of the input matrix. Note that if we transform
\texttt{IplImage} to \cross{CvMat} and then transform CvMat back to
IplImage, we can get different headers if the ROI is set, and thus some
IPL functions that calculate image stride from its width and align may
fail on the resultant image.

\cvCPyFunc{GetImageCOI}
Returns the index of the channel of interest.

\cvdefC{int cvGetImageCOI(const IplImage* image);}
\cvdefPy{GetImageCOI(image)-> channel}

\begin{description}
\cvarg{image}{A pointer to the image header}
\end{description}

Returns the channel of interest of an IplImage. Returned values correspond to the \texttt{coi} in \cvCPyCross{SetImageCOI}.

\cvCPyFunc{GetImageROI}
Returns the image ROI.

\cvdefC{CvRect cvGetImageROI(const IplImage* image);}
\cvdefPy{GetImageROI(image)-> CvRect}

\begin{description}
\cvarg{image}{A pointer to the image header}
\end{description}

If there is no ROI set, \texttt{cvRect(0,0,image->width,image->height)} is returned.

\cvCPyFunc{GetMat}
Returns matrix header for arbitrary array.

\cvdefC{CvMat* cvGetMat(const CvArr* arr, CvMat* header, int* coi=NULL, int allowND=0);}
\cvdefPy{GetMat(arr, allowND=0) -> cvmat }

\begin{description}
\cvarg{arr}{Input array}
\ifC
\cvarg{header}{Pointer to \cross{CvMat} structure used as a temporary buffer}
\cvarg{coi}{Optional output parameter for storing COI}
\fi
\cvarg{allowND}{If non-zero, the function accepts multi-dimensional dense arrays (CvMatND*) and returns a 2D matrix (if CvMatND has two dimensions) or a 1D matrix (when CvMatND has 1 dimension or more than 2 dimensions).
The array must be continuous.}
\end{description}

The function returns a matrix header for the input array that can be a matrix -
\cross{CvMat}, an image - \texttt{IplImage} or a multi-dimensional dense array - \cross{CvMatND} (the latter case is allowed only if \texttt{allowND != 0}). In the case of a matrix the function simply returns the input pointer. In the case of \texttt{IplImage*} or \cross{CvMatND} it initializes the \texttt{header} structure with parameters of the current image ROI and returns the pointer to this temporary structure. Because COI is not supported by \cross{CvMat}, it is returned separately.

The function provides an easy way to handle both types of arrays - \texttt{IplImage} and \cross{CvMat} - using the same code. Reverse transform from \cross{CvMat} to \texttt{IplImage} can be done using the \cvCPyCross{GetImage} function.

Input array must have underlying data allocated or attached, otherwise the function fails.

If the input array is \texttt{IplImage} with planar data layout and COI set, the function returns the pointer to the selected plane and COI = 0. This enables per-plane processing of multi-channel images with planar data layout using OpenCV functions.

\ifC
\cvCPyFunc{GetNextSparseNode}
Returns the next sparse matrix element

\cvdefC{CvSparseNode* cvGetNextSparseNode(CvSparseMatIterator* matIterator);}

\begin{description}
\cvarg{matIterator}{Sparse array iterator}
\end{description}

The function moves the iterator to the next sparse matrix element and returns a pointer to it. In the current version there is no particular order of the elements, because they are stored in a hash table. The sample below demonstrates how to iterate through the sparse matrix:

Using \cvCPyCross{InitSparseMatIterator} and \cvCPyCross{GetNextSparseNode} to calculate the sum of a floating-point sparse array.

\begin{lstlisting}
double sum = 0;
int i, dims = cvGetDims(array);
CvSparseMatIterator mat_iterator;
CvSparseNode* node = cvInitSparseMatIterator(array, &mat_iterator);

for(; node != 0; node = cvGetNextSparseNode(&mat_iterator ))
{
    /* get pointer to the element indices */
    int* idx = CV_NODE_IDX(array, node);
    /* get value of the element (assume that the type is CV_32FC1) */
    float val = *(float*)CV_NODE_VAL(array, node);
    printf("(");
    for(i = 0; i < dims; i++ )
        printf("%4d%s", idx[i], i < dims - 1 ? "," : "): ");
    printf("%g\n", val);

    sum += val;
}

printf("\nTotal sum = %g\n", sum);
\end{lstlisting}

\fi

\cvCPyFunc{GetOptimalDFTSize}
Returns optimal DFT size for a given vector size.

\cvdefC{int cvGetOptimalDFTSize(int size0);}
\cvdefPy{GetOptimalDFTSize(size0)-> int}

\begin{description}
\cvarg{size0}{Vector size}
\end{description}

The function returns the minimum number
\texttt{N} that is greater than or equal to \texttt{size0}, such that the DFT
of a vector of size \texttt{N} can be computed fast. In the current
implementation $N = 2^p \times 3^q \times 5^r$, for some integer $p$, $q$, $r$.

The function returns a negative number if \texttt{size0} is too large
(very close to \texttt{INT\_MAX}).
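A small sketch of how the padding size is typically obtained (the concrete numbers follow from the $2^p \times 3^q \times 5^r$ rule above):

\begin{lstlisting}
int n = 1000;
int dft_n = cvGetOptimalDFTSize(n);   // 1000 is already 2^3*5^3, so dft_n = 1000
dft_n = cvGetOptimalDFTSize(n + 1);   // 1001 -> 1024
\end{lstlisting}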
\ifC
\cvCPyFunc{GetRawData}
Retrieves low-level information about the array.

\cvdefC{void cvGetRawData(const CvArr* arr, uchar** data,
                   int* step=NULL, CvSize* roiSize=NULL);}

\begin{description}
\cvarg{arr}{Array header}
\cvarg{data}{Output pointer to the whole image origin or ROI origin if ROI is set}
\cvarg{step}{Output full row length in bytes}
\cvarg{roiSize}{Output ROI size}
\end{description}

The function fills output variables with low-level information about the array data. All output parameters are optional, so some of the pointers may be set to \texttt{NULL}. If the array is \texttt{IplImage} with ROI set, the parameters of ROI are returned.

The following example shows how to get access to array elements via GetRawData. It calculates the absolute value of the elements in a single-channel, floating-point array.

\begin{lstlisting}
float* data;
int step;

CvSize size;
int x, y;

cvGetRawData(array, (uchar**)&data, &step, &size);
step /= sizeof(data[0]);

for(y = 0; y < size.height; y++, data += step )
    for(x = 0; x < size.width; x++ )
        data[x] = (float)fabs(data[x]);

\end{lstlisting}
\fi

\cvCPyFunc{GetReal1D}
Return a specific element of single-channel 1D array.

\cvdefC{
double cvGetReal1D(const CvArr* arr, int idx0);
}
\cvdefPy{GetReal1D(arr, idx0)->float}

\begin{description}
\cvarg{arr}{Input array. Must have a single channel.}
\cvarg{idx0}{The first zero-based component of the element index}
\end{description}

Returns a specific element of a single-channel array. If the array has
multiple channels, a runtime error is raised. Note that the \cvCPyCross{Get}
function can be used safely for both single-channel and multiple-channel
arrays, though it is a bit slower.

In the case of a sparse array the functions return 0 if the requested node does not exist (no new node is created by the functions).

\cvCPyFunc{GetReal2D}
Return a specific element of single-channel 2D array.

\cvdefC{
double cvGetReal2D(const CvArr* arr, int idx0, int idx1); \newline
}
\cvdefPy{GetReal2D(arr, idx0, idx1)->float}

\begin{description}
\cvarg{arr}{Input array. Must have a single channel.}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\end{description}

Returns a specific element of a single-channel array. If the array has
multiple channels, a runtime error is raised. Note that the \cvCPyCross{Get}
function can be used safely for both single-channel and multiple-channel
arrays, though it is a bit slower.

In the case of a sparse array the functions return 0 if the requested node does not exist (no new node is created by the functions).

\cvCPyFunc{GetReal3D}
Return a specific element of single-channel array.

\cvdefC{
double cvGetReal3D(const CvArr* arr, int idx0, int idx1, int idx2); \newline
}
\cvdefPy{GetReal3D(arr, idx0, idx1, idx2)->float}

\begin{description}
\cvarg{arr}{Input array. Must have a single channel.}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\cvarg{idx2}{The third zero-based component of the element index}
\end{description}

Returns a specific element of a single-channel array. If the array has
multiple channels, a runtime error is raised. Note that the \cvCPyCross{Get}
function can be used safely for both single-channel and multiple-channel
arrays, though it is a bit slower.

In the case of a sparse array the functions return 0 if the requested node does not exist (no new node is created by the functions).

\cvCPyFunc{GetRealND}
Return a specific element of single-channel array.

\cvdefC{
double cvGetRealND(const CvArr* arr, int* idx);
}
\cvdefPy{GetRealND(arr, idx)->float}

\begin{description}
\cvarg{arr}{Input array. Must have a single channel.}
\cvarg{idx}{Array of the element indices}
\end{description}

Returns a specific element of a single-channel array. If the array has
multiple channels, a runtime error is raised. Note that the \cvCPyCross{Get}
function can be used safely for both single-channel and multiple-channel
arrays, though it is a bit slower.

In the case of a sparse array the functions return 0 if the requested node does not exist (no new node is created by the functions).


\ifC %{
\cvCPyFunc{GetRow(s)}
Returns array row or row span.

\cvdefC{CvMat* cvGetRow(const CvArr* arr, CvMat* submat, int row);}
\cvdefPy{GetRow(arr,row)-> submat}
\cvdefC{CvMat* cvGetRows(const CvArr* arr, CvMat* submat, int startRow, int endRow, int deltaRow=1);}
\cvdefPy{GetRows(arr,startRow,endRow,deltaRow=1)-> submat}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{submat}{Pointer to the resulting sub-array header}
\cvarg{row}{Zero-based index of the selected row}
\cvarg{startRow}{Zero-based index of the starting row (inclusive) of the span}
\cvarg{endRow}{Zero-based index of the ending row (exclusive) of the span}
\cvarg{deltaRow}{Index step in the row span. That is, the function extracts every \texttt{deltaRow}-th row from \texttt{startRow} and up to (but not including) \texttt{endRow}.}
\end{description}

The functions return the header corresponding to a specified row/row span of the input array.
Note that \\texttt{GetRow} is a shortcut for \\cvCPyCross{GetRows}:\n\n\\begin{lstlisting}\ncvGetRow(arr, submat, row ) ~ cvGetRows(arr, submat, row, row + 1, 1);\n\\end{lstlisting}\n\n\\else % }{\n\n\\cvCPyFunc{GetRow}\nReturns array row.\n\n\\cvdefPy{GetRow(arr,row)-> submat}\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{row}{Zero-based index of the selected row}\n\\cvarg{submat}{resulting single-row array}\n\\end{description}\n\nThe function \\texttt{GetRow} returns a single row from the input array.\n\n\\cvCPyFunc{GetRows}\nReturns array row span.\n\n\\cvdefPy{GetRows(arr,startRow,endRow,deltaRow=1)-> submat}\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{startRow}{Zero-based index of the starting row (inclusive) of the span}\n\\cvarg{endRow}{Zero-based index of the ending row (exclusive) of the span}\n\\cvarg{deltaRow}{Index step in the row span.}\n\\cvarg{submat}{resulting multi-row array}\n\\end{description}\n\nThe function \\texttt{GetRows} returns a row span from the input array.\n\n\\fi % }\n\n\\cvCPyFunc{GetSize}\nReturns size of matrix or image ROI.\n\n\\cvdefC{CvSize cvGetSize(const CvArr* arr);}\n\\cvdefPy{GetSize(arr)-> CvSize}\n\n\\begin{description}\n\\cvarg{arr}{array header}\n\\end{description}\n\nThe function returns number of rows (CvSize::height) and number of columns (CvSize::width) of the input matrix or image. In the case of image the size of ROI is returned.\n\n\n\\cvCPyFunc{GetSubRect}\nReturns matrix header corresponding to the rectangular sub-array of input image or matrix.\n\n\\cvdefC{CvMat* cvGetSubRect(const CvArr* arr, CvMat* submat, CvRect rect);}\n\\cvdefPy{GetSubRect(arr, rect) -> cvmat}\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\ifC\n\\cvarg{submat}{Pointer to the resultant sub-array header}\n\\fi\n\\cvarg{rect}{Zero-based coordinates of the rectangle of interest}\n\\end{description}\n\nThe function returns header, corresponding to\na specified rectangle of the input array. In other words, it allows\nthe user to treat a rectangular part of input array as a stand-alone\narray. ROI is taken into account by the function so the sub-array of\nROI is actually extracted.\n\n\\cvCPyFunc{InRange}\nChecks that array elements lie between the elements of two other arrays.\n\n\\cvdefC{void cvInRange(const CvArr* src, const CvArr* lower, const CvArr* upper, CvArr* dst);}\n\\cvdefPy{InRange(src,lower,upper,dst)-> None}\n\n\\begin{description}\n\\cvarg{src}{The first source array}\n\\cvarg{lower}{The inclusive lower boundary array}\n\\cvarg{upper}{The exclusive upper boundary array}\n\\cvarg{dst}{The destination array, must have 8u or 8s type}\n\\end{description}\n\n\nThe function does the range check for every element of the input array:\n\n\\[\n\\texttt{dst}(I)=\\texttt{lower}(I)_0 <= \\texttt{src}(I)_0 < \\texttt{upper}(I)_0\n\\]\n\nFor single-channel arrays,\n\n\\[\n\\texttt{dst}(I)=\n\\texttt{lower}(I)_0 <= \\texttt{src}(I)_0 < \\texttt{upper}(I)_0 \\land\n\\texttt{lower}(I)_1 <= \\texttt{src}(I)_1 < \\texttt{upper}(I)_1\n\\]\n\nFor two-channel arrays and so forth,\n\ndst(I) is set to 0xff (all \\texttt{1}-bits) if src(I) is within the range and 0 otherwise. 
\cvCPyFunc{InRangeS}
Checks that array elements lie between two scalars.

\cvdefC{void cvInRangeS(const CvArr* src, CvScalar lower, CvScalar upper, CvArr* dst);}
\cvdefPy{InRangeS(src,lower,upper,dst)-> None}

\begin{description}
\cvarg{src}{The first source array}
\cvarg{lower}{The inclusive lower boundary}
\cvarg{upper}{The exclusive upper boundary}
\cvarg{dst}{The destination array, must have 8u or 8s type}
\end{description}

The function does the range check for every element of the input array. For single-channel arrays:

\[
\texttt{dst}(I)=\texttt{lower}_0 <= \texttt{src}(I)_0 < \texttt{upper}_0
\]

For two-channel arrays, and so forth:

\[
\texttt{dst}(I)=
\texttt{lower}_0 <= \texttt{src}(I)_0 < \texttt{upper}_0 \land
\texttt{lower}_1 <= \texttt{src}(I)_1 < \texttt{upper}_1
\]

\texttt{dst}(I) is set to 0xff (all \texttt{1}-bits) if \texttt{src}(I) is within the range and 0 otherwise. All the arrays must have the same size (or ROI size).

\ifC
\cvCPyFunc{IncRefData}
Increments array data reference counter.

\cvdefC{int cvIncRefData(CvArr* arr);}

\begin{description}
\cvarg{arr}{Array header}
\end{description}

The function increments the \cross{CvMat} or
\cross{CvMatND} data reference counter and returns the new counter value
if the reference counter pointer is not NULL; otherwise it returns zero.

\cvCPyFunc{InitImageHeader}
Initializes an image header that was previously allocated.

\cvdefC{IplImage* cvInitImageHeader(\par IplImage* image,\par CvSize size,\par int depth,\par int channels,\par int origin=0,\par int align=4);}

\begin{description}
\cvarg{image}{Image header to initialize}
\cvarg{size}{Image width and height}
\cvarg{depth}{Image depth (see \cvCPyCross{CreateImage})}
\cvarg{channels}{Number of channels (see \cvCPyCross{CreateImage})}
\cvarg{origin}{Top-left \texttt{IPL\_ORIGIN\_TL} or bottom-left \texttt{IPL\_ORIGIN\_BL}}
\cvarg{align}{Alignment for image rows, typically 4 or 8 bytes}
\end{description}

The returned \texttt{IplImage*} points to the initialized header.

\cvCPyFunc{InitMatHeader}
Initializes a pre-allocated matrix header.

\cvdefC{
CvMat* cvInitMatHeader(\par CvMat* mat,\par int rows,\par int cols,\par int type, \par void* data=NULL,\par int step=CV\_AUTOSTEP);
}

\begin{description}
\cvarg{mat}{A pointer to the matrix header to be initialized}
\cvarg{rows}{Number of rows in the matrix}
\cvarg{cols}{Number of columns in the matrix}
\cvarg{type}{Type of the matrix elements, see \cvCPyCross{CreateMat}.}
\cvarg{data}{Optional: data pointer assigned to the matrix header}
\cvarg{step}{Optional: full row width in bytes of the assigned data. By default, the minimal possible step is used which assumes there are no gaps between subsequent rows of the matrix.}
\end{description}

This function is often used to process raw data with OpenCV matrix functions.
For example, the following code computes the matrix product of two matrices, stored as ordinary arrays:

\begin{lstlisting}
double a[] = { 1, 2, 3, 4,
               5, 6, 7, 8,
               9, 10, 11, 12 };

double b[] = { 1, 5, 9,
               2, 6, 10,
               3, 7, 11,
               4, 8, 12 };

double c[9];
CvMat Ma, Mb, Mc;

cvInitMatHeader(&Ma, 3, 4, CV_64FC1, a);
cvInitMatHeader(&Mb, 4, 3, CV_64FC1, b);
cvInitMatHeader(&Mc, 3, 3, CV_64FC1, c);

cvMatMulAdd(&Ma, &Mb, 0, &Mc);
// the c array now contains the product of a (3x4) and b (4x3)

\end{lstlisting}

\cvCPyFunc{InitMatNDHeader}
Initializes a pre-allocated multi-dimensional array header.

\cvdefC{CvMatND* cvInitMatNDHeader(\par CvMatND* mat,\par int dims,\par const int* sizes,\par int type,\par void* data=NULL);}

\begin{description}
\cvarg{mat}{A pointer to the array header to be initialized}
\cvarg{dims}{The number of array dimensions}
\cvarg{sizes}{An array of dimension sizes}
\cvarg{type}{Type of array elements, see \cvCPyCross{CreateMat}}
\cvarg{data}{Optional data pointer assigned to the matrix header}
\end{description}

\cvCPyFunc{InitSparseMatIterator}
Initializes sparse array elements iterator.

\cvdefC{CvSparseNode* cvInitSparseMatIterator(const CvSparseMat* mat,
                                       CvSparseMatIterator* matIterator);}

\begin{description}
\cvarg{mat}{Input array}
\cvarg{matIterator}{Initialized iterator}
\end{description}

The function initializes an iterator of sparse array elements and returns a pointer to the first element, or NULL if the array is empty.

\fi

\cvCPyFunc{InvSqrt}
Calculates the inverse square root.

\cvdefC{float cvInvSqrt(float value);}
\cvdefPy{InvSqrt(value)-> float}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}

The function calculates the inverse square root of the argument, and normally it is faster than \texttt{1./sqrt(value)}. If the argument is zero or negative, the result is not determined. Special values ($\pm \infty $ , NaN) are not handled.

\cvCPyFunc{Inv}

Synonym for \cross{Invert}

\cvCPyFunc{Invert}
Finds the inverse or pseudo-inverse of a matrix.

\cvdefC{double cvInvert(const CvArr* src, CvArr* dst, int method=CV\_LU);}
\cvdefPy{Invert(src,dst,method=CV\_LU)-> double}

\begin{description}
\cvarg{src}{The source matrix}
\cvarg{dst}{The destination matrix}
\cvarg{method}{Inversion method
\begin{description}
 \cvarg{CV\_LU}{Gaussian elimination with optimal pivot element chosen}
 \cvarg{CV\_SVD}{Singular value decomposition (SVD) method}
 \cvarg{CV\_SVD\_SYM}{SVD method for a symmetric positively-defined matrix}
\end{description}}
\end{description}

The function inverts the matrix \texttt{src} and stores the result in \texttt{dst}.

In the case of the \texttt{LU} method, the function returns the determinant of \texttt{src} (\texttt{src} must be square). If it is 0, the matrix is not inverted and \texttt{dst} is filled with zeros.

In the case of the \texttt{SVD} methods, the function returns the inverse condition number of \texttt{src} (the ratio of the smallest singular value to the largest singular value), or 0 if \texttt{src} is all zeros. The SVD methods calculate a pseudo-inverse matrix if \texttt{src} is singular.
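A minimal sketch (the matrices \texttt{A} and \texttt{Ainv} are assumed to be allocated elsewhere with matching sizes):

\begin{lstlisting}
// invert (or pseudo-invert) A into Ainv
double inv_cond = cvInvert(A, Ainv, CV_SVD);
if( inv_cond == 0 )
    printf("A is all zeros\n");
\end{lstlisting}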
\cvCPyFunc{IsInf}
Determines if the argument is Infinity.

\cvdefC{int cvIsInf(double value);}
\cvdefPy{IsInf(value)-> int}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}

The function returns 1 if the argument is $\pm \infty$ (as defined by the IEEE 754 standard), 0 otherwise.

\cvCPyFunc{IsNaN}
Determines if the argument is Not A Number.

\cvdefC{int cvIsNaN(double value);}
\cvdefPy{IsNaN(value)-> int}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}

The function returns 1 if the argument is Not A Number (as defined by the IEEE 754 standard), 0 otherwise.


\cvCPyFunc{LUT}
Performs a look-up table transform of an array.

\cvdefC{void cvLUT(const CvArr* src, CvArr* dst, const CvArr* lut);}
\cvdefPy{LUT(src,dst,lut)-> None}

\begin{description}
\cvarg{src}{Source array of 8-bit elements}
\cvarg{dst}{Destination array of a given depth and of the same number of channels as the source array}
\cvarg{lut}{Look-up table of 256 elements; should have the same depth as the destination array. In the case of multi-channel source and destination arrays, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as the source/destination array.}
\end{description}

The function fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of \texttt{src} as follows:

\[
\texttt{dst}_i \leftarrow \texttt{lut}_{\texttt{src}_i + d}
\]

where

\[
d = \fork
{0}{if \texttt{src} has depth \texttt{CV\_8U}}
{128}{if \texttt{src} has depth \texttt{CV\_8S}}
\]

\cvCPyFunc{Log}
Calculates the natural logarithm of every array element's absolute value.

\cvdefC{void cvLog(const CvArr* src, CvArr* dst);}
\cvdefPy{Log(src,dst)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array; it should have \texttt{double} type or the same type as the source}
\end{description}

The function calculates the natural logarithm of the absolute value of every element of the input array:

\[
\texttt{dst}(I) = \fork
{\log{|\texttt{src}(I)|}}{if $\texttt{src}(I) \ne 0$}
{\texttt{C}}{otherwise}
\]

where \texttt{C} is a large negative number (about -700 in the current implementation).

\cvCPyFunc{Mahalanobis}
Calculates the Mahalanobis distance between two vectors.

\cvdefC{double cvMahalanobis(\par const CvArr* vec1,\par const CvArr* vec2,\par CvArr* mat);}
\cvdefPy{Mahalonobis(vec1,vec2,mat)-> None}

\begin{description}
\cvarg{vec1}{The first 1D source vector}
\cvarg{vec2}{The second 1D source vector}
\cvarg{mat}{The inverse covariance matrix}
\end{description}


The function calculates and returns the weighted distance between two vectors:

\[
d(\texttt{vec1},\texttt{vec2})=\sqrt{\sum_{i,j}{\texttt{mat}(i,j)\cdot(\texttt{vec1}(i)-\texttt{vec2}(i))\cdot(\texttt{vec1}(j)-\texttt{vec2}(j))}}
\]

The covariance matrix may be calculated using the \cvCPyCross{CalcCovarMatrix} function and then inverted using the \cvCPyCross{Invert} function (the CV\_SVD method is preferred because the matrix might be singular).
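
\ifC
A minimal sketch of this pipeline (the dimensions are placeholders, and \texttt{samples} is assumed to have been filled with data elsewhere):

\begin{lstlisting}
enum { N = 100, D = 3 };
CvMat* samples = cvCreateMat(N, D, CV_32FC1); /* one observation per row */
/* ... fill samples ... */

CvMat* covar  = cvCreateMat(D, D, CV_32FC1);
CvMat* avg    = cvCreateMat(1, D, CV_32FC1);
CvMat* icovar = cvCreateMat(D, D, CV_32FC1);

cvCalcCovarMatrix((const CvArr**)&samples, 1, covar, avg,
                  CV_COVAR_NORMAL | CV_COVAR_SCALE | CV_COVAR_ROWS);
cvInvert(covar, icovar, CV_SVD);  /* covariance may be near-singular */

/* distance between the first two observations */
CvMat row1, row2;
cvGetRow(samples, &row1, 0);
cvGetRow(samples, &row2, 1);
double dist = cvMahalanobis(&row1, &row2, icovar);
\end{lstlisting}
\fi
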
\ifC
\cvCPyFunc{Mat}
Initializes matrix header (lightweight variant).

\cvdefC{CvMat cvMat(\par int rows,\par int cols,\par int type,\par void* data=NULL);}

\begin{description}
\cvarg{rows}{Number of rows in the matrix}
\cvarg{cols}{Number of columns in the matrix}
\cvarg{type}{Type of the matrix elements - see \cvCPyCross{CreateMat}}
\cvarg{data}{Optional data pointer assigned to the matrix header}
\end{description}

Initializes a matrix header and assigns data to it. The matrix is filled \textit{row}-wise (the first \texttt{cols} elements of data form the first row of the matrix, etc.)

This function is a fast inline substitution for \cvCPyCross{InitMatHeader}. Namely, it is equivalent to:

\begin{lstlisting}
CvMat mat;
cvInitMatHeader(&mat, rows, cols, type, data, CV_AUTOSTEP);
\end{lstlisting}
\fi

\cvCPyFunc{Max}
Finds per-element maximum of two arrays.

\cvdefC{void cvMax(const CvArr* src1, const CvArr* src2, CvArr* dst);}
\cvdefPy{Max(src1,src2,dst)-> None}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\cvarg{dst}{The destination array}
\end{description}

The function calculates the per-element maximum of two arrays:

\[
\texttt{dst}(I)=\max(\texttt{src1}(I), \texttt{src2}(I))
\]

All the arrays must have a single channel, the same data type and the same size (or ROI size).


\cvCPyFunc{MaxS}
Finds per-element maximum of array and scalar.

\cvdefC{void cvMaxS(const CvArr* src, double value, CvArr* dst);}
\cvdefPy{MaxS(src,value,dst)-> None}

\begin{description}
\cvarg{src}{The first source array}
\cvarg{value}{The scalar value}
\cvarg{dst}{The destination array}
\end{description}

The function calculates the per-element maximum of an array and a scalar:

\[
\texttt{dst}(I)=\max(\texttt{src}(I), \texttt{value})
\]

All the arrays must have a single channel, the same data type and the same size (or ROI size).


\cvCPyFunc{Merge}
Composes a multi-channel array from several single-channel arrays or inserts a single channel into the array.

\cvdefC{void cvMerge(const CvArr* src0, const CvArr* src1,
              const CvArr* src2, const CvArr* src3, CvArr* dst);}
\ifC
\begin{lstlisting}
#define cvCvtPlaneToPix cvMerge
\end{lstlisting}
\fi
\cvdefPy{Merge(src0,src1,src2,src3,dst)-> None}

\begin{description}
\cvarg{src0}{Input channel 0}
\cvarg{src1}{Input channel 1}
\cvarg{src2}{Input channel 2}
\cvarg{src3}{Input channel 3}
\cvarg{dst}{Destination array}
\end{description}

The function is the opposite of \cvCPyCross{Split}. If the destination array has N channels and the first N input channels are all non-NULL, they are all copied to the destination array; if only a single source channel of the first N is non-NULL, that particular channel is copied into the destination array; otherwise an error is raised. The rest of the source channels (beyond the first N) must always be NULL. For \texttt{IplImage}, \cvCPyCross{Copy} with COI set can also be used to insert a single channel into the image.
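
\ifC
A minimal sketch (the image size is a placeholder) that recombines three 8-bit planes into one 3-channel image:

\begin{lstlisting}
CvSize sz = cvSize(320, 240);
IplImage* b   = cvCreateImage(sz, IPL_DEPTH_8U, 1);
IplImage* g   = cvCreateImage(sz, IPL_DEPTH_8U, 1);
IplImage* r   = cvCreateImage(sz, IPL_DEPTH_8U, 1);
IplImage* bgr = cvCreateImage(sz, IPL_DEPTH_8U, 3);
/* ... fill the b, g, r planes ... */
cvMerge(b, g, r, NULL, bgr);  /* the fourth input channel stays NULL */
\end{lstlisting}
\fi
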
For IplImage \\cvCPyCross{Copy} with COI set can be also used to insert a single channel into the image.\n\n\\cvCPyFunc{Min}\nFinds per-element minimum of two arrays.\n\n\\cvdefC{void cvMin(const CvArr* src1, const CvArr* src2, CvArr* dst);}\n\\cvdefPy{Min(src1,src2,dst)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\end{description}\n\n\nThe function calculates per-element minimum of two arrays:\n\n\\[\n\\texttt{dst}(I)=\\min(\\texttt{src1}(I),\\texttt{src2}(I))\n\\]\n\nAll the arrays must have a single channel, the same data type and the same size (or ROI size).\n\n\n\\cvCPyFunc{MinMaxLoc}\nFinds global minimum and maximum in array or subarray.\n\n\\cvdefC{void cvMinMaxLoc(const CvArr* arr, double* minVal, double* maxVal,\n                  CvPoint* minLoc=NULL, CvPoint* maxLoc=NULL, const CvArr* mask=NULL);}\n\\cvdefPy{MinMaxLoc(arr,mask=NULL)-> (minVal,maxVal,minLoc,maxLoc)}\n\n\\begin{description}\n\\cvarg{arr}{The source array, single-channel or multi-channel with COI set}\n\\cvarg{minVal}{Pointer to returned minimum value}\n\\cvarg{maxVal}{Pointer to returned maximum value}\n\\cvarg{minLoc}{Pointer to returned minimum location}\n\\cvarg{maxLoc}{Pointer to returned maximum location}\n\\cvarg{mask}{The optional mask used to select a subarray}\n\\end{description}\n\nThe function finds minimum and maximum element values\nand their positions. The extremums are searched across the whole array,\nselected \\texttt{ROI} (in the case of \\texttt{IplImage}) or, if \\texttt{mask}\nis not \\texttt{NULL}, in the specified array region. If the array has\nmore than one channel, it must be \\texttt{IplImage} with \\texttt{COI}\nset. In the case of multi-dimensional arrays, \\texttt{minLoc->x} and \\texttt{maxLoc->x}\nwill contain raw (linear) positions of the extremums.\n\n\\cvCPyFunc{MinS}\nFinds per-element minimum of an array and a scalar.\n\n\\cvdefC{void cvMinS(const CvArr* src, double value, CvArr* dst);}\n\\cvdefPy{MinS(src,value,dst)-> None}\n\n\\begin{description}\n\\cvarg{src}{The first source array}\n\\cvarg{value}{The scalar value}\n\\cvarg{dst}{The destination array}\n\\end{description}\n\nThe function calculates minimum of an array and a scalar:\n\n\\[\n\\texttt{dst}(I)=\\min(\\texttt{src}(I), \\texttt{value})\n\\]\n\nAll the arrays must have a single channel, the same data type and the same size (or ROI size).\n\n\n\\subsection{Mirror}\nSynonym for \\cross{Flip}.\n\n\\cvCPyFunc{MixChannels}\nCopies several channels from input arrays to certain channels of output arrays\n\n\\cvdefC{void cvMixChannels(const CvArr** src, int srcCount, \\par\n                    CvArr** dst, int dstCount, \\par\n                    const int* fromTo, int pairCount);}\n\\cvdefPy{MixChannels(src, dst, fromTo) -> None}\n\n\\begin{description}\n\\cvarg{src}{Input arrays}\n\\cvC{\\cvarg{srcCount}{The number of input arrays.}}\n\\cvarg{dst}{Destination arrays}\n\\cvC{\\cvarg{dstCount}{The number of output arrays.}}\n\\cvarg{fromTo}{The array of pairs of indices of the planes\ncopied. 
\\cvC{\\texttt{fromTo[k*2]} is the 0-based index of the input channel in \\texttt{src} and\n\\texttt{fromTo[k*2+1]} is the index of the output channel in \\texttt{dst}.\nHere the continuous channel numbering is used, that is, the first input image channels are indexed\nfrom \\texttt{0} to \\texttt{channels(src[0])-1}, the second input image channels are indexed from\n\\texttt{channels(src[0])} to \\texttt{channels(src[0]) + channels(src[1])-1} etc., and the same\nscheme is used for the output image channels.\nAs a special case, when \\texttt{fromTo[k*2]} is negative,\nthe corresponding output channel is filled with zero.}\\cvPy{Each pair \\texttt{fromTo[k]=(i,j)}\nmeans that i-th plane from \\texttt{src} is copied to the j-th plane in \\texttt{dst}, where continuous\nplane numbering is used both in the input array list and the output array list.\nAs a special case, when the \\texttt{fromTo[k][0]} is negative, the corresponding output plane \\texttt{j}\n is filled with zero.}}\n\\end{description}\n\nThe function is a generalized form of \\cvCPyCross{cvSplit} and \\cvCPyCross{Merge}\nand some forms of \\cross{CvtColor}. It can be used to change the order of the\nplanes, add/remove alpha channel, extract or insert a single plane or\nmultiple planes etc.\n\nAs an example, this code splits a 4-channel RGBA image into a 3-channel\nBGR (i.e. with R and B swapped) and separate alpha channel image:\n\n\\ifPy\n\\begin{lstlisting}\n        rgba = cv.CreateMat(100, 100, cv.CV_8UC4)\n        bgr =  cv.CreateMat(100, 100, cv.CV_8UC3)\n        alpha = cv.CreateMat(100, 100, cv.CV_8UC1)\n        cv.Set(rgba, (1,2,3,4))\n        cv.MixChannels([rgba], [bgr, alpha], [\n           (0, 2),    # rgba[0] -> bgr[2]\n           (1, 1),    # rgba[1] -> bgr[1]\n           (2, 0),    # rgba[2] -> bgr[0]\n           (3, 3)     # rgba[3] -> alpha[0]\n        ])\n\\end{lstlisting}\n\\fi\n\n\\ifC\n\\begin{lstlisting}\n    CvMat* rgba = cvCreateMat(100, 100, CV_8UC4);\n    CvMat* bgr = cvCreateMat(rgba->rows, rgba->cols, CV_8UC3);\n    CvMat* alpha = cvCreateMat(rgba->rows, rgba->cols, CV_8UC1);\n    cvSet(rgba, cvScalar(1,2,3,4));\n\n    CvArr* out[] = { bgr, alpha };\n    int from_to[] = { 0,2,  1,1,  2,0,  3,3 };\n    cvMixChannels(&bgra, 1, out, 2, from_to, 4);\n\\end{lstlisting}\n\\fi\n\n\\subsection{MulAddS}\n\nSynonym for \\cross{ScaleAdd}.\n\n\\cvCPyFunc{Mul}\nCalculates the per-element product of two arrays.\n\n\\cvdefC{void cvMul(const CvArr* src1, const CvArr* src2, CvArr* dst, double scale=1);}\n\\cvdefPy{Mul(src1,src2,dst,scale)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\cvarg{scale}{Optional scale factor}\n\\end{description}\n\n\nThe function calculates the per-element product of two arrays:\n\n\\[\n\\texttt{dst}(I)=\\texttt{scale} \\cdot \\texttt{src1}(I) \\cdot \\texttt{src2}(I)\n\\]\n\nAll the arrays must have the same type and the same size (or ROI size).\nFor types that have limited range this operation is saturating.\n\n\\cvCPyFunc{MulSpectrums}\nPerforms per-element multiplication of two Fourier spectrums.\n\n\\cvdefC{void cvMulSpectrums(\\par const CvArr* src1,\\par const CvArr* src2,\\par CvArr* dst,\\par int flags);}\n\\cvdefPy{MulSpectrums(src1,src2,dst,flags)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array of the same type and the same size as the source arrays}\n\\cvarg{flags}{A combination 
of the following values:
\begin{description}
\cvarg{CV\_DXT\_ROWS}{treats each row of the arrays as a separate spectrum (see \cvCPyCross{DFT} parameters description).}
\cvarg{CV\_DXT\_MUL\_CONJ}{conjugates the second source array before the multiplication.}
\end{description}}

\end{description}

The function performs per-element multiplication of the two CCS-packed or complex matrices that are results of a real or complex Fourier transform.

The function, together with \cvCPyCross{DFT}, may be used to calculate the convolution of two arrays rapidly.


\cvCPyFunc{MulTransposed}
Calculates the product of an array and a transposed array.

\cvdefC{void cvMulTransposed(const CvArr* src, CvArr* dst, int order, const CvArr* delta=NULL, double scale=1.0);}
\cvdefPy{MulTransposed(src,dst,order,delta=NULL,scale=1.0)-> None}

\begin{description}
\cvarg{src}{The source matrix}
\cvarg{dst}{The destination matrix. Must be \texttt{CV\_32F} or \texttt{CV\_64F}.}
\cvarg{order}{Order of multipliers}
\cvarg{delta}{An optional array, subtracted from \texttt{src} before multiplication}
\cvarg{scale}{An optional scale factor}
\end{description}

The function calculates the product of \texttt{src} and its transpose:

\[
\texttt{dst}=\texttt{scale} (\texttt{src}-\texttt{delta}) (\texttt{src}-\texttt{delta})^T
\]

if $\texttt{order}=0$, and

\[
\texttt{dst}=\texttt{scale} (\texttt{src}-\texttt{delta})^T (\texttt{src}-\texttt{delta})
\]

otherwise.

\cvCPyFunc{Norm}
Calculates absolute array norm, absolute difference norm, or relative difference norm.

\cvdefC{double cvNorm(const CvArr* arr1, const CvArr* arr2=NULL, int normType=CV\_L2, const CvArr* mask=NULL);}
\cvdefPy{Norm(arr1,arr2,normType=CV\_L2,mask=NULL)-> double}

\begin{description}
\cvarg{arr1}{The first source image}
\cvarg{arr2}{The second source image. 
If it is NULL, the absolute norm of \texttt{arr1} is calculated, otherwise the absolute or relative norm of \texttt{arr1}-\texttt{arr2} is calculated.}
\cvarg{normType}{Type of norm, see the discussion}
\cvarg{mask}{The optional operation mask}
\end{description}

The function calculates the absolute norm of \texttt{arr1} if \texttt{arr2} is NULL:
\[
norm = \forkthree
{||\texttt{arr1}||_C    = \max_I |\texttt{arr1}(I)|}{if $\texttt{normType} = \texttt{CV\_C}$}
{||\texttt{arr1}||_{L1} = \sum_I |\texttt{arr1}(I)|}{if $\texttt{normType} = \texttt{CV\_L1}$}
{||\texttt{arr1}||_{L2} = \sqrt{\sum_I \texttt{arr1}(I)^2}}{if $\texttt{normType} = \texttt{CV\_L2}$}
\]

or the absolute difference norm if \texttt{arr2} is not NULL:
\[
norm = \forkthree
{||\texttt{arr1}-\texttt{arr2}||_C    = \max_I |\texttt{arr1}(I) - \texttt{arr2}(I)|}{if $\texttt{normType} = \texttt{CV\_C}$}
{||\texttt{arr1}-\texttt{arr2}||_{L1} = \sum_I |\texttt{arr1}(I) - \texttt{arr2}(I)|}{if $\texttt{normType} = \texttt{CV\_L1}$}
{||\texttt{arr1}-\texttt{arr2}||_{L2} = \sqrt{\sum_I (\texttt{arr1}(I) - \texttt{arr2}(I))^2}}{if $\texttt{normType} = \texttt{CV\_L2}$}
\]

or the relative difference norm if \texttt{arr2} is not NULL and \texttt{(normType \& CV\_RELATIVE) != 0}:

\[
norm = \forkthree
{\frac{||\texttt{arr1}-\texttt{arr2}||_C    }{||\texttt{arr2}||_C   }}{if $\texttt{normType} = \texttt{CV\_RELATIVE\_C}$}
{\frac{||\texttt{arr1}-\texttt{arr2}||_{L1} }{||\texttt{arr2}||_{L1}}}{if $\texttt{normType} = \texttt{CV\_RELATIVE\_L1}$}
{\frac{||\texttt{arr1}-\texttt{arr2}||_{L2} }{||\texttt{arr2}||_{L2}}}{if $\texttt{normType} = \texttt{CV\_RELATIVE\_L2}$}
\]

The function returns the calculated norm. A multiple-channel array is treated as a single-channel array; that is, the results for all channels are combined.
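
\ifC
A minimal sketch (the two arrays are placeholders, assumed filled elsewhere) that measures how much a processed image deviates from a reference:

\begin{lstlisting}
CvMat* reference = cvCreateMat(240, 320, CV_32FC1);
CvMat* processed = cvCreateMat(240, 320, CV_32FC1);
/* ... fill both arrays ... */
double abs_err = cvNorm(processed, reference, CV_L2, NULL);
double rel_err = cvNorm(processed, reference, CV_RELATIVE_L2, NULL);
\end{lstlisting}
\fi
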
\cvCPyFunc{Not}
Performs per-element bit-wise inversion of array elements.

\cvdefC{void cvNot(const CvArr* src, CvArr* dst);}
\cvdefPy{Not(src,dst)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array}
\end{description}


The function inverts every bit of every array element:

\begin{lstlisting}
dst(I)=~src(I)
\end{lstlisting}


\cvCPyFunc{Or}
Calculates per-element bit-wise disjunction of two arrays.

\cvdefC{void cvOr(const CvArr* src1, const CvArr* src2, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{Or(src1,src2,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}


The function calculates the per-element bit-wise disjunction of two arrays:

\begin{lstlisting}
dst(I)=src1(I)|src2(I)
\end{lstlisting}

In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size.

\cvCPyFunc{OrS}
Calculates a per-element bit-wise disjunction of an array and a scalar.

\cvdefC{void cvOrS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{OrS(src,value,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{value}{Scalar to use in the operation}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}


The function calculates the per-element bit-wise disjunction of an array and a scalar:

\begin{lstlisting}
dst(I)=src(I)|value if mask(I)!=0
\end{lstlisting}

Prior to the actual operation, the scalar is converted to the same type as that of the array(s). In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size.


\cvCPyFunc{PerspectiveTransform}
Performs perspective matrix transformation of a vector array.

\cvdefC{void cvPerspectiveTransform(const CvArr* src, CvArr* dst, const CvMat* mat);}
\cvdefPy{PerspectiveTransform(src,dst,mat)-> None}

\begin{description}
\cvarg{src}{The source three-channel floating-point array}
\cvarg{dst}{The destination three-channel floating-point array}
\cvarg{mat}{$3\times 3$ or $4 \times 4$ transformation matrix}
\end{description}


The function transforms every element of \texttt{src} (by treating it as a 2D or 3D vector) in the following way:

\[ (x, y, z) \rightarrow (x'/w, y'/w, z'/w) \]

where

\[
(x', y', z', w') = \texttt{mat} \cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\]

and
\[ w = \fork{w'}{if $w' \ne 0$}{\infty}{otherwise} \]

\cvCPyFunc{PolarToCart}
Calculates Cartesian coordinates of 2d vectors represented in polar form.

\cvdefC{void cvPolarToCart(\par const CvArr* magnitude,\par const CvArr* angle,\par CvArr* x,\par CvArr* y,\par int angleInDegrees=0);}
\cvdefPy{PolarToCart(magnitude,angle,x,y,angleInDegrees=0)-> None}

\begin{description}
\cvarg{magnitude}{The array of magnitudes. 
If it is NULL, the magnitudes are assumed to be all 1's.}
\cvarg{angle}{The array of angles, whether in radians or degrees}
\cvarg{x}{The destination array of x-coordinates, may be set to NULL if it is not needed}
\cvarg{y}{The destination array of y-coordinates, may be set to NULL if it is not needed}
\cvarg{angleInDegrees}{The flag indicating whether the angles are measured in radians, which is the default mode, or in degrees}
\end{description}

The function calculates either the x-coordinate, the y-coordinate or both of every vector \texttt{magnitude(I)*exp(angle(I)*j), j=sqrt(-1)}:

\begin{lstlisting}
x(I)=magnitude(I)*cos(angle(I)),
y(I)=magnitude(I)*sin(angle(I))
\end{lstlisting}


\cvCPyFunc{Pow}
Raises every array element to a power.

\cvdefC{void cvPow(\par const CvArr* src,\par CvArr* dst,\par double power);}
\cvdefPy{Pow(src,dst,power)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array, should be the same type as the source}
\cvarg{power}{The exponent of power}
\end{description}


The function raises every element of the input array to the power \texttt{p}:

\[
\texttt{dst}(I) = \fork
{\texttt{src}(I)^p}{if \texttt{p} is integer}
{|\texttt{src}(I)|^p}{otherwise}
\]

That is, for a non-integer power exponent the absolute values of input array elements are used. However, it is possible to get true values for negative inputs using some extra operations, as the following example, computing the cube root of array elements, shows:

\ifC
\begin{lstlisting}
CvSize size = cvGetSize(src);
CvMat* mask = cvCreateMat(size.height, size.width, CV_8UC1);
cvCmpS(src, 0, mask, CV_CMP_LT); /* find negative elements */
cvPow(src, dst, 1./3);
cvSubRS(dst, cvScalarAll(0), dst, mask); /* negate the results of negative inputs */
cvReleaseMat(&mask);
\end{lstlisting}
\else
\begin{lstlisting}
>>> import cv
>>> src = cv.CreateMat(1, 10, cv.CV_32FC1)
>>> mask = cv.CreateMat(src.rows, src.cols, cv.CV_8UC1)
>>> dst = cv.CreateMat(src.rows, src.cols, cv.CV_32FC1)
>>> cv.CmpS(src, 0, mask, cv.CV_CMP_LT)         # find negative elements
>>> cv.Pow(src, dst, 1. / 3)
>>> cv.SubRS(dst, cv.ScalarAll(0), dst, mask)   # negate the results of negative inputs
\end{lstlisting}
\fi

For some values of \texttt{power}, such as integer values, 0.5, and -0.5, specialized faster algorithms are used.

\ifC
\cvCPyFunc{Ptr?D}
Return pointer to a particular array element.

\cvdefC{
uchar* cvPtr1D(const CvArr* arr, int idx0, int* type=NULL); \newline
uchar* cvPtr2D(const CvArr* arr, int idx0, int idx1, int* type=NULL); \newline
uchar* cvPtr3D(const CvArr* arr, int idx0, int idx1, int idx2, int* type=NULL); \newline
uchar* cvPtrND(const CvArr* arr, int* idx, int* type=NULL, int createNode=1, unsigned* precalcHashval=NULL);
}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\cvarg{idx2}{The third zero-based component of the element index}
\cvarg{idx}{Array of the element indices}
\cvarg{type}{Optional output parameter: type of matrix elements}
\cvarg{createNode}{Optional input parameter for sparse matrices. A non-zero value of the parameter means that the requested element is created if it does not exist already.}
\cvarg{precalcHashval}{Optional input parameter for sparse matrices. 
If the pointer is not NULL, the function does not recalculate the node hash value, but takes it from the specified location. It is useful for speeding up pair-wise operations (TODO: provide an example)}
\end{description}

The functions return a pointer to a specific array element. The number of array dimensions should match the number of indices passed to the function, except for \texttt{cvPtr1D}, which can be used for sequential access to 1D, 2D or nD dense arrays.

The functions can be used for sparse arrays as well; if the requested node does not exist, they create it and set it to zero.

All these as well as other functions accessing array elements (\cvCPyCross{Get}, \cvCPyCross{GetReal}, 
\cvCPyCross{Set}, \cvCPyCross{SetReal}) raise an error if the element index is out of range.

\fi

\cvCPyFunc{RNG}
Initializes a random number generator state.

\cvdefC{CvRNG cvRNG(int64 seed=-1);}
\cvdefPy{RNG(seed=-1LL)-> CvRNG}

\begin{description}
\cvarg{seed}{64-bit value used to initiate a random sequence}
\end{description}

The function initializes a random number generator
and returns the state. The pointer to the state can then be passed to the
\cvCPyCross{RandInt}, \cvCPyCross{RandReal} and \cvCPyCross{RandArr} functions. In the
current implementation a multiply-with-carry generator is used.

\cvCPyFunc{RandArr}
Fills an array with random numbers and updates the RNG state.

\cvdefC{void cvRandArr(\par CvRNG* rng,\par CvArr* arr,\par int distType,\par CvScalar param1,\par CvScalar param2);}
\cvdefPy{RandArr(rng,arr,distType,param1,param2)-> None}

\begin{description}
\cvarg{rng}{RNG state initialized by \cvCPyCross{RNG}}
\cvarg{arr}{The destination array}
\cvarg{distType}{Distribution type
\begin{description}
\cvarg{CV\_RAND\_UNI}{uniform distribution}
\cvarg{CV\_RAND\_NORMAL}{normal or Gaussian distribution}
\end{description}}
\cvarg{param1}{The first parameter of the distribution. In the case of a uniform distribution it is the inclusive lower boundary of the random numbers range. In the case of a normal distribution it is the mean value of the random numbers.}
\cvarg{param2}{The second parameter of the distribution. In the case of a uniform distribution it is the exclusive upper boundary of the random numbers range. 
In the case of a normal distribution it is the standard deviation of the random numbers.}
\end{description}

The function fills the destination array with uniformly
or normally distributed random numbers.

\ifC
In the example below, the function
is used to add a few normally distributed floating-point numbers to
random locations within a 2d array.

\begin{lstlisting}
/* let noisy_screen be the floating-point 2d array to be corrupted with noise */
CvRNG rng_state = cvRNG(0xffffffff);
int i, pointCount = 1000;
/* allocate the array of coordinates of points */
CvMat* locations = cvCreateMat(pointCount, 1, CV_32SC2);
/* arr of random point values */
CvMat* values = cvCreateMat(pointCount, 1, CV_32FC1);
CvSize size = cvGetSize(noisy_screen);

/* initialize the locations */
cvRandArr(&rng_state, locations, CV_RAND_UNI, cvScalar(0,0,0,0), 
	   cvScalar(size.width,size.height,0,0));

/* generate values */
cvRandArr(&rng_state, values, CV_RAND_NORMAL,
           cvRealScalar(100), // average intensity
           cvRealScalar(30) // deviation of the intensity
          );

/* set the points */
for(i = 0; i < pointCount; i++ )
{
    CvPoint pt = *(CvPoint*)cvPtr1D(locations, i, 0);
    float value = *(float*)cvPtr1D(values, i, 0);
    *((float*)cvPtr2D(noisy_screen, pt.y, pt.x, 0 )) += value;
}

/* do not forget to release the temporary arrays */
cvReleaseMat(&locations);
cvReleaseMat(&values);

/* RNG state does not need to be deallocated */
\end{lstlisting}
\fi

\cvCPyFunc{RandInt}
Returns a 32-bit unsigned integer and updates RNG.

\cvdefC{unsigned cvRandInt(CvRNG* rng);}
\cvdefPy{RandInt(rng)-> unsigned}

\begin{description}
\cvarg{rng}{RNG state initialized by \cvCPyCross{RNG} and, optionally, customized by \texttt{RandSetRange} (though, the latter function does not affect the discussed function outcome)}
\end{description}

The function returns a uniformly-distributed random
32-bit unsigned integer and updates the RNG state. It is similar to the rand()
function from the C runtime library, but it always generates a 32-bit number
whereas rand() returns a number between 0 and \texttt{RAND\_MAX},
which is typically $2^{15}-1$ or $2^{31}-1$, depending on the platform.

The function is useful for generating scalar random numbers, such as
points, patch sizes, table indices, etc., where integers in a certain
range can be generated using a modulo operation and floating-point numbers
can be generated by scaling into 0..1 or any other specific range.

\ifC
Here is the example from the previous function discussion rewritten using
\cvCPyCross{RandInt}:

\begin{lstlisting}
/* the input and the task are the same as in the previous sample. */
CvRNG rng_state = cvRNG(0xffffffff);
int i, pointCount = 1000;
/* ... 
- no arrays are allocated here */
CvSize size = cvGetSize(noisy_screen);
/* make a buffer for normally distributed numbers to reduce call overhead */
#define bufferSize 16
float normalValueBuffer[bufferSize];
CvMat normalValueMat = cvMat(bufferSize, 1, CV_32F, normalValueBuffer);
int valuesLeft = 0;

for(i = 0; i < pointCount; i++ )
{
    CvPoint pt;
    /* generate random point */
    pt.x = cvRandInt(&rng_state) % size.width;
    pt.y = cvRandInt(&rng_state) % size.height;

    if(valuesLeft <= 0 )
    {
        /* refill the buffer with normally distributed numbers 
	   if the buffer is empty */
        cvRandArr(&rng_state, &normalValueMat, CV_RAND_NORMAL, 
		   cvRealScalar(100), cvRealScalar(30));
        valuesLeft = bufferSize;
    }
    *((float*)cvPtr2D(noisy_screen, pt.y, pt.x, 0)) = 
				normalValueBuffer[--valuesLeft];
}

/* there is no need to deallocate normalValueMat because we have
both the matrix header and the data on stack. It is a common and efficient
practice of working with small, fixed-size matrices */
\end{lstlisting}
\fi

\cvCPyFunc{RandReal}
Returns a floating-point random number and updates RNG.

\cvdefC{double cvRandReal(CvRNG* rng);}
\cvdefPy{RandReal(rng)-> double}

\begin{description}
\cvarg{rng}{RNG state initialized by \cvCPyCross{RNG}}
\end{description}


The function returns a uniformly-distributed random floating-point number between 0 and 1 (1 is not included).

\cvCPyFunc{Reduce}
Reduces a matrix to a vector.

\cvdefC{void cvReduce(const CvArr* src, CvArr* dst, int dim = -1, int op=CV\_REDUCE\_SUM);}
\cvdefPy{Reduce(src,dst,dim=-1,op=CV\_REDUCE\_SUM)-> None}

\begin{description}
\cvarg{src}{The input matrix.}
\cvarg{dst}{The output single-row/single-column vector that accumulates all the matrix rows/columns.}
\cvarg{dim}{The dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row, 1 means that the matrix is reduced to a single column and -1 means that the dimension is chosen automatically by analysing the dst size.}
\cvarg{op}{The reduction operation. It can take one of the following values:
\begin{description}
\cvarg{CV\_REDUCE\_SUM}{The output is the sum of all of the matrix's rows/columns.}
\cvarg{CV\_REDUCE\_AVG}{The output is the mean vector of all of the matrix's rows/columns.}
\cvarg{CV\_REDUCE\_MAX}{The output is the maximum (column/row-wise) of all of the matrix's rows/columns.}
\cvarg{CV\_REDUCE\_MIN}{The output is the minimum (column/row-wise) of all of the matrix's rows/columns.}
\end{description}}
\end{description}

The function reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In the case of \texttt{CV\_REDUCE\_SUM} and \texttt{CV\_REDUCE\_AVG} the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes. 
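
\ifC
A minimal sketch (sizes are placeholders, the image assumed filled elsewhere) computing per-column sums of an 8-bit image:

\begin{lstlisting}
CvMat* img    = cvCreateMat(240, 320, CV_8UC1);
CvMat* colsum = cvCreateMat(1, 320, CV_32FC1); /* wider depth for accuracy */
/* ... fill img ... */
cvReduce(img, colsum, 0, CV_REDUCE_SUM);       /* dim=0: reduce to one row */
\end{lstlisting}
\fi
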
\ifC
\cvCPyFunc{ReleaseData}
Releases array data.

\cvdefC{void cvReleaseData(CvArr* arr);}

\begin{description}
\cvarg{arr}{Array header}
\end{description}

The function releases the array data. In the case of \cross{CvMat} or \cross{CvMatND} it simply calls cvDecRefData(); that is, the function cannot deallocate external data. See also the note to \cvCPyCross{CreateData}.

\cvCPyFunc{ReleaseImage}
Deallocates the image header and the image data.

\cvdefC{void cvReleaseImage(IplImage** image);}

\begin{description}
\cvarg{image}{Double pointer to the image header}
\end{description}

This call is a shortened form of

\begin{lstlisting}
if(*image )
{
    cvReleaseData(*image);
    cvReleaseImageHeader(image);
}
\end{lstlisting}


\cvCPyFunc{ReleaseImageHeader}
Deallocates an image header.

\cvdefC{void cvReleaseImageHeader(IplImage** image);}

\begin{description}
\cvarg{image}{Double pointer to the image header}
\end{description}

This call is an analogue of
\begin{lstlisting}
if(image )
{
    iplDeallocate(*image, IPL_IMAGE_HEADER | IPL_IMAGE_ROI);
    *image = 0;
}
\end{lstlisting}
but it does not use IPL functions by default (see the \texttt{CV\_TURN\_ON\_IPL\_COMPATIBILITY} macro).


\cvCPyFunc{ReleaseMat}
Deallocates a matrix.

\cvdefC{void cvReleaseMat(CvMat** mat);}

\begin{description}
\cvarg{mat}{Double pointer to the matrix}
\end{description}


The function decrements the matrix data reference counter and deallocates the matrix header. If the data reference counter is 0, it also deallocates the data.

\begin{lstlisting}
if(*mat )
    cvDecRefData(*mat);
cvFree((void**)mat);
\end{lstlisting}


\cvCPyFunc{ReleaseMatND}
Deallocates a multi-dimensional array.

\cvdefC{void cvReleaseMatND(CvMatND** mat);}

\begin{description}
\cvarg{mat}{Double pointer to the array}
\end{description}

The function decrements the array data reference counter and releases the array header. If the reference counter reaches 0, it also deallocates the data.

\begin{lstlisting}
if(*mat )
    cvDecRefData(*mat);
cvFree((void**)mat);
\end{lstlisting}

\cvCPyFunc{ReleaseSparseMat}
Deallocates sparse array.

\cvdefC{void cvReleaseSparseMat(CvSparseMat** mat);}

\begin{description}
\cvarg{mat}{Double pointer to the array}
\end{description}

The function releases the sparse array and clears the array pointer upon exit.

\fi

\cvCPyFunc{Repeat}
Fill the destination array with repeated copies of the source array.

\cvdefC{void cvRepeat(const CvArr* src, CvArr* dst);}
\cvdefPy{Repeat(src,dst)-> None}

\begin{description}
\cvarg{src}{Source array, image or matrix}
\cvarg{dst}{Destination array, image or matrix}
\end{description}

The function fills the destination array with repeated copies of the source array:

\begin{lstlisting}
dst(i,j)=src(i mod rows(src), j mod cols(src))
\end{lstlisting}

So the destination array may be larger as well as smaller than the source array.

\cvCPyFunc{ResetImageROI}
Resets the image ROI to include the entire image and releases the ROI structure.

\cvdefC{void cvResetImageROI(IplImage* image);}
\cvdefPy{ResetImageROI(image)-> None}

\begin{description}
\cvarg{image}{A pointer to the image header}
\end{description}

This produces a similar result to the following
\ifC
, but in addition it releases the ROI structure.

\begin{lstlisting}
cvSetImageROI(image, cvRect(0, 0, image->width, image->height ));
cvSetImageCOI(image, 0);
\end{lstlisting}
\else

\begin{lstlisting}
cv.SetImageROI(image, (0, 0, image.width, image.height))
cv.SetImageCOI(image, 0)
\end{lstlisting}
\fi


\cvCPyFunc{Reshape}
Changes shape of matrix/image without copying data.

\cvdefC{CvMat* cvReshape(const CvArr* arr, CvMat* header, int newCn, 
int newRows=0);}
\cvdefPy{Reshape(arr, newCn, newRows=0) -> cvmat}

\begin{description}
\cvarg{arr}{Input array}
\ifC
\cvarg{header}{Output header to be filled}
\fi
\cvarg{newCn}{New number of channels. $\texttt{newCn} = 0$ means that the number of channels remains unchanged.}
\cvarg{newRows}{New number of rows. $\texttt{newRows} = 0$ means that the number of rows remains unchanged unless it needs to be changed according to the \texttt{newCn} value.}
\end{description}

The function initializes the CvMat header so that it points to the same data as the original array but has a different shape: a different number of channels, a different number of rows, or both.

\ifC
The following example code creates one image buffer and two image headers: the first for a 320x240x3 image and the second for a 960x240x1 image:

\begin{lstlisting}
IplImage* color_img = cvCreateImage(cvSize(320,240), IPL_DEPTH_8U, 3);
CvMat gray_mat_hdr;
IplImage gray_img_hdr, *gray_img;
cvReshape(color_img, &gray_mat_hdr, 1);
gray_img = cvGetImage(&gray_mat_hdr, &gray_img_hdr);
\end{lstlisting}

The next example converts a 3x3 matrix to a single 1x9 vector:

\begin{lstlisting}
CvMat* mat = cvCreateMat(3, 3, CV_32F);
CvMat row_header, *row;
row = cvReshape(mat, &row_header, 0, 1);
\end{lstlisting}
\fi

\cvCPyFunc{ReshapeMatND}
Changes the shape of a multi-dimensional array without copying the data.

\cvdefC{CvArr* cvReshapeMatND(const CvArr* arr,
                       int sizeofHeader, CvArr* header,
                       int newCn, int newDims, int* newSizes);}
\cvdefPy{ReshapeMatND(arr, newCn, newDims) -> cvmat}

\ifC
\begin{lstlisting}
#define cvReshapeND(arr, header, newCn, newDims, newSizes )   \
      cvReshapeMatND((arr), sizeof(*(header)), (header),         \
                      (newCn), (newDims), (newSizes))
\end{lstlisting}
\fi

\begin{description}
\cvarg{arr}{Input array}
\ifC
\cvarg{sizeofHeader}{Size of output header to distinguish between IplImage, CvMat and CvMatND output headers}
\cvarg{header}{Output header to be filled}
\cvarg{newCn}{New number of channels. $\texttt{newCn} = 0$ means that the number of channels remains unchanged.}
\cvarg{newDims}{New number of dimensions. $\texttt{newDims} = 0$ means that the number of dimensions remains the same.}
\cvarg{newSizes}{Array of new dimension sizes. Only $\texttt{newDims}-1$ values are used, because the total number of elements must remain the same.
Thus, if $\texttt{newDims} = 1$, \texttt{newSizes} array is not used.}
\else
\cvarg{newCn}{New number of channels. 
$\texttt{newCn} = 0$ means that the number of channels remains unchanged.}
\cvarg{newDims}{List of new dimensions.}
\fi
\end{description}


\ifC
The function is an advanced version of \cvCPyCross{Reshape} that can work with multi-dimensional arrays (as well as with ordinary images and matrices) and change the number of dimensions.

Below are the two samples from the \cvCPyCross{Reshape} description rewritten using \cvCPyCross{ReshapeMatND}:

\begin{lstlisting}

IplImage* color_img = cvCreateImage(cvSize(320,240), IPL_DEPTH_8U, 3);
IplImage gray_img_hdr, *gray_img;
gray_img = (IplImage*)cvReshapeND(color_img, &gray_img_hdr, 1, 0, 0);

...

/* second example is modified to convert 2x2x2 array to 8x1 vector */
int size[] = { 2, 2, 2 };
CvMatND* mat = cvCreateMatND(3, size, CV_32F);
CvMat row_header, *row;
row = (CvMat*)cvReshapeND(mat, &row_header, 0, 1, 0);

\end{lstlisting}
\fi

\ifPy
Returns a new \cross{CvMatND} that shares the same data as \texttt{arr}
but has different dimensions or number of channels.  The only requirement
is that the total length of the data is unchanged.

\begin{lstlisting}
>>> import cv
>>> mat = cv.CreateMatND([24], cv.CV_32FC1)
>>> print cv.GetDims(cv.ReshapeMatND(mat, 0, [8, 3]))
(8, 3)
>>> m2 = cv.ReshapeMatND(mat, 4, [3, 2])
>>> print cv.GetDims(m2)
(3, 2)
>>> print m2.channels
4
\end{lstlisting}
\fi

\ifC
\cvfunc{cvRound, cvFloor, cvCeil}\label{cvRound}

Converts a floating-point number to an integer.

\cvdefC{
int cvRound(double value);
int cvFloor(double value);
int cvCeil(double value);

}\cvdefPy{Round, Floor, Ceil(value)-> int}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}


The functions convert the input floating-point number to an integer using one of the rounding
modes. \texttt{Round} returns the nearest integer value to the
argument. \texttt{Floor} returns the maximum integer value that is not
larger than the argument. \texttt{Ceil} returns the minimum integer
value that is not smaller than the argument. On some architectures the
functions work much faster than the standard cast
operations in C. If the absolute value of the argument is greater than
$2^{31}$, the result is not determined. Special values ($\pm \infty$, NaN)
are not handled.

\else

\cvCPyFunc{Round}

Converts a floating-point number to the nearest integer value.

\cvdefPy{Round(value) -> int}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}

On some architectures this function is much faster than the standard cast
operations. If the absolute value of the argument is greater than
$2^{31}$, the result is not determined. Special values ($\pm \infty$, NaN)
are not handled.

\cvCPyFunc{Floor}

Converts a floating-point number to the nearest integer value that is not larger than the argument.

\cvdefPy{Floor(value) -> int}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}

On some architectures this function is much faster than the standard cast
operations. If the absolute value of the argument is greater than
$2^{31}$, the result is not determined. 
Special values ($\\pm \\infty$ , NaN)\nare not handled.\n\n\\cvCPyFunc{Ceil}\n\nConverts a floating-point number to the nearest integer value that is not smaller than the argument.\n\n\\cvdefPy{Ceil(value) -> int}\n\n\\begin{description}\n\\cvarg{value}{The input floating-point value}\n\\end{description}\n\nOn some architectures this function is much faster than the standard cast\noperations. If the absolute value of the argument is greater than\n$2^{31}$, the result is not determined. Special values ($\\pm \\infty$ , NaN)\nare not handled.\n\n\\fi\n\n\n\\cvCPyFunc{ScaleAdd}\nCalculates the sum of a scaled array and another array.\n\n\\cvdefC{void cvScaleAdd(const CvArr* src1, CvScalar scale, const CvArr* src2, CvArr* dst);}\n\\cvdefPy{ScaleAdd(src1,scale,src2,dst)-> None}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{scale}{Scale factor for the first array}\n\\cvarg{src2}{The second source array}\n\\cvarg{dst}{The destination array}\n\\end{description}\n\nThe function calculates the sum of a scaled array and another array:\n\n\\[\n\\texttt{dst}(I)=\\texttt{scale} \\, \\texttt{src1}(I) + \\texttt{src2}(I)\n\\]\n\nAll array parameters should have the same type and the same size.\n\n\\cvCPyFunc{Set}\nSets every element of an array to a given value.\n\n\\cvdefC{void cvSet(CvArr* arr, CvScalar value, const CvArr* mask=NULL);}\n\\cvdefPy{Set(arr,value,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{arr}{The destination array}\n\\cvarg{value}{Fill value}\n\\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}\n\\end{description}\n\n\nThe function copies the scalar \\texttt{value} to every selected element of the destination array:\n\n\\[\n\\texttt{arr}(I)=\\texttt{value} \\quad \\text{if} \\quad \\texttt{mask}(I) \\ne 0\n\\]\n\nIf array \\texttt{arr} is of \\texttt{IplImage} type, then is ROI used, but COI must not be set.\n\n\\ifC % {\n\\cvCPyFunc{Set?D}\nChange the particular array element.\n\n\\cvdefC{\nvoid cvSet1D(CvArr* arr, int idx0, CvScalar value); \\newline\nvoid cvSet2D(CvArr* arr, int idx0, int idx1, CvScalar value); \\newline\nvoid cvSet3D(CvArr* arr, int idx0, int idx1, int idx2, CvScalar value); \\newline\nvoid cvSetND(CvArr* arr, int* idx, CvScalar value);\n}\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{idx0}{The first zero-based component of the element index}\n\\cvarg{idx1}{The second zero-based component of the element index}\n\\cvarg{idx2}{The third zero-based component of the element index}\n\\cvarg{idx}{Array of the element indices}\n\\cvarg{value}{The assigned value}\n\\end{description}\n\nThe functions assign the new value to a particular array element. In the case of a sparse array the functions create the node if it does not exist yet.\n\n\\else % }{\n\n\\cvCPyFunc{Set1D}\nSet a specific array element.\n\n\\cvdefPy{ Set1D(arr, idx, value) -> None }\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{idx}{Zero-based element index}\n\\cvarg{value}{The value to assign to the element}\n\\end{description}\n\nSets a specific array element.  Array must have dimension 1.\n\n\\cvCPyFunc{Set2D}\nSet a specific array element.\n\n\\cvdefPy{ Set2D(arr, idx0, idx1, value) -> None }\n\n\\begin{description}\n\\cvarg{arr}{Input array}\n\\cvarg{idx0}{Zero-based element row index}\n\\cvarg{idx1}{Zero-based element column index}\n\\cvarg{value}{The value to assign to the element}\n\\end{description}\n\nSets a specific array element.  
\cvCPyFunc{Set}
Sets every element of an array to a given value.

\cvdefC{void cvSet(CvArr* arr, CvScalar value, const CvArr* mask=NULL);}
\cvdefPy{Set(arr,value,mask=NULL)-> None}

\begin{description}
\cvarg{arr}{The destination array}
\cvarg{value}{Fill value}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}


The function copies the scalar \texttt{value} to every selected element of the destination array:

\[
\texttt{arr}(I)=\texttt{value} \quad \text{if} \quad \texttt{mask}(I) \ne 0
\]

If array \texttt{arr} is of \texttt{IplImage} type, then its ROI is used, but COI must not be set.

\ifC % {
\cvCPyFunc{Set?D}
Change the particular array element.

\cvdefC{
void cvSet1D(CvArr* arr, int idx0, CvScalar value); \newline
void cvSet2D(CvArr* arr, int idx0, int idx1, CvScalar value); \newline
void cvSet3D(CvArr* arr, int idx0, int idx1, int idx2, CvScalar value); \newline
void cvSetND(CvArr* arr, int* idx, CvScalar value);
}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\cvarg{idx2}{The third zero-based component of the element index}
\cvarg{idx}{Array of the element indices}
\cvarg{value}{The assigned value}
\end{description}

The functions assign the new value to a particular array element. In the case of a sparse array the functions create the node if it does not exist yet.

\else % }{

\cvCPyFunc{Set1D}
Set a specific array element.

\cvdefPy{ Set1D(arr, idx, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx}{Zero-based element index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 1.

\cvCPyFunc{Set2D}
Set a specific array element.

\cvdefPy{ Set2D(arr, idx0, idx1, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element row index}
\cvarg{idx1}{Zero-based element column index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 2.

\cvCPyFunc{Set3D}
Set a specific array element.

\cvdefPy{ Set3D(arr, idx0, idx1, idx2, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element index}
\cvarg{idx1}{Zero-based element index}
\cvarg{idx2}{Zero-based element index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 3.

\cvCPyFunc{SetND}
Set a specific array element.

\cvdefPy{ SetND(arr, indices, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{indices}{List of zero-based element indices}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  The length of array indices must be the same as the dimension of the array.
\fi % }

\cvCPyFunc{SetData}
Assigns user data to the array header.

\cvdefC{void cvSetData(CvArr* arr, void* data, int step);}
\cvdefPy{SetData(arr, data, step)-> None}

\begin{description}
\cvarg{arr}{Array header}
\cvarg{data}{User data}
\cvarg{step}{Full row length in bytes}
\end{description}

The function assigns user data to the array header. The header should be initialized beforehand using \texttt{cvCreate*Header}, \texttt{cvInit*Header} or \cvCPyCross{Mat} (in the case of a matrix).

\cvCPyFunc{SetIdentity}
Initializes a scaled identity matrix.

\cvdefC{void cvSetIdentity(CvArr* mat, CvScalar value=cvRealScalar(1));}
\cvdefPy{SetIdentity(mat,value=1)-> None}

\begin{description}
\cvarg{mat}{The matrix to initialize (not necessarily square)}
\cvarg{value}{The value to assign to the diagonal elements}
\end{description}

The function initializes a scaled identity matrix:

\[
\texttt{mat}(i,j)=\fork{\texttt{value}}{ if $i=j$}{0}{otherwise}
\]

\cvCPyFunc{SetImageCOI}
Sets the channel of interest in an IplImage.

\cvdefC{void cvSetImageCOI(\par IplImage* image,\par int coi);}
\cvdefPy{SetImageCOI(image, coi)-> None}

\begin{description}
\cvarg{image}{A pointer to the image header}
\cvarg{coi}{The channel of interest. 0 - all channels are selected, 1 - first channel is selected, etc. Note that the channel indices become 1-based.}
\end{description}

If the ROI is set to \texttt{NULL} and the coi is \textit{not} 0,
the ROI is allocated. Most OpenCV functions do \textit{not} support
the COI setting, so to process an individual image/matrix channel one
may copy (via \cvCPyCross{Copy} or \cvCPyCross{Split}) the channel to a separate
image/matrix, process it and then copy the result back (via \cvCPyCross{Copy}
or \cvCPyCross{Merge}) if needed.

\cvCPyFunc{SetImageROI}
Sets an image Region Of Interest (ROI) for a given rectangle.

\cvdefC{void cvSetImageROI(\par IplImage* image,\par CvRect rect);}
\cvdefPy{SetImageROI(image, rect)-> None}

\begin{description}
\cvarg{image}{A pointer to the image header}
\cvarg{rect}{The ROI rectangle}
\end{description}

If the original image ROI was \texttt{NULL} and the \texttt{rect} is not the whole image, the ROI structure is allocated.

Most OpenCV functions support the use of ROI and treat the image rectangle as a separate image. For example, all of the pixel coordinates are counted from the top-left (or bottom-left) corner of the ROI, not the original image.
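
\ifC
A minimal sketch (sizes are placeholders) that restricts processing to a sub-rectangle and then restores the whole image:

\begin{lstlisting}
IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
cvSetImageROI(img, cvRect(10, 20, 100, 100));
cvSet(img, cvScalarAll(255), NULL); /* affects only the ROI */
cvResetImageROI(img);               /* back to the entire image */
\end{lstlisting}
\fi
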
\ifC % {
\cvCPyFunc{SetReal?D}
Change a specific array element.

\cvdefC{
void cvSetReal1D(CvArr* arr, int idx0, double value); \newline
void cvSetReal2D(CvArr* arr, int idx0, int idx1, double value); \newline
void cvSetReal3D(CvArr* arr, int idx0, int idx1, int idx2, double value); \newline
void cvSetRealND(CvArr* arr, int* idx, double value);
}

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{The first zero-based component of the element index}
\cvarg{idx1}{The second zero-based component of the element index}
\cvarg{idx2}{The third zero-based component of the element index}
\cvarg{idx}{Array of the element indices}
\cvarg{value}{The assigned value}
\end{description}

The functions assign a new value to a specific
element of a single-channel array. If the array has multiple channels,
a runtime error is raised. Note that the \cvCPyCross{Set*D} functions can be used
safely for both single-channel and multiple-channel arrays, though they
are a bit slower.

In the case of a sparse array the functions create the node if it does not yet exist.

\else % }{

\cvCPyFunc{SetReal1D}
Set a specific array element.

\cvdefPy{ SetReal1D(arr, idx, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx}{Zero-based element index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 1.

\cvCPyFunc{SetReal2D}
Set a specific array element.

\cvdefPy{ SetReal2D(arr, idx0, idx1, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element row index}
\cvarg{idx1}{Zero-based element column index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 2.

\cvCPyFunc{SetReal3D}
Set a specific array element.

\cvdefPy{ SetReal3D(arr, idx0, idx1, idx2, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{idx0}{Zero-based element index}
\cvarg{idx1}{Zero-based element index}
\cvarg{idx2}{Zero-based element index}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  Array must have dimension 3.

\cvCPyFunc{SetRealND}
Set a specific array element.

\cvdefPy{ SetRealND(arr, indices, value) -> None }

\begin{description}
\cvarg{arr}{Input array}
\cvarg{indices}{List of zero-based element indices}
\cvarg{value}{The value to assign to the element}
\end{description}

Sets a specific array element.  The length of array indices must be the same as the dimension of the array.
\fi % }

\cvCPyFunc{SetZero}
Clears the array.

\cvdefC{void cvSetZero(CvArr* arr);}
\cvdefPy{SetZero(arr)-> None}

\ifC
\begin{lstlisting}
#define cvZero cvSetZero
\end{lstlisting}
\fi

\begin{description}
\cvarg{arr}{Array to be cleared}
\end{description}

The function clears the array. 
In the case of dense arrays (CvMat, CvMatND or IplImage), cvZero(array) is equivalent to cvSet(array,cvScalarAll(0),0).
In the case of sparse arrays all the elements are removed.

\cvCPyFunc{Solve}
Solves a linear system or least-squares problem.

\cvdefC{int cvSolve(const CvArr* src1, const CvArr* src2, CvArr* dst, int method=CV\_LU);}
\cvdefPy{Solve(A,B,X,method=CV\_LU)-> None}

\begin{description}
\cvarg{A}{The source matrix}
\cvarg{B}{The right-hand part of the linear system}
\cvarg{X}{The output solution}
\cvarg{method}{The solution (matrix inversion) method
\begin{description}
 \cvarg{CV\_LU}{Gaussian elimination with optimal pivot element chosen}
 \cvarg{CV\_SVD}{Singular value decomposition (SVD) method}
 \cvarg{CV\_SVD\_SYM}{SVD method for a symmetric positive-definite matrix.}
\end{description}}
\end{description}

The function solves a linear system or least-squares problem (the latter is possible with SVD methods):

\[
\texttt{X} = \arg\min_x ||\texttt{A} \, x - \texttt{B}||
\]

If the \texttt{CV\_LU} method is used, the function returns 1 if \texttt{A} is non-singular and 0 otherwise; in the latter case \texttt{X} is not valid.
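
\ifC
A minimal sketch (the matrix contents are placeholders) that solves a $3 \times 3$ system $Ax = b$ and checks for singularity:

\begin{lstlisting}
CvMat* A = cvCreateMat(3, 3, CV_32FC1);
CvMat* b = cvCreateMat(3, 1, CV_32FC1);
CvMat* x = cvCreateMat(3, 1, CV_32FC1);
/* ... fill A and b ... */
if(!cvSolve(A, b, x, CV_LU))
{
    /* A is singular: retry as a least-squares problem */
    cvSolve(A, b, x, CV_SVD);
}
\end{lstlisting}
\fi
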
\cvCPyFunc{SolveCubic}
Finds the real roots of a cubic equation.

\cvdefC{int cvSolveCubic(const CvArr* coeffs, CvArr* roots);}
\cvdefPy{SolveCubic(coeffs,roots)-> None}

\begin{description}
\cvarg{coeffs}{The equation coefficients, an array of 3 or 4 elements}
\cvarg{roots}{The output array of real roots which should have 3 elements}
\end{description}

The function finds the real roots of a cubic equation.

If \texttt{coeffs} is a 4-element vector:

\[
\texttt{coeffs}[0] x^3 + \texttt{coeffs}[1] x^2 + \texttt{coeffs}[2] x + \texttt{coeffs}[3] = 0
\]

or if \texttt{coeffs} is a 3-element vector:

\[
x^3 + \texttt{coeffs}[0] x^2 + \texttt{coeffs}[1] x + \texttt{coeffs}[2] = 0
\]

The function returns the number of real roots found. The roots are
stored in the \texttt{roots} array, which is padded with zeros if there is
only one root.

\cvCPyFunc{Split}
Divides multi-channel array into several single-channel arrays or extracts a single channel from the array.

\cvdefC{void cvSplit(const CvArr* src, CvArr* dst0, CvArr* dst1,
              CvArr* dst2, CvArr* dst3);}
\cvdefPy{Split(src,dst0,dst1,dst2,dst3)-> None}

\begin{description}
\cvarg{src}{Source array}
\cvarg{dst0}{Destination channel 0}
\cvarg{dst1}{Destination channel 1}
\cvarg{dst2}{Destination channel 2}
\cvarg{dst3}{Destination channel 3}
\end{description}

The function divides a multi-channel array into separate
single-channel arrays. Two modes are available for the operation. If the
source array has N channels and the first N destination channels
are all non-NULL, they are all extracted from the source array;
if only a single destination channel of the first N is non-NULL, that
particular channel is extracted; otherwise an error is raised. The rest
of the destination channels (beyond the first N) must always be NULL. For
an \texttt{IplImage}, \cvCPyCross{Copy} with COI set can also be used to extract a single
channel from the image.


\cvCPyFunc{Sqrt}
Calculates the square root.

\cvdefC{float cvSqrt(float value);}
\cvdefPy{Sqrt(value)-> float}

\begin{description}
\cvarg{value}{The input floating-point value}
\end{description}


The function calculates the square root of the argument. If the argument is negative, the result is not determined.

\cvCPyFunc{Sub}
Computes the per-element difference between two arrays.

\cvdefC{void cvSub(const CvArr* src1, const CvArr* src2, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{Sub(src1,src2,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}


The function subtracts one array from another:

\begin{lstlisting}
dst(I)=src1(I)-src2(I) if mask(I)!=0
\end{lstlisting}

All the arrays must have the same type, except the mask, and the same size (or ROI size).
For types that have limited range this operation is saturating.

\cvCPyFunc{SubRS}
Computes the difference between a scalar and an array.

\cvdefC{void cvSubRS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{SubRS(src,value,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The first source array}
\cvarg{value}{Scalar to subtract from}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}

The function subtracts every element of the source array from a scalar:

\begin{lstlisting}
dst(I)=value-src(I) if mask(I)!=0
\end{lstlisting}

All the arrays must have the same type, except the mask, and the same size (or ROI size).
For types that have limited range this operation is saturating.

\cvCPyFunc{SubS}
Computes the difference between an array and a scalar.

\cvdefC{void cvSubS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{SubS(src,value,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{value}{Subtracted scalar}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}

The function subtracts a scalar from every element of the source array:

\begin{lstlisting}
dst(I)=src(I)-value if mask(I)!=0
\end{lstlisting}

All the arrays must have the same type, except the mask, and the same size (or ROI size).
For types that have limited range this operation is saturating.


\cvCPyFunc{Sum}
Adds up array elements.

\cvdefC{CvScalar cvSum(const CvArr* arr);}
\cvdefPy{Sum(arr)-> CvScalar}

\begin{description}
\cvarg{arr}{The array}
\end{description}


The function calculates the sum \texttt{S} of array elements, independently for each channel:

\[ \sum_I \texttt{arr}(I)_c \]

If the array is \texttt{IplImage} and COI is set, the function processes the selected channel only and stores the sum in the first scalar component.


\cvCPyFunc{SVBkSb}
Performs singular value back substitution.

\cvdefC{
void  cvSVBkSb(\par const CvArr* W,\par const CvArr* U,\par const CvArr* V,\par const CvArr* B,\par CvArr* X,\par int flags);}
\cvdefPy{SVBkSb(W,U,V,B,X,flags)-> None}

\begin{description}
\cvarg{W}{Matrix or vector of singular values}
\cvarg{U}{Left orthogonal matrix (possibly transposed)}
\cvarg{V}{Right orthogonal matrix (possibly transposed)}
\cvarg{B}{The matrix to multiply the pseudo-inverse of the original matrix \texttt{A} by. This is an optional parameter. 
\cvCPyFunc{SubS}
Computes the difference between an array and a scalar.

\cvdefC{void cvSubS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{SubS(src,value,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{value}{Subtracted scalar}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}

The function subtracts a scalar from every element of the source array:

\begin{lstlisting}
dst(I)=src(I)-value if mask(I)!=0
\end{lstlisting}

All the arrays must have the same type, except the mask, and the same size (or ROI size).
For types that have limited range this operation is saturating.


\cvCPyFunc{Sum}
Adds up array elements.

\cvdefC{CvScalar cvSum(const CvArr* arr);}
\cvdefPy{Sum(arr)-> CvScalar}

\begin{description}
\cvarg{arr}{The array}
\end{description}


The function calculates the sum \texttt{S} of array elements, independently for each channel:

\[ \sum_I \texttt{arr}(I)_c \]

If the array is \texttt{IplImage} and COI is set, the function processes the selected channel only and stores the sum in the first scalar component.


\cvCPyFunc{SVBkSb}
Performs singular value back substitution.

\cvdefC{
void  cvSVBkSb(\par const CvArr* W,\par const CvArr* U,\par const CvArr* V,\par const CvArr* B,\par CvArr* X,\par int flags);}
\cvdefPy{SVBkSb(W,U,V,B,X,flags)-> None}

\begin{description}
\cvarg{W}{Matrix or vector of singular values}
\cvarg{U}{Left orthogonal matrix (transposed, perhaps)}
\cvarg{V}{Right orthogonal matrix (transposed, perhaps)}
\cvarg{B}{The matrix to multiply the pseudo-inverse of the original matrix \texttt{A} by. This is an optional parameter. If it is omitted then it is assumed to be an identity matrix of an appropriate size (so that \texttt{X} will be the reconstructed pseudo-inverse of \texttt{A}).}
\cvarg{X}{The destination matrix: result of back substitution}
\cvarg{flags}{Operation flags, should match exactly the \texttt{flags} passed to \cvCPyCross{SVD}}
\end{description}

The function calculates back substitution for decomposed matrix \texttt{A} (see \cvCPyCross{SVD} description) and matrix \texttt{B}:

\[
\texttt{X} = \texttt{V} \texttt{W}^{-1} \texttt{U}^T \texttt{B}
\]

where

\[
W^{-1}_{(i,i)}=
\fork
{1/W_{(i,i)}}{if $W_{(i,i)} > \epsilon \sum_i{W_{(i,i)}}$ }
{0}{otherwise}
\]

and $\epsilon$ is a small number that depends on the matrix data type.

This function together with \cvCPyCross{SVD} is used inside \cvCPyCross{Invert}
and \cvCPyCross{Solve}, and the possible reason to use these "low-level" functions (svd and bksb) is to avoid the allocation of temporary matrices inside
the high-level counterparts (inv and solve).
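\ifC
For example, a least-squares solution of an overdetermined system $A x \approx b$ can be computed with the pair of functions (a minimal sketch; \cvCPyCross{Solve} with the \texttt{CV\_SVD} method does the same internally):

\begin{lstlisting}
float a[] = { 1, 1,  1, 2,  1, 3 };   /* 3x2 matrix A */
float b[] = { 1, 2, 2 };              /* right-hand side */
float w[2], u[6], v[4], x[2];
CvMat A = cvMat(3, 2, CV_32FC1, a);
CvMat B = cvMat(3, 1, CV_32FC1, b);
CvMat W = cvMat(2, 1, CV_32FC1, w);
CvMat U = cvMat(3, 2, CV_32FC1, u);
CvMat V = cvMat(2, 2, CV_32FC1, v);
CvMat X = cvMat(2, 1, CV_32FC1, x);
cvSVD(&A, &W, &U, &V, 0);             /* A = U W V^T */
cvSVBkSb(&W, &U, &V, &B, &X, 0);      /* X = V W^-1 U^T B */
\end{lstlisting}
\fi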
\cvCPyFunc{SVD}
Performs singular value decomposition of a real floating-point matrix.

\cvdefC{void cvSVD(\par CvArr* A, \par CvArr* W, \par CvArr* U=NULL, \par CvArr* V=NULL, \par int flags=0);}
\cvdefPy{SVD(A,W, U = None, V = None, flags=0)-> None}

\begin{description}
\cvarg{A}{Source $\texttt{M} \times \texttt{N}$ matrix}
\cvarg{W}{Resulting singular value diagonal matrix ($\texttt{M} \times \texttt{N}$ or $\min(\texttt{M}, \texttt{N})  \times \min(\texttt{M}, \texttt{N})$) or $\min(\texttt{M},\texttt{N}) \times 1$ vector of the singular values}
\cvarg{U}{Optional left orthogonal matrix, $\texttt{M} \times \min(\texttt{M}, \texttt{N})$ (when \texttt{CV\_SVD\_U\_T} is not set), or $\min(\texttt{M},\texttt{N}) \times \texttt{M}$ (when \texttt{CV\_SVD\_U\_T} is set), or $\texttt{M} \times \texttt{M}$ (regardless of the \texttt{CV\_SVD\_U\_T} flag).}
\cvarg{V}{Optional right orthogonal matrix, $\texttt{N} \times \min(\texttt{M}, \texttt{N})$ (when \texttt{CV\_SVD\_V\_T} is not set), or $\min(\texttt{M},\texttt{N}) \times \texttt{N}$ (when \texttt{CV\_SVD\_V\_T} is set), or $\texttt{N} \times \texttt{N}$ (regardless of the \texttt{CV\_SVD\_V\_T} flag).}
\cvarg{flags}{Operation flags; can be 0 or a combination of the following values:
\begin{description}
  \cvarg{CV\_SVD\_MODIFY\_A}{enables modification of matrix \texttt{A} during the operation. It speeds up the processing.}
  \cvarg{CV\_SVD\_U\_T}{means that the transposed matrix \texttt{U} is returned. Specifying the flag speeds up the processing.}
  \cvarg{CV\_SVD\_V\_T}{means that the transposed matrix \texttt{V} is returned. Specifying the flag speeds up the processing.}
\end{description}}
\end{description}

The function decomposes matrix \texttt{A} into the product of a diagonal matrix and two orthogonal matrices:

\[
A=U \, W \, V^T
\]

where $W$ is a diagonal matrix of singular values, which can also be stored as a 1D vector of the singular values, and $U$ and $V$ are orthogonal matrices. All the singular values
are non-negative and sorted (together with the corresponding columns of $U$ and $V$)
in descending order.

An SVD algorithm is numerically robust and its typical applications include:

\begin{itemize}
  \item accurate eigenvalue problem solution when matrix \texttt{A}
  is a square, symmetric, and positive definite matrix, for example, when
  it is a covariance matrix. $W$ in this case will be a vector/matrix
  of the eigenvalues, and $U = V$ will be a matrix of the eigenvectors.
  \item accurate solution of a poorly-conditioned linear system.
  \item least-squares solution of an overdetermined linear system. This and the preceding are done by using the \cvCPyCross{Solve} function with the \texttt{CV\_SVD} method.
  \item accurate calculation of different matrix characteristics such as the matrix rank (the number of non-zero singular values), condition number (the ratio of the largest singular value to the smallest one), and determinant (the absolute value of the determinant is equal to the product of the singular values).
\end{itemize}

\cvCPyFunc{Trace}
Returns the trace of a matrix.

\cvdefC{CvScalar cvTrace(const CvArr* mat);}
\cvdefPy{Trace(mat)-> CvScalar}

\begin{description}
\cvarg{mat}{The source matrix}
\end{description}


The function returns the sum of the diagonal elements of the matrix \texttt{mat}:

\[ tr(\texttt{mat}) = \sum_i \texttt{mat}(i,i) \]

\cvCPyFunc{Transform}

Performs matrix transformation of every array element.

\cvdefC{void cvTransform(const CvArr* src, CvArr* dst, const CvMat* transmat, const CvMat* shiftvec=NULL);}
\cvdefPy{Transform(src,dst,transmat,shiftvec=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array}
\cvarg{transmat}{Transformation matrix}
\cvarg{shiftvec}{Optional shift vector}
\end{description}

The function performs matrix transformation of every element of array \texttt{src} and stores the results in \texttt{dst}:

\[
dst(I) = transmat \cdot src(I) + shiftvec
\]

That is, every element of an \texttt{N}-channel array \texttt{src} is
considered as an \texttt{N}-element vector which is transformed using
an $\texttt{M} \times \texttt{N}$ matrix \texttt{transmat} and shift
vector \texttt{shiftvec} into an element of the \texttt{M}-channel array
\texttt{dst}. There is an option to embed \texttt{shiftvec} into
\texttt{transmat}. In this case \texttt{transmat} should be an $\texttt{M}
\times (\texttt{N}+1)$ matrix and the rightmost column is treated as the shift
vector.

Both source and destination arrays should have the same depth and the
same size or selected ROI size. \texttt{transmat} and \texttt{shiftvec}
should be real floating-point matrices.

The function may be used for geometrical transformation of an n-dimensional
point set, arbitrary linear color space transformation, shuffling the
channels and so forth.
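\ifC
For instance, a color image can be converted to a single-channel luminance image with a $1 \times 3$ transformation matrix (a minimal sketch, assuming \texttt{src} is an 8-bit 3-channel BGR image and \texttt{dst} an 8-bit single-channel image of the same size):

\begin{lstlisting}
/* dst(I) = 0.114*B + 0.587*G + 0.299*R */
float coeffs[] = { 0.114f, 0.587f, 0.299f };
CvMat transmat = cvMat(1, 3, CV_32FC1, coeffs);
cvTransform(src, dst, &transmat, NULL);
\end{lstlisting}
\fi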
\cvCPyFunc{Transpose}
Transposes a matrix.

\cvdefC{void cvTranspose(const CvArr* src, CvArr* dst);}
\cvdefPy{Transpose(src,dst)-> None}

\begin{description}
\cvarg{src}{The source matrix}
\cvarg{dst}{The destination matrix}
\end{description}

The function transposes matrix \texttt{src}:

\[ \texttt{dst}(i,j) = \texttt{src}(j,i) \]

Note that no complex conjugation is done in the case of a complex
matrix. Conjugation should be done separately: look at the sample code
in \cvCPyCross{XorS} for an example.

\cvCPyFunc{Xor}
Performs per-element bit-wise "exclusive or" operation on two arrays.

\cvdefC{void cvXor(const CvArr* src1, const CvArr* src2, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{Xor(src1,src2,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}

The function calculates the per-element bit-wise "exclusive or" of two arrays:

\begin{lstlisting}
dst(I)=src1(I)^src2(I) if mask(I)!=0
\end{lstlisting}

In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size.

\cvCPyFunc{XorS}
Performs per-element bit-wise "exclusive or" operation on an array and a scalar.

\cvdefC{void cvXorS(const CvArr* src, CvScalar value, CvArr* dst, const CvArr* mask=NULL);}
\cvdefPy{XorS(src,value,dst,mask=NULL)-> None}

\begin{description}
\cvarg{src}{The source array}
\cvarg{value}{Scalar to use in the operation}
\cvarg{dst}{The destination array}
\cvarg{mask}{Operation mask, 8-bit single channel array; specifies elements of the destination array to be changed}
\end{description}


The function XorS calculates the per-element bit-wise "exclusive or" of an array and a scalar:

\begin{lstlisting}
dst(I)=src(I)^value if mask(I)!=0
\end{lstlisting}

Prior to the actual operation, the scalar is converted to the same type as that of the array(s). In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size.

\ifC
The following sample demonstrates how to conjugate a complex vector by flipping the most-significant bit of the imaginary part:

\begin{lstlisting}

float a[] = { 1, 0, 0, 1, -1, 0, 0, -1 }; /* 1, j, -1, -j */
CvMat A = cvMat(4, 1, CV_32FC2, &a);
int i, negMask = 0x80000000;
cvXorS(&A, cvScalar(0, *(float*)&negMask, 0, 0 ), &A, 0);
for(i = 0; i < 4; i++ )
    printf("(%.1f, %.1f) ", a[i*2], a[i*2+1]);

\end{lstlisting}

The code should print:

\begin{lstlisting}
(1.0, 0.0) (0.0, -1.0) (-1.0, 0.0) (0.0, 1.0)
\end{lstlisting}
\fi

\cvCPyFunc{mGet}
Returns a particular element of a single-channel floating-point matrix.

\cvdefC{double cvmGet(const CvMat* mat, int row, int col);}
\cvdefPy{mGet(mat,row,col)-> double}

\begin{description}
\cvarg{mat}{Input matrix}
\cvarg{row}{The zero-based index of the row}
\cvarg{col}{The zero-based index of the column}
\end{description}

The function is a fast replacement for \cvCPyCross{GetReal2D}
in the case of single-channel floating-point matrices. It is faster because
it is inline, it does fewer checks for array type and array element type,
and it checks for the row and column ranges only in debug mode.
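\ifC
For example, a $3 \times 3$ identity matrix can be filled with \texttt{cvmSet} (described next) and read back with \texttt{cvmGet} (a minimal sketch):

\begin{lstlisting}
CvMat* M = cvCreateMat(3, 3, CV_64FC1);
int i, j;
for( i = 0; i < 3; i++ )
    for( j = 0; j < 3; j++ )
        cvmSet(M, i, j, i == j ? 1. : 0.);
printf("M(1,1) = %g\n", cvmGet(M, 1, 1)); /* prints 1 */
cvReleaseMat(&M);
\end{lstlisting}
\fi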
\cvCPyFunc{mSet}
Sets a particular element of a single-channel floating-point matrix.

\cvdefC{void cvmSet(CvMat* mat, int row, int col, double value);}
\cvdefPy{mSet(mat,row,col,value)-> None}

\begin{description}
\cvarg{mat}{The matrix}
\cvarg{row}{The zero-based index of the row}
\cvarg{col}{The zero-based index of the column}
\cvarg{value}{The new value of the matrix element}
\end{description}


The function is a fast replacement for \cvCPyCross{SetReal2D}
in the case of single-channel floating-point matrices. It is faster because
it is inline, it does fewer checks for array type and array element type,
and it checks for the row and column ranges only in debug mode.

\fi

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                                                              %
%                                  C++ API                                     % 
%                                                                              %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\ifCpp

\cvCppFunc{abs}
Computes the absolute value of each matrix element

\cvdefCpp{MatExpr<...> abs(const Mat\& src);\newline
MatExpr<...> abs(const MatExpr<...>\& src);}

\begin{description}
\cvarg{src}{matrix or matrix expression}
\end{description}

\texttt{abs} is a meta-function that is expanded to one of the \cvCppCross{absdiff} forms:

\begin{itemize}
    \item \texttt{C = abs(A-B)} is equivalent to \texttt{absdiff(A, B, C)},
    \item \texttt{C = abs(A)} is equivalent to \texttt{absdiff(A, Scalar::all(0), C)}, and
    \item \texttt{C = Mat\_<Vec<uchar,n> >(abs(A*alpha + beta))} is equivalent to \texttt{convertScaleAbs(A, C, alpha, beta)}.
\end{itemize}

The output matrix will have the same size and the same type as the input one
(except for the last case, where \texttt{C} will have \texttt{depth=CV\_8U}).

See also: \cross{Matrix Expressions}, \cvCppCross{absdiff}, \hyperref[cppfunc.saturatecast]{saturate\_cast}

\cvCppFunc{absdiff}
Computes the per-element absolute difference between two arrays or between an array and a scalar.

\cvdefCpp{void absdiff(const Mat\& src1, const Mat\& src2, Mat\& dst);\newline
void absdiff(const Mat\& src1, const Scalar\& sc, Mat\& dst);\newline
void absdiff(const MatND\& src1, const MatND\& src2, MatND\& dst);\newline
void absdiff(const MatND\& src1, const Scalar\& sc, MatND\& dst);}

\begin{description}
\cvarg{src1}{The first input array}
\cvarg{src2}{The second input array; must be the same size and same type as \texttt{src1}}
\cvarg{sc}{Scalar; the second input parameter}
\cvarg{dst}{The destination array; it will have the same size and same type as \texttt{src1}; see \texttt{Mat::create}}
\end{description}

The functions \texttt{absdiff} compute:
\begin{itemize}
    \item the absolute difference between two arrays
    \[\texttt{dst}(I) = \texttt{saturate}(|\texttt{src1}(I) - \texttt{src2}(I)|)\]
    \item or the absolute difference between an array and a scalar:
    \[\texttt{dst}(I) = \texttt{saturate}(|\texttt{src1}(I) - \texttt{sc}|)\]
\end{itemize}
where \texttt{I} is a multi-dimensional index of the array elements.
In the case of multi-channel arrays each channel is processed
independently.\n\nSee also: \\cvCppCross{abs}, \\hyperref[cppfunc.saturatecast]{saturate\\_cast}\n\n\\cvCppFunc{add}\nComputes the per-element sum of two arrays or an array and a scalar.\n\n\\cvdefCpp{void add(const Mat\\& src1, const Mat\\& src2, Mat\\& dst);\\newline\nvoid add(const Mat\\& src1, const Mat\\& src2, \\par Mat\\& dst, const Mat\\& mask);\\newline\nvoid add(const Mat\\& src1, const Scalar\\& sc, \\par Mat\\& dst, const Mat\\& mask=Mat());\\newline\nvoid add(const MatND\\& src1, const MatND\\& src2, MatND\\& dst);\\newline\nvoid add(const MatND\\& src1, const MatND\\& src2, \\par MatND\\& dst, const MatND\\& mask);\\newline\nvoid add(const MatND\\& src1, const Scalar\\& sc, \\par MatND\\& dst, const MatND\\& mask=MatND());}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array. It must have the same size and same type as \\texttt{src1}}\n\\cvarg{sc}{Scalar; the second input parameter}\n\\cvarg{dst}{The destination array; it will have the same size and same type as \\texttt{src1}; see \\texttt{Mat::create}}\n\\cvarg{mask}{The optional operation mask, 8-bit single channel array;\n             specifies elements of the destination array to be changed}\n\\end{description}\n\nThe functions \\texttt{add} compute:\n\\begin{itemize}\n    \\item the sum of two arrays:\n    \\[\\texttt{dst}(I) = \\texttt{saturate}(\\texttt{src1}(I) + \\texttt{src2}(I))\\quad\\texttt{if mask}(I)\\ne0\\]\n    \\item or the sum of array and a scalar:\n    \\[\\texttt{dst}(I) = \\texttt{saturate}(\\texttt{src1}(I) + \\texttt{sc})\\quad\\texttt{if mask}(I)\\ne0\\]\n\\end{itemize}\nwhere \\texttt{I} is multi-dimensional index of array elements.\n\nThe first function in the above list can be replaced with matrix expressions:\n\\begin{lstlisting}\ndst = src1 + src2;\ndst += src1; // equivalent to add(dst, src1, dst);\n\\end{lstlisting}\n\nin the case of multi-channel arrays each channel is processed independently.\n\nSee also: \\cvCppCross{subtract}, \\cvCppCross{addWeighted}, \\cvCppCross{scaleAdd}, \\cvCppCross{convertScale},\n\\cross{Matrix Expressions}, \\hyperref[cppfunc.saturatecast]{saturate\\_cast}.\n\n\\cvCppFunc{addWeighted}\nComputes the weighted sum of two arrays.\n\n\\cvdefCpp{void addWeighted(const Mat\\& src1, double alpha, const Mat\\& src2,\\par\n                 double beta, double gamma, Mat\\& dst);\\newline\nvoid addWeighted(const MatND\\& src1, double alpha, const MatND\\& src2,\\par\n                 double beta, double gamma, MatND\\& dst);\n}\n\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{alpha}{Weight for the first array elements}\n\\cvarg{src2}{The second source array; must have the same size and same type as \\texttt{src1}}\n\\cvarg{beta}{Weight for the second array elements}\n\\cvarg{dst}{The destination array; it will have the same size and same type as \\texttt{src1}}\n\\cvarg{gamma}{Scalar, added to each sum}\n\\end{description}\n\nThe functions \\texttt{addWeighted} calculate the weighted sum of two arrays as follows:\n\\[\\texttt{dst}(I)=\\texttt{saturate}(\\texttt{src1}(I)*\\texttt{alpha} + \\texttt{src2}(I)*\\texttt{beta} + \\texttt{gamma})\\]\nwhere \\texttt{I} is multi-dimensional index of array elements.\n\nThe first function can be replaced with a matrix expression:\n\\begin{lstlisting}\ndst = src1*alpha + src2*beta + gamma;\n\\end{lstlisting}\n\nIn the case of multi-channel arrays each channel is processed independently.\n\nSee also: \\cvCppCross{add}, \\cvCppCross{subtract}, 
\cvCppCross{scaleAdd}, \cvCppCross{convertScale},
\cross{Matrix Expressions}, \hyperref[cppfunc.saturatecast]{saturate\_cast}.

\cvfunc{bitwise\_and}\label{cppfunc.bitwise.and}
Calculates the per-element bit-wise conjunction of two arrays or of an array and a scalar.

\cvdefCpp{void bitwise\_and(const Mat\& src1, const Mat\& src2,\par Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_and(const Mat\& src1, const Scalar\& sc,\par  Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_and(const MatND\& src1, const MatND\& src2,\par  MatND\& dst, const MatND\& mask=MatND());\newline
void bitwise\_and(const MatND\& src1, const Scalar\& sc,\par  MatND\& dst, const MatND\& mask=MatND());}

\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array. It must have the same size and same type as \texttt{src1}}
\cvarg{sc}{Scalar; the second input parameter}
\cvarg{dst}{The destination array; it will have the same size and same type as \texttt{src1}; see \texttt{Mat::create}}
\cvarg{mask}{The optional operation mask, 8-bit single channel array;
             specifies elements of the destination array to be changed}
\end{description}

The functions \texttt{bitwise\_and} compute the per-element bit-wise logical conjunction:
\begin{itemize}
    \item of two arrays
    \[\texttt{dst}(I) = \texttt{src1}(I) \wedge \texttt{src2}(I)\quad\texttt{if mask}(I)\ne0\]
    \item or of an array and a scalar:
    \[\texttt{dst}(I) = \texttt{src1}(I) \wedge \texttt{sc}\quad\texttt{if mask}(I)\ne0\]
\end{itemize}

In the case of floating-point arrays their machine-specific bit representations (usually IEEE754-compliant) are used for the operation, and in the case of multi-channel arrays each channel is processed independently.

See also: \hyperref[cppfunc.bitwise.or]{bitwise\_or}, \hyperref[cppfunc.bitwise.not]{bitwise\_not}, \hyperref[cppfunc.bitwise.xor]{bitwise\_xor}

\cvfunc{bitwise\_not}\label{cppfunc.bitwise.not}
Inverts every bit of an array

\cvdefCpp{void bitwise\_not(const Mat\& src, Mat\& dst);\newline
void bitwise\_not(const MatND\& src, MatND\& dst);}
\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array; it is reallocated to be of the same size and
            the same type as \texttt{src}; see \texttt{Mat::create}}
\end{description}

The functions \texttt{bitwise\_not} compute the per-element bit-wise inversion of the source array:
\[\texttt{dst}(I) = \neg\texttt{src}(I)\]

In the case of a floating-point source array its machine-specific bit representation (usually IEEE754-compliant) is used for the operation, and in the case of multi-channel arrays each channel is processed independently.

See also: \hyperref[cppfunc.bitwise.and]{bitwise\_and}, \hyperref[cppfunc.bitwise.or]{bitwise\_or}, \hyperref[cppfunc.bitwise.xor]{bitwise\_xor}
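As a usage illustration, a binary mask can be inverted and applied with these functions (a minimal sketch, assuming \texttt{img} is an 8-bit image and \texttt{mask} an 8-bit single-channel array of the same size):

\begin{lstlisting}
Mat invMask;
bitwise_not(mask, invMask);              // invMask(I) = ~mask(I)
Mat outside = Mat::zeros(img.size(), img.type());
bitwise_and(img, img, outside, invMask); // keep only pixels outside the mask
\end{lstlisting}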
\cvfunc{bitwise\_or}\label{cppfunc.bitwise.or}
Calculates the per-element bit-wise disjunction of two arrays or of an array and a scalar.

\cvdefCpp{void bitwise\_or(const Mat\& src1, const Mat\& src2,\par Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_or(const Mat\& src1, const Scalar\& sc,\par  Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_or(const MatND\& src1, const MatND\& src2,\par  MatND\& dst, const MatND\& mask=MatND());\newline
void bitwise\_or(const MatND\& src1, const Scalar\& sc,\par  MatND\& dst, const MatND\& mask=MatND());}
\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array. It must have the same size and same type as \texttt{src1}}
\cvarg{sc}{Scalar; the second input parameter}
\cvarg{dst}{The destination array; it is reallocated to be of the same size and
            the same type as \texttt{src1}; see \texttt{Mat::create}}
\cvarg{mask}{The optional operation mask, 8-bit single channel array;
             specifies elements of the destination array to be changed}
\end{description}

The functions \texttt{bitwise\_or} compute the per-element bit-wise logical disjunction
\begin{itemize}
    \item of two arrays
    \[\texttt{dst}(I) = \texttt{src1}(I) \vee \texttt{src2}(I)\quad\texttt{if mask}(I)\ne0\]
    \item or of an array and a scalar:
    \[\texttt{dst}(I) = \texttt{src1}(I) \vee \texttt{sc}\quad\texttt{if mask}(I)\ne0\]
\end{itemize}

In the case of floating-point arrays their machine-specific bit representations (usually IEEE754-compliant) are used for the operation, and in the case of multi-channel arrays each channel is processed independently.

See also: \hyperref[cppfunc.bitwise.and]{bitwise\_and}, \hyperref[cppfunc.bitwise.not]{bitwise\_not}, \hyperref[cppfunc.bitwise.xor]{bitwise\_xor}

\cvfunc{bitwise\_xor}\label{cppfunc.bitwise.xor}
Calculates the per-element bit-wise "exclusive or" operation on two arrays or on an array and a scalar.

\cvdefCpp{void bitwise\_xor(const Mat\& src1, const Mat\& src2,\par Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_xor(const Mat\& src1, const Scalar\& sc,\par  Mat\& dst, const Mat\& mask=Mat());\newline
void bitwise\_xor(const MatND\& src1, const MatND\& src2,\par  MatND\& dst, const MatND\& mask=MatND());\newline
void bitwise\_xor(const MatND\& src1, const Scalar\& sc,\par  MatND\& dst, const MatND\& mask=MatND());}
\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array.
It must have the same size and same type as \\texttt{src1}}\n\\cvarg{sc}{Scalar; the second input parameter}\n\\cvarg{dst}{The destination array; it is reallocated to be of the same size and\n            the same type as \\texttt{src1}; see \\texttt{Mat::create}}\n\\cvarg{mask}{The optional operation mask, 8-bit single channel array;\n             specifies elements of the destination array to be changed}\n\\end{description}\n\nThe functions \\texttt{bitwise\\_xor} compute per-element bit-wise logical \"exclusive or\" operation\n\n\\begin{itemize}\n    \\item on two arrays\n    \\[\\texttt{dst}(I) = \\texttt{src1}(I) \\oplus \\texttt{src2}(I)\\quad\\texttt{if mask}(I)\\ne0\\]\n    \\item or array and a scalar:\n    \\[\\texttt{dst}(I) = \\texttt{src1}(I) \\oplus \\texttt{sc}\\quad\\texttt{if mask}(I)\\ne0\\]\n\\end{itemize}\n\nIn the case of floating-point arrays their machine-specific bit representations (usually IEEE754-compliant) are used for the operation. in the case of multi-channel arrays each channel is processed independently.\n\nSee also: \\hyperref[cppfunc.bitwise.and]{bitwise\\_and}, \\hyperref[cppfunc.bitwise.not]{bitwise\\_not}, \\hyperref[cppfunc.bitwise.or]{bitwise\\_or}\n\n\\cvCppFunc{calcCovarMatrix}\nCalculates covariation matrix of a set of vectors\n\n\\cvdefCpp{void calcCovarMatrix( const Mat* samples, int nsamples,\\par\n                      Mat\\& covar, Mat\\& mean,\\par\n                      int flags, int ctype=CV\\_64F);\\newline\nvoid calcCovarMatrix( const Mat\\& samples, Mat\\& covar, Mat\\& mean,\\par\n                      int flags, int ctype=CV\\_64F);}\n\\begin{description}\n\\cvarg{samples}{The samples, stored as separate matrices, or as rows or columns of a single matrix}\n\\cvarg{nsamples}{The number of samples when they are stored separately}\n\\cvarg{covar}{The output covariance matrix; it will have type=\\texttt{ctype} and square size}\n\\cvarg{mean}{The input or output (depending on the flags) array - the mean (average) vector of the input vectors}\n\\cvarg{flags}{The operation flags, a combination of the following values\n\\begin{description}\n\\cvarg{CV\\_COVAR\\_SCRAMBLED}{The output covariance matrix is calculated as:\n\\[\n \\texttt{scale} \\cdot [ \\texttt{vects} [0]- \\texttt{mean} ,\\texttt{vects} [1]- \\texttt{mean} ,...]^T \\cdot [\\texttt{vects} [0]-\\texttt{mean} ,\\texttt{vects} [1]-\\texttt{mean} ,...] \n\\],\nthat is, the covariance matrix will be $\\texttt{nsamples} \\times \\texttt{nsamples}$.\nSuch an unusual covariance matrix is used for fast PCA\nof a set of very large vectors (see, for example, the EigenFaces technique\nfor face recognition). Eigenvalues of this \"scrambled\" matrix will\nmatch the eigenvalues of the true covariance matrix and the \"true\"\neigenvectors can be easily calculated from the eigenvectors of the\n\"scrambled\" covariance matrix.}\n\\cvarg{CV\\_COVAR\\_NORMAL}{The output covariance matrix is calculated as:\n\\[\n \\texttt{scale} \\cdot [ \\texttt{vects} [0]- \\texttt{mean} ,\\texttt{vects} [1]- \\texttt{mean} ,...] \\cdot [\\texttt{vects} [0]-\\texttt{mean} ,\\texttt{vects} [1]-\\texttt{mean} ,...]^T \n\\],\nthat is, \\texttt{covar} will be a square matrix\nof the same size as the total number of elements in each\ninput vector. 
One and only one of \\texttt{CV\\_COVAR\\_SCRAMBLED} and\n\\texttt{CV\\_COVAR\\_NORMAL} must be specified}\n\\cvarg{CV\\_COVAR\\_USE\\_AVG}{If the flag is specified, the function does not calculate \\texttt{mean} from the input vectors, but, instead, uses the passed \\texttt{mean} vector. This is useful if \\texttt{mean} has been pre-computed or known a-priori, or if the covariance matrix is calculated by parts - in this case, \\texttt{mean} is not a mean vector of the input sub-set of vectors, but rather the mean vector of the whole set.}\n\\cvarg{CV\\_COVAR\\_SCALE}{If the flag is specified, the covariance matrix is scaled. In the \"normal\" mode \\texttt{scale} is \\texttt{1./nsamples}; in the \"scrambled\" mode \\texttt{scale} is the reciprocal of the total number of elements in each input vector. By default (if the flag is not specified) the covariance matrix is not scaled (i.e. \\texttt{scale=1}).}\n\n\\cvarg{CV\\_COVAR\\_ROWS}{[Only useful in the second variant of the function] The flag means that all the input vectors are stored as rows of the \\texttt{samples} matrix. \\texttt{mean} should be a single-row vector in this case.}\n\\cvarg{CV\\_COVAR\\_COLS}{[Only useful in the second variant of the function] The flag means that all the input vectors are stored as columns of the \\texttt{samples} matrix. \\texttt{mean} should be a single-column vector in this case.}\n\n\\end{description}}\n\\end{description}\n\nThe functions \\texttt{calcCovarMatrix} calculate the covariance matrix\nand, optionally, the mean vector of the set of input vectors.\n\nSee also: \\cvCppCross{PCA}, \\cvCppCross{mulTransposed}, \\cvCppCross{Mahalanobis}\n\n\\cvCppFunc{cartToPolar}\nCalculates the magnitude and angle of 2d vectors.\n\n\\cvdefCpp{void cartToPolar(const Mat\\& x, const Mat\\& y,\\par\n                 Mat\\& magnitude, Mat\\& angle,\\par\n                 bool angleInDegrees=false);}\n\\begin{description}\n\\cvarg{x}{The array of x-coordinates; must be single-precision or double-precision floating-point array}\n\\cvarg{y}{The array of y-coordinates; it must have the same size and same type as \\texttt{x}}\n\\cvarg{magnitude}{The destination array of magnitudes of the same size and same type as \\texttt{x}}\n\\cvarg{angle}{The destination array of angles of the same size and same type as \\texttt{x}.\nThe angles are measured in radians $(0$ to $2 \\pi )$ or in degrees (0 to 360 degrees).}\n\\cvarg{angleInDegrees}{The flag indicating whether the angles are measured in radians, which is default mode, or in degrees}\n\\end{description}\n\nThe function \\texttt{cartToPolar} calculates either the magnitude, angle, or both of every 2d vector (x(I),y(I)):\n\n\\[\n\\begin{array}{l}\n\\texttt{magnitude}(I)=\\sqrt{\\texttt{x}(I)^2+\\texttt{y}(I)^2},\\\\\n\\texttt{angle}(I)=\\texttt{atan2}(\\texttt{y}(I), \\texttt{x}(I))[\\cdot180/\\pi]\n\\end{array}\n\\]\n\nThe angles are calculated with $\\sim\\,0.3^\\circ$ accuracy. 
For the (0,0) point, the angle is set to 0.

\cvCppFunc{checkRange}
Checks every element of an input array for invalid values.

\cvdefCpp{bool checkRange(const Mat\& src, bool quiet=true, Point* pos=0,\par
                double minVal=-DBL\_MAX, double maxVal=DBL\_MAX);\newline
bool checkRange(const MatND\& src, bool quiet=true, int* pos=0,\par
                double minVal=-DBL\_MAX, double maxVal=DBL\_MAX);}
\begin{description}
\cvarg{src}{The array to check}
\cvarg{quiet}{The flag indicating whether the functions quietly return false when the array elements are out of range, or throw an exception}
\cvarg{pos}{The optional output parameter, where the position of the first outlier is stored. In the second function \texttt{pos}, when not NULL, must be a pointer to an array of \texttt{src.dims} elements}
\cvarg{minVal}{The inclusive lower boundary of the valid value range}
\cvarg{maxVal}{The exclusive upper boundary of the valid value range}
\end{description}

The functions \texttt{checkRange} check that every array element is
neither NaN nor $\pm \infty $. When \texttt{minVal > -DBL\_MAX} and \texttt{maxVal < DBL\_MAX}, the functions also check that
each value is between \texttt{minVal} and \texttt{maxVal}. In the case of multi-channel arrays each channel is processed independently.
If some values are out of range, the position of the first outlier is stored in \texttt{pos} (when $\texttt{pos}\ne0$), and then the functions either return false (when \texttt{quiet=true}) or throw an exception.


\cvCppFunc{compare}
Performs per-element comparison of two arrays or of an array and a scalar value.

\cvdefCpp{void compare(const Mat\& src1, const Mat\& src2, Mat\& dst, int cmpop);\newline
void compare(const Mat\& src1, double value, \par Mat\& dst, int cmpop);\newline
void compare(const MatND\& src1, const MatND\& src2, \par MatND\& dst, int cmpop);\newline
void compare(const MatND\& src1, double value, \par MatND\& dst, int cmpop);}
\begin{description}
\cvarg{src1}{The first source array}
\cvarg{src2}{The second source array; must have the same size and same type as \texttt{src1}}
\cvarg{value}{The scalar value to compare each array element with}
\cvarg{dst}{The destination array; will have the same size as \texttt{src1} and type=\texttt{CV\_8UC1}}
\cvarg{cmpop}{The flag specifying the relation between the elements to be checked
\begin{description}
 \cvarg{CMP\_EQ}{$\texttt{src1}(I) = \texttt{src2}(I)$ or $\texttt{src1}(I) = \texttt{value}$}
 \cvarg{CMP\_GT}{$\texttt{src1}(I) > \texttt{src2}(I)$ or $\texttt{src1}(I) > \texttt{value}$}
 \cvarg{CMP\_GE}{$\texttt{src1}(I) \geq \texttt{src2}(I)$ or $\texttt{src1}(I) \geq \texttt{value}$}
 \cvarg{CMP\_LT}{$\texttt{src1}(I) < \texttt{src2}(I)$ or $\texttt{src1}(I) < \texttt{value}$}
 \cvarg{CMP\_LE}{$\texttt{src1}(I) \leq \texttt{src2}(I)$ or $\texttt{src1}(I) \leq \texttt{value}$}
 \cvarg{CMP\_NE}{$\texttt{src1}(I) \ne \texttt{src2}(I)$ or $\texttt{src1}(I) \ne \texttt{value}$}
\end{description}}
\end{description}

The functions \texttt{compare} compare each element of \texttt{src1} with the corresponding element of \texttt{src2}
or with the real scalar \texttt{value}. When the comparison result is true, the corresponding element of the destination array is set to 255, otherwise it is set to 0:
\begin{itemize}
    \item \texttt{dst(I) = src1(I) cmpop src2(I) ?
255 : 0}\n    \\item \\texttt{dst(I) = src1(I) cmpop value ? 255 : 0}\n\\end{itemize}\n\nThe comparison operations can be replaced with the equivalent matrix expressions:\n\n\\begin{lstlisting}\nMat dst1 = src1 >= src2;\nMat dst2 = src1 < 8;\n...\n\\end{lstlisting}\n\nSee also: \\cvCppCross{checkRange}, \\cvCppCross{min}, \\cvCppCross{max}, \\cvCppCross{threshold}, \\cross{Matrix Expressions}\n\n\\cvCppFunc{completeSymm}\nCopies the lower or the upper half of a square matrix to another half.\n\n\\cvdefCpp{void completeSymm(Mat\\& mtx, bool lowerToUpper=false);}\n\\begin{description}\n\\cvarg{mtx}{Input-output floating-point square matrix}\n\\cvarg{lowerToUpper}{If true, the lower half is copied to the upper half, otherwise the upper half is copied to the lower half}\n\\end{description}\n\nThe function \\texttt{completeSymm} copies the lower half of a square matrix to its another half; the matrix diagonal remains unchanged:\n\n\\begin{itemize}\n    \\item $\\texttt{mtx}_{ij}=\\texttt{mtx}_{ji}$ for $i > j$ if \\texttt{lowerToUpper=false}\n    \\item $\\texttt{mtx}_{ij}=\\texttt{mtx}_{ji}$ for $i < j$ if \\texttt{lowerToUpper=true}\n\\end{itemize}\n\nSee also: \\cvCppCross{flip}, \\cvCppCross{transpose}\n\n\\cvCppFunc{convertScaleAbs}\nScales, computes absolute values and converts the result to 8-bit.\n\n\\cvdefCpp{void convertScaleAbs(const Mat\\& src, Mat\\& dst, double alpha=1, double beta=0);}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array}\n\\cvarg{alpha}{The optional scale factor}\n\\cvarg{beta}{The optional delta added to the scaled values}\n\\end{description}\n\nOn each element of the input array the function \\texttt{convertScaleAbs} performs 3 operations sequentially: scaling, taking absolute value, conversion to unsigned 8-bit type:\n\\[\\texttt{dst}(I)=\\texttt{saturate\\_cast<uchar>}(|\\texttt{src}(I)*\\texttt{alpha} + \\texttt{beta}|)\\]\n\nin the case of multi-channel arrays the function processes each channel independently. 
When the output is not 8-bit, the operation can be emulated by calling the \texttt{Mat::convertTo} method (or by using matrix expressions) and then by computing the absolute value of the result, for example:

\begin{lstlisting}
Mat_<float> A(30,30);
randu(A, Scalar(-100), Scalar(100));
Mat_<float> B = A*5 + 3;
B = abs(B);
// Mat_<float> B = abs(A*5+3) will also do the job,
// but it will allocate a temporary matrix
\end{lstlisting}

See also: \cvCppCross{Mat::convertTo}, \cvCppCross{abs}

\cvCppFunc{countNonZero}
Counts non-zero array elements.

\cvdefCpp{int countNonZero( const Mat\& mtx );\newline
int countNonZero( const MatND\& mtx );}
\begin{description}
\cvarg{mtx}{Single-channel array}
\end{description}

The function \texttt{countNonZero} returns the number of non-zero elements in \texttt{mtx}:

\[ \sum_{I:\;\texttt{mtx}(I)\ne0} 1 \]

See also: \cvCppCross{mean}, \cvCppCross{meanStdDev}, \cvCppCross{norm}, \cvCppCross{minMaxLoc}, \cvCppCross{calcCovarMatrix}

\cvCppFunc{cubeRoot}
Computes the cube root of the argument

\cvdefCpp{float cubeRoot(float val);}
\begin{description}
\cvarg{val}{The function argument}
\end{description}

The function \texttt{cubeRoot} computes $\sqrt[3]{\texttt{val}}$.
Negative arguments are handled correctly; \emph{NaN} and $\pm\infty$ are not handled.
The accuracy approaches the maximum possible accuracy for single-precision data.

\cvCppFunc{cvarrToMat}
Converts CvMat, IplImage or CvMatND to cv::Mat.

\cvdefCpp{Mat cvarrToMat(const CvArr* src, bool copyData=false, bool allowND=true, int coiMode=0);}
\begin{description}
\cvarg{src}{The source \texttt{CvMat}, \texttt{IplImage} or \texttt{CvMatND}}
\cvarg{copyData}{When it is false (the default value), no data is copied, only the new header is created.
 In this case the original array should not be deallocated while the new matrix header is used. When the parameter is true, all the data is copied, and the user may deallocate the original array right after the conversion}
\cvarg{allowND}{When it is true (the default value), \texttt{CvMatND} is converted to \texttt{Mat} if it's possible
(e.g., when the data is contiguous). If it's not possible, or when the parameter is false, the function will report an error}
\cvarg{coiMode}{The parameter specifies how the IplImage COI (when set) is handled.
\begin{itemize}
    \item If \texttt{coiMode=0}, the function will report an error if COI is set.
    \item If \texttt{coiMode=1}, the function will never report an error; instead it returns the header to the whole original image and the user will have to check and process COI manually, see \cvCppCross{extractImageCOI}.
%    \item If \texttt{coiMode=2}, the function will extract the COI into the separate matrix. \emph{This is also done when the COI is set and }\texttt{copyData=true}}
\end{itemize}}
\end{description}

The function \texttt{cvarrToMat} converts a \cross{CvMat}, \cross{IplImage} or \cross{CvMatND} header to a \cvCppCross{Mat} header, and optionally duplicates the underlying data. The constructed header is returned by the function.

When \texttt{copyData=false}, the conversion is done really fast (in O(1) time) and the newly created matrix header will have \texttt{refcount=0}, which means that no reference counting is done for the matrix data, and the user has to preserve the data until the new header is destructed.
Otherwise, when \\texttt{copyData=true}, the new buffer will be allocated and managed as if you created a new matrix from scratch and copy the data there. That is,\n\\texttt{cvarrToMat(src, true) $\\sim$ cvarrToMat(src, false).clone()} (assuming that COI is not set). The function provides uniform way of supporting \\cross{CvArr} paradigm in the code that is migrated to use new-style data structures internally. The reverse transformation, from \\cvCppCross{Mat} to \\cross{CvMat} or \\cross{IplImage} can be done by simple assignment:\n\n\\begin{lstlisting}\nCvMat* A = cvCreateMat(10, 10, CV_32F);\ncvSetIdentity(A);\nIplImage A1; cvGetImage(A, &A1);\nMat B = cvarrToMat(A);\nMat B1 = cvarrToMat(&A1);\nIplImage C = B;\nCvMat C1 = B1;\n// now A, A1, B, B1, C and C1 are different headers\n// for the same 10x10 floating-point array.\n// note, that you will need to use \"&\"\n// to pass C & C1 to OpenCV functions, e.g:\nprintf(\"%g\", cvDet(&C1));\n\\end{lstlisting}\n\nNormally, the function is used to convert an old-style 2D array (\\cross{CvMat} or \\cross{IplImage}) to \\texttt{Mat}, however, the function can also take \\cross{CvMatND} on input and create \\cvCppCross{Mat} for it, if it's possible. And for \\texttt{CvMatND A} it is possible if and only if \\texttt{A.dim[i].size*A.dim.step[i] == A.dim.step[i-1]} for all or for all but one \\texttt{i, 0 < i < A.dims}. That is, the matrix data should be continuous or it should be representable as a sequence of continuous matrices. By using this function in this way, you can process \\cross{CvMatND} using arbitrary element-wise function. But for more complex operations, such as filtering functions, it will not work, and you need to convert \\cross{CvMatND} to \\cvCppCross{MatND} using the corresponding constructor of the latter.\n\nThe last parameter, \\texttt{coiMode}, specifies how to react on an image with COI set: by default it's 0, and then the function reports an error when an image with COI comes in. And \\texttt{coiMode=1} means that no error is signaled - user has to check COI presence and handle it manually. The modern structures, such as \\cvCppCross{Mat} and \\cvCppCross{MatND} do not support COI natively. To process individual channel of an new-style array, you will need either to organize loop over the array (e.g. using matrix iterators) where the channel of interest will be processed, or extract the COI using \\cvCppCross{mixChannels} (for new-style arrays) or \\cvCppCross{extractImageCOI} (for old-style arrays), process this individual channel and insert it back to the destination array if need (using \\cvCppCross{mixChannel} or \\cvCppCross{insertImageCOI}, respectively).\n\nSee also: \\cvCppCross{cvGetImage}, \\cvCppCross{cvGetMat}, \\cvCppCross{cvGetMatND}, \\cvCppCross{extractImageCOI}, \\cvCppCross{insertImageCOI}, \\cvCppCross{mixChannels}\n\n\n\\cvCppFunc{dct}\nPerforms a forward or inverse discrete cosine transform of 1D or 2D array\n\n\\cvdefCpp{void dct(const Mat\\& src, Mat\\& dst, int flags=0);}\n\\begin{description}\n\\cvarg{src}{The source floating-point array}\n\\cvarg{dst}{The destination array; will have the same size and same type as \\texttt{src}}\n\\cvarg{flags}{Transformation flags, a combination of the following values\n\\begin{description}\n\\cvarg{DCT\\_INVERSE}{do an inverse 1D or 2D transform instead of the default forward transform.}\n\\cvarg{DCT\\_ROWS}{do a forward or inverse transform of every individual row of the input matrix. 
This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms and so forth.}
\end{description}}
\end{description}

The function \texttt{dct} performs a forward or inverse discrete cosine transform (DCT) of a 1D or 2D floating-point array:

Forward Cosine transform of a 1D vector of $N$ elements:
\[Y = C^{(N)} \cdot X\]
where
\[C^{(N)}_{jk}=\sqrt{\alpha_j/N}\cos\left(\frac{\pi(2k+1)j}{2N}\right)\]
and $\alpha_0=1$, $\alpha_j=2$ for $j > 0$.

Inverse Cosine transform of a 1D vector of $N$ elements:
\[X = \left(C^{(N)}\right)^{-1} \cdot Y = \left(C^{(N)}\right)^T \cdot Y\]
(since $C^{(N)}$ is an orthogonal matrix, $C^{(N)} \cdot \left(C^{(N)}\right)^T = I$)

Forward Cosine transform of a 2D $M \times N$ matrix:
\[Y = C^{(M)} \cdot X \cdot \left(C^{(N)}\right)^T\]

Inverse Cosine transform of a 2D $M \times N$ matrix:
\[X = \left(C^{(M)}\right)^T \cdot Y \cdot C^{(N)}\]

The function chooses the mode of operation by looking at the flags and size of the input array:
\begin{itemize}
    \item if \texttt{(flags \& DCT\_INVERSE) == 0}, the function does a forward 1D or 2D transform, otherwise it is an inverse 1D or 2D transform.
    \item if \texttt{(flags \& DCT\_ROWS) $\ne$ 0}, the function performs a 1D transform of each row.
    \item otherwise, if the array is a single column or a single row, the function performs a 1D transform
    \item otherwise it performs a 2D transform.
\end{itemize}

\textbf{Important note}: currently cv::dct supports only even-size arrays (2, 4, 6, ...). For data analysis and approximation you can pad the array when necessary.

Also, the function's performance depends very much, and not monotonically, on the array size, see \cvCppCross{getOptimalDFTSize}. In the current implementation DCT of a vector of size \texttt{N} is computed via DFT of a vector of size \texttt{N/2}, thus the optimal DCT size $\texttt{N}^*\geq\texttt{N}$ can be computed as:

\begin{lstlisting}
size_t getOptimalDCTSize(size_t N) { return 2*getOptimalDFTSize((N+1)/2); }
\end{lstlisting}

See also: \cvCppCross{dft}, \cvCppCross{getOptimalDFTSize}, \cvCppCross{idct}


\cvCppFunc{dft}
Performs a forward or inverse Discrete Fourier transform of a 1D or 2D floating-point array.

\cvdefCpp{void dft(const Mat\& src, Mat\& dst, int flags=0, int nonzeroRows=0);}
\begin{description}
\cvarg{src}{The source array, real or complex}
\cvarg{dst}{The destination array, whose size and type depend on the \texttt{flags}}
\cvarg{flags}{Transformation flags, a combination of the following values
\begin{description}
\cvarg{DFT\_INVERSE}{do an inverse 1D or 2D transform instead of the default forward transform.}
\cvarg{DFT\_SCALE}{scale the result: divide it by the number of array elements. Normally, it is combined with \texttt{DFT\_INVERSE}.}
\cvarg{DFT\_ROWS}{do a forward or inverse transform of every individual row of the input matrix.
This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms and so forth.}\n\\cvarg{DFT\\_COMPLEX\\_OUTPUT}{then the function performs forward transformation of 1D or 2D real array, the result, though being a complex array, has complex-conjugate symmetry (\\emph{CCS}), see the description below. Such an array can be packed into real array of the same size as input, which is the fastest option and which is what the function does by default. However, you may wish to get the full complex array (for simpler spectrum analysis etc.). Pass the flag to tell the function to produce full-size complex output array.}\n\\cvarg{DFT\\_REAL\\_OUTPUT}{then the function performs inverse transformation of 1D or 2D complex array, the result is normally a complex array of the same size. However, if the source array has conjugate-complex symmetry (for example, it is a result of forward transformation with \\texttt{DFT\\_COMPLEX\\_OUTPUT} flag), then the output is real array. While the function itself does not check whether the input is symmetrical or not, you can pass the flag and then the function will assume the symmetry and produce the real output array. Note that when the input is packed real array and inverse transformation is executed, the function treats the input as packed complex-conjugate symmetrical array, so the output will also be real array}\n\\end{description}}\n\\cvarg{nonzeroRows}{When the parameter $\\ne 0$, the function assumes that only the first \\texttt{nonzeroRows} rows of the input array (\\texttt{DFT\\_INVERSE} is not set) or only the first \\texttt{nonzeroRows} of the output array (\\texttt{DFT\\_INVERSE} is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. 
This technique is very useful for computing array cross-correlation or convolution using DFT.}
\end{description}

Forward Fourier transform of a 1D vector of $N$ elements:
\[Y = F^{(N)} \cdot X,\]
where $F^{(N)}_{jk}=\exp(-2\pi i j k/N)$ and $i=\sqrt{-1}$

Inverse Fourier transform of a 1D vector of $N$ elements:
\[
\begin{array}{l}
X'= \left(F^{(N)}\right)^{-1} \cdot Y = \left(F^{(N)}\right)^* \cdot Y \\
X = (1/N) \cdot X',
\end{array}
\]
where $F^*=\left(\textrm{Re}(F^{(N)})-i \cdot \textrm{Im}(F^{(N)})\right)^T$

Forward Fourier transform of a 2D array of $M \times N$ elements:
\[Y = F^{(M)} \cdot X \cdot F^{(N)}\]

Inverse Fourier transform of a 2D array of $M \times N$ elements:
\[
\begin{array}{l}
X'= \left(F^{(M)}\right)^* \cdot Y \cdot \left(F^{(N)}\right)^*\\
X = \frac{1}{M \cdot N} \cdot X'
\end{array}
\]

In the case of real (single-channel) data, the packed format called \emph{CCS} (complex-conjugate-symmetrical), which was borrowed from IPL, is used to represent the result of a forward Fourier transform or the input for an inverse Fourier transform:

\[\begin{bmatrix}
Re Y_{0,0} & Re Y_{0,1} & Im Y_{0,1} & Re Y_{0,2} & Im Y_{0,2} & \cdots & Re Y_{0,N/2-1} & Im Y_{0,N/2-1} & Re Y_{0,N/2} \\
Re Y_{1,0} & Re Y_{1,1} & Im Y_{1,1} & Re Y_{1,2} & Im Y_{1,2} & \cdots & Re Y_{1,N/2-1} & Im Y_{1,N/2-1} & Re Y_{1,N/2} \\
Im Y_{1,0} & Re Y_{2,1} & Im Y_{2,1} & Re Y_{2,2} & Im Y_{2,2} & \cdots & Re Y_{2,N/2-1} & Im Y_{2,N/2-1} & Im Y_{1,N/2} \\
\hdotsfor{9} \\
Re Y_{M/2-1,0} &  Re Y_{M-3,1}  & Im Y_{M-3,1} & \hdotsfor{3} & Re Y_{M-3,N/2-1} & Im Y_{M-3,N/2-1}& Re Y_{M/2-1,N/2} \\
Im Y_{M/2-1,0} &  Re Y_{M-2,1}  & Im Y_{M-2,1} & \hdotsfor{3} & Re Y_{M-2,N/2-1} & Im Y_{M-2,N/2-1}& Im Y_{M/2-1,N/2} \\
Re Y_{M/2,0}  &  Re Y_{M-1,1} &  Im Y_{M-1,1} & \hdotsfor{3} & Re Y_{M-1,N/2-1} & Im Y_{M-1,N/2-1}& Re Y_{M/2,N/2}
\end{bmatrix}
\]

In the case of a 1D transform of a real vector, the output will look like the first row of the above matrix.

So, the function chooses the operation mode depending on the flags and size of the input array:
\begin{itemize}
    \item if \texttt{DFT\_ROWS} is set or the input array has a single row or single column, then the function performs a 1D forward or inverse transform (of each row of a matrix when \texttt{DFT\_ROWS} is set); otherwise it performs a 2D transform.
    \item if the input array is real and \texttt{DFT\_INVERSE} is not set, the function does a forward 1D or 2D transform:
    \begin{itemize}
        \item when \texttt{DFT\_COMPLEX\_OUTPUT} is set then the output will be a complex matrix of the same size as the input.
        \item otherwise the output will be a real matrix of the same size as the input. In the case of a 2D transform it will use the packed format as shown above; in the case of a single 1D transform it will look like the first row of the above matrix; in the case of multiple 1D transforms (when using the \texttt{DFT\_ROWS} flag) each row of the output matrix will look like the first row of the above matrix.
    \end{itemize}
    \item otherwise, if the input array is complex and \texttt{DFT\_REAL\_OUTPUT} is not set, then the output will be a complex array of the same size as the input and the function will perform the forward or inverse 1D or 2D transform of the whole input array or of each row of the input array independently, depending on the flags \texttt{DFT\_INVERSE} and \texttt{DFT\_ROWS}.
    \item otherwise, i.e. when \texttt{DFT\_INVERSE} is set and the input array is real, or it is complex but \texttt{DFT\_REAL\_OUTPUT} is set, the output will be a real array of the same size as the input, and the function will perform a 1D or 2D inverse transformation of the whole input array or of each individual row, depending on the flags \texttt{DFT\_INVERSE} and \texttt{DFT\_ROWS}.
\end{itemize}

The scaling is done after the transformation if \texttt{DFT\_SCALE} is set.

Unlike \cvCppCross{dct}, the function supports arrays of arbitrary size, but only those arrays are processed efficiently whose sizes can be factorized in a product of small prime numbers (2, 3 and 5 in the current implementation). Such an efficient DFT size can be computed using the \cvCppCross{getOptimalDFTSize} method.
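For instance, a real 1D signal can be transformed to a full-size complex spectrum and back (a minimal sketch):

\begin{lstlisting}
Mat x(1, 8, CV_32F), fx, y;
randu(x, Scalar::all(0), Scalar::all(1));
// full-size complex spectrum instead of the packed CCS format
dft(x, fx, DFT_COMPLEX_OUTPUT);
// inverse transform back to a real array; y should be close to x
dft(fx, y, DFT_INVERSE | DFT_SCALE | DFT_REAL_OUTPUT);
\end{lstlisting}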
when \\texttt{DFT\\_INVERSE} is set, the input array is real, or it is complex but \\texttt{DFT\\_REAL\\_OUTPUT} is set, the output will be a real array of the same size as input, and the function will perform 1D or 2D inverse transformation of the whole input array or each individual row, depending on the flags \\texttt{DFT\\_INVERSE} and \\texttt{DFT\\_ROWS}.\n\\end{itemize}\n\nThe scaling is done after the transformation if \\texttt{DFT\\_SCALE} is set.\n\nUnlike \\cvCppCross{dct}, the function supports arrays of arbitrary size, but only those arrays are processed efficiently, which sizes can be factorized in a product of small prime numbers (2, 3 and 5 in the current implementation). Such an efficient DFT size can be computed using \\cvCppCross{getOptimalDFTSize} method.\n\nHere is the sample on how to compute DFT-based convolution of two 2D real arrays:\n\\begin{lstlisting}\nvoid convolveDFT(const Mat& A, const Mat& B, Mat& C)\n{\n    // reallocate the output array if needed\n    C.create(abs(A.rows - B.rows)+1, abs(A.cols - B.cols)+1, A.type());\n    Size dftSize;\n    // compute the size of DFT transform\n    dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);\n    dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);\n    \n    // allocate temporary buffers and initialize them with 0's\n    Mat tempA(dftSize, A.type(), Scalar::all(0));\n    Mat tempB(dftSize, B.type(), Scalar::all(0));\n    \n    // copy A and B to the top-left corners of tempA and tempB, respectively\n    Mat roiA(tempA, Rect(0,0,A.cols,A.rows));\n    A.copyTo(roiA);\n    Mat roiB(tempB, Rect(0,0,B.cols,B.rows));\n    B.copyTo(roiB);\n    \n    // now transform the padded A & B in-place;\n    // use \"nonzeroRows\" hint for faster processing\n    dft(tempA, tempA, 0, A.rows);\n    dft(tempB, tempB, 0, B.rows);\n    \n    // multiply the spectrums;\n    // the function handles packed spectrum representations well\n    mulSpectrums(tempA, tempB, tempA);\n    \n    // transform the product back from the frequency domain.\n    // Even though all the result rows will be non-zero,\n    // we need only the first C.rows of them, and thus we\n    // pass nonzeroRows == C.rows\n    dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);\n    \n    // now copy the result back to C.\n    tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);\n    \n    // all the temporary buffers will be deallocated automatically\n}\n\\end{lstlisting}\n\nWhat can be optimized in the above sample?\n\\begin{itemize}\n    \\item since we passed $\\texttt{nonzeroRows} \\ne 0$ to the forward transform calls and\n    since we copied \\texttt{A}/\\texttt{B} to the top-left corners of \\texttt{tempA}/\\texttt{tempB}, respectively,\n    it's not necessary to clear the whole \\texttt{tempA} and \\texttt{tempB};\n    it is only necessary to clear the \\texttt{tempA.cols - A.cols} (\\texttt{tempB.cols - B.cols})\n    rightmost columns of the matrices.\n    \\item this DFT-based convolution does not have to be applied to the whole big arrays,\n    especially if \\texttt{B} is significantly smaller than \\texttt{A} or vice versa.\n    Instead, we can compute convolution by parts. For that we need to split the destination array\n    \\texttt{C} into multiple tiles and for each tile estimate, which parts of \\texttt{A} and \\texttt{B}\n    are required to compute convolution in this tile. 
If the tiles in \\texttt{C} are too small,\n    the speed will decrease a lot, because of repeated work - in the ultimate case, when each tile in \\texttt{C} is a single pixel,\n    the algorithm becomes equivalent to the naive convolution algorithm.\n    If the tiles are too big, the temporary arrays \\texttt{tempA} and \\texttt{tempB} become too big\n    and there is also slowdown because of bad cache locality. So there is optimal tile size somewhere in the middle.\n    \\item if the convolution is done by parts, since different tiles in \\texttt{C} can be computed in parallel, the loop can be threaded.\n\\end{itemize}\n\nAll of the above improvements have been implemented in \\cvCppCross{matchTemplate} and \\cvCppCross{filter2D}, therefore, by using them, you can get even better performance than with the above theoretically optimal implementation (though, those two functions actually compute cross-correlation, not convolution, so you will need to \"flip\" the kernel or the image around the center using \\cvCppCross{flip}).\n\nSee also: \\cvCppCross{dct}, \\cvCppCross{getOptimalDFTSize}, \\cvCppCross{mulSpectrums}, \\cvCppCross{filter2D}, \\cvCppCross{matchTemplate}, \\cvCppCross{flip}, \\cvCppCross{cartToPolar}, \\cvCppCross{magnitude}, \\cvCppCross{phase}\n\n\\cvCppFunc{divide}\n\nPerforms per-element division of two arrays or a scalar by an array.\n\n\\cvdefCpp{void divide(const Mat\\& src1, const Mat\\& src2, \\par Mat\\& dst, double scale=1);\\newline\nvoid divide(double scale, const Mat\\& src2, Mat\\& dst);\\newline\nvoid divide(const MatND\\& src1, const MatND\\& src2, \\par MatND\\& dst, double scale=1);\\newline\nvoid divide(double scale, const MatND\\& src2, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array; should have the same size and same type as \\texttt{src1}}\n\\cvarg{scale}{Scale factor}\n\\cvarg{dst}{The destination array; will have the same size and same type as \\texttt{src2}}\n\\end{description}\n\nThe functions \\texttt{divide} divide one array by another:\n\\[\\texttt{dst(I) = saturate(src1(I)*scale/src2(I))} \\]\n\nor a scalar by array, when there is no \\texttt{src1}:\n\\[\\texttt{dst(I) = saturate(scale/src2(I))} \\]\n\nThe result will have the same type as \\texttt{src1}. When \\texttt{src2(I)=0}, \\texttt{dst(I)=0} too.\n\nSee also: \\cvCppCross{multiply}, \\cvCppCross{add}, \\cvCppCross{subtract}, \\cross{Matrix Expressions}\n\n\\cvCppFunc{determinant}\n\nReturns determinant of a square floating-point matrix.\n\n\\cvdefCpp{double determinant(const Mat\\& mtx);}\n\\begin{description}\n\\cvarg{mtx}{The input matrix; must have \\texttt{CV\\_32FC1} or \\texttt{CV\\_64FC1} type and square size}\n\\end{description}\n\nThe function \\texttt{determinant} computes and returns determinant of the specified matrix. 
For small matrices (\texttt{mtx.cols=mtx.rows<=3})
the direct method is used; for larger matrices the function uses LU factorization.

For symmetric positive-definite matrices, it is also possible to compute \cvCppCross{SVD}: $\texttt{mtx}=U \cdot W \cdot V^T$ and then calculate the determinant as a product of the diagonal elements of $W$.

See also: \cvCppCross{SVD}, \cvCppCross{trace}, \cvCppCross{invert}, \cvCppCross{solve}, \cross{Matrix Expressions}

\cvCppFunc{eigen}
Computes eigenvalues and eigenvectors of a symmetric matrix.

\cvdefCpp{bool eigen(const Mat\& src, Mat\& eigenvalues, \par int lowindex=-1, int highindex=-1);\newline
bool eigen(const Mat\& src, Mat\& eigenvalues, \par Mat\& eigenvectors, int lowindex=-1,\par
int highindex=-1);}
\begin{description}
\cvarg{src}{The input matrix; must have \texttt{CV\_32FC1} or \texttt{CV\_64FC1} type, square size and be symmetric: $\texttt{src}^T=\texttt{src}$}
\cvarg{eigenvalues}{The output vector of eigenvalues of the same type as \texttt{src}; the eigenvalues are stored in descending order.}
\cvarg{eigenvectors}{The output matrix of eigenvectors; it will have the same size and the same type as \texttt{src}; the eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues}
\cvarg{lowindex}{Optional index of the largest eigenvalue/-vector to calculate.
(See below.)}
\cvarg{highindex}{Optional index of the smallest eigenvalue/-vector to calculate.
(See below.)}
\end{description}

The functions \texttt{eigen} compute just the eigenvalues, or the eigenvalues and eigenvectors of the symmetric matrix \texttt{src}:

\begin{lstlisting}
src*eigenvectors(i,:)' = eigenvalues(i)*eigenvectors(i,:)' (in MATLAB notation)
\end{lstlisting}

If either lowindex or highindex is supplied, the other is required, too.
Indexing is 0-based. Example: To calculate the largest eigenvector/-value set
lowindex = highindex = 0.
For legacy reasons this function always returns a square matrix the same size
as the source matrix with eigenvectors and a vector the length of the source
matrix with eigenvalues. The selected eigenvectors/-values are always in the
first highindex - lowindex + 1 rows.

See also: \cvCppCross{SVD}, \cvCppCross{completeSymm}, \cvCppCross{PCA}

\cvCppFunc{exp}
Calculates the exponent of every array element.

\cvdefCpp{void exp(const Mat\& src, Mat\& dst);\newline
void exp(const MatND\& src, MatND\& dst);}
\begin{description}
\cvarg{src}{The source array}
\cvarg{dst}{The destination array; will have the same size and same type as \texttt{src}}
\end{description}

The function \texttt{exp} calculates the exponent of every element of the input array:

\[
\texttt{dst}[I] = e^{\texttt{src}(I)}
\]

The maximum relative error is about $7 \times 10^{-6}$ for single-precision and less than $10^{-10}$ for double-precision. Currently, the function converts denormalized values to zeros on output. Special values (NaN, $\pm \infty$) are not handled.

See also: \cvCppCross{log}, \cvCppCross{cartToPolar}, \cvCppCross{polarToCart}, \cvCppCross{phase}, \cvCppCross{pow}, \cvCppCross{sqrt}, \cvCppCross{magnitude}

\cvCppFunc{extractImageCOI}

Extracts the selected image channel

\cvdefCpp{void extractImageCOI(const CvArr* src, Mat\& dst, int coi=-1);}
\begin{description}
\cvarg{src}{The source array.
It should be a pointer to \\cross{CvMat} or \\cross{IplImage}}\n\\cvarg{dst}{The destination array; will be single-channel and have the same size and the same depth as \\texttt{src}}\n\\cvarg{coi}{If the parameter is \\texttt{>=0}, it specifies the channel to extract;\nif it is \\texttt{<0}, \\texttt{src} must be a pointer to \\texttt{IplImage} with a valid COI set - then the selected COI is extracted.}\n\\end{description}\n\nThe function \\texttt{extractImageCOI} is used to extract an image COI from an old-style array and put the result into a new-style C++ matrix. As usual, the destination matrix is reallocated using \\texttt{Mat::create} if needed.\n\nTo extract a channel from a new-style matrix, use \\cvCppCross{mixChannels} or \\cvCppCross{split}.\n\nSee also: \\cvCppCross{mixChannels}, \\cvCppCross{split}, \\cvCppCross{merge}, \\cvCppCross{cvarrToMat}, \\cvCppCross{cvSetImageCOI}, \\cvCppCross{cvGetImageCOI}\n\n\n\\cvCppFunc{fastAtan2}\nCalculates the angle of a 2D vector in degrees.\n\n\\cvdefCpp{float fastAtan2(float y, float x);}\n\\begin{description}\n\\cvarg{x}{x-coordinate of the vector}\n\\cvarg{y}{y-coordinate of the vector}\n\\end{description}\n\nThe function \\texttt{fastAtan2} calculates the full-range angle of an input 2D vector. The angle is \nmeasured in degrees and varies from $0^\\circ$ to $360^\\circ$. The accuracy is about $0.3^\\circ$.\n\n\\cvCppFunc{flip}\nFlips a 2D array around vertical, horizontal or both axes.\n\n\\cvdefCpp{void flip(const Mat\\& src, Mat\\& dst, int flipCode);}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array; will have the same size and same type as \\texttt{src}}\n\\cvarg{flipCode}{Specifies how to flip the array:\n0 means flipping around the x-axis, positive (e.g., 1) means flipping around the y-axis, and negative (e.g., -1) means flipping around both axes. See also the discussion below for the formulas.}\n\\end{description}\n\nThe function \\texttt{flip} flips the array in one of three different ways (row and column indices are 0-based):\n\n\\[\n\\texttt{dst}_{ij} = \\forkthree\n{\\texttt{src}_{\\texttt{src.rows}-i-1,j}}{if \\texttt{flipCode} = 0}\n{\\texttt{src}_{i,\\texttt{src.cols}-j-1}}{if \\texttt{flipCode} > 0}\n{\\texttt{src}_{\\texttt{src.rows}-i-1,\\texttt{src.cols}-j-1}}{if \\texttt{flipCode} < 0}\n\\]\n\nExample scenarios of using the function are:\n\\begin{itemize}\n  \\item vertical flipping of the image ($\\texttt{flipCode} = 0$) to switch between top-left and bottom-left image origin, which is a typical operation in video processing in Windows.\n  \\item horizontal flipping of the image with subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry ($\\texttt{flipCode} > 0$)\n  \\item simultaneous horizontal and vertical flipping of the image with subsequent shift and absolute difference calculation to check for a central symmetry ($\\texttt{flipCode} < 0$)\n  \\item reversing the order of 1D point arrays ($\\texttt{flipCode} > 0$ or $\\texttt{flipCode} = 0$)\n\\end{itemize}\n\n
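To make the conventions concrete, here is a small sketch (the matrix values are arbitrary):\n\n\\begin{lstlisting}\nMat m = (Mat_<uchar>(2,2) << 1, 2,\n                             3, 4);\nMat d;\nflip(m, d, 0);  // around the x-axis: [3, 4; 1, 2]\nflip(m, d, 1);  // around the y-axis: [2, 1; 4, 3]\nflip(m, d, -1); // around both axes:  [4, 3; 2, 1]\n\\end{lstlisting}\n\n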
See also: \\cvCppCross{transpose}, \\cvCppCross{repeat}, \\cvCppCross{completeSymm}\n\n\\cvCppFunc{gemm}\nPerforms generalized matrix multiplication.\n\n\\cvdefCpp{void gemm(const Mat\\& src1, const Mat\\& src2, double alpha,\\par\n          const Mat\\& src3, double beta, Mat\\& dst, int flags=0);}\n\\begin{description}\n\\cvarg{src1}{The first multiplied input matrix; should have \\texttt{CV\\_32FC1}, \\texttt{CV\\_64FC1}, \\texttt{CV\\_32FC2} or \\texttt{CV\\_64FC2} type}\n\\cvarg{src2}{The second multiplied input matrix; should have the same type as \\texttt{src1}}\n\\cvarg{alpha}{The weight of the matrix product}\n\\cvarg{src3}{The third optional delta matrix added to the matrix product; should have the same type as \\texttt{src1} and \\texttt{src2}}\n\\cvarg{beta}{The weight of \\texttt{src3}}\n\\cvarg{dst}{The destination matrix; it will have the proper size and the same type as the input matrices}\n\\cvarg{flags}{Operation flags:\n\\begin{description}\n    \\cvarg{GEMM\\_1\\_T}{transpose \\texttt{src1}}\n    \\cvarg{GEMM\\_2\\_T}{transpose \\texttt{src2}}\n    \\cvarg{GEMM\\_3\\_T}{transpose \\texttt{src3}}\n\\end{description}}\n\\end{description}\n\nThe function performs generalized matrix multiplication and is similar to the corresponding \\texttt{*gemm} functions in BLAS level 3.\nFor example, \\texttt{gemm(src1, src2, alpha, src3, beta, dst, GEMM\\_1\\_T + GEMM\\_3\\_T)} corresponds to\n\\[\n\\texttt{dst} = \\texttt{alpha} \\cdot \\texttt{src1} ^T \\cdot \\texttt{src2} + \\texttt{beta} \\cdot \\texttt{src3} ^T\n\\]\n\nThe function can be replaced with a matrix expression, e.g. 
the above call can be replaced with:\n\\begin{lstlisting}\ndst = alpha*src1.t()*src2 + beta*src3.t();\n\\end{lstlisting}\n\nSee also: \\cvCppCross{mulTransposed}, \\cvCppCross{transform}, \\cross{Matrix Expressions}\n\n\n\\cvCppFunc{getConvertElem}\nReturns a conversion function for a single pixel\n\n\\cvdefCpp{ConvertData getConvertElem(int fromType, int toType);\\newline\nConvertScaleData getConvertScaleElem(int fromType, int toType);\\newline\ntypedef void (*ConvertData)(const void* from, void* to, int cn);\\newline\ntypedef void (*ConvertScaleData)(const void* from, void* to,\\par\n                                 int cn, double alpha, double beta);}\n\\begin{description}\n\\cvarg{fromType}{The source pixel type}\n\\cvarg{toType}{The destination pixel type}\n\\cvarg{from}{Callback parameter: pointer to the input pixel}\n\\cvarg{to}{Callback parameter: pointer to the output pixel}\n\\cvarg{cn}{Callback parameter: the number of channels; can be arbitrary, 1, 100, 100000, ...}\n\\cvarg{alpha}{ConvertScaleData callback optional parameter: the scale factor}\n\\cvarg{beta}{ConvertScaleData callback optional parameter: the delta or offset}\n\\end{description}\n\nThe functions \\texttt{getConvertElem} and \\texttt{getConvertScaleElem} return pointers to the functions for converting individual pixels from one type to another. While the main purpose of the functions is to convert single pixels (actually, they are used for converting sparse matrices from one type to another), you can use them to convert a whole row of a dense matrix, or the whole matrix at once, by setting \\texttt{cn = matrix.cols*matrix.rows*matrix.channels()} if the matrix data is continuous.\n\nSee also: \\cvCppCross{Mat::convertTo}, \\cvCppCross{MatND::convertTo}, \\cvCppCross{SparseMat::convertTo}\n\n\n\\cvCppFunc{getOptimalDFTSize}\nReturns the optimal DFT size for a given vector size.\n\n\\cvdefCpp{int getOptimalDFTSize(int vecsize);}\n\\begin{description}\n\\cvarg{vecsize}{Vector size}\n\\end{description}\n\nDFT performance is not a monotonic function of the vector size; therefore, when you compute the convolution of two arrays or do a spectral analysis of an array, it usually makes sense to pad the input data with zeros to get a somewhat larger array that can be transformed much faster than the original one.\nArrays whose size is a power of two (2, 4, 8, 16, 32, ...) are the fastest to process; arrays whose size is a product of 2's, 3's and 5's (e.g. 300 = 5*5*3*2*2) are also processed quite efficiently.\n\nThe function \\texttt{getOptimalDFTSize} returns the minimum number \\texttt{N} that is greater than or equal to \\texttt{vecsize}, such that the DFT\nof a vector of size \\texttt{N} can be computed efficiently. In the current implementation $N=2^p \\times 3^q \\times 5^r$, for some $p$, $q$, $r$.\n\nThe function returns a negative number if \\texttt{vecsize} is too large (very close to \\texttt{INT\\_MAX}).\n\nWhile the function cannot be used directly to estimate the optimal vector size for the DCT transform (since the current DCT implementation supports only even-size vectors), such a size can be easily computed as \\texttt{getOptimalDFTSize((vecsize+1)/2)*2}.\n\nSee also: \\cvCppCross{dft}, \\cvCppCross{dct}, \\cvCppCross{idft}, \\cvCppCross{idct}, \\cvCppCross{mulSpectrums}\n\n\\cvCppFunc{idct}\nComputes the inverse Discrete Cosine Transform of a 1D or 2D array\n\n\\cvdefCpp{void idct(const Mat\\& src, Mat\\& dst, int flags=0);}\n\\begin{description}\n\\cvarg{src}{The source floating-point single-channel array}\n\\cvarg{dst}{The destination array. 
Will have the same size and same type as \\texttt{src}}\n\\cvarg{flags}{The operation flags.}\n\\end{description}\n\n\\texttt{idct(src, dst, flags)} is equivalent to \\texttt{dct(src, dst, flags | DCT\\_INVERSE)}.\nSee \\cvCppCross{dct} for details.\n\nSee also: \\cvCppCross{dct}, \\cvCppCross{dft}, \\cvCppCross{idft}, \\cvCppCross{getOptimalDFTSize}\n\n\n\\cvCppFunc{idft}\nComputes the inverse Discrete Fourier Transform of a 1D or 2D array\n\n\\cvdefCpp{void idft(const Mat\\& src, Mat\\& dst, int flags=0, int nonzeroRows=0);}\n\\begin{description}\n\\cvarg{src}{The source floating-point real or complex array}\n\\cvarg{dst}{The destination array, whose size and type depend on the \\texttt{flags}}\n\\cvarg{flags}{The operation flags. See \\cvCppCross{dft}}\n\\cvarg{nonzeroRows}{The number of \\texttt{dst} rows to compute.\nThe rest of the rows will have undefined content.\nSee the convolution sample in the \\cvCppCross{dft} description}\n\\end{description}\n\n\\texttt{idft(src, dst, flags)} is equivalent to \\texttt{dft(src, dst, flags | DFT\\_INVERSE)}.\nSee \\cvCppCross{dft} for details.\nNote that neither \\texttt{dft} nor \\texttt{idft} scales the result by default.\nThus, you should pass \\texttt{DFT\\_SCALE} to one of \\texttt{dft} or \\texttt{idft}\nexplicitly to make these transforms mutually inverse.\n\nSee also: \\cvCppCross{dft}, \\cvCppCross{dct}, \\cvCppCross{idct}, \\cvCppCross{mulSpectrums}, \\cvCppCross{getOptimalDFTSize}\n\n\n\\cvCppFunc{inRange}\nChecks if array elements lie between the elements of two other arrays.\n\n\\cvdefCpp{void inRange(const Mat\\& src, const Mat\\& lowerb,\\par\n             const Mat\\& upperb, Mat\\& dst);\\newline\nvoid inRange(const Mat\\& src, const Scalar\\& lowerb,\\par\n             const Scalar\\& upperb, Mat\\& dst);\\newline\nvoid inRange(const MatND\\& src, const MatND\\& lowerb,\\par\n             const MatND\\& upperb, MatND\\& dst);\\newline\nvoid inRange(const MatND\\& src, const Scalar\\& lowerb,\\par\n             const Scalar\\& upperb, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src}{The first source array}\n\\cvarg{lowerb}{The inclusive lower boundary array of the same size and type as \\texttt{src}}\n\\cvarg{upperb}{The exclusive upper boundary array of the same size and type as \\texttt{src}}\n\\cvarg{dst}{The destination array, will have the same size as \\texttt{src} and \\texttt{CV\\_8U} type}\n\\end{description}\n\nThe functions \\texttt{inRange} do the range check for every element of the input array:\n\n\\[\n\\texttt{dst}(I)=\\texttt{lowerb}(I)_0 \\leq \\texttt{src}(I)_0 < \\texttt{upperb}(I)_0\n\\]\n\nfor single-channel arrays,\n\n\\[\n\\texttt{dst}(I)=\n\\texttt{lowerb}(I)_0 \\leq \\texttt{src}(I)_0 < \\texttt{upperb}(I)_0 \\land\n\\texttt{lowerb}(I)_1 \\leq \\texttt{src}(I)_1 < \\texttt{upperb}(I)_1\n\\]\n\nfor two-channel arrays and so forth.\n\\texttt{dst}(I) is set to 255 (all \\texttt{1}-bits) if \\texttt{src}(I) is within the specified range and 0 otherwise.\n\n
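For instance, a quick sketch with a scalar range (the values are arbitrary):\n\n\\begin{lstlisting}\nMat src = (Mat_<uchar>(1,4) << 10, 50, 100, 200);\nMat mask;\n// the lower boundary is inclusive, the upper boundary is exclusive\ninRange(src, Scalar(40), Scalar(150), mask); // mask = [0, 255, 255, 0]\n\\end{lstlisting}\n\n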
\\cvCppFunc{invert}\nFinds the inverse or pseudo-inverse of a matrix\n\n\\cvdefCpp{double invert(const Mat\\& src, Mat\\& dst, int method=DECOMP\\_LU);}\n\\begin{description}\n\\cvarg{src}{The source floating-point $M \\times N$ matrix}\n\\cvarg{dst}{The destination matrix; will have $N \\times M$ size and the same type as \\texttt{src}}\n\\cvarg{method}{The inversion method:\n\\begin{description}\n \\cvarg{DECOMP\\_LU}{Gaussian elimination with the optimal pivot element chosen}\n \\cvarg{DECOMP\\_SVD}{Singular value decomposition (SVD) method}\n \\cvarg{DECOMP\\_CHOLESKY}{Cholesky decomposition; the matrix must be symmetric and positive definite}\n\\end{description}}\n\\end{description}\n\nThe function \\texttt{invert} inverts the matrix \\texttt{src} and stores the result in \\texttt{dst}.\nWhen the matrix \\texttt{src} is singular or non-square, the function computes the pseudo-inverse matrix, i.e. the matrix \\texttt{dst}, such that $\\|\\texttt{src} \\cdot \\texttt{dst} - I\\|$ is minimal.\n\nIn the case of the \\texttt{DECOMP\\_LU} method, the function returns the \\texttt{src} determinant (\\texttt{src} must be square). If it is 0, the matrix is not inverted and \\texttt{dst} is filled with zeros.\n\nIn the case of the \\texttt{DECOMP\\_SVD} method, the function returns the inverse condition number of \\texttt{src} (the ratio of the smallest singular value to the largest singular value), or 0 if \\texttt{src} is singular. The SVD method calculates a pseudo-inverse matrix if \\texttt{src} is singular.\n\nSimilarly to \\texttt{DECOMP\\_LU}, the method \\texttt{DECOMP\\_CHOLESKY} works only with non-singular square matrices. In this case the function stores the inverted matrix in \\texttt{dst} and returns non-zero; otherwise it returns 0.\n\nSee also: \\cvCppCross{solve}, \\cvCppCross{SVD}\n\n\n\\cvCppFunc{log}\nCalculates the natural logarithm of every array element.\n\n\\cvdefCpp{void log(const Mat\\& src, Mat\\& dst);\\newline\nvoid log(const MatND\\& src, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array; will have the same size and same type as \\texttt{src}}\n\\end{description}\n\nThe function \\texttt{log} calculates the natural logarithm of the absolute value of every element of the input array:\n\n\\[\n\\texttt{dst}(I) = \\fork\n{\\log |\\texttt{src}(I)|}{if $\\texttt{src}(I) \\ne 0$ }\n{\\texttt{C}}{otherwise}\n\\]\n\nwhere \\texttt{C} is a large negative number (about -700 in the current implementation).\nThe maximum relative error is about $7 \\times 10^{-6}$ for single-precision input and less than $10^{-10}$ for double-precision input. Special values (NaN, $\\pm \\infty$) are not handled.\n\nSee also: \\cvCppCross{exp}, \\cvCppCross{cartToPolar}, \\cvCppCross{polarToCart}, \\cvCppCross{phase}, \\cvCppCross{pow}, \\cvCppCross{sqrt}, \\cvCppCross{magnitude}\n\n\n\\cvCppFunc{LUT}\nPerforms a look-up table transform of an array.\n\n\\cvdefCpp{void LUT(const Mat\\& src, const Mat\\& lut, Mat\\& dst);}\n\\begin{description}\n\\cvarg{src}{Source array of 8-bit elements}\n\\cvarg{lut}{Look-up table of 256 elements. In the case of a multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as the source array}\n\\cvarg{dst}{Destination array; will have the same size and the same number of channels as \\texttt{src}, and the same depth as \\texttt{lut}}\n\\end{description}\n\nThe function \\texttt{LUT} fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of \\texttt{src} as follows:\n\n\\[\n\\texttt{dst}(I) \\leftarrow \\texttt{lut(src(I) + d)}\n\\]\n\nwhere\n\n\\[\nd = \\fork\n{0}{if \\texttt{src} has depth \\texttt{CV\\_8U}}\n{128}{if \\texttt{src} has depth \\texttt{CV\\_8S}}\n\\]\n\n
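For example, the following sketch inverts the intensities of an 8-bit single-channel image through a 256-entry table:\n\n\\begin{lstlisting}\n// build a table that maps i to 255-i\nMat lut(1, 256, CV_8U);\nfor( int i = 0; i < 256; i++ )\n    lut.at<uchar>(i) = saturate_cast<uchar>(255 - i);\n\nMat src(100, 100, CV_8U, Scalar(30)), dst;\nLUT(src, lut, dst); // every element becomes 225\n\\end{lstlisting}\n\n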
See also: \\cvCppCross{convertScaleAbs}, \\cvCppCross{Mat::convertTo}\n\n\\cvCppFunc{magnitude}\nCalculates the magnitude of 2D vectors.\n\n\\cvdefCpp{void magnitude(const Mat\\& x, const Mat\\& y, Mat\\& magnitude);}\n\\begin{description}\n\\cvarg{x}{The floating-point array of x-coordinates of the vectors}\n\\cvarg{y}{The floating-point array of y-coordinates of the vectors; must have the same size as \\texttt{x}}\n\\cvarg{magnitude}{The destination array; will have the same size and same type as \\texttt{x}}\n\\end{description}\n\nThe function \\texttt{magnitude} calculates the magnitude of 2D vectors formed from the corresponding elements of the \\texttt{x} and \\texttt{y} arrays:\n\n\\[\n\\texttt{magnitude}(I) = \\sqrt{\\texttt{x}(I)^2 + \\texttt{y}(I)^2}\n\\]\n\nSee also: \\cvCppCross{cartToPolar}, \\cvCppCross{polarToCart}, \\cvCppCross{phase}, \\cvCppCross{sqrt}\n\n\n\\cvCppFunc{Mahalanobis}\nCalculates the Mahalanobis distance between two vectors.\n\n\\cvdefCpp{double Mahalanobis(const Mat\\& vec1, const Mat\\& vec2, \\par const Mat\\& icovar);}\n\\begin{description}\n\\cvarg{vec1}{The first 1D source vector}\n\\cvarg{vec2}{The second 1D source vector}\n\\cvarg{icovar}{The inverse covariance matrix}\n\\end{description}\n\nThe function \\texttt{Mahalanobis} calculates and returns the weighted distance between two vectors:\n\n\\[\nd(\\texttt{vec1},\\texttt{vec2})=\\sqrt{\\sum_{i,j}{\\texttt{icovar(i,j)}\\cdot(\\texttt{vec1}(i)-\\texttt{vec2}(i))\\cdot(\\texttt{vec1}(j)-\\texttt{vec2}(j))}}\n\\]\n\nThe covariance matrix may be calculated using the \\cvCppCross{calcCovarMatrix} function and then inverted using the \\cvCppCross{invert} function (preferably using the \\texttt{DECOMP\\_SVD} method, as the most accurate one).\n\n\n\\cvCppFunc{max}\nCalculates the per-element maximum of two arrays, or of an array and a scalar\n\n\\cvdefCpp{Mat\\_Expr<...> max(const Mat\\& src1, const Mat\\& src2);\\newline\nMat\\_Expr<...> max(const Mat\\& src1, double value);\\newline\nMat\\_Expr<...> max(double value, const Mat\\& src1);\\newline\nvoid max(const Mat\\& src1, const Mat\\& src2, Mat\\& dst);\\newline\nvoid max(const Mat\\& src1, double value, Mat\\& dst);\\newline\nvoid max(const MatND\\& src1, const MatND\\& src2, MatND\\& dst);\\newline\nvoid max(const MatND\\& src1, double value, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array of the same size and type as \\texttt{src1}}\n\\cvarg{value}{The real scalar value}\n\\cvarg{dst}{The destination array; will have the same size and type as \\texttt{src1}}\n\\end{description}\n\nThe functions \\texttt{max} compute the per-element maximum of two arrays:\n\\[\\texttt{dst}(I)=\\max(\\texttt{src1}(I), \\texttt{src2}(I))\\]\nor of an array and a scalar:\n\\[\\texttt{dst}(I)=\\max(\\texttt{src1}(I), \\texttt{value})\\]\n\nIn the second variant, when the source array is multi-channel, each channel is compared with \\texttt{value} independently.\n\nThe first 3 variants of the function listed above are actually a part of \\cross{Matrix Expressions}; they return an expression object that can be further transformed, assigned to a matrix, passed to a function, etc.\n\nSee also: \\cvCppCross{min}, \\cvCppCross{compare}, \\cvCppCross{inRange}, \\cvCppCross{minMaxLoc}, 
\\cross{Matrix Expressions}\n\n\\cvCppFunc{mean}\nCalculates the average (mean) of array elements\n\n\\cvdefCpp{Scalar mean(const Mat\\& mtx);\\newline\nScalar mean(const Mat\\& mtx, const Mat\\& mask);\\newline\nScalar mean(const MatND\\& mtx);\\newline\nScalar mean(const MatND\\& mtx, const MatND\\& mask);}\n\\begin{description}\n\\cvarg{mtx}{The source array; it should have 1 to 4 channels (so that the result can be stored in \\cvCppCross{Scalar})}\n\\cvarg{mask}{The optional operation mask}\n\\end{description}\n\nThe functions \\texttt{mean} compute the mean value \\texttt{M} of array elements, independently for each channel, and return it:\n\n\\[\n\\begin{array}{l}\nN = \\sum_{I:\\;\\texttt{mask}(I)\\ne 0} 1\\\\\nM_c = \\left(\\sum_{I:\\;\\texttt{mask}(I)\\ne 0}{\\texttt{mtx}(I)_c}\\right)/N\n\\end{array}\n\\]\n\nWhen all the mask elements are 0's, the functions return \\texttt{Scalar::all(0)}.\n\nSee also: \\cvCppCross{countNonZero}, \\cvCppCross{meanStdDev}, \\cvCppCross{norm}, \\cvCppCross{minMaxLoc}\n\n\\cvCppFunc{meanStdDev}\nCalculates the mean and standard deviation of array elements\n\n\\cvdefCpp{void meanStdDev(const Mat\\& mtx, Scalar\\& mean, \\par Scalar\\& stddev, const Mat\\& mask=Mat());\\newline\nvoid meanStdDev(const MatND\\& mtx, Scalar\\& mean, \\par Scalar\\& stddev, const MatND\\& mask=MatND());}\n\\begin{description}\n\\cvarg{mtx}{The source array; it should have 1 to 4 channels (so that the results can be stored in \\cvCppCross{Scalar}'s)}\n\\cvarg{mean}{The output parameter: computed mean value}\n\\cvarg{stddev}{The output parameter: computed standard deviation}\n\\cvarg{mask}{The optional operation mask}\n\\end{description}\n\nThe functions \\texttt{meanStdDev} compute the mean value and the standard deviation of array elements, independently for each channel, and return them via the output parameters:\n\n\\[\n\\begin{array}{l}\nN = \\sum_{I, \\texttt{mask}(I) \\ne 0} 1\\\\\n\\texttt{mean}_c = \\frac{\\sum_{ I: \\; \\texttt{mask}(I) \\ne 0} \\texttt{src}(I)_c}{N}\\\\\n\\texttt{stddev}_c = \\sqrt{\\frac{\\sum_{ I: \\; \\texttt{mask}(I) \\ne 0} \\left(\\texttt{src}(I)_c - \\texttt{mean}_c\\right)^2}{N}}\n\\end{array}\n\\]\n\nWhen all the mask elements are 0's, the functions return \\texttt{mean=stddev=Scalar::all(0)}.\nNote that the computed standard deviation is only the diagonal of the complete normalized covariance matrix. If the full matrix is needed, you can reshape the multi-channel array $M \\times N$ to the single-channel array $M*N \\times \\texttt{mtx.channels}()$ (only possible when the matrix is continuous) and then pass the matrix to \\cvCppCross{calcCovarMatrix}.\n\nSee also: \\cvCppCross{countNonZero}, \\cvCppCross{mean}, \\cvCppCross{norm}, \\cvCppCross{minMaxLoc}, \\cvCppCross{calcCovarMatrix}\n\n
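For instance, a small sketch (the image is filled with a constant, so the per-channel deviations are zero):\n\n\\begin{lstlisting}\nMat img(100, 100, CV_8UC3, Scalar(10, 20, 30));\nScalar m, s;\nmeanStdDev(img, m, s); // m = (10, 20, 30, 0), s = (0, 0, 0, 0)\n\\end{lstlisting}\n\n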
\\cvCppFunc{merge}\nComposes a multi-channel array from several single-channel arrays.\n\n\\cvdefCpp{void merge(const Mat* mv, size\\_t count, Mat\\& dst);\\newline\nvoid merge(const vector<Mat>\\& mv, Mat\\& dst);\\newline\nvoid merge(const MatND* mv, size\\_t count, MatND\\& dst);\\newline\nvoid merge(const vector<MatND>\\& mv, MatND\\& dst);}\n\\begin{description}\n\\cvarg{mv}{The source array or vector of the single-channel matrices to be merged. All the matrices in \\texttt{mv} must have the same size and the same type}\n\\cvarg{count}{The number of source matrices when \\texttt{mv} is a plain C array; must be greater than zero}\n\\cvarg{dst}{The destination array; will have the same size and the same depth as \\texttt{mv[0]}, and the number of channels will match the number of source matrices}\n\\end{description}\n    \nThe functions \\texttt{merge} merge several single-channel arrays (or rather interleave their elements) to make a single multi-channel array:\n\n\\[\\texttt{dst}(I)_c = \\texttt{mv}[c](I)\\]\n\nThe function \\cvCppCross{split} does the reverse operation, and if you need to merge several multi-channel images or shuffle channels in some other advanced way, use \\cvCppCross{mixChannels}.\n\nSee also: \\cvCppCross{mixChannels}, \\cvCppCross{split}, \\cvCppCross{reshape}\n\n\\cvCppFunc{min}\nCalculates the per-element minimum of two arrays, or of an array and a scalar\n\n\\cvdefCpp{Mat\\_Expr<...> min(const Mat\\& src1, const Mat\\& src2);\\newline\nMat\\_Expr<...> min(const Mat\\& src1, double value);\\newline\nMat\\_Expr<...> min(double value, const Mat\\& src1);\\newline\nvoid min(const Mat\\& src1, const Mat\\& src2, Mat\\& dst);\\newline\nvoid min(const Mat\\& src1, double value, Mat\\& dst);\\newline\nvoid min(const MatND\\& src1, const MatND\\& src2, MatND\\& dst);\\newline\nvoid min(const MatND\\& src1, double value, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array of the same size and type as \\texttt{src1}}\n\\cvarg{value}{The real scalar value}\n\\cvarg{dst}{The destination array; will have the same size and type as \\texttt{src1}}\n\\end{description}\n\nThe functions \\texttt{min} compute the per-element minimum of two arrays:\n\\[\\texttt{dst}(I)=\\min(\\texttt{src1}(I), \\texttt{src2}(I))\\]\nor of an array and a scalar:\n\\[\\texttt{dst}(I)=\\min(\\texttt{src1}(I), \\texttt{value})\\]\n\nIn the second variant, when the source array is multi-channel, each channel is compared with \\texttt{value} independently.\n\nThe first 3 variants of the function listed above are actually a part of \\cross{Matrix Expressions}; they return an expression object that can be further transformed, assigned to a matrix, passed to a function, etc.\n\nSee also: \\cvCppCross{max}, \\cvCppCross{compare}, \\cvCppCross{inRange}, \\cvCppCross{minMaxLoc}, \\cross{Matrix Expressions}\n\n\\cvCppFunc{minMaxLoc}\nFinds the global minimum and maximum in a whole array or sub-array\n\n\\cvdefCpp{void minMaxLoc(const Mat\\& src, double* minVal,\\par\n               double* maxVal=0, Point* minLoc=0,\\par\n               Point* maxLoc=0, const Mat\\& mask=Mat());\\newline\nvoid minMaxLoc(const MatND\\& src, double* minVal,\\par\n               double* maxVal, int* minIdx=0, int* maxIdx=0,\\par\n               const MatND\\& mask=MatND());\\newline\nvoid minMaxLoc(const SparseMat\\& src, double* minVal,\\par\n               double* maxVal, int* minIdx=0, int* maxIdx=0);}\n\\begin{description}\n\\cvarg{src}{The source single-channel array}\n\\cvarg{minVal}{Pointer to the returned minimum value; \\texttt{NULL} if not required}\n\\cvarg{maxVal}{Pointer to the returned maximum value; \\texttt{NULL} if not required}\n\\cvarg{minLoc}{Pointer to the returned minimum location (in the 2D case); \\texttt{NULL} if not required}\n\\cvarg{maxLoc}{Pointer to the returned maximum location (in the 2D case); \\texttt{NULL} if not required}\n\\cvarg{minIdx}{Pointer to the returned minimum location (in the nD case);\n \\texttt{NULL} if not required, 
otherwise it must point to an array of \\texttt{src.dims} elements, and the coordinates of the minimum element in each dimension will be stored there sequentially.}\n\\cvarg{maxIdx}{Pointer to the returned maximum location (in the nD case); \\texttt{NULL} if not required}\n\\cvarg{mask}{The optional mask used to select a sub-array}\n\\end{description}\n\nThe functions \\texttt{minMaxLoc} find the minimum and maximum element values\nand their positions. The extrema are searched for across the whole array, or,\nif \\texttt{mask} is not an empty array, in the specified array region.\n\nThe functions do not work with multi-channel arrays. If you need to find the minimum or maximum elements across all the channels, use \\cvCppCross{reshape} first to reinterpret the array as single-channel. Or you may extract the particular channel using \\cvCppCross{extractImageCOI}, \\cvCppCross{mixChannels} or \\cvCppCross{split}.\n\nIn the case of a sparse matrix the minimum is found among non-zero elements only.\n\nSee also: \\cvCppCross{max}, \\cvCppCross{min}, \\cvCppCross{compare}, \\cvCppCross{inRange}, \\cvCppCross{extractImageCOI}, \\cvCppCross{mixChannels}, \\cvCppCross{split}, \\cvCppCross{reshape}.\n\n
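For instance, a short sketch (the values are arbitrary; note that \\texttt{Point} stores (x, y), i.e. (column, row)):\n\n\\begin{lstlisting}\nMat a = (Mat_<float>(2,3) << 3, 1, 4,\n                             1, 5, 9);\ndouble minVal, maxVal;\nPoint minLoc, maxLoc;\nminMaxLoc(a, &minVal, &maxVal, &minLoc, &maxLoc);\n// minVal=1, minLoc=(1,0) (the first of the two minima),\n// maxVal=9, maxLoc=(2,1)\n\\end{lstlisting}\n\n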
\\cvCppFunc{mixChannels}\nCopies specified channels from input arrays to the specified channels of output arrays\n\n\\cvdefCpp{void mixChannels(const Mat* srcv, int nsrc, Mat* dstv, int ndst,\\par\n                 const int* fromTo, size\\_t npairs);\\newline\nvoid mixChannels(const MatND* srcv, int nsrc, MatND* dstv, int ndst,\\par\n                 const int* fromTo, size\\_t npairs);\\newline\nvoid mixChannels(const vector<Mat>\\& srcv, vector<Mat>\\& dstv,\\par\n                 const int* fromTo, int npairs);\\newline\nvoid mixChannels(const vector<MatND>\\& srcv, vector<MatND>\\& dstv,\\par\n                 const int* fromTo, int npairs);}\n\\begin{description}\n\\cvarg{srcv}{The input array or vector of matrices.\nAll the matrices must have the same size and the same depth}\n\\cvarg{nsrc}{The number of elements in \\texttt{srcv}}\n\\cvarg{dstv}{The output array or vector of matrices.\nAll the matrices \\emph{must be allocated}; their size and depth must be the same as in \\texttt{srcv[0]}}\n\\cvarg{ndst}{The number of elements in \\texttt{dstv}}\n\\cvarg{fromTo}{The array of index pairs, specifying which channels are copied and where.\n\\texttt{fromTo[k*2]} is the 0-based index of the input channel in \\texttt{srcv} and\n\\texttt{fromTo[k*2+1]} is the index of the output channel in \\texttt{dstv}. Here continuous channel numbering is used, that is,\nthe first input image channels are indexed from \\texttt{0} to \\texttt{srcv[0].channels()-1},\nthe second input image channels are indexed from \\texttt{srcv[0].channels()} to\n\\texttt{srcv[0].channels() + srcv[1].channels()-1} etc., and the same scheme is used for the output image channels.\nAs a special case, when \\texttt{fromTo[k*2]} is negative, the corresponding output channel is filled with zero.\n}\n\\cvarg{npairs}{The number of pairs. In the vector variants the parameter is not passed explicitly, but computed as \\texttt{srcv.size()} (=\\texttt{dstv.size()})}\n\\end{description}\n\nThe functions \\texttt{mixChannels} provide an advanced mechanism for shuffling image channels. \\cvCppCross{split}, \\cvCppCross{merge} and some forms of \\cvCppCross{cvtColor} are special cases of \\texttt{mixChannels}.\n\nAs an example, this code splits a 4-channel RGBA image into a 3-channel\nBGR (i.e. with R and B channels swapped) image and a separate alpha channel image:\n\n\\begin{lstlisting}\nMat rgba( 100, 100, CV_8UC4, Scalar(1,2,3,4) );\nMat bgr( rgba.rows, rgba.cols, CV_8UC3 );\nMat alpha( rgba.rows, rgba.cols, CV_8UC1 );\n\n// forming an array of matrices is a quite efficient operation,\n// because the matrix data is not copied, only the headers\nMat out[] = { bgr, alpha };\n// rgba[0] -> bgr[2], rgba[1] -> bgr[1],\n// rgba[2] -> bgr[0], rgba[3] -> alpha[0]\nint from_to[] = { 0,2,  1,1,  2,0,  3,3 };\nmixChannels( &rgba, 1, out, 2, from_to, 4 );\n\\end{lstlisting}\n\nNote that, unlike many other new-style C++ functions in OpenCV (see the introduction section and \\cvCppCross{Mat::create}),\n\\texttt{mixChannels} requires the destination arrays to be pre-allocated before calling the function.\n\nSee also: \\cvCppCross{split}, \\cvCppCross{merge}, \\cvCppCross{cvtColor} \n\n\n\\cvCppFunc{mulSpectrums}\nPerforms per-element multiplication of two Fourier spectrums.\n\n\\cvdefCpp{void mulSpectrums(const Mat\\& src1, const Mat\\& src2, Mat\\& dst,\\par\n                  int flags, bool conj=false);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array; must have the same size and the same type as \\texttt{src1}}\n\\cvarg{dst}{The destination array; will have the same size and the same type as \\texttt{src1}}\n\\cvarg{flags}{The same flags as passed to \\cvCppCross{dft}; only the flag \\texttt{DFT\\_ROWS} is checked for}\n\\cvarg{conj}{The optional flag that conjugates the second source array before the multiplication (true) or not (false)}\n\\end{description}\n\nThe function \\texttt{mulSpectrums} performs per-element multiplication of the two CCS-packed or complex matrices that are the results of a real or complex Fourier transform.\n\nThe function, together with \\cvCppCross{dft} and \\cvCppCross{idft}, may be used to calculate convolution (pass \\texttt{conj=false}) or correlation (pass \\texttt{conj=true}) of two arrays rapidly. When the arrays are complex, they are simply multiplied (per-element) with optional conjugation of the second array elements. When the arrays are real, they are assumed to be CCS-packed (see \\cvCppCross{dft} for details).\n\n
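A typical usage pattern, sketched below, is fast circular convolution of two real arrays of the same size: transform both, multiply the spectra, and transform back (for linear convolution the arrays would additionally be zero-padded; the array contents here are arbitrary):\n\n\\begin{lstlisting}\nMat a(128, 128, CV_32F), b(128, 128, CV_32F);\nrandu(a, 0, 1); randu(b, 0, 1);\n\nMat A, B, C, c;\ndft(a, A);\ndft(b, B);\nmulSpectrums(A, B, C, 0);  // per-element product of the spectra\nidft(C, c, DFT_SCALE);     // c is the circular convolution of a and b\n\\end{lstlisting}\n\n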
\\cvCppFunc{multiply}\nCalculates the per-element scaled product of two arrays\n\n\\cvdefCpp{void multiply(const Mat\\& src1, const Mat\\& src2, \\par Mat\\& dst, double scale=1);\\newline\nvoid multiply(const MatND\\& src1, const MatND\\& src2, \\par MatND\\& dst, double scale=1);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array of the same size and the same type as \\texttt{src1}}\n\\cvarg{dst}{The destination array; will have the same size and the same type as \\texttt{src1}}\n\\cvarg{scale}{The optional scale factor}\n\\end{description}\n\nThe function \\texttt{multiply} calculates the per-element product of two arrays:\n\n\\[\n\\texttt{dst}(I)=\\texttt{saturate}(\\texttt{scale} \\cdot \\texttt{src1}(I) \\cdot \\texttt{src2}(I))\n\\]\n\nThere is also a \\cross{Matrix Expressions}-friendly variant of the first function; see \\cvCppCross{Mat::mul}.\n\nIf you are looking for a matrix product, not a per-element product, see \\cvCppCross{gemm}.\n\nSee also: \\cvCppCross{add}, \\cvCppCross{subtract}, \\cvCppCross{divide}, \\cross{Matrix Expressions}, \\cvCppCross{scaleAdd}, \\cvCppCross{addWeighted}, \\cvCppCross{accumulate}, \\cvCppCross{accumulateProduct}, \\cvCppCross{accumulateSquare}, \\cvCppCross{Mat::convertTo}\n\n\\cvCppFunc{mulTransposed}\nCalculates the product of a matrix and its transposition.\n\n\\cvdefCpp{void mulTransposed( const Mat\\& src, Mat\\& dst, bool aTa,\\par\n                    const Mat\\& delta=Mat(),\\par\n                    double scale=1, int rtype=-1 );}\n\\begin{description}\n\\cvarg{src}{The source matrix}\n\\cvarg{dst}{The destination square matrix}\n\\cvarg{aTa}{Specifies the multiplication ordering; see the description below}\n\\cvarg{delta}{The optional delta matrix, subtracted from \\texttt{src} before the multiplication. When the matrix is empty (\\texttt{delta=Mat()}), it's assumed to be zero, i.e. nothing is subtracted; otherwise, if it has the same size as \\texttt{src}, it's simply subtracted; otherwise it is \"repeated\" (see \\cvCppCross{repeat}) to cover the full \\texttt{src} and then subtracted. The type of the delta matrix, when it's not empty, must be the same as the type of the created destination matrix; see the \\texttt{rtype} description}\n\\cvarg{scale}{The optional scale factor for the matrix product}\n\\cvarg{rtype}{When it's negative, the destination matrix will have the same type as \\texttt{src}. Otherwise, it will have \\texttt{type=CV\\_MAT\\_DEPTH(rtype)}, which should be either \\texttt{CV\\_32F} or \\texttt{CV\\_64F}}\n\\end{description}\n\nThe function \\texttt{mulTransposed} calculates the product of \\texttt{src} and its transposition:\n\\[\n\\texttt{dst}=\\texttt{scale} (\\texttt{src}-\\texttt{delta})^T (\\texttt{src}-\\texttt{delta})\n\\]\nif \\texttt{aTa=true}, and\n\n\\[\n\\texttt{dst}=\\texttt{scale} (\\texttt{src}-\\texttt{delta}) (\\texttt{src}-\\texttt{delta})^T\n\\]\n\notherwise. The function is used to compute the covariance matrix, and with zero delta it can be used as a faster substitute for the general matrix product $A*B$ when $B=A^T$.\n\n
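For instance, a quick sketch computing $A^T A$ (the matrix contents are arbitrary):\n\n\\begin{lstlisting}\nMat A(10, 3, CV_32F);\nrandu(A, 0, 1);\n\nMat AtA;\nmulTransposed(A, AtA, true); // 3x3 result, equal to A.t()*A\n\\end{lstlisting}\n\n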
See also: \\cvCppCross{calcCovarMatrix}, \\cvCppCross{gemm}, \\cvCppCross{repeat}, \\cvCppCross{reduce}\n\n\n\\cvCppFunc{norm}\nCalculates the absolute array norm, an absolute difference norm, or a relative difference norm.\n\n\\cvdefCpp{double norm(const Mat\\& src1, int normType=NORM\\_L2);\\newline\ndouble norm(const Mat\\& src1, const Mat\\& src2, int normType=NORM\\_L2);\\newline\ndouble norm(const Mat\\& src1, int normType, const Mat\\& mask);\\newline\ndouble norm(const Mat\\& src1, const Mat\\& src2, \\par int normType, const Mat\\& mask);\\newline\ndouble norm(const MatND\\& src1, int normType=NORM\\_L2, \\par const MatND\\& mask=MatND());\\newline\ndouble norm(const MatND\\& src1, const MatND\\& src2,\\par\n            int normType=NORM\\_L2, const MatND\\& mask=MatND());\\newline\ndouble norm( const SparseMat\\& src, int normType );}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array of the same size and the same type as \\texttt{src1}}\n\\cvarg{normType}{Type of the norm; see the discussion below}\n\\cvarg{mask}{The optional operation mask}\n\\end{description}\n\nThe functions \\texttt{norm} calculate the absolute norm of \\texttt{src1} (when there is no \\texttt{src2}):\n\\[\nnorm = \\forkthree\n{\\|\\texttt{src1}\\|_{L_{\\infty}}    = \\max_I |\\texttt{src1}(I)|}{if $\\texttt{normType} = \\texttt{NORM\\_INF}$}\n{\\|\\texttt{src1}\\|_{L_1} = \\sum_I |\\texttt{src1}(I)|}{if $\\texttt{normType} = \\texttt{NORM\\_L1}$}\n{\\|\\texttt{src1}\\|_{L_2} = \\sqrt{\\sum_I \\texttt{src1}(I)^2}}{if $\\texttt{normType} = \\texttt{NORM\\_L2}$}\n\\]\n\nor an absolute or relative difference norm if \\texttt{src2} is specified:\n\\[\nnorm = \\forkthree\n{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_{\\infty}}    = \\max_I |\\texttt{src1}(I) - \\texttt{src2}(I)|}{if $\\texttt{normType} = \\texttt{NORM\\_INF}$}\n{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_1} = \\sum_I |\\texttt{src1}(I) - \\texttt{src2}(I)|}{if $\\texttt{normType} = \\texttt{NORM\\_L1}$}\n{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_2} = \\sqrt{\\sum_I (\\texttt{src1}(I) - \\texttt{src2}(I))^2}}{if $\\texttt{normType} = \\texttt{NORM\\_L2}$}\n\\]\n\nor\n\n\\[\nnorm = \\forkthree\n{\\frac{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_{\\infty}}    }{\\|\\texttt{src2}\\|_{L_{\\infty}}   }}{if $\\texttt{normType} = \\texttt{NORM\\_RELATIVE\\_INF}$}\n{\\frac{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_1} }{\\|\\texttt{src2}\\|_{L_1}}}{if $\\texttt{normType} = \\texttt{NORM\\_RELATIVE\\_L1}$}\n{\\frac{\\|\\texttt{src1}-\\texttt{src2}\\|_{L_2} }{\\|\\texttt{src2}\\|_{L_2}}}{if $\\texttt{normType} = \\texttt{NORM\\_RELATIVE\\_L2}$}\n\\]\n\nThe functions \\texttt{norm} return the calculated norm.\n\nWhen there is a \\texttt{mask} parameter and it is not empty (it should then have type \\texttt{CV\\_8U} and the same size as \\texttt{src1}), the norm is computed only over the region specified by the mask.\n\nMulti-channel source arrays are treated as single-channel ones, that is, the results for all channels are combined.\n\n\n\\cvCppFunc{normalize}\nNormalizes the array's norm or value range\n\n\\cvdefCpp{void normalize( const Mat\\& src, Mat\\& dst, \\par double alpha=1, double beta=0,\\par\n                int normType=NORM\\_L2, int rtype=-1, \\par const Mat\\& mask=Mat());\\newline\nvoid normalize( const MatND\\& src, MatND\\& dst, \\par double alpha=1, double beta=0,\\par\n     
           int normType=NORM\\_L2, int rtype=-1, \\par const MatND\\& mask=MatND());\\newline\nvoid normalize( const SparseMat\\& src, SparseMat\\& dst, \\par double alpha, int normType );}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array; will have the same size as \\texttt{src}}\n\\cvarg{alpha}{The norm value to normalize to, or the lower range boundary in the case of range normalization}\n\\cvarg{beta}{The upper range boundary in the case of range normalization; not used for norm normalization}\n\\cvarg{normType}{The normalization type; see the discussion below}\n\\cvarg{rtype}{When the parameter is negative, the destination array will have the same type as \\texttt{src}; otherwise it will have the same number of channels as \\texttt{src} and the depth \\texttt{CV\\_MAT\\_DEPTH(rtype)}}\n\\cvarg{mask}{The optional operation mask}\n\\end{description}\n\nThe functions \\texttt{normalize} scale and shift the source array elements, so that\n\\[\\|\\texttt{dst}\\|_{L_p}=\\texttt{alpha}\\]\n(where $p=\\infty$, 1 or 2) when \\texttt{normType=NORM\\_INF}, \\texttt{NORM\\_L1} or \\texttt{NORM\\_L2},\nor so that\n\\[\\min_I \\texttt{dst}(I)=\\texttt{alpha},\\,\\,\\max_I \\texttt{dst}(I)=\\texttt{beta}\\]\nwhen \\texttt{normType=NORM\\_MINMAX} (for dense arrays only).\n\nThe optional mask specifies the sub-array to be normalized; that is, the norm or the minimum and maximum are computed over the sub-array, and then this sub-array is modified to be normalized. If you want to use the mask only to compute the norm or min-max but to modify the whole array, you can use \\cvCppCross{norm} and \\cvCppCross{Mat::convertScale}/\\cvCppCross{MatND::convertScale}/\\cvCppCross{SparseMat::convertScale} separately.\n\nIn the case of sparse matrices, only the non-zero values are analyzed and transformed. Because of this, the range transformation for sparse matrices is not allowed, since it can shift the zero level. \n\nSee also: \\cvCppCross{norm}, \\cvCppCross{Mat::convertScale}, \\cvCppCross{MatND::convertScale}, \\cvCppCross{SparseMat::convertScale}\n\n\n\\cvclass{PCA}\nClass for Principal Component Analysis\n\n\\begin{lstlisting}\nclass PCA\n{\npublic:\n    // default constructor\n    PCA();\n    // computes PCA for a set of vectors stored as data rows or columns.\n    PCA(const Mat& data, const Mat& mean, int flags, int maxComponents=0);\n    // computes PCA for a set of vectors stored as data rows or columns\n    PCA& operator()(const Mat& data, const Mat& mean, int flags, int maxComponents=0);\n    // projects a vector into the principal components space\n    Mat project(const Mat& vec) const;\n    void project(const Mat& vec, Mat& result) const;\n    // reconstructs the vector from its PC projection\n    Mat backProject(const Mat& vec) const;\n    void backProject(const Mat& vec, Mat& result) const;\n\n    // eigenvectors of the PC space, stored as the matrix rows\n    Mat eigenvectors;\n    // the corresponding eigenvalues; not used for PCA compression/decompression\n    Mat eigenvalues;\n    // mean vector, subtracted from the projected vector\n    // or added to the reconstructed vector\n    Mat mean;\n};\n\\end{lstlisting}\n\nThe class \\texttt{PCA} is used to compute a special basis for a set of vectors. The basis consists of eigenvectors of the covariance matrix computed from the input set of vectors. The class \\texttt{PCA} can also transform vectors to/from the new coordinate space defined by the basis. 
Usually, in this new coordinate system, each vector from the original set (and any linear combination of such vectors) can be quite accurately approximated by taking just its first few components, corresponding to the eigenvectors of the largest eigenvalues of the covariance matrix. Geometrically it means that we compute a projection of the vector onto a subspace formed by a few eigenvectors corresponding to the dominant eigenvalues of the covariance matrix. And usually such a projection is very close to the original vector. That is, we can represent the original vector from a high-dimensional space with a much shorter vector consisting of the projected vector's coordinates in the subspace. Such a transformation is also known as the Karhunen-Loeve Transform, or KLT. See \\url{http://en.wikipedia.org/wiki/Principal\\_component\\_analysis}\n\nThe following sample is a function that takes two matrices. The first one stores a set of vectors (a row per vector) that is used to compute PCA; the second one stores another \"test\" set of vectors (a row per vector). The test vectors are first compressed with PCA, then reconstructed back, and then the reconstruction error norm is computed and printed for each vector.\n\\begin{lstlisting}\nPCA compressPCA(const Mat& pcaset, int maxComponents,\n                const Mat& testset, Mat& compressed)\n{\n    PCA pca(pcaset, // pass the data\n            Mat(), // we do not have a pre-computed mean vector,\n                   // so let the PCA engine compute it\n            CV_PCA_DATA_AS_ROW, // indicate that the vectors\n                                // are stored as matrix rows\n                                // (use CV_PCA_DATA_AS_COL if the vectors are\n                                // the matrix columns)\n            maxComponents // specify how many principal components to retain\n            );\n    // if there is no test data, just return the computed basis, ready-to-use\n    if( !testset.data )\n        return pca;\n    CV_Assert( testset.cols == pcaset.cols );\n\n    compressed.create(testset.rows, maxComponents, testset.type());\n\n    Mat reconstructed;\n    for( int i = 0; i < testset.rows; i++ )\n    {\n        Mat vec = testset.row(i), coeffs = compressed.row(i);\n        // compress the vector, the result will be stored\n        // in the i-th row of the output matrix\n        pca.project(vec, coeffs);\n        // and then reconstruct it\n        pca.backProject(coeffs, reconstructed);\n        // and measure the error\n        printf(\"%d. diff = %g\\n\", i, norm(vec, reconstructed, NORM_L2));\n    }\n    return pca;\n}\n\\end{lstlisting}\n\nSee also: \\cvCppCross{calcCovarMatrix}, \\cvCppCross{mulTransposed}, \\cvCppCross{SVD}, \\cvCppCross{dft}, \\cvCppCross{dct}\n\n\\cvCppFunc{PCA::PCA}\nPCA constructors\n\n\\cvdefCpp{\nPCA::PCA();\\newline\nPCA::PCA(const Mat\\& data, const Mat\\& mean, int flags, int maxComponents=0);\n}\n\\begin{description}\n\\cvarg{data}{the input samples, stored as the matrix rows or as the matrix columns}\n\\cvarg{mean}{the optional mean value. If the matrix is empty (\\texttt{Mat()}), the mean is computed from the data.}\n\\cvarg{flags}{operation flags. 
Currently the parameter is only used to specify the data layout.}\n\\begin{description}\n    \\cvarg{CV\\_PCA\\_DATA\\_AS\\_ROW}{Indicates that the input samples are stored as matrix rows.}\n    \\cvarg{CV\\_PCA\\_DATA\\_AS\\_COL}{Indicates that the input samples are stored as matrix columns.}\n\\end{description}\n\\cvarg{maxComponents}{The maximum number of components that PCA should retain. By default, all the components are retained.}\n\\end{description}\n\nThe default constructor initializes an empty PCA structure. The second constructor initializes the structure and calls \\cvCppCross{PCA::operator ()}.\n\n\\cvCppFunc{PCA::operator ()}\nPerforms Principal Component Analysis of the supplied dataset. \n\n\\cvdefCpp{\nPCA\\& PCA::operator()(const Mat\\& data, const Mat\\& mean, int flags, int maxComponents=0);\n}\n\\begin{description}\n\\cvarg{data}{the input samples, stored as the matrix rows or as the matrix columns}\n\\cvarg{mean}{the optional mean value. If the matrix is empty (\\texttt{Mat()}), the mean is computed from the data.}\n\\cvarg{flags}{operation flags. Currently the parameter is only used to specify the data layout.}\n\\begin{description}\n    \\cvarg{CV\\_PCA\\_DATA\\_AS\\_ROW}{Indicates that the input samples are stored as matrix rows.}\n    \\cvarg{CV\\_PCA\\_DATA\\_AS\\_COL}{Indicates that the input samples are stored as matrix columns.}\n\\end{description}\n\\cvarg{maxComponents}{The maximum number of components that PCA should retain. By default, all the components are retained.}\n\\end{description}\n\nThe operator performs PCA of the supplied dataset. It is safe to reuse the same PCA structure for multiple datasets. That is, if the structure has been previously used with another dataset, the existing internal data is reclaimed and the new \\texttt{eigenvalues}, \\texttt{eigenvectors} and \\texttt{mean} are allocated and computed.\n\nThe computed eigenvalues are sorted from the largest to the smallest and the corresponding eigenvectors are stored as \\texttt{PCA::eigenvectors} rows. \n\n\\cvCppFunc{PCA::project}\nProjects vector(s) onto the principal component subspace\n\n\\cvdefCpp{\nMat PCA::project(const Mat\\& vec) const;\\newline\nvoid PCA::project(const Mat\\& vec, Mat\\& result) const;\n}\n\\begin{description}\n\\cvarg{vec}{the input vector(s). They must have the same dimensionality and the same layout as the input data used at the PCA phase. That is, if \\texttt{CV\\_PCA\\_DATA\\_AS\\_ROW} was specified, then \\texttt{vec.cols==data.cols} (that is, the vectors' dimensionality) and \\texttt{vec.rows} is the number of vectors to project; and similarly for the \\texttt{CV\\_PCA\\_DATA\\_AS\\_COL} case.}\n\\cvarg{result}{the output vectors. Let's now consider the \\texttt{CV\\_PCA\\_DATA\\_AS\\_COL} case. In this case the output matrix will have as many columns as the number of input vectors, i.e. \\texttt{result.cols==vec.cols}, and the number of rows will match the number of principal components (i.e. the \\texttt{maxComponents} parameter passed to the constructor).}\n\\end{description}\n\nThe methods project one or more vectors onto the principal component subspace, where each vector projection is represented by coefficients in the principal component basis. The first form of the method returns the matrix that the second form writes to \\texttt{result}. So the first form can be used as a part of an expression, while the second form can be more efficient in a processing loop. 
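\n\nFor instance, a small sketch (here \\texttt{data} is a hypothetical \\texttt{CV\\_32F} matrix of training samples stored as rows):\n\n\\begin{lstlisting}\nPCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, 3); // keep 3 components\nMat vec = data.row(0);\nMat coeffs;\npca.project(vec, coeffs);       // coeffs is a 1x3 row of PC coefficients\nMat coeffs2 = pca.project(vec); // the same result, usable in expressions\n\\end{lstlisting}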
\n\n\\cvCppFunc{PCA::backProject}\nReconstructs vectors from their PC projections.\n\n\\cvdefCpp{\nMat PCA::backProject(const Mat\\& vec) const;\\newline\nvoid PCA::backProject(const Mat\\& vec, Mat\\& result) const;\n}\n\\begin{description}\n\\cvarg{vec}{Coordinates of the vectors in the principal component subspace. The layout and size are the same as of the \\texttt{PCA::project} output vectors.}\n\\cvarg{result}{The reconstructed vectors. The layout and size are the same as of the \\texttt{PCA::project} input vectors.}\n\\end{description}\n\nThe methods are the inverse operations to \\cvCppCross{PCA::project}. They take PC coordinates of projected vectors and reconstruct the original vectors. Of course, unless all the principal components have been retained, the reconstructed vectors will be different from the originals; but typically the difference will be small if the number of components is large enough (but still much smaller than the original vector dimensionality) - that's why PCA is used after all. \n\n\\cvCppFunc{perspectiveTransform}\nPerforms perspective matrix transformation of vectors.\n\n\\cvdefCpp{void perspectiveTransform(const Mat\\& src, \\par Mat\\& dst, const Mat\\& mtx );}\n\\begin{description}\n\\cvarg{src}{The source two-channel or three-channel floating-point array;\n            each element is a 2D/3D vector to be transformed}\n\\cvarg{dst}{The destination array; it will have the same size and same type as \\texttt{src}}\n\\cvarg{mtx}{$3\\times 3$ or $4 \\times 4$ transformation matrix}\n\\end{description}\n\nThe function \\texttt{perspectiveTransform} transforms every element of \\texttt{src},\nby treating it as a 2D or 3D vector, in the following way (here the 3D vector transformation is shown; in the case of a 2D vector transformation the $z$ component is omitted):\n\n\\[ (x, y, z) \\rightarrow (x'/w, y'/w, z'/w) \\]\n\nwhere\n\n\\[\n(x', y', z', w') = \\texttt{mtx} \\cdot\n\\begin{bmatrix} x & y & z & 1 \\end{bmatrix}\n\\]\n\nand\n\\[ w = \\fork{w'}{if $w' \\ne 0$}{\\infty}{otherwise} \\]\n\nNote that the function transforms a sparse set of 2D or 3D vectors. If you want to transform an image using a perspective transformation, use \\cvCppCross{warpPerspective}. If you have the inverse task, i.e. 
want to compute the most probable perspective transformation out of several pairs of corresponding points, you can use \\cvCppCross{getPerspectiveTransform} or \\cvCppCross{findHomography}.\n\nSee also: \\cvCppCross{transform}, \\cvCppCross{warpPerspective}, \\cvCppCross{getPerspectiveTransform}, \\cvCppCross{findHomography}\n\n\\cvCppFunc{phase}\nCalculates the rotation angle of 2D vectors\n\n\\cvdefCpp{void phase(const Mat\\& x, const Mat\\& y, Mat\\& angle,\\par\n           bool angleInDegrees=false);}\n\\begin{description}\n\\cvarg{x}{The source floating-point array of x-coordinates of 2D vectors}\n\\cvarg{y}{The source array of y-coordinates of 2D vectors; must have the same size and the same type as \\texttt{x}}\n\\cvarg{angle}{The destination array of vector angles; it will have the same size and same type as \\texttt{x}}\n\\cvarg{angleInDegrees}{When it is true, the function computes the angles in degrees, otherwise they are measured in radians}\n\\end{description}\n\nThe function \\texttt{phase} computes the rotation angle of each 2D vector that is formed from the corresponding elements of \\texttt{x} and \\texttt{y}:\n\n\\[\\texttt{angle}(I) = \\texttt{atan2}(\\texttt{y}(I), \\texttt{x}(I))\\]\n\nThe angle estimation accuracy is about $0.3^\\circ$. When \\texttt{x(I)=y(I)=0}, the corresponding \\texttt{angle}(I) is set to $0$.\n\nSee also: \\cvCppCross{cartToPolar}, \\cvCppCross{polarToCart}, \\cvCppCross{fastAtan2}\n\n\\cvCppFunc{polarToCart}\nComputes x and y coordinates of 2D vectors from their magnitude and angle.\n\n\\cvdefCpp{void polarToCart(const Mat\\& magnitude, const Mat\\& angle,\\par\n                 Mat\\& x, Mat\\& y, bool angleInDegrees=false);}\n\\begin{description}\n\\cvarg{magnitude}{The source floating-point array of magnitudes of 2D vectors. It can be an empty matrix (\\texttt{=Mat()}); in this case the function assumes that all the magnitudes are 1. 
If it's not empty, it must have the same size and same type as \\texttt{angle}}\n\\cvarg{angle}{The source floating-point array of angles of the 2D vectors}\n\\cvarg{x}{The destination array of x-coordinates of the 2D vectors; will have the same size and the same type as \\texttt{angle}}\n\\cvarg{y}{The destination array of y-coordinates of the 2D vectors; will have the same size and the same type as \\texttt{angle}}\n\\cvarg{angleInDegrees}{When it is true, the input angles are measured in degrees, otherwise they are measured in radians}\n\\end{description}\n\nThe function \\texttt{polarToCart} computes the Cartesian coordinates of each 2D vector represented by the corresponding elements of \\texttt{magnitude} and \\texttt{angle}:\n\n\\[\n\\begin{array}{l}\n\\texttt{x}(I) = \\texttt{magnitude}(I)\\cos(\\texttt{angle}(I))\\\\\n\\texttt{y}(I) = \\texttt{magnitude}(I)\\sin(\\texttt{angle}(I))\\\\\n\\end{array}\n\\]\n\nThe relative accuracy of the estimated coordinates is $\\sim\\,10^{-6}$.\n\nSee also: \\cvCppCross{cartToPolar}, \\cvCppCross{magnitude}, \\cvCppCross{phase}, \\cvCppCross{exp}, \\cvCppCross{log}, \\cvCppCross{pow}, \\cvCppCross{sqrt}\n\n\\cvCppFunc{pow}\nRaises every array element to a power.\n\n\\cvdefCpp{void pow(const Mat\\& src, double p, Mat\\& dst);\\newline\nvoid pow(const MatND\\& src, double p, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{p}{The exponent of the power}\n\\cvarg{dst}{The destination array; will have the same size and the same type as \\texttt{src}}\n\\end{description}\n\nThe function \\texttt{pow} raises every element of the input array to the power \\texttt{p}:\n\n\\[\n\\texttt{dst}(I) = \\fork\n{\\texttt{src}(I)^p}{if \\texttt{p} is integer}\n{|\\texttt{src}(I)|^p}{otherwise}\n\\]\n\nThat is, for a non-integer power exponent the absolute values of the input array elements are used. 
However, it is possible to get true values for negative inputs using some extra operations, as the following example, computing the 5th root of array \\texttt{src}, shows:\n\n\\begin{lstlisting}\nMat mask = src < 0;\npow(src, 1./5, dst);\nsubtract(Scalar::all(0), dst, dst, mask);\n\\end{lstlisting}\n\nFor some values of \\texttt{p}, such as integer values, 0.5, and -0.5, specialized faster algorithms are used.\n\nSee also: \\cvCppCross{sqrt}, \\cvCppCross{exp}, \\cvCppCross{log}, \\cvCppCross{cartToPolar}, \\cvCppCross{polarToCart}\n\n\n\\subsection{RNG}\\label{RNG}\n\nRandom number generator class.\n\n\\begin{lstlisting}\nclass CV_EXPORTS RNG\n{\npublic:\n    enum { A=4164903690U, UNIFORM=0, NORMAL=1 };\n\n    // constructors\n    RNG();\n    RNG(uint64 state);\n    \n    // returns a 32-bit unsigned random number\n    unsigned next();\n\n    // returns a random number of the specified type\n    operator uchar();\n    operator schar();\n    operator ushort();\n    operator short();\n    operator unsigned();\n    // returns a random integer sampled uniformly from [0, N)\n    unsigned operator()(unsigned N);\n    unsigned operator()();\n    operator int();\n    operator float();\n    operator double();\n    // returns a random number sampled uniformly from the [a, b) range\n    int uniform(int a, int b);\n    float uniform(float a, float b);\n    double uniform(double a, double b);\n    \n    // returns a Gaussian random number with zero mean\n    double gaussian(double sigma);\n    \n    // fills the array with random numbers sampled from the specified distribution\n    void fill( Mat& mat, int distType, const Scalar& a, const Scalar& b );\n    void fill( MatND& mat, int distType, const Scalar& a, const Scalar& b );\n\n    // internal state of the RNG (could change in the future)\n    uint64 state;\n};\n\\end{lstlisting}\n\nThe class \\texttt{RNG} implements a random number generator. It encapsulates the RNG state (currently, a 64-bit integer) and has methods to return scalar random values and to fill arrays with random values. Currently it supports uniform and Gaussian (normal) distributions. The generator uses the Multiply-With-Carry algorithm, introduced by G. Marsaglia (\\url{http://en.wikipedia.org/wiki/Multiply-with-carry}). Gaussian-distribution random numbers are generated using the Ziggurat algorithm (\\url{http://en.wikipedia.org/wiki/Ziggurat_algorithm}), introduced by G. Marsaglia and W. W. Tsang. \n\n\\cvCppFunc{RNG::RNG}\nRNG constructors\n\n\\cvdefCpp{\nRNG::RNG();\\newline\nRNG::RNG(uint64 state);\n}\n\\begin{description}\n\\cvarg{state}{the 64-bit value used to initialize the RNG}\n\\end{description}\n\nThese are the RNG constructors. The first form sets the state to some pre-defined value, equal to \\texttt{2**32-1} in the current implementation. The second form sets the state to the specified value. 
If the user passed \\texttt{state=0}, the constructor uses the above default value instead, to avoid the singular random number sequence consisting of all zeros.\n\n\\cvCppFunc{RNG::next}\nReturns the next random number\n\n\\cvdefCpp{\nunsigned RNG::next();\n}\n\nThe method updates the state using the MWC algorithm and returns the next 32-bit random number.\n\n\n\\cvCppFunc{RNG::operator T}\nReturns the next random number of the specified type\n\n\\cvdefCpp{\nRNG::operator uchar();\nRNG::operator schar();\nRNG::operator ushort();\nRNG::operator short();\nRNG::operator unsigned();\nRNG::operator int();\nRNG::operator float();\nRNG::operator double();\n}\n\nEach of the methods updates the state using the MWC algorithm and returns the next random number of the specified type. In the case of integer types the returned number is from the whole available value range for the specified type. In the case of floating-point types the returned value is from the \\texttt{[0,1)} range.\n\n\\cvCppFunc{RNG::operator ()}\nReturns the next random number\n\n\\cvdefCpp{\nunsigned RNG::operator ()();\\newline\nunsigned RNG::operator ()(unsigned N);\n}\n\\begin{description}\n\\cvarg{N}{The upper non-inclusive boundary of the returned random number}\n\\end{description}\n\nThe methods transform the state using the MWC algorithm and return the next random number. The first form is equivalent to \\cvCppCross{RNG::next}; the second form returns the random number modulo \\texttt{N}, i.e. the result will be in the range \\texttt{[0, N)}.\n\n\\cvCppFunc{RNG::uniform}\nReturns the next random number sampled from the uniform distribution\n\n\\cvdefCpp{\nint RNG::uniform(int a, int b);\\newline\nfloat RNG::uniform(float a, float b);\\newline\ndouble RNG::uniform(double a, double b);\n}\n\\begin{description}\n\\cvarg{a}{The lower inclusive boundary of the returned random numbers}\n\\cvarg{b}{The upper non-inclusive boundary of the returned random numbers}\n\\end{description}\n\nThe methods transform the state using the MWC algorithm and return the next uniformly-distributed random number of the specified type, deduced from the input parameter type, from the range \\texttt{[a, b)}. There is one nuance, illustrated by the following sample:\n\n\\begin{lstlisting}\ncv::RNG rng;\n\n// will always produce 0\ndouble a = rng.uniform(0, 1);\n\n// will produce a double from [0, 1)\ndouble a1 = rng.uniform((double)0, (double)1);\n\n// will produce a float from [0, 1)\ndouble b = rng.uniform(0.f, 1.f);\n\n// will produce a double from [0, 1)\ndouble c = rng.uniform(0., 1.);\n\n// will likely cause a compiler error because of ambiguity:\n//  RNG::uniform(0, (int)0.999999)? or RNG::uniform((double)0, 0.99999)?\ndouble d = rng.uniform(0, 0.999999);\n\\end{lstlisting}\n\nThat is, the compiler does not take into account the type of the variable to which you assign the result of \\texttt{RNG::uniform}; the only thing that matters to it is the type of the \\texttt{a} and \\texttt{b} parameters. 
So if you want a floating-point random number, but the range boundaries are integer numbers, either put dots at the end, if they are constants, or use explicit type cast operators, as in the \\texttt{a1} initialization above.\n\n\n\\cvCppFunc{RNG::gaussian}\nReturns the next random number sampled from the Gaussian distribution\n\n\\cvdefCpp{\ndouble RNG::gaussian(double sigma);\n}\n\\begin{description}\n\\cvarg{sigma}{The standard deviation of the distribution}\n\\end{description}\n\nThe method transforms the state using the MWC algorithm and returns the next random number from the Gaussian distribution \\texttt{N(0,sigma)}. That is, the mean value of the returned random numbers will be zero and the standard deviation will be the specified \\texttt{sigma}.\n\n\n\\cvCppFunc{RNG::fill}\nFills arrays with random numbers\n\n\\cvdefCpp{\nvoid RNG::fill( Mat\\& mat, int distType, const Scalar\\& a, const Scalar\\& b );\\newline\nvoid RNG::fill( MatND\\& mat, int distType, const Scalar\\& a, const Scalar\\& b );\n}\n\\begin{description}\n\\cvarg{mat}{2D or N-dimensional matrix. Currently matrices with more than 4 channels are not supported by the methods. Use \\cvCppCross{reshape} as a possible workaround.}\n\\cvarg{distType}{The distribution type, \\texttt{RNG::UNIFORM} or \\texttt{RNG::NORMAL}}\n\\cvarg{a}{The first distribution parameter. In the case of the uniform distribution this is the inclusive lower boundary. In the case of the normal distribution this is the mean value.}\n\\cvarg{b}{The second distribution parameter. In the case of the uniform distribution this is the non-inclusive upper boundary. In the case of the normal distribution this is the standard deviation.}\n\\end{description}\n\nEach of the methods fills the matrix with random values from the specified distribution. As the new numbers are generated, the RNG state is updated accordingly. In the case of multi-channel images every channel is filled independently, i.e. RNG cannot directly generate samples from a multi-dimensional Gaussian distribution with a non-diagonal covariance matrix. To do that, first generate a matrix from the distribution $N(0, I_n)$, i.e. a Gaussian distribution with zero mean and an identity covariance matrix, and then transform it using \\cvCppCross{transform} and the specific covariance matrix.\n\n
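For illustration, a short sketch of filling matrices from both supported distributions (the seed, matrix sizes and parameters below are arbitrary):\n\n\\begin{lstlisting}\nRNG rng(12345);\nMat unif(4, 4, CV_8UC3), gauss(4, 4, CV_32FC1);\n\n// uniform: each channel sampled from [0, 256)\nrng.fill(unif, RNG::UNIFORM, Scalar::all(0), Scalar::all(256));\n\n// normal: zero mean, unit standard deviation\nrng.fill(gauss, RNG::NORMAL, Scalar::all(0), Scalar::all(1));\n\\end{lstlisting}\n\n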
\\cvCppFunc{randu}\nGenerates a single uniformly-distributed random number or an array of random numbers\n\n\\cvdefCpp{template<typename \\_Tp> \\_Tp randu();\\newline\nvoid randu(Mat\\& mtx, const Scalar\\& low, const Scalar\\& high);}\n\\begin{description}\n\\cvarg{mtx}{The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels}\n\\cvarg{low}{The inclusive lower boundary of the generated random numbers}\n\\cvarg{high}{The exclusive upper boundary of the generated random numbers}\n\\end{description}\n\nThe template functions \\texttt{randu} generate and return the next uniformly-distributed random value of the specified type. \\texttt{randu<int>()} is equivalent to \\texttt{(int)theRNG();} etc. See the \\cvCppCross{RNG} description.\n\nThe second non-template variant of the function fills the matrix \\texttt{mtx} with uniformly-distributed random numbers from the specified range:\n\n\\[\\texttt{low}_c \\leq \\texttt{mtx}(I)_c < \\texttt{high}_c\\]\n\nSee also: \\cvCppCross{RNG}, \\cvCppCross{randn}, \\cvCppCross{theRNG}.\n\n\\cvCppFunc{randn}\nFills an array with normally distributed random numbers\n\n\\cvdefCpp{void randn(Mat\\& mtx, const Scalar\\& mean, const Scalar\\& stddev);}\n\\begin{description}\n\\cvarg{mtx}{The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels}\n\\cvarg{mean}{The mean value (expectation) of the generated random numbers}\n\\cvarg{stddev}{The standard deviation of the generated random numbers}\n\\end{description}\n\nThe function \\texttt{randn} fills the matrix \\texttt{mtx} with normally distributed random numbers with the specified mean and standard deviation. \\hyperref[cppfunc.saturatecast]{saturate\\_cast} is applied to the generated numbers (i.e. the values are clipped).\n\nSee also: \\cvCppCross{RNG}, \\cvCppCross{randu}\n\n\\cvCppFunc{randShuffle}\nShuffles the array elements randomly\n\n\\cvdefCpp{void randShuffle(Mat\\& mtx, double iterFactor=1., RNG* rng=0);}\n\\begin{description}\n\\cvarg{mtx}{The input/output numerical 1D array}\n\\cvarg{iterFactor}{The scale factor that determines the number of random swap operations. See the discussion}\n\\cvarg{rng}{The optional random number generator used for shuffling. If it is zero, \\cvCppCross{theRNG}() is used instead}\n\\end{description}\n\nThe function \\texttt{randShuffle} shuffles the specified 1D array by randomly choosing pairs of elements and swapping them. The number of such swap operations will be \\texttt{mtx.rows*mtx.cols*iterFactor}.\n\nSee also: \\cvCppCross{RNG}, \\cvCppCross{sort}\n\n\\cvCppFunc{reduce}\nReduces a matrix to a vector\n\n\\cvdefCpp{void reduce(const Mat\\& mtx, Mat\\& vec, \\par int dim, int reduceOp, int dtype=-1);}\n\\begin{description}\n\\cvarg{mtx}{The source 2D matrix}\n\\cvarg{vec}{The destination vector. Its size and type are defined by the \\texttt{dim} and \\texttt{dtype} parameters}\n\\cvarg{dim}{The dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row and 1 means that the matrix is reduced to a single column}\n\\cvarg{reduceOp}{The reduction operation, one of:\n\\begin{description}\n\\cvarg{CV\\_REDUCE\\_SUM}{The output is the sum of all of the matrix's rows/columns.}\n\\cvarg{CV\\_REDUCE\\_AVG}{The output is the mean vector of all of the matrix's rows/columns.}\n\\cvarg{CV\\_REDUCE\\_MAX}{The output is the maximum (column/row-wise) of all of the matrix's rows/columns.}\n\\cvarg{CV\\_REDUCE\\_MIN}{The output is the minimum (column/row-wise) of all of the matrix's rows/columns.}\n\\end{description}}\n\\cvarg{dtype}{When it is negative, the destination vector will have the same type as the source matrix; otherwise, its type will be \\texttt{CV\\_MAKE\\_TYPE(CV\\_MAT\\_DEPTH(dtype), mtx.channels())}}\n\\end{description}\n\nThe function \\texttt{reduce} reduces a matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In the case of \\texttt{CV\\_REDUCE\\_SUM} and \\texttt{CV\\_REDUCE\\_AVG} the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes.\n\n
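For example, a minimal sketch of computing per-column sums (the input values are made up; the sum accumulates into \\texttt{CV\\_32S} to avoid 8-bit overflow):\n\n\\begin{lstlisting}\nMat img(10, 5, CV_8UC1, Scalar(1));\nMat colSums;\n// reduce along dimension 0: all rows collapse into a single row of sums\nreduce(img, colSums, 0, CV_REDUCE_SUM, CV_32S);\n// colSums is a 1x5 CV_32S matrix; every element equals 10\n\\end{lstlisting}\n\n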
See also: \\cvCppCross{repeat}\n\n\\cvCppFunc{repeat}\nFills the destination array with repeated copies of the source array.\n\n\\cvdefCpp{void repeat(const Mat\\& src, int ny, int nx, Mat\\& dst);\\newline\nMat repeat(const Mat\\& src, int ny, int nx);}\n\\begin{description}\n\\cvarg{src}{The source array to replicate}\n\\cvarg{dst}{The destination array; will have the same type as \\texttt{src}}\n\\cvarg{ny}{How many times \\texttt{src} is repeated along the vertical axis}\n\\cvarg{nx}{How many times \\texttt{src} is repeated along the horizontal axis}\n\\end{description}\n\nThe functions \\cvCppCross{repeat} duplicate the source array one or more times along each of the two axes:\n\n\\[\\texttt{dst}_{ij}=\\texttt{src}_{i\\mod\\texttt{src.rows},\\;j\\mod\\texttt{src.cols}}\\]\n\nThe second variant of the function is more convenient to use with \\cross{Matrix Expressions}.\n\nSee also: \\cvCppCross{reduce}, \\cross{Matrix Expressions}\n\n\\ifplastex\n\\cvfunc{saturate\\_cast}\\label{cppfunc.saturatecast}\n\\else\n\\subsection{saturate\\_cast}\\label{cppfunc.saturatecast}\n\\fi\nTemplate function for accurate conversion from one primitive type to another\n\n\\cvdefCpp{template<typename \\_Tp> inline \\_Tp saturate\\_cast(unsigned char v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(signed char v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(unsigned short v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(signed short v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(int v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(unsigned int v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(float v);\\newline\ntemplate<typename \\_Tp> inline \\_Tp saturate\\_cast(double v);}\n\n\\begin{description}\n\\cvarg{v}{The value to convert}\n\\end{description}\n\nThe functions \\texttt{saturate\\_cast} resemble the standard C++ cast operations, such as \\texttt{static\\_cast<T>()} etc. They perform an efficient and accurate conversion from one primitive type to another; see the introduction. \"saturate\" in the name means that when the input value \\texttt{v} is out of the range of the target type, the result is not formed just by taking the low bits of the input; instead, the value is clipped. 
For example:\n\n\\begin{lstlisting}\nuchar a = saturate_cast<uchar>(-100); // a = 0 (UCHAR_MIN)\nshort b = saturate_cast<short>(33333.33333); // b = 32767 (SHRT_MAX)\n\\end{lstlisting}\n\nSuch clipping is done when the target type is \\texttt{unsigned char}, \\texttt{signed char}, \\texttt{unsigned short} or \\texttt{signed short}; for 32-bit integers no clipping is done.\n\nWhen the parameter is a floating-point value and the target type is an integer (8-, 16- or 32-bit), the floating-point value is first rounded to the nearest integer and then clipped if needed (when the target type is 8- or 16-bit).\n\nThis operation is used in most simple or complex image processing functions in OpenCV.\n\nSee also: \\cvCppCross{add}, \\cvCppCross{subtract}, \\cvCppCross{multiply}, \\cvCppCross{divide}, \\cvCppCross{Mat::convertTo}\n\n\\cvCppFunc{scaleAdd}\nCalculates the sum of a scaled array and another array.\n\n\\cvdefCpp{void scaleAdd(const Mat\\& src1, double scale, \\par const Mat\\& src2, Mat\\& dst);\\newline\nvoid scaleAdd(const MatND\\& src1, double scale, \\par const MatND\\& src2, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{scale}{Scale factor for the first array}\n\\cvarg{src2}{The second source array; must have the same size and the same type as \\texttt{src1}}\n\\cvarg{dst}{The destination array; will have the same size and the same type as \\texttt{src1}}\n\\end{description}\n\nThe function \\texttt{scaleAdd} is one of the classical primitive linear algebra operations, known as \\texttt{DAXPY} or \\texttt{SAXPY} in \\href{http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms}{BLAS}. It calculates the sum of a scaled array and another array:\n\n\\[\n\\texttt{dst}(I)=\\texttt{scale} \\cdot \\texttt{src1}(I) + \\texttt{src2}(I)\n\\]\n\nThe function can also be emulated with a matrix expression, for example:\n\n\\begin{lstlisting}\nMat A(3, 3, CV_64F);\n...\nA.row(0) = A.row(1)*2 + A.row(2);\n\\end{lstlisting}\n\nSee also: \\cvCppCross{add}, \\cvCppCross{addWeighted}, \\cvCppCross{subtract}, \\cvCppCross{Mat::dot}, \\cvCppCross{Mat::convertTo}, \\cross{Matrix Expressions}\n\n\\cvCppFunc{setIdentity}\nInitializes a scaled identity matrix\n\n\\cvdefCpp{void setIdentity(Mat\\& dst, const Scalar\\& value=Scalar(1));}\n\\begin{description}\n\\cvarg{dst}{The matrix to initialize (not necessarily square)}\n\\cvarg{value}{The value to assign to the diagonal elements}\n\\end{description}\n\nThe function \\cvCppCross{setIdentity} initializes a scaled identity matrix:\n\n\\[\n\\texttt{dst}(i,j)=\\fork{\\texttt{value}}{ if $i=j$}{0}{otherwise}\n\\]\n\nThe function can also be emulated using the matrix initializers and the matrix expressions:\n\\begin{lstlisting}\nMat A = Mat::eye(4, 3, CV_32F)*5;\n// A will be set to [[5, 0, 0], [0, 5, 0], [0, 0, 5], [0, 0, 0]]\n\\end{lstlisting}\n\nSee also: \\cvCppCross{Mat::zeros}, \\cvCppCross{Mat::ones}, \\cross{Matrix Expressions},\n\\cvCppCross{Mat::setTo}, \\cvCppCross{Mat::operator=}\n\n\\cvCppFunc{solve}\nSolves one or more linear systems or least-squares problems.\n\n\\cvdefCpp{bool solve(const Mat\\& src1, const Mat\\& src2, \\par Mat\\& dst, int flags=DECOMP\\_LU);}\n\\begin{description}\n\\cvarg{src1}{The input matrix on the left-hand side of the system}\n\\cvarg{src2}{The input matrix on the right-hand side of the system}\n\\cvarg{dst}{The output solution}\n\\cvarg{flags}{The solution (matrix inversion) method\n\\begin{description}\n \\cvarg{DECOMP\\_LU}{Gaussian elimination with the optimal pivot element chosen}\n 
\\cvarg{DECOMP\\_CHOLESKY}{Cholesky $LL^T$ factorization; the matrix \\texttt{src1} must be symmetric and positive definite}\n \\cvarg{DECOMP\\_EIG}{Eigenvalue decomposition; the matrix \\texttt{src1} must be symmetric}\n \\cvarg{DECOMP\\_SVD}{Singular value decomposition (SVD) method; the system can be over-determined and/or the matrix \\texttt{src1} can be singular}\n \\cvarg{DECOMP\\_QR}{QR factorization; the system can be over-determined and/or the matrix \\texttt{src1} can be singular}\n \\cvarg{DECOMP\\_NORMAL}{While all the previous flags are mutually exclusive, this flag can be used together with any of them. It means that the normal equations $\\texttt{src1}^T\\cdot\\texttt{src1}\\cdot\\texttt{dst}=\\texttt{src1}^T\\texttt{src2}$ are solved instead of the original system $\\texttt{src1}\\cdot\\texttt{dst}=\\texttt{src2}$}\n\\end{description}}\n\\end{description}\n\nThe function \\texttt{solve} solves a linear system or least-squares problem (the latter is possible with SVD or QR methods, or by specifying the flag \\texttt{DECOMP\\_NORMAL}):\n\n\\[\n\\texttt{dst} = \\arg \\min_X\\|\\texttt{src1}\\cdot\\texttt{X} - \\texttt{src2}\\|\n\\]\n\nIf the \\texttt{DECOMP\\_LU} or \\texttt{DECOMP\\_CHOLESKY} method is used, the function returns 1 if \\texttt{src1} (or $\\texttt{src1}^T\\texttt{src1}$) is non-singular and 0 otherwise; in the latter case \\texttt{dst} is not valid. The other methods find some pseudo-solution in the case of a singular left-hand side.\n\nNote that if you want to find a unit-norm solution of an under-determined singular system $\\texttt{src1}\\cdot\\texttt{dst}=0$, the function \\texttt{solve} will not do the job. Use \\cvCppCross{SVD::solveZ} instead.\n\n
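For instance, a minimal sketch of solving a small non-singular system (the numbers are made up for illustration):\n\n\\begin{lstlisting}\nMat A = (Mat_<double>(2, 2) << 2, 1,\n                               1, 3);\nMat b = (Mat_<double>(2, 1) << 3, 5);\nMat x;\n// LU decomposition; returns true because this A is non-singular\nbool ok = solve(A, b, x, DECOMP_LU);\n// x = [0.8; 1.4]\n\\end{lstlisting}\n\n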
See also: \\cvCppCross{invert}, \\cvCppCross{SVD}, \\cvCppCross{eigen}\n\n\\cvCppFunc{solveCubic}\nFinds the real roots of a cubic equation.\n\n\\cvdefCpp{void solveCubic(const Mat\\& coeffs, Mat\\& roots);}\n\\begin{description}\n\\cvarg{coeffs}{The equation coefficients, an array of 3 or 4 elements}\n\\cvarg{roots}{The destination array of real roots, which will have 1 or 3 elements}\n\\end{description}\n\nThe function \\texttt{solveCubic} finds the real roots of a cubic equation:\n\n(if \\texttt{coeffs} is a 4-element vector)\n\n\\[\n\\texttt{coeffs}[0] x^3 + \\texttt{coeffs}[1] x^2 + \\texttt{coeffs}[2] x + \\texttt{coeffs}[3] = 0\n\\]\n\nor (if \\texttt{coeffs} is a 3-element vector):\n\n\\[\nx^3 + \\texttt{coeffs}[0] x^2 + \\texttt{coeffs}[1] x + \\texttt{coeffs}[2] = 0\n\\]\n\nThe roots are stored in the \\texttt{roots} array.\n\n\\cvCppFunc{solvePoly}\nFinds the real or complex roots of a polynomial equation\n\n\\cvdefCpp{void solvePoly(const Mat\\& coeffs, Mat\\& roots, \\par int maxIters=20, int fig=100);}\n\\begin{description}\n\\cvarg{coeffs}{The array of polynomial coefficients}\n\\cvarg{roots}{The destination (complex) array of roots}\n\\cvarg{maxIters}{The maximum number of iterations the algorithm does}\n\\cvarg{fig}{}\n\\end{description}\n\nThe function \\texttt{solvePoly} finds the real and complex roots of a polynomial equation:\n\\[\n\\texttt{coeffs}[0] x^{n} + \\texttt{coeffs}[1] x^{n-1} + ... + \\texttt{coeffs}[n-1] x + \\texttt{coeffs}[n] = 0\n\\]\n\n\\cvCppFunc{sort}\nSorts each row or each column of a matrix\n\n\\cvdefCpp{void sort(const Mat\\& src, Mat\\& dst, int flags);}\n\\begin{description}\n\\cvarg{src}{The source single-channel array}\n\\cvarg{dst}{The destination array of the same size and the same type as \\texttt{src}}\n\\cvarg{flags}{The operation flags, a combination of the following values:\n\\begin{description}\n    \\cvarg{CV\\_SORT\\_EVERY\\_ROW}{Each matrix row is sorted independently}\n    \\cvarg{CV\\_SORT\\_EVERY\\_COLUMN}{Each matrix column is sorted independently. This flag and the previous one are mutually exclusive}\n    \\cvarg{CV\\_SORT\\_ASCENDING}{Each matrix row/column is sorted in ascending order}\n    \\cvarg{CV\\_SORT\\_DESCENDING}{Each matrix row/column is sorted in descending order. This flag and the previous one are also mutually exclusive}\n\\end{description}}\n\\end{description}\n\nThe function \\texttt{sort} sorts each matrix row or each matrix column in ascending or descending order. If you want to sort matrix rows or columns lexicographically, you can use the STL \\texttt{std::sort} generic function with a proper comparison predicate.\n\nSee also: \\cvCppCross{sortIdx}, \\cvCppCross{randShuffle}\n\n\\cvCppFunc{sortIdx}\nSorts each row or each column of a matrix\n\n\\cvdefCpp{void sortIdx(const Mat\\& src, Mat\\& dst, int flags);}\n\\begin{description}\n\\cvarg{src}{The source single-channel array}\n\\cvarg{dst}{The destination integer array of the same size as \\texttt{src}}\n\\cvarg{flags}{The operation flags, a combination of the following values:\n\\begin{description}\n    \\cvarg{CV\\_SORT\\_EVERY\\_ROW}{Each matrix row is sorted independently}\n    \\cvarg{CV\\_SORT\\_EVERY\\_COLUMN}{Each matrix column is sorted independently. This flag and the previous one are mutually exclusive}\n    \\cvarg{CV\\_SORT\\_ASCENDING}{Each matrix row/column is sorted in ascending order}\n    \\cvarg{CV\\_SORT\\_DESCENDING}{Each matrix row/column is sorted in descending order. This flag and the previous one are also mutually exclusive}\n\\end{description}}\n\\end{description}\n\nThe function \\texttt{sortIdx} sorts each matrix row or each matrix column in ascending or descending order. Instead of reordering the elements themselves, it stores the indices of the sorted elements in the destination array. For example:\n\n\\begin{lstlisting}\nMat A = Mat::eye(3,3,CV_32F), B;\nsortIdx(A, B, CV_SORT_EVERY_ROW + CV_SORT_ASCENDING);\n// B will probably contain\n// (because of equal elements in A some permutations are possible):\n// [[1, 2, 0], [0, 2, 1], [0, 1, 2]]\n\\end{lstlisting}\n\nSee also: \\cvCppCross{sort}, \\cvCppCross{randShuffle}\n\n\\cvCppFunc{split}\nDivides a multi-channel array into several single-channel arrays\n\n\\cvdefCpp{void split(const Mat\\& mtx, Mat* mv);\\newline\nvoid split(const Mat\\& mtx, vector<Mat>\\& mv);\\newline\nvoid split(const MatND\\& mtx, MatND* mv);\\newline\nvoid split(const MatND\\& mtx, vector<MatND>\\& mv);}\n\\begin{description}\n\\cvarg{mtx}{The source multi-channel array}\n\\cvarg{mv}{The destination array or vector of arrays; the number of arrays must match \\texttt{mtx.channels()}. 
The arrays themselves will be reallocated if needed}\n\\end{description}\n\nThe functions \\texttt{split} split a multi-channel array into separate single-channel arrays:\n\n\\[ \\texttt{mv}[c](I) = \\texttt{mtx}(I)_c \\]\n\nIf you need to extract a single channel, or do some other sophisticated channel permutation, use \\cvCppCross{mixChannels}.\n\n
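For instance, a brief sketch of separating a 3-channel image into planes (the image contents here are just a placeholder):\n\n\\begin{lstlisting}\nMat bgr(480, 640, CV_8UC3, Scalar(255, 0, 0));\nvector<Mat> planes;\nsplit(bgr, planes);\n// planes[0], planes[1], planes[2] are 480x640 CV_8UC1 arrays\n\\end{lstlisting}\n\n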
See also: \\cvCppCross{merge}, \\cvCppCross{mixChannels}, \\cvCppCross{cvtColor}\n\n\\cvCppFunc{sqrt}\nCalculates square root of array elements\n\n\\cvdefCpp{void sqrt(const Mat\\& src, Mat\\& dst);\\newline\nvoid sqrt(const MatND\\& src, MatND\\& dst);}\n\\begin{description}\n\\cvarg{src}{The source floating-point array}\n\\cvarg{dst}{The destination array; will have the same size and the same type as \\texttt{src}}\n\\end{description}\n\nThe functions \\texttt{sqrt} calculate the square root of each source array element. In the case of multi-channel arrays each channel is processed independently. The function accuracy is approximately the same as that of the built-in \\texttt{std::sqrt}.\n\nSee also: \\cvCppCross{pow}, \\cvCppCross{magnitude}\n\n\\cvCppFunc{subtract}\nCalculates the per-element difference between two arrays or between an array and a scalar\n\n\\cvdefCpp{void subtract(const Mat\\& src1, const Mat\\& src2, Mat\\& dst);\\newline\nvoid subtract(const Mat\\& src1, const Mat\\& src2, \\par Mat\\& dst, const Mat\\& mask);\\newline\nvoid subtract(const Mat\\& src1, const Scalar\\& sc, \\par Mat\\& dst, const Mat\\& mask=Mat());\\newline\nvoid subtract(const Scalar\\& sc, const Mat\\& src2, \\par Mat\\& dst, const Mat\\& mask=Mat());\\newline\nvoid subtract(const MatND\\& src1, const MatND\\& src2, MatND\\& dst);\\newline\nvoid subtract(const MatND\\& src1, const MatND\\& src2, \\par MatND\\& dst, const MatND\\& mask);\\newline\nvoid subtract(const MatND\\& src1, const Scalar\\& sc, \\par MatND\\& dst, const MatND\\& mask=MatND());\\newline\nvoid subtract(const Scalar\\& sc, const MatND\\& src2, \\par MatND\\& dst, const MatND\\& mask=MatND());}\n\\begin{description}\n\\cvarg{src1}{The first source array}\n\\cvarg{src2}{The second source array. It must have the same size and same type as \\texttt{src1}}\n\\cvarg{sc}{Scalar; the first or the second input parameter}\n\\cvarg{dst}{The destination array; it will have the same size and same type as \\texttt{src1}; see \\texttt{Mat::create}}\n\\cvarg{mask}{The optional operation mask, an 8-bit single-channel array;\n             specifies elements of the destination array to be changed}\n\\end{description}\n\nThe functions \\texttt{subtract} compute\n\n\\begin{itemize}\n    \\item the difference between two arrays:\n    \\[\\texttt{dst}(I) = \\texttt{saturate}(\\texttt{src1}(I) - \\texttt{src2}(I))\\quad\\texttt{if mask}(I)\\ne0\\]\n    \\item the difference between an array and a scalar:\n    \\[\\texttt{dst}(I) = \\texttt{saturate}(\\texttt{src1}(I) - \\texttt{sc})\\quad\\texttt{if mask}(I)\\ne0\\]\n    \\item the difference between a scalar and an array:\n    \\[\\texttt{dst}(I) = \\texttt{saturate}(\\texttt{sc} - \\texttt{src2}(I))\\quad\\texttt{if mask}(I)\\ne0\\]\n\\end{itemize}\n\nwhere \\texttt{I} is a multi-dimensional index of the array elements.\n\nThe first function in the above list can be replaced with matrix expressions:\n\\begin{lstlisting}\ndst = src1 - src2;\ndst -= src2; // equivalent to subtract(dst, src2, dst);\n\\end{lstlisting}\n\nSee also: \\cvCppCross{add}, \\cvCppCross{addWeighted}, \\cvCppCross{scaleAdd}, \\cvCppCross{convertScale},\n\\cross{Matrix Expressions}, \\hyperref[cppfunc.saturatecast]{saturate\\_cast}.\n\n\\cvclass{SVD}\nClass for computing Singular Value Decomposition\n\n\\begin{lstlisting}\nclass SVD\n{\npublic:\n    enum { MODIFY_A=1, NO_UV=2, FULL_UV=4 };\n    // default empty constructor\n    SVD();\n    // decomposes A into u, w and vt: A = u*w*vt;\n    // u and vt are orthogonal, w is diagonal\n    SVD( const Mat& A, int flags=0 );\n    // decomposes A into u, w and vt.\n    SVD& operator ()( const Mat& A, int flags=0 );\n\n    // finds such vector x, norm(x)=1, so that A*x = 0,\n    // where A is a singular matrix\n    static void solveZ( const Mat& A, Mat& x );\n    // does back-substitution:\n    // x = vt.t()*inv(w)*u.t()*rhs ~ inv(A)*rhs\n    void backSubst( const Mat& rhs, Mat& x ) const;\n\n    Mat u; // the left orthogonal matrix\n    Mat w; // vector of singular values\n    Mat vt; // the right orthogonal matrix\n};\n\\end{lstlisting}\n\nThe class \\texttt{SVD} is used to compute the Singular Value Decomposition of a floating-point matrix, which can then be used to solve least-squares problems and under-determined linear systems, invert matrices, compute condition numbers, etc.\nFor slightly faster operation you can pass \\texttt{flags=SVD::MODIFY\\_A|...} to let the algorithm modify the decomposed matrix when it is not necessary to preserve it. If you want to compute the condition number of a matrix or the absolute value of its determinant, you do not need \\texttt{u} and \\texttt{vt}, so you can pass \\texttt{flags=SVD::NO\\_UV|...}. Another flag, \\texttt{FULL\\_UV}, indicates that full-size \\texttt{u} and \\texttt{vt} must be computed, which is not necessary most of the time.\n\nSee also: \\cvCppCross{invert}, \\cvCppCross{solve}, \\cvCppCross{eigen}, \\cvCppCross{determinant}\n\n\\cvCppFunc{SVD::SVD}\nSVD constructors\n\n\\cvdefCpp{\nSVD::SVD();\\newline\nSVD::SVD( const Mat\\& A, int flags=0 );\n}\n\\begin{description}\n\\cvarg{A}{The decomposed matrix}\n\\cvarg{flags}{Operation flags}\n\\begin{description}\n    \\cvarg{SVD::MODIFY\\_A}{The algorithm can modify the decomposed matrix. 
This can save some space and speed up processing a bit}\n    \\cvarg{SVD::NO\\_UV}{Indicates that only the vector of singular values \\texttt{w} is to be computed, while \\texttt{u} and \\texttt{vt} will be set to empty matrices}\n    \\cvarg{SVD::FULL\\_UV}{When the matrix is not square, by default the algorithm produces \\texttt{u} and \\texttt{vt} matrices of sufficiently large size for the further \\texttt{A} reconstruction. If, however, the \\texttt{FULL\\_UV} flag is specified, \\texttt{u} and \\texttt{vt} will be full-size square orthogonal matrices.}\n\\end{description}\n\\end{description}\n\nThe first constructor initializes an empty \\texttt{SVD} structure. The second constructor initializes an empty \\texttt{SVD} structure and then calls \\cvCppCross{SVD::operator ()}.\n\n\n\\cvCppFunc{SVD::operator ()}\nPerforms SVD of a matrix\n\n\\cvdefCpp{\nSVD\\& SVD::operator ()( const Mat\\& A, int flags=0 );\n}\n\\begin{description}\n\\cvarg{A}{The decomposed matrix}\n\\cvarg{flags}{Operation flags}\n\\begin{description}\n    \\cvarg{SVD::MODIFY\\_A}{The algorithm can modify the decomposed matrix. This can save some space and speed up processing a bit}\n    \\cvarg{SVD::NO\\_UV}{Only singular values are needed. The algorithm will not compute the \\texttt{u} and \\texttt{vt} matrices}\n    \\cvarg{SVD::FULL\\_UV}{When the matrix is not square, by default the algorithm produces \\texttt{u} and \\texttt{vt} matrices of sufficiently large size for the further \\texttt{A} reconstruction. If, however, the \\texttt{FULL\\_UV} flag is specified, \\texttt{u} and \\texttt{vt} will be full-size square orthogonal matrices.}\n\\end{description}\n\\end{description}\n\nThe operator performs the singular value decomposition of the supplied matrix. The \\texttt{u}, \\texttt{vt} and the vector of singular values \\texttt{w} are stored in the structure. The same \\texttt{SVD} structure can be reused many times with different matrices. Each time, if needed, the previous \\texttt{u}, \\texttt{vt} and \\texttt{w} are reclaimed and new matrices are created, which is all handled by \\cvCppCross{Mat::create}.\n\n\\cvCppFunc{SVD::solveZ}\nSolves an under-determined singular linear system\n\n\\cvdefCpp{\nstatic void SVD::solveZ( const Mat\\& A, Mat\\& x );\n}\n\\begin{description}\n\\cvarg{A}{The left-hand-side matrix.}\n\\cvarg{x}{The found solution}\n\\end{description}\n\nThe method finds the unit-length solution \\textbf{x} of the under-determined system $A x = 0$. In theory, such a system has an infinite number of solutions, so the algorithm finds the unit-length solution as the right singular vector corresponding to the smallest singular value (which should be 0). In practice, because of rounding errors and limited floating-point accuracy, the input matrix can appear to be close-to-singular rather than exactly singular. 
So, strictly speaking, the algorithm solves the following problem:\n\n\\[\nx^* = \\arg \\min_{x: \\|x\\|=1} \\|A \\cdot x \\|\n\\]\n\n\\cvCppFunc{SVD::backSubst}\nPerforms singular value back substitution\n\n\\cvdefCpp{\nvoid SVD::backSubst( const Mat\\& rhs, Mat\\& x ) const;\n}\n\\begin{description}\n\\cvarg{rhs}{The right-hand side of a linear system $\\texttt{A} \\texttt{x} = \\texttt{rhs}$ being solved, where \\texttt{A} is the matrix passed to \\cvCppCross{SVD::SVD} or \\cvCppCross{SVD::operator ()}}\n\\cvarg{x}{The found solution of the system}\n\\end{description}\n\nThe method computes back substitution for the specified right-hand side:\n\n\\[\n\\texttt{x} = \\texttt{vt}^T \\cdot diag(\\texttt{w})^{-1} \\cdot \\texttt{u}^T \\cdot \\texttt{rhs} \\sim \\texttt{A}^{-1} \\cdot \\texttt{rhs}\n\\]\n\nUsing this technique you can either get a very accurate solution of a well-determined linear system, or the best (in the least-squares sense) pseudo-solution of an overdetermined linear system. Note that an explicit SVD followed by back substitution only makes sense if you need to solve many linear systems with the same left-hand side (e.g. \\texttt{A}). If all you need is to solve a single system (possibly with multiple \\texttt{rhs} immediately available), simply call \\cvCppCross{solve} and pass \\texttt{cv::DECOMP\\_SVD} there; it will do absolutely the same thing.\n\n
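To illustrate that reuse pattern, a brief sketch (the matrix contents are placeholders):\n\n\\begin{lstlisting}\nMat A(5, 5, CV_64F), rhs1(5, 1, CV_64F), rhs2(5, 1, CV_64F);\n// ... fill A, rhs1 and rhs2 with data ...\nSVD svd(A);               // decompose A once\nMat x1, x2;\nsvd.backSubst(rhs1, x1);  // then reuse the decomposition\nsvd.backSubst(rhs2, x2);  // for several right-hand sides\n\\end{lstlisting}\n\n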
\\cvCppFunc{sum}\nCalculates sum of array elements\n\n\\cvdefCpp{Scalar sum(const Mat\\& mtx);\\newline\nScalar sum(const MatND\\& mtx);}\n\\begin{description}\n\\cvarg{mtx}{The source array; must have 1 to 4 channels}\n\\end{description}\n\nThe functions \\texttt{sum} calculate and return the sum of array elements, independently for each channel.\n\nSee also: \\cvCppCross{countNonZero}, \\cvCppCross{mean}, \\cvCppCross{meanStdDev}, \\cvCppCross{norm}, \\cvCppCross{minMaxLoc}, \\cvCppCross{reduce}\n\n\\cvCppFunc{theRNG}\nReturns the default random number generator\n\n\\cvdefCpp{RNG\\& theRNG();}\n\nThe function \\texttt{theRNG} returns the default random number generator. For each thread there is a separate random number generator, so you can use the function safely in multi-threaded environments. If you just need to get a single random number using this generator or initialize an array, you can use \\cvCppCross{randu} or \\cvCppCross{randn} instead. But if you are going to generate many random numbers inside a loop, it will be much faster to use this function to retrieve the generator and then use \\texttt{RNG::operator \\_Tp()}.\n\nSee also: \\cvCppCross{RNG}, \\cvCppCross{randu}, \\cvCppCross{randn}\n\n\\cvCppFunc{trace}\nReturns the trace of a matrix\n\n\\cvdefCpp{Scalar trace(const Mat\\& mtx);}\n\\begin{description}\n\\cvarg{mtx}{The source matrix}\n\\end{description}\n\nThe function \\texttt{trace} returns the sum of the diagonal elements of the matrix \\texttt{mtx}:\n\n\\[ \\mathrm{tr}(\\texttt{mtx}) = \\sum_i \\texttt{mtx}(i,i) \\]\n\n\n\\cvCppFunc{transform}\nPerforms matrix transformation of every array element.\n\n\\cvdefCpp{void transform(const Mat\\& src, \\par Mat\\& dst, const Mat\\& mtx );}\n\\begin{description}\n\\cvarg{src}{The source array; must have as many channels (1 to 4) as \\texttt{mtx.cols} or \\texttt{mtx.cols-1}}\n\\cvarg{dst}{The destination array; will have the same size and depth as \\texttt{src} and as many channels as \\texttt{mtx.rows}}\n\\cvarg{mtx}{The transformation matrix}\n\\end{description}\n\nThe function \\texttt{transform} performs a matrix transformation of every element of the array \\texttt{src} and stores the results in \\texttt{dst}:\n\n\\[\n\\texttt{dst}(I) = \\texttt{mtx} \\cdot \\texttt{src}(I)\n\\]\n(when \\texttt{mtx.cols=src.channels()}), or\n\n\\[\n\\texttt{dst}(I) = \\texttt{mtx} \\cdot [\\texttt{src}(I); 1]\n\\]\n(when \\texttt{mtx.cols=src.channels()+1})\n\nThat is, every element of the \\texttt{N}-channel array \\texttt{src} is\nconsidered as an \\texttt{N}-element vector, which is transformed using\na $\\texttt{M} \\times \\texttt{N}$ or $\\texttt{M} \\times \\texttt{N+1}$ matrix \\texttt{mtx} into\nan element of the \\texttt{M}-channel array \\texttt{dst}.\n\nThe function may be used for the geometrical transformation of $N$-dimensional\npoints, arbitrary linear color space transformations (such as various kinds of RGB$\\rightarrow$YUV transforms), shuffling the image channels, and so forth.\n\nSee also: \\cvCppCross{perspectiveTransform}, \\cvCppCross{getAffineTransform}, \\cvCppCross{estimateRigidTransform}, \\cvCppCross{warpAffine}, \\cvCppCross{warpPerspective}\n\n\\cvCppFunc{transpose}\nTransposes a matrix\n\n\\cvdefCpp{void transpose(const Mat\\& src, Mat\\& dst);}\n\\begin{description}\n\\cvarg{src}{The source array}\n\\cvarg{dst}{The destination array of the same type as \\texttt{src}}\n\\end{description}\n\nThe function \\cvCppCross{transpose} transposes the matrix \\texttt{src}:\n\n\\[ \\texttt{dst}(i,j) = \\texttt{src}(j,i) \\]\n\nNote that no complex conjugation is done in the case of a complex\nmatrix; it should be done separately if needed.\n\n\\fi\n", "meta": {"hexsha": "a5be71e43b856acbc0cec05eba78454f8e3efa68", "size": 272884, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "to/lang/OpenCV-2.2.0/doc/core_array_operations.tex", "max_stars_repo_name": "eirTony/INDI1", "max_stars_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "to/lang/OpenCV-2.2.0/doc/core_array_operations.tex", "max_issues_repo_name": "eirTony/INDI1", "max_issues_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2016-11-24T10:46:39.000Z", "max_issues_repo_issues_event_max_datetime": 
"2016-12-10T07:24:15.000Z", "max_forks_repo_path": "to/lang/OpenCV-2.2.0/doc/core_array_operations.tex", "max_forks_repo_name": "eirTony/INDI1", "max_forks_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.5430030318, "max_line_length": 1138, "alphanum_fraction": 0.7412123833, "num_tokens": 79739, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.5684010542976343}}
{"text": "\\section{Contact tracing and quarantine}\n\n\\subsection{Model adaptation}\nWe simulate quarantining of persons identified as first degree contacts of COVID-19 patients explicitly through stratification of the compartments representing active COVID-19.\nThat is, the compartments representing both phases of the incubation period and both phases of active COVID-19 are duplicated into two model strata referred to as ``traced\" and ``untraced\".\nIn model initialisation, all infectious seed is assigned to the untraced stratum.\nAll newly infected persons commence their incubation period in the untraced stratum of the early incubation period.\nAs for isolated and hospitalised patients, those undergoing quarantine have their infectiousness reduced by 80\\%.\n\n\\subsection{Contact tracing process}\nIdentification of infected persons through contact tracing is assumed to apply to those in their early incubation period, with flows added to the model that transition persons during their incubation period from the untraced to the traced stratum of this compartment type.\nThe rate of transition from the untraced to the traced stratum of the early incubation period is determined by the proportion of contacts traced.\nIt is assumed that only the contacts of identified cases can be traced, such that the case detection rate (the proportion of symptomatic cases detected) is the ceiling for the proportion of contacts traced.\nThe proportion of contacts of identified cases that is traced is multiplied by the proportion of contacts whose index is detected  to determine the proportion of all persons entering the incubation period who are traced.\nThe proportion of all contacts of infectious persons with a detected index case, \\(u(t)\\), is calculated as the relative contribution of ever-detected infectious individuals to the total force of infection, and is given as:\n\n\\[\nu(t) = \\frac\n{\\sum_{c \\in \\mathcal{C}} \\sum_{s \\in \\mathcal{D}} \\, prev_{c, s}(t) \\times inf_{c, s}}\n{\\sum_{c \\in \\mathcal{C}} \\sum_{s \\in \\mathcal{S}} \\, prev_{c, s}(t) \\times inf_{c, s}} \\, ,\n\\]\n\nwhere $\\mathcal{C}$ is the set of infectious compartments, $\\mathcal{S}$ represents all clinical strata and $\\mathcal{D} \\subset \\mathcal{S}$ is the list of detected clinical strata.\nThe prevalence of infectious compartment $c$ in clinical stratum $s$ at time $t$ is represented by $prev_{c, s}(t)$, and $inf_{c, s}$ is the relative infectiousness of compartment $c$ in clinical stratum $s$.\n\nThe proportion of contacts of identified cases that is traced, \\(q(t)\\), is considered to decrease as the severity of the COVID-19 epidemic increases, because we expect contact tracing to decline in efficiency as more cases are identified.\nThat is, we assume that contact tracing is universal as COVID-19 prevalence approaches zero and declines exponentially with increasing prevalence.\nThe relationship between the proportion of contacts of identified patients who are quarantined and prevalence is given as:\n\n\\[q(t) = e^{-prev(t) \\times \\tau }\\]\n\nRather than estimate \\(\\tau\\) directly, we estimate the more intuitive quantity of the proportion of contacts of identified patients who would be quarantined at a particular prevalence.\nSolving for the previous equation for \\(\\tau\\), we obtain:\n\n\\[\\tau = \\frac{-log(q(t))}{prev(t)} \\]\n\nor \\(\\tau = \\frac{-log(q_{0})}{prev_{0}} \\) at a specific prevalence that accords with a particular value of \\(q\\). 
Fixing \(prev_{0}\) at \(10^{-4}\), we can vary \(q_{0}\) in calibration as the proportion of contacts of identified cases detected at a prevalence of one active case per ten thousand population.\n\nFinally, \(q(t) \times u(t) \) gives the proportion of all infected persons who are traced. This proportion of persons entering their early latent period transitions to the equivalent compartment in the traced stratum before proceeding to the late latent period.", "meta": {"hexsha": "77d5d5aca0188d7a8dcdffa462da6e65b01f0d66", "size": 3860, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/tex_descriptions/models/covid_19/stratifications/tracing.tex", "max_stars_repo_name": "emmamcbryde/AuTuMN-1", "max_stars_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_stars_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-03-11T06:15:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T03:38:35.000Z", "max_issues_repo_path": "docs/tex/tex_descriptions/models/covid_19/stratifications/tracing.tex", "max_issues_repo_name": "emmamcbryde/AuTuMN-1", "max_issues_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_issues_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_issues_count": 96, "max_issues_repo_issues_event_min_datetime": "2020-01-29T05:10:29.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T01:48:46.000Z", "max_forks_repo_path": "docs/tex/tex_descriptions/models/covid_19/stratifications/tracing.tex", "max_forks_repo_name": "emmamcbryde/AuTuMN-1", "max_forks_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_forks_repo_licenses": ["BSD-2-Clause-FreeBSD"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-24T00:38:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-19T16:19:03.000Z", "avg_line_length": 98.9743589744, "max_line_length": 307, "alphanum_fraction": 0.779015544, "num_tokens": 864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8872045817875224, "lm_q2_score": 0.6406358479787609, "lm_q1q2_score": 0.5683750595840914}}
{"text": "\\section{Category theory}\\label{sec:category_theory}\n\nCategory theory studies objects via how they relate to other objects. It shifts the focus from how individual members behave and even has no concept of membership, upon which set theory is based.\n\nThis shift is evident from the following diagram, which is actually half of the proof of \\fullref{thm:functor_adjoint_uniqueness}:\n\\begin{equation*}\n  \\begin{aligned}\n    \\includegraphics[page=1]{output/thm__functor_adjoint_uniqueness.pdf}\n  \\end{aligned}\n\\end{equation*}\n\nWe do still have individual objects, precisely the nodes of the diagram above, however we are only interested in how the nodes are related to each other. Chasing the relations in this diagram individually would require a lot more effort with little gain.\n\nCategories can be defined \\enquote{from the ground up} so that they may be used without an underlying set theory or logic. For our purposes, it will be more appropriate to define categories via \\hyperref[def:quiver]{quivers}, a.k.a. directed multigraphs. This latter approach will be much more convenient for us, since we are working in \\hyperref[def:axiom_of_universes]{\\logic{ZFC+U}} and are only interested in categories insomuch as they are helpful to us.\n\nFurthermore, categories are actually the primary motivation for us include the \\hyperref[def:axiom_of_universes]{axiom of universes} in our metatheory that would otherwise include only the axioms of \\hyperref[def:zfc]{\\logic{ZFC}}. This is discussed further in \\fullref{rem:functor_size} and \\fullref{rem:functor_category_size}.\n", "meta": {"hexsha": "c855c25c897204c91c86567d176334cced81e35c", "size": 1570, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/category_theory.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/category_theory.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/category_theory.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.3529411765, "max_line_length": 459, "alphanum_fraction": 0.8031847134, "num_tokens": 381, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8244619350028204, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5683062531476071}}
{"text": "\\documentclass[t]{beamer}\n\\usetheme{Copenhagen}\n\\usepackage{amsmath, tikz, tkz-euclide, xcolor}\n\\usetkzobj{all}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\title{Right Triangle Trigonometry}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Table of Contents}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}{}\n    \\titlepage\n\\end{frame}\n\n\\section{Write the six trig functions of an acute angle.} \n\n\\begin{frame}{The 3 Main Trig Ratios}\n\n\\begin{center}\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- node [midway, sloped, above] {hypotenuse} (B) -- node [midway, right] {opposite} (C) -- node [midway, below] {adjacent} cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n\\end{tikzpicture}\n\\end{center}\n\\vspace{11pt}  \n\\[\n\\onslide<2->{\\sin \\theta = \\dfrac{\\text{opposite}}{\\text{hypotenuse}}} \\hspace{0.25in} \n\\onslide<3->{\\cos \\theta = \\dfrac{\\text{adjacent}}{\\text{hypotenuse}}} \\hspace{0.25in}  \n\\onslide<4->{\\tan \\theta = \\dfrac{\\text{opposite}}{\\text{adjacent}}}    \n\\]\n\\newline\\\\\n\\onslide<5->{We usually remember this as SOH-CAH-TOA.}\n\\end{frame}\n\n\n\n\\begin{frame}{Reciprocals for SOH-CAH-TOA}\n\\begin{center}\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- node [midway, sloped, above] {hypotenuse} (B) -- node [midway, right] {opposite} (C) -- node [midway, below] {adjacent} cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n\\end{tikzpicture}\n\\end{center}\n\\vspace{11pt}\n\\[\n\\onslide<2->{\\csc \\theta = \\frac{\\text{hypotenuse}}{\\text{opposite}}} \\quad\n\\onslide<3->{\\sec \\theta = \\frac{\\text{hypotenuse}}{\\text{adjacent}}}  \\quad\n\\onslide<4->{\\cot \\theta = \\frac{\\text{adjacent}}{\\text{opposite}}}\n\\]   \n\\newline\\\\\n\\onslide<5->{Sometimes you may need to use the Pythagorean Theorem, $a^2+b^2=c^2$, in order to find any missing sides.}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nFind the value of each of the six trig functions of $\\theta$.  
\\newline\\\\\n\\begin{minipage}{0.4\\textwidth}\n(a) \\newline\\\\\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- (B) -- node [midway, right] {5} (C) -- node [midway, below] {12} cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n\\onslide<2->{\\node at (A) [below, yshift=-0.75cm] {$5^2 + 12^2 = c^2$};}\n\\onslide<3->{\\node at (A) [below, yshift=-1.75cm] {$c = \\sqrt{169}=13$};}\n\\onslide<4->{\\node at (1.5,1) [above left] {$13$};}\n\\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<5->{\\sin \\theta &= \\frac{5}{13}}   \\\\[18pt]\n    \\onslide<6->{\\cos \\theta &= \\frac{12}{13}}   \\\\[18pt]\n    \\onslide<7->{\\tan \\theta &= \\frac{5}{12}}   \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 1}\n\\begin{minipage}{0.4\\textwidth}\n(a) \\newline\\\\\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- (B) -- node [midway, right] {5} (C) -- node [midway, below] {12} cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n% \\onslide<2->{\\node at (A) [below, yshift=-0.75cm] {$5^2 + 12^2 = c^2$};}\n% \\onslide<3->{\\node at (A) [below, yshift=-1.75cm] {$c = \\sqrt{169}=13$};}\n\\node at (1.5,1) [above left] {$13$};\n\\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\csc \\theta &= \\frac{13}{5}}   \\\\[18pt]\n    \\onslide<3->{\\sec \\theta &= \\frac{13}{12}}   \\\\[18pt]\n    \\onslide<4->{\\cot \\theta &= \\frac{12}{5}}   \\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\n\\begin{frame}{Example 1}\nFind the value of each of the six trig functions of $\\theta$.  \\newline\\\\\n\\begin{minipage}{0.45\\textwidth}\n(b) \\newline\\\\\n\\raisebox{0.35cm}{\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- node [midway, above] {3} (B) -- node [midway, right] {1} (C) -- cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n\\onslide<2->{\\node at (0,-1.25) {$a^2+1^2=3^2$};}\n\\onslide<3->{\\node at (0,-2) {$a^2 = 8$};}\n\\onslide<4->{\\node at (0,-2.75) {$a = \\sqrt{8} = 2\\sqrt{2}$};}\n\\onslide<5->{\\node at (1.5,0) [below] {$2\\sqrt{2}$};}\n\\end{tikzpicture}}\n\\end{minipage}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<6->{\\sin \\theta &= \\frac{1}{3}}   \\\\[18pt]\n    \\onslide<7->{\\cos \\theta &= \\frac{2\\sqrt{2}}{3}}   \\\\[18pt]\n    \\onslide<8->{\\tan \\theta &= \\frac{1}{2\\sqrt{2}}}  \n    \\onslide<9->{\\left(\\frac{\\sqrt{2}}{\\sqrt{2}} \\right)}\n    \\onslide<10->{=\\frac{\\sqrt{2}}{4}}\\\\\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nFind the value of each of the six trig functions of $\\theta$.  
\\newline\\\\\n\\begin{minipage}{0.4\\textwidth}\n(b) \\newline\\\\\n\\raisebox{0.35cm}{\n\\begin{tikzpicture}\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (3,2);\n\\coordinate (C) at (3,0);\n\\draw [color=red] (C) rectangle +(-0.25,0.25);\n\\draw (A) -- node [midway, above] {3} (B) -- node [midway, right] {1} (C) -- cycle;\n\\node at (A) [xshift = 0.65cm, yshift = 0.2cm] {$\\theta$};\n\\node at (1.5,0) [below] {$2\\sqrt{2}$};\n\\end{tikzpicture}}\n\\end{minipage}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\csc \\theta &= \\frac{3}{1} = 3}   \\\\[18pt]\n    \\onslide<3->{\\sec \\theta &= \\frac{3}{2\\sqrt{2}}} \\onslide<4->{\\left(\\frac{\\sqrt{2}}{\\sqrt{2}}\\right)} \n    \\onslide<5->{ = \\frac{3\\sqrt{2}}{4}}   \\\\[18pt]\n    \\onslide<6->{\\cot \\theta &= \\frac{2\\sqrt{2}}{1}=2\\sqrt{2}}  \n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\n\\section{Find the exact trig function values for special right triangles.}\n\n\\begin{frame}{45-45-90 Triangles}\n\n45-45-90 triangles (also known as \\textit{isosceles right triangles}) can be created by drawing a diagonal across a square:\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/3/C, 0/3/D}\n    \\tkzDrawPolygon(A,B,C,D)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzDrawSegment[dashed](A,C)\n    \\tkzLabelAngle[pos=0.75](B,A,C){$45^\\circ$}\n    \\tkzLabelAngle[pos=0.75](B,C,A){$45^\\circ$}\n    \\end{tikzpicture}\n\\end{center}\n\n\\end{frame} \n\n\\begin{frame}{45-45-90 Triangles}\nSince each side of a square is the same length, we can use whatever length we want. For simplicity, we will use a length of 1.  \\newline\\\\ \\pause\n\nThe diagonal of the square can be found by using Pythagorean Theorem:  \\pause\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/3/C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.75](B,A,C){$45^\\circ$}\n    \\tkzLabelAngle[pos=0.75](B,C,A){$45^\\circ$}\n    \\tkzLabelSegment[right](B,C){1}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[above left, midway](A,C){$\\sqrt{2}$}\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Example 2}\nFind the exact values of the six trig ratios for $45^\\circ$. 
\\newline\\\\\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/3/C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.75](B,A,C){$45^\\circ$}\n    \\tkzLabelAngle[pos=0.75](B,C,A){$45^\\circ$}\n    \\tkzLabelSegment[right](B,C){1}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[above left, midway](A,C){$\\sqrt{2}$}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\sin 45^\\circ &= \\frac{1}{\\sqrt{2}}} \\\\[11pt]\n    \\onslide<3->{&= \\frac{1}{\\sqrt{2}}\\frac{\\sqrt{2}}{\\sqrt{2}} = \\frac{\\sqrt{2}}{2}} \\\\[11pt]\n    \\onslide<4->{\\cos45^\\circ &= \\frac{1}{\\sqrt{2}} = \\frac{\\sqrt{2}}{2}} \\\\[11pt]\n    \\onslide<5->{\\tan 45^\\circ &= \\frac{1}{1} = 1}\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/3/C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.75](B,A,C){$45^\\circ$}\n    \\tkzLabelAngle[pos=0.75](B,C,A){$45^\\circ$}\n    \\tkzLabelSegment[right](B,C){1}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[above left, midway](A,C){$\\sqrt{2}$}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\csc 45^\\circ &= \\frac{\\sqrt{2}}{1} = \\sqrt{2}} \\\\[18pt]\n    \\onslide<3->{\\sec45^\\circ &= \\frac{\\sqrt{2}}{1} = \\sqrt{2}} \\\\[18pt]\n    \\onslide<4->{\\cot 45^\\circ &= \\frac{1}{1} = 1}    \\\\[11pt]\n\\end{align*}\n\\end{minipage}    \n\\onslide<5->{\\emph{Note}: Your answers from the above example will be the same if you replace $45^\\circ$ with $\\frac{\\pi}{4}$.}\n\\end{frame}\n\n\\begin{frame}{30-60-90 Triangles}\n    We can create a 30-60-90 triangle by drawing an altitude in an equilateral triangle.\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 4/0/B}\n    \\tkzDefPoint(60:4){C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzDefMidPoint(A,B)\n        \\tkzGetPoint{D}\n    \\tkzDrawSegment[dashed](D,C)\n    \\tkzMarkRightAngle[color=red](A,D,C)\n    \\tkzLabelAngle[pos=0.5](D,A,C){$60^\\circ$}\n    \\tkzLabelAngle[pos=1](D,C,A){$30^\\circ$}\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{30-60-90 Triangles}\nRecall that the altitude of an equilateral triangle bisects one of the sides.   \\newline\\\\    \\pause\n\nRather than use a length of 1 for the sides of the equilateral triangle, we will use a length of 2 (if only to avoid using fractions). 
\\newline\\\\ \\pause\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 4/0/B}\n    \\tkzDefPoint(60:4){C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzDefMidPoint(A,B)\n        \\tkzGetPoint{D}\n    \\tkzDrawSegment[dashed](D,C)\n    \\tkzMarkRightAngle[color=red](A,D,C)\n    \\tkzLabelAngle[pos=0.5](D,A,C){$60^\\circ$}\n    \\tkzLabelAngle[pos=1](D,C,A){$30^\\circ$}\n    \\tkzLabelSegment[below](A,D){1}\n    \\tkzLabelSegment[below](D,B){1}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\tkzLabelSegment[midway, above right](B,C){2}\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{30-60-90 Triangles}\nWe can use the Pythagorean Theorem to find the length of the altitude, $\\sqrt{3}$:\n\n\\begin{center}\n    \\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 2/0/B}\n    \\tkzDefPoint(60:4){C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.5](B,A,C){$60^\\circ$}\n    \\tkzLabelAngle(B,C,A){$30^\\circ$}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[right](B,C){$\\sqrt{3}$}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Example 3}\nFind the exact values of the six trig ratios for $60^\\circ$. \\newline\\\\\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 2/0/B}\n    \\tkzDefPoint(60:4){C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.5](B,A,C){$60^\\circ$}\n    \\tkzLabelAngle(B,C,A){$30^\\circ$}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[right](B,C){$\\sqrt{3}$}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\sin 60^\\circ &= \\frac{\\sqrt{3}}{2}} \\\\[11pt]\n    \\onslide<3->{\\cos 60^\\circ &= \\frac{1}{2}} \\\\[11pt]\n    \\onslide<4->{\\tan 60^\\circ &= \\frac{\\sqrt{3}}{1} = \\sqrt{3}}\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 3}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 2/0/B}\n    \\tkzDefPoint(60:4){C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.5](B,A,C){$60^\\circ$}\n    \\tkzLabelAngle(B,C,A){$30^\\circ$}\n    \\tkzLabelSegment[below](A,B){1}\n    \\tkzLabelSegment[right](B,C){$\\sqrt{3}$}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\csc 60^\\circ &= \\frac{2}{\\sqrt{3}}} \\\\[11pt]\n    \\onslide<3->{&= \\frac{2}{\\sqrt{3}}\\frac{\\sqrt{3}}{\\sqrt{3}} = \\frac{2\\sqrt{3}}{3}} \\\\[11pt]\n    \\onslide<4->{\\sec 60^\\circ &= \\frac{2}{1} = 2} \\\\[11pt]\n    \\onslide<5->{\\cot 60^\\circ &= \\frac{1}{\\sqrt{3}}} \\\\[11pt]\n    \\onslide<6->{&= \\frac{1}{\\sqrt{3}}\\frac{\\sqrt{3}}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}}\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 4}\nFind the exact values of the six trig ratios for $30^\\circ$. 
\\newline\\\\\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/1.73/C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.85](B,A,C){$30^\\circ$}\n    \\tkzLabelAngle[pos=0.5](B,C,A){$60^\\circ$}\n    \\tkzLabelSegment[below](A,B){$\\sqrt{3}$}\n    \\tkzLabelSegment[right](B,C){1}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\sin 30^\\circ &= \\frac{1}{2}} \\\\[11pt]\n    \\onslide<3->{\\cos 30^\\circ &= \\frac{\\sqrt{3}}{2}} \\\\[11pt]\n    \\onslide<4->{\\tan 30^\\circ &= \\frac{1}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}}\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 4}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/1.73/C}\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzMarkRightAngle[color=red](A,B,C)\n    \\tkzLabelAngle[pos=0.85](B,A,C){$30^\\circ$}\n    \\tkzLabelAngle[pos=0.5](B,C,A){$60^\\circ$}\n    \\tkzLabelSegment[below](A,B){$\\sqrt{3}$}\n    \\tkzLabelSegment[right](B,C){1}\n    \\tkzLabelSegment[midway, above left](A,C){2}\n    \\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n    \\onslide<2->{\\csc 30^\\circ &= \\frac{2}{1} = 2} \\\\[11pt]\n    \\onslide<3->{\\sec 30^\\circ &= \\frac{2}{\\sqrt{3}} = \\frac{2\\sqrt{3}}{3}} \\\\[11pt]\n    \\onslide<4->{\\cot 30^\\circ &= \\frac{\\sqrt{3}}{1} = \\sqrt{3}}  \\\\[11pt]\n\\end{align*}\n\\end{minipage}    \n\\end{frame}\n\n\\begin{frame}{Cofunctions}\nNotice how $\\sin 30^\\circ = \\cos 60^\\circ$, $\\tan 30^\\circ = \\cot 60^\\circ$, etc. This is because these ratios are \\alert{cofunctions}.   \\newline\\\\  \\pause\n\nAny pair of trig functions $f$ and $g$ for which\n\\[\nf(\\theta) = g\\left(90^\\circ - \\theta\\right)\n\\]\nand vice versa are \\textbf{cofunctions}.\n\\end{frame}\n\n\\section{Find missing side lengths in right triangles.}\n\n\\begin{frame}{Finding Missing Sides}\nYou can use SOH-CAH-TOA and your calculator to find missing sides in right triangles.\n\\end{frame}\n\n\\begin{frame}{Example 5}\nFind the value of $x$. Round your answer to 2 decimal places. 
\\newline\\\\\n\\begin{center}\n\\begin{tikzpicture}[scale=0.8]\n    \\tkzDefPoints{0/0/A, 3/0/B, 3/2/C}\n    \\tkzMarkRightAngle[color=red](C,B,A)\n    \\tkzDrawPolygon(A,B,C)\n    \\tkzLabelAngle[](B,A,C){$38^\\circ$}\n    \\tkzLabelSegment[right](B,C){$x$}\n    \\tkzLabelSegment[below](A,B){14}\n\\end{tikzpicture}\n\\end{center}\n\\begin{align*}\n    \\onslide<2->{\\tan 38^\\circ &= \\frac{x}{14}} \\\\[10pt]\n    \\onslide<3->{0.7813 &= \\frac{x}{14}} \\\\[10pt]\n    \\onslide<4->{x &= 14(0.7813) \\approx 10.94}\n\\end{align*}\n\\end{frame}\n\\end{document}\n", "meta": {"hexsha": "a01ed7d896b69fc3945acf4f35210c24c318675a", "size": 15175, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Right_Triangle_Trigonometry(BEAMER).tex", "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Right_Triangle_Trigonometry(BEAMER).tex", "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Right_Triangle_Trigonometry(BEAMER).tex", "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "avg_line_length": 33.5730088496, "max_line_length": 156, "alphanum_fraction": 0.6252388797, "num_tokens": 6189, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8244619199068831, "lm_q1q2_score": 0.5683062322175271}}
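The special-angle ratios and the Example 5 computation in the slides above are easy to sanity-check numerically. Below is a minimal Python sketch (our addition, not part of the original slides; the variable names are illustrative):

\begin{verbatim}
import math

# 30-60-90 triangle: legs 1 and sqrt(3), hypotenuse 2
assert math.isclose(math.sin(math.radians(60)), math.sqrt(3) / 2)
assert math.isclose(math.cos(math.radians(60)), 1 / 2)
assert math.isclose(math.tan(math.radians(60)), math.sqrt(3))

# Example 5: tan(38 deg) = x/14  =>  x = 14*tan(38 deg)
x = 14 * math.tan(math.radians(38))
print(round(x, 2))  # 10.94
\end{verbatim}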
{"text": "\\section{Math}\n\nCombination\n\\lstinputlisting{../Math/combination.hpp}\n\nChinese Remainder Theorem\n\\lstinputlisting{../Math/crt.hpp}\n\nExtGcd\n\\lstinputlisting{../Math/extGcd.hpp}\n\nFFT\n\\lstinputlisting{../Math/fft.hpp}\n\nGarnerCRT\n\\lstinputlisting{../Math/garnerCRT.hpp}\n\nLagrangeInterpolation\n\\lstinputlisting{../Math/lagrangeInterpolation.hpp}\n\nMatrix\n\\lstinputlisting{../Math/matrix.hpp}\n\nMod Matrix\n\\lstinputlisting{../Math/mod_matrix.hpp}\n\nNumber Theoretic Transform\n\\lstinputlisting{../Math/number_theoretic_transform.hpp}\n", "meta": {"hexsha": "7bd29cf4f1d1304e1bc24b59c0bff98a447d3fff", "size": 524, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ICPC/Tex/src/Math.tex", "max_stars_repo_name": "zaki-joho/ProconLibrary", "max_stars_repo_head_hexsha": "3a11e506c43522801f878dda2f9845ae83612a4d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-05-22T11:49:58.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-05T19:13:42.000Z", "max_issues_repo_path": "ICPC/Tex/src/Math.tex", "max_issues_repo_name": "zaki-joho/ProconLibrary", "max_issues_repo_head_hexsha": "3a11e506c43522801f878dda2f9845ae83612a4d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2019-10-15T09:22:21.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-20T05:29:09.000Z", "max_forks_repo_path": "ICPC/Tex/src/Math.tex", "max_forks_repo_name": "zaki-joho/ProconLibrary", "max_forks_repo_head_hexsha": "3a11e506c43522801f878dda2f9845ae83612a4d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.0689655172, "max_line_length": 56, "alphanum_fraction": 0.7938931298, "num_tokens": 150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.6893056104028797, "lm_q1q2_score": 0.5683062314149362}}
{"text": "\\section{\\src{comp_geopot}}\n\n\\subsection{Description}\n\nKernel \\src{comp_geopot} is taken from the original subroutine\n\\src{compute_geopot} in \\DYNAMICO.\n%\nThis subroutine is originally defined in module \\src{caldyn_gcm_mod}.\n%\nThis module defines subroutine \\src{caldyn}, which is the main\nsubroutines for dynamics part of the model, and several sub-subroutines\nfor various terms in the governing equation, such as potential\nvorticity, geopotential, etc.\n%\nThis subroutine calculates geopotential.\n\n\\subsection{Discretization and code}\n\n\\autoref{l:definition_comp_geopot} shows the definition part of this subroutine,\nand \\autoref{f:pad_comp_geopot} shows the PAD of this.\n\n\\begin{LstF90}[%\ncaption={Definition part of \\src{compute_geopot}},%\nlabel={l:definition_comp_geopot}%\n]\nSUBROUTINE compute_geopot(ps,rhodz,theta, pk,geopot)\nUSE icosa\nUSE disvert_mod\nUSE exner_mod\nUSE trace\nUSE omp_para\nIMPLICIT NONE\n  REAL(rstd),INTENT(INOUT) :: ps(iim*jjm)\n  REAL(rstd),INTENT(IN)    :: rhodz(iim*jjm,llm)\n  REAL(rstd),INTENT(IN)    :: theta(iim*jjm,llm)    ! potential temperature\n  REAL(rstd),INTENT(INOUT) :: pk(iim*jjm,llm)       ! Exner function\n  REAL(rstd),INTENT(INOUT) :: geopot(iim*jjm,llm+1) ! geopotential\n\n  INTEGER :: i,j,ij,l\n  REAL(rstd) :: p_ik, exner_ik\n\\end{LstF90}\n\nWhere \\src{ps}, \\src{rhodz},\n\\src{theta}, \\src{pk}, and \\src{geopot} are\nsurface pressure, mass,\npotential temperature, Exner function, and geopotential, respectively.\n%\nThese arrays except \\src{ps} have two dimensions,  first one is for\nhorizontal index and second one is for vertical index.\n%\nAll of these are defined in the center of control volume in horizontal,\nthe size of first dimension is \\src{iim*jjm}.\n%\nAlso these except \\src{ps} and \\src{geopot} are defined in the full\nlevel in vertical, the size of second dimension of these are \\src{llm},\nwhile \\src{geopot} has the size of \\src{llm+1}.\n\n\\begin{figure}[p]\n\\centering\n \\includegraphics[scale=.45]{figs/geopot.pdf}\n \\caption{PAD of \\src{compute_geopot}}\n \\label{f:pad_comp_geopot}\n\\end{figure}\n\nNote that in this kernel package\n\\src{caldyn_eta} is set as \\src{eta_mass},\nand\n\\src{boussinesq} is set as \\src{.true.},\nso in this subroutine only \\src{geopot} is calculated as\n\n\\begin{LstF90}[numbers=none]\ngeopot(ij,l+1) = geopot(ij,l) + g*rhodz(ij,l)\n\\end{LstF90}\n%\nand Exner pressure\nare calculated in subroutine \\src{compulte_caldyn_horiz}, which is also\nincluded in this package as kernel \\src{comp_caldyn_horiz}.\n\n\\clearpage\n\n\n\n\\subsection{Input data and result}\n\nInput data file is prepared and you can download from official server using\n\\file{data/download.sh} script.\n%\nThis data file is created by original \\DYNAMICO\\footnotemark with\nHeld-Suarez case parameter set included in the original source archive.\n%\n\\footnotetext{with slight modification by AICS.}\n%\nMax/min/sum of input/output data of the kernel subroutine are output as\na log.\n%\nBelow is an example of \\src{$IAB_SYS=Ubuntu-gnu-ompi} case.\n\n\\begin{LstLog}\n [KERNEL] comp_geopot\n *** Start  initialize\n                iim, jjm, llm:    23    25    19\n             ij_begin, ij_end:    48   528\n     ij_begin_ext, ij_end_ext:    24   552\n             ll_begin, ll_end:     1    19\n        t_right, t_rup, t_lup:     1    23    22\n     t_left, t_ldown, t_rdown:    -1   -23   -22\n        u_right, u_rup, u_lup:     0  1173   575\n     u_left, u_ldown, u_rdown:    -1  1150   
553\n           z_rup, z_up, z_lup:   598     0   597\n     z_ldown, z_down, z_rdown:   -23   575   -22\n                   caldyn_eta:     1\n                   boussinesq:     F\n                            g:     9.80000000\n +check[mass_ak         ] max=  2.2205608555404415E+04,min=  2.9655441593806341E+02,sum=  1.7315769223740909E+05\n +check[mass_bk         ] max=  9.8820601234384886E-01,min=  0.0000000000000000E+00,sum=  6.4254510678414123E+00\n *** Finish initialize\n *** Start kernel\n ### check point iteration:        1000\n ### Input ###\n +check[ps_prev         ] max=  1.0000000000000000E+05,min=  1.0000000000000000E+05,sum=  5.7500000000000000E+07\n +check[rhodz           ] max=  1.2306877011993038E+03,min=  0.0000000000000000E+00,sum=  5.3979591836733194E+06\n +check[theta           ] max=  8.0139914420291746E+02,min=  0.0000000000000000E+00,sum=  3.8582633571973117E+06\n +check[pk_prev         ] max=  1.0014594722514462E+03,min=  0.0000000000000000E+00,sum=  6.9872296819747351E+06\n +check[geopot_prev     ] max=  3.8250620498369227E+05,min=  0.0000000000000000E+00,sum=  1.1718001851963627E+09\n ### Output ###\n +check[ps              ] max=  1.0000000000000000E+05,min=  1.0000000000000000E+05,sum=  5.7500000000000000E+07\n +check[pk              ] max=  1.0014594722514462E+03,min=  0.0000000000000000E+00,sum=  6.9872296819747351E+06\n +check[geopot          ] max=  3.8250620498369227E+05,min=  0.0000000000000000E+00,sum=  1.1718001851963627E+09\n ### final iteration:        1000\n ### Validation : grid-by-grid diff ###\n +check[ps              ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[pk              ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n +check[geopot          ] max=  0.0000000000000000E+00,min=  0.0000000000000000E+00,sum=  0.0000000000000000E+00\n *** Finish kernel\n\\end{LstLog}\n\nCheck the lines below the \\src{``Validation : grid-by-grid diff''} line,\nwhich show the difference between the calculated output arrays and the\npre-calculated reference arrays.\nThese should be zero or small enough to be acceptable.\n%\nThere are sample output log files in \\file{reference/}\nin each kernel program directory, for reference purposes.\n\n\\subsection{Sample of performance result}\n\nHere is an example of the performance result part of the log output,\nexecuted with the machine environment described in \\autoref{s:measuring_env}.\n%\nNote that in this program the kernel part is iterated 1000 times.\n\n\\begin{LstLog}\n *** Computational Time Report\n *** ID=001 : MAIN_comp_geopot                 T=     0.824 N=   1000\n\\end{LstLog}\n", "meta": {"hexsha": "25b0a909b1dbe293dcb9dc8c414696d2033b7268", "size": 6025, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/DYNAMICO/src/32_geopot.tex", "max_stars_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_stars_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/DYNAMICO/src/32_geopot.tex", "max_issues_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_issues_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "docs/DYNAMICO/src/32_geopot.tex", "max_forks_repo_name": "aimes-project/IcoAtmosBenchmark_v1", "max_forks_repo_head_hexsha": "44b8f12dcf0e50094a2d0f78a8794febda270007", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-04T04:07:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T04:07:38.000Z", "avg_line_length": 38.6217948718, "max_line_length": 112, "alphanum_fraction": 0.7053941909, "num_tokens": 2044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619220634456, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.568306228441875}}
{"text": "\n\\subsection{The Riordan group}\n\\label{subsection:back:to:the:basics:riordan:group}\n\nIn \\cite{shapiro:1991}, \\citeauthor{shapiro:1991} introduces a generalization\nof the concepts developed so far: a group called \\emph{Riordan group}, in honor\nof professor John Riordan \\marginpar{Thank you John Riordan, $1903-1988$}.\nThis group has been defined in order to unify many themes in enumeration and\ncombinatoric problems; to solve binomial, inverse identities\n\\cite{Merlini2006103} and, more recently, combinatorial sums\n\\cite{Merlini2009475} and binary words counting \\cite{Merlini20112988}\n\\\\\\\\\nLet $\\mathcal{M}=\\lbrace m_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$ be an \\emph{infinite\nlower triangular matrix}, namely:\n\\begin{displaymath}\n     m_{nk}\\in\\mathbb{Z} \\quad\\wedge\\quad n < k \\rightarrow m_{nk} = 0\n\\end{displaymath}\nCall $\\mathcal{M}_{(t)}$ the vector produced by the following product, \nwhere $t$ is a \\emph{dummy variable}\\marginpar{$\\mathcal{M}$ is a matrix of integers, \n$\\mathcal{M}_{(t)}$ is a vector of \\ac{fps} in $\\mathbb{Z}[\\![t]\\!]$}:\n\\begin{displaymath}\n    \\left[\n        \\begin{array}{cccccc}\n            1 & t & t^{2} & t^{3} & t^{4} &\\ldots\n        \\end{array}\n    \\right]\n    \\left[\n        \\begin{array}{cccccc}\n            m_{00} & & & &  &\\\\\n            m_{10} & m_{11} & & &  &\\\\\n            m_{20} & m_{21}& m_{22}& &  &\\\\\n            m_{30} & m_{31}& m_{32}& m_{33}&  &\\\\\n            m_{40} & m_{41}& m_{42}& m_{43}& m_{44} &\\\\\n            \\vdots & \\vdots& \\vdots& \\vdots& \\vdots & \\ddots\\\\\n        \\end{array}\n    \\right]\n\\end{displaymath}\ntherefore $\\mathcal{M}_{(t)} =\n    \\left[\n        \\begin{array}{cccccc}\n            m_{0}(t) & m_{1}(t) & m_{2}(t) & m_{3}(t) &m_{4}(t) & \\ldots\n        \\end{array}\n    \\right]$ \nwith function $m_{j}$, for $j\\in\\mathbb{N}$, is a \\ac{fps} in the ring \n$\\mathbb{Z}[\\![t]\\!]$. If there exists two analytic functions $d$\nand $h$, such that $d(0)\\neq0$ and $h(0)=0 \\wedge h^{\\prime}(0)\\neq0$, which\nsatisfy \\marginpar{matrix $\\mathcal{M}$ will be denoted by the pair $(d(t),h(t))$}:\n\\begin{displaymath}\n    m_{j}(t)=d(t)\\,h(t)^{j} \\quad \\forall j\\in\\mathbb{N}\n\\end{displaymath}\nthen matrix $\\mathcal{M}$ is called a \\emph{Riordan array}. 
Such an array is\ndirectly related to the coefficient matrix $\\lbrace m_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$ as follows:\n\\begin{displaymath}\n    [t^{n}]m_{j}(t)=m_{nj}\n\\end{displaymath}\n\nFor the sake of clarity we derive the Riordan array which denotes the\nfollowing matrix:\n%\\begin{table}{!H}\n    \\begin{displaymath} \n        %\\hspace{-2.5cm}\n        \\left[\n        \\begin{array}{rrrrrrrrrr}\n        1 &  &  &  &  &  &  &  &  &  \\\\\n        1 & 1 &  &  &  &  &  &  &  &  \\\\\n        1 & 2 & 1 &  &  &  &  &  &  &  \\\\\n        1 & 3 & 3 & 1 &  &  &  &  &  &  \\\\\n        1 & 4 & 6 & 4 & 1 &  &  &  &  &  \\\\\n        1 & 5 & 10 & 10 & 5 & 1 &  &  &  &  \\\\\n        1 & 6 & 15 & 20 & 15 & 6 & 1 &  &  &  \\\\\n        1 & 7 & 21 & 35 & 35 & 21 & 7 & 1 &  &  \\\\\n        1 & 8 & 28 & 56 & 70 & 56 & 28 & 8 & 1 &  \\\\\n        1 & 9 & 36 & 84 & 126 & 126 & 84 & 36 & 9 & 1\n        \\end{array}\n        \\right] \n    \\end{displaymath}\n%  \\caption[Pascal matrix for Riordan array derivation]{Initial chunk of\n%    the \\emph{Pascal} matrix, defined as $\\lbrace {{n}\\choose{k}}\\rbrace_{n,k\\in\\mathbb{N}}$}\n%  \\label{tab:pascal:matrix} \n%\\end{table}\n\\marginpar{an example: looking for the Pascal array definition}\nalso called the \\emph{Pascal} triangle. In order to build the definition for array\n$\\mathcal{P}$ we have to find two functions $d_{\\mathcal{P}}$ and\n$h_{\\mathcal{P}}$. Coefficients lying in the very first column belong to the\nsequence $\\vect{1}=\\lbrace1\\rbrace_{n\\in\\mathbb{N}}$, therefore function $m_{0}$\nsatisfies:\n\\begin{displaymath}\n    m_{0}(t)=\\frac{1}{1-t}\n\\end{displaymath}\naccording to \\autoref{eq:1:gf}, therefore $m_{0}(t)=d(t)$, so function $d$ has been\ndefined. \n\nIn order to find function $h$, observe the second column. Coefficients lying in it\nbelong to the sequence $\\vect{n}=\\lbrace n\\rbrace_{n\\in\\mathbb{N}}$ of natural numbers,\ntherefore function $m_{1}$ satisfies:\n\\begin{displaymath}\n    m_{1}(t)=d(t)\\,h(t)=\\frac{t}{(1-t)^{2}}\n\\end{displaymath}\naccording to \\autoref{eq:n:gf}. Simple manipulation yields:\n\\begin{displaymath}\n    h(t)=\\frac{t}{1-t}\n\\end{displaymath}\ndefining function $h$. Therefore the array $\\mathcal{P}$ we are looking for is:\n\\begin{equation}\n    \\mathcal{P}=\\left(\\frac{1}{1-t},\\frac{t}{1-t}\\right)\n    \\label{eq:pascal:array:derived:for:example}\n\\end{equation}\nas desired.\n\n\n\n\n\\begin{theorem}[Fundamental theorem]\n    Let $\\mathcal{M}=(d(t),h(t))$ be a Riordan matrix and $\\vect{b}=\n    \\left[\\begin{array}{cccc}b_{0}&b_{1}&b_{2}&\\ldots\\end{array}\\right]^{T}$ \n    be an infinite column vector with $b_{j}\\in\\mathbb{Z}$, for $j\\in\\mathbb{N}$. 
Then:\n    \\begin{displaymath}\n        \\mathcal{M}_{(t)}\\cdot\\vect{b}=d(t)b(h(t))\n    \\end{displaymath}\n    where function $b$ is a \\ac{fps} over coefficients in $\\vect{b}$.\n    \\label{thm:riordan:group:fundamental:theorem}\n\\end{theorem}\n\\begin{proof}\n    By definition of vector $\\mathcal{M}_{(t)}$:\n    \\begin{displaymath}\n        \\mathcal{M}_{(t)}\\cdot\\vect{b}= b_{0}\\,m_{0}(t) + b_{1}\\,m_{1}(t) + b_{2}\\,m_{2}(t) \n            + b_{3}\\,m_{3}(t) + b_{4}\\,m_{4}(t) + \\ldots\n    \\end{displaymath}\n    by definition of function $m_{j}$, for any $j\\in\\mathbb{N}$:\n    \\begin{displaymath}\n        = b_{0}\\,d(t)h(t)^{0} + b_{1}\\,d(t)h(t) + b_{2}\\,d(t)h(t)^{2} \n            + b_{3}\\,d(t)h(t)^{3} + b_{4}\\,d(t)h(t)^{4} + \\ldots\n    \\end{displaymath}\n    factor out function $d$:\n    \\begin{displaymath}\n        = d(t)\\left(b_{0}\\,h(t)^{0} + b_{1}\\,h(t) + b_{2}\\,h(t)^{2} \n            + b_{3}\\,h(t)^{3} + b_{4}\\,h(t)^{4} + \\ldots\\right)\n    \\end{displaymath}\n    finally, rewrite composing function $h$ with function $b$:\n    \\begin{displaymath}\n        = d(t)b(h(t))  \n    \\end{displaymath}\n    as required.\n\n\\end{proof}\n\nIn the introductory paragraph we said that the current generalization\nwe are studying is a \\emph{group}. More formally, we are going to define a \n\\emph{group} over a \\emph{set of Riordan matrices}, \ndenoted by $(\\lbrace \\mathcal{M}_{i}\\rbrace_{i\\in\\mathbb{N}},\\cdot)$, \nwhere $\\cdot$ is the \\emph{group\noperator}: additionally it is required to show the \\emph{identity element} and,\nfinally, to prove that there exists an \\emph{inverse} for every element in the group.\n\\\\\\\\\nLet $\\mathcal{M}=\\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)$ and\n$\\mathcal{N}=\\left(d_{\\mathcal{N}}(t),h_{\\mathcal{N}}(t)\\right)$ be two Riordan matrices.\nDefine the \\emph{group operator} as their product\n\\marginpar{matrix multiplication as group operator}, formally:\n\\begin{displaymath}\n    \\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)\\cdot\n        \\left(d_{\\mathcal{N}}(t),h_{\\mathcal{N}}(t)\\right) = \n        \\left(d_{\\mathcal{M}}(t)d_{\\mathcal{N}}(h_{\\mathcal{M}}(t)),\n            h_{\\mathcal{N}}(h_{\\mathcal{M}}(t))\\right)\n\\end{displaymath}\n\\begin{proof}\n    Consider a column $j$ of matrix $\\mathcal{N}$,\n    for some $j\\in\\mathbb{N}$. 
Denoting by $\\vect{e}_{j}$ the $j$-th versor and\n    by $n_{j}(t)=d_{\\mathcal{N}}(t)h_{\\mathcal{N}}(t)^{j}$ the \\ac{fps} over such column, \n    by \\autoref{thm:riordan:group:fundamental:theorem} the following holds:\n    \\begin{displaymath}\n        \\mathcal{M}_{(t)}\\cdot\\left(\\mathcal{N}\\cdot\\vect{e}_{j}\\right)\n            = d_{\\mathcal{M}}(t) d_{\\mathcal{N}}(h_{\\mathcal{M}}(t))\n                h_{\\mathcal{N}}(h_{\\mathcal{M}}(t))^{j}\n    \\end{displaymath}\n    Let $d_{\\mathcal{M}\\mathcal{N}}$ be a function such that \n    $d_{\\mathcal{M}\\mathcal{N}}(t)=d_{\\mathcal{M}}(t) d_{\\mathcal{N}}( h_{\\mathcal{M}}(t))$,\n    and let $h_{\\mathcal{M}\\mathcal{N}}$ be a function such that \n    $h_{\\mathcal{M}\\mathcal{N}}(t)=h_{\\mathcal{N}}(h_{\\mathcal{M}}(t))$, then the \\ac{rhs}\n    has the shape\n    $d_{\\mathcal{M}\\mathcal{N}}(t)h_{\\mathcal{M}\\mathcal{N}}(t)^{j}$, which denotes the\n    Riordan matrix $\\mathcal{M}\\mathcal{N}$, as required.\n\n\\end{proof}\n\\quad\n\\\\\\\\\nThe \\marginpar{$(1,t)$ is the identity element} identity element of the group\nis the Riordan matrix $\\mathcal{I}=(1,t)$.\n\\begin{proof}\n    Let $\\mathcal{M}=\\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)$ be a Riordan matrix.\n    By definition of the group operator:\n    \\begin{displaymath}\n        \\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)\\cdot \\left(1,t\\right) = \n            \\left(d_{\\mathcal{M}}(t), h_{\\mathcal{M}}(t)\\right)\n    \\end{displaymath}\n    since the group operator $\\cdot$ is not commutative in general, check the following also:\n    \\begin{displaymath}\n        \\left(1,t\\right) \\cdot \\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)= \n            \\left(d_{\\mathcal{M}}(t), h_{\\mathcal{M}}(t)\\right)\n    \\end{displaymath}\n    as required.\n\\end{proof}\n\\quad\n\\\\\\\\\nIn order to find the \\emph{inverse} of a Riordan matrix, we have to\nintroduce the concept of the \\marginpar{compositional inverse of a function}\n\\emph{compositional inverse} of an analytic function $h$.\nLet $h$ be a function such that $h(0)=0 \\wedge h^{\\prime}(0)\\neq0$, then\n$\\hat{h}$ is the \\emph{compositional inverse} of function $h$ if and only if \n$\\hat{h}(h(t))=h(\\hat{h}(t))=t$. 
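For instance, take function $h$ of the Pascal array \autoref{eq:pascal:array:derived:for:example}: solving $u=h(t)$ for $t$ yields its compositional inverse, which is easily checked against the definition:
\begin{displaymath}
    h(t)=\frac{t}{1-t} \quad\Rightarrow\quad \hat{h}(t)=\frac{t}{1+t},
    \qquad
    h(\hat{h}(t))=\frac{t/(1+t)}{1-t/(1+t)}=t
\end{displaymath}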
\n\\\\\\\\\nLet $\\mathcal{M}=\\left(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t)\\right)$ be a Riordan matrix.\nThe inverse matrix $\\mathcal{M}^{-1}$ of $\\mathcal{M}$ satisfies the following relation:\n\\begin{displaymath}\n    \\mathcal{M}^{-1}=\\left(\n        \\frac{1}{d_{\\mathcal{M}}(\\hat{h}_{\\mathcal{M}}(t))}, \\hat{h}_{\\mathcal{M}}(t)\n    \\right)\n\\end{displaymath}\n\n\\begin{proof}\n    By definition of inverse element\\marginpar{inverse of an element in \n    $\\left(\\lbrace \\mathcal{M}_{i}\\rbrace,\\cdot\\right)$}, \n    we have to find a matrix $\\mathcal{M}^{-1}$ which satisfies the following relation:\n    \\begin{displaymath}\n        \\mathcal{M}\\cdot\\mathcal{M}^{-1}\n            =\\left(d_{\\mathcal{M}}(t)d_{\\mathcal{M}^{-1}}(h_{\\mathcal{M}}(t)),\n                    h_{\\mathcal{M}^{-1}}(h_{\\mathcal{M}}(t))\\right)\n            =\\left(1,t\\right)\n    \\end{displaymath}\n\n    Looking at the first component we get:\n    \\begin{displaymath}\n        d_{\\mathcal{M}}(t)d_{\\mathcal{M}^{-1}}(h_{\\mathcal{M}}(t))=1    \n    \\end{displaymath}\n    which can be rewritten as\\marginpar{abstracting function $g$ in the\n        composition $f(g(t))$ yields $\\left.\\left[f(y)\\right|y=g(t)\\right]$,\n        a trick also called ``changing variable''}:\n    \\begin{displaymath}\n        d_{\\mathcal{M}^{-1}}(h_{\\mathcal{M}}(t))=\\frac{1}{d_{\\mathcal{M}}(t)}    \n            = \\left.\\left[\n                d_{\\mathcal{M}^{-1}}(y)=\\frac{1}{d_{\\mathcal{M}}(\\hat{h}_{\\mathcal{M}}(y))}\n                    \\right|y=h_{\\mathcal{M}}(t)\n                \\right]\n    \\end{displaymath}\n\n    On the other hand, looking at the second component we get:\n    \\begin{displaymath}\n        h_{\\mathcal{M}^{-1}}(h_{\\mathcal{M}}(t))=t\n    \\end{displaymath}\n    therefore $h_{\\mathcal{M}^{-1}}=\\hat{h}_{\\mathcal{M}}$, as required.\n\\end{proof}\n\\quad\n\\\\\\\\\nIt \\marginpar{subgroups of the Riordan group}\nis interesting to dig a little into the \\emph{Riordan group}, looking at\ncommon patterns that arise in Riordan matrix definitions. Denoting by\n$\\cdot$ the group operator defined as before, we report\nthese patterns by introducing the following \\emph{subgroups}:\n\\begin{itemize}\n    \\item $\\left(\\lbrace \\mathcal{M}_{i}=(d_{\\mathcal{M}_{i}}(t),t)\n        \\rbrace_{i\\in\\mathbb{N}},\\cdot\\right)$, the \\emph{Appell subgroup};\n    \\item $\\left(\\lbrace \\mathcal{M}_{i}=(1,h_{\\mathcal{M}_{i}}(t))\n        \\rbrace_{i\\in\\mathbb{N}},\\cdot\\right)$, the \\emph{Associated subgroup};\n    \\item $\\left(\\lbrace \\mathcal{M}_{i}=(d_{\\mathcal{M}_{i}}(t),td_{\\mathcal{M}_{i}}(t))\n        \\rbrace_{i\\in\\mathbb{N}},\\cdot\\right)$, the \\emph{Renewal subgroup};\n    \\item $\\left(\\lbrace \\mathcal{M}_{i}=(d_{\\mathcal{M}_{i}}(t),h_{\\mathcal{M}_{i}}(t))\n        \\rbrace_{i\\in\\mathbb{N}},\\cdot\\right)$, where function $d$ is \\emph{even} and \n        function $h$ is \\emph{odd}, the \\emph{Checkerboard subgroup};\n    \\item $\\left(\\left\\lbrace \\mathcal{M}_{i}=\\left(\\frac{t\\,h_{\\mathcal{M}_{i}}^{\\prime}(t)}\n            {h_{\\mathcal{M}_{i}}(t)},h_{\\mathcal{M}_{i}}(t)\\right)\n        \\right\\rbrace_{i\\in\\mathbb{N}},\\cdot\\right)$, the \\emph{Hitting-time subgroup};\n\\end{itemize}\n\\quad\n\\\\\\\\\nFinally, \\marginpar{a characterization using a bivariate function}\nwe would like to discuss another interesting definition which characterizes\na Riordan array $\\mathcal{M}=(d_{\\mathcal{M}}(t),h_{\\mathcal{M}}(t))$. 
\nThe main point is to associate a bivariate function $m$ \nto the infinite lower triangular matrix $\\lbrace m_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$\nof coefficients, used to build $\\mathcal{M}$. Function $m$ is defined as follows:\n\\begin{displaymath}\n    \\begin{split}\n        m(t,u) &= \\sum_{k\\in\\mathbb{N}}{\\sum_{n\\geq k}{m_{nk} t^{n} u^{k}}}\n            = \\sum_{k\\in\\mathbb{N}}{u^{k}\\sum_{n\\geq k}{m_{nk} t^{n}}}\n            = \\sum_{k\\in\\mathbb{N}}{m_{k}(t)u^{k}}\\\\\n            &= \\sum_{k\\in\\mathbb{N}}{d_{\\mathcal{M}}(t)\\,h_{\\mathcal{M}}(t)^{k}\\,u^{k}}\n            = \\frac{d_{\\mathcal{M}}(t)}{1-h_{\\mathcal{M}}(t)\\,u}\n    \\end{split}\n\\end{displaymath}\nThis characterization is more general and allows us to have the \\ac{fps}\n$m_{j}^{(t)}(u)$ for a column $j$, if it is expanded with respect to variable $t$;\nand, by symmetry, to have the \\ac{fps} $m_{l}^{(u)}(t)$ for a row $l$, if it is\nexpanded with respect to variable $u$.\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "3c85ce62c5a8ef706ac9bbdce17009b3e5bb9e65", "size": 13344, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/back-to-the-basics/riordan-group.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/back-to-the-basics/riordan-group.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/back-to-the-basics/riordan-group.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1854304636, "max_line_length": 94, "alphanum_fraction": 0.606264988, "num_tokens": 4976, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925402, "lm_q2_score": 0.7310585903489892, "lm_q1q2_score": 0.5682517483317869}}
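To make the coefficient relation $[t^{n}]m_{j}(t)=m_{nj}$ concrete, here is a minimal Python sketch (illustrative only, not part of the thesis chapter above) that expands the columns $d(t)\,h(t)^{k}$ of the Pascal pair $\left(1/(1-t),\,t/(1-t)\right)$ as truncated power series and prints the first rows of the triangle:

\begin{verbatim}
N = 6  # expansion order: keep coefficients of t^0 .. t^(N-1)

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

d = [1] * N              # 1/(1-t) = 1 + t + t^2 + ...
h = [0] + [1] * (N - 1)  # t/(1-t) = t + t^2 + ...

cols, col = [], d
for k in range(N):       # column k holds [t^n] d(t)*h(t)^k
    cols.append(col)
    col = mul(col, h)

for n in range(N):       # row n comes out as binomial(n, k)
    print([cols[k][n] for k in range(n + 1)])
\end{verbatim}

Running it prints the rows $[1]$, $[1, 1]$, $[1, 2, 1]$, \dots, matching the Pascal matrix displayed earlier.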
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{doppler}\n\\section*{\\hspace*{-1.6cm} doppler}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nComplex Doppler signal.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[fm,am,iflaw] = doppler(N,fs,f0,d,v) \n[fm,am,iflaw] = doppler(N,fs,f0,d,v,t0) \n[fm,am,iflaw] = doppler(N,fs,f0,d,v,t0,c) \n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n         {\\ty doppler} returns the frequency modulation ({\\ty fm}), the\n         amplitude modulation ({\\ty am}) and the instantaneous frequency\n         law ({\\ty iflaw}) of the signal received by a fixed observer from\n         a moving target emitting a pure frequency {\\ty f0}.\\\\\n \n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n         {\\ty N}  & number of points\\\\\n         {\\ty fs} & sampling frequency (in Hz)\\\\\n         {\\ty f0} & target   frequency (in Hz)\\\\\n         {\\ty d}  & distance from the line to the observer (in meters)\\\\\n         {\\ty v}  & target velocity    (in m/s)\\\\\n         {\\ty t0} & time center                  & {\\ty N/2}\\\\  \n         {\\ty c}  & wave velocity      (in m/s)  & {\\ty 340}\\\\\n \\hline  {\\ty fm} & output frequency modulation\\\\  \n         {\\ty am} & output amplitude modulation\\\\  \n         {\\ty iflaw} & output instantaneous frequency law\\\\\n\n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\nThe doppler effect characterizes the fact that a signal returned from a\nmoving target is scaled and delayed compared to the transmitted signal. For\nnarrow-band signals, this scaling effect can be considered as a frequency\nshift. \\\\\n\n{\\ty [fm,am,iflaw] = doppler(N,fs,f0,d,v,t0,c)} returns the signal received\nby a fixed observer from a moving target emitting a pure frequency {\\ty\nf0}. The target is moving along a straight line, which gets closer to the\nobserver up to a distance {\\ty d}, and then moves away. {\\ty t0} is the\ntime center (i.e. the time at which the target is at the closest distance\nfrom the observer), and {\\ty c} is the wave velocity in the medium.\n\n\\end{minipage}\n\n\\newpage\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nPlot the signal and its instantaneous frequency law received by an observer\nfrom a car moving at the speed $v=50 m/s$, passing at 10 meters from the\nobserver (the radar). 
The rotating frequency of the engine is $f_0=65 Hz$,\nand the sampling frequency is $f_s=200 Hz$ :\n\\begin{verbatim}\n     N=512; [fm,am,iflaw]=doppler(N,200,65,10,50); \n     subplot(211); plot(real(am.*fm)); \n     subplot(212); plot(iflaw);\n     [ifhat,t]=instfreq(sigmerge(am.*fm,noisecg(N),15),11:502,10);\n     hold on; plot(t,ifhat,'g'); hold off;\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\ndopnoise.\n\\end{verbatim}\n\\end{minipage}\n\n", "meta": {"hexsha": "b277bd5b56f96a32e93e55ef9e1989823a08be6e", "size": 3197, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/doppler.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/doppler.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/doppler.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 30.7403846154, "max_line_length": 75, "alphanum_fraction": 0.655927432, "num_tokens": 1051, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110202, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5682517301134568}}
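For reference, the instantaneous frequency law produced by this geometry can be written in closed form (a standard narrow-band Doppler approximation, added here for orientation; it is not quoted from the toolbox documentation). With target--observer distance $r(t)=\sqrt{d^{2}+v^{2}(t-t_{0})^{2}}$, the received frequency is approximately
\begin{displaymath}
f_{i}(t) \approx f_{0}\left(1-\frac{\dot{r}(t)}{c}\right),
\qquad
\dot{r}(t)=\frac{v^{2}\,(t-t_{0})}{\sqrt{d^{2}+v^{2}(t-t_{0})^{2}}},
\end{displaymath}
so the received frequency falls from about $f_{0}(1+v/c)$ to about $f_{0}(1-v/c)$ as the target passes the observer.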
{"text": "\\input{../common/common.tex}\n\n\\title{Math notes - Counting and countable sets}\n\\author{Uwe Hoffmann}\n\\hypersetup{colorlinks, pdftitle={Math notes - Counting and countable sets}}\n\n\\begin{document}\n\n\\setcounter{chapter}{1}\n\\section*{Counting and countable sets}\n\n\\newthought{Countable sets} and counting schemes for infinite countable sets are the topics of the problem in this note. \\index{countable set}\n\n\\vspace{10 mm}\n\\begin{problem}\nLet $P = \\{ N \\subset \\mathbb{N}: N \\text{  finite}\\}$. Prove $P$ is countable.\t     \n\\end{problem}\n\nLet's revisit what it means for an infinite set to be countable: An infinite set $M$ is countable if there is a bijection \\footnote{A bijection is a function that is one-to-one and onto.} from $M$ to $\\mathbb{N}$. \n\nGiven this definition, the problem statement is quite remarkable: the set of all the finite subsets of $\\mathbb{N}$ is not ``bigger'' than $\\mathbb{N}$.\n\\newline\n\n\\noindent Our strategy will be to start smaller and prove certain subsets of $P$ are countable. We then expand it to $P$. We start by proving that the set of all subsets of $\\mathbb{N}$ of size two is countable. We actually will prove something stronger, namely the set of ordered pairs of natural numbers is countable.\n\n\\begin{thm}\\label{countingpairs}\nThe set of ordered pairs $\\mathbb{N} \\times \\mathbb{N}$ is countable.\n\\end{thm}\n\n\\todo{Mention puzzle 136 (Catching a Spy) from Levitin: Algorithmic Puzzles}\n\n\\begin{proof}\nWe need a bijection from $\\mathbb{N} \\times \\mathbb{N} \\rightarrow \\mathbb{N}$. There are many ways to do this \n\\footnote{A very elegant way is described at \\url{http://www.math.upenn.edu/~wilf/website/recounting.pdf}}.\n\n\\begin{marginfigure}[1.0in]\n\\begin{tikzpicture}\n \\node at (-2, 3)  (1l) [anchor=north west] {$(1, 1)$};\n \\node at (3, 3)  (1r) [anchor=north east] {$\\text{row } 1$};\n \\node at (-2, 2)  (2l) [anchor=north west] {$(1, 2), (2, 1)$};\n \\node at (3, 2)  (2r) [anchor=north east] {$\\text{row } 2$};\n \\node at (-2, 1)  (3l) [anchor=north west] {$(1, 3), (2, 2), (3, 1)$};\n \\node at (3, 1)  (3r) [anchor=north east] {$\\text{row } 3$};\n \\node at (-2, 0)  (4l) [anchor=north west] {$(1, 4), (2, 3), (3, 2), (4, 1)$};\n \\node at (3, 0)  (4r) [anchor=north east] {$\\text{row } 4$};\n \\node at (0, -0.5) [anchor=north east] {$\\dots$};\n \\draw [->] (1l) -- (1r);\n \\draw [->] (2l) -- (2r);\n \\draw [->] (3l) -- (3r);\n \\draw [->] (4l) -- (4r);\n\\end{tikzpicture}\n\\caption{Counting all pairs.}\n\\label{fig:counting}\n\\end{marginfigure}\n\nThe main idea we are going to use for our bijection is to order the pairs $(i, j) \\in \\mathbb{N} \\times \\mathbb{N}$ in rows, such that each pair in a row has the same value when summing the components of the pair. Figure \\ref{fig:counting} illustrates the idea. Row one has all pairs with components that sum up to two (in this case only one pair). Row two has all pairs with components that sum up to three, row three all pairs which sum to four, $\\dots$. Notice also that\nin a row the pairs are sorted in increasing order of the first component.\n\nWe count the pairs from left to right in each row and go down the rows starting at the first row. For a given pair $(i, j)$, how many pairs come before it in our counting scheme? It is in row $i + j - 1$, so there are $k: 1 \\le k < i + j - 1$ rows before it.  Each row $k$ has $k$ pairs in it. 
This means there are\n$$\n\\sum_{k=1}^{i+j-2} k = \\frac{(i+j-2)(i+j-1)}{2}\n$$\npairs in rows before our pair $(i, j)$. There are $i - 1$ pairs before $(i, j)$ in the same row. Therefore, our counting function is \n$$\nf:\\mathbb{N} \\times \\mathbb{N} \\rightarrow \\mathbb{N}, ~~f(i, j) = i + \\frac{(i+j-2)(i+j-1)}{2}\n$$\nSuppose we have two pairs $(i_1, j_1) \\ne (i_2, j_2)$. We have two cases:\n\n\\begin{itemize}\n\\item $i_1+j_1=i_2+j_2$, same row, then $i_1 \\ne i_2$, so $f(i_1, j_1) \\ne f(i_2, j_2)$\n\\item $i_1+j_1 \\ne i_2+j_2$, different rows, so $f(i_1, j_1) \\ne f(i_2, j_2)$\n\\end{itemize}\n\nThis means $f$ is one-to-one.\n\nTo prove that $f$ is onto, we consider an arbitrary $n \\in \\mathbb{N}$ and find a pair $(i, j)$ with $f(i, j)=n$. Working backwards and assuming we have a pair $(i, j)$ with $f(i, j)=n$, it would fall on a row $r = i + j - 1$. In each row $k$ there are $k$ pairs, $n$ is on row $r$, so\n$$\n\\sum_{k=1}^{r-1} k = \\frac{r(r-1)}{2} < n \\leq \\sum_{k=1}^{r} k = \\frac{r(r+1)}{2}\n$$\nSolving for $r$ we have:\\footnote{Note that $\\frac{1 + \\sqrt{1 + 8 n}}{2} - \\frac{-1 + \\sqrt{1 + 8 n}}{2} = 1$}\n\n$$\nr^2 - r - 2n < 0, ~~r^2 + r - 2n \\geq 0, ~~r = \\bigg\\lceil \\frac{-1 + \\sqrt{1 + 8 n}}{2} \\bigg\\rceil\n$$\nAnd then\n$$\ni = n - \\frac{r(r - 1)}{2}, ~~j = r - i + 1\n$$\nThis means that given an arbitrary $n$, there exists a pair $(i, j)$ with $f(i, j)=n$, so $f$ is onto.\n\nIt follows that $f$ is a bijection and $\\mathbb{N} \\times \\mathbb{N}$ is countable.\n\\end{proof}\n\nA corollary to Theorem~\\ref{countingpairs} lets us expand the countable subsets of $P$ even more.\n\n\\begin{cor}\nThe set of all finite sequences of length $k$, $\\mathbb{N}^k$, is countable.\\footnote{$\\mathbb{N}^k$ is the set of sequences of length $k$, or the cartesian product \n$\\mathbb{N} \\times \\mathbb{N} \\times \\dots \\times \\mathbb{N}$. The set of pairs is $\\mathbb{N}^2=\\mathbb{N} \\times \\mathbb{N}$.}\n\\end{cor}\n\n\\begin{proof}\nFollows by induction on $k$ (the base case $k = 1$ is trivial): Assuming $\\mathbb{N}^{k - 1}$ is countable, then\n$$\n\\mathbb{N}^k = \\mathbb{N}^{k - 1} \\times \\mathbb{N}\n$$\nis also countable according to Theorem~\\ref{countingpairs}.\n\\end{proof}\n\nFrom the corollary we now know \\footnote{We keep using the fact that the set of all finite subsets of $\\mathbb{N}$ of size $k$ is a subset of the set of all sequences of size $k$. To see this impose an order on a set of size $k$ and you get a sequence.} that the set of all subsets of $\\mathbb{N}$ of size $k$ is countable (it's a subset of $\\mathbb{N}^k$). The problem in this section asks us to prove that $P$ is countable, which means the union of all these countable sets is countable. The next theorem\\footnote{Exercise 1.5.3 on page 30 from \\bibentry{abbott15}.} will prove just that.\n\n\\begin{thm}\nLet $A_n, ~n \\in \\mathbb{N}$ be countable sets. Then \n$$\n\\bigcup_{n=1}^\\infty A_n\n$$ \nis countable.\n\\end{thm}\n\n\\begin{proof}\n$A_n$ is countable, so there exists a bijection $f_n:\\mathbb{N} \\rightarrow A_n$. We already know that $\\mathbb{N} \\times \\mathbb{N}$ is countable, so there exists a bijection $g:\\mathbb{N} \\rightarrow \\mathbb{N} \\times \\mathbb{N}$.\n\nWe define $F:\\mathbb{N} \\rightarrow \\bigcup_{n=1}^\\infty A_n$\n$$\nF(n) = f_i(j), ~\\text{where}~ (i, j) = g(n)\n$$\nWe claim that $F$ is a bijection.\n\nTake $n_1 \\ne n_2$. 
Then $g(n_1) \\ne g(n_2)$ and $(i_1, j_1) \\ne (i_2, j_2)$, so \n$$\nf_{i_1}(j_1) \\ne f_{i_2}(j_2)\n$$\n(if $i_1 = i_2$ this holds because $f_{i_1}$ is one-to-one; if $i_1 \\ne i_2$ it holds because the $A_i$ may be assumed disjoint). This means $F(n_1) \\ne F(n_2)$ and $F$ is one-to-one.\n\nNow pick an arbitrary $a \\in \\bigcup_{n=1}^\\infty A_n$. Then there exists $i \\in \\mathbb{N}$ with $a \\in A_i$. \\footnote{We assume here the $A_i$ are disjoint, if not we make them disjoint and their union stays the same.} There also exists $j \\in \\mathbb{N}$ with $f_i(j) = a$. The pair $(i, j)$ is in\n$\\mathbb{N} \\times \\mathbb{N}$, so there exists $n \\in \\mathbb{N}$ with $g(n)=(i, j)$. It follows that $F(n)=a$ and $F$ is onto. \n\\end{proof}\n\n\\bibliographystyle{plainnat}\n\\bibliography{../common/math}\n\n\\end{document}\n\n", "meta": {"hexsha": "2f8464308d9aef30597e16d0118f6646060f3bd4", "size": 7218, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "counting/counting.tex", "max_stars_repo_name": "uwedeportivo/math_notes", "max_stars_repo_head_hexsha": "e0120bb53fad9043637ce964b186194888ba0c49", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "counting/counting.tex", "max_issues_repo_name": "uwedeportivo/math_notes", "max_issues_repo_head_hexsha": "e0120bb53fad9043637ce964b186194888ba0c49", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "counting/counting.tex", "max_forks_repo_name": "uwedeportivo/math_notes", "max_forks_repo_head_hexsha": "e0120bb53fad9043637ce964b186194888ba0c49", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.4755244755, "max_line_length": 591, "alphanum_fraction": 0.6598780826, "num_tokens": 2656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.5682025733943354}}
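The counting function of Theorem~\ref{countingpairs} and its inverse translate directly into code. Here is a minimal Python sketch (ours, not part of the original note) implementing $f$ and recovering $(i, j)$ from $n$ via the ceiling formula above:

\begin{verbatim}
import math

def f(i, j):
    """f(i, j) = i + (i+j-2)(i+j-1)/2 for i, j >= 1."""
    return i + (i + j - 2) * (i + j - 1) // 2

def f_inv(n):
    """Recover (i, j) using r = ceil((-1 + sqrt(1 + 8n)) / 2)."""
    r = math.ceil((-1 + math.sqrt(1 + 8 * n)) / 2)  # row containing n
    i = n - r * (r - 1) // 2
    return (i, r - i + 1)

# f is a bijection: round-tripping recovers every n
assert all(f(*f_inv(n)) == n for n in range(1, 1000))
print(f(2, 3), f_inv(7))  # 8 (1, 4)
\end{verbatim}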
{"text": "\\documentclass[12pt]{article}\r\n\\usepackage{amsfonts}\r\n\\usepackage{amssymb, amsmath}\r\n\\usepackage{eucal}\r\n\\usepackage{amscd}\r\n\\usepackage{url}\r\n\\usepackage{listings}\r\n\\usepackage{algorithmic}\r\n\\usepackage{enumerate}\r\n\\urlstyle{sf}\r\n\\pagestyle{plain}\r\n\r\n\\newcommand{\\Z}{\\mathbb{Z}}\r\n\\newcommand{\\N}{\\mathbb{N}}\r\n\\newcommand{\\Q}{\\mathbb{Q}}\r\n\\newcommand{\\I}{\\mathbb{I}}\r\n\\newcommand{\\C}{\\mathbb{C}}\r\n\\newcommand{\\R}{\\mathbb{R}}\r\n\\newcommand{\\F}{\\mathbb{F}}\r\n\\newcommand{\\Pee}{\\mathbb{P}}\r\n\\newcommand{\\Op}{\\mathcal{O}}\r\n\\newcommand{\\Qbar}{\\Opverline{\\mathbb{Q}}}\r\n\\newcommand{\\code}{\\lstinline}\r\n\\newcommand{\\rref}[1]{\\hfill {\\tiny(\\ref{#1})}}\r\n\\newcommand{\\sref}[1]{{\\tiny(\\ref{#1})}}\r\n\r\n\\newcommand{\\ljk}[2]{\\left(\\frac{#1}{#2}\\right)}\r\n\\newcommand{\\modulo}[1]{\\;\\left(\\mbox{mod}\\;#1\\right)}\r\n\\newcommand{\\fr}{\\mathfrak}\r\n\\newcommand{\\qed}{\\square}\r\n\r\n\\def\\notdivides{\\mathrel{\\kern-3pt\\not\\!\\kern4.5pt\\bigm|}}\r\n\\def\\nmid{\\notdivides}\r\n\\def\\nsubseteq{\\mathrel{\\kern-3pt\\not\\!\\kern2.5pt\\subseteq}}\r\n\r\n\\newtheorem{theorem}{Theorem}[section]\r\n\\newtheorem{proposition}[theorem]{Proposition}\r\n\\newtheorem{definition}[theorem]{Definition}\r\n\\newtheorem{construction}[theorem]{Construction}\r\n\\newtheorem{corollary}[theorem]{Corollary}\r\n\\newtheorem{property}[theorem]{Property}\r\n\r\n\\newenvironment{lemma}[1][Lemma]{\\begin{trivlist}\r\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\r\n\r\n\\newenvironment{proof}[1][Proof:]{\\begin{trivlist}\r\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\r\n\r\n\\newenvironment{summary}[1][Summary.]{\\begin{trivlist}\r\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\r\n\r\n\\newenvironment{example}[1][Example]{\\begin{trivlist}\r\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\r\n\r\n\\newenvironment{remark}[1][Remark]{\\begin{trivlist}\r\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\r\n\r\n\\parindent=0pt\r\n\\parskip 8pt plus 2pt minus 2pt \r\n\r\n\\title{Elements of homological algebra}\r\n\r\n\\author{\r\nWilliam B. Hart\r\n}\r\n\r\n\\begin{document}\r\n\r\n\\maketitle\r\n\r\n\\tableofcontents\r\n\r\n\\section{Maps}\r\n\r\n\\textbf{1. A concrete category $C$ is a (small) category with a functor $F$ to the category of sets which is faithful, i.e. for objects $X$ and $Y$ of the category the functor $F$ induces a map from the set of morphisms from $X$ to $Y$ to the set of maps from $F(X)$ to $F(Y)$ and this map is injective.}\r\n\r\n\\textbf{2. An epimorphism is a morphism $f : X \\to Y$ such that $g_1\\circ f = g_2\\circ f$ for morphisms $g_1$ and $g_2$ implies that $g_1 = g_2$.}\r\n\r\n\\textbf{3. In the category of sets with maps a morphism is an epimorphism iff it is a surjective map.}\r\n\r\nSuppose $f : X \\to Y$ is not a surjection. In particular, suppose there exists $y_0 \\in Y$ such that $f(x) \\neq y_0$ for all $x \\in X$.\r\n\r\nLet $g_1 : Y \\to Y \\cup \\{Y\\}$ be defined by $g_1(y) = y$ and $g_2 : Y \\to Y \\cup \\{Y\\}$ be defined by $g_2(y) = y$ if $y \\neq y_0$ and $g_2(y_0) = Y$. Then $g\\circ f = h\\circ f$ but $g \\neq h$, i.e. $f$ is not an epimorphism. As $f$ was an arbitrary map that was not surjective, we have shown that if $f$ is an epimorphism then $f$ is surjective. \r\n\r\nFor the converse, suppose that $f$ is surjective and $g_1, g_2 : Y \\to Z$ are arbitrary maps with $g_1 \\neq g_2$. 
In particular, suppose that $g_1(y) \\neq g_2(y)$ for some $y \\in Y$.\r\n\r\nAs $f$ is surjective, there exists $x \\in X$ such that $f(x) = y$. Then $g_1(f(x)) \\neq g_2(f(x))$, i.e. $g_1\\circ f \\neq g_2\\circ f$.\r\n\r\nAs $g_1$ and $g_2$ were arbitrary maps with $g_1 \\neq g_2$ we have shown that if $f$ is surjective and $g_1\\circ f = g_2\\circ f$ then $g_1 = g_2$, i.e. $f$ is an epimorphism. QED.\r\n\r\n\\textbf{4. In a concrete category (where the objects are sets with structure and the morphisms are structure preserving maps), a surjective morphism is an epimorphism.}\r\n\r\nIf $f$ is a surjective morphism it is a surjective map. By the previous result it is an epimorphism in the category of sets with maps and hence it is an epimorphism in the category in question. QED.\r\n\r\n\\textbf{5. In the category of $R$-modules over a commutative ring $R$ with module homomorphisms, an epimorphism is surjective.}\r\n\r\nSuppose $f : X \\to Y$ is an epimorphism in the category of $R$-modules. Consider the canonical quotient map $g_1 : Y \\to Y/f(X)$ and the zero map $g_2 : Y \\to Y/f(X)$ which maps $y \\mapsto 0$ for all $y \\in Y$.\r\n\r\nAs $(g_1\\circ f)(x) = 0 = (g_2\\circ f)(x)$ for all $x \\in X$ we have $g_1\\circ f = g_2\\circ f$. As $f$ is an epimorphism then $g_1 = g_2$. But this implies that $Y = f(X)$, i.e. $f$ is a surjection. QED.\r\n\r\n\\textbf{6. In the category of rings with ring homomorphisms the inclusion $f : \\Z \\to \\Q$ is an epimorphism but not surjective.}\r\n\r\nIt is clearly not surjective, since for example there does not exist $n \\in \\Z$ such that $f(n) = 1/2$.\r\n\r\nBut if $g_1\\circ f = g_2\\circ f$ then for all $a, b \\in \\Z$ with $b \\neq 0$ we have \r\n\\begin{multline*}g_1(a/b) = g_1(a)/g_1(b) = g_1(f(a))/g_1(f(b))\\\\\r\n= g_2(f(a))/g_2(f(b)) = g_2(a)/g_2(b) = g_2(a/b),\r\n\\end{multline*}\r\ni.e. $g_1 = g_2$. Thus $f$ is an epimorphism. QED.\r\n\r\n\\textbf{7. A monomorphism is a morphism $f : X \\to Y$ such that $f\\circ g_1 = f\\circ g_2$ for morphisms $g_1$ and $g_2$ implies that $g_1 = g_2$.}\r\n\r\n\\textbf{8. In the category of sets with maps a morphism is a monomorphism iff it is an injective map.}\r\n\r\nSuppose $f : X \\to Y$ is an injective map and suppose that $g_1, g_2 : Z \\to X$ are maps such that $g_1 \\neq g_2$. In particular suppose that $g_1(z) \\neq g_2(z)$ for some $z \\in Z$.\r\n\r\nAs $f$ is injective we have that $f(g_1(z)) \\neq f(g_2(z))$. Thus we have $f\\circ g_1 \\neq f\\circ g_2$. As $g_1$ and $g_2$ were arbitrary maps that are not equal we have shown that if $f\\circ g_1 = f\\circ g_2$ then $g_1 = g_2$, i.e. $f$ is a monomorphism. \r\n\r\nFor the converse suppose that $f : X \\to Y$ is a monomorphism. We wish to show that $f(x) \\neq f(x')$ for $x, x' \\in X$ if $x \\neq x'$.\r\n\r\nTo this end, suppose $x \\neq x'$ for $x, x' \\in X$. Let $Z = \\{a\\}$ be a set containing a single element. Define $g_1, g_2 : Z \\to X$ by $g_1(a) = x$ and $g_2(a) = x'$.\r\n\r\nWe see that $g_1 \\neq g_2$. As $f$ is a monomorphism we therefore have that $f\\circ g_1 \\neq f\\circ g_2$. But now $f(x) = f(g_1(a)) \\neq f(g_2(a)) = f(x')$. As $x \\neq x'$ were arbitrary we have shown that $f$ is injective. QED.\r\n\r\n\\textbf{9. In any concrete category (where the objects are sets with structure and the morphisms are structure preserving maps), an injective morphism is a monomorphism.}\r\n\r\nAn injective morphism in the category is an injective map in the category of sets and thus a monomorphism of maps. 
As it is a morphism in the category in question it is a monomorphism. QED.\r\n\r\n\\textbf{10. If $f : X \\to Y$ is an $R$-module homomorphism with $\\ker(f) = \\{0\\}$ then $f$ is injective.}\r\n\r\nSuppose $f(x_1) = f(x_2)$ for $x_1, x_2 \\in X$. Then as $f$ is a homomorphism $f(x_1 - x_2) = f(x_1) - f(x_2) = 0$.\r\n\r\nAs $\\ker(f) = \\{0\\}$ we have that $x_1 = x_2$, i.e. $f$ is injective. QED.\r\n\r\n\\textbf{11. In the category of $R$-modules over a commutative ring $R$ with module homomorphisms, a monomorphism is injective.}\r\n\r\nSuppose $f : X \\to Y$ is a monomorphism of $R$-modules. Let $g_1 : \\ker(f) \\to X$ be the inclusion map and let $g_2 : \\ker(f) \\to X$ be the zero map. We have that $(f\\circ g_1)(x) = 0 = (f\\circ g_2)(x)$ for all $x \\in \\ker(f)$.\r\n\r\nAs $f$ is a monomorphism $g_1 = g_2$. Thus $\\ker(f) = \\{0\\}$, i.e. $f$ is injective. QED.\r\n\r\n\\textbf{12. An isomorphism in a category $C$ is a morphism $f : X \\to Y$ for objects $X$ and $Y$ in the category such that there exists a two sided inverse, i.e. $f^{-1} : Y \\to X$ such that $f^{-1}\\circ f = 1_X$ and $f\\circ f^{-1} = 1_Y$ where $1_X$ and $1_Y$ are the identity morphisms on $X$ and $Y$ respectively.}\r\n\r\n\\textbf{13. An isomorphism is a monomorphism.}\r\n\r\nSuppose $f : X \\to Y$ is an isomorphism. Then there exists $f^{-1}$ such that $f^{-1}\\circ f = 1_X$ and $f\\circ f^{-1} = 1_Y$. Now suppose that $f\\circ g_1 = f\\circ g_2$ for morphisms $g_1$ and $g_2$. Then $g_1 = f^{-1}\\circ f \\circ g_1 = f^{-1}\\circ f\\circ g_2 = g_2$ and so $f$ is a monomorphism. QED.\r\n\r\n\\textbf{14. An isomorphism is an epimorphism.}\r\n\r\nSuppose $f : X \\to Y$ is an isomorphism. Then there exists $f^{-1}$ such that $f^{-1}\\circ f = 1_X$ and $f\\circ f^{-1} = 1_Y$. Now suppose that $g_1\\circ f = g_2\\circ f$ for morphisms $g_1$ and $g_2$. Then $g_1 = g_1\\circ f\\circ f^{-1}  = g_2\\circ f\\circ f^{-1} = g_2$ and so $f$ is an epimorphism. QED.\r\n\r\n\\textbf{15. A retraction of a morphism $f : X \\to Y$ is a morphism $g : Y \\to X$ such that $g\\circ f = 1_X$.}\r\n\r\n\\textbf{16. A retraction is an epimorphism.}\r\n\r\nWe see that $f$ is a right inverse of the retraction $g$. The same argument used to show that an isomorphism is an epimorphism can be applied to show that the retraction is an epimorphism. QED.\r\n\r\n\\textbf{17. A section of a morphism $f : X \\to Y$ is a morphism $g : Y \\to X$ such that $f\\circ g = 1_Y$.}\r\n\r\n\\textbf{18. A section is a monomorphism.}\r\n\r\nWe see that $f$ is a left inverse for the section $g$. Thus the same argument that was used to show an isomorphism is a monomorphism can be applied to show that the section is a monomorphism. QED.\r\n\r\n\\textbf{19. If $f : X \\to Y$ is a retraction of $g : Y \\to X$ then $g$ is a section of $f$ and conversely.}\r\n\r\nImmediate from the definitions. QED.\r\n\r\n\\textbf{20. A split epimorphism is a morphism $f : X \\to Y$ with section $g : Y \\to X$.}\r\n\r\n\\textbf{21. A split monomorphism is a morphism $g : Y \\to X$ with retraction $f : X \\to Y$.}\r\n\r\n\\textbf{22. In the category of sets every monomorphism with a non-empty domain is a section.}\r\n\r\nA monomorphism in the category of sets is the same thing as an injective map. Therefore suppose that $f : X \\to Y$ is an injective map with non-empty domain $X$.\r\n\r\nSince $X$ is non-empty, there exists an $x_0 \\in X$. We define a morphism $g : Y \\to X$ as follows. For all $y \\in \\;\\mbox{Im}(f)$ there is a unique $x \\in X$ such that $f(x) = y$. 
We define $g(y) = x$. If $y \\notin \\;\\mbox{Im}(f)$ we can define $g(y) = x_0$.\r\n\r\nBy construction $g\\circ f = 1_X$. Thus $f$ is a section of $g$. QED.\r\n\r\n\\textbf{23. In the category of sets every epimorphism is a retraction.}\r\n\r\nAn epimorphism in the category of sets is a surjective map. Therefore let $f : X \\to Y$ be a surjective map.\r\n\r\nWe wish to show that there exists a morphism $g : Y \\to X$ such that $f\\circ g = 1_Y$.\r\n\r\nSince $f$ is surjective, for every $y \\in Y$ there exists an $x \\in X$ such that $f(x) = y$. By the axiom of choice we can choose such an $x$ for each $y$ and define $g(y) = x$. Then by construction we have $f\\circ g = 1_Y$ and so $f$ is a retraction of $g$. QED.\r\n\r\n\\textbf{24. If $f : X \\to Y$ and $g : Y \\to Z$ are epimorphisms then the composition $g\\circ f : X \\to Z$ is an epimorphism.}\r\n\r\nSuppose that $h_1, h_2 : Z \\to V$ are morphisms such that $h_1\\circ g\\circ f = h_2\\circ g\\circ f$. As $f$ is an epimorphism $h_1\\circ g = h_2\\circ g$. But then as $g$ is an epimorphism we have that $h_1 = h_2$. Thus $g\\circ f$ is an epimorphism. QED.\r\n\r\n\\textbf{25. If $f : X \\to Y$ and $g : Y \\to Z$ are monomorphisms then the composition $g\\circ f : X \\to Z$ is a monomorphism.}\r\n\r\nSuppose $h_1, h_2 : V \\to X$ are morphisms such that $g\\circ f\\circ h_1 = g\\circ f\\circ h_2$. As $g$ is a monomorphism we have that $f\\circ h_1 = f\\circ h_2$. Then as $f$ is a monomorphism we have that $h_1 = h_2$. Thus $g\\circ f$ is a monomorphism. QED.\r\n\r\n\\textbf{26. If $f : X \\to Y$ and $g : Y \\to Z$ are morphisms with $g\\circ f$ an epimorphism, then $g$ is an epimorphism.}\r\n\r\nSuppose that $h_1\\circ g = h_2\\circ g$ for morphisms $h_1, h_2 : Z \\to V$. Then $h_1\\circ g\\circ f = h_2\\circ g\\circ f$. Then because $g\\circ f$ is an epimorphism, $h_1 = h_2$. Thus $g$ is an epimorphism. QED.\r\n\r\n\\textbf{27. If $f : X \\to Y$ and $g : Y \\to Z$ are morphisms with $g\\circ f$ a monomorphism, then $f$ is a monomorphism.}\r\n\r\nSuppose that $f\\circ h_1 = f\\circ h_2$ for morphisms $h_1, h_2 : V \\to X$. Then $g\\circ f\\circ h_1 = g\\circ f\\circ h_2$. Then because $g\\circ f$ is a monomorphism, $h_1 = h_2$. Thus $f$ is a monomorphism. QED.\r\n\r\n\\textbf{28. If $f : X \\to Y$ and $g : Y \\to Z$ are morphisms with $g\\circ f$ an isomorphism, then $g$ is an epimorphism.}\r\n\r\nAn isomorphism is an epimorphism. Thus $g\\circ f$ is an epimorphism and thus so is $g$. QED.\r\n\r\n\\textbf{29. If $f : X \\to Y$ and $g : Y \\to Z$ are morphisms with $g\\circ f$ an isomorphism, then $f$ is a monomorphism.}\r\n\r\nAn isomorphism is a monomorphism. Thus $g\\circ f$ is a monomorphism and thus so is $f$. QED.\r\n\r\n\\section{Zero objects and morphisms}\r\n\r\n\\textbf{1. An initial object in a category $C$ is an object $I$ such that for every object $X$ there exists precisely one morphism $I \\to X$.}\r\n\r\n\\textbf{2. A terminal object in a category $C$ is an object $F$ such that for every object $X$ there exists precisely one morphism $X \\to F$.}\r\n\r\n\\textbf{3. In the category of sets the empty set is the unique initial object.}\r\n\r\nIt is clear that the empty set is an initial object for sets. 
No other set $X$ can be an initial object, since if $x \\in X$ then for the set $Y = \\{0, 1\\}$ we have the map which sends $x$ to $0$ and every other element of $X$ to $1$ and we have the map which sends $x$ to $1$ and every other element of $X$ to $0$.\r\n\r\nThese maps are clearly distinct and so $X$ cannot be an initial object. Thus the initial object in sets is unique. QED.\r\n\r\n\\textbf{4. In the category of sets, every singleton $\\{x\\}$ is a terminal object. No other objects are terminal.}\r\n\r\nThe only map from a set $X$ to the singleton $\\{x\\}$ is the map which sends all elements of $X$ to $x$. Thus $\\{x\\}$ is a terminal object in the category of sets.\r\n\r\nThere are no morphisms from a nonempty set to the empty set, and if a set has more than one element there is more than one morphism from any nonempty set to that set. Thus no other sets are terminal in the category of sets. QED.\r\n\r\n\\textbf{5. In the category of $R$-modules over a commutative ring $R$ the zero module, containing just zero, is both an initial and terminal object.}\r\n\r\nSince an $R$-module homomorphism must send $0$ to $0$, there is only one morphism from any $R$-module to the zero module, and only one from the zero module to any $R$-module. QED.\r\n\r\n\\textbf{6. A zero object in a category is an object that is both an initial and terminal object.}\r\n\r\n\\textbf{7. If a category has an initial object, it is unique up to unique isomorphism.}\r\n\r\nIf $X$ and $Y$ are initial objects in a category $C$ then there is a unique morphism $f : X \\to Y$ and a unique morphism $g : Y \\to X$. There is also a unique morphism from $X \\to X$ which must be the identity morphism, and similarly for $Y$.\r\n\r\nThe composition $f\\circ g$ must therefore be the identity morphism on $Y$ and similarly $g\\circ f$ must be the identity morphism on $X$. Thus $f$ and $g$ are unique isomorphisms. QED.\r\n\r\n\\textbf{8. If a category has a terminal object, it is unique up to unique isomorphism.}\r\n\r\nIf $X$ and $Y$ are terminal objects in a category $C$ then there is a unique morphism $f : X \\to Y$ and a unique morphism $g : Y \\to X$. There is also a unique morphism from $X \\to X$ which must be the identity morphism, and similarly for $Y$.\r\n\r\nThe composition $f\\circ g$ must therefore be the identity morphism on $Y$ and similarly $g\\circ f$ must be the identity morphism on $X$. Thus $f$ and $g$ are unique isomorphisms. QED.\r\n\r\n\\textbf{9. If a category has a zero object it is unique.}\r\n\r\nThis follows immediately from the definition and the previous two theorems. QED.\r\n\r\n\\textbf{10. A category with zero morphisms is one where for each pair of objects $A$ and $B$ there is a unique morphism $0_{AB} : A \\to B$ such that for all morphisms $f : Y \\to Z$ and $g : X \\to Y$ we have $f\\circ 0_{XY} = 0_{XZ}$ and $0_{YZ}\\circ g = 0_{XZ}$.}\r\n\r\n\\textbf{11. If we define $0_{XY} : X \\to Y$ to be the composition of $h_1 : X \\to 0$ and $h_2 : 0 \\to Y$ in a category with a zero object $0$ then it is a category with zero morphisms.}\r\n\r\nIf $0$ is a zero object then $0_{XY}$ is defined uniquely since $h_1$ and $h_2$ are unique.\r\n\r\nIf $f : Y \\to Z$ and $g : X \\to Y$ are arbitrary morphisms in the category then $f\\circ 0_{XY} = f\\circ h_2\\circ h_1$ for morphisms $h_1 : X \\to 0$ and $h_2 : 0 \\to Y$.\r\n\r\nHere $f\\circ h_2$ is the unique morphism from $0$ to $Z$ and $h_1$ is the unique morphism from $X$ to $0$. 
Thus their composition is the morphism $0_{XZ}$.

Similarly $0_{YZ}\circ g = h_1\circ h_2\circ g$ for morphisms $h_2 : Y \to 0$ and $h_1 : 0 \to Z$.

Here $h_2\circ g$ is the unique morphism from $X$ to $0$ and $h_1$ is the unique morphism from $0$ to $Z$. Thus their composition is the morphism $0_{XZ}$. Thus the category is a category with zero morphisms. QED.

\section{Preadditive categories}

\textbf{1. A homset is the collection of all morphisms in a category between given source and target objects.}

The collection of all morphisms between two objects $A$ and $B$ is denoted $\hom(A, B)$.

\textbf{2. A locally small category is a category in which homsets are required to be sets.}

\textbf{3. The category of sets with maps is locally small.}

The class of all maps between two sets $A$ and $B$ is a subclass of the power set of their cartesian product $A\times B$ (each map being identified with its graph), which is a set. QED.

\textbf{4. The category of modules over a commutative ring $R$, with module homomorphisms, is a locally small category.}

The class of all $R$-module homomorphisms between two $R$-modules $X$ and $Y$ is obtained by comprehension from the set of all maps between $X$ and $Y$. Thus it is a set. QED.

\textbf{5. A preadditive category is a category in which every homset $\hom(A, B)$ has the structure of an additive abelian group, composition of morphisms is (left and right) distributive over addition and which contains a zero object.}

\textbf{6. In a preadditive category whose zero object is denoted $0$, the groups $\hom(A, 0)$ and $\hom(0, B)$ are trivial for all objects $A$ and $B$ in the category, containing only the zero morphisms from $A$ to $0$ and $0$ to $B$ respectively.}

This follows immediately from the definition of the zero object as an initial and terminal object, i.e. there is a unique morphism from $A$ to $0$ and a unique morphism from $0$ to $B$.

Since $\hom(A, 0)$ and $\hom(0, B)$ are abelian groups they contain at least the identity element. This must be the unique morphism from $A$ to $0$ or $0$ to $B$ respectively.

\textbf{7. If $A$, $B$ and $C$ are objects in a preadditive category and the identity element of $\hom(X, Y)$ is denoted $0_{XY}$ then $0_{BC}\circ 0_{AB} = 0_{AC}$.}

Consider the subset $S = \{f\circ 0_{AB} \;|\; f \in \hom(B, C)\}$ of $\hom(A, C)$. By right distributivity if $f_1$ and $f_2$ are in $S$ then $f_1 - f_2$ is in $S$. Thus $S$ is a subgroup of $\hom(A, C)$.

The additive identity of $S$ is the additive identity of $\hom(A, C)$, namely $0_{AC}$. We will show that $0_{BC}\circ 0_{AB}$ is the additive identity of $S$ and is thus equal to $0_{AC}$.

Let $f\circ 0_{AB}$ be an element of $S$ with $f \in \hom(B, C)$. Then $f\circ 0_{AB} + 0_{BC}\circ 0_{AB} = (f + 0_{BC})\circ 0_{AB} = f\circ 0_{AB}$ by distributivity and the definition of the identity element of $\hom(B, C)$. Similarly $0_{BC}\circ 0_{AB} + f\circ 0_{AB} = f\circ 0_{AB}$. Thus $0_{BC}\circ 0_{AB}$ is the additive identity of $S$, and hence $0_{BC}\circ 0_{AB} = 0_{AC}$. QED.

\textbf{8. Every preadditive category is a category with zero morphisms.}

This follows immediately from the previous theorem. QED.

\textbf{9. The endomorphism set of an object $X$ of a category is the set $\hom(X, X)$.}

We denote the endomorphism set of $X$ by End$(X)$.
\textbf{10. For any object $X$ of a preadditive category, the endomorphism set End$(X)$ is a ring with addition given by the abelian group structure of $\hom(X, X)$ and multiplication given by composition of morphisms.}

The identity morphism id$_X$ serves as the multiplicative identity, and distributivity of multiplication over addition is exactly the distributivity of composition over addition in the preadditive category. It is easy to check that the remaining ring axioms hold for End$(X)$.

\textbf{11. The category of $R$-modules for a commutative ring $R$ is a preadditive category.}

We have seen that the category has a zero object. For $R$-modules $X$ and $Y$ we define $(f_1 + f_2)(x) = f_1(x) + f_2(x)$ for all $x \in X$ and $f_1, f_2 : X \to Y$.

The zero morphism $0_{AB} : A \to B$ sending every element of $A$ to $0$ is the identity element of $\hom(A, B)$. The additive inverse of the morphism $f : A \to B$ is the morphism which sends $a \mapsto -f(a)$. As addition is associative and commutative in an $R$-module $B$, it is clear that $\hom(A, B)$ is associative and commutative under addition of homomorphisms. Thus $\hom(A, B)$ is an abelian group for all $R$-modules $A$ and $B$.

If $f_1, f_2 : B \to C$ and $g : A \to B$ are $R$-module homomorphisms then $(f_1 + f_2)\circ g = f_1\circ g + f_2\circ g$.

Similarly $f\circ (g_1 + g_2) = f\circ g_1 + f\circ g_2$ for $R$-module homomorphisms $f : B \to C$ and $g_1, g_2 : A \to B$. Thus composition distributes over addition of homomorphisms and the category of $R$-modules is a preadditive category. QED.

\textbf{12. An $R$-linear category for a commutative ring $R$ is a preadditive category in which each homset $\hom(A, B)$ has the structure of an $R$-module and such that composition of morphisms is $R$-bilinear, i.e. $f\circ (rg) = (rf)\circ g = r(f\circ g)$ and composition is additive in each argument.}

\section{Additive categories}

\textbf{1. A product of a family of objects $\{P_\alpha\}_{\alpha \in I}$ in a category is an object $D$ and a collection of morphisms $\{\pi_\alpha : D \to P_\alpha\}_{\alpha \in I}$ such that given any other object $S$ and morphisms $\{s_\alpha : S \to P_\alpha\}$ there exists a unique morphism $\phi : S \to D$ such that $s_\alpha = \pi_\alpha\circ \phi$ for all $\alpha \in I$.}

The morphisms $\pi_\alpha$ are called projection morphisms and the product of the $P_\alpha$ is denoted $\prod_{\alpha \in I} P_\alpha$.

\textbf{2. A coproduct of a family of objects $\{P_\alpha\}_{\alpha \in I}$ in a category is an object $C$ and a collection of morphisms $\{i_\alpha : P_\alpha \to C\}_{\alpha \in I}$ such that given any other object $S$ and morphisms $\{t_\alpha : P_\alpha \to S\}$ there exists a unique morphism $\rho : C \to S$ such that $t_\alpha = \rho\circ i_\alpha$ for all $\alpha \in I$.}

The morphisms $i_\alpha$ are called coprojection morphisms and the coproduct of the $P_\alpha$ is denoted $\coprod_{\alpha \in I} P_\alpha$ or $\bigoplus_{\alpha \in I} P_\alpha$.

\textbf{3. Products and coproducts, if they exist, are unique up to unique isomorphism.}

They are defined by a universal property and thus they are unique up to unique isomorphism. QED.

\textbf{4. The product over the empty set is a terminal object in the category of sets.}

Let $T$ be a product over the empty family. The collection of projection morphisms is empty. Any other object $T'$ trivially carries an empty collection of morphisms to the empty family, so by the universal property there is a unique morphism from $T'$ to $T$. In other words, $T$ is a terminal object for the category. QED.
\textbf{5. The coproduct over the empty set is an initial object in the category of sets.}

Let $I$ be a coproduct over the empty family. The collection of coprojection morphisms is empty. Any other object $I'$ trivially carries an empty collection of morphisms from the empty family, so by the universal property there is a unique morphism from $I$ to $I'$. In other words, $I$ is an initial object for the category. QED.

\textbf{6. The cartesian product of two non-empty sets is a product in the category of sets.}

Let $A$ and $B$ be non-empty sets. Let $\pi_1 : A\times B \to A$ and $\pi_2 : A\times B \to B$ be the projection maps.

Let $S$ be any set and let $f_1 : S \to A$ and $f_2 : S \to B$ be any other maps. Let $g : S \to A\times B$ be given by $g(s) = (f_1(s), f_2(s))$ for all $s \in S$. Then $(\pi_1\circ g)(s) = \pi_1((f_1(s), f_2(s))) = f_1(s)$ and similarly $(\pi_2\circ g)(s) = \pi_2((f_1(s), f_2(s))) = f_2(s)$. Thus $g$ satisfies the first part of the universal property.

We must now show that $g$ is the unique map with this property. Suppose there exists another $h : S \to A\times B$ such that $\pi_1\circ h = f_1$ and $\pi_2\circ h = f_2$.

For $s \in S$ we have $h(s) = (a, b)$ say. Then $f_1(s) = \pi_1(h(s)) = \pi_1((a, b)) = a$ and $f_2(s) = \pi_2(h(s)) = \pi_2((a, b)) = b$. Thus $h(s) = (a, b) = (f_1(s), f_2(s)) = g(s)$ for all $s \in S$ and so $h = g$ and $g$ is unique. QED.

\textbf{7. The disjoint union of two sets $A$ and $B$ is the set $A'\cup B'$ where $A' = \{0\}\times A$ and $B' = \{1\}\times B$.}

We denote the disjoint union of sets $A$ and $B$ by $A \coprod B$.

\textbf{8. The disjoint union of two sets $A$ and $B$ is a coproduct in the category of sets.}

Let $i_1 : A \to A'\cup B'$ be given by $i_1(a) = (0, a)$ and $i_2 : B \to A'\cup B'$ be given by $i_2(b) = (1, b)$ for all $a \in A$ and $b \in B$.

Let $S$ be any other set and $f_1 : A \to S$ and $f_2 : B \to S$ be maps. Define $g : A\coprod B \to S$ by $g((0, a)) = f_1(a)$ and $g((1, b)) = f_2(b)$. We see that $f_1 = g\circ i_1$ and $f_2 = g\circ i_2$.

Now suppose that there exists another map $h : A\coprod B \to S$ with $f_1 = h\circ i_1$ and $f_2 = h\circ i_2$. Then $h((0, a)) = h(i_1(a)) = f_1(a) = g((0, a))$. Similarly $h((1, b)) = h(i_2(b)) = f_2(b) = g((1, b))$. Thus $g = h$ and the morphism is unique. QED.

\textbf{9. The direct product of two $R$-modules $A$ and $B$ over a commutative ring is the cartesian product $A\times B$ endowed with addition and scalar multiplication coordinatewise.}

\textbf{10. The direct product of two $R$-modules over a commutative ring $R$ is a product in the category of $R$-modules.}

It is easy to check that the projection maps on the set $A\times B$ respect addition and scalar multiplication and map zero to zero. Thus they are $R$-module homomorphisms.

Let $M$ be any other $R$-module and $f_1 : M \to A$ and $f_2 : M \to B$ be $R$-module homomorphisms. Precisely the same argument as that for sets shows that $A\times B$ satisfies the first part of the universal property for a product in the category of $R$-modules.

Similarly the same argument as for sets shows that the $R$-module homomorphism $g : M \to A\times B$ defined by $g(m) = (f_1(m), f_2(m))$ is the unique $R$-module homomorphism such that $\pi_1\circ g = f_1$ and $\pi_2\circ g = f_2$. QED.
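For a concrete instance of the universal property in 10, take $M = A = B$ and $f_1 = f_2 = \mbox{id}_M$. The induced $R$-module homomorphism $g : M \to M\times M$ is the diagonal $g(m) = (m, m)$, and it is the only homomorphism whose compositions with the two projections both return the identity.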
\textbf{11. The direct product of two $R$-modules $A$ and $B$ over a commutative ring $R$ is a coproduct in the category of $R$-modules.}

It is easy to check that the coprojection maps $i_1 : A \to A\times B$ and $i_2 : B \to A\times B$ respect addition and scalar multiplication and map zero to zero. Thus they are $R$-module homomorphisms.

Let $M$ be any other $R$-module and $f_1 : A \to M$ and $f_2 : B \to M$ be $R$-module homomorphisms. Let us define $g : A\times B \to M$ by $g((a, b)) = f_1(a) + f_2(b)$ for all $a \in A$ and $b \in B$.

We have that $g((0, 0)) = f_1(0) + f_2(0) = 0$. We also have $g((a_1, b_1) + (a_2, b_2)) = f_1(a_1 + a_2) + f_2(b_1 + b_2) = f_1(a_1) + f_1(a_2) + f_2(b_1) + f_2(b_2)$ by linearity of $f_1$ and $f_2$. But this in turn is equal to $g((a_1, b_1)) + g((a_2, b_2))$.

Similarly, for any $r \in R$ we have $g(r(a, b)) = f_1(ra) + f_2(rb) = r(f_1(a) + f_2(b))$ by linearity of $f_1$ and $f_2$. But this in turn is equal to $rg((a, b))$. Thus $g$ is an $R$-module homomorphism.

We note that $f_1 = g\circ i_1$ and $f_2 = g\circ i_2$.

Now suppose that there exists another $R$-module homomorphism $h : A\times B \to M$ with $f_1 = h\circ i_1$ and $f_2 = h\circ i_2$. Then $h((a, b)) = h((a, 0)) + h((0, b)) = h(i_1(a)) + h(i_2(b)) = f_1(a) + f_2(b)$. But as we have seen, this is equal to $g(i_1(a)) + g(i_2(b)) = g((a, b))$. Thus $g = h$ and the morphism is unique. QED.
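Dually to the diagonal example for the product, taking $M = A = B$ and $f_1 = f_2 = \mbox{id}_M$ here yields the sum map $g((a, b)) = a + b$, whose compositions with the coprojections $i_1(a) = (a, 0)$ and $i_2(b) = (0, b)$ are both the identity.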
\textbf{12. If $M_1, M_2, \ldots, M_n$ are objects in a preadditive category, the finite product $\prod_i M_i$ exists in the category iff the finite coproduct $\bigoplus_i M_i$ exists. Furthermore the finite product and coproduct agree when they exist.}

Suppose $\prod_i M_i$ exists with projection morphisms $p_i : \prod_i M_i \to M_i$. For each $i$ we have the identity morphism id$_{M_i} : M_i \to M_i$ and the zero morphism $0_{M_iM_j} : M_i \to M_j$ for any $j$.

Fix an $M_i$. Then by the universal property of the product there exists a unique morphism $\phi_i : M_i \to \prod_i M_i$ such that $p_j\circ \phi_i = 0_{M_iM_j}$ for $j \neq i$ and $p_i\circ \phi_i =$ id$_{M_i}$.

Let $L$ be any object in the category with morphisms $f_i : M_i \to L$. We define a morphism $f : \prod_i M_i \to L$ by $f = \sum_{i=1}^n f_i\circ p_i$.

By distributivity of composition over addition of morphisms in a preadditive category we have that
$$f\circ \phi_i = \sum_{j=1}^n f_j\circ p_j\circ \phi_i = \sum_{j=1}^n f_j\circ \delta_{ij} = f_i,$$
where $\delta_{ij}$ is the identity morphism if $i = j$ and the zero morphism otherwise.

Thus we will have shown that $\prod_i M_i$ along with the morphisms $\phi_i$ is a coproduct of the $M_i$ if we can show that the morphism $f$ is unique.

Consider the morphism $h = \sum_{i=1}^n \phi_i\circ p_i$. It is an endomorphism of $\prod_i M_i$. We have that
$$p_j\circ h = \sum_{i=1}^n p_j\circ \phi_i\circ p_i = \sum_{i=1}^n \delta_{ij}\circ p_i = p_j.$$

By the definition of the product, there is a unique morphism $h : \prod_i M_i \to \prod_i M_i$ such that $p_j\circ h = p_j$ for all $j$. That morphism is the identity morphism, and thus $h =$ id.

Now we can prove the uniqueness of $f$. Suppose $g : \prod_i M_i \to L$ is another morphism with $g\circ \phi_i = f_i$. Then $(f - g)\circ \phi_i = f_i - f_i = 0$. Thus
$$0 = \sum_{i=1}^n ((f - g)\circ \phi_i)\circ p_i = (f - g)\circ \sum_{i=1}^n \phi_i\circ p_i.$$

But by what we just proved, this is equal to $(f - g)\circ$ id$ = f - g$. Thus $f = g$ as required and $\prod_i M_i$ along with the $\phi_i$ is a coproduct.

The converse argument that the product exists and agrees with the coproduct if it exists is dual to the above argument. QED.

\textbf{13. An additive category is a preadditive category in which finite products and coproducts exist.}

\textbf{14. The category of $R$-modules over a commutative ring is an additive category.}

We have seen that the finite direct product is both a product and a coproduct in the category of $R$-modules, which is preadditive. Thus finite products and coproducts exist. QED.

\textbf{15. In a category with zero morphisms, the projection morphisms $\pi_i : \prod_i M_i \to M_i$ are epimorphisms.}

For an index $i$ let id$_i : M_i \to M_i$ be the identity morphism and $0_{ij} : M_i \to M_j$ be the zero morphism for all $j \neq i$.

By the universal property of the product we get a morphism $p_i : M_i \to \prod_i M_i$ with $\pi_i\circ p_i =$ id$_i$ and $\pi_j\circ p_i = 0_{ij}$ for all $j \neq i$.

Suppose there exists an object $X$ and morphisms $f, g : M_i \to X$ such that $f\circ \pi_i = g\circ \pi_i$. Then $f\circ \pi_i\circ p_i = f\circ$ id$_i = f$. Similarly $g\circ \pi_i \circ p_i = g\circ$ id$_i = g$. But the left sides of both expressions are equal and thus $f = g$, i.e. $\pi_i$ is an epimorphism. QED.

\textbf{16. In a category with zero morphisms, the coprojection morphisms $i_\alpha : M_\alpha \to \coprod_\alpha M_\alpha$ are monomorphisms.}

The argument is dual to the argument for the product above. QED.

\textbf{17. A biproduct of objects $M_1, M_2, \ldots, M_n$ in a category with zero object is an object $M_1\oplus M_2\oplus \cdots \oplus M_n$ together with projection morphisms $\pi_i : M_1\oplus \cdots \oplus M_n \to M_i$ and coprojection morphisms $\iota_i : M_i \to M_1\oplus \cdots \oplus M_n$ such that $\pi_i\circ \iota_i =$ id$_{M_i}$ and $\pi_i\circ \iota_j = 0_{ij}$ for all $j \neq i$ and such that $M_1\oplus \cdots \oplus M_n$ along with the $\pi_i$ is a product of the $M_i$ and along with the $\iota_i$ is a coproduct of the $M_i$.}

\textbf{18. In a preadditive category products and coproducts are biproducts.}
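To see this concretely in the category of $R$-modules, take $M_1\oplus M_2 = M_1\times M_2$ with $\pi_1((x_1, x_2)) = x_1$ and $\iota_1(x_1) = (x_1, 0)$, and similarly for the second factor. Then $\pi_i\circ \iota_i = \mbox{id}_{M_i}$, $\pi_i\circ \iota_j = 0$ for $i \neq j$, and $\iota_1\circ \pi_1 + \iota_2\circ \pi_2 = \mbox{id}$, which is exactly the endomorphism identity exploited in the proof of 12.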
\textbf{19. The product of objects in a category with products is weakly associative, i.e. if $A$, $B$ and $C$ are objects then $(A\times B)\times C \cong A\times (B\times C)$.}

Let the projections of $A\times B$ be $p_1 : A\times B \to A$ and $p_2 : A\times B \to B$. Similarly let the projections of $B\times C$ be $q_1 : B\times C \to B$ and $q_2 : B\times C \to C$.

Let the projections of $(A\times B)\times C$ be $r_1 : (A\times B)\times C \to A\times B$ and $r_2 : (A\times B)\times C \to C$ and the projections of $A\times (B\times C)$ be $s_1 : A\times(B\times C) \to A$ and $s_2 : A\times(B\times C) \to B\times C$.

By the universal property of the product of $B$ and $C$ there exists a morphism $f$ such that $q_1\circ f = p_2\circ r_1$ and $q_2\circ f = r_2$.

By the universal property of the product of $A$ and $B\times C$ there exists a morphism $g$ such that $s_1\circ g = p_1\circ r_1$ and $s_2\circ g = f$.

Similarly there exist morphisms $h$ and $i$ such that $p_1\circ h = s_1$, $p_2\circ h = q_1\circ s_2$, $r_1\circ i = h$ and $r_2\circ i = q_2\circ s_2$.

We have $q_1\circ s_2\circ g\circ i = q_1\circ f\circ i = p_2\circ r_1\circ i = p_2\circ h = q_1\circ s_2$. Similarly $q_2\circ s_2\circ g\circ i = q_2\circ f\circ i = r_2\circ i = q_2\circ s_2$.

By the uniqueness of the universal property of $B\times C$ it follows that $s_2\circ g\circ i = s_2$.

We also have $s_1\circ g\circ i = p_1\circ r_1\circ i = p_1\circ h = s_1$. Thus by the uniqueness of the universal property for $A\times (B\times C)$ we have that $g\circ i =$ id$_{A\times(B\times C)}$.

Similarly one can prove that $i\circ g =$ id$_{(A\times B)\times C}$. Thus $i$ and $g$ are inverse isomorphisms showing that $A\times(B\times C) \cong (A\times B)\times C$. QED.

\textbf{20. The coproduct of objects in a category with coproducts is weakly associative, i.e. if $A$, $B$ and $C$ are objects then $(A\oplus B)\oplus C \cong A\oplus (B\oplus C)$.}

The argument is dual to the case of products. QED.

\textbf{21. In a category with products, $(M_1\times \cdots \times M_{n-1})\times M_n$ and $M_1\times \cdots \times M_{n-1}\times M_n$ are isomorphic.}

Let us write $M = (M_1\times \cdots \times M_{n-1})\times M_n$ and $M' = M_1\times \cdots \times M_{n-1}$.

Let $p_i : M' \to M_i$ be the projection maps of the inner product, for $i < n$. Similarly let $q_1 : M \to M'$ and $q_2 : M \to M_n$ be the projection maps of the outer product.

We can define projection maps $r_i : M \to M_i$ by $r_i = p_i\circ q_1$ for $i < n$ and $r_n = q_2$.

Suppose that $S$ is any object in the category with morphisms $s_i : S \to M_i$. Since $M'$ is a product there exists a unique morphism $\phi' : S \to M'$ such that $s_i = p_i\circ \phi'$ for $i < n$.

Moreover, since $M$ is a product and morphisms $\phi' : S \to M'$ and $s_n : S \to M_n$ exist, there exists a unique morphism $\phi : S \to M$ such that $\phi' = q_1\circ \phi$ and $s_n = q_2\circ \phi$.

But now $s_i = p_i\circ q_1\circ \phi$ for $i < n$ and $s_n = q_2\circ \phi$. In other words, $s_i = r_i\circ \phi$ for all $i$, with the maps $r_i$ defined above.

Thus we will have shown that $M$ is a product of the $M_i$ with projections $r_i$ if we can show that $\phi$ is the unique morphism with $s_i = r_i\circ \phi$.

Suppose that $\rho$ is another such morphism, i.e. with $s_i = r_i\circ \rho$. Thus for $i < n$ we have $s_i = p_i\circ q_1\circ \rho$. But $\phi'$ is the unique morphism such that $s_i = p_i\circ \phi'$ and so $\phi' = q_1\circ \rho$.
Similarly $s_i = p_i\circ q_1\circ \phi$ for $i < n$ and so $\phi' = q_1\circ \phi$. Thus $q_1\circ \phi = q_1\circ \rho$.

Similarly, $s_n = r_n\circ \phi = q_2\circ \phi$ and $s_n = r_n\circ \rho = q_2\circ \rho$, i.e. $q_2\circ \phi = q_2\circ \rho$.

But since $\phi$ is the unique morphism such that $\phi' = q_1\circ \phi$ and $s_n = q_2\circ \phi$ we must have $\phi = \rho$ and the morphism is unique. Thus $M$ is a product of the $M_i$ and is thus isomorphic to $M_1\times \cdots \times M_n$. QED.

\textbf{22. In a category with coproducts, $(M_1\oplus \cdots \oplus M_{n-1})\oplus M_n$ and $M_1\oplus \cdots \oplus M_{n-1}\oplus M_n$ are isomorphic.}

The proof is dual to that for products. QED.

\textbf{23. In a category with products $M_1\times M_2$ and $M_2\times M_1$ are isomorphic. Similarly $M_1\oplus M_2$ and $M_2\oplus M_1$ are isomorphic.}

Clear from the definitions. QED.

\textbf{24. In a category with coproducts and initial object $0$ we have $X\oplus 0 \cong 0 \oplus X \cong X$.}

It suffices to show that $X$ is a coproduct of $X$ and $0$. In fact, we claim that $X$ along with the identity morphism id$_X : X \to X$ and the unique map $0_X : 0 \to X$ is a coproduct of $X$ and $0$.

Let $Y$ be any object with morphisms $f_1 : X \to Y$ and $f_2 : 0 \to Y$. Then $f_1 = f_1\circ$ id$_X$, and since $0$ is initial both $f_2$ and $f_1\circ 0_X$ are morphisms from $0$ to $Y$, so $f_2 = f_1\circ 0_X$.

We claim that $f_1$ is the unique morphism $h$ such that $f_1 = h\circ$ id$_X$ and $f_2 = h\circ 0_X$. However this is clear since $h\circ$ id$_X = h$. Thus $X$ is a coproduct of $X$ and $0$. QED.

\textbf{25. In a category with products and terminal object $1$ we have $X\times 1 \cong 1 \times X \cong X$.}

The proof is dual to that for the coproduct with initial object. QED.

\textbf{26. In a category with coproducts, if $f : X \to Y$ is a morphism then for any object $Z$ in the category there is a morphism $f_Z : X\oplus Z \to Y\oplus Z$.}

We have that $X\oplus Z$ is a coproduct of $X$ and $Z$ with coprojections $\pi_1 : X \to X\oplus Z$ and $\pi_2 : Z \to X\oplus Z$. Suppose that $f_1 : Y \to Y\oplus Z$ and $f_2 : Z \to Y\oplus Z$ are the coprojections of the coproduct $Y\oplus Z$. Then there are morphisms $f_1\circ f : X \to Y\oplus Z$ and $f_2 : Z\to Y\oplus Z$.

Thus as $X\oplus Z$ is a coproduct there exists a unique morphism $f_Z : X\oplus Z \to Y\oplus Z$ such that $f_1\circ f = f_Z\circ \pi_1$ and $f_2 = f_Z\circ \pi_2$. QED.

\textbf{27. In a category with products, if $f : X \to Y$ is a morphism then for any object $Z$ in the category there is a morphism $f_Z : X\times Z \to Y\times Z$.}

The argument is dual to that for the coproduct. QED.

\textbf{28. In a category with finite products if $f_1 : X_1 \to Y_1$ and $f_2 : X_2 \to Y_2$ are morphisms then there is a morphism from $X_1\times X_2$ to $Y_1\times Y_2$.}

Let the projection morphisms of $X_1\times X_2$ be $\pi_1 : X_1\times X_2 \to X_1$ and $\pi_2 : X_1\times X_2 \to X_2$. Then $f_1\circ \pi_1$ is a morphism from $X_1\times X_2$ to $Y_1$ and $f_2\circ \pi_2$ is a morphism from $X_1\times X_2$ to $Y_2$.

By the definition of the product $Y_1\times Y_2$ there is then a morphism from $X_1\times X_2 \to Y_1\times Y_2$. QED.
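Concretely, in the category of sets or of $R$-modules the morphism produced by 28 is $(x_1, x_2) \mapsto (f_1(x_1), f_2(x_2))$, often written $f_1\times f_2$; composing it with the projections of $Y_1\times Y_2$ recovers $f_1\circ \pi_1$ and $f_2\circ \pi_2$ as required.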
\textbf{29. In a category with finite coproducts if $f_1 : X_1 \to Y_1$ and $f_2 : X_2 \to Y_2$ are morphisms then there is a morphism from $X_1\oplus X_2$ to $Y_1\oplus Y_2$.}

The argument is dual to that for products. QED.

\textbf{30. In a category with finite products and coproducts and a zero object $0$ we have that there is a canonical morphism $X\oplus Y \to X\times Y$ for all objects $X$ and $Y$.}

As $0$ is terminal, there exists a unique morphism $X \to 0$ and thus a morphism $X\oplus Y \to 0\oplus Y$. But as $0$ is initial we have that $0\oplus Y \cong Y$. Thus there is a morphism from $X\oplus Y$ to $Y$. Similarly there is a morphism from $X\oplus Y$ to $X$.

Thus by the definition of the product there is a morphism from $X\oplus Y$ to $X\times Y$. QED.

\textbf{31. In a locally small category with products and coproducts, for any object $Y$ and family of objects $(X_\alpha)_\alpha$ we have
$$\mbox{Hom}\left(\coprod_\alpha X_\alpha, Y\right) \cong \prod_\alpha \mbox{Hom}(X_\alpha, Y).$$}

Note that the product on the right is a product in the category of sets, i.e. a cartesian product.

Consider the map sending a tuple of morphisms $(f_\alpha)_\alpha \in \prod_\alpha \mbox{Hom}(X_\alpha, Y)$ to the morphism $\coprod_\alpha f_\alpha \in \mbox{Hom}\left(\coprod_\alpha X_\alpha, Y\right)$. Here $\coprod_\alpha f_\alpha$ is the unique morphism given by the universal property of the coproduct.

Firstly we will show that the map is a surjection. To this end let $f \in \mbox{Hom}\left(\coprod_\alpha X_\alpha, Y\right)$.

Let $i_\alpha : X_\alpha \to \coprod_\alpha X_\alpha$ be the coprojection maps of the coproduct of the $X_\alpha$. Then the maps $f\circ i_\alpha$ are morphisms from $X_\alpha$ to $Y$.

The coproduct of the morphisms $f\circ i_\alpha$ is the unique morphism $h : \coprod_\alpha X_\alpha \to Y$ such that $f\circ i_\alpha = h\circ i_\alpha$ for all $\alpha$. In other words, it is the morphism $f$ itself. Thus $f$ is a coproduct of morphisms and the map described above is surjective.

We have that $f_\alpha = \left(\coprod_\alpha f_\alpha\right)\circ i_\alpha$ for all $\alpha$. This shows that the tuple $(f_\alpha)_\alpha$ is determined by its image, i.e. the map is injective. QED.

\textbf{32. In a category with finite products and coproducts there is a morphism $X\times Y \oplus X\times Z \to X\times(Y\oplus Z)$.}

Let $\pi_X$ and $\pi_Y$ be the projection morphisms of $X\times Y$ and $\pi'_X$ and $\pi'_Z$ be the projection morphisms of $X\times Z$. Let $i_Y$ and $i_Z$ be the coprojection morphisms of $Y\oplus Z$ and $i_{X\times Y}$ and $i_{X\times Z}$ be the coprojection morphisms of $X\times Y\oplus X\times Z$.

We have morphisms $i_Y\circ \pi_Y : X\times Y \to Y\oplus Z$ and $i_Z\circ \pi'_Z : X\times Z \to Y\oplus Z$.

By the universal property of $X\times Y\oplus X\times Z$ there is a unique morphism $\phi : X\times Y\oplus X\times Z \to X$ such that $\pi'_X = \phi\circ i_{X\times Z}$ and $\pi_X = \phi\circ i_{X\times Y}$.

Similarly there exists a unique morphism $\phi' : X\times Y\oplus X\times Z \to Y\oplus Z$ such that $i_Y\circ \pi_Y = \phi'\circ i_{X\times Y}$ and $i_Z\circ \pi'_Z = \phi'\circ i_{X\times Z}$.

Thus by the universal property for $X\times(Y\oplus Z)$ there exists a morphism from $X\times Y\oplus X\times Z$ to $X\times(Y\oplus Z)$. QED.
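For finite sets the isomorphism in 31 is the familiar count of maps out of a disjoint union: a map $A\coprod B \to Y$ is precisely a pair consisting of a map $A \to Y$ and a map $B \to Y$, in accordance with $|Y|^{|A| + |B|} = |Y|^{|A|}\cdot|Y|^{|B|}$.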
\textbf{33. A distributive category is one in which $X\times Y\oplus X\times Z \cong X\times(Y\oplus Z)$.}

\textbf{34. The category of sets is a distributive category.}

Let $X$, $Y$ and $Z$ be sets. We have a map $f : X\times Y\oplus X\times Z \to X\times(Y\oplus Z)$ given by $((x, y), 0) \mapsto (x, (y, 0))$ and $((x, z), 1) \mapsto (x, (z, 1))$.

The map is clearly injective and surjective and thus $X\times Y\oplus X\times Z \cong X\times(Y\oplus Z)$ in the category of sets. QED.

\section{Preabelian categories}

\textbf{1. A kernel of a morphism $f : X \to Y$ in a category with zero morphisms is an object $K$ together with a morphism $k : K \to X$ such that $f\circ k$ is $0_{KY}$ and such that given any other object $K'$ and morphism $k' : K' \to X$ such that $f\circ k'$ is $0_{K'Y}$, there is a unique morphism $u : K' \to K$ such that $k\circ u = k'$.}

\textbf{2. The category of $R$-modules over a commutative ring $R$ is a category with zero morphisms where the maps $0_{XY}$ send every element of $X$ to the zero element of $Y$.}

This category contains a zero object and is therefore a category with zero morphisms. The map $0_{XY}$ is a composition of the map that sends all elements of $X$ to the zero element of the zero object and the map which sends the zero element of the zero object to the zero element of $Y$. The result then follows. QED.

\textbf{3. In a category with zero object, if $K, k$ is a kernel of a morphism $f : X \to Y$ then $k$ is a monomorphism.}

Suppose that $g_1, g_2 : V \to K$ are morphisms such that $k\circ g_1 = k\circ g_2$. Since $f\circ (k\circ g_1) = (f\circ k)\circ g_1 = 0_{KY}\circ g_1 = 0_{VY}$, by the definition of kernel there must be a unique morphism $u : V \to K$ such that $k\circ u = k\circ g_1$. Thus $u = g_1$ since $g_1$ is such a morphism.

But $k\circ g_1 = k\circ g_2$ by assumption and so $u = g_2$ is also such a morphism, so that $u = g_1 = g_2$. Thus we have shown that $k$ is a monomorphism. QED.

\textbf{4. In a category with zero object a kernel of a morphism $f : X \to Y$, if it exists, is unique up to unique isomorphism.}

Let $K, k$ and $L, l$ be kernels of $f$. From the definition of kernel, there exist unique morphisms $u_1 : K \to L$ and $u_2 : L \to K$ such that $k\circ u_2 = l$ and $l\circ u_1 = k$.

Then $l\circ(u_1\circ u_2) = (l\circ u_1)\circ u_2 = k\circ u_2 = l = l\circ \mbox{id}_L$.

Similarly $k\circ(u_2\circ u_1) = (k\circ u_2)\circ u_1 = l\circ u_1 = k = k\circ \mbox{id}_K$.

Since $l$ and $k$ are monomorphisms $u_1\circ u_2 = \mbox{id}_L$ and $u_2\circ u_1 = \mbox{id}_K$. Thus $u_1$ and $u_2$ are unique isomorphisms and $K$ and $L$ are unique up to unique isomorphism. QED.

\textbf{5. In the category of $R$-modules over a commutative ring $R$ a kernel of a morphism $f : X \to Y$ is an object $K$ and an injective morphism $k : K \to X$. The image of $k$ consists of all elements of $X$ that are mapped to $0$ by $f$.}

By definition the composition $f\circ k$ must send every element of $K$ to the zero element of $Y$. Thus $f$ must send every element of the image of $k$ to $0$.

Now let $K'$ be the subset of all $x \in X$ such that $f(x) = 0$. Let $k' : K' \to X$ be the inclusion map.
It is easy to check that $K'$ is an $R$-module and $k'$ is a module homomorphism, and so $f\circ k' = 0_{K'Y}$.

If $K, k$ is a kernel of $f$ then there must be a unique morphism $u : K' \to K$ such that $k\circ u = k'$. Now the image of $k'$ is all elements of $X$ that map to zero under $f$. Thus all such elements must be in the image of $k$. Combined with the observation above that every element of the image of $k$ maps to $0$, this shows that the image of $k$ is precisely the set of elements that map to $0$ under $f$.

As $k$ is a monomorphism it is an injective map. QED.

\textbf{6. We define $\ker(f)$ for an $R$-module homomorphism $f : X \to Y$ to be the $R$-submodule of elements of $X$ that are mapped to zero under $f$.}

\textbf{7. In the category of $R$-modules, if $f : X \to Y$ is a morphism then $\ker(f), \iota$ where $\iota : \ker(f) \to X$ is the inclusion map, is a kernel of $f$.}

By definition $f\circ \iota = 0$. Let $K$ be an $R$-module and $k : K \to X$ be an $R$-module homomorphism such that $f\circ k = 0$.

Clearly $k(x) \in \ker(f)$ for all $x \in K$. Thus $k$ factors through $\ker(f)$ and we can define a map $u : K \to \ker(f)$ by $u(x) = k(x)$.

By construction, $\iota\circ u = k$. It is easy to check that $u$ is an $R$-module homomorphism from its construction. It remains only to show that $u$ is unique.

Suppose that $u' : K \to \ker(f)$ is another $R$-module homomorphism for which $\iota\circ u' = k = \iota\circ u$. As $\iota$ is injective we have that $u = u'$. Thus $u$ is unique. Thus by definition $\ker(f), \iota$ is a kernel of $f$. QED.

\textbf{8. The cokernel of a morphism $f : X \to Y$ in a category with zero morphisms is an object $Q$ together with a morphism $q : Y \to Q$ such that $q\circ f = 0_{XQ}$ and such that for any other morphism $q' : Y \to Q'$ for which $q'\circ f = 0_{XQ'}$ there exists a unique morphism $u : Q \to Q'$ such that $q' = u\circ q$.}

\textbf{9. In a category with zero object, if $Q, q$ is a cokernel of a morphism $f : X \to Y$ then $q$ is an epimorphism.}

Suppose that $g_1, g_2 : Q \to V$ are morphisms such that $g_1\circ q = g_2\circ q$. Since $(g_1\circ q)\circ f = g_1\circ (q\circ f) = g_1\circ 0_{XQ} = 0_{XV}$, by the definition of cokernel there must be a unique morphism $u : Q \to V$ such that $u\circ q = g_1\circ q$. Thus $u = g_1$ since $g_1$ is such a morphism.

But $g_1\circ q = g_2\circ q$ by assumption and so $u = g_2$ is also such a morphism, so that $u = g_1 = g_2$. Thus we have shown that $q$ is an epimorphism. QED.

\textbf{10. In a category with zero object a cokernel of a morphism $f : X \to Y$, if it exists, is unique up to unique isomorphism.}

Let $Q, q$ and $R, r$ be cokernels of $f$. From the definition of cokernel, there exist unique morphisms $u_1 : Q \to R$ and $u_2 : R \to Q$ such that $u_2\circ q = r$ and $u_1\circ r = q$.

Then $(u_2\circ u_1)\circ r = u_2\circ (u_1\circ r) = u_2\circ q = r = \mbox{id}_R\circ r$.

Similarly $(u_1\circ u_2)\circ q = u_1\circ (u_2\circ q) = u_1\circ r = q = \mbox{id}_Q\circ q$.

Since $q$ and $r$ are epimorphisms $u_2\circ u_1 = \mbox{id}_R$ and $u_1\circ u_2 = \mbox{id}_Q$. Thus $u_1$ and $u_2$ are unique isomorphisms and $Q$ and $R$ are unique up to unique isomorphism. QED.
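For a concrete kernel, take any commutative ring $R$, an element $a \in R$, and the projection $f : R \to R/aR$ of $R$-modules. Every element of the submodule $aR$ maps to $0$, and any morphism $k : K \to R$ with $f\circ k = 0$ has image inside $aR$, so $aR$ together with its inclusion into $R$ is a kernel of $f$.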
\textbf{11. In the category of $R$-modules over a commutative ring $R$ a cokernel of a morphism $f : X \to Y$ is an object $Q$ and a surjective morphism $q : Y \to Q$. The kernel of $q$ is the image of $f$.}

By definition the composition $q\circ f$ must send every element of $X$ to the zero element of $Q$. Thus $q$ must send every element of the image of $f$ to $0$.

Now let $Q'$ be the quotient module $Y/\mbox{Im}(f)$. Let $q' : Y \to Q'$ be the natural surjective map sending $y \mapsto y + \mbox{Im}(f)$. It is easy to check that $Q'$ is an $R$-module and $q'$ is a module homomorphism, and so $q'\circ f = 0_{XQ'}$.

If $Q, q$ is a cokernel of $f$ then there must be a unique morphism $u : Q \to Q'$ such that $q' = u\circ q$. Now the kernel of $q'$ is all elements of $Y$ that are in the image of $f$, and since $q' = u\circ q$ the kernel of $q$ is contained in the kernel of $q'$. Combined with the observation above that $q$ sends the image of $f$ to $0$, this shows that the kernel of $q$ is precisely the image of $f$.

As $q$ is an epimorphism it is a surjective map. QED.

\textbf{12. We define $\mbox{coker}(f)$ for an $R$-module homomorphism $f : X \to Y$ to be the quotient module $Y/\mbox{Im}(f)$.}

\textbf{13. In the category of $R$-modules, if $f : X \to Y$ is a morphism then $\mbox{coker}(f), \pi$ where $\pi : Y \to Y/\mbox{Im}(f)$ is the natural projection map, is a cokernel of $f$.}

By definition $\pi\circ f = 0$. Let $Q$ be an $R$-module and $q : Y \to Q$ be an $R$-module homomorphism such that $q\circ f = 0$.

Clearly $\mbox{Im}(f) \subseteq \ker(q)$. Thus we can define a map $u : Y/\mbox{Im}(f) \to Q$ which sends an element $y$ of $Y/\mbox{Im}(f)$ to the unique element of $Q$ to which $q$ sends all elements of $\pi^{-1}(y)$.

By construction, $u\circ \pi = q$. It is easy to check that $u$ is an $R$-module homomorphism from its construction. It remains only to show that $u$ is unique.

Suppose that $u' : \mbox{coker}(f) \to Q$ is another $R$-module homomorphism for which $u'\circ \pi = q = u\circ \pi$. As $\pi$ is surjective we have that $u = u'$. Thus $u$ is unique. Thus by definition $\mbox{coker}(f), \pi$ is a cokernel of $f$. QED.

\textbf{14. If a kernel or cokernel of a morphism $f : A \to B$ of a preadditive category exists, it is unique up to unique isomorphism.}

This follows from the fact that a preadditive category has a zero object. QED.

\textbf{15. A subobject of an object $A$ in a category is an equivalence class of monomorphisms to $A$ where two monomorphisms $m : B \to A$ and $m' : B' \to A$ are equivalent if there is an isomorphism $f : B \to B'$ such that $m = m'\circ f$.}

\textbf{16. A quotient object of an object $A$ in a category is an equivalence class of epimorphisms from $A$ where two epimorphisms $e : A \to B$ and $e' : A \to B'$ are equivalent if there is an isomorphism $f : B' \to B$ such that $e = f\circ e'$.}

\textbf{17. In a category with zero object, a kernel, if it exists, is a subobject.}

This follows immediately from the proof that a kernel is unique up to isomorphism. QED.

\textbf{18. In a category with zero object, a cokernel, if it exists, is a quotient object.}

This follows immediately from the proof that a cokernel is unique up to isomorphism. QED.

\textbf{19. A preabelian category is an additive category that has kernels and cokernels.}

\textbf{20. The category of $R$-modules over a commutative ring is preabelian.}

This follows immediately from what we have proved above. QED.
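Dually to the kernel example given earlier, for an element $a$ of a commutative ring $R$ the multiplication map $f : R \to R$, $x \mapsto ax$, has image $aR$, so by 13 its cokernel is the quotient module $R/aR$ with the natural projection. When $a$ is not a zero divisor this $f$ has zero kernel but typically nonzero cokernel.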
\section{Equalisers}

\textbf{1. An equaliser of two morphisms $f, g : X \to Y$ in a category is an object $E$ and a morphism $eq : E \to X$ satisfying $f\circ eq = g\circ eq$ such that for any object $A$ and morphism $m : A \to X$ with $f\circ m = g\circ m$ there exists a unique morphism $u : A \to E$ such that $eq\circ u = m$.}

\textbf{2. If an equaliser of two morphisms exists in a category, it is unique up to unique isomorphism.}

This follows from its definition via a universal property. QED.

\textbf{3. A coequaliser of two morphisms $f, g : X \to Y$ in a category is an object $Q$ and morphism $q : Y \to Q$ such that $q\circ f = q\circ g$ and such that if $Q'$ is any object and $m : Y \to Q'$ is a morphism such that $m\circ f = m\circ g$ then there exists a unique morphism $u : Q \to Q'$ such that $m = u\circ q$.}

\textbf{4. If a coequaliser of two morphisms exists in a category it is unique up to unique isomorphism.}

This follows from its definition via a universal property. QED.

\textbf{5. The equaliser of two morphisms $f, g : X \to Y$ always exists in a preabelian category.}

If the category is merely preadditive it makes sense to consider $\ker(g - f)$. In a preabelian category this always exists. We will show that it is the equaliser of $f$ and $g$.

By definition $\ker(g - f)$ is an object $K$ and morphism $k : K \to X$ such that $(g - f)\circ k = 0_{KY}$ and which is universal for such objects and morphisms. But if $(g - f)\circ k = 0_{KY}$ then $g\circ k - f\circ k = 0_{KY}$ and so $g\circ k = f\circ k$. Moreover, $K, k$ is universal for such objects and morphisms. Thus $\ker(g - f)$ is the equaliser of $f$ and $g$. QED.

\textbf{6. The coequaliser of two morphisms $f, g : X \to Y$ always exists in a preabelian category.}

The proof is dual to the case of the equaliser. QED.

\textbf{7. In the category of sets the equaliser of two maps $f, g : X \to Y$ is $E = \{x \in X \;|\; f(x) = g(x)\}$ along with the injection map $i$ from $E$ into $X$.}

We certainly have $f\circ i = g\circ i$. Suppose $A$ is any set and $m : A \to X$ is a map such that $f\circ m = g\circ m$. Then in fact the image of $m$ lies in $E$. For if $y$ is in the image of $m$ then there exists $a \in A$ such that $m(a) = y$ and so $(f\circ m)(a) = (g\circ m)(a)$, i.e. $f(y) = g(y)$.

Let us define $u : A \to E$ such that $u(a) = m(a)$. Then $m = i\circ u$ by definition. But if $u' : A \to E$ is any other map such that $m = i\circ u'$ then $u'(a) = (i\circ u')(a) = m(a) = (i\circ u)(a) = u(a)$ and so $u' = u$.

Thus $E$ along with the injection map $i$ is an equaliser of $f$ and $g$. QED.
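For instance, for the two maps from the set of integers to itself given by $f(x) = x^2$ and $g(x) = x$, the equaliser is the subset $E = \{0, 1\}$ with its inclusion, these being exactly the integers satisfying $x^2 = x$.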
\textbf{8. In the category of sets the coequaliser of two maps $f, g : X \to Y$ is $Y/\sim$ where $\sim$ is the smallest equivalence relation on $Y$ such that $f(x) \sim g(x)$ for all $x \in X$, along with the natural quotient map $\pi : Y \to Y/\sim$.}

Clearly $\pi\circ f = \pi\circ g$ by definition. Write $Q = Y/\sim$.

If $Q'$ is any other set and $\pi' : Y \to Q'$ is a map such that $\pi'\circ f = \pi'\circ g$ then we can define $u : Q \to Q'$ by $u([y]) = \pi'(y)$.

This is well-defined since if $y$ and $y'$ are equivalent then either $y = y'$ or there exists a chain $y = a_0, a_1, \ldots, a_n = y'$ in $Y$ such that for each $i$ either $a_{i-1} = f(x)$ and $a_i = g(x)$ for some $x \in X$ or $a_{i-1} = g(x)$ and $a_i = f(x)$ for some $x \in X$; this description is what it means for $\sim$ to be the smallest equivalence relation on $Y$ with the required properties.

If $y = y'$ then $\pi'(y) = \pi'(y')$. Otherwise, $\pi'(y) = \pi'(a_0) = \pi'(a_1) = \cdots = \pi'(a_n) = \pi'(y')$, each step using $\pi'\circ f = \pi'\circ g$, and the map $u$ is well-defined.

By definition $u\circ \pi = \pi'$. If $u'$ is any other such map then $u'([y]) = u'(\pi(y)) = \pi'(y) = u([y])$ and so $u = u'$. QED.

\textbf{9. The equaliser of two morphisms $f, g : X \to Y$ in the category of $R$-modules is $\ker(g - f)$.}

This follows from the fact that the category of $R$-modules is a preabelian category. QED.

\textbf{10. The coequaliser of two morphisms $f, g : X \to Y$ in the category of $R$-modules is $\mbox{coker}(g - f)$.}

This follows from the fact that the category of $R$-modules is a preabelian category. QED.
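As an example of 8, let $f, g$ be the maps from the set of integers to itself given by $f(x) = x$ and $g(x) = x + 2$. The smallest equivalence relation identifying $x$ with $x + 2$ for every $x$ has exactly two classes, the even and the odd integers, so the coequaliser in the category of sets is this two-element quotient.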
\section{Products}

\textbf{1. An inverse system in a category is a family of objects $\{A_i\}_{i \in I}$ for a directed set $I, \leq$ with morphisms $f_{ij} : A_j \to A_i$ for all $i \leq j$ such that $f_{ii}$ is the identity morphism of $A_i$ and $f_{ik} = f_{ij}\circ f_{jk}$ for all $i \leq j \leq k$.}

\textbf{2. The inverse limit of an inverse system $A_i, f_{ij}$ in a category is an object $A$ together with morphisms $\phi_i : A \to A_i$ such that $\phi_i = f_{ij}\circ \phi_j$ and such that for any other object $B$ with morphisms $\psi_i : B\to A_i$ satisfying $\psi_i = f_{ij}\circ \psi_j$ there is a unique morphism $u : B \to A$ such that $\psi_i = \phi_i \circ u$ for all $i \in I$.}

We denote the inverse limit of the $A_i$ by $A = \varprojlim A_i$.

\textbf{3. Let $\{A_i\}_{i\in I}, f_{ij}$ be an inverse system of objects and morphisms in a preabelian category for some directed set $I$. Then the inverse limit $\varprojlim A_i$ exists in the category.}

We consider two products in the category, namely $U = \prod_{i \in I} A_i$ and $V = \prod_{(i, j)\in I\times I, i \leq j} A_i$.

Let $\pi_{ij} : V \to A_i$ be the projection maps for the second product and $\pi_i : U \to A_i$ be the projection maps for the first product. By the universal property of the second product, there is a morphism $f : U \to V$ such that $\pi_i = \pi_{ij}\circ f$ for all $i \leq j$.

Similarly, consider the compositions $f_{ij}\circ \pi_j : U \to A_i$. Again by the universal property of $V$ there exists a morphism $g : U \to V$ such that $f_{ij}\circ \pi_j = \pi_{ij}\circ g$.

Let $A$ be the equaliser of $f$ and $g$ with morphism $e : A \to U$. The composition $\phi_i = \pi_i\circ e$ gives us a morphism from $A$ to $A_i$ for each $i \in I$.

We claim that $A$ along with the morphisms $\phi_i$ constitutes the inverse limit of the $A_i$.

For any $i \leq j$ with $i, j \in I$ we have $f_{ij}\circ \pi_j = \pi_{ij}\circ g$ and $\pi_i = \pi_{ij}\circ f$. Thus $\pi_i\circ e = \pi_{ij}\circ f\circ e = \pi_{ij}\circ g\circ e$ since $A, e$ is an equaliser for $f$ and $g$. But this equals $f_{ij}\circ \pi_j\circ e$.

In other words, for the equaliser $A, e$ we have $\pi_i\circ e = f_{ij}\circ \pi_j\circ e$, i.e. $\phi_i = f_{ij}\circ \phi_j$ for all $i \leq j$, which is the property we require for the inverse limit. Now we need to show the universal property.

Suppose that $L$ is any other object with morphisms $\lambda_i : L \to A_i$ satisfying $\lambda_i = f_{ij}\circ \lambda_j$ for all $i \leq j$. Then there is a unique morphism $e' : L \to U$ such that $\pi_i\circ e' = \lambda_i$ for all $i \in I$.

Thus $\pi_i\circ e' = f_{ij}\circ \pi_j\circ e'$ for all $i \leq j$, and so $\pi_{ij}\circ f\circ e' = \pi_{ij}\circ g\circ e'$.

But the $\pi_{ij}\circ f\circ e'$ form a family of morphisms from $L$ to the $A_i$, and there is a unique morphism $\phi$ from $L$ to $V$ such that $\pi_{ij}\circ f\circ e' = \pi_{ij}\circ \phi$ for all $i \leq j$. This unique morphism is therefore $\phi = f\circ e' = g\circ e'$.

In other words, $e'$ equalises $f$ and $g$. Thus there is a unique morphism $\psi : L \to A$ such that $e\circ \psi = e'$.

Then $\pi_i\circ e\circ \psi = \pi_i\circ e'$, i.e. $\phi_i\circ \psi = \lambda_i$ for all $i \in I$. Moreover $\psi$ is unique with this property: if $\psi'$ also satisfies $\phi_i\circ \psi' = \lambda_i$ for all $i$ then $\pi_i\circ (e\circ \psi') = \lambda_i$ for all $i$, so $e\circ \psi' = e'$ by the uniqueness in the universal property of $U$, and then $\psi' = \psi$ by the uniqueness in the definition of the equaliser. This gives the universal property of the inverse limit of the $A_i$. QED.

\textbf{4. A direct system in a category is a family of objects $\{A_i\}_{i \in I}$ for a directed set $I, \leq$ with morphisms $f_{ij} : A_i \to A_j$ for all $i \leq j$ such that $f_{ii}$ is the identity morphism of $A_i$ and $f_{ik} = f_{jk}\circ f_{ij}$ for all $i \leq j \leq k$.}

\textbf{5. The direct limit of a direct system $A_i, f_{ij}$ in a category is an object $A$ together with morphisms $\phi_i : A_i \to A$ such that $\phi_i = \phi_j\circ f_{ij}$ and such that for any other object $B$ with morphisms $\psi_i : A_i \to B$ satisfying $\psi_i = \psi_j\circ f_{ij}$ there is a unique morphism $u : A \to B$ such that $\psi_i = u\circ \phi_i$ for all $i \in I$.}

We denote the direct limit of the $A_i$ by $A = \varinjlim A_i$.

\textbf{6. Let $\{A_i\}_{i\in I}, f_{ij}$ be a direct system of objects and morphisms in a preabelian category for some directed set $I$. Then the direct limit $\varinjlim A_i$ exists in the category.}

The argument is dual to that of the inverse limit. QED.

\textbf{7. Let $\{M_i, f_{ij}\}$ be a direct system of $R$-modules over some partially ordered set $I$. Let $\rho_i : M_i \to M$ be the coprojection maps for $M = \bigoplus_{i \in I} M_i$. Define an $R$-submodule of $M$ by $N = \langle \rho_j f_{ij}(x_i) - \rho_i(x_i) \;|\; i \leq j, x_i \in M_i\rangle.$ Then $\varinjlim M_i = M/N$.}

Define $f_i : M_i \to M/N$ by $f_i(x_i) = \rho_i(x_i) + N$. Then if $i \leq j$ for $i, j \in I$ we have $\rho_j(f_{ij}(x_i)) - \rho_i(x_i) \in N$ and so
$$f_j(f_{ij}(x_i)) = \rho_j(f_{ij}(x_i)) + N = \rho_i(x_i) + N = f_i(x_i).$$
Thus $f_j\circ f_{ij} = f_i$ as required.

Now suppose there is an $R$-module $X$ with $R$-module homomorphisms $g_i : M_i \to X$ such that $g_j\circ f_{ij} = g_i$ for all $i \leq j$ with $i, j \in I$. We will prove that there is a unique $R$-module homomorphism $f : M/N \to X$ such that $f\circ f_i = g_i$ for all $i$.

Note that every element of $M/N$ is of the form $\sum_{i \in J} \rho_i(x_i) + N$ for some finite subset $J$ of $I$. Since we must have $f\circ f_i = g_i$, we have by linearity that $f\left(\sum_{i \in J} \rho_i(x_i) + N\right) = \sum_{i\in J} g_i(x_i)$.

We will show that $f$ so defined is well-defined. Define $f' : M \to X$ by $f'\left(\sum_{i\in J} \rho_i(x_i)\right) = \sum_{i\in J} g_i(x_i)$. Such a homomorphism exists and is unique by the definition of $M$ as a direct sum.
We will show that $N \subseteq \ker f'$ so that $f$ is well-defined.

Suppose $i \leq j$ for $i, j \in I$ and let $x_i \in M_i$. Then
$$f'(\rho_j(f_{ij}(x_i)) - \rho_i(x_i)) = g_j(f_{ij}(x_i)) - g_i(x_i) = 0$$
since $g_j\circ f_{ij} = g_i$. But $N$ is generated by the $\rho_j(f_{ij}(x_i)) - \rho_i(x_i)$ and so $N \subseteq \ker f'$ as required. QED.

\textbf{8. Let $\{M_i, f_{ij}\}$ be a direct system of $R$-modules over some directed set $I$. Then $\varinjlim M_i = \{\rho_i(x_i) + N \;|\; i \in I, x_i \in M_i\}$, where $\rho_i : M_i \to \bigoplus_{i\in I} M_i$ are the coprojection maps.}

By the previous theorem a general element of $\varinjlim M_i$ is of the form $a = \sum_{i\in J} \rho_i(x_i) + N$ for some finite set $J \subseteq I$. We will show that all such elements are of the form $\rho_i(x_i) + N$ for $x_i \in M_i$ and $i \in I$.

Since $I$ is directed there exists some $k \in I$ such that $i \leq k$ for all $i \in J$. Then for all $i \in J$ we have
$$\rho_i(x_i) + N = \rho_k f_{ik}(x_i) + (\rho_i(x_i) - \rho_k f_{ik}(x_i)) + N = \rho_k f_{ik}(x_i) + N,$$
since $\rho_k f_{ik}(x_i) - \rho_i(x_i) \in N$.

Thus if $y_k = \sum_{i \in J} f_{ik}(x_i) \in M_k$ then $a = \rho_k(y_k) + N$. Thus all elements of $\varinjlim M_i$ are of the required form. QED.
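A standard example over the ring of integers: let every $M_i$ (for $i = 1, 2, \ldots$) be the integers and let each $f_{i, i+1}$ be multiplication by $2$, so that $f_{ij}$ is multiplication by $2^{j - i}$. In the direct limit the class of $x$ at stage $i$ is identified with the class of $2x$ at stage $i + 1$, and $\varinjlim M_i$ is the group of rationals with denominator a power of $2$, the class of $x$ at stage $i$ corresponding to $x/2^{i-1}$.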
the map $\\psi$ is unique.\r\n\r\nThus $\\varprojlim M_i$ satisfies all the requirements to be the inverse limit of the $M_i$. QED.\r\n\r\n\\textbf{10. Let $(A_i)_{i\\in I}$ be a family of sets indexed by a directed set $I$ and let the $A_i$ form a direct system with maps $f_{ij} : A_i \\to A_j$. The direct limit of this system exists.}\r\n\r\nLet $A$ be the disjoint union of the $A_i$, i.e. the union of sets $A_i\\times \\{i\\}$. Define a binary relation $\\sim$ on $A$ by $a_i \\sim a_j$ for $a_i \\in A_i$ and $a_j \\in A_j$ with $i, j \\in I$ iff there is a $k \\in I$ such that $f_{ik}(a_1) = f_{jk}(a_2)$.\r\n\r\nWe show that $\\sim$ is an equivalence relation. It is clearly symmetric and due to the existence of $f_{ii}$ it is reflexive.\r\n\r\nNow suppose $a_i \\sim a_j$ and $a_j \\sim a_k$ for $a_i \\in A_i$, $a_j \\in A_j$ and $a_k \\in A_k$. Thus there are $p, q \\in I$ such that $f_{ip}(a_i) = f_{jp}(a_j)$ and $f_{jq}(a_j) = f_{kq}(a_k)$.\r\n\r\nSince $I$ is directed, there exists $r \\in I$ such that $p, q \\leq r$. Then $f_{ir}(a_i) = f_{pr}(f_{ip}(a_i)) = f_{pr}(f_{jp}(a_j)) = f_{jr}(a_j)$. Similarly $f_{kr}(a_k) = f_{qr}(f_{kq}(a_k)) = f_{qr}(f_{jq}(a_j)) = f_{jr}(a_j)$. Hence $a_i \\sim a_k$. Thus $\\sim$ is transitive and therefore an equivalence relation.\r\n\r\nIf we write $p_i : A_i \\to A$ for the map $a_i \\mapsto (a_i, i)$ into the disjoint union, then there is a mapping $f_i : A_i \\to \\varinjlim A_i$ given by $f_i = \\phi\\circ p_i$ where $\\phi : A \\to A/\\sim$ is the quotient map. We claim $A/\\sim$ along with the $f_i$ is the direct limit $\\varinjlim$ in the category of sets. \r\n\r\nWe will show that $f_j\\circ f_{ij} = f_i$ for $i \\leq j$, i.e. that $\\phi\\circ p_j\\circ f_{ij} = \\phi\\circ p_i$. This of course holds if $f_{ij}(a_i) \\sim a_i$ for all $a_i \\in A_i$.\r\n\r\nBut this is clear since we can take $k = j$ and then $f_{jk}(f_{ij}(a_i)) = f_{ik}(a_i) = f_{ij}(a_i)$ since $f_{jk} = f_{jj}$ is the identity.\r\n\r\nBy standard arguments, we show that if $B$ is any other set with maps $g_i : A_i \\to B$ such that $g_j\\circ f_{ij} = g_i$ then there is a unique map $\\phi : \\varinjlim \\to B$ such that $g_i = \\phi\\circ f_i$ for all $i \\in I$. QED.\r\n\r\n\\textbf{11. Let $\\{M_i, f_{ji}\\}$ be an inverse system of sets over some partially ordered set $I$. Then $\\varprojlim M_i = \\{(x_i)_i \\in \\prod_i M_i \\;|\\; f_{ji}(x_j) = x_i \\;\\;\\mbox{whenever}\\;\\; i \\leq j\\}$.}\r\n\r\nThe proof is essentially the same as that for $R$-modules. QED.\r\n\r\n\\textbf{12. Let a family of subsets $(M_i)_{i\\in I}$ of a set $M$ be given for some directed set $I$. Suppose the family is  partially ordered by inclusion, i.e. $i \\leq j$ iff $M_i \\subseteq M_j$. We let $f_{ij}$ be the inclusion map of $M_i$ into $M_j$ if $i \\leq j$. Then $\\varinjlim M_i = \\bigcup_i M_i$.}\r\n\r\nFor $I$ to be a directed set, if $M_i$ and $M_j$ are any two sets in the family with $i, j \\in I$, there must be a $k \\in I$ such that $M_i \\subseteq M_k$ and $M_j \\subseteq M_k$.\r\n\r\nFrom the definition of the equivalence relation on the disjoint union of the $M_i$ from the result on the direct limit of sets given above, we see that $m_i \\sim m_j$ for $m_i \\in M_i$ and $m_j \\in M_j$ iff $m_i = m_j$ as elements of $M$.\r\n\r\nThus if $A$ is the disjoint union of the $M_i$ we have that $A/\\sim$ is simply the set of elements of $M$ which are in some $M_i$, i.e. $\\varinjlim M_i \\cong \\bigcup_i M_i$. QED.\r\n\r\n\\textbf{13. 
\textbf{13. Let a family of subsets $(M_i)_{i\in I}$ of a set $M$ be given for some directed set $I$. Suppose the family is partially ordered by reverse inclusion, i.e. $i \leq j$ iff $M_i \supseteq M_j$. We let $f_{ji}$ be the inclusion map of $M_j$ into $M_i$ if $i \leq j$. Then $\varprojlim M_i \cong \bigcap_i M_i$.}

Since $I$ is a directed set, for all $i, j \in I$ there exists a $k \in I$ such that $i, j \leq k$. This means that $M_i \supseteq M_k$ and $M_j \supseteq M_k$, i.e. $M_k \subseteq M_i \cap M_j$.

We have that $\varprojlim M_i$ is the set of all $(x_i)_i \in \prod_i M_i$ such that $f_{ji}(x_j) = x_i$ for all $i \leq j$. In other words, for all $i \leq j$ we have that $x_i = x_j$ as elements of $M$.

Now let $(x_i)_i$ be an element of $\varprojlim M_i$ and let $i, j \in I$ be arbitrary. Taking $k$ with $i, j \leq k$ as above, we have $x_i = x_j = x_k \in M_k \subseteq M_i\cap M_j$ as elements of $M$.

Thus the common value of the $x_i$ lies in $M_j$ for every $j \in I$, i.e. it lies in $\bigcap_{i\in I} M_i$.

On the other hand, if $x \in \bigcap_{i\in I} M_i$ then the constant sequence $(x, x, x, \ldots)$, whose $j$-th coordinate is $x$ considered as an element of $M_j$, lies in $\varprojlim M_i$.

In other words, $\bigcap_{i \in I} M_i$ is an inverse limit of the $M_i$. QED.

\textbf{14. If we have a direct system of objects $X_i$ over a directed set $I$ with maps $f_{ij}$, and $I$ contains a greatest element $m$ then the direct limit is isomorphic to $X_m$ and the map $f_m : X_m \to \varinjlim X_i$ is an isomorphism.}

As $m$ is a greatest element of $I$ we have that for all $i \in I$ there is a morphism $f_{im} : X_i \to X_m$. Moreover $f_{im} = f_{jm}\circ f_{ij}$ for all $i \leq j$, so $X_m$ together with the morphisms $f_{im}$ is a candidate for the direct limit.

If $A$ is any other object with morphisms $g_i : X_i \to A$ such that $g_i = g_j\circ f_{ij}$ then taking $j = m$ gives $g_i = g_m\circ f_{im}$ for all $i \in I$. Thus $g_m : X_m \to A$ is a morphism compatible with the system in the required sense.

Moreover $g_m$ is the unique such morphism, since any morphism $u : X_m \to A$ with $g_i = u\circ f_{im}$ for all $i$ satisfies in particular $g_m = u\circ f_{mm} = u$.

Thus $X_m$ is a direct limit of the $X_i$, proving the first part of the theorem. For the second part note that under this identification $f_m$ is $f_{mm}$, which is the identity map. QED.

\textbf{15. If we have an inverse system of objects $X_i$ over a directed set $I$ with maps $f_{ji}$, and $I$ contains a greatest element $m$ then the inverse limit is isomorphic to $X_m$ and the map $f_m : \varprojlim X_i \to X_m$ is an isomorphism.}

The proof is dual to that for the direct limit. QED.

\textbf{16. In a preabelian category, if $(M_i)_{i\in I}$ is a direct system with maps $f_{ij} : M_i \to M_j$ for all $i \leq j$ then each map $f_{ij}$ induces a map $f^\star_{ij} :$ Hom$(M_j, N) \to$ Hom$(M_i, N)$ for any object $N$ in the category. The family $($Hom$(M_i, N))_{i\in I}$ is then an inverse system of abelian groups with maps $f^\star_{ij}$ for all $i \leq j$.}

Given $g \in$ Hom$(M_j, N)$ we define $f^\star_{ij}(g) = g\circ f_{ij} \in$ Hom$(M_i, N)$. Each $f^\star_{ij}$ is a homomorphism of abelian groups by distributivity of composition over addition.

Firstly we need to check that $f^\star_{ii}$ is the identity map on Hom$(M_i, N)$.
\\textbf{16. In a preabelian category, if $(M_i)_{i\\in I}$ is a direct system with maps $f_{ij} : M_i \\to M_j$ for all $i \\leq j$ then each map $f_{ij}$ induces a map $f^\\star_{ij} :$ Hom$(M_j, N) \\to$ Hom$(M_i, N)$ for any object $N$ in the category. The family $($Hom$(M_i, N))_{i\\in I}$ is then an inverse system of abelian groups with maps $f^\\star_{ij}$ for all $i \\leq j$.}\r\n\r\nGiven $g \\in$ Hom$(M_j, N)$ we define $f^\\star_{ij}(g) = g\\circ f_{ij} \\in$ Hom$(M_i, N)$.\r\n\r\nFirstly we need to check that $f^\\star_{ii}$ is the identity map on Hom$(M_i, N)$. This is clear, since if $g \\in$ Hom$(M_i, N)$ then $f^\\star_{ii}(g) = g\\circ f_{ii} = g$ and so $f^\\star_{ii}$ is the identity as required.\r\n\r\nWe must also show that $f^\\star_{ij}\\circ f^\\star_{jk} = f^\\star_{ik}$ whenever $i \\leq j \\leq k$. But this is also clear. Given $g \\in$ Hom$(M_k, N)$ we have\r\n$$f^\\star_{ik}(g) = g\\circ f_{ik} = g\\circ f_{jk}\\circ f_{ij} = f^\\star_{ij}(g\\circ f_{jk}) = (f^\\star_{ij}\\circ f^\\star_{jk})(g).$$\r\n\r\nThus we have an inverse system as claimed. QED.\r\n\r\n\\textbf{17. In a preabelian category if $(M_i)_i$ is a direct system of objects whose direct limit exists and $N$ is an object then Hom$(\\varinjlim M_i, N) \\cong \\varprojlim$ Hom$(M_i, N)$.}\r\n\r\nWe claim that in fact, Hom$(\\varinjlim M_i, N)$ is an inverse limit of the system $($Hom$(M_i, N))_i$ with maps $f^\\star_{ij}$, with projection maps $\\pi_j :$ Hom$(\\varinjlim M_i, N) \\to$ Hom$(M_j, N)$ given by $\\pi_j(g) = g\\circ \\phi_j$ where the $\\phi_j$ are the coprojection maps $M_j \\to \\varinjlim M_i$.\r\n\r\nFirstly, we have\r\n$$f^\\star_{ij}(\\pi_j(g)) = f^\\star_{ij}(g\\circ \\phi_j) = g\\circ \\phi_j\\circ f_{ij} = g\\circ \\phi_i = \\pi_i(g),$$\r\nfor all $i \\leq j$ and $g \\in$ Hom$(\\varinjlim M_i, N)$. Thus the projection maps satisfy the required relation.\r\n\r\nNow suppose that $X$ is any abelian group with maps $g_i : X \\to$ Hom$(M_i, N)$ such that $f^\\star_{ij}\\circ g_j = g_i$.\r\n\r\nIf we fix $x \\in X$ then we have a set of maps $g_i(x) : M_i \\to N$ such that $g_j(x)\\circ f_{ij} = (f^\\star_{ij}\\circ g_j)(x) = g_i(x)$. Thus by the definition of the direct limit of the $M_i$ there exists a map $j_x : \\varinjlim M_i \\to N$ such that $j_x\\circ \\phi_i = g_i(x)$.\r\n\r\nLet $j : X \\to$ Hom$(\\varinjlim M_i, N)$ be defined by $j : x \\mapsto j_x$. We note that $j$ is an abelian group homomorphism since\r\n$$j_{x + y}\\circ \\phi_i = g_i(x + y) = g_i(x) + g_i(y) = (j_x + j_y)\\circ \\phi_i,$$\r\nwhich shows that $j_{x + y} = j_x + j_y$ by the uniqueness part of the universal property of the direct limit.\r\n\r\nNext we note that $(\\pi_i\\circ j)(x) = j_x\\circ \\phi_i = g_i(x)$ for all $x \\in X$, so that $\\pi_i\\circ j = g_i$ for all $i \\in I$. Thus $j : X \\to$ Hom$(\\varinjlim M_i, N)$ is the desired map.\r\n\r\nFinally, we show that it is unique. For if $k : X \\to$ Hom$(\\varinjlim M_i, N)$ is another abelian group homomorphism such that $\\pi_i\\circ k = g_i = \\pi_i\\circ j$ then for all $x \\in X$ we have that $k(x)\\circ \\phi_i = (\\pi_i\\circ k)(x) = (\\pi_i\\circ j)(x) = j(x)\\circ \\phi_i$ and so $j(x)$ and $k(x)$ agree on each $\\phi_i$. Thus by the uniqueness in the universal property of the direct limit, $j(x) = k(x)$ for all $x \\in X$ and so $j = k$.\r\n\r\nThis shows that Hom$(\\varinjlim M_i, N)$ together with the maps $\\pi_i$ is an inverse limit of the system $($Hom$(M_i, N))_i$, which gives the stated isomorphism in the theorem. QED.\r\n\r\n
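This is most transparent for a directed union: if an $R$-module $M$ is the union of a directed family of submodules $M_i$, so that $M = \\varinjlim M_i$ by essentially the same argument as for sets, then a homomorphism $M \\to N$ is precisely a family of homomorphisms $g_i : M_i \\to N$ which agree on overlaps, i.e. $g_j$ restricts to $g_i$ whenever $M_i \\subseteq M_j$. Such compatible families are exactly the elements of $\\varprojlim$ Hom$(M_i, N)$.\r\n\r\n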
\\section{Pullbacks}\r\n\r\n\\textbf{1. The pullback of two morphisms $f : X \\to Z$ and $g : Y \\to Z$ in a category is an object $P$ and two morphisms $p_1 : P \\to X$ and $p_2 : P \\to Y$ such that $f\\circ p_1 = g\\circ p_2$ and such that for any other object $Q$ with morphisms $q_1 : Q \\to X$ and $q_2 : Q \\to Y$ such that $f\\circ q_1 = g\\circ q_2$ there exists a unique morphism $u : Q \\to P$ such that $p_2\\circ u = q_2$ and $p_1\\circ u = q_1$.}\r\n\r\nWe denote the pullback of $f$ and $g$ by $P = X\\times_Z Y$.\r\n\r\n\\textbf{2. The pullback of morphisms $f : X \\to Z$ and $g : Y \\to Z$, if it exists, is unique up to unique isomorphism.}\r\n\r\nThis follows from the pullback being defined by a universal property. QED.\r\n\r\n\\textbf{3. In a category with terminal object $T$ the pullback $X\\times_T Y$ is the ordinary product $X\\times Y$.}\r\n\r\nLet $\\pi_1, \\pi_2$ be the projections of the ordinary product and let $f : X \\to T$ and $g : Y\\to T$ be the unique morphisms to the terminal object, of which we are taking the pullback.\r\n\r\nClearly $f\\circ \\pi_1 = g\\circ \\pi_2$ since they are both the unique morphism from $X\\times Y$ to $T$.\r\n\r\nIf $P$ is any other object with morphisms $p_1 : P \\to X$ and $p_2 : P \\to Y$ such that $f\\circ p_1 = g\\circ p_2$ then there exists a unique morphism $u : P \\to X\\times Y$ such that $p_1 = \\pi_1\\circ u$ and $p_2 = \\pi_2\\circ u$ by the universal property of the product. QED.\r\n\r\n\\textbf{4. Let $I$ consist of three elements $i$, $j$ and $k$ with $i \\leq j$ and $i \\leq k$, i.e. not a directed set. The inverse limit of an inverse system of objects $X_i$ over $I$ in a preabelian category is the pullback of $f_{ji}$ and $f_{ki}$.}\r\n\r\nThe inverse limit is an object $P$ with maps $f_i : P \\to X_i$, $f_j : P \\to X_j$ and $f_k : P \\to X_k$ such that $f_i = f_{ji}\\circ f_j$ and $f_i = f_{ki}\\circ f_k$. It is also universal with respect to such triples of maps, which is precisely the defining property of the pullback of $f_{ji}$ and $f_{ki}$. QED.\r\n\r\n\\textbf{5. In the category of sets, a pullback of two maps $f : X \\to Z$ and $g : Y \\to Z$ is given by the set\r\n$$X\\times_Z Y = \\{(x, y) \\in X\\times Y \\;|\\; f(x) = g(y)\\},$$\r\ntogether with the restriction of the projection maps $\\pi_1 : X\\times Y \\to X$ and $\\pi_2 : X\\times Y \\to Y$ to $X\\times_Z Y$.}\r\n\r\nAs $\\pi_1((x, y)) = x$ and $\\pi_2((x, y)) = y$ we have that $f\\circ \\pi_1 = g\\circ \\pi_2$ on $X\\times_Z Y$.\r\n\r\nSuppose $P$ is any other set with maps $p_1 : P \\to X$ and $p_2 : P \\to Y$ such that $f\\circ p_1 = g\\circ p_2$. Define $u : P \\to X\\times Y$ by $u(p) = (p_1(p), p_2(p))$ for $p \\in P$. We see that since $f\\circ p_1 = g\\circ p_2$ that $f(p_1(p)) = g(p_2(p))$ so that $(p_1(p), p_2(p)) \\in X\\times_Z Y$ and so $u$ is in fact a map from $P$ to $X\\times_Z Y$.\r\n\r\nWe see that by construction $p_1 = \\pi_1\\circ u$ and $p_2 = \\pi_2\\circ u$.\r\n\r\nMoreover, it is easy to show that $u$ is the unique map with this property. QED.\r\n\r\n\\textbf{6. The pullback of morphisms $\\phi : A \\to C$ and $\\psi : B \\to C$ in the category of $R$-modules is the submodule $X$ of $A\\times B$ given by $X = \\{(a, b) \\in A\\times B \\;|\\; \\phi(a) = \\psi(b)\\}$.}\r\n\r\nThe proof is as per the pullback in the category of sets. We simply need to show that $X$ is a submodule of $A\\times B$.\r\n\r\nIf $(a_1, b_1), (a_2, b_2) \\in X$ then $\\phi(a_1 + a_2) = \\phi(a_1) + \\phi(a_2) = \\psi(b_1) + \\psi(b_2) = \\psi(b_1 + b_2)$. Similarly if $r \\in R$ and $(a, b) \\in X$ then $\\phi(ra) = r\\phi(a) = r\\psi(b) = \\psi(rb)$. Note that we also have $(0, 0) \\in X$. Thus $X$ is a submodule of $A\\times B$. QED.\r\n\r\n\\textbf{7. Whenever the pullback $X\\times_Z Y$ of $f : X \\to Z$ and $g : Y \\to Z$ exists, so does $Y\\times_Z X$ and they are isomorphic.}\r\n\r\nThis follows from the symmetry of the definition and the fact that the pullback is unique up to unique isomorphism. QED.\r\n\r\n
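For a concrete instance in $R$-modules, take $X = Y = \\Z$, $Z = \\Z/6\\Z$ and let $f = g$ be reduction modulo $6$. Then\r\n$$\\Z\\times_{\\Z/6\\Z} \\Z = \\{(a, b) \\in \\Z\\times\\Z \\;|\\; a \\equiv b \\pmod 6\\},$$\r\nthe module of pairs of integers agreeing modulo $6$, with the two coordinate projections.\r\n\r\n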
\\textbf{8. Monomorphisms pull back to monomorphisms.}\r\n\r\nSuppose $f : X \\to Z$ is a monomorphism and let $g : Y \\to Z$ be another morphism. Let $p_1 : X\\times_Z Y \\to X$ and $p_2 : X\\times_Z Y \\to Y$ be the projection maps from the pullback. We will show that $p_2$ is a monomorphism.\r\n\r\nSuppose $h_1, h_2 : W \\to X\\times_Z Y$ are maps such that $p_2\\circ h_1 = p_2\\circ h_2$. Denote $q = p_1\\circ h_1$ and $r = p_1\\circ h_2$.\r\n\r\nWe have that\r\n$f\\circ q = f\\circ p_1\\circ h_1 = g\\circ p_2\\circ h_1 = g\\circ p_2\\circ h_2 = f\\circ p_1\\circ h_2 = f\\circ r$.\r\n\r\nBut $f$ is a monomorphism and so $q = r$.\r\n\r\nDenote $p = p_2\\circ h_1 = p_2\\circ h_2$. Then we have maps $p : W \\to Y$ and $q : W \\to X$ such that \r\n$$g\\circ p = g\\circ p_2\\circ h_1 = f\\circ p_1\\circ h_1 = f\\circ q.$$\r\n\r\nThus by the universal property of the pullback there is a unique morphism $u : W \\to X\\times_Z Y$ such that $p = p_2\\circ u$ and $q = p_1\\circ u$.\r\n\r\nBut $p = p_2\\circ h_1$ and $q = p_1\\circ h_1$ and so $u = h_1$ on account of the uniqueness of $u$.\r\n\r\nSimilarly $p = p_2\\circ h_2$ and $q = p_1\\circ h_2$ and so $u = h_2$ on account of the uniqueness of $u$.\r\n\r\nThus $h_1 = h_2$ and so $p_2$ is a monomorphism. QED.\r\n\r\n\\textbf{9. Isomorphisms pull back to isomorphisms.}\r\n\r\nLet $f : X \\to Z$ be an isomorphism and $g : Y \\to Z$ be an arbitrary morphism. We will show that $Y$, together with the maps $h = f^{-1}\\circ g : Y \\to X$ and id$_Y$, is a pullback for these morphisms, so that the projection to $Y$ is an isomorphism.\r\n\r\nAs $f$ is an isomorphism, it has an inverse $f^{-1}$ and $h = f^{-1}\\circ g$ exists. We see that $f\\circ h = g = g\\circ$ id$_Y$.\r\n\r\nNow if $P$ is any other object with morphisms $h_1 : P \\to X$ and $h_2 : P \\to Y$ such that $f\\circ h_1 = g\\circ h_2$ then $u = h_2$ satisfies id$_Y\\circ u = h_2$ and $h\\circ u = f^{-1}\\circ g\\circ h_2 = f^{-1}\\circ f\\circ h_1 = h_1$, and it is clearly the unique such morphism. Thus $Y \\cong X\\times_Z Y$ with the projection onto $Y$ being id$_Y$. QED.\r\n\r\n\\textbf{10. Retractions pull back to retractions.}\r\n\r\nSuppose $f : X \\to Z$ is a retraction and $g : Y \\to Z$ is any morphism. Let $p_1 : X\\times_Z Y \\to X$ and $p_2 : X\\times_Z Y \\to Y$ be the projection morphisms of the pullback.\r\n\r\nFirstly, as $f$ is a retraction, there exists a map $f' : Z \\to X$ such that $f\\circ f' =$ id$_Z$. Hence $f\\circ (f'\\circ g) = g\\circ$ id$_Y$. Thus by the definition of a pullback there exists a unique morphism $h : Y \\to X\\times_Z Y$ such that id$_Y = p_2\\circ h$ and $f'\\circ g = p_1\\circ h$. In particular, $p_2$ is a retraction. QED.\r\n\r\n
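In the category of sets, pulling back a monomorphism amounts to taking a preimage: if $S \\subseteq Z$ and $g : Y \\to Z$, the pullback of the inclusion $S \\to Z$ along $g$ is $\\{(s, y) \\;|\\; s = g(y)\\} \\cong g^{-1}(S)$, and the projection $g^{-1}(S) \\to Y$ is again injective, as the theorem predicts.\r\n\r\n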
\\textbf{11. If $f : A \\to B$, $g : B \\to C$ and $k : C' \\to C$ are morphisms then $A\\times_B (B\\times_C C') \\cong A\\times_C C'$.}\r\n\r\nLet $j : B\\times_C C' \\to B$ and $g' : B\\times_C C' \\to C'$ be the projections of the inner pullback and let $i : A\\times_B (B\\times_C C') \\to A$ and $f' : A\\times_B (B\\times_C C') \\to B\\times_C C'$ be the projections of the outer pullback on the left hand side of the expression we are trying to prove.\r\n\r\nWe will first demonstrate a map $A\\times_C C' \\to A\\times_B (B\\times_C C')$.\r\n\r\nFirstly, $g\\circ f\\circ i = g\\circ j\\circ f'$, due to the outer pullback. This is equal to $k\\circ g'\\circ f'$ due to the inner pullback. Thus we have morphisms $g'\\circ f' : A\\times_B (B\\times_C C') \\to C'$ and $i : A\\times_B (B\\times_C C') \\to A$ such that $k\\circ (g'\\circ f') = (g\\circ f)\\circ i$.\r\n\r\nMoreover, if $P$ is any object such that there are morphisms $h_2 : P \\to C'$ and $h_1 : P \\to A$ such that $k\\circ h_2 = (g\\circ f)\\circ h_1$ then $h_2$ and $f\\circ h_1$ are maps to $C'$ and $B$ such that $k \\circ h_2 = g\\circ (f\\circ h_1)$. Thus by the universal property of the inner pullback there is a unique morphism $u : P \\to B\\times_C C'$ such that $g'\\circ u = h_2$ and $j\\circ u = f\\circ h_1$.\r\n\r\nBut now by the universal property of the outer pullback, there exists a unique morphism $u' : P \\to A\\times_B (B\\times_C C')$ such that $f'\\circ u' = u$ and $i\\circ u' = h_1$.\r\n\r\nAs $A\\times_C C'$ has the properties of such an object $P$, with $h_1$ and $h_2$ its projections, there is therefore a unique morphism $u'$ from $A\\times_C C'$ to $A\\times_B (B\\times_C C')$ with the stated properties.\r\n\r\nBut similarly, as $k\\circ(g'\\circ f') = (g\\circ f)\\circ i$, by the universal property of $A\\times_C C'$ there is a unique morphism $v : A\\times_B(B\\times_C C') \\to A\\times_C C'$ such that $h_2\\circ v = g'\\circ f'$ and $h_1\\circ v = i$.\r\n\r\nNow we easily check that $h_2 = h_2\\circ v\\circ u'$ and $h_1 = h_1\\circ v \\circ u'$. Thus by the uniqueness in the universal property of $A\\times_C C'$ we have $v\\circ u' =$ id$_{A\\times_C C'}$. It's similarly easy to check that $u'\\circ v =$ id$_{A\\times_B(B\\times_C C')}$. Thus the result follows. QED. \r\n\r\n\\textbf{12. We have that $X\\times_X Y \\cong Y$ where the morphism from $X$ to $X$ is the identity morphism.}\r\n\r\nLet $f : Y \\to X$ be any morphism. It is clear that id$_Y$ and $f : Y \\to X$ satisfy id$_X\\circ f = f\\circ$ id$_Y$.\r\n\r\nMoreover, if $P$ is any other object with morphisms $h_1 : P \\to X$ and $h_2 : P \\to Y$ such that id$_X\\circ h_1 = f\\circ h_2$ then $h_1 = f\\circ h_2$.\r\n\r\nClearly $h_2$ is the unique morphism from $P \\to Y$ such that $h_2 =$ id$_Y\\circ h_2$ and $f\\circ h_2 = h_1$. Thus $Y$ is a pullback of id$_X$ and $f$ and thus the required isomorphism exists. QED. \r\n\r\n\\textbf{13. We have that $X\\times_Y(Y\\times_Z Z') \\cong (X\\times_Y Y)\\times_Z Z'$.}\r\n\r\nThis follows from the previous two results. QED.\r\n\r\n\\textbf{14. The pushout of two morphisms $f : Z \\to X$ and $g : Z \\to Y$ in a category is an object $P$ and two morphisms $p_1 : X \\to P$ and $p_2 : Y \\to P$ such that $p_1\\circ f = p_2\\circ g$ and such that for any other object $Q$ with morphisms $q_1 : X \\to Q$ and $q_2 : Y \\to Q$ such that $q_1\\circ f = q_2\\circ g$ there exists a unique morphism $u : P \\to Q$ such that $u\\circ p_1 = q_1$ and $u\\circ p_2 = q_2$.}\r\n\r\nWe denote the pushout of $f : Z \\to X$ and $g : Z \\to Y$ by $X\\cup_Z Y$.\r\n\r\n\\textbf{15. A pushout, when it exists, is unique up to unique isomorphism.}\r\n\r\nThis follows from a pushout being defined by a universal property. QED.\r\n\r\n\\textbf{16. In a category with initial object $I$ the pushout of morphisms $f : I \\to X$ and $g : I \\to Y$ is the ordinary coproduct $X\\oplus Y$.}\r\n\r\nLet $P$ be the coproduct $X\\oplus Y$ with coprojection maps $\\pi_1 : X \\to P$ and $\\pi_2 : Y \\to P$.\r\n\r\nWe have that $\\pi_1\\circ f = \\pi_2\\circ g$ since both are the unique morphism from $I$ to $P$.\r\n\r\nNow given any other object $Q$ with morphisms $p_1 : X \\to Q$ and $p_2 : Y \\to Q$ with $p_1\\circ f = p_2\\circ g$ there exists a unique morphism $u : P \\to Q$ such that $u\\circ \\pi_1 = p_1$ and $u\\circ \\pi_2 = p_2$ by the universal property of the coproduct. QED.\r\n\r\n
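In the category of sets the initial object is the empty set, so the theorem says that the pushout of the unique maps $\\emptyset \\to X$ and $\\emptyset \\to Y$ is the disjoint union $X\\coprod Y$: with nothing to glue along, the coproduct itself already satisfies the universal property of the pushout.\r\n\r\n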
\\textbf{17. Let $I$ consist of three elements $i$, $j$ and $k$ with $i \\leq j$ and $i \\leq k$, i.e. not a directed set. The direct limit of a direct system of objects $X_i$ over $I$ in a preabelian category is the pushout of $f_{ij}$ and $f_{ik}$.}\r\n\r\nThe direct limit is an object $P$ with maps $f_i : X_i \\to P$, $f_j : X_j \\to P$ and $f_k : X_k \\to P$ such that $f_i = f_j\\circ f_{ij}$ and $f_i = f_k\\circ f_{ik}$. It is also universal with respect to such triples of maps, which is precisely the defining property of the pushout of $f_{ij}$ and $f_{ik}$. QED.\r\n\r\n\\textbf{18. Suppose $f : Z \\to X$ and $g : Z \\to Y$ are maps between sets. Define an equivalence relation $\\sim$ on $X\\coprod Y$ to be the smallest equivalence relation such that $(i_1\\circ f)(z) \\sim (i_2\\circ g)(z)$ for all $z \\in Z$ where $i_1 : X \\to X\\coprod Y$ and $i_2 : Y \\to X\\coprod Y$ are the coprojection maps. Then $P = (X\\coprod Y)/\\sim$ together with the morphisms from $X$ and $Y$ to $P$ induced by $i_1$ and $i_2$ gives the pushout of $f$ and $g$.}\r\n\r\nLet us write $p_1$ and $p_2$ for the morphisms induced by $i_1$ and $i_2$. Then by construction $p_1\\circ f = p_2\\circ g$.\r\n\r\nLet $Q$ be any other object with morphisms $q_1 : X \\to Q$ and $q_2 : Y \\to Q$ such that $q_1\\circ f = q_2\\circ g$. Then there exists a unique morphism $u : X\\coprod Y \\to Q$ such that $q_1 = u\\circ i_1$ and $q_2 = u\\circ i_2$ by the universal property of the coproduct.\r\n\r\nBut since $q_1\\circ f = q_2\\circ g$ we have $u\\circ i_1\\circ f = u\\circ i_2\\circ g$. Thus $u$ maps equivalent elements to the same element. Thus it induces a map $u' : (X\\coprod Y)/\\sim \\to Q$ such that $q_1 = u'\\circ p_1$ and $q_2 = u'\\circ p_2$.\r\n\r\nSince any such map $u'$ lifts, by composition with the quotient map, to a map $u$ on $X\\coprod Y$ with the given properties, and since any such $u$ is unique, clearly $u'$ is unique. QED.\r\n\r\n\\textbf{19. The pushout of morphisms $\\phi : M \\to A$ and $\\psi : M \\to B$ in the category of $R$-modules is the quotient module $X = (A\\oplus B)/Y$ where $Y = \\{(\\phi(m), - \\psi(m)) \\;|\\; m \\in M\\}$.}\r\n\r\nNote $(0, 0) \\in Y$. Also if $(\\phi(m_1), -\\psi(m_1))$ and $(\\phi(m_2), -\\psi(m_2))$ are in $Y$ then their sum is $(\\phi(m_1 + m_2), -\\psi(m_1 + m_2)) \\in Y$. Similarly if $(\\phi(m), -\\psi(m)) \\in Y$ and $r \\in R$ then $r(\\phi(m), -\\psi(m)) = (\\phi(rm), -\\psi(rm)) \\in Y$. Thus $Y$ is a submodule of $A\\oplus B$.\r\n\r\nWe can define $\\sigma_1 : A \\to X$ and $\\sigma_2 : B \\to X$ by $\\sigma_1(a) = (a, 0) + Y$ and $\\sigma_2(b) = (0, b) + Y$. \r\n\r\nFor any $m \\in M$ we have $(\\phi(m), 0) + Y = (\\phi(m), 0) - (\\phi(m), -\\psi(m)) + Y = (0, \\psi(m)) + Y$. Thus $\\sigma_1\\circ \\phi = \\sigma_2\\circ \\psi$. \r\n\r\nNow suppose we have an $R$-module $Z$ with $R$-module homomorphisms $\\alpha_1 : A \\to Z$ and $\\alpha_2 : B \\to Z$ with $\\alpha_1\\circ \\phi = \\alpha_2\\circ \\psi$. Define $\\theta : X \\to Z$ by $\\theta((a, b) + Y) = \\alpha_1(a) + \\alpha_2(b)$.\r\n\r\nWe first check that $\\theta$ is well-defined. Suppose that $(a_1, b_1) - (a_2, b_2) \\in Y$. Then $a_1 - a_2 = \\phi(m)$ and $b_1 - b_2 = -\\psi(m)$ for some $m \\in M$. Applying $\\alpha_1$ and $\\alpha_2$ we have that $\\alpha_1(a_1) - \\alpha_1(a_2) = (\\alpha_1\\circ \\phi)(m) = (\\alpha_2\\circ \\psi)(m) = \\alpha_2(b_2) - \\alpha_2(b_1)$. Thus $\\alpha_1(a_1) + \\alpha_2(b_1) = \\alpha_1(a_2) + \\alpha_2(b_2)$, i.e. $\\theta((a_1, b_1) + Y) = \\theta((a_2, b_2) + Y)$.\r\n\r\nClearly $\\theta$ is a module homomorphism. Moreover we see that $\\theta\\circ \\sigma_1 = \\alpha_1$ and $\\theta\\circ \\sigma_2 = \\alpha_2$.\r\n\r\nUniqueness follows by the usual arguments. QED.\r\n\r\n
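As a special case, push out $\\phi : M \\to A$ along the zero map $\\psi : M \\to 0$. The construction gives $X = (A\\oplus 0)/\\{(\\phi(m), 0)\\} \\cong A/\\mbox{im}(\\phi)$, i.e. the pushout of a map along $M \\to 0$ is its cokernel. For instance the pushout of $\\Z \\overset{n}{\\rightarrow} \\Z$ and $\\Z \\to 0$ is $\\Z/n\\Z$.\r\n\r\n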
\\textbf{20. Whenever the pushout $X\\cup_Z Y$ of $f : Z \\to X$ and $g : Z \\to Y$ exists, so does $Y\\cup_Z X$ and they are isomorphic.}\r\n\r\nThis follows from the symmetry of the definition and the fact that the pushout is unique up to unique isomorphism. QED.\r\n\r\n\\textbf{21. Isomorphisms push out to isomorphisms.}\r\n\r\nThe proof is dual to the proof for pullbacks. QED.\r\n\r\n\\textbf{22. Epimorphisms push out to epimorphisms.}\r\n\r\nThe proof is the dual of the proof for the pullbacks of monomorphisms. QED.\r\n\r\n\\textbf{23. Sections push out to sections.}\r\n\r\nThe argument is dual to that for pullbacks of retractions. QED.\r\n\r\n\\textbf{24. If $f : B \\to A$, $g : C \\to B$ and $k : C \\to C'$ are morphisms then $A\\cup_B (B\\cup_C C') \\cong A\\cup_C C'$.}\r\n\r\nThe proof is dual to the case for pullbacks. QED.\r\n\r\n\\textbf{25. We have that $X\\cup_X Y \\cong Y$ where the morphism from $X$ to $X$ is the identity morphism.}\r\n\r\nThe argument is dual to the case for the pullback. QED.\r\n\r\n\\textbf{26. We have that $X\\cup_Y(Y\\cup_Z Z') \\cong (X\\cup_Y Y)\\cup_Z Z'$.}\r\n\r\nThis follows from the previous two results. QED.\r\n\r\n\\section{Abelian categories}\r\n\r\n\\textbf{1. A normal monomorphism is one that is a kernel of some morphism.}\r\n\r\n\\textbf{2. A normal epimorphism is one that is a cokernel of some morphism.}\r\n\r\n\\textbf{3. An abelian category is a preabelian category for which all monomorphisms and epimorphisms are normal.}\r\n\r\n\\textbf{4. The category of $R$-modules over a commutative ring $R$ is an abelian category.}\r\n\r\nWe have formerly seen that it is a preabelian category.\r\n\r\nA morphism is a monomorphism in the category of $R$-modules iff it is an injective homomorphism. Suppose $f : K \\to X$ is an injective homomorphism. We will show that $f$ is a kernel of some homomorphism $g$.\r\n\r\nWe let $N$ be the image of $f$ in $X$. It is a submodule of $X$. Then we let $g : X \\to X/N$ be the natural quotient map. It is an $R$-module homomorphism and clearly $g\\circ f = 0$. Moreover, if $h : W \\to X$ is any homomorphism with $g\\circ h = 0$ then the image of $h$ lies in $N$, and as $f$ maps $K$ bijectively onto $N$ there is a unique homomorphism $h' : W \\to K$ with $f\\circ h' = h$. Thus $f$ is a kernel of $g$ and monomorphisms are normal.\r\n\r\nA morphism is an epimorphism in the category of $R$-modules iff it is a surjective homomorphism. Suppose $f : X \\to Q$ is a surjective homomorphism. We will show that $f$ is a cokernel of some homomorphism $g$.\r\n\r\nWe let $K$ be the kernel of $f$ and let $g : K \\to X$ be the inclusion map. It is an $R$-module homomorphism and clearly $f\\circ g = 0$. Moreover, if $h : X \\to W$ is any homomorphism with $h\\circ g = 0$ then $h$ vanishes on $K$, so by the first isomorphism theorem $h$ factors uniquely through $X/K \\cong Q$, i.e. there is a unique homomorphism $h' : Q \\to W$ with $h'\\circ f = h$. Thus $f$ is a cokernel of $g$ and epimorphisms are normal. QED.\r\n\r\n\\textbf{5. The category of abelian groups is an abelian category.}\r\n\r\nAn abelian group $M$ can be made into a $\\Z$-module by defining $rx = (r - 1)x + x$ for $0 < r \\in \\Z$, $0x = 0$ and $rx = (r + 1)x - x$ for $0 > r \\in \\Z$.\r\n\r\nBecause scalar multiplication can be defined in terms of addition, an abelian group homomorphism is automatically a $\\Z$-module homomorphism. Thus the category of abelian groups is the same as the category of $\\Z$-modules. The latter is an abelian category. QED.\r\n\r\n
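Concretely, in the category of abelian groups the inclusion $f : n\\Z \\to \\Z$ is the kernel of the quotient map $g : \\Z \\to \\Z/n\\Z$, and $g$ is in turn the cokernel of $f$, exactly as in the proof above.\r\n\r\n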
\\textbf{6. The categories of finitely generated abelian groups and finite abelian groups are abelian categories.}\r\n\r\nIt suffices to show that all the objects that must exist in an abelian category belong to the respective categories.\r\n\r\nClearly the zero object, the trivial group containing only zero, is both finite and finitely generated.\r\n\r\nFinitary products and coproducts agree in $\\Z$-modules and are given by the cartesian product. Clearly the finitary products of finitely generated or finite groups are finitely generated or finite respectively.\r\n\r\nIf $f : G \\to H$ is a homomorphism of finitely generated abelian groups then its kernel is a subgroup of $G$. Since $\\Z$ is a noetherian ring, a finitely generated abelian group is a noetherian $\\Z$-module, and so every subgroup of it is finitely generated.\r\n\r\nThus the kernel of a homomorphism of finitely generated abelian groups is finitely generated.\r\n\r\nIt is obvious that the kernel of a homomorphism of finite abelian groups is finite.\r\n\r\nIt is easy to see that the cokernel of a homomorphism $f : G \\to H$ of finitely generated abelian groups is generated by $\\{\\alpha_i + N\\}$ where $H$ is generated by $\\{\\alpha_i\\}$ and $N$ is the image of $f$.\r\n\r\nThus cokernels of homomorphisms of finitely generated abelian groups are finitely generated.\r\n\r\nIt is obvious that cokernels of homomorphisms of finite abelian groups are finite.\r\n\r\nThus we have that both categories are abelian categories. QED.\r\n\r\n\\textbf{7. The category of finitely generated $R$-modules over a noetherian ring is abelian.}\r\n\r\nIf $R$ is a noetherian ring then every finitely generated $R$-module is noetherian. Submodules and quotient modules of a noetherian module are noetherian $R$-modules and thus finitely generated.\r\n\r\nFrom these facts it is easy to see that the zero module, all finitary products, kernels and cokernels are finitely generated $R$-modules in the category of finitely generated $R$-modules, for such an $R$.\r\n\r\nThus the category is an abelian category. QED.\r\n\r\n\\textbf{8. The category of noetherian modules over a commutative ring $R$ is abelian.}\r\n\r\nThe proof is almost identical to that of finitely generated modules over a noetherian ring. QED.\r\n\r\n\\textbf{9. The category of vector spaces over a field is abelian.}\r\n\r\nThis is a special case of the category of $R$-modules, with $R$ the field. QED.\r\n\r\n\\textbf{10. The image of a morphism $f$ in a category with kernels and cokernels is the kernel of the cokernel of $f$.}\r\n\r\n\\textbf{11. The coimage of a morphism $f$ in a category with kernels and cokernels is the cokernel of the kernel of $f$.}\r\n\r\n\\textbf{12. In an abelian category an epimorphism is the cokernel of its kernel.}\r\n\r\nLet $e : B \\to C$ be an epimorphism. Let $K, k$ be the kernel of $e$ and let $f : A \\to B$ be any morphism such that $e = \\mbox{coker}(f)$ (such an $f$ exists since $e$ is an epimorphism and hence normal).\r\n\r\nAs $e\\circ f = 0$ there exists a unique morphism $f' : A \\to K$ such that $f = k\\circ f'$ by the universal property of the kernel.\r\n\r\nLet $y$ be any morphism out of $B$ such that $y\\circ k = 0$. Then $0 = y\\circ k\\circ f' = y\\circ f$. Thus there exists a unique morphism $y' : C \\to Y$ such that $y = y'\\circ e$ by definition of the cokernel of $f$. \r\n\r\nBut this is precisely the universal property we need to show that $e$ is a cokernel of $k$. QED.\r\n\r\n
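For a concrete instance in vector spaces, let $e : k^3 \\to k^2$ be the projection onto the first two coordinates. Its kernel is the inclusion of the third coordinate axis, and $e$ is precisely the cokernel of that inclusion, since $k^3/(0\\oplus 0\\oplus k) \\cong k^2$ via the first two coordinates.\r\n\r\n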
\\textbf{13. In an abelian category, a monomorphism is a kernel of its cokernel.}\r\n\r\nThe argument is dual to that of epimorphisms. QED.\r\n\r\n\\textbf{14. In an abelian category, the kernel of a monomorphism $f : A \\to B$ is the unique morphism $u : 0 \\to A$.}\r\n\r\nThe composite $f\\circ u$ is the unique zero morphism from $0$ to $B$, i.e. $f\\circ u = 0$.\r\n\r\nLet $g : X \\to A$ be any other morphism such that $f\\circ g = 0$. \r\n\r\nThen $f\\circ g = f\\circ 0$. But $f$ is a monomorphism and so $g = 0$. \r\n\r\nThis means that $g$ factors uniquely through the zero object $0$, which is precisely the condition required for $0$ to be the kernel of $f$. QED.\r\n\r\n\\textbf{15. In an abelian category, the cokernel of an epimorphism $f : A \\to B$ is the unique morphism $u : B \\to 0$.}\r\n\r\nThis is dual to the case for monomorphisms. QED.\r\n\r\n\\textbf{16. In an abelian category, a morphism that is a monomorphism and an epimorphism is an isomorphism.}\r\n\r\nSuppose that $g : A \\to B$ is a monomorphism and an epimorphism.\r\n\r\nThen its kernel is the unique morphism $u : 0 \\to A$. Moreover, as epimorphisms are cokernels of their kernels, $g$ is the cokernel of $u$. \r\n\r\nThe composition id$_A\\circ u$ is zero, thus by the definition of cokernel there is a unique morphism $h : B \\to A$ such that id$_A = h\\circ g$.\r\n\r\nBut then $g\\circ h\\circ g = g\\circ$ id$_A =$ id$_B\\circ g$. As $g$ is an epimorphism this implies that $g\\circ h =$ id$_B$.\r\n\r\nIn other words, we have shown that $h$ is an inverse of $g$, i.e. $g$ is an isomorphism. QED.\r\n\r\n\\textbf{17. In an abelian category the coimage and image of a morphism $f : X \\to Y$ are isomorphic.}\r\n\r\nLet $U, u$ be a coimage of $f$, i.e. a cokernel of the kernel $K, k$ of $f$. Then $u$ is an epimorphism since it is a cokernel.\r\n\r\nSimilarly, let $V, v$ be an image of $f$, i.e. a kernel of the cokernel $C, c$ of $f$. Then $v$ is a monomorphism since it is a kernel.\r\n\r\nWe have that $f\\circ k = 0$ by definition of the kernel. Thus since $u$ is a cokernel there is a unique map $\\psi : U \\to Y$ such that $f = \\psi\\circ u$.\r\n\r\nSince $c\\circ f = 0$ by definition of the cokernel, we have $c\\circ \\psi\\circ u = 0$. But since $u$ is an epimorphism we have that $c\\circ \\psi = 0$.\r\n\r\nBut then by the definition of the kernel of $c$ there exists a unique morphism $\\sigma : U \\to V$ such that $\\psi = v\\circ \\sigma$.\r\n\r\nBy a dual argument there exists a morphism $\\phi : X \\to V$ such that $v\\circ \\phi = f$ and there exists a unique morphism $\\sigma' : U \\to V$ such that $\\sigma'\\circ u = \\phi$.\r\n\r\nBut now $f = \\psi\\circ u = v\\circ \\sigma\\circ u$. Similarly $f = v\\circ \\phi = v\\circ \\sigma'\\circ u$. Thus $v\\circ \\sigma\\circ u = v\\circ \\sigma'\\circ u$. However, $u$ is an epimorphism, thus $v\\circ \\sigma = v\\circ \\sigma'$. And $v$ is a monomorphism and so $\\sigma = \\sigma'$.\r\n\r\nWe will now show that $\\psi$ is a monomorphism.\r\n\r\nLet $x : W \\to U$ be such that $\\psi\\circ x = 0$. Let $Q, q$ be the cokernel of $x$. By the universal property of cokernels, there is a unique morphism $j : Q \\to Y$ such that $\\psi = j\\circ q$.\r\n\r\nSince $q$ and $u$ are epimorphisms, so is $q\\circ u$, and hence, epimorphisms being normal, there is a morphism $h : H \\to X$ such that $q\\circ u$ is the cokernel of $h$.\r\n\r\nNow $f\\circ h = \\psi\\circ u\\circ h = j\\circ q\\circ u\\circ h = 0$. 
Thus by the universal property of a kernel, there exists a unique morphism $h'$ such that $h = k\\circ h'$.\r\n\r\nThus $u\\circ h = u\\circ k\\circ h' = 0$ since $u$ is the cokernel of $k$.\r\n\r\nThus, since $q\\circ u$ is the cokernel of $h$ and $u\\circ h = 0$, there exists $u'$ such that $u = u'\\circ (q\\circ u)$. But $u$ is an epimorphism and so $u'\\circ q =$ id$_U$. Thus $q$ is a monomorphism.\r\n\r\nThus as $q\\circ x = 0$ by definition, $x = 0$. Thus we have shown that $\\psi\\circ x = 0$ implies $x = 0$. In particular, if $x = r - s$ then $\\psi\\circ r = \\psi\\circ s$ implies $r = s$, i.e. $\\psi$ is a monomorphism.\r\n\r\nDually we can show that $\\phi$ is an epimorphism.\r\n\r\nThis in turn implies that $\\sigma$ is both an epimorphism and a monomorphism: $\\psi = v\\circ \\sigma$ is a monomorphism, hence so is $\\sigma$, while $\\phi = \\sigma'\\circ u = \\sigma\\circ u$ is an epimorphism, hence so is $\\sigma$. This implies that $\\sigma$ is an isomorphism. QED.\r\n\r\n\\textbf{18. In an abelian category every morphism $f$ can be written as the composition $g\\circ h$ of an epimorphism $h$ followed by a monomorphism $g$.}\r\n\r\nWe have that $f = v\\circ \\phi$ in the notation of the proof of the previous theorem. And we have shown that $v$ is a monomorphism and $\\phi$ is an epimorphism. QED. \r\n\r\n\\textbf{19. In the category of $R$-modules over a commutative ring, if $f : X \\to Y$ is an $R$-module homomorphism, then $X/\\ker(f) \\cong$ im$(f)$.}\r\n\r\nThis follows from the fact that the coimage and image of a morphism are isomorphic in an abelian category. QED.\r\n\r\n\\section{Exact sequences}\r\n\r\n\\textbf{1. In an abelian category, if $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C$ is a sequence of morphisms with $g\\circ f = 0$, there is a natural map from im$(f)$ to ker$(g)$.}\r\n\r\nRecall that the map $f$ factors as $v\\circ \\phi$ via im$(f)$.\r\n\r\nLet $k, K$ be the kernel of $g$. Since $g\\circ v\\circ \\phi = g\\circ f = 0$ and $\\phi$ is an epimorphism, we have $g\\circ v = 0$. Then by the universal property of the kernel of $g$, there exists a unique morphism $w :$ im$(f) \\to \\ker(g)$ such that $v = k\\circ w$. The morphism $w$ is the required natural map from im$(f)$ to ker$(g)$. QED.\r\n\r\n\\textbf{2. An exact sequence in an abelian category is a sequence of morphisms $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C$ such that $g\\circ f = 0$ and such that the natural map from im$(f)$ to ker$(g)$ is an isomorphism.}\r\n\r\n\\textbf{3. In the category of $R$-modules over a commutative ring $R$, the sequence $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C$ is exact iff im$(f) = \\ker(g)$.}\r\n\r\nFrom $g\\circ f = 0$ we have that im$(f) \\subseteq \\ker(g)$. The natural map from im$(f)$ to $\\ker(g)$ is the inclusion, so if it is an isomorphism we have that im$(f) = \\ker(g)$. \r\n\r\nThe converse is clear. QED.\r\n\r\n\\textbf{4. In an abelian category, a morphism $f : X \\to Y$ is a monomorphism iff $\\ker(f) = 0$.}\r\n\r\nWe already showed that if $f$ is a monomorphism then the kernel of $f$ is the unique morphism $0 \\to X$.\r\n\r\nFor the converse, suppose that $\\ker(f) = 0$ with $k : 0 \\to X$ the unique map from $0$ to $X$. Now suppose that $h_1, h_2 : Z \\to X$ are such that $f\\circ h_1 = f\\circ h_2$. Then $f\\circ (h_1 - h_2) = 0$.\r\n\r\nThus there is a unique morphism $u : Z \\to 0$ such that $h_1 - h_2 = k\\circ u$, i.e. $h_1 - h_2 = 0$. Thus $h_1 = h_2$ and $f$ is a monomorphism. QED.\r\n\r\n\\textbf{5. In an abelian category, a morphism $f : X \\to Y$ is an epimorphism iff coker$(f) = 0$.}\r\n\r\nThis is dual to the case of a monomorphism. QED.\r\n\r\n
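For example, for the multiplication map $f : \\Z \\overset{2}{\\rightarrow} \\Z$ in the category of abelian groups we have $\\ker(f) = 0$, so $f$ is a monomorphism, while coker$(f) = \\Z/2\\Z \\neq 0$, so $f$ is not an epimorphism.\r\n\r\n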
\\textbf{6. If $0 \\overset{f}{\\rightarrow} A \\overset{g}{\\rightarrow} B$ is exact then $g$ is a monomorphism and conversely.}\r\n\r\nIt is easy to check that coker$(f) =$ id$_A$ and hence that im$(f) = \\ker($id$_A) = 0$. Since $g\\circ f = 0$ automatically, the sequence is exact iff $\\ker(g) \\cong$ im$(f) = 0$.\r\n\r\nBut $g$ is a monomorphism iff $\\ker(g) = 0$. Thus the sequence is exact iff $g$ is a monomorphism. QED.\r\n\r\n\\textbf{7. If $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} 0$ is exact then $f$ is an epimorphism and conversely.}\r\n\r\nThis is dual to the result for monomorphisms. QED.\r\n\r\n\\textbf{8. A short exact sequence is a sequence of the form $0 \\rightarrow A \\rightarrow B \\rightarrow C \\rightarrow 0$ which is exact at $A$, $B$ and $C$.}\r\n\r\n\\textbf{9. In an abelian category if $0 \\rightarrow A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C \\rightarrow 0$ is a short exact sequence then $A, f$ is the kernel of $g$ and $C, g$ is the cokernel of $f$.}\r\n\r\nRecall that the map $f$ factors as $v\\circ \\phi$ via im$(f)$.\r\n\r\nLet $k, K$ be the kernel of $g$. Recall that the natural map $w :$ im$(f) \\to \\ker(g)$ is an isomorphism since the sequence is exact.\r\n\r\nWe will show that $\\phi$ is an isomorphism. We know from previously that it is an epimorphism, so it suffices to show it is a monomorphism.\r\n\r\nBut $f$ is a monomorphism and so $\\phi$ must be also. Thus $f = k\\circ (w\\circ \\phi)$ with $w\\circ \\phi$ an isomorphism, and a kernel composed with an isomorphism is again a kernel. Hence the kernel of $g$ is the pair $A, f$.\r\n\r\nThe proof that $C, g$ is the cokernel of $f$ is dual to this proof. QED.\r\n\r\n\\textbf{10. In an abelian category $0 \\rightarrow A \\overset{f}{\\rightarrow} B \\rightarrow 0$ is an exact sequence iff $f$ is an isomorphism.}\r\n\r\nThe sequence is exact iff $f$ is a monomorphism and an epimorphism, which in an abelian category is the case iff $f$ is an isomorphism. QED.\r\n\r\n\\textbf{11. A (covariant) functor $F$ from a category $C$ to a category $D$ is a mapping from $C$ to $D$ such that for every object $X$ of $C$ we have that $F(X)$ is an object of $D$ and such that for each morphism $f : X \\to Y$ in $C$ there is a morphism $F(f) : F(X) \\to F(Y)$ in $D$ such that $F($id$_X) =$ id$_{F(X)}$ for every object $X$ of $C$ and $F(g\\circ f) = F(g)\\circ F(f)$ for all morphisms $f : X \\to Y$ and $g : Y \\to Z$ in $C$.}\r\n\r\n\\textbf{12. A contravariant functor from $C$ to $D$ is as per a covariant functor except that $F(f) : F(Y) \\to F(X)$ and $F(g\\circ f) = F(f)\\circ F(g)$.}\r\n\r\n\\textbf{13. An additive functor $F$ between preadditive categories $C$ and $D$ is a functor such that given objects $A$ and $B$ in $C$ the function Hom$(A, B) \\to$ Hom$(F(A), F(B))$ is a group homomorphism.}\r\n\r\nIn particular $F(0) = 0$.\r\n\r\n\\textbf{14. An $R$-linear functor between $R$-linear categories $C$ and $D$ is one such that given objects $A$ and $B$ in $C$ the function Hom$(A, B) \\to$ Hom$(F(A), F(B))$ is an $R$-linear map.}\r\n\r\n\\textbf{15. A left exact functor $F : C \\to D$ between abelian categories $C$ and $D$ is an additive functor such that for any short exact sequence of objects $0 \\to X \\to Y \\to Z \\to 0$ in $C$ we have that $0 \\to F(X) \\to F(Y) \\to F(Z)$ is exact in $D$.}\r\n\r\n\\textbf{16. A right exact functor $F : C \\to D$ between abelian categories $C$ and $D$ is an additive functor such that for any short exact sequence of objects $0 \\to X \\to Y \\to Z \\to 0$ in $C$ we have that $F(X) \\to F(Y) \\to F(Z) \\to 0$ is exact in $D$.}\r\n\r\n\\textbf{17. An exact functor $F : C \\to D$ between abelian categories $C$ and $D$ is an additive functor such that for any short exact sequence of objects $0 \\to X \\to Y \\to Z \\to 0$ in $C$ we have that $0 \\to F(X) \\to F(Y) \\to F(Z) \\to 0$ is exact in $D$.}\r\n\r\n
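To see that a left exact functor need not be exact, apply Hom$(\\Z/2\\Z, -)$ to the short exact sequence $0 \\rightarrow \\Z \\overset{2}{\\rightarrow} \\Z \\rightarrow \\Z/2\\Z \\rightarrow 0$ of abelian groups. Since Hom$(\\Z/2\\Z, \\Z) = 0$ and Hom$(\\Z/2\\Z, \\Z/2\\Z) \\cong \\Z/2\\Z$, the resulting sequence is $0 \\rightarrow 0 \\rightarrow 0 \\rightarrow \\Z/2\\Z$, which is exact, but the final map is not surjective: the identity of $\\Z/2\\Z$ does not lift to a homomorphism $\\Z/2\\Z \\to \\Z$.\r\n\r\n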
\\textbf{18. The functor $F_A(X) =$ Hom$(A, X)$ from an abelian category $C$ to the category of abelian groups is left exact.}\r\n\r\nSuppose $0 \\rightarrow X \\overset{f}{\\rightarrow} Y \\overset{g}{\\rightarrow} Z \\to 0$ is exact in the category $C$.\r\n\r\nFor the map $f : X \\to Y$ define $f_{\\star} :$ Hom$(A, X) \\to$ Hom$(A, Y)$ by $f_{\\star}(h) = f\\circ h$. Define $g_{\\star}$ similarly.\r\n\r\nConsider the sequence\r\n$$0 \\rightarrow \\mbox{Hom}(A, X) \\overset{f_{\\star}}{\\rightarrow} \\mbox{Hom}(A, Y) \\overset{g_{\\star}}{\\rightarrow} \\mbox{Hom}(A, Z).$$\r\n\r\nIf the first sequence is exact, then $f$ is a kernel of $g$. Thus if $h : A \\to Y$ is any morphism such that $g\\circ h = 0$ then there exists a unique morphism $s : A \\to X$ such that $f\\circ s = h$.\r\n\r\nIn other words, if $g_{\\star}(h) = 0$ then $h = f_{\\star}(s)$. Thus ker$(g_{\\star}) \\subseteq$ im$(f_{\\star})$ and $f_{\\star}$ is an injective map on account of the uniqueness of $s$.\r\n\r\nOn the other hand, suppose $h : A \\to Y$ is any morphism which factors through $f$, i.e. $h = f\\circ s = f_{\\star}(s)$. Then $$g_{\\star}(h) = g_{\\star}(f_{\\star}(s)) = (g\\circ f)_{\\star}(s) = 0_{\\star}(s) = 0\\circ s = 0.$$ \r\nIn other words, im$(f_{\\star}) \\subseteq$ ker$(g_{\\star})$.\r\n\r\nThus we have that the second sequence is exact at Hom$(A, X)$ since $f_{\\star}$ is injective and at Hom$(A, Y)$ since im$(f_{\\star}) =$ ker$(g_{\\star})$. QED.\r\n\r\n\\textbf{19. The contravariant functor $F^A(X) =$ Hom$(X, A)$ from an abelian category $C$ to the category of abelian groups is left exact: it sends a short exact sequence $0 \\rightarrow X \\rightarrow Y \\rightarrow Z \\rightarrow 0$ to an exact sequence $0 \\rightarrow$ Hom$(Z, A) \\rightarrow$ Hom$(Y, A) \\rightarrow$ Hom$(X, A)$.}\r\n\r\nThe argument is dual to that for the covariant Hom functor. QED.\r\n\r\n\\textbf{20. A split short exact sequence is a sequence of the form $0 \\rightarrow A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C \\to 0$ such that there exists a morphism $h : C \\to B$ such that $g\\circ h =$ id$_C$.}\r\n\r\n\\textbf{21. If $0 \\rightarrow A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C \\to 0$ is a short exact sequence then there exists a morphism $s : C \\to B$ such that $g\\circ s =$ id$_C$ iff there exists $r : B \\to A$ such that $B$ along with $r$, $g$, $f$ and $s$ is a biproduct of $A$ and $C$.}\r\n\r\nThe reverse direction is clear from the definition of a biproduct.\r\n\r\nFor the forward direction, consider the morphism $1_B - s\\circ g : B \\to B$. We see that $g\\circ (1_B - s\\circ g) = g - g\\circ s\\circ g = g - g = 0$.\r\n\r\nAs $f$ is the kernel of $g$, by the universal property of the kernel, there is a unique morphism $r : B \\to A$ such that $f\\circ r = 1_B - s\\circ g$.\r\n\r\nThus we have $f\\circ r + s\\circ g = 1_B$. We also have $g\\circ f = 0$ by exactness of the sequence and $g\\circ s = 1_C$.\r\n\r\nBut $f\\circ r\\circ f = (1_B - s\\circ g)\\circ f = f - 0 = f$. But $f$ is a monomorphism, thus $r\\circ f = 1_A$.\r\n\r\nWe also have $f\\circ r\\circ s = (1_B - s\\circ g)\\circ s = s - s\\circ g\\circ s = s - s = 0$. Thus $r\\circ s = 0$ as $f$ is a monomorphism.\r\n\r\nThus we have the four required relations for $B$ to be a biproduct. QED.\r\n\r\n
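For instance, $0 \\rightarrow \\Z \\overset{i_1}{\\rightarrow} \\Z\\oplus\\Z \\overset{\\pi_2}{\\rightarrow} \\Z \\rightarrow 0$ splits, with $s = i_2$ the second coprojection. By contrast, $0 \\rightarrow \\Z \\overset{2}{\\rightarrow} \\Z \\rightarrow \\Z/2\\Z \\rightarrow 0$ does not split: any homomorphism $s : \\Z/2\\Z \\to \\Z$ is zero because $\\Z$ is torsion-free, so $g\\circ s \\neq$ id$_{\\Z/2\\Z}$, and accordingly $\\Z$ is not a biproduct of $\\Z$ and $\\Z/2\\Z$.\r\n\r\n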
\\textbf{22. If $0 \\rightarrow A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C \\to 0$ is a short exact sequence then there exists a morphism $r : B \\to A$ such that $r\\circ f =$ id$_A$ iff there exists $s : C \\to B$ such that $B$ along with $r$, $g$, $f$ and $s$ is a biproduct of $A$ and $C$.}\r\n\r\nThis is dual to the previous theorem. QED.\r\n\r\n\\textbf{23. Suppose $X \\overset{f}{\\rightarrow} Y \\overset{g}{\\rightarrow} Z$ is a sequence in an abelian category with $g\\circ f = 0$. Then the sequence is exact iff for every morphism $h : W \\to Y$ with $g\\circ h = 0$ there exists an object $V$ with an epimorphism $k : V \\to W$ and a morphism $l : V \\to X$ such that $h\\circ k = f\\circ l$.}\r\n\r\nLet $i : \\ker(g) \\to Y$ be the kernel of $g$. Let $p : X \\to$ coim$(f)$ be the cokernel of the kernel of $f$.\r\n\r\nRecall that $f$ factors via its image as an epimorphism $i'$ followed by a monomorphism $j'$ say. As $g\\circ f = 0$ then because $i'$ is an epimorphism, we have $g\\circ j' = 0$. Thus by the universal property of the kernel of $g$ there is a canonical morphism $j :$ im$(f) \\to \\ker(g)$ such that $i\\circ j = j'$. Since $j'$ is a monomorphism, so is $j$.\r\n\r\nWe first prove that the forward implication of the theorem holds. Let $h : W \\to Y$ be any morphism with $g\\circ h = 0$. By the universal property of the kernel of $g$ there exists a morphism $c : W \\to \\ker(g)$ with $i\\circ c = h$.\r\n\r\nAs we are in an abelian category, we can identify coim$(f)$ and im$(f)$ via an isomorphism. Let $p' : X \\to$ im$(f)$ be the map $p$ composed with this isomorphism.\r\n\r\nLet $V = X\\times_{\\ker(g)} W$ be the pullback of $j\\circ p'$ along $c$, with projections $k : V \\to W$ and $l : V \\to X$. Then $c\\circ k = j\\circ p'\\circ l$. Then $h\\circ k = i\\circ c\\circ k = i\\circ j\\circ p'\\circ l = j'\\circ p'\\circ l = f\\circ l$.\r\n\r\nAs the sequence is exact by hypothesis, the natural map $j :$ im$(f) \\to \\ker(g)$ is an isomorphism, and so $j\\circ p'$ is an epimorphism.\r\n\r\nBut as epimorphisms pull back to epimorphisms, we have that $k$ is an epimorphism. This proves the forward direction of the theorem.\r\n\r\nFor the converse, as $g\\circ i = 0$, the hypothesis applied to $h = i$ gives an object $W$, an epimorphism $k : W \\to \\ker(g)$ and a morphism $l : W \\to X$ with $f\\circ l = i\\circ k$. Thus $i\\circ j\\circ p'\\circ l = f\\circ l = i\\circ k$. But $i$ is a monomorphism and so $j\\circ p'\\circ l = k$. But $k$ is an epimorphism and so $j$ is also an epimorphism. As $j$ is also a monomorphism, it is an isomorphism. But this implies that the sequence is exact. QED.\r\n\r\n
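In the category of $R$-modules this criterion is easy to unwind: given $h : W \\to Y$ with $g\\circ h = 0$, exactness gives im$(h) \\subseteq \\ker(g) =$ im$(f)$, so we may take $V = \\{(w, x) \\in W\\oplus X \\;|\\; h(w) = f(x)\\}$ with $k(w, x) = w$ and $l(w, x) = x$. Then $h\\circ k = f\\circ l$ by construction, and $k$ is surjective precisely because every $h(w)$ lies in im$(f)$.\r\n\r\n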
\\textbf{24. (Snake lemma) Suppose that $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C \\rightarrow 0$ and $0 \\rightarrow A' \\overset{f'}{\\rightarrow} B' \\overset{g'}{\\rightarrow} C'$ are exact sequences in an abelian category and suppose that $a : A \\to A'$, $b : B \\to B'$ and $c : C \\to C'$ are morphisms between the two sequences that make the diagram commute. Then there exists an exact sequence\r\n$$\\ker(a) \\rightarrow \\ker(b) \\rightarrow \\ker(c) \\overset{\\delta}{\\rightarrow} \\mbox{coker}(a) \\rightarrow \\mbox{coker}(b) \\rightarrow \\mbox{coker}(c).$$}\r\n\r\nLet us write $K_1 = \\ker(a)$, $K_2 = \\ker(b)$, $K_3 = \\ker(c)$ and $Q_1 = \\mbox{coker}(a)$, $Q_2 = \\mbox{coker}(b)$ and $Q_3 = \\mbox{coker}(c)$, with kernel morphisms $k_1, k_2, k_3$ and cokernel morphisms $q_1, q_2, q_3$.\r\n\r\nWe will first construct the morphism $\\delta$. Let $P = B\\times_C K_3$ with maps $p : P \\to K_3$ and $q : P \\to B$. As pullbacks preserve monomorphisms and epimorphisms, $q$ is a monomorphism and $p$ is an epimorphism.\r\n\r\nAlso let $T = B' \\coprod_{A'} Q_1$ with maps $r : B' \\to T$ and $t : Q_1 \\to T$. As pushouts preserve monomorphisms and epimorphisms, $r$ is an epimorphism and $t$ is a monomorphism.\r\n\r\nLet $e: E \\to P$ be the kernel of $p$ and $d : T \\to D$ be the cokernel of $t$.\r\n\r\nAs any epimorphism is the cokernel of its kernel, we have that $p$ is the cokernel of $e$. Likewise $t$ is the kernel of $d$.\r\n\r\nAs $g'\\circ b\\circ q = c\\circ g\\circ q = c\\circ k_3\\circ p = 0$ then because of exactness at $B'$ there is a morphism $u : P \\to A'$ such that $f'\\circ u = b\\circ q$.\r\n\r\nWe factor $f$ through its image, via an epimorphism followed by a monomorphism. But since the diagram is exact at $B$ the image of $f$ is isomorphic to the kernel of $g$.\r\n\r\nIn fact $q\\circ e$ is a kernel of $g$: if $m$ is any morphism with $g\\circ m = 0$ then, as $g\\circ m = k_3\\circ 0$, the universal property of the pullback $P$ gives a unique morphism $n$ with $q\\circ n = m$ and $p\\circ n = 0$, and $n$ in turn factors uniquely through $e = \\ker(p)$; moreover $q\\circ e$ is a monomorphism.\r\n\r\nSince the image of $f$ is a kernel of $g$ by exactness at $B$, there is therefore an epimorphism $v : A \\to E$, namely the epimorphism of $f$ onto its image composed with the induced isomorphism from im$(f)$ to $E$, such that $q\\circ e\\circ v = f$.\r\n\r\nThus $f'\\circ u\\circ e\\circ v = b\\circ q\\circ e\\circ v = b\\circ f = f'\\circ a$. As $f'$ is a monomorphism this implies that $u\\circ e\\circ v = a$.\r\n\r\nThus $q_1\\circ u\\circ e\\circ v = q_1\\circ a = 0$. And as $v$ is an epimorphism we have $q_1 \\circ u\\circ e = 0$. Thus, since $p$ is the cokernel of $e$, there exists a unique morphism $\\delta : K_3 \\to Q_1$ such that $\\delta\\circ p = q_1\\circ u$.\r\n\r\nWe have that $t\\circ \\delta\\circ p = t\\circ q_1\\circ u = r\\circ f'\\circ u = r\\circ b\\circ q$.\r\n\r\nAs $c\\circ g\\circ k_2 = g'\\circ b\\circ k_2 = 0$, by the universal property of the kernel $k_3$ there exists a morphism $\\bar{g} : K_2 \\to K_3$ such that $k_3\\circ \\bar{g} = g\\circ k_2$. By a similar argument, there exists a morphism $\\bar{f} : K_1 \\to K_2$ such that $k_2\\circ \\bar{f} = f\\circ k_1$.\r\n\r\nDually, there exist morphisms $\\hat{g} : Q_2 \\to Q_3$ and $\\hat{f}: Q_1 \\to Q_2$ such that $q_3\\circ g' = \\hat{g}\\circ q_2$ and $q_2\\circ f' = \\hat{f}\\circ q_1$.\r\n\r\nWe first show exactness at $K_3$.\r\n\r\nSince $k_3\\circ \\bar{g} = g\\circ k_2$, by the universal property of the pullback $P$ we have that there is a morphism $z : K_2 \\to P$ such that $q\\circ z = k_2$ and $p\\circ z = \\bar{g}$.\r\n\r\nThus $t\\circ \\delta\\circ \\bar{g} = t\\circ \\delta\\circ p\\circ z = r\\circ b\\circ q\\circ z = r\\circ b\\circ k_2 = 0$. But as $t$ is a monomorphism, we have $\\delta\\circ \\bar{g} = 0$.\r\n\r\nWe can now demonstrate exactness at $K_3$ using the previous theorem.\r\n\r\nLet $x : R \\to K_3$ be any morphism such that $\\delta\\circ x = 0$. 
Applying the previous theorem to the exact sequence $P \\overset{p}{\\rightarrow} K_3 \\rightarrow 0$ and the morphism $x$ yields an object $S$, an epimorphism $m : S \\to R$ and a morphism $n : S \\to P$ with $p\\circ n = x\\circ m$.\r\n\r\nAs $q_1\\circ u\\circ n = \\delta\\circ p\\circ n = \\delta\\circ x\\circ m = 0$, applying the previous theorem to the exact sequence $A \\overset{a}{\\rightarrow} A' \\overset{q_1}{\\rightarrow} Q_1$ and the morphism $u\\circ n$ gives an object $T'$, an epimorphism $\\epsilon : T' \\to S$ and a morphism $\\zeta : T' \\to A$ such that $u\\circ n\\circ \\epsilon = a\\circ \\zeta$.\r\n\r\nWe have that $b\\circ q\\circ n\\circ \\epsilon = f'\\circ u\\circ n\\circ \\epsilon = f'\\circ a\\circ \\zeta = b\\circ f\\circ \\zeta$.\r\n\r\nWrite $\\eta = q\\circ n\\circ \\epsilon - f\\circ \\zeta$. Then $b\\circ \\eta = 0$. As $\\eta$ is a morphism from $T'$ to $B$ with $b\\circ \\eta = 0$, by the universal property of the kernel $k_2$ there exists a morphism $\\phi : T' \\to K_2$ with $\\eta = k_2\\circ \\phi$.\r\n\r\nWe have $k_3\\circ \\bar{g}\\circ \\phi = g\\circ k_2\\circ \\phi = g\\circ \\eta = g\\circ q\\circ n\\circ \\epsilon - g\\circ f\\circ \\zeta = k_3\\circ p\\circ n\\circ \\epsilon = k_3\\circ x\\circ m\\circ \\epsilon$.\r\n\r\nAs $k_3$ is a monomorphism, we have that $\\bar{g}\\circ \\phi = x\\circ m\\circ \\epsilon$. Thus, as $m\\circ \\epsilon$ is an epimorphism, the previous theorem implies that $K_2 \\overset{\\bar{g}}{\\rightarrow} K_3 \\overset{\\delta}{\\rightarrow} Q_1$ is exact.\r\n\r\nWe now show exactness at $K_2$.\r\n\r\nAs $k_3\\circ \\bar{g}\\circ \\bar{f} = g\\circ f\\circ k_1 = 0$ then as $k_3$ is a monomorphism we have $\\bar{g}\\circ \\bar{f} = 0$.\r\n\r\nLet $S$ be an object with a morphism $y : S \\to K_2$ with $\\bar{g}\\circ y = 0$. Then $g\\circ k_2\\circ y = k_3\\circ \\bar{g}\\circ y = 0$.\r\n\r\nBy the previous theorem applied to the exact sequence $A \\overset{f}{\\rightarrow} B \\overset{g}{\\rightarrow} C$ and the morphism $k_2\\circ y$, there is an object $T''$, an epimorphism $d' : T'' \\to S$ and a morphism $e' : T'' \\to A$ such that $k_2\\circ y\\circ d' = f\\circ e'$.\r\n\r\nThen $f'\\circ a\\circ e' = b\\circ f\\circ e' = b\\circ k_2\\circ y\\circ d' = 0$.\r\n\r\nAs $f'$ is a monomorphism we have that $a\\circ e' = 0$.\r\n\r\nThus by the universal property of the kernel $K_1$ we have that there exists a morphism $m' : T'' \\to K_1$ such that $k_1\\circ m' = e'$.\r\n\r\nThus $k_2\\circ \\bar{f}\\circ m' = f\\circ k_1\\circ m' = f\\circ e' = k_2\\circ y\\circ d'$. As $k_2$ is a monomorphism we have $\\bar{f}\\circ m' = y\\circ d'$.\r\n\r\nThus the previous theorem says that $K_1 \\overset{\\bar{f}}{\\rightarrow} K_2 \\overset{\\bar{g}}{\\rightarrow} K_3$ is exact.\r\n\r\nThe theorem now follows by a dual argument showing the sequence is exact at $Q_1$ and $Q_2$. QED.\r\n\r\n
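As a worked example in abelian groups, take both rows to be $0 \\rightarrow \\Z \\overset{2}{\\rightarrow} \\Z \\rightarrow \\Z/2\\Z \\rightarrow 0$ with $a$, $b$ and $c$ all multiplication by $2$. Then $\\ker(a) = \\ker(b) = 0$ and $\\ker(c) = \\Z/2\\Z$ (as $c = 0$ on $\\Z/2\\Z$), while all three cokernels are $\\Z/2\\Z$. Tracing the construction of $\\delta$: lift $\\bar{1} \\in \\ker(c)$ to $1 \\in B = \\Z$, apply $b$ to get $2 \\in B'$, and pull back along $f' = 2$ to get $1 \\in A'$, so $\\delta(\\bar{1}) = \\bar{1} \\in$ coker$(a)$. The resulting sequence $0 \\rightarrow 0 \\rightarrow \\Z/2\\Z \\overset{\\delta}{\\rightarrow} \\Z/2\\Z \\rightarrow \\Z/2\\Z \\rightarrow \\Z/2\\Z$ is exact: $\\delta$ is an isomorphism, the map coker$(a) \\to$ coker$(b)$ is induced by $f' = 2$ and is therefore zero, and coker$(b) \\to$ coker$(c)$ is an isomorphism.\r\n\r\n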
\\textbf{25. Let $0 \\rightarrow A_2 \\overset{f}{\\rightarrow} B_2 \\overset{g}{\\rightarrow} C_2 \\rightarrow 0$ and $0 \\rightarrow A_3 \\overset{f'}{\\rightarrow} B_3 \\overset{g'}{\\rightarrow} C_3 \\rightarrow 0$ be exact. Suppose that there is a third sequence $0 \\rightarrow A_1 \\overset{f''}{\\rightarrow} B_1 \\overset{g''}{\\rightarrow} C_1 \\rightarrow 0$ such that $0 \\rightarrow A_1 \\overset{a'}{\\rightarrow} A_2 \\overset{a}{\\rightarrow} A_3 \\rightarrow 0$ is exact, $0 \\rightarrow B_1 \\overset{b'}{\\rightarrow} B_2 \\overset{b}{\\rightarrow} B_3 \\rightarrow 0$ is exact and $0 \\rightarrow C_1 \\overset{c'}{\\rightarrow} C_2 \\overset{c}{\\rightarrow} C_3 \\rightarrow 0$ is exact and such that all squares commute. Then the third sequence above is also exact.}\r\n\r\nSince coker$(a) = 0$ as the first column is exact, and $A_1, a'$ is the kernel of $a$, $B_1, b'$ is the kernel of $b$ and $C_1, c'$ is the kernel of $c$ due to the exactness of the columns (the induced maps on these kernels agree with $f''$ and $g''$ since the squares commute and $b'$ and $c'$ are monomorphisms), the snake lemma tells us that $A_1 \\overset{f''}{\\rightarrow} B_1 \\overset{g''}{\\rightarrow} C_1 \\overset{\\delta}{\\rightarrow} 0$ is exact.\r\n\r\nIt remains only to show that $f''$ is a monomorphism. But $b'\\circ f'' = f\\circ a'$. As $f\\circ a'$ is a monomorphism, so is $f''$. QED.\r\n\r\n\\textbf{26. Let $0 \\rightarrow A_1 \\overset{f''}{\\rightarrow} B_1 \\overset{g''}{\\rightarrow} C_1 \\rightarrow 0$ and $0 \\rightarrow A_2 \\overset{f}{\\rightarrow} B_2 \\overset{g}{\\rightarrow} C_2 \\rightarrow 0$ be exact. Suppose that there is a third sequence $0 \\rightarrow A_3 \\overset{f'}{\\rightarrow} B_3 \\overset{g'}{\\rightarrow} C_3 \\rightarrow 0$ such that $0 \\rightarrow A_1 \\overset{a'}{\\rightarrow} A_2 \\overset{a}{\\rightarrow} A_3 \\rightarrow 0$ is exact, $0 \\rightarrow B_1 \\overset{b'}{\\rightarrow} B_2 \\overset{b}{\\rightarrow} B_3 \\rightarrow 0$ is exact and $0 \\rightarrow C_1 \\overset{c'}{\\rightarrow} C_2 \\overset{c}{\\rightarrow} C_3 \\rightarrow 0$ is exact, and such that all squares commute. Then the third sequence above is also exact.}\r\n\r\nThe proof is the dual to the previous one. QED.\r\n\r\n\\textbf{27. Let $W \\rightarrow X \\rightarrow Y \\rightarrow Z$ and $W' \\rightarrow X' \\rightarrow Y' \\rightarrow Z'$ be exact sequences in an abelian category and let $\\alpha : W \\to W'$, $\\beta : X \\to X'$, $\\gamma : Y \\to Y'$ and $\\delta : Z \\to Z'$ be morphisms such that all the resulting squares commute. Then if $\\alpha$ and $\\gamma$ are surjective and $\\delta$ is injective then $\\beta$ is surjective.}\r\n\r\nWe will show that we can replace $W'$ by the image of the map $W' \\to X'$. Let us denote the map $W' \\to X'$ by $f$ and let $W'', f''$ be the image of $f$, so that $f = f''\\circ \\alpha'$ where $\\alpha' : W' \\to W''$ is the epimorphism arising from the factorisation of $f$. Since $f$ and $f''$ have the same image, the diagram is still exact at $X'$ when $W'$ and $f$ are replaced by $W''$ and $f''$.\r\n\r\nMoreover, $\\alpha'\\circ \\alpha$ is a surjective map from $W$ to $W''$ and clearly the resulting square with $W'$ replaced by $W''$ still commutes.\r\n\r\nIn other words, without loss of generality, by replacing $W'$ with $W''$ if necessary, we may assume that the map $W' \\to X'$ is injective.\r\n\r\nBy a similar argument we may replace $Z$ by the image of $Y \\to Z$. In other words, we may assume $Y \\to Z$ is surjective.\r\n\r\nLet $K_1$ be the kernel of $Y \\to Z$ and $K_2$ be the kernel of $Y' \\to Z'$.\r\n\r\nThen we have exact rows $K_1 \\to Y \\to Z \\to 0$ and $0 \\to K_2 \\to Y' \\to Z'$ and an induced morphism between $K_1$ and $K_2$ such that the diagram commutes.\r\n\r\nWe apply the snake lemma to this diagram. Since the kernel of $\\delta$ is $0$ and the cokernel of $\\gamma$ is $0$, the exact sequence of the snake lemma shows that the cokernel of $K_1 \\to K_2$ is $0$, i.e. that map is surjective.\r\n\r\nSimilarly we apply the snake lemma to the diagram with exact rows $W \\to X \\to K_1 \\to 0$ and $0 \\to W' \\to X' \\to K_2$.\r\n\r\nWe have that the cokernel of $\\alpha$ is $0$ and the cokernel of $K_1 \\to K_2$ is $0$. Thus the cokernel of $\\beta$ is $0$ and so $\\beta$ is surjective. QED.\r\n\r\n
\\textbf{28. Let $W \\rightarrow X \\rightarrow Y \\rightarrow Z$ and $W' \\rightarrow X' \\rightarrow Y' \\rightarrow Z'$ be exact sequences in an abelian category and let $\\alpha : W \\to W'$, $\\beta : X \\to X'$, $\\gamma : Y \\to Y'$ and $\\delta : Z \\to Z'$ be morphisms such that all the resulting squares commute. Then if $\\beta$ and $\\delta$ are injective and $\\alpha$ is surjective then $\\gamma$ is injective.}\r\n\r\nThis result is dual to the previous one. QED.\r\n\r\n\\textbf{29. (Five lemma) Let $U \\rightarrow W \\rightarrow X \\rightarrow Y \\rightarrow Z$ and $U' \\rightarrow W' \\rightarrow X' \\rightarrow Y' \\rightarrow Z'$ be exact sequences in an abelian category and let $\\alpha : U \\to U'$, $\\beta : W \\to W'$, $\\gamma : X \\to X'$, $\\delta : Y \\to Y'$ and $\\epsilon : Z \\to Z'$ be morphisms such that all the resulting squares commute. Then if $\\beta$ and $\\delta$ are isomorphisms, $\\epsilon$ is injective and $\\alpha$ is surjective then $\\gamma$ is an isomorphism.}\r\n\r\nThis follows from the previous two theorems. We apply the first to the rightmost four columns and the second to the leftmost four columns. This gives that $\\gamma$ is surjective and injective respectively and thus an isomorphism. QED.\r\n\r\n\\textbf{30. If $0 \\rightarrow A \\rightarrow B \\rightarrow C \\rightarrow 0$ and $0 \\rightarrow A' \\rightarrow B' \\rightarrow C' \\rightarrow 0$ are exact rows and $g : A \\to A'$, $f : B \\to B'$ and $h : C \\to C'$ are morphisms that make the diagram commute, then if $g$ and $h$ are isomorphisms, so is $f$.}\r\n\r\nThis is a special case of the five lemma with the objects at the right and left ends equal to the zero object. We take the identity morphism between the zero objects on each end.\r\n\r\nThe five lemma then says that $f$ is an isomorphism. QED.\r\n\r\n\\textbf{31. Suppose that $f : A \\to B$ and $f' : A' \\to B'$ make a commuting square via $\\phi^A : A \\to A'$ and $\\phi^B : B \\to B'$. Similarly, let $g : C \\to B$ and $g' : C' \\to B'$ make a commuting square via $\\phi^B$ and $\\phi^C : C \\to C'$. Let $P$ along with $r : P \\to A$ and $s : P \\to C$ be the pullback of $f$ and $g$ and $P'$ along with $r' : P' \\to A'$ and $s' : P' \\to C'$ be the pullback of $f'$ and $g'$. Then $PABCP'A'B'C'$ forms a commuting cube. In particular there is a morphism $u : P \\to P'$ such that $\\phi^A\\circ r = r'\\circ u$ and $s'\\circ u = \\phi^C\\circ s$.}\r\n\r\nSince $P = A\\times_B C$ we have that $f\\circ r = g\\circ s$. Thus $\\phi^B\\circ f\\circ r = \\phi^B\\circ g\\circ s$. Thus $f'\\circ \\phi^A\\circ r = g'\\circ \\phi^C\\circ s$.\r\n\r\nThus by the universal property of the pullback of $f'$ and $g'$ there is a morphism $u : P \\to P'$ such that $\\phi^A\\circ r = r'\\circ u$ and $s'\\circ u = \\phi^C\\circ s$. QED.\r\n\r\n\\textbf{32. Suppose that $f : B \\to A$ and $f' : B' \\to A'$ make a commuting square via $\\phi^A : A \\to A'$ and $\\phi^B : B \\to B'$. Similarly, let $g : B \\to C$ and $g' : B' \\to C'$ make a commuting square via $\\phi^B$ and $\\phi^C : C \\to C'$. Let $Q$ along with $r : A \\to Q$ and $s : C \\to Q$ be the pushout of $f$ and $g$ and $Q'$ along with $r' : A' \\to Q'$ and $s' : C' \\to Q'$ be the pushout of $f'$ and $g'$. Then $ABCQA'B'C'Q'$ forms a commuting cube. In particular there is a morphism $u : Q \\to Q'$ such that $r'\\circ \\phi^A = u\\circ r$ and $u\\circ s = s'\\circ \\phi^C$.}\r\n\r\nThe argument is dual to that for the pullback commuting cube. QED.\r\n\r\n
\\textbf{33. Let $A_1 \\overset{f_1}{\\rightarrow} B_1 \\overset{g_1}{\\rightarrow} C_1 \\rightarrow 0$ and $0 \\rightarrow A_2 \\overset{f_2}{\\rightarrow} B_2 \\overset{g_2}{\\rightarrow} C_2$ be exact with morphisms $d_1^A : A_1 \\to A_2$, $d_1^B : B_1 \\to B_2$ and $d_1^C : C_1 \\to C_2$ between the rows making all squares commute. Suppose there exists a similar pair of rows with the indices $1$ and $2$ replaced with $3$ and $4$, with vertical morphisms $d_3^A$, $d_3^B$ and $d_3^C$. Further suppose there are morphisms $\\phi^A : A_1 \\to A_3$, $\\phi^B : B_1 \\to B_3$ and $\\phi^C : C_1 \\to C_3$ and $\\psi^A : A_2 \\to A_4$, $\\psi^B : B_2 \\to B_4$ and $\\psi^C : C_2 \\to C_4$ again making all squares commute. Let $\\ker(d_1^A) \\overset{\\hat{f}_1}{\\rightarrow} \\ker(d_1^B) \\overset{\\hat{g}_1}{\\rightarrow} \\ker(d_1^C) \\overset{\\delta_1}{\\rightarrow} \\mbox{coker}(d_1^A) \\overset{\\bar{f}_2}{\\rightarrow} \\mbox{coker}(d_1^B) \\overset{\\bar{g}_2}{\\rightarrow} \\mbox{coker}(d_1^C)$ be the induced exact sequence of the snake lemma on the first pair of exact rows and $\\ker(d_3^A) \\overset{\\hat{f}_3}{\\rightarrow} \\ker(d_3^B) \\overset{\\hat{g}_3}{\\rightarrow} \\ker(d_3^C) \\overset{\\delta_2}{\\rightarrow} \\mbox{coker}(d_3^A) \\overset{\\bar{f}_4}{\\rightarrow} \\mbox{coker}(d_3^B) \\overset{\\bar{g}_4}{\\rightarrow} \\mbox{coker}(d_3^C)$ be the induced exact sequence of the snake lemma on the other pair of exact rows. Then there are maps $\\hat{\\phi}^X : \\ker(d_1^X) \\to \\ker(d_3^X)$ and $\\bar{\\psi}^X : \\mbox{coker}(d_1^X) \\to \\mbox{coker}(d_3^X)$ for $X = A, B, C$ that make the squares between the induced exact sequences commute.}\r\n\r\nWe have that if $k_1^A : \\ker(d_1^A) \\to A_1$ is the kernel of $d_1^A$ then $\\psi^A\\circ d_1^A\\circ k_1^A = 0$. But then $d_3^A\\circ \\phi^A\\circ k_1^A = 0$. Thus there is an induced morphism $\\hat{\\phi}^A : \\ker(d_1^A) \\to \\ker(d_3^A)$ by the universal property of the kernel of $d_3^A$.\r\n\r\nSimilarly there are induced morphisms $\\ker(d_1^B) \\to \\ker(d_3^B)$ and $\\ker(d_1^C) \\to \\ker(d_3^C)$ and, by duality, induced morphisms between the cokernels. Thus we have shown that the morphisms $\\hat{\\phi}^X$ and $\\bar{\\psi}^X$ exist.\r\n\r\nLet $k_1^X : K_1^X \\to X_1$ be the kernel of $d_1^X$ for $X = A, B, C$ and $k_3^X : K_3^X \\to X_3$ be the kernel of $d_3^X$ for $X = A, B, C$.\r\n\r\nThen by definition $f_1\\circ k_1^A = k_1^B\\circ \\hat{f}_1$, $\\phi^B\\circ k_1^B = k_3^B\\circ \\hat{\\phi}^B$, $f_3\\circ k_3^A = k_3^B\\circ \\hat{f}_3$ and $\\phi^A\\circ k_1^A = k_3^A\\circ \\hat{\\phi}^A$. We also have $\\phi^B\\circ f_1 = f_3\\circ \\phi^A$.\r\n\r\nWe therefore have $k_3^B\\circ \\hat{\\phi}^B\\circ \\hat{f}_1 = \\phi^B\\circ k_1^B\\circ \\hat{f}_1 = \\phi^B\\circ f_1\\circ k_1^A = f_3\\circ \\phi^A\\circ k_1^A = f_3\\circ k_3^A\\circ \\hat{\\phi}^A = k_3^B\\circ \\hat{f}_3\\circ \\hat{\\phi}^A$. But $k_3^B$ is a monomorphism and thus $\\hat{\\phi}^B\\circ \\hat{f}_1 = \\hat{f}_3\\circ \\hat{\\phi}^A$. \r\n\r\nBy a similar argument we have that $\\hat{\\phi}^C\\circ \\hat{g}_1 = \\hat{g}_3\\circ \\hat{\\phi}^B$.\r\n\r\nBy duality we have that $\\bar{\\psi}^C\\circ \\bar{g}_2 = \\bar{g}_4\\circ \\bar{\\psi}^B$ and $\\bar{\\psi}^B\\circ \\bar{f}_2 = \\bar{f}_4\\circ \\bar{\\psi}^A$. \r\n\r\nLet $P_1$ be the pullback of $g_1$ and $k_1^C$ with maps $b_1 : P_1 \\to B_1$ and $s_1 : P_1 \\to K_1^C$. Similarly let $P_2$ be the pullback of $g_3$ and $k_3^C$ with maps $b_3 : P_2 \\to B_3$ and $s_3 : P_2 \\to K_3^C$. 
Let $Q_1$ and $Q_2$ be the dual pushouts on the cokernel side with maps $b_2$, $s_2$, $b_4$ and $s_4$.\r\n\r\nBy the definition of the connecting morphisms $\\delta_1$ and $\\delta_2$ we have $s_2\\circ \\delta_1\\circ s_1 = b_2\\circ d_1^B\\circ b_1$ and $s_4\\circ \\delta_2\\circ s_3 = b_4\\circ d_2^B\\circ b_3$.\r\n\r\nBy the previous theorem we have morphisms $t : P_1 \\to P_2$ and $u : Q_1 \\to Q_2$ that make the pullback squares into a commuting cube, and similarly for the pushout squares.\r\n\r\nThus $u\\circ b_2\\circ d_1^B\\circ b_1 = b_4\\circ d_2^B\\circ b_3\\circ t$. Thus $u\\circ s_2\\circ \\delta_1\\circ s_1 = s_4\\circ \\delta_2\\circ s_3\\circ t$.\r\n\r\nBut by commutativity of the pullback and pushout cubes we then have $s_4\\circ \\bar{\\psi}^A\\circ \\delta_1\\circ s_1 = s_4\\circ \\delta_2\\circ \\hat{\\phi}^C\\circ s_1$.\r\n\r\nBut since pullbacks and pushouts preserve monomorphisms and epimorphisms $s_1$ is an epimorphism and $s_4$ is a monomorphism. Thus $\\bar{\\psi}^A\\circ \\delta_1 = \\delta_2\\circ \\hat{\\phi}^C$ proving commutativity of the centre square between the two long snake sequences. QED.\r\n\r\n\\section{Projective objects}\r\n\r\n\\textbf{1. A projective object $P$ in an abelian category $C$ is an object such that Hom$(P, -)$ is an exact functor from $C$ to abelian groups.}\r\n\r\n\\textbf{2. An object $P$ in an abelian category is projective iff for any epimorphism $q : A \\to B$ every morphism $f : P \\to B$ factors through $q$, i.e. there exists a morphism $q' : P \\to A$ such that $f = q\\circ q'$.}\r\n\r\nSuppose that $0 \\rightarrow K \\overset{k}{\\rightarrow} A \\overset{q}{\\rightarrow} B \\rightarrow 0$ is exact. This is the case iff $k$ is the kernel of the epimorphism $q$.\r\n\r\nAs the functor Hom$(P, -)$ is left exact we have that $0 \\rightarrow \\mbox{Hom}(P, K) \\overset{\\mbox{Hom}(P, k)}{\\rightarrow} \\mbox{Hom}(P, A) \\overset{\\mbox{Hom}(P, q)}{\\rightarrow}  \\mbox{Hom}(P, B)$ is exact.\r\n\r\nIf $P$ is projective then $\\mbox{Hom}(P, q)$ is surjective. In other words, for every morphism $f : P \\to B$ there exists a morphism $q' : P \\to A$ such that $f = q\\circ q'$.\r\n\r\nSince the epimorphism $q$ is arbitrary, this shows one direction of the theorem. The converse follows by reversing the argument. QED.\r\n\r\n\\textbf{3. A free module is a module that has a basis, i.e. a linearly independent generating set.}\r\n\r\n\\textbf{4. If an $R$-module $P$ is free then it is projective.}\r\n\r\nSuppose $(p_i)_{i\\in I}$ is a basis for $P$.\r\n\r\nSuppose that $g : P \\to B$ is an $R$-module homomorphism and $f : A \\to B$ is a surjective homomorphism. \r\n\r\nConsider the elements $g(p_i)$. Since $f$ is surjective, for each $i \\in I$ there exists $a_i \\in A$ such that $f(a_i) = g(p_i)$. We can define $\\beta : P \\to A$ by $p_i \\mapsto a_i$, extended linearly.\r\n\r\nAs $f(a_i) = g(p_i)$ we have that $f(\\beta(p_i)) = g(p_i)$. Since the $p_i$ are a basis for $P$ we have that $(f\\circ \\beta)(p) = g(p)$ for all $p \\in P$ by linearity. Thus $f\\circ \\beta = g$ and so $P$ is projective. QED.
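\r\n\r\nBy way of contrast, here is a standard example (added for illustration): the $\\Z$-module $\\Z/2$ is not projective. The reduction map $q : \\Z \\to \\Z/2$ is an epimorphism, but the identity id$_{\\Z/2}$ cannot factor through $q$: any morphism $q' : \\Z/2 \\to \\Z$ satisfies $2q'(1) = q'(0) = 0$, and $\\Z$ is torsion-free, so $q' = 0$. By 2, $\\Z/2$ is not projective, and hence by 4 it is not free.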
\r\n\r\n\\textbf{5. A direct summand of a projective module is projective.}\r\n\r\nSuppose $P$ is a direct summand of a projective module $Q$. Then by definition there is an injection $i : P \\to Q$ and a quotient map $q : Q \\to P$ such that $q\\circ i =$ id$_P$.\r\n\r\nNow suppose $g : P \\to B$ is a homomorphism and $f : A \\to B$ is a surjective homomorphism. Then we have a map $g\\circ q : Q \\to B$. Since $Q$ is projective there exists a map $\\beta' : Q \\to A$ such that $f\\circ \\beta' = g\\circ q$.\r\n\r\nDefine $\\beta : P \\to A$ by $\\beta = \\beta'\\circ i$. Then $f\\circ \\beta = f\\circ \\beta'\\circ i = g\\circ q\\circ i = g$. Thus $P$ is projective. QED.\r\n\r\n\\textbf{6. An $R$-module $P$ is projective iff it is a direct summand of a free module.}\r\n\r\nBy the previous two results it suffices to show that if $P$ is projective then it is a direct summand of a free module.\r\n\r\nSuppose $P$ is projective and let $F(P)$ be the free $R$-module on the elements of $P$. There is a natural surjective morphism $f : F(P) \\to P$.\r\n\r\nSince $P$ is projective, we can find a map $\\beta : P \\to F(P)$ such that $f\\circ \\beta =$ id$_P$.\r\n\r\nWe then apply the splitting lemma to the short exact sequence $0 \\rightarrow \\ker(f) \\rightarrow F(P) \\overset{f}{\\rightarrow} P \\rightarrow 0$.\r\n\r\nThis yields that $P$ is a direct summand of $F(P)$. QED.\r\n\r\n\\textbf{7. An $R$-module $M$ is projective iff every surjection $\\alpha : N \\to M$ splits.}\r\n\r\nIf $M$ is projective, then the identity homomorphism id$_M : M \\to M$ factors through the surjective homomorphism $\\alpha : N \\to M$. Thus there exists a homomorphism $u : M \\to N$ such that $\\alpha\\circ u =$ id$_M$, which is the definition of a splitting of $\\alpha$.\r\n\r\nConversely, let $F = \\oplus_{m \\in M} R$. Let $\\phi : F \\to M$ be defined by sending the basis vector of $F$ corresponding to the $m$-coordinate to $m$. The map is then extended linearly to the whole of $F$. It is clearly an $R$-module homomorphism.\r\n\r\nThe map $\\phi$ is surjective and so by hypothesis, it splits. Let $s : M \\to F$ be a homomorphism such that $\\phi\\circ s =$ id$_M$. \r\n\r\nConsider the map $\\psi : M \\oplus \\ker(\\phi) \\to F$ given by $(m, x) \\mapsto s(m) + x$. It is clearly a homomorphism, since it is defined componentwise. But it is also an isomorphism, since it has inverse given by $\\psi^{-1} : f \\mapsto (\\phi(f), f - s(\\phi(f)))$.\r\n\r\nThus $M$ is a direct summand of the free module $F$ and hence is projective. QED.\r\n\r\n\\textbf{8. An $R$-module $P$ is projective iff the functor Hom$(P, -)$ is exact.}\r\n\r\nThe functor is always left exact. Consider an arbitrary exact sequence $0 \\rightarrow A \\rightarrow B \\rightarrow C \\rightarrow 0$. The functor is exact iff the induced map Hom$(P, B) \\rightarrow$ Hom$(P, C)$ is surjective.\r\n\r\nBut this is the case iff every homomorphism $\\alpha : P \\to C$ can be lifted to a homomorphism to $B$. However, this is precisely the condition for $P$ to be projective. QED.\r\n\r\n\\textbf{9. An $R$-module $P$ is projective iff there exists a family $(a_i)_{i\\in I}$ with $a_i \\in P$, and corresponding morphisms $f_i : P \\to R$ such that for any $a \\in P$, $f_i(a) = 0$ for almost all $i$ and $a = \\sum_i a_if_i(a)$.}\r\n\r\nSuppose such $a_i$ and $f_i$ exist. Let $F = \\oplus e_i R$ be the free module with coordinates corresponding to each $i \\in I$. Let $g : F \\to P$ be defined by $g(e_i) = a_i$ for all $i \\in I$. It is clearly an epimorphism since the $a_i$ generate $P$.\r\n\r\nLet the homomorphism $h : P \\to F$ be defined by $h(a) = \\sum_i e_if_i(a)$. Clearly $g\\circ h =$ id$_P$ and so $h$ splits $g$.\r\n\r\nAs in a previous theorem, this implies that $P$ is isomorphic to a direct summand of $F$, and so $P$ is projective.\r\n\r\nConversely, assume $P$ is projective. Suppose $g$ is an epimorphism from a free module $F = \\oplus e_i R$ onto $P$. 
Such a map exists since $P$ is a direct summand of a free module and $g$ can be taken to be the projection map.\r\n\r\nAs we have shown previously, $g$ is split by a homomorphism $h : P \\to F$. We write $h(a) = \\sum_i e_i f_i(a)$.\r\n\r\nIt is easy to show that the $f_i$ are $R$-linear, since $h$ is an $R$-module homomorphism. Moreover, every element of the free module $F$ is a finite linear combination of the $e_i$, and so all but finitely many of the $f_i(a)$ are zero.\r\n\r\nApplying $g$ to the equation for $h(a)$ we have $a = (g\\circ h)(a) = \\sum_i a_i f_i(a)$ with $a_i = g(e_i)$, since $f_i(a) \\in R$ and $g$ is an $R$-homomorphism. QED.\r\n\r\n\\textbf{10. Suppose $X$ is a basis of a free $R$-module $F$. Then the free module $F$ has the following universal property. Let $N$ be any $R$-module. If $f : X \\to N$ is any map, then there exists a unique $R$-module homomorphism $\\phi : F \\to N$ such that $\\phi(x) = f(x)$ for all $x \\in X$.}\r\n\r\nLet $\\iota : X \\to F$ be the canonical inclusion sending $x \\in X$ to $x \\in F$.\r\n\r\nWe first prove the existence of $\\phi$. Suppose $X = \\{x_i \\;|\\; i \\in I\\}$. Let $m \\in F$. Then $m = \\sum_{i\\in I} r_ix_i$ for some $r_i \\in R$, where only finitely many of the $r_i$ are nonzero, and since $X$ is a basis the $r_i$ are uniquely determined by $m$.\r\n\r\nWe may therefore define $\\phi(m) = \\sum_{i\\in I} r_if(x_i)$. Suppose $m_1, m_2 \\in F$ and $r \\in R$. Say $m_1 = \\sum_{i\\in I}r_ix_i$ and $m_2 = \\sum_{i\\in I}s_ix_i$.\r\n\r\nWe have that $\\phi(rm_1 + m_2) = \\phi\\left(\\sum_{i\\in I} (rr_i + s_i)x_i\\right) = \\sum_{i\\in I}(rr_i + s_i)f(x_i) = r\\phi(m_1) + \\phi(m_2)$. Thus $\\phi$ is $R$-linear.\r\n\r\nWe see from the definition of $\\phi$ that $\\phi(x) = f(x)$ for $x \\in X$.\r\n\r\nFinally, we prove uniqueness of $\\phi$. Suppose that $\\psi : F \\to N$ is an $R$-module homomorphism and $\\psi(x) = f(x)$ for all $x \\in X$. If $m = \\sum_{i\\in I}r_ix_i \\in F$ then $\\psi(m) = \\sum_{i\\in I}r_i\\psi(x_i) = \\sum_{i\\in I}r_if(x_i) = \\phi(m)$. Thus $\\psi = \\phi$. QED.\r\n\r\n\\textbf{11. Let $(F_i)_{i\\in I}$ be a family of free $R$-modules. Then $\\bigoplus_{i\\in I} F_i$ is free.}\r\n\r\nLet $A_i = \\{a_{i,j}\\}$ be a basis for $F_i$. Let $A = \\coprod_{i \\in I} A_i$ be the disjoint union of the sets $A_i$. Let $F(A)$ be the free module on $A$.\r\n\r\nThere is the natural inclusion $A \\to F(A)$. There is also a natural inclusion $\\iota : A \\to \\bigoplus_{i\\in I} F_i$ sending $a_{i,j} \\mapsto (b_k)_{k\\in I}$ where $b_k = a_{i,j}$ if $k = i$ and $0$ otherwise.\r\n\r\nBy the universal property of free modules, there is a unique homomorphism $\\phi : F(A) \\to \\bigoplus_{i\\in I} F_i$ such that $\\phi(a_{i,j}) = \\iota(a_{i,j})$. We will show that $\\phi$ is an $R$-module isomorphism.\r\n\r\nSuppose $x \\in \\ker(\\phi)$. Write $x = \\sum_{i,j} r_{i,j}a_{i,j}$. Then $0 = \\phi(x)_i = \\sum_j r_{i,j}a_{i,j} \\in F_i$.\r\n\r\nFor all $i$, since $F_i$ is free on $A_i$ we have $r_{i,j} = 0$ for all $j$. Thus $x = 0$. Thus $\\ker(\\phi) = 0$ and so $\\phi$ is injective.\r\n\r\nNow suppose that $\\left(\\sum_j r_{i,j}a_{i,j}\\right)_{i\\in I} \\in \\bigoplus_{i\\in I}F_i$. As only finitely many of the $r_{i,j}$ are nonzero, $\\sum_{i,j} r_{i,j}a_{i,j} \\in F(A)$.\r\n\r\nWe have that $\\phi\\left(\\sum_{i,j} r_{i,j}a_{i,j}\\right) = \\left(\\sum_j r_{i,j}a_{i,j}\\right)_{i\\in I}$. Thus $\\phi$ is surjective.\r\n\r\nThus $\\bigoplus_{i\\in I} F_i \\cong F(A)$ and thus $\\bigoplus_{i\\in I} F_i$ is a free $R$-module. QED.\r\n\r\n\\textbf{12. Let $P_1$ and $P_2$ be $R$-modules. 
Then $P_1\\oplus P_2$ is projective iff $P_1$ and $P_2$ are projective.}\r\n\r\nSuppose $P_1\\oplus P_2$ is projective. Then we have that $P_1\\oplus P_2\\oplus Q$ is free for some $R$-module $Q$. But now both $P_1$ and $P_2$ are direct summands of a free module and are thus projective.\r\n\r\nConversely, suppose $P_1$ and $P_2$ are projective. Then there exist modules $Q_1$ and $Q_2$ such that $P_1\\oplus Q_1$ and $P_2\\oplus Q_2$ are free.\r\n\r\nThe direct sum of these two free modules is free. But it is easy to show that $(P_1\\oplus Q_1)\\oplus (P_2\\oplus Q_2) \\cong (P_1\\oplus P_2)\\oplus (Q_1\\oplus Q_2)$. Thus $P_1\\oplus P_2$ is a direct summand of a free module and is thus projective. QED.\r\n\r\n\\textbf{13. If $e$ is an idempotent of a commutative ring $R$ then $eR$ is a projective $R$-module.}\r\n\r\nLet $e' = 1 - e$ in $R$. Then $e'^2 = (1 - e)^2 = 1 - 2e + e^2 = 1 - 2e + e = 1 - e = e'$ and $ee' = e(1 - e) = e - e = 0$. Thus $e$ and $e'$ form an orthogonal set of idempotents with $e + e' = 1$.\r\n\r\nThen the $R$-module $R$ can be written as a direct sum $R = eR \\oplus e'R$. In particular, $eR$ is a direct summand of the free $R$-module $R$ and so is projective. QED.\r\n\r\n\\textbf{14. A submodule of a free $\\Z$-module $M$ is free.}\r\n\r\nThe $\\Z$-submodules are precisely the subgroups of the abelian group $M$. But the subgroups of a free abelian group are free. QED.\r\n\r\n\\textbf{15. A submodule of a free $R$-module over a principal ideal domain $R$ is free.}\r\n\r\nLet $F = \\bigoplus_{j \\in J} R_j$ be a free $R$-module with $R_j = R$ for all $j$. Let $M$ be an $R$-submodule of $F$.\r\n\r\nWe may assume $J$ is well-ordered (that every set can be well-ordered is equivalent to the axiom of choice).\r\n\r\nFor $j \\in J$ let $G_j = \\bigoplus_{i < j} R_i$, $F_j = \\bigoplus_{i\\leq j} R_i = G_j\\oplus R_j$.\r\n\r\nBecause of the final equality, every element of $F_j\\cap M$ may be written uniquely as $(b, r)$ with $b \\in G_j$ and $r \\in R_j = R$.\r\n\r\nDefine $f_j : F_j\\cap M \\to R$ by $(b, r) \\mapsto r$. By construction, the kernel of $f_j$ is $G_j\\cap M$. \r\n\r\nThe image of $f_j$ is an ideal of $R$ and so has the form $r_jR$ for some $r_j \\in R$, as $R$ is a principal ideal domain. If $r_j \\neq 0$ then there exists $c_j \\in F_j\\cap M$ such that $f_j(c_j) = r_j$.\r\n\r\nWe will prove that $\\{c_j : j \\in J, r_j \\neq 0\\}$ is a basis for $M$.\r\n\r\nTo prove that the set is linearly independent, suppose that $\\sum_{k=1}^n s_kc_{j_k} = 0$ for some $s_k \\in R$ with $j_1 < j_2 < \\cdots < j_n$. Apply $f_{j_n}$: the terms with $k < n$ lie in $G_{j_n}\\cap M$, the kernel of $f_{j_n}$, so we obtain $0 = s_nf_{j_n}(c_{j_n}) = s_nr_{j_n}$. As $R$ is a domain and $r_{j_n} \\neq 0$ we have $s_n = 0$. By induction, $s_k = 0$ for all $k$.\r\n\r\nSuppose that the set doesn't generate $M$. Then there is a smallest $i \\in J$ such that there is an $a \\in F_i\\cap M$ which can't be written in terms of the elements of the set. \r\n\r\nIf $J'$ is the set of indices for which $r_j \\neq 0$ and $i \\notin J'$ then $G_i\\cap M = F_i\\cap M$ and so $a \\in G_i\\cap M$. But this contradicts the minimality of $i$. Thus $i \\in J'$.\r\n\r\nWrite $f_i(a) = sr_i$ for some $s \\in R$. Write $b = a - sc_i$. Since $a$ cannot be written as a linear combination of the $c_j$'s, neither can $b$. \r\n\r\nBut $f_i(b) = f_i(a) - sf_i(c_i) = 0$, thus $b \\in G_i\\cap M$. But this contradicts the minimality of $i$ and so every element of $M$ must be expressible as a linear combination of the $c_j$'s. QED.
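\r\n\r\nAs a concrete illustration of 13 (a standard example, added here): take $R = \\Z/6$ and $e = 3$. Then $e^2 = 9 = 3 = e$ and $e' = 1 - e = 4$, and $R = eR\\oplus e'R$ with $eR = \\{0, 3\\} \\cong \\Z/2$ and $e'R = \\{0, 2, 4\\} \\cong \\Z/3$. Thus $\\Z/2$ is a projective $\\Z/6$-module, but it is not free, since a nonzero free $\\Z/6$-module has at least $6$ elements. This shows that the hypotheses on the ring in 16 and 17 below cannot be dropped.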
\r\n\r\n\\textbf{16. An abelian group is projective as a $\\Z$-module iff it is a free abelian group.}\r\n\r\nAn abelian group is projective as a $\\Z$-module iff it is a direct summand of a free abelian group. But if it is a direct summand it is a subgroup and hence free. Conversely it is clear that a free abelian group is a projective $\\Z$-module. QED.\r\n\r\n\\textbf{17. A module over a principal ideal domain is projective iff it is free.}\r\n\r\nThe proof is the same as for abelian groups. QED.\r\n\r\n\\section{Injective objects}\r\n\r\n\\textbf{1. An injective object $Q$ in an abelian category $C$ is an object such that Hom$(-, Q)$ is an exact functor from $C$ to abelian groups.}\r\n\r\n\\textbf{2. An object $Q$ in an abelian category is injective iff for every monomorphism $k : A \\to B$ every morphism $f : A \\to Q$ factors through $k$, i.e. there exists a morphism $k' : B \\to Q$ such that $f = k'\\circ k$.}\r\n\r\nThe proof is dual to the case for projectives. QED.\r\n\r\n\\textbf{3. (Baer's criterion) Let $Q$ be an $R$-module such that for any ideal $I$ of $R$ and homomorphism $f : I \\to Q$ there exists a homomorphism $f' : R \\to Q$ extending $f$. Then $Q$ is an injective $R$-module. The converse also holds.}\r\n\r\nLet $Ra$ and $Rb$ be modules generated by a single element and such that $Ra \\subseteq Rb$, i.e. that there is an injective homomorphism from $Ra$ to $Rb$.\r\n\r\nSuppose $f : Ra \\to Q$ is an $R$-module homomorphism. Let $I = \\{r \\in R \\;|\\; rb \\in Ra\\}$. It is easy to check that $I$ is an ideal of $R$.\r\n\r\nThe map $g : I \\to Q$ defined by $g(r) = f(rb)$ is a homomorphism. Thus by assumption there exists a morphism $g' : R \\to Q$ extending $g$.\r\n\r\nDefine $f'(rb) = g'(r)$. Clearly $f'$ is defined on the whole of $Rb$ and is a homomorphism extending $f$.\r\n\r\nWe only need to show that it is well-defined. But if $r_1b = r_2b$ in $Rb$ then $(r_1 - r_2)b = 0$. But then $(r_1 - r_2)b \\in Ra$ since $0 \\in Ra$. Thus $f'((r_1 - r_2)b) = g'(r_1 - r_2) = g(r_1 - r_2) = f(r_1b - r_2b) = f(0) = 0$.\r\n\r\nBut $f'(r_1b - r_2b) = f'(r_1b) - f'(r_2b)$, thus $f'(r_1b) = f'(r_2b)$. Thus $f'$ is well-defined.\r\n\r\nNow we extend this result by adjoining a single generator, as follows. \r\n\r\nSuppose $A$ is a module with $A \\subseteq A + Ra'$ for some $a' \\notin A$ and suppose that we have a morphism $f : A \\to Q$. Again let $I = \\{r \\in R \\;|\\; ra' \\in A\\}$. As above, extend $g(r) = f(ra')$ to $g' : R \\to Q$. Then define $f'(a + ra') = f(a) + g'(r)$.\r\n\r\nAs above, it is easy to show that $f'$ is a well-defined homomorphism that extends $f$.\r\n\r\nTo prove the result in general, we must use Zorn's lemma. Suppose $A \\subseteq B$ are modules and $f : A \\to Q$ is a homomorphism.\r\n\r\nWe form the partially ordered set of extensions of $f$. This consists of pairs $(C, g)$ of a module $A \\subseteq C \\subseteq B$ and a morphism $g : C \\to Q$ extending $f$, with $(C, g) \\leq (D, h)$ if $C \\subseteq D$ and $h$ extends $g$.\r\n\r\nIf $\\{(A_i, f_i) \\;|\\; i \\in I\\}$ is a chain in this partially ordered set we can form $\\bigcup f_i : \\bigcup A_i \\to Q$, an upper bound for the chain, since every element of $\\bigcup A_i$ is in some $A_i$.\r\n\r\nBy Zorn's lemma there exists a maximal $f' : A' \\to Q$. If $b$ is an element of $B$ that is not in $A'$ we can extend $f'$ to $A' + Rb$. But this contradicts the maximality of $f'$. So in fact $A' = B$. This proves one direction of the theorem.\r\n\r\nThe converse of the theorem follows directly from the definition of injective module and the fact that $I$ and $R$ are both $R$-modules with $I \\subseteq R$. QED.
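\r\n\r\nAs an aside (a standard consequence, included for illustration): over $R = \\Z$, Baer's criterion says that an abelian group $Q$ is injective iff for every $n \\neq 0$ and every $q \\in Q$ there exists $q' \\in Q$ with $nq' = q$, i.e. iff $Q$ is divisible. The following result is a special case.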
\r\n\r\n\\textbf{4. The $\\Z$-module $\\Q$ is injective.}\r\n\r\nUsing Baer's criterion, it suffices to show that any homomorphism $f : I \\to \\Q$ for an ideal $I = n\\Z$ of $\\Z$ extends to a homomorphism $f' : \\Z \\to \\Q$.\r\n\r\nIf $n = 0$ then the zero map extends $f$. Otherwise define $f'(m) = \\frac{m}{n}f(n)$. This is clearly a homomorphism, and for any $kn \\in I$ we have $f'(kn) = kf(n) = f(kn)$, so $f'$ extends $f$. QED.\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "eb28f889429ec61461e29d0af7ff1dddb7584248", "size": 143827, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ElementsHomologicalAlgebra.tex", "max_stars_repo_name": "wbhart/ShortMathNotes", "max_stars_repo_head_hexsha": "bb10ca85044cc4767dcdbd5bd41ce530edad3667", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-07-23T15:01:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-10T06:53:30.000Z", "max_issues_repo_path": "ElementsHomologicalAlgebra.tex", "max_issues_repo_name": "wbhart/ShortMathNotes", "max_issues_repo_head_hexsha": "bb10ca85044cc4767dcdbd5bd41ce530edad3667", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ElementsHomologicalAlgebra.tex", "max_forks_repo_name": "wbhart/ShortMathNotes", "max_forks_repo_head_hexsha": "bb10ca85044cc4767dcdbd5bd41ce530edad3667", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.3279908414, "max_line_length": 1579, "alphanum_fraction": 0.6620384212, "num_tokens": 51602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238982, "lm_q2_score": 0.7905303285397348, "lm_q1q2_score": 0.5680706707741454}}
{"text": "\\subsection{Observables}\n\\label{sec:observables}\nThis research explores the Higgs boson $h$ leptonically decay to $\\tau\\tau$ in Large Hadrons Collider (LHC) $p p$ collisions with Higgs boson production mode independently.  Yukawa coupling is put forward to parametrize the interactions between the two handedness of fermions and the Higgs field. Especially, the general model-independent effective Yukawa interactions of $h$ with $\\tau$ leptons can be described as follows[Berge:2015nua]: \n%\n\\begin{equation}\n\t\\mathcal{L}_{h\\tau\\tau} = - \\frac{m_{\\tau}}{v} \\kappa_{\\tau} (\\cos\\phi_{\\tau}\\bar{\\tau}\\tau + \\sin \\phi_{\\tau}\\bar{\\tau}i\\gamma_{5}\\tau)h\n\\end{equation}\n%\nwhere $v=246\\,Gev$, $\\kappa_{\\tau}>0$ is the reduced Yukawa coupling\nstrength, and \\PhiTau $\\in [-\\pi/2, \\pi/2]$ is the \"\\CP -mixing\" angle that parameterizes the relative contributions of the \\CP -even and \\CP -odd components to the \\htt coupling. Fig.\\ref{fig:observables:angular} shows $\\tau$ decay products in $\\tau\\tau$ Zero momentum frame (ZMF) where the $\\tau\\tau$ system is at rest with (a) scalar and (b) pseudo-scalar \\htt couplings respectively. Accordingly, within scalar coupling, \\PhiTau $= 0$ and the visible products tend to be anti-parallel, which are described thoroughly in SM theory. While within pseudo-scalar coupling, \\PhiTau $= \\pi/2$ and the visible products tend to be parallel, which is the indication of maximally parity-violating weak interaction in the $\\tau$ decay. Naturally, \\PhiTau $\\in (-\\pi/2, \\pi/2)$ corresponds to the \\CP-odd and \\CP-even mixed state. \n%\n\\begin{figure}[h!]\n\t\\begin{center}\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=.6\\textwidth]{figures/observables/Angular correlations}\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Angular correlations of the tau decay products from $h^0/A^0$ $\\rightarrow \\tau^+\\tau^-$ decays in case of a (a) scalar or (b) pseudo-scalar Higgs boson (taken from [Berge\\_BonnSeminar])}\n\t\\label{fig:observables:angular}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\begin{center}\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=.3\\textwidth]{figures/observables/phiCP.jpg}\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Schematic diagram of a $h\\rightarrow \\tau^{\\pm} \\rightarrow \\pi^{\\pm} + 2 \\nu$ decay in the \\tautau Zero Momentum Frame}\n\t\\label{fig:observables:phiCP}\n\\end{figure}\n%\n\nThe differential decay width of $h \\rightarrow \\tau^+\\tau^-$ is proportional to: \n%\n\\begin{equation}\n\td\\mathrm{\\Gamma}_{h\\tau\\tau} \\propto 1 - s_{z}^{-}s_{z}^{+} + \\cos(2\\phi_{\\tau})(s_{\\perp}^{-}\\cdot s_{\\perp}^{+}) + \\sin(2\\phi_{\\tau})[(s_{\\perp}^{-}\\times s_{\\perp}^{+})\\cdot\\hat{k}^{-}]\n\t\\label{eq:decay_width_htt}\n\\end{equation}\n%\nwhere $\\hat{k}^{-}$ is the normalised $\\tau^{-}$ spatial momentum in\nthe Higgs boson rest frame, $\\hat{s}^{\\pm}$ are the unit spin vectors\nof $\\tau^{\\pm}$ in their respective tau rest frames, and\n$s_{z}^{\\pm}(s_{\\perp}^{\\pm})$ are the longitudinal (transverse)\ncomponents of $\\hat{s}^{\\pm}$ with respect to $\\hat{k}^{-}$.\n\nFrom Eq.\\ref{eq:decay_width_htt}, the observable acoplanarity angle \\phiCP can be introduced: \n%\n\\begin{equation}\n\t\\frac{1}{\\mathrm{\\Gamma}}\\frac{d\\mathrm{\\Gamma}(h\\rightarrow \\tau\\tau \\rightarrow \\pi^+\\pi^-+2\\nu)}{d\\phi_{CP}} \\propto 1 - \\frac{\\pi^2}{16}\\cos(\\phi_{CP}-2\\phi_{\\tau})\n\t\\label{eq:phistar_distribution}\n\\end{equation}\n%\nwhere 
\n\nHowever, in the $h\\rightarrow \\tau^{\\pm} \\rightarrow \\pi^{\\pm} + 2 \\nu$ decay, the invisible neutrinos make the Higgs boson ZMF and the \\tautau ZMF experimentally inaccessible. Therefore an approximate \\tautau ZMF is built from the visible part of the \\tautau decay (charged and neutral pions), in which quantities are denoted with an asterisk (*). \\phistarCP is then the acoplanarity angle defined in this visible \\tautau ZMF. \n\n\n\n", "meta": {"hexsha": "dcc0b840e051e4ecd9fabad8ff517f6542fb2d87", "size": 4024, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "observables.tex", "max_stars_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_stars_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "observables.tex", "max_issues_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_issues_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "observables.tex", "max_forks_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_forks_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.9032258065, "max_line_length": 822, "alphanum_fraction": 0.717445328, "num_tokens": 1290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696748, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5680706589174279}}
{"text": "\\section{Graphs}\n\n\\frame{\n{Part 1: Graph Definitions}\n\n\\tableofcontents[currentsection,hideallsubsections, firstsection=1, sections={1-3}]\n}\n\n\\subsection{Motivations}\n\n\\begin{frame}{Using Graphs to Solve Problems}{Course Registration}\n\n  We can represent a list of courses in a university, and their requirements, using a graph structure. For example, to take \\emph{\"Computer Graphics\"}, you need to take \\emph{\"Programming Theory\"} and \\emph{\"Linear Algebra\"} first.\\medskip\n\n  If you take two lectures per semester, how long would it take you to graduate? Why?\n\n  \\begin{tabular}{p{.1\\textwidth}|p{.45\\textwidth}||p{.3\\textwidth}}\n    \\hline\n    Code & Lecture & Prerequisites \\\\\n    \\hline\n    0000 & {\\small Social Questions} & \\emph{none} \\\\\n    0001 & {\\small Intro to Programming} & \\emph{none} \\\\\n    0002 & {\\small Calculus I} & \\emph{none} \\\\\n    0003 & {\\small Programming Theory} & \\emph{0001} \\\\\n    0004 & {\\small Linear Algebra} & \\emph{0000, 0002} \\\\\n    0005 & {\\small Programming Challenges} & \\emph{0000, 0001, 0003} \\\\\n    0006 & {\\small Computer Graphics} & \\emph{0003, 0004} \\\\\n    \\hline\n  \\end{tabular}\n\\end{frame}\n\n\n\\begin{frame}{Using Graphs to Solve Problems}{Course Registration}\n\n  The university proposes a new lecture, \\emph{\"Maths for Computer Science\"}. The updated table is below. \\medskip\n\n  How long does it take to graduate now? Why?\n\n  \\begin{tabular}{p{.1\\textwidth}|p{.45\\textwidth}||p{.3\\textwidth}}\n    \\hline\n    Code & Lecture & Prerequisites \\\\\n    \\hline\n    0000 & {\\small Social Questions} & \\emph{none} \\\\\n    0001 & {\\small Intro to Programming} & \\emph{none} \\\\\n    0002 & {\\small Calculus I} & \\emph{none} \\\\\n    0003 & {\\small Programming Theory} & \\emph{0001, {\\bf 0007}} \\\\\n    0004 & {\\small Linear Algebra} & \\emph{0000, 0002} \\\\\n    0005 & {\\small Programming Challenges} & \\emph{0000, 0001, 0003} \\\\\n    0006 & {\\small Computer Graphics} & \\emph{0003, 0004} \\\\\n    {\\bf 0007} & {\\small {\\bf Maths for Computer Science}} & {\\bf\\emph{0005}} \\\\\n    \\hline\n  \\end{tabular}\n\\end{frame}\n\n\\begin{frame}{Using Graphs to Solve Problems}{Airplane}\n  \\begin{columns}\n\n    \\column{0.3\\textwidth}\n    In an airport, each airplane needs a gate when it is on the ground.\\medskip\n\n    If we know the landing and departing time of each plane, what is the minimum number of gates necessary?\n    \\column{0.7\\textwidth}\n    \\includegraphics[width=\\textwidth]{../img/gatetable}\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}[t]{Problems as Graphs}\n  Many problems that can be described by the \\emph{relationship} betwene entities in the problem can be represented as graphs.\\bigskip\n\n  \\begin{itemize}\n    \\item A course is a prerequisite to another one;\n    \\item Two planes are landed at the same time;\n    \\item Webpages and the links between them;\n  \\end{itemize}\\bigskip\n\n  These relationships can be described mathematically using a graph structure.\n\\end{frame}\n\n\\subsection{Definitions}\n\n\\begin{frame}[t]{Graph Basic Definitions}\n\n    \\begin{columns}\n      \\column{0.4\\textwidth}\n      \\begin{center}\n        \\input{../graphs/simple_dag.tex}\n      \\end{center}\n\n      \\column{0.6\\textwidth}\n        A {\\bf graph} $G$ is defined by a set of vertices $V$, and a set of edges $E$.\\bigskip\n\n        \\begin{itemize}\n          \\item $G = (V,E)$\n          \\item $V = 
\\{a,b,c,d\\}$\n          \\item $E = \\{(a,b), (a,c), (c,b), (d,c), (d,a)\\}$\n        \\end{itemize}\\bigskip\n\n         A {\\bf directed graph (digraph)} is a graph where each edge has a {\\bf direction}.\\medskip\n\n         An {\\bf undirected graph} is a graph where the edges do not have a direction ($(a,b) \\iff (b,a)$).\n    \\end{columns}\n\\end{frame}\n\n\\begin{frame}{Graphs and Relations}{Remember Lecture 2?}\n\n  \\begin{columns}\n    \\column{0.4\\textwidth}\n    \\begin{center}\n      \\begin{tikzpicture}[scale=3,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex] (a) at (0,1) {a};\n      \\node[vertex] (b) at (1,1) {b};\n      \\node[vertex] (c) at (1,0) {c};\n      \\node[vertex] (d) at (0,0) {d};\n      \\draw[edge] (a) to (c);\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (c) to (b);\n      \\end{tikzpicture}\n    \\end{center}\n    \\column{0.6\\textwidth}\n\n    Graph $G = (V, E)$\n    \\begin{itemize}\n      \\item $V = \\{a,b,c,d\\}$\n      \\item $E = \\{(a,c),(a,b),(c,b)\\}$\n    \\end{itemize}\\bigskip\n\n    A digraph with vertices $V$ can be understood as a \\structure{binary relation $V \\to V$}\\bigskip\n\n    In the same way, every \\structure{binary relation} can be written as a directed graph too.\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}{Matrix Representation of a Digraph}\n\n  \\begin{columns}\n    \\column{0.5\\textwidth}\n    \\begin{center}\n    \\begin{tikzpicture}[scale=1.5,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex] (a) at (0,1) {a};\n      \\node[vertex] (b) at (0,0) {b};\n      \\node[vertex] (c) at (0,-1) {c};\n      \\node[vertex] (d) at (2,0) {d};\n      \\node[vertex] (e) at (3,1) {e};\n      \\node[vertex] (f) at (3,-1) {f};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (c) to (b);\n      \\draw[edge] (b) to (d);\n      \\draw[edge] (d) to (a);\n      \\draw[edge] (c) to (d);\n      \\draw[edge] (d) to (e);\n      \\draw[edge] (e) to (f);\n      \\draw[edge] (f) to (d);\n    \\end{tikzpicture}\n  \\end{center}\n\n    \\column{0.5\\textwidth}\n\n    {\\larger\n      \\begin{tabular}{c|cccccc}\n        & a & b & c & d & e & f \\\\\n        \\hline\n        a &0&1&0&0&0&0\\\\\n        b &0&0&0&1&0&0\\\\\n        c &0&1&0&1&0&0\\\\\n        d &1&0&0&0&1&0\\\\\n        e &0&0&0&0&0&1\\\\\n        f &0&0&0&1&0&0\\\\\n      \\end{tabular}\n    }\\medskip\n\n    {\\bf Adjacency Matrix}\n  \\end{columns}\n  \\bigskip\n\n  An \\structure{Adjacency Matrix} $A$ represents a digraph: $A_{i,j}$ is 1 if $v_i \\to v_j$, and 0 otherwise.\n\n  \\begin{equation*}\n    A_{i,j} = 1 \\iff (v_i, v_j) \\in E\n  \\end{equation*}\n\\end{frame}\n
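\n\\begin{frame}{Matrix Representation of a Digraph}{A small added example}\n  As an illustrative sketch (added, using the graph from the relations frame): for $V = \\{a,b,c,d\\}$ and $E = \\{(a,c),(a,b),(c,b)\\}$, the adjacency matrix is:\\medskip\n\n  {\\larger\n    \\begin{tabular}{c|cccc}\n      & a & b & c & d \\\\\n      \\hline\n      a &0&1&1&0\\\\\n      b &0&0&0&0\\\\\n      c &0&1&0&0\\\\\n      d &0&0&0&0\\\\\n    \\end{tabular}\n  }\\medskip\n\n  For example, the entry in row $a$, column $c$ is 1 because $(a,c) \\in E$; rows $b$ and $d$ are all zeros because $b$ and $d$ have no outgoing edges.\n\\end{frame}\n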
"max_forks_repo_name": "caranha/MathCS", "max_forks_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7267759563, "max_line_length": 239, "alphanum_fraction": 0.6048914916, "num_tokens": 1989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7905303162021597, "lm_q1q2_score": 0.5680706523801379}}
{"text": "% !TEX TS-program = pdflatexmk\n\\documentclass{article} % For LaTeX2e\n\\usepackage{nips15submit_e,times}\n\\usepackage{hyperref}\n\\usepackage{url}\n%\\documentstyle[nips14submit_09,times,art10]{article} % For LaTeX 2.09\n\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{example}{Example}[theorem]\n\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}\n\n\\newcommand\\myeq{\\stackrel{\\mathclap{\\tiny\\mbox{d}}}{=}}\n\n\\newcommand\\mb{\\mathbf}\n\\newcommand\\mc{\\mathcal}\n\\newcommand\\bs{\\boldsymbol}\n\n\\title{Gaussian Process Tensor Factorisation}\n\n\n\\author{\nDongwoo Kim\n}\n\n% The \\author macro works with any number of authors. There are two commands\n% used to separate the names and addresses of multiple authors: \\And and \\AND.\n%\n% Using \\And between authors leaves it to \\LaTeX{} to determine where to break\n% the lines. Using \\AND forces a linebreak at that point. So, if \\LaTeX{}\n% puts 3 of 4 authors names on the first line, and the last on the second\n% line, try using \\AND instead of \\And before the third author name.\n\n\\newcommand{\\fix}{\\marginpar{FIX}}\n\\newcommand{\\new}{\\marginpar{NEW}}\n\n\\nipsfinalcopy % Uncomment for camera-ready version\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nThis is abstract.\n\\end{abstract}\n\n\\section{Gaussian Process Regression}\n\n\\subsection{Notations}\nWe denote scalars by lower case letters (e.g. $x$), vectors by bold lower case letters (e.g. $\\mb x$), matrices by upper case letters (e.g. $A$), and tensors by calligraphic upper case letters (e.g. $\\mc A$). The calligraphic letters are also used for probability distributions (e.g. $\\mc{N}(\\mu, \\sigma^2)$).\nWe use $X_{ij}$ and $\\mc{X}_{ijk}$ to represent the $(i,j)$ and $(i,j,k)$ entry of matrix $X$ and tensor $\\mc{X}$, respectively.\n\\subsection{Linear Regression to GP}\n\nFrom linear model to Gaussian Process (GP).\n\nBayesian analysis of the standard linear regression model with Gaussian noise is\n\\begin{equation}\nf(\\mb{x}) = \\mb{x}^\\top \\mb{w}, \\quad \\quad y = f(\\mb{x}) + \\epsilon,\n\\end{equation}\nwhere $\\mb{x} \\in \\mathbb{R}^p$ is the input vector, $\\mb{w}$ is a vector of weights, and $y$ is the observed target value. In general, the additional noise $\\epsilon$ is distributed i.i.d Gaussian distribution with zero mean and variance $\\sigma^2$\n\\begin{equation}\n\\epsilon \\sim \\mc{N}(0, \\sigma^2).\n\\end{equation}\n\nWith an additional assumption on the prior distribution over $\\mb{w}$, we can fully specify the Bayesian linear regression model. We put a zero mean Gaussian prior with covariance matrix $\\Sigma$ on the weights\n\\begin{equation}\n\\mb{w} \\sim \\mc{N}(\\mb{0}, \\Sigma).\n\\end{equation}\nThe joint distribution of multiple observations $\\mc{D} = \\{(\\mb{x}_i, y_i)|i=1,...,n\\}$ is\n\\begin{align}\np(\\mb{y}|X, \\mb{w}) = \\prod_{i=1}^{n} p(y_i|\\mb{x}_i, \\mb{w}) = \\prod_{i=1}^{n} \\frac{1}{\\sqrt{2\\pi}\\sigma}\\exp(-\\frac{(y_i - \\mb{x}_i^\\top \\mb{w})^2 }{2\\sigma}),\n\\end{align}\nwhere $X \\in \\mathbb{R}^{n \\times p}$ is a collection of vector $\\bf x$, and then the posterior is\n\\begin{align}\np(\\mb{w}|X, \\mb{y}) = \\frac{p(\\mb{y}|X,\\mb{w})p(\\mb{w})}{p(\\mb{y}|X)}.\n\\end{align}\nImportant properties of this linear model is its marginal and predictive distributions. 
First, the marginal distribution follows\n\\begin{align}\np(\\mb{y}|X) & = \\int p(\\mb{y}, \\mb{w}|X) d\\mb{w} = \\int p(\\mb{y} | \\mb{w}, X) p(\\mb{w}) d\\mb{w} \\\\\n& =  \\mc{N}(\\mb{y} | \\mb{0}, \\sigma^{2}I + X\\Sigma X^\\top)\n\\end{align}\nIf we project the input vector $\\mb{x}$ into feature space through a basis function $\\phi$, then there exists a function $\\psi$ such that the kernel $\\kappa(\\mb{x}, \\mb{x}') = \\phi(\\mb{x})^\\top\\Sigma\\phi(\\mb{x}')$ can be represented by a simple dot product $\\kappa(\\mb{x}, \\mb{x}') = \\psi(\\mb{x})^\\top\\psi(\\mb{x}')$, and this corresponds to the GP with zero mean function.\n\n\\section{Gaussian Process Matrix Factorisation}\nWe now consider the GP for a matrix factorisation. Given an observed matrix $X \\in \\mathbb{R}^{n\\times m}$, suppose there is a latent factor $\\mb{u}_i$ for the $i$-th row and $\\mb{v}_j$ for the $j$-th column, and a feature mapping $\\phi$ from the factor space to the feature space. Let the matrices $U \\in \\mathbb{R}^{n \\times d}$ and $V \\in \\mathbb{R}^{m \\times d}$ be the collections of feature representations of the latent factors. We extend the linear model to a bilinear model as follows:\n\\begin{align}\nF = U W V^\\top \\quad X_{ij} = F_{ij} + \\epsilon, \n\\end{align}\nwhere the weight matrix $W \\in \\mathbb{R}^{d\\times d}$ models the interaction between the latent factors $U_i$ and $V_j$, and again $\\epsilon$ represents an independent zero-mean Gaussian noise with variance $\\sigma^2$. We put a zero mean matrix-normal prior on the weight\n\\begin{align}\nW \\sim \\mc{N}_{d\\times d}(\\mb{0}, \\Sigma_{1}, \\Sigma_{2})\n\\end{align}\nor equivalently\n\\begin{align}\n\\text{vec}(W) \\sim \\mc{N}(\\mb{0}, \\Sigma_{1} \\otimes \\Sigma_{2}).\n\\end{align}\nThe marginal distribution of $F$ given $U$ and $V$ without noise $\\epsilon$ is\n\\begin{align}\np(F|U,V) & = \\int p(F|U,V,W)p(W) dW \\\\\n& =  \\mc{N}(\\text{vec}(F)|\\mb{0}, (U\\otimes V)(\\Sigma_{1} \\otimes \\Sigma_{2})(U\\otimes V)^\\top)\\\\\n& =  \\mc{N}(\\text{vec}(F)|\\mb{0}, (U \\Sigma_{1} U^\\top) \\otimes (V \\Sigma_{2} V^\\top) )\\\\\n& =  \\mc{N}_{n\\times m}(F|\\mb{0}, (U \\Sigma_{1} U^\\top), (V \\Sigma_{2} V^\\top) ) \\label{eqn:matrix_gp},\n\\end{align}\nwhich is again a matrix normal distribution.\nIf we simplify the covariance matrices and use identity matrices, then the bilinear model corresponds to the GP model with a product kernel, each factor of which has a simple dot product representation. One could define a kernel function between latent factors directly instead. For example, in \\cite{Lloyd2013}, they use an exponentiated squared Euclidean distance between two entries $(i,j)$ and $(i',j')$ (RBF),\n\\begin{align}\n\\kappa((U_i, V_j), (U_{i'}, V_{j'})) = s^2 \\exp\\bigg(-\\frac{\\|U_i - U_{i'}\\|^2 + \\|V_j - V_{j'}\\|^2}{2\\ell^2}\\bigg),\n\\end{align}\nas a kernel function to model a random graph structure.\n
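\nTo make the identity-covariance case explicit (an added remark): with $\\Sigma_{1} = \\Sigma_{2} = I$, the covariance between entries of $F$ is\n\\begin{align}\n\\mathrm{cov}(F_{ij}, F_{i'j'}) = (U_i U_{i'}^\\top)(V_j V_{j'}^\\top),\n\\end{align}\nwhere $U_i$ and $V_j$ denote rows of $U$ and $V$; i.e. a product of dot products between the latent factors.\n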
\n\\section{Gaussian Process Tensor Factorisation}\nWe generalise the matrix case to a $K$-mode tensor $\\mc{Y} \\in \\mathbb{R}^{n_1 \\times n_2 \\times ... \\times n_K}$. The Tucker family of tensor decomposition methods factorises the observed tensor into the latent space\n\\begin{align}\n\\mc{Y} = \\mc{W} \\times_{k=1}^{K} U^{(k)}\n\\end{align}\nwhere $\\mc{W} \\in \\mathbb{R}^{d_1 \\times d_2 \\times ... \\times d_K}$ is a core tensor, and $U^{(k)} \\in \\mathbb{R}^{n_{k} \\times d_k}$ is the $k$-th latent factor matrix. For the prior of the core tensor $\\mc{W}$, we assume a zero-mean tensor-variate normal distribution\n\\begin{align}\n\\mc{W} \\sim \\mc{TN}_{d_1 \\times d_2 \\times ... \\times d_K}(\\mb{0}, \\{\\Sigma^{(k)}\\}_{k=1}^{K})\n\\end{align}\nwhich is equivalent to\n\\begin{align}\n\\text{vec}(\\mc{W}) \\sim \\mc{N}(\\mb{0}, \\otimes_{k=1}^{K} \\Sigma^{(k)}).\n\\end{align}\nWe also assume independent Gaussian noise for each entry of the tensor $\\mc{Y}$ as follows:\n\\begin{align}\np(y_{i_1i_2...i_K}|\\mc{W}, \\{U^{(k)}\\}_{k=1}^{K}) = \\mc{N}(y_{i_1i_2...i_K}|\\sum_{r_K=1}^{d_K}...\\sum_{r_2=1}^{d_2}\\sum_{r_1=1}^{d_1} w_{r_{1}r_{2}...r_{K}} U^{(1)}_{i_1r_1}U^{(2)}_{i_2r_2}...U^{(K)}_{i_Kr_K}, \\sigma^2)\n\\end{align}\nAgain, we can marginalise the core tensor $\\mc{W}$ via the Gaussian identity (neglecting the noise term, as in the matrix case):\n\\begin{align}\np(\\mc{Y}|\\{U^{(k)}\\}_{k=1}^{K}, \\{\\Sigma^{(k)}\\}_{k=1}^{K}) & = \\mc{N}(\\text{vec}(\\mc{Y})| \\mb{0}, (\\otimes_{k=1}^{K}U^{(k)}) (\\otimes_{k=1}^{K}\\Sigma^{(k)}) (\\otimes_{k=1}^{K}U^{(k)})^\\top) \\\\\n& = \\mc{TN}_{n_1 \\times n_2 \\times ... \\times n_K}(\\mc{Y}| \\mb{0}, \\{U^{(k)}\\Sigma^{(k)}{U^{(k)}}^\\top \\}_{k=1}^{K})\n\\end{align}\nThis generalises the matrix case in Equation \\ref{eqn:matrix_gp} to the tensor case.\n\n\\appendix\n\\section{Conditional and Marginal Gaussian}\nGiven a joint Gaussian distribution $\\mc{N}(\\mb{x}|\\bs{\\mu}, \\bs{\\Sigma})$ where\n\\begin{align}\n\\mb{x} =\n \\begin{pmatrix}\n  \\mb{x}_{a} \\\\\n  \\mb{x}_{b}\n \\end{pmatrix},\n\\quad\n\\bs{\\mu} = \n \\begin{pmatrix} \n  \\bs{\\mu}_{a} \\\\\n  \\bs{\\mu}_{b}\n \\end{pmatrix},\n\\quad\n\\bs{\\Sigma} = \n \\begin{pmatrix}\n  \\bs{\\Sigma}_{aa} & \\bs{\\Sigma}_{ab} \\\\\n  \\bs{\\Sigma}_{ba} & \\bs{\\Sigma}_{bb}\n \\end{pmatrix},\n\\end{align}\nthe conditional distribution of $\\mb{x}_a$ given partial observations $\\mb{x}_b$ follows\n\\begin{align}\np(\\mb{x}_a|\\mb{x}_b) = \\mc{N}(\\mb{x}_a|\\bs{\\mu}_{a|b}, \\bs{\\Sigma}_{a|b}),\n\\end{align}\nwhere\n\\begin{align}\n\\bs{\\mu}_{a|b} &= \\bs\\mu_{a} + \\bs\\Sigma_{ab}\\bs\\Sigma_{bb}^{-1}(\\mb{x}_b - \\bs{\\mu}_b) \\\\\n\\bs{\\Sigma}_{a|b} &= \\bs\\Sigma_{aa} - \\bs\\Sigma_{ab}\\bs\\Sigma_{bb}^{-1}\\bs\\Sigma_{ba}.\n\\end{align}\n\nThe marginal distribution of $\\mb{x}_a$ is given by\n\\begin{align}\np(\\mb{x}_a) = \\mathcal{N}(\\mb{x}_a|\\bs\\mu_a,\\bs\\Sigma_{aa}).\n\\end{align}\n\n\\section{Bayes' theorem for Gaussian variables}\nLet $p(\\mb{x}) = \\mc{N}(\\mb{x}|\\bs\\mu, \\Lambda^{-1})$ and $p(\\mb{y}|\\mb{x}) = \\mc{N}(\\mb{y}|A\\mb{x} + \\mb{b}, L^{-1})$, where $\\mb{x}$ and $\\mb{y}$ are both Gaussian distributed and $p(\\mb{x})$ serves as the prior. 
The marginal distribution of $\\mb{y}$ is given by\n\\begin{align}\np(\\mb{y}) = \\mc{N}(\\mb{y}|A\\bs\\mu + \\mb b, L^{-1} + A\\Lambda^{-1}A^\\top),\n\\end{align}\nand the conditional of $\\mb{x}$ given $\\mb{y}$ is given by\n\\begin{align}\np(\\mb{x}|\\mb{y}) = \\mc{N}(\\mb{x}|\\Sigma(A^\\top L(\\mb y - \\mb b) + \\Lambda \\bs\\mu), \\Sigma),\n\\end{align}\nwhere $\\Sigma = (\\Lambda + A^\\top L A)^{-1}$.\n\n\\bibliographystyle{apalike}\n\\bibliography{ref}\n\n\\end{document}\n", "meta": {"hexsha": "818d45d41747595d073049eab701d95e71b98671", "size": 9437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/Gaussian_process_tensor_factorisation/gptf.tex", "max_stars_repo_name": "arongdari/sparse-graph-prior", "max_stars_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-12-08T19:04:31.000Z", "max_stars_repo_stars_event_max_datetime": "2016-12-08T19:04:31.000Z", "max_issues_repo_path": "notes/Gaussian_process_tensor_factorisation/gptf.tex", "max_issues_repo_name": "dongwookim-ml/sparse-graph-prior", "max_issues_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-07-10T05:20:44.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-10T05:20:44.000Z", "max_forks_repo_path": "notes/Gaussian_process_tensor_factorisation/gptf.tex", "max_forks_repo_name": "dongwookim-ml/sparse-graph-prior", "max_forks_repo_head_hexsha": "01bbe59d356b24e9967851d3ab5d7195c3bcd790", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.1479591837, "max_line_length": 470, "alphanum_fraction": 0.6652537883, "num_tokens": 3368, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5680706506069952}}
{"text": "\n\\section{Latency}\n\n\n% An algorithm for closed-loop brain stimulation needs to be \\emph{fast}, that is, have low latency between the onset of the event in the brain and the time of stimulation. In this section, we characterise this closed-loop latency.\n\n% We base our latency analysis on a representative set of closed-loop disruption systems from the hippocampus literature \\cite{Girardeau2009,Ego-Stengel2009,Jadhav2012,Girardeau2014,Talakoub2016,Ciliberti2017,Dutta2018}. For a detailed description of the systems used in these labs, and the differences and similarities between them, see \\cref{A:real_systems}.\n\n% We decompose the latency into a sequence of steps. Next, we estimate the time cost of each step. Finally, we propose what would constitute a good latency improvement, taking into account the state of the art in sharp-wave ripple detection, the biological effect of stimulus onset time, and the typical duration of an SWR event.\n\n\n\n% % \\subsection{Overview}\n% \\subsection{System description}\n\n\n% \\subsection{Transmission latencies}\n\n% \\subsection{Algorithmic latencies}\n\n% The algorithmic latency of a linear filter can be accurately estimated a priori when the filter weights are known, using the filter's \\emph{group delay} function. The following paragraph describes how this is done (see also \\cref{fig:grpdelay}). We find that this technique matches the empirically observed latency well (compare \\cref{fig:grpdelay} and fig. X).\n% % todo: reference figure with empirical causal_IIR detection latency.\n\n% For each frequency of the output signal that is used for event detection (i.e. all frequencies in the ripple band for ripple detection), we calculate the group delay $G(f)$ (\\cref{fig:grpdelay-plot}). We define it as:\n% \\[\n% \tG(f) = f_S\\ \\tau\\left( 2 \\pi \\frac{f}{f_N} \\right),\n% \\]\n% with\n% \\[\n% \t\\tau(\\omega) = - \\dv{\\omega}\\ \\angle H(\\omega).\n% \\]\n% $\\angle H$ is the frequency-domain phase response of the bandpass filter, $\\omega$ is a normalised angular frequency, \\gls{fs} is the sampling frequency of the filtered signal, and \\gls{fnyq} $= \\frac{f_S}{2}$ is the corresponding Nyquist frequency. $G(f)$ then yields the delay, in seconds, of a frequency $f$ after passing through the linear filter \\cite{Oppenheim2009,Lyons2010}. Calculating this delay for every frequency in the ripple band yields a delay distribution (\\cref{fig:grpdelay-dist}), from which we can directly infer the algorithmic filter latency of the bandpass-filter-based SWR detector.\n\n% \\begin{figure}\n% \\begin{subfigure}{0.48\\textwidth}\n% \t\\caption{Group delay $G(f)$ for the bandpass filter in a ripple detector}\n% \t% todo: reference used filter\n% \t\\includegraphics[width=\\textwidth]{plot/group_delay}\n% \t\\label{fig:grpdelay-plot}\n% \\end{subfigure}\n% \\hfill\n% \\begin{subfigure}{0.48\\textwidth}\n% \t\\caption{Group delay distribution for frequencies in the ripple band}\n% \t\\includegraphics[width=\\textwidth]{plot/delay_dist}\n% \t\\label{fig:grpdelay-dist}\n% \\end{subfigure}\n% \\caption{The latency introduced by linear bandpass filters can be accurately estimated a priori using the filter's group delay. \\subref{fig:grpdelay-plot} The   The ripple band was defined here as 100--200 Hz (see section X). % todo\n% (Boxplot whiskers extend to the 5th and the 95th percentiles of the data. `N' is the number of frequencies in the ripple band for which the group delay was calculated. 
Density estimate calculated with an exponential kernel (see \\cref{sec:kde-kernels}). Kernel bandwidth selected using Silverman's rule of thumb \\cite{Silverman1986}).}\n% \\label{fig:grpdelay}\n% \\end{figure}\n\n% \\subsection{Computational latencies}\n\n% These are the latencies caused by the physical execution of the detection algorithm in hardware.\n\n% % We assume that the signal processing algorithm is executed on a microcontroller or on a microprocessor (where the algorithm is specified in a programming language like C and runs as a sequence of steps), and not on an FPGA\\footnotemark[1] or an ASIC\\footnotemark[2] (where the algorithm is specified in a hardware description language and runs in parallel). As we will see \n\n% % \\footnotetext[1]{Field-Programmable Gate Array}\n% % \\footnotetext[2]{Application Specific Integrated Circuit}\n\n% \\subsection{State of the art}\n\n% \\subsection{Biological effect of stimulus time}\n\n% % late: no deal\n\n% \\subsection{SWR event duration}\n\n% \\subsection{Conclusion}\n", "meta": {"hexsha": "f8050a466dea19883efa31f95c308bcbd4a5ea39", "size": 4384, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modules/Scraps/Latency.tex", "max_stars_repo_name": "tfiers/master-thesis", "max_stars_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-23T01:39:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-23T01:39:24.000Z", "max_issues_repo_path": "modules/Scraps/Latency.tex", "max_issues_repo_name": "tfiers/master-thesis", "max_issues_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 46, "max_issues_repo_issues_event_min_datetime": "2018-09-18T16:38:12.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-10T22:37:35.000Z", "max_forks_repo_path": "modules/Scraps/Latency.tex", "max_forks_repo_name": "tfiers/master-thesis", "max_forks_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.6285714286, "max_line_length": 609, "alphanum_fraction": 0.768020073, "num_tokens": 1124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303137346444, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5680706506069951}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno} \n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{February 21, 2014}\n\\maketitle\n\nlast time, we derived Pascal's equation:\n\\begin{align*}\n  \\binom{n}{k}&=\\binom{n-1}{k-1}+\\binom{n-1}{k}\n\\end{align*}\n\\section*{combinatorial proof:}\n\n$\\binom{n}{k}$ counts the number of $k$-elt subsets $S$ of $[n]$. 2 cases.\n\\subsubsection*{case 1}\n$n\\in S$ then the remaining elts of $S$ are chosen from $[n-1]$ in $\\binom{n-1}{k-1}$ ways\n\\subsubsection*{case 2}\n$n\\not\\in S$ then $S$ is made up of $k$-elts from $[n-1]$. There are $\\binom{n-1}{k}$ such subsets.\n\nTherefore $\\binom{n}{k}=\\binom{n-1}{k-1}+\\binom{n-1}{k}$\n$\\Box$\n\n\\section*{fibanacci in pascal's triangle}\nneat\n\\section*{combinatorial proof of binomial theorem}\n$\\overbrace{(x+y)(x+y)\\dots(x+y)}^{n\\text{-factors}}=(x+1)^n$. Each term in the sum arises from picking $x$ or $y$ in each of the $n$ factors. The coefficients of $x^ky^{n-k}$ counts the number of ways to pick $k$ $x$'s. This is counted by $\\binom{n}{k}$\n\\section*{moving on}\nnotice if you set $x-y=1$, you obtain:\n\\begin{align*}\n  2^n&=\\sum\\limits_{k=0}^n{\\binom{n}{k}}\n\\end{align*}\nif you set $y=1$ you get\n\\begin{align*}\n  (1+x)^n&=\\sum\\limits_{k=0}^n{\\binom{n}{k}x^k}\n\\end{align*}\n\\subsection*{wksht 6}\n$x=3, y=1$ in binomial theorem.\n\\begin{align*}\n  (3+1)^{10}&=\\sum\\limits_{k=0}^{10}{\\binom{10}{k}3^k1^{10-k}}\n  &=4^{10}\n\\end{align*}\n\\subsection*{wksht 7}\n\\subsection*{wksht 8}\nVandermonde convolution:\n\\begin{align*}\n  000&\\dots0&m\\text{ things}\\\\\n  000&\\dots0&n\\text{ things}\\\\\n\\end{align*}\nchoose k things\n\\subsection*{wksht 9}\ncalculus proof\n\\begin{align*}\n  \\sum\\limits_{k=0}^n{k\\binom{n}{k}x^{k-1}}&=\\frac{\\mathrm{d}}{\\mathrm{d}x}\\sum\\limits_{k=0}^n{\\binom{n}{k}x^k}\\\\\n  &=\\frac{\\mathrm{d}}{\\mathrm{d}x}(1+n)^n\\\\\n  &=n(1+x)^{n-1}\n\\end{align*}\nset $x=1$ $\\sum\\limits_{k=0}^n{k\\binom{n}{k}}=n(1+1)^{n-1}=n2^{n-1}$\n\\section*{question}\nWhat does this sum equal:\n$\\binom{n}{0}^2+\\binom{n}{1}^2+\\binom{n}{2}^2+\\dots+\\binom{n}{n}^2$\n\\begin{align*}\n  \\sum\\limits_{k=0}^n{\\binom{n}{k}^2}&=\\sum\\limits_{k=0}^n{\\binom{n}{k}\\binom{n}{n-k}}\\\\\n  &=\\binom{2n}{n}\n\\end{align*}\n\\section*{sidebar}\n\\subsection*{question:}is there any meaning to something like $\\binom{12/7}{3}$?\n\\subsection*{answer}\nfor and $n\\in\\mathbb{R},k\\in\\mathbb{Z}$ let $\\binom{n}{k}=\\begin{cases}\\frac{\\overbrace{n(n-1)(n-2)\\dots(n-k+1)}^{k \\text{ factors}}}{k!}&\\text{if } k\\ge1\\\\1&\\text{if }k=0\\\\0&\\text{if }k\\le-1\\end{cases}$\n\\begin{align*}\n  \\binom{12/7}{3}&=\\frac{12/7(12/7-1)(12/7-2)}{3!}=\\frac{-20}{7^3}\n\\end{align*}\nIs it a good definition? yes. e.g. 
\n\\section*{question}\nWhat does this sum equal:\n$\\binom{n}{0}^2+\\binom{n}{1}^2+\\binom{n}{2}^2+\\dots+\\binom{n}{n}^2$\n\\begin{align*}\n  \\sum\\limits_{k=0}^n{\\binom{n}{k}^2}&=\\sum\\limits_{k=0}^n{\\binom{n}{k}\\binom{n}{n-k}}\\\\\n  &=\\binom{2n}{n}\n\\end{align*}\n\\section*{sidebar}\n\\subsection*{question:}is there any meaning to something like $\\binom{12/7}{3}$?\n\\subsection*{answer}\nfor any $n\\in\\mathbb{R},k\\in\\mathbb{Z}$ let $\\binom{n}{k}=\\begin{cases}\\frac{\\overbrace{n(n-1)(n-2)\\dots(n-k+1)}^{k \\text{ factors}}}{k!}&\\text{if } k\\ge1\\\\1&\\text{if }k=0\\\\0&\\text{if }k\\le-1\\end{cases}$\n\\begin{align*}\n  \\binom{12/7}{3}&=\\frac{12/7(12/7-1)(12/7-2)}{3!}=\\frac{-20}{7^3}\n\\end{align*}\nIs it a good definition? yes. e.g. it is true that\n\\begin{align*}\n  \\binom{12/7}{3}&=\\binom{5/7}{2}+\\binom{5/7}{3}\n\\end{align*}\n\\section*{iterate pascal's identity:}\n\\begin{align*}\n  \\binom{n}{k}&=\\binom{n-1}{k}+\\binom{n-1}{k-1}\\\\\n  \\binom{n}{k}&=\\binom{n-1}{k}+\\binom{n-2}{k-1}+\\binom{n-2}{k-2}\\\\\n\\end{align*}\n\\end{document}\n", "meta": {"hexsha": "ef589335eb69c233c7a67382148d625156e3d9c3", "size": 2999, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "combinatorics/combinatorics-notes-2014-02-21.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "combinatorics/combinatorics-notes-2014-02-21.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "combinatorics/combinatorics-notes-2014-02-21.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.6966292135, "max_line_length": 254, "alphanum_fraction": 0.6325441814, "num_tokens": 1327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7905303087996143, "lm_q1q2_score": 0.5680706470607102}}
{"text": "\\paragraph{Soundness} When a  property is violated, if the method terminates, it always states that the property is violated\n\\paragraph{Completeness} When a property holds, the method is able to prove it\n\\paragraph{Incompleteness} When a property holds, the method might not be able to prove it (e.g. bounds propagation with convex relaxations)\n\\subsection*{Box Abstract Transformers}\n\\begin{itemize}\n    \\item $[a, b] \\Sharp{+} [c, d] = [a+c, b+d]$\n    \\item $\\Sharp{-} [a, b] = [-b, -a]$\n    \\item $\\Sharp{ReLU}([a, b]) = [ReLU(a), ReLU(b)]$\n    \\item $\\lambda \\Sharp{\\cdot} [a, b] = [\\lambda \\cdot a, \\lambda \\cdot b]$ for $\\lambda \\ge 0$\n    \\item $[a, b] \\Sharp{\\cdot} [c, d] = [\\min(ac, ad, bc, bd), \\max(ac, ad, bc, bc)]$\n\\end{itemize}\n", "meta": {"hexsha": "b05544044bf25ae6093d551973e7332565482472", "size": 743, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "certification.tex", "max_stars_repo_name": "cknabs/RIAI-summary-HS2020", "max_stars_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-20T21:27:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-24T20:28:56.000Z", "max_issues_repo_path": "certification.tex", "max_issues_repo_name": "cknabs/RIAI-summary-HS2020", "max_issues_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-25T09:29:16.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-25T10:50:09.000Z", "max_forks_repo_path": "certification.tex", "max_forks_repo_name": "cknabs/RIAI-summary-HS2020", "max_forks_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.9166666667, "max_line_length": 140, "alphanum_fraction": 0.6473755047, "num_tokens": 247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5678925470406564}}
{"text": "\\section{Overview of the Up-Scaling Methodology}\nThe goal of the up-scaling methodology is to identify the parameters of a continuum constitutive model (macroscale model) that best emulates the average response in the DEM REV (microscale model). Let the displacement, strain and stress of the DEM REV (microscale) model be denoted by $\\mathbf{u}^m$, $\\boldsymbol{\\epsilon}^m$, and $\\boldsymbol{\\sigma}^m$, respectively. Let the homogenized (averaged) strain and stress in the DEM REV model be denoted by  $\\left<\\boldsymbol{\\epsilon}\\right>$ and $\\left<\\boldsymbol{\\sigma}\\right>$, respectively.  Finally, let the strain and stress from the continuum (Macroscale) constitutive model be denoted as $\\boldsymbol{\\epsilon}^M$ and $\\boldsymbol{\\sigma}^M$, respectively. The rate of macroscale stress, $\\dot{\\boldsymbol{\\sigma}}^M=\\dot{\\boldsymbol{\\sigma}}^M\\left(\\dot{\\boldsymbol{\\epsilon}}^M, \\boldsymbol{\\chi},\\mathbf{h}\\right)$, is defined in terms of the rate of macroscale strain, $\\dot{\\boldsymbol{\\epsilon}}^M$, a set of material parameters $\\boldsymbol{\\chi}$ and a set of internal history variables $\\mathbf{h}$.\n\nThe up-scaling methodology has five steps: \n\\begin{enumerate}\n    \\item Identify the DEM REV for the NFR.\n\t\\item Exercise the DEM REV using multiple load paths. Store $\\mathbf{u}^m$, $\\boldsymbol{\\epsilon}^m$, and $\\boldsymbol{\\sigma}^m$ for each load path.\n\t\\item Apply homogenization algorithms to the microscale results ($\\mathbf{u}^m$, $\\boldsymbol{\\epsilon}^m$, and $\\boldsymbol{\\sigma}^m$) to determine the average stress-strain response of the REV, i.e., $\\left<\\boldsymbol{\\sigma}\\right>$-$\\left<\\boldsymbol{\\epsilon}\\right>$, for each load path.\n\t\\item Identify a continuum constitutive model, $\\dot{\\boldsymbol{\\sigma}}^M=\\dot{\\boldsymbol{\\sigma}}^M\\left(\\dot{\\boldsymbol{\\epsilon}}^M, \\boldsymbol{\\chi},\\mathbf{h}\\right)$, that captures the salient features of NFR mechanics.\n\t\\item Run parameter estimation algorithms to identify the parameters, $\\boldsymbol{\\chi}$, that minimize the difference between $\\left<\\boldsymbol{\\sigma}\\right>$-$\\left<\\boldsymbol{\\epsilon}\\right>$ and $\\boldsymbol{\\sigma}^M$-$\\boldsymbol{\\epsilon}^M$ over all load paths.\n\\end{enumerate}\n\nOnce an optimal parameter set, $\\boldsymbol{\\chi}$, for the desired model, $\\dot{\\boldsymbol{\\sigma}}^M=\\dot{\\boldsymbol{\\sigma}}^M\\left(\\dot{\\boldsymbol{\\epsilon}}^M, \\boldsymbol{\\chi},\\mathbf{h}\\right)$, has been identified, the newly established constitutive model can be used in Finite Element Method (FEM) models or with other suitable numerical or analytical simulations.\n\n\\subsection{Software Implementation}\nThe up-scaling framework that is presented here consists of four main software components (Figure \\ref{fig:workflow}): a DEM simulator, a homogenization module, a FEM simulator, and a parameter estimation module. In procedural order, the first software component involved is a DEM simulation package, which is used to directly model the NFR. The DEM software accepts as inputs the geometry of the DFN, the material properties of the rock and the natural fractures, and the load paths. The DEM REV is exercised for different load-paths in a way that is akin to conducting multiple triaxial tests on physical specimens to characterize the full range of material behaviour. 
The DEM software outputs the microscale displacement, $\mathbf{u}^m$, and stress-strain, $\boldsymbol{\sigma}^m$-$\boldsymbol{\epsilon}^m$, responses for each load path. This microscale data is subsequently fed into the homogenization module to compute the average stress-strain response, $\left<\boldsymbol{\sigma}\right>$-$\left<\boldsymbol{\epsilon}\right>$, for each load path. Next, the homogenized stress-strain data is used by the parameter estimation software as observation data (i.e., in place of laboratory/field data). The parameter estimation module iteratively executes a constitutive model, $\dot{\boldsymbol{\sigma}}^M=\dot{\boldsymbol{\sigma}}^M\left(\dot{\boldsymbol{\epsilon}}^M, \boldsymbol{\chi},\mathbf{h}\right)$, embedded in the FEM simulator for each load path using different parameter sets, $\boldsymbol{\chi}^i$, while attempting to minimize the error between the homogenized microscale, $\left<\boldsymbol{\sigma}\right>$-$\left<\boldsymbol{\epsilon}\right>$, and macroscale, $\boldsymbol{\sigma}^M$-$\boldsymbol{\epsilon}^M$, stress-strain curves. Eventually, the algorithm converges to a near-optimal parameter set, $\boldsymbol{\chi}$, that can be viewed as the best estimate of the NFR response attainable with the given continuum model. In our implementation, UDEC\textsuperscript{TM} was used as the DEM simulator and ABAQUS\textsuperscript{TM} was used as the FEM simulator. In ABAQUS\textsuperscript{TM}, single-element simulations were performed in which a strain history was prescribed through displacement boundary conditions for a given set of material parameters, and the stress was obtained as the output. There is nothing particularly special about the DEM or FEM simulators chosen; each could easily be replaced to overcome any inherent limitations. Moreover, a FEM simulator is not strictly needed, since its inclusion in this framework is simply to gain access to the constitutive models within it; it could easily be replaced by an FDM simulator or simply by a material subroutine. OSTRICH, a model-independent optimization package \citep{matott_ostrich:_2016}, is used for the parameter estimation module. The homogenization module was written in-house in Python, and Python was also used to interface and drive the various components of the up-scaling framework.
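In its simplest volume-averaging form, the homogenization step reduces to weighting each block's stress tensor by its volume fraction. A minimal sketch under that assumption (array shapes and names are ours; real DEM output would supply the per-block stresses and volumes):

\begin{verbatim}
import numpy as np

def volume_average_stress(block_stresses, block_volumes):
    """Homogenized stress <sigma> = sum_i sigma_i V_i / sum_i V_i.
    block_stresses: (N, 3, 3) per-block stress tensors from the DEM REV.
    block_volumes:  (N,) block volumes."""
    weights = block_volumes / block_volumes.sum()
    return np.einsum("i,ijk->jk", weights, block_stresses)

# Two blocks carrying different uniaxial stresses.
stresses = np.array([np.diag([1.0e6, 0, 0]), np.diag([3.0e6, 0, 0])])
volumes = np.array([1.0, 3.0])
print(volume_average_stress(stresses, volumes))  # sigma_xx = 2.5e6
\end{verbatim}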
\n\n\n\n\n", "meta": {"hexsha": "e5fb2dede340f906874742b6202ec8b73302fc64", "size": 5662, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "section_upscalingMethodology.tex", "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "section_upscalingMethodology.tex", "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "section_upscalingMethodology.tex", "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "avg_line_length": 269.619047619, "max_line_length": 3042, "alphanum_fraction": 0.7728717768, "num_tokens": 1411, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8670357632379241, "lm_q2_score": 0.6548947357776795, "lm_q1q2_score": 0.567817157075499}}
{"text": "\\chapter{Gravitational Force Models}\n\n\\section{Non-Gravitational Force Models}\n\n\\subsection{Solar Radiation Force Models}\nThe magnitude of the SRP acting on the satellite depends on a wide range of parameters. \nThe distance to the Sun and the position of the satellite with respect to Earth and Sun (regarding possible eclipses) define the intensity of the incoming radiation.\nThe geometry of the satellite, the optical properties of the external surfaces, and the actual orientation with respect to the Sun largely influence the orientation and magnitude of the evolving SRP.Therefore any SRP model depends on an accurate implementation of the satellite orbit, the attitude, and the geometric/physical properties of the satellite structure.\n \n\\subsection{Cannonball}\n\\label{sec:cannonball_srp}\nThe most basic approach with regard to its analytic development is referred to as the cannonball\nmodel.The cannonball model provides a useful, first-order approximation, however, due to its homogeneous material properties and symmetrical shape approximation. We recommend its use as an apriori model, before estimation.\n\n\\subsection{ECOM I}\nThe ECOM I model has been widely used for a cubic-like satellite and is formed by three unit vectors as defined in the following: \n\\begin{equation}\ne_d = r_{sun} - r_{sat} / |r_{sun} - r_{sat}|\n\\end{equation}\n\\begin{equation}\ne_y = r_z \\times e_d / |r_z \\times e_d|\n\\end{equation}\n\\begin{equation}\ne_b = e_d \\times e_y \n\\end{equation}\n\nWhere \n$e_d$ denotes the satellite-sun vector,\n$e_y$ denotes the vector along the axis of the solar panel, \n$e_b$ is given by the right-hand rule of ed and ey.\nThe total SRP acceleration $\\ddot{r}_{srp}$ is expressed as \n\n\\begin{equation}\n\\ddot{r}_{srp} = D \\cdot e_d + Y \\cdot e_y + B \\cdot e_b\n\\end{equation}\nWhere \n$D$ denotes the total acceleration in ed,\n$Y$ denotes the total acceleration in ey, \n$B$ denotes the total acceleration in eb.\nThe D, Y and B can be expressed as \n\n\\begin{equation}\nD = D_0 + D_C \\cdot cos \\Delta u + D_S \\cdot sin \\Delta u\n\\end{equation}\n\\begin{equation}\nY = Y_0 + Y_C \\cdot cos \\Delta u + Y_S \\cdot sin \\Delta u\n\\end{equation}\n\\begin{equation}\nB = B_0 + B_C \\cdot cos \\Delta u + B_S \\cdot sin \\Delta u\n\\end{equation}\n\nWhere\n$\\Delta u$ denotes the argument of latitude of the satellite with respect to the Sun. \n\n\\subsection{ECOM II}\nThe ECOM II model has been widely used for an elongate-like satellite and is also formed by ed, ey and eb. The parameters of the ECOM II are different from the ECOM I:  \n\n\\begin{equation}\n\\begin{split}\nD = D_0 &+ D_2C \\cdot cos 2\\Delta u + D_2S \\cdot sin 2\\Delta u \\\\  \n        &+ D_4C \\cdot cos 4\\Delta u + D_4S \\cdot sin 4\\Delta u\n\\end{split}\n\\end{equation}\n\\begin{equation}\nY = Y_0 \n\\end{equation}\n\\begin{equation}\nB = B_0 + B_C \\cdot cos \\Delta u + B_S \\cdot sin \\Delta u\n\\end{equation}\n\n\\subsection{ECOM C}\nThe ECOM C model is resulted from the combination of ECOM I and ECOM II. The idea is to add the even periodic terms from the ECOM II to the ECOM I. This is because some Block types of satellites are sensitive to the even periodic terms in the SRP model and this ECOM C model might be potentially applied to multi-GNSS constellations. 
The parameters of ECOM C are expressed as

\begin{equation}
\begin{split}
D = D_0 &+ D_C \cdot \cos \Delta u + D_S \cdot \sin \Delta u \\
        &+ D_{2C} \cdot \cos 2\Delta u + D_{2S} \cdot \sin 2\Delta u \\
        &+ D_{4C} \cdot \cos 4\Delta u + D_{4S} \cdot \sin 4\Delta u
\end{split}
\end{equation}
\begin{equation}
Y = Y_0 + Y_C \cdot \cos \Delta u + Y_S \cdot \sin \Delta u
\end{equation}
\begin{equation}
B = B_0 + B_C \cdot \cos \Delta u + B_S \cdot \sin \Delta u
\end{equation}


\subsection{Box Wing}
The SRP effect can also be handled by a so-called box-wing model, which takes into account the satellite bus areas, the solar panel area, the satellite attitude, and the interactions between photons and the optical properties of the surfaces. The box-wing acceleration $\ddot{r}_{boxw}$ for a flat surface of the satellite bus, including the thermal effect, can be expressed as
\begin{equation}
\ddot{r}_{boxw} = -\frac{A \cdot S_0}{M \cdot C} \cos \theta
                   \left[(\alpha + \delta)\left(e_d + \tfrac{2}{3} e_N\right) + 2\rho \cos \theta \cdot e_N\right]
\end{equation}
where
$A$ denotes the cross-section area,
$S_0$ denotes the solar irradiance at 1 AU ($1367\,W/m^2$),
$M$ denotes the mass of the satellite,
$C$ denotes the speed of light,
$\alpha$ denotes the absorption coefficient,
$\delta$ denotes the diffusion coefficient,
$\rho$ denotes the reflection coefficient,
$e_N$ denotes the normal vector of the surface, and
$\theta$ denotes the angle between $e_d$ and $e_N$.


\subsection{Antenna Thrust}
The navigation antenna produces a recoil acceleration when the signal is transmitted. This effect, called antenna thrust, generates a constant acceleration in the radial direction of the satellite orbit and can be modelled as
\begin{equation}
\ddot{r}_{ant} = W/(M \cdot C)
\end{equation}
where
$W$ denotes the emitted power in watts.


\subsection{Albedo}
The Earth radiation pressure (ERP), called albedo, also creates a small acceleration on navigation satellites.
The ERP acceleration can be expressed as
For satellite bus and solar panel mast



\section{Transformation between Celestial and Terrestrial Reference Systems}

The variational equations obtained from the POD need to be transformed into the terrestrial reference frame so that the adjustments can be made in the ECEF frame that the PEA operates in.

\begin{equation}
    [CRS] = Q(t)R(t)W(t)[TRS]
\end{equation}

where
$CRS$ is the Celestial Reference System,
$TRS$ is the Terrestrial Reference System,
$Q(t)$ is the celestial pole motion (precession-nutation) matrix,
$R(t)$ is the Earth rotation matrix, and
$W(t)$ is the polar motion matrix.
\\
\begin{equation}
Q(t) =
\begin{bmatrix}
1-aX^2  & -aXY     & X \\
 -aXY   & 1 - aY^2 & Y \\
 -X     & -Y       & 1-a(X^2+Y^2)
\end{bmatrix}
\end{equation}
\\
\begin{equation}
R(t) = R_3(-\theta) =
\begin{bmatrix}
\cos \theta & -\sin \theta & 0 \\
\sin \theta & \cos \theta  & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{equation}
%\\
%\begin{equation}
%    W(t) = R_z (-s') R_y(x_p) R_x(y_p)
%\end{equation}
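A small numeric sketch of how the composition above is applied to a position vector (plain NumPy; we take $Q$ and $W$ as identity and use an illustrative rotation angle, so all values are placeholders rather than real Earth-orientation parameters):

\begin{verbatim}
import numpy as np

def R_matrix(theta):
    """Earth rotation matrix R(t) = R_3(-theta), exactly as printed above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

# Placeholder factors: identity precession-nutation (Q) and polar motion (W),
# and an illustrative Earth rotation angle of 0.1 rad.
Q = np.eye(3)
R = R_matrix(0.1)
W = np.eye(3)

x_trs = np.array([7000e3, 0.0, 0.0])   # position in the terrestrial frame (m)
x_crs = Q @ R @ W @ x_trs              # [CRS] = Q(t) R(t) W(t) [TRS]
print(x_crs)
\end{verbatim}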
{"text": "\n\\documentclass[letterpaper, 10 pt, conference]{ieeeconf}  \n\\IEEEoverridecommandlockouts                             \n\\usepackage{graphicx} \n\\usepackage{hyperref}\n\n\\overrideIEEEmargins\n\n\\title{\\Huge Motion Detection Using Simple Image Filtering}\n\n\\author{Jiyu Tian} \n\n\\begin{document}\n\n\\maketitle\n\\thispagestyle{empty}\n\\pagestyle{empty}\n\n%-------------------------------------------------------------------------\n\n\\section{INTRODUCTION}\nIn this project we implemented a simple technique for motion detection and explored various factors that affect the efficiency. The stationary camera capture image sequences where most of the pixels belong to a stationary background and relatively small moving objects pass in front of the camera. In this case, the intensity values observed at a pixel over time is a constant or slowly varying signal, except when a moving object begins to pass through that pixel, in which case the intensity of the background is replaced by the intensity of the foreground object. Thus, we can detect a moving object by looking at large gradients in the temporal evolution of the pixel values.\n\n%-------------------------------------------------------------------------\n\\section{ALGORITHMS DESCRIPTION}\nOur motion detection algorithm is designed to contain 5 steps in total, including \n\\begin{itemize}\n    \\item Grayscale Conversion\n    \\item Spatial Smoothing\n    \\item Temporal Derivation\n    \\item Threshold Selection\n    \\item Result Generation\n\\end{itemize}\n\\subsection{Grayscale Conversion}\nAfter reading in a sequence of image frames, we first make them grayscale. As shown in Fig \\ref{gray}, we apply the grayscale conversion as the following eqaution:\n\n\\begin{equation}\nGray = 0.299 \\times R + 0.587 \\times G + 0.114 \\times B\n\\end{equation}\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.46\\textwidth]{lenagray2.png}\n\\caption{Grayscale Conversion}\n\\label{gray}\n\\end{figure}\nIf, in the worst case, some input frames do not have all the $R,\\ G,\\ B$ values, we simply regard the first channel as its grayscale value.\n\n\\subsection{Spatial Smoothing}\nThe possible bacground noise of original frames could have significant negative effects on accurate motion detection. And thus a 2D spatial smoothing filter is considered to be applied to the frames before applying the temporal derivative filter. Here we tried $3\\times3$, $5\\times5$ box filters, and $2D$ Gaussian filter with different standard deviation $\\sigma$.\n\n\\subsection{Temporal Derivation} \nSince the temporal derivatives of each pixel are to be analyzed for large gradients, different temporal derivative filter has different corresponding result. Here we make a comparison between $1st$ and $2nd$ discrete differential as well as $1D$ Gaussian filter with different standard deviation $\\sigma$.\n\n\\subsection{Threshold Selection}\nAs shown in Fig \\ref{thres}, pixels at different location has different overall behaviours. Those pixels located where no moving objects passed through tend to distribute with in a rather small value range, especially after spatial smoothing. While those pixels contains moving objects tend to have several extreme peaks.\n\n\\begin{figure}[thpb]\n\\centering\n\\includegraphics[width=0.46\\textwidth]{pixel.png}\n\\caption{Temporal Derivation of Different Pixels}\n\\label{thres}\n\\end{figure}\n\nThe threshold selection is of vital importance for accurate motion detection. 
A threshold that is too high may result in missed detections of moving objects, while a threshold that is too low can lead to high false alarm rates.

Our threshold selection strategy is based on the assumption that the given frame sequences always contain some percentage of pixels through which no moving object passes from start to end. We can then model these pixels as zero-mean Gaussian noise based on their standard deviations and arrive at a reasonable overall threshold.

\subsection{Result Generation}
Once the threshold is selected, each pixel of each frame can be classified as moving or not, and we can generate a mask on the original frame to indicate the moving objects.

\begin{equation}
Mask = \left\{
     \begin{array}{cc}
     1, & derivative > threshold\\
     0, & derivative \leq threshold\\
     \end{array}
\right.
\end{equation}

%-------------------------------------------------------------------------
\section{EXPERIMENTAL RESULTS}
\subsection{Temporal Derivative Filter}
We use $0.5[-1\ 0\ 1]$ as the derivative filter, and 1D Gaussian filters with standard deviations $\sigma=1, 1.6, 2.5$. After convolution with a Gaussian filter, the extreme peaks become much smoother because the overall background noise is reduced. The heights of the peaks also decrease, which indicates that the mean and standard deviation of each pixel are reduced as well. Intensity values of pixels without motion tend to be distributed within a smaller range and are easier to separate.

\begin{figure}[thpb]
\centering
\includegraphics[width=0.4\textwidth]{derivative.png}
\caption{Temporal Derivative Filter}
\label{der}
\end{figure}

\subsection{Spatial Smoothing Filter}
We utilized four spatial filters for comparison: $3\times3$ box, $5\times5$ box, Gaussian $\sigma = 1.4$ and Gaussian $\sigma = 1.8$. A spatial smoothing filter removes high-frequency noise but also blurs the frames. We picked one frame from the \textit{Office} folder.

\begin{figure}[thpb]
\centering
\includegraphics[width=0.3\textwidth]{ssf.jpg}
\caption{Original Frame}
\label{ssf}
\end{figure}

As shown in Fig \ref{dssf}, the background noise is significant before spatial smoothing, and thus the resulting mask contains several noise spots. As the amount of blur increases, the number of noise spots decreases and the boundaries become smoother. Some background noise still remains after the $3\times3$ box filter, but it almost disappears after the $5\times5$ box filter and the Gaussian filters. The result is much better after spatial smoothing, as it reduces background noise and separates out the moving object.

However, we should note that although spatial smoothing reduces background noise, it also loses object detail. As the amount of blur increases, the object tends to `collapse' into one large blob, in which we can no longer separate the glasses, face and pleats.


\begin{figure}[thpb]
\centering
\includegraphics[width=0.46\textwidth]{no.png}
\caption{Different Spatial Smoothing Filters}
\label{dssf}
\end{figure}

\subsection{Threshold Selection}
For each pixel we compute its overall mean and standard deviation, and sort each in ascending order. Since we assume that a particular percentage of pixels contains no moving objects, our threshold selection strategy depends only on that percentage. Once the percentage is determined, e.g.
$10\%$, the threshold is determined as $M + 3\Sigma$, where $M$ is the maximum of the lowest $10\%$ of the means and $\Sigma$ is the maximum of the lowest $10\%$ of the standard deviations. For a Gaussian, fewer than $1\times 10^{-6}$ of the samples lie outside the range $\mu\pm5\sigma$.

As we increase the percentage in Fig \ref{threshold}, the detector becomes less sensitive to changes in intensity values, and the boundaries in the mask become narrower. We select $10\%$ as our general threshold.

\begin{figure}[thpb]
\centering
\includegraphics[width=0.5\textwidth]{threshold.png}
\caption{Different Threshold}
\label{threshold}
\end{figure}



\subsection{Limitation}
One of the key limitations of our current motion detection method is that, since motion is detected from intensity values, slight changes in lighting can result in false positives. The frame sequence in Fig \ref{light} demonstrates a detection of a lighting change rather than motion.
\begin{figure}[thpb]
\centering
\includegraphics[width=0.46\textwidth]{light.png}
\caption{Light Detection}
\label{light}
\end{figure}

Another weakness of the algorithm is that it tends to capture the motion of object edges but leaves out the interior. We can easily generate masks like the one shown in Fig \ref{bound}, where the texture of the desk even appears inside the human body!

\begin{figure}[thpb]
\centering
\includegraphics[width=0.46\textwidth]{margin.png}
\caption{Boundaries}
\label{bound}
\end{figure}

%-------------------------------------------------------------------------
\section{CONCLUSION}
Motion detection based on changes in intensity values is a straightforward and efficient method. We do not need to consider correlations between different parts of a moving object or other dynamic factors, which makes the algorithm easy to implement.

The algorithm can be accurate in simple cases, but its limitations are also significant. To realize more widely applicable motion detection, more sophisticated algorithms need to be explored.


\end{document}
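A minimal NumPy sketch of the pipeline described in this report (grayscale conversion, spatial smoothing, temporal differencing, thresholding); the function names and the synthetic test sequence are ours, and the threshold rule is a simplified stand-in for the $M + 3\Sigma$ strategy:

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_motion(frames_rgb, sigma_space=1.4, percentile=10):
    """frames_rgb: array (T, H, W, 3). Returns boolean motion masks (T-2, H, W)."""
    # Grayscale conversion.
    gray = (0.299 * frames_rgb[..., 0] + 0.587 * frames_rgb[..., 1]
            + 0.114 * frames_rgb[..., 2])
    # Spatial smoothing of each frame.
    smooth = np.stack([gaussian_filter(f, sigma_space) for f in gray])
    # Temporal derivative with the 0.5*[-1 0 1] filter.
    deriv = 0.5 * (smooth[2:] - smooth[:-2])
    # Threshold from the "quietest" pixels, modelled as zero-mean noise
    # (a simplified version of the M + 3*Sigma rule above).
    std_map = deriv.std(axis=0)
    threshold = 3.0 * np.percentile(std_map, percentile)
    return np.abs(deriv) > threshold

# Synthetic sequence: static noisy background plus a moving bright square.
rng = np.random.default_rng(0)
frames = rng.normal(100, 2, size=(10, 64, 64, 3))
for t in range(10):
    frames[t, 20:28, 5 + 4 * t:13 + 4 * t, :] += 80.0
masks = detect_motion(frames)
print(masks.shape, masks.sum())   # some pixels flagged as moving
\end{verbatim}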
{"text": "% Index entries remaining to be added\n\\chapter{General Index: Unindexed}\n\n[TVie numbers refer to the pages. References to theorems contained in a few of\nthe more important examples are given by numbers in italics]\n\nAbel's discovery of elliptic functions, 429, 512; inequality, 16; integral equation, 211, 229, 230;\nmethod of establishing addition theorems, 442, 496, 497, 530, 534; special form,, (2), of\nthe confluent hypergeometric function, 353; test for convergence, 17; theorem on continuity\nof power series, 57; theorem on multiplication of convergent series, 58, 59\n\nAbridged notation for products of Theta-f unctions, 468, 469; for quotients and reciprocals of\nelHptic functions, 494, 498\n\nAbsolute convergence, 18, 28; Cauchy's test for, 21; D'Alembert's ratio test for, 22; De\n Morgan's test for, 23\n\nAbsolute value, see Modulus\n\nAbsolutely convergent double series, 28; infinite products, 32; series, 18, (fundamental\nproperty of) 25, (multiplication of) 29\n\nAddition formula for Bessel functions, 357, 380; for Gegenbauer's function, 335; for Legendre\npolynomials, 326, 395; for Legendre functions, 328; for the Sigma-function, 451; for\nTheta-functions, 467; for the Jacobian Zeta-function and for E[u), 518, 534; for the\nthird kind of elliptic integral, 523; for the Weierstrassian Zeta-function, 446\n\nAddition formulae, distinguished from addition theorems, 519\n\nAddition theorem for circular functions, 535; for the exponential function, 531; for Jacobian\nelliptic functions, 494, 497, 530; for the Weierstrassian elliptic function, 440, 457; proofs\nof, by Abel's method, 442, 496, 497, 530, 534\n\nAffix, 9\n\nAir in a sphere, vibrations of, 399\n\nAmplitude, 9\n\nAnalytic continuation, 96, (not always possible) 98; and Borel's integi-al, 141; of the hyper-\"\ngeometric function, 288. 
See also Asymptotic expansions\n\nAnalytic functions, 82-110 (Chapter v); defined, 83; derivates of, 89, (inequality satisfied by) 91;\nclistinguislied from monogenic functions, 99; represented by integi-als, 92; Eiemann's\nequations connected with, 84; values of, at points inside a contour, 88; uniformly convergent\nseries of, 91\n\nAngle, analytical definition of, 589; and popular conception of an angle, 589, 590\n\nAngle, modular, 492\n\nArea represented by an integi-al, 61, 589\n\nArgand diagram, 9\n\nArgument, 9, 588; principal value of, 9, 588; continuity of, 588\n\nAssociated function of Borel, 141; of Eiemann, 183; of Legendre [P \"' (z) and Q '\" (z)], 323-326\n\nAsymptotic expansions, 150-159 (Chapter viii); differentiation of, 153; integration of, 153;\nmultiplication of, 152; of Bessel functions, 368, 369, 371, 373, 374; of confluent hyper-\ngeometric functions, 342, 343; of Gamma-functions, 251, 276; of parabolic cylinder functions,\n347, 348; uniqueness of, 153, 154\n\nAsymptotic inequality for parabolic cylinder functions of large order, 354\n\nAsymptotic solutions of Mathieu's equation, 425\n\nAuto-functions, 226 *\n\nAutomorphic functions, 455\n\nAxioms of arithmetic and geometry, 579\n\nBarnes' contour integrals for the hypergeometric function, 286, 289; for the confluent hyper-\ngeometric function, 343-345\n\nBarnes' G-f unction, 264, 278\n\nBarnes' Lemma, 289\n\nBasic numbers, 462\n\nBernoullian numbers, 125; polynomials, 126, 127\n\nBertrand's test for convergence of infinite integrals, 71\n\nBessel coefficients [$\\besJ_{n}(z)$], 101; addition formulae for, 357; Bessel's integral for, 362;\ndifferential equation satisfied by, 357; expansion of, as power series, 355; expansion of\n\n?..S- 2\n\n%\n% 596\n%\nfunctions in series of (by Neumann), 374, 375, 3S4, (by Schlomilch), 377; expansion of\n t-z)~  in series of, 374. 375, 376; expressible as a confluent fomi of Legendre functions,\n367; expressible as confluent hypergeometric functions, 358; inequality satisfied by, 879;\nNeumann's function 0, (z) connected with, sec Neumann's function; order of, 356; recur-\nrence formulae for, 359; special case of confluent hypergeometric functions, 358. See also\nBessel functions\nBessel functions, 355-385 (Chapter xvn), Jn z) defined, 358-360; addition formulae for, 380;\nasymptotic expansion of, 368, 369, 371, 373, 374; expansion of, as an ascending series, 358,\n371; expansion of functions in series of, 374, 375, 377, 5.S'i; first kind of, 359; Hankel's\nintegi'al for, 365; integi'al connecting Legendre functions with, 364, 401; integral properties\nof, 380, 381, 384, 385; integi-als involving products of, 380, 383, 385; notations for, 356,\n\n372, 373; order of, 356; products of, 379, 360, 383, 385, 428; recurrence formulae for, 359,\n\n373, 374; relations between, 360, 371, 372; relation between Gegenbauer's function and,\n378; Schliifli's form of Bessel's integral for, 362, 372; second kind of, Y (2) (Hankel), 370;\nyC' (-) (Neumann), 372; 1',, (z) (Weber-Schliifli), 370; second kind of modified, K  z), 373;\nsolution of Laplace's equation by, 395; solution of the wave-motion equation by, 397;\ntabulation of, 378; whose order is large, 368, 383; whose order is half an odd integer, 364;\nwith imaginary argument. I  z), K  z), 372, 373, 384; zeros of, 361, 367, 378,381. See\naho Bessel coefficients (ind Bessel's equation\n\nBessel's equation, 204, 357, 373; fundamental system of solutions of (when )i is not an integer),\n359. 
372; second solution when n is an integer, 370, 373. See also Bessel functions\n\nBinefs integrals for log V (z), 248-251\n\nB6clier's theorem on linear differential equations with five singularities, 203\n\nBolzano's theorem on limit points, 12\n\nBonnet's foi-m of the second mean value theorem. 66\n\nBorel's associated function, 141; integral, 140; integral and analytic continuation, 141; method\nof ' summing ' series, 154; theorem (the modified Heine-Borel theorem), 58\n\nBoundary, 44\n ' Boundary conditions, 387; and Laplace's equation, 393\n\nBounds of continuous functions, 55\n\nBranch of a function, 106\n\nBranch -point. 106\n\nBiirmanns theorem, 128; extended by Teixeira, 131\n\nCantor's Lemma, 183\n\nCauchy's condition for the existence of a limit, 13; discontinuous factor, 123; formula for the\n\nremainder in Taylor's series, 96; inequality for derivatives of an analytic function, 91;\n\nintegral, 119; integi'al representing V [z), 243; numbers, 372; tests for convergence of series\n\nand integrals, 21, 71\nCauchy's theorem, 85; extension to curves on a cone, 87; Morera's converse of, 87, 110\nCeU, 430\n\nCesaro's method of ' summing ' series, 155; generalised, 156\n\nChange of order of terms in a series, 25; in an infinite deteiTninant, 37; in an infinite product, 33\nChange of parameter (method of solution of Mathieu's equation), 424\nCharacteristic functions, 226; numbers, 219; numbers associated with symmetric nuclei are\n\nreal. 226\nChartier's test for convergence of infinite integrals, 72\nCircle, iirea of sector of, 589; limiting, 98; of convergence, 30\nCircular functions, 435, 584; addition theorems for, 585; continuity of, 585; differentiation\n\nof, 585; duplication fonnulae, 585; periodicity of, 587; relation with Gamma-functions,\n\n239\nCircular membrane, vibrations of, 356, 396\nClass, left (L), 4; right  li), 4\nClosed. 44\nCluster point, 13\nCoefficients, equating, 59; in Fourier series, nature of, 167, 174; in trigonometrical series, values\n\not, 163, 165\nCoefficients of Bessel. sec Bessel coefficients\n\nComparison theorem for convergence of integrals, 71; for convergence of series, 20\nComplementary moduli, 479, 493; elliptic integrals with, 479, 501, 520\n\nComplete elliptic integrals [K, K, K', 7v\"] (first and second kinds), 498, 499, 518; Legendre's re-\nlation lietween, 520; properties of  qua functions of the modulus), 484, 498, 499, 501, 521;\n\n A\n\n %\n % 597\n %\n\nseries for, 299; tables of, 518; the Gaussian transformation, 533; values for small values\nof \\ k\\, 521; values (as Gamma-functions) for special values of k, 524-527; with comple-\nmentary moduli, 479, 501, 520\nComplex integrals, 77; upper limit to value of, 78\nComplex integration, fundamental theorem of, 78\n\nComplex numbers, 3-10 (Chapter i), defined, 6; amplitude of, 9; argument of, 9, 588; dependence\nof one on another, 41; imaginary part of (I), 9; logarithm of, 589; modulus of, 8; real part\noi\\ R), 9; representative point of, 9\nComplex variable, continuous function of a, 44\n\nComputation of elliptic functions, 485; of solutions of integi'al equations, 211\nConditional convergence of series, 18; of infinite determinants, 415. 
See also Convergence and\n\nAbsolute convergence\nCondition of integrability (Eiemann's), 63\nConditions, Dirichlet's, 161, 163, 164, 176\nConduction of Heat, equation of, 387\nConfluence, 202, 337\nConfluent form, 203, 337\n\nConfluent hypergeometric function  Wj  to ( )]' 337-354 (Chapter xv); equation for, 337; general\nasymptotic expansion of, 342, 345; integral defining, 339; integrals of Barnes' type for,\n343-345; Kummer's formulae for, 338; recurrence formulae for, 352; relations with Bessel\nfunctions, 360; the functions W    (z) and xl/j.\\,  (z), 337-339; the relations between functions\nof these types, 346; various functions expressed in terms of JFj,,,; (z), 340, 352, 353, 360. See\nalso Bessel functions mid Parabolic cylinder functions\nConfocal coordinates, 405, 547; form a triply orthogonal system, 548; in association with\nellipsoidal harmonics, 552; Laplace's equation referred to, 551; uniformising variables\nassociated with, 549\nCongruence of points in the Argand diagram, 430\nConstant, Euler's or Mascheroni's, [7], 235, 246, 248\nConstants ci, e., e, 443; E, E', 518, 520; of Fourier, 164; 771, r/.,, 446, (relation between -qi\n\nand 7?,,) 446\"; G, 469, 472; A', 484, 498, 499; K', 484, .501, 503 \"\nConstruction of elliptic functions, 438, 478, 492; of Mathieu functions, 409, (second method)\n\n420\n'Contiguous hypergeometric functions, 294\nContinua, 43\nContinuants, 36\n\nContinuation, analytic, 96, (not always possible) 98; and Borel's integral, 141; of the hyper-\ngeometric function, 288. See also Asymptotic expansions\nContinuity, 41; of power series, 57, (Abel's theorem) 57; of the argument of a complex variable,\n588; of the circular functions, 585; of the exponential function, 581; of the logarithmic\nfunction, 583, 589; uniformity of, 54\nContinuous functions, 41-60 (Chapter in), defined, 41; bounds of, 55; integrability of, 63; of a\n\ncomplex variable, 44; of two variables, 67\nContour, 85; roots of an equation in the interior of a, 119, 123\nContour integrals, 85; evaluation of definite integrals by, 112-124; the Mellin-Barnes type of,\n\n286, 343; see also under the special function represented by the integral\nConvergence, 11-40 (Chapter 11), defined, 13, 15; circle of, 30; conditional, 18; of a double\nseries, 27; of an infinite determinant, 36; of an infinite product, 32; of an infinite integral,\n70, (tests for) 71, 72; of a series 15, (Abel's test for) 17, (Dirichlet's test for) 17; of Fourier\nseries, 174-179; of the geometric series, 19; of the hypergeometric series, 24; of the series\n2n~*', 19; of the series occurring in Mathieu functions, 422; of trigonometrical series, 161;\nprinciple of, 13; radius of, 30; theorem on (Hardy's), 156. See also Absolute convergence.\nNon-uniform convergence and Uniformity of convergence\nCoordinates, confocal, 405, 547; orthogonal, 401, 548\nCosecant, series for, 135\nCosine, see Circular functions\n\nCosine- integral [Ci (z)], 352; -series (Fourier series), 165\nCotangents, expansion of a function in series of, 139\nCubic function, integration problem connected with, 452, 512\nCunningham's function [w,  j (z)], 353\nCurve, simple, 43; on a cone, extension of Cauchy's theorem to, 87; on a sphere (Seiffert's\n\nspiral), 527\nCut, 281\nCylindrical functions, 355. See Bessel functions\n\n%\n% 598\n%\nD'Alemberfs ratio test for convergence of series, 22\n\nDarboux\" formula. 
125\n\nDecreasing- sequence, 12\n\nDedekind's theory of irrational numbers, 4\n\nDeficiency of a plane curve, 455\n\nDefinite integrals, evaluation of, 111-124 (Chapter vi)\n\nDegree of Legendre functions, 302, 307, 324\n\nDe la Vallee Poussins test for uniformity of convergence of an infinite integral, 72\n\nDe Morgan's test for convergence of series, 23\n\nDependence of one complex number on another, 41\n\nDerangement of convergent series, 25; of double series, 28; of infinite determinants, 37; of\n\nintinite products, 33, 34\nDerivates of an analytic function, fi9; Cauchy's inequality for, 91; integrals for, 89\nDerivates of elliptic functions, 4 0\nDeterminant. Hadamard's, 212\nDeterminants, infinite, 36; convergence of, 36, (conditional) 415; discussed by Hill, 36, 415;\n\nevaluated by Hill in a particular case, 415; rearrangement of, 37\nDifiference equation satisfied by the Gamma-function, 237\nDiflferential equations satisfied by elliptic functions and quotients of Theta-f unctions, 436, 477,\n\n492; (partial) satisfied by Theta-functions, 470; Weierstrass' theorem on Gamma-functions\n\nand, 236. See also Linear diflferential equations and Partial diflferential equations\nDiflFerentiation of an asymptotic expansion, 153; of a Fourier series, 168; of an infinite\n\nintegral, 74; of an integral, 67; of a series, 79, 91; of elliptic functions, 480, 493; of the\n\ncircular functions, 585; of the exponential function, 582; of the logarithmic function, 583,\n\n589\nDirichlet's conditions, 161, 163, 164, 176; form of Fourier's theorem, 161, 163, 176; formula\n\nconnecting repeated integrals, 75, 76, 77; integral, 252; integral for f (z), 247; integral for\n\nLegendre functions, 314; test for convergence, 17\nDiscontinuities, 42; and non-uniform convergence, 47; of Fourier series, 167, 169; ordinary, 42;\n\nregular distribution of, 212; removable, 42\nDiscontinuous factor, Cauchy's, 123\n\nDiscriminant associated with Weierstrassian elliptic functions, 444, 550\nDivergence of a series, 15; of infinite products, 33\nDomain, 44\n\nDouble circuit integrals, 256, 293\nDouble integrals, 68, 254\nDouble series, 26; absolute convergence of, 28; convergence of (Stolz' condition), 27; methods\n\nof summing, 27; a particular form of, 51; rearrangement of, 28\nDoubly periodic functions, 429-535. See alt > Jacobian elliptic functions, Theta-functions and\n\nWeierstrassian elliptic functions\nDuplication formula for the circular functions, 585; for the Gamma-function, 240; for the\n\nJacobian ellij)tic functions, 498; for the Sigma-function, 459, 460; for the Theta-functions,\n\n48S; for the Weierstrassian elliptic function, 441; for the Weierstrassian Zeta-function, 459\n\nElectromagnetic waves, equations for, 404\n\nElementary functions, S2\n\nElementary transcendental functions, 579-590 (Appendix). Sec (dm Circular functions.\nExponential function and Logarithm\n\nEllipsoidal harmonics, 536-578 (Chapter xxiii); associated with confocal coordinates, 552\nderived from Lame's equation, 538-543, 552-554; external, 576; integi'al equations con\nnected with, 567; linear independence of, 560; number of, when the degi-ee is given, 546.\nphysical applications of, 547; .species of, 537; types of, 537. 
See also Lamp's equation\nand \\Lame\\ functions\n\nElliptic cylinder functions, see Mathieu functions\n\nElliptic functions, 429-535 (Chapters xx-xxii); computation of, 485; construction of, 433, 478\nderivate of, 480; discovery of, by Abel, Gauss and Jacobi, 429, 512, 524; expressed by\nmeans of Theta-functions, 473; expressed by means of Weierstrassian functions, 448-451\ngeneral addition formula, 457; number of zeros (or poles) in a cell, 431, 432; order of\n432; periodicity of, 429, 479, 500, 502, 503; period parallelogram of, 430; relation be\ntween zeros and poles of, 433; residues of, 431, 504; transformations of, 508; with no\npoles (are constant), 431; with one double pole, 432, 434; with the same periods (relations\nbetween), 452; with two simple poles, 432, 491. See also Jacobian elliptic functions,\nTheta-functions and Weierstrassian elliptic functions\n\n%\n% 599\n%\n\nElliptic integrals, 429, 512; first kind of, 515; function E (it) and, 517; function Z ( ) and,\n518; inversion of, 429, 452, 454, 480, 484, 512, 524; second kind of, 517, (addition formulae\nfor) 518, 519, 534, (imaginary transformation of) 519; third kind of, 522, 523, (dynamical\napplication of) 523, (parameter of) 522; three kinds of, 514. See also Complete elliptic\nintegrals\n\nElliptic membrane, vibrations of, 404\n\nEquating coefiQcients, 59, 186\n\nEquations, indicial, 198; number of roots inside a contour, 119, 123; of Mathematical Physics,\n203, 386-403; with periodic coefficients, 412. See also Difference equation, Integral\nequations, Linear differential equations, and under the names of special equations\n\nEquivalence of curvilinear integrals, 83\n\nError- function [Erf (,() and Erfc (,r)], 341\n\nEssential singularity, 102; at infinity, 104\n\nEta-function [H ( )], 479, 480\n\nEulerian integrals, first kind of [B (m, n)], 253; expressed by Gamma-functions, 254; extended\nby Pochhammer, 256\n\nEulerian integrals, second kind of, 241; .see Gamma-function\n\nEuler's constant [7], 235, 246, 24S; expansion (Maclaurin's), 127; method of 'summing' series,\n155; product for the Gamma-function, 237; product for the Zeta-function of Riemann, 271\n\nEvaluation of definite integrals and of infinite integrals, 111-124 (Chapter vi)\n\nEvaluation of Hill's infinite determinant, 415\n\nEven functions; of Mathieu [cc,  z, q)], 407\n\nExistence of derivatives of analytic function, 89; -theorems, 888\n\nExpansions of functions, 125-149 (Chapter vii); by Biirmann, 128, 131; by Darboux, 125; by\nEuler and Maclaurin, 127; by Fourier, see Fourier Series; by Fourier (the Fourier-Bessel\nexpansion), 381; by Lagrange, 132, 149; by Laurent, 100; by Maclaurin, 94; by Pincherle,\n149; by Plana, 145; by Taylor, 93; by Wronski, 147; in infinite products, 136; in series of\nBessel coefficients or Bessel functions, 374, 375, 381, 384; in series of cotangents, 139; in\nseries of inverse factorials, 142; in series of Legendre polynomials or Legendre functions,\n310, 322, 330, 331, 335; in series of Neumann functions, 374, 375, 384; in series of parabolic\ncylinder functions, 351; in series of rational functions, 134. 
See also Asymptotic expansions.\nSeries, and under the itaines of special functions\n\nExponential function, 581; addition theorem for, 581; continuity of, 581; differentiation of,\n582; periodicity of, 585\n\nExponential-integrral [Ei (z)], 352\n\nExponents at a regular point of a linear differential equation, 198\n\nExterior, 44\n\nExternal harmonics, (ellipsoidal) 576, (spheroidal) 403\n\nFactor, Cauchy's discontinuous, 123; periodicity-, 463\n\nFactorials, expansion in a series of inverse, 142\n\nFactor-theorem of Weierstrass, 137\n\nFej r's theorem on the summability of Fourier series, 169, 178\n\nFerrers' associated Legendre functions [P '\" (z) and Q \"  (z)], 323\n\nFirst kind, Bessel functions of, 359; elliptic integi-als of, 515, (complete) 518, (integration of)\n515; Eulerian integral of, 253, (expressed by Gamma-functions) 254; integral equation of,\n221; Legendre functions of, 307\n\nFirst mean-value theorem, 65, 96\n\nFirst species of ellipsoidal harmonic, 537, (construction of) 538\n\nFloquet's solution of differential equations with periodic coefficients, 412\n\nFluctuation, 56; total, 57\n\nFoundations of arithmetic and geometry, 579\n\nFourier-Bessel expansion, 381; integral, 385\n\nFourier series, 160-193 (Chapter ix); coefficients in, 167, 174; convergence of, 174-179; differ-\nentiation of, 168; discontinuities of, 167, 169; distinction between any trigonometrical\nseries and, 160, 163; expansions of a function in, 163, 165, 175, 176; expansions of Jacobian\nelliptic functions in, 510, 511; expansion of Mathieu functions in, 409, 411, 414, 420; Fejer's\ntheorem on, 169; Hurwitz-Liapounotf theorem on, 180; Parseval's theorem on, 182; series\nof sines and series of cosines, 165; summability of, 169, 178; uniformity of convergence of,\n168, 179. See also Trigonometrical series\n\nFourier's theorem, Dirichlet's statement of, 161, 163, 176\n\n%\n% 600\n%\nFourier's theorem on integrals. 188, 211\n\nFourtb species of ellipsoidal harmonic, 537, (construction of) 542\n\nFredliolm's integral equation, 213-217, 228\n\nFunctionality, concept of, 41\n\nFunctions, branches of, 106; identity of two, 98; limits of, 42; principal parts of, 102; without\nessential singularities, 105; which cannot be continued, 98. See also inidcr the mnnes of\nspecial functions! or special types offitncti(yns, e.g. Legendre functions. Analytic functions\n\nFundamental formulae of Jacobi connecting Theta-functions, 467, 4S8\n\nFundamental period parallelogram, 430; polygon (of automorphic functions), 455\n\nFundamental system of solutions of a linear differential equation, 197, 200, 389, 559. .S (r also\niDulcr the names of special equations\n\nGamma-function [r( )], 235-264 (Chapter xii); asymptotic expansion of, 251, 276; circular\nfunctions and, 239; complete elliptic integrals and, 524-527, 535; contour integi-al (Hankel's)\nfor, 244; difference equation satisfied by. 237; differential equations and, 236; duplication\nformula, 240; Euler's integral of the first kind and, 254; Euler's integi-al of the second\nkind, 241. (modified by Cauchy and Saalschiitz) 243, (modified by Hankel) 244; Euler's\nproduct, 237; incomplete form of, 341; integrals for, (Binet's) 248-251, (Euler's) 241;\nminimum value of, 253; multiplication formula, 240; series, (Rummer's) : 50, (Stirling's)\n251; tabulation of. 253; trigonometrical integrals and, 256; Weierstrassian product, 235,\n236. 
See also Eulerian integrals and Logaritlimic derivate of the Gamma-function\n\nGauss' discovery of elliptic functions, 429, 512, 524; integral for r'(~)/r(r), 246; lemniscate\nfunctions, see Lemniscate functions; transformation of elliptic integrals, 533\n\nGegenbauer's function [C,/ (z)], 329; addition formula, 335; differential equation for, 329;\nrecuiTence formulae, 330; relation with Legendre functions, 329; relation involving Bessel\nfunctions and, 56-5; Rodrigues' formula (analogue), 329; Schliifii's integral (analogue), 329\n\nGenus of a plane curve, 455\n\nGeometric series, 19\n\nGlaisher's notation for quotients and reciprocals of elliptic functions, 494, 498\n\nGreatest of the limits, 13\n\nGreen's functions, 395\n\nHadamard's lemma, 212\n\nHalf-periods of Weierstrassian elliptic functions, 444\n\nHankel's Bessel function of the second kind, Y,j( ), 370; contour integral for T (z), 244; integral\n\nfor J ( ), 365\nHardy's convergence theorem, 156; test for uniform convergence, 50\nHarmonics, solid and surface, 392; spheroidal, 403; tesseral, 392, 536; zonal, 302, 392, 536;\n\nSylvester's theorem concerning integi-als of, 400. See also Ellipsoidal harmonics\nHeat, equation of conduction of, 387\nHeine-Borel theorem (modified), 53\n\nHeine's expansion of (t - z)-  in series of Legendre polynomials, 321\nHermite's equation, 204, 209, 342, 347. See also Parabolic cylinder functions\nHermite's formula for the generalised Zeta-function f (s,  ), 269\nHermite's solution of Lame's equation, 573-575\nHeun's equation, 576, 577\n\nHill's equation, 406. 413-417; Hill's method of solution, 413\nHill's infinite determinant, 36, 40, 415; evaluation of, 415\nHobson's associated Legendre functions, 325\nHolomorphic, 83\n\nHomogeneity of Weierstrassian elliptic functions, 439\nHomogeneous harmonics (associated with ellipsoid), 543, 57G\\ ellipsoidal harmonics derived\n\nfrom (Nivcn's formula), 543; linear independence of, 560\nHomogeneous integral equations, 217, 219\nHurwitz' definition of the generalised Zeta-function j'(x,  ), 265; formula for f (.<,(/), 268;\n\ntheorem concerning Foiu'ier constants, 180\nHypergeometric equation, see Hypergeometric functions\nHypergeometric functions, 281-301 (Chapter xiv); Barnes' integrals, 286, 289; contiguous, 294;\n\ncontinuation of, 2H,S; contour integrals for, 291; differential equation for, 202, 207, 283;\n\nfunctions expressed in terms of, 281, 311; of two variables (Appell's), 300; relations between\n\ntwenty-four expressions involving, 284, 285, 290; liiemann's P-equation and, 208, 283;\n\nseries for (convergence of), 24, 281 squares and products of,  96'; value of F(a, l>; c; 1),\n\n%\n% 601\n%\n\n281, 393 \\ values of special forms of bypergeometric functions, 298, 301. 
See also Bessel\nfunctions, Confluent hypergeometric functions and Legendre functions\n\nHypergeometric series, >iee Hypergeometric functions\n\nHypothesis of Riemann on zeros of f(.s), 272, 280\n\nIdentically vanisMng power series, 58\n\nIdentity of two functions, 98\n\nImaginary argument, Bessel functions with [I,j(2) and K  z)'], 372, 373, 384\n\nImaginary part (/) of a complex number, 9\n\nImaginary transformation (Jacobi's) of elliptic functions, 505, 506, 5-3.5; of Theta-functions, 124,\n474; of E[u) and Z(h), 519\n\nImproper integrals, 75\n\nIncomplete Gamma-functions \\ \\  y n, c)], 341\n\nIncreasing sequence, 12\n\nIndicial equation, 198\n\nInequality (Abel's), 16; (Hadamard's), 212; satisfied by Bessel coefficients, 379; satisfied by\nLegendre polynomials, 303; satisfied by parabolic cylinder functions, 354; satisfied by\nj-(.s', a), 274, 275\n\nInfinite determinants, see Determinants\n\nInfinite integrals, 69; convergence of, 70, 71, 72; differentiation of, 74; evaluation of, 111-124;\nfunctions represented by, sec under the names of special functions; representing analytic\nfunctions, 92; theorems concerning, 73; uniform convergence of, 70, 72, 73. See also\nIntegrals and Integration\n\nInfinite products, 32; absolute convergence of, 32; convergence of, 32; divergence to zero, 33;\nexpansions of functions as, 136, 137  see also under the names of special functions); expressed\nby means of Theta-functions, 473, 488; uniform convergence of, 49\n\nInfinite series, see Series\n\nInfinity, 11, 103; essential singularity at, 104; point at, 103; pole at, 104; zero at, 104\n\nIntegers, positive, 3; signless, 3\n\nIntegrability of continuous functions, 63; Eiemann's condition of, 63\n\nIntegral, Borel's, 140; and analytic continuation, 141\n\nIntegral, Cauchy's, 119\n\nIntegral, Dirichlet's, 258\n\nIntegral equations, 211-231 (Chapter xi); Abel's, 211, 229, 330; Fredholm's, 213-217, 228;\nhomogeneous, 217, 219; kernel of, 213; Liouville-Neumann method of solution of, 221;\nnucleus of, 213; numbers (characteristic) associated with, 219; numerical solutions of, 211;\nof the first and second kinds, 213, 221; satisfied by \\Lame\\ functions, 564-567; satisfied by\nMathieu functions, 407; satisfied by parabolic cylinder functions, 331; Schlomilch's, 229;\nsolutions in series, 228; Volterra's, 221; with variable upper limit, 213, 221\n\nIntegral formulae for ellipsoidal harmonics, 567; for the Jacobian elliptic functions, 492, 494;\nfor the Weierstrassian elliptic function, 437\n\nIntegral functions, 106; and Lame's equation, 571; and Mathieu's equation, 418\n\nIntegral properties of Bessel functions, 380, 381, 385; of Legendre functions, 325, 305, 324; of\nMathieu functions, 411; of Neumann's function, 385; of parabolic cylinder functions, 350\n\nIntegrals, 61-81 (Chapter iv); along curves (equivalence of), 87; complex, 77, 78; differentiation\nof, 67; double, 68, 255; double-circuit, 256, 293; evaluation of, 111-124; for derivates of an\nanalytic function, 89; functions represented by, see under the juinies of the special functions;\nimjjroper, 75; lower, 61; of harmonics (Sylvester's theorem), 400; of irrational functions,\n452, 512; principal values of, 75; regular, 201; repeated,\n68, 75; representing analytic functions, 92; representing areas, 61, 589; round a contour,\n85; upper, 61. See also Elliptic integrals. 
Infinite integrals, and Integration\n\nIntegral theorem, Fourier's, 188, 211; of Fourier-Bessel, 385\n\nIntegration, 61; complex, 77; contour-, 77; general theorem on, 63; general theorem on\ncomplex, 78; of asymptotic expansions, 153; of integi'als, 68, 74, 75; of series, 78; pro-\nblem connected with cubics or quartics and elliptic functions, 452, 512. See also Infinite\nintegrals and Integrals\n\nInterior, 44\n\nInternal spheroidal harmonics, 403\n\nInvariants of Weierstrassian elliptic functions, 437\n\nInverse factorials, expansions in series of, 142\n\nInversion of elliptic integrals, 429, 452, 454, 480, 484, 512, 524\n\nIrrational functions, integration of, 452, 512\n\nIrrational-real numbers, 5\n\n%\n% 602\n%\n\nIrreducible set of zeros or poles, 430\n\nIrregular points i singularities) of differential equations, 197, 202\n\nIterated functions. 222\n\nJacobian elliptic functions [sn /  en ii, dn;(], 432, 478, 491-535 (Chapter xxii); addition theorems\nfor, 494, 497, 530, 535; connexion with Weierstrassian functions, 505; definitions of am,\nA<p, sn M (sin am  ), en u, dn u, 478, 492, 494; differential equations satisfied by, 477, 492;\ndifferentiation of, 493; duplication formulae for, 498; Fourier series for, 510, 511, 55.5;\ngeometrical illustration of, 524, 527; general description of, 504; Glaisher's notation for\nquotients and reciprocals of, 494; infinite products for, 508, 53:; integral formulae for, 492,\n494; Jacobi's imaginary transformation of, 505, 506; \\Lame\\ functions expressed in terms of,\n564, 573; Landen's transfonnation of, 507; modular angle of, 492; modulus of, 479, 492,\n(complementary) 479, 493; parametric representation of points on curves by, 524, 537, 527,\n555; periodicity of, 479, 500, 502, 503; poles of, 432, 503, 504; quarter periods. A', iK', of,\n479, 498, 499, 501; relations between, 492; residues of, 504; Seiffert's spherical spiral and,\n527; triplication formulae, 530. 534, 535; values of, when ii is hK, kiK' or h  K riK'), 500,\n506. 5(17; values of. when the modulus is small, 533. See ahit Elliptic functions. Elliptic\nintegrals. Lemniscate functions, Tbeta- functions, and Weierstrassian elliptic functions\n\nJacobi\"s discovery of elliptic functions, 429, 512; earlier notation for Theta-functions, 479;\nfundamental Theta-function fomiulae, 467, 4H8; imaginary transformations, 124, 474, 505,\n506, 519, 535; Zeta-function, t ee under Zeta-function of Jacobi\n\nKernel. 213\n\nKlein's theorem on linear differential equations with five singularities, 203\n\nKummer's formulae for confluent hypergeometric functions, 338; series for logF (z), 250\n\nLacunary function. 98\n\nLagrange's expansion, 132, 149; form for the remainder in Taylor's series, 96\n\nLame functions, defined, 558; expressed as algebraic functions, 556; 577; expressed by Jacobian\nelliptic functions, 573-575; expressed by Weierstrassian elliptic functions, 570-572; integi'al\nequations satisfied by, 564-567; linear independence of, 559; reality and distinctness of\nzeros of, 557, 558, 578; second kind of, 562; values of, 558; zeros of (Stieltjes' theorem),\n560. See alsi) Lames equation and Ellipsoidal harmonics\n\nLame's equation, 204, 536-578 (Chapter xxiii); derived from theory of ellipsoidal harmonics,\n538-543, 552-554; different forms of. 554, 573; generalised,' 204, 570, 573, 576, 577;\nseries solutions of, 556, 577, 578; solutions expressed in finite fomi, 459, 556, 576, 577, 578;\nsolutions of a generalised equation in finite foim, 570, 573. 
See alao Lam6 functions and\nEllipsoidal harmonics\n\nLandens transformation of Jacobian elliptic functions, 476, 507, 533\n\nLaplace's equation, 386; its general solution, 388; normal solutions of, 553; solutions involving\nfunctions of Legendre and Bessel, 391, 395; solution with given boundary conditions, 393;\nsynnnetrical solution of, 399; transformations of, 401, 407, 551, 553\n\nLaplace's integrals for Legendre polynomials and functions, 312, 313, 314, 319, 326, 337\n\nLaurent's expansion, 100\n\nLeast of limits. 13\n\nLebesgue's lemma, 172\n\nLeft (L-) class. 4\n\nLegendre's equation, 204, 304; for associated functions, 324; second solution of, 316. See aho\nLegendre functions and Legendre polynomials\n\nLegendre functions, 302-336 (Chapter xv); P  z), Q  z), P \"Uz), Q '\" z) defined, 306, 316, 323,\n325; addition formulae for, 328, 395; Bessel functions and, 364, 367, 401; degree of, 307,\n324; differential equation for, 204, 306, 324; distinguished from Legendre polynomials,\n306; expansions in a.scending series, 311, 326; expansions in descending series, 302, 317,\n326, 334; expansion of a function as a series of, 334; expressed by Murphy as hypergeometric\nfunctions, 311, 312; expression of Qn z) in terms of Legendre polynomials. 319, 320, 333;\nFerrers' functions associated with, 323. 324; first kind of, 307; Gegenbauer's function,\nCn\" (z), associated with, xee Gegenbauer's function; Heine's expansion of  t - e)~i as a series\nof, 321; Hobson's functions associated with, 325; integral connecting Bessel functions with,\n364; integi-al properties of, 324; Laplace's integials for, 312, 313, 319, 326, 334; Mehler-\nDirichlet integral for, 314; order of, 326; recuiTence fomiulae for, 307, 318; Schlafli's\nintegral for, 304, 306; second kind of, 316-320, 325, 320; summation of ::ii\"P (:) and\nZ/f\" Q  (z), 302, 321; zeros of, 303, 316, 335. See also Legendre polynomials and Legendre's\nequation\n\nLegendre polynomials [P   z)], 95, 302; addition foraiula for, 326, 387; degi-ee of, 302; differ-\nential equation for, 204, 304; expansion in ascending series, 311; expansion in descending\n\n%\n% 603\n%\n\nseries, 302, 334; expansion of a function as a series of, 310, 322, 330, 331, 332, 335;\nexpressed by Mui-phy as a hypergeometric function, 311, 312; Heine's expansion of (t - z)~'\nas a series of, 321; integi-al connecting Bessel functions with, 364; integi'al properties of,\n225, 305; Laplace's equation and, 391; Laplace's integrals for, 312, 314; Mehler-Dirichlet\nintegi-al for, 314; Neumann's expansion in series of, 322; numerical inequality satisfied by,\n303; recurrence fonnulae for, 307, 309; Eodrigues' fonnula for, 225, 303; Schlafli's integral\nfor, 303, 304; summation of 2/;\" P  (z), 302; zeros of, 303, 316. 
See aim Legendre functions\n\nLegendre's relation between complete elliptic integi-als, 520\n\nLemniscate functions [sin lemn and cos lemn 0], 524\n\nLiapounofF's theorem concerning Fourier constants, 180\n\nLimit, condition for existence of, 13\n\nLimit of a function, 42; of a sequence, 11, 12; -point (the Bolzano -Weierstrass theorem), 12\n\nLimiting circle, 98\n\nLimits, greatest of and least of, 13\n\nLimit to the value of a complex integi'al, 78\n\nLindemann's theory of Mathieu's equation, 417; the similar theory of Lame's equation, 570\n\nLinear differential equations, 194-210 (Chapter x), 386-403 (Chapter xviii); exponents of, 198;\nfundamental system of solutions of, 197, 200; iixegular singularities of, 197, 202; ordinary\npoint of, 194; regular integi-al of, 201; regular point of, 197; singular points of, 194, 197,\n(confluence of) 202; solution of, 194, 197, (uniqueness of) 196; special types of equations :\n- Bessel's for circular cylinder functions, 204. 342, 357, 358, 373; Gauss' for hypergeo-\nmetric functions, 202, 207, 283; Gegenbauer's, 329; Hermite's, 204, 209, 342, 3 7; Hill's,\n406, 413; .Tacobi's for Theta-functions, 463; Lame's, 204, 540-543, .554-558, 570-575;\nLaplace's, 386, 388, 536, 551; Legendre's for zonal and surface harmonics, 204, 304, 324;\nMathieu's for elliptic cylinder functions, 204, 406; Neumann's, 3\\&5; Riemann's for\nP-functions, 206, 283, 291, 294; Stokes', 204; Weber's for parabolic cylinder functions,\n204, 209, 342, 347; Whittaker's for confluent hypergeometric functions, 337; equation for\nconduction of Heat, 387; equation of Telegraphy, 387; equation of wave motions, 386, 397,\n402; equations with five singularities (the Klein-Bocher theorem), 203; equations with three\nsingularities, 206; equations with two singularities, 208; equations with r singularities,\n209; equation of the third order with regular integrals, 210\n\nLiouville's method of solving integral equations, 221\n\nLiouville's theorem, 105, 431\n\nLogarithm, 583; continuity of, 583, 589; differentiation of, 586, 589; expansion of, 584, 589;\nof complex numbers, 589\n\nLogarithmic derivate of the Gamma-function [i (z)], 240, 241; Binet's integrals for, 248-251;\ncircular functions and, 240; Dirichlet's integi-al for, 247; Gauss' integi-al for, 246\n\nLogarithmic derivate of the Riemann Zeta-function, 279\n\nLogarithmic-integral function [Liz], 341\n\nLower integral, 61\n\nLunar perigee and node, motions of, 406\n\nMaclaurin's (and Euler's) expansion, 127; test for convergence of infinite integrals, 71; series,\n94, (failure of) 104, 110\n\nMany-valued functions, 106\n\nMascheroni's constant [7], 235, 246, 248\n\nMathematical Physics, equations of, 203, 386-403 (Chapter x\\ an). See aho under Linear dif-\nferential equations and the names of special equations\n\nMathieu functions [oc (\u00a3, q), .sp (z, q), in,  z, q)], 404-428 (Chapter xix); construction of, 409,\n420; convergence of series in, 422; even and odd, 407; expansions as Fourier series, 409,\n411, 420; integral equations satisfied by, 407, 409; integral formulae, 411; order of, 410;\nsecond kind of, 427\n\nMathieu's equation, 204, 404-428 (Chapter xix); general form, solutions by Floquet, 412, by\nLindemann and Stieltjes, 417, by the method of change of parameter, 424; second solution\nof, 413, 420, 427; solutions in\" asymptotic series, 425; solutions which are periodic, .tee\nMathieu functions; the integial function associated with, 418. 
See also Hill's equation\n\nMean-value theorems, 65, 66, 96\n\nMehler's integral for Legendre functions, 314\n\nMellin's (and Barnes') type of contour integi-al, 286, 343\n\nMembranes, vibrations of, 356, 396, 404, 405\n\nMesh, 430\n\nMethods of ' summing ' series, 154-156\n\nMinding's formula, 119\n\nMinimum value of T (.r), 253\n\n%\n% 604\n%\n\nModified Heine-Borel theorem, -53\n\nModular aiiplo, 49'i; function, 481, (equation connected with) 482; -surf.ice, 41\n\nModulus, 430; of a complex number, 8; of Jacobian elliptic functions, 479, 492, (complementary)\n\n479. 493; periods of elliptic functions regarded as functions of the, 484, 498, 499, 501, 521\nMonogenic, 83; distinguished from analytic, 99\nMonotonic, 57\n\nMorera's theorem (converse of Cauchy's theorem), 87, 110\nMotions (if lunar perigee and node, 406\nM-test for uniformity of convergence, 49\n\nMultiplication formula for T [z), 240; for the Sigma-function, 460\nMultiplication of absolutely convergent series, 29; of asymptotic expansions, 152; of convergent\n\nseries (Abel's theorem), 58, 59\nMultipliers of Thcta-functions, 463\nMurphy's formulae for Legendre functions and polynomials, 311, 312\n\nNeumann's definition of Bessel functions of the second kind, 372; expansions in series of\n\nLegendre and Bessel functions, 322, 374; (F. E. Neumann's) integral for the Legendre\n\nfunction of the second kind, 320; method of solving integral equations, 221\nNeumann's function [0,j (z)], 374; differential equation satisfied by, 385; expansion of, 374;\n\nexpansion of functions in series of, 376, 384; integral for, 375; integral properties of,\n\n56.5; recurrence formulae for, 375\nNon-uniform convergence. 44; and discontinuity, 47\nNormal functions, 224\n\nNormal solutions of Laplace's equation, 553\nNotations, for Bessel functions, 356, 372, 373; for Legendre functions, 325, 326; for quotients\n\nand reciprocals of elliptic functions, 494, 498; for Theta-functions, 464, 479, 487\nNucleus of an integi-al equation, 213; symmetric, 223, 228\nNumbers, 3-10 (Chapter i); basic, 462; Bernoulli's, 125; Cauchy's, 379; characteristic, 219,\n\n(reality of) 226; complex, 6; irrational, 6; irrational-real, 5; pairs of, 6; rational, 3, 4;\n\nrational-real, 5; real, 5\n\nOdd functions; of Mathieu, [.s(' (, 7)], 407\n\nOpen. 44\n\nOrder [O and o), 11; of Bernoullian polynomials, 126; of Bessel functions, 356; of elliptic\nfunctions, 432; of Legendre functions, 324; of Mathieu functions, 410; of poles of a\nfunction, 102; of terms in a series, 25; of the factors of a product, 33; of zeros of a\nfunction, 94\n\nOrdinary discontinuity, 42\n\nOrdinary point of a linear differential equation, 194\n\nOrthogonal coordinates, 394; functions, 224\n\nOscillation, 11\n\nParabolic cylinder functions [/>  (2:)], 347; contour integi-al for, 349; differential equation for,\n204,;i()9, 347; expansion in a power series, 347; expansion of a function as a series of, 351;\ngeneral asymptotic expansion of, 348; inequalities satisfied by, 354; integral equation\nsatisfied by, 231; integral properties, 350; integrals involving, 353; integrals representing,\n353; properties when n is an integer, 350, 353, 354; recurrence formulae, 350; relations\nbetween different kinds of [D,  z) and D-n-ii - i )]  348; zeros of, 354. 
See also Weber's\nequation\n\nParallelogram of periods, 430\n\nParameter, change of (method of solving Mathieu's equation), 424; connected with Theta-\nfunctions, 463, 464; of a point on a curve, 442, 496, 497, 527, 530, 533; of members of\nconfocal systems of quadrics, 547; of third kind of elliptic integral, 522; thermometric, 405\n\nParse val's theorem, 182\n\nPartial differential equations, property of, 390, 391. See (iIko Linear differential equations\n\nPartition function. 462\n\nParts, real an<l imaginary, 9\n\nPearson's function [w,, (z)], 353\n\nP-equation, Riemann's, 206, 337; connexion with the hypergeometric equation, 208, 283; solu-\ntions of, 2S3, 291, (relations between) 294; transformations of, 207\n\nPeriodic coefficients, equations with (Floquet's theory of), 412\n\nPeriodic functions, integrals involving, 256. See also Fourier series and Doubly periodic\nfunctions\n\n%\n% 605\n%\nPeriodicity factors, 463\n\nPeriodicity of circular and exponential functions, 585-587; of elliptic functions, 429, 434, 479,\n\n500, 502, 503; of Theta-funetions, 463\nPeriodic solutions of Mathieu's equation, 407\nPeriod-parallelogram, 430; fundamental, 430\n\nPeriods of elliptic functions, 429; qua functions of the modulus, 484, 498, 499, 501, 521\nPhase, 9\n\nPincherle's functions (modified Legendre functions), 335\nPlana's expansion, 145\n\nPochhammer's extension of Eulerian integrals, 256\n\nPoint, at infinity, 103; limit-, 12; representative, 9; singular, 194, 202\nPoles of a function, 102; at infinity, 104; irreducible set of, 430; number in a cell, 431; relations\n\nbetween zeros of elliptic functions and, 433; residues at, 432, 504; simple, 102\nPolygon, (fundamental) of automoi-phic functions, 455\nPolynomials, expi'essed as series of Legendre polynomials, 310; of Abel, 333; of Bernoulli, 126,\n\n127; of Legendre, xce Legendre polynomials; of Sonine, 352\nPopular conception of an angle, 589; of continuity, 41\nPositive integers, 3\n\nPower series, 29; circle of convergence of, 30; continuity of, 57, (Abel's theorem) 57; expan-\nsions of functions in, xee under the name  of special functiomi; identically vanishing, 58;\nMaclaurin's expansion in, 94; radius of convergence of, 30, 32; series derived from, 31;\nTaylor's expansion in, 93; uniformity of convergence of, 57\n\nPrincipal part of a function, 102; solution of a certain equation,\n482; value of an integral, 75;  value of the argument of a complex\nnumber, 9, 588\n\nPrinciple of convergence, 13\n\nPringsheim's theorem on summation of double series, 28\nProducts of Bessel functions, 379, 380, 383, 385, 428; of hypergeometric functions, 298. See\n\nalso Infinite products\n\nQuarter periods K, iK', 479, 498, 499, 501. 
See also Elliptic integrals\n\nQuartic, canonical form of, 513; integi'ation problem connected with, 452, 512\n\nQuasi-periodicity, 445, 447, 463\n\nQuotients of elliptic functions (Glai her's notation), 494, 511; of Theta-f unctions, 477\n\nRadius of convergence of power series, 30, 32\n\nRational functions, 105; expansions in series of, 134\n\nRational numbers, 3, 4; -real numbers, 5\n\nReal functions of real variables, 56\n\nReality of characteristic numbers, 226\n\nReal numbers, rational and irrational, 5\n\nReal part (li) of a complex number, 9\n\nRearrangement of convergent series, 25; of double series, 28; of infinite determinants, 37; of\ninfinite products, 33\n\nReciprocal functions, Volterra's, 218\n\nReciprocals of elliptic functions (Glaisher's notation), 494, 511\n\nRecurrence formulae, for Bessel functions, 359, 373, 374; for confluent hypergeometric functions,\n352; for Gegenbauer's function, 330; for Legendre functions, 307, 309, 318; for Neumann's\nfunction, 375; for parabolic cylinder functions, 350. See also Contiguous hypergeometric\nfunctions\n\nRegion, 44\n\nRegvilar, 83; distribution of discontinuities, 212; integrals of linear differential equations, 201,\n(of the third order) 210; points (singularities) of linear differential equations, 197\n\nRelations between Bessel functions, 360, 371; between confluent hypergeometric functions\nTr\\; . jjj (\u00b1 ) and M/ .    z, 346; between contiguous hypergeometric functions, 294; be-\ntween elliptic functions, 452; between parabolic cylinder functions D,j ( \u00b1 z) and D\\  \\ i ( \u00b1 iz),\n348; between poles and zeros of elliptic functions, 433; between Riemann Zeta-f unctions\nf (s) and f (1 - s), 269. See also Recurrence formulae\n\nRemainder after *; terms of a series, 15; in Taylor's series, 95\n\nRemovable discontinuity, 42\n\nRepeated integrals, 68, 75\n\nRepresentative point, 9\n\nResidues, 111-124 (Chapter vi); of elliptic functions,\n425, 497\n\n%\n% 606\n%\nRiemann's associated function, 183, 184, 185; condition of integrability, 63; equations satisfied\nby analytic functions, S4; hypothesis concerning f(j,), 272, 280; lemmas, 172, 184, 185;\n/'-equation, 206, 283, 291, 294, (transformation of) 207, (and the hypergeometric equation)\n208, nee (tho Hypergeometric functions; theory of trigonometiical series, 182-188; Zeta-\nfunction, xer Zeta-function (of Riemann)\n\nRiesz' method of ' summing ' series, 156\n\nRight  U-) class, 4\n\nRodrigues' formula for Legendre polynomials, 303; modified, for Gegenbauer's function, 329\n\nRoots of an equation, number of, (inside a contour) 123; of Weierstrassian elliptic\nfunctions (<'i . eo, e ), 443\n\nSaalschiitz' integral for the Gamma-function, 243\n\nSchlafli'-s Bessel function of the second kind, [r  (2)], 870\n\nSclilafli's integral for Bessel functions, 362, 372; for Legendre polynomials and functions, 303,\n\n304, 306; modified, for Gegenbauer's function, 329\nSchlomilch's expansion in series of Bessel coefficients, 377; function, 352; integi-al equation, 229\nSchmidt's theorem, 223\nSchwarz\" lemma, 186\n\nSecond kind. 
Bessel function of, (Hankel's) 370, (Neumann's) 372, (Weber-Schliifli), 370,\n(modified) 373; elliptic integral of [-E (m), Z ((/)], 517, (complete) 518; Eulerian integral of,\n241, (extended) 244; integi-al equation of, 213, 221; \\Lame\\ functions of, 562; Legendre\nfunctions of, 316-320, 325, 326\nSecond mean-value theorem, 66\n\nSecond solution of Bessel's equation, 370, 372, (modified) 373; of Legendre's equation, 316; of\nMathieu's equation, 413, 427; of the hypergeometric equation, 286, (confluent form) 343; of\nWeber's equation, 347\nSecond species of ellipsoidal harmonics, 537, (construction of) 540\nSection, 4\n\nSeififerfs spherical spiral, 527\nSequences, 11; decreasing, 12; increasing, 12\n\nSeries (infinite series), 15: absolutely convergent, 18; change of order of terms in, 25; con-\nditionally convergent, 18; convergence of, 15; differentiation of, 31, 79, 92; divergence of,\n15  geometric, 19; integration of, 32, 78; methods of summing, 154-156; multiplication\nof, 29, 58, 59; of analytic functions, 91; of cosines, 185; of cotangents, 139; of inverse\nfactorials, 142; of powers, see Power series; of rational functions, 134; of sines, 166; of\nvariable terms, 44 (see also Uniformity of convergence); order of terms in, 25; remainder of,\n15; representing particular functions, see ii) iler tlie name of the fitiietioit; solutions of\ndifferential and integi'al equations in, 194-202, 228; Taylor's, 93. Sec also Asymptotic\nexpansions. Convergence, Expansions, Foiirier series. Trigonometrical series and Uniformity\nof convergence\nSet, Irreducible (of zeros or poles), 430\n\nSigma-functions of Weierstrass [(t(z), <ri z), 0-2(2), 0-3(2)], 447, 448; addition formula for, 451,\n458, 460; analogy -with circular functions, 447; duplication formulae, 459, 460; four\ntypes of, 448; expression of elliptic functions by, 450; quasi-periodic properties, 447;\nsingly infinite product for, 448; three-term equation involving, 451, 461; Theta-functions\nconnected with, 448, 473, 487; triplication formula, 459\nSignless integers, 3\nSimple curve, 43; pole, 102; zero, 94\nSimply-connected region, 455\n\nSine, product for, 137. See also Circular functions\nSine-integral [Si (2)], 352; -series (Fourier series), 166 .\nSingly-periodic functions, 429. See also Circular functions\n\nSingularities, 83, 84, 102, 194, 197, 202; at infinity, 104; confluence of, 203, 337; equations\nwith five, 203; equations with three, 206, 210; equations with two, 208; equations with r,\n209; essential, 102, 104; irregular, 197, 202; regular, 197\nSingular points (singularities) of linear differential equations, 194, 202\nSolid harmonics. 
392\n\nSolution of Riemann's P-equation by hypergeometric functions, 283, 288\nSolutions of differential equations, see Chapters x, xviii, xxiii, and under the names of special\n\niiiuiitions\nSolutions of integral equations, see Chapter xi\nSonine s polynomial ['/',\" (2)], 352\nSpecies (various) of ellipsoidal harmonics, 537\n\n%\n% 607\n%\n\nSpherical harmonics, see Harmonics\n\nSpherical spiral, Seiffert's, 527\n\nSpheroidal harmonics, 403\n\nSquares of Bessel functions, 379, 380; of liypergeometric functions, 298; of Jacobian elliptic\n\nfunctions (relations between), 492; of Theta-f unctions (relations between), 466\nStatement of Fourier's theorem, Dirichlet's, 164, 176\nSteadily tending to zero, 17\nStieltjes' theorem on zeros of \\Lame\\ functions, 560, (generalised) 562; theory of Mathieu's\n\nequation, 417\nStirling's series for the Gamma-function, 251\nStokes' equation, 204\n\nStolz' condition for convergence of double series, 27\nSuccessive substitutions, method of, 221\nSum-formula of Euler and Maclaurin, 127\n\nSummability, methods of, 154-156; of Fourier series, 169; uniform, 156\nSurface harmonic, 392\nSurface, modular, 41\nSurfaces, nearly spherical, 332\n\nSylvesters theorem concerning integi-als of harmonics, 400\nSymmetric nucleus, 223, 228\n\nTabulation of Bessel functions, 378; of complete elliptic integi-als, 518; of Gamma-functions, 253\n\nTaylor's series, 93; remainder in, 95; failure of, 100, 104, 110\n\nTeixeira's extension of Biirmann's theorem, 131\n\nTelegraphy, equation of, 387\n\nTesseral harmonics, 392; factorisation of, 536\n\nTests for convergence, see Infinite integrals, Infinite products and Series\n\nThermometric parameter, 405\n\nTheta-functions [ i (;), S-o (z), \\&3  z),  4  z) or   (,-), 9 ( )], 462-490 (Chapter xxi); abridged nota-\ntion for products, 468, 469; addition formulae, 467; connexion with Sigma- functions, 448,\n473, 487; duplication formulae, 488; expression of elliptic functions by, 473; four types\nof, 463; fundamental formulae (Jacobi's), 467, 488; infinite products for, 469, 473, 488;\nJacobi's first notation, G ( ) and H (;/), 479; multipliers, 463; notations, 464, 479, 487;\nparameters q, t, 463; jmrtial differential equation satisfied by, 470; periodicity factors,\n463; periods, 463; quotients of, 477; quotients yielding Jacobian elliptic functions, 478;\nrelation S-i' = a-jS 3  4, 470; squares of (relations between), 466; transformation of, (Jacobi's\nimaginary) 124, 474, (Landen's) 476; triplication formulae for, 490; with zero argument\n( 9,  :j,  4,  1'), 464; zeros of, 465\n\nThird kind of elliptic integral, IT (u, a), 522; a dynamical application of, 523\n\nThird order, linear differential equations of, 210, 298, 418, 428\n\nThird species of ellipsoidal harmonics, 537, (construction of) 541\n\nThree kinds of elliptic integi-als, 514\n\nThree-term equation involving Sigma-f unctions, 451, 461\n\nTotal fluctuation, 57\n\nTranscendental functions, see under the names of special functions\n\nTransformations of elliptic functions and Theta-functions, 508; Jacobi's imaginary, 474, 505,\n506, 519; Landen's, 476, 507; of Eiemann's P-equation, 207\n\nTrigonometrical equations, 587, 588\n\nTrigonometrical integrals, 263; and Gamma-functions, 256\n\nTrigonometrical series, 160-193 (Chapter ix); convergence of, 161; values of coefficients in, 163;\nEiemann's theory of, 182-188; which are not Fourier series, 160, 163. 
See also Fourier series\n\nTriplication formulae for Jacobian elliptic functions and E(u), 530, 534; for Sigma-functions,\n459; for Theta-functions, 490; for Zeta-functions, 459\n\nTwenty-four solutions of the hypergeometric equation, 284; relations between, 285, 288, 290\n\nTwo-dimensional continuum, 43\n\nTwo variables, continuous functions of, 67; hypergeometric functions (Appell's) of, 300\n\nTypes of ellipsoidal harmonics, 537\n\nUnicursal, 455\nUniformisation, 454\n\n%\n% 608\n%\n\nUniformising variables, 455; associated with confocal coordinates, 549\n\nUniformity, concept of, 52\n\nUniformity of continuity, 54; of summability, 156\n\nUniformity of convergence, 41-60 (Chapter iii), defined, 44; of Fourier series, 172, 179, 180; of\ninfinite integrals, 70, 72, 73; of infinite products, 49; of power series, 57; of series, 44,\n(condition for) 45, (Hardy's test for) 50, (Weierstrass' M-test for) 49\nUniformly convergent infinite integrals, properties of, 73; series of analytic functions, 91,\n(differentiation of) 92\nUniqueness of an asymptotic expansion, 153; of solutions of linear differential equations, 196\nUpper bound, 55; integral, 61\nUpper limit, integral equation with variable, 213, 221; to the value of a complex integral, 78, 91\n\nValue, absolute, see Modulus; of the argument of a complex number, 9, 588; of the coefficients\nin Fourier series and trigonometrical series, 163, 165, 167, 174; of particular hypergeometric\nfunctions, 281, 293, 298, 301; of Jacobian elliptic functions of 1/2 K, 1/2 iK', 1/2 (K+iK'), 500,\n506, 507; of K, K' for special values of k, 521, 524, 525; of ζ(s) for special values of s,\n267, 269\n\nVanishing of power series, 58\n\nVariable, uniformising, 455; terms (series of), see Uniformity of convergence; upper limit,\nintegral equation with, 213, 221\n\nVibrations of air in a sphere, 399; of circular membranes, 396; of elliptic membranes, 404, 405\n\nVolterra's integral equation, 221; reciprocal functions, 218\n\nWave motions, equation of, 386; general solution, 397, 402; solution involving Bessel functions,\n397\n\nWeber's Bessel function of the second kind [Y_n(z)], 370\n\nWeber's equation, 204, 209, 342, 347. See also Parabolic cylinder functions\n\nWeierstrass' factor theorem, 137; M-test for uniform convergence, 49; product for the Gamma-\nfunction, 235; theorem on limit points, 12\n\nWeierstrassian elliptic function [℘(z)], 429-461 (Chapter xx), defined and constructed, 432,\n433; addition theorem for, 440, (Abel's method) 442; analogy with circular functions,\n438; definition of √[℘(z) - e_r], 451; differential equation for, 436; discriminant of, 444;\nduplication formula, 441; expression of elliptic functions by, 448; expression of ℘(z) - ℘(y)\nby Sigma-functions, 451; half-periods, 444; homogeneity properties, 439; integral formula\nfor, 437; integration of irrational functions by, 452; invariants of, 437; inversion problem\nfor, 484; Jacobian elliptic functions and, 505; periodicity, 434; roots e_1, e_2, e_3, 443. 
See\nalso Sigma-functions and Zeta-function (of Weierstrass)\n\nWhittaker's function W_{k, m}(z), see Confluent hypergeometric functions\n\nWronski's expansion, 147\n\nZero argument, Theta-functions with, 464; relation between, 470\n\nZero of a function, 94; at infinity, 104; simple, 94\n\nZeros of a function and poles (relation between), 438; connected with zeros of its derivative,\n123; irreducible set of, 430; number of, in a cell, 431; order of, 94\n\nZeros of functions, (Bessel's) 361, 367, 378, 381, (Lame's) 557, 558, 560, 578, (Legendre's) 303,\n316, 335, (parabolic cylinder) 354, (Riemann's Zeta-) 268, 269, 272, 280, (Theta-) 465\n\nZeta-function, Z(u), (of Jacobi), 518; addition formula for, 518; connexion with E(u), 518;\nFourier series for, 520; Jacobi's imaginary transformation of, 519. See also Jacobian\nelliptic functions\n\nZeta-function, ζ(s), ζ(s,a), (of Riemann) 265-280 (Chapter xiii), (generalised by Hurwitz) 265;\nEuler's product for, 271; Hermite's integral for, 269; Hurwitz' integral for, 268; in-\nequalities satisfied by, 274, 275; logarithmic derivate of, 279; Riemann's hypothesis\nconcerning, 272, 280; Riemann's integrals for, 266, 273; Riemann's relation connecting ζ(s)\nand ζ(1 - s), 269; values of, for special values of s, 267, 269; zeros of, 268, 269, 272, 280\n\nZeta-function, ζ(z), (of Weierstrass), 445; addition formula, 446; analogy with circular\nfunctions, 446; constants η_1, η_2 connected with, 446; duplication formulae for, 459; ex-\npression of elliptic functions by, 449; quasi-periodicity, 445; triplication formulae, 459.\nSee also Weierstrassian elliptic functions\n\nZonal harmonics, 302, 392; factorisation of, 536", "meta": {"hexsha": "05d2776fe182828dbb6af962128220ab34583d68", "size": 56225, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/wandw-index-todo.tex", "max_stars_repo_name": "CdLbB/Whittaker-and-Watson", "max_stars_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/wandw-index-todo.tex", "max_issues_repo_name": "CdLbB/Whittaker-and-Watson", "max_issues_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/wandw-index-todo.tex", "max_forks_repo_name": "CdLbB/Whittaker-and-Watson", "max_forks_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.887109077, "max_line_length": 106, "alphanum_fraction": 0.7593063584, "num_tokens": 17548, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5678156817059588}}
{"text": "\n\\section{Atomic Form Factor Theory}\nThe atomic form factor is given by the x-ray scattering amplitude $f$ \nwhich by convention separated into a number of components. \nIt is commonly written as~\\cite{Chantler-Book}\n\\begin{equation}\n    f = f_0 + f' + if''\n\\end{equation}\nwhere $f_0 = f_0(q,Z)$ is known as the normal or coherent scattering amplitude\nand is a function of momentum transfer $q$ and atomic number $Z$.\nThe anomalous scattering component (also known as the anomalous dispersion or\nresonant scattering component) has real and imaginary parts given by $f'$ and\n$f''$. The anomalous component is a function of the incident photon energy.\nIn the literature $f'$ and $f''$ are also known as $f_1$ and $f_2$.\n\n\\subsection{Normal Component}\nThe normal component of the atomic form factor is defined as the Fourier\ntransform of an atom's electronic charge density~\\cite{Crasemann}.\nThe expression for $f_0(q)$ with an atom with electronic charge density\n$\\rho(\\mb{r})$, is given below with the second equation being valid for the case\nof a spherically symmetric atom~\\cite{Chantler-Book}.\n\\begin{equation} \\label{eq:nff-spherical}\n    f_0(q) = \\int \\rho(\\mb{r}) e^{i\\mb{q}\\cdot\\mb{r}} \\; d\\mb{r}\n           = \\int_0^\\infty \\rho(r) \\frac{\\sin(qr)}{qr} r^2 \\; dr\n\\end{equation}\nThe momentum transfer is defined as\n\\begin{equation}\n    q = |\\mb{k_{final}} - \\mb{k_{initial}}| = \\frac{4\\pi \\sin(\\theta/2)}{\\lambda}\n\\end{equation}\nwhere $\\lambda$ is the wavelength of the incident photon and $\\theta$ is the\nscattering angle. It is conventional to measure the momentum transfer $q$\nin inverse Angstroms ($\\InvAngstrom$).\n  \n\\subsection{Anomalous Component}\nThe imaginary part of the anomalous form factor is related to the total\nphotoionisation cross section.\n\\begin{equation} \\label{eq:fpp-sigma}\n    f''(\\omega) = \\frac{\\omega}{4\\pi c r_0} \\sigma(\\omega)\n\\end{equation}\nThe photon energy is given by $\\hbar\\omega$, $c$ is the speed of light and $r_0$\nis the classical electron radius. The photionisation cross section \n$\\sigma(\\omega)$ may also include cross sections from bound-bound\ntransitions.\n\nThe $f'$ component may be computed from the $f''$ component using a \ndispersion relation.\n\\begin{equation}\n    f'(\\omega) = f'(\\infty) - \\frac{2}{\\pi} P\n                 \\int_0^\\infty \n                 \\frac{\\omega' f''(\\omega')}{\\omega^2 - {\\omega'}^2} \\; d\\omega'\n\\end{equation}\n\n\\subsection{Theoretical Limitations and Assumptions}\nThe theoretical calculations of atomic form factors are usually made with a\nnumber of limitations and/or assumptions. These fall into a number of different\ncategories. Improvements to existing theories attempt to eliminate one or more\nof these assumptions or limitations.\n\n\\begin{description}\n    \\item[\\it 1. ATOMIC STRUCTURE :] \n    For a simple atom such as hydrogen, the quantum\n    mechanical wave-functions of an atom may be either non relativistic (standard\n    Schr\\\"odinger wave-functions), or relativistic (four component Dirac\n    spinors). For many-electron atoms there is still a choice between non\n    relativistic and relativistic wave-functions but additional choices have\n    to be made as how to compute these wave-functions as analytic solutions\n    are not available. 
Methods for computing many-electron wave-functions include\n    Hartree-Fock (HF), Dirac-Hartree-Fock (DHF), Hartree-Slater (HS), \n    Multi-Configuration-Dirac-Fock\n    (MCDF) and all-orders methods which include quantum electrodynamic\n    corrections~\\cite{Sapirstein}. Although issues such as the suitability of the\n    independent particle approximation (IPA) and the computational methods used\n    to determine atomic structure are important in the areas of atomic physics,\n    many body perturbation theory and computational chemistry, they are also of\n    relevance to form factor theory due to the reliance on accurately\n    computed wave-functions.\n    \\item[\\it 2. ELECTROMAGNETIC FIELD :]\n    The incident photon is modeled as either a classical or quantised\n    electromagnetic field. The simpler approach of using a classical\n    electromagnetic field is more common, and approximations to this model\n    involve considering only the electric dipole (E1) and/or electric quadrupole\n    (E2) terms. Alternatives include a relativistic multipole\n    (RMP) approach or an ``all-poles'' approach, in\n    which no approximations to the classical electromagnetic field are made.\n    \\item[\\it 3. ISOLATED ATOM :]\n    Most form factor calculations are done for a single isolated atom. However,\n    experimentally it is extremely difficult if not impossible to obtain results\n    from an isolated atom. As a result, there are inherent limitations in\n    theories which consider only the case of an isolated atom. Experimentally,\n    effects such as XAFS (X-ray Absorption Fine Structure) arise from multiple\n    scattering processes off multiple atoms.\n    \\item[\\it 4. PERTURBATION THEORY :]\n    Models of atom-photon interactions usually treat the electromagnetic field\n    model of the photon as a small perturbation, and as such there are a number of\n    approaches to computing the required matrix elements and relevant scattering\n    amplitudes. The main issues are the order of the perturbation theory \n    (usually first or second order) and the type of perturbation theory used -\n    standard (time dependent or time independent), relativistic perturbation\n    theory and second order S-matrix theory, which is obtained from covariant\n    perturbation theory.\n    \\item[\\it 5. ADDITIONAL PROCESSES :]\n    A number of processes can occur when a photon interacts with an atom. \n    An atomic form factor calculation usually includes one or more of these\n    processes. These processes include:\n        \\begin{itemize}\n            \\item Photoionisation\n            \\item Bound-Bound Transitions\n            \\item Rayleigh (Coherent) Scattering\n            \\item Compton (Incoherent) Scattering\n            \\item Delbr\\\"uck Scattering\n            \\item Pair Production\n            \\item Nuclear Thomson Scattering\n        \\end{itemize}\n    \\item[\\it 6. NUMERICAL AND COMPUTATIONAL :]\n    Form factor calculations require considerable computational work. Therefore\n    a number of numerical and computational issues need to be considered. 
These\n    include choices of integration and interpolation methods, numerical\n    precision, convergence and computation time.\n\\end{description}\n\n\n\n\n\n\n", "meta": {"hexsha": "1d758eb8c1f3a03a343a0b749d90c109feafdb95", "size": 6433, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "intro_theory.tex", "max_stars_repo_name": "mikepsn/atomic-form-factors-thesis", "max_stars_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "intro_theory.tex", "max_issues_repo_name": "mikepsn/atomic-form-factors-thesis", "max_issues_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "intro_theory.tex", "max_forks_repo_name": "mikepsn/atomic-form-factors-thesis", "max_forks_repo_head_hexsha": "63a8d151c4d53c369d223d44ff80c075ea102420", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0555555556, "max_line_length": 81, "alphanum_fraction": 0.7413337479, "num_tokens": 1558, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8577680977182186, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5677763260540221}}
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n\n\\subsection*{dynamics\\_pendulum.m} \n\n\\begin{par}\n\\textbf{Summary:} Implements ths ODE for simulating the pendulum dynamics, where an input torque f can be applied\n\\end{par} \\vspace{1em}\n\n\\begin{verbatim}  function dz = dynamics_pendulum(t,z,u)\\end{verbatim}\n    \\begin{par}\n\\textbf{Input arguments:}\n\\end{par} \\vspace{1em}\n\n\\begin{lstlisting}\n%\t\tt     current time step (called from ODE solver)\n%   z     state                                                    [2 x 1]\n%   u     (optional): torque f(t) applied to pendulum\n%\n% *Output arguments:*\n%\n%   dz    if 3 input arguments:      state derivative wrt time\n%\n%   Note: It is assumed that the state variables are of the following order:\n%         dtheta:  [rad/s] angular velocity of pendulum\n%         theta:   [rad]   angle of pendulum\n%\n% A detailed derivation of the dynamics can be found in:\n%\n% M.P. Deisenroth:\n% Efficient Reinforcement Learning Using Gaussian Processes, Appendix C,\n% KIT Scientific Publishing, 2010.\n%\n%\n% Copyright (C) 2008-2013 by\n% Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.\n%\n% Last modified: 2013-03-18\n\nfunction dz = dynamics_pendulum(t,z,u)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\nl = 1;    % [m]        length of pendulum\nm = 1;    % [kg]       mass of pendulum\ng = 9.82; % [m/s^2]    acceleration of gravity\nb = 0.01; % [s*Nm/rad] friction coefficient\n\ndz = zeros(2,1);\ndz(1) = ( u(t) - b*z(1) - m*g*l*sin(z(2))/2 ) / (m*l^2/3);\ndz(2) = z(1);\n\\end{lstlisting}\n", "meta": {"hexsha": "6327b78bde0bdde919e881ab16290e4e5d92fd30", "size": 1638, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/dynamics_pendulum.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/dynamics_pendulum.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/dynamics_pendulum.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 26.0, "max_line_length": 113, "alphanum_fraction": 0.6446886447, "num_tokens": 510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8006920068519376, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5677059580499012}}
{"text": "\\section{Exact multivariate amplitude distributions}\n\\label{sec:exact_distributions}\n\n\nIn Sect. \\ref{subsec:general_considerations} we set the theoretical\nconsiderations to construct the distributions. In Sect.\n\\ref{subsec:distributions} we define the four distribution cases and in Sect.\n\\ref{subsec:graphical_distributions} we show their graphical representation.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{General considerations}\\label{subsec:general_considerations}\n\nTo compare the $K$ variate distributions with data, the crucial idea is to\nconstruct $K$ univariate distributions out of the $K$ variate one which are\nthen overlaid \\cite{exact_distributions_guhr}. To decouple the amplitudes, we\nrotate the vector $r$ into the eigenbasis of the eigenbasis of the covariance\nmatrix $\\Sigma$ \\cite{non_stationarity_fin_guhr,exact_distributions_guhr}. More\nprecisely, we use the diagonalization\n\\begin{align}\n    \\Sigma = U \\Lambda U^{\\dagger} \\text{ such that }\n    \\Sigma^{-1/2} = U \\Lambda^{-1/2} U^{\\dagger},\n\\end{align}\nwhere $U$ is an orthogonal $K \\times K$ matrix and $\\Lambda$ is the diagonal\nmatrix of the eigenvalues $\\Lambda_{k}$. As they are positive definite, the\nsquare roots $\\Lambda_{k}^{1/2}$ are real, so we choose them positive. We use\nthe rotated amplitudes\n\\begin{equation}\n    \\tilde{r} = U^{\\dagger} r\n\\end{equation}\nas new arguments of the ensemble averaged amplitude distribution.\n\nWhen analyzing data, $K$ is given, we obtain the matrix $\\Sigma$ by using the\noriginally measured amplitudes for sampling over the long time interval. In all\ncases, the parameter N is a fit parameter, measuring the strength of the\nfluctuations. Experience tells that $N$ sensitively determines the shape and is\nbest obtained by fitting the whole distribution to the data. For three cases,\nthe parameters $L$ and $l$ are shape parameters.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Four cases distributions}\\label{subsec:distributions}\n\nAs the aim of this paper is to compare the proposed distributions with\nempirical data, we present the final form of the corresponding distributions.\nFor a detailed and complete explanation of the process to obtain the\ndistributions, we suggest reviewing the work of Guhr and Schell\n\\cite{exact_distributions_guhr}.\n\nFor the following cases, $K$ is the number of companies,\n$\\Gamma\\left(\\ldots\\right)$ is the gamma function, $\\Lambda_{k}$ is the $k$-th\neigenvalue of $\\Sigma$, $\\operatorname{K\\left(\\ldots\\right)}$ is the modified\nBessel function of second kind, $\\operatorname{U}\\left(\\ldots\\right)$ is the\nconfluent hypergeometric function and\n$_{2}\\operatorname{F}_{1}\\left(\\ldots\\right)$ is the Gauss hypergeometric\nfunction. $N$ is a fit parameter, and $M$, $m$, $L$ and $l$ are shape\nparameters. We use Eq. 
\\ref{eq:m_relation} to compute $m$ and\n\\begin{equation}\n    M = 2L - K - N - 1\n\\end{equation}\nto compute $M$.\n\nIn the Gaussian-Gaussian case with the Markovian situation $D = \\mathbb{1}_{N}$\nthe distribution reads\n\\begin{equation}\n    \\left\\langle p \\right\\rangle_{GG}^{\\left(k\\right)}\n    \\left(\\tilde{r}_{k} \\vert \\Lambda_{k}, \\mathbb{1}_{N}\\right) =\n    \\frac{1}{2^{\\left(N - 1\\right) / 2} \\Gamma \\left(N / 2\\right)\n    \\sqrt{\\pi \\Lambda_{k} / N}}\n    {\\sqrt{\\frac{N \\tilde{r}_{k}^2}{\\Lambda_{k}}}}^{\\left(N - 1\\right) / 2}\n    \\operatorname{K}_{\\left(1 - N\\right)/2}\n    \\left( \\sqrt{\\frac{N \\tilde{r}^2_{k}}{\\Lambda_{k}}}\\right),\n\\end{equation}\nIn the Gaussian-algebraic case with the Markovian situation\n$D = \\mathbb{1}_{N}$ the distribution reads\n\\begin{equation}\n    \\begin{split}\n    \\left\\langle p \\right\\rangle_{GA}^{\\left(k\\right)}\n    \\left(\\tilde{r}_{k} \\vert \\Lambda_{k}, \\mathbb{1}_{N}\\right) =\n    &\\frac{\\Gamma\\left(L - \\left(K + N \\right) / 2 + 1\\right)\n    \\Gamma\\left(L - \\left(K - 1\\right) / 2\\right)}\n    {\\Gamma\\left(L - \\left(K + N - 1\\right) / 2\\right) \\Gamma\\left(N / 2\\right)\n    \\sqrt{2\\pi \\Lambda_{k}M/N}} \\\\\n    & \\operatorname{U} \\left(L - \\frac{K + N}{2} + 1, \\frac{1 - N}{2} + 1,\n    \\frac{N}{2M} \\frac{\\tilde{r}^{2}_{k}}{\\Lambda_{k}}\\right)\n    \\end{split}\n\\end{equation}\nIn the algebraic-Gaussian case with the Markovian situation\n$D = \\mathbb{1}_{N}$ the distribution reads\n\\begin{equation}\n    \\begin{split}\n    \\left\\langle p \\right\\rangle_{AG}^{\\left(k\\right)}\n    \\left(\\tilde{r}_{k} \\vert \\Lambda_{k}, \\mathbb{1}_{N}\\right) =\n    &\\frac{\\Gamma\\left(l - \\left(K - 1 \\right) / 2\\right)\n    \\Gamma\\left(l - \\left(K - N\\right) / 2\\right)}\n    {\\Gamma\\left(l - K / 2\\right) \\Gamma\\left(N / 2\\right)\n    \\sqrt{2\\pi \\Lambda_{k}m/N}} \\\\\n    & \\operatorname{U} \\left(l - \\frac{K - 1}{2}, \\frac{1 - N}{2} + 1,\n    \\frac{N}{2m} \\frac{\\tilde{r}^{2}_{k}}{\\Lambda_{k}}\\right)\n    \\end{split}\n\\end{equation}\nFinally, in the algebraic-algebraic case with the Markovian situation\n$D = \\mathbb{1}_{N}$ the distribution reads\n\\begin{equation}\n    \\begin{split}\n    \\left\\langle p \\right\\rangle_{AA}^{\\left(k\\right)}\n    \\left(\\tilde{r}_{k} \\vert \\Lambda_{k}, \\mathbb{1}_{N}\\right) =\n    &\\frac{\\Gamma\\left(l - \\left(K - 1 \\right) / 2\\right)\n    \\Gamma\\left(l - \\left(K - N\\right) / 2\\right)}\n    {\\Gamma\\left(l - K/ 2\\right) \\Gamma\\left(L + l - \\left(K - 1\\right) \\right)\n    \\sqrt{\\pi \\Lambda_{k}Mm/N}} \\\\\n    &\\frac{\\Gamma\\left(L - \\left(K + N \\right) / 2 + 1\\right)\n    \\Gamma\\left(L - \\left(K - 1\\right) / 2\\right)}\n    {\\Gamma\\left(L - \\left(K + N - 1\\right) / 2\\right) \\Gamma\\left(N / 2\\right)\n    } \\\\\n    & _{2}\\operatorname{F}_{1} \\left(l - \\frac{K - 1}{2}, L -\\frac{K + N}{2}+1,\n    L + l - \\left(K - 1\\right), 1 - \\frac{N}{Mm} \\frac{\\tilde{r}^{2}_{k}}\n    {\\Lambda_{k}}\\right).\n    \\end{split}\n\\end{equation}\n\nFor a visual comparison of the distributions, we plot the GG, GA, AG and AA\ndistributions in the Markovian case in the same figure. In Fig.\n\\ref{fig:distributions_comparison} we consider $K = 100$ positions with shape\nparameters $L = 55$, $l = 55$, as well as $N = 5$ which is a typical value from\nan empirical viewpoint. 
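The curves of such a comparison can be reproduced directly from the closed\nforms above. The following is a minimal Python sketch (assuming NumPy and\nSciPy; the function name \\texttt{p\\_gg} is ours, chosen for illustration)\nthat evaluates the Gaussian-Gaussian density in the Markovian case:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import gamma, kv  # kv: modified Bessel K of real order\n\ndef p_gg(r, lam_k, N):\n    # Gaussian-Gaussian density <p>_GG^(k)(r | Lambda_k, 1_N)\n    s = np.sqrt(N * r**2 / lam_k)\n    norm = 2**((N - 1) / 2) * gamma(N / 2) * np.sqrt(np.pi * lam_k / N)\n    return s**((N - 1) / 2) * kv((1 - N) / 2, s) / norm\n\n# evaluate away from r = 0, where kv itself diverges and a limit is needed\nr = np.linspace(0.01, 5.0, 200)\ndensity = p_gg(r, lam_k=1.0, N=5)\n\\end{verbatim}\n\n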
The left panels show the probability densities in linear\nscale and the right panels show them in logarithmic scale.\nFrom the figure, it can be seen that the more algebraic the case, the more\nstrongly peaked is the distribution and the heavier are its tails.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.9\\columnwidth]\n    {figures/07_distributions_comparison.png}\n    \\caption{Probability densities\n             $\\left\\langle p \\right\\rangle_{YY'}^{\\left(k\\right)}$, in the\n             Markovian case versus the rotated amplitudes $\\tilde{r}$,\n             normalized to unit standard deviation. The four cases\n             Gaussian-Gaussian, Gaussian-Algebraic, Algebraic-Gaussian and\n             Algebraic-Algebraic are labeled $YY' = GG$, $GA$, $AG$ and $AA$,\n             respectively. Number of positions $K = 100$, shape parameters\n             $L = 55$ and $l = 55$, strength parameter for fluctuations of\n             correlations $N = 5$. (left) linear scale and (right) logarithmic\n             scale.}\n    \\label{fig:distributions_comparison}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Graphical representations}\\label{subsec:graphical_distributions}\n\nTo plot a comparison of the cases involving algebraic distributions\nwith the Gaussian-Gaussian case, we choose values of $L$ and $l$ which\nensure the existence of the first matrix moment. We notice that the conditions\nfor the existence of the algebraic distributions, i.e., of their normalizations,\nare slightly weaker. The variances\n$\\left\\langle \\tilde{r}_{k}^{2} \\right\\rangle_{YY'}^{\\left(k\\right)}$ are simply\ngiven by $\\Lambda_{k}$. The functional form of all distributions\n$\\left\\langle p \\right\\rangle_{YY'}^{\\left(k\\right)} \\left(\\tilde{r}_{k} \\vert D \\right)$\nthen allows us to normalize the rotated amplitude $\\tilde{r}_k$ by the standard\ndeviation\n\\begin{equation}\n    \\tilde{r} = \\frac{\\tilde{r}_{k}}{\\sqrt{\\Lambda_{k}}}\n\\end{equation}\nsuch that all $K$ distributions in this variable coincide and the corresponding\nvariances are all given by one.", "meta": {"hexsha": "660a2495bfa51cb182a2c75b8f27415a5e1ee11d", "size": 8136, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/exact_distributions_financial_paper/sections/07_formulae.tex", "max_stars_repo_name": "juanhenao21/exact_distributions_financial", "max_stars_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-20T18:24:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-15T07:25:50.000Z", "max_issues_repo_path": "paper/exact_distributions_financial_paper/sections/07_formulae.tex", "max_issues_repo_name": "juanhenao21/exact_distributions_financial", "max_issues_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/exact_distributions_financial_paper/sections/07_formulae.tex", "max_forks_repo_name": "juanhenao21/exact_distributions_financial", "max_forks_repo_head_hexsha": "02eb058e5f963fbccb9029aae3fb6e15def7a93a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
49.9141104294, "max_line_length": 88, "alphanum_fraction": 0.6702310718, "num_tokens": 2492, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7090191337850933, "lm_q1q2_score": 0.567705956498892}}
{"text": "\\newcommand{\\RN}[1]{%\n  \\textup{\\uppercase\\expandafter{\\romannumeral#1}}%\n}\n\n\\Lecture{Jayalal Sarma}{Oct 19, 2020}{17}{Generating Functions(continued)}{Lalithaditya}{$\\alpha$}{JS}\n\n\\section{Quick Recap of Previous Two Lectures}\n \n\\begin{itemize}\n\t\\item We represented the sequence of non-negative integers in the form of a formal power series.\n\t\\item Operations on power series corresponding to combinatorial meanings.\n\t\\item We used the concept of Generating Functions for the following examples:\n\t\t\\begin{enumerate}\n\t\t\t\\item Distributing 'n' votes to 'k' candidates such that every candidate gets atleast one vote.\n\t\t\t\\item Count the number of non-negative solutions for the equation $a+b+c=n$\n\t\t\t\\item Derving the expression for Catalan numbers.\n\t\t\\end{enumerate}\t\t \n\\end{itemize}\n\n\\section{Recurrence Relations}\n\nThere are three types of recurrence relations,that are being discussed in this lecture. There are Linear Recurrence Relations, Degree Recurrence Relations and Homogenous Recurrence Relations.\\\\ \\ Before getting into examples,lets discuss about these relations.\n\n\\begin{itemize}\n\t\\item \\textbf{Linear Recurrence Relation:}\\\\ \\\\A Linear Recurrence Relation is a equation that defines  $n\\textsuperscript{th}$ in a sequence in terms of the $k$ previous terms in the sequence. The recurrence relation is in the form:\\\\$$a_n = c_1.a_{n-1}~+~c_2.a_{n-2}~+~c_3.a_{n-3}~+~\\dots~+~ c_k.a_{n-k}$$ $$ =\\sum_{i=1}^{k} c_i*a_{n-i}$$ $where~c_i's~are~constants~independent~of~n$,\\\\ $c_1,c_2,c_3,\\dots,c_k \\in \\mathbb{R} $ and $c_k \\neq 0$.\n\t\n\t\\item \\textbf{Degree Recurrence Relation:}\\\\ \\\\A recurrence relation of degree d is said to be Degree Recurrence Relation where $a_n$ depends only on $a_{n-d}$.\n\t\n\t\\item \\textbf{Homogenous Recurrence Relation:}\\\\ \\\\ A recurrence relation where each term of the right hand side of the equation has the same degree.\n\t\t\n\t\\item \\textbf{Some examples on recurrence relations:}\n\t\\begin{enumerate}\n\t\t\\item $a_n = 5.a_{n-1}~+~a_{n-2}.a_{n-3}$ : This is neither linear nor homogenous.\n\t\t\\item $a_n = a_{n-1}.a_{n-2}~+~a_{n-3}.a_{n-4}$ : This is not linear but homogenous of degree 4.\n\t\t\\item $a_n = 5.a_{n-2}~+~10^n$ : This is linear but not homogenous of degree 2.\n\t\\end{enumerate}\n\\end{itemize}\n\n\\section{Using Generating Functions to solve recurrence relations}\n\nIn this section, we will look how to solve recurrence relations using generating functions.\n\\\\ \\\\\n\\textbf{Example 1:}\\\\\nIn the previous lectures,we can calculated the number of binary strings of length n, which have even number of $0$'s. It turned out to be $2^{n-1}$.\\\\\nSimilarly, calculate the number of decimal strings of length n, which contain even number of $0$'s.\\\\ \\\\\n\\textbf{Solution:}\\\\\nLet the $a_n$ be the number of decimal strings,which satisfy the given condition.\\\\\nBy convention,lets take that when $n=0$, the number of such strings is 1.\\\\\nIf $n=1$,then the number of such strings will be 9.\\\\\n$\\implies a_0=1$ and $a_1=9$.\\\\ \\\\\n\\textbf{Forming the recurrence relation:}\nLet's take a n-length decimal string, and let $d_n$ be the last digit in the string. There are two cases for this type of situation i.e. 
$d_n=0$ or $d_n \\neq 0$.\\\\\n\\underline{\\textbf{Case-\\RN{1}:}} \\\\If the last digit is 0, then the remaining string must have an odd number of zeroes. The number of such strings is $(10^{n-1} - a_{n-1})$.\\\\ \\\\\n\\underline{\\textbf{Case-\\RN{2}:}}\\\\\nIf the last digit is not zero, then the remaining string must have an even number of zeroes, and the number of such strings of length $n-1$ is $a_{n-1}$. The last digit can vary over $1,2,3, \\dots,9$. Therefore, the number of such strings is $(9a_{n-1})$.\\\\ \\\\\nThe resultant recurrence relation for $a_n$ is,\\\\\n$$\\implies a_n~=~(10^{n-1}-a_{n-1}+9a_{n-1})$$\n$$\\implies a_n~=~(10^{n-1}+8a_{n-1})$$\nThe generating function for this problem is,\n\\begin{equation}\nG(x) = \\sum_{n \\geq 0} a_n.x^n\n\\end{equation}\n$$ G(x)~=~a_0~+~\\sum_{n \\geq 1}a_n.x^n $$\n$$ G(x)~=~a_0~+~\\sum_{n \\geq 1}(10^{n-1}+8a_{n-1})x^n$$\n$$ G(x)~=~1~+~\\sum_{n \\geq 1} 8.a_{n-1}.x^n~+~\\sum_{n \\geq 1}10^{n-1}.x^n $$\n\n$$ G(x)~=~1~+~8.x ~\\sum_{n \\geq 1} a_{n-1}.x^{n-1}~+~x~ \\sum_{n \\geq 1}10^{n-1}.x^{n-1} $$\n\nLet $n-1~=~h$. Then,\n\n$$ G(x)~=~1~+~8.x ~\\sum_{h \\geq 0} a_{h}.x^{h}~+~x~ \\sum_{h \\geq 0}10^{h}.x^{h} $$\n\nAfter renaming the variable, we have\n\n$$ G(x)~=~1~+~8.x ~\\sum_{n \\geq 0} a_{n}.x^{n}~+~x~ \\sum_{n \\geq 0}10^{n}.x^{n} $$\n\nFrom the equation (17.72), we can see that $G(x) = \\sum_{n \\geq 0} a_n.x^n$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x~ \\sum_{n \\geq 0}10^{n}.x^{n} $$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x~ \\sum_{n \\geq 0}{(10.x)}^{n} $$\n\nFrom the sum of an infinite geometric progression, we have\n\n$$\\sum_{n \\geq 0}{(10.x)}^{n} = \\frac{1}{1-10.x}$$\n\n$$ G(x)~=~1~+~8.x.G(x)~+~x.\\left(\\frac{1}{1-10.x}\\right) $$\n\nAfter rearranging the terms, we finally obtain $G(x)$ as,\n\n$$G(x)~=~\\frac{\\left(1-9.x\\right)}{(1-8.x).(1-10.x)}$$\n\nBy using the concept of partial fractions, let's split the above into two fractions,\n\n$$\\frac{\\left(1-9.x\\right)}{(1-8.x).(1-10.x)} ~=~ \\frac{A}{(1-8.x)}~+~\\frac{B}{(1-10.x)}$$\n\n$$=~\\frac{A+B-(10.A.x)-(8.B.x)}{(1-8.x).(1-10.x)}$$\n\n$$\\implies A+B=1~and~10.A+8.B=9$$\n\nAfter solving for $A$ and $B$, we get $A=\\frac{1}{2}$ and $B=\\frac{1}{2}$.\n\n\\begin{equation}\n\tG(x)~=~\\frac{\\frac{1}{2}}{(1-8.x)}~+~\\frac{\\frac{1}{2}}{(1-10.x)}\n\\end{equation}\n\nOur aim was to find the number $a_n$, which is nothing but the coefficient of $x^n$ in $G(x)$.\n\n$$\\implies Coefficient~of~x^n~in~G(x)~=~ \\left(\\frac{1}{2}.8^n\\right) + \\left(\\frac{1}{2}.10^n\\right)$$\n$$\\hspace{1ex}=~\\frac{8^n+10^n}{2}$$\n\n$\\therefore$ The number of decimal strings with an even number of zeroes is $\\left(\\frac{8^n+10^n}{2}\\right)$.\n\\\\ \\\\\n\\textbf{Example 2:}\\\\\nIn this example, we are not using any recurrence relations; instead, we prove a combinatorial identity using generating functions.\n\\\\ \\\\\nFor $n \\geq k$, prove that \n\\begin{equation}\n\t\\sum_{m=k}^{n}{m \\choose k}~=~{n+1 \\choose k+1}\n\\end{equation}\n\\textbf{Solution:}\\\\\n\nFor a fixed $k$, let's define $$a_n~=~\\sum_{m=k}^{n}{m \\choose k} $$\n\nThe generating function for this problem is,\n\n\\begin{equation}\n S(x)~=~ \\sum_{n \\geq k}a_n.x^n\n\\end{equation}\n\nWe can observe that in the above summation, $n$ starts from $k$. It could equally start from $n = 0$, since $a_n = 0$ for $n < k$. \\\\ \\\\\nLet's introduce a new function $\\sigma$,\n\n$$ \\sigma = \\left\\{\n\\begin{array}{ll}\n      1 & if~k\\leq m\\leq n \\\\\n      0 & otherwise \\\\\n\\end{array} \n\\right. 
$$\n\nFrom equation (17.74),\n\n$$ S(x)~=~ \\sum_{n \\geq k}a_n.x^n$$\n\n$$ S(x)~=~ \\sum_{n \\geq k}~\\sum_{m=k}^{n}{m \\choose k}.x^n$$\n\n$$ S(x)~=~ \\sum_{n \\geq k}\\left(~\\sum_{m \\geq k}{m \\choose k}.x^n\\left(\\sigma \\right) \\right)$$\n\nAfter interchanging the summations,\n\n$$ S(x)~=~ \\left(~\\sum_{m \\geq k}\\sum_{n \\geq k}{m \\choose k}.x^n\\left(\\sigma \\right) \\right)$$\n\nThe factor $\\sigma$ equals $1$ exactly when $k \\leq m \\leq n$; hence, once $\\sigma$ is dropped, the inner summation must run over $n \\geq m$.\n\n$$\\implies~S(x)~=~ \\left(~\\sum_{m \\geq k}\\sum_{n \\geq m}{m \\choose k}.x^n \\right)$$\n\nSince ${m \\choose k}$ is independent of $n$,\n\n$$S(x)~=~ \\left(~\\sum_{m \\geq k}{m \\choose k}. \\sum_{n \\geq m}x^n \\right)$$\n\n$$S(x)~=~\\sum_{m \\geq k}{m \\choose k}.\\left(x^m \\sum_{n \\geq m}x^{n-m} \\right) $$\n\nWe can observe that the second summation is the sum of an infinite geometric progression.\n\n$$S(x)~=~\\sum_{m \\geq k}{m \\choose k}.\\left( \\frac{x^m}{1-x} \\right)$$\n\n\\begin{equation}\n\tS(x)~=~\\frac{x^k}{1-x} \\left(\\sum_{m \\geq k} {m \\choose k}.x^{m-k} \\right)\n\\end{equation}\n\nWe know that\n$$\\frac{1}{1-x} = 1+x+x^2+\\dots$$\n\nand also\n\n$${\\left(\\frac{1}{1-x}\\right)}^{k+1} = {(1+x+x^2+\\dots)^{k+1}} $$\n\n$${(1+x+x^2+\\dots)^{k+1}} = (1+x+x^2+\\dots).(1+x+x^2+\\dots).\\dots $$\n\nLet $d_1,d_2,d_3,\\dots,d_{k+1}$ be the degrees of the $x$ terms picked from the respective factors of the product.\\\\\nOur aim is to get the coefficient of $x^{m-k}$ in the above product; this is equivalent to the question\\\\ \\\\\n\\emph{In how many ways can we pick $d_1,d_2,\\dots,d_{k+1}$ such that} $$\\sum_{i=1}^{k+1}d_i = (m-k)$$\\\\\nThis is an example of multichoosing. As discussed in the previous lectures, the number of solutions to this question is $${{k+1+m-k-1} \\choose {m-k}} = {m \\choose m-k}$$\n\nand also\n\n$${m \\choose m-k} = {m \\choose k} $$\n\nIn equation (17.76), we can therefore replace $\\sum_{m \\geq k}{m \\choose k}.x^{m-k}$ with ${\\left(\\frac{1}{1-x} \\right)}^{k+1}$\n\n$$S(x)~=~\\frac{x^k}{1-x}{\\left(\\frac{1}{1-x} \\right)}^{k+1} $$\n\n\\begin{equation}\n S(x)~=~\\frac{x^k}{{\\left(1-x \\right)}^{k+2}}\n\\end{equation}\n\nOur aim is to find the number $a_n$, which is nothing but the coefficient of $x^n$ in the generating function $S(x)$.\\\\ \\\\\n\nWe can observe that\n\n$$Coefficient~of~x^n~in~\\frac{x^k}{{\\left(1-x \\right)}^{k+2}}~~=~~Coefficient~of~x^{n-k}~in~{\\left(\\frac{1}{1-x}\\right)}^{k+2}$$\n\nAs proved earlier in this example, the coefficient of $x^{n-k}$ on the right hand side counts the ways of choosing degrees $d_1,d_2,\\dots,d_{k+2}$, one from each of the $k+2$ factors, with sum equal to $n-k$.\n\\\\\nAs proved in earlier lectures, this count is equal to \n\n$$a_n = {{k+2+n-k-1} \\choose {n-k}}$$\n\n$${{k+2+n-k-1} \\choose {n-k}} = {n+1 \\choose n-k} $$\n\n$${n+1 \\choose n-k} = {n+1 \\choose k+1}$$\n\n$$\\boxed{\\therefore a_n = {n+1 \\choose k+1}}$$\n\n
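As a quick sanity check, both closed forms from this lecture are easy to verify numerically. The following is a minimal Python sketch (assuming Python 3.8+ for \\texttt{math.comb}; the helper name \\texttt{count\\_even\\_zero\\_strings} is ours):\n\\begin{verbatim}\nfrom math import comb\nfrom itertools import product\n\ndef count_even_zero_strings(n):\n    # brute force: decimal strings of length n with an even number of 0's\n    return sum(1 for s in product('0123456789', repeat=n)\n               if s.count('0') % 2 == 0)\n\n# Example 1: a_n = (8^n + 10^n)/2\nfor n in range(6):\n    assert count_even_zero_strings(n) == (8**n + 10**n) // 2\n\n# Example 2: sum_{m=k}^{n} C(m,k) = C(n+1, k+1)\nfor k in range(5):\n    for n in range(k, 12):\n        assert sum(comb(m, k) for m in range(k, n + 1)) == comb(n + 1, k + 1)\n\\end{verbatim}\n\n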
Such generating functions are known as \\textbf{Bivariate Generating Functions}.\\\\ \\\\\n\nThe general form of a Bivariate Generating function is,$$G(x,y) = \\sum_{n,k \\geq 0} a_{n,k}.x^n.y^k$$\n\nThese generating functions are useful when dealing with combinatorial problems involving two parameters.\\\\\nLet us try out some examples to get an idea of how to use Generating Functions with two variables.\n\n\\section{Examples based on Bivariate Generating Functions}\n\\textbf{Example 1:}\\\\ Prove the binomial theorem in single variable using the two variable generating functions.\\\\\nBinomial Theorem in single variable:\n$${\\left(1+x\\right)}^{n} = \\sum_{k=0}^{n}{n \\choose k}.x^k$$\n\\textbf{Solution:}\\\\\n\nWe know that the number of ways of choosing a k-sized subset from an n-sized set is equal to ${n \\choose k}$.\\\\\n\nLet the number be $b_{n,k}$.\n\\\\\nWhen $n=0$ and $k \\geq 1$, $b_{0,k}~=~0$; when $k=0$, $b_{n,0}~=~1$.\\\\\n\\\\\nAs discussed in previous lectures, a k-sized subset of an n-sized set either contains a fixed element (leaving $k-1$ elements to be chosen from the remaining $n-1$) or it does not (all $k$ elements chosen from the remaining $n-1$).\n\\\\\n\\textbf{Recurrence relation:}\n$$b_{n,k} = b_{n-1,k-1} + b_{n-1,k} $$\n\nThe generating function for this problem is,\n\\begin{equation}\nB(x,y) = \\sum_{n,k \\geq 0}b_{n,k}.\\left(x^n.y^k \\right)\n\\end{equation}\n\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0}b_{n,0}x^n + \\sum_{n=0,k \\geq 1}b_{0,k}.y^k + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right) $$\n\nWe know that $b_{0,k} = 0$ for $k \\geq 1$, and $b_{n,0}=1$.\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0}1. \\left( x^n \\right) + \\sum_{n=0,k \\geq 1}0. \\left( y^k \\right) + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right) $$\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0} \\left( x^n \\right) + \\sum_{n,k \\geq 1}b_{n,k}.\\left(x^n.y^k \\right)$$\n\nBy using the recurrence relation,\n\n$$B(x,y) = \\sum_{n \\geq 0,k=0} \\left( x^n \\right) + \\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^n.y^k \\right) + \\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^n.y^k \\right)$$\n\nWe know that,\n$$\\sum_{n \\geq 0}x^n = \\frac{1}{1-x} $$\n\n$$B(x,y) = \\frac{1}{1-x} + \\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^n.y^k \\right) + \\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^n.y^k \\right)$$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 1}b_{n-1,k-1}.\\left(x^{n-1}.y^{k-1} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\nLet $(n-1)=h~and~(k-1)=p$\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{h,p \\geq 0}b_{h,p}.\\left(x^{h}.y^{p} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\nAfter renaming of variables,\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^{k} \\right) + x.\\sum_{n,k \\geq 1}b_{n-1,k}.\\left(x^{n-1}.y^k \\right)$$\n\nSimilarly renaming $n-1$ in the last summation,\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right)\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^{k} \\right) + x.\\left(\\sum_{n,k \\geq 0}b_{n,k}.\\left(x^{n}.y^k \\right) - \\sum_{n \\geq 0,k=0}x^n \\right)$$\n\nFrom the equation (18.78),\n\n$$B(x,y) = \\frac{1}{1-x} + \\left(x.y\\right).B(x,y) + x.\\left(B(x,y) - \\frac{1}{1-x} \\right)$$\n\nAfter rearranging the terms,\n\n$$B(x,y) = 1+x.\\left(y+1\\right).B(x,y)$$\n\n$$B(x,y) = 
\\frac{1}{1-x.\\left(y+1\\right)}$$\n\n$\\therefore$ The generating function $B(x,y)$ is,\n\n\\begin{equation}\n\t\\boxed{B(x,y) = \\frac{1}{1-x.\\left(y+1\\right)}}\n\\end{equation}\n\nThe coefficient of $x^n$ in the left hand side of the above equation is equal to the coefficient of $x^n$ in the right hand side.\n\nCoefficient of $x^n$ in the left hand side $= \\sum_{k \\geq 0}b_{n,k}.y^k$ (from equation (18.78)). \n\\\\\n\n\\textbf{Note:} Coefficient of $x^n$ in $\\left(\\frac{1}{1-ax}\\right)$ is $a^n$.\\\\\n\n$\\implies$ Coefficient of $x^n$ in the right hand side $= {\\left(1+y \\right)}^{n}$\\\\\n\nHence,\n\n$$\\sum_{k \\geq 0}b_{n,k}.y^k = {\\left(1+y \\right)}^{n} $$\n\nAfter renaming of variables,\n\n$${\\left(1+x \\right)}^{n} = \\sum_{k \\geq 0}b_{n,k}.x^k $$\n\nAt the beginning of the proof, we defined $b_{n,k} = {n \\choose k}$\n\n\\begin{equation}\n\t\\boxed{\\therefore {\\left(1+x \\right)}^{n} = \\sum_{k \\geq 0}{n \\choose k}.x^k}\n\\end{equation}\n\nwhich completes our proof.\n\\\\ \\\\\n\\textbf{Example 2 (Delannoy Numbers):}\\\\\nConsider an $n \\times m$ grid. The Delannoy number $d_{n,m}$ counts the number of paths from the bottom-left corner $(0,0)$ to the point $(n,m)$ on the grid. \\\\\nA path may use only three kinds of edges, i.e., upward edges (U), rightward edges (R) and upward-forward diagonals (F). Find the Delannoy number. \\\\ \\\\\n\\textbf{Solution:}\\\\\nLet $d_{n,m}$ be the number of Delannoy paths from (0,0) to (n,m), by using the above edges only.\n\\\\ \\\\\nFor example, when $n=3$ and $m=3$, the number of Delannoy paths is 63.\n\n\t\\begin{figure}[H]\n\t\t\\centerline{\\includegraphics[width=0.5\\textwidth,height=0.5\\textwidth]{images/DelannoyNumbers.png}}\n\t\\end{figure}\n\t\n\\textbf{\\underline{Aim}:} To find $d_{n,m}$ \\\\ \n\n\\textbf{Recurrence Relation:}\\\\ \\\\\nLet us find a recurrence relation for $d_{n,m}$.\n\\\\ \nA point $(n,m)$ can be reached in three ways, i.e. 
from $(n-1,m)$, from $(n-1,m-1)$ and from $(n,m-1)$.\n\\\\\nHence, the recurrence relation for $d_{n,m}$ will be,\n\\begin{equation}\n\\boxed{d_{n,m} = d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}}\n\\end{equation}\n\n\\textbf{Generating Function:}\\\\ \\\\\nThe generating function for this problem is,\n\\begin{equation}\n\\boxed{D(x,y) = \\sum_{n,m \\geq 0}d_{n,m}.x^n.y^m}\n\\end{equation}\n\nWe can observe that $d_{n,0}=d_{0,m}=1$.\n\n$$D(x,y) = \\sum_{n \\geq 0,m=0}d_{n,0}.x^n + \\sum_{n=0,m \\geq 1}d_{0,m}.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\sum_{n \\geq 0,m=0}1.x^n + \\sum_{n=0,m \\geq 1}1.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nWe know that,\n$$\\sum_{n \\geq 0}x^n = \\frac{1}{1-x}$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\sum_{n=0,m \\geq 1}y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,m \\geq 1}y^{m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n\nLet $(m-1)=h$,\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,h \\geq 0}y^h + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nAfter renaming the variables,\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\sum_{n=0,m \\geq 0}y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + y.\\left(\\frac{1}{1-y}\\right) + \\sum_{n \\geq 1,m \\geq 1}d_{n,m}.x^n.y^m$$\n\nUsing the recurrence relation from equation (18.81),\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + \\sum_{n \\geq 1,m \\geq 1}(d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}).x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^n.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^n.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m-1}.x^n.y^m$$\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m-1}.x^{n-1}.y^{m-1} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^n.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^n.y^m$$\n\nLet $(n-1)=h~and~(m-1)=p$,\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{h \\geq 0,p \\geq 0}d_{h,p}.x^{h}.y^{p} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^n.y^m + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^n.y^m$$\n\nAfter renaming the variables,\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m} + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m}$$\n\nFrom the equation (18.82),\n\\begin{equation}\nD(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.D(x,y) + \\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m} +\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}\n\\end{equation}\n\nConsider the fourth term in the above equation,\n$$\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n}.y^{m} = x.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n-1}.y^{m}$$\n\nLet $(n-1)=h$\n$$x.\\sum_{n \\geq 1,m \\geq 1}d_{n-1,m}.x^{n-1}.y^{m} = x.\\sum_{h \\geq 0,m \\geq 1}d_{h,m}.x^h.y^m$$\nAfter renaming the variables,\n\n$$x.\\sum_{h \\geq 0,m \\geq 1}d_{h,m}.x^h.y^m = x.\\sum_{n \\geq 0,m \\geq 1}d_{n,m}.x^n.y^m$$\n\n$$x.\\sum_{n \\geq 0,m \\geq 1}d_{n,m}.x^n.y^m = x.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} - \\sum_{n \\geq 0,m = 0}d_{n,0}.x^{n} \\right)$$\n\n$$x.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^{n}.y^{m} - \\sum_{n \\geq 0,m = 0}d_{n,0}.x^{n} \\right) = x.\\left(D(x,y)-\\frac{1}{1-x} \\right) $$ \n\\\\ \\\\\nSubstituting the above value in the equation (18.83), then\n\n$$D(x,y) = \\frac{1}{1-x} + \\frac{y}{1-y} + x.y.D(x,y) + x.\\left(D(x,y)-\\frac{1}{1-x} \\right) +\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}$$\n\nAfter rearranging the terms,\n\n\\begin{equation}\nD(x,y) = 1 + \\frac{y}{1-y} + x.y.D(x,y) + 
x.D(x,y) + \\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m}\n\\end{equation}\n\nConsider the last term of the above equation,\n\n$$\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m} = y.\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m-1} $$\n\nLet $p=(m-1)$, then \n\n$$y.\\sum_{n \\geq 1,m \\geq 1}d_{n,m-1}.x^{n}.y^{m-1} = y.\\sum_{n \\geq 1,p \\geq 0}d_{n,p}.x^{n}.y^{p}$$\n\nAfter renaming the variables,\n\n$$y.\\sum_{n \\geq 1,m \\geq 0}d_{n,m}.x^{n}.y^{m} = y.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^n.y^m - \\sum_{n=0,m \\geq 0}d_{0,m}.y^{m} \\right) $$\n\n$$y.\\left(\\sum_{n \\geq 0,m \\geq 0}d_{n,m}.x^n.y^m - \\sum_{n=0,m \\geq 0}d_{0,m}.y^{m} \\right) = y.\\left(D(x,y) - \\frac{1}{1-y} \\right) $$\n\nSubstitute the above value in the equation (18.84),\n\n$$D(x,y) = 1 + \\frac{y}{1-y} + x.y.D(x,y) + x.D(x,y) + y.\\left(D(x,y) - \\frac{1}{1-y} \\right)$$\n\n$$D(x,y) = 1 + x.y.D(x,y) + x.D(x,y) + y.D(x,y) $$\n\nAfter rearranging the terms,\n\n$$D(x,y) = \\frac{1}{1-x-y-xy} $$\n\n$$D(x,y) = \\left(\\frac{1}{1-y}\\right).\\left(\\frac{1}{1-\\left(\\frac{1+y}{1-y}\\right).x}\\right) $$\n\nWe know that,\n$$\\frac{1}{1-a.x} = \\sum_{n \\geq 0}a^n.x^n $$\n\n$$D(x,y) = \\left(\\frac{1}{1-y}\\right).\\left(\\sum_{n \\geq 0}{\\left(\\frac{1+y}{1-y} \\right)}^n.x^n \\right) $$\n\nThe generating function is,\n\\begin{equation}\n\\boxed{D(x,y) = \\left(\\sum_{n \\geq 0}{\\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}.x^n \\right)}\n\\end{equation}\n\nThe required number $d_{n,m}$ is,\\\\\n\n$$d_{n,m}~=~Coefficient~of~x^n.y^m~in~D(x,y)$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~{\\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~ {(1+y)}^n.\\left(\\frac{1}{1-y}.\\frac{1}{1-y}. \\dots~(n+1)~\\text{times}\\right)$$\n\nWe know that,\n\n$$\\frac{1}{1-y} = 1+y+y^2+ \\dots$$\n\n$$d_{n,m}~=~Coefficient~of~y^m~in~{(1+y)}^n.\\left((1+y+y^2+\\dots).(1+y+y^2+\\dots). 
\\dots~(n+1)~\\text{times}\\right)$$\n\nFix a number $k \\geq 0$ and suppose that the factor $y^k$, along with its coefficient, comes from ${(1+y)^n}$ while the remaining factor $y^{m-k}$ comes from the (n+1)-term product.\\\\\n\nThe coefficient of $y^k$ in ${(1+y)^n}$ is ${n \\choose k}$.\\\\\n\nLet $c_1,c_2,\\dots,c_{n+1}$ be the degrees of $y$ picked from the respective factors of the (n+1)-term product.\\\\\n\nFinding out the coefficient of $y^{m-k}$ from the (n+1)-term product is equivalent to counting the number of ways of picking the $c_i$'s such that $c_1+c_2+\\dots+c_{n+1}=m-k$\\\\\n\nThe number of such pickings = ${{n+1+m-k-1} \\choose {m-k}}$ = ${{n+m-k} \\choose {m-k}}$ = ${{n+m-k} \\choose {n}}$.\\\\\n\nTherefore, the required number $d_{n,m}$ is,\n\n$$d_{n,m} = \\sum_{k \\geq 0}{n \\choose k}.{{n+m-k} \\choose n}$$\n\nHence,\n\\begin{equation}\n\\boxed{Delannoy~Number~(D) = \\sum_{k \\geq 0}{n \\choose k}.{{n+m-k} \\choose n}}\n\\end{equation}\n\n", "meta": {"hexsha": "f934d26104c91b638a5aca7521e308f9f1a6ba02", "size": 20534, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week06.tex", "max_stars_repo_name": "ryuzaki4337/theory-toolkit", "max_stars_repo_head_hexsha": "886962c39b9c57c6f3803024dd059d060b2983d4", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week06.tex", "max_issues_repo_name": "ryuzaki4337/theory-toolkit", "max_issues_repo_head_hexsha": "886962c39b9c57c6f3803024dd059d060b2983d4", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week06.tex", "max_forks_repo_name": "ryuzaki4337/theory-toolkit", "max_forks_repo_head_hexsha": "886962c39b9c57c6f3803024dd059d060b2983d4", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.1839530333, "max_line_length": 447, "alphanum_fraction": 0.6059705854, "num_tokens": 8765, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.800692004473946, "lm_q1q2_score": 0.5677059514407671}}
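As a quick sanity check of the boxed formula (a worked example, using only the values derived above), take $n=m=3$ as in the figure:
$$d_{3,3} = {3 \\choose 0}.{6 \\choose 3}~+~{3 \\choose 1}.{5 \\choose 3}~+~{3 \\choose 2}.{4 \\choose 3}~+~{3 \\choose 3}.{3 \\choose 3} = 20+30+12+1 = 63,$$
which agrees with the count of 63 quoted for the $3 \\times 3$ grid.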
{"text": "\\SecDef{invar}{Invariants in SPARKLE}\n\n\\subsection{Invariant Subspaces}\n\\Todo{describe method, probably briefly, probably in Chapter NORX}\n\\Todo{maybe subspace trail from FSE paper? by Leander et al}\n\nInvariant subspace attacks were considered in~\\cite{InvSpacePrint}.\nUsing a similar to the \"to and fro\" method from \\cite{LinAffEQold,LinAffEQ}, we searched for an affine subspace that is mapped by an ARX-box $A_{c_i}$ to a (possibly different) affine subspace of the same dimension. We could not find any such subspace of large dimension.\n\\Todo{For what dimension is the search working with high probability? E.g. there are lots of affine subspaces of dimension 1 that are mapped to affine subspaces of dimension 1, probably also some of dimension 2 or 3}\n\nNote that the search is randomized so it does not result in a proof. As an evidence of the correctness of the algorithm, we found many such subspace trails for all 2-round reduced ARX-boxes, with dimensions from 56 up to 63. For example, let $A$ denote the first two rounds of $A_{c_0}$. Then for all $l,r,l',r' \\in \\bField{32}$ such that $A(l, r) = (l', r')$,\n\\begin{multline*}\n(l_{29} + r_{21} + r_{30}) (l_{30} + r_{31}) (l_{31} + r_{0}) (r_{22}) (r_{23}) = \\\\\n(l'_{4} + r'_{21}) (l'_{5} + r'_{22}) (l'_{6} + r'_{23}) (l'_{28} + l'_{30} + l'_{31} + r'_{13} + 1) (l'_{29} + l'_{31} + r'_{14}).\n\\end{multline*}\nThis equation defines a subspace trail of constant dimension 59.\n\\Todo{more data on subspaces}\n\n\n\\subsection{Nonlinear Invariants in the ARX-boxes}\nNonlinear invariant attacks were considered recently in~\\cite{NonlinInv}.\n\\Todo{Describe method algorithm}\nUsing linear algebra, we experimentally verified that for any ARX-box $A_{c_i}$ and any non-constant Boolean function $f$ of degree at most 2, the compositions $f \\circ A_{c_i}$ and $f \\circ A_{c_i}^{-1}$ have degree at least 10:\n$$\n\\forall f\\colon \\bField{64} \\to \\bField{}, 1 \\le \\deg{f} \\le 2 ~~ \\deg{f \\circ A_{c_i}} \\ge 10, \\deg{f \\circ A_{c_i}^{-1}} \\ge 10,\n$$\nand for functions $f$ of degree at most 3, the compositions have degree at least 4:\n$$\n\\forall f\\colon \\bField{64} \\to \\bField{}, 1 \\le \\deg{f} \\le 3 ~~ \\deg{f \\circ A_{c_i}} \\ge 4, \\deg{f \\circ A_{c_i}^{-1}} \\ge 4.\n$$\nIn particular, any $A_{c_i}$ has no cubic invariants. Indeed, a cubic invariant $f$ would imply that $f \\circ A_{c_i} + \\varepsilon = f$ is cubic (for a constant $\\varepsilon \\in \\bField{}$). The same holds for the inverse of any ARX-box $A_{c_i}$.\n\nBy using the same method, we also verified that there are no quadratic equations relating inputs and outputs of any $A_{c_i}$. 
However, there are quadratic equations relating inputs and outputs of 3-round reduced versions of each $A_{c_i}$.\n\n\\Todo{more data on invariants}", "meta": {"hexsha": "3e99edd5c1d7f4a428f8b7225a07f9f56bd1d4b0", "size": 2734, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis-source/9deSPARKLE/2invariants.tex", "max_stars_repo_name": "hellman/thesis", "max_stars_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2019-05-16T19:55:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-31T15:36:12.000Z", "max_issues_repo_path": "thesis-source/9deSPARKLE/2invariants.tex", "max_issues_repo_name": "hellman/thesis", "max_issues_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-09T11:26:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-09T11:26:45.000Z", "max_forks_repo_path": "thesis-source/9deSPARKLE/2invariants.tex", "max_forks_repo_name": "hellman/thesis", "max_forks_repo_head_hexsha": "6ba1c2b241e63c07cf76108481c1b67f21a50f12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-05T19:40:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T19:40:16.000Z", "avg_line_length": 78.1142857143, "max_line_length": 360, "alphanum_fraction": 0.7110460863, "num_tokens": 872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5677059448316332}}
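The linear-algebraic invariant search described above can be illustrated at toy scale. The sketch below is a hypothetical, self-contained Python example and is independent of the thesis code: it looks for invariants of degree at most 2 of a single 4-bit S-box (PRESENT's, chosen only as a convenient public example, not an ARX-box) by computing a nullspace over GF(2); a real search over 64-bit ARX-boxes needs far more engineering.
\\begin{verbatim}
# Toy GF(2) search for invariants of degree <= 2 of a 4-bit S-box.
# f is an invariant iff f(S(x)) + f(x) equals the same constant for all x.
from itertools import combinations

N = 4
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box (example)

# Non-constant monomials of degree <= 2 in the input bits.
MONOS = [(i,) for i in range(N)] + list(combinations(range(N), 2))

def mono(m, x):
    """Evaluate the monomial m (a tuple of bit positions) at the integer x."""
    return int(all((x >> i) & 1 for i in m))

# One GF(2) equation per input x:
#   sum_m c_m * (mono(m, S(x)) + mono(m, x)) + eps = 0,
# where eps is one extra unknown absorbing the constant.
ROWS = [[mono(m, SBOX[x]) ^ mono(m, x) for m in MONOS] + [1]
        for x in range(2 ** N)]

def nullspace_gf2(mat):
    """Basis of {c : mat c = 0 over GF(2)} via Gaussian elimination."""
    mat = [row[:] for row in mat]
    ncols = len(mat[0])
    pivots, r = [], 0
    for c in range(ncols):
        p = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if p is None:
            continue
        mat[r], mat[p] = mat[p], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                mat[i] = [a ^ b for a, b in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [0] * ncols
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = mat[i][f]   # over GF(2) no sign is needed
        basis.append(v)
    return basis

# Nullspace vectors with some c_m != 0 are nontrivial invariants.
invariants = [v for v in nullspace_gf2(ROWS) if any(v[:-1])]
print("nontrivial invariants of degree <= 2 found:", len(invariants))
\\end{verbatim}
The same idea, applied to the monomials of $f \\circ A_{c_i}$ instead, underlies the degree bounds quoted above.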
{"text": "\\section{Totality}\\label{sec:totality}\n%% OK \\NV{DONE intro (R1)\n%% OK Sections 3 and 4 cover \"totality\" and \"termination\" respectively. It would be helpful to explain these terms at the start of section 3, so that the two are distinguished appropriately.\n%% OK }\nWell typed Haskell code can go very wrong:\n%\n\\begin{code}\n  *** Exception: Prelude.head: empty list\n\\end{code}\n%\nAs our first application, let us see how to use \n\\toolname to statically guarantee the absence\nof such exceptions, \\ie, to prove various \nfunctions \\emph{total}.\n\n\\subsection{Specifying Totality}\n\nFirst, let us see how to specify the notion of\ntotality inside \\toolname. Consider the source of \nthe above exception:\n%\n\\begin{code}\n  head :: [a] -> a\n  head (x:_) = x\n\\end{code}\n%\nMost of the work towards totality checking is done by \nthe translation to GHC's Core, in which every function \n\\emph{is} total, but may explicitly call an \\emph{error} \nfunction that takes as input a string that describes the \nsource of the pattern-match failure and throws an exception.\n%\nFor example @head@ is translated into\n%\n\\begin{code}\n  head d = case d of \n             x:xs -> x\n             []   -> patError \"head\"\n\\end{code}\n\nSince every core function is total, but may explicitly \ncall error functions, to prove that the source function is \ntotal, it suffices to prove that @patError@ \nwill \\emph{never} be called.\n%\nWe can specify this requirement by giving the error \nfunctions a @false@ pre-condition:\n%\n\\begin{code}\n  patError :: {v:String | False } -> a\n\\end{code}\n%\nThe pre-condition states that the input type is \\emph{uninhabited}\nand so an expression containing a call to @patError@ will only type \ncheck if the call is \\emph{dead code}.\n\n\n\\subsection{Verifying Totality}\n\nThe (core) definition of @head@ does not typecheck\nas is; but requires a pre-condition that states that the function\nis only called with non-empty lists. Formally, we do so by \ndefining the alias\n%\n\\begin{code}\n  predicate NonEmp X = 0 < len X \n\\end{code}\n%\nand then stipulating that \n%\n\\begin{code}\n  head :: {v : [a] | NonEmp v} -> a\n\\end{code}\n%\nTo verify the (core) definition of @head@, \\toolname uses the above signature\nto check the body in an environment\n%\n\\begin{code}\n  d :: {0 < len d}\n\\end{code}\n%\nWhen @d@ is matched with @[]@, the environment is \nstrengthened with the corresponding refinement from \nthe definition of @len@, \\ie,\n%\n\\begin{code}\n  d :: {0 < (len d) && (len d) = 0}\n\\end{code}\n%\nSince the formula above is a contradiction, \\toolname concludes that the\ncall to @patError@ is dead code, and thereby verifies the totality \nof @head@. Of course, now we have pushed the burden of proof onto clients\nof @head@ -- at each such site, \\toolname will check that the argument \npassed in is indeed a @NonEmp@ list, and if it successfully does so, then\nwe, at any uses of @head@, can rest assured that @head@ will never throw an \nexception. \n\n\\mypara{Refinements and Totality} \nWhile the @head@ example is quite simple, in general, refinements make\nit easy to prove totality in complex situations, where we must track\ndependencies between inputs and outputs. 
For example, consider the @risers@\nfunction from \\cite{catch}:\n%\n\\begin{code}\n  risers []       = []\n  risers [x]      = [[x]]\n  risers (x:y:zs) \n    | x <= y      = (x:s) : ss \n    | otherwise   = [x] : (s:ss) \n    where \n      s:ss    = risers (y:zs)\n\\end{code}\n%\nThe pattern match on the last line is partial; its core translation is\n%\n\\begin{code}\n  let (s, ss) = case risers (y:zs) of\n                  s:ss -> (s, ss)\n                  []   -> patError \"...\"\n\\end{code}\n%\nWhat if @risers@ returns an empty list? \nIndeed, @risers@ \\emph{does}, on occasion, return an empty list per its\nfirst equation. However, on close inspection, it turns out that \n\\emph{if} the input is non-empty, \\emph{then} the output is also\nnon-empty. Happily, we can specify this as:\n%\n\\begin{code}\n  risers :: l:_ -> {v:_ | NonEmp l => NonEmp v} \n\\end{code}\n\n\\toolname verifies that @risers@ meets the above specification, \nand hence that the @patError@ is dead code, as at that \nsite the scrutinee is obtained from calling @risers@ with a\n@NonEmp@ list.\n\n\\mypara{Non-Emptiness via Measures}\nInstead of describing non-emptiness indirectly using @len@, a \nuser could define a special measure:\n%\n\\begin{code}\n  measure nonEmp  :: [a] -> Prop\n    nonEmp (x:xs)   = True\n    nonEmp []       = False\n\n  predicate NonEmp X = nonEmp X\n\\end{code}\n%\nAfter which, verification would proceed analogously to the above.\n\n\\mypara{Total Totality Checking} \n@patError@ is one of many possible errors thrown by non-total functions.  \n@Control.Exception.Base@ has several others including @recSelError@, @irrefutPatError@, \\etc which serve the purpose of making \ncore translations total.\n%\nRather than hunt down and specify @False@ preconditions one\nby one, the user may automatically turn on totality checking \nby invoking \\toolname with the \\cmdtotality command line option, \nat which point the tool systematically checks that all the above \nfunctions are indeed dead code, and hence, that all definitions are total.\n\n\\subsection{Case Studies}\n\nWe verified totality of two libraries: \\lbhscolour and \\lbmap, earlier versions\nof which had previously been proven total by \\texttt{catch}~\\citep{catch}.\n\n\\mypara{\\lbmap} \nis a widely used library for (immutable) key-value maps, implemented\nas balanced binary search trees.\nTotality verification of \\lbmap was quite straightforward.\nWe had already verified termination and the crucial \nbinary search invariant of Chapter~\\ref{chapter:abstractrefinements}. To verify \ntotality it sufficed to simply re-run verification with\nthe \\cmdtotality argument.\n%\nAll the important specifications were already captured by the types, \nand no additional changes were needed to prove totality.\n%\n%% \\RJ{was it trivially total? i.e. 
is it total if you strip out all refinements\n%% from specs?}\n\\NV{No, it fails in 6 functions all of which can trivially be reasoned to be total}\n\\NV{hedgeUnion, hedgeDiff, hedgeMerge, submap', join, merge}\n\\NV{The interesting story is that during verification \\emph{we accidentally modified}\n%% turn a function to partial, see my commit 041f1f0fea4d34ee41f50dbf7ce43e3c084c2743}\n%\n\nThis case study illustrates an advantage of \\toolname over specialized provers \n(\\eg, \\texttt{catch}~\\citep{catch}): it can be used to prove totality, termination and\nfunctional correctness at the same time, facilitating a nice reuse of\nspecifications for multiple tasks.\n\n%% DONE \\NV{(R3)\n%% DONE Before discussing HsColour, I'd give a brief explanation of what it is.\n%% DONE }\n\\mypara{\\lbhscolour} is a library for generating syntax-highlighted LATEX and HTML from\nHaskell source files.\nChecking \\lbhscolour was not so easy, as in some cases assumptions are used about the \nstructure of the input data:\n%\nFor example, @ACSS.splitSrcAndAnnos@ handles an\ninput list of @String@s and assumes that whenever\na specific @String@ (say @breakS@) appears then \nat least two @String@s (call them @mname@ and @annots@)\nfollow it in the list.\nThus, for a list @ls@ that starts with @breakS@ \nthe irrefutable pattern  @(_:mname:annots)@ @=@ @ls@\nshould be total.\n%\nThough possible, it is currently somewhat cumbersome to specify such \nproperties. \n%\nAs an easy and practical solution, \nto prove totality, we added a dynamic check that \nvalidates that the length of the input @ls@ exceeds @2@.\n\n%% measure follows a b c = \\case \n%%   []   -> true\n%%   x:xs -> if x == a then first2 b c xs else follows a b c xs\n%% \n%% measure first2 b c = \\case\n%%   []   -> false\n%%   x:xs -> x == b && first1 c xs\n%% \n%% measure first1 c = \\case\n%%   []   -> false\n%%   x:xs -> x == c\n%% \n%% Worse, \\toolname has no way to express such an invariant: \n%% \\toolname naturally describes invariants that recursively \n%% hold for every list element and \n%% reaches its limitations when reasoning about non-recursive\n%% properties.\n\nIn other cases assertions were imposed via monadic checks, \\eg @HsColour.hs@ reads the input arguments and \nchecks their well-formedness using \n%\n\\begin{code}\n  when (length f > 1) $ errorOut \"...\"\n\\end{code} %$\n%\nCurrently \\toolname does not support monadic reasoning that \nallows assuming that @(length f <= 1)@\nholds when executing the action \\emph{following} the @when@ check. \n%\nFinally, code modifications were required to capture properties \nthat are cumbersome to express with \\toolname.\n%\nFor example, @trimContext@ checks if there is an element that \nsatisfies @p@ in the list @xs@; if so it defines \n%\n@ys = dropWhile (not . p) xs@\n%\nand computes @tail ys@.\n%\nBy the check we know that @ys@ has at least one element, the \none that satisfies @p@. \n%\nDue to the complexity of this property, we preferred to rewrite the specific code \nin a more verification-friendly version. \n\n\n%%% \\mynote{Bug}\n%%% %\n%%% \\RJ{WHY? Seems like a simple GHC CHECK?}\n%%% %\n%%% On the positive side, totality verification revealed a subtle bug:\n%%% %\n%%% The instance @Enum@ of @Highlight@ does not define the @toEnum@ \n%%% method. 
In core, this reduces to a call to the error function \n%%% @noMethodBinding@.\n%%% %\n%%% Even though this totality bug can be tracked by GHC compilation,\n%%% it exposes the strengths of our totality checker.\n\nOn the whole, while proving totality can be cumbersome \n(as in \\lbhscolour) it is a nice side benefit of refinement\ntype checking and can sometimes be a fully automatic corollary\nof establishing more interesting safety properties (as in \\lbmap).\n", "meta": {"hexsha": "4a3eae0ee71a64b565440a409e7aa3d8417913d9", "size": 9508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/realworldhaskell/totality.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/realworldhaskell/totality.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/realworldhaskell/totality.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 34.0788530466, "max_line_length": 190, "alphanum_fraction": 0.7226546066, "num_tokens": 2557, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7520125848754472, "lm_q2_score": 0.7549149868676284, "lm_q1q2_score": 0.5677055706355395}}
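The @risers@ invariant discussed above (a non-empty input yields a non-empty output) can also be checked dynamically by brute force. The following is an illustrative Python transcription of the Haskell function, added here only as a cross-language sanity check; it is not LiquidHaskell code and proves nothing, it merely exercises the invariant on small inputs.
\\begin{verbatim}
# Python transcription of risers; the unpacking on the recursive call
# relies on exactly the invariant: NonEmp l => NonEmp (risers l).
from itertools import product

def risers(xs):
    if not xs:
        return []
    if len(xs) == 1:
        return [[xs[0]]]
    x, y, rest = xs[0], xs[1], xs[2:]
    s, *ss = risers([y] + rest)   # non-empty by the invariant
    if x <= y:
        return [[x] + s] + ss
    return [[x]] + [s] + ss

# Brute-force the invariant on all short sequences over {0, 1, 2}.
for n in range(1, 6):
    for xs in product(range(3), repeat=n):
        assert risers(list(xs)), "invariant violated"
print("NonEmp l => NonEmp (risers l) held on all tested inputs")
\\end{verbatim}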
{"text": "\\chapter{Some more one dimensional problems}\\label{c5}\nIn the previous chapter we solved the Schr\\\"{o}dinger equation for two extreme\nsituations, the free particle and a particle in an infinite potential well. In\nthis chapter, we shall restrict ourselves to one dimensional problems so that\nSchr\\\"{o}dinger's time independent equation remains an ordinary differential\nequation. We will further consider only those problems where it is possible to\nobtain an exact, analytical solution.\n\n\\section{Particle in a finite potential well}\\label{c5s1}\nThis problem is a minor, but an interesting, modification of the problem we\nconsidered in section \\ref{c4s2}. The only change in the problem is the \npotential,\n\\begin{equation}\\label{c5s1e1}\nV(x) = \\begin{cases}\n0 & \\text{ if } -L/2 \\le x \\le L/2 \\\\\nV & \\text{ otherwise.}\n\\end{cases}\n\\end{equation}\nSchr\\\"{o}dinger's time independent equation in one dimension is\n\\begin{equation}\\label{c5s1e2}\n-\\frac{\\hslash^2}{2m}\\frac{d^2\\psi}{dx^2} + V(x)\\psi = E\\psi.\n\\end{equation}\nIn the region $-L/2 \\le x \\le L/2$, which is in the interior of the well,\nthe equation becomes\n\\begin{equation}\\label{c5s1e3}\n-\\frac{\\hslash^2}{2m}\\frac{d^2\\psi}{dx^2} = E\\psi\n\\end{equation}\nwhile outside the well it is\n\\begin{equation}\\label{c5s1e4}\n-\\frac{\\hslash^2}{2m}\\frac{d^2\\psi}{dx^2} + V\\psi = E\\psi.\n\\end{equation}\nWe can rearrange \\eqref{c5s1e3} as\n\\begin{equation}\\label{c5s1e5}\n\\frac{d^2\\psi}{dx^2} + k^2\\psi = 0,\n\\end{equation}\nwhere\n\\begin{equation}\\label{c5s1e6}\nk^2 = \\frac{2mE}{\\hslash^2}.\n\\end{equation}\nThe general solution of this equation is, \n\\begin{equation}\\label{c5s1e7}\n\\psi(x) = \\alpha e^{-ikx} + \\beta e^{ikx},\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ are constants of integration. Unlike the case of \nan infinitely deep well, we do not have the boundary conditions like \n\\eqref{c4s2e24} and \\eqref{c4s2e25}. In order to get an appropriate boundary\ncondition we must solve the Schr\\\"{o}dinger equation \\eqref{c5s1e4} in the \nregion outside the well. It can be rearranged as\n\\begin{equation}\\label{c5s1e8}\n\\frac{d^2\\psi}{dx^2} + \\frac{2m(E - V)}{\\hslash^2}\\psi = 0.\n\\end{equation}\nLet us consider the case $E < V$ in which the particle does not have enough\nenergy to cross the potential well. Define\n\\begin{equation}\\label{c5s1e9}\nk_o^2 = \\frac{2m(V - E)}{\\hslash^2} > 0\n\\end{equation}\nso that we can write \\eqref{c5s1e8} as\n\\begin{equation}\\label{c5s1e10}\n\\frac{d^2\\psi}{dx^2} - k_o^2\\psi = 0.\n\\end{equation}\nThe general solution of this equation is\n\\begin{equation}\\label{c5s1e11}\n\\psi_o(x) = \\alpha_0 e^{-k_ox} + \\beta_o e^{k_ox}\n\\end{equation}\nwhere $\\alpha_o$ and $\\beta_o$ are constants of integration. This solution\nindicates that the wavefunction of the particle is \\emph{not zero} outside the\npotential well. It means that there is a non-zero probability of finding the\nparticle outside the well in spite of the fact that the particle lacks the \nenergy to cross the potential barrier. This is one example of `quantum\ntunnelling', a phenonemon in which a particle appears in a position where no\nclassical particle would.\n\nWe would want the wave function of the particle to be continuous all \nthroughtout. 
Therefore, we insist that\n\\begin{eqnarray}\n\\psi_o\\left(-\\frac{L}{2}\\right) &=& \\psi\\left(-\\frac{L}{2}\\right) \n  \\label{c5s1e12} \\\\\n\\psi_o\\left(\\frac{L}{2}\\right) &=& \\psi\\left(\\frac{L}{2}\\right) \n  \\label{c5s1e13} \n\\end{eqnarray}\nFurther, we also want the solution to converge to zero as $x \\rightarrow \\pm\n\\infty$. Therefore, the solution outside the box must be\n\\begin{equation}\\label{c5s1e14}\n\\psi_o(x) = \\begin{cases}\n\\beta_o e^{k_ox} & \\text{ if } x \\le -L/2 \\\\\n\\alpha_o e^{-k_ox} & \\text{ if } x \\ge L/2.\n\\end{cases}\n\\end{equation}\nFrom equations \\eqref{c5s1e12}, \\eqref{c5s1e13} and \\eqref{c5s1e14} we get\n\\begin{eqnarray}\n\\beta_o e^{-k_oL/2} &=& \\alpha e^{ikL/2} + \\beta e^{-ikL/2} \\label{c5s1e15} \\\\\n\\alpha_o e^{-k_oL/2} &=& \\alpha e^{-ikL/2} + \\beta e^{ikL/2} \\label{c5s1e16}\n\\end{eqnarray}\nEquation \\eqref{c5s1e2} is a second order equation. Therefore, we also insist\non the continuity of the first derivative of $\\psi$. We then get the pair of\nequations\n\\begin{eqnarray}\nk_o\\beta_o e^{-k_oL/2} &=& -ik\\alpha e^{ikL/2} + ik\\beta e^{-ikL/2} \n   \\label{c5s1e17} \\\\\n-k_o\\alpha_o e^{-k_oL/2} &=& -ik\\alpha e^{-ikL/2} + ik\\beta e^{ikL/2}.\n   \\label{c5s1e18}\n\\end{eqnarray}\nAdding equations \\eqref{c5s1e15} and \\eqref{c5s1e16} we get\n\\begin{equation}\\label{c5s1e19}\n(\\alpha_o + \\beta_o)e^{-k_oL/2} = 2\\cos\\left(\\frac{kL}{2}\\right)(\\alpha + \\beta)\n\\end{equation}\nwhile subtracting \\eqref{c5s1e18} from \\eqref{c5s1e17} gives\n\\begin{equation}\\label{c5s1e20}\nk_o(\\alpha_o + \\beta_o)e^{-k_oL/2} = 2k\\sin\\left(\\frac{kL}{2}\\right)(\\alpha +\n\\beta).\n\\end{equation}\nDividing equation \\eqref{c5s1e20} by \\eqref{c5s1e19}, for solutions with\n$\\alpha + \\beta \\neq 0$, we get\n\\begin{equation}\\label{c5s1e21}\nk_o = k\\tan\\left(\\frac{kL}{2}\\right).\n\\end{equation}\nFrom equations \\eqref{c5s1e6} and \\eqref{c5s1e9} it is clear that the above\nequation is really a condition on the energy $E$, since the `depth' $V$ of the\npotential is a given quantity. Once again we observe that $E$ cannot take \narbitrary values but only those permitted by equation \\eqref{c5s1e21}.\n\n\\subsection{Problem set - 1}\n\\begin{enumerate}\n\\item Equation \\eqref{c5s1e21} does not have analytical solutions. Can you \nthink of a graphical way to solve it?\n\\item While defining $k_o$ in equation \\eqref{c5s1e9} we assumed that $E<V$.\nWhat happens if $E > V$? Without solving the problem mathematically, can you\nguess the solution using only physical considerations?\n\\item Suppose that we define the potential function as\n\\begin{equation}\nV(x) = \\begin{cases}\nV & \\text{ if } -L/2 \\le x \\le L/2 \\\\\n0 & \\text{ otherwise.}\n\\end{cases}\n\\end{equation}\nso that we instead have the case of a particle incident upon a finite potential\nbarrier. Consider the case when the particle's energy $E$ is less than $V$.\nWill it cross the barrier? If it will then this is a more stark example of\n`quantum tunnelling'.\n\\end{enumerate}\n\n\\section{The harmonic oscillator}\\label{c5s2}\nThe harmonic oscillator appears as a toy example in classical mechanics. It is\nfar more serious in quantum physics. Sidney Coleman, an American theoretical\nphysicist, once remarked that ``Quantum field theory is harmonic motion taken\nto increasing levels of abstraction''. We will not have a chance to look at the\nsimple harmonic oscillator in too many guises in this course. 
We will use it\nto illustrate how to solve a quantum mechanical problem whose classical analogue\nis known.\n\nIn classical physics, a simple harmonic oscillator is a body of mass $m$ \nsubjected to a force proportional to its displacement from a certain point and\nin a direction opposite to the displacement. In one dimension, where the\nvector notation is superfluous, we can express the force as\n\\begin{equation}\\label{c5s2e1}\nF = -kx,\n\\end{equation}\nwhere the number $k$ is a constant. Newton's second law for a particle subjected\nto such a force is\n\\begin{equation}\\label{c5s2e2}\nm\\ddot{x} = -kx,\n\\end{equation}\nwhere a dot overhead signifies a derivative with respect to time. We \nrearrange this equation as\n\\begin{equation}\\label{c5s2e3}\n\\ddot{x} + \\omega^2 x = 0,\n\\end{equation}\nwhere\n\\begin{equation}\\label{c5s2e4}\n\\omega = \\sqrt{\\frac{k}{m}}.\n\\end{equation}\nA general solution of \\eqref{c5s2e3} is\n\\begin{equation}\\label{c5s2e5}\nx(t) = \\alpha e^{-i\\omega t} + \\beta e^{i\\omega t},\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ are constants of integration. From this equation it\nis evident that the constant $\\omega$ is the angular frequency of the harmonic\noscillator. It can be easily shown that the particle has a potential energy\n\\begin{equation}\\label{c5s2e6}\nV = \\frac{1}{2}kx^2\n\\end{equation}\nso that its total energy is\n\\begin{equation}\\label{c5s2e7}\nH = T + V = \\frac{p^2}{2m} + \\frac{1}{2}kx^2.\n\\end{equation}\nThe quantity $H$ is called the Hamiltonian and it is the starting point of \nour quantum mechanical analysis.\n\nGiven a classical Hamiltonian function we write the corresponding quantum\nmechanical Hamiltonian operator by replacing the dynamical variables with their\noperator representation. 
Thus, a harmonic oscillator's Hamiltonian is\n\\begin{equation}\\label{c5s2e8}\n\\hat{H} = \\frac{\\hat{p}^2}{2m} + \\frac{1}{2}m\\omega^2\\hat{x}^2.\n\\end{equation}\nFrom equations \\eqref{c4s3e1} and \\eqref{c4s3e2} we have\n\\begin{eqnarray}\n\\hat{x} &=& x \\label{c5s2e9} \\\\\n\\hat{p} &=& -i\\hslash\\frac{d}{dx} \\label{c5s2e10}\n\\end{eqnarray}\nso that\n\\begin{equation}\\label{c5s2e11}\n\\hat{H} = -\\frac{\\hslash^2}{2m}\\frac{d^2}{dx^2} + V(x)\n\\end{equation}\nand hence Schr\\\"{o}dinger's time independent equation $\\hat{H}\\psi = E\\psi$\nis\n\\begin{equation}\\label{c5s2e12}\n-\\frac{\\hslash^2}{2m}\\frac{d^2\\psi}{dx^2} + \\frac{1}{2}m\\omega^2x^2\\psi = E\\psi.\n\\end{equation}\nWe can rearrange it as\n\\begin{equation}\\label{c5s2e13}\n\\frac{d^2\\psi}{dx^2} + \\left(\\frac{2mE}{\\hslash^2} - \\frac{m^2\\omega^2x^2}{\\hslash^2}\\right)\\psi = 0.\n\\end{equation}\nIntroduce a dimensionless variable\n\\begin{equation}\\label{c5s2e14}\n\\xi = \\sqrt{\\frac{m\\omega}{\\hslash}}x\n\\end{equation}\nso that\n\\[\n\\frac{d\\psi}{dx} = \\frac{d\\psi}{d\\xi}\\frac{d\\xi}{dx} = \\sqrt{\\frac{m\\omega}{\\hslash}}\\frac{d\\psi}{d\\xi}\n\\]\nand\n\\[\n\\frac{d^2\\psi}{dx^2} = \\frac{m\\omega}{\\hslash}\\frac{d^2\\psi}{d\\xi^2}\n\\]\nso that equation \\eqref{c5s2e13} becomes\n\\begin{equation}\\label{c5s2e15}\n\\frac{d^2\\psi}{d\\xi^2} + \\left(\\frac{2E}{\\hslash\\omega} - \\xi^2\\right)\\psi = 0.\n\\end{equation}\nLet\n\\begin{equation}\\label{c5s2e16}\n\\alpha = \\frac{2E}{\\hslash\\omega}\n\\end{equation}\nso that equation \\eqref{c5s2e15} becomes\n\\begin{equation}\\label{c5s2e17}\n\\frac{d^2\\psi}{d\\xi^2} + (\\alpha - \\xi^2)\\psi = 0.\n\\end{equation}\nA standard way to solve equations like \\eqref{c5s2e17} is to use the Frobenius \nmethod. It consists in expressing $\\psi$ as a power series and finding a \nrecurrence relation among the series' coefficients. If we use it for \n\\eqref{c5s2e17} we get a three term recurrence relation, which is hard to solve.\nInstead we first try to guess the asymptotic form of the solution, that is, the\nform of the solution as $\\xi \\rightarrow \\pm\\infty$. In this limit, the equation\n\\eqref{c5s2e17} can be approximated to\n\\[\n\\frac{d^2\\psi}{d\\xi^2} - \\xi^2\\psi = 0\n\\]\nor, more conveniently,\n\\[\n\\frac{d^2\\psi}{d\\xi^2} - (\\xi^2 - 1)\\psi = 0.\n\\]\nOne can readily verify that the solution of this equation is $\\psi(\\xi) =\ne^{-\\xi^2/2}$. Therefore, we express the solution of the original equation as\n\\begin{equation}\\label{c5s2e18}\n\\psi(\\xi) = e^{-\\xi^2/2}H(\\xi),\n\\end{equation}\nwhere the function $H$ satisfies the equation\n\\begin{equation}\\label{c5s2e19}\n\\frac{d^2H}{d\\xi^2} - 2\\xi\\frac{dH}{d\\xi} + (\\alpha - 1)H(\\xi) = 0.\n\\end{equation}\nThis equation is called Hermite's differential equation and its solutions are\ncalled Hermite polynomials. We will solve \\eqref{c5s2e19} using the Frobenius \nmethod. 
Let\n\\begin{equation}\\label{c5s2e20}\nH(\\xi) = \\sum_{n \\ge 0}a_n\\xi^n\n\\end{equation}\nso that\n\\begin{eqnarray}\nH^\\prime(\\xi) &=& \\sum_{n \\ge 1}na_n \\xi^{n-1} \\label{c5s2e21} \\\\\nH^{\\prime\\prime}(\\xi) &=& \\sum_{n \\ge 2}n(n-1)a_n \\xi^{n-2} \\label{c5s2e22}\n\\end{eqnarray}\nSubstituting \\eqref{c5s2e20} to \\eqref{c5s2e22} in \\eqref{c5s2e19} we get\n\\begin{equation}\\label{c5s2e23}\n\\sum_{n \\ge 2}n(n-1)a_n \\xi^{n-2} - 2\\sum_{n \\ge 1}na_n \\xi^n + (\\alpha - 1)\n\\sum_{n \\ge 0}a_n\\xi^n = 0\n\\end{equation}\nWriting all sums so that their indices start from $0$,\n\\begin{equation}\\label{c5s2e24}\n\\sum_{n \\ge 0}(n+2)(n+1)a_{n+2} \\xi^n - 2\\sum_{n \\ge 0}(n+1)a_{n+1} \\xi^{n+1} + \n(\\alpha - 1)\\sum_{n \\ge 0}a_n\\xi^n = 0\n\\end{equation}\nor\n\\begin{equation}\\label{c5s2e25}\n\\sum_{n \\ge 0}\\left[(n+2)(n+1)a_{n+2} + (\\alpha - 1)a_n\\right]\\xi^n - \n2\\sum_{n \\ge 0}(n+1)a_{n+1} \\xi^{n+1} = 0.\n\\end{equation}\nIf this equation has to be valid for all values of $\\xi$ then the coefficient of\nevery power of $\\xi$ must vanish. In particular,\n\\begin{equation}\\label{c5s2e26}\n(n+2)(n+1)a_{n+2} + (\\alpha - 1)a_n - 2na_n = 0\n\\end{equation}\nor\n\\begin{equation}\\label{c5s2e27}\na_{n+2} = \\frac{2n - \\alpha + 1}{(n+2)(n+1)}a_n.\n\\end{equation}\nThe recurrence relation terminates when\n\\begin{equation}\\label{c5s2e28}\n\\alpha = 2n + 1\n\\end{equation}\nor, using equation \\eqref{c5s2e16}\n\\begin{equation}\\label{c5s2e29}\nE = \\left(n + \\frac{1}{2}\\right)\\hslash\\omega.\n\\end{equation}\nThe only solutions that are physically permissible are the ones for which the\nrecurrence terminates. For in this case, the series of \\eqref{c5s2e20} becomes\na polynomial and the complete solution \\eqref{c5s2e18} does indeed assume the\nasymptotic form of $\\psi(\\xi) = e^{-\\xi^2/2}$. The energy of the physically\npermissible solutions is restricted to the discrete set of values in equation\n\\eqref{c5s2e29}. For a particular $n$, we distinguish the energy eigenvalue by\ndenoting it as $E_n$ and the corresponding polynomial as $H_n$. Thus, the \neigenfunctions of the Hamiltonian \\eqref{c5s2e11} are\n\\begin{equation}\\label{c5s2e30}\n\\psi_n(\\xi) = e^{-\\xi^2/2}H_n(\\xi)\n\\end{equation}\nand the corresponding eigenvalues are\n\\begin{equation}\\label{c5s3e31}\nE_n = \\left(n + \\frac{1}{2}\\right)\\hslash\\omega,\n\\end{equation}\nwhere $n = 0, 1, 2, \\ldots$. The lowest energy of a harmonic oscillator is\n\\begin{equation}\\label{c5s3e32}\nE_0 = \\frac{1}{2}\\hslash\\omega.\n\\end{equation}\nIt is \\emph{not} zero. Thus, a quantum harmonic oscillator is never at rest. If \nit were, it would violate Heisenberg's uncertainty relation. We also \nnote that\n\\begin{equation}\\label{c5s2e33}\nE_{n+1} - E_n = \\hslash\\omega.\n\\end{equation}\nThus, when a harmonic oscillator goes from the ground state of $n = 0$ to the \nfirst excited state of $n = 1$, it absorbs an energy $\\hslash\\omega$. When it\ngoes to the second excited state, it would have absorbed two quanta, each one \nof energy $\\hslash\\omega$. In general, when the harmonic oscillator is in the\n$m$th excited state, it would have absorbed $m$ quanta of energy $\\hslash\n\\omega$.\n\nIt is possible to consider a crystal as an ensemble (the French word for a \ncollection) of harmonic oscillators of various frequencies. An individual\nharmonic oscillator corresponds to energy quanta of a specific frequency. 
In\nthe case of a vibrating crystal, the energy quanta are called \\emph{phonons}.\nA crystal has $m_1$ phonons of frequency $\\omega_1$ if the harmonic oscillator\nwith frequency $\\omega_1$ is in the energy state $E_{m_1} = (m_1 + 1/2)\\hslash\n\\omega_1$.\n\nThe harmonic oscillator we considered so far was a mass $m$ that was subject to\na force $-kx$ and we showed that its Hamiltonian is given by equation \n\\eqref{c5s2e7}\n\\[\nH = \\frac{p^2}{2m} + \\frac{1}{2}kx^2.\n\\]\nIf we choose our units such that $m = 1$ and $k = 1$ and we start calling the \ncoordinate $x$ as $q$ then the Hamiltonian can be written as\n\\begin{equation}\\label{c5s2e34}\nH = \\frac{p^2}{2} + \\frac{q^2}{2}.\n\\end{equation}\nAny system whose Hamiltonian can be written in this form is called a harmonic\noscillator. The quantities $p$ and $q$ are called generalised momentum and \ngeneralised coordinate. It is possible to write the Hamiltonian of \nelectromagnetic radiation in a black body as a sum of terms of the form \n\\eqref{c5s2e34}. Analogous to a vibrating crystal, the electromagnetic radiation\nin a black body can also be considered to be an ensemble of harmonic oscillators\nof various frequencies. The energy quanta of these oscillators are called\n\\emph{photons}. If an oscillator of frequency $\\omega_1$ is in the $m_1$th\nexcited state then we say that there are $m_1$ photons of frequency $\\omega_1$\nin the black body. \n\nWhen one utters the word  `photon' or a `phonon' we should interpret it in terms\nof the excited state of an ensemble of oscillators. We should not visualise them \nas `particles' of light or sound.", "meta": {"hexsha": "048ad5cf780801ec39fc1da679b4ff9c7c97cb6f", "size": 15683, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "qm/modern-physics/notes/c5.tex", "max_stars_repo_name": "drameyjoshi/physics", "max_stars_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "qm/modern-physics/notes/c5.tex", "max_issues_repo_name": "drameyjoshi/physics", "max_issues_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "qm/modern-physics/notes/c5.tex", "max_forks_repo_name": "drameyjoshi/physics", "max_forks_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.849726776, "max_line_length": 103, "alphanum_fraction": 0.7250526047, "num_tokens": 5619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914997895581, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5677055663416423}}
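As a quick numerical cross-check of the spectrum derived above (an illustration, not part of the original derivation), one can discretise the dimensionless eigenvalue problem \\eqref{c5s2e17}, $-\\psi'' + \\xi^2\\psi = \\alpha\\psi$, by central finite differences on a truncated grid and confirm that the lowest eigenvalues approach $\\alpha = 2n + 1$, i.e. $E_n = (n + 1/2)\\hslash\\omega$.
\\begin{verbatim}
# Finite-difference check: eigenvalues of -d^2/dxi^2 + xi^2 on a grid
# should approach alpha = 2n + 1 for the lowest modes.
import numpy as np

xi, h = np.linspace(-8.0, 8.0, 801, retstep=True)
D2 = (np.diag(-2.0 * np.ones(len(xi)))
      + np.diag(np.ones(len(xi) - 1), 1)
      + np.diag(np.ones(len(xi) - 1), -1)) / h**2   # second derivative
H = -D2 + np.diag(xi**2)                            # -psi'' + xi^2 psi
print(np.linalg.eigvalsh(H)[:4])                    # approx [1., 3., 5., 7.]
\\end{verbatim}
The grid truncation at $\\xi = \\pm 8$ is harmless for the lowest states because $\\psi_n \\sim e^{-\\xi^2/2}$ decays rapidly, exactly the asymptotic form used in the derivation.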
{"text": "\\input{../header_class}\r\n\r\n%---------- start document ---------- %\r\n \\section{vector -- vector object and arithmetic}\\linkedzero{vector}\r\n \\begin{itemize}\r\n   \\item {\\bf Classes}\r\n   \\begin{itemize}\r\n     \\item \\linkingone{vector}{Vector}\r\n   \\end{itemize}\r\n   \\item {\\bf Functions}\r\n     \\begin{itemize}\r\n       \\item \\linkingone{vector}{innerProduct}\r\n     \\end{itemize}\r\n \\end{itemize}\r\n\r\nThis module provides an exception class.\r\n\\begin{description}\r\n  \\item[VectorSizeError]:\\ Report vector size is invalid. (Mainly for operations with two vectors.)\r\n\\end{description}\r\n\r\n\\C\r\n\r\n \\subsection{Vector -- vector class}\\linkedone{vector}{Vector}\r\n Vector is a class for vector.\r\n  \\initialize\r\n  \\func{Vector}{\\hiki{compo}{list}}{\\out{Vector}}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create Vector object from \\param{compo}.\r\n  % added document\r\n  %\r\n  % \\spacing\r\n  % input, output document\r\n  \\param{compo} must be a list of elements which are an integer or an instance of \\linkingone{ring}{RingElement}.\r\n  \\begin{at}\r\n    \\item[compo]\\linkedtwo{vector}{Vector}{compo}:\\\\ It expresses component of vector.\r\n  \\end{at}\r\n  \\begin{op}\r\n    \\verb|u+v| & Vector sum.\\\\\r\n    \\verb|u-v| & Vector subtraction.\\\\\r\n    \\verb|A*v| & Multiplication vector with matrix\\\\\r\n    \\verb|a*v| & or scalar multiplication.\\\\\r\n    \\verb|v//a| & Scalar division.\\\\\r\n    \\verb|v%n| & Reduction each elements of \\linkingtwo{vector}{Vector}{compo}\\\\\r\n    \\verb|-v| & element negation.\\\\\r\n    \\verb|u==v| & equality.\\\\\r\n    \\verb|u!=v| & inequality.\\\\\r\n    \\verb+v[i]+ & Return the coefficient of i-th element of Vector.\\\\\r\n    \\verb+v[i] = c+ & Replace the coefficient of i-th element of Vector by c.\\\\\r\n    \\verb|len(v)| & return length of \\linkingtwo{vector}{Vector}{compo}.\\\\\r\n    \\verb|repr(v)| & return representation string.\\\\\r\n    \\verb|str(v)| & return string of \\linkingtwo{vector}{Vector}{compo}.\\\\\r\n  \\end{op}\r\n  Note that index is 1-origin, which is standard in mathematics field.\r\n\\begin{ex}\r\n>>> A = vector.Vector([1, 2])\r\n>>> A\r\nVector([1, 2])\r\n>>> A.compo\r\n[1, 2]\r\n>>> B = vector.Vector([2, 1])\r\n>>> A + B\r\nVector([3, 3])\r\n>>> A % 2\r\nVector([1, 0])\r\n>>> A[1]\r\n1\r\n>>> len(B)\r\n2\r\n\\end{ex}%Don't indent!\r\n  \\method\r\n  \\subsubsection{copy -- copy itself}\\linkedtwo{vector}{Vector}{copy}\r\n   \\func{copy}{\\param{self}}{\\out{Vector}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return copy of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\negok Note that this function returns integer only.\\\\\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\param{a} must be int, long or rational.Integer.\\\\\r\n  \\subsubsection{set -- set other compo}\\linkedtwo{vector}{Vector}{set}\r\n   \\func{set}{\\param{self},\\ \\hiki{compo}{list}}{(None)}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Substitute \\linkingtwo{vector}{Vector}{compo} with \\param{compo}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\negok Note that this function returns integer only.\\\\\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\param{a} must be int, long or rational.Integer.\\\\\r\n  \\subsubsection{indexOfNoneZero -- first non-zero coordinate}\\linkedtwo{vector}{Vector}{indexOfNoneZero}\r\n   
\\func{indexOfNoneZero}{\\param{self}}{integer}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the first index of a non-zero element of \\param{self}.\\linkingtwo{vector}{Vector}{compo}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad \\negok Raise ValueError if all elements of \\linkingtwo{vector}{Vector}{compo} are zero.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\param{a} must be int, long or rational.Integer.\\\\\r\n  \\subsubsection{toMatrix -- convert to Matrix object}\\linkedtwo{vector}{Vector}{toMatrix}\r\n   \\func{toMatrix}{\\param{self},\\ \\hikiopt{as\\_column}{bool}{False}}{\\out{Matrix}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return \\linkingone{matrix}{Matrix} object using \\linkingone{matrix}{createMatrix} function.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\negok Note that this function returns integer only.\\\\\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad If \\param{as\\_column} is True, create the column matrix with \\param{self}.\r\n   Otherwise, create the row matrix.\\\\\r\n\\begin{ex}\r\n>>> A = vector.Vector([0, 4, 5])\r\n>>> A.indexOfNoneZero()\r\n2\r\n>>> print A.toMatrix()\r\n0 4 5\r\n>>> print A.toMatrix(True)\r\n0\r\n4\r\n5\r\n\\end{ex}%Don't indent!\r\n\\C\r\n  \\subsection{innerProduct(function) -- inner product}\\linkedone{vector}{innerProduct}\r\n  \\func{innerProduct}{\\hiki{bra}{Vector}, \\ \\hiki{ket}{Vector}}{\\out{RingElement}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the inner product of \\param{bra} and \\param{ket}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad The function supports the Hermitian inner product for elements in the complex number field.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad \\negok Note that the returned value depends on the type of the elements.\\\\\r\n\\begin{ex}\r\n>>> A = vector.Vector([1, 2, 3])\r\n>>> B = vector.Vector([2, 1, 0])\r\n>>> vector.innerProduct(A, B)\r\n4\r\n>>> C = vector.Vector([1+1j, 2+2j, 3+3j])\r\n>>> vector.innerProduct(C, C)\r\n(28+0j)\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "2424c64942a2b134e7eb08a2f7045881eea771fb", "size": 5233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/en/vector.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/en/vector.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/en/vector.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.9805194805, "max_line_length": 114, "alphanum_fraction": 0.644563348, "num_tokens": 1573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7549149868676284, "lm_q1q2_score": 0.5677055622441688}}
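For readers who want the documented innerProduct semantics spelled out, here is a minimal plain-Python sketch. It is not the nzmath implementation; which side is conjugated is an assumption chosen to be consistent with the example outputs above.
\\begin{verbatim}
# Sketch of the documented innerProduct behaviour (not nzmath itself):
# for complex entries the first argument is conjugated, so <v, v> is
# real and non-negative, matching the (28+0j) output in the manual.
def inner_product(bra, ket):
    assert len(bra) == len(ket), "vector sizes must match"
    return sum((b.conjugate() if isinstance(b, complex) else b) * k
               for b, k in zip(bra, ket))

print(inner_product([1, 2, 3], [2, 1, 0]))                     # 4
print(inner_product([1+1j, 2+2j, 3+3j], [1+1j, 2+2j, 3+3j]))   # (28+0j)
\\end{verbatim}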
{"text": "\n\\section{Tricks}\n\\begin{enumerate}\n\n\\item\nThe Eigenmath result field can be copied to the pasteboard by\nclick, drag, then release.\n\n\\item\nThe last result is stored in the symbol {\\tt last}.\n\n\\item\nIn a script, setting {\\tt trace=1}\ncauses each line to be printed just before it is evaluated.\nUseful for debugging.\n\n\\item\nUse {\\tt contract(A)} to get the mathematical trace of matrix $A$.\n\n\\item\nCalculations in a script can span multiple lines.\nThe trick is to arrange things so the parser will keep going.\nFor example, if a calculation ends with a plus sign, the parser will go to the next line to get another term.\nAlso, the parser will keep going when it expects a close parenthesis.\n\n\\item\nNormally a function body is not evaluated when a function is defined.\nHowever, in some cases it is required that the function body be the result of something.\nThe trick is to use eval.\nFor example, the following code causes the function body to be a sixth order Taylor series expansion of $cos(x)$.\n\n\\begin{Verbatim}[formatcom=\\color{blue}]\nf(x) = eval(taylor(cos(x),x,6))\n\\end{Verbatim}\n\n\\item\nUse {\\tt binding} to see the unevaluated binding of a symbol.\n\n\\begin{Verbatim}[formatcom=\\color{blue}]\nbinding(f)\n\\end{Verbatim}\n\n\\item\nThis is how to clear a symbol.\n\n\\begin{Verbatim}[formatcom=\\color{blue}]\nf = quote(f)\n\\end{Verbatim}\n\n\\end{enumerate}\n", "meta": {"hexsha": "dad75efb135b8944fac0dc9303891d6b88715371", "size": 1345, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tricks.tex", "max_stars_repo_name": "zhouxs1023/eigenmath", "max_stars_repo_head_hexsha": "e302cee23a4d5877ffe0975f513b35654fa50961", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/tricks.tex", "max_issues_repo_name": "zhouxs1023/eigenmath", "max_issues_repo_head_hexsha": "e302cee23a4d5877ffe0975f513b35654fa50961", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/tricks.tex", "max_forks_repo_name": "zhouxs1023/eigenmath", "max_forks_repo_head_hexsha": "e302cee23a4d5877ffe0975f513b35654fa50961", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3725490196, "max_line_length": 113, "alphanum_fraction": 0.7539033457, "num_tokens": 348, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5677055622441687}}
{"text": "\\chapter{Preconditioning}\n\\label{chap:precond}\n\nThe linear system \\eqref{eq:mhd_saddle} is typically sparse and of large dimensions, hence to efficiently solve for it we use a preconditioned iterative approach as proposed in \\cite{li2010numerical}. We start by reviewing some preconditioning strategies for the incompressible Navier-Stokes and Maxwell subproblems in isolation. From these techniques we will then introduce and  numerically test  preconditioners for the full MHD system in linearised form as in \\eqref{eq:mhd_saddle}.\n\nLet us introduce a number of matrices in addition to the ones already defined in Chapter~2: the velocity mass matrix $Q$, the pressure mass matrix $Q_p$, the pressure convection diffusion matrix  $F_p$, the pressure Laplacian matrix $A_p$, the scalar Laplacian matrix $L$ and the mass matrix~$X$ by\n\\begin{subequations}\n\\label{eq:PrecondMatrices}\n\\begin{eqnarray}\nQ_{i,j}&=&\\int_\\Omega\\, \\uu{\\psi_j}\\cdot \\uu{\\psi_i} \\,d\\uu{x}, \\quad 1\\leq i,j \\leq n_u,\\\\\n(Q_p)_{i,j}&=& \\int_\\Omega\\, \\alpha_j \\alpha_i \\,dx, \\ \\ 1\\leq i,j \\leq m_u, \\\\\n(F_{p})_{i,j}&=& \\nu \\int_\\Omega\\, \\grad \\alpha_j \\cdot \\grad \\alpha_i +(\\uu{w} \\cdot \\grad \\alpha_j)\\alpha_i\\,dx,\\ \\ 1\\leq i,j \\leq m_u, \\\\\n(A_{p})_{i,j}&=&  \\int_\\Omega\\, \\grad \\alpha_j \\cdot \\grad \\alpha_i \\,dx, \\ \\ 1\\leq i,j \\leq m_u, \\\\\nL_{i,j}&=&\\int_\\Omega\\,\\nabla\\beta_j\\cdot\\nabla\\beta_i\\,d\\uu{x},  \\ \\ 1\\leq i,j \\leq m_b\\\\\nX_{i,j}&=&\\int_\\Omega\\, \\uu{\\phi}_j\\cdot\\uu{\\phi}_i\\,d\\uu{x},  \\ \\ 1\\leq i,j \\leq n_b,\n\\end{eqnarray}\n\\end{subequations}\nwhere $\\uu{w}$ is the finite element velocity from the current iteration. The matrices $F_p$ and $A_p$ are well defined as we use continuous elements for the pressure finite element space $Q_h$. The matrices incorporate homogeneous Neumann boundary conditions.\n\n\\section{Preconditioning the incompressible Navier-Stokes equations}\n\\label{sec:NSprecond}\n\n\nConsider the steady state incompressible Navier-Stokes equations in isolation. Let\n\\begin{equation}\n\\label{eq:ns_coeff}\n\\mathcal{K}_{\\rm NS}=\n\\begin{pmatrix}\nF & B^T \\\\\nB & 0\n\\end{pmatrix},\n\\end{equation}\nbe the discretised and linearised Navier-Stokes (or Oseen) subproblem where $F~=~A~+~O$ has been defined in \\eqref{eq:forms}. Due to the convection term, $O$, this  system is non-symmetric and we will use GMRES to iteratively solve this subproblem; \\cite{saad1986gmres}. A common approach for solving a saddle point system, as in \\eqref{eq:ns_coeff}, is to use a block diagonal or block triangular preconditioner of the form\n\\begin{equation}\n\\label{eq:ns_pc_upper}\n\\mathcal{M}_{\\rm tNS} =\n\\begin{pmatrix}\nF & B^T \\\\\n0 & -S\n\\end{pmatrix} \\quad \\mbox{or} \\quad\n\\mathcal{M}_{\\rm dNS} =\n\\begin{pmatrix}\nF & 0 \\\\\n0 & S\n\\end{pmatrix},\n\\end{equation}\nwhere $S\\approx B F^{-1} B^T$ is an approximation to the Schur complement. If $S = B F^{-1} B^T$ is {\\em precisely} the Schur complement, then it has been proved in~\\cite{murphy2000note} that the preconditioned matrix has exactly $2$ eigenvalues when using the block triangular preconditioner  (i.e., $\\mathcal{M}_{\\rm tNS}^{-1}\\mathcal{K}_{\\rm NS}$) or $3$ distinct eigenvalues in the block diagonal case  (i.e., $\\mathcal{M}_{\\rm dNS}^{-1}\\mathcal{K}_{\\rm NS}$). 
Both are diagonalisable and hence GMRES will converge in exactly 2 or 3 iterations in the absence of round-off errors.\n\nIn practice it is often too expensive to form and solve for the exact Schur complement $B F^{-1} B^T$; hence a good approximation is needed. Two well-known preconditioners for the incompressible Navier-Stokes equations are the Least Squares Commutator (LSC) and the Pressure Convection-Diffusion (PCD) preconditioners. A description of both can be found in \\cite{elman2005finite}, and we will just outline how they are applied at the discrete level. For the Navier-Stokes preconditioner we will consider block triangular preconditioners of the same form as $\\mathcal{M}_{\\rm tNS}$.\n\n% Both methods (LSC and PCD)  start with the convection-diffusion operator associated with the velocity space $\\uu{V}_h$ given by\n% $$\\mathcal{L} = -\\nu \\Delta +\\uu{w} \\cdot \\nabla\\, .$$\n% As before $\\uu{w}$ is the discrete velocity calculated at the previous non-linear iteration. Suppose that there is a corresponding operator defined in the pressure space\n% $$\\mathcal{L}_p = (-\\nu \\Delta +\\uu{w} \\cdot \\nabla)_p\\, .$$\n% Consider the commutator of the convection-diffusion operator associated with the gradient operator\n% \\begin{equation} \\label{eq:ContCommutator}\n% \\epsilon = (-\\nu \\Delta +\\uu{w} \\cdot \\nabla)\\nabla - \\nabla (-\\nu \\Delta +\\uu{w} \\cdot \\nabla)_p\n% \\end{equation}\n% to be small. In fact, if $\\uu{w}$ was constant then $\\epsilon = 0$. We will be using \\eqref{eq:ContCommutator} to derive both LSC and PCD.\n\n\n\n\\subsection{Pressure Convection-Diffusion (PCD)}\n\\label{sec:PCD_outline}\n\n\nIn \\cite[Chap. 8]{elman2005finite} the discrete commutator of the convection-diffusion operator associated with the gradient operation is introduced and given by\n\\begin{equation} \\label{eq:DisCommutator}\n    \\epsilon_h = (Q^{-1}F)(Q^{-1}B^T)-(Q^{-1}B^T)(Q_p^{-1}F_p),\n\\end{equation}\nwhere the matrices $Q$, $Q_p$ and $F_p$ are defined in \\eqref{eq:PrecondMatrices}. Assuming that the commutator is small, pre- and post-multiplying \\eqref{eq:DisCommutator} by $B F^{-1} Q$ and $F_p^{-1}Q_p$, respectively, allows us to isolate the Schur complement, giving\n\\begin{equation} \\label{eq:SchurApprox}\n    BF^{-1}B^T \\approx B Q^{-1}B^T F_p^{-1} Q_p.\n\\end{equation}\n\nOur discretisation is inf-sup stable which implies that there is spectral equivalence between $BQ^{-1}B^T$ and the pressure Laplacian matrix, $A_p$; see \\cite[Section 5.5.1]{elman2005finite}. Hence, the Schur complement can be approximated by:\n$$S_{\\rm PCD} =A_p F_p^{-1}Q_p.$$\nApplying the PCD preconditioner (i.e., taking $S = S_{\\rm PCD}$) to the linearised Navier-Stokes system involves solving the system\n\\begin{equation} \\nonumber\n% \\label{eq:matrix-system}\n\\left(\n\\begin{array}{cc}\nF & B^T \\\\\n0 & -A_p F_p^{-1}Q_p\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\nx \\\\\ny\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c}a\\\\b\n\\end{array}\n\\right)\n\\end{equation}\nat each Krylov iteration. This can be solved efficiently by splitting it into the following two steps:\n\\begin{itemize} \\label{it:PCDsolve}\n    \\item[1.] Solve for $y$: $y = -Q_p^{-1}F_p A_p^{-1}b;$\n    \\item[2.] 
Solve for $x$: $x = F^{-1}(a-B^Ty).$\n\\end{itemize}\nThis means that we have one pressure Poisson solve ($A_p^{-1}$), one mass matrix solve ($Q_p^{-1}$), one convection-diffusion solve ($F^{-1}$) and multiplications with $F_p$ and $B^T$ at each Krylov iteration. It is possible to solve these iteratively using multigrid and/or iterative methods. However, to test the preconditioner we will use a direct solver in this thesis.\n\n\\subsection{Least-Squares Commutator (LSC)}\n\\label{sec:LSC_outline}\n\nOne disadvantage with using the PCD preconditioner is that it requires the construction of the matrices $A_p$, $Q_p$ and $F_p$ in \\eqref{eq:PrecondMatrices}. A second approach to approximate the Schur complement is the LSC preconditioner \\cite[Chap. 8]{elman2005finite} which primarily uses the available matrix coefficients in \\eqref{eq:ns_coeff} and the construction of $Q$ to form the preconditioner (without explicitly forming $A_p$, $Q_p$ and $F_p$).\n\nAs for the derivation of the PCD preconditioner we start off with the discrete commutator of the convection-diffusion operator\n\\begin{equation} \\nonumber\n    \\epsilon_h = (Q^{-1}F)(Q^{-1}B^T)-(Q^{-1}B^T)(Q_p^{-1}F_p).\n\\end{equation}\nSuppose that the $Q$-norm is defined by $\\|v\\|_{Q} = (Qv,v)^{\\nicefrac{1}{2}}$. Then this time we minimise $\\epsilon_h$ over the $j$th column of $F_p$ (that is $[F_p]_j$) in the $Q$-norm to try to find an approximation for $F_p$. As shown in \\cite{elman2005finite}, the minimisation is given by\n$$\\min \\|[Q^{-1}FQ^{-1}B^T]_j-Q^{-1}B^TQ_p^{-1}[F_p]_j \\|_Q.$$\nSolving this optimisation problem, as shown in \\cite{elman2005finite}, is done by solving the normal equations\n$$Q_p^{-1}BQ^{-1}B^TQ_p^{-1}F_p = Q_p^{-1}BQ^{-1} FQ^{-1}B^T.$$\nThis yields an approximation to $F_p$ as\n$$F_p \\approx Q_p(BQ^{-1}B^T)^{-1}(BQ^{-1} FQ^{-1}B^T).$$\nBy substituting this into expression \\eqref{eq:SchurApprox} we obtain the LSC approximation to the Schur complement:\n\\begin{equation} \\nonumber\n    S \\approx BF^{-1}B^T \\approx S_{\\rm LSC} = (B Q^{-1} B^T)(BQ^{-1}FQ^{-1}B^T)^{-1}(B Q^{-1} B^T).\n\\end{equation}\nTherefore, applying the LSC preconditioner to the Oseen system $\\mathcal{K}_{\\rm NS}$ in \\eqref{eq:ns_coeff} involves solving the system\n\\begin{equation} \\nonumber\n\\left(\n\\begin{array}{cc}\nF & B^T \\\\\n0 & -S_{\\rm LSC}\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\nx \\\\\ny\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c}a\\\\b\n\\end{array}\n\\right)\n\\end{equation}\nat each Krylov iteration. Again, this can be efficiently split up into the following two steps:\n\\begin{itemize}\n    \\item[1.] Solve for $y$: $y = -(B Q^{-1} B^T)^{-1}(BQ^{-1}FQ^{-1}B^T)(B Q^{-1} B^T)^{-1}b;$\n    \\item[2.] Solve for $x$: $x = F^{-1}(a-B^Ty).$\n\\end{itemize}\nHence, we have two pressure Poisson solves ($(B Q^{-1} B^T)^{-1}$) and one convection-diffusion solve ($F^{-1}$) at each Krylov iteration as well as matrix multiplications. In practice, we take the diagonal or lumped diagonal of $Q$ to form $B Q^{-1} B^T$. These solves, as with the PCD preconditioner, will be done directly in this thesis.\n\n\\subsection{PCD versus LSC}\n\nThe main advantage of solving the commutator using the least-squares approach is that the matrices that define the preconditioner are available from the original system $\\mathcal{K}_{\\rm NS}$ in \\eqref{eq:ns_coeff} and the construction of $Q$. However, for PCD we require the construction of the matrices $A_p$, $Q_p$ and $F_p$. 
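For concreteness, the two-step PCD application above can be written out in a few lines of code. The following is a minimal sketch only: it assumes $F$, $B^T$, $A_p$, $F_p$ and $Q_p$ are available as SciPy sparse matrices, and sparse LU factorisations stand in for the direct or multigrid solvers discussed in this chapter (in practice the factorisations would be computed once, not at every application).\n\\begin{verbatim}\nimport scipy.sparse.linalg as spla\n\n# One application of the PCD preconditioner to a residual (a, b):\n#   step 1: y = -Qp^{-1} Fp Ap^{-1} b\n#   step 2: x = F^{-1} (a - B^T y)\ndef apply_pcd(F, Bt, Ap, Fp, Qp, a, b):\n    Ap_lu = spla.splu(Ap.tocsc())\n    Qp_lu = spla.splu(Qp.tocsc())\n    F_lu = spla.splu(F.tocsc())\n    y = -Qp_lu.solve(Fp @ Ap_lu.solve(b))\n    x = F_lu.solve(a - Bt @ y)\n    return x, y\n\\end{verbatim}\n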
In terms of setup cost, then, LSC is slightly more efficient to form. On the other hand, applying the LSC preconditioner requires two pressure Poisson solves, whereas PCD requires only one.\n\nWe will consider experiments with both preconditioners for the incompressible Navier-Stokes problem in isolation to determine which seems more effective. This preconditioner will then be applied within the solver for the linearised MHD system.\n\n\\section{Preconditioning Maxwell's equations}\n\\label{sec:MaxwellPrecond}\n\nNext, consider the Maxwell subproblem\n\\begin{equation}\n\\label{eq:m_coeff}\n\\mathcal{K}_{\\rm MX}=\n\\begin{pmatrix}\nM & D^T \\\\\nD & 0\n\\end{pmatrix},\n\\end{equation}\nappearing in a linearised MHD system \\eqref{eq:mhd_saddle}.\n% , but without coupling terms.\nAs we did for the Navier-Stokes subproblem in Section \\ref{sec:NSprecond}, we apply a block preconditioning strategy for $\\mathcal{K}_{\\rm MX}$ in \\eqref{eq:m_coeff}. Notice that $\\mathcal{K}_{\\rm MX}$ in \\eqref{eq:m_coeff} is symmetric, and hence we will focus on SPD block diagonal preconditioners.\n\n\\subsection{An ideal preconditioner}\n\nThe $(1,1)$ block of $\\mathcal{K}_{\\rm MX}$ is the curl-curl operator; hence the matrix $M$ is singular, with a null space of dimension $m_b$ consisting of the discrete gradients. Therefore the usual Schur complement does not exist as the matrix $M$ cannot be inverted. To overcome this difficulty, we employ the approach in \\cite{golub2003solving,greif2006preconditioners} based on augmentation and use preconditioners of the form:\n\\begin{equation} \\nonumber\n% \\label{eq:maxwell_pc_ideal}\n\\begin{pmatrix}\nM+D^T W^{-1} D & 0 \\\\\n0 & W\n\\end{pmatrix},\n\\end{equation}\nwhere $W$ is a symmetric positive definite matrix.\n\n% . More precisely, we consider replacing $M$ by $M+D^T\\mathcal{W}^{-1}D$ where $\\mathcal{W}\\in {\\mathbb R}^{m_b\\times m_b}$ is a symmetric positive definite matrix just to form the preconditioner, see \\cite{golub2003solving,greif2006preconditioners} for more details. The addition of the matrix ($D^T\\mathcal{W}^{-1}D$) removes the singularity of the $(1,1)$ block of $\\mathcal{K}_{\\rm MX}$ without changing the solution (since $Db = 0$).  In practice, we don't actually replace $M$ by $M+D^T\\mathcal{W}^{-1}D$ but just consider preconditioners with $M+D^T\\mathcal{W}^{-1}D$ in the $(1,1)$ block which then are applied to $\\mathcal{K}_{\\rm MX}$. For the Maxwell subproblem the appropriate choice of $\\mathcal{W}$ is the scalar Laplacian on $S_h$ defined as $L=(L_{i,j})_{i,j=1}^{m_b} \\in{\\mathbb R}^{m_b \\times m_b}$ with\n% \\begin{equation}\n% \\label{eq:scalar_laplace}\n% L_{i,j}=\\int_\\Omega\\,\\nabla\\beta_j\\cdot\\nabla\\beta_i\\,d\\uu{x},\n% \\end{equation}\n% see  \\cite{greif2007preconditioners}. Therefore, the augmented system is given by:\n% \\begin{equation}\n% \\label{eq:AugmentMaxwell}\n% \\bar{\\mathcal{K}}_{\\rm MX}=\n% \\begin{pmatrix}\n% M + D^TL^{-1}D & D^T \\\\\n% D & 0\n% \\end{pmatrix}.\n% \\end{equation}\n\n\n\nIt has been shown in \\cite{greif2007preconditioners} that an appropriate choice of $W$ is the scalar Laplacian, $L$, defined in \\eqref{eq:PrecondMatrices}. 
This leads to the ideal preconditioner:\n\\begin{equation}\n\\label{eq:maxwell_pc_ideal}\n\\mathcal{M}_{\\rm iMX} =\n\\begin{pmatrix}\nM+D^T L^{-1} D & 0 \\\\\n0 & L\n\\end{pmatrix}.\n\\end{equation}\nApplying \\eqref{eq:maxwell_pc_ideal} as the preconditioner yields exactly two eigenvalues, $1$ and $-1$, with algebraic multiplicities $n_b$ and $m_b$, respectively. Therefore using this matrix as a preconditioner means that MINRES will converge in two iterations, in the absence of round-off errors \\cite{paige1975solution}. However, forming the matrix $M+D^T L^{-1} D$ is costly; hence $\\mathcal{M}_{\\rm iMX}$ is impractical for large systems.\n\n\n\\subsection{A practical preconditioner}\n\nA good approximation for $M+D^T L^{-1} D$ is required to make the ideal preconditioner, $\\mathcal{M}_{\\rm iMX}$, suitable in practice. It has been shown in \\cite{greif2007preconditioners} that $M+D^T L^{-1} D$ is spectrally equivalent to $M+X$, where $X$ is the vector mass matrix on the magnetic space defined in \\eqref{eq:PrecondMatrices}. Using this approximation yields the practical preconditioner\n\\begin{equation}\n\\label{eq:maxwell_pc_X}\n\\mathcal{M}_{\\rm MX} =\n\\begin{pmatrix}\nN& 0 \\\\\n0 & L\n\\end{pmatrix},\n\\end{equation}\nwhere $N = M+X$ is a shifted curl-curl operator. A scalable multigrid solver for $N$ has been developed in \\cite{hiptmair2007nodal} which involves one shifted Laplacian solve on the vector space and one scalar Laplacian solve. However, the construction of this multigrid solver is involved and hence we will consider direct solves for this preconditioner and leave the multigrid implementation as a possible area of future work.\n\n\\section{Preconditioning the MHD equations}\n\\label{sec:MHDp}\nIn Sections \\ref{sec:nonlinear} and \\ref{sec:FEMdecouple} we introduced three iteration schemes, namely Picard iteration (P), Magnetic Decoupling (MD) and Complete Decoupling (CD). Using the results from Sections \\ref{sec:MaxwellPrecond} and \\ref{sec:NSprecond} we will discuss the preconditioning approaches that we apply for these non-linear iteration schemes.\n\n\n\\subsection{Picard iteration}\n\\label{sec:MHDprecond}\n\n% In Sections \\ref{sec:NSprecond} and \\ref{sec:MaxwellPrecond} we looked briefly at the preconditioning strategies for the Navier-Stokes and Maxwell's equations.\n\nUsing the techniques of Sections \\ref{sec:NSprecond} and \\ref{sec:MaxwellPrecond} for the incompressible Navier-Stokes and Maxwell problems, respectively, we will look at possible scalable preconditioners for the linearised MHD problem\n\\begin{equation} \\label{eq:K}\n    {\\mathcal K}_{\\rm MH} = \\left(\n\\begin{array}{cccc}\nA+O & B^T & C^T & 0\\\\\nB & 0 & 0 & 0\\\\\n-C & 0 & M & D^T \\\\\n0 & 0 & D & 0\n\\end{array}\n\\right).\n\\end{equation}\n% Using the Navier-Stokes and Maxwell subproblem preconditioners \\eqref{eq:ns_pc_upper} and \\eqref{eq:maxwell_pc_X} respectively, then w\nWe propose the following preconditioner for ${\\mathcal K}_{\\rm MH}$:\n\\begin{equation}\n\\label{eq:mhd_pc_ls}\n\\mathcal{M}_{\\rm MH} =\n\\left(\n\\begin{array}{cccc}\nF & B^T & C^T & 0\\\\\n0 & -{S} & 0 & 0 \\\\\n-C & 0 & N & 0\\\\\n0 & 0 & 0 & L\n\\end{array}\n\\right),\n\\end{equation}\nwhere ${S}$ is now either the LSC or PCD approximation to the fluid flow Schur complement. 
The preconditioned matrix, $\\mathcal{M}_{\\rm MH}^{-1}{\\mathcal K}_{\\rm MH}$, has the eigenvalue $\\lambda = 1$ with algebraic multiplicity of at least $n_u + n_b$ and the eigenvalue $\\lambda = -1$ with algebraic multiplicity of at least $m_b$; see \\cite[Theorem~8]{li2010numerical}. Due to the coupling terms, $C$, the application of this preconditioner is computationally expensive. To overcome this, we propose to invert $\\mathcal{M}_{\\rm MH}$ by means of an inner preconditioner. The inner preconditioner is taken as\n\\begin{equation}\n\\label{eq:mhd_pc_inner}\n\\mathcal{M}_{\\rm innerMH} =\n\\left(\n\\begin{array}{cccc}\nF & B^T & 0 & 0\\\\\n0 & -{S} & 0 & 0 \\\\\n0 & 0 & N & 0\\\\\n0 & 0 & 0 & L\n\\end{array}\n\\right).\n\\end{equation}\nHere the preconditioned matrix, $\\mathcal{M}_{\\rm innerMH}^{-1}{\\mathcal M}_{\\rm MH}$, has an eigenvalue $\\lambda = -1$ with algebraic multiplicity of at least $m_u + n_u + 3m_b - n_b$; see \\cite[Theorem~10]{li2010numerical}.\n\n\n\n\n\\subsection{Magnetic Decoupling}\n\\label{sec:MDprecond}\n\nFrom Section \\ref{sec:FEMmd} the matrix to be preconditioned for (MD) is\n\\begin{equation}\n\\label{eq:Kmd}\n   \\mathcal{K}_{\\rm MD} =\n    \\left(\n    \\begin{array}{cc|cc}\n    F& B^T & 0 & 0\\\\\n    B & 0 & 0 & 0 \\\\\n    \\hline\n    0 & 0 & M & D^T\\\\\n    0 & 0 & D & 0\n    \\end{array}\n    \\right).\n\\end{equation}\nRecall that removing the coupling terms completely decouples the system. This therefore enables us to use the preconditioners for each of the subproblems separately and in parallel. Using the subproblem preconditioners \\eqref{eq:ns_pc_upper} and \\eqref{eq:maxwell_pc_X}, we propose the following preconditioner for $\\mathcal{K}_{\\rm MD}$:\n\\begin{equation}\n\\label{eq:mhd_pc_explicit}\n\\mathcal{M}_{\\rm MD} =\n\\left(\n\\begin{array}{cc|cc}\nF & B^T & 0 & 0\\\\\n0 & -{S} & 0 & 0 \\\\\n\\hline\n0 & 0 & N & 0\\\\\n0 & 0 & 0 & L\n\\end{array}\n\\right),\n\\end{equation}\nwhere $S$ is the LSC or PCD approximation and $N$ is the shifted curl-curl matrix.\n\n\\subsection{Complete Decoupling}\n\\label{sec:CDprecond}\n\nFrom Section \\ref{sec:FEMcd} the matrix to be preconditioned for (CD) is\n\\begin{equation}\n\\label{eq:Kcd}\n%\\mathcal{K} x \\equiv\n \\mathcal{K}_{\\rm CD} = \\left(\n\\begin{array}{cc|cc}\nA & B^T & 0 & 0\\\\\nB & 0 & 0 & 0 \\\\\n\\hline\n0 & 0 & M & D^T\\\\\n0 & 0 & D & 0\n\\end{array}\n\\right).\n\\end{equation}\nNote that the matrix $\\mathcal{K}_{\\rm CD}$ is now symmetric. First we consider how to deal with the upper-left $2\\times 2$ block, which corresponds to the discrete Stokes equations\n\\begin{equation}\\nonumber\n   \\mathcal{K}_{\\rm S} =\n    \\left(\n    \\begin{array}{cc}\n    A& B^T \\\\\n    B & 0\n    \\end{array}\n    \\right).\n\\end{equation}\nAs with the incompressible Navier-Stokes subproblem the idea for the Stokes preconditioner is again to approximate the Schur complement\n$$S_{\\rm S} =  BA^{-1}B^T.$$\nRecall that the matrix $A$ is defined with the viscosity $\\nu$ in Section \\ref{sec:variation}. It was shown in \\cite{silvester1993fast,silvester1994fast} that the scaled pressure mass matrix, $\\mbox{\\small \\(\\frac{1}{\\nu}\\)} Q_p$, with $Q_p$ defined in \\eqref{eq:PrecondMatrices}, is spectrally equivalent to the Schur complement $S_{\\rm S}$ (which is also a consequence of the inf-sup stability from our mixed discretisation). 
Therefore a possible scalable Stokes preconditioner is\n\\begin{equation}\n\\label{eq:mhd_pc_explicit2_1}\n\\begin{pmatrix}\nA & 0 \\\\\n0 & \\mbox{\\small \\(\\frac{1}{\\nu}\\)} Q_p\n\\end{pmatrix}.\n\\end{equation}\nUsing \\eqref{eq:mhd_pc_explicit2_1} together with the Maxwell subproblem preconditioner \\eqref{eq:maxwell_pc_X} gives the preconditioner\n\\begin{equation}\n\\label{eq:mhd_pc_explicit2}\n\\mathcal{M}_{\\rm CD} =\n\\left(\n\\begin{array}{cc|cc}\nA & 0 & 0 & 0\\\\\n0 & \\mbox{\\small \\(\\frac{1}{\\nu}\\)} Q_p & 0 & 0 \\\\\n\\hline\n0 & 0 & N & 0\\\\\n0 & 0 & 0 & L\n\\end{array}\n\\right).\n\\end{equation}\n\nAs the matrix $\\mathcal{K}_{\\rm CD}$ is symmetric, the appropriate choice of Krylov subspace method is MINRES for each subproblem. The main advantage of using MINRES over GMRES is that we do not need to store a new vector at each iteration. Therefore, in terms of computational memory MINRES is more efficient.\n\n\n\\subsection{Summary}\n\nIn summary, we have outlined the three preconditioning approaches for the linearised systems arising in the non-linear iteration schemes proposed in Chapter \\ref{sec:discretization}. Table \\ref{tab:SummaryTable} lists the coefficient matrices together with the associated preconditioners.\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\n  Iteration & Coefficient & (Outer) \\\\\n  scheme & matrix & preconditioner\\\\\n\\hline\n\\rule{0pt}{12pt}(P) & $\\mathcal{K}_{\\rm MH}$ in \\eqref{eq:K}& $\\mathcal{M}_{\\rm MH}$ in \\eqref{eq:mhd_pc_ls}  \\\\[0.1cm]\n\\hline\n\\rule{0pt}{12pt}(MD) & $\\mathcal{K}_{\\rm MD}$ in \\eqref{eq:Kmd}& $\\mathcal{M}_{\\rm MD}$ in \\eqref{eq:mhd_pc_explicit}  \\\\[0.1cm]\n\\hline\n\\rule{0pt}{12pt}(CD) & $\\mathcal{K}_{\\rm CD}$ in \\eqref{eq:Kcd}&$\\mathcal{M}_{\\rm CD}$ in \\eqref{eq:mhd_pc_explicit2}\\\\[0.1cm]\n\\hline\n\\end{tabular}\n\\caption{Summary of coefficient matrices and corresponding preconditioners for each iteration scheme}\n\\label{tab:SummaryTable}\n\\end{center}\n\\end{table}\n\n\\noindent Note that for the Picard iteration (P), we employ the inner preconditioner $\\mathcal{M}_{\\rm innerMH}$ in \\eqref{eq:mhd_pc_inner} to solve systems corresponding to \\eqref{eq:mhd_pc_ls}.\n\n% \\begin{table}[h!] 
\\small\n% \\begin{center}\n% \\begin{tabular}{|c|c|c|}\n% \\hline\n%   Iteration & Coefficient & (Outer) \\\\\n%   scheme & matrix & preconditioner\\\\\n% \\hline\n% (P) & $\\left(\n% \\begin{array}{cccc}\n% A+O & B^T & C^T & 0\\\\\n% B & 0 & 0 & 0\\\\\n% -C & 0 & M & D^T \\\\\n% 0 & 0 & D & 0\n% \\end{array}\n% \\right)$&$\\left(\n% \\begin{array}{cccc}\n% F & B^T & C^T & 0\\\\\n% 0 & -{S} & 0 & 0 \\\\\n% -C & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right)$  \\\\\n% \\hline\n% (MD) & $\\left(\n%     \\begin{array}{cccc}\n%     F& B^T & 0 & 0\\\\\n%     B & 0 & 0 & 0 \\\\\n%     0 & 0 & M & D^T\\\\\n%     0 & 0 & D & 0\n%     \\end{array}\n%     \\right)$&$\\left(\n% \\begin{array}{cccc}\n% F & B^T & 0 & 0\\\\\n% 0 & -{S} & 0 & 0 \\\\\n% 0 & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right)$  \\\\\n% \\hline\n% (CD) & $\\left(\n% \\begin{array}{cccc}\n% A & B^T & 0 & 0\\\\\n% B & 0 & 0 & 0 \\\\\n% 0 & 0 & M & D^T\\\\\n% 0 & 0 & D & 0\n% \\end{array}\n% \\right)$& $\\left(\n% \\begin{array}{cccc}\n% A & 0 & 0 & 0\\\\\n% 0 & \\mbox{\\small \\(\\frac{1}{\\nu}\\)} W & 0 & 0 \\\\\n% 0 & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right)$\\\\\n% \\hline\n% \\end{tabular}\n% \\caption{Summary of coefficient and corresponding preconditioners for each iteration scheme}\n% \\label{tab:SummaryTable}\n% \\end{center}\n% \\end{table}\n\n\n\n\n\n\n% \\textit{Generate a new subSection that lays out a summary of the systems and the preconditioners. This generates a redundancy, but it will be very helpful for the reader, who at this point may be lost trying to track all preconditioners and systems. This section should also include a summary of multiplicities (briefly, no need to state theorems), as per Dan's thesis and/or the paper that is currently being written.}\n\n\n% \\paragraph{Picard iteration (P)} ~\\\\\n\n% The coefficient matrix corresponding to the Picard iteration is\n% \\begin{equation}\\nonumber\n% {\\mathcal K}_{\\rm MH} = \\left(\n% \\begin{array}{cccc}\n% A+O & B^T & C^T & 0\\\\\n% B & 0 & 0 & 0\\\\\n% -C & 0 & M & D^T \\\\\n% 0 & 0 & D & 0\n% \\end{array}\n% \\right).\n% \\end{equation}\n% We consider the  preconditioner\n% \\begin{equation} \\nonumber\n% \\mathcal{M}_{\\rm MH} =\n% \\left(\n% \\begin{array}{cccc}\n% F & B^T & C^T & 0\\\\\n% 0 & -{S} & 0 & 0 \\\\\n% -C & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right),\n% \\end{equation}\n% The preconditionered matrix, $\\mathcal{M}_{\\rm MH}^{-1}{\\mathcal K}_{\\rm MH}$, has eigenvalues $\\lambda = 1$ with algebraic multiplicity of at least $n_u + n_b$ and and an eigenvalue $\\lambda = \u22121$ with algebraic multiplicity of at least $m_b$ see \\cite{li2010numerical}. To apply this preconditioner, we solve linear systems associated with $\\mathcal{M}_{\\rm MH}$. 
The proposed way of doing this is to use GMRES with the inner preconditioner:\n% \\begin{equation} \\nonumber\n% \\mathcal{M}_{\\rm innerMH} =\n% \\left(\n% \\begin{array}{cccc}\n% F & B^T & 0 & 0\\\\\n% 0 & -S & 0 & 0 \\\\\n% 0 & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right).\n% \\end{equation}\n% Here the preconditioner matrix, $\\mathcal{M}_{\\rm innerMH}^{-1}{\\mathcal M}_{\\rm MH}$,  has an eigenvalue $\\lambda = \u22121$ with algebraic multiplicity of at least $m_u +n_u+3m_b -n_b$ see \\cite{li2010numerical}.\n\n\n% \\paragraph{Magnetic Decoupling (MD)} ~\\\\\n% The coefficient matrix corresponding to the (MD) iteration is\n% \\begin{equation}\\nonumber\n% {\\mathcal K}_{\\rm MH} = \\left(\n% \\begin{array}{cc|cc}\n% F& B^T & 0 & 0\\\\\n% B & 0 & 0 & 0\\\\\n% \\hline\n% 0& 0 & M & D^T \\\\\n% 0 & 0 & D & 0\n% \\end{array}\n% \\right).\n% \\end{equation}\n% This complete decouples the system to enable use to use the individual Navier-Stokes and Maxwell's equation preconditioners seperately. The proposed preconditioner is:\n% \\begin{equation}\n% \\nonumber\n% \\mathcal{M}_{\\rm MD} =\n% \\left(\n% \\begin{array}{cc|cc}\n% F & B^T & 0 & 0\\\\\n% 0 & -{S} & 0 & 0 \\\\\n% \\hline\n% 0 & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right).\n% \\end{equation}\n\n\n% \\paragraph{Complete Decoupling (CD)} ~\\\\\n\n% The coefficient matrix corresponding to the (CD) iteration is\n% \\begin{equation} \\nonumber\n%  {\\mathcal K}_{\\rm MH} = \\left(\n% \\begin{array}{cc|cc}\n% A & B^T & 0 & 0\\\\\n% B & 0 & 0 & 0\\\\\n% \\hline\n% 0& 0 & M & D^T \\\\\n% 0 & 0 & D & 0\n% \\end{array}\n% \\right).\n% \\end{equation}\n% This complete decouples the system to enable use to use the individual Stokes and Maxwell's equation preconditioners seperately. The proposed preconditioner is:\n% \\begin{equation}\n% \\nonumber\n% \\mathcal{M}_{\\rm CD} =\n% \\left(\n% \\begin{array}{cc|cc}\n% A & 0 & 0 & 0\\\\\n% 0 & \\mbox{\\small \\(\\frac{1}{\\nu}\\)} W & 0 & 0 \\\\\n% \\hline\n% 0 & 0 & N & 0\\\\\n% 0 & 0 & 0 & L\n% \\end{array}\n% \\right).\n% \\end{equation}\n\n", "meta": {"hexsha": "6e40a70203422e70235434691829a798b93945f3", "size": 25873, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MHD/THESIS/Preconditioning/Preconditioning.tex", "max_stars_repo_name": "wathen/PhD", "max_stars_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-25T13:30:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T21:27:30.000Z", "max_issues_repo_path": "MHD/THESIS/Preconditioning/Preconditioning.tex", "max_issues_repo_name": "wathen/PhD", "max_issues_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MHD/THESIS/Preconditioning/Preconditioning.tex", "max_forks_repo_name": "wathen/PhD", "max_forks_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-28T16:12:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-13T13:59:44.000Z", "avg_line_length": 46.2017857143, "max_line_length": 822, "alphanum_fraction": 0.6964403046, "num_tokens": 8752, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5677055580484832}}
{"text": "\\chapter{Evolution of the simulator}\n\t\\section{Solutions for 2 cars}\\label{sec:base2car}\n\t\tEquation~\\eqref{eq:numerical_idm} can be solved with the Explicit Euler method. To test the model a basic setup was implemented in \\textsc{Matlab} \\cite{matlab}. The visual representation of the setup can be seen in Figure~\\ref{fig:basic2car}.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=.95\\textwidth]{common/basic_2_car}\n\t\t\t\\caption{Basic setup with 2 cars}\n\t\t\t\\label{fig:basic2car}\n\t\t\\end{figure}\n\t\tThe leading car has a constant 100 km/h velocity. The following car, which is modeled with IDM, has an initial velocity of 100 km/h as well. The second car is 100 meters behind the other car. The parameters of the following car can be seen in Table~\\ref{tab:idm_params}. The parameters have been chosen based on literature \\cite{opensource}.\n\t\t\\begin{table}\n\t\t\t\\begin{center}\n\t\t\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\t\t\\hline\n\t\t\t\t\t$a_{\\rm max}$ & $1.5$ & $\\rm m/s^2$ \\\\\n\t\t\t\t\t$b_{\\rm max}$ & $1.67$ & $\\rm m/s^2$ \\\\\n\t\t\t\t\t$v_{\\rm d}$ & $130$ & km/h \\\\\n\t\t\t\t\t$T$ & $1.8$ & s \\\\\n\t\t\t\t\t$h_0$ & $2$ & m \\\\\n\t\t\t\t\t$\\rm \\delta$ & $4$ & - \\\\\n\t\t\t\t\t$L$ & $4.5$ & m \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\t\\caption{Intelligent Driver Model parameters}\n\t\t\t\\label{tab:idm_params}\n\t\t\\end{table}\n\t\tThe simulation was run until 100 seconds with a time step of 0.01s. Figure \\ref{fig:basic2car_case_1} shows the result. The initial gap between the cars is greater than the second car's desired headway, consequently the vehicle will accelerate. The gap between the vehicles starts to decrease. At a certain time (around $t =$ 10s) the second car starts to decelerate slowly based on the velocity difference and the decreasing headway. It will reach the desired safe gap at some point and will have exactly the same speed as the car before. It will maintain its desired headway.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ee/basic_2_car_headaway_case_1_2}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ee/basic_2_car_velocity_case_1_2}\n\t\t\t\\end{minipage}\n\t\t\t\\caption{Following car's headway and velocity in Setup 1}\n\t\t\t\\label{fig:basic2car_case_1}\n\t\t\\end{figure}\n\n\t\tAnother simulation was run with the same configuration except that the first car's front is set to be at 40 meters instead of 100. Figure \\ref{fig:basic2car_case_2} shows the result. In this case the gap between cars is less than the second vehicle's desired safety headway. So it will decelerate first than accelerate to reach the desired gap. Fundamentally the same happened but in the opposite direction.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ee/basic_2_car_headaway_case_2_2}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ee/basic_2_car_velocity_case_2_2}\n\t\t\t\\end{minipage}\n\t\t\t\\caption{Following car's headway and velocity in Setup 2}\n\t\t\t\\label{fig:basic2car_case_2}\n\t\t\\end{figure}\n\t\tBoth examples showed that the stationary state does not depend on the initial gap. 
After a little bit of time the second car has reached the desired safety gap (which is around 64.5 meters) and the 100 km/h speed in both cases.\n\t\\section{Model behavior verification}\n\t\tIn Section \\ref{sec:base2car} two examples were shown where the model produced the same stationary state for different initial conditions. Here, stationary state means that the vehicle's acceleration is zero. So, in a stationary state with $\\dot{x}(t)\\equiv \\dot{x}_{\\rm lead}(t)\\equiv v_{\\rm stac}$, Eq.~\\eqref{eq:aidm} reduces to:\n\t\t\\begin{equation}\n\t\t1 - \\left ( \\frac{v_{\\rm stac}}{v_{\\rm d}} \\right )^{\\delta} - \\left ( \\frac{h_0+v_{\\rm stac}\\cdot T}{h_{\\rm stac}} \\right )^2=0\\,.\n\t\t\\label{eq:aidm_stac1}\n\t\t\\end{equation}\n\t\tOf course all this means that the initial headway should be chosen properly, namely $h(t=0)=h_{\\rm stac}$.\n\t\tEquation~\\eqref{eq:aidm_stac1} shows that $v_{\\rm stac}$ only depends on $h_{\\rm stac}$. Everything else in the equation is a parameter of the model or an initial condition. Consequently every stationary gap has a corresponding stationary speed value. Equation~\\eqref{eq:aidm_stac1} can be solved using \\textsc{Matlab}'s \\textit{fsolve} method. A plot of the result can be seen in Figure~\\ref{fig:aidm_stac}.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\includegraphics{check/check_stationary_states}\n\t\t\t\\caption{Stationary velocity based on stationary headway}\n\t\t\t\\label{fig:aidm_stac}\n\t\t\\end{figure}\n\n\t\tIn the examples of Section \\ref{sec:base2car} the stationary gap between the vehicles was around 64.5 meters and the velocity was 100 km/h. Figure \\ref{fig:aidm_stac} shows exactly the same values. It can be stated that the model is working as expected. Note that the bumper-to-bumper (traffic jam) distance ($h_0$) appears on the horizontal axis and it has a zero velocity value.\n\t\\section{Solver verification}\n\t\tA custom EE solver was implemented to solve Eq.~\\eqref{eq:numerical_idm} numerically. However, \\textsc{Matlab} provides built-in solvers for differential equation systems like IDM. It has a numerical solver called ode45. It is a 4th-order numerical solver which is capable of changing some of its parameters (e.g., the time step) based on the system's behavior. In theory it is therefore a better choice for solving differential equation systems than a custom-implemented Explicit Euler. Calculating the solution with this solver also provides a verification of the custom-implemented EE solver. 
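\n\t\tThe same check is also easy to reproduce outside \\textsc{Matlab}. The sketch below integrates the 2-car setup of Section~\\ref{sec:base2car} with SciPy's \\textit{solve\\_ivp}, whose default RK45 method is the counterpart of ode45; it assumes the standard IDM form for Eq.~\\eqref{eq:numerical_idm} and is an illustration, not the thesis code.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\n# Parameters from Table idm_params (SI units); lead car at constant speed.\nv_d, T, h0, delta, L = 130/3.6, 1.8, 2.0, 4, 4.5\na_max, b_max, v_lead = 1.5, 1.67, 100/3.6\n\ndef rhs(t, y):\n    x_lead, x, v = y                   # positions [m], follower speed [m/s]\n    h = x_lead - x - L                 # bumper-to-bumper gap\n    h_star = h0 + v*T + v*(v - v_lead)/(2*np.sqrt(a_max*b_max))\n    dv = a_max*(1 - (v/v_d)**delta - (h_star/h)**2)\n    return [v_lead, v, dv]\n\nsol = solve_ivp(rhs, (0, 100), [100.0, 0.0, v_lead], rtol=1e-4)\nprint(sol.y[0, -1] - sol.y[1, -1] - L)  # stationary gap, ~64.5 m\n\\end{verbatim}\n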
\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ode/basic_2_car_headaway_case_3_2}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ode/basic_2_car_velocity_case_3_2}\n\t\t\t\\end{minipage}\n\t\t\t\\caption{Following car's headway and velocity in Setup 1 with ode45 and EE}\n\t\t\t\\label{fig:basic2car_case_1_ode}\n\t\t\\end{figure}\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ode/basic_2_car_headaway_case_4_2}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{ode/basic_2_car_velocity_case_4_2}\n\t\t\t\\end{minipage}\n\t\t\t\\caption{Following car's headway and velocity in Setup 2 with ode45 and EE}\n\t\t\t\\label{fig:basic2car_case_2_ode}\n\t\t\\end{figure}\n\n\t\tThe same two examples were implemented with ode45 as before in Section \\ref{sec:base2car}. The time step is not given this time since the solver can adjust it to the system's current behavior. But it is recommended to provide a relative error tolerance value to ensure an accurate solution. For this case it was set to $10^{-4}$.\n\t\tIt can be seen in Figures \\ref{fig:basic2car_case_1_ode} and \\ref{fig:basic2car_case_2_ode} that the solutions overlap exactly; consequently the custom-made solver is working as expected.\n\t\\section{Solver for $n$ cars}\n\t\tThe simulator works as expected for 2-car simulations. It is time to implement an $n$-car simulation as well. The simulator still uses the built-in ode45 function to solve the differential equation system so the only task is to produce $\\dot{\\textbf{y}}$. The visual representation of the setup can be seen in Figure \\ref{fig:basic_n_car}. The dimension of the system grows from $4$ to $2n$. The first car will have a constant velocity of 100 km/h so $\\dot{y}_1 = 100$ and $\\dot{y}_2 = 0$, since there is no acceleration.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{common/basic_n_car}\n\t\t\t\\caption{$n$-car setup}\n\t\t\t\\label{fig:basic_n_car}\n\t\t\\end{figure}\n\t\tUnfortunately the previously used implementation is not good enough to produce $\\dot{\\textbf{y}}$. The problem is that to be able to calculate vehicle $n$'s position and velocity the solver needs the position and velocity of vehicle $n-1$. To make it easy to read let us use a function $f$ which represents Eq.~\\eqref{eq:numerical_idm}. Then $\\dot{\\textbf{y}}$ looks like the following:\n\t\t\\begin{equation}\n\t\t\t\\dot{\\textbf{y}}=\n\t\t\t\\begin{pmatrix}\n\t\t\t\t100\\\\\n\t\t\t\t0\\\\\n\t\t\t\tf_1(v_{2})\\\\\n\t\t\t\tf_2(x_{2}, v_{2},x_{1}, v_{1})\\\\\n\t\t\t\tf_1(v_{3})\\\\\n\t\t\t\tf_2(x_{3}, v_{3},x_{2}, v_{2})\\\\\n\t\t\t\t\\vdots\\\\\n\t\t\t\tf_1(v_{n-1})\\\\\n\t\t\t\tf_2(x_{n-1}, v_{n-1},x_{n-2}, v_{n-2})\\\\\n\t\t\t\tf_1(v_{n})\\\\\n\t\t\t\tf_2(x_{n}, v_{n},x_{n-1}, v_{n-1})\n\t\t\t\\end{pmatrix}\\,.\n\t\t\t\\label{eq:n_ode_math}\n\t\t\\end{equation}\n\t\t\\section{Solutions for $n$ cars}\n\t\tA couple of setups were made with $n=5$ to test the new $n$-car solver. 
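\n\t\tSpelled out in code, Eq.~\\eqref{eq:n_ode_math} becomes a single right-hand-side routine that any ODE integrator can call. The sketch below is in Python rather than \\textsc{Matlab} and assumes the standard IDM acceleration for Eq.~\\eqref{eq:numerical_idm}; both are assumptions of this illustration.\n\\begin{verbatim}\nimport numpy as np\n\n# IDM parameters from Table idm_params (SI units)\na_max, b_max, v_d, T, h0, delta, L = 1.5, 1.67, 130/3.6, 1.8, 2.0, 4, 4.5\n\ndef idm_accel(x, v, x_lead, v_lead):\n    h = x_lead - x - L  # bumper-to-bumper gap to the leader\n    h_star = h0 + v*T + v*(v - v_lead)/(2*np.sqrt(a_max*b_max))\n    return a_max*(1 - (v/v_d)**delta - (h_star/h)**2)\n\n# y stacks (x_i, v_i) per car; car 1 keeps a constant speed v_lead.\ndef rhs(t, y, n, v_lead=100/3.6):\n    dy = np.empty(2*n)\n    dy[0], dy[1] = v_lead, 0.0\n    for i in range(1, n):\n        x, v = y[2*i], y[2*i+1]\n        dy[2*i] = v\n        dy[2*i+1] = idm_accel(x, v, y[2*(i-1)], y[2*i-1])\n    return dy\n\\end{verbatim}\n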
The vehicles' IDM parameters were unchanged; they can be found in Table~\\ref{tab:idm_params}.\n\t\t\\subsection*{Case 1}\n\t\tThe initial positions and velocities of the cars can be seen in Table~\\ref{tab:node_case1}.\n\t\t\\begin{table}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\t\\hline\n\t\t\t\t$n$ [-] & $x$ [m] & $v$ [km/h]\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 400 & 100 \\\\\n\t\t\t\t2 & 300 & 100 \\\\\n\t\t\t\t3 & 200 & 100 \\\\\n\t\t\t\t4 & 100 & 100 \\\\\n\t\t\t\t5 & 0 & 100 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Case 1}\n\t\t\t\\label{tab:node_case1}\n\t\t\\end{table}\n\t\tThe results of the simulation are in Figures~\\ref{fig:node_case1_headways} and \\ref{fig:node_case1_velocities}. \n\n\t\tEvery car's leader is a bit further away than the desired safety headway, so every car accelerates at first; then, to maintain the desired headway, the cars decelerate to the first car's speed.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_headway_case1}\n\t\t\t\t\\caption{Case 1: headways}\n\t\t\t\t\\label{fig:node_case1_headways}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_velocity_case1}\n\t\t\t\t\\caption{Case 1: velocities}\n\t\t\t\t\\label{fig:node_case1_velocities}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\t\t\\subsection*{Case 2}\n\t\tThe initial positions and velocities of the cars can be seen in Table~\\ref{tab:node_case2}.\n\t\t\\begin{table}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\t\\hline\n\t\t\t\t$n$ [-] & $x$ [m] & $v$ [km/h]\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 160 & 100 \\\\\n\t\t\t\t2 & 120 & 100 \\\\\n\t\t\t\t3 & 80 & 100 \\\\\n\t\t\t\t4 & 40 & 100 \\\\\n\t\t\t\t5 & 0 & 100 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Case 2}\n\t\t\t\\label{tab:node_case2}\n\t\t\\end{table}\n\t\tThe results of the simulation are in Figures~\\ref{fig:node_case2_headways} and \\ref{fig:node_case2_velocities}. \n\n\t\tEvery car's leader is a bit closer than the desired safety headway, so every car decelerates at first; then, to maintain the desired headway, the cars accelerate until they reach the first car's speed.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_headway_case2}\n\t\t\t\t\\caption{Case 2: headways}\n\t\t\t\t\\label{fig:node_case2_headways}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_velocity_case2}\n\t\t\t\t\\caption{Case 2: velocities}\n\t\t\t\t\\label{fig:node_case2_velocities}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\t\t\\subsection*{Case 3}\n\t\tThe initial positions and velocities of the cars can be seen in Table~\\ref{tab:node_case3}.\n\t\t\\begin{table}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\t\\hline\n\t\t\t\t$n$ [-] & $x$ [m] & $v$ [km/h]\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 280 & 100 \\\\\n\t\t\t\t2 & 240 & 100 \\\\\n\t\t\t\t3 & 140 & 100 \\\\\n\t\t\t\t4 & 100 & 100 \\\\\n\t\t\t\t5 & 0 & 100 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Case 3}\n\t\t\t\\label{tab:node_case3}\n\t\t\\end{table}\n\t\tThe results of the simulation are in Figures~\\ref{fig:node_case3_headways} and \\ref{fig:node_case3_velocities}. 
\n\n\t\tSome of the cars are closer to their leader than their desired safety gap and the others have more headway than their safety gap. As a result, the former decelerate and the latter accelerate to reach the desired headway. In the end all of them have the same speed as the leader car.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_headway_case3}\n\t\t\t\t\\caption{Case 3: headways}\n\t\t\t\t\\label{fig:node_case3_headways}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_velocity_case3}\n\t\t\t\t\\caption{Case 3: velocities}\n\t\t\t\t\\label{fig:node_case3_velocities}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\t\t\\subsection*{Case 4}\n\t\tThe initial positions and velocities of the cars can be seen in Table~\\ref{tab:node_case4}.\n\t\t\\begin{table}\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{ |c|c|c| }\n\t\t\t\t\\hline\n\t\t\t\t$n$ [-] & $x$ [m] & $v$ [km/h]\\\\\n\t\t\t\t\\hline\n\t\t\t\t1 & 290 & 100 \\\\\n\t\t\t\t2 & 240 & 80 \\\\\n\t\t\t\t3 & 150 & 120 \\\\\n\t\t\t\t4 & 110 & 115 \\\\\n\t\t\t\t5 & 0 & 70 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Case 4}\n\t\t\t\\label{tab:node_case4}\n\t\t\\end{table}\n\t\tThe results of the simulation are in Figures~\\ref{fig:node_case4_headways} and \\ref{fig:node_case4_velocities}. \n\n\t\tIn this case both the initial positions and the initial velocities were varied. As expected, the result has changed significantly. Now both the speed and the headway must be changed by the drivers in order to get to the stationary point.\n\t\t\\section*{Case summary}\n\t\tThe same stationary point has been reached in every case, as expected. That stationary point is where all of the cars have the same speed as the first car while maintaining their desired safety gap. 
In some cases it took longer to get to the stationary state than in others.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_headway_case4}\n\t\t\t\t\\caption{Case 4: headways}\n\t\t\t\t\\label{fig:node_case4_headways}\n\t\t\t\\end{minipage}\\hfill\n\t\t\t\\begin{minipage}{.5\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics{node/n_car_velocity_case4}\n\t\t\t\t\\caption{Case 4: velocities}\n\t\t\t\t\\label{fig:node_case4_velocities}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n", "meta": {"hexsha": "bf5beaae4d777bcf788914133b9ce82e76e38fed", "size": 13669, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/Chapters/evolutionofthesimulator.tex", "max_stars_repo_name": "ngergo100/traffic-sim", "max_stars_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documentation/Chapters/evolutionofthesimulator.tex", "max_issues_repo_name": "ngergo100/traffic-sim", "max_issues_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2018-11-23T15:15:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-01T07:21:48.000Z", "max_forks_repo_path": "documentation/Chapters/evolutionofthesimulator.tex", "max_forks_repo_name": "ngergo100/traffic-sim", "max_forks_repo_head_hexsha": "2cf579d2812af84b9bd3225645c2441310f194d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-07T16:49:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-07T16:49:18.000Z", "avg_line_length": 48.9928315412, "max_line_length": 587, "alphanum_fraction": 0.7112444217, "num_tokens": 4442, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5677055580484832}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{graphicx}\n\\usepackage{enumerate}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[mathscr]{euscript}\n\\usepackage{hyperref}\n\\usepackage{bbm}\n\n\\hypersetup{\n    colorlinks=true,\n    urlcolor=blue\n}\n\n\\setlength{\\oddsidemargin}{.1in}\n\\setlength{\\textwidth}{6.3in}\n\\setlength{\\textheight}{8.9in}\n\\setlength{\\topmargin}{-.5in}\n\n\\title{Solving the inverted pendulum as a Markov decision process}\n\\date{}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Markov decision processes}\n\nA Markov decision process (MDP) is defined by a 5-tuple $(\\mathscr{S}, \\mathscr{A}, P_\\cdot(\\cdot, \\cdot), R_\\cdot(\\cdot, \\cdot), \\gamma)$ where\n\n\\begin{enumerate}[(i)]\n\\item $\\mathscr{S}$ is a finite set of states;\n\\item $\\mathscr{A}$ is a finite set of actions;\n\\item $P_a(s, s^\\prime)$ is the probability that action $a$ in state $s$ at time $t$ will lead to state $s^\\prime$ at time $t + 1$;\n\\item $R_a(s, s^\\prime)$ is the immediate reward received after transitioning from state $s$ to state $s^\\prime$ due to action $a$; and\n\\item $\\gamma \\in [0, 1]$ is a discount factor which represents the difference in importance between future rewards and present rewards.\n\\end{enumerate}\n\nThe problem is to find a policy for a decision-making agent that navigates the states of this world via actions. The policy is a function $\\pi$ that specifies the action $\\pi(s)$ that the agent will choose when in state $s$. Our goal is to choose a policy that will maximize a cumulative function of the random rewards; in particular, we wish to maximize the discounted sum of expected rewards over a potentially infinite horizon:\n\n\\begin{equation}\n\\sum_{t = 0}^{\\infty} \\gamma^t \\mathbb{E}[R_{a_t}(s_t, s_{t+1})] \\quad \\mbox{where } a_t = \\pi(s_t).\n\\end{equation}\n\n\\subsection{Policies and value functions}\n\nImagine that we have some magical function $V(\\cdot)$ which tells us the inherent value of a state. It accepts a state as its argument, and its output value tells us \\emph{how characteristically good} it is to be in that state for the purposes of achieving our goal (that is, maximizing reward). A bit philosophical, eh?\n\nIf we had this function, then choosing a policy would be trivial: the optimal policy is one which always selects the action that maximizes the expectation of $V(s^\\prime)$ based on the present state $s$. The expectation of $V(s^\\prime)$ under a given action $a$ in state $s$ is a sum of $V(s^\\prime)$ over all possible $s^\\prime$, with each term weighted by the probability of transitioning from $s$ to $s^\\prime$ due to action $a$:\n\n\\begin{equation}\n\\mathbb{E}[V(s^\\prime)] = \\sum_{s^\\prime \\in \\mathscr{S}} P_a(s, s^\\prime) V(s^\\prime).\n\\end{equation}\n\nSo the optimal policy is\n\n\\begin{equation}\n\\pi^*(s) = \\underset{a}{\\arg\\max} \\, \\mathbb{E}[V(s^\\prime)] = \\underset{a}{\\arg\\max} \\, \\sum_{s^\\prime \\in \\mathscr{S}} P_a(s, s^\\prime) V(s^\\prime).\n\\end{equation}\n\nBut, the inherent value of a state $V(\\cdot)$ depends on what policy we are following. In particular, it is the sum of the expected reward over all future times if we act according to a policy $\\pi$. 
Intuitively, there are two factors which influence the value of state $s$:\n\n\\begin{enumerate}[(i)]\n\\item{the immediate expected reward of acting on our policy in state $s$} and\n\\item{the expected value of the state which follows from acting on our policy in state $s$.}\n\\end{enumerate}\n\nNote that the second of these is defined in terms of the expected value of the state which follows. Thus, our description of the value of state $s$ includes the value of state $s^\\prime$, and in this way recursively captures something about the value of all future states.\n\nLet's work out each of the two factors above, starting with the first piece. The expected reward of taking action $a$ in a given state $s$ is a sum of $R_a(s, s^\\prime)$ over all possible $s^\\prime$, with each term weighted by the probability of transitioning from $s$ to $s^\\prime$ due to action $a$:\n\n\\begin{equation}\n\\mathbb{E}[R_a(s)] = \\sum_{s^\\prime \\in \\mathscr{S}} P_a(s, s^\\prime) R_a(s, s^\\prime).\n\\end{equation}\n\nThe action $a$ is selected by policy $\\pi(s)$, so we can rewrite our equation above as the expected reward of acting under policy $\\pi$ in a given state $s$:\n\n\\begin{equation}\n\\label{eq:reward} \\mathbb{E}[R_{\\pi(s)}(s)] = \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) R_{\\pi(s)}(s, s^\\prime).\n\\end{equation}\n\nThis is our expression for the immediate expected reward of acting on our policy in state $s$ (the first of the two factors mentioned above). But the value of state $s$ depends not only on this immediately available reward; it also depends on the expected value of the state which follows from acting on our policy (the second of the two factors above):\n\n\\begin{equation}\n\\label{eq:future_value} \\mathbb{E}[V(s^\\prime)] = \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) V(s^\\prime).\n\\end{equation}\n\nThe full value of state $s$ is a sum of equations (\\ref{eq:reward}) and (\\ref{eq:future_value}), with the second weighted by our discount factor to reflect the fact that this is the value of a state ($s^\\prime$) which is one step ahead in the future from the state ($s$) whose value we are presently considering. So, to get the value of state $s$ under policy $\\pi$, we sum the expectation of the immediate reward for acting on our policy from state $s$ and the future-discounted expectation of the value of the state $s^\\prime$ which follows from acting on our policy in state $s$:\n\n\\begin{align}\nV(s) &= \\mathbb{E}[R_{\\pi(s)}(s)] + \\gamma \\mathbb{E}[V(s^\\prime)] \\\\[5pt]\n&= \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) R_{\\pi(s)}(s, s^\\prime) + \\gamma \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) V(s^\\prime) \\\\[5pt]\n&= \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) \\Big(R_{\\pi(s)}(s, s^\\prime) + \\gamma V(s^\\prime)\\Big)\n\\end{align}\n\nLet's take stock. We have an expression for the optimal policy with respect to any given value function, and we have an expression for a value function under a given policy. To reiterate:\n\n\\begin{align}\n\\pi^*(s) &= \\underset{a}{\\arg\\max} \\, \\mathbb{E}[V(s^\\prime)] = \\underset{a}{\\arg\\max} \\, \\sum_{s^\\prime \\in \\mathscr{S}} P_a(s, s^\\prime) V(s^\\prime) \\\\[5pt]\nV(s) &= \\sum_{s^\\prime \\in \\mathscr{S}} P_{\\pi(s)}(s, s^\\prime) \\Big(R_{\\pi(s)}(s, s^\\prime) + \\gamma V(s^\\prime)\\Big)\n\\end{align}\n\n\\subsection{The Bellman optimality equations}\n\nBut now we have a bit of a conundrum. 
We know the optimal policy with respect to a given value function, but our value function itself depends on what policy we're acting under! How do we find, at once, the optimal value function and its optimal policy?\n\nThere exists a large space of possible value functions due to all possible policies we could use. However, consider for a moment that we are handed from on high the optimal policy for our task. Under this policy, the output of the value function will be greater than or equal to the output of the value function under any other policy. This is because under the optimal policy, the value of any given state is always maximized. So, the maximum of all possible value functions for a state coincides with the value function we have when acting under an optimal policy.\n\nWith this in mind, we can readily define a notion of the optimal value function. The optimal value function for a state must equal the value function for that state under the best possible action for that state:\n\n\\begin{equation}\nV^*(s) = \\underset{a}{\\max} \\, V(s).\n\\end{equation}\n\nNote that in this case, the action $a$ is the optimal policy $\\pi^*(s)$. This enables us to write out $V(s)$ with no reference to $\\pi(s)$ at all, instead substituting $\\pi(s) = a$:\n\n\\begin{equation}\nV^*(s) = \\underset{a}{\\max} \\, \\sum_{s^\\prime \\in \\mathscr{S}} P_{a}(s, s^\\prime) \\Big(R_{a}(s, s^\\prime) + \\gamma V^*(s^\\prime)\\Big).\n\\end{equation}\n\nThis is a set of $|\\mathscr{S}|$ nonlinear recurrence relations in $|\\mathscr{S}|$ variables\\footnotemark, each one requiring a maximum over $|\\mathscr{A}|$ expressions ($|\\mathscr{A}|$ and $|\\mathscr{S}|$ are the number of elements in the sets $\\mathscr{A}$ and $\\mathscr{S}$, respectively). These are called the \\emph{Bellman equations} for the optimal value function. Intuitively, solving each one amounts to finding the value of trying each available action in a state, and then selecting the value which corresponds to the best action in that state (the value of that state under an optimal policy). Solving them all simultaneously amounts to finding the value of all states at once when acting under an optimal policy, yielding the optimal value function.\n\n\\footnotetext{Such systems generally cannot be characterized analytically and lack a closed-form solution. In fact, solving nonlinear recurrence relations is often viewed as a hopeless endeavor: to solve \\emph{every} nonlinear recurrence relation would imply that one could solve the Halting problem, as one could encode a program as initial states and the workings of the Turing machine as the recurrence relations (see \\href{https://math.stackexchange.com/users/11513/rex-kerr}{this Stack Exchange discussion} by Rex Kerr). But these equations can be solved numerically (at least in principle), and with sufficiently small action and state spaces this computation is tractable. 
This is where we're headed.}\n\nOnce we have $V^*(s)$, we can substitute this back into our expression for $\\pi^*(s)$:\n\n\\begin{equation}\n\\pi^*(s) = \\underset{a}{\\arg\\max} \\, \\sum_{s^\\prime \\in \\mathscr{S}} P_a(s, s^\\prime) V^*(s^\\prime).\n\\end{equation}\n\nThis is an optimal policy, and our solution to the MDP.\n\n\\section{The inverted pendulum problem as an MDP}\n\nTo set up the inverted pendulum problem as an MDP, we need to define for our agent\n\n\\begin{enumerate}[(i)]\n\\item{a world of available states,}\n\\item{a set of actions it can take,}\n\\item{the transition probabilities which describe the physical dynamics of this world,}\n\\item{a reward function which incentivizes remaining balanced upright, and}\n\\item{a discount factor which reflects how long- or short-sighted our agent is.}\n\\end{enumerate}\n\n\\subsection{The agent's world}\n\nFirst we will build the world of our agent by choosing (i) and (ii): the states it can visit and the actions it can take. The computational complexity of solving the Bellman equations explodes rapidly as the combinations of possible states and actions grow. So, our choices of (i) and (ii) ultimately depend on how long we are willing to wait, and unfortunately it doesn't take much to have to wait an astronomically long period of time. Since we're not willing to wait the age of the universe, our state and action spaces will have to be pretty small indeed.\n\nLet's begin with a state space for our agent. The state of the pendulum-cart system can be fully characterized by four variables: angular position of the pendulum, angular velocity of the pendulum, position of the cart, and velocity of the cart. In order to avoid our state space becoming overly large, we will have to discretize each of these continuous variables into distressingly coarse-grained substates.\n\nFirst let's consider the pendulum. For the angular position subspace $\\mathscr{S}^\\theta$, we choose the four substates\n\n\\begin{equation}\ns^\\theta = \\left\\{\n\\begin{array}{lr}\ns_1^\\theta \\quad : -\\pi \\leq \\theta < -\\frac{\\pi}{2} \\\\\ns_2^\\theta \\quad : -\\frac{\\pi}{2} \\leq \\theta < 0 \\\\\ns_3^\\theta \\quad : 0 \\leq \\theta < \\frac{\\pi}{2} \\\\\ns_4^\\theta \\quad : \\frac{\\pi}{2} \\leq \\theta < \\pi\n\\end{array}\n\\right.\n.\n\\end{equation}\n\n\\begin{figure}\n\\center\n\\includegraphics[width=0.9\\linewidth]{substates.png}\n\\caption{Visualization of the substate space for angular position.}\n\\label{fig:substates}\n\\end{figure}\n\nFor the angular velocity subspace $\\mathscr{S}^\\omega$, we similarly choose four substates\n\n\\begin{equation}\ns^\\omega = \\left\\{\n\\begin{array}{lr}\ns_1^\\omega \\quad : \\omega < -\\alpha \\\\\ns_2^\\omega \\quad : -\\alpha \\leq \\omega < 0 \\\\\ns_3^\\omega \\quad : 0 \\leq \\omega < \\alpha \\\\\ns_4^\\omega \\quad : \\alpha \\leq \\omega\n\\end{array}\n\\right.\n\\end{equation}\n\nwhere $\\alpha$ is a parameter deciding the `width' of our state bins.\n\nNow on to the cart. 
\n\nNow on to the cart. For its position subspace $\mathscr{S}^x$, we choose the four substates\n\n\begin{equation}\ns^x = \left\{\n\begin{array}{lr}\ns_1^x \quad : x < -\beta \\\ns_2^x \quad : -\beta \leq x < 0 \\\ns_3^x \quad : 0 \leq x < \beta \\\ns_4^x \quad : \beta \leq x\n\end{array}\n\right.\n\end{equation}\n\nand for its velocity subspace $\mathscr{S}^v$ we choose four substates\n\n\begin{equation}\ns^v = \left\{\n\begin{array}{lr}\ns_1^v \quad : v < -\eta \\\ns_2^v \quad : -\eta \leq v < 0 \\\ns_3^v \quad : 0 \leq v < \eta \\\ns_4^v \quad : \eta \leq v\n\end{array}\n\right.\n\end{equation}\n\nwhere $\beta$ and $\eta$ are again parameters deciding the `width' of the state bins.\n\nThe full state space $\mathscr{S}$ is $\mathscr{S}^\theta \times \mathscr{S}^\omega \times \mathscr{S}^x \times \mathscr{S}^v$, or the set of all ordered 4-tuples $(s^\theta, s^\omega, s^x, s^v)$. By our choices above, this space has 256 states.\n\nOur agent acts by applying horizontal forces to the cart. We'll allow our agent to choose among five actions, each applying a different, discrete-valued force to the cart:\n\n\begin{equation}\n\mathscr{A} = \left\{a_1, a_2, a_3, a_4, a_5\right\} \mbox{ where }\left\{\n\begin{array}{lr}\na_1 \quad : f = -\kappa_2 \\\na_2 \quad : f = -\kappa_1 \\\na_3 \quad : f = 0 \\\na_4 \quad : f = \kappa_1 \\\na_5 \quad : f = \kappa_2\n\end{array}\n\right.\n.\n\end{equation}\n\n$\kappa_1$ and $\kappa_2$ are parameters deciding the force of the action.\n\n\subsection{How the world works}\n\nNow our agent needs a model of the physical dynamics of its world and how its actions affect those dynamics. This model comes in the form of a set of transition probabilities $P_a(s, s^\prime)$---one for every possible action $a$---describing the probability of transitioning from state $s$ to state $s^\prime$ due to that action.\n\nIn our case the transition probabilities represent the physics of the pendulum-cart system. However, forming these transition probabilities is not totally straightforward: the states which describe our system each contain a wide, continuous range of actual physical configurations, leaving a great range of physical variability over which state will proceed from which. So, we will produce the transition probabilities via a Monte Carlo estimate of randomly simulated dynamics. For each action $a$ and state $s$, we'll randomly sample a full physical configuration (angular position, angular velocity, cart position, and cart velocity) uniformly from within the state $s$, and run the simulation one step forward under action $a$ to end up in state $s_{next}^{(i)}$, where $i$ indexes the trial. We'll repeat this a large number of times $N$ and estimate the transition probabilities as\n\n\begin{equation}\nP_a(s, s^\prime) = \frac{1}{N}\sum_{i = 1}^N \mathbbm{1}(s_{next}^{(i)} = s^\prime).\n\end{equation}\n\nSince the most extreme states for velocity and angular velocity cover an unbounded range of physical possibilities (off to velocities of positive and negative $\infty$), we'll just have to choose physically realistic boundaries for the sake of sampling.\n\nThis is going to be an inherently flawed estimate. We're estimating the dynamics by sampling uniformly from the continuum of available positions and velocities within each state; yet in reality, there's no reason the physical system should visit all possible (continuous) positions and velocities within a state with equal probability.
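\n\nConcretely, the estimation procedure might look like the following sketch. The helpers \texttt{sample\_state} (draw a configuration uniformly from within a state, using the clipped velocity boundaries just mentioned), \texttt{step} (advance the physics simulation one timestep under an action), and \texttt{discretize} (map a configuration to a flat state index; a full-state analogue of the binning sketch earlier) are hypothetical stand-ins.\n\n\begin{verbatim}\nimport numpy as np\n\ndef estimate_transitions(n_states, n_actions, sample_state, step,\n                         discretize, N=10000):\n    """Monte Carlo estimate of the transition probabilities P[a, s, t]."""\n    P = np.zeros((n_actions, n_states, n_states))\n    for a in range(n_actions):\n        for s in range(n_states):\n            for _ in range(N):\n                config = sample_state(s)       # random configuration in s\n                next_config = step(config, a)  # one simulated timestep\n                P[a, s, discretize(next_config)] += 1\n            P[a, s] /= N\n    return P\n\end{verbatim}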
\n\nNonetheless, let's consider this satisfactory, reasoning that our estimate of the transition probabilities should encode no finer-grained statistical assumptions about our system than our agent's representation of its state allows.\footnotemark\n\n\footnotetext{Note that if we \emph{could} choose arbitrarily fine-grained states, this issue would go away: as the state ``slices\" grew thinner, the physical dynamics of the system would become far less variable within each one, and our estimate of their transition probabilities would increasingly well approximate the dynamics of the physical system. Furthermore, these transition probabilities would become increasingly sparse, reflecting the fact that in a causally deterministic system each state may transition to only one other state with unity probability. But alas, our agent must content itself to inhabit a simple---and unsettlingly noisy---world.}\n\n\subsection{What the agent wants}\n\nNext we need to construct a reward function that incentivizes our agent to do the right thing. We would like our agent to balance the pendulum and remain near the center of the track. This means there are two things we wish to avoid:\n\n\begin{enumerate}[(i)]\n\item{allowing the pendulum to fall below the level of the track and}\n\item{moving the cart too close to the ends of the track.}\n\end{enumerate}\n\nSo, we will let the reward function equal -1 for any transition into a state that satisfies either of these conditions, and 0 otherwise:\n\n\begin{equation}\nR_a(s, s^\prime) = \left\{\n\begin{array}{lr}\n-1 \quad : s^{\theta\prime} \in \left\{s_1^\theta, s_4^\theta \right\}\mbox{ or }s^{x\prime} \in \left\{s_1^x, s_4^x \right\} \\\n0 \quad : \mbox{otherwise}\n\end{array}\n\right.\n.\n\end{equation}\n\nwhere $s^{\theta\prime}$ and $s^{x\prime}$ denote the angular-position and cart-position substates of the destination state $s^\prime$. Finally, we choose a discount factor $\gamma$, which reflects how long- or short-sighted our agent is. Having fleshed out $\mathscr{S}$, $\mathscr{A}$, $P_\cdot(\cdot, \cdot)$, $R_\cdot(\cdot, \cdot)$, and $\gamma$, we have fully set up our inverted pendulum problem as an MDP. The rest is up to the Bellman equations.\n\n\end{document}", "meta": {"hexsha": "b272f17fc8bb71db4c1617e21b3d0eb0efb081cf", "size": 17475, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2-mdp/writeup/2-mdp.tex", "max_stars_repo_name": "lukearend/swirl", "max_stars_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2-mdp/writeup/2-mdp.tex", "max_issues_repo_name": "lukearend/swirl", "max_issues_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2-mdp/writeup/2-mdp.tex", "max_forks_repo_name": "lukearend/swirl", "max_forks_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.010989011, "max_line_length": 765, "alphanum_fraction": 0.7351645207, "num_tokens": 4880, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5677055580484832}}
{"text": "\\input{../header_function}\r\n\r\n%---------- start document ---------- %\r\n \\section{module -- module/ideal with HNF}\\linkedzero{module}\r\n \\begin{itemize}\r\n   \\item {\\bf Classes}\r\n   \\begin{itemize}\r\n     \\item \\linkingone{module}{Submodule}\r\n     \\item \\linkingone{module}{Module}\r\n     \\item \\linkingone{module}{Ideal}\r\n     \\item \\linkingone{module}{Ideal\\_with\\_generator}\r\n   \\end{itemize}\r\n \\end{itemize}\r\n\r\n\\C\r\n\r\n \\subsection{Submodule -- submodule as matrix representation}\\linkedone{module}{Submodule}\r\n \\initialize\r\n  \\func{Submodule}{\\hiki{row}{integer},\\ \\hiki{column}{integer},\\ \\hikiopt{compo}{compo}{0},\\ \\hikiopt{coeff\\_ring}{CommutativeRing}{0},\\ \\hikiopt{ishnf}{True/False}{None}}{\\out{Submodule}}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create a submodule with matrix representation.\\\\\r\n  \\spacing\r\n  % added document\r\n  \\quad Submodule is subclass of \\linkingone{matrix}{RingMatrix}.\\\\\r\n  We assume that \\param{coeff\\_ring} is a PID (principal ideal domain).\r\n  Then, we have the HNF(hermite normal form) corresponding to a matrices.\\\\\r\n  \\spacing\r\n  %\r\n  % input, output document\r\n  If \\param{ishnf} is True, we assume that the input matrix is a HNF.\r\n  \\begin{at}\r\n   \\item[ishnf] If the matrix is a HNF, then \\param{ishnf} should be True, otherwise False.\r\n  \\end{at}\r\n  \\method\r\n  \\subsubsection{getGenerators -- generator of module}\\linkedtwo{module}{Submodule}{getGenerators}\r\n   \\func{getGenerators}{\\param{self}}{\\out{list}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a (current) generator of the module \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\r\n   % input, output document\r\n   \\quad Return the list of vectors consisting of a generator.\\\\\r\n \\subsubsection{isSubmodule -- Check whether submodule of self}\\linkedtwo{module}{Submodule}{isSubmodule}\r\n   \\func{isSubmodule}{\\param{self},\\ \\hiki{other}{Submodule}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return True if the submodule instance is a submodule of the \\param{other}, or False otherwise.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\r\n   %\\spacing\r\n   % input, output document\r\n   %\r\n   \\subsubsection{isEqual -- Check whether self and other are same module}\\linkedtwo{module}{Submodule}{isEqual}\r\n   \\func{isEqual}{\\param{self},\\ \\hiki{other}{Submodule}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return True if the submodule instance is \\param{other} as module, or False otherwise.\\\\.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad You should use the method for equality test of module, not matrix.\r\n   For equality test of matrix simply, use \\param{self}$==$\\param{other}.\r\n   \\\\\r\n   % input, output document\r\n   %\r\n   \\subsubsection{isContain -- Check whether other is in self}\\linkedtwo{module}{Submodule}{isContains}\r\n   \\func{isContains}{\\param{self},\\ \\hiki{other}{vector.Vector}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Determine whether \\param{other} is in \\param{self} or not.\\\\.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad If you want to represent \\param{other} as linear combination with the HNF generator of \\param{self}, use \\linkingtwo{module}{Submodule}{represent\\_element}.\r\n   \\\\\r\n   % input, output 
document\r\n   %\r\n   \\subsubsection{toHNF - change to HNF}\\linkedtwo{module}{Submodule}{toHNF}\r\n   \\func{toHNF}{\\param{self}}{\\out{(None)}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Rewrite \\param{self} to HNF (Hermite normal form), and set True to its \\param{ishnf}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Note that the HNF does not always give a basis of \\param{self} (i.e. the HNF may be redundant).\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{sumOfSubmodules - sum as submodule}\\linkedtwo{module}{Submodule}{sumOfSubmodules}\r\n   \\func{sumOfSubmodules}{\\param{self},\\ \\hiki{other}{Submodule}}{\\out{Submodule}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the module which is the sum of the two submodules.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{intersectionOfSubmodules - intersection as submodule}\\linkedtwo{module}{Submodule}{intersectionOfSubmodules}\r\n   \\func{intersectionOfSubmodules}{\\param{self},\\ \\hiki{other}{Submodule}}{\\out{Submodule}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the module which is the intersection of the two submodules.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n \\subsubsection{represent\\_element -- represent element as linear combination}\\linkedtwo{module}{Submodule}{represent\\_element}\r\n   \\func{represent\\_element}{\\param{self},\\ \\hiki{other}{vector.Vector}}{\\out{vector.Vector/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Represent \\param{other} as a linear combination with HNF generators.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad If \\param{other} is not in \\param{self}, return False.\r\n   Note that this method calls \\linkingtwo{module}{Submodule}{toHNF}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad The method returns the coefficients as an instance of \\linkingone{vector}{Vector}.\r\n \\subsubsection{linear\\_combination -- compute linear combination}\\linkedtwo{module}{Submodule}{linear\\_combination}\r\n   \\func{linear\\_combination}{\\param{self},\\ \\hiki{coeff}{list}}{\\out{vector.Vector}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad For given $\\mathbf{Z}$-coefficients \\param{coeff}, \r\n         return a vector corresponding to a linear combination of the (current) basis.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\r\n   % input, output document\r\n   \\quad \\param{coeff} must be a list of instances in \\linkingone{ring}{RingElement} whose size equals the number of columns of \\param{self}.\r\n\\begin{ex}\r\n>>> A = module.Submodule(4, 3, [1,2,3]+[4,5,6]+[7,8,9]+[10,11,12])\r\n>>> A.toHNF()\r\n>>> print A\r\n9 1\r\n6 1\r\n3 1\r\n0 1\r\n>>> A.getGenerators()\r\n[Vector([9L, 6L, 3L, 0L]), Vector([1L, 1L, 1L, 1L])]\r\n>>> V = vector.Vector([10,7,4,1])\r\n>>> A.represent_element(V)\r\nVector([1L, 1L])\r\n>>> V == A.linear_combination([1,1])\r\nTrue\r\n>>> B = module.Submodule(4, 1, [1,2,3,4])\r\n>>> C = module.Submodule(4, 2, [2,-4]+[4,-3]+[6,-2]+[8,-1])\r\n>>> print B.intersectionOfSubmodules(C)\r\n2\r\n4\r\n6\r\n8\r\n\\end{ex}%Don't indent!\r\n\\C\r\n  \\subsection{fromMatrix(class function) - create submodule}\\linkedtwo{module}{Submodule}{fromMatrix}\r\n  \\func{fromMatrix}{\\param{cls},\\ \\hiki{mat}{RingMatrix},\\ \\hikiopt{ishnf}{True/False}{None}}{\\out{Submodule}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Create a Submodule instance from a matrix instance \\param{mat}, whose class can be any subclass of Matrix.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Use this method when you want to be sure of obtaining a Submodule instance.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %
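\r\n   \\quad A minimal usage sketch (the $2\\times 2$ integer matrix here is arbitrary):\r\n\\begin{ex}\r\n>>> A = matrix.RingMatrix(2, 2, [1,2]+[3,4])\r\n>>> S = module.Submodule.fromMatrix(A)\r\n>>> isinstance(S, module.Submodule)\r\nTrue\r\n\\end{ex}%Don't indent!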
\r\n\\C\r\n\r\n\\subsection{Module - module over a number field}\\linkedone{module}{Module}\r\n \\initialize\r\n  \\func{Module}{\\hiki{pair\\_mat\\_repr}{list/matrix},\\ \\hiki{number\\_field}{algfield.NumberField},\\ \\hikiopt{base}{list/matrix.SquareMatrix}{None},\\ \\hikiopt{ishnf}{bool}{False}}{\\out{Module}}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create a new module object over a number field.\\\\\r\n  \\spacing\r\n  % added document\r\n  \\quad A module is a finitely generated $\\mathbf{Z}$-submodule. \r\n  Note that we do not assume that the rank of a module equals deg(number\\_field).\\\\\r\n  We represent a module by generators with respect to a base module over $\\mathbf{Z}[\\theta]$, where $\\theta$ is a root of \\param{number\\_field}.\\linkingtwo{algfield}{BasicAlgNumber}{polynomial}.\\\\\r\n  \\spacing\r\n  % input, output document\r\n  \\quad \\param{pair\\_mat\\_repr} should be in one of the following forms:\r\n  \\begin{itemize}\r\n  \\item $[M,\\ d]$, \r\n   where $M$ is a list of integral tuples/vectors whose size is the degree of \\param{number\\_field} and\r\n         $d$ is a denominator.\r\n  \\item $[M,\\ d]$,\r\n   where $M$ is an integral matrix whose number of rows is the degree of \\param{number\\_field} and\r\n         $d$ is a denominator.\r\n  \\item a rational matrix whose number of rows is the degree of \\param{number\\_field}.\r\n  \\end{itemize}\r\n  Also, \\param{base} should be in one of the following forms:\r\n  \\begin{itemize}\r\n  \\item \\linkedone{module}{base} a list of rational tuples/vectors whose size is the degree of \\param{number\\_field}\r\n  \\item a square non-singular rational matrix whose size is the degree of \\param{number\\_field}\r\n  \\end{itemize}\r\n  The module is internally represented as $\\frac{1}{d}M$ with respect to \\linkingtwo{module}{Module}{base},\r\n  where $d$ is \\linkingtwo{module}{Module}{denominator} and $M$ is \\linkingtwo{module}{Module}{mat\\_repr}.\r\n  If \\param{ishnf} is True, we assume that \\param{mat\\_repr} is an HNF.\\\\\r\n  \\begin{at}\r\n  \\item[mat\\_repr]\\linkedtwo{module}{Module}{mat\\_repr}: an instance of \\linkingone{module}{Submodule} $M$ whose size is the degree of \\param{number\\_field}\r\n  \\item[denominator]\\linkedtwo{module}{Module}{denominator}: an integer $d$\r\n  \\item[base]\\linkedtwo{module}{Module}{base}: a square non-singular rational matrix whose size is the degree of \\param{number\\_field}\r\n  \\item[number\\_field]\\linkedtwo{module}{Module}{number\\_field}: the number field over which the module is defined\r\n  \\end{at}\r\n\\begin{op}\r\n    \\verb+M==N+ & Return whether \\param{M} and \\param{N} are equal or not as modules.\\\\\r\n    \\verb+c in M+ & Check whether some element of \\param{M} equals \\param{c}.\\\\\r\n    \\verb|M+N| & Return the sum of \\param{M} and \\param{N} as modules.\\\\\r\n    \\verb+M*N+ & Return the product of \\param{M} and \\param{N} as ideals. \\\\\r\n               &  \\param{N} must be a module or a scalar (i.e. 
an element of \\linkingtwo{module}{Module}{number\\_field}).\\\\\r\n               &  If you want to compute the intersection of $M$ and $N$, see \\linkingtwo{module}{Module}{intersect}.\\\\\r\n    \\verb+M**c+ & Return \\param{M} raised to the power \\param{c} with respect to ideal multiplication.\\\\\r\n    \\verb+repr(M)+ & Return the repr string of the module \\param{M}.\\\\\r\n    \\verb+str(M)+ & Return the str string of the module \\param{M}.\\\\\r\n  \\end{op}\r\n\\begin{ex}\r\n>>> F = algfield.NumberField([2,0,1])\r\n>>> M_1 = module.Module([matrix.RingMatrix(2,2,[1,0]+[0,2]), 2], F)\r\n>>> M_2 = module.Module([matrix.RingMatrix(2,2,[2,0]+[0,5]), 3], F)\r\n>>> print M_1\r\n([1, 0]+[0, 2], 2)\r\n over\r\n([1L, 0L]+[0L, 1L], NumberField([2, 0, 1]))\r\n>>> print M_1 + M_2\r\n([1L, 0L]+[0L, 2L], 6)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)], \r\nNumberField([2, 0, 1]))\r\n>>> print M_1 * 2\r\n([1L, 0L]+[0L, 2L], 1L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)], \r\nNumberField([2, 0, 1]))\r\n>>> print M_1 * M_2\r\n([2L, 0L]+[0L, 1L], 6L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)], \r\nNumberField([2, 0, 1]))\r\n>>> print M_1 ** 2\r\n([1L, 0L]+[0L, 2L], 4L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)], \r\nNumberField([2, 0, 1]))\r\n\\end{ex}%Don't indent!\r\n\\method\r\n  \\subsubsection{toHNF - change to Hermite normal form (HNF)}\\linkedtwo{module}{Module}{toHNF}\r\n   \\func{toHNF}{\\param{self}}{\\out{(None)}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Change \\param{self}.\\linkingtwo{module}{Module}{mat\\_repr} to the Hermite normal form (HNF).\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{copy - create copy}\\linkedtwo{module}{Module}{copy}\r\n   \\func{copy}{\\param{self}}{\\out{Module}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Create a copy of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{intersect - intersection}\\linkedtwo{module}{Module}{intersect}\r\n   \\func{intersect}{\\param{self},\\ \\hiki{other}{Module}}{\\out{Module}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the intersection of \\param{self} and \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \r\n  \\subsubsection{issubmodule - Check submodule}\\linkedtwo{module}{Module}{issubmodule}\r\n   \\func{issubmodule}{\\param{self},\\ \\hiki{other}{Module}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a submodule of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \r\n  \\subsubsection{issupermodule - Check supermodule}\\linkedtwo{module}{Module}{issupermodule}\r\n   \\func{issupermodule}{\\param{self},\\ \\hiki{other}{Module}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a supermodule of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{represent\\_element - Represent as linear 
combination}\\linkedtwo{module}{Module}{represent\\_element}\r\n   \\func{represent\\_element}{\\param{self},\\ \\hiki{other}{algfield.BasicAlgNumber}}{\\out{list/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Represent \\param{other} as a linear combination with generators of \\param{self}.\r\n        If \\param{other} is not in \\param{self}, return False.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Note that we do not assume \\param{self}.\\linkingtwo{module}{Module}{mat\\_repr} is an HNF.\r\n   \\spacing\r\n   % input, output document\r\n   \\quad\r\n   The output is a list of integers if \\param{other} is in \\param{self}.\\\\\r\n   \\spacing\r\n  \\subsubsection{change\\_base\\_module - Change base}\\linkedtwo{module}{Module}{change\\_base\\_module}\r\n   \\func{change\\_base\\_module}{\\param{self},\\ \\hiki{other\\_base}{list/matrix.RingSquareMatrix}}{\\out{Module}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the module which is equal to \\param{self} with respect to \\param{other\\_base}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad\r\n   \\param{other\\_base} follows the form \\linkingone{module}{base}.\\\\\r\n  \\subsubsection{index - size of module}\\linkedtwo{module}{Module}{index}\r\n   \\func{index}{\\param{self}}{\\out{rational.Rational}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the order of a residue group over \\param{self}.\\linkingtwo{module}{Module}{base}.\r\n         That is, return $[M:N]$ if $N \\subset M$ or ${[N:M]}^{-1}$ if $M \\subset N$,\r\n         where $M$ is the module \\param{self} and $N$ is the module corresponding to \\param{self}.\\linkingtwo{module}{Module}{base}. 
\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{smallest\\_rational - a $\\mathbf{Z}$-generator in the rational field}\\linkedtwo{module}{Module}{smallest\\_rational}\r\n   \\func{smallest\\_rational}{\\param{self}}{\\out{rational.Rational}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the $\\mathbf{Z}$-generator of the intersection of the module \\param{self} and the rational field.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n \\begin{ex}\r\n>>> F = algfield.NumberField([2,0,1])\r\n>>> M_1=module.Module([matrix.RingMatrix(2,2,[1,0]+[0,2]), 2], F)\r\n>>> M_2=module.Module([matrix.RingMatrix(2,2,[2,0]+[0,5]), 3], F)\r\n>>> print M_1.intersect(M_2)\r\n([2L, 0L]+[0L, 5L], 1L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)],\r\n NumberField([2, 0, 1]))\r\n>>> M_1.represent_element( F.createElement( [[2,4], 1] ) )\r\n[4L, 4L]\r\n>>> print M_1.change_base_module( matrix.FieldSquareMatrix(2, 2, [1,0]+[0,1]) / 2 )\r\n([1L, 0L]+[0L, 2L], 1L)\r\n over\r\n([Rational(1, 2), Rational(0, 1)]+[Rational(0, 1), Rational(1, 2)],\r\n NumberField([2, 0, 1]))\r\n>>> M_2.index()\r\nRational(10, 9)\r\n>>> M_2.smallest_rational()\r\nRational(2, 3)\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n\\subsection{Ideal - ideal over a number field}\\linkedone{module}{Ideal}\r\n \\initialize\r\n  \\func{Ideal}{\\hiki{pair\\_mat\\_repr}{list/matrix},\\ \\hiki{number\\_field}{algfield.NumberField},\\ \\hikiopt{base}{list/matrix.SquareMatrix}{None},\\ \\hikiopt{ishnf}{bool}{False}}{\\out{Ideal}}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create a new ideal object over a number field.\\\\\r\n  \\spacing\r\n  % added document\r\n  \\quad Ideal is a subclass of \\linkingone{module}{Module}.\\\\\r\n  \\spacing\r\n  % input, output document\r\n  \\quad Refer to initialization of \\linkingone{module}{Module}.\\\\\r\n%\\begin{ex}\r\n%\\end{ex}%Don't indent!\r\n\\C\r\n\\method\r\n\r\n\\subsubsection{inverse -- inverse}\\linkedtwo{module}{Ideal}{inverse}\r\n   \\func{inverse}{\\param{self}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the inverse ideal of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\param{self}.\\linkingtwo{module}{Module}{number\\_field}.\\linkingtwo{algfield}{NumberField}{integer\\_ring}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n \\subsubsection{issubideal -- Check subideal}\\linkedtwo{module}{Ideal}{issubideal}\r\n   \\func{issubideal}{\\param{self},\\ \\hiki{other}{Ideal}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a subideal of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{issuperideal -- Check superideal}\\linkedtwo{module}{Ideal}{issuperideal}\r\n   \\func{issuperideal}{\\param{self},\\ \\hiki{other}{Ideal}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a superideal of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{gcd -- greatest common divisor}\\linkedtwo{module}{Ideal}{gcd}\r\n   \\func{gcd}{\\param{self},\\ 
\\hiki{other}{Ideal}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the greatest common divisor (gcd) of \\param{self} and \\param{other} as ideals.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method simply executes \\param{self}$+$\\param{other}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{lcm -- least common multiple}\\linkedtwo{module}{Ideal}{lcm}\r\n   \\func{lcm}{\\param{self},\\ \\hiki{other}{Ideal}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the least common multiple (lcm) of \\param{self} and \\param{other} as ideals.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method simply calls the method \\linkingtwo{module}{Module}{intersect}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n%  \\subsubsection{twoElementRepresentation -- Represent as two element}\\linkedtwo{module}{Ideal}{twoElementRepresentation}\r\n%   \\func{twoElementRepresentation}{\\param{self}}{\\out{Ideal}}\\\\\r\n%   \\spacing\r\n%   % document of basic document\r\n%   \\quad Return the ideal which is \\param{self} represented with only two elements.\\\\\r\n%   \\spacing\r\n%   % added document\r\n%   %\\quad \\\\\r\n%   %\\spacing\r\n%   % input, output document\r\n%   %\\quad \\\\\r\n  \\subsubsection{norm -- norm}\\linkedtwo{module}{Ideal}{norm}\r\n   \\func{norm}{\\param{self}}{\\out{rational.Rational}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the norm of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\param{self}.\\linkingtwo{module}{Module}{number\\_field}.\\linkingtwo{algfield}{NumberField}{integer\\_ring}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{isIntegral -- Check integral}\\linkedtwo{module}{Ideal}{isIntegral}\r\n   \\func{isIntegral}{\\param{self}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Determine whether \\param{self} is an integral ideal or not.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad \\\\\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n%  \\subsubsection{isPrime -- Check primality}\\linkedtwo{module}{Ideal}{isPrime}\r\n%   \\func{isPrime}{\\param{self}}{\\out{True/False}}\\\\\r\n%   \\spacing\r\n%   % document of basic document\r\n%   \\quad Determine whether \\param{self} is a prime ideal or not.\\\\\r\n%   \\spacing\r\n%   % added document\r\n%   %\\quad \\\\\r\n%   %\\spacing\r\n%   % input, output document\r\n%   %\\quad \\\\\r\n\\begin{ex}\r\n>>> M = module.Ideal([matrix.RingMatrix(2, 2, [1,0]+[0,2]), 2], F)\r\n>>> print M.inverse()\r\n([-2L, 0L]+[0L, 2L], 1L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)],\r\n NumberField([2, 0, 1]))\r\n>>> print M * M.inverse()\r\n([1L, 0L]+[0L, 1L], 1L)\r\n over\r\n([Rational(1, 1), Rational(0, 1)]+[Rational(0, 1), Rational(1, 1)],\r\n NumberField([2, 0, 1]))\r\n>>> M.norm()\r\nRational(1, 2)\r\n>>> M.isIntegral()\r\nFalse\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n\\subsection{Ideal\\_with\\_generator - ideal with generator}\\linkedone{module}{Ideal\\_with\\_generator}\r\n \\initialize\r\n  \\func{Ideal\\_with\\_generator}{\\hiki{generator}{list}}{\\out{Ideal\\_with\\_generator}}\\\\\r\n  \\spacing\r\n  % document of basic document\r\n  \\quad Create a new ideal given by its generators.\\\\\r\n  \\spacing\r\n  % added document\r\n  
%\\quad \r\n  %\\spacing\r\n  % input, output document\r\n  \\quad \\param{generator} is a list of instances in \\linkingone{algfield}{BasicAlgNumber}, which represent generators, over the same number field.\r\n  \\begin{at}\r\n  \\item[generator]\\linkedtwo{module}{Ideal\\_with\\_generator}{generator}: generators of the ideal\r\n  \\item[number\\_field]\\linkedtwo{module}{Ideal\\_with\\_generator}{number\\_field}: the number field over which generators are defined\r\n  \\end{at}\r\n\\begin{op}\r\n    \\verb+M==N+ & Return whether \\param{M} and \\param{N} are equal or not as modules.\\\\\r\n    \\verb+c in M+ & Check whether some element of \\param{M} equals \\param{c}.\\\\\r\n    \\verb|M+N| & Return the sum of \\param{M} and \\param{N} as ideals with generators.\\\\\r\n    \\verb+M*N+ & Return the product of \\param{M} and \\param{N} as ideals with generators. \\\\\r\n    \\verb+M**c+ & Return \\param{M} raised to the power \\param{c} with respect to ideal multiplication.\\\\\r\n    \\verb+repr(M)+ & Return the repr string of the ideal \\param{M}.\\\\\r\n    \\verb+str(M)+ & Return the str string of the ideal \\param{M}.\\\\\r\n  \\end{op}\r\n\\begin{ex}\r\n>>> F = algfield.NumberField([2,0,1])\r\n>>> M_1 = module.Ideal_with_generator([\r\n F.createElement([[1,0], 2]), F.createElement([[0,1], 1]) \r\n])\r\n>>> M_2 = module.Ideal_with_generator([\r\n F.createElement([[2,0], 3]), F.createElement([[0,5], 3]) \r\n])\r\n>>> print M_1\r\n[BasicAlgNumber([[1, 0], 2], [2, 0, 1]), BasicAlgNumber([[0, 1], 1], [2, 0, 1])]\r\n>>> print M_1 + M_2\r\n[BasicAlgNumber([[1, 0], 2], [2, 0, 1]), BasicAlgNumber([[0, 1], 1], [2, 0, 1]),\r\n BasicAlgNumber([[2, 0], 3], [2, 0, 1]), BasicAlgNumber([[0, 5], 3], [2, 0, 1])]\r\n>>> print M_1 * M_2\r\n[BasicAlgNumber([[1L, 0L], 3L], [2, 0, 1]), BasicAlgNumber([[0L, 5L], 6], [2, 0, 1]), \r\nBasicAlgNumber([[0L, 2L], 3], [2, 0, 1]), BasicAlgNumber([[-10L, 0L], 3], [2, 0, 1])]\r\n>>> print M_1 ** 2\r\n[BasicAlgNumber([[1L, 0L], 4], [2, 0, 1]), BasicAlgNumber([[0L, 1L], 2], [2, 0, 1]), \r\nBasicAlgNumber([[0L, 1L], 2], [2, 0, 1]), BasicAlgNumber([[-2L, 0L], 1], [2, 0, 1])]\r\n\\end{ex}%Don't indent!\r\n\\method\r\n  \\subsubsection{copy - create copy}\\linkedtwo{module}{Ideal\\_with\\_generator}{copy}\r\n   \\func{copy}{\\param{self}}{\\out{Ideal\\_with\\_generator}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Create a copy of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{to\\_HNFRepresentation - change to ideal with HNF}\\linkedtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}\r\n   \\func{to\\_HNFRepresentation}{\\param{self}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Transform \\param{self} to the corresponding ideal as an HNF (Hermite normal form) representation.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\quad\r\n   %\\spacing\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{twoElementRepresentation - Represent with two elements}\\linkedtwo{module}{Ideal\\_with\\_generator}{twoElementRepresentation}\r\n   \\func{twoElementRepresentation}{\\param{self}}{\\out{Ideal\\_with\\_generator}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the ideal which is \\param{self} represented with only two generators.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad  If \\param{self} is not a prime ideal, this method is not efficient.\\\\\r\n   \\spacing 
\r\n   % input, output document\r\n   %\\quad\r\n  \\subsubsection{smallest\\_rational - a $\\mathbf{Z}$-generator in the rational field}\\linkedtwo{module}{Ideal\\_with\\_generator}{smallest\\_rational}\r\n   \\func{smallest\\_rational}{\\param{self}}{\\out{rational.Rational}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the $\\mathbf{Z}$-generator of the intersection of the module \\param{self} and the rational field.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad\r\n\\subsubsection{inverse -- inverse}\\linkedtwo{module}{Ideal\\_with\\_generator}{inverse}\r\n   \\func{inverse}{\\param{self}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the inverse ideal of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{norm -- norm}\\linkedtwo{module}{Ideal\\_with\\_generator}{norm}\r\n   \\func{norm}{\\param{self}}{\\out{rational.Rational}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the norm of \\param{self}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{intersect - intersection}\\linkedtwo{module}{Ideal\\_with\\_generator}{intersection}\r\n   \\func{intersect}{\\param{self},\\ \\hiki{other}{Ideal\\_with\\_generator}}{\\out{Ideal}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return the intersection of \\param{self} and \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad\r\n \\subsubsection{issubideal -- Check subideal}\\linkedtwo{module}{Ideal\\_with\\_generator}{issubideal}\r\n   \\func{issubideal}{\\param{self},\\ \\hiki{other}{Ideal\\_with\\_generator}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a subideal of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n  \\subsubsection{issuperideal -- Check superideal}\\linkedtwo{module}{Ideal\\_with\\_generator}{issuperideal}\r\n   \\func{issuperideal}{\\param{self},\\ \\hiki{other}{Ideal\\_with\\_generator}}{\\out{True/False}}\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Check whether \\param{self} is a superideal of \\param{other}.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad This method calls \\linkingtwo{module}{Ideal\\_with\\_generator}{to\\_HNFRepresentation}.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   %\\quad \\\\\r\n\\begin{ex}\r\n>>> M = module.Ideal_with_generator([\r\nF.createElement([[2,0], 3]), F.createElement([[0,2], 3]), F.createElement([[1,0], 3])\r\n])\r\n>>> print M.to_HNFRepresentation()\r\n([2L, 0L, 0L, -4L, 1L, 0L]+[0L, 2L, 2L, 0L, 0L, 1L], 3L)\r\n over\r\n([1L, 0L]+[0L, 1L], NumberField([2, 0, 1]))\r\n>>> print M.twoElementRepresentation()\r\n[BasicAlgNumber([[1L, 0], 3], [2, 
0, 1]), BasicAlgNumber([[3, 2], 3], [2, 0, 1])]\r\n>>> M.norm()\r\nRational(1, 9)\r\n\\end{ex}%Don't indent!\r\n\\C\r\n\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "2bf077d79f576253bc15bf956e2e021f2cf48a0c", "size": 27690, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/en/module.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/en/module.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/en/module.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.9010339734, "max_line_length": 195, "alphanum_fraction": 0.6598410979, "num_tokens": 8766, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672320414786, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5676915189052846}}
{"text": "\\chapter{Finite element discretisation}\n\\label{sec:discretization}\n\nIn this chapter we introduce a mixed finite element discretisation for the steady-state incompressible MHD problem \\eqref{eq:mhd}, \\eqref{eq:bc} that models electrically conductive fluids under the influence of a magnetic field.  Following the setting in \\cite{schotzau2004mixed}, we use curl-conforming elements for the magnetic field and conforming continuous elements for the velocity field. The resulting discretisation is verified though a series of numerical experiments which appear later in Chapter \\ref{chap:results}. For simplicity, we initially only discuss in detail homogeneous Dirichlet boundary conditions, that is\n\\begin{equation} \\label{eq:homogeneousBC}\n    \\uu{u} = \\uu{0} \\quad \\mbox{and} \\quad \\uu{n}\\times \\uu{b} = \\uu{0}.\n\\end{equation}\nInhomogeneous conditions as in \\eqref{eq:bc}  are  discussed in Section~\\ref{sec:bcig}.\n\n\n\\section{Variational formulation}\n\\label{sec:variation}\n\n\\RE{Suppose that the domain $\\Omega$ is a Lipschitz domain of $\\mathbb{R}^d$ for $d=2,3$.} To express the problem \\eqref{eq:mhd}, \\eqref{eq:bc} in weak form we follow \\cite{schotzau2004mixed} and denote the $L^2$-inner product on $L^2(\\Omega)^d$ by $(\\cdot,\\cdot)_\\Omega$, for $d = 2,3$. We introduce the standard Sobolev spaces\n\\begin{equation} \\label{eq:FuncSpace}\n \\left. \\begin{aligned}\n\\hspace{-1.5mm}\\uu{V}&=H_0^1(\\Omega)^d=\\left\\{\\uu{u}\\in H^1(\\Omega)^d\\,:\\,\\text{$\\uu{u}=\\uu{0}$ on $\\partial\\Omega$}\\right\\},\\\\\n\\hspace{-1.5mm}Q&=L^2_0(\\Omega)=\\{p\\in L^2(\\Omega)\\,:\\,(p\\,,1)_\\Omega=0\\},\\\\\n\\hspace{-1.5mm}\\uu{C}&=H_0({\\rm curl};\\Omega) = \\left\\{\\uu{b}\\in L^2(\\Omega)^d\\,:\\,\\nabla\\times\\uu{b}\\in L^2(\\Omega)^{\\bar{d}}, \\\n\\text{$\\uu{n}\\times\\uu{b}=\\uu{0}$ on $\\partial\\Omega$}\\right\\},\\\\\n\\hspace{-1.5mm}S&=H^1_0(\\Omega)=\\{r\\in H^1(\\Omega)\\,:\\,r=0\\ \\mbox{on $\\partial\\Omega$}\\}.\n \\end{aligned}\n \\right.\n \\qquad \\text{}\n\\end{equation}\nwhere $\\bar{d}={2d-3}$ is used to cover the 2D and 3D cases. We write $\\|\\cdot\\|_{L^2(\\Omega)}$, $\\|\\cdot\\|_{H^1(\\Omega)}$ and $\\|\\cdot\\|_{H(\\rm{curl};\\Omega)}$ for the associated natural norms. More precisely, for  vector fields $\\uu{u},\\uu{b}$ and a scalar function $r$ the norms are defined as follows:\n\\begin{equation} \\nonumber\n \\left. 
\\begin{aligned}\n    \\|\\uu{u}\\|_{L^2(\\Omega)} &= \\left({\\int_{\\Omega} \\uu{u}\\cdot\\uu{u}\\;dx}\\right)^{\\frac{1}{2}},\\\\\n   \\|\\uu{u}\\|_{H^1(\\Omega)} &=  \\left(\\|\\uu{u}\\|_{L^2(\\Omega)}^2 + \\|\\nabla  \\uu{u}\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}},\\\\\n   \\|\\uu{b}\\|_{H(\\rm{curl};\\Omega)} &=  \\left(\\|\\uu{b}\\|_{L^2(\\Omega)}^2 + \\|\\nabla \\times \\uu{b}\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}}, \\\\\n    \\|r\\|_{L^2(\\Omega)} &= \\left({\\int_{\\Omega} r^2\\;dx}\\right)^{\\frac{1}{2}},\\\\\n    \\|r\\|_{H^1(\\Omega)} &=  \\left(\\|r\\|_{L^2(\\Omega)}^2 + \\|\\nabla  r\\|_{L^2(\\Omega)}^2 \\right)^{\\frac{1}{2}},\\\\\n \\end{aligned}\n \\right.\n \\qquad \\text{}\n\\end{equation}\nwhere $\\|\\nabla  \\uu{u}\\|_{L^2(\\Omega)}$ is given by:\n$$\\|\\nabla  \\uu{u}\\|_{L^2(\\Omega)} = \\left(\\int_{\\Omega} \\sum^d_{i,j=1}(\\nabla \\uu{u})_{ij}(\\nabla \\uu{u})_{ij} \\, dx\\right)^{\\frac12}.$$\nThe weak formulation of the incompressible MHD system (\\ref{eq:mhd}), (\\ref{eq:bc}) consists in finding~$(\\uu{u},p,\\uu{b},r)\\in \\uu{V} \\times Q\\times \\uu{C} \\times S$ such that\n\\begin{subequations}\n\\label{eq:weak}\n\\begin{eqnarray}\n\\label{eq:weak1} A(\\uu{u},\\uu{v}) + O(\\uu{u};\\uu{u},\\uu{v})\n+C(\\uu{b};\\uu{v},\\uu{b})\n+B(\\uu{v}, p) & =& (\\uu{f}, \\uu{v})_{\\Omega},\\\\[.1cm]\n\\label{eq:weak2}\nB(\\uu{u},q)&=&0, \\\\[.1cm]\n\\label{eq:weak3}\nM(\\uu{b},\\uu{c})-C(\\uu{b};\\uu{u},\\uu{c})+D(\\uu{c},r)&=& (\\uu{g},\\uu{c})_\\Omega, \\\\[.1cm]\n\\label{eq:weak4} D(\\uu{b},s)&=&0,\n\\end{eqnarray}\n\\end{subequations}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V} \\times Q\\times \\uu{C}\\times\nS$. The individual variational forms are given by\n\\begin{equation} \\label{eq:forms}\n \\left. \\begin{aligned}\n&A(\\uu{u},\\uu{v})=  \\int_\\Omega \\nu \\, \\nabla\\uu{u}:\n\\nabla\\uu{v}\\,d\\uu{x},&\\\\  & O(\\uu{w};\\uu{u},\\uu{v}) = \\int_\\Omega\n(\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{v} \\, d\\uu{x},\n\\\\[.1cm]\n&  B(\\uu{u},q) = -\\int_\\Omega\\,(\\nabla\\cdot\\uu{u}) \\,q \\,d\\uu{x},\n&\\\\  &\n M(\\uu{b},\\uu{c})= \\int_\\Omega\\, \\kappa\\nu_m\n(\\nabla\\times\\uu{b})\\cdot(\\nabla\\times\\uu{c})\\,d\\uu{x},\\\\[0.1cm]\n& D(\\uu{b},s) = \\int_\\Omega\\, \\uu{b} \\cdot \\nabla s\\,\nd\\uu{x}, & \\\\ &\nC(\\uu{d};\\uu{v},\\uu{b}) =  \\int_\\Omega \\kappa\\, (\\uu{v}\\times\\uu{d})\\cdot\n(\\nabla\\times\\uu{b})\\, d\\uu{x},\n \\end{aligned}\n \\right.\n \\quad\\text{}\n\\end{equation}\nwhere $\\nabla \\uu{u}:\\nabla \\uu{v}$ is defined as\n$$\\nabla \\uu{u}:\\nabla \\uu{v} = \\sum^d_{i,j=1}(\\nabla \\uu{u})_{ij}(\\nabla \\uu{v})_{ij}.$$ In \\cite{schotzau2004mixed} it has been shown that this formulation of the problem is energy-stable and has a unique solution for small data (i.e. for large $\\nu$, $\\nu_m$, small $\\kappa$, and forcing terms $\\uu{f}$ and $\\uu{g}$ with small $L^2$-norms).\n\n\\section{Mixed finite element discretisation}\n\nConsider the domain $\\Omega$ to be divided up into a regular and quasi-uniform mesh ${\\mathcal T}_h=\\{K\\}$ consisting of triangles ($d = 2$) or tetrahedra ($d = 3$) with mesh size $h$. 
Based on the function spaces defined in \\eqref{eq:FuncSpace}, our finite element approximation will be sought in the finite dimensional spaces given by:\n\\begin{equation}\n\\label{eq:FiniteSpace}\n\\begin{split}\n\\uu{V}_h &=  \\{\\, \\uu{u}\\in H^1(\\Omega)^d\\, :\\, \\uu{u}|_K \\in {\\mathcal P}_{k}(K)^d, \\, K \\in{\\mathcal T}_h \\, \\},\\\\[.1cm]\nQ_h&=  \\{\\, p\\in L^2(\\Omega) \\cap H^1(\\Omega)\\,:\\, p|_K \\in {\\mathcal P}_{k-1}(K), \\, K \\in{\\mathcal T}_h \\,\\},\\\\[.1cm]\n\\uu{C}_h &=  \\{\\, \\uu{b}\\in H_0({\\rm curl}; \\Omega) \\,:\\, \\uu{b}|_K \\in {\\mathcal P}_{k-1}(K)^d \\oplus \\uu{R}_k(K), \\, K \\in{\\mathcal T}_h \\,\\},\\\\[.1cm]\nS_h&=  \\{\\, r\\in H_0^1(\\Omega) \\,:\\, r|_K \\in {\\mathcal P}_{k}(K), \\, K \\in {\\mathcal T}_h \\, \\},\n\\end{split}\n\\end{equation}\nfor $k\\geq 2$. We define ${\\mathcal P}_{k}(K)$ as the space of polynomials of total degree at most $k$ on $K$ and $ \\uu{R}_k(K)$ as the space of homogeneous vector polynomials of total degree $k$ on $K$ that are orthogonal to the position vector $\\uu{x}$. Here we note that we are using ${\\mathcal P_k}/{\\mathcal P_{k-1}}$ Taylor-Hood elements for the fluid unknowns $(\\uu{u},p)$ \\cite{taylor1973numerical}. For the magnetic variables $(\\uu{b},r)$ we use the curl-conforming \\nedelec element pair of the first kind \\cite{nedelec1980mixed}. These choices of finite element spaces $\\uu{V}_h, \\, \\uu{C}_h, \\, Q_h$ and $S_h$ imply that we have conforming subspaces of our Sobolev spaces $\\uu{V}, \\, \\uu{C}, \\,Q$ and $S$, respectively. Then the finite element solution to \\eqref{eq:weak} consists in finding $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)\\in \\uu{V}_h\\times Q_h\\times \\uu{C}_h\\times S_h$ such that\n\\begin{subequations}\n\\label{eq:VariationForm}\n\\begin{eqnarray}\n\\label{eq:bn1} \\hspace{-15mm} A(\\uu{u}_h,\\uu{v}) + \\tilde{O}(\\uu{u}_h;\\uu{u}_h,\\uu{v}) +C(\\uu{b}_h;\\uu{v},\\uu{b}_h) +B(\\uu{v}, p_h) & = & ( \\uu{f},\\uu{v}),\\\\[.1cm]\n\\label{eq:bn2}\nB(\\uu{u}_h,q)&=& 0, \\\\[.1cm]\n\\label{eq:bn3} M(\\uu{b}_h,\\uu{c})-C(\\uu{b}_h;\\uu{u}_h,\\uu{c})+ D(\\uu{c},r_h)&=& (\\uu{g},\\uu{c}),\\\\[.1cm]\n\\label{eq:bn4} D(\\uu{b}_h,s)&=&0,\n\\end{eqnarray}\n\\end{subequations}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$.\n\nThe forms $A, M, B, D$ and $C$ stay the same as on the continuous level. However, for the convection term $\\tilde{O}(\\cdot;\\cdot,\\cdot)$ we modify the form $O(\\uu{w};\\uu{u},\\uu{v})$ in a standard fashion to ensure the energy-stability property\n\\begin{equation} \\label{eq:convection}\n    \\tilde{O}(\\uu{w};\\uu{u},\\uu{u}) = 0, \\quad \\forall \\uu{w},\\uu{u} \\in  \\uu{V}_h.\n\\end{equation}\nTo do so we integrate by parts the convection form $O(\\uu{w};\\uu{u},\\uu{u})$ to obtain\n\\begin{equation} \\nonumber\n \\left. \\begin{aligned}\n     \\int_\\Omega (\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{u} \\, d\\uu{x} =& -\\frac{1}{2}\\int_{\\Omega} (\\nabla \\cdot \\uu{w})\\, \\uu{u} \\cdot \\uu{u} \\, d\\uu{x}\n     +\\frac{1}{2}\\int_{\\partial \\Omega} (\\uu{w}\\cdot \\uu{n})\\, |\\uu{u}|^2\\, ds,\n \\end{aligned}\n \\right.\n \\qquad \\text{}\n\\end{equation}\nrecalling that $\\uu{n}$ is the unit outward normal on $\\partial \\Omega$. 
Therefore, we choose the modified convection form $\\tilde{O}(\\uu{w};\\uu{u},\\uu{v})$ as\n$$\\tilde{O}(\\uu{w};\\uu{u},\\uu{v}) =  \\int_\\Omega (\\uu{w}\\cdot\\nabla)\\uu{u} \\cdot\\uu{v} \\, d\\uu{x} +\\frac{1}{2}\\int_{\\Omega} (\\nabla \\cdot \\uu{w})\\, \\uu{u} \\cdot \\uu{v}\\, d\\uu{x}-\\frac{1}{2}\\int_{\\partial \\Omega} (\\uu{w}\\cdot \\uu{n})\\, \\uu{u} \\cdot \\uu{v}\\, ds.$$\nBy construction, property \\eqref{eq:convection} is now satisfied. Note also that for homogeneous boundary conditions as assumed in \\eqref{eq:homogeneousBC}, the boundary integral term in $\\tilde{O}$ can be omitted.\n\nAgain in \\cite{schotzau2004mixed} it has been shown that this variational formulation of the MHD problem is discretely energy-stable and has a unique solution for small data. Also, optimal order error estimates in the mesh size $h$ have been derived for small data using the stability property \\eqref{eq:convection}. Namely, for sufficiently smooth solutions, we have the error bound\n$$\\|\\uu{u}-\\uu{u}_h\\|_{H^1(\\Omega)}+\\|\\uu{b}-\\uu{b}_h\\|_{H(\\rm{curl};\\Omega)}+\\|p-p_h\\|_{L^2(\\Omega)}+\\|r-r_h\\|_{H^1(\\Omega)} \\leq C h^k,$$\nfor a constant $C>0$ independent of the mesh size. In addition, the $L^2$-norm error for the velocity field is of order $\\mathcal{O}(h^{k+1})$ (as $\\uu{V}_h$ consists of a full polynomial space on each element). However, we cannot expect $L^2$-norm errors of order $\\mathcal{O}(h^{k+1})$ for the magnetic field (as $\\uu{C}_h$ does not consist of a full polynomial space on each element).\n\n\n\\subsection{Matrix representation}\n\nThe variational formulation \\eqref{eq:VariationForm} can now be converted into a matrix representation. To do this, we introduce the basis functions for the finite element spaces in \\eqref{eq:FiniteSpace}:\n\\begin{alignat}2\n\\label{eq:bases1}\n\\uu{V}_h & = \\mbox{span}\\langle  \\uu{\\psi}_j \\rangle _{j=1}^{n_u}, & \\qquad &\nQ_h  = \\mbox{span} \\langle  \\alpha_i \\rangle _{i=1}^{m_u},\\\\[0.1cm]\n \\uu{C}_h& =\\mbox{span}\\langle \\uu{\\phi}_j \\rangle _{j=1}^{n_b}, & \\qquad & S_h = \\mbox{span} \\langle \\beta_i\n\\rangle_{i=1}^{m_b}.\n\\end{alignat}\nThe aim now is to find the coefficient vectors $u = (u_1, \\ldots , u_{n_u}) \\in \\mathbb{R}^{n_u}$, $p = (p_1, \\ldots , p_{m_u}) \\in \\mathbb{R}^{m_u}$, $b = (b_1, \\ldots , b_{n_b}) \\in \\mathbb{R}^{n_b}$, and $r = (r_1, \\ldots , r_{m_b}) \\in \\mathbb{R}^{m_b}$ of the finite element functions $(\\uu{u}_h, p_h,\\uu{b}_h, r_h)$ in terms of the chosen bases. 
As usual, this is done by writing the bilinear forms in \\eqref{eq:VariationForm} in terms of the following stiffness matrices and load vectors:\n\\begin{alignat*}2\nA_{i,j} &= A(\\uu{\\psi}_j,\\uu{\\psi}_i), &\\quad  &1 \\leq i,j \\leq n_u,\\\\[0.1cm]\nB_{i,j} &= B(\\uu{\\psi}_j,\\alpha_i), &\\quad &1 \\leq i \\leq m_u, \\ 1 \\leq j \\leq n_u,\\\\[.1cm]\nD_{i,j} &= D(\\uu{\\phi}_j,\\beta_i),  & & 1 \\leq i \\leq m_b,\\ 1 \\leq j \\leq n_b,\\\\[.1cm]\nM_{i,j}&= M(\\uu{\\phi}_j,\\uu{\\phi}_i), &\\qquad & 1 \\leq i,j \\leq n_b,\\\\[.1cm]\nf_i &= (\\uu{f},\\uu{\\psi}_i)_\\Omega, & & 1\\leq i\\leq n_u,\\\\[.1cm]\ng_i &= (\\uu{g},\\uu{\\phi}_i)_\\Omega, & & 1\\leq i \\leq n_b.\n\\end{alignat*}\nFor the two non-linear forms, $\\tilde{O}$ and $C$, we define the corresponding stiffness matrices with respect to given finite element functions $\\uu{w}_h \\in \\uu{V}_h$ and $\\uu{d}_h\\in \\uu{C}_h$ in the first argument and their associated coefficient vectors $w$ and $d$ as\n\\begin{alignat*}2\nO(w)_{i,j} &=\\tilde{O}(\\uu{w}_h;\\uu{\\psi}_j,\\uu{\\psi}_i), &\\quad  &1 \\leq i,j \\leq n_u,\\\\[.1cm]\nC(d)_{i,j} &= C(\\uu{d}_h;\\uu{\\psi}_j,\\uu{\\phi}_i), & & 1\\leq i \\leq n_b,\\ 1 \\leq j \\leq n_u.\n\\end{alignat*}\n\nThus, the numerical solution to \\eqref{eq:mhd} consists in solving the non-linear system\n\\begin{equation}\n\\label{eq:matrix-system}\n\\left(\n\\begin{array}{cccc}\nA+O(u) & B^T & C(b)^T & 0\\\\\nB & 0 & 0 & 0\\\\\n-C(b) & 0 & M & D^T \\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\nu\\\\\np\\\\\nb\\\\\nr\n\\end{array}\n\\right) =\n\\left(\n\\begin{array}{c} f\\\\0\\\\g\\\\0\n\\end{array}\n\\right),\n\\end{equation}\nwhere the vectors  $u\\in\\mathbb{R}^{n_u}$, $p\\in\\mathbb{R}^{m_u}$,  $b\\in\\mathbb{R}^{n_b}$, and $r\\in\\mathbb{R}^{m_b}$ are the unknown coefficients of the finite element functions.\n\n\\section{Picard iteration (P)}\n\\label{sec:nonlinear}\nThe discrete system \\eqref{eq:matrix-system} is non-linear, and therefore applying a non-linear solver to this problem is necessary. A common choice to deal with the non-linearity within the incompressible Navier-Stokes equations in isolation is to perform Oseen or Picard iterations \\cite{elman2005finite}. This involves linearising around the current velocity and solving for updates.\n\n% One (at least theoretical) advantage of the discrete Oseen system is that it is provably energy-stable (for the skew-symmetrized form which we use) for all values of nu. That is, the matrix is not symmetric but positive definite, which might be an advantage for preconditioning. For small data (i.e. for large \\nu) the fixed-point iteration is a contraction.\n\n\\RE{For simplicity we only consider the linearly convergent Picard iterations. Since we have modified the convection form to be discretely energy-stable as in \\eqref{eq:convection}, an advantage of this approach is that the discrete convection-diffusion operator is real positive. Thus, with small data the fixed-point/Picard iteration is a contraction. A more efficient non-linear solver is Newton's method, which converges quadratically near the solution. 
However, applying Newton's method is more involved, as it requires constructing and solving linear systems associated with the Jacobian, as well as finding an initial guess sufficiently close to the solution.\n% A common approach to ensure convergence with Newton's method is to perform a few Picard iterations before starting the Newton scheme.\n% In Section~5.2 we mention the possibility of using Newton's method as an area of future work.\nWe leave the implementation of Newton's method or other non-linear solvers as an area of possible future work (Section~5.2).\n}\n\n% \\RE{For simplicity we only consider the linearly convergent Picard iterations. An example of a more efficient non-linear solver is Newton's method which converges quadratically. However, the main difficulty with Newton's method is that convergence is only local and hence the initial guess needs to be sufficiently accurate to obtain convergence. Since we have modified the convection form to be descretely energy-stable \\eqref{eq:convection} then an advantage of the discrete Oseen systems is that the matrix is positive definite. Thus, this provides an advantage for preconditioning and with small data the fixed-point/Picard iterations is a contraction.\n% % A common approach to ensure convergence with Newton's method is to perform a few Picard iterations before starting the Newton scheme.\n% % In Section~5.2 we mention the possibility of using Newton's method as an area of future work.\n% We leave the implementation of Newton's method or other non-linear solvers as an area of possible future work (Section~5.2).\n% }\n\n\\RE{We adapt the fixed-point/Picard iterations to an approach for the full MHD system, where we linearise around the current velocity and magnetic fields.} Given a current iterate $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$  we solve for updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$ and introduce the next iterate by setting:\n\\begin{equation}\\nonumber\n\\begin{array}{cc}\n% \\label{eq:updates}\n\\uu{u}_h& \\hspace{-3mm} \\rightarrow \\uu{u}_h +\\delta \\uu{u}_h, \\quad p_h \\rightarrow p_h +\\delta p_h,\\\\\n\\uu{b}_h& \\hspace{-3mm}  \\rightarrow \\uu{b}_h +\\delta \\uu{b}_h, \\quad r_h \\rightarrow r_h +\\delta r_h.\n\\end{array}\n\\end{equation}\nIn variational form, the updates $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$ are found by solving the Picard system (P):\n\\begin{equation} \\nonumber\n% \\label{eq:picard}\n\\begin{split}\nA(\\delta\\uu{u}_h, \\uu{v}) +\\tilde{O}(\\uu{u}_h;\\delta\\uu{u}_h,\\uu{v})+ C(\\uu{b}_h;\\uu{v},\\delta \\uu{b}_h) + B(\\uu{v}, \\delta p_h) & = R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v}),\\\\[.1cm]\nB(\\delta\\uu{u}_h,q)&= R_p(\\uu{u}_h;q), \\\\[.1cm]\nM(\\delta \\uu{b}_h,\\uu{c})+\nD(\\uu{c},\\delta r_h)-C(\\uu{b}_h;\\delta \\uu{u}_h,\\uu{c})&= R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c}),\\\\[.1cm]\nD(\\delta \\uu{b}_h,s)&= R_r(\\uu{b}_h;s),\n\\end{split}\n\\end{equation}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$. Note that this system is linearised around $(\\uu{u}_h,\\uu{b}_h)$. 
The right-hand side linear forms correspond to the residual at the current iteration $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ defined by:\n\\begin{align*}\n R_u(\\uu{u}_h,\\uu{b}_h,p_h;\\uu{v})&=(\\uu{f}, \\uu{v})_\\Omega-A(\\uu{u}_h,\\uu{v})\n-  \\tilde{O}(\\uu{u}_h;\\uu{u}_h,\\uu{v}) \\\\  & \\hspace{4.2mm}- C(\\uu{b}_h;\\uu{v},\\uu{b}_h)-B(\\uu{v},p_h),\\\\[.1cm]\nR_p(\\uu{u}_h;q)&=-B(\\uu{u}_h,q),\\\\[.1cm]\n R_b(\\uu{u}_h,\\uu{b}_h,r_h;\\uu{c})&=(\\uu{g},\\uu{c})_\\Omega -M(\\uu{b}_h,\\uu{c})\n+ C(\\uu{b}_h;\\uu{u}_h,\\uu{c})-D(\\uu{c},r_h),\\\\[.1cm]\nR_r(\\uu{b}_h;s)&=-D(\\uu{b}_h,s),\n\\end{align*}\nfor all $(\\uu{v},q,\\uu{c},s)\\in \\uu{V}_h\\times Q_h \\times \\uu{C}_h\\times S_h$.\n\nIn \\cite{schotzau2004mixed} it is shown that for small data the Picard iteration (P) will converge to the exact solution for any initial guess.\n\nTo formulate the variational form of the Picard iteration (P) in matrix form, let $({u},p,{b},r)$ be the coefficient vectors associated with $(\\uu{u}_h,p_h,\\uu{b}_h,r_h)$ and $(\\delta{u},\\delta p,\\delta{b},\\delta r)$ be the coefficient vectors of $(\\delta \\uu{u}_h,\\delta p_h,\\delta \\uu{b}_h,\\delta r_h)$. Then it can readily be seen that the Picard iteration (P) amounts to solving the matrix system\n\\begin{equation}\n\\label{eq:mhd_saddle}\n%\\mathcal{K} x \\equiv\n\\left(\n\\begin{array}{cccc}\nA+O(u) & B^T & C(b)^T & 0\\\\\nB & 0 & 0 & 0 \\\\\n-C(b) & 0 & M & D^T\\\\\n0 & 0 & D & 0\n\\end{array}\n\\right)\n\\,\n\\left(\n\\begin{array}{c}\n\\delta u\\\\\n\\delta p\\\\\n\\delta b\\\\\n\\delta r\n\\end{array}\n\\right)  =\n\\begin{pmatrix}\nr_u \\\\\nr_p\\\\\nr_b\\\\\nr_r\n\\end{pmatrix},\n\\end{equation}\nwith\n\\begin{equation} \\label{eq:rhsupdate}\n\\begin{array}{rl}\nr_u &= f- Au -O(u) u - C(b)^T b- B^T p,\\\\\nr_p &=-B u,\\\\\nr_b &=g-Mb+C(b)u-D^T r,\\\\\nr_r &=-D b.\n\\end{array}\n\\end{equation}\nAt each non-linear iteration, the right-hand side vectors and matrices $O(u)$ and $C(b)$ in \\eqref{eq:rhsupdate} and \\eqref{eq:mhd_saddle} respectively must be assembled with the solution coefficient vectors $({u},p,{b},r)$ of the current iterate. Here, the matrix $A$  is symmetric positive definite (SPD), $O(u)$ is non-symmetric and $-C(b)$, $C(b)^T$ appear in a skew-symmetric fashion. We also note that $M$ is symmetric positive semidefinite (SPSD) with nullity $m_b$ corresponding to the dimension of the scalar space $S_h$ giving rise to the discrete gradients, see \\cite{greif2007preconditioners}.
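\n\nTo summarise the procedure, the following Python sketch performs a single Picard update by assembling and solving \\eqref{eq:mhd_saddle} with SciPy. It is only a sketch: the routines \\texttt{assemble\\_O} and \\texttt{assemble\\_C} (which rebuild the solution-dependent blocks $O(u)$ and $C(b)$) are hypothetical placeholders, the fixed matrices are assumed given as SciPy sparse matrices, and the sparse direct solve stands in for whatever preconditioned iterative solver one would use in practice. One would call it repeatedly until the residual norm falls below a tolerance.\n\n\\begin{verbatim}\nimport numpy as np\nimport scipy.sparse as sp\nimport scipy.sparse.linalg as spla\n\ndef picard_step(A, B, M, D, f, g, assemble_O, assemble_C, u, p, b, r):\n    """One Picard update for the coefficient vectors (u, p, b, r)."""\n    O, C = assemble_O(u), assemble_C(b)  # hypothetical assembly routines\n    # Residuals of the current iterate, cf. the right-hand sides above.\n    r_u = f - A @ u - O @ u - C.T @ b - B.T @ p\n    r_p = -B @ u\n    r_b = g - M @ b + C @ u - D.T @ r\n    r_r = -D @ b\n    # Coefficient matrix of the Picard system; None blocks are zero.\n    K = sp.bmat([[A + O, B.T,  C.T,  None],\n                 [B,     None, None, None],\n                 [-C,    None, M,    D.T ],\n                 [None,  None, D,    None]], format='csr')\n    delta = spla.spsolve(K, np.concatenate([r_u, r_p, r_b, r_r]))\n    n_u, m_u, n_b = len(u), len(p), len(b)\n    return (u + delta[:n_u],\n            p + delta[n_u:n_u + m_u],\n            b + delta[n_u + m_u:n_u + m_u + n_b],\n            r + delta[n_u + m_u + n_b:])\n\\end{verbatim}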
\section{Decoupled iterations}\n\label{sec:FEMdecouple}\n\n\nThe full MHD system \eqref{eq:mhd}, \eqref{eq:bc} is a coupled system consisting of the incompressible Navier-Stokes and Maxwell's equations, linked through the non-linear skew-symmetric coupling term $C(b)$. In addition, the convection term $O(u)$ is non-linear as well. These two terms make the numerical solution challenging. Therefore, if one or both of these terms is small then it may be possible to iterate explicitly. In particular, if the coupling term $C(b)$ is small then we may completely decouple the system into an incompressible Navier-Stokes problem and a Maxwell problem. The two resulting decoupling schemes are what we call Magnetic and Complete Decoupling; both are described below. Note that unlike the Picard iteration, there is no small data guarantee that iterations based on these decoupling schemes will converge, although we observe convergence for reasonable values of the non-dimensional parameters.\n\n\n\subsection{Magnetic Decoupling (MD)}\n\label{sec:FEMmd}\n\nConsider first the situation where there is weak coupling within the system, that is, when $C(b)$ is small. Then it may be possible to drop the coupling terms to completely decouple the system into the two subproblems, the incompressible Navier-Stokes and Maxwell's equations. We will call this approach Magnetic Decoupling (MD).\n% For a given solution $(\uu{u}_h,p_h,\uu{b}_h,r_h)$, neglecting the coupling terms in \eqref{eq:picard} results in solving for the updates $(\delta \uu{u}_h,\delta p_h,\delta \uu{b}_h,\delta r_h) \in \uu{V}_h \times Q_h \times \uu{C}_h \times S_h$  such that\n% \begin{equation}\n% \label{eq:picard_explicit_MD}\n% \begin{split}\n% A(\delta\uu{u}_h, \uu{v}) +O(\uu{u};\delta\uu{u}_h,\uu{v})+ B(\uu{v}, \delta p_h) & = R_u(\uu{u}_h,\uu{b}_h,p_h;\uu{v})\\[.1cm]\n% B(\delta\uu{u}_h,q)&= R_p(\uu{u}_h;q), \\[.1cm]\n% M(\delta \uu{b}_h,\uu{c})+\n% D(\uu{c},\delta r_h)&= R_b(\uu{u}_h,\uu{b}_h,r_h;\uu{c}),\\[.1cm]\n% D(\delta \uu{b}_h,s)&=R_r(\uu{b}_h;s),\n% \end{split}\n% \end{equation}\n% where again $(\uu{v},q,\uu{c},s)\in\uu{V}_h\times Q_h\times\uu{C}_h\times S_h$ and $R_u$, $R_p$, $R_b$ and $R_r$ which are defined in section \ref{sec:nonlinear}. Again, let $({u},p,{b},r)$ be the coefficient vectors of $(\uu{u}_h,p_h,\uu{b}_h,r_h)$ and $(\delta{u},\delta p,\delta{b},\delta r)$ be the coefficient vectors of $(\delta \uu{u}_h,\delta p_h,\delta \uu{b}_h,\delta r_h)$, then this amounts to solving the linear system:\nThe system \eqref{eq:mhd_saddle} then simplifies to\n\begin{equation}\n\label{eq:matrix_MD}\n%\mathcal{K} x \equiv\n\left(\n\begin{array}{cccc}\nA+O(u) & B^T & 0 & 0\\\nB & 0 & 0 & 0 \\\n0 & 0 & M & D^T\\\n0 & 0 & D & 0\n\end{array}\n\right)\n\,\n\left(\n\begin{array}{c}\n\delta u\\\n\delta p\\\n\delta b\\\n\delta r\n\end{array}\n\right)  =\n\begin{pmatrix}\nr_u \\\nr_p\\\nr_b\\\nr_r\n\end{pmatrix},\n\end{equation}\nwith\n\begin{align*}\nr_u &= f- Au -O(u)u - C(b)^T b- B^T p,\\[0.1cm]\nr_p &=-B u,\\[0.1cm]\nr_b &=g-Mb+C(b)u-D^T r,\\[0.1cm]\nr_r &=-D b.\n\end{align*}\nNote that the residuals on the right-hand side are unchanged; only the iteration matrix is simplified. We iterate in the same fashion as the Picard iteration with the simpler matrix \eqref{eq:matrix_MD}. From \eqref{eq:matrix_MD} we can see that the system is now completely decoupled. This enables us to solve each individual subproblem separately and possibly in parallel, as the sketch below illustrates.
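The following sketch (with the same illustrative names and assumptions as the Picard sketch above) makes the decoupling explicit: the two saddle-point subsystems share no unknowns and the two solves are independent.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def md_step(u, p, b, r, A, B, M, D, O, r_u, r_p, r_b, r_r):
    """One Magnetic Decoupling update. The residual vectors r_u..r_r are
    the full residuals of (eq:rhsupdate), coupling terms included; only
    the iteration matrix is decoupled, so the two solves are independent
    and could be issued in parallel."""
    K_ns = sp.bmat([[A + O, B.T], [B, None]], format="csr")  # fluid block
    K_mx = sp.bmat([[M, D.T], [D, None]], format="csr")      # Maxwell block
    d_up = spla.spsolve(K_ns, np.concatenate([r_u, r_p]))
    d_br = spla.spsolve(K_mx, np.concatenate([r_b, r_r]))
    n_u, n_b = len(u), len(b)
    return (u + d_up[:n_u], p + d_up[n_u:],
            b + d_br[:n_b], r + d_br[n_b:])
\end{verbatim}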
\subsection{Complete Decoupling (CD)}\n\label{sec:FEMcd}\n\nFor the second decoupling scheme, we again assume weak coupling of the system, but we additionally assume that the fluid equations are diffusion-dominated, so that the convection terms can be excluded as well.\n% This is the simplest technique as it removes all non-linear terms. Again, for a given solution $(\uu{u}_h,p_h,\uu{b}_h,r_h)$, removing the coupling and convection terms in \eqref{eq:picard} results in solving for the updates $(\delta \uu{u}_h,\delta p_h,\delta \uu{b}_h,\delta r_h) \in \uu{V}_h \times Q_h \times \uu{C}_h \times S_h$  such that\n% \begin{equation}\n% \label{eq:picard_explicit_CD}\n% \begin{split}\n% A_h(\delta\uu{u}_h, \uu{v}) + B(\uu{v}, \delta p_h) & = R_u(\uu{u}_h,\uu{b}_h,p_h;\uu{v})\\[.1cm]\n% B(\delta\uu{u}_h,q)&= R_p(\uu{u}_h;q), \\[.1cm]\n% M(\delta \uu{b}_h,\uu{c})+\n% D(\uu{c},\delta r_h)&= R_b(\uu{u}_h,\uu{b}_h,r_h;\uu{c}),\\[.1cm]\n% D(\delta \uu{b}_h,s)&=R_r(\uu{b}_h;s),\n% \end{split}\n% \end{equation}\n% where $(\uu{v},q,\uu{c},s)\in\uu{V}_h\times Q_h\times\uu{C}_h\times S_h$.  Taking $({u},p,{b},r)$ as the coefficient vectors of $(\uu{u}_h,p_h,\uu{b}_h,r_h)$ and $(\delta{u},\delta p,\delta{b},\delta r)$ be the coefficient vectors of $(\delta \uu{u}_h,\delta p_h,\delta \uu{b}_h,\delta r_h)$, then the proposed decoupled linear system is\nThis amounts to solving the system\n\begin{equation}\n\label{eq:matrix_CD}\n%\mathcal{K} x \equiv\n\left(\n\begin{array}{cccc}\nA & B^T & 0 & 0\\\nB & 0 & 0 & 0 \\\n0 & 0 & M & D^T\\\n0 & 0 & D & 0\n\end{array}\n\right)\n\,\n\left(\n\begin{array}{c}\n\delta u\\\n\delta p\\\n\delta b\\\n\delta r\n\end{array}\n\right)  =\n\begin{pmatrix}\nr_u \\\nr_p\\\nr_b\\\nr_r\n\end{pmatrix},\n\end{equation}\nwith\n\begin{align*}\nr_u &= f- Au -O(u)u - C(b)^T b- B^T p,\\[0.1cm]\nr_p &=-B u,\\[0.1cm]\nr_b &=g-Mb+C(b)u-D^T r,\\[0.1cm]\nr_r &=-D b.\n\end{align*}\nAgain, we perform iterations in the same fashion as the Picard iteration. This is the simplest technique as it removes all non-linear terms from the iteration matrix, and hence leaves the linear Stokes problem in the upper $(1,1)$ block.\n
\section{Inhomogeneous Dirichlet boundary conditions and initial guess}\n\label{sec:bcig}\n\n% In this section, we described the formulation of the MHD system \eqref{eq:mhd}, \eqref{eq:homogeneousBC}, that is with homogeneous Dirichlet boundary conditions. In general, the problems we numerically test have inhomogeneous Dirichlet boundary conditions \eqref{eq:bc}.\n\nWhen considering inhomogeneous Dirichlet boundary conditions as in \eqref{eq:bc}, we still solve \eqref{eq:mhd_saddle}, \eqref{eq:matrix_MD} and \eqref{eq:matrix_CD} for the solution updates with homogeneous Dirichlet boundary conditions. Therefore, in this approach we must incorporate the inhomogeneous Dirichlet boundary conditions only within the initial guess.\n\n% To start the non-linear iteration schemes given in Section \ref{sec:FEMdecouple} we require an initial guess.\n\nTo form a suitable initial guess, we solve the decoupled Stokes problem with the inhomogeneous boundary condition (\ref{eq:bc}a):\n\begin{equation} \label{eq:StokesInitial}\n\left(\n\begin{array}{cc}\nA & B^T \\\nB & 0 \\\n\end{array}\n\right)\n\left(\n\begin{array}{c}\nu \\\np \\\n\end{array}\n\right)=\left(\n\begin{array}{c}\nf \\\n0 \\\n\end{array}\n\right),\n\end{equation}\nand then the non-symmetric Maxwell problem with the inhomogeneous boundary conditions (\ref{eq:bc}b), (\ref{eq:bc}c):\n\begin{equation} \label{eq:MaxwellInitial}\n\left(\n\begin{array}{cc}\nM -C(u) & D^T \\\nD & 0 \\\n\end{array}\n\right)\n\left(\n\begin{array}{c}\nb \\\nr \\\n\end{array}\n\right)=\left(\n\begin{array}{c}\ng \\\n0 \\\n\end{array}\n\right).\n\end{equation}\nHere the term $C(u)$ corresponds to the coupling term evaluated with $u$, the initial guess for the velocity field. We expect the inclusion of the coupling term to increase the accuracy of the initial guess because additional information about the problem is used.\n\n% When iteratively solving these two systems, the convergence tolerance for the initial guess is important.  Since the updates will have homogeneous boundary conditions then the initial guess needs to incorporate the inhomogeneous boundary conditions. Consider for example the fluid equations: if the discrete Stokes problem is solved approximately to some tolerance, then we approximately solve the boundary equations of the form:\n% $$1\times u_B = u_D,$$\n% where $u_B$ is the coefficients of the solution on the boundary and $u_D$ is a sufficiently accurate boundary data interpolation. That is that the initial guess only starts with boundary values of the same order of accuracy as the initial solve.\n\n\nThe inhomogeneous Dirichlet boundary conditions are incorporated in a standard fashion by suitably modifying the matrix system. The outcome of this procedure is that the boundary data interpolation is only performed for the initial guess. Hence, the iterative solves for the initial guess must be run to a sufficient accuracy to ensure the accuracy of the discrete boundary conditions: if the Stokes and Maxwell solves are not done accurately enough, the resulting boundary errors persist through the homogeneous updates, and the error norms for the full MHD system are capped by the accuracy of these initial solves.\n
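A minimal sketch of the initial guess computation, under the same illustrative assumptions as the earlier sketches (in particular, the boundary data is assumed to have been applied to the assembled systems, and the helper assemble_Cu is hypothetical):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def initial_guess(A, B, M, D, f, g, assemble_Cu):
    """Stokes solve for (u, p), then a non-symmetric Maxwell solve for
    (b, r) that reuses the computed velocity in the coupling block C(u),
    as in (eq:StokesInitial) and (eq:MaxwellInitial)."""
    K_st = sp.bmat([[A, B.T], [B, None]], format="csr")
    up = spla.spsolve(K_st, np.concatenate([f, np.zeros(B.shape[0])]))
    u, p = up[:A.shape[0]], up[A.shape[0]:]
    Cu = assemble_Cu(u)   # coupling block built from the velocity guess
    K_mx = sp.bmat([[M - Cu, D.T], [D, None]], format="csr")
    br = spla.spsolve(K_mx, np.concatenate([g, np.zeros(D.shape[0])]))
    return u, p, br[:M.shape[0]], br[M.shape[0]:]
\end{verbatim}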
\section{Summary}\n\nIn this chapter we reviewed a mixed finite element approximation to the full MHD system given in \eqref{eq:mhd} and \eqref{eq:bc}. We followed the mixed approach outlined in \cite{schotzau2004mixed} and expressed the MHD system in the matrix form \eqref{eq:mhd_saddle}. Starting from the Picard iteration \eqref{eq:mhd_saddle}, we introduced two possible decoupling schemes, (MD) and (CD), which may be simpler to solve. The performance of the resulting three non-linear iteration schemes depends on the values of the parameters $\kappa$, $\nu$ and $\nu_m$. 
The next chapter will discuss possible preconditioning approaches for these systems.\n\nIn the sequel, we shall omit the dependence of $O(u)$ and $C(b)$ on $u$ and~$b$, respectively, and simply  write $O$ and $C$.\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "ae33229a24a0ae9540e04eaa508337bbe83c9825", "size": 31662, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MHD/THESIS/FEM/FEM.tex", "max_stars_repo_name": "wathen/PhD", "max_stars_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-25T13:30:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-10T21:27:30.000Z", "max_issues_repo_path": "MHD/THESIS/FEM/FEM.tex", "max_issues_repo_name": "wathen/PhD", "max_issues_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MHD/THESIS/FEM/FEM.tex", "max_forks_repo_name": "wathen/PhD", "max_forks_repo_head_hexsha": "35524f40028541a4d611d8c78574e4cf9ddc3278", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-28T16:12:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-13T13:59:44.000Z", "avg_line_length": 61.4796116505, "max_line_length": 934, "alphanum_fraction": 0.6724464658, "num_tokens": 10885, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5676915109491592}}
{"text": "\\chapter{Sample chapter}\n\n\\section{Math}\n\n\\subsection{Symbols}\n\n\\begin{itemize}\n  \\item Caligraphic letters: $\\mathcal{A}$ \n  \\item Mathbb letters: $\\mathbb{A}$\n  \\item Mathfrak letters: $\\mathfrak{A}$\n  \\item Math Sans serif letters: $\\mathsf{A}$\n  \\item Math bold letters: $\\mathbf{A}$\n  \\item Math bold italic letters: $\\mathbi{A}$\n\\end{itemize}\n\nYou can alias some commands to easily use these math symbols by \\lstinline|\\newcommand| or \\lstinline|\\renewcommand|.\n\n\\subsection{Equations}\n\n\\begin{equation}\n  E^2 = m^2 + p^2\\label{eq:mass-energy}\n\\end{equation}\n\nYou can use \\lstinline|\\cref{}| to automatically setup the cross reference name; instead, you can always use \\lstinline|\\ref{}| to customize the appearence of the cross reference.\n\n\\cref{eq:mass-energy} or Equation (\\ref{eq:mass-energy}) gives the mass-energy relationship.\n\n\\subsection{Theorem}\n\n\\begin{definition}\n  LCL is orange juice.\n\\end{definition}\n\n\\begin{proof}\n  They are both orange.\n\\end{proof}\n\nAvaiable theorem environments are listed below:\n\nalgorithm, assumption, axiom, conclusion, condition, corollary, definition, example, lemma, proof, property, proposition, remark, theorem.\n\n\n\\section{Figure}\n\n\\begin{figure}[H]\n  \\begin{tikzpicture}\n    \\draw (0,0) -- (1, 0) -- (1, 1) -- cycle;\n  \\end{tikzpicture}\n  \\caption{An example tikz picture with long caption: \\blindtext\\\\\\blindtext}\n  \\label{fig:tikz example}\n\\end{figure}\n\nAn example image is shown in \\cref{fig:tikz example} or Figure (\\ref{fig:tikz example}).\n\n\\section{Table}\n\n\\begin{table}[H]\n  \\caption{Test result on different platforms}\n  \\label{tab:environment}\n  \\centering\n  \\begin{tabular}{lll}\n    \\toprule\n    OS & TeX & Test \\\\\n    \\midrule\n    Windows 10    & \\hologo{TeX}\\,Live 2021    & Pass \\\\\n    Windows 10    & \\hologo{MiKTeX}            & Pass \\\\\n    Windows 10    & \\hologo{TeX}\\,Live 2020    & \\lstinline|cref| problem  \\\\\n    macOS 10.15   & \\hologo{TeX}\\,Live 2021    & Pass \\\\\n    Ubuntu 20.04  & \\hologo{TeX}\\,Live 2021    & Pass \\\\\n    Ubuntu 20.04  & \\hologo{MiKTeX}            & Pass \\\\\n    Termux        & \\hologo{TeX}\\,Live 2021    & Pass \\\\\n    Overleaf      & \\hologo{TeX}\\,Live 2021    & Pass \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\n\\section{Code}\n\n\\subsection{Inline code}\nUse \\lstinline$\\lstinline|<code>|$ to print code snippets. The \\lstinline$||$ marks delimit\nthe code and can be replaced by any character not in the code;\n\\textit{e.g.}   \\lstinline|\\lstinline$<code>$| gives the same result.\n\n\\subsection{Code environment}\nThe code to draw the \\cref{fig:tikz example} is listed below:\n\\begin{lstlisting}[caption={\\hologo{LaTeX} code for inserting a figure}]\n\\begin{figure}[htb]\n  \\begin{tikzpicture}\n    \\draw (0,0) -- (1, 0) -- (1, 1) -- cycle;\n  \\end{tikzpicture}\n  \\caption{An example picture with long caption: \\blindtext\\\\\\blindtext}\n  \\label{fig:tikz example} % this is a comment\n\\end{figure}\n\\end{lstlisting}\n\n\\section{Manage citations}\n\\label{chap:bibliography}\n\nUse \\lstinline|biber| as \\hologo{BibTeX} backend.\n\n\\subsection{Add and manage}\n\nEntries are stored in \\lstinline|mythesis.bib|. 
For other sources, modify the following commands in \\lstinline|mythesis.tex|:\n\\begin{lstlisting}[language=TeX]\n\\addbibresource{mythesis.bib}\n\\end{lstlisting}\n\n\\subsection{Citation style}\n\nAs described in the sample page from the ECE department, the style is set to \\lstinline|ieee|. You can modify the style in the \\lstinline|hkustthesis.cls| file as you wish.\n", "meta": {"hexsha": "462d5d7b0e359719da965d121f0af923f4e81f73", "size": 3438, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/chapter_XXX.tex", "max_stars_repo_name": "zybbigpy/hkust-thesis", "max_stars_repo_head_hexsha": "2e1716b60e5e96f6420d006c576cddaaea849c3c", "max_stars_repo_licenses": ["LPPL-1.3c"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/chapter_XXX.tex", "max_issues_repo_name": "zybbigpy/hkust-thesis", "max_issues_repo_head_hexsha": "2e1716b60e5e96f6420d006c576cddaaea849c3c", "max_issues_repo_licenses": ["LPPL-1.3c"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/chapter_XXX.tex", "max_forks_repo_name": "zybbigpy/hkust-thesis", "max_forks_repo_head_hexsha": "2e1716b60e5e96f6420d006c576cddaaea849c3c", "max_forks_repo_licenses": ["LPPL-1.3c"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.6964285714, "max_line_length": 179, "alphanum_fraction": 0.6969168121, "num_tokens": 1077, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300698514778, "lm_q2_score": 0.808067204308405, "lm_q1q2_score": 0.5676915094874722}}
{"text": "\\section{Algorithms}\n\\subsection{Mo.cc}\n\\lstinputlisting[style=cpp]{../codebook/algorithms/Mo.cc}\n\\section{Dynamic Programming}\n\\subsection{Convex\\_Hull\\_Trick.cc}\n\\lstinputlisting[style=cpp]{../codebook/dp/Convex_Hull_Trick.cc}\n\\subsection{Convex\\_Hull\\_Trick\\_Dynamic.cc}\n\\lstinputlisting[style=cpp]{../codebook/dp/Convex_Hull_Trick_Dynamic.cc}\n\\subsection{Divide\\_And\\_Conquer.cc}\n\\lstinputlisting[style=cpp]{../codebook/dp/Divide_And_Conquer.cc}\n\\subsection{Knuth.cc}\n\\lstinputlisting[style=cpp]{../codebook/dp/Knuth.cc}\n\\section{Data Structures}\n\\subsection{BIT.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/BIT.cc}\n\\subsection{BIT\\_Range.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/BIT_Range.cc}\n\\subsection{Treap.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Treap.cc}\n\\subsection{Treap\\_Implicit.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Treap_Implicit.cc}\n\\subsection{Splay.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Splay.cc}\n\\subsection{Splay\\_Implicit.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Splay_Implicit.cc}\n\\subsection{Link\\_Cut\\_Tree.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Link_Cut_Tree.cc}\n\\subsection{Persistent\\_Segment\\_Tree.cc}\n\\lstinputlisting[style=cpp]{../codebook/datastructures/Persistent_Segment_Tree.cc}\n\\section{Geometry}\n\\subsection{Convex\\_Hull.cc}\n\\lstinputlisting[style=cpp]{../codebook/geometry/Convex_Hull.cc}\n\\subsection{Delaunay.cc}\n\\lstinputlisting[style=cpp]{../codebook/geometry/Delaunay.cc}\n\\section{Graph Theory}\n\\subsection{Eulerian.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Eulerian.cc}\n\\subsection{SCC.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/SCC.cc}\n\\subsection{Biconnected\\_Components.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Biconnected_Components.cc}\n\\subsection{Max\\_Flow.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Max_Flow.cc}\n\\subsection{Max\\_Flow\\_Min\\_Cost.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Max_Flow_Min_Cost.cc}\n\\subsection{Max\\_Matching.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Max_Matching.cc}\n\\subsection{Min\\_Cut.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/Min_Cut.cc}\n\\subsection{LCA.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/LCA.cc}\n\\subsection{HLD.cc}\n\\lstinputlisting[style=cpp]{../codebook/graph/HLD.cc}\n\\section{Mathematics}\n\\subsection{General.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/General.cc}\n\\subsection{Miller\\_Rabin.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/Miller_Rabin.cc}\n\\subsection{Euclid.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/Euclid.cc}\n\\subsection{Combinatorics.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/Combinatorics.cc}\n\\subsection{Gauss\\_Jordon.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/Gauss_Jordon.cc}\n\\subsection{Matrix.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/Matrix.cc}\n\\subsection{FFT.cc}\n\\lstinputlisting[style=cpp]{../codebook/math/FFT.cc}\n\\section{String}\n\\subsection{Manacher's.cc}\n\\lstinputlisting[style=cpp]{../codebook/string/Manacher's.cc}\n\\subsection{KMP.cc}\n\\lstinputlisting[style=cpp]{../codebook/string/KMP.cc}\n\\subsection{Rabin\\_Karp.cc}\n\\lstinputlisting[style=cpp]{../codebook/string/Rabin_Karp.cc}\n\\subsection{Z\\_Algorithm.cc}\n\\lstinputlisting[style=cpp]{../codebook/string/Z_Algorithm.cc}\n\\subsection{Suffix\\_Array.cc}\n\\ls
tinputlisting[style=cpp]{../codebook/string/Suffix_Array.cc}\n\\subsection{Suffix\\_Tree.cc}\n\\lstinputlisting[style=cpp]{../codebook/string/Suffix_Tree.cc}\n\\section{Misc}\n\\subsection{.vimrc}\n\\lstinputlisting[style=txt]{../codebook/misc/.vimrc}\n", "meta": {"hexsha": "31cb3e9f89575bfa3ec9204705ee75b991867c97", "size": 3602, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notebook/content.tex", "max_stars_repo_name": "jeffrey-xiao/acm-notebook", "max_stars_repo_head_hexsha": "b8588429f2cb727f35e346e649e544bb1e67badd", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-19T14:02:54.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-19T14:02:54.000Z", "max_issues_repo_path": "notebook/content.tex", "max_issues_repo_name": "jeffrey-xiao/acm-notebook", "max_issues_repo_head_hexsha": "b8588429f2cb727f35e346e649e544bb1e67badd", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook/content.tex", "max_forks_repo_name": "jeffrey-xiao/acm-notebook", "max_forks_repo_head_hexsha": "b8588429f2cb727f35e346e649e544bb1e67badd", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-11-28T07:23:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-01T07:46:23.000Z", "avg_line_length": 42.3764705882, "max_line_length": 82, "alphanum_fraction": 0.7920599667, "num_tokens": 1099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5676540862792282}}
{"text": "\\chapter{Neural Networks}\nThis chapter will focus on neural networks and how they were utilized for parts of this\nreport. First, some theoretical background surrounding neural networks in general, and then\nmore specifically, convolutional neural networks, is given in sections \\ref{sec:NNwork}\nand \\ref{sec:CNN}.\nAfterwards, the chapter goes more into how neural networks were tried and used during this project. Data collection and augmentation is described in sections \\ref{sec:NNdata} and \\ref{sec:NNaugment}. Further sections, \\ref{sec:NNclassification} \\& \\ref{sec:NNtransfer}, describe how this data was used to train neural networks, with the latter section explaining an alternative to training a model from scratch. \n\\section{How neural networks work}\n\\label{sec:NNwork}\nThere exists many different machine learning techniques out there today. Due to its high \neffectiveness and relevance, for this report we are going to focus on the highly popular \nmethod of artificial neural networks.\nA variant, convolutional neural networks, is a proven method for working well with \nimages and is therefore highly relevant for this project.\n\n\\subsection{Artificial neuron}\nAn artificial neuron is the simplest form of the neural network. It has a set of inputs and an output.\nThe artificial neuron first sums up all the input values, $x$, multiplied with the weight value, $w$.\nAfter that it passes that sum through an activation function. This activation function can be everything from a simple $f(x)=x$ to the more complex sigmoid function, depending on the need. More on this in section \\ref{subsec:activationfunctions}.\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.75\\textwidth]{./Images/perceptron.jpg} \n\\caption{An illustration of an artificial neuron, the simplest version of a neural network.}\n\\end{center}\n\\end{figure}\n\nA bias also exists in every node which is not based on any input. The bias function in the artificial neuron is similar to what the $m$ in $y = kx + m$ does. It gives the function the ability to move up and down in the graph for more possibilities of splitting the data set. The bias is usually disregarded when illustrating the artificial neuron.\n\nThe artificial neuron only has the ability to draw a single line and thus is only able to split simple data sets.\n\nThe output of an artificial neuron is described by the following formula:\n\n\\[ y = b + \\displaystyle\\sum_{i=1}^{n} x_i \\cdot w_i \\]\n\n\\[y=f(b+\\sum_{i=1}^n x_i\\cdot w_i)\\]\n\nwhere $b$ is the bias value, $x$ is the input, $w$ is the weight for that input, $n$ is the number of inputs and $f(*)$ is the activation function.\n\n\\subsection{Activation functions}\n\\label{subsec:activationfunctions}\nThe activation function,  $ \\phi (v_{i}) $ , takes the sum of all the inputs from a node as input and passes them through a function before giving an output. This is beneficial when for instance the output should be kept in a range between 0 and 1, or perhaps when negative values don't make sense.\nUsually, activation functions are attributed to layers instead of individual nodes.\n\nA few commonly used functions are \\textbf{ReLu}, \\textbf{Tanh}, \\textbf{Sigmoid} and \\textbf{Softmax}.\n\n\\textbf{ReLu} is described as\n\\[f(x) = max(0, x)\\]\nand is a good option when negative values should be ignored or don't make sense. 
\nIt is also a good choice for keeping the network from becoming computationally heavy,\nsince it simply passes the input value through while setting all negative values to zero.\nThat makes it a good choice in larger networks with many neurons, which can otherwise become\nslow.\n\n\\textbf{Tanh} is the hyperbolic tangent function and squashes its input to values between -1 and 1, saturating towards $\\pm 1$ somewhat like a binary operator. Unlike a binary operator, tanh's derivative is always defined, which makes backpropagation possible.\nBackpropagation is described in section \\ref{subSec:optimizers}.\n\n\\textbf{Sigmoid} is described as \n\\[\\frac{e^x}{1+e^x}\\]\nand works like the Tanh function but keeps the values between 0 and 1 and sets $y(0) = 0.5$ instead.\n\n\\textbf{Softmax} is a little bit more complex. It takes a vector $v$ of dimension $n$ and turns it into a vector $\\sigma(v)$ of the same dimension where $\\displaystyle\\sum_{i=1}^{n} \\sigma_i(v) = 1 $ and each element value is between zero and one.\n\nEach element in the array is described as below\n\\[ \\sigma_i(v) = \\frac{e^{v_i}}{\\displaystyle\\sum_{j=1}^{n} e^{v_j}} \\]\n\nThis output is good for describing the probability for each element to be correct and is therefore commonly used\nin the output layer.\n
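The functions above are short enough to write out directly; the NumPy sketch below is purely illustrative. The stable softmax subtracts the maximum element first, which does not change the result but avoids overflow, and the neuron function mirrors the output formula from the previous subsection.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(0.0, x)       # max(0, x), elementwise

def tanh(x):
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x)) # equivalent to e^x / (1 + e^x)

def softmax(v):
    e = np.exp(v - np.max(v))       # shift for numerical stability
    return e / e.sum()

def neuron(x, w, b, f=relu):
    # y = f(b + sum_i x_i * w_i)
    return f(b + np.dot(w, x))
\end{verbatim}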
\\subsection{Calculating the loss}\nWhen training the model, it will at first make a guess at what the right answer or value is, based on the random initial weight values. In the beginning, the model is usually mostly wrong and it is then important to know how wrong it was and in what direction it should go.\n\nFor this, the model uses a loss function to calculate that error. For different applications the loss function can be different.\nOne way to calculate the error, $E$, is to simply take the predicted value, $y_p$, and subtract the correct value, $y_c$, from it.\n\n\\[E = y_p - y_c\\]\n\nSometimes the sign of the error is not important. Then the absolute value can \nbe calculated instead. $E = |y_p - y_c| $\n\nAnother approach is the \\textbf{mean squared error}, which squares the difference and takes the mean value over a few predictions, where $n$ is the number of predictions.\n\\[E(n) = \\frac{1}{n} \\cdot \\displaystyle\\sum_{}^{n} (y_p - y_c)^{2} \\]\n\n\\subsection{Optimizers}\n\\label{subSec:optimizers}\nOnce the error has been calculated, the network will make changes to itself to improve the performance by decreasing the error value. This is called backpropagation.\nThe principle behind backpropagation is to go backwards in the network from the output node and change the weight values, $ \\omega $.\nThe optimizers decide how the weights should change. \\\\\n\nA popular method for minimizing the error is to use \\textbf{Gradient Descent}, which works\nby moving, step by step, in the direction of the negative gradient, which decreases the error.\nThe direction is determined by taking the derivative of the error function.\nThe new weight value is calculated accordingly, where $\\eta$ is a chosen parameter called the learning rate.\n\n\\[ \\Delta \\omega_{ik} = -\\eta \\frac{\\delta E}{\\delta \\omega_{ik}} \\]\n\n\\begin{center}\nwhere the error function is\n\\end{center}\n\n\\[E = \\frac{1}{N} \\displaystyle\\sum_{i=1}^{N} E(n) \\]\n\n\\begin{center}\nwhere N is the total number of neurons. This leads to\n\\end{center}\n\n\\[ \\hat{\\Delta\\omega_{ik}} = \\frac{1}{N}  \\displaystyle\\sum_{i=1}^{N} \\Delta\\omega_{ik}(n) \\]\n\n$ E(n) $ can be, for example, the mean squared error mentioned above.\nThe most common form of gradient descent is called \\textbf{Stochastic gradient descent}, SGD.\nThe difference is that SGD only evaluates on a small number of nodes/patterns, $P$, and updates all the weights from that observation.\n\n\\[ \\hat{\\Delta\\omega_{ik}} = \\frac{1}{P}  \\displaystyle\\sum_{i=1}^{P} \\Delta\\omega_{ik}(p) \\]\n\n\\textbf{Adam}\\cite{adamoptimizer}, short for Adaptive moment estimation, is another optimizer which is quite popular.\nThe reason is because it has been proven to be a very effective algorithm in many different use cases.\n\nAdam is a kind of combination of \\textbf{RMSPROP}\\cite{rmsprop} and SGD with momentum.\nWithout going into too much detail, SGD with momentum keeps track of which direction the improvement is in and keeps the improvement going that same way. The optimisation goes faster when the previous direction is the same and slower when they are different. Almost like a rolling ball that gains speed as it is rolling downhill.\nRMSPROP does something similar and uses a running average for each weight, where the previous values' importance can be controlled with a parameter. It also only uses the sign of the direction and not its value. RMSPROP also adapts each weight individually.\n\nAdam keeps a running average of both the past gradients and the squared past gradients.\nThe full mathematical description of Adam is given by the formula below, where\n$t$ is a time iteration index.\n\n\\[ \\omega_i (t+1) = \\omega_i(t) - \\eta \\frac{m_i}{\\sqrt{v_i} + \\epsilon} \\]\n\nwhere\n\n\\[ m_i (t+1) = \\beta_1m_i(t) + (1 - \\beta_1) \\frac{ \\delta E(t) }{\\delta\\omega_i} \\]\n\n\\[ v_i (t+1) = \\beta_2v_i(t) + (1 - \\beta_2) (\\frac{ \\delta E(t) }{\\delta\\omega_i})^2 \\]\n\nand $ \\eta, \\beta_1 $ and $ \\beta_2 $ are adjustable parameters and $ \\epsilon $ a small value to keep the equation from dividing by zero.\\\\\n\nThere are a lot of other optimizers such as \\textbf{Adagrad}, \\textbf{Adadelta}, \n\\textbf{Nadam} \\cite{optimizers}. They all have their own advantages and use cases.\nWe won't go into further details about them.\n
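To make the update rules concrete, a minimal NumPy sketch of a plain gradient descent step and the Adam step is given below. It mirrors the simplified formulas above (in particular, it omits the bias-correction terms used in the original Adam paper) and is illustrative rather than production code.
\begin{verbatim}
import numpy as np

def sgd_step(w, grad, eta=0.01):
    # Step against the gradient of the error, scaled by the learning rate.
    return w - eta * grad

def adam_step(w, grad, m, v, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Running averages of the gradient and the squared gradient,
    # matching the update formulas above (without bias correction).
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad**2
    w = w - eta * m / (np.sqrt(v) + eps)
    return w, m, v
\end{verbatim}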
\\subsection{Layers}\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 1.0\\textwidth]{./Images/fully_connected.jpg} \n\\caption{Two fully connected layers. One with 3 neurons and one with 2 neurons.}\n\\label{fig:layers}\n\\end{center}\n\\end{figure}\n\nWith more layers and more neurons, the network's number of parameters and its complexity begin to grow, as do the training time, the size of the model and the cost of making predictions. \nAs a result, these networks are capable of describing much more complex data sets, but the risk of overfitting, that is, describing the training data set so well that new data is not recognized, becomes much greater.\nThat is why a complex network is not always wanted.\nIn figure \\ref{fig:layers} an example is given of how a network with many layers might look.\n\n\\subsection{Overfitting}\n\\label{overfitting}\nWhen training a neural net, one needs to be careful not to train the model too much or overfitting is likely to happen.\nOverfitting is when a model gets really good at predicting the data that it is training on but fails to predict accurately on new data.\nThis is because it starts to pick up too much on detail or in some instances even noise.\nFor this reason it fails to pick up the general trends, which are more valuable.\nAn illustration is given in figure \\ref{fig:overfitting}.\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 1.0\\textwidth]{./Images/overfitting.jpg} \n\\caption{Overfitting of a dataset. On the left is a generalized trained model and on the right is an overtrained model. Image taken from OReilly.com \\cite{overfitting}.}\n\\label{fig:overfitting}\n\\end{center}\n\\end{figure}\n\nThere are a few methods that can help handle the problem of overfitting. They usually involve trying to limit the size of a small number of weights.\nThe hypothesis behind this is that if a weight is much larger than the rest it also has much more influence over the final prediction. That way, a small detail in the data can have much more influence than the general trend.\n\nOne way to do this is to cut random connections between layers during epochs, usually by specifying a certain fraction of connections to cut.\nThis way, the model does not rely on a small number of nodes to make the correct predictions. This method is called \\textbf{dropout} \\cite{dropout}.\n\nAnother way is to introduce random noise on a layer during training. This works because if a node with a large weight receives noise, the noise will be heavily amplified and probably give a false prediction.\nWhen doing this, Gaussian noise is usually implemented, which is basically random noise with a Gaussian distribution.\n\nTo force the model to keep weights small, one way is to add a penalty to the loss for every weight based on its size. \nThis is called weight regularisation; it is widely used and exists in two forms, L1 and L2.\nL1 simply adds each weight's size multiplied by an L1 coefficient that is chosen by the designer.\nThe other, L2, does the same but squares that value first. This has the effect of making values over 1 even bigger and values less than 1 smaller.\nIn that way, it doesn't affect the loss as much as L1 when the model is not overtrained. L2 is also called weight decay and is the more common method of the two.\n\nAnother way to keep the values small is to normalize the input to every layer and set the mean to 0 and variance to 1. In Tensorflow, this can be done with the BatchNormalization layer \\cite{batchnormalization}.\n\nOne important thing to point out is that all the methods mentioned so far are only active during training and do nothing when making real predictions.\n\nIf lots of training data exists, the ensemble technique could be a good way to go. This technique divides the training data into smaller sets and trains a model for every data set.\nThe final prediction is then an average of the predictions of all the models. 
This technique works because if one model is highly overtrained in a certain direction, chances are that its result will be drowned out by the many other models.\n\nIt is kind of like when someone is off pitch in a big choir: if there are many other singers, the off-pitch voice will not be noticed very much.\n\nThe final technique is called early stopping and is based on validating the model after every \nepoch and stopping the training when the validation loss, or some other criterion, is not \nimproving anymore. A graph showing when to do early stopping is given in figure \\ref{fig:earlystopping}.\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.75\\textwidth]{./Images/early_stop.jpg} \n\\caption{A graph showing when to do an early stop. The graph shows the total loss over trained epochs. Notice that the validation loss starts to increase again somewhere after epoch 20.}\n\\label{fig:earlystopping}\n\\end{center}\n\\end{figure}\n
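Several of these countermeasures can be combined in a few lines of Keras. The sketch below is illustrative only (layer sizes and hyper-parameters are made up for the example); it shows dropout, L2 weight regularisation, batch normalization and an early stopping callback that keeps the best model.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 / weight decay
    layers.BatchNormalization(),   # normalize layer inputs
    layers.Dropout(0.5),           # active only during training
    layers.Dense(4, activation="softmax"),
])

# Stop when the validation loss is no longer improving.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
\end{verbatim}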
 \\section{Convolutional Neural Networks}\n \\label{sec:CNN}\nConvolutional neural networks, CNN, have been used for quite some time when it comes to deep learning and their capabilities are rather astounding, as proved by Krizhevsky et al \\cite{NIPS2012_4824}. They use layers which perform convolutions, hence the name. The input to a convolutional network is either a 2D or a 3D tensor, where the 3D alternative usually has color channels along the third dimension. The use of convolutional neural networks in image recognition can be practical for several reasons, e.g., they can capture spatial relationships in an image. This section has been written with reference to V. Dumolin et al \\cite{convArit}. Figure \\ref{fig:cnn} shows an example of how a CNN may look.\n \n \\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.9\\textwidth]{./Images/convNetwork.jpg}\n\\caption{Image shows the flow of a convolutional neural network from left to right. Several convolutions as well as pooling functions are applied in order to the input image. The last feature map is then flattened to fit into the hidden layers, to then be classified with a softmax function. Image taken from \\cite{cnnImage}.}\n\\label{fig:cnn}\n\\end{center}\n\\end{figure}\n \nA convolutional layer performs the mathematical operation \\textit{convolution} on the input tensor, which can be presented as an image. A convolutional filter, or kernel, of size $k \\cdot l$ slides across the input. The kernel consists of weights in each element. These weights are found during the training process of the network. As the kernel slides across the image, the dot product of the kernel weights and the pixels the kernel is above is calculated as the output. Afterwards, a new image containing all of the produced dot products is formed.\n\nPadding on the input can be used to account for values when the kernel is outside the input. The use of padding has an impact on the output size after the convolutional layer. Stride can also be used, which tells how much the kernel translates along an axis; increased stride leads to subsampling. Since changes of parameters along one axis do not affect the outcome along another axis, it simplifies the explanation of CNNs to take the parameter values to be the same along both axes. The more convolutional layers used in a network, the more complex the shapes that are detected. First layers may detect edges and corners, whereas later layers may find features representing, for example, a car or a dog. A common addition to the layers is an activation function, commonly ReLU (which is described in section \\ref{subsec:activationfunctions}). This gives faster training by only allowing positive activations to pass through the layer.\n\n The common types of padding are \\textit{no zero}, \\textit{same} and \\textit{full}; examples are seen in figure \\ref{fig:padding}. No zero padding involves having no padding outside of the input. This means that the kernel never goes outside of the actual image; once a side of the kernel hits the side of the input it jumps down to the next line (if the stride is 1). \n \n Same padding is used when one desires the output size of the layer to be the same as the input size. This is achieved by having a padding $p$ be $ p = \\floor{\\frac{k}{2}} $ for any odd kernel size $k$.\n  \n With full padding one actually makes the output size larger than the input, by fully utilizing every possible combination of the kernel and the input image. This is accomplished by having the padding $p$ be $p = k - 1$ for any kernel size $k$. \n \n \n \\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.3\\textwidth]{./Images/noPad.png}\n\\includegraphics[width = 0.3\\textwidth]{./Images/samePad.png}\n\\includegraphics[width = 0.3\\textwidth]{./Images/fullPad.png} \n\\caption{Different examples of padding. Grid in green is the output, grid in blue is the input, and the shaded area is where the kernel currently is. The leftmost image shows no zero padding being used. Here the kernel size is $k = 3$, the input size is $i = 4$ and the output size is $o = 2$, i.e., smaller than the input. In the middle image same padding is used; the output size is the same as the input size, $o = i = 5$. In the rightmost image full padding is used. Here a bigger output size than the input size is produced, $o = 7, i = 5$. Images taken from \\cite{convArit}.}\n\\label{fig:padding}\n\\end{center}\n\\end{figure}\n \n \n Another useful feature used in a lot of convolutional neural networks is pooling. Pooling layers are somewhat similar to convolutional layers in that they use a sliding window which performs an operation on the contents of the window and outputs a new value. However, the difference is that they use other functions instead of linear addition. Two common pooling functions are \\textit{max pooling}, where the output is the largest value within the window, and \\textit{average pooling}, where the output is the average of the components within the window. Pooling is commonly done to reduce the size of the input. Figure \\ref{fig:pooling} shows examples of pooling being used.\n \n \\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.45\\textwidth]{./Images/avgPool.png}\n\\includegraphics[width = 0.45\\textwidth]{./Images/maxPool.png}\n\\caption{Examples of two types of pooling. The blue grid represents the input, the green grid represents the output, and the shaded area is where the sliding window is located. The left image shows the use of average pooling, while the right one shows the use of max pooling.\nImages taken from \\cite{convArit}.}\n\\label{fig:pooling}\n\\end{center}\n\\end{figure} \n
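Putting the pieces of this section together, a small CNN of the kind sketched in figure \ref{fig:cnn} can be written in Keras as follows. This is an illustrative sketch only; the filter counts and layer sizes are made up for the example.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

# Convolution -> pooling blocks, then flatten and classify with softmax.
model = keras.Sequential([
    layers.Conv2D(16, kernel_size=3, padding="same", activation="relu",
                  input_shape=(256, 256, 3)),
    layers.MaxPooling2D(pool_size=2),       # max pooling
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.AveragePooling2D(pool_size=2),   # average pooling
    layers.Flatten(),                       # shift feature maps to an array
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
\end{verbatim}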
When a classification is to be done in the CNN, the feature maps need to be flattened into an array. This is done after the last pooling or convolutional layer. This array can then be passed into a hidden layer for classification.\n\n\n\\section{Collecting data}\n\\label{sec:NNdata}\nWhen working with machine learning, big sets of data are often required for a good resulting model.\nA problem with this is that it can be tricky to obtain such a large data set, because it usually also requires\nlabels for that data to be put in manually. These labels will be used by the neural network to check, during training, whether\nthe predicted value is correct or not.\n\nIn this project the data is sets of images of different furniture parts. These images did not exist anywhere online, so they had to be collected by taking lots of photos. The photos contained the part in different backgrounds and from different angles. Table \\ref{table:imageclassnumbers} shows how many images of each class we had. The backgrounds were mostly of typical office surroundings, although there were exceptions (an example is shown in figure \\ref{fig:exampleCutout}). All the photos were then resized to 256x256 pixels.\nThe reason for choosing a square size is that when rotating the images\n(this will be useful later on when augmenting data in section \\ref{sec:NNaugment}) \nthey will not have any black bars on the sides nor be stretched.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{ |c|c| } \n \\hline\n Class &  Number of images  \\\\ \n \\hline\n Leg piece & 203 \\\\ \n \\hline\n Bridge piece & 210 \\\\ \n  \\hline\n Seat & 216 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Table showing how many images of each class we had.}\n\\label{table:imageclassnumbers}\n\\end{table}\n\nFor the first piece of furniture, Nolmyra, there were only 3 unique parts (excluding screws and similar parts). Therefore the model was trained to recognize 4 things: the different parts and an unknown object. The unknown label was added because the object detection algorithm could detect other objects, and we did not want those objects to be mistaken for furniture parts. This could of course also be done by setting a threshold on the confidence value, i.e., if the model gives a confidence value under 0.8 the object is discarded as unknown. The problem with that approach is that it is much harder to distinguish parts from unknown objects, because the best model is the one that can give a confidence value close to 100\\% on every prediction.\n\nFor this reason, photos of random objects were also added to the data set. Most of these photos were taken by ourselves, but some were also collected from the web. The images were placed in folders by item, which made labelling much simpler: just put all the images containing a certain part in the same folder.\n\nTraining a model usually requires hundreds of thousands or even millions of photos. That much data is hard to get and would take a lot of time to obtain. Going around the office to snap that many photos is almost unthinkable.\nHowever, there are other options, which will be explained in the next section.\n
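Because the images are organised with one folder per class, loading them together with their labels is straightforward. The sketch below uses TensorFlow's directory-based loader; the paths and batch size are hypothetical.
\begin{verbatim}
import tensorflow as tf

# One folder per class: the directory structure provides the labels,
# e.g. data/train/leg, data/train/bridge, data/train/seat, data/train/unknown
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=(256, 256),
    batch_size=32,
)
\end{verbatim}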
\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_31.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_133.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_136.jpg}\n\\caption{Images showing different samples of training data used for training the model. From left to right: seat, bridge piece, leg piece.}\n\\label{fig:exampleData}\n\\end{center}\n\\end{figure}\n\n\\section{Augmenting data}\n\\label{sec:NNaugment}\nInstead of training on only original images, some images can be created from other images by, for example, rotating, flipping, or changing brightness, saturation or contrast. This will essentially be an image of the same object, but the data will look different, thus giving the model more relevant data to train on, see figure \\ref{fig:exampleArtificial}.\nWhen doing this it is important to keep in mind that the augmented data should be relevant to real situations.\nCreating data with only the blue color band when the real situations are only in daylight makes no sense.\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-1.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-3.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-4.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-9.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-11.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_15-12.jpg}\n\\caption{Images showing some resulting images from augmentation. Top left image shows the original image. Top middle image shows the result after rotating $180$ degrees. Top right image shows the result after rotating $-90$ degrees. Bottom left image shows the result after reducing the contrast by $30\\%$. Bottom middle image shows the result after reducing the color balance by $70\\%$. Bottom right image shows the result after increasing the color balance by $60\\%$.}\n\\label{fig:exampleArtificial}\n\\end{center}\n\\end{figure}\n\nFurthermore, images of objects of interest can be cut out and pasted into random environments to create even more data, see figure \\ref{fig:exampleCutout}.\nThe idea of this is to try and make the model understand that the focus should be on the furniture part and that the background environment is irrelevant. While doing this, even though the parts could have been pasted into totally random environments, we focused on relevant spaces like office floors or carpets.\nIn our project, this gave a good result as it increased the test accuracy by 8\\%.\n\n\\begin{figure}[hbtp]\n\\begin{center}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_13.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_86.jpg}\n\\includegraphics[width = 0.3\\textwidth]{./Images/image_107.jpg}\n\\caption{Images showing how cutouts of images were utilized. Leftmost image shows a cutout of one of the furniture parts. The other two images show the result after they were pasted into different environments to artificially increase training data.}\n\\label{fig:exampleCutout}\n\\end{center}\n\\end{figure}\n\n\nDoing this by hand can still be very time consuming, so automating this process as far as possible is recommended; a sketch of such automation is given below.\n
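The Pillow sketch below generates the kinds of variants shown in figure \ref{fig:exampleArtificial}; it is illustrative only (the path and enhancement factors are made up, though the factors mirror the caption above).
\begin{verbatim}
from PIL import Image, ImageEnhance

def augment(path):
    """Yield a few augmented variants of one training image."""
    img = Image.open(path).resize((256, 256))
    for angle in (90, 180, 270):          # square images rotate cleanly
        yield img.rotate(angle)
    yield img.transpose(Image.FLIP_LEFT_RIGHT)
    yield ImageEnhance.Contrast(img).enhance(0.7)  # reduce contrast by 30%
    yield ImageEnhance.Color(img).enhance(1.6)     # boost color balance by 60%
\end{verbatim}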
\\section{Image classification}\n\\label{sec:NNclassification}\nDesigning a neural network for the image classification is not the easiest task, and much\nof the work consists of trying things and making intelligent guesses.\nAs stated in the section about data collection we had 4 different classes to classify,\nwhich means that a test accuracy of over 25\\% would be significant. We were aiming for\nsomewhere above or around 90\\% with a relatively small amount of data.\n\nWe started out with many convolutional layers interleaved with pooling layers and a few dense\nlayers at the end. The final layer had four nodes and a softmax activation to give a\nclassification probability over the four classes.\nThe activation functions of the other layers were set to ReLU since they are the fastest\nand preferred for larger nets. Tanh was also tried, but with no success.\n\nSGD with momentum as an optimizer was tested but was changed to Adam\nquite early on since it had better performance.\nOnly first-order optimizers were considered, because second-order optimizers take a lot of\ncomputing power and are usually only recommended for small networks.\n\nWe went back and forth trying different mixes of kernel sizes, numbers of convolutional\nlayers, insertion of pooling layers in different places, stride sizes, the number of dense layers\nand how many nodes they had.\n\nWhen training the models, overfitting was a big problem from the start. The tested methods\nfor combating this were dropout with different values, L2 weight regularisation,\nbatch normalization and reducing the number of weights.\n\nIt was hard to reduce the number of weights, as the evaluation usually converged around 25\\%\nwhenever it was tried.\nWhen using the dropout method, a value of 50\\% with a minimum of 64 nodes in the layer\nreduced overfitting most effectively. That, in combination with weight regularisation and batch \nnormalization, gave good results. Figure \\ref{fig:imageclassification} shows how overfitting was avoided.\n\nGaussian Noise gave very good results, but could unfortunately not be \nconverted from the model format generated by Keras to the mlmodel format that is required for Xcode.\n\nThe most important thing to mention is the difference that the amount of photos we had, and how they\nwere taken, made. Adding more photos made a huge difference to the test evaluation score.\nAfter having added 50 more images per class to a library of 600 images, the test score increased\nfrom \\textbf{80.55\\%} to \\textbf{83.45\\%} in accuracy. \\\\\n\nWhen finally testing the model we made a mistake by not separating the training data and test data.\nSince we shuffled our data before splitting it up into two parts, some augmented data got into the\ntest set. The model had already trained on the original image, and it was similar enough to\ngive unrealistically good results of 99.52\\%. This was finally fixed by splitting up the test and train data \ninto separate folders. Figure \\ref{fig:augmentedtestdata} shows the printed graph from one of those\ntraining sessions.\n\n\\begin{figure}[!hbtp]\n\\begin{center}\n\\includegraphics[width = 0.5\\textwidth]{./Images/augmentedtestdata}\n\\caption{Test and train accuracy over 100 epochs. The performance seems much better than it actually is.}\n\\label{fig:augmentedtestdata}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!hbtp]\n\\begin{center}\n\\includegraphics[width = 0.45\\textwidth]{./Images/imageclassification1}\n\\includegraphics[width = 0.45\\textwidth]{./Images/imageclassification2}\n\\includegraphics[width = 0.45\\textwidth]{./Images/imageclassification3}\n\\caption{Images show the result from introducing weight regularisation and dropout to a model. The graphs show the training loss (green) and evaluation loss (red). The upper left image shows the training without any dropout or weight regularisation. The upper right image shows the training with dropout implemented. 
The bottom image shows the training with dropout and weight regularisation implemented.}\n\\label{fig:imageclassification}\n\\end{center}\n\\end{figure}\n\nAs shown in the code in appendix B.1., early stopping was also finally implemented, with a callback to save the\nhighest performing model at the end of the process.\n\nThe best performing network (\\textbf{92.25\\%} accuracy) had the configuration shown in appendix B.2.\n\nA model with many convolutional layers and a few dense layers at the end gave the best\nresults. A larger network could have been implemented and tested as well, but\nsince the model size would grow more and training time would increase drastically,\nwe didn't increase it further.\n\n\\section{Transfer Learning}\n\\label{sec:NNtransfer}\n% Write about how we went about in order to insert Transfer Learning into the mix. \nAfter trying to design the network from scratch, it was decided to make use of \npre-trained networks and retrain them for another purpose. Models trained on \nimageNet \\cite{imageNet} were chosen since that source domain is similar to this target \ndomain. The models with their respective weights were loaded, without their fully \nconnected layer. The pretrained layers' weights were then frozen up to different depths, new \nfully connected layers were added on top, and new classifiers were trained. \nSeveral different models were tried, including ResNet50, InceptionV3 and VGG16, \nwhich were all integrated in Keras to start with.\n\nThe code shown in appendix B.3. shows how to load the VGG16 network with weights that have been pretrained on imageNet. The fully connected layer is then removed and instead a custom top layer is added to be trained.\n
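Appendix B.3 is not reproduced here, but a minimal sketch of this kind of setup in Keras might look as follows; the head layer sizes are illustrative assumptions, not the project's actual configuration.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

# Load imageNet weights without the fully connected top.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(256, 256, 3))
base.trainable = False            # freeze the pretrained layers

# New classifier head trained on the four furniture classes.
model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
\end{verbatim}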
The best result achieved for each of the models using this method is listed in
table \ref{table:transferLearning}. It shows that using the InceptionV3 model with pretrained
weights gave the best result. However, a few factors, such as where the models were
frozen and how layers were added and removed, have most likely affected the results. As can be
seen from using ResNet, the best model achieved 25\% accuracy, which is the same as random guessing, seeing as there are only four possible classes. These models could, and most likely would, have been improved, had it not been decided to take a new approach to the problem, which is described further in section \ref{sec:ODresults}.

\begin{table}[h]
\centering
\begin{tabular}{ |c|c| }
 \hline
 Pretrained model used & Best achieved accuracy \\
 \hline
 VGG16 & 70.8\% \\
 \hline
 ResNet & 25.0\% \\
 \hline
 InceptionV3 & 72.9\% \\
 \hline
\end{tabular}
\caption{Results from transfer learning.}
\label{table:transferLearning}
\end{table}

\newpage
{"text": "% Sample LaTeX file for creating a paper in the Morgan Kaufmannn two\n% column, 8 1/2 by 11 inch proceedings format.\n\n\\documentclass[]{article}\n\\usepackage{proceed2e}\n\\usepackage[\nbackend=bibtex,\nstyle=alphabetic,\ncitestyle=authoryear\n]{biblatex}\n\\usepackage{aliascnt}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{caption}\n\\usepackage{mathtools}\n\\usepackage[capitalise, noabbrev]{cleveref}\n\\usepackage{float}\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\\usepackage{bbm}\n\\usepackage{amsthm}\n\n% Set the typeface to Times Roman\n\\usepackage{times}\n\n\n%\\binoppenalty=\\maxdimen % to prevent breaking equations\n%\\relpenalty=\\maxdimen % to prevent breaking equations\n% Define the equation float env\n\\newaliascnt{eqfloat}{equation}\n\\newfloat{eqfloat}{h}{eqflts}\n\\floatname{eqfloat}{Equation}\n\n\n\\newtheorem{proposition}{Proposition}\n\n\\title{Differentiable Particle Filter}\n\n\\author{} % LEAVE BLANK FOR ORIGINAL SUBMISSION.\n          % UAI  reviewing is double-blind.\n\n% The author names and affiliations should appear only in the accepted paper.\n%\n%\\author{ {\\bf Harry Q.~Bovik\\thanks{Footnote for author to give an\n%alternate address.}} \\\\\n%Computer Science Dept. \\\\\n%Cranberry University\\\\\n%Pittsburgh, PA 15213 \\\\\n%\\And\n%{\\bf Coauthor}  \\\\\n%Affiliation          \\\\\n%Address \\\\\n%\\And\n%{\\bf Coauthor}   \\\\\n%Affiliation \\\\\n%Address    \\\\\n%(if needed)\\\\\n%}\n\n\n\n\\addbibresource{DPFbibli.bib}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n\tSequential Monte Carlo (SMC or Particle Filters) methods are a set of powerful techniques for continuous state space models. Additionally to providing a belief for the state of an observed system, they also provide an estimate of its likelihood which is non-differentiable with respect to the model parameters. This is due to the fact that standard resampling schemes consist in a non-smooth re-indexing of the Monte Carlo sample. In this article we propose two new resampling schemes that rely on regularised optimal transport techniques and are respectively differentiable but biased, and non-differentiable but optimal in a certain metric. Furthermore we assess the behaviour of the methods in a linear setup by comparing with the Kalman Filter and discuss the influence of the hyper-parameters.\n\\end{abstract}\n\n\\section{INTRODUCTION}\n\tParticle Filters \\parencite[see][]{particlefilter} offer an efficient way of performing posterior inference in otherwise intractable non-linear state space models and provide an unbiased estimate of the likelihood of the state space model parameters given observed data. Formally particle filters are interested in estimating state space hidden Markov models described by an unobserved state $X_t \\in \\mathbb{R}^{d_X}$ following $X_t|(X_{t-1}=x) \\sim f_t(\\cdot|x), t > 0$ and $X_0 \\sim \\mu(\\cdot) \\in \\mathcal{M}(\\mathbb{R}^{d_X})$ and an observed process $Y_t|(X_t=x) \\sim g_t(\\cdot|x) \\in \\mathcal{M}(\\mathbb{R}^{d_Y})$. They do this by keeping track of $X_t$ in the form of a weighted sample $(w^i_t, X_t^i)$ through a method called Sequential Importance Sampling, however, this technique applied by itself leads to weight degeneracy that needs to be mitigated \\parencite[see][]{doucet2009tutorial}, a well accepted way to fight this is through the use of a resampling scheme that replaces low weight particles with high weights ones \\parencite[see][]{hol2006resampling}. 
	In this paper we focus on regularised optimal transport as a resampling technique in two different ways. In \cref{sec:differentiable} we use the planning matrix as a direct map from a weighted sample $(w_i, X_i) \sim \mathbf{X}$ to an equally weighted sample $(\mathbbm{1}_N, \mathbf{Z}_i^{\epsilon}) \sim \mathbf{X}^{\epsilon}$, where $\mathbbm{1}_N$ is the vector of size $N$ filled with $\frac 1 N$, and we show that $\mathbf{X}^{\epsilon} \xrightarrow[\epsilon \to 0]{\mathcal{L}} \mathbf{X}$; because the mapping we use is differentiable w.r.t.\ $(w_i, X_i)$, it provides a biased but differentiable resampling scheme. Then, in \cref{sec:optimal}, we provide a novel algorithm that learns the optimal equally weighted sample $(\mathbbm{1}_N, \mathbf{Z}_i^{\epsilon})$ in the $\epsilon$-Sinkhorn divergence sense.

	Our main contributions are twofold: we introduce the planning matrix of regularised optimal transport as a differentiable resampling scheme, and we introduce a novel resampling algorithm that guarantees optimality of the resampled particles.

	The rest of the paper is organised as follows: in \cref{sec:background} we give a brief recapitulation of the particle filter \parencite[][]{particlefilter}, the biased Sinkhorn distances \parencite[][]{cuturi2013sinkhorn} and the Sinkhorn divergence \parencite[][]{feydy:interpolating}; in \cref{sec:differentiable} we introduce the differentiable particle filter, discuss its behaviour and provide examples; in \cref{sec:optimal} we introduce an optimal resampling scheme and provide examples for it; and finally we conclude with possible future extensions and improvements.

\section{BACKGROUND}
\label{sec:background}
	\subsection{Particle Filters}
	\label{subsec:PF}

		Particle filters have emerged as a standard technique for non-linear data assimilation; they approximate the filtering (posterior) distribution of $X_t|(Y_t=y, X_{t-1})$ using a weighted set of $N$ samples $(w_i, X_i)$ which are updated following a predict-update approach. Any given quantity of interest $\mathbb{E}[\phi(X_t)|Y_t=y, X_{t-1}]$ can then be estimated as a weighted average $\sum_i w_i \phi(X_i)$, with variance given by a central limit theorem \parencite[see][]{chopin2004central}.
		The ``predict'' step consists of proposing particles $X_t^i \sim p(\cdot|\mathbf{X}_{t-1}=X_{t-1}^i)$, whose weights are then updated as per Bayes' formula:

		\begin{align}
			w_t^i &= p(X_t^i|\mathbf{Y}_t=\mathbf{y},X_{t-1}^i) \\
			&\propto p(Y_t|X_{t}^i) \cdot p(X_t^i|X_{t-1}^i) \label{eq:likelihood}\\
			&= p(Y_t|X_{t}^i) \cdot w_{t-1}^i
		\end{align}

		These are then normalised to sum to 1. Additionally, the marginal likelihood $P(\mathbf{Y}_{u=1..t}|\mathbf{\theta})$ up to time $t$, where $\mathbf{\theta}$ are the model parameters, can be estimated through \cref{eq:likelihood} prior to normalisation.

		However, as time passes, the weights suffer from a well-known degeneracy problem: all weights but one converge to 0, so the sample no longer provides a good distributional estimate of the posterior. This has traditionally been mitigated by ancestry resampling: instead of proposing $X_t^i \sim p(\cdot|\mathbf{X}_{t-1}=X_{t-1}^i)$, we propose $X_t^i \sim p(\cdot|\mathbf{X}_{t-1}=X_{t-1}^{a_i})$, where $a_i \in \{1,..,N\}$ is any index sampling such that $\mathbb{E}[\sum_i \mathbb{I}(a_i=j)|\mathbf{w}] = N w_j, \, \forall j \in \{1,...,N\}$ \parencite[see][]{doucet2009tutorial}. In practice this is only done when the effective sample size (ESS) $\frac{1}{\sum_i w_i^2}$ is lower than a certain threshold (usually 50\% of the sample size $N$).

		This is summarised in \cref{algo:bootstrap} and \cref{algo:resampling}. Because of the reparametrisation trick \parencite[see][]{kingma2013auto}, only \cref{algo:resampling} makes the particle filter non-differentiable with respect to its inputs.

		\begin{algorithm}
			\caption{Bootstrap Particle Filter}
			\label{algo:bootstrap}
			\begin{algorithmic}
				\STATE{\bfseries Input:} $X_i$, $w_i$, $y$, $N$, $L$ \COMMENT{Inputs at time $t > 0$}
				\IF{$\text{ESS} < N \cdot \text{threshold}$}
					\STATE{Resample}
				\ENDIF
				\FOR{$i=1$ {\bfseries to} $N$}
					\STATE{Propose:} Sample $X_i \sim p(X_{t+1}|X_t=X_i)$
					\STATE{Update:} Compute $w_i *= p(Y_t=y|X_t=X_i)$
				\ENDFOR
				\STATE{Compute log-likelihood update:} $L+=\log{\sum_i w_i}$
				\FOR{$i=1$ {\bfseries to} $N$}
					\STATE $w_i = \frac{w_i}{\sum_i w_i}$
				\ENDFOR
			\end{algorithmic}
		\end{algorithm}

		\begin{algorithm}
			\caption{Generic resampling}
			\label{algo:resampling}
			\begin{algorithmic}
				\STATE{\bfseries Input:} $X_i$, $w_i$, $N$
				\FOR{$i=1$ {\bfseries to} $N$}
					\STATE{Sample:} $a_i$ \COMMENT{satisfying the hypotheses, for example $a_i \sim \text{Multinomial}(\mathbf{w})$}
					\STATE{Set:} $w_i = \frac{1}{N}$, $X_i = X_{a_i}$
				\ENDFOR
				\OUTPUT $\mathbf{w}$, $\mathbf{X}$
			\end{algorithmic}
		\end{algorithm}
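For concreteness, a minimal Python/NumPy sketch of one predict-update iteration with multinomial resampling follows; \texttt{propose} and \texttt{loglik} are assumed user-supplied model functions, and this is an illustration of the pseudo-code above, not the implementation used for the experiments.

\begin{verbatim}
import numpy as np

def bootstrap_step(x, w, y, propose, loglik, rng, threshold=0.5):
    """One iteration of Algorithm 1 with multinomial resampling
    (Algorithm 2). x: (N, d) particles, w: (N,) normalised weights,
    y: observation, propose(x, rng): samples the transition f_t,
    loglik(y, x): log observation density log g_t(y | x),
    rng: a numpy.random.Generator."""
    N = len(w)
    if 1.0 / np.sum(w ** 2) < threshold * N:   # effective sample size test
        a = rng.choice(N, size=N, p=w)         # multinomial ancestor indices
        x, w = x[a], np.full(N, 1.0 / N)
    x = propose(x, rng)                        # predict
    logw = np.log(w) + loglik(y, x)            # update
    m = logw.max()
    log_increment = m + np.log(np.sum(np.exp(logw - m)))  # log sum_i w_i
    w = np.exp(logw - m)
    return x, w / w.sum(), log_increment
\end{verbatim}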
	In this article, similarly to \parencite{reich2012nonparametric,graham2019scalable}, we consider an ensemble approach to particle filtering: instead of the resampling scheme $(w_i, X_i) \mapsto (\frac 1 N, X_{a_i})$, we provide two new mappings. First, a biased ensemble reweighting (\cref{sec:differentiable}) that comes from the planning matrix solution of a regularised optimal transport problem, $(w_i, X_i) \mapsto (\frac 1 N, (M_{\epsilon}^{\mathbf{w}, \mathbf{X}} X)_i)$; second, an optimal recentering of the particles learnt to minimise the Sinkhorn divergence \parencite[see][]{genevay2017learning,feydy:interpolating}, $(w_i, X_i) \mapsto (\frac 1 N, Z^{\epsilon}_i)$.

	\subsection{REGULARISED OPTIMAL TRANSPORT}
	\label{subsec:OT}
		\subsubsection{Optimal Transport}
			Optimal transport is interested in computing a distance between measures.
Formally, given two empirical probability measures $\alpha, \beta \in \mathcal{M}^+_1(\mathcal{X})$ and a symmetric positive cost function $C: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, it computes both the minimum and the minimising argument of the functional $\pi \mapsto \int_{\mathcal{X}^2} C(x,y) \, d\pi$ for $\pi$ belonging to the simplex
			$$S_{\alpha, \beta} = \left\{ \pi \in \mathcal{M}^+_1(\mathcal{X} \times \mathcal{X} ) \,\middle|\, \int \pi(\cdot, dy) = \alpha, \int \pi(dx, \cdot) = \beta \right\}$$

		\subsubsection{Sinkhorn distances}

			If the supports of $\alpha$ and $\beta$ are of size $N$, solving this problem is known to scale in $O(N^3)$. \cite{cuturi2013sinkhorn} shows that a regularised version of the problem can be considered instead:
			$$\textbf{OT}_{\epsilon} := \min_{\pi \in S_{\alpha, \beta}} \int_{\mathcal{X}^2} C(x,y) \, d\pi + \epsilon KL(\pi||\alpha \otimes \beta)$$

			Thanks to the Fenchel--Rockafellar theorem, this can be rewritten \parencite[see][]{feydy:interpolating,peyr2018computational} using dual functions $f, g \in \mathcal{C}(\mathcal{X})$:
			\begin{align}
				\textbf{OT}_{\epsilon} 	=&\max_{f, g \in \mathcal{C}(\mathcal{X})} \langle \alpha, f \rangle + \langle \beta, g \rangle \\
									&- \epsilon \langle \alpha \otimes \beta, \exp \left( \frac 1 \epsilon \left( f \oplus g - C \right)  \right) - 1 \rangle \nonumber\\
				\text{with: } \pi_{\epsilon} 			= &\exp \left( \frac 1 \epsilon \left( f \oplus g - C \right)  \right) \cdot \alpha \otimes \beta \label{eq:plan}
			\end{align}

			This has been key to the recent development of computational optimal transport \parencite{peyr2018computational}, as it translates a problem over a matrix into a problem over two related vectors. Moreover, if $\alpha = (w^X_i, X_i)_{1 \leq i \leq N}$ and $\beta = (w^Y_j, Y_j)_{1 \leq j \leq M}$, the optimality condition on $f$ and $g$ can be written in fixed-point form \parencite{feydy:interpolating}, which one only has to iterate successively until convergence:

			\begin{align}
				\forall i, j &\nonumber\\
				f_i &= -\epsilon \text{LSE}_k(\log w^Y_k + \frac 1 \epsilon g_k - \frac 1 \epsilon C(X_i, Y_k)) \label{eq:fixed_1}\\
				g_j &= -\epsilon \text{LSE}_k(\log w^X_k + \frac 1 \epsilon f_k - \frac 1 \epsilon C(X_k, Y_j)) \label{eq:fixed_2}
			\end{align}
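In code, these fixed-point updates read as follows (a log-domain sketch assuming a squared Euclidean cost; \texttt{logsumexp} is SciPy's log-sum-exp, and the number of iterations is an assumption rather than a convergence test):

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def sinkhorn_potentials(logwx, X, logwy, Y, eps, n_steps=100):
    """Iterate the fixed-point updates (5)-(6) in the log domain.
    logwx, logwy: log-weights of the two samples; X: (N, d), Y: (M, d)."""
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # cost matrix
    f = np.zeros(len(X))
    g = np.zeros(len(Y))
    for _ in range(n_steps):
        f = -eps * logsumexp(logwy + g / eps - C / eps, axis=1)    # eq. (5)
        g = -eps * logsumexp(logwx + f / eps - C.T / eps, axis=1)  # eq. (6)
    return f, g
\end{verbatim}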
		\subsubsection{Sinkhorn divergences}
			\label{subsubsec:Div}
			Crucially, $\textbf{OT}_{\epsilon}(\cdot, \alpha)$ does not reach its minimum at $\alpha$, which means that the solution of $\min_{\beta} \textbf{OT}_{\epsilon}(\beta, \alpha)$ may actually lie far from $\alpha$. This was first outlined in \cite{genevay2017learning}, and a solution is to consider instead the so-called ``Sinkhorn divergence''
			\begin{align}
				\mathcal{W}_\epsilon(w_X, X, w_Y, Y) :=
					& \; \text{OT}_{\epsilon}(w_X, X, w_Y, Y) \label{eq:non_sym}\\
					& - 0.5 \, \text{OT}_{\epsilon}(w_X, X, w_X, X) \label{eq:sym_1}\\
					& - 0.5 \, \text{OT}_{\epsilon}(w_Y, Y, w_Y, Y)\label{eq:sym_2}
			\end{align}

			An important point is that the symmetric optimal transport problems \cref{eq:sym_1,eq:sym_2} can be solved faster than the non-symmetric one; hence the computational burden is controlled by \cref{eq:non_sym}.

\section{DIFFERENTIABLE RESAMPLING}
\label{sec:differentiable}
	Given a weighted empirical distribution $(w_i, X_i)_{1 \leq i \leq N} \in \mathcal{M}^+_1(\mathbb{R}^d)$, we are interested in finding a ``good enough'' unweighted sample $(\frac 1 N, \tilde{X}_i)_{1 \leq i \leq N}$, in the sense that for any $\phi : \mathbb{R}^d \to \mathbb{R}$, $\sum_{1 \leq i \leq N} w_i \phi(X_i) \approx \frac{1}{N}\sum_{1 \leq i \leq N} \phi(\tilde{X}_i)$ with small enough variance \parencite[][]{resampling_comp}. While most approaches consider a bootstrapping of the particles based on the weights, $(w_i, X_i) \mapsto (\frac 1 N, X_{a_i})$, this provides a non-differentiable mapping that prevents exact propagation of the gradient through the resampling step. Because of this, and to prevent high variance in the gradient estimation, recent works by \cite{maddison2017filtering,naesseth2017variational,le2017auto} that link SMC and variational inference simply ignore the additional impact of resampling and tweak the AutoDiff scheme to propagate the gradient only through particles that were not discarded at the resampling step. As discussed in their papers, this provides a biased estimator of the likelihood.

	\subsection{OPTIMAL TRANSPORT MAP RESAMPLING}
	\label{subsec:otResampling}
		To the best of our knowledge, the first paper to have introduced an optimal transport map as an ensemble technique for resampling is \cite{reich2012nonparametric}; the method has since been applied in \cite{graham2019scalable} and \cite{jacob2016coupling} to provide a local mapping from prior to posterior and to couple same-seed particle filter trajectories in order to compute sensitivities with respect to hyper-parameters.
		The paradigm introduced in \cite{reich2012nonparametric} can be phrased as follows.
		Let $(w, X)_i$ be a weighted sample before resampling and let $C \in \mathbb{R}^{N \times N}$ be a symmetric positive cost matrix, and let us consider the following optimal transport problem:
		\begin{align}
			\mathbf{U} = \text{argmin}_{M \in S_{\mathbf{w}, \mathbf{1}_N}} \sum_{i,j} C_{i,j} M_{i,j} \label{eq:ot_res}
		\end{align}

		As discussed in \cite{reich2012nonparametric}, $\mathbf{P} := N \mathbf{U}$ is a Markov transition matrix and the mapping $\tilde{X} = \mathbf{P} \mathbf{X}$ provides a consistent estimate of $(w_i, X_i)_i$.

		However, the function $(w_i, X_i)_i \to \mathbf{P}\mathbf{X}$ is costly to compute \parencite[see][]{cuturi2013sinkhorn}.

	\subsection{DIFFERENTIABLE RESAMPLING}
	\label{subsec:regOTResamp}
		Instead of considering the non-regularised problem in \cref{eq:ot_res}, we use the mapping resulting from the regularised version of the optimal transport problem \cref{eq:plan} \parencite{cuturi2013sinkhorn,feydy:interpolating}: $(\mathbf{w}, \mathbf{X}) \to \mathbf{P}_\epsilon \mathbf{X}$.

		\begin{proposition}
			\label{prop:differentiability}
			The mapping $$(\mathbf{w}, \mathbf{X}) \to \mathbf{P}_\epsilon = \exp \left( \frac 1 \epsilon \left( \mathbf{f}^T + \mathbf{g} - \mathbf{C}(\mathbf{X},\mathbf{X}) \right)  \right) \cdot \mathbf{w}^T,$$ where $\mathbf{f}$ and $\mathbf{g}$ are given by \cref{eq:fixed_1,eq:fixed_2}, is differentiable, with:
			\begin{align}
				\forall i, j &\nonumber\\
				\frac{\partial f_i}{\partial \cdot} &= \frac{\partial}{\partial \cdot}\left[-\epsilon \text{LSE}_k(\log w^Y_k + \frac 1 \epsilon g_k - \frac 1 \epsilon C(X_i, Y_k))\right] \label{eq:fixed_1_deriv}\\
				\frac{\partial g_j}{\partial \cdot} &= \frac{\partial}{\partial \cdot}\left[-\epsilon \text{LSE}_k(\log w^X_k + \frac 1 \epsilon f_k - \frac 1 \epsilon C(X_k, Y_j))\right] \label{eq:fixed_2_deriv}
			\end{align}
		\end{proposition}

		\begin{proof}
			The system of equations \cref{eq:fixed_1,eq:fixed_2} defines a system of implicit functions to which we can apply the implicit function theorem. In this case its application is trivial, as the relationship is linear.
		\end{proof}

		In practice this means that the gradients can be propagated by automatic differentiation through the last step of the Sinkhorn iterates only, provided that the algorithm has converged.

		When using automatic differentiation, this can be summarised as \cref{algo:biasedResampling}:
		\begin{algorithm}
			\caption{Biased resampling}
			\label{algo:biasedResampling}
			\begin{algorithmic}
				\STATE{\bfseries Input:} $X_i$, $w_i$, $N$, $n_{\text{steps}}$
				\STATE Stop registering gradients
				\STATE{\bfseries Initialise:} $\mathbf{f}$, $\mathbf{g}$
				\FOR{$i=1$ {\bfseries to} $n_{\text{steps}}-1$}
					\STATE evaluate \cref{eq:fixed_1} and \cref{eq:fixed_2} simultaneously
				\ENDFOR
				\STATE Register gradients
				\STATE Set gradients of $\mathbf{f}$, $\mathbf{g}$ to $0$
				\STATE Evaluate \cref{eq:fixed_1}, \cref{eq:fixed_2}
				\OUTPUT $\frac 1 N \mathbbm{1}_N$, $\mathbf{P} \mathbf{X}$ with $\mathbf{P}$ given by \cref{eq:plan}
			\end{algorithmic}
		\end{algorithm}
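The following NumPy sketch illustrates the structure of \cref{algo:biasedResampling}; it is not the paper's implementation. A squared Euclidean cost is assumed, and the orientation and scaling conventions are chosen so that, as $\epsilon \to \infty$, the output collapses to the weighted mean, matching the discussion below. In an autodiff framework the loop iterations would run with gradients disabled and only the final evaluation would be traced, as justified by \cref{prop:differentiability}.

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def transport_resample(w, X, eps, n_steps=50):
    """Map (w_i, X_i) to (1/N, (P_eps X)_i) via the regularised plan
    between (w, X) and the uniformly weighted sample on the same support."""
    N = len(w)
    C = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logw, logu = np.log(w), np.full(N, -np.log(N))
    f = np.zeros(N)
    g = np.zeros(N)
    for _ in range(n_steps):       # would be the non-traced iterations
        f = -eps * logsumexp(logu + g / eps - C / eps, axis=1)
        g = -eps * logsumexp(logw + f / eps - C.T / eps, axis=1)
    # Plan pi_eps of equation (2); N * pi_eps^T has rows summing to 1.
    log_pi = (f[:, None] + g[None, :] - C) / eps \
             + logw[:, None] + logu[None, :]
    P = N * np.exp(log_pi).T
    return np.full(N, 1.0 / N), P @ X
\end{verbatim}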
		\subsubsection{Illustration}
			To illustrate the behaviour of the resampling scheme we consider a bimodal 2D distribution of 500 points, constructed as follows: 500 points $X_i \in \mathbb{R}^2$ are drawn uniformly within a circle of radius 1; then half the sample is randomly set to have weights proportional to $\mathcal{N}\left(\left(\begin{matrix}-0.5 \\ 0.5\end{matrix}\right), \left(\begin{matrix}0.3 & 0.\\ 0. & 0.3 \end{matrix}\right)\right)$ and the other half proportional to $\mathcal{N}\left(\left(\begin{matrix}0.5 \\ -0.5\end{matrix}\right), \left(\begin{matrix}0.1 & 0.\\ 0. & 0.1 \end{matrix}\right)\right)$; see \cref{fig:BiasedTransport}. This corresponds to a multimodal distribution $X \,|\, ||X||^2_2 \leq 1$, where $X\sim \mathcal{N}\left(\left(\begin{matrix}-0.5 \\ 0.5\end{matrix}\right), \left(\begin{matrix}0.3 & 0.\\ 0. & 0.3 \end{matrix}\right)\right) + \mathcal{N}\left(\left(\begin{matrix}0.5 \\ -0.5\end{matrix}\right), \left(\begin{matrix}0.1 & 0.\\ 0. & 0.1 \end{matrix}\right)\right)$.

			\begin{figure}
				\centering
				\captionsetup{justification=centering}
				\includegraphics[width=\linewidth]{BiasedTransport}
				\caption{Biased transport comparison}
				\label{fig:BiasedTransport}
			\end{figure}

			\cref{fig:BiasedTransport} illustrates a well-known problem of the regularised Sinkhorn algorithm: as the regularisation increases, the resulting transport plan converges to the one that minimises $KL(\pi||\alpha\otimes\beta)$, in this case $\mathbf{w_X} \otimes \mathbf{\mathbbm{1}_N}$; hence the resulting resampling of $\mathbf{X}$, $\mathbf{X_\epsilon} = \mathbf{P_\epsilon} \mathbf{X}$, collapses to the weighted mean of the sample $\mathbf{X}$.

		\subsubsection{Application To A State Space Model}
			We now consider the following noisy resonator:

			\begin{align}
				x_\text{true}(t) = \sin(t) + \tfrac 1 2 \mathcal{N}(0, 1), \quad t = 0, 0.1, \ldots, 20
			\end{align}

			which we model by the following 2D state space model:

			\begin{align}
				\label{eq:ssm}
				x_0(t + dt) &= x_0(t) + x_1(t) dt + \mathcal{N}(0, \sigma_0^2)\\
				x_1(t + dt) &= x_1(t) - x_0(t) dt + \mathcal{N}(0, \sigma_1^2)\\
				y(t) &= x_0(t) + \mathcal{N}(0, \sigma_y^2)
			\end{align}

			\begin{figure}
				\centering
				\captionsetup{justification=centering}
				\includegraphics[width=\linewidth]{KF_OptimalTransportPF_comp}
				\caption{Filtering applied to \cref{eq:ssm}.
				Top: Kalman filter; second and third: biased transport with $\epsilon=0.25, 1$; bottom: systematic resampling}
				\label{fig:kf_illustration}
			\end{figure}

			\cref{fig:kf_illustration} compares \cref{algo:biasedResampling} for different regularisations with \cref{algo:resampling} under systematic resampling \parencite{resampling_comp}; all filters have 100 particles. The shaded zone corresponds to $\pm 2$ standard deviations.

			The collapsing of the sample due to the regularisation, as highlighted in \cref{fig:BiasedTransport}, is also visible in \cref{fig:kf_illustration}: each resampling step results in a decrease in the variance of the sample.
& 0.1 \\end{matrix}\\right)\\right)$\n\t\t\t\n\t\t\t\\begin{figure}\n\t\t\t\t\\centering\n\t\t\t\t\\captionsetup{justification=centering}\n\t\t\t\t\\includegraphics[width=\\linewidth]{BiasedTransport}\n\t\t\t\t\\caption{Biased transport comparison}\n\t\t\t\t\\label{fig:BiasedTransport}\n\t\t\t\\end{figure}\t\n\t\t\n\t\t\t\\cref{fig:BiasedTransport} illustrates a well-known problem of the regularised Sinkhorn algorithm: as the regularisation increases, the resulting transporting plan will converge to the one that minimizes $KL(\\pi||\\alpha\\otimes\\beta)$, in this case $\\mathbf{w_X} \\otimes \\mathbf{\\mathbbm{1}_N}$, hence the resulting resampling of $\\mathbf{X}$: $\\mathbf{X_\\epsilon} = \\mathbf{P_\\epsilon} \\mathbf{X}$ collapses to the weighted mean of the sample $\\mathbf{X}$.\n\t\t\t\n\t\t\\subsubsection{Application To A State Space Model}\n\t\t\tWe now consider the following noisy resonator:\n\t\t\t\n\t\t\t\\begin{align}\n\t\t\t\tx_\\text{true}(t) = \\sin(t) + \\frac 1 2 \\mathcal{N}(0, 1), t = 0., 0.1, ..., 20\n\t\t\t\\end{align}\n\t\t\t\n\t\t\tThat we model by the following 2D state space model\n\n\t\t\t\\begin{align}\n\t\t\t\t\\label{eq:ssm}\n\t\t\t\tx_0(t + dt) &= x_0(t) + x_1(t) dt + N(0, \\sigma_0^2)\\\\\t\t\t\t\n\t\t\t\tx_1(t + dt) &= x_1(t) - x_0(t) dt + N(0, \\sigma_1^2)\\\\\n\t\t\t\ty(t) &= x_0(t) + N(0, \\sigma_y^2)\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{figure}\n\t\t\t\t\\centering\n\t\t\t\t\\captionsetup{justification=centering}\n\t\t\t\t\\includegraphics[width=\\linewidth]{KF_OptimalTransportPF_comp}\n\t\t\t\t\\caption{Filtering applied to \\cref{eq:ssm}\n\t\t\t\tTop: Kalman Filter, Second and Third: Biased Transport with $\\epsilon=0.25,1$, Bottom: Systematic Resampling }\n\t\t\t\t\\label{fig:kf_illustration}\n\t\t\t\\end{figure}\t\n\t\t\n\t\t\t\\cref{fig:kf_illustration} compares \\cref{algo:biasedResampling} for different regularisations with \\cref{algo:resampling} with systematic resampling \\parencite{resampling_comp}, all filters have 100 particles. The shaded zone corresponds to $\\pm 2 \\text{standard deviations}$ \n\t\t\t\n\t\t\tThe collapsing phenomenon in the sample due to the regularisation as highlighted in \\cref{fig:BiasedTransport} is visible in \\cref{fig:kf_illustration}: each resampling step results in a decrease in variance of the sample.\n\t\t\t\n\t\t\\subsubsection{Likelihood evaluation}\n\t\t\tTogether with the resampling in \\cref{algo:resampling} the formula coming from \\cref{algo:bootstrap} provides an unbiased estimate of the likelihood for the state space model associated and also comes with a central limit theorem \\cite{chopin2004central}. However, the resulting function $\\hat{\\mathcal{L}}(\\mathbf{y}|\\mathbf{\\theta})$ is not continuous and as a consequence not differentiable with respect to $\\mathbf{\\theta}$. On the other hand using \\cref{algo:biasedResampling} provides a theoretically everywhere differentiable scheme for \"recentering\" the particles, albeit to the cost of biasedness in the resulting estimate.\n\t\t\t\n\t\t\t\n\t\t\t\\begin{figure*}\n\t\t\t\t\\centering\n\t\t\t\t\\captionsetup{justification=centering}\n\t\t\t\t\\includegraphics[width=\\textwidth]{likelihood}\n\t\t\t\t\\caption{Likelihood estimate w.r.t. 
\section{OPTIMAL RESAMPLING}
\label{sec:optimal}

	While the scheme in \cref{sec:differentiable} has the advantage of providing a gradient for the resampling step, it suffers from the inconvenience of providing a collapsed estimate of the state post-resampling, which in turn results in a biased estimate of the likelihood. Instead of learning a biased linear mapping, we can learn the best unweighted point cloud minimising a distance from the weighted degenerate sample. To this end we consider the Sinkhorn divergence (see \cref{subsubsec:Div}).

	\subsection{LEARNT POINTS CLOUD}
		As in \cite{genevay2017learning}, we consider the optimisation problem given by the Sinkhorn divergence $\mathcal{W}_\epsilon(\alpha, \beta) = OT_{\epsilon}(\alpha,\beta) - \frac 1 2 OT_{\epsilon}(\alpha,\alpha) - \frac 1 2 OT_{\epsilon}(\beta,\beta)$, where in our case $\alpha$ is the weighted degenerate sample $(w_i, X_i)_i$ and $\beta$ is the target unweighted sample $(\frac 1 N, Z_i)_i$: this constitutes a gradient descent on $Z$ with respect to the loss given by the Sinkhorn divergence.

		The resampling algorithm is therefore modified as in \cref{algo:optimalResampling}.

		\begin{algorithm}
			\caption{Optimal resampling}
			\label{algo:optimalResampling}
			\begin{algorithmic}
				\STATE{\bfseries Input:} $X_i$, $w_i$, $N$, $n_{\text{steps}}$, tolerance, $\lambda$ \COMMENT{Learning rate}
				\STATE Stop registering gradients
				\STATE{\bfseries Initialise:} $\mathbf{f}$, $\mathbf{g}$
				\STATE $Z \leftarrow X$
				\FOR{$i=1$ {\bfseries to} $n_{\text{steps}}$}
					\IF{$\mathcal{W}_\epsilon(w,X,\frac 1 N, Z)<\text{tolerance}$}
						\STATE Break
					\ENDIF
					\STATE $Z \leftarrow Z - \lambda \nabla_Z \mathcal{W}_\epsilon$
				\ENDFOR
				\OUTPUT $\frac 1 N \mathbbm{1}_N$, $Z$
			\end{algorithmic}
		\end{algorithm}

		\begin{figure*}
			\centering
			\captionsetup{justification=centering}
			\includegraphics[width=\textwidth]{LearntReweighting}
			\caption{Gradient flow for learning the unweighted sample\\
				Top left: original sample; right and bottom: evolution of the gradient descent}
			\label{fig:LearntReweighting}
		\end{figure*}

		The behaviour of \cref{algo:optimalResampling} is shown in \cref{fig:LearntReweighting}.
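A NumPy sketch of \cref{algo:optimalResampling} follows. It is an illustration under stated assumptions, not the paper's implementation: the cost is squared Euclidean, the gradient of the divergence with respect to $Z$ is written in its envelope-theorem (fixed-plan) form rather than obtained by automatic differentiation, the tolerance test is omitted, and the step size is illustrative (in practice it is often rescaled, e.g.\ by $N$, since the plan entries are of order $1/N$).

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def _plan(logwa, A, logwb, B, eps, n_steps=50):
    # Regularised transport plan between (wa, A) and (wb, B),
    # squared Euclidean cost, log-domain Sinkhorn iterations.
    C = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    f, g = np.zeros(len(A)), np.zeros(len(B))
    for _ in range(n_steps):
        f = -eps * logsumexp(logwb + g / eps - C / eps, axis=1)
        g = -eps * logsumexp(logwa + f / eps - C.T / eps, axis=1)
    return np.exp((f[:, None] + g[None, :] - C) / eps
                  + logwa[:, None] + logwb[None, :])

def optimal_resample(w, X, eps, lr=0.5, n_steps=100):
    """Gradient descent of Z on the Sinkhorn divergence between
    (w, X) and (1/N, Z); grad_j = 2 (sum_k pi_zz[j,k] Z_k
    - sum_i pi_xz[i,j] X_i) for the squared Euclidean cost."""
    N = len(w)
    logw, logu = np.log(w), np.full(N, -np.log(N))
    Z = X.copy()
    for _ in range(n_steps):
        pi_xz = _plan(logw, X, logu, Z, eps)   # plan of OT(alpha, beta)
        pi_zz = _plan(logu, Z, logu, Z, eps)   # symmetric OT(beta, beta)
        grad = 2.0 * (pi_zz @ Z - pi_xz.T @ X)
        Z = Z - lr * grad
    return np.full(N, 1.0 / N), Z
\end{verbatim}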
"max_issues_repo_name": "AdrienCorenflos/PFlow", "max_issues_repo_head_hexsha": "ec5f43a5e20d1280260e482ee0f9139fb9d1ca2b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DPF.tex", "max_forks_repo_name": "AdrienCorenflos/PFlow", "max_forks_repo_head_hexsha": "ec5f43a5e20d1280260e482ee0f9139fb9d1ca2b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-21T00:48:19.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-21T00:48:19.000Z", "avg_line_length": 68.1136363636, "max_line_length": 1076, "alphanum_fraction": 0.7221805138, "num_tokens": 7527, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177519, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5676540739867525}}
{"text": "\\section{FE cantonmathguy Seclected Problems}\n\n\n\n\\begin{enumerate}\n\n    \n\n\n    \\item Determine all functions $f: \\mathbb{Q} \\to \\mathbb{Q}$ satisfying $f(x + y) = f(x) + f(y)$ for all $x, y \\in \\mathbb{Q}$.\n\n    \n\n\n    \\item Let $a_1, a_2, \\ldots$ be a sequence of integers with infinitely many positive and negative terms. Suppose that for every positive integer $n$ the numbers $a_1, a_2, \\ldots, a_n$ leave $n$ different remainders upon division by $n$. Prove that every integer occurs exactly once in the sequence $a_1, a_2, \\ldots$.\n\n    \n\n\n    \\item Determine all functions $f: \\mathbb{R} \\to \\mathbb{R}$ satisfying \\[f(x)f(y) = f(x + y) + xy\\]for all real $x$ and $y$.\n\n    \n\n\n    \\item Find all functions $f: \\mathbb{Z} \\rightarrow \\mathbb{Z}$ such that, for all integers $a$, $b$, $c$ that satisfy $a + b + c = 0$, the following equality holds:\n    \\[f(a)^2 + f(b)^2 + f(c)^2 = 2f(a)f(b) + 2f(b)f(c) + 2f(c)f(a).\\]\n\n    \n\n\n    \\item Determine all functions $f: \\mathbb{R} \\to \\mathbb{R}$ satisfying\n    \\[f(x) + f(y) = f(x + y) \\quad \\text{and} \\quad f(xy) = f(x)f(y)\\]for all $x, y \\in \\mathbb{R}$.\n\n    \n\n\n    \\item Find all functions $f: \\mathbb{R} \\rightarrow \\mathbb{R}$ such that for all $x,y \\in \\mathbb{R}$, the following equality holds\n    \\[f(\\left\\lfloor x\\right\\rfloor y)=f(x)\\left\\lfloor f(y)\\right\\rfloor\\]where $\\left\\lfloor a\\right\\rfloor$ is the greatest integer not greater than $a$.\n\n    \n\n\n    \\item $ \\bigstar $ Let $k$ be a real number. Find all functions $f: \\mathbb{R} \\to \\mathbb{R}$ satisfying\n    \\[|f(x) - f(y)| \\le k(x - y)^2\\]for all real $x$ and $y$.\n\n    \n\n\n    \\item Let $f: \\mathbb{N} \\to \\mathbb{N}$ be a function, and suppose that positive integers $k$ and $c$ satisfy\n    \\[f^k(n) = n + c\\]for all $n \\in \\mathbb{N}$, where $f^k$ denotes $f$ applied $k$ times. Show that $k \\mid c$.\n\n    \n\n\n    \\item Find all functions $f: \\mathbb{N} \\to \\mathbb{N}$ satisfying\n    \\[f(f(f(n))) + f(f(n)) + f(n) = 3n\\]for every positive integer $n$.\n\n    \n\n\n    \\item Let $S$ be the set of integers greater than $1$. Find all functions $f: S \\to S$ such that\n    (i) $f(n) \\mid n$ for all $n \\in S$,\n    (ii) $f(a) \\ge f(b)$ for all $a, b \\in S$ with $a \\mid b$.\n\n    \n\n\n    \\item Let $\\mathbb{R}$ be the set of real numbers. Determine all functions $f: \\mathbb{R} \\to \\mathbb{R}$ such that \\[f(x^2 - y^2) = x f(x) - y f(y)\\]for all pairs of real numbers $x$ and $y$.\n\n    \n\n\n    \\item $ \\bigstar $ Let $T$ denote the set of all ordered triples $(p,q,r)$ of nonnegative integers. 
    Find all functions $f: T \to \mathbb{R}$ satisfying
    \[f(p,q,r) = \begin{cases} 0 & \text{if } pqr = 0, \\ 1 + \frac{1}{6}(f(p + 1,q - 1,r) + f(p - 1,q + 1,r) & \\ + f(p - 1,q,r + 1) + f(p + 1,q,r - 1) & \\ + f(p,q + 1,r - 1) + f(p,q - 1,r + 1)) & \text{otherwise} \end{cases} \]for all nonnegative integers $p$, $q$, $r$.

    \item Determine all strictly increasing functions $f: \mathbb{N} \to \mathbb{N}$ satisfying $nf(f(n)) = f(n)^2$ for all positive integers $n$.

    \item Determine all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ with the property that
    \[f(x - f(y)) = f(f(x)) - f(y) - 1\]holds for all $x,y \in \mathbb{Z}$.

    \item Find all real-valued functions $f$ defined on pairs of real numbers, having the following property: for all real numbers $a, b, c$, the median of $f(a,b), f(b,c), f(c,a)$ equals the median of $a, b, c$.

    \item Find all functions $f: \mathbb{N} \rightarrow \mathbb{N}$ such that, for all positive integers $n$, we have $f(f(n)) < f(n + 1)$.

    \item Let $f: \mathbb{N} \to \mathbb{N}$ be a function such that, for any $w, x, y, z \in \mathbb{N}$,
    \[f(f(f(z))) f(wxf(yf(z))) = z^2 f(xf(y)) f(w).\]Show that $f(n!) \ge n!$ for every positive integer $n$.

    \item Find all functions $f: \mathbb{N} \rightarrow \mathbb{N}$ such that $f(n!) = f(n)!$ for all positive integers $n$ and such that $m-n$ divides $f(m) - f(n)$ for all distinct positive integers $m$, $n$.

    \item Find all functions $f$ from the reals to the reals such that
    \[(f(a) + f(b))(f(c) + f(d)) = f(ac + bd) + f(ad - bc)\]for all real $a, b, c, d$.

    \item Determine all functions $f$ defined on the natural numbers that take values among the natural numbers for which
    \[(f(n))^p \equiv n \pmod{f(p)}\]for all $n \in \mathbb{N}$ and all prime numbers $p$.

    \item Let $n \ge 4$ be an integer, and define $[n] = \{1, 2, \ldots, n\}$. Find all functions $W: [n]^2 \to \mathbb{R}$ such that for every partition $[n] = A \cup B \cup C$ into disjoint sets,
    \[\sum_{a \in A} \sum_{b \in B} \sum_{c \in C} W(a,b) W(b,c) = |A| |B| |C|.\]

    \item $ \bigstar $ Find all infinite sequences $a_1, a_2, \ldots$ of positive integers satisfying the following properties:
    (a) $a_1 < a_2 < a_3 < \cdots$,
    (b) there are no positive integers $i$, $j$, $k$, not necessarily distinct, such that $a_i + a_j = a_k$,
    (c) there are infinitely many $k$ such that $a_k = 2k - 1$.

    \item Show that there exists a bijective function $f:\mathbb N_0\to\mathbb N_0$ such that for all $m,n\in\mathbb{N}_{0}$, $$f(3mn + m + n) = 4f(m)f(n) + f(m) + f(n)$$

    \item Determine all functions $f: \mathbb Z \to \mathbb Z$ satisfying $$f(f(m) + n) + f(m) = f(n) + f(3m) + 2014$$ for all integers $m$ and $n$.

    \item Let $n \ge 3$ be a given positive integer.
    We wish to label each side and each diagonal of a regular $n$-gon $P_1 \ldots P_n$ with a positive integer less than or equal to $r$ so that:
    \begin{enumerate}
        \item every integer between $1$ and $r$ occurs as a label;
        \item in each triangle $P_iP_jP_k$ two of the labels are equal and greater than the third.
    \end{enumerate}
    Given these conditions:
    \begin{enumerate}
        \item Determine the largest positive integer $r$ for which this can be done.
        \item For that value of $r$, how many such labellings are there?
    \end{enumerate}

    \item $ \bigstar $ Suppose that $f$ and $g$ are two functions defined on the set of positive integers and taking positive integer values. Suppose also that the equations $f(g(n)) = f(n) + 1$ and $g(f(n)) = g(n) + 1$ hold for all positive integers $n$. Prove that $f(n) = g(n)$ for all positive integers $n$.

    \item Find all the functions $f: \mathbb N_0 \to \mathbb N_0$ satisfying the relation

    \item Let $\mathbb{R}$ be the set of real numbers. Determine all functions $f: \mathbb{R} \to \mathbb{R}$ that satisfy the equation
    \[f(x + f(x + y)) + f(xy) = x + f(x + y) + yf(x)\]for all real numbers $x$ and $y$.

    \item Suppose that $s_1, s_2, s_3, \ldots$ is a strictly increasing sequence of positive integers such that the sub-sequences
    \[s_{s_1}, s_{s_2}, s_{s_3}, \ldots \qquad\text{and}\qquad s_{s_1+1}, s_{s_2+1}, s_{s_3+1}, \ldots\]are both arithmetic progressions. Prove that the sequence $s_1, s_2, s_3, \ldots$ is itself an arithmetic progression.

    \item Find all functions $f$ from $\mathbb{N}_0$ to itself such that
    \[f(m + f(n)) = f(f(m)) + f(n)\]for all $m, n \in \mathbb{N}_0$.

    \item $ \bigstar $ Consider a function $f: \mathbb{N} \to \mathbb{N}$. For any $m, n \in \mathbb{N}$ we write $f^n(m) = \underbrace{f(f(\ldots f}_{n}(m)\ldots))$. Suppose that $f$ has the following two properties:
    \begin{enumerate}
        \item if $m, n \in \mathbb{N}$, then $\frac{f^n(m) - m}{n} \in \mathbb{N}$;
        \item the set $\mathbb{N} \setminus \{f(n) \mid n\in \mathbb{N}\}$ is finite.
    \end{enumerate}
    Prove that the sequence $f(1) - 1, f(2) - 2, f(3) - 3, \ldots$ is periodic.

    \item Let $\mathbb{N}$ be the set of positive integers. Find all functions $f: \mathbb{N} \to \mathbb{N}$ that satisfy the equation
    \[f^{abc-a}(abc) + f^{abc-b}(abc) + f^{abc-c}(abc) = a + b + c\]for all $a,b,c \ge 2$.
    \item Let $2\mathbb{Z} + 1$ denote the set of odd integers. Find all functions $f:\mathbb{Z} \to 2\mathbb{Z} + 1$ satisfying
    \[f(x + f(x) + y) + f(x - f(x) - y) = f(x + y) + f(x - y)\]for every $x, y \in \mathbb{Z}$.

\end{enumerate}
{"text": "%&LaTeX\n\n\\section{Joe Fourier Was Not a Discrete Fellow}\n\n\\subsection{Lab Background}\nBy the end of this lab you should have a firm understanding of how the\nDiscrete Fourier Transform (DFT) can be implemented exactly using the\nFast Fourier Transform (FFT). In addition you should be able to\nidentify common problems using the DFT to analyze signals. You will\nalso be familiar with a new tool, the spectrogram, that uses the DFT\nas a function of time.\n\n\\subsection{Implementing the DFT}\n\nRecall that the DFT can be implemented directly from the analysis\nequation. For a length $N$ signal $x[n]$,\n\\begin{align}\nX[k]=\\sum_{n=0}^{N-1}x[n]e^{-j \\frac{2\\pi}{N} nk} && \\text{for $k = 0, 1, 2, \\cdots N-1$}\n\\label{eq:dft}\n\\end{align}\nThe order of the implementation is $O(N)=N^2$. The following Java code\noutlines implementation of a 256-point DFT. It is written without any\nalgorithmic speedup (i.e., it exactly mirrors equation \\ref{eq:dft}).\n\\begin{lstlisting}[language=Java,basicstyle=\\mlttfamily\\small]\npublic class MyDFT\n{\n  // x is the input and y is the magnitude of the complex DFT\n public void computeDFT(double[] x, double[] y)\n  {\n  double[] yImag = new double[256];\n  double[] yReal = new double[256];\n\n  double twoPiOverN = 2*Math.PI/256;\n\n  for (int k = 0 ; k < 256 ; k++)\n  {\n    yReal[k] = 0;\n    yImag[k] = 0;\n    for (int n = 0 ; n < 256 ; n++)\n    {\n      yReal[k] += x[n]*Math.cos(n*k*twoPiOverN);\n      yImag[k] += -x[n]*Math.sin(n*k*twoPiOverN);\n    }\n    y[k] = Math.sqrt(yReal[k]*yReal[k] + yImag[k]*yImag[k]);\n  }\n }\n}\n\\end{lstlisting}\n\nNote that, unlike in Matlab, there is no native support in Java for\ncomplex numbers so this arithmetic is written out explicitly in the\ncode above. For example, the equation $y=x\\times e^{a}$ (where x is a\nreal number) must be explicitly written out using Euler's formula, and\nthe real and imaginary portions saved in separate variables,\n$y_{real}=x\\times \\cos(a)$ and $y_{imag}=x\\times \\sin(a)$.\n\nThe FFT algorithm can be used to reduce the computation time of the\nDFT to $O(N)=N\\log_2 N$ --- a significant speedup for even modest\nlength signals.\n\n\\subsection{The FFT in Matlab}\n\nMatlab includes a \\verb|fft| function (there are many more related\noperations in the Signal Processing Toolbox, but we are sticking to\n``vanilla'' Matlab here). You can look at the documentation for this\nfunction; pay especial attention to the frequency values that\ncorrespond to each element of the vector that this function returns,\nand to how to specify the number of points in the FFT it computes.\n\n\\paragraph{Step 1.1} Implement your own \\verb|myFFT| function in\nMatlab that takes a real-valued vector as input, computes a 256-point\nFFT, and returns a real-valued vector that is the magnitude of the\n(single-sided, i.e., only positive frequencies) FFT. Implement this\nfunction using loops (i.e., do not use recursion). The first part of\nthis code performs a bit reversal on the input array. 
You can use the following code to perform the bit reversal efficiently
(efficient bit reversal algorithms in other languages are typically more complex):
\begin{lstlisting}[style=Matlab-editor,basicstyle=\mlttfamily\small]
% Assume that you want to do a bit reversal of the contents of the vector x
indices = [0 : length(x)-1];                 % binary indices need to start at zero
revIndices = bin2dec(fliplr(dec2bin(indices, 8))); % bit reversed indices
revX = x(revIndices+1);                      % Add 1 to get Matlab indices
\end{lstlisting}
This code converts the vector of in-order indices to an array of
8-character strings (representing those indices as binary 8-bit
numbers). Each string is then reversed and converted back to
numbers. The resulting vector of indices (which is what they are after
we add one to each) is applied to the signal to pull its entries out
into a new vector, with the entries in bit-reversed order.

Once you've done this, all you need to do is iterate over the array
$\log_2 N$ times! Remember that you're performing complex arithmetic
and need to compute the magnitude of the output array once the FFT is
computed. Include a copy of your Matlab code in your report.

\paragraph{Step 1.2}
Check your results using the Matlab \verb|fft| function. Your results
should be quite similar, if not identical; this should be apparent by
comparing graphs of the outputs. Take the FFT of a sinusoid with a
frequency of $\pi/4$ radians per sample using your FFT implementation
and the \verb|fft| function.

\paragraph{Step 1.3}
Prove that the number of complex multiplies performed by your code is
$O(N \log N)$.

\subsection{Using the DFT}

\paragraph{Step 2.1}
Create a sum of two sinusoids. Use the built-in Matlab \verb|fft|
function (it will allow you, among other things, to apply an $n$-point
FFT to a signal with more than $n$ samples) to compute the FFT of the
sum and then plot the FFT magnitude. Use frequencies of $0.13\pi$ and
$0.19\pi$ for the two sinusoids. Make sure that there are at least 256
samples in each waveform, as you'll want to use a 256-point FFT! Also,
make sure you correctly compute the corresponding frequency values
(either from 0 to $\pi$ or $-\pi$ to $\pi$, depending on whether you
prefer plotting the single-sided magnitudes or not) for the FFT
x-axis, and label it appropriately in your graph. What does the result
look like? Does this make sense?

\paragraph{Step 2.2} At what index (or indices) does the FFT magnitude
reach its peak value(s)? What frequency (or frequencies) does this
correspond to?

\paragraph{Step 2.3} Change the frequencies of the sinusoids to
$0.13\pi$ and $0.14\pi$. Repeat steps 2.1 and 2.2. Do the results
still make sense?

\paragraph{Step 2.4} Replace the sum of two sinusoids with your
\verb|DTMFCoder| function from lab 6. Input a selection of button
values. Do the FFT and its plot show the separate frequencies for each
button?

\subsection{Spectrograms}

Comparing FFT graphs (as in step 2.4 above) can be difficult.
But what if we could analyze the frequency content of a signal as
a function of time? That would make it easier to see differences in
frequency if a signal started changing (like a string of DTMF keys
pressed in turn). To do this we will need a new tool called the
\emph{spectrogram}.
The spectrogram is simply an algorithm for computing the FFT of a
signal at different times and plotting the magnitudes as a function of
time. The spectrogram is computed in the following way (a code sketch
of these steps follows the list):

\begin{enumerate}
\item A given signal is ``windowed.'' This means that we only take a
  certain number of points from the signal (for this example assume we
  are using a window of length 128 points). To start out, we take the
  first 128 points of the signal (points 1 through 128 of the input
  vector).
\item Take the FFT of the window and save it in a separate array.
\item Advance the window in time by a certain number of points. For
  instance, we can advance the window by 64 points so that we now have
  a window of indices 65 through 192 from the input signal array.
\item Repeat steps 1--3, saving the FFT of each window, until there
  are no longer any points left in the input array.
\item Form a 2-D matrix whose columns are the FFT magnitudes of each
  window (placed in chronological order). In this way, each row
  represents a certain frequency, each column represents a given
  instant in time, and the value of the matrix represents the
  magnitude of the FFT.
\end{enumerate}
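If it helps to see the five steps as code, here is a minimal sketch (in Python/NumPy rather than Matlab, and not the \verb|myspecgram| helper discussed below; no tapering window is applied, and only single-sided magnitudes are kept):

\begin{lstlisting}[language=Python,basicstyle=\mlttfamily\small]
import numpy as np

def spectrogram(x, win_len=128, hop=64, n_fft=128):
    # Sketch of steps 1-5: windowed FFT magnitudes as columns.
    cols = []
    for start in range(0, len(x) - win_len + 1, hop):  # steps 1 and 3
        frame = x[start:start + win_len]               # window the signal
        mag = np.abs(np.fft.rfft(frame, n_fft))        # step 2: FFT magnitude
        cols.append(mag)                               # step 4: save it
    return np.stack(cols, axis=1)                      # step 5: freq x time
\end{lstlisting}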
The result is called a \emph{spectrogram} and is usually displayed as
an RGB image where blue represents small FFT magnitudes and red
represents larger FFT magnitudes (the \emph{Jet} colormap, if you are
familiar with color visualizations). There is an art to choosing the
correct parameters of the spectrogram (i.e., window size, FFT size,
how many points to advance the window, etc.). Each parameter involves
tradeoffs between the time and frequency resolution of the resulting
spectrogram. For our purposes here, we will not be concerned with
these tradeoffs. Instead we will be more interested in getting
familiar with analysis using spectrograms.

The Matlab Signal Processing Toolbox has a \verb|spectrogram|
function, but you can retrieve a drop-in replacement for it from
\url{http://www.ee.columbia.edu/ln/rosa/matlab/sgram/} that will work
without that toolbox. See the online Matlab documentation for the
\verb|spectrogram| function to understand what its parameters
are (though, for our purposes, you should just be able to use
the call \verb|y = myspecgram(x)|).

\paragraph{Step 3.1} Use your \verb|DTMFCoder| function again. Instead
of computing its FFT, compute its spectrogram. What does the
spectrogram look like for button 1? Are both frequencies present?

\paragraph{Step 3.2} For this step, use \verb|DTMFCoder| to generate
the codes for multiple buttons --- at least three --- and concatenate
them together to form a single vector. Does the spectrogram make it
easier to judge the frequency content of the keys? Can you clearly see
when the signal changes from one key to another?
{"text": "\\documentclass{article} % For LaTeX2e\n\\usepackage[legalpaper, margin=0.5in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amsfonts,dsfont}\n\\usepackage{amssymb}\n\\usepackage[ruled,vlined]{algorithm2e}\n\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{xcolor}\n\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\n\\begin{document}\n\n\\section{Compact Description For Anomalies}\n\nProviding explanations for anomalies is important and some recent work (among others) can be found in \\cite{macha:2017}. We illustrate a different and yet very simple approach when AAD is used with \\textit{Isolation Forest}.\n\nThe Isolation Forest version of AAD works by partitioning the space into smaller subspaces and weights each subspace to fit the expert feedback. The end result is: (1) a set of default (unsupervised) anomaly scores for each subspace and, (2) weights for each subspace inferred from feedback. We can generate compact descriptions for the anomalies using this information. The idea is to select some combination of (preferably) the smallest subspaces that \\textbf{cover} all \\textbf{discovered} anomalies. Therefore, we treat this as an instance of the \\emph{set covering} problem. We illustrate this on a toy dataset below.\n\nAssume that there are total $m$ subspaces across all Isolation Forest trees. We represent weights by ${\\bf w} \\in \\mathbb{R}^m$, and the corresponding unsupervised anomaly scores by ${\\bf d} \\in \\mathbb{R}^m$.\n\nNow, the final anomaly scores for the \\textbf{subspaces} (not instances), after incorporating feedback, is ${\\bf a} = {\\bf w} \\circ {\\bf d} \\in \\mathbb{R}^m$ where $\\circ$ denotes the element wise (\\textit{Hadamard}) product.\n\nWe sort the scores in ${\\bf a}$ in descending order and select (say) $30$ top ranked subspaces. Next, we retain only those subspaces from this set which contain at least one anomaly \\textbf{discovered} by the analyst. Let us denote the resulting set of subspaces by $\\mathcal{S}$. Further, let $\\mathcal{Z}$ be the set of discovered anomalies that belong to one or more subspaces in $\\mathcal{S}$, and $|\\mathcal{Z}|=n$. (Note that some of the discovered anomalies might not belong to any of the top ranked subspaces we selected.)\n\nLet $|\\mathcal{S}|=k$. Denote the \\textit{volumes} of the subspaces in $\\mathcal{S}$ by the vector ${\\bf v} \\in \\mathbb{R}^k$. Now, assume that a binary vector ${\\bf x} \\in \\{0, 1\\}^k$ contains $1$ in locations corresponding to the subspaces in $\\mathcal{S}$ which are included in the covering set, and $0$ otherwise. Let ${\\bf u}_z \\in \\{0, 1\\}^k$ denote a vector for each anomaly $z \\in \\mathcal{Z}$ which contains $1$ in all locations corresponding to the subspaces in $\\mathcal{S}$ that $z$ belongs to. Let ${\\bf U} \\in \\{0, 1\\}^{[n \\times k]}$ represent the matrix of all ${\\bf u}$s.\n\nThe selection of the compact set of subspaces to describe all discovered anomalies can be formulated as:\n\\begin{align}\n& \\argmin_{{\\bf x} \\in \\{0, 1\\}^k} {\\bf x} \\cdot {\\bf v}^p \\label{eqn:opt} \\\\\n\\text{s.t.} & {\\bf U} \\cdot {\\bf x} \\geq {\\bf 1} \\nonumber \\\\\n\\text{where, } & \\text{${\\bf 1}$ is the column vector of $n$ 1s, and} \\nonumber \\\\\n& \\text{$p$ is an integer $\\geq 1$ (more below)} \\nonumber\n\\end{align}\n\nThe parameter $p$ determines how severely to penalize larger volumes. This is usually $1$. 
We apply this idea to the toy data shown in Figure~\ref{fig:dataset}; the compact descriptions are shown in Figure~\ref{fig:compact_rects}.

Note that we have not considered labeled nominals in the optimization objective, but a more sophisticated objective might include them too. For instance, we could add a term to the objective which penalizes a selected subspace whenever a nominal is present in it.

A different approach to generating descriptions might train a cost-based decision tree on the labeled anomalies and nominals; once trained, the extracted rules would correspond to `descriptions'. This approach is possibly more principled, because decision tree splits are [usually] based on information gain or the Gini index and are therefore more informative. The downside is that the tree would be separate from the anomaly detector and would therefore not offer much insight into the anomaly detector's behavior -- ideally we would like the explanation or description of anomalies to reflect the detector's internals.

\begin{figure}
	\centering
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{toy2_dataset}
		\caption{Dataset}
		\label{fig:dataset}
	\end{subfigure}
	~
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{toy2_iter_00}
		\caption{Initial score contours}
		\label{fig:baseline_contours}
	\end{subfigure}
	~
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{toy2_iter_34}
		\caption{After 35 feedback iterations}
		\label{fig:contours_35}
	\end{subfigure}
	\caption{Dataset and score contours. Figure~\ref{fig:dataset} shows a synthetic toy dataset. Figure~\ref{fig:baseline_contours} shows the initial Isolation Forest score contours. Figure~\ref{fig:contours_35} shows the score contours after 35 feedback iterations from the oracle. The red dots are discovered anomalies (true positives). The green dots are discovered nominals (false positives). The red checks are undiscovered anomalies (false negatives).}
	\label{fig:dataset_and_contours}
\end{figure}

\begin{figure}
	\centering
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{top_30_anomalous_regions_100_trees_baseline}
		\caption{Baseline}
		\label{fig:baseline_rects}
	\end{subfigure}
	~
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{top_30_anomalous_regions_100_trees_aad}
		\caption{AAD}
		\label{fig:aad_rects}
	\end{subfigure}
	~
	\begin{subfigure}[b]{0.3\textwidth}
		\includegraphics[width=\textwidth]{top_30_anomalous_regions_100_trees_compact}
		\caption{Compact Descriptions}
		\label{fig:compact_rects}
	\end{subfigure}
	\caption{Top $30$ subspaces ranked by ${\bf w}\circ{\bf d}$. Figure~\ref{fig:baseline_rects} shows the top $30$ most important subspaces (w.r.t.\ their \textit{anomalousness}) without any feedback. We can see that initially these simply correspond to the exterior regions of the dataset. AAD \textbf{learns the true importance of subspaces} automatically with feedback. For example, after incorporating the labels of $35$ instances, the subspaces around the labeled anomalies have emerged as the most important (Figure~\ref{fig:aad_rects}). Figure~\ref{fig:compact_rects} shows the set of \textbf{important} subspaces which compactly cover all labeled anomalies. These were computed with Equation~\ref{eqn:opt}. We might even think of this as a non-parametric clustering. Note that the compact subspaces only cover anomalies that were discovered in the $35$ feedback iterations. Anomalies which were not detected are likely to fall outside these compact subspaces.}
	\label{fig:rects}
\end{figure}
Figure~\\ref{fig:baseline_rects} shows the top $30$ most important subspaces (w.r.t.\\ their \\textit{anomalousness}) without any feedback. We can see that initially, these simply correspond to the exterior regions of the dataset. AAD \\textbf{learns the true importance of subspaces} automatically with feedback. For example, after incorporating the labels of $35$ instances, the subspaces around the labeled anomalies have emerged as the most important (Figure~\\ref{fig:aad_rects}). Figure~\\ref{fig:compact_rects} shows the set of \\textbf{important} subspaces which compactly cover all labeled anomalies. These were computed with Equation~\\ref{eqn:opt}. We might even think of this as a non-parametric clustering. Note that the compact subspaces only cover anomalies that were discovered in the $35$ feedback iterations. Anomalies which were not detected are likely to fall outside these compact subspaces.} \\label{fig:rects}\n\\end{figure}\n\n\\section{Querying Diversity}\nWe can use the anomaly description to diversify our queries when we have the option to query labels for more than one instance per round of feedback. We proceed as follows:\n\\begin{enumerate}\n\t\\item Select the top ranked $m$ (say, $m=15$) instances. Denote this set by $\\mathcal{C}$ (points in \\textcolor{blue}{blue} in Figure~\\ref{fig:candidate_regions}).\n\t\\item Next, select the top (say) $5$ most anomalous regions for each instance in $\\mathcal{C}$. Let $\\mathcal{F}$ be the union of all these regions (rectangles in \\textcolor{red}{red} in Figure~\\ref{fig:candidate_regions}).\n\t\\item Now, \\emph{compactly} describe all instances in $\\mathcal{C}$ using the regions in $\\mathcal{F}$ (rectangles in \\textcolor{red}{red} in Figure~\\ref{fig:baseline_queries}).\n\t\\item Now, starting with the most anomalous instance, select $k$ (say, $k=5$) instances one by one for querying, preferring instances whose regions overlap the least (points circled in \\textcolor{green}{green} in Figures \\ref{fig:baseline_queries} and \\ref{fig:diverse_queries}).\n\\end{enumerate}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{query_candidate_regions_ntop5_100_trees}\n\t\t\\caption{Candidate regions and instances}\n\t\t\\label{fig:candidate_regions}\n\t\\end{subfigure}\n\t%~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. \n\t%(or a blank line to force the subfigure onto a new line)\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{query_compact_ntop5_100_trees_baseline}\n\t\t\\caption{Simple select-top query}\n\t\t\\label{fig:baseline_queries}\n\t\\end{subfigure} \\\\\n    %~\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{query_compact_ntop5_100_trees_aad}\n\t\t\\caption{Select diverse query}\n\t\t\\label{fig:diverse_queries}\n\t\\end{subfigure}\n\t\\caption{Diverse querying strategy using sub-space descriptions. These figures show the $15$ most anomalous instances (ranked by score) in \\textcolor{blue}{blue}. We assume that the true labels are not known for these instances. We find the top $5$ most anomalous regions for each of the $15$ instances and the union of all these regions is shown as the \\textcolor{red}{red} rectangles in Figure~\\ref{fig:candidate_regions}. Figure~\\ref{fig:baseline_queries} and Figure~\\ref{fig:diverse_queries} show the regions which compactly contain (describe) the top-ranked instances. 
The instances circled in \\textcolor{green}{green} are the ones selected for querying. The instances (in green) in Figure~\\ref{fig:baseline_queries} are merely the top-ranked $5$ instances without taking into account any diversity. Figure~\\ref{fig:diverse_queries} shows that when we select instances which have different descriptions (minimum region overlap), they are more diverse.} \\label{fig:diverse}\n\\end{figure}\n\n\\subsection{Does the diverse querying strategy help?}\nWhatever querying strategy we use, it should ideally not lower the number of anomalies discovered within a budget. For the diverse query strategy, we need to verify that the number of anomalies discovered does not drop; Figure~\\ref{fig:diverse_effect} examines this on the toy data.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{toy2_processes}\n\t\t\\caption{Underlying Processes}\n\t\t\\label{fig:processes}\n\t\\end{subfigure}\n\t%~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. \n\t%(or a blank line to force the subfigure onto a new line)\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{results_anoms_found_toy2}\n\t\t\\caption{Anomalies discovered}\n\t\t\\label{fig:discovery}\n\t\\end{subfigure} \\\\\n\t%~\n\t\\begin{subfigure}[b]{0.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{results_diff_classes_toy2}\n\t\t\\caption{Difference in class diversity per batch}\n\t\t\\label{fig:class_diff}\n\t\\end{subfigure}\n\t\\caption{Diversity in the classes shown to an analyst per batch. Here we assume that the data contains \\textbf{three} classes which represent three underlying processes (Figure~\\ref{fig:processes}). Class $0$ is \\emph{nominal}, while classes $1$ and $2$ are \\emph{anomalous}. In Figure~\\ref{fig:discovery}, we plot the anomaly discovery curves for six Isolation Forest-based algorithms: (a) \\textbf{BAL (Adaptive Prior)} -- label only the single most anomalous instance (per iteration), (b) \\textbf{ifor\\_q1b3} -- label the top three most anomalous instances, (c) \\textbf{BAL-D} -- label the top three \\textbf{most diverse} instances among the top ten anomalous instances (employing the diversity strategy), (d) \\textbf{BAL-E} -- label the top three instances \\textbf{farthest in Euclidean space} among the top ten anomalous instances, (e) \\textbf{Unsupervised Baseline} -- unsupervised Isolation Forest, and (f) \\textbf{ifor\\_top\\_random} -- label three instances selected uniformly at random from the top ten anomalous instances. We see that all algorithms other than the baseline have similar performance, and all active labeling algorithms perform better than the baseline. In Figure~\\ref{fig:class_diff}, the solid line \\texttt{(diverse - top)} shows the average difference in the number of unique classes shown to the analyst per batch between \\textbf{BAL-D} and \\textbf{ifor\\_q1b3} (these differences are averaged over $10$ runs). The dashed line \\texttt{(diverse - top\\_random)} in Figure~\\ref{fig:class_diff} shows the average difference in the number of unique classes shown to the analyst per batch between \\textbf{BAL-D} and \\textbf{ifor\\_top\\_random}. Since both these differences are mostly positive, we conclude that the diversity strategy indeed presents more diverse instances to the analyst to label. Moreover, as seen in Figure~\\ref{fig:discovery}, this diversity does not lower the anomaly detection accuracy over a strictly select-top-anomalous query strategy. 
\\textbf{We also see this pattern in most real-world datasets we experimented with.} The dash-dot line \\texttt{(diverse - euclidean)} shows the difference between \\textbf{BAL-D} and \\textbf{BAL-E}. \\textbf{BAL-E} is a standard way to find diverse instances. The performance of \\textbf{BAL-D} and \\textbf{BAL-E} is very similar; however, \\textbf{BAL-D} is more user-friendly because it provides descriptions which can characterize multiple anomalies.} \\label{fig:diverse_effect}\n\\end{figure}\n\n\\section{Comparison between Isolation Forest, HS Trees, RS Forest}\nThere are fundamental differences between Isolation Forest and the other two: HS Trees and RS Forest. These differences directly influence the effectiveness of incorporating feedback. Since the depths in HS Trees and RS Forest are fixed at some depth $H$ and they repeatedly split a dimension, there are $O(2^H)$ subspaces represented by the leaf nodes of these detectors, and most are very small in volume. We see this property in Figures \\ref{fig:hstrees_rects} and \\ref{fig:rsforest_rects}. The implication is that more subspaces are required to generate a compact representation for a set of instances. This also means that instance-level feedback gets shared among far fewer instances. In contrast, in Isolation Forest (Figure~\\ref{fig:iforest_rects}), the depths of leaf nodes are not fixed; they are usually \\textbf{much} shallower than in HS Trees and RS Forest. As a result, the subspaces represented by the leaf nodes are larger and cover more instances, thereby requiring fewer subspaces to compactly represent a set of instances. Moreover, this results in feedback being shared by more instances in Isolation Forest. One remedy for HS Trees and RS Forest might be to dynamically determine when to stop splitting based on (maybe) the number of samples at a node. This might be a motivation for future research. My personal opinion right now is to stick to Isolation Forest because of its `nicer' properties.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{leaf_trees/query_candidate_regions_ntop5_100_trees_iforest}\n\t\t\\caption{Isolation Forest}\n\t\t\\label{fig:iforest_rects}\n\t\\end{subfigure}\n\t~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. \n\t%(or a blank line to force the subfigure onto a new line)\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{leaf_trees/query_candidate_regions_ntop5_50_trees_hstrees}\n\t\t\\caption{HS Trees}\n\t\t\\label{fig:hstrees_rects}\n\t\\end{subfigure}\n\t~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. \n\t%(or a blank line to force the subfigure onto a new line)\n\t\\begin{subfigure}[b]{0.3\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{leaf_trees/query_candidate_regions_ntop5_50_trees_rsforest}\n\t\t\\caption{RS Forest}\n\t\t\\label{fig:rsforest_rects}\n\t\\end{subfigure} \\\\\n    \\begin{subfigure}[b]{0.3\\textwidth}\n    \t\\includegraphics[width=\\textwidth]{leaf_trees/query_compact_ntop5_100_trees_aad_iforest}\n    \t\\caption{IForest Compact}\n    \t\\label{fig:iforest_compact_rects}\n    \\end{subfigure}\n    ~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. 
\n    %(or a blank line to force the subfigure onto a new line)\n    \\begin{subfigure}[b]{0.3\\textwidth}\n    \t\\includegraphics[width=\\textwidth]{leaf_trees/query_compact_ntop5_50_trees_aad_hstrees}\n    \t\\caption{HST Compact}\n    \t\\label{fig:hstrees_compact_rects}\n    \\end{subfigure}\n    ~ %add desired spacing between images, e. g. ~, \\quad, \\qquad, \\hfill etc. \n    %(or a blank line to force the subfigure onto a new line)\n    \\begin{subfigure}[b]{0.3\\textwidth}\n    \t\\includegraphics[width=\\textwidth]{leaf_trees/query_compact_ntop5_50_trees_aad_rsforest}\n    \t\\caption{RSF Compact}\n    \t\\label{fig:rsforest_compact_rects}\n    \\end{subfigure}\n\t\\caption{Top subspaces after $35$ feedback iterations ranked by ${\\bf w}\\circ{\\bf d}$ which cover the top ranked $15$ instances. We find the top $5$ most anomalous regions for each of the $15$ instances and the union of all these regions is shown as the \\textcolor{red}{red} rectangles (\\textbf{top row}). The corresponding compact subspaces which describe the $15$ instances are shown in the \\textbf{bottom row}. The points circled in \\textcolor{green}{green} (in the bottom row) are the $5$ instances (out of the $15$) selected for querying by the diverse strategy. The Isolation Forest subspaces tend to be larger and each subspace usually contains more instances. This lets the feedback in Isolation Forest be shared across more instances than in HS Trees or RS Forest.} \\label{fig:tree_diffs}\n\\end{figure}\n\n\\begin{thebibliography}{1}\n\\bibitem{macha:2017} Meghanath Macha and Leman Akoglu, {\\em {X-PACS:} eXPlaining Anomalies by Characterizing Subspaces}, 2017, http://arxiv.org/abs/1708.05929.\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "7b6ad26888aaf50d6227c9abc4ba8c5129e9681c", "size": 18271, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/anomaly_description/anomaly_description.tex", "max_stars_repo_name": "snad-space/ad_examples", "max_stars_repo_head_hexsha": "7c62a81f52e79874d6215b262f5a849d56eeae4f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 773, "max_stars_repo_stars_event_min_datetime": "2017-12-10T04:08:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T01:50:38.000Z", "max_issues_repo_path": "documentation/anomaly_description/anomaly_description.tex", "max_issues_repo_name": "snad-space/ad_examples", "max_issues_repo_head_hexsha": "7c62a81f52e79874d6215b262f5a849d56eeae4f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2018-12-25T00:46:25.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T00:02:29.000Z", "max_forks_repo_path": "documentation/anomaly_description/anomaly_description.tex", "max_forks_repo_name": "snad-space/ad_examples", "max_forks_repo_head_hexsha": "7c62a81f52e79874d6215b262f5a849d56eeae4f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 197, "max_forks_repo_forks_event_min_datetime": "2018-05-08T04:16:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T21:37:03.000Z", "avg_line_length": 89.1268292683, "max_line_length": 2461, "alphanum_fraction": 0.7670625582, "num_tokens": 5044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.5676540695946558}}
{"text": "\\documentclass[10pt]{article}\n\\author{Alex Peyrard}\n\\title{Information and coding theory assignment}\n\n\\begin{document}\n\\maketitle\n\\section{Known equations}\nDefine a probability mass function\n\\[\\mathcal{P}(X=x,Y=y,Z=z)\\]\nWhere \\[x\\in \\mathcal{X} = \\{0,1\\}, y\\in \\mathcal{Y} = \\{0,1\\}, z\\in \\mathcal{Z} = \\{0,1\\}\\]\nby\n\\[\\mathcal{P}(X=0,Y=0,Z=0)=\\mathcal{P}(X=1,Y=0,Z=1)=1/4\\]\n\\[\\mathcal{P}(X=0,Y=1,Z=1)=\\mathcal{P}(X=1,Y=1,Z=0)=1/4\\]\n\\[\\mathcal{P}(X=0,Y=1,Z=0)=\\mathcal{P}(X=0,Y=0,Z=1)=0\\]\n\\[\\mathcal{P}(X=1,Y=0,Z=0)=\\mathcal{P}(X=1,Y=1,Z=1)=0\\]\n\nthen\n\\[\\mathcal{P}(X=0,Z=0)=\\mathcal{P}(X=0,Y=0,Z=0)+\\mathcal{P}(X=0,Y=1,Z=0)=1/4\\]\n\\[\\mathcal{P}(X=0,Z=1)=\\mathcal{P}(X=0,Y=0,Z=1)+\\mathcal{P}(X=0,Y=1,Z=1)=1/4\\]\n\\[\\mathcal{P}(X=1,Z=0)=\\mathcal{P}(X=1,Y=0,Z=0)+\\mathcal{P}(X=1,Y=1,Z=0)=1/4\\]\n\\[\\mathcal{P}(X=1,Z=1)=\\mathcal{P}(X=1,Y=0,Z=1)+\\mathcal{P}(X=1,Y=1,Z=1)=1/4\\]\nand\n\\[\\mathcal{P}(X=0,Y=0)=\\mathcal{P}(X=0,Y=0,Z=0)+\\mathcal{P}(X=0,Y=0,Z=1)=1/4\\]\n\\[\\mathcal{P}(X=0,Y=1)=\\mathcal{P}(X=0,Y=1,Z=0)+\\mathcal{P}(X=0,Y=1,Z=1)=1/4\\]\n\\[\\mathcal{P}(X=1,Y=0)=\\mathcal{P}(X=1,Y=0,Z=0)+\\mathcal{P}(X=1,Y=0,Z=1)=1/4\\]\n\\[\\mathcal{P}(X=1,Y=1)=\\mathcal{P}(X=1,Y=1,Z=0)+\\mathcal{P}(X=1,Y=1,Z=1)=1/4\\]\nand\n\\[\\mathcal{P}(Y=0,Z=0)=\\mathcal{P}(X=0,Y=0,Z=0)+\\mathcal{P}(X=1,Y=0,Z=0)=1/4\\]\n\\[\\mathcal{P}(Y=0,Z=1)=\\mathcal{P}(X=0,Y=0,Z=1)+\\mathcal{P}(X=1,Y=0,Z=1)=1/4\\]\n\\[\\mathcal{P}(Y=1,Z=0)=\\mathcal{P}(X=0,Y=1,Z=0)+\\mathcal{P}(X=1,Y=1,Z=0)=1/4\\]\n\\[\\mathcal{P}(Y=1,Z=1)=\\mathcal{P}(X=0,Y=1,Z=1)+\\mathcal{P}(X=1,Y=1,Z=1)=1/4\\]\n\nwe can also find\n\\[\\mathcal{P}(X=0)=\\mathcal{P}(X=0,Z=0)+\\mathcal{P}(X=0,Z=1)=1/2\\]\n\\[\\mathcal{P}(X=1)=\\mathcal{P}(X=1,Y=0)+\\mathcal{P}(X=1,Y=1)=1/2\\]\n\\[\\mathcal{P}(Y=0)=\\mathcal{P}(Y=0,Z=0)+\\mathcal{P}(Y=0,Z=1)=1/2\\]\n\\[\\mathcal{P}(Y=1)=\\mathcal{P}(Y=1,Z=0)+\\mathcal{P}(Y=1,Z=1)=1/2\\]\n\\[\\mathcal{P}(Z=0)=\\mathcal{P}(X=0,Z=0)+\\mathcal{P}(X=1,Z=0)=1/2\\]\n\\[\\mathcal{P}(Z=1)=\\mathcal{P}(X=0,Z=1)+\\mathcal{P}(X=1,Z=1)=1/2\\]\n\n\\section{Mutual independence}\nThanks to the previous equations, we can find that\n\\[\\mathcal{P}(X=0,Y=0,Z=0)=1/4\\]\n\\[\\mathcal{P}(X=0)\\mathcal{P}(Y=0)\\mathcal{P}(Z=0)=(1/2)^3=1/8\\]\nThus\n\\[\\mathcal{P}(X=0,Y=0,Z=0)\\neq\\mathcal{P}(X=0)\\mathcal{P}(Y=0)\\mathcal{P}(Z=0)\\]\nX,Y and Z are not mutually independent\n\\section{Pairwise independence}\nThanks to the previous equations we can find that\n\\[\\mathcal{P}(X=0,Y=0)=\\mathcal{P}(X=0)\\mathcal{P}(Y=0)=1/4\\]\n\\[\\mathcal{P}(X=0,Y=1)=\\mathcal{P}(X=0)\\mathcal{P}(Y=1)=1/4\\]\n\\[\\mathcal{P}(X=1,Y=0)=\\mathcal{P}(X=1)\\mathcal{P}(Y=0)=1/4\\]\n\\[\\mathcal{P}(X=1,Y=1)=\\mathcal{P}(X=1)\\mathcal{P}(Y=1)=1/4\\]\n\n\\[\\mathcal{P}(X=0,Z=0)=\\mathcal{P}(X=0)\\mathcal{P}(Z=0)=1/4\\]\n\\[\\mathcal{P}(X=0,Z=1)=\\mathcal{P}(X=0)\\mathcal{P}(Z=1)=1/4\\]\n\\[\\mathcal{P}(X=1,Z=0)=\\mathcal{P}(X=1)\\mathcal{P}(Z=0)=1/4\\]\n\\[\\mathcal{P}(X=1,Z=1)=\\mathcal{P}(X=1)\\mathcal{P}(Z=1)=1/4\\]\n\n\\[\\mathcal{P}(Y=0,Z=0)=\\mathcal{P}(Y=0)\\mathcal{P}(Z=0)=1/4\\]\n\\[\\mathcal{P}(Y=0,Z=1)=\\mathcal{P}(Y=0)\\mathcal{P}(Z=1)=1/4\\]\n\\[\\mathcal{P}(Y=1,Z=0)=\\mathcal{P}(Y=1)\\mathcal{P}(Z=0)=1/4\\]\n\\[\\mathcal{P}(Y=1,Z=1)=\\mathcal{P}(Y=1)\\mathcal{P}(Z=1)=1/4\\]\n\nThus\n\nX,Y and Z are pairwise independent.\n\\section{Conditional independence}\nFor any random variables X1,X2 and X3\nX1 is independent of X2 conditioning on X3 if and only 
if\n\\[\\mathcal{P}(x1,x2,x3)\\mathcal{P}(x2)=\\mathcal{P}(x1,x2)\\mathcal{P}(x2,x3)\\]\nIn our case, for any correspondence between X1, X2, X3 and X, Y, Z, $\\mathcal{P}(x1,x2)\\mathcal{P}(x2,x3)$ will always be equal to $1/16$, since every pairwise probability equals $1/4$.\nOn the other hand, $\\mathcal{P}(x1,x2,x3)\\mathcal{P}(x2)$ is always equal to either $0$ or $1/8$; for example it is $0$ when X=Y=Z=1.\nThus, for any possible permutation of X, Y and Z as X1, X2 and X3, no two of them are conditionally independent given the third.\n\n\\section{Equivalent definitions of Conditional independence}\nLet's prove that the jumping form and shortened form are equivalent to the symmetric form.\n\n\\subsection{Jumping form}\nfor $\\mathcal{P}(x)>0, \\mathcal{P}(y)>0$\n\\[\\mathcal{P}(x,y,z)=\\mathcal{P}(x)\\mathcal{P}(y|x)\\mathcal{P}(z|y)\\]\n\\[\\Leftrightarrow\\]\n$\\mathcal{P}(x,y,z)=\\mathcal{P}(x)\\frac{\\mathcal{P}(x,y)}{\\mathcal{P}(x)}\\frac{\\mathcal{P}(y,z)}{\\mathcal{P}(y)}$ using the definition of conditional probability $\\mathcal{P}(b|a) = \\frac{\\mathcal{P}(a,b)}{\\mathcal{P}(a)}$\n\\[\\Leftrightarrow\\]\n\\[\\mathcal{P}(x,y,z)\\mathcal{P}(y)=\\mathcal{P}(x,y)\\mathcal{P}(y,z)\n\\]\nQED\n\\\\\n\n\\subsection{Shortened form}\nfor $\\mathcal{P}(x,y)>0$\n\\[\\mathcal{P}(z|x,y)=\\mathcal{P}(z|y)\\]\n\\[\\Leftrightarrow\\]\n$\\frac{\\mathcal{P}(x,y,z)}{\\mathcal{P}(x,y)}=\\frac{\\mathcal{P}(y,z)}{\\mathcal{P}(y)}$ using the formula $\\mathcal{P}(z|x,y) = \\frac{\\mathcal{P}(x,y,z)}{\\mathcal{P}(x,y)}$\n\\[\\Leftrightarrow\\]\n\\[\\mathcal{P}(x,y,z)\\mathcal{P}(y)=\\mathcal{P}(x,y)\\mathcal{P}(y,z)\n\\]\n\nQED\n\n\\end{document}", "meta": {"hexsha": "1e0fd2cc3c3b935171e1419a07372387b076c41f", "size": 4691, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ICT/assignment.tex", "max_stars_repo_name": "apeyrard/sjtu-work", "max_stars_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-26T10:04:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T10:04:05.000Z", "max_issues_repo_path": "ICT/assignment.tex", "max_issues_repo_name": "apeyrard/sjtu-work", "max_issues_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICT/assignment.tex", "max_forks_repo_name": "apeyrard/sjtu-work", "max_forks_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-26T10:04:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T10:04:06.000Z", "avg_line_length": 46.4455445545, "max_line_length": 199, "alphanum_fraction": 0.6060541462, "num_tokens": 2399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7577943658046608, "lm_q1q2_score": 0.5676540695946557}}
{"text": "\\section{2-D DP}\n\n\\subsection{2-D DP (part 1)}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{exampleblock}{LCS: longest common subsequence \\pno{2.2.7}}\n    \\begin{itemize}\n      \\item $X = X_{1} \\cdots X_{m}; Y = Y_{1} \\cdots Y_{n}$\n      \\item find (the length of) an LCS of $X$ and $Y$\n    \\end{itemize}\n    \\begin{align*}\n      X &= \\langle A,\\textcolor{blue}{B},\\textcolor{blue}{C},\\textcolor{blue}{B},D,\\textcolor{blue}{A},B \\rangle  \\\\\n      Y &= \\langle \\textcolor{blue}{B},D,\\textcolor{blue}{C},A,\\textcolor{blue}{B},\\textcolor{blue}{A} \\rangle \\\\\n      Z &= \\langle B,C,B,A \\rangle\n    \\end{align*}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item subproblem: $L[i,j]$: the length of an LCS of $X[1 \\cdots i]$ and $Y[1 \\cdots j]$\n      \\item goal: $L[m,n]$\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item question: Is $X_{i} = Y_{j}$?\n      \\item recurrence:\n\t\\begin{displaymath}\n\t  L[i,j] = \\left\\{ \\begin{array}{ll}\n\t    L[i-1, j-1] + 1 & \\textrm{if $X_{i} = Y_{j}$}\\\\\n\t    \\max \\set{L[i-1,j], L[i,j-1]} & \\textrm{if $X_{i} \\neq Y_{j}$}\n\t  \\end{array} \\right.\n\t\\end{displaymath}\n      \\item<2-> initialization:\n\t\\begin{align*}\n\t  L[i,0] &= 0, 0 \\le i \\le m \\\\\n\t  L[0,j] &= 0, 0 \\le j \\le n\n\t\\end{align*}\n    \\end{itemize}\n  \\end{block}\n\n  \\uncover<3->{\n  \\begin{center}\n    \\textcolor{red}{It {\\it may be} correct. But I feel quite uncomfortable without a proof.}\n  \\end{center}\n  }\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{alertblock}{Counterexample?}\n    \\[\n      L[i,j] = L[i-1,j-1] + 1 \\text{ if } X_{i} = Y_{j}\n    \\]\n\n    \\begin{columns}\n      \\column{0.50\\textwidth}\n\t\\begin{align*}\n\t  X &= \\textcolor{blue}{a},\\textcolor{blue}{b},c,c,\\textcolor{red}{c} \\\\\n\t  Y &= \\textcolor{blue}{a},\\textcolor{blue}{b},\\textcolor{red}{c}  \\\\\n\t  Z &= \\textcolor{blue}{a},\\textcolor{blue}{b},\\textcolor{red}{c}\n\t\\end{align*}\n      \\column{0.50\\textwidth}\n\t\\begin{align*}\n\t  X &= \\textcolor{blue}{a},\\textcolor{blue}{b},\\textcolor{blue}{c},c,c \\\\\n\t  Y &= \\textcolor{blue}{a},\\textcolor{blue}{b},\\textcolor{blue}{c}  \\\\\n\t  Z &= \\textcolor{blue}{a},\\textcolor{blue}{b},\\textcolor{blue}{c}\n\t\\end{align*}\n    \\end{columns}\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{block}{Correctness proof (I).}\n    \\begin{theorem}{}\n      $L[i,j] = L[i-1,j-1] + 1$ if $X_{i} = Y_{j}$.\n    \\end{theorem}\n\n    \\begin{theorem}{}\n      $Z[1 \\cdots k]$ with $\\textcolor{blue}{Z_{k} \\equiv X_{i} \\land Z_{k} \\equiv Y_{j}}$ \\emph{is} an LCS of $X[1 \\cdots i]$ and $Y[1 \\cdots j]$.\n    \\end{theorem}\n    \\begin{proof}\n      \\begin{enumerate}\n\t\\item $Z_{k} = X_{i} = Y_{j}$ (by contradiction)\n\t% \\item But, $Z_{k} = X_{i} \\nRightarrow Z_{k} \\equiv X_{i}; Z_{k} = Y_{i} \\nRightarrow Z_{k} \\equiv Y_{i}$ \n\t\\item $Z_{k} = X_{i} = Y_{j} \\Rightarrow \\text{ either } Z_{k} \\equiv X_{i} \\text{ or } Z_{k} \\equiv Y_{j}$ (by contradiction)\n\t  \\begin{enumerate}\n\t    \\item $Z_{k} \\equiv X_{i} \\land Z_{k} \\equiv Y_{j}$\n\t    \\item $Z_{k} \\not\\equiv X_{i} \\land Z_{k} \\equiv Y_{j}$\n\t    \\item $Z_{k} \\equiv X_{i} \\land Z_{k} \\not\\equiv Y_{j}$\n\t  \\end{enumerate}\n      \\end{enumerate}\n    
\\end{proof}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{block}{Correctness proof (II).}\n    \\begin{theorem}{}\n      $L[i,j] = \\max \\set{L[i-1,j], L[i,j-1]} \\text{ if } X_{i} \\neq Y_{j}$\n    \\end{theorem}\n\n    \\begin{theorem}{}\n      If $X_{i} \\neq Y_{j}$, then either $X_{i} \\notin \\text{LCS}[i,j]$ or $Y_{j} \\notin \\text{LCS}[i,j]$.\n    \\end{theorem}\n    \\begin{proof}\n      By contradiction.\n    \\end{proof}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 1)}\n  \\begin{exampleblock}{LCS with repetition of $X_{i}$ \\pno{2.2.8}}\n    \\begin{enumerate}\n      \\item repetition of $X_{i}$\n      \\item $k$-bounded repetition of $X_{i}$\n    \\end{enumerate}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{enumerate}\n      \\item repetition of $X_{i}$:\n\t\\begin{displaymath}\n\t  L[i,j] = \\left\\{ \\begin{array}{ll}\n\t    L[\\textcolor{red}{i}, j-1] + 1 & \\textrm{if $X_{i} = Y_{j}$}\\\\\n\t    \\max \\set{L[i-1,j], L[i,j-1]} & \\textrm{if $X_{i} \\neq Y_{j}$}\n\t  \\end{array} \\right.\n\t\\end{displaymath}\n      \\item $k$-bounded repetition of $X_{i}$:\n\n\t$X^{(k)} = X_{1}^{(k)} \\cdots X_{m}^{(k)}$\n    \\end{enumerate}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 1)}\n  \\begin{exampleblock}{Shortest common supersequence \\pno{2.2.10}}\n    \\begin{itemize}\n      \\item $X = \\set{x_{1} \\cdots x_{m}}; Y = \\set{y_{1} \\cdots y_{n}}$\n      \\item to find (the length of) an SCS of $X$ and $Y$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item subproblem $L[i,j]$: the length of an SCS of $X[1 \\cdots i]$ and $Y[1 \\cdots j]$\n      \\item goal: $L[m,n]$\n      \\item question: is $X_{i} = Y_{j}$?\n      \\item recurrence:\n\t\\begin{displaymath}\n\t  L[i,j] = \\left\\{ \\begin{array}{ll}\n\t    L[i-1, j-1] + 1 & \\textrm{if $X_{i} = Y_{j}$}\\\\\n\t    \\min \\set{L[i-1,j] + 1, L[i,j-1] + 1} & \\textrm{if $X_{i} \\neq Y_{j}$}\n\t  \\end{array} \\right.\n\t\\end{displaymath}\n    \\end{itemize}\n  \\end{block}\n\n  \\begin{alertblock}{Remark.}\n    $\\max(m,n) \\le L(m,n) \\le m+n$\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 1)}\n  \\begin{exampleblock}{Edit distance revisited}\n    \\begin{displaymath}\n      \\text{ED}[i,j] = \\min \\left\\{ \\begin{array}{ll}\n\t\\text{ED}[i-1,j] + 1 &  \\\\\n\t\\text{ED}[i,j-1] + 1 & \\\\\n\t\\text{ED}[i-1,j-1] + \\text{I}\\set{X_{i} \\neq Y_{j}} &\n      \\end{array} \\right.\n    \\end{displaymath}\n  \\end{exampleblock}\n    \n  \\uncover<2->{\n  \\begin{exampleblock}{}\n    \\begin{displaymath}\n      \\text{ED}[i,j] = \\left\\{ \\begin{array}{ll}\n        \\text{ED}[i-1,j-1] & \\text{if } X_{i} = Y_{j}  \\\\\n        \\min \\left\\{ \\begin{array}{ll}\n          \\text{ED}[i-1,j] + 1 &  \\\\\n          \\text{ED}[i,j-1] + 1 &  \\\\\n          \\text{ED}[i-1,j-1] + 1 \\\\\n\t\\end{array} \\right. 
& \\text{if } X_{i} \\neq Y_{j}\n      \\end{array} \\right.\n    \\end{displaymath}\n  \\end{exampleblock}\n  }\n\n  \\uncover<3->{\n  \\begin{theorem}\n    If $X_{i} = Y_{j}$, then $\\text{ED}[i-1,j-1] \\le \\text{ED}[i-1,j] + 1$.\n  \\end{theorem}\n  }\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 1)}\n  \\begin{exampleblock}{Shuffle of strings \\pno{2.2.12}}\n    \\begin{itemize}\n      \\item $X[1 \\cdots m]; Y[1 \\cdots n], Z[1 \\cdots m+n]$\n      \\item is $Z$ a shuffle of $X$ and $Y$?\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item $S[i,j]$: Is $Z[1 \\cdots i+j]$ a shuffle of $X[1 \\cdots i]$ and $Y[1 \\cdots j]$?\n      \\item goal: $S[m,n]$\n      \\item question: what is the relation among $X_{i}, Y_{j}, \\text{ and } Z_{i+j}$?\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 1)}\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item recurrence:\n\t\\begin{displaymath}\n\t  S[i,j] = \\left\\{ \\begin{array}{ll}\n\t    \\text{false} & \\textrm{if $Z_{i+j} \\neq X_{i} \\land Z_{i+j} \\neq Y_{j}$}\\\\\n\t    S[i-1,j] & \\textrm{if $Z_{i+j} = X_{i} \\land Z_{i+j} \\neq Y_{j}$}\\\\\n\t    S[i,j-1] & \\textrm{if $Z_{i+j} \\neq X_{i} \\land Z_{i+j} = Y_{j}$}\\\\\n\t    S[i-1,j] \\lor S[i,j-1] & \\textrm{if $Z_{i+j} = X_{i} = Y_{j}$}\n\t  \\end{array} \\right.\n\t\\end{displaymath}\n      \\item initialization: \n\t\\begin{align*}\n\t  S[0,0] &= \\text{true} \\\\\n\t  S[0,j] &= \\left\\{ \\begin{array}{ll}\n\t    \\text{true} & \\text{if } Y = Z  \\\\\n\t    \\text{false} & \\text{if } Y \\neq Z\n\t  \\end{array} \\right. \\\\\n\t  S[i,0] &= \\left\\{ \\begin{array}{ll}\n\t    \\text{true} & \\text{if } X = Z  \\\\\n\t    \\text{false} & \\text{if } X \\neq Z\n\t  \\end{array} \\right.\n\t\\end{align*}\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\subsection{2-D DP (part 2)}\n\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 2)}\n  \\begin{exampleblock}{Longest contiguous substring both forward and backward \\pno{2.2.9}}\n    \\begin{itemize}\n      \\item string $T[1 \\cdots n]$\n      \\item to find LCS both forward and backward\n    \\end{itemize}\n\n    \\begin{center}\n      d\\textcolor{blue}{ynam}icprogramming\\textcolor{blue}{many}times\n    \\end{center}\n  \\end{exampleblock}\n\n  \\begin{alertblock}{Trial and error.}\n    \\begin{itemize}\n      \\item try subproblem $L[i]$: the length of an LCS in $T[1 \\cdots i]$\n      \\item try subproblem $L[i,j]$: the length of an LCS in $T[i \\cdots j]$\n    \\end{itemize}\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 2)}\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item $L[i,j]$: the length of an LCS \\textcolor{blue}{starting with $T_{i}$ and ending with $T_{j}$}\n      \\item goal: $\\max_{1 \\le i \\le j \\le n} L[i,j]$ (simply $O(n^3)$)\n      \\item<2-> question: Is $T_{i} = T_{j}$?\n      \\item<2-> recurrence: \n\t\\begin{displaymath}\n\t  L[i,j] = \\left\\{ \\begin{array}{ll}\n\t    0 & \\textrm{if $T_{i} \\neq T_{j}$}  \\\\\n\t    L[i+1,j-1] + 1 & \\textrm{if $T_{i} = T_{j}$}\n\t  \\end{array} \\right.\n\t\\end{displaymath}\n      \\item<3-> initialization: \n\t\\begin{align*}\n\t  L[i,i] &= 0, 0 \\le i \\le n  \\\\\n\t  L[i,i+1] &= \\left\\{ \\begin{array}{ll}\n\t    1 & \\text{if } T_{i} = T_{i+1}  \\\\\n\t    0 & \\text{if } T_{i} \\neq T_{i+1}\n\t    \\end{array} \\right.\n\t\\end{align*}\n    \\end{itemize}\n  
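\n    \\medskip\n    For a quick check of the recurrence: for $T = \\texttt{abba}$, the initialization gives $L[2,3] = 1$ since $T_{2} = T_{3}$, and the recurrence then gives $L[1,4] = L[2,3] + 1 = 2$ since $T_{1} = T_{4}$; the answer $2$ corresponds to \\texttt{ab}, which occurs both forward and backward.\n  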
\\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}[fragile]{2-D DP (part 2)}\n  \\begin{block}{Code: three ways of filling the table.}\n    \\fignocaption{width = 0.50\\textwidth}{fig/three-ways-filling-table.png}\n\n    \\begin{center}\n        \\begin{verbatim}\n          for d = 2 to n-1\n            for i = 1 to n-d\n              j = i + d\n              ...\n          return max_{1 <= i <= j <= n} L[i,j]\n       \\end{verbatim}\n    \\end{center}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}[fragile]{2-D DP (part 2)}\n  \\begin{block}{Code: three ways of filling the table.}\n    \\begin{columns}[t]\n      \\column{0.50\\textwidth}\n        \\begin{verbatim}\n          for i = n-2 to 1\n            for j = i+2 to n\n              ...\n          return ...\n        \\end{verbatim}\n      \\column{0.50\\textwidth}\n        \\begin{verbatim}\n          for j = 3 to n\n            for i = j-2 to 1\n              ...\n          return ...\n       \\end{verbatim}\n    \\end{columns}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 2)}\n  \\begin{exampleblock}{Longest subsequence palindrome \\pno{2.2.15 (a)}}\n    \\begin{itemize}\n      \\item string $S[1 \\cdots n]$\n      \\item to find (the length of) a longest subsequence palindrome\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item subproblem $L[i,j]$: the length of the LSP in $S[i \\cdots j]$\n      \\item goal: $L[1,n]$\n      \\item question: is $S[i] = S[j]$?\n      \\item recurrence\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}<beamer:0>{2-D DP (part 2)}\n  \\begin{exampleblock}{Longest subsequence palindrome \\pno{2.2.15 (b)}}\n    \\begin{itemize}\n      \\item string $S[1 \\cdots n]$\n      \\item decompose into a sequence of palindromes\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item $\\text{Num}[i,j]$: the minimum number of palindromes obtained from $S[i \\cdots j]$\n      \\item subproblem $\\text{MinPals}[i]$: the minimum number of palindromes obtained from $S[1 \\cdots i]$\n      \\item goal: $\\text{MinPals}[n]$\n      \\item question: what is the start index of the last palindrome?\n      \\item recurrence:\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 2)}\n  \\begin{exampleblock}{String split problem \\pno{2.2.16}}\n    \\begin{itemize}\n      \\item split a string $S$ into many pieces\n      \\item cost $|S| = n \\Rightarrow n$\n      \\item given locations of $m$ cuts: $\\textcolor{gray}{C_{0}}, C_{1}, \\cdots, C_{m}, \\textcolor{gray}{C_{m+1}}$\n      \\item to find the MinCost of splitting $S$ into $m+1$ pieces $S_{0} \\cdots S_{m}$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item subproblem: $\\text{MinCost}[i,j]$: the minimum cost of splitting substring $S_{i} \\cdots S_{j-1}$ using cuts $C_{i+1} \\cdots C_{j-1}$\n      \\item goal: $\\text{MinCost}[0,m+1]$\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n\\begin{frame}{2-D DP (part 2)}\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item question: what is the first cut in $C_{i+1} \\cdots C_{j-1}$?\n      \\item recurrence:\n\t\\[\n\t  \\text{MinCost}[i,j] = \\min_{i < k < j} \\left( \\text{MinCost}[i,k] + \\text{MinCost}[k,j] + l(S_{i} \\cdots S_{j-1}) \\right)\n\t\\]\n      \\item initialization:\n\t\\[\n\t  
\\text{MinCost}[i, i+1] = 0\n\t\\]\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%\n", "meta": {"hexsha": "d08fbe7dd5f1cd5f97544d418a586745e21862a1", "size": 12667, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/2d-dp.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/2d-dp.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-dp-2016-06-16/sections/2d-dp.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 31.5885286783, "max_line_length": 147, "alphanum_fraction": 0.5541959422, "num_tokens": 4980, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5676540654971638}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage[colorinlistoftodos]{todonotes}\n\n\\usepackage{fancyvrb}\n\\DefineVerbatimEnvironment{code}{Verbatim}{fontsize=\\small}\n\\DefineVerbatimEnvironment{example}{Verbatim}{fontsize=\\small}\n\\newcommand{\\ignore}[1]{}\n\n\\title{On general balanced trees}\n\\author{Slavomir Kaslev}\n\n\\begin{document}\n\\maketitle\n\n\\section{Definition}\n\nGeneral balanced trees are trees with all internal nodes having at least one\nchild and all leafs being equidistant from the root. Using algebraic data types,\none would define them as\n\\begin{code}\n  data F a = F0 a | F1 (F (G a))\n  data G a = G0 a | G1 a (G a)\n\\end{code}\nor equivalently using generalized algebraic data types\n\\begin{code}\n  data F a where\n    F0 :: a -> F a\n    F1 :: F (G a) -> F a\n  data G a where\n    G0 :: a -> G a\n    G1 :: a -> G a -> G a\n\\end{code}\n\n\\section{Generating function}\n\nThe generating function $f(x)$ of general balanced trees satisfies the equations\n\\begin{align*}\n  f(x) &= x + f(g(x)) \\\\\n  g(x) &= x + x g(x)\n\\end{align*}\nBy noticing that $g(x) = \\frac{x}{1-x}$ we can rewrite it as a single equation\n\\begin{equation}\n  f(x) = x + f(\\frac{x}{1-x})\\label{eq:eq1}\n\\end{equation}\nThe solution to \\eqref{eq:eq1} is\n\\begin{equation}\n  f(x) = \\frac{d}{dx}\\ln\\Gamma(-\\frac{1}{x})\\label{eq:eq2}\n\\end{equation}\nor written in terms of the digamma function $\\psi(x) = \\frac{d}{dx}\\ln\\Gamma(x)\n= \\frac{\\Gamma^\\prime(x)}{\\Gamma(x)}$\n$$f(x) = \\psi(-\\frac{1}{x})$$\n\n\\section{Proof}\n\nA quick proof starts with the functional equation of the gamma function\n$$\\Gamma(x+1) = x \\Gamma(x)$$\nTaking logarithm of both sides of the equation\n$$\\ln\\Gamma(x+1) = \\ln(x) + \\ln\\Gamma(x)$$\nand differentiating gives\n$$\\frac{d}{dx}\\ln\\Gamma(x+1) = \\frac{1}{x} + \\frac{d}{dx}\\ln\\Gamma(x)$$\nBy substituting $x \\to -\\frac{1}{x}$ and rearranging terms we obtain\n$$\\frac{d}{dx}\\ln\\Gamma(-\\frac{1}{x}) = x + \\frac{d}{dx}\\ln\\Gamma(1-\\frac{1}{x})$$\nSince $g(x) = \\frac{x}{1-x}$ we know that $1-\\frac{1}{x} = -\\frac{1}{g(x)}$ and\ntherefore we have\n$$\\frac{d}{dx}\\ln\\Gamma(-\\frac{1}{x}) = x + \\frac{d}{dx}\\ln\\Gamma(-\\frac{1}{g(x)})$$\nFinally, the substitution $f(x) = \\frac{d}{dx}\\ln\\Gamma(-\\frac{1}{x})$ brings us\nto our original equation\n\\begin{align*}\n  f(x) &= x + f(g(x)) \\\\\n  g(x) &= x + x g(x)\n\\end{align*}\n\n\\end{document}\n", "meta": {"hexsha": "a89029469e443e3190e7b3a01ea1269e567405da", "size": 2410, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "balanced.tex", "max_stars_repo_name": "skaslev/papers", "max_stars_repo_head_hexsha": "592ef26e52ec6a4b61f9c0c198e9c459cef5b00a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "balanced.tex", "max_issues_repo_name": "skaslev/papers", "max_issues_repo_head_hexsha": "592ef26e52ec6a4b61f9c0c198e9c459cef5b00a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "balanced.tex", "max_forks_repo_name": "skaslev/papers", "max_forks_repo_head_hexsha": "592ef26e52ec6a4b61f9c0c198e9c459cef5b00a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, 
"max_forks_repo_forks_event_min_datetime": "2020-07-09T17:16:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-09T17:16:43.000Z", "avg_line_length": 30.125, "max_line_length": 84, "alphanum_fraction": 0.6585062241, "num_tokens": 863, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943603346811, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.567654057007575}}
{"text": "\\chapter{Bit operations}\n\\label{chap:Bit operations}\n\nlibzahl provides a number of functions that operate on\nbits. These can sometimes be used instead of arithmetic\nfunctions for increased performance. You should read\nthe sections in order.\n\n\\vspace{1cm}\n\\minitoc\n\n\n\\newpage\n\\section{Boundary}\n\\label{sec:Boundary}\n\nTo retrieve the index of the lowest set bit, use\n\n\\begin{alltt}\n   size_t zlsb(z_t a);\n\\end{alltt}\n\n\\noindent\nIt will return a zero-based index, that is, if the\nleast significant bit is indeed set, it will return 0.\n\nIf {\\tt a} is a power of 2, it will return the power\nof which 2 is raised, effectively calculating the\nbinary logarithm of {\\tt a}. Note, this is only if\n{\\tt a} is a power of two. More generally, it returns\nthe number of trailing binary zeroes, if equivalently\nthe number of times {\\tt a} can evenly be divided by\n2. However, in the special case where $a = 0$,\n{\\tt SIZE\\_MAX} is returned.\n\nA similar function is\n\n\\begin{alltt}\n   size_t zbit(z_t a);\n\\end{alltt}\n\n\\noindent\nIt returns the minimal number of bits require to\nrepresent an integer. That is, $\\lceil \\log_2 a \\rceil - 1$,\nor equivalently, the number of times {\\tt a} can be\ndivided by 2 before it gets the value 0. However, in\nthe special case where $a = 0$, 1 is returned. 0 is\nnever returned. If you want the value 0 to be returned\nif $a = 0$, write\n\n\\begin{alltt}\n   zzero(a) ? 0 : zbits(a)\n\\end{alltt}\n\nThe definition ``it returns the minimal number\nof bits required to represent an integer,''\nholds true if $a = 0$, the other divisions\ndo not hold true if $a = 0$.\n\n\n\\newpage\n\\section{Shift}\n\\label{sec:Shift}\n\nThere are two functions for shifting bits\nin integers:\n\n\\begin{alltt}\n   void zlsh(z_t r, z_t a, size_t b);\n   void zrsh(z_t r, z_t a, size_t b);\n\\end{alltt}\n\n\\noindent\n{\\tt zlsh} performs a left-shift, and {\\tt zrsh}\nperforms a right-shift. That is, {\\tt zlsh} adds\n{\\tt b} trailing binary zeroes, and {\\tt zrsh}\nremoves the lowest {\\tt b} binary digits. So if\n\n$a = \\phantom{00}10000101_2$ then\n\n$r = 1000010100_2$ after calling {\\tt zlsh(r, a, 2)}, and\n\n$r = \\phantom{0100}100001_2$ after calling {\\tt zrsh(r, a, 2)}.\n\\vspace{1em}\n\n{\\tt zlsh(r, a, b)} is equivalent to $r \\gets a \\cdot 2^b$,\nand {\\tt zrsh(r, a, b)} is equivalent to $r \\gets a \\div 2^b$,\nwith truncated division, {\\tt zlsh} and {\\tt zrsh} are\nsignificantly faster than {\\tt zpowu} and should be used\nwhenever possible. {\\tt zpowu} does not check if it is\npossible for it to use {\\tt zlsh} instead, even if it\nwould, {\\tt zlsh} and {\\tt zrsh} would still be preferable\nin most cases because it removes the need for {\\tt zmul}\nand {\\tt zdiv}, respectively.\n\n{\\tt zlsh} and {\\tt zrsh} are implemented in two steps:\n(1) shift whole characters, that is, groups of aligned\n64 bits, and (2) shift on a bit-level between characters.\n\nIf you are implementing a calculator, you may want to\ncreate a wrapper for {\\tt zpow} that uses {\\tt zlsh}\nwhenever possible. One such wrapper could be\n\n\\begin{alltt}\n   void\n   pow(z_t r, z_t a, z_t b)\n   \\{\n       size_t s1, s2;\n       if ((s1 = zlsb(a)) + 1 == zbits(a) &&\n                     zbits(b) <= 8 * sizeof(SIZE_MAX)) \\{\n           s2 = zzero(b) ? 
0 : b->chars[0];\n           if (s2 == 0 || s1 <= SIZE_MAX / s2) \\{\n               zsetu(r, 1);\n               zlsh(r, r, s1 * s2);\n               return;\n           \\}\n       \\}\n       zpow(r, a, b);\n   \\}\n\\end{alltt}\n\n\n\\newpage\n\\section{Truncation}\n\\label{sec:Truncation}\n\nIn \\secref{sec:Shift} we have seen how bit-shift\noperations can be used to multiply or divide by a\npower of two. There is also a bit-truncation\noperation: {\\tt ztrunc}, which is used to keep\nonly the lowest bits, or equivalently, calculate\nthe remainder of a division by a power of two.\n\n\\begin{alltt}\n   void ztrunc(z_t r, z_t a, size_t b);\n\\end{alltt}\n\n\\noindent\nis consistent with {\\tt zmod}; like {\\tt zlsh} and\n{\\tt zrsh}, {\\tt a}'s sign is preserved into {\\tt r}\nassuming the result is non-zero.\n\n{\\tt ztrunc(r, a, b)} stores only the lowest {\\tt b}\nbits in {\\tt a} into {\\tt r}, or equivalently,\ncalculates $r \\gets a \\mod 2^b$. For example, if\n\n$a = 100011000_2$ then\n\n$r = \\phantom{10001}1000_2$ after calling\n{\\tt ztrunc(r, a, 4)}.\n\n\n\\newpage\n\\section{Split}\n\\label{sec:Split}\n\nIn \\secref{sec:Shift} and \\secref{sec:Truncation}\nwe have seen how bit operations can be used to\ncalculate division by a power of two and reduction\nmodulo a power of two efficiently using\nbit-shift and bit-truncation operations. libzahl\nalso has a bit-split operation that can be used\nto calculate both the quotient and the remainder\nmodulo a power of two in the same operation, or\nequivalently, storing low bits in one integer and\nhigh bits in another integer.\nThis function is\n\n\\begin{alltt}\n   void zsplit(z_t high, z_t low, z_t a, size_t b);\n\\end{alltt}\n\n\\noindent\nUnlike {\\tt zdivmod}, it is not more efficient\nthan calling {\\tt zrsh} and {\\tt ztrunc}, but\nit is more convenient. {\\tt zsplit} requires\nthat {\\tt high} and {\\tt low} are\ndistinct references.\n\nCalling {\\tt zsplit(high, low, a, b)} is\nequivalent to\n\n\\begin{alltt}\n   ztrunc(low, a, b);\n   zrsh(high, a, b);\n\\end{alltt}\n\n\\noindent\nassuming {\\tt a} and {\\tt low} are not the\nsame reference (reverse the order of the\nfunctions if they are the same reference.)\n\n{\\tt zsplit} copies the lowest {\\tt b} bits\nof {\\tt a} to {\\tt low}, and the rest of the\nbits to {\\tt high}, with the lowest {\\tt b}\nbits removed. For example, if $a = 1010101111_2$,\nthen $high = 101010_2$ and $low = 1111_2$\nafter calling {\\tt zsplit(high, low, a, 4)}.\n\n{\\tt zsplit} is especially useful in\ndivide-and-conquer algorithms.\n\n\n\\newpage\n\\section{Bit manipulation}\n\\label{sec:Bit manipulation}\n\n\nThe function\n\n\\begin{alltt}\n   void zbset(z_t r, z_t a, size_t bit, int mode);\n\\end{alltt}\n\n\\noindent\nis used to manipulate single bits in {\\tt a}. It will\ncopy {\\tt a} into {\\tt r} and then, in {\\tt r}, either\nset, clear, or flip the bit with the index {\\tt bit}\n\u2014 the least significant bit has the index 0. The\naction depends on the value of {\\tt mode}:\n\n\\begin{itemize}\n\\item\n$mode > 0$ ($+1$): set\n\\item\n$mode = 0$ ($0$): clear\n\\item\n$mode < 0$ ($-1$): flip\n\\end{itemize}\n\n\n\\newpage\n\\section{Bit test}\n\\label{sec:Bit test}\n\nlibzahl provides a function for testing whether a bit\nin a big integer is set:\n\n\\begin{alltt}\n   int zbtest(z_t a, size_t bit);\n\\end{alltt}\n\n\\noindent\nIt will return 1 if the bit with the index {\\tt bit}\nis set in {\\tt a}, counting from the least significant\nbit, starting at zero. 
0 is returned otherwise. The\nsign of {\\tt a} is ignored. For example, if\n$a = \\pm 1010_2$, then {\\tt zbtest(a, 1)} and\n{\\tt zbtest(a, 3)} return 1, while {\\tt zbtest(a, 0)}\nand {\\tt zbtest(a, 2)} return 0.\n\nWe can think of this like so: consider\n\n$$ \\lvert a \\rvert = \\sum_{i = 0}^\\infty k_i 2^i,~ k_i \\in \\{0, 1\\}, $$\n\n\\noindent\n{\\tt zbtest(a, b)} returns $k_b$. Equivalently, we can\nthink that {\\tt zbtest(a, b)} returns whether $b \\in B$\nwhere $B$ is defined by\n\n$$ \\lvert a \\rvert = \\sum_{b \\in B} 2^b,~ B \\subset \\textbf{Z}_+, $$\n\n\\noindent\nor as right-shifting $a$ by $b$ bits and returning\nwhether the least significant bit is set.\n\n{\\tt zbtest} always returns 1 or 0, but for good\ncode quality, you should avoid testing against 1;\nrather, you should test whether the value is a\ntruth-value or a falsehood-value. However, there\nis nothing wrong with depending on the value being\nrestricted to either 1 or 0 if you want to\nsum up returned values or otherwise use them in\nnew values.\n\n\n\\newpage\n\\section{Connectives}\n\\label{sec:Connectives}\n\nlibzahl implements the four basic logical\nconnectives: and, or, exclusive or, and not.\nThe functions for these are named {\\tt zand},\n{\\tt zor}, {\\tt zxor}, and {\\tt znot},\nrespectively.\n\nThe connectives apply to each bit in the\nintegers, as well as the sign. The sign is\ntreated as a bit that is set if the integer\nis negative, and as cleared otherwise. For\nexample (integers are in binary):\n\n\\begin{alltt}\n   zand(r, a, b)              zor(r, a, b)\n   a = +1010  (input)         a = +1010  (input)\n   b = -1100  (input)         b = -1100  (input)\n   r = +1000  (output)        r = -1110  (output)\n\n   zxor(r, a, b)              znot(r, a)\n   a = +1010  (input)         a = +1010  (input)\n   b = -1100  (input)         r = -0101  (output)\n   r = -0110  (output)\n\\end{alltt}\n\nRemember, in libzahl, integers are represented\nwith sign and magnitude, not two's complement,\neven when using these connectives. Therefore,\nmore work than just changing the name of the\ncalled function may be required when moving\nbetween big integer libraries. 
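\n\nTo make the sign-and-magnitude semantics concrete,\nhere is a minimal illustrative sketch (our own helper,\nnot part of the libzahl API) of what {\\tt znot}\ncomputes, expressed on ordinary machine integers:\n\n\\begin{alltt}\n   #include <stdlib.h>\n\n   /* Sign-magnitude NOT: flip the bits of |a| up to and\n      including its highest set bit, and toggle the sign. */\n   long\n   znot_sm(long a)\n   \\{\n       unsigned long m = labs(a), mask = 1;\n       if (m == 0)\n           return 0;\n       while (mask < m)\n           mask = mask * 2 + 1;  /* mask = 2^bits(m) - 1 */\n       return a > 0 ? -(long)(m ^ mask) : (long)(m ^ mask);\n   \\}\n\\end{alltt}\n\n\\noindent\nFor example, {\\tt znot\\_sm} maps $+1010_2$ to $-0101_2$\n(matching the {\\tt znot} example above) and $-101_2$ to $+10_2$.\n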
Consequently,\n{\\tt znot} does not flip bits that are higher\nthan the highest set bit, which means that\n{\\tt znot} is nilpotent rather than self dual.\n\nBelow is a list of the value of {\\tt a} when\n{\\tt znot(a, a)} is called repeatedly.\n\n\\begin{alltt}\n   10101010\n   -1010101\n     101010\n     -10101\n       1010\n       -101\n         10\n         -1\n          0\n          0\n          0\n\\end{alltt}\n", "meta": {"hexsha": "d7998f815dbbdde57ccbec7f8c6ba3495a19a383", "size": 8814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/bit-operations.tex", "max_stars_repo_name": "maandree/libzahl", "max_stars_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2016-03-06T10:34:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T10:40:14.000Z", "max_issues_repo_path": "doc/bit-operations.tex", "max_issues_repo_name": "maandree/libzahl", "max_issues_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2016-05-09T12:34:47.000Z", "max_issues_repo_issues_event_max_datetime": "2017-04-22T13:11:49.000Z", "max_forks_repo_path": "doc/bit-operations.tex", "max_forks_repo_name": "maandree/libzahl", "max_forks_repo_head_hexsha": "cf4b5d338225ac30d8f7434768c45619928bf3bf", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-10-14T12:23:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-23T12:10:26.000Z", "avg_line_length": 26.8719512195, "max_line_length": 71, "alphanum_fraction": 0.6830043113, "num_tokens": 2789, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7606506581031359, "lm_q1q2_score": 0.5675511246730353}}
{"text": "%&latex\n\n\\documentclass[10pt,landscape]{article}\n%\\documentclass[10pt]{article}\n\n%\n% Bring in the AMS math environment\n%\n\\usepackage{amsmath}\n\n%\n% Bring in the common page setup\n%\n\\usepackage{trickhlaenv}\n\n%\n% Define vector and quaternion macros\n%\n\\usepackage{trickhlamath}\n\n\n%\n% Bring in the hyper ref environment\n%\n\\usepackage[colorlinks,bookmarks]{hyperref}\n\\hypersetup{\n   pdftitle={TrickHLA Package Mathematical Nomenclature},\n   pdfauthor={David Hammen}}\n\n\\newcommand{\\trickhlamath}{\\texorpdfstring{\\tt trickhlamath}{trickhlamath}}\n\n\\newcommand{\\symhdr}[1]{%\n\\multicolumn{4}{l}{\\rule{0pt}{3ex}\\parbox[c]{0.6\\textwidth}{#1}} \\\\\n\\cline{2-4} \\rule{3em}{0pt} & {\\bf Symbol} & {\\bf Represents} & {\\bf Units (MKS)}\\\\ \\cline{2-4} \\cline{2-4}}\n\n\\newcommand{\\acchdr}[1]{%\n\\multicolumn{4}{l}{\\rule{0pt}{3ex}\\parbox[c]{0.6\\textwidth}{#1}} \\\\\n\\cline{2-4} \\rule{3em}{0pt} & {\\bf Description} & {\\bf Remark} & {\\bf Example}\\\\ \\cline{2-4} \\cline{2-4}}\n\n\\def\\purpwidth{0.35\\textwidth}\n\\def\\argswidth{0.15\\textwidth}\n\n\\newcommand{\\mlentry}[2]{%\n  \\minipage[t]{#1}{\\flushleft{#2}\\endflushleft}\\endminipage}\n\n\n\\newcommand{\\slinethree}{\\rule{0pt}{0.2ex}&\\rule{0pt}{0.2ex}&\\rule{0pt}{0.2ex}&\\rule{0pt}{0.2ex}}\n\\newcommand{\\slinefour}{\\rule{0pt}{1.2ex}&&&&}\n\n\n\\begin{document}\n\n\\title{TrickHLA Package Mathematical Nomenclature}\n\\author{David Hammen}\n\\date{12/21/05}\n\n\\pdfbookmark{Title Page}{titlepage}\n\\maketitle\n\n\\tableofcontents\n\n\\section*{Introduction}\nThis note describes the preferred nomenclature for the TrickHLA package\ndocumentation and describes a set of \\LaTeX\\ macros that implement\nthe nomenclature.\n\n\n\\pagebreak\n\\section{Nomenclature}\nThis section describes the preferred nomenclature for the TrickHLA package\ndocumentation. The preferred nomenclature\n\\begin{itemize}\n\\item Uses International System (SI) units with standard SI abbreviations.\n\\item Represents scalars in plain math font, vectors and matrices in bold math font,\nand quaternions in caligraphy font.\n\\item Represents vectors, matrices, and quaternions in a standard, intuitive style\nwith an obvious translation to variable names. For example,\n$\\framerelvect A x a b$ is the vector from point $a$ to point $b$ as expressed in\nreference frame $A$. 
This is denoted as the ``arrow-separated'' format.\nAlternative representations such as\n\\trickhlamathcommamode $\\relvect x a b$ \\trickhlamatharrowmode (``comma-separated'')\nand\n\\trickhlamathstackedmode $\\relvect x a b$ \\trickhlamatharrowmode (``stacked'')\nwere\ndiscussed but discarded for various reasons.\n\\item Quaternions that represent a transformation from one frame to another\nare represented as unit left transformation quaternions,\nconsistent with the quaternion functions defined in the Trick core.\n\\end{itemize}\n\nThe following tables depict the preferred nomenclature.\nIn cases where alternative representation schemes exist,\nthe preferred form is listed first, followed by alternatives.\n\n \\pagebreak\n\\subsection{Basics}\\label{sec:nomen_basics}\n\n\\begin{tabular}{l||l|l|l|}\n\\acchdr{{\\bf{Display style for vectors, matrices, and quaternions}}}\n\\rule{0pt}{3ex} & Scalar &\n  in plain math font & $s$ \\\\\n\\rule{0pt}{1.5ex} & Vector &\n  in bold math font& $\\vect x$ \\\\\n\\rule{0pt}{1.5ex} & Matrix &\n  in bold math font& $\\mat T$ \\\\\n\\rule{0pt}{1.5ex} & Quaternion&\n  in calligraphy font, uppercase & $\\quat Q$ \\\\\n\\cline{2-4}\n\\symhdr{{\\bf{Scalar Symbols}}}\n\\rule{0pt}{3ex} & $\\alpha$ & Angular acceleration & $r/s^2$ \\\\\n\\rule{0pt}{1.5ex} & $\\alpha, \\beta, \\theta, \\phi, \\psi$ & Angle & $r$ \\\\\n\\rule{0pt}{1.5ex} & $\\omega$ & Angular rate & $r/s$ \\\\\n\\rule{0pt}{1.5ex} & $d, l, s$ & Distance or length & $m$ \\\\\n\\rule{0pt}{1.5ex} & $m$ & Mass & $kg$ \\\\\n\\rule{0pt}{1.5ex} & $v, s$ & Speed & $m/s$ \\\\\n\\cline{2-4}\n\\symhdr{{\\bf{Vector Symbols}}}\n\\rule{0pt}{3ex} & $\\vect \\alpha$ & Angular acceleration & $r/s^2$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect \\tau$ & Torque & $Nm$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect \\omega$ & Angular velocity & $r/s$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect a$ & Acceleration & $m/s^2$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect L$ & Angular momentum & $Nms \\; (kg \\, m^2/s)$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect v$ & Velocity & $m/s$ \\\\\n\\rule{0pt}{1.5ex} & $\\vect x$ & Position & $m$ \\\\\n\\cline{2-4}\n\\symhdr{{\\bf{Matrix Symbols}}}\n\\rule{0pt}{3ex} & $\\mat I$ & Inertia tensor & $kg \\, m^2$ \\\\\n\\rule{0pt}{1.5ex} & $\\mat T$ & Transformation matrix & $--$ \\\\\n\\cline{2-4}\n\\symhdr{{\\bf{Quaternion Symbols}}}\n\\rule{0pt}{3ex} & $\\quat Q$ & Left transformation quaternion & $--$ \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\pagebreak\n\n\\subsection{Adornments}\\label{sec:nomen_addorn}\n\n\\begin{tabular}{l||l|l|l|}\n\\acchdr{{\\bf{Vectors}}}\n\\rule{0pt}{3ex} & Vector from origin to $b$ &Subscripted $b$& $\\absvect x b$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Vector from $a$ to $b$ &\n Arrow, comma, stacked formats &\n\\makebox[1.5cm]{$\\relvect x a b$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\relvect x a b$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\relvect x a b$ \\trickhlamatharrowmode} \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Vector expressed in frame $A$ &&\n\\makebox[1.5cm]{$\\framevect A x$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\framevect A x$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\framevect A x$ \\trickhlamatharrowmode} \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Vector from $a$ to $b$ expressed in frame $A$ &&\n\\makebox[1.5cm]{$\\framerelvect A x a b$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\framerelvect A x a b$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\framerelvect A x a b$ \\trickhlamatharrowmode} 
\\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Time derivative of above, observer in frame $A$&&\n\\makebox[1.5cm]{$\\framerelvdot A x a b$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\framerelvdot A x a b$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\framerelvdot A x a b$ \\trickhlamatharrowmode} \\\\\n\\rule{0pt}{3ex} & Vector time derivative, observer in frame $A$&\n  Dot format & $\\framedot A {\\vect x}$ \\\\\n\\rule{0pt}{3ex} & &\n  $or$ d/dt format & $\\frac{d}{dt_A}{\\vect x}$ \\\\\n\\cline{2-4}\n\\acchdr{{\\bf{Matrices}}}\n\\rule{0pt}{3ex} & Transformation from $A$ to $B$ &\n Arrow, comma, stacked formats &\n\\makebox[1.5cm]{$\\tmat A B$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\tmat A B$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\tmat A B$ \\trickhlamatharrowmode} \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Matrix product &\n  No operator & $\\MxM{\\tmat B C}{\\tmat A B}$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Single matrix transpose &\n  Superscript $\\top$ & $\\matT T$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Matrix product transpose &\n  Superscript $\\top$ & $\\left(\\MxM{\\tmat B C}{\\tmat A B}\\right){^\\top}$ \\\\\n\\cline{2-4}\n\\acchdr{{\\bf{Quaternions}}}\n\\rule{0pt}{3ex} & Quaternion from $A$ to $B$ &\n Arrow, comma, stacked formats &\n\\makebox[1.5cm]{$\\tquat A B$}\n\\makebox[1.5cm]{\\trickhlamathcommamode $\\tquat A B$ \\trickhlamatharrowmode}\n\\makebox[1.5cm]{\\trickhlamathstackedmode $\\tquat A B$ \\trickhlamatharrowmode} \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Quaternion product &\n  No operator & $\\QxQ{\\tquat B C}{\\tquat A B}$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Single quaternion conjugate &\n  Superscript $\\star$ & $\\quatconj Q$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Quaternion product conjugate &\n  Superscript $\\star $ & $\\quatconjlr{\\QxQ{\\tquat B C}{\\tquat A B}}$ \\\\\n%\\cline{2-4}\n\\rule{0pt}{3ex} & Quaternion components &\n  Scalar $+$ vector $or$ four-vector&\n  $\\quatsv{q_s}{\\vect{q_v}} \\;or\\; \\bmatrix q_s \\\\ q_x \\\\ q_y \\\\ q_z\\endbmatrix$ \\\\\n\\cline{2-4}\n\\end{tabular}\n\n\\pagebreak\n\\section{{\\trickhlamath}\\ Macros}\n\nThis section provides a brief description of the macros defined in\n\\verb|trickhlamath.sty| in the form of tables that describe the macros,\ntheir arguments, sample usage of the macros, and the displayed math that results.\n\nNote: All of the macros defined in \\verb|trickhlamath.sty| assume {\\LaTeX} is in math mode.\n\n\\pagebreak\n\\subsection{Vector Macros}\n\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Arguments} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\vect| &\n  \\mlentry{\\purpwidth}{Display a symbol that represents a vector (typically a lowercase letter)} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\vect{x}| & $\\vect{x}$ \\\\ \\slinefour \\\\\n\\verb|\\vhat| &\n  \\mlentry{\\purpwidth}{Display a symbol that represents a unit vector} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\vhat{x}| & $\\vhat{x}$ \\\\ \\slinefour \\\\\n\\verb|\\vdot| &\n  \\mlentry{\\purpwidth}{Time derivative of a vector} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\vdot{x}| & $\\vdot{x}$ \\\\ \\slinefour \\\\\n\\verb|\\framevect| &\n  \\mlentry{\\purpwidth}{Vector represented in a specific frame} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. 
Vector} &\n  \\verb|\\framevect B x| & $\\framevect B x$  \\\\ \\slinefour \\\\\n  \\verb|\\framevdot| &\n  \\mlentry{\\purpwidth}{Time derivative of a vector represented in a specific frame} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. Vector} &\n  \\verb|\\framevdot B x| & $\\framevdot B x$  \\\\ \\slinefour \\\\\n\\verb|\\relvect| &\n  \\mlentry{\\purpwidth}{Vector between two items} &\n  \\mlentry{\\argswidth}{1. Vector\\\\ 2. Source\\\\ 3. Destination} &\n  \\verb|\\relvect x a b| & $\\relvect x a b$  \\\\ \\slinefour \\\\\n\\verb|\\relvdot| &\n  \\mlentry{\\purpwidth}{Time derivative of a vector between two items} &\n  \\mlentry{\\argswidth}{1. Vector\\\\ 2. Source\\\\ 3. Destination} &\n  \\verb|\\relvdot x a b| & $\\relvdot x a b$  \\\\ \\slinefour \\\\\n\\verb|\\framerelvect| &\n  \\mlentry{\\purpwidth}{Vector between two items} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. Vector\\\\ 3. Source\\\\ 4. Destination} &\n  \\verb|\\framerelvect B x a b| & $\\framerelvect B x a b$  \\\\ \\slinefour \\\\\n\\verb|\\framerelvdot| &\n  \\mlentry{\\purpwidth}{Time derivative of a vector between two items} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. Vector\\\\ 3. Source\\\\ 4. Destination} &\n  \\verb|\\framerelvdot B x a b| & $\\framerelvdot B x a b$  \\\\ \\slinefour \\\\\n\\verb|\\vectxyz| &\n  \\mlentry{\\purpwidth}{Construct a vector from its components} &\n  \\mlentry{\\argswidth}{Three components} &\n  \\verb|\\vectxyz x y z| & $\\vectxyz x y z$  \\\\ \\slinefour \\\\\n\\hline\n\\end{tabular}\n\n\\pagebreak\n\n\\subsection{Quaternion Macros}\n\\\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Arguments} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\quat| &\n  \\mlentry{\\purpwidth}{Display a symbol that represents a quaternion (typically Q)} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\quat{Q}| & $\\quat{Q}$ \\\\ \\slinefour \\\\\n\\verb|\\qdot| &\n  \\mlentry{\\purpwidth}{Time derivative of a quaternion} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\qdot{Q}| & $\\qdot{Q}$ \\\\ \\slinefour \\\\\n\\verb|\\quatconj| &\n  \\mlentry{\\purpwidth}{Conjugate of a quaternion} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\quatconj{Q}| & $\\quatconj{Q}$ \\\\ \\slinefour \\\\\n\\verb|\\quatconjdot| &\n  \\mlentry{\\purpwidth}{Time derivative of the conjugate of a quaternion} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\quatconjdot{Q}| & $\\quatconjdot{Q}$ \\\\ \\slinefour \\\\\n\\verb|\\quatsv| &\n  \\mlentry{\\purpwidth}{Construct a quaternion from a scalar and $3$-vector} &\n  \\mlentry{\\argswidth}{1. Scalar\\\\ 2. Vector} &\n  \\minipage[t]{0pt}{\\verb|\\quatsv| \\\\ \\verb|  {q_s}| \\\\ \\verb|  {\\vect{q_v}}|}\\endminipage &\n  \\raisebox{-1.5ex}{$\\quatsv{q_s}{\\vect{q_v}}$} \\\\ \\slinefour \\\\\n\\verb|\\quattrot| &\n  \\mlentry{\\purpwidth}{Construct a transformation quaternion from an angle and unit vector} &\n  \\mlentry{\\argswidth}{1. Angle\\\\ 2. 
Unit vector} &\n  \\minipage[t]{0pt}{\\verb|\\quattrot| \\\\ \\verb|  \\theta| \\\\ \\verb|  {\\vhat u}|}\\endminipage &\n  \\raisebox{-2.5ex}{$\\quattrot{\\theta}{\\vhat u}$} \\\\ \\slinefour \\\\\n\\verb|\\conjop| &\n  \\mlentry{\\purpwidth}{Conjugate operator} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\conjop \\quat Q| & $\\conjop \\quat Q$ \\\\ \\slinefour \\\\\n\\verb|\\scalarpart| &\n  \\mlentry{\\purpwidth}{Scalar part operator} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\scalarpart \\quat Q| & $\\scalarpart \\quat Q$ \\\\ \\slinefour \\\\\n\\verb|\\vectorpart| &\n  \\mlentry{\\purpwidth}{Vector part operator} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\vectorpart \\quat Q| & $\\vectorpart \\quat Q$ \\\\ \\slinefour \\\\\n\\verb|\\QBI| &\n  \\mlentry{\\purpwidth}{Inertial-to-body quaternion} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\QBI| & $\\QBI$ \\\\ \\slinefour \\\\\n\\hline\n\\end{tabular}\n\n\n\\pagebreak\n\n\\subsection{Matrix Macros}\n\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Arguments} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\mat| &\n  \\mlentry{\\purpwidth}{Display a symbol that represents a matrix (typically an uppercase letter)} &\n  \\mlentry{\\argswidth}{Symbol} &\n  \\verb|\\mat{T}| & $\\mat{T}$ \\\\ \\slinefour \\\\\n\\verb|\\framemat| &\n  \\mlentry{\\purpwidth}{Matrix represented in some reference frame} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. Matrix} &\n  \\verb|\\framemat B \\inertia| & $\\framemat B \\inertia$ \\\\ \\slinefour \\\\\n\\verb|\\diagmatrix| &\n  \\mlentry{\\purpwidth}{$3\\times3$ diagonal matrix} &\n  \\mlentry{\\argswidth}{Three diagonal elements} &\n  \\verb|\\diagmatrix 1 2 3| & $\\diagmatrix 1 2 3$ \\\\ \\slinefour \\\\\n\\verb|\\identmatrix| &\n  \\mlentry{\\purpwidth}{$3\\times3$ identity matrix} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\identmatrix| & $\\identmatrix$ \\\\ \\slinefour \\\\\n\\verb|\\inertia| &\n  \\mlentry{\\purpwidth}{Inertia matrix} &\n  \\mlentry{\\argswidth}{None} &\n  \\verb|\\inertia| & $\\inertia$ \\\\ \\slinefour \\\\\n\\hline\n\\end{tabular}\n\n\\pagebreak\n\n\\subsection{Multiplication Macros}\n\nThe preferred nomenclature for depicting the product of two (or more) composite objects\nis ``implicit multiplication'':  The operands are written with no intervening operator.\nHowever,\na small amount of white space between the operands helps to distinguish the operands.\nThe multiplication macros provide a default amount of white space between operands.\n\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Arguments} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\MxM| &\n  \\mlentry{\\purpwidth}{Product of two matrices} &\n  \\mlentry{\\argswidth}{Matrix 1\\\\ Matrix 2} &\n  \\minipage[t]{2.5cm}{\\verb|\\MxM| \\\\ %\n    \\verb| {\\tmat B C}| \\\\ %\n    \\verb| {\\tmat A B}|}\\endminipage &\n  $\\MxM{\\tmat B C}{\\tmat A B}$ \\\\[8ex]\n\\verb|\\QxVxQ | &\n  \\mlentry{\\purpwidth}{Product of a quaternion, a vector, and a quaternion} &\n  \\mlentry{\\argswidth}{Quaternion 1\\\\ Vector\\\\ Quaternion 2} &\n  \\minipage[t]{2.5cm}{\\verb|\\QxVxQ| \\\\ %\n    \\verb| {\\quat Q}| \\\\ %\n    \\verb| {\\vect x}| \\\\ %\n    \\verb| {\\quatconj Q}|}\\endminipage &\n    \\raisebox{-1.5ex}{$\\QxVxQ{\\quat Q}{\\vect x}{\\quatconj Q}$} \\\\ \\slinefour \\\\\n\\hline\n\\end{tabular}\n\nSimilar macros are defined for\n\\begin{itemize}\n\\item the product of three 
matrices (\\verb|MxMxM|)\n\\item the product of a matrix and vector (\\verb|MxV|)\n\\item the product of two or three quaternions (\\verb|QxQ|, \\verb|QxQxQ|)\n\\item the product of a quaternion and a vector (\\verb|QxV|, \\verb|VxQ|)\n\\end{itemize}\n\nThe multiplication macros take an optional argument via which an explicit multiplication operator can be specified. For example,\n\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Option} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\QxQ| &\n  \\mlentry{\\purpwidth}{Implicit product of two quaternions} &\n  \\mlentry{\\argswidth}{None} &\n  \\minipage[t]{2.5cm}{\\verb|\\QxQ| \\\\ %\n    \\verb| {\\quat{Q}_1}| \\\\ %\n    \\verb| {\\quat{Q}_2}|}\\endminipage &\n  $\\QxQ{{\\quat Q}_1}{{\\quat Q}_2}$ \\\\[8ex]\n\\verb|\\QxQ| &\n  \\mlentry{\\purpwidth}{Explicit product of two quaternions} &\n  \\mlentry{\\argswidth}{\\ttfamily \\textbackslash circ} &\n \\minipage[t]{2.5cm}{\\verb|\\QxQ[\\circ]| \\\\ %\n    \\verb| {{\\quat Q}_1}| \\\\ %\n    \\verb| {{\\quat Q}_2}|}\\endminipage &\n  $\\QxQ[\\circ]{{\\quat Q}_1}{{\\quat Q}_2}$ \\\\[8ex]\n\\hline\n\\end{tabular}\n\n\n\\pagebreak\n\n\\subsection{Miscellaneous Macros}\n\n\\begin{tabular}{||l|l|l|l|l|} \\hline\n{\\bf Command} & {\\bf Purpose} & {\\bf Arguments} & {\\bf Example} & {\\bf Display} \\\\ \\hline \\hline\n\\slinefour \\\\\n\\verb|\\abs| &\n  \\mlentry{\\purpwidth}{Absolute value} &\n  \\mlentry{\\argswidth}{Scalar expression} &\n  \\verb|\\abs{x}| & $\\abs{x}$ \\\\ \\slinefour \\\\\n\\verb|\\norm| &\n  \\mlentry{\\purpwidth}{Euclidean norm} &\n  \\mlentry{\\argswidth}{Vector or quaternion expression} &\n  \\verb|\\norm{\\vect x}| & $\\norm{\\vect x}$ \\\\ \\slinefour \\\\\n\\verb|\\framedot| &\n  \\mlentry{\\purpwidth}{Frame-dependent time derivative} &\n  \\mlentry{\\argswidth}{1. Frame\\\\ 2. Expression} &\n  \\verb|\\framedot B {\\vect x}| & $\\framedot B {\\vect x}$ \\\\ \\slinefour \\\\\n\\hline\n\\end{tabular}\n\n\n\\end{document}\n", "meta": {"hexsha": "3611cffef7846e6ffdad6ec9202d4204868ba28d", "size": 16426, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/TrickHLA/LaTeX/template/trickhlamath.tex", "max_stars_repo_name": "jiajlin/TrickHLA", "max_stars_repo_head_hexsha": "ae704b97049579e997593ae6d8dd016010b8fa1e", "max_stars_repo_licenses": ["NASA-1.3"], "max_stars_count": 18, "max_stars_repo_stars_event_min_datetime": "2020-03-04T14:23:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T10:47:21.000Z", "max_issues_repo_path": "docs/TrickHLA/LaTeX/template/trickhlamath.tex", "max_issues_repo_name": "jiajlin/TrickHLA", "max_issues_repo_head_hexsha": "ae704b97049579e997593ae6d8dd016010b8fa1e", "max_issues_repo_licenses": ["NASA-1.3"], "max_issues_count": 57, "max_issues_repo_issues_event_min_datetime": "2020-06-04T16:03:44.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-17T20:54:35.000Z", "max_forks_repo_path": "docs/TrickHLA/LaTeX/template/trickhlamath.tex", "max_forks_repo_name": "jiajlin/TrickHLA", "max_forks_repo_head_hexsha": "ae704b97049579e997593ae6d8dd016010b8fa1e", "max_forks_repo_licenses": ["NASA-1.3"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-08-25T05:51:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-01T18:37:38.000Z", "avg_line_length": 37.935334873, "max_line_length": 128, "alphanum_fraction": 0.669487398, "num_tokens": 5862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.760650658103136, "lm_q1q2_score": 0.5675511160852521}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[english]{babel} % for hyphenation dictionary\n%\\setdefaultlanguage{english} % polyglossia command for use with XeTeX / LuaTeX\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header2.tex}\n\n% title information\n\\title{Phys 221A -- Quantum Mechanics -- Lec09}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{3}{11}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n\n\\begin{document}\n\\maketitle\n\n\n\\section{Propagator}\n\nThe propagator is defined as the time evolution operator evaluated in position space\n\\begin{eqn}\nK(\\v r, t; \\v r', t') = \\matrixel{\\v r}{U(t,t')}{\\v r'}.\n\\end{eqn}\nWhen $t=t'$ we have $U(t,t'=t) = 1$, so that the propagator in this case is just\n\\begin{eqn}\nK(\\v r, t; \\v r', t'=t) = \\delta(\\v r - \\v r').\n\\label{eq:prop0}\n\\end{eqn}\nNow, the Schroedinger equation can be rewritten\n\\begin{eqn}\n(i \\hbar \\partial_t - H) \\psi(\\v r, t) = 0,\n\\end{eqn}\nwhere $H$ is some differential operator, e.g. $H = -\\frac{\\hbar^2}{2m} \\nabla_r^2$. Let's call the new operator on the left the ``Schroedinger operator''. Furthermore, we can write\n\\begin{eqn}\n\\int \\dif^3{r} \\, \\psi(\\v r, t) \\ket{\\v r} = U(t,t') \\int \\dif^3{r} \\, \\psi(\\v r, t') \\ket{\\v r}.\n\\end{eqn}\nApplying a position bra on the left, we have\n\\begin{align}\n\\psi(\\v r, t) &= \\bra{\\v r} \\int \\dif^3{r'} \\, \\psi(\\v r', t) \\ket{\\v r'} \\\\\n\t&= \\int \\dif^3{r'} \\, K(\\v r, t; \\v r', t') \\psi(\\v r, t').\n\\end{align}\nNext, applying the Schroedinger operator, we have\n\\begin{align}\n0 &= (i \\hbar \\partial_t - H) \\psi(\\v r, t) \\\\\n\t&= \\int \\dif^3{r'} \\left[ (i \\hbar \\partial_t - H) K(\\v r, t; \\v r', t') \\right] \\psi(\\v r', t').\n\\end{align}\nSince $\\psi(\\v r', t')$ is a completely arbitrary wavefunction, we find that in general the propagator itself obeys the Schroedinger equation,\n\\begin{eqn}\n(i \\hbar \\partial_t - H) K(\\v r, t; \\v r', t') = 0.\n\\end{eqn}\nThis combined with the initial condition \\eqref{eq:prop0} uniquely characterizes the propagator $K$. \n\n\n\\subsection{Causality, Retarded Propagator}\n\nWe can impose causality by defining the retarded propagator\n\\begin{eqn}\nK^R (\\v r, t; \\v r', t') \\equiv \\theta(t-t') K(\\v r, t; \\v r', t'),\n\\end{eqn}\nwhere $\\theta(x)$ is the Heaviside step function. This just cuts off information about the past, which is really just redundant because we should always have symmetry under time reversal. What happens when we applying the Schroedinger operator to the retarded propagator? Everything goes away except for the time derivative acting on the step function,\n\\begin{align}\n(i \\hbar \\partial_t - H) K^R(\\v r, t; \\v r', t') &= \\left[ i \\hbar \\partial_t \\theta(t-t') \\right] K(\\v r, t; \\v r', t') \\\\\n\t&= i \\hbar \\delta (t-t') \\delta^3(\\v r - \\v r').\n\\end{align}\nThis uniquely defines the retarded propagator---note also that, due to the step function, the retarded propagator is zero for $t < t'$. \n\n\n\\subsection{Green's function}\n\nGiven a differential equation of the form\n\\begin{eqn}\nL(\\v x) f(\\v x) = \\delta(\\v x - \\v y)\n\\end{eqn}\nwith some boundary and/or initial conditions, where $L(\\v x)$ is some operator and $\\v y$ is an arbitrary point, we call $f(\\v x)$ a Green's function of $L$. 
There is a lot of mathematical technology developed around Green's functions, so it is useful to consider the propagator as a Green's function of the Schroedinger operator. This formulation is very useful for more advanced applications like quantum field theory. \n\n\n\\subsection{Time-independent Hamiltonian}\n\nLet's specialize to the case of time-independent $H$, i.e. we will have energy eigenkets $\\ket{\\alpha_i}$ with eigenvalues $E_i$,\n\\begin{eqn}\nH \\ket{\\alpha_i} = E_i \\ket{\\alpha_i}.\n\\end{eqn}\nRecall that in this eigenbasis we can write\n\\begin{eqn}\nU(t-t') = \\sum_i \\ket{\\alpha_i} \\bra{\\alpha_i} e^{-i E_i (t-t') / \\hbar},\n\\end{eqn}\nthus we find that\n\\begin{align}\nK(\\v r, t; \\v r', t') &= \\sum_i \\underbrace{\\braket{\\v r}{\\alpha_i}}_{\\psi_i(\\v r)} \\braket{\\alpha_i}{\\v r'} e^{-i E_i (t-t') / \\hbar} \\\\\n\t&= \\sum_i \\psi_i (\\v r) \\psi_i^* (\\v r') e^{-i E_i (t-t') / \\hbar},\n\\end{align}\nwhere we have\n\\begin{eqn}\nH(\\v r) \\psi_i(\\v r) = E_i \\psi_i(\\v r).\n\\end{eqn}\n\n\\begin{remark}\nAt the end of the day, we can think of the propagator as a wave function for a particle localized at $\\v r'$ at time $t'$. \n\\end{remark}\n\nNow, we can take the trace of the propagator over position to obtain a function of time,\n\\begin{align}\nG(t-t') &= \\tr U(t,t') \\\\\n\t&= \\int \\dif^3 r \\, K(\\v r, t; \\v r, t') \\\\\n\t&= \\sum_i e^{-i E_i (t-t') / \\hbar},\n\\end{align}\nsince the eigenfunctions are all normalized. Note that here the trace is defined by\n\\begin{align}\n\\tr U(t,t') &= \\sum_i \\matrixel{\\beta_i}{U(t,t')}{\\beta_i} \\\\\n\t&= \\sum_i \\int \\dif^3{r} \\braket{\\beta_i}{\\v r} \\matrixel{\\v r}{U}{\\beta_i} \\\\\n\t&= \\int \\dif^3{r} \\matrixel{\\v r}{U(t,t')}{\\v r}.\n\\end{align}\n\nNotice that our Green's function \n\\begin{eqn}\nG(t) = \\sum_i e^{-i E_i t / \\hbar}\n\\end{eqn}\nnow is almost exactly the same as the partition function from statistical mechanics\n\\begin{eqn}\nZ = \\sum_i e^{-\\beta E_i}.\n\\end{eqn}\nThis is no trivial remark---it turns out that much of what we do in statistical mechanics is directly applicable to quantum field theory, and vice versa. If we define an ``imaginary time'' $\\tau = \\hbar \\beta$, the connection becomes clearer. We can analytically continue $G(t)$ onto the complex plane defined by $z = t - i \\tau$. 
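Explicitly, evaluating each term at $t = -i \\tau = -i \\hbar \\beta$ gives\n\\begin{eqn}\ne^{-i E_i (-i \\tau) / \\hbar} = e^{-E_i \\tau / \\hbar} = e^{-\\beta E_i}.\n\\end{eqn}\n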
Then, we find that under the so-called ``Wick rotation'' $t \\rightarrow -i \\tau$ we have\n\\begin{eqn}\nG(-i \\tau) = Z.\n\\end{eqn}\n\nOf course we can also define a retarded Green's function\n\\begin{eqn}\nG^R(t) = \\theta(t) G(t),\n\\end{eqn}\nwhich we can Fourier transform and write\n\\begin{eqn}\nG^R(\\omega) = -i \\int_{-\\infty}^\\infty \\dif{t} \\, e^{i\\omega t} G^R(t) = -i \\int_0^\\infty \\dif{t} \\, e^{i \\omega t} G^R(t).\n\\end{eqn}\nThis generally ends up making further calculations intractable, but we can ``regularize'' by taking $\\omega \\rightarrow \\omega + i \\eta$,\n\\begin{eqn}\nG^R(\\omega) \\rightarrow -i \\int_0^\\infty \\dif{t} \\, e^{i \\omega t - \\eta t} G^R(t),\n\\end{eqn}\nor in other words\n\\begin{eqn}\nG^R(t) \\rightarrow G^R(t) e^{-\\eta t}.\n\\end{eqn}\nThen we find that\n\\begin{eqn}\nG^R(\\omega) = -i \\sum_i \\int_0^\\infty \\dif{t} \\, e^{i \\omega t - \\eta t - i E_i t / \\hbar},\n\\end{eqn}\nwhich we can evaluate using\n\\begin{eqn}\n\\int_0^\\infty \\dif{t} \\, e^{-\\alpha t} = 1 / \\alpha, \\qquad \\text{where $\\Re \\alpha > 0$},\n\\end{eqn}\nso that\n\\begin{eqn}\nG^R(\\omega) = \\sum_i \\frac{1}{\\omega - E_i / \\hbar + i \\eta}.\n\\end{eqn}\nSo we have a function with poles in the complex plane at $E_i / \\hbar - i \\eta$, which we have pushed just below the real line, into the negative imaginary half-plane. \n\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "4b9e641bbed598dd9cb35307fa4cb5a38fb58501", "size": 6780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quantum/lec09.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "quantum/lec09.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quantum/lec09.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.843373494, "max_line_length": 421, "alphanum_fraction": 0.6650442478, "num_tokens": 2357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289835, "lm_q2_score": 0.7461389873857265, "lm_q1q2_score": 0.5675511158397967}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n% \\usepackage{graphicx}\n%     \\DeclareGraphicsExtensions{.png, .jpeg}\n% \\usepackage{caption}\n\\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}\n\n\\title{STAT 775: Machine Learning \\\\ HW 06}\n\\author{Terence Henriod}\n\\date{\\today}\n\n\\begin{document}\n\n\\clearpage            % All of\n\\maketitle            % this,\n\\thispagestyle{empty} % removes the page number from the title page\n\n\\begin{abstract}\nIn this assignment, feed-forwards neural networks using simple stochastic backpropagation were explored.\n\\end{abstract}\n\n\\newpage\n\\section{Exercise 01}\n\\subsection{Problem Statement}\nImplement a single neural network to perform digit image classification for all ten classes of digits using the zip.data data set from the ESL website. To reduce the model complexity, perform PCA to limit the input features to 20. Use a single hidden layer with varying numbers of units. Use 10 output units. Use sigmoidal units.\n\n\\subsection{Results}\nA neural net with $20$ inputs, $20$ hidden layer units, and $10$ outputs was constructed and trained. Using a learning rate of $2$ and exposing the network to each training sample observation $500$ times produces the following results, summarized in a confusion matrix:\n\n\\begin{tabular}{| c || c | c | c | c | c | c | c | c | c | c | c |}\n  \\hline\n  Actual/Prediction &   0 &   1 &   2 &   3 &   4 &   5 &   6 &   7 &   8 &   9 &\\% Correct \\\\\n  \\hline\n  \\hline\n  0                 & 341 &   1 &   2 &   2 &   3 &   3 &   6 &   0 &   1 &   0 & 94.9 \\\\\n  \\hline\n  1                 &   0 & 254 &   0 &   0 &   3 &   2 &   4 &   0 &   0 &   1 & 96.2 \\\\\n  \\hline\n  2                 &   5 &   0 & 174 &   4 &   7 &   1 &   2 &   2 &   3 &   0 & 87.9 \\\\\n  \\hline\n  3                 &   3 &   0 &   3 & 139 &   0 &  18 &   0 &   1 &   2 &   0 & 83.7 \\\\\n  \\hline\n  4                 &   1 &   2 &   7 &   0 & 181 &   2 &   1 &   1 &   1 &   4 & 90.5 \\\\\n  \\hline\n  5                 &   3 &   1 &   1 &  11 &   3 & 138 &   0 &   0 &   2 &   1 & 86.3 \\\\\n  \\hline\n  6                 &   6 &   0 &   2 &   0 &   1 &   3 & 155 &   0 &   3 &   0 & 91.2 \\\\\n  \\hline\n  7                 &   0 &   0 &   0 &   1 &   5 &   0 &   0 & 138 &   1 &   2 & 93.9 \\\\\n  \\hline\n  0                 &   2 &   2 &   3 &   9 &   2 &   6 &   2 &   2 & 131 &   7 & 78.9 \\\\\n  \\hline\n  0                 &   0 &   2 &   1 &   1 &   8 &   0 &   0 &   4 &   3 & 158 & 89.3 \\\\\n  \\hline\n  \\hline\n  Overall           & & & & & & & & & &                                         & 90.13 \\\\\n  \\hline\n\\end{tabular}\n\nFor reference, the neural net did not outperform the naive bayes classification (with 16 principle components) that was done in a previous assignment, although adding more rounds of training or varying the number of nodes in the hidden layer might remedy this. 
Also, the training time was extraordinarily long, so more efficient back-propagation techniques should be explored to reduce it.\n\n%\n% \\newpage\n\\subsection{Code}\nThe following R code was used to perform the PCA and neural network classification:\n\\begin{verbatim}\n#\n# Initial Setup\n#\nsetwd(\"C:/Users/Terence/Documents/GitHub/STAT775/HW06\")\n\n#\n# Data Cleaning\n#\nDATA.PATH <- \"../DataSets/zip.data/\"\nZIP.TRAIN.FILE.NAME <- paste0(DATA.PATH, \"zip.train\")\nZIP.TEST.FILE.NAME <- paste0(DATA.PATH, \"zip.test\")\n\nget.multi.dimensional.label <- function(label) {\n#\n# Expands a single digit label to a {0, 1} vector-label\n#\n# Args:\n#   label: a single numeric value [0-9]\n\n  multi.dimensional.label <- matrix(0, nrow = 1, ncol = 10)\n  multi.dimensional.label[1, as.numeric(label) + 1] <- 1\n  return(multi.dimensional.label)\n}\n\nsquash.multi.dimensional.label <- function(multi.dim.label) {\n#\n# Collapses a digit's {0, 1} vector-label to a single-value label\n#\n# Args:\n#   multi.dim.label: a 10 x 1 {0, 1} vector indicating the true class\n\n  label <- -1\n  highest.probability <- max(multi.dim.label)\n  for (i in 1:nrow(multi.dim.label)) {\n    if (multi.dim.label[[i]] == highest.probability) {\n      label <- i - 1\n    }\n  }\n  return(label)\n}\n\nread.data.tuples <- function(file.path.name) {\n  data.fram.e <- read.table(file.path.name)\n\n  data <- data.matrix(data.fram.e[, -1])\n\n  targets <- matrix(nrow = nrow(data.fram.e), ncol = 10)\n  for (i in 1:nrow(data.fram.e)) {\n    targets[i, ] <- get.multi.dimensional.label(data.fram.e$V1[[i]])\n  }\n\n  data.tuple <- list(\n    observations = data,\n    labels = data.matrix(data.fram.e[, 1]),\n    targets = targets\n  )\n  return(data.tuple)\n}\n\n\n#\n# PCA\n#\n\nget.pca.summary <- function(data, num.components = 20) {\n#\n# Args:\n#   data: an n x m matrix of n observations of m dimensions\n#   num.components: the number of principal components to keep\n\n  num.component.s <- min(ncol(data), num.components)\n  n.obs <- nrow(data)\n  full.dimensionality <- ncol(data)\n  mu <- colMeans(data)\n\n  # get centered data\n  x <- data\n  for (i in 1:n.obs) {\n    x[i, ] <- data[i, ] - mu\n  }\n\n  # covariance matrix\n  sigma <- t(x) %*% x\n  sigma <- sigma * (1.0 / n.obs)\n\n  eigen.decomposition <- eigen(sigma, F)  # TODO: is cov symmetric? 
Not sure...\n  eigen.vectors <- eigen.decomposition$vectors[, 1:num.component.s]\n\n  pca.summary <- list(\n    rotation = eigen.vectors,\n    mu = matrix(mu, nrow = 1, ncol = full.dimensionality)\n  )\n\n  return(pca.summary)\n}\n\npredict <- function(pca.summary, data) {\n#\n# Args:\n#   data: an n x d matrix of observations; rows are observations\n#   pca.summary: a tuple of the rotation matrix (eigenvectors as columns) and\n#                the mean (row vector of column means) computed in the pca\n#                computations\n\n  x <- data\n  for (i in 1:nrow(data)) {\n    x[i, ] <- data[i, ] - pca.summary$mu\n  }\n\n  return (t(t(pca.summary$rotation) %*% t(x)))\n}\n\n#\n# Neural Net\n#\n\nsigmoid <- function(x) {\n#\n# Args:\n#   x: a numeric or vector; the function is applied element-wise\n\n  return(1.0 / (1.0 + exp(-x)))\n}\n\nsigmoid.derivative <- function(x) {\n  #\n  # Args:\n  #   x: a numeric or vector\n\n  return(sigmoid(x) * (1.0 - sigmoid(x)))\n}\n\nconstruct.neural.net <- function(\n  topology = c(2, 2, 1),\n  activation = sigmoid,\n  activation.derivative = sigmoid.derivative,\n  debug = F) {\n#\n# Args:\n#   topology: a list or vector of the dimensions of each layer\n#   activation: a function to be used for the activation of each unit\n#   activation.derivative: a function that is the derivative of activation\n\n  layer.weights <- list()\n  derivative.matrices <- list()\n  outputs <- list()\n\n  previous.layer.dim <- 1\n  next.layer.dim <- 1\n  for (i in 1:(length(topology) - 1)) {\n    previous.layer.dim <- topology[[i]] + 1  # +1 for bias\n    next.layer.dim <- topology[[i + 1]]\n    num.elements <- (previous.layer.dim) * next.layer.dim\n\n    layer.weights[[i]] <- matrix(\n      if(debug) {rep(1, num.elements)}\n      else {runif(n = num.elements, min = -0.001, max = 0.001)},\n      nrow = previous.layer.dim,\n      ncol = next.layer.dim\n    )\n\n    outputs[[i]] <- matrix(0, nrow = next.layer.dim, ncol = 1)\n    derivative.matrices[[i]] <- diag(0, next.layer.dim)\n  }\n\n  return(list(\n    input.dim = topology[[1]],\n    output.dim = next.layer.dim,  # should be dim of last layer\n    n.layers = length(layer.weights),\n    activation = activation,\n    activation.deriv = activation.derivative,\n    input = matrix(0, nrow = 1, ncol = topology[[1]]),\n    output = matrix(0, nrow = tail(topology, 1)[[1]], 1),\n    weights = layer.weights,\n    outputs = outputs,\n    derivatives = derivative.matrices\n  ))\n}\n\napply.inputs <- function(net, x, for.training = T) {\n#\n# Args:\n#   x: a 1 x n vector of inputs; n should be the same as for net\n#   net: a structure with all of the appropriate data for a neural network,\n#        as created by construct.neural.net()\n#   for.training[T]: currently unused\n\n  net$input <- matrix(x, nrow = 1)\n  previous.output <- cbind(net$input, 1)\n  for (i in 1:net$n.layers) {\n    weighted.sums <- previous.output %*% net$weights[[i]]\n\n    net$outputs[[i]] <- net$activation(weighted.sums)\n\n    net$derivatives[[i]] <- diag(\n      as.list(net$activation.deriv(weighted.sums)),\n      length(net$outputs[[i]])\n    )\n\n    previous.output <- cbind(net$outputs[[i]], 1)\n  }\n\n  net$output <- t(tail(net$outputs, 1)[[1]])\n\n  return(net)\n}\n\nbackprop.weight.update <- function(net, target, learning.rate = 0.1) {\n#\n# Args:\n#   net: a neural net object that has had inputs applied and derivatives stored\n#   target: a column vector; the target output that should have been observed\n#   learning.rate: the learning rate of the 
network\n#                  TODO: refactor learning.rate to be less hacky, allow for\n#                        advanced techniques\n\n  last.index <- net$n.layers\n\n  deltas <- list()\n  error <- matrix(net$output, ncol = 1) - matrix(target, ncol = 1)\n  deltas[[last.index + 1]] <- error\n  W <- diag(1, nrow = nrow(error))\n  D <- net$derivatives[[last.index]]\n  for (i in last.index:1) {\n    deltas[[i]] <- D %*% W %*% deltas[[i + 1]]\n\n    if (i > 1) {\n      D <- net$derivatives[[i - 1]]\n      W <- net$weights[[i]]\n      W <- W[1:(nrow(W) - 1), ]\n    }\n  }\n\n  weight.updates <- list()\n  o.hat <- cbind(net$input, 1)\n  for (i in 1:last.index) {\n    weight.updates[[i]] <- -learning.rate * t(deltas[[i]] %*% o.hat)\n\n    if (i < last.index) {\n      o.hat <- cbind(net$outputs[[i]], 1)\n    }\n  }\n\n  for (i in 1:length(weight.updates)) {\n    net$weights[[i]] <- net$weights[[i]] + weight.updates[[i]]\n  }\n\n  return(net)\n}\n\n#\n# Main\n#\nNUM.PRINCIPLE.COMPONENTS <- 20\nNUM.DIGITS <- 10\nNUM.EPOCHS <- 500\nCONFUSION.LABELS <-\n  c(' 0 ', ' 1 ', ' 2 ', ' 3 ', ' 4 ', ' 5 ', ' 6 ', ' 7 ', ' 8 ', ' 9 ')\n\n# training ###################################\ntrain <- read.data.tuples(ZIP.TRAIN.FILE.NAME)\n\npca.summary <- get.pca.summary(\n  data = train$observations,\n  num.components = NUM.PRINCIPLE.COMPONENTS\n)\n\ntrain$observations <- predict(pca.summary, train$observations)\n\nk <- 20\ndigit.net <- construct.neural.net(\n  topology = c(NUM.PRINCIPLE.COMPONENTS, k, NUM.DIGITS),\n  activation = sigmoid,\n  activation.deriv = sigmoid.derivative\n)\n\nfor (t in 1:NUM.EPOCHS) {\n  for (i in 1:nrow(train$targets)) {\n    digit.net <- apply.inputs(\n      net = digit.net,\n      x = matrix(train$observations[i, ], nrow = 1)\n    )\n    digit.net <- backprop.weight.update(\n      net = digit.net,\n      target = matrix(train$targets[i, ], ncol = 1),\n      learning.rate = 2\n    )\n  }\n}\n\n# testing ###################################\ntest <- read.data.tuples(ZIP.TEST.FILE.NAME)\ntest$observations <- predict(pca.summary, test$observations)\n\nnum.correct <- 0\nconfusion.matrix <- matrix(0, nrow = NUM.DIGITS, ncol = NUM.DIGITS + 1)\nrow.names(confusion.matrix) <- CONFUSION.LABELS\ncolnames(confusion.matrix) <- c(CONFUSION.LABELS, '% correct')\nfor (i in 1:nrow(test$observations)) {\n  prediction <- squash.multi.dimensional.label(\n    apply.inputs(\n      net = digit.net,\n      x = matrix(test$observations[i, ], nrow = 1),\n      for.training = F\n    )$output\n  )\n\n  if (prediction == test$labels[[i]]) {\n    num.correct <- num.correct + 1\n  }\n\n  confusion.matrix[test$labels[[i]] + 1, prediction + 1] <-\n    confusion.matrix[test$labels[[i]] + 1, prediction + 1] + 1\n}\n\nclass.totals <- rowSums(confusion.matrix)\nfor (i in 1:nrow(confusion.matrix)) {\n  confusion.matrix[i, 11] <- 100 * confusion.matrix[i, i] / class.totals[[i]]\n}\n\nprint(100 * num.correct / nrow(test$labels))\nprint(confusion.matrix)\n\\end{verbatim}\n\n\\end{document}\n", "meta": {"hexsha": "b794c2483525f9cd714a166247f6dea32bb376b6", "size": 11398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "STAT775/HW06/HW06.tex", "max_stars_repo_name": "T-R0D/Past-Courses", "max_stars_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-03-13T17:32:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-27T16:51:22.000Z", "max_issues_repo_path": "STAT775/HW06/HW06.tex", "max_issues_repo_name": 
"T-R0D/Past-Courses", "max_issues_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-29T19:54:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-29T19:54:52.000Z", "max_forks_repo_path": "STAT775/HW06/HW06.tex", "max_forks_repo_name": "T-R0D/Past-Courses", "max_forks_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2016-10-18T03:31:44.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-29T13:23:10.000Z", "avg_line_length": 29.0025445293, "max_line_length": 392, "alphanum_fraction": 0.6003684857, "num_tokens": 3563, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255928, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.5675511082338339}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Overview of Uncertainty Quantification}\\label{sec:intro}\n\nIn the last several decades, there has been an increasing reliance on quantitative predictions from computational, simulation-based models of physical systems to inform engineering design, predict the behavior of physical systems, and even shape public policy, e.g., see \\cite{VO14, VO15, BDMV, HV}, for just a few such examples.\nIt is therefore more important than ever to quantify, and whenever possible, reduce, the uncertainties impacting such models.\nUnfortunately, many key characteristics governing system behavior, described as model inputs (referred to here as parameters), are often hidden from direct observation.\nWhen observable model output data are sensitive to variations in these parameters, we formulate and solve inverse problems using the output data to quantify uncertainties in parameters.\nInverse problems therefore play a vital role in the uncertainty quantification (UQ) community.\n\nIn UQ, uncertainties are categorized as being either aleatoric (i.e., irreducible) or epistemic (i.e., reducible) in nature, which are often quantitatively described and interpreted in distinct ways.\nBelow, we use abstractions of conceptual examples to distinguish how both types of uncertainties arise in parameters, and subsequently impact the type of inverse problem that is solved to quantify these uncertainties.\n% and their associated inverse problems while simultaneously comparing and contrasting methodologies for solving these problems.\nThis distinction further serves to highlight the contributions of this thesis.\n\nConsider modeling the manufacturing process of an engineered system involving various electrical or mechanical components.\nThe intrinsic variability in component properties, e.g., due to impurities in raw materials used in their construction, are aleatoric in nature.\n%These are quantitatively characterized as probability measures.\nComponent properties define a sample space (the set of all possible outcomes), and combining this sample space with a description of measurable events along with a probability measure defines a probability space.\nScalar-valued model parameters associated with component properties defines a random vector (i.e., a measurable function) from this probability space of components into the parameters required by the model.\nSubsequently, the mapping from parameters to observable model outputs defines what we refer to as a Quantities of Interest (QoI) map.\nObservation of a probability measure on the range of the QoI map leads to the formulation of a stochastic inverse problem (SIP), where the goal is to pullback the observed probability measure onto the space of parameters.\nConceptually, a pullback measure is data-consistent in the sense that its push-forward through the QoI map matches the observed probability measure.\n\n\nWhile it is possible to construct explicit approximations to data-consistent measures in terms of estimating measurable events and their probabilities in the parameter space (e.g., see \\cite{BET+14}), such ``set-based'' approximations become computationally intractable for high-dimensional parameter spaces or geometrically complex and/or computationally expensive QoI maps.\nA recently developed density-based approach \\citep{BJW18a, BJW18b, BWY20} solves the SIP in a novel way by first solving a stochastic 
forward problem (SFP).\nSpecifically, an {\\em initial} probability measure is first specified on the parameters to encode any prior knowledge of parameter variability.\nThen, a SFP is solved where the push-forward of the initial probability measure is used to define a {\\em predicted} probability measure on the QoI.\nThe discrepancy between the predicted and {\\em observed} probability measures on the QoI, expressed as a ratio of probability density functions (more generally, Radon-Nikodym derivatives), is then used to {\\em update} the initial probability density.\nThe {\\em updated} probability measure associated with this density is then data-consistent.\nMoreover, the updates to the initial probability measure only occur in directions informed by the QoI.\nIn other words, the initial probability measure serves to regularize the space of all pullback measures solving the SIP to produce a unique solution.\n\nThe SIP and its solution methodologies are based on rigorous measure theory using the Disintegration Theorem \\citep{Dellacherie_Meyer_book, Chang_Pollard} as the central tool in establishing existence, uniqueness, and stability of solutions.\nUpdated probability measures often have complex structures that are not well approximated by a family of parametrically defined distributions (e.g., Gaussian).\nThis attribute of the solution further distinguishes this measure-theoretic approach from typical Bayesian-inspired approaches, e.g., Hierarchical Bayesian methods \\citep{Smith, Tarantola_book, Wikle1998}, that specify prior distributions from a parametric family of distributions along with additional prior distributions on the so-called hyper-parameters introduced by this parametric family (e.g., the means and variances of a Gaussian).\nSubsequently, solutions to the SIP using Bayesian approaches will not, in general, produce solutions (defined as posterior distributions) whose push-forward matches the observed distribution.\nIn fact, the push-forward of the posterior is not even of general interest in most Bayesian paradigms.\nInstead, the posterior predictive, which defines the distribution of possible unobserved values is of central interest \\citep{Smith}.\nThe posterior predictive is constructed as a conditional distribution on the observations but makes practical use of the posterior through a marginalization.\nThese differences are actually not surprising when one considers that the Bayesian inverse problem that is perhaps most familiar in the UQ community solves an inverse problem involving epistemic uncertainty, as we describe below and expand upon in Section~\\ref{sec:compare}.\n\nIn a typical Bayesian framework \\citep{0266-5611-7-5-003,\n  Kennedy_O_JRSSSB_2001, Tarantola_book, MNR07, CDS10, starktenorio,\n  AlexanderianPetraStadlerEtAl14, Bui-ThanhGhattas14, Ernst2014,\n  0266-5611-30-11-110301, ROM:CMW_2016, Stuart10,\n  cockayneoatessullivangirolami}, one of the initial assumptions is that data obtained on a QoI are polluted by measurement error, i.e., the data are ``noisy.''\n  Measurement errors can theoretically be reduced using improved measurement instruments (i.e., they are epistemic in nature).\n  A data-likelihood function is used to express the relative likelihoods that all of the data came from a particular choice of the parameter.\n  Encoding any initial assumptions about which parameters are more likely than others as a prior density allows the formal construction of a posterior density as a conditional density that describes the difference in relative 
likelihoods of any parameter value given the data.\n\nIt is common to use specific point estimators such as the maximum a posteriori (MAP) point given by the mode of the posterior as the actual solution to the inverse problem.\nThe posterior is then re-interpreted as providing descriptions of uncertainty in that specific point estimate.\nThe Bernstein-von Mises theorem \\citep{vonmises} provides conditions under which the posterior will become concentrated around the single true parameter in the limit of infinite data \\citep{Smith}.\n\nReturning to the hypothetical example of modeling a manufacturing process, the typical Bayesian paradigm described above is most applicable to a specific instance of the manufactured system.\nIn other words, suppose a single system is extracted from the end of the production line.\nWe subject this system to experiments for which we collect data on the system response, and we are interested in using this data to determine the precise parameter values associated with this single system.\nThe Bayesian framework is fundamentally designed to address such a problem while the measure-theoretic framework as presented in \\cite{BJW18a, BJW18b, BWY20} is not.\nThe SIP is concerned with modeling the variability in the outputs of the production line as a collection, which is of particular interest to quality control.\n\nThe main contributions of this thesis are the extension of the SIP framework to address the reduction of epistemic uncertainty.\nThis is accomplished by formulating parameter identification problems as ones involving pullbacks of distributions of residuals.\nIn the following section we provide background and history for the SIP and subsequently define the Deterministic Inverse Problem (DIP), which is the term we use to refer to the problem addressed by the Bayesian framework.\nWe then compare the two frameworks and provide some illustrative examples to draw attention to the key differences between them.\nThe chapter will conclude with a summary of the assumptions, properties, and stability of the solutions to the SIP which will be considered throughout this thesis.\n\n\\vfill\n", "meta": {"hexsha": "70f53aaa8971faae4cd623d0ffd4d1e22e31dbb7", "size": 9153, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "intro/intro.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "intro/intro.tex", "max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "intro/intro.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 130.7571428571, "max_line_length": 440, "alphanum_fraction": 0.8126297389, "num_tokens": 1763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7461389817407016, "lm_q1q2_score": 0.5675511034490319}}
{"text": "%\\documentclass[aps,prb,twocolumn,groupedaddress,nofootinbib,floatfix]{revtex4}\n\\documentclass{article}\n\\usepackage{bm}\n\\usepackage{mathtools}\n\\usepackage{mathrsfs}\n\\usepackage[parfill]{parskip}\n\n\\usepackage[margin=1in]{geometry}\n\n\\usepackage{color}\n\\newcommand\\editremark[1]{{\\color{red}#1}}\n\n\\begin{document}\n\n\\title{EM PE Implementation and Usage}\n\n%\n\\author{Ben Champion}\n\n\\date{\\today}\n\n\\maketitle\n\n\\section{Models}\n\n\\subsection{Simple analytic kilonova model}\n\n\\textbf{Note:} some of this text comes directly from \\cite{Villar_2017}.\n\nThe single-component, two-component, and three-component analytic kilonova models provided with the code are an implementation of the model described in \\cite{Villar_2017}.\nIn the following equations, $M$ is the $r$-process ejecta mass and $v$ is the ejecta velocity.\nNote that for now we assume the ejecta consists entirely of $r$-process material, so $M$ is the full ejecta mass.\nThe radioactive heating rate at time $t$ is given by \\cite{Korobkin_2012}:\n\\begin{equation}\n    L_\\text{in}(t) = 4 \\times 10^{18} M \\times \\left [ 0.5 - \\pi^{-1} \\arctan \\left ( \\frac {t - t_0} {\\sigma} \\right ) \\right ]^{1.3} \\text{ erg s}^{-1}\n\\end{equation}\nwhere $t_0 = 1.3$ s and $\\sigma = 0.11$ s are constants.\n\nOnly a fraction of $L_\\text{in}$ powers the kilonova, given by the thermalization efficiency $\\epsilon_\\text{th}$.\nThis is approximated analytically in \\cite{Barnes_2016}:\n\\begin{equation}\n    \\epsilon_\\text{th}(t) = 0.36 \\left [ e^{-a t} + \\frac {\\ln(1 + 2 b t^d)} {2 b t^d} \\right ]\n\\end{equation}\nThe parameters $a$, $b$, and $d$ are constants that depend on the ejecta mass and velocity; an interpolation of Table 1 in \\cite{Barnes_2016} is used in the model.\n\nThe bolometric luminosity is calculated as\n\\footnote{\\cite{Villar_2017} is missing a factor of time in the denominator (this can be seen from the units -- our $L_\\text{bol}$ has the correct units of erg s$^{-1}$). \nThe factor of 1/$t_d$ seems to produce reasonable results with the correct units, and better aligns with e.g. \\cite{Chatzopoulos_2012}.}\n\\begin{equation}\n    L_\\text{bol}(t) = \\frac {2} {t_d} \\exp{\\left ( \\frac {-t^2} {t_d^2} \\right ) }\n                    \\int_0^t L_\\text{in} \\epsilon_\\text{th} \\exp{\\left ( \\frac {t^2} {t_d^2} \\right ) } \\frac {t} {t_d} dt\n\\end{equation}\nwhere $t_d$ is the diffusion timescale, $t_d = \\sqrt{2 \\kappa M / \\beta v c}$, $\\kappa$ is the opacity, and $\\beta = 13.7$ is a dimensionless constant related to the ejecta's geometry.\n\nLightcurves are calculated by assuming the kilonova behaves as a blackbody photosphere that expands at a velocity $v$.\nThe blackbody temperature is generally defined by its bolometric luminosity; however, once it cools to a critical temperature $T_c$, the photosphere recedes into the ejecta and the temperature remains fixed.\nThe photosphere temperature is\n\\footnote{\\cite{Villar_2017} has a mistake here: $\\sigma_\\text{SB}$ should not be squared.}\n\\begin{equation} \\label{T_c}\n    T_\\text{phot}(t) = \\max \\left [ \\left ( \\frac {L_\\text{bol}(t)} {4 \\pi \\sigma_\\text{SB} v^2 t^2} \\right )^{1/4}, T_c \\right]\n\\end{equation}\n\nWhen $T_\\text{phot} > T_c$, the photosphere radius is simply $R_\\text{phot} = v t$.\nWhen $T_\\text{phot} = T_c$ (i.e. 
the photosphere has receded into the ejecta), the photosphere radius is\n\\begin{equation}\n    R_\\text{phot}(t) = \\left ( \\frac {L_\\text{bol}(t)} {4 \\pi \\sigma_\\text{SB} T_c^4} \\right )^{1/2}\n\\end{equation}\n\nThe flux density at frequency $\\nu$ is given in \\cite{Metzger_2017}:\n\\begin{equation}\n    F_\\nu(t) = \\frac {2 \\pi h \\nu^3} {c^2} \\frac {1} {\\exp{(h \\nu / k T_\\text{phot}(t))} - 1} \\frac {R_\\text{phot}^2(t)} {D^2}\n\\end{equation}\nwhere $D$ is the source distance.\nWe use a fixed fiducial distance of $D = 10$ pc to calculate $F_\\nu(t)$, then calculate AB magnitude with a distance modulus if necessary.\n\nTo compute multi-component lightcurves we assume each component has a photosphere that evolves independently of the others.\nThe total flux density is the sum of the flux densities of the individual components.\nThe version of the model implemented in the code uses three components with fixed opacities (a blue component with $\\kappa = 0.5$ cm$^2$ g$^{-1}$, a purple component with $\\kappa = 3$ cm$^2$ g$^{-1}$, and a red component with $\\kappa = 10$ cm$^2$ g$^{-1}$).\n\n\\section{Parameter Estimation}\n\n\\subsection{Likelihood Function}\n\nFor lightcurve magnitudes $x_i$, model values $m_i(\\bm{\\theta})$ computed with a parameter vector $\\bm{\\theta}$, data uncertainties $\\sigma_i$, and a scatter term $\\sigma$, fit as a parameter, that accounts for additional uncertainty in the data and model, the log-likelihood is\n\\begin{equation}\n    \\ln \\mathcal{L}(\\bm{\\theta}) = -0.5 \\sum_{i=1}^n \\left [ \\frac {(x_i - m_i(\\bm{\\theta}))^2} {\\sigma_i^2 + \\sigma^2} + \\ln(2 \\pi (\\sigma_i^2 + \\sigma^2)) \\right ]\n\\end{equation}\nwhere the sum is taken over every data point in every band used in the analysis.\n\nThis log-likelihood is implemented as a method in \\texttt{EM\\_PE}'s \\texttt{sampler} class, which can be imported and used in other codes.\nThe joint gravitational wave and EM log-likelihood is the sum of the individual log-likelihoods.\n\n\\subsection{Injection/Recovery Example}\n\nA simple parameter estimation test using fake data can be run after installing the \\texttt{EM\\_PE} code to ensure it functions properly.\nThe repository contains a \\texttt{Makefile} that generates reproducible parameter estimation runs.\nTo run the injection/recovery test:\n\\begin{verbatim}\n$ make test_kilonova_3c\n$ cd pe_runs/test_kilonova_3c/\n$ ./sample.sh\n\\end{verbatim}\nThis series of commands will generate a posterior sample file, \\texttt{samples.txt}.\nNote that the default settings assume the code is being run on a machine with many CPU cores available, so it computes the likelihood in 8 parallel processes.\nIt will run without issue on a less powerful computer, but \\texttt{sample.sh} should be modified to use fewer cores (e.g. set \\texttt{--nprocs 2} to use two cores).\n\nTo produce a corner plot and lightcurve plot, run\n\\begin{verbatim}\n$ ./plot_corner.sh\n$ ./plot_lc.sh\n\\end{verbatim}\nThis will produce plots similar to Figure \\ref{fig:corner} and Figure \\ref{fig:lightcurves}.\n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=6.5in]{corner.png}\n    \\caption{Posterior distribution for the injection/recovery test (the blue lines show the true parameter values). 
For this example, $T_c$ was fixed for each component and the distance was fixed to 40.0 Mpc.}\n    \\label{fig:corner}\n\\end{figure}\n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=6.5in]{lc.png}\n    \\caption{Fake photometry data and lightcurve models for the injection/recovery test. The solid line shows the lightcurve model evaluated at the maximum-likelihood parameters.}\n    \\label{fig:lightcurves}\n\\end{figure}\n\n\\subsection{GW170817 Example Analysis}\n\n\\textit{Coming soon}\n\n\\clearpage\n\n\\bibliographystyle{h-physrev5}\n\\bibliography{bibliography}\n\n\\end{document}\n\n", "meta": {"hexsha": "ea087cc54af0a1230cbfbf475bdeade70c6866a5", "size": 6981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/notes.tex", "max_stars_repo_name": "markoris/EM_PE", "max_stars_repo_head_hexsha": "d65811e269e4befec94429387bdef0bc76cfb8cf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-06-14T20:59:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-22T14:15:59.000Z", "max_issues_repo_path": "Notes/notes.tex", "max_issues_repo_name": "markoris/EM_PE", "max_issues_repo_head_hexsha": "d65811e269e4befec94429387bdef0bc76cfb8cf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/notes.tex", "max_forks_repo_name": "markoris/EM_PE", "max_forks_repo_head_hexsha": "d65811e269e4befec94429387bdef0bc76cfb8cf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-06-14T00:18:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-02T20:41:42.000Z", "avg_line_length": 50.2230215827, "max_line_length": 278, "alphanum_fraction": 0.7285489185, "num_tokens": 2121, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835493924954, "lm_q2_score": 0.6791786861878392, "lm_q1q2_score": 0.5674426194079476}}
{"text": "\\chapter{OpenMath/MathML Translation} \\label{analysis}\n\nMathML and OpenMath are closely related, serving a similar purpose of\nconveying mathematics across different applications. The aim of this\nanalysis is to relate MathML and OpenMath to illustrate their\nsimilarities and differences. We intend it to be application\nindependent, highlighting the problems arising when developing programs\ntranslating MathML to OpenMath and vice versa.\n\nAs is stated in the OpenMath standard \\cite{openmathspec}, OpenMath\nobjects have the expressive power to cover all areas of computational\nmathematics. This is certainly not the case with MathML. However,\nMathML was designed to be displayed on any MathML compliant renderer.  \nThe possibility to translate between them would allow OpenMath objects\nto be displayed, and MathML objects to have a wider semantic scope. But is\na translation possible?\n\nOpenMath and MathML have many common aspects. Some features of the\nstandards help facilitate the translation, mainly that the structure of\nboth standards is very similar.  They both use prefix operators and are\nXML \\cite{xml}\\index{XML} based. They both construct their objects by\napplying certain rules recursively. Such similarities facilitate\nmapping across both standards.\n\nBecause both standards are XML based, their syntax is governed by the\nrules of XML syntax. In other words, the details of using tags,\nattributes, entity references and so on are defined in the XML language\nspecification. By complying with the XML standard it is possible to use\ngeneric XML generators and validators. These can be programmed for the\napplication being developed, or existing ones can be used.\n\nFinally, OpenMath has specific content dictionaries\\index{content\ndictionaries} mirroring MathML's semantic scope, which permit a\nstraightforward mapping between both recommendations. Since both\nstandards are simply different ways of representing mathematics,\ndesigned with translation in mind, mapping one to the other is\ncertainly possible.\n\nWe shall look at all the areas of both recommendations where\ndifferences occur and how they pose difficulties to designing a\ntranslator. It is important to understand how objects are constructed\nand what they represent. We will then discuss how functions and\noperators are applied on their arguments. There are various specific\nstructural differences between both standards which need to be properly\nunderstood; we will attempt to explain these differences and offer a\nmethod of translation for each one. We will also discuss how MathML\nsupports extensibility and to what extent it is possible to implement\nsuch extensibility to accept new OpenMath symbols. To finish we will\ngive an explanation of how to handle the translation problem.\n\nBefore we start our analysis, it is important that we define a few\nterms related to our analysis. We also encourage the reader to have a\nlook at the standards in order to better appreciate this analysis.  \nMathML and OpenMath {\\it\nobjects}\\index{MathML!objects|textbf}\\index{OpenMath!objects|textbf}\nconvey the meaning of a mathematical expression and are represented as\nlabelled trees. An object can also be called an {\\it expression}. A\n{\\it symbol}\\index{OpenMath!symbol|textbf} in OpenMath is used to\nrepresent a mathematical concept. For instance {\\it plus} or {\\it max}\nare considered symbols. 
We call {\\it elements} the words enclosed\nwithin \\texttt{$<$$>$} such as \\texttt{$<$apply$>$} or\n\\texttt{$<$OMA$>$}. Elements enclose other XML data called their\n`content' between a `start tag' (sometimes called a `begin tag') and an\n`end tag', much like in HTML\\index{Html}. There are also `empty\nelements' such as \\texttt{$<$plus/$>$}, whose start tag ends with /$>$\nto indicate that the element has no content or end tag.\n\n\\section{Constructing Objects} \\label{constructors}\n\nConstructing objects in MathML and OpenMath is done in similar ways.\nMathML uses elements termed {\\it containers} and OpenMath uses elements\ncalled {\\it constructs}.  They are both closely related, and most of\nthem are easily interchangeable. The nature of the\nconstructors\\index{constructors} in both standards is rather different,\nbut their usage is the same.\n\nOpenMath objects can be created by applying a symbol onto a series of\narguments.  These are the objects created by {\\it application} and are\nsurrounded by \\texttt{$<$OMA$>$\\ldots$<$/OMA$>$} elements. In MathML\nthe approach is different. MathML possesses more constructors and they\nare more specific.  It is important to note that OpenMath objects\nconstructed with the \\texttt{$<$OMA$>$} element may translate to\nvarious constructors in MathML.\n\nIn OpenMath for instance, defining a list or a matrix would be done by\napplying the application constructor on the {\\it list} or {\\it matrix}\nsymbol followed by the contents of the list or matrix. In MathML\nhowever, a list would require the \\texttt{$<$list$>$$\\ldots<$/list$>$}\nconstructor, and a matrix would need the\n\\texttt{$<$matrix$>$\\ldots$<$/matrix$>$} constructor.\n\nMost OpenMath symbols constructed by application are constructed in\nMathML using the {\\tt $<$apply$>$} constructor. But there are\nexceptions which do not map to {\\tt $<$apply$>$} tags. It is important\nthat all exceptions such as \\verb|matrix|, \\verb|list|, \\verb|set| and\nothers are determined and that the appropriate MathML constructor is\nused when translating. Table \\ref{const} shows what possible MathML\nconstructors {\\tt $<$OMA$>$} can map to.\n\nOpenMath objects can also be constructed using the\n\\texttt{$<$OMBIND$>$} element.  This consists in binding a symbol to a\nfunction with zero or more bound variables.  MathML does not have an\nequivalent, and so symbols which use the {\\it binding} construct in\nOpenMath, like {\\tt lambda} or {\\tt forall}, may have different ways of\nbeing constructed in MathML. {\\tt lambda} uses a specific constructor\nin MathML, whereas {\\tt forall} uses the {\\tt $<$apply$>$} construct.\nIt is very important in order to ensure proper translation, to\ndetermine which OpenMath symbols use the binding constructor and what\ntheir MathML equivalent is.\n\nThere are objects constructed by attributing a value to an object.\nThese are objects constructed by {\\it attribution} and employ the {\\tt\n$<$OMATTR$>$} elements. MathML also allows objects to possess\nattributed values called attributes. The translation is\nstraightforward.\n\nThere are other constructors which we do not mention in more detail\nbecause there exists a direct mapping between both standards. This is\nthe case of \\texttt{$<$OMI$>$, $<$OMF$>$}, \\texttt{$<$OMV$>$}\n\\texttt{$<$cn$>$} and \\texttt{$<$ci$>$}. 
Table \\ref{const} shows the\nrelation between them.\n\n\\begin{table} \n\n\\begin{center}\n\n\\begin{tabular}{|l|l|} \n\\hline \n\n{\\bf OpenMath} \t\t\t&\t{\\bf MathML} \\\\ \\hline\n\n\\texttt{$<$OMA$>$}\t\t&\t\\texttt{$<$interval$>$, $<$set$>$, $<$list$>$, $<$matrix$>$,}\\\\\n\t\t\t\t&\t\\texttt{$<$vector$>$, $<$apply$>$, $<$lambda$>$, $<$reln$>$}. \\\\\n\\texttt{$<$OMATTR$>$}\t\t&\t{\\it attributes associated to a tag} \\\\\n\\texttt{$<$OMI$>$, $<$OMF$>$}\t&\t\\texttt{$<$cn$>$} \\\\\n\\texttt{$<$OMV$>$}\t\t&\t\\texttt{$<$ci$>$} \\\\\n\\texttt{$<$OMSTR$>$}\t\t&\t{\\it not supported} \\\\\n\\texttt{$<$OMBIND$>$}\t\t&\t{\\it not supported} \\\\\n{\\it not supported}\t\t&\t\\texttt{$<$declare$>$} \\\\\n\n\\hline\n\n\\end{tabular}\n\n\\end{center}     \n\\caption{Relation between constructors} \n\\label{const}\n\n\\end{table}     \n\n\\section{Elements and Functions}\n\\label{funcs}\n\nMathML has a classification\\footnote{MathML standard section 4.2.3}\nwhich categorises elements according to the number of arguments they\naccept and the types of these arguments.  This classification can be\nsummarised for our purpose into the following:\n\n\\begin{description}\n\n\\item[unary elements] accepting 1 argument\n\n\\item[binary elements] accepting 2 arguments\n\n\\item[n-ary elements] accepting 3 or more arguments\n\n\\item[operators] elements whose arguments are given following a\nspecific syntax. This includes symbols such as {\\tt int, sum, diff,\nlimit, forall} and a few others.\n\n\\end{description}\n\nThis classification is not explicitly stated in the OpenMath standard\nbut can also be used there, since OpenMath symbols fit well into these\ncategories. By gathering OpenMath and MathML symbols into these defined\ngroups according to their syntax, it is possible to define specific\ntranslating procedures which deal with all symbols in one group in the\nsame way.\n\nFor instance, one procedure could parse any unary function by\nreading in the symbol and then its single argument. Printing out unary\nfunctions would be done by one procedure which would output the symbol\nin MathML or OpenMath followed by that one argument.\n\nThe advantage of this classification is that it greatly simplifies\nthe translation.  Parsing and generation of all symbols would then be\nthe task of a few generic procedures. However, symbols contained in the\n{\\it operators} group require more attention, since they have different\nways of reading in arguments. Specific procedures need to be\nimplemented for such cases.  We will discuss these in more detail\nlater.\n\n\\subsection{The Scope of Symbols} \\label{scope}\n\nWhen dealing with a function or an operator in mathematics, it is\nimportant that its scope is well defined. MathML and OpenMath both\nspecify the scope\\index{scope} of an operator by enclosing it with its\narguments inside opening and closing tags. In MathML, the opening and\nclosing tags \\texttt{$<$apply$>$} are employed, and in OpenMath one\nuses the opening and closing tags \\texttt{$<$OMA$>$}.\n\nHowever, OpenMath's grammar, as defined in section 4.1.2 of the\nOpenMath standard, can produce OpenMath objects where the scope of an\noperator is ambiguous, in which case a parser would have great\ndifficulty validating the syntax for translation. 
Let us illustrate\nthis problem with the two OpenMath expressions in figure \\ref{omscope}, both of which are grammatically\ncorrect.\n\n\n\\begin{figure}[h]\n\n\\begin{tabular}{ l l }\n\n{\\bf Example 1}\t\t\t\t\t& {\\bf Example 2}\\\\\n\t\t\t\t\t\t& \\\\\n\\verb|<OMOBJ>|\t\t\t\t\t&\\verb|<OMOBJ>|\\\\  \n\\verb| <OMA>|\t\t\t\t\t&\\verb| <OMA>|\\\\\n\\verb|   <OMS cd=\"arith1\" name=\"plus\"/>|\t\t&\\verb|   <OMS cd=\"arith1\" name=\"plus\"/>| \\\\\n\\verb|   <OMS cd=\"arith1\" name=\"times\"/>|\t\t&\\verb|   <OMA>| \\\\\n\\verb|   <OMV name=\"x\"/>|\t\t\t\t&\\verb|     <OMS cd=\"arith1\" name=\"times\"/>| \\\\\n\\verb|   <OMV name=\"y\"/>|\t\t\t\t&\\verb|     <OMV name=\"x\"/>| \\\\\n\\verb|   <OMV name=\"z\"/>|\t\t\t\t&\\verb|     <OMV name=\"y\"/>| \\\\\n\\verb|   <OMI>6</OMI>|\t\t\t\t&\\verb|   </OMA>| \\\\\n\\verb| </OMA>|\t\t\t\t\t&\\verb|   <OMV name=\"y\"/>| \\\\\n\\verb|</OMOBJ>|\t\t\t\t\t&\\verb|   <OMI>6</OMI>| \\\\\n\t\t\t\t\t\t&\\verb| </OMA>| \\\\\n\t\t\t\t\t\t&\\verb|</OMOBJ>| \\\\\n\n\\end{tabular}  \n\n\\caption{The importance of defining scopes}\n\\label{omscope}\n\\end{figure}  \n\nExample 2 demonstrates how the use of \\verb|<OMA>| tags helps clearly\ndefine the scope of each operator. A parser can then interpret the\nexpression and translate it correctly without difficulty. Example\n1, on the other hand, shows how insufficient use of \\verb|<OMA>| tags\ncan lead to ambiguous expressions both for automatic parsers and\nhumans.\n\nMathML is stricter when defining the scopes of operators. Every\noperator must be enclosed with its own \\verb|<apply>| tags. This\ndifference between both standards is a source of problems. The expression\nin Example 1 does not allow the scopes of the operators to be\ndetermined with accuracy, and so an equivalent MathML expression cannot\nbe produced.\n\nWhen developing an OpenMath/MathML translator, it is important to\nspecify that operator scopes in OpenMath must be accurately defined, or\nelse translation to MathML is not possible. The use of \\verb|<OMA>|\ntags must be imposed.\n\n\\section{Differences in Structure}\n\nThere are MathML and OpenMath elements which require special attention,\nmainly because some elements are constructed differently in MathML\nthan in OpenMath, and because some elements of one standard have no\nequivalent in the other. Such cases must be well understood before starting to\nimplement any translator. We shall look at these cases and propose a\nreliable method for overcoming the differences and implementing an\nefficient solution. We will mention bound variables, element attributes\nand the representation of constants.\n\nThere exist elements in both standards which represent the same\nmathematical concept, but where the syntactical structure is different.\nThe following list shows these elements: {\\it matrices, limits,\nintegrals, definite integrals, differentiation, partial\ndifferentiation, sums, products, intervals, selection from a vector}\nand {\\it\nselection from a matrix}.\n\n\\subsection{Selector functions and Matrices}\n\nLet us first look at {\\it matrices}, {\\it selection from a matrix} and {\\it\nselection from a vector}.  These elements exist in both\nrecommendations, but differ syntactically.\n\nSelection from a matrix and from a vector is done by the\n\\verb|<selector/>| element in MathML and by the symbols {\\it\nvector\\_selector} and {\\it matrix\\_selector} in OpenMath. 
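As an illustrative sketch (our own encoding; we assume here that the selector symbols live in the {\\tt \"linalg1\"} CD), selecting the first entry of a vector $V$ might be written as follows; note the opposite argument order, which is discussed below:\n\n\\begin{verbatim}\n      <apply><selector/>          <OMA>\n        <ci> V </ci>                <OMS cd=\"linalg1\" name=\"vector_selector\"/>\n        <cn> 1 </cn>                <OMI> 1 </OMI>\n      </apply>                      <OMV name=\"V\"/>\n                                  </OMA>\n\\end{verbatim}\n\n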
Because\nMathML uses the same element to deal with both matrices and vectors, it\nis necessary for the parser to determine what the arguments of the\nexpression are before choosing the correct OpenMath equivalent. If the\nexpression has a matrix as argument, then {\\it matrix\\_selector} is the\ncorrect corresponding symbol. If the argument is a vector then the\ncorresponding symbol is {\\it vector\\_selector}.\n\nIt is also important to note the order of the arguments. The MathML\n\\verb|<selector/>| tag first takes the vector or matrix object, and\nthen the indices of selection. In OpenMath it is the other way around:\nfirst the indices of selection are given, and then the object.\n\nAnother element where differences in structure are important is the\nmatrix element.  OpenMath has two ways of representing matrices. One\nrepresentation is defined in the \\verb|\"linalg1\"| CD and the other\nin the \\verb|\"linalg2\"| CD. A matrix is defined as a series of\nmatrixrows in \\verb|\"linalg1\"|, exactly as in MathML. For such\nmatrices, translation is straightforward.\n\nHowever, \\verb|\"linalg2\"| defines a matrix as a series of matrix\ncolumns. This representation has no equivalent in MathML. It is\nimportant that a translator is capable of understanding both\nrepresentations in order to offer correct translation.\n\nWhen dealing with a \\verb|\"linalg2\"| matrix, a procedure can be\nimplemented which, given the matrix columns of a matrix, returns a\nseries of matrix rows representing the same matrix. From these matrix\nrows, a MathML expression can be generated.\n\n\\subsection{Bound Variables} \\label{boundvars}\n\nThe remaining elements {\\it limits, integrals, definite integrals,\ndifferentiation, partial differentiation, sums, and products} have a\nsimilar structure and can be treated in a similar way when translating.  \nFollowing the classification in section \\ref{funcs}, these elements go\nin the {\\it operators} group.\n\nWhat characterises these elements is that in MathML they all specify\ntheir bound variables explicitly using the \\verb|<bvar>| construct.\nHowever, in OpenMath, the bound variables are not explicitly stated.\nOpenMath expressions are the result of applying the symbol on a lambda\nexpression. In order to determine the bound variable the parser must\nretrieve it from the lambda expression. 
Let us illustrate this problem\nby contrasting two equivalent expressions in figure \\ref{bound}.\n\n\n\\begin{figure}[h]\n\\begin{tabular}{ l l }    \n\n{\\bf OpenMath} \t\t\t\t\t\t& {\\bf MathML}\\\\\n\t\t\t\t\t\t\t& \\\\\n\\verb|<OMOBJ>| \t\t\t\t\t\t&\\verb|<math>| \\\\\n\\verb| <OMA>| \t\t\t\t\t\t&\\verb| <apply><sum/>| \\\\\n\\verb|  <OMS cd=\"arith1\" name=\"sum\"/>|    \t\t&\\verb|   <bvar>| \\\\\n\\verb|  <OMA>|\t\t\t\t\t\t&\\verb|     <ci>x</ci>| \\\\\n\\verb|   <OMS cd=\"interval1\" name=\"interval\"/>| \t&\\verb|   </bvar>| \\\\\n\\verb|   <OMI> 1 </OMI>|\t\t\t\t&\\verb|   <lowlimit>| \\\\\n\\verb|   <OMI> 10 </OMI>|\t\t\t\t&\\verb|     <cn>1</cn>| \\\\\n\\verb|  </OMA>|\t\t\t\t\t\t&\\verb|   </lowlimit>| \\\\\n\\verb|  <OMBIND>|\t\t\t\t\t&\\verb|   <uplimit>| \\\\\n\\verb|   <OMS cd=\"fns1\" name=\"lambda\"/>|\t\t&\\verb|     <cn>10</cn>| \\\\\n\\verb|   <OMBVAR>|\t\t\t\t\t&\\verb|   </uplimit>| \\\\\n\\verb|     <OMV name=\"x\"/>|\t\t\t\t&\\verb|   <apply><divide/>| \\\\\n\\verb|   </OMBVAR>|\t\t\t\t\t&\\verb|       <cn>1</cn>| \\\\\n\\verb|   <OMA>|\t\t\t\t\t\t&\\verb|       <ci>x</ci>| \\\\\t\n\\verb|    <OMS cd=\"arith1\" name=\"divide\"/>|\t\t&\\verb|   </apply>| \\\\\n\\verb|    <OMI> 1 </OMI>|\t\t\t\t&\\verb| </apply>|  \\\\\n\\verb|    <OMV name=\"x\"/>|\t\t\t\t&\\verb|</math>|   \\\\\n\\verb|   </OMA>|\t\t\t\t\t& \\\\\n\\verb|  </OMBIND>|\t\t\t\t\t& \\\\\n\\verb| </OMA>|\t\t\t\t\t\t& \\\\\n\\verb|</OMOBJ>|\t\t\t\t\t\t& \\\\\n\n\\end{tabular}  \n\n\\caption{Use of bound variables}\n\\label{bound}\n\n\\end{figure}\n\nIn MathML, the index variable is explicitly stated within the\n\\verb|<bvar>| tags. It is part of the \\verb|<sum/>| syntax and is\nobligatory.  In OpenMath, the {\\it sum} symbol takes as arguments an\ninterval giving the range of summation and a function. Specifying the\nbound variable is not part of the syntax. It is contained inside the\nlambda expression. This same difference in structure exists with the\nother operators mentioned above.\n\nWhen translating any of these elements, it is necessary to support\nautomatic generation and decoding of lambda expressions. Thus when\ngoing from OpenMath to MathML, the bound variable and the function\ndescribed by the lambda expression need to be extracted to generate\nvalid MathML.\n\nWhen passing from MathML to OpenMath, the variable contained inside the\n\\verb|<bvar>| tags and the function given as argument would have to be\nencoded as a lambda expression. This is possible for all MathML\nexpressions of this type, and correct OpenMath is simple to produce.\n\nThus by retrieving bound variable information from OpenMath lambda\nexpressions, it is possible to translate to MathML. But the OpenMath\ngrammar does not impose the use of lambda expressions to define bound\nvariables. Because of this flexibility, it is possible to construct\nOpenMath expressions which cannot be translated to MathML by an\nautomatic translator. If one looks at the \\verb|\"calculus1\"| CD, the\nOpenMath examples of {\\it int} and {\\it defint} do not specify their\nvariable of integration. A parser could not determine the variable of\nintegration, and an equivalent MathML expression could not be produced.\n\nThis is a problem for an OpenMath/MathML translator with no easy\nsolution. A parser intelligent enough to extract the correct bound\nvariables of an expression is very difficult to implement. We recommend\nthat OpenMath expressions which do not specify all the necessary\ninformation for translation are ignored. 
The use of lambda expressions\nshould be required.\n\n\\subsection{Intervals}\n\nSome operators require an interval to be given, specifying the range\nover which a variable runs. The {\\it sum} and {\\it product} operators\nare good examples. They both take as argument the interval giving\nthe range of summation or multiplication. Other operators accepting \nintervals in some cases are {\\it int} and {\\it condition}.\n\nBoth in MathML and OpenMath these operators define ranges with\nintervals, but differently. OpenMath defines intervals using specific\ninterval-defining symbols found in the {\\tt interval1} CD. MathML can\nuse either the interval element or the tags \\verb|<lowlimit>| and\n\\verb|<uplimit>|. These two tags do not have an OpenMath equivalent\nand so when encountered must be transformed into an interval. This is\nnot difficult since one must simply merge the lower and upper limits\ninto the edges of an interval.\n\n\\subsection{MathML attributes}\n\nThere are OpenMath symbols which map to the same MathML element, and\nare only distinguished by the attributes characterising the MathML\nelement. A MathML element which illustrates this is \\verb|<interval>|.\nThe interval element in MathML has a \\verb|closure| attribute which\nspecifies the type of interval being represented. This attribute takes\nthe following values:  \\verb|open|, \\verb|closed|, \\verb|open_closed|,\n\\verb|closed_open|.  Depending on the attribute value, a different\nOpenMath symbol will be used in the translation. The following example\nillustrates how one element with different attribute values maps to\ndifferent OpenMath symbols.\n\n\\begin{center}\n\n\\begin{verbatim}\n      <interval closure=\"closed\">\n\n      <OMS cd=\"interval1\" name=\"interval_cc\"/>\n\\end{verbatim}\n\n\\end{center}\n\n\\noindent are equivalent and so are\n\n\\begin{center}\n\n\\begin{verbatim}\n      <interval closure=\"open\">\n\n      <OMS cd=\"interval1\" name=\"interval_oo\"/>\n\\end{verbatim}\n\n\\end{center}\n\nWhen a translator encounters such elements, it is necessary that the\nMathML elements generated possess these attributes, or else semantic\nvalue is lost. 
Table \\ref{allatts} shows the relation between all\nMathML elements whose attributes are of importance and their equivalent\nOpenMath symbols.\n\n\\begin{table}[h]\n\\begin{center}\n\n\\begin{tabular}{|l|l|l|} \\hline   \n\n{\\bf MathML element}  \t\t&{\\bf Attribute values}\t\t\t& {\\bf \nOpenMath symbol} \\\\ \\hline\n\\verb|<interval>|\t\t&{\\it default}\t\t\t\t& {\\it interval} \\\\\n\t\t\t\t&\\verb|closure=\"open_closed\"|\t\t& {\\it interval\\_oc}\t\\\\\t\n\t\t\t\t&\\verb|closure=\"closed_open\"|\t\t& {\\it interval\\_co}\t\\\\\t\n\t\t\t\t&\\verb|closure=\"closed\"|\t\t& {\\it interval\\_cc}\t\\\\\t\n\t\t\t\t&\\verb|closure=\"open\"|\t\t\t& {\\it interval\\_oo}\t\\\\ \\hline\n\\verb|<tendsto>|\t\t&{\\it default}\t\t\t\t& {\\it above} \t\\\\\n\t\t\t\t&\\verb|type=\"above\"|\t\t\t& {\\it above}\t\\\\\n\t\t\t\t&\\verb|type=\"below\"|\t\t\t& {\\it below}\t\\\\\n\t\t\t\t&\\verb|type=\"both_sides\"|\t\t& {\\it null}\t\\\\ \\hline\n\\verb|<set>|\t\t\t&{\\it default}\t\t\t\t& {\\it set}\t\\\\\n\t\t\t\t&\\verb|type=\"normal\"|\t\t\t& {\\it set}\t\\\\\n\t\t\t\t&\\verb|type=\"multiset\"|\t\t\t& {\\it multiset}\t\\\\\n\\hline\n\n\\end{tabular}\n\n\\end{center}     \n\\caption{OpenMath symbols equivalent to the different attribute values of MathML \nelements}\n\\label{allatts}\n\n\\end{table}     \n\n\\subsection{MathML constants}\n\nIn MathML, constants are defined as being any of the following:  \n\\verb|e|, \\verb|i|, \\verb|pi|, \\verb|gamma|, \\verb|infinity|,\n\\verb|true|, \\verb|false| or \\verb|not a number (NaN)|. They appear\nwithin \\verb|<cn>| tags when the attribute \\verb|type| is set to\n\\verb|constant|. For instance $\\pi$ would be represented in MathML as:\n\n\\begin{verbatim}\n        <cn type=\"constant\">pi</cn>\n\\end{verbatim}\n\nIn OpenMath, these constants all appear as different symbols and from\ndifferent CDs.  Hence, we face a problem similar to the one posed by MathML\nattributes. The \\verb|<cn>| tag with the attribute set to\n\\verb|constant| can map to different OpenMath symbols.\n\nIt is important that the translator detects the use of the\n\\verb|constant| attribute value and maps the constant expressed to the\ncorrect OpenMath symbol.\n\nMathML also allows Cartesian complex numbers and polar\ncomplex numbers to be defined.  A complex number takes the form of two real\nnumbers separated by the \\verb|<sep/>| tag. For instance $3+4i$ is\nrepresented as:\n\n\\begin{verbatim}\n        <cn type=\"cartesian_complex\"> 3 <sep/> 4 </cn>\n\\end{verbatim}\n\nOpenMath is more flexible in its definition of complex numbers. The\nreal and imaginary parts, or the magnitude and argument, of a complex\nnumber do not have to be real numbers; they may be variables. This\nallows OpenMath to represent numbers such as $x+iy$ or $re^{i\\theta}$,\nwhich cannot be done in MathML.\n\nSo how should one map such an OpenMath expression to MathML? Because\nthere is no specific construct for such complex numbers, the easiest\nway is to generate a MathML representation using simple operators. 
The\ntwo expressions in figure \\ref{compls} are equivalent and illustrate how a\ntranslator should proceed:\n\n\\begin{figure}[h]\n\n\\begin{verbatim}\n      <OMOBJ>\n        <OMA>\n          <OMS cd=\"nums1\" name=\"complex_polar\"/>\n          <OMV name=\"x\"/>\n          <OMV name=\"y\"/>\n        </OMA>\n      </OMOBJ> \n\\end{verbatim}\n\n\\begin{verbatim}\n       <math>\n          <apply><times/>\n             <ci> x </ci>\n             <apply><exp/>\n                <apply><times/>\n                   <ci> y </ci>\n                   <cn type=\"constant\"> &imaginaryi; </cn>\n                </apply>\n             </apply>\n          </apply>\n       </math> \n\\end{verbatim}\n\n\\caption{How to translate complex numbers}\n\\label{compls}\n\\end{figure}\n\nThe problem is the same when representing rationals, since OpenMath\nallows variables to be used as elements of a rational number, whereas\nMathML only allows real numbers.\n\n\\subsection{{\\tt partialdiff} and {\\tt diff}}\n\nIn both standards it is possible to represent normal and partial\ndifferentiations. But the structures are different. Let us first look\nat {\\tt diff}. In MathML, it is possible to specify the order of the\nderivative. In OpenMath, differentiation is always of first order.  \nThe trouble here is translating MathML expressions where the order of\nderivation is higher than one. There is no equivalent representation in\nOpenMath.\n\nWhat can be done to overcome this discrepancy is to construct an\nOpenMath expression differentiated as many times as is specified by the\nMathML derivation order. For instance, when dealing with a MathML\nsecond order derivative, the equivalent OpenMath expression could be a\nfirst order derivative of a first order derivative.  This will surely\ngenerate very verbose OpenMath in cases where the order of derivation\nis high, but it at least conveys the same semantic meaning and\nsurmounts OpenMath's limitation.\n\nThe case of partial differentiation is more complicated. The representations\nin both standards are very different. In MathML one specifies all the\nvariables of differentiation and the order of derivation of each variable.\nIn OpenMath one specifies a list of integers which index the variables\nof the function. Suppose a function has bound variables $x$, $y$ and\n$z$. If we give as argument the integer list $\\{1,3\\}$ then we are\ndifferentiating with respect to $x$ and $z$. The differentiation is of\nfirst order for each variable.\n\nTranslating partial differentials from OpenMath to MathML is simple,\nbecause the information conveyed by the OpenMath expression can be\nrepresented without difficulty by MathML syntax. However, the other way\naround is difficult. Given OpenMath's limitation of only allowing first\norder differentiation for each variable, many MathML expressions which\ndifferentiate with respect to several variables, each to a different\ndegree, cannot be translated. We recommend that such MathML expressions\nare discarded by the translator.\n\n\\section{Elements not Supported by both Standards}\n\nThere are some elements of each standard which have no equivalent in the\nother. These are mainly the MathML elements \\verb|<condition>| and\n\\verb|<declare>| and the OpenMath {\\it matrixrow} and {\\it matrixcolumn}\nsymbols.\n\n\n\\subsection{{\\tt $<$condition$>$}}\n\nThe \\verb|<condition>| element is used often throughout MathML and is\nnecessary to convey certain mathematical concepts. 
There is no direct\nequivalent in OpenMath, making translation impossible for certain\nexpressions.\n\nThe \\verb|<condition>| element is used to define the `such that'\nconstruct in mathematical expressions.  Condition elements are used in\na number of contexts in MathML. They are used to construct objects like\nsets and lists by rule instead of by enumeration. They can be used with\nthe {\\tt forall} and {\\tt exists} operators to form logical\nexpressions. Finally, they can be used in various ways in\nconjunction with certain operators. For example, they can be used with\nan {\\tt int} element to specify domains of integration, or to specify\nargument lists for operators like {\\it min} and {\\it max}.\n\nThe example in figure \\ref{forall} represents $\\forall x \\mid x<9:\nx<10$ and shows how the \\verb|<condition>| tags can be used in a\nMathML expression. This MathML expression has no OpenMath equivalent\nbecause OpenMath does not allow conditions to be specified on bound\nvariables.\n\n\n\\begin{figure}[h]\n\n\\begin{verbatim}\n      <math>   \n        <apply><forall/>\n          <bvar>\n            <ci> x </ci>\n          </bvar>\n          <condition>\n            <apply><lt/>\n              <ci> x </ci>\n              <cn> 9 </cn>      \n            </apply>\n          </condition>\n          <apply><lt/>\n            <ci> x </ci>\n            <cn> 10 </cn>      \n          </apply>\n        </apply>\n      </math>   \n\\end{verbatim}\n\n\\caption{Use of {\\tt $<$condition$>$}}\n\\label{forall}\n\n\\end{figure}\n\n\nThe \\verb|<condition>| tags are used in the following MathML elements:\n{\\it set, forall, exists, int, sum, product, limit, min} and {\\it max}.\nIn all of these elements except {\\it limit}, the use of\n\\verb|<condition>| tags makes translation impossible.\n\nThe case of {\\it limit} is different because OpenMath does allow\nconstraints to be placed on the bound variable; mainly to define the\nlimit point and the direction from which the limit point is approached.\n\n\\subsection{{\\tt $<$declare$>$}}\n\nThe \\verb|<declare>| construct is used to associate specific properties\nor meanings with an object. It was designed with computer algebra\npackages in mind. OpenMath's philosophy is to let the application\ndeal with the object once it has received it. OpenMath is not intended to be\na query or programming language. This is why such a construct was not\ndefined. A translator should reject such MathML expressions.\n\n\\subsection{{\\it matrixrow, matrixcolumn}}\n\nIn the MathML specification it is stated that {\\it `The matrixrow\nelements must always be contained inside of a matrix'}. This is not the\ncase in OpenMath, where the {\\it matrixrow} symbol can appear on its\nown.  A matrix row encountered on its own has no MathML equivalent.\nHowever, when it is encountered within a matrix object, translation\nis possible.\n\nAs we mentioned earlier, it is possible to translate a matrix defined\nwith matrixcolumns to MathML. However, if a matrixcolumn is found on\nits own, it has no MathML equivalent.\n\n\n\\section{Extensibility}\n\nOpenMath already possesses a set of CDs covering all of MathML's\nsemantic scope. These CDs belong to the MathML CD Group.  It is clear\nthat these CDs must be understood by an OpenMath/MathML interface.  
\nThere are as well a few other symbols, from CDs outside the\nMathML CD Group, which can be mapped, such as the matrices defined in {\\tt\n\"linalg2\"}.\n\nBut OpenMath has the capability of extending its semantic scope by\ndefining new symbols within new content dictionaries\\index{content\ndictionaries}. This facility affects the design of any OpenMath\ncompliant application. When it comes to translating to MathML, it is\nnecessary that newly defined symbols are properly dealt with. A\ntranslator should have the ability to recognise any symbol with no\nmapping to MathML.\n\nBut how do we deal with most symbols outside the MathML CD Group? Or\nwith new symbols which will continue to appear as OpenMath evolves? How\ndo we map them to MathML?\n\nMathML, as any system of content markup\\index{content markup}, requires\nan extension mechanism which combines notation with semantics.\nExtensibility in MathML is not as flexible as in OpenMath, but it is\npossible to define and use functions which are not part of the MathML\nspecification. MathML content markup specifies several ways of\nattaching an external semantic definition to content objects.\n\nBecause OpenMath contains many elements which have no equivalent in\nMathML, and because OpenMath can have new CDs amended to it, we will\nneed to use these mechanisms of extension. The \\verb|<semantics>|\nelement is used in MathML to bind a semantic definition to a symbol.\nAn example taken from the MathML specification~\\cite{mathml} section\n5.2.1\\footnote{{\\tt\nhttp://www.w3.org/WD$-$MathML2$-$19991222/chapter5.html\\#mixing:parallel}}\nshows how the OpenMath `rank' operator (non-existent in MathML) can be\nencoded using MathML. The MathML encoding of rank is shown in figure\n\\ref{rank}:\n\n\\begin{figure}[h]\n\\begin{verbatim}\n      <math>\t\n        <apply><eq/>\n          <apply><fn>\n              <semantics>\n                <ci><mo>rank</mo></ci>\n                <annotation-xml encoding=\"OpenMath\">\n                  <OMS cd=\"linalg3\" name=\"rank\"/>\n                </annotation-xml>\n              </semantics>\n            </fn>\n            <apply><times/>\n              <apply><transpose/>\n                <ci>u</ci>\n              </apply>\n              <ci>v</ci>\n            </apply>\n          </apply>\n          <cn>1</cn>\n        </apply>\n      </math>\n\\end{verbatim}\n\\caption{Encoding of OpenMath symbol `rank' in MathML}\n\\label{rank}\n\\end{figure}\n\nIt shows that an OpenMath operator without MathML equivalent is easily\ncontained within \\verb|<semantics>| tags and can be applied on any\nnumber of arguments.\n\nThis method works well when dealing with operators constructed by {\\it\napplication} (between \\verb|<OMA>| tags), because MathML also\nconstructs expressions by application (between \\verb|<apply>| tags). It\nis assumed they take any number of arguments. However, OpenMath can\nalso construct expressions by binding symbols to their arguments. As we\ndescribed earlier (section \\ref{constructors}), this method has no\nequivalent in MathML.\n\nSo what happens when a new symbol is encountered which is constructed\nby binding in OpenMath? Enveloping the new symbol inside\n\\verb|<semantics>| tags will produce an incorrect translation.\n\nIt is first necessary to determine if the new symbol encountered is\nconstructed by binding or not. 
In order to do so, the translator could read in a file describing the\nnew symbol and specifying these details.\nThis file could be the CD where the symbol is defined. But\nunfortunately CDs are written in a human readable way, and there is no\nway a program could determine the construction method of a particular\nsymbol or the number and type of arguments it takes.\n\nOne could instead read in the STS (small type system) file of a symbol. But the\nsimplest way is to check the tag preceding the new symbol in the\nOpenMath input. If it is \\verb|<OMBIND>|, then we are sure this symbol\nis constructed by binding. Nonetheless, accurate mapping would be\nimpossible. As we have seen before, MathML only offers extensibility\nfor operators constructed by application. It is not possible to define new\ncontainers, new types, or new operators constructed differently, such as\nthose constructed by binding.\n\nWhile it is possible to define certain new symbols in MathML, the\nflexibility of OpenMath extensibility creates problems for a\ntranslator to MathML. This is why it is stated in the OpenMath standard\nin section 2.5 that {\\it `it is envisioned that a software application\ndealing with a specific area of mathematics declares which content\ndictionaries\\index{content dictionaries} it understands'}. A MathML\ntranslator deals with the area of mathematics defined by MathML and\nshould understand all CDs within the MathML CD Group. Any other symbols\nwill be properly translated if they are enclosed inside \\verb|<OMA>|\ntags.\n\nExtensibility is limited by the extension mechanisms offered by MathML.\n\n\\section{How to Handle the Translation Problem}\n\nAlthough there are surely many ways to tackle the translation problem,\nthere are a few requirements which must be respected by any\nOpenMath/MathML translator. The main one is that content dictionaries and\nsymbols are dealt with correctly during translation in both directions.\n\nIn OpenMath, symbols always appear next to the content\ndictionary\\index{content dictionary} they belong to. The \\verb|<OMS>|\nelement always takes two attributes: the symbol's name and the\nsymbol's corresponding CD. Two symbols with the same name coming from\ndifferent CDs are considered to be different.\n\nWhen parsing OpenMath, a translator must ensure that the symbols read\nbelong to the correct CDs; if not, it should conclude the symbol has a\nmeaning it does not understand and deal with it accordingly. Because\nan OpenMath/MathML translator will understand all MathML related CDs,\nsymbols encountered are considered valid if they come from this CD\ngroup. Symbols with the same name but from unknown CDs should be\nenclosed within \\verb|<semantics>| tags when possible.\n\nWe face the same requirement when generating OpenMath. All OpenMath\nsymbols output from the translator must appear next to their correct\nCDs. If we are translating the MathML element \\verb|<plus/>|, the\ncorresponding OpenMath symbol {\\it plus} must appear next to the\n\\verb|arith1| CD.\n\nThis requires a translator to keep a database relating each understood\nsymbol to its CD. This database must allow the translator to detect\nunknown symbols, or to accept some symbols from different CDs with the\nsame name which have MathML equivalents. This is the case of {\\it\nmatrix}, which belongs to various CDs (\\verb|linalg1|, \\verb|linalg2|), as do\nthe symbols {\\it in}, {\\it inverse}, {\\it setdiff}, {\\it vector} and {\\it\narcsinh}, to name a few.\n\n
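As a minimal sketch, here are a few entries such a database might contain (all of the pairings below appear in examples elsewhere in this chapter):\n\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\n{\\bf Symbol} & {\\bf CD} \\\\ \\hline\n{\\it plus} & {\\tt arith1} \\\\\n{\\it sum} & {\\tt arith1} \\\\\n{\\it interval} & {\\tt interval1} \\\\\n{\\it lambda} & {\\tt fns1} \\\\\n{\\it rank} & {\\tt linalg3} \\\\\n{\\it matrix} & {\\tt linalg1}, {\\tt linalg2} \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\n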
These symbols belonging to various CDs pose a problem when\ntranslating from MathML to OpenMath. Which CD do we choose? {\\it\ninverse}, for instance, belongs to {\\it fns1} and {\\it arith2}.\nPriority should be given to the CD belonging to the MathML CD Group.\nIf both CDs belong to the MathML CD Group, then common sense should guide\nwhich CD to choose; it is up to the designer.\n\nAn OpenMath/MathML interface must be very rigorous when dealing with\ncontent dictionaries. Any mistake may produce invalid OpenMath or\nreject valid OpenMath expressions.\n\n\\section{Conclusion}\n\nIt is clear now that a translation is possible. Setting aside the\ndifficulties described in this analysis, there are many similarities\nbetween both standards. As we have seen, expressions are constructed\nsimilarly and the application of functions is practically identical.\n\nHowever, the various differences of structure can limit the power of a\ntranslator in some situations, mainly when translating partial\ndifferentiations or applying conditions to bound variables.\n\nThe design of any translator requires a good understanding of both\nstandards and how they represent mathematical concepts. The\ninformation described in this document will guide the design of an\nOpenMath/MathML translator.\n\n", "meta": {"hexsha": "504b8d081268661bd9757d7b2b0cbc2fb786ae0c", "size": 37744, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/mathml/analysis.tex", "max_stars_repo_name": "arthurcnorman/general", "max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/mathml/analysis.tex", "max_issues_repo_name": "arthurcnorman/general", "max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/mathml/analysis.tex", "max_forks_repo_name": "arthurcnorman/general", "max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6346820809, "max_line_length": 94, "alphanum_fraction": 0.7492846545, "num_tokens": 9232, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835207180243, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5674426162091888}}
{"text": "\\documentclass{article}\n\n\\usepackage{lipsum}\n\\usepackage[margin=1.2in]{geometry}\n\\usepackage{titlesec}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\n\\newcommand{\\code}{\\texttt}\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\n\\usepackage{siunitx} % Required for alignment\n\n\\sisetup{\n  round-mode          = places, % Rounds numbers\n  round-precision     = 2, % to 2 places\n}\n\n% Specify images directory\n\\graphicspath{ {./report-images/} }\n\n% Header and Footer stuff\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\fancyhead{}\n\\fancyfoot{}\n\\fancyfoot[R]{ \\thepage\\ }\n\\renewcommand{\\headrulewidth}{0pt}\n\\renewcommand{\\footrulewidth}{0pt}\n\\newcommand{\\sectionbreak}{\\clearpage}\n\\setlength{\\parindent}{0pt}\n\n%\n\n\\begin{document}\n\n%----------------------------------------------------------------------------------------\n%\tTITLE PAGE\n%----------------------------------------------------------------------------------------\n\n\\begin{titlepage} % Suppresses displaying the page number on the title page and the subsequent page counts as page 1\n\t\\newcommand{\\HRule}{\\rule{\\linewidth}{0.5mm}}% Defines a new command for horizontal lines, change thickness here\n\t\n\t\\center % Centre everything on the page\n\t\n\t%------------------------------------------------\n\t%\tHeadings\n\t%------------------------------------------------\n\t\n\t\\textsc{\\Large Systems of Linear Equations}\\\\[0.5cm] % Major heading such as course name\n\t\n\t\\textsc{\\large Exercise 7}\\\\[0.5cm] % Minor heading such as course title\n\t\n\t%------------------------------------------------\n\t%\tTitle\n\t%------------------------------------------------\n\t\n\t\\HRule\\\\[0.6cm]\n\t\n\t{\\huge\\bfseries Solving a Linear System with LU Decomposition}\\\\[0.25cm] % Title of your document\n\t\n\t\\HRule\\\\[1.5cm]\n\t\n\t%------------------------------------------------\n\t%\tAuthor(s)\n\t%------------------------------------------------\n\t\n\t\\begin{minipage}{0.4\\textwidth}\n\t\t\\begin{flushleft}\n\t\t\t\\large\n\t\t\t\\textit{Author}\\\\\n\t\t\t\\textsc{Cesare De Cal} % Your name\n\t\t\\end{flushleft}\n\t\\end{minipage}\n\t~\n\t\\begin{minipage}{0.4\\textwidth}\n\t\t\\begin{flushright}\n\t\t\t\\large\n\t\t\t\\textit{Professor}\\\\\n\t\t\t\\textsc{Annie Cuyt}\\\\ % Supervisor's name\n\t\t\t[0.25cm]\n\t\t\t\\textit{Assistant Professor}\\\\\n\t\t\t\\textsc{Ferre Knaepkens} % Supervisor's name\n\n\t\t\\end{flushright}\n\t\\end{minipage}\n\t\t\n\t\\vfill\\vfill\\vfill\n\t\n\t{\\large\\today}\n\t\t\n\t\\vfill\n\t\n\\end{titlepage}\n\n%----------------------------------------------------------------------------------------\n\n\\section{Introduction}\\label{sec:intro}\nThis exercise asks to build a tridiagonal matrix with the value $-1$ on the adjacent upper diagonal, the value $+1$ on the adjacent lower diagonal, and the value $b_{i}$ on the main diagonal, with $i = 1, \\ldots, n$ given by\n$$b_i =\\frac{2(i+1)}{3},\\quad i + 1= 3, 6, 9,\\ldots$$\n$$b_i =1,\\quad i + 1 = 2, 4, 5, 7, 8, \\ldots$$\n\nThis matrix should then be used as the coefficients matrix in the $A\\vec{x}=\\vec{y}$ linear system. The exercise asks to solve the system using \\code{GEPP} (Gaussian Elimination with Partial Pivoting) and then give $x_1$, an approximation of $e-2$.\\\\\n\nAs we've seen in class, there are multiple ways of solving a linear system. For example, I could compute the inverse of $A$ and find $\\vec{x}=A^{-1}\\vec{y}$. 
We've seen that this approach, however, requires more computations than necessary and returns a less accurate result. Therefore, in this exercise I am going to solve the linear system using an LU decomposition, which is numerically stable. I'll also calculate the condition number and the error, to check whether this is an ill-conditioned system and how precise the computed solution is.\n\n\\section{Tools}\nThe following programming language and libraries have been used in this exercise:\n\\begin{itemize}\n  \\item C\n  \\item C Math Library\n  \\item GSL (GNU Scientific Library)\n  \\item Python 3 (for plotting)\n  \\item NumPy (for plotting)\n\\end{itemize}\nThe following double-precision GSL data types have been used in the exercise:\n\\begin{itemize}\n  \\item \\code{gsl\\_vector}\n  \\item \\code{gsl\\_matrix}\n  \\item \\code{gsl\\_permutation}\n\\end{itemize}\nThe following GSL methods have been used in the exercise:\n\\begin{itemize}\n  \\item \\code{gsl\\_matrix\\_alloc(size1, size2)}\n  \\item \\code{gsl\\_matrix\\_set\\_zero(matrix)}\n  \\item \\code{gsl\\_matrix\\_set(matrix, row, column, value)}\n  \\item \\code{gsl\\_matrix\\_get(matrix, row, column)}\n  \\item \\code{gsl\\_vector\\_alloc(size)}\n  \\item \\code{gsl\\_vector\\_set\\_zero(vector)}\n  \\item \\code{gsl\\_vector\\_set(vector, index, value)}\n  \\item \\code{gsl\\_vector\\_get(vector, index)}\n  \\item \\code{gsl\\_matrix\\_memcpy(destinationMatrix, sourceMatrix)}\n  \\item \\code{gsl\\_linalg\\_SV\\_decomp(A, V, S, workspaceVector)}\n  \\item \\code{gsl\\_vector\\_minmax(vector, minInVector, maxInVector)}\n\\end{itemize}\nIn order to factorize a matrix into its LU decomposition, and then solve the square system $Ax=y$ using the decomposition of $A$, I've used the following methods:\n\\begin{itemize}\n  \\item \\code{gsl\\_linalg\\_LU\\_decomp(A, permutation, signum)}\n  \\item \\code{gsl\\_linalg\\_LU\\_solve(LU, permutation, b, x)}\n  \\item \\code{gsl\\_permutation\\_alloc(size)}\n\\end{itemize}\nThe following method from the C Math library was used in this exercise to calculate the absolute value of a number:\n\\begin{itemize}\n  \\item \\code{fabs(x)}\n\\end{itemize}\n  \n\\section{Solving the Linear System}\nBy looking closely at the first rule, we see that the $i+1$ are all multiples of 3 ($i+1 = 3k$ for some integer $k$). Hence the $i$ are of the form $i = 3k-1$. For $n = 5$, for example, this is what the coefficient matrix looks like:\n$$\n\\begin{bmatrix}\n1.000000000e+00 & -1.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 \\\\ \n1.000000000e+00 & 2.000000000e+00 & -1.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 \\\\ \n0.000000000e+00 & 1.000000000e+00 & 1.000000000e+00 & -1.000000000e+00 & 0.000000000e+00 \\\\ \n0.000000000e+00 & 0.000000000e+00 & 1.000000000e+00 & 1.000000000e+00 & -1.000000000e+00 \\\\ \n0.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 & 1.000000000e+00 & 4.000000000e+00 \\\\\n\\end{bmatrix}\n$$\n\nThe coefficients matrix $A$ is first allocated using the \\code{gsl\\_matrix\\_alloc} method, then I set all the elements to zero with \\code{gsl\\_matrix\\_set\\_zero}, and finally nested \\code{for} loops fill the diagonal values by checking the indexes.\\\\\n\nI used the \\code{gsl\\_vector\\_alloc} method to create an instance of the vector. All of its elements were set to zero by using \\code{gsl\\_vector\\_set\\_zero(vector)}. 
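A minimal sketch of the matrix-construction step described above (the function name and variable names are my own, and error checking is omitted):\n\n\\begin{verbatim}\n#include <gsl/gsl_matrix.h>\n\n/* Build the n-by-n tridiagonal coefficient matrix. */\ngsl_matrix *build_A(int n) {\n    gsl_matrix *A = gsl_matrix_alloc(n, n);\n    gsl_matrix_set_zero(A);\n    for (int j = 0; j < n; j++) {\n        int i = j + 1;  /* 1-based row index, as in the exercise */\n        double b = ((i + 1) % 3 == 0) ? 2.0 * (i + 1) / 3.0 : 1.0;\n        gsl_matrix_set(A, j, j, b);                        /* main diagonal  */\n        if (j + 1 < n) gsl_matrix_set(A, j, j + 1, -1.0);  /* upper diagonal */\n        if (j > 0)     gsl_matrix_set(A, j, j - 1,  1.0);  /* lower diagonal */\n    }\n    return A;\n}\n\\end{verbatim}\n\n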
The exercise asks us to set the first element of the $y$ vector to one, so I used \\code{gsl\\_vector\\_set(vector, 0, 1)} to assign the value 1 to index 0. For $n=5$, we have:\n$$\n\\vec{y}=\n\\begin{bmatrix}\n1.000000000e+00 \\\\\n0.000000000e+00 \\\\\n0.000000000e+00 \\\\\n0.000000000e+00 \\\\\n0.000000000e+00 \\\\\n\\end{bmatrix}\n$$\n\nThen, I calculate the condition number of the matrix $A$ of order $n$ which will give me a better idea if this is a well-conditioned or an ill-conditioned linear system. In GSL there is no direct function that calculates the condition number, but it's possible to use the ratio of the largest singular value of matrix A, $\\sigma_n (A)$, to the smallest $\\sigma_1 (A)$:\n\n$$\\kappa(A) := \\frac{\\sigma_n (A)}{\\sigma_1 (A)}= \\frac{\\norm{A}}{\\norm{A^{-1}}^{-1}}$$\n\nI proceed to factorize $A$ into its singular value decomposition $SVD$ using the \\code{gsl\\_linalg\\_SV\\_decomp} method, and then use $\\code{gsl\\_vector\\_minmax}$ to extract the minimum and maximum singular values out of the vector $S$ that contains the diagonal elements of the singular value matrix. \\\\\n\nFor $n=5$, the condition number is\n\n$$\\kappa(A) = \\frac{\\sigma_n (A)}{\\sigma_1 (A)}= \\frac{4.205100611e+00}{1.142643287e+00}=3.680151678e+00$$\n\nGiven the $Ax=y$ system, my goal is now to find the vector of the unknowns $\\vec{x}$. To do so, I first factorize $A$ into its LU decomposition by allocating a new matrix (so that the matrix which represents $A$ doesn't get overridden) using \\code{gsl\\_matrix\\_memcpy} and then by calling \\code{gsl\\_linalg\\_LU\\_decomp}. This method utilizes Gaussian Elimination with partial pivoting to compute the decomposition. The following is the $LU$ matrix for $n=5$:\n$$\n\\begin{bmatrix} \n1.000000000e+00 & -1.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 \\\\ \n1.000000000e+00 & 3.000000000e+00 & -1.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 \\\\ \n0.000000000e+00 & 3.333333333e-01 & 1.333333333e+00 & -1.000000000e+00 & 0.000000000e+00 \\\\\n0.000000000e+00 & 0.000000000e+00 & 7.500000000e-01 & 1.750000000e+00 & -1.000000000e+00 \\\\ \n0.000000000e+00 & 0.000000000e+00 & 0.000000000e+00 & 5.714285714e-01 & 4.571428571e+00 \\\\\n\\end{bmatrix}\n$$\n\nI can now use the $LU$ matrix to solve the system by passing $LU$, $\\vec{x}$, a permutation structure \\code{gsl\\_permutation} (it contains the order of the indexes of the equations in the system to keep track of pivoting) and $\\vec{y}$ to \\code{gsl\\_linalg\\_LU\\_solve}. 
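A condensed sketch of these two steps, the SVD-based condition number and the LU solve (variable names are my own; \\code{A}, \\code{y} and \\code{x} are assumed to be allocated already):\n\n\\begin{verbatim}\n#include <gsl/gsl_linalg.h>\n\n/* Condition number: ratio of the extreme singular values of A. */\ngsl_matrix *U = gsl_matrix_alloc(n, n);\ngsl_matrix *V = gsl_matrix_alloc(n, n);\ngsl_vector *S = gsl_vector_alloc(n);\ngsl_vector *work = gsl_vector_alloc(n);\ngsl_matrix_memcpy(U, A);             /* SV_decomp overwrites its input */\ngsl_linalg_SV_decomp(U, V, S, work);\ndouble smin, smax;\ngsl_vector_minmax(S, &smin, &smax);\ndouble kappa = smax / smin;\n\n/* LU decomposition (Gaussian elimination with partial pivoting). */\ngsl_matrix *LU = gsl_matrix_alloc(n, n);\ngsl_matrix_memcpy(LU, A);            /* keep A intact */\ngsl_permutation *p = gsl_permutation_alloc(n);\nint signum;\ngsl_linalg_LU_decomp(LU, p, &signum);\ngsl_linalg_LU_solve(LU, p, y, x);\n\\end{verbatim}\n\n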
This method uses forward and back-substitution to modify the contents of the $\\vec{x}$ vector given in input, which now looks like this (for $n=5$):\n\n$$\n\\vec{x}=\n\\begin{bmatrix}\n7.187500000e-01\\\\\n-2.812500000e-01\\\\\n1.562500000e-01\\\\\n-1.250000000e-01\\\\\n3.125000000e-02\\\\\n\\end{bmatrix}\n$$\n\nI calculate the error by subtracting the computed solution $x_{1}^{\\ast}$ from the exact mathematical solution $\\widetilde{x}$ (an approximated value can be obtained by using the $\\code{M\\_E}$ GSL constant minus 2).\\\\\n\n\\begin{table*}[htb]\n\\centering % used for centering table\n\\begin{tabular}{c c c c} % centered columns (4 columns)\n$n$ & $\\widetilde{x}_1$ & $x_1^{\\ast}- \\widetilde{x}_1$ & $\\kappa(A_n)$ \\\\ [0.65ex] % inserts table\n%heading\n\\hline % inserts single horizontal line\n1 & 1.000000000e+00 & -2.817181715e-01 & 1.000000000e+00 \\\\\n2 & 6.666666667e-01 & 5.161516179e-02 & 1.767591879e+00 \\\\\n3 & 7.500000000e-01 & -3.171817154e-02 & 2.561552813e+00 \\\\\n4 & 7.142857143e-01 & 3.996114173e-03 & 2.258696038e+00 \\\\\n5 & 7.187500000e-01 & -4.681715410e-04 & 3.680151678e+00 \\\\\n6 & 7.179487179e-01 & 3.331105103e-04 & 3.953864002e+00 \\\\\n7 & 7.183098592e-01 & -2.803069588e-05 & 3.847674609e+00 \\\\\n8 & 7.182795699e-01 & 2.258566572e-06 & 5.377037588e+00 \\\\\n9 & 7.182835821e-01 & -1.753630507e-06 & 5.727581839e+00 \\\\\n10 & 7.182817183e-01 & 1.101773268e-07 & 5.498872833e+00 \\\\\n11 & 7.182818352e-01 & -6.746947445e-09 & 7.100335770e+00 \\\\\n12 & 7.182818229e-01 & 5.515095380e-09 & 7.582164638e+00 \\\\\n13 & 7.182818287e-01 & -2.766507023e-10 & 7.195531702e+00 \\\\\n14 & 7.182818284e-01 & 1.364375279e-11 & 8.833149892e+00 \\\\\n15 & 7.182818285e-01 & -1.153854789e-11 & 9.488074730e+00 \\\\\n16 & 7.182818285e-01 & 4.816147481e-13 & 8.911558696e+00 \\\\\n17 & 7.182818285e-01 & -1.998401444e-14 & 1.057152285e+01 \\\\\n18 & 7.182818285e-01 & 1.709743458e-14 & 1.142018246e+01 \\\\\n19 & 7.182818285e-01 & -6.661338148e-16 & 1.063813407e+01 \\\\\n20 & 7.182818285e-01 & -1.110223025e-16 & 1.231319966e+01 \\\\\n21 & 7.182818285e-01 & -2.220446049e-16 & 1.336883104e+01 \\\\\n22 & 7.182818285e-01 & -2.220446049e-16 & 1.237107821e+01 \\\\\n23 & 7.182818285e-01 & -2.220446049e-16 & 1.405700479e+01 \\\\\n24 & 7.182818285e-01 & -2.220446049e-16 & 1.532862983e+01 \\\\\n25 & 7.182818285e-01 & -2.220446049e-16 & 1.410816377e+01 \\\\\n26 & 7.182818285e-01 & -2.220446049e-16 & 1.580226249e+01 \\\\\n27 & 7.182818285e-01 & -2.220446049e-16 & 1.729630706e+01 \\\\\n28 & 7.182818285e-01 & -2.220446049e-16 & 1.584809348e+01 \\\\\n29 & 7.182818285e-01 & -2.220446049e-16 & 1.754855617e+01 \\\\\n30 & 7.182818285e-01 & -2.220446049e-16 & 1.926975724e+01 \\\\\n31 & 7.182818285e-01 & -2.220446049e-16 & 1.759006043e+01 \\\\\n32 & 7.182818285e-01 & -2.220446049e-16 & 1.929561485e+01 \\\\\n33 & 7.182818285e-01 & -2.220446049e-16 & 2.124756325e+01 \\\\\n34 & 7.182818285e-01 & -2.220446049e-16 & 1.933353645e+01 \\\\\n35 & 7.182818285e-01 & -2.220446049e-16 & 2.104325456e+01 \\\\\n36 & 7.182818285e-01 & -2.220446049e-16 & 2.322873622e+01 \\\\\n37 & 7.182818285e-01 & -2.220446049e-16 & 2.107816128e+01 \\\\\n38 & 7.182818285e-01 & -2.220446049e-16 & 2.279134599e+01 \\\\\n39 & 7.182818285e-01 & -2.220446049e-16 & 2.521256520e+01 \\\\\n40 & 7.182818285e-01 & -2.220446049e-16 & 2.282368084e+01 \\\\\n41 & 7.182818285e-01 & -2.220446049e-16 & 2.453979556e+01 \\\\\n42 & 7.182818285e-01 & -2.220446049e-16 & 2.719852600e+01 \\\\\n43 & 7.182818285e-01 & -2.220446049e-16 & 2.456991077e+01 
\\\\\n44 & 7.182818285e-01 & -2.220446049e-16 & 2.628853390e+01 \\\\\n45 & 7.182818285e-01 & -2.220446049e-16 & 2.918622370e+01 \\\\\n46 & 7.182818285e-01 & -2.220446049e-16 & 2.631671410e+01 \\\\\n47 & 7.182818285e-01 & -2.220446049e-16 & 2.803750846e+01 \\\\\n48 & 7.182818285e-01 & -2.220446049e-16 & 3.117535515e+01 \\\\\n49 & 7.182818285e-01 & -2.220446049e-16 & 2.806398692e+01 \\\\\n50 & 7.182818285e-01 & -2.220446049e-16 & 2.978667872e+01 \\\\\\hline %inserts single line\n\\end{tabular}\n\\end{table*}\n\n\\section{Plot}\n\\includegraphics[width=\\textwidth,height=\\textheight,keepaspectratio]{cond_number.png}\n\\includegraphics[width=\\textwidth,height=\\textheight,keepaspectratio]{error.png}\n\n\\section{Observations}\nThe linear system presented in this exercise becomes more ill-conditioned as $n$ grows, since the condition number $\\kappa(A_n)$ increases steadily with $n$. From the plot, it can be observed that the condition number grows roughly linearly. It can be noticed, however, that a large condition number doesn't necessarily mean that the error will be large in all cases, just that it is possible to have a large error. In fact, the error, which represents how well the computed solution $x_1^{\\ast}$ approximates the true solution $\\widetilde{x}_1$, gets incrementally smaller. This computation presents round-off errors because computers cannot represent real numbers with infinite precision. It can be noted that Gaussian elimination with partial pivoting doesn't introduce any additional truncation errors, and therefore it is numerically stable.\\\\\n\n\n\\end{document}", "meta": {"hexsha": "bfa36161c6e3db5437cdb756fc8e4a99e178fda2", "size": 13780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Systems/CesareDeCal_Exercise7/Report.tex", "max_stars_repo_name": "csr/MATLAB-Scientific-Programming", "max_stars_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Linear Systems/CesareDeCal_Exercise7/Report.tex", "max_issues_repo_name": "csr/MATLAB-Scientific-Programming", "max_issues_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Linear Systems/CesareDeCal_Exercise7/Report.tex", "max_forks_repo_name": "csr/MATLAB-Scientific-Programming", "max_forks_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.9275362319, "max_line_length": 795, "alphanum_fraction": 0.6871552975, "num_tokens": 4927, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6442251201477016, "lm_q2_score": 0.8807970748488297, "lm_q1q2_score": 0.5674316013702315}}
{"text": "\\documentclass{article}\n\n\\usepackage{mathrsfs,amssymb,amsmath}\n\n\\begin{document}\n\nPage whatever\n\\section{Show that wk is the smallest topology on $\\mathscr{X}$ such that each $x^*$ in $\\mathscr{X}^*$}\n\nMust show that an arbitrary open set in wk can be generated by some collection of sets of the form $x^{*-1}(V)$.\n\nStart with an arbitrary open set $U$. Since $X$ with wk is a locally convex set, $\\bigcap^n_{i=1}\\{x\\in \\mathscr{X} : p_j( x-x_0) < \\varepsilon_j\\} \\subseteq U$ for some finite list of $\\varepsilon$ and $p$, where $p_{x^*}=|\\langle x,x^*\\rangle|$.\n\n $\\bigcap^n_{i=1}\\{x\\in \\mathscr{X} : p_j( x-x_0) < \\varepsilon_j\\}$ is open and the intersection of pre-images of open sets from $\\mathbb{F}$, which are simply open balls of radius $\\varepsilon_j$ around $x_0$. \n\nSince $U$ is generated by the subbase of preimages of the collection of $x^*$ and $U$ is arbitrary, all open sets of wk are generated in this way, thus is the smallest possible topology.\n\n\\section{Show that wk* is the smallest topology on $\\mathscr{X}^*$ such that for each $x$ in $\\mathscr{X}$, $x^* \\mapsto \\langle x, x^* \\rangle$}\n\n\nThis purported topology $\\sigma$ is generated by open sets that are preimages of open sets of functions of the form $x^* \\mapsto \\langle x, x^* \\rangle$. By continuity of composition, they are also the preimages of open sets of functions of the form $x^* \\mapsto | \\langle x, x^* \\rangle | $. Open sets of $\\mathbb{R}$ are generated by open balls, so the pre-images are generated by sets of the form $| \\langle x, x^* \\rangle | < \\varepsilon $.\n\nA set $U$ of $\\mathscr{X}^*$ is weakly open if and only if for every $x^*_0$ in $U$ there is an $\\varepsilon$ and there are $x_1, ... x_n$ in $\\mathscr{X}$ such that \n\n$$\\bigcap^n_{i=1}\\{x^*\\in \\mathscr{X}^* : |\\langle x_k, x^* - x^*_0 \\rangle| < \\varepsilon_j\\} \\subseteq U$$\n\nEvery such set $U$ can be generated as part of $\\sigma$ and therefore is the smallest topology \n\n\\section{Prove Theorem 1.3}\n\nTheorem 1.3 is proven in the text!\n\n\\section{Let $\\mathscr{X}$ be a complex LCS and let $\\mathscr{X}^*_{\\mathbb{R}}$ denote the collection of all continous real linear functionals on $\\mathscr{X}$. Use the elements of $\\mathscr{X}^*_{\\mathbb{R}}$  to define seminorms on $\\mathscr{X}$ and let $\\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}})$ be the corresponding topology. Show that $\\sigma(\\mathscr{X}, \\mathscr{X}^*) = \\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}})$}\n\nSince every complex linear function is also real linear, $\\mathscr{X}^*$ is a subset of $\\mathscr{X}^*_{\\mathbb{R}}$,  hence  $\\sigma(\\mathscr{X}, \\mathscr{X}^*) \\subset \\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}})$ \n\nThus we need to show that $\\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}}) \\subset \\sigma(\\mathscr{X}, \\mathscr{X}^*)$.\n\nAn arbitrary open sets around $x_0$ in $\\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}})$ is generated by sets that are pre-images of open sets of a finite set of real-linear functions. Take one such real-linear function $f$ which takes an open neighborhood $U_0$ to some open neighborhood $V_0$, and consider some open rectangle. 
If we can show that $U_0$ is open in $\\sigma(\\mathscr{X}, \\mathscr{X}^*)$, this will allow us to conclude that any arbitrary open set in $\\sigma(\\mathscr{X}, \\mathscr{X}^*_{\\mathbb{R}})$ is open in $\\sigma(\\mathscr{X}, \\mathscr{X}^*)$.\n\nIf $f$ is real linear, then define $\\hat{f}(z) \\equiv f(z) - i f(iz)$. The image of $U_0$ under $\\hat{f}$ is contained in $V_0$ plus $ (-i) W_0$, where $W_0$ is the image of $i U_0$ under $f$. Since an open set plus an arbitrary set in an LCS is open, it doesn't matter what exactly $W_0$ is; the image will be open in $\\mathbb{C}$ (is this sufficient?). Hence if $\\hat{f}$ is shown to be complex linear, then $U_0$ is open in $\\sigma(\\mathscr{X}, \\mathscr{X}^*)$ and the proof is concluded. \n\n\\begin{align}\n\\hat{f}( \\alpha z) &= f(\\alpha z) - i f (\\alpha i z) \\\\\n&= f( (a+bi) z) - i f((a+bi) i z) \\\\\n&= f(az + biz) - i f(aiz - bz) \\\\\n&= f(az) + f(biz) - if(aiz) + if(bz) \\\\\n&= af(z) + bf(iz) - iaf(iz) + bif(z) \\\\\n&= (a+bi) (f(z) - i f(iz)) \\\\\n&= \\alpha (f(z) - i f(iz)) \\\\\n&= \\alpha \\hat{f}(z)\n\\end{align}\n\n(5) holds because $f$ is real linear and $a$ and $b$ are real; (6) is just collecting terms. This shows that $\\hat{f}$ is complex linear.\n\n\\section{If $A \\subseteq  \\mathscr{X}$...}\n\n\\subsection{$A^{\\circ}$ is convex and balanced}\n\n$A^{\\circ} \\equiv \\{x^*\\in \\mathscr{X}^*: | \\langle a, x^*\\rangle | \\le 1 \\text{ for all } a \\text{ in } A\\}$\n\nTo show that $A^{\\circ}$ is balanced, given $x^*$ in $A^{\\circ}$ and $|\\alpha| \\le 1$, we have to show that $\\alpha x^*$ is also in $A^{\\circ}$.\n\nIf $|\\langle a, x^*\\rangle | \\le 1$, then $|\\langle a, \\alpha x^*\\rangle | = |\\alpha |  | \\langle a, x^* \\rangle | \\le 1$, hence $\\alpha x^*$ is in $A^{\\circ}$.\n\nTo show that $A^{\\circ}$ is convex, we have to show that given arbitrary $x_1^*$ and $x_2^*$ in $A^{\\circ}$, an arbitrarily chosen $x_3^*$ in $\\{t x_1^* + (1-t) x_2^* :0\\le t \\le 1\\}$ is also in $A^{\\circ}$, say $x_3^* = t_3 x_1^* + (1-t_3) x_2^*$ (where $t_3$ was chosen arbitrarily in $0\\le t_3 \\le 1$).\n\nHence we need to prove that, for all $a$ in $A$, $|\\langle a, t_3 x_1^* + (1-t_3) x_2^* \\rangle | \\le 1$.\n\nIf $|\\langle a, x_1^*\\rangle | \\le 1$ and $|\\langle a, x_2^*\\rangle | \\le 1$, then simply by multiplication of both sides $t_3|\\langle a, x_1^*\\rangle | \\le t_3$ and $(1-t_3)|\\langle a, x_2^*\\rangle | \\le 1-t_3$, since $0 \\le t_3 \\le 1$ (so the inequalities do not flip). Adding the inequalities we get:\n\n\\begin{align}\nt_3|\\langle a, x_1^*\\rangle | +(1-t_3)|\\langle a, x_2^*\\rangle | \\le 1 \\\\\n|\\langle a, t_3 x_1^*\\rangle | +|\\langle a, (1-t_3) x_2^*\\rangle | \\le 1 \\\\\n|\\langle a, t_3 x_1^* + (1-t_3) x_2^*\\rangle | \\le 1\n\\end{align}\n\n(10) follows by linearity and (11) by the triangle inequality. Note that (9) could also have been obtained geometrically, by noting that the unit disk is convex, so any point between two of its points is also in the unit disk.\n\n\\subsection{If $A_1 \\subseteq A$, then $A^{\\circ} \\subseteq A^{\\circ}_1$}\n\nIf $x^* \\in A^{\\circ}$, then $| \\langle a,x^* \\rangle | \\le 1$ for all $a$ in $A$. Since $A_1 \\subseteq A$,  $| \\langle a,x^* \\rangle | \\le 1$ for all $a$ in $A_1$ as well. Therefore $x^* \\in A_1^{\\circ}$. 
Since $x^*$ was chosen arbitrarily,  $A^{\\circ} \\subseteq A^{\\circ}_1$.\n\n\n\\section{If $A\\subseteq\\mathscr{X}$, show that $A$ is weakly bounded if and only if $A^{\\circ}$ is absorbing in $\\mathscr{X}^*$}\n\nThis works equally well at any point $x$ in $\\mathscr{X}$, so assume that $0 \\in A$ and that $A^{\\circ}$ is absorbing in $\\mathscr{X}^*$ at $0$.\n\nAssume $A$ is weakly bounded. Thus for every $x^*$ in $\\mathscr{X}^*$, $x^*(A)$ is bounded in $\\mathbb{C}$. Take an arbitrary $x^*_0$ such that $x^*_0(A)$ is bounded by $M_0$. Take $\\varepsilon_0 = 1 / M_0$. We must show that for $0 \\le t < \\varepsilon_0$, $tx^*_0 \\in A^{\\circ}$. Indeed, for all $a$ in $A$, \n\n\\begin{align}\n|\\langle a, x^*_0 \\rangle | \\le M_0 \\\\\n|\\langle a, t x^*_0 \\rangle | \\le t M_0 < 1 \n\\end{align}\n\n(13) follows by linearity and from $t < \\varepsilon_0 = 1/M_0$. Thus every such $t x^*_0$ is in $A^{\\circ}$, so $A^{\\circ}$ is absorbing.\n\n\nAssume $A^{\\circ}$ is absorbing in $\\mathscr{X}^*$. Thus for each $x^*$ in $\\mathscr{X}^*$ there is an $\\varepsilon > 0$ such that $tx^* \\in A^{\\circ}$ for $0 \\le t < \\varepsilon$, i.e. $| \\langle a, t x^* \\rangle | \\le 1$ for all $a$ in $A$. Thus, by linearity, $|\\langle a, x^* \\rangle|$ cannot be larger than $1/t$ for any $0 < t < \\varepsilon$, so $x^*(A)$ is bounded.\n\n\\end{document}", "meta": {"hexsha": "9937ae1753cb34abaedc169cfca86a0e86219268", "size": 7356, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "analysis/5_Weak_Topologies/1_Duality.tex", "max_stars_repo_name": "lukemassa/math-exercises", "max_stars_repo_head_hexsha": "765b84eb0a1b5ab59576172e2a814a1862a4f129", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "analysis/5_Weak_Topologies/1_Duality.tex", "max_issues_repo_name": "lukemassa/math-exercises", "max_issues_repo_head_hexsha": "765b84eb0a1b5ab59576172e2a814a1862a4f129", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "analysis/5_Weak_Topologies/1_Duality.tex", "max_forks_repo_name": "lukemassa/math-exercises", "max_forks_repo_head_hexsha": "765b84eb0a1b5ab59576172e2a814a1862a4f129", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.8316831683, "max_line_length": 556, "alphanum_fraction": 0.6402936378, "num_tokens": 2714, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059609645724, "lm_q2_score": 0.7154239897159439, "lm_q1q2_score": 0.567407030860772}}
{"text": "\\section{Currency Input IO}\nThe purpose of this section is to Provide extra detail on the specification of IO streams and how they interface with other devices. \n\n\\subsection{Input Details}\n\\label{selector}\n\\begin{itemize}\n\\item \\textbf{Clock:} The clock signal tells the circuit to add the value of an accepted bill to the value that is currently stored in memory. In a real vending machine, the clock pulse would be generated when the bill reader accepted an inserted bill as valid. We simulate this using a manually operated two-button clock pulse generator. \n\\item $\\mathbf{B_{0..5}}$\\textbf{:} Together these inputs are a binary representation of the value of the bill with $B_0$ being the least significant bit. In a real vending machine, this number would be generated by the bill reader. We simulate this in our circuit using 6 switches. \n\\item \\textbf{Selector:} The selector bit switches the ALUs between addition and subtraction while adjusting their $C_n$ bits accordingly. Addition is used for currency input and subtraction is used for vending and making change. The value of this bit is set by the item selection and change making circuits. \n\\end{itemize}\n\n\\subsection{Output Details}\n\\label{dollars-in-machine}\n\\begin{itemize}\n\\item \\textbf{Dollars In Machine:} There are six D flip flops that store the amount of money currently in the machine. On the board, they are numbered as such where FF represents a 74LS74: \\\\\\\\\n\\begin{tabular}{|c|c|c|}\n\\hline \n4 & FF & 1 \\\\ \n\\hline \n5 & FF & 2 \\\\ \n\\hline \n6 & FF & 3 \\\\ \n\\hline \n\\end{tabular} \n\\end{itemize}\n", "meta": {"hexsha": "64f629decc9707ae4bf898bfa9f8bce7d00e0105", "size": 1568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MAN/tex/IOInput.tex", "max_stars_repo_name": "c-morris/vending-machine", "max_stars_repo_head_hexsha": "da0542dd7c92fb3ad55460ce3ae91c3b02f607f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MAN/tex/IOInput.tex", "max_issues_repo_name": "c-morris/vending-machine", "max_issues_repo_head_hexsha": "da0542dd7c92fb3ad55460ce3ae91c3b02f607f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MAN/tex/IOInput.tex", "max_forks_repo_name": "c-morris/vending-machine", "max_forks_repo_head_hexsha": "da0542dd7c92fb3ad55460ce3ae91c3b02f607f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.3076923077, "max_line_length": 339, "alphanum_fraction": 0.7653061224, "num_tokens": 396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5674070286757897}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\marginpar{Tuesday\\\\ 2020-11-17, \\\\ compiled \\\\ \\today}\n\nThe papers \\textcite[]{verdeFirstYearWilkinson2003} and \\textcite[]{spergelFirstYearWilkinson2003} are discussed. \n\n\\section{Bayesian Hierarchical Modeling}\n\nAn example from the lectures by A.\\ Heavens, in the ICIC data analysis workshop. \n\nSuppose we want to solve a linear regression problem for a line passing through the origin, and we have one data pair \\((\\hat{x}, \\hat{y})\\), with error on both. \n\nWe want the angular coefficient \\(m\\):\n%\n\\begin{align}\n\\mathcal{P} (m | \\hat{x}, \\hat{y}) \\propto \\mathcal{L} (\\hat{x}, \\hat{y} | m) \\mathbb{P}(m)\n\\,.\n\\end{align}\n\nWe know that the true \\(x\\) and \\(y\\) are related by \\(y = mx\\).\nWe can introduce these two as extra variables in the problem: applying \\(\\mathbb{P}(a, b) = \\mathbb{P}(a | b) \\mathbb{P}(b)\\) repeatedly we find\n%\n\\begin{align}\n\\mathbb{P}(\\hat{x}, \\hat{y}, x, y | m) \\propto \\mathbb{P}(\\hat{x}, \\hat{y} | x, y, m) \\mathbb{P}(x, y | m) \\propto \\mathbb{P}(\\hat{x}, \\hat{y} | x, y) \\underbrace{\\mathbb{P}(y | x, m)}_{ = \\delta (y - mx)} \\mathbb{P}(x | m)\n\\,.\n\\end{align}\n\nThen, we can get the final posterior by \n%\n\\begin{align}\n\\mathcal{P} (m | \\hat{x}, \\hat{y}) &\\propto \\int \\dd{x} \\dd{y} \\mathbb{P}(\\hat{x}, \\hat{y}, x, y | m) \\mathbb{P}(m)  \\\\\n&\\propto \\int \\dd{x} \\dd{y} \\underbrace{\\mathbb{P}(\\hat{x}, \\hat{y} | x, y)}_{\\text{error}} \\underbrace{\\mathbb{P}(y | x, m)}_{\\text{theory}} \\underbrace{\\mathbb{P}(x | m) \\mathbb{P}(m)}_{\\text{prior}} \\\\\n&\\propto \\int \\dd{x} \\mathbb{P}(\\hat{x}, \\hat{y} | x, mx)  \\mathbb{P}(x | m) \\mathbb{P}(m)\n\\,,\n\\end{align}\n%\nand assuming \\(\\sigma = 1\\) independent Gaussians for the error distribution, we get \n%\n\\begin{align}\n\\mathbb{P}(m | \\hat{x}, \\hat{y}) &\\propto \n\\int \\dd{x} \\exp(- \\frac{1}{2} (\\hat{y} - mx)^2) \\exp( - \\frac{1}{2} (\\hat{x} - x)^2)  \\\\\n&\\propto \\frac{1}{\\sqrt{1 + m^2}} \\exp( - \\frac{1}{2 (1+m^2)} (- m \\hat{x} + \\hat{y})^2 )\n\\,.\n\\end{align}\n\nAlternatively, we can marginalize with Monte Carlo integration.\n\nThis method works well for Gibbs sampling: it is easy to find the conditional distributions.\n\nThis can be applied to SN Ia data. 
\n\n\\end{document}\n", "meta": {"hexsha": "001bb6f16677db37d771aaecfbe20bd62ad58a4e", "size": 2190, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_third_semester/astrostatistics_cosmology/nov17.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_third_semester/astrostatistics_cosmology/nov17.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_third_semester/astrostatistics_cosmology/nov17.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 40.5555555556, "max_line_length": 223, "alphanum_fraction": 0.6278538813, "num_tokens": 838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105941403651, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5674070264908068}}
{"text": "\\documentclass[8pt]{article}\n\n\\usepackage{fullpage}\n\\usepackage[margin=.7in]{geometry}\n\\usepackage{epic}\n\\usepackage{eepic}\n\\usepackage{graphicx}\n\\usepackage{mathtools}\n\\usepackage{algorithm}\n\\usepackage[noend]{algpseudocode}\n\\usepackage{ragged2e}\n\\usepackage[parfill]{parskip}\n\n\\newcommand{\\proof}[1]{\n{\\noindent {\\it Proof.} {#1} \\rule{2mm}{2mm} \\vskip \\belowdisplayskip}\n}\n\n\\DeclarePairedDelimiter\\ceil{\\lceil}{\\rceil}\n\\DeclarePairedDelimiter\\floor{\\lfloor}{\\rfloor}\n\n\n\\newtheorem{lemma}{Lemma}[section]\n\\newtheorem{theorem}[lemma]{Theorem}\n\\newtheorem{claim}[lemma]{Claim}\n\\newtheorem{definition}[lemma]{Definition}\n\\newtheorem{corollary}[lemma]{Corollary}\n\n%\\setlength{\\oddsidemargin}{0in}\n%\\setlength{\\topmargin}{0in}\n%\\setlength{\\textwidth}{6.5in}\n%\\setlength{\\textheight}{8.5in}\n\n\\begin{document}\n\n\\hfill \\small{\\today} \\\\\n\\setlength{\\fboxrule}{.5mm}\\setlength{\\fboxsep}{1.2mm}\n\\newlength{\\boxlength}\\setlength{\\boxlength}{\\textwidth}\n\\addtolength{\\boxlength}{-4mm}\n\\begin{center}\\framebox{\\parbox{\\boxlength}{\\bf\n\\center{CS 577 - Homework 3}\n\\center{Sejal Chauhan, Vinothkumar Siddharth, Mihir Shete}\n}}\\end{center}\n\\vspace{5mm}\n\n\\section{Graded written problem}\n\n\\textbf{Input:} A sequence of n real numbers $a_1$, $a_2$,..., $a_n$ and a corresponding sequence of weights $w_1$, $w_2$, . . . , $w_n$. The weights are nonnegative reals that add up to 1, i.e. $\\sum_n^{i=1} w_i = 1$. \\\\\n\\\\\n\\textbf{Output:} The weighted median of the sequence is the number $a_k$ such that\n$\\sum_{a_i \\textless a_k} w_i \\textless 1/2$ and $\\sum_{a_i \\textless= a_k} w_i \\geq 1/2$\n\n\\begin{algorithm}\n\\caption{Algorithm to find weighted median}\\label{euclid}\n\\begin{algorithmic}[1]\n\\Procedure{Weighted-Median}{$A_w^a$}\n\n\\If{$length(A_w^a) = 1$}\n    \\State \\Return{A[0]} \n\\EndIf\n\n\\State $Median \\leftarrow \\Call{Selection}{A_w^a, \\ceil{length(A_w^a)/2}}$ \\Comment{The Selection will happen over A}\n    \\State $Pivot \\leftarrow Median$\n    \\State $L_{w}^{a},R_{w}^{a} \\leftarrow \\Call{Partition}{A_w^a, Pivot}$\n\n    \\If{$\\sum w_L$ \\textless $1/2$}\n        \\State $w_{Pivot} \\leftarrow \\sum w_L + w_{Pivot}$\n        \\If{$w_{Pivot} \\geq 1/2$}\n            \\State \\Return ${Pivot}$\n        \\EndIf\n        \\State $\\Call{Weighted-Median}{Pivot|R_w^a}$\n    \\Else\n        \\State $\\Call{Weighted-Median}{L_w^a}$\n    \\EndIf\n\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{flushleft}\nOur Algorithm uses the Selection() procedure discussed in section 6 of the Lecture \nnotes on Divide and Conquer. When \\textit{Weighted-Median} is called it will first\nfind the $\\ceil{n/2}$ smallest element in A using the Selection() procedure in linear\ntime, let's call this value \\textit{Median}(because it is actually the median of\nsorted elements in A). We will use this value to \\textit{Partition} the Array A into\n2 arrays L and R such that the Value $a_i^L$ of all elements in L is less than or equal\nto value of \\textit{Median} and Value $a_i^R$ of all elements in R is greater than \\textit{Median}.\n\nIf the weight of all elements in L is less than 1/2 and on adding the weight of the \\textit{Pivot}\nit becomes greater than or equal to 1/2 then \\textit{Pivot} is our Weighted Median and\nwe return the value of the \\textit{Pviot}. 
\n\\newpage\n\\subsection{Correctness}\nWe will proceed to prove the correctness of our algorithm by proving partial correctness\nand termination.\n\n\\subsubsection{Partial Correctness}\nThe recursive calls to \\textit{Weighted-Median} will be made on sub-arrays of the input.\nIt is trivial to see that the length of the sub-array will never be negative or 0. So\nthe recursive calls to \\textit{Weighted-Median} will always have a valid array as input.\n\nThe algorithm returns from 2 places. The first case is trivially true because if the length\nof the array is 1 then the only element in the array is the Weighted Median. In the second\nreturn we see that the sum of the weights of all elements to the left of the \\textit{Pivot} in the partitioned\narray is less than 1/2 and when we add the weight of the \\textit{Pivot} the aggregated weight\nbecomes $\\geq 1/2$; hence the \\textit{Pivot} is the Weighted Median, which is true\nas per the definition because all the elements to the left of \\textit{Pivot} in the partitioned\narray are less than or equal to the \\textit{Pivot}.\n\n\\subsubsection{Termination}\nLet $\\mu(length(A))$ be the potential function; then we can see that in the recursive calls\n\n\\begin{center}\n    $\\mu(length(A_{recursive})) = \\mu(length(A[0 \\ldots \\ceil{n/2}]))$ or $\\mu(length(A_{recursive})) = \\mu(length(A[\\ceil{n/2} + 1 \\ldots n - 1]))$, and in either case $\\mu(length(A_{recursive})) < \\mu(length(A))$\n\\end{center}\n\nSo the potential function decreases with each recursive call and our program is guaranteed to terminate.\n\n\\subsection{Complexity Analysis}\nIt's trivial to see that our recursive calls operate on an input of size $\\ceil{n/2} \\pm 1$, so in the\nworst case our algorithm will go through $\\log(n)$ recursive calls.\n\nAll the steps in our procedure take constant time except \\textit{Selection} and \\textit{Partition},\nwhich take linear time. So the total running time of our algorithm in the worst case will be:\n\\begin{center}\n$c\\cdot(n/2^0) + c\\cdot(n/2^1) + c\\cdot(n/2^2) + \\cdots + c\\cdot(n/2^{\\log(n)}) \\le c\\cdot n\\cdot(1 + 1/2 + 1/4 + \\cdots)$\n\\end{center}\nThe above is a geometric series with sum limiting to 2. 
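Spelling the bound out (a standard geometric-series computation, added here for completeness):

\\begin{equation*}
\\sum_{k=0}^{\\log(n)} c \\cdot \\frac{n}{2^{k}} \\; \\le \\; c \\cdot n \\sum_{k=0}^{\\infty} \\frac{1}{2^{k}} \\; = \\; 2 \\cdot c \\cdot n.
\\end{equation*}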
Hence, our algorithm takes at most $2 \\cdot c \\cdot n$ time in the worst case, and so it is linear in $n$, i.e. $O(n)$.\n\\end{flushleft}\n\n\\end{document}\n", "meta": {"hexsha": "7b694a7093b2014d233dc757d231f04ae8f3f6d4", "size": 6339, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw3.tex", "max_stars_repo_name": "smihir/cs577", "max_stars_repo_head_hexsha": "1a8e036e125bc571fe24713bbeb3b60d79d0e857", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw3.tex", "max_issues_repo_name": "smihir/cs577", "max_issues_repo_head_hexsha": "1a8e036e125bc571fe24713bbeb3b60d79d0e857", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw3.tex", "max_forks_repo_name": "smihir/cs577", "max_forks_repo_head_hexsha": "1a8e036e125bc571fe24713bbeb3b60d79d0e857", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0208333333, "max_line_length": 221, "alphanum_fraction": 0.7341852027, "num_tokens": 1936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.567396704169178}}
{"text": "\\documentclass[letterpaper,12pt,leqno]{article}\n\\usepackage{paper,math,notes}\n\\available{https://www.pascalmichaillat.org/t3.html}\n\\hypersetup{pdftitle={Optimal Control}}\n\n\\begin{document}\n\n\\title{Optimal Control}\n\\author{Pascal Michaillat}\n\\date{}\n\n\\begin{titlepage}\n\\maketitle\n\\tableofcontents\n\\end{titlepage}\n\n\\section{Preliminary Results}\n\nBefore presenting the techniques of optimal control, we need to review two mathematical results: L'Hopital's rule and Leibniz's rule. These results are used in section~\\ref{sec:HEURISTIC} for the derivation of some of the results.\n\n\\subsection{L'Hopital's rule}\n\nL'Hopital's rule states that if \n\\begin{equation*}\n\\lim_{x\\to x_{0}}f(x) =\\lim_{x\\to x_{0}}g(x) =0,\n\\end{equation*}\nthen \n\\begin{equation*}\n\\lim_{x\\to x_{0}}\\frac{f(x)}{g(x)}=\\lim_{x\\to x_{0}}\\frac{f'(x) }{g'(x)}.\n\\end{equation*}\n\n\\subsection{Leibniz's rule}\n\nLeibniz's rule states that if\n\\begin{equation*}\nI(z) \\equiv \\int_{a(z) }^{b(z)}f\\bp{x,z} dx\n\\end{equation*}\nwhere $x$ is the integration variable, $z$ is a parameter and $f\\bp{ x,z} $\nis assumed to have a continuous derivative $\\pdx{f(x,z)}{z}$ in the\ninterval $\\bs{a(z),b(z)}$, then the effect of change in $z$ on the integral is given by\n\\begin{equation*}\n\\od{I}{z}(z) =\\int_{a(z)}^{b(z)} \\pd{f}{z}(x,z) dx+\\od{b}{z}(z) \\cdot f( b(z),z) -\\od{a}{z}(z) \\cdot f(a(z) ,z).\n\\end{equation*}\n\n\\section{Consumption-Saving Problem}\n\n\\subsection{Description of the Problem}\n\nWe consider the same consumption-saving problem as in the notes on dynamic programming, except that the problem is now in continuous time. Taking initial wealth $a_{0}$ as given, the problem is to choose the consumption path $\\bc{c(t)}_{t\\geq 0}$ to maximize the lifetime utility\n\\begin{align}\n\\int_{0}^{\\infty}e^{-\\r \\cdot t}\\cdot u(c(t)) dt \\label{eq:cont},\n\\end{align}\nsubject to the law of motion\n\\begin{equation}\n\\dot{a}(t) = r\\cdot a(t)-c(t).\\label{eq:LAW}\n\\end{equation}  \nThe parameter $r>0$ is the constant interest rate at which wealth is invested. The parameter $\\r>0$ is the discount factor. The notation $\\dot{a}$ denotes the derivative of wealth $a$ with respect to time $t$:\n\\[\\dot{a}(t)\\equiv \\pd{a(t)}{t}.\\] \n\n\\subsection{Optimal-Control Approach}\n\nTo solve this problem, it is inconvenient to use the Lagrangian technique or dynamic programming technique because they are designed for discrete-time optimization problems. Instead, we use a technique called \\textit{optimal control}. We will see in section~\\ref{sec:HEURISTIC} that optimal control is related both to the Lagrangian technique and the dynamic programming technique. But optimal control is designed for continuous-time optimization problems. \n\nThere are two ways to apply the optimal-control approach: one by forming a \\textit{present-value Hamiltonian}, the other by forming a \\textit{current-value Hamiltonian}. These two ways are roughly equivalent, but using the current-value Hamiltonian is usually simpler.\n\n\\section{Solution with Present-Value Hamiltonian}\n\nWe start by solving the consumption-saving problem with a present-value Hamiltonian.\n\n\\subsection{State and Control Variables}\n\nFirst, identify the \\textit{state variables} and \\textit{control variables}. A control variable can be adjusted at any time $t$ whereas the evolution of a state variable follows a law of motion such as~\\eqref{eq:LAW}. 
Here, the control variable is consumption $c(t)$ and the state variable is wealth $a(t)$.\n\n\\subsection{Present-Value Hamiltonian}\n\nSecond, write down the present-value Hamiltonian\n\\begin{equation*}\n\\Hc(t) = e^{-\\r \\cdot t}\\cdot u(c(t)) +\\l(t) \\cdot \\bp{r\\cdot a(t) - c(t)}.\n\\end{equation*}\nTo form the Hamiltonian, we introduce a new variable $\\l(t)$, which we call the \\textit{costate variable} associated with the state variable $a(t)$. In general, we introduce as many costate variables as there are state variables. As a consequence, we introduce as many costate variables as there are laws of motion, because each state variable is associated with one law of motion.\n\n\\subsection{Optimality Conditions}  \n\nWrite down the two optimality conditions, which are derived from the Maximum Principle:\n\\begin{align}\n\\pd{\\Hc(t)}{c(t)}=& 0 \\label{eq:foc1}\\\\\n\\pd{\\Hc(t)}{a(t)}=&-\\dot{\\l}(t)\\label{eq:foc2}.\n\\end{align}\n\nThe optimality conditions can be rewritten as\n\\begin{align}\n0 &= e^{-\\r \\cdot t}\\cdot u'(c(t)) -\\l(t)\\label{eq:focc} \\\\\n-\\dot{\\l}(t)&= \\l(t)\\cdot  r  \\label{eq:focs}.\n\\end{align}\n\n\\subsection{Transversality Condition} \n\nImpose a \\textit{transversality condition}:\n\\begin{equation}\n\\lim_{t\\to+\\infty} \\l(t)\\cdot a(t)=0.\\label{eq:trans1}\n\\end{equation}\nThe transversality condition prevents the state variable from growing without bound. It makes sure that the discounted present value of consumption does not exceed the discounted present value of the income from investment.\n\n\\subsection{Euler Equation} \n\nWe can solve explicitly for the optimal consumption path by eliminating the costate variable $\\l(t)$ using~\\eqref{eq:focc} and~\\eqref{eq:focs}. \n\nTo eliminate $\\l(t)$, we first take the log of \\eqref{eq:focc}:\n\\begin{align*}\n-\\r \\cdot t+ \\ln{u'(c(t))} =\\ln{\\l(t)}.\n\\end{align*}\nWe then take time derivatives in this equation:\n\\begin{equation*}\n\\r+\\bs{ \\frac{-u^{''}(c(t)) \\cdot c(t)}{u'(c(t))}}\\cdot \\bs{\\frac{\\dot{c}(t)}{c(t)}} =-\\frac{\\dot{\\l}(t)}{\\l(t)}.\n\\end{equation*}\nEquation~\\eqref{eq:focs} can be rewritten as\n\\begin{align*}\n-\\frac{\\dot{\\l}(t)}{\\l(t)}&=  r.\n\\end{align*}\nCombining these two equations, we obtain the Euler equation for optimal consumption:\n\\begin{equation}\n\\frac{\\dot{c}(t)}{c(t)}\\cdot \\bs{\\frac{-u^{''}(c(t)) \\cdot c(t)}{u'(c(t)) }} =r-\\r.\n\\label{eq:EULER}\\end{equation}\nThe term \\[\\frac{-u^{''}(c(t))\\cdot  c(t)}{u'(c(t))}\\] measures relative risk aversion. The coefficient of relative risk aversion also corresponds to the inverse of the intertemporal elasticity of substitution.\n\n\\subsection{CRRA Utility}\n\nConsider the following class of utility function:\n\\begin{equation*}\nu(c) =\\frac{c^{1-\\g }-1}{1-\\g }.\n\\end{equation*}\nThis class of utility function is known as Constant Relative Risk Aversion utility, or CRRA utility. 
This class of function is characterized by a constant coefficient of relative risk aversion $\\g$ as\n\\[\\frac{-u^{''}(c) \\cdot  c}{u^{'}(c)}=\\g.\\]\nWith CRRA utility, the Euler equation simplifies to\n\\begin{equation*}\n\\frac{\\dot{c}(t)}{c(t)}=\\frac{r-\\r}{\\g}.\n\\end{equation*}\n\n\\section{Solution with Current-Value Hamiltonian}\n\nNext we solve the consumption-saving problem with a current-value Hamiltonian.\n\n\\subsection{Current-Value Hamiltonian}\n\nThe present-value Hamiltonian $\\Hc$ depends on $t$ because of the discounting $e^{-\\r \\cdot t}$, which might create some difficulties in deriving and analyzing solutions to the problem. Multiplying $\\Hc$ by $e^{\\r \\cdot t}$ addresses this problem. \n\nWe denote the resulting Hamiltonian $\\Hc^{*}$ as the current-value Hamiltonian. The current-value Hamiltonian is given by\n\\begin{equation*}\n\\Hc^{*}(t)\\equiv e^{\\r \\cdot t}\\cdot  \\Hc(t)=u(c(t)) +e^{\\r \\cdot t} \\cdot \\l(t) \\cdot \\bp{r\\cdot a(t)-c(t)}.\n\\end{equation*}\nWe define a new costate variable $q(t)$ as\n\\begin{equation*}\nq(t)\\equiv  e^{\\r \\cdot t} \\cdot \\l(t).\n\\end{equation*}\nThe current-value Hamiltonian becomes\n\\begin{equation*}\n\\Hc^{*}(t)= u(c(t)) +q(t) \\cdot \\bp{r\\cdot a(t)-c(t)}.\n\\end{equation*}\nThis is the expression of the current-value Hamiltonian that we use in practice.\n\n\\subsection{Optimality Conditions}\n\nThe optimality conditions are slightly different. The optimality conditions become\n\\begin{align*}\n\\pd{\\Hc^{*}(t)}{c(t)}=& 0\\\\\n\\pd{\\Hc^{*}(t)}{a(t)}=&\\r\\cdot q(t)-\\dot{q}(t).\n\\end{align*}\n\nCompared to the optimality conditions~\\eqref{eq:foc1} and~\\eqref{eq:foc2} with the present-value Hamiltonian, there is an extra term $+\\r\\cdot q(t)$ in the second condition. The extra term arises because the costate variable $q(t)$ is defined differently from the costate variable $\\l(t)$ that we used in the present-value Hamiltonian. \n\nThe optimality conditions can be rewritten as\n\\begin{align}\n0 &= u'(c(t)) -q(t)\\label{eq:auto1}\\\\\n\\r\\cdot q(t)-\\dot{q}(t)&= q(t)\\cdot  r .\\label{eq:auto2}\n\\end{align}\n\n\\subsection{Transversality Condition}\n\nWe also impose a transversality condition:\n\\[\\lim_{t\\to+\\infty}e^{-\\r \\cdot t} \\cdot  q(t)\\cdot a(t)=0.\\]\nCompared to the transversality condition~\\eqref{eq:trans1} with the present-value Hamiltonian, there is an extra factor $e^{-\\r \\cdot t}$ in this condition. The extra factor arises because the costate variable $q(t)$ is defined differently from the costate variable $\\l(t)$ that we used in the present-value Hamiltonian.\n\n\\subsection{Euler Equation}\n\nBy combining equations~\\eqref{eq:auto1} and~\\eqref{eq:auto2}, we obtain exactly the same condition~\\eqref{eq:EULER} as with the present-value Hamiltonian. 
We take the log of equation~\\eqref{eq:auto1} to obtain\n\\begin{align*}\n\\ln{u'(c(t))} = \\ln{q(t)}.\n\\end{align*}\nWe then take time derivatives in this equation:\n\\begin{equation*}\n\\bs{ \\frac{-u^{''}(c(t)) \\cdot c(t)}{u'(c(t))}}\\cdot \\bs{\\frac{\\dot{c}(t)}{c(t)}} =-\\frac{\\dot{q}(t)}{q(t)}.\n\\end{equation*}\nEquation~\\eqref{eq:auto2} can be rewritten as\n\\begin{align*}\n-\\frac{\\dot{q}(t)}{q(t)}&= r-\\r.\n\\end{align*}\nCombining these two equations, we obtain the same Euler equation for optimal consumption as the one we obtained with the present-value Hamiltonian:\n\\begin{equation*}\n\\frac{\\dot{c}(t)}{c(t)}\\cdot \\bs{\\frac{-u^{''}(c(t)) \\cdot c(t)}{u'(c(t)) }} =r-\\r.\n\\end{equation*}\n\n\\subsection{Comparison of the Two Solutions}\n\nThe two approaches---the approach with the present-value Hamiltonian and the approach with the current-value Hamiltonian---are equivalent. They lead to the same Euler equation. However, it is often more convenient to work with the current-value Hamiltonian.\n\n\\section{Theory of Optimal Control}\n\n\n\\subsection{General Optimization Problem}\n\nOptimal control is used to solve continuous-time optimization problems. The general problem is to choose $\\bc{c(t)}_{t\\geq 0}$ to maximize\n\\begin{align}\n\\int_{0}^{\\infty}e^{-\\r\\cdot  t}\\cdot  u\\bp{ a(t),c(t)} \\, dt \\label{dpc} \n\\end{align}\ngiven the constraint that for all $t$\n\\begin{align}\n\\dot{a}(t) = g(a(t),c(t)),\\label{eq:thelaw}\n\\end{align}\nand taking $a_{0}$ as given. The parameter $\\r >0$ is the discount rate. The functions $u$ and $g$ are concave and twice differentiable.\n\n\\subsection{Present-Value Hamiltonian}\n\nAn important result in optimal control theory is the Maximum Principle. It is due to Pontryagin. In addition to the control and state variables, we introduce a costate variable $\\l (t)$  associated with the state variable. The costate variable measures the shadow price of the associated state variable. The costate variable enters the optimal control problem through the present-value Hamiltonian, defined as \n\\begin{equation}\n\\Hc(t)=e^{-\\r\\cdot  t}\\cdot  u(a(t),c(t)) +\\l(t)\\cdot g(a(t),c(t)).  \n\\label{eq:HAMILDEF}\\end{equation}\n\n\\subsection{Present-Value Optimality Conditions}\n\nThe Maximum Principle gives necessary conditions for optimality. There are three conditions. The first two conditions are\n\\begin{align}\n\\pd{\\Hc(t)}{c(t)} &=0  \\label{eq:hcontrol} \\\\\n\\pd{\\Hc(t)}{a(t)} &=-\\dot{\\l}(t)  \\label{eq:hstate}\n\\end{align}\nCondition \\eqref{eq:hcontrol} implies that the Hamiltonian must be maximized\nwith respect to the control variable at any point in time. Condition \\eqref{eq:hstate} says that the marginal change of the Hamiltonian associated with a unit change of the state variable is equal to  minus the rate of\nchange of the costate variable. \n\nThe optimal solution must also satisfy a third condition, which we call the transversality condition:\n\\begin{equation}\n\\lim_{t\\to \\infty }\\l(t)\\cdot a(t)=0.  \\label{eq:tvc}\n\\end{equation}\nThe transversality condition implies that the product of costate and state must converge to zero as time goes to infinity.\n\n\\subsection{Current-Value Hamiltonian}\n\nWe can reformulate the results from the Maximum Principle with the current-value Hamiltonian, which is often easier to manipulate. 
The current-value Hamiltonian is defined as \n\\begin{equation*}\n\\Hc^{*}(t)=  u\\bp{a(t),c(t)} +q(t)\\cdot g(a(t),c(t)),\n\\end{equation*}\nwhere $q(t)$ is the costate variable associated with the state variable $a(t)$.\n\n\\subsection{Current-Value Optimality Conditions}\n\nWith the current-value Hamiltonian, the three necessary conditions~\\eqref{eq:hcontrol},~\\eqref{eq:hstate}, and ~\\eqref{eq:tvc} for optimality become\n\\begin{align*}\n\\pd{\\Hc^{*}(t)}{c(t)} &=0\\\\\n\\pd{\\Hc^{*}(t)}{a(t)} &=\\r\\cdot q(t)-\\dot{q}(t)\\\\\n\\lim_{t\\to \\infty }e^{-\\r\\cdot  t}\\cdot q(t)\\cdot a(t)&=0. \n\\end{align*}\n\n\n\\section{Heuristic Derivation of the Maximum Principle}\\label{sec:HEURISTIC}\n\nIn this section, we provide a heuristic derivation of the necessary conditions for optimality provided by the Maximum Principle. One way to derive the optimality conditions \\eqref{eq:hcontrol} and \\eqref{eq:hstate} is to apply informally the results from dynamic programming. Formally, many of the claims below are imprecise, but they will serve the purpose of providing intuition for the Maximum Principle.\n\n\\subsection{Value Function for the Discretized Optimization Problem}\n\nWe begin by defining the value function of the problem, which is the maximized value of the objective function as a function of the\nstate variable $a(t)$ and time $t$:\n\\begin{equation*}\nV\\bp{a(t),t} =\\underset{\\bc{c(s)}_{s\\geq t}}{\\max} \\int_{t}^{\\infty }e^{-\\r \\cdot \\bp{s-t}}\\cdot u\\bp{a(s),c(s)} ds,\n\\end{equation*}\nwhere the maximization is subject for all $s\\geq t$ to the law of motion of the state variable\n\\begin{equation}\n\\dot{a}(s)=g\\bp{a(s),c(s)}.\\label{eq:lmh}\n\\end{equation}\n\n\\subsection{Bellman Equation}\n\nSince the problem has a recursive structure, we can apply the Principle of Optimality and write the value function as the solution to a Bellman equation:\n\\begin{align*}\nV\\bp{ a(t),t} =\\underset{\\bc{c(s)}_{t\\leq s\\leq t+\\D t}}{\\max }\\bc{\\int_{t}^{t+\\D t}e^{-\\r \\cdot \\bp{ s-t}} \\cdot u\\bp{a(s),c(s)} ds+e^{-\\r  \\cdot \\D t} \\cdot V\\bp{ a(t+\\D t),t+\\D t} },\n\\end{align*}\nwhere the maximization is subject for all $t\\leq s\\leq t+\\D t$ to~\\eqref{eq:lmh}.\n\nSubtract $V\\bp{a(t),t} $ from both sides and divide by $\\D t$:\n\\begin{align}\n0=\\underset{\\bc{c(s)}_{t\\leq s\\leq t+\\D t}}{\\max }\\bs{\\frac{\\int_{t}^{t+\\D t}e^{-\\r \\cdot \\bp{ s-t}} \\cdot u\\bp{\na(s),c(s)} ds}{\\D t}+ \\frac{e^{-\\r  \\cdot \\D t} \\cdot V\\bp{ a(t+\\D t),t+\\D t}-V\\bp{ a(t),t}}{\\D t} },\\label{eq:BIG}\n\\end{align}\nwhere the maximization is subject for all $t\\leq s\\leq t+\\D t$ to~\\eqref{eq:lmh}.\n\n\\subsection{Hamilton-Jacobi-Bellman Equation}\n\nWe now take the limit of \\eqref{eq:BIG} as $\\D t\\to 0$ to obtain the Hamilton-Jacobi-Bellman equation.\n\n\\paragraph{Limit of First Term} We start with the first term in the curly brackets. Since numerator and denominator of the first term approach zero as $\\D t\\to 0$,  we apply L'Hopital's rule. The derivative of the denominator with respect to $\\D t$ is $1$. We apply Leibniz's rule to determine the derivative with respect to $\\D t$ of the integral in the numerator. 
Leibniz's rule tells us that the derivative of the integral with respect to $\\D t$ is\n\\[e^{-\\r \\cdot  \\D t} \\cdot u\\bp{a(t+\\D t),c(t+\\D t)}.\\]\nTherefore, the limit as $\\D t\\to 0$ for the first term in the bracket is \n\\begin{equation}\nu\\bp{ a(t),c(t)}.\\label{eq:BIG1}\n\\end{equation}\n\n\\paragraph{Limit of Second Term}  We move on to the second term. Since \n\\[\\lim_{\\D t\\to 0} e^{-\\r \\cdot \\D t}=1\\]\nand \n\\[\\lim_{\\D t\\to 0} V\\bp{ a(t+\\D t),t+\\D t} =V\\bp{a(t),t}, \\]\nboth numerator and denominator approach zero as $\\D t\\to 0$. Therefore we apply L'Hopital's rule. The derivative of the denominator with respect to $\\D t$ is $1$. The derivative of the numerator with respect to $\\D t$ is\n\\begin{align*}\n&-\\r \\cdot e^{-\\r\\cdot \\D t}\\cdot V\\bp{ a(t+\\D t),t+\\D t}+e^{-\\r\\cdot \\D t}\\cdot \\pd{V}{a}\\bp{ a(t+\\D t),t+\\D t} \\cdot \\dot{a}(t+\\D t)\\\\\n&+e^{-\\r\\cdot \\D t}\\cdot  \\pd{V}{t}\\bp{a(t+\\D t),t+\\D t}. \n\\end{align*}\nWe have the following limits:\n\\begin{align*}\n\\lim_{\\D t\\to 0} \\r \\cdot e^{-\\r\\cdot \\D t}\\cdot V\\bp{ a(t+\\D t),t+\\D t}&=\\r \\cdot V\\bp{ a(t),t}\\\\\n\\lim_{\\D t\\to 0} e^{-\\r\\cdot \\D t}\\cdot  \\pd{V}{t}\\bp{a(t+\\D t),t+\\D t} &= \\pd{V}{t}\\bp{a(t),t}\\\\\n\\lim_{\\D t\\to 0} e^{-\\r\\cdot \\D t}\\cdot \\pd{V}{a}\\bp{ a(t+\\D t),t+\\D t}\\cdot\\dot{a}(t+\\D t)&=\\pd{V}{a}\\bp{ a(t),t} \\cdot \\dot{a}(t)=\\pd{V}{a}\\bp{ a(t),t} \\cdot g(a(t),c(t)),\n\\end{align*}\nwhere the last equality results from the law of motion~\\eqref{eq:thelaw} of state variable $a(t)$.\n\nTherefore, the limit as $\\D t\\to 0$ for the second term in the bracket is \n\\begin{equation}\n-\\r \\cdot V\\bp{ a(t),t}+\\pd{ V}{a}\\bp{ a(t),t} \\cdot g(a(t),c(t))+\\pd{V}{t}\\bp{a(t),t}.\\label{eq:BIG2}\n\\end{equation}\n\n\\paragraph{Derivation of the Hamilton-Jacobi-Bellman Equation} Combining equations~\\eqref{eq:BIG},~\\eqref{eq:BIG1}, and~\\eqref{eq:BIG2}, we obtain a version of the Bellman equation for the continuous-time optimization problem. This equation is called the \\textit{Hamilton-Jacobi-Bellman equation}. The equation is\n\\begin{equation}\n\\r V\\bp{ a(t),t} =\\max_{c(t)}\\bs{ u\\bp{a(t),c(t)} +\\pd{V}{a(t)}\\bp{a(t),t}\\cdot\ng(a(t),c(t)) +\\pd{V}{t}\\bp{a(t),t}}, \\label{eq:HJB}\n\\end{equation}\nwhere $a(t)$ is given. We define \\[\\l (t)\\equiv e^{-\\r \\cdot t}\\cdot \\pd{V}{a(t)}\\bp{a(t),t}.\\]\nWe can rewrite the Hamilton-Jacobi-Bellman equation as\n\\begin{equation}\n\\r V\\bp{ a(t),t} =\\max_{c(t)}\\bs{ u\\bp{a(t),c(t)} +e^{\\r \\cdot t}\\cdot\\l (t)  \\cdot\ng(a(t),c(t)) +\\pd{V}{t}\\bp{a(t),t}}. \\label{eq:HJB2}\n\\end{equation}\n\n\\subsection{Derivation of the Optimality Conditions}\n\nTaking the first-order condition with respect to $c(t)$ in the Hamilton-Jacobi-Bellman equation~\\eqref{eq:HJB2} implies\n\\begin{equation*}\n\\pd{u}{c(t)}\\bp{a(t),c(t)}+e^{\\r \\cdot t}\\cdot\\l (t) \\cdot \\pd{g}{c(t)}\\bp{a(t),c(t)}=0.\n\\end{equation*}\nFurthermore, the envelope theorem implies\n\\begin{align*}\n\\r \\pd{V}{a(t)}\\bp{a(t),t}&=\\pd{u}{a(t)}\\bp{a(t),c(t)}+e^{\\r \\cdot t}\\cdot\\l (t)\\cdot \\pd{g}{a(t)}\\bp{ a(t),c(t)}+\\frac{\\partial^{2}V}{\\partial t\\partial a(t)}\\bp{a(t),t}.\n\\end{align*}\n\nThe last two equations are equivalent to optimality conditions~\\eqref{eq:hcontrol} and~\\eqref{eq:hstate}. 
This is the case because, using the definition of the present-value Hamiltonian, equation~\\eqref{eq:hcontrol} can be written\n\\[\\pd{\\Hc(t)}{c(t)}\\bp{a(t),c(t)} =0\\]\nand equation~\\eqref{eq:hstate} can be written\n\\[\\pd{u}{a(t)}\\bp{a(t),c(t)} +e^{\\r\\cdot  t}\\cdot \\l(t)\\cdot \\pd{g}{a(t)}\\bp{a(t),c(t)}=-e^{\\r\\cdot  t}\\cdot \\pd{\\l(t)}{t},\\]\nwhich implies\n\\[\\pd{\\Hc(t)}{a(t)}\\bp{a(t),c(t)} =-\\pd{\\l(t)}{t}.\\]\nNote that we consider that the Hamiltonian is a function of $a(t)$, $t$, and $\\l(t)$, and we only take the partial derivative with respect to $a(t)$, thus keeping $\\l(t)$ constant.\n\n\\section{Applications of the Hamilton-Jacobi-Bellman Equation}\n\nThe Hamilton-Jacobi-Bellman equation~\\eqref{eq:HJB} is an optimality condition that equates flow costs with flow benefits. In practice, we write it down without going through all the algebra relating to $\\D t$. \n\nThis equation is commonly used in macroeconomics. For instance, it is frequently used in search-and-matching models of the labor market. In a search-and-matching model, a vacant job costs $c$ per unit time and becomes occupied according to a Poisson process with arrival rate $q$. In the labor market, the occupied job yields net returns $p-w$, where $p$ is real output and $w$ is the cost of labor. The job runs a risk $\\l $ of being destroyed. \n\nLet $V$ be the value of the vacant job and $J$ be the value of an occupied job. Let $r$ be the discount rate. In steady state, the Hamilton-Jacobi-Bellman equations are\n\\begin{align*}\nr\\cdot V &=-c+q \\cdot  \\bp{J-V} \\\\\nr\\cdot J &=p-w+ \\l \\cdot \\bp{0-J}=p-w-\\l \\cdot J\n\\end{align*}\nThere is no maximization on the right-hand side in this particular example. These equations simply describe the relationship between the equilibrium values $V,J$ and $w$. \n\n\\end{document}", "meta": {"hexsha": "d0c71e436a010b8142c1619275991c8878644c15", "size": 19867, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/optimalcontrol.tex", "max_stars_repo_name": "pascalmichaillat/math-for-macro", "max_stars_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 59, "max_stars_repo_stars_event_min_datetime": "2022-01-24T10:22:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T13:17:46.000Z", "max_issues_repo_path": "lectures/optimalcontrol.tex", "max_issues_repo_name": "pascalmichaillat/math-for-macro", "max_issues_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/optimalcontrol.tex", "max_forks_repo_name": "pascalmichaillat/math-for-macro", "max_forks_repo_head_hexsha": "e78569b10b76f4bec2af50360eb07a11089d782b", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2022-01-25T18:14:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T16:38:21.000Z", "avg_line_length": 53.5498652291, "max_line_length": 457, "alphanum_fraction": 0.7030754518, "num_tokens": 6467, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458152, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.5673678527705949}}
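As a numerical companion to the consumption-saving problem above, here is a tiny forward-Euler sketch in Python (our own illustration; the parameter values and the initial consumption guess are made up, and no attempt is made to enforce the transversality condition):

\\begin{verbatim}
# Forward-Euler simulation of the consumption-saving dynamics:
#   da/dt = r a - c                   (law of motion)
#   dc/dt = c (r - rho) / gamma       (CRRA Euler equation)
r, rho, gamma = 0.05, 0.03, 2.0       # illustrative parameter values
dt, T = 0.01, 50.0
a, c = 10.0, 0.5                      # initial wealth, initial consumption guess

for _ in range(int(T / dt)):
    a += (r * a - c) * dt
    c += c * (r - rho) / gamma * dt

# Consumption grows at rate (r - rho)/gamma = 1% per unit time.
print(a, c)
\\end{verbatim}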
{"text": "\\subsection{Sorting}\n\n\\begin{center}\n\\begin{longtable}{|l|l|l|p{7cm}|}\n\\hline\n\\textbf{Name} & \\textbf{Average complexity} & \\textbf{Worst Complexity} & \\textbf{Description} \\\\\n\\hline\nQuick sort & O(nlog(n)) & O(n$^{2}$) \n           & \\begin{enumerate}\n                \\item Pick a random pivot value \n                \\item Partition into items smaller than the value (left) and \n                      items larger (right)\n                \\item Recursively apply this to the left and right subsections\n                      until they contain one value\n              \\end{enumerate}\\\\\n\\hline\n\nMerge sort & O(nlog(n)) & O(nlog(n))) \n           & \\begin{enumerate}\n                \\item Divide unsorted list into n subsets\n                \\item Compare each element with adjacent list to sort and merge\n                      the two\n                \\item Recursively do this until only one list is left\n              \\end{enumerate}\\\\\n\n\\hline\n\nHeap sort & O(nlog(n)) & O(nlog(n)) \n          & \\begin{enumerate}\n                \\item First build a max heap from the data [O(n) time]\n                \\item Repeatedly remove the largest element from the heap, \n                      it takes log(n) time for each removal\n\n              \\end{enumerate}\\\\\n\\hline\n\n\\end{longtable}\n\\end{center}\n", "meta": {"hexsha": "1fb3a327dca196540c31d5266fd0b8f965784ebb", "size": 1285, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algos/sorting.tex", "max_stars_repo_name": "sarahtattersall/InterviewCheatSheet", "max_stars_repo_head_hexsha": "c5d1a4ba2e8713bd95913eb91aa1432bdc743fdf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2015-01-14T22:59:50.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-02T15:46:41.000Z", "max_issues_repo_path": "algos/sorting.tex", "max_issues_repo_name": "sarahtattersall/InterviewCheatSheet", "max_issues_repo_head_hexsha": "c5d1a4ba2e8713bd95913eb91aa1432bdc743fdf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algos/sorting.tex", "max_forks_repo_name": "sarahtattersall/InterviewCheatSheet", "max_forks_repo_head_hexsha": "c5d1a4ba2e8713bd95913eb91aa1432bdc743fdf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9487179487, "max_line_length": 97, "alphanum_fraction": 0.5595330739, "num_tokens": 317, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324983301568, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5673678389096888}}
{"text": "\\documentclass{article} \n\n% the purpose of this particular document is to serve as a staging area potential blog posts, for the purpose of editing, revision, and review. \n\n% packages \n\t\\usepackage{amsmath, amsthm, mathtools}\n\t\\usepackage{mdframed}  \n\t\\usepackage{geometry, enumerate} \n\t\\usepackage{parskip}\n\t\\usepackage{graphicx}\n\t\\usepackage{dsfont} % for math bf text \n\n% theorems and such \n  \t\\newtheorem{theorem}{Theorem}\n  \t\\newtheorem{corollary}{Corollary}\n  \t\\newtheorem{lemma}[theorem]{Lemma} \n  \t\\newtheorem*{remark}{Remark}\n  \t\\newtheorem*{exe}{Exercise}\n  \t\\newtheorem{prop}{Proposition}\n  \t\\newtheorem{example}{Example} \n  \t\\newtheorem{definition}{Definition}   \n  \t\n% custom commands \n\t\\newcommand{\\ceil}[1]{\\left \\lceil #1 \\right \\rceil}\n\t\\newcommand{\\floor}[1]{\\lfloor #1 \\rfloor}\n\t\\newcommand{\\X}[1]{\\, \\text{mod} \\, #1}\n\t\\newcommand{\\divv}{\\,|\\,}\n\t\\newcommand{\\ndiv}{\\,\\not|\\,} \n\t\\newcommand{\\GCD}[2]{GCD\\,(#1, #2)}\n\t\\newcommand{\\LCM}[2]{LCM\\,(#1, #2)}\n\n\\begin{document} \n\n\\title{Elementary Number Theory : \\S6 } \n\\author{Henry Slayer $|$ University of California, Santa Cruz} \n\\date{}\n\\maketitle\n\n\\section*{The Chinese Remainder Theorem} \nOften, we will be faced with problems or situations in which it is helpful to break congruence down into a system of congruences. Conversely, it can be useful to assemble a single congruence that uniquely describes a system of smaller congruences. Our tool for doing so is the Chinese Remainder Theorem, which allows us to relate congruence across different modulii. Speaking loosely, if our modulus $m$ can be decomposed into coprime factors, say $d$ and $e$, every congruence modulo $m$ is equivalent to a compound congruence $[a, b]\\X{[d, e]}$. The bracket notation means \\textit{congruent to \\textbf{a} modulo \\textbf{d}, and \\textbf{b} modulo \\textbf{e}}. More formally, we can say the following \n\\begin{mdframed} \n\\begin{theorem}[Chinese Remainder Theorem] \nIf $d$ and $e$ are a pair of coprime numbers, then there exists a bijective correspondence between the set of pairs $[a, b]$, where $0\\leq a < d$, and $0\\leq b < e$, and the set of integers $n$ for which $0\\leq n < de$. \\\\\nIn other words, the pairs of residues $[a, b]\\X{[d, e]}$ map uniquely to the residues $n\\X{(de)}$. \n\\end{theorem} \n\\begin{proof} \nThe sets that we're working with are the same size. So, if we can find an injective map from the set of pairs $[a, b]$ to the set of integers from 0 to $de - 1$, we've found our bijection. So how do we map $n\\mapsto [a, b]$? There's a natural choice. \\\\\nLet $n\\mapsto [a, b]$ be defined by the rule that $n\\mapsto [a, b]$ if and only if $n\\equiv[a, b]\\X{de}$. \\\\\nWe need to show that this map is injective, so suppose that we have a pair of values $0\\leq m < de$ and $0\\leq n < de$, such that $n\\equiv[a, b]\\X{de}$ and $m\\equiv[a, b]\\X{[d, e]}$. Naturally, then, $n - m \\equiv [0, 0]\\X{[d, e]} $. In turn, this implies that $n - m$ is a multiple of both $d$ and $e$. As $d$ and $e$ are coprime, this forces that $n - m$ is a multiple of the least common multiple of $d$ and $e$. In this case, this means that $n - m$ is a multiple of $de$. Yet, both $m$ and $n$ are strictly less than $de$, so their difference $n - m$ is surely less than $de$. The only multiple of $de$ less than $de$ is 0, so $m - n = 0$, and $m = n$. 
\\\\\nHaving shown that the map is injective, the fact that our two sets (the set of pairs, and the set of residue classes modulo $de$) are the same size implies that the map is bijective. \\\\\nAs the map is bijective, this implies that every congruence modulo $de$ corresponds to a unique congruence $[a, b]\\X{[d, e]}$, \\textbf{provided that d and e are coprime}.  \n\\end{proof} \n\\end{mdframed}  \n\n\\subsubsection*{Disassembling and Reassembling Congruence} \nThe extreme utility of the Chinese Remainder Theorem is that it enables us to reduce complex congruences to systems of simpler congruences, and vice versa. \n\\begin{mdframed} \n\\begin{example} \nBreak $x\\equiv 45\\X{56}$ into a system of simpler congruences. \\\\\\\\\nBreaking down congruence couldn't be simpler. All we need to do is find two (or more) coprime factors of 56. 7 and 8 seem like a good choice. $45\\equiv 3\\X{7}$, and $45\\equiv5\\X{8}$, so $45\\equiv[3, 5]\\X{[7, 8]}$. Brilliant. \n\\end{example} \n\\end{mdframed} \n\\begin{mdframed}\n\\begin{example} \nWhat is $x$, if... \\\\\n$x$ is no more than 100. \\\\\nCounting by 3, we have one left over. \\\\\nCounting by 22, we have three left over. \\\\\nCounting by 7, we have 4 left over.\\\\\\\\\nWe'll start the problem, and leave the reader to finish it. \\\\\nThe statement of the problem implies that $x\\equiv 1\\X{3}$, $3\\X{22}$, and $4\\X{7}$. Each one of these linear congruences is really hiding a linear diophantine equation, which gives us the following system of equations: \n\\begin{align*} \nx - 3y &= 1 \\\\\nx - 22z &= 3 \\\\\nx - 7w &= 4\n\\end{align*} \nYou do the rest! Hint: use the first two equations to come up with a congruence modulo 66. This can be translated back into the language of linear diophantine equations, and re-combined with the last equation to construct a congruence in terms of 462. \n\\end{example} \n\\end{mdframed} \n\\begin{mdframed} \n\\begin{example} \nFind all solutions to $x^2\\equiv 1\\X{21}$. \\\\\\\\\nBy the CRT, we can decompose this into two congruences, $x^2\\equiv 1\\X{7}$ and $x^2\\equiv1\\X{3}$. Modulo 7, this means that we have two solutions: $x\\equiv 6$, or $x\\equiv 1$. Modulo 3, we also have two solutions, $x\\equiv 1$ or $x\\equiv 2$. Two options for each smaller congruence give us four possible combinations: \n\\begin{align*} \nx\\equiv [1, 1]\\X{[3, 7]}\\\\\nx\\equiv [1, 6]\\X{[3, 7]}\\\\\nx\\equiv [2, 1]\\X{[3, 7]}\\\\\nx\\equiv [2, 6]\\X{[3, 7]}\n\\end{align*} \nRespectively, these simpler congruences resolve to $x\\equiv 1,\\, 13,\\, 8,\\, 20\\X{21}$, which characterizes all solutions to $x^2\\equiv 1\\X{21}$.  \n\\end{example} \n\\end{mdframed} \nThe last example here illustrates a useful generalization. Solutions to the congruence $x^2\\equiv a\\X{de}$ are in one-to-one correspondence with the pairs of solutions $(u, v)$ to the congruences $u^2\\equiv a\\X{d}$ and $v^2\\equiv a\\X{e}$. In other words, $a$ is a square mod $de$ (meaning $a\\equiv x^2$ for some $x\\X{de}$) if and only if $a$ is a square modulo $d$ and modulo $e$. We'll dive into squares (the \\textit{quadratic residues}) a bit more, further down the road. 
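As a computational companion to the examples above, here is a small Python sketch (our own addition, not part of the original post) that reassembles $n\\X{(de)}$ from a pair $[a, b]\\X{[d, e]}$ using the extended Euclidean algorithm:

\\begin{verbatim}
# Reassemble n mod d*e from residues a mod d and b mod e (d, e coprime).
def ext_gcd(u, v):
    # returns (g, s, t) with s*u + t*v == g == gcd(u, v)
    if v == 0:
        return u, 1, 0
    g, s, t = ext_gcd(v, u % v)
    return g, t, s - (u // v) * t

def crt(a, d, b, e):
    g, s, t = ext_gcd(d, e)
    assert g == 1, 'd and e must be coprime'
    # s*d + t*e == 1, so t*e == 1 (mod d) and s*d == 1 (mod e); the
    # combination below is congruent to a mod d and to b mod e.
    return (a * t * e + b * s * d) % (d * e)

print(crt(3, 7, 5, 8))   # -> 45, recovering Example 1: [3, 5] mod [7, 8]
print(crt(1, 3, 6, 7))   # -> 13, one of the roots of x^2 = 1 mod 21
\\end{verbatim}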
\n\n\n\n\n\\end{document} ", "meta": {"hexsha": "28390697f95f787093956015221d66d288e1f91a", "size": 6323, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ENT6.tex", "max_stars_repo_name": "hcslayer/Elementary-Number-Theory", "max_stars_repo_head_hexsha": "5e0520285d36cb4ca971be28874c6fe1fbf357a8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ENT6.tex", "max_issues_repo_name": "hcslayer/Elementary-Number-Theory", "max_issues_repo_head_hexsha": "5e0520285d36cb4ca971be28874c6fe1fbf357a8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ENT6.tex", "max_forks_repo_name": "hcslayer/Elementary-Number-Theory", "max_forks_repo_head_hexsha": "5e0520285d36cb4ca971be28874c6fe1fbf357a8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.5204081633, "max_line_length": 701, "alphanum_fraction": 0.6979281986, "num_tokens": 2118, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.640635854839898, "lm_q2_score": 0.885631470799559, "lm_q1q2_score": 0.5673672743687916}}
{"text": "\\subsection{Example \\#6: generate and analyze synthetic data }\n\\label{S:EXAMPLESYNTHETICDATA}\n\n\n\\subsubsection{Purpose}\nThis example illustrates how to create a synthetic time series using OpenBDLM.\nThe objective is to create a 4-years long time series with an acceleration stationary baseline, and a yearly periodic pattern as well as an autoregressive process superimposed into it. \nThe timestep is 1 day.\nThis example corresponds to the OpenBDLM demo presented in Section~\\ref{S:OPENBDLMGETTINGSTARTED}.\n\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}{\\linewidth}\\centering\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/ALL_AMPLITUDES.pdf} \n\\caption{Amplitude}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\\centering\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/raw/ALL_TIMESTEPS.pdf}\n\\caption{Timestep}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\\centering\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/raw/AVAILABILITY.pdf}\n\\caption{Availability}\n\\end{subfigure}\n\\caption{Data used in example \\#5}.\n\\label{fig:DataSummaryRawSynthetic}\n\\end{figure*}\n\n\n\\subsubsection{Model description}\nThe model includes one model class, and the block components are \n\\begin{gather*}\n\\textbf{x}=[x^{\\mathtt{L}}, x^{\\mathtt{T}}, x^{\\mathtt{LA}}, x^{\\mathtt{P1}\\text{,yearly}}, x^{\\mathtt{P2}\\text{,yearly}}, x^{\\mathtt{AR}}].\n\\end{gather*}\nThe associated model parameters are\n\\begin{gather*}\n\\bm\\theta=[\\sigma_{w}^{\\mathtt{LA}}, p^{\\mathtt{PD}, \\text{yearly}}, \\sigma_{w}^{\\mathtt{PD}, \\text{yearly}}, \\phi^{\\mathtt{AR}}, \\sigma_{w}^{\\mathtt{AR}}, \\sigma_{v,\\text{D}}].\n \\end{gather*}\nThe model parameters values assigned by default from OpenBDLM are\n\\begin{gather*}\n\\bm\\theta^{\\text{defaut}}=[ 1\\times10^{-8}, 365.24, 0, 0.75, 1, 0.01].\n\\end{gather*}\nThe default initial hidden states mean, covariance, and model probability are \n\\begin{align*}\n\\bm \\mu^{\\text{defaut}}_{0} & = [\t 10  , -1\\times10^{-5}  ,\t-0.001\t,\t10  ,  \t10    ,\t0  ]^{\\intercal}, \\text{and} \\\\\n\\bm\\Sigma^{\\text{defaut}}_{0} & = \\text{diag}[ 0.01  ,\t0.01  ,\t0.01  \t,0.04  ,\t0.04  ,\t0.01 ], \\\\\n \\pi_{0}^{1,\\text{default}} & = 1.\n \\end{align*}\nThe synthetic data generated from this model structure, model parameters values and initial hidden states values are presented in Figure~\\ref{fig:DataSummaryRawSynthetic}.\nThe hidden states computed using the same (i.e. true) model structure, model parameters values and initial hidden states values are presented in Figure~\\ref{fig:SYNTHETICOptimizedOptimized}.\n\n\\subsubsection{Run the example from command line interaction}\n\nThis section explains how to run the example \\#6, that is, how to generate the synthetic data presented in Figure~\\ref{fig:DataSummaryRawSynthetic}, and estimate the hidden states as presented in Figure~\\ref{fig:SYNTHETICOptimizedOptimized}.\n\n\n\\begin{enumerate}\n\\item Start OpenBDLM. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!OpenBDLM_main;!}.\n\\item Choose the interactive tool. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!0!}.\n\\item Enter the project name. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!Example_SYNTHETIC!}. 
\n\\item Generate synthetic data. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!yes!}. \n\\item Provide the number of time series. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!}.\n\\item Provide the date corresponding to the first data sample. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2000-01-01!}.\n\\item Provide the date corresponding to the last data sample. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2005-01-01!}.\n\\item Provide the timestep in days. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!}.\n\\item Select the number of model classes. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!}. \n\\item Select the model block components. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]![13 31 41]!}. Figure~\\ref{fig:DataSummaryRawSynthetic} should pop up on the screen.\n\\item Access the hidden states estimation menu. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!3!}. \n\\item Estimate the smoothed hidden states. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2!}. The estimation should correspond to the results presented in Figure~\\ref{fig:SYNTHETICOptimizedOptimized}.\n\\item Save and quit OpenBDLM. Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!Q!}.\n\\end{enumerate}\n\n\n\n%First, choose the interactive tool by typing \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!0!}.\n%Secondly, provide a project name (i.e. \\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!Example_SYNTHETIC!).\n%Then, answer \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!yes!} to indicate that you would like to create synthetic data.\n%Finally, type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!} to create one single synthetic time series. 
\n%\n%\\subsection{Step 2: define the time step vector}\n%\n%Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2000-01-01!}, \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2005-01-01!} and \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!}, to provide the date corresponding of the first and last data sample of the synthetic data, as well as the timestep in days.\n\n\n%\\subsection{Step 3: configure the model}\n%\n%First, the program requests the number of model class.\n%In this example, we would like to create synthetic data with no anomaly.\n%In such case, we type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!} to choose a single model class.\n%Then, OpenBDLM asks for the type of block component.\n%As mentionned earlier, the objective is to create synthetic time series data with an acceleration stationary baseline, and a yearly periodic pattern superimposed into it. \n%%The presence of a daily periodic pattern is unclear.\n%Therefore, we choose \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]![13 31 41]!}.\n%This model considers a trend stationary model with a yearly periodic pattern, and an autoregressive process to be more realistic.\n%At this time, three figures  that represent the amplitude, timestep and availability of the newly created synthetic data (as shown in Figure~\\ref{fig:DataSummaryRawSynthetic}) should popup on the screen.\n%Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!Q!} to save and quit.\n%\n%\\subsection{Step 4: explore the model}\n%From the main menu, type  \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!11!} to see what are the values of model parameters which have been assigned to create the synthetic data.\n%The model totalizes 6 model parameters, that is \n%\\begin{gather*}\n%\\bm\\theta=\\{\\sigma_{w, \\text{D}}^{LA}, p^{\\text{PD1}}_{\\text{D}}, \\sigma_{w,\\text{D}}^{\\text{PD1}}, \\phi^{AR}_{\\text{D}}, \\sigma_{w,\\text{D}}^{AR}, \\sigma_{v,\\text{D}}\\} \n% \\end{gather*}\n%The default model parameters values are \n%\\begin{gather*}\n%\\bm\\theta^{\\text{default}}=\\{1\\times10^{-8}, 365.24, 0, 0.75, 0.01, 0.01\\}, \n%\\end{gather*}\n%in agreement with the values indicated in Table~\\ref{table:defaultsynthetic}.\n%Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!R!} to return to the main menu.\n%Then, type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!12!} the see that the default initial hidden states mean, covariance  and model probability values are \n%\\begin{align*}\n% \\bm \\mu^{1,\\text{default}}_{0} & = [\t10  , -1\\times10^{-5}  ,\t-0.001\t,\t10  ,  \t10    ,\t0         ]^{\\intercal}, \\text{and} \\\\\n% \\text{diag}(\\bm\\Sigma^{1,\\text{default}}_{0})  & = [\t0.01  ,\t0.01  ,\t0.01  \t,0.04  ,\t0.04  ,\t0.01     ], \\text{and} \\\\\n% \\pi_{0}^{1,\\text{default}} & = 1.\n%\\end{align*}\n%respectively, in agreement with the values indicated in Table~\\ref{table:defaultsynthetic}.\n%Type \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = 
\\color{light-gray}]!R!} to return to the main menu.\n\n%\\begin{figure*}[h!]\n%\\centering\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/raw/ALL_AMPLITUDES.pdf} \n%\\caption{Amplitude}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/raw/ALL_TIMESTEPS.pdf}\n%\\caption{Timestep}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/raw/AVAILABILITY.pdf}\n%\\caption{Availability}\n%\\end{subfigure}\n%\\caption{Data used in example \\#4}.\n%\\label{fig:DataSummaryRawSynthetic}\n%\\end{figure*}\n\n\n%\\subsection{Step 5: estimate the hidden states}\n%\n%From the main menu, type  \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!3!}, then \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!} to estimate the filtered hidden states using the default model parameters and default initial hidden states values.\n%The value of the log-likelihood is $4976$, and the estimated hidden states are presented in Figure~\\ref{fig:SYNTHETICDefaultDefaultExample4}.\n%For each figure, the red dashed line represents the true known values of the hidden states.\n%\n%\\begin{figure*}[h!]\n%\\centering\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_ObservedPredicted.pdf}\n%\\caption{Observed and estimated displacement data}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_LL_1.pdf} \n%\\caption{Estimated displacement local level component}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_LT_2.pdf}\n%\\caption{Estimated displacement local trend component.}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_LA_3.pdf}\n%\\caption{Estimated displacement local acceleration component.}\n%\\end{subfigure}\n%\\end{figure*}\n%\\begin{figure*}[h!]\n%\\ContinuedFloat\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_S1_4.pdf}\n%\\caption{Estimated displacement yearly periodic component (first hidden state)}\n%\\end{subfigure}\n%\\begin{subfigure}{\\linewidth}\n%\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/default/TS01_AR_6.pdf} \n%\\caption{Estimated displacement autoregressive component}\n%\\end{subfigure}\n%\\caption{Estimated results using OpenBDLM default model parameters and default initial hidden states. The hidden states are estimated from the data presented in Figure~\\ref{fig:DataSummaryRawSynthetic}a. The solid line and shaded area represent the mean and standard deviation of the estimated hidden states, respectively. 
The red dashed line represent the true the hidden state value.}\n%\\label{fig:SYNTHETICDefaultDefaultExample4}\n%\\end{figure*}\n\n%\\subsection{Step 6: estimate the initial hidden states}\n%\n%From the main menu, type  \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!2!}, to optimize the initial hidden states value.\n%The estimated initial hidden states mean and covariance values are \n%\\begin{align*}\n%\\bm \\mu^{*}_{0} & = [\t10 ,   \t-0.00103,\t-9.68\\times10^{-6},\t10   , \t10    ,\t-0.0106  ]^{\\intercal}, \\text{and} \\\\\n% \\text{diag}(\\bm\\Sigma^{*}_{0}) & = [\t2.35\\times10^{-5}\t, 1.33\\times10^{-9},\t3.3\\times10^{-14}\t, 1.9\\times10^{-6}\t, 2.03\\times10^{-6}\t,0.000353    ], \n% \\end{align*}\n% respectively.\n%Once it is done, type  \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!3!}, and then  \\colorbox{light-gray}{\\lstinline[basicstyle = \\mlttfamily \\small, backgroundcolor = \\color{light-gray}]!1!} to compute the filtered hidden states using the optimized model parameters and optimized initial hidden states.\n%The value of the log-likelihood is $5011$.\n%The estimated hidden states are presented in Figure~\\ref{fig:SYNTHETICOptimizedOptimized}.\n\n\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_ObservedPredicted.pdf}\n\\caption{Observed and estimated displacement data}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_LL_1.pdf} \n\\caption{Estimated displacement level component}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_LT_2.pdf}\n\\caption{Estimated displacement trend component.}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_LA_3.pdf}\n\\caption{Estimated displacement local acceleration component.}\n\\end{subfigure}\n\\end{figure*}\n\\begin{figure*}[h!]\n\\ContinuedFloat\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_S1_4.pdf}\n\\caption{Estimated displacement yearly periodic component (first hidden state)}\n\\end{subfigure}\n\\begin{subfigure}{\\linewidth}\n\\includegraphics[width=0.9\\linewidth]{./docfigs/Example_SYNTHETIC/optim_param_optim_initialhiddenstate/TS01_AR_6.pdf} \n\\caption{Estimated displacement autoregressive component}\n\\end{subfigure}\n\\caption{Estimated results using OpenBDLM with the default model parameters and optimized initial hidden states. The hidden states are estimated from the data presented in Figure~\\ref{fig:DataSummaryRawSynthetic}a. The solid line and shaded area represent the mean and standard deviation of the estimated hidden states, respectively. 
The red dashed line represents the true hidden state value.}\n\\label{fig:SYNTHETICOptimizedOptimized}\n\\end{figure*}\n\n\n", "meta": {"hexsha": "070000045927a6f439768f59d108daa23834bf4f", "size": 15251, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/pdf_doc/section/OpenBDLMExampleCreatingSyntheticData.tex", "max_stars_repo_name": "CivML-PolyMtl/OpenBDLM", "max_stars_repo_head_hexsha": "af395cea6d394b0d1fb91ce76ddda9d97c02318f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-05-19T23:42:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T17:32:11.000Z", "max_issues_repo_path": "doc/pdf_doc/section/OpenBDLMExampleCreatingSyntheticData.tex", "max_issues_repo_name": "bhargobdeka/OpenBDLM", "max_issues_repo_head_hexsha": "af395cea6d394b0d1fb91ce76ddda9d97c02318f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/pdf_doc/section/OpenBDLMExampleCreatingSyntheticData.tex", "max_forks_repo_name": "bhargobdeka/OpenBDLM", "max_forks_repo_head_hexsha": "af395cea6d394b0d1fb91ce76ddda9d97c02318f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-10-18T07:18:38.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-30T02:26:06.000Z", "avg_line_length": 67.7822222222, "max_line_length": 477, "alphanum_fraction": 0.7598190283, "num_tokens": 4502, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.5673287626200393}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{February 10, 2014}\n\\maketitle\n\\section*{Lesson 10}\n\\subsection*{use of integral transforms}\n\\begin{align*}\n  \\intertext{sine or fourier transform?}\n  \\mathcal{F}_s[f]&=\\frac{2}{\\pi}\\int_0^\\infty{f(t)\\sin(\\omega t)\\,\\mathrm{d}t}=F(\\omega)\\\\\n  {\\mathcal{F}_s}^{-1}[F]&=\\int_0^\\infty{F(\\omega)\\sin(\\omega t)\\,\\mathrm{d}\\omega}=f(t)\\\\\n  \\intertext{Laplace}\n  \\mathcal{L}[f]&=F(s)=\\int_0^\\infty{e^{-st}f(t)\\,\\mathrm{d}t}\n\\end{align*}\nStart with a problem in terms of (say) time t. Original solutioon $f(t)$. Transform it to an equation (in s). Solve for $F(s)$. Inversion $F(s)\\to f(t)$. Simplest way of inverstion: Use a table of transforms.\n\\begin{align*}\n  &\\text{Laplace}&&f(t)&&F(s)\\\\\n  &&&t^p&&\\frac{T(p+1)}{S^{p+}}\\\\\n  &&&\\cos(at)&&\\frac{s}{s^2+a^2}\\\\\n  &&&\\vdots&&\\vdots\\\\\n\\end{align*}\n\\subsubsection*{note}\nwhen you first see laplace transforms you do not see an inversion formula written down. This is because it turns out that the inversion formula requires complex analysis. Even to write down.\n\\[f(t)=\\frac{1}{2\\pi i}\\int_{c-i\\infty}^{c+i\\infty}{F(s)e^t\\,\\mathrm{d}s}\\]\nThis is covered in chapter 13.\n\\emph{extra credit for using this inversion formula. Talk to him about it}\n\\subsection*{some properties of sine transform}\n\\begin{align*}\n  \\mathcal{F}_s[f']&=\\frac{2}{\\pi}\\int_0^\\infty{f'(t)\\sin(\\omega t)\\,\\mathrm{d}t}\\\\\n  &=\\frac{2}{\\pi}\\left(\\left.f(t)\\sin(\\omega t)\\right|_0^\\infty-\\int_0^\\infty{f(t)\\cdot \\omega\\cos(\\omega t)\\,\\mathrm{d}t}\\right)\\\\\n  &=\\frac{2}{\\pi}\\left(0-0-\\omega\\int_0^\\infty{f(t)\\cdot \\cos(\\omega t)\\,\\mathrm{d}t}\\right)\\\\\n  &=-\\omega\\mathcal{F}_c[f]\\\\\n  \\mathcal{F}_s[f'']&=-\\omega\\mathcal{F}_c[f']=-\\omega^2\\mathcal{F}_s[f]-\\frac{2\\omega}{\\pi}f(0)\\\\\n  \\mathcal{F}_c[f']&=+\\omega\\mathcal{F}_s[f]-\\frac{2}{\\pi}f(0)\n\\end{align*}\n\\subsection*{example p. 77}\n\\begin{align*}\n  \\text{PDE}&&u_t&=\\alpha^2u_{xx}&0&<x<\\infty &0&<t<\\infty\\\\\n  \\text{BC}&&u(0,t)&=A&&&0&<t<\\infty\\\\\n  \\text{IC}&&u(x,0)&=0&0&<x<\\infty\n\\end{align*}\nindefinite length rod. starts at zero temp.\n\nuse integral transforms to solve this. 
specifically the sine transform.\n\\begin{align*}\n  \\mathcal{F}_s[f]&=\\frac{2}{\\pi}\\int_0^\\infty{f(t)\\sin(\\omega t)\\,\\mathrm{d}t}=F(\\omega)\\\\\n  \\mathcal{F}_s[u_t]&=\\alpha^2\\mathcal{F}_s[u_{xx}]\\\\\n  &=\\alpha^2\\left(-\\omega^2U(\\omega,t)+\\frac{2}{\\pi}\\omega u(0,t)\\right)=\\alpha^2\\left(-\\omega^2U(\\omega,t)+\\frac{2}{\\pi}\\omega A\\right)\\\\\n  \\intertext{note:}\n  \\frac{2}{\\pi}\\int_0^\\infty{\\sin(\\omega x)\\frac{\\partial u}{\\partial t}(x,t)\\,\\mathrm{d}x}&=\\frac{\\partial}{\\partial t}\\left[\\frac{2}{\\pi}\\int_0^\\infty{\\sin(\\omega x)u(x,t)\\,\\mathrm{d}x}\\right]\\\\\n  U(\\omega,t)&=C(\\omega)e^{-\\alpha^2\\omega^2t}+\\frac{2}{\\pi}\\frac{A}{\\omega}\\\\\n  \\intertext{as $t\\to0^+$}\n  U(\\omega,t)&\\to\\frac{2}{\\pi}\\int_0^\\infty{\\sin(\\omega x)\\cdot0\\,\\mathrm{d}x}=0\\text{ (initial condition)}\\\\\n  0&=C(\\omega)+\\frac{2}{\\pi}\\frac{A}{\\omega}\\\\\n  U(\\omega,t)&=\\frac{2}{\\pi}\\frac{A}{\\omega}\\left(1-e^{-\\alpha^2\\omega^2t}\\right)\\text{ sine transform of the solution $u(x,t)$}\n\\end{align*}\n\\end{document}\n", "meta": {"hexsha": "591a1d903c5b3d5d17e1eba8c87c8ad03c85f0f4", "size": 3128, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-notes-2014-02-10.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-notes-2014-02-10.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-notes-2014-02-10.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.1230769231, "max_line_length": 208, "alphanum_fraction": 0.6416240409, "num_tokens": 1309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.72487026428967, "lm_q2_score": 0.7826624789529376, "lm_q1q2_score": 0.5673287579682242}}
{"text": "\n\\subsection{Curvature tensor}\n\n\n", "meta": {"hexsha": "154fca6d253b87c989d690a0f584fbf2aeed4f26", "size": 33, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsRiemann/03-06-curvature.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/manifoldsRiemann/03-06-curvature.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsRiemann/03-06-curvature.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 6.6, "max_line_length": 29, "alphanum_fraction": 0.7575757576, "num_tokens": 8, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.6825737473266736, "lm_q1q2_score": 0.5673164247411642}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage[margin=0.3in]{geometry}\n\\usepackage{amssymb}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\\usepackage{graphicx}\n\\usepackage{float}\n\\usepackage{hyperref}\n\\usepackage{pgfplots}\n\\usepackage{enumitem}\n\\usepackage{bm}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{titling}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{definition}{Definition}\n\\newcommand{\\subtitle}[1]{\\posttitle{\\par\\end{center}\\begin{center}\\large#1\\end{center}\\vskip0.5em}}\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.4,0.4,0.4}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.91,0.91,0.91}\n\\lstdefinestyle{mystyle}{backgroundcolor=\\color{backcolour}, commentstyle=\\color{codegreen}, keywordstyle=\\color{magenta}, numberstyle=\\footnotesize\\color{codegray}, stringstyle=\\color{codepurple}, basicstyle=\\ttfamily\\fontsize{12}{12}, breakatwhitespace=false, breaklines=true, captionpos=b, keepspaces=true, numbers=left, numbersep=5pt, showspaces=false, showstringspaces=false, showtabs=false, tabsize=2}\n\\lstset{style=mystyle}\n\\setlength\\parindent{0pt}\n\n\\newcommand{\\code}[1]{\\colorbox{backcolour}{\\texttt{#1}}}\n\n\\setlength{\\abovecaptionskip}{2pt plus 3pt minus 2pt}\n\n\\begin{document}\n\n\\title{CTA200H Assignment 2}\n\\author{Anatoly Zavyalov}\n\\date{\\today}\n\\maketitle\n\n\\section*{Question 1}\n\n\\begin{figure}[H]\n    \\begin{center}\n        \\includegraphics[scale=0.8]{img_q1.pdf}\n    \\end{center}\n    \\caption{Comparison of errors of two ways of estimating derivative on a loglog plot, based on varying stepsize.}\n\\end{figure}\n\n\n\\subsection*{Methods}\n\nWe first wrote Python functions to represent the two ways of approximating the derivative, and then created a function to determine the error between the numerical and analytical ways of finding the derivative. We then iterated through different values of stepsize \\(h\\) (with a certain step between the values of the stepsize), and plotted the values on a loglog plot using the \\code{matplotlib} Python module.\n\n\\subsection*{Analysis}\n\nWe see that the absolute error of the two methods of approximations have the same approximate innacuracy when the stepsize is high, but the second method of approximation is more accurate by orders of magnitude when the stepsize is decreased. The slope of the error vs. stepsize plot represents how rapidly the method becomes more innacurate as the stepsize is increased.\n\n\\newpage\n\n\\section*{Question 2}\n\n\\begin{figure}[H]\n    \\begin{center}\n        \\includegraphics[scale=0.55]{img_q2_1.pdf} \n        \\includegraphics[scale=0.55]{img_q2_2.pdf}\n        \\caption{Left: Mandelbrot set with two colors; Right: Mandelbrot set with coloring based on number of iterations before diverging.}\n    \\end{center}\n\\end{figure}\n\n\\subsection*{Methods}\n\nIn both illustrations, for \\(c \\in \\mathbb{C}\\), we developed a function to check the number of iterations of the function \\(z(n) = z(n-1)^2 + c\\) with \\(z(0) = 0\\) before \\(z(n)\\) was greater than some threshold, up to a maximum number of iterations. This function was vectorized using \\code{numpy}'s \\code{vectorize} method, and was then applied to a \\code{numpy.meshgrid} in order to get a two-dimensional mapping of the function. Then, \\code{matplotlib}'s \\code{imshow} method was used to display the graphics. 
For the illustration that required two colors, the function just returned \\code{1.0} if the point was bounded, and \\code{0.0} otherwise, instead of returning the number of iterations, thus resulting in a two-colored output.\n\n\\subsection*{Analysis}\n\nWe see that the output is the famous Mandelbrot set! This is a fractal (as can be seen by increasing resolution and zooming in on the \\code{matplotlib} output in the Jupyter notebook), and a very cool one at that. We see that the large middle region is completely bounded upon repeatedly applying the function, and it ``branches out'' into a fractal when heading toward regions that become unbounded.\n\n\\newpage\n\n\\section*{Question 3}\n\n\\begin{figure}[H]\n    \\begin{center}\n        \\includegraphics[scale=0.55]{img_q3_1.pdf}\n        \\includegraphics[scale=0.55]{img_q3_2.pdf}\n    \\end{center}\n    \\caption{Left: SIR Model with different values of $\\beta$, $\\gamma$. Right: SIRD Model with different values of $\\beta$, $\\gamma$, $\\kappa$.}\n\\end{figure}\n\n\\subsection*{Methods}\n\nAs in lecture, we create a function that returns the right-hand side of the ODEs (which have the time derivative on the left-hand side). We then use \\code{scipy}'s \\code{integrate.ode} function to numerically integrate the ODEs, then convert the data into \\code{numpy} arrays, and finally draw the plots using \\code{matplotlib}. \n\nFor the SIRD model, we added a death compartment and a death coefficient \\(\\kappa\\), which represents how much of the infected population dies every unit of time due to the disease. We then modified our code from the SIR model to accept this fourth parameter and recreated the graphs.\n\n\\subsection*{Analysis}\n\nWe see that, for the SIR model, when \\(\\beta > \\gamma\\), the size of the infected population rises in a single wave before slowly dropping, with the size of the recovered population increasing shortly after, and with the size of the susceptible population dropping quite rapidly. This makes sense, as if the recovery rate is lower than the infection rate, the disease will have the chance to infect some people before the infected population recovers. However, when \\(\\gamma > \\beta\\) (such as in the bottom left), the chance of recovery of an infected person is greater than the possibility of a susceptible person becoming infected, hence almost no one gets infected. \n\n\\medskip\n\nFor the SIRD model, we see that if $\\kappa \\geq \\beta$ (such as in the top left example), then it is more likely for infected people to die than it is for susceptible people to get infected. Hence, the disease basically does nothing, the small infected population dies, and that's the end of it. 
We also see that, in the bottom right example, if the infection coefficient is high, recovery coefficient is low, and death coefficient is relatively low (so disease can kill people slowly enough without just completely killing every infected person quickly), we have quite a deadly pandemic on our hands.\n\n\\end{document}", "meta": {"hexsha": "5a2854e742cc8cec82f9ca94fdc5e9e6d4672389", "size": 6253, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment_2/assignment_2.tex", "max_stars_repo_name": "firetto/CTA200", "max_stars_repo_head_hexsha": "f536de21a1a8f30074f98b2b98829a6f94d6c227", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment_2/assignment_2.tex", "max_issues_repo_name": "firetto/CTA200", "max_issues_repo_head_hexsha": "f536de21a1a8f30074f98b2b98829a6f94d6c227", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment_2/assignment_2.tex", "max_forks_repo_name": "firetto/CTA200", "max_forks_repo_head_hexsha": "f536de21a1a8f30074f98b2b98829a6f94d6c227", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.3039215686, "max_line_length": 738, "alphanum_fraction": 0.7677914601, "num_tokens": 1626, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.831143054132195, "lm_q1q2_score": 0.5673164182898769}}
{"text": "\\section{Language and axiom system}\r\n\r\n\\begin{defn} (Basic vocabulary)\r\n\r\n\t\\begin{itemize} \r\n\t\t\\item $P, Q, P\\p, Q\\p, \\dots$ (\\textit{relation symbols});\r\n\t\t\\item $x, y, z, \\dots$ (\\textit{individual variables});\r\n\t\t\\item $\\impli, \\bot$ (\\textit{boolean connectives});\r\n\t\t\\item $\\todo$ (\\textit{universal quantifier});\r\n\t\t\\item $p_{0}, p_{1}, p_{2}, \\dots$(\\textit{justification variables});\r\n\t\t\\item $c_{0}, c_{1}, c_{2}, \\dots$ (\\textit{justification constants});\r\n\t\t\\item $+$, $\\cdot$, $!$, $?$, $gen_{x}$ (\\textit{justification operators - for every individual variable $x$, there is an operator $gen_{x}$});\r\n\t\t\\item $(\\cdot):_{X} (\\cdot)$,(for every finite set of individual variables $X$);\r\n\t\t\\item $),($ (\\textit{parentheses}).\r\n\t\\end{itemize}\r\n\\end{defn}\r\n\r\n\\begin{defn} (Justification terms)\r\n\\begin{center}\r\n$ t : = p_{i}$   $|$ $c$ $|$  $(t_{1} \\cdot t_{2})$ $|$ $(t_{1} + t_{2})$ $|$  $!t$ $|$ $?t$ $|$ $gen_{x}(t)$\r\n\\end{center}\r\n\\end{defn}\r\n\r\n\\begin{defn} (Justification formulas)\r\n\\begin{center}\r\n$ \\varphi : = Q(x_1, \\dots, x_n)$   $|$ $\\bot$ $|$  $\\varphi \\impli \\psi$ $|$ $\\todo x \\varphi$ $|$  $t$$:_{X}$$\\varphi$\r\n\\end{center}\r\n\\end{defn}\r\n\r\n\r\n\\qquad The set of all formulas is denoted by $L$. We are assuming that the set of relational symbols, individual variables, justification variables and justification constants are all countable sets. Thus, it is easy to check that $L$ itself is a countable set. \r\n\r\n\\begin{defn}\r\nWe define the notion of free variables of $\\varphi$, $fv(\\varphi)$, inductively as follows:\r\n\r\n\\begin{itemize} \r\n\t\\item If $\\varphi$ is atomic, then $fv(\\varphi)$ is the set of all variables occurring in $\\varphi$.\r\n\t\\item If $\\varphi$ is $(\\psi \\impli \\theta)$, then $fv(\\varphi)$ is $fv(\\psi) \\cup fv(\\theta)$.\r\n\t\\item If $\\varphi$ is $\\todo x \\psi$, then $fv(\\varphi)$ is $fv(\\psi) \\backslash \\{x\\}$.\r\n\t\\item If $\\varphi$ is $t$$:_{X}$$\\psi$, then  $fv(\\varphi)$ is $X$.\r\n\\end{itemize}\r\n\r\n\r\n\\qquad Similarly as in the classical case, we must define the notion of an individual variable $y$ being free for $x$ in the formula $\\varphi$. The definition is the same as in the classical case, we only add the following clause: $y$ is free for $x$ in $t$$:_{X}$$\\varphi$ if two conditions are met, i) $y$ is free for $x$ in $\\varphi$, ii) if $y \\in fv(\\varphi)$, then $y \\in X$.\r\n\\end{defn}\r\n\r\n\\qquad We write $\\varphi(x_{1}, \\dots, x_{n})$ to denote that the free variables of $\\varphi$ are among $\\{x_{1}, \\dots, x_{n}\\}$. Let $y_{1}, \\dots, y_{n}$ be variables, we write $\\varphi(y_{1}/x_{1}, \\dots, y_{n}/x_{n})$ to denote the formula obtained by substitution of $y_{1}, \\dots, y_{n}$ for all the free occurrences of $x_{1}, \\dots, x_{n}$ in $\\varphi$,  respectively. When it is clear from the context which variables are free in $\\varphi$ we simply write $\\varphi(y_{1}, \\dots, y_{n})$ instead of $\\varphi(y_{1}/x_{1}, \\dots, y_{n}/x_{n})$. We use $\\vec{x},\\vec{y}, \\dots$ for sequence of variables; and we write $\\todo \\vec{x} \\varphi(\\vec{x})$ in the place of $\\todo x_1, \\dots ,\\todo x_n\\varphi(x_1, \\dots ,x_n)$ \r\n\r\n\r\n\\qquad We write $Xy$ instead of $X \\cup \\{y\\}$, in this case it is assumed that $y \\notin X$. 
And we use $t$$:$$\\varphi$ as an abbreviation for $t$$:_{\\vazio}$$\\varphi$.\r\n\r\n\\qquad The first-order JT45, FOJT45, is axiomatized by the following axiom schemes and inference rules:\\\\\r\n\r\n\r\n\\textbf{A1} classical axioms of first-order logic\\\\\r\n\r\n\\textbf{A2} $t$$:_{Xy}$$\\varphi \\impli$ $t$$:_{X}$$\\varphi$, provided $y$ does not occur free in $\\varphi$\\\\\r\n\r\n\\textbf{A3} $t$$:_{X}$$\\varphi \\impli$ $t$$:_{Xy}$$\\varphi$ \\\\\r\n\r\n\\textbf{B1} $t$$:_{X}$$\\varphi \\impli \\varphi$\\\\\r\n\r\n\\textbf{B2} $t$$:_{X}$$(\\varphi \\impli \\psi) \\impli$ $(s$$:_{X}$$\\varphi \\impli$ $[t\\cdot s]$$:_{X}$$\\psi)$\\\\\r\n\r\n\\textbf{B3} $t$$:_{X}$$\\varphi \\impli$ $[t+s]$$:_{X}$$\\varphi$, $s$$:_{X}$$\\varphi \\impli$ $[t+s]$$:_{X}$$\\varphi$\\\\ \r\n\r\n\\textbf{B4} $t$$:_{X}$$\\varphi \\impli$ $!t$$:_{X}$$t$$:_{X}$$\\varphi$\\\\\r\n\r\n\r\n\\textbf{B5} $\\nao t$$:_{X}$$\\varphi \\impli$ $?t$$:_{X}$$\\nao t$$:_{X}$$\\varphi$\\\\\r\n\r\n\r\n\\textbf{B6} $t$$:_{X}$$\\varphi \\impli$ $gen_{x}(t)$$:_{X}$$ \\todo x \\varphi$, provided $x \\notin X$\\\\\r\n\r\n\r\n\\textbf{R1} (\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$ \\\\\r\n\r\n\\textbf{R2} (\\textit{generalization}) $\\teo \\varphi$ $\\Rightarrow$ $\\teo \\todo x \\varphi$ \\\\\r\n\r\n\\textbf{R3} (\\textit{axiom necessitation}) $\\teo c$$:$$\\varphi$, where $\\varphi$ is an axiom and $c$ is a justification constant.\\\\\r\n\r\n\\begin{defn}\r\nLet $\\C$ be a constant specification. We say that $\\C$ is \\textit{schematic} if all instances of an axiom scheme are assigned the same constants.\r\n\\end{defn}\r\n\r\n\r\n\\qquad We use $\\Gamma, \\Delta, \\Theta, \\dots$ as variables for sets of formulas. The notion of $\\Gamma \\teo \\varphi$ is defined as usual. The only thing that should be noted is that, if $\\Gamma$ deduces $\\varphi$ using the generalization rule, then this rule was not applied to a variable which occurs free in the formulas of $\\Gamma$. \r\n\r\n\\qquad Since derivations depend on the constant specification being considered, we sometimes write $\\teo_{\\C} \\varphi$ to point out that the proof of $\\varphi$ meets the constant specification $\\C$.\r\n\r\n\\begin{defn}\r\nA \\textit{substitution} $\\sigma$ is a mapping from the set of justification variables to the set of justification terms. For a justification term $t$ the result of applying a substitution $\\sigma$ is denoted $t\\sigma$; similarly, for a formula $\\varphi$ we write $\\varphi\\sigma$.\r\n\\end{defn}\r\n\r\n\r\n\\begin{lema}\r\n(\\textit{Substitution}) Let $\\varphi$ be a formula, $\\sigma$ a substitution and $\\C$ a schematic constant specification. If $\\teo_{\\C} \\varphi$, then $\\teo_{\\C} \\varphi\\sigma$.\r\n\\end{lema}\r\n\r\n\\begin{lema}\r\n(\\textit{Deduction}) $\\Gamma,\\varphi \\teo \\psi$ iff $\\Gamma \\teo \\varphi \\impli \\psi$.\r\n\\end{lema}\r\n\r\n\\begin{teor}\r\n\t(\\textit{Internalization}) Let $\\C$ be an axiomatically appropriate constant specification; $p_{0}, \\dots, p_{k}$ be justification variables; $X_{0}, \\dots, X_{k}$ be finite sets of individual variables, and $X = X_{0} \\cup \\dots \\cup X_{k}$. 
Under these conditions, if $p_{0}$$:_{X_{0}}$$\\varphi_{0}, \\dots, p_{k}$$:_{X_{k}}$$\\varphi_{k} \\teo_{\\C} \\psi$, then there is a justification term $t(p_{0}, \\dots, p_{k})$ such that\r\n\t\r\n\t\\begin{center}\r\n\t\t$p_{0}$$:_{X_{0}}$$\\varphi_{0}, \\dots, \r\n\t\tp_{k}$$:_{X_{k}}$$\\varphi_{k} \\teo_{\\C} t$$:_{X}$$\\psi$.\r\n\t\\end{center}\r\n\t\r\n\\end{teor}\r\n\r\n\\begin{proof}\r\nThe same proof as presented in \\cite[p. 7]{Artemov11}.\r\n\\end{proof}\r\n\r\n\\pagebreak\r\n\r\n\r\n\\begin{pro}\r\n(\\textit{Explicit counterpart of the Barcan Formula and its converse}) For every formula $\\varphi(x)$ and every justification term $t$, there are justification terms $CB(t)$ and $B(t)$ such that: \r\n\\begin{center}\r\n$\\teo t$$:$$\\todo x \\varphi(x) \\impli \\todo x CB(t)$$:_{\\{x\\}}$$\\varphi(x)$\\\\\r\n\r\n$\\teo \\todo x t$$:_{\\{x\\}}$$\\varphi(x) \\impli B(t)$$:$$\\todo x \\varphi(x)$\r\n\\end{center}\r\n\\end{pro}\r\n\r\n\r\n\r\n\\begin{proof}\r\n\\qquad In Appendix.\r\n\\end{proof}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\\section{Semantics: basic definitions}\r\n\r\n\r\n\\begin{defn}\r\nAn S5 \\textit{skeleton} is a structure $\\bl \\W, \\R, \\D \\br$ where: $\\W \\ne \\vazio$; $\\R \\subseteq \\W \\times \\W$ such that $\\R$ is an equivalence relation, and $\\D \\ne \\vazio$.\r\n\\end{defn}\r\n\r\n\r\n\\qquad For any non-empty set $\\D$ we are going to use the elements of $\\D$ as constants, and we are going to use $\\vec{a}, \\vec{b}, \\dots$ to denote sequences of constants.\r\n\r\n\\begin{defn}\r\nLet $\\D$ be a non-empty set. The set of all $\\D$-formulas, $L_{\\D}$, is defined as follows:\r\n\r\n\\begin{center}\r\n\t$L_{\\D} = \\{\\varphi (\\vec{a})$ $|$ $\\varphi(\\vec{x}) \\in L$ and $\\vec{a} \\in \\D\\}$\r\n\\end{center}\r\n\r\n\\end{defn}\r\n\r\n\\qquad As usual, for a $\\D$-formula $\\varphi$, we say that $\\varphi$ is closed if $\\varphi$ has no free variables.\r\n\r\n\\begin{defn}\r\nA \\textit{Fitting model} is a structure $M = \\model$ where $\\bl \\W, \\R, \\D \\br$ is an S5 skeleton; and:\r\n\r\n\\begin{itemize} \r\n\\item $\\I$ is an \\textit{interpretation function}, i.e., $\\I$ is a function assigning to each $n$-ary relational symbol $Q$ and each $w \\in \\W$ an $n$-ary relation $\\I(Q,w)$ on $\\D$.\r\n\r\n\\item $\\E$ is an \\textit{evidence function}, i.e., for any justification term $t$ and $\\D$-formula $\\varphi$, $\\E(t,\\varphi) \\subseteq \\W$.\r\n\\end{itemize}\r\n\r\n\\end{defn}\r\n\r\n\r\n\r\n\\begin{defn}\r\n\\textit{Evidence Function Conditions}. Let $M = \\model$ be a Fitting model. 
We require the evidence function to meet the following conditions:\r\n\r\n\r\n\\begin{itemize} \r\n\t\\item[] \\textbf{$\\cdot$ Condition} $\\E (t, \\varphi \\impli \\psi) \\cap \\E(s, \\varphi) \\subseteq \\E([t\\cdot s], \\psi).$\r\n\t\\item[] \\textbf{$+$ Condition} $\\E (s, \\varphi) \\cup \\E(t, \\varphi) \\subseteq \\E([s+t], \\varphi).$\r\n\t\\item[] \\textbf{$!$ Condition} $\\E (t, \\varphi) \\subseteq \\E(!t, t$$:_{X}\\varphi)$, where $X$ is the set of constant occurring in $\\varphi$.\r\n    \\item[] \\textbf{$?$ Condition} $\\W  \\backslash \\E (t, \\varphi) \\subseteq \\E(?t,\\nao t$$:_{X}\\varphi)$, where $X$ is the set of constant occurring in $\\varphi$.\r\n\t\\item[] \\textbf{$\\R$ Closure Condition} If $w \\in \\E (t, \\varphi)$ and $w \\R w\\p$, then $w\\p \\in \\E (t, \\varphi)$.\r\n\t\\item[] \\textbf{Instantiation Condition} If $w \\in \\E (t, \\varphi(x))$ and $a \\in \\D$, then $w \\in \\E (t, \\varphi(a))$.\r\n\t\\item[] \\textbf{$gen_{x}$ Condition} $\\E (t, \\varphi) \\subseteq \\E(gen_{x}(t),\\todo x\\varphi)$.\r\n\\end{itemize}\r\n\\end{defn}\r\n\r\n\\qquad We say that a model $M = \\model$ \\textit{meets constant specification $\\C$} whenever $c$$:$$\\varphi \\in \\C$, then $\\E (c, \\varphi) = \\W$.\r\n\r\n\r\n\\begin{defn}\r\nLet $M = \\model$ be a Fitting model, $\\varphi$ a closed $\\D$-formula and $w \\in \\W$. The notion that \\textit{$\\varphi$ is true at world $w$ of $M$}, in symbols $M,w \\models \\varphi$, is defined as usual by induction on $\\varphi$: \r\n\\begin{itemize} \r\n\t\\item $M,w \\models Q(\\vec{a})$ iff $\\bl \\vec{a}\\br \\in \\I(Q,w)$. \r\n\t\\item $M,w \\nmodels \\bot$. \r\n\t\\item $M,w \\models \\psi \\impli \\theta$ iff $\\M,w \\nmodels \\psi$ or $M,w \\models \\theta$.\r\n\t\\item $M,w \\models \\todo x \\psi(x)$ iff for every $a \\in \\D$, $M,w \\models \\psi(a)$.\r\n\r\n\\pagebreak\t\r\n\t\r\n\t\\item Assume $t$$:_{X}$$\\psi(\\vec{x})$ is closed and $\\vec{x}$ are all the free variables of $\\psi$. Then, $M,w \\models t$$:_{X}$$\\psi(\\vec{x})$ iff\r\n\t\\begin{enumerate}[(a)]\r\n\t\t\\item $w \\in \\E (t, \\psi(\\vec{x}))$ and\r\n\t\t\\item for every $w\\p \\in \\W$ such that $w\\R w\\p$, $\\M,w\\p \\models \\psi(\\vec{a})$ for every $\\vec{a} \\in \\D$.\r\n\t\\end{enumerate}\r\n\r\n\\end{itemize}\r\n\r\n\\end{defn}\r\n\r\n\t\r\n\r\n\\begin{defn}\r\nLet $\\varphi \\in L$ be a closed formula. We say that $\\varphi$ is \\textit{valid in the Fitting model} $M = \\model$ provided for every $w \\in W$, $M,w \\models \\varphi$. A formula with free individual variables is valid if its universal closure is valid.\r\n\\end{defn}\r\n\r\n\r\n\\begin{defn}\r\nA \\textit{Fitting model for FOJT45} is a Fitting model $M = \\model$ where $\\E$ is a \\textit{strong evidence function}, i.e., for every term $t$ and $\\D$-formula $\\varphi$, $\\E(t,\\varphi) \\subseteq \\{w \\in \\W$ $|$ $ M,w \\models t$$:_{X}$$\\varphi\\}$ where $X$ is the set of constant occurring in $\\varphi$.\r\n\r\n\r\n\\qquad For a formula $\\varphi$ and constant specification $\\C$, we write $\\models_{\\C}\\varphi$ if for every Fitting model for FOJT45 $M$ meeting $\\C$, $\\varphi$ is valid in $M$.\r\n\\end{defn}\r\n\r\n\r\n\r\n\\section{Semantics: non-validity}\r\n\r\n\\qquad Before we deal with soundness and completeness, it is useful to know some examples of non-validity in order to see that the provisions of some axioms make sense. 
There is only a minor problem: we require that Fitting models for FOJT45 have a strong evidence function, and it is not so easy to construct models with that property. The following proposition helps us to circumvent this issue.\r\n\r\n\r\n\\begin{pro}\r\nIf $M = \\model$ is a Fitting model such that for every justification term $t$ and D-formula $\\varphi$, $\\E(t,\\varphi) = \\W$, then there is a Fitting model for FOJT45 $M^{*} = \\bl\\W,\\R,\\D,\\I,\\E^{*} \\br$ such that for every $w \\in \\W$ and every formula $\\varphi$, $M,w \\models \\varphi$ iff $M^{*},w \\models \\varphi$.\r\n\\end{pro}\r\n\r\n\\begin{proof}\r\n\\qquad Let $M^{*} = \\bl\\W,\\R,\\D,\\I,\\E^{*} \\br$ where for every justification term $t$ and D-formula $\\varphi$,\r\n\r\n\\begin{center}\r\n$\\E^{*}(t,\\varphi) = \\{w \\in \\W$ $|$ $ M,w \\models t$$:_{X}$$\\varphi\\}$\r\n\\end{center}\r\n\r\nwhere $X$ is the set of constants occurring in $\\varphi$. \r\n\r\n\\qquad It is straightforward to check that $M^{*}$ is indeed a Fitting model. Now consider the following:\\\\\r\n \r\n (*) For every $w \\in \\W$ and every closed D-formula $\\varphi$, $M,w \\models \\varphi$ iff $M^{*},w \\models \\varphi$.\\\\\r\n \r\n (Proof of (*)) Induction on the complexity of $\\varphi$. Crucial case, $\\varphi = t$$:_{X}$$\\psi$. For simplicity, let's assume that $\\varphi$ is $t$$:_{\\{a\\}}$$\\psi(a,y)$.\r\n\r\n\\qquad ($\\Rightarrow$) If $M,w \\models t$$:_{\\{a\\}}$$\\psi(a,y)$, then by definition $w \\in \\E^{*}(t, \\psi(a,y))$ and for every $w\\p \\in \\W$, if $w\\R w\\p$, then $M,w\\p \\models \\psi(a,b)$ for every $b \\in \\D$. By the induction hypothesis, for every $w\\p \\in \\W$, if $w\\R w\\p$, then $M^{*},w\\p \\models \\psi(a,b)$ for every $b \\in \\D$. Thus, $M^{*},w \\models t$$:_{\\{a\\}}$$\\psi(a,y)$.\r\n\r\n\\qquad ($\\Leftarrow$) If $M^{*},w \\models t$$:_{\\{a\\}}$$\\psi(a,y)$, then $w \\in \\E^{*}(t, \\psi(a,y))$. By definition, $M,w \\models t$$:_{\\{a\\}}$$\\psi(a,y)$. $\\Box$\\\\\r\n\r\n\r\n\\qquad By (*) we have that\r\n\r\n\\begin{center}\r\n\t$\\E^{*}(t,\\varphi) = \\{w \\in \\W$ $|$ $ M,w \\models t$$:_{X}$$\\varphi\\} = \\{w \\in \\W$ $|$ $ M^{*},w \\models t$$:_{X}$$\\varphi\\}$\r\n\\end{center}\r\n\r\n\r\n\\qquad Hence, $\\E^{*}$ is a strong evidence function and $M$ and $M^{*}$ agree on all D-formulas. Therefore, $M^{*}$ is a Fitting model for FOJT45 and $M$ and $M^{*}$ agree on all formulas.\r\n\\end{proof}\r\n\r\n\r\n\\qquad With this proposition we can construct non-validity examples similar to those presented in \\cite{Fitting14}.\r\n\r\n\\qquad \\textbf{Example 1:} the restriction on axiom \\textbf{A2} is needed. Take, for example, the formula $t$$:_{\\{x,y\\}}$$Q(x,y) \\impli t$$:_{\\{x\\}}$$Q(x,y)$; let $M = \\model$ be a Fitting model where:\r\n\\begin{itemize}\r\n\\item $\\W = \\{w_0, w_1\\}$;\r\n\\item $\\R = \\W \\times \\W$;\r\n\\item $\\D = \\{a, b\\}$;\r\n\\item $\\I(w_{0},Q) = \\I(w_{1},Q) = \\{\\bl a,b \\br\\}$;\r\n\\item $\\E(t,\\varphi) = \\W$, for every term $t$ and formula $\\varphi$.\r\n\\end{itemize}\r\n\r\n\r\n\\qquad Clearly, $M,w_0 \\models t$$:_{\\{a,b\\}}$$Q(a,b)$ and $M,w_0 \\nmodels t$$:_{\\{a\\}}$$Q(a,y)$. Hence, $M,w_0 \\nmodels t$$:_{\\{x,y\\}}$$Q(x,y) \\impli t$$:_{\\{x\\}}$$Q(x,y)$. By Proposition 2, $t$$:_{\\{x,y\\}}$$Q(x,y) \\impli t$$:_{\\{x\\}}$$Q(x,y)$ is not valid in every Fitting model for FOJT45.\r\n\r\n\r\n\\qquad \\textbf{Example 2:} The proviso of axiom \\textbf{B6} is necessary. 
Take, for example, the formula $t$$:_{\\{x\\}}$$Q(x) \\impli gen_{x}(t)$$:_{\\{x\\}}$$\\todo x Q(x)$; let $M = \\model$ be a Fitting model where:\r\n\\begin{itemize}\r\n\\item $\\W = \\{w_0\\}$;\r\n\\item $\\R = \\W \\times \\W$;\r\n\\item $\\D = \\{a,b\\}$;\r\n\\item $\\I(w_{0},Q) = \\{a\\}$;\r\n\\item $\\E(t,\\varphi) = \\W$, for every term $t$ and formula $\\varphi$.\r\n\\end{itemize}\r\n\r\n\r\n\\qquad Clearly, $M,w_0 \\models t$$:_{\\{a\\}}$$Q(a)$ and since $M,w_0 \\nmodels Q(b)$, then $M,w_0 \\nmodels \\todo x Q(x)$, and so $M,w_0 \\nmodels gen_{x}(t)$$:_{\\{a\\}}$$\\todo x Q(x)$. Hence, $M,w_0 \\nmodels t$$:_{\\{x\\}}$$Q(x) \\impli gen_{x}(t)$$:_{\\{x\\}}$$\\todo x Q(x)$. Again by Proposition 2, $t$$:_{\\{x\\}}$$Q(x) \\impli gen_{x}(t)$$:_{\\{x\\}}$$\\todo x Q(x)$ is not valid in every Fitting model for FOJT45.\r\n\r\n\r\n\r\n\r\n\r\n\\section{Soundness and completeness}\r\n\t\r\n\\begin{teor}\r\n(\\textit{Soundness}) Let $\\C$ be a constant specification. For every formula $\\varphi \\in L$, if $\\teo_{\\C} \\varphi$, then $\\models_{\\C}\\varphi$.\r\n\\end{teor}\t\r\n\t\r\n\\begin{proof}\r\nThe proof is by induction on the theorems of the axiom system using the constant specification $\\C$. The argument is exactly the same as presented in \\cite[p.~9--10]{Fitting14}. We are going to show validity for the axiom specific to FOJT45.\r\n\t\r\n\\textbf{B5} $\\nao t$$:_{X}$$\\psi \\impli$ $?t$$:_{X}$$\\nao t$$:_{X}$$\\psi$. For simplicity, assume $X = \\{x\\}$ and $\\psi = \\psi(x,y)$. So, we have that $\\teo_{\\C}\\nao t$$:_{\\{x\\}}$$\\psi(x,y) \\impli$ $?t$$:_{\\{x\\}}$$\\nao t$$:_{\\{x\\}}$$\\psi(x,y)$.\r\n\r\n\\qquad Let $M = \\model$ be a Fitting model for FOJT45 meeting $\\C$, $w \\in \\W$ and $a \\in \\D$. Suppose $M, w \\models \\nao t$$:_{\\{a\\}}$$\\psi(a,y)$. Then, $M, w \\nmodels t$$:_{\\{a\\}}$$\\psi(a,y)$. By the definition of the strong evidence function, $w \\notin \\E (t, \\psi(a,y))$. By the ? condition, $w \\in \\E(?t,\\nao t$$:_{\\{a\\}}$$\\psi(a,y))$. 
Again, by the strong evidence function $M, w \\models ?t$$:_{\\{a\\}}$$\\nao t$$:_{\\{a\\}}$$\\psi(a,y)$.\r\n\t\r\n\\end{proof}\r\n\r\n\r\n\\subsection{Language extension}\r\n\r\n\r\n\r\n\\subsection{Templates}\r\n\r\n\r\n\\subsection{Using templates for Henkin and Lindenbaum theorems}\r\n\r\n\r\n\\subsection{Completeness}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "meta": {"hexsha": "6b3b7aa3370df865d9653411f153f5ca5e018f95", "size": 17208, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/chapters/first-order.tex", "max_stars_repo_name": "felipessalvatore/dissertacao_mestrado", "max_stars_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/chapters/first-order.tex", "max_issues_repo_name": "felipessalvatore/dissertacao_mestrado", "max_issues_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/chapters/first-order.tex", "max_forks_repo_name": "felipessalvatore/dissertacao_mestrado", "max_forks_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.0106951872, "max_line_length": 728, "alphanum_fraction": 0.6116922362, "num_tokens": 6316, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7401743735019595, "lm_q1q2_score": 0.5671909171896844}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[latin1]{inputenc}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm,dsfont}\n\n\\def\\F{\\mathcal{F}}\n\\def\\G{\\mathcal{G}}\n\\def\\I{\\mathcal{I}}\n\\def\\y{\\mathcal{\\mu}}\n\\def\\m{m}\n\\def\\card{\\mathrm{card}}\n\n\\def\\diam{\\mathrm{diam}}\n\n\\def\\det{\\mathrm{det}}\n\n\\def\\tr{\\mathrm{tr}}\n\n\\def\\cov{\\mathrm{cov}}\n\\begin{document}\n\n\n\\section{Entropy formulas}\n\\begin{description}\n\t$\\G$ - family of all normal distributions \\newline\n\t$\\G_A$ - where $A$ proper matrix(square, symetric positvelly defined) is a subfamilly of $\\G$ which covariance equals $A$\n\t\t\\newline\n\t$\\G_{(\\cdot \\I)} = \\cup_{r \\in \\mathbb{R} - \\{0\\}} \\G_{r\\cdot \\I}\n\t\n\\end{description}\n\n\\begin{table}\\centering\n\n\\begin{tabular}{||l|l|l||} \\hline \\hline\n\n$\\F$ & $\\Sigma_{\\F}(\\y)$ & $H^{\\times}(\\y\\|\\F)$ \\\\[0.5ex]\n\n\\hline \\hline\n\n$\\G_{\\Sigma}$ & $\\Sigma$ & $\\frac{N}{2} \\ln(2\\pi)+\\frac{1}{2}\\tr(\\Sigma^{-1}\\Sigma_{\\y})+\\frac{1}{2}\\ln \\det(\\Sigma)$\n\n\\\\[0.5ex] \\hline\n\n$\\G_{r\\I}$ & $r\\I$ &\n\n$\\frac{N}{2}\\ln(2\\pi)+\\frac{1}{2r}\\tr(\\Sigma_{\\y})+\\frac{N}{2}\\ln r$ \\\\[0.5ex] \\hline\n\n$\\G_{(\\cdot\\I)}$ & $\\frac{\\tr(\\Sigma_{\\y})}{N} \\I$ & $\\frac{N}{2}\\ln(2\\pi e/N)+\\frac{N}{2}\\ln (\\tr \\Sigma_\\y)$ \\\\[0.5ex] \\hline\n\n$\\G_{\\mathrm{diag}}$ & $\\mathrm{diag}(\\Sigma_{\\y})$ & $\\frac{N}{2}\\ln(2\\pi e)+\\frac{1}{2}\\ln(\\det(\\mathrm{diag}(\\Sigma_\\y)))$ \\\\[0.5ex]\n\n\\hline\n\n%$\\G_{\\det=A}$ & $(A/\\det \\Sigma_{\\mu})^{1/N}\n\n%\\Sigma_{\\mu}$ & $\\frac{N}{2} \\ln(2\\pi)+\\frac{N}{2}(\\det \\Sigma_{\\mu}/A)^{1/N}+\\frac{1}{2}\\ln(A)$ \\\\[0.5ex] \\hline\n\n$\\G$ & $\\Sigma_{\\y}$ & $\\frac{N}{2}\\ln(2\\pi e)+\\frac{1}{2}\\ln \\det(\\Sigma_{\\y})$ \\\\[0.5ex] \\hline \\hline\n\n\\end{tabular}\n\n\\caption{Table of cross-entropy formulas with respect to Gaussian subfamilies.}\n\n\\label{tab1:cec}\n\n\\end{table}\n\n\n\\section{Cluster formulas}\nAssume we we have a cluster $A$ with parameters $\\l,\\m,\\Sigma$ and we add to this cluster point $y$ we will get a new cluter $A_{+y}$ with paramter given by formulas:\n$$\n\n\\begin{array}{rcl}\n\n\\l_{+y} & = & l+1, \\\\[1ex]\n\n\\m_{+y} & = & \\frac{l\\m+y}{l+1}, \\\\[1ex]\n\n\\Sigma_{+y} & = & \\frac{l}{l+1}[\\Sigma+\\frac{1}{l+1}(\\m-y)(\\m-y)^T].\n\n\\end{array}\n\n$$\n\nLet's assume we'll substract point $ y $ from cluster $A$ our new cluster will have parameters given by formula :\n\n$$\n\n\\begin{array}{rcl}\n\nl_{-y} & = & l-1, \\\\[1ex]\n\n\\m_{-y} & = & \\frac{l}{l-1}\\m-\\frac{1}{l-1} y, \\\\[1ex]\n\n\\Sigma_{-y} & = & \\frac{l}{l-1}[\\Sigma-\\frac{1}{l-1}(\\m-y)(\\m-y)^T], .\n\n\\end{array}\n\n$$\n\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "205887c2a5b97327d45545738ec1555da8fa96ec", "size": 2437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/cec/papers/formulas.tex", "max_stars_repo_name": "gmum/gmum.r", "max_stars_repo_head_hexsha": "fdf76abffb803cfffca7a33cbb319e06cfcf73d3", "max_stars_repo_licenses": ["Xnet", "X11"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2015-05-04T08:36:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-29T09:22:09.000Z", "max_issues_repo_path": "doc/cec/papers/formulas.tex", "max_issues_repo_name": "gmum/gmum.r", "max_issues_repo_head_hexsha": "fdf76abffb803cfffca7a33cbb319e06cfcf73d3", "max_issues_repo_licenses": ["Xnet", "X11"], "max_issues_count": 112, "max_issues_repo_issues_event_min_datetime": 
"2015-04-30T15:28:53.000Z", "max_issues_repo_issues_event_max_datetime": "2017-11-22T15:48:27.000Z", "max_forks_repo_path": "doc/cec/papers/formulas.tex", "max_forks_repo_name": "gmum/gmum.r", "max_forks_repo_head_hexsha": "fdf76abffb803cfffca7a33cbb319e06cfcf73d3", "max_forks_repo_licenses": ["Xnet", "X11"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2015-05-10T06:18:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-06T02:24:06.000Z", "avg_line_length": 23.2095238095, "max_line_length": 166, "alphanum_fraction": 0.5806319245, "num_tokens": 1058, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859598, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5671909127976943}}
{"text": "\\documentclass[11pt]{etk-article}\n\\usepackage{pstool} \n\\usepackage{etk-bib}\n\\pdfmetadata{}{}{}{}\n\\externaldocument[FD:]{operator_discretization_finite_differences}\n\n\\begin{document}\n\\title{Solving HJBE with Finite Differences}\n\\author{Jesse Perla\\\\UBC}\n\\date{\\today}\n\\maketitle\n Here, we expand on details for how to discretize HJBE with a control of the drift.\\footnote{See \\url{operator_discretization_finite_differences.pdf} for more details on the discretization of a linear diffusion operator with finite-differences, and general notation (with equation numbers in that document prefaced by \\textit{FD}). Thanks to Sari Wang for superb Research Assistance.}  In particular, this set of notes will focus on solving the neoclassical growth model first with a deterministic and then a stochastic TFP.\n \n \\section{Neoclassical Growth}\n This solves the simple deterministic neoclassical growth model.\n \\subsection{Value Function with Capital Reversibility}\n \n Take a standard neoclassical growth model with capital $k$, consumption $c$, production $f(k)$, utility $u(c)$, depreciation rate $\\delta$, and discount rate $\\rho$.\\footnote{\n This builds on \\url{http://www.princeton.edu/~moll/HACTproject/HACT_Additional_Codes.pdf} Section 2.2, with code in \\url{http://www.princeton.edu/~moll/HACTproject/HJB_NGM_implicit.m}}\nThe law of motion for capital in this setup is,\n\\begin{align}\n\t\\D[t] k(t) &= f(k) - \\delta k - c\\label{eq:lom-capital}\\\\\n\t\\intertext{With this, the standard HJBE for the value of capital $k$ is,}\n\t\\rho v(k) &= \\max_{c}\\set{u(c) + \\left(f(k) - \\delta k - c\\right)v'(k)}\\label{eq:HJBE-neoclassical-growth}\n\t\\intertext{Assume an interior $c$ and envelope conditions, then taking the FOC of the choice is,}\n\tu'(c) &= v'(k)\n\t\\intertext{Assume a functional form $u(c) = \\frac{c^{1-\\gamma}}{1-\\gamma}$ for the utility and $f(k)=Ak^\\alpha$ for production. Then the FOC can be inverted such that}\n\tc &= \\left(v'(k) \\right)^{-\\frac{1}{\\gamma}}\\label{eq:c-neoclassical-growth}\n\t\\intertext{And,}\n\tu(c) &= \\frac{\\left(v'(k)\\right)^{-\\frac{1-\\gamma}{\\gamma}}}{1-\\gamma}\\label{eq:u-c-neoclassical-growth}\n\\intertext{If \\cref{eq:c-neoclassical-growth,eq:u-c-neoclassical-growth} were substituted back into \\cref{eq:HJBE-neoclassical-growth}, we would have a nonlinear ODE in just $k$.  Define the following, which will be important in the finite difference scheme}\n\t\\mu(k) &\\equiv f(k) - \\delta k - c\\label{eq:mu-neoclassical-growth}\n\t\\intertext{Then with \\cref{eq:c-neoclassical-growth} this can be defined as a function of the $v(\\cdot)$ function,}\n\t\\mu(k;v) &\\equiv f(k) - \\delta k - \\left(v'(k) \\right)^{-\\frac{1}{\\gamma}}\\label{eq:mu-v-neoclassical-growth}\n\t\\intertext{Then with \\cref{eq:HJBE-neoclassical-growth,eq:mu-v-neoclassical-growth,eq:u-c-neoclassical-growth}}\n\t\\rho v(k) &= \\frac{\\left(v'(k)\\right)^{-\\frac{1-\\gamma}{\\gamma}}}{1-\\gamma} + \\mu(k;v) v'(k)\\label{eq:HJBE-neoclassical-growth-mu}\n\\end{align}\nThe HJBE \\cref{eq:HJBE-neoclassical-growth-mu} is a nonlinear ODE in $v(k)$.\n\n\\paragraph{Steady State}  The goal is for the finite-difference scheme to find the steady-state on its own.  
However, in this example we have an equation to verify the solution:\n\\begin{align}\nk^* &= \\left(\\frac{\\alpha A}{\\rho + \\delta}\\right)^{\\frac{1}{1-\\alpha}}\\label{eq:steady-state-k}\\\\\nc^* &= A \\left(k^*\\right)^{\\alpha} - \\delta k^*\\label{eq:steady-state-c}\n\\end{align}\n\\subsection{Finite Differences and Discretization}\nIn order to solve this problem, we will need to use the appropriate ``upwind'' direction for the finite differences given the sign of the drift in \\cref{eq:mu-v-neoclassical-growth}.\n\\paragraph{Two basic approaches:} \n\\begin{itemize}\n\t\\item Fix the nonlinear $v'(k)$ function so that the operator becomes linear, solve it as a sparse linear system, and then iterate on the $v(k)$ solution.\n\t\\item Solve it directly as a nonlinear system of equations.\n\\end{itemize}\n\n\\paragraph{Setup}\n\\begin{itemize}\n\\item Define a uniform grid with $I$ discrete points $\\set{k_i}_{i=1}^I$, for some small $k_1 \\in (0, k^*)$ and large $k_I > k^*$, with distance between grid points $\\Delta \\equiv k_i - k_{i-1}$ for all $i$. After discretizing, we will denote the grid with the variable name, i.e. $k \\equiv \\set{k_i}_{i=1}^I$.\nFurther define the notations $\\underline{k} \\equiv k_1$, $\\overline{k} \\equiv k_I$ and $v_i \\equiv v(k_i)$ for simplicity. \n\\item When we discretize a function, use the function name without arguments to denote the vector, i.e. $v(k)$ discretized on a grid $\\set{k_i}_{i=1}^{I}$ is $v \\equiv \\set{v(k_i)}_{i=1}^I \\in \\R^I$.\n\\item When referring to a variable $\\mu$, define the notation $\\mu^{-} \\equiv \\min\\set{\\mu,0}$ and $\\mu^{+} \\equiv \\max\\set{\\mu,0}$. This can apply to vectors as well. For example, $\\mu_i^{-} = \\mu_i$ if $\\mu_i < 0$ and $0$ if $\\mu_i > 0$, and $\\mu^{-} \\equiv \\set{\\mu^{-}_i}_{i=1}^{I}$.\n\n\\item To discretize the derivative at $k_i$, consider both forwards and backwards differences,\n\\begin{align}\n\tv'_F(k_i) &\\approx \\frac{v_{i+1} - v_i}{\\Delta}\\label{eq:forward-diff}\\\\\n\tv'_B(k_i) &\\approx \\frac{v_i - v_{i-1}}{\\Delta}\\label{eq:backward-diff}\n\t\\intertext{subject to the state constraints} \n\t\tv'_{F}(\\overline{k}) &= (f(\\overline{k}) - \\delta \\overline{k})^{-\\gamma}\\label{eq:forward-constraint}\\\\\n\t\tv'_{B}(\\underline{k}) &= (f(\\underline{k}) - \\delta \\underline{k})^{-\\gamma}\\label{eq:backward-constraint}\n\t\\intertext{Our finite difference approximation to the HJB equation \\cref{eq:HJBE-neoclassical-growth-mu} is then}\n\t\\rho v(k_i) &= \\frac{\\left(v'(k_i)\\right)^{-\\frac{1-\\gamma}{\\gamma}}}{1-\\gamma} + \\mu(k_i;v) v'(k_i)\n\\end{align}\n\\end{itemize}\n Following the ``upwind scheme'', we will use the forwards approximation wherever the drift is positive (i.e.\\ below the steady state level of capital $k^*$), and the backwards approximation wherever the drift is negative (above $k^*$). \n\n\\subsection{Iterating on a Linear System of Equations}\nThis section solves the problem through a series of iterations on a linear system of equations.\n\\paragraph{Iterative Method} Pick an $h$, which is used in the iterative process.\\footnote{\\textbf{TODO:} I believe this is the Howard algorithm.  In effect, I think that this is like using an explicit time step in a PDE, where we are forward iterating fixing the policy?  Because of this, there is a high likelihood that it is not unconditionally stable.}  We first make an initial guess $v^0 = (v_1^0, \\dots, v_I^0)$. 
Then for each consecutive iteration $n = 0, 1, 2, \\dots$, update $v^n$ using the following equation\n\\begin{align}\n\\frac{v_i^{n+1}-v_i^{n}}{h} + \\rho v_i^{n+1} = u(c_i^n) + (v_i^{n+1})^{'}  \\underbrace{\\left[f(k_i) - \\delta k_i - c_i^n\\right]}_{\\equiv \\mu_i}\\label{eq:fd-approx-capital}\n\\end{align}\nUsing both the forwards and backwards difference approximations \\cref{eq:forward-diff} and \\cref{eq:backward-diff}, find consumption levels $c_F^n$ and $c_B^n$ as given by \\cref{eq:c-neoclassical-growth} and compute savings,\n\\begin{align}\n\\mu^n_{i,F} = f(k_i) - \\delta k_i - \\left({v_F^n}'(k_i) \\right)^{-\\frac{1}{\\gamma}}\\\\\n\\mu^n_{i,B} = f(k_i) - \\delta k_i - \\left({v_B^n}'(k_i) \\right)^{-\\frac{1}{\\gamma}}\n\\end{align}\n\nDepending upon the sign of the drift, a choice is made between using forward or backward differences. Let $\\bold{1}_{\\{ \\cdot \\}}$ be an indicator function\\footnote{For now, assume that if the case $\\mu_{F}>0$ and $\\mu_{B}<0$ should arise, take $\\bold{1}_{\\{ \\mu_{i,F} > 0\\}} $ to be 1 and $\\bold{1}_{ \\{\\mu_{i,B} < 0\\}}\\ $ to be 0.} and $\\bar{v_i}' $ be the derivative at the steady state, given in this example by $\\bar{v_i}' = (f(k_i)-\\delta k_i)^{-\\gamma}$. Then the approximation of the derivative is \n\\begin{align}\n{v_i^n}'={v_{i,F}^n}'\\bold{1}_{\\{ \\mu_{i,F} > 0\\}}+{v_{i,B}^n}'\\bold{1}_{\\{ \\mu_{i,B}<0 \\}}+{\\bar{v}_i^n}'\\bold{1}_{\\{ \\mu_{i,F}<0<\\mu_{i,B} \\}}\\label{eq:dv-upwind}\n\\end{align}\n\nDefine the vectors $X, Y, Z \\in \\R^{I} $ such that \n\\begin{align}\n\tX &= -\\frac {\\mu^{-} _B}{\\Delta}\\label{eq:X-delta} \\\\\n\tY &= -\\frac {\\mu^{+} _F}{\\Delta} + \\frac {\\mu^{-} _B}{\\Delta}\\label{eq:Y-delta} \\\\\n\tZ &= \\frac {\\mu^{+} _F}{\\Delta}\\label{eq:Z-delta}\n\\end{align}\n%\n%%For algebraic simplicity later on, multiply every term by  $\\Delta$ and obtain \n%\\begin{align}\n%\tX &= -({\\mu^{n} _B})^{-}\\label{eq:X} \\\\\n%\tY &= -({\\mu^{n} _F})^{+} + ({\\mu^{n} _B})^{-}\\label{eq:Y} \\\\\n%\tZ &= ({\\mu^{n} _F})^{+}\\label{eq:Z}\n%\\end{align}\n\nWith these, construct the sparse matrix $A^n$\n\\begin{align}\nA^n &\\equiv \\begin{bmatrix}\nY_1 & Z_1 & 0 & \\cdots & \\cdots & \\cdots & 0 \\\\\nX_2 & Y_2 & Z_2 & 0 & \\ddots& & \\vdots \\\\\n0 & \\ddots & \\ddots & \\ddots & \\ddots &  & \\vdots \\\\\n\\vdots & &\\ddots & \\ddots & \\ddots & \\ddots  & \\vdots \\\\\n\\vdots & & & \\ddots & X_{I-1} & Y_{I-1}  & Z_{I-1} \\\\\n0 & \\cdots & \\cdots & \\cdots & 0 & X_I & Y_I\\\\\n\\end{bmatrix}\\in\\R^{I\\times I}\\label{eq:A}\n\\end{align}\n\nBy substituting the approximation ${v^{n}}'$ found in \\cref{eq:dv-upwind} into equation \\cref{eq:c-neoclassical-growth}, define the utility vector\n\\begin{align}\nu(c^n) &\\equiv u\\left(\\left({v^n}'\\right)^{-\\frac{1}{\\gamma}}\\right)\\\\\n\\intertext{We can then write this as a system of equations in matrix form and solve for $v^{n+1}$}\nB^n &\\equiv \\left(\\rho + \\frac{1}{h}\\right)I - A^n\\\\\nb^n &\\equiv u(c^n) + \\frac{v^n}{h}\\\\\nB^{n}v^{n+1} &= {b^n}\n\\end{align}\n\n\n\nWhen $v^{n+1}$ is sufficiently close in value to $v^n$, the algorithm is complete. Otherwise, update the value of $v^n$ and repeat the previous steps for the next iteration. Keep in mind that the solution may not be unconditionally stable for an arbitrary $h$.\n
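\n\\paragraph{Numerical sketch} The following is a minimal, self-contained Python sketch of the iteration above. The parameter values, grid bounds, iteration cap, and tolerance are illustrative assumptions, not taken from the text; the MATLAB code referenced in the footnotes remains the reference implementation.\n\\begin{verbatim}\nimport numpy as np\nimport scipy.sparse as sp\nimport scipy.sparse.linalg as spla\n\n# Illustrative parameters (assumptions).\ngamma, alpha, A, delta, rho, h = 2.0, 0.3, 1.0, 0.05, 0.05, 1000.0\nI = 1000\nkstar = (alpha * A / (rho + delta)) ** (1.0 / (1.0 - alpha))\nk = np.linspace(0.1 * kstar, 2.0 * kstar, I)\nDelta = k[1] - k[0]\nf = A * k ** alpha\n\nv = f ** (1.0 - gamma) / ((1.0 - gamma) * rho)  # initial guess v^0\nfor n in range(10000):\n    dvF, dvB = np.empty(I), np.empty(I)\n    dvF[:-1] = (v[1:] - v[:-1]) / Delta\n    dvB[1:] = (v[1:] - v[:-1]) / Delta\n    dvF[-1] = (f[-1] - delta * k[-1]) ** (-gamma)  # state constraint\n    dvB[0] = (f[0] - delta * k[0]) ** (-gamma)     # state constraint\n    muF = f - delta * k - dvF ** (-1.0 / gamma)\n    muB = f - delta * k - dvB ** (-1.0 / gamma)\n    # Upwind derivative: forward if muF > 0, backward if muB < 0,\n    # steady-state value otherwise.\n    dv = np.where(muF > 0, dvF,\n                  np.where(muB < 0, dvB, (f - delta * k) ** (-gamma)))\n    c = dv ** (-1.0 / gamma)\n    u = c ** (1.0 - gamma) / (1.0 - gamma)\n    X = -np.minimum(muB, 0) / Delta\n    Z = np.maximum(muF, 0) / Delta\n    Y = -X - Z\n    An = sp.diags([X[1:], Y, Z[:-1]], [-1, 0, 1], format="csc")\n    Bn = (rho + 1.0 / h) * sp.identity(I, format="csc") - An\n    v_new = spla.spsolve(Bn, u + v / h)\n    if np.max(np.abs(v_new - v)) < 1e-8:\n        break\n    v = v_new\n\\end{verbatim}\n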
Returning to \\cref{eq:HJBE-neoclassical-growth-mu}, we have \n\\begin{align}\n\\rho v_i &= \\frac{\\left(v'_i\\right)^{-\\frac{1-\\gamma}{\\gamma}}}{1-\\gamma} + \\mu(k_i;v) v'_i\n\\end{align}\nfor every $k_i$, $i=1,2,\\dots,I$, with $v'_i$ chosen by the upwind rule in \\cref{eq:dv-upwind}.\n\n\\bibliography{etk-references}\n\n\\end{document}", "meta": {"hexsha": "682c14a7ee0b6e19feb71505fc4948a1c958d6a8", "size": 9925, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/HJBE_discretization.tex", "max_stars_repo_name": "econtoolkit/continuous_time_methods", "max_stars_repo_head_hexsha": "b72e63768b5c2e2052573457d332fff782b61edd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2017-11-29T00:55:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-30T20:35:22.000Z", "max_issues_repo_path": "docs/HJBE_discretization.tex", "max_issues_repo_name": "xiaohr/continuous_time_methods", "max_issues_repo_head_hexsha": "e968fff1f829113b8f53640b05d592e40224fb30", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 36, "max_issues_repo_issues_event_min_datetime": "2017-09-13T20:58:41.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-13T22:19:04.000Z", "max_forks_repo_path": "docs/HJBE_discretization.tex", "max_forks_repo_name": "xiaohr/continuous_time_methods", "max_forks_repo_head_hexsha": "e968fff1f829113b8f53640b05d592e40224fb30", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2018-01-26T22:05:44.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-10T03:06:21.000Z", "avg_line_length": 71.4028776978, "max_line_length": 524, "alphanum_fraction": 0.6852392947, "num_tokens": 3366, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5671909084057039}}
{"text": "\\subsubsection{Generalized NFW (GNFW) Model}\n\\label{sec:gnfw}\nIn the context of the pressure models of \\cite{nagai2007}, hereafter N07, the generalized NFW profile is given by\n\n\\begin{equation}\np(x_g) = {p_0\\over{(\\c500 x_g)^\\gamma\\left[1 + (c_{\\rm 500}x_g)^\\alpha\\right]^{(\\beta-\\gamma)/\\alpha}}}\n\\label{eq:gnfw}\n\\end{equation}\n\nwhere $c_{\\rm 500}$ is a dimensionless concentration parameter and $x_g\n= r/R_{\\rm 500}$.  \n\nIn \\climax, the generalized NFW model is implemented as a\ndimensionless shape function that can be used to fit cluster profiles\nin any units:\n\n\\begin{equation}\np(x) = {1\\over{x^\\gamma\\left[1 + x^\\alpha\\right]^{(\\beta-\\gamma)/\\alpha}}}\n\\end{equation}\n\nwhere $x = r/r_c$.  In the context of the GNFW pressure models,\ntherefore, the $r_c$ returned by \\climax\\ (actually $\\theta_c$), is\nequivalent to $r_c = \\R500/\\c500$.\n\nThe generalized NFW model can be used in \\climax\\ by\ninvoking \\code{addmodel} with \\code{type = gnfwmodel}.\n\nFrom self-similarity arguments (see \\S\\ref{sec:ss}), the pressure\nnormalization (see Eq~\\ref{eq:ytop}) of a cluster can be related to\nits mass via Eq~\\ref{eq:arnaud}.  In \\climax\\, \\code{m500} can\ntherefore also be used with any GNFW model as the primary variable,\ngiven a cosmology.\n\n\\subsubsection{Nagai07 GNFW Model}\n\nN07 find that a good description of high-$T_X$ \\chandra\\\nclusters can be fit with a model with $p_0 = 3.3$, $c_{\\rm 500} = 1.8$\nand $(\\alpha,\\beta,\\gamma) = (1.3, 4.3, 0.7)$.\n\nThis specialization of the GNFW model can be used in \\climax\\ by\ninvoking \\code{addmodel} with \\code{type = nagai07model}.\n\n\\subsubsection{Arnaud GNFW Model}\n\n\\cite{arnaud2010}, hereafter A10, determine that $p_0 =\n8.403\\,h^{-3/2}_{70}$, $c_{\\rm 500} = 1.17$ and $(\\alpha,\\beta,\\gamma)\n= (1.0510, 5.4905, 0.3081)$ yield the best fit to REXCESS clusters.\n\nThis specialization of the GNFW model can be used in \\climax\\ by\ninvoking \\code{addmodel} with \\code{type = arnaudmodel}.\n\nA10 also determine the normalization of the pressure model fits to be\n\n\\begin{eqnarray}\nP(r) &=&\n1.65\\times10^{-3}h(z)^{8/3}\\left[{\\M500\\over{3\\times10^{14}h^{-1}_{70}\\Msolar}}\\right]^{2/3+\\alpha_P+\\alpha^\\prime_P(x)}\\\\\\nonumber\n&\\times& p_{\\rm A}(x_g) \\,h^2_{70}\\,{\\rm keV\\,cm^{-3}},\n\\end{eqnarray}\n\nwhere $p_{\\rm A}(x_g)$ is the GNFW profile with the Arnaud et al fit\nparameters given above, $\\alpha_P = 0.12$, and\n\n\\begin{equation}\n\\alpha^\\prime_P(x_g) = 0.1 - (\\alpha_P + 0.1){{(x_g/0.5)^3}\\over{1 + (x_g/0.5)^3}}\n\\end{equation}\n\nThe small second-order $x_g$-dependent term represents a departure\nfrom self-similarity, and also a significant increase in computational\noverhead, and it is presently ignored in \\climax.\n\n", "meta": {"hexsha": "79a7da89a54ce1982a9c748d7bb636cf136b2051", "size": 2649, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "help/gnfwmodel.tex", "max_stars_repo_name": "erikleitch/climax", "max_stars_repo_head_hexsha": "66ce64b0ab9f3a3722d3177cc5215ccf59369e88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-01T05:15:31.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-01T05:15:31.000Z", "max_issues_repo_path": "docs/gnfwmodel.tex", "max_issues_repo_name": "erikleitch/climax", "max_issues_repo_head_hexsha": "66ce64b0ab9f3a3722d3177cc5215ccf59369e88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/gnfwmodel.tex", "max_forks_repo_name": "erikleitch/climax", "max_forks_repo_head_hexsha": "66ce64b0ab9f3a3722d3177cc5215ccf59369e88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-05-02T19:35:55.000Z", "max_forks_repo_forks_event_max_datetime": "2018-03-07T00:54:51.000Z", "avg_line_length": 37.3098591549, "max_line_length": 131, "alphanum_fraction": 0.7063042658, "num_tokens": 936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677660619633, "lm_q2_score": 0.6688802735722128, "lm_q1q2_score": 0.5671889113439442}}
{"text": "\\section{Development of data processing methods}\n\\label{data_processing}\n\nHigh-quality spectra are needed for advanced multivariate statistic methods\nused for the measurement analysis.\nIt is essential to have sufficiently sensitive apparatus which can provide a\nreliable signal.\nHowever, raw Raman spectra, which come from the measurement, usually contain\nmany components, and only one of them belongs to the analyte signal.\nThe following sections describe the process of transforming the raw spectrum\nto the spectrum ready for the advanced result analysis, analysis\nof spectra of multicomponent system, minimization algorithm used for the\ndata analysis, and band intensity estimation overview.\n\nAmong the major artifacts observed in Raman spectra is signal caused by cosmic\nrays, which is characteristic by sharp lines usually impacting only a few\npixels of the CCD detector.\n\\emph{Spycor} program\n\\parencite{Spycor2018},\nwas created as part of this thesis and is based on a comparison of consecutive\nspectra where it counts on the fact that during macroscopic Raman measurement,\nthe spectra change only slowly with time and that the consecutive spectra are\nsimilar.\n\nRaman spectra of liquid samples usually consist of many components, including\nthe signal of the analyte under investigation and the spectrum of the solvent.\nIn the case of absorbing analyte, the spectrum\nis acquired from the near proximity of the sample cell wall, and so it\nusually contains a signal of quartz.\nThe resulting spectrum can also contain unspecific background from different\nsources like fluorescence from residuals from chemical synthesis and elastic\nscattering of light from different light sources in the laboratory.\n\nTherefore the background subtraction procedure took several steps.\nEach spectrum was acquired as a series of frames, and all frames were stored\nfor further analysis.\nThe spectrum of solvent, buffer, and sample cell together with the intensity\nnormalization to the subtracted buffer signal was performed on each frame in\nthe first step using the \\emph{Bgcor} program\n\\parencite{Bgcor2017}\n(written in \\cite{Matlab}),\ndeveloped as part of this thesis. 
The unspecific background was then removed\nfrom each spectrum in the second step, using background subtraction on frames\ndecomposed into spectral components by PCA\n\\parencite{Palacky2011}.\nThe third step consisted of extrapolation to zero time, where PCA decomposed\nthe frames into spectral components, and the spectrum at zero time\nwas reconstructed from the significant ones.\nThe last, optional step, performed only when a series of spectra was\ntaken (for example, a temperature-dependent measurement), was to subtract the\nbackground once more using PCA in the same way as in step two, but\napplied to the whole series of the extrapolated spectra from the third step.\nThis section focuses mainly on the first step -- the subtraction of the\nsolvent, buffer, and sample cell, together with the intensity normalization.\n\nFurther, in this thesis we adopted \\emph{principal component analysis} (PCA,\n\\cite{%\n\tWold1987,%\n\tMalinowski2002%\n}),\nwhich reduces the measured spectra to several spectral profiles (loadings) and\nscores indicating each profile's portion in the measured spectra.\nThe scores can then be fitted by a function based on an underlying chemical\nmodel and, for example, thermodynamic parameters for structural transitions\ncan be estimated in temperature-dependent measurements\n\\parencite{Nemecek2013}.\nHowever, the fit is usually not linear in the system parameters.\nNonlinear minimization is usually much more expensive because it utilizes\niterative methods of searching for the minimum.\nThese methods are also susceptible to finding only local minima.\nOn the other hand, linear least squares regression is \u201cjust\u201d the solution of a set\nof normal equations\n\\parencite[p.~671]{NumericalRecipes}.\nThis means that the iterative nonlinear minimization algorithm can be applied\nto the $b_{k,m}$ values alone, while the $a_{k,l_k}$ values are estimated by linear least\nsquares for the given $c_k(t_i)$, which are determined by the values of\n$b_{k,m}$.\nWe made use of this fact and divided the fit into two parts: the linear\nparameters were estimated by a linear fit in each step of the iterative nonlinear\nfit, which minimized only the nonlinear parameters.\nThis approach significantly reduced the dimension of the nonlinear fit and\nimproved its numerical stability.\n\nIn this thesis, many data treatment approaches needed an efficient nonlinear\nminimization algorithm (background subtraction, estimation of band intensities,\nand estimation of chemical model parameters from spectral series).\nThe Levenberg-Marquardt method\n\\parencite{Marquardt1963}\nwas used as the basis for the minimization algorithm in this thesis, and\nslight modifications were applied to improve its performance.\n\n\\label{band_intensities}\n\nOne important piece of spectroscopic information is the integral intensity of\nparticular Raman bands in the spectrum.\nThe band's shape is usually modeled by Gaussian or Lorentzian curves or as\ntheir combination.\nThe combination of the Lorentzian $\\func{L}$ and Gaussian $\\func{G}$ curve can\nbe expressed as\n\\begin{equation}\n\t\\func{S}(\\wn; I_\\text{m}, \\mu, \\sigma) =\n\t\tc_\\text{L} \\cdot \\func{L}(\\wn; I_\\text{m}, \\mu, \\sigma)\n\t\t+ (1 - c_\\text{L}) \\cdot \\func{G}(\\wn; I_\\text{m}, \\mu, \\sigma),\n\t\\label{\\eqnlabel{band_intensities:single_shape}}\n\\end{equation}\nwhere $c_\\text{L}$ is the Lorentzian curve fraction coefficient, $I_\\text{m}$ is the\nheight, $\\mu$ is the band position, $\\sigma$ is the Gaussian root mean square\nwidth, and $\\wn$ is the wavenumber.\n
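\nA minimal numerical sketch of\n\\eqnref{band_intensities:single_shape}\n(assuming Python with NumPy; the exact width and normalization conventions of\n$\\func{L}$ and $\\func{G}$ in this sketch are illustrative assumptions, not\nnecessarily those used in the thesis):\n\\begin{verbatim}\nimport numpy as np\n\ndef band_shape(wn, height, mu, sigma, c_L):\n    # Lorentzian and Gaussian of common height and width parameter sigma\n    lorentz = height / (1.0 + ((wn - mu) / sigma) ** 2)\n    gauss = height * np.exp(-0.5 * ((wn - mu) / sigma) ** 2)\n    return c_L * lorentz + (1.0 - c_L) * gauss\n\nwn = np.linspace(900.0, 1100.0, 2001)      # wavenumber axis (1/cm)\ns = band_shape(wn, height=1.0, mu=1000.0, sigma=5.0, c_L=0.3)\nintegral = np.trapz(s, wn)                 # integral band intensity\n\\end{verbatim}\n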
Raman bands in a typical Raman spectrum of complex samples are overlapped.\nWe solved this problem by modeling the band as a combination of several\nband-shape functions from\n\\eqnref{band_intensities:single_shape}\n\\begin{equation}\n\t\\func{S}(\\wn; I_{\\text{m},1..n}, \\mu_{1..n}, \\sigma_{1..n}) =\n\t\t\\sum_{i = 1}^n \\func{S}_i(\\wn; I_{\\text{m},i}, \\mu_i, \\sigma_i),\n\t\\label{\\eqnlabel{band_intensities:shape}}\n\\end{equation}\nwhere $n$ is the number of overlapping bands.\n", "meta": {"hexsha": "7c4ac5231a8934e17e66b4ee17955de71ea91646", "size": 6008, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/results_and_discussion/data_processing.tex", "max_stars_repo_name": "lumik/phd_thesis_abstract", "max_stars_repo_head_hexsha": "065f71fb85528e45d647dd926a8fe3559510aed3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/results_and_discussion/data_processing.tex", "max_issues_repo_name": "lumik/phd_thesis_abstract", "max_issues_repo_head_hexsha": "065f71fb85528e45d647dd926a8fe3559510aed3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/results_and_discussion/data_processing.tex", "max_forks_repo_name": "lumik/phd_thesis_abstract", "max_forks_repo_head_hexsha": "065f71fb85528e45d647dd926a8fe3559510aed3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.0666666667, "max_line_length": 79, "alphanum_fraction": 0.8014314248, "num_tokens": 1388, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677622198947, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5671888975798646}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\\usepackage{parskip}\n\\usepackage{lscape}\n\\usepackage{multicol}\n\n\n\\begin{document}\n\\begin{landscape}\n% lose the page number on this page\n\\thispagestyle{empty}\n% start a 2 column page\n\\begin{multicols}{2}\n\n\\section*{(Batch) Gradient Descent}\n\\begin{equation*}\n    J_{train}(\\theta) = \\frac{1}{2m} \\sum_{i=1}^m(h_{\\theta}(x^{(i)} - y^{(i)}))^2\n\\end{equation*}\n\nPerform gradient descent by updating $\\theta$ using the derivative of the cost function.\n\nRepeat for every  $j=0,...,n$ \\{\n\\begin{equation*}\n    \\theta_j := \\theta_j - \\alpha\\frac{1}{m}\\sum_{i=1}^m(h_{\\theta}(x^{(i)} - y^{(i)}))x_j^{(i)}\n\\end{equation*}\n\\}\n\nIn large data sets, this summation would have to occur over every training example $m$ at every iteration of the for loop.\n\n% This code block below starts the new column\n\\vfill\n\\columnbreak\n% This code block above starts the new column\n\n\\section*{Stochastic Gradient Descent}\nIn stochastic gradient descent, the cost is rewritten as the cost of one training example:\n\\begin{equation*}\n    cost(\\theta, (x^{(i)}, y^{(i)})) = \\frac{1}{2}(h_\\theta(x^{(i)})-y^{(i)})^2\n\\end{equation*}\nAnd thus the cost function is:\n\\begin{equation*}\n    J_{train}(\\theta) = \\frac{1}{m}\\sum_{i=1}^m cost(\\theta, (x^{(i)}, y^{(i)}))\n\\end{equation*}\n\n\\begin{enumerate}\n    \\item Randomly shuffle the dataset (randomly reorder the training examples\n    \\item Repeat \\{\n    \\begin{description}\n    \\item for $i=1,...,m$ \\{\n    \\item$\\theta_j := \\theta_j - \\alpha h_\\theta(x^{(i)}- y^{(i)})x^{(i)}_j$ \\quad \\textbf{note:} this is $\\frac{\\partial}{\\partial\\theta_j}cost$\n    \n    (for every $j=0,...,n$)\n    \\item\\}\n    \\end{description}\n    \\item[]\\}\n\\end{enumerate}\nIn essence, stochastic gradient descent is updating the parameters $\\Theta$ (the matrix of $\\theta_{0,...,j}$) by stepping through each individual training example rather than summing across all training examples. The outer repeat loop is meant to symbolize that this process may have to be repeated over several iterations (e.g., 1-10 times) until pseudo-convergence.\n\\end{multicols}\n\\end{landscape}\n\\end{document}\n", "meta": {"hexsha": "3b8f2a07d91153be48bb71585c3b56f3c3021eb7", "size": 2190, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex", "max_stars_repo_name": "mazin-abdelghany/coursera-machine-learning", "max_stars_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex", "max_issues_repo_name": "mazin-abdelghany/coursera-machine-learning", "max_issues_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex files/Batch v. 
\\end{multicols}\n\\end{landscape}\n\\end{document}\n", "meta": {"hexsha": "3b8f2a07d91153be48bb71585c3b56f3c3021eb7", "size": 2190, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex", "max_stars_repo_name": "mazin-abdelghany/coursera-machine-learning", "max_stars_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex", "max_issues_repo_name": "mazin-abdelghany/coursera-machine-learning", "max_issues_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex", "max_forks_repo_name": "mazin-abdelghany/coursera-machine-learning", "max_forks_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.21875, "max_line_length": 368, "alphanum_fraction": 0.6863013699, "num_tokens": 702, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.8479677660619633, "lm_q1q2_score": 0.5671888889555525}}
{"text": "\\chapter{Database}\nDatabase questions\n\\newline\n\n\\section{Combine Two Tables} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:combine-two-tables}\n\n\n\\subsubsection{\u63cf\u8ff0}\nSQL Schema\n\nTable: Person\n\\begin{Code}\n+-------------+---------+\n| Column Name | Type    |\n+-------------+---------+\n| PersonId    | int     |\n| FirstName   | varchar |\n| LastName    | varchar |\n+-------------+---------+\nPersonId is the primary key column for this table.\n\\end{Code}\n\nTable: Address\n\\begin{Code}\n+-------------+---------+\n| Column Name | Type    |\n+-------------+---------+\n| AddressId   | int     |\n| PersonId    | int     |\n| City        | varchar |\n| State       | varchar |\n+-------------+---------+\nAddressId is the primary key column for this table.\n\\end{Code}\n\nWrite a SQL query for a report that provides the following information for each person in the Person table, regardless if there is an address for each of those people:\n\n\\subsubsection{outer join}\n\\begin{Code}\n  // Note: Using where clause to filter the records will fail if there is\n  // no address information for a person because it will not display the name information.\nSELECT\n    FirstName, LastName, City, State\nFROM\n    Person LEFT JOIN Address\nON\n    Person.PersonID = Address.PersonID\n;\n\\end{Code}\n\n\\section{Second Highest Salary} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:second-highest-salary}\n\n\n\\subsubsection{\u63cf\u8ff0}\nSQL Schema\n\nTable: Employee\n\\begin{Code}\n+----+--------+\n| Id | Salary |\n+----+--------+\n| 1  | 100    |\n| 2  | 200    |\n| 3  | 300    |\n+----+--------+\n\\end{Code}\n\nFor example, given the above Employee table, the query should return 200 as the second highest salary. If there is no second highest salary, then the query should return null.\n\\begin{Code}\n+---------------------+\n| SecondHighestSalary |\n+---------------------+\n| 200                 |\n+---------------------+\n\\end{Code}\n\nWrite a SQL query to get the second highest salary from the Employee table.\n\n\\subsubsection{sub-query and LIMIT clause}\n\\begin{Code}\n  // Note: Use a sub-query to handle only one record in this table\nSELECT\n    (SELECT DISTINCT\n        Salary\n     FROM\n         Employee\n     ORDER BY Salary DESC\n     LIMIT 1 OFFSET 1) AS SecondHighestSalary\n;\n\\end{Code}\n\n\\subsubsection{IFNULL and LIMIT}\n\\begin{Code}\n  // Note: Use a sub-query to handle only one record in this table\nSELECT\n    IFNULL(\n      (SELECT DISTINCT\n           Salary\n       FROM\n           Employee\n       ORDER BY Salary DESC\n       LIMIT 1 OFFSET 1),\n    NULL) AS SecondHighestSalary\n;\n\\end{Code}\n\n\\section{Nth Highest Salary} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:nth-highest-salary}\n\n\n\\subsubsection{\u63cf\u8ff0}\nSQL Schema\n\nTable: Employee\n\\begin{Code}\n+----+--------+\n| Id | Salary |\n+----+--------+\n| 1  | 100    |\n| 2  | 200    |\n| 3  | 300    |\n+----+--------+\n\\end{Code}\n\nFor example, given the above Employee table, the nth highest salary where n = 2 is 200. 
If there is no nth highest salary, then the query should return null.\n\\begin{Code}\n+------------------------+\n| getNthHighestSalary(2) |\n+------------------------+\n| 200                    |\n+------------------------+\n\\end{Code}\n\nWrite a SQL query to get the nth highest salary from the Employee table.\n\n\\subsubsection{IFNULL and LIMIT}\n\\begin{Code}\nCREATE FUNCTION getNthHighestSalary(N INT) RETURNS INT\nBEGIN\n\nSET N=N-1;\n  RETURN (\n      # Write your MySQL query statement below.\n      select\n          ifnull(\n              (select distinct\n                   salary\n               from\n                   employee\n               order by salary desc\n               limit N,1),\n          null)\n  );\nEND\n\\end{Code}\n\n\\section{Rank Scores} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:rank-scores}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Scores (Id int, Score DECIMAL(3,2))\nTruncate table Scores\ninsert into Scores (Id, Score) values ('1', '3.5')\ninsert into Scores (Id, Score) values ('2', '3.65')\ninsert into Scores (Id, Score) values ('3', '4.0')\ninsert into Scores (Id, Score) values ('4', '3.85')\ninsert into Scores (Id, Score) values ('5', '4.0')\ninsert into Scores (Id, Score) values ('6', '3.65')\n\\end{Code}\n\nWrite a SQL query to rank scores. If there is a tie between two scores, both should have the same ranking. Note that after a tie, the next ranking number should be the next consecutive integer value. In other words, there should be no \"holes\" between ranks.\n\nTable: Scores\n\\begin{Code}\n+----+-------+\n| Id | Score |\n+----+-------+\n| 1  | 3.50  |\n| 2  | 3.65  |\n| 3  | 4.00  |\n| 4  | 3.85  |\n| 5  | 4.00  |\n| 6  | 3.65  |\n+----+-------+\n\\end{Code}\n\nFor example, given the above Scores table, your query should generate the following report (ordered by highest score):\n\\begin{Code}\n+-------+---------+\n| score | Rank    |\n+-------+---------+\n| 4.00  | 1       |\n| 4.00  | 1       |\n| 3.85  | 2       |\n| 3.65  | 3       |\n| 3.65  | 3       |\n| 3.50  | 4       |\n+-------+---------+\n\\end{Code}\n\n\\textbf{Important Note}: For MySQL solutions, to escape reserved words used as column names, you can use backticks before and after the keyword. 
For example `Rank`.\n\n\\subsubsection{Dense\\_Rank}\n\\begin{Code}\nSELECT Score, dense_rank() OVER(ORDER BY Score DESC) AS `Rank` FROM Scores;\n\\end{Code}\n\n\\section{Consecutive Numbers} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:consecutive-numbers}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Logs (Id int, Num int)\nTruncate table Logs\ninsert into Logs (Id, Num) values ('1', '1')\ninsert into Logs (Id, Num) values ('2', '1')\ninsert into Logs (Id, Num) values ('3', '1')\ninsert into Logs (Id, Num) values ('4', '2')\ninsert into Logs (Id, Num) values ('5', '1')\ninsert into Logs (Id, Num) values ('6', '2')\ninsert into Logs (Id, Num) values ('7', '2')\n\\end{Code}\n\nWrite an SQL query to find all numbers that appear at least three times consecutively.\nReturn the result table in any order.\nThe query result format is in the following example:\n\nTable: Logs\n\\begin{Code}\n+-------------+---------+\n| Column Name | Type    |\n+-------------+---------+\n| id          | int     |\n| num         | int     |\n+-------------+---------+\nid is the primary key for this table.\n\\end{Code}\n\n\n\\begin{Code}\nLogs table:\n+----+-----+\n| Id | Num |\n+----+-----+\n| 1  | 1   |\n| 2  | 1   |\n| 3  | 1   |\n| 4  | 2   |\n| 5  | 1   |\n| 6  | 2   |\n| 7  | 2   |\n+----+-----+\n\nResult table:\n+-----------------+\n| ConsecutiveNums |\n+-----------------+\n| 1               |\n+-----------------+\n1 is the only number that appears at least three times consecutively.\n\\end{Code}\n\n\n\\subsubsection{multiple table}\n\\begin{Code}\nSELECT DISTINCT\n    l1.Num AS ConsecutiveNums\nFROM\n    Logs l1,\n    Logs l2,\n    Logs l3\nWHERE\n    l1.Id = l2.Id - 1\n    AND l2.Id = l3.Id - 1\n    AND l1.Num = l2.Num\n    AND l2.Num = l3.Num\n;\n\\end{Code}\n\n\\section{Employees Earning More Than Their Managers} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:employees-earning-more-than-their-managers}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Employee (Id int, Name varchar(255), Salary int, ManagerId int)\nTruncate table Employee\ninsert into Employee (Id, Name, Salary, ManagerId) values ('1', 'Joe', '70000', '3')\ninsert into Employee (Id, Name, Salary, ManagerId) values ('2', 'Henry', '80000', '4')\ninsert into Employee (Id, Name, Salary, ManagerId) values ('3', 'Sam', '60000', 'None')\ninsert into Employee (Id, Name, Salary, ManagerId) values ('4', 'Max', '90000', 'None')\n\\end{Code}\n\nThe Employee table holds all employees including their managers. Every employee has an Id, and there is also a column for the manager Id.\n\nTable: Employee\n\\begin{Code}\n+----+-------+--------+-----------+\n| Id | Name  | Salary | ManagerId |\n+----+-------+--------+-----------+\n| 1  | Joe   | 70000  | 3         |\n| 2  | Henry | 80000  | 4         |\n| 3  | Sam   | 60000  | NULL      |\n| 4  | Max   | 90000  | NULL      |\n+----+-------+--------+-----------+\n\\end{Code}\n\nGiven the Employee table, write a SQL query that finds out employees who earn more than their managers. 
For the above table, Joe is the only employee who earns more than his manager.\n\n\\begin{Code}\n+----------+\n| Employee |\n+----------+\n| Joe      |\n+----------+\n\\end{Code}\n\n\n\\subsubsection{multiple table}\n\\begin{Code}\nSELECT\n    a.Name AS 'Employee'\nFROM\n    Employee AS a,\n    Employee AS b\nWHERE\n    a.ManagerId = b.Id\n        AND a.Salary > b.Salary\n;\n\\end{Code}\n\n\\subsubsection{join}\n\\begin{Code}\nSELECT\n     a.NAME AS Employee\nFROM Employee AS a JOIN Employee AS b\n     ON a.ManagerId = b.Id\n     AND a.Salary > b.Salary\n;\n\\end{Code}\n\n\\section{Duplicate Emails} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:duplicate-emails}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Person (Id int, Email varchar(255))\nTruncate table Person\ninsert into Person (Id, Email) values ('1', 'a@b.com')\ninsert into Person (Id, Email) values ('2', 'c@d.com')\ninsert into Person (Id, Email) values ('3', 'a@b.com')\n\\end{Code}\n\nWrite a SQL query to find all duplicate emails in a table named Person.\n\nTable: Person\n\\begin{Code}\n+----+---------+\n| Id | Email   |\n+----+---------+\n| 1  | a@b.com |\n| 2  | c@d.com |\n| 3  | a@b.com |\n+----+---------+\n\\end{Code}\n\nFor example, your query should return the following for the above table:\n\n\\begin{Code}\n+---------+\n| Email   |\n+---------+\n| a@b.com |\n+---------+\n\\end{Code}\n\n\n\\subsubsection{Using GROUP BY and a temporary table}\n\\begin{Code}\nselect Email from\n(\n  select Email, count(Email) as num\n  from Person\n  group by Email\n) as statistic\nwhere num > 1\n;\n\\end{Code}\n\n\\subsubsection{Using GROUP BY and HAVING condition}\n\\begin{Code}\nselect Email\nfrom Person\ngroup by Email\nhaving count(Email) > 1;\n\\end{Code}\n\n\\section{Customers Who Never Order} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:customers-who-never-order}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Customers (Id int, Name varchar(255))\nCreate table If Not Exists Orders (Id int, CustomerId int)\nTruncate table Customers\ninsert into Customers (Id, Name) values ('1', 'Joe')\ninsert into Customers (Id, Name) values ('2', 'Henry')\ninsert into Customers (Id, Name) values ('3', 'Sam')\ninsert into Customers (Id, Name) values ('4', 'Max')\nTruncate table Orders\ninsert into Orders (Id, CustomerId) values ('1', '3')\ninsert into Orders (Id, CustomerId) values ('2', '1')\n\\end{Code}\n\nSuppose that a website contains two tables, the Customers table and the Orders table. 
Write a SQL query to find all customers who never order anything.\n\nTable: Customers.\n\\begin{Code}\n+----+-------+\n| Id | Name  |\n+----+-------+\n| 1  | Joe   |\n| 2  | Henry |\n| 3  | Sam   |\n| 4  | Max   |\n+----+-------+\n\\end{Code}\n\nTable: Orders.\n\n\\begin{Code}\n+----+------------+\n| Id | CustomerId |\n+----+------------+\n| 1  | 3          |\n| 2  | 1          |\n+----+------------+\n\\end{Code}\n\nUsing the above tables as an example, return the following:\n\n\\begin{Code}\n+-----------+\n| Customers |\n+-----------+\n| Henry     |\n| Max       |\n+-----------+\n\\end{Code}\n\n\\subsubsection{Not In}\n\\begin{Code}\nselect customers.name as 'Customers'\nfrom customers\nwhere customers.id not in\n(\n    select customerid from orders\n);\n\\end{Code}\n\n\\subsubsection{Left Join}\n\\begin{Code}\nSELECT Name AS 'Customers'\nFROM Customers c\nLEFT JOIN Orders o\nON c.Id = o.CustomerId\nWHERE o.CustomerId IS NULL;\n\\end{Code}\n\n\\section{Department Highest Salary} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:department-highest-salary}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Employee (Id int, Name varchar(255), Salary int, DepartmentId int)\nCreate table If Not Exists Department (Id int, Name varchar(255))\nTruncate table Employee\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('1', 'Joe', '70000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('2', 'Jim', '90000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('3', 'Henry', '80000', '2')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('4', 'Sam', '60000', '2')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('5', 'Max', '90000', '1')\nTruncate table Department\ninsert into Department (Id, Name) values ('1', 'IT')\ninsert into Department (Id, Name) values ('2', 'Sales')\n\\end{Code}\n\nThe Employee table holds all employees. Every employee has an Id, a salary, and there is also a column for the department Id.\n\n\\begin{Code}\n+----+-------+--------+--------------+\n| Id | Name  | Salary | DepartmentId |\n+----+-------+--------+--------------+\n| 1  | Joe   | 70000  | 1            |\n| 2  | Jim   | 90000  | 1            |\n| 3  | Henry | 80000  | 2            |\n| 4  | Sam   | 60000  | 2            |\n| 5  | Max   | 90000  | 1            |\n+----+-------+--------+--------------+\n\\end{Code}\n\nThe Department table holds all departments of the company.\n\\begin{Code}\n+----+----------+\n| Id | Name     |\n+----+----------+\n| 1  | IT       |\n| 2  | Sales    |\n+----+----------+\n\\end{Code}\n\nWrite a SQL query to find employees who have the highest salary in each of the departments. 
For the above tables, your SQL query should return the following rows (order of rows does not matter).\n\\begin{Code}\n+------------+----------+--------+\n| Department | Employee | Salary |\n+------------+----------+--------+\n| IT         | Max      | 90000  |\n| IT         | Jim      | 90000  |\n| Sales      | Henry    | 80000  |\n+------------+----------+--------+\n\\end{Code}\n\n\\subsubsection{Join, In}\n\\begin{Code}\n# This example shows that IN can match multiple fields together\nSELECT\n    Department.name AS 'Department',\n    Employee.name AS 'Employee',\n    Salary\nFROM\n    Employee\n        JOIN\n    Department ON Employee.DepartmentId = Department.Id\nWHERE\n    (Employee.DepartmentId, Salary) IN\n    (   SELECT\n            DepartmentId, MAX(Salary)\n        FROM\n            Employee\n        GROUP BY DepartmentId\n    )\n;\n\\end{Code}\n\n\\section{Department Top Three Salaries} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:department-top-three-salaries}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Employee (Id int, Name varchar(255), Salary int, DepartmentId int)\nCreate table If Not Exists Department (Id int, Name varchar(255))\nTruncate table Employee\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('1', 'Joe', '85000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('2', 'Henry', '80000', '2')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('3', 'Sam', '60000', '2')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('4', 'Max', '90000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('5', 'Janet', '69000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('6', 'Randy', '85000', '1')\ninsert into Employee (Id, Name, Salary, DepartmentId) values ('7', 'Will', '70000', '1')\nTruncate table Department\ninsert into Department (Id, Name) values ('1', 'IT')\ninsert into Department (Id, Name) values ('2', 'Sales')\n\\end{Code}\n\nTable: Employee\n\n\\begin{Code}\n+--------------+---------+\n| Column Name  | Type    |\n+--------------+---------+\n| Id           | int     |\n| Name         | varchar |\n| Salary       | int     |\n| DepartmentId | int     |\n+--------------+---------+\nId is the primary key for this table.\nEach row contains the ID, name, salary, and department of one employee.\n\\end{Code}\n\nTable: Department\n\\begin{Code}\n+-------------+---------+\n| Column Name | Type    |\n+-------------+---------+\n| Id          | int     |\n| Name        | varchar |\n+-------------+---------+\nId is the primary key for this table.\nEach row contains the ID and the name of one department.\n\\end{Code}\n\nA company's executives are interested in seeing who earns the most money in each of the company's departments. 
A high earner in a department is an employee who has a salary in the top three unique salaries for that department.\n\nWrite an SQL query to find the employees who are high earners in each of the departments.\n\nReturn the result table in any order.\n\nThe query result format is in the following example:\n\\begin{Code}\nEmployee table:\n+----+-------+--------+--------------+\n| Id | Name  | Salary | DepartmentId |\n+----+-------+--------+--------------+\n| 1  | Joe   | 85000  | 1            |\n| 2  | Henry | 80000  | 2            |\n| 3  | Sam   | 60000  | 2            |\n| 4  | Max   | 90000  | 1            |\n| 5  | Janet | 69000  | 1            |\n| 6  | Randy | 85000  | 1            |\n| 7  | Will  | 70000  | 1            |\n+----+-------+--------+--------------+\n\nDepartment table:\n+----+-------+\n| Id | Name  |\n+----+-------+\n| 1  | IT    |\n| 2  | Sales |\n+----+-------+\n\nResult table:\n+------------+----------+--------+\n| Department | Employee | Salary |\n+------------+----------+--------+\n| IT         | Max      | 90000  |\n| IT         | Joe      | 85000  |\n| IT         | Randy    | 85000  |\n| IT         | Will     | 70000  |\n| Sales      | Henry    | 80000  |\n| Sales      | Sam      | 60000  |\n+------------+----------+--------+\n\nIn the IT department:\n- Max earns the highest unique salary\n- Both Randy and Joe earn the second-highest unique salary\n- Will earns the third-highest unique salary\n\nIn the Sales department:\n- Henry earns the highest salary\n- Sam earns the second-highest salary\n- There is no third-highest salary as there are only two employees\n\\end{Code}\n\n\\subsubsection{Join, sub-query}\n\\begin{Code}\nSELECT\n    d.Name AS 'Department', e1.Name AS 'Employee', e1.Salary\nFROM\n    Employee e1\n        JOIN\n    Department d ON e1.DepartmentId = d.Id\nWHERE\n    3 > (SELECT\n            COUNT(DISTINCT e2.Salary)\n        FROM\n            Employee e2\n        WHERE\n            e2.Salary > e1.Salary\n                AND e1.DepartmentId = e2.DepartmentId\n        )\n;\n\\end{Code}\n\n\\subsubsection{Join, dense\\_rank}\n\\begin{Code}\nSELECT d.Name  AS Department,\n       a.Name  AS Employee,\n       a.Salary\nFROM   (SELECT e.*,\n               DENSE_RANK()\n                 OVER (\n                   PARTITION BY DepartmentId\n                   ORDER BY Salary DESC) AS DeptPayRank\n        FROM   Employee e) a\n       JOIN Department d\n         ON a.DepartmentId = d.Id\nWHERE  DeptPayRank <= 3;\n\\end{Code}
\n\n\\section{Delete Duplicate Emails} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:delete-duplicate-emails}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nTruncate table Person\ninsert into Person (Id, Email) values ('1', 'john@example.com')\ninsert into Person (Id, Email) values ('2', 'bob@example.com')\ninsert into Person (Id, Email) values ('3', 'john@example.com')\n\\end{Code}\n\nWrite a SQL query to delete all duplicate email entries in a table named Person, keeping only unique emails based on their smallest Id.\n\n\\begin{Code}\n+----+------------------+\n| Id | Email            |\n+----+------------------+\n| 1  | john@example.com |\n| 2  | bob@example.com  |\n| 3  | john@example.com |\n+----+------------------+\nId is the primary key column for this table.\n\\end{Code}\n\nFor example, after running your query, the above Person table should have the following rows:\n\\begin{Code}\n+----+------------------+\n| Id | Email            |\n+----+------------------+\n| 1  | john@example.com |\n| 2  | bob@example.com  |\n+----+------------------+\n\\end{Code}\n\nNote:\n\nYour output is the whole Person table after executing your SQL. Use a DELETE statement.\n\n\\subsubsection{Delete, Where}\n\\begin{Code}\nDELETE p1 FROM Person p1,\n    Person p2\nWHERE\n    p1.Email = p2.Email AND p1.Id > p2.Id;\n\\end{Code}\n\n\\section{Rising Temperature} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:rising-temperature}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Weather (Id int, RecordDate date, Temperature int)\nTruncate table Weather\ninsert into Weather (Id, RecordDate, Temperature) values ('1', '2015-01-01', '10')\ninsert into Weather (Id, RecordDate, Temperature) values ('2', '2015-01-02', '25')\ninsert into Weather (Id, RecordDate, Temperature) values ('3', '2015-01-03', '20')\ninsert into Weather (Id, RecordDate, Temperature) values ('4', '2015-01-04', '30')\n\\end{Code}\n\nTable: Weather\n\n\\begin{Code}\n+---------------+---------+\n| Column Name   | Type    |\n+---------------+---------+\n| id            | int     |\n| recordDate    | date    |\n| temperature   | int     |\n+---------------+---------+\nid is the primary key for this table.\nThis table contains information about the temperature on a certain day.\n\\end{Code}\n\nWrite an SQL query to find the ids of all dates with a temperature higher than that of the previous date (yesterday).\n\nReturn the result table in any order.\n\nThe query result format is in the following example:\n\\begin{Code}\nWeather\n+----+------------+-------------+\n| id | recordDate | Temperature |\n+----+------------+-------------+\n| 1  | 2015-01-01 | 10          |\n| 2  | 2015-01-02 | 25          |\n| 3  | 2015-01-03 | 20          |\n| 4  | 2015-01-04 | 30          |\n+----+------------+-------------+\n\nResult table:\n+----+\n| id |\n+----+\n| 2  |\n| 4  |\n+----+\nIn 2015-01-02, the temperature was higher than on the previous day (10 -> 25).\nIn 2015-01-04, the temperature was higher than on the previous day (20 -> 30).\n\\end{Code}\n\n\n\\subsubsection{Join, Datediff}\n\\begin{Code}\nSELECT\n    weather.id AS 'Id'\nFROM\n    weather\n        JOIN\n    weather w ON DATEDIFF(weather.recordDate, w.recordDate) = 1\n        AND weather.Temperature > w.Temperature\n;\n\\end{Code}\n\n\\subsubsection{Join, Subdate}\n\\begin{Code}\nselect\n    w1.id\nfrom\n    Weather w1\njoin\n    Weather w2\non\n    subdate(w1.recordDate, 1) = w2.recordDate\n    and w1.Temperature > 
w2.Temperature;\n\\end{Code}\n\n\\section{Trips and Users} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:trips-and-users}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Trips (Id int, Client_Id int, Driver_Id int, City_Id int, Status ENUM('completed', 'cancelled_by_driver', 'cancelled_by_client'), Request_at varchar(50))\nCreate table If Not Exists Users (Users_Id int, Banned varchar(50), Role ENUM('client', 'driver', 'partner'))\nTruncate table Trips\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('1', '1', '10', '1', 'completed', '2013-10-01')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('2', '2', '11', '1', 'cancelled_by_driver', '2013-10-01')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('3', '3', '12', '6', 'completed', '2013-10-01')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('4', '4', '13', '6', 'cancelled_by_client', '2013-10-01')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('5', '1', '10', '1', 'completed', '2013-10-02')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('6', '2', '11', '6', 'completed', '2013-10-02')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('7', '3', '12', '6', 'completed', '2013-10-02')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('8', '2', '12', '12', 'completed', '2013-10-03')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('9', '3', '10', '12', 'completed', '2013-10-03')\ninsert into Trips (Id, Client_Id, Driver_Id, City_Id, Status, Request_at) values ('10', '4', '13', '12', 'cancelled_by_driver', '2013-10-03')\nTruncate table Users\ninsert into Users (Users_Id, Banned, Role) values ('1', 'No', 'client')\ninsert into Users (Users_Id, Banned, Role) values ('2', 'Yes', 'client')\ninsert into Users (Users_Id, Banned, Role) values ('3', 'No', 'client')\ninsert into Users (Users_Id, Banned, Role) values ('4', 'No', 'client')\ninsert into Users (Users_Id, Banned, Role) values ('10', 'No', 'driver')\ninsert into Users (Users_Id, Banned, Role) values ('11', 'No', 'driver')\ninsert into Users (Users_Id, Banned, Role) values ('12', 'No', 'driver')\ninsert into Users (Users_Id, Banned, Role) values ('13', 'No', 'driver')\n\\end{Code}\n\nTable: Trips\n\n\\begin{Code}\n+-------------+----------+\n| Column Name | Type     |\n+-------------+----------+\n| Id          | int      |\n| Client_Id   | int      |\n| Driver_Id   | int      |\n| City_Id     | int      |\n| Status      | enum     |\n| Request_at  | date     |\n+-------------+----------+\nId is the primary key for this table.\nThe table holds all taxi trips. Each trip has a unique Id, while Client_Id and Driver_Id are foreign keys to the Users_Id of the Users table.\nStatus is an ENUM type of (\u2018completed\u2019, \u2018cancelled_by_driver\u2019, \u2018cancelled_by_client\u2019).\n\\end{Code}\n\nTable: Users\n\n\\begin{Code}\n+-------------+----------+\n| Column Name | Type     |\n+-------------+----------+\n| Users_Id    | int      |\n| Banned      | enum     |\n| Role        | enum     |\n+-------------+----------+\nUsers_Id is the primary key for this table.\nThe table holds all users. 
Each user has a unique Users_Id, and Role is an ENUM type of (\u2018client\u2019, \u2018driver\u2019, \u2018partner\u2019).\nBanned is an ENUM type of (\u2018Yes\u2019, \u2018No\u2019).\n\\end{Code}\n\nWrite a SQL query to find the cancellation rate of requests with unbanned users (both client and driver must not be banned) each day between \"2013-10-01\" and \"2013-10-03\".\n\nThe cancellation rate is computed by dividing the number of canceled (by client or driver) requests with unbanned users by the total number of requests with unbanned users on that day.\n\nReturn the result table in any order. Round Cancellation Rate to two decimal points.\n\nThe query result format is in the following example:\n\n\\begin{Code}\nTrips table:\n+----+-----------+-----------+---------+---------------------+------------+\n| Id | Client_Id | Driver_Id | City_Id | Status              | Request_at |\n+----+-----------+-----------+---------+---------------------+------------+\n| 1  | 1         | 10        | 1       | completed           | 2013-10-01 |\n| 2  | 2         | 11        | 1       | cancelled_by_driver | 2013-10-01 |\n| 3  | 3         | 12        | 6       | completed           | 2013-10-01 |\n| 4  | 4         | 13        | 6       | cancelled_by_client | 2013-10-01 |\n| 5  | 1         | 10        | 1       | completed           | 2013-10-02 |\n| 6  | 2         | 11        | 6       | completed           | 2013-10-02 |\n| 7  | 3         | 12        | 6       | completed           | 2013-10-02 |\n| 8  | 2         | 12        | 12      | completed           | 2013-10-03 |\n| 9  | 3         | 10        | 12      | completed           | 2013-10-03 |\n| 10 | 4         | 13        | 12      | cancelled_by_driver | 2013-10-03 |\n+----+-----------+-----------+---------+---------------------+------------+\n\nUsers table:\n+----------+--------+--------+\n| Users_Id | Banned | Role   |\n+----------+--------+--------+\n| 1        | No     | client |\n| 2        | Yes    | client |\n| 3        | No     | client |\n| 4        | No     | client |\n| 10       | No     | driver |\n| 11       | No     | driver |\n| 12       | No     | driver |\n| 13       | No     | driver |\n+----------+--------+--------+\n\nResult table:\n+------------+-------------------+\n| Day        | Cancellation Rate |\n+------------+-------------------+\n| 2013-10-01 | 0.33              |\n| 2013-10-02 | 0.00              |\n| 2013-10-03 | 0.50              |\n+------------+-------------------+\n\nOn 2013-10-01:\n  - There were 4 requests in total, 2 of which were canceled.\n  - However, the request with Id=2 was made by a banned client (User_Id=2), so it is ignored in the calculation.\n  - Hence there are 3 unbanned requests in total, 1 of which was canceled.\n  - The Cancellation Rate is (1 / 3) = 0.33\nOn 2013-10-02:\n  - There were 3 requests in total, 0 of which were canceled.\n  - The request with Id=6 was made by a banned client, so it is ignored.\n  - Hence there are 2 unbanned requests in total, 0 of which were canceled.\n  - The Cancellation Rate is (0 / 2) = 0.00\nOn 2013-10-03:\n  - There were 3 requests in total, 1 of which was canceled.\n  - The request with Id=8 was made by a banned client, so it is ignored.\n  - Hence there are 2 unbanned requests in total, 1 of which was canceled.\n  - The Cancellation Rate is (1 / 2) = 0.50\n\\end{Code}\n\n\\subsubsection{CASE WHEN, Extra Column}\n\\begin{Code}\nSELECT\n    t.Request_at AS Day\n    , ROUND(SUM(t.flag) / COUNT(t.Request_at), 2) AS \"Cancellation Rate\"\nFROM\n    (\n        
SELECT\n            *\n            , CASE WHEN Status!='completed' THEN 1 ELSE 0 END AS flag\n        FROM\n            Trips\n        WHERE\n\t\t    Request_at >= '2013-10-01' AND Request_at <= '2013-10-03'\n            AND Client_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes') \n            AND Driver_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes')\n    ) AS t\nGROUP BY\n    t.Request_at\n;\n\\end{Code}\n\n\\subsubsection{Extra Table}\n\\begin{Code}\nWITH cte AS\n(\nSELECT\n    Client_Id\n    , Driver_Id\n    , Status\n    , Request_at\n    , CASE WHEN Status!='completed' THEN 1 ELSE 0 END AS flag\nFROM\n    Trips\nWHERE\n    Request_at >= '2013-10-01' AND Request_at <= '2013-10-03'\n    AND Client_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes')\n    AND Driver_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes')\n)\n\nSELECT\n    Request_at AS Day\n    , ROUND(SUM(flag) / COUNT(*), 2) AS 'Cancellation Rate'\nFROM\n    cte\nGROUP BY\n    Request_at\n;\n\\end{Code}\n\n\\subsubsection{Extra Table, Faster}\n\\begin{Code}\nWITH cte AS\n(\nSELECT\n    *\n    , CASE WHEN Status!='completed' THEN 1 ELSE 0 END AS flag\nFROM\n    Trips\nWHERE\n    Request_at >= '2013-10-01' AND Request_at <= '2013-10-03'\n    AND Client_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes')\n    AND Driver_Id NOT IN (SELECT Users_Id FROM Users WHERE Banned='Yes')\n)\n\nSELECT\n    Request_at AS Day\n    , ROUND(SUM(flag) / COUNT(*), 2) AS 'Cancellation Rate'\nFROM\n    cte\nGROUP BY\n    Request_at\n;\n\\end{Code}\n\n\\section{Game Play Analysis I} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:game-play-analysis-i}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Activity (player_id int, device_id int, event_date date, games_played int)\nTruncate table Activity\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-03-01', '5')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-05-02', '6')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('2', '3', '2017-06-25', '1')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '1', '2016-03-02', '0')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '4', '2018-07-03', '5')\n\\end{Code}\n\nTable: Activity\n\n\\begin{Code}\n+--------------+---------+\n| Column Name  | Type    |\n+--------------+---------+\n| player_id    | int     |\n| device_id    | int     |\n| event_date   | date    |\n| games_played | int     |\n+--------------+---------+\n(player_id, event_date) is the primary key of this table.\nThis table shows the activity of players of some game.\nEach row is a record of a player who logged in and played a number of games (possibly 0) before logging out on some day using some device.\n\\end{Code}\n\nWrite an SQL query that reports the first login date for each player.\n\nThe query result format is in the following example:\n\n\\begin{Code}\nActivity table:\n+-----------+-----------+------------+--------------+\n| player_id | device_id | event_date | games_played |\n+-----------+-----------+------------+--------------+\n| 1         | 2         | 2016-03-01 | 5            |\n| 1         | 2         | 2016-05-02 | 6            |\n| 2         | 3         | 2017-06-25 | 1            |\n| 3         | 1         | 2016-03-02 | 0            |\n| 3         | 4         | 2018-07-03 | 5            
|\n+-----------+-----------+------------+--------------+\n\nResult table:\n+-----------+-------------+\n| player_id | first_login |\n+-----------+-------------+\n| 1         | 2016-03-01  |\n| 2         | 2017-06-25  |\n| 3         | 2016-03-02  |\n+-----------+-------------+\n\\end{Code}\n\n\n\\subsubsection{GROUP BY, MIN}\n\\begin{Code}\nSELECT\n    player_id,\n    MIN(event_date) AS first_login\nFROM\n    activity\nGROUP BY\n    player_id\n;\n\\end{Code}\n\n\\section{Game Play Analysis II} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:game-play-analysis-ii}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Activity (player_id int, device_id int, event_date date, games_played int)\nTruncate table Activity\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-03-01', '5')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-05-02', '6')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('2', '3', '2017-06-25', '1')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '1', '2016-03-02', '0')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '4', '2018-07-03', '5')\n\\end{Code}\n\nTable: Activity\n\n\\begin{Code}\n+--------------+---------+\n| Column Name  | Type    |\n+--------------+---------+\n| player_id    | int     |\n| device_id    | int     |\n| event_date   | date    |\n| games_played | int     |\n+--------------+---------+\n(player_id, event_date) is the primary key of this table.\nThis table shows the activity of players of some game.\nEach row is a record of a player who logged in and played a number of games (possibly 0) before logging out on some day using some device.\n\\end{Code}\n\nWrite an SQL query that reports the device that is first logged in for each player.\n\nThe query result format is in the following example:\n\n\\begin{Code}\nActivity table:\n+-----------+-----------+------------+--------------+\n| player_id | device_id | event_date | games_played |\n+-----------+-----------+------------+--------------+\n| 1         | 2         | 2016-03-01 | 5            |\n| 1         | 2         | 2016-05-02 | 6            |\n| 2         | 3         | 2017-06-25 | 1            |\n| 3         | 1         | 2016-03-02 | 0            |\n| 3         | 4         | 2018-07-03 | 5            |\n+-----------+-----------+------------+--------------+\n\nResult table:\n+-----------+-----------+\n| player_id | device_id |\n+-----------+-----------+\n| 1         | 2         |\n| 2         | 3         |\n| 3         | 1         |\n+-----------+-----------+\n\\end{Code}\n\n\n\\subsubsection{GROUP BY, MIN}\n\\begin{Code}\nSELECT\n    player_id\n    , device_id\nFROM\n    activity\nWHERE\n    (player_id\n     , event_date)\nIN\n    (SELECT\n         player_id\n         , MIN(event_date) AS event_date\n     FROM\n         activity\n     GROUP BY player_id)\n;\n\\end{Code}\n\n\\subsubsection{GROUP BY, Dense\\_Rank}\n\\begin{Code}\nSELECT\n    player_id\n    , device_id\nFROM\n    (SELECT\n         player_id\n         , device_id\n         , DENSE_RANK() OVER (PARTITION BY player_id ORDER BY event_date) AS num\n    FROM\n         activity) AS a\n    WHERE\n        num = 1\n;\n\\end{Code}\n\n\n\\section{Game Play Analysis III} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\label{sec:game-play-analysis-iii}\n\n\n\\subsubsection{Description}\nSQL Schema\n\n\\begin{Code}\nCreate table If Not Exists Activity 
(player_id int, device_id int, event_date date, games_played int)\nTruncate table Activity\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-03-01', '5')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('1', '2', '2016-05-02', '6')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('2', '3', '2017-06-25', '1')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '1', '2016-03-02', '0')\ninsert into Activity (player_id, device_id, event_date, games_played) values ('3', '4', '2018-07-03', '5')\n\\end{Code}\n\nTable: Activity\n\n\\begin{Code}\n+--------------+---------+\n| Column Name  | Type    |\n+--------------+---------+\n| player_id    | int     |\n| device_id    | int     |\n| event_date   | date    |\n| games_played | int     |\n+--------------+---------+\n(player_id, event_date) is the primary key of this table.\nThis table shows the activity of players of some game.\nEach row is a record of a player who logged in and played a number of games (possibly 0)\nbefore logging out on some day using some device.\n\\end{Code}\n\nWrite an SQL query that reports, for each player and date, how many games the player has played so far. That is, the total number of games played by the player until that date. Check the example for clarity.\n\nThe query result format is in the following example:\n\n\\begin{Code}\nActivity table:\n+-----------+-----------+------------+--------------+\n| player_id | device_id | event_date | games_played |\n+-----------+-----------+------------+--------------+\n| 1         | 2         | 2016-03-01 | 5            |\n| 1         | 2         | 2016-05-02 | 6            |\n| 1         | 3         | 2017-06-25 | 1            |\n| 3         | 1         | 2016-03-02 | 0            |\n| 3         | 4         | 2018-07-03 | 5            |\n+-----------+-----------+------------+--------------+\n\nResult table:\n+-----------+------------+---------------------+\n| player_id | event_date | games_played_so_far |\n+-----------+------------+---------------------+\n| 1         | 2016-03-01 | 5                   |\n| 1         | 2016-05-02 | 11                  |\n| 1         | 2017-06-25 | 12                  |\n| 3         | 2016-03-02 | 0                   |\n| 3         | 2018-07-03 | 5                   |\n+-----------+------------+---------------------+\nFor the player with id 1, 5 + 6 = 11 games were played by 2016-05-02,\nand 5 + 6 + 1 = 12 games were played by 2017-06-25.\nFor the player with id 3, 0 + 5 = 5 games were played by 2018-07-03.\nNote that for each player we only care about the days when the player logged in.\n\\end{Code}\n\n\n\\subsubsection{SUM, Window Function}\n\\begin{Code}\nSELECT\n    player_id\n    , event_date\n    , SUM(games_played) OVER\n        (PARTITION BY player_id ORDER BY event_date) AS games_played_so_far\nFROM\n    Activity;\n\\end{Code}\n\n\n", "meta": {"hexsha": "a3e60659e2627f733997ead0c84a26f3d2fec6e2", "size": 37788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "C++/chapDatabase.tex", "max_stars_repo_name": "SulfredLee/leetcode", "max_stars_repo_head_hexsha": "7fe358e2f7d82d48a4fa6f5794e4912e43d7c89a", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "C++/chapDatabase.tex", "max_issues_repo_name": "SulfredLee/leetcode", "max_issues_repo_head_hexsha": 
"7fe358e2f7d82d48a4fa6f5794e4912e43d7c89a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "C++/chapDatabase.tex", "max_forks_repo_name": "SulfredLee/leetcode", "max_forks_repo_head_hexsha": "7fe358e2f7d82d48a4fa6f5794e4912e43d7c89a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.3157486424, "max_line_length": 257, "alphanum_fraction": 0.5684344236, "num_tokens": 10717, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.8479677602988601, "lm_q1q2_score": 0.5671888851007266}}
{"text": "\\section{Refined Strings}\\label{sec:strings}\n\nRefined Strings provides a wrapping over a string manipulations library, \nhere `ByteString`\n\\begin{code}\ndata RString = S BS.ByteString \n\\end{code}\n\nString operators wrap the library operators\nand moreover they are assumed to be reflected into the logic\n\n\\begin{code}\nassume concatString :: x:LHString -> y:LHString \n      -> {v:LHString | v == concatString x y }\nconcatString (S s1) (S s2) = S (s1 `BS.append` s2)\n\nassume stringEmp :: {v:RString | v == stringEmp  && stringLen v == 0 } \nstringEmp = S (BS.empty)\n\nassume stringLen :: x:RString -> {v:Nat | v == stringLen x}\nstringLen (S s) = BS.length s \n\nassume subString  :: s:RString -> offset:Int -> ln:Int -> {v:RString | v == subString s offset ln }\nsubString (S s) o l = S (BS.take l (BS.drop o s))\n\nassume takeString :: i:Nat -> xs:{RString | i <= stringLen xs } \n  -> {v:RString | stringLen v == i && v == takeString i xs }\ntakeString i (S s) = S (BS.take i s)\n\nassume dropString :: i:Nat -> xs:{RString | i <= stringLen xs } \n  -> {v:RString | stringLen v == stringLen xs - i && v == dropString i xs }\ndropString i (S s) = S (BS.drop i s)\n\nassume fromString :: i:String -> {o:RString | i == o && o == fromString i}\nfromString = S . ST.fromString \n\nassume isNullString :: i:RString -> {b:Bool | Prop b <=> stringLen i == 0 }\nisNullString (S s) = BS.length s == 0 \n\\end{code}\n\n\nFinally the refined string library, exposes theorems that hold (but are assumed) \nfor the internal string representation\n\n\\begin{code}\nassume stringEmpProp :: x:RString  -> { stringLen x == 0 <=> x == stringEmp }\nstringEmpProp _ = trivial \n \nassume concatStringNeutralLeft :: x:RString -> {concatString x stringEmp == x}\nconcatStringNeutralLeft _ = trivial\n\nassume concatStringNeutralRight :: x:RString -> {concatString stringEmp x == x}\nconcatStringNeutralRight _ = trivial\n\nassume concatTakeDrop :: i:Nat -> xs:{RString | i <= stringLen xs} \n    -> {xs == concatString (takeString i xs) (dropString i xs) }\nconcatTakeDrop _ _ = trivial\n\nassume concatLen :: x:RString -> y:RString -> { stringLen (concatString x y) == stringLen x + stringLen y }\nconcatLen _ _ = trivial\n\nassume concatStringAssoc :: x:RString -> y:RString -> z:RString \n     -> {concatString (concatString x y) z == concatString x (concatString y z) }\nconcatStringAssoc _ _ _ = trivial\n\n\n-- | Substrings \n\nassume subStringConcatBack :: input:RString -> input':RString -> j:Int -> i:{Int | i + j <= stringLen input }\n  -> { (subString input i j == subString (concatString input input') i j) }\nsubStringConcatBack _ _ _ _ = trivial  \n\n\nassume subStringConcatFront  \n  :: input:RString -> input':RString -> j:Int -> i:Int \n  -> { (subString input i j == subString (concatString input' input) (stringLen input' + i) j) }\nsubStringConcatFront _ _ _ _ = trivial\n\\end{code}", "meta": {"hexsha": "709c9caa46dcb00f0d2c0afa19975e2d798aaab7", "size": 2815, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/stringmatcher/strings.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/stringmatcher/strings.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": 
"a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/stringmatcher/strings.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 36.0897435897, "max_line_length": 109, "alphanum_fraction": 0.6792184725, "num_tokens": 826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118111485245, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5670825163061662}}
{"text": "\n\\subsection{Variance of the 2SOLS estimator}\n\n", "meta": {"hexsha": "462421e5d198b52a9101bbb4ccca315361205f0d", "size": 47, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/olsMore/03-03-2SOLSvariance.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/olsMore/03-03-2SOLSvariance.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/olsMore/03-03-2SOLSvariance.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 11.75, "max_line_length": 44, "alphanum_fraction": 0.7872340426, "num_tokens": 14, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267118026095991, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5670825157564672}}
{"text": "\\subsection{gammapy.stats}\n\\label{ssec:gammapy-stats}\n\\todo{Regis Terrier}\n\nThe gammapy.stats subpackage contains the fit statistics and associated\nstatistical estimators that are commonly used in gamma-ray astronomy. In\ngeneral, gamma-ray observations count Poisson-distributed events at various sky\npositions, and contain both signal and background events. Estimation of the\nnumber of signal events is done through likelihood maximization. In Gammapy,\nthe fit statistics are Poisson log-likelihood functions normalized like\nchi-squares, i.e., they follow the expression $2 \\times log L$, where $L$ is\nthe likelihood function used. The statistic function used when the expected\nnumber of background events is known is the \\emph{Cash} statistic\n~\\citep{Cash}. It is used by datasets using background templates such as the\nMapDataset. When the number of background events is unknown and an off\nmeasurement where only background events are expected is used, the statistic\nfunction is WStat. It is a profile log-likelihood statistic where background\ncounts are marginalized parameters. It is used by datasets containing off\ncounts measurements such as the SpectrumDatasetOnOff, used for classical\nspectral analysis.\n\nTo perform simple statistical estimations on counts measurements,\nCountsStatistic classes encapsulate the aforementioned statistic functions to\nmeasure excess counts and estimate the associated statistical significance,\nerrors and upper limits. They perform maximum likelihood ratio tests to\nestimate significance (the square root of the statistic difference) and compute\nlikelihood profiles to measure errors and upper limits. The code example\n\\ref{codeexample:stats} shows how to compute the Li \\& Ma\nsignificance~\\citep{LiMa} of a set of measurements.\n\n\\begin{figure}\n\t\\import{code-examples/generated/}{gp_stats}\n\t\\caption{}\n\t\\label{codeexample:stats} \\end{figure}\n\n", "meta": {"hexsha": "d953edf4084c1822f7166acc327969b0ed8ad024", "size": 1883, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/text/2-package-subsections/stats.tex", "max_stars_repo_name": "LauraOlivera/gammapy-v1.0-paper", "max_stars_repo_head_hexsha": "212b87975575347e0249746c9e5a490e1bc549a5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T22:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T22:42:34.000Z", "max_issues_repo_path": "src/text/2-package-subsections/stats.tex", "max_issues_repo_name": "LauraOlivera/gammapy-v1.0-paper", "max_issues_repo_head_hexsha": "212b87975575347e0249746c9e5a490e1bc549a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 24, "max_issues_repo_issues_event_min_datetime": "2022-02-07T15:04:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T20:12:56.000Z", "max_forks_repo_path": "src/text/2-package-subsections/stats.tex", "max_forks_repo_name": "LauraOlivera/gammapy-v1.0-paper", "max_forks_repo_head_hexsha": "212b87975575347e0249746c9e5a490e1bc549a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2022-01-27T20:22:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:17:18.000Z", "avg_line_length": 52.3055555556, "max_line_length": 79, "alphanum_fraction": 0.8204992034, "num_tokens": 402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.685949467848392, "lm_q1q2_score": 0.5670825152067678}}
{"text": "\\section{Algorithm}\nAlthough the algorithm of~\\cite{AGHKT12} is statistically consistent and more computationally efficient than EM, directly applying it to our setting has a few drawbacks: First, if we covert our observations $(c_t, \\mu_t)$ to categorical observations, then since $(c_t, \\mu_t) \\in \\cbr{(x,y) \\in \\N^2: x\\leq y \\leq N}$, the observation space scale as $O(N^2)$.  In practice, $N$ can be as large as 2000, making $n$, the size of the observation space, as large as $10^6$, which can be prohibitive. Moreover, we are given the prior information that $\\mu_t$ is drawn from a binomial distribution $\\bin(c_t, p_{h_t})$, which a direct aplication of~\\cite{AGHKT12} will not utilize. Instead, it suffices to estimate vector $p$ for our application. Estimating the joint distribution of $c_t$ and $\\mu_t$ given $h_t$ is not necessary for our purposes, and direct estimation of $p$ may result in better statistical efficiency.\n\nTo overcome these drawbacks, we propose a feature-map based algorithm for learning the binomial hidden Markov model. The algorithm relies on the tensor decomposition algorithm of~\\cite{AGHKT12}, but has several novel modifications. We introduce them in the following subsections.\n\n\\subsection{Feature Map} We use a feature map\n$\\phi(x)$ to map the observations to a vector. By redefining $C_i = \\E[\\phi(x_i)|h_2]$,\nand $P_{i,j} := \\E[\\phi(x_i) \\otimes \\phi(x_j)]$ (where $i,j$ are distinct elements from $\\cbr{1,2,3}$)\nand $T := \\E[\\phi(x_1) \\otimes \\phi(x_2) \\otimes \\phi(x_3)]$, it can be shown that same as in Section 2.3, relationship\n\\[ P_{i,j} = C_i \\diag(w) C_j^T = \\sum_{l=1}^m w_l (C_i)_l \\otimes (C_j)_l \\]\nand\n\\[ T = \\sum_{l=1}^m w_l (C_1)_l \\otimes (C_2)_l \\otimes (C_3)_l \\]\nstill holds, and Steps 2-5 in Section 2.3 provably recovers matrix $C_2$, matrix $T$ and vector $\\pi$\nmodulo column permutation and scaling.\n\n%\\paragraph{Binning mapping} Given observation $(c, \\mu)$, $\\phi_{\\bin, n}(c, \\mu)$ is a $n$-dimensional vector, with its entries as follows:\n%\\[ (\\phi_{\\bin, n}(c, \\mu))_i = \\begin{cases} I(\\frac{\\mu}{c} \\in (\\frac{i-1}{n}, \\frac{i}{n}]) & c \\neq 0 \\\\ 0 & c = 0 \\end{cases} \\]\n%where $n$ is a hyperparameter controlling the number of the bins, and the width of the bins is $\\frac{1}{n}$.\n\n\\paragraph{Beta Mapping} We propose a novel feature map, namely Beta mapping, to map the observations to vectors. Since our goal is to recover $p_h$ for all $h$, we would like a function of $c_t$ and $m_t$ that reveals information on the underlying $p_{h_t}$. 
We map the observation $(c_t, \\mu_t)$ to a (discretized) distribution with its mean around $\\frac{\\mu_t}{c_t}$, which is $p_{h_t}$ in expectation; moreover, if $c_t$ is large, then the mapped distribution is more concentrated, implying that we have higher confidence in the value of $p_{h_t}$.\n\nFormally, given observation $(c, \\mu)$, $\\phi_{\\bet, n}(c, \\mu)$ is an $n$-dimensional vector, with its entries defined as:\n\\[ (\\phi_{\\bet, n}(c, \\mu))_i = \\frac{1}{B(\\mu+1, c-\\mu+1)} (\\frac{i}{n})^\\mu (1-\\frac{i}{n})^{c-\\mu} \\]\nWe also denote the above quantity by $\\varphi_{\\bet}((c,\\mu), \\frac{i}{n})$, where $\\varphi_{\\bet}(x, t) = \\frac{1}{B(\\mu+1, c-\\mu+1)} t^\\mu (1-t)^{c-\\mu}$.\nWe will run Algorithm $\\TD$ with feature map $\\phi_{\\bet, n}$.\n\n%\\paragraph{Recovery of $p_h$ from Expected  Binning feature mapping}\n%Recall that\n%\\[ \\int_0^1 t \\E[\\phi_{\\bin}(x,t) | h] dt = \\E[\\frac{m}{c} | h] = p_h \\]\n%Thus,\n%\\[ p_h = \\int_0^1 t \\E[\\phi_{\\bin}(x,t) | h] dt. \\]\n\n\\subsection{Recovery of Methylation Probabilities}\nNotice that, running Algorithm $\\TD$ with feature map $\\phi$, we recover $C_2 = \\E[\\phi(x)|h]$, which is not itself the methylation probability. To recover the model parameters $p$, we need a recovery procedure that extracts the value of $p_h$ for each $h$ from the $\\E[\\phi(x)|h]$ estimated by tensor decomposition.\n\n\\paragraph{Recovery of $p_h$ from Expected Beta Feature Mapping} If the feature map $\\phi$ is the Beta feature mapping $\\phi_{\\bet, n}$, we use the following formula to recover $p_h$ from $\\E[\\phi(x)|h]$:\n\\[ \\hat{p}_h := \\frac{\\frac{1}{n} \\sum_{i=1}^n \\frac{i}{n} \\E[(\\phi_{\\bet}(x))_i | h] - a}{1 - 2 a}, \\]\nwhere $a = \\E[\\frac{1}{c+2}]$. We justify the recovery formula as follows.\n\nRecall that\n\\[ \\int_0^1 \\frac{t \\cdot t^{m} (1-t)^{c-m}}{B(m+1,c-m+1)} dt = \\int_0^1 \\frac{t \\cdot t^{(m+1)-1} (1-t)^{(c-m+1)-1}}{B(m+1,c-m+1)} dt = \\frac{m+1}{c+2} \\]\nTherefore,\n\\[ \\int_0^1 t \\E[\\varphi_{\\bet}(x,t) | h] dt = \\E\\sbr{\\frac{m+1}{c+2} | h} = \\E\\sbr{\\frac{cp_h + 1}{c + 2} | h} = \\E\\sbr{\\frac{c}{c+2}} p_h + \\E\\sbr{\\frac{1}{c+2}} \\]\nassuming independence between $h$ and $c$.\nLet $a = \\E[\\frac{1}{c+2}]$. Then,\n\\[ \\int_0^1 t\\E[\\varphi_\\bet(x,t) | h] dt = (1 - 2a) p_h + a. \\]\nHence,\n\\[ p_h = \\frac{\\int_0^1 t\\E[\\varphi_{\\bet}(x,t) | h] dt - a}{1 - 2 a}. \\]\nIn practice, since we only have finitely many feature-map dimensions, we use discrete\nsummation in place of integration. Using the relationship $\\phi_{\\bet, n}(x)_i = \\varphi_{\\bet}(x, \\frac{i}{n})$, we get\n\\[ p_h \\approx \\frac{\\frac{1}{n} \\sum_{i=1}^n \\frac{i}{n} \\E[(\\phi_{\\bet}(x))_i | h] - a}{1 - 2 a}. \\]\n\n
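A small numerical sketch of this discretized recovery (added for illustration; it assumes we already hold an estimated column of $\\E[\\phi_{\\bet}(x)|h]$ and an estimate of $a$, and all names below are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef recover_p(phi_col, a):\n    # phi_col: length-n array estimating E[(phi_beta(x))_i | h]\n    # a:       estimate of E[1 / (c + 2)]\n    n = len(phi_col)\n    t = np.arange(1, n + 1) / n      # grid points t_i = i / n\n    moment = np.mean(t * phi_col)    # (1/n) sum_i (i/n) E[(phi)_i | h]\n    return (moment - a) / (1 - 2 * a)\n\\end{verbatim}\n\n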
\\subsection{Stabilization Procedure}\nIn practice, the binomial hidden Markov model may not perfectly characterize the data distribution. Therefore, the methylation probability estimates provided by the algorithm may not fully capture the data distribution, which causes estimation errors in the other parameters, such as the initial probability and the transition matrix. To address this issue, we propose a least squares formulation for recovering the transition matrix and the initial probability distribution. Given $\\hat{C}_2$ as an estimator of $\\E[\\phi(x_2)|h_2]$, we propose to solve the following optimization problem:\n\\[ \\min_{H_{2,1}: \\forall i,j (H_{2,1})_{i,j} \\geq 0, \\sum_{i,j} (H_{2,1})_{i,j} = 1} \\| P_{2,1} - \\hat{C}_2 H_{2,1} \\hat{C}_2^T \\|_F^2 \\]\nHere, $H_{2,1}$ is an estimator of the matrix with entries $\\P[x_2 = i, x_1 = j]$, and we can recover the transition matrix and initial probability by applying the formulae $\\pi =\\one^T H_{2,1}$ and $T = H_{2,1} \\diag(\\pi)^{-1}$.\nEmpirically, this can be shown to have superior performance compared to directly applying the formula\n\\[ H_{2,1} := \\hat{C}_2^{\\dagger} P_{2,1} \\hat{C}_2^{\\dagger T}\\]\nwhich can yield an estimator of $H_{2,1}$ with large negative entries.\n\n\\subsection{Multiple Cell Types}\nFor experiments with differential methylation between two cell types, we observe two\ncoverage methylation pairs, one for each cell type. That is, $x = ((c^1, \\mu^1), (c^2, \\mu^2))$.\nOur goal is to extract hidden states represented by a pair of methylation probabilities $(p^1_h, p^2_h)$.\nTo this end, we construct a concatenated feature map from feature maps on each cell type:\n\\[ \\phi(x) = \\begin{bmatrix} \\phi(c^1, \\mu^1) \\\\ \\phi(c^2, \\mu^2) \\end{bmatrix} \\]\nFollowing the tensor decomposition algorithm in Section 2, we can recover the expected feature map\ngiven hidden states:\n\\[ C_2 = \\E[\\phi(x)|h] = \\begin{bmatrix} \\E[\\phi(c^1, \\mu^1)|h] \\\\ \\E[\\phi(c^2, \\mu^2)|h] \\end{bmatrix}\\]\nNow, applying the recovery procedure of $p_h$, we can recover\na pair $(p^1_h, p^2_h)$ for each hidden state $h$. For $h$, if we see a large difference between\n$p^1_h$ and $p^2_h$, then we identify state $h$ as a differential methylation state.\nWe follow the same stable recovery procedure as in the last subsection.\n\n\\subsection{Decoding}\nParameter recovery gives us estimates of the model parameters $\\pi, T, p$. We perform decoding, i.e. inference\nover the hidden states, using two algorithms: posterior decoding and Viterbi decoding. In posterior decoding,\nwe compute $\\P(h_t=i|x_1, \\ldots, x_T)$ and pick the $i$ achieving the maximum for each position $t$. In Viterbi decoding,\nwe use a dynamic program to get a combination of $h_1, \\ldots, h_T$ that maximizes\n$\\P(h_1, \\ldots, h_T|x_1,\\ldots,x_T)$. Both algorithms run in time $O(l m^2)$. In the experiments, we see that the decoding results of both methods are roughly the same; therefore, we focus on the results of posterior decoding in this paper.\n\n
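As an illustration of the Viterbi step, a compact log-space sketch follows (our own notation, added here; it assumes the emission log-probabilities have already been evaluated under the estimated binomial model):\n\\begin{verbatim}\nimport numpy as np\n\ndef viterbi(log_pi, log_T, log_emit):\n    # log_pi: (m,) initial log-probs; log_T[i, j] = log P(h'=j | h=i)\n    # log_emit: (l, m) log P(x_t | h_t = j) for each position t\n    l, m = log_emit.shape\n    score = log_pi + log_emit[0]\n    back = np.zeros((l, m), dtype=int)\n    for t in range(1, l):\n        cand = score[:, None] + log_T    # previous state x next state\n        back[t] = np.argmax(cand, axis=0)\n        score = cand[back[t], np.arange(m)] + log_emit[t]\n    path = np.empty(l, dtype=int)\n    path[-1] = int(np.argmax(score))\n    for t in range(l - 1, 0, -1):\n        path[t - 1] = back[t, path[t]]\n    return path                          # O(l m^2), as stated above\n\\end{verbatim}\n\n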
When we have multiple cell types, we compute the observation probabilities given hidden states under the following assumption. Given position $t$,\nwe assume conditional independence between the observations in different cell types, given the hidden state and the coverage in each cell type. Formally, we have\n\\[ \\P(\\mu_1, \\mu_2 | c_1, c_2, h) = \\P(\\mu_1 | c_1, h) \\cdot \\P(\\mu_2 | c_2, h) \\]\n", "meta": {"hexsha": "b36c3baade4138861bb0fa002ea3afaaa620adf3", "size": 8429, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "kernelbin/paper/algorithm.tex", "max_stars_repo_name": "anapophenic/knb", "max_stars_repo_head_hexsha": "0e865a47791bf801c00078a128a4d21e4375e732", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-04-21T21:10:22.000Z", "max_stars_repo_stars_event_max_datetime": "2016-08-27T19:15:23.000Z", "max_issues_repo_path": "kernelbin/paper/algorithm.tex", "max_issues_repo_name": "anapophenic/knb", "max_issues_repo_head_hexsha": "0e865a47791bf801c00078a128a4d21e4375e732", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "kernelbin/paper/algorithm.tex", "max_forks_repo_name": "anapophenic/knb", "max_forks_repo_head_hexsha": "0e865a47791bf801c00078a128a4d21e4375e732", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.1647058824, "max_line_length": 916, "alphanum_fraction": 0.6920156602, "num_tokens": 2771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267118068790619, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.567082502762386}}
{"text": "\\chapter{EVOLUTIONARY ALGORITHMS}\n\n\\section{Introduction to EA}In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.In details, the underlying idea behind the algorithm is:\\\\\nGiven a population of individuals the environmental pressure causes natural selection (survival of the fittest) and this causes a rise in the fitness of the population.Given a quality function to be maximised we can randomly create a set of candidate solutions, i.e., elements of the function\u2019s domain, and apply the quality function as an abstract fitness measure- the higher the better. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and/or mutation to them.\\\\\nRecombination is an operator applied to two or more selected candidates (the so-called parents) and results one or more new candidates (the children). Mutation is applied to one candidate and results in one new candidate.Executing recombination and mutation leads to a set of new candidates date (the \toffspring) that compete-based on their fitness \u2013 with the old ones for a place in the \tnext generation. This process can be iterated until a candidate with sufficient quality (a solution) is found or a previously set computational limit is reached.\\\\\n\\section{General algorithm of EA.}BEGIN\\\\\n        INITIALISE population with random candidate solutions;\\\\\n\t\tEVALUATE each candidate\u2019s fitness;\\\\\n\t\tREPEAT UNTIL (TERMINATION CONDITION is satisfied) DO\\\\\\\n           1.\tSELECT parents;\\\\\n           2.\tRECOMBINE pairs of parents;\\\\\n           3.\tMUTATE the resulting offspring;\\\\\n           4.\tEVALUATE new candidates;\\\\\n           5.\tSELECT individuals for the next generation;\\\\\n        END\\\\\\\\\n\\includegraphics[width=1\\textwidth]{./ge}\\\\[1cm]\n\\section{Some evolutionary algorithms.}\n\\subsection{Ant colony optimization.}Ant colony optimization algorithm(ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.This algorithm is a member of the ant colony algorithms family, in swarm intelligence methods, and it constitutes some optimizations. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm was aiming to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several problems have emerged, drawing on various aspects of the behavior of ants.\\\\\n\\includegraphics[width=0.5\\textwidth]{./ant}\\\\[1cm]\n\\subsection{Genetic algorithm.}Genetic algorithm (GA) is a search heuristic that mimics the process of natural selection. 
\\section{Some evolutionary algorithms.}\n\\subsection{Ant colony optimization.}\nAnt colony optimization (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several problems have emerged, drawing on various aspects of the behavior of ants.\\\\\n\\includegraphics[width=0.5\\textwidth]{./ant}\\\\[1cm]\n\\subsection{Genetic algorithm.}\nGenetic algorithm (GA) is a search heuristic that mimics the process of natural selection. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination) and mutation.\\\\\nFor each new solution to be produced, a pair of \"parent\" solutions is selected for breeding from the pool selected previously. By producing a \"child\" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its \"parents\". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more \"biology inspired\", some research suggests that more than two parents can generate higher quality chromosomes.\\\\\nThese processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.\\\\\n\\includegraphics[width=0.5\\textwidth]{./ee}\\\\[1cm]\n\\subsection{Particle swarm optimization.}\nParticle swarm optimization (PSO) is a heuristic global optimization method based on swarm intelligence. It originates from research on bird flocking and fish schooling behaviour. PSO maintains a population of candidate solutions, dubbed particles. These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search-space as well as the entire swarm's best known position. When improved positions are being discovered these will then come to guide the movements of the swarm. 
The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.\\\\\\\\\n\\includegraphics[width=0.5\\textwidth]{./pso}\\\\[1cm]\n\n", "meta": {"hexsha": "5a585536957afd1f79962f1bc02f50bb6af67557", "size": 5570, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "introduction_1.tex", "max_stars_repo_name": "Ace139/final-year-thesis", "max_stars_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "introduction_1.tex", "max_issues_repo_name": "Ace139/final-year-thesis", "max_issues_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "introduction_1.tex", "max_forks_repo_name": "Ace139/final-year-thesis", "max_forks_repo_head_hexsha": "207ee3e1465ea1c8673ba2a3045228bcca22adbe", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 206.2962962963, "max_line_length": 742, "alphanum_fraction": 0.8016157989, "num_tokens": 1114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267118026095992, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5670824998337505}}
{"text": "\\documentclass{article}\n\\usepackage{etoolbox}\n\\usepackage{mdwlist}\n\\usepackage{mathtools}\n\\makeatletter\n\\patchcmd{\\chapter}{\\if@openright\\cleardoublepage\\else\\clearpage\\fi}{}{}{}\n\\makeatother\n\n\\usepackage{thumbpdf}\n\\usepackage[pdftex,\n        colorlinks=true,\n        urlcolor=rltblue,       % \\href{...}{...} external (URL)\n        filecolor=rltgreen,     % \\href{...} local file\n        linkcolor=rltred,       % \\ref{...} and \\pageref{...}\n        pdftitle={Untitled},\n        pdfauthor={Your Name},\n        pdfsubject={Just a test},\n        pdfkeywords={mathematics},\n        pdfproducer={pdfLaTeX},\n        %pdfadjustspacing=1,\n        pagebackref,\n        pdfpagemode=None,\n        bookmarksopen=true]{hyperref}\n\\usepackage{color}\n\\definecolor{rltred}{rgb}{0.75,0,0}\n\\definecolor{rltgreen}{rgb}{0,0.5,0}\n\\definecolor{rltblue}{rgb}{0,0,0.75}\n\n\\title{Optimist Racing - Math and Physics}\n\\author{Claude Richard}\n\\date{\\today}\n\n\\begin{document}\\label{start}\n\n\\maketitle\nThis document contains all the mathematics used (except the easy math LOL) for the program.\n\n\\subsection{Conventions}\nThis section describes the notational conventions used throughout this document.\nIf \\(A\\) is a matrix (or vector), then we denote the i\\textsuperscript{th} row of A by \\(A_i\\).\nWe also denote the i\\textsuperscript{th} column of A by \\(A^i\\).\n\n\\newpage\n\n\n\n\\section{Electric Colours}\nThere are 3 RGB colours (red, green, blue). There are 8 electric colours.\nThe electric colours can be thought of as subsets of \\( \\{ red, green, blue \\} \\).\nThe electric colours are:\n\\begin{basedescript}{\\desclabelstyle{\\pushlabel}\\desclabelwidth{10em}}\n\\item [ElectricBlack] \\(= \\emptyset\\)\n\\item [ElectricRed] \\(= \\{ red \\}\\)\n\\item [ElectricGreen] \\(= \\{ green \\}\\)\n\\item [ElectricBlue] \\(= \\{ blue \\}\\)\n\\item [ElectricYellow] \\(= \\{ red, green \\}\\)\n\\item [ElectricMagenta] \\(= \\{ red, blue \\}\\)\n\\item [ElectricCyan] \\(= \\{ green, blue \\}\\)\n\\item [ElectricWhite] \\(= \\{ red, green, blue \\}\\)\n\\end{basedescript}\nIn the physics engine, 2 objects of Electric colours X and Y respectively can interact with each other if and only if\n\\( X \\bigcap Y \\neq \\emptyset \\)\n\n\\newpage\n\n\n\n\\section{Smooth Floor Grid}\nFile: Bezier.hpp\nYou can create a floor which is differentiable n times for an arbitrary n.\nNote that the complexity is at least O(n). (I didn't calculate it exactly)\nSpecify f, fx, fy, etc. 
for each vertex in a grid.\nThen the whole grid will be n-times differentiable everywhere.\n\n\\newpage\n\n\n\n\\section{Lagrangian Mechanics}\nLet \\(\\overrightarrow{x}\\) represent the vector of generalized coordinates, of length \\(n\\).\nLet \\(\\overrightarrow{Q}\\) represent the vector of generalized forces, also of length \\(n\\).\nFor a given system, let \\(T\\) be the kinetic energy and \\(V\\) be the potential energy of the system.\nExpress \\(L = T - V\\) as a function of the generalized coordinates and their derivatives with respect to time.\n\\[\nT = T(x_1, \\dotsc, x_n, \\dot{x_1}, \\dotsc, \\dot{x_n})\n\\]\\[\nV= V(x_1, \\dotsc, x_n)\n\\]\\[\nL = L(x_1, \\dotsc, x_n, \\dot{x_1}, \\dotsc, \\dot{x_n})\n\\]\nThen there are \\(n\\) Lagrange Equations which together govern the motion of the generalized coordinates.\n\\[\n\\frac{d}{dt}\\frac{\\partial L}{\\partial \\dot{x_i}} - \\frac{\\partial L}{\\partial x_i} = Q_i\n\\]\n\n\\subsection{Matrix Form}\nYou should be able to express \\(L\\) in matrix form:\n\\[\nL = \\frac{1}{2} \\dot{\\overrightarrow{x}}^T A \\dot{\\overrightarrow{x}} - V\n\\]\nwhere \\(A\\) is a symmetric \\(n \\times n\\) matrix depending on the generalized coordinates but not on their derivatives with respect to time.\n\\[\nA = A(x_1, \\dotsc, x_n)\n\\]\\[\nA = A^T\n\\]\nWe will now derive a vector equation to replace the \\(n\\) Lagrange equations. We have the following:\n\\[\n\\frac{\\partial L}{\\partial x_i} = \\frac{1}{2}\\dot{\\overrightarrow{x}}^T \\frac{\\partial A}{\\partial x_i} \\dot{\\overrightarrow{x}} - \\frac{\\partial V}{\\partial x_i}\n\\]\\[\n\\frac{\\partial L}{\\partial \\dot{x_i}} = A_i \\dot{\\overrightarrow{x}}\n\\]\\[\n\\frac{d}{dt}\\frac{\\partial L}{\\partial \\dot{x_i}} = A_i \\ddot{\\overrightarrow{x}} +\n\\sum_{j=1}^{n} \\dot{x_j}(\\frac{\\partial A}{\\partial x_j})_i \\dot{\\overrightarrow{x}}\n\\]\nThe i\\textsuperscript{th} Lagrange equation then becomes:\n\\[\nA_i \\ddot{\\overrightarrow{x}} + \\sum_{j=1}^{n} \\dot{x_j}(\\frac{\\partial A}{\\partial x_j})_i \\dot{\\overrightarrow{x}}\n- \\frac{1}{2}\\dot{\\overrightarrow{x}}^T \\frac{\\partial A}{\\partial x_i} \\dot{\\overrightarrow{x}} + \\frac{\\partial V}{\\partial x_i}\n= Q_{x_i}\n\\]\nNow for each \\(i\\), let \\(M_i\\) be a matrix such that \\( (M_i)_j = (\\frac{\\partial A}{\\partial x_j})_i\\).\nAll equations combined become the following vector equation:\n\\[\nA \\ddot{\\overrightarrow{x}} + \\sum_{j=1}^{n} \\dot{x_j}(\\frac{\\partial A}{\\partial x_j} - \\frac{1}{2}M_j) \\dot{\\overrightarrow{x}}  + \\nabla V = \\overrightarrow{Q}\n\\]\n\n
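As a quick sanity check of this vector equation (an added example, not part of the original derivation), take a single generalized coordinate \\(\\theta\\) for a planar pendulum of mass \\(m\\) and length \\(l\\). Then \\(T = \\frac{1}{2} m l^2 \\dot{\\theta}^2\\), so \\(A = [m l^2]\\) is constant, every \\(\\frac{\\partial A}{\\partial x_j}\\) and \\(M_j\\) vanishes, and with \\(V = -mgl\\cos\\theta\\) the equation reduces to\n\\[\nm l^2 \\ddot{\\theta} + m g l \\sin\\theta = Q_\\theta,\n\\]\nwhich is the familiar pendulum equation.\n\n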
\\subsection{Kinetic energy relating to the center-of-mass}\nLet the system be composed of one object of mass \\(m\\), which does not rotate, and whose position as a function of the generalized coordinates is:\n\\[\nposition = \\begin{bmatrix}\n  f(\\overrightarrow{x}) \\\\\n  g(\\overrightarrow{x}) \\\\\n  h(\\overrightarrow{x})\n \\end{bmatrix}\n\\]\nWe will now derive the kinetic energy as a function of \\(\\overrightarrow{x}\\), \\(\\dot{\\overrightarrow{x}}\\), and the three functions \\(f\\), \\(g\\), and \\(h\\).\nThe velocity of the object is:\n\\[\nvelocity = \\begin{bmatrix}\n  (\\nabla f)^T \\dot{\\overrightarrow{x}} \\\\\n  (\\nabla g)^T \\dot{\\overrightarrow{x}} \\\\\n  (\\nabla h)^T \\dot{\\overrightarrow{x}}\n \\end{bmatrix}\n\\]\nSo its kinetic energy is:\n\\[\nT = \\frac{1}{2} m [\\dot{\\overrightarrow{x}}^T [(\\nabla f)(\\nabla f)^T + (\\nabla g)(\\nabla g)^T + (\\nabla h)(\\nabla h)^T] \\dot{\\overrightarrow{x}} ]\n\\]\nAssume the potential energy is zero. Then \\(L = T\\).\nWe will now derive the Lagrange equations for this system.\n\\[\n\\frac{\\partial L}{\\partial x_i} = \\frac{1}{2} m \\dot{\\overrightarrow{x}}^T [ H(f)^i(\\nabla f)^T + H(g)^i(\\nabla g)^T + H(h)^i(\\nabla h)^T + (\\nabla f)H(f)_i + (\\nabla g)H(g)_i + (\\nabla h)H(h)_i ] \\dot{\\overrightarrow{x}}\n\\]\\[\n\\frac{\\partial L}{\\partial \\dot{x_i}} = m [ \\frac{\\partial f}{\\partial x_i}(\\nabla f)^T + \\frac{\\partial g}{\\partial x_i}(\\nabla g)^T +\\frac{\\partial h}{\\partial x_i}(\\nabla h)^T ] \\dot{\\overrightarrow{x}}\n\\]\\[\n\\begin{array}{lcl}\n\\frac{d}{dt} \\frac{\\partial L}{\\partial \\dot{x_i}} & = & m [ \\frac{\\partial f}{\\partial x_i}(\\nabla f)^T + \\frac{\\partial g}{\\partial x_i}(\\nabla g)^T +\\frac{\\partial h}{\\partial x_i}(\\nabla h)^T ] \\ddot{\\overrightarrow{x}} \\\\\n& + & m \\dot{\\overrightarrow{x}}^T [H(f)^i(\\nabla f)^T + H(g)^i(\\nabla g)^T + H(h)^i(\\nabla h)^T] \\dot{\\overrightarrow{x}} \\\\\n& + & m \\dot{\\overrightarrow{x}}^T [\\frac{\\partial f}{\\partial x_i}H(f) + \\frac{\\partial g}{\\partial x_i}H(g) + \\frac{\\partial h}{\\partial x_i}H(h)] \\dot{\\overrightarrow{x}}\n\\end{array}\n\\]\nThe matrix form of the Lagrange equations is:\n\\[\n\\begin{array}{lcl}\nm[(\\nabla f)(\\nabla f)^T + (\\nabla g)(\\nabla g)^T + (\\nabla h)(\\nabla h)^T]\\ddot{\\overrightarrow{x}} + \\\\\nm[(\\nabla f)\\dot{\\overrightarrow{x}}^TH(f) + (\\nabla g)\\dot{\\overrightarrow{x}}^TH(g) + (\\nabla h)\\dot{\\overrightarrow{x}}^TH(h)]\\dot{\\overrightarrow{x}} = Q\n\\end{array}\n\\]\n\n\\subsection{Kinetic energy relating to the Euler angles}\nLet the system be composed of one object, whose center-of-mass does not move, and whose Euler angles as a function of the generalized coordinates are \\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\).\nThis means that to go from an orientation parallel to the world axes to the object's orientation, you must turn left (around the z-axis) by \\(\\alpha\\), then tilt back (around the new model x-axis) by \\(\\beta\\), then turn left around the model z-axis by \\(\\gamma\\).\nLet the object have a moment of inertia tensor\n\\[\nI = \\begin{bmatrix}\nI_x & I_{xy} & I_{xz} \\\\\nI_{xy} & I_y & I_{yz} \\\\\nI_{xz} & I_{yz} & I_z\n\\end{bmatrix}\n\\]\nAlso denote\n\\[\n\\alpha_1 = \\alpha, \\alpha_2 = \\beta, \\alpha_3 = \\gamma\n\\]\\[\nG = \\begin{bmatrix}\n\\nabla \\alpha & \\nabla \\beta & \\nabla \\gamma\n\\end{bmatrix}\n\\]\n\\[\nE = \\begin{bmatrix}\nI_x \\sin^2 \\beta \\cos^2 \\gamma & (I_y-I_x)\\sin\\beta\\sin\\gamma\\cos\\gamma & I_z\\cos\\beta \\\\\n(I_y-I_x)\\sin\\beta\\sin\\gamma\\cos\\gamma & I_x\\sin^2\\gamma+I_y\\cos^2\\gamma & 0 \\\\\nI_z\\cos\\beta & 0 & I_z\n\\end{bmatrix}\n\\]\\[\n\\nabla_0 L = \\begin{bmatrix} \\frac{\\partial L}{\\partial x_1} \\\\ \\vdots \\\\ \\frac{\\partial L}{\\partial x_n} \\end{bmatrix}\n\\]\\[\n\\nabla_1 L = \\begin{bmatrix} \\frac{\\partial L}{\\partial \\dot{x_1}} \\\\ \\vdots \\\\ \\frac{\\partial L}{\\partial \\dot{x_n}} \\end{bmatrix}\n\\]\nWe will now derive the matrix form of the Lagrange equations for this system.\nLet \\(T\\) represent the kinetic energy, and assume the potential energy is zero.\nWe can derive the following lemmas:\n\\[\n\\dot{\\alpha_i} = \\dot{\\overrightarrow{x}}^T \\nabla\\alpha_i\n\\]\\[\n\\dot{(\\nabla\\alpha_i)} = H(\\alpha_i)\\dot{\\overrightarrow{x}}\n\\]\\[\nT = L\n\\]\n\\[\nT = \\frac{1}{2} \\begin{bmatrix}\\dot \\alpha & \\dot \\beta & \\dot \\gamma \\end{bmatrix}\nE \\begin{bmatrix}\\dot \\alpha \\\\ \\dot \\beta \\\\ \\dot \\gamma 
\\end{bmatrix}\n\\]\nTherefore:\n\\[\nL = \\frac{1}{2} \\dot{\\overrightarrow{x}}^T G E G^T \\dot{\\overrightarrow{x}}\n\\]\\[\n\\nabla_0 L = \\sum_{i=1}^3 \\frac{1}{2} \\nabla \\alpha_i \\dot{\\overrightarrow{x}}^T G \\frac{\\partial E}{\\partial \\alpha_i} G^T \\dot{\\overrightarrow{x}}\n+ H(\\alpha_i) \\dot{\\overrightarrow{x}} E_i G^T \\dot{\\overrightarrow{x}}\n\\]\\[\n\\nabla_1 L = G E G^T \\dot{\\overrightarrow{x}}\n\\]\nExpanding and collecting terms, we find the final vector equation:\n\\[\nG E G^T \\ddot{\\overrightarrow{x}} + \\sum_{i=1}^{3} G E^i \\dot{\\overrightarrow{x}}^T H(\\alpha_i) \\dot{\\overrightarrow{x}} +\nG \\frac{\\partial E}{\\partial \\alpha_i} G^T \\dot{\\overrightarrow{x}} \\dot{\\overrightarrow{x}}^T \\nabla \\alpha_i -\n\\frac{1}{2} \\nabla \\alpha_i \\dot{\\overrightarrow{x}}^T G \\frac{\\partial E}{\\partial \\alpha_i} G^T \\dot{\\overrightarrow{x}}\n= \\overrightarrow{Q}\n\\]\n\n\\subsection{Combining center-of-mass movement and rotation, and several objects}\nIn the previous two sections we focused on center-of-mass movement, then on rotation, for only one object.\nWhat if we have several objects that have both rotation and moving centers-of-mass?\nLet the system have \\(k\\) objects, and \\(n\\) generalized coordinates. Let the position of object \\(i\\) be\n\\[ position_i = \\begin{bmatrix} f_i(\\overrightarrow{x}) \\\\ g_i(\\overrightarrow{x}) \\\\ h_i(\\overrightarrow{x}) \\end{bmatrix} \\]\nand let the Euler angles of object \\(i\\) be\n\\[ Euler_i = \\begin{bmatrix} \\alpha_i(\\overrightarrow{x}) \\\\ \\beta_i(\\overrightarrow{x}) \\\\ \\gamma_i(\\overrightarrow{x}) \\end{bmatrix} \\]\nIf we use the center-of-mass section above for object \\(i\\), we would get the equation\n\\[ C_i \\ddot{\\overrightarrow{x}} = \\overrightarrow{c_i} \\]\nAnd if we use the rotation section above for object \\(i\\), we would get\n\\[ R_i \\ddot{\\overrightarrow{x}} = \\overrightarrow{r_i} \\]\nWhere \\(C_i\\), \\(R_i\\), \\(\\overrightarrow{c_i}\\) and \\(\\overrightarrow{r_i}\\) depend on \\(\\overrightarrow{x}\\) and \\(\\dot{\\overrightarrow{x}}\\).\nFor the combined system, the equation is simply\n\\[ A \\ddot{\\overrightarrow{x}} = \\overrightarrow{Q} - \\overrightarrow{b} \\]\nwhere\n\\[ A = \\sum_{i=1}^{k} (C_i + R_i) \\]\n\\[ b = \\sum_{i=1}^{k} (\\overrightarrow{c_i} + \\overrightarrow{r_i}) \\]\nand \\(\\overrightarrow{Q}\\) is the generalized force on the system, which will be covered in the next sections.\n\n\\subsection{Transforming forces and torques between model and world coordinate systems}\nThis is just a preliminary section to clarify the transformations in the next sections.\nSay you apply a force or torque \\(\\overrightarrow{F_w}\\) in world coordinates.\nThere is a force/torque \\(\\overrightarrow{F_m}\\) in model coordinates, which is equivalent to \\(\\overrightarrow{F_w}\\) in world coordinates.\nDefine the following matrices:\n\\[ R_\\alpha = \\begin{bmatrix} \\cos\\alpha & -\\sin\\alpha & 0 \\\\ \\sin\\alpha & \\cos\\alpha & 0 \\\\ 0 & 0 & 1 \\end{bmatrix} \\]\n\\[ R_\\beta = \\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & \\cos\\beta & -\\sin\\beta \\\\ 0 & \\sin\\beta & \\cos\\beta \\end{bmatrix} \\]\n\\[ R_\\gamma = \\begin{bmatrix} \\cos\\gamma & -\\sin\\gamma & 0 \\\\ \\sin\\gamma & \\cos\\gamma & 0 \\\\ 0 & 0 & 1 \\end{bmatrix} \\]\n\n\\subsection{Generalized force from 3D force on center-of-mass}\nYou are given a system consisting of one object, which may or may not rotate. 
Its position is as above,\n\\[ position = \\begin{bmatrix} f(\\overrightarrow{x}) \\\\ g(\\overrightarrow{x}) \\\\ h(\\overrightarrow{x}) \\end{bmatrix} \\]\nYou apply an external force \\(\\overrightarrow{F_w}\\) (in 3D world coordinates) on the object at its center-of-mass.\nThe generalized force on the coordinate \\(x_i\\) is\n\\[ Q_{x_i} = \\frac{\\partial position}{\\partial x_i}^T \\overrightarrow{F_w} \\]\nSo the generalized force vector must be\n\\[ Q = \\begin{bmatrix} \\nabla f & \\nabla g & \\nabla h \\end{bmatrix} \\overrightarrow{F_w} \\]\n\n\\subsection{Generalized force from 3D torque}\nNow instead of applying a force, you apply a torque \\(\\overrightarrow{T_w}\\) about the center-of-mass, in world coordinates.\nAlternatively, apply a torque \\(\\overrightarrow{T_m}\\) in model coordinates (i.e. rotated according to current Euler angles).\n\\newpage\n\n\n\n\\section{Lagrangian Models}\nThe racer includes 2 objects.\nThe bottom has position x,y,z, rotated by theta.\nThe top is attached to a massless rigid rod, of which the other end is attached to the bottom.\nTo get to the top: start with (x,y,z), rotate left by theta, tilt right by phi, go up by length.\nWhen you're gliding on the floor, z = f(x,y) where f describes the floor.\nWhen you're falling, z is an additional generalized coordinate.\n\n\\newpage\n\n\n\n\\section{Camera}\nTODO: write out this section to describe good camera movement around the lagrangian models\n\n\\label{end}\\end{document}", "meta": {"hexsha": "3dc7e609568968be917892d70111e3d9c19b1951", "size": 13490, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/Specification/Math and Physics.tex", "max_stars_repo_name": "clauderichard/OptimistRacing", "max_stars_repo_head_hexsha": "808ffcef44307c9097035bf5dacdcae8fc7663a7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/Specification/Math and Physics.tex", "max_issues_repo_name": "clauderichard/OptimistRacing", "max_issues_repo_head_hexsha": "808ffcef44307c9097035bf5dacdcae8fc7663a7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/Specification/Math and Physics.tex", "max_forks_repo_name": "clauderichard/OptimistRacing", "max_forks_repo_head_hexsha": "808ffcef44307c9097035bf5dacdcae8fc7663a7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3573883162, "max_line_length": 264, "alphanum_fraction": 0.6793180133, "num_tokens": 4407, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.6859494421679929, "lm_q1q2_score": 0.5670824881192075}}
{"text": "\\documentclass{tufte-handout}\n\n\\usepackage{xcolor}\n\\usepackage{graphicx}\n\n% set hyperlink attributes\n\\hypersetup{colorlinks}\n\n\\usepackage{amsmath}\n\n% set image attributes:\n\\usepackage{graphicx}\n\\graphicspath{ {images/} }\n\n% create environment for bottom paragraph:\n\\newenvironment{bottompar}{\\par\\vspace*{\\fill}}{\\clearpage}\n\n% ============================================================\n\n% define the title\n\\title{SOC 4930/5050: Week 04 Equations Quick \\\\Reference}\n\\author{Christopher Prener, Ph.D.}\n\\date{September 18\\textsuperscript{th}, 2017}\n% ============================================================\n\\begin{document}\n% ============================================================\n\\maketitle % generates the title\n% ============================================================\n\n\\vspace{5mm}\n\\section{Additive Law}\n\\begin{fullwidth}\n\\begin{equation}\n\\scalebox{2} {$ P(A \\cup B)= P(A)+P(B)-P(A \\cap B) $}\n\\end{equation}\n\\end{fullwidth}\n\n\\vspace{5mm}\n\\section{Conditional Probability}\n\\begin{subequations}\n\\begin{equation}\n\\scalebox{2} {$ P(A | B)= \\frac{P(A \\cap B)}{P(B)} $}\n\\end{equation}\n\\vspace{3mm}\n\\par \\noindent \\begin{equation}\n\\scalebox{2} {$ P(B | A)= \\frac{P(A \\cap B)}{P(A)} $}\n\\end{equation}\n\\end{subequations}\n\n\\vspace{5mm}\n\\section{Multiplicative Law}\n\\begin{equation}\n\\scalebox{2} {$ P(A \\cap B)= P(A)*P(B|A) $}\n\\end{equation}\n\n\\vspace{5mm}\n\\section{Independence}\n\\begin{equation}\n\\scalebox{2} {$ P(A \\cap B)= P(A)*P(B) $}\n\\end{equation}\n\n\\vspace{5mm}\n\\section{Bayes' Theorem}\nThe posterior probability can be calculated using this simplified formula:\n\\begin{equation}\n\\scalebox{2} {$ \\frac { xy }{ xy+z(1-x) } $}\n\\end{equation}\n\n% ============================================================\n\\end{document}", "meta": {"hexsha": "68a2ec9becf01ce1f25047a987579290d415afee", "size": 1732, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week-04-equations.tex", "max_stars_repo_name": "slu-soc5050/Equations", "max_stars_repo_head_hexsha": "6398ba3a3d351d95a37d3245fcc6eabb91fdd032", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week-04-equations.tex", "max_issues_repo_name": "slu-soc5050/Equations", "max_issues_repo_head_hexsha": "6398ba3a3d351d95a37d3245fcc6eabb91fdd032", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week-04-equations.tex", "max_forks_repo_name": "slu-soc5050/Equations", "max_forks_repo_head_hexsha": "6398ba3a3d351d95a37d3245fcc6eabb91fdd032", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.7428571429, "max_line_length": 74, "alphanum_fraction": 0.5744803695, "num_tokens": 513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5669337783746974}}
{"text": "\\mainsection{Dummies Guide}\nUsing Rust has some advantages in that we have strong typing and the\nprogramming language has been designed to be safe in terms of stopping\nprogrammers making errors.\nHowever, these advantages come at a disadvantage (especially if you \nare used to Python or C++). \nSo in this section we give an informal overview of how these differences\naffect programming in our Rust system, and in particular with relevance\nto SCALE.\n\n\\subsection{Strong Typing}\nWe take advantage of strong typing to ensure that the types correspond\nto mathematically what they represent.\nFor efficiency we give access to the basic SCALE internal types\n$\\ints$, $\\ClearModp$, $\\SecretModp$, $\\SecretI$ and $\\SecretBit$.\nBut in MAMBA you could do something like\n\\begin{lstlisting}\n    program.bit_length = 16\n    program.security = 40\n\n    a = sint(4)\n    b = sint(5)\n    c = a<b\n\\end{lstlisting}\nBut this is mathematically meaningless, what is meant here is that\nyou are going to `think' of \\verb|sint| values as integers of bit\nsize sixteen and then do the comparison assuming that the numbers\nare indeed of size bounded by $2^{16}$. After all a comparison\nof values in the finite field $\\F_p$ really makes no mathematical\nsense.\n\nIn our Rust system we want to avoid such implicit assumptions\nthat a reader of your code needs to make. Thus we provide types\nwhich help capture what you really want to do.\nSo the above MAMBA code would become\n\\begin{lstlisting}\n   let a: SecretInteger<16> = SecretInteger::from(4);\n   let b: SecretInteger<16> = SecretInteger::from(5);\n   let c = a.lt(b);\n\\end{lstlisting}\nThe type of the value \\verb|c| is a $\\SecretModp$ value.\nWe use the member function notation \\verb|a.lt(b)| instead\nof \\verb|a<b| to force the programmer to realise that you cannot\nuse the output of the comparison in an \\verb|if|-statement.\n\nAnother aspect of this strong typing is the above use of the\n\\verb|from| command. This is used to convert one type to another,\nwhich needs to be done explicitly in almost all case.\n\n\\subsection{Mutable vs Non-Mutable}\nCommon problems in programs are that people accidentally re-assign\na variable and then want the old value again. This is because in\nlanguages like C++ or python all variables are mutable by default.\nIn C++ it is usually considered good practice to define all inputs\nto a function to be \\verb|const| if they are not going to be returned\nas changed for this reason.\nRust goes one step further and assumes all variables are non-mutable \nby default. 
Thus you cannot do\n\\begin{lstlisting}\n    let a = 3;\n    if some_condition {\n       a = a + 1;\n    }\n\\end{lstlisting}\nTo enable this you explicitly have to signal that the variable\nis going to be changed by writing\n\\begin{lstlisting}\n    let mut a = 3;\n    if some_condition {\n       a = a + 1;\n    }\n\\end{lstlisting}\n\n\\subsection{Container Types}\nContainer types are really important in programming, yet in Rust\nthey can be a bit confusing, mainly due to the mutability issue\nabove and the need to maintain the safety of the language.\nThe \\verb|Array| and \\verb|Slice| types we define are very similar\nto the standard \\verb|Vec| type in Rust, and hopefully eventually\nthey will be the same.\nThe difference between an  \\verb|Array| and a \\verb|Slice|  is that\nthe size of an  \\verb|Array| is known at compile time, whereas\na \\verb|Slice|'s size may not be.\nThis distinction enables us to do some optimizations.\n\nSuppose you have an array of ten $\\ClearModp$ values\n\\begin{lstlisting}\n    let mut a: Array<ClearModp, 10> = Array::uninitialized();\n\\end{lstlisting}\nYou may want to assign values to the array elements, or use them\nlater. There are {\\em four} ways of getting an array element:\n\\begin{enumerate}\n  \\item \\verb|A.get(i)| returns a reference to the element and performs\n        an out of bounds check.\n  \\item \\verb|A.get_unchecked(i)| returns a reference to the element and \n        does not perform an out of bounds check.\n  \\item \\verb|A.get_mut(i)| returns a mutable reference to the element and performs\n        an out of bounds check.\n  \\item \\verb|A.get_mut_unchecked(i)| returns a mutable reference to the element and \n        does not perform an out of bounds check.\n\\end{enumerate}\nWith the checked versions the reference comes wrapped in an \\verb|Option|,\nthus you need to \\verb|unwrap| this option before\nusing the item. \nThe unwrapping itself produces a guarded object; to remove the \\verb|Guard| you \nneed to de-reference it.\n\nThus when you use a \\verb|get|, but not a \\verb|get_unchecked|, \nyou need to unwrap the result; in both cases you should then de-reference,\ni.e. you do\n\\begin{lstlisting}\n   println!(\" a[2] = \",*a.get(2).unwrap());\n   println!(\" a[2] = \",*a.get_unchecked(2));\n\\end{lstlisting}\nHowever, for printing we have added some code to make the following,\nsyntactically nicer, code work as well:\n\\begin{lstlisting}\n   println!(\" a[2] = \",a.get(2).unwrap());\n   println!(\" a[2] = \",a.get_unchecked(2));\n\\end{lstlisting}\n\n
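As a compact summary of the four accessors, here is a small added sketch (it uses only the operations documented above):\n\\begin{lstlisting}\n    let mut a: Array<i64, 4> = Array::uninitialized();\n    for i in 0..4 {\n        a.set(i, &(i as i64));       // write an element\n    }\n    // checked read: unwrap the Option, then de-reference the Guard\n    let x = *a.get(2).unwrap();\n    // unchecked read: de-reference only\n    let y = *a.get_unchecked(3);\n    // checked, mutable access\n    let z = *a.get_mut(1).unwrap();\n    // unchecked, mutable access\n    let w = *a.get_mut_unchecked(0);\n\\end{lstlisting}\n\n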
If you want to access an element {\\em and not change it} you should use\none of the non-mutable \\verb|get| operations; if you want to change an\nelement you should use one of the mutable, i.e. \\verb|get_mut|, operations.\nThis is particularly relevant when the internal object is another \\verb|Array| or\n\\verb|Slice|.\n\\begin{lstlisting}\n    let mut a: Slice<Array<i64, 2>> = Slice::uninitialized(5);\n    for i in 0..5 {\n        for j in 0..2 {\n            a.get_mut(i).unwrap().set(j, &((i * 2 + j) as i64));\n        }\n    }\n\\end{lstlisting}\nTo modify elements in a simple \\verb|Array| or \\verb|Slice| use:\n\\begin{lstlisting}\n    let mut a: Array<ClearModp, 10> = Array::uninitialized();\n    a.set(2, &ClearModp::from(1));\n    a.set(3, &ClearModp::from(4));\n\\end{lstlisting}\nNow suppose you have a \\verb|Slice| of \\verb|Array|s\n\\begin{lstlisting}\n    let mut S: Slice<Array<ClearModp, 2>> = Slice::uninitialized(5);\n\\end{lstlisting}\nand after some processing you would like to take the fourth element of the\n\\verb|Slice|.\n\\begin{lstlisting}\n    let mut A = *S.get_mut(4).unwrap();\n\\end{lstlisting}\nThe value \\verb|A| is now an \\verb|Array| of length two.\nWhat you really want is for \\verb|A| to be a copy of that entry of \\verb|S|.\nBut not all types in Rust enable copying: whilst our basic types do,\nthe \\verb|Array| and \\verb|Slice| types do not.\nThus in C++ terms in the above code the \\verb|A| value is really just a `pointer' \nto the fourth \\verb|Array| in the \\verb|Slice| \\verb|S|.\nThus the effect would be that if you changed elements in \\verb|A| then you \nwould also change the elements in \\verb|S|.\nIn addition if \\verb|S| goes out of scope and gets deleted then so will \\verb|A|.\n\nTo avoid this problem you need to \\verb|clone| the output of the \\verb|get| as in\n\\begin{lstlisting}\n    let mut A = S.get(4).unwrap().clone();\n\\end{lstlisting}\nBut you only need to do this as the \\verb|Array| type is not copyable.\nIf you had the following \n\\begin{lstlisting}\n    let mut A : Array<ClearModp, 10> = Array::uninitialized();\n    A.set(3, &ClearModp::from(4));\n    let mut a = *A.get(3).unwrap();\n    a = a + 3;\n\\end{lstlisting}\nthen \\verb|a| really is a copy of the entry in \\verb|A|. So\nat the end we have $a=7$ and $A[3]=4$.\n\nThe \\verb|unchecked| versions of the \\verb|get| operations should only be\n
However, the checked versions do have\na performance cost; they require a run-time branch which may impact the\noptimizers ability to reduce the total number of rounds of communication.\n\nFor more details on \\verb|Option|s see\n\\begin{itemize}\n        \\item \\url{https://doc.rust-lang.org/std/option/}\n\\end{itemize}\n\n\n\n\n\n", "meta": {"hexsha": "455bfac9145f8fbf4152e5560ca605e53b475701", "size": 7618, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RustDocumentation/Dummies.tex", "max_stars_repo_name": "karannewatia/SCALE-MAMBA", "max_stars_repo_head_hexsha": "467b33a6c80050789204ea3ee3b5cf0113354f85", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 196, "max_stars_repo_stars_event_min_datetime": "2018-05-25T11:41:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-12T05:49:50.000Z", "max_issues_repo_path": "RustDocumentation/Dummies.tex", "max_issues_repo_name": "karannewatia/SCALE-MAMBA", "max_issues_repo_head_hexsha": "467b33a6c80050789204ea3ee3b5cf0113354f85", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 49, "max_issues_repo_issues_event_min_datetime": "2018-07-17T15:49:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-19T11:35:31.000Z", "max_forks_repo_path": "RustDocumentation/Dummies.tex", "max_forks_repo_name": "karannewatia/SCALE-MAMBA", "max_forks_repo_head_hexsha": "467b33a6c80050789204ea3ee3b5cf0113354f85", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 90, "max_forks_repo_forks_event_min_datetime": "2018-05-25T11:41:42.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-23T19:15:10.000Z", "avg_line_length": 40.3068783069, "max_line_length": 83, "alphanum_fraction": 0.7278813337, "num_tokens": 2073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.737158174177441, "lm_q2_score": 0.7690802264851919, "lm_q1q2_score": 0.5669337755517969}}
{"text": "\\section{\\module{math} ---\n         Mathematical functions (\\function{sin()} etc.).}\n\\declaremodule{builtin}{math}\n\n\n\\modulesynopsis{Mathematical functions (\\function{sin()} etc.).}\n\nThis module is always available.\nIt provides access to the mathematical functions defined by the \\C{}\nstandard.\nThey are:\n\n\\begin{funcdesc}{acos}{x}\nReturn the arc cosine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{asin}{x}\nReturn the arc sine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{atan}{x}\nReturn the arc tangent of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{atan2}{y, x}\nReturn \\code{atan(\\var{y} / \\var{x})}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{ceil}{x}\nReturn the ceiling of \\var{x} as a real.\n\\end{funcdesc}\n\n\\begin{funcdesc}{cos}{x}\nReturn the cosine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{cosh}{x}\nReturn the hyperbolic cosine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{exp}{x}\nReturn \\code{e**\\var{x}}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{fabs}{x}\nReturn the absolute value of the real \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{floor}{x}\nReturn the floor of \\var{x} as a real.\n\\end{funcdesc}\n\n\\begin{funcdesc}{fmod}{x, y}\nReturn \\code{\\var{x} \\%\\ \\var{y}}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{frexp}{x}\nReturn the matissa and exponent for \\var{x}.  The mantissa is\npositive.\n\\end{funcdesc}\n\n\\begin{funcdesc}{hypot}{x, y}\nReturn the Euclidean distance, \\code{sqrt(\\var{x}*\\var{x} + \\var{y}*\\var{y})}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{ldexp}{x, i}\nReturn \\code{\\var{x} * (2**\\var{i})}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{log}{x}\nReturn the natural logarithm of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{log10}{x}\nReturn the base-10 logarithm of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{modf}{x}\nReturn the fractional and integer parts of \\var{x}.  Both results\ncarry the sign of \\var{x}.  
The integer part is returned as a real.\n\\end{funcdesc}\n\n\\begin{funcdesc}{pow}{x, y}\nReturn \\code{\\var{x}**\\var{y}}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sin}{x}\nReturn the sine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sinh}{x}\nReturn the hyperbolic sine of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{sqrt}{x}\nReturn the square root of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{tan}{x}\nReturn the tangent of \\var{x}.\n\\end{funcdesc}\n\n\\begin{funcdesc}{tanh}{x}\nReturn the hyperbolic tangent of \\var{x}.\n\\end{funcdesc}\n\nNote that \\function{frexp()} and \\function{modf()} have a different\ncall/return pattern than their \\C{} equivalents: they take a single\nargument and return a pair of values, rather than returning their\nsecond return value through an `output parameter' (there is no such\nthing in Python).\n\nThe module also defines two mathematical constants:\n\n\\begin{datadesc}{pi}\nThe mathematical constant \\emph{pi}.\n\\end{datadesc}\n\n\\begin{datadesc}{e}\nThe mathematical constant \\emph{e}.\n\\end{datadesc}\n\n\\begin{seealso}\n  \\seemodule{cmath}{Complex number versions of many of these functions.}\n\\end{seealso}\n", "meta": {"hexsha": "7addda43b7afe9165d306063b0e6fe290ebf53dc", "size": 2889, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/lib/libmath.tex", "max_stars_repo_name": "1byte2bytes/cpython", "max_stars_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_stars_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-25T21:41:07.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-25T21:41:07.000Z", "max_issues_repo_path": "Doc/lib/libmath.tex", "max_issues_repo_name": "1byte2bytes/cpython", "max_issues_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_issues_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/lib/libmath.tex", "max_forks_repo_name": "1byte2bytes/cpython", "max_forks_repo_head_hexsha": "7fbaeb819ca7b20dca048217ff585ec195e999ec", "max_forks_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.9285714286, "max_line_length": 78, "alphanum_fraction": 0.7088958117, "num_tokens": 912, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5669337744730902}}
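A worked instance of the pair-returning convention described above (our own numbers, easily checked by hand): \function{frexp()} yields a mantissa--exponent pair $(m, e)$ with $x = m \cdot 2^{e}$ and $0.5 \le |m| < 1$ for nonzero $x$, while \function{modf()} splits $x$ into its fractional and integer parts:
\[ \mathrm{frexp}(6.5) = (0.8125,\ 3), \qquad 0.8125 \times 2^{3} = 6.5, \]
\[ \mathrm{modf}(2.75) = (0.75,\ 2.0), \qquad 0.75 + 2.0 = 2.75. \]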
{"text": "\\section{Pressurized hollow sphere}\n\\paragraph{}\nThe problem is a pressurized hollow sphere subjected to the internal pressure.\nThe geometry of the problem is described in Fig.\\ref{oct_fig:ex_pre-hollow-sphere}. \n\n\\begin{figure}[h!]\n  \\centering\n  \\scalebox{1}{\\includegraphics{octree/ex_images/oct_ex_image021.jpg}}\n  \\caption{Pressurized hollow sphere}\n  \\label{oct_fig:ex_pre-hollow-sphere}\n\\end{figure}\n\n\\paragraph{}\nIn the example, the external pressure is set to be zero so that only the internal one is considered.\nThe geometric properties are: $a=\\SI{10}{\\meter}$ and $b=\\SI{50}{\\meter}$.\nThe boundary condition is: $p_a = \\SI{10}{\\newton \\per \\square \\meter}$ ,$p_b = 0$ and the rigid body motion is prevented.\nThe material properties are: $E=\\SI{200}{\\newton \\per \\square \\meter}$ and $\\nu=0.3$.\nFor simplification, only a quarter of the sphere is analysed as shown in Fig.~\\ref{oct_fig:ex_hollow_sphere_mesh_1716}\nAnalytical surface traction is applied on all of the boundary surfaces.\nFirst order tetrahedral element is adopted to calculated the displacement and stress and they are compared to the exact solution as in Eq.~\\ref{oct_eq:ex_hollow_sphere_ana_sol} in spherical coordinate.\n\n\\begin{subequations}\n\\begin{align}\n  u & = \\frac{1}{2E(b^3-a^3)R^2}\\left\\{ 2(p_aa^3-p_bb^3)(1-2\\nu)R^3+(p_a-p_b)(1+\\nu)b^3a^3\\right\\}\\\\\n  \\sigma_{RR} & = \\frac{p_aa^3-p_bb^3}{b^3-a^3} - \\frac{(p_a-p_b)b^3a^3}{(b^3-a^3)R^3}\\\\\n  \\sigma_{\\theta\\theta} & = \\frac{p_aa^3-p_bb^3}{b^3-a^3} + \\frac{(p_a-p_b)b^3a^3}{2(b^3-a^3)R^3}\\\\\n  \\sigma_{\\phi\\phi} & = \\sigma{\\theta\\theta}\n  \\label{oct_eq:ex_hollow_sphere_ana_sol}\n\\end{align}\n\\end{subequations}\nThe tensor transformation from spherical coordinate to cartesian coordinate can be written as Eq.~\\ref{eqn:transformation} with according to Fig.~\\ref{octree_fig:oct_ex_hollow_sphere_tran}.\n\\begin{subequations}\n  \\begin{align}\n    \\begin{bmatrix}\n      S_{xx} & S_{xy} & S_{xz} \\\\\n      S_{xy} & S_{yy} & S_{yz} \\\\\n      S_{xz} & S_{yz} & S_{zz} \\\\\n    \\end{bmatrix} = T\\begin{bmatrix}\n      S_{RR} & S_{R\\theta} & S_{R\\phi} \\\\\n      S_{R\\theta} & S_{\\theta\\theta} & S_{\\theta\\phi}\\\\\n      S_{R\\phi} & S_{\\theta\\phi} & S_{\\phi\\phi} \\\\\n    \\end{bmatrix} T^T\\\\\n  T = \n\\begin{bmatrix}\n\\sin\\theta\\cos\\phi & \\cos\\theta\\cos\\phi & -\\sin\\phi \\\\\n\\sin\\theta\\sin\\phi & \\cos\\theta\\sin\\phi & \\cos\\phi  \\\\\n\\cos\\theta & -\\sin\\theta & 0 \\\\\n\\end{bmatrix}\n\\end{align}\n\\label{eqn:transformation}\n\\end{subequations}\n%\n\\begin{figure}[h!]\n    \\centering\n    \\scalebox{0.5}{\\includegraphics{octree/ex_images/oct_ex_tran.png}}\n    \\caption{Coordinate transformation}\n    \\label{octree_fig:oct_ex_hollow_sphere_tran}\n  \\end{figure}\n%\n\\begin{figure}\n    \\centering\n    \\begin{subfigure}[b]{0.49\\linewidth}\n        \\scalebox{0.25}{\n            \\includegraphics{octree/ex_images/hollow_sphere_1716_out.eps}\n        }\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.49\\linewidth}\n        \\scalebox{0.25}{\n            \\includegraphics{octree/ex_images/hollow_sphere_1716_side.eps}\n        }\n    \\end{subfigure}\\\\\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.5}{\n            \\includegraphics{octree/ex_images/hollow_sphere_1716_front.eps}\n        }\n    \\end{subfigure}\n    \\caption{Mesh of the hollow sphere with 1716 DOFs}\n    
\label{oct_fig:ex_hollow_sphere_mesh_1716}\n\end{figure}\n\begin{figure}\n    \centering\n    \begin{subfigure}[b]{0.49\linewidth}\n        \scalebox{0.25}{\n            \includegraphics{octree/ex_images/hollow_sphere_3896_out.eps}\n        }\n    \end{subfigure}\n    \begin{subfigure}[b]{0.49\linewidth}\n        \scalebox{0.25}{\n            \includegraphics{octree/ex_images/hollow_sphere_3896_side.eps}\n        }\n    \end{subfigure}\\\n    \begin{subfigure}[b]{1\linewidth}\n        \centering\n        \scalebox{0.5}{\n            \includegraphics{octree/ex_images/hollow_sphere_3896_front.eps}\n        }\n    \end{subfigure}\n    \caption{Mesh of the hollow sphere with 3896 DOFs}\n    \label{oct_fig:ex_hollow_sphere_mesh_3896}\n\end{figure}\n\begin{figure}\n    \centering\n    \begin{subfigure}[b]{0.49\linewidth}\n        \scalebox{0.25}{\n            \includegraphics{octree/ex_images/hollow_sphere_12078_out.eps}\n        }\n    \end{subfigure}\n    \begin{subfigure}[b]{0.49\linewidth}\n        \scalebox{0.25}{\n            \includegraphics{octree/ex_images/hollow_sphere_12078_side.eps}\n        }\n    \end{subfigure}\\\n    \begin{subfigure}[b]{1\linewidth}\n        \centering\n        \scalebox{0.5}{\n            \includegraphics{octree/ex_images/hollow_sphere_12078_front.eps}\n        }\n    \end{subfigure}\n    \caption{Mesh of the hollow sphere with 12078 DOFs}\n    \label{oct_fig:ex_hollow_sphere_mesh_12078}\n\end{figure}\n% \begin{figure}[h!]\n%   \centering\n%   \scalebox{0.3}{\includegraphics{octree/ex_images/oct_ex_mesh.png}}\n%   \caption{Mesh of the hollow sphere}\n%   \label{oct_fig:ex_hollow_sphere_meshP}\n% \end{figure}\n\nThe stress boundary condition in Eq.~\ref{oct_eq:ex_sphere_hole_bond_str} is applied on the two spherical surfaces.\n\begin{subequations}\n    \begin{align}\n    \sigma_{RR}(R=a,\phi,\theta) & = \frac{p_aa^3-p_bb^3}{b^3-a^3} - \frac{(p_a-p_b)b^3a^3}{(b^3-a^3)R^3}\\\n    u_z(x,y,0) &= 0\\\n    u_y(x,0,z) & = 0 \\\n    u_x(0,y,z) & = 0\n  \end{align}\n\label{oct_eq:ex_sphere_hole_bond_str}\n\end{subequations}\n%\nThe convergence study is plotted in Fig.~\ref{oct_fig:ex_hollow_sphere_conv}, and the meshes with the corresponding DOFs are shown in Fig.~\ref{oct_fig:ex_hollow_sphere_mesh_1716}, Fig.~\ref{oct_fig:ex_hollow_sphere_mesh_3896} and Fig.~\ref{oct_fig:ex_hollow_sphere_mesh_12078}.\n\begin{figure}[h!]\n    \centering\n    \scalebox{0.75}{\includegraphics{octree/ex_images/ex_sphere_hole_conv.eps}}\n    \caption{Convergence of displacement error}\n    \label{oct_fig:ex_hollow_sphere_conv}\n\end{figure}", "meta": {"hexsha": "3b1962a136f8aa16c944d1e5d4ffded452fa6049", "size": 5773, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "octree/ex_sphere_hole3d.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "octree/ex_sphere_hole3d.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "octree/ex_sphere_hole3d.tex", "max_forks_repo_name": 
"fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2721088435, "max_line_length": 272, "alphanum_fraction": 0.6800623593, "num_tokens": 1936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.76908023177796, "lm_q1q2_score": 0.566933770571483}}
{"text": "\\subsection{Paraboloids}\r\n\\noindent\r\nThe paraboloid look like a parabola that has been rotated and extruded about its axis of symmetry. It is radially symmetric, and its level curves are circles. Paraboloids have the form $z = ax^2 + by^2$ where $a,b \\in \\mathbb{R}$.\r\n\r\n[INSERT IMAGES]", "meta": {"hexsha": "7ba625cc56cdc904e2321b35f6051a15471f9c34", "size": 286, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/paraboloids.tex", "max_stars_repo_name": "rawsh/Math-Summaries", "max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/paraboloids.tex", "max_issues_repo_name": "rawsh/Math-Summaries", "max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/paraboloids.tex", "max_forks_repo_name": "rawsh/Math-Summaries", "max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.2, "max_line_length": 231, "alphanum_fraction": 0.7517482517, "num_tokens": 82, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5669337700321295}}
{"text": "\\documentclass[UTF8]{article}\n\\usepackage[UTF8]{ctex}\n\\usepackage{amsmath}\n\\usepackage{chngpage}\n\\usepackage{afterpage}\n\\usepackage{amsfonts}\n\\begin{document}\n    \\title{Homework 6: Greedy algorithm, proof, etc.}\n    \\date{}\n    \\maketitle\n    \\section{Normal Coin Changing}\n    \\noindent \\textsc{Algorithm}: \\textit{pseudo code for changing quarters, dimes, nickels and pennies}\n    \\begin{adjustwidth}{1cm}{0cm}\n         \\tt \n        \\noindent \\textbf{def} Change(total, change):\\\\\n        \\indent \\noindent \\textbf{for} change\\_type \\textbf{in} change\\_types.sorted(reverse=True):\\\\\n        \\indent \\indent \\noindent \\textbf{if} total >= change\\_type.value:\\\\\n        \\indent \\indent \\indent \\noindent change[change\\_type] += 1\\\\\n        \\indent \\indent \\indent \\noindent \\textbf{return} Change(total-change\\_type.value, change)\\\\\n        \\indent \\noindent \\textbf{return} change\n    \\end{adjustwidth}\\par\\par\n    \\noindent \\textsc{Proof}\\par\n    Define a coin set, if for all given total, solution generated using the algorithm above yields an optimal solution, as greedily changable.\n\n    It's obvious that when a coin group $A$ greedily changable, $\\forall A' \\subset A$ greedily changable.\n\n    Let $A$ greedily changable, $a' \\notin A, \\forall a \\in A\\; a < a'$, with greedy algorithm definition we have: $A\\cup\\{a'\\}$ can be greedily changed $\\Leftrightarrow$ $\\forall price \\in (a', 2a')$, $coin\\_amount_{A}(price) \\geq coin\\_amount_{A}(price-a')+1$. As $A$ greedily changable, the upper limit can be reduced to$ \\left\\lceil a'/\\mathrm{max}(A)\\right\\rceil \\mathrm{max}(A)$.\n\n    We may give a \\textsc{Sufficient Condition} here: if $a' = k\\mathrm{max}(A), k \\in \\mathbb{N^*}\\backslash \\{1\\}$,then $A\\cup\\{a'\\}$ can be greedily changed. As the upper limit now reduced to $a'$ and obviously we have $k > 1$ fits the condition.\n\n    As of this situation, dimes, nickels and pennies fit the sufficient condition. then we have \\{dime, nickel, penny\\} greedily changable.\n\n    In $(a', \\left\\lceil a'/\\mathrm{max}(A)\\right\\rceil \\mathrm{max}(A))$ we can verify that $coin\\_amount_{A}(price) \\geq coin\\_amount_{A}(price-a')+1$ holds in all situations. Now we have \\{quarter, dime, nickel, penny\\} greedily changable. 
$\Box$ \n\n    \section{Coins by Geometric Progression}\n    As proved above, a geometric progression starting at 1 with an integer common ratio $c \ge 2$ fits the sufficient condition for being greedily changeable.\n\n    \section{Example not Yielding Optimal}\n    Using the greedy algorithm with the coin set \{1, 5, 11\} fails to yield the optimal solution for a total of 15.\\\n    Greedy: $1\times 11$, $4 \times 1$ (5 coins)\\\n    Optimal: $3 \times 5$ (3 coins)\n\n\n\end{document}\n", "meta": {"hexsha": "243a02ab168815dd4722e6543aeb14b129028b52", "size": 2677, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CS241-Homework/greedy.tex", "max_stars_repo_name": "Victrid/Atlas", "max_stars_repo_head_hexsha": "da25d50424790e571f29b66fc815245c1093798c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-19T16:00:07.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-19T16:00:07.000Z", "max_issues_repo_path": "CS241-Homework/greedy.tex", "max_issues_repo_name": "Victrid/Atlas", "max_issues_repo_head_hexsha": "da25d50424790e571f29b66fc815245c1093798c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CS241-Homework/greedy.tex", "max_forks_repo_name": "Victrid/Atlas", "max_forks_repo_head_hexsha": "da25d50424790e571f29b66fc815245c1093798c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1956521739, "max_line_length": 385, "alphanum_fraction": 0.6940605155, "num_tokens": 820, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410572017153, "lm_q2_score": 0.8519528057272544, "lm_q1q2_score": 0.5668984037905908}}
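A concrete instance of that final verification for the quarter (our own worked check): with $A = \{1, 5, 10\}$ and $a' = 25$, the interval to check is $(25, \lceil 25/10 \rceil \cdot 10) = (25, 30)$, and indeed
\[
\begin{array}{llll}
price = 26: & coin\_amount_A(26) = 4 & \geq & coin\_amount_A(1) + 1 = 2,\\
price = 27: & coin\_amount_A(27) = 5 & \geq & coin\_amount_A(2) + 1 = 3,\\
price = 28: & coin\_amount_A(28) = 6 & \geq & coin\_amount_A(3) + 1 = 4,\\
price = 29: & coin\_amount_A(29) = 7 & \geq & coin\_amount_A(4) + 1 = 5,
\end{array}
\]
e.g.\ $26 = 10 + 10 + 5 + 1$ takes 4 coins without the quarter, while $26 - 25 = 1$ takes a single penny after it.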
{"text": "\\section{Protocol Parameters}\n\nIn Shelley, the size of a UTxO entry was bounded from above, which\nwill not be the case anymore. The protocol parameter\n$\\var{minUTxOValue}$ was used to prevent an attack on the UTxO size,\nbut as there is no upper bound on the memory usage of a UTxO entry\nanymore, this parameter will be retired in favor of two new\nparameters, $\\var{adaPerUTxOByte}$ and $\\var{outputSizeConstants}$.\n\n\n\\begin{figure*}[htb]\n  \\emph{Protocol Parameters}\n  %\n  \\begin{equation*}\n      \\begin{array}{r@{~\\in~}lr}\n        \\var{adaPerUTxOByte} \\mapsto \\nonnegReals & \\PParams & \\text{conversion factor for UTxO storage space}\\\\\n        \\var{outputSizeConstants} \\mapsto \\Z \\times \\Z \\times \\Z & \\PParams & \\text{constants for outputSize}\n      \\end{array}\n  \\end{equation*}\n  %\n  \\emph{Accessor Functions}\n  %\n  \\begin{center}\n    \\fun{adaPerUTxOByte},\n    \\fun{outputSizeConstants}\n  \\end{center}\n  %\n  \\caption{Definitions Used in Protocol Parameters}\n  \\label{fig:defs:protocol-parameters}\n\\end{figure*}\n\n", "meta": {"hexsha": "4825d18857f48e4b57484f58c6ab14bc771fd552", "size": 1017, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "shelley-ma/formal-spec/protocol-parameters.tex", "max_stars_repo_name": "eyelash/cardano-ledger-specs", "max_stars_repo_head_hexsha": "2939eeb5d1ba7fda2a09bc5b4b84ae29aea904c5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "shelley-ma/formal-spec/protocol-parameters.tex", "max_issues_repo_name": "eyelash/cardano-ledger-specs", "max_issues_repo_head_hexsha": "2939eeb5d1ba7fda2a09bc5b4b84ae29aea904c5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "shelley-ma/formal-spec/protocol-parameters.tex", "max_forks_repo_name": "eyelash/cardano-ledger-specs", "max_forks_repo_head_hexsha": "2939eeb5d1ba7fda2a09bc5b4b84ae29aea904c5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.78125, "max_line_length": 112, "alphanum_fraction": 0.7099311701, "num_tokens": 312, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519527869325346, "lm_q2_score": 0.6654105720171531, "lm_q1q2_score": 0.5668983912843857}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{biblatex}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n%\\addbibresource{bib.bib}\n\\setlength{\\parindent}{0em}\n\\bibliography{bib}\n\n\n\\title{Recap of general math}\n\\author{warm up!}\n\\date{\\today}\n\\linespread{1.5}\n\n\n%% remove numbering\n\\makeatletter\n\\renewcommand{\\@seccntformat}[1]{}\n\\makeatother\n\n\n\\begin{document}\n\t\n\\maketitle\n% https://www.hitbullseye.com/Probability-Examples.php\t\n\\subsection{Example 1}\nA coin is thrown 3 times .what is the probability that atleast one head is obtained?\n\t\n\\subsection{Example 2}\n\nFind the probability of getting a numbered card when a card is drawn from the pack of 52 cards.\t\n\t\n\\subsection{Example 3}\nWhat is the probability of getting a sum of 7 when two dice are thrown?\n\t\n\n\\subsection{Example 4}\nA box contains three coins: two regular coins and one fake two-headed coin $(P(H)=1)$,\n\n1) You pick a coin at random and toss it. What is the probability that it lands heads up?\n\n2) You pick a coin at random and toss it, and get heads. What is the probability that it is the two-headed coin?\n\n\\subsection{Example 5}\nMultiply matrices:\n\\begin{equation}\n\t\\left[\n\t\\begin{array}{cc}\n\t\t5 & 1 \\\\\n\t\t4 & 2 \\\\\n\t\t4 & 3 \\\\\n\t\t2 & 4 \\\\  \n\t\\end{array}\n\t\\right]\n\t\\left[\n\t\\begin{array}{cccc}\n\t\t \n\t\t5 & 1 & 99 & -1\\\\\n\t\t8 & 0 & 87 & 0\\\\\n\t\t\n\t\\end{array}\n\\right]\n\\end{equation}\n\n\\begin{equation}\n\t\\left[\n\t\\begin{array}{cccc}\n\t\n\t5 & 1 & 99 & -1\\\\\n\t8 & 0 & 87 & 0\\\\\n\t\n\\end{array}\n\t\\right]\n\t\\left[\n\t\\begin{array}{cc}\n\t5 & 1 \\\\\n\t4 & 2 \\\\\n\t4 & 3 \\\\\n\t2 & 4 \\\\  \n\\end{array}\n\t\\right]\n\\end{equation}\n\n\\vspace{2cm}\n\nNOTE: To get 10 grade, solve one of the following. Otherwise you'll get 8. \n\n\\subsection{Example 6}\n% https://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Probability/BS704_Probability6.html\nA patient goes to see a doctor. The doctor performs a test with 99 percent reliability -- that is, 99 \\% of people who are sick test positive and 99 \\% of the healthy people test negative. The doctor knows that only 1 percent of the people in the country are sick. Now the question is: if the patient tests positive, what are the chances the patient is sick?\n\n\\subsection{Example 7}\n%ttps://www.hackmath.net/en/math-problem/18143?tag_id=151\nUsing vector dot product calculate the angle of the body diagonals of the cube. 
(Give the result in radians, as an angle in degrees, or as a cosine.)\n\n\n\end{document}", "meta": {"hexsha": "e46f3bf4a7fdd25a24eea27fc1c72478e2b84cbc", "size": 2335, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week1/problem_set.tex", "max_stars_repo_name": "sergeychuvakin/SAS_Quantitative_Methods", "max_stars_repo_head_hexsha": "4c39fc013ae2d4a14565e6963b9da2f6da268965", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week1/problem_set.tex", "max_issues_repo_name": "sergeychuvakin/SAS_Quantitative_Methods", "max_issues_repo_head_hexsha": "4c39fc013ae2d4a14565e6963b9da2f6da268965", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week1/problem_set.tex", "max_forks_repo_name": "sergeychuvakin/SAS_Quantitative_Methods", "max_forks_repo_head_hexsha": "4c39fc013ae2d4a14565e6963b9da2f6da268965", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.8265306122, "max_line_length": 358, "alphanum_fraction": 0.7070663812, "num_tokens": 748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850402140659, "lm_q2_score": 0.8031737869342623, "lm_q1q2_score": 0.5668680435102819}}
{"text": "%! TEX root = ../Text.tex\n%auto-ignore\n\\providecommand{\\MainFolder}{..}\n\\documentclass[\\MainFolder/Text.tex]{subfiles}\n\\begin{document}\n\\chapter{Reduced cyclic homology of A-infinity-algebras}\n\\label{App:AInfty}\nIn this appendix, we prove Proposition~\\ref{Prop:Reduced}. The idea from \\cite{LodayCyclic} is to resolve cyclic (co)invariants degree-wise and obtain certain bicomplexes with better properties.\n\nIn Section~\\ref{Sec:FF}, given a strictly unital $\\AInfty$-algebra on a graded vector space $V$, we define the normalized and reduced Hochschild (co)chain complexes (Definition~\\ref{Def:NormRedHoch}) and prove the computational prerequisites CP1--CP4 (Lemmas~\\ref{Lem:CP1}, \\ref{Lem:CP2}, \\ref{Lem:CP3} and \\ref{Lem:CP4}); they are necessary for the development of the cyclic homology theory in the upcoming section. These prerequisites, like squaring to zero of the Hochschild differential, seem to be much harder computationally for $\\AInfty$-algebras than for $\\DGA$'s. Proofs of some of these relations in different formalisms appeared already in \\cite{Mescher2016} and \\cite{Lazarev2003}.\n\nIn Section~\\ref{Sec:HomBi}, we define Loday's and Connes' cyclic half-plane bicomplexes for $\\AInfty$-algebras together with their normalized and reduced versions (Definition~\\ref{Def:CycBico}). We then summarize some convergence results for spectral sequences associated to horizontal, vertical and diagonal filtrations (Proposition~\\ref{Prop:ConvOfSpSeq}). In a series of lemmas (Lemmas~\\ref{Lem:LodCycBiCycHom}, \\ref{Lem:LodConCycBi}, \\ref{Lem:ConNormVer} and~\\ref{Lem:ReducedCyclic}), we prove that some of the (co)homologies are isomorphic. These lemmas copy results for $\\DGA$'s from \\cite{LodayCyclic}; we just do them carefully for half-plane bicomplexes and more explicitly. There is a new phenomenon of long chains coming from completing the direct sum total complex; these long chains seem to disappear in homology if the degrees of $V$ are bounded (Lemma~\\ref{Lem:BddDegrees}). Additionally, we point out some differences between first-quadrant and half-plane bicomplexes (Remark~\\ref{Rem:SpecSeq}) and mention the relation to mixed complexes (Remark~\\ref{Rem:MixedCompl}).\n\nIn Section~\\ref{Sec:FinRem}, we obtain short exact sequences for reduced Connes' bicomplexes (Lemma~\\ref{Lem:ConBiRed}), which replace, up to quasi-isomorphisms, the non-exact sequences for reduced cyclic Hochschild (co)chains. We summarize the isomorphisms of (co)homologies from Section~\\ref{Sec:HomBi} (Figure~\\ref{Fig:FinalPictureHom}), finish the proof of Proposition~\\ref{Prop:Reduced} and formulate a few open question (Questions~\\ref{Q:OpenProbAInftx}).\n%\n%We develop this theory for $\\AInfty$-algebras following the theory for $\\DGA$'s from \\cite{LodayCyclic}.\n%\n%by studying the cyclic (co)homology of an $\\AInfty$-algebra (see Definition~\\ref{Def:CycHom}) using the resolution of cyclic (co)invariant  bicomplexes from .\n%\n%\n%To recall, Proposition~\\ref{Prop:Reduced} states that the Connes' cyclic cohomology $\\H_\\lambda^*(\\mathcal{A})$ of a strictly unital strictly augmented $\\AInfty$-algebra $\\mathcal{A}$ relates to its reduced version $\\H_{\\lambda, \\mathrm{red}}^*(\\mathcal{A})$ via the formula\n%\\begin{equation} \\label{Eq:MainFormula}\n% \\H_\\lambda^*(\\mathcal{A}) = \\H_{\\lambda,\\mathrm{red}}^*(\\mathcal{A})\\oplus \\H_\\lambda^*(\\K),\n%\\end{equation}\n%where $\\K$ is a field and all chain complexes are over $\\K$. 
We will repeat some definitions from Section~\ref{Sec:Alg2} to make the appendix self-contained. \n%The reduced cohomology $\bar{H}^*_\lambda(\mathcal{A})$ is obtained by forgetting the unit and is easier to compute. Back in our $\IBLInfty$-theory, suppose that all graphs with an $A$-vertex with~$\NOne$ vanish, so that strict unitality is guaranteed, and so that the reduced chain complex $(\OPQ_{110}^\PMC, \CDBCyc \bar{H}_{\mathrm{dR}}(M)[3-n])$, where $\bar{H}_{\mathrm{dR}}^* = H_{\mathrm{dR}}^*(M)/H_{\mathrm{dR}}^0(M)$, is well-defined. Using \eqref{Eq:MainFormula}, we can then compute the homology of $(\OPQ_{110}^\PMC, \CDBCyc H_{\mathrm{dR}}(M)[3-n])$ from the homology of $(\OPQ_{110}^\PMC, \CDBCyc \bar{H}_{\mathrm{dR}}(M)[3-n])$ by adding \eqref{Eq:Field}.\n%\section{Glossary and overview}\n%\n%We first summarize definitions of the basic spaces and maps.\n%\n%\begin{Definition}[Basic operations]\n%Let~$V$ be a $\Z$-graded vector space. For every~$k\ge 1$ and $v_1$,~$\dotsc$, $v_k \in V[1]$, we define the cyclic permutation\n%\[\CycPermOp_k(v_1 \otimes \dotsb \otimes v_k) \coloneqq (-1)^{\Abs{v_k}(\Abs{v_1} + \dotsb + \Abs{v_{k-1}})} v_k \otimes v_1 \otimes \dotsb \otimes v_{k-1},\]\n%where $\Abs{v}$ denotes the degree in $V[1]$. We set\n%\[ N_k \coloneqq \sum_{i = 0}^{k-1} \CycPermOp^{i} : V[1]^{\otimes k} \longrightarrow V[1]^{\otimes k}. \]\n%\n%Consider a strict $\AInfty$-algebra on $V$ given by the operations $\mu_j: V[1]^{\otimes j}\rightarrow V[1]$ for~$j\ge 1$ (see Definition~\ref{Def:CyclicAinfty}). For every~$k\ge 1$ and~$1 \le j \le k$, we define the operations $[\Hd']^k_j$, $[R]^k_j: V[1]^{\otimes k} \rightarrow V[1]^{\otimes k-j+1}$ by the following formulas:\n%\begin{align*}\n%[\Hd']^k_j &\coloneqq \sum_{i=0}^{k-j} \CycPermOp^i_{k-j+1}\circ(\mu_j \otimes \Id^{k-j})\circ \CycPermOp_k^{-i},\\ \n%[R]^k_j &\coloneqq \sum_{i=1}^{j-1} (\mu_j \otimes \Id^{k-j})\circ \CycPermOp_k^i.\n%\end{align*}\n%\n%Let $\NOne\in V[1]$ be a strict unit for $(V,(\mu_j))$. This means that $\Abs{\NOne} = -1$, $\mu_2(\NOne,v) = (-1)^{\Abs{v}+1} \mu_2(v, \NOne)=v$ for all $v\in V[1]$ and $\mu_k(v_1, \dotsc, v_k) = 0$ for all $k\neq 2$ whenever~$v_i = \NOne$ for some $1\le i \le k$ (see Definition~\ref{Def:AugUnit}).\n%For $2\le i \le k+1$, we define the maps $[s]_i^k : V[1]^{\otimes k} \rightarrow V[1]^{\otimes k + 1}$ by\n%\[ [s]_i^k(v_1\otimes\dotsb\otimes v_k) \coloneqq (-1)^{\Abs{v_1}+\dotsb + \Abs{v_{i-1}}} v_1\otimes \dotsb\otimes v_{i-1}\otimes \NOne\otimes v_{i}\otimes\dotsb\otimes v_k \]\n%for all $v_1$,~$\dotsc$, $v_k \in V[1]$.\n%We define the map $[s]^k_1: V[1]^{\otimes k} \rightarrow V[1]^{\otimes k + 1}$ for $k\ge 1$ similarly --- by putting $\NOne$ at the beginning.\n% \n%Recall that we defined $\B V = \bigoplus_{k=1}^\infty V[1]^{\otimes k}$ to be the weight-reduced bar complex (see Definition~\ref{Def:BarComplex}). 
We form the maps $\\CycPermOp$, $N$, $\\Hd'$, $R$, $s_i$, $s_1: \\B V \\rightarrow \\B V$ by setting\n%\\begin{align*}\n%\\CycPermOp &\\coloneqq \\sum_{k = 1}^\\infty \\CycPermOp_k, &\n%N &\\coloneqq \\sum_{k=1}^\\infty N_k, &\n%\\Hd'&\\coloneqq \\sum_{k=1}^{\\infty} \\sum_{j=1}^k [\\Hd']_j^k, \\\\ \n%R&\\coloneqq \\sum_{k=1}^\\infty \\sum_{j=1}^k [R]_j^k, & \n%s_i &\\coloneqq \\sum_{k=i-1}^\\infty [s]_i^k, & s_1 &\\coloneqq \\sum_{k=1}^\\infty [s]_1^k.\n%\\end{align*}\n%We define the \\emph{Hochschild boundary operator} by\n%\\[ \\Hd\\coloneqq \\Hd' + R. \\]\n%\\end{Definition}\n%\n%\n%We will work with the following (co)chain complexes with the differential induced from $\\Hd$ or its dual $\\Hd^*$. Note that one has to check that the complexes are well-defined.\n%\n%\\begin{Definition} \\label{Def:Complexes}\n%Let $\\mathcal{A} = (V, (\\mu_j))$ be an $\\AInfty$-algebra. For all $q\\in \\Z$ let\n%\\[ \\begin{aligned}\n%\\HC_q(V) &\\coloneqq (\\B V)_{-q-1}, & \\HC^q(V) &\\coloneqq (\\CDB V)^{-q-1}, \\\\\n%\\HC^\\lambda_q(V) &\\coloneqq (\\BCyc V)^{-q-1}, & \\HC_\\lambda^q &\\coloneqq (\\CDBCyc V)^{-q-1}.\n%\\end{aligned} \\]\n%Here $\\CDB V$ is the completion of the weight-graded vector space $\\DB V$ with respect to the weights, i.e we have $\\CDB V = \\bigoplus_{q\\in \\Z} (\\CDB V)^{q}$, where\n%\\[ (\\CDB V)^{q}  = \\prod_{k=1}^\\infty (\\DB V)^{q}_k. \\]\n%We identify $\\CDBCyc V$ with the subspace of $\\CDB V$ consisting of cyclic symmetric maps. We call $(\\HC_*,\\Hd)$ and $(\\HC^*,\\Hd^*)$ the \\emph{Hochschild complexes} and denote their homologies by $H_*(\\mathcal{A})$ and $H^*(\\mathcal{A})$, respectively. We call $(\\HC_*^\\lambda,\\Hd)$ and $(\\HC^*_\\lambda,\\Hd^*)$ \\emph{Connes' cyclic complexes} and denote their homologies by $H^\\lambda_*(\\mathcal{A})$ and $H_\\lambda^*(\\mathcal{A})$, respectively.\n%\n%Suppose that $\\mathcal{A}$ has a strict unit $\\NOne$. Denote $\\bar{V}\\coloneqq V/\\langle \\NOne \\rangle$. For all $q\\in \\Z$ let\n%\\[ \\begin{aligned}\n%\\bar{\\HC}_q(V) &\\coloneqq (\\B V)^{-q-1}, & \\bar{\\HC}^q(V) &\\coloneqq \\{ \\psi \\in \\HC^q(V) \\mid s_i^* \\psi = 0 \\text{ for all }i\\ge 2\\}, \\\\\n%\\bar{\\HC}_q^\\lambda(V) &\\coloneqq \\HC_q^\\lambda(\\bar{V}), & \\bar{\\HC}^q_\\lambda(V) &\\coloneqq \\HC_\\lambda^q(\\bar{V}).\n%\\end{aligned} \\]\n%We call $(\\bar{\\HC}_*,\\Hd)$ and $(\\bar{\\HC}^*,\\Hd^*)$ the \\emph{normalized Hochschild complexes}.  We call $(\\bar{\\HC}^\\lambda_*,\\Hd)$ and $(\\bar{\\HC}_\\lambda^*,\\Hd^*)$ \\emph{reduced Connes' cyclic complexes} and denote their homologies by $\\bar{H}_*(\\mathcal{A})$ and $\\bar{H}^*(\\mathcal{A})$.\n%\n%We denote by $\\NormProj : \\HC(V) \\rightarrow \\bar{\\HC}_\\bullet(V)$, $p^\\lambda: \\HC_\\bullet(V) \\rightarrow \\HC^\\lambda_\\bullet(V)$ and $\\NormIncl: \\bar{\\HC}^\\bullet(V) \\rightarrow \\HC^\\bullet(V)$, $\\iota_\\lambda: \\HC_\\lambda^\\bullet(V) \\rightarrow \\HC^\\bullet(V)$ the canonical projections and inclusions, respectively. 
\n%\n%\n%For all $q\\in \\Z$ let\n%\\[ \\HC_q^{\\mathrm{red}}(V) = \\begin{cases}\n%\\bar{\\HC}_q(V) & q \\neq 0, \\\\\n%\\HC_0(\\bar{V}) & q = 0,\n%\\end{cases} \\quad \\HC^q_{\\mathrm{red}}(V) = \\begin{cases}\n%\\bar{\\HC}^q(V) & q \\neq 0, \\\\\n%\\{\\psi\\in \\bar{\\HC}^0(V) \\mid \\psi(\\NOne) = 0 \\} & q=0.\n%\\end{cases}\\]\n%We call $(\\HC_*^{\\mathrm{red}},\\Hd)$ and $(\\HC^*_{\\mathrm{red}},\\Hd^*)$ the \\emph{reduced Hochschild complexes} and denote their homologies by $\\bar{H}^\\lambda_*(\\mathcal{A})$ and $\\bar{H}_\\lambda^*(\\mathcal{A})$.\n%\\end{Definition}\n%\n%\n%\n%\n%\n%\\begin{Remark}\\label{Rem:BasicStr}\n%\\begin{RemarkList}\n%\\item Taking $\\CDB V$ instead of $\\DB V$ is necessary for $\\Hd^*$ to be well-defined. The reason is that $\\mu_k$ have unbounded weights $k\\in \\Z$, and hence there might be $\\psi\\in \\DB V$ with $\\psi \\circ \\Hd \\notin \\DB V$. On the other hand, $\\Hd$ is not well-defined on $\\hat{B} V$.\n%\\item If $V$ has bounded degrees, then $\\CDB V$ equals the completion of $\\DB V$ with respect to both weights and degrees, which gives the entire linear dual of $\\B V$. Therefore, the cochain complexes from Definition \\ref{Def:Complexes} are dual to the corresponding chain complexes in the classical sense.\n%\\item If $V = V_0$ is concentrated in degree $0$, then\n%\\[ \\HC_q(V) = (\\B V)_{q+1}\\quad \\text{and}\\quad \\HC^q(V)= (\\DB V)^{-q-1} = (\\DB V)_{q+1}. \\]\n%In particular, the complexes are bounded from below. \n%\\item If $\\mu_k = 0$ for all $k\\neq 2$, then $\\Hd$ has not only degree $+1$ on $\\B V$, but also weight $-1$. We cal also grade $\\B V$ by weights (notice we can not grade $\\CDB V$ by weights). Write $\\B V = \\bigoplus_{k\\in \\N, d\\in \\Z} (\\B V)_k^d$ and let\n%\\[ \\HC_q(V) = \\bigoplus_{k=1}^\\infty (\\B V)_k^{-q-1}\\quad\\text{and}\\quad \\tilde{\\HC}_k(V) = \\bigoplus_{q\\in \\Z} (\\B V)_k^{-q-1}. \\] \n%If $V = V_0$, then $\\HC_q(V) = \\tilde{\\HC}_q(V)$. The homology of these complexes is also bigraded. Therefore, we get one from each other by resummation\n%  \n%\\item If $V$ is non-negatively graded and simply-connected, i.e.\\ $V_1=0$, then it holds  $\\CDB \\bar{V}  = \\DB \\bar{V}$.\n%\\item Consider the map $u: \\bar{\\HC}_*(\\R) \\rightarrow \\bar{\\HC}_*(V)$ defined by extending $\\SuspU 1\\in \\R[1] \\mapsto \\NOne\\in V[1]$ to tensor powers. It is a chain map and $\\HC_*^{\\mathrm{red}}(V)$, resp. $\\HC^*_{\\mathrm{red}}(V)$ arise naturally as $\\CoKer(u)$, resp. 
$\ker(u^*)$.\n%%\[ \bar{C}_q(\R) = \begin{cases}\n%%                    0 & q\neq 0,\\\n%%                    \langle \NOne \rangle & q = 0\n%%                   \end{cases} \quad \text{and}\quad \bar{C}^q(\R) = \begin{cases}\n%%                    0 & q\neq 0,\\\n%%                    \langle \NOne^* \rangle & q = 0.\n%%                   \end{cases} \]\n%\end{RemarkList}\n%\end{Remark}\n%\clearpage\n%\n%\n%\n\section{Computational prerequisites} \label{Sec:FF}\n\nThe heart of cyclic (co)homology theory, following \cite{LodayCyclic}, is the following five \emph{computational prerequisites} (CP):\n\begin{description}\n\item[\quad CP0\,{\normalfont (horizontal relations)}:] $\ker \CountOp = \im (\Id-\CycPermOp)$, $\ker(\Id-\CycPermOp) = \im \CountOp$,\n\item[\quad CP1\,{\normalfont (vertical relations)}:] $\Hd\circ\Hd = 0$, $\Hd'\circ \Hd' = 0$,\n\item[\quad CP2\,{\normalfont (vertical-horizontal relations)}:] $\Hd'\circ \CountOp = \CountOp\circ \Hd$, $(\Id-\CycPermOp)\circ \Hd' = \Hd\circ (\Id-\CycPermOp)$,\n%$\InsOneOp_i b = - b' \InsOneOp_i + \Id - \CycPermOp$\n\end{description}\nand in the strictly unital case\n\begin{description}[resume]\n\item[\quad CP3\,{\normalfont (null-homotopy of the bar resolution)}:] $\Hd'\circ \InsOneOp_1 + \InsOneOp_1\circ \Hd' = \Id$ and\n\item[\quad CP4\,{\normalfont (contraction onto normalized chains)}:] $\NormProj: \HC V \rightarrow \HNC V$ is a quasi-isomorphism.\n\end{description}\nThe definitions of $\CycPermOp$ (cyclic permutation), $\Hd$ ($\AInfty$-Hochschild differential), $\Hd'$ (acyclic $\AInfty$-Hochschild differential) and $\HC V$ ((reduced) bar complex with reversed grading shifted by one) can be found in Section~\ref{Sec:Alg2}; in particular, consult Definition~\ref{Def:CycHom}. The new players are the \emph{counting operator}\n\[ \CountOp \coloneqq \sum_{k=1}^\infty \underbrace{\sum_{i=0}^{k-1} t_k^i}_{\displaystyle{\eqqcolon\CountOp_k}} : \HC V \longrightarrow \HC V \]\nand the projection $\bar{p}: \HC V \rightarrow \HNC V$ to normalized Hochschild chains --- this we define below.\n\n\begin{Definition}[Normalized and reduced Hochschild complex]\label{Def:NormRedHoch}\nLet $(V,(\mu_j),\NOne)$ be a strictly unital $\AInfty$-algebra. Let $\bar{V}[1]\coloneqq V[1]/\langle\NOne\rangle$. We define the \emph{normalized Hochschild chain complex} by\n\[ \HNC V \coloneqq \bigoplus_{l=0}^\infty V[1]\otimes\bar{V}[1]^{\otimes l}. \]\nWe consider the canonical projection $\NormProj: V[1]\rightarrow \bar{V}[1]$ and define $\NormProj: \HC V \rightarrow \HNC V$ by\n\[ \Restr{\NormProj}{V[1]^{\otimes (l+1)}} \coloneqq \Id \otimes \underbrace{\NormProj \otimes \dotsb \otimes \NormProj}_{l-\text{times}}.\]\nFor every $l\ge 1$, we define the operator $\InsOneOp_l : \HC V \rightarrow \HC V$ by inserting $\NOne$ at the $l$-th position of a tensor product, where the position $l=1$ is in front; i.e., we have\n\[ \InsOneOp_1(v_1\otimes\dotsb\otimes v_i) = \NOne\otimes v_1\otimes\dotsb\otimes v_i\quad\text{for all }v_j\in V[1]\text{ and }i\ge j\ge 1.\]\nWe define the \emph{normalized Hochschild cochain complex} by \n\[ \HNC^* V \coloneqq \{\varphi\in\HC^*V \mid \varphi \circ \InsOneOp_l = 0\text{ for all }l\ge 2\}. 
\\]\n\nIf $u: \\HC \\R \\rightarrow \\HC V$ is the unit map (in the strictly unital case, $\\Restr{u}{\\R[1]^{\\otimes k}}\\coloneqq u^{\\otimes k}$ where $u: \\R[1]\\to V[1]$ is the canonical injection; see also Definition~\\ref{Def:AugUnit} and the discussion below) and $\\NormIncl: \\HNC^*V \\rightarrow \\HC^* V$ the inclusion, we define the \\emph{reduced Hochschild chain and cochain complexes} $\\HC^{\\RedMRM} V$ and $\\HC_{\\RedMRM}^* V$ by\n\\[ \\HC^{\\RedMRM} V \\coloneqq \\coker(\\NormProj\\circ u)\\quad\\text{and}\\quad\\HC_{\\RedMRM}^*\n \\coloneqq \\ker(u^* \\circ \\NormIncl),\\quad\\text{respectively}.\\]\nWe denote by $p^{\\RedMRM}: \\HC V \\rightarrow \\HC^{\\RedMRM} V$ and $\\iota_{\\RedMRM}: \\HC^*_{\\RedMRM} V \\rightarrow \\HC^* V$ the canonical projection and inclusion, respectively.\n\nAll chain complexes above are graded by degree and equipped with a boundary operator induced naturally from $\\Hd$ (see the remark below).\n\\end{Definition}\n\n\\begin{Remark}[Some details on normalized and reduced complexes]\nSince $\\NOne$ is a unit for~$\\mu_2$, we have\n\\begin{align*}\n0 & = (-1)^{\\Abs{v_1} + \\dotsb + \\Abs{v_{k-1}}}\\bigl(v_1 \\dotsb \\mu_2(v_k, \\NOne) + (-1)^{\\Abs{v_k}}\\mu_2(\\NOne,v_1)\\dotsb v_k \\bigr)\\quad\\text{and}\\\\\n0 & = (-1)^{\\Abs{v}_1 + \\dotsb + \\Abs{v_{i-2}}}\\bigl(v_1 \\dotsb \\mu_2(v_{i-1}, \\NOne) v_i \\dotsb v_k + (-1)^{\\Abs{v_{i-1}}} v_1 \\dotsb  v_{i-1}\\mu_2(\\NOne, v_i) \\dotsb v_k \\bigr)\n\\end{align*}\nfor all $i=2$, $\\dotsc$, $k$. This fact and strict unitality implies\n\\[ \\Hd\\bigl(\\sum_{i\\ge 2}\\im \\InsOneOp_i\\bigr)\\subset\\sum_{i\\ge 2}\\im \\InsOneOp_i = \\ker \\NormProj.\\]\nTherefore, $\\Hd$ induces a differential on $\\HNC V$. Since $\\HNC^* V = \\{\\varphi\\in \\HC^*V \\mid \\varphi(\\sum_{i\\ge 2}\\im\\InsOneOp_i) = 0\\}$, the dual $\\Hd^*$ restricts to $\\HNC^* V$. Clearly, both $\\NormProj$ and $\\NormIncl$ are chain maps, and they are compatible under the dualization from Definition~\\ref{Def:Pairings}; i.e., $\\NormIncl \\simeq \\NormProj^*$ under $\\HNC^* V \\simeq (\\HNC V)^{\\GD}$ and $\\HC^* V \\simeq (\\HC V)^{\\GD}$, where $^{\\GD}$ denotes the graded dual.\n \nAs for the reduced complexes, $u$ is a chain map, and thus $\\ker$ and $\\coker$ are chain complexes. Again, it holds $\\HC_{\\RedMRM}^* V \\simeq (\\HC^{\\RedMRM} V)^{\\GD}$ and $\\iota_\\RedMRM \\simeq p^{\\RedMRM,*}$ under the dualization.\n\\end{Remark}\n\nWe will now prove CP1, CP2, CP3 and CP4 for strictly unital $\\AInfty$-algebras. We do not prove CP0 because it is a standard fact which does not depend on the algebra we work with (see \\cite{LodayCyclic}). A proof of CP1 in a slightly different notation and in a more general setting (coefficients in a bimodule) can also be found in \\cite{Mescher2016}. The proofs of CP2 and CP3 work in the same way as the proofs for $\\DGA$'s from \\cite{LodayCyclic}. The computation is just a little longer. As for CP4, we can not use the proof for $\\DGA$'s from \\cite[Proposition~1.6.5]{LodayCyclic} anymore because we do not have a simplicial module; instead of this, we consider an explicit homotopy inspired by~\\cite{Lazarev2003}, where CP4 is also proven in a slightly different notation.\n\nWe first introduce some notation which simplifies computations:\n\\begin{Definition}[Notation]\nFor the cyclic permutation $\\CycPermOp_k^i$ ($\\coloneqq i$-times $t_k$), we define $c\\coloneqq k - i + 1$ and write\n\\[ v_c \\dotsb v_{c-1} \\coloneqq \\CycPermOp_k^i(v_1\\otimes \\dotsb \\otimes v_k). 
\] \nWe compute indices modulo $k$ and often omit writing the tensor product.\n%\item We denote by $\le$ the \emph{cyclic ordering} on $\{1, \dots, k\}$; for example, for $k=4$, we have $4 \le 1 \le 3$. \n%For two indices $i_1$, $i_2\in \{1,\dots,k\}$ we let $\mathrm{dist}(i_1,i_2)$ be their distance in the cyclic ordering; for example $\mathrm{dist}$\n\nFor every $i = 1$,~$\dotsc$, $k$ and $j = 1$,~$\dotsc$, $k-i+ 1$, we define the \emph{closed bracket} by\n\begin{equation} \label{Eq:Inclmu} \begin{aligned} \n&v_1 \dotsb \MuII{v_{i} \dotsb v_{i + j - 1}} \dotsb v_k \\\n&\qquad \coloneqq (-1)^{\Abs{v_1} + \dotsb + \Abs{v_{i-1}}} v_1 \otimes \dotsb \otimes \mu_j(v_i \otimes \dotsb \otimes v_{i+j-1})\otimes \dotsb \otimes v_k.\n\end{aligned} \end{equation}\nIf we apply the closed bracket twice, we write the first application as an \emph{underbracket} and the second as an \emph{overbracket}; for instance, we have\n\[ \begin{aligned} & v_1 \dotsb \MuII{v_{i_1} \dotsb v_{i_2}} \dotsb \MuI{v_{i_3}\dotsb v_{i_4}} \dotsb v_k \\ \n& \quad =  \begin{multlined}[t] (-1)^{\Abs{v_{i_1}} + \dotsb + \Abs{v_{i_3-1}}}\n v_1 \otimes \dotsb \otimes \mu_{j_2}(v_{i_1}\otimes \dotsb \otimes v_{i_2})\otimes \dotsb \\ \otimes \mu_{j_1}(v_{i_3}\otimes \dotsb \otimes v_{i_4})\otimes \dotsb \otimes v_k,\n\end{multlined}\end{aligned}\]\nwhere $j_1 = i_4 - i_3 + 1$, $j_2 = i_2 - i_1 + 1$. Clearly, the difference is only in the sign. We denote\n \[ v_1 \dotsb v_{i-1} \NOneII v_{i} \dotsb v_k \coloneqq \InsOneOp_i(v_1 \dotsb v_k) = (-1)^{\Abs{v_1}+\dotsb+\Abs{v_{i-1}}} v_1\dotsb v_{i-1} \NOne v_i \dotsb v_k. \]\n If $\InsOneOp_i$ is composed with another operation, we write $\NOneI$ if the corresponding $\NOne$ was inserted first and $\NOneII$ if it was inserted second. For example, we have\n \[ v_1 \NOneI v_2 v_3 \NOneII v_4 = \InsOneOp_5(\InsOneOp_2(v_1 v_2 v_3 v_4)). \]\n\nFor $j\ge 1$ and $1\le i_1 \le i_2 \le k$ with $i_2 - i_1 \ge j-1$, we define the \emph{open bracket} as follows:\n \[ v_1 \dotsb \OMuIIO[j]{v_{i_1} \dotsb v_{i_2}} \dotsb v_k \coloneqq \sum_{\substack{i_1\le i_3 \le i_4 \le i_2 \\ i_4 - i_3 = j-1}} v_1 \dotsb v_{i_1} \dotsb \MuII{v_{i_3}\dotsb v_{i_4}} \dotsb v_{i_2} \dotsb v_k. \]\n\end{Definition}\n\nUsing the notation above, it holds\n\[ \begin{aligned}\nb'(v_1\dotsb v_k) &= \sum_{1\le i_1 \le i_2 \le k} v_1 \dotsb \MuII{v_{i_1} \dotsb v_{i_2}} \dotsb v_k = \sum_{j=1}^k \OMuIIO[j]{v_1\dotsb v_k}, \\\nR(v_1\dotsb v_k) & = \sum_{\substack{2 \le c \le k}} \MuII{v_{c} \dotsb v_{1}} \dotsb v_{c-1},\n\end{aligned} \]\nand the $\AInfty$-relations simplify to\n\begin{equation} \label{Eq:AInftyCyclic}\n\sum_{1 \le i_1 \le  i_2 \le k} \MuII{v_{1}\dotsb \MuI{v_{i_1} \dotsb v_{i_2} } \dotsb v_{k}} = \sum_{j=1}^k \MuII{\OMuIO[j]{v_1\dotsb v_k}} = 0. 
\n\\end{equation}\n\\noindent Because all signs are, in fact, Koszul signs for the symbols $\\mu_{j_1}$, $\\mu_{j_2}$, $v_1$, $\\dotsc $, $v_k$, and because $\\mu$'s have odd degree, we have for every $1 \\le i_1 \\le i_2 \\le i_3 \\le i_4 \\le k$ the following relation:\n\\begin{equation} \\label{Eq:OddDeg}\nv_{1}\\dotsb \\MuI{v_{i_1} \\dotsb v_{i_2}} \\dotsb \\MuII{v_{i_3} \\dotsb v_{i_4}} \\dotsb v_{k} +  v_{1}\\dotsb \\MuII{v_{i_1} \\dotsb v_{i_2}} \\dotsb \\MuI{v_{i_3} \\dotsb v_{i_4}} \\dotsb v_{k} = 0.\n\\end{equation}\n\n\\begin{Lemma}[CP1] \\label{Lem:CP1}\nFor an $\\AInfty$-algebra $(V,(\\mu_j))$, it holds\n\\[ \\Hd'\\circ \\Hd' = 0\\quad\\text{and}\\quad \\Hd \\circ \\Hd = 0. \\]\n\\end{Lemma}\n\\begin{proof}\nWe write\n\\[ \\Hd\\circ\\Hd  = (\\Hd' + R)\\circ(\\Hd' + R) = \\Hd'\\circ \\Hd' + \\Hd' \\circ R + R \\circ \\Hd' + R \\circ R \\]\nand evaluate it on a tensor $v_1\\dotsb v_k \\in \\HC V$. We claim that a summand of $\\Hd(\\Hd(v_1\\dotsb v_k))$ coming from the subsequent application of the operations can be uniquely determined by the following data: \n\\begin{itemize}\n\\item the information whether it comes from $\\Hd'\\circ \\Hd'$, $\\Hd'\\circ R$, $R\\circ \\Hd'$ or $R\\circ R$;\n\\item a cyclic permutation $c$ of $v_1$, $\\dotsc$, $v_k$;\n\\item positions of the under- and upperbracket.\n\\end{itemize}\nThe reason for this is that both $\\Hd'$ and $R$ produce only Koszul signs, and hence the total sign of a summand in $\\Hd(\\Hd(v_1\\dots v_k))$ is the Koszul sign for the symbols $\\mu_{j_1}$, $\\mu_{j_2}$, $v_1$,~$\\dotsc$,~$v_k$, which depends only on the start and final position of the symbols; this is precisely encoded in the data above.\n\nFor $c=1$, only $\\Hd'\\circ\\Hd'$ contributes; however, using \\eqref{Eq:AInftyCyclic} and \\eqref{Eq:OddDeg}, we get\n\\begin{align*}\n(\\Hd'\\circ \\Hd')(v_{1}\\dotsb v_{k}) &= \\begin{aligned}[t] &\\sum_{1\\le i_1 \\le i_2 \\le i_3 \\le i_4\\le k} v_{1}\\dotsb \\MuII{v_{i_1} \\dotsb \\MuI{v_{i_2} \\dotsb v_{i_3}} \\dotsb v_{i_4}} \\dotsb v_{k} \\\\ {}+ &\\sum_{\\substack{1\\le i_1 \\le i_2 \\le {k-1} \\\\ i_2+1 \\le i_3 \\le i_4 \\le {k} }} v_{1}\\dotsb \\MuI{v_{i_1} \\dotsb v_{i_2}} \\dotsb \\MuII{v_{i_3} \\dotsb v_{i_4}} \\dotsb v_{k} \\\\ {}+&\\sum_{\\substack{2\\le i_3 \\le i_4 \\le {k} \\\\ 1 \\le i_1 \\le i_2 \\le i_3 - 1 }} v_{1}\\dotsb \\MuII{v_{i_1} \\dotsb v_{i_2}} \\dotsb \\MuI{v_{i_3} \\dotsb v_{i_4}} \\dotsb v_{k} \\end{aligned} \\\\\n & = 0.\n\\end{align*}\nLet $c \\ge 2$ and consider the summands from $\\Hd'\\circ R$, $R\\circ \\Hd'$ and $R\\circ R$ based on $v_{c} \\dots v_{c-1}$. 
The contribution of $R\\circ \\Hd'$ consists of the following three parts:\n\\begin{EqnList}\n\\item $\\displaystyle \\sum_{\\substack{c \\le i_1 \\le i_2 \\le k \\\\ 1 \\le i_4 \\le c-1}}\\MuII{v_{c} \\dotsb \\MuI{v_{i_1} \\dotsb v_{i_2}} \\dotsb v_k v_1 \\dotsb v_{i_4}} v_{i_4+1} \\dotsb v_{c-1}$,\n\\item $\\displaystyle\\sum_{\\substack{1\\le i_1 \\le i_2 \\le i_4 \\le c-1}}\\MuII{v_{c} \\dotsb v_k v_1 \\dotsb  \\MuI{v_{i_1} \\dotsb v_{i_2}} \\dotsb v_{i_4}} v_{i_4+1} \\dotsb v_{c-1}$,\n\\item $\\displaystyle \\sum_{1\\le i_4 \\prec i_1 \\le i_2 \\le c-1} \\MuII{v_{c} \\dotsb v_k v_1 \\dotsb v_{i_4}} v_{i_4+1} \\dotsb \\MuI{v_{i_1} \\dotsb v_{i_2}} \\dotsb v_{c-1}$.\n\\end{EqnList}\nThe contribution of $\\Hd'\\circ R$ consists of the following two parts:\n\\begin{EqnList}[resume]\n\\item $\\displaystyle\\sum_{1 \\le i_2 \\le i_4 \\le c-1}\\MuII{\\MuI{v_{c} \\dotsb v_k v_1 \\dotsb v_{i_2}} \\dotsb v_{i_4}} v_{i_4+1}\\dotsb v_{c-1}$,\n\\item $\\displaystyle\\sum_{1\\le i_2 \\prec i_3 \\le i_4 \\le c-1}\\MuI{v_{c} \\dotsb v_k v_1 \\dotsb v_{i_2}} v_{i_2+1} \\dotsb \\MuII{v_{i_3} \\dotsb v_{i_4}}\\dotsb v_{c-1}$.\n\\end{EqnList}\nThe contribution of $R \\circ R$ is:\n\\begin{EqnList}[resume]\n\\item $\\displaystyle\\sum_{\\substack{c \\prec i_1 \\le k\\\\1\\le i_2 \\le i_4 \\le c-1}}\\MuII{v_{c} \\dotsb \\MuI{v_{i_1}\\dotsb v_k v_1\\dotsb v_{i_2}} \\dotsb v_{i_4}} v_{i_4+1} \\dotsb v_{c-1}$.\n\\end{EqnList}\nUsing \\eqref{Eq:OddDeg}, it is easy to see that III cancels with V. The sum of the other terms I, II, IV, VI vanishes for fixed $1 \\le i_4 \\le c-1$ due to \\eqref{Eq:AInftyCyclic}.\n\\end{proof}\n\n\\begin{Lemma}[CP2] \\label{Lem:CP2}\nFor an $\\AInfty$-algebra $(V,(\\mu_j))$, the following relations hold:\n\\begin{ClaimList}\n\\item $\\Hd'\\circ \\CountOp = \\CountOp\\circ \\Hd$,\n\\item $(\\Id-\\CycPermOp)\\circ \\Hd' = \\Hd\\circ (\\Id-\\CycPermOp)$.\n\\end{ClaimList}\n\\end{Lemma}\n\\begin{proof}\n\\begin{ProofList}\n\\item We denote $z_j \\coloneqq \\mu_j \\otimes \\Id^{k-j}$ and omit writing the composition $\\circ$. We consider the components\n\\[ {\\Hd'}_k^j \\coloneqq \\sum_{i=0}^{k-j} \\CycPermOp^i_{k-j+1}z_j\\CycPermOp_k^{-i} \\qquad\\text{and}\\qquad R_k^j \\coloneqq \\sum_{i=1}^{j-1} z_j\\CycPermOp_k^i. \\]\nIt holds\n\\begin{align*} \n{\\Hd'}_k^j \\CountOp_k &= \\sum_{l=0}^{k-1}\\sum_{i=0}^{k-j} \\CycPermOp^i_{k-j+1} z_j \\CycPermOp^{-i + l}_k \\\\\n& = \\sum_{l=0}^{k-1} \\sum_{i=0}^{k-j} \\CycPermOp^l_{k-j+1} \\CycPermOp^{i-l}_{k-j+1} z_j \\CycPermOp^{-(i-l)}_k \\\\\n& = \\sum_{u=1-k}^{k-j} \\bigl(\\sum_{l\\in L_u} \\CycPermOp^l_{k-j+1}\\bigr) \\CycPermOp^u_{k-j+1} z_j \\CycPermOp^{-u}_k,\n\\end{align*}\nwhere $u \\coloneqq i - l$ and \n\\[L_u \\coloneqq \\{ l\\in \\{0, \\dotsc, k-1\\} \\mid \\exists i\\in \\{0,\\dotsc, k-j\\} : u = i-l \\}. \\] \nWe distinguish the cases\n\\[ L_u = \\begin{cases}\n         \\{0,\\dotsc, k-j-u\\} & \\text{for }0\\le u \\le k-j, \\\\\n         \\{-u, \\dotsc, k-j-u\\} & \\text{for } 1-j \\le u \\le -1\\quad\\text{and} \\\\\n         \\{-u, \\dotsc, k -1\\} & \\text{for }1-k \\le u \\le -j\n         \\end{cases}\\]\nand denote the corresponding sums by $\\mathrm{I}$, $\\mathrm{II}$ and $\\mathrm{III}$, respectively. 
It holds\n\[ \mathrm{I} = \sum_{u=0}^{k-j} \bigl(\sum_{l=0}^{k-j-u} \CycPermOp^l_{k-j+1} \bigr) \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k \]\nand  \n\begin{align*}\n\mathrm{III} &= \sum_{u = 1- k}^{-j} \sum_{l = -u}^{k-1} \CycPermOp^l_{k-j+1} \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k \\\n& = \sum_{u = 1- k}^{-j} \sum_{l = -u}^{k-1} \CycPermOp^l_{k-j+1} \CycPermOp^{u+k-j+1}_{k-j+1} z_j \CycPermOp^{-u-k}_k  \\\n&= \sum_{u=1}^{k-j} \sum_{l = k - u}^{k-1} \CycPermOp^{l-j+1}_{k-j+1} \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k \\ \n& = \sum_{u=1}^{k-j} \bigl(\sum_{l =  k - j -u + 1}^{k-j} \CycPermOp^l_{k-j+1}\bigr) \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k.\n\end{align*}\nTherefore, we have\n\[ \mathrm{I} + \mathrm{III} = \sum_{u=0}^{k-j} \bigl( \sum_{l=0}^{k-j} \CycPermOp^l_{k-j+1} \bigr) \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k = \CountOp_{k-j+1} {\Hd'}_k^j. \]\nNext, we have\n\[ \mathrm{II} = \sum_{u = 1-j}^{-1} \sum_{l=-u}^{k-j-u} \CycPermOp^l_{k-j+1} \CycPermOp^u_{k-j+1} z_j \CycPermOp^{-u}_k = \sum_{u = 1}^{j-1} \bigl(\sum_{l=0}^{k-j} \CycPermOp^l_{k-j+1}\bigr) z_j \CycPermOp^u_k = \CountOp_{k-j+1} R_k^j. \]\nWe conclude that\n\[ {\Hd'}_k^j \CountOp_k = \CountOp_{k-j+1}{\Hd'}_k^j + \CountOp_{k-j + 1} R_k^j = \CountOp_{k-j+1}\Hd_k. \]\nThis proves the claim.\n\item For every $k\ge 1$ and $1\le j \le k$ we compute\n\begin{align*}\n(\Id-\CycPermOp){\Hd'}^j_k &= {\Hd'}_k^j - \sum_{i=0}^{k-j} \CycPermOp^{i+1}_{k-j+1}z_j \CycPermOp^{-(i+1)}_k\CycPermOp_k\\\n& = {\Hd'}_k^j - \sum_{i=1}^{k-j + 1} \CycPermOp^{i}_{k-j+1}z_j \CycPermOp^{-i}_k\CycPermOp_k \\\n& = {\Hd'}_k^j - \sum_{i=0}^{k-j} \CycPermOp^{i}_{k-j+1}z_j \CycPermOp^{-i}_k \CycPermOp_k + \CycPermOp^0_{k-j+1} z_j \CycPermOp^{-0}_k \CycPermOp_k - \CycPermOp^{k-j+1}_{k-j+1} z_j \CycPermOp^{-k+j-1}_k \CycPermOp_k \\\n& = {\Hd'}_k^j(\Id-\CycPermOp_k)  + z_j \CycPermOp_k - z_j \CycPermOp^{j}_k  \\ \n&= {\Hd'}_k^j(\Id-\CycPermOp_k) + \sum_{i=1}^{j-1} z_j \CycPermOp^i_k (\Id-\CycPermOp_k) \\\n& = ({\Hd'}_k^j + {R}_k^j)(\Id-\CycPermOp_k). \n\end{align*} \nThis proves the claim. \qedhere\n\end{ProofList}\n\end{proof}\n\n\begin{Lemma}[CP3] \label{Lem:CP3}\nFor a strictly unital $\AInfty$-algebra $(V,(\mu_k),\NOne)$, it holds\n\[ \Hd'\circ \InsOneOp_1 + \InsOneOp_1 \circ \Hd' = \Id. \]\n\end{Lemma}\n\begin{proof}\nFor any $k\ge 1$ and $v_1$, $\dotsc$, $v_k \in V[1]$, we compute\n\[ \begin{aligned} \n\Hd' \InsOneOp_1(v_1\dots v_k) + \InsOneOp_1 \Hd' (v_1\dots v_k) &= \MuII{\NOneI v_1} v_2 \dots v_k  + \NOneI \OMuIIO{v_1 \dots v_k} + \NOneII \OMuIO{v_1 \dots v_k} \\ &= v_1 \dots v_k. \n\end{aligned} \]\nThis proves the claim.\n\end{proof} \n\n\n\begin{Lemma}[CP4] \label{Lem:CP4}\nLet $\mathcal{A} = (V,(\mu_k),\NOne)$ be a strictly unital $\AInfty$-algebra. For all $k\ge 2$, we define $h_k: \HC V \rightarrow \HC V$ by\n\[ h_k\coloneqq \InsOneOp_k \circ \Hd + \Hd\circ \InsOneOp_k + \Id. \]\nThen the formulas\n\begin{align*}\n    s^* &\coloneqq  s^*_2 + s^*_3 \circ h^*_2 + s^*_4 \circ h^*_3 \circ h_2^* + \dotsb\quad\text{and} \\\n    h^* &\coloneqq \dotsb\circ h^*_k\circ\dotsb\circ h^*_2\n\end{align*}\ndefine homogeneous linear maps~$s^*$ and $h^*: \HC^*V \rightarrow \HC^* V$ of degrees~$1$ and~$0$, respectively. 
The map $h^*$ is a projection onto $\HNC^* V$, and the following homotopy relation holds:\n\begin{equation} \label{Eq:Homot}\ns^* \circ \Hd^* + \Hd^* \circ s^* = h^* - \Id.\n\end{equation}\nIt implies that $\NormIncl$ and $\NormProj$ are quasi-isomorphisms.\n\end{Lemma}\n\n\begin{proof}\nWe set $\HNC^*_{(1)} V\coloneqq \HC^* V$, and for all $k\ge 2$, we define\n\[ \HNC^*_{(k)} V \coloneqq \{\psi\in \HC^* V \mid \psi \circ \InsOneOp_i = 0\text{ for all }i=2, \dotsc, k\}. \]\nWe first show that $h_k^*$ restricts to a projection $\HNC^*_{(k-1)}V \rightarrow \HNC^*_{(k)} V$.  Let $i \ge 1$ and $v_1$,~$\dotsc$, $v_i\in V[1]$. We make the following computations:\n\begin{PlainList}\n\item For $i<k-1$, we have\n\[(\InsOneOp_k \Hd + \Hd \InsOneOp_k)(v_1 \dots v_i) = 0 \]\nby the definition of $\InsOneOp_k$ and by the fact that $\Hd$ does not increase weights.\n\item For $i = k-1$, we have\n\begin{align*}\n& (\InsOneOp_k \Hd + \Hd \InsOneOp_k)(v_1 \dots v_i) \\\n&\quad = \OMuIO[$1$]{v_1 \dots v_i}\NOneII  + \OMuIIO[$1$]{v_1\dots v_i}\NOneI + \sum_{j=2}^{i} \OMuIIO[$j$]{v_1\dots v_i}\NOneI + v_1\dots \MuII{v_i \NOneI} + \MuII{\NOneI v_1} \dots v_i \\\n& \quad = \sum_{j=2}^{i} \OMuIIO[$j$]{v_1\dots v_i}\NOneI.\n\end{align*}\nNotice that $\NOne$ in the result is at positions $<k$.\n\item For $i> k-1$, we have\n\begin{align*} \n&(\InsOneOp_k \Hd + \Hd \InsOneOp_k)(v_1 \dots v_i) \\\n& \quad  =  \begin{aligned}[t] &\hphantom{+}\sum_{j = 1}^{i-k+2} \OMuIO[$j$]{v_1 \dots v_{k+j-2}} \NOneII v_{k+j-1} \dots v_i + \sum_{j=1}^{i-k+1} v_1 \dots v_{k-1} \NOneII \OMuIO[$j$]{v_{k}\dots v_i} \\\n&{}+ \sum_{m=1}^{i-k+1} \sum_{c=m+k-1}^i \MuI{v_c \dots v_i v_1 \dots v_{m}}v_{m+1}\dots v_{m+k-2} \NOneII v_{m+k-1} \dots v_{c-1}  \\\n&{}+ \sum_{j=1}^{k-1} \OMuIIO[j]{v_1\dots v_{k-1}} \NOneI v_{k}\dots v_i + \sum_{j=1}^{i-k+1} v_1 \dots v_{k-1} \NOneI \OMuIIO[j]{v_{k}\dots v_i} \\\n&{}+v_1\dots \MuII{v_{k-1} \NOneI} v_{k} \dots v_i + v_1 \dots v_{k-1} \MuII{\NOneI v_{k}}\dots v_i \n\\\n&{}+ \sum_{m=1}^{k-1} \sum_{c=k}^i \MuII{v_c \dots v_i v_1 \dots v_{m}}v_{m+1}\dots v_{k-1} \NOneI v_{k} \dots v_{c-1} \end{aligned} \\\n& \quad= \begin{aligned}[t] &\hphantom{+}\sum_{j = 1}^{i-k+2} \OMuIO[$j$]{v_1 \dots v_{k+j-2}} \NOneII v_{k+j-1} \dots v_i + \sum_{j=1}^{k-1} \OMuIIO[j]{v_1\dots v_{k-1}} \NOneI v_{k}\dots v_i \\\n&{}+ \sum_{m=1}^{i-k+1} \sum_{c=m+k-1}^i \MuI{v_c \dots v_i v_1 \dots v_{m}}v_{m+1}\dots v_{m+k-2} \NOneII v_{m+k-1} \dots v_{c-1} \\\n&{}+\sum_{m=1}^{k-1} \sum_{c=k}^i \MuII{v_c \dots v_{i} v_1 \dots v_{m}}v_{m+1}\dots v_{k-1} \NOneI v_{k} \dots v_{c-1}\n\end{aligned} \\\n&\quad = \begin{aligned}[t]\n&\hphantom{+}\overbrace{\sum_{j = 2}^{i-k+2} \OMuIO[$j$]{v_1 \dots v_{k+j-2}} \NOneII v_{k+j-1} \dots v_i}^{\eqqcolon\mathrm{I}} + \overbrace{\sum_{j=2}^{k-1} \OMuIIO[j]{v_1\dots v_{k-1}} \NOneI v_{k}\dots v_i}^{\eqqcolon\mathrm{II}} \\\n&{}+ \overbrace{\sum_{m=2}^{i-k+1} \sum_{c=m+k-1}^i \MuI{v_c \dots v_i v_1 \dots v_{m}}v_{m+1}\dots v_{m+k-2} \NOneII v_{m+k-1} \dots v_{c-1}}^{\eqqcolon\mathrm{III}} \\\n&{}+\overbrace{\sum_{m=2}^{k-1} \sum_{c=k}^i \MuII{v_c \dots v_i v_1 \dots v_{m}}v_{m+1}\dots v_{k-1} \NOneI v_{k} \dots v_{c-1}}^{\eqqcolon\mathrm{IV}}\n\end{aligned}\n\end{align*}\nNotice that $\NOne$ is at the $k$-th position in $\mathrm{I}$ and $\mathrm{III}$, whereas at 
positions $<k$ in $\\mathrm{II}$ and $\\mathrm{IV}$.\n\\end{PlainList}\nLet $k\\ge 2$, and let $\\psi\\in \\HNC_{(k-1)}^* V$. In order to show that $\\InsOneOp_j^* h_k^* \\psi = 0$ for $2 \\le j \\le k$, let $i\\ge j$, and let $v_1$,~$\\dotsc$, $v_{i}\\in V[1]$ be such that $v_{j} = \\NOne$.\nClearly, $\\psi(\\mathrm{II}) = \\psi(\\mathrm{IV}) = 0$ for any~$v$'s. As for $\\mathrm{III}$, the vector $v_j=\\NOne$ lies either inside some $\\mu_l$ with $l\\ge 3$ or at a position $<k$. It follows that $\\psi(\\mathrm{III})=0$. As for $\\mathrm{I}$, we write \n\\[ \\mathrm{I} = \\overbrace{\\OMuIO[2]{v_1 \\dotsb v_k} \\NOneII v_{k+1} \\dotsb v_i}^{\\mathrm{Ia}} + \\overbrace{\\sum_{j=3}^{i-k+2} \\OMuIO[j]{v_1\\dotsb v_{k+j-2}}\\NOneII v_{k+j-1}\\dotsb v_i}^{\\mathrm{Ib}}. \\]\nIt holds $\\psi(\\mathrm{Ib})=0$. For $2\\le j<k$, it holds  \n\\begin{align*}\n\\psi(\\mathrm{Ia}) &= \\begin{multlined}[t]\\psi(v_1 \\dots \\MuI{v_{j-1}\\NOne} v_{j+1} \\dots v_k \\NOneII v_{k+1} \\dots v_i) \\\\{}+ \\psi(v_1 \\dots v_{j-1}\\MuI{\\NOne v_{j+1}} \\dots v_k \\NOneII v_{k+1} \\dots v_i) \\end{multlined} \\\\ \n&= 0,\n\\end{align*}\nwhereas for $j=k$, we have\n\\[ \\psi(v_1 \\dotsb \\MuI{v_{k-1} \\NOne} \\NOneII v_{k+1} \\dotsb v_i) = - \\psi(v_1 \\dots v_{i}). \\]\nIt follows that\n\\begin{equation}\n h_k^*\\psi(v_1 \\dotsb v_i) = \\psi(v_1\\dotsb v_i) + \\psi\\bigl((\\InsOneOp_k \\Hd + \\Hd \\InsOneOp_k)(v_1\\dotsb v_i)\\bigr) = 0.\n\\end{equation}\nTherefore, we have $h^*_k \\psi \\in \\HNC_{(k)}^* V$. If $\\psi\\in \\HNC_{(k)}^* V$, then clearly $h_{k}^*(\\psi) = \\psi$. Consequently,~$h_k^*$ is a projection $h^*_k: \\HNC_{(k-1)}^*V \\rightarrow \\HNC_{(k)}^* V$.\n\nFor $k\\ge 2$, we define\n\\begin{align*}\n \\leftidx{^k}{s}{^*} &\\coloneqq \\InsOneOp_2^* +  \\InsOneOp_3^* \\circ h_2^* + \\dotsb + \\InsOneOp_k^* \\circ h_{k-1}^* \\circ \\dotsb \\circ h_2^*, \\\\\n \\leftidx{^k}{h}{^*} &\\coloneqq h_k^* \\circ \\dotsb \\circ h_2^*.\n\\end{align*}\nLet $\\Filtr^n_{\\WeightMRM} \\HC V = \\bigoplus_{w=1}^n \\HC_w V$ be the filtration of $\\HC V$ by weights. For all $k\\ge n+1$ it holds\n\\[ \\Restr{\\leftidx{^{k}}{h}{^*}\\psi}{\\Filtr^n_{\\WeightMRM} \\HC V} = \\leftidx{^{n+1}}{h}{^*}\\psi\\quad\\text{and}\\quad \\Restr{\\leftidx{^k}{s}{^*}\\psi}{\\Filtr^n_{\\WeightMRM} \\HC V} = \\leftidx{^{n+1}}{s}{^*}\\psi. \\]\nIt follows that $h^*$, $s^*: \\HC^* V \\rightarrow \\HC^* V$ are well-defined and that $h^*$ is a projection onto~$\\HNC^* V$. Also, it suffices to prove  \\eqref{Eq:Homot} with~$\\leftidx{^k}{s}{^*}$ and~$\\leftidx{^k}{h}{^*}$ instead of~$s^*$ and~$h^*$ for each~$k\\ge 2$. For~$k=2$, it holds by definition.
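To spell the base case out: by the computation above, $h_k^*$ acts as $h_k^*\\psi = \\psi + \\psi\\circ(\\InsOneOp_k \\Hd + \\Hd \\InsOneOp_k)$, i.e., $h_k^* = \\Id + \\Hd^*\\circ \\InsOneOp_k^* + \\InsOneOp_k^*\\circ \\Hd^*$. Since $\\leftidx{^2}{s}{^*} = \\InsOneOp_2^*$ and $\\leftidx{^2}{h}{^*} = h_2^*$, the relation \\eqref{Eq:Homot} for $k=2$ reads\n\\[ \\InsOneOp_2^* \\circ \\Hd^* + \\Hd^* \\circ \\InsOneOp_2^* = h_2^* - \\Id, \\]\nwhich is precisely this identity for $h_2^*$.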
Suppose that \\eqref{Eq:Homot} holds for some~$k\\ge 2$. Then, using that $\\Hd^*$ commutes with each $h_j^*$, we compute\n\\begin{align*}\n&\\Hd^*\\circ (\\leftidx{^{k+1}}{s}{^*}) + (\\leftidx{^{k+1}}{s}{^*})\\circ \\Hd^* \\\\\n&\\quad = \\Hd^*\\circ (\\leftidx{^{k}}{s}{^*}) + (\\leftidx{^{k}}{s}{^*}) \\circ \\Hd^* + \\Hd^*\\circ \\InsOneOp_{k+1}^*\\circ h_k^*\\circ \\dotsb \\circ h_2^* + \\InsOneOp_{k+1}^*\\circ h_k^*  \\circ \\dotsb \\circ h_2^* \\circ \\Hd^* \\\\\n&\\quad = \\leftidx{^k}{h}{^*}- \\Id + (\\Hd^*\\circ \\InsOneOp_{k+1}^* + \\InsOneOp_{k+1}^*\\circ \\Hd^*)\\circ h_k^* \\circ \\dotsb \\circ h_2^*  \\\\\n&\\quad = \\leftidx{^k}{h}{^*}- \\Id + (h_{k+1}^* - \\Id)\\circ h_k^* \\circ \\dotsb \\circ h_2^* \\\\\n&\\quad = \\leftidx{^{k+1}}{h}{^*} - \\Id.\n\\end{align*}\nThis finishes the proof of the lemma.\\qedhere\n\\end{proof}\n\n\\section{Homological algebra of bicomplexes}\\label{Sec:HomBi}\nWe will consider homological and cohomological half-plane bicomplexes, which we depict~as
\\begin{center}\n$B:\\ $\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}\\arrow[d]&{}\\arrow[d]&{}\\arrow[d]&{} \\\\\nB_{q,p+2} \\arrow[d] & B_{q+1,p+2} \\arrow[l] \\arrow[d]  & B_{q+2,p+2} \\arrow[l] \\arrow[d]  &{}\\arrow[l]\\\\\nB_{q,p+1} \\arrow[d] & B_{q+1,p+1} \\arrow[d] \\arrow[l] & \\arrow[d] \\arrow[l] B_{q+2,p+1}  &{} \\arrow[l] \\\\\nB_{q, p} \\arrow[d] & B_{q+1,p}\\arrow[d] \\arrow[l] & B_{q+2,p}\\arrow[d]  \\arrow[l]  & {}\\arrow[l] \\\\\n{}&{}&{} &{}\n\\end{tikzcd}\n\\end{center}\nand \n\\begin{center}\n$\\quad B^*:\\ $\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}&{}&{} &{}\\\\\nB^{q,p+2} \\arrow[u] \\arrow[r]& B^{q+1,p+2} \\arrow[u] \\arrow[r] & B^{q+2,p+2} \\arrow[u] \\arrow[r]  &{}\\\\\nB^{q,p+1} \\arrow[u] \\arrow[r] & B^{q+1,p+1} \\arrow[u] \\arrow[r]  & \\arrow[u] \\arrow[r]  B^{q+2,p+1} &{} \\\\\nB^{q,p} \\arrow[u] \\arrow[r]  & B^{q+1,p}\\arrow[u] \\arrow[r]  & B^{q+2,p}\\arrow[u]  \\arrow[r]  & {} \\\\\n{}\\arrow[u]&{}\\arrow[u]&{}\\arrow[u]&{}\n\\end{tikzcd},\n\\end{center}\nrespectively. The standard convention is that the squares anticommute (see \\cite{LodayCyclic}).\n\nWe consider the \\emph{total complexes} $(\\TotI(B), \\Bdd)$ and $(\\TotII(B),\\Bdd)$, where for all $q\\in \\Z$, the chain groups are defined by\n\\[ (\\TotI B)_q \\coloneqq \\bigoplus_{i + j =q} B_{i,j},\\quad\\text{and}\\quad (\\TotII B)_q \\coloneqq \\prod_{i + j =q} B_{i,j}, \\]\nrespectively, and where $\\Bdd = \\Bdd_v + \\Bdd_h$ is the total boundary operator consisting of the vertical and horizontal boundary operators $\\Bdd_v$ and $\\Bdd_h$, respectively. \nThe homologies of $\\TotI$ and $\\TotII$ are denoted by $\\H B$ and $\\H \\hat{B}$, respectively. We proceed similarly in cohomology.\\footnote{We use $\\hat{\\cdot}$ because $\\TotII$ is a (graded) completion of $\\TotI$ with respect to the anti-diagonal filtration.}
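To fix conventions for later computations, let us unwind the definitions in the homological case of a right half-plane bicomplex with columns indexed by $i\\in\\N_0$ (the case we will meet below): a chain $c\\in (\\TotII B)_q$ is a possibly infinite sequence\n\\[ c = (c_i)_{i=0}^\\infty, \\qquad c_i \\in B_{i,q-i}, \\qquad (\\Bdd c)_i = \\Bdd_v c_i + \\Bdd_h c_{i+1}, \\]\nwhereas a chain of $(\\TotI B)_q$ has only finitely many non-zero components. Closed chains of $\\TotII$ appear in exactly this form in the proof of Lemma~\\ref{Lem:BddDegrees} below.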
For each $B$ and $B^*$, we consider the vertical and horizontal filtrations which are defined in such a way that they are preserved by all the arrows and that the $k$-th group contains the $k$-th column and the $k$-th row, respectively. More precisely, the vertical filtration~$\\Filtr_{\\VertMRM}^k B$ of $B$ consists of the columns $0$,~$\\dotsc$, $k$, whereas the vertical filtration~$\\Filtr_{\\VertMRM}^k B^*$ of $B^*$ consists of the columns $k$, $k+1$,~$\\dotsc$; the horizontal filtration~$\\Filtr^\\HorMRM_k B$ of $B$ consists of the rows $k$, $k-1$,~$\\dotsc$, whereas the horizontal filtration~$\\Filtr_\\HorMRM^k B^*$ of $B^*$ consists of the rows $k$, $k+1$,~$\\dotsc$\n\nIn order to check that morphisms of bicomplexes induce quasi-isomorphisms of total complexes, we will use the techniques of spectral sequences. Because we work with half-plane bicomplexes, our spectral sequences do not lie in the first quadrant (as those in~\\cite{Weibel1994} do), and the notion of conditional convergence from~\\cite{Boardmann1999} comes in handy. In the following, we recall some basic theory and formulate a proposition about the convergence of some unbounded spectral sequences.\n\nA \\emph{cohomological spectral sequence} is a collection $\\SSPage_r$ of bigraded vector spaces and differentials $\\SSDiff_{r}: \\SSPage_r^{\\bullet\\bullet} \\rightarrow \\SSPage_r^{\\bullet+r,\\bullet-r+1}$ for $r\\in \\N$ such that $\\SSPage_{r+1} = \\H(\\SSPage_r,\\SSDiff_r)$. Let $(C^*,\\Dd)$ be a cochain complex with a decreasing filtration $\\Filtr^s C^*$ (the filtration is assumed to be graded and preserved by~$\\Dd$). For every $s\\in \\Z$, the short exact sequence\n\\[ 0\\longrightarrow \\Filtr^{s+1} C^* \\overset{i}{\\longrightarrow} \\Filtr^s C^* \\overset{j}{\\longrightarrow} \\Gr_s(C^*) \\coloneqq \\Filtr^{s} C^* / \\Filtr^{s+1} C^* \\longrightarrow 0 \\]\ninduces the long exact sequence\n\\[\\dotsb \\longrightarrow \\H^\\bullet(\\Filtr^{s+1} C^*) \\overset{i}{\\longrightarrow}  \\H^\\bullet(\\Filtr^{s} C^*) \\overset{j}{\\longrightarrow} \\H^\\bullet(\\Gr_s C^*) \\overset{\\delta}{\\longrightarrow} \\H^{\\bullet+1}(\\Filtr^{s+1} C^*) \\longrightarrow \\dotsb, \\]\nwhich wraps into the exact couple of bigraded vector spaces\n\\[\\begin{tikzcd}\nA_1 \\coloneqq \\bigoplus_{s\\in \\Z} \\H(\\Filtr^s C^*)[s]  \\arrow{rr}{i} && A_1 \\arrow{dl}{j}\\\\\n & \\SSPage_1 \\coloneqq \\bigoplus_{s\\in \\Z} \\H(\\Gr_s C^*)[s] \\arrow{ul}{\\delta}.\n\\end{tikzcd}\\]\nThis is the so-called \\emph{geometric grading} convention.\\footnote{It is chosen such that $\\SSPage_1^{sd} = \\H(B^{sd},\\Dd_\\VertMRM)$ for the vertical filtration.} We also define $A_1^s \\coloneqq \\H(\\Filtr^s C^*)$ and $\\SSPage_1^s \\coloneqq \\H(\\Gr_s C^*)$. By deriving this triangle (see, e.g., \\cite{Cieliebak2013}), one obtains a spectral sequence $\\SSPage_r$ associated to the filtration.
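As a toy example, consider a $\\Z$-graded cochain complex $(C^*,\\Dd)$ with the canonical filtration $\\Filtr^k_{\\CanMRM} C^* = \\bigoplus_{i\\ge k} C^i$ from (a) of Proposition~\\ref{Prop:ConvOfSpSeq} below. Since $\\Dd$ raises the degree by one, the induced differential on $\\Gr_s(C^*) = C^s$ vanishes, and hence\n\\[ \\SSPage_1^{s,0} = C^s,\\qquad \\SSPage_1^{s,d} = 0\\quad\\text{for } d\\neq 0, \\]\nwith $\\SSDiff_1 = j\\circ\\delta$ being the map induced by $\\Dd$. Consequently, $\\SSPage_2^{s,0} = \\H^s(C^*,\\Dd)$, and the spectral sequence degenerates at the second page because every $\\SSDiff_r$ with $r\\ge 2$ leaves the row $d=0$.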
One defines the $\\SSPage_\\infty$ page (see~\\cite{Boardmann1999}) and studies the convergence of $\\SSPage_r$ to a filtered group $G$. In order to formulate this, one considers the limit $A^\\infty \\coloneqq \\lim_s A^s$, the colimit $A^{-\\infty} \\coloneqq \\colim_s A^s$ and the right derived module for the limit $RA^\\infty \\coloneqq \\lim_s^1 A^s$. We will use the following notions of convergence:\n\\begin{enumerate}\n\\item\\emph{Strong convergence to a filtered group $G$}\\ $:\\Equiv$\\ For each $s\\in \\Z$, we have $\\Gr_s G \\simeq \\SSPage_\\infty^s/\\SSPage_\\infty^{s+1}$ and the filtration on $G$ is exhaustive, Hausdorff and complete (i.e., $G^\\infty = 0$, $G^{-\\infty} = G$ and $RG^\\infty = 0$ for $G^s \\coloneqq \\Filtr^s G$).\n\\item\\emph{Conditional convergence to the colimit $G\\coloneqq A^{-\\infty}$}\\ $:\\Equiv$\\ It holds $A^\\infty = 0$ and $RA^\\infty = 0$.\n\\end{enumerate}\nNote that neither notion implies, in general, the other (see (b) of Remark~\\ref{Rem:SpecSeq}).\n\n\\begin{Proposition}[Convergence of certain unbounded spectral sequences]\\label{Prop:ConvOfSpSeq}\nThe following statements about convergence of spectral sequences hold:\n\\begin{ClaimList}\n\\item For any $\\Z$-graded cochain complex $(C^*,\\Dd)$ with the canonical filtration $\\Filtr^k_{\\CanMRM} C^* \\coloneqq \\bigoplus_{i\\ge k} C^i$, the associated spectral sequence converges strongly and conditionally to the colimit $\\H(C^*)$.\n\\item The spectral sequence associated to the total complex $\\TotI B^*$ of a cohomological half-plane bicomplex $B^*$ with the filtration induced from the horizontal filtration $\\Filtr^k_{\\HorMRM} B^* = \\bigoplus_{i\\in\\Z}\\bigoplus_{j\\ge k} B^{ij}$ converges strongly to the colimit $\\H(\\TotI B^*)$.\n\\item The spectral sequence associated to the total complex $\\TotI B^*$ of a cohomological half-plane bicomplex $B^{*}$ with the filtration induced from the diagonal filtration \n\\[ \\Filtr^k B^* = \\bigoplus_{j-i>k} B^{ij} \\oplus \\bigoplus_{i\\in \\N_0} Z^{i,k+i}, \\]\nwhere $Z^{ij} \\subset B^{i j}$ is such that $\\Dd_{\\HorMRM} Z^{ij} = 0$ and $\\Dd_{\\HorMRM} B^{i-1 j} \\subset Z^{ij}$, converges strongly to the colimit $\\H(\\TotI B^*)$.\n\\item The spectral sequence associated to the total complex $\\TotII B^*$ of a cohomological half-plane bicomplex $B^{*}$ with the filtration induced from the vertical filtration $\\Filtr^k_{\\VertMRM} B^* = \\bigoplus_{i\\ge k}\\bigoplus_{j\\in\\Z} B^{ij}$ converges conditionally to the colimit $\\H(\\TotII B^*)$.\n\\end{ClaimList}\nThe following statements about morphisms of spectral sequences hold:\n\\begin{ClaimList}[resume]\n\\item Let $f$ be a morphism of filtered complexes of types (a), (b) or (c). If it induces an isomorphism of $\\SSPage_r$ for some $r$, then it induces an isomorphism of the target groups.\n\\item Let $f$ be a morphism of filtered complexes of type (d). If it induces an isomorphism of $\\SSPage_r$ for some $r$, then it induces an isomorphism of the target groups.\n\\end{ClaimList}\n\\end{Proposition}\n\n\\begin{proof}\n\\begin{ProofList}\n\\item Strong convergence follows from (b) by embedding $C^*$ as the first column of~$B^*$. Moreover, the computations there show that $A^{-\\infty} = \\H(C^*)$ and $A^\\infty = 0$. The condition $RA^\\infty = 0$ is equivalent to the degreewise completeness of the filtration, which is true as the filtration is degreewise trivial. This shows the conditional convergence.
\\item Let us compute the first page with the geometric bigrading:\n\\begin{align*}\n\\SSPage_1^{sd} & = \\H(\\Gr_s \\TotI B^*,\\Dd_\\HorMRM)[s]^d \\\\\n &= \\H^{d+s}(\\TotI(s\\text{-th row of }B^*),\\Dd_\\HorMRM) \\\\\n &= \\H(B^{d,s},\\Dd_\\HorMRM).\n\\end{align*}\nWe want to use the following theorem:\n\\begin{ProofTheorem}[{\\cite[Theorem~6.1]{Boardmann1999}}]\\label{Thm:Boardman61}\nSuppose that $\\SSPage^s = 0$ for all $s>0$ and $A^{\\infty} = 0$. Then the spectral sequence converges strongly to the colimit $A^{-\\infty}$.\n\\end{ProofTheorem}\nThe proof can be done degreewise (see \\cite{MO336781}), and it can be shown that Theorem~\\ref{Thm:Boardman61} generalizes appropriately under the weaker assumption of ``exiting differentials''. This means that the pages occupy a half-plane and for any fixed $(s,d)$, all but finitely many differentials $\\Dd_r: \\SSPage_r^{sd} \\rightarrow \\SSPage_{r}^{s+r,d-r+1}$ leave the half-plane (and thus vanish). In our case, the groups~$\\SSPage_r^{sd}$ occupy the half-plane $\\{(s,d)\\mid d\\ge 0\\}$, and because $d-r+1 \\to -\\infty$ as $r\\to \\infty$, the condition of exiting differentials is satisfied.\n\nWe still have to check that $A^{\\infty}=0$ and compute $A^{-\\infty}$. Because the colimit is an exact functor, it commutes with $\\H$, and we have\n\\[ A^{-\\infty} = \\colim_s \\H(\\Filtr^s\\TotI B^*) = \\H(\\colim_s\\Filtr^s\\TotI B^*) = \\H\\Bigl(\\bigcup_s \\Filtr^s \\TotI B^*\\Bigr) = \\H(\\TotI B^*). \\]\nWe used here that $\\Filtr$ is exhaustive. The limit $A^\\infty$ can be represented as\n\\[ A^\\infty \\simeq \\Bigl\\{([a_s]) \\in \\prod_{s\\in \\Z} A^s \\mid [a_{s+1}]\\mapsto [a_s]\\Bigr\\}, \\]\nwhere $\\H(\\Filtr^{s+1}\\TotI B^*) \\rightarrow \\H(\\Filtr^s \\TotI B^*)$ is induced by the inclusion $\\Filtr^{s+1}\\TotI B^* \\xhookrightarrow{} \\Filtr^s \\TotI B^*$. Pick $s_0\\in \\Z$ and consider $[a_{s_0}] \\in A^{s_0}$ with a fixed representative $a_{s_0}\\in \\Filtr^{s_0}\\TotI B^*$. Because the cohomological degrees of $a_{s_0}$ are bounded from above, say by $d_0\\in \\Z$, and the filtration is degreewise bounded from below, there is an $s_1 \\ge s_0$ such that $\\Filtr^{s} \\TotI B^* \\cap (\\TotI B^*)^{d} = 0$ for all $s\\ge s_1$ and $d\\le d_0$. Now, we have $[a_{s_1}] \\mapsto [a_{s_0}]$, and hence $[a_{s_0}] = 0$. Because $s_0$ was arbitrary, we get $([a_s]) = 0$. Therefore, it holds $A^\\infty=0$.\n\nAlternatively, a direct proof of (b) can be found in \\cite{Cencelj1998}.\n\n\\item The first page reads\n\\[\\SSPage_1^{sd} = \\H^{s+d}(\\Gr_s \\TotI B^*) = \\H(B^{\\lfloor\\frac{d}{2}\\rfloor,s+\\lceil\\frac{d}{2}\\rceil},\\Dd'), \\]\nwhere $\\Dd'$ is the differential on $\\Gr \\TotI B^*$. We see that the groups $\\SSPage_r^{sd}$ occupy the half-plane $\\{(s,d)\\mid d\\ge 0\\}$, and hence the condition of exiting differentials is satisfied. The groups~$A^\\infty$ and~$A^{-\\infty}$ are computed as in (b).
The strong convergence is again implied by a generalization of Theorem~\\ref{Thm:Boardman61}.\n\nAlternatively, one can modify the direct proof for the horizontal filtration from~\\cite{Cencelj1998}.\n\n\\item We want to use the following theorem:\n\\begin{ProofTheorem}[{\\cite[Theorem~9.2]{Boardmann1999}}]\\label{Thm:Thm92}\nLet $C^*$ be a cochain complex filtered by an exhaustive, Hausdorff and complete filtration. Then the spectral sequence converges conditionally to $\\H(C^*)$.\n\\end{ProofTheorem}\nWe have\n\\[ \\Filtr^k_{\\VertMRM} (\\TotI B^*)^d = \\bigoplus_{i\\ge k} B^{i,d-i}, \\]\nso $\\TotII B^*$ is the completion of $\\TotI B^*$ with respect to $\\Filtr_{\\VertMRM}$; in particular, the filtration is complete. Clearly, it is also Hausdorff and exhaustive, and we can apply Theorem~\\ref{Thm:Thm92}.\n\n\\item This follows from (a), (b), (c) and the following theorem:\n\\begin{ProofTheorem}[{\\cite[Theorem~5.3]{Boardmann1999}}]\nLet $f: C^* \\rightarrow \\bar{C}^*$ be a morphism of filtered cochain complexes. Suppose that $\\SSPage_r$ converges strongly to a filtered group~$G$ and that~$\\bar{\\SSPage}_r$ converges (strongly) to a filtered group~$\\bar{G}$. If~$f$ induces an isomorphism~$\\SSPage_r \\simeq \\bar{\\SSPage}_r$ for some~$r$, then it induces an isomorphism~$G\\simeq\\bar{G}$.\n\\end{ProofTheorem}\n\n\\item We want to use the following theorem:\n\\begin{ProofTheorem}[{\\cite[Theorem~7.2]{Boardmann1999}}]\\label{Thm:Thm72}\nLet $f: C^* \\rightarrow \\bar{C}^*$ be a morphism of filtered cochain complexes. Suppose that $\\SSPage^s = \\bar{\\SSPage}^s = 0$ for all $s<0$ and that the spectral sequences converge conditionally to the colimits $A^{-\\infty}$ and $\\bar{A}^{-\\infty}$, respectively. If $f$ induces isomorphisms $\\SSPage_\\infty \\simeq \\bar{\\SSPage}_\\infty$ and $R\\SSPage_\\infty \\simeq R\\bar{\\SSPage}_\\infty$, then it induces an isomorphism $A^{-\\infty}\\simeq\\bar{A}^{-\\infty}$. \n\\end{ProofTheorem}\nThe first page for the vertical filtration reads:\n\\begin{align*}\n \\SSPage_1^{sd} & = \\H(\\Gr_s \\TotII B^*, \\Dd_{\\VertMRM})[s]^d \\\\\n                & = \\H^{d+s}(\\TotII(s\\text{-th column of }B^*),\\Dd_{\\VertMRM}) \\\\\n                & = \\H(B^{s d},\\Dd_{\\VertMRM}).\n\\end{align*} \nTherefore, the condition $\\SSPage^s = \\bar{\\SSPage}^s = 0$ for all $s<0$ is satisfied.\\footnote{It is again possible to relax this assumption and prove Theorem~\\ref{Thm:Thm72} when the condition of ``entering differentials'' is satisfied. This means that the pages occupy a half-plane and for any fixed $(s,d)$, all but finitely many differentials $\\Dd_r: \\SSPage_r^{s-r,d+r-1} \\rightarrow \\SSPage_{r}^{s,d}$ start outside of the half-plane (and thus vanish). See \\cite{MO336781}.} By (d), conditional convergence is guaranteed. Since both $\\SSPage_\\infty$ and $R\\SSPage_\\infty$ depend only on $\\SSPage_r$ for $r\\ge r_0$, for any $r_0$ (see \\cite[p.~16]{Boardmann1999}), the rest of the assumptions of Theorem~\\ref{Thm:Thm72} are fulfilled.\\qedhere\n\\end{ProofList}\n\\end{proof}\n\n\n\\begin{Remark}[Differences to first-quadrant bicomplexes]\\phantomsection\\label{Rem:SpecSeq}\n\\begin{RemarkList}\n\\item Given a half-plane bicomplex $B$, we have\n\\[ (\\TotI B)^{\\GD} \\simeq \\TotII B^*, \\]\nwhere $B^* = \\bigoplus_{i,j} B_{ij}^*$ is the ``pointwise dual'' to $B$. This is why we have to consider homology and cohomology separately and cannot just dualize the results.\n\\item The vertical filtration of $B^*$ might not converge strongly to $\\H(\\TotI B^*)$.
Indeed, let\n\\begin{center}\n$B^*: \\quad \\begin{tikzcd}\n \\R & 0 & 0  \\\\\n \\R\\arrow{u}{\\Id}\\arrow{r}{\\Id} & \\R & 0   \\\\\n 0 & \\R\\arrow{u}{\\Id}\\arrow{r}{\\Id} & \\dotsb\n\\end{tikzcd}$.\n\\end{center}\nThen $\\H(\\TotI B^*) = \\R$ (the $\\R$ in the first column), but $\\SSPage_1 = 0$ because every column is exact. Notice that $\\H(\\TotII B^*) = 0$. Taking $0$'s instead of $\\Id$'s in the definition of $B^*$, we see that the horizontal filtration does not converge conditionally to $\\H(\\TotI B^*)$ because its filtration by $A^s$ is incomplete.\n\\qedhere\n\\end{RemarkList}\n\\end{Remark}\n\nWe will work with the following bicomplexes.\n\n\\begin{Definition}[Bicomplexes for cyclic (co)homology]\\label{Def:CycBico}\nLet $\\mathcal{A} = (V, (\\mu_k))$ be an $\\AInfty$-algebra. \\emph{Loday's cyclic bicomplexes} are defined by\n\\begin{center}\n$\\LodCycBi(\\mathcal{A}):\\ $\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n\\arrow[d] & \\arrow[d] & \\arrow[d] & \\arrow[d] &{} \\\\\n\\HC_2 V \\arrow[d]{}{\\Hd} & \\HC_2V \\arrow[l]{}{\\Id - \\CycPermOp} \\arrow[d]{}{-\\Hd'} & \\HC_2V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\CountOp} & \\HC_2V \\arrow[d]{}{-\\Hd'} \\arrow[l]{}{\\Id - \\CycPermOp} &{}\\arrow[l]{}{\\CountOp}\\\\ \n\\HC_1V \\arrow[d]{}{\\Hd} & \\HC_1V \\arrow[l]{}{\\Id - \\CycPermOp} \\arrow[d]{}{-\\Hd'} & \\HC_1V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\CountOp} & \\HC_1V \\arrow[d]{}{-\\Hd'} \\arrow[l]{}{\\Id - \\CycPermOp} &{}\\arrow[l]{}{\\CountOp}\\\\\n\\HC_0V \\arrow[d]{}{\\Hd} & \\HC_0V \\arrow[l]{}{\\Id - \\CycPermOp} \\arrow[d]{}{-\\Hd'} & \\HC_0V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\CountOp} & \\HC_0V \\arrow[d]{}{-\\Hd'} \\arrow[l]{}{\\Id - \\CycPermOp} &{}\\arrow[l]{}{\\CountOp}\\\\\n {} & {} & {} & {} & {}\n\\end{tikzcd}\n\\end{center}\nand\n\\begin{center}\n$\\LodCycBi^*(\\mathcal{A}):\\ $\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{} & {} & {} & {} & {}\\\\\n \\HC^2V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^2V \\arrow[u]{}{-{\\Hd'}^*} \\arrow[r]{}{\\CountOp^*} & \\HC^2V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^2V \\arrow[u]{}{-{\\Hd'}^*} \\arrow[r]{}{\\CountOp^*} &{}\\\\ \n \\HC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^1V \\arrow[u]{}{-{\\Hd'}^*} \\arrow[r]{}{\\CountOp^*} & \\HC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^1V \\arrow[u]{}{-{\\Hd'}^*} \\arrow[r]{}{\\CountOp^*} &{}\\\\\n \\HC^0V \\arrow[u]{}{\\Hd^*}\\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^0V \\arrow[u]{}{-{\\Hd'}^*} \\arrow[r]{}{\\CountOp^*} & \\HC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Id - \\CycPermOp^*} & \\HC^0V \\arrow[u]{}{-{\\Hd'}^*}\\arrow[r]{}{\\CountOp^*} &{} \\\\\n\\arrow[u]{}{\\Hd^*} & \\arrow[u]{}{-{\\Hd'}^*} & \\arrow[u]{}{\\Hd^*} & \\arrow[u]{}{-{\\Hd'}^*} &{} \n\\end{tikzcd}.\n\\end{center}\nClearly, $\\LodCycBi^*$ is the ``pointwise'' graded dual to $\\LodCycBi$; the same applies to the other bicomplexes defined below.\n\nLet $\\NOne$ be a strict unit for $\\mathcal{A}$.
We define the \\emph{Connes' operator} $\\Cd: \\HC V \\rightarrow \\HC V$ by \n\\begin{equation}\\label{Eq:ConnesOperator}\n\\Cd\\coloneqq (\\Id-\\CycPermOp)\\circ\\InsOneOp_1\\circ\\CountOp.\n\\end{equation}\n\\emph{Connes' cyclic bicomplexes} are defined by\n\\begin{center}\n$\\ConCycBi(\\mathcal{A}):$\n\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}\\arrow[d]{}{\\Hd}&{}\\arrow[d]{}{\\Hd}&{}\\arrow[d]{}{\\Hd}&{} \\\\\n\\HC_2V \\arrow[d]{}{\\Hd} & \\HC_1V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\Cd}& \\HC_0V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\Cd}  &{}\\arrow[l]{}{\\Cd}\\\\\n\\HC_1V \\arrow[d]{}{\\Hd} & \\HC_0V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\Cd}  & \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\Cd}  \\HC_{-1}V  &{}\\arrow[l]{}{\\Cd} \\\\\n\\HC_0V \\arrow[d]{}{\\Hd} & \\HC_{-1}V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\Cd}  & \\HC_{-2}V \\arrow[d]{}{\\Hd}  \\arrow[l]{}{\\Cd}  & {} \\arrow[l]{}{\\Cd} \\\\\n{}&{}&{} &{}\n\\end{tikzcd} \n\\end{center}\nand\n\\begin{center}\n$\\ConCycBi^{*}(\\mathcal{A}):$\n\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}&{}&{} &{}\\\\\n\\HC^2V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}& \\HC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*} & \\HC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}  &{}\\\\\n\\HC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*} & \\HC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}  & \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}  \\HC^{-1}V  &{} \\\\\n\\HC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}  & \\HC^{-1}V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\Cd^*}  & \\HC^{-2}V \\arrow[u]{}{\\Hd^*}  \\arrow[r]{}{\\Cd^*}  & {} \\\\\n{}\\arrow[u]{}{\\Hd^*}&{}\\arrow[u]{}{\\Hd^*}&{}\\arrow[u]{}{\\Hd^*}&{}\n\\end{tikzcd} \n\\end{center}\nWe define the \\emph{normalized Connes' operator} $\\NCd: \\HNC V \\rightarrow \\HNC V$ by\\footnote{The definition does not depend on the chosen section of $\\bar{p}: \\HC V \\rightarrow \\HNC V$.}\n\\begin{equation}\\label{Eq:NCd}\n\\NCd \\coloneqq \\NormProj\\circ\\Cd = \\NormProj\\circ \\InsOneOp_1 \\circ \\CountOp.\n\\end{equation}\nThe \\emph{normalized Connes' cyclic bicomplexes} are defined by\n\\begin{center}\n$\\NConCycBi(\\mathcal{A}):$\n\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}\\arrow[d]{}{\\Hd}&{}\\arrow[d]{}{\\Hd}&{}\\arrow[d]{}{\\Hd}&{} \\\\\n\\HNC_2V \\arrow[d]{}{\\Hd} & \\HNC_1V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\NCd}& \\HNC_0V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\NCd}  &{}\\arrow[l]{}{\\NCd}\\\\\n\\HNC_1V \\arrow[d]{}{\\Hd} & \\HNC_0V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\NCd}  & \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\NCd}  \\HNC_{-1}V  &{}\\arrow[l]{}{\\NCd} \\\\\n\\HNC_0V \\arrow[d]{}{\\Hd} & \\HNC_{-1}V \\arrow[d]{}{\\Hd} \\arrow[l]{}{\\NCd}  & \\HNC_{-2}V \\arrow[d]{}{\\Hd}  \\arrow[l]{}{\\NCd}  & {} \\arrow[l]{}{\\NCd} \\\\\n{}&{}&{} &{}\n\\end{tikzcd} \n\\end{center}\nand\n\\begin{center}\n$\\NConCycBi^*(\\mathcal{A}):$\n\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}&{}&{} &{}\\\\\n\\HNC^2V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}& \\HNC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*} & \\HNC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  &{}\\\\\n\\HNC^1V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*} & \\HNC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  & \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  \\HNC^{-1}V  &{} \\\\\n\\HNC^0V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  & \\HNC^{-1}V \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  & \\HNC^{-2}V \\arrow[u]{}{\\Hd^*}  \\arrow[r]{}{\\NCd^*}  & {} 
\\\\\n{}\\arrow[u]{}{\\Hd^*}&{}\\arrow[u]{}{\\Hd^*}&{}\\arrow[u]{}{\\Hd^*}&{}\n\\end{tikzcd}.\n\\end{center}\nThe \\emph{reduced Connes' cyclic bicomplexes} $\\ConCycBi^{\\RedMRM}$ and $\\ConCycBi_{\\RedMRM}^*$ are defined by replacing $\\HNC V$ and $\\HNC^*V$ by $\\HC^{\\RedMRM} V$ and $\\HC_{\\RedMRM}^* V$, respectively.\n\nThe coordinate $(0,0)$ in the bicomplexes above always corresponds to the group at the bottom-left position in the figures ($\\HC_0 V$, resp.\\ $\\HC^0 V$, in the first column). \n\\end{Definition}\nNote that another convention of drawing homological bicomplexes in the left half-plane and cohomological bicomplexes in the right half-plane might be more natural.\n\\begin{Remark}[Mixed complexes]\\label{Rem:MixedCompl}\nOne can equivalently encode the data of a cohomological Connes bicomplex into that of a mixed complex $(\\HC^*,\\Hd^*,\\Cd^*)$. In general, it is a graded vector space $\\HC^*$ with a differential $\\Hd^*$, $\\Abs{\\Hd^*}=1$, and a boundary operator $\\Cd^*$, $\\Abs{\\Cd^*}=-1$, which anticommute. One introduces the formal symbol~$\\FormU$ of degree $\\Abs{\\FormU}= 2$ and considers the polynomial ring $\\HC^*[\\FormU]$ in $\\FormU$ with values in~$\\HC^*$ and the ring of power series $\\HC^*[[\\FormU]]$ with the differential $\\Hd^* + \\Cd^*\\FormU$. Clearly, the former is quasi-isomorphic to $\\TotI(\\ConCycBi^{*})$ and the latter to $\\TotII(\\ConCycBi^{*})$ (columns of $\\ConCycBi^{*}$ are indexed with non-negative powers of $\\FormU$).\nAltogether, there are seven versions $[\\FormU]$, $[[\\FormU]]$, $[\\FormU^{-1}]$, $[\\FormU,\\FormU^{-1}]$, $[[\\FormU^{-1},\\FormU]$, $[\\FormU^{-1},\\FormU]]$, $[[\\FormU,\\FormU^{-1}]]$, whose relation is studied in \\cite{Cieliebak2018b}. Some of these are related to periodic and negative versions of cyclic homology (see~\\cite{LodayCyclic}).\n\\end{Remark}
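As a quick consistency check of the conventions in Remark~\\ref{Rem:MixedCompl}, note that the operator $\\Hd^* + \\Cd^*\\FormU$ on $\\HC^*[[\\FormU]]$ is homogeneous of degree $\\Abs{\\Hd^*} = \\Abs{\\Cd^*} + \\Abs{\\FormU} = 1$ (we treat $\\FormU$ as a central formal variable) and squares to zero:\n\\[ (\\Hd^* + \\Cd^*\\FormU)^2 = (\\Hd^*)^2 + (\\Hd^*\\circ\\Cd^* + \\Cd^*\\circ\\Hd^*)\\FormU + (\\Cd^*)^2\\FormU^2 = 0 \\]\nbecause $\\Hd^*$ and $\\Cd^*$ are anticommuting differentials.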
In the following proofs, we might not need the full strength of Proposition~\\ref{Prop:ConvOfSpSeq} since the spectral sequences for the bicomplexes from Definition~\\ref{Def:CycBico} mostly collapse already on the second page (see (iii) of Questions~\\ref{Q:OpenProbAInftx}).
\\begin{Lemma}[Loday's cyclic bicomplexes and cyclic homology]\\label{Lem:LodCycBiCycHom}\nLet $\\Alg = (V,(\\mu_k))$ be an $\\AInfty$-algebra. The projection $p^\\lambda: \\LodCycBi \\rightarrow \\HC^\\lambda$ to the first column modulo $\\im(\\Id-\\CycPermOp)$ is a chain map and induces an isomorphism $\\H(\\widehat{\\LodCycBi}) \\simeq \\H^\\lambda(\\Alg)$. The inclusion $\\iota_\\lambda: \\HC_\\lambda^{*} \\rightarrow \\LodCycBi^{*}$ into the first column is a chain map and induces an isomorphism $\\H_\\lambda^*(\\Alg) \\simeq \\H(\\LodCycBi^*)$.\n\\end{Lemma}\n\\begin{proof}\nThe fact that $p^\\lambda$ and $\\iota_\\lambda$ are chain maps is obvious. We consider the horizontal filtration $\\Filtr^\\HorMRM$ of $\\LodCycBi^*$. Because of CP1, the rows are acyclic, and we see that the only non-zero terms of the first page of the corresponding spectral sequence are \n\\[ \\SSPage_1^{0 d} =  \\HC^d V / \\ker(\\Id - \\CycPermOp^*). \\] The differential $\\Dd_1$ is easily checked to be $\\Hd^*$, and the inclusion $\\iota_\\lambda$ induces the isomorphism $\\HC_{\\lambda}^d\\simeq \\HC^d V / \\ker(\\Id - \\CycPermOp^*)$. Considering the canonical filtration on $\\HC^*_\\lambda$, claims (a), (b) and~(e) of Proposition~\\ref{Prop:ConvOfSpSeq} apply.\n\nAs for homology, we consider the degree reversed cochain complex $\\DegRev(\\TotII \\LodCycBi)$ and the reversed filtration $\\DegRev(\\Filtr^\\HorMRM)_s = \\Filtr^\\HorMRM_{-s}$. For the corresponding cohomological spectral sequence~$\\tilde{\\SSPage}_r$, we have\n\\begin{align*}\n\\tilde{\\SSPage}_1^{sd} &= \\H^{s+d}(\\DegRev(\\Filtr)_{s}\\DegRev(\\TotII \\LodCycBi)/\\DegRev(\\Filtr)_{s+1}\\DegRev(\\TotII\\LodCycBi)) \\\\\n&=\\H_{-s-d}(\\TotII(-s\\text{-th row of }\\LodCycBi)) \\\\\n&=\\H(B_{-d,-s},\\Bdd_\\HorMRM).\n\\end{align*}\nTherefore, the only non-zero groups on the first page are $\\tilde{\\SSPage}_1^{s0} = \\HC_{-s}/\\im(\\Id-\\CycPermOp)$, and the spectral sequence converges conditionally to $\\DegRev(\\H(\\widehat{\\LodCycBi}))$. Clearly, $p^\\lambda$ induces an isomorphism of the first pages, where on $\\DegRev(\\HC^\\lambda)$ we consider the canonical filtration. Proposition~\\ref{Prop:ConvOfSpSeq} and its proof finish the argument.\n\\end{proof}
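Let us record the regrading trick used in the proof above (and again in the proofs of Lemma~\\ref{Lem:ConNormVer} and Lemma~\\ref{Lem:ReducedCyclic}). With the convention\n\\[ (\\DegRev C)^d \\coloneqq C_{-d},\\qquad \\DegRev(\\Filtr)_s \\coloneqq \\Filtr_{-s},\\qquad \\H^d(\\DegRev C) = \\H_{-d}(C), \\]\na chain complex becomes a cochain complex with the same differential, an increasing filtration becomes a decreasing one, and the cohomological convergence statements of Proposition~\\ref{Prop:ConvOfSpSeq} can be applied to the homological total complexes as well.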
Claim (a) of the following is similar to \\cite[Lemma 2.12]{Cieliebak2018b}.\n\n\\begin{Lemma}[No long chains in homology for bounded degrees] \\label{Lem:BddDegrees}\nLet $\\Alg=(V,(\\mu_k),\\NOne)$ be a strictly unital $\\AInfty$-algebra. Suppose that $V$ has bounded degrees. Then the canonical inclusion $\\TotI \\hookrightarrow \\TotII$ induces the following isomorphisms: \n\\begin{ClaimList}\n\\item $\\H\\bigl(\\LodCycBi(\\Alg)\\bigr) \\simeq \\H\\bigl(\\widehat{\\LodCycBi}(\\Alg)\\bigr)$,\n\\item $\\H\\bigl(\\NConCycBi(\\Alg)\\bigr) \\simeq \\H\\bigl(\\widehat{\\NConCycBi}(\\Alg)\\bigr)$.\n\\end{ClaimList} \n\\end{Lemma}\n\\begin{proof}\n\\begin{ProofList}\n\\item We will denote $\\LodCycBi(\\Alg)$, $\\NConCycBi(\\Alg)$ and $\\HC V$ simply by $\\LodCycBi$, $\\NConCycBi$ and $\\HC$, respectively. Consider the (increasing) filtration $\\Filtr_{\\WeightMRM}$ of $\\HC$ by weights. We first prove the following subclaim:\n\\begin{SubClaim}[Weight normalization]\\label{SubClaim:LodCycBi}\nLet $c = (c_i)_{i=0}^\\infty\\in {\\TotII}_k(\\LodCycBi)$ be a closed chain of degree~$k$; i.e., for all $i\\in\\N_0$, we have $c_i \\in \\HC_{k-i}$, and the relations\n\\[ \\Hd c_{2i} + (\\Id-\\CycPermOp)c_{2i+1} = 0 \\quad\\text{and}\\quad -\\Hd' c_{2i+1} + \\CountOp c_{2i+2} = 0\\]\nhold. Suppose that we are given $j\\ge 1$ and $n_0\\in \\N$ such that $c_{j-1}\\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-j+1}$. Then we can construct $\\tilde{c}_j\\in \\Filtr^{n_0}_{\\WeightMRM}\\HC_{k-j}$, $\\tilde{c}_{j+1}\\in \\HC_{k-j-1}$ and $\\tilde{z}_{j+1}\\in \\HC_{k-j}$ such that if we define\n\\begin{equation}\\label{Eq:DefOfChains}\n\\tilde{c}_i \\coloneqq \\begin{cases}\n    \\tilde{c}_j & \\text{for } i=j, \\\\\n    \\tilde{c}_{j+1} & \\text{for } i=j+1, \\\\\n    c_i & \\text{otherwise}\n\\end{cases}\\quad\\text{and}\\quad z_i \\coloneqq \\begin{cases} \n\\tilde{z}_{j+1} & \\text{for }i = j+ 1,\\\\\n0 & \\text{otherwise},\n\\end{cases}\n\\end{equation}\nthen $\\tilde{c}\\coloneqq (\\tilde{c}_i)_{i=0}^\\infty$ is a closed chain and $z\\coloneqq(z_i)_{i=0}^\\infty$ satisfies $\\Bdd z = c - \\tilde{c}$. By repeating this procedure inductively, we obtain chains $c'\\in\\TotII_k(\\LodCycBi)$ and $z'\\in \\TotII_{k+1}(\\LodCycBi)$ such that $c'_i \\in \\Filtr^{n_0}_{\\WeightMRM}\\HC_{k-i}$ for all $i\\ge j$ and $c-c' = \\Bdd z'$.\n\\end{SubClaim}\n\\begin{proof}[Proof of the Subclaim]\n\\begin{figure}\n\\centering\n\\begin{tikzcd}\n {} & c_j, \\tilde{c}_j\\in\\HC_{k-j} \\arrow{l}{\\Id-\\CycPermOp}\\arrow{d}{-\\Hd'} & \\arrow{l}{\\CountOp}\\tilde{z}_{j+1}\\in \\HC_{k-j}\\arrow{d}{\\Hd} \\\\\n {} & {} & \\arrow{l}{\\CountOp}c_{j+1},\\tilde{c}_{j+1}\\in\\HC_{k-j-1} \\arrow{d}{\\Hd} \\\\\n {} & {} & {} \n\\end{tikzcd}\n\\caption[Illustration of weight normalization in Loday's cyclic bicomplex.]{Positions of the elements in $\\LodCycBi$ for $j$ odd.}\n\\label{Fig:PosOfElLodCycBi}\n\\end{figure}\nWe will assume that $j$ is odd; the proof is analogous for $j$ even with the roles of $(\\Id-\\CycPermOp)$ and $\\CountOp$, resp.~$\\Hd$ and $-\\Hd'$ switched. The situation is depicted in Figure~\\ref{Fig:PosOfElLodCycBi}. Because $c_{j-1}\\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-j+1}$ and $\\Hd$ does not increase weights, we have $\\Hd c_{j-1} \\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-j}$. Since $c$ is closed, we have $(\\Id-\\CycPermOp)c_{j} = - \\Hd c_{j-1} \\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-j}$. Therefore, there is a $\\tilde{c}_j \\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-j}$ such that $(\\Id-\\CycPermOp)\\tilde{c}_j = (\\Id-\\CycPermOp)c_j$. As $c_j - \\tilde{c}_j \\in \\ker(\\Id-\\CycPermOp) = \\im \\CountOp$, we have $c_j - \\tilde{c}_j = \\CountOp \\tilde{z}_{j+1}$ for some $\\tilde{z}_{j+1}\\in \\HC_{k-j}$. We define $\\tilde{c}_{j+1}\\coloneqq c_{j+1} - \\Hd \\tilde{z}_{j+1}$.
The following relations hold:\n\\begin{EqnList}\n\\item $(\\Id-\\CycPermOp)\\tilde{c}_j = (\\Id-\\CycPermOp)c_j$,\n\\item $\\begin{aligned}[t]\n- \\Hd' \\tilde{c}_{j} + \\CountOp \\tilde{c}_{j+1} &= - \\Hd' \\tilde{c}_{j} + \\CountOp c_{j+1} - \\CountOp \\Hd \\tilde{z}_{j+1} \\\\\n &= - \\Hd' \\tilde{c}_{j} + \\CountOp c_{j+1} - \\Hd'\\CountOp \\tilde{z}_{j+1}  \\\\ \n&= - \\Hd' \\tilde{c}_{j} + \\CountOp c_{j+1}  - \\Hd'(c_j - \\tilde{c}_j) \\\\ & = -\\Hd'c_j + \\CountOp c_{j+1} \\\\\n& = 0,\n\\end{aligned}$\n\\item $\\Hd \\tilde{c}_{j+1} = \\Hd c_{j+1}$,\n\\item $\\CountOp \\tilde{z}_{j+1} = c_j - \\tilde{c}_j$,\n\\item $\\Hd \\tilde{z}_{j+1} = c_{j+1}- \\tilde{c}_{j+1}$.\n\\end{EqnList}\nThe relations I--III show that $\\tilde{c}$ is closed and the relations IV--V that $\\Bdd z = c - \\tilde{c}$.\\footnote{In fact, $\\Bdd \\tilde{c} = 0$ follows from $\\Bdd c = 0$ and $\\Bdd z = c - \\tilde{c}$.}\n\\renewcommand{\\qed}{\\hfill\\textit{(Subclaim) }$\\square$}\n\nStarting with $c^{1} \\coloneqq \\tilde{c}$ and $z^{1} \\coloneqq z$, we repeat the construction above to produce the telescopic sequence of homotopies\n\\begin{align*}\nc - c^{1} & = \\Bdd z^{1} \\\\\nc^{1} - c^{2} & = \\Bdd z^{2} \\\\\n\\dotsb &= \\dotsb \n\\end{align*}\nsuch that $c^l_i\\in \\Filtr^{n_0}_{\\WeightMRM} \\HC_{k-i}$ for all $j\\le i\\le j+l$. Since the components stabilize columnwise, the limit chain $c'\\coloneqq \\lim_{l\\to\\infty} c^{l} \\in \\TotII\\LodCycBi$ is well-defined and has the property that $c'_i \\in \\Filtr^{n_0}_{\\WeightMRM}\\HC_{k-i}$ for all $i\\ge j$, and the limit homotopy $z' \\coloneqq \\sum_{l=1}^\\infty z^l \\in \\TotII \\LodCycBi$ converges and satisfies $\\Bdd z' = c-c'$.\n\\end{proof}\n\nWe will now prove surjectivity of the map on homology induced by the inclusion $\\TotI \\hookrightarrow \\TotII$. Given $[c]\\in \\H_k(\\widehat{\\LodCycBi})$, using the Subclaim, we can assume that there is an $n_0\\in\\N$ such that $c_i\\in \\Filtr^{n_0}_{\\WeightMRM}\\HC_{k-i}$ for all $i\\in\\N_0$. However, we have $\\Abs{c_i} = -k+i-1$ for the degrees in $\\B V$, and because the degrees of $V$ are bounded, the components $c_i$ eventually, as $i\\to \\infty$, reach degrees which cannot be realized by words of at most $n_0$ vectors. Therefore, there is an $i_0\\in \\N_0$ such that $c_i = 0$ for all $i\\ge i_0$; this means that $c \\in \\TotI \\LodCycBi$.\n\nTo show injectivity of the induced map on homology, suppose that $c\\in \\TotI \\LodCycBi$ satisfies $c=\\Bdd z$ for some $z\\in \\TotII \\LodCycBi$. Let $i_0\\in \\N_0$ be such that $c_i = 0$ for all $i\\ge i_0$. We use the Subclaim to alter $z$ and obtain a chain $\\tilde{z}\\in\\TotI \\LodCycBi$ such that $\\tilde{z}_{i} = z_i$ for $i\\le i_0$ and $\\Bdd \\tilde{z} = c$. This shows injectivity.\n\n\\item We will prove an analogue of the Subclaim from (a):\n\nGiven a closed $c = (c_i)_{i=0}^\\infty \\in \\TotII_k \\NConCycBi$, every $c_i \\in \\HNC_{k-2i} V$ can be written as \n\\[ c_i = \\bar{c}_i + \\NOne \\hat{c}_i \\]\nfor unique $\\bar{c}_i\\in \\HC_{k-2i} \\bar{V}$ and $\\hat{c}_i \\in \\HC_{k-2i-1} \\bar{V}$.
Using strict unitality, we have $\\Hd(\\NOne\\hat{c}_i) = (\\Id-\\CycPermOp)\\hat{c}_i - \\NOne \\Hd'\\hat{c}_i$, and hence\n\\begin{align*}\n   \\Hd c_i &=  \\Hd\\bar{c}_i + (\\Id-\\CycPermOp)\\hat{c}_i - \\NOne \\Hd'\\hat{c}_i, \\\\\n   \\NCd c_i &= \\NOne \\CountOp \\bar{c}_i.\n\\end{align*}\nFor the second equality, recall the definition \\eqref{Eq:NCd} and note that the $\\NormProj$ in front ``kills'' any input of $\\NCd$ containing at least one $\\NOne$. We see that $\\Bdd c = 0$ is equivalent to $\\Hd c_i = - \\NCd c_{i+1}$, which is equivalent to\n\\[ \\begin{aligned}\n\\Hd \\bar{c}_i + (\\Id-\\CycPermOp)\\hat{c}_i & = 0\\quad\\text{and}  \\\\\n\\Hd'\\hat{c}_i &= \\CountOp \\bar{c}_{i+1}\n\\end{aligned} \\] \nfor all $i\\in \\N_0$.\n\\begin{figure}\n\\centering\n\\begin{tikzcd}\nc_{j-1}\\in\\HNC_{k-2j+2}\\arrow{d}{\\Hd} & \\arrow{l}{\\NCd} z_j\\in \\HNC_{k-2j+1} \\arrow{d}{\\Hd} & \\\\\n{} & \\arrow{l}{\\NCd} c_j, \\tilde{c}_j \\in \\HNC_{k-2j} \\arrow{d}{\\Hd} & \\arrow{l}{\\NCd} z_{j+1}\\in\\HNC_{k-2j-1}\\arrow{d}{\\Hd} \\\\\n{} & {} & \\arrow{l}{\\NCd} c_{j+1}, \\tilde{c}_{j+1} \\in \\HNC_{k-2j-2}\n\\end{tikzcd}\n\\caption[Illustration of weight normalization in Connes' bicomplex.]{Positions of the elements in $\\NConCycBi$.}\n\\label{Fig:PosOfElConCycBi}\n\\end{figure}\n\nSuppose that $c_{j-1} \\in \\Filtr^{n_0}_{\\WeightMRM} \\HNC_{k-2 j + 2}$ for some $j\\ge 1$ and $n_0\\in \\N_0$. Then $\\bar{c}_{j-1}\\in\\Filtr^{n_0}_{\\WeightMRM}\\HC_{k-2 j + 2}\\bar{V}$ and $\\hat{c}_{j-1}\\in \\Filtr^{n_0-1}_{\\WeightMRM}\\HC_{k-2j+1}\\bar{V}$. Because $\\Hd'\\hat{c}_{j-1} = \\CountOp \\bar{c}_j$, we can find $\\bar{d}_{j} \\in \\Filtr^{n_0-1}_{\\WeightMRM} \\HC_{k-2j}\\bar{V}$ such that $\\CountOp \\bar{d}_{j} = \\Hd' \\hat{c}_{j-1}$. Because $\\bar{d}_j - \\bar{c}_j \\in \\ker \\CountOp = \\im (\\Id-\\CycPermOp)$, we can find $\\hat{z}_j \\in \\HC_{k-2j}\\bar{V}$ such that $(\\Id-\\CycPermOp)\\hat{z}_j = \\bar{d}_{j} - \\bar{c}_{j}$. We compute\n\\begin{align*}\n \\Hd \\bar{d}_j & = \\Hd \\bigl(\\bar{c}_j + (\\Id-\\CycPermOp)\\hat{z}_j\\bigr) \\\\\n & = - (\\Id-\\CycPermOp)\\hat{c}_j + \\Hd (\\Id-\\CycPermOp)\\hat{z}_j \\\\\n & = - (\\Id-\\CycPermOp)\\hat{c}_j + (\\Id-\\CycPermOp) \\Hd' \\hat{z}_j \\\\\n & = - (\\Id-\\CycPermOp)\\bigl(\\hat{c}_j - \\Hd'\\hat{z}_j\\bigr).\n\\end{align*}\nBecause $\\bar{d}_j \\in \\Filtr^{n_0-1}_{\\WeightMRM}\\HC_{k-2j}\\bar{V}$ and $\\Hd$ does not increase the filtration, we can find $\\hat{d}_j \\in \\Filtr^{n_0-1}_{\\WeightMRM}\\HC_{k-2j-1}\\bar{V}$ such that $(\\Id-\\CycPermOp)\\hat{d}_j = - \\Hd \\bar{d}_j$. Now, $\\hat{d}_j - (\\hat{c}_j - \\Hd'\\hat{z}_j) \\in \\ker (\\Id-\\CycPermOp) = \\im \\CountOp$, and hence there is a $\\bar{z}_{j+1}\\in \\HC_{k-2j-1}\\bar{V}$ such that $\\CountOp \\bar{z}_{j+1} = \\hat{d}_j - (\\hat{c}_j - \\Hd'\\hat{z}_j)$.
We define the following elements:\n\\begin{align*}\n \\tilde{c}_j &\\coloneqq c_j + \\NCd \\bar{z}_{j+1} + \\Hd(\\NOne \\hat{z}_j) \\\\\n & = c_j + \\NCd \\bar{z}_{j+1} + (\\Id-\\CycPermOp)\\hat{z}_j - \\NOne \\Hd'\\hat{z}_j \\\\\n & = c_j + \\NOne \\CountOp \\bar{z}_{j+1} + (\\Id-\\CycPermOp)\\hat{z}_j - \\NOne\\Hd'\\hat{z}_j \\\\\n & =  c_j + \\NOne \\hat{d}_j -\\NOne \\hat{c}_j + \\NOne \\Hd'\\hat{z}_j + \\bar{d}_j - \\bar{c}_j - \\NOne\\Hd'\\hat{z}_j\\\\\n & = \\bar{d}_j + \\NOne \\hat{d}_j, \\\\\n \\tilde{c}_{j+1} & \\coloneqq c_{j+1} + \\Hd \\bar{z}_{j+1}, \\\\\n \\tilde{z}_j & \\coloneqq \\NOne \\hat{z}_j, \\\\\n \\tilde{z}_{j+1} &\\coloneqq \\bar{z}_{j+1}.\n\\end{align*}\nThe following relations hold:\n\\begin{EqnList}\n\\item $\\NCd \\tilde{c}_j = \\NCd c_j$,\n\\item $\\Hd \\tilde{c}_j = \\Hd c_j + \\Hd \\NCd \\bar{z}_{j+1} = - \\NCd c_{j+1} - \\NCd  \\Hd \\bar{z}_{j+1} = - \\NCd \\tilde{c}_{j+1}$,\n\\item $\\Hd \\tilde{c}_{j+1} = \\Hd c_{j+1}$,\n\\item $\\NCd \\tilde{z}_j = \\NCd \\NOne \\hat{z}_j = 0$,\n\\item $\\Hd \\tilde{z}_j = \\tilde{c}_j - c_j - \\NCd \\tilde{z}_{j+1}$,\n\\item $\\Hd \\tilde{z}_{j+1} = \\tilde{c}_{j+1} - c_{j+1}$.\n\\end{EqnList}\nRelations I--III show that $\\tilde{c}$ is closed, and relations IV--VI show that $\\Bdd z = \\tilde{c} - c$. Here $\\tilde{c}$ is defined as in \\eqref{Eq:DefOfChains} and $z$ has one more term:\n\\[ z_i \\coloneqq \\begin{cases} \\tilde{z}_j & \\text{for } i = j, \\\\ \\tilde{z}_{j+1} & \\text{for } i = j+1, \\\\\n0 & \\text{otherwise}. \\end{cases} \\]\nSince $\\tilde{c}_j = \\bar{d}_j + \\NOne \\hat{d}_j$, $\\bar{d}_j\\in \\Filtr^{n_0-1}_{\\WeightMRM}\\HC_{k-2j}\\bar{V}$ and $\\hat{d}_j\\in \\Filtr^{n_0-1}_{\\WeightMRM}\\HC_{k-2j-1}\\bar{V}$, we have $\\tilde{c}_j \\in \\Filtr^{n_0}_{\\WeightMRM} \\HNC_{k-2j}$.\n\nWith the recursive step at hand, the rest can be done in the same way as in (a). \\qedhere
\\end{ProofList}\n\\end{proof}\n\n\\begin{Lemma}[Loday's and Connes' bicomplexes are quasi-isomorphic]\\label{Lem:LodConCycBi}\nLet $\\Alg=(V,(\\mu_k),\\NOne)$ be a strictly unital $\\AInfty$-algebra. The map\n\\begin{align*}\nI: \\TotII \\ConCycBi &\\longrightarrow \\TotII\\LodCycBi\\\\\n(c_0, c_1, c_2, \\dotsc) &\\longmapsto (c_0, \\InsOneOp_1 \\CountOp c_1, c_1, \\InsOneOp_1 \\CountOp c_2, c_2, \\dotsc )\n\\end{align*}\nis a chain map inducing the isomorphisms \n\\[\\H( \\widehat{\\ConCycBi})\\simeq \\H(\\widehat{\\LodCycBi})\\quad\\text{and}\\quad\\H(\\ConCycBi)\\simeq\\H(\\LodCycBi).\\] Analogously, the map \n\\begin{align*}\nP: \\TotII\\LodCycBi^* &\\longrightarrow \\TotII \\ConCycBi^* \\\\\n(\\psi_0, \\psi_1,\\psi_2, \\dotsc )&\\longmapsto (\\psi_0, \\psi_2 + \\CountOp^* \\InsOneOp_1^* \\psi_1, \\psi_4 + \\CountOp^* \\InsOneOp_1^* \\psi_3, \\dotsc )\n\\end{align*}\nis a chain map inducing the isomorphisms \n\\[\\H(\\widehat{\\ConCycBi}^*) \\simeq \\H(\\widehat{\\LodCycBi}^*)\\quad\\text{and}\\quad\\H(\\ConCycBi^*) \\simeq \\H(\\LodCycBi^*).\\]\n\\end{Lemma}\n\\begin{proof}\nThe following computation shows that $I$ is a chain map:\n\\begin{align*}\n\\Bdd_{\\LodCycBi}I(c_0,c_1,c_2,\\dotsc)&= \\bigl(\\Hd c_0 + (\\Id-\\CycPermOp)\\InsOneOp_1\\CountOp c_1, - \\Hd'\\InsOneOp_1\\CountOp c_1 + \\CountOp c_1, \\Hd c_1 + (\\Id-\\CycPermOp)\\InsOneOp_1 \\CountOp c_2, \\dotsc \\bigr)\\\\\n&= \\bigl(\\Hd c_0 + \\Cd c_1, -\\CountOp c_1 + \\InsOneOp_1 \\Hd' \\CountOp c_1 + \\CountOp c_1 ,\\Hd c_1 + \\Cd c_2,\\dotsc\\bigr) \\\\\n&= \\bigl(\\Hd c_0 + \\Cd c_1, \\InsOneOp_1 \\CountOp \\Hd c_1,\\Hd c_1 + \\Cd c_2,\\dotsc\\bigr) \\\\\n&= I\\bigl(\\Hd c_0 + \\Cd c_1, \\Hd c_1 + \\Cd c_2, \\dotsc \\bigr) \\\\ \n&= I\\Bdd_{\\ConCycBi}\\bigl(c_0,c_1,c_2,\\dotsc\\bigr).\n\\end{align*}\nHere the second equality uses $\\InsOneOp_1 \\Hd' + \\Hd' \\InsOneOp_1 = \\Id$, the third uses $\\Hd'\\CountOp = \\CountOp\\Hd$, and the fourth uses $\\CountOp\\Cd = \\CountOp(\\Id-\\CycPermOp)\\InsOneOp_1\\CountOp = 0$.\nClearly, $I$ is injective, and hence it induces an isomorphism of chain complexes \n\\[ \\bigl(\\TotII\\ConCycBi,\\Bdd_{\\ConCycBi}\\bigr) \\simeq \\bigl(\\im(I),\\Restr{\\Bdd_{\\LodCycBi}}{\\im(I)}\\bigr)\\subset (\\TotII \\LodCycBi, \\Bdd_{\\LodCycBi}).
\\]\nConsider the subcomplex $(\\TotII \\LodCycBi_{\\mathrm{odd},\\bullet},-{\\Hd'})\\subset (\\TotII\\LodCycBi,\\Bdd_{\\LodCycBi})$ which consists of the odd columns of $\\LodCycBi$. It is a direct complement of $\\im(I)$ in $\\TotII \\LodCycBi$. Indeed, $(0,c_1,0,c_2,\\dotsc) \\in \\im(I)$ implies $c_i = 0$ for all $i\\in\\N$, which gives $\\TotII(\\LodCycBi_{\\mathrm{odd},\\bullet}) \\cap \\im(I) = 0$; also, for any $(c_i)\\in \\TotII \\LodCycBi$, we have\n\\[ (c_0, c_1, c_2, c_3, c_4, \\dotsc ) = I\\bigl((c_0, c_2, c_4, \\dotsc )\\bigr) - (0,\\InsOneOp_1\\CountOp c_2 - c_1 , 0, \\InsOneOp_1\\CountOp c_4 - c_3, 0, \\dotsc), \\]\nwhich gives $\\im(I) + \\TotII(\\LodCycBi_{\\mathrm{odd},\\bullet}) = \\TotII \\LodCycBi$. Now, $\\TotII(\\LodCycBi_{\\mathrm{odd},\\bullet})$ is contractible by CP3, and hence $\\H(\\TotII \\ConCycBi) \\simeq \\H(\\TotII \\LodCycBi)$ (using an argument with the long exact sequence in homology). Clearly, $I$ restricts to short chains $\\TotI$, and thus $\\H(\\TotI \\ConCycBi) \\simeq \\H(\\TotI \\LodCycBi)$ holds too.\n\nA similar discussion applies in cohomology. The following computation shows that $P$ is a chain map:\n\\begin{align*}\n &P \\Dd_{\\LodCycBi}(\\psi_0,\\psi_1,\\psi_2, \\psi_3, \\psi_4,\\dotsc) \\\\\n &\\quad = P(\\Hd^*\\psi_0,(\\Id-\\CycPermOp^*)\\psi_0 - {\\Hd'}^*\\psi_1,\\CountOp^*\\psi_1+ \\Hd^*\\psi_2,(\\Id-\\CycPermOp^*)\\psi_2 - {\\Hd'}^*\\psi_3, \\CountOp^*\\psi_3 + \\Hd^*\\psi_4, \\dotsc ) \\\\\n &\\quad = \\begin{aligned}[t]\n  \\bigl(\\Hd^*\\psi_0, & \\CountOp^*\\psi_1 + \\Hd^*\\psi_2 + \\CountOp^*\\InsOneOp_1^*((\\Id-\\CycPermOp^*)\\psi_0 - {\\Hd'}^*\\psi_1), \\\\\n  & \\CountOp^*\\psi_3 + \\Hd^*\\psi_4 + \\CountOp^*\\InsOneOp_1^*((\\Id-\\CycPermOp^*)\\psi_2 - {\\Hd'}^*\\psi_3),\\dotsc\\bigr)\n \\end{aligned} \\\\\n &\\quad=\\begin{aligned}[t]\n \\bigl(\\Hd^*\\psi_0, \\Cd^* \\psi_0 + \\Hd^* \\psi_2 + \\CountOp^*(\\psi_1 - \\InsOneOp_1^*{\\Hd'}^* \\psi_1), \\Cd^* \\psi_2 + \\Hd^* \\psi_4 + \\CountOp^*(\\psi_3 - \\InsOneOp_1^*{\\Hd'}^* \\psi_3),\\dotsc  \\bigr)\n\\end{aligned}\\\\\n&\\quad = \\bigl(\\Hd^*\\psi_0, \\Cd^*\\psi_0 + \\Hd^*(\\psi_2 + \\CountOp^*\\InsOneOp_1^* \\psi_1), \\Cd^*\\psi_2 + \\Hd^*(\\psi_4 + \\CountOp^*\\InsOneOp_1^* \\psi_3), \\dotsc \\bigr) \\\\\n&\\quad = \\Dd_{\\ConCycBi}\\bigl(\\psi_0,\\psi_2 + \\CountOp^*\\InsOneOp_1^*\\psi_1, \\psi_4 + \\CountOp^*\\InsOneOp_1^*\\psi_3,\\dotsc\\bigr)\\\\\n&\\quad = \\Dd_{\\ConCycBi} P (\\psi_0,\\psi_1,\\psi_2,\\psi_3,\\psi_4,\\dotsc).\n\\end{align*}\nThe fourth equality uses that $\\InsOneOp_1 \\Hd' + \\Hd' \\InsOneOp_1 = \\Id$ and $\\Hd'\\CountOp = \\CountOp\\Hd$. Because $(\\psi_0,\\psi_1,\\dotsc) = P(\\psi_0,0,\\psi_1,0,\\dotsc )$, $P$ is surjective, and hence it induces an isomorphism of cochain complexes $\\TotII \\LodCycBi^*/\\ker(P) \\simeq \\TotII \\ConCycBi^*$. It is easy to see that \n\\[ \\ker(P) = \\bigl\\{(0,\\psi_1,-\\CountOp^*\\InsOneOp_1^*\\psi_1,\\psi_3,-\\CountOp^*\\InsOneOp_1^*\\psi_3,\\dotsc)\\bigr\\} \\]\nand that the map\n\\begin{align*}\nZ: \\TotII(\\LodCycBi^{\\mathrm{odd},\\bullet}) &\\longrightarrow \\ker(P) \\\\\n(\\psi_1, \\psi_3, \\dotsc) &\\longmapsto (0 ,\\psi_1, -\\CountOp^* \\InsOneOp_1^* \\psi_1, \\psi_3, -\\CountOp^* \\InsOneOp_1^* \\psi_3, \\dotsc )\n\\end{align*}\nis an isomorphism of the cochain complexes\n\\[ \\bigl(\\TotII(\\LodCycBi^{\\mathrm{odd},\\bullet}),-{\\Hd'}^*\\bigr) \\simeq \\bigl(\\ker(P),\\Restr{\\Dd_{\\LodCycBi}}{\\ker(P)}\\bigr) \\subset (\\TotII\\LodCycBi^*,\\Dd_{\\LodCycBi}).
\\]\nIndeed, we have \n\\begin{align*}\n \\Dd_{\\LodCycBi}Z(\\psi_1,\\psi_3,\\dotsc) &= (0,-{\\Hd'}^*\\psi_1, \\CountOp^*\\psi_1 - \\Hd^*\\CountOp^*\\InsOneOp_1^*\\psi_1, -(\\Id-\\CycPermOp^*)\\CountOp^*\\InsOneOp_1^*\\psi_1 - {\\Hd'}^*\\psi_3, \\dotsc ) \\\\\n  &= \\bigl(0,-{\\Hd'}^*\\psi_1,-\\CountOp^*\\InsOneOp_1^*(-{\\Hd'}^*\\psi_1), -{\\Hd'}^*\\psi_3,\\dotsc\\bigr) \\\\\n  & = Z(-{\\Hd'}^*)(\\psi_1,\\psi_3,\\dotsc).\n\\end{align*}\nTherefore, $\\ker(P)$ is contractible, and the statement is implied by an argument with the long exact sequence in cohomology.\n\\end{proof}\n\n\n\\begin{Lemma}[Connes' cyclic bicomplexes are quasi-iso to their normalized versions]\\label{Lem:ConNormVer}\nLet $\\Alg=(V,(\\mu_k),\\NOne)$ be a strictly unital $\\AInfty$-algebra. The projection $\\NormProj$ and the inclusion $\\NormIncl$ (see Definition~\\ref{Def:CycBico}) induce the isomorphisms $\\H(\\ConCycBi) \\simeq \\H(\\NConCycBi)$ and $\\H(\\widehat{\\ConCycBi}^*)\\simeq\\H(\\widehat{\\NConCycBi}^*)$, respectively.\n\\end{Lemma}\n\\begin{proof}\nThe claim follows from CP4, using the spectral sequence associated with the vertical filtration. In cohomology, we use (d) and (f) of Proposition~\\ref{Prop:ConvOfSpSeq}.\n\nIn homology, we have $\\tilde{\\SSPage}^{sd} = \\H(\\ConCycBi_{-s,-d},\\Bdd_\\VertMRM)$ for the reversed spectral sequence (see the proof of Lemma~\\ref{Lem:LodCycBiCycHom}), and hence strong convergence is implied by Theorem~\\ref{Thm:Boardman61} from the proof of Proposition~\\ref{Prop:ConvOfSpSeq}. Claim (e) of Proposition~\\ref{Prop:ConvOfSpSeq} finishes the proof.\n\\end{proof}\n\n\nThe following is based on \\cite[Proposition 2.2.14]{LodayCyclic} and its proof.\n\n\\begin{Lemma}[Reduced Connes' cyclic bicomplexes and cyclic homology are quasi-iso] \\label{Lem:ReducedCyclic}\nLet~$\\Alg=(V,(\\mu_k),\\NOne)$ be a strictly unital $\\AInfty$-algebra. The projection $p^\\lambda: \\ConCycBi^{\\RedMRM} \\rightarrow \\HNC^\\lambda$ to the first column modulo $\\im(\\Id-\\CycPermOp)$ is a chain map and induces an isomorphism $\\H(\\widehat{\\ConCycBi}^{\\RedMRM})\\simeq\\H(\\HNC^\\lambda)$ ($\\eqqcolon\\H^{\\lambda,\\RedMRM}(\\Alg)$). The inclusion $\\iota_\\lambda: \\HNC_\\lambda^* \\rightarrow \\ConCycBi_{\\RedMRM}^*$ into the first column of $\\ConCycBi_{\\RedMRM}^*$ is a chain map and induces an isomorphism $\\H(\\HNC_\\lambda^*)\\simeq\\H(\\ConCycBi_{\\RedMRM}^*)$. \n\\end{Lemma}\n\n\n\\begin{proof}\nWe start with the cohomology. It is easy to see that $\\InsOneOp_1$ induces $\\InsOneOp_1^*: \\HC_{\\RedMRM}^* V \\rightarrow \\HC_{\\RedMRM}^* V$ and that for this map, we have\n\\[ Z\\coloneqq \\ker \\InsOneOp_1^* = \\im \\InsOneOp_1^* \\simeq \\HC^* \\bar{V}. 
\\]\n%Notice that had we taken $\\bar{\\HC}$ instead of $\\HC_{\\mathrm{bar}}$, there would be $\\NOne^* \\in \\bar{\\HC}$, for which $\\InsOneOp_i^*(\\NOne^*) = 0$ but $\\NOne^* \\not\\in \\im{\\InsOneOp_i^*}$.\nWe consider the following diagonal filtration of $\\ConCycBi^*_{\\RedMRM}$:\n \\begin{center}\n$\\Filtr^s\\ConCycBi^*_{\\RedMRM}: \\quad$\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}&{}&{} &{}\\\\\n\\HC_{\\RedMRM}^{s+2} \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}& \\HC_{\\RedMRM}^{s+1} \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*} & Z^s \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  &{}\\\\\n\\HC_{\\RedMRM}^{s+1} \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*} & Z^s \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  & \\arrow[u] \\arrow[r]  0 &{} \\\\\nZ^s \\arrow[u]{}{\\Hd^*} \\arrow[r]{}{\\NCd^*}  & 0 \\arrow[u] \\arrow[r]  & 0 \\arrow[u]  \\arrow[r]  & {} \\\\\n{}\\arrow[u]&{}\\arrow[u]&{}\\arrow[u]&{}\n\\end{tikzcd} \n\\end{center}\nThe first page of the corresponding spectral sequence consists of the columns\n\\[ \\SSPage_1^s = \\H(\\Gr_s \\TotI) = \\H\\bigl(Z^s \\xrightarrow{\\Hd^*} \\HC_{\\RedMRM}^{s+1}/Z^{s+1} \\xrightarrow{\\NCd^*} Z^s \\xrightarrow{\\Hd^*} \\dotsb \\bigr), \\]\nwhere the cochain complex starts in degree $s$. Because $\\InsOneOp_1^* \\Hd^* = - {\\Hd'}^* \\InsOneOp_1^* + (\\Id - \\CycPermOp^*)$ and $\\NCd^* = \\CountOp^* \\InsOneOp_1^*$ on $\\HC_{\\RedMRM}$, we have the commutative diagram\n\\[\\begin{tikzcd}\n Z^s \\arrow{r}{\\Hd^*}\\arrow{d}{\\Id} & \\HC_{\\RedMRM}^{s+1}/Z^{s+1} \\arrow{r}{\\NCd^*}\\arrow{d}{\\InsOneOp_1^*} & Z^s \\arrow{r}{\\Hd^*}\\arrow{d}{\\Id} & \\dotsb \\\\\n Z^s \\arrow{r}{\\Id-\\CycPermOp^*} & Z^s \\arrow{r}{\\CountOp^*} & Z^s \\arrow{r}{\\Id-\\CycPermOp^*} & \\dotsb.\n\\end{tikzcd}\\]\nTherefore, the only non-zero terms of the first page are\n\\[ \\SSPage_1^{s,0} = \\ker\\bigl(\\Id-\\CycPermOp^*: Z^s \\rightarrow Z^s\\bigr) = \\bar{\\HC}_\\lambda^s \\]\nwith the differential $\\Dd_1 = \\Hd^*$. This is precisely the first page of the canonical filtration of~$\\bar{\\HC}_\\lambda^*$. The map induced by $\\iota_\\lambda$ on the first page is the identity, and Proposition~\\ref{Prop:ConvOfSpSeq} implies the rest.\n\nThe situation in homology is analogous. We consider the restriction $\\InsOneOp_1: \\HC^{\\RedMRM} \\rightarrow \\HC^{\\RedMRM}$ and the subspace\n\\[ B\\coloneqq \\ker \\InsOneOp_1 = \\im \\InsOneOp_1. \\]\nThe diagonal filtration is now\n\\begin{center}\n $\\Filtr^s \\ConCycBi^{\\RedMRM}: \\quad$\\begin{tikzcd}[column sep=scriptsize, row sep=scriptsize]\n{}\\arrow[d] & {}\\arrow[d] & {}\\arrow[d] & {} \\\\\n0\\arrow[d] & 0 \\arrow[d]\\arrow[l] & B_{s}\\arrow[d]{}{\\Hd}\\arrow[l]{}{\\NCd} & {}\\arrow[l] \\\\\n0\\arrow[d] & B_{s}\\arrow[d]{}{\\Hd}\\arrow[l]{}{\\NCd} & \\HC^{\\RedMRM}_{s-1}\\arrow[d]{}{\\Hd}\\arrow[l]{}{\\NCd} & {}\\arrow[l] \\\\\nB_{s}\\arrow[d] & \\HC^{\\RedMRM}_{s-1}\\arrow[d]\\arrow[l]{}{\\NCd} & \\HC^{\\RedMRM}_{s-2}\\arrow[d]\\arrow[l]{}{\\NCd} & {}\\arrow[l] \\\\\n{} & {} & {} & {}\n\\end{tikzcd} \n\\end{center}\nThe reversed filtration $\\DegRev(\\Filtr)_s = \\Filtr^{-s}$ of the degree reversed cochain complex $\\DegRev(\\TotI)$ satisfies\n\\begin{align*}\n\\tilde{\\SSPage}_1^{s} & = \\H(\\DegRev(\\Filtr)_{s}\\DegRev(\\TotI)/\\DegRev(\\Filtr)_{s+1}\\DegRev(\\TotI) ) \\\\\n& = \\H(\\HC^{\\RedMRM}_{-s-1}/B_{-s-1} \\xleftarrow{\\Hd} B_{-s} \\xleftarrow{\\NCd} \\HC^{\\RedMRM}_{-s-1}/B_{-s-1} \\xleftarrow{\\Hd}\\dotsb)\n\\end{align*}\nwhere the first group has degree $s$. 
We have the commutative diagram\n\\[\\begin{tikzcd}\n\\HC^{\\RedMRM}_{-s-1}/B_{-s-1}\\arrow{d}{\\InsOneOp_1} & \\arrow{l}{\\Hd}\\arrow{d}{\\Id} B_{-s} & \\arrow{l}{\\NCd} \\HC^{\\RedMRM}_{-s-1}/B_{-s-1}\\arrow{d}{\\InsOneOp_1} & \\arrow{l}{\\Hd}\\dotsb \\\\\nB_{-s}&\\arrow{l}{\\Id-\\CycPermOp}B_{-s}&\\arrow{l}{\\CountOp}B_{-s}&\\arrow{l}{\\Id-\\CycPermOp}\\dotsb,\n\\end{tikzcd}\\]\nand thus $\\tilde{\\SSPage}_1^{s,0} = B_{-s}/\\im(\\Id-\\CycPermOp) = \\HC^{\\lambda,\\RedMRM}_{-s}$. The rest is as in the case of cohomology.\n\\end{proof}\n\n\\section{Final argument and remarks}\\label{Sec:FinRem}\n\nWe are finally in a position to prove Proposition~\\ref{Prop:Reduced}. Precisely as in the proof sketch, we replace the unit-augmentation sequence \\eqref{Eq:UnitAugSS}, up to a quasi-isomorphism, by a short exact sequence of normalized Connes' cyclic bicomplexes.\n\n\\begin{Lemma}[Short exact sequence of normalized Connes' cyclic bicomplexes]\\label{Lem:ConBiRed}\nLet $\\mathcal{A}=(V,(\\mu_j),\\NOne,\\varepsilon)$ be a strictly augmented strictly unital $\\AInfty$-algebra. The short exact sequences of bicomplexes\n\\[\\begin{tikzcd}\n 0 \\arrow{r} &  \\NConCycBi(\\R) \\arrow{r}{u} & \\NConCycBi(V)\\arrow{r}{p^{\\RedMRM}} & \\ConCycBi^{\\RedMRM} \\arrow{r} & 0\n\\end{tikzcd}\\]\nand\n\\[\\begin{tikzcd}\n 0 \\arrow{r} & \\ConCycBi_{\\RedMRM}^* \\arrow{r}{\\iota_{\\RedMRM}} & \\NConCycBi^*(V) \\arrow{r}{u^*} & \\NConCycBi^*(\\R) \\arrow{r} & 0\n\\end{tikzcd}\\]\nsplit. From this, we obtain the following isomorphisms:\n\\begin{equation}\\label{Eq:ConRedBiIso}\n\\begin{aligned}\n\\H(\\NConCycBi)&\\simeq  \\H(\\ConCycBi^{\\RedMRM}) \\oplus \\H^\\lambda(\\R), & \\H( \\widehat{\\NConCycBi}) &\\simeq  \\H(\\widehat{\\ConCycBi}^{\\RedMRM})\\oplus \\H^\\lambda(\\R), \\\\\n\\H(\\NConCycBi^*) &\\simeq \\H(\\ConCycBi^*_{\\RedMRM}) \\oplus \\H_\\lambda^*(\\R), & \\H(\\widehat{\\NConCycBi}^*) &\\simeq  \\H(\\widehat{\\ConCycBi}^*_{\\RedMRM})\\oplus \\H_\\lambda^*(\\R).\n\\end{aligned}\n\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nBecause we have a strict unit and a strict augmentation, the maps $u: \\HC \\R \\rightarrow \\HC V$ and $\\varepsilon: \\HC V \\rightarrow \\HC \\R$ satisfy $\\varepsilon \\circ u = \\Id$. It follows that $\\HC V = \\ker\\varepsilon\\oplus\\im u$; indeed, every $c\\in \\HC V$ decomposes as $c = u(\\varepsilon(c)) + \\bigl(c - u(\\varepsilon(c))\\bigr), $ and the second summand lies in $\\ker\\varepsilon$. The same holds for the induced maps $\\bar{u}: \\HNC \\R \\rightarrow \\HNC V$ and $\\bar{\\varepsilon}: \\HNC V \\rightarrow \\HNC \\R$. We can now define the splitting $r: \\NConCycBi(V) \\rightarrow \\NConCycBi(\\R)$ of the homological short exact sequence by projecting onto $\\im \\bar{u} \\simeq \\NConCycBi(\\R)$ along $\\ker \\bar{\\varepsilon}$. It is easy to check that it is a morphism of bicomplexes. This dualizes ``pointwise'' to cohomology.\n\nBecause \n\\[ (\\HNC \\R)_i = \\begin{cases} 0 & \\text{for }i \\neq 0, \\\\ \\R & \\text{for } i = 0, \\end{cases} \\]\nthe bicomplex $\\NConCycBi(\\R)$ is diagonal, and hence $\\TotI$ and $\\TotII$ are the same; both compute $\\H^\\lambda(\\R)$ (using results from the previous sections). The same is true in cohomology. This shows~\\eqref{Eq:ConRedBiIso}.\n\\end{proof} \n\nWe summarize our results in Figure~\\ref{Fig:FinalPictureHom}. Having $\\H^\\lambda(\\Alg) \\simeq \\H^{\\lambda,\\RedMRM}(\\Alg) \\oplus \\H^{\\lambda}(\\R)$, Proposition~\\ref{Prop:Reduced} in Section~\\ref{Sec:Alg2} follows by dualization. \n\n\\begin{Questions}\\phantomsection\\label{Q:OpenProbAInftx}\n\\begin{RemarkList}\n\\item Suppose that $V$ has bounded degrees and look at Figure~\\ref{Fig:FinalPictureHom}. 
Does $\\H(\\widehat{\\LodCycBi}^*)\\simeq \\H(\\LodCycBi^*)$ hold? Does $\\H(\\ConCycBi^*)\\simeq\\H(\\NConCycBi^*)$ hold?\n\\item Does Proposition~\\ref{Prop:Reduced} hold for homological unital and homological augmented $\\AInfty$-algebras? A strategy would be to construct a quasi-isomorphic strictly unital and strictly augmented $\\AInfty$-algebra. Does such an algebra always exist?\n\\item What can be said about the conditional and strong convergence of the spectral sequences associated with the various filtrations of the bicomplexes from Definition~\\ref{Def:CycBico}? Because of the simple internal data, many of them collapse.\\qedhere\n\\end{RemarkList}\n\\end{Questions}\n\\begin{figure}[t]\n\\centering\n \\begin{tikzcd}  \n  \\H^\\lambda(\\Alg) \\arrow[dash]{r}{Lem.~\\ref{Lem:LodCycBiCycHom}}  & \\H(\\widehat{\\LodCycBi}) \\arrow[dash]{d}{Lem.~\\ref{Lem:LodConCycBi}} \\arrow[dash,dashed]{r}{Lem.~\\ref{Lem:BddDegrees}}  & \\H(\\LodCycBi) \\arrow[dash]{d}{Lem.~\\ref{Lem:LodConCycBi}} \\\\\n   {} &  \\H(\\widehat{\\ConCycBi}) & \\H(\\ConCycBi) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConNormVer}} \\\\\n    {} &  \\H(\\widehat{\\NConCycBi}) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConBiRed}} \\arrow[dash,dashed]{r}{Lem.~\\ref{Lem:BddDegrees}} &   \\H(\\NConCycBi) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConBiRed}} \\\\\n{} & \\H(\\widehat{\\ConCycBi}^{\\RedMRM}) \\oplus \\H^\\lambda(\\R) \\arrow[dash]{d}{Lem.~\\ref{Lem:ReducedCyclic}} & \\H(\\ConCycBi^{\\RedMRM}) \\oplus \\H^\\lambda(\\R) \\\\\n{}& \\H^{\\lambda,\\RedMRM}(\\Alg) \\oplus \\H^\\lambda(\\R)  & {}\n \\end{tikzcd}\n\\\\[1cm]\n \\begin{tikzcd}  \n   \\H(\\widehat{\\LodCycBi}^*) \\arrow[dash]{d}{Lem.~\\ref{Lem:LodConCycBi}} & \\H(\\LodCycBi^*) \\arrow[dash]{d}{Lem.~\\ref{Lem:LodConCycBi}} & \\H_\\lambda^*(\\Alg) \\arrow[dash]{l}{Lem.~\\ref{Lem:LodCycBiCycHom}} \\arrow[bend left = 60,dotted,dash]{ldddd}\\\\\n   \\H(\\widehat{\\ConCycBi}^*) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConNormVer}} & \\H(\\ConCycBi^*) & {} \\\\\n   \\H(\\widehat{\\NConCycBi}^*) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConBiRed}} &   \\H(\\NConCycBi^*) \\arrow[dash]{d}{Lem.~\\ref{Lem:ConBiRed}} & {} \\\\\n  \\H(\\widehat{\\ConCycBi}^*_{\\RedMRM})\\oplus\\H_\\lambda^*(\\R) &  \\H(\\ConCycBi^*_{\\RedMRM})\\oplus\\H_\\lambda^*(\\R) \\arrow[dash]{d}{Lem.~\\ref{Lem:ReducedCyclic}} & {} \\\\\n  {} & \\H_{\\lambda,\\RedMRM}^*(\\Alg) \\oplus \\H_{\\lambda}^*(\\R) & {}\n \\end{tikzcd}\n\\caption[Isomorphisms of various versions of cyclic (co)homologies of an $\\AInfty$-algebra.]{Isomorphisms of (co)homologies for a strictly unital strictly augmented $\\AInfty$-algebra $\\Alg$ on a graded vector space $V$. A solid line denotes an isomorphism which is always valid, and a dashed line an isomorphism which is valid provided that the degrees of $V$ are bounded. The dotted line denotes the isomorphism obtained by dualizing the corresponding isomorphism in homology under the assumption that the degrees of $V$ are bounded.}\n\\label{Fig:FinalPictureHom}\n\\end{figure}\n
\n\\end{document}\n\n", "meta": {"hexsha": "b62e6d474baa56a38400da9b5e39b2e5a899773f", "size": 107025, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Subfiles/App_Ainfty.tex", "max_stars_repo_name": "p135246/phd-thesis", "max_stars_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Subfiles/App_Ainfty.tex", "max_issues_repo_name": "p135246/phd-thesis", "max_issues_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Subfiles/App_Ainfty.tex", "max_forks_repo_name": "p135246/phd-thesis", "max_forks_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.2629310345, "max_line_length": 1085, "alphanum_fraction": 0.6505582808, "num_tokens": 43754, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5668680418823031}}
{"text": "\\section{Distance, Velocity, Acceleration}\\label{sec:IntVel}\r\nWe recall a general principle that will later be applied to\r\ndistance-velocity-acceleration problems, among other things.  If $F(u)$ is\r\nan anti-derivative of $f(u)$, then $\\ds \\int_a^bf(u)\\,du=F(b)-F(a)$.  Suppose\r\nthat we want to let the upper limit of integration vary, i.e., we replace\r\n$b$ by some variable $x$.  We think of $a$ as a fixed starting value $x_0$.\r\nIn this new notation the last equation (after adding $F(a)$ to both sides)\r\nbecomes: \r\n$$\r\n  F(x)=F(x_0)+\\int_{x_0}^xf(u)\\,du.\r\n$$ \r\nHere $u$ is the variable of\r\nintegration, called a ``dummy variable,'' since it is not the variable in\r\nthe function $F(x)$.  In general, it is not a good idea to use the same\r\nletter as a variable of integration and as a limit of integration.  That\r\nis, $\\ds \\int_{x_0}^xf(x)dx$ is bad notation, and can lead to errors and\r\nconfusion.\r\n\r\nAn important application of this principle occurs when we are\r\ninterested in the position of an object at time $t$ (say, on the\r\n$x$-axis) and we know its position at time $\\ds t_0$.  Let $s(t)$ denote\r\nthe position of the object at time $t$ (its distance from a reference\r\npoint, such as the origin on the $x$-axis).  Then the net change in\r\nposition between $\\ds t_0$ and $t$ is $\\ds s(t)-s(t_0)$.  Since $s(t)$ is an\r\nanti-derivative of the velocity function $v(t)$, we can write\r\n$$\r\n  s(t)=s(t_0)+\\int_{t_0}^tv(u)du.\r\n$$\r\nSimilarly, since the velocity is an anti-derivative of the acceleration\r\nfunction $a(t)$, we have \r\n$$\r\n  v(t)=v(t_0)+\\int_{t_0}^ta(u)du.\r\n$$\r\n\r\n\\begin{example}{Constant Force}{ConstantForce}\r\nSuppose an object is acted upon by a constant\r\nforce $F$.  Find $v(t)$ and $s(t)$.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nBy Newton's law $F=ma$, so the acceleration is\r\n$F/m$, where $m$ is the mass of the object.  Then we first have\r\n$$\r\n  v(t)=v(t_0)+\\int_{t_0}^t{F\\over m}\\,du=v_0+\r\n  \\left.{F\\over m}u\\right|_{t_0}^t=v_0+{F\\over m}(t-t_0),\r\n$$ \r\nusing the usual convention $\\ds v_0=v(t_0)$.\r\nThen\r\n\\begin{eqnarray*}\r\n  s(t)&=s(t_0)+\\int_{t_0}^t\\left(v_0+{F\\over m}(u-t_0)\\right)du=s_0+\r\n  \\left.(v_0u+{F\\over2m}(u-t_0)^2)\\right|_{t_0}^t\\cr\r\n      &=s_0+v_0(t-t_0)+{F\\over2m}(t-t_0)^2.\\cr\r\n\\end{eqnarray*}\r\nFor instance, when $F/m=-g$ is the constant of gravitational acceleration,\r\nthen this is the falling body formula (if we neglect air resistance)\r\nfamiliar from elementary physics:\r\n$$s_0+v_0(t-t_0)-{g\\over2}(t-t_0)^2,$$\r\nor in the common case that $\\ds t_0=0$,\r\n$$s_0+v_0t-{g\\over2}t^2.$$\r\n\\end{solution}\r\n\r\nRecall that the integral of the velocity function gives the \\ifont{net} distance\r\ntraveled. If you want to know the \\ifont{total} distance traveled, \r\nyou must find out where the velocity function\r\ncrosses the $t$-axis, integrate separately over the time intervals when\r\n$v(t)$ is positive and when $v(t)$ is negative, and add up the absolute\r\nvalues of the different integrals.  
For example, if an object is thrown\r\nstraight upward at 19.6 m/sec, its velocity function is\r\n$v(t)=-9.8t+19.6$, using $g=9.8$ m/sec$^2$ for the acceleration due to gravity.\r\nThis is a straight line which is positive for $t<2$ and negative for $t>2$.\r\nThe net distance traveled in the first 4 seconds is thus\r\n$$\\int_0^4(-9.8t+19.6)dt=0,$$\r\n while the total distance traveled in the first\r\n4 seconds is\r\n$$\r\n  \\int_0^2(-9.8t+19.6)dt+\\left|\\int_2^4(-9.8t+19.6)dt\\right|=19.6+|-19.6|=39.2\r\n$$ \r\nmeters, $19.6$ meters up and $19.6$ meters down.\r\n\r\n\\begin{example}{Net and Total Distance}{NetTotalDistance}\r\nThe acceleration of an object is given by $a(t)=\\cos(\\pi\r\nt)$, and its velocity at time $t=0$ is $1/(2\\pi)$.  Find both the net and the\r\ntotal distance traveled in the first 1.5 seconds.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nWe compute \r\n$$\r\n  v(t)=v(0)+\\int_0^t\\cos(\\pi u)du={1\\over 2\\pi}+\\left.{1\\over\\pi}\r\n  \\sin(\\pi u)\\right|_0^t={1\\over\\pi}\\bigl({1\\over2}+\\sin(\\pi t)\\bigr).\r\n$$\r\nThe \\ifont{net} distance traveled is then\r\n\\begin{eqnarray*}\r\n  s(3/2)-s(0)&=&\\int_0^{3/2}{1\\over\\pi}\\left({1\\over2}+\\sin(\\pi t)\\right)\\,dt\\cr\r\n  &=&\\left.{1\\over\\pi}\\left({t\\over2}-{1\\over\\pi}\\cos(\\pi t)\\right)\r\n  \\right|_0^{3/2}\\cr\r\n  &=&{3\\over4\\pi}+{1\\over\\pi^2}\\approx 0.340 \\hbox{ meters.}\\cr\r\n\\end{eqnarray*}\r\nTo find the {\\it total} distance traveled, we need to know when\r\n$(0.5+\\sin(\\pi t))$ is positive and when it is negative.  This\r\nfunction is 0 when $\\sin(\\pi t)$ is $-0.5$, i.e., when $\\pi t=7\\pi/6$,\r\n$11\\pi/6$, etc.  The value $\\pi t=7\\pi/6$, i.e., $t=7/6$, is the only\r\nvalue in the range $0\\le t\\le 1.5$.  Since $v(t)>0$ for $t<7/6$ and\r\n$v(t)<0$ for $t>7/6$, the total distance traveled is\r\n$$ \\int_0^{7/6}\\ds{1\\over \\pi}\\left({1\\over2}+\\sin(\\pi t)\\right)\\,dt+\r\n  \\Bigl|\\int_{7/6}^{3/2} \r\n  {1\\over \\pi}\\left({1\\over2}+\\sin(\\pi t)\\right)\\,dt\\Bigr|\r\n$$\r\n\\begin{eqnarray*}\r\n ~\\qquad~&=&\\ds{1\\over \\pi}\\left( {7\\over 12}-{1\\over \\pi}\\cos(7\\pi/6)+{1\\over\r\n    \\pi}\\right)+\r\n  {1\\over \\pi}\\Bigl|{3\\over 4}-{7\\over 12}\r\n  +{1\\over \\pi}\\cos(7\\pi/6)\\Bigr|\\\\\r\n  &=&\\ds{1\\over \\pi}\\left( {7\\over 12}+{1\\over \\pi}{\\sqrt3\\over2}+{1\\over\r\n    \\pi}\\right)+\r\n  {1\\over \\pi}\\Bigl|{3\\over 4}-{7\\over 12}\r\n  -{1\\over \\pi}{\\sqrt3\\over2}\\Bigr|\\\\\r\n  &\\approx& 0.409 \\hbox{ meters.}\r\n\\end{eqnarray*}\r\n\\end{solution}\r\n
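\r\nAnswers like these are easy to check with a computer algebra system. The following sketch assumes Python with the SymPy library (any CAS would do) and verifies both numbers from the example above:\r\n\\begin{verbatim}\r\nimport sympy as sp\r\n\r\nt = sp.symbols('t')\r\nv = (sp.Rational(1, 2) + sp.sin(sp.pi*t))/sp.pi   # velocity from the example\r\n\r\n# Net distance: a single integral over [0, 3/2].\r\nnet = sp.integrate(v, (t, 0, sp.Rational(3, 2)))\r\n\r\n# Total distance: split at the sign change t = 7/6, add absolute values.\r\ntotal = (sp.integrate(v, (t, 0, sp.Rational(7, 6)))\r\n         + sp.Abs(sp.integrate(v, (t, sp.Rational(7, 6), sp.Rational(3, 2)))))\r\n\r\nprint(net, float(net))      # equals 3/(4*pi) + 1/pi**2, about 0.340\r\nprint(total, float(total))  # about 0.409\r\n\\end{verbatim}\r\n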
\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for Section \\ref{sec:IntVel}}\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n An object moves so that its velocity at time $t$ is\r\n$v(t)=-9.8t+20$ m/s. Describe the motion of the object between $t=0$ and\r\n$t=5$, find the total distance traveled by the object during that\r\ntime, and find the net distance traveled.\r\n\\begin{sol}\r\n It rises until $t=100/49$, then falls. The position of the\r\nobject at time $t$ is $\\ds s(t)=-4.9t^2+20t+k$. The net distance traveled\r\nis $-45/2$, that is, it ends up $45/2$ meters below where it started.\r\nThe total distance traveled is $6205/98$ meters. \r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n An object moves so that its velocity at time $t$ is $v(t)=\\sin t$.\r\nSet up and evaluate a single definite integral to compute the\r\nnet distance traveled between $t=0$ and $t=2\\pi$.\r\n\\begin{sol}\r\n $\\ds\\int_0^{2\\pi}\\sin t\\,dt=0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n An object moves so that its velocity at time $t$ is\r\n$v(t)=1+2\\sin t$ m/s. Find the net distance traveled by the object\r\nbetween $t=0$ and $t=2\\pi$, and find the total distance traveled\r\nduring the same period.\r\n\\begin{sol}\r\n net: $2\\pi$, total: $\\ds 2\\pi/3+4\\sqrt3$ \r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n Consider the function $f(x)=(x+2)(x+1)(x-1)(x-2)$ on\r\n$[-2,2]$. Find the total area between the curve and the $x$-axis\r\n(measuring all area as positive).\r\n\\begin{sol}\r\n $8$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n Consider the function $\\ds f(x)=x^2-3x+2$ on\r\n$[0,4]$. Find the total area between the curve and the $x$-axis\r\n(measuring all area as positive).\r\n\\begin{sol}\r\n $17/3$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\n Evaluate the three integrals:\r\n$$\r\n  A=\\int_0^3 (-x^2+9)\\,dx\\qquad B=\\int_0^{4} (-x^2+9)\\,dx\\qquad \r\n  C=\\int_{4}^3 (-x^2+9)\\,dx,\r\n$$\r\nand verify that $A=B+C$.\r\n\\begin{sol}\r\n $A=18$, $B=44/3$, $C=10/3$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n\\end{enumialphparenastyle}", "meta": {"hexsha": "5813cef4b8c87e0db4908cfce7d780278df613bb", "size": 7412, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8-applications-of-integration/8-1-dist-vel-acc.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8-applications-of-integration/8-1-dist-vel-acc.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8-applications-of-integration/8-1-dist-vel-acc.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.06, "max_line_length": 81, "alphanum_fraction": 0.638424177, "num_tokens": 2766, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599563, "lm_q2_score": 0.8031738034238806, "lm_q1q2_score": 0.5668680352665555}}
{"text": "\\section{Discussion}\n\nThe two methods proposed by L. Sani and P. Hudzovic and two combinations thereof\nwere investigated and compared to one another.\n\nThe most accurate method -- in general -- turns out to be P. Hudzovic's transfer\nfunction in conjunction with L. Sani's  characterisation  method  (by  measuring\n$t_{10}$, $t_{50}$, $t_{90}$ instead of $T_u/T_g$). This  is true for all orders\nof  $n$  and  this  is  true  for   noisy   and   non-noisy   input   functions.\n\nOf course,  by performing a least-square curve fit, an even more accurate result\ncan be obtained.\n\nOne factor that was not  considered  is how many data points are required in the\nlookup  curves  to yield accurate enough results. The simulations in this report\nused  50  data  points. It would be worth investigating this further to see  how\nhigher resolution (or  lower  resolution)  lookup  curves affect the accuracy of\neach method.\n\nP.  Hudzovic's  characterisation method (by measuring $T_u/T_g$) isn't as robust\nas  L.  Sani's   characterisation   method  (by  measuring  $t_{10}$,  $t_{50}$,\n$t_{90}$),  especially  with  noisy  input data. This is primarily  due  to  the\nderivative  that  must  be  computed  to   find   the   point   of   inflection.\n\nL. Sani's method is definitely  the  easiest  to  implement  due  to  the simple\ninterpolation formulae and due to  the simplicity of finding $t_{10}$, $t_{50}$,\n$t_{90}$. It also performs quite  fast  and has a low memory footprint (since it\ndoesn't require any lookup curves).  Unfortunately, it does perform the worst of\nall methods.\n\n", "meta": {"hexsha": "25ccc99544057223daa2296e55bbea375cc85683", "size": 1576, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "versuche/mlab/sections/discussion.tex", "max_stars_repo_name": "TheComet93/laborjournal", "max_stars_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "versuche/mlab/sections/discussion.tex", "max_issues_repo_name": "TheComet93/laborjournal", "max_issues_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "versuche/mlab/sections/discussion.tex", "max_forks_repo_name": "TheComet93/laborjournal", "max_forks_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.8387096774, "max_line_length": 80, "alphanum_fraction": 0.7335025381, "num_tokens": 429, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.7853085808877581, "lm_q1q2_score": 0.5667911425034031}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{geometry} \n\\geometry{margin=1in}\n\\geometry{a4paper} \n\n\n\\usepackage{textcomp}\n\\usepackage{booktabs}\n\\usepackage{array}\n\\usepackage{paralist}\n\\usepackage{verbatim} \n\\usepackage{subfigure}\n\\usepackage{graphicx,caption}\n\\usepackage{placeins}\n\\usepackage{lipsum}\n\\usepackage{xcolor}\n\\usepackage{dcolumn}\n\\usepackage{sectsty}\n\\allsectionsfont{\\sffamily\\mdseries\\upshape}\n\\usepackage{gensymb,amsmath,mathtools,amssymb}\n\\usepackage{flafter}\n%\\usepackage{parskip}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{tocbibind}\n\\usepackage[toc,page]{appendix}\n\\captionsetup{width=\\linewidth}\n\\usepackage{bm}\n\\usepackage{lscape}\n\n\n\\graphicspath{{./figs/}}\n\n\n\\title{First Order Optimal Sizing of Ascent Vehicles}\n\\author{Devansh Agrawal}\n%\\date{} \n\n\n\\begin{document}\n\n\\maketitle\n\n\n\\section{Problem Description}\n\nIn designing a ascent vehicle, it is important to start a design relatively close to the optimal solution. In this document, I optimize the thrust of a one dimensional rocket to maximise the apogee altitude. The parameters that are specified fall into the following categories:\n\\begin{itemize}\n\\item Environment parameters: $g, \\rho(h)$\n\\item Design parameters: $m_0, m_p, c_d, A,c$\\footnote{$m_0$ is the launch mass, $m_p$ is the propellant mass}\n\\item Control parameters: $F(t)$\n\\end{itemize}\n\nFor initial sizing, we make the following assumptions on the rocket dynamics:\n\\begin{itemize}\n\\item Earth is inertial, non-rotating, and the acceleration due to gravity is constant with altitude, $g = 9.81$~m/s$^2$\n\\item Earths atmosphere follows an exponential relationship, $\\rho(h) = \\rho_0 e^{-\\beta h}$, where for the earth, $\\rho_0 = 1.225$~kg/m$^3$, $\\beta = 1/8.5$~km$^{-1}$\n\\item The rockets drag coefficient is constant, independent of the mach number\\footnote{In general, this is not a good assumption, however, as we will see later, the maximum mach number of the rocket is to be kept below $M\\sim0.7$. A numeric optimiser should be used later to refine the vehicle.}\n\\item The rockets specific impulse is a constant $I_{sp} = c g_0$, and does not depend on altitude.\n\\item The thrust curve is extremely simple. The rocket thrusts at a constant $F$~newtons for $t_b$~seconds. For this simplifying case, we do not need to include $t_b$ in our parameters since it is implicitly defined:\n\\begin{align}\nI_t &= \\int F dt = F t_b\\\\\n\\text{and } I_t &= \\int -\\dot{m} c dt  = m_p c\\\\\n\\therefore t_b &= \\frac{m_p c}{F}\n\\end{align}\n\\end{itemize}\n\n\nHence, we will try to determine the optimum value of $F$ given all the other parameters. \n\n\\section{State space description, and non-dimensionalisation}\n\nThe dynamics of a 1-D rocket can be written as\n\\begin{align} \n\\dot{h} &=v \\\\\n\\dot{v} &=\\frac{1}{m}(F(t)-D(v, h))-g \\\\\n\\dot{m} &=\\frac{-F(t)}{c} \n\\end{align}\n\nDirectly integrating this is difficult, but we can do so numerically, using Mathematica. From their, the maximum height is readily determined. However there are 6 parameters that can be chosen in this problem, and the dependence due to 3 more. To simplify the analysis we introduce non-dimensional parameters. 
The reference quantities used are\n\n\\begin{itemize}\n\\item Mass: $m_0$\n\\item Acceleration: $g$\n\\item Speed: $c$\n\\end{itemize}\n\nTherefore the following reference dimensions can be found:\n\\begin{itemize}\n\\item Force: $m_0 g$\n\\item Time: $c/g$\n\\item Length: $c^2/g$\n\\item Total Impulse: $m_0 c$\n\\end{itemize}\n\nUsing hats to denote the non-dimensionalised quantities, the state space equation can be written as \n\n\\begin{align}\n\\dot{\\hat{h}} &=\\hat{v} \\\\\n\\dot{\\hat v} &=\\frac{\\hat F}{\\hat m}  - \\left( \\frac{\\rho_0 c^2 c_d A}{2 m_0 g} \\right) e^{- (\\beta c^2/g) \\hat h}  \\frac{ \\hat{v}^2}{\\hat m}  - 1 \\\\\n\\dot{\\hat m} &=-\\hat F\n\\end{align}\n\nFurthermore, the switching occurs at $t = t_b$. Interestingly, the non-dimensionalised total impulse is directly related to the propellant mass fraction:\n\\begin{align}\n\\hat{I_t} &= \\frac{I_t}{m_0 c} = \\frac{F t_b}{m_0 c} = \\frac{m_p c}{m_0 c} = \\frac{m_p}{m_0}\\\\\n\\therefore \\hat{t_b} &= \\frac{\\hat{I_t}}{\\hat{F}} = \\frac{m_p}{m_0} \\frac{1}{\\hat F}\n\\end{align}\n\nTherefore, the important parameters are now immediately obvious, and can be interpreted as,\n\n\\begin{enumerate}\n\\item The thrust to initial weight ratio, \n\\begin{equation}\n\\hat F = \\frac{F}{m_0 g}\n\\end{equation}\n\n\\item A drag parameter defined as,\n\\begin{equation}\nx \\equiv \\frac{\\frac{1}{2} \\rho_0 c^2 c_d A}{m_0 g}\n\\end{equation}\nNote: This parameter is like the drag to weight ratio (except it uses $c$ as the velocity and $m_0$ as the mass)\n\n\\item Propellant mass fraction,\n\\begin{equation}\nMR \\equiv m_p/m_0\n\\end{equation}\n\n\\item Atmosphere parameter, \n\\begin{equation}\n\\hat \\beta = \\beta \\frac{c^2}{g}\n\\end{equation}\nwhich scales the atmosphere's height to the height we are interested in. \n\\end{enumerate}\n\nFinally, the parameter to be optimised is the final altitude, \n\n\\begin{equation}\nh_f = \\hat h_f \\frac{c^2}{g}\n\\end{equation} \n\n\nTherefore, our 8-parameter space has been reduced to a four-parameter space, or three if we assume the specific impulse of the rocket is fixed.\n\nThe typical range of these parameters, and a rough order of magnitude for a 10k solid rocket are\n\n\\begin{center}\n   \\begin{tabular}{@{} ccc@{}}\n      \\toprule\n      Parameter & Typical value & Range\\\\\n      \\midrule\n      $\\hat F$ & 15 & $>1$\\\\\n      $x$ & 60 & $>0$\\\\\n      $MR$ & 0.15 & $0.3\\sim0.9$\\\\\n      $\\hat \\beta$ & 50 & $25 \\sim 75$\\\\\n      $\\hat h_f$ & 0.0075 \\\\\n      \\bottomrule\n   \\end{tabular}\n\\end{center}\nwhere $\\hat h_f = 0.0075$ is the value that corresponds to 10k ft. \n\nNote that while the thrust-to-weight ratio is approximately 15 for IREC rockets, most big rockets have $\\hat F$ closer to 1.5! The next section shows this is actually very close to the optimum.\n\n\n\\section{Optimization Results}\n\nA relatively brute force form of optimisation is attempted, in the interest of time. A Mathematica function is written to solve the dynamics equations, given the four parameters. The apogee height is returned by this function. The numerical schemes used are Mathematica's 'automatic' choices, which generally perform well, even for stiff systems. Since the problem is an initial value problem, it is fairly trivial to solve numerically. \n\nA few simulated trajectories are shown below, as samples. It is hard to believe that these are optimally sized vehicles. 
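\n\nThe same simulation is easy to reproduce outside Mathematica. The sketch below is one possible re-implementation using Python's SciPy (the parameter values are illustrative, taken from the typical-value column above); it integrates the non-dimensional equations and stops at apogee:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nFhat, x, MR, beta_hat = 15.0, 60.0, 0.15, 50.0   # illustrative values\ntb_hat = MR / Fhat                               # non-dimensional burn time\n\ndef rhs(t, y):\n    h, v, m = y\n    F = Fhat if t < tb_hat else 0.0\n    drag = x * np.exp(-beta_hat * h) * v * abs(v) / m   # v|v| = v^2 on ascent\n    return [v, F / m - drag - 1.0, -F]\n\napogee = lambda t, y: y[1]          # stop when v crosses zero from above\napogee.terminal, apogee.direction = True, -1\nsol = solve_ivp(rhs, [0.0, 1.0], [0.0, 0.0, 1.0], events=apogee, max_step=1e-3)\nprint("h_f_hat =", sol.y[0, -1])    # compare with the 10k-ft target 0.0075\n\\end{verbatim}\n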
\n\n%\\FloatBarrier\n\\begin{figure}[htbp]\n   \\centering\n   \\includegraphics[width=0.8\\linewidth]{sample_flights.eps}\n   \\caption{Sample simulated flights based on data from 2018 reports. Red indicates thrusting, blue indicates coasting.}\n   \n      \\begin{tabular}{@{} llcccc|ccc@{}}\n         \\toprule\n         Team &Type & $C_D$ (est) & $c$ (m/s, assumed) & dia (in) & $m_0$ (kg) &  $x$ & $m_p/m_0$ & $T/W$ \\\\ \n         \\midrule\n         McGill & Solid & 0.5 & 2000 & 5 & 24 & 64.6 &0.146 & 8.8\\\\\n         Waterloo& Hybrid & 0.5 & 2000 & 6 & 64.8 & 34.48 &0.280 & 1.5\\\\\n         UCLA & Hybrid &0.5 & 2000 & 6 & 26.7 & 83.7 & 0.21 & 8.5\\\\\n         \\bottomrule\n      \\end{tabular}\n    \\label{fig:}\n\\end{figure}\n%FloatBarrier\n\n\nNow we can see the trade between thrust to weight ratio and propellant mass fraction. \n\n\\begin{landscape}\n%\\FloatBarrier\n\\begin{figure}[htbp]\n   \\centering\n   \\includegraphics[width=\\linewidth]{Tw-MR_contours.eps}\n   \\caption{}\n   \\label{fig:}\n\\end{figure}\n%FloatBarrier\n\\end{landscape}\n\n\nThe red line indicates the 10k target altitude (assuming $c = 2000~m/s$). This shows that for a range of $x$, the easiest-to-design scenarios are for $T/W>3$, and propellant mass fraction around 15\\%. Below a thrust to weight ratio of 2, the height is very sensitive to the propellant mass fraction. \n\nIt is also interesting to see that as the propellant mass fraction increases, it becomes better to reduce thrust to weight ratio towards T/W $\\sim$ 2. \n\n\nUltimately however, we need to find a design that can achieve the desired altitude, while minimising the total mass and thrust level required. If we limit max thrust to 1~kN, and assume $c=2000$~m/s and $c_d = 0.5$, we can figure out possible designs:\n\n%\\FloatBarrier\n\\begin{figure}[htbp]\n   \\centering\n   \\includegraphics[width=0.8\\linewidth]{results.eps}\n   \\caption{}\n   \\label{fig:}\n\\end{figure}\n%FloatBarrier\n\nThis figure shows the required propellant mass fraction is about 17\\%, over a range of vehicle masses. It is obvious that reducing the vehicle's mass would be beneficial, as it gives more flexibility in the future, and increases the acceleration, and therefore stability, but it is also important for the propellant mass fraction to be as close to 17\\% as possible. \n\nWhy is there such a steep slope to the graphs? One way to think about it is as follows. For very large thrust to weight, the rocket burns very rapidly, and acts as an impulse. From there, the vehicle is basically passive, and the ratio of drag to inertia (the parameter $x$) controls the final altitude. 
Therefore, in the limit that thrust to weight becomes large, the final altitude becomes insensitive to $\\hat F$ and is governed by the drag parameter $x$ and the propellant mass fraction alone.\n\n\\end{document}\n", "meta": {"hexsha": "ddc704c33cbe4c6d3995a355d5207f9d0d1e9df4", "size": 9011, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bang-off-control/writeup/writeup.tex", "max_stars_repo_name": "icl-rocketry/optimalAscent", "max_stars_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-08-19T01:06:38.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-06T05:32:08.000Z", "max_issues_repo_path": "bang-off-control/writeup/writeup.tex", "max_issues_repo_name": "icl-rocketry/optimalAscent", "max_issues_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bang-off-control/writeup/writeup.tex", "max_forks_repo_name": "icl-rocketry/optimalAscent", "max_forks_repo_head_hexsha": "a841745e4097c55fa4996dddc8a7dc45b2059023", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-06T05:32:10.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-06T05:32:10.000Z", "avg_line_length": 36.044, "max_line_length": 432, "alphanum_fraction": 0.7182332704, "num_tokens": 2680, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.566791133102947}}
{"text": "\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\n% partial derivative as a fraction\n\\newcommand{\\fracpd}[2]{\n  \\ensuremath{\\frac{\\partial #1}{\\partial #2}}\n}\n\n\n\\begin{document}\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{Introduction}\n\n  This repository holds the solutions to a general set of growth models that incorporate the following explore the differences between\n\n  \\begin{itemize}\n    \\item One Agent vs Two Agent\n    \\item One Good vs Two Goods\n    \\item Time Additive Preferences vs Recursive Preferences\n    \\item Constant Volatility vs Stochastic Volatility\n  \\end{itemize}\n\n  By solve the 16 possible versions of this model, we are able to isolate the effects of each bell and whistle that we include in the model.\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{One Agent One Good General Model}\n\n  We will write our one agent general unscaled model as the following:\n\n  \\begin{align*}\n    J_t(k_t, x_t, v_t) &= \\max_{k_{t+1}} \\left[(1 - \\beta) c_t^{\\rho} + \\beta \\mu(J_{t+1}(k_{t+1}, x_{t+1}, v_{t+1}))^{\\rho} \\right]^{\\frac1\\rho} \\\\\n    \\text{Subject To }& \\\\\n    c_t &= \\left[ \\eta k_{t}^{\\nu} + (1 - \\eta) z_t^{\\nu} \\right]^{\\frac1\\nu} + (1 - \\delta) k_t - k_{t+1} \\\\\n    x_{t+1} &= A x_{t} + B v_t^{\\frac{1}{2}} \\varepsilon_{1, t+1} \\\\\n    v_{t+1} &= (1 - \\phi_v) \\bar{v} + \\phi_v v_{t} + \\tau \\varepsilon_{2, t+1} \\\\\n    \\log(z_t) &= \\log(z_{t-1}) + \\log(\\bar{g}) + x_t\n  \\end{align*}\n\n  We could then scale the model by dividing by $z_t$ to give us (will not both writing tildes for now -- everything here is divided by $z_t$)\n\n  \\begin{align*}\n    J_t(k_t, x_t, v_t) &= \\max_{\\chi_{t}} \\left[(1 - \\beta) c_t^{\\rho} + \\beta \\mu(J_{t+1}(k_{t+1}, x_{t+1}, v_{t+1}) g_{t+1})^{\\rho} \\right]^{\\frac1\\rho} \\\\\n    \\text{Subject To }& \\\\\n    c_t &= \\left[ \\eta k_{t}^{\\nu} + (1 - \\eta) \\right]^{\\frac1\\nu} + (1 - \\delta) k_t - \\chi_t \\\\\n    \\chi_t &= k_{t+1} g_{t+1} \\\\\n    x_{t+1} &= A x_{t} + B v_{t}^{\\frac{1}{2}} \\varepsilon_{1, t+1} \\\\\n    v_{t+1} &= (1 - \\phi_v) \\bar{v} + \\phi_v v_{t} + \\tau \\varepsilon_{2, t+1} \\\\\n    \\log(g_t) &= \\log(z_t) - \\log(z_{t-1}) = \\log(\\bar{g}) + x_t\n  \\end{align*}\n\n  Notice we can get constant volatility by allowing $\\tau = \\phi_v = 0$ or time separable preferences by $\\rho = \\alpha$.\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{One Agent Two Goods General Model}\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{Two Agent One Good General Model}\n\n% --------------------------------------- %\n% Section\n% --------------------------------------- %\n\\section{Two Agent Two Goods General Model}\n\n\n\\end{document}\n", "meta": {"hexsha": "1b724beb24cef948ccfcc6f0cc57c2bf646597a2", "size": 2840, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes.tex", "max_stars_repo_name": "NYUEcon/GrowthModels", "max_stars_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2015-08-19T23:28:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T11:07:10.000Z", "max_issues_repo_path": "notes.tex", 
"max_issues_repo_name": "NYUEcon/GrowthModels", "max_issues_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-08-18T20:18:34.000Z", "max_issues_repo_issues_event_max_datetime": "2015-08-20T17:21:15.000Z", "max_forks_repo_path": "notes.tex", "max_forks_repo_name": "NYUEcon/GrowthModels", "max_forks_repo_head_hexsha": "463ceafc5749475340e367074f3eaf6248668eaf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2015-11-09T18:43:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-01T18:49:32.000Z", "avg_line_length": 37.3684210526, "max_line_length": 157, "alphanum_fraction": 0.5271126761, "num_tokens": 916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.566791133102947}}
{"text": "%\\subsection{Symbolic Numeric Computation}\n\n\\bigskip\n\\noindent\n\\textbf{2.3. Sparse Computation}\n\\bigskip\n\n%There are important applications in engineering, particularly control theory and linear systems theory, where physical models are represented as polynomials and polynomial matrices (or more generally rational functions and rational matrices). Properties of such models are then represented in terms of arithmetic in polynomial or polynomial matrix domains. However, while the polynomial algebra for such representations assume exact arithmetic, the input data is typically measured (i.e. inexact) values. For these {\\em symbolic-numeric} problems \\cite{corless2} the arithmetic theory needs to be properly defined, and algorithms need to be both efficient in terms of operation count and, more importantly, numerically correct. The main direction in this area is sparse interpolation and exponential approximation.\n%arithmetic with approximate rational functions. The common thread in these two topics is their relation with Prony types of problems.\n\n\nSparse polynomial interpolation is the problem of recovering a representation of a multivariate polynomial over a basis (typically the standard power basis).  This is an inverse problem where we need to determine the parameters - exponents, coefficients, support - of the polynomial given some evaluation values.  This problem is closely connected to the shape from moments problem  in the plane \\cite{MilanfarGolub} along with the sparse decomposition of a signal built as a linear mixture of complex exponentials \\cite{HuaSarkar}. In these cases efficient algorithms need to depend on the number of terms in the sparse representation rather than the possible number of terms found in a dense representation. Sparse representations \nare common in applications. For example, the recovery of low rank information from sparse sampling is at the core of the\ncompressed sensing paradigm \\cite{candes}.  Sparse representations pose serious issues when these inverse problem computations are done in floating point, approximate arithmetic environments. \n\n\\newpage\n\\bigskip\n\\noindent\n{\\bf Previous Work}\n\\bigskip\n\nThe most significant challenge for sparse interpolation is that this is a nonlinear problem compared to the linear problem of dense interpolation. We have previously shown in [J16] that this problem is closely related to problems solved via Prony's method from 1795. This in turn allowed us to take advantage of modern numerical work by Golub et al \\cite{golub1} to solve this problem via the use of generalized Hankel eigenvalues. The issue of the poor numerical conditioning of the generalized Hankel eigenvalue problem (indeed bounded below by an exponential factor) reported in  \\cite{begola} was overcome by using randomization in determining sampling points. We refer the reader to \n \\cite{potts} for other methods for solving sparse problems in numerical environments.\n%This topic is also the theme of our Dagstuhl meeting in June 2015 and a BIRS workshop in 2016.\nSparse interpolation and exponential analysis are the themes of our Dagstuhl meeting in June 2015 and BIRS workshop in 2016.\n\n\\bigskip\n\\noindent\n{\\bf Proposed Research}\n\\bigskip\n\nSparse interpolation is typically considered for multivariate polynomials defined over the standard polynomial basis. 
\n\\bigskip\n\\noindent\n{\\bf Proposed Research}\n\\bigskip\n\nSparse interpolation is typically considered for multivariate polynomials defined over the standard polynomial basis. However, there are a number of applications where sparse representations coming from Chebyshev and Pochhammer polynomials \\cite{gll2,potts2} are more useful - the former being particularly interesting as it allows for sparse modeling of Fourier cosine data. We propose to address the computational challenges in finding sparse representations in other orthogonal and/or interpolation bases (e.g. \\cite{corless}) over approximate floating point environments. In the short term we plan to investigate the use of structured linear algebra formulations represented in terms of Krylov type matrices. However we expect that even analysing the numerical sensitivities of this problem will be extremely challenging. \n\nWe are interested in higher dimensional versions of exponential and polynomial interpolation. The standard shape-from-moments problem reconstructs 2-D polygonal shapes from 2-D moment data. Some previous work has been done on higher dimensional shapes \\cite{cuyt}, but the recent paper by Gravin et al.\\ \\cite{lasserre}, formulating an $n$-D shape-from-moments problem using Brion's formula to find the $n$-dimensional vertices of a polygonal object from higher-dimensional moments, seems most promising. A longer term challenge is to actually compute the $n$-dimensional coordinates in a provably numerically stable way and to understand the conditioning issues that arise from this problem \\cite{cchlc}. We propose to look at this problem by making use of tools from structured linear algebra but with the structures coming from multivariate polynomial arithmetic both for power and alternate bases. \n%Classical structured matrices such as Hankel and Toeplitz linearly describe univariate polynomial arithmetic identities. \n%As before, our plan is to do this for both power polynomial or exponential bases along with a number of alternate polynomial bases.\n\n\\bigskip\n\\noindent\n{\\bf 3. Anticipated Impact and Long Term Objectives}\n\\bigskip\n  \nThe proposed research in symbolic matrix algorithms can potentially provide significant advancement in symbolic solutions and properties of differential equations. Our proposed work on scalings and finite group actions should have considerable impact on the solving of multivariate polynomial and dynamical systems of equations. Similarly our work on sparse interpolation in numerical environments would have considerable impact in \naddressing numeric issues arising in sparse problems. Our overall objective is to create tools for symbolic computation that allow computer algebra systems to be applied to practical and important problems in science and engineering. In all cases the algorithms created in this research will be implemented, usually in the Maple system, and hence have wide availability. This is consistent with what has happened in the past with the impact of such core Maple packages as Plots, PlotTools, DEtools, MatrixPolynomialAlgebra, and such important functionality as symbolic indefinite integration and solving of linear differential equations.\n
This is a bit surprising since for applications (for example function approximations or input-output problems from control theory), one typically encounters rational rather than polynomial functions. For rational functions understanding ```close'' properties involves understanding the zeros and poles of nearby rational functions. For example in the case of rational approximation, the nearby zeros and poles have significant impact on convergence and other properties.  This becomes a difficult issue when working over a numerical environment. Interestingly enough one cannot just work with symbolic-numeric arithmetic with numerators and denominators separately. Here the main issue is one of missing or spurious poles, that is, poles with a nearby zero or poles with small residuals.  Recent attempts to address the notion of closeness includes the %introduction of the \n%concept of\n%robust rational approximants found in \\cite{gonnet} and the notion of well-conditioned rational approximants from the interesting new paper \\cite{matos}.\n\n%We propose two separate directions for research in this area, both with high impact and long term in nature. The measures for closeness given in \\cite{matos} make use of unstructured \n%condition numbers. We propose to replace these measures by structured condition numbers as was done earlier in case of numerically coprime polynomials \\cite{coprime}. We would expect the result that the ``nearby regions''  be enlarged and the computation of the structured measures more efficient. A second problem is to change our representation for rational functions from zero-pole to partial fraction decomposition and then look for measures of closeness. We note that this would require only the poles of a rational function, which can be viewed as a Prony type of problem, and which is better understood in a numerical environment.\n\n\n\n\n%{\\bf Arithmetic with Differential Operators : } \n%Symbolic/numeric computation with differential operators considers the algebraic operations on operators having polynomial coefficients whose coefficients in turn are approximate numbers. Preliminary work defining basic operations suchas one sided GCDs have recently been presented\n%\\cite{haraldson}.", "meta": {"hexsha": "59d23aaed94403ee3dbf9e0f5ddfe266e9fc5f12", "size": 8949, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "downloads/symbolic-numeric2.tex", "max_stars_repo_name": "kfoynt/opallab", "max_stars_repo_head_hexsha": "d35bb05fca14cb76db4bb6c5d59e361c701a5c13", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "downloads/symbolic-numeric2.tex", "max_issues_repo_name": "kfoynt/opallab", "max_issues_repo_head_hexsha": "d35bb05fca14cb76db4bb6c5d59e361c701a5c13", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "downloads/symbolic-numeric2.tex", "max_forks_repo_name": "kfoynt/opallab", "max_forks_repo_head_hexsha": "d35bb05fca14cb76db4bb6c5d59e361c701a5c13", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-09-19T15:43:33.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-19T16:19:37.000Z", "avg_line_length": 151.6779661017, "max_line_length": 1140, "alphanum_fraction": 0.8261258241, "num_tokens": 1715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES\n\n", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.5667911237024907}}
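\n\\bigskip\n\\noindent\n{\\bf Illustrative Sketch}\n\\bigskip\n\nTo make the Prony connection concrete, the following is a minimal numerical sketch (not the algorithm of [J16]; the sampling point, the toy polynomial and the helper name are illustrative assumptions) of recovering a sparse univariate polynomial from evaluations via generalized Hankel eigenvalues.\n\\begin{verbatim}\n# f(x) = sum_j c_j x^(e_j) with t terms, sampled at powers of w:\n# s_k = f(w^k) = sum_j c_j (w^(e_j))^k, a Prony-type sequence.\nimport numpy as np\nfrom scipy.linalg import eig, hankel\n\ndef sparse_recover(s, t):\n    H0 = hankel(s[:t], s[t - 1:2 * t - 1])     # Hankel matrix of s_0..s_(2t-2)\n    H1 = hankel(s[1:t + 1], s[t:2 * t])        # shifted Hankel matrix\n    b = eig(H1, H0, right=False)               # generalized eigenvalues w^(e_j)\n    V = np.vander(b, 2 * t, increasing=True).T # transposed Vandermonde system\n    c, *_ = np.linalg.lstsq(V, s, rcond=None)  # coefficients c_j\n    return b, c\n\nw = 1.7\nf = lambda x: 3 * x**2 + 5 * x**7              # 2-sparse example\ns = np.array([f(w**k) for k in range(4)])      # 2t = 4 evaluations\nb, c = sparse_recover(s, 2)\nprint(np.round(np.log(np.abs(b)) / np.log(w))) # exponents {2, 7}, up to ordering\nprint(np.round(c.real, 3))                     # coefficients {3, 5}\n\\end{verbatim}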
{"text": "\\section{Preliminaries}\n\\label{sect:dssat-preliminaries}\n\n\\subsection{Dependency quantified Boolean formula}\n\\label{sect:dssat-dqbf}\nDQBF is formulated as \\textit{multiple-person alternation} by Peterson and Reif~\\cite{Peterson1979}.\nIn contrast to the \\textit{linearly ordered} quantification used in QBF,\ni.e., an existentially quantified variable depends on all of its preceding universally quantified variables,\nthe quantification structure in DQBF is extended with \\textit{Henkin quantifiers},\nwhere the dependency of an existentially quantified variable is explicitly specified.\n\nA DQBF $\\Qf$ over a set $V=\\{x_1,\\ldots,x_n,y_1,\\ldots,y_m\\}$ of variables is of the form:\n\\begin{align*}\n    \\Qf=\\forall x_1,\\ldots,\\forall x_n,\\exists y_1(\\dep{y_1}),\\ldots,\\exists y_m(\\dep{y_m}).\\pf(x_1,\\ldots,x_n,y_1,\\ldots,y_m),\n\\end{align*}\nwhere each $\\dep{y_j}\\subseteq\\{x_1,\\ldots,x_n\\}$ denotes the set of universally quantified variables that $y_j$ depends on,\nand Boolean formula $\\pf$ over $V$ is quantifier-free.\nWe denote the set $\\{x_1,\\ldots,x_n\\}$ (resp. $\\{y_1,\\ldots,y_m\\}$) of universally (resp. existentially) quantified variables of $\\Qf$ by $\\uvs$ (resp. $\\evs$).\n\nA DQBF $\\Qf$ is satisfiable if for each variable $y_j$,\nthere exists a Boolean function $f_j:\\av{\\dep{y_j}}\\mapsto\\booldom$,\nsuch that matrix $\\pf$ becomes a tautology over $\\uvs$\nafter substituting variables in $\\evs$ with their respective Boolean functions.\nThe set $\\skf=\\{f_1,\\ldots,f_m\\}$ of Boolean functions is called a set of \\textit{Skolem} functions for $\\Qf$.\nIn other words, $\\Qf$ is satisfied by $\\skf$ if the following QBF valuates to $\\top$:\n\\begin{align}\n    \\label{eq:dssat-dqbf-satisfiable}\n    \\forall x_1,\\ldots,\\forall x_n.\\pf(x_1,\\ldots,x_n,f_1,\\ldots,f_m),\n\\end{align}\nwhere $\\pf(x_1,\\ldots,x_n,f_1,\\ldots,f_m)$ represents the formula obtained by substituting existentially quantified variables in $\\pf$ with their respective Skolem functions.\nWe extend the notation for cofactors and denote $\\pf(x_1,\\ldots,x_n,f_1,\\ldots,f_m)$ by $\\pcf{\\pf}{\\skf}$.\nThe satisfiability problem of DQBF is NEXPTIME-complete \\cite{Peterson2001}.\n\n\\subsection{Decentralized POMDP}\n\\label{sect:dssat-dec-pomdp}\nDec-POMDP is a formalism for multi-agent systems under uncertainty and with partial information.\nIts computational complexity is NEXPTIME-complete~\\cite{Bernstein2002}.\nIn the following, we briefly review the definition, optimality criteria,\nand value function of Dec-POMDP from the literature~\\cite{Oliehoek2016}.\n\nA Dec-POMDP is specified by a tuple $\\mathcal{M}=(I,S,\\{A_i\\},T,\\rho,\\{O_i\\},\\Omega,\\Delta_0,h)$, where\n$I=\\{1,\\ldots,n\\}$ is a finite set of $n$ agents,\n$S$ is a finite set of states,\n$A_i$ is a finite set of actions of Agent $i$,\n$T:S\\times(A_1\\times\\cdots\\times A_n)\\times S\\mapsto [0,1]$ is a transition distribution function with\n$T(s,\\Vec{a},s')=\\Pr[s'|s,\\vec{a}]$,\nthe probability to transit to state $s'$ from state $s$ after taking actions $\\vec{a}$,\n$\\rho:S\\times(A_1\\times\\cdots\\times A_n)\\mapsto\\mathbb{R}$ is a reward function with\n$\\rho(s,\\vec{a})$ giving the reward for being in state $s$ and taking actions $\\vec{a}$,\n$O_i$ is a finite set of observations for Agent $i$,\n$\\Omega:S\\times(A_1\\times\\cdots\\times A_n)\\times(O_1\\times\\cdots\\times O_n)\\mapsto[0,1]$ is an observation distribution function 
with\n$\\Omega(s',\\Vec{a},\\vec{o})=\\Pr[\\vec{o}|s',\\vec{a}]$,\nthe probability to receive observation $\\vec{o}$ after taking actions $\\vec{a}$ and transiting to state $s'$, $\\Delta_0:S\\mapsto [0,1]$ is an initial state distribution function with $\\Delta_0(s)=\\Pr[s^0 \\equiv s]$,\nthe probability for the initial state $s^0$ being state $s$,\nand $h$ is a planning horizon, which we assume finite in this work.\n\n\\begin{figure}[t]\n    \\centering\n    \\includegraphics[scale=1.5]{fig/build/dec-pomdp-example.pdf}\n    \\caption{A two-agent Dec-POMDP example}\n    \\label{fig:dssat-dec-pomdp-state-graph}\n\\end{figure}\n\n\\begin{example}\n    An example Dec-POMDP $\\mathcal{M}$ with two agents and two states $s_p,s_q$\n    is shown in~\\cref{fig:dssat-dec-pomdp-state-graph}.\n    The goal of the agents is to make a correct agreement on the current state.\n    They have an action set $A_1=A_2=\\{a_p,a_q\\}$,\n    where $a_p$ (resp. $a_q$) is used to guess that the current state is $s_p$ (resp. $s_q$).\n\n    Under the assumption of partial observation, the agents cannot access the current state of $\\mathcal{M}$.\n    Instead, they have two different observations $o_p$ and $o_q$.\n    We assume that an agent will receive $o_p$ (resp. $o_q$) after the current state transits to $s_p$ (resp. $s_q$) with probability $\\Omega(s_p,o_p)$ (resp. $\\Omega(s_q,o_q)$).\n\n    If both agents guess correctly, they will receive a reward of $1$,\n    and $\\mathcal{M}$ will transit to a next state with equal probability.\n    In the state-transition graph, observe that in state $s_p$ (resp. $s_q$),\n    there are two outgoing edges labeled with a tuple $(a_p,a_p,1)$ (resp. $(a_q,a_q,1)$),\n    which describes the agents' actions and the reward.\n    These two edges will be taken with probability $0.5$ if the preconditions are met.\n\n    On the other hand, if one of the agents makes a wrong guess,\n    they will receive a reward of $-1$, and $\\mathcal{M}$ will stay in the current state.\n    In the state-transition graph, observe that in state $s_p$ (resp. $s_q$),\n    there is a self-loop labeled with tuples $(a_q,A_2,-1)$ and $(A_1,a_q,-1)$\n    (resp. 
$(a_p,A_2,-1)$ and $(A_1,a_p,-1)$).\n    These tuples indicate that if one of the agent makes a wrong guess,\n    they will receive a reward of $-1$ regardless of the guess of the other agent.\n\n    Note that the agents cannot communicate with each other.\n    Their actions must be solely based on their own observations.\n\\end{example}\n\nGiven a Dec-POMDP $\\mathcal{M}$,\nwe aim at maximizing the expected cumulative reward $\\mathbb{E}[\\sum_{t=0}^{h-1}\\rho(s^t,\\vec{a}^t)]$ through searching an optimal \\textit{joint policy} for a team of agents.\nSpecifically, a \\textit{policy} $\\pi_i$ of Agent $i$ is a mapping from the agent's \\textit{observation history},\ni.e., a sequence of observations $\\underline{o_i^t}=o_i^0,\\ldots,o_i^t$ received by Agent $i$,\nto an action $a_i^{t+1}\\in A_i$.\nA joint policy for the team of agents $\\vec{\\pi}=(\\pi_1,\\ldots,\\pi_n)$ maps the agents' joint observation history $\\vec{\\underline{o}}^t=(\\underline{o_1^t},\\ldots,\\underline{o_n^t})$ to actions $\\vec{a}^{t+1}=(\\pi_1(\\underline{o_1^t}),\\ldots,\\pi_n(\\underline{o_n^t}))$.\nWe shall focus on deterministic policies only,\nas it is shown that every Dec-POMDP with a finite planning horizon has a deterministic optimal joint policy~\\cite{Oliehoek2008}.\n\nThe quality of a joint policy $\\vec{\\pi}$ is measured by its expected cumulative reward.\nThe \\textit{value} of a joint policy is hence defined to be $\\mathbb{E}[\\sum_{t=0}^{h-1}\\rho(s^t,\\vec{a}^t)|\\Delta_0,\\vec{\\pi}]$.\nThe \\textit{value function} $V^\\pi$ can be computed in a recursive manner, where for $t=h-1$,\n\\begin{align*}\n    V^\\pi(s^{h-1},\\vec{\\underline{o}}^{h-2})=\\rho(s^{h-1},\\vec{\\pi}(\\vec{\\underline{o}}^{h-2})),\n\\end{align*}\nand for $t<h-1$,\n\\begin{align}\\label{eq:dssat-bellman}\n    V^\\pi(s^t,\\vec{\\underline{o}}^{t-1})=\\rho(s^t,\\vec{\\pi}(\\vec{\\underline{o}}^{t-1}))+\n    \\sum_{s^{t+1}\\in S}\\sum_{\\vec{o}^t\\in\\vec{O}}\\Pr[s^{t+1},\\vec{o}^t|s^t,\\vec{\\pi}(\\vec{\\underline{o}}^{t-1})]V^\\pi(s^{t+1},\\vec{\\underline{o}}^{t}).\n\\end{align}\nThe probability $\\Pr[s^{t+1},\\vec{o}^{t}|s^t,\\vec{\\pi}(\\vec{\\underline{o}}^{t-1})]$ is the product of\nthe transition probability $T(s^t,\\vec{\\pi}(\\vec{\\underline{o}}^{t-1}),s^{t+1})$ and\nthe observation probability $\\Omega(s^{t+1},\\vec{\\pi}(\\vec{\\underline{o}}^{t-1}),\\vec{o}^{t})$.\n\\Cref{eq:dssat-bellman} is also called the \\textit{Bellman equation} of Dec-POMDP.\nDenoting the empty observation history at the first stage (i.e., $t=0$) with the symbol $\\vec{\\underline{o}}^{-1}$, the value $V(\\vec{\\pi})$ of a joint policy equals $\\sum_{s^0\\in S}\\Delta_0(s^0)V^\\pi(s^0,\\vec{\\underline{o}}^{-1})$.", "meta": {"hexsha": "88418af47904e2bc25c3d61f6c16182a4a7060fb", "size": 8017, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/dependency-ssat/preliminaries.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "paper/dependency-ssat/preliminaries.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/dependency-ssat/preliminaries.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.9406779661, "max_line_length": 269, "alphanum_fraction": 0.7061244855, "num_tokens": 2604, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891130942474, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.566739563220356}}
{"text": "\\chapter{Discretization Procedures}\n\nDiscretization procedures for elastic formulation have been developed since the early $60$'s under diverse setting oriented mainly in civil-engineering fields. In this section, it will be presented following literature  \\cite{ern2004theory}, \\cite{raviart1983introduction} a finite-element discretization scheme for the space-domain and a standard $\\beta$-Newmark time-domain discretization, thus a complete numerical formulation for various elastic models implemented in \\texttt{FEniCS}, a state-of-art library for such purposes joint with \\texttt{Python} language.\n\n\\section{Numerical Schemes}\nTo discretize the elastodynamic model, it will be assumed that the solution is defined on a Banach space $\\mathbf{V}$, in which the elasticity operator defined from the tensor $\\mathbf{C}^{hom} = (C^{hom}_{ijkl})_{ijkl}$ by:\n\\begin{equation*}\n    \\mathcal{I}_{C^{hom}} (u,v) := \\int \\limits_{\\Omega} C^{hom}_{ijkl}\\mathbf{e}_{kl}(u) \\mathbf{e}_{ij}(v) \\, dx \\quad \\forall u,v \\in \\mathbf{V}\n\\end{equation*}\nis well-defined in the sense that conditions of coercivity on the operator are assured from the uniformly ellipticity of $C^{hom}_{ijkl}$.\n\n\\subsection{Approximation by Finite Spaces}\nThe solution of the elastodynamic problem will be found as projection solutions of finite dimensional subspaces that approximate to solution space\\footnote{Such workflow is defines the approximation theory of PDE solutions and the so-called finite elements method \\cite{ern2004theory}.}, in this case $\\mathbf{H}^1(\\Omega, \\Gamma_D)$.\nLet $\\{\\mathbf{V}_h \\}_{0 \\leq h \\leq 1}$ be a family of finite-dimensional subspaces of $\\mathbf{V}$, assume that there exist a dense subspace $\\mathbf{W} \\subset \\mathbf{V}$, a linear interpolator $\\Pi_h \\in \\mathcal{L}(\\mathbf{W};\\mathbf{V}_h)$, integer $k > 0$ such that our finite dimensional solution approximation $\\Pi_{h} $ with associated to a mesh size $h>0$ satisfies:\n\\begin{equation*}\n    \\Vert v - \\Pi_h v \\Vert_{\\mathcal{L}(\\mathbf{W};\\mathbf{V}_h)} \\leq h^{k+1} \\Vert v \\Vert_{\\mathbf{W}} \\quad \\forall v \\in \\mathbf{W}\n\\end{equation*}\n%In particular, the above assumptions are assured in REFERENCE: HUDGENDS PAPER.\n\\begin{rem}\nIn our case under study it is used $\\mathbf{V} = \\mathbf{L}^2(\\Omega)$ and $\\mathbf{W} = \\mathbf{H}^1(\\Omega, \\Gamma_D)$ being $\\mathbf{V}_h$ the $\\mathbf{H}^1$-conformal finite element spaces using as reference element $\\{ \\hat{K}, \\hat{P}, \\hat{\\sigma} \\}$ of \\textit{Lagrange} type with typical order $k \\in \\{1,2\\}$, such that $\\mathbb{P}_k \\subset \\hat{P}$ with $\\mathbf{W} := \\mathbf{H}^{k+1}(\\Omega) \\cap \\mathbf{H}^1(\\Omega, \\Gamma_D)$\n\\end{rem}\n\nOver such finite dimensional subspaces $\\{ \\mathbf{V}_h\\}_{h>0}$ the approximate solution can be found, satisfying the problem for each $h>0$:\n\\begin{equation}\n\\label{ODE-Discretized}\n    \\left \\{\n    \\begin{array}{cc}\n        \\text{Find } u_h \\in \\mathcal{C}^2(0,T; \\mathbf{V}_h) & \\text{ such that } \\\\\n        (\\rho^0 d_{tt} u_h(t), v_h)_{\\Omega} + \\mathcal{I}_{C^{hom}}(u_h(t), v_h) = (\\mathbf{F}(t), v_h)_{\\Gamma_N} & \\forall v_h \\in \\mathbf{V}_h\\\\\n        d_t u_h(0) = u_h(0) = \\mathbf{0} & \n    \\end{array}\n    \\right. 
\n\\end{equation}\n\nProblem (\\ref{ODE-Discretized}) corresponds to a finite linear system of differential equations in time for the variable $u_h(t)$, for which existence can be obtained by applying spectral theory, decomposing the elastic operator over its spectrum. The treatment is similar to the procedure proposed in the section above \\cite{raviart1983introduction}.\n\nThe space discretization is then obtained from (\\ref{ODE-Discretized}), which corresponds to a system of Ordinary Differential Equations (ODEs). Indeed, let $\\{ \\varphi_1, \\dots, \\varphi_{N(h)} \\}$ denote a basis for the finite-dimensional space $\\mathbf{V}_h$, the so-called global shape functions for $\\mathbf{V}_h$; then for all $t \\in (0,T)$ the approximate solution $u_h(t) \\in \\mathbf{V}_h$ can be expanded as\n\\begin{equation*}\n    u_h(\\mathbf{x},t) = \\sum \\limits_{i=1}^{N(h)} \\mathbf{U}_i(t) \\varphi_i\n\\end{equation*}\nwhere we use the notation $\\mathbf{U}(t) = (U_i(t))_{1 \\leq i \\leq N} \\in \\mathbb{R}^{N(h)}$.\n\nNow, introducing also $\\mathbf{F}_h(t) = \\big( (\\mathbf{F}(t), \\varphi_i)_{\\Omega} \\big)_{1 \\leq i \\leq N}\\in \\mathbb{R}^{N(h)}$ together with the so-called stiffness matrix $\\mathcal{A}(t) \\in \\mathbb{R}^{N(h) \\times N(h)}$ and mass matrix $\\mathcal{M} \\in \\mathbb{R}^{N(h) \\times N(h)}$, defined respectively by:\n\\begin{equation*}\n    \\mathcal{A}_{ij}(t) = \\mathcal{I}_{C^{hom}}(\\varphi_j, \\varphi_i), \\, \\mathcal{M}_{ij} = (\\sqrt{\\rho^0} \\varphi_i , \\sqrt{\\rho^0}\\varphi_j)_{\\Omega} \\quad 1 \\leq i,j \\leq N(h)\n\\end{equation*}\n\nBy the previous hypotheses $\\mathcal{A}(t)$ is symmetric and positive definite, whereas $\\mathcal{M}$ is symmetric positive definite as well. With this notation, the time-dependent matrix formulation follows:\n\\begin{equation}\n    \\label{MatrixFormulation}\n    \\left \\{\n    \\begin{array}{cc}\n        \\text{Find } \\mathbf{U} \\in \\mathcal{C}^2(0,T; \\mathbb{R}^{N(h)}) & \\text{ such that} \\\\\n        \\mathcal{M} d_{tt} \\mathbf{U}(t) + \\mathcal{A}\\mathbf{U}(t) = \\mathbf{F}_h (t) & \\text{ in }(0,T)\\\\ \n        d_{t} \\mathbf{U}(0) = \\mathbf{U}(0)  = \\mathbf{0}&\n    \\end{array}\n    \\right.\n\\end{equation}\nwhich describes the full space discretization scheme by means of the FEM, enabling the use of time discretization techniques for the resulting ODE problem (\\ref{MatrixFormulation}), in particular, as will be seen in the next section, the so-called $\\beta$-Newmark scheme.\\footnote{The implementation of this space discretization is done in Python using a state-of-the-art open-source project called \\texttt{FEniCS} \\cite{logg2012automated}, defined as a set of specific core components such as \\texttt{DOLFIN}, \\texttt{UFL}, \\texttt{UFC}, etc. It enables the automatic solution of differential forms, implementing the theory of finite elements with numerically optimized libraries for arithmetic computations. The project is an international collaboration initiated in 2003 between the University of Chicago and Chalmers University of Technology, currently under development by a variety of institutions.}\n\n\\subsection{Time Discretization}\nLet us note that (\\ref{MatrixFormulation}) defines a standard \\textit{Cauchy} ODE problem. 
To solve this system numerically, we introduce a uniform time-step discretization of $(0,T)$ in the form $\\Delta t = T/N_T$ such that:\n\\begin{equation*}\n    t_n = n \\Delta t, \\quad 0 \\leq n \\leq N_T\n\\end{equation*}\nwhere $N_T \\in \\mathbb{N}^*$ is a fixed number of time steps.\n\nFor each $n \\in \\{1,\\dots, N_T \\}$ we seek an approximate tuple $(u_n, v_n, a_n)$ of the solution values $(u(t_n), \\partial_{t} u(t_n), \\partial_{tt} u(t_n))$; the method chosen is the so-called $\\beta$-\\textit{Newmark} scheme, well known for the study of mechanically driven behavior \\cite{raviart1983introduction}.\n\nTaking into account the time-domain regularity of the solution $u^{(0)}(t, \\mathbf{x}) := u(t,\\mathbf{x})$ of the homogenized PDE problem (a result analogous to \\ref{BootstrapingProp}), a \\textit{Taylor} expansion of $u \\in \\mathcal{C}^{p}(0,T;\\mathbf{H}^1(\\Omega, \\Gamma_D))$ together with the mean value theorem gives, for the displacement at time $t_{n+1}$:\n\\begin{equation*}\n    u(t_{n+1}) = u(t_n) + \\Delta t \\, \\partial_{t} u(t_n) + \\Delta t^2 \\, \\big( \\beta \\partial_{tt} u(t_{n+1}) + (\\frac{1}{2} - \\beta) \\partial_{tt} u(t_{n}) \\big) + \\mathcal{O}(\\Delta t^3)\n\\end{equation*}\nand for the velocity\n\\begin{equation*}\n    \\partial_{t} u(t_{n+1}) = \\partial_{t} u (t_{n}) + \\Delta t \\, \\big( \\gamma \\partial_{tt} u(t_{n+1}) + (1-\\gamma) \\partial_{tt} u(t_n) \\big) + \\mathcal{O}(\\Delta t^2)\n\\end{equation*}\nwhere $\\beta, \\gamma$ are two parameters with $0 < 2\\beta<1,\\, 0 < \\gamma < 1$.\nThe $\\beta$-Newmark scheme then consists in approximating the above expansions by a finite difference scheme of the form:\n\\begin{align}\n    \\label{TimeDif-Scheme}\n    u_{n+1} &= u_{n} + \\Delta t\\, v_{n} + \\Delta t^2 \\, \\big( \\beta a_{n+1} + (\\frac{1}{2} - \\beta) a_n \\big) \\\\\n    v_{n+1} &= v_n + \\Delta t\\, \\big( \\gamma a_{n+1} + (1-\\gamma) a_{n} \\big)\n\\end{align}\nFrom (\\ref{TimeDif-Scheme}) the numerical procedure is defined by a two-stage update given as follows:\n\\begin{align}\n    \\label{TwoStage-Update}\n    \\text{(Stage 1)}\\longrightarrow  a_{n+1} &= \\frac{1}{\\beta \\Delta t^2} ( u_{n+1}-u_n - \\Delta t \\, v_n) - \\frac{(1-2\\beta)}{2 \\beta} a_n, \\quad \\forall \\, 0 \\leq n \\leq N_T-1 \\\\\n    \\text{(Stage 2)}\\longrightarrow v_{n+1} &= v_n + \\Delta t \\, \\big( (1-\\gamma) a_n + \\gamma a_{n+1} \\big), \\quad \\forall \\, 0 \\leq n \\leq N_T-1\n\\end{align}\n\nLet us now show the precision of the $\\beta$-\\textit{Newmark} scheme. 
Note that the elastodynamic equation contains two derivatives in time, so that taking into account the above and applying again a \\textit{Taylor} expansion and the mean value theorem, it follows as $\\Delta t$ tends to zero that:\n\n\\begin{multline}\n    u(t_{n+1}) = u(t_{n}) + \\Delta t \\, \\partial_{t} u(t_n) + \\\\\n    \\Delta t^2 \\, \\big( \\beta \\partial_{tt} u(t_{n+1}) + (\\frac{1}{2}- \\beta) \\partial_{tt} u(t_n) \\big) + \\Delta t^3 (\\frac{1}{6}-\\beta) \\partial_{ttt}u(t_n) + \\mathcal{O}(\\Delta t^4)\n\\end{multline}\n\n\\begin{multline}\n    \\partial_{t} u(t_{n+1}) = \\partial_{t} u(t_n) + \\Delta t\\, \\big( \\gamma \\partial_{tt} u(t_{n+1}) + \\\\\n    (1-\\gamma) \\partial_{tt}u(t_n) \\big) + \\Delta t^2 \\, (\\frac{1}{2}-\\gamma) \\partial_{ttt} u(t_n) + \\mathcal{O}(\\Delta t^3)\n\\end{multline}\n\nHence the scheme is of precision $\\mathcal{O}(\\Delta t)$ for $\\gamma \\neq 1/2$ and $\\mathcal{O}(\\Delta t^2)$ for $\\gamma = 1/2$.\n\nApplying now this two-stage scheme (\\ref{TwoStage-Update}) to (\\ref{MatrixFormulation}), the matrix system update procedure follows:\n\\begin{align}\n    \\label{MatrixSystemUpdate}\n    \\frac{1}{\\beta \\Delta t} \\mathcal{M}(\\mathbf{U}_{n+1} - \\mathbf{U}_{n} - \\Delta t\\, \\mathbf{V}_n) - \\frac{1}{\\beta}(\\frac{1}{2}- \\beta) \\mathcal{M}\\mathbf{A}_n + \\mathcal{A}\\mathbf{U}_{n+1} &= \\mathbf{F}_h \\quad 0 \\leq n \\leq N_T-1 \\\\\n    \\frac{1}{\\gamma \\Delta t} \\mathcal{M}(\\mathbf{V}_{n+1} - \\mathbf{V}_{n}) - \\frac{1}{\\gamma}(1-\\gamma) \\mathcal{M}\\mathbf{A}_n + \\mathcal{A}\\mathbf{U}_{n+1} & = \\mathbf{F}_h \\quad 0 \\leq n \\leq N_T-1\n\\end{align}\n\nNote that such a time step iteration (\\ref{MatrixSystemUpdate}) corresponds to solving, for the unknown $\\mathbf{U}_{n+1}$, the linear system of equations:\n\\begin{equation*}\n    (\\mathcal{M}+ \\beta \\Delta t^2 \\, \\mathcal{A}) \\mathbf{U}_{n+1} = \\mathbf{F}_{n+1}\n\\end{equation*}\nwhere $\\mathbf{F}_{n+1} \\in \\mathbb{R}^{N(h)}$ is known, and then updating the solution fields using (\\ref{TwoStage-Update}).\nNow, since $\\beta \\geq 0$, it follows that the $\\mathbb{R}^{N(h)} \\times \\mathbb{R}^{N(h)}$ matrix $(\\mathcal{M}+ \\beta \\Delta t^2 \\mathcal{A})$ is positive definite, and moreover it is symmetric. 
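\n\nTo make the update step concrete, the following is a minimal \\texttt{NumPy} sketch of one possible implementation of (\\ref{MatrixSystemUpdate}); the matrices $\\mathcal{M}$, $\\mathcal{A}$ and the load are random placeholders standing in for the FEM assembly, not the \\texttt{FEniCS} objects used in this work.\n\\begin{verbatim}\n# Minimal beta-Newmark sketch for M a + A u = F (placeholder data,\n# beta = 1/4, gamma = 1/2 gives the second-order scheme).\nimport numpy as np\n\nN, NT, dt, beta, gamma = 50, 200, 1e-3, 0.25, 0.5\nrng = np.random.default_rng(0)\nM = np.eye(N)                                  # mass matrix (placeholder)\nK = rng.standard_normal((N, N))\nA = K.T @ K + N * np.eye(N)                    # SPD stiffness (placeholder)\nF = rng.standard_normal(N)                     # constant load (placeholder)\n\nu, v = np.zeros(N), np.zeros(N)\na = np.linalg.solve(M, F - A @ u)              # consistent initial acceleration\nLHS = M + beta * dt**2 * A                     # SPD matrix, could be factored once\nfor n in range(NT):\n    # (M + beta dt^2 A) u_{n+1} = beta dt^2 F + M (u_n + dt v_n + dt^2 (1-2b)/2 a_n)\n    rhs = beta * dt**2 * F + M @ (u + dt * v + 0.5 * dt**2 * (1 - 2 * beta) * a)\n    u_new = np.linalg.solve(LHS, rhs)\n    # Stage 1: acceleration update\n    a_new = (u_new - u - dt * v) / (beta * dt**2) - (1 - 2 * beta) / (2 * beta) * a\n    # Stage 2: velocity update\n    v = v + dt * ((1 - gamma) * a + gamma * a_new)\n    u, a = u_new, a_new\n\\end{verbatim}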
\n\n\\section{Dynamic Type Models}\nThe discretization procedures defined above enable the step-wise implementation of the homogenized fully elastodynamic model, of its attenuated version introduced to bypass the resonance effects shown in the frequency-domain formulation, and of viscoelastic formulations.\n\\subsection{Fully Elastodynamic}\n\nTaking into account our homogenized elastodynamic model (\\ref{HomogenizedPDE}), described explicitly by:\n\\begin{equation}\n    \\label{HomPDE-Model}\n    \\left \\{\n    \\begin{array}{cc}\n        \\text{Find $u \\in \\mathcal{C}^2(0,T;\\mathbf{H}^1(\\Omega, \\Gamma_D))$} & \\text{such that} \\\\\n        \\rho^{0} \\partial_{tt} u^{(0)}(t,\\mathbf{x}) - \\nabla \\cdot \\sigma^{0} (u^{(0)}(t,\\mathbf{x}) ) = \\mathbf{0} & \\text{ in } (0,T)\\times \\Omega \\\\\n        \\sigma^{0}_{ij}(u^{(0)}(t, \\mathbf{x})) = C^{hom}_{ijkl}\\mathbf{e}_{kl,x}(u^{(0)}(t,\\mathbf{x})) & \\text{ in } (0,T)\\times \\Omega \\\\\n        u^{(0)}(t, \\mathbf{x}) = \\mathbf{0} & \\text{ on } (0,T)\\times \\Gamma_D \\\\\n        \\sigma^{0}(u^{(0)}(t, \\mathbf{x})) \\cdot n = \\mathbf{F}(t, \\mathbf{x}) & \\text{ on } (0,T)\\times \\Gamma_N\n    \\end{array}\n    \\right .\n\\end{equation}\nthe time discretization applied to (\\ref{HomPDE-Model}) turns into the update scheme, for each $n \\in \\{1,\\dots, N\\}$, of the form:\n\\begin{equation*}\n    \\label{HomPDE-TimeUpdate}\n    \\frac{\\rho^{0}}{\\beta (\\Delta t)^2} u_{n+1} - \\nabla \\cdot \\sigma(u_{n+1}) = \\frac{\\rho^{0}}{\\beta (\\Delta t)^2} ( u_{n} + (\\Delta t) v_n ) + \\rho^{0}\\frac{(1-2\\beta)}{2\\beta} a_n\n\\end{equation*}\nwith the fields updated via (\\ref{TwoStage-Update}).\n\n\\subsection{Elastodynamic Attenuated}\nThe presence of eigen-frequencies in the elastodynamic operator yields poor numerical solutions when the discrete system is solved in the frequency domain. This problem can be bypassed using a regularization factor, that is, a kind of viscosity term which shifts the eigen-frequencies and stabilizes the discrete system, resulting in a \\textit{good} approximation to the experimental results.\\\\\nIn this case, we add a term which in the viscoelastic literature resembles a \\textit{Kelvin-Voigt} type behavior. The term is scaled with a parameter $\\epsilon$, so that the overall behavior is not abruptly disrupted but the problem is well-posed on the set of real frequencies.\n\n\\begin{rem}\nOne possible way to see this technique is to consider that we are moving the real frequencies $\\omega$ to $\\omega - i\\epsilon$; this keeps us away from the eigen-frequencies of the operator $\\mathcal{I}_{C^{hom}}$, while the overall behavior of the real part remains close to the experimental setting.\n\\end{rem}\n\nThe material is thus modeled as a function of the displacement $u(\\mathbf{x},t) \\in H^{1}(\\Omega)$ (after the homogenization procedure), where $\\Omega \\subset \\mathbb{R}^2$ is a sufficiently regular domain with \\textit{Lipschitz} boundary $\\partial \\Omega := \\Gamma_D \\cup \\Gamma_N$. 
\nExplicitly, the regularized elastic model is defined by:\n\\begin{equation*}\n    \\left \\{\n    \\begin{array}{cc}\n        \\rho^{0} \\partial_{tt} u^{(0)} - \\epsilon \\partial_{t} u^{(0)} - \\nabla \\cdot \\sigma^{0}(u^{(0)}) = \\mathbf{0} & \\text{in } (0,T)\\times \\Omega\\\\\n        \\sigma^{0}(u^{(0)}) =  \\mathbf{C}^{hom}:\\mathbf{e}(u^{(0)})  & \\text{in }(0,T)\\times \\Omega\\\\ \n        \\sigma^{0}(u^{(0)})\\cdot n = \\mathbf{F} & \\text{on } (0,T)\\times \\Gamma_N\\\\\n        u^{(0)} = \\mathbf{0} & \\text{on }(0,T)\\times \\Gamma_D \\\\\n        u^{(0)} = \\partial_t u^{(0)} = \\mathbf{0}& \\text{ on } \\{t=0\\}\\times\\Omega\n    \\end{array}\n    \\right .\n\\end{equation*}\nwhere $\\epsilon$ denotes a small regularization parameter associated with a Kelvin-Voigt type viscosity. \nAs before, we consider a $\\beta$-Newmark time discretization following the two-stage update procedure for each $n \\in \\{1,\\dots, N\\}$:\n\\begin{align*}\n    \\text{(Stage 1)} &\\longrightarrow a_{n+1} = \\frac{1}{\\beta (\\Delta t)^2} (u_{n+1}-u_{n}-(\\Delta t)v_n) - \\frac{1-2\\beta}{2\\beta}a_n\\\\\n    \\text{(Stage 2)}& \\longrightarrow v_{n+1} = v_n + \\Delta t((1-\\gamma)a_n + \\gamma a_{n+1})\n\\end{align*}\nIt is then possible to define a numerically explicit scheme for our model, with the update procedure at time $n+1$ given in the form:\n\\begin{align*}\n    \\frac{(\\rho^{0}- \\epsilon (\\Delta t) \\gamma)}{\\beta (\\Delta t)^2} u_{n+1} - \\nabla \\cdot \\sigma^{0}(u_{n+1}) &= \\frac{\\rho^{0}}{\\beta (\\Delta t)^2} (u_n + (\\Delta t)v_n) + \\rho^{0}\\frac{ (1-2\\beta)}{2\\beta}a_n \\\\\n    & \\, - \\frac{\\epsilon \\gamma}{\\beta (\\Delta t)}(u_n + (\\Delta t)v_n) - \\epsilon (\\Delta t)\\gamma\\frac{1-2\\beta}{2 \\beta} a_n\n\\end{align*}\n\n\\subsection{Kelvin-Voigt Viscoelastic Type}\nFollowing the developments of the above ideas, an $\\epsilon$-\\textit{Kelvin-Voigt} viscoelastic bone model captures the main characteristics observed in the biological and biomechanical literature on bone. In this case, the model is taken to be of linear elastic-viscous behavior with an $\\epsilon$-regularization of the viscous part.\nThus, after applying the homogenization procedure, the following homogenized model is obtained:\n\\begin{equation*}\n    \\left \\{\n    \\begin{array}{cc}\n        \\rho^{0} \\partial_{tt} u^{(0)} - \\nabla \\cdot \\sigma^{0}(u^{(0)}) = \\mathbf{0} & \\text{ in } (0,T)\\times\\Omega\\\\\n        \\sigma^{0}(u^{(0)}) =  \\mathbf{C}^{hom}:\\mathbf{e}(u^{(0)}) + \\epsilon \\mathbf{D}^{0}:\\mathbf{e}(\\partial_{t}u^{(0)}) & \\text{ in }(0,T)\\times \\Omega\\\\\n        \\sigma^{0}(u^{(0)})\\cdot n = \\mathbf{F} & \\text{ on }(0,T)\\times\\Gamma_N\\\\\n        u^{(0)} = \\mathbf{0} & \\text{ on }(0,T)\\times\\Gamma_D \\\\\n        u^{(0)} = \\partial_t u^{(0)} = \\mathbf{0} &  \\text{ on } \\{t=0\\}\\times\\Omega\n    \\end{array}\n    \\right .\n\\end{equation*}\nThe standard discretization scheme is defined by a $\\beta$-\\textit{Newmark} implicit finite differences scheme for the time domain and FEM for the space domain. 
In this case, the time discretization is obtained in the usual two stages:\n\\begin{align*}\n    \\text{(Stage 1)} &\\longrightarrow a_{n+1} = \\frac{1}{\\beta (\\Delta t)^2} (u_{n+1}-u_{n}-(\\Delta t)v_n) - \\frac{1-2\\beta}{2\\beta}a_n\\\\\n    \\text{(Stage 2)}& \\longrightarrow v_{n+1} = v_n + \\Delta t((1-\\gamma)a_n + \\gamma a_{n+1})\n\\end{align*}\nThe update scheme at time $t_{n+1}$ is then obtained in the form:\n\\begin{align*}\n    \\frac{\\rho^{0}}{\\beta (\\Delta t)^2} u_{n+1} - \\nabla \\cdot \\sigma_C^{0}( u_{n+1})  & - \\frac{\\gamma \\Delta t}{\\beta (\\Delta t)^2} \\nabla \\cdot \\sigma_D^{0}(u_{n+1}) \\\\\n    &= \\frac{\\rho^{0}}{\\beta (\\Delta t)^2} (u_n + (\\Delta t)v_n) + \\rho^{0}\\frac{ (1-2\\beta)}{2\\beta} a_n \\\\\n    & + \\nabla \\cdot \\sigma_D^{0}(v_n) + (1-\\gamma)(\\Delta t) \\nabla\\cdot \\sigma_D^{0}(a_n) \\\\\n    & - \\frac{(\\Delta t)\\gamma}{\\beta (\\Delta t)^2}\\nabla \\cdot \\sigma_D^{0}(u_n + (\\Delta t)v_n) \\\\\n    & - (\\Delta t)\\gamma\\frac{1-2\\beta}{2 \\beta} \\nabla \\cdot \\sigma_D^{0}(a_n)\n\\end{align*}\n\nwhere we use the notation $\\sigma_C^{0}(w) = \\mathbf{C}:\\mathbf{e}(w)$ and $\\sigma_D^{0}(w) = \\mathbf{D}:\\mathbf{e}(w)$ for $w \\in \\mathbf{H}^1(\\Omega)$.\n\n%\\section{Frequency Domain Formulation}\n%Finally, the study on frequency domain relates to various aspects, ranging from the study of the elastic operator and its eigen-frequencies, the proximity of the fourier implementation within the device recording procedure to the computational restrictions limiting the time-domain implementation. \n\n%To this end, let us take Fourier transform to the time-dependent problem in such a way that it is considered particular solutions at frequency $\\omega \\in \\mathbb{R}$ in the form:\n%\\begin{equation}\n%    \\label{FreqAnsatz}\n%    u^{\\epsilon}(\\mathbf{x},t) = \\hat{u}^{\\epsilon}(\\mathbf{x},\\omega) e^{i \\omega t}\n%\\end{equation}\n%Again, following the homogenization procedure, the effective macroscopic PDE problem (\\ref{HomPDE-Model}) is obtained:\n%\\begin{equation}\n%    \\label{VectorPDE-Freq}\n%    \\left \\{\n%    \\begin{array}{cc}\n%        -\\rho^{hom} \\omega^2 \\hat{u}^{(0)}(\\mathbf{x}) - \\nabla \\cdot \\sigma^{hom} (\\hat{u}^{(0)}(\\mathbf{x}))=\\mathbf{0}  & \\text{ in }  \\Omega \\\\\n%        \\sigma^{hom} (\\hat{u}^{(0)}(\\mathbf{x})) = \\mathbf{C}^{hom}(\\mathbf{x}): \\mathbf{e} (\\hat{u}^{(0)}(\\mathbf{x})) & \\text{ in } \\Omega  \\\\\n%        \\hat{u}^{(0)}(\\mathbf{x}) = \\mathbf{0} & \\text{ on } \\Gamma_D\\\\\n%        \\sigma^{hom} (\\hat{u}^{(0)}(\\mathbf{x}))\\cdot n = \\hat{\\mathbf{F}}(\\mathbf{x}) & \\text{ on } \\Gamma_N \\\\\n%    \\end{array}\n%    \\right .\n%\\end{equation}\n%where it is omitted the frequency dependency on the solution $\\hat{u}^{(0)}$ for easiness of exposure and used $\\Hat{F}(\\omega, \\mathbf{x})$ as the Fourier transform at frequency $\\omega$ of $F(t, \\mathbf{x})$. 
\n%Finally, the discretization of (\\ref{VectorPDE-Freq}) is defined by selecting a particular array of frequencies $(\\omega_i)_{i \\in [N_{\\omega}]}$ associated to the experimental setup, over which the frequency domain simulations takes place.\n\n", "meta": {"hexsha": "a9986eb416d7194214648b79881919653da3ba2e", "size": 19736, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "discretization.tex", "max_stars_repo_name": "Reidmen/Master-Thesis-2018", "max_stars_repo_head_hexsha": "4cefa410208dfced927616ba8e2b62c4b6557281", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "discretization.tex", "max_issues_repo_name": "Reidmen/Master-Thesis-2018", "max_issues_repo_head_hexsha": "4cefa410208dfced927616ba8e2b62c4b6557281", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "discretization.tex", "max_forks_repo_name": "Reidmen/Master-Thesis-2018", "max_forks_repo_head_hexsha": "4cefa410208dfced927616ba8e2b62c4b6557281", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.8086956522, "max_line_length": 911, "alphanum_fraction": 0.6713619781, "num_tokens": 6686, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5667309715441471}}
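\n\nAs a complement to the footnote on \\texttt{FEniCS}, the following is a minimal sketch of how the mass and stiffness forms behind (\\ref{MatrixFormulation}) could be assembled in the legacy \\texttt{DOLFIN} interface; the unit-square mesh, the isotropic stand-in for $\\mathbf{C}^{hom}$ and all coefficient values are illustrative assumptions, not the configuration used in this work.\n\\begin{verbatim}\n# Sketch: assembling mass and stiffness matrices in legacy FEniCS/DOLFIN.\nfrom dolfin import *\n\nmesh = UnitSquareMesh(16, 16)                    # placeholder domain\nV = VectorFunctionSpace(mesh, 'Lagrange', 1)     # H1-conformal, order k = 1\nu, v = TrialFunction(V), TestFunction(V)\nrho0, lam, mu = 1.0, 1.0, 1.0                    # placeholder coefficients\n\ndef sigma(w):\n    # isotropic stand-in for C_hom : e(w)\n    return lam * tr(sym(grad(w))) * Identity(2) + 2.0 * mu * sym(grad(w))\n\nM = assemble(rho0 * inner(u, v) * dx)            # mass matrix entries M_ij\nA = assemble(inner(sigma(u), sym(grad(v))) * dx) # stiffness entries I_Chom\n\\end{verbatim}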
{"text": "\\input{../ex_template.tex}\n\n\n\\title{BPP Exercise 4 - Lists and Collections}\n\\author{A. Hain, M. Nipshagen}\n\\date{30.04.2018, 8:00}\n\n\\makeatletter\n\\let\\thetitle\\@title\n\\let\\theauthor\\@author\n\\let\\thedate\\@date\n\\makeatother\n\n% Defining the Left Pointy Bracket and the Right Pointy Bracket, so they look nicer\n\\newcommand\\lpb{\\small{<}}\n\\newcommand\\rpb{\\small{>}}\n% sublists are annoying\n\\newcommand\\SubPoint[1]{\n  \\begin{itemize}\n    \\item #1\n  \\end{itemize}\n  }\n  \n% do not include solutions\n% \\renewcommand\\sol[1]{} \\renewcommand\\SubPoint[1]{}\n\n\n\\begin{document}\n\nThe deadline for this exercise sheet is \\textbf{Monday, \\thedate.}\n%\n%\\section*{Introductory Words}\n%In case we have some information that doesn't directly concern the current exercises.\n%\n\\section{Vector Math}\nWe can model vectors with tuples and lists. Python however is not equipped with the tools to do vector math, but we can write the functions for this ourselves!\\\\\nWrite a script \\texttt{vectors.py} which defines the following functions:\n\\begin{itemize}\n  \\item \\texttt{add(x, y)}: Adds $x$ and $y$ such that the result follows $z_i = x_i + y_i$.\n    \\cprotect\\sol{\n\\begin{python}\ndef add(x, y):\n      \"\"\"Adds up the vectors x and y\"\"\"\n      return [x[i] + y[i] for i in range(len(x))]\n\\end{python}\n    }\n  \\item \\texttt{sub(x, y)}: Subtracts $y$ from $x$ such that the result follows $z_i = x_i - y_i$.\n    \\cprotect\\sol{\n\\begin{python}\ndef sub(x, y):\n    \"\"\"returns the result of subtracting y from x\"\"\"\n\t\treturn [x[i] - y[i] for i in range(len(x))]\n\\end{python}\n    }\n  \\item \\texttt{dot(x, y)}: Calculates the scalar (dot) product (or inner product) of $x$ and $y$. The dot product $\\lpb x,y\\rpb$ is defined as $\\lpb x,y\\rpb = \\sum\\limits_{i=1}^{N}x_iy_i$.\n\\cprotect\\sol{\\begin{python}\ndef dot(x, y):\n    \"\"\"Returns the inner product of the two vectors\"\"\"\n\t\tsum([x[i]y[i] for i in range(len(x))])\n\t\\end{python}}\n  \\item \\texttt{angle(x, y)}: Calculates the angle $\\theta$ between the between $x$ and $y$. The angle can be found by using an alternative definition of the dot product: $\\lpb x,y\\rpb = \\|x\\|\\|y\\| \\cos\\theta$, which we can then solve for $\\theta$ and we get $\\theta = \\arccos\\dfrac{\\lpb x,y\\rpb}{\\|x|\\|y\\|}$.\\\\\n  \\emph{Note:} $\\|x\\|$ where $x$ is a vector is called the vector norm and is calculated by $\\sqrt{<x,x>}$. You can find the arccos function in the \\texttt{math} package, which we previously used for pi. The function is called \\texttt{acos}.\n\\cprotect\\sol{\\begin{python}\nimport math\n\n\ndef angle(x, y):\n    \"\"\"Returns the angle between the two vectors in radien\"\"\"\n    divisor = math.sqrt(dot(x, x)) * math.sqrt(dot(y, y))\n    return math.acos(dot(x, y) / divisor)\n\t\\end{python}}\n  \\item \\texttt{pdist(x, y, **kwargs)}: The distance between the two points $x$ and $y$. Your \\textit{kwargs} should recognise the keywords \\texttt{metric} and \\texttt{p}. Your \\texttt{metric} keyword should be able to take one of the values \\texttt{'euclidean', 'minkowski', 'cityblock'}, and \\texttt{p} can take any integer greater or equal than one. The distance is calculated depending on the metric and should default to \\texttt{euclidean}. 
The distances are calculated as follows:\n    \\begin{itemize}\n      \\item \\texttt{euclidean}: $d_{euclid} (x,y) = \\sqrt[\\uproot{2}2]{\\sum\\limits_{i=1}^N \\|x_i-y_i\\|^2}$.\n      \\item \\texttt{minkowski}: $d_{minkowski} (x,y) = \\sqrt[\\uproot{2}\\texttt{p}]{\\sum\\limits_{i=1}^N \\|x_i-y_i\\|^\\texttt{p}}$.\n      \\item \\texttt{cityblock}: $d_{cityblock} (x,y) = \\sum\\limits_{i=1}^N \\|x_i-y_i\\|$\\\\\n      (\\emph{Note:} $\\|x\\|$ is the absolute value.)\n    \\end{itemize}\n    See figure \\ref{fig:metrics} for a nicer visualisation.\\\\\n    Don't be afraid of this. It looks scary at first, but it is not beyond the scope of what you already learned.\n\\cprotect\\sol{\\begin{python}\nimport math\n\n\ndef pdist(x, y, **kwargs):\n    \"\"\"\n    Calculates the distance between x and y using the given metric.\n\n    The default metric used for distance calculation is the euclidean metric. Other options are 'cityblock' and 'minkowski'. If using minkowski distance, the parameter p >= 1 can be supplied.\n    \"\"\"\n    # check whether metric is given, and use euclidean as default\n    metric = kwargs['metric'] if 'metric' in kwargs else 'euclidean'\n    # p is only used by the minkowski metric and defaults to 2\n    p = kwargs['p'] if 'p' in kwargs else 2\n    if metric == 'euclidean':\n        squared = [abs(z)**2 for z in sub(x, y)]\n        return math.sqrt(sum(squared))\n    elif metric == 'cityblock':\n        diff = [abs(z) for z in sub(x, y)]\n        return sum(diff)\n    elif metric == 'minkowski':\n        if p < 1:\n            return None\n        to_p = [abs(z)**p for z in sub(x, y)]\n        return sum(to_p) ** (1/p)\n    else:\n        return None\n\n\\end{python}\n}\n  \\item \\textbf{BONUS:} \\texttt{outer(x, y)}: calculates the outer product of two vectors. The outer product of two vectors is the tensor product of the two. For two vectors with size $n$ the outer product will yield a matrix of size $(n, n)$.\n  The outer product is calculated as $C_{ij} = x_iy_j$.\n  \\cprotect\\sol{\\begin{python}\ndef outer(x, y):\n    \"\"\"Returns a list of lists representing a matrix calculated from the two vectors, each list represents one row\"\"\"\n    # one liner:\n    # return [[xi * yi for yi in y] for xi in x]\n    result = []\n    for i in range(len(x)):\n        row = []\n        for j in range(len(y)):\n            row.append(x[i] * y[j])\n        result.append(row)\n    return result\n\\end{python}}\n\\end{itemize}\nYou can assume that all vectors have the same size. To verify your function works correctly you can test it with the following values:\\\\\n$a = (1, 2, 3), b = (4, 5, 6), c = [0, 1, 0, 0, 1], d = [1.5, 2.5, 3.5, 4.5, 5.5]$\\\\\n\\begin{tabular}{|ll|}\n  \\hline\n  Call & Result \\\\\n  \\hline\n  \\texttt{add(a,b)} & (5, 7, 9)\\\\\n  \\texttt{add(b,a)} & (5, 7, 9)\\\\\n  \\texttt{sub(a,b)} & (-3, -3, -3)\\\\\n  \\texttt{dot(c,d)} & 8.0\\\\\n  \\texttt{angle(a,b)} & approx. 0.23\\\\\n  \\texttt{pdist(a,b)} & approx. 
5.20\\\\\n  \\texttt{pdist(a,b, metric='cityblock')} & 9\\\\\n  \\texttt{pdist(c,d, metric='minkowski', p=3)} & 6.14\\\\\n  \\texttt{outer(a,b)} & $\\left[\\begin{matrix} 4 & 5 & 6\\\\ 8 & 10 & 12\\\\ 12 & 15 & 18 \\end{matrix}\\right]$\\\\\n  \\hline\n\\end{tabular}\n\\begin{figure}\n  \\includegraphics[scale=0.5]{metriken}\n  \\label{fig:metrics}\n  \\caption{Blue, red, and yellow are examples for the cityblock distance (it looks like following the streets of a grid city) and green is the euclidean distance.}\n  \\small{Taken from \\url{https://en.wikipedia.org/wiki/Taxicab_geometry#/media/File:Manhattan_distance.svg}}\n  \n\\end{figure}\n\n\\section{Which to What}\nWe will give you a few examples of datasets. Which collection type would you use to save this data in?\\\\\nPlease explain your answers.\n\\begin{itemize}\n\\item options a user can click on in a program menu\n\\SubPoint{\\sol{List}}\n  \\item a country's name, population and capital\n    \\SubPoint{\\sol{Tuple or Dict}}\n  \\item food ingredients\n    \\SubPoint{\\sol{List}}\n  \\item data about a music album\n    \\SubPoint{\\sol{Dict}}\n  \\item people who visit Basic Programming in Python or Scientific Programming in Python\n    \\SubPoint{\\sol{Set}}\n  \\item IDs of upcoming orders in an online shop\n    \\SubPoint{\\sol{List}}\n  \\item nicknames and account information in a forum (e-mail address, password, real name [optional], birth date [optional], ...)\n    \\SubPoint{\\sol{Dict}}\n\\end{itemize}\n\n\\section{The Transform}\nIn the file \\texttt{my\\_collection.py} you will find two lists defined: \\texttt{subjects} and \\texttt{attributes}. The first list contains subject ids, whereas the second one contains attributes corresponding to the subject ids. So the subject at index 0 of \\texttt{subjects} has the attribute at index 0 of \\texttt{attributes} and the subject at index 9 has the attribute at index 9. You get the idea.\\\\\nYou may notice that each subject id appears multiple times. 
Your task now is to create a function to create one dictionary which uses the subject id as a key and a \\textit{list} of attributes as the value.\n\\cprotect\\sol{\n\\begin{python}\n# the given lists\nall_subjects = [0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 9, 9, 9]\nall_attributes = ['Materialistic', 'Neat', 'Active', 'Welcoming', 'Creative', 'Ambitious', 'Geek', 'Welcoming', 'Neat', 'Creative', 'Geek', 'Quiet', 'Shy', 'Neat', 'Ambitious', 'Adventurous', 'Active', 'Welcoming', 'Adventurous', 'Neat', 'Ambitious', 'Excitable', 'Active', 'Welcoming', 'Quiet', 'Excitable', 'Ambitious', 'Adventurous', 'Quiet', 'Geek', 'Active', 'Spiritual', 'Quiet', 'Excitable', 'Materialistic', 'Geek', 'Welcoming', 'Excitable', 'Adventurous']\n\n# This is the function you need to implement\ndef clean_up(subjects, attributes):\n  \"\"\"\n  This function takes a list of subject ids which correspond to attributes\n  in the second list, and forms a dictionary out of them, with the unique\n  subject id as the key and a list of their attributes as the corresponding\n  value.\n  \"\"\"\n  # and as a one-liner:\n  # return {sub : attributes[subjects.index(sub):subjects.index(sub)+subjects.count(sub)] for sub in set(subjects)}\n  # create the empty dict that we will keep adding the stuff into one by one\n  subject_dict = dict()\n  # idx is the counter going from 0 to 38\n  # using enumerate saves as the line: subj_id = subjects[idx]\n  for idx, subj_id in enumerate(subjects):\n  # if this is the first time we encounter this subject id, add it to the dict\n  # the value is an empty list for now, we will now add all the attributes\n    if subj_id not in subject_dict:\n      subject_dict[subj_id] = []\n    # add the current attribute to the list of the subject\n    subject_dict[subj_id].append(attributes[idx])\n\n  return subject_dict\n\n# So nice, tidy and clean.\nsubject_dict = clean_up(all_subjects, all_attributes)\n\\end{python}\n}\n\\end{document}\n", "meta": {"hexsha": "25ae5bba1f17811d6221b5087a211ea5f7ecfc95", "size": 9831, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2018/04/04_Ex_Lists_and_Collections.tex", "max_stars_repo_name": "lfrommelt/monty", "max_stars_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2018/04/04_Ex_Lists_and_Collections.tex", "max_issues_repo_name": "lfrommelt/monty", "max_issues_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2018/04/04_Ex_Lists_and_Collections.tex", "max_forks_repo_name": "lfrommelt/monty", "max_forks_repo_head_hexsha": "e8cabf0e4ac01ab3d97eecee5e699139076d6544", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-10T23:52:25.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-10T23:52:25.000Z", "avg_line_length": 46.8142857143, "max_line_length": 486, "alphanum_fraction": 0.6748041908, "num_tokens": 3080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789132480439, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5667309632635926}}
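\n\nFor self-checking, here is one possible test snippet against the table of expected values from the first exercise (assuming the solution functions above are in scope; the rounding used is an illustrative choice):\n\\begin{python}\n# quick sanity checks against the table of expected values\na, b = (1, 2, 3), (4, 5, 6)\nc, d = [0, 1, 0, 0, 1], [1.5, 2.5, 3.5, 4.5, 5.5]\nassert add(a, b) == [5, 7, 9]\nassert sub(a, b) == [-3, -3, -3]\nassert dot(c, d) == 8.0\nassert round(angle(a, b), 2) == 0.23\nassert round(pdist(a, b), 2) == 5.2\nassert pdist(a, b, metric='cityblock') == 9\nassert round(pdist(c, d, metric='minkowski', p=3), 2) == 6.14\nassert outer(a, b) == [[4, 5, 6], [8, 10, 12], [12, 15, 18]]\n\\end{python}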
{"text": "\n\\section{Subcritical flow without a shock over a bump}\n\nThis is a subcritical flow over a bump. This test is adapted from Goutal and Maurel~\\cite{GM1997}. No shock occurs in this scenario. \n\nConsider a one dimensional domain $[0,25]$ with topography\n\\begin{equation}\nz(x)= \\left\\{ \\begin{array}{ll}\n 0.2-0.05\\left(x-10\\right)^2& ~\\textrm{if}\\quad 8 \\leq x \\leq 12\\,,\\\\\n 0 & ~\\textrm{otherwise}\\,,\\\\\n\\end{array} \\right.\n\\end{equation}\ntogether with Dirichlet boundary conditions.\nPhysically, the boundary conditions mean that there is a source of flow upstream at the point $x=0^{-}$ and at the same time there exists a sink of flow downstream at the point $x=25^{+}$\\,.\n\n\nThe analytical height is found by solving the Bernoulli equation. The simplified Bernoulli equation is the following cubic equation\n\\begin{equation}\nh^3 + \\left(z - \\frac{q^2}{2 g H^2} - H \\right) h^2 + \\frac{q^2}{2 g} = 0\\,,\n\\end{equation}\nwhere $H$ is the upstream height and $q=uh$ is the discharge or $x$-momentum. When the height $h$ has been found, the velocity is computed as $u=q/h$\\,.\n\n\\subsection{Results}\nFor our test we consider the initial condition\n\\begin{equation}\nu(x,y,0)=v(x,y,0)=0\\,, \\quad\nw(x,y,0)= 0.2\\,,\n\\end{equation}\nand the Dirichlet boundary conditions at $x=0^{-}$ and $25^{+}$ to be \n\\begin{equation}\n[w,hu,hv]=[2, 4.42, 0]\\,.\n\\end{equation}\nRepresentatives of the simulation results are given in the following three figures. Even though we have small discrepancy in the numerical and analytical momenta, these numerical an analytical solutions should agree quite well. Note the vertical scale of the plots (matplotlib may offset the vertical scale, check top-left of figures).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{stage_plot.png}\n\\end{center}\n\\caption{Stage results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xmom_plot.png}\n\\end{center}\n\\caption{Xmomentum results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xvel_plot.png}\n\\end{center}\n\\caption{Xvelocity results}\n\\end{figure}\n\n\n\\endinput\n", "meta": {"hexsha": "3b0040854145d2cc94cd7dfade909f0d54b012f1", "size": 2104, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "validation_tests/analytical_exact/subcritical_over_bump/results.tex", "max_stars_repo_name": "samcom12/anuga_core", "max_stars_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_stars_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_stars_count": 136, "max_stars_repo_stars_event_min_datetime": "2015-05-07T05:47:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T03:07:40.000Z", "max_issues_repo_path": "validation_tests/analytical_exact/subcritical_over_bump/results.tex", "max_issues_repo_name": "samcom12/anuga_core", "max_issues_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_issues_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-05-03T09:27:54.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-20T04:22:48.000Z", "max_forks_repo_path": "validation_tests/analytical_exact/subcritical_over_bump/results.tex", "max_forks_repo_name": "samcom12/anuga_core", "max_forks_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_forks_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_forks_count": 70, 
"max_forks_repo_forks_event_min_datetime": "2015-03-18T07:35:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T07:07:29.000Z", "avg_line_length": 35.0666666667, "max_line_length": 335, "alphanum_fraction": 0.7328897338, "num_tokens": 651, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5667309600626007}}
{"text": "\\chapter{Optimization}\n\\label{sec:opt}\nExecuting \\textit{FloatingSE} by hand is sufficient to explore some simple\none-off or comparison analyses between a few runs.  OpenMDAO\nprovides extensive optimization capability, which can give yield richer\nand more insightful analyses.\n\n\\section{Methodology}\nFor a full substructure optimization, we begin with the formulation of\na constrained, nonlinear single-objective optimization problem with\nmixed-integer design variables,\n\\begin{equation}\n\\begin{array}{ll}\n  \\min & f\\left(\\mbf{x}\\right)\\\\\n  \\text{subject to} & \\mbf{g}\\left(\\mbf{x}\\right) \\leq 0,\\\\\n  \\text{and}& \\mbf{x} \\in \\mbf{X} \\\\\n  \\end{array}\n\\end{equation}\nwhere,\n\\begin{itemize}\n\\item $\\mbf{x}$ is a vector of $n$ \\textit{design variables}, the variables that are adjusted in order to\n  find the optimal solution (see Table \\ref{tbl:designvar});\n\\item $f(\\mbf{x})$ is the nonlinear \\textit{objective function}, the\n  metric to be minimized by the optimization algorithm;\n\\item $\\mbf{g} (\\mbf{x})$ is the vector of \\textit{inequality constraints}, the\n  set of conditions that the solution must satisfy (see Table\n  \\ref{tbl:constraints}).  There are no equality constraints;\n\\item $\\mbf{X}$ is the design variable \\textit{bounds}, the bracket of\n  allowable design variable values.\n\\end{itemize}\n\nNote that this problem statement imposes no requirements on the types of\nvariables in $\\mbf{x}$.  A mixed-integer solution is desired, where some\ndesign variables are continuous ($x \\in \\mbb{R}$) and others are\ndiscrete variables that can only take integer values ($x \\in\n\\mbb{Z}$).  An example of an integer design variable in this\napplication is the number of offset columns or the number of mooring\nline connections.\n\n\n\\section{Gradient-Based versus Derivative-Free Algorithms}\nDerivative-free optimization algorithms are preferable for substructure\noptimization problems for a few reasons, despite their known performance\ndrawbacks in terms of wall-clock time.  First, to do a complete\nconfiguration optimization of the substructure, a mixed-integer capable\nalgorithm is required.  No gradient-based optimization algorithm is\ncapable of handling these types of variables directly (unless a rounding\napproximation is used).  A genetic algorithm, properly designed, can\nsupport mixed-integer variables for a global design space optimization.\n\nAnother reason for the selection of derivative-free algorithms is that\nthe \\textit{FloatingSE} uses a number of third-party, black box tools or\nalgorithms that do not come with analytical gradients.  This includes\nFrame3DD, MAP++, and some of the API 2U procedures that rely on roots of\nnonlinear equations.  Thus, gradient-based optimization algorithms would\nbe forced to use finite difference approximations around these tools at\nthe very least.  However, derivatives approximated with finite\ndifferences are expensive to compute accurately.  If computed\ninaccurately, for the sake of reducing computational time, finite\ndifference derivatives can easily lead an optimization algorithm astray,\nespecially in highly nonlinear or tightly constrained regions of the\ndesign space.  This is another reason for the use of\nderivative-free algorithms, even when conducting local neighborhood\ndesign space optimization and/or sensitivity studies.\n\n\n\\section{Design Variables}\nIn WISDEM, via OpenMDAO, any input parameter can be designated a design\nvariable.  
The design variables used in this study focused on the\ngeometric specification of the floating substructure and mooring\nsubsystem.  Slightly different design variables and bounds were used for\nspar, semisubmersible, and TLP optimizations.  The complete listing of\nthe design variables for each optimization configuration is shown in\nTable \\ref{tbl:designvar}.  Note that the integer design variables were\nonly used in the global optimization with the genetic algorithm, not the\nlocal search with the simplex algorithm.\n\n\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Standard design variables, their size, and units used for\n      optimization in \\textit{FloatingSE}.  Note that $n_s$ denotes the\n      number of sections in the column discretization.}\n    \\label{tbl:designvar}\n{\\footnotesize\n  \\begin{tabular}{ l l c l c } \\hline\n    \\textbf{Variable} & \\textbf{Name} & \\textbf{Units} & \\textbf{Type} & \\textbf{Bounds} \\\\ \\hline \\hline\n    Main col section height &\\mytt{main\\_section\\_height}& \\unit{$m$} & Float array ($n_s$) & 0.1--50 \\\\\n    Main col outer diameter &\\mytt{main\\_outer\\_diameter}& \\unit{$m$} & Float array ($n_s+1$) &2.1--40 \\\\\n    Main col wall thickness &\\mytt{main\\_wall\\_thickness}& \\unit{$m$} & Float array ($n_s+1$)  &0.001--0.5 \\\\\n    Main col freeboard & \\mytt{main\\_freeboard}& \\unit{$m$} & Float scalar &0--50 \\\\\n    Main col stiffener web height &\\mytt{main\\_stiffener\\_web\\_height}& \\unit{$m$} & Float array ($n_s$) &0.01--1 \\\\\n    Main col stiffener web thickness &\\mytt{main\\_stiffener\\_web\\_thickness}& \\unit{$m$} & Float array ($n_s$) &0.001--0.5 \\\\\n    Main col stiffener flange width &\\mytt{main\\_stiffener\\_flange\\_width}& \\unit{$m$} & Float array ($n_s$) &0.01--5 \\\\\n    Main col stiffener flange thickness &\\mytt{main\\_stiffener\\_flange\\_thickness}& \\unit{$m$} & Float array ($n_s$) &0.001--0.5 \\\\\n    Main col stiffener spacing &\\mytt{main\\_stiffener\\_spacing}& \\unit{$m$} & Float array ($n_s$) &0.1--100 \\\\\n    Main col permanent ballast height &\\mytt{main\\_permanent\\_ballast\\_height}& \\unit{$m$} & Float scalar &0.1--50 \\\\\n    Main col buoyancy tank diameter &\\mytt{main\\_buoyancy\\_tank\\_diameter}& \\unit{$m$} & Float scalar &0--50 \\\\\n    Main col buoyancy tank height &\\mytt{main\\_buoyancy\\_tank\\_height}& \\unit{$m$} & Float scalar &0--20 \\\\\n    Main col buoyancy tank location (fraction) &\\mytt{main\\_buoyancy\\_tank\\_location}&& Float scalar &0--1 \\\\\n    \\hline\n    Number of offset cols &\\mytt{number\\_of\\_offset\\_columns}&& Integer scalar & 3-5 \\\\\n    Offset col section height &\\mytt{offset\\_section\\_height}& \\unit{$m$} & Float array ($n_s$) &0.1--50\\\\\n    Offset col outer diameter &\\mytt{offset\\_outer\\_diameter}& \\unit{$m$} & Float array ($n_s+1$)&1.1--40\\\\\n    Offset col wall thickness &\\mytt{offset\\_wall\\_thickness}& \\unit{$m$} & Float array ($n_s+1$) &0.001--0.5\\\\\n    Offset col freeboard & \\mytt{offset\\_freeboard}& \\unit{$m$} & Float scalar &2--15 \\\\\n    Offset col stiffener web height &\\mytt{offset\\_stiffener\\_web\\_height}& \\unit{$m$} & Float array ($n_s$) &0.01--1\\\\\n    Offset col stiffener web thickness &\\mytt{offset\\_stiffener\\_web\\_thickness}& \\unit{$m$} & Float array ($n_s$) & 0.001--0.5\\\\\n    Offset col stiffener flange width &\\mytt{offset\\_stiffener\\_flange\\_width}& \\unit{$m$} & Float array ($n_s$) & 0.01--5\\\\\n    Offset col stiffener flange thickness 
&\\mytt{offset\\_stiffener\\_flange\\_thickness}& \\unit{$m$} & Float array ($n_s$) & 0.001--0.5\\\\\n    Offset col stiffener spacing &\\mytt{offset\\_stiffener\\_spacing}& \\unit{$m$} & Float array ($n_s$) &0.01--100\\\\\n    Offset col permanent ballast height &\\mytt{offset\\_permanent\\_ballast\\_height}& \\unit{$m$} & Float scalar &0.1--50\\\\\n    Offset col buoyancy tank diameter &\\mytt{offset\\_buoyancy\\_tank\\_diameter}& \\unit{$m$} & Float scalar &0--50 \\\\\n    Offset col buoyancy tank height &\\mytt{offset\\_buoyancy\\_tank\\_height}& \\unit{$m$} & Float scalar &0--20 \\\\\n    Offset col buoyancy tank location (fraction) &\\mytt{main\\_buoyancy\\_tank\\_location}&& Float scalar & 0--1 \\\\\n    Radius to offset col &\\mytt{radius\\_to\\_offset\\_column}& \\unit{$m$} & Float scalar &5--100 \\\\\n    \\hline\n    Pontoon outer diameter &\\mytt{pontoon\\_outer\\_diameter}& \\unit{$m$} & Float scalar &0.1--10 \\\\\n    Pontoon wall thickness &\\mytt{pontoon\\_wall\\_thickness}& \\unit{$m$} & Float scalar &0.01--1 \\\\\n    Lower main-offset pontoons &\\mytt{lower\\_attachment\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Upper main-offset pontoons &\\mytt{upper\\_attachment\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Cross main-offset pontoons &\\mytt{cross\\_attachment\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Lower offset ring pontoons &\\mytt{lower\\_ring\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Upper offset ring pontoons &\\mytt{upper\\_ring\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Outer V-pontoons &\\mytt{outer\\_cross\\_pontoons\\_int}&& Integer scalar & 0--1 \\\\\n    Main col pontoon attach lower (fraction) &\\mytt{main\\_pontoon\\_attach\\_lower}&& Float scalar & 0--0.5\\\\\n    Main col pontoon attach upper (fraction) &\\mytt{main\\_pontoon\\_attach\\_upper}&& Float scalar & 0.5--1\\\\\n    \\hline\n    Fairlead (fraction) &\\mytt{fairlead\\_location}&& Float scalar &0--1 \\\\\n    Fairlead offset from col &\\mytt{fairlead\\_offset\\_from\\_shell}& \\unit{$m$} & Float scalar & 5--30\\\\\n    Fairlead pontoon diameter &\\mytt{fairlead\\_support\\_outer\\_diameter}& \\unit{$m$} & Float scalar & 0.1--10 \\\\\n    Fairlead pontoon wall thickness &\\mytt{fairlead\\_support\\_outer\\_thickness}& \\unit{$m$} & Float scalar &0.001--1 \\\\\n    Number of mooring connections &\\mytt{number\\_of\\_mooring\\_connections}&& Integer scalar &3--5 \\\\\n    Mooring lines per connection &\\mytt{mooring\\_lines\\_per\\_connection}&& Integer scalar &1--3 \\\\\n    Mooring diameter &\\mytt{mooring\\_diameter}& \\unit{$m$} & Float scalar &0.05--2 \\\\\n    Mooring line length &\\mytt{mooring\\_line\\_length}& \\unit{$m$} & Float scalar &0--3000 \\\\\n    Anchor distance &\\mytt{anchor\\_radius}& \\unit{$m$} & Float scalar &0--5000 \\\\\n  \\hline \\end{tabular}\n}\n\\end{center} \\end{table}\n\n\n\n\\section{Constraints}\nDue to the many design variables, permutations of settings, and applied\nphysics, there are many constraints that must be applied for an\noptimization to close.  The constraints capture both physical\nlimitations, such as column buckling, but also inject industry\nstandards, guidelines, and lessons learned from engineering experience\ninto the optimization.  As described in Section \\ref{sec:intro}, this is\na critically important element in building a MDAO framework for\nconceptual design that yields feasible results worth interrogating\nfurther with higher-fidelity tools.  
The constraints used in the\nsubstructure design optimization and sensitivity studies are listed in\nTable \\ref{tbl:constraints}.  Where appropriate, some of the constraint\nvalues differ from one type of substructure to another.  Some additional\nexplanation is provided for a handful of constraints in the subsections\nbelow.\n\n\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Optimization constraints used in \\textit{FloatingSE}.}\n    \\label{tbl:constraints}\n    {\\footnotesize\n  \\begin{tabular}{ c l c l} \\hline\n    \\textbf{Lower} & \\textbf{Variable} & \\textbf{Upper} & \\textbf{Comments}\\\\\n\\hline \\hline\n & \\textbf{Tower / Main / Offset Columns} &  & \\\\\n & Eurocode global buckling & 1.0 & \\\\\n & Eurocode shell buckling & 1.0 & \\\\\n & Eurocode stress limit & 1.0 & \\\\\n  & Manufacturability & 0.5 & Taper ratio limit\\\\\n  120.0 & Weld-ability &  & Diameter:thickness ratio limit\\\\\n\\hline & \\textbf{Main / Offset Columns} &  & \\\\\n & Draft ratio & 1.0 & Ratio of draft to max value\\\\\n & API 2U general buckling -- axial loads & 1.0 & \\\\\n & API 2U local buckling -- axial loads & 1.0 & \\\\\n & API 2U general buckling -- external loads & 1.0 & \\\\\n & API 2U local buckling -- external loads & 1.0 & \\\\\n & Wave height:freeboard ratio & 1.0 & Maximum wave height relative to freeboard\\\\\n  1.0 & Stiffener flange compactness &  & \\\\\n  1.0 & Stiffener web compactness &  & \\\\\n & Stiffener flange spacing ratio & 1.0 & Stiffener spacing relative to flange width\\\\\n & Stiffener radius ratio & 0.50 & Stiffener height relative to diameter\\\\\n\\hline & \\textbf{Offset Columns} &  & \\textit{Semi only}\\\\\n  0.0 & Heel freeboard margin &  & Height required to stay above waterline at max heel\\\\\n  0.0 & Heel draft margin &  & Draft required to stay submerged at max heel\\\\\n\\hline & \\textbf{Pontoons} &  & \\textit{Semi only}\\\\\n & Eurocode stress limit & 1.0 &\\\\\n\\hline & \\textbf{Tower} &  & \\\\\n  -0.01 & Hub height error & 0.01 &\\\\\n\\hline & \\textbf{Mooring} &  & \\\\\n  0.0 & Axial stress limit & 1.0 &\\\\\n & Line length limit & 1.0 & Loss of tension or catenary hang\\\\\n & Heel moment ratio & 1.0 & Ratio of overturning moment to restoring moment\\\\\n & Surge force ratio & 1.0 & Ratio of surge force to restoring force\\\\\n\\hline & \\textbf{Geometry} &  & \\\\\n  1.0 & Main-offset spacing &  & Minimum spacing between main and offset columns \\\\\n  0.0 & Nacelle transition buffer &  & Tower diameter limit at nacelle junction\\\\\n  -1.0 & Tower transition buffer & 1.0 & Diameter consistency at freeboard point\\\\\n\\hline & \\textbf{Stability} &  & \\\\\n  0.10 & Metacentric height &  & \\textit{Not applied to TLPs}\\\\\n  1.0 & Wave-Eigenmode boundary (upper) &  & Natural frequencies below wave frequency range\\\\\n & Wave-Eigenmode boundary (lower) & 1.0 & Natural frequencies above wave frequency range\\\\\n  0.0 & Water ballast height limit & 1.0 & \\\\\n  0.0 & Water ballast mass &  & Neutral buoyancy\\\\\n    \\hline \\end{tabular}\n  }\n\\end{center} \\end{table}\n\n\\subsection{Geometry Constraints}\nThe geometry constraints enforce a minimum spacing between the main and\noffset columns, diameter consistency at the tower transition points, and\na maximum allowable draft.  The user parameter governing the draft limit\nis listed in Table \\ref{tbl:geomconvar}.\n%\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Constraint variables for the geometry in \\textit{FloatingSE}.}\n    \\label{tbl:geomconvar}\n{\\footnotesize\n  \\begin{tabular}{l l l } \\hline\n    \\textbf{Variable} & \\textbf{Type} & \\textbf{Description} \\\\\n    \\mytt{max\\_draft} & Float scalar & Maximum allowable draft for the substructure\\\\\n  \\hline \\end{tabular}\n}\n\\end{center} 
\\end{table}\n\n\\subsection{Manufacturing Constraints}\nManufacturing steel frustum shells requires rolling steel plates into\nshape and welding along a seam to close the section.  To accommodate\ntraditional rolling and welding practices, both the diameter taper over\nthe course of a section and the wall thickness ratio relative to the\ndiameter are capped.  Similarly, to facilitate welding the\nsemisubmersible pontoons to the columns, constraints regarding the ratio\nof diameters between the two are enforced.  These limits are determined\nby the user parameters listed in Table \\ref{tbl:manconvar}.\n%\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Constraint variables for the manufacturability in \\textit{FloatingSE}.}\n    \\label{tbl:manconvar}\n{\\footnotesize\n  \\begin{tabular}{l l l } \\hline\n    \\textbf{Variable} & \\textbf{Type} & \\textbf{Description} \\\\\n    \\mytt{min\\_taper\\_ratio} & Float scalar & For manufacturability of rolling steel\\\\\n    \\mytt{min\\_diameter\\_thickness\\_ratio} & Float scalar & For weld-ability\\\\\n    \\mytt{connection\\_ratio\\_max} & Float scalar & For welding pontoons to columns\\\\\n  \\hline \\end{tabular}\n}\n\\end{center} \\end{table}\n\n\\subsection{Stress Limits and Code Compliance}\nThe stress and buckling code compliance constraints are formulated as\nutilization ratios (the ratio of the actual value to the maximum\nallowable, including a safety factor), which must be less than one.  The\nsafety factor parameters are listed in Table \\ref{tbl:safetyvar}.\n%\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Variables specifying the factors of safety within \\textit{FloatingSE}.}\n    \\label{tbl:safetyvar}\n{\\footnotesize\n  \\begin{tabular}{ l l l } \\hline\n    \\textbf{Variable} & \\textbf{Type} & \\textbf{Description} \\\\\n    \\mytt{gamma\\_f} & Float scalar & Safety factor on loads\\\\\n    \\mytt{gamma\\_b} & Float scalar & Safety factor on buckling\\\\\n    \\mytt{gamma\\_m} & Float scalar & Safety factor on materials\\\\\n    \\mytt{gamma\\_n} & Float scalar & Safety factor on consequence of failure\\\\\n    \\mytt{gamma\\_fatigue} & Float scalar & Not currently used\\\\\n  \\hline \\end{tabular}\n}\n\\end{center} \\end{table}\n\n\\subsection{Stability}\nAs described above, surge and pitch stability are enforced through\nsimilar approaches.  The total force and moment acting on the turbine\nare compared to the restoring forces and moments applied by the mooring\nsystem, buoyancy, or other sources at the maximum allowable point of\ndisplacement.  These constraints are formulated as ratios with the user\nspecifying the maximum allowable limits via the variables in Table\n\\ref{tbl:moorcon}.\n%\n\\begin{table}[htbp] \\begin{center}\n    \\caption{Constraint variables for the mooring system in \\textit{FloatingSE}.}\n    \\label{tbl:moorcon}\n{\\footnotesize\n  \\begin{tabular}{ l l c l } \\hline\n    \\textbf{Variable} & \\textbf{Type} & \\textbf{Units} & \\textbf{Description} \\\\\n    \\mytt{max\\_offset} & Float scalar & $m$& Max surge/sway offset \\\\\n    \\mytt{operational\\_heel} & Float scalar & $deg$& Max heel (pitching) angle in operating conditions \\\\\n    \\mytt{max\\_survival\\_heel} & Float scalar & $deg$& Max heel (pitching) angle in parked conditions \\\\\n  \\hline \\end{tabular}\n}\n\\end{center} \\end{table}\n\n\\section{Objectives}\nDifferent analyses will emphasize different metrics, requiring different\nobjective functions.  
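In the hypothetical sketch convention used earlier in this chapter,\nswitching the figure of merit amounts to registering a different output\nas the objective:\n\\begin{verbatim}\nprob.model.add_objective('total_cost')\n# or, alternatively:\n# prob.model.add_objective('total_mass')\n\\end{verbatim}\n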
Under the default assumption that the user wishes\nto minimize cost or mass while adhering to stability constraints, the objective\nfunction would be total substructure cost (variable name,\n\\texttt{total\\_cost}) or total mass (variable name, \\texttt{total\\_mass}).\n\n\\section{Example}\n\n\\begin{figure}[htb]\n  \\begin{subfigure}[b]{0.29\\linewidth}\n    \\centering \\includegraphics[width=\\linewidth]{figs/spar-cost1.png}\n    \\caption{Spar}\n  \\end{subfigure}\n  \\begin{subfigure}[b]{0.39\\linewidth}\n    \\centering \\includegraphics[width=\\linewidth]{figs/semi-mass2.png}\n    \\caption{Semisubmersible}\n  \\end{subfigure}\\\\\n  \\begin{subfigure}[b]{0.29\\linewidth}\n    \\centering \\includegraphics[width=\\linewidth]{figs/tlp-cost2.png}\n    \\caption{TLP}\n  \\end{subfigure}\n  \\caption{Examples of optimized spar, semisubmersible, and TLP geometries.}\n  \\label{fig:optex}\n\\end{figure}\n\n", "meta": {"hexsha": "dbcca89974967ce04c848d8587555ba6dce62ca3", "size": 17481, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/latex-doc/optimization.tex", "max_stars_repo_name": "mattEhall/FloatingSE", "max_stars_repo_head_hexsha": "f13e0f38a7742ea00a8f446a9ebf505dcf7acd42", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-27T15:09:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-27T15:09:02.000Z", "max_issues_repo_path": "docs/latex-doc/optimization.tex", "max_issues_repo_name": "mattEhall/FloatingSE", "max_issues_repo_head_hexsha": "f13e0f38a7742ea00a8f446a9ebf505dcf7acd42", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-05-17T14:57:05.000Z", "max_issues_repo_issues_event_max_datetime": "2017-05-17T14:57:05.000Z", "max_forks_repo_path": "docs/latex-doc/optimization.tex", "max_forks_repo_name": "mattEhall/FloatingSE", "max_forks_repo_head_hexsha": "f13e0f38a7742ea00a8f446a9ebf505dcf7acd42", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2015-12-26T01:06:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-08T20:19:57.000Z", "avg_line_length": 55.3196202532, "max_line_length": 135, "alphanum_fraction": 0.7269607002, "num_tokens": 5081, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.6992544085240401, "lm_q1q2_score": 0.5667309499034758}}
{"text": "%!TEX root = forallx-ubc.tex\n\\chapter{Soundness and Completeness for QL Trees}\n\\label{ch.QLsoundcomplete}\n\nIn Chapter \\ref{ch.SLsoundcomplete} we proved that our SL tree method works the way it is supposed to: any tree that closed was guaranteed to have a root that was unsatisfiable in SL, and any completed tree that remained open was guaranteed to describe an SL model that satisfies the root. These two theorems together are called \\emph{soundness} and \\emph{completeness}.\n\n\\factoidbox{\n\\define{soundness}: If a tree closes, that guarantees that its root is unsatisfiable. In other words: $$\\metaSetX{}\\vdash{}\\bot\\Rightarrow\\metaSetX{}\\models{}\\bot$$\n\\define{completeness}: If a root is unsatisfiable, that guarantees that the tree will close. In other words: $$\\metaSetX{}\\models\\bot{}\\Rightarrow\\metaSetX{}\\vdash{}\\bot$$\n}\n\nThese definitions were first given on pages \\pageref{definesound} and \\pageref{definecomplete}. They are equally applicable to our extended tree system for QL. The system introduced in Chapter \\ref{ch.QLTrees} is also sound and complete. The rules are designed in a way so as to guarantee that if the tree closes, there is no model for the root (sound), and to guarantee that an open branch describes a model for the root (complete). The proofs for these two theorems are structurally very similar to the proofs given in Chapter \\ref{ch.SLsoundcomplete}.\n\n\\section{Soundness}\n\nIf our tree method is sound, then there is no possible set of QL sentences \\metaSetX{} that are satisfiable, where a tree with root \\metaSetX{} closes. Not every possible tree method is sound. For example, suppose we dropped the requirement, for a negated universal, that one take an instance with a new name.\n\nIn other words, suppose we replaced this rule (introduced on page \\pageref{negunrule})\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\enot\\forall\\script{x}\\metaA{}, checked={\\script{a}}\n\t[\\enot\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=where \\script{a} is \\emph{new}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nwith this alternate one:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\enot\\forall\\script{x}\\metaA{}, checked={\\script{a}}\n\t[\\enot\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=for \\emph{any} \\script{a}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nIf we had this rule, there would be counterexamples to soundness --- trees with satisfiable roots, which nevertheless closed. Here is an example:\n\n\\begin{prooftree}\n{}\n\t[Fa\n\t[\\enot \\forall x Fx, grouped, checked\n\t\t[\\enot Fa, just=2 alt\\. \\enot $\\forall$, close={1, 3}]\n\t]\n\t]\n\\end{prooftree}\n\nThis tree is a counterexample to the soundness of the alternate tree system just described. To prove that our system is sound is to prove that our actual rules do not allow for this kind of tree. To prove this, we'll begin by assuming that \\metaSetX{} is satisfiable, and demonstrate from that assumption that at least one branch of any tree that develops according to our rules will remain open. As in the case of the parallel discussion in Chapter \\ref{ch.SLsoundcomplete}, our proof will be a recursive one. We will demonstrate that, if the tree \\emph{starts} with a set of satisfiable sentences, then, for each legal way the tree may develop, at least one branch will continue to have a set of satisfiable sentences. 
This is effectively to show that such a branch will never close, since branches only close when they contain some sentences \\metaA{} and \\enot\\metaA{}, which are of course never jointly satisfiable.\n\nSuppose, then, we have some satisfiable set of sentences \\metaSetX{} in the root of a tree. If the root is satisfiable, then there is some model that satisfies it. Call this interpretation $\\mathcal{I}$. We prove that, if our tree follows the rules given in Chapter \\ref{ch.QLTrees}, then $\\mathcal{I}$ --- or an extension of it --- doesn't just satisfy the root: it satisfies every sentence in at least one completed branch. We will prove this recursively.\n\n\\subsection{Root}\n\nWe assume that the tree begins with a satisfiable root. Given this assumption, $\\mathcal{I}$ is just our name for one of the interpretations we are assuming must exist. So $\\mathcal{I}$ trivially satisfies everything in the root.\n\nNow we must prove, for each possible way of developing the tree, that if the sentences in the branch we \\emph{begin} with are satisfiable, then the sentences we have after applying the rule are satisfiable too. There are thirteen possible ways a tree can develop, corresponding to the thirteen kinds of non-atomic sentences in QL, each of which has a particular processing rule. The thirteen kinds of sentences are:\n\n\\begin{itemize}\n\\item double negation\n\\item conjunction\n\\item negated conjunction\n\\item disjunction\n\\item negated disjunction\n\\item conditional\n\\item negated conditional\n\\item biconditional\n\\item negated biconditional\n\\item existential\n\\item negated existential\n\\item universal\n\\item negated universal\n\\end{itemize}\n\nFortunately, we've already proven what we need to prove for the first nine items on the list. Our QL tree method uses all the same rules as the SL method did for the sentential connectives; we proved in \\S\\ref{sec.sl.soundnessproof.begin}, for each of those rules, that it has the key property: if some interpretation $\\mathcal{I}$ satisfies what comes above the rule, then the development below it is also satisfiable. (Indeed, we proved there that the very same interpretation, $\\mathcal{I}$, satisfied it.)\n\nSo to extend our soundness proof to our QL system, we need only prove the same thing for the four rules for quantifiers. This is the project of the next four subsections.\n\n\\subsection{Existentials}\n\nSuppose some satisfiable set of QL sentences \\metaSetX{} is developed according to the existential rule:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\exists\\script{x}\\metaA{}, checked={\\script{a}}\n\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=where \\script{a} is \\emph{new}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nWe assume that $\\mathcal{I}$ models \\metaSetX{}, which includes some existential $\\exists\\script{x}\\metaA{}$. We want to prove that there is a model for the expanded branch which comprises both \\metaSetX{} and $\\metaA{}\\substitute{\\script{x}}{\\script{a}}$, i.e., $\\metaSetX{} \\cup \\{\\metaA{}\\substitute{\\script{x}}{\\script{a}}\\}$ (the set containing \\metaSetX{} and the new development sentence too).  Unlike in the parallel proof for the sentential rules, we cannot be sure that $\\mathcal{I}$ itself satisfies our new development, because our new development introduces a new name; we cannot assume that $\\mathcal{I}$ included any assignment for the new name \\script{a}. Nor, if it did, are we sure that the object \\script{a} denotes satisfies $\\metaA{}$. 
But we \\emph{can} be assured that $\\mathcal{I}$ can be expanded into a new, similar interpretation, $\\mathcal{I}\\mbox{*}$, which does include \\script{a}. Moreover, since we know that $\\mathcal{I}$ satisfied the existential $\\exists\\script{x}\\metaA{}$, we know that there was some object in $\\mathcal{I}$'s domain that satisfied \\metaA{}. So it will be possible to construct our new interpretation $\\mathcal{I}\\mbox{*}$ so that it includes the new name, and assigns it to that object. This will ensure that $\\mathcal{I}\\mbox{*}(\\metaA{}\\substitute{\\script{x}}{\\script{a}})=1$. And since $\\mathcal{I}\\mbox{*}$ is just like $\\mathcal{I}$ with respect to everything other than \\script{a} --- and since we are assured that \\script{a} was not in \\metaSetX{} (the rule requires that it be new) --- $\\mathcal{I}\\mbox{*}$ will satisfy \\metaSetX{} in just the same way that $\\mathcal{I}$ did.\n\nThis will be clearer with an example. Suppose $\\exists x Fx$ is part of a satisfiable set of sentences. So assume that $\\mathcal{I}$ satisfies it. The tree resolution rule for existentials requires that one take an instance corresponding to a new name. Suppose this is done thus:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\exists x Fx, checked={a}\n\t[Fa]\n]\n\\end{prooftree}\n\\end{center}\n\nWe cannot simply say that since some model $\\mathcal{I}$ satisfies the existential $\\exists x Fx$, it thereby must also satisfy $Fa$; plenty of interpretations satisfy the former while falsifying the latter. But what we \\emph{can} say, since $a$ is a new name, with no previous commitments specific to it earlier in the tree, is that we can construct a model $\\mathcal{I}\\mbox{*}$ that satisfies every sentence above in the tree (in this case, just the one existential), and that also satisfies the new development (in this case, $Fa$). We do so by assigning the name $a$ to refer to some object that is $F$ in $\\mathcal{I}$. We're sure there is such an object there because it is stipulated that $\\mathcal{I}$ satisfies the existential.\n\nIn general, assuming that $\\mathcal{I}$ satisfies every sentence in a branch before the existential rule is performed, there is guaranteed to be a new interpretation, $\\mathcal{I}\\mbox{*}$, which is an extension of $\\mathcal{I}$ that includes a new name attached to an object in the UD satisfying the existential, which satisfies everything in the branch up to the point after the existential rule is performed. That is to say, like the nine sentential rules considered in Chapter \\ref{ch.SLsoundcomplete}, the existential rule can never take you from a satisfiable branch to an unsatisfiable one.\n\n\\subsection{Universals}\n\nSuppose a branch of a tree uses the rule for universals:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\forall\\script{x}\\metaA{}, subs={\\script{a}}\n\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=for \\emph{any} \\script{a}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nAssume that the set of QL sentences \\metaSetX{} above this development is satisfiable. Then some model $\\mathcal{I}$ satisfies it. The universal rule allows an instance to be developed using any name, so, as before, we cannot guarantee that $\\mathcal{I}$ makes the development true, because $\\mathcal{I}$ may or may not interpret the name \\script{a}; but as before, we can be assured that if it doesn't, we can extend the interpretation to include it. 
So consider a new model $\\mathcal{I}\\mbox{*}$, which includes the name \\script{a}. If \\script{a} wasn't interpreted by $\\mathcal{I}$, then it can be assigned to any element of the UD. Since the rest of $\\mathcal{I}$ is unchanged, and since $\\mathcal{I}(\\forall \\script{x}\\metaA{})=1$, we know that our new extended interpretation will satisfy $\\metaA{}\\substitute{\\script{x}}{\\script{a}}$ too.\n\nThat is to say, once again, if we assume that $\\mathcal{I}$ satisfies every sentence in a branch before the universal rule is performed, there is guaranteed to be a model --- either the very same one, $\\mathcal{I}$, or a modification of it, $\\mathcal{I}\\mbox{*}$, which assigns a new name to some object in the UD --- that satisfies everything in the branch up to the point after the universal rule is performed. In other words, the universal rule can never take you from a satisfiable branch to an unsatisfiable one.\n\nNote that the fact that we can perform this rule multiple times does not interfere with the soundness proof. We have proven that \\emph{each} time you perform it, you are guaranteed not to switch from a satisfiable branch to an unsatisfiable one. So no matter how many times you take an instance of a universal, you won't be able to go from a satisfiable set of sentences to an unsatisfiable one.\n\n\n\\subsection{Negated Existential}\n\nThe reasoning behind soundness for the negated existential rule is exactly parallel to that for the universal rule. We begin by assuming that some negated existential $\\enot\\exists\\script{x}\\metaA{}$ is satisfied by $\\mathcal{I}$. Here is the rule for negated existentials:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\enot\\exists\\script{x}\\metaA{}, subs={\\script{a}}\n\t[\\enot\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=for \\emph{any} \\script{a}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nWe want to prove that the result of this rule is also satisfied, either by $\\mathcal{I}$ itself (if \\script{a} was interpreted in $\\mathcal{I}$), or by an extension of it, $\\mathcal{I}\\mbox{*}$, that preserves the satisfaction of everything above (if \\script{a} was a new name). Since $\\mathcal{I}$ satisfies $\\enot\\exists\\script{x}\\metaA{}$, it makes every substitution instance for \\script{x} of \\metaA{} false. If \\script{a} was interpreted by $\\mathcal{I}$ already, then $\\mathcal{I}(\\metaA{}\\substitute{\\script{x}}{\\script{a}})=0$. If it wasn't, the new model $\\mathcal{I}\\mbox{*}$ will assign the new name to some object in the UD of the original model; since no object in that model satisfied \\metaA{}, \\mbox{$\\mathcal{I}\\mbox{*}(\\metaA{}\\substitute{\\script{x}}{\\script{a}})=0$}. Either way, our interpretation falsifies \\mbox{\\metaA{}\\substitute{\\script{x}}{\\script{a}}}, and so satisfies that sentence's negation, which is the continuation of the branch.\n\nSo this rule too can never take us from a satisfiable set of QL sentences to an unsatisfiable one.\n\n\\subsection{Negated Universal}\n\nNegated universals are similar to existentials. 
Assume that a negated universal is part of a set of sentences satisfied by $\\mathcal{I}$, and that this rule is then applied:\n\n\\begin{center}\n\\begin{prooftree}\n{not line numbering}\n[\\enot\\forall\\script{x}\\metaA{}, checked={\\script{a}}\n\t[\\enot\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=where \\script{a} is \\emph{new}\n\t]\n]\n\\end{prooftree}\n\\end{center}\n\nConstruct a new interpretation $\\mathcal{I}\\mbox{*}$, which differs from $\\mathcal{I}$ only in that it includes an interpretation of the new name \\script{a}, and assigns that name to some object that falsifies \\metaA{}. We know there is at least one such object because we are assuming that $\\mathcal{I}$ satisfies the negated universal. Then our new interpretation $\\mathcal{I}\\mbox{*}$ satisfies the new development of the branch. It also satisfies everything earlier in the branch, just like $\\mathcal{I}$ did, because nothing earlier in the branch included the name \\script{a}.\n\nThat last bit of reasoning relied centrally on the requirement that we're taking a new name. We saw in the introduction to this chapter that if we did not include that requirement, soundness would be violated.\n\n\\subsection{Summarizing the soundness proof}\n\nWe have now shown, for our four quantifier rules, that each of them has the following property: you can never start with a branch that is satisfiable, and use that rule to extend that branch into one that is unsatisfiable. Since we've also shown that the nine sentential rules also have this property, we've effectively shown that there is no possible way to start with a satisfiable set of sentences and develop the branch into one that is not satisfiable. This in turn means that if the branch starts with a satisfiable set of sentences, the branch will never close. But that's just what soundness says: if the root is satisfiable, the tree is guaranteed to remain open. Soundness is proven.\n\n\n\\section{Completeness}\n\nCompleteness says that if a branch of a completed tree remains open, then the root is satisfiable. We prove this by assuming that we have an open completed branch, and using it to construct an interpretation that satisfies every sentence in that branch, which includes the root. The proof for completeness of our QL tree system is structurally just like the one given in Chapter \\ref{ch.SLsoundcomplete}.\n\nGiven a completed open branch, we construct a model $\\mathcal{I}$, based on that branch, as follows: for any predicate $\\script{F}$\\!, if some QL atom $\\script{F}\\script{a}_{1}, \\ldots, \\script{a}_{n}$ is in the branch --- e.g., if $P$ or $Fa$ or $Rab$ is in the branch --- then $\\mathcal{I}$ makes that atom true, by putting \\ntuple{$\\script{a}_{1}$, \\ldots, $\\script{a}_{n}$} in the extension of $\\script{F}$\\!. And if $\\enot \\script{F}\\script{a}_{1}, \\ldots, \\script{a}_{n}$ is in the branch, $\\mathcal{I}$ excludes \\ntuple{$\\script{a}_{1}$, \\ldots, $\\script{a}_{n}$} from the extension of $\\script{F}$\\!, thus making the negation of the atom true. This is of course just the way that we construct interpretations from open branches of completed trees.\n\nNow we will prove, for every sentence in QL, that if it is in the open branch, it is made true by $\\mathcal{I}$. The QL atoms trivially meet this criterion --- $\\mathcal{I}$ was designed precisely to satisfy them. We will prove by induction that every possible QL sentence also meets this criterion. 
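For example, if a completed open branch contains just the atoms $Fa$ and $Rab$ and the negated atom $\\enot Fb$, this recipe yields an interpretation $\\mathcal{I}$ whose UD is $\\{a, b\\}$, whose extension of $F$ includes $a$ but not $b$, and whose extension of $R$ includes \\ntuple{$a$, $b$}; that interpretation satisfies all three sentences at once. 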
In \\S\\ref{sec.completenessproof} we showed, for each propositional connective, that if you construct a more complex SL sentence out of simpler SL sentences that meet this criterion, the more complex one meets it too. That proof carries on unchanged here. So it remains only to show that the same is true of our four quantifier rules.\n\n\\subsection{Existential}\n\nConsider an existential --- a QL sentence of the form $\\exists \\script{x} \\metaA{}$. We need to prove that if it is in the open, completed branch, $\\mathcal{I}$ satisfies it. Since the branch is complete, we know that the existential rule has been performed to resolve this sentence. So the branch includes a substitution instance of \\metaA{} that used a new name. For our present purposes, it doesn't actually matter whether the name was new --- the fact that there is some instance of \\metaA{} in the branch already is enough to prove what we need to prove. Since there is an instance of \\metaA{} in the branch, if it is satisfied by $\\mathcal{I}$, the existential $\\exists \\script{x} \\metaA{}$ must be satisfied by $\\mathcal{I}$ too.\n\nSo, just as we showed for the nine sentential rules, the existential rule has this important property: in a completed tree, any interpretation that satisfies the simpler sentences below the existential development, must also satisfy the existential above it.\n\n\\subsection{Universal}\n\nSuppose a universal sentence $\\forall \\script{x} \\metaA{}$ appears in a completed open branch. Since the branch is complete, that means that, for every name \\script{a} in the branch, \\mbox{\\metaA{}\\substitute{\\script{x}}{\\script{a}}} is also in the branch. By the inductive hypothesis, $\\mathcal{I}$ satisfies each \\metaA{}\\substitute{\\script{x}}{\\script{a}}. Because the UD for $\\mathcal{I}$ contains only objects corresponding to names that occur in the branch, these instances cover every object in the UD, so $\\mathcal{I}$ must also satisfy $\\forall \\script{x} \\metaA{}$.\n\nOnce again, any interpretation that satisfies everything below the universal development, must also satisfy the universal above it.\n\n\\subsection{Negated Existential}\n\nNegated existentials work just like universals. If $\\enot \\exists \\script{x} \\metaA{}$ is in a completed open branch, then for every name \\script{a} in the branch, \n\\mbox{\\enot\\metaA{}\\substitute{\\script{x}}{\\script{a}}} is below it in the branch. And if $\\mathcal{I}$ satisfies each of these negations, it will also satisfy the negated existential.\n\n\\subsection{Negated Universal}\n\nNegated universals work just like existentials. If $\\enot \\forall \\script{x} \\metaA{}$ is in the branch, then some instance of the negation \\enot \\metaA{} is in the branch below. If $\\mathcal{I}$ satisfies some instance of \\enot \\metaA{}, then, given the definition of truth for negation and universals in QL, it will also satisfy $\\enot \\forall \\script{x} \\metaA{}$.\n\n\\subsection{Summarizing the completeness proof}\n\nThe sentence shapes just considered, combined with the nine shapes considered in \\S\\ref{sec.completenessproof}, correspond to all the possible QL sentences. So we have proven that, for any possible QL sentence \\metaA{}, if an interpretation satisfies the simpler sentences below it in the branch, that interpretation also satisfies \\metaA{} itself. 
Since we also have a recipe for constructing an interpretation $\\mathcal{I}$ that is guaranteed to satisfy the atoms, we can prove by induction that it can satisfy everything in the branch, including the root. A completed open branch guarantees a satisfiable root. Completeness is proven.\n\n\\practiceproblems\n\n\\solutions\n\\problempart\n\\label{pr.QLalttrees-sound}\nFollowing are possible modifications to our QL tree system. For each, imagine a system that is like the system laid out in this chapter, except for the indicated change. Would the modified tree system be sound? If so, explain how the proof given in this chapter would extend to a system with this rule; if not, give a tree that is a counterexample to the soundness of the modified system.\n\\begin{earg}\n\\item Change the rule for existentials to this rule:\n\t\\factoidbox{\n\t\\begin{center}\n\t\\begin{prooftree}\n\t{not line numbering}\n\t[\\exists\\script{x}\\metaA{}, checked={\\script{a}}\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=for \\emph{any} \\script{a}\n\t\t]\n\t]\n\t\\end{prooftree}\n\t\\end{center}\n\t}\n\t\n\\item Change the rule for existentials to this rule:\n\t\\factoidbox{\n\t\\begin{center}\n\t\\begin{prooftree}\n\t{not line numbering}\n\t[\\exists\\script{x}\\metaA{}, checked=d\n\t\t[\\metaA{}\\substitute{\\script{x}}{d}, just=(whether or not $d$ is new)\n\t\t]\n\t]\n\t\\end{prooftree}\n\t\\end{center}\n\t}\n\n\\item Change the rule for existentials to this rule:\n\t\\factoidbox{\n\t\\begin{center}\n\t\\begin{prooftree}\n\t{not line numbering}\n\t[\\exists\\script{x}\\metaA{}, checked\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just={for 3 different names, old or new}\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{b}}, grouped\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{c}}, grouped\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t]\n\t\\end{prooftree}\n\t\\end{center}\n\t}\n\n\\item Change the rule for universals to this rule:\n\t\\factoidbox{\n\t\\begin{center}\n\t\\begin{prooftree}\n\t{not line numbering}\n\t[\\forall\\script{x}\\metaA{}, checked\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just={for 3 different names, old or new}\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{b}}, grouped\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{c}}, grouped\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t]\n\t\\end{prooftree}\n\t\\end{center}\n\t}\n\n\\item Change the rule for existentials to this rule:\n\t\\factoidbox{\n\t\\begin{center}\n\t\\begin{prooftree}\n\t{not line numbering}\n\t[\\exists\\script{x}\\metaA{}, checked\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just={for 3 new names}\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{b}}, grouped\n\t\t[ , grouped\n\t\t[\\metaA{}\\substitute{\\script{x}}{\\script{c}}, grouped\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t\t]\n\t]\n\t\\end{prooftree}\n\t\\end{center}\n\t}\n\n\\item Change the rule for universals to this rule:\n\t\\factoidbox{\n            \t\\begin{center}\n            \\begin{prooftree}\n            {not line numbering}\n            [\\forall\\script{x}\\metaA{}, checked={\\script{a}}\n            \t[\\metaA{}\\substitute{\\script{x}}{\\script{a}}, just=where \\script{a} is \\emph{new}\n            \t]\n            ]\n            \\end{prooftree}\n            \\end{center}\n\t}\n\n\\item Change the rule for conjunction to this rule:\n\t\\factoidbox{\n            \t\\begin{center}\n            \\begin{prooftree}\n            {not 
line numbering}\n            \t[\\metaA{} \\eand \\metaB{}, checked\n            \t\t[\\exists \\script{x} \\metaA{}, just=where \\script{x} does not occur in \\metaA{}\n\t\t\t[\\metaB{}, grouped\n            \t\t]\n            \t\t]\n\t\t]\n            \\end{prooftree}\n            \\end{center}\n\t}\n\n\n\\item Change this requirement (given on page \\pageref{branchcompletion.defined})...\n\t\\factoidbox{A branch is \\define{complete} if and only if either (i) it is closed, or (ii) every resolvable sentence in every branch has been resolved, and for every general sentence and every name \\script{a} in the branch, the \\script{a} instance of that general sentence has been taken.}\n\t...to this one:\n\t\\factoidbox{A branch is \\define{complete} if and only if either (i) it is closed, or (ii) every resolvable sentence in every branch has been resolved, and for every general sentence, \\emph{at least one instance of} that general sentence has been taken.}\n\n\\item Change the branch completion requirement to:\n\t\\factoidbox{\\ldots and for every general sentence and every name \\script{a} \\emph{that is above that general sentence in the branch}, the \\script{a} instance of that general sentence has been taken.}\n\n\\item Change the branch completion requirement to:\n\t\\factoidbox{\\ldots and for every general sentence and every name \\script{a} in the branch, the \\script{a} instance of that general sentence has been taken, \\emph{and at least one additional new instance of that general sentence has also been taken}.}\n\t\n\t\\end{earg}\n\t\n\t\n\t\n\t\n\\solutions\n\\problempart\n\\label{pr.QLalttrees-complete}\nFor each of the rule modifications given in Part \\ref{pr.QLalttrees-sound}, would the modified tree system be complete? If so, explain how the proof given in this chapter would extend to a system with this rule; if not, give a tree that is a counterexample to the completeness of the modified system.\n\n", "meta": {"hexsha": "97504d69a2040b80266f5054ec1dabc52edad897", "size": 24746, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex-Files/forallx-ubc-11-QLsoundcomplete.tex", "max_stars_repo_name": "jonathanichikawa/for-all-x", "max_stars_repo_head_hexsha": "b7cc18e497065e45e54af30c615999941941b23d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2019-03-29T14:57:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-05T00:58:11.000Z", "max_issues_repo_path": "Latex-Files/forallx-ubc-11-QLsoundcomplete.tex", "max_issues_repo_name": "jonathanichikawa/for-all-x", "max_issues_repo_head_hexsha": "b7cc18e497065e45e54af30c615999941941b23d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2019-02-18T21:45:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T23:39:59.000Z", "max_forks_repo_path": "Latex-Files/forallx-ubc-11-QLsoundcomplete.tex", "max_forks_repo_name": "jonathanichikawa/for-all-x", "max_forks_repo_head_hexsha": "b7cc18e497065e45e54af30c615999941941b23d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-06-19T20:30:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T16:39:29.000Z", "avg_line_length": 69.7070422535, "max_line_length": 1636, "alphanum_fraction": 0.7423826073, "num_tokens": 6483, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.7956580952177053, "lm_q1q2_score": 0.5666933127843987}}
{"text": "\\newcommand{\\defeq}{\\vcentcolon=}\n\\newcommand{\\tn}[1]{\\textnormal{#1}}\n\n\\section{Elementary Definitions and Properties}\n\\label{sec:c*-algebras-elementary-definitions-properties}\n\nA $C^*$-algebra is a Banach algebra with additional structure. We recall the definition of a Banach algebra below.\n\n\\begin{definition}\nA \\emph{Banach algebra} is a Banach space $A$ with an associative, bilinear multiplication operation $A \\times A \\rightarrow A$, $(a,b) \\mapsto ab$ which is \\emph{submultiplicative} with respect to the norm:\n\\begin{equation}\\label{eq:submultiplicativity}\n\\norm{ab} \\leq \\norm{a} \\norm{b}, \\qquad \\forall \\, a, b \\in A.\n\\end{equation}\nWe say $A$ is \\emph{unital} if there exists a \\emph{unit} $1 \\in A$ satisfying $\\norm{1} = 1$ and\n\\begin{equation}\\label{eq:algebra_unit}\n1a = a1 = a, \\qquad \\forall \\, a \\in A.\n\\end{equation}\nIn a unital Banach algebra we can speak of inverses of elements, but not every element of a Banach algebra is invertible. By an inverse we mean a two-sided inverse unless otherwise specified. We write $A^\\times$ for the set of invertible elements in $A$. \n\\end{definition}\n\n\n\n\n\nWe state without proof some obvious facts about Banach algebras.\n\n\\begin{proposition}\nLet $A$ be a Banach algebra. \n\\begin{enumerate}\n\t\\item For all $a \\in A$, we have $0a = a0 = 0$. \n\\end{enumerate}\nIf $A$ is unital, then the following hold.\n\\begin{enumerate}\\setcounter{enumi}{1}\n\t\\item The unit is unique.\n\t\\item Inverses are unique.\n\t\\item The additive identity is not invertible.\n\t\\item The multiplicative identity is its own inverse.\n\t\\item If $a, b \\in A^\\times$, then $ab \\in A^\\times$ and $(ab)^{-1} = b^{-1}a^{-1}$.\n\t\\item If $a \\in A$ has a left inverse and a right inverse, then these inverses are equal, so $a \\in A^\\times$.\n\\end{enumerate}\n\\end{proposition}\n\n\n\n\n\n\\begin{proposition}\nThe multiplication operation on a Banach algebra is continuous.\n\\end{proposition}\n\n\\begin{proof}\nTake sequences $(a_n)$ and $(b_n)$ in $A$ such that $a_n \\rightarrow a$ and $b_n \\rightarrow b$. Using the triangle inequality and \\eqref{eq:submultiplicativity},\n\\begin{equation}\n\\begin{aligned}\n\\norm{ab - a_n b_n} &\\leq \\norm{ab - ab_n} + \\norm{ab_n - a_nb_n}\\\\\n&\\leq \\norm{a} \\norm{b - b_n} + \\norm{a - a_n} \\norm{b_n}\n\\end{aligned}\n\\end{equation}\nThis manifestly approaches zero, so we conclude that $a_nb_n \\rightarrow ab$.\n\\end{proof}\n\n\n\n\\begin{proposition}\nIf $A$ is a Banach algebra and $a \\in A^\\times$, then\n\\begin{equation}\n\\norm{a}^{-1} \\leq \\norm{a^{-1}}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nBy \\eqref{eq:submultiplicativity}, we have\n\\begin{equation}\n1 = \\norm{1} =\\norm{a^{-1}a} \\leq \\norm{a^{-1}}\\norm{a}.\n\\end{equation}\nThe result follows by dividing by $\\norm{a}$.\n\\end{proof}\n\n\n\n\\begin{proposition}\nInversion $A^\\times \\rightarrow A^\\times$, $a \\mapsto a^{-1}$ is continuous.\n\\end{proposition}\n\n\\begin{proof}\nTake a sequence $(a_n)$ in $A^\\times$ such that $a_n \\rightarrow a \\in A^\\times$. 
We compute\n\\begin{equation}\n\\begin{aligned}\n\\norm{a^{-1} - a_n^{-1}} &= \\norm{a^{-1} (a_n - a)a^{-1}_n}\\\\\n&\\leq \\norm{a^{-1}} \\norm{a_n - a}\\norm{a_n^{-1}}\\\\\n&\\leq \\norm{a^{-1}}\\norm{a_n - a}\\norm{a^{-1}} + \\norm{a^{-1}}\\norm{a_n - a}\\norm{a^{-1} - a_n^{-1}}.\n\\end{aligned}\n\\end{equation}\nMoving the rightmost term to the other side yields\n\\begin{equation}\n\\qty(1 - \\norm{a^{-1}}\\norm{a_n - a})\\norm{a^{-1} - a_n^{-1}} \\leq  \\norm{a_n - a} \\norm{a^{-1}}^2.\n\\end{equation}\nFor large enough $n$, the term in parentheses is positive, and we may divide by it, yielding\n\\begin{equation}\n\\norm{a^{-1} - a_n^{-1}} \\leq \\frac{\\norm{a_n - a}\\norm{a^{-1}}^2}{1 - \\norm{a^{-1}}\\norm{a_n - a}}.\n\\end{equation}\nThe right-hand side manifestly approaches zero, so we conclude $a_n^{-1} \\rightarrow a^{-1}$.\n\\end{proof}\n\n\n\n\n\n\\begin{definition}\nA \\emph{$\\boldsymbol{C^*}$-algebra} is a Banach algebra $A$ with an antilinear \\emph{star operation} $A \\rightarrow A$, $a \\mapsto a^*$ satisfying\n\\begin{enumerate}\n\t\\item[(i)] \\emph{involutivity:} $a^{**} = a$ for all $a \\in A$,\n\t\\item[(ii)] \\emph{contravariance:} $(ab)^* = b^*a^*$ for all $a, b \\in A$,\n\t\\item[(iii)] the \\emph{$\\boldsymbol{C^*}$-property:} $\\norm{a^*a} = \\norm{a}^2$ for all $a \\in A$.\n\\end{enumerate} \nAn element $a \\in A$ is \\emph{self-adjoint} if $a^* = a$.\n\nA subset $B \\subset A$ is a \\emph{$\\boldsymbol{C^*}$-subalgebra} of $A$ if $B$ is a $C^*$-algebra under the restrictions to $B$ of all operations defined on $A$. Equivalently, $B$ must be a topologically closed subset which is closed under all the operations on $A$. Topological closedness is equivalent to completeness. We say $B$ is a \\emph{unital} $C^*$-subalgebra of $A$ if $B$ is a unital $C^*$-algebra and the unit in $B$ is the same as the unit in $A$.\\footnote{The unitization of a unital $C^*$-algebra (discussed in a later section) provides an instance where we have a unital $C^*$-algebra and a $C^*$-subalgebra which has a different unit.}\n\nIf $S$ is a subset of a $C^*$-algebra $A$, then the intersection of all $C^*$-subalgebras of $A$ containing $S$ is a $C^*$-subalgebra, called the \\emph{$\\boldsymbol{C^*}$-subalgebra generated by $\\boldsymbol{S}$}.\n\nIf $A$ and $B$ are $C^*$-algebras, a \\emph{$\\boldsymbol{*}$-homomorphism} from $A$ to $B$ is a map $\\pi:A \\rightarrow B$ respecting the algebraic operations on $A$ and $B$. More precisely, $\\pi$ is a linear map satisfying\n\\begin{equation}\n\\begin{aligned}\n\\pi(ab) &= \\pi(a)\\pi(b)\\\\\n\\pi(a^*) &= \\pi(a)^*\n\\end{aligned}\n\\end{equation}\nfor all $a, b \\in A$. Notice that we do not require $\\pi$ to be continuous; we will show later that this is automatically so. If $A$ and $B$ are unital, we say $\\pi$ is a \\emph{unital $\\boldsymbol{*}$-homomorphism} if $\\pi$ is a $*$-homomorphism and $\\pi(1) = 1$. The terms $*$-isomorphism and $*$-automorphism will be used in the natural sense. \n\\end{definition}\n\n\\begin{proposition}\nIf $A$ and $B$ are $C^*$-algebras and $\\pi:A \\rightarrow B$ is a $*$-isomorphism, then $\\pi^{-1}:B \\rightarrow A$ is a $*$-isomorphism. If $\\pi$ is unital, then so is $\\pi^{-1}$.\n\\end{proposition}\n\n\\begin{proof}\nWe know from linear algebra that $\\pi^{-1}$ is a linear map. Given $a, b \\in B$, let $a', b' \\in A$ such that $\\pi(a') = a$ and $\\pi(b') = b$. 
Then\n\\begin{equation}\n\\begin{aligned}\n\\pi^{-1}(ab) &= \\pi^{-1}(\\pi(a')\\pi(b')) = \\pi^{-1}(\\pi(a'b')) = a'b' = \\pi^{-1}(a)\\pi^{-1}(b)\\\\\n\\pi^{-1}(a^*) &= \\pi^{-1}(\\pi(a')^*) = \\pi^{-1}(\\pi(a'^*)) = a'^* = \\pi^{-1}(a)^*.\n\\end{aligned}\n\\end{equation}\nIf $\\pi$ is unital, then $\\pi(1) = 1$, so $\\pi^{-1}(1) = 1$ as well.\n\\end{proof}\n\n\nWe will study $*$-homomorphisms more in a later section; for now we focus on properties of elements of $C^*$-algebras. First, we record one more notion of subalgebra.\n\n\\begin{definition}\nIf $A$ is a $C^*$-algebra, a subset $B \\subset A$ is a \\emph{Banach subalgebra} of $A$ if it is a Banach algebra under the restrictions of all operations defined on $A$. Equivalently, $B$ must be a topologically closed subset which is closed under all the operations on $A$. Topological closedness is equivalent to completeness. We say $B$ is a \\emph{unital} Banach subalgebra of $A$ if $B$ is a unital Banach algebra and the unit in $B$ is the same as the unit in $A$.  I pray I never have to consider a case where $B$ is a Banach subalgebra of $A$ and has a different unit from $A$.\n\nIf $S$ is a subset of a Banach algebra or $C^*$-algebra $A$, then the intersection of all subalgebras of $A$ containing $S$ is a (Banach or $C^*$) subalgebra, called the \\emph{subalgebra generated by $\\boldsymbol{S}$}.\n\nA (unital) $C^*$-subalgebra is a (unital) Banach subalgebra $B\\subset A$ which is closed under the star operation. \n\\end{definition}\n\n\n\\begin{proposition}\nIf $A$ is a $C^*$-algebra, then $0$ is self-adjoint. If $A$ is unital, then $1$ is self-adjoint as well.\n\\end{proposition}\n\n\\begin{proof}\nFor all $a \\in A$, we have\n\\begin{equation}\n0^* + a = 0^* + a^{**} = (0 + a^*)^* = a^{**} = a.\n\\end{equation}\nHence, $0^* = 0$ by uniqueness of the additive identity. Furthermore,\n\\begin{equation}\n1^*a = 1^*a^{**} = (a^*1)^* = a^{**} = a,\n\\end{equation}\nfrom which it follows that $1$ is self-adjoint by uniqueness of the multiplicative identity.\n\\end{proof}\n\n\\begin{proposition}\nIf $A$ is a $C^*$-algebra, then every $a \\in A$ has a unique expression in the form $a = a_1 + ia_2$, where $a_1$ and $a_2$ are self-adjoint.\n\\end{proposition}\n\n\\begin{proof}\nThis is evident upon setting $a_1 = (a + a^*)/2$ and $a_2 = (a - a^*)/2i$. If $a = a_1' + ia_2'$ for some self-adjoint $a_1', a_2'$, then $a_1 - a_1' = i(a_2' - a_2)$, which can be self-adjoint only if it is zero.\n\\end{proof}\n\n\\begin{proposition}\nLet $A$ be a nontrivial $C^*$-algebra. If there exists $1 \\in A$ satisfying\n\\begin{equation}\n1a = a1 = a, \\qquad \\forall \\, a \\in A, \n\\end{equation}\nthen $\\norm{1} = 1$, i.e.\\ $A$ is unital.\n\\end{proposition}\n\n\\begin{proof}\nSetting $a = 1$ in the $C^*$-property yields\n\\begin{equation}\n\\norm{1} = \\norm{1^*1} = \\norm{1}^2.\n\\end{equation}\nHence, $\\norm{1} = 0$ or $\\norm{1} = 1$. If $\\norm{1} = 0$, then $1 = 0$, so that $A = \\qty{0}$. 
Since $A$ is nontrivial by hypothesis, this cannot be the case, so $\\norm{1} = 1$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:inverse_star_commute}\nLet $A$ be a unital $C^*$-algebra and let $a \\in A^\\times$. Then $a^* \\in A^\\times$ and \n\\begin{equation}\n(a^*)^{-1} = (a^{-1})^*.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nWe compute\n\\begin{equation}\na^*(a^{-1})^* = (a^{-1} a)^* = 1 = (aa^{-1})^* = (a^{-1})^* a^*,\n\\end{equation}\nwhich proves the result.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:star_is_isometry}\nIf $A$ is a $C^*$-algebra and $a \\in A$, then\n\\begin{equation}\n\\norm{a^*} = \\norm{a}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nThe conclusion is trivial if $a = 0$, so suppose $a \\neq 0$. The $C^*$-property and submultiplicativity yield\n\\begin{equation}\n\\norm{a}^2 = \\norm{a^*a} \\leq \\norm{a} \\norm{a^*}.\n\\end{equation}\nDividing by $\\norm{a}$ yields $\\norm{a} \\leq \\norm{a^*}$. Applying this result to $a^*$ yields $\\norm{a^*} \\leq \\norm{a^{**}} = \\norm{a}$.\n\\end{proof}\n\n\n\n\\begin{corollary}\nThe star operation $A \\rightarrow A$, $a \\mapsto a^*$ is continuous.\n\\end{corollary}\n\n\\begin{proof}\nThis is immediate from Proposition \\ref{prop:star_is_isometry}.\n\\end{proof}\n\n\nWe conclude this section with several examples.\n\n\\begin{example}\nThe complex numbers $\\C$ give a fairly trivial unital $C^*$-algebra.\n\\end{example}\n\n\\begin{example}\nThe bounded linear operators $\\blinOps(\\hilbertH)$ on a Hilbert space $\\hilbertH$ are the prototypical example of a $C^*$-algebra. The star operation is given by the adjoint. Note that this is, of course, a unital $C^*$-algebra.\n\\end{example}\n\n\\begin{example}\nIn a similar vein to the previous example, the set $M_{n}(\\C)$ of $n \\times n$ matrices with complex entries is a $C^*$-algebra, where the star operation is given by Hermitian conjugation.\n\\end{example}\n\n\\begin{example}\nLet $X$ be a compact Hausdorff space and let $C(X)$ be the space of continuous functions $X \\rightarrow \\C$. This is a unital $C^*$-algebra with the norm given by the supremum norm and the star operation given by complex conjugation. We note that\n\\begin{equation}\n\\norm{fg} = \\sup_{x \\in X} \\abs{fg} \\leq \\sup_{x \\in X} \\abs{f} \\cdot \\sup_{x \\in X} \\abs{g} = \\norm{f} \\norm{g}\n\\end{equation}\nand \n\\begin{equation}\n\\norm{f^*f} = \\sup_{x \\in X} \\abs{f}^2  = \\qty(\\sup_{x \\in X} \\abs{f})^2 = \\norm{f}^2,\n\\end{equation}\nso that this satisfies the nontrivial properties of a $C^*$-algebra.\n\\end{example}\n\n\n\\begin{example}\nLet $X$ be a locally compact Hausdorff space and let $C_0(X)$ be the space of continuous functions $f: X \\rightarrow \\C$ which \\emph{vanish at infinity}, meaning for every $\\varepsilon > 0$ there exists a compact $K \\subset X$ such that $\\abs{f(x)} < \\varepsilon$ for $x \\notin K$. This space is a $C^*$-algebra with the supremum norm and the star operation given by complex conjugation. If $X$ is not compact, then this is a non-unital $C^*$-algebra.\n\\end{example}\n\n\n\\subsection*{Finite Direct Sums}\n\nLet $A_1,\\ldots, A_n$ be a finite collection of Banach algebras. We define the \\emph{direct sum}\n\\begin{equation}\n\\bigoplus_{i=1}^n A_i \\defeq \\qty{(a_1,\\ldots, a_n): a_i \\in A_i}\n\\end{equation}\nwith addition and multiplication defined componentwise. If $A_1,\\ldots, A_n$ are $C^*$-algebras, define the star operation on the direct sum componentwise as well. 
Finally, set\n\\begin{equation}\n\\norm{(a_1,\\ldots, a_n)} = \\max\\qty(\\norm{a_1},\\ldots, \\norm{a_n}).\n\\end{equation}\n\n\nWe want to show that the direct sum so defined is a Banach algebra. It is easy to check that the norm above is indeed a norm using the properties of the max and the definition of the norms on $A_i$. Submultiplicativity also follows easily from submultiplicativity of the norms on the $A_i$. If $(a_{1,k},\\ldots, a_{n,k})_{k \\in \\N}$ is a Cauchy sequence in the direct sum, then each sequence $(a_{i,k})_{k \\in \\N}$ is Cauchy, hence convergent, in $A_i$ for $i = 1, \\ldots, n$. If $a_{i,k} \\rightarrow a_i$ for each $i$, then $(a_{1,k},\\ldots, a_{n,k}) \\rightarrow (a_1,\\ldots, a_n)$ by definition of the norm on the direct sum. Thus, $\\bigoplus_{i=1}^n A_i$ is complete, and is therefore a Banach algebra. If each $A_i$ is a $C^*$-algebra, it is again easy to check that $\\bigoplus_{i=1}^n A_i$ is a $C^*$-algebra using the properties of the $C^*$-algebras $A_i$.\n\n\nWe may also define the direct sum of a sequence of Banach algebras or $C^*$-algebras $\\qty{A_n}_{n \\in \\N}$. We define\n\\begin{equation}\n\\bigoplus_{n=1}^\\infty A_n \\defeq \\qty{(a_n)_{n \\in \\N}: a_n \\in A_n \\text{ and } \\lim_{n \\rightarrow \\infty} \\norm{a_n} = 0}\n\\end{equation}\nAgain, the algebraic operations are defined componentwise and the norm is defined by \n\\begin{equation}\n\\norm{(a_n)_{n \\in \\N}} = \\max_{n \\in \\N} \\norm{a_n}.\n\\end{equation}\nThe definition of $\\bigoplus_{n = 1}^\\infty A_n$ ensures that the max exists. \n\nOne easily checks that this satisfies all algebraic properties of a Banach or $C^*$-algebra, but completeness is more subtle. If $\\mathbf{a}_k = (a_{n,k})_{n \\in \\N} \\in \\bigoplus_{n=1}^\\infty A_n$ and $(\\mathbf{a}_k)_{k \\in \\N}$ is a Cauchy sequence in the direct sum, then it follows in the same way as before that the sequence $(a_{n,k})_{k \\in \\N}$ is Cauchy in $A_n$, hence convergent with limit $a_{n,k} \\rightarrow a_n \\in A_n$. We must show that $\\lim_{n \\rightarrow \\infty} \\norm{a_n} = 0$. For any $n, k, K \\in \\N$, we have\n\\begin{equation}\n\\norm{a_n} \\leq \\norm{a_n - a_{n,k}} + \\norm{a_{n,k} - a_{n,K}} + \\norm{a_{n,K}} \\leq \\norm{a_n - a_{n,k}} + \\norm{\\mathbf{a}_k - \\mathbf{a}_K} + \\norm{a_{n,K}}\n\\end{equation}\nFix $\\varepsilon > 0$. Since $(\\mathbf{a}_k)_{k \\in \\N}$ is Cauchy, we may choose $K \\in \\N$ such that $k, \\ell \\geq K$ implies $\\norm{\\mathbf{a}_k - \\mathbf{a}_\\ell} < \\varepsilon/3$. We may choose $N \\in \\N$ such that $n \\geq N$ implies $\\norm{a_{n,K}} < \\varepsilon/3$. Finally, for any $n \\geq N$, we may choose $k \\geq K$ such that $\\norm{a_n - a_{n,k}} < \\varepsilon/3$. Thus, for $n \\geq N$, we have $\\norm{a_n} < \\varepsilon$, so $\\lim_{n \\rightarrow \\infty} \\norm{a_n} = 0$.\n\nFinally, we show that $\\mathbf{a}_k \\rightarrow \\mathbf{a} \\defeq (a_n)_{n \\in \\N}$. Fix $\\varepsilon > 0$. Choose $N \\in \\N$ such that $\\norm{a_n} < \\varepsilon/2$ if $n \\geq N$. 
Choose $K \\in \\N$ such that $k,\\ell \\geq K$ and $n < N$ implies\n\\begin{equation}\n\\norm{\\mathbf{a}_k - \\mathbf{a}_\\ell} < \\frac{\\varepsilon}{2} \\quad\\text{and}\\quad \\norm{a_n - a_{n,k}} < \\varepsilon.\n\\end{equation}\nThen for $k \\geq K$, we have\n\\begin{equation}\n\\norm{\\mathbf{a} - \\mathbf{a}_k} < \\max\\qty(\\varepsilon, \\max_{n \\geq N} \\norm{a_n - a_{n,k}}).\n\\end{equation}\nBut for $n \\geq N$ we may choose $\\ell$ large enough such that\n\\begin{equation}\n\\norm{a_n - a_{n,k}} \\leq \\norm{a_n - a_{n,\\ell}} + \\norm{\\mathbf{a}_\\ell - \\mathbf{a}_k} < \\varepsilon.\n\\end{equation}\nThus, $\\norm{\\mathbf{a} - \\mathbf{a}_k} < \\varepsilon$, so $\\mathbf{a}_k \\rightarrow \\mathbf{a}$. This proves that the direct sum is complete.\n\n\n%I believe one can form arbitrary direct sums by completing the vector space direct sum with respect to the max norm.\n\nBoth of these constructions come equipped with natural algebra homomorphisms $\\iota_i: A_i \\rightarrow \\bigoplus_j A_j$ satisfying the following universal property. If $B$ is another Banach or $C^*$-algebra with algebra homomorphisms $f_i:A_i \\rightarrow B$, then there exists a unique algebra homomorphism $f:\\bigoplus_j A_j \\rightarrow B$ such that the diagram\n\\begin{equation}\n\\begin{tikzcd}\n \\bigoplus_j A_j \\arrow[r, \"f\"] & B\\\\\n A_i \\arrow[ur, \"f_i\"'] \\arrow[u, \"\\iota_i\"] &\n\\end{tikzcd}\n\\end{equation}\ncommutes. We define\n\\begin{equation}\nf(a) = \\sum_i f_i(\\pi_i(a)),\n\\end{equation}\nwhere $\\pi_i: \\bigoplus_j A_j \\rightarrow A_i$ denotes the projection onto the $i$-th coordinate (in the infinite case, the sum is interpreted as the limit of its partial sums).\n\n\n", "meta": {"hexsha": "272b53879ccba4fad596fbac533a28589c7c76fe", "size": 17210, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Example/sections/c_-algebras-elementary-definitions-properties.tex", "max_stars_repo_name": "martinpflaum/latex_to_html", "max_stars_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-11-13T15:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T14:08:26.000Z", "max_issues_repo_path": "Example/sections/c_-algebras-elementary-definitions-properties.tex", "max_issues_repo_name": "martinpflaum/latex_to_html", "max_issues_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-11T13:18:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T22:02:11.000Z", "max_forks_repo_path": "Example/sections/c_-algebras-elementary-definitions-properties.tex", "max_forks_repo_name": "martinpflaum/latex_to_html", "max_forks_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-13T15:22:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-13T15:22:47.000Z", "avg_line_length": 49.7398843931, "max_line_length": 863, "alphanum_fraction": 0.6710052295, "num_tokens": 6228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.7956580952177051, "lm_q1q2_score": 0.5666933030643062}}
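To see why this $f$ makes the diagram commute, here is a short verification; it uses only the coordinate projections $\pi_j$ named above, for which $\pi_j \circ \iota_i = 0$ when $j \neq i$ and $\pi_i \circ \iota_i = \mathrm{id}_{A_i}$:
\begin{equation}
f(\iota_i(a)) = \sum_j f_j\qty(\pi_j(\iota_i(a))) = f_i(a) \qquad \text{for } a \in A_i,
\end{equation}
since every term with $j \neq i$ vanishes. Hence $f \circ \iota_i = f_i$, which is exactly the commutativity asserted by the universal property.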
{"text": "\\section{Week 1}\n\\subsection{1/5/2021}\n\\begin{itemize}\n    \\item Vector review!\n    \\item Two dimensional vector: written $\\langle v_1, v_2 \\rangle$ or as a column $\\begin{bmatrix}\n        w_1 \\\\\n        w_2 \\\\\n    \\end{bmatrix}$\n    \\item The zero vector = $\\langle 0, 0 \\rangle$ or $\\textbf{0}$\n    \\item Linear combinations: linear combinations of $v$ and $w$ refer to any sum of multiples of the vectors. $$c\\textbf{v} + d\\textbf{w}$$ is a linear combination of $v$ and $w$\n    \\item In 3-space, a 3 dimensional vector: $\\textbf{v} = \\begin{bmatrix} v_1 \\\\ v_2 \\\\ v_3 \\end{bmatrix}, \\textbf{v} = (v_1, v_2, v_3)$\n    \\item Dot products: $\\Vec{v} \\cdot \\Vec{w} = v_1w_1+v_2w_2+...+v_nw_n$\n    \\item Dot products are commutative.\n    \\item A dot product of zero means the vectors are orthogonal.\n    \\item Contrived example: $v$ is a vector of weights and $w$ is a vector of distances. $v \\cdot w = 0$ means the system is balanced.\n    \\item Contrived example 2: Expanding the previous problem to 3 dimensions, we can add more weights, $v= (4,2,100), w=(-1,2,0), v\\cdot w = 0$\n    \\item Economics example: Five products with prices $p_1, p_2, ..., p_5$ with quantities $q_1, q_2, ... q_5$. A dot product of zero would mean it breaks even.\n    \\item Technically, $\\textbf{0}$ is orthogonal to all vectors.\n    \\item $\\begin{bmatrix} 4 & 1 \\\\ 0 & 2 \\\\ 4 & 3\\end{bmatrix} \\begin{bmatrix} 3 \\\\ -2 \\end{bmatrix} = \\begin{bmatrix} 10 \\\\ -4 \\\\ 6\\end{bmatrix}$\n    \\item Linear combinations can also be seen as dot products.\n    \\item The length or norm of a vector $||v|| = \\sqrt{v_1^2 + v_2^2 + ... + v_n^2} = \\sqrt{\\Vec{v} \\cdot \\Vec{v}}$\n    \\item Unit vector check: $\\Vec{v}$ is a unit vector iff $\\Vec{v} \\cdot \\Vec{v} = 1$\n    \\item Normalizing: $\\Vec{u} = \\frac{\\Vec{v}}{||\\vec{v}||}$ is a unit vector in the direction of $\\Vec{v}$\n    \\item If we know the angle $\\theta$ a vector makes with the x-axis, the unit vector is $\\langle \\cos\\theta, \\sin\\theta \\rangle$\n    \\item Given unit vectors $\\vec{u_1} = \\langle \\cos\\alpha, \\sin\\alpha \\rangle$ and $\\vec{u_2} = \\langle \\cos\\beta, \\sin\\beta \\rangle$, the angle between them satisfies $\\vec{u_1}\\cdot \\vec{u_2}=\\cos(\\beta - \\alpha)$\n    \\item Schwarz Inequality: $|v\\cdot w| \\le ||v||\\,||w||$\n\\end{itemize}\n\n\\subsection{1/6/2021}\n\\begin{itemize}\n    \\item A plane with normal vector $n = \\langle a, b, c \\rangle$: $$n\\cdot v=ax+by+cz=d$$\n    \\item Planes are also linear combinations. Ex. $x-y-3z=0$ is a combination of vectors.\n    \\item $x=y+3z$, set $y=1, z=0$, so $\\langle 1, 1, 0\\rangle$ is a solution. Then, set $y=0, z=1$, so we get $\\langle 3, 0, 1 \\rangle$. Our combination is $y\\begin{bmatrix}1\\\\1\\\\0\\end{bmatrix}+z\\begin{bmatrix}3\\\\0\\\\1\\end{bmatrix}$\n    \\item In case of a constant, ex. $x-y-3z=2$.\n    \\item Start as if $d=0$, in this case giving us the same $y\\begin{bmatrix}1\\\\1\\\\0\\end{bmatrix}+z\\begin{bmatrix}3\\\\0\\\\1\\end{bmatrix}$, then simply add $\\begin{bmatrix}2\\\\0\\\\0\\end{bmatrix}$ for $y\\begin{bmatrix}1\\\\1\\\\0\\end{bmatrix}+z\\begin{bmatrix}3\\\\0\\\\1\\end{bmatrix}+\\begin{bmatrix}2\\\\0\\\\0\\end{bmatrix}$\n    \\item Matrix notation. Ex. 
$a_{47}$, the entry in row 4, column 7\n    \\item Identity matrix, denoted by $I$, is a square matrix with 1's along the main diagonal (where $i=j$)\n    \\item $AI = A = IA$ if the dimensions are compatible.\n    \\item $$\\begin{bmatrix}2&-1\\\\3&5\\end{bmatrix}\\begin{bmatrix}1&0&-1\\\\3&1&3\\end{bmatrix}=\\begin{bmatrix}-1&-1&-5\\\\ 18&5&12\\end{bmatrix}$$\n    \\item An entry $A_{ij}$ = (row $i$) $\\cdot$ (column $j$)\n    \\item $\\begin{bmatrix}0&1\\\\1&0\\end{bmatrix}\\begin{bmatrix}4\\\\5\\end{bmatrix}=\\begin{bmatrix}5\\\\4\\end{bmatrix}$\n    \\item Linear equations:\n    $$x+2y+3z=6$$\n    $$2x+5y+2z=4$$\n    $$6x-3y+z=2$$\n    \\item Coefficient matrix $A=\\begin{bmatrix}1&2&3\\\\2&5&2\\\\6&-3&1\\end{bmatrix}$\n    \\item $$\\begin{bmatrix}1&2&3\\\\2&5&2\\\\6&-3&1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\z\\end{bmatrix}=\\begin{bmatrix}6\\\\4\\\\2\\end{bmatrix}$$\n    \\item $A\\vec{x}=\\vec{b}$\n    \\item Different options for solutions:\n    \\begin{itemize}\n        \\item No solutions: e.g. all planes are parallel, or two are parallel and the third intersects them\n        \\item Infinite solutions (the planes share a plane or share a line)\n        \\item One solution.\n    \\end{itemize}\n\\end{itemize}\n\n\\subsection{1/7/2021}\n\\begin{itemize}\n    \\item In Algebra I, we solved 2x2 examples such as $$x-2y=1$$$$3x+2y=11$$\n    \\item We would solve by adding together to get $4x=12$, $x=3$\n    \\item We want elimination to reduce the system to triangular form.\n    \\item Elimination fails when there are no solutions.\n    \\item Elimination also fails when there are infinitely many solutions. $$x-2y=1$$$$3x-6y=3$$\n    \\item Take the equation $$\\begin{bmatrix}2&4&-2\\\\4&9&-3\\\\-2&-3&7\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\z\\end{bmatrix}=\\begin{bmatrix}2\\\\8\\\\10\\end{bmatrix}$$\n    \\item We first eliminate below the first pivot (subtract $2\\times$ row 1 from row 2, add row 1 to row 3) to get $$\\begin{bmatrix}2&4&-2\\\\0&1&1\\\\0&1&5\\end{bmatrix}$$\n    \\item Then subtracting row 2 from row 3 gives the triangular system $$\\begin{bmatrix}2&4&-2\\\\0&1&1\\\\0&0&4\\end{bmatrix}$$\n    \\item We have to multiply matrices. Each element is a dot product of a row of the first matrix with a column of the second, e.g. $b_{23}$ would come from taking the dot product of row 2 of the first matrix and column 3 of the second.\n    \\item Associative $(AB)C=A(BC)$\n    \\item \\textbf{NOT COMMUTATIVE}: $AB\\neq BA$ in general\n    \\item $A\\textbf{x}=\\textbf{b}$\n    \\item $A\\longrightarrow$ Coefficient matrix\n    \\item $x\\longrightarrow$ Column vector of unknowns\n    \\item $b\\longrightarrow$ Column of scalars\n    \\item What if we had $b=\\begin{bmatrix}a\\\\b\\\\c\\end{bmatrix}$ and wanted $\\begin{bmatrix}a\\\\-2a+b\\\\c\\end{bmatrix}$?\n    \\item To form our matrices, we will start with the identity matrix.\n    \\item In the example above, we would have $$\\begin{bmatrix}1&0&0\\\\-2&1&0\\\\0&0&1\\end{bmatrix}\\begin{bmatrix}a\\\\b\\\\c\\end{bmatrix}=\\begin{bmatrix}a\\\\-2a+b\\\\c\\end{bmatrix}$$\n    \\item What would we use to add 5 times the second row to the third? Use $E_{32}$: place the multiplier $5$ in the $(3,2)$ position of the identity to get $\\begin{bmatrix}1&0&0\\\\0&1&0\\\\0&5&1\\end{bmatrix}$\n    \\item To subtract a multiple $k$ of row $j$ from row $i$, use the identity matrix with a $-k$ in the $(i, j)$ position.\n    \\item $$\\begin{bmatrix}1&0&0\\\\0&1&0\\\\-4&0&1\\end{bmatrix}\\begin{bmatrix}1\\\\3\\\\9\\end{bmatrix}=\\begin{bmatrix}1\\\\3\\\\5\\end{bmatrix}$$\n    \\item Row exchange matrix (exchanges rows $i$ and $j$): exchange those rows of the identity matrix. \n    \\item Ex. 
$i=1$, $j=3$, then we get matrix $\\begin{bmatrix}0&0&1\\\\0&1&0\\\\1&0&0\\end{bmatrix}$\n    \\item Augmented Matrix includes the two sides of the equation.\n    \\item Our example from above would be $\\begin{bmatrix}1&0&0&a\\\\-2&1&0&b\\\\0&0&1&c\\end{bmatrix}$\n    \n\\end{itemize}\n", "meta": {"hexsha": "07bafdd78b433137e706655af24f328818fb4828", "size": 6377, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/math/linear/content/week1.tex", "max_stars_repo_name": "WHS-Resources/WHS-Resources", "max_stars_repo_head_hexsha": "255306f11e159ee39d58f479b64935ecac792991", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/math/linear/content/week1.tex", "max_issues_repo_name": "WHS-Resources/WHS-Resources", "max_issues_repo_head_hexsha": "255306f11e159ee39d58f479b64935ecac792991", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-01-14T05:47:20.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-16T21:02:54.000Z", "max_forks_repo_path": "notes/math/linear/content/week1.tex", "max_forks_repo_name": "WHS-Resources/WHS-Resources", "max_forks_repo_head_hexsha": "255306f11e159ee39d58f479b64935ecac792991", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-01-13T20:58:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-13T20:58:04.000Z", "avg_line_length": 72.4659090909, "max_line_length": 307, "alphanum_fraction": 0.6525011761, "num_tokens": 2404, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355187, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5666254001599639}}
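The elimination and row-exchange matrices in the notes above are easy to check numerically. Here is a minimal numpy sketch (our own sanity check of the examples, not part of the original notes):

\begin{verbatim}
import numpy as np

# E21 subtracts 2 * row 1 from row 2 (the -2 sits in the (2,1) slot of I)
E21 = np.array([[ 1, 0, 0],
                [-2, 1, 0],
                [ 0, 0, 1]])
b = np.array([2, 8, 10])
print(E21 @ b)                      # [2 4 10]

# E32 adds 5 times row 2 to row 3: put +5 in the (3,2) slot of I
E32 = np.eye(3)
E32[2, 1] = 5

# Row exchange of rows 1 and 3: permute the rows of the identity
P13 = np.eye(3)[[2, 1, 0]]
print(P13 @ np.array([1, 3, 9]))    # [9. 3. 1.]
\end{verbatim}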
{"text": "% !TEX root = ../master/master.tex\n\n%% Dexter Barrows, 2016\n%% dbarrows.github.io\n\n\\section{Fitting Setup}\n\n\tNow that we have established which methods we wish to evaluate the efficacy of for epidemic forecasting, it is prudent to see how they perform when fitting parameters for a known epidemic model. We have already seen how they perform when fitting parameters for a model with a deterministic evolution process and observation noise, but a more realistic model will have both process and observation noise.\n\n\tTo form such a model, we will take a deterministic SIR ODE model specified in Equation [\\ref{sirode}] and add process noise by allowing $\\beta$ to follow a geometric random walk given by\n\n\t\\begin{equation}\\label{betaautoreg}\n\t\t\\beta_{t+1} = \\exp \\left( \\log(\\beta_{t}) + \\eta (\\log(\\bar{\\beta}) - \\log(\\beta_{t})) + \\epsilon_{t} \\right).\n\t\\end{equation}\n\n\tWe will take $\\epsilon_{t}$ to be normally distributed with variance $\\rho^2$, so that $\\epsilon_{t} \\sim \\mathcal{N}(0,\\rho^2)$. The geometric attraction term constrains the random walk, the force of which is $\\eta \\in [0,1]$. If we take $\\eta = 0$ then the walk will be unconstrained; if we let $\\eta = 1$ then all values of $\\beta_t$ will be independent of the previous value (and consequently all other values in the sequence).\n\n\tWhen $\\eta \\in (0,1)$, we have an autoregressive process of order 1 on the logarithmic scale of the form\n\n\t\\begin{equation}\n\t\tX_{t+1} = c + \\phi X_t + \\epsilon_t ,\n\t\\end{equation}\n\n\twhere $X_t = \\log(\\beta_t)$, $c = \\eta \\log(\\bar{\\beta})$, $\\phi = 1 - \\eta$, and $\\epsilon_t$ is normally distributed noise with mean 0 and standard deviation $\\sigma_E$. This process has a theoretical expected mean of $\\mu = c / (1 - \\phi)$ and variance $\\sigma^2 = \\sigma_E^2 / (1 - \\phi^2)$. If we choose $\\eta = 0.5$, the resulting log-normal distribution has a mean of $6.80 \\times 10^{-4}$ and standard deviation of $4.46 \\times 10^{-4}$.\n\n\tFigure [\\ref{betaplot}] shows the result of simulating the process in Equation [\\ref{betaautoreg}] with $\\eta = 0.5$.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/betaplot.pdf}\n        \\caption{Simulated geometric autoregressive process shown in Equation [\\ref{betaautoreg}]. \\label{betaplot}}\n    \\end{figure}\n\n    Figure [\\ref{betadensity}] shows the density plot corresponding to the values in Figure [\\ref{betaplot}].\n\n    \\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/betadensity.pdf}\n        \\caption{Density plot of values shown in Figure [\\ref{betaplot}]. \\label{betadensity}}\n    \\end{figure}\n\n    We see a density plot similar in shape to the desired density, and the geometric random walk displays dependence on previous values. Further, the mean of this distribution was calculated to be $6.92 \\times 10^{-4}$ and the standard deviation to be $3.99 \\times 10^{-4}$, which are very close to the theoretical values.\n\n    If we take the full stochastic SIR system and evolve it using an Euler stepping scheme with a step size of $h = 1/7$ (one step per day, with time measured in weeks), we obtain the plot in Figure [\\ref{sirmean}].\n\n    \\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/sirmean.pdf}\n        \\caption{Stochastic SIR model simulated using an explicit Euler stepping scheme. 
The solid line is a single random trajectory, the dots show the data points obtained by adding in observation error defined as $\\epsilon_{\\small\\text{obs}} \\sim \\mathcal{N}(0,10)$, and the grey ribbon is the central 95\\% quantile band from 100 random trajectories. \\label{sirmean}}\n    \\end{figure}\n\n\n\\section{Calibrating Samples}\n\n\tIn order to compare HMC and IF2 we need to set up a fair and theoretically justified way to select the number of samples to draw for the HMC iterations and the number of particles to use for IF2. As we wish to compare, among other things, approximate computational cost using runtimes, we need to determine how many sample draws for each method are required to obtain a certain accuracy. Sample draws are typically not comparable in terms of quality when considering multiple methods. For example, vanilla MCMC draws are computationally cheap compared to those from HMC, but HMC produces draws that more efficiently cover the sampling space. Thus we cannot just set the number of HMC draws equal to the number of particles used in IF2 -- we must calibrate both quantities based on a desired target error. We assume that we are working with a problem that has an unknown real solution, so we use the Monte Carlo Standard Error (MCSE) \\cite{Harding2014}.\n\n\tSuppose we are using a Monte-Carlo based method to obtain a mean estimate $\\hat{\\mu}_{n}$ for a quantity $\\mu$, where $n$ is the number of samples. Then the Law of Large Numbers says that $\\hat{\\mu}_{n} \\rightarrow \\mu$ as $n \\rightarrow \\infty$. Further, the Central Limit Theorem says that the error $\\hat{\\mu}_{n} - \\mu$ should shrink with the number of samples such that $\\sqrt{n} (\\hat{\\mu}_{n} - \\mu) \\rightarrow \\mathcal{N}(0,\\sigma^2)$ as $n \\rightarrow \\infty$, where $\\sigma^2$ is the variance of the samples drawn.\n\n\tWe of course do not know $\\mu$, but the above allows us to obtain an estimate $\\hat{\\sigma}_n$ for $\\sigma$ given a number of samples $n$ as\n\n\t\\begin{equation}\n\t\t\\hat{\\sigma}_n = \\sqrt{\\frac{1}{n} \\sum_{i=1}^{n} (X_i - \\hat{\\mu})^2 },\n\t\\end{equation}\n\n\twhich is known as the Monte Carlo Standard Error.\n\n\tWe can modify this formula to account for multiple, potentially correlated, variables by replacing the single variance measure sum by\n\n\t\\begin{equation}\n\t\t\\Theta^* V (\\Theta^*)^T\n\t\\end{equation}\n\n\twhere $\\Theta^*$ is a row vector containing the reciprocals of the means of the parameters of interest, and $V$ is the variance-covariance matrix with respect to the same parameters. This in effect scales the variances with respect to their magnitudes and accounts for covariation between parameters in one fell swoop. We also divide by the number of parameters, yielding\n\n\t\\begin{equation}\n\t\t\\hat{\\sigma}_n = \\sqrt{\\frac{1}{n} \\frac{1}{P} \\Theta^* V (\\Theta^*)^T }\n\t\\end{equation}\n\n\twhere $P$ is the number of parameters.\n\n\tThe goal here is to then pick the number of HMC samples and IF2 particles to yield similar MCSE values. To do this we picked a combination of parameters for RStan that yielded decent results when applied to the stochastic SIR model specified above, calculated the resulting mean MCSE across several model fits, and isolated the expected number of IF2 particles needed to obtain the same value. 
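For concreteness, the multivariate MCSE above can be computed with a few lines of Python; this is our own illustrative sketch (the function name and array layout are assumptions, not the original analysis code):

\begin{verbatim}
import numpy as np

def mcse(samples):
    """Multivariate Monte Carlo Standard Error.

    samples: (n, P) array of n draws of P parameters (P >= 2).
    Implements sqrt( (1/n) (1/P) Theta* V Theta*^T ) from the text.
    """
    n, P = samples.shape
    theta_star = 1.0 / samples.mean(axis=0)  # reciprocals of parameter means
    V = np.cov(samples, rowvar=False)        # P x P variance-covariance matrix
    return float(np.sqrt(theta_star @ V @ theta_star / (n * P)))
\end{verbatim}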
This expected particle count was then used as a starting value to ``titrate'' the IF2 iterations to the same point.\n\n\tThe resulting values were 1000 HMC warm-up iterations with 1000 samples drawn post-warm-up, and 2500 IF2 particles sent through 50 passes, each method giving an approximate MCSE of 0.0065.\n\n\n\\section{IF2 Fitting}\n\n\tNow we will use an implementation of the IF2 algorithm to attempt to fit the stochastic SIR model to the previous data. The goal here is just parameter inference, but since IF2 works by applying a series of particle filters we essentially get the average system state estimates for a very small additional computational cost. Hence, we will also look at the estimated state behaviour in addition to the parameter estimates.\n\n\tThe code used here is a mix of R and C++ implemented using Rcpp. The fitting was undertaken using $2500$ particles with 50 IF2 passes and a cooling schedule given by a reduction in particle spread determined by $0.975^{p}$, where $p$ is the pass number starting with 0. This geometric cooling scheme is standard for use with IF2 \\cite{King2015a}\\cite{King2016}, with the cooling rate chosen to neatly scale the perturbation factor from 1 to 0.02 (almost 0) over 50 passes.\n\n\tThe MLE parameter estimates, taken to be the mean of the particle swarm values after the final pass, are shown in the table in Figure [\\ref{fiterror}], along with the true values and the relative error.\n\n\t\\begin{figure}\n\t\t\\centering\n\t\t{\\small\n\t\t\\begin{tabular}{cccccc}\n\t\t\t& & \\multicolumn{2}{c}{IF2} & \\multicolumn{2}{c}{HMC} \\\\\n\t\t\t\\cmidrule[1.0pt](r){3-4} \\cmidrule[1.0pt](r){5-6}\n\t\t\tName & True\t& Fit & Error & Fit & Error \\\\\n\t\t\t\\cmidrule[1.0pt](r){1-1} \\cmidrule[1.0pt](r){2-2} \\cmidrule[1.0pt](r){3-3} \\cmidrule[1.0pt](r){4-4} \\cmidrule[1.0pt](r){5-5} \\cmidrule[1.0pt](r){6-6}\n\t\t\t$\\mathcal{R}_0$ \t\t& 3.0 \t\t\t\t & 3.27 \t\t\t\t\t& $9.08 \\times 10^{-2}$ \t& $3.12$ \t\t\t\t& $1.05 \\times 10^{-1}$\t\t\\\\\n\t\t\t$\\gamma$ \t\t\t\t& $10^{-1}$  \t\t & $1.04 \\times 10^{-1}$ \t& $3.61 \\times 10^{-2}$ \t& $9.99 \\times 10^{-2}$ & $-7.56 \\times 10^{-4}$\t\\\\\n\t\t\tInitial Infected \t& 5  \t\t\t\t & 7.90 \t\t\t\t\t& $5.80 \\times 10^{-1}$ \t& $6.64$ \t\t\t\t& $3.28 \\times 10^{-1}$ \t\\\\\n\t\t\t$\\sigma$ \t\t\t& 10   \t\t\t\t & 8.84 \t\t\t\t\t& $-1.15 \\times 10^{-1}$ \t& $8.5$ \t\t\t\t& $-1.50 \\times 10^{-1}$\t\\\\\n\t\t\t$\\eta$ \t\t\t\t& $5 \\times 10^{-1}$ & $5.87 \\times 10^{-1}$ \t& $1.73 \\times 10^{-1}$ \t& $4.57 \\times 10^{-1}$ & $-8.27 \\times 10^{-2}$ \t\\\\\n\t\t\t$\\varepsilon_{err}$ & $5 \\times 10^{-1}$ & $1.63 \\times 10^{-1}$ \t& $-6.73 \\times 10^{-1}$ \t& $1.60 \\times10^{-1}$  & $-6.80 \\times 10^{-1}$\n\t\t\\end{tabular}}\n\t    \\caption{Fitting errors. \\label{fiterror}}\n    \\end{figure}\n\n\n\tFrom the last IF2 particle filtering iteration, the mean state values from the particle swarm at each time step are shown with the true underlying state and data in the plot in Figure [\\ref{if2state}].\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/if2state.pdf}\n        \\caption{True system trajectory (solid line), observed data (dots), and IF2 estimated real state (dashed line). 
\\label{if2state}}\n    \\end{figure}\n\n\n\\section{IF2 Convergence}\n\n\tSince IF2 is an iterative algorithm where each pass through the data is expected to push the parameter estimates towards the MLE, we can see the evolution of these estimates as a function of the pass number. We expect near-convergence in the parameter estimates as IF2 nears the maximum number of passes specified. Unconvincing convergence plots may signal suboptimal algorithm parameters. If sensible algorithm parameters have been chosen, we should see the convergence plots display ``flattening'' over time.\n\n\tFigure [\\ref{if2convergence}] shows the evolution of the mean estimates for the six most critical parameters.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/if2convergence.pdf}\n        \\caption{The horizontal axis shows the IF2 pass number. The solid black lines show the evolution of the ML estimates, the solid grey lines show the true value, and the dashed grey lines show the mean parameter estimates from the particle swarm after the final pass. \\label{if2convergence}}\n    \\end{figure}\n\n    Figure [\\ref{if2sdconvergence}] shows the evolution of the standard deviations of the parameter estimates from the particle swarm as a function of the pass number. We should expect to see asymptotic convergence to zero if the filter is converging.\n\n    \\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/if2sdconvergence.pdf}\n        \\caption{The horizontal axis shows the IF2 pass number and the solid black lines show the evolution of the standard deviations of the particle swarm values. \\label{if2sdconvergence}}\n    \\end{figure}\n\n\n\\section{IF2 Densities}\n\n\tOf diagnostic importance are the densities of the parameter estimates given by the final parameter swarm. If the swarm has collapsed, these densities will be extremely narrow, almost resembling a vertical line. A ``healthy'' swarm should display relatively smooth kernels of reasonable breadth.\n\n\tFigure [\\ref{sc1if2kernels}] shows the parameter sample distributions from the final parameter swarm.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/if2kernels.pdf}\n        \\caption{The solid grey lines show the true parameter values and the dashed grey lines show the density medians. \\label{sc1if2kernels}}\n    \\end{figure}\n\n    The IF2 settings were in part chosen so as not to artificially narrow these densities; a more aggressive cooling schedule and/or an increased number of passes would have resulted in much narrower densities, and indeed have the potential to collapse them to point estimates. This is undesirable as it may indicate instability -- the particles may have become ``trapped'' in a region of the sampling space.\n\n\n\\section{HMC Fitting}\n\n\tWe can use the Hamiltonian Monte Carlo algorithm implemented in the RStan package to fit the stochastic SIR model as above. 
This was done with a single HMC chain of 2000 iterations with 1000 of those being warm-up iterations.\n\n\tThe MLE parameter estimates, taken to be the means of the samples in the chain, are shown in the table in Figure [\\ref{fiterror}] along with the true values and relative error.\n\n\n\\section{HMC Densities}\n\n\tFigure [\\ref{sc1hmckernels}] shows the parameter estimation densities from the Stan HMC fitting.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/hmckernels.pdf}\n        \\caption{As before, the solid grey lines show the true parameter values and the dashed grey lines show the density medians. \\label{sc1hmckernels}}\n    \\end{figure}\n\n    The densities shown here constitute a ``true'' MLE density estimate in that they represent HMC's attempt to directly sample from the parameter space according to the likelihood surface, unlike IF2 which is in theory only trying to get a ML point estimate. Hence, these densities are potentially more robust than those produced by the IF2 implementation.\n\n\n\\section{HMC and Bootstrapping}\n\n\tUnlike in some models, our RStan epidemic model does not keep track of state estimates directly, but does keep track of the autoregressive process latent variable draws, which allow us to reconstruct states. This was done to ease implementation as RStan places some restrictions on how interactions between parameters and states can be specified.\n\n\tFigure [\\ref{hmcboot}] shows the results of 100 bootstrap trajectories generated from the RStan HMC samples.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/hmcboot.pdf}\n        \\caption{Result from 100 HMC bootstrap trajectories. The solid line shows the true states, the dots show the data, the dotted line shows the average system behaviour, the dashed line shows the bootstrap mean, and the grey ribbon shows the centre 95th quantile of the bootstrap trajectories. \\label{hmcboot}}\n    \\end{figure}\n\n\n\\section{Multi-trajectory Parameter Estimation}\n\n\tHere we fit the stochastic SIR model to 200 random independent trajectories using each method and examine the density of the point estimates produced.\n\n\tFigure [\\ref{combinedmulti}] shows the results of the mean parameter fits from IF2 and HMC for 200 independent data sets generated using the previously described model.\n\n    \\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/combined-multi.pdf}\n        \\caption{IF2 point estimate densities are shown in black and HMC point estimate densities are shown in grey. The vertical lines show the true parameter values. \\label{combinedmulti}}\n    \\end{figure}\n\n    The densities by and large display similar coverage, with the IF2 densities for $r$ and $\\varepsilon_{proc}$ showing slightly wider coverage than the HMC densities for the same parameters.\n\n    Figure [\\ref{sc1timeplot}] summarizes the running times for each algorithm.\n\n\t\\begin{figure}\n        \\centering\n        \\captionsetup{width=.8\\linewidth}\n        \\includegraphics[width=0.8\\textwidth]{./images/timeplot.pdf}\n        \\caption{Fitting times for IF2 and HMC, in seconds. The centre box in each plot shows the central 50\\% (the interquartile range), with the bold centre line showing the median. 
\\label{sc1timeplot}}\n    \\end{figure}\n\n    The average running times were approximately 45.5 seconds and 257.4 seconds for IF2 and HMC respectively, representing a 5.7x speedup for IF2 over HMC. While IF2 may be able to fit the model to data faster than HMC, we are obtaining less information; this will become important in the next section. Further, the results in Figure [\\ref{sc1timeplot}] show that while the running time for IF2 is relatively fixed, the times for HMC are anything but, showing a wide spread of potential times.\n", "meta": {"hexsha": "454b57dfa0d589d05ba57c4450f3e5e7b1af9e36", "size": 16676, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writing/SC1/sc1-text.tex", "max_stars_repo_name": "dbarrows/epidemic-forecasting", "max_stars_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writing/SC1/sc1-text.tex", "max_issues_repo_name": "dbarrows/epidemic-forecasting", "max_issues_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writing/SC1/sc1-text.tex", "max_forks_repo_name": "dbarrows/epidemic-forecasting", "max_forks_repo_head_hexsha": "a0865fa20c992dc4159e79bb332500e3ff2357ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.1155555556, "max_line_length": 953, "alphanum_fraction": 0.7350083953, "num_tokens": 4359, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.5666253924543193}}
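To make the IF2 procedure used in this section concrete, here is a bare-bones Python outline; the function names (sample_prior, init_state, step, likelihood) and the perturbation scale are our own placeholders for the model-specific pieces, not the original R/C++ implementation:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def if2(data, sample_prior, init_state, step, likelihood,
        n_particles=2500, n_passes=50, sigma0=1.0, cool=0.975):
    """Skeleton IF2: repeated perturbed particle filters with geometric cooling."""
    theta = sample_prior(n_particles)             # parameter swarm
    for p in range(n_passes):
        sd = sigma0 * cool ** p                   # cooling schedule 0.975^p
        x = init_state(theta)                     # per-particle system state
        for y in data:
            theta = theta + rng.normal(0.0, sd, theta.shape)  # perturb parameters
            x = step(x, theta)                    # propagate the process model
            w = likelihood(y, x)                  # weight against the data point
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            theta, x = theta[idx], x[idx]         # resample states and parameters
    return theta.mean(axis=0)                     # point estimate from the final swarm
\end{verbatim}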
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CS637: Database-Backed Websites\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/beacon\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section*{Question 4}\n\nWrite a simple PHP program that calculates the future value of a potential investment based on key parameters: investment amount, yearly interest rate and number of years.\n\n\\subsection*{Solution}\n\nDesired PHP program has been developed on \\href{http://ghorbanzade.com/cs637?p=2&s=1}{Ghorbanzade.com/637} under the name of \\textit{Future Value Calculator} and is readily available for evaluation.\n\nFollowing is the code snippet (slightly different from the one on the website) that calculates the future value of an investment, upon submission of its related parameters.\n\n\\lstset{language=PHP}\n\\lstset{frame=tb}\n\\lstset{numbers=left}\n\\begin{lstlisting}\n// Validate all three inputs before computing\nif (is_numeric($_POST['invest'])) {\n\tif (is_numeric($_POST['interest'])) {\n\t\tif (is_numeric($_POST['year'])) {\n\t\t\t$future_value = $_POST['invest'];\n\t\t\t// Compound the interest once per year\n\t\t\tfor ($i = 1; $i <= $_POST['year']; $i++) {\n\t\t\t\t$future_value += $future_value * $_POST['interest'] * 0.01;\n\t\t\t}\n\t\t\t$future_value = number_format($future_value,2);\n\t\t\techo \"Future value is $$future_value\";\n\t\t}\n\t\telse {\n\t\t\techo \"Year not valid.\";\n\t\t}\n\t}\n\telse {\n\t\techo \"Interest rate not valid.\";\n\t}\n}\nelse {\n\techo \"Investment amount not valid.\";\n}\n\\end{lstlisting}\n\n\\section*{Question 5}\nWrite another simple non-web PHP program that counts and reports how many \\textit{a}'s, \\textit{e}'s, \\textit{i}'s, \\textit{o}'s and \\textit{u}'s show up in a string set up at the start as \\textit{this is a test, only a test}. Use an associative array with keys \\textit{a}, \\textit{e}, \\textit{i}, \\textit{o}, and \\textit{u}. Use \\texttt{strlen} function. You can test if a key exists in an associative array with \\texttt{array\\_key\\_exists} function.\n\n\\subsection*{Solution}\nDesired PHP program has been developed on \\href{http://ghorbanzade.com/cs637?p=2&s=2}{Ghorbanzade.com/637} under the name of \\textit{Vowel Counter} and is readily available for evaluation.\n\nFollowing is a code snippet that analyzes a given string, upon its submission, and gives the number of occurrences of each specified vowel.\n\n\\begin{lstlisting}\n\t// Get the string submitted by user\n\t$string = $_POST['string'];\n\t// Initialize array of vowels\n\t$letter_desired = array(\"a\",\"e\",\"i\",\"o\",\"u\");\n\t$list = array_fill_keys($letter_desired, 0);\n\t// Find occurrence of each vowel\n\tfor ($i=0; $i < strlen($string); $i++) {\n\t\tif (array_key_exists($string[$i], $list)) {\n\t\t\t$list[$string[$i]] += 1;\n\t\t}\n\t}\n\t// Sort array based on value in descending order \n\tarsort($list);\n\t// Prepare message\n\t$result = \"There are \";\n\t$i = 0;\n\tforeach ($list as $char => $number) {\n\t\t$result .= ($number == 0) ? \"no\" : $number;\n\t\t$result .= \" '<i>$char</i>'\";\n\t\t$result .= ($number != 1) ? \"s\": \"\";\n\t\t$i++;\n\t\t// Separator: space before the closing text, \"and\" before the last vowel\n\t\t$result .= ($i == count($list)) ? \" \" : (($i == count($list)-1) ? \" and \" : \", \");\n\t}\n\t$result .= \"in specified string.\";\n\t// Output message\n\techo $result;\n\\end{lstlisting}\n", "meta": {"hexsha": "5c728f6b843d3f13d5601ef970c052ad576d823c", "size": 3162, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "umb-cs637-2015s/src/tex/hw01/hw01q04.tex", "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_issues_repo_path": "umb-cs637-2015s/src/tex/hw01/hw01q04.tex", "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "umb-cs637-2015s/src/tex/hw01/hw01q04.tex", "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "avg_line_length": 39.037037037, "max_line_length": 451, "alphanum_fraction": 0.6612903226, "num_tokens": 890, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5666253879644616}}
{"text": "\\documentclass[10pt, twocolumn]{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage[margin=0.25in]{geometry}\n\\usepackage{pdfpages}\n\n\\newcommand*{\\Perm}[2]{{}^{#1}\\!P_{#2}}%\n\\newcommand*{\\Comb}[2]{{}^{#1}C_{#2}}%\n\\newcommand\\given[1][]{\\:#1\\vert\\:}\n\n\\begin{document}\n\n\\section{STAT 400, Aryn Harmon}\n\n\\subsection{Midterm 1 Material}\n\n\\textbf{Probability} is a real-valued function:\n1. $\\mathbb{P}(S) = 1$; 2. $\\mathbb{P}(A) \\geq 0$; 3. If $A_1, A_2$ are mutually exclusive events, $\\mathbb{P}(A_1 \\cup A_2) = \\mathbb{P}(A_1) + \\mathbb{P}(A_2)$ and so on.\\\\\n\\textbf{Inclusion-Exclusion:} $\\mathbb{P}(A \\cup B) = \\mathbb{P}(A) + \\mathbb{P}(B) - \\mathbb{P}(A \\cap B)$.\\\\\n\\textbf{Conditional Probability:} $\\mathbb{P}(A|B) = \\frac{\\mathbb{P}(A \\cap B)}{\\mathbb{P}(B)}$ given $\\mathbb{P}(B) > 0$. $\\mathbb{P}(A|B) \\neq \\mathbb{P}(A)$ unless $A$ and $B$ are independent. Also see Bayes's Theorem. The probability of a union of mutually exclusive events given $B$ equals the sum of the individual conditional probabilities.\\\\\n\\textbf{Multiplication Rule:} Probability of two events both occurring: $\\mathbb{P}(A \\cap B) = \\mathbb{P}(A) \\cdot \\mathbb{P}(B|A)$ or $\\mathbb{P}(A \\cap B) = \\mathbb{P}(B) \\cdot \\mathbb{P}(A|B)$ (use whichever is easier).\\\\\n\\textbf{Bayes's Rule:} $\\mathbb{P}(A|B) = \\frac{\\mathbb{P}(A) \\cdot \\mathbb{P}(B|A)}{\\mathbb{P}(B)}$. $\\mathbb{P}(A)$ is the \\textit{prior probability} of $A$. $\\mathbb{P}(A|B)$ is the \\textit{posterior probability} of $A$ given that $B$ occurred. Use to invert probabilities.\\\\\n\\textbf{Bayes's Rule 2:} $\\mathbb{P}(A|B) = \\frac{\\mathbb{P}(A) \\cdot \\mathbb{P}(B|A)}{\\mathbb{P}(A) \\cdot \\mathbb{P}(B|A) + \\mathbb{P}(A') \\cdot \\mathbb{P}(B|A')}$.\\\\\n\\textbf{Bayes's Rule Full:} Given some partition of $S$: $A_1 \\cup \\dots \\cup A_k = S$. $\\mathbb{P}(A_i|B) = \\frac{\\mathbb{P}(A_i) \\cdot \\mathbb{P}(B|A_i)}{\\sum_{m=1}^{k} \\mathbb{P}(A_m) \\cdot \\mathbb{P}(B|A_m)}$.\\\\\n\\textbf{Independence:} $\\mathbb{P}(A|B) = \\mathbb{P}(A)$, $\\mathbb{P}(B|A) = \\mathbb{P}(B)$, and $\\mathbb{P}(A \\cap B) = \\mathbb{P}(A) \\cdot \\mathbb{P}(B)$.\\\\\n\\textbf{Pairwise Independence:} All of the following must hold: $\\mathbb{P}(A \\cap B) = \\mathbb{P}(A) \\cdot \\mathbb{P}(B)$, $\\mathbb{P}(A \\cap C) = \\mathbb{P}(A) \\cdot \\mathbb{P}(C)$, and $\\mathbb{P}(B \\cap C) = \\mathbb{P}(B) \\cdot \\mathbb{P}(C)$.\\\\\n\\textbf{Mutual Independence:} $A$, $B$, and $C$ must be pairwise independent and in addition: $\\mathbb{P}(A \\cap B \\cap C) = \\mathbb{P}(A) \\cdot \\mathbb{P}(B) \\cdot \\mathbb{P}(C)$.\\\\\n\\textbf{Number of ways to take $r$ objects from $n$ candidates:}\n\n\\begin{tabular}{lll}\n        & Ordered Sample & Unordered Sample      \\\\\nW. R.   & $n^r$          & ${n+r-1 \\choose r}$ \\\\\nW/o. R. & $\\Perm{n}{r}$  & ${n \\choose r}$   \n\\end{tabular}\n\n$\\Perm{n}{k}=\\frac{n!}{(n-k)!}$ - permutation\\\\\n$\\binom nk=\\Comb{n}{k}=\\frac{n!}{k!(n-k)!}$ - combination\n\n\\textbf{Binomial Theorem:} $(a+b)^n = \\sum_{r=0}^n {n \\choose r} a^r b^{n-r}$. ${n \\choose n-r} = {n \\choose r}$, ${n \\choose 0} = {n \\choose n} = 1$, ${n \\choose 1} = {n \\choose n-1} = n$.\\\\\n\\textit{Some magic formulae:} 1. $\\sum_{r=0}^n {n \\choose r} = 2^n$. 2. $\\sum_{r=0}^n (-1)^r {n \\choose r} = 0$. 3. 
$\\sum_{r=0}^n {n \\choose r} p^r (1-p)^{n-r} = 1$.\\\\\n\\textbf{Pascal's Equation:} ${n \\choose r} = {n-1 \\choose r} + {n-1 \\choose r-1}$.\\\\\n\\textbf{Hypergeometric Distribution:} $f(x) = \\mathbb{P}(X=x) = \\frac{{N_1 \\choose x} \\cdot {N_2 \\choose n-x}}{{N \\choose n}}$, where $x \\leq n$, $x \\leq N_1$, $n-x \\leq N_2$ and $N = N_1 + N_2$. $\\mathbb{E}(X) = n\\frac{N_1}{N}$ and $Var(X) = n \\cdot \\frac{N_1}{N} \\cdot \\frac{N_2}{N} \\cdot \\frac{N-n}{N-1}$. Example: Urn model: $N_1$ red balls, $N_2$ blue balls, draw $n$ balls from $N_1 + N_2$ balls, then look at the probability that there are $x$ red balls in the selected $n$ balls.\\\\\n\\textbf{Mean:} $\\mu = \\mathbb{E}(X) = \\sum_{x \\in S} x f(x)$.\\\\\n\\textbf{Variance:} $\\sigma^2 = Var(X) = \\mathbb{E}(X - \\mu)^2 = \\mathbb{E}(X^2) - \\mu^2 = \\sum_{i=1}^{k} x_i^2 f(x_i) - \\mu^2$. \\textbf{Standard Deviation:} $\\sigma$.\\\\\n\\textbf{r-th Moment:} $\\mathbb{E}(X^r) = \\sum_{x \\in S} x^r f(x)$ is the $r$-th moment about the origin; it exists when $\\mathbb{E}(|X|^r) = \\sum_{x \\in S} |x|^r f(x) < \\infty$.\\\\\n\\textbf{r-th Moment about $b$:} $\\mathbb{E}((X-b)^r) = \\sum_{x \\in S} (x-b)^r f(x)$; is the moment about $b$. Facts: $\\mu$ is the first moment of $X$ about the origin. $\\sigma^2$ is the second moment of $X$ about $\\mu$.\\\\\n\\textit{Example of Variance Properties:} $Var(aX+b) = a^2 \\cdot Var(X)$.\n\\textbf{Bernoulli Distribution:} A random experiment is called a set of Bernoulli trials if each trial has only two outcomes, a constant success probability $p$, and the trials are independent. $f(x) = p$ if $x=1$, $1-p$ if $x=0$, with $0 \\leq p \\leq 1$. $\\mathbb{E}(X) = p$ and $Var(X) = p(1-p)$.\\\\\n\\textbf{Binomial Distribution:} Let $X$ be the number of successes in $n$ independent Bernoulli trials with $p$. Then $X \\sim b(n,p)$. $f(x) = \\mathbb{P}(X=x) = {n \\choose x} p^x (1-p)^{n-x}$. $\\mathbb{E}(X) = np$ and $Var(X) = np(1-p)$.\\\\\nA note: binomial and hypergeometric are similar, but the binomial samples with replacement from one category, while the hypergeometric samples from two categories without replacement.\\\\\n\\textbf{Cumulative Distribution Function:} $F(x) = \\mathbb{P}(X \\leq x), x \\in (-\\infty, \\infty)$. For integer-valued discrete random variables: $f(x) = \\mathbb{P}(X=x) = \\mathbb{P}(X \\leq x) - \\mathbb{P}(X \\leq x - 1) = F(x) - F(x-1)$.\\\\\n\\textbf{Geometric Distribution:} $f(x) = p(1-p)^{x-1}, x = 1,2,3,\\dots$. $X$ represents the draw in which the first success is drawn. $f(x)$ is the probability that the first success occurs on the $x$-th draw.\\\\\n\\textbf{Negative Binomial Distribution:} $f(x) = {x-1 \\choose r-1} p^r (1-p)^{x-r}$ and $x = r,r+1,r+2,\\dots$. $X$ is the number of trials it takes to get $r$ successes. $f(x)$ is the probability that the $r$-th success occurs on the $x$-th trial.\n\n\\subsection{Midterm 2 Material}\n\\textbf{Moment Generating Function:} $M(t) = \\mathbb{E}(e^{tX}) = \\sum_{x \\in S}{e^{tx}f(x)}$ where $f(x)$ is the p.d.f. of the distribution and the sum is finite for $t$ in a neighbourhood $(-h,h)$ of $0$. \\textit{Theorem:} $\\mathbb{E}(X^r) = M^{(r)}(0)$, so $\\mu = \\mathbb{E}(X) = M'(0)$ and $\\sigma^2 = \\mathbb{E}(X^2) - [\\mathbb{E}(X)]^2 = M''(0) - [M'(0)]^2$. To calculate the m.g.f. and p.d.f. of some random variable given only its moments, use the Taylor series expansion centered at zero. 
$M(t) = M(0) + M'(0)\\left(\\frac{t}{1!}\\right) + M''(0)\\left(\\frac{t^2}{2!}\\right) + \\dots = 1 + \\mathbb{E}(X)\\left(\\frac{t}{1!}\\right) + \\mathbb{E}(X^2)\\left(\\frac{t^2}{2!}\\right) + \\dots$.\\\\\n\\textbf{Poisson distribution:} \\textit{Definition:} Poisson process counts the number of events occurring in a fixed time/space given a rate $\\lambda$. Let $X_t$ be the number of events which occur in an interval of length $t$. $\\mathbb{P}(X_t = x) = \\frac{(\\lambda t)^x}{x!} e^{-\\lambda t}$. $\\mu = \\lambda t, \\sigma^2 = \\lambda t$ (both equal $\\lambda$ over a unit interval).\\\\\n\\textbf{Poisson Approximation} of Binomial Distribution: If $X \\sim b(n,p)$ and $n$ is large while $p$ is small, then $X$ can be approximated as $\\hat{X} \\sim poi(\\lambda) \\, s.t. \\, \\lambda = np$.\\\\\n\\textbf{Mean:} $\\mu = \\int_{-\\infty}^{\\infty} x f(x) dx$.\\\\\n\\textbf{Variance:} $\\sigma^2 = \\int_{-\\infty}^{\\infty} (x - \\mu)^2 f(x) dx$.\\\\\n\\textbf{M.G.F.:} $M(t) = \\mathbb{E}(e^{tX}) = \\int_{-\\infty}^{\\infty} e^{tx} f(x) dx, t \\in (-h,h)$.\\\\\n\\textbf{Percentile:} $p = \\int_{-\\infty}^{\\pi_p} f(x) dx = F(\\pi_p)$.\\\\\n\\textbf{Uniform Distribution:} $f(x) = \\frac{1}{b-a}, a \\leq x \\leq b$. If $X \\sim U(a,b)$, then $\\mathbb{E}(X) = \\frac{a+b}{2}$, $\\sigma^2 = \\frac{(b-a)^2}{12}$, and $M(t) = \\mathbb{E}(e^{tX}) = \\frac{e^{tb} - e^{ta}}{t(b-a)}, t \\neq 0$ and $M(0) = 1$.\\\\\n\\textbf{Exponential Distribution:} This describes the waiting time between events in a Poisson process with rate $\\lambda$. Let $\\theta = \\frac{1}{\\lambda}$. Then if $X \\sim Exp(\\theta)$, $f(x) = \\frac{1}{\\theta}e^{-\\frac{x}{\\theta}}, x \\geq 0$ and 0 otherwise. $\\mu = \\theta$, $\\sigma^2 = \\theta^2$, and $M(t) = (1 - \\theta t)^{-1}, t < \\frac{1}{\\theta}$.\\\\\n\\textbf{Memoryless Property} of the Exponential Distribution: What happened in the past does not matter now. Only the present can determine the future. $\\mathbb{P}(X > a+b \\given[\\big] X > a) = \\mathbb{P}(X > b), \\forall a, b \\geq 0$.\\\\\n\\textbf{Gamma Function:} $\\Gamma(x) = \\int_{0}^{\\infty} y^{x-1} e^{-y} dy, \\, x > 0$.\\\\\n\\textbf{Gamma Distribution:} $f(t)$ represents the waiting time until the $\\alpha$-th occurrence of some Poisson process with rate $\\lambda$. $f(t) = \\frac{\\lambda^\\alpha t^{\\alpha-1}}{(\\alpha-1)!} e^{-\\lambda t}, \\, t \\geq 0$. Then let $\\theta = \\lambda^{-1}$, then $f(t) = \\frac{1}{\\Gamma(\\alpha)\\theta^{\\alpha}} t^{\\alpha - 1} e^{\\frac{-t}{\\theta}}, \\, t \\geq 0, \\, \\alpha > 0$. $\\mu = \\alpha \\theta$, $\\sigma^2 = \\alpha \\theta^2$, and $M(t) = (1 - \\theta t)^{-\\alpha}$. If $\\alpha = 1$, $\\mathrm{gamma}(1,\\theta) = \\mathrm{Exp}(\\theta)$.\\\\\n\\textbf{$\\chi^2$ Distribution:} If $X \\sim \\mathrm{gamma}(\\alpha,\\theta)$, and $\\theta = 2$ and $\\alpha = \\frac{r}{2}$, where $r$ is a positive integer, then $X$ is a $\\chi^2$ distribution with degree of freedom $r$. $X \\sim \\chi^2(r)$. $\\mu = r$, $\\sigma^2 = 2r$, and $M(t) = (1-2t)^{\\frac{-r}{2}}, \\, t < \\frac{1}{2}$. $\\mathrm{Exp}(2) = \\chi^2(2)$.\\\\\n\\textbf{Normal Distribution:} Bell curve! $f(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma} e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}, \\, x \\in \\mathbb{R}$. $X \\sim N(\\mu, \\sigma^2)$. $X \\sim N(0, 1)$ is the standard normal distribution. $\\mu = \\mu$, $\\sigma^2 = \\sigma^2$, and $M(t) = e^{\\mu t + \\frac{\\sigma^2 t^2}{2}}$. 
Standardization: $X \\sim N(\\mu, \\sigma^2)$, then $Z = \\frac{X-\\mu}{\\sigma} \\sim N(0, 1)$.\\\\\n\\textbf{Normal Square Distribution:} Let $X \\sim N(\\mu, \\sigma^2)$ and $Z = \\frac{X-\\mu}{\\sigma}$. Then the random variable $V = Z^2 \\sim \\chi^2(1)$.\\\\\n\\textbf{Cauchy Distribution:} (Why do we even need this distribution?) $f(x) = \\frac{1}{\\pi (1 + x^2)}, \\, x \\in \\mathbb{R}$. Symmetric about zero, so median is zero, but $\\mu$ is undefined because the tail of the p.d.f. is too heavy (i.e. the integral defining the mean does not converge). $c.d.f. = F(x) = \\frac{1}{\\pi} \\arctan(x) + \\frac{1}{2}, \\, x \\in \\mathbb{R}$.\\\\\n\\textbf{Joint Probability Density Function:} $\\mathbb{P}((X, Y) \\in A) = \\int\\int_A f(x,y) \\, dx \\, dy$.\\\\\n\\textbf{Marginal Probability Density Function:} $f_x(x) = \\int_{-\\infty}^{\\infty} f(x,y) dy$ and the other way around for $y$.\\\\\n\\textbf{Mathematical Expectation:} $\\mathbb{E}[g(X,Y)] = \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} g(x,y) f(x,y) dx dy$.\\\\\n\\textbf{Independent Random Variables:} Two random variables $X$ and $Y$ are independent iff $f(x,y) = f_x(x) \\cdot f_y(y) \\, \\forall x,y$. This works for both the p.d.f. and the c.d.f.\\\\\n\\textbf{Trinomial Distribution:} This is an extension of the binomial distribution into two dimensions. $f(x_1,x_2) = \\mathbb{P}(X_1 = x_1, X_2 = x_2) = \\frac{n!}{x_1! x_2! (n - x_1 - x_2)!} p_1^{x_1} p_2^{x_2} (1 - p_1 - p_2)^{n - x_1 - x_2}$ for $x_1, x_2 \\geq 0$, $x_1 + x_2 \\leq n$.\\\\\n\\textbf{Covariance:} $Cov(X,Y) = \\mathbb{E}[(X - \\mu_x)(Y - \\mu_y)], \\mu_x = \\mathbb{E}(X), \\mu_y = \\mathbb{E}(Y)$. Useful identity: $Cov(X,Y) = \\mathbb{E}(XY) - \\mathbb{E}(X)\\mathbb{E}(Y)$.\\\\\n\\textbf{Properties of Covariance:} Zero-angle: $Cov(X,X) = Var(X)$. Symmetry: $Cov(X,Y) = Cov(Y,X)$. $Cov(aX + b, Y) = a \\cdot Cov(X,Y)$. $Cov(X+Y, W) = Cov(X,W) + Cov(Y,W)$. $Cov(aX+bY, cX+dY) = (ac) \\cdot Var(X)+(bd) \\cdot Var(Y)+(ad+bc) \\cdot Cov(X, Y)$. $Var(aX + bY) = a^2 \\cdot Var(X) + 2ab \\cdot Cov(X, Y) + b^2 \\cdot Var(Y)$.\\\\\n\\textbf{Correlation Coefficient:} $\\rho_{XY} = \\frac{\\sigma_{XY}}{\\sigma_X \\cdot \\sigma_Y} = \\frac{Cov(X,Y)}{\\sqrt{Var(X)} \\cdot \\sqrt{Var(Y)}}$. Uncorrelated: $\\rho_{XY} = 0 \\Leftrightarrow \\sigma_{XY} = 0$. If $X$ and $Y$ are independent, then $\\rho_{XY} = 0$, but the implication \\textbf{does not} go the other way.\\\\\n\\textbf{I.I.D.:} If the random variables are mutually independent and share the same marginal probability density function, then they are independent and identically distributed (i.i.d.).\\\\\n\\textbf{Combinations:} If $Y$ is a linear combination of independent random variables with constant multiples, then the expectation of $Y$ is simply the sum of the expectations of the random variables. The variance of $Y$ is the sum of the variances of the random variables with their constant multiples squared. The m.g.f. of a sum of independent random variables is the product of their individual m.g.f.s.\\\\\n\\textbf{Summary of Combinations:} Suppose $X$ and $Y$ are independent. 
Then,\n\\begin{itemize}\n        \\item $bern(p) + bern(p) = b(2,p)$\n        \\item $b(n_1,p) + b(n_2,p) = b(n_1 + n_2, p)$\n        \\item $geo(p) + geo(p) = negbin(2,p)$\n        \\item $negbin(r_1,p) + negbin(r_2,p) = negbin(r_1 + r_2, p)$\n        \\item $poi(\\lambda_1) + poi(\\lambda_2) = poi(\\lambda_1 + \\lambda_2)$\n        \\item $exp(\\theta) + exp(\\theta) = gamma(2,\\theta)$\n        \\item $gamma(\\alpha_1,\\theta) + gamma(\\alpha_2,\\theta) = gamma(\\alpha_1 + \\alpha_2, \\theta)$\n        \\item $N(\\mu_1,\\sigma_1^2) + N(\\mu_2,\\sigma_2^2) = N(\\mu_1 + \\mu_2,\\sigma_1^2 + \\sigma_2^2)$\n\\end{itemize}\n\n\\subsection{Final Material}\nEverything taught in the class is on the final.\\\\\n\\textbf{Chebyshev's Inequality:} If $k \\geq 1$: $\\mathbb{P}(|X - \\mu| \\geq k \\sigma) \\leq \\frac{1}{k^2}$. Let $\\epsilon = k \\sigma$: $\\mathbb{P}(|X - \\mu| \\geq \\epsilon) \\leq \\frac{\\sigma^2}{\\epsilon^2}$.\\\\\n\\textbf{Central Limit Theorem (CLT):} $\\frac{\\sqrt{n}(\\overline{X} - \\mu)}{\\sigma} \\rightarrow Z, \\;\\mathrm{as}\\; n \\rightarrow \\infty$.\\\\\n\\textbf{Normal Approximation to Binomial:} If $np \\geq 5$ and $n(1-p) \\geq 5$, then\\\\\n\\begin{tabular}{lll}\n        & $b(n,p)$ & $N(\\mu,\\sigma^2)$     \\\\\nMean    & $np$          & $\\mu$ \\\\\nVariance& $np(1-p)$  & $\\sigma^2$   \n\\end{tabular}\\\\\n\\textbf{Normal Approximation to Poisson:}\\\\\n\\begin{tabular}{lll}\n        & Poisson$(\\lambda)$ & $N(\\mu,\\sigma^2)$     \\\\\nMean    & $\\lambda$          & $\\mu$ \\\\\nVariance& $\\lambda$  & $\\sigma^2$   \n\\end{tabular}\\\\\n\\textbf{Jensen's Inequality:} If $g : \\mathbb{R} \\rightarrow \\mathbb{R}$ is a convex function and $X$ is a random variable, then $\\mathbb{E}[g(X)] \\geq g(\\mathbb{E}(X))$.\\\\\n\\textbf{Maximum Likelihood Estimator:} Maximize the likelihood function subject to the parameter space.\n\\begin{enumerate}\n\t\\item Construct likelihood function, $F(x; \\theta) = \\Pi_{i=1}^{n} f(x_i; \\theta)$.\n\t\\item If applicable, take the log of this to make things easier to differentiate and to change the product into a sum. $l(x; \\theta)$.\n\t\\item Differentiate to get $\\frac{dl(x;\\theta)}{d\\theta}$.\n\t\\item Assume convexity (ha) and find optimal $\\theta$ by solving $\\frac{dl(x;\\theta)}{d\\theta} = 0$.\n\\end{enumerate}\n\\textbf{Sample Mean:} $\\frac{1}{n}\\sum_{i=1}^{n}X_i \\equiv \\overline{X}$.\\\\\n\\textbf{Sample Variance:} Used with the $z$ statistic: $\\frac{1}{n}\\sum_{i=1}^{n}(X_i - \\overline{X})^2$; used with the $t$ statistic: $\\frac{1}{n-1}\\sum_{i=1}^{n}(X_i - \\overline{X})^2$.\\\\\n\\textbf{Method of Moments Estimator:} Construct a system of equations using moments and sample data. For example, equate theoretical mean with sample mean, and theoretical variance with sample variance, then solve; a worked example follows below. 
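A worked example (ours, using the gamma distribution from the previous subsection): for a sample $X_1,\ldots,X_n$ from $\mathrm{gamma}(\alpha,\theta)$, equate
$$\overline{X} = \alpha\theta \quad\text{and}\quad \hat{V} = \frac{1}{n}\sum_{i=1}^{n}(X_i - \overline{X})^2 = \alpha\theta^2;$$
solving the system gives $\tilde{\theta} = \hat{V}/\overline{X}$ and $\tilde{\alpha} = \overline{X}^2/\hat{V}$.\\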
\\textbf{Helpful Distribution Values:}\\\\\n\\begin{tabular}{ll}\n$\\alpha$ & $z_{1-\\alpha}$ \\\\\n0.2      & 0.840 \\\\\n0.1      & 1.280 \\\\\n0.05     & 1.645 \\\\\n0.025    & 1.960 \\\\\n\\end{tabular}\\\\\n\\textbf{Confidence Intervals for Means:}\\\\\n\\tiny\n\\begin{tabular}{rcc}\n                & Known Variance & Unknown Variance     \\\\\nTwo-Sided       & $\\left[ \\overline{X} - z_{\\frac{\\alpha}{2}} \\frac{\\sigma}{\\sqrt{n}}, \\overline{X} + z_{\\frac{\\alpha}{2}} \\frac{\\sigma}{\\sqrt{n}} \\right]$ & $\\left[ \\overline{X} - t_{\\frac{\\alpha}{2}}(n-1) \\frac{S}{\\sqrt{n}}, \\overline{X} + t_{\\frac{\\alpha}{2}}(n-1) \\frac{S}{\\sqrt{n}} \\right]$ \\\\\nLower & $\\left[ \\overline{X} - z_{\\alpha} \\frac{\\sigma}{\\sqrt{n}}, \\infty \\right)$ & you can \\\\\nUpper & $\\left( -\\infty, \\overline{X} + z_{\\alpha} \\frac{\\sigma}{\\sqrt{n}} \\right]$ & figure it out \\\\\n\\end{tabular}\\\\\n\\normalsize\nLet $S^2 = \\frac{1}{n-1}\\sum_{i=1}^{n}(X_i - \\overline{X})^2$ be the sample variance. \nWhen the population distribution is normal, use the ($n-1$) version unless you need to use MLE for which the ($n$) version is needed. For other distributions, use the ($n$) version. If no distribution is mentioned, then assume it to be normal and use the ($n-1$) version.\\\\\n\\textbf{Confidence Interval for Variance:} $\\left[ \\frac{(n-1)S^2}{b}, \\frac{(n-1)S^2}{a} \\right], a=\\chi_{1-\\frac{\\alpha}{2}}^2(n-1), b=\\chi_{\\frac{\\alpha}{2}}^2(n-1)$.\\\\\n\\textbf{Confidence Interval for Standard Deviation:} Use $a$ and $b$ from above, and just square root the expressions for the variance C.I.\\\\\n\\textbf{Confidence Interval for Proportions (for large $n$):} Let $\\hat{p} = \\frac{Y}{n}$. Then $\\hat{p} \\pm z_{\\frac{\\alpha}{2}} \\sqrt{\\frac{\\hat{p}(1-\\hat{p})}{n}}$. Lower bound: $\\left[ \\hat{p} - z_{\\alpha} \\sqrt{\\frac{\\hat{p}(1-\\hat{p})}{n}}, 1 \\right]$. Upper bound: $\\left[ 0, \\hat{p} + z_{\\alpha} \\sqrt{\\frac{\\hat{p}(1-\\hat{p})}{n}} \\right]$.\\\\\n\n\\includepdf[pages={1-},scale=1.0]{gdoc.pdf}\n\n\\end{document}", "meta": {"hexsha": "a7b14e947e4d9024d7ebf21939c0843fc2256e53", "size": 17013, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "eq-sheet.tex", "max_stars_repo_name": "achcello/effective-umbrella", "max_stars_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "eq-sheet.tex", "max_issues_repo_name": "achcello/effective-umbrella", "max_issues_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "eq-sheet.tex", "max_forks_repo_name": "achcello/effective-umbrella", "max_forks_repo_head_hexsha": "22a348a5618625b985408f7524a4be468ac9b84f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 116.5273972603, "max_line_length": 651, "alphanum_fraction": 0.614706401, "num_tokens": 6785, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7718435030872968, "lm_q1q2_score": 0.5666253873274263}}
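To make the two-sided unknown-variance interval concrete, here is a small Python sketch (the sample values are made up for illustration):

\begin{verbatim}
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])   # hypothetical sample
n, alpha = len(x), 0.05
xbar = x.mean()
S = x.std(ddof=1)                               # square root of the (n-1) version of S^2

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # t_{alpha/2}(n-1)
half_width = t_crit * S / np.sqrt(n)
print(f"95% CI for the mean: [{xbar - half_width:.3f}, {xbar + half_width:.3f}]")
\end{verbatim}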
{"text": "%!TEX root = ../thesis.tex\n% ******************************* Thesis Appendix A ****************************\n\\chapter{Experiment Specifications} \n\n\\section*{Bayesian Settings}\n\n\\subsection{Image Recognition}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}[c]{c | c} \n     \\hline\n     variable & value \\\\ [0.5ex] \n     \\hline\n     learning rate &  0.001\\\\ \n     \n     epochs & 100 \\\\\n     \n     batch size & 256 \\\\\n     \n     sample size & 10-25 \\\\\n     \n     loss & cross-entropy \\\\\n     \n     $(\\alpha \\mu^2)_{init}$ of approximate posterior $q_{\\theta}(w|\\mathcal{D})$ & -10 \\\\\n     \n     optimizer & Adam \\cite{kingma2014adam} \\\\\n     \n     $\\lambda$ in $\\ell$-2 regularisation & 0.0005 \\\\\n    \n     $\\beta_i$ & $\\frac{2^{M-i}}{2^M-1}$ \\cite{blundell2015weight} \\\\ [1ex] \n     \\hline\n    \\end{tabular} \n    \\renewcommand{\\arraystretch}{2}\n\\end{table}\n\nThe sample size can vary from 10 to 25, as this range provided the best results; other values can also be explored. For most of our experiments, it is either 10 or 25 unless specified otherwise. \n\n\\subsection{Image Super Resolution}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}[c]{c | c} \n     \\hline\n     variable & value \\\\ [0.5ex] \n     \\hline\n     learning rate &  0.01\\\\ \n     \n     epochs & 200 \\\\\n     \n     batch size & 64 \\\\\n     \n     upscale factor & 3 \\\\\n     \n     loss & Mean Squared Error \\\\\n     \n     seed & 123 \\\\\n     \n     $(\\alpha \\mu^2)_{init}$ of approximate posterior $q_{\\theta}(w|\\mathcal{D})$ & -10 \\\\\n     \n     optimizer & Adam \\cite{kingma2014adam} \\\\\n     \n     $\\lambda$ in $\\ell$-2 regularisation & 0.0005 \\\\\n    \n     $\\beta_i$ & $\\frac{2^{M-i}}{2^M-1}$ \\cite{blundell2015weight} \\\\ [1ex] \n     \\hline\n    \\end{tabular} \n    \\renewcommand{\\arraystretch}{2}\n\\end{table}\n\n\n\\subsection{Generative Adversarial Network}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}[c]{c | c} \n     \\hline\n     variable & value \\\\ [0.5ex] \n     \\hline\n     learning rate &  0.001\\\\ \n     \n     epochs & 100 \\\\\n     \n     batch size & 64 \\\\\n     \n     image size & 64 \\\\\n     \n     latent vector size (nz) & 100 \\\\\n     \n     number of generator filters (ngf) & 64 \\\\\n     \n     number of discriminator filters (ndf) & 64 \\\\\n     \n     upscale factor & 3 \\\\\n     \n     loss & Mean Squared Error \\\\\n     \n     number of channels (nc) & 3 \\\\\n     \n     $(\\alpha \\mu^2)_{init}$ of approximate posterior $q_{\\theta}(w|\\mathcal{D})$ & -10 \\\\\n     \n     optimizer & Adam \\cite{kingma2014adam} \\\\\n     \n     $\\lambda$ in $\\ell$-2 regularisation & 0.0005 \\\\\n    \n     $\\beta_i$ & $\\frac{2^{M-i}}{2^M-1}$ \\cite{blundell2015weight} \\\\ [1ex] \n     \\hline\n    \\end{tabular} \n    \\renewcommand{\\arraystretch}{2}\n\\end{table}\n\n\\section*{Non Bayesian Settings}\n\n\\subsection{Image Recognition}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}[c]{c | c} \n     \\hline\n     variable & value \\\\ [0.5ex] \n     \\hline\n     learning rate &  0.001\\\\ \n     \n     epochs & 100 \\\\\n     \n     batch size & 256 \\\\\n     \n     loss & cross-entropy \\\\\n     \n     initializer & Xavier \\cite{glorot2010understanding} or Normal \\\\\n     \n     optimizer & Adam 
\\cite{kingma2014adam} \\\\ [1ex] \n     \\hline\n    \\end{tabular} \n    \\renewcommand{\\arraystretch}{2}\n\\end{table}\n\nThe weights were initially set with Xavier initialization \\cite{glorot2010understanding}; to stay consistent with the Bayesian networks, which use Normal initialization (mean = 0, variance = 1), the initializer was later changed to Normal initialization.\n\n\\pagebreak\n\n\\section*{Architectures}\n\n\\subsection{LeNet-5}\n \n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}{c c c c c c} \n     \\hline\n     layer type & width & stride & padding & input shape & nonlinearity \\\\ [0.5ex] \n     \\hline\n     convolution ($5\\times5$) & 6 & 1 & 0 & $M\\times1\\times32\\times32$ & Softplus \\\\ \n     \n     max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times6\\times28\\times28$ & \\empty \\\\\n     \n     convolution ($5\\times5$) & 16 & 1 & 0 & $M\\times6\\times14\\times14$ & Softplus \\\\\n     \n     max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times16\\times10\\times10$ & \\empty \\\\\n    \n     fully-connected & 120 & \\empty & \\empty & $M\\times400$ & Softplus \\\\\n     \n     fully-connected & 84 & \\empty & \\empty & $M\\times120$ & Softplus \\\\\n     \n     fully-connected & 10 & \\empty & \\empty & $M\\times84$ & \\empty \\\\ [1ex] \n     \\hline\n    \\end{tabular} \n    \\renewcommand{\\arraystretch}{1}\n    \\caption{LeNet architecture with original configurations as defined in the paper. \\cite{lecun1998gradient}}\n    \\label{tab:LeNet}\n\n\\end{table}\n\n\\subsection{AlexNet}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}{c c c c c c} \n \\hline\n layer type & width & stride & padding & input shape & nonlinearity \\\\ [0.5ex] \n \\hline\n convolution ($11\\times11$) & 64 & 4 & 5 & $M\\times3\\times32\\times32$ & Softplus \\\\ \n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times64\\times32\\times32$ & \\empty \\\\\n \n convolution ($5\\times5$) & 192 & 1 & 2 & $M\\times64\\times15\\times15$ & Softplus \\\\\n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times192\\times15\\times15$ & \\empty \\\\\n \n convolution ($3\\times3$) & 384 & 1 & 1 & $M\\times192\\times7\\times7$ & Softplus \\\\\n \n convolution ($3\\times3$) & 256 & 1 & 1 & $M\\times384\\times7\\times7$ & Softplus \\\\\n \n convolution ($3\\times3$) & 128 & 1 & 1 & $M\\times256\\times7\\times7$ & Softplus \\\\\n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times128\\times7\\times7$ & \\empty \\\\\n \n fully-connected & 128 & \\empty & \\empty & $M\\times128$ & \\empty \\\\ [1ex] \n \\hline\n\\end{tabular}\n\\renewcommand{\\arraystretch}{1}\n\\caption{AlexNet architecture with original configurations as defined in the paper. 
\\cite{krizhevsky2012imagenet}}\n\\label{tab:AlexNet}\n\\end{table}\n\n\n\\subsection{AlexNetHalf}\n\n\\begin{table}[H]\n    \\centering\n    \\renewcommand{\\arraystretch}{2}\n    \\begin{tabular}{c c c c c c} \n \\hline\n layer type & width & stride & padding & input shape & nonlinearity \\\\ [0.5ex] \n \\hline\n convolution ($11\\times11$) & 32 & 4 & 5 & $M\\times3\\times32\\times32$ & Softplus \\\\ \n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times32\\times32\\times32$ & \\empty \\\\\n \n convolution ($5\\times5$) & 96 & 1 & 2 & $M\\times32\\times15\\times15$ & Softplus \\\\\n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times96\\times15\\times15$ & \\empty \\\\\n \n convolution ($3\\times3$) & 192 & 1 & 1 & $M\\times96\\times7\\times7$ & Softplus \\\\\n \n convolution ($3\\times3$) & 128 & 1 & 1 & $M\\times192\\times7\\times7$ & Softplus \\\\\n \n convolution ($3\\times3$) & 64 & 1 & 1 & $M\\times128\\times7\\times7$ & Softplus \\\\\n \n max-pooling ($2\\times2$) & \\empty & 2 & 0 & $M\\times64\\times7\\times7$ & \\empty \\\\\n \n fully-connected & 64 & \\empty & \\empty & $M\\times64$ & \\empty \\\\ [1ex] \n \\hline\n\\end{tabular}\n\\renewcommand{\\arraystretch}{1.5}\n\\label{tab:AlexNetHalfArchitecture}\n\\caption{AlexNetHalf with number of filters halved compared to the original architecture.}\n\\end{table}\n\n", "meta": {"hexsha": "55e2315a5048b4358ad3c1ae2c0d55b67dc877fe", "size": 7117, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendix1/appendix1.tex", "max_stars_repo_name": "kumar-shridhar/Master-Thesis-BayesCNN", "max_stars_repo_head_hexsha": "c1f3c68b1ca7d03ec79dbfc96f7eb3595a2add38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 238, "max_stars_repo_stars_event_min_datetime": "2018-12-18T08:58:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T20:45:01.000Z", "max_issues_repo_path": "Appendix1/appendix1.tex", "max_issues_repo_name": "goodrahstar/Master-Thesis-BayesianCNN", "max_issues_repo_head_hexsha": "3ebe37259df566e0bdda1918c1bd2c3d366b6355", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendix1/appendix1.tex", "max_forks_repo_name": "goodrahstar/Master-Thesis-BayesianCNN", "max_forks_repo_head_hexsha": "3ebe37259df566e0bdda1918c1bd2c3d366b6355", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 73, "max_forks_repo_forks_event_min_datetime": "2018-12-18T12:46:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-28T14:29:51.000Z", "avg_line_length": 28.9308943089, "max_line_length": 279, "alphanum_fraction": 0.5994098637, "num_tokens": 2402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.566625383474604}}
{"text": "\\documentclass[12pt,a4paper]{article}\n\\usepackage[margin=2cm]{geometry}\n\\usepackage{amsmath,amsfonts,amsthm}\n\\usepackage{graphicx}\n\\usepackage{biblatex}\n\\usepackage{hyperref}\n\\usepackage{cleveref}\n\\bibliography{MLnotes}\n\\title{Notes on the Mittag--Leffler function}\n\\author{William McLean}\n\\date{\\today}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{example}[theorem]{Example}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\DeclareMathOperator*{\\res}{res}\n\\newcommand{\\arcosh}{\\operatorname{arcosh}}\n\\newcommand{\\diag}{\\operatorname{diag}}\n\\newcommand{\\argmin}{\\operatorname*{arg\\,min}}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% % %\n\\begin{document}\n\\maketitle\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe two-argument, Mittag--Leffler function is defined by the power series\n\\begin{equation}\\label{eq: E alpha beta def}\nE_{\\alpha,\\beta}(z)=\\sum_{n=0}^\\infty\\frac{z^n}{\\Gamma(\\beta+n\\alpha)}\n\\quad\\text{for $z\\in\\mathbb{C}$, $\\alpha\\ge0$, $\\beta\\in\\mathbb{R}$.}\n\\end{equation}\nThe reciprocal of the Gamma function may be written as an integral along a\nHankel contour,\n\\[\n\\frac{1}{\\Gamma(\\beta+n\\alpha)}=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w\\,dw}{w^{\\beta+n\\alpha}},\n\\]\nwhere we take a branch cut along the negative real axis so that \n$-\\pi<\\arg w<\\pi$. Therefore, if $|z|<|w^\\alpha|$ for all~$w$ on the contour, \nthen\n\\[\nE_{\\alpha,\\beta}(z)=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\\frac{e^w}{w^\\beta}\n    \\sum_{n=0}^\\infty(zw^{-\\alpha})^n\\,dw\n    =\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\\frac{e^w}{w^\\beta}\\,\n    \\frac{dw}{1-zw^{-\\alpha}},\n\\]\nand so\n\\begin{equation}\\label{eq: integral repn}\nE_{\\alpha,\\beta}(z)=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w\\,dw}{w^\\beta-zw^{\\beta-\\alpha}}.\n\\end{equation}\n\nIf $\\alpha>0$, then it suffices to treat the case~$\\beta>0$ because the identity\n\\begin{equation}\\label{eq: beta identity}\nz^rE_{\\alpha,\\beta+r\\alpha}=E_{\\alpha,\\beta}(z)\n    -\\sum_{n=0}^{r-1}\\frac{z^n}{\\Gamma(\\beta+n\\alpha)},\\quad \nr\\in\\{1,2,3,\\ldots\\},\n\\end{equation}\nallows us to express $E_{\\alpha,\\beta}(z)$ in terms \nof~$E_{\\alpha,\\beta+r\\alpha}(z)$.  Also, in the case~$\\alpha=0$,\n\\[\nE_{0,\\beta}(z)=\\frac{1}{\\Gamma(\\beta)(1+z)}.\n\\]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Taylor approximation for $z$ near zero}\nSince \n$\\Gamma(t)\\ge\\Gamma(1{\\cdot}46163\\,21456\\,31155)=0{\\cdot}88560\\,31944\\,10889\n=1/1{\\cdot}12917\\,38854\\,50141$, a crude bound for the remainder term~$R_N(z)$ \nin the Taylor expansion,\n\\[\nE_{\\alpha,\\beta}(z)=\\sum_{n=0}^N\\frac{z^n}{\\Gamma(\\beta+n\\alpha)}+R_N(z)\n\\]\nis given by\n\\[\n|R_N(z)|\\le1{\\cdot}3\\sum_{n=N+1}^\\infty|z|^n=\\frac{1{\\cdot}3\\,|z|^{N+1}}{1-|z|}\n\\quad\\text{for $|z|<1$.}\n\\]\nHence, if\n\\[\nN+1\\ge\\frac{\\log\\bigl((1-|z|)\\epsilon/1{\\cdot}3\\bigr)}{\\log|z|}\n\\quad\\text{and}\\quad|z|<1,\n\\]\nthen $|R_N(z)|\\le\\epsilon$.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Quadrature approximation: negative real argument}\\label{sec: quad neg}\nAssume now that $0<\\alpha<1$. 
Let $x>0$ and put $z=-x$ \nin~\\eqref{eq: integral repn} to obtain\n\\[\nE_{\\alpha,\\beta}(-x)=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    e^wF(w,x)\\,dw\n\\quad\\text{where}\\quad\nF(w,x)=\\frac{w^{\\alpha-\\beta}}{w^\\alpha+x}.\n\\]\nWe note that since $|\\arg(w)|<\\pi$ it follows that $|\\arg(w^\\alpha)|<\\alpha\\pi$\nso $w^\\alpha+x\\ne0$.  Consider the Hankel contour parameterised \nby~\\cite{WeidemanTrefethen2007}\n\\[\nw(u)=\\mu\\bigl(1+\\sin(iu-\\phi)\\bigr)\\quad\\text{for $-\\infty<u<\\infty$,}\n\\]\nwith $\\mu>0$ and $0<\\phi<\\pi/2$. Since\n\\begin{align*}\n\\Re w&=\\mu(1-\\cosh u\\,\\sin\\phi),\\\\\n\\Im w&=\\mu\\sinh u\\,\\cos\\phi,\n\\end{align*}\nwe have\n\\[\n\\biggl(\\frac{\\Re w-1}{\\mu\\sin\\phi}\\biggr)^2\n    -\\biggl(\\frac{\\Im w}{\\mu\\cos\\phi}\\biggr)^2=1,\n\\]\nshowing that the contour is the left branch of an hyperbola with asymptotes\n\\[\n\\Im w=\\pm(\\Re w-1)\\tan\\phi.\n\\]\nThus,\n\\[\nE_{\\alpha,\\beta}(-x)=\\frac{1}{2\\pi i}\\int_{-\\infty}^\\infty \ne^{w(u)}F\\bigl(w(u),x\\bigr)w'(u)\\,du\\approx Q_h(x),\n\\]\nwhere, for a step size~$h>0$, we put\n\\[\nu_n=nh,\\qquad w_n=w(u_n),\\qquad w'_n=w'(u_n),\n\\]\nand define the series approximation\n\\[\nQ_h(x)=\\frac{h}{2\\pi i}\\sum_{n=-\\infty}^\\infty e^{w_n}F(w_n,x)w'_n.\n\\]\nFor a fixed~$v$ with $0<\\phi+v<\\pi/2$, we see that\n$w(u+iv)=\\mu\\bigl[1+\\sin\\bigl(iu-(\\phi+v)\\bigr)\\bigr]$ parameterises the left \nbranch of an hyperbola with asymptotes\n\\[\n\\Im w=\\pm(\\Re w-1)\\tan(\\phi+v).\n\\]\nPutting\n\\[\nM(x,v)=\\int_{-\\infty}^\\infty\\bigl|\\exp\\bigl(w(u+iv)\\bigr)\n    F\\bigl(w(u+iv),x\\bigr)w'(u+iv)\\bigr|\\,du\n    \\quad\\text{for $-\\phi<v<\\frac{\\pi}{2}-\\phi$,}\n\\]\nwe have the error bound~\\cite[Theorem~2.1]{WeidemanTrefethen2007}\n\\begin{equation}\\label{eq: Qh error}\n|Q_h(x)-E_{\\alpha,\\beta}(-x)|\\le\\frac{M(x,r)}{\\exp(2\\pi r/h)-1}\n    +\\frac{M(x,-s)}{\\exp(2\\pi s/h)-1}\n\\end{equation}\nfor $0<s<\\phi$ and $0<r<\\pi/2-\\phi$.  
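\nAnticipating the truncated rule $Q_{h,N}$ defined next, and the balanced parameter choices derived below, a direct implementation might look as follows (an illustrative Python sketch, not the package's code; the value $\\phi_\\star\\approx1.172104$ is taken from the optimisation below):\n\\begin{verbatim}\nimport cmath, math\n\ndef ml_neg(alpha, beta, x, N=20, phi=1.172104):\n    # Hyperbolic-contour quadrature for E_{alpha,beta}(-x), 0 < alpha < 1,\n    # with the balanced parameters h = a(phi)/N and mu = b(phi)*N.\n    a = math.acosh(2*math.pi*phi/((4*phi - math.pi)*math.pi*math.sin(phi)))\n    b = (4*math.pi*phi - math.pi**2)/a\n    h, mu = a/N, b*N\n    total = 0.0\n    for n in range(N + 1):\n        u = n*h\n        w = mu*(1 + cmath.sin(1j*u - phi))\n        F = w**(alpha - beta)/(w**alpha + x)\n        term = (cmath.exp(w)*F*cmath.cos(1j*u - phi)).real\n        total += 0.5*term if n == 0 else term   # n = 0 term is halved\n    return a*b/math.pi*total\n\\end{verbatim}\n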
For the finite sum,\n\\begin{equation}\\label{eq: Q_hN}\nQ_{h,N}(x)=\\frac{h}{2\\pi i}\\sum_{n=-N}^N e^{w_n}F(w_n,x)w'_n,\n\\end{equation}\nwe have an additional truncation error\n\\[\nT_{h,N}=h\\sum_{n=N+1}^\\infty\\bigl(e^{w_n}F(w_n,x)w'_n\n    +e^{w_{-n}}F(w_{-n},x)w'_{-n}\\bigr).\n\\]\nChoosing the largest possible values $s=\\phi$~and $r=\\pi/2-\\phi$, the three\nerror terms are of order $\\exp\\bigl(-(\\pi^2-2\\pi\\phi)/h\\bigr)$,\n$\\exp(\\mu-2\\pi\\phi/h)$ and $\\exp\\bigl(\\mu(1-\\cosh(Nh)\\sin\\phi)\\bigr)$.\nTo balance these quantities, we seek to choose the parameters so that\n\\cite[(4.2)]{WeidemanTrefethen2007}\n\\[\n-\\frac{\\pi^2-2\\pi\\phi}{h}=\\mu-\\frac{2\\pi\\phi}{h}\n    =\\mu\\bigl(1-\\cosh(Nh)\\sin\\phi\\bigr).\n\\]\nThe left-hand equation implies that\n\\[\n\\mu=\\frac{4\\pi\\phi-\\pi^2}{h},\n\\]\nwhich allows us to eliminate $\\mu$ in the right-hand equation, to obtain\n\\[\n\\cosh(Nh)=\\frac{2\\pi\\phi}{(4\\phi-\\pi)\\pi\\sin\\phi}.\n\\]\nWe therefore define\n\\[\na(\\phi)=\\arcosh\\biggl(\\frac{2\\pi\\phi}{(4\\phi-\\pi)\\pi\\sin\\phi}\\biggr)\n\\quad\\text{and}\\quad b(\\phi)=\\frac{4\\pi\\phi-\\pi^2}{a(\\phi)}\n\\]\nfor $\\pi/4<\\phi<\\pi/2$, and set\n\\[\nh=\\frac{a(\\phi)}{N}\\quad\\text{and}\\quad \\mu=b(\\phi)\\,N,\n\\]\nso that the error~$Q_{h,N}(x)-E_{\\alpha,\\beta}(-x)$ is order~$e^{-C(\\phi)N}$ \nfor\n\\begin{equation}\\label{eq: C function}\nC(\\phi)=\\frac{\\pi^2-2\\pi\\phi}{a(\\phi)}.\n\\end{equation}\n\\Cref{fig: C plot} shows a plot of~$C(\\phi)$, and we find using a standard \noptimization package that the maximum value occurs at\n\\[\n\\phi=\\phi_\\star=1.172104\\quad\\text{with}\\quad C(\\phi_\\star)=2.315654.\n\\]\nFurthermore,\n\\[\na(\\phi_\\star)=1.081792\\quad\\text{and}\\quad\nb(\\phi_\\star)=4.492075,\n\\]\nleading to the optimal values \n\\[\nh=h_\\star=\\frac{a(\\phi_\\star)}{N}\\quad\\text{and}\\quad\n\\mu=\\mu_\\star=b(\\phi_\\star)\\,N.\n\\]\nSince $e^{C(\\phi_\\star)}=10.131547$ we expect to gain about an extra decimal\ndigit of accuracy each time we increase $N$ by~$1$.\n\n\\begin{figure}\n\\caption{Plot of $C(\\phi)$ from~\\eqref{eq: C function} showing the \nmaximum at~$\\phi_\\star$.}\\label{fig: C plot}\n\\begin{center}\n\\includegraphics[scale=0.75]{Bplot.pdf}\n\\end{center}\n\\end{figure}\n\nNotice that\n\\[\n\\frac{hw'(u)}{2\\pi i}=\\frac{h\\mu}{2\\pi}\\,\\cos(iu-\\phi)\n    =\\frac{a(\\phi_\\star)b(\\phi_\\star)}{2\\pi}\\,\\cos(iu-\\phi)\n\\]\nso we can re-write the sum~\\eqref{eq: Q_hN} as\n\\[\nQ_{h,N}(x)=\\frac{a(\\phi_\\star)b(\\phi_\\star)}{2\\pi}\\sum_{n=-N}^N\n    e^{w_n}F(w_n,x)\\cos(iu_n-\\phi).\n\\]\nAlso, $u_{-n}=-u_n$, $w(-u)=\\overline{w(u)}$~and \n$F(\\bar w,x)=\\overline{F(w,x)}$, so the terms of the sum occur in complex \nconjugate pairs,\n\\[\ne^{w_{-n}}F(w_{-n},x)\\cos(iu_{-n}-\\phi)\n    =\\overline{e^{w_n}F(w_n,x)\\cos(iu_n-\\phi)}.\n\\]\nThus,\n\\[\nQ_{h,N}(x)=\\frac{a(\\phi_\\star)b(\\phi_\\star)}{\\pi}\\biggl(\n    \\tfrac12e^{w_0}F(w_0,x)\\cos\\phi\n    +\\sum_{n=1}^N\\Re\\bigl(e^{w_n}F(w_n,x)\\cos(iu_n-\\phi)\\bigr)\\biggr),\n\\]\nbut note that $w_n$~and $u_n$ depend on~$N$ through $h$~and $\\mu$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Quadrature approximation: positive real argument}\nWe now consider\n\\[\nE_{\\alpha,\\beta}(x)=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    e^wF(w,-x)\\,dw,\n\\]\nwhere the Hankel contour must pass to the right of the pole 
at~$w=x^{1/\\alpha}$.\nWrite\n\\[\nF(w,-x)=w^{\\alpha-\\beta}\\,\\frac{\\rho(w,x)}{w-x^{1/\\alpha}}\\quad\\text{where}\\quad\n\\rho(w,x)=\\frac{w-x^{1/\\alpha}}{w^\\alpha-x},\n\\]\nand observe that $\\rho(w,x)$ has a removable singularity at~$w=x^{1/\\alpha}$. \nIn fact, using L'H\\^opital's rule,\n\\[\n\\rho(x^{1/\\alpha},x)=\\lim_{w\\to x^{1/\\alpha}}\n    \\frac{w-x^{1/\\alpha}}{w^\\alpha-x}=\\frac{x^{(1/\\alpha)-1}}{\\alpha},\n\\]\nand if we define\n\\[\nH(w;x)=\\frac{w^{\\alpha-\\beta}\\rho(w;x)\n-(x^{1/\\alpha})^{\\alpha-\\beta}\\rho(x^{1/\\alpha};x)}{w-x^{1/\\alpha}}\n    =\\frac{w^{\\alpha-\\beta}}{w^\\alpha-x}\n    -\\frac{\\alpha^{-1}x^{(1-\\beta)/\\alpha}}{w-x^{1/\\alpha}}\n\\]\nthen\n\\[\nF(w;-x)=\\frac{\\alpha^{-1}x^{(1-\\beta)/\\alpha}}{w-x^{1/\\alpha}}+H(w;x). \n\\]\nHence,\n\\begin{align*}\nE_{\\alpha,\\beta}(x)&=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}e^w F(w;-x)\\,dw\\\\\n    &=\\frac{\\alpha^{-1}x^{(1-\\beta)/\\alpha}}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w\\,dw}{w-x^{1/\\alpha}}\n    +\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}e^w H(w;x)\\,dw,\n\\end{align*}\nthat is,\n\\begin{equation}\\label{eq: pos H}\nE_{\\alpha,\\beta}(x)=\\alpha^{-1}x^{(1-\\beta)/\\alpha}\\exp(x^{1/\\alpha})\n    +\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}e^w H(w,x)\\,dw.\n\\end{equation}\nThe integral term in \\eqref{eq: pos H} can be approximated using the same \napproach as in \\Cref{sec: quad neg}, with $H(w,x)$ replacing $F(w,x)$.\n\nTo evaluate $H(w,x)$ for $w$ near~$x^{1/\\alpha}$, put\n\\[\nw=x^{1/\\alpha}(1+\\epsilon)\\quad\\text{where}\\quad \n\\epsilon=\\frac{w-x^{1/\\alpha}}{x^{1/\\alpha}}\n\\]\nand define\n\\[\n\\psi_{1,\\alpha}(\\epsilon)=\\frac{(1+\\epsilon)^\\alpha-1}{\\epsilon}\n    =\\sum_{k=1}^\\infty\\binom{\\alpha}{k}\\epsilon^{k-1}\n\\]\nand\n\\[\n\\psi_{2,\\alpha}(\\epsilon)\n    =\\frac{(1+\\epsilon)^\\alpha-(1+\\alpha\\epsilon)}{\\epsilon^2}\n    =\\sum_{k=2}^\\infty\\binom{\\alpha}{k}\\epsilon^{k-2}.\n\\]\nIn this way,\n\\[\nw^\\alpha-x=x\\epsilon\\psi_{1,\\alpha}(\\epsilon)\n    =x\\epsilon[\\alpha+\\epsilon\\psi_{2,\\alpha}(\\epsilon)],\n\\]\nso\n\\begin{align*}\nH(w;x)&=\\frac{w^{\\alpha-\\beta}(w-x^{1/\\alpha})\n-\\alpha^{-1}x^{(1-\\beta)/\\alpha}(w^\\alpha-x)}{(w^\\alpha-x)(w-x^{1/\\alpha})}\\\\\n&=\\frac{x^{(\\alpha-\\beta)/\\alpha}(1+\\epsilon)^{\\alpha-\\beta}\n(x^{1/\\alpha}\\epsilon)-x^{(1-\\beta)/\\alpha}x\\epsilon \n[1+\\alpha^{-1}\\epsilon\\psi_{2,\\alpha}(\\epsilon)]}%\n{x\\epsilon\\psi_{1,\\alpha}(\\epsilon)(x^{1/\\alpha}\\epsilon)}\\\\\n&=\\frac{x^{1+(1-\\beta)/\\alpha}\\epsilon}{x^{1+(1/\\alpha)}\\epsilon}\\,\n\\frac{[1+\\epsilon\\psi_{1,\\alpha-\\beta}(\\epsilon)]\n-[1+\\alpha^{-1}\\epsilon\\psi_{2,\\alpha}(\\epsilon)]}%\n{\\epsilon\\psi_{1,\\alpha}(\\epsilon)}\\\\\n&=\\frac{\\psi_{1,\\alpha-\\beta}(\\epsilon)-\\alpha^{-1}\\psi_{2,\\alpha}(\\epsilon)}%\n{x^{\\beta/\\alpha}\\psi_{1,\\alpha}(\\epsilon)},\n\\end{align*}\nand thus\n\\[\nH(x^{1/\\alpha};x)=\\frac{\\psi_{1,\\alpha-\\beta}(0)\n-\\alpha^{-1}\\psi_{2,\\alpha}(0)}{x^{\\beta/\\alpha}\\psi_{1,\\alpha}(0)}\n    =\\frac{1+\\alpha-2\\beta}{2\\alpha x^{\\beta/\\alpha}}.\n\\]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Asymptotic expansion: negative real argument}\nSince\n\\[\n\\frac{1}{w^\\beta-zw^{\\beta-\\alpha}}=\\frac{-1}{zw^{\\beta-\\alpha}}\n    \\,\\frac{1}{1-w^\\alpha z^{-1}}\n\\]\nand\n\\[\n\\frac{1}{1-w^\\alpha z^{-1}}=\\sum_{n=0}^{N-1}(w^\\alpha z^{-1})^n\n    +\\frac{(w^\\alpha z^{-1})^N}{1-w^\\alpha z^{-1}},\n\\]\nwe see from~\\eqref{eq: integral repn} 
that\n\\[\nE_{\\alpha,\\beta}(z)=\\sum_{n=0}^{N-1}\\frac{-1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w(w^\\alpha z^{-1})^n}{zw^{\\beta-\\alpha}}\\,dw+R_N(z),\n\\]\nwhere the remainder term is\n\\[\nR_N(z)=\\frac{-1}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w(w^\\alpha z^{-1})^N\\,dw}{zw^{\\beta-\\alpha}(1-w^\\alpha z^{-1})}.\n\\]\nThe $n$th term equals\n\\[\n\\frac{-z^{-1-n}}{2\\pi i}\\int_{-\\infty}^{0^+}e^w w^{(n+1)\\alpha-\\beta}\\,dw,\n\\]\nand if we assume $\\alpha-\\beta>-1$ then by collapsing the contour onto the \nnegative real axis and using the substitutions~$w=re^{\\pm i\\pi}$, we obtain\n\\[\n\\frac{-1}{2\\pi i}\\int_{-\\infty}^{0^+}e^w w^{(n+1)\\alpha-\\beta}\\,dw\n    =\\frac{e^{i\\pi[(n+1)\\alpha-\\beta]}-e^{-i\\pi[(n+1)\\alpha-\\beta]}}{2\\pi i}\n    \\int_0^\\infty e^{-r}r^{(n+1)\\alpha-\\beta}\\,dr\n\\]\nso\n\\[\nE_{\\alpha,\\beta}(z)=R_N(z)+\\frac{1}{\\pi}\\sum_{n=0}^{N-1}\n    \\sin\\pi[(n+1)\\alpha-\\beta]\\,\\Gamma\\bigl((n+1)\\alpha-\\beta+1\\bigr)z^{-n-1}.\n\\]\nAlso,\n\\begin{equation}\\label{eq: RN(z)}\nR_N(z)=\\frac{-z^{-N-1}}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w w^{(N+1)\\alpha-\\beta}}{1-z^{-1}w^\\alpha}\\,dw.\n\\end{equation}\n\nSuppose $z=-x<0$, and choose $\\theta\\in(0,\\pi/2)$.  Define the semi-infinite\nlines\n\\[\n\\varGamma_\\pm=\\{\\,re^{\\pm i(\\pi-\\theta)}:0<r<\\infty\\,\\},\n\\]\nand consider\n\\[\nR_N(-x)=\\frac{(-1)^Nx^{-N-1}}{2\\pi i}\\int_{-\\infty}^{0^+}\n    \\frac{e^w w^{(N+1)\\alpha-\\beta}}{1+x^{-1}w^\\alpha}\\,dw.\n\\]\nBy integrating along $-\\varGamma_-$ and $\\varGamma_+$, we see that\n\\[\n|R_N(-x)|\\le\\frac{x^{-N-1}}{\\pi}\\int_0^\\infty\n\\frac{e^{-r\\cos\\theta}r^{(N+1)\\alpha-\\beta}}%\n{|1+x^{-1}r^\\alpha e^{i\\phi}|}\\,dr\\quad\\text{where}\\quad\n\\phi=(\\pi-\\theta)\\alpha.\n\\]\nSince\n\\[\n|1+x^{-1}r^\\alpha e^{i\\phi}|\\ge\\Re(1+x^{-1}r^\\alpha e^{i\\phi})\n    =1+x^{-1}r^\\alpha\\cos\\phi,\n\\]\nif $0<\\phi\\le\\pi/2$ then $\\cos\\phi\\ge0$ and hence $|1+x^{-1}r^\\alpha e^{i\\phi}|\\ge1$ so,\nusing the substitution~$y=r\\cos\\theta$,\n\\begin{align*}\n|R_N(-x)|&\\le\\frac{x^{-N-1}}{\\pi}\\int_0^\\infty \n    e^{-r\\cos\\theta}r^{(N+1)\\alpha-\\beta}\\,dr\n    =\\frac{x^{-N-1}}{\\pi}\\,\\frac{1}{(\\cos\\theta)^{(N+1)\\alpha-\\beta+1}}\n    \\int_0^\\infty e^{-y}y^{(N+1)\\alpha-\\beta}\\,dy\\\\\n    &=\\frac{(\\cos\\theta)^{1-\\beta}}{\\pi}\\,\n    \\bigl(x(\\cos\\theta)^\\alpha\\bigr)^{-N-1}\\,\n    \\Gamma\\bigl((N+1)\\alpha-\\beta+1\\bigr)=O(x^{-N-1}).\n\\end{align*}\nOtherwise, if $\\pi/2<\\phi<\\pi$, then $\\cos\\phi<0$ and we put $q=x/|\\cos\\phi|$ \nso that\n\\[\n|1+x^{-1}r^\\alpha e^{i\\phi}|\\ge1/2\\quad\\text{for}\\quad 0<r<(q/2)^{1/\\alpha}\n\\quad\\text{and}\\quad (3q/2)^{1/\\alpha}<r<\\infty,\n\\]\nwhereas, for $(q/2)^{1/\\alpha}\\le r\\le(3q/2)^{1/\\alpha}$, we use the lower bound\n\\[\n|1+x^{-1}r^\\alpha e^{i\\phi}|\\ge|\\Im(1+x^{-1}r^\\alpha e^{i\\phi})|\n    =x^{-1}r^\\alpha|\\sin\\phi|.\n\\]\nThus, $|R_N(-x)|\\le I_1+I_2$ where $I_1=O(x^{-N-1})$ and\n\\begin{align*}\nI_2&=\\frac{x^{-N}}{\\pi\\sin\\phi}\\int_{(q/2)^{1/\\alpha}}^{(3q/2)^{1/\\alpha}}\n    e^{-r\\cos\\theta}r^{N\\alpha-\\beta}\\,dr\\\\\n&\\le\\frac{x^{-N}}{\\pi\\sin\\phi}\\,\\exp\\bigl(-(q/2)^{1/\\alpha}\\cos\\theta\\bigr)\n    \\biggl[\\frac{r^{N\\alpha-\\beta+1}}{N\\alpha-\\beta+1}\n    
\\biggr]_{(q/2)^{1/\\alpha}}^{(3q/2)^{1/\\alpha}}\\\\\n&\\le\\frac{x^{-N-1}q|\\cos\\phi|}{\\pi\\sin\\phi}\\,(3q/2)^{N+(1-\\beta)/\\alpha}\\,\n\\frac{\\exp\\bigl(-(q/2)^{1/\\alpha}\\cos\\theta\\bigr)}{N\\alpha-\\beta+1}\n=O(x^{-N-1}),\n\\end{align*}\nyielding the asymptotic expansion\n\\begin{equation}\\label{eq: E minus asymptotic}\nE_{\\alpha,\\beta}(-x)=\\frac{1}{\\pi}\\sum_{n=1}^N(-1)^n\n    \\sin\\pi(n\\alpha-\\beta)\\,\\Gamma(n\\alpha-\\beta+1)\\,x^{-n}\n    +O(x^{-N-1})\\quad\\text{as $x\\to\\infty$.}\n\\end{equation}\nNotice that the identity\n\\[\n\\Gamma(z)\\,\\Gamma(1-z)=\\frac{\\pi}{\\sin\\pi z}\n\\]\nallows us to re-write the expansion as\n\\[\nE_{\\alpha,\\beta}(-x)=-\\sum_{n=1}^N\\frac{(-x)^{-n}}{\\Gamma(\\beta-n\\alpha)}\n    +O(x^{-N-1}).\n\\]\n\nBy Stirling's formula,\n\\[\n\\log\\Gamma(z)=z\\log z-z-\\tfrac12\\log z+O(1)\\quad\\text{as $|z|\\to\\infty$,}\n\\]\nso as $n\\to\\infty$, and putting $\\beta'=1-\\beta$,\n\\begin{align*}\n\\smash[b]{\\log\\biggl(\\frac{\\Gamma(n\\alpha+\\beta')}%\n{\\Gamma\\bigl((n-1)\\alpha+\\beta'\\bigr)}\\biggr)}&=\n\\bigl(n\\alpha+\\beta'\\bigr)\\log(n\\alpha+\\beta')\\\\\n&\\qquad{}-\\bigl((n-1)\\alpha+\\beta'\\bigr)\n    \\log\\bigl((n-1)\\alpha+\\beta'\\bigr)+O(1)\\\\\n    &=\\alpha\\log(n\\alpha+\\beta')+\\bigl((n-1)\\alpha+\\beta'\\bigr)\n    \\log\\frac{n\\alpha+\\beta'}{(n-1)\\alpha+\\beta'}+O(1)\\\\\n    &=\\alpha\\log\\bigl(n(\\alpha+O(n^{-1}))\\bigr)+\\bigl((n-1)\\alpha+\\beta'\\bigr)\n    \\log\\frac{1+O(n^{-1})}{1+O(n^{-1})}+O(1)\\\\\n    &=\\alpha\\log n+O(1)=\\log n^\\alpha+O(1).\n\\end{align*}\nThus,\n\\[\n\\log\\frac{\\Gamma(n\\alpha+\\beta')x^{-n}}%\n{\\Gamma\\bigl((n-1)\\alpha+\\beta'\\bigr)x^{-n+1}}=\\log\\frac{n^\\alpha}{x}+O(1),\n\\]\nshowing that the terms of the asymptotic expansion decrease in magnitude until \n$n^\\alpha\\approx x$.\n\nFor large~$n$, there is a risk that $\\Gamma(n\\alpha-\\beta+1)$ overflows.\nFor example, in IEEE double precision arithmetic, $\\Gamma(172)=\\mathtt{Inf}$.\nStandard numerical libraries therefore provide a $\\log\\Gamma(x)$ function \nwhich will not overflow unless $x$ itself is close to overflowing.  
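\nFor example, using only the standard library (an illustrative Python sketch, not the package's code), the $n$th term of the expansion can be evaluated in the log domain:\n\\begin{verbatim}\nimport math\n\ndef asymptotic_term(alpha, beta, n, x):\n    # n-th term of the large-x expansion of E_{alpha,beta}(-x): the\n    # Gamma factor is formed via lgamma to avoid overflow.\n    a_n = (-1)**n*math.sin(math.pi*(n*alpha - beta))\n    b_n = math.lgamma(n*alpha - beta + 1)\n    return a_n*math.exp(b_n - n*math.log(x))/math.pi\n\\end{verbatim}\n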
We \ntherefore store\n\\[\na_n=(-1)^n\\sin\\pi(n\\alpha-\\beta)\n\\quad\\text{and}\\quad\nb_n=\\log\\Gamma(n\\alpha-\\beta+1),\n\\]\nand compute the terms of the asymptotic expansion as\n\\[\na_n\\,\\Gamma(n\\alpha-\\beta+1)\\,x^{-n}=a_n\\exp(b_n-n\\log x).\n\\]\n(Notice that $a_1=\\sin\\pi(\\beta-\\alpha)>0$ if $0<\\beta-\\alpha<1$.)\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Asymptotic expansion: positive real argument}\nPutting $z=x>0$ in~\\eqref{eq: RN(z)} gives\n\\[\nR_N(x)=\\frac{-x^{-N-1}}{2\\pi i}\\int_{-\\infty}^{0^+}\n\\frac{e^w w^{(N+1)\\alpha-\\beta}}{1-x^{-1}w^\\alpha}\\,dw,\n\\]\nwhere the Hankel contour must be such that $x<|w^\\alpha|$ for all~$w$.\nSince we want to consider large~$x$, we move the contour to the left of the \npole at~$w=x^{1/\\alpha}$, picking up a residue\n\\[\n\\res_{w=x^{1/\\alpha}}\\frac{e^w w^{(N+1)\\alpha-\\beta}}{1-x^{-1}w^\\alpha}\n    =\\lim_{w\\to x^{1/\\alpha}}e^w w^{(N+1)\\alpha-\\beta}\\,\n    \\frac{w-x^{1/\\alpha}}{1-x^{-1}w^\\alpha}\n    =\\frac{-x^{N+1}}{\\alpha}\\,x^{(1-\\beta)/\\alpha}\\exp(x^{1/\\alpha}),\n\\]\nleading to the expansion\n\\[\nE_{\\alpha,\\beta}(x)=\\alpha^{-1}x^{(1-\\beta)/\\alpha}\\exp(x^{1/\\alpha})\n    +\\frac{1}{\\pi}\\sum_{n=1}^N\\sin\\pi(n\\alpha-\\beta)\\,\n    \\Gamma(n\\alpha-\\beta+1)\\,x^{-n}+\\tilde R_N(x).\n\\]\n\nA deeper analysis of the exponential asymptotics of~$E_{\\alpha,\\beta}$ shows\nthat~\\cite[Theorem~2.2]{WongZhao2002}\n\\[\nE_{\\alpha,\\beta}(-x)=\\frac{1}{\\pi}\\sum_{n=1}^{[x^{1/\\alpha}]}\n    A_nB_nx^{-n}+O\\bigl(x^{1/2-\\beta}e^{-x}\\bigr)\n\\]\nfor $x>0$ and $0<\\alpha\\le 1-\\epsilon$, where\n\\[\nA_n=(-1)^n\\sin\\pi(n\\alpha-\\beta)\n\\quad\\text{and}\\quad\nB_n=\\Gamma(n\\alpha-\\beta+1).\n\\]\nTo reduce the risk of overflow for large~$n$, we store $\\log B_n$ and compute\n\\[\nB_nx^{-n}=\\exp(\\log B_n-n\\log x).\n\\]\nFor optimal truncation, we choose $N$ so that if $x=N^\\alpha$ (and hence\n$N\\approx x^{1/\\alpha}$) then $B_Nx^{-N}=B_N/N^{N\\alpha}<\\epsilon$, or \nequivalently,\n\\[\nN\\alpha\\log N-\\log B_N>\\log\\epsilon^{-1}.\n\\]\n\nReplacing $\\beta$ with $\\beta-r\\alpha$ in the identity~\\eqref{eq: beta identity}\ngives\n\\[\nz^rE_{\\alpha,\\beta}(z)=E_{\\alpha,\\beta-r\\alpha}(z)-\\sum_{n=0}^{r-1}\n\\frac{z^n}{\\Gamma(\\beta-r\\alpha+n\\alpha)},\n\\]\nand so\n\\[\nE_{\\alpha,\\beta}(z)=z^{-r}E_{\\alpha,\\beta-r\\alpha}(z)-\\sum_{n=0}^{r-1}\n\\frac{z^{-(r-n)}}{\\Gamma\\bigl(\\beta-(r-n)\\alpha\\bigr)},\n\\]\nleading to the asymptotic expansion\n\\[\nE_{\\alpha,\\beta}(z)=-\\sum_{n=1}^r\\frac{z^{-n}}{\\Gamma(\\beta-n\\alpha)}\n    +O(z^{-r-1}).\n\\]\nRecalling that the Gamma function satisfies\n\\[\n\\Gamma(z)\\Gamma(1-z)=\\frac{\\pi}{\\sin\\pi z},\n\\]\nit follows that\n\\[\n\\frac{-1}{\\Gamma(\\beta-n\\alpha)}=\\frac{1}{\\pi}\\,\\sin\\pi(n\\alpha-\\beta)\n    \\Gamma(1+n\\alpha-\\beta),\n\\]\nso\n\\[\nE_{\\alpha,\\beta}(z)=\\frac{1}{\\pi}\\sum_{n=1}^r\\sin\\pi(n\\alpha-\\beta)\n    \\Gamma(1+n\\alpha-\\beta)z^{-n}+O(z^{-r-1}).\n\\]\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Chebyshev expansion}\nConsider the substitution\n\\[\ny=\\frac{x-1}{x+1}\\quad\\text{for $0\\le x<\\infty$.}\n\\]\nThe function~$x\\mapsto y$ maps the half-line~$[0,\\infty)$ onto~$[-1,1)$, and \nthe inverse function~$y\\mapsto x$ is given by\n\\[\nx=\\frac{1+y}{1-y}\\quad\\text{for $-1\\le y<1$.}\n\\]\nWe define a function~$g$ by requiring that\n\\[\nE_{\\alpha,\\beta}(-x)=\\tfrac12(1-y)[1+(1+y)g(y)],\n\\]\nor 
equivalently,\n\\[\ng(y)=\\frac{1}{1+y}\\biggl(\\frac{2E_{\\alpha,\\beta}(-x)}{1-y}-1\\biggr).\n\\]\nNotice that\n\\[\n1+y=\\frac{2x}{x+1}\\quad\\text{and}\\quad 1-y=\\frac{2}{x+1}.\n\\]\n\nOn the one hand, the power series~\\eqref{eq: E alpha beta def} implies that\n\\[\ng(y)=\\frac{1}{2}\\biggl(1-\\frac{1}{\\Gamma(\\beta+\\alpha)}\\biggr)+O(x)\n\\quad\\text{as $y\\to-1$ (so $x\\to0$),}\n\\]\nand on the other hand, the asymptotic expansion~\\eqref{eq: E minus asymptotic}\nimplies that\n\\[\ng(y)=\\frac{1}{2}\\biggl(-1+\\frac{1}{\\pi}\\,\\sin\\pi(\\beta-\\alpha)\\,\n    \\Gamma(1+\\alpha-\\beta)\\biggr)+O(x^{-1})\\quad\n\\text{as $y\\to1$ (so $x\\to\\infty$).}\n\\]\nIn particular, $g$ and all its derivatives are continuous on~$[-1,1]$.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Negative real argument when $1<\\alpha<2$}\nAssume for this section that $x>0$ and $1<\\alpha<2$.  It follows that $F(w,x)$\nhas poles at $\\gamma_\\pm=x^{1/\\alpha}e^{\\pm i\\pi/\\alpha}$, \nwith $\\arg\\gamma_\\pm=\\pm\\pi/\\alpha$ and hence\n\\[\n\\frac{\\pi}{2}<\\arg\\gamma_+<\\pi\n\\quad\\text{and}\\quad\n-\\pi<\\arg\\gamma_-<-\\frac{\\pi}{2}.\n\\]\nLet\n\\[\n\\varphi(w,x)=\\frac{(w-\\gamma_+)(w-\\gamma_-)}%\n{(w^\\alpha+x)(2w-\\gamma_+-\\gamma_-)}\n\\]\nso that\n\\[\n\\frac{1}{w^\\alpha+x}=\\varphi(w;x)\\biggl(\n    \\frac{1}{w-\\gamma_+}+\\frac{1}{w-\\gamma_-}\\biggr)\n\\]\nwith\n\\[\n\\varphi(\\gamma_\\pm;x)=\\lim_{w\\to\\gamma_\\pm}\\varphi(w;x)\n    =\\frac{\\gamma_\\pm^{1-\\alpha}}{\\alpha}.\n\\]\nPutting\n\\[\nH_\\pm(w,x)=\\frac{w^{\\alpha-\\beta}\\varphi(w;x)\n-\\gamma_\\pm^{\\alpha-\\beta}\\varphi(\\gamma_\\pm;x)}{w-\\gamma_\\pm}\n    =\\frac{w^{\\alpha-\\beta}(w-\\gamma_\\mp)}{(w^\\alpha+x)(2w-\\gamma_+-\\gamma_-)}\n-\\frac{\\alpha^{-1}\\gamma_\\pm^{1-\\beta}}{w-\\gamma_\\pm},\n\\]\nwe have\n\\[\nw^{\\alpha-\\beta}\\,\\frac{\\varphi(w;x)}{w-\\gamma_\\pm}\n    =\\frac{\\alpha^{-1}\\gamma_\\pm^{1-\\beta}}{w-\\gamma_\\pm}+H_\\pm(w,x)\n\\]\nso\n\\[\nF(w,x)=\\frac{1}{\\alpha}\\biggl(\n     \\frac{\\gamma_+^{1-\\beta}}{w-\\gamma_+}\n    +\\frac{\\gamma_-^{1-\\beta}}{w-\\gamma_-}\\biggr)\n    +H_+(w;x)+H_-(w;x)\n\\]\nand hence\n\\begin{align*}\nE_{\\alpha,\\beta}(-x)&=\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}e^wF(w,x)\\,dw\\\\\n&=\\frac{1}{\\alpha}\\,\\bigl(\\gamma_+^{1-\\beta}e^{\\gamma_+}\n    +\\gamma_-^{1-\\beta}e^{\\gamma_-}\\bigr)\n     +\\frac{1}{2\\pi i}\\int_{-\\infty}^{0^+}e^w\\bigl(H_+(w,x)+H_-(w,x)\\bigr)\\,dw.\n\\end{align*}\nSince\n\\begin{align*}\n\\gamma_+^{1-\\beta}e^{\\gamma_+}&=x^{(1-\\beta)/\\alpha}\n    \\exp\\bigl(i\\pi(1-\\beta)/\\alpha\\bigr)\n    \\exp\\bigl(x^{1/\\alpha}(\\cos\\pi/\\alpha+i\\sin\\pi/\\alpha)\\bigr)\\\\\n    &=x^{(1-\\beta)/\\alpha}\\exp(x^{1/\\alpha}\\cos\\pi/\\alpha)\n    \\exp\\bigl(i\\pi(1-\\beta)/\\alpha+ix^{1/\\alpha}\\sin\\pi/\\alpha\\bigr)\n\\end{align*}\nit follows that\n\\[\n\\gamma_+^{1-\\beta}e^{\\gamma_+}\n    +\\gamma_-^{1-\\beta}e^{\\gamma_-}\n    =2x^{(1-\\beta)/\\alpha}\\,\\exp(x^{1/\\alpha}\\cos\\pi/\\alpha)\n    \\cos\\bigl(\\pi(1-\\beta)/\\alpha+x^{1/\\alpha}\\sin\\pi/\\alpha\\bigr).\n\\]\n\nTo evaluate $H_\\pm(w;x)$ for~$w$ near~$\\gamma_\\pm$, let\n\\[\nw=\\gamma_\\pm(1+\\epsilon_\\pm)\\quad\\text{where}\\quad\n\\epsilon_\\pm=\\frac{w-\\gamma_\\pm}{\\gamma_\\pm}\n\\]\nso that\n\\[\nw^\\alpha+x=-x(1+\\epsilon_\\pm)^\\alpha+x\n    =-x\\epsilon_\\pm\\psi_{1,\\alpha}(\\epsilon_\\pm)\n    
=-x\\epsilon_\\pm[\\alpha+\\epsilon_\\pm\\psi_{2,\\alpha}(\\epsilon_\\pm)]\n\\]\nwith\n\\begin{gather*}\nw-\\gamma_\\pm=\\gamma_\\pm\\epsilon_\\pm,\\qquad\n2w-\\gamma_+-\\gamma_-=w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm,\\\\\nw^{\\alpha-\\beta}=-x\\gamma_\\pm^{-\\beta}\n    [1+\\epsilon_\\pm\\psi_{1,\\alpha-\\beta}(\\epsilon_\\pm)].\n\\end{gather*}\nWe have\n\\[\nH_\\pm(w;x)=\\frac{w^{\\alpha-\\beta}(w-\\gamma_\\mp)(w-\\gamma_\\pm)\n-\\alpha^{-1}\\gamma_\\pm^{1-\\beta}(w^\\alpha+x)(2w-\\gamma_\\pm-\\gamma_\\mp)}%\n{(w^\\alpha+x)(2w-\\gamma_+-\\gamma_-)(w-\\gamma_\\pm)},\n\\]\nwhere the numerator equals\n\\begin{align*}\n-x\\gamma_\\pm^{-\\beta}&[1+\\epsilon_\\pm\\psi_{1,\\alpha-\\beta}(\\epsilon_\\pm)]\n(w-\\gamma_\\mp)(\\gamma_\\pm\\epsilon_\\pm)\n+\\alpha^{-1}\\gamma_\\pm^{1-\\beta}\nx\\epsilon_\\pm[\\alpha+\\epsilon_\\pm\\psi_{2,\\alpha}(\\epsilon_\\pm)]\n(w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)\\\\\n&=x\\gamma_\\pm^{1-\\beta}\\epsilon_\\pm\\Bigl(\n[1+\\alpha^{-1}\\epsilon_\\pm\\psi_{2,\\alpha}(\\epsilon_\\pm)]\n(w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)\n-[1+\\epsilon_\\pm\\psi_{1,\\alpha-\\beta}(\\epsilon_\\pm)](w-\\gamma_\\mp)\n\\Bigr)\\\\\n&=x\\gamma_\\pm^{1-\\beta}\\epsilon_\\pm^2\\Bigl((w-\\gamma_\\mp)\n[\\alpha^{-1}\\psi_{2,\\alpha}(\\epsilon_\\pm)-\\psi_{1,\\alpha-\\beta}(\\epsilon_\\pm)]\n+\\gamma_\\pm[1+\\alpha^{-1}\\epsilon_\\pm\\psi_{2,\\alpha}(\\epsilon_\\pm)]\\Bigr)\n\\end{align*}\nand the denominator equals\n\\[\n-x\\epsilon_\\pm\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)\n(\\gamma_\\pm\\epsilon_\\pm)=-x\\gamma_\\pm\\epsilon_\\pm^2\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)\n\\]\nso\n\\[\nH_\\pm(w;x)=\\frac{(w-\\gamma_\\mp)\n[\\psi_{1,\\alpha-\\beta}(\\epsilon_\\pm)-\\alpha^{-1}\\psi_{2,\\alpha}(\\epsilon_\\pm)]\n-\\gamma_\\pm\\alpha^{-1}\\psi_{1,\\alpha}(\\epsilon_\\pm)}%\n{\\gamma_\\pm^\\beta\\,\\psi_{1,\\alpha}\n(\\epsilon_\\pm)(w-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)}.\n\\]\nIn particular,\n\\[\nH_\\pm(\\gamma_\\pm;x)\n=\\frac{(1+\\alpha-2\\beta)(\\gamma_\\pm-\\gamma_\\mp)-2\\gamma_\\pm}%\n{2\\alpha\\gamma_\\pm^\\beta(\\gamma_\\pm-\\gamma_\\mp)}.\n\\]\nSimilarly,\n\\[\nH_\\mp(w;x)=\\frac{w^{\\alpha-\\beta}(w-\\gamma_\\pm)(w-\\gamma_\\mp)\n-\\alpha^{-1}\\gamma_\\mp^{1-\\beta}(w^\\alpha+x)(2w-\\gamma_\\mp-\\gamma_\\pm)}%\n{(w^\\alpha+x)(2w-\\gamma_+-\\gamma_-)(w-\\gamma_\\mp)},\n\\]\nwhere the numerator equals\n\\begin{align*}\n\\gamma_\\pm^{\\alpha-\\beta}&(1+\\epsilon_\\pm)^{\\alpha-\\beta}\n(\\gamma_\\pm\\epsilon_\\pm)\n(\\gamma_\\pm-\\gamma_\\mp+\\gamma_\\pm\\epsilon_\\pm)\n+\\alpha^{-1}\\gamma_\\mp^{1-\\beta}x\\epsilon_\\pm\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(\\gamma_\\pm-\\gamma_\\mp+2\\gamma_\\pm\\epsilon_\\pm)\\\\\n&=-x\\epsilon_\\pm\\Bigl(\\gamma_\\pm^{1-\\beta}(1+\\epsilon_\\pm)^{\\alpha-\\beta}\n(w-\\gamma_\\mp)\n-\\alpha^{-1}\\gamma_\\mp^{1-\\beta}\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(2w-\\gamma_+-\\gamma_-)\\Bigr)\n\\end{align*}\nand the denominator equals\n\\[\n-x\\epsilon_\\pm\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(2w-\\gamma_+-\\gamma_-)(w-\\gamma_\\mp)\n\\]\nso\n\\begin{align*}\nH_\\mp(w;x)&=\\frac{\\gamma_\\pm^{1-\\beta}(1+\\epsilon_\\pm)^{\\alpha-\\beta} 
\n(w-\\gamma_\\mp)-\\alpha^{-1}\\gamma_\\mp^{1-\\beta}\\psi_{1,\\alpha}(\\epsilon_\\pm)\n(2w-\\gamma_+-\\gamma_-)}%\n{\\psi_{1,\\alpha}(\\epsilon_\\pm)(2w-\\gamma_+-\\gamma_-)(w-\\gamma_\\mp)}\\\\\n&=\\frac{\\gamma_\\pm^{1-\\beta}(1+\\epsilon_\\pm)^{\\alpha-\\beta}}%\n{\\psi_{1,\\alpha}(\\epsilon_\\pm)(2w-\\gamma_+-\\gamma_-)}\n-\\frac{\\alpha^{-1}\\gamma_\\mp^{1-\\beta}}{w-\\gamma_\\mp}.\n\\end{align*}\nIn particular,\n\\[\nH_\\mp(\\gamma_\\pm;x)=\\frac{\\gamma_\\pm^{1-\\beta}-\\gamma_\\mp^{1-\\beta}}%\n{\\alpha(\\gamma_\\pm-\\gamma_\\mp)}=\\frac{1}{x^{\\beta/\\alpha}}\\,\n\\frac{\\sin\\pi(1-\\beta)/\\alpha}{\\alpha\\sin\\pi/\\alpha}.\n\\]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Parabolic contour}\nConsider a contour~$\\mathcal{C}$ of the form\n\\[\nw(u)=\\mu(iu+1)^2\\quad\\text{for $-\\infty<u<\\infty$,}\n\\]\nwith $\\mu>0$.  It follows that\n\\begin{align*}\n\\Re w(u+iv)&=\\mu\\bigl((1-v)^2-u^2\\bigr),\\\\\n\\Im w(u+iv)&=2\\mu(1-v)u,\n\\end{align*}\nand so $\\Re w(u+iv)\\to-\\infty$ as~$|u|\\to\\infty$ provided $v<1$.  Thus, we can \napply the error bound~\\eqref{eq: Qh error} for $r=1-\\epsilon$ and $s>0$.\nNeglecting $\\epsilon$, we deduce that the error is of order\n\\[\n\\exp(-2\\pi/h)+\\exp\\bigl(\\mu(1+s)^2-2\\pi s/h\\bigr)\n    +\\exp\\bigl(\\mu(1-(Nh)^2)\\bigr).\n\\]\nThe exponent\n\\[\n\\mu(1+s)^2-\\frac{2\\pi s}{h}=\\mu\\bigg[\\biggl(1+s-\\frac{\\pi}{\\mu h}\\biggr)^2\n    +\\frac{2\\pi}{\\mu h}-\\biggl(\\frac{\\pi}{\\mu h}\\biggr)^2\\biggr]\n\\]\nis minimised if\n\\[\ns=\\frac{\\pi}{\\mu h}-1,\n\\]\nand using this value we balance the three error terms by satisfying\n\\[\n\\frac{-2\\pi}{h}=\\frac{2\\pi}{h}-\\frac{\\pi^2}{\\mu h^2}=\\mu\\bigl(1-(Nh)^2\\bigr).\n\\]\nThe left-hand equation yields $\\mu h=\\pi/4$ so $1-(Nh)^2=-2\\pi/(\\mu h)=-8$\nand hence $Nh=3$.  Thus, for a given~$N$, the optimal quadrature parameters are\n\\[\nh_\\star=\\frac{3}{N}\\quad\\text{and}\\quad\\mu_\\star=\\frac{\\pi}{12}\\,N,\n\\]\nyielding an error of order $\\exp(-2\\pi/h_\\star)=\\exp(-2\\pi N/3)$ or\n$\\bigl(e^{2\\pi/3}\\bigr)^{-N}\\approx8.12^{-N}$.\n\nSince $w'(u)=2\\mu i(iu+1)$ we see that\n\\[\n\\frac{hw'(u)}{2\\pi i}=\\frac{\\mu h}{\\pi}(iu+1)=\\frac{1}{4}(1+iu)\n\\]\nand so\n\\begin{align*}\nQ_{h,N}(x)&=\\frac{1}{4}\\sum_{n=-N}^Ne^{w_n}f(w_n;x)(1+inh)\\\\\n    &=\\frac{1}{2}\\biggl(\\tfrac12e^{w_0}f(w_0;x)+\\sum_{n=1}^N\\Re\\bigl(\n    e^{w_n}f(w_n;x)(1+inh)\\bigr)\\biggr).\n\\end{align*}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Carath\\'eodory--Fej\\'er method}\nLet $D=\\{\\,z\\in\\mathbb{C}:|z|<1\\,\\}$ denote the open unit disk in the complex \nplane, so that its boundary~$\\partial D=\\{\\,z\\in\\mathbb{C}:|z|=1\\,\\}$ is the \nunit circle.  Given a function~$F:[-1,1]\\to\\mathbb{R}$ and an integer~$M\\ge0$, \ndenote the partial Chebyshev expansion of~$F$ by\n\\[\nF_M(x)=\\sideset{}{'}\\sum_{k=0}^Ma_kT_k(x)\n    \\equiv\\tfrac12 a_0+\\sum_{k=1}^Ma_kT_k(x)\\quad\\text{for \n$-1\\le x\\le 1$,}\n\\]\nwhere $T_k$ denotes the Chebyshev polynomial (of the first kind) with degree~$k$,\nand\n\\[\na_k=\\frac{2}{\\pi}\\int_{-1}^1\\frac{F(x)T_k(x)}{\\sqrt{1-x^2}}\\,dx\n\\]\nis the $k$th Chebyshev coefficient of~$F$.  
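\nIn practice the coefficients $a_k$ can be approximated by sampling $F$ at Chebyshev points; a minimal Python sketch (an illustration only, with a hypothetical helper name):\n\\begin{verbatim}\nimport math\n\ndef chebyshev_coeffs(F, M):\n    # a_k ~ (2/(M+1)) * sum_j F(cos t_j) cos(k t_j) at the Chebyshev\n    # angles t_j = pi*(j + 1/2)/(M + 1)  (Gauss-Chebyshev quadrature).\n    t = [math.pi*(j + 0.5)/(M + 1) for j in range(M + 1)]\n    Fx = [F(math.cos(tj)) for tj in t]\n    return [2.0/(M + 1)*sum(f*math.cos(k*tj) for f, tj in zip(Fx, t))\n            for k in range(M + 1)]\n\\end{verbatim}\n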
The change of variable\n\\[\nx=\\Re z=\\tfrac12(z+z^{-1})\n\\]\ndefines a bijection~$z\\mapsto x$ from the unit circle~$\\partial D$ onto the \nreal interval~$[-1,1]$, with\n\\[\nT_k(x)=\\tfrac12(z^k+z^{-k}).\n\\]\nPut $a_{-k}=a_k$ and define\n\\[\nf_M(z)=\\sum_{k=-M}^M a_kz^k,\n\\]\nand given fixed integers $m\\ge 0$~and $n\\ge0$, let\n\\[\nf^+(z)=\\sum_{k=m-n+1}^M a_kz^k\\quad\\text{and}\\quad\nf^0(z)=\\begin{cases}\n    \\sum_{k=n-m}^{m-n}a_kz^k,&\\text{if $m\\ge n$,}\\\\\n    -\\sum_{k=m-n+1}^{n-m-1}a_kz^k&\\text{if $m<n$.}\n       \\end{cases}\n\\]\nIn this way,\n\\begin{equation}\\label{eq: FM}\nF_M(x)=\\tfrac12f_M(z)=\\tfrac12\\bigl[f^+(z)+f^+(z^{-1}) +f^0(z)\\bigr].\n\\end{equation}\n\nWe wish to approximate $F(x)$ by a rational function of type~$(m,n)$, that is,\nby a function $R(x)=p(x)/q(x)$ where $p$~and $q$ are real polynomials of degree \n$m$~and $n$, respectively.  Let $V_{mn}$ denote the set of such rational \nfunctions, and let $\\tilde V_{mn}$ denote the set of functions of the form\n\\[\n\\tilde r(z)=\\sum_{k=-\\infty}^m d_kz^k\\bigg/\\sum_{k=0}^n e_kz^k,\n\\]\nwhere the coefficients $d_k$~and $e_k$ are real, the numerator converges to \na bounded analytic function in~$|z|>1$ and the denominator has no zeros \nin~$D\\cup\\partial D$.\n\nConstruct a real, $(M-m+n)\\times(M-m+n)$ symmetric Hankel matrix\n\\[\nH=[H_{ij}]\\quad\\text{where}\\quad\nH_{ij}=\\begin{cases}\n    a_{m-n+i+j-1},&\\text{if $i+j\\le M-m+n+1$,}\\\\\n    0,&\\text{otherwise.}\n\\end{cases}\n\\]\n\n\\begin{example}\nIf $M=5$, $m=3$ and $n=4$, then\n\\[\nH=\\begin{bmatrix}\na_0&a_1&a_2&a_3&a_4&a_5\\\\\na_1&a_2&a_3&a_4&a_5&0\\\\\na_2&a_3&a_4&a_5&0  &0\\\\\na_3&a_4&a_5&0  &0  &0\\\\\na_4&a_5&0  &0  &0  &0\\\\\na_5&0  &0  &0  &0  &0\n\\end{bmatrix}.\n\\]\n\\end{example}\n\nSince $H$ is real and symmetric, it has an eigenvalue decomposition\n\\[\nH=U\\Lambda U^\\top,\n\\]\nwhere $U=[\\boldsymbol{u}_1\\quad\\boldsymbol{u}_2\\quad\\cdots\n\\quad\\boldsymbol{u}_{M-m+n}]$ is orthogonal and \n$\\Lambda=\\diag(\\lambda_1,\\lambda_2,\\ldots,\n\\lambda_{M-m+n})$.  
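\nFor concreteness, a small numpy-based sketch (hypothetical helper name) that assembles $H$ with the indexing above and diagonalises it:\n\\begin{verbatim}\nimport numpy as np\n\ndef cf_hankel_eig(a, M, m, n):\n    # H_ij = a_{m-n+i+j-1} (1-indexed i, j), zero once the index\n    # leaves 0..M; H is real symmetric, so eigh applies.\n    L = M - m + n\n    H = np.zeros((L, L))\n    for i in range(1, L + 1):\n        for j in range(1, L + 1):\n            k = m - n + i + j - 1\n            if 0 <= k <= M:\n                H[i - 1, j - 1] = a[k]\n    lam, U = np.linalg.eigh(H)\n    return H, lam, U\n\\end{verbatim}\n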
We order the (real) eigenvalues by absolute magnitude:\n\\[\n|\\lambda_1|\\ge|\\lambda_2|\\ge\\cdots\\ge|\\lambda_{M-m+n}|.\n\\]\nWe denote the $(n+1)$st eigenvalue and eigenvector by\n\\[\n\\lambda=\\lambda_{n+1}\\quad\\text{and}\\quad\n\\boldsymbol{u}_{n+1}=[u_0,u_1,\\ldots,u_{M-m+n-1}]^\\top,\n\\]\nand define\n\\begin{align*}\nb(z)&=\\lambda z^{m-n-(M-m+n-1)}\\sum_{j=0}^{M-m+n-1}u_jz^j\\bigg/\n\\sum_{j=0}^{M-m+n-1}u_{M-m+n-1-j}z^j\\\\\n    &=\\lambda z^{m-n-(M-m+n-1)}\\,\\frac{u_0+u_1z+\\cdots+u_{M-m+n-1}z^{M-m+n-1}}%\n{u_{M-m+n-1}+u_{M-m+n-2}z+\\cdots+u_0z^{M-m+n-1}}\n\\end{align*}\nand\n\\begin{equation}\\label{eq: upsilon}\n\\upsilon(z)=\\sum_{j=0}^{M-m+n-1}u_{M-m+n-1-j}z^j.\n\\end{equation}\nSince\n\\[\n\\sum_{j=0}^{M-m+n-1}u_jz^j=\\sum_{j=0}^{M-m+n-1}u_{M-m+n-1-j}z^{M-m+n-1-j}\n    =z^{M-m+n-1}\\upsilon(z^{-1})\n\\]\nwe see that\n\\[\nb(z)=\\lambda z^{m-n}\\frac{\\upsilon(z^{-1})}{\\upsilon(z)}.\n\\]\nThus, $|b(z)|=|\\lambda|$ for~$z\\in\\partial D$, and since $\\lambda$ and the \ncomponents of~$\\boldsymbol{u}_{n+1}$ are real,\n\\begin{equation}\\label{eq: b zbar}\n\\overline{\\upsilon(z)}=\\upsilon(\\bar z)\\quad\\text{and}\\quad\n\\overline{b(z)}=b(\\bar z), \n\\end{equation}\nand in particular,\n\\[\n\\overline{\\upsilon(z)}=\\upsilon(z^{-1})\\quad\\text{and}\\quad\n\\overline{b(z)}=b(z^{-1})\\quad\\text{for $z\\in\\partial D$.}\n\\]\n\n\\begin{theorem}\nThe function $\\tilde r^*=f^+-b$ is the unique best approximation to~$f^+$\nfrom $\\tilde V_{mn}$ on~$\\partial D$, that is, $\\tilde r^*$ is the unique \nfunction in~$\\tilde V_{mn}$ satisfying\n\\[\n\\|f^+-\\tilde r^*\\|_{L_\\infty(\\partial D)}=\\min_{g\\in\\tilde V_{mn}}\n\\|f^+-g\\|_{L_\\infty(\\partial D)}.\n\\]\nMoreover, \n\\[\n\\|f^+-\\tilde r^*\\|_{L_\\infty(\\partial D)}=|\\lambda|,\n\\]\nand if $|\\lambda_n|>|\\lambda|>|\\lambda_{n+2}|$ then the winding number \nof~$b$ is $m+n+1$.  
\n\\end{theorem}\n\\begin{proof}\nSee~\\cite[Theorem~1]{TrefethenGutknecht1983}.\n\\end{proof}\n\nDefine\n\\[\n\\tilde R(x)=\\tfrac12\\bigl[\\tilde r^*(z)+\\tilde r^*(z^{-1})+f^0(z)\\bigr],\n\\]\nand observe that by~\\eqref{eq: FM},\n\\begin{equation}\\label{eq: Rtilde FM}\n\\tilde R(x)=\\tfrac12\\bigl[f_M(z)-b(z)-b(z^{-1})\\bigr]\n    =F_M(x)-\\tfrac12[b(z)+b(z^{-1})],\n\\end{equation}\nso\n\\[\n\\tilde R(x)=F_M(x)-\\Re b(z).\n\\]\nThe theorem above implies that there are points\n\\[\n1=x_0>x_1>\\cdots>x_{m+n+1}=-1\n\\]\nsuch that\n\\[\n\\|F_M-\\tilde R\\|_\\infty=|\\lambda|\\quad\\text{and}\\quad\n(F_M-\\tilde R)(x_j)=(-1)^j\\lambda\\quad\\text{for $0\\le j\\le m+n+1$.}\n\\]\n\nLet\n\\[\nB(x)=\\tfrac12[b(z)+b(z^{-1})]=\\sideset{}{'}\\sum_{k=0}^\\infty b_kT_k(x)\n\\]\nand note that the Chebyshev coefficients\n\\[\nb_k=\\frac{2}{\\pi}\\int_{-1}^1\\frac{B(x)T_k(x)}{\\sqrt{1-x^2}}\\,dx\n    =\\frac{2}{\\pi}\\int_0^\\pi B(\\cos\\theta)\\cos(k\\theta)\\,d\\theta.\n\\]\nThus, by~\\eqref{eq: Rtilde FM},\n\\[\n\\tilde R(x)=\\sideset{}{'}\\sum_{k=0}^\\infty\\tilde c_kT_k(x)\n\\quad\\text{where}\\quad\\tilde c_k=a_k-b_k.\n\\]\n\nLet us factorize the polynomial~\\eqref{eq: upsilon},\n\\[\n\\upsilon(z)=u_0\\prod_{j=1}^{M-m+n}(z-\\zeta_j),\n\\]\nand number the zeros~$\\zeta_j$ so that\n\\[\n\\begin{aligned}\n|\\zeta_j|&>1&&\\text{for $1\\le j\\le J$,}\\\\\n|\\zeta_j|&<1&&\\text{for $J+1\\le j\\le M-m+n$.}\n\\end{aligned}\n\\]\nDefine\n\\[\nq(z)=\\prod_{j=1}^J\\frac{z-\\zeta_j}{-\\zeta_j},\n\\]\nwhich satisfies $q(0)=1$, and\n\\[\nQ(x)=\\frac{q(z)q(z^{-1})}{q(i)q(-i)}\n    =\\prod_{j=1}^J\\frac{(z-\\zeta_j)(z^{-1}-\\zeta_j)}{(i-\\zeta_j)(-i-\\zeta_j)}\n    =\\prod_{j=1}^J\\frac{1+\\zeta_j^2-2x\\zeta_j}{1+\\zeta_j^2}\n    =\\prod_{j=1}^J\\biggl(1-\\frac{2x\\zeta_j}{1+\\zeta_j^2}\\biggr),\n\\]\nwhich satisfies $Q(0)=1$ and $|Q(x)|>0$ for~$-1\\le x\\le 1$.  We therefore have \na Chebyshev expansion,\n\\[\n\\frac{1}{Q(x)}=\\sideset{}{'}\\sum_{k=0}^\\infty\\gamma_kT_k(x).\n\\]\nFinally, we define\n\\[\nP(x)=\\sideset{}{'}\\sum_{k=0}^m\\beta_kT_k(x)\n\\]\nwhere the Chebyshev coefficients satisfy\n\\[\n\\begin{bmatrix}\n\\gamma_0     &\\gamma_1&\\cdots&\\gamma_m    &\\cdots&\\gamma_{2m-1}&\\gamma_{2m}\\\\\n\\gamma_1     &\\gamma_0&\\cdots&\\gamma_{m-1}&\\cdots&\\gamma_{2m-2}&\\gamma_{2m-1}\\\\\n\\vdots       &\\vdots  &\\ddots&\\vdots      &\\ddots&\\vdots       &\\vdots\\\\\n\\gamma_m     &\\gamma_{m-1}&\\cdots&\\gamma_0&\\cdots&\\gamma_{m-1} &\\gamma_m\\\\\n\\vdots       &\\vdots  &\\ddots&\\vdots      &\\ddots&\\vdots       &\\vdots\\\\\n\\gamma_{2m-1}&\\gamma_{2m-2}&\\cdots&\\gamma_{m-1}&\\cdots&\\gamma_0&\\gamma_1\\\\\n\\gamma_{2m}  &\\gamma_{2m-1}&\\cdots&\\gamma_m&\\cdots&\\gamma_1&\\gamma_0\n\\end{bmatrix}\n\\begin{bmatrix}\\beta_m\\\\ \\beta_{m-1}\\\\ \\vdots\\\\ \\beta_0\\\\ \\vdots\\\\ \n\\beta_{m-1}\\\\ \\beta_m\n\\end{bmatrix}\n=\\begin{bmatrix}\\tilde c_m\\\\ \\tilde c_{m-1}\\\\ \\vdots\\\\ \\tilde c_0\\\\ \\vdots\\\\\n\\tilde c_{m-1}\\\\ \\tilde c_m\n \\end{bmatrix}.\n\\]\nHere, the symmetric Toeplitz matrix is known to be positive definite.  
Our \nrational approximation is then\n\\[\nF(x)\\approx R(x)=\\frac{P(x)}{Q(x)}.\n\\]\n\nIf we are given a function $G:[0,\\infty)\\to\\mathbb{R}$, then by putting\n\\[\nF(x)=G(t)\\quad\\text{where}\\quad t=\\frac{1+x}{1-x},\n\\]\nwe can construct $R(x)$ as above and use\n\\[\nG(t)\\approx R(x)\\quad\\text{where}\\quad x=\\frac{t-1}{t+1}.\n\\]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Adaptive Antoulas--Anderson algorithm}\nWe work with the barycentric representation\n\\[\nr(z)=\\frac{n(z)}{d(z)}\n\\quad\\text{where}\\quad\nn(z)=\\sum_{j=1}^m\\frac{w_jf_j}{z-z_j}\n\\quad\\text{and}\\quad\nd(z)=\\sum_{j=1}^m\\frac{w_j}{z-z_j},\n\\]\nfor a set of distinct (real or complex) \\emph{support points} $z_1$, $z_2$, \n\\dots, $z_m$.  The \\emph{data values} $f_1$, $f_2$, \\dots, $f_m$ and \n\\emph{weights} $w_1$, $w_2$, \\dots, $w_m$ may also be real or complex.  The \n\\emph{node polynomial} associated with the support points is\n\\[\n\\ell(z)=\\prod_{j=1}^m(z-z_j),\n\\]\nand if we define the polynomials of degree at most~$m-1$,\n\\[\np(z)=\\ell(z)n(z)=\\sum_{j=1}^mw_jf_j\\prod_{\\substack{k=1\\\\ k\\ne j}}^m(z-z_k)\n\\quad\\text{and}\\quad \nq(z)=\\ell(z)d(z)=\\sum_{j=1}^mw_j\\prod_{\\substack{k=1\\\\ k\\ne j}}^m(z-z_k),\n\\]\nthen\n\\[\nr(z)=\\frac{p(z)}{q(z)}.\n\\]\n\nThe AAA algorithm requires as input a large \n\\emph{sample set}~$S=\\{s_k\\}_{k=1}^K\\subseteq\\mathbb{C}$.  In step~1,\nwe define the constant approximation\n\\[\nr^{(0)}=\\frac{1}{K}\\sum_{k=1}^Kf(s_k)\n\\]\nand choose $z_1\\in S$ by requiring\n\\[\n|f(z_1)-r^{(0)}|=\\max_{z\\in S}|f(z)-r^{(0)}|.\n\\]\nDefining $f_1=f(z_1)$, $S^{(1)}=S\\setminus\\{z_1\\}$ and the discrete 2-norm\n\\[\n\\|g\\|_{S^{(1)}}=\\sqrt{\\sum_{s\\in S^{(1)}}|g(s)|^2},\n\\]\nwe choose\n\\[\nw^{(1)}_1=\\argmin_{|w|=1}\n\\bigl\\|fd^{(1)}-n^{(1)}\\bigr\\|_{S^{(1)}},\n\\]\nwhere\n\\[\nn^{(1)}(z)=\\frac{w^{(1)}_1f_1}{z-z_1}\\quad\\text{and}\\quad\nd^{(1)}(z)=\\frac{w^{(1)}_1}{z-z_1}.\n\\]\n\nFor~$m\\ge2$, at the start of the $m$th step we have already constructed\n\\[\nr^{(m-1)}(z)=\\frac{n^{(m-1)}(z)}{d^{(m-1)}(z)},\\quad\nn^{(m-1)}(z)=\\sum_{j=1}^{m-1}\\frac{w^{(m-1)}_jf_j}{z-z_j},\\quad\nd^{(m-1)}(z)=\\sum_{j=1}^{m-1}\\frac{w^{(m-1)}_j}{z-z_j}.\n\\]\nWe choose $z_m\\in S^{(m-1)}=S\\setminus\\{z_1,z_2,\\ldots,z_{m-1}\\}$ by requiring\n\\[\n\\bigl|f(z_m)-r^{(m-1)}(z_m)\\bigr|=\\max_{z\\in S^{(m-1)}}\n    \\bigl|f(z)-r^{(m-1)}(z)\\bigr|,\n\\]\nand put $f_m=f(z_m)$.  
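\nThe least-squares matrix used below, with entries $(F^{(m)}_i-f_j)/(S^{(m)}_i-z_j)$, can be assembled directly; a minimal Python sketch (hypothetical helper name):\n\\begin{verbatim}\ndef aaa_matrix(S_m, F_m, z, f):\n    # Loewner-type matrix A^{(m)}; its smallest right singular vector\n    # supplies the weight vector w^{(m)} (see the SVD step below).\n    return [[(Fi - fj)/(Si - zj) for zj, fj in zip(z, f)]\n            for Si, Fi in zip(S_m, F_m)]\n\\end{verbatim}\n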
The vector of \nweights~$\\boldsymbol{w}^{(m)}\\in\\mathbb{C}^m$ is then chosen as\n\\[\n\\boldsymbol{w}^{(m)}=\\argmin_{\\|\\boldsymbol{w}\\|=1}\\|fd^{(m)}-n^{(m)}\\|_{S^{(m)}}.\n\\]\nWe stop the iteration when this norm falls below the desired error threshold.\n\nTo solve the least-squares problem at the $m$th step, we enumerate the elements \nof the set~$S^{(m)}=\\{s_{k_1},s_{k_2},\\ldots,s_{k_{K-m}}\\}$ and \ndefine the column vectors\n\\[\n\\boldsymbol{S}^{(m)}=\\begin{bmatrix}s_{k_1}\\\\ s_{k_2}\\\\ \\vdots\\\\\ns_{k_{K-m}}\\end{bmatrix}\n\\quad\\text{and}\\quad\n\\boldsymbol{F}^{(m)}=\\begin{bmatrix}f(s_{k_1})\\\\ f(s_{k_2})\\\\ \n\\vdots\\\\ f(s_{k_{K-m}})\\end{bmatrix},\n\\]\nso that\n\\begin{equation}\\label{eq: Aw}\nf(s_{k_i})d^{(m)}(s_{k_i})-n^{(m)}(s_{k_i})\n    =\\sum_{j=1}^m\\frac{F^{(m)}_i-f_j}{S^{(m)}_i-z_j}\\,w_j\n    =(A^{(m)}\\boldsymbol{w})_i,\n\\end{equation}\nwhere the $(K-m)\\times m$ matrix~$A^{(m)}$ has entries\n\\[\n(A^{(m)})_{ij}=\\frac{F^{(m)}_i-f_j}{S^{(m)}_i-z_j}\n\\quad\\text{for $1\\le i\\le K-m$ and $1\\le j\\le m$.}\n\\]\nWe compute the thin singular value decomposition $A^{(m)}=U\\Sigma V^*$,\nso $\\Sigma=\\diag(\\sigma_1,\\sigma_2,\\ldots,\\sigma_m)$ is $(K-m)\\times m$ and \n\\[\n\\|A^{(m)}\\boldsymbol{w}\\|^2=\\|\\Sigma V^*\\boldsymbol{w}\\|^2\n    =\\sum_{j=1}^m\\bigl|\\sigma_j\\boldsymbol{v}_j^*\\boldsymbol{w}\\bigr|^2,\n\\]\nwhere $\\boldsymbol{v}_j$ is the $j$th column of~$V$.  Since\n$\\sigma_1\\ge\\sigma_2\\ge\\cdots\\ge\\sigma_m$, the norm is minimised by choosing\n$\\boldsymbol{w}=\\boldsymbol{v}_m$, in which case\n\\[\n\\|fd^{(m)}-n^{(m)}\\|_{S^{(m)}}=\\|A^{(m)}\\boldsymbol{w}^{(m)}\\|=\\sigma_m.\n\\]\n\nIn an alternative, non-interpolatory approach we set\n\\[\nn(z)=\\sum_{j=1}^m\\frac{\\alpha_j}{z-z_j}\\quad\\text{and}\\quad\nd(z)=\\sum_{j=1}^m\\frac{\\beta_j}{z-z_j},\n\\]\nso that instead of~\\eqref{eq: Aw},\n\\[\nf(s_i)d^{(m)}(s_i)-n^{(m)}(s_i)=\\sum_{j=1}^m\n\\frac{\\beta_jF_i-\\alpha_j}{s_i-z_j}\n    =\\bigl(A_1^{(m)}\\boldsymbol{\\alpha}+A_2^{(m)}\\boldsymbol{\\beta}\\bigr)_i,\n\\]\nwhere the $K\\times m$~matrices $A_1^{(m)}$~and $A_2^{(m)}$ have entries\n\\[\n(A_1^{(m)})_{ij}=\\frac{-1}{s_i-z_j}\\quad\\text{and}\\quad\n(A_2^{(m)})_{ij}=\\frac{F_i}{s_i-z_j}.\n\\]\nLet $A^{(m)}=[A_1^{(m)}\\quad A_2^{(m)}]$ so that\n\\[\nA_1^{(m)}\\boldsymbol{\\alpha}+A_2^{(m)}\\boldsymbol{\\beta}\n    =A^{(m)}\\begin{bmatrix}\\boldsymbol{\\alpha}\\\\ \\boldsymbol{\\beta} \n\\end{bmatrix}.\n\\]\nWe compute the SVD $A^{(m)}=U\\Sigma V^*$ and put\n\\[\n\\begin{bmatrix}\\boldsymbol{\\alpha}\\\\ \\boldsymbol{\\beta}\\end{bmatrix}\n    =V\\boldsymbol{e}_{2m}.\n\\]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{A rational Remez algorithm}\nFor a function~$f:[a,b]\\to\\mathbb{R}$ we seek a rational approximation in \nbarycentric form,\n\\[\nr(x)=\\frac{N(x)}{D(x)},\\qquad\nN(x)=\\sum_{k=0}^n\\frac{\\alpha_k}{x-t_k},\\qquad\nD(x)=\\sum_{k=0}^n\\frac{\\beta_k}{x-t_k},\n\\]\nand note that\n\\[\nr(t_k)=\\lim_{x\\to t_k}r(x)=\\frac{\\alpha_k}{\\beta_k}.\n\\]\nThe node polynomial associated with the~$t_k$ is denoted\n\\[\n\\omega_t(x)=\\prod_{k=0}^n(x-t_k),\n\\]\nand we define\n\\[\n\\mu_k(x)=\\frac{\\omega_t(x)}{x-t_k}=\\prod_{\\substack{p=0\\\\ p\\ne k}}^n(x-t_p)\n\\]\nso that $r(x)=p(x)/q(x)$ where \n\\[\np(x)=\\omega_t(x)N(x)=\\sum_{k=0}^n\\alpha_k\\mu_k(x)\n\\quad\\text{and}\\quad\nq(x)=\\omega_t(x)D(x)=\\sum_{k=0}^n\\beta_k\\mu_k(x)\n\\]\nare polynomials of degree~$n$.  
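\nEvaluating $r$ directly in barycentric form avoids forming $p$~and $q$ explicitly; a minimal Python sketch (hypothetical helper name), using the limit $r(t_k)=\\alpha_k/\\beta_k$ at the support points:\n\\begin{verbatim}\ndef bary_eval(x, t, alph, beta):\n    # r(x) = N(x)/D(x) with simple-pole sums; at a support point t_k\n    # return the limiting value alph_k/beta_k.\n    for tk, ak, bk in zip(t, alph, beta):\n        if x == tk:\n            return ak/bk\n    num = sum(ak/(x - tk) for tk, ak in zip(t, alph))\n    den = sum(bk/(x - tk) for tk, bk in zip(t, beta))\n    return num/den\n\\end{verbatim}\n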
The optimal $L_\\infty$ approximation~$r^*$ has \nthe equioscillation property\n\\[\nf(x_l)-r^*(x_l)=(-1)^{l+1}\\lambda\\quad\\text{for $0\\le l\\le 2n+1$,}\n\\]\nwith \n\\[\na\\le x_0<x_1<\\cdots<x_{2n+1}\\le b\\quad\\text{and}\\quad\n\\lambda=\\|f-r^*\\|_{L_\\infty(a,b)},\n\\]\nassuming no degeneracy.\n\nIn the Remez algorithm, we start from candidate points~$x_l$ and seek weights\n$\\alpha_k$~and $\\beta_k$, along with a value~$\\lambda$, such that the resulting\n$r$ has the equioscillation property.  We then update the~$x_l$ to be the \nlocal extrema of this~$r$, and iterate until the amplitudes of the local extrema \nare constant to within a desired tolerance.\n\nThe equioscillation property is equivalent to\n\\begin{equation}\\label{eq: equiosc p q}\nf(x_l)q(x_l)-p(x_l)=(-1)^{l+1}\\lambda q(x_l)\\quad\\text{for $0\\le l\\le2n+1$.}\n\\end{equation}\nNotice that\n\\[\n\\omega_x'(x)=\\frac{d}{dx}\\prod_{l=0}^{2n+1}(x-x_l)\n    =\\sum_{l=0}^{2n+1}\\prod_{\\substack{p=0\\\\p\\ne l}}^{2n+1}(x-x_p)\n\\]\nand in particular\n\\[\n\\omega_x'(x_l)=\\prod_{\\substack{p=0\\\\p\\ne l}}^{2n+1}(x_l-x_p)\n    =\\biggl(\\prod_{p=0}^{l-1}(x_l-x_p)\\biggr)\n    \\biggl(\\prod_{p=l+1}^{2n+1}(x_l-x_p)\\biggr)\n\\]\nso if we let\n\\[\nd_l=\\biggl(\\prod_{\\substack{p=0\\\\ p\\ne l}}^{2n+1}|x_l-x_p|\\biggr)^{-1/2}\n\\]\nthen\n\\[\n\\omega_x'(x_l)=(-1)^{2n+1-l}d_l^{-2}\\quad\\text{for $0\\le l\\le 2n+1$.}\n\\]\nHence, multiplying both sides of~\\eqref{eq: equiosc p q} by~$d_l$ gives\n\\[\nd_lf(x_l)\\sum_{k=0}^n\\mu_k(x_l)\\beta_k-d_l\\sum_{k=0}^n\\mu_k(x_l)\\alpha_k\n    =(-1)^{l+1}d_l\\lambda\\sum_{k=0}^n\\mu_k(x_l)\\beta_k\n\\]\nwhich is equivalent to the vector equation\n\\begin{equation}\\label{eq: equiosc matrix}\nDFM\\boldsymbol{\\beta}-DM\\boldsymbol{\\alpha}=\\lambda DSM\\boldsymbol{\\beta},\n\\end{equation}\nwhere\n\\[\n\\boldsymbol{\\alpha}=\\begin{bmatrix}\n\\alpha_0\\\\ \\alpha_1\\\\ \\vdots\\\\\\alpha_n \\end{bmatrix},\\qquad\n\\boldsymbol{\\beta}=\\begin{bmatrix}\n\\beta_0\\\\ \\beta_1\\\\ \\vdots\\\\ \\beta_n\\end{bmatrix},\\qquad\nM=\\begin{bmatrix}\n\\mu_0(x_0)&\\mu_1(x_0)&\\cdots&\\mu_n(x_0)\\\\\n\\mu_0(x_1)&\\mu_1(x_1)&\\cdots&\\mu_n(x_1)\\\\\n\\vdots    &\\vdots    &\\ddots&\\vdots\\\\\n\\mu_0(x_{2n+1})&\\mu_1(x_{2n+1})&\\cdots&\\mu_n(x_{2n+1})\n\\end{bmatrix}\n\\]\nand \n\\[\nD=\\diag[d_l]_{l=0}^{2n+1},\\qquad\nF=\\diag[f(x_l)]_{l=0}^{2n+1},\\qquad\nS=\\diag[(-1)^{l+1}]_{l=0}^{2n+1}.\n\\]\n\nTake the thin $QR$ decomposition\n\\[\nDM=Q_1R,\n\\]\nand put $Q_2=SQ_1$.  It can be shown~\\cite[Lemma~4.3]{FilipEtAl2018} that\n$M^\\top SD^2M=0$, and therefore\n\\[\n0=(D^{-1}Q_1R)^\\top SD(Q_1R)=R^\\top Q_1^\\top D^{-1}SDQ_1R\n=R^\\top(SQ_1)^\\top Q_1R=R^\\top(Q_2^\\top Q_1)R.\n\\]\nSince $R$ is non-singular~\\cite[Theorem~4.4]{FilipEtAl2018}, it follows that \n\\[\nQ_2^\\top Q_1=0.\n\\]\nRewriting \\eqref{eq: equiosc matrix} as\n\\begin{equation}\\label{eq: equiosc Q1}\nFQ_1R\\boldsymbol{\\beta}-Q_1R\\boldsymbol{\\alpha}\n    =\\lambda SQ_1R\\boldsymbol{\\beta},\n\\end{equation}\nwe multiply on the left by~$Q_2^\\top=Q_1^\\top S$ to \neliminate~$\\boldsymbol{\\alpha}$ and obtain\n\\[\nQ_1^\\top(SF)Q_1R\\boldsymbol{\\beta}=\\lambda Q_1^\\top S^2Q_1R\\boldsymbol{\\beta}\n    =\\lambda R\\boldsymbol{\\beta}.\n\\]\nEquivalently,\n\\[\nQ_1^\\top(SF)Q_1\\boldsymbol{y}=\\lambda\\boldsymbol{y}\n\\quad\\text{where}\\quad\\boldsymbol{y}=R\\boldsymbol{\\beta},\n\\]\nso we can compute $\\boldsymbol{\\beta}$ and $\\lambda$ by solving a symmetric \neigenvalue problem.  
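\nIn numpy terms (an illustrative sketch, not the package's code), this linear-algebra step reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef remez_linear_step(Mmat, d, fvals):\n    # Thin QR of DM, then the symmetric eigenproblem\n    # Q1^T (S F) Q1 y = lambda y; beta solves R beta = y.\n    D = np.diag(d)\n    S = np.diag([(-1.0)**(l + 1) for l in range(len(d))])\n    F = np.diag(fvals)\n    Q1, R = np.linalg.qr(D @ Mmat)\n    lam, Y = np.linalg.eigh(Q1.T @ S @ F @ Q1)\n    # Each column of Y gives a candidate (lambda, beta); the admissible\n    # pair is identified by the sign test on D^{-1} Q1 y described below.\n    betas = np.linalg.solve(R, Y)\n    return lam, betas\n\\end{verbatim}\n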
Multiplying \\eqref{eq: equiosc Q1} on the left \nby~$Q_1^\\top$ gives\n\\[\nQ_1^\\top FQ_1\\boldsymbol{y}-R\\boldsymbol{\\alpha}\n    =\\lambda Q_1^\\top SQ_1\\boldsymbol{y}\n    =\\lambda Q_2^\\top Q_1\\boldsymbol{y}=\\boldsymbol{0},\n\\]\nand so we can determine $\\boldsymbol{\\alpha}$ by solving\n\\[\nR\\boldsymbol{\\alpha}=Q_1^\\top FQ_1\\boldsymbol{y}.\n\\]\nIt can be shown that at most one eigenpair yields $q(x)=\\omega_t(x)D(x)$ with \nno root in~$[a,b]$.  We can determine this eigenpair by examining the signs of\n\\[\nq(x_l)=\\sum_{k=0}^n\\beta_k\\mu_k(x_l)=(M\\boldsymbol{\\beta})_l\n    =(D^{-1}Q_1\\boldsymbol{y})_l.\n\\]\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\printbibliography\n\\end{document}\n\n", "meta": {"hexsha": "763452baeab2541d8435aa28bb87a215839fbb59", "size": 43706, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/MLnotes.tex", "max_stars_repo_name": "billmclean/MittagLefflerFunctions.jl", "max_stars_repo_head_hexsha": "5244e7fce7efeee160edfc76eb7cab5e7624ae8e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/MLnotes.tex", "max_issues_repo_name": "billmclean/MittagLefflerFunctions.jl", "max_issues_repo_head_hexsha": "5244e7fce7efeee160edfc76eb7cab5e7624ae8e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/MLnotes.tex", "max_forks_repo_name": "billmclean/MittagLefflerFunctions.jl", "max_forks_repo_head_hexsha": "5244e7fce7efeee160edfc76eb7cab5e7624ae8e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.47102526, "max_line_length": 81, "alphanum_fraction": 0.5960280053, "num_tokens": 18838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.566625383474604}}
{"text": "\\chapter{parElm}\n\n\\section{Introduction}\n\nA linearized process may have parameters that do not affect the behavior of that process in any way.\nThese parameters are called \\emph{inert}.\n\nRemoving inert parameters from a process has the advantages of fewer states, smaller state vectors (and therefore reduced memory usage), and faster performance in general.\nThese advantages may be small; however, detection and removal of inert parameters can be done fairly cheaply.\n\n\\section{Algorithm}\n\nThe algorithm consists of the following steps \\cite{groote2001computer}:\n\n\\begin{enumerate}\n\n\\item Mark all process parameters.\n(This means that, initially, we assume that all process parameters are inert.)\n\n\\item Unmark all process parameters that occur in the guard of a summand.\n\n\\item Consider the assignments that occur as part of the recursive process instantiations of the summands of the LPE.\nUnmark all process parameters that occur in the expression of which the value is assigned to an \\emph{unmarked} process parameter.\n\n\\item Repeat the previous step until no process parameter is unmarked.\nAll remaining marked process parameters can be safely removed the process.\n\n\\end{enumerate}\n\n\\clearpage\n\\section{Example}\n\nConsider the following LPE:\n\n\\begin{lstlisting}\n//Process definition:\nPROCDEF example[A :: Int, B](x, y, z :: Int)\n  = A ? i [[x==0]] >-> example[A, B](i, y, z)\n  + A ? i [[x==1]] >-> example[A, B](0, i, z)\n  + B [[x==2]] >-> example[A, B](0, y, z)\n  + B >-> example[A, B](z, y, x)\n  ;\n\n//Initialization:\nexample[A, B](0, 0, 0);\n\\end{lstlisting}\n\nFirst, $x$ is unmarked, because it occurs in the guards of the first three summands.\n\nIn the fourth summand, $z$ is used in the expression of which the value is assigned to $x$.\nTherefore $z$ must also be unmarked.\n\n$y$ remains marked: it does not occur in a guard, nor is it used in the assignment to a process parameter other than itself.\nRemoving $y$ gives\n\n\\begin{lstlisting}\n//Process definition:\nPROCDEF example[A :: Int, B](x, z :: Int)\n  = A ? i [[x==0]] >-> example[A, B](i, z)\n  + A ? 
i [[x==1]] >-> example[A, B](0, z)\n  + B [[x==2]] >-> example[A, B](0, z)\n  + B >-> example[A, B](z, x)\n  ;\n\n//Initialization:\nexample[A, B](0, 0);\n\\end{lstlisting}\n\n\n\n", "meta": {"hexsha": "f3fec7b4f66f86162ef6a17e7ecc130f50027c46", "size": 2227, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "_tex/lpeopsDoc/parElm.tex", "max_stars_repo_name": "Sercammus/TxsLpeOps", "max_stars_repo_head_hexsha": "3354f2762cf195e571f4c05040ec500165969359", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_tex/lpeopsDoc/parElm.tex", "max_issues_repo_name": "Sercammus/TxsLpeOps", "max_issues_repo_head_hexsha": "3354f2762cf195e571f4c05040ec500165969359", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_tex/lpeopsDoc/parElm.tex", "max_forks_repo_name": "Sercammus/TxsLpeOps", "max_forks_repo_head_hexsha": "3354f2762cf195e571f4c05040ec500165969359", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.3661971831, "max_line_length": 171, "alphanum_fraction": 0.709025595, "num_tokens": 607, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7718434978390747, "lm_q1q2_score": 0.566625383474604}}
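As promised above, here is a small Python sketch of the fixpoint computation. The data model is hypothetical (guards as sets of parameter names, assignments as per-summand dictionaries) and is not the TorXakis representation; running it on the example reproduces the result that only $y$ is inert.

\begin{lstlisting}
def par_elm(params, guards, assignments):
    # guards: per summand, the parameters occurring in its guard.
    # assignments: per summand, a dict target parameter -> parameters
    # occurring in the expression assigned to it.
    marked = set(params)                   # step 1: assume all inert
    for g in guards:                       # step 2: guards are relevant
        marked -= g
    changed = True
    while changed:                         # steps 3-4: iterate to a fixpoint
        changed = False
        for summand in assignments:
            for target, used in summand.items():
                if target not in marked and marked & used:
                    marked -= used         # feeds a relevant parameter
                    changed = True
    return marked                          # still marked => inert

guards = [{"x"}, {"x"}, {"x"}, set()]
assignments = [
    {"x": set(), "y": {"y"}, "z": {"z"}},  # example[A, B](i, y, z)
    {"x": set(), "y": set(), "z": {"z"}},  # example[A, B](0, i, z)
    {"x": set(), "y": {"y"}, "z": {"z"}},  # example[A, B](0, y, z)
    {"x": {"z"}, "y": {"y"}, "z": {"x"}},  # example[A, B](z, y, x)
]
print(par_elm(["x", "y", "z"], guards, assignments))   # {'y'}
\end{lstlisting}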
{"text": "\\documentclass[a4paper]{scrartcl}\n\n% Text\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage[T1]{fontenc}\n\n% Math\n\\usepackage{amsmath, amssymb}\n\\usepackage{bm}\n\t\\newcommand{\\vv}[1]{\\ensuremath{\\bm{#1}}}\n\t\\newcommand\\lphi{\\ensuremath{\\phi^{\\text{left}}}}\n\t\\newcommand\\lH{\\ensuremath{H^{\\text{left}}}}\n\t\\newcommand\\lh{\\ensuremath{h^{\\text{left}}}}\n\n\t\\newcommand\\rphi{\\ensuremath{\\phi^{\\text{right}}}}\n\t\\newcommand\\rH{\\ensuremath{H^{\\text{right}}}}\n\t\\newcommand\\rh{\\ensuremath{h^{\\text{right}}}}\n\n\t\\newcommand\\R{\\ensuremath{\\mathbb{R}}}\n\t\\DeclareMathOperator\\supp{supp}\n\n\\usepackage{graphicx}\n\n\\usepackage[colorlinks=true]{hyperref}\n\n\\usepackage{cleveref}\n\n\\usepackage{biblatex}\n\\bibliography{literature.bib}\n\n\\title{Daubechies wavelets}\n\\author{Robert Dahl Jacobsen}\n\n\\begin{document}\n\n\\maketitle\n\nThe purpose of the Julia package \\href{https://github.com/robertdj/IntervalWavelets.jl}{IntervalWavelets.jl} is to compute ordinary Daubechies scaling functions (see e.g.\\ \\cite{Mallat:2009}) and the moment-preserving boundary scaling functions from \\cite{Cohen:Daubechies:Vial:1993}.\nA common approach is to use the inverse discrete wavelet transform on a unit vector in $R^{2^n}$, but this only guarantees a pointwise \\emph{approximation} to the scaling functions.\n\nThe alternative used here is to rely solely on the recursive definitions as in \\cite{Strang:1989}.\nIn this short note I summarize these methods with all the details needed in the implementation.\n\n\n\\section{Interior scaling functions}\n\\label{sec:internal}\n\nA Daubechies scaling function $\\phi$ and associated wavelet $\\psi$ with $p$ vanishing moments are defined by a filter $\\{h_k\\}_k$.\nThe filter, scaling function and wavelet have supports of the same lengths, and we know from \\cite[Theorem 7.5]{Mallat:2009} that if $\\supp \\{h_k\\}_k = [N_1, N_2]$, then \n\\begin{equation*}\n\t\\supp\\phi = [N_1, N_2],\n\t\\quad\n\t\\supp\\psi = \\Bigl[\\frac{N_1-N_2+1}2, \\frac{N_2-N_1+1}2\\Bigr].\n\\end{equation*}\nIt is customary to let $N_1 = 0$ and $N_2 = 2p-1$.\nHowever, when constructing the boundary scaling functions we have $N_1 = -p+1$ and $N_2 = p$.\n\nThe scaling function satisfies the dilation equation\n\\begin{equation}\n\t\\label{eq:internal_scaling_function_definition}\n\t\\phi(x) \n\t= \\sqrt2 \\sum_{k=0}^{2p-1} h_k \\phi_k(2x)\n\t= \\sqrt2 \\sum_{k=0}^{2p-1} h_k \\phi(2x - k).\n\\end{equation}\nFor $p\\geq2$, $\\phi$ is continuous and hence zero at the endpoints of the support.\nThese properties allow us to compute $\\phi$ at the integer values (in the support).\nAs an example, for $p=3$:\n\\begin{align*}\n\t\\frac1{\\sqrt2} \\phi(1) \n\t& = h_1\\phi(1) + h_0\\phi(2),\n\t\\\\\n\t\\frac1{\\sqrt2} \\phi(2)\n\t& = h_4\\phi(1) + h_3\\phi(2) + h_2\\phi(3) + h_1\\phi(4),\n\t\\\\\n\t\\frac1{\\sqrt2} \\phi(3)\n\t& = h_5\\phi(1) + h_4\\phi(2) + h_3\\phi(3) + h_2\\phi(4),\n\t\\\\\n\t\\frac1{\\sqrt2} \\phi(4)\n\t& = h_5\\phi(3) + h_4\\phi(4).\n\\end{align*}\nIn matrix form, we have an eigenvalue problem:\n\\begin{equation*}\n\t\\begin{bmatrix}\n\t\t\\phi(1) \\\\ \\phi(2) \\\\ \\phi(3) \\\\ \\phi(4)\n\t\\end{bmatrix}\n\t=\n\t\\sqrt2\n\t\\begin{bmatrix}\n\t\th_1 & h_0 & 0 & 0\n\t\t\\\\\n\t\th_3 & h_2 & h_1 & h_0\n\t\t\\\\\n\t\th_5 & h_4 & h_3 & h_2\n\t\t\\\\\n\t\t0 & 0 & h_5 & h_4\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\t\t\\phi(1) \\\\ \\phi(2) \\\\ \\phi(3) \\\\ \\phi(4)\n\t\\end{bmatrix}.\n\\end{equation*}\nThe 
$(i,j)$'th entry of the matrix is $\\sqrt2 h_{2i-j}$ and the vector $\\vv\\phi = [\\phi(n)]_{n=1}^4$ is an eigenvector for the eigenvalue 1.\nThis eigenspace is one-dimensional, so the only question is how to scale $\\vv\\phi$.\nFrom e.g.\\ \\cite[page 69]{Cohen:Daubechies:Vial:1993} we know that\n\\begin{equation*}\n\t\\sum_{k\\in\\mathbb{Z}} \\phi(k)\n\t= \\sum_{k=0}^{2p-1} \\phi(k)\n\t= 1.\n\\end{equation*}\n\nFrom the function values at the integers we can compute the function values at the half-integers using \\eqref{eq:internal_scaling_function_definition}.\nAs an example,\n\\begin{equation*}\n\t\\frac1{\\sqrt2} \\phi\\Bigl(\\frac32\\Bigr)\n\t= h_0 \\phi(3) + h_1 \\phi(2) + h_2 \\phi(1).\n\\end{equation*}\nThis process can be repeated to recursively yield $\\phi(k/2^R)$, for all integers $k$ and positive integers $R$.\n\nNote that the filter $\\{h_k\\}_k$ defining the scaling function is not unique.\nIn \\cref{fig:Daubechies4} is shown the usual, minimum-phase Daubechies 4 scaling function along with the Daubechies 'symmlet'/linear-phase scaling function used in \\cref{sec:boundary_Daubechies} and \\cite{Cohen:Daubechies:Vial:1993} -- see e.g.\\ \\cite[Section 7.2.3]{Mallat:2009}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{interior}\n\t\\caption{The usual minimum-phase Daubechies 4 scaling function (black) and the symmlet version (red).}\n\t\\label{fig:Daubechies4}\n\\end{figure}\n\n\n\\section{Boundary scaling functions}\n\\label{sec:boundary_Daubechies}\n\nThe moment-preserving Daubechies boundary scaling functions were introduced in \\cite{Cohen:Daubechies:Vial:1993} and are also described in \\cite{Mallat:2009} (albeit with some indexing errors).\n\nAn important difference between the internal and boundary scaling functions is that the left (right) boundary scaling functions are \\emph{not} continuous at the left (right) endpoint of their support.\n\nAs in \\cref{sec:internal}, the dilation equations defining the boundary scaling functions can yield function values at all dyadic rationals once we have the function values at the integers (in the support).\nIn the subsequent sections the focus is therefore on how to compute these function values at the integers.\n\nThe filters used for the boundary scaling functions are available at \\url{http://www.pacm.princeton.edu/~ingrid/publications/54.txt} and \\url{http://numerical.recipes/contrib}.\n\n\n\\subsection{Left boundary scaling functions}\n\nLet $p$ denote the number of vanishing moments and $\\phi$ be the interior symmlet Daubechies scaling function associated with the wavelet with $p$ vanishing moments, translated such that $\\supp\\phi = [-p+1, p]$.\n\nWe want a family of functions satisfying a multiresolution analysis on $L^2([0,\\infty))$ or, equivalently, a dilation equation like \\cref{eq:internal_scaling_function_definition}.\nThe starting point is $\\{\\phi_k\\}_{k\\geq 0}$.\nThe functions $\\phi_k$ with $\\supp\\phi_k \\subseteq [0,\\infty)$ do not need any alteration, but the $\\phi_k$ with $\\supp\\phi_k \\cap (-\\infty,0) \\neq \\emptyset$ (i.e., with $0\\leq k < p-1$) must be replaced with a corresponding $\\lphi_k$.\nIt turns out that we should also replace $\\phi_{p-1}$ with $\\lphi_{p-1}$ in order to keep the number of vanishing moments even though $\\supp\\phi_{p-1} = [0,2p-1]$.\nThe boundary scaling functions are constructed such that $\\supp\\bigl(\\lphi_k\\bigr) = [0,p+k]$.\n\nThe relevant counterpart to the dilation equation \\cref{eq:internal_scaling_function_definition} for interior 
scaling functions is\n\\begin{align}\n\t\\frac1{\\sqrt2} \\lphi_k(x)\n\t& = \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(2x) + \\sum_{m=p}^{p+2k} \\lh_{k,m} \\phi_m(2x)\n\t\\nonumber\n\t\\\\\n\t& = \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(2x) + \\sum_{m=p}^{p+2k} \\lh_{k,m} \\phi(2x-m),\n\t\\label{eq:left_scaling_function_definition}\n\\end{align}\nfor $0\\leq k\\leq p-1$, where $(\\lH_{k,l})$ and $(\\lh_{k,m})$ are filter coefficients.\n\nFor $x=0$, \\eqref{eq:left_scaling_function_definition} becomes\n\\begin{equation*}\n\t\\frac1{\\sqrt2} \\lphi_k(0)\n\t= \\sum_{l=0}^{p-1} \\lH_{k,l} \\lphi_l(0)\n\\end{equation*}\nand these function values can be found as an eigenvector.\n% As in Section \\ref{sec:internal}, we must scale this eigenvector such that the $\\lphi_k$'s are normalized in $L^2([0,\\infty))$.\n\nAt the remaining integers we make use of the compact support.\nConsider e.g.\\ the case $p=2$ (where $\\phi$ is supported on $[-1,2]$, $\\lphi_0$ is supported on $[0,2]$ and $\\lphi_1$ is supported on $[0,3]$):\n\\begin{align*}\n\t\\frac1{\\sqrt2} \\lphi_0(1)\n\t& = \\lH_{0,1} \\lphi_1(2) + \\lh_{0,2} \\phi(0),\n\t\\\\\n\t\\frac1{\\sqrt2} \\lphi_1(1)\n\t& = \\lH_{1,1} \\lphi_1(2) + \\lh_{1,2} \\phi(0),\n\t\\\\\n\t\\frac1{\\sqrt2} \\lphi_1(2)\n\t& = \\lh_{1,3} \\phi(1) + \\lh_{1,4} \\phi(0).\n\\end{align*}\nFrom Section \\ref{sec:internal} we know how to calculate the internal scaling function and thus the boundary scaling function as well.\nThe four boundary scaling functions related to four vanishing moments are seen in \\cref{fig:left_Daubechies4}.\nThere is a strong resemblance between $\\lphi_3$ and the symmlet scaling function in \\cref{fig:Daubechies4} (here denoted $\\phi_4$).\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{left}\n\t\\caption{The left boundary scaling functions with 4 vanishing moments.}\n\t\\label{fig:left_Daubechies4}\n\\end{figure}\n\n\n\\subsection{Right boundary scaling functions}\n\nLet again $\\phi$ denote the interior symmlet Daubechies scaling function and $p$ denote the number of vanishing moments of the associated wavelet.\nThe idea for the right boundary scaling functions is the same as for the left:\nWe want a multiresolution analysis on $L^2((-\\infty,0])$, obtained by modifying the interior scaling functions.\nThe $\\phi_k$ with $\\supp\\phi_k \\subset (-\\infty,0)$ are unaltered, but those with $\\supp\\phi_k \\cap [0, \\infty) \\neq \\emptyset$ are replaced by a corresponding $\\rphi_k$.\nIn conclusion, for $k=0,\\ldots,p-1$, the right boundary scaling functions satisfy the dilation equations\n\\begin{equation}\n\t\\label{eq:right_scaling_function_definition}\n\t\\frac1{\\sqrt2} \\rphi_k(x)\n\t= \\sum_{l=0}^{p-1} \\rH_{k,l} \\rphi_l(2x) + \\sum_{m=p}^{p+2k} \\rh_{k,m} \\phi(2x+m+1),\n\\end{equation}\nwhere $(\\rH_{k,l})$ and $(\\rh_{k,m})$ are filter coefficients.\nThe support of $\\rphi_k$ is $[-p-k,0]$.\n\nThe four right boundary scaling functions related to four vanishing moments are seen in \\cref{fig:right_Daubechies4}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[scale=0.5]{right}\n\t\\caption{The right boundary scaling functions with 4 vanishing moments.}\n\t\\label{fig:right_Daubechies4}\n\\end{figure}\n\n\n\\printbibliography\n\n\\end{document}\n\n", "meta": {"hexsha": "d890578dd9f679ea6f816e5e173c9df9430d8741", "size": 9693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/boundary_wavelets.tex", "max_stars_repo_name": "UnofficialJuliaMirrorSnapshots/IntervalWavelets.jl-8574dada-bbe2-5b14-8c04-14b17cae11e5", 
"max_stars_repo_head_hexsha": "f456d0bbb58d8c4dc53dd307ef28add49176a4f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/boundary_wavelets.tex", "max_issues_repo_name": "UnofficialJuliaMirrorSnapshots/IntervalWavelets.jl-8574dada-bbe2-5b14-8c04-14b17cae11e5", "max_issues_repo_head_hexsha": "f456d0bbb58d8c4dc53dd307ef28add49176a4f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/boundary_wavelets.tex", "max_forks_repo_name": "UnofficialJuliaMirrorSnapshots/IntervalWavelets.jl-8574dada-bbe2-5b14-8c04-14b17cae11e5", "max_forks_repo_head_hexsha": "f456d0bbb58d8c4dc53dd307ef28add49176a4f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.08, "max_line_length": 284, "alphanum_fraction": 0.7188692871, "num_tokens": 3351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195152660687, "lm_q2_score": 0.7718434873426302, "lm_q1q2_score": 0.5666253667892437}}
{"text": "\\chapter{Gradient Methods for CMDP}\n\\label{gradient_cmdp}\n\\thispagestyle{empty}\nIn this section we provide the straightforward extension of REINFORCE and G(PO)MDP gradient estimation optimizing the policy and the configuration parameters.\n\\section{REINFORCE}\nLet us start by stating the expression of the gradient of the expected return with respect to a parametric transition model differentiable in its parameters $\\boldsymbol{\\omega}$.\n\\begin{theorem}[P-Gradient Theorem, from \\citep{cmdp}] Let $P_\\omega$ be a class of parametric stochastic transition models differentiable in $\\boldsymbol{\\omega}$, $\\pi$ be the current policy, the gradient of the expected return with respect to $\\boldsymbol{\\omega}$ is given by:\n$$\n\\nabla_{\\boldsymbol{\\omega}}J^{P,\\pi} = \\int_{\\mathcal{S}}\\int_{\\mathcal{A}} d^{P,\\pi}(s,a) \\int_{\\mathcal{S}} \\nabla_{\\boldsymbol{\\omega}}P_{\\boldsymbol{\\omega}}(s'|s,a)u^{P,\\pi}(s,a,s') \\mathrm{d}s' \\mathrm{d}a \\mathrm{d}s\n$$\n\t\n\\end{theorem}\n\nWe can now derive in a straightforward manner the REINFORCE estimator for model learning:\n\\begin{equation}\n\\widehat{\\nabla_{\\boldsymbol{\\omega}} J^{P,\\pi}}_{RF} = \\langle \\left(\\sum_{k=0}^H \\nabla_{\\bm{\\omega}} \\log P_{\\boldsymbol{\\omega}}(s_{k+1} |s_k, a_k) \\right) \\left( \\sum_{k=0}^H \\gamma^k R(s_k,a_k, s_{k+1}) \\right) \\rangle_N \\, ,\n\\end{equation}\nwhere $\\langle \\cdot \\rangle_N$ denotes the empirical average over a batch size of dimension $N$.\n\n\\section{G(PO)MDP}\n\\label{gpomdp_cmdp}\nIn order to derive the G(PO)MDP estimator for model learning we start from a trajectory based perspective:\n\n$$\nJ^{P,\\pi} = \\int p_{\\boldsymbol{\\theta},\\boldsymbol{\\omega}}(\\tau)G(\\tau) \\mathrm{d}\\tau,\n$$\nwhere $p_{\\boldsymbol{\\theta},\\boldsymbol{\\omega}}(\\tau)$ is the probability of the trajectory $\\tau$ under the distribution induced by the parameters $\\boldsymbol{\\theta},\\boldsymbol{\\omega}$ and $G(\\tau)$ is the return of the trajectory $\\tau$.\nUsing the log-trick and taking the derivative with respect to the model parameters we obtain:\n\\begin{align}\n\t\\nabla_{\\boldsymbol{\\omega}}J^{P,\\pi} &= \\int p_{\\boldsymbol{\\theta},\\boldsymbol{\\omega}}(\\tau) \\nabla_{\\boldsymbol{\\omega}}\\log p(\\tau) G(\\tau) \\mathrm{d}\\tau \\\\\n\t&= \\int p_{\\boldsymbol{\\theta},\\boldsymbol{\\omega}}(\\tau) \\left(\\sum_{k=0}^{H} \\log P_{\\boldsymbol{\\omega}}(s_{k+1} | s_k, a_k)  \\right) G(\\tau) .\n\\end{align}\nNow we are exactly in the G(PO)MDP settings and we can use the following approximation of the gradient:\n\\begin{equation}\n\\widehat{\\nabla_{\\boldsymbol{\\omega}} J^{P\\pi}}_{G(PO)MDP} = \\langle \\sum_{l=0}^H \\left( \\sum_{k=l}^H \\nabla_{\\bm{\\omega}} \\log P_{\\boldsymbol{\\omega}}(s_{k+1} | s_k, a_k) \\right) \\left( \\gamma^l R(s_l,a_l, s_{l+1}) \\right) \\rangle_N \\, ,\n\\end{equation}\nwhere $\\langle \\cdot \\rangle_N$ denotes the empirical average over a batch size of dimension $N$.", "meta": {"hexsha": "788dfa9b4a8ef730f8744251a64f1c94ccef9174", "size": 2815, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/cmdp_gpomdp_rf.tex", "max_stars_repo_name": "EmanueleGhelfi/thesis-remps-cmdp", "max_stars_repo_head_hexsha": "1b512b1684cfa6c8bac9a513b7f0f2e9cbc1eed5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/cmdp_gpomdp_rf.tex", "max_issues_repo_name": 
"EmanueleGhelfi/thesis-remps-cmdp", "max_issues_repo_head_hexsha": "1b512b1684cfa6c8bac9a513b7f0f2e9cbc1eed5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/cmdp_gpomdp_rf.tex", "max_forks_repo_name": "EmanueleGhelfi/thesis-remps-cmdp", "max_forks_repo_head_hexsha": "1b512b1684cfa6c8bac9a513b7f0f2e9cbc1eed5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.0810810811, "max_line_length": 280, "alphanum_fraction": 0.7143872114, "num_tokens": 923, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511579973932, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5665074598011822}}
{"text": "\\section{Background}\\label{sec:background}\nWe first introduce the reader to Belief Propagation, then to Top-N recommendation.\n\n\\mypar{Belief Propagation}\nBP is a technique in the domain of machine learning to perform probabilistic inference on graphical models. The standard version of BP is designed to deal with \\textit{factor graphs}, special bipartite graphs that can represent Bayesian Networks or Markov Random Fields. Nodes in a factor graph $G(V,F,E)$ are either variables $v \\in V = \\{ v_1, \\ldots, v_N \\}$ or factors $f \\in F = \\{f_1, \\ldots, f_M \\}$. Edges $e\\in E$ model dependencies between nodes.\n\\textit{Variable nodes} $v$ hold a belief about the variable assigned to them. A belief $p(i):=p[v=x_i\\in\\Omega_{v}]$ is a density function that assigns a probability to each state $x_v \\in \\Omega_v$ of variable $v$. Since the number of states of a variable is often finite this belief can be represented as a vector $v \\in \\mathbb{R}^n$, with $n$ being the number of states. \n\\textit{Factor nodes} $f$, on the other hand, hold a belief about the joint probabilities of their neighbors. \nThis means that if, for example, there are two (discrete) variable nodes $v_a, v_b$ connected to a factor node $f$ the belief can be represented as a matrix, where entry $i,j$ holds the belief about the variables being in state $i$ or $j$ respectively. Figuratively speaking, \\textit{messages} are sent to inform neighboring nodes about the beliefs of the sender. All receivers update their beliefs accordingly and forward the updated messages to their neighbors. This is repeated until messages (and therefore beliefs) converge. \n\n\\begin{figure}\n\\begin{equation*}                                                            \n\\mu_{v\\rightarrow f}(x_v) = \\prod_{\\hat f \\in N(v)\\backslash \\{f\\}} \\mu_{\\hat f\\rightarrow v}(x_v)\n\\end{equation*}\n\\begin{equation*}                                                            \n\\mu_{f\\rightarrow v}(x_v) = \\sum_{x_f \\sim x_v}\\phi_f(x_f) \\prod_{\\hat v \\in N(f)\\backslash \\{v\\}} \\mu_{\\hat v\\rightarrow f}(x_{\\hat v})\n\\end{equation*}\n\\caption{Update equations for the standard BP algorithm. The first one describes messages from variable $v$ to factor $f$ and the second for messages from factor $f$ to variable node $v$, $x_v$ is the specific state for which we want to send our belief, $\\phi_f(x_f)$ denotes the belief of the factor node $f$ about state $x_f$, $\\mu$ is a message, $k \\in N(i)\\backslash j$ denotes all neighbours of $i$ except the neighbouring node $j$, and $x_f \\sim x_v$ is used to denote all possible states that are consistent with state $x_v$.}\n\\label{eqn_bp_message}\n\\end{figure}\n\nConceptually there are two types of messages: $\\mu_{v\\rightarrow f}$ from variable nodes $v$ to factor nodes $f$, and $\\mu_{f\\rightarrow v}$ from factor nodes $f$ to variable nodes $v$. Both update equations are given in figure \\ref{eqn_bp_message}. They show nicely why this version of BP is often called the \\textit{sum-product algorithm}. The update equations involve a product over all neighboring nodes, followed by a sum that marginalizes over all variables except the one to which the message is sent.\n\nThe standard version of BP is designed to deal with acyclic graphs. When graphical models contain loops BP is often called \\textit{Loopy Belief Propagation} (LBP).\n\nCompared to standard BP, loopy BP problems don't always converge and infer much higher computational costs. 
Nevertheless, the approximate solutions are often accurate enough to be used in real-world applications. Elidan et al. \\cite{elidan2012residual} proposed a method to improve the convergence rate by propagating belief in an informed way through the graph. The technique is called \\textit{Residual Belief Propagation} because, for each message, a residual is calculated that estimates the impact of updating it. Therefore, messages with a large residual are updated first. Formally, the residual is defined as the $L_\\infty$ norm of the difference between the current value of a message and its prospective updated value: $||\\mu_{new} - \\mu_{old}||_\\infty$.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{\\columnwidth}\n\t   \\centering\n\t   \\includegraphics[scale=0.33]{graphics/top-n-graph.pdf}%\n\t      \\caption{Factor Graph for predicting Top-N movies for user $\\hat u = u_2$.\n\t      \\label{top_n_graph}\n\t   }\n\t\\end{subfigure}\\hfill%\n\t\\vspace{5mm}\n\t\\begin{subfigure}{\\columnwidth}\n\t   \\centering\n\t   \\includegraphics[scale=0.33]{graphics/top-n-important-messages.pdf}%\n      \t   \\caption{Propagate belief from observed node $m_2$ to unobserved ones.\n\t      \\label{top_n_graph_important_msg}\n\t   }\n\t\\end{subfigure}\\hfill%\n\t\\vspace{5mm}\n\t\\begin{subfigure}{\\columnwidth}\n\t   \\centering\n\t   \\includegraphics[scale=0.33]{graphics/top-n-final.pdf}%\n\t      \\caption{A possible final state: $m_1$ is the top-1 recommendation for $u_2$. \n\t      \\label{top_n_graph_final_state}\n\t   }\n\t\\end{subfigure}\n\t\\caption{Top-N step by step}\n\\end{figure}\n\n\n\\mypar{Top-N Recommendation}\nTop-N Recommendation is the problem of generating a list of recommended items for a user $\\hat u$. The list is based on the available ratings of the user $\\hat u$ and the ratings of the other users. Jiwoon Ha et al. \\cite{Ha:2012:TRT:2396761.2398636} proposed a method for Top-N Recommendation using LBP. In the following, we sketch the method by means of an online video shop with users $u_i$ and movies $m_j$. The factor graph consists of variable nodes for every user and movie. The state of each node can be either\n\\begin{enumerate}\n   \\itemsep0em \n   \\item $\\hat u$ likes this user/movie or\n   \\item $\\hat u$ does not like this user/movie.\n\\end{enumerate}\nMovies are connected with users $u_i\\neq \\hat u$ in the following way: $u_i$ is linked with movie $m_j$ (through a factor) if $u_i$ actually liked it, that is, the rating is considered only if it is above some threshold. The factor between $u_i$ and $m_j$ has four joint states, best represented by a $2\\times 2$ matrix $A$. We set the values $A(1,1), A(2,2) = 0.5 + \\alpha$, with $\\alpha = 0.0001$, to express the belief that user $\\hat u$ also likes movie $m_j$, given that $\\hat u$ likes $u_i$, and the other way around. Furthermore, we set $A(1,2), A(2,1) = 0.5 - \\alpha$ to express our belief that it is unlikely that user $\\hat u$ only likes the movie or the user but not both. $\\alpha$ is a tuning parameter; the value $0.0001$ was recommended by Jiwoon Ha et al. \\cite{Ha:2012:TRT:2396761.2398636}.\n\nFurthermore, we need to incorporate the known ratings of $\\hat u$. For a rating of $\\hat u$ for movie $m_j$, we set the belief of node $m_j$ correspondingly. To exemplify this, assume that ratings vary between 1 and 5. We assign the probability for the two states \\textit{like} and \\textit{dislike} of movie $m_j$ to $[0.9,0.1]$ for a rating of 5. 
Similarly, a rating of 4 is mapped to $[0.7,0.3]$, 3 to $[0.5,0.5]$, 2 to $[0.3,0.7]$ and 1 to $[0.1,0.9]$. The beliefs for all other movies, for which no rating exists, are initialized with $[0.5,0.5]$.\n\nFigure \\ref{top_n_graph} shows a small factor graph for  $\\hat u = u_2$ where user $u_1$ has rated the movies $m_1,m_2$, $u_2$ the movie $m_2$ and $u_3$ the movies $m_1,m_2$ and $m_3$. Given our model, the only other belief available about the system is that user $\\hat u$ rated movie $m_2$ with a 4.\n\nNow, using BP, this initial belief is propagated through the factor graph by sending messages (see Figure \\ref{top_n_graph_important_msg}). Once BP's message passing has converged, each movie has a probability assigned that tells us how likely it is that user $\\hat u$ likes this movie. Figure \\ref{top_n_graph_final_state} shows a possible final state. The movies can be sorted by these probabilities and the top $N$ elements are returned. In this example, movie $m_1$ would be the top-one recommendation for $\\hat u$.\n\n\n\\mypar{Cost Analysis}\nA reasonable choice for the cost measure is the FLOP count, since BP operates on floating point numbers to calculate the beliefs. Unfortunately, it is not viable to calculate the exact number of FLOPS for a given input size analytically, as BP is driven by the list of maximal message residuals and it cannot be foreseen how many messages have to be calculated until convergence. Even worse, the count of processed messages (and, relatedly, the FLOPS) is not only a function of the input \\textit{size}, but also a function of the actual \\textit{shape} of the graph. Therefore, we decided to count FLOPS using performance counters. The following counters were used:\n\n\\begin{itemize}\n\t\\item \\texttt{FP\\_COMP\\_OPS\\_EXE.SSE\\_SCALAR\\_SINGLE} and \\\\* \\texttt{FP\\_COMP\\_OPS\\_EXE.SSE\\_SCALAR\\_DOUBLE}: count the number of SSE instructions on single floats and doubles.\n\t\\item \\texttt{FP\\_COMP\\_OPS\\_EXE.SSE\\_PACKED\\_SINGLE} and \\\\* \\texttt{FP\\_COMP\\_OPS\\_EXE.SSE\\_PACKED\\_DOUBLE}: count the number of packed SSE instructions on floats and doubles.\n\t\\item \\texttt{SIMD\\_FP\\_256.PACKED\\_SINGLE} and \\\\* \\texttt{SIMD\\_FP\\_256.PACKED\\_DOUBLE}: count the number of packed AVX instructions on floats and doubles.\n\t\\item \\texttt{FP\\_COMP\\_OPS\\_EXE.X87}: counts the X87 floating point instructions.\n\\end{itemize}\n\nThe performance counters were evaluated using VTune Amplifier XE 2015 (see section~\\ref{sec:results}).\nSince it is impossible to distinguish the different kinds of operations, all FLOPS are weighted the same.\n", "meta": {"hexsha": "84e293ff658046d4bda735d587882ad9a5a01897", "size": 9912, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/02-background.tex", "max_stars_repo_name": "flurischt/libDAI", "max_stars_repo_head_hexsha": "20683a222e2ef307209290f79081fe428d9c5050", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-05-03T00:17:48.000Z", "max_stars_repo_stars_event_max_datetime": "2015-05-03T00:17:48.000Z", "max_issues_repo_path": "report/02-background.tex", "max_issues_repo_name": "flurischt/libDAI", "max_issues_repo_head_hexsha": "20683a222e2ef307209290f79081fe428d9c5050", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/02-background.tex", "max_forks_repo_name": "flurischt/libDAI", "max_forks_repo_head_hexsha": "20683a222e2ef307209290f79081fe428d9c5050", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.12, "max_line_length": 808, "alphanum_fraction": 0.74354318, "num_tokens": 2701, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744806385542, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5662864148170138}}
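To illustrate the residual scheduling idea in isolation (this is a sketch, not how libDAI implements it), here is a compact Python version of residual-driven message passing with a lazily updated max-heap; `recompute` and `dependents` are hypothetical interfaces standing in for the sum-product update equations and the graph structure.

\begin{verbatim}
import heapq
import numpy as np

def residual_bp(messages, dependents, recompute, tol=1e-6, max_updates=100000):
    # messages:   dict edge -> current message vector
    # dependents: dict edge -> edges whose update equation reads this message
    # recompute:  function (edge, messages) -> prospective new message
    def residual(e):
        return float(np.max(np.abs(recompute(e, messages) - messages[e])))

    res = {e: residual(e) for e in messages}
    heap = [(-r, e) for e, r in res.items()]
    heapq.heapify(heap)
    for _ in range(max_updates):
        if not heap:
            break
        neg_r, e = heapq.heappop(heap)
        if -neg_r != res[e]:
            continue                 # stale heap entry, skip it
        if res[e] < tol:
            break                    # largest residual below tol: converged
        messages[e] = recompute(e, messages)
        res[e] = 0.0                 # its own residual is now zero
        for d in dependents[e]:      # neighbours' residuals have changed
            res[d] = residual(d)
            heapq.heappush(heap, (-res[d], d))
    return messages
\end{verbatim}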
{"text": "\n\\subsection{Autoregressive Integrated Moving Average models (ARIMA)}\n\nUses differences to remove non statiority\n\nAlso estiamted with box-jenkins\n\n", "meta": {"hexsha": "26f0e655399b75466a3142e3e0bb62422935d0d8", "size": 147, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/probability/stochasticWold/03-02-ARIMA.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/probability/stochasticWold/03-02-ARIMA.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/forecastingUni/04-02-ARIMA.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.375, "max_line_length": 68, "alphanum_fraction": 0.8231292517, "num_tokens": 33, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8175744850834649, "lm_q2_score": 0.6926419831347362, "lm_q1q2_score": 0.5662864127085719}}
{"text": "%!TEX root = forallxsol.tex\n%\\part{Truth-functional logic}\n%\\label{ch.TFL}\n%\\addtocontents{toc}{\\protect\\mbox{}\\protect\\hrulefill\\par}\n\n\\setcounter{chapter}{9}\n\\chapter{Complete truth tables}\\setcounter{ProbPart}{0}\n\\problempart\n\\label{pr.TT.TTorC}\nComplete truth tables for each of the following:\n\\begin{earg}\n\\item $A \\eif A$ %taut\n\\myanswer{\\begin{center}\n\\begin{tabular}{c | def}\n$A$ & $A$&$\\eif$&$A$\\\\\n\\hline\n T & T&\\TTbf{T}&T\\\\\nF & F&\\TTbf{T}&F\n\\end{tabular}\n\\end{center}}\n\\item $C \\eif\\enot C$ %contingent\n\\myanswer{\\begin{center}\n\\begin{tabular}{c | d e e f}\n$C$ & $C$&$\\eif$&$\\enot$&$C$\\\\\n\\hline\n T & T & \\TTbf{F}& F& T\\\\\nF & F & \\TTbf{T}& T& F\\\\\n\\end{tabular}\n\\end{center}}\n\\item $(A \\eiff B) \\eiff \\enot(A\\eiff \\enot B)$ %tautology\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c | d e e e e e e e f}\n$A$ & $B$&$(A$&$\\eiff$&$B)$&$\\eiff$&$\\enot$&$(A$&$\\eiff$&$\\enot$&$B)$ \\\\\n\\hline\nT & T & T & T & T & \\TTbf{T} & T & T & F & F & T\\\\\nT & F & T & F & F & \\TTbf{T} & F & T & T & T & F\\\\\nF & T & F & F & T & \\TTbf{T} & F & F & T & F & T\\\\\nF & F & F & T & F & \\TTbf{T} & T & F & F & T & F\n \\end{tabular}\n\\end{center}}\n\\item $(A \\eif B) \\eor (B \\eif A)$ % taut\n\\myanswer{\n\\begin{center}\n\\begin{tabular}{c c | d e e e e e f}\n$A$ & $B$&$(A$&$\\eif$&$B)$&$\\eor$&$(B$&$\\eif$&$A)$ \\\\\n\\hline\nT & T & T & T & T & \\TTbf{T} & T & T & T\\\\\nT & F & T & F & F & \\TTbf{T} & F & T & T\\\\\nF & T & F & T & T & \\TTbf{T} & T & F & F \\\\\nF & F & F & T & F & \\TTbf{T} & F & T & F\n \\end{tabular}\n\\end{center}}\n\\item $(A \\eand B) \\eif (B \\eor A)$  %taut\n\\myanswer{\n\\begin{center}\n\\begin{tabular}{c c | d e e e e e f}\n$A$ & $B$&$(A$&$\\eand$&$B)$&$\\eif$&$(B$&$\\eor$&$A)$ \\\\\n\\hline\nT & T & T & T & T & \\TTbf{T} & T & T & T\\\\\nT & F & T & F & F & \\TTbf{T} & F & T & T\\\\\nF & T & F & F & T & \\TTbf{T} & T & T & F \\\\\nF & F & F & F & F & \\TTbf{T} & F & F & F\n \\end{tabular}\n\\end{center}}\n\\item $\\enot(A \\eor B) \\eiff (\\enot A \\eand \\enot B)$ %taut\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c | d e e e e e e e e f}\n$A$ & $B$&$\\enot$&$(A$&$\\eor$&$B)$&$\\eiff$&$(\\enot$&$A$&$\\eand$&$\\enot$&$B)$\\\\\n\\hline\nT & T & F & T & T & T & \\TTbf{T} & F & T & F & F & T\\\\\nT & F & F & T& T & F & \\TTbf{T} & F & T & F & T & F\\\\\nF & T & F & F & T & T & \\TTbf{T} & T & F & F & F & T\\\\\nF & F & T & F & F & F & \\TTbf{T} & T & F & T & T & F\n \\end{tabular}\n\\end{center}}\n\\item $\\bigl[(A\\eand B) \\eand\\enot(A\\eand B)\\bigr] \\eand C$ %contradiction\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e e e e e e f}\n$A$ & $B$&$C$&$\\bigl[(A$&$\\eand$&$B)$&$ \\eand$&$\\enot$&$(A$&$\\eand$&$B)\\bigr]$&$\\eand$&$C$\\\\\n\\hline\nT & T & T & T & T & T & F & F & T & T & T & \\TTbf{F} & T\\\\\nT & T & F & T& T & T & F & F & T & T & T & \\TTbf{F}& F\\\\\nT & F & T & T & F & F & F & T & T & F & F & \\TTbf{F} & T\\\\\nT & F & F & T & F & F & F & T & T & F & F & \\TTbf{F} & F\\\\\nF & T & T & F & F & T & F & T & F & F & T & \\TTbf{F} & T\\\\\nF & T & F & F & F & T & F & T & F & F & T & \\TTbf{F} & F\\\\\nF & F & T & F & F & F & F & T & F & F & F & \\TTbf{F} & T\\\\\nF & F & F & F & F & F & F & T & F & F & F & \\TTbf{F} & F\n\\end{tabular}\n\\end{center}}\n\\item $[(A \\eand B) \\eand C] \\eif B$ %taut\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e e e f}\n$A$ & $B$&$C$&$[(A$&$\\eand$&$B)$&$\\eand$&$C]$&$\\eif$&$B$\\\\\n\\hline\nT & T 
& T & T & T & T & T & T & \\TTbf{T} & T\\\\\nT & T & F & T & T & T & F & F & \\TTbf{T} & T\\\\\nT & F & T & T & F & F & F & T & \\TTbf{T} & F\\\\\nT & F & F & T & F & F & F & F & \\TTbf{T} & F\\\\\nF & T & T & F & F & T & F & T & \\TTbf{T} & T\\\\\nF & T & F & F & F & T & F & F & \\TTbf{T} & T\\\\\nF & F & T & F & F & F & F & T & \\TTbf{T} & F\\\\\nF & F & F & F & F & F & F & F & \\TTbf{T} & F\\\\\n\\end{tabular}\n\\end{center}}\n\\item $\\enot\\bigl[(C\\eor A) \\eor B\\bigr]$ %contingent\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e e f}\n$A$ & $B$&$C$&$\\enot\\bigl[($&$C$&$\\eor$&$A)$&$\\eor$&$B\\bigr]$\\\\\n\\hline\nT & T & T & \\TTbf{F} & T & T & T & T & T\\\\\nT & T & F & \\TTbf{F} & F & T & T & T & T\\\\\nT & F & T & \\TTbf{F} & T & T & T & T & F\\\\\nT & F & F & \\TTbf{F} & F & T & T & T & F\\\\\nF & T & T & \\TTbf{F} & T & T & F & T & T\\\\\nF & T & F & \\TTbf{F} & F & F & F & T & T\\\\\nF & F & T & \\TTbf{F} & T & T & F & T & F\\\\\nF & F & F & \\TTbf{T} & F & F & F & F & F\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\\problempart\nCheck all the claims made in introducing the new notational conventions in \\S10.3, i.e.\\ show that:\n\\begin{earg}\n\t\\item `$((A \\eand B) \\eand C)$' and `$(A \\eand (B \\eand C))$' have the same truth table\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eand$& $B)$ &$ \\eand$ & $C$ & $A$ & $\\eand$ & $(B$&$\\eand$&$C)$\\\\\n\\hline\nT & T & T & T & T & T &  \\TTbf{T} & T &T & \\TTbf{T} & T & T& T \\\\\nT & T & F & T& T & T &  \\TTbf{F} & F & T& \\TTbf{F} & T & F& F\\\\\nT & F & T & T & F & F &  \\TTbf{F} & T & T & \\TTbf{F} & F & F & T \\\\\nT & F & F &  T & F & F &  \\TTbf{F} & F & T& \\TTbf{F} & F & F & F\\\\\nF & T & T & F & F & T & \\TTbf{F} & T & F& \\TTbf{F} & T & T & T\\\\\nF & T & F & F & F & T & \\TTbf{F} & F & F& \\TTbf{F} &  T & F & F\\\\\nF & F & T & F & F & F & \\TTbf{F} & T & F& \\TTbf{F} & F& F & T\\\\\nF & F & F & F & F & F & \\TTbf{F} & F & F& \\TTbf{F} &  F& F & F\\\\\n\\end{tabular}\n\\end{center}}\n\t\\item `$((A \\eor B) \\eor C)$' and `$(A \\eor (B \\eor C))$' have the same truth table\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eor$& $B)$ &$ \\eor$ & $C$ & $A$ & $\\eor$ & $(B$&$\\eor$&$C)$\\\\\n\\hline\nT & T & T & T & T & T &  \\TTbf{T} & T &T & \\TTbf{T} & T & T& T \\\\\nT & T & F & T& T & T &  \\TTbf{T} & F & T& \\TTbf{T} & T & T& F\\\\\nT & F & T & T & T & F &  \\TTbf{T} & T & T & \\TTbf{T} & F & T & T \\\\\nT & F & F &  T & T& F &  \\TTbf{T} & F & T& \\TTbf{T} & F & F & F\\\\\nF & T & T & F & T & T & \\TTbf{T} & T & F& \\TTbf{T} & T & T & T\\\\\nF & T & F & F & T & T & \\TTbf{T} & F & F& \\TTbf{T} &  T & T & F\\\\\nF & F & T & F & F & F & \\TTbf{T} & T & F& \\TTbf{T} & F& T & T\\\\\nF & F & F & F & F & F & \\TTbf{F} & F & F& \\TTbf{F} &  F& F & F\\\\\n\\end{tabular}\n\\end{center}}\n\t\\item `$((A \\eor B) \\eand C)$' and `$(A \\eor (B \\eand C))$' do not have the same truth table\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eor$& $B)$ &$ \\eand$ & $C$ & $A$ & $\\eor$ & $(B$&$\\eand$&$C)$\\\\\n\\hline\nT & T & T & T & T & T &  \\TTbf{T} & T &T & \\TTbf{T} & T & T& T \\\\\nT & T & F & T& T & T &  \\TTbf{F} & F & T& \\TTbf{T} & T & F& F\\\\\nT & F & T & T & T & F &  \\TTbf{T} & T & T & \\TTbf{T} & F & F & T \\\\\nT & F & F &  T & T& F &  \\TTbf{F} & F & T& \\TTbf{T} & F & F & F\\\\\nF & T & T & F & T & T & \\TTbf{T} & T & F& 
\\TTbf{T} & T & T & T\\\\\nF & T & F & F & T & T & \\TTbf{F} & F & F& \\TTbf{F} &  T & F & F\\\\\nF & F & T & F & F & F & \\TTbf{F} & T & F& \\TTbf{F} & F& F & T\\\\\nF & F & F & F & F & F & \\TTbf{F} & F & F& \\TTbf{F} &  F& F & F\\\\\n\\end{tabular}\n\\end{center}}\n\t\\item `$((A \\eif B) \\eif C)$' and `$(A \\eif (B \\eif C))$' do not have the same truth table\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eif$& $B)$ &$ \\eif$ & $C$ & $A$ & $\\eif$ & $(B$&$\\eif$&$C)$\\\\\n\\hline\nT & T & T & T & T & T &  \\TTbf{T} & T &T & \\TTbf{T} & T & T& T \\\\\nT & T & F & T& T & T &  \\TTbf{F} & F & T& \\TTbf{F} & T & F& F\\\\\nT & F & T & T & F & F &  \\TTbf{T} & T & T & \\TTbf{T} & F & T & T \\\\\nT & F & F &  T & F& F &  \\TTbf{T} & F & T& \\TTbf{T} & F & T & F\\\\\nF & T & T & F & T & T & \\TTbf{T} & T & F& \\TTbf{T} & T & T & T\\\\\nF & T & F & F & T & T & \\TTbf{F} & F & F& \\TTbf{T} &  T & F & F\\\\\nF & F & T & F & T & F & \\TTbf{T} & T & F& \\TTbf{T} & F& T & T\\\\\nF & F & F & F & T & F & \\TTbf{F} & F & F& \\TTbf{T} &  F& T & F\\\\\n\\end{tabular}\n\\end{center}}\n\\end{earg}\nAlso, check whether:\n\\begin{earg}\n\t\\item[5.] `$((A \\eiff B) \\eiff C)$' and `$(A \\eiff (B \\eiff C))$' have the same truth table\n\t\t\\\\\\myanswer{Indeed they do:\n\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eiff$& $B)$ &$ \\eiff$ & $C$ & $A$ & $\\eiff$ & $(B$&$\\eiff$&$C)$\\\\\n\\hline\nT & T & T & T & T & T &  \\TTbf{T} & T &T & \\TTbf{T} & T & T& T \\\\\nT & T & F & T& T & T &  \\TTbf{F} & F & T& \\TTbf{F} & T & F& F\\\\\nT & F & T & T & F & F &  \\TTbf{F} & T & T & \\TTbf{F} & F & F & T \\\\\nT & F & F &  T & F& F &  \\TTbf{T} & F & T& \\TTbf{T} & F & T & F\\\\\nF & T & T & F & F & T & \\TTbf{F} & T & F& \\TTbf{F} & T & T & T\\\\\nF & T & F & F & F & T & \\TTbf{T} & F & F& \\TTbf{T} &  T & F & F\\\\\nF & F & T & F & T & F & \\TTbf{T} & T & F& \\TTbf{T} & F& F & T\\\\\nF & F & F & F & T & F & \\TTbf{F} & F & F& \\TTbf{F} &  F& T & F\\\\\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\n\\chapter{Semantic concepts}\\setcounter{ProbPart}{0}\n\\problempart\nRevisit your answers to \\S10\\textbf{A}. 
Determine which sentences were tautologies, which were contradictions, and which were neither tautologies nor contradictions.\n\\begin{earg}\n\\item $A \\eif A$ \\hfill \\myanswer{Tautology}\n\\item $C \\eif\\enot C$ \\hfill \\myanswer{Neither}\n\\item $(A \\eiff B) \\eiff \\enot(A\\eiff \\enot B)$  \\hfill \\myanswer{Tautology}\n\\item $(A \\eif B) \\eor (B \\eif A)$  \\hfill \\myanswer{Tautology}\n\\item $(A \\eand B) \\eif (B \\eor A)$  \\hfill \\myanswer{Tautology}\n\\item $\\enot(A \\eor B) \\eiff (\\enot A \\eand \\enot B)$ \\hfill \\myanswer{Tautology}\n\\item $\\bigl[(A\\eand B) \\eand\\enot(A\\eand B)\\bigr] \\eand C$  \\hfill \\myanswer{Contradiction}\n\\item $[(A \\eand B) \\eand C] \\eif B$  \\hfill \\myanswer{Tautology}\n\\item $\\enot\\bigl[(C\\eor A) \\eor B\\bigr]$  \\hfill \\myanswer{Neither}\n\\end{earg}\n\n\\\n\n\\problempart\n\\label{pr.TT.consistent}\nUse truth tables to determine whether these sentences are jointly consistent, or jointly inconsistent:\n\\begin{earg}\n\\item $A\\eif A$, $\\enot A \\eif \\enot A$, $A\\eand A$, $A\\eor A$ \\hfill \\myanswer{Jointly consistent (see line 1)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c | d e f | d e e e f | d e f | d e f}\n$A$ &  $A$&$\\eif$&$A$&$\\enot$&$A$&$\\eif$&$\\enot$&$A$&$A$&$\\eand$&$A$&$A$&$\\eor$&$A$\\\\\n\\hline\nT & T & \\TTbf{T} & T& F & T & \\TTbf{T} & F & T & T & \\TTbf{T} & T & T & \\TTbf{T} & T\\\\\nF & F & \\TTbf{T} & F& T & F & \\TTbf{T} & T & F & F & \\TTbf{F} & F & F & \\TTbf{F} & F\n\\end{tabular}\n\\end{center}}\n\\item $A\\eor B$, $A\\eif C$, $B\\eif C$ \\hfill \\myanswer{Jointly consistent (see line 1)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e f | d e f | d e f}\n$A$ & $B$ & $C$ & $A$&$\\eor$&$B$&$A$&$\\eif$&$C$&$B$&$\\eif$&$C$\\\\\n\\hline\nT & T & T & T & \\TTbf{T} & T & T & \\TTbf{T} & T & T & \\TTbf{T} & T\\\\\nT & T & F & T & \\TTbf{T} & T & T & \\TTbf{F} & F & T & \\TTbf{F} & F\\\\\nT & F & T & T & \\TTbf{T} & T & T & \\TTbf{T} & T & F & \\TTbf{T} & T\\\\\nT & F & F & T & \\TTbf{T} & F & T & \\TTbf{F} & F & F & \\TTbf{T} & F\\\\\nF & T & T & F & \\TTbf{T} & F & F & \\TTbf{T} & T & T & \\TTbf{T} & T\\\\\nF & T & F & F & \\TTbf{T} & T & F & \\TTbf{T} & F & T & \\TTbf{F} & F\\\\\nF & F & T & F & \\TTbf{F} & F & F & \\TTbf{T} & T & F & \\TTbf{T} & T\\\\\nF & F & F & F & \\TTbf{F} & F & F & \\TTbf{T} & F & F & \\TTbf{T} & F\\\\\n\\end{tabular}\n\\end{center}}\n\\item $B\\eand(C\\eor A)$, $A\\eif B$, $\\enot(B\\eor C)$ \\hfill \\myanswer{Jointly inconsistent}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e f | d e e f}\n$A$ & $B$ & $C$ & $B$&$\\eand$&$(C$&$\\eor$&$A)$&$A$&$\\eif$&$B$&$\\enot$&$(B$&$\\eor$&$C)$\\\\\n\\hline\nT & T & T & T & \\TTbf{T} & T & T & T & T & \\TTbf{T} & T & \\TTbf{F} & T & T & T\\\\\nT & T & F & T & \\TTbf{T} & F & T & T & T & \\TTbf{T} & T & \\TTbf{F} & T &  T & F\\\\\nT & F & T & F & \\TTbf{F} & T & T & T & T & \\TTbf{F} & F & \\TTbf{F} & F &  T & T\\\\\nT & F & F & F & \\TTbf{F} & F & T & T & T & \\TTbf{F} & F & \\TTbf{T} & F & F & F\\\\\nF & T & T & T & \\TTbf{T} & T & T & F & F & \\TTbf{T} & T & \\TTbf{F} & T & T & T\\\\\nF & T & F & T & \\TTbf{F} & F & F & F & F & \\TTbf{T} & T & \\TTbf{F} & T & T & F\\\\\nF & F & T & F & \\TTbf{F} & T & T & F & F & \\TTbf{T} & F & \\TTbf{F} & F & T & T\\\\\nF & F & F & F & \\TTbf{F} & F & F & F & F & \\TTbf{T} & F & \\TTbf{T} & F & F & F\\\\\n\\end{tabular}\n\\end{center}}\n\\item $A\\eiff(B\\eor C)$, $C\\eif \\enot A$, $A\\eif \\enot B$  \\hfill \\myanswer{Jointly consistent (see line 
8)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e f | d e e f}\n$A$ & $B$ & $C$ & $A$&$\\eiff$&$(B$&$\\eor$&$C)$&$C$&$\\eif$&$\\enot$&$A$&$A$&$\\eif$&$\\enot$&$B$\\\\\n\\hline\nT & T & T & T & \\TTbf{T} & T & T & T & T & \\TTbf{F} & F & T & T & \\TTbf{F} & F & T\\\\\nT & T & F & T & \\TTbf{T} & T & T & F & F & \\TTbf{T} & F & T & T & \\TTbf{F} & F & T\\\\\nT & F & T & T & \\TTbf{T} & F & T & T & T & \\TTbf{F} & F & T & T & \\TTbf{T} & T & F\\\\\nT & F & F & T & \\TTbf{F} & F & F & F & F & \\TTbf{T} & F & T & T & \\TTbf{T} & T & F\\\\\nF & T & T & F & \\TTbf{F} & T & T & T & T & \\TTbf{T} & T & F & F & \\TTbf{T} &  F & T\\\\\nF & T & F & F & \\TTbf{F} & T & T & F & F & \\TTbf{T} & T & F & F & \\TTbf{T} & F & T\\\\\nF & F & T & F & \\TTbf{F} & F & T & T & T & \\TTbf{T} & T & F & F & \\TTbf{T} &T & F\\\\\nF & F & F & F & \\TTbf{T} & F & F & F & F & \\TTbf{T} & T & F & F & \\TTbf{T} & T & F\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\n\\problempart\n\\label{pr.TT.valid}\nUse truth tables to determine whether each argument is valid or invalid.\n\\begin{earg}\n\\item $A\\eif A \\therefore A$  \\hfill \\myanswer{Invalid (see line 2)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c | d e f | c}\n$A$ &$A$&$\\eif$&$A$&$A$\\\\\n\\hline\n T & T & \\TTbf{T} & T & T\\\\\n F & F & \\TTbf{T} & F & F\n \\end{tabular}\n\\end{center}}\n\\item $A\\eif(A\\eand\\enot A) \\therefore \\enot A$  \\hfill \\myanswer{Valid}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c | d e e e e f | df}\n$A$&$A$&$\\eif$&$(A$&$\\eand$&$\\enot$&$A)$&$\\enot$&$A$\\\\\n\\hline\n T & T & \\TTbf{F} & T & F& F&T&\\TTbf{F}&T\\\\\n F & F & \\TTbf{T} & F & F&T&F&\\TTbf{T}&F\n \\end{tabular}\n\\end{center}}\n\\item $A\\eor(B\\eif A) \\therefore \\enot A \\eif \\enot B$  \\hfill \\myanswer{Valid}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c | d e e e f | d e e e f}\n$A$ & $B$ & $A$&$\\eor$&$(B$&$\\eif$&$A)$&$\\enot$&$A$&$\\eif$&$\\enot$&$B$\\\\\n\\hline\nT & T & T & \\TTbf{T} & T & T & T & F & T & \\TTbf{T} & F & T \\\\\nT & F & T & \\TTbf{T} & F & T & T & F & T & \\TTbf{T} & T & F \\\\\nF & T & F & \\TTbf{F} & T & F & F & T & F & \\TTbf{F} & F & T \\\\\nF & F & F & \\TTbf{T} & F & T & F & T & F & \\TTbf{T} & T & F\n\\end{tabular}\n\\end{center}}\n\\item $A\\eor B, B\\eor C, \\enot A \\therefore B \\eand C$  \\hfill \\myanswer{Invalid (see line 6)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e f | d e f | d f | d e f}\n$A$ & $B$ & $C$ & $A$&$\\eor$&$B$&$B$&$\\eor$&$C$&$\\enot$&$A$&$B$&$\\eand$&$C$\\\\\n\\hline\nT & T & T & T & \\TTbf{T} & T & T & \\TTbf{T} & T & \\TTbf{F} & T & T & \\TTbf{T} & T \\\\\nT & T & F & T & \\TTbf{T} & T & T & \\TTbf{T} & F & \\TTbf{F} & T & T &\\TTbf{F} & F \\\\\nT & F & T & T & \\TTbf{T} & F & F & \\TTbf{T} & T & \\TTbf{F} & T & F & \\TTbf{F} & T \\\\\nT & F & F & T & \\TTbf{T} & F & F & \\TTbf{F} & F & \\TTbf{F} & T & F & \\TTbf{F} & F\\\\\nF & T & T & F & \\TTbf{T} & T & T & \\TTbf{T} & T & \\TTbf{T} & F & T & \\TTbf{T} & T \\\\\nF & T & F & F & \\TTbf{T} & T & T & \\TTbf{T} & F & \\TTbf{T} & F & T &\\TTbf{F} & F \\\\\nF & F & T & F & \\TTbf{F} & F & F & \\TTbf{T} & T & \\TTbf{T} & F & F & \\TTbf{F} & T \\\\\nF & F & F & F & \\TTbf{F} & F & F & \\TTbf{F} & F & \\TTbf{T} & F & F & \\TTbf{F} & F\n\\end{tabular}\n\\end{center}}\n\\item $(B\\eand A)\\eif C, (C\\eand A)\\eif B \\therefore (C\\eand B)\\eif A$  \\hfill \\myanswer{Invalid (see line 5)}\n\\myanswer{\\begin{center}\n\\begin{tabular}{c c c | d e e e f | d e e e f | d e e e f}\n$A$ & $B$ & $C$ & 
$(B$&$\\eand$&$A)$&$\\eif$&$C$&$(C$&$\\eand$&$A)$&$\\eif$&$B$&$(C$&$\\eand$&$ B)$&$\\eif$&$A$\\\\\n\\hline\nT & T & T & T & T & T & \\TTbf{T} & T & T & T & T & \\TTbf{T} & T & T & T & T & \\TTbf{T} & T\\\\\nT & T & F & T & T & T & \\TTbf{F} & F & F & F & T & \\TTbf{T} & T & F & F & T & \\TTbf{T} & T\\\\\nT & F & T & F & F & T & \\TTbf{T} & T & T & T & T & \\TTbf{F} & F & T & F & F & \\TTbf{T} & T\\\\\nT & F & F & F & F & T & \\TTbf{T} & F & F & F & T & \\TTbf{T} & F & F & F & F & \\TTbf{T} & T\\\\\nF & T & T & T & F & F & \\TTbf{T} & T & T & F & F & \\TTbf{T} & T & T & T & T & \\TTbf{F} & F\\\\\nF & T & F & T & F & F & \\TTbf{T} & F & F & F & F & \\TTbf{T} & T & F & F & T & \\TTbf{T} & F\\\\\nF & F & T & F & F & F & \\TTbf{T} & T & T & F & F & \\TTbf{T} & F & T & F & F & \\TTbf{T} & F\\\\\nF & F & F & F & F & F & \\TTbf{T} & F & F & F & F & \\TTbf{T} & F & F & F & F & \\TTbf{T} & F\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\\problempart\n\\label{pr.TT.concepts}\nAnswer each of the questions below and justify your answer.\n\\begin{earg}\n\\item Suppose that \\meta{A} and \\meta{B} are tautologically equivalent. What can you say about $\\meta{A}\\eiff\\meta{B}$?\n\\item[] \\myanswer{\\meta{A} and \\meta{B} have the same truth value on every line of a complete truth table, so $\\meta{A}\\eiff\\meta{B}$ is true on every line. It is a tautology.}\n\\item Suppose that $(\\meta{A}\\eand\\meta{B})\\eif\\meta{C}$ is neither a tautology nor a contradiction. What can you say about this: $\\meta{A}, \\meta{B} \\entails \\meta{C}$?\n\\item[] \\myanswer{Since the sentence $(\\meta{A}\\eand\\meta{B})\\eif\\meta{C}$ is not a tautology, there is some line on which it is false. Since it is a conditional, on that line, \\meta{A} and \\meta{B} are true and \\meta{C} is false. So in fact $\\meta{A}, \\meta{B} \\nentails \\meta{C}$ .}\n\\item Suppose that $\\meta{A}$, $\\meta{B}$ and $\\meta{C}$  are jointly tautologically inconsistent. What can you say about $(\\meta{A}\\eand\\meta{B}\\eand\\meta{C})$?\n\\item[] \\myanswer{Since the sentences are jointly tautologically inconsistent, there is no valuation on which they are all true. So their conjunction is false on every valuation. It is a contradiction}\n\\item Suppose that \\meta{A} is a contradiction. What can you say about this: $\\meta{A}, \\meta{B} \\entails \\meta{C}$?\n\\item[] \\myanswer{Since \\meta{A} is false on every line of a complete truth table, there is no line on which \\meta{A} and \\meta{B} are true and \\meta{C} is false. So the entailment holds.}\n\\item Suppose that \\meta{C} is a tautology. What can you say about this: $\\meta{A}, \\meta{B}\\entails \\meta{C}$?\n\\item[] \\myanswer{Since \\meta{C} is true on every line of a complete truth table, there is no line on which \\meta{A} and \\meta{B} are true and \\meta{C} is false. So the entailment holds.}\n\\item Suppose that \\meta{A} and \\meta{B} are tautologically equivalent. What can you say about $(\\meta{A}\\eor\\meta{B})$?\n\\item[] \\myanswer{Not much! Since $\\meta{A}$ and $\\meta{B}$ are true on exactly the same lines of the truth table, their disjunction is true on exactly the same lines. So, their disjunction is tautologically equivalent to them.}\n\\item Suppose that \\meta{A} and \\meta{B} are \\emph{not} tautologically equivalent. What can you say about $(\\meta{A}\\eor\\meta{B})$?\n\\item[] \\myanswer{\\meta{A} and \\meta{B} have different truth values on at least one line of a complete truth table, and $(\\meta{A}\\eor\\meta{B})$ will be true on that line. 
On other lines, it might be true or false. So $(\\meta{A}\\eor\\meta{B})$ is either a tautology or it is contingent; it is \\emph{not} a contradiction.}\n\\end{earg}\n\n\\problempart\nConsider the following principle:\n\t\\begin{ebullet}\n\t\t\\item Suppose $\\meta{A}$ and $\\meta{B}$ are tautologically equivalent. Suppose an argument contains $\\meta{A}$ (either as a premise, or as the conclusion). The validity of the argument would be unaffected, if we replaced $\\meta{A}$ with $\\meta{B}$.\n\t\\end{ebullet}\nIs this principle correct? Explain your answer.\n\\\\\\myanswer{The principle is correct. Since $\\meta{A}$ and $\\meta{B}$ are tautologically equivalent, they have the same truth table. So every valuation that makes $\\meta{A}$ true also makes $\\meta{B}$ true, and every valuation that makes $\\meta{A}$ false also makes $\\meta{B}$ false. So if no valuation makes all the premises true and the conclusion false, when $\\meta{A}$ was among the premises or the conclusion, then no valuation makes all the premises true and the conclusion false, when we replace $\\meta{A}$ with $\\meta{B}$.}\n\n\\chapter{Truth table shortcuts}\\setcounter{ProbPart}{0}\n\\problempart\n\\label{pr.TT.TTorC}\nUsing shortcuts, determine whether each sentence is a tautology, a contradiction, or neither. \n\\begin{earg}\n\\item $\\enot B \\eand B$ %contra\n\\myanswer{ \\hfill Contradiction\\begin{center}\n\\begin{tabular}{c | d e e f }\n$B$ & $\\enot$&$B$&$\\eand$&$B$\\\\\n\\hline\nT & F & & \\TTbf{F}\\\\\nF & & & \\TTbf{F} & \n\\end{tabular}\n\\end{center}}\n\\item $\\enot D \\eor D$ %taut\n\\myanswer{\\hfill Tautology\n\\begin{center}\n\\begin{tabular}{c | d e e f }\n$D$ & $\\enot$&$D$&$\\eor$&$D$\\\\\n\\hline\nT &  & & \\TTbf{T}\\\\\nF & T & & \\TTbf{T}\n\\end{tabular}\n\\end{center}}\n\\item $(A\\eand B) \\eor (B\\eand A)$ %contingent\n\\myanswer{\\hfill Neither\n\\begin{center}\n\\begin{tabular}{c c | d e e e e e f }\n$A$ & $B$ & $(A$&$\\eand$&$B)$&$\\eor$&$(B$&$\\eand$&$A)$\\\\\n\\hline\nT & T & & T & & \\TTbf{T}\\\\\nT & F & & F & & \\TTbf{F} & & F \\\\\nF & T & & F & & \\TTbf{F} & & F \\\\\nF & F & & F & & \\TTbf{F} & & F \n\\end{tabular}\n\\end{center}}\n\\item $\\enot[A \\eif (B \\eif A)]$ %contra\n\\myanswer{\\hfill Contradiction\n\\begin{center}\n\\begin{tabular}{c c | d e e e e f }\n$A$ & $B$ & $\\enot[$&$A$&$\\eif$&$(B$&$\\eif$&$A)]$\\\\\n\\hline\nT & T & \\TTbf{F} &  & T & & T\\\\\nT & F & \\TTbf{F} &  & T & & T \\\\\nF & T & \\TTbf{F} &  & T & &  \\\\\nF & F & \\TTbf{F} &  & T & &  \n\\end{tabular}\n\\end{center}}\n\\item $A \\eiff [A \\eif (B \\eand \\enot B)]$ %contra\n\\myanswer{\\hfill Contradiction\n\\begin{center}\n\\begin{tabular}{c c | d e e e e e e f }\n$A$ & $B$ & $A$&$\\eiff$&$[A$&$\\eif$&$(B$&$\\eand$&$\\enot$&$B)]$\\\\\n\\hline\nT & T & & \\TTbf{F} &  & F & & F& F\\\\\nT & F & & \\TTbf{F} &  & F & & F \\\\\nF & T & & \\TTbf{F} &  & T & &  \\\\\nF & F & & \\TTbf{F} &  & T & &  \n\\end{tabular}\n\\end{center}}\n\\item $\\enot(A\\eand B) \\eiff A$ %contingent\n\\myanswer{\\hfill Neither\n\\begin{center}\n\\begin{tabular}{c c | d e e e e f }\n$A$ & $B$ & $\\enot$&$(A$&$\\eand$&$B)$&$\\eiff$&$A$\\\\\n\\hline\nT & T & F & & T & & \\TTbf{F} \\\\\nT & F & T & & F & & \\TTbf{T} \\\\\nF & T & T & & F & & \\TTbf{F} \\\\\nF & F & T & & F & & \\TTbf{F} \\\\\n\\end{tabular}\n\\end{center}}\n\\item $A\\eif(B\\eor C)$ %contingent\n\\myanswer{\\hfill Neither\n\\begin{center}\n\\begin{tabular}{c c c | d e e e f }\n$A$ & $B$ & $C$ & $A$&$\\eif$&$(B$&$\\eor$&$C)$\\\\\n\\hline\nT & T & T & & \\TTbf{T} & 
& T \\\\\nT & T & F & & \\TTbf{T} & & T \\\\\nT & F & T & & \\TTbf{T} & & T \\\\\nT & F & F & & \\TTbf{F} & & F \\\\\nF & T & T & & \\TTbf{T} & &  \\\\\nF & T & F & & \\TTbf{T} & &  \\\\\nF & F & T & & \\TTbf{T} & &  \\\\\nF & F & F & & \\TTbf{T} & &  \n\\end{tabular}\n\\end{center}}\n\\item $(A \\eand\\enot A) \\eif (B \\eor C)$ %tautology\n\\myanswer{\\hfill Tautology\n\\begin{center}\n\\begin{tabular}{c c c | d e e e e e e f }\n$A$ & $B$ & $C$ & $(A$&$\\eand$&$\\enot$&$A)$&$\\eif$&$(B$&$\\eor$&$C)$\\\\\n\\hline\nT & T & T & & \\TTbf{F} & F& & T \\\\\nT & T & F & & \\TTbf{F} &F & & T \\\\\nT & F & T & & \\TTbf{F} &F & & T \\\\\nT & F & F & & \\TTbf{F} & F& & T \\\\\nF & T & T & & \\TTbf{F} & & & T \\\\\nF & T & F & & \\TTbf{F} & & & T \\\\\nF & F & T & & \\TTbf{F} & & & T \\\\\nF & F & F & & \\TTbf{F} & &  & T \\\\ \n\\end{tabular}\n\\end{center}}\n\\item $(B\\eand D) \\eiff [A \\eiff(A \\eor C)]$%contingent\n\\myanswer{\\hfill Neither\n\\begin{center}\n\\begin{tabular}{c c c c | d e e e e e e e f }\n$A$ & $B$ & $C$ & $D$ & $(B$&$\\eand$&$D)$&$\\eiff $&$[A$&$\\eiff$&$(A$&$\\eor$&$C)]$\\\\\n\\hline\nT & T & T & T & & T & & \\TTbf{T} &  & T & &  T\\\\\nT & T & T & F & & F & & \\TTbf{F} &  & T & &  T\\\\\nT & T & F & T & & T & & \\TTbf{T} &  & T & &  T\\\\\nT & T & F & F & & F & & \\TTbf{F} &  & T & &  T\\\\\nT & F & T & T & & F & & \\TTbf{F} &  & T & &  T\\\\\nT & F & T & F & & F & & \\TTbf{F} &  & T & &  T\\\\\nT & F & F & T & & F & & \\TTbf{F} &  & T & &  T\\\\\nT & F & F & F & & F & & \\TTbf{F} &  & T & &  T\\\\\nF & T & T & T & & T & & \\TTbf{F} &  & F & &  T\\\\\nF & T & T & F & & F & & \\TTbf{T} &  & F & &  T\\\\\nF & T & F & T & & T & & \\TTbf{T} &  & T & &  F\\\\\nF & T & F & F & & F & & \\TTbf{F} &  & T & &  F\\\\\nF & F & T & T & & F & & \\TTbf{T} &  & F & &  T\\\\\nF & F & T & F & & F & & \\TTbf{T} &  & F & &  T\\\\\nF & F & F & T & & F & & \\TTbf{F} &  & T & &  F\\\\\nF & F & F & F & & F & & \\TTbf{F} &  & T & &  F\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\\chapter{Partial truth tables}\\setcounter{ProbPart}{0}\n\\problempart\n\\label{pr.TT.equiv}\nUse complete or partial truth tables (as appropriate) to determine whether these pairs of sentences are tautologically equivalent:\n\\begin{earg}\n\\item $A$, $\\enot A$\n\\myanswer{\\hfill Not tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c | c | c}\n$A$ &$A$&$\\enot A$\\\\\n\\hline\n T & T & F\n \\end{tabular}\n\\end{center}}\n\\item $A$, $A \\eor A$\n\\myanswer{\\hfill Tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c | c | c}\n$A$ &$A$&$A \\eor A$\\\\\n\\hline\n T & T & T\\\\\n T & T & T\n \\end{tabular}\n\\end{center}}\n\\item $A\\eif A$, $A \\eiff A$ %Yes\n\\myanswer{\\hfill Tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c | c | c}\n$A$ &$A \\eif A$&$A \\eiff A$\\\\\n\\hline\nT & T & T\\\\\nF & T & T\n\\end{tabular}\n\\end{center}}\n\\item $A \\eor \\enot B$, $A\\eif B$ %No\n\\myanswer{\\hfill Not tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c c | c | c}\n$A$ & $B$ &$A\\eor \\enot B$&$A \\eif B$\\\\\n\\hline\nT & F & {T} & F\n\\end{tabular}\n\\end{center}}\n\\item $A \\eand \\enot A$, $\\enot B \\eiff B$ %Yes\n\\myanswer{\\hfill Tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c c | d e e f | d e e f}\n$A$ & $B$ &$A$&$\\eand$&$\\enot$&$A$&$\\enot$&$B$&$\\eiff$&$B$\\\\\n\\hline\nT & T & & \\TTbf{F} & F & & F & & \\TTbf{F} & \\\\\nT & F & & \\TTbf{F} & F & & T & & \\TTbf{F} & \\\\\nF & T & & \\TTbf{F} &  & & F & & \\TTbf{F} & \\\\\nF & F & & \\TTbf{F} 
&  & &  T & & \\TTbf{F} & \\\\\n\\end{tabular}\n\\end{center}}\n\n\\item $\\enot(A \\eand B)$, $\\enot A \\eor \\enot B$ %Yes\n\\myanswer{\\hfill Tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c c | d e e f | d e e e f}\n$A$ & $B$ &$\\enot$&$(A$&$\\eand$&$B)$&$\\enot$&$A$&$\\eor$&$\\enot$&$B$\\\\\n\\hline\nT & T &\\TTbf{F}& & T & & F & & \\TTbf{F} & F\\\\\nT & F & \\TTbf{T} & & F & & F & & \\TTbf{T} & T\\\\\nF & T & \\TTbf{T} & & F & & T & & \\TTbf{T} & F\\\\\nF & F & \\TTbf{T} & & F & &  T& & \\TTbf{T} & T\\\\\n\\end{tabular}\n\\end{center}}\n\\item $\\enot(A \\eif B)$, $\\enot A \\eif \\enot B$ %No\n\\myanswer{\\hfill Not tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c c | d e e f | d e e e f}\n$A$ & $B$ &$\\enot$&$(A$&$\\eif$&$B)$&$\\enot$&$A$&$\\eif$&$\\enot$&$B$\\\\\n\\hline\nT & T &\\TTbf{F}& & T & & F & & \\TTbf{T} & F\\\\\n\\end{tabular}\n\\end{center}}\n\\item $(A \\eif B)$, $(\\enot B \\eif \\enot A)$ %Yes\n\\myanswer{\\hfill Tautologically equivalent\n\\begin{center}\n\\begin{tabular}{c c | c | d e e e f}\n$A$ & $B$ &$(A \\eif B)$&$(\\enot$&$B$&$\\eif$&$\\enot$&$A)$\\\\\n\\hline\nT & T &T& F & & \\TTbf{T} &  \\\\\nT & F &F & T & & \\TTbf{F} & F  \\\\\nF & T &T& F & & \\TTbf{T} &  \\\\\nF & F &T& T & & \\TTbf{T} & T \n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\\problempart\n\\label{pr.TT.consistent}\nUse complete or partial truth tables (as appropriate) to determine whether these sentences are jointly tautologically consistent, or jointly tautologically inconsistent:\n\\begin{earg}\n\\item $A \\eand B$, $C\\eif \\enot B$, $C$ %inconsistent\n\\myanswer{\\hfill Jointly tautologically inconsistent\n\\begin{center}\n\\begin{tabular}{c c c | c | d e f | c c c c  }\n$A$ & $B$ & $C$ & $A \\eand B$ & $C$&$\\eif$&$\\enot B$&$C$\\\\\n\\hline\nT & T & T & T & & \\TTbf{F} & F& T \\\\\nT & T & F & T & & \\TTbf{T} & & F \\\\\nT & F & T & F & & \\TTbf{T} &T & T \\\\\nT & F & F & F & & \\TTbf{T} & & F \\\\\nF & T & T & F & & \\TTbf{F} &F & T \\\\\nF & T & F & F & & \\TTbf{T} & & F \\\\\nF & F & T & F & & \\TTbf{T} & T& T \\\\\nF & F & F & F & & \\TTbf{T} & &  F \\\\ \n\\end{tabular}\n\\end{center}}\n\\item $A\\eif B$, $B\\eif C$, $A$, $\\enot C$ %inconsistent\n\\myanswer{\\hfill Jointly tautologically inconsistent\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c | c }\n$A$ & $B$ & $C$ & $A \\eif B$ & $B \\eif C$ & $A$ & $\\enot C$\\\\\n\\hline\nT & T & T & T & T & T & F\\\\\nT & T & F & T & F & T & T\\\\\nT & F & T & F & T & T & F\\\\\nT & F & F & F & T & T & T\\\\\nF & T & T & T & T & F & F\\\\\nF & T & F & T & F & F & T\\\\\nF & F & T & T & T & F & F\\\\\nF & F & F & T & T & F & T\n\\end{tabular}\n\\end{center}}\n\\item $A \\eor B$, $B\\eor C$, $C\\eif \\enot A$ %consistent\n\\myanswer{\\hfill Jointly tautologically consistent\n\\begin{center}\n\\begin{tabular}{c c c | c | c | d e f }\n$A$ & $B$ & $C$ & $A \\eor B$ & $B \\eor C$ & $C$&$\\eif$& $\\enot A$\\\\\n\\hline\nF & T & T & T & T & & \\TTbf{T} & T\\\\\n\\end{tabular}\n\\end{center}}\n\\item $A$, $B$, $C$, $\\enot D$, $\\enot E$, $F$ %consistent\n\\myanswer{\\hfill Jointly tautologically consistent\n\\begin{center}\n\\begin{tabular}{c c c c c c| c | c | c | c |c | c }\n$A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $A$ & $B$ & $C$ & $\\enot D$ & $\\enot E$ & $F$\\\\\n\\hline\nT & T & T & F & F & T & T & T& T& T& T& T\n\\end{tabular}\n\\end{center}}\n\\end{earg}\n\n\n\\problempart\n\\label{pr.TT.valid}\nUse complete or partial truth tables (as appropriate) to determine whether each argument is valid or 
invalid:\n\\begin{earg}\n\\item $A\\eor\\bigl[A\\eif(A\\eiff A)\\bigr] \\therefore A$ %invalid\n\\myanswer{\\hfill Invalid\n\\begin{center}\n\\begin{tabular}{c |  d e e e e e f | c}\n$A$ & $A$&$\\eor$&$\\bigl[A$&$\\eif$&$(A$&$\\eiff$&$A)\\bigr]$ & $A$\\\\\n\\hline\nF & & \\TTbf{T} & & T & & & & F\n\\end{tabular}\n\\end{center}}\n\\item $A\\eiff\\enot(B\\eiff A) \\therefore A$ %invalid\n\\myanswer{\\hfill Invalid\n\\begin{center}\n\\begin{tabular}{c c | d e e f | c}\n$A$&$B$ & $A$&$\\eiff$&$\\enot$&$(B\\eiff A)$ & $A$\\\\\n\\hline\nF & F & & \\TTbf{T} & F & T & F\n\\end{tabular}\n\\end{center}}\n\n\\item $A\\eif B, B \\therefore A$ %invalid\n\\myanswer{\\hfill Invalid\n\\begin{center}\n\\begin{tabular}{c c | c | c | c}\n$A$&$B$ & $A \\eif B$ & $B$ & $A$\\\\\n\\hline\nF & T & T & T & F\n\\end{tabular}\n\\end{center}}\n\n\\item $A\\eor B, B\\eor C, \\enot B \\therefore A \\eand C$ %valid\n\\myanswer{\\hfill Valid\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c | c }\n$A$ & $B$ & $C$ & $A \\eor B$ & $B \\eor C$ & $\\enot B$ & $A \\eand C$\\\\\n\\hline\nT & T & T &  &  &  & T\\\\\nT & T & F &  &  & F & F\\\\\nT & F & T &  &  &  & T\\\\\nT & F & F & T & F & T & F\\\\\nF & T & T &  &  & F & F\\\\\nF & T & F &  &  & F & F\\\\\nF & F & T & F &  & T & F\\\\\nF & F & F & F &  & T & F\n\\end{tabular}\n\\end{center}}\n\n\\item $A\\eiff B, B\\eiff C \\therefore A\\eiff C$ %valid\n\\myanswer{\\hfill Valid\n\\begin{center}\n\\begin{tabular}{c c c | c | c | c }\n$A$ & $B$ & $C$ & $A \\eiff B$ & $B \\eiff C$ & $A \\eiff C$\\\\\n\\hline\nT & T & T &  &  & T\\\\\nT & T & F & T & F & F\\\\\nT & F & T &  &  & T\\\\\nT & F & F & F &  & F\\\\\nF & T & T & F &  & F\\\\\nF & T & F &  &  & T\\\\\nF & F & T & T & F & F\\\\\nF & F & F &  &  & T\n\\end{tabular}\n\\end{center}}\n\\end{earg}", "meta": {"hexsha": "2c62a97542a5f3cb157509650be3d024b49e0a2b", "size": 29811, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "solutions/forallx-sol-truthtables.tex", "max_stars_repo_name": "OpenLogicProject/forallx-cam", "max_stars_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-02-19T01:39:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-04T05:59:31.000Z", "max_issues_repo_path": "solutions/forallx-sol-truthtables.tex", "max_issues_repo_name": "ryanmichaelhebert/forallx-cam", "max_issues_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "solutions/forallx-sol-truthtables.tex", "max_forks_repo_name": "ryanmichaelhebert/forallx-cam", "max_forks_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2016-09-08T05:09:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T10:13:13.000Z", "avg_line_length": 40.9491758242, "max_line_length": 531, "alphanum_fraction": 0.4915635168, "num_tokens": 14202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5662864065511084}}
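The brute-force method behind all of these solutions can also be checked mechanically. The snippet below is our own illustration (the helper names are invented, and it is not part of the forallx text): it enumerates every row of the complete truth table, which is exactly what the tables above tabulate by hand.

\begin{verbatim}
from itertools import product

def rows(letters):
    # One dictionary of truth values per row of the complete table.
    for values in product([True, False], repeat=len(letters)):
        yield dict(zip(letters, values))

def is_tautology(letters, sentence):
    return all(sentence(v) for v in rows(letters))

def equivalent(letters, s1, s2):
    return all(s1(v) == s2(v) for v in rows(letters))

def valid(letters, premises, conclusion):
    # Valid iff no row makes every premise true and the conclusion false.
    return all(conclusion(v) or not all(p(v) for p in premises)
               for v in rows(letters))

# (A & ~A) -> (B | C) is a tautology, as in the table above:
print(is_tautology("ABC",
    lambda v: (not (v["A"] and not v["A"])) or v["B"] or v["C"]))
# A -> B, B, therefore A is invalid (row A=F, B=T is a counterexample):
print(valid("AB", [lambda v: (not v["A"]) or v["B"], lambda v: v["B"]],
    lambda v: v["A"]))
\end{verbatim}

A partial truth table corresponds to stopping this enumeration as soon as a witnessing row is found.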
{"text": "\\lab{HIV Treatment Using Optimal Control}{HIV Treatment Using Optimal Control}\n\\label{lab:hiv}\n\n\n\\section*{Introduction}\n    Viruses are the cause of many common illnesses in society today, such as ebola, influenza, the common cold, and Human Immunodeficiency Virus (HIV). Viruses are not considered to be living organisms as they cannot reproduce on their own. Instead they inject their genes in the form of DNA or RNA into a host's genome. They then use the cell's ribosomes and proteins to make the protein coat and replicate their genes. At the end they lyse the cell (tear it apart) and release many virus particles to infect other cells.\\par\n    The body has an adaptive immune system which learns to recognize viruses and bacteria and their hosts, and how to destroy them. A major component of this system are T cells. These cells perform many necessary functions such as recognizing invaders, destroying infected cells, and remembering previous infections long after recovery. Of particular interest is the helper T cell, also known as the CD4+T cell, due to a protein found on its surface which regulates the immune responses. HIV is unique in that it specifically targets this particular type of T cell. This means that the system responsible for fighting infections is specifically targeted.\\par\n    This loss of CD4+T cells is what causes Acquired Immune Deficiency Syndrome (AIDS). Note that AIDS itself is not an infection, which is a common misconception among the population. Due to the lack of T cells to recognize viruses and bacteria, the body becomes susceptible to other forms of infection. Whereas most people are easily able to shake off a common cold, someone suffering from the advanced stages of AIDS will be at serious risk of dying. Since AIDS comes from a loss of T cells, it may be several years before the host notices the effects of the infection. This enables the HIV virus to spread more easily since the host might not realize they are infected and continue in whatever behavior made them susceptible to the infection initially.\\par\n   Currently there is no cure or vaccine for HIV. However, there are treatments that reduce the virus and bolster the immune system by increasing the CD4+T cell count. Since these treatments can be expensive and often have negative side effects, it is important to optimize the amount of drugs used. Sometimes combinations of these drugs are used to provide a better effect. In this lab we will use optimal control to find the optimal dosage of a two-drug combination\\footnote{\\textit{SHORT COURSES ON THE MATHEMATICS OF BIOLOGICAL COMPLEXITY}, Web. 15 Apr. 2015 http://www.math.utk.edu/~lenhart/smb2003.v2.html.} \\footnote{`Immunotherapy of HIV-1 Infection', Kirschner, D. and Webb, G. F., Journal of Biological Systems, 6(1), 71-83 (1998)}.\n   \n   % \\begin{thebibliography}{9}\n% \n% \\bibitem{paper}\n% \"SHORT COURSES ON THE MATHEMATICS OF BIOLOGICAL COMPLEXITY.\" \\textit{SHORT COURSES ON THE MATHEMATICS OF BIOLOGICAL COMPLEXITY}, Web. 15 Apr. 2015 http://www.math.utk.edu/~lenhart/smb2003.v2.html.\n\n% \\bibitem{constants}\n% .\n% \\end{thebibliography}\n   \n\\begin{problem}\nExplain what makes the HIV virus unique. What are the consequences of this? 
What is AIDS?\n\\label{problem:hiv:virusunderstanding}\n\\end{problem}\n\n\\begin{problem}\nHow is AIDS treated and what are the considerations for treatment?\n\\label{problem:hiv:treatment}\n\\end{problem}\n\n\\section*{Derivation of Control}\nFirst we begin by writing the state system, the equations that describe the changes in T cells and viruses:\n\\begin{align*}\n\\frac{dT(t)}{dt} &= s_1 - \\frac{s_2V(t)}{B_1+V(t)} - \\mu T(t)-kV(t)T(t)+u_1(t)T(t), \\\\\n\\frac{dV(t)}{dt} &= \\frac{gV(t)}{B_2+V(t)}(1-u_2(t)) - cV(t)T(t).\n\\end{align*}\nTo these equations we add initial conditions $T(0)=T_0$ and $V(0)=V_0$, where $T$ represents the concentration of $CD4^+T$ cells and $V$ the concentration of HIV particles. \nThe term $s_1-\\frac{s_2V(t)}{B_1+V(t)}$ is the source/proliferation of uninfected T cells,\n$\\mu T(t)$ the natural loss of T cells, and $kV(t)T(t)$ the loss of T cells by infection. \n$\\frac{gV(t)}{B_2+V(t)}$ represents the viral contribution to plasma, and $cV(t)T(t)$ the viral loss.\n\nWe now seek to maximize the functional \n\\[\nJ(u_1,u_2) = \\int_0^{t_f}[T-(A_1u_1^2+A_2u_2^2)]\\,dt.\n\\]\nThis functional considers i) the benefit of T cells, and ii) the systemic costs of the drug treatments.\nThe constants $A_1$ and $A_2$ are scalar weights that adjust the size of the terms coming from $u_1^2$ and $u_2^2$ respectively. \nWe seek an optimal control $u_1^*,u_2^*$ satisfying\n\\[\nJ(u_1^*,u_2^*)=\\max\\{J(u_1,u_2)|(u_1,u_2)\\in U\\},\n\\]\nwhere $U=\\{(u_1,u_2)\\,|\\,u_i$ is measurable, $a_i\\le u_i(t) \\le b_i$ for $t\\in[0,t_f]$, $i=1,2\\}$.\n \n\\section*{Optimality System}\nThe Hamiltonian is defined as:\n\\begin{align*}\n\tH = [T-(A_1u_1^2+A_2u_2^2)]&+\\lambda_1\\Big[ s_1 - \\frac{s_2V}{B_1+V}-\\mu T-kVT+u_1T \\Big] \\\\\n\t&+ \\lambda_2 \\Big[\\frac{g(1-u_2)V}{B_2+V}-cVT\\Big].\n\\end{align*}\nNote that the costate is represented with $\\lambda$ instead of $p$. Now, based on what we know from class, we have:\n\\begin{align*}\n\\lambda_1^{'} &=-\\frac{\\partial H}{\\partial T} =  -1+\\lambda_1[\\mu+kV^*-u_1^*]+\\lambda_2cV^*,\\\\\n\\lambda_2^{'} &= -\\frac{\\partial H}{\\partial V} = \\lambda_1\\Big[\\frac{B_1s_2}{(B_1+V^*)^2}+kT^*\\Big] -\\lambda_2\\Big[\\frac{B_2g(1-u_2^*)}{(B_2+V^*)^2}-cT^*\\Big].\n\\end{align*}\n\nThe transversality conditions are $\\lambda_1(t_f)=\\lambda_2(t_f)=0$, with $T(0)=T_0$ and $V(0)=V_0$. 
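The controls are found by maximizing $H$ pointwise in $u_1$ and $u_2$; on the interior of the control set $U$ this means setting the derivatives with respect to the controls to zero:\n\\begin{align*}\n\\frac{\\partial H}{\\partial u_1} &= -2A_1u_1^*(t)+\\lambda_1T^*(t)=0,\\\\\n\\frac{\\partial H}{\\partial u_2} &= -2A_2u_2^*(t)-\\lambda_2\\frac{gV^*(t)}{B_2+V^*(t)}=0.\n\\end{align*}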
\n%Also \n%$$u_1^* = min\\Bigg\\{max\\Bigg\\{a_1,\\frac{1}{2A_1}(\\lambda_1T^*(t))\\Bigg\\},b_1\\Bigg\\}$$\n%$$u_2^* = min\\Bigg\\{max\\Bigg\\{a_2,\\frac{-\\lambda_2}{2A_2}\\frac{V*(t)}{(B_2+V^*(t))}\\Bigg\\},b_2\\Bigg\\}$$\n%The optimality equations are:\n%$$\\frac{\\partial H}{\\partial u_1} = -2A_1u_1^*(t)+\\lambda_1T^*(t)=0$$\n%$$\\frac{\\partial H}{\\partial u_2} = -2A_2u_2^*(t)+\\lambda_2\\big[\\frac{-gV^*(t)}{B_2+V^*(t)}\\big\nFrom these conditions we obtain\n\\begin{align*}\nu_1^*(t) &= \\frac{1}{2A_1}\\left[\\lambda_1T^*(t)\\right],\\\\\nu_2^*(t) &= \\frac{1}{2A_2}\\left[-\\lambda_2\\frac{gV^*(t)}{B_2+V^*(t)}\\right].\n\\end{align*}\nFrom the bounds on the controls we have\n\\begin{align*}\n\t\\begin{split}\n\t\tu_1^*(t)&=\\min\\left\\{\\max\\{a_1,\\frac{1}{2A_1}(\\lambda_1T^*(t))\\},b_1\\right\\},\\\\\n\t\tu_2^*(t)&=\\min\\left\\{\\max\\{a_2,-\\frac{\\lambda_2}{2A_2}\\frac{gV^*(t)}{B_2+V^*(t)}\\},b_2\\right\\}.\n\t\\end{split}\n\\end{align*}\nThis gives us the optimal system\n\\begin{align*}\nT'&=s_1-\\frac{s_2V}{B_1+V}-\\mu T-kVT+\\min\\{\\max\\{a_1,\\frac{1}{2A_1}(\\lambda_1T)\\},b_1\\}T,\\\\\nV'&=\\frac{g(1-\\min\\{\\max\\{a_2,\\frac{-\\lambda_2}{2A_2}\\frac{gV}{B_2+V}\\},b_2\\})V}{B_2+V}-cVT,\\\\\n\\lambda_1'&=-1+\\lambda_1\\left[\\mu+kV-\\min\\{\\max\\{a_1,\\frac{1}{2A_1}(\\lambda_1T)\\},b_1\\}\\right]+\\lambda_2cV,\\\\\n\\lambda_2'&=\\lambda_1\\left[\\frac{B_1s_2}{(B_1+V)^2}+kT\\right]-\\lambda_2\\left[\\frac{B_2g(1-\\min\\{\\max\\{a_2,\\frac{-\\lambda_2}{2A_2}\\frac{gV}{B_2+V}\\},b_2\\})}{(B_2+V)^2}-cT\\right],\n\\end{align*}\nwith end conditions $\\lambda_1(t_f)=\\lambda_2(t_f)=0$, and $T(0)=T_0,V(0)=V_0$.\n\n\\section*{Creating a Numerical Solver}\n\nWe iteratively solve for our control $u$. In each iteration we solve our state equations and our costate equations numerically, then use those to find our new control. Lastly, we check to see if our control has converged. To solve each set of differential equations, we will use the RK4 solver from a previous lab with one minor adjustment. Our state equations depend on $u$, and our costate equations depend on our state equations. Therefore, we will pass an additional index parameter into the function that RK4 integrates; it selects the current entries of the arrays our equations depend on.\n\n\\begin{lstlisting}\n# Dependencies for this lab's code:\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n#Code from RK4 Lab with minor edits\t\t\t\t\ndef initialize_all(y0, t0, t, n):\n    \"\"\" An initialization routine for the different ODE solving\n    methods in the lab. This initializes Y, T, and h. 
\"\"\"\n    if isinstance(y0, np.ndarray):\n        Y = np.empty((n, y0.size)).squeeze()\n    else:\n        Y = np.empty(n)\n    Y[0] = y0\n    T = np.linspace(t0, t, n)\n    h = float(t - t0) / (n - 1)\n    return Y, T, h\n\ndef RK4(f, y0, t0, t, n):\n    \"\"\" Use the RK4 method to compute an approximate solution\n    to the ODE y' = f(t, y) at n equispaced parameter values from t0 to t\n    with initial conditions y(t0) = y0.\n    \n    y0 is assumed to be either a constant or a one-dimensional numpy array.\n    t and t0 are assumed to be constants.\n    f is assumed to accept three arguments.\n    The first is a constant giving the value of t.\n    The second is a one-dimensional numpy array of the same size as y.\n    The third is an index to the other arrays.\n    \n    This function returns an array Y of shape (n,) if\n    y is a constant or an array of size 1.\n    It returns an array of shape (n, y.size) otherwise.\n    In either case, Y[i] is the approximate value of y at\n    the i'th value of np.linspace(t0, t, n).\n    \"\"\"\n    Y,T,h = initialize_all(y0,t0,t,n)\n    for i in xrange(n-1):\n        K1 = f(T[i],Y[i],i)\n        K2 = f(T[i]+h/2.,Y[i]+h/2.*K1,i)\n        K3 = f(T[i]+h/2.,Y[i]+h/2.*K2,i)\n        K4 = f(T[i+1],Y[i]+h*K3,i)\n        Y[i+1] = Y[i] + h/6.*(K1+2*K2 +2*K3+K4)\n    return Y\n\\end{lstlisting}\n\n\\begin{problem}\nCreate a function that defines the state equations and returns both equations in a single array. The function should be able to be passed into the RK4 solver. This function can depend on the global variables defined below.\n\n\\begin{lstlisting}\n# define constants\ns_1 = 2.\ns_2 = 1.5\nmu = 0.002\nk = 0.000025\ng = 30.\nc = 0.007\nB_1 = 14\nB_2 = 1\n\n# initialize global variables, state, costate, and u.\nstate = np.zeros((n,2))\nstate0 = np.array([T0, V0])\n\t\ncostate = np.zeros((n,2))\ncostate0 = np.zeros(2)\n\nu=np.zeros((n,2))\nu[:,0] += .02\nu[:,1] += .9\n\n# define state equations\ndef state_equations(t,y,i):\n\t'''\n\tParameters\n\t---------------\n\tt : float\n\t\tthe time\n\ty : ndarray (2,)\n\t\tthe T cell concentration and the Virus concentration at time t\n\ti : int\n\t\tindex for the global variable u.\n\tReturns\n\t--------------\n\ty_dot : ndarray (2,)\n\t\tthe derivative of the T cell concentration and the virus concentration at time t\n\t'''\n\tpass\n\\end{lstlisting}\n\\label{problem:hiv:state}\n\\end{problem}\n\n\nThe state equations work great in the RK4 solver; however, the costate equations have end conditions rather than initial conditions. Thus we want our RK4 solver to iterate backwards from the end to the beginning. An easy way to accomplish this is to define a function $ \\hat{\\lambda}_i(t)=\\lambda_i(t_f - t).$ Then $\\hat{\\lambda}_i$ has the initial conditions $\\hat{\\lambda}_i(0) = \\lambda_i(t_f)$. We get the new equations\n\n\\begin{align*}\n\\dot{\\hat{\\lambda}}_1(t) &=\\lambda_1(t_f-t)\\left(-\\mu - kV(t_f-t) + u_{1}\\right) - c\\lambda_2(t_f-t)V(t_f-t) + 1, \\\\\n\\dot{\\hat{\\lambda}}_2(t) &= -\\lambda_1(t_f-t)\\left(\\frac{s_2B_1}{(B_1+V(t_f-t))^2}+kT(t_f-t)\\right) \\\\\n&+ \\lambda_2(t_f-t)\\left(\\frac{gB_2(1-u_2(t_f-t))}{(B_2 + V(t_f-t))^2} - cT(t_f-t)\\right).\n\\end{align*}\n\nThese we can solve with our RK4 solver and recover the original costate equations by simply indexing the array backwards.\n\n\\begin{problem}\nCreate a function that defines the costate equations and returns both equations in a single array. The function should be able to be passed into the RK4 solver. 
Use the global variables as defined in Problem \\ref{problem:hiv:state}.\n\n\\begin{lstlisting}\ndef lambda_hat(t,y,i):\n\t'''\n\tParameters\n\t---------------\n\tt : float\n\t\tthe time\n\ty : ndarray (2,)\n\t\tthe lambda_hat values at time t\n\ti : int\n\t\tindex for global variables, u and state.\n\tReturns\n\t--------------\n\ty_dot : ndarray (2,)\n\t\tthe derivative of the lambda_hats at time t.\n\t'''\n\tpass\n\\end{lstlisting}\n\n\\label{problem:hiv:costateequations}\n\\end{problem}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=5in]{solutions.png}\n\\caption{The solution to Problem \\ref{problem:hiv:numericalsolver}.}\n\\label{fig:hiv:solutions}\n\\end{figure}\n\n\n\nFinally, we can put these together to create our solver.\n\\begin{problem}\nCreate a numerical solver for the HIV two-drug model using the code below.\n\n\\begin{lstlisting}\nepsilon = 0.001\ntest = epsilon + 1\n\nwhile(test > epsilon):\n\toldu = u.copy()\n    \n\t#solve the state equations with forward iteration\n\t#state = RK4(...)\n    \n\t#solve the costate equations with backwards iteration\n\t#costate = RK4(...)[::-1]\n\t\n\t#solve for u1 and u2\n    \n\t#update control\n\tu[:,0] = 0.5*(u1 + oldu[:,0])\n\tu[:,1] = 0.5*(u2 + oldu[:,1])\n\n\t#test for convergence\n\ttest = abs(oldu - u).sum()\n\\end{lstlisting}\n\nRun with the following parameter values:\n\\begin{lstlisting}\na_1, a_2 = 0, 0\nb_1, b_2 = 0.02, 0.9\ns_1, s_2 = 2., 1.5\nmu = 0.002\nk = 0.000025\ng = 30.\nc = 0.007\nB_1, B_2 = 14, 1\nA_1, A_2 = 250000, 75\nT_0, V_0 = 400, 3\nt_f = 50\nn = 1000\n\\end{lstlisting}\nThese constants come from both references cited at the end of this lab. Your solutions should match Figure \\ref{fig:hiv:solutions}.\n\\label{problem:hiv:solver}\n\n\\label{problem:hiv:numericalsolver}\n\\end{problem}\n\nIn modern medicine, patients generally take combinations of five or more medications with different functions:\n\\begin{itemize}\n\\item Nucleotide Reverse Transcriptase Inhibitors (NRTIs), which prevent HIV from inserting its genes into host DNA;\n\\item Non-Nucleoside Reverse Transcriptase Inhibitors, which do the same job as NRTIs in a different fashion;\n\\item Protease Inhibitors, which cut up replicated HIV strands;\n\\item Fusion Inhibitors, which block the virus from entering the cells to begin with;\n\\item Integrase Inhibitors, which prevent the virus's replicated DNA from being inserted into a cell's DNA.\n\\end{itemize}\nThese drugs can interact with each other and have different side effects on the body. Also, doctors rotate medications as the virus develops resistance. \n\n% \\begin{thebibliography}{9}\n%\n% \\bibitem{paper}\n% \"SHORT COURSES ON THE MATHEMATICS OF BIOLOGICAL COMPLEXITY.\" \\textit{SHORT COURSES ON THE MATHEMATICS OF BIOLOGICAL COMPLEXITY}. Web. 15 Apr. 2015. http://www.math.utk.edu/~lenhart/smb2003.v2.html.\n%\n% \\bibitem{constants}\n% Kirschner, D. and Webb, G. 
F., `Immunotherapy of HIV-1 Infection', Journal of Biological Systems, 6(1), 71-83 (1998).\n% \\end{thebibliography}\n\n", "meta": {"hexsha": "9fb059c000cba4b64993dd7009fff9376c36bddb", "size": 14075, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Vol4B/HIV/HIV.tex", "max_stars_repo_name": "joshualy/numerical_computing", "max_stars_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Vol4B/HIV/HIV.tex", "max_issues_repo_name": "joshualy/numerical_computing", "max_issues_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Vol4B/HIV/HIV.tex", "max_forks_repo_name": "joshualy/numerical_computing", "max_forks_repo_head_hexsha": "9f474e36fe85ae663bd20e2f2d06265d1f095173", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-11-05T14:45:03.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-05T14:45:03.000Z", "avg_line_length": 47.8741496599, "max_line_length": 760, "alphanum_fraction": 0.7045825933, "num_tokens": 4600, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5662658369353583}}
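As a companion to the solver problem above, here is a minimal sketch of the forward-backward sweep, assuming state_equations and lambda_hat from the earlier problems are implemented and the global arrays and constants are already defined; the control updates are the clipped formulas from the optimal system. This is an illustration of the structure, not the lab's reference solution.

\begin{lstlisting}
import numpy as np

epsilon = 0.001
test = epsilon + 1
while test > epsilon:
    oldu = u.copy()

    # State: integrate forward in time from (T0, V0).
    state = RK4(state_equations, state0, 0, t_f, n)

    # Costate: integrate lambda_hat forward, then flip to undo t -> t_f - t.
    costate = RK4(lambda_hat, costate0, 0, t_f, n)[::-1]

    # Controls from the optimal system, clipped to the bounds [a_i, b_i].
    T, V = state[:, 0], state[:, 1]
    l1, l2 = costate[:, 0], costate[:, 1]
    u1 = np.clip(l1 * T / (2 * A_1), a_1, b_1)
    u2 = np.clip(-l2 / (2 * A_2) * g * V / (B_2 + V), a_2, b_2)

    # Damped update and convergence test, as in the skeleton above.
    u[:, 0] = 0.5 * (u1 + oldu[:, 0])
    u[:, 1] = 0.5 * (u2 + oldu[:, 1])
    test = np.abs(oldu - u).sum()
\end{lstlisting}

Averaging the new control with the old one, rather than replacing it outright, damps the iteration and helps the sweep converge.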
{"text": "\\section{Convergence rates in Sobolev norms}\n\\subsection{Compactly supported activation function}\nGiven a bounded domain $\\Omega\\subset\\mathbb R^d$, we consider the function\n$$\nf: \\Omega\\mapsto \\mathbb R\n$$\nLet \n$$\nf^e: \\mathbb R^d\\mapsto \\mathbb R\n$$ \nbe any extension of $f$ so that\n$$\nf^e|_\\Omega=f(x), \\quad x\\in \\Omega. \n$$\nMost time, we will drop the superscript $``e\"$ to still use $f$ to\ndenote an extension of $f$. \n\nConsider the Fourier transform:\n\\begin{equation}\n\\label{Fourier}\n\\hat f(\\omega)=\\frac{1}{(2\\pi)^d}\\int_{\\mathbb{R}^d}e^{-i\\omega\\cdot x}f(x)dx\n\\quad \\forall \\omega \\in \\mathbb R^d.\n\\end{equation}\nLet $x_\\Omega\\in\\Omega$ is such that\n\\begin{equation}\n\\label{xOmega}\nx_\\Omega\\in \\arg\\min_{y\\in\\bar\\Omega}(\\max_{x\\in\\bar\\Omega}|x-y|)\n\\end{equation}\nWe may call\n\\begin{equation}\n\\label{rOmega}\nr_\\Omega=\\max_{x\\in\\bar\\Omega}|x-x_\\Omega|\n\\end{equation}\nthe radius of $\\Omega$.\n\n\nUsing the Fourier inversion formula, we can have a Fourier\nrepresentation of $f(x)$ as follows\n\\begin{equation}\n\\label{eqn1}\nf(x)=\\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}\\hat{f}(\\omega)d\\omega\n\\quad \\forall x \\in \\Omega,\n\\end{equation}\nLet us write\n\\begin{equation}\n\\label{theta-omega}\n\\hat{f}(\\omega)=e^{i\\beta_1(\\omega)}|\\hat{f}(\\omega)|.   \n\\end{equation}\n\n%Define $\\mu_f$ as a complex measure on $\\mathbb{R}^d$ such that \n%\\begin{equation}\n%d\\mu_f = e^{i\\alpha(\\omega)}|\\hat{f}(\\omega)|d\\omega\n%\\end{equation}\n%and\n%\\begin{equation}\n%\\label{norm}\n%\\|\\mu_f\\| = \\int_{\\mathbb{R}^d} (1+|\\omega|)^2d|\\mu_f|(\\omega) \n%\\end{equation}\n%Also we have:\n%\\begin{equation}\n%f(x)=\\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}d\\mu_f\n%\\end{equation}\n\n\nLet $\\sigma$ be the activation function with compact support and bounded derivatives up to order $m$, that is\n\\begin{equation}\n\\|\\sigma\\|_{m,\\infty} := \\max_{0\\le\\alpha\\le m}\\sup_{x\\in\\mathbb{R}}|\\sigma^{(\\alpha)}(x)|\\le \\infty\n\\end{equation}\nSet\n\\begin{equation}\n\\label{eq:3}\n\\tilde x=x-x_\\Omega.  \n\\end{equation}\nChoose $a\\ne0$ such that $\\hat{\\sigma}(a)\\ne 0$, by Fourier transform, we have:\n\\begin{equation*}\n\\hat{\\sigma}(a)=\\frac{1}{2\\pi}\\int_{\\mathbb{R}}\\sigma(y)e^{-ia\n\ty}dy=\\frac{1}{2\\pi}\\int_{\\mathbb{R}}\\sigma(\\omega\\cdot \\tilde\nx+b)e^{-ia(\\omega\\cdot \\tilde x+\\theta)}db\n\\end{equation*}\nand we can also write\n\\begin{equation*}\n\\hat{\\sigma}(a)=e^{i\\beta_2(a)}|\\hat{\\sigma}(a)|.   \n\\end{equation*}\n\nThus\n\\begin{equation}\n|\\hat{\\sigma}(a)|=\\frac{1}{2\\pi}\\int_{\\mathbb{R}}\\sigma((\\omega\\cdot \\tilde{x}+b ) )e^{-ia(\\omega\\cdot \\tilde{x}+b )-i\\beta_2(a)}db.   
\n\\end{equation}\n%be the hat function defined as:\n%\\begin{equation*}\n%\\sigma(x)=\\left\\{\n%\\begin{aligned}\n%&x,\\qquad\\ \\  0\\le x\\le 1 \\\\\n%&2-x,\\quad 1\\le x\\le 2\\\\\n%&0,\\qquad\\quad \\mathrm{otherwise} \n%\\end{aligned}\n%\\right.\n%\\end{equation*}\n\nThen we can write\n\\begin{equation}\n\\begin{aligned}\nf(x)=& \\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}e^{i\\beta_1(\\omega)}|\\hat{f}(\\omega)|d\\omega\\\\\n=&\\int_{\\mathbb{R}^d}|a|^d e^{ia(\\omega\\cdot x)}e^{i\\beta_1(a\\omega)}|\\hat{f}(a\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\frac{|a|^d }{ |\\hat{\\sigma}(a)|} |\\hat{\\sigma}(a)|e^{ia(\\omega\\cdot x)+i\\beta_1(a\\omega)}|\\hat{f}(a\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\int_{\\mathbb{R}}\\frac{|a|^d}{2\\pi|\\hat{\\sigma}(a)|}\\sigma(\\omega\\cdot \\tilde{x}+b  )e^{-i\\beta_2(a)}\ne^{ia\\omega\\cdot x_\\Omega-iab}db e^{i\\beta_1(a\\omega)}|\\hat{f}(a\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\int_{\\mathbb{R}}\\sigma(\\omega\\cdot x+b )\\frac{|a|^d}{2\\pi|\\hat{\\sigma}(a)|}e^{i\\beta_3(a,\\omega,b)}|\\hat{f}(a\\omega)|db d\\omega\n\\end{aligned}\n\\end{equation}\nwhere $\\beta_3(a,\\omega,b)=a\\omega\\cdot x_\\Omega-ab-\\beta_2(a)+\\beta_1(a\\omega)$.\n\nLet\n$$\n\\theta=(\\omega, b).\n$$\nSince $f$ is real, we have\n$$\nf(x)={\\rm Re}\\; f(x) \n%\\int_{\\mathbb{R}^d}\\int_{\\mathbb{R}}\\frac{\\cos(\\beta_1(a\\omega)-ab-\\beta_2(a))}{2\\pi|\\hat{\\sigma}(a)|}\\sigma(\\omega\\cdot\n%\\tilde x+b )|a|^d|\\hat{f}(a\\omega)|db d\\omega\n=\\int_{\\mathbb{R}^{d+1}}\\kappa(\\theta)\\sigma(\\omega\\cdot\\tilde x+b )|a|^d|\\hat{f}(a\\omega)|d\\theta.\n$$\nwhere\n\\begin{equation}\n\\kappa(\\theta)\\equiv\n\\kappa(\\omega,b)=\\frac{\\cos\\beta_3(a,\\omega,b)}{2\\pi|\\hat{\\sigma}(a)|},\n\\end{equation}\n\n\n\nNotice that $\\sigma$ is compactly supported, assume that $\\mathrm{supp}(\\sigma)\\subset[-M_1,M_1]$. 
Then we have:\n\\begin{equation}\n\\begin{aligned}\nf(x) =\\int_{\\mathbb{R}^{d+1}}\\kappa(\\theta)\\mathbf{1}_{D_M}\\sigma(\\omega\\cdot \\tilde{x}+b)|a|^d|\\hat{f}(a\\omega)|d\\theta \n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\nM=\\max(M_1,r_\\Omega)\n\\end{equation}\n\\begin{equation}\nD_M=\\{\\theta=(\\omega,b):|b|\\le(1+|\\omega|)M  \\}\n\\end{equation}\n\nDefine \n\\begin{equation}\n\\gamma(f)=\\int_{\\mathbb{R}^{d+1}}\\mathbf{1}_{D_M}(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|d\\theta \n\\end{equation}\n\nNow\n\\begin{equation}\n\\begin{aligned}\nf(x) \t&=\\int_{\\mathbb{R}^{d+1}}\\kappa(\\theta)\\mathbf{1}_{D_M}\\sigma(\\omega\\cdot \\tilde{x}+b)|a|^d|\\hat{f}(a\\omega)|d\\theta \\\\\n&= \\int_{\\mathbb{R}^{d+1}}\\frac{\\kappa(\\theta)\\gamma(f)}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot \\tilde{x}+b)\\frac{(1+|\\omega|)^m\\mathbf{1}_{D_M}|a|^d|\\hat{f}(a\\omega)|}{\\gamma(f)}d\\theta\\\\\n&:=\\int_{\\mathbb{R}^{d+1}}\\frac{\\kappa(\\theta)\\gamma(f)}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot \\tilde{x}+b) d\\lambda\n\\end{aligned}\n\\end{equation}\nwhere\n\n$$\nd\\lambda= \\frac{(1+|\\omega|)^m\\mathbf{1}_{D_M}|a|^d|\\hat{f}(a\\omega)|}{\\gamma(f)}d\\theta ,\\quad \\int_{\\mathbb{R}^{d+1}}d\\lambda=1,\n$$\n\nand we can write\n\\begin{equation}\n\\tilde f(x)\\equiv\\frac{f(x)}{\\gamma(f)}\n=\\mathbb{E}(g(\\theta; x))\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{gx}\ng(\\theta; x)\n=\\frac{\\kappa(\\theta)}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot\\tilde x+b) \n\\end{equation}\nAnd \n\\begin{equation}\nD^\\alpha \\tilde f(x)=\\mathbb{E}(D^\\alpha g).\n\\end{equation}\n%\\mathbb{E}(\\kappa(\\omega,b)\\frac{D^\\alpha\\sigma(\\omega\\cdot\\tilde x+b)}{(1+|\\omega|)^m})\nLet $\\theta_i=(\\omega_i,b_i)_{i=1}^n$ be independently drawn from the same distribution $\\lambda$ and let \n$$\n\\bar g(\\theta_1,\\ldots,\\theta_n)\n=\\frac{1}{n} \\sum_{i=1}^ng(\\theta_i, x).\n$$\nThen\n$$\n\\bar{\\mathbb E} \\left(\\sum_{|\\alpha|\\le m}(D^\\alpha \\tilde f-D^\\alpha \\bar\ng)^2\\right) \n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (D^\\alpha \\tilde f-D^\\alpha \\bar g)^2\n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (\\mathbb E(D^\\alpha g)-D^\\alpha \\bar g)^2 \n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nTaking the $L^2$ norm (with a probability measure) on both sides of the above inequality\nand using Fubini's theorem,\n\\begin{equation}\n\\bar{\\mathbb{E}}(\\|\\tilde f(\\cdot)-\\bar g(\\cdot)\\|^2_{H^m(\\Omega)})\n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n\\end{equation}\nThus there exists $\\theta_i=(\\omega_i^*,b_i^*)_{i=1}^n$ and $\\beta_i$ such that \n$$\n\\|\\tilde f-\\frac{1}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)\\|_m^2\n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\n$$\n\\|f-f_n\\|_m^2\n\\le{\\gamma(f)^2\\over n}\n\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nwhere\n$$\nf_n(x)=\\frac{\\gamma(f)}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)=\\frac{1}{n} \\sum_{i=1}^n a_i\\sigma(\\omega_i^*\\cdot\\tilde{x}+b_i^*).\n$$\nand where \n$$\na_i= \\frac{\\gamma(f)\\kappa(\\theta_i^*)}{(1+|\\omega_i^*|)^m} \\le \\frac{\\gamma(f)}{2\\pi|\\hat{\\sigma}(a)|}\n$$\n\nNoting that $\\frac{1}{1+|\\omega|}\\le1$, we have\n$$\n\\|D^\\alpha g\\|_{\\infty}=\n\\max_{0\\le|\\alpha|\\le m}\\max_{x\\in\\mathbb{R}^d,\\theta\\in\\mathbb{R}^{d+1}}|\\kappa(\\theta)|\\frac{|D^\\alpha\\sigma(\\omega\\cdot\\tilde x+b)|}{(1+|\\omega|)^m}\\le \\frac{\\|\\sigma\\|_{m,\\infty}}{2\\pi|\\hat{\\sigma}(a)|}\n$$\nThen we 
have:\n\n\\begin{equation}\n\\begin{aligned}\n\\gamma(f)&=|a|^d\\int_{\\mathbb{R}^d}(1+|\\omega|)^m \\int_{-(1+|\\omega|)M}^{(1+|\\omega|)M}db |\\hat{f}(a\\omega)|d\\omega \\\\\n&= |a|^d\\int_{\\mathbb{R}^d}2M(1+|\\omega|)^{m+1} |\\hat{f}(a\\omega)|d\\omega\\\\\n&\\le 2M \\int_{\\mathbb{R}^d}(1+|\\omega/a|)^{m+1} |\\hat{f}(\\omega)|d\\omega\\\\\n\\end{aligned}\n\\end{equation}\n\nDenote\n$$\n\\|f\\|_{m+1} = \\int_{\\mathbb{R}^d}(1+|\\omega/a|)^{m+1} |\\hat{f}(\\omega)|d\\omega.\n$$\nThen there exists $(\\omega_i^*,b_i^*)_{i=1}^n$ and $\\beta_i$ such that \n$$\n\\|f-f_n\\|_m\n\\le C\\frac{\\|\\sigma\\|_{m,\\infty}}{\\sqrt{n}}\\|f\\|_{m+1} \n$$\nwhere $C=\\frac{M\\sqrt{C_{m+d}^m}}{\\pi|\\hat{\\sigma}(a)|} $.\n\n\\subsection{Periodic activation function}\n\nNow $\\sigma$ is periodic with period $T$. Then we can write the Fourier series for $\\sigma$:\n$$\n\\sigma(x) =  \\sum_{n=-\\infty}^{\\infty}c_n e^{i\\frac{2\\pi nx}{T}}.\n$$\nChoose $n_1$ such that $c_{n_1}\\ne 0$; we also have:\n$$\nc_{n_1} = \\frac{1}{T}\\int_{0}^{T}\\sigma(x) e^{-i\\frac{2\\pi n_1x}{T}}dx= |c_{n_1}|e^{i\\beta_2(c_{n_1})}.\n$$\nThen for $f$:\n\\begin{equation}\n\\begin{aligned}\nf(x)=& \\int_{\\mathbb{R}^d}e^{i\\omega\\cdot x}e^{i\\beta_1(\\omega)}|\\hat{f}(\\omega)|d\\omega\\\\\n=&\\int_{\\mathbb{R}^d}\\frac{1}{c_{n_1}}\\frac{1}{T}\\int_{x_0}^{x_0+T}\\sigma(t) e^{-i\\frac{2\\pi n_1t}{T}}dte^{i(\\omega\\cdot x)+i\\beta_1(\\omega)}|\\hat{f}(\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\frac{1}{|c_{n_1}|e^{i\\beta_2(c_{n_1})}}\\frac{1}{2\\pi n_1}\\int_{0}^{2\\pi n_1}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b)) e^{-i(\\omega\\cdot x+b)}dbe^{i(\\omega\\cdot x)+i\\beta_1(\\omega)}|\\hat{f}(\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\frac{1}{2|c_{n_1}|\\pi n_1}\\int_{0}^{2\\pi n_1}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b)) e^{i(\\beta_1(\\omega)-b-\\beta_2(c_{n_1}))}db|\\hat{f}(\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d}\\int_{0}^{2\\pi n_1}\\kappa(b,\\omega)\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b)) db|\\hat{f}(\\omega)|d\\omega \\\\\n\\end{aligned}\n\\end{equation}\nHere we used the substitution $t=\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b)$ and \n\\begin{equation}\n\\kappa = \\frac{\\cos(\\beta_1(\\omega)-b-\\beta_2(c_{n_1}))}{2|c_{n_1}|\\pi n_1},\n\\end{equation}\nwhich is then the same as what we did for compactly supported activation functions.\n\\begin{equation}\n\\begin{aligned}\nf(x)=&\\int_{\\mathbb{R}^d}\\int_{0}^{2\\pi n_1}\\kappa(b,\\omega)\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b)) db|\\hat{f}(\\omega)|d\\omega \\\\\n=&\\int_{\\mathbb{R}^d\\times[0,2\\pi n_1]}\\frac{\\kappa(\\theta)}{(1+|\\omega|)^m}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b))(1+|\\omega|)^m|\\hat{f}(\\omega)|d\\theta \\\\\n=&\\int_{\\mathbb{R}^d\\times[0,2\\pi n_1]}\\frac{\\kappa(\\theta)\\gamma(f)}{(1+|\\omega|)^m}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b))\\frac{(1+|\\omega|)^m|\\hat{f}(\\omega)|}{\\gamma(f)}d\\theta \\\\\n:=&\\int_{\\mathbb{R}^{d+1}}\\frac{\\kappa(\\theta)\\gamma(f)}{(1+|\\omega|)^m}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b))d\\lambda\n\\end{aligned}\n\\end{equation}\nwhere\n$$\n\\gamma(f)=\\int_{\\mathbb{R}^{d+1}}\\mathbf{1}_{0\\le b\\le 2\\pi n_1}(1+|\\omega|)^m|\\hat{f}(\\omega)|d\\theta =2\\pi n_1\\int_{\\mathbb{R}^d}(1+|\\omega|)^m|\\hat{f}(\\omega)|d\\omega\n$$\nand\n$$\nd\\lambda= \\frac{(1+|\\omega|)^m|\\hat{f}(\\omega)|\\mathbf{1}_{0\\le b\\le 2\\pi n_1}}{\\gamma(f)}d\\theta,\\quad \\int_{\\mathbb{R}^{d+1}}d\\lambda=1,\n$$\n\n\nWe can write\n\\begin{equation}\n\\tilde 
f(x)\\equiv\\frac{f(x)}{\\gamma(f)}\n=\\mathbb{E}(g(\\theta; x))\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{gx}\ng(\\theta; x)\n=\\frac{\\kappa(\\theta)}{(1+|\\omega|)^m}\\sigma(\\frac{T}{2\\pi n_1}(\\omega\\cdot x+b))\n\\end{equation}\nAnd \n\\begin{equation}\nD^\\alpha \\tilde f(x)=\\mathbb{E}(D^\\alpha g).\n\\end{equation}\n%\\mathbb{E}(\\kappa(\\omega,b)\\frac{D^\\alpha\\sigma(\\omega\\cdot\\tilde x+b)}{(1+|\\omega|)^m})\nLet $\\theta_i=(\\omega_i,b_i)_{i=1}^n$ be independently drawn from the same distribution $\\lambda$ and let \n$$\n\\bar g(\\theta_1,\\ldots,\\theta_n)\n=\\frac{1}{n} \\sum_{i=1}^ng(\\theta_i, x).\n$$\nThen\n$$\n\\bar{\\mathbb E} \\left(\\sum_{|\\alpha|\\le m}(D^\\alpha \\tilde f-D^\\alpha \\bar\ng)^2\\right) \n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (D^\\alpha \\tilde f-D^\\alpha \\bar g)^2\n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (\\mathbb E(D^\\alpha g)-D^\\alpha \\bar g)^2 \n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nTaking the $L^2$ norm (with a probability measure) on both sides of the above inequality\nand using Fubini's theorem,\n\\begin{equation}\n\\bar{\\mathbb{E}}(\\|\\tilde f(\\cdot)-\\bar g(\\cdot)\\|^2_{H^m(\\Omega)})\n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n\\end{equation}\nThus there exists $\\theta_i=(\\omega_i^*,b_i^*)_{i=1}^n$ and $\\beta_i$ such that \n$$\n\\|\\tilde f-\\frac{1}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)\\|_m^2\n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\n$$\n\\|f-f_n\\|_m^2\n\\le{\\gamma(f)^2\\over n}\n\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nwhere\n$$\nf_n(x)=\\frac{\\gamma(f)}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)=\\frac{1}{n} \\sum_{i=1}^n a_i\\sigma(\\frac{T}{2\\pi n_1}(\\omega_i^*\\cdot x+b_i^*)).\n$$\nand where \n$$\na_i= \\frac{\\gamma(f)\\kappa(\\theta_i^*)}{(1+|\\omega_i^*|)^m} \\le \\frac{\\gamma(f)}{2|c_{n_1}|\\pi n_1}.\n$$\n\nNoting that $\\frac{1}{1+|\\omega|}\\le1$, we have\n$$\n\\|D^\\alpha g\\|_{\\infty}=\n\\max_{0\\le|\\alpha|\\le m}\\max_{x\\in\\mathbb{R}^d,\\theta\\in\\mathbb{R}^{d+1}}|\\kappa(\\theta)|\\frac{|D^\\alpha\\sigma( \\frac{T}{2\\pi n_1}(\\omega\\cdot x+b) )|}{(1+|\\omega|)^m}\\le \\frac{k\\|\\sigma\\|_{m,\\infty}}{2|c_{n_1}|\\pi n_1},\n$$\nwhere\n$$\nk = \\max(\\frac{T}{2\\pi n_1},(\\frac{T}{2\\pi n_1})^m).\n$$\n\n\nDenote\n$$\n\\||f\\||_{m} = \\int_{\\mathbb{R}^d}(1+|\\omega|)^{m} |\\hat{f}(\\omega)|d\\omega.\n$$\nThen there exists $(\\omega_i^*,b_i^*)_{i=1}^n$ and $\\beta_i$ such that \n$$\n\\|f-f_n\\|_m\n\\le C\\frac{\\|\\sigma\\|_{m,\\infty}}{\\sqrt{n}}\\||f\\||_{m} \n$$\nwhere $C=\\frac{k\\sqrt{C_{m+d}^m}}{2|c_{n_1}|\\pi n_1} $.\n\n\n\\subsubsection{Exponential Decay}\nSuppose there exists $\\eta>0$ such that for all $0\\le k \\le m$\n$$\n\\|e^{\\eta|t|}D^{k}\\sigma(t)\\|_{L^1(\\mathbb R)}\\le \\zeta<\\infty,\n$$\nand that $|e^{\\eta|t|}D^{k}\\sigma(t)|$ is also bounded pointwise for all $t$ and all $0\\le k \\le m$; with a slight abuse of notation, we still use $\\zeta$ to denote this bound:\n$$\ne^{\\eta|t|}|D^{k}\\sigma(t)|\\le \\zeta.\n$$\n\nWe have\n\\begin{equation}\n\t\\begin{aligned}\n\t\tf(x)=&\\int_{\\mathbb{R}^{d}}\\int_{\\mathbb{R}}\\kappa(\\omega,b)\\sigma(\\omega\\cdot x+b)|a|^d|\\hat{f}(a\\omega)|dbd\\omega\\\\\n\t\t=&\\int_{\\mathbb{R}^{d}}\\int_{\\mathbb{R}}\\frac{\\kappa(\\omega,b)}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot 
x+b)(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|dbd\\omega.\\\\\n\t\t=&\\int_{\\mathbb{R}^{d}}\\int_{\\mathbb{R}}\\frac{\\kappa(\\omega,b)e^{\\eta\\max\\{0,|b|-M|\\omega|\\}}}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot x+b)\\frac{(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|}{e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}}dbd\\omega.\\\\\n\t\t=&\\int_{\\mathbb{R}^{d+1}}\\frac{\\kappa(\\theta)e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}\\gamma(f)}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot x+b) d\\lambda\n\t\\end{aligned}\n\\end{equation}\n\nwhere\n$$\n\\gamma(f)=\\int_{\\mathbb{R}^{d+1}}\\frac{(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|}{e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}}d\\theta\n$$\nand\n$$\nd\\lambda= \\frac{(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|}{e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}\\gamma(f)}d\\theta,\\quad \\int_{\\mathbb{R}^{d+1}}d\\lambda=1,\n$$\n\n\nWe can write\n\\begin{equation}\n\t\\tilde f(x)\\equiv\\frac{f(x)}{\\gamma(f)}\n\t=\\mathbb{E}(g(\\theta; x))\n\\end{equation}\nwhere\n\\begin{equation}\n\t\\label{gx}\n\tg(\\theta; x))\n\t=\\frac{\\kappa(\\theta)e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}}{(1+|\\omega|)^m}\\sigma(\\omega\\cdot x+b)\n\\end{equation}\nAnd \n\\begin{equation}\n\tD^\\alpha \\tilde f(x)=\\mathbb{E}(D^\\alpha g).\n\\end{equation}\n%\\mathbb{E}(\\kappa(\\omega,b)\\frac{D^\\alpha\\sigma(\\omega\\cdot\\tilde x+b)}{(1+|\\omega|)^m})\n\nAlso we should have:\n\\begin{equation*}\n\t\\begin{aligned}\n\t\t\\|D^\\alpha g\\|_{\\infty}&\\le \\frac{1}{2\\pi|\\hat{\\sigma}(a)|}e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}|D^\\alpha\\sigma(\\omega\\cdot x+b)|\\\\\n\t\t&\\le\\frac{1}{2\\pi|\\hat{\\sigma}(a)|}e^{\\eta|b+\\omega\\cdot x|}|D^\\alpha\\sigma(\\omega\\cdot x+b)|\\\\\n\t\t&\\le\\frac{\\zeta}{2\\pi|\\hat{\\sigma}(a)|}\n\t\\end{aligned}\n\\end{equation*}\n\n\nLet $\\theta_i=(\\omega_i,b_i)_{i=1}^n$ be independently drawn from the same distribution $\\lambda$ and let \n$$\n\\bar g(\\theta_1,\\ldots,\\theta_n)\n=\\frac{1}{n} \\sum_{i=1}^ng(\\theta_i, x).\n$$\nThen\n$$\n\\bar{\\mathbb E} \\left(\\sum_{|\\alpha|\\le m}(D^\\alpha \\tilde f-D^\\alpha \\bar\ng)^2\\right) \n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (D^\\alpha \\tilde f-D^\\alpha \\bar g)^2\n=\\sum_{|\\alpha|\\le m}\\bar{\\mathbb E} (\\mathbb E(D^\\alpha g)-D^\\alpha \\bar g)^2 \n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nTaking the $L^2$ norm (with a probability measure) on both side of the above inequality\nand using Fubini's theorem,\n\\begin{equation}\n\t\\mathbb{\\bar{\\mathbb{E}}}(\\|\\tilde f(\\cdot)-g(\\theta,\\cdot)\\|^2_{H^m(\\Omega)})\n\t\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n\\end{equation}\nThus there exists $\\theta_i=(\\omega_i^*,b_i^*)_{i=1}^n$ and $\\beta_i$ such that \n$$\n\\|\\tilde f-\\frac{1}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)\\|_m^2\n\\le{1\\over n}\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\n$$\n\\|f-f_n\\|_m^2\n\\le{\\gamma(f)^2\\over n}\n\\sum_{|\\alpha|\\le m}\\|D^\\alpha g\\|_\\infty^2.\n$$\nwhere\n$$\nf_n(x)=\\frac{\\gamma(f)}{n}\\sum_{i=1}^{n} g(\\theta_i; \\cdot)=\\frac{1}{n} \\sum_{i=1}^n a_i\\sigma(\\omega_i^*\\cdot\\tilde{x}+b_i^*).\n$$\nwhere \n$$\na_i= \\frac{\\gamma(f)\\kappa(\\theta_i^*)e^{\\max\\{0,\\eta(|b_i^*|-M|\\omega_i^*|)\\}}}{(1+|\\omega_i^*|)^m}\n$$\n\nNow with\n$$\n\\||f\\||_{m+1} = \\int_{\\mathbb{R}^d}(1+|\\omega/a|)^{m+1} |\\hat{f}(\\omega)|d(\\omega)\n$$\nwe need to give an estimation on 
$\\gamma(f)$:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\gamma(f)=\\int_{\\mathbb{R}^{d+1}}\\frac{(1+|\\omega|)^m|a|^d|\\hat{f}(a\\omega)|}{e^{\\max\\{0,\\eta(|b|-M|\\omega|)\\}}}dbd\\omega\\\\\n\t\t=&\\int_{\\mathbb{R}^d}|a|^d(1+|\\omega|)^m \\int_{-|\\omega|M}^{|\\omega|M}db |\\hat{f}(a\\omega)|d\\omega + \\int_{\\mathbb{R}^d}|a|^d(1+|\\omega|)^m \\int_{|b|\\ge M|\\omega|}\\frac{1}{e^{\\eta(|b|-M|\\omega|)}}db |\\hat{f}(a\\omega)|d\\omega  \\\\\n\t\t\\le&\\||f\\||_{m+1}+C\\||f\\||_{m}\\\\\n\t\t\\le&C\\||f\\||_{m+1}\n\t\\end{aligned}\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\n%\\subsection{Noncompactly supported activation function}\n%\n%Now $\\sigma$ is not compactly supported, however, we can still write $f$ as:\n%$$\n%f(x)=\\int_{\\mathbb{R}^{d+1}}\\kappa(\\theta)\\sigma(\\omega\\cdot\\tilde x+b )|a|^d|\\hat{f}(a\\omega)|d\\theta.\n%$$\n%\n%denote \n%$$\n%f_M(x)=\\int_{\\mathbb{R}^{d+1}}\\kappa(\\theta)\\mathbf{1}_{D_M}\\sigma(\\omega\\cdot \\tilde{x}+b)|a|^d|\\hat{f}(a\\omega)|d\\theta\n%$$\n%where for $M >0$, \n%\\begin{equation}\n%D_M=\\{\\theta=(\\omega,b):|b|\\le(1+|\\omega|)M  \\}\n%\\end{equation}\n%by the conclusion in last subsection, we have :\n%$$\n%\\|f_M-f_n\\|_m\n%\\le C\\frac{\\|\\sigma\\|_{m,\\infty}}{\\sqrt{n}}\\|f\\|_{m+1} \n%$$\n%where $C=\\frac{M\\sqrt{C_{m+d}^m}}{\\pi|\\hat{\\sigma}(a)|} $.\n%\n%Now it suffices to estimate $\\|f-f_M\\|_m$:\n%\\begin{equation}\n%f_r:=f-f_M=\\int_{\\mathbb{R}^{d}}\\int_{|b|>(1+|\\omega|)M}\\kappa(\\omega,b)\\sigma(\\omega\\cdot \\tilde{x}+b)|a|^d|\\hat{f}(a\\omega)|d\\omega db\n%\\end{equation}\n%\n%Since we are in a bounded domain $\\Omega$, we can choose $M>0$ such that $x\\in\\Omega\\subset B_M$ where $B_M$ is the ball of radius $M$ in $\\mathbb{R}^d$.\n%If $x\\in\\Omega$ and $|b|>(1+|\\omega|)M$, then:\n%$$\n%|\\omega\\cdot x+b|\\ge |b|-|\\omega\\cdot x| \\ge M\n%$$\n%\n%Hence for all $x\\in\\Omega$ and $0\\le|\\alpha|\\le m$, since $(1+|\\omega|)^k$ is non decreasing w.r.t k, we have:\n%\\begin{equation}\n%\\begin{aligned}\n%|D^{\\alpha}f_r|&\\le\\frac{1}{2\\pi|\\hat{\\sigma}(a)|}\\|D^{|\\alpha|}\\sigma(t)\\|_{L^1(|t|\\ge M)}\\int_{\\mathbb{R}^d}(1+|\\omega/a|)^{m} |\\hat{f}(\\omega)|d(\\omega)\\\\\n%&= \\frac{1}{2\\pi|\\hat{\\sigma}(a)|}\\|D^{|\\alpha|}\\sigma(t)\\|_{L^1(|t|\\ge M)}\\|f\\|_{m}\\\\\n%&\\le\\frac{1}{2\\pi|\\hat{\\sigma}(a)|}\\|D^{|\\alpha|}\\sigma(t)\\|_{L^1(|t|\\ge M)}\\|f\\|_{m+1}\n%\\end{aligned}\n%\\end{equation}\n%Thus take the $L^2$ norm w.r.t a probability measure on $\\Omega$, we can have:\n%\\begin{equation}\n%\\int_{\\mathbb{R}^{d}}|D^{\\alpha}f_r|^2\\le \\frac{1}{(2\\pi|\\hat{\\sigma}(a)|)^2}\\|D^{|\\alpha|}\\sigma(t)\\|^2_{L^1(|t|\\ge M)}\\|f\\|_{m+1}^2\n%\\end{equation}\n%\n%Summing over $0\\le|\\alpha|\\le m$, and let\n%$$\n%R_{M}(\\sigma)=\\max_{0\\le k\\le m}\\|D^{k}\\sigma(t)\\|^2_{L^1(|t|\\ge M)}\n%$$\n%we have:\n%\\begin{equation}\n%\\|f_r\\|_{H^m}^2\\le \\frac{C_{m+d}^m}{(2\\pi|\\hat{\\sigma}(a)|)^2}R_{M}^2(\\sigma)\\|f\\|_{m+1}^2\n%\\end{equation}\n%\n%Thus in the end we have:\n%\\begin{equation}\n%\\begin{aligned}\n%\\|f-f_n\\|_{H^m}&\\le\\|f_r\\|_{H^m}^2+\\|f_M-f_n\\|_m\\\\\n%&\\le \\frac{\\sqrt{C_{m+d}^m}}{2\\pi|\\hat{\\sigma}(a)|}R_{M}(\\sigma)\\|f\\|_{m+1}+\\frac{M\\sqrt{C_{m+d}^m}}{\\pi|\\hat{\\sigma}(a)|} \\frac{\\|\\sigma\\|_{m,\\infty}}{\\sqrt{n}}\\|f\\|_{m+1} \\\\\n%&\\le  \\frac{\\sqrt{C_{m+d}^m}}{\\pi|\\hat{\\sigma}(a)|}\\|f\\|_{m+1}(\\frac{R_{M}(\\sigma)}{2}+\\frac{M\\|\\sigma\\|_{m,\\infty}}{\\sqrt{n}})\n%\\end{aligned}\n%\\end{equation}\n", "meta": {"hexsha": 
"11482b12c36aa26b9553def553face394a2cc7ad", "size": 19703, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/Barron0-1.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/Barron0-1.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/Barron0-1.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.0201096892, "max_line_length": 231, "alphanum_fraction": 0.6050347663, "num_tokens": 8982, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7310585727705126, "lm_q1q2_score": 0.5662658309276405}}
{"text": "\\documentclass[10pt,letterpaper]{article}\n\n\\usepackage{amsmath, amsfonts}\n\\usepackage{amsthm}\n\\usepackage[usenames,dvipsnames]{color}\n\\usepackage{paralist}   % For better control of itemizations\n\\usepackage[margin=1in]{geometry}\n\\usepackage{parallel}\n\n\\newtheorem{thm}{Theorem}\n\n% New commands\n\\newcommand{\\QR}{\\ensuremath{QR} }\n\n\\title{\\QR Algorithm}\n\\author{Jeremy Lloyd Conlin}\n\n\\begin{document}\n\\maketitle\n\\begin{abstract}\n    This document is used to help formulate some of the important proofs resulting from the QR algorithm.  Many of them are exercises from David S. Watkins' \\emph{Fundamentals of Matrix Computations}.\n\\end{abstract}\n\n\\section{General Infomation}\nArnoldi factorization\n\\begin{equation}\n    A = \\QR\n    \\label{eq:QRFactorization}\n\\end{equation}\nRecombining:\n\\begin{equation}\n    \\hat{A} = RQ.\n\\end{equation}\n\n\\subsection{QR Iteration}\n\\begin{subequations}\\begin{align}\n    A_{m-1} &= Q_mR_m \\\\\n    A_{m} &= R_mQ_m,\n\\end{align}\n    \\label{eq:QRIteration}\n\\end{subequations}\nAlternatively\n\\begin{subequations}\\begin{align}\n    A_{m} &= Q_m^*A_{m-1}Q_m \\\\\n    A_{m} &= R_mA_{m-1}R_m^{-1}.\n\\end{align}\n    \\label{eq:QRIterationAlternate}\n\\end{subequations}\nAlso\n\\begin{equation}\n    Q_mA_m = A_{m-1}Q_m\n\\end{equation}\n\n\\subsection{Shifted QR Iteration}\n\\begin{subequations}\\begin{gather}\n    A_{m-1} - \\nu_{m-1} I = Q_mR_m \\label{eq:ShiftedQRFactorization} \\\\\n    A_m = R_mQ_m = \\nu_{m-1}I\n\\end{gather}\n    \\label{eq:ShiftedQRIteration}\n\\end{subequations}\nAlternatively\n\\begin{subequations}\\begin{align}\n    A_m &= Q_m^*A_{m-1}Q_m \\\\\n    A_m &= R_mA_{m-1}R_{m-1}^{-1}.\n\\end{align}\n    \\label{eq:ShifteDQRIterationAlternative}\n\\end{subequations}\n\n\n\\section{$p(A) = \\hat{Q}_j \\hat{R}_j$ (Exercise 6.2.36)} \\label{sec:QRPolynomial}\nThis exercise shows several things.  First it shows that the shifted \\QR algorithm creates a polynomials of $A$ with roots at the chosen shifts.\nLet $A_m$ be generated by \\QR algorithm:\n\\begin{subequations}\\begin{align}\n    A_{m-1} - \\nu_{m-1}I = Q_mR_m \\\\\n    A_m = R_mQ_m + \\nu_{m-1}\n\\end{align}\\end{subequations}\nwhere $A_0 = A$.  Let $\\hat{Q}_m = Q_1\\cdots Q_m$ and $\\hat{R}_m = R_m \\cdots R_1$.\n\n\\begin{enumerate}[(a)]\n    \\item Show that \n        \\begin{equation}\n            A_m = \\hat{Q}_m^*A\\hat{Q}_m\n        \\end{equation}\n        for $m=1,2,3,\\ldots$.\n\n        \\begin{enumerate}[$m=1$: ]\n            \\item $\\hat{Q}_1 = Q_1$, \n                \\begin{equation}\n                    \\boxed{A_1 = Q_1^*AQ_1} \\text{ by definition, see Eq. 
\\ref{eq:ShifteDQRIterationAlternative} }.\n                \\end{equation}\n\n            \\item $\\hat{Q}_2 = Q_1Q_2, \\hspace{2ex} \\hat{Q}_2^* = Q_2^*Q_1^*$\n                \\begin{subequations}\\begin{align}\n                    \\text{we know: } A_2 &= Q_2^*A_1Q_2 \\\\\n                     &= Q_2^*\\left(Q_1^*AQ_1\\right)Q_2 \\\\\n                    A_2 &= \\hat{Q}_2^*A\\hat{Q}_2\n                \\end{align}\\end{subequations}\n\n            \\item $\\hat{Q}_3 = Q_1Q_2Q_3, \\hspace{2ex} \\hat{Q}_3^* = Q_3^*Q_2^*Q_1^*$\n                \\begin{subequations}\\begin{align}\n                    \\text{we know: } A_3 &= Q_3^*A_2Q_3 \\\\\n                     &= Q_3^*\\left[Q_2^*A_1Q_2\\right]Q_3 \\\\\n                     &= Q_3^*\\left[Q_2^*\\left(Q_1^*AQ_1\\right)Q_2\\right]Q_3 \\\\\n                    A_3 &= \\hat{Q}_3^*A\\hat{Q}_3\n                \\end{align}\\end{subequations}\n                Clearly this can continue for all $m$.\n        \\end{enumerate}\n\n    \\item Deduce that \n        \\begin{equation}\n            \\left(A-\\nu_mI\\right)\\hat{Q}_m = \\hat{Q}_m\\left(A_m - \\nu_mI\\right)\n        \\end{equation}\n        for $m=1,2,3,\\ldots$.\n\n        \\begin{enumerate}[$m=1$: ]\n            \\item $\\hat{Q}_1 = Q_1$\n                \\begin{equation}\n                    \\left(A-\\nu I\\right)Q_1 = AQ_1 - \\nu_1Q_1\n                    \\label{eq:m1Results}\n                \\end{equation}\n\n                First we must make note:\n                \\begin{subequations}\\begin{align}\n                    A_1 &= Q_1^*AQ_1 \\\\[2mm]\n                    Q_1A_1 &= Q_1Q_1^*AQ_1 \\\\\n                    Q_1A_1 &= AQ_1\n                \\end{align}\\end{subequations}\n                Plug this result into Eq. (\\ref{eq:m1Results}) to get our answer\n                \\begin{subequations}\\begin{align}\n                    \\left(A-\\nu I\\right)Q_1 &= Q_1A_1 - \\nu_1Q_1 \\\\\n                    \\left(A-\\nu I\\right)\\hat{Q}_1 &= \\hat{Q}_1\\left( A_1 - \\nu_1 I \\right)\n                \\end{align}\\end{subequations}\n\n            \\item $\\hat{Q}_2 = Q_1Q_2, \\hspace{2ex} \\hat{Q}_2^* = Q_2^*Q_1^*$\n                \\begin{equation}\n                    \\left(A-\\nu_2 I\\right)\\hat{Q}_2 = \\left(A-\\nu_2 I\\right)Q_1Q_2\n                    \\label{eq:m1Results}\n                \\end{equation}\n\n                Again:\n                \\begin{subequations}\\begin{align}\n                    A_2 &= Q_2^*A_1Q_2 \\\\[2mm]\n                    Q_2A_2 &= Q_2Q_2^*A_1Q_2 \\\\\n                    Q_2A_2 &= A_1Q_2\n                \\end{align}\\end{subequations}\n                We will use this result shortly\n                \\begin{subequations}\\begin{align}\n                    \\left(A-\\nu_2 I\\right)\\hat{Q}_2 &= \\left(A-\\nu_2 I\\right)Q_1Q_2 \\\\\n                    &= \\left(AQ_1-\\nu_2 Q_1\\right)Q_2 \\\\\n                    &= \\left(Q_1A_1-\\nu_2 Q_1\\right)Q_2 \\\\\n                    &= Q_1\\left(A_1-\\nu_2 I\\right)Q_2 \\\\\n                    &= Q_1\\left(A_1Q_2-\\nu_2 Q_2\\right) \\\\\n                    &= Q_1\\left(Q_2A_2-\\nu_2 Q_2\\right) \\\\[2mm]\n                    \\left(A-\\nu_2 I\\right)\\hat{Q}_2 &= Q_1Q_2\\left(A_2-\\nu_2 I\\right)\n                \\end{align}\\end{subequations}\n        \\end{enumerate}\n        I think it's clear that the rest follows by induction.\n\n    \\item Now prove by induction on $m$ that\n        \\begin{equation}\n            \\left(A - \\nu_{m-1}I\\right)\\cdots\\left(A-\\nu_0I\\right) = 
\\hat{Q}_m\\hat{R}_m, \\hspace{5ex}\n        \\end{equation}\n        for $m=1,2,3,\\ldots$.\n\n        \\begin{enumerate}[$m=1$: ]\n            \\item $\\left(A-\\nu_0I\\right) = \\hat{Q}_1\\hat{R}_1$\n                \\begin{equation}\n                    \\left(A-\\nu_0I\\right) = Q_1R_1\n                \\end{equation}\n                This is true by construction of shifted \\QR algorithm.\n\n            \\item $\\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) = \\hat{Q}_2\\hat{R}_2$\n                \\begin{subequations}\\begin{align}\n                    \\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) &= \\left(A-\\nu_1I\\right)\\left(Q_1R_1\\right) \\\\\n                     &= \\left[ \\left( A - \\nu_1 \\right)Q_1 \\right]R_1 \\\\\n                     &= \\left[ \\hat{Q}_1\\left(A_1 - \\nu_1I\\right) \\right]R_1 \\\\\n                     &= \\hat{Q}_1\\left(Q_2R_2\\right)R_1 \\\\\n                    \\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) &= \\hat{Q}_2\\hat{R}_2\n                \\end{align}\\end{subequations}\n\n            \\item $\\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) = \\hat{Q}_3\\hat{R}_3$\n                \\begin{subequations}\\begin{align}\n                    \\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) &= \\left(A-\\nu_2I\\right)\\hat{Q}_2\\hat{R}_2 \\\\\n                    &= \\left(A-\\nu_2I\\right)Q_1Q_2\\hat{R}_2 \\\\\n                    &= \\left(AQ_1-\\nu_2Q_1\\right)Q_2\\hat{R}_2 \\\\\n                    &= \\left(Q_1A_1-\\nu_2Q_1\\right)Q_2\\hat{R}_2 \\\\\n                    &= Q_1\\left(A_1-\\nu_2I\\right)Q_2\\hat{R}_2 \\\\\n                    &= Q_1\\left(A_1Q_2-\\nu_2Q_2\\right)\\hat{R}_2 \\\\\n                    &= Q_1\\left(Q_2A_2-\\nu_2Q_2\\right)\\hat{R}_2 \\\\\n                    &= Q_1Q_2\\left(A_2-\\nu_2I\\right)\\hat{R}_2 \\\\\n                    &= Q_1Q_2\\left(Q_3R_3\\right)R_2R_1 \\\\\n                    \\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)\\left(A-\\nu_0I\\right) &= \\hat{Q}_3\\hat{R}_3\n                \\end{align}\\end{subequations}\n        \\end{enumerate}\n        The rest follows by induction.\n\n\\end{enumerate}\n\n\n\\section{Proving Theorem 6.4.11, (Exercise 6.4.22)}\n\\begin{thm}\n    Suppose\n    \\begin{equation*}\n        AV_m = V_mH_m + v_{m+1}h_{m+1,m}e_m^T\n    \\end{equation*}\n    and let $p$ be a polynomial of degree $j < m$.  Then\n    \\begin{equation}\n        p(A)V_m = V_mp(H_m) + E_j,\n        \\label{eq:ArnoldiPolynomial}\n    \\end{equation}\n    where $E_j \\in \\mathbb{C}^{n \\times m}$ is identically zero, except in the last $j$ columns.\n    \\label{thm:First}\n\\end{thm}\n\nWe will prove Theorem \\ref{thm:First} by induction\n\\begin{enumerate}[(a)]\n    \\item Show that the theorem holds when $j=1$.  In this case $p(z) = \\alpha_1\\left(z-\\nu_1\\right)$, where $\\alpha_1$ is some nonzero constant.\n        \\begin{subequations}\n            \\begin{align}\n                \\left(A-\\nu_1I\\right)V_m &= V_m\\left(H_m-\\nu_1I\\right) + v_{m+1}h_{m+1,m}e_m^T \\\\\n                p(A)V_m &= V_mp(H_m) + E_1 \\\\\n            \\end{align}\n            \\label{eq:j=1}\n        \\end{subequations}\n\n        This one is trivial.\n    \\item Show that the theorem holds when $j=2$.  In this case $p(z) = k\\alpha_2\\left(z-\\nu_1\\right)\\left(z-\\nu_2\\right)$.  
This step is just for practice; it is not crucial to the proof of the Theorem.\n\n        We begin with the Arnoldi factorization after applying one shift\n        \\begin{equation}\n            \\left(A-\\nu_1I\\right)V_m = V_m\\left(H_m-\\nu_1I\\right) + E_1.\n            \\label{eq:OneShift}\n        \\end{equation}\n        Now we can apply a second shift (this isn't applying a shift rather just multiplying by $\\left(A-\\nu_2I\\right)$):\n        \\begin{equation}\n            \\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)V_m = \\left(A-\\nu_2I\\right)V_m\\left(H_m-\\nu_1I\\right) + \\left(A-\\nu_2I\\right)E_1.\n            \\label{eq:SecondShift}\n        \\end{equation}\n        We can substitute $\\nu_2$ for $\\nu_1$ in Equation \\ref{eq:OneShift} to obtain the identity\n        \\begin{equation}\n            \\left(A-\\nu_2I\\right)V_m = V_m\\left(H_m-\\nu_2I\\right) + E_1.\n        \\end{equation}\n        and insert this into Equation \\ref{eq:SecondShift}\n        \\begin{subequations}\n            \\begin{align}\n                \\begin{split}\n                    \\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)V_m &= \\left(A-\\nu_2I\\right)V_m\\left(H_m-\\nu_1I\\right) \\\\\n                     &+ \\left(A-\\nu_2I\\right)E_1.\n                \\end{split} \\\\\n                \\begin{split}\n                    &= \\left[V_m\\left(H_m-\\nu_2I\\right) + E_1\\right]\\left(H_m-\\nu_1I\\right) \\\\\n                    &+ \\left(A-\\nu_2I\\right)E_1.\n                \\end{split} \\\\\n                \\begin{split}\n                    &= V_m\\left(H_m-\\nu_2I\\right)\\left(H_m-\\nu_1I\\right) + E_1\\left(H_m-\\nu_1I\\right) \\\\\n                    &+ \\left(A-\\nu_2I\\right)E_1.\n                \\end{split} \\\\\n                    \\left(A-\\nu_2I\\right)\\left(A-\\nu_1I\\right)V_m &= V_m\\left(H_m-\\nu_2I\\right)\\left(H_m-\\nu_1I\\right) + E_2\n            \\end{align}\n        \\end{subequations}\n        where $E_2 = E_1\\left(H_m-\\nu_1I\\right) + \\left(A-\\nu_2I\\right)E_1$.which is identically zero, except in the last 2 columns.\n\n    \\item Show that if the theorem holds for polynomials of degree $j-1$, then it holds for polynomials of degree $j$.\n\n        We know:\n        \\begin{equation}\n            \\left(A-\\nu_{j-1}I\\right)\\cdots\\left(A-\\nu_1I\\right)V_m = V_m\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) + E_{j-1}\n        \\end{equation}\n        Multiply this by $\\left(A-\\nu_jI\\right)$,\n        \\begin{equation}\\begin{split}\n            \\left(A-\\nu_jI\\right)\\left(A-\\nu_{j-1}I\\right)\\cdots\\left(A-\\nu_1I\\right)V_m &= \\\\\n            & \\left(A-\\nu_jI\\right)V_m\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) \\\\\n            &+ \\left(A-\\nu_jI\\right)E_{j-1}\n        \\end{split}\\end{equation}\n        Of course:\n        \\begin{equation}\n            \\left(A-\\nu_jI\\right)V_m = V_m\\left(H_m-\\nu_jI\\right) + E_1.\n        \\end{equation}\n        which can be inserted into the previous equation  as before\n        \\begin{subequations}\\begin{align}\n            \\begin{split}\n                \\left(A-\\nu_jI\\right)\\cdots\\left(A-\\nu_1I\\right)V_m &= \\\\\n                &\\left[ \\left(A-\\nu_jI\\right)V_m \\right]\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) \\\\\n                &+ \\left(A-\\nu_jI\\right)E_{j-1}\n            \\end{split} \\\\\n            \\begin{split}\n                &= \\left[ V_m\\left(H_m-\\nu_jI\\right) + E_1 
\\right]\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) \\\\\n                &+ \\left(A-\\nu_jI\\right)E_{j-1}\n            \\end{split} \\\\\n            \\begin{split}\n                &= V_m\\left(H_m-\\nu_jI\\right)\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) \\\\\n                &+ E_1\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) + \\left(A-\\nu_jI\\right)E_{j-1}\n            \\end{split} \\\\\n               \\left(A-\\nu_jI\\right)\\cdots\\left(A-\\nu_1I\\right)V_m &= V_m\\left(H_m-\\nu_jI\\right)\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) + E_j \\\\\n       \\intertext{Finally}\n       p(A)V_m &= V_mp(H_m) + E_j.\n        \\end{align}\\end{subequations}\n        Here $E_j = E_1\\left(H_m-\\nu_{j-1}I\\right)\\cdots\\left(H_m-\\nu_1I\\right) + \\left(A-\\nu_jI\\right)E_{j-1}$, which is identically zero except in the last $j$ columns.\n\\end{enumerate}\n\nThis isn't how IRAM actually proceeds.  However we can create an equation as in eq. \\ref{eq:ArnoldiPolynomial} by simply operating on the left by some polynomial of $A$; in this case we have chosen a polynomial with roots $\\nu_1,\\ldots,\\nu_j$.  It just so happens (not coincidentally) that we use the same $\\nu_j$'s as shifts for the shifted \\QR algorithm on $H_m$.  Now we have proven in section \\ref{sec:QRPolynomial} that \n\\begin{equation}\n    p\\left(H_m\\right) = \\hat{Q}_m\\hat{R}_m\n\\end{equation}\nwhere $\\hat{Q}_m$ and $\\hat{R}_m$ come from the shifted \\QR algorithm.  Continue on from equation 6.4.12 in Watkins.\n\n\\section{Exercise (6.4.21) pg. 460}\nLet $j$ be a non-negative integer.  A matrix $B$ is called $j$-Hessenberg if $b_{ik} = 0$ whenever $i-k>j$.  A $j$-Hessenberg matrix is said to be \\emph{properly} $j$-Hessenberg if $b_{ik} \\neq 0$ whenever $i-k=j$.\n\n\\begin{enumerate}[(a)]\n    \\item \\emph{What are the common names for 0-Hessenberg and 1-Hessenberg matrices?}\n\n    A 0-Hessenberg matrix is \\emph{upper triangular}.  A 1-Hessenberg matrix is commonly called \\emph{upper Hessenberg}.\n\n    \\item Show that the product of a properly $j$-Hessenberg and a properly $k$-Hessenberg matrix is properly $\\left(j+k\\right)$-Hessenberg.\\label{itm:j+k}\n\n    The $r$th row of a properly $j$-Hessenberg matrix has\n    \\begin{equation*}\n        \\max\\left(r-j-1,0\\right)\n    \\end{equation*}\n    leading zeros.  The $c$th column of a properly $k$-Hessenberg matrix has\n    \\begin{equation*}\n        \\max\\left[s-\\left(c+k\\right),0\\right]\n    \\end{equation*}\n    trailing zeros, where $s$ is the size of the matrix.\n\n    Suppose $\\mathcal{J}$ is properly $j$-Hessenberg and $\\mathcal{K}$ is properly $k$-Hessenberg and $\\mathcal{J},\\mathcal{K} \\in \\mathbb{R}^{s\\times s}$.  Form the product $\\mathcal{A} = \\mathcal{JK}$.  The elements of $\\mathcal{A}$, $a_{mn}$, are the inner product of the $m$-th row of $\\mathcal{J}$ and the $n$-th column of $\\mathcal{K}$.  This element will be zero if the sum of the number of leading zeros in the $m$-th row of $\\mathcal{J}$ and the number of trailing zeros in the $n$-th column of $\\mathcal{K}$ is at least the size of the matrix, $s$:\n    \\begin{equation}\n        a_{mn} = \\begin{cases}\n            0 & \\left( \\max\\left[(m-j-1),0\\right] + \\max\\left[s-\\left(n+k\\right),0\\right] \\right) \\geq s \\\\\n            \\mathcal{J}_m\\cdot\\mathcal{K}_n & \\mathrm{otherwise}\n        \\end{cases}\n    \\end{equation}\n    where $\\mathcal{J}_m$ is the $m$-th row of $\\mathcal{J}$ and $\\mathcal{K}_n$ is the $n$-th column of $\\mathcal{K}$.  
For $\\mathcal{A}$ to be properly $\\left(j+k\\right)$-Hessenberg we need $a_{mn} = 0$ whenever $m-n>\\left(j+k\\right)$ and $a_{mn}\\neq 0$ whenever $m-n = \\left(j+k\\right)$.  The first condition follows from the zero counts above.  For the second, when $m-n = j+k$ the inner product collapses to the single term $\\mathcal{J}_{m,m-j}\\mathcal{K}_{n+k,n}$ (note $m-j = n+k$), which is nonzero because both factors lie on their proper diagonals.\n\n    \\item  Show that if $B \\in \\mathbb{C}^{m\\times m}$ is properly $j$-Hessenberg ($j<m$), then the first $m-j$ columns of $B$ are linearly independent.\n    \n    If $B$ is properly $j$-Hessenberg then column $k$ will have one more non-zero element than column \\mbox{$k-1$} for \\mbox{$k = 2,\\ldots,m-j$}.  Thus the first $m-j$ columns must be linearly independent.\n\n    \\item Show that if $H_m$ is properly upper Hessenberg and $p$ is a polynomial of degree $j$, say $p(z) = \\left(z-\\nu_1\\right)\\cdots\\left(z-\\nu_j\\right)$, then $p(H_m)$ is properly $j$-Hessenberg.\n\n    In $p(H_m)$ there will be a term $H_m^j$, which by item \\ref{itm:j+k} will be properly $j$-Hessenberg ($H_m$ is properly 1-Hessenberg, and the item applies $j-1$ times).  Adding in the other lower-order terms, which are at most $\\left(j-1\\right)$-Hessenberg, will not affect this.\n\n\n\\end{enumerate}\n\n\\section{Manual Application of Shifted Algorithm}\nI write this section to demonstrate manually, and laboriously, what happens when you perform shifted \\QR iterations.  The traditional way to show the iterations is given in the left column.  The right column is just some algebra to get the shift in the form\n\\begin{equation}\n    \\left( A_0-\\nu_nI\\right)\\cdots\\left( A_0 - \\nu_1I \\right) = \\hat{Q}_n\\hat{R}_n.\n\\end{equation}\nThis is necessary for some proofs I am doing.  (A short numerical sketch of this identity is given after these notes.)\n\n\\clearpage\n\\begin{Parallel}[v]{0.49\\textwidth}{0.49\\textwidth}\n\\textbf{First Shift}\n\\ParallelLText{\n    \\begin{equation}\n        \\left(A_0 - \\nu_1I\\right) = Q_1R_1\n    \\end{equation}\n\n    \\begin{subequations}\n        \\begin{align}\n            A_1 &= R_1Q_1  + \\nu_1I \\\\\n            A_1 &= Q_1^*A_0Q_1 \\\\\n            A_1 &= \\hat{Q}_1^*A_0\\hat{Q}_1\n        \\end{align}\n    \\end{subequations}\n}\n\\ParallelRText{\n    \\begin{align}\n        \\left(A_0 - \\nu_1I\\right) &= Q_1R_1 \\\\[2mm]\n        Q_1A_1 &= A_0Q_1\n    \\end{align}\n}\n\\ParallelPar\n\n\\vspace{\\baselineskip}\n\\textbf{Second Shift}\n\\ParallelLText{\n    \\begin{equation}\n        \\left(A_1 - \\nu_2I\\right) = Q_2R_2\n    \\end{equation}\n\n    \\begin{subequations}\n        \\begin{align}\n            A_2 &= R_2Q_2  + \\nu_2I \\\\\n            A_2 &= Q_2^*A_1Q_2 \\\\\n            A_2 &= Q_2^*\\left(Q_1^*A_0Q_1\\right)Q_2 \\\\\n            A_2 &= \\hat{Q}_2^*A_0\\hat{Q}_2\n        \\end{align}\n    \\end{subequations}\n}\n\\ParallelRText{\n    \\begin{align}\n        \\left(A_1 - \\nu_2I\\right) &= Q_2R_2 \\\\[2mm]\n        \\hat{Q}_1A_1 &= A_0\\hat{Q}_1    \\label{eq:2Identity}\n    \\end{align}\n\n    \\begin{subequations}\n        \\begin{align}\n            \\hat{Q}_1\\left(A_1-\\nu_2I\\right)\\hat{R}_1 &= \\hat{Q}_1Q_2R_2\\hat{R}_1 \\\\\n            \\hat{Q}_1A_1\\hat{R}_1-\\nu_2\\hat{Q}_1\\hat{R}_1 &= \\hat{Q}_2\\hat{R}_2 \\\\\n            A_0\\hat{Q}_1\\hat{R}_1-\\nu_2\\hat{Q}_1\\hat{R}_1 &= \\hat{Q}_2\\hat{R}_2 \\\\\n            \\left( A_0-\\nu_2I\\right) \\hat{Q}_1\\hat{R}_1 &= \\hat{Q}_2\\hat{R}_2 \\\\\n            \\left( A_0-\\nu_2I\\right)\\left( A_0 - \\nu_1I \\right) &= \\hat{Q}_2\\hat{R}_2\n        \\end{align}\n    \\end{subequations}\n\n}\n\\ParallelPar\n\n\\vspace{\\baselineskip}\n\\textbf{$n$'th Shift}\n\\ParallelLText{\n    \\begin{equation}\n        \\left(A_{n-1} - \\nu_nI\\right) = Q_nR_n\n    \\end{equation}\n\n    \\begin{subequations}\n        \\begin{align}\n            A_n &= R_nQ_n  + \\nu_nI \\\\\n            A_n &= Q_n^*A_{n-1}Q_n \\\\\n            A_n &= 
Q_n^*\\left(Q_{n-1}^*A_{n-2}Q_{n-1}\\right)Q_n \\\\\n            \\vdots \\\\\n            A_n &= \\hat{Q}_n^*A_0\\hat{Q}_n\n        \\end{align}\n    \\end{subequations}\n}\n\\ParallelRText{\n    \\begin{align}\n        \\left(A_{n-1} - \\nu_nI\\right) &= Q_nR_n \\\\[2mm]\n        \\hat{Q}_{n-1}A_{n-1} &= A_0\\hat{Q}_{n-1}    \n    \\end{align}\n\n    \\begin{subequations}\n        \\begin{align}\n            \\hat{Q}_{n-1}\\left(A_{n-1}-\\nu_nI\\right)\\hat{R}_{n-1} &= \\hat{Q}_{n-1}Q_nR_n\\hat{R}_{n-1} \\\\\n            \\hat{Q}_{n-1}A_{n-1}\\hat{R}_{n-1}-\\nu_n\\hat{Q}_{n-1}\\hat{R}_{n-1} &= \\hat{Q}_n\\hat{R}_n \\\\\n            A_0\\hat{Q}_{n-1}\\hat{R}_{n-1}-\\nu_n\\hat{Q}_{n-1}\\hat{R}_{n-1} &= \\hat{Q}_n\\hat{R}_n \\\\\n            \\left( A_0-\\nu_nI\\right)\\left( \\hat{Q}_{n-1}\\hat{R}_{n-1} \\right) &= \\hat{Q}_n\\hat{R}_n \\\\\n            \\left( A_0-\\nu_nI\\right)\\cdots\\left( A_0 - \\nu_1I \\right) &= \\hat{Q}_n\\hat{R}_n\n        \\end{align}\n    \\end{subequations}\n\n}\n\\end{Parallel}\n\n\n\\end{document}\n", "meta": {"hexsha": "a3c88be499d0f4c9e47eaa6d7934952233fddca6", "size": 19605, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/QRAlgorithm/QR.tex", "max_stars_repo_name": "jlconlin/PhDThesis", "max_stars_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/QRAlgorithm/QR.tex", "max_issues_repo_name": "jlconlin/PhDThesis", "max_issues_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/QRAlgorithm/QR.tex", "max_forks_repo_name": "jlconlin/PhDThesis", "max_forks_repo_head_hexsha": "8e704613721a800ce1c59576e94f40fa6f7cd986", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.5666666667, "max_line_length": 568, "alphanum_fraction": 0.5714868656, "num_tokens": 7242, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5662658293270799}}
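The product identity derived in the right-hand columns above is easy to sanity-check numerically.  The following sketch (my addition, not part of Watkins or the derivations above; plain NumPy with a random $A_0$ and three arbitrary real shifts) runs explicitly shifted \\QR steps, accumulates $\\hat{Q}$ and $\\hat{R}$, and confirms both $\\left(A_0-\\nu_3I\\right)\\left(A_0-\\nu_2I\\right)\\left(A_0-\\nu_1I\\right) = \\hat{Q}_3\\hat{R}_3$ and $A_3 = \\hat{Q}_3^*A_0\\hat{Q}_3$.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
A0 = rng.standard_normal((n, n))
shifts = [0.5, -1.0, 2.0]          # nu_1, nu_2, nu_3 (arbitrary)

Ak = A0.copy()
Qhat = np.eye(n)
Rhat = np.eye(n)
for nu in shifts:
    # One explicitly shifted QR step: (A_{k-1} - nu I) = Q_k R_k
    Q, R = np.linalg.qr(Ak - nu * np.eye(n))
    Ak = R @ Q + nu * np.eye(n)    # A_k = R_k Q_k + nu I
    Qhat = Qhat @ Q                # Qhat_k = Q_1 ... Q_k
    Rhat = R @ Rhat                # Rhat_k = R_k ... R_1

# Left-hand side: (A0 - nu_3 I)(A0 - nu_2 I)(A0 - nu_1 I)
P = np.eye(n)
for nu in shifts:
    P = (A0 - nu * np.eye(n)) @ P

print(np.allclose(P, Qhat @ Rhat))          # True
print(np.allclose(Ak, Qhat.T @ A0 @ Qhat))  # True: A_n = Qhat* A0 Qhat
\\end{verbatim}

Since everything here is real, the conjugate transpose $\\hat{Q}_n^*$ is simply \\texttt{Qhat.T}.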
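Theorem \\ref{thm:First} can be checked the same way.  This sketch (again my addition; a standard modified Gram--Schmidt Arnoldi loop on a random matrix, with the roots $\\nu_1,\\ldots,\\nu_j$ chosen arbitrarily) builds $V_m$ and $H_m$ and verifies that the residual $p(A)V_m - V_mp(H_m)$ vanishes everywhere except its last $j$ columns.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 8
shifts = [0.3, -0.7, 1.1]          # roots of p; degree j = 3 < m
j = len(shifts)

A = rng.standard_normal((n, n))

# Arnoldi factorization: A V_m = V_m H_m + v_{m+1} h_{m+1,m} e_m^T
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = rng.standard_normal(n)
V[:, 0] /= np.linalg.norm(V[:, 0])
for k in range(m):
    w = A @ V[:, k]
    for i in range(k + 1):         # modified Gram-Schmidt
        H[i, k] = V[:, i] @ w
        w -= H[i, k] * V[:, i]
    H[k + 1, k] = np.linalg.norm(w)
    V[:, k + 1] = w / H[k + 1, k]
Vm, Hm = V[:, :m], H[:m, :m]

def p(M, nus):
    out = np.eye(M.shape[0])
    for nu in nus:
        out = (M - nu * np.eye(M.shape[0])) @ out
    return out

E = p(A, shifts) @ Vm - Vm @ p(Hm, shifts)
print(np.allclose(E[:, : m - j], 0))   # True: zero outside last j columns
\\end{verbatim}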
{"text": "%% SECTION HEADER /////////////////////////////////////////////////////////////////////////////////////\n\\section{3D Model of the PZT transducers}\n\\label{sec:3Dpzt}\n\n%% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////\n\n\nThe displacement vector of the PZT transducer is composed of three translational displacements and  is defined as:\n\\begin{eqnarray}\n\t\\left \\{ \\begin{array}{c}\n\t\t\\textbf{u}^e(\\xi,\\eta,\\zeta) \\\\\n\t\t\\textbf{v}^e(\\xi,\\eta,\\zeta) \\\\\n\t\t\\textbf{w}^e(\\xi,\\eta,\\zeta)\n\t\\end{array} \\right\\}\n\t= \\textbf{N}^e(\\xi,\\eta, \\zeta)\\widehat{\\textbf{d}}^e\n\t= \\sum_{l=1}^r\\sum_{n=1}^q\\sum_{m=1}^p\\textbf{N}_m^e(\\xi)\\textbf{N}_n^e(\\eta)\\textbf{N}_l^e(\\zeta)\n\t\\left \\{ \\begin{array}{c}\n\t\t\\widehat{\\textbf{u}}^e(\\xi_m,\\eta_n,\\zeta_l) \\\\\n\t\t\\widehat{\\textbf{v}}^e(\\xi_m,\\eta_n,\\zeta_l) \\\\\n\t\t\\widehat{\\textbf{w}}^e(\\xi_m,\\eta_n,\\zeta_l)\n\t\\end{array} \\right\\},\n\t\\label{eq:3D_displ}\n\\end{eqnarray}\nwhere \\(\\widehat{\\textbf{u}}^e\\), \\(\\widehat{\\textbf{v}}^e\\) and \n\\(\\widehat{\\textbf{w}}^e\\) are displacements of the element nodes in \\(\\xi,\\eta\\) and \\(\\zeta\\) direction.\n\nThe nodal strain--displacement relations are given as \\cite{kudela20093d}:\n\\begin{eqnarray}\n\t\\boldsymbol{\\epsilon}^e=\\textbf{B}_{d}^e\\widehat{\\textbf{d}}^e=\n\t\\left [\n\t\\begin{array}{ccc}\n\t\t\\frac{\\partial N^e}{\\partial x} & 0 & 0\\\\\n\t\t0 & \\frac{\\partial N^e}{\\partial y} & 0\\\\\n\t\t0 & 0 & \\frac{\\partial N^e}{\\partial z}\\\\\n\t\t0 & \\frac{\\partial N^e}{\\partial z} & \\frac{\\partial N^e}{\\partial y}\\\\\n\t\t\\frac{\\partial N^e}{\\partial z} & 0 & \\frac{\\partial N^e}{\\partial x}\\\\\n\t\t\\frac{\\partial N^e}{\\partial y} & \\frac{\\partial N^e}{\\partial x} & 0\n\t\\end{array} \\right]\n\t\\left \\{ \\begin{array}{c}\n\t\t\\widehat{\\textbf{u}}^e \\\\\n\t\t\\widehat{\\textbf{v}}^e \\\\\n\t\t\\widehat{\\textbf{w}}^e\n\t\\end{array} \\right\\}.\n\\end{eqnarray}\n\nThe electromechanical coupling is governed by the linear constitutive equation of piezoelectric material according to~\\cite{giurgiutiu2009micromechatronics, rekatsinas2017cubic}, and this is defined as:\n\\begin{eqnarray}\n\t\\left [ \n\t\\begin {array}{c}\n\t\\boldsymbol{\\sigma}\\\\\n\t\\textbf{D}\n\\end{array}\\right ]=\n\\left [ \n\\begin{array}{cc}\n\t\\textbf{c}^E & -\\textbf{e}^T \\\\\n\t\\textbf{e} & \\epsilon^S \n\\end{array} \\right ]\n\\left[ \n\\begin{array}{c}\n\t\\textbf{S}\\\\\n\t\\textbf{E} \n\\end{array} \\right ],\n\\end{eqnarray}\nwhere \\(\\boldsymbol{\\sigma}\\) and \\(\\textbf{S}\\) are the stress and strain components, respectively, \\(\\textbf{c}^E\\) is the stiffness coefficient matrix measured at zero electric field, \\textbf{e} is the piezoelectric coupling tensor,  \\(\\boldsymbol{\\epsilon}^S\\) is the electric permittivity, and \\textbf{E} and \\textbf{D} are the electric field and electric displacement measured at zero strain.\nThe superscript T denotes a transpose matrix.\nThe electric field is defined as:\n\\begin{eqnarray}\n\\textbf{E}^e=-\\textbf{B}_\\phi^e \\widehat{\\boldsymbol{\\phi}}^e = \\left[ \\begin{array}{c}\n\t\\frac{\\partial N^e}{\\partial \\xi}\\\\\n\t\\frac{\\partial N^e}{\\partial \\eta}\\\\\n\t\\frac{\\partial N^e}{\\partial \\zeta}\n\\end{array} \\right] \\widehat{\\boldsymbol{\\phi}}^e.\n\\end{eqnarray}\nwhere \\(\\widehat{\\boldsymbol{\\phi}}^e\\) is a nodal voltage of the transducer.", "meta": {"hexsha": 
"05fe1be46dfeac53a812b9291790053286d542c5", "size": 3159, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dpzt.tex", "max_stars_repo_name": "pfiborek/model_hc", "max_stars_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dpzt.tex", "max_issues_repo_name": "pfiborek/model_hc", "max_issues_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter4/sec:3Dpzt.tex", "max_forks_repo_name": "pfiborek/model_hc", "max_forks_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.6891891892, "max_line_length": 398, "alphanum_fraction": 0.6308958531, "num_tokens": 1180, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772318846386, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5662256731673824}}
{"text": "\\documentclass{article}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[osf]{libertine}\n\\usepackage[scaled=0.8]{beramono}\n\\usepackage[margin=1.5in]{geometry}\n\\usepackage{url}\n\\usepackage{booktabs}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{nicefrac}\n\\usepackage{microtype}\n\\usepackage{bm}\n\n\\usepackage{sectsty}\n\\sectionfont{\\large}\n\\subsectionfont{\\normalsize}\n\n\\usepackage{titlesec}\n\\titlespacing{\\section}{0pt}{10pt plus 2pt minus 2pt}{0pt plus 2pt minus 0pt}\n\\titlespacing{\\subsection}{0pt}{5pt plus 2pt minus 2pt}{0pt plus 2pt minus 0pt}\n\n\\setlength{\\parindent}{0pt}\n\\setlength{\\parskip}{1ex}\n\n\\newcommand{\\acro}[1]{\\textsc{\\MakeLowercase{#1}}}\n\\newcommand{\\given}{\\mid}\n\\newcommand{\\mc}[1]{\\mathcal{#1}}\n\\newcommand{\\data}{\\mc{D}}\n\\newcommand{\\intd}[1]{\\,\\mathrm{d}{#1}}\n\\newcommand{\\mat}[1]{\\bm{\\mathrm{#1}}}\n\\renewcommand{\\vec}[1]{\\bm{\\mathrm{#1}}}\n\\newcommand{\\trans}{^\\top}\n\\newcommand{\\inv}{^{-1}}\n\n\\DeclareMathOperator{\\var}{var}\n\\DeclareMathOperator{\\tr}{tr}\n\n\\usepackage{pgfplots}\n\\pgfplotsset{\n  compat=newest,\n  plot coordinates/math parser=false,\n  tick label style={font=\\footnotesize, /pgf/number format/fixed},\n  label style={font=\\small},\n  legend style={font=\\small},\n  every axis/.append style={\n    tick align=outside,\n    clip mode=individual,\n    scaled ticks=false,\n    thick,\n    tick style={semithick, black}\n  }\n}\n\n\\pgfkeys{/pgf/number format/.cd, set thousands separator={\\,}}\n\n\\usepgfplotslibrary{external}\n\\tikzexternalize[prefix=tikz/]\n\n\\newlength\\figurewidth\n\\newlength\\figureheight\n\n\\setlength{\\figurewidth}{8cm}\n\\setlength{\\figureheight}{6cm}\n\n\\begin{document}\n\n{\\large \\textbf{CSE 515T (Spring 2015) Assignment 2 solutions}} \\\\\n\n\\begin{enumerate}\n\n\\item\n  (Curse of dimensionality.)\n  Consider a $d$-dimensional, zero-mean, spherical multivariate\n  Gaussian distribution:\n  \\begin{equation*}\n    p(\\vec{x}) = \\mc{N}(\\vec{x}; \\vec{0}, \\mat{I}_d).\n  \\end{equation*}\n  Equivalently, each entry of $\\vec{x}$ is drawn iid from a univariate\n  standard normal distribution.\n\n  In familiar small dimensions ($d \\leq 3$), ``most'' of the vectors\n  drawn from a multivariate Gaussian distribution will lie near the\n  mean.  For example, the famous 68--95--99.7 rule for $d = 1$\n  indicates that large deviations from the mean are unusual.  Here we\n  will consider the behavior in larger dimensions.\n  \\begin{itemize}\n  \\item Draw 10\\,000 samples from $p(\\vec{x})$ for each dimension in\n    $d \\in \\{1, 5, 10, 50, 100\\},$ and compute the length of each\n    vector drawn: $y_d = \\sqrt{\\vec{x}\\trans \\vec{x}} = (\\sum_i^d\n    x_i^2)^{\\nicefrac{1}{2}}$.  Estimate the distribution of each\n    $y_d$ using either a histogram or a kernel density estimate (in\n    \\acro{MATLAB}, \\texttt{hist} and \\texttt{ksdensity},\n    respectively).  Plot your estimates.  (Please do not hand in your\n    raw samples!)  Summarize the behavior of this distribution as $d$\n    increases.\n  \\item\n    The true distribution of $y_d^2$ is a chi-square distribution with\n    $d$ degrees of freedom (the distribution of $y_d$ itself is the\n    less-commonly seen chi distribution).  
Use this fact to compute\n    the probability that $y_d < 5$ for each of the dimensions in the\n    last part.\n  \\item\n    For $d = 1\\,000$, compute the 5th and 95th percentiles of $y_d$.\n    Is the mean $\\vec{x} = \\vec{0}$ a representative summary of the\n    distribution in high dimensions?  This behavior has been called\n    ``the curse of dimensionality.''\n  \\end{itemize}\n\n\\end{enumerate}\n\n\\subsection*{Solution}\n\nKernel density estimates of the empirical distributions of $y$ (using\n10\\,000 samples each) are shown in Figure \\ref{problem_1}. As the\ndimension increases, we see that the bulk of the probability mass\nactually lives in a thin ``shell'' centered around the origin: all\nsamples are approximately the same length, with no vectors near the\nmean.  This is somewhat unintuitive.\n\nTo compute the probability that $y_d < 5$, we can evaluate the\ncorresponding $\\chi^2$ \\acro{CDF} at $y_d^2 = 25$:\\footnote{Computed\n  with \\texttt{chi2cdf(25, [1, 5, 10, 50, 100])} in \\acro{MATLAB}.}\n\\begin{align*}\n  \\Pr(y < 5 \\given d = \\phantom{00}1) &\\approx 1.0000 \\\\\n  \\Pr(y < 5 \\given d = \\phantom{00}5) &\\approx 0.9999 \\\\\n  \\Pr(y < 5 \\given d = \\phantom{0}10) &\\approx 0.9947 \\\\\n  \\Pr(y < 5 \\given d = \\phantom{0}50) &\\approx 0.0012 \\\\\n  \\Pr(y < 5 \\given d = 100) &\\approx 0.0000,\n\\end{align*}\nso the probability of being within a distance of five standard\ndeviations from the mean decreases from near certainty to near\nimpossibility, another surprising result.\n\nFor the last part, we invert the $\\chi^2$ \\acro{CDF} and take the\nsquare root.\\footnote{Computed with \\texttt{sqrt(chi2inv([0.05, 0.95],\n    1000))} in \\acro{MATLAB}.}  The 5th percentile is 30.46 and the\n95th percentile is 32.78.  Again, we see that most of the mass lies in\na narrow shell centered around the mean.\n\nWhether the mean is a representative summary is a much more\ncomplicated question with no definitive answer.  In some sense, it's a\nvery odd summary: in dimensions higher than 10 or so, we can't imagine\nseeing a vector anywhere near the mean!  On the other hand, if we want\nto choose another single point to summarize the distribution, there's\nno clear better alternative.  By definition, the mean minimizes the\naverage squared distance from the chosen point to the vector\n$\\vec{x}$.  It just so happens that the \\emph{minimum} squared\ndistance to the mean is relatively high in large dimension.  
This is\nthe ``curse of dimensionality:'' all points are ``far away from the\nmean'' (and also each other!).\n\n\\begin{figure}\n  \\centering\n  \\input{figures/problem_1.tex}\n  \\caption{Empirical distributions of lengths $y$ as a function of the\n    dimension $d$.}\n  \\label{problem_1}\n\\end{figure}\n\n\\clearpage\n\\begin{enumerate}\n\\setcounter{enumi}{1}\n\\item\n  (Bayesian linear regression.)\n  Consider the following data:\n  \\begin{align*}\n    \\vec{x}\n    &=\n    [-2.26, -1.31, -0.43, 0.32, 0.34, 0.54, 0.86, 1.83, 2.77, 3.58]\\trans; \\\\\n    \\vec{y}\n    &=\n    [1.03, 0.70, -0.68, -1.36, -1.74, -1.01, 0.24, 1.55, 1.68, 1.53]\\trans.\n  \\end{align*}\n  Fix the noise variance at $\\sigma^2 = 0.5^2$.\n  \\begin{itemize}\n  \\item\n    Perform Bayesian linear regression for these data using the\n    polynomial basis functions $\\phi_k(x) = [1, x, x^2, \\dotsc\n      x^k]\\trans$ for $k \\in \\{1, 2, 3\\}$, in each case using the\n    parameter prior $p(\\vec{w}) = \\mc{N}(\\vec{w}; \\vec{0}, \\mat{I})$.\n    Evaluate and plot the posterior means $\\mathbb{E}[\\vec{y}_\\ast\n      \\given \\mat{X}_\\ast, \\data, \\sigma^2]$ on the interval $x_\\ast\n    \\in [-4, 4]$ for each model.  Also plot the posterior mean\n    plus-or-minus two times the posterior standard deviation:\n    \\begin{equation*}\n      \\mathbb{E}[\\vec{y}_\\ast \\given \\mat{X}_\\ast, \\data, \\sigma^2] \\pm\n      2 \\sqrt{\\var[\\vec{y}_\\ast \\given \\mat{X}_\\ast, \\data, \\sigma^2]}.\n    \\end{equation*}\n    This is a pointwise 95\\% credible interval for the regression\n    function.  Where is the pointwise uncertainty the largest?\n  \\item\n    Compute the marginal likelihood of the data for each of the basis\n    expansions above: $p(\\vec{y} \\given \\mat{X}, k, \\sigma^2)$.  Which\n    model explains the data the best?\n  \\end{itemize}\n\\end{enumerate}\n\n\\subsection*{Solution}\n\nGiven the feature expansions $\\mat{\\Phi} = \\phi(\\mat{X})$ and\n$\\mat{\\Phi}_\\ast = \\phi(\\mat{X}_\\ast)$, we may compute the posterior\ndistribution of $\\vec{y}_\\ast$:\n\\begin{equation*}\n  p(\\vec{y}_\\ast \\given \\vec{X}_\\ast, \\data)\n  =\n  \\mc{N}(\\vec{y}_\\ast;\n  \\mat{\\Phi}_\\ast \\vec{\\mu}_{\\vec{w}\\given\\data},\n  \\mat{\\Phi}_\\ast \\mat{\\Sigma}_{\\vec{w}\\given\\data} \\mat{\\Phi}_\\ast\\trans + \\sigma^2 \\mat{I}),\n\\end{equation*}\nwhere\n\\begin{align*}\n  \\vec{\\mu}_{\\vec{w}\\given\\data}\n  &=\n  \\vec{\\mu}\n  +\n  \\mat{\\Sigma}\n  \\mat{\\Phi}\\trans\n  (\\mat{\\Phi}\\mat{\\Sigma}\\mat{\\Phi}\\trans + \\sigma^2 \\mat{I})\\inv\n  (\\vec{y} - \\mat{\\Phi}\\vec{\\mu});\n  \\\\\n  \\mat{\\Sigma}_{\\vec{w}\\given\\data}\n  &=\n  \\mat{\\Sigma}\n  -\n  \\mat{\\Sigma}\n  \\mat{\\Phi}\\trans\n  (\\mat{\\Phi}\\mat{\\Sigma}\\mat{\\Phi}\\trans + \\sigma^2 \\mat{I})\\inv\n  \\mat{\\Phi}\n  \\mat{\\Sigma}.\n\\end{align*}\nThe diagonal of the posterior covariance for $\\vec{y}_\\ast$,\n$\\mat{\\Phi}_\\ast\\mat{\\Sigma}_{\\vec{w}\\given\\data}\\mat{\\Phi}_\\ast\\trans + \\sigma^2\\mat{I}$,\ngives the desired variance for plotting the credible interval.\n\nPlugging in the prior $p(\\vec{w}) = \\mc{N}(\\vec{w}; \\vec{0}, \\mat{I})$\ngives the result, plotted below.  For all three models, the pointwise\nuncertainty is maximized on the extreme ranges of the domain at $x =\n-4$ and $x = 4$; we have few observations near either of these\nlocations.  
The pointwise uncertainty tends to be especially large at\n$x = -4$.\n\n\\begin{figure}\n  \\centering\n  \\input{figures/order_1_expansion}\n  \\caption{Posterior for $k = 1$.}\n  \\label{order_1_expansion}\n\\end{figure}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/order_2_expansion}\n  \\caption{Posterior for $k = 2$.}\n  \\label{order_2_expansion}\n\\end{figure}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/order_3_expansion}\n  \\caption{Posterior for $k = 3$.}\n  \\label{order_3_expansion}\n\\end{figure}\n\nTo compute the marginal likelihood, we must compute\n\\begin{equation*}\n  p(\\vec{y} \\given \\mat{X}, \\sigma^2)\n  =\n  \\mc{N}(\\vec{y};\n  \\mat{\\Phi}\\vec{\\mu},\n  \\mat{\\Phi}\\mat{\\Sigma}\\mat{\\Phi}\\trans + \\sigma^2 \\mat{I}).\n\\end{equation*}\nComputing the marginal likelihood comes down to plugging in our\nobservations $\\vec{y}$ into this Gaussian \\acro{PDF}.  In practice it\nis more convenient to compute the logarithm of the marginal\nlikelihood, due to the large dynamic range of this function.  For our\ndata, we can compute:\n\\begin{align*}\n  \\log p(\\vec{y} \\given \\mat{X}, \\sigma^2, k = 1)\n  &= -32.9; \\\\\n  \\log p(\\vec{y} \\given \\mat{X}, \\sigma^2, k = 2)\n  &= -22.3; \\\\\n  \\log p(\\vec{y} \\given \\mat{X}, \\sigma^2, k = 3)\n  &= -22.2.\n\\end{align*}\nThere is a clear preference for either the quadratic or the cubic\nmodel over the linear model, but there is no clear-cut winner between\nthose two.\n\n\\clearpage\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\item\n  (Optimal design for Bayesian linear regression.)\n  Consider the data from the last problem, and suppose we have\n  selected the quadratic model corresponding to $k = 2$ (do not assume\n  that this is the answer to the last part of the last question).\n  Imagine we are allowed to evaluate the function at a point $x'$ of\n  our choosing, giving a new dataset $\\data' = \\data \\cup \\bigl\\{ (x',\n  y') \\bigr\\}$ and a new posterior for the parameters $p(\\vec{w}\n  \\given \\data', \\sigma^2) = \\mc{N}(\\vec{w};\n  \\vec{\\mu}_{\\vec{w}\\given\\data'},\n  \\mat{\\Sigma}_{\\vec{w}\\given\\data'})$.  We hope to select the\n  location $x'$ to best improve our current model, under some quality\n  measure.\n\n  Assume that we ultimately wish to predict the function at a grid of\n  points\n  \\begin{equation*}\n    \\vec{x}_\\ast = [-4, -3.5, -3, \\dotsc, 3.5, 4]\\trans.\n  \\end{equation*}\n  We select the squared loss for a set of predictions\n  $\\hat{\\vec{y}}_\\ast$ at these points:\n  \\begin{equation*}\n    \\ell(\\vec{y}_\\ast, \\hat{\\vec{y}}_\\ast)\n    =\n    \\sum_i \\bigl((y_\\ast)_i - (\\hat{y}_\\ast)_i\\bigr)^2;\n  \\end{equation*}\n  therefore, we will predict using the new posterior mean\n  $\\hat{\\vec{y}}_\\ast = \\mat{X}_\\ast \\vec{\\mu}_{\\vec{w}\\given\\data'}$.\n  \\begin{itemize}\n  \\item\n    Given a potential observation location $x'$, derive a closed-form\n    expression for the expected loss\n    $\\mathbb{E}\\bigl[\\ell(\\vec{y}_\\ast, \\hat{\\vec{y}}_\\ast) \\given x',\n      \\data \\bigr]$.  Note: this does not require integration over\n    $y'$!  (What is the expected squared deviation from the mean?)\n  \\item\n    Plot the expected loss over the interval $x' \\in [-4, 4]$.  
Where\n    is the optimal location to sample the function?\n  \\end{itemize}\n\n  Note: this approach of actively selecting where to sample a function\n  to maximize some utility function is known as \\emph{active learning}\n  in machine learning and \\emph{optimal experimental design} in\n  statistics.  Bayesian decision theory provides a convenient and\n  consistent framework for performing active learning with a variety\n  of objectives.\n\\end{enumerate}\n\n\\subsection*{Solution}\n\nImagine for the sake of argument that we have been given a new\nobservation $(x', y')$ to our dataset $\\data$, forming the augmented\ndataset $\\data'$.  Imagine further that we have computed the updated\nposterior $p(\\vec{w} \\given \\data', \\sigma^2)$ with this new dataset.\n\nWe are compelled to predict the value of the function at the given\ntest inputs $\\vec{x}_\\ast$.  Under the given squared loss function\n$\\ell(\\vec{y}_\\ast, \\hat{\\vec{y}}_\\ast)$, we will predict the\nposterior mean $\\hat{\\vec{y}}_\\ast \\vec{\\mu}_{\\vec{w}\\given\\data'}$.\n(Recall the Bayes estimator under squared loss is the posterior mean).\n\nLet us explicitly compute the expected loss given $\\data$':\n\\begin{align*}\n  \\mathbb{E}\\bigl[\n    \\ell(\\vec{y}_\\ast, \\hat{\\vec{y}}_\\ast)\n    \\given\n    \\vec{x}_\\ast,\n    \\data'\n  \\bigr]\n  &=\n  \\mathbb{E}\\Bigl[\n    \\sum_i\n    \\bigl(\n    (y_\\ast)_i - (\\hat{y}_\\ast)_i\n    \\bigr)^2\n    \\given \\vec{x}_\\ast, \\data'\n  \\Bigr]\n  \\\\\n  &=\n  \\sum_i\n  \\mathbb{E}\\Bigl[\n    \\bigl(\n    (y_\\ast)_i\n    -\n    \\mathbb{E}\\bigl[(y_\\ast)_i \\given (x_\\ast)_i, \\data'\\bigr]\n    \\bigr)^2\n    \\given (x_\\ast)_i, \\data'\n    \\Bigr]\n  \\\\\n  &=\n  \\sum_i\n  \\var\\bigl[\n    (y_\\ast)_i\n    \\given\n    (x_\\ast)_i, \\data'\n  \\bigr]\n  \\\\\n  &=\n  \\tr\n  \\bigl(\n  \\vec{\\Phi}_\\ast\n  \\Sigma_{\\vec{w}\\given\\data'}\n  \\vec{\\Phi}_\\ast\\trans\n  +\n  \\sigma^2 \\mat{I}\n  \\bigr),\n\\end{align*}\nwhere, in successive lines, we have: applied the linearity of\nexpectation, substituted the posterior mean predictions for\n$\\hat{\\vec{y}}_\\ast$, applied the definition of variance, and\nrewritten the sum of the variances in terms of the trace of the\nposterior covariance matrix over $\\vec{y}_\\ast$ given $\\data'$.\n\nThe key observation here is that the posterior covariance matrix, and\ntherefore the expected loss given $\\data'$, \\emph{does not depend} on\nthe value of $y'$, only the location of the new input $x'$.  So we may\nactually compute the future expected loss as a function of the next\nobservation location $x'$.  The Bayes action will then be to sample\nthe function at the point minimizing the trace of the updated\nposterior covariance over $\\vec{y}_\\ast$.\n\nIn Figure \\ref{problem_3}, we plot the expected loss as a function of\n$x'$.  The expected loss is minimized at $x' = -4$, which is the Bayes\naction.\n\nOf course, we could have iteratively performed this procedure to\nselect every observation location!  The result would fall under the\ngeneral framework of so-called \\emph{active learning.}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/problem_3.tex}\n  \\caption{Expected loss for predicting $\\vec{y}_\\ast$ given $\\data'$\n    as a function of $x'$.}\n  \\label{problem_3}\n\\end{figure}\n\n\\clearpage\n\\begin{enumerate}\n\\setcounter{enumi}{3}\n\\item\n  (Woodbury matrix identity.)\n  The \\emph{Woodbury matrix identity} is a very useful result.  
Let\n  $\\mat{A}$ be an $(n \\times n)$ matrix, let $\\mat{U}$ and $\\mat{V}$\n  be $(n \\times k)$ matrices, and let $\\mat{C}$ be a $(k \\times k)$\n  matrix.  Then:\n  \\begin{equation*}\n    (\\mat{A} + \\mat{U}\\mat{C}\\mat{V}\\trans)\\inv\n    =\n    \\mat{A}\\inv\n    -\n    \\mat{A}\\inv\n    \\mat{U}\n    (\\mat{C}\\inv + \\mat{V}\\trans \\mat{A}\\inv \\mat{U})\\inv\n    \\mat{V}\\trans\n    \\mat{A}\\inv.\n  \\end{equation*}\n  This result is useful when you already have the inverse of a matrix\n  $\\mat{A}$ and want to know the inverse after a rank-$k$ adjustment.\n  When $k \\ll n$, the Woodbury matrix identity can be considerably\n  faster than direct inversion!\n  \\begin{itemize}\n  \\item\n    Prove this result.\n  \\item\n    Use this result to rewrite the posterior covariance of the weight\n    vector $\\vec{w}$ in Bayesian linear regression (as written in the\n    notes to lecture 5) in a simpler form.\n  \\end{itemize}\n\\end{enumerate}\n\n\\subsection*{Solution}\n\nThe first part of the problem can be completed by multiplying the\nright hand side by $(\\mat{A} + \\mat{U}\\mat{C}\\mat{V}\\trans)$ and\nchecking that you get the identity matrix.  (A quick numerical check\nis also sketched at the end of this document.)\n\nThe posterior covariance for $\\vec{w}$ was previously given as\n\\begin{equation*}\n  \\mat{\\Sigma}_{\\vec{w}\\given\\data}\n  =\n  \\mat{\\Sigma}\n  -\n  \\mat{\\Sigma}\n  \\mat{X}\\trans\n  (\\mat{X}\\mat{\\Sigma}\\mat{X}\\trans + \\sigma^2 \\mat{I})\\inv\n  \\mat{X}\n  \\mat{\\Sigma}.\n\\end{equation*}\nTaking\n\\begin{equation*}\n  \\mat{A} = \\mat{\\Sigma}\\inv\n  \\qquad\n  \\mat{U} = \\mat{V} = \\mat{X}\\trans\n  \\qquad\n  \\mat{C} = \\sigma^{-2} \\mat{I},\n\\end{equation*}\nwe may rewrite this as\n\\begin{equation*}\n  \\mat{\\Sigma}_{\\vec{w}\\given\\data}\n  =\n  (\\mat{\\Sigma}\\inv\n  +\n  \\sigma^{-2}\n  \\mat{X}\\trans\\mat{X})\\inv.\n\\end{equation*}\n\n\\clearpage\n\\begin{enumerate}\n\\setcounter{enumi}{4}\n\\item\n  (Laplace approximation.)\n  Find a Laplace approximation to the Gamma distribution:\n  \\begin{equation*}\n    p(\\theta \\given \\alpha, \\beta)\n    =\n    \\frac{1}{Z}\n    \\theta^{\\alpha - 1}\n    \\exp(-\\beta\\theta).\n  \\end{equation*}\n  Plot the approximation against the true density for $(\\alpha, \\beta)\n  = (2, \\nicefrac{1}{2})$.\n\n  The true value of the normalizing constant is\n  \\begin{equation*}\n    Z = \\frac{\\Gamma(\\alpha)}{\\beta^\\alpha}.\n  \\end{equation*}\n  If we fix $\\beta = 1$, then $Z = \\Gamma(\\alpha)$, so we may use the\n  Laplace approximation to estimate the Gamma function.  
Analyze the\n  quality of this approximation as a function of $\\alpha$.\n\\end{enumerate}\n\n\\subsection*{Solution}\n\nWe first define the unnormalized log density $\\Psi(\\theta)$:\n\\begin{equation*}\n  \\Psi(\\theta)\n  =\n  \\log \\bigl(Z\\, p(\\theta \\given \\alpha, \\beta)\\bigr)\n  =\n  (\\alpha - 1) \\log \\theta - \\beta\\theta.\n\\end{equation*}\nNext we find the mode of the distribution, $\\hat{\\theta}$, by\ncomputing the derivative and setting to zero:\n\\begin{equation*}\n  0\n  =\n  \\frac{d}{d\\theta}\n  \\Psi(\\theta)\n  =\n  \\frac{\\alpha - 1}{\\theta}\n  - \\beta\n  \\quad\n  \\Rightarrow\n  \\quad\n  \\hat{\\theta} = \\frac{\\alpha - 1}{\\beta}.\n\\end{equation*}\nNext we compute the negative Hessian of $\\Psi$ at $\\hat{\\theta}$.\nNote that here we have a one-dimensional density, so the Hessian is\nsimply equal to the second derivative:\n\\begin{equation*}\n  H\n  =\n  -\\frac{d^2}{d\\theta^2}\n  \\Psi(\\theta)\n  \\biggr\\rvert_{\\theta = \\hat{\\theta}}\n  =\n  \\frac{\\alpha - 1}{\\theta^2}\n  \\biggr\\rvert_{\\theta = \\hat{\\theta}}\n  =\n  \\frac{\\beta^2}{\\alpha - 1}.\n\\end{equation*}\nNow the Laplace approximation to the gamma distribution is\n\\begin{equation*}\n  p(\\theta \\given \\alpha, \\beta)\n  \\approx\n  \\mc{N}(\\theta; \\hat{\\theta}, H\\inv)\n  =\n  \\mc{N}\\biggl(\\theta;\n  \\frac{\\alpha - 1}{\\beta},\n  \\frac{\\alpha - 1}{\\beta^2}\n  \\biggr).\n\\end{equation*}\nThe corresponding estimate for the normalizing constant $Z$\nis\n\\begin{equation*}\n  Z\n  \\approx\n  \\exp\\bigl(\\Psi(\\hat{\\theta})\\bigr)\n  \\sqrt{\n    \\frac{(2\\pi)^d}\n         {\\det \\mat{H}}\n  }\n  =\n  \\exp\\bigl(\\Psi(\\hat{\\theta})\\bigr)\n  \\sqrt{\\frac{2\\pi}{H}}\n  =\n  \\sqrt{\\frac{2\\pi(\\alpha - 1)}{\\beta^2}}\n  \\biggl(\\frac{\\alpha - 1}{\\beta}\\biggr)^{\\alpha - 1}\n  \\exp\\bigl(-(\\alpha - 1)\\bigr).\n\\end{equation*}\nPlugging in $\\beta = 1$ and using the true normalizing constant of the\ngamma distribution, we have the approximation\n\\begin{equation*}\n  \\Gamma(\\alpha)\n  \\approx\n  \\sqrt{2\\pi}\n  (\\alpha - 1)^{\\alpha - \\nicefrac{1}{2}}\n  \\exp\\bigl(-(\\alpha - 1)\\bigr).\n\\end{equation*}\nNote we also have an approximation to the logarithm\nof $\\Gamma$:\n\\begin{equation*}\n  \\log \\Gamma(\\alpha)\n  \\approx\n  \\textstyle\n  \\frac{1}{2}\\log 2 \\pi\n  +\n  (\\alpha - \\nicefrac{1}{2})\n  \\log(\\alpha - 1)\n  -\n  (\\alpha - 1).\n\\end{equation*}\n\nFigure \\ref{problem_5} shows the resulting approximation to $Z =\n\\Gamma(\\alpha)$ as a function of $\\alpha$.  The approximation\nappears to be quite good for $\\alpha \\geq 2$.\n\nTo those who are mathematically inclined, note that $n! = \\Gamma(n +\n1)$.  
With a bit of manipulation, we have actually rediscovered a very\nfamous result known as \\emph{Stirling's approximation:}\n\\begin{equation*}\n  n!\n  \\approx\n  \\sqrt{2\\pi n}\n  \\biggl(\\frac{n}{e}\\biggr)^n.\n\\end{equation*}\n\n\\begin{figure}\n  \\centering\n  \\input{figures/problem_5.tex}\n  \\caption{Laplace approximation to $Z = \\Gamma(\\alpha)$ as a function\n    of $\\alpha$.}\n  \\label{problem_5}\n\\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "4b2d084e5b405511ea18b4cb6948db56acc1d47c", "size": 20084, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "spring_2015/assignments/2/solutions/solutions_2.tex", "max_stars_repo_name": "Aahana1/cse515t", "max_stars_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2015-01-12T22:26:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-22T13:35:22.000Z", "max_issues_repo_path": "spring_2015/assignments/2/solutions/solutions_2.tex", "max_issues_repo_name": "Aahana1/cse515t", "max_issues_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-01-18T00:14:26.000Z", "max_issues_repo_issues_event_max_datetime": "2019-02-25T22:00:05.000Z", "max_forks_repo_path": "spring_2015/assignments/2/solutions/solutions_2.tex", "max_forks_repo_name": "Aahana1/cse515t", "max_forks_repo_head_hexsha": "2a7c9657ede4664e080e2914be402de85a8e3c6d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 39, "max_forks_repo_forks_event_min_datetime": "2015-01-14T23:29:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-02T09:12:54.000Z", "avg_line_length": 31.4303599374, "max_line_length": 88, "alphanum_fraction": 0.6727245569, "num_tokens": 6685, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6477982179521103, "lm_q2_score": 0.8740772302445241, "lm_q1q2_score": 0.5662256721049191}}
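(Editorial note, not part of the original solutions.)  The chi-square numbers quoted in the problem 1 solution are easy to reproduce outside of \\acro{MATLAB}; for instance, with SciPy:

\\begin{verbatim}
import numpy as np
from scipy import stats

# Pr(y_d < 5) = Pr(y_d^2 < 25) with y_d^2 ~ chi-square(d)
for d in [1, 5, 10, 50, 100]:
    print(d, stats.chi2.cdf(25, df=d))

# 5th and 95th percentiles of y_d for d = 1000
print(np.sqrt(stats.chi2.ppf([0.05, 0.95], df=1000)))  # ~30.46, ~32.78
\\end{verbatim}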
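The Woodbury identity from problem 4, and the equivalence of the two forms of the posterior covariance, can likewise be spot-checked numerically.  A sketch with random, well-conditioned matrices (again my addition, not part of the original solutions):

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, k, sigma2 = 5, 3, 0.25
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T                        # symmetric positive definite
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
C = np.eye(k)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ C @ V.T)
rhs = Ainv - Ainv @ U @ np.linalg.inv(
    np.linalg.inv(C) + V.T @ Ainv @ U) @ V.T @ Ainv
print(np.allclose(lhs, rhs))       # True

# Posterior covariance of w: both forms agree
X = rng.standard_normal((10, n))   # 10 observations, n weights
Sigma = np.eye(n)
direct = Sigma - Sigma @ X.T @ np.linalg.inv(
    X @ Sigma @ X.T + sigma2 * np.eye(10)) @ X @ Sigma
woodbury = np.linalg.inv(np.linalg.inv(Sigma) + X.T @ X / sigma2)
print(np.allclose(direct, woodbury))  # True
\\end{verbatim}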
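Similarly, the closed-form Laplace estimate of $\\Gamma(\\alpha)$ from problem 5 can be compared against the true Gamma function in a couple of lines (an editorial sketch):

\\begin{verbatim}
import numpy as np
from scipy.special import gamma

alphas = np.arange(2.0, 11.0)
approx = (np.sqrt(2 * np.pi) * (alphas - 1) ** (alphas - 0.5)
          * np.exp(-(alphas - 1)))
print(approx / gamma(alphas))   # ratios approach 1 as alpha grows
\\end{verbatim}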
{"text": "\\chapter{Spans, linear independence, and bases in \\texorpdfstring{$\\R^n$}{Rn}}\n", "meta": {"hexsha": "eb736e72c9631944e1ffa52d355e46eaa2641396", "size": 79, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "baseText/content/SpanIndependenceBasis.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "baseText/content/SpanIndependenceBasis.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/content/SpanIndependenceBasis.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 39.5, "max_line_length": 78, "alphanum_fraction": 0.7341772152, "num_tokens": 26, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.5662029484176021}}
{"text": "\\documentclass[journal,11pt]{IEEEtran}\n\\usepackage[utf8]{inputenc}\n\\usepackage{parskip}\n\\usepackage[sorting=none]{biblatex}\n\\addbibresource{biblio}\n\\usepackage{datetime}\n\\usepackage{float}\n\\usepackage{neuralnetwork}\n\\usepackage{tikz}\n\\usepackage{siunitx}\n\\usepackage{pgfplots}\n\\pgfplotsset{compat=1.16}\n\\usepackage{tabularx} \n\\usepgfplotslibrary{units}\n\\usepackage{graphicx}\n\\graphicspath{{img}}\n\n\\title{LELEC2870 - Project in Machine learning\\\\\\vspace*{10pt} \\Large Prediction of air-quality in Beijing}\n\\author{Group AM: Martin Braquet (06641500), Amaury Gouverneur (69331500)\\\\\\vspace*{10pt} \\normalsize \\today{}}\n\\date{December 2019}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\n\nThis report aims in predicting the concentration of PM2.5 in the air of Beijing. This is done using regression models on a dataset of 7684 records of meteorological and weather data from March $1^{\\text{st}}$ 2013 to February $28^{\\text{th}}$ 2017. The 15 recorded input features are the following:\n\\begin{itemize}\n    \\item time [year, month, day, hour],\n    \\item SO2, NO2, CO, O3, concentrations [\\si{\\micro g/m^3}],\n    \\item temperature and dew point temperature [\\si{C^\\degree}],\n    \\item pressure [\\si{h \\pascal}],\n    \\item rain precipitation [\\si{m m}],\n    \\item wind direction [cardinal] and speed [\\si{m/s}],\n    \\item id of the air-quality. monitoring site.\n\\end{itemize}\nThe recorded output variable is the corresponding PM2.5 concentration [\\si{\\micro g/m^3}].\n\nThis paper is organized as follows: features processing, selection and extraction will be discussed in sections \\ref{Features_processing}, \\ref{Features_selection} and \\ref{Features_extraction}, the error estimation and the models implementations are presented in section \\ref{Error_estimation} and \\ref{Models_implementations} before concluding in section \\ref{Conclusion}. \n\n\\section{Features processing}\n\\label{Features_processing}\n\nThe time feature recorded in the year, month, day, hour format is converted in seconds in the variable \\texttt{time} with $\\texttt{time}=0$ corresponding to the first record. This format is more usable but however does not express the cyclic relations in time, namely the rotation of the earth around the sun and the rotation of the earth around itself (see Figure \\ref{fig:earth_revolution}). To do so 4 extra variables are created: \\texttt{syear} and \\texttt{cyear}, encoding the progress of the earth rotation around the sun, and \\texttt{sday} and \\texttt{cday}, encoding the progress of the earth rotation around itself. 
The values are computed as follows: \n\\begin{align*}\n    \\texttt{syear} &= \\sin\\left(2 \\pi \\dfrac{\\texttt{time}}{365\\cdot24 \\cdot 60 \\cdot 60}\\right),\\\\\n    \\texttt{cyear} &= \\cos\\left(2 \\pi \\dfrac{\\texttt{time}}{365\\cdot24 \\cdot 60 \\cdot 60}\\right),\\\\\n    \\texttt{sday} &= \\sin\\left(2 \\pi \\dfrac{\\texttt{time}}{24 \\cdot 60 \\cdot 60}\\right),\\\\\n    \\texttt{cday} &= \\cos\\left(2 \\pi \\dfrac{\\texttt{time}}{24 \\cdot 60 \\cdot 60}\\right).\n\\end{align*}\n\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/earth_revolution.tex}\n    \\caption{Earth revolution around the sun}\n    \\label{fig:earth_revolution}\n\\end{figure}\n\nThe wind direction, recorded as cardinal directions, is also translated into a format expressing the cyclic relation: \\texttt{swd} and \\texttt{cwd}, respectively the sine and cosine of the angle of the cardinal direction on a wind rose (see Figure \\ref{fig:wind_rose}). \n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.3]{img/compass.pdf}\n    \\caption{Wind rose}\n    \\label{fig:wind_rose}\n\\end{figure}\n\nThe complete set of inputs is thus composed of 17 features.\nThe set of inputs is then normalized using a standard normalization method. \n\n\\section{Features selection}\n\\label{Features_selection}\n\nFeatures selection is a technique to drop some less useful inputs. The intrinsic principle aims to maximize the relevance (relation between input and output) and minimize the redundancy (relation between input and input).\n\nIt is important to consider the newly created features (cos/sin) in the previous paragraph as a pair of inputs resulting from one unique feature; it is thus not advised to remove one of them (the sine or cosine) even if they present a high dependency.\n\n\\subsection{Correlation}\n\nFigure~\\ref{fig:corr} presents the correlation between the inputs and the output (absolute value). Since the correlation only detects linear relations between variables, a zero correlation between an input and the output is not sufficient to drop this input.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/correlation.tex}\n    \\caption{Correlation matrix between the inputs and the output}\n    \\label{fig:corr}\n\\end{figure}\n\n\\subsection{Mutual information}\n\nFigure~\\ref{fig:MI} shows the mutual information between the inputs and the output (absolute value). 
This method is able to detect any dependency between variables and is thus particularly well suited to select the right features.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/MI.tex}\n    \\caption{Mutual Information matrix between the inputs and the output}\n    \\label{fig:MI}\n\\end{figure}\n\n\\subsection{Redundancy and relevance}\n\nBased on the correlation and MI obtained above, one can drop the features in Table~\\ref{tab:relevance} because they have little relevance to the output.\n\n\\begin{table}[H]\n\\setlength\\abovecaptionskip{-2\\baselineskip}\n\\centering\n\\begin{tabular}{cc}\n\\hline\nFeature &  MI with output  \\\\ \\hline\n\\texttt{station} &  0.074  \\\\\n\\texttt{rain}    &  0.079  \\\\\n\\texttt{swd}  &  0.098   \\\\\n\\texttt{cwd}  &  0.1   \\\\\n\\texttt{sday}  &  0.11   \\\\\n\\texttt{cday}  &  0.11   \\\\\n\\texttt{wspm}  &  0.18   \\\\ \\hline\n\\end{tabular}\n\\vspace*{3mm}\n\\caption{Non-relevant features}\n\\label{tab:relevance}\n\\end{table}\n\nAdditionally, it is possible to remove the temperature (see Table~\\ref{tab:temp}), the pressure (see Table~\\ref{tab:pres}) and the time (see Table~\\ref{tab:time}) since they are redundant with other inputs.\n\n\\begin{table}[H]\n\\setlength\\abovecaptionskip{-2\\baselineskip}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\nFeature &  MI & Correlation \\\\ \\hline\n \\texttt{dewp}  & 0.57  & 0.81  \\\\\n \\texttt{cyear}  & 0.79  &  0.9 \\\\ \\hline\n\\end{tabular}\n\\vspace*{3mm}\n\\caption{Relations between some features and a redundant feature: Temperature}\n\\label{tab:temp}\n\\end{table}\n\n\\begin{table}[H]\n\\setlength\\abovecaptionskip{-2\\baselineskip}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\nFeature &  MI & Correlation \\\\ \\hline\n \\texttt{dewp}  & 0.54  & 0.74  \\\\\n \\texttt{cyear}  & 0.79  &  0.82 \\\\ \\hline\n\\end{tabular}\n\\vspace*{3mm}\n\\caption{Relations between some features and a redundant feature: Pressure}\n\\label{tab:pres}\n\\end{table}\n\nIt is worth noting that the dependencies between the time and the other features are non-linear, such that only a small correlation is computed.\n\n\\begin{table}[H]\n\\setlength\\abovecaptionskip{-2\\baselineskip}\n\\centering\n\\begin{tabular}{ccc}\n\\hline\nFeature &  MI & Correlation \\\\ \\hline\n  \\texttt{syear}  & 0.99  & 0.1  \\\\\n \\texttt{cyear}  & 0.99  &  0.17 \\\\ \\hline\n\\end{tabular}\n\\vspace*{3mm}\n\\caption{Relations between some features and a redundant feature: Time}\n\\label{tab:time}\n\\end{table}\n\nFinally, the set resulting from features selection contains 7 features.\n\n\\section{Features extraction}\n\\label{Features_extraction}\n\n\nFeatures extraction consists of transforming the inputs into other features such that this new, smaller set of features almost fully describes the content of the original inputs.\n\n\\subsection{Principal Component Analysis}\n\nThe Principal Component Analysis method uses an orthogonal transformation to convert a set of inputs into a set of linearly uncorrelated variables called principal components~\\cite{pca}.\n\nFigure~\\ref{fig:error_PCA} depicts the error for several numbers of principal components, ranging from 1 to 17 (the full number of features). One can conclude that the most important correlations of the data are captured by 3 principal components, lowering the error to \\SI{55.5}{\\micro g/m^3}. 
Adding more components does not lead to significantly better results, while keeping only 3 input features makes the computations very efficient.\n\nHowever, it is important to note that this method is linear and thus drops important variables if they are only non-linearly dependent. The severity of this problem depends upon the linear characteristics of the input features; it is significant for the data analysed here and hence leads to poor results, as discussed hereafter.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/error_PCA.tex}\n    \\caption{Bootstrap 632 error for different numbers of principal components}\n    \\label{fig:error_PCA}\n\\end{figure}\n\n\\section{Error estimation}\n\\label{Error_estimation}\n\nThe errors of the following models are estimated with the Bootstrap 632 method provided in the MLxtend package \\cite{raschkas_2018_mlxtend}. This highly efficient method is characterized by a low bias and a low variance, which makes it particularly well suited for model validation (compared to the simpler K-fold validation, for instance). Unless mentioned otherwise in the following sections, the number of splits in the Bootstrap method is set to 10 due to the limited available computation power. This small number only influences the variance of the bootstrap error, not its mean value (the most important part).\n\nFigure~\\ref{fig:error_analysis_bootstrap} shows the error distribution of the linear regression model with the selected features, for 1 and 10 splits. The error is very narrow for 10 splits ($\\sigma = \\SI{0.255}{\\micro g/m^3}$) compared to 1 split ($\\sigma = \\SI{0.762}{\\micro g/m^3}$); computing the error with 10 splits is thus sound.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/error_analysis_bootstrap.tex}\n    \\caption{Bootstrap 632 error distribution for different numbers of splits}\n    \\label{fig:error_analysis_bootstrap}\n\\end{figure}\n\n\\section{Models implementations}\n\\label{Models_implementations}\n\n\nSeven models have been trained and compared in order to select the most powerful one for the estimation on the secret dataset. These models are tested with the bootstrap 632 error on the three previously detailed input sets: the full features, the selected features and the PCA features.\n\nLet the dataset be $D = \\{ (x_1 , y_1 ),(x_2 , y_2 ),\\ldots,(x_N , y_N ) \\}$, where $x_i$ is the input value and $y_i$ is the target value; the goal of a regression model is to estimate the function $y = f(x)$.\n\n\\subsection{Linear regression}\n \nThe linear regression estimates the function $y = f(x)$ by $\\hat{f}(x) = w^Tx$. The weight vector $w$ is computed by minimizing the mean squared error on the training data: $$\\displaystyle \\min_{\\mathbf{w}} ||\\mathbf{y}_{\\text{train}} - \\mathbf{w}^T\\mathbf{X}_{\\text{train}}||_2^2.$$\n\nThe error for the three implemented sets is given in Table~\\ref{tab:linreg}. One can see that the 7 selected features give almost the same error as for the full input set, which validates the features selection (at least for linear models). 
However, the PCA method, although very fast, is not convincing because too much information is lost in only 3 components.\n\n\\begin{table}[H]\n\\setlength\\abovecaptionskip{-2\\baselineskip}\n\\centering\n\\begin{tabular}{cc}\n\\hline\n Features &  Error [\\si{\\micro g/m^3}] \\\\ \\hline\n Full  & 44.35 \\\\\n Selected  & 44.50 \\\\\n PCA  & 54.10 \\\\ \\hline\n\\end{tabular}\n\\vspace*{3mm}\n\\caption{Bootstrap 632 error for the Linear regression model}\n\\label{tab:linreg}\n\\end{table}\n\n\\subsection{Ridge regression}\n\nThe ridge regression is similar to the linear regression with a shrinkage penalty on $\\mathbf{w}$ ($L_2$ regularization): $$\\displaystyle \\min_{\\mathbf{w}} ||\\mathbf{y}_{\\text{train}} - \\mathbf{w}^T\\mathbf{X}_{\\text{train}}||_2^2 + \\lambda ||\\mathbf{w}||_2^2.$$\nThe parameter $\\lambda$ controls the relative impact of the two terms. Increasing $\\lambda$ decreases the variance while increasing the bias. \n\nAs shown in Figure~\\ref{fig:RR}, the ridge regression performs better with the \\textit{selected features} and \\textit{full features} than with the \\textit{PCA features}. The performance is almost constant for $\\lambda \\in [10^{-2},10^2]$ and decreases afterwards, when the model starts to suffer from underfitting. The best result is achieved with the \\textit{full features} and $\\lambda = 0.61$; the error is \\SI{44.15}{\\micro g/m^3}. \n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/RR.tex}\n    \\caption{Bootstrap 632 error for the Ridge regression model}\n    \\label{fig:RR}\n\\end{figure}\n\n\\subsection{Lasso}\n\nThe Lasso regression is similar to the ridge regression with a shrinkage penalty on $\\mathbf{w}$ ($L_1$ regularization): $$\\displaystyle \\min_{\\mathbf{w}} ||\\mathbf{y}_{\\text{train}} - \\mathbf{w}^T\\mathbf{X}_{\\text{train}}||_2^2 + \\lambda ||\\mathbf{w}||_1.$$\n\nAs shown in Figure~\\ref{fig:Lasso}, the lasso regression performances are almost constant until $\\lambda = 1$ and start to decrease afterwards because of underfitting. Similarly to the ridge regression, the Lasso performs better with the \\textit{full} and \\textit{selected features}. The best result is achieved with the \\textit{full features} and $\\lambda = 0.01$; the error is \\SI{44.06}{\\micro g/m^3}. 
For this model, using the full input set should dramatically increase the error since in a high dimensional space, the $K$ nearest neighbours (with the Euclidean norm) are greatly influenced by useless dimensions, which leads to selecting wrong neighbours. As expected, the model with features selection outperforms the others, reaching an error of \\SI{38.51}{\\micro g/m^3} for $K = 4$.\n\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/KNN.tex}\n    \\caption{Bootstrap 632 error for the KNN model}\n    \\label{fig:KNN}\n\\end{figure}\n\n\n\\subsection{Regression tree}\n\nA regression tree partitions the input space recursively into smaller regions until reaching a leaf, to which a simple model is fitted. Each node represents a binary question about one or several feature(s) that will divide the space in two.  For a classic regression tree, the model at each leaf is a constant estimate: the average of the target value of the training data that belongs to this leaf. The binary questions on the nodes are chosen such that the information gain is maximized. Mechanisms such as pruning are necessary to avoid overfitting. \n\n\nAs shown in Figure~\\ref{fig:tree}, the performances obtained with the \\textit{full} and \\textit{selected features} are very close and outperform the ones obtained with the \\textit{PCA features}. The performances increase as the maximal depth of the tree increases until reaching a maximum at 7 for the \\textit{PCA features}, then it starts to overfit since there are too few features in PCA. The error obtained with the \\textit{full} and \\textit{selected features} stops decreasing and starts to level out at a depth of around 10. It has to be noted that the trees do not grow further than a depth of 33, explaining the absence of overfitting as the allowed maximal depth continues to increase. The best result is achieved with the \\textit{selected features} and a depth of 11; the error is \\SI{40.23}{\\micro g/m^3}.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{img/tree.tex}\n    \\caption{Bootstrap 632 error for the regression tree model}\n    \\label{fig:tree}\n\\end{figure}\n\n\n\\subsection{Bootstrap aggregating trees}\n\nThe core idea of an ensemble method is to train weak regressors (or classifiers) and combine them to construct a regressor (or classifier) more powerful than any of the individual ones.\n\nA simple ensemble method is the so-called Bootstrap aggregating or Bagging. Let $D = \\{ (x_1 , y_1 ),(x_2 , y_2 ),\\ldots,(x_N , y_N ) \\}$ be the training data; the principle of the Bagging method is detailed below.\n\nIteratively, for $b = 1,\\ldots,B$, do the following: \n\\begin{itemize}\n    \\item sample training examples, with replacement, $N'$ times from $D$ to create $D_b (N' \\leq N)$,\n    \\item use this bootstrap sample $D_b$ to estimate the regression (or classification) function $f_b$.\n\\end{itemize}\nThe bagging estimate is $$f_{\\text{bag}}(x) = \\frac{1}{B} \\sum_{b=1}^B f_b(x).$$\nAs shown in Figure~\\ref{fig:boost_tree}, the errors for the \\textit{full} and \\textit{selected features} are very close and outperform the ones obtained with the \\textit{PCA features}. The error decreases as the maximum depth increases until starting to plateau at a depth of 12 with no overfitting. As for the regression tree, this is due to the fact that the trees do not grow deeper than 33. 
\nAs shown in Figure~\ref{fig:boost_tree}, the results with the \textit{full} and \textit{selected features} are very close and outperform the ones obtained with the \textit{PCA features}. The error decreases as the maximum depth increases until starting to plateau at a depth of 12, with no overfitting. As for the regression tree, this is due to the fact that the trees do not grow deeper than 33. The best result is achieved with the \textit{selected features} and a depth of 19, for which the error is \SI{31.42}{\micro g/m^3}.\n\n\begin{figure}[H]\n    \centering\n    \input{img/boost_tree.tex}\n    \caption{Bootstrap 632 error for the Bagging trees model}\n    \label{fig:boost_tree}\n\end{figure}\n\n\subsection{Multilayer Perceptron}\n\nA multilayer perceptron takes as inputs the features (17 for the full set and 7 for the selected/PCA set) and propagates this information through a deep neural network to output the final PM2.5 estimate. At each layer, the data are subject to a batch normalisation followed by a ReLU activation function.\n\nConsidering the high number of hyperparameters subject to optimization (number of hidden layers, neurons per hidden layer, epochs, learning rate, batch size), it might seem attractive to use an optimization algorithm such as a greedy search or a genetic search. However, simulations show that finding this optimum is difficult since the performance keeps improving as the network grows, so that overfitting is only noticed for networks too big to train in a limited time. The following paragraphs will thus be dedicated to analysing the error for several numbers of neurons per layer, epochs per training period, and hidden layers.\n\nThe results presented in Figure~\ref{fig:MLP} depict the error variation for different numbers of neurons (hyperparameter) in each of the 8 hidden layers, ranging from 1 to 50. The training is done by feeding the network with the input/output pairs, more precisely with 50 epochs and a batch size of 128. It can be seen that the neural network performs the best with the full features as inputs, since the weights of the network adapt precisely to extract the information in each feature, even the least significant one. On the other hand, deep neural networks do not aim at dropping features, as evidenced by the poorer results obtained with feature selection and feature extraction.\n\n\begin{figure}[H]\n    \centering\n    \input{img/MLP.tex}\n    \caption{Bootstrap 632 error for the MLP model with different numbers of neurons per hidden layer}\n    \label{fig:MLP}\n\end{figure}\n\nOne can also analyse the error for different training periods, characterised by a varying number of epochs. Figure~\ref{fig:MLP_epochs} shows this error with a number of epochs ranging from 1 to 80. The error stabilizes after around 50 epochs for all input subsets, since the network weights no longer vary. The error does not increase for additional epochs, leading to the conclusion that the network configuration is too small to bring overfitting due to long training periods\footnote{In case of possible overfitting, the early stopping approach consists in stopping the training when the validation error reaches a minimum.}.\n\n\begin{figure}[H]\n    \centering\n    \input{img/MLP_epochs.tex}\n    \caption{Bootstrap 632 error for the MLP model with different numbers of epochs per training period}\n    \label{fig:MLP_epochs}\n\end{figure}\n\nThe next step thus consists in increasing the number of hidden layers to reduce the error as far as possible. Figure~\ref{fig:MLP_layers} presents this error for the full set, 50 neurons per hidden layer and a number of epochs proportional to the number of layers, since larger networks require a longer training period. One can see a minimum error for 18 hidden layers, but no overfitting appears, although it would be expected for such large networks. Hence, overfitting should only arise for even larger networks.
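\n\nAs an illustration, a minimal PyTorch sketch of the architecture described above (not the authors' exact code; the sizes are placeholders) could look as follows.\n\n\begin{verbatim}\n# Sketch only: linear -> batch norm -> ReLU per hidden\n# layer, with a single linear output for PM2.5.\nimport torch.nn as nn\n\ndef make_mlp(n_features=17, n_layers=8, n_neurons=50):\n    layers, width = [], n_features\n    for _ in range(n_layers):\n        layers += [nn.Linear(width, n_neurons),\n                   nn.BatchNorm1d(n_neurons),\n                   nn.ReLU()]\n        width = n_neurons\n    layers.append(nn.Linear(width, 1))\n    return nn.Sequential(*layers)\n\end{verbatim}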
\n\n\begin{figure}[H]\n    \centering\n    \input{img/MLP_layers.tex}\n    \caption{Bootstrap 632 error for the MLP model with different numbers of hidden layers}\n    \label{fig:MLP_layers}\n\end{figure}\n\nFinally, a huge neural network of 20 hidden layers and 100 neurons per layer has been trained for 300 epochs. It gives a bootstrap 632 error of \SI{34.12}{\micro g/m^3}, suggesting the onset of slight overfitting.\n\nIt can be noted that the PyTorch library has been used, which led the authors to implement an estimator in order to match the specifications of the bootstrap 632 method \cite{NEURIPS2019_9015}.\n\n\section{Conclusion}\n\label{Conclusion}\n\nTo conclude, the lowest errors for the analysed models are summarized in Table~\ref{tab:sum} (the errors are expressed in \si{\micro g/m^3}).\n\n\begin{table}[H]\n\setlength\abovecaptionskip{-2\baselineskip}\n\centering\n\begin{tabular}{lccc}\n\hline\n Model &  Error (Full) &  Error (Selected) &  Error (PCA)\\ \hline\n Linear regression & 44.35 & 44.50 & 54.10 \\\n Ridge regression & 44.15 & 44.38 & 53.91  \\ \n Lasso & 44.06 & 44.07 & 53.92 \\\n KNN & 44.12 & 38.51 & 48.58 \\\n Regression tree & 40.39 & 40.23 & 51.48 \\\n Boot. agg. trees & 31.60 & 31.42 & 41.78 \\\n MLP & 31.94 & 37.52 & 46.16 \\ \hline\n\end{tabular}\n\vspace*{3mm}\n\caption{Summarized error for all the models}\n\label{tab:sum}\n\end{table}\n\n\subsection{Final model selection}\n\nThe bootstrap aggregating trees method with feature selection is chosen as the final model since it achieves the lowest error. This model is used to predict the output of the secret set, \texttt{Y2}. The bootstrap 632 method aims to estimate the error on data drawn from the entire possible space. Hence, the expected RMS error on \texttt{Y2} is the bootstrap 632 error on \texttt{Y1}: \SI{31.42}{\micro g/m^3}.
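\n\nFor completeness, the bootstrap 632 estimate used throughout this report could be sketched as follows; this is not the authors' implementation, \texttt{make\_model}, \texttt{X} and \texttt{y} are placeholders, and the 0.368/0.632 weighting of the training and out-of-bag errors follows the standard estimator.\n\n\begin{verbatim}\n# Sketch only: err_632 = 0.368 * training error\n#                      + 0.632 * out-of-bag error.\nimport numpy as np\n\ndef rmse(model, X, y):\n    return np.sqrt(np.mean((model.predict(X) - y) ** 2))\n\ndef bootstrap_632(make_model, X, y, B=50):\n    rng = np.random.default_rng(0)\n    n, oob_errs = len(X), []\n    for _ in range(B):\n        idx = rng.integers(0, n, size=n)\n        oob = np.setdiff1d(np.arange(n), idx)\n        m = make_model().fit(X[idx], y[idx])\n        oob_errs.append(rmse(m, X[oob], y[oob]))\n    train_err = rmse(make_model().fit(X, y), X, y)\n    return 0.368 * train_err + 0.632 * np.mean(oob_errs)\n\end{verbatim}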
\n\nFinally, combining the predictions of the previous models by means of ensemble methods or a voting scheme is a promising direction for future work on improving the predictions and lowering the error further.\n\n\printbibliography\n\n\end{document}\n", "meta": {"hexsha": "6e166ac2bcf8c024a2456165b4d435438f5bf163", "size": 23198, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project/Report/main.tex", "max_stars_repo_name": "MartinBraquet/machine-learning-ELEC2870", "max_stars_repo_head_hexsha": "7704aec7894b9032e8487dd5800d0435c7240431", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project/Report/main.tex", "max_issues_repo_name": "MartinBraquet/machine-learning-ELEC2870", "max_issues_repo_head_hexsha": "7704aec7894b9032e8487dd5800d0435c7240431", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project/Report/main.tex", "max_forks_repo_name": "MartinBraquet/machine-learning-ELEC2870", "max_forks_repo_head_hexsha": "7704aec7894b9032e8487dd5800d0435c7240431", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.8781725888, "max_line_length": 813, "alphanum_fraction": 0.7582119148, "num_tokens": 6119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5662029448391185}}
{"text": "\\documentclass[twocolumn]{article}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n \n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\n% Recap the problem and motivation. This part should be brief, since we've already\n% heard about this.\n\n% Briefly describe your overall approach. A small concrete example snippet or two\n% would be very helpful.\n\n% Results. Describe what you have achieved. What remains to be done to make this a\n% full-fledged research result?\n\n% Lessons Learned. Reflect on the work you did, so you and I can learn from it.\n% How did your approach differ from what you originally proposed, and why? What\n% was more (or less) challenging than you expected? What would you do differently\n% next time? What do you now see as the strengths/weaknesses of a tool like Coq,\n% and what does that imply about possible research directions to pursue?\n\n% Code. Please submit all of your Coq code along with some brief documentation so\n% I can relatively easily play with it and understand what's going on.\n\\title{Verified Sentential Decision Diagrams}\n\\author{Steven Holtzen\\\\Saketh Ram Kasibatla}\n\\date{\\today}\n\\begin{document}\n\\maketitle\n\n\\section{Introduction and Motivation}\nThe goal of this project is to construct and reason about sentential decision\ndiagrams (SDDs) in Coq. A sentential decision diagram is a data structure which encodes\nan arbitrary Boolean formula; see Fig.\\ref{fig:sdd} for an example SDD. \n\n\\begin{figure}\n  \\includegraphics[width=\\linewidth]{sdd.png}\n  \\caption{An SDD for the Boolean formula $f = (A \\land B) \\lor (B \\land C) \\lor\n    (C \\land D)$. Reproduced from \\cite{Darwiche2011}.}\n  \\label{fig:sdd}\n\\end{figure}\n\nThe SDD data structure has several desirable properties:\n\\begin{enumerate}\n\\item Canonicity. Any two equivalent Boolean formulas will always compile to\n  the same SDD.\n\\item Determinism. Any given Or-node in an SDD will have at most one true\n  child.\n\\item Decomposability. The children of any given And-node in an SDD share\n  no variables.\n\\end{enumerate} \nThese properties allow one to perform many queries in linear time the SDD data\nstructure, for example: model enumeration, weighted model counting,\nsatisfiability and validity checking. In our work, we focused on non-canonical\n(i.e. uncompressed) SDDs, which are easier to represent and manipulate.\n\nThe key operations that we focused on implementing were (1) SDD compilation; \n(2) SDD application; (3) SDD evaluation. \n\n\\subsection{Proof Goals}\nOur original goal was to prove the following properties:\n  \\begin{enumerate}\n  \\item Correctness: For all Boolean formalae $e$ and inputs $i$,\n    \\texttt{eval\\_boolexpr}($e$, $i$) =\n    \\texttt{eval\\_sdd}(\\texttt{compile}($e$),  $i$).\n  \\item Prove that the resulting SDD is decomposable.\n  \\item Prove that the resulting SDD is deterministic.\n  \\item Prove that \\texttt{init\\_var} generates SDDs with respect to the same\n    $v$-tree (will most likely be a necessary lemma for proving the correctness\n    of \\texttt{apply}).\n  \\end{enumerate} \nDuring the course of implementation, these goals proved to be too ambitious. We\nwere able to prove, subject to certain assumptions, that SDDs remain\ndecomposable after application; See Fig. \\ref{fig:decomposable}. 
We relaxed goal (1) to proving a particular\nproperty of the SDD post-application; see Fig.~\ref{fig:unsatthm}.\n\n\begin{figure}\n\begin{verbatim}\nTheorem sdd_decomposable :\n  forall sdd v,\n    sdd_vtree sdd v ->\n    vtree_correct v ->\n    decomposable sdd.\n\end{verbatim}\n  \caption{Theorem around decomposability}\n  \label{fig:decomposable}\n\end{figure}\n\n\begin{figure}\n\begin{verbatim}\nTheorem apply_unsat :\n  forall sdd1 sdd2 sddres,\n    sdd_unsat sdd1 ->\n    sdd_apply OAnd sdd1 sdd2 sddres ->\n    sdd_unsat sddres.\n\end{verbatim}\n  \caption{The apply unsat theorem.}\n  \label{fig:unsatthm}\n\end{figure}\nThe proof of this fact required a custom inductive hypothesis on the derivation\nof \texttt{sdd\_apply}, which we will elaborate on in later sections. The reasons\nwhy this approach cannot be extended to prove stronger claims, as well as possible\nfuture work, will also be discussed.\n\n\section{Approach}\n\n\subsection{OCaml Prototype and Fixpoints vs. Inductive Constructions}\nWe created an initial OCaml implementation of SDD compilation and application to\nguide our Coq implementation. This implementation can be found in the\n\texttt{ml/} directory of the project.\n\nA key observation about this OCaml code is that it is reasonably complex; a\ndirect translation of this code to Gallina was attempted, but ultimately we\ndecided against it for the following reasons.\n\n\begin{itemize}\n\item It was difficult to establish that all the recursive calls (especially\n  \texttt{apply}) provably terminated.\n\item It is difficult to reason about Gallina code, as we would have had to prove many assumptions about the code before being able to reason about its correctness.\n\end{itemize} \n\nInstead, we opted to specify the SDD data structure as an inductive in Gallina, and to specify operations on this data structure using inductive relations. This approach allowed us immediate access to assumptions that were integral to proving the theorems we were interested in, and made it easy to strengthen these assumptions as necessary.\n\n\subsection{Inductive SDD Data Structure}\nWe used the following compact definition for our SDD data structure:\n\n\begin{verbatim}\nInductive atom : Type :=\n| AFalse : atom\n| ATrue : atom\n| AVar :  nat -> bool -> atom.\n\nInductive sdd : Type :=\n| Or: list (sdd * sdd) -> sdd\n| Atom : atom -> sdd.\n\end{verbatim}\nAn \texttt{or} node is simply a list of pairs $(p_i, s_i)$ (primes and subs).\n\n\subsection{SDD V-Tree}\label{sec:vtree}\nWe experimented with (1) representing the V-Tree as a dependent type of the SDD\nand (2) creating a separate \texttt{sdd\_vtree : sdd -> vtree -> Prop} inductive relation which\nrelates a particular SDD to a V-Tree:\n\n\begin{verbatim}\nInductive sdd_vtree : sdd -> vtree -> Prop :=\n| AtomTrue : forall n, \n sdd_vtree (Atom ATrue) (VAtom n)\n| AtomFalse : forall n, \n sdd_vtree (Atom AFalse) (VAtom n)\n| AtomVar : forall n b, \n sdd_vtree (Atom (AVar n b)) (VAtom n)\n| OrEmpty : forall v, sdd_vtree (Or []) v\n| OrSingle: forall prime sub lvtree rvtree tail, \n sdd_vtree prime lvtree ->\n sdd_vtree sub rvtree ->\n sdd_vtree (Or (tail)) (VNode lvtree rvtree) ->\n sdd_vtree (Or ((prime, sub) :: tail)) \n  (VNode lvtree rvtree)\n\n\end{verbatim}\n\nThis relation is used throughout our proofs as a correctness assumption: a valid SDD must obey some V-Tree.
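\n\nFor intuition, the semantics of this datatype can be sketched in Python (the project's code is in Gallina and OCaml; this analogue and its encoding are ours): an \texttt{Or} node $[(p_i, s_i)]$ denotes $\bigvee_i (p_i \land s_i)$, and \texttt{AVar n b} holds when variable $n$ has value $b$.\n\n\begin{verbatim}\n# Python analogue of the Gallina datatype above.\n# An SDD is ('atom', a) or ('or', [(prime, sub), ...]);\n# an atom is True, False, or a pair (n, b) for AVar n b.\ndef eval_sdd(node, assignment):\n    kind, payload = node\n    if kind == 'atom':\n        if isinstance(payload, bool):\n            return payload        # ATrue / AFalse\n        n, b = payload            # AVar n b\n        return assignment[n] == b\n    # Or node: disjunction of (prime AND sub) pairs\n    return any(eval_sdd(p, assignment) and\n               eval_sdd(s, assignment)\n               for p, s in payload)\n\end{verbatim}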
\n\subsection{SDD Application}\nSDD application is the process of combining two SDDs $\alpha$ and $\beta$ into a\nthird SDD $\gamma$ according to some operation $\circ$. We defined application\nas a 4-argument inductive relation:\n\begin{verbatim}\nInductive sdd_apply : \n  op -> sdd -> sdd -> sdd -> Prop\n\end{verbatim}\n\nwhere \texttt{op} is either $\land$ or $\lor$. We decomposed this relation into three\nmutually recursive sub-relations:\n\begin{enumerate}\n\item \texttt{atom\_apply}, which handles applying atoms together\n\item \texttt{apply\_or\_list}, which takes two SDD \texttt{or}-nodes as arguments and\n  produces a new SDD \texttt{or}-node.\n\item \texttt{apply\_single\_list}, which takes a prime $p$, a sub $s$, and an\n  \texttt{or}-node $[(p_i, s_i)]$ as arguments and produces the SDD of the\n  form $[(p_i \land p, s_i \circ s)]$ for all $i$, where $\circ$ denotes applying\n  \texttt{op}.\n\n  For this operation, if $p_i \land p$ is not satisfiable, it is not to be\n  included in the final \texttt{or}-node which is produced. This necessitates\n  the \texttt{sdd\_sat} and \texttt{sdd\_unsat} relations.\n\end{enumerate}\n\n\subsection{Custom Inductive Hypothesis}\nUltimately, in order to prove theorems of the form in Fig.~\ref{fig:unsatthm}, we\nneed to be able to decompose along the primes and subs of the SDD. Because we implemented \texttt{or}-nodes' children as lists of pairs of nodes, the default inductive hypotheses that Coq inferred for the SDD data structure and for the \texttt{sdd\_apply} inductive construction proved too weak. These default inductive hypotheses failed to break down the lists of primes and subs embedded in our SDD data structure.\n\nTo resolve this issue, we created custom inductive hypotheses for the SDD data structure, and for the \texttt{sdd\_apply} inductive construction (\texttt{sdd\_ind'} and \texttt{sdd\_apply\_ind'} respectively). Refer to the attached source code for implementations of these hypotheses. Each of these hypotheses contains a fixpoint which operates on the proof goal. When the user calls the tactic \texttt{induction sdd using sdd\_ind'}, she is asked to prove the goal in relation to each prime and sub. The machinery of the custom inductive hypothesis then builds a proof of the general goal, using proofs of each piece. In such a manner, \texttt{sdd\_apply\_ind'} gave us inductive hypotheses strong enough to prove \texttt{apply\_unsat}.\n\n\subsection{SDD Compilation}\nSDD compilation is the process of transforming a Boolean expression into an SDD.\nThis procedure requires the following components:\n\begin{enumerate}\n\item An implementation of Boolean expressions. (\texttt{BoolExpr.v})\n\item A method which generates an SDD from an atom (\texttt{sdd\_of\_atom})\n\end{enumerate}\n\nWe compile SDDs from Boolean expressions in a bottom-up fashion. First, each atom is turned into an SDD atom. Then, we use \texttt{sdd\_apply} to combine these atoms according to the Boolean expression (and-ing them together for each and-node, or-ing them together for each or-node).\n\n\subsection{Decomposability}\n\nOur proof of decomposability relies heavily on varSets (implemented in \texttt{VarSet.v}). For any SDD or vtree, we can obtain a varSet, which is the set of all variables mentioned in the SDD/vtree. We prove decomposability by showing that the varSet of an SDD is a subset of the varSet of any vtree it obeys (see \texttt{sdd\_vtree\_vars}). 
We then use this result, along with induction on the \texttt{sdd\_vtree} relation, to prove the theorem shown in Fig.~\ref{fig:decomposable}. Interestingly, we did not need a custom inductive hypothesis to prove this property, as the \texttt{sdd\_vtree} relation encoded enough information about how to decompose the SDD and vtree in question to push the proof through.\n\n\section{Results}\n\nOur final codebase contains the following inductive constructions:\n\n\begin{itemize}\n\item \texttt{sdd\_apply}\n\item \texttt{sdd\_vtree}\n\item \texttt{sdd\_sat}\n\item \texttt{sdd\_unsat}\n\item \texttt{sdd\_of\_var}\n\item \texttt{sdd\_eval}\n\item Judgements around varSets (see \texttt{VarSet.v} and judgements containing \texttt{\_varSet} in \texttt{IndSdd.v})\n\item Judgements around boolean expressions (see \texttt{BoolExp.v})\n\end{itemize}\n\nWe successfully proved \texttt{apply\_unsat} and \texttt{sdd\_decomposable}, using custom inductive hypotheses and inductive relations on SDDs and vtrees. \n\nIn order to make our codebase a full research result, we would need to prove:\n\n\begin{itemize}\n\item apply preserves vtree: that the operands and result of the \texttt{sdd\_apply} operation all obey the same vtree. This result, combined with our proof of decomposability, would be enough to show that any compiled SDD is decomposable\n\item determinism (as formulated in the original goals)\n\item correctness (also as formulated in the original goals).\n\end{itemize}\n\nTogether, these properties would prove that our implementation of SDDs is valid, and would be enough to make our implementation a point of comparison for other researchers.\n\n\section{Lessons Learned}\n\n\subsection{Challenges}\n\nWorking with Coq this quarter has given us a good feel for the differences between writing software in a traditional context, and doing so for the purpose of verification. \n\nBecause the type system and language are so powerful, it is easy to formulate the same solution in many ways. A great deal of what we learned to do in this course was to pick the approach that would be easiest to prove correct. For example, we chose to use inductive judgements to prove our algorithms correct, instead of working directly with Gallina code. \n\nWe also had trouble working with the \texttt{apply} operation, as it performs complex manipulations that Coq couldn't work with automatically. We produced custom inductive hypotheses to try and get around this, but working with the proofs for apply was much more difficult without Coq doing a lot of the decomposition automatically.\n\n\subsection{Things we'd do Differently}\n\nIf we were to start from scratch, we would formulate our SDD operations as inductive judgements from the start. Each inductive judgement would include invariant information (the vtree) directly. While these judgements would not be formulated in the simplest way possible, having invariants on hand would greatly ease the proof process. We would use a similar SDD implementation, as it guarantees a degree of structural correctness that is useful to have. Overall, using inductive judgements and encoding information directly would help us produce the results shown here more quickly, and would give us a better jumping-off point to begin to reason about our original goals.\n\n\subsection{Strengths and Weaknesses of Coq}\n\nOverall, Coq is a powerful tool for proving software correct. 
By helping the user keep track of proof obligations and their knowledge, it helps the user formulate and tweak strategies for their proofs. \n\nBut, it relies heavily on a complex dependent type system and (at times) a strong user understanding of the Curry-Howard isomorphism. Both of these difficult concepts make  Coq in its current form fairly inaccessible to the average user. If this system is to be adopted in industry (perhaps only for mission critical systems), the system should be something that an industry professional can use confidently, without requiring a deep understanding of these esoteric concepts.\n\n\\bibliographystyle{plain}\n\\bibliography{bib}\n\\end{document}\n", "meta": {"hexsha": "9204b179cc160dbe9b44c486350fa3f90f7b4393", "size": 13981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "final-report/report.tex", "max_stars_repo_name": "SHoltzen/verified-sdd", "max_stars_repo_head_hexsha": "d400630db6526997226d6723ff8aedc0f1466901", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-01-22T22:17:04.000Z", "max_stars_repo_stars_event_max_datetime": "2017-01-22T22:17:04.000Z", "max_issues_repo_path": "final-report/report.tex", "max_issues_repo_name": "SHoltzen/verified-sdd", "max_issues_repo_head_hexsha": "d400630db6526997226d6723ff8aedc0f1466901", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "final-report/report.tex", "max_forks_repo_name": "SHoltzen/verified-sdd", "max_forks_repo_head_hexsha": "d400630db6526997226d6723ff8aedc0f1466901", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.7730769231, "max_line_length": 738, "alphanum_fraction": 0.7757671125, "num_tokens": 3543, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7879312056025699, "lm_q1q2_score": 0.5662029365806329}}
{"text": "\\chapter{Computational aspects}\n\n\\abstract{In this chapter\nwe discuss some of the classical methods for integrating a function. The methods we discuss are  the trapezoidal, rectangular and Simpson's rule for equally spaced \nabscissas  and integration approaches  \nbased on Gaussian quadrature. The latter are more suitable\nfor the case where the abscissas are not equally spaced. \nThe emphasis is on \nmethods for evaluating few-dimensional (typically up to four dimensions) integrals. In\nchapter \\ref{chap:mcint} \nwe show how Monte Carlo methods can be used to compute multi-dimensional\nintegrals.\nWe discuss also how to compute \nsingular integrals.\nWe end this chapter with an extensive discussion on MPI and parallel computing.\nThe examples focus on parallelization of algorithms for computing integrals. }\n\n", "meta": {"hexsha": "3376b0e0af8eae3429452d75383f4084c9da09f3", "size": 810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/src/chapters/chapter5.tex", "max_stars_repo_name": "ManyBodyPhysics/CQMech", "max_stars_repo_head_hexsha": "8395f082392844a0e2831649aab4108324c86312", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-06-18T14:34:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T14:44:41.000Z", "max_issues_repo_path": "doc/src/chapters/chapter5.tex", "max_issues_repo_name": "ManyBodyPhysics/CQMech", "max_issues_repo_head_hexsha": "8395f082392844a0e2831649aab4108324c86312", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/src/chapters/chapter5.tex", "max_forks_repo_name": "ManyBodyPhysics/CQMech", "max_forks_repo_head_hexsha": "8395f082392844a0e2831649aab4108324c86312", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-18T15:21:45.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-18T15:21:45.000Z", "avg_line_length": 45.0, "max_line_length": 164, "alphanum_fraction": 0.8148148148, "num_tokens": 171, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5662029353421508}}
{"text": "\\newpage\r\n%HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH\r\n\r\n\\section{Calculator with memory\\label{calc}}\r\n\r\n%HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH\r\n\r\nBack to the calculator from Section~\\ref{arith}.\r\nWe suggest one more improvement:\r\nthe ability to store computed values for subsequent use.\r\nThe idea is that  \r\na session may look like this:\r\n\r\n\\small\r\n\\begin{Verbatim}[samepage=true,xleftmargin=15mm,baselinestretch=0.8]\r\n java Calc\r\n > n = 3/2\r\n > n = (3/n + n)/2\r\n > (3/n + n)/2\r\n   1.7321428571428572\r\n\\end{Verbatim}\r\n\\normalsize\r\n\r\nIn the first line we store the number \\tx{3/2} under name \"\\tx{n}\".\r\nIn the second, we replace it by \\tx{(3/n + n)/2}, computed using the stored value.\r\nFinally, we use the newest \\tx{n} to compute one more value and print it.\r\n(Is it just an accident that the result resembles $\\sqrt{3}$\\,?)\r\n\r\nTo achieve this kind of functionality, you can modify the grammar as follows:\r\n\r\n\\small\r\n\\begin{Verbatim}[frame=single,framesep=2mm,samepage=true,xleftmargin=15mm,xrightmargin=15mm,baselinestretch=0.8]\r\n   Input   = Space (Store / Print) !_ ;\r\n   Store   = Name Space Equ Sum {store} ;\r\n   Print   = Sum {print} ;\r\n   Sum     = Sign Product (AddOp Product)* {sum} ;\r\n   Product = Factor (MultOp Factor)* {product} ;\r\n   Factor  = Digits? \".\" Digits Space {fraction}\r\n           / Digits Space {integer}\r\n           / Lparen Sum Rparen {unwrap} \r\n           / Name Space {retrieve} ; \r\n   Sign    = (\"-\" Space)? ;\r\n   AddOp   = [-+] Space ;\r\n   MultOp  = [*/]? Space ;\r\n   Lparen  = \"(\" Space ;\r\n   Rparen  = \")\" Space ;\r\n   Equ     = \"=\" Space ;\r\n   Digits  = [0-9]+ ;\r\n   Name    = [a-z]+ ;\r\n   Space   = \" \"* ;\r\n\\end{Verbatim}\r\n\\normalsize\r\n\r\nWe have here two alternative forms of \\tx{Input}\r\nto perform the two tasks,\r\nand one more way to obtain a \\tx{Factor}: \r\nby retrieving a stored result.\r\nThe results will be stored in a hash table \\tx{dictionary}\r\nthat resides in the semantics object.\r\n\r\nThe semantic actions for \\tx{Store} and \\tx{Print} are:\r\n\r\n\\small\r\n\\begin{Verbatim}[frame=single,framesep=2mm,samepage=true,xleftmargin=15mm,xrightmargin=15mm,baselinestretch=0.8]\r\n   //-------------------------------------------------------------------\r\n   //  Store = Name Space Equ Sum\r\n   //            0    1    2   3\r\n   //-------------------------------------------------------------------\r\n   void store()\r\n     { dictionary.put(rhs(0).text(),(Double)rhs(3).get()); }\r\n\\end{Verbatim}\r\n\\normalsize\r\n\r\n\\small\r\n\\begin{Verbatim}[frame=single,framesep=2mm,samepage=true,xleftmargin=15mm,xrightmargin=15mm,baselinestretch=0.8]\r\n   //-------------------------------------------------------------------\r\n   //  Print = Sum\r\n   //           0\r\n   //-------------------------------------------------------------------\r\n   void print()\r\n     { System.out.println((Double)rhs(0).get()); }\r\n\\end{Verbatim}\r\n\\normalsize\r\n\r\n\\newpage\r\nThe stored result is retrieved by semantic action\r\nfor the fourth alternative of \\tx{Factor}:\r\n\r\n\\small\r\n\\begin{Verbatim}[frame=single,framesep=2mm,samepage=true,xleftmargin=15mm,xrightmargin=15mm,baselinestretch=0.8]\r\n   //-------------------------------------------------------------------\r\n   //  Factor = Name Space\r\n   //             0    1\r\n   
//-------------------------------------------------------------------\r\n   void retrieve()\r\n     {\r\n       String name = rhs(0).text();\r\n       Double d = dictionary.get(name);\r\n       if (d==null)\r\n       {\r\n         d = (Double)Double.NaN;\r\n         System.out.println\r\n           (rhs(0).where(0) + \": '\" + name + \"' is not defined.\");\r\n       }\r\n       lhs().put(d);\r\n     }\r\n\\end{Verbatim}\r\n\\normalsize\r\n\r\nYou have to handle the case where the name is not in the dictionary.\r\nThe example shown here returns in such a case a \\tx{NaN} (Not a Number) object. \r\nUsing \\tx{NaN} in subsequent calculations produces inevitably a \\tx{NaN} result.\r\nA message printed to \\tx{System.out} explains where this \\tx{NaN} comes from.\r\nNote the call to \\tx{rhs(0).where(0)} that returns a text telling where to find the name\r\nin the input. \r\n\r\nYou find the modified grammar and semantics in \\tx{example7}.\r\nYou find there also a file \\tx{Calc.java} containing invocation of the calculator.\r\nCopy these files to \\tx{work} directory, generate a new parser, and compile\r\neverything.\r\nYou can now invoke the calculator as shown at the beginning of this section.\r\n", "meta": {"hexsha": "35dc152bd9b28756ed5d637a809bcf60a3d16e3c", "size": 4421, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mouse/source/manual/Calculator.tex", "max_stars_repo_name": "celer/mouse", "max_stars_repo_head_hexsha": "021a81f0c02fc079a944569ba382f2c9d7b9b9eb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2015-01-30T11:17:56.000Z", "max_stars_repo_stars_event_max_datetime": "2017-04-08T14:06:28.000Z", "max_issues_repo_path": "Mouse/source/manual/Calculator.tex", "max_issues_repo_name": "celer/mouse", "max_issues_repo_head_hexsha": "021a81f0c02fc079a944569ba382f2c9d7b9b9eb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-04-07T06:22:47.000Z", "max_issues_repo_issues_event_max_datetime": "2016-05-31T11:00:12.000Z", "max_forks_repo_path": "Mouse/source/manual/Calculator.tex", "max_forks_repo_name": "celer/mouse", "max_forks_repo_head_hexsha": "021a81f0c02fc079a944569ba382f2c9d7b9b9eb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.237704918, "max_line_length": 113, "alphanum_fraction": 0.5933046822, "num_tokens": 1186, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.787931188173138, "lm_q1q2_score": 0.5662029335529091}}
{"text": "\\documentclass{article}\n\\usepackage{bm}\n\\usepackage{amssymb,amsmath}\n\\usepackage{color}\n\\usepackage{graphics}\n\\input{symbols}\n\n\\title{Minimization Algorithm in the GSI}\n\\author{Jing Guo}\n\n\\begin{document}\n\\maketitle\n\n\\section{Conventions}\nThis note describes the\noptimization algorithm used by the current GSI system.\n\nThe notation used in this document\nis selected to reflect a balanced consideration of\n\\begin{enumerate}\n\\item variable names used in the GSI software,\n\\item symbols used in GEOS DAS literature,\n\\item symbols used in related literature of optimization algorithms, and\n\\item notation consistency throughout this note.\n\\end{enumerate}\n\nIn this document, a superscript often categorizes the symbol, such\n$()^b$ for background,\n$()^a$ for analysis, and\n$()^g$ for initial guess.\nExceptions\ninclude $()^T$ for matrix-transpose and $()^{-1}$ for matrix-inverse,\nnumbers for exponents, or otherwise being specified in the text.\n\nOn the other hand, a subscript often denotes the instantiation of the\nsymbol, {\\it i.e.} symbols evaluated at specific locations or with\nrespect to specific variables.\n\n\\section{Cost Function $J(x)$}\nA variational data assimilation problem tries to minimize the\ncost function in the following form\n\\begin{align}\t\\label{eq:cost-in-x}\nJ(x) &= \\onehalf\n  \\Bigl\\{\n\t(x-x^b)^T\n\t\\bP^{-1}\n\t(x-x^b)\n\t+\n\t\\bigl[ \\cH(x) - o \\bigr]^T\n\t\\bR^{-1}\n\t\\bigl[ \\cH(x) - o \\bigr]\n  \\Bigr\\}\n\\end{align}\nwith respect to a gridded field $x$,\nwhere\n\\begin{align*}\n  x^b &: \\text{a given background field of $x$}\t\t\\\\\n  o &: \\text{observations}\t\t\t\t\\\\\n  \\bR &: \\text{observation error covariance}\t\t\\\\\n  \\bP &: \\text{background error covariance}\t\t\\\\\n  \\cH &: \\text{observation operator}\n\\end{align*}\n\nDefine\n\\begin{gather}\n\\label{eq:xhatsave}\n\\xhatsave \\equiv x - x^b\n\\end{gather}\nThen in term of $x$-increment $\\xhatsave$,\nthe cost function is\n\\begin{subequations}\n\\begin{align}\n\\label{eq:cost-in-xhatsave}\nJ(x) &= \\onehalf\n  \\Bigl\\{\n\t(\\xhatsave)^T\n\t\\bP^{-1}\n\t\\xhatsave\n\t+\n\t\\bigl[ \\cH(x) - o \\bigr]^T\n\t\\bR^{-1}\n\t\\bigl[ \\cH(x) - o \\bigr]\n  \\Bigr\\}\t\t\\\\\n\\label{eq:x-in-xhatsave}\nx &= x^b + \\xhatsave\n\\end{align}\n\\end{subequations}\n\nDefine a preconditioned increment vector\n\\begin{align}\n  \\yhatsave &\\equiv \\bP^{-1}\\xhatsave\n\\end{align}\nThen in term of $y$-increment $\\yhatsave$, the cost function is\n\\begin{subequations}\n\\begin{align}\n\\label{eq:cost-in-yhatsave}\nJ(x) &= \\onehalf\n  \\Bigl\\{\n\t(\\yhatsave)^T\n\t\\bP\n\t\\yhatsave\n\t+\n\t\\bigl[ \\cH(x) - o \\bigr]^T\n\t\\bR^{-1}\n\t\\bigl[ \\cH(x) - o \\bigr]\n  \\Bigr\\}\n  \\\\\n\\label{eq:x-in-yhatsave}\nx &= x^b + \\bP \\yhatsave\n\\end{align}\n\\end{subequations}\n\nThe minimization of Eq. (\\ref{eq:cost-in-x}) with respect to\n$x$ is equivalent to\nthe minimization of Eq. (\\ref{eq:cost-in-xhatsave}) with respect to\n$\\xhatsave$, and in particular, is equivalent to\nthe minimization of Eq. (\\ref{eq:cost-in-yhatsave}) with respect to\n$\\yhatsave$.\n\n\\section{Gradient of Cost Function, $\\nabla_\\yhatsave J$}\n\nThe first thing of an iterative optimization algorithm is\nto evaluate the gradient of the cost function.\nDenoted by $\\nabla_\\yhatsave J$, the\ngradient of the cost function $J(x)$ in Eq. 
(\\ref{eq:cost-in-yhatsave})\nwith respect to $\\yhatsave$ at $x=x^b+\\xhatsave$ is\n\\begin{align}\n\\label{eq:grad-in-yhatsave}\n  \\nabla_\\yhatsave J\n    &=\t\\xhatsave +\n\t\\bP\\bH_x^T \\bR^{-1}\n\t\\bigl[ \\cH(x) - o \\bigr]\n\\end{align}\nwhere\n\\begin{align*}\n  \\bH_x &\\equiv \\left. \\frac{\\partial \\cH(x)}{\\partial x}\n  \t\t\\right|_{x}\n\\end{align*}\nbased on identities\n\\begin{gather*}\n  \\frac{\\partial \\cH(x(\\yhatsave))}{\\partial \\yhatsave}\n    =\t\\frac{\\partial \\cH\\bigl(x(\\yhatsave)\\bigr)}{\\partial x}\t\\cdot\n\t\\frac{\\partial          x(\\yhatsave)      }{\\partial \\yhatsave}\n    =\t\\bH_x \\bP\t\\\\\n  \\bP^T = \\bP\n\\end{gather*}\n\nFor a cost function minimization process started with a given initial\n``guess'' field $x^g$, let\n\\begin{gather}\n\\label{eq:xhat}\n  \\xhat \\equiv x - x^g\n\\end{gather}\nthen\n\\begin{align*}\n  x = x^b + \\xhatsave = x^g + \\xhat.\n\\end{align*}\nEq.~(\\ref{eq:grad-in-yhatsave}) becomes\n\\begin{align}\n\\label{eq:grad-wrt-yhatsave-in-xhat}\n  \\nabla_\\yhatsave J\n    &=\t\\xhatsave +\n\t\\bP\\bH_x^T \\bR^{-1}\n\t\\bigl[ \\cH(x^g+\\xhat) - o \\bigr]\n\\end{align}\nOr correspondingly\n\\begin{align}\n\\label{eq:grad-wrt-xhatsave-in-xhat}\n  \\nabla_\\xhatsave J\n    &=\t\\yhatsave +\n\t\\bH_x^T \\bR^{-1}\n\t\\bigl[ \\cH(x^g+\\xhat) - o \\bigr]\n\\end{align}\n\n\n\\section{Minimization Algorithm}\n\nThe objective of an iterative step in the cost function minimization\nprocess utilizing the gradient defined by\nEq.~(\\ref{eq:grad-wrt-yhatsave-in-xhat})\nis to find the next correction step $\\delta x$ which reduces\nthe cost function in $x$\n\\begin{align*}\n  J(x+\\delta x) < J(x);\n\\end{align*}\nOr in $\\xhat$\n\\begin{align*}\n  J(\\xhat+\\delta\\xhat) < J(\\xhat).\n\\end{align*}\nNote that $\\delta\\xhat \\equiv \\delta x$ by the definition of $\\xhat$.\n\nThis minimization objective can be achieved through the\nalgorithm in the GSI system, shown in Fig.~\\ref{alg:algo} in\nmathematical terms, and in Fig.~\\ref{alg:code} in\nprogram terms.\n\n\n%\\input{algo-pcgsoi-math}\n\\begin{figure}[hb]\n\\resizebox{!}{!}{\n\\includegraphics{algo-pcgsoi-math.eps}}\n\n\\caption{Routine {\\tt PCGSOI()} in mathematical terms.}\n\\label{alg:algo}\n\\end{figure}\n\nIn Fig.~\\ref{alg:algo},\nthe objective of code segment line 6-9 is to compute the gradient\nwith respect to $\\xhat$ ({\\em i.e.} $\\xhatsave$) and $\\yhatsave$,\nrespectively.  
Line 10-13 is to compute the correction directions\nin $\\xhat$ ($\\xhatsave$) and $\\yhatsave$, where line 11 for\nscalar $\\beta$\nis in agreement with Eqs.~(106) and (107) in Navon and Legler (1987),\nin the ``one-step limited-memory BFGS quasi-Newton update'' form.\nLine 14 is to compute the step-size\nthat minimize the cost function along the given correction direction\n$\\dirx_i$ or $\\diry_i$, which\ncan also be noted as ``solve\n\\begin{align*}\n0 &=(\\dirx_i)^T\\!\\nabla_\\xhat J\t\\\\\n  &=(\\dirx_i)^T\\!\\yhatsave +\n    (\\dirx_i)^T\\!\\bH^T\\Rinv\n\t\\bigl(\\cH(x_j^g+\\xhat_{i-1}+\\alpha_i\\dirx_i)-o\\bigr)\t\\\\\n  &=(\\diry_i)^T\\!\\xhatsave +\n    (\\dirx_i)^T\\!\\bH^T\\Rinv\n\t\\bigl(\\cH(x_j^g+\\xhat_{i-1}+\\alpha_i\\dirx_i)-o\\bigr)\t\\\\\n  &=(\\diry_i)^T\\!\\nabla_\\yhatsave J\n\\end{align*}\nfor $\\alpha_i$''.\nLine 15-17, is to compute the next values of $\\xhat$ ($\\xhatsave$)\nand $\\yhatsave$.\n\n%\\input{algo-pcgsoi-code}\n\\begin{figure}[hb]\n\\resizebox{!}{!}{\n\\includegraphics{algo-pcgsoi-code.eps}}\n\n\\caption{Routine {\\tt PCGSOI()} in program terms.\nNote two new operators ${\\bm L}$ and ${\\bm L}^{-1}$.\n} \\label{alg:code}\n\\end{figure}\n\nThe algorithm shown in Fig.~\\ref{alg:algo} and Fig.~\\ref{alg:code} is\nknown as the ``inner-loop'' of the minimization process in the GSI, or\n\\verb\"SUBROUTINE PCGSOI()\".  Note that beside that Fig.~\\ref{alg:algo}\nis in mathematical terms and Fig.~\\ref{alg:code} is in program terms,\nFig.~\\ref{alg:code} also identifies the difference between the source\ngrid-space defining background state $q^b$ and analysis $q^a$, and the\nintermediate grid-space defining the guess state $x^g$ and the\nincrement state $\\xhat$, and two linear operators\n($\\bL$ and $\\bL^{-1}$) converting between\nthe two spaces back-and-forth.\n\nThe algorithm descriptions given in these two figures\ninclude neither some configuration steps for the ``inner-loop'', nor\nthe so-called ``restart'' of the search directions.  
Both are parts of\nthe algorithm known as the ``outer-loop'' of the process in the GSI,\nor in \\verb\"SUBROUTINE GLBSOI()\".\n\nFigure~\\ref{alg:twoloops} describes\nthe complete minimization process in its two level nested-loop form,\nfor the algorithm in routine \\verb\"GLBSOI()\" as well as\nits major lower level routines.\nFor convenience, some of these lower level\nroutines are marked as Fortran comments in the figure.\nNote that the description of line 4-6 is highly simplified,\nnot an accurate representation of the complex details in this\npart of the whole algorithm.\n\n%\\input{algo-glbsoi-code}\n\\begin{figure}[hb]\n\\resizebox{!}{!}{\n\\includegraphics{algo-glbsoi-code.eps}}\n\\caption{Minimization in two level nested loops\nfrom routine {\\tt GLBSOI()} down.\n} \\label{alg:twoloops}\n\\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "6574dcea62fe91b119e4835749613773900bba20", "size": 8031, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "GEOSaana_GridComp/GSI_GridComp/About../gsisolver.tex", "max_stars_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_stars_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_stars_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "GEOSaana_GridComp/GSI_GridComp/About../gsisolver.tex", "max_issues_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_issues_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_issues_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_issues_count": 43, "max_issues_repo_issues_event_min_datetime": "2019-08-15T20:38:31.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-04T15:20:38.000Z", "max_forks_repo_path": "GEOSaana_GridComp/GSI_GridComp/About../gsisolver.tex", "max_forks_repo_name": "GEOS-ESM/GEOSana_GridComp", "max_forks_repo_head_hexsha": "cf33607613754313a2383bb7e7b3d29c856b9daf", "max_forks_repo_licenses": ["NASA-1.3", "ECL-2.0", "Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-20T23:40:17.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-11T08:20:51.000Z", "avg_line_length": 28.6821428571, "max_line_length": 72, "alphanum_fraction": 0.7011580127, "num_tokens": 2774, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199714402813, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5660339312754954}}
{"text": "%[TODO: FROM VIDEO]\n\\section{Motion Analysis and Object Tracking}\n\n\\ifCPy\n\n\\cvCPyFunc{Acc}\nAdds a frame to an accumulator.\n\n\\cvdefC{\nvoid cvAcc( \\par const CvArr* image,\\par CvArr* sum,\\par const CvArr* mask=NULL );\n}\n\\cvdefPy{Acc(image,sum,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently)}\n\\cvarg{sum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds the whole image \\texttt{image} or its selected region to the accumulator \\texttt{sum}:\n\n\\[ \\texttt{sum}(x,y) \\leftarrow \\texttt{sum}(x,y) + \\texttt{image}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\n\\cvCPyFunc{MultiplyAcc}\nAdds the product of two input images to the accumulator.\n\n\\cvdefC{\nvoid cvMultiplyAcc( \\par const CvArr* image1,\\par const CvArr* image2,\\par CvArr* acc,\\par const CvArr* mask=NULL );\n}\n\\cvdefPy{MultiplyAcc(image1,image2,acc,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{image1}{First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)}\n\\cvarg{image2}{Second input image, the same format as the first one}\n\\cvarg{acc}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds the product of 2 images or their selected regions to the accumulator \\texttt{acc}:\n\n\\[ \\texttt{acc}(x,y) \\leftarrow \\texttt{acc}(x,y) + \\texttt{image1}(x,y) \\cdot \\texttt{image2}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\n\n\\cvCPyFunc{RunningAvg}\nUpdates the running average.\n\n\\cvdefC{\nvoid cvRunningAvg( \\par const CvArr* image,\\par CvArr* acc,\\par double alpha,\\par const CvArr* mask=NULL );\n}\n\\cvdefPy{RunningAvg(image,acc,alpha,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)}\n\\cvarg{acc}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{alpha}{Weight of input image}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function calculates the weighted sum of the input image\n\\texttt{image} and the accumulator \\texttt{acc} so that \\texttt{acc}\nbecomes a running average of frame sequence:\n\n\\[ \\texttt{acc}(x,y) \\leftarrow (1-\\alpha) \\cdot \\texttt{acc}(x,y) + \\alpha \\cdot \\texttt{image}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\nwhere $\\alpha$ regulates the update speed (how fast the accumulator forgets about previous frames).\n\n\\cvCPyFunc{SquareAcc}\nAdds the square of the source image to the accumulator.\n\n\\cvdefC{\nvoid cvSquareAcc( \\par const CvArr* image,\\par CvArr* sqsum,\\par const CvArr* mask=NULL );\n}\\cvdefPy{SquareAcc(image,sqsum,mask=NULL)-> None}\n\n\\begin{description}\n\\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)}\n\\cvarg{sqsum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds the input image \\texttt{image} or 
its selected region, raised to power 2, to the accumulator \\texttt{sqsum}:\n\n\\[ \\texttt{sqsum}(x,y) \\leftarrow \\texttt{sqsum}(x,y) + \\texttt{image}(x,y)^2 \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\n\\fi\n\n\\ifCpp\n\n\\cvCppFunc{accumulate}\nAdds image to the accumulator.\n\n\\cvdefCpp{void accumulate( const Mat\\& src, Mat\\& dst, const Mat\\& mask=Mat() );}\n\\begin{description}\n\\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point}\n\\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds \\texttt{src}, or some of its elements, to \\texttt{dst}:\n\n\\[ \\texttt{dst}(x,y) \\leftarrow \\texttt{dst}(x,y) + \\texttt{src}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\nThe function supports multi-channel images; each channel is processed independently.\n\nThe functions \\texttt{accumulate*} can be used, for example, to collect statistic of background of a scene, viewed by a still camera, for the further foreground-background segmentation.\n\nSee also: \\cvCppCross{accumulateSquare}, \\cvCppCross{accumulateProduct}, \\cvCppCross{accumulateWeighted}\n\n\\cvCppFunc{accumulateSquare}\nAdds the square of the source image to the accumulator.\n\n\\cvdefCpp{void accumulateSquare( const Mat\\& src, Mat\\& dst, \\par const Mat\\& mask=Mat() );}\n\\begin{description}\n\\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point}\n\\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds the input image \\texttt{src} or its selected region, raised to power 2, to the accumulator \\texttt{dst}:\n\n\\[ \\texttt{dst}(x,y) \\leftarrow \\texttt{dst}(x,y) + \\texttt{src}(x,y)^2 \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\nThe function supports multi-channel images; each channel is processed independently.\n\nSee also: \\cvCppCross{accumulateSquare}, \\cvCppCross{accumulateProduct}, \\cvCppCross{accumulateWeighted}\n\n\\cvCppFunc{accumulateProduct}\nAdds the per-element product of two input images to the accumulator.\n\n\\cvdefCpp{void accumulateProduct( const Mat\\& src1, const Mat\\& src2,\\par\n                        Mat\\& dst, const Mat\\& mask=Mat() );}\n\\begin{description}\n\\cvarg{src1}{The first input image, 1- or 3-channel, 8-bit or 32-bit floating point}\n\\cvarg{src2}{The second input image of the same type and the same size as \\texttt{src1}}\n\\cvarg{dst}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function adds the product of 2 images or their selected regions to the accumulator \\texttt{dst}:\n\n\\[ \\texttt{dst}(x,y) \\leftarrow \\texttt{dst}(x,y) + \\texttt{src1}(x,y) \\cdot \\texttt{src2}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\nThe function supports multi-channel images; each channel is processed independently.\n\nSee also: \\cvCppCross{accumulate}, \\cvCppCross{accumulateSquare}, \\cvCppCross{accumulateWeighted}\n\n\\cvCppFunc{accumulateWeighted}\nUpdates the running average.\n\n\\cvdefCpp{void accumulateWeighted( const Mat\\& src, Mat\\& dst,\\par\n                         double alpha, const Mat\\& mask=Mat() );}\n\\begin{description}\n\\cvarg{src}{The input image, 1- or 
3-channel, 8-bit or 32-bit floating point}\n\\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}\n\\cvarg{alpha}{Weight of the input image}\n\\cvarg{mask}{Optional operation mask}\n\\end{description}\n\nThe function calculates the weighted sum of the input image\n\\texttt{src} and the accumulator \\texttt{dst} so that \\texttt{dst}\nbecomes a running average of frame sequence:\n\n\\[ \\texttt{dst}(x,y) \\leftarrow (1-\\texttt{alpha}) \\cdot \\texttt{dst}(x,y) + \\texttt{alpha} \\cdot \\texttt{src}(x,y) \\quad \\text{if} \\quad \\texttt{mask}(x,y) \\ne 0 \\]\n\nthat is, \\texttt{alpha} regulates the update speed (how fast the accumulator \"forgets\" about earlier images).\nThe function supports multi-channel images; each channel is processed independently.\n\nSee also: \\cvCppCross{accumulate}, \\cvCppCross{accumulateSquare}, \\cvCppCross{accumulateProduct}\n\n\\fi\n", "meta": {"hexsha": "17e3839adc9abf528a848eed5c72a4650499c394", "size": 7628, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "to/lang/OpenCV-2.2.0/doc/imgproc_motion_tracking.tex", "max_stars_repo_name": "eirTony/INDI1", "max_stars_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "to/lang/OpenCV-2.2.0/doc/imgproc_motion_tracking.tex", "max_issues_repo_name": "eirTony/INDI1", "max_issues_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2016-11-24T10:46:39.000Z", "max_issues_repo_issues_event_max_datetime": "2016-12-10T07:24:15.000Z", "max_forks_repo_path": "to/lang/OpenCV-2.2.0/doc/imgproc_motion_tracking.tex", "max_forks_repo_name": "eirTony/INDI1", "max_forks_repo_head_hexsha": "42642d8c632da53f60f2610b056547137793021b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.8705882353, "max_line_length": 185, "alphanum_fraction": 0.7425275302, "num_tokens": 2243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8376199552262967, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5660339148490909}}
{"text": "% Preamble.\n\\documentclass[12pt]{article}\n\\usepackage[margin=1.25in]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n%% Title macros.\n\\newcommand{\\EXAMNUM}{2}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-07-08}\n\n\\title{\\vspace{-2\\baselineskip}MATH 225 - Exam \\#\\EXAMNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%% Formatting options.\n%\\pagenumbering{gobble}  % Include for single-page document.\n\\allowdisplaybreaks\n\n% Macros.\n%% Norm.\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n%% Rotation and reflection matrices.\n\\newcommand{\\rota}[1]{\\text{Rot}(#1 \\degree)}\n\\newcommand{\\refl}[1]{\\text{Ref}(#1 \\degree)}\n%% Augmented matrix.\n% https://tex.stackexchange.com/a/2238\n\\newenvironment{amatrix}[1]{%\n  \\left(\\begin{array}{@{}*{#1}{c}|c@{}}\n}{%\n  \\end{array}\\right)\n}\n%% Contextualized by input/output bases.\n\\newcommand{\\based}[3]{{\\{#1\\}}_{#2}^{#3}}\n\n\n% Document.\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{For each each of the ordinary differential equations listed below,\nanswer the following:\n\\begin{itemize}\n\t\\item What is its order?\n\t\\item Is it linear or non-linear?\n\t\\item If non-linear, why?\n\t\\item If linear, is it constant coefficient? Is it homogenous?\n\\end{itemize}\n}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item $x^{\\prime\\prime\\prime} + x \\cdot x^{\\prime\\prime} + t x^\\prime\n\t+ x = t^2$.\n\t\\\\[\\baselineskip]\n\tThird-order, non-linear. \\\\\n\tIt is non-linear because of the term $x \\cdot x^{\\prime\\prime}$.\n\t\n\t\\item $x^{\\prime\\prime\\prime} + t x^{\\prime\\prime} + x = \\text{sin}(t).$\n\t\\\\[\\baselineskip]\n\tThird-order, linear. \\\\\n\tIt is not constant coefficient because of the term $t x^{\\prime\\prime}$.\n\t\\\\\n\tIt is not homogenous because of the term $\\text{sin}(t)$.\n\t\n\t\\item $x^2 + t x^\\prime + x = \\text{sin}(t)$.\n\t\\\\[\\baselineskip]\n\tFirst-order, non-linear. \\\\\n\tIt is non-linear because of the term $x^2$.\n\t\n\t\\item $2x^{\\prime\\prime} + 4 x^\\prime = -x$.\n\t\\\\[\\baselineskip]\n\tSecond-order, linear. \\\\\n\tIt is constant coefficient because all of the coefficients are constant. \\\\\n\tIt is homogenous because there are no constant terms.\n\\end{enumerate}\n\\newpage\n\n\\section*{2.}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{Solve\n\t$x^{\\prime\\prime} + 3x^\\prime + 2x = 0$ with the restrictions that\n\t$x(0) = 2$ and $x^\\prime(0) = 1$.}\\\\[\\baselineskip]\n\tThe roots of the characteristic equation are\n\t\\begin{equation*}\n\t\tr^2 + 3r + 2 = 0 \\quad \\Rightarrow \\quad r_1 = -1, \\quad r_2 = -2.\n\t\\end{equation*}\n\tWith $2$ distinct real roots, the general solution is\n\t\\begin{equation*}\n\t\tx(t) = c_1 e^{-t} + c_2 e^{-2 t}.\n\t\\end{equation*}\n\tUsing the system of equations given by the initial conditions,\n\t\\begin{alignat*}{2}\n\t\tx(0) &=& \\quad 2 &= c_1 + c_2 \\\\\n\t\tx^\\prime(0) &=& \\quad 1 &= -c_1 - 2 c_2,\n\t\\end{alignat*}\n\tthe coefficients can be solved for as\n\t\\begin{equation*}\n\t\tc_1 = 5, \\quad\n\t\tc_2 = -3.\n\t\\end{equation*}\n\tThus,\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\tx(t) = 5 e^{-t} - 3 e^{-2 t}\n\t\t}.\n\t\\end{equation*}\n\t\n\t\\item \\textit{Consider a similar, but non-homogenous o.d.e.:\n\t$x^{\\prime\\prime} + 3x^\\prime + 2x = 2t + 3$. 
Show that\n\t$x(t) = 7e^{-2t} + 2e^{-t} + t$ solves this non-homogeneous o.d.e.}\n\t\\\[\baselineskip]\n\tLet's ``brute force plug and chug'':\n\t\begin{align*}\n\t\tx &= 7e^{-2t} + 2e^{-t} + t \\\n\t\tx^\prime &= -14e^{-2t} - 2e^{-t} + 1 \\\n\t\tx^{\prime\prime} &= 28e^{-2t} + 2e^{-t}\n\t\end{align*}\n\tThus,\n\t\begin{align*}\n\t\t2t + 3 &\overset{?}{=} x^{\prime\prime} + 3x^\prime + 2x\n\t\t\\ &\overset{?}{=}\n\t\t(28e^{-2t} + 2e^{-t})\n\t\t+ 3(-14e^{-2t} - 2e^{-t} + 1)\n\t\t+ 2(7e^{-2t} + 2e^{-t} + t)\n\t\t\\ &\overset{?}{=}\n\t\t(28 - 42 + 14)(e^{-2t})\n\t\t+ (2 - 6 + 4)(e^{-t})\n\t\t+ (2)(t) + (3)(1)\n\t\t\\\n\t\t2t + 3 &= 2t + 3.\n\t\end{align*}\n\end{enumerate}\n\newpage\n\n\section*{3.}\n\textit{Suppose that}\n\begin{equation*}\n\tA = \n\t\begin{pmatrix}\n\t\t-1 & 1 & 1 \\\n\t\t1 & -1 & 1 \\\n\t\t1 & 1 & -1\n\t\end{pmatrix}\n\t\begin{pmatrix}\n\t\t1 & 0 & 0 \\\n\t\t0 & 2 & 0 \\\n\t\t0 & 0 & -1\n\t\end{pmatrix}\n\t\begin{pmatrix}\n\t\t-1 & 1 & 1 \\\n\t\t1 & -1 & 1 \\\n\t\t1 & 1 & -1\n\t\end{pmatrix}^{-1}\n\t.\n\end{equation*}\n\begin{enumerate}[label=(\alph*)]\n\t\item \textit{What is\n\t\t$A^{1001} \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}$\n\t?}\n\t\\\[\baselineskip]\n\tNote that\n\t\begin{equation*}\n\t\t\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}\n\t\t=\n\t\t\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t+\n\t\t\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t.\n\t\end{equation*}\n\tThus,\n\t\begin{align*}\n\t\tA^{1001} \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}\n\t\t&=\n\t\tA^{1001} \left(\n\t\t\t\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t\t+\n\t\t\t\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\right)\n\t\t\\\n\t\t&=\n\t\tA^{1001} \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t+\n\t\tA^{1001} \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\\\n\t\t&=\n\t\t\lambda_{1}^{1001} \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t+\n\t\t\lambda_{3}^{1001} \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\\\n\t\t&=\n\t\t1^{1001} \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t+\n\t\t{(-1)}^{1001} \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\\\n\t\t&=\n\t\t\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}\n\t\t-\n\t\t\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\\\n\t\t&=\n\t\t\boxed{\n\t\t\t\begin{pmatrix} -2 \\ 0 \\ 2 \end{pmatrix}\n\t\t}\n\t\end{align*}\n\t\n\t\item \textit{What is $\text{det}(A)$? Why?}\n\tWriting $A = S D S^{-1}$ for the factorization above,\n\t\begin{align*}\n\t\t\text{det}(A)\n\t\t&= \text{det}(S) \text{det}(D) \text{det}(S^{-1}) \\\n\t\t&= \text{det}(S) \text{det}(D) \frac{1}{\text{det}(S)} \\\n\t\t&= \text{det}(D) \\\n\t\t&=\n\t\t\begin{vmatrix}\n\t\t\t1 & 0 & 0 \\\n\t\t\t0 & 2 & 0 \\\n\t\t\t0 & 0 & -1\n\t\t\end{vmatrix} \\\n\t\t&= 1 \cdot 2 \cdot (-1) \\\n\t\t&= \boxed{-2}.\n\t\end{align*}\n\t\n\t\item \textit{What is an eigenbasis for $A$?}\n\tThe columns of $S$ form an eigenbasis:\n\t\begin{equation*}\n\t\t\boxed{\n\t\t\t\left\{\n\t\t\t\t\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix},\n\t\t\t\t\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix},\n\t\t\t\t\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\n\t\t\t\right\}.\n\t\t}\n\t\end{equation*}\n\end{enumerate}\n\newpage\n\n\section*{4.}\n\textit{Let $A$ be the $3 \times 3$ matrix associated with the transformation\nthat orthogonally projects vectors onto the plane $2x + 3y - z = 0$. \\\nLet $B$ be the $3 \times 3$ matrix associated with the transformation that\nreflects vectors across the plane $x + y + z = 0$.}
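\nFor reference (these are not needed for the arguments below), both matrices can be written down explicitly from the normal vectors $\vec{n} = (2, 3, -1)^T$ and $\vec{m} = (1, 1, 1)^T$ of their planes:\n\begin{align*}\n\tA &= I - \frac{\vec{n}\vec{n}^T}{\norm{\vec{n}}^2}\n\t= \frac{1}{14}\n\t\begin{pmatrix}\n\t\t10 & -6 & 2 \\\n\t\t-6 & 5 & 3 \\\n\t\t2 & 3 & 13\n\t\end{pmatrix},\n\t\\\n\tB &= I - \frac{2\vec{m}\vec{m}^T}{\norm{\vec{m}}^2}\n\t= \frac{1}{3}\n\t\begin{pmatrix}\n\t\t1 & -2 & -2 \\\n\t\t-2 & 1 & -2 \\\n\t\t-2 & -2 & 1\n\t\end{pmatrix}.\n\end{align*}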
\\\\\nLet $B$ be the $3 \\times 3$ matrix associated with the transformation that\nreflects vectors across the plane $x + y + z = 0$.}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{Pick one of the following and find a basis for it:\n\t\\begin{itemize}\n\t\t\\item $\\text{nullspace}(AB)$\n\t\t\\item $\\text{range}(AB)$\n\t\\end{itemize}\n\t}\n\tThe range of $AB$ is the plane that $A$ projects onto. \\\\\n\tA basis for the range of $AB$ can be found using the normal vector of the\n\tplane,\n\t\\begin{equation*}\n\t\t\\vec{n} = \\begin{pmatrix} 2 \\\\ 3 \\\\ -1 \\end{pmatrix},\n\t\\end{equation*}\n\tto find two linearly independent vectors orthogonal to $\\vec{n}$ which lie\n\ton the plane:\n\t\\begin{equation*}\n\t\t\\boxed{\n\t\t\t\\left\\{\n\t\t\t\t\\begin{pmatrix} 1 \\\\ 0 \\\\ 2 \\end{pmatrix},\n\t\t\t\t\\begin{pmatrix} 0 \\\\ 1 \\\\ 3 \\end{pmatrix}\n\t\t\t\\right\\}.\n\t\t}\n\t\\end{equation*}\n\t\n\t\\item \\textit{Repeat the above instructions for:\n\t\\begin{itemize}\n\t\t\\item $\\text{nullspace}(BA)$\n\t\t\\item $\\text{range}(BA)$\n\t\\end{itemize}\n\t}\n\tSince the reflection $B$ is invertible, the nullspace of $BA$ consists of all the vectors that $A$ will\n\torthogonally project to $\\vec{0}$. \\\\\n\tThus, a basis for the nullspace of $BA$ is\n\t\\begin{equation*}\n\t\t\\left\\{ \\vec{n} \\right\\}\n\t\t=\n\t\t\\boxed{\n\t\t\t\\left\\{\n\t\t\t\t\\begin{pmatrix} 2 \\\\ 3 \\\\ -1 \\end{pmatrix}\n\t\t\t\\right\\}.\n\t\t}\n\t\\end{equation*}\n\t\n\t\\item \\textit{Find all of the following and briefly explain your\n\treasoning.}\n\t\\begin{itemize}\n\t\t\\item $\\text{rank}(AB) = 2$\n\t\t\\item $\\text{nullity}(AB) = 1$\n\t\t\\item $\\text{rank}(BA) = 2$\n\t\t\\item $\\text{nullity}(BA) = 1$\n\t\\end{itemize}\n\tThe images of both $AB$ and $BA$ are planes (the invertible reflection $B$ preserves dimension), thus the rank of both is $2$. 
\\\\\n\tBy the rank-nullity theorem, the nullities of $AB$ and $BA$ must be\n\t$3 - 2 = 1$.\n\\end{enumerate}\n\\newpage\n\n\\section*{5.}\n\\textit{Let $T$ be the reflection in $\\mathbb{R}^2$ across $y=x$.}\n\\begin{gather*}\n\t\\mathcal{A}\n\t=\n\t\\left\\{\n\t\t\\begin{pmatrix} \\cos(30 \\degree) \\\\ \\sin(30 \\degree) \\end{pmatrix},\n\t\t\\begin{pmatrix} -\\sin(30 \\degree) \\\\ \\cos(30 \\degree) \\end{pmatrix}\n\t\\right\\}\n\t\\\\\n\t\\mathcal{B}\n\t=\n\t\\left\\{\n\t\t\\begin{pmatrix} -\\sin(45 \\degree) \\\\ \\cos(45 \\degree) \\end{pmatrix},\n\t\t\\begin{pmatrix} \\cos(45 \\degree) \\\\ \\sin(45 \\degree) \\end{pmatrix}\n\t\\right\\}\n\\end{gather*}\n\\textit{Find the following and show your work.}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item\n\t\\begin{align*}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{A}}\n\t\t&=\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{A}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{S}}\n\t\t\\\\ &=\n\t\t\\begin{pmatrix}\n\t\t\t\\cos(30 \\degree) & -\\sin(30 \\degree) \\\\\n\t\t\t\\sin(30 \\degree) & \\cos(30 \\degree)\n\t\t\\end{pmatrix}^{-1}\n\t\t\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}\n\t\t\\begin{pmatrix}\n\t\t\t-\\sin(45 \\degree) & \\cos(45 \\degree) \\\\\n\t\t\t\\cos(45 \\degree) & \\sin(45 \\degree)\n\t\t\\end{pmatrix}\n\t\\end{align*}\n\t\n\t\\item\n\t\\begin{align*}\n\t\t\\based{T}{\\mathcal{A}}{\\mathcal{A}}\n\t\t&=\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{A}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{T}{\\mathcal{A}}{\\mathcal{S}}\n\t\t\\\\ &=\n\t\t\\begin{pmatrix}\n\t\t\t\\cos(30 \\degree) & -\\sin(30 \\degree) \\\\\n\t\t\t\\sin(30 \\degree) & \\cos(30 \\degree)\n\t\t\\end{pmatrix}^{-1}\n\t\t\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}\n\t\t\\begin{pmatrix}\n\t\t\t\\cos(30 \\degree) & -\\sin(30 \\degree) \\\\\n\t\t\t\\sin(30 \\degree) & \\cos(30 \\degree)\n\t\t\\end{pmatrix}\n\t\\end{align*}\n\t\n\t\\item\n\t\\begin{align*}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{B}}\n\t\t&=\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{B}}\n\t\t\\based{T}{\\mathcal{S}}{\\mathcal{S}}\n\t\t\\based{T}{\\mathcal{B}}{\\mathcal{S}}\n\t\t\\\\ &=\n\t\t\\begin{pmatrix}\n\t\t\t-\\sin(45 \\degree) & \\cos(45 \\degree) \\\\\n\t\t\t\\cos(45 \\degree) & \\sin(45 \\degree)\n\t\t\\end{pmatrix}^{-1}\n\t\t\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}\n\t\t\\begin{pmatrix}\n\t\t\t-\\sin(45 \\degree) & \\cos(45 \\degree) \\\\\n\t\t\t\\cos(45 \\degree) & \\sin(45 \\degree)\n\t\t\\end{pmatrix}\n\t\\end{align*}\n\\end{enumerate}\n\\newpage\n\n\\section*{6.}\n\\textit{Let $E$ be the $25 \\times 25$ matrix with ones on both diagonals (the main diagonal and the anti-diagonal) and\nzeros everywhere else.}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{For each vector, determine if it is an eigenvector of $E$.\n\tIf it is, find the corresponding eigenvalue. 
If it is not, give a brief\n\texplanation why.}\n\t\\begin{itemize}\n\t\t\\item \\textit{$\\vec{e}_1$ = 1 on top, 24 0s.}\n\t\t\\\\[\\baselineskip]\n\t\tNo, $E \\vec{e}_1$ will have 1 on the top and bottom with zeros\n\t\teverywhere else, which is not a scalar multiple of $\\vec{e}_1$.\n\t\t\n\t\t\\item \\textit{$\\vec{e}_2$ = 12 0s, 1 in the middle, 12 0s.}\n\t\t\\\\[\\baselineskip]\n\t\tYes, $E \\vec{e}_2 = \\vec{e}_2$, thus $\\boxed{\\lambda_2 = 1}$.\n\t\t\n\t\t\\item \\textit{$\\vec{e}_3$ = 1 on top, 23 0s, 1 on bottom.}\n\t\t\\\\[\\baselineskip]\n\t\tYes, $E \\vec{e}_3$ will have 2 on top, 23 0s, and 2 on the bottom,\n\t\twhich is $2 \\vec{e}_3$. Thus, $\\boxed{\\lambda_3 = 2}$.\n\t\t\n\t\t\\item \\textit{$\\vec{e}_4$ = 0, 1 on top, 21 0s, 1, 0 on bottom.}\n\t\t\\\\[\\baselineskip]\n\t\tYes, $E \\vec{e}_4 = 2 \\vec{e}_4$ by the same reasoning as for $\\vec{e}_3$, thus $\\boxed{\\lambda_4 = 2}$.\n\t\t\n\t\t\\item \\textit{$\\vec{e}_5$ = 1 on top, 23 0s, -1 on bottom.}\n\t\t\\\\[\\baselineskip]\n\t\tYes, $E \\vec{e}_5 = \\vec{0}$, thus $\\boxed{\\lambda_5 = 0}$.\n\t\\end{itemize}\n\t\n\t\\item \\textit{Create a complete algebraic and geometric multiplicity table\n\tfor the eigenvalues of $E$. Explain your reasoning.}\n\tGeneralizing $\\vec{e}_3$, $\\vec{e}_4$, and $\\vec{e}_5$ above: for each of the $12$ symmetric pairs of positions, the sum of the two standard basis vectors is an eigenvector with eigenvalue $2$ and their difference is an eigenvector with eigenvalue $0$, while the middle standard basis vector is an eigenvector with eigenvalue $1$. This gives $25$ linearly independent eigenvectors, so the algebraic and geometric multiplicities agree: $\\lambda = 2$ has multiplicity $12$, $\\lambda = 0$ has multiplicity $12$, and $\\lambda = 1$ has multiplicity $1$.\n\t\n\t\\item \\textit{Is $E$ diagonalizable? Justify.} \\\\[\\baselineskip]\n\tYes. $E$ is symmetric, and the $25$ linearly independent eigenvectors from part (b) form an eigenbasis.\n\t\n\t\\item \\textit{Is $E$ invertible? Justify.} \\\\[\\baselineskip]\n\tNo. Rows $i$ and $26 - i$ are identical for $i \\neq 13$, so row-reducing the matrix zeros out $12$ rows; equivalently, $0$ is an eigenvalue (e.g.\\ $E \\vec{e}_5 = \\vec{0}$).\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "355cd72c51f74867f379e93b463c93b57b0fc343", "size": 11268, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/exam2/main.tex", "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "usc-20202-math-225-39425/exam2/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/exam2/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.5754716981, "max_line_length": 77, "alphanum_fraction": 0.599485268, "num_tokens": 4753, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.672331699179286, "lm_q2_score": 0.8418256492357358, "lm_q1q2_score": 0.5659860691633679}}
{"text": "\\section{Monte Carlo EM Procedure}\r\n\r\nLet $X_{ij} = [x_i, x_j, x_{ij}]$ denote the features, $\\Delta_{ij} = [\\alpha_i, \\beta_j, \\gamma_i, u_i, v_j, s_i]$ denote the latent factors, and\r\n$\\Theta = [b, g_0, d_0,, c_0, G, D, H, \\lambda, \\eta, a_\\alpha, a_\\beta, a_\\gamma, A_u, A_v, A_s]$ denote the model parameters. We use the convention that $y = \\{y_{ij}\\}$, $X = \\{X_{ij}\\}$, $z = \\{z_{jn}\\}$ and so on. The EM algorithm seeks to find the $\\Theta$ that maximizes the incomplete data likelihood (marginalizing over latent factors):\r\n$$\r\n\\arg\\max_{\\Theta} \\Pr[y, w \\,|\\, \\Theta, X]\r\n$$\r\nLet $LL(\\Theta; \\Delta, z, y, w, X) = \\log(\\Pr[\\Delta, z, y, w \\,|\\, \\Theta, X])$ denote the log of the complete data likelihood. Let $\\hat{\\Theta}^{(t)}$ denote our current estimate of $\\Theta$ at the $t$th iteration. The EM algorithm iterate through the following two steps until convergence.\r\n\r\n\\begin{itemize}\r\n\\item {\\bf E-step:} Compute the sufficient statistics of $E_{\\Delta, z}[LL(\\Theta; \\Delta, z, y, w, X) \\,|\\, \\hat{\\Theta}^{(t)}]$ as a function of $\\Theta$, where the expectation is taken over the posterior distribution of $(\\Delta, z \\,|\\, \\hat{\\Theta}^{(t)}, y, w, X)$.\r\n\r\n\\item {\\bf M-step:} Find the $\\Theta$ that maximizes the expectation computed in the E-step.\r\n$$\r\n\\hat{\\Theta}^{(t+1)}= \\arg\\max_{\\Theta}\r\n E_{\\Delta, z}[LL(\\Theta; \\Delta, z, y, w, X) \\,|\\, \\hat{\\Theta}^{(t)}]\r\n$$\r\n\\end{itemize}\r\n\r\n\\subsection{Monte-Carlo E-Step}\r\n\r\nBecause $E_{\\Delta, z}[LL(\\Theta; \\Delta, z, y, w, X) \\,|\\, \\hat{\\Theta}^{(t)}]$ is not in closed form, we compute the Monte-Carlo mean based on $L$ samples generated by a Gibbs sampler. The Gibbs sampler repeats the following procedure for $L$ times.\r\n\r\n\\begin{enumerate}\r\n\\item Sample from $(\\alpha_i \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $i$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\\prime} b)\\gamma_i - \\beta_j - \r\n\tu_i^\\prime v_j - s_{i}^{\\prime} \\bar{z}_j \\\\\r\n\\textit{Var}[\\alpha_i|\\mbox{Rest}] & = \r\n\t( \\frac{1}{a_{\\alpha}} + \r\n\t\t\\sum_{j \\in \\mathcal{J}_i} \\frac{1}{\\sigma_{ij}^{2}} )^{-1} \\\\\r\nE[\\alpha_i|\\mbox{Rest}] & = \r\n\t\\textit{Var}[\\alpha_i|\\mbox{Rest}]\r\n\t( \\frac{g_0^{\\prime} x_i}{a_{\\alpha}} + \r\n\t\t \\sum_{j \\in \\mathcal{J}_i} \\frac{o_{ij}}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(\\beta_j \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $j$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\\prime} b) \\gamma_i - \\alpha_i - \r\n\tu_i^\\prime v_j - s_{i}^{\\prime} \\bar{z}_j \\\\\r\n\\textit{Var}[\\beta_j|\\mbox{Rest}] & = \r\n\t( \\frac{1}{a_{\\beta}} + \r\n\t\t\\sum_{i \\in \\mathcal{I}_j} \\frac{1}{\\sigma_{ij}^{2}} )^{-1} \\\\\r\nE[\\beta_j|\\mbox{Rest}] & =  \r\n\t\\textit{Var}[\\beta_j|\\mbox{Rest}]\r\n\t( \\frac{d_0^{\\prime} x_j}{a_{\\beta}} + \r\n\t\t \\sum_{i \\in \\mathcal{I}_j} \\frac{o_{ij}}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(\\gamma_i \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $i$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & =  y_{ij} - \\alpha_i - \\beta_j - u_i^\\prime v_j -\r\n\ts_i^\\prime \\bar{z}_j \\\\\r\n\\textit{Var}[\\gamma_i|\\mbox{Rest}] & =  \r\n\t( \\frac{1}{a_{\\gamma}} + \r\n\t  \\sum_{j \\in \\mathcal{J}_i} 
\r\n\t\t\t\\frac{(x_{ij}^{\\prime} b)^2}{\\sigma_{ij}^{2}} \r\n\t)^{-1} \\\\\r\nE[\\gamma_i|\\mbox{Rest}] & = \r\n\t\\textit{Var}[\\gamma_i|\\mbox{Rest}]\r\n\t( \\frac{ c_0^\\prime x_i }{a_{\\gamma}} + \r\n\t\t \\sum_{j \\in \\mathcal{J}_i} \r\n\t\t\t\\frac{o_{ij} (x_{ij}^{\\prime} b)}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(u_i \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $i$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & =  y_{ij} - (x_{ij}^{\\prime} b)\\gamma_i - \r\n\t\t\t\t\t\t\\alpha_i - \\beta_j - s_i^\\prime \\bar{z}_j \\\\\r\n\\textit{Var}[u_i|\\mbox{Rest}] & =  \r\n\t( A_u^{-1} + \r\n\t  \\sum_{j \\in \\mathcal{J}_i} \r\n\t\t\t\\frac{v_j v_j^{\\prime}}{\\sigma_{ij}^{2}} \r\n\t)^{-1} \\\\\r\nE[u_i|\\mbox{Rest}] & = \r\n\t\\textit{Var}[u_i|\\mbox{Rest}]\r\n\t( A_u^{-1} G x_i + \r\n\t\t \\sum_{j \\in \\mathcal{J}_i} \\frac{o_{ij} v_j}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(v_j \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $j$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & =  y_{ij} - (x_{ij}^{\\prime} b)\\gamma_i - \r\n\t\t\t\t\t\t\\alpha_i - \\beta_j - s_i^\\prime \\bar{z}_j \\\\\r\n\\textit{Var}[v_j|\\mbox{Rest}] & =  \r\n\t( A_v^{-1} + \r\n\t  \\sum_{i \\in \\mathcal{I}_j} \r\n\t\t\t\\frac{u_i u_i^{\\prime}}{\\sigma_{ij}^{2}} \r\n\t)^{-1} \\\\\r\nE[v_j|\\mbox{Rest}] & = \r\n\t\\textit{Var}[v_j|\\mbox{Rest}]\r\n\t( A_v^{-1} D x_j + \r\n\t\t \\sum_{i \\in \\mathcal{I}_j} \\frac{o_{ij} u_i}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(s_i \\,|\\, \\textrm{Rest})$, which is Gaussian, for all $i$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} & =  y_{ij} - (x_{ij}^{\\prime} b)\\gamma_i - \r\n\t\t\t\t\t\t\\alpha_i - \\beta_j - u_i^\\prime v_j \\\\\r\n\\textit{Var}[s_i|\\mbox{Rest}] & =  \r\n\t( A_s^{-1} + \r\n\t  \\sum_{j \\in \\mathcal{J}_i} \r\n\t\t\t\\frac{\\bar{z}_j \\bar{z}_j^{\\prime}}{\\sigma_{ij}^{2}} \r\n\t)^{-1} \\\\\r\nE[s_i|\\mbox{Rest}] & = \r\n\t\\textit{Var}[s_i|\\mbox{Rest}]\r\n\t( A_s^{-1} H x_i + \r\n\t\t \\sum_{j \\in \\mathcal{J}_i} \\frac{o_{ij} \\bar{z}_j}{\\sigma_{ij}^{2}} )\r\n\\end{split}\r\n\\end{equation}\r\n\r\n\\item Sample from $(z_{jn} \\,|\\, \\textrm{Rest})$, which is multinomial, for all $j$ and $n$. Let $z_{\\neg jn}$ denote $z$ with $z_{jn}$ removed. The probability of $z_{jn}$ being topic $k$ is given by\r\n\\begin{equation*}\r\n\\begin{split}\r\n&\\Pr[z_{jn} = k \\,|\\, z_{\\neg jn}, \\Delta, \\hat{\\Theta}^{(t)}, y, w, X] \\\\\r\n&\\propto \\Pr[z_{jn} = k, y \\,|\\, z_{\\neg jn}, \\Delta, \\hat{\\Theta}^{(t)}, w, X] \\\\\r\n&\\propto \\Pr[z_{jn} = k \\,|\\, w, z_{\\neg jn}, \\hat{\\Theta}^{(t)}] \\cdot \r\n  \t\t  \\prod_{i\\in I_j} \\Pr[y_{ij}\\,|\\,z_{jn} = k, z_{\\neg jn}, \r\n\t\t\t\t\t\t\t\t   \\Delta, \\hat{\\Theta}^{(t)}, X]\r\n\\end{split}\r\n\\end{equation*}\r\nLet\r\n$$\r\nZ_{jk\\ell} = \\sum_n \\mathbf{1}\\{z_{jn} = k \\mbox{ and } w_{jn} = \\ell\\}\r\n$$\r\ndenote the number of times word $\\ell$ belongs to topic $k$ in item $j$. Let $Z_{k\\ell} = \\sum_j Z_{jk\\ell}$, $Z_{k} = \\sum_{j\\ell} Z_{jk\\ell}$, and so on. We use $Z_{\\cdot}^{\\neg jn}$ to denote the count $Z_{\\cdot}$ with $z_{jn}$ removed from the summation. Assume $w_{jn} = \\ell$. 
Then,\r\n\\begin{equation*}\r\n\\begin{split}\r\n& \\Pr[z_{jn} = k \\,|\\, w, z_{\\neg jn}, \\hat{\\Theta}^{(t)}] \\\\\r\n&\t\\propto \\Pr[z_{jn} = k, w_{jn} = \\ell \\,|\\, \r\n\t\t\t\t\tw_{\\neg jn}, z_{\\neg jn}, \\hat{\\Theta}^{(t)}] \\\\\r\n&\t= \\Pr[w_{jn} = \\ell \\,|\\, \r\n\t\t\tw_{\\neg jn}, z_{jn} = k, z_{\\neg jn}, \\eta] ~\r\n\t  \\Pr[z_{jn} = k \\,|\\, z_{\\neg jn}, \\lambda] \\\\\r\n&\t= E[ \\Phi_{k\\ell} \\,|\\, w_{\\neg jn}, z_{\\neg jn}, \\eta] ~\r\n\t  E[ \\theta_{jk} \\,|\\, z_{\\neg jn}, \\lambda] \\\\\r\n& \t= \\frac{Z_{k\\ell}^{\\neg jn} + \\eta}\r\n\t\t\t {Z_{k}^{\\neg jn} + W \\eta}~\r\n\t  \\frac{Z_{jk}^{\\neg jn} + \\lambda_k}\r\n\t\t\t {Z_{j}^{\\neg jn} + \\sum_k \\lambda_k}\r\n\\end{split}\r\n\\end{equation*}\r\nNote that the denominator of the second term $(Z_{j}^{\\neg jn} + \\sum_k \\lambda_k)$ is independent of $k$. Thus, we obtain\r\n\\begin{equation}\r\n\\Pr[z_{jn} = k \\,|\\, \\mbox{Rest}]\r\n \\propto \\frac{Z_{k\\ell}^{\\neg jn} + \\eta}\r\n\t\t\t {Z_{k}^{\\neg jn} + W \\eta} ~ \r\n\t\t\t (Z_{jk}^{\\neg jn} + \\lambda_k) ~\r\n\t\t\t \\prod_{i\\in \\mathcal{I}_j} f_{ij}(y_{ij})\r\n\\end{equation}\r\nwhere $f_{ij}(y_{ij})$ is the probability density at $y_{ij}$ given the current values of $(x_{ij}^{\\prime}\\, b) \\gamma_i  + \\alpha_i + \\beta_j + u_i^\\prime v_j + s_{i}^{\\prime} \\, \\bar{z}_{j}$ and $\\sigma_{ij}^2$.\r\n\\begin{equation}\r\n\\begin{split}\r\n\\mbox{Let } o_{ij} \r\n\t&= y_{ij} - (x_{ij}^{\\prime}\\, b) \\gamma_i  - \r\n\t\t\\alpha_i - \\beta_j - u_i^\\prime v_j \\\\\r\n\\prod_{i\\in \\mathcal{I}_j} f_{ij}(y_{ij}) \r\n\t&\\propto \\exp\\left\\{ \r\n\t\t\t-\\frac{1}{2} \\sum_{i \\in \\mathcal{I}_j}\r\n\t\t\t\t\\frac{(o_{ij} -  s_{i}^{\\prime} \\, \\bar{z}_{j})^2}{\\sigma_{ij}^2}\r\n\t\t\\right\\} \\\\\r\n\t&\\propto \\exp\\left\\{\r\n\t\t\t\\bar{z}_{j}^\\prime B_j - \r\n\t\t\t\\frac{1}{2} \\bar{z}_{j}^\\prime C_j \\bar{z}_{j}\r\n\t\t\\right\\} \\\\\r\n\\mbox{where } & \r\n\tB_j = \\sum_{i \\in \\mathcal{I}_j} \\frac{o_{ij} s_i}{\\sigma_{ij}^{2}}\r\n\t\\mbox{ and }\r\n\tC_j = \\sum_{i \\in \\mathcal{I}_j} \\frac{s_i s_i^{\\prime}}{\\sigma_{ij}^{2}}\r\n\\end{split}\r\n\\end{equation}\r\n\\end{enumerate}\r\n\r\n\\subsection{M-Step}\r\n\r\nIn the M-step, we want to find the $\\Theta$ = $[b, g_0, c_0, d_0, G, D, H, \\lambda, \\eta, a_\\alpha, a_\\beta, a_\\gamma, A_u, A_v, A_s]$ that maximizes the expected complete data likelihood computed in the E-step.\r\n$$\r\n\\hat{\\Theta}^{(t+1)}= \\arg\\max_{\\Theta}\r\n E_{\\Delta, z}[LL(\\Theta; \\Delta, z, y, w, X) \\,|\\, \\hat{\\Theta}^{(t)}]\r\n$$\r\nwhere $- LL(\\Theta; \\Delta, z, y, w, X) =$\r\n{\\small\\begin{equation*}\r\n\\begin{split}\r\n& ~~   \\mbox{Constant} +\r\n\t\t\t\\frac{1}{2} \\sum_{ij} \\left( \\frac{1}{\\sigma_{ij}^{2}}\r\n\t\t\t(y_{ij}-\\alpha_i-\\beta_j- (x_{ij}^{\\prime}b) \\gamma_i\r\n\t\t\t\t- u_i^\\prime v_j - s_{i}^{\\prime} \\bar{z}_{j})^{2}\r\n\t\t\t+ \\log \\sigma_{ij}^{2} \\right) \\\\\r\n& ~~ + \\frac{1}{2 a_{\\alpha}} \\sum_{i}\r\n\t\t\t\t(\\alpha_i - g_{0}^{\\prime}x_i)^{2}\r\n\t\t\t\t+ \\frac{M}{2} \\log a_{\\alpha}\r\n\t\t+ \\frac{1}{2 a_{\\gamma}} \\sum_{i} \r\n\t\t\t\t(\\gamma_i - c_{0}^{\\prime}x_i)^{2}\r\n\t\t\t\t+ \\frac{M}{2} \\log a_{\\gamma}  \\\\\r\n& ~~\t+ \\frac{1}{2} \\sum_{i} \r\n\t\t\t\t(u_i - Gx_i)^{\\prime} A_u^{-1} (u_i - Gx_i)\r\n\t\t\t\t+ \\frac{M}{2} \\log(\\det A_u) \\\\\r\n& ~~\t+ \\frac{1}{2} \\sum_{j} \r\n\t\t\t\t(v_j - Dx_j)^{\\prime} A_v^{-1} (v_j - Dx_j)\r\n\t\t\t\t+ \\frac{N}{2} \\log(\\det A_v) \\\\\r\n& ~~\t+ \\frac{1}{2} \\sum_{i} 
\r\n\t\t\t\t(s_i - Hx_i)^{\\prime} A_s^{-1} (s_i - Hx_i)\r\n\t\t\t\t+ \\frac{M}{2} \\log(\\det A_s) \\\\\r\n& ~~ \t+ \\frac{1}{2 a_{\\beta}} \\sum_{j}  \r\n\t\t\t\t (\\beta_j - d_{0}^{\\prime}x_j)^{2}\r\n\t\t\t\t+ \\frac{N}{2} \\log a_{\\beta}  \\\\\r\n& ~~ \t+ N \\left( \\sum_k \\log\\Gamma(\\lambda_k) \r\n\t\t\t\t- \\log\\Gamma(\\textstyle\\sum_k \\lambda_k) \\right)\r\n\t\t\t\t+ \\sum_j \\left(\r\n\t\t\t\t\t\t\\log\\Gamma\\left(Z_j + \\textstyle\\sum_k \\lambda_k\\right) -\r\n\t\t\t\t\t\t\\sum_k \\log\\Gamma(Z_{jk} + \\lambda_k)\r\n\t\t\t\t  \\right) \\\\\r\n& ~~ \t+ K \\left( W \\log\\Gamma(\\eta) \r\n\t\t\t\t- \\log\\Gamma(W \\eta) \\right)\r\n\t\t\t\t+ \\sum_k \\left(\r\n\t\t\t\t\t\t\\log\\Gamma(Z_k + W \\eta) -\r\n\t\t\t\t\t\t\\sum_\\ell \\log\\Gamma(Z_{k\\ell} + \\eta)\r\n\t\t\t\t  \\right) \\\\\r\n\\end{split}\r\n\\end{equation*}}\r\n\r\n\\noindent The optimal $g_0$, $c_0$, $d_0$, $G$, $D$, $H$, $a_\\alpha$, $a_\\gamma$, $a_\\beta$, $A_u$, $A_v$ and $A_s$ can be obtained using the same regression procedure as that in the original RLFM.\r\n\\\\\r\n\r\n{\\bf Optimal $b$ and $\\sigma^2$:} Consider the Gaussian case, where $\\sigma_{ij}^2 = \\sigma^2$. Define\r\n$$\r\no_{ij} = y_{ij} - (\\alpha_i + \\beta_j + u_i^\\prime v_j + s_i^\\prime \\bar{z}_j)\r\n$$\r\nWe use $\\tilde{E}[\\cdot]$ to denote the Monte-Carlo expectation. Here, we want to find $b$ and $\\sigma^2$ that minimize\r\n\\begin{equation*}\r\n\\begin{split}\r\n&\t\\frac{1}{\\sigma^2} \\sum_{ij} \r\n\t\t\\tilde{E}[(o_{ij} - \\gamma_i(x_{ij}^{\\prime}b))^2]\r\n\t\t+ P \\log(\\sigma^2) \\\\\r\n&=\t\\frac{1}{\\sigma^2} \\sum_{ij} \\left(\r\n\t\t\t\\tilde{E}[o_{ij}^2] \r\n\t\t\t- 2\\tilde{E}[o_{ij}\\gamma_i] (x_{ij}^{\\prime}b)\r\n\t\t\t+ \\tilde{E}[\\gamma_i^2] (x_{ij}^{\\prime}b)^2\r\n\t\t\\right)\r\n\t\t+ P \\log(\\sigma^2) \\\\\r\n&=\t\\frac{1}{\\sigma^2} \\sum_{ij} \\left(\r\n\t\t\t\\tilde{E}[o_{ij}^2]\r\n\t\t\t- \\frac{(\\tilde{E}[o_{ij}\\gamma_i])^2}{\\tilde{E}[\\gamma_i^2]}\r\n\t\t\t+ \\tilde{E}[\\gamma_i^2] \\left(\r\n\t\t\t\t\t\\frac{\\tilde{E}[o_{ij}\\gamma_i]}{\\tilde{E}[\\gamma_i^2]}\r\n\t\t\t\t\t- x_{ij}^{\\prime} b\r\n\t\t\t\t\\right)^2\r\n\t\t\\right)\r\n\t\t+ P \\log(\\sigma^2) \\\\\r\n\\end{split}\r\n\\end{equation*}\r\nwhere $P$ is the total number of observed user-item pairs. The optimal $b$ can be found by weighted least squares regression with feature $x_{ij}$, response $\\frac{\\tilde{E}[o_{ij}\\gamma_i]}{\\tilde{E}[\\gamma_i^2]}$ and weight $\\tilde{E}[\\gamma_i^2]$. 
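As a concrete illustration, the following Python sketch carries out this weighted least-squares step together with the $\\sigma^2$ update described next. It is only a sketch under stated assumptions: the array names are hypothetical, and the Monte-Carlo averages $\\tilde{E}[o_{ij}\\gamma_i]$, $\\tilde{E}[\\gamma_i^2]$ and $\\tilde{E}[o_{ij}^2]$ are assumed to be already accumulated, one entry per observed pair $(i,j)$.\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\n# X   : P x d matrix of features x_ij (one row per observed pair)\r\n# Eog : length-P vector of Monte-Carlo averages E~[o_ij * gamma_i]\r\n# Eg2 : length-P vector of E~[gamma_i^2] (the regression weights)\r\n# Eo2 : length-P vector of E~[o_ij^2]\r\ndef m_step_b_sigma2(X, Eog, Eg2, Eo2):\r\n    r = Eog / Eg2            # regression responses\r\n    XtW = X.T * Eg2          # X' W, with W = diag(weights)\r\n    # Weighted least squares: b = (X' W X)^{-1} X' W r\r\n    b = np.linalg.solve(XtW @ X, XtW @ r)\r\n    # sigma^2: the objective sum over ij, divided by P, at the optimal b\r\n    obj = Eo2 - Eog**2 / Eg2 + Eg2 * (r - X @ b)**2\r\n    return b, obj.sum() / len(obj)\r\n\\end{verbatim}\r\n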
The optimal $\\sigma^2$ is the above summation (over $ij$) divided by $P$ with the optimal $b$ value.\r\n\\\\\r\n\r\n{\\bf Optimal $\\eta$:} We want to find $\\eta$ that minimizes\r\n\\begin{equation*}\r\n\\begin{split}\r\n& K \\left( W \\log\\Gamma(\\eta) \r\n\t\t\t\t- \\log\\Gamma(W \\eta) \\right)\r\n\t\t\t\t+ \\sum_k \\left(\r\n\t\t\t\t\t\t\\tilde{E}[\\log\\Gamma(Z_k + W \\eta)] -\r\n\t\t\t\t\t\t\\sum_\\ell \\tilde{E}[\\log\\Gamma(Z_{k\\ell} + \\eta)]\r\n\t\t\t\t  \\right) \\\\\r\n\\end{split}\r\n\\end{equation*}\r\nSince this optimization is just one dimensional and $\\eta$ is a nuisance parameter, we can simply try a number of fixed possible $\\eta$ values.\r\n\\\\\r\n\r\n{\\bf Optimal $\\lambda$:} We want to find $\\lambda_1$, ..., $\\lambda_K$ that minimize\r\n\\begin{equation*}\r\n\\begin{split}\r\n& N \\left( \\sum_k \\log\\Gamma(\\lambda_k) \r\n\t\t- \\log\\Gamma(\\textstyle\\sum_k \\lambda_k) \\right) \\\\\r\n&\t\t+ \\sum_j \\left(\r\n\t\t\t\t\\tilde{E}[\\log\\Gamma\\left(Z_j + \\textstyle\\sum_k \\lambda_k\\right)]\r\n\t\t\t\t- \\sum_k \\tilde{E}[\\log\\Gamma(Z_{jk} + \\lambda_k)]\r\n\t\\right)\\\\\r\n\\end{split}\r\n\\end{equation*}\r\n{\\bf Question: How to optimize this efficiently?} What are the sufficient statistics? Storing Monte-Carlo samples in memory for $Z_{jk}$ may not be feasible. Any approximation? For now, I just assume $\\lambda_k = \\lambda_0$, a single scalar, and search through a fixed set of points (which is also assumed in the first Gibbs-sampling-based LDA paper by Griffiths, 2004).\r\n\r\n{\\bf Comment:} I think assuming the same $\\lambda$ should work fine with a lot of document data; in fact, we can select $\\eta$ and $\\lambda_0$ through cross-validation in the first implementation to simplify things.\r\n\r\n{\\bf Comment:} The Gibbs sampler here looks trivial, and I also feel it would be better behaved than the sampler we had in RLFM. However, one drawback is the conditional\r\ndistribution of $s_i$, which involves inverting a matrix whose dimension equals the number of topics. In general, we may want to try a few thousand topics with LDA; we may have to break $s_i$ into blocks of small chunks and sample each block one at a time conditional on the others. 
However, this does not seem to be a worry in the first implementation, where we could try a few hundred topics in the LDA.\r\n", "meta": {"hexsha": "502d09db66a8f31cc04fe27e00feb12f4987d18c", "size": 13461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/LDA-RLFM/doc/fitting.tex", "max_stars_repo_name": "beechung/Latent-Factor-Models", "max_stars_repo_head_hexsha": "bda67b6fab8fa3a4219d5360651d9105e006a8c7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 86, "max_stars_repo_stars_event_min_datetime": "2015-02-02T21:49:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T08:24:53.000Z", "max_issues_repo_path": "src/LDA-RLFM/doc/fitting.tex", "max_issues_repo_name": "TotallyBullshit/Latent-Factor-Models", "max_issues_repo_head_hexsha": "3815cbb311da8819b686661ce7007a7cb62e0f7a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2015-05-05T19:40:11.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-31T01:14:49.000Z", "max_forks_repo_path": "src/LDA-RLFM/doc/fitting.tex", "max_forks_repo_name": "TotallyBullshit/Latent-Factor-Models", "max_forks_repo_head_hexsha": "3815cbb311da8819b686661ce7007a7cb62e0f7a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2015-01-26T05:13:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-24T05:37:19.000Z", "avg_line_length": 45.1711409396, "max_line_length": 396, "alphanum_fraction": 0.5770745116, "num_tokens": 5529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256313782276, "lm_q2_score": 0.6723317123102956, "lm_q1q2_score": 0.5659860682112196}}
{"text": "%!TEX root = ../../report.tex\n\n\\subsubsection{L-Systems} % (fold)\n\\label{ssub:l_systems}\n\nLindenmayer Systems (L-Systems) are a class of string rewriting mechanisms, originally developed by Lindenmayer as a mathematical theory for plant development.\n\nOne L-System is a type of a formal grammar and a string rewriting system that is capable of describe the behaviour of plant cells and model the growth processes of plant development.\n\nIt consists of two different parts, one axiom and a set of production rules. The axiom is the starting point of the system, acting as a seed. Then is't applied in this seed the set of production rules, that change the initial string and producing other strings.\nThis is an iterative process, so after the production of a larger set of strings, the rules can be applied to each one of them wish grows the size of the set even larger.\n\nThis L-Systems are used to produce natural growth of vegetation (Figure~\\ref{fig:trees}), and the generation of Fractals. \n\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=0.95\\textwidth]{img/Theory/L_Systems/Dragon_trees.jpg}\n    \\caption{Trees with L-Systems}\n    \\label{fig:trees}\n\\end{figure}\n\n\nIn this process, each symbol is associated with a production rule. For instance having $\\{F, +, -\\}$ for the alphabet and \\emph{production} $\\{F \\rightarrow\n F+F--F+F\\}$. From a starting axiom \\emph{aba}, and the application of the rules we have:\\\\\n\\begin{equation} \\label{eq:seed}\nF\\\\\n\\end{equation}\n\\begin{equation} \\label{eq:step1}\nF+F--F+F\\\\\n\\end{equation}\n\\begin{equation} \\label{eq:step2}\nF+F--F+F \\; + \\; F+F--F+F \\;- \\;- \\;F+F--F+F \\;+ \\;F+F--F+F\\\\\n\\end{equation}\n\n%\\begin{align}\n%\\begin{split}\n%F\\\\\n%F+F--F+F\\\\\n%F+F--F+F \\; + \\; F+F--F+F \\;- \\;- \\;F+F--F+F \\;+ \\;F+F--F+F\\\\\n%\\end{split}\n%\\end{align}\n%\\\\\n\\\\\n\\\\\nThis is an example of the evolution of one system where the production is applied  in (\\ref{eq:seed}) that turns into $F+F--F+F$. In Note that the space between the symbols are just for readability.\n\nAll the symbols are assigned with a geometric meaning. The notion of a turtle with a pen, as proposed in \\cite{abelson1982aa}, with the symbols being interpreted as moving instructions to the turtle, is a simple way to understand. If ``F'' means forward and the symbols ``+'' and ``-'' are interpreted as rotations counter-clockwise and clockwise respectively by a predefined angle. 
\n\nUsing the given example and setting the rotation angle to $60^{\\circ}$, the result is Figure~\\ref{fig:kockLS}.\n%$\\bigodot \\; \\bigodot$\n\n\\begin{figure}[htbp]\n   \\centering\n   \\includegraphics[width=0.75\\textwidth]{img/Theory/L_Systems/koch.png}\n   \\caption{Koch curve generated by the L-System of the example}\n   \\label{fig:kockLS}\n\\end{figure}\n\n\n% This concept of the turtle can be considered also in 3D.\n\n\n\n% subsubsection l_systems (end)", "meta": {"hexsha": "8943ad83ce66a4bc7f049df3df57dd0075dc8667", "size": 2784, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/overview/3-L-Systems.tex", "max_stars_repo_name": "arturalkaim/ProceduralGeneration", "max_stars_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/overview/3-L-Systems.tex", "max_issues_repo_name": "arturalkaim/ProceduralGeneration", "max_issues_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/overview/3-L-Systems.tex", "max_forks_repo_name": "arturalkaim/ProceduralGeneration", "max_forks_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.8307692308, "max_line_length": 383, "alphanum_fraction": 0.7219827586, "num_tokens": 785, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8056321866478979, "lm_q1q2_score": 0.5659808263251724}}
{"text": "\\begin{latin}\n\\textbf{Solution}: In merge sort, the $n$ element array is divided into two halves each with $\\frac{n}{2}$ elements (for $n$ = even), or one with $\\dfrac{n}{2}$ and the other with $\\frac{n}{2}+1$ elements. The two halves are sorted and merged. The algorithm for performing merge sort is as follows:\n\\begin{figure}[h!]\n\\centerline{ \\includegraphics[width=12cm,height=18cm]{2.jpg}}\n%\\caption{ }\n%\\label{1.jpg}      \n\\end{figure}\\\\\n\nThe algorithm Merge$(\\min, m, \\max)$ is performed in $O(n)$. The algorithm mergeSort $(int\n\\min, int \\max)$ of $n$ element becomes half when it is called recursively. Thus, the whole algorithm\ntakes\n\\begin{align*}\nT(n) &\\leq 2 T (\\frac{n}{2})+cn \\\\\n&\\leq 2 \\bigg[ 2 T (\\frac{n}{2^2}) + c \\frac{n}{2} \\bigg] + cn\\\\\nT(n) &\\leq 2^2 T (\\frac{n}{2^2})+2cn \\\\\n\\cdots \\\\\n\\cdots \n\\end{align*}\n\\begin{align*}\n&T(n)\\leq 2^i T (\\frac{n}{2^i})+icn \\\\\n& if ~~2^i=n, i=\\log_2 n \\\\\n&T(n) \\leq  2^i T(1) + cn \\log_2 n\n\\end{align}\n$T(1) = 1$ (time required to sort a list of one element).\\\\\nThus, the complexity of merge sort is $O(n \\log_2 n)$.\\\\\n$3.$ Prove that the k clique problem is $NP$ complete.\\\\\n\\textbf{Solution:}\n\\begin{Definition}\nGiven a graph $G = {V, E}$ and an integer $k$, check whether there exists a sub-graph $C$\nof $G$ which contains $k$ number of vertices.\n\\end{Definition}\n(The max clique problem is fi nding a clique with largest number of vertices. It is an $NP$ hard\nproblem.)\\\\\nTo prove that clique is $NP$ complete, we need to prove:\\\\\n\\textbf{A. Clique is NP}\\\\\n$\\square$ Check whether $C$ contains $k$ number of vertices. This can be done in $O(| C |)$.\\\\\n$\\square$ Check whether $C$ is a complete sub-graph. This can be done in $kC_2$ steps.\\\\\nThus, checking whether $C$ is a clique of $G$ or not can be performed in polynomial time.\nA graph with $n$ number of vertices can have $2n$ number of such vertex combination. For each\nof the combinations, the same checking algorithm is performed. Among these, the clique with $k$\nvertices is the $ k$-clique of $G$. So, a $k$-clique algorithm is carried out by a non-deterministic Turing\nmachine in polynomial time. Hence, it is in $NP$.\\\\\n\\textbf{B. Reduce $3-SAT$ to $k$-clique}\\\\\n\\textbf{Reduction:}\\\\\n$\\square$ For each clause of a Boolean assignment, create distinct vertices for each literal. If a\nBoolean assignment contains $C$ clauses, then the number of vertices is $3C$. \\\\\n$\\square$ Join the vertices of the clauses in such a way that\\\\\n \u2013 Two vertices of the same clause cannot be joined.\\\\\n\u2013 Two vertices whose literals are the negation of the other cannot be joined.\n\\begin{exa}\nConsider $F=(X_1 \\vee X_2 \\vee Y_1) \\wedge (X_3 \\vee \\neg Y_1 \\vee \\neg Y_2) \\wedge (X_4 \\vee X_5 \\vee \\neg Y_2) $ \\\\\nThe graph is \\\\\n\\begin{figure}[h!]\n\\centerline{ \\includegraphics[width=13cm,height=7cm]{1.jpg}}\n%\\caption{ }\n%\\label{1.jpg}      \n\\end{figure}\\\\\nIf it can be proved that $G$ has a $k$-clique if and only if $F$ is satisfi able, then the reduction is correct.\n\\end{exa}\n$\\square$ \\textbf{If F is satisfi able, G has k-clique: }If the $3-SAT$ instance $F$ is TRUE, then each clause is TRUE, which means every clause has a TRUE literal. By selecting a corresponding vertex to a TRUE literal in each clause, a clique in $G$ of size $k$ is constructed, where $k $ is the number of clauses of the $3-SAT$ instance. 
This is indeed a clique: if an edge were missing, it would mean that our truth assignment effectively set something to be true and false at the same time ($Y_1$ and $\\neg Y_1$)$!$\n\\\\\n$\\square$ \\textbf{If $G$ has a $k$-clique, $F$ is satisfiable:} Assume that there is a clique of size $k$ in $G$, where $k$ is the number of clauses in $F$. It is to be proved that there must be a truth assignment that satisfies the given Boolean formula. A clique is a complete sub-graph, so every pair of vertices in the clique is joined by an edge. Since two vertices of the same clause are never joined, the clique must contain exactly one vertex from each of the $k$ clauses of $F$. Since a literal and its negation are never joined, the selected literals are mutually consistent, and we can set all of them to TRUE. This truth assignment satisfies every clause of $F$, and hence $F$ itself.\\\\\nThis reduction is possible in polynomial time. Hence, clique is $NP$ complete.\n\\section*{Multiple Choice Questions}\n$1.$ Worst-case time complexity is denoted by\nwhich notation? \\\\\na) Big oh notation\\\\\nb) Big omega notation\\\\\nc) Theta notation\\\\\nd) Little omega notation\\\\\n$2.$ Best-case time complexity is denoted by which\nnotation?\\\\\na) Big oh notation\\\\\nb) Big omega notation\\\\\nc) Theta notation\\\\\nd) Little omega notation\\\\\n$3.$ If $f(n)=O(g(n))$ and $f(n)=\\Omega(g(n))$, then which is true?\\\\\na) $f(n)=\\omega (g(n))$~~~~~b) $f(n)=o(g(n))$ \\\\\nc) $f(n)=\\Theta(g(n))$~~~~~~d) $f(n)=\\theta(g(n))$\\\\\n$4.$ The problem which results in \u2018yes\u2019 or \u2018no\u2019 is a\\\\\na) Decision problem\\\\\nb) Optimization problem\\\\\nc) Search problem\\\\\nd) Functional problem\\\\\n$5.$ Which type of problem is the shortest path\nalgorithm?\\\\\na) Decision problem\\\\\nb) Optimization problem\\\\\nc) Search problem\\\\\nd) Functional problem\\\\\n$6.$ Which is true for the $P$ class problem?\\\\\na) The number of steps (or time) required\nto complete the algorithm is a polynomial\nfunction of $n$.\\\\\nb) It is computable by a deterministic\nTuring machine in polynomial time.\\\\\nc) It contains all sets in which the membership\nmay be decided by an algorithm\nwhose running time is bounded by a\npolynomial.\\\\\nd) All of these\\\\\n\\textbf{Answers:} $1.$a   \\quad   $2.$b   \\quad   3.c  \\quad  4.a   \\quad  5.b      \\quad 6.d\n\\section*{GATE Questions}\n1. For problems $X$ and $Y$, $Y$ is $NP$ complete and $X$ reduces to $Y$ in polynomial time. Which of the following is true? \\\\\na) If $X$ can be solved in polynomial time, then so can $Y$.\\\\\nb) $X$ is $NP$ complete\\\\\nc) $X$ is $NP$ hard\\\\\nd) $X$ is in $NP$, but not necessarily $NP$ complete.\\\\\n2. Which of the problems is not $NP$ hard?\\\\\na) Hamiltonian circuit problem \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad b) The $0/1$ knapsack problem \\\\\nc) Finding bi-connected components of a graph \\quad  d) The graph colouring problem \\\\\n3. Ram and Shyam have been asked to show that a certain problem $\\Pi$ is $NP$ complete. Ram shows\na polynomial time reduction from the $3$-SAT problem to $\\Pi$, and Shyam shows a polynomial time\nreduction from $\\Pi$ to $3$-SAT. 
Which of the following can be inferred from these reductions?\\\\\na) $\\Pi$ is $NP$ hard but not $NP$ complete     \\quad \\quad b) $\\Pi$ is in $NP$, but is not $NP$ complete\\\\\nc) $\\Pi$ is $NP$ complete    \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad  d) $\\Pi$ is neither $NP$ hard nor in $NP$\\\\\n4. Nobody knows yet if $P = NP$. Consider the language $L$ defined as follows.\n\\begin{center}\n$L=\\begin{cases}\n(0+1)^* & \\text{if } P=NP \\\\\n\\varphi & \\text{otherwise}\n\\end{cases}$\n\\end{center}\nWhich of the following statements is true?\\\\\na) $L$ is recursive\\\\\nb) $L$ is recursively enumerable but not recursive \\\\\nc) $L$ is not recursively enumerable \\\\\nd) Whether $L$ is recursive or not will be known after we find out if $P = NP$\n\n\\end{latin}\n", "meta": {"hexsha": "f0ea62922831510136bf58497408c7819331108e", "size": 7213, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "SOLMAZ215/chapter1.tex", "max_stars_repo_name": "SOLMAZ215/PNU_3991", "max_stars_repo_head_hexsha": "87ca526305d5e13e2fa479f4712ea99bb6a2448e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-07T17:04:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-07T17:04:45.000Z", "max_issues_repo_path": "chapter1.tex", "max_issues_repo_name": "SOLMAZ215/PNU_3991", "max_issues_repo_head_hexsha": "87ca526305d5e13e2fa479f4712ea99bb6a2448e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter1.tex", "max_forks_repo_name": "SOLMAZ215/PNU_3991", "max_forks_repo_head_hexsha": "87ca526305d5e13e2fa479f4712ea99bb6a2448e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.0610687023, "max_line_length": 797, "alphanum_fraction": 0.6938860391, "num_tokens": 2230, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6791786991753929, "lm_q2_score": 0.8333245994514084, "lm_q1q2_score": 0.5659763174462629}}
{"text": "%\\documentclass[aps,showpacs,prb,floatfix,twocolumn]{revtex4}\n\\documentclass[aps,prb,floatfix,epsfig,twocolumn,showpacs,preprintnumbers]{revtex4}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\usepackage{amsmath,amssymb,graphicx,bm,epsfig}\n\\usepackage{color}\n\n\\newcommand{\\eps}{\\epsilon}\n\\newcommand{\\vR}{{\\mathbf{R}}}\n\\renewcommand{\\vr}{{\\mathbf{r}}}\n\\newcommand{\\hr}{{\\hat{\\textbf{r}}}}\n\\newcommand{\\vk}{{\\mathbf{k}}}\n\\newcommand{\\vdelta}{{\\mathbf{\\delta}}}\n\\newcommand{\\vK}{{\\mathbf{K}}}\n\\newcommand{\\vq}{{\\mathbf{q}}}\n\\newcommand{\\vQ}{{\\mathbf{Q}}}\n\\newcommand{\\vPhi}{{\\mathbf{\\Phi}}}\n\\newcommand{\\vS}{{\\mathbf{S}}}\n\\newcommand{\\cG}{{\\cal G}}\n\\newcommand{\\cF}{{\\cal F}}\n\\newcommand{\\cT}{{\\cal T}}\n\\newcommand{\\cH}{{\\cal H}}\n\\newcommand{\\cJ}{{\\cal J}}\n\\newcommand{\\cD}{{\\cal D}}\n\\newcommand{\\cU}{{\\cal U}}\n\\newcommand{\\cL}{{\\cal L}}\n\\newcommand{\\Tr}{\\mathrm{Tr}}\n\\renewcommand{\\a}{\\alpha}\n\\renewcommand{\\b}{\\beta}\n\\newcommand{\\g}{\\gamma}\n\\renewcommand{\\d}{\\delta}\n\\newcommand{\\npsi}{\\underline{\\psi}}\n\\renewcommand{\\Im}{\\textrm{Im}}\n\\renewcommand{\\Re}{\\textrm{Re}}\n\\newcommand{\\cA}{{\\cal A}}\n\n\n\n\\begin{document}\n\n\\title{Some notes on LAPW}\n\\author{Kristjan Haule}\n\\affiliation{Department of Physics, Rutgers University, Piscataway, NJ 08854, USA}\n\\date{\\today}\n\n%\\begin{abstract}\n%\\end{abstract}\n\\pacs{71.27.+a,71.30.+h}\n\\date{\\today}\n\\maketitle\n\n\\begin{widetext}\nThe LAPW basis takes the form:\n\\begin{eqnarray}\n&& \\chi_{\\vk+\\vK}(\\vr) = \\frac{1}{\\sqrt{V}} e^{i(\\vk+\\vK)\\vr}\n=\\frac{4\\pi i^l}{\\sqrt{V}}e^{i(\\vk+\\vK)\\vr_\\alpha}Y_{lm}^*(R(\\hat{\\vk}+\\hat{\\vK}))j_l(|\\vk+\\vK||\\vr-\\vr_\\alpha|)Y_{lm}(R(\\vr-\\vr_\\alpha))\n  \\qquad interstitial\\\\\n&& \\chi_{\\vk+\\vK}(\\vr) = \\left( a_{lm} u_l(|\\vr-\\vr_\\alpha|) + b_{lm}\n  \\dot{u}_l(|\\vr-\\vr_\\alpha|)\\right)\n  Y_{lm}(R(\\hat{\\vr}-\\hat{\\vr_\\alpha}))\\qquad MT-sphere\n\\end{eqnarray}\nThe matching condition at the MT-sphere $S$ gives\n\\begin{eqnarray}\n\\left(\n\\begin{array}{cc}\nu_l(S) & \\dot{u}_l(S)\\\\\n\\frac{d}{dr} u_l(S) & \\frac{d}{dr} \\dot{u}_l(S)\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\na_{lm}\\\\\nb_{lm}\n\\end{array}\n\\right)=\n\\frac{4\\pi i^l}{\\sqrt{V}}e^{i(\\vk+\\vK)\\vr_\\alpha}Y_{lm}^*(R(\\hat{\\vk}+\\hat{\\vK}))\n\\left(\n\\begin{array}{c}\nj_l(|\\vk+\\vK|S)\\\\\n\\frac{d}{dr} j_l(|\\vk+\\vK|S)\n\\end{array}\n\\right)\n\\end{eqnarray}\nwith the solution\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\na_{lm}\\\\\nb_{lm}\n\\end{array}\n\\right)=\n\\frac{4\\pi i^l}{\\sqrt{V}}e^{i(\\vk+\\vK)\\vr_\\alpha}Y_{lm}^*(R(\\hat{\\vk}+\\hat{\\vK}))\n%\n\\left(\n\\begin{array}{cc}\n\\frac{d}{dr} \\dot{u}_l(S)  & -\\dot{u}_l(S)\\\\\n-\\frac{d}{dr} u_l(S) & u_l(S)       \n\\end{array}\n\\right)\n\\frac{1}{u_l(S) \\frac{d}{dr} \\dot{u}_l(S)-\\dot{u}_l(S) \\frac{d}{dr} u_l(S)}\n%\n\\left(\n\\begin{array}{c}\nj_l(|\\vk+\\vK|S)\\\\\n\\frac{d}{dr} j_l(|\\vk+\\vK|S)\n\\end{array}\n\\right)\n\\end{eqnarray}\n\nThe two solutions satisfy the following equations\n\\begin{eqnarray}\n&&\\left(-\\frac{d^2}{dr^2}+\\frac{l(l+1)}{r^2}+V_{KS}(r)-\\varepsilon \\right) r u_l(r)=0\\\\\n&&\\left(-\\frac{d^2}{dr^2}+\\frac{l(l+1)}{r^2}+V_{KS}(r)-\\varepsilon \\right)r \\dot{u}_l(r)= r u_l(r)\n\\end{eqnarray}\nWe multiply the first equation by $r\\dot{u}_l(r)$ and the second by $ru_l(r)$ to 
obtain\n\\begin{eqnarray}\n\\int_0^S dr\\left\\{ r\\dot{u}_l(r) \\left(-\\frac{d^2}{dr^2}\\right) r u_l(r)\n-r u_l(r)\n\\left(-\\frac{d^2}{dr^2}\\right)r \\dot{u}_l(r)\\right\\}=-\\int_0^S dr r^2 u_l(r)u_l(r)\n\\end{eqnarray}\nIntegration by parts, together with the normalization $\\int_0^S u_l(r)^2 r^2 dr=1$ on the right-hand side, gives\n\\begin{eqnarray}\n\\left[\n- r\\dot{u}_l(r) \\frac{d}{dr} \\left(r u_l(r)\\right)\n+r u_l(r) \\frac{d}{dr} \\left(r \\dot{u}_l(r)\\right)\\right]_0^S\n=-1\n\\end{eqnarray}\nwhich finally leads to\n\\begin{eqnarray}\n\\dot{u}_l(S) \\frac{d}{dr} u_l(S)\n-u_l(S) \\frac{d}{dr} \\dot{u}_l(S) =\\frac{1}{S^2}\n\\end{eqnarray}\n\nWe can then simplify the solution for $a_{lm}$ and $b_{lm}$ to \n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\na_{lm}\\\\\nb_{lm}\n\\end{array}\n\\right)=\n\\frac{4\\pi i^l}{S^2 \\sqrt{V}}e^{i(\\vk+\\vK)\\vr_\\alpha}Y_{lm}^*(R(\\hat{\\vk}+\\hat{\\vK}))\n%\n\\left(\n\\begin{array}{c}\n\\dot{u}_l(S)\\frac{d}{dr} j_l(|\\vk+\\vK|S)-\\frac{d}{dr} \\dot{u}_l(S) j_l(|\\vk+\\vK|S)\\\\\n\\frac{d}{dr} u_l(S) j_l(|\\vk+\\vK|S)- u_l(S) \\frac{d}{dr} j_l(|\\vk+\\vK|S)\n\\end{array}\n\\right)\n\\end{eqnarray}\nThis equation is implemented in Wien2k, and also in both the dmft1 and\ndmft2 steps.\n\n\nTo compute the projector, we need the overlap between a localized function\n$\\phi(r) Y_L(\\vr)$, and Kohn-Sham states\n\\begin{eqnarray}\n\\cU^{\\vk,\\vr_\\alpha}_{i,m}=\\langle \\phi_l Y_{lm}|\\psi_{\\vk i}\\rangle  =\n\\sum_\\vK C^\\vk_{i\\vK} \\langle \\phi_l(|\\vr-\\vr_\\alpha|) Y_{lm}(R(\\hat{\\vr}-\\hat{\\vr}_\\alpha))|\\chi_{\\vk+\\vK}(\\vr)\\rangle\n\\end{eqnarray}\nIf the function $\\phi(r)$ extends sufficiently outside its MT-sphere, the\noverlap $\\cU^{\\vk,\\vr_\\alpha}_{i,m}$ will have non-zero contributions\nfrom all other MT-spheres. However, we will use only the envelope\nfunction outside its center sphere, because the increased charge in\nthe neighboring spheres really should not be counted here as a charge\ncontribution to the $\\phi(r)$ function.\n\nTherefore we have only two contributions. 
Inside MT-sphere we have\n\\begin{eqnarray}\n\\langle \\phi_l(|\\vr-\\vr_\\alpha|) Y_{lm}(\\hat{\\vr}-\\hat{\\vr}_\\alpha)|\\chi_{\\vk+\\vK}(\\vr)\\rangle  =\n\\sum_\\kappa a^\\kappa_{lm} \\int_0^S \\phi(r)  u_l^\\kappa(r) r^2 dr\n\\end{eqnarray}\nand outside MT-sphere we get\n\\begin{eqnarray}\n\\langle \\phi_l(|\\vr-\\vr_\\alpha|) Y_{lm}(R(\\hat{\\vr}-\\hat{\\vr}_\\alpha))|\\chi_{\\vk+\\vK}(\\vr)\\rangle  =\n\\frac{4\\pi  i^l}{\\sqrt{V}}  e^{i(\\vk+\\vK)\\vr_\\alpha}Y_{lm}^*(R(\\hat{\\vk}+\\hat{\\vK}))\n\\int_S^{S_2} \\phi_l(r) j_l(|\\vk+\\vK|r)r^2 dr\n\\end{eqnarray}\n\n\n\\section{Free energy and Total Energy}\n\n\nThe equation for the total energy is\n\\begin{eqnarray}\nE = \\Tr(H_0 G) + \\frac{1}{2}\\Tr(\\Sigma G) - \\Phi^{DC}[n_{loc}] + \\Phi^H[\\rho]+\\Phi^{xc}[\\rho]\n\\end{eqnarray}\nwhere\n$$H_0 = -\\nabla^2 + \\delta(\\vr-\\vr')V_{ext}(\\vr)$$\nWe typically evaluate it in the following way\n\\begin{eqnarray}\nE = \\Tr((-\\nabla^2+V_{ext}+V_{H}+V_{xc}) G) + \\frac{1}{2}\\Tr(\\Sigma G)-\\Phi^{DC}[\\rho_{loc}] \n-\\Tr((V_H+V_{xc})\\rho) + \\Phi^H[\\rho]+\\Phi^{xc}[\\rho]\n\\end{eqnarray}\nNamely, we use the Green's function of the solid to evaluate:\n\\begin{eqnarray}\nE_1 = \\Tr((-\\nabla^2+V_{ext}+V_{H}+V_{xc}) G) -\\Tr((V_H+V_{xc})\\rho) + \\Phi^H[\\rho]+\\Phi^{xc}[\\rho]\n\\end{eqnarray}\nand the impurity to evaluate\n\\begin{eqnarray}\nE_2 =  \\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp})-\\Phi^{DC}[\\rho_{imp}] \n\\end{eqnarray}\nNotice that $\\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp})$ is not evaluated as\na Matsubara sum, but we rather compute it from probabilities of atomic\nstates, i.e.,\n\\begin{eqnarray}\n \\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp}) = \\sum_m P_m E_m - \\sum_\\alpha\n \\varepsilon_{imp}^\\alpha n_{imp}^\\alpha\n\\end{eqnarray}\n\n\n\nThe free energy functional is\n\\begin{eqnarray}\n\\Gamma[G] = \\Tr\\log G-\\Tr\\log((G_0^{-1}-G^{-1})G) + \\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]+\\Phi^{DMFT}[G_{loc}]-\\Phi^{DC}[\\rho_{loc}]\n\\end{eqnarray}\nhence stationarity gives\n\\begin{eqnarray}\nG^{-1}-G_0^{-1} + V_H + V_{xc}+\\Sigma_{DMFT}-V_{dc}=0\n\\end{eqnarray}\nand hence\n\\begin{eqnarray}\nF = \\Tr\\log G\n-\\Tr(\\Sigma G) +\n\\Tr(V_{dc}\\rho_{loc}) + \\Phi^{DMFT}[G_{loc}]-\\Phi^{DC}[\\rho_{loc}]\n-\\Tr((V_H + V_{xc})\\rho)+\n\\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]\n\\end{eqnarray}\nSince $F_{imp}$ contains $\\Phi^{DMFT}$, i.e,\n\\begin{eqnarray}\nF_{imp} = \\Tr\\log G_{imp} - \\Tr(\\Sigma_{imp} G_{imp}) + \\Phi^{DMFT}[G_{imp}]  \n\\end{eqnarray}\nwe can write\n\\begin{eqnarray}\nF = \\Tr\\log(G)-\\Tr\\log(G_{loc})+F_{imp}+\\Tr(V_{dc} \\rho_{loc}) -\n\\Phi^{DC}[\\rho_{loc}]\n-\\Tr((V_H + V_{xc})\\rho)+\n\\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]  \n\\end{eqnarray}\nwhere\n$$F_{imp} = E_{imp}-T S_{imp}$$\nand\n$$E_{imp}=\n\\Tr((\\Delta+\\varepsilon_{imp}-\\omega_n\\frac{\\partial\\Delta}{\\partial\\omega_n})\nG_{imp}) + \\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp}) -T S_{imp}\n$$\nHence\n\\begin{eqnarray}\nF+T S_{imp} = \\Tr\\log(G)-\\Tr\\log(G_{loc})+\n%\n\\Tr((\\Delta+\\varepsilon_{imp}-\\omega_n\\frac{\\partial\\Delta}{\\partial\\omega_n})\nG_{imp}) + \\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp})\n%\n+\\Tr(V_{dc} \\rho_{loc}) -\n\\Phi^{DC}[\\rho_{loc}]\\nonumber\\\\\n-\\Tr((V_H + V_{xc})\\rho)+\n\\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]  \n\\end{eqnarray}\nwhich can also be cast into the form\n\\begin{eqnarray}\nF+T S_{imp} = \\Tr\\log(G)-\\Tr\\log(G_{loc})+\n\\Tr((\\Delta-\\omega_n\\frac{\\partial\\Delta}{\\partial\\omega_n}+\\varepsilon_{imp}+V_{dc})G_{loc})\n+ \\frac{1}{2}\\Tr(\\Sigma_{imp} 
G_{imp})\n-\\Phi^{DC}[\\rho_{imp}]\\nonumber\\\\\n-\\Tr((V_H + V_{xc})\\rho)+\n\\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]  \n\\end{eqnarray}\nWe thus compute the following quantities with the Green's function of\nthe solid:\n\\begin{eqnarray}\nF_1 = \\Tr\\log(G)-\\Tr\\log(G_{loc})+\n\\Tr((\\Delta-\\omega_n\\frac{\\partial\\Delta}{\\partial\\omega_n}+\\varepsilon_{imp}+V_{dc})G_{loc})\n-\\Tr((V_H + V_{xc})\\rho)+\n\\Phi^{H}[\\rho]+\\Phi^{xc}[\\rho]    \n\\end{eqnarray}\nand the following with the impurity:\n\\begin{eqnarray}\nF_2 =   \\frac{1}{2}\\Tr(\\Sigma_{imp} G_{imp})-\\Phi^{DC}[\\rho_{imp}] -T S_{imp}\n\\end{eqnarray}\nNotice that $F_2$ is similar to $E_2$ (except for the entropy term), hence $E_{solid}$ and\n$F_{solid}$ contain exactly the same Monte Carlo noise.\n\n\\end{widetext}\n\n\\end{document}\n", "meta": {"hexsha": "161c473247f5d3c7f7c993d4d8d2fc6f3b7382d1", "size": 8945, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/simple_lapw.tex", "max_stars_repo_name": "dmft-wien2k/dmft-wien2k-v2", "max_stars_repo_head_hexsha": "83481be27e8a9ff14b9635d6cc1cd9d96f053487", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-04-03T06:37:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T11:44:06.000Z", "max_issues_repo_path": "src/doc/simple_lapw.tex", "max_issues_repo_name": "dmft-wien2k/dmft-wien2k-v2", "max_issues_repo_head_hexsha": "83481be27e8a9ff14b9635d6cc1cd9d96f053487", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-07-12T21:37:53.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-12T21:42:01.000Z", "max_forks_repo_path": "src/doc/simple_lapw.tex", "max_forks_repo_name": "dmft-wien2k/dmft-wien2k", "max_forks_repo_head_hexsha": "83481be27e8a9ff14b9635d6cc1cd9d96f053487", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2016-10-27T20:23:34.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-13T13:54:11.000Z", "avg_line_length": 31.3859649123, "max_line_length": 137, "alphanum_fraction": 0.6285075461, "num_tokens": 3802, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245870332531, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5659763144235402}}
{"text": "% ----------------------------------------------------------------------\n\\section{Elasticity with Infinitesimal Strain and No Faults}\n\nWe begin with the elasticity equation including the inertial term,\n\\begin{gather}\n  \\label{eqn:elasticity:strong:form}\n  \\rho \\frac{\\partial^2\\vec{u}}{\\partial t^2} - \\vec{f}(\\vec{x},t) - \\tensor{\\nabla} \\cdot \n\\tensor{\\sigma}\n(\\vec{u}) = \\vec{0} \\text{ in }\\Omega, \\\\\n%\n  \\label{eqn:bc:Neumann}\n  \\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau}(\\vec{x},t) \\text{ on }\\Gamma_\\tau, \\\\\n%\n  \\label{eqn:bc:Dirichlet}\n  \\vec{u} = \\vec{u}_0(\\vec{x},t) \\text{ on }\\Gamma_u,\n\\end{gather}\nwhere $\\vec{u}$ is the displacement vector, $\\rho$ is the mass\ndensity, $\\vec{f}$ is the body force vector, $\\tensor{\\sigma}$ is the\nCauchy stress tensor, $\\vec{x}$ is the spatial coordinate, and $t$ is\ntime. We specify tractions $\\vec{\\tau}$ on boundary $\\Gamma_\\tau$, and\ndisplacements $\\vec{u}_0$ on boundary $\\Gamma_u$. Because both $\\vec{\\tau}$\nand $\\vec{u}$ are vector quantities, there can be some spatial overlap\nof boundaries $\\Gamma_\\tau$ and $\\Gamma_u$; however, a degree of freedom at\nany location cannot be associated with both prescribed displacements\n(Dirichlet) and traction (Neumann) boundary conditions simultaneously.\n\n\\begin{table}[htbp]\n  \\caption{Mathematical notation for elasticity equation with\n    infinitesimal strain.}\n  \\label{tab:notation:elasticity}\n  \\begin{tabular}{lcp{3in}}\n    \\toprule\n    {\\bf Category} & {\\bf Symbol} & {\\bf Description} \\\\\n    \\midrule\n    Unknowns & $\\vec{u}$ & Displacement field \\\\\n    & $\\vec{v}$ & Velocity field \\\\\n    Derived quantities & $\\tensor{\\sigma}$ & Cauchy stress tensor \\\\\n                   & $\\tensor{\\epsilon}$ & Cauchy strain tensor \\\\\n    Common constitutive parameters & $\\rho$ & Density \\\\\n  & $\\mu$ & Shear modulus \\\\\n  & $K$ & Bulk modulus \\\\\nSource terms & $\\vec{f}$ & Body force per unit volume, for example $\\rho \\vec{g}$ \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{table}\n\n\n\\subsection{Quastistatic}\n\nIf we neglect the inertial term\n($\\rho \\frac{\\partial \\vec{v}}{\\partial t} \\approx \\vec{0}$), then\ntime dependence only arises from history-dependent constitutive\nequations and boundary conditions. Our solution vector is the\ndisplacement vector and the elasticity equation reduces to\n\\begin{gather}\n  \\label{eqn:elasticity:strong:form:quasistatic}\n  \\vec{f}(\\vec{x},t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma}(\\vec{u}) = \\vec{0} \\text{ in }\\Omega, \\\\\n%\n  \\tensor{\\sigma} \\cdot \\vec{n} = \\vec{\\tau}(\\vec{x},t) \\text{ on }\\Gamma_\\tau, \\\\\n%\n  \\vec{u} = \\vec{u}_0(\\vec{x},t) \\text{ on }\\Gamma_u.\n\\end{gather}\nBecause we will use implicit time stepping, we place all of the terms\nin the elasticity equation on the LHS. We create the weak form by\ntaking the dot product with the trial function $\\trialvec[u]$ and\nintegrating over the domain:\n\\begin{equation}\n    \\int_\\Omega \\trialvec[u] \\cdot \\left( \\vec{f}(t) + \\tensor{\\nabla}\n      \\cdot \\tensor{\\sigma} (\\vec{u}) \\right) \\, d\\Omega = 0. 
\n\\end{equation}\nUsing the divergence theorem and incorporating the Neumann boundary conditions, we have\n\\begin{equation}\n% \n  \\int_\\Omega \\trialvec[u] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[u] : -\\tensor{\\sigma}(\\vec{u}) \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma = 0\n\\end{equation}\n\n\\subsubsection{Residual Pointwise Functions}\n\nIdentifying $F(t,s,\\dot{s})$ and $G(t,s)$, we have\n\\begin{align}\n  % Fu\n  F^u(t,s,\\dot{s}) &=  \\int_\\Omega \\trialvec[u] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{f}^u_0} + \\nabla \\trialvec[u] : \\eqnannotate{-\\tensor{\\sigma}(\\vec{u})}{\\tensor{f^u_1}} \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[u] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{f}^u_0} \\, d\\Gamma, \\\\\n  % Gu\n  G^u(t,s) &= 0\n\\end{align}\nNote that we have multiple $\\vec{f}_0$ functions, each associated with\na trial function and an integral over a different domain or\nboundary. Each material and boundary condition (except Dirichlet)\ncontributes pointwise functions. The integral over the domain $\\Omega$\nis subdivided into integrals over the materials and the integral over\nthe boundary $\\Gamma_\\tau$ is subdivided into integrals over the\nNeumann boundaries. Each bulk constitutive model provides a different\npointwise function for the stress tensor\n$\\tensor{\\sigma}(\\vec{u})$. With $G=0$ it is clear that we have a\nformulation that will use implicit time stepping algorithms.\n\n\\subsubsection{Jacobian Pointwise Functions}\n\nWe only have a Jacobian for the LHS:\n\\begin{align}\n  J_F^{uu} &= \\frac{\\partial F^u}{\\partial u} = \\int_\\Omega \\nabla \\trialvec[u] : 
\\frac{\\partial}{\\partial u}(-\\tensor{\\sigma}) \\, d\\Omega 
  = \\int_\\Omega \\nabla \\trialvec[u] : -\\tensor{C} : \\frac{1}{2}(\\nabla + \\nabla^T)\\basisvec[u] 
\\, d\\Omega 
  = \\int_\\Omega \\trialscalar[u]_{i,k} \\, \\eqnannotate{\\left( -C_{ikjl} \\right)}{J_{f3}^{uu}} \\, \\basisscalar[u]_{j,l}\\, d\\Omega.\n\\end{align}\n\n\n\\subsection{Dynamic}\n\nFor compatibility with PETSc TS algorithms, we want to turn\nthe second order equation~\\vref{eqn:elasticity:strong:form} into two first order\nequations. 
We introduce the velocity as an unknown,\n$\\vec{v}=\\frac{\\partial \\vec{u}}{\\partial t}$, which leads to\n\\begin{align}\n  % Displacement-velocity\n  \\frac{\\partial \\vec{u}}{\\partial t} &= \\vec{v} \\text{ in } \\Omega, \\\\\n  % Elasticity\n  \\rho(\\vec{x}) \\frac{\\partial\\vec{v}}{\\partial t} &= \\vec{f}(\\vec{x},t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma}(\\vec{u}) \\text{ in } \\Omega, \\\\\n  % Neumann\n  \\tensor{\\sigma} \\cdot \\vec{n} &= \\vec{\\tau}(\\vec{x},t) \\text{ on } \\Gamma_\\tau, \\\\\n  % Dirichlet\n  \\vec{u} &= \\vec{u}_0(\\vec{x},t) \\text{ on } \\Gamma_u.\n\\end{align}\nWe create the weak form by taking the dot product with the trial\nfunction $\\trialvec[u]$ or $\\trialvec[v]$ and\nintegrating over the domain:\n\\begin{gather}\n  % Displacement-velocity\n  \\int_\\Omega \\trialvec[u] \\cdot \\frac{\\partial \\vec{u}}{\\partial t} \\, d\\Omega = 
  \\int_\\Omega \\trialvec[u] \\cdot \\vec{v} \\, d\\Omega, \\\\\n  % Elasticity\n    \\int_\\Omega \\trialvec[v] \\cdot \\rho(\\vec{x}) \\frac{\\partial \\vec{v}}{\\partial t} \\, d\\Omega 
 = \\int_\\Omega \\trialvec[v] \\cdot \\left( \\vec{f}(t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma} (\\vec{u}) \\right) \\, d\\Omega.\n\\end{gather}\nUsing the divergence theorem and incorporating the Neumann boundaries, we can rewrite the second equation as\n\\begin{equation}\n% \n  \\int_\\Omega \\trialvec[v] \\cdot \\rho(\\vec{x}) \\frac{\\partial \\vec{v}}{\\partial t} \\, d\\Omega\n  = \\int_\\Omega \\trialvec[v] \\cdot \\vec{f}(\\vec{x},t) + \\nabla \\trialvec[v] : -\\tensor{\\sigma}(\\vec{u}) \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[v] \\cdot \\vec{\\tau}(\\vec{x},t) \\, d\\Gamma.\n\\end{equation}\n\nFor explicit time stepping, we want $F(t,s,\\dot{s})=\\dot{s}$, so we\nsolve an augmented system in which we multiply the RHS residual\nfunction by the inverse of the lumped LHS Jacobian,\n\\begin{gather}\n  F^*(t,s,\\dot{s}) = G^*(t,s) \\text{, where} \\\\\n  F^*(t,s,\\dot{s}) = \\dot{s} \\text{ and} \\\\\n  G^*(t,s) = J_F^{-1} G(t,s).\n\\end{gather}\nWith the augmented system, we have\n\\begin{gather}\n  % Displacement-velocity\n  \\frac{\\partial \\vec{u}}{\\partial t}  = M_u^{-1} \\int_\\Omega \\trialvec[u] \\cdot \\vec{v} \\, d\\Omega, \\\\\n  % Elasticity\n  \\frac{\\partial \\vec{v}}{\\partial t} = M_v^{-1} \\int_\\Omega \\trialvec[v] \\cdot \\left( \\vec{f}(t) + \\tensor{\\nabla} \\cdot \\tensor{\\sigma} (\\vec{u}) \\right) \\, d\\Omega, \\\\\n  % Mu\n  M_u = \\mathit{Lump}\\left( \\int_\\Omega \\trialscalar[u]_i \\delta_{ij} \\basisscalar[u]_j \\, d\\Omega \\right), \\\\\n  % Mv\n  M_v = \\mathit{Lump}\\left( \\int_\\Omega \\trialscalar[v]_i \\rho(\\vec{x}) \\delta_{ij} \\basisscalar[v]_j \\, d\\Omega \\right).\n\\end{gather}\n\n\\subsubsection{Residual Pointwise Functions}\n\nWith explicit time stepping the PETSc TS assumes the LHS is\n$\\dot{s}$, so we only need the RHS residual functions:\n\\begin{align}\n  % Gu\n  G^u(t,s) &= \\int_\\Omega \\trialvec[u] \\cdot \\eqnannotate{\\vec{v}}{\\vec{g}^u_0} \\, d\\Omega, \\\\\n  % Gv\n  G^v(t,s) &=  \\int_\\Omega \\trialvec[v] \\cdot \\eqnannotate{\\vec{f}(\\vec{x},t)}{\\vec{g}^v_0} + \\nabla \\trialvec[v] : \\eqnannotate{-\\tensor{\\sigma}(\\vec{u})}{\\tensor{g^v_1}} \\, d\\Omega\n  + \\int_{\\Gamma_\\tau} \\trialvec[v] \\cdot \\eqnannotate{\\vec{\\tau}(\\vec{x},t)}{\\vec{g}^v_0} \\, d\\Gamma.\n\\end{align}\nIn the second equation these are the same pointwise functions as in the\nquasistatic case, only now they are on the RHS instead of the 
\n\n\\subsubsection{Jacobian Pointwise Functions}\n\nThese are the pointwise functions associated with $M_u$ and $M_v$ for\ncomputing the lumped LHS Jacobian. We premultiply the RHS residual\nfunction by the inverse of the lumped LHS Jacobian, while\n$s_\\mathit{tshift}$ remains on the LHS with $\\dot{s}$. As a result, we\nuse the LHS Jacobian pointwise functions, but set $s_\\mathit{tshift}=1$.\nThe LHS Jacobians are:\n\\begin{align}\n  % J_F uu\n  M_u = J_F^{uu} &= \\frac{\\partial F^u}{\\partial u} + s_\\mathit{tshift} \\frac{\\partial F^u}{\\partial \\dot{u}} =\n             \\int_\\Omega \\trialscalar[u]_i \\eqnannotate{s_\\mathit{tshift} \\delta_{ij}}{J^{uu}_{f0}} \\basisscalar[u]_j  \\, d\\Omega, \\\\\n  % J_F vv\n  M_v = J_F^{vv} &= \\frac{\\partial F^v}{\\partial v} + s_\\mathit{tshift} \\frac{\\partial F^v}{\\partial \\dot{v}} =\n             \\int_\\Omega \\trialscalar[v]_i \\eqnannotate{\\rho(\\vec{x}) s_\\mathit{tshift} \\delta_{ij}}{J^{vv}_{f0}} \\basisscalar[v]_j \\, d\\Omega.\n\\end{align}\n\n\n% End of file", "meta": {"hexsha": "e4a9d07b2a612a285bc14c437e0573a3af68ab59", "size": 9262, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/userguide/governingeqns/elasticity_infstrain.tex", "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z", "max_issues_repo_path": "doc/userguide/governingeqns/elasticity_infstrain.tex", "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 277, "max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z", "max_forks_repo_path": "doc/userguide/governingeqns/elasticity_infstrain.tex", "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z", "avg_line_length": 46.5427135678, "max_line_length": 190, "alphanum_fraction": 0.6627078385, "num_tokens": 3175, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.7090191337850933, "lm_q1q2_score": 0.5659297045592323}}
{"text": "\\section{The Monodromy Group; The Space of Germs}\r\n\\subsection{The Monodromy Group}\r\nLet $\\pi:\\tilde{X}\\to X$ be a regular covering map.\r\nPick a pasepoint $x_0\\in X$.\r\nFor any choice of loop $\\gamma:[0,1]\\to X$ based at $x_0$ (that is $\\gamma(0)=\\gamma(1)=x_0$), we want to defines a permutation $\\sigma_\\gamma:\\pi^{-1}(\\{x_0\\})\\to\\pi^{-1}(\\{x_0\\})$.\r\nIf you took Algebraic Topology, you should already know about this construction, but we'll do it again.\r\n\\begin{definition}\r\n    Let $\\tilde{x}\\in\\pi^{-1}(\\{x_0\\})$ and let $\\tilde{\\gamma}_{\\tilde{x}}$ be the unique lift of $\\gamma$ starting at $\\tilde{x}$.\r\n    Then $\\pi(\\tilde{\\gamma}_{\\tilde{x}}(1))=\\gamma(1)=x_0$, so $\\tilde{\\gamma}_{\\tilde{x}}(1)\\in\\pi^{-1}(\\{x_0\\})$.\r\n    Therefore we define $\\sigma_\\gamma(\\tilde{x})=\\tilde{\\gamma}_{\\tilde{x}}(1)$.\r\n\\end{definition}\r\n\\begin{remark}\r\n    1. The constant loop corresponds to the identity permutation.\\\\\r\n    2. Let $\\bar\\gamma(t)=\\gamma(1-t)$, then obviously $\\sigma_{\\bar\\gamma}=\\pi_{\\gamma}^{-1}$, which precisely means that $\\sigma_\\gamma$ is a permutation.\\\\\r\n    3. The previous two remarks hints that the set of all $\\sigma_\\gamma$ makes a subgroup of $\\operatorname{Sym}(\\pi^{-1}(\\{x_0\\}))$.\r\n    We want to realise this group operation in an intuitive way.\r\n    For $\\alpha,\\beta$ loops based at $x_0$, define their concatenation to be\r\n    $$\\alpha\\cdot\\beta=\\begin{cases}\r\n        \\alpha(2t)\\text{, for $t\\in[0,1/2]$}\\\\\r\n        \\beta(2t-1)\\text{, for $t\\in[1/2,1]$}\r\n    \\end{cases}$$\r\n    which is easily seen to be a well-defined loop based at $x_0$.\r\n    The uniqueness of lifts then implies that\r\n    $$(\\widetilde{\\alpha\\cdot\\beta})_{\\tilde{x}_1}=\\tilde{\\alpha}_{\\tilde{x}_1}\\cdot\\tilde{\\beta}_{\\tilde{\\alpha}_{\\tilde{x}_1}(1)}$$\r\n    Therefore $\\sigma_{\\alpha\\cdot\\beta}=\\sigma_\\beta\\sigma_\\alpha$.\r\n\\end{remark}\r\n\\begin{definition}\r\n    The group\r\n    $$\\{\\sigma_\\gamma|\\gamma\\text{ loop based at $x_0$}\\}\\le \\operatorname{Sym}(\\pi^{-1}(\\{x_0\\}))$$\r\n    which is called the monodromy group of $\\pi$.\r\n\\end{definition}\r\n\\begin{remark}\r\n    1. By Theorem \\ref{monodromy}, $\\alpha\\simeq\\beta$ implies $\\sigma_\\alpha=\\sigma_\\beta$.\\\\\r\n    2. 
One can easily show that the monodromy group is independent of the choice of basepoint (with the path-connectedness assumption, of course).\r\n\\end{remark}\r\n\\begin{example}\r\n    Recall that $p_k:\\mathbb C_\\star\\to\\mathbb C_\\star$ sending $z$ to $z^k$ is a regular covering map.\r\n    Take the basepoint $1$; then $p_k^{-1}(\\{1\\})$ consists of the $k^{th}$ roots of unity $\\zeta_k^n=e^{2\\pi in/k}$.\r\n    Let $\\gamma(t)=e^{2\\pi it}$; then for each $n$, $\\tilde{\\gamma}_{\\zeta_k^n}(1)=\\zeta_k^{n+1}$.\r\n    Therefore $\\sigma_\\gamma(\\zeta_k^n)=\\zeta_k^{n+1}$.\r\n    But it turns out that every loop in $\\mathbb C_\\star$ is homotopic to $\\gamma^n$ for some $n\\in\\mathbb Z$ (Algebraic Topology again!), so the monodromy group of $p_k$ is indeed the cyclic group of order $k$.\r\n\\end{example}", "meta": {"hexsha": "d123e810fbc05e792a0d12b3a48b9677cff40ac5", "size": 2941, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "8/monodromy.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "8/monodromy.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "8/monodromy.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.0238095238, "max_line_length": 214, "alphanum_fraction": 0.6514790887, "num_tokens": 1012, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7981867777396212, "lm_q1q2_score": 0.5659296977516609}}
{"text": "\\documentclass[./\\jobname.tex]{subfiles}\n\\begin{document}\n\\chapter{Theoretical Notes}\nThis chapter deals with a few theoretical notes. At first, the universal approximation theorem for the \\gls{gsk} is shown. Further, the multimodality of the fitness function is proven. Thereby, a symmetry is observed that could lead to the design of better variation operators.  \n\n\\section{Universial Approximation Theorem}\n\\label{chap:gsin_approximation_theorem}\n\nThe Gauss kernel is able to approximate all functions that are part of the Lebesgue space $f(\\mathbf{x}) \\in L^1(\\mathbb{R}^n)$ arbitrarily close. This has been proven by various works (\\cite{park_universal_1991}, \\cite{hangelbroek_nonlinear_2010}). In particular, \\cite{park_universal_1991} extends the universal approximation theorem to other kernels. The following paragraphs show that the \\gls{gsk} fulfils the posed conditions and thus can benefit from the approximation theorem. \n\nThe Gauss-Sine kernel, as defined in equation \\eqref{eq:gsin_kernel_theoretical_notes}, is a multiplication of the Gauss kernel with a sine function. Its main purpose is described in the experiment \\mbox{chapter \\ref{chap:experimet_3}}. \n\n\\begin{equation}\n\\label{eq:gsin_kernel_theoretical_notes}\ngsk(\\mathbf{x}) = \\omega e^{-\\gamma ||\\mathbf{x} - \\mathbf{c}||^2} sin(f ||\\mathbf{x} - \\mathbf{c}||^2 - \\varphi)\n\\end{equation}\n\nTo prove that the universal approximation theorem is also applicable to the \\gls{gsk}, it must comply with the conditions placed by \\cite{park_universal_1991}. At first, a kernel, in this case the $gsk(\\mathbf{x})$, must be continuous and bounded. This already restricts $\\gamma > 0$. However, \\cite{chaquet_using_2019} found that not placing any limits on the parameters results in a better performance. Thus, this constraint is not implemented in the current version of the algorithm. Secondly, the integral over the whole domain of the kernel $K(\\mathbf{x})$ must not be $0$. Thus, it needs to be shown that \n\n\\begin{equation}\n\\int_{\\mathbf{x} = -\\mathbf{\\infty}}^{\\infty} gsk(\\mathbf{x}) \\text{ } d\\mathbf{x} \\neq 0\n\\end{equation} \n\nIntuitively, the next restriction on $\\omega \\neq 0$ is found. \n\nTo simplify the following calculations, the \\gls{gsk} is rewritten into polar coordinates. Further, the offsets by $c_0$ and $c_1$ are accounted for by an appropriate coordinate transformation. This results in \n\n\\begin{equation}\n\\lim_{t \\to \\infty} \\int_{r=0}^{t} \\int_{\\theta = 0}^{2 \\pi} e^{-\\gamma(r^2)} sin(f r^2 - \\varphi) r \\text{ } dr \\text{ } d\\theta\n\\end{equation}\n\nSince the kernel is radial symmetric and thus has no dependency on $\\theta$, the respective integral can be solved immediately which results in a multiplicative factor of $2 \\pi$. The integral can be further simplified by substituting $r^2 = u$.\nThe resulting expression can be solved with ``integration by part'' $\\int f(u) \\frac{g(u)}{du} = f(u) g(u) - \\int g(u) \\frac{f(u)}{du}$. The formula has to be applied twice. 
\n\n\\section{Multimodality and Symmetry}\n\\label{chap:multimodality_and_symmetry}\n\nThe optimisation algorithm tries to find the best approximation of $N$ kernels to the solution of a differential equation. Assume that a kernel $K$ is fully defined by a vector of parameters $\\mathbf{p}$. The best fit is defined as \n\n\\begin{equation}\n\\mathbf{\\hat{p}_{apx}} = \\left[\\underbrace{\\left[ \\mathbf{\\hat{p}}_{K_0} \\right] }_{\\text{kernel 0}}, \\cdots \\underbrace{\\left[ \\mathbf{\\hat{p}}_{K_i} \\right] }_{\\text{kernel i}}, \\cdots \\underbrace{\\left[ \\mathbf{\\hat{p}}_{K_N} \\right]}_{\\text{kernel N}} \\right]^T\n\\end{equation}\n\nwhere the parameters $\\mathbf{\\hat{p}}$ of every kernel are chosen optimally. The optimal kernel functions $K(\\mathbf{\\hat{p}}_{K_i}, \\mathbf{x})$ are summed up to form the optimal approximation $\\hat{u}_{apx}(\\mathbf{x})$. \n\n\\begin{equation}\n\\label{eq:uapx_kernel_sum}\n\\hat{u}_{apx}(\\mathbf{x}) = \\sum_{i=0}^{N} K(\\mathbf{\\hat{p}}_{K_i}, \\mathbf{x})\n\\end{equation}\n\nSince the order of the summation is irrelevant, any kernel-wise permutation of $\\mathbf{\\hat{p}_{apx}}$ describes an optimal solution. Thus, the fitness function $F(u_{apx}(\\mathbf{x}))$ has at least $N!$ local optima, all of which share the same function value. \n\nFurther, a symmetry in the location of the optima is observed. All optima lie on the surface of the hypersphere that is centred at the origin and has a radius of $r = || \\mathbf{\\hat{p}_{apx}} ||$. \n\nThe 3D plot in figure \\ref{fig:optima_distribution} shows an exemplary distribution of optima on the fitness function. For the sake of simplicity, a kernel now consists of only one parameter. As an example, the vector $\\mathbf{\\hat{p}_{apx}} = \\left[ 2, 1, -1 \\right]^T$ describes an optimal solution that consists of 3 kernels, and any permutation of these three coordinates is itself a perfect fit. 
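\n\nThe permutation argument is easy to demonstrate numerically (a toy sketch of ours with one-parameter Gauss kernels; the centres and the squared-error fitness below are assumptions made purely for illustration):\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import permutations\n\nx = np.linspace(-5, 5, 201)\np_opt = np.array([2.0, 1.0, -1.0])   # optimal centres, one per kernel\n\ndef u_apx(p):                        # sum of one-parameter kernels\n    return sum(np.exp(-(x - c)**2) for c in p)\n\ntarget = u_apx(p_opt)\nfitness = lambda p: np.sum((u_apx(p) - target)**2)\n\n# Every permutation of the optimal parameters gives the same (zero) fitness.\nprint([fitness(q) for q in permutations(p_opt)])\n\\end{verbatim}\n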
\n\n\\begin{figure}[H]\n\t\\centering\n\t\\noindent\\adjustbox{max width=0.7\\linewidth}{\n\t\t\\includegraphics[width=\\textwidth]{../img/pdf/symmetry.pdf}\n\t}\n\t\\unterschrift{Distribution of exemplary optima on the fitness function in 3D space.}{}{}\n\t\\label{fig:optima_distribution}\n\\end{figure}\n\nThis symmetry is independent of the kernel type. Many ingredients of the fitness function, such as the weighting and penalty factors or the number of collocation points, have no influence on the radial arrangement of the optima. The symmetry is a fundamental property of sums of \\gls{rbf}. Thus, it could even be exploited in other fields that use this kind of representation, such as non-linear function approximation. \n\nThe invariance under reordering the summation holds not only at the optimum but at every other point as well. Thus, the entire fitness function exhibits this radial symmetry. More work investigating the structure of the fitness function remains to be done. With more knowledge about the features of this function, it might be possible to design algorithms specific to this problem. \n\n\\end{document}", "meta": {"hexsha": "f675fd2babdca63422470ea9f94edc6a01a15f69", "size": 7622, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "master_thesis_doc/tex/Theoretical_Notes.tex", "max_stars_repo_name": "nicolai-schwartze/Masterthesis", "max_stars_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-13T10:02:02.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-13T10:02:02.000Z", "max_issues_repo_path": "master_thesis_doc/tex/Theoretical_Notes.tex", "max_issues_repo_name": "nicolai-schwartze/Masterthesis", "max_issues_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "master_thesis_doc/tex/Theoretical_Notes.tex", "max_forks_repo_name": "nicolai-schwartze/Masterthesis", "max_forks_repo_head_hexsha": "7857af20c6b233901ab3cedc325bd64704111e16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.0, "max_line_length": 611, "alphanum_fraction": 0.7351088953, "num_tokens": 2228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879992, "lm_q2_score": 0.7981867753392728, "lm_q1q2_score": 0.5659296862343902}}
{"text": "\\documentclass{article}\r\n\\usepackage{graphicx}\r\n\\usepackage{amsmath}\r\n\\usepackage{fixltx2e}\r\n\\usepackage{float}\r\n\r\n\r\n\\begin{document}\r\n\r\n\\title{Software package for Engineering drawing}\r\n\\author{Aditya Jain \\& Ankit Akash}\r\n\r\n\\maketitle\r\n\r\n\\begin{abstract}\r\nDrawing is precisely considered as a Universal Language. For ages the depiction of a 3D object in 2D sheets has been a problem of sheer importance. People have relied on orthographic methods of projection to ensure unambiguous communication for proper convey of graphic ideas. This method although being simple requires a level of imagination on both the parties to properly visualize the 3D body from 2D views. The motive here is to automate the process of projection from 3D to orthographic views and reconstruction from the same.\r\n\\end{abstract}\r\n\r\n\r\n\r\n%%%%%%%  1st Part  %%%%%%%%\r\n\r\n\r\n\\part{3D object to Orthographic Projections}\r\n\r\n\\section*{Idea}\r\nFor projection of a 3D object onto any 2D plane we assume that the object has no line attribute ,it has only collections of points which represent the whole object. This helps in generalizing the concept of projections for curves and transformation of the object.\r\n\r\n\\section{Transformations on 3D Object}\r\nThe coordinate points \\((x,y,z)\\) can be transformed using the five basic operations which are translation,rotation,reflection,scaling and shearing. These operations will help us in getting the different views of the object by converting the coordinates into the new transformed ones. \r\nWe use the row vector \\((x,y,z,1)\\) to get the transformed coordinates by multiplying it with the transformation matrix.\r\n\r\n\\subsection{Translation}\r\nTo shift a point by \\((t\\textsubscript{x},t\\textsubscript{y},t\\textsubscript{z})\\) we multiply its \\(4\\) dimensional row vector by the translation matrix which is given below and denoted by T.\r\n\r\n\r\n\\begin{equation}\r\n    \\label{simple_equation}\r\n    T(t_x,t_y,t_z) = \\begin{bmatrix}1 & 0 & 0 & 0 \\\\0 & 1 & 0 & 0 \\\\0 & 0 & 1 & 0 \\\\t_{x} & t_{y} & t _{z} & 1 \\end{bmatrix}\r\n\\end{equation}\r\n\r\n\\subsection{Scaling and Reflection}\r\nScaling the object about the origin by a factor of s\\textsubscript{x},s\\textsubscript{y},s\\textsubscript{z} in x,y and z directions respectively is done by multiplying with the following scaling matrix\r\n\\begin{equation}\r\n    \\label{simple_equation_2}\r\n    S(s_x,s_y,s_z) = \\begin{bmatrix}s_{x} & 0 & 0 & 0 \\\\0 & s_{y} & 0 & 0 \\\\0 & 0 & s_z & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n\\end{equation}\r\nThe transformation matrices of the reflection R\\textsubscript{yz} in x=0 plane , R\\textsubscript{xz} in the y=0 plane, and R\\textsubscript{xy} in the z=0 plane are obtained by scaling of -1 in the respective axis .\r\n\\begin{equation}\r\n    \\label{simple_equation_3}\r\n    R_{yz} = \\begin{bmatrix}-1 & 0 & 0 & 0 \\\\0 & 1 & 0 & 0 \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n    \\end{equation}\r\n    \\begin{equation}\r\n    \\label{simple_equation_4}\r\n    R_{xz} = \\begin{bmatrix}1 & 0 & 0 & 0 \\\\0 & -1 & 0 & 0 \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n    \\end{equation}\r\n    \\begin{equation}\r\n    \\label{simple_equation_5}\r\n    R_{xy} = \\begin{bmatrix}1 & 0 & 0 & 0 \\\\0 & 1 & 0 & 0 \\\\0 & 0 & -1 & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n\\end{equation}\r\n\r\n\\subsection{Rotations About the Coordinate axis}\r\nThe transformation matrices for rotation about the x,y and z 
\r\n\r\n\\subsection{Rotations About the Coordinate Axes}\r\nThe transformation matrices for rotation about the x, y and z axes are Rot\\textsubscript{x}, Rot\\textsubscript{y} and Rot\\textsubscript{z} respectively, given by:\r\n\\begin{equation}\r\n    \\label{simple_equation_6}\r\n    Rot_{x}(\\theta_{x}) =\\begin{bmatrix}1 & 0 & 0 & 0 \\\\0 & \\cos\\theta_{x} & \\sin\\theta_{x} & 0 \\\\0 & -\\sin\\theta_{x} & \\cos\\theta_{x} & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n\\end{equation}\r\n\\begin{equation}\r\n    \\label{simple_equation_7}\r\n    Rot_{y}(\\theta_{y}) =\\begin{bmatrix}\\cos\\theta_{y} & 0 & -\\sin\\theta_{y} & 0 \\\\0 & 1 & 0 & 0 \\\\\\sin\\theta_{y} & 0 & \\cos\\theta_{y} & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n\\end{equation}\r\n\\begin{equation}\r\n    \\label{simple_equation_8}\r\n    Rot_{z}(\\theta_{z}) =\\begin{bmatrix}\\cos\\theta_{z} & \\sin\\theta_{z} & 0 & 0 \\\\-\\sin\\theta_{z} & \\cos\\theta_{z} & 0 & 0 \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\end{bmatrix}\r\n\\end{equation}\r\n\r\n\\begin{figure}[h]\r\n  \\includegraphics[width=10cm,height=10cm]{axis.png}\r\n  \\caption{Positive angles of rotation}\r\n  \\label{fig:boat1}\r\n\\end{figure}\r\n\r\n\\subsection{Rotation About An Arbitrary Line In Space}\r\nRotation through an angle $\\theta$ about an arbitrary rotation axis is obtained by transforming the rotation axis to one of the coordinate axes, applying a primary rotation through an angle $\\theta$ about that coordinate axis, and then tracing back to the rotation axis. \r\n\\\\Let the rotation axis have direction cosines $(c_1,c_2,c_3)$. If the rotation axis passes through the point P $(p_1,p_2,p_3)$, then applying the following transformation matrices will rotate the coordinates:\r\n\\\\ (1) Apply the translation $ T(-p_{1},-p_{2},-p_{3})$ which maps the point P to the origin.\r\n\\\\ (2) Apply a rotation through an angle $\\theta_{x}$ about the x-axis so that the line is mapped into the $xz$-plane.\r\n\\\\ (3) Apply a rotation about the y-axis so that the line is mapped to the $z$-axis.\r\n\\\\ (4) Apply a rotation through an angle $\\theta$ about the $z$-axis.\r\n\\\\ (5) Apply the inverses of transformations 1-3 in reverse order.\r\n\r\n$$    T(-p_{1},-p_{2},-p_{3})Rot_{x}(\\theta_{x})Rot_{y}(-\\theta_{y})Rot_{z}(\\theta)Rot_{y}(\\theta_{y})Rot_{x}(-\\theta_{x})T(p_{1},p_{2},p_{3})$$\r\nwhere \r\n$$\\sin\\theta_{x} = \\frac{c_2}{\\sqrt{c_2^2+c_3^2}}, \\quad \\cos\\theta_{x} = \\frac{c_3}{\\sqrt{c_2^2+c_3^2}},$$\r\n$$ \\sin\\theta_{y} = c_1 , \\quad \\cos\\theta_{y} =\\sqrt{c_2^2 + c_3^2}.$$\r\n\\section{Projection From 3D to 2D Generalizations}\r\nEach edge is divided into numerous points, hence we end up with a cloud of points. Points are named uniquely. We project each point onto the surface. Then we tag each projected point with a label H or S, with the corresponding meaning:\r\n\\begin{itemize}\r\n\\item H - The point is part of an edge that is hidden in that view\r\n\\item S - The point is part of an edge that is solid in that view\r\n\\end{itemize}\r\n\r\n\\section{Projection on arbitrary cross section}\r\nSay we have a plane \\(ax + by + cz + d = 0\\) and we want the projection of a point \\((x_1,y_1,z_1)\\).\r\nWe do the following math (the foot of the perpendicular from the point to the plane):\r\n \\[ t = \\frac{ax_1+by_1+cz_1+d}{a^2+b^2+c^2} \\]\r\n \\[ (x_p,y_p,z_p)=(x_1,y_1,z_1) - t\\,(a,b,c) \\]\r\n\\((x_p , y_p , z_p)\\) represents the projection of a single sample point. The collection of all \\((x_p , y_p , z_p)\\) forms its 2d projected figure.
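\r\n\r\nA direct transcription of this projection in NumPy (our own sketch; the plane coefficients in the example are arbitrary):\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef project_onto_plane(p, a, b, c, d):\r\n    # Foot of the perpendicular from p onto the plane ax + by + cz + d = 0.\r\n    n = np.array([a, b, c], dtype=float)\r\n    t = (n @ p + d) / (n @ n)\r\n    return p - t * n\r\n\r\n# Example: project (1, 2, 3) onto the plane x + y + z - 3 = 0.\r\nprint(project_onto_plane(np.array([1.0, 2.0, 3.0]), 1, 1, 1, -3))\r\n# -> [0. 1. 2.]\r\n\\end{verbatim}\r\n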
\r\n\r\n\\begin{flushleft}\r\n%%%%%%%  Needs editing, down  %%%%%%%%\r\n\\\\If the normal to the plane is given, then we can also rotate the whole object so that the z-axis of the newly rotated object aligns with the given normal. Then we can take the orthographic projections as before.\r\n\\\\To align the z-axis with the normal, let the unit vector of the normal be denoted by $u$. Then by Rodrigues' rotation formula:\r\n$$v_{rot} = v\\cos\\theta + (k \\times v )\\sin\\theta + k(k \\cdot v)(1-\\cos\\theta)$$\r\nwhere $$k=\\frac{u\\times\\hat{z}}{|u\\times\\hat{z}|} $$\r\n$v_{rot}$ is the final rotated vector and $v$ is the initial vector. So by applying this formula to all the vertices we can align our object with the given normal.\r\n\\end{flushleft}
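\r\n\r\nA compact NumPy version of this alignment step (again our own sketch, not the package's code); here the rotation angle is the angle between $u$ and $\\hat{z}$:\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef rodrigues(v, k, theta):\r\n    # Rotate v about the unit axis k by the angle theta.\r\n    return (v*np.cos(theta) + np.cross(k, v)*np.sin(theta)\r\n            + k*np.dot(k, v)*(1 - np.cos(theta)))\r\n\r\nu = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # unit normal of the plane\r\nz = np.array([0.0, 0.0, 1.0])\r\nk = np.cross(u, z) / np.linalg.norm(np.cross(u, z))\r\ntheta = np.arccos(np.dot(u, z))\r\nprint(rodrigues(u, k, theta))                # -> approximately [0, 0, 1]\r\n\\end{verbatim}\r\n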
\r\n\\section{Projection on standard xy, yz and zx planes}\r\nDrop the corresponding coordinate x, y or z as follows:\r\n\\begin{itemize}\r\n\\item For the xy plane - drop z\r\n\\item For the yz plane - drop x\r\n\\item For the xz plane - drop y\r\n\\end{itemize}\r\nThen add the label.\r\n\r\n\\section{How to label H and S}\r\nTheoretical idea: we extend our 3d object to form a mesh of points enclosing the whole surface of the object. For marking solid and dashed lines in any view (say, xy) we project parallel lines in the direction of the normal vector (in this case the z-axis) from each projected point. The point at which this line meets our 3d object for the first time is the one which will be visible when viewed from the surface. All other points lying on that line are hidden from our sight with respect to that view. Thus we get a simple scheme: mark a point S if it is the first intersection, H otherwise. \r\nOnce the labelling is done, join all the S points with solid lines and all the H points with dashed lines.\r\n\r\n%%%%%%%  2nd Part  %%%%%%%%\r\n\r\n\\part{Converting Orthographic Projections}\r\n\\setcounter{section}{0}\r\n\\section{How to store them?}\r\nA 2D orthographic figure is a collection of tuples we name 2dVertices, Edge1 and Edge2. 2dVertices ($V$) represents the intersections of edges; it is a collection of points, each a pair of integers $(x,y)$ denoting the position along the respective axes. Any edge $e$ in Edge1 or Edge2 is of the type $v_1 \\times v_2$ where $v_1,v_2 \\in V$; hence Edge1 and Edge2 are subsets of $V \\times V$. Each edge $e_1$ in Edge1 represents a solid line in the isometric view. Each edge $e_2$ in Edge2 represents a dashed (hidden) line in the isometric view. For a single object we have 3 sets of \\{V, Edge1, Edge2\\} with the labels Top, Front and Side.\r\n\\section{How object data looks in 3D}\r\n3D space consists of 3dVertices, Edges and Planes.\r\n3dVertices are the basic building blocks: a collection of tuples of 3 integers $(x,y,z)$ representing a vertex's position in 3D space.\r\nAs in the previous definition, any edge $e$ in Edges is of the type $v_1 \\times v_2$ where $v_1,v_2 \\in V$; hence Edges is a subset of $V \\times V$. Each edge $e_1$ in Edges represents a line between its corresponding vertices in the 3d view.\r\nPlanes is a tuple of arbitrary size containing a collection of 3dVertices in which all of the vertices lie in the same plane.\r\n\r\n\\section{How to transfer information from 2D projections to get 3D object?}\r\nWe have \r\n\\begin{table}[h!]\r\n\\centering\r\n\\begin{tabular}{||c c c||} \r\n \\hline\r\n Front View (X-Z) & Top View (X-Y) & Side View (Y-Z) \\\\ [0.5ex] \r\n \\hline\\hline\r\n V1(x,z) & V2(x,y) & V3(y,z) \\\\ \r\n Edge11 & Edge12 & Edge13 \\\\\r\n Edge21 & Edge22 & Edge23 \\\\ [1ex] \r\n \\hline\r\n\\end{tabular}\r\n\\label{table:1}\r\n\\end{table}\r\nEvery vertex in 3D space is uniquely defined by the triplet $(x,y,z)$. We propose a simple algorithm to construct the 3dVertices list for the 3D representation (a sketch follows the list):\r\n\\begin{itemize}\r\n\\item Construct a blank list Vertices'' of type 3dVertices.\r\n\\item For every $v_1 = (x,z)$ in the front view V1 and every $v_2 = (x',y)$ in the top view with $x' = x$, add the candidate vertex $(x,y,z)$ to Vertices''.\r\n\\item For every $v$ in Vertices'', extract $(y,z)$ and keep $v$ only if $(y,z)$ is a 2dVertex of the side view.\r\n\\item Vertices'' then contains all the corresponding points that the object can have given the 2d projections.\r\n\\end{itemize}
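\r\n\r\nA minimal sketch of this candidate-vertex construction (our own illustration with made-up view data for a unit cube, not the package's code):\r\n\\begin{verbatim}\r\n# Hypothetical view data: vertices of a unit cube as seen in each view.\r\nfront = {(0, 0), (1, 0), (0, 1), (1, 1)}   # (x, z) pairs\r\ntop   = {(0, 0), (1, 0), (0, 1), (1, 1)}   # (x, y) pairs\r\nside  = {(0, 0), (1, 0), (0, 1), (1, 1)}   # (y, z) pairs\r\n\r\nvertices = [(x, y, z)\r\n            for (x, z) in front\r\n            for (x2, y) in top if x2 == x\r\n            if (y, z) in side]\r\nprint(sorted(vertices))   # the 8 cube corners (shadow points may appear\r\n                          # for less symmetric objects)\r\n\\end{verbatim}\r\n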
\r\n\r\n\r\n\\section{Problem with our Algorithm}\r\nIt evaluates and gives all the points that our object should have (the proof is obvious), but it also reports extra points that don't belong to the object (call them shadow points). Consider this example:\r\n\r\n\\begin{figure}[H]\r\n  \\includegraphics[width=15cm,height=10cm]{unit-6-isometric-views-28-638.png}\r\n  \\caption{Example}\r\n  \\label{fig:boa}\r\n\\end{figure}\r\n\r\nApplying the algorithm gives one extra shadow point, i.e.\\ the deleted edge of the original cube.\r\nIn general, this happens for all those points which can be formed by extending any three line segments of the 3d object to infinity (of course, this is possible only if the three are concurrent). So how can we handle this? By the basic fact that\r\n\r\n\\section{Edges too in isometric drawing carry information}\r\nThis simple fact has not been utilised until now, but it too carries a powerful piece of information for the reconstruction of our 3d object. If two edges in our drawing are collinear and concurrent at one point, then the other points forming the edges must have different z components (note: here z is an arbitrary reference to the third, unknown component of the 2dVertex). \r\nAdding this small fact enables our algorithm to eliminate the shadow points. \r\nNow let's restrict ourselves to the simpler case in which each vertex in 2dVertices carries a label showing which 3d object it belongs to.\r\n\\section{Original Problem Statement}\r\nUse the information from a set of isometric projections of a 3d object to reconstruct it identically. Here each of the 2d figures in the isometric projection carries information about\r\n\\begin{itemize}\r\n\\item Vertex positions (x,y) with labels\r\n\\item An edge list for solid lines\r\n\\item An edge list for hidden lines\r\n\\end{itemize}\r\n\\section{Converting Wireframe To Solid Structure}\r\nThere are two models for converting a wireframe into a solid structure: the volumetric model and the surface model. We stick with the latter in this program. For the surface model we need to find all the faces which are produced by closed edge loops. First we produce all the possible faces, then remove the pseudo-faces using the set of rules defined below. Finally, all the true faces are assembled to form the solid object. \r\n\\subsection{Finding All The Possible Faces}\r\nAs our object is polyhedral, all its faces will be planar, so to find the faces we search for closed loops formed by edges in all possible planes. A plane is determined whenever two edges share a common vertex. Using this fact and the minimum-internal-angle searching method we can find all the possible faces in the wireframe. Starting from a convex vertex (at the extreme left) we can ensure that loops have no internal edges.\r\n\\\\for (each vertex, $v_i$)\r\n\\\\for (each pair of adjacent edges, $e_i$, $e_j$) do\r\n\\\\n = normal to the plane, $e_i \\times e_j$\r\n\\\\$v_k$ = vertex at far end of $e_j$\r\n\\\\while ($v_k \\neq v_i$ and $e_j$ exists)\r\n\\\\$e_k \\longleftarrow$ edge adjacent to $v_k$, coplanar to n, with minimum internal angle from $e_j$\r\n\\\\$e_j$ = $e_k$; $v_k$ = far end of $e_k$\r\n\\\\end while\r\n\\\\if ($v_k$ = $v_i$) then\r\n\\\\list $e_i, e_j, e_k, \\dots$ is a possible face loop\r\n\\\\end if\r\n\\\\next $e_i$, $e_j$\r\n\\\\next $v_i$ \r\n\r\n\\subsection{Rules For Detecting Pseudo Faces}\r\nThe following rules are applied while removing the pseudo faces from the set of all faces:\r\n\\\\$\\langle 1\\rangle$ When an edge is adjacent to more than two faces, at most two faces can be true; the rest must be pseudo. (From the Moebius rule.)\r\n\\\\\\textbf{Moebius rule:} each edge in a manifold solid belongs to two faces and its orientation is inverted by each face.\r\n\\\\$\\langle 2\\rangle$ When two faces intersect, only one of them can be true. (If they were both true, the intersecting edge would have four adjacent faces, contradicting the Moebius rule.)\r\n\\\\$\\langle 3 \\rangle$ A dashed edge which has no face to occlude it is false. (By the definition of a dashed line.)\r\n\\\\$\\langle 4 \\rangle$ If an edge is adjacent to only one face, then both the face and the edge are false by the Moebius rule.\r\n\\\\$\\langle 5 \\rangle$ If an edge is adjacent to two coplanar faces, then the faces can be merged and the edge removed.\r\n\\\\$\\langle 6 \\rangle$ If a true edge is adjacent to two non-coplanar faces, then both faces are true.\r\n\\\\$\\langle 7 \\rangle$ If a face is coplanar and adjacent to a true face and their shared edge is solid in the projected view, then the face must be false (otherwise it would contradict rule 5).\r\n\\subsection{Implementing Rules}\r\nWhen undecided edges are found, assumptions are made using rules 1 and 2, then rules 1 to 7 are applied, until either all edges are decided or no object is found. All possible objects are detected this way, without having to assemble them first. The recursive algorithm is summarised as follows (a sketch of the core acceptance test appears below):\r\n\\\\\\textbf{function()}\r\n\\\\if (all edges obey the Moebius rule) and (no faces intersect) then\r\n\\\\a solution has been found, add it to the list\r\n\\\\else\r\n\\\\for (each undecided edge $e_i$) do\r\n\\\\for (each adjacent pair of faces $f_j, f_k$) do\r\n\\\\assume $f_j, f_k$ are true;\r\n\\\\if (rules 1-7 change anything) then\r\n\\\\call function\r\n\\\\end if\r\n\\\\next $f_j, f_k$\r\n\\\\next $e_i$\r\n\\\\end if\r\n\\\\\\textbf{end function}
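\r\n\r\nThe following toy Python sketch (ours; a brute-force search standing in for the rule-based pruning described above) illustrates the core acceptance test: keep only those subsets of candidate faces in which every used edge belongs to exactly two true faces:\r\n\\begin{verbatim}\r\nfrom itertools import product\r\n\r\ndef moebius_solids(faces, edges):\r\n    # faces: list of sets of edge ids; returns the subsets obeying the\r\n    # Moebius rule (every used edge lies in exactly two true faces).\r\n    solutions = []\r\n    for truth in product([False, True], repeat=len(faces)):\r\n        chosen = [f for f, keep in zip(faces, truth) if keep]\r\n        if not chosen:\r\n            continue\r\n        counts = {e: 0 for e in edges}\r\n        for f in chosen:\r\n            for e in f:\r\n                counts[e] += 1\r\n        if all(c in (0, 2) for c in counts.values()):\r\n            solutions.append(chosen)\r\n    return solutions\r\n\\end{verbatim}\r\n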
\r\n\r\n\r\n%%%%%%%  3rd Part  %%%%%%%%\r\n\r\n\\part{How many projections do we need?}\r\n\\setcounter{section}{0}\r\nThe three principal views are top, front and side. In engineering drawing, the choice of the number of views is determined by the need to define the engineering object uniquely and unambiguously. If the corresponding labelling of points is given, then two views are sufficient to describe all the corner vertices and edges of our object with certainty. But what if points are labelled in correspondence only for those which are strictly required to represent the 2d figure? Why are we considering this case? Look at figure 2 below:\r\n\r\n\\begin{figure}[H]\r\n  \\includegraphics[width=12cm,height=8cm]{fig2.png}\r\n  \\caption{Orthographic views of an object}\r\n  \\label{fig:orthoviews}\r\n\\end{figure}\r\n\r\n\r\n\\\\In the top and side views it is impossible to label the infinite number of points corresponding to the circular region of our object, and if we approximate it by any finite number of points, then there is always ambiguity about whether the hole is a polygonal cut or a cylindrical one. So 2 views are never sufficient to represent a 3d object while preserving its identity.\r\n\r\n\r\n\r\n\\end{document}", "meta": {"hexsha": "a5f42c9b3456871529b3253653b9adf8d704895b", "size": 16717, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/Assistive files for Design Document/cop290.tex", "max_stars_repo_name": "ankitakash2007/Slate", "max_stars_repo_head_hexsha": "aad79cd0a353a94f2a049575a91ff961953f3af0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-30T07:17:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-30T07:17:40.000Z", "max_issues_repo_path": "docs/Assistive files for Design Document/cop290.tex", "max_issues_repo_name": "aditya0212jain/Slate", "max_issues_repo_head_hexsha": "aad79cd0a353a94f2a049575a91ff961953f3af0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/Assistive files for Design Document/cop290.tex", "max_forks_repo_name": "aditya0212jain/Slate", "max_forks_repo_head_hexsha": "aad79cd0a353a94f2a049575a91ff961953f3af0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-30T07:17:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-30T07:17:44.000Z", "avg_line_length": 61.9148148148, "max_line_length": 600, "alphanum_fraction": 0.7253095651, "num_tokens": 4725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5659296860364005}}
{"text": "\\lab{Pandas 1: Introduction}{Pandas 1: Introduction}\n\\objective{Though NumPy and SciPy are powerful tools for numerical computing, they lack some of the high-level functionality necessary for many data science applications.\nPython's \\emph{pandas} library, built on NumPy, is designed specifically for data management and analysis.\nIn this lab we introduce pandas data structures, syntax, and explore its capabilities for quickly analyzing and presenting data.\n}\n\\label{lab:pandas1}\n\n\\section*{Pandas Basics}\n\nPandas is a python library used primarily to analyze data.\nIt combines functionality of NumPy, MatPlotLib, and SQL to create an easy to understand library that allows for the manipulation of data in various ways.\nIn this lab we focus on the use of Pandas to analyze and manipulate data in ways similar to NumPy and SQL.\n\n\\subsection*{Pandas Data Structures}\n\n\\subsubsection*{Series}\n\nThe first pandas data structure is a \\li{Series}.\nA \\li{Series} is a one-dimensional array that can hold any datatype, similar to a \\li{ndarray}.\nHowever, a \\li{Series} has an \\li{index} that gives a label to each entry.\nAn \\li{index} generally is used to label the data.\n\nTypically a \\li{Series} contains information about one feature of the data.\nFor example, the data in a \\li{Series} might show a class's grades on a test and the \\li{Index} would indicate each student in the class.\nTo initialize a \\li{Series}, the first parameter is the data and the second is the index.\n\n\\begin{lstlisting}\n>>> import pandas as pd\n>>>\n# Initialize Series of student grades\n>>> math = pd.Series(np.random.randint(0,100,4), ['Mark', 'Barbara',\n...\t\t'Eleanor', 'David'])\n>>> english = pd.Series(np.random.randint(0,100,5), ['Mark', 'Barbara',\n...\t\t'David', 'Greg', 'Lauren'])\n\\end{lstlisting}\n\n\\subsubsection*{DataFrame}\n\nThe second key pandas data structure is a \\li{DataFrame}.\nA \\li{DataFrame} is a collection of multiple \\li{Series}.\nIt can be thought of as a 2-dimensional array, where each row is a separate datapoint and each column is a feature of the data.\nThe rows are labeled with an \\li{index} (as in a \\li{Series}) and the columns are labeled in the attribute \\li{columns}.\n\nThere are many different ways to initialize a \\li{DataFrame}.\nOne way to initialize a \\li{DataFrame} is by passing in a dictionary as the data of the \\li{DataFrame}.\nThe keys of the dictionary will become the labels in \\li{columns} and the values are the \\li{Series} associated with the label.\n\n\\begin{lstlisting}\n# Create a DataFrame of student grades\n>>> grades = pd.DataFrame({\"Math\": math, \"English\": english})\n>>> grades\n\t      Math  English\nBarbara   52.0     73.0\nDavid     10.0     39.0\nEleanor   35.0      NaN\nGreg       NaN     26.0\nLauren     NaN     99.0\nMark      81.0     68.0\n\\end{lstlisting}\n\nNotice that \\li{pd.DataFrame} automatically lines up data from both \\li{Series} that have the same index.\nIf the data only appears in one of the \\li{Series}, the corresponding entry for the other \\li{Series} is \\li{NaN}.\n\nWe can also initialize a \\li{DataFrame} with a NumPy array.\nWith this method, the data is passed in as a 2-dimensional NumPy array, while the column labels and the index are passed in as parameters.\nThe first column label goes with the first column of the array, the second with the second, and so forth.\nThe index works similarly.\n\n\\begin{lstlisting}\n>>> import numpy as np\n# Initialize DataFrame with NumPy array. 
This is identical to the grades DataFrame above.\n>>> data = np.array([[52.0, 73.0], [10.0, 39.0], [35.0, np.nan],\n...\t\t[np.nan, 26.0], [np.nan, 99.0], [81.0, 68.0]])\n>>> grades = pd.DataFrame(data, columns = ['Math', 'English'], index =\n...\t\t['Barbara', 'David', 'Eleanor', 'Greg', 'Lauren', 'Mark'])\n\n# View the columns\n>>> grades.columns\nIndex(['Math', 'English'], dtype='object')\n\n# View the Index\n>>> grades.index\nIndex(['Barbara', 'David', 'Eleanor', 'Greg', 'Lauren', 'Mark'], dtype='object')\n\\end{lstlisting}\n\nA \\li{DataFrame} can also be viewed as a NumPy array using the attribute \\li{values}.\n\n\\begin{lstlisting}\n# View the DataFrame as a NumPy array\n>>> grades.values\narray([[ 52.,  73.],\n       [ 10.,  39.],\n       [ 35.,  nan],\n       [ nan,  26.],\n       [ nan,  99.],\n       [ 81.,  68.]])\n\\end{lstlisting}\n\n\\subsection*{Data I/O}\n\nThe pandas library has functions that make importing and exporting data simple.\nThe functions allow for a variety of file formats to be imported and exported, including CSV, Excel, HDF5, SQL, JSON, HTML, and pickle files.\n\n\\begin{table}[H]\n\\begin{tabular}{r|l}\nMethod & Description \\\\ \\hline\n\\begin{comment}\n\\li{describe()}  & Return a \\li{Series} describing the data structure \\\\\n\\li{head()}      & Return the first $n$ rows, defaulting to 5 \\\\\n\\li{tail()}      & Return the last $n$ rows, defaulting to 5 \\\\\n\\end{comment}\n\\li{to_csv()}    & Write the index and entries to a CSV file \\\\\n\\li{read_csv()}  & Read a csv and convert into a DataFrame\\\\\n\\li{to_json()}   & Convert the object to a JSON string \\\\\n\\li{to_pickle()} & Serialize the object and store it in an external file \\\\\n\\li{to_sql()}    & Write the object data to an open SQL database \\\\\n\\li{read_html()} & Read a table in an html page and convert to a DataFrame\\\\\n\\end{tabular}\n\\caption{Methods for exporting data in a pandas \\li{Series} or \\li{DataFrame}.}\n\\label{table:pandas-view-or-export}\n\\end{table}\n\nThe CSV (comma separated values) format is a simple way of storing tabular data\nin plain text.\nBecause CSV files are one of the most popular file formats for exchanging data, we will explore the \\li{read_csv()} function in more detail.\nSome frequently-used keyword arguments include the following:\n\n\\begin{itemize}\n\\item \\li{delimiter}:\nThe character that separates data fields. It is often a comma or a whitespace character.\n\\item \\li{header}: The row number (0 indexed) in the CSV file that contains the column names.\n \\item \\li{index_col}: The column (0 indexed) in the CSV file that is the index for the \\li{DataFrame}.\n \\item \\li{skiprows}:\nIf an integer $n$, skip the first $n$ rows of the file, and then start reading in the data.\nIf a list of integers, skip the specified rows.\n \\item \\li{names}:\nIf the CSV file does not contain the column names, or you wish to use other column names, specify them in a list.\n\\end{itemize}
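\n\nFor example, a hypothetical file \\li{data.csv} whose column names sit on its second row could be read in as follows (the file name and layout here are made up purely to illustrate the keywords):\n\n\\begin{lstlisting}\n# Use row 1 (0 indexed) as the header and column 0 as the index.\n>>> df = pd.read_csv('data.csv', delimiter=',', header=1, index_col=0)\n\n# Skip the first two rows and supply the column names manually.\n>>> df = pd.read_csv('data.csv', skiprows=2, header=None,\n...\t\tnames=['A', 'B', 'C'])\n\\end{lstlisting}\n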
\n\nAnother particularly useful function is \\li{read_html()}, which comes in handy when scraping data.\nIt takes in a URL or HTML file and an optional argument \\li{match}, a string or regex, and returns a list of the tables that match \\li{match}, each as a \\li{DataFrame}.\nWhile the resulting data will probably need to be cleaned, this is frequently much faster than scraping a website by hand.\n\n\\section*{Data Manipulation}\n\n\\subsection*{Accessing Data}\n\nIn general, the best way to access data in a \\li{Series} or \\li{DataFrame} is through the indexers \\li{loc} and \\li{iloc}.\nWhile array slicing can also be used, it is more efficient to use these indexers: bracket indexing has to check many cases before it can determine how to slice the data structure, whereas using \\li{loc} or \\li{iloc} explicitly bypasses these extra checks.\nThe \\li{loc} indexer selects rows and columns based on their labels, while \\li{iloc} selects them based on their integer position.\nWith these indexers, the first and second arguments refer to the rows and columns, respectively, just as in array slicing.\n\n\\begin{lstlisting}\n# Use loc to select the Math scores of David and Greg\n>>> grades.loc[['David', 'Greg'],'Math']\nDavid    10.0\nGreg      NaN\nName: Math, dtype: float64\n\n# Use iloc to select the Math scores of David and Greg\n>>> grades.iloc[[1,3], 0]\nDavid    10.0\nGreg      NaN\n\\end{lstlisting}\n\nTo access an entire column of a \\li{DataFrame}, the most efficient method is to use only square brackets and the name of the column, without the indexer.\nThis syntax can also be used to create a new column or reset the values of an entire column.\n\n\\begin{lstlisting}\n# Create a new History column with array of random values\n>>> grades['History'] = np.random.randint(0,100,6)\n>>> grades['History']\nBarbara     4\nDavid      92\nEleanor    25\nGreg       79\nLauren     82\nMark       27\nName: History, dtype: int64\n\n# Reset the column such that everyone has a 100\n>>> grades['History'] = 100.0\n>>> grades\n         Math  English  History\nBarbara  52.0     73.0    100.0\nDavid    10.0     39.0    100.0\nEleanor  35.0      NaN    100.0\nGreg      NaN     26.0    100.0\nLauren    NaN     99.0    100.0\nMark     81.0     68.0    100.0\n\\end{lstlisting}\n\nDatasets can often be very large and thus difficult to visualize.\nPandas has various methods to make this easier.\nThe methods \\li{head} and \\li{tail} will show the first or last $n$ rows, respectively, where $n$ defaults to 5.\nThe method \\li{sample} will draw $n$ random entries of the dataset, where $n$ defaults to 1.\n\\begin{lstlisting}\n# Use head to see the first n rows\n>>> grades.head(n=2)\n         Math  English  History\nBarbara  52.0     73.0    100.0\nDavid    10.0     39.0    100.0\n\n# Use sample to sample a random entry\n>>> grades.sample()\n        Math  
English  History\nLauren   NaN     99.0    100.0\n\\end{lstlisting}\nIt may also be useful to re-order the columns or rows or sort according to a\ngiven column.\n\\begin{lstlisting}\n# Re-order columns\n>>> grades.reindex(columns=['English','Math','History'])\n         English  Math  History\nBarbara     73.0  52.0    100.0\nDavid       39.0  10.0    100.0\nEleanor      NaN  35.0    100.0\nGreg        26.0   NaN    100.0\nLauren      99.0   NaN    100.0\nMark        68.0  81.0    100.0\n\n# Sort descending according to Math grades\n>>> grades.sort_values('Math', ascending=False)\n         Math  English  History\nMark     81.0     68.0    100.0\nBarbara  52.0     73.0    100.0\nEleanor  35.0      NaN    100.0\nDavid    10.0     39.0    100.0\nGreg      NaN     26.0    100.0\nLauren    NaN     99.0    100.0\n\\end{lstlisting}\nOther methods used for manipulating \\li{DataFrame} and \\li{Series} pandas structures can be found in Table \\ref{table:pandas-manage-data}.\n\\begin{table}[H]\n\\begin{tabular}{r|l}\nMethod & Description \\\\ \\hline\n\\li{append()} & Concatenate two or more \\li{Series}. \\\\\n\\li{drop()} & Remove the entries with the specified label or labels \\\\\n\\li{drop_duplicates()} & Remove duplicate values \\\\\n\\li{dropna()} & Drop null entries \\\\\n\\li{fillna()} & Replace null entries with a specified value or strategy \\\\\n\\li{reindex()} & Replace the index \\\\\n\\li{sample()} & Draw a random entry \\\\\n\\li{shift()} & Shift the index \\\\\n% \\li{truncate()} & \\\\\n\\li{unique()} & Return unique values \\\\\n% \\li{where()} & Search for entries that satisfy some criteria \\\\\n\\end{tabular}\n\\caption{Methods for managing or modifying data in a pandas \\li{Series} or \\li{DataFrame}.}\n\\label{table:pandas-manage-data}\n\\end{table}\n\n\\begin{problem}\nThe file \\li{budget.csv} contains the budget of a college student over the course of 4 years.\nWrite a function that performs the following operations in this order:\n\\begin{enumerate}\n\\item Read in \\li{budget.csv} as a \\li{DataFrame} with the index as column 0. 
Hint: Use \\li{index\\_col=0} to set the first column as the index when reading in the csv.\n\\item Reindex the columns such that amount spent on groceries is the first column and all other columns maintain the same ordering.\n\\item Sort the \\li{DataFrame} in descending order by how much money was spent on \\li{Groceries}.\n\\item Reset all values in the \\li{'Rent'} column to \\li{800.0}.\n\\item Reset all values in the first 5 data points to \\li{0.0}.\n\\end{enumerate}\nReturn the values of the updated \\li{DataFrame} as a NumPy array.\n\n\\label{prob:budget}\n\\end{problem}\n\n\\subsection*{Basic Data Manipulation}\nBecause the primary pandas data structures are based off of \\li{ndarray}, most NumPy functions work with pandas structures.\nFor example, basic vector operations work as would be expected:\n\n\\begin{lstlisting}\n# Sum history and english grades of all students\n>>> grades['English'] + grades['History']\nBarbara    173.0\nDavid      139.0\nEleanor      NaN\nGreg       126.0\nLauren     199.0\nMark       168.0\ndtype: float64\n\n# Double all Math grades\n>>>  grades['Math']*2\nBarbara    104.0\nDavid       20.0\nEleanor     70.0\nGreg         NaN\nLauren       NaN\nMark       162.0\nName: Math, dtype: float64\n\\end{lstlisting}\nIn addition to arithmetic, \\li{Series} has a variety of other methods similar to NumPy arrays.\nA collection of these methods is found in Table \\ref{table:pandas-numerical-methods}.\n\\begin{table}[H]\n\\begin{tabular}{r|l}\nMethod & Returns \\\\ \\hline\n\\li{<<abs>>()}     & Object with absolute values taken (of numerical data) \\\\\n\\li{idxmax()}  & The index label of the maximum value \\\\\n\\li{idxmin()}  & The index label of the minimum value \\\\\n\\li{count()}   & The number of non-null entries \\\\\n\\li{cumprod()} & The cumulative product over an axis \\\\\n\\li{cumsum()}  & The cumulative sum over an axis \\\\\n\\li{<<max>>()}     & The maximum of the entries \\\\\n\\li{mean()}    & The average of the entries \\\\\n\\li{median()}  & The median of the entries \\\\\n\\li{<<min>>()}     & The minimum of the entries \\\\\n\\li{mode()}    & The most common element(s) \\\\\n\\li{prod()}    & The product of the elements \\\\\n\\li{<<sum>>()}     & The sum of the elements \\\\\n\\li{var()}     & The variance of the elements \\\\\n\\end{tabular}\n\\caption{Numerical methods of the \\li{Series} and \\li{DataFrame} pandas classes.\n% Methods marked with a * are not methods of NumPy's \\li{ndarray} class.\n}\n\\label{table:pandas-numerical-methods}\n\\end{table}\n\n%The default\n% missing value \\li{NaN} is given for labels that are not shared by both inputs.\n\n\\subsection*{Basic Statistical Functions}\n\nThe pandas library allows us to easily calculate basic summary statistics of our data,\nwhich can be useful when we want a quick description of the data.\nThe \\li{describe()} function\noutputs several such summary statistics for each column in a \\li{DataFrame}:\n\\begin{lstlisting}\n# Use describe to better understand the data\n>>> grades.describe()\n            Math   English  History\ncount   4.000000   5.00000      6.0\nmean   44.500000  61.00000    100.0\nstd    29.827281  28.92231      0.0\nmin    10.000000  26.00000    100.0\n25%    28.750000  39.00000    100.0\n50%    43.500000  68.00000    100.0\n75%    59.250000  73.00000    100.0\nmax    81.000000  99.00000    100.0\n\\end{lstlisting}\n\nFunctions for calculating means and variances, the covariance and correlation matrices, and other\nbasic statistics are also 
available.\n\n\\begin{lstlisting}\n# Find the average grade for each student\n>>> grades.mean(axis=1)\nBarbara    75.000000\nDavid      49.666667\nEleanor    67.500000\nGreg       63.000000\nLauren     99.500000\nMark       83.000000\ndtype: float64\n\n# Give correlation matrix between subjects\n>>> grades.corr()\n            Math  English  History\nMath     1.00000  0.84996      NaN\nEnglish  0.84996  1.00000      NaN\nHistory      NaN      NaN      NaN\n\\end{lstlisting}\n\nThe method \\li{rank()} can be used to rank the values in a data set, either within each row or within each column.\nBy default this function ranks in ascending order: the smallest value is ranked 1 and the greatest value receives the highest rank.\n\n\\begin{lstlisting}\n# Rank each student's performance in their classes in descending order\n# (best to worst)\n# The method keyword specifies what rank to use when ties occur.\n>>> grades.rank(axis=1,method='max',ascending=False)\n         Math  English  History\nBarbara   3.0      2.0      1.0\nDavid     3.0      2.0      1.0\nEleanor   2.0      NaN      1.0\nGreg      NaN      2.0      1.0\nLauren    NaN      2.0      1.0\nMark      2.0      3.0      1.0\n\\end{lstlisting}\n\nThese methods can be very effective in interpreting data.\nFor example, the \\li{rank()} example above shows us that Barbara does best in History, then English, and then Math.\n\n\\subsection*{Dealing with Missing Data}\n\nMissing data is a ubiquitous problem in data science.\nFortunately, pandas is particularly well-suited to handling missing or anomalous data.\nAs we have already seen, the pandas default for a missing value is \\li{NaN}.\nIn basic arithmetic operations, if one of the operands is \\li{NaN}, then the output is also \\li{NaN}.\nIf we are not interested in the missing values, we can simply drop them from the data altogether, or we can fill them with some other value, such as the mean.\n\\li{NaN} might also mean something specific, such as some default value, which should inform what to do with \\li{NaN} values.\n\n\\begin{lstlisting}\n# Grades with all NaN values dropped\n>>> grades.dropna()\n         Math  English  History\nBarbara  52.0     73.0    100.0\nDavid    10.0     39.0    100.0\nMark     81.0     68.0    100.0\n\n# Fill missing data with 50.0\n>>> grades.fillna(50.0)\n         Math  English  History\nBarbara  52.0     73.0    100.0\nDavid    10.0     39.0    100.0\nEleanor  35.0     50.0    100.0\nGreg     50.0     26.0    100.0\nLauren   50.0     99.0    100.0\nMark     81.0     68.0    100.0\n\\end{lstlisting}\n\nWhen dealing with missing data, make sure you are aware of the behavior of the pandas functions you are using.\nFor example, \\li{<<sum()>>} and \\li{mean()} ignore \\li{NaN} values in the computation.\n\n\\begin{warn}\nAlways consider missing data carefully when analyzing a dataset.\nIt may not always be helpful to drop the data or fill it in with a random number.\nConsider filling the data with the mean of surrounding data or the mean of the feature in question.\nOverall, the choice for how to fill missing data should make sense with the dataset.\n\\end{warn}\n\n\\begin{problem}\nWrite a function which uses \\li{budget.csv} to answer the questions \"Which category affects living expenses the most? 
Which affects other expenses the most?\"\nPerform the following manipulations:\n\\begin{enumerate}\n\\item Fill all \\li{NaN} values with \\li{0.0}.\n\\item Create two new columns, \\li{'Living Expenses'} and \\li{'Other'}.\nSet the value of \\li{'Living Expenses'} to be the sum of the columns \\li{'Rent', 'Groceries', 'Gas'} and \\li{'Utilities'}.\nSet the value of \\li{'Other'} to be the sum of the columns \\li{'Dining Out', 'Out With Friends'} and \\li{'Netflix'}.\n\\item Identify which column, other than \\li{'Living Expenses'}, correlates most with \\li{'Living Expenses'} and which column, other than \\li{'Other'}, correlates most with \\li{'Other'}.\nThis can indicate which columns in the budget affect the overarching categories the most.\n\\end{enumerate}\nReturn the names of each of those columns as a tuple.\nThe first should be of the column corresponding to \\li{'Living Expenses'} and the second to \\li{'Other'}.\n\\end{problem}\n\n\\section*{Complex Operations in Pandas}\nOftentimes, the data that we have is not exactly the data we want to analyze.\nIn cases like this we use more complex data manipulation tools to access only the data that we need.\n\nFor the examples below, we will use the following data:\n\\begin{lstlisting}\n>>> name = ['Mylan', 'Regan', 'Justin', 'Jess', 'Jason', 'Remi', 'Matt',\n...\t\t'Alexander', 'JeanMarie']\n>>> sex = ['M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'F']\n>>> age = [20, 21, 18, 22, 19, 20, 20, 19, 20]\n>>> rank = ['Sp', 'Se', 'Fr', 'Se', 'Sp', 'J', 'J', 'J', 'Se']\n>>> ID = range(9)\n>>> aid = ['y', 'n', 'n', 'y', 'n', 'n', 'n', 'y', 'n']\n>>> GPA = [3.8, 3.5, 3.0, 3.9, 2.8, 2.9, 3.8, 3.4, 3.7]\n>>> mathID = [0, 1, 5, 6, 3]\n>>> mathGd = [4.0, 3.0, 3.5, 3.0, 4.0]\n>>> major = ['y', 'n', 'y', 'n', 'n']\n>>> studentInfo = pd.DataFrame({'ID': ID, 'Name': name, 'Sex': sex, 'Age': age,\n...\t\t'Class': rank})\n>>> otherInfo = pd.DataFrame({'ID': ID, 'GPA': GPA, 'Financial_Aid': aid})\n>>> mathInfo = pd.DataFrame({'ID': mathID, 'Grade': mathGd, 'Math_Major':\n...\t\tmajor})\n\\end{lstlisting}\nBefore querying our data, it is helpful to know some of its basic properties,\nsuch as the number of columns, the number of rows, and the datatypes of the columns.\nThis can be done by simply calling the \\li{info()} method on the desired\n\\li{DataFrame}:\n\n\\begin{lstlisting}\n>>> mathInfo.info()\n<class 'pandas.core.frame.DataFrame'>\nInt64Index: 5 entries, 0 to 4\nData columns (total 3 columns):\nGrade         5 non-null float64\nID            5 non-null int64\nMath_Major    5 non-null object\ndtypes: float64(1), int64(1), object(1)\n\\end{lstlisting}\n\n\\subsection*{Masks}\nSometimes we only want to access the data from a single column.\nFor example, if we want to access only the \\li{ID} of the students in the \\li{studentInfo} \\li{DataFrame}, we would use the following syntax.\n\\begin{lstlisting}\n   # Get the ID column from studentInfo\n   >>> studentInfo.ID # or studentInfo['ID']\n      ID\n   0   0\n   1   1\n   2   2\n   3   3\n   4   4\n   5   5\n   6   6\n   7   7\n   8   8\n\\end{lstlisting}\nIf we want to access multiple columns at once we can use a list of column names.\n\\begin{lstlisting}\n   # Get the ID and Age columns.\n   >>> studentInfo[['ID', 'Age']]\n      ID  Age\n   0   0   20\n   1   1   21\n   2   2   18\n   3   3   22\n   4   4   19\n   5   5   20\n   6   6   20\n   7   7   19\n   8   8   20\n\\end{lstlisting}\n\nNow we can access the specific columns that we want.\nHowever, some of these columns may still contain data points that we 
don't want to consider.\nIn this case, we can build a mask.\nEach mask that we build will return a pandas \\li{Series} object with a \\li{bool} value at each index indicating if the condition is satisfied.\n\n\\begin{lstlisting}\n   # Create a mask for all students receiving financial aid.\n   >>> mask = otherInfo['Financial_Aid'] == 'y'\n   # Access other info where the mask is true and display the ID and GPA columns.\n   >>> otherInfo[mask][['ID', 'GPA']]\n      ID  GPA\n   0   0  3.8\n   3   3  3.9\n   7   7  3.4\n\\end{lstlisting}\n\nWe can also create compound masks with multiple statements.\nWe do this using the same syntax you would use for a compound mask in a normal NumPy array.\nUseful operators are \\li{&}, the \\li{AND} operator; \\li{|}, the \\li{OR} operator; and $\\sim$, the \\li{NOT} operator.\n\n\\begin{lstlisting}\n   # Get all student names where Class = 'J' OR Class = 'Sp'.\n   >>> mask = (studentInfo.Class == 'J') | (studentInfo.Class == 'Sp')\n   >>> studentInfo[mask].Name\n   0        Mylan\n   4        Jason\n   5         Remi\n   6         Matt\n   7    Alexander\n   Name: Name, dtype: object\n   # This can also be accomplished with the following command:\n   # studentInfo[studentInfo['Class'].isin(['J','Sp'])]['Name']\n\\end{lstlisting}\n\n\\begin{problem}\\label{prob:rate}\nRead in the file \\li{crime_data.csv} as a pandas object.\nThe file contains data on types of crimes in the U.S. from 1960 to 2016.\nSet the index as the column \\li{'Year'}.\nAnswer the following questions using the pandas methods learned in this lab.\nThe answer of each question should be saved as indicated.\nReturn the answers to all three questions as a tuple (i.e. \\li{(answer_1,answer_2,answer_3)}).\n\n\\begin{enumerate}\n\t\\item Identify the three crimes that have a mean yearly number of occurrences over 1,500,000.\n\tOf these three crimes, which two are very correlated?\n\tWhich of these two crimes has a greater maximum value?\n\tSave the title of this column as a variable to return as the answer.\n\t\\item Examine the data from 2000 and later.\n\tSort this data (in ascending order) according to number of murders.\n\tFind the years where aggravated assault is greater than 850,000.\n   Save the indices (the years) of the masked and reordered \\li{DataFrame} as a NumPy array to return as the answer.\n\t\\item What year had the highest crime rate?\n\tIn this year, which crime was committed the most?\n\tWhat percentage of the total crime that year was it?\n\tSave this value as a float.\n\\end{enumerate}\n\\end{problem}\n\n% what used to be pandas 4\n\n\\section*{Working with Dates and Times} % =====================================\n\nThe \\li{datetime} module in the standard library provides a few tools for representing and operating on dates and times.\nThe \\li{datetime.datetime} object represents a \\emph{time stamp}: a specific time of day on a certain day.\nIts constructor accepts a four-digit year, a month (starting at 1 for January), a day, and, optionally, an hour, minute, second, and microsecond.\nEach of these arguments must be an integer, with the hour ranging from $0$ to $23$.\n\n\\begin{lstlisting}\n>>> from datetime import datetime\n\n# Represent November 18th, 1991, at 2:01 PM.\n>>> bday = datetime(1991, 11, 18, 14, 1)\n>>> print(bday)\n1991-11-18 14:01:00\n\n# Find the number of days between 11/18/1991 and 11/9/2017.\n>>> dt = datetime(2017, 11, 9) - bday\n>>> dt.days\n9487\n\\end{lstlisting}
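\nSubtracting two \\li{datetime} objects produces a \\li{datetime.timedelta}, and the reverse direction works as well: a \\li{timedelta} can be added to a \\li{datetime} to shift it.\nA minimal sketch (the specific offset here is arbitrary):\n\n\\begin{lstlisting}\n>>> from datetime import timedelta\n\n# Shift a time stamp forward by two weeks and three hours.\n>>> print(bday + timedelta(weeks=2, hours=3))\n1991-12-02 17:01:00\n\\end{lstlisting}\n\nThe \\li{datetime.datetime} object has a parser method, \\li{strptime()}, that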
converts a string into a new \\li{datetime.datetime} object.\nThe parser is flexible, so the user must specify the format that the dates are in.\nFor example, if the dates are in the format \\li{\"Month/Day//Year::Hour\"}, specify \\li{format=\"\\%m/\\%d//\\%Y::\\%H\"} to parse the string appropriately.\nSee Table \\ref{table:date_formats} for formatting options.\n\n\\begin{table}[H]\n\\begin{center}\n    \\begin{tabular}{c|l}\n        Pattern & Description \\\\ \\hline\n        \\li{\\%Y} & 4-digit year \\\\\n        \\li{\\%y} & 2-digit year \\\\\n        \\li{\\%m} & 1- or 2-digit month \\\\\n        \\li{\\%d} & 1- or 2-digit day \\\\\n        \\li{\\%H} & Hour (24-hour) \\\\\n        \\li{\\%I} & Hour (12-hour) \\\\\n        \\li{\\%M} & 2-digit minute \\\\\n        \\li{\\%S} & 2-digit second \\\\\n    \\end{tabular}\n\\end{center}\n\\caption{Formats recognized by \\li{datetime.strptime()}}\n\\label{table:date_formats}\n\\end{table}\n\n\\begin{lstlisting}\n>>> print(datetime.strptime(\"1991-11-18 / 14:01\", \"%Y-%m-%d / %H:%M\"),\n...       datetime.strptime(\"1/22/1996\", \"%m/%d/%Y\"),\n...       datetime.strptime(\"19-8, 1998\", \"%d-%m, %Y\"), sep='\\n')\n1991-11-18 14:01:00                 # The date formats are now standardized.\n1996-01-22 00:00:00                 # If no hour/minute/seconds data is given,\n1998-08-19 00:00:00                 # the default is midnight.\n\\end{lstlisting}\n\n\\subsection*{Converting Dates to an Index} % ----------------------------------\n\nThe \\li{Timestamp} class is the pandas equivalent to a \\li{datetime.datetime} object.\nA pandas index composed of \\li{Timestamp} objects is a \\li{DatetimeIndex}, and a \\li{Series} or \\li{DataFrame} with a \\li{DatetimeIndex} is called a \\emph{time series}.\nThe function \\li{pd.to_datetime()} converts a collection of dates in a parsable format to a \\li{DatetimeIndex}.\nThe format of the dates is inferred if possible, but it can be specified explicitly with the same syntax as \\li{datetime.strptime()}.\n\n\\begin{lstlisting}\n>>> import pandas as pd\n\n# Convert some dates (as strings) into a DatetimeIndex.\n>>> dates = [\"2010-1-1\", \"2010-2-1\", \"2012-1-1\", \"2012-1-2\"]\n>>> pd.to_datetime(dates)\n<<DatetimeIndex(['2010-01-01', '2010-02-01', '2012-01-01', '2012-01-02'],\n                dtype='datetime64[ns]', freq=None)>>\n\n# Create a time series, specifying the format for the DatetimeIndex.\n>>> dates = [\"1/1, 2010\", \"1/2, 2010\", \"1/1, 2012\", \"1/2, 2012\"]\n>>> date_index = pd.to_datetime(dates, format=\"%m/%d, %Y\")\n>>> pd.Series([x**2 for x in range(4)], index=date_index)\n<<2010-01-01    0\n2010-01-02    1\n2012-01-01    4\n2012-01-02    9\ndtype: int64>>\n\\end{lstlisting}\n\n\\begin{problem} % Load, clean, and plot Dow Jones IA data.\nThe file \\texttt{DJIA.csv} contains daily closing values of the Dow Jones Industrial Average from 2006 to 2016.\nRead the data into a \\li{Series} or \\li{DataFrame} with a \\li{DatetimeIndex} as the index.\nDrop any rows without numerical values, cast the \\li{\"VALUE\"} column to floats, then return the updated DataFrame.\n\nHint: You can change the column type the same way you'd change a numpy array type.\n\\label{prob:timeseries-dowjones}\n\\end{problem}
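\nOne possible approach is sketched below.\nIt is not the only solution, and it makes two assumptions: that the dates live in the first column of the file (hence \\li{index_col=0}), and that missing entries are non-numeric strings, so that \\li{pd.to_numeric()} with \\li{errors=\"coerce\"} turns them into \\li{NaN} values which can then be dropped.\n\n\\begin{lstlisting}\n# A sketch, not a reference solution.\n>>> djia = pd.read_csv(\"DJIA.csv\", index_col=0, parse_dates=True)\n>>> djia[\"VALUE\"] = pd.to_numeric(djia[\"VALUE\"], errors=\"coerce\")\n>>> djia = djia.dropna()            # drop rows without numerical values\n\\end{lstlisting}\n\n\\subsection*{Generating Time-based Indices} % ---------------------------------\n\nSome time series datasets come without explicit labels but have instructions for deriving timestamps.\nFor example, a list of bank account balances might have records from the beginning of every month, or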
heart rate readings could be recorded by an app every 10 minutes.\nUse \\li{pd.date\\_range()} to generate a \\li{DatetimeIndex} where the timestamps are equally spaced.\nThe function is analogous to \\li{np.arange()} and has the following parameters:\n\\begin{table}[H]\n\\begin{center}\n    \\begin{tabular}{r|l}\n        Parameter & Description \\\\ \\hline\n        \\li{start} & Starting date \\\\\n        \\li{end} & End date \\\\\n        \\li{periods} & Number of dates to include \\\\\n        \\li{freq} & Amount of time between consecutive dates \\\\\n        \\li{normalize} & Normalizes the start and end times to midnight \\\\\n    \\end{tabular}\n\\end{center}\n\\caption{Parameters for \\li{pd.date_range()}.}\n\\label{table:date_params}\n\\end{table}\n\nExactly three of the parameters \\li{start}, \\li{end}, \\li{periods}, and \\li{freq} must be specified to generate a range of dates.\nThe \\li{freq} parameter accepts a variety of string representations, referred to as \\emph{offset aliases}.\nSee Table \\ref{table:range_freqs} for a sampling of some of the options.\nFor a complete list of the options, see \\url{https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#timeseries-offset-aliases}.\n\n\\begin{table}[H]\n\\begin{center}\n    \\begin{tabular}{r|l}\n        Parameter & Description \\\\ \\hline\n        \\li{\"D\"} & calendar daily (default) \\\\\n        \\li{\"B\"} & business daily (every business day)\\\\\n        \\li{\"H\"} & hourly \\\\\n        \\li{\"T\"} & minutely \\\\\n        \\li{\"S\"} & secondly \\\\\n        \\li{\"MS\"} & first day of the month (Month Start) \\\\\n        \\li{\"BMS\"} & first business day of the month (Business Month Start)\\\\\n        \\li{\"W-MON\"} & every Monday (Week-Monday)\\\\\n        \\li{\"WOM-3FRI\"} & every 3rd Friday of the month (Week of the Month - 3rd Friday)\\\\\n    \\end{tabular}\n\\end{center}\n\\caption{Options for the \\li{freq} parameter to \\li{pd.date_range()}.}\n\\label{table:range_freqs}\n\\end{table}\n\n\\begin{lstlisting}\n# Create a DatetimeIndex for 5 consecutive days starting on September 28, 2016.\n>>> pd.date_range(start='9/28/2016 16:00', periods=5)\n<<DatetimeIndex(['2016-09-28 16:00:00', '2016-09-29 16:00:00',\n               '2016-09-30 16:00:00', '2016-10-01 16:00:00',\n               '2016-10-02 16:00:00'],\n              dtype='datetime64[ns]', freq='D')>>\n\n# Create a DatetimeIndex with the first weekday of every other month in 2016.\n>>> pd.date_range(start='1/1/2016', end='1/1/2017', freq=\"2BMS\" )\n<<DatetimeIndex(['2016-01-01', '2016-03-01', '2016-05-02', '2016-07-01',\n               '2016-09-01', '2016-11-01'],\n              dtype='datetime64[ns]', freq='2BMS')>>\n\n\n# Create a DatetimeIndex for 10 minute intervals between 4:00 PM and 4:30 PM on September 28, 2016.\n>>> pd.date_range(start='9/28/2016 16:00',\n            end='9/28/2016 16:30', freq=\"10T\")\n<<DatetimeIndex(['2016-09-28 16:00:00', '2016-09-28 16:10:00',\n               '2016-09-28 16:20:00', '2016-09-28 16:30:00'],\n              dtype='datetime64[ns]', freq='10T')>>\n\n# Create a DatetimeIndex for 2 hour 30 minute intervals between 4:30 PM on September 28 and 2:30 AM on September 29, 2016.\n>>> pd.date_range(start='9/28/2016 16:30', periods=5, freq=\"2h30min\")\n<<DatetimeIndex(['2016-09-28 16:30:00', '2016-09-28 19:00:00',\n               '2016-09-28 21:30:00', '2016-09-29 00:00:00',\n               '2016-09-29 02:30:00'],\n              dtype='datetime64[ns]', freq='150T')>>\n\\end{lstlisting}\n\n\\begin{problem}
% Use pd.date_range() for something.\nThe file \\texttt{paychecks.csv} contains values of an hourly employee's last 93 paychecks.\nPaychecks are given every other Friday, starting on March 14, 2008, and the employee started working on March 13, 2008.\n\nRead in the data, using \\li{pd.date\\_range()} to generate the \\li{DatetimeIndex}.\nSet this as the new index of the \\li{DataFrame} and return the \\li{DataFrame}.\n\\end{problem}\n\n\\section*{Elementary Time Series Analysis}\n\\subsection*{Shifting}\n\n\\li{DataFrame} and \\li{Series} objects have a \\li{shift()} method that allows you to move data up or down relative to the index.\nWhen dealing with time series data, we can also shift the \\li{DatetimeIndex} relative to a time offset.\n\n\\begin{lstlisting}\n>>> df = pd.DataFrame(dict(VALUE=np.random.rand(5)),\n                index=pd.date_range(\"2016-10-7\", periods=5, freq='D'))\n>>> df\n<<               VALUE\n2016-10-07  0.127895\n2016-10-08  0.811226\n2016-10-09  0.656711\n2016-10-10  0.351431\n2016-10-11  0.608767>>\n\n>>> df.shift(1)\n<<               VALUE\n2016-10-07       NaN\n2016-10-08  0.127895\n2016-10-09  0.811226\n2016-10-10  0.656711\n2016-10-11  0.351431>>\n\n>>> df.shift(-2)\n<<               VALUE\n2016-10-07  0.656711\n2016-10-08  0.351431\n2016-10-09  0.608767\n2016-10-10       NaN\n2016-10-11       NaN>>\n\n>>> df.shift(14, freq=\"D\")\n<<               VALUE\n2016-10-21  0.127895\n2016-10-22  0.811226\n2016-10-23  0.656711\n2016-10-24  0.351431\n2016-10-25  0.608767>>\n\\end{lstlisting}\n\nShifting data makes it easy to gather statistics about changes from one timestamp or period to the next.\n\n\\begin{lstlisting}\n# Find the changes from one period/timestamp to the next\n>>> df - df.shift(1)            # Equivalent to df.diff().\n               VALUE\n2016-10-07       NaN\n2016-10-08  0.683331\n2016-10-09 -0.154516\n2016-10-10 -0.305279\n2016-10-11  0.257336\n\\end{lstlisting}\n\n\\begin{problem}\nCompute the following information about the DJIA dataset from Problem \\ref{prob:timeseries-dowjones} that has a DatetimeIndex.\n\\begin{itemize}\n    \\item The single day with the largest gain.\n    \\item The single day with the largest loss.\n\\end{itemize}\nReturn the DatetimeIndex of the day with the largest gain and the day with the largest loss.\n\n(Hint: Call your function from Problem \\ref{prob:timeseries-dowjones} to get the DataFrame already cleaned and with a DatetimeIndex).\n\\end{problem}
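\nA possible line of attack, assuming \\li{djia} is the cleaned DataFrame from Problem \\ref{prob:timeseries-dowjones}: take day-to-day differences with \\li{diff()} (equivalent to subtracting a shifted copy, as above), then locate the extremes with \\li{idxmax()} and \\li{idxmin()}.\n\n\\begin{lstlisting}\n# A sketch of one approach, not a reference solution.\n>>> change = djia[\"VALUE\"].diff()   # day-to-day changes\n>>> change.idxmax()                 # date of the largest gain\n>>> change.idxmin()                 # date of the largest loss\n\\end{lstlisting}\n\nMore information on how to use \\li{datetime} with Pandas is in the additional material section.\nThis includes working with \\li{Periods} and more analysis with time series.\n\n\\pagebreak\n\n\\section*{Additional Material}\n\\subsection*{SQL Operations in pandas} % =========================================\n\n\\li{DataFrames} are tabular data structures bearing an obvious resemblance to a typical relational\ndatabase table.\nSQL is the standard for working with relational databases; however, pandas can accomplish many of the same tasks as SQL.\nThe SQL-like functionality of pandas is\none of its biggest advantages, eliminating the need to switch between programming languages\nfor different tasks.\nWithin pandas, we can handle both the querying \\emph{and} data analysis.\n\nFor the examples below, we will use the following data:\n\\begin{lstlisting}\n>>> name = ['Mylan', 'Regan', 'Justin', 'Jess', 'Jason', 'Remi', 'Matt',\n...\t\t'Alexander', 'JeanMarie']\n>>> sex = ['M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'F']\n>>> age = [20, 21, 18, 22, 19, 20, 20, 19, 20]\n>>> 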
rank = ['Sp', 'Se', 'Fr', 'Se', 'Sp', 'J', 'J', 'J', 'Se']\n>>> ID = range(9)\n>>> aid = ['y', 'n', 'n', 'y', 'n', 'n', 'n', 'y', 'n']\n>>> GPA = [3.8, 3.5, 3.0, 3.9, 2.8, 2.9, 3.8, 3.4, 3.7]\n>>> mathID = [0, 1, 5, 6, 3]\n>>> mathGd = [4.0, 3.0, 3.5, 3.0, 4.0]\n>>> major = ['y', 'n', 'y', 'n', 'n']\n>>> studentInfo = pd.DataFrame({'ID': ID, 'Name': name, 'Sex': sex, 'Age': age,\n...\t\t'Class': rank})\n>>> otherInfo = pd.DataFrame({'ID': ID, 'GPA': GPA, 'Financial_Aid': aid})\n>>> mathInfo = pd.DataFrame({'ID': mathID, 'Grade': mathGd, 'Math_Major':\n...\t\tmajor})\n\\end{lstlisting}\n\nSQL SELECT statements can be done by column indexing.\nWHERE statements can be included by adding masks (just like in a NumPy array).\nThe method \\li{isin()} can also provide a useful WHERE statement.\nThis method accepts a list, dictionary, or \\li{Series} containing possible values of the \\li{DataFrame} or \\li{Series}.\nWhen called, it returns a \\li{Series} of booleans, indicating whether an entry contains a value in the argument passed into \\li{isin()}.\n\n\n\\begin{lstlisting}\n# SELECT ID, Age FROM studentInfo\n>>> studentInfo[['ID', 'Age']]\n   ID  Age\n0   0   20\n1   1   21\n2   2   18\n3   3   22\n4   4   19\n5   5   20\n6   6   20\n7   7   19\n8   8   20\n\n# SELECT ID, GPA FROM otherInfo WHERE Financial_Aid = 'y'\n>>> mask = otherInfo['Financial_Aid'] == 'y'\n>>> otherInfo[mask][['ID', 'GPA']]\n   ID  GPA\n0   0  3.8\n3   3  3.9\n7   7  3.4\n\n# SELECT Name FROM studentInfo WHERE Class = 'J' OR Class = 'Sp'\n>>> studentInfo[studentInfo['Class'].isin(['J','Sp'])]['Name']\n0        Mylan\n4        Jason\n5         Remi\n6         Matt\n7    Alexander\nName: Name, dtype: object\n\\end{lstlisting}\n\nNext, let's look at JOIN statements.\nIn pandas, this is done with the \\li{merge} function.\n\\li{merge} takes the two \\li{DataFrame} objects to join as parameters, as well as keyword arguments specifying\nthe column on which to join and the type of join (left, right, inner, outer).\n\n\\begin{lstlisting}\n# SELECT * FROM studentInfo INNER JOIN mathInfo ON studentInfo.ID = mathInfo.ID\n>>> pd.merge(studentInfo, mathInfo, on='ID') # INNER JOIN is the default\n   Age Class  ID    Name Sex  Grade Math_Major\n0   20    Sp   0   Mylan   M    4.0          y\n1   21    Se   1   Regan   F    3.0          n\n2   22    Se   3    Jess   F    4.0          n\n3   20     J   5    Remi   F    3.5          y\n4   20     J   6    Matt   M    3.0          n\n[5 rows x 7 columns]\n\n# SELECT GPA, Grade FROM otherInfo FULL OUTER JOIN mathInfo ON otherInfo.\n# ID = mathInfo.ID\n>>> pd.merge(otherInfo, mathInfo, on='ID', how='outer')[['GPA', 'Grade']]\n   GPA  Grade\n0  3.8    4.0\n1  3.5    3.0\n2  3.0    NaN\n3  3.9    4.0\n4  2.8    NaN\n5  2.9    3.5\n6  3.8    3.0\n7  3.4    NaN\n8  3.7    NaN\n[9 rows x 2 columns]\n\\end{lstlisting}
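\nGROUP BY queries also have a close pandas analogue in the \\li{groupby()} method.\nA small sketch on the same data (aggregations other than \\li{mean()} work the same way):\n\n\\begin{lstlisting}\n# SELECT Class, AVG(Age) FROM studentInfo GROUP BY Class\n>>> studentInfo.groupby('Class')['Age'].mean()\nClass\nFr    18.000000\nJ     19.666667\nSe    21.000000\nSp    19.500000\nName: Age, dtype: float64\n\\end{lstlisting}\n\n\\subsection*{More Datetime with Pandas}\n\\subsection*{Periods} % ==========================================================\n\nA pandas \\li{Timestamp} object represents a precise moment in time on a given day.\nSome data, however, is recorded over a time interval, and it wouldn't make sense to place an exact timestamp on any of the measurements.\nFor example, a record of the number of steps walked in a day, box office earnings per week, quarterly earnings, and so on.\nThis kind of data is better represented with the pandas \\li{Period} object and the corresponding \\li{PeriodIndex}.\n\nThe \\li{Period} class accepts a \\li{value} and a \\li{freq}.\nThe \\li{value}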
parameter indicates the label for a given \\li{Period}.\nThis label is tied to the \\textbf{end} of the defined \\li{Period}.\nThe \\li{freq} indicates the length of the \\li{Period} and in some cases can also indicate the offset of the \\li{Period}.\nThe default value for \\li{freq} is \"M\" for months.\nThe \\li{freq} parameter accepts the majority, but not all, of the frequencies listed in Table \\ref{table:range_freqs}.\n\n\\begin{lstlisting}\n# Creates a period for the month of Oct, 2016.\n>>> p1 = pd.Period(\"2016-10\")\n>>> p1.start_time                   # The start and end times of the period\n<<Timestamp('2016-10-01 00:00:00')>>    # are recorded as Timestamps.\n>>> p1.end_time\n<<Timestamp('2016-10-31 23:59:59.999999999')>>\n\n# Represent the annual period ending in December that includes 10/03/2016.\n>>> p2 = pd.Period(\"2016-10-03\", freq=\"A-DEC\")\n>>> p2.start_time\n<<Timestamp('2016-01-01 00:00:00')>>\n>>> p2.end_time\n<<Timestamp('2016-12-31 23:59:59.999999999')>>\n\n# Get the weekly period ending on a Saturday that includes 10/03/2016.\n>>> print(pd.Period(\"2016-10-03\", freq=\"W-SAT\"))\n<<2016-10-02/2016-10-08>>\n\\end{lstlisting}\n\nLike the \\li{pd.date\\_range()} method, the \\li{pd.period\\_range()} method is useful for generating a \\li{PeriodIndex} for unindexed data.\nThe syntax is essentially identical to that of \\li{pd.date\\_range()}.\nWhen using \\li{pd.period\\_range()}, remember that the \\li{freq} parameter marks the end of the period.\nAfter creating a \\li{PeriodIndex}, the \\li{freq} parameter can be changed via the \\li{asfreq()} method.\n\n\\begin{lstlisting}\n# Represent quarters from 2008 to 2010, with Q4 ending in December.\n>>> pd.period_range(start=\"2008\", end=\"2010-12\", freq=\"Q-DEC\")\n<<PeriodIndex(['2008Q1', '2008Q2', '2008Q3', '2008Q4', '2009Q1', '2009Q2',\n             '2009Q3', '2009Q4', '2010Q1', '2010Q2', '2010Q3', '2010Q4'],\n            dtype='period[Q-DEC]', freq='Q-DEC')>>\n\n# Get every three months from March 2010 to the start of 2011.\n>>> p = pd.period_range(\"2010-03\", \"2011\", freq=\"3M\")\n>>> p\n<<PeriodIndex(['2010-03', '2010-06', '2010-09', '2010-12'],\n            dtype='period[3M]', freq='3M')>>\n\n# Change the frequency to be quarterly.\n>>> p = p.asfreq(\"Q-DEC\")\n>>> p\n<<PeriodIndex(['2010Q2', '2010Q3', '2010Q4', '2011Q1'],\n            dtype='period[Q-DEC]', freq='Q-DEC')>>\n\\end{lstlisting}\n\nThe bounds of a \\li{PeriodIndex} object can be shifted by adding or subtracting an integer $n$; the\n\\li{PeriodIndex} will be shifted by $n$ $\\times$ \\li{freq}.\n\n\\begin{lstlisting}\n# Shift the index by 1\n>>> p -= 1\n>>> p\nPeriodIndex(['2010Q1', '2010Q2', '2010Q3', '2010Q4'],\n            dtype='period[Q-DEC]', freq='Q-DEC')\n\\end{lstlisting}\n\nIf for any reason you need to switch from periods to timestamps, pandas provides a very simple method to do so.\nThe \\li{how} parameter can be \\li{start} or \\li{end} and determines if the timestamp is the beginning or the end of the period.\nSimilarly, you can switch from timestamps to periods.\n\n\\begin{lstlisting}\n# Convert to timestamps (last day of each quarter)\n>>> p = p.to_timestamp(how='end')\n>>> p\nDatetimeIndex(['2010-03-31', '2010-06-30', '2010-09-30', '2010-12-31'],\n                dtype='datetime64[ns]', freq='Q-DEC')\n\n>>> p.to_period(\"Q-DEC\")\nPeriodIndex(['2010Q1', '2010Q2', '2010Q3', '2010Q4'],\n            dtype='period[Q-DEC]', freq='Q-DEC')\n\n\\end{lstlisting}\n\n\\begin{comment}\n\\begin{problem} %\nThe file \\texttt{finances.csv} contains a list of simulated quarterly earnings and
expense totals from a fictional company.\nLoad the data into a \\li{Series} or \\li{DataFrame} with a \\li{PeriodIndex} with a quarterly frequency.\nLet \\li{how='end'}.\nAssume the fiscal year starts at the beginning of September and that the data begins in September 1978.\nReturn the \\li{DataFrame}.\n\\end{problem}\n\\end{comment}\n\\subsection*{Operations on Time Series}\n\nThere are certain operations only available to Series and DataFrames that have a \\li{DatetimeIndex}. A sampling of this functionality is described throughout the remainder of this lab.\n\\subsubsection*{Slicing}\n\nSlicing is much more flexible in pandas for time series. We can slice by year, by month, or even use traditional slicing syntax to select a range of dates.\n\n% TODO: initialize df using pd.date_range().\n\\begin{lstlisting}\n# Select all rows in a given year\n>>> df[\"2010\"]\n                   0         1\n2010-01-01  0.566694  1.093125\n2010-02-01 -0.219856  0.852917\n2010-03-01  1.511347 -1.324036\n\n# Select all rows in a given month of a given year\n>>> df[\"2012-01\"]\n                   0         1\n2012-01-01  0.212141  0.859555\n2012-01-02  1.483123 -0.520873\n2012-01-03  1.436843  0.596143\n\n# Select a range of dates using traditional slicing syntax\n>>> df[\"2010-1-2\":\"2011-12-31\"]\n                   0         1\n2010-02-01 -0.219856  0.852917\n2010-03-01  1.511347 -1.324036\n2011-01-01  0.300766  0.934895\n\\end{lstlisting}\n\n\\subsubsection*{Resampling} % ----------------------------------------------------\n\nSome datasets do not have datapoints at a fixed frequency.\nFor example, a dataset of website traffic has datapoints that occur at irregular intervals.\nIn situations like these, \\emph{resampling} can help provide insight on the data.\n\nThe two main forms of resampling are \\emph{downsampling}, aggregating data into fewer intervals, and \\emph{upsampling}, adding more intervals.\n\nTo downsample, use the \\li{resample()} method of the \\li{Series} or \\li{DataFrame}.\nThis method is similar to \\li{groupby()} in that it groups different entries together.\nThen aggregation produces a new data set.\nThe first parameter to \\li{resample()} is an offset string from Table \\ref{table:range_freqs}: \\li{\"D\"} for daily, \\li{\"H\"} for hourly, and so on.\n\n\\begin{lstlisting}\n>>> import numpy as np\n\n# Get random data for every day from 2000 through 2009.\n>>> dates = pd.date_range(start=\"2000-1-1\", end='2009-12-31', freq='D')\n>>> df = pd.Series(np.random.random(len(dates)), index=dates)\n>>> df\n2000-01-01    0.559\n2000-01-02    0.874\n2000-01-03    0.774\n                ...\n2009-12-29    0.837\n2009-12-30    0.472\n2009-12-31    0.211\nFreq: D, Length: 3653, dtype: float64\n\n# Group the data by year.\n>>> years = df.resample(\"A\")        # 'A' for 'annual'.\n>>> years.agg(len)                  # Number of entries per year.\n2000-12-31    366.0\n2001-12-31    365.0\n2002-12-31    365.0\n                ...\n2007-12-31    365.0\n2008-12-31    366.0\n2009-12-31    365.0\nFreq: A-DEC, dtype: float64\n\n>>> years.mean()                    # Average entry by year.\n2000-12-31    0.491\n2001-12-31    0.514\n2002-12-31    0.484\n                ...\n2007-12-31    0.508\n2008-12-31    0.521\n2009-12-31    0.523\nFreq: A-DEC, dtype: float64\n\n# Group the data by month.\n>>> months = df.resample(\"M\")\n>>> len(months.mean())              # 12 months x 10 years = 120 months.\n120\n\\end{lstlisting}
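\nUpsampling goes through the same interface, but since the finer intervals have no data of their own, a fill rule must be chosen.\nA sketch reusing the names above (\\li{ffill()} repeats the most recent value; interpolation is another common choice):\n\n\\begin{lstlisting}\n# Upsample the monthly means to daily frequency,\n# filling each new day with the most recent monthly value.\n>>> months.mean().resample(\"D\").ffill()\n\\end{lstlisting}\n\n\\begin{comment}\n\\begin{problem} % Downsample\nThe file 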
\\texttt{website\\_traffic.csv} contains records for different visits to a fictitious website.\nRead in the data, calculate the duration of each visit in seconds and convert the index to a \\li{DatetimeIndex}.\nUse downsampling to calculate the number of visits each minute and the number of visits each hour.\nReturn these DataFrames.\n\n(Hint: \\li{minute.agg(func)} returns a DataFrame).\n\\end{problem}\n\\end{comment}\n\\subsection*{Elementary Time Series Analysis} % ==================================\n\n\\subsubsection*{Rolling Functions and Exponentially-Weighted Moving Functions}\n\nMany time series are inherently noisy.\nTo analyze general trends in data, we use \\emph{rolling functions} and \\emph{exponentially-weighted moving (EWM)} functions.\nRolling functions, or \\emph{moving window functions}, perform a calculation on a window of data.\nThere are a few rolling functions that come standard with pandas.\n\n\\subsubsection*{Rolling Functions (Moving Window Functions)}\n\nOne of the most commonly used rolling functions is the \\emph{rolling average}, which takes the average value over a window of data.\n\n\\begin{lstlisting}\n# Generate a time series using a random walk from a uniform distribution.\nN = 10000\nbias = 0.01\ns = np.zeros(N)\ns[1:] = np.random.uniform(low=-1, high=1, size=N-1) + bias\ns = pd.Series(s.cumsum(),\n              index=pd.date_range(\"2015-10-20\", freq='H', periods=N))\n\n# Plot the original data together with a rolling average.\nax1 = plt.subplot(121)\ns.plot(color=\"gray\", lw=.3, ax=ax1)\ns.rolling(window=200).mean().plot(color='r', lw=1, ax=ax1)\nax1.legend([\"Actual\", \"Rolling\"], loc=\"lower right\")\nax1.set_title(\"Rolling Average\")\n\\end{lstlisting}\n\nThe function call \\li{s.rolling(window=200)} creates a \\li{pd.core.rolling.Window} object that can be aggregated with a function like \\li{mean()}, \\li{std()}, \\li{var()}, \\li{<<min>>()}, \\li{<<max>>()}, and so on.\n\n\\subsubsection*{Exponentially-Weighted Moving (EWM) Functions}\n\nWhereas a moving window function gives equal weight to the whole window, an \\emph{exponentially-weighted moving} function gives more weight to the most recent data points.\n\nIn the case of an \\emph{exponentially-weighted moving average} (EWMA), each data point is calculated as follows.\n\\[\nz_i = \\alpha \\bar{x}_i + (1 - \\alpha)z_{i-1},\n\\]\nwhere $z_i$ is the value of the EWMA at time $i$, $\\bar{x}_i$ is the average for the $i$-th window, and $\\alpha$ is the decay factor that controls the importance of previous data points.\nNotice that $\\alpha=1$ reduces to the rolling average.\n\nMore commonly, the decay is expressed as a function of the window size.\nIn fact, the \\li{span} for an EWMA is nearly analogous to the \\li{window} size for a rolling average.\n\nNotice the syntax for EWM functions is very similar to that of rolling functions.
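\nConcretely, pandas derives the decay factor from the span as $\\alpha = 2/(\\text{span}+1)$, so the two calls in the following sketch compute the same thing:\n\n\\begin{lstlisting}\n>>> s.ewm(span=200).mean()          # span syntax, as used below\n>>> s.ewm(alpha=2/201).mean()       # identical: alpha = 2 / (span + 1)\n\\end{lstlisting}\n\n\\begin{figure}[H] % Rolling average vs. 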
EWMA\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/moving_rolling.pdf}\n    \\caption{}\n    \\label{fig:pandas-ts-moving-rolling}\n\\end{subfigure}\n%\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/moving_ewma.pdf}\n    \\caption{}\n    \\label{fig:pandas-ts-moving-ewma}\n\\end{subfigure}\n\\caption{Rolling average and EWMA.}\n\\end{figure}\n\n\\begin{lstlisting}\nax2 = plt.subplot(122)\ns.plot(color=\"gray\", lw=.3, ax=ax2)\ns.ewm(span=200).mean().plot(color='g', lw=1, ax=ax2)\nax2.legend([\"Actual\", \"EWMA\"], loc=\"lower right\")\nax2.set_title(\"EWMA\")\n\\end{lstlisting}\n\n\\begin{comment}\n\\begin{problem}\nPlot the following from the DJIA dataset with a window or span of 30, 120, and 365.\n\\begin{itemize}\n    \\item The original data points.\n    \\item Rolling average.\n    \\item Exponential average.\n\\end{itemize}\nYour plots should look like Figure \\ref{fig:prob6}.\nReturn a list of the minimum rolling average value for each window size and a list of the minimum exponential average value for each span size.\n\\label{prob:rolling}\n\\end{problem}\n\n\\begin{figure}[H]\n\\includegraphics[width=\\textwidth]{figures/prob6.pdf}\n\\caption{Plots for Problem \\ref{prob:rolling}.}\n\\label{fig:prob6}\n\\end{figure}\n\\end{comment}\n", "meta": {"hexsha": "382442fd4ba4c4a665f48a9e5e12b768941c3384", "size": 49300, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DataScienceEssentials/Pandas1/Pandas1.tex", "max_stars_repo_name": "chrismmuir/Labs-1", "max_stars_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 190, "max_stars_repo_stars_event_min_datetime": "2015-07-17T01:57:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T19:16:19.000Z", "max_issues_repo_path": "DataScienceEssentials/Pandas1/Pandas1.tex", "max_issues_repo_name": "chrismmuir/Labs-1", "max_issues_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-07-16T17:56:06.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-06T23:47:14.000Z", "max_forks_repo_path": "DataScienceEssentials/Pandas1/Pandas1.tex", "max_forks_repo_name": "chrismmuir/Labs-1", "max_forks_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 76, "max_forks_repo_forks_event_min_datetime": "2015-08-06T02:53:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T11:08:57.000Z", "avg_line_length": 40.7775020678, "max_line_length": 221, "alphanum_fraction": 0.683346856, "num_tokens": 14751, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754607093178, "lm_q2_score": 0.7772998611746911, "lm_q1q2_score": 0.5658552245479346}}
{"text": "\\section{Runtime Complexity}\n\n\\begin{frame}{Runtime Complexity}\n  \\begin{itemize}\n    \\item\n      The runtime does not entirely depend on the size of the problem, but also\n      on the type of input\n    \\item\n      This results in:\n    \\begin{itemize}\n      \\item\n        {\\color{Mittel-Blau}Best runtime:}\\\\\n        Lowest possible runtime complexity for an input of size $n$\n      \\item\n        {\\color{Mittel-Blau}Worst runtime:}\\\\\n        Highest possible runtime complexity for an input of size $n$\n      \\item\n        {\\color{Mittel-Blau}Average / Expected runtime:}\\\\\n        The average of all runtime complexities for an input of size $n$\n    \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 1 - Conditions}\n  \\begin{itemize}\n    \\item\n      Input: Array $a$ with $n$ elements\n      $a[i] \\in \\mathbb{N}, \\; 1 \\leq a[i] \\leq n, \\; 0 \\leq i < n$\n    \\item\n      Output: Updated $a$ with $n$ elements where $a[0] \\neq 1$\n  \\end{itemize}\n  \\begin{tabularx}{\\textwidth}{P{11em}O{4em}O{1.75em}O{4.5em}O{1.75em}L}\n  if $a[0]\\,$ == 1: & $\\mathcal{O}(1)$ & {} & {} & {} & {}\\\\\n    \\hhline{~*{1}{-}}\n    $\\hspace*{1.5em}$ $a[0]\\,$ = 2 & $\\mathcal{O}(1)$ &\n    \\multirow{-2}{*}{$\\left.%\n      \\begin{array}{@{}c@{}}\\\\[1em]\\end{array}%\n      \\right\\rbrace$} &%\n    \\multirow{-2}{*}{$\\mathcal{O}(1)$} & {} & {}\\\\\n    else: & {} & {} & {} & {} & {}\\\\\n    $\\hspace*{1.5em}$ for $\\,i\\,$ in range(0, n): & $\\mathcal{O}(n)$ &\n    {} & {} & {} & {}\\\\\n    \\hhline{~*{1}{-}}\n    $\\hspace*{3em}$ $a[i]\\,$ = 2 & $\\mathcal{O}(1)$ &\n    \\multirow{-2}{*}{$\\left.\n      \\begin{array}{@{}c@{}}\\\\[1em]\\end{array}\n      \\right\\rbrace$} &\n    \\multirow{-2}{*}{\n      $\\begin{array}{@{}c@{}}\n      \\mathcal{O}(n) \\cdot \\mathcal{O}(1)\\\\\n      = \\mathcal{O}(n)\n      \\end{array}$\n    } &%\n    \\multirow{-5}{*}{$\\left.\n      \\begin{array}{@{}c@{}}\\\\[4.5em]\\end{array}\n      \\right\\rbrace$} &%\n    \\hspace*{-0.5em}%\n    \\multirow{-5}{*}{%\n      $\\mathcal{O}(?)$\n    }%\n  \\end{tabularx}\n  \\onslide<2->{\\begin{itemize}\n    \\item\n      {\\color{Mittel-Blau}Best runtime:}\n      $\\mathcal{O}(1) + \\mathcal{O}(1) = \\mathcal{O}(1)$\n    \\item\n      {\\color{Mittel-Blau}Worst runtime:}\n      $\\mathcal{O}(1) + \\mathcal{O}(n) = \\mathcal{O}(n)$\n  \\end{itemize}}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 1 - Average Runtime}\n  \\begin{itemize}\n    \\item\n      The {\\color{Mittel-Blau}average runtime} is determined by the average\n      runtime for all input instances of size $n$\n    \\item<2->\n      Every element of $a$ can have $n$ different values\\\\\n      \\begin{math}\n        \\hspace{1.5em}\\Rightarrow n \\cdot \\ldots \\cdot n = n^n\n      \\end{math}\n      different input instances of size $n$\n      \\begin{itemize}\n        \\item<3->\n          \\lstinline[\n            language=Python,\n            style={python-idle-code},\n            basicstyle=\\small\n            ]|a[0] == 1|\n          in $n^{n-1}$ instances\n        \\item<4->\n          \\lstinline[\n            language=Python,\n            style={python-idle-code},\n            basicstyle=\\small\n          ]|a[0] != 1|\n          in $n^n - n^{n-1} = n^{n-1} \\cdot (n-1)$ instances\n  
    \\end{itemize}\n    \\item<5->\n      Sum of all runtime complexities:\n      \\begin{displaymath}\n        \\underbrace{n^{n-1} \\cdot \\mathcal{O}(1)\\vphantom{()}}_{\n          \\lstinline[\n            language=Python,\n            style={python-idle-code},\n            basicstyle=\\small\n          ]|a[0] == 1|\n        } + \\underbrace{n^{n-1} \\cdot (n-1) \\cdot \\mathcal{O}(n)}_{\n          \\lstinline[\n            language=Python,\n            style={python-idle-code},\n            basicstyle=\\small\n          ]|a[0] != 1|\n        }\n      \\end{displaymath}\n    \\item<6->\n      {\\color{Mittel-Blau}Average runtime}: {\\footnotesize (normalize by\n      number of instances)}\n      \\begin{displaymath}\n        \\frac{n^{n-1} + n^{n-1} \\cdot (n-1) \\cdot n}{n^n}\n        = \\frac{1}{n} + n - 1\n        \\in \\mathcal{O}(n)\n      \\end{displaymath}\n  \\end{itemize}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Binary Addition}\n  \\begin{itemize}\n    \\item\n      Input: binary number $b$ with $n$ digits\n    \\item\n      Output: binary number $b+1$ with $n$ digits\n    \\item<2->\n      Runtime of the algorithm is determined by the number of bits getting\n      changed (steps)\n      \\begin{enumerate}\n        \\item \"0\" $\\to$ \"1\"\n        \\item \"1\" $\\to$ \"0\"\n      \\end{enumerate}\n    \\item<3->\n      {\\color{Mittel-Blau}Best runtime:}\n      $1\\,\\text{step} = \\mathcal{O}(1)$\n    \\item<3->\n      {\\color{Mittel-Blau}Worst runtime:}\n      $n\\,\\text{steps} = \\mathcal{O}(n)$\n  \\end{itemize}\n  \\vspace{-1.0em}%\n  \\begin{table}[!h]%\n    \\caption{Binary addition}%\n    \\label{tab:runtime:binary_addition}%\n    \\begin{tabular}{crrc}%\n      Digits ($n$) & Input & Output & Steps\\\\\n      \\midrule\n      10 & 1011100100 & 1011100101 & 1\\\\\n      4 & 1011 & 1100 & 3\\\\\n      8 & 11111111 & 00000000 & 8\n    \\end{tabular}\n  \\end{table}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Average Steps}\n  \\vspace{-3em}\n  \\begin{columns}\n    \\begin{column}[t]{0.5\\linewidth}\n      \\begin{table}[!t]%\n        \\caption{Binary addition with $n=1$}%\n        \\label{tab:runtime:binary_addition_one}%\n        \\begin{tabular}{rrc}%\n          Input & Output & Steps\\\\\n          \\midrule\n          0 & 1 & 1\\\\\n          1 & 0 & 1\n        \\end{tabular}\n      \\end{table}\n    \\end{column}\n    \\begin{column}[t]{0.5\\linewidth}\n      \\onslide<2->\n      \\begin{table}[!t]%\n        \\caption{Binary addition with $n=2$}%\n        \\label{tab:runtime:binary_addition_two}%\n        \\begin{tabular}{rrc}%\n          Input & Output & Steps\\\\\n          \\midrule\n          00 & 01 & 1\\\\\n          01 & 10 & 2\\\\\n          10 & 11 & 1\\\\\n          11 & 00 & 2\n        \\end{tabular}\n      \\end{table}\n    \\end{column}\n  \\end{columns}\n  \\begin{columns}\n\\onslide<1->   \\begin{column}{0.5\\linewidth}\n      \\begin{align*}\n        \\overline{\\text{steps}} &= \\frac{1+1}{2} = 1\\\\[0.5em]\n       % {} &= 2 - \\frac{1}{1} = {\\color{Mittel-Blau}2 - \\frac{1}{2^{n-1}}}\n      \\end{align*}\n    \\end{column}\n    \\begin{column}{0.5\\linewidth}\n\\onslide<2->      \\begin{align*}\n        \\overline{\\text{steps}} &= \\frac{1+2+1+2}{4} = \\frac{3}{2}\\\\[0.5em]\n%        {} &= 2 - \\frac{1}{2} = {\\color{Mittel-Blau}2 - 
\\frac{1}{2^{n-1}}}\n      \\end{align*}\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Average Steps}\n  \\vspace{-3em}\n  \\begin{columns}\n    \\begin{column}{0.5\\linewidth}\n      \\begin{table}[!h]%\n        \\caption{Binary addition with $n=3$}%\n        \\label{tab:runtime:binary_addition_three}%\n        \\begin{tabular}{rrc}%\n          Input & Output & Steps\\\\\n          \\midrule\n          000 & 001 & 1\\\\\n          001 & 010 & 2\\\\\n          010 & 011 & 1\\\\\n          011 & 100 & 3\\\\\n          \\midrule\n          100 & 101 & 1\\\\\n          101 & 110 & 2\\\\\n          110 & 111 & 1\\\\\n          111 & 000 & 3\n        \\end{tabular}\n      \\end{table}\n    \\end{column}\n    \\begin{column}{0.5\\linewidth}\n      \\begin{align*}\n        \\overline{\\text{steps}}\n          &= \\frac{1+2+1+3+1+2+1+3}{8} = \\frac{7}{4}\\\\[0.5em]\n   {} &\\onslide<2->{= 2 - \\frac{1}{4} = {\\color{Mittel-Blau}2 - \\frac{1}{2^{n-1}}}}\n      \\end{align*}\\\\\n\\onslide<2->{    \n $\\Rightarrow$ {\\color{Mittel-Blau}Average runtime}:\n      \\begin{math}\n        \\displaystyle\\hspace*{1.5em}\n        2 - \\frac{1}{2^{n-1}} \\in \\mathcal{O}(1)\n        \\end{math}}\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Average Steps}\n  \\vspace{-2.0em}\n  \\begin{table}[!h]%\n    \\caption{Case analysis for instances of size $n$}%\n    \\label{tab:runtime:binary_addition_case_analysis}%\n    \\vspace{-0.5em}%\n    \\begin{tabular}{cccc}\n      Input & Output & Instances & Steps\\\\\n      \\midrule\n      $\\_\\_\\_ \\ldots \\_\\_\\_0$ & $\\_\\_\\_ \\ldots \\_\\_\\_1$ &\n      {\\only<2>{\\color{red}}$2^{n-1}$} & 1\n      \\\\\n      $\\_\\_\\_ \\ldots \\_\\_01$ & $\\_\\_\\_ \\ldots \\_\\_10$ &\n      {\\only<2>{\\color{red}}$2^{n-2}$} & 2\n      \\\\\n      $\\_\\_\\_ \\ldots \\_011$ & $\\_\\_\\_ \\ldots \\_100$ &\n      {\\only<2>{\\color{red}}$2^{n-3}$} & 3\n      \\\\\n      $\\vdots$ & $\\vdots$ & $\\vdots$ & $\\vdots$\n      \\\\\n      $\\_01 \\ldots 1111$ & $\\_10 \\ldots 0000$ & \n      {\\only<2>{\\color{red}}$2^{1}$} &\n      n-1\n      \\\\\n      $011 \\ldots 1111$ & $100 \\ldots 0000$ & \n      {\\only<2>{\\color{red}}$2^0$} & n\n      \\\\\n      $111 \\ldots 1111$ & $000 \\ldots 0000$ & \n      {\\only<2>{\\color{red}}1} & n\n    \\end{tabular}\n  \\end{table}\n  \\vspace{-0.5em}\n  \\onslide<2->{{\\color{Mittel-Blau}Average steps:}\n  \\vspace{-1.0em}\n \\begin{displaymath}\n    \\dfrac{\n      {\\only<3->{\\color{Mittel-Blau}} \n      1 \\cdot {\\only<2>{\\color{red}} 2^{n-1}}\n\t+ 2 \\cdot {\\only<2>{\\color{red}}2^{n-2}}\n\t+ \\dots \n\t+ (n-1) \\cdot {\\only<2>{\\color{red}}2^1} \n\t+ n \\cdot {\\only<2>{\\color{red}}2^0}}\n\t+ n \\cdot {\\only<2>{\\color{red}}1}\n\t}{{\\only<3->{\\color{blue}}2^{n-1}+2^{n-2}+\\dots+2^{1}+2^{0}}+1} = \\onslide<3->{\\frac{\n      {\\color{Mittel-Blau}\\left(\\sum\\limits_{i=1}^n i \\cdot\n      2^{n-i}\\right)} + n }{\n      {\\color{blue}\\left(\\sum\\limits_{i=0}^{n-1} 2^i\\right)} + 1\n    }}\n  \\end{displaymath}}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Average Steps}\n  \\begin{itemize}\n    \\item\n      Denominator:\n      
\\begin{displaymath}\n        \\left(\\sum_{i=0}^{n-1} 2^i \\right) + 1\n        \\stackrel{\\begin{array}{c}\n          \\text{geometric}\\\\\n          \\text{series}\n        \\end{array}}{=}\n        (2^n - 1) + 1 = 2^n\n      \\end{displaymath}\n    \\item \\onslide<2->{\n      Numerator:}\n  \\end{itemize}\n  \\begin{alignat*}{4}\n    & \\onslide<2->{\\left(\\sum_{i=1}^n i \\cdot 2^{n-i}\\right) + n}\n     \\onslide<3->{\\stackrel{\\color{Mittel-Blau}[x = {\\color{red}2x} - x]}{=}\n        {\\color{red}\\left(2 \\, \\sum_{i=1}^n i \\cdot 2^{n-i}\\right)}\n        {\\color{Mittel-Blau}- \\left(\\sum_{i=1}^n i \\cdot 2^{n-i}\\right)} + n}\\\\\n    &\\onslide<4->{\\hspace*{1.5em} = {\\color{red}1 \\cdot 2^n + 2 \\cdot 2^{n-1} +\n    3 \\cdot 2^{n-2} + \\dots + (n-1) \\cdot 2^2 + n \\cdot 2^1}}\\\\\n    & \\onslide<4->{\\hspace{4.5em}{\\color{Mittel-Blau}- 1 \\cdot 2^{n-1} - 2 \\cdot\n    2^{n-2} - \\dots - (n-2) \\cdot 2^2 - (n-1) \\cdot 2^1 - n \\cdot 2^0} + n}\\\\\n    & \\onslide<5->{\\hspace*{1.5em} = \\underbrace{{\\color{red}2^n + 2^{n-1} +\n    \\dots +2^1} {\\color{blue}+ 2^0}}_{2^{n+1}-1}{\\color{blue} - 2^0}\n      = 2^{n+1} - 2}\n  \\end{alignat*}\n\\end{frame}\n\n%-------------------------------------------------------------------------------\n\n\\begin{frame}{Runtime Complexity}{Example 2 - Average Steps}\n  {\\color{Mittel-Blau}Average steps}:\n  \\begin{displaymath}\n    \\overline{\\text{steps}}\n    = \\frac{\n      \\left(\\sum\\limits_{i=1}^n i \\cdot 2^{n-i}\\right) + n\n    }{\\left(\\sum\\limits_{i=0}^{n-1} 2^i \\right) + 1}\n    = \\frac{2^{n+1} - 2}{2^n}\n    = {\\color{Mittel-Blau}2 - \\frac{1}{2^{n-1}}}\n  \\end{displaymath}\n  \\begin{displaymath}\n    \\lim_{n \\to \\infty} \\left(2 - \\frac{1}{2^{n-1}}\\right) = 2\n    \\in \\mathcal{O}(1)\n  \\end{displaymath}\n\\end{frame}\n
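\n%-------------------------------------------------------------------------------\n\n\\begin{frame}[fragile]{Runtime Complexity}{Example 2 - Empirical Check}\n  A short Python sketch (a simulation, not a proof) confirming the closed form\n  by brute force over all $2^n$ inputs:\n% reuses the lstinline style already defined for this lecture\n\\begin{lstlisting}[language=Python, style={python-idle-code}, basicstyle=\\small]\nfrom itertools import product\n\ndef steps(bits):                 # bits: tuple of 0s/1s, LSB last\n    t = 0                        # count the trailing 1s\n    for b in reversed(bits):\n        if b != 1:\n            break\n        t += 1\n    return min(t + 1, len(bits)) # the all-ones input flips only n bits\n\nfor n in range(1, 9):\n    avg = sum(steps(b) for b in product((0, 1), repeat=n)) / 2 ** n\n    assert avg == 2 - 1 / 2 ** (n - 1)\n\\end{lstlisting}\n\\end{frame}\n\n", "meta": {"hexsha": "1a417d5b6813ea88dc7c6000f19238faaf397db7", "size": 11536, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-4/Chapter/eng/010_Runtime.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-4/Chapter/eng/010_Runtime.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-4/Chapter/eng/010_Runtime.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 31.9556786704, "max_line_length": 86, "alphanum_fraction": 0.4851768377, "num_tokens": 4265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5658552228843406}}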
{"text": "\\documentclass{scrartcl}\n\\usepackage[a4paper,left=1in,right=1in,top=1.2in,bottom=1in]{geometry}\n\\usepackage{siunitx}\n\\usepackage{graphicx}\n\\usepackage{mathtools}\n\\setkomafont{disposition}{\\normalfont\\bfseries}\n\\newcommand*\\diff{\\mathop{}\\!\\mathrm{d}}\n\\newcommand*\\Diff[1]{\\mathop{}\\!\\mathrm{d^#1}}\n\\newcommand*\\colvec[3][]{\n    \\begin{pmatrix}\\ifx\\relax#1\\relax\\else#1\\\\\\fi#2\\\\#3\\end{pmatrix}\n}\n\n%title\n\\title{Exercise 05:\\\\Rescorla-Wagner rule}\n\\subtitle{Theoretical Neuroscience II}\n\\author{Johannes G\\\"atjen \\and Lorena Morton}\n\n%use these for structure/overview\n\\newcommand\\Question{%\n  \\textbf{Question:}%\n}\n\\newcommand\\Answer{%\n  \\textbf{Answer:}%\n}\n\\renewcommand{\\arraystretch}{1.2}\n\n\n\\begin{document}\n\\maketitle\n\n\\section{Two independent stimuli}\n\nWe consider two independent stimuli, $A$ and $B$, that are present according to the following probability table:\\\\\n\\begin{tabular}{c | c c | c}\n& $A$ & $\\neg A$ & \\\\  [3pt] \\hline\n$B$ & $\\frac{1}{15}$ & $\\frac{2}{15}$ & $\\frac{1}{5}$ \\\\ [3pt]\n$\\neg B$ & $\\frac{2}{15}$ & $\\frac{8}{15}$ & $\\frac{4}{5}$\\\\ [3pt] \\hline\n& $\\frac{1}{3}$ & $\\frac{2}{3}$ & 1 \\\\ [3pt]\n\\end{tabular}\\\\\n\nFrom the probability table we can compute the expected stimulus vector $\\langle \\mathbf{u} \\rangle$, the correlation matrix $\\mathbf{Q}$ and its inverse:\n\\begin{align*}\n\\langle \\mathbf{u} \\rangle = \\colvec{p_A}{p_B} = \\colvec{\\frac{1}{3}}{\\frac{1}{5}}\\\\\n\\mathbf{Q} = \\left( \\begin{array}{cc}\n\\langle u_1^2 \\rangle & \\langle u_1u_2 \\rangle \\\\\n\\langle u_1u_2 \\rangle & \\langle u_2^2 \\rangle \\end{array} \\right) = \\left( \\begin{array}{cc}\n\\frac{1}{3} & \\frac{1}{15} \\\\\n\\frac{1}{15} & \\frac{1}{5} \\end{array} \\right) \\\\\n\\mathbf{Q}^{-1} = \\left( \\begin{array}{cc}\n\\frac{45}{14} & -\\frac{15}{14} \\\\\n-\\frac{15}{14} & \\frac{225}{42} \\end{array} \\right) \n\\end{align*}\n\n\\section{Full, partial and inhibitory conditioning}\n\nWhen we get a reward $r = 1$ whenever $A$ is present we get the following joint expectation of stimulus and reward $\\langle r \\mathbf{u}\\rangle$, conditional expectation of reward, given stimulus $\\langle r \\mid \\mathbf{u}\\rangle$ and the asymptotic weights for the Rescorla-Wagner rule $\\mathbf{w}_{ss}$.\n\\begin{align*}\n\\langle r \\mathbf{u}\\rangle = \\colvec{\\frac{1}{3}}{\\frac{1}{15}} \\qquad\n\\langle r \\mid \\mathbf{u}\\rangle = \\colvec{1}{\\frac{1}{3}} \\qquad\n\\mathbf{w}_{ss} = \\mathbf{Q}^{-1} \\cdot \\langle r \\mathbf{u}\\rangle = \\colvec{1}{0}\n\\end{align*}\nThe conditional expectation of reward, given stimulus $\\langle r \\mid \\mathbf{u}\\rangle$ simply tells us how much certain stimuli occur together with rewards. The specific reward expectation on the other hand tells us which stimulus is responsible for the rewards. One can say that $\\langle r \\mid \\mathbf{u}\\rangle$ gives us a purely descriptive value, whereas $\\mathbf{w}_{ss}$ tells us something about the (presumed) causality of the world.\n\nFigure \\ref{one} shows the development according to the Rescorla-Wagner rule with the probabilities and rewards as specified above with a learning rate of $\\epsilon = 0.05$. 
Figure \\ref{two} shows the development of partial conditioning and Figure \\ref{three} shows inhibitory conditioning.\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim = {0.8cm 0 0.5cm 0.2cm}, width=0.8\\textwidth, clip]{../pics/one}\n\\caption{The first 100 trials for full conditioning. Top: Red: occurrences of stimulus $A$. Blue: occurrences of stimulus $B$. Green: occurrences of rewards ($r = 1$). Bottom: Red dotted line: Specific reward expectation for $A$. Blue dotted line: Specific reward expectation for $B$. Black circles: Prediction error $\\delta$. Specific reward expectations after 1000 trials: $(1.00, 0.00)^T$.\nInitially, the prediction errors are large, and the reward expectation is split between $A$ and $B$ evenly when they occur together. Then, when $B$ occurs alone, there is a small expectation of reward, but because there is no reward, the prediction error is negative and the reward expectation for $B$ is decreased again. After enough trials the reward expectations have been learned correctly and the prediction errors are (almost) zero.}\n\\label{one}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim = {0.8cm 0 0.5cm 0.2cm}, width=0.8\\textwidth, clip]{../pics/two}\n\\caption{The first 100 trials for partial conditioning. Stimulus $A$ is rewarded with probability $0.5$. See Figure \\ref{one} for plot details. Specific reward expectations after 1000 trials: $(0.53, 0.07)^T$.}\n\\label{two}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim = {0.8cm 0 0.5cm 0.2cm}, width=0.8\\textwidth, clip]{../pics/three}\n\\caption{The first 100 trials for inhibitory conditioning. Stimulus $A$ is rewarded with probability $1$, but only if $B$ is absent. See Figure \\ref{one} for plot details. Specific reward expectations after 1000 trials: $(0.86, -0.24)^T$. Stimulus $A$ is a predictor for reward, but stimulus $B$ prevents any rewards. Because there is no distinction between types of reward (only absence/presence/magnitude), the withdrawal of an expected reward is simply a negative reward (punishment) and is here associated with $B$.}\n\\label{three}\n\\end{figure}\n\n\n\\end{document}", "meta": {"hexsha": "7eab040eeb959b351b7202d24b0e77bf6cc49d53", "size": 5134, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ex5/pdf/ex5.tex", "max_stars_repo_name": "gaetjen/TNSII_Exercises", "max_stars_repo_head_hexsha": "d82eb790132e9066c6ad41e7f90ba193145a2e8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ex5/pdf/ex5.tex", "max_issues_repo_name": "gaetjen/TNSII_Exercises", "max_issues_repo_head_hexsha": "d82eb790132e9066c6ad41e7f90ba193145a2e8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex5/pdf/ex5.tex", "max_forks_repo_name": "gaetjen/TNSII_Exercises", "max_forks_repo_head_hexsha": "d82eb790132e9066c6ad41e7f90ba193145a2e8f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.6853932584, "max_line_length": 519, "alphanum_fraction": 0.7187378263, "num_tokens": 1606, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5658552228843406}}
{"text": "\\section{Introduction}\\label{sec:introduction}\n\\epigraph{\n``It is amazing what can be done\nwith just [...] some bitwise operations.''\n}{\n\\emph{Hacker's Delight}, page \\texttt{xiii}\n\\cite{Warren:2012:HD:2462741}\n}\n\n\n\\subsection*{Crazy Ideas in Data Structures and Algorithms}\nThe seminar this contribution is part of covers\nideas in computer science with a certain ``craziness''.\nThis might not obviously apply to this paper:\n\nThe underlying data structure is a bitword\n-- the simplest interpretation of binary values possible.\nFurthermore, all formulas are composed of simple logical operators,\nwithout any complex (e.g. recursive) nesting involved.\n\nWhere the ``craziness'' starts however,\nis in the rejection of abstract data types (like boolean-arrays).\nAddressing single bits isn't directly possible.\nInstead ``hacks'' will be used that involve decoupling\noperations (e.g. increment) from their mathematical meaning\n(see \\autoref{sec:rightmost}).\nAdditionally, instruction-level parallelism will allow for\n``crazy'' fast computation: everything fitting in a bitword\nis processed by the CPU in a single time step.\n\n\n\\subsection*{Bitwords}\nThe common term \\emph{bitword} (or \\emph{bitvector}).\nIt is simply defined as a sequence of bits with fixed length\nthat is indexed (starting with $0$) from right to left\n(e.g. in \\autoref{table:bitword} ``$\\uparrow$'' refers to $x_4$).\nThere isn't any further interpretation (e.g. as a numerical value) to it.\nThis holds true when applying operations with such a meaning\n(e.g. incrementing).\n\nA good way to think of a bitvector is a series of\n\\emph{on} (\\lstinline$1$) and \\emph{off} (\\lstinline$0$) values.\nTerms like ``rightmost'' and ``trailing''\nare used in their conventional meaning (see \\autoref{table:bitword})\n-- others will be introduced when needed.\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{r}\n$\\underset{\\text{\\hfill rightmost \\lstinline$1$}}\n{\\text{\\lstinline$x = 1100 1011 001$}\n{\\underset{\\uparrow}{\\text{\\lstinline$1$}}}}\n\\underbrace{\\text{\\lstinline$0000$}}_{\\text{trailing \\lstinline$0$s}}$\\\\\n\\end{tabular}\n\\caption{Bitword}\n\\label{table:bitword}\n\\end{table}\n\n\n\\subsection*{Logical Operators}\nThe mathematical logical operators serve as a basis\nfor all bit manipulation.\nThey are interpreted in the well-known way (see \\autoref{table:logic}).\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{c|c||c|c|c}\n  \\lstinline$x$ & \\lstinline$y$\n& \\lstinline$NOT$: $\\lnot$ \\lstinline$x$\n& \\lstinline$OR$: \\lstinline$x$ $\\lor$ \\lstinline$y$\n& \\lstinline$AND$: \\lstinline$x$ $\\land$ \\lstinline$y$\\\\\n\\hline\\hline\n  \\lstinline$0$ & \\lstinline$0$\n& \\lstinline$1$ & \\lstinline$0$ & \\lstinline$0$\\\\\n\\hline\n  \\lstinline$0$ & \\lstinline$1$\n& \\lstinline$1$ & \\lstinline$1$ & \\lstinline$0$\\\\\n\\hline\n  \\lstinline$1$ & \\lstinline$0$\n& \\lstinline$0$ & \\lstinline$1$ & \\lstinline$0$\\\\\n\\hline\n  \\lstinline$1$ & \\lstinline$1$\n& \\lstinline$0$ & \\lstinline$1$ & \\lstinline$1$\\\\\n\\end{tabular}\n\\caption{Operators \\lstinline$NOT$, \\lstinline$OR$, \\lstinline$AND$}\n\\label{table:logic}\n\\end{table}\n\nThe set $\\{\\lnot, \\lor, \\land\\}$ presented in \\autoref{table:logic}\nis considered a \\emph{complete set of logical operators},\nwhich means that any logical formula can be expressed\nonly using elements from this set.\nThere is an important distinction to be made\non how to apply a logical operator to a bitword\n(see 
\\autoref{table:operators} for \\emph{C}):\n\n\\begin{description}\n\\item[byte-level:] This common interpretation\ndefines the whole variable (at least 1 byte in size)\nas a single boolean value being false iff all bits are \\lstinline$0$.\n\n\\item[bitwise:] This lesser-used version\nis made for bit-level manipulation.\nAs the name suggests, every bit of the argument is treated independently.\n\n\\end{description}\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{c|c|c|c}\nfunction & mathematical & \\emph{C}: byte-level & \\emph{C}: bitwise\\\\\n\\hline\n\\lstinline$NOT$ & $\\lnot$ & \\lstinline$!$ & \\lstinline$~$\\\\\n\\lstinline$OR$ & $\\lor$ & \\lstinline$||$ & \\lstinline$|$\\\\\n\\lstinline$AND$ & $\\land$ & \\lstinline$&&$ & \\lstinline$&$\\\\\n\\end{tabular}\n\\caption{Byte-level and bitwise operators in \\emph{C}}\n\\label{table:operators}\n\\end{table}\n\n\n\\subsection*{Mathematical Operators}\nIn addition to manipulating bits independently,\noperators that interpret the sequence as a whole are required.\nIncrement (\\lstinline$INC$, \\lstinline$(x+1)$ in \\emph{C})\nand decrement (\\lstinline$DEC$, \\lstinline$(x-1)$ in \\emph{C})\ndo so by treating the bitwords in a mathematical sense.\nThe assigning notations \\lstinline$x++$ and \\lstinline$x--$\nare avoided to prevent side effects.\n\nWhen executing these operations, \\emph{carry} and \\emph{borrow}\nrefer to the values that get passed to adjacent bits\n(see \\autoref{sec:rightmost}).\n
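\nAs a first taste of \\autoref{sec:rightmost}, consider how \\lstinline$DEC$\nborrows through the trailing \\lstinline$0$s and how a single \\lstinline$AND$\nthen exploits that.\nThe sketch below uses Python syntax for brevity (the \\lstinline$0b$ literal and\n\\lstinline$bin()$ are Python conveniences); the expressions\n\\lstinline$x & -x$ and \\lstinline$x & (x - 1)$ work the same in \\emph{C} on\ntwo's-complement machines.\n\n\\begin{lstlisting}\nx = 0b1100101100110000     # the bitword from the Bitword table above\nbin(x & -x)                # isolate the rightmost 1: '0b10000'\nbin(x & (x - 1))           # clear the rightmost 1:   '0b1100101100100000'\n\\end{lstlisting}\n", "meta": {"hexsha": "3a0d62b415e9f53e33a560663184f663d618ad1f", "size": 4719, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/2-introduction.tex", "max_stars_repo_name": "NeoLegends/hackers-delight", "max_stars_repo_head_hexsha": "4cd924e1e10476d116b5e7b8b9504aa6c8d88e23", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-11T12:10:40.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-11T12:10:40.000Z", "max_issues_repo_path": "content/2-introduction.tex", "max_issues_repo_name": "NeoLegends/hackers-delight", "max_issues_repo_head_hexsha": "4cd924e1e10476d116b5e7b8b9504aa6c8d88e23", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-12-25T23:16:11.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-25T23:16:11.000Z", "max_forks_repo_path": "content/2-introduction.tex", "max_forks_repo_name": "NeoLegends/hackers-delight", "max_forks_repo_head_hexsha": "4cd924e1e10476d116b5e7b8b9504aa6c8d88e23", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-12-16T11:05:07.000Z", "max_forks_repo_forks_event_max_datetime": "2018-12-23T22:08:23.000Z", "avg_line_length": 35.2164179104, "max_line_length": 73, "alphanum_fraction": 0.734053825, "num_tokens": 1408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.5658552199605672}}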
{"text": "\\documentclass{article}\n\\author{Vaibhav Pujari}\n\\title{Categories}\n\\begin{document}\n\\section{Definition}\nA category $\\zeta$ is a thing that is made up of:\n\\begin{itemize}\n\\item \\textbf{Objects}\n\n  A set of objects $Ob(\\zeta)$\n\n  \\textsf{\\tiny\n  \\textbf{Notes}\n  \\begin{itemize}\n  \\item These objects could be at any level of abstraction and for the definions and\n  working of category theory, we don't need to look under that abstraction. I\n  think this is why category theory becomes really powerful.\n  \\end{itemize}\n  }\n\n\\item \\textbf{Morphisms}\n\n  For every to elements $c$,$d$ in $Ob(\\zeta)$, there is a set of morphisms.\n  This set is denoted by $\\zeta(c,d)$ (or $Hom_\\zeta(c,d)$) and contains morphisms\n  $f \\in \\zeta(c,d)$, also denoted as $f : c \\to d$\n\n  The set of morphisms is also called \\textit{Hom-set} due to Category theory's\n  origins in Homomorphisms\n\n  \\textsf{\\tiny\n  \\textbf{Notes}\n  \\begin{itemize}\n  \\item In context or programming, it comes more naturally to think of morphisms\n    as functions. But that is a special case which only applies on a special\n    category (Types). To be able to exploit the power of abstractions in category\n    theory, its important to remember that it could be anything. For example, in\n    a relational DB, it could be a foreign key.\n  \\item I was initially confused whether its necessary to always have a Hom-set\n    for any two objects in a category, but then I realized that just because\n    there is a Hom-set, it doesn't mean it has any members - it could be a null\n    set too. So that way, of course, there is always a Hom-set.\n  \\end{itemize}\n  }\n\n\\item \\textbf{Identities}\n\n  Each object $c$ in a category must have an identity morphism, which would be an\n  element of $\\zeta(c,c)$ for all $c \\in Ob(\\zeta)$. An identity morphism is a\n  morphism that satisfies the \\textit{unitality} constraint (defined below)\n\n  \\textsf{\\tiny\n  \\textbf{Notes}\n  \\begin{itemize}\n  \\item What if there are multiple morphisms that satisfy the unitality\n    constraint? Is identity unique? If so, how to select one of the candidates?\n  \\end{itemize}\n  }\n\n\\item \\textbf{Composition}\n\n  The morphisms in the category must be composable. For any three objects\n  $c,d,e$ in the category, and corresponding morphisms $f: c \\to d$ and $g: d\n  \\to e$, there always exists a morphism $h: c \\to e$ which is a composition of\n  $f$ and $g$\n\n  $h := g \\circ f$\n\n  The compositions must satisfy the \\textit{Associativity} constraint (defined\n  below)\n\n  \\textsf{\\tiny\n  \\textbf{Notes}\n  \\begin{itemize}\n  \\item In context of programming language implementation, is it necessary for\n    the runtime to always define a new function for every occurrence of\n    composition? 
What are the challenges there (whether or not it is necessary)?\n  \\end{itemize}\n  }\n\\end{itemize}\n\n\\section{Constraints}\n\\begin{itemize}\n\\item \\textbf{Unitality}\n\n  For any $f: c \\to d$ and the identities $id_c$ and $id_d$ of the objects $c$ and $d$\n  respectively,\n\n  $f \\circ id_c = f = id_d \\circ f$\n\n\\item \\textbf{Associativity}\n\n  For any $f: a\\to b$, $g: b \\to c$ and $h: c \\to d$,\n\n  $h \\circ (g \\circ f) = (h \\circ g) \\circ f$\n\n  This constraint makes the use of parentheses in compositions\n  unnecessary.\n\n  \\textsf{\\tiny\n  \\textbf{Notes}\n  \\begin{itemize}\n  \\item Specifying associativity might sound unnecessary, because many examples\n    that come to mind are associative, so it might start to feel like everything\n    is associative; but it's quite an important constraint. For example, it\n    prevents exponentiation from being considered a morphism, because\n    ${(a^b)}^c$ is not always the same as $a^{(b^c)}$, so exponents don't compose\n    independently of order.\n  \\end{itemize}\n  }\n\\end{itemize}\n\n
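To make the functions-as-morphisms intuition concrete, here is a minimal sketch in C\n(my own illustration, not part of the definition), with morphisms as functions and\ncomposition as nested application:\n\n\\begin{verbatim}\n#include <stdio.h>\n\n/* Morphisms between \"objects\" (here both objects are int) */\nint f(int c)  { return c + 1; } /* f : c -> d */\nint g(int d)  { return d * 2; } /* g : d -> e */\nint id(int c) { return c; }     /* identity   */\n\nint main(void) {\n    int c = 21;\n    /* unitality: f o id = f = id o f */\n    printf(\"%d %d %d\\n\", f(id(c)), f(c), id(f(c)));\n    /* the composite h := g o f is just nested application */\n    printf(\"%d\\n\", g(f(c)));\n    return 0;\n}\n\\end{verbatim}\n\n\\end{document}\n", "meta": {"hexsha": "fd0c9a316b0084730ab8688dc627939c2fecb969", "size": 3769, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "01-category-definition.tex", "max_stars_repo_name": "vaibhav276/category-theory-notes", "max_stars_repo_head_hexsha": "164285a8ace6bcf86f19a083255c029a2c39cdbe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-category-definition.tex", "max_issues_repo_name": "vaibhav276/category-theory-notes", "max_issues_repo_head_hexsha": "164285a8ace6bcf86f19a083255c029a2c39cdbe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-category-definition.tex", "max_forks_repo_name": "vaibhav276/category-theory-notes", "max_forks_repo_head_hexsha": "164285a8ace6bcf86f19a083255c029a2c39cdbe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.4913793103, "max_line_length": 84, "alphanum_fraction": 0.7121252322, "num_tokens": 1088, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7772998611746911, "lm_q1q2_score": 0.5658552153731997}}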
{"text": "\\documentclass{homework}\n\\course{Math 5522H}\n\\author{Alex Li}\n\\input{preamble}\n\n\\usepackage[symbol]{footmisc}\n\\renewcommand{\\thefootnote}{\\fnsymbol{footnote}}\n\n\\begin{document}\n\\maketitle\n\n\\begin{inspiration}\n  Wenn ich nur erst die S\\\"atze habe! Die Beweise werde ich schon\n    finden.\\footnote{``If only I first had the theorems, then I'd find\n        the proofs somehow.''  Riemann knew all too well the challenge of\n            PODASIPs.} \\byline{Bernhard Riemann}\n            \\end{inspiration}\n\n            \\section{Terminology}\n\n            \\begin{problem}\n              When $\\Real s > 0$, what is the Gamma function $\\Gamma(s)$?\n              \\end{problem}\n              \\begin{solution}\n              \\[\n              \\Gamma(s): \\C^+ \\to \\C = \\int_0^\\oo x^{s-1}e^{-x}dx\n              \\]\n              \\end{solution}\n              \\begin{problem}\n                When $\\Real s > 1$, what is the Riemann zeta function $\\zeta(s)$?\n                \\end{problem}\n                \\begin{solution}\n                \\[\n                \\zeta(s) = \\sum_{k=1}^\\infty k^{-s}\n                \\]\n                \\end{solution}\n                \\begin{problem}\n                  What is the von Mangoldt function $\\Lambda(n)$?\n                  \\end{problem}\n                  \\begin{solution}\n                  \\[\n                  \\Lambda(n) = \\begin{cases}\n                  \\log(p) & n = p^k \\text{ for some prime $p$ and $k\\in \\N$}\\\\\n                  0 & \\text{else}\n                  \\end{cases}\n                  \\]\n                  \\end{solution}\n                  \\section{Numericals}\n\n                  \\begin{problem}\n                    Compute $\\Res(\\Gamma,s)$ for $s \\in \\{ 0, -1, -2, \\ldots \\}$.\n                    \\end{problem}\n                    \\begin{solution}\n                    The subsequent problem shows that for $f= \\Gamma(s)\\Gamma(1-s), k\\in \\Z, \\Res(f, k)= (-1)^k$. At each of these poles, $\\Gamma(1-s)$ is holomorphic, so\n                    \\[\n                    \\Res(f, s) = \\Gamma(1-s)\\Res(\\Gamma, s) = (-s)!\\Res(\\Gamma, s).\n                    \\]\n                    And therefore\n                    \\[\n                    \\frac{(-1)^s}{(-s)!} = \\Res(\\Gamma, s).\n                    \\]\n                    \\end{solution}\n                    \\begin{problem}Evaluate $\\Res(f,s)$ for the\n                      function $f(s) = \\Gamma(s) \\Gamma(1-s)$.\n                      \\end{problem}\n                      \\begin{solution}\n                      Considering the definition of the analytic extension of $\\Gamma$, one can observe that the poles are at every integer.\n                      First let's compute the residue at $s=0$:\n                      \\[\n                      \\lim_{s\\to 0} s \\Gamma(s) \\Gamma(1-s) = \\lim_{s\\to 0} \\Gamma(s+1)\\Gamma(1) = \\Gamma(1)\\Gamma(1) = 1\n                      \\]\n                      Then note the following periodicity property of $f$:\n                      \\[\n                      f(s) = \\Gamma(1-s)\\Gamma(s) = -s\\Gamma(-s) \\cdot \\frac{\\Gamma(s+1)}{s} = -f(s+1)\n                      \\]\n                      So the residue at $s \\in \\Z$ is $(-1)^s$.\n                      \\end{solution}\n                      \\begin{problem} % be careful to get the sign right!\n                        Use \\ref{euler-reflection-formula} to compute $\\Gamma(1/2)$.  
\n\\end{problem}\n\\begin{solution}\nThe referenced problem shows that $\\Gamma(1-s)\\Gamma(s) = \\frac{\\pi}{\\sin(\\pi s)}$. At $s=\\frac{1}{2}$,\n\\begin{align*}\n\\Gamma(.5)^2&= \\frac{\\pi}{1}\\\\\n\\Gamma(.5)&= \\pm \\sqrt{\\pi}\n\\end{align*}\nSince $\\Gamma(.5) =\\int_0^\\infty x^{-.5}e^{-x} dx > 0$, $\\Gamma(.5) = \\sqrt{\\pi}$.\n\n\\end{solution}\n\\section{Exploration}\n\n\\begin{problem}\nIn previous courses, you have seen that\n$\\Gamma(s+1) = s \\cdot \\Gamma(s)$.  Use this fact to explain how to\ndefine a meromorphic function on $\\C$ agreeing with $\\Gamma(s)$ for\n$\\Real s > 0$.  This is \\textbf{analytic continuation} which succeeds\nhere by using a \\textbf{functional equation}.\n\\end{problem}\n\\begin{solution}\nSince $\\Gamma(s+1) = s\\Gamma(s)$, i.e. $\\Gamma(s) = \\Gamma(s+1)/s$, we can define an extension named $\\hat\\Gamma$ as follows:\n\\[\n\\hat\\Gamma(s) = \\begin{cases}\n\\Gamma(s) &\\Real(s) > 0\\\\\n\\frac{\\hat\\Gamma(s+1)}{s} &\\Real(s) \\leq 0\\\\\n\\end{cases}\n\\]\nWe show this is meromorphic by induction. Assume $\\hat\\Gamma$ is meromorphic on the half plane $\\Real(s) > -k$, with poles only at the nonpositive integers it contains, for some $k\\in \\N\\cup \\{0\\}$. This is true for $k=0$ since $\\Gamma$ is holomorphic on $\\Real(s) > 0$.\n\nNow we will prove $\\hat\\Gamma$ is meromorphic on $\\Real(s) > -(k+1)$. On that half plane it is given by $\\hat\\Gamma(s+1)/s$, the quotient of a function that is meromorphic by the induction hypothesis and the function $s$. The quotient is therefore meromorphic on $\\Real(s) > -(k+1)$, with poles only at the nonpositive integers $0, -1, \\ldots, -k$ (the pole at $s=0$ coming from the division by $s$, the rest from the poles of $\\hat\\Gamma(s+1)$). Furthermore, $\\Gamma(s) = \\Gamma(s+1)/s$ holds for $\\Real(s) > 0$, so the two cases agree where they are both defined. 
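For instance (a check we add here, using $\\Gamma(1/2) = \\sqrt{\\pi}$ from the previous problem):\n\\[\n\\hat\\Gamma(-1/2) = \\frac{\\hat\\Gamma(1/2)}{-1/2} = -2\\sqrt{\\pi}.\n\\]\n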
By induction, we can extend $\\Gamma$ to arbitrarily large half planes, and hence to a meromorphic function on all of $\\C$.\n\n\\end{solution}\n\\begin{problem}\\label{start-of-reflection-formula}For $\\lambda \\in (0,1)$, evaluate\n\\[\n\\int_{0}^\\infty \\frac{u^{\\lambda - 1}}{1 + u} \\, du.\n\\]\n\\textit{Hint:} make the substitution $u = e^x$ and invoke \\ref{integral-for-euler-reflection}.\n\\end{problem}\n\\begin{solution}\n\n\\begin{align*}\n\\int_0^\\oo \\frac{u^{\\lambda - 1}\\, du }{1 + u} &=\n\\int_0^\\oo \\frac{u^{\\lambda}\\, du }{u(1 + u)}\n\\end{align*}\nLet $u=e^x$ so that $du = e^x dx = u dx$ and thus $\\frac{du}{u} = dx$.\n\\begin{align*}\n\\int_0^\\oo \\frac{u^{\\lambda - 1}\\, du }{1 + u} &= \\int_{-\\oo}^\\oo \\frac{e^{x\\lambda}\\, dx }{1 + e^x}\n\\end{align*}\nIn a previous problem, we evaluated this integral to be\n\\[\n\\frac{2\\pi i e^{i\\pi \\lambda}}{e^{2\\pi i \\lambda} - 1} = \\frac{\\pi}{\\sin(\\pi \\lambda)}\n\\]\n\\end{solution}\n\n\\begin{problem}\\label{integrate-gamma-one-minus-s}For $s \\in (0,1)$ and positive $t \\in \\R$, show that\n\\[\n\\Gamma(1 - s) = t \\int_0^\\infty (xt)^{-s} e^{-xt}\\, dx.\n\\]\n\\end{problem}\n\\begin{solution}\n\\begin{align*}\n\\Gamma(s) &= \\int_0^\\infty u^{s-1} e^{-u}\\, du\\qquad\\color{purple}\\text{Definition of }\\Gamma\\\\\n\\Gamma(1 - s) &= \\int_0^\\infty u^{-s} e^{-u}\\, du\n\\end{align*}\nNow take $tx = u$ so that $t\\, dx = du$:\n\\begin{align*}\n\\Gamma(1 - s) = t\\int_0^\\infty (tx)^{-s} e^{-tx}\\, dx\n\\end{align*}\n\\end{solution}\n\n\\begin{problem}\\label{euler-reflection-formula}Combine \\ref{start-of-reflection-formula} and \\ref{integrate-gamma-one-minus-s} to conclude that\n\\[\n\\Gamma(1-s) \\, \\Gamma(s) = \\frac{\\pi}{\\sin \\left( \\pi s \\right)}.\n
\\]\nThis is \\textbf{Euler's reflection formula}.\n\\end{problem}\n\\begin{solution}\nLet's suppose that $0<s<1$.\nCombining \\ref{integrate-gamma-one-minus-s} with the definition of $\\Gamma$,\n\\begin{align*}\n\\Gamma(1-s) \\, \\Gamma(s) &= \\left(t\\int_0^\\oo (tx)^{-s} e^{-tx}\\, dx\\right)\\left(\\int_0^\\oo y^{s-1}e^{-y} dy\\right)\\\\\n&= \\int_0^\\oo \\int_0^\\oo (tx)^{-s} e^{-tx} ty^{s-1}e^{-y} dy dx\n\\end{align*}\nLet $t=y$:\n\\begin{align*}\n&= \\int_0^\\oo \\int_0^\\oo (yx)^{-s} e^{-yx} y^{s}e^{-y} dy dx\\\\\n&= \\int_0^\\oo \\int_0^\\oo x^{-s} e^{-y(x+1)} dy dx\\\\\n&= \\int_0^\\oo \\left\\{x^{-s} \\frac{-e^{-y(x+1)}}{x+1}\\right\\}\\bigg|_0^\\oo dx\\\\\n&= \\int_0^\\oo \\frac{x^{-s}}{x+1} dx\n\\end{align*}\nNow let $\\lambda = \\frac{1}{x}$, $d\\lambda = -\\frac{dx}{x^2}$ (the sign flips the limits of integration back) to see that\n\\begin{align*}\n\\int_0^\\oo \\frac{x^{-s}}{x+1} dx &=\n\\int_0^\\oo \\frac{x^{-s+2}}{x^2(x+1)} dx\\\\\n&= \\int_0^\\oo \\frac{\\lambda^{s-2}}{\\frac{1}{\\lambda}+1} d\\lambda\\\\\n&= \\int_0^\\oo \\frac{\\lambda^{s-1}}{1 + \\lambda} d\\lambda\n\\end{align*}\nBy problem \\ref{start-of-reflection-formula}, this is \\[\\frac{\\pi}{\\sin(\\pi s)}\\]\n\nSo the formulas are equal on the real line from $0$ to $1$. 
Noting that\n\\[\nf(s) = \\Gamma(1-s)\\Gamma(s) = -s\\Gamma(-s) \\cdot \\frac{\\Gamma(s+1)}{s} = -f(s+1)\n\\]\nand that\n\\[\n\\frac{\\pi}{\\sin(\\pi s)} = -\\frac{\\pi}{\\sin(\\pi (s+1))},\n\\]\nwe can extend this to the whole real line.\n\nSubtracting the two functions, we get a function that is meromorphic on $\\C$ and vanishes on the interval $(0,1)$. Since this set has limit points, the identity theorem implies that the difference vanishes identically, so the two functions must be the same.\n\\end{solution}\n\\begin{problem}\nShow that the series\n\\[\\displaystyle\\sum_{n=1}^\\infty \\displaystyle\\frac {1}{n^{s}}\\]\nconverges to a holomorphic function when $\\Real s > 1$.\n\\end{problem}\n\\begin{solution}\n\\begin{align*}\n\\abs{\\sum_{n=1}^\\infty \\frac {1}{n^{s}}} &\\leq \\sum_{n=1}^\\infty \\frac {1}{n^{\\Real(s)}} \\leq 1 + \\int_1^\\oo t^{-\\Real(s)}\\,dt \\\\\n&= 1 + \\frac{1}{\\Real(s)-1}\n\\end{align*}\nSince $\\Real(s)>1$, this is finite, so the series converges absolutely, and the bound is locally uniform in $s$. Since each term of the series is holomorphic, each partial sum is holomorphic, so the locally uniform limit of the partial sums is also holomorphic.\n\\end{solution}\n\\begin{problem}\\label{euler-product-formula}\nRecall that you defined\n\\(\\displaystyle\\prod_{n=1}^\\infty \\left( 1 + a_n \\right)\\) in\n\\ref{terminology-infinite-product}.  
Show that when $\\Real s > 1$, we have\n\\[\n\\frac{1}{\\zeta(s)} = \\displaystyle\\prod_{n=1}^\\infty \\left( 1 - {p_n}^{-s} \\right),\n\\]\nwhere $p_n$ is the $n$th prime number.\n\\end{problem}\n\\begin{solution}\nLet\n\\[\nP(s) = \\prod_{p_n} 1 + p_n^{-s} + p_n^{-2s} + \\dots = \\prod_{p_n} \\frac{1}{1-p_n^{-s}}\n\\]\nWe will show that $\\zeta(s) = P(s)$. First, we show $\\zeta(s) \\leq P(s)$:\n\\begin{align*}\n\\sum_{n=1}^N n^{-s} \\leq \\prod_{p_n\\leq N} 1 + p_n^{-s} + p_n^{-2s} + \\dots\n\\end{align*}\nbecause every term on the LHS appears in the RHS and all terms are positive. This inequality is true for all $N$, and taking the limit as $N\\to\\oo$, we deduce that $\\zeta(s) \\leq P(s)$.\n\nNext, we show that $\\zeta(s)\\geq P(s)$:\n\\begin{align*}\n\\sum_{n=1}^{N!} n^{-s} \\geq \\prod_{p|N!}\\sum_{p^k|N!} p^{-sk}\n\\end{align*}\nNote that both the right and left sides are monotonically increasing and bounded as $N\\to\\infty$. After rewriting the right hand side as\n\\[\n\\lim_{N\\to\\infty} \\prod_{p|N!}\\sum_{p^k|N!} p^{-sk} = e^{\\lim_{N\\to\\infty}\\sum_{p|N!} \\log(\\sum_{p^k|N!} p^{-sk} )}\n\\]\nand noting that the total sum is bounded by $\\log(\\zeta(s))$, which converges when $\\Real(s)>1$, we can pass to the limit and evaluate each inner sum first:\n\\[\n\\zeta(s)\\geq \\lim_{N\\to \\infty} e^{\\sum_{p|N!} \\log(\\frac{1}{1 - p^{-s}})} =  \\lim_{N\\to \\infty} \\prod_{p|N!} \\frac{1}{1 - p^{-s}} = P(s)\n\\]\nThus $\\zeta(s) = P(s)$. Inverting both sides gets the desired equality.\n\\end{solution}\n\\begin{problem}\nUse \\ref{euler-product-formula} and \\ref{nonzero-infinite-product}\nto show that if $\\Real s > 1$ then $\\zeta(s) \\neq 0$.\n\\end{problem}\n\\begin{proof}\nExpanding the formula in \\ref{euler-product-formula}, we see it suffices to show that\n\\[\n\\prod_{n=1}^\\infty (1- p_n^{-s})\n\\]\nconverges to a nonzero quantity. Now,\n\\[\n\\sum_{n=1}^\\oo \\abs{p_n^{-s}} \\leq \\sum_{n=1}^\\oo \\abs{n^{-s}} < \\infty\n\\]\nwhen the real part of $s$ is more than 1. By \\ref{nonzero-infinite-product}, the product converges to a nonzero value. 
Therefore $\\zeta(s)\\neq 0$.\n\\end{proof}\n\\begin{problem}\nUse \\ref{euler-product-formula} to show that\n\\[\n\\Real \\left( \\frac{ \\zeta'(a+bi) }{ \\zeta(a+bi) } \\right) = - \\sum_{n=1}^\\infty \\frac{\\Lambda(n) \\, \\cos \\left( b \\log n \\right)}{n^{a}}.\n\\]\n\\end{problem}\n\\begin{proof}\nLet's compute $\\Real(\\log\\zeta(a+bi))$; taking the derivative will get us the LHS of the desired equation. Using \\ref{euler-product-formula},\n\\begin{align*}\n\\Real(\\log\\zeta(a+bi))&= \\Real(\\log \\prod_{n=1}^\\infty \\frac{1}{1-p_n^{-(a+bi)}})\\\\\n&= \\Real(\\sum_{n=1}^\\infty \\log \\frac{1}{1-p_n^{-(a+bi)}})\\\\\n&= \\Real(-\\sum_{n=1}^\\infty \\log (1-p_n^{-(a+bi)}))\\\\\n&= \\sum_{n=1}^\\infty \\sum_{m=1}^\\oo \\Real(\\frac{p_n^{-m(a+bi)}}{m}) \\quad \\text{ power series of log}\\\\\n\\end{align*}\nSince the double sum converges absolutely, we can change the order of summation. Note that the only coefficients of $n^{-s}$ which are nonzero correspond to prime powers, so this is\n\\begin{align*}\n\\Real(\\log\\zeta(a+bi)) &= \\sum_{n=1}^\\infty \\phi(n)\\Real(n^{-(a+bi)})\n\\end{align*}\nwhere\n\\[\n\\phi(n) = \\begin{cases}\n\\frac{1}{k} & n= p^k \\text{ for some prime $p$ and $k\\in \\N$}\\\\\n0 & \\text{otherwise}\n\\end{cases}\n\\]\nWe next take a derivative with respect to $s = a+bi$:\n\\[\n\\Real(\\frac{\\zeta'(a+bi)}{\\zeta(a+bi)}) = -\\sum_{n=1}^\\infty \\phi(n)\\log(n)\\Real(n^{-(a+bi)})\n\\]\nNext we can notice that\n\\begin{align}\\label{n_to_a+bi_expansion}\n\\Real(n^{-(a+bi)}) = n^{-a}\\Real(e^{-bi\\log n}) = n^{-a}\\cos(b\\log n)\n\\end{align}\nand also that when $\\phi(n)\\neq 0$, so that $n$ is a prime power $n=p^k$,\n\\[\n\\phi(p^k)\\log(p^k) = \\frac{1}{k}k\\log(p) = \\log(p) = \\Lambda(n)\n\\]\nWith this, we can simplify to the given expression:\n\\begin{align*}\n\\Real(\\frac{\\zeta'(a+bi)}{\\zeta(a+bi)}) &= -\\sum_{n=1}^\\infty \\Lambda(n)\\frac{\\cos(b\\log n)}{n^{a}}\n
\\end{align*}\n\\end{proof}\n\\begin{problem}\nUse the trigonometric fact $3 + 4 \\cos \\theta + \\cos (2\\theta) \\geq 0$ to show that\n\\[\n\\Real \\left( 3 \\frac{ \\zeta'(a) }{ \\zeta(a) } + 4 \\frac{ \\zeta'(a+bi) }{ \\zeta(a+bi) } +  \\frac{ \\zeta'(a+2bi) }{ \\zeta(a+2bi) } \\right) \\leq 0.\n\\]\nKnowing $\\zeta$ has a simple pole at $s = 1$ and assuming $\\zeta(1+bi) = 0$ for $b \\in \\R$, define\n\\[\nf(a) = \\zeta(a)^3 \\cdot \\zeta(a+bi)^4 \\cdot \\zeta(a + 2bi)\n\\]\nand study $f$ near $a = 1$ to uncover a contradiction.  You will have shown that $\\zeta(s) \\neq 0$ when $\\Real s = 1$.\n\\end{problem}\n\\begin{solution}\nFirst we use the previous problem, which (together with equation \\eqref{n_to_a+bi_expansion}) shows that\n\\[\n\\Real\\left(\\frac{\\zeta'(x+yi)}{\\zeta(x+yi)}\\right) = -\\sum_{n=1}^\\infty \\Lambda(n)\\, n^{-x}\\cos(y\\log n)\n\\]\nNow we can check the inequality by plugging this in three times\n\\begin{align*}\n&\\Real\\left(3\\frac{\\zeta'(a)}{\\zeta(a)} + 4\\frac{\\zeta'(a+bi)}{\\zeta(a+bi)} + \\frac{\\zeta'(a+2bi)}{\\zeta(a+2bi)}\\right)\\\\\n&= -\\sum_{n=1}^\\infty \\Lambda(n)\\, n^{-a}(3 + 4\\cos(b\\log n) + \\cos(2b\\log n))\n\\end{align*}\nUsing the trig identity with $\\theta = b\\log n$, each term of the sum is nonnegative, so the whole expression is $\\leq 0$.\n\nNext, we assume that $\\zeta(1 + bi) = 0$, and consider\n\\[\nf(a) = \\zeta(a)^3\\zeta(a+bi)^4 \\zeta(a+2bi)\n\\]\nNote that we have just shown that $\\Real(\\frac{f'(a)}{f(a)}) \\leq 0.$\n\nNow, since $\\zeta(a)^3$ has a pole of order 3 at $a=1$, $\\zeta(a+bi)^4$ has a zero of order at least 4 there, and $\\zeta(1+2bi)$ is finite, we get $f(a)\\to 0$ as $a\\to 1^+$. Thus $\\log\\abs{f(a)} \\to -\\infty$ as $a\\to 1^+$, while it is finite for $a > 1$ near $1$; so its derivative $\\Real(\\frac{f'(a)}{f(a)})$ must be positive (indeed arbitrarily large) somewhere just to the right of $a=1$, contradicting the fact that $\\Real(\\frac{f'(a)}{f(a)}) \\leq 0.$ Thus $\\zeta$ must not have any zeros on the line with real part equal to 1.\n\\end{solution}\n\\section{Prove or Disprove and Salvage if Possible}\n\n\\begin{problem}\\label{nonzero-infinite-product}\nFor a sequence $a_n \\neq 0$ of complex numbers, suppose\n$\\displaystyle\\sum_{n=1}^\\infty |a_n|$ converges.  Then\n$\\displaystyle\\prod_{n=1}^\\infty \\left( 1 + a_n \\right)$ converges\nto a nonzero quantity.\n\\end{problem}\n\\begin{solution}\nFalse. We should assume $a_n\\neq -1$ instead of $a_n \\neq 0.$\n\nChoose a branch of the log with a branch cut along a ray that avoids every point $1 + a_n$. This is always possible since the set of points $\\{1 + a_n\\}$ is countable, but the set of angles is uncountable. 
We will use this branch for the rest of the proof.\n\\begin{lemma}\\label{log_prod_convergence}\nIf\n\\[\n\\sum_{n=1}^\\infty \\log(1 + a_n)\n\\]\nconverges to $S_1$, then\n\\[\n\\prod_{n=1}^\\infty (1 + a_n)\n\\]\nconverges to $e^{S_1}$.\n\\end{lemma}\n\\begin{proof}\nIf the sum converges to $S_1$, then $e^{S_1}$ is finite and nonzero, and by continuity of $\\exp$,\n\\[\n\\prod_{n=1}^N (1 + a_n) = \\exp(\\sum_{n=1}^N \\log(1 + a_n)) \\to e^{S_1}\n\\]\n\\end{proof}\nNow suppose that $\\sum_{n=1}^\\infty \\abs{a_n}$ converges. Then there are only finitely many terms $a_n$ with $\\abs{a_n}>\\frac{1}{2}$, and for the rest,\n\\[\n\\abs{\\log(1 + a_n)} = \\abs{\\sum_{k=1}^\\infty \\frac{(-1)^{k-1}a_n^k}{k}} = \\abs{a_n + a_n\\sum_{j=1}^\\infty \\frac{(-1)^{j}a_n^j}{j+1}}\n\\]\nSince\n\\[\n\\abs{\\sum_{j=1}^\\infty \\frac{(-1)^{j}a_n^j}{j+1}}\\leq \\sum_{j=1}^\\infty \\abs{a_n}^j = \\frac{\\abs{a_n}}{1-\\abs{a_n}} \\leq 1,\n\\]\nwe can conclude that\n\\[\n\\abs{\\log(1 + a_n)}\\leq 2\\abs{a_n}\n\\]\nAnd since $\\sum \\abs{a_n}$ converges, $\\sum \\log(1 + a_n)$ converges as well, so by Lemma \\ref{log_prod_convergence}, $\\prod(1+a_n)$ converges to $e^{S_1}\\neq 0$.\n\\end{solution}\n\n\\begin{problem} % missing pole at s = 1\nDefine the \\textbf{Dirichlet eta function} via\n\\[\n\\eta(s) := \\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s}.\n\\]\nThis series converges to a holomorphic function when $\\Real s > 0$.\n\\end{problem}\n\\begin{solution}\nTrue. 
Let\n\\begin{align*}\n\\eta(s) &= \\sum_{n=1}^\\infty (2n-1)^{-s} - (2n)^{-s}\n\\end{align*}\npairing consecutive terms, and write $f(n, s) = (2n-1)^{-s} - (2n)^{-s}$. Since\n\\[\na^{-s} - b^{-s} = s\\int_a^b t^{-s-1}\\, dt,\n\\]\nwe can bound\n\\[\n\\abs{f(n, s)} \\leq \\abs{s}\\int_{2n-1}^{2n} t^{-\\Real(s)-1}\\, dt \\leq \\abs{s}\\,(2n-1)^{-\\Real(s)-1}\n\\]\nThus we see that for $\\Real(s) > 0$,\n\\begin{align*}\n\\sum_{n=1}^\\infty \\abs{f(n, s)} \\leq \\abs{s}\\sum_{n=1}^\\infty (2n-1)^{-\\Real(s)-1} < \\infty\n\\end{align*}\nso the series converges absolutely, and the bound is locally uniform in $s$ on $\\Real(s) > 0$. 
Also, since each partial sum $\\sum_{n=1}^N \\frac{(-1)^{n-1}}{n^s}$ is holomorphic, the locally uniform limit as $N\\to \\infty$ is also holomorphic.\n\\end{solution}\n\\begin{problem}\nThe Riemann zeta and Dirichlet eta functions are related via\n\\[\n\\left( 1 - 2^{1-s} \\right) \\zeta(s) = \\eta(s).\n\\]\n\\end{problem}\n\\begin{solution}\nWe need to be a little careful about the domain: $\\zeta$ isn't even defined at $s=1$. The claim is true if we restrict to $\\Re(s)>1$, since in this case all the infinite sums converge absolutely and so we can rearrange terms and make things cancel nicely:\n\\begin{align*}\n\\left( 1 - 2^{1-s} \\right)\\zeta(s) &=    (\\sum_{n=1}^\\infty \\frac{1}{n^s}) -  \\sum_{n=1}^\\infty \\frac{2}{(2n)^s} \\\\\n&= \\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s} = \\eta(s)\n\\end{align*}\n
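As a quick sanity check of the identity (our own addition, using the known values $\\zeta(2) = \\pi^2/6$ and $\\eta(2) = \\pi^2/12$): at $s=2$,\n\\[\n\\left( 1 - 2^{1-2} \\right) \\zeta(2) = \\frac{1}{2} \\cdot \\frac{\\pi^2}{6} = \\frac{\\pi^2}{12} = \\eta(2).\n\\]\n\\end{solution}\n\\end{document}\n", "meta": {"hexsha": "6ade498753ae0ab915238637da2ad416d20d3e22", "size": 47456, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem-solutions/sol11.tex", "max_stars_repo_name": "Alex7Li/math5522h", "max_stars_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem-solutions/sol11.tex", "max_issues_repo_name": "Alex7Li/math5522h", "max_issues_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-solutions/sol11.tex", "max_forks_repo_name": "Alex7Li/math5522h", "max_forks_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.8033573141, "max_line_length": 590, "alphanum_fraction": 0.2235544504, "num_tokens": 7075, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5658552137096053}}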
{"text": "\\documentclass[gr-notes.tex]{subfiles}\n\n\\begin{document}\n\n\\setcounter{chapter}{4}\n\n\\chapter{Preface to Curvature}\n\n\\setcounter{section}{7}\n\n\\section{Exercises}\n\n\\textbf{1}\n\n(a)\nRepeat the argument leading to Equation 5.1, but this time assume that only a fraction $\\epsilon < 1$ of the mass's kinetic energy is converted into a photon.\n\nIf only a fraction $\\epsilon$ of the energy is converted into a photon, then it will start with an energy of $\\epsilon (m + mgh + \\order{\\boldsymbol{v}^4}$, but once it reaches the top it should have an energy of $\\epsilon m$, as it loses the component due to gravitational potential energy. Thus\n%\n\\begin{displaymath}\n  \\frac{E'}{E} =\n  \\frac{\\epsilon m}{\\epsilon (m + mgh + \\order{\\boldsymbol{v}^4)}} =\n  \\frac{m}{m + mgh + \\order{\\boldsymbol{v}^4}} =\n  1 - gh + \\order{\\boldsymbol{v}^4}\n\\end{displaymath}\n\n(b)\nAssume Equation 5.1 does not hold. Devise a perpetual motion device.\n\nIf we assume that the photon does not return to an energy $m$ once it reaches the top, but instead has an energy $m' > m$, then we could create the perpetual motion device shown in Figure \\ref{fig:ch5-problem-1b}. A black box consumes the photon with energy $m'$, and splits it into a new object of mass $m$, and a photon of energy $m' - m$. The object repeats the action of the original falling mass, creating an infinite loop.\n\n\\begin{figure}[ht]\n  \\centering\n  \\includegraphics[width=0.5\\textwidth]{img/ch5_problem_1b}\n  \\caption{Problem 1: Perpetual motion device.}\n  \\label{fig:ch5-problem-1b}\n\\end{figure}\n\n\n\\textbf{2}\nExplain why a uniform gravitational field would not be able to create tides on Earth.\n\nTides depend on there being a gravitational field gradient. If the curvature closer to the source of the field (e.g. the Moon) is greater than it is further away, then the closer side will move towards the source more than the further side, thus creating tides. 
In the absence of such a gradient, there would be no difference in field strength between the two sides, and thus they would not stretch relative to each other.\n\n\n\n\\textbf{7}\nCalculate the components of $\\tensor{\\Lambda}{^{\\alpha'}_\\beta}$ and $\\tensor{\\Lambda}{^\\mu_{\\nu'}}$ for transformations $(x, y) \\leftrightarrow (r, \\theta)$.\n~\n\\begin{align*}\n  \\mqty( \\Delta r \\\\ \\Delta \\theta ) &=\n  \\mqty( \\pdv*{r}{x}      & \\pdv*{r}{y} \\\\\n         \\pdv*{\\theta}{x} & \\pdv*{\\theta}{y} )\n  \\mqty( \\Delta x \\\\ \\Delta y )\n  &\n  \\mqty( \\Delta x \\\\ \\Delta y ) &=\n  \\mqty( \\pdv*{x}{r} & \\pdv*{x}{\\theta} \\\\\n         \\pdv*{y}{r} & \\pdv*{y}{\\theta} )\n  \\mqty( \\Delta r \\\\ \\Delta \\theta )\n  \\\\ &=\n  \\mqty( x / \\sqrt{x^2+y^2} & y / \\sqrt{x^2 + y^2} \\\\\n         -y / (x^2 + y^2) & x / (x^2 + y^2) )\n  \\mqty( \\Delta x \\\\ \\Delta y )\n  &&=\n  \\mqty( \\cos\\theta & -r \\sin\\theta \\\\\n         \\sin\\theta &  r \\cos\\theta )\n  \\mqty( \\Delta r \\\\ \\Delta \\theta )\n  \\\\ &=\n  \\mqty( \\cos\\theta & \\sin\\theta \\\\\n         -(1/r) \\sin\\theta & (1/r) \\cos\\theta )\n  \\mqty( \\Delta x \\\\ \\Delta y )\n  &&=\n  \\mqty( x / \\sqrt{x^2 + y^2} & -y \\\\\n         y / \\sqrt{x^2 + y^2} &  x)\n  \\mqty( \\Delta r \\\\ \\Delta \\theta )\n\\end{align*}\n%\n\\begin{align*}\n  \\tensor{\\Lambda}{^r_x} &= x / \\sqrt{x^2 + y^2} = \\cos\\theta\n  &\n  \\tensor{\\Lambda}{^x_r} &= \\cos\\theta = x / \\sqrt{x^2 + y^2}\n  \\\\\n  \\tensor{\\Lambda}{^r_y} &= y / \\sqrt{x^2 + y^2} = \\sin\\theta\n  &\n  \\tensor{\\Lambda}{^y_r} &= \\sin\\theta = y / \\sqrt{x^2 + y^2}\n  \\\\\n  \\tensor{\\Lambda}{^\\theta_x} &= -y / (x^2 + y^2) = -(1/r) \\sin\\theta\n  &\n  \\tensor{\\Lambda}{^x_\\theta} &= -r \\sin\\theta = -y\n  \\\\\n  \\tensor{\\Lambda}{^\\theta_y} &= x / (x^2 + y^2) = (1/r) \\cos\\theta\n  &\n  \\tensor{\\Lambda}{^y_\\theta} &= r \\cos\\theta = x\n\\end{align*}\n\n\n\n\\textbf{8}\n\n(a)\n$f \\equiv x^2 + y^2 + 2xy$, $\\vec{V} \\underset{(x,y)}{\\to} (x^2 + 3y, y^2 + 3x)$, $\\vec{W} \\underset{(x,y)}{\\to} (1, 1)$. 
Express $f = f(r, \\theta)$, and find the components of $\\vec{V}$ and $\\vec{W}$ in a polar basis, as functions of $r$ and $\\theta$.\n\n\\begin{align*}\n  f &= x^2 + y^2 + 2xy = (x + y)^2\n  \\\\ &=\n  (r \\cos\\theta + r \\sin\\theta)^2 =\n  r^2 \\sin^2\\theta + r^2 \\cos^2\\theta + 2 r^2 \\sin\\theta \\cos\\theta\n  \\\\ &=\n  r^2 (1 + \\sin(2 \\theta))\n  \\\\\n%%%%%\n  \\vec{V} &\\underset{(x,y)}{\\to}\n  \\mqty( r^2 \\cos^2\\theta + 3 r \\sin\\theta \\\\\n         r^2 \\sin^2\\theta + 3 r \\cos\\theta )\n  \\\\\n  \\vec{V} &\\underset{(r,\\theta)}{\\to}\n  \\mqty( \\cos\\theta & \\sin\\theta \\\\\n         -(1/r) \\sin\\theta & (1/r) \\cos\\theta )\n  \\mqty( r^2 \\cos^2\\theta + 3 r \\sin\\theta \\\\\n         r^2 \\sin^2\\theta + 3 r \\cos\\theta )\n  \\\\ &\\underset{(r,\\theta)}{\\to}\n  \\mqty(\n    r^2 \\cos^3\\theta + 6 r \\sin\\theta \\cos\\theta + r^2 \\sin^3\\theta\n    \\\\\n    -r \\cos^2\\theta \\sin\\theta - 3 \\sin^2\\theta +\n     r \\sin^2\\theta \\cos\\theta + 3 \\cos^2\\theta\n  )\n  \\\\ &\\underset{(r,\\theta)}{\\to}\n  \\mqty(\n    r^2 (\\sin^3\\theta + \\cos^3\\theta) + 6 r \\sin\\theta \\cos\\theta\n    \\\\\n    r \\sin\\theta \\cos\\theta (\\sin\\theta - \\cos\\theta) +\n    3 (\\cos^2\\theta - \\sin^2\\theta)\n  )\n  \\\\ &\\underset{(r,\\theta)}{\\to}\n  \\mqty(\n    r^2 (\\sin^3\\theta + \\cos^3\\theta) + 3 r \\sin(2\\theta)\n    \\\\\n    (r/2) \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 \\cos(2\\theta)\n  )\n  \\\\\n%%%%%\n  \\vec{W} &\\underset{(r,\\theta)}{\\to}\n  \\mqty( \\cos\\theta & \\sin\\theta \\\\\n         -(1/r) \\sin\\theta & (1/r) \\cos\\theta )\n  \\mqty( 1 \\\\ 1 )\n  \\\\ &\\underset{(r,\\theta)}{\\to}\n  \\mqty(\n    \\cos\\theta + \\sin\\theta\n    \\\\\n    (1/r) (\\cos\\theta - \\sin\\theta)\n  )\n\\end{align*}\n\n\n\n(b)\nExpress the components of $\\tilde{\\dd}f$ in $(x,y)$ and obtain them in $(r,\\theta)$ by:\n\n(i) using direct calculation in $(r, \\theta)$:\n\n\\begin{displaymath}\n  \\tilde{\\dd}f \\underset{(r,\\theta)}{\\to}\n  \\qty( \\pdv*{f}{r}, \\pdv*{f}{\\theta} ) =\n  \\qty( 2 r (1 + \\sin(2 \\theta)), 2 r^2 \\cos(2 \\theta) )\n\\end{displaymath}\n\n(ii) transforming the components in $(x, y)$:\n\n\\begin{displaymath}\n  \\tilde{\\dd}f \\underset{(x,y)}{\\to}\n  \\qty( \\pdv*{f}{x}, \\pdv*{f}{y} ) =\n  \\qty( 2 (x+y), 2 (x+y) ) =\n  \\qty( 2 r (\\cos\\theta + \\sin\\theta), 2 r (\\cos\\theta + \\sin\\theta))\n\\end{displaymath}\n\n\\begin{align*}\n  \\mqty( (\\tilde{\\dd}f)_r & (\\tilde{\\dd}f)_\\theta ) &=\n  \\mqty( 1 & 1 )\n  \\mqty( \\cos\\theta & -r\\sin\\theta \\\\ \\sin\\theta & r\\cos\\theta )\n  \\qty[ 2 r (\\cos\\theta + \\sin\\theta) ]\n  \\\\ &=\n  \\mqty( 2 r  (\\cos^2\\theta + \\sin^2\\theta + 2 \\sin\\theta\\cos\\theta) &\n         2 r^2 (\\cos^2\\theta - \\sin^2\\theta) )\n  \\\\ &=\n  \\mqty( 2 r (1 + \\sin(2 \\theta)) & 2 r^2 \\cos(2 \\theta) )\n\\end{align*}\n\n\n(c)\nNow find the $(r,\\theta)$ components of the one-forms $\\tilde{V}$ and $\\tilde{W}$ associated with the vectors $\\vec{V}$ and $\\vec{W}$ by\n\n(i)\nusing the metric tensor in $(r,\\theta)$:\n\n\n\\begin{align*}\n  V_r &=\n  g_{r\\alpha} V^\\alpha =\n  g_{rr} V^r + g_{r\\theta} V^\\theta\n  \\\\ &=\n  r^2 (\\sin^3\\theta + \\cos^3\\theta) + 3 r \\sin(2\\theta)\n  \\\\\n  V_\\theta &=\n  g_{\\theta r} V^r + g_{\\theta\\theta} V^\\theta =\n  (1/2) r^3 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  3 r^2 \\cos(2\\theta)\n  \\\\\n%%%%%\n  W_r &=\n  g_{r\\alpha} W^\\alpha =\n  g_{rr} W^r + g_{r\\theta} W^\\theta\n  \\\\ &=\n  1 (\\cos\\theta 
+ \\sin\\theta) +\n  0 \\qty[ (1/r) (\\cos\\theta - \\sin\\theta) ]\n  \\\\ &=\n  \\cos\\theta + \\sin\\theta\n  \\\\\n%%%%%\n  W_\\theta &=\n  g_{\\theta r} W^r + g_{\\theta\\theta} W^\\theta\n  \\\\ &=\n  0 (\\cos\\theta + \\sin\\theta) +\n  r^2 \\qty[ (1/r) (\\cos\\theta - \\sin\\theta) ]\n  \\\\ &=\n  r (\\cos\\theta - \\sin\\theta)\n\\end{align*}\n\n\n(ii)\nusing the metric tensor in $(x,y)$ and then doing a coordinate transformation:\n\n\\begin{align*}\n  V_x &= V^x; \\quad V_y = V^y\n  \\\\\n  V_r &=\n  \\tensor{\\Lambda}{^\\alpha_r} V_\\alpha =\n  \\tensor{\\Lambda}{^x_r} V_x + \\tensor{\\Lambda}{^y_r} V_y\n  \\\\ &=\n  \\cos\\theta V_x + \\sin\\theta V_y\n  \\\\ &=\n  r^2 \\cos^3\\theta + (3/2) r \\sin(2\\theta) +\n  r^2 \\sin^3\\theta + (3/2) r \\sin(2\\theta)\n  \\\\ &=\n  r^2 (\\cos^3\\theta + \\sin^3\\theta) + 3 r \\sin(2\\theta)\n  \\\\\n  V_\\theta &=\n  \\tensor{\\Lambda}{^\\alpha_\\theta} V_\\alpha =\n  \\tensor{\\Lambda}{^x_\\theta} V_x + \\tensor{\\Lambda}{^y_\\theta} V_y\n  \\\\ &=\n  (-r \\sin\\theta) V_x + (r \\cos\\theta) V_y\n  \\\\ &=\n  -r^3 \\cos^2\\theta \\sin\\theta - 3 r^2 \\sin^2\\theta +\n   r^3 \\sin^2\\theta \\cos\\theta + 3 r^2 \\cos^2\\theta\n  \\\\ &=\n  r^3 \\sin\\theta \\cos\\theta (\\sin\\theta - \\cos\\theta) +\n  3 r^2 (\\cos^2\\theta - \\sin^2\\theta)\n  \\\\ &=\n  (1/2) r^3 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  3 r^2 \\cos(2\\theta)\n  \\\\\n%%%%%\n  W_x &= W^x = 1; \\quad W_y = W^y = 1\n  \\\\\n  W_r &=\n  \\tensor{\\Lambda}{^\\alpha_r} W_\\alpha =\n  \\tensor{\\Lambda}{^x_r} W_x + \\tensor{\\Lambda}{^y_r} W_y\n  \\\\ &=\n  \\cos\\theta + \\sin\\theta\n  \\\\\n  W_\\theta &=\n  \\tensor{\\Lambda}{^\\alpha_\\theta} W_\\alpha =\n  \\tensor{\\Lambda}{^x_\\theta} W_x + \\tensor{\\Lambda}{^y_\\theta} W_y\n  \\\\ &=\n  -r \\sin\\theta + r \\cos\\theta\n  \\\\ &=\n  r (\\cos\\theta - \\sin\\theta)\n\\end{align*}\n\n\n\\textbf{11}\nConsider $V \\underset{(x,y)}{\\to} (x^2 + 3y, y^2 + 3x)$.\n\n(a)\nFind $\\tensor{V}{^\\alpha_{,\\beta}}$ in Cartesian coordinates.\n%\n\\begin{displaymath}\n  \\tensor{V}{^x_{,x}} = 2 x; \\quad\n  \\tensor{V}{^y_{,y}} = 2 y; \\quad\n  \\tensor{V}{^x_{,y}} = \\tensor{V}{^y_{,x}} = 3.\n\\end{displaymath}\n\n(b)\n\\begin{align*}\n  \\tensor{V}{^{\\mu'}_{;\\nu'}} &=\n  \\tensor{\\Lambda}{^{\\mu'}_\\alpha}\n  \\tensor{\\Lambda}{^\\beta_{\\nu'}}\n  \\tensor{V}{^\\alpha_{,\\beta}}\n  \\\\\n  \\tensor{V}{^r_{;r}} &=\n  \\tensor{\\Lambda}{^r_x} \\tensor{\\Lambda}{^x_r} \\tensor{V}{^x_{,x}} +\n  \\tensor{\\Lambda}{^r_y} \\tensor{\\Lambda}{^y_r} \\tensor{V}{^y_{,y}} +\n  \\tensor{\\Lambda}{^r_x} \\tensor{\\Lambda}{^y_r} \\tensor{V}{^x_{,y}} +\n  \\tensor{\\Lambda}{^r_y} \\tensor{\\Lambda}{^x_r} \\tensor{V}{^y_{,x}}\n  \\\\ &=\n  (\\cos^2\\theta) (2r \\cos\\theta) +\n  (\\sin^2\\theta) (2r \\sin\\theta) +\n  (\\sin\\theta \\cos\\theta) (3) +\n  (\\sin\\theta \\cos\\theta) (3)\n  \\\\ &=\n  2r (\\cos^3\\theta + \\sin^3\\theta) +\n  3 \\sin(2\\theta)\n  \\\\\n  \\tensor{V}{^\\theta_{;\\theta}} &=\n  \\tensor{\\Lambda}{^\\theta_x} \\tensor{\\Lambda}{^x_\\theta} \\tensor{V}{^x_{,x}} +\n  \\tensor{\\Lambda}{^\\theta_y} \\tensor{\\Lambda}{^y_\\theta} \\tensor{V}{^y_{,y}} +\n  \\tensor{\\Lambda}{^\\theta_x} \\tensor{\\Lambda}{^y_\\theta} \\tensor{V}{^x_{,y}} +\n  \\tensor{\\Lambda}{^\\theta_y} \\tensor{\\Lambda}{^x_\\theta} \\tensor{V}{^y_{,x}}\n  \\\\ &=\n  (\\sin^2\\theta) (2 r \\cos\\theta) + (\\cos^2\\theta) (2 r \\sin\\theta) +\n  (-\\sin\\theta \\cos\\theta) (3) + (-\\sin\\theta \\cos\\theta) (3)\n  \\\\ &=\n  \\sin(2\\theta) [ r (\\sin\\theta + 
\\cos\\theta) - 3 ]\n  \\\\\n  \\tensor{V}{^r_{;\\theta}} &=\n  \\tensor{\\Lambda}{^r_x} \\tensor{\\Lambda}{^x_\\theta} \\tensor{V}{^x_{,x}} +\n  \\tensor{\\Lambda}{^r_y} \\tensor{\\Lambda}{^y_\\theta} \\tensor{V}{^y_{,y}} +\n  \\tensor{\\Lambda}{^r_x} \\tensor{\\Lambda}{^y_\\theta} \\tensor{V}{^x_{,y}} +\n  \\tensor{\\Lambda}{^r_y} \\tensor{\\Lambda}{^x_\\theta} \\tensor{V}{^y_{,x}}\n  \\\\ &=\n  (-r \\sin\\theta \\cos\\theta) (2 r \\cos\\theta) +\n  (r \\sin\\theta \\cos\\theta) (2 r \\sin\\theta) +\n  (r \\cos^2\\theta) (3) + (-r \\sin^2\\theta) (3)\n  \\\\ &=\n  r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  3 r \\cos(2 \\theta)\n  \\\\\n  \\tensor{V}{^\\theta_{;r}} &=\n  \\tensor{\\Lambda}{^\\theta_x} \\tensor{\\Lambda}{^x_r} \\tensor{V}{^x_{,x}} +\n  \\tensor{\\Lambda}{^\\theta_y} \\tensor{\\Lambda}{^y_r} \\tensor{V}{^y_{,y}} +\n  \\tensor{\\Lambda}{^\\theta_x} \\tensor{\\Lambda}{^y_r} \\tensor{V}{^x_{,y}} +\n  \\tensor{\\Lambda}{^\\theta_y} \\tensor{\\Lambda}{^x_r} \\tensor{V}{^y_{,x}}\n  \\\\ &=\n  (-(1/r) \\sin\\theta \\cos\\theta) (2 r \\cos\\theta) +\n  ( (1/r) \\sin\\theta \\cos\\theta) (2 r \\sin\\theta) +\n  (-(1/r) \\sin^2\\theta) (3) +\n  ( (1/r) \\cos^2\\theta) (3)\n  \\\\ &=\n  \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  \\frac{3}{r} \\cos(2 \\theta)\n\\end{align*}\n\n\n(c)\nCompute $\\tensor{V}{^{\\mu'}_{;\\nu'}}$ directly in polars using the Christoffel symbols.\n\nRecall that we have $\\tensor{\\Gamma}{^\\mu_{rr}} = \\tensor{\\Gamma}{^r_{r\\theta}} = \\tensor{\\Gamma}{^\\theta_{\\theta\\theta}} = 0$, $\\tensor{\\Gamma}{^\\theta_{r\\theta}} = 1/r$, and $\\tensor{\\Gamma}{^r_{\\theta\\theta}} = -r$.\n\n\\begin{align*}\n  \\tensor{V}{^{\\mu'}_{;\\nu'}} &=\n  \\tensor{V}{^{\\mu'}_{,\\nu'}} +\n  \\tensor{V}{^{\\alpha'}} \\tensor{\\Gamma}{^{\\mu'}_{\\alpha'\\nu'}}\n  \\\\\n  \\tensor{V}{^r_{;r}} &=\n  \\tensor{V}{^r_{,r}} +\n  \\tensor{V}{^{\\alpha'}} \\tensor{\\Gamma}{^r_{\\alpha'r}}\n  \\\\\n  \\tensor{V}{^r_{,r}} &=\n  \\pdv*{V^r}{r} =\n  2 r (\\sin^3\\theta + \\cos^3\\theta) + 3 \\sin(2\\theta)\n  \\\\\n  \\tensor{V}{^\\alpha} \\tensor{\\Gamma}{^r_{\\alpha r}} &=\n  V^r     \\tensor{\\Gamma}{^r_{rr}} +\n  V^\\theta \\tensor{\\Gamma}{^r_{\\theta r}} =\n  0\n  \\\\\n  \\tensor{V}{^r_{;r}} &=\n  \\tensor{V}{^r_{,r}} =\n  2 r (\\sin^3\\theta + \\cos^3\\theta) + 3 \\sin(2\\theta)\n  \\\\\n  \\tensor{V}{^\\theta_{;\\theta}} &=\n  \\tensor{V}{^\\theta_{,\\theta}} +\n  \\tensor{V}{^{\\alpha'}} \\tensor{\\Gamma}{^\\theta_{\\alpha'\\theta}}\n  \\\\\n  \\tensor{V}{^\\theta_{,\\theta}} &=\n  \\pdv*{V^\\theta}{\\theta} =\n  (r/2) \\sin(2\\theta) (\\sin\\theta + \\cos\\theta) +\n  r \\cos(2\\theta) (\\sin\\theta - \\cos\\theta) -\n  6 \\sin(2\\theta)\n  \\\\\n  \\tensor{V}{^{\\alpha'}} \\tensor{\\Gamma}{^\\theta_{\\alpha'\\theta}} &=\n  \\tensor{V}{^r} \\tensor{\\Gamma}{^\\theta_{r\\theta}} +\n  \\tensor{V}{^\\theta} \\tensor{\\Gamma}{^\\theta_{\\theta\\theta}}\n  \\\\ &=\n  [r^2 (\\sin^3\\theta + \\cos^3\\theta) + 3r \\sin(2\\theta)] (1/r)\n  \\\\ &=\n  r (\\sin^3\\theta + \\cos^3\\theta) + 3 \\sin(2\\theta)\n  \\\\\n  \\tensor{V}{^\\theta_{;\\theta}} &=\n  \\sin(2\\theta) [r (\\sin\\theta + \\cos\\theta) - 3]\n  \\\\\n  \\tensor{V}{^r_{;\\theta}} &=\n  \\tensor{V}{^r_{,\\theta}} +\n  V^r \\tensor{\\Gamma}{^r_{r\\theta}} +\n  V^\\theta \\tensor{\\Gamma}{^r_{\\theta\\theta}}\n  \\\\ &=\n  \\tensor{V}{^r_{,\\theta}} +\n  V^\\theta \\tensor{\\Gamma}{^r_{\\theta\\theta}} =\n  \\pdv*{V^r}{\\theta} - r V^\\theta\n  \\\\ &=\n  6 r \\cos(2\\theta) +\n  (3/2) r^2 \\sin(2\\theta) 
(\\sin\\theta-\\cos\\theta) -\n  ((1/2) r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 r \\cos(2\\theta))\n  \\\\ &=\n  r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 r \\cos(2\\theta)\n  \\\\\n  \\tensor{V}{^\\theta_{;r}} &=\n  \\tensor{V}{^\\theta_{,r}} +\n  V^r \\tensor{\\Gamma}{^\\theta_{rr}} +\n  V^\\theta \\tensor{\\Gamma}{^\\theta_{\\theta r}} =\n  \\tensor{V}{^\\theta_{,r}} +\n  \\frac{1}{r}\n  V^\\theta\n  \\\\ &=\n  (1/2) \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  (1/2) \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  (3/r) \\cos(2\\theta)\n  \\\\ &=\n  \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n  (3/r) \\cos(2\\theta)\n\\end{align*}\n\n\n(d)\nCalculate the divergence using the results from part (a)\n%\n\\begin{displaymath}\n  \\tensor{V}{^\\alpha_{,\\alpha}} =\n  \\tensor{V}{^x_{,x}} + \\tensor{V}{^y_{,y}} =\n  2 (x + y) =\n  2 r (\\sin\\theta + \\cos\\theta)\n\\end{displaymath}\n\n\n(e)\nCalculate the divergence using the results from either part (b) or (c).\n%\n\\begin{align*}\n  \\tensor{V}{^{\\mu'}_{;\\mu'}} &=\n  \\tensor{V}{^{r}_{;r}} +\n  \\tensor{V}{^{\\theta}_{;\\theta}}\n  \\\\ &=\n  2 r (\\sin^3\\theta + \\cos^3\\theta) +\n  3 \\sin(2\\theta) +\n  \\sin(2\\theta) [r (\\sin\\theta + \\cos\\theta) - 3]\n  \\\\ &=\n  2 r (\\sin\\theta + \\cos\\theta)\n\\end{align*}\n\n\n\n(f)\nCompute $\\tensor{V}{^{\\mu'}_{;\\mu'}}$ using Equation 5.56.\n\n\\begin{displaymath}\n  \\tensor{V}{^{\\mu'}_{;\\mu'}} =\n  \\frac{1}{r} \\pdv{r} (r V^r) + \\pdv{\\theta} (V^\\theta) =\n  2 r (\\sin\\theta + \\cos\\theta)\n\\end{displaymath}\n\n\n\n\n\\textbf{12}\n%\n\\begin{displaymath}\n  \\tilde{p} \\underset{(x,y)}{\\to} (x^2 + 3y, y^2 + 3x).\n\\end{displaymath}\n\n(a)\nFind the components $p_{\\alpha,\\beta}$ in Cartesian coordinates.\n\nSince $p_{\\alpha,\\beta} = \\pdv*{p_\\alpha}{x^\\beta}$, it's simply $p_{x,x} = 2x$, $p_{y,y} = 2y$, and $p_{x,y} = p_{y,x} = 3$.\n\n(b)\nFind the components $p_{\\mu';\\nu'}$ in polar coordinates by using the transformation $\\tensor{\\Lambda}{^\\alpha_{\\mu'}} \\tensor{\\Lambda}{^\\beta_{\\nu'}} p_{\\alpha,\\beta}$.\n\n\\begin{align*}\n  p_{r;r} &=\n  \\qty(\\tensor{\\Lambda}{^x_r})^2 p_{x,x} +\n  \\qty(\\tensor{\\Lambda}{^y_r})^2 p_{y,y} +\n  2 \\tensor{\\Lambda}{^x_r} \\tensor{\\Lambda}{^y_r} p_{x,y}\n  \\\\ &=\n  (\\cos^2\\theta) (2 r \\cos\\theta) +\n  (\\sin^2\\theta) (2 r \\sin\\theta) +\n  2 (\\sin\\theta \\cos\\theta) (3)\n  \\\\ &=\n  2 r (\\sin^3\\theta + \\cos^3\\theta) + 3 \\sin(2\\theta)\n  \\\\\n  p_{\\theta;\\theta} &=\n  \\qty(\\tensor{\\Lambda}{^x_\\theta})^2 p_{x,x} +\n  \\qty(\\tensor{\\Lambda}{^y_\\theta})^2 p_{y,y} +\n  2 \\tensor{\\Lambda}{^x_\\theta} \\tensor{\\Lambda}{^y_\\theta} p_{x,y}\n  \\\\ &=\n  (-r \\sin\\theta)^2 (2 r \\cos\\theta) +\n  (r \\cos\\theta)^2 (2 r \\sin\\theta) +\n  2 (3 (-r \\sin\\theta) (r \\cos\\theta))\n  \\\\ &=\n  r^2 \\sin(2\\theta) (r (\\sin\\theta + \\cos\\theta) - 3)\n  \\\\\n  p_{r;\\theta} &=\n  \\tensor{\\Lambda}{^x_r} \\tensor{\\Lambda}{^x_\\theta} p_{x,x} +\n  \\tensor{\\Lambda}{^y_r} \\tensor{\\Lambda}{^y_\\theta} p_{y,y} +\n  \\tensor{\\Lambda}{^x_r} \\tensor{\\Lambda}{^y_\\theta} p_{x,y} +\n  \\tensor{\\Lambda}{^y_r} \\tensor{\\Lambda}{^x_\\theta} p_{y,x}\n  \\\\ &=\n  (-r \\sin\\theta \\cos\\theta) (2 r \\cos\\theta) +\n  (r \\sin\\theta \\cos\\theta) (2 r \\sin\\theta) +\n  3 (r \\cos^2\\theta - r \\sin^2\\theta)\n  \\\\ &=\n  r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 r \\cos(2\\theta),\n\\end{align*}\nand by the symmetry of $p_{\\alpha,\\beta}$ in 
Cartesian coordinates, $p_{\\theta;r} = p_{r;\\theta}$.\n\n(c)\nNow find $p_{\\mu';\\nu'}$ using the Christoffel symbols.\n\n\\begin{align*}\n  p_{r;r} &=\n  p_{r,r} -\n  p_r \\tensor{\\Gamma}{^r_{rr}} -\n  p_\\theta \\tensor{\\Gamma}{^\\theta_{rr}} =\n  p_{r,r} = \\pdv*{p_r}{r}\n  \\\\ &=\n  \\pdv*{r} \\qty[ r^2 (\\cos^3\\theta + \\sin^3\\theta) + 3 r \\sin(2\\theta) ] =\n  2 r (\\sin^3\\theta + \\cos^3\\theta) + 3 \\sin(2\\theta)\n  \\\\\n  p_{\\theta;\\theta} &=\n  p_{\\theta,\\theta} -\n  p_r \\tensor{\\Gamma}{^r_{\\theta\\theta}} -\n  p_\\theta \\tensor{\\Gamma}{^\\theta_{\\theta\\theta}} =\n  p_{\\theta,\\theta} + r p_r\n  \\\\ &=\n  \\pdv*{\\theta} \\qty[\n    (1/2) r^3 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) +\n    3 r^2 \\cos(2\\theta)\n  ] +\n  r \\qty[\n    r^2 (\\cos^3\\theta + \\sin^3\\theta) + 3 r \\sin(2\\theta)\n  ]\n  \\\\ &=\n  r^2 \\sin(2\\theta) \\qty[ r (\\sin\\theta + \\cos\\theta) - 3 ]\n  \\\\\n  p_{r;\\theta} &=\n  p_{r,\\theta} -\n  p_r \\tensor{\\Gamma}{^r_{r\\theta}} -\n  p_\\theta \\tensor{\\Gamma}{^\\theta_{r\\theta}} =\n  \\pdv*{p_r}{\\theta} - (1/r) p_\\theta\n  \\\\ &=\n  r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 r \\cos(2\\theta)\n  \\\\\n  p_{\\theta;r} &=\n  p_{\\theta,r} -\n  p_r \\tensor{\\Gamma}{^r_{\\theta r}} -\n  p_\\theta \\tensor{\\Gamma}{^\\theta_{\\theta r}} =\n  \\pdv*{p_\\theta}{r} - (1/r) p_\\theta\n  \\\\ &=\n  r^2 \\sin(2\\theta) (\\sin\\theta - \\cos\\theta) + 3 r \\cos(2\\theta)\n\\end{align*}\n\n\n\\textbf{13}\nShow in polars that $g_{\\mu'\\alpha'} \\tensor{V}{^{\\alpha'}_{;\\nu'}} = p_{\\mu';\\nu'}$.\n\n\\begin{align*}\n  g_{r\\alpha'} \\tensor{V}{^{\\alpha'}_{;r}} &=\n  g_{rr} \\tensor{V}{^{r}_{;r}} + g_{r\\theta} \\tensor{V}{^{\\theta}_{;r}}\n  \\\\ &=\n  1 \\tensor{V}{^r_{;r}} =\n  p_{r;r}\n  \\\\\n  g_{\\theta\\alpha'} \\tensor{V}{^{\\alpha'}_{;\\theta}} &=\n  g_{\\theta r} \\tensor{V}{^r_{;\\theta}} +\n  g_{\\theta\\theta} \\tensor{V}{^\\theta_{;\\theta}}\n  \\\\ &=\n  r^2 \\tensor{V}{^\\theta_{;\\theta}} =\n  p_{\\theta;\\theta}\n  \\\\\n  g_{r \\alpha'} \\tensor{V}{^{\\alpha'}_{;\\theta}} &=\n  g_{rr} \\tensor{V}{^r_{;\\theta}} +\n  g_{r\\theta} \\tensor{V}{^\\theta_{;\\theta}}\n  \\\\ &=\n  1 \\tensor{V}{^r_{;\\theta}} =\n  p_{r;\\theta}\n  \\\\\n  g_{\\theta\\alpha'} \\tensor{V}{^{\\alpha'}_{;r}} &=\n  g_{\\theta r} \\tensor{V}{^r_{;r}} +\n  g_{\\theta\\theta} \\tensor{V}{^\\theta_{;r}}\n  \\\\ &=\n  r^2 \\tensor{V}{^\\theta_{;r}} =\n  p_{\\theta;r}\n\\end{align*}\n\n\n\\textbf{14}\nCompute $\\grad_\\beta A^{\\mu\\nu}$ for the tensor $\\boldsymbol{A}$ with components:\n%\n\\begin{align*}\n  A^{rr} &= r^2, & A^{r\\theta} &= r \\sin\\theta,\n  \\\\\n  A^{\\theta\\theta} &= \\tan\\theta, & A^{\\theta r} &= r \\cos\\theta\n\\end{align*}\n%\n\\begin{align*}\n  \\tensor{A}{^{rr}_{,r}} &= 2r &\n  \\tensor{A}{^{rr}_{,\\theta}} &= 0\n  \\\\\n  \\tensor{A}{^{\\theta\\theta}_{,r}} &= 0 &\n  \\tensor{A}{^{\\theta\\theta}_{,\\theta}} &= \\sec^2\\theta\n  \\\\\n  \\tensor{A}{^{r\\theta}_{,r}} &= \\sin\\theta &\n  \\tensor{A}{^{r\\theta}_{,\\theta}} &= r \\cos\\theta\n  \\\\\n  \\tensor{A}{^{\\theta r}_{,r}} &= \\cos\\theta &\n  \\tensor{A}{^{\\theta r}_{,\\theta}} &= -r \\sin\\theta\n\\end{align*}\n%\n\\begin{displaymath}\n  \\grad_\\beta A^{\\mu \\nu} =\n  \\tensor{A}{^{\\mu \\nu}_{,\\beta}} +\n  A^{\\alpha \\nu} \\tensor{\\Gamma}{^\\mu_{\\alpha \\beta}} +\n  A^{\\mu \\alpha} \\tensor{\\Gamma}{^\\nu_{\\alpha \\beta}}\n\\end{displaymath}\n%\n\\begin{align*}\n  \\grad_r A^{rr} &=\n  
\\tensor{A}{^{rr}_{,r}} +\n  A^{\\alpha r} \\tensor{\\Gamma}{^r_{\\alpha r}} +\n  A^{r\\alpha} \\tensor{\\Gamma}{^r_{\\alpha r}}\n  \\\\ &=\n  \\tensor{A}{^{rr}_{,r}} +\n  A^{r r} \\tensor{\\Gamma}{^r_{r r}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^r_{\\theta r}} +\n  A^{r r} \\tensor{\\Gamma}{^r_{r r}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^r_{\\theta r}}\n  \\\\ &=\n  \\tensor{A}{^{rr}_{,r}} =\n  2 r\n  \\\\\n  \\grad_\\theta A^{r r} &=\n  \\tensor{A}{^{r r}_{,\\theta}} +\n  A^{\\alpha r} \\tensor{\\Gamma}{^r_{\\alpha \\theta}} +\n  A^{r \\alpha} \\tensor{\\Gamma}{^r_{\\alpha \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{r r}_{,\\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^r_{r \\theta}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^r_{\\theta \\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^r_{r \\theta}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^r_{\\theta \\theta}}\n  \\\\ &=\n  (A^{\\theta r} + A^{r \\theta}) \\tensor{\\Gamma}{^r_{\\theta \\theta}} =\n  -r^2 (\\sin\\theta + \\cos\\theta)\n  \\\\\n  \\grad_r A^{\\theta \\theta} &=\n  \\tensor{A}{^{\\theta \\theta}_{,r}} +\n  A^{\\alpha \\theta} \\tensor{\\Gamma}{^\\theta_{\\alpha r}} +\n  A^{\\theta \\alpha} \\tensor{\\Gamma}{^\\theta_{\\alpha r}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta \\theta}_{,r}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^\\theta_{r r}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta r}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^\\theta_{r r}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta r}}\n  \\\\ &=\n  2 A^{\\theta \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta r}} =\n  (2/r) \\tan\\theta\n  \\\\\n  \\grad_\\theta A^{\\theta \\theta} &=\n  \\tensor{A}{^{\\theta \\theta}_{,\\theta}} +\n  A^{\\alpha \\theta} \\tensor{\\Gamma}{^\\theta_{\\alpha \\theta}} +\n  A^{\\theta \\alpha} \\tensor{\\Gamma}{^\\theta_{\\alpha \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta \\theta}_{,\\theta}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^\\theta_{r \\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta \\theta}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^\\theta_{r \\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta \\theta}_{,\\theta}} +\n  (A^{r \\theta} + A^{\\theta r}) \\tensor{\\Gamma}{^\\theta_{r \\theta}} =\n  \\sin\\theta + \\cos\\theta + \\sec^2\\theta\n  \\\\\n  \\grad_r A^{r \\theta} &=\n  \\tensor{A}{^{r \\theta}_{,r}} +\n  A^{\\alpha \\theta} \\tensor{\\Gamma}{^r_{\\alpha r}} +\n  A^{r \\alpha} \\tensor{\\Gamma}{^\\theta_{\\alpha r}}\n  \\\\ &=\n  \\tensor{A}{^{r \\theta}_{,r}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^r_{r r}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta r}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r r}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta r}}\n  \\\\ &=\n  \\tensor{A}{^{r \\theta}_{,r}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta r}} =\n  2 \\sin\\theta\n  \\\\\n  \\grad_\\theta A^{r \\theta} &=\n  \\tensor{A}{^{r \\theta}_{,\\theta}} +\n  A^{\\alpha \\theta} \\tensor{\\Gamma}{^r_{\\alpha \\theta}} +\n  A^{r \\alpha} \\tensor{\\Gamma}{^\\theta_{\\alpha \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{r \\theta}_{,\\theta}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^r_{r \\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta \\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r \\theta}} +\n  A^{r \\theta} \\tensor{\\Gamma}{^\\theta_{\\theta \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{r \\theta}_{,\\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta \\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r \\theta}} =\n 
r (1 + \\cos\\theta - \\tan\\theta)\n  \\\\\n  \\grad_r A^{\\theta r} &=\n  \\tensor{A}{^{\\theta r}_{,r}} +\n  A^{\\alpha r} \\tensor{\\Gamma}{^\\theta_{\\alpha r}} +\n  A^{\\theta \\alpha} \\tensor{\\Gamma}{^r_{\\alpha r}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta r}_{,r}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r r}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^\\theta_{\\theta r}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^r_{r r}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta r}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta r}_{,r}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^\\theta_{\\theta r}} =\n  2 \\cos\\theta\n  \\\\\n  \\grad_\\theta A^{\\theta r} &=\n  \\tensor{A}{^{\\theta r}_{,\\theta}} +\n  A^{\\alpha r} \\tensor{\\Gamma}{^\\theta_{\\alpha \\theta}} +\n  A^{\\theta \\alpha} \\tensor{\\Gamma}{^r_{\\alpha \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta r}_{,\\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r \\theta}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^\\theta_{\\theta \\theta}} +\n  A^{\\theta r} \\tensor{\\Gamma}{^r_{r \\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta \\theta}}\n  \\\\ &=\n  \\tensor{A}{^{\\theta r}_{,\\theta}} +\n  A^{r r} \\tensor{\\Gamma}{^\\theta_{r \\theta}} +\n  A^{\\theta \\theta} \\tensor{\\Gamma}{^r_{\\theta \\theta}} =\n  r (1 - \\sin\\theta - \\tan\\theta)\n\\end{align*}\n\n\n\\textbf{15}\nFind the components of $\\tensor{V}{^\\alpha_{;\\beta;\\gamma}}$ for the vector $V^r = 1$, $V^\\theta = 0$.\n\nWe start by finding the components of $\\tensor{V}{^\\alpha_{;\\beta}}$.\n%\n\\begin{displaymath}\n  \\tensor{V}{^\\alpha_{;\\beta}} =\n  \\tensor{V}{^\\alpha_{,\\beta}} +\n  V^\\mu \\tensor{\\Gamma}{^\\alpha_{\\mu\\beta}}.\n\\end{displaymath}\n%\nBy noting that $\\tensor{V}{^\\alpha_{,\\beta}} = V^\\theta = \\tensor{\\Gamma}{^r_{rr}} = \\tensor{\\Gamma}{^r_{r\\theta}} = 0$, we can simplify this to\n%\n\\begin{displaymath}\n  \\tensor{V}{^\\alpha_{;\\beta}} =\n  V^r \\tensor{\\Gamma}{^\\alpha_{r\\beta}},\n\\end{displaymath}\n%\nwhich means\n%\n\\begin{displaymath}\n  \\tensor{V}{^r_{;r}} = \\tensor{V}{^r_{;\\theta}} = \\tensor{V}{^\\theta_{;r}} = 0;\n  \\quad\n  \\tensor{V}{^\\theta_{;\\theta}} = \\frac{1}{r}.\n\\end{displaymath}\n%\nNow we can say\n%\n\\begin{displaymath}\n  \\tensor{V}{^\\alpha_{;\\beta;\\mu}} =\n  \\grad_\\mu \\tensor{V}{^\\alpha_{;\\beta}} =\n  \\tensor{V}{^\\alpha_{;\\beta,\\mu}} +\n  \\tensor{V}{^\\gamma_{;\\beta}} \\tensor{\\Gamma}{^\\alpha_{\\gamma\\mu}} -\n  \\tensor{V}{^\\alpha_{;\\gamma}} \\tensor{\\Gamma}{^\\gamma_{\\beta\\mu}}.\n\\end{displaymath}\n%\nNote that $\\tensor{V}{^\\theta_{;\\theta}}$ is a function only of $r$, and so $\\tensor{V}{^\\theta_{;\\theta,r}} = -1/r^2$, and all other partial derivatives are zero. We can also see by inspecting the components that $\\tensor{V}{^r_{;\\beta;\\mu}} = \\tensor{V}{^\\theta_{;\\beta}} \\tensor{\\Gamma}{^r_{\\theta\\mu}}$, as all other components go to zero. Likewise, we can see that $\\tensor{V}{^\\theta_{;r;\\mu}} = -\\tensor{V}{^\\theta_{;\\theta}} \\tensor{\\Gamma}{^\\theta_{r\\mu}}$. It then becomes easy to find all the individual components. 
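For instance (a fully spelled-out case, using only the nonzero pieces identified above),\n%\n\\begin{displaymath}\n  \\tensor{V}{^r_{;\\theta;\\theta}} =\n  \\tensor{V}{^\\theta_{;\\theta}} \\tensor{\\Gamma}{^r_{\\theta\\theta}} =\n  \\frac{1}{r} (-r) =\n  -1.\n\\end{displaymath}\n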
I summarize their values in Table \\ref{tab:ch5-ex15}.\n\n\\begin{table}[b]\n  \\centering\n  \\begin{tabular}{ccc|c}\n    $\\alpha$ &  $\\beta$ &    $\\mu$ & $\\tensor{V}{^\\alpha_{;\\beta;\\mu}}$ \\\\\n    \\hline\n    $\\theta$ & $\\theta$ & $\\theta$ & $0$      \\\\\n    $\\theta$ & $\\theta$ &      $r$ & $-1/r^2$ \\\\\n    $\\theta$ &      $r$ & $\\theta$ & $-1/r^2$ \\\\\n    $\\theta$ &      $r$ &      $r$ & $0$      \\\\\n         $r$ & $\\theta$ & $\\theta$ & $-1$     \\\\\n         $r$ & $\\theta$ &      $r$ & $0$      \\\\\n         $r$ &      $r$ & $\\theta$ & $0$      \\\\\n         $r$ &      $r$ &      $r$ & $0$      \\\\\n  \\end{tabular}\n  \\caption{Components of the tensor in Exercise 15.}\n  \\label{tab:ch5-ex15}\n\\end{table}\n\n\n\\textbf{16}\nRepeat the steps leading from Equation 5.74 to 5.75.\n\nRecalling that $g_{\\alpha\\mu;\\beta} = 0$, we can rewrite Equation 5.72 as\n%\n\\begin{displaymath}\n  g_{\\alpha\\beta,\\mu} =\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\nu\\beta} +\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu}.\n\\end{displaymath}\n%\nNow if we switch the $\\beta$ and $\\mu$ indices, and then switch the $\\alpha$ and $\\beta$ indices, we get two more equations,\n%\n\\begin{align*}\n  g_{\\alpha\\mu,\\beta} &=\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu} +\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}} g_{\\alpha\\nu},\n  \\\\*\n  g_{\\beta\\mu,\\alpha} &=\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g_{\\nu\\mu} +\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\alpha}} g_{\\beta\\nu}.\n\\end{align*}\n%\nNow we add the first two equations and subtract the third, getting\n%\n\\begin{align*}\n  g_{\\alpha\\beta,\\mu} + g_{\\alpha\\mu,\\beta} - g_{\\beta\\mu,\\alpha} &=\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\nu\\beta} +\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu} +\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu} +\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}} g_{\\alpha\\nu} -\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g_{\\nu\\mu} -\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\alpha}} g_{\\beta\\nu}\n  \\\\ &=\n  {\\color{red}\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\beta\\nu}} +\n  {\\color{green}\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu}} +\n  {\\color{blue}\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu}} +\n  {\\color{green}\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu}} -\n  {\\color{blue}\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu}} -\n  {\\color{red}\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\beta\\nu}}\n  \\\\ &=\n  2 \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu}.\n\\end{align*}\n%\nMatching colors mark the pairs that cancel, after using the symmetry of the Christoffel symbols in their lower indices. Recalling that $g^{\\alpha\\gamma} g_{\\alpha\\nu} = \\tensor{g}{^\\gamma_\\nu} = \\tensor{\\delta}{^\\gamma_\\nu}$, we divide both sides by $2$ and multiply by $g^{\\alpha\\gamma}$, arriving at Equation 5.75:\n%\n\\begin{align*}\n  \\frac{1}{2}\n  g^{\\alpha\\gamma}\n  (g_{\\alpha\\beta,\\mu} + g_{\\alpha\\mu,\\beta} - g_{\\beta\\mu,\\alpha}) &=\n  \\frac{2}{2}\n  g^{\\alpha\\gamma} g_{\\alpha\\nu}\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}}\n  \\\\ &=\n  \\tensor{\\Gamma}{^\\gamma_{\\beta\\mu}}\n\\end{align*}\n\n\n\\textbf{17}\nShow how $\\tensor{V}{^\\beta_{,\\alpha}}$ and $V^\\mu \\tensor{\\Gamma}{^\\beta_{\\mu\\alpha}}$ transform under change of coordinates. 
Neither follows a tensor transformation law, but their \\emph{sum} does.\n\n\\begin{align*}\n  \\tensor{V}{^{\\alpha'}_{,\\beta'}} &=\n  \\pdv{V^{\\alpha'}}{x^{\\beta'}} =\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}}\n  \\pdv{x^\\beta} \\qty[ \\tensor{\\Lambda}{^{\\alpha'}_\\alpha} V^\\alpha ]\n  \\\\ &=\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}} \\qty[\n    V^\\alpha \\pdv{x^\\beta} \\tensor{\\Lambda}{^{\\alpha'}_\\alpha} +\n    \\tensor{\\Lambda}{^{\\alpha'}_\\alpha} \\pdv{x^\\beta} V^\\alpha\n  ]\n  \\\\ &=\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}} V^\\alpha\n  \\tensor{\\Lambda}{^{\\alpha'}_{\\alpha,\\beta}} +\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}} \\tensor{\\Lambda}{^{\\alpha'}_\\alpha}\n  \\tensor{V}{^\\alpha_{,\\beta}}\n  \\\\ &\\neq\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}} \\tensor{\\Lambda}{^{\\alpha'}_\\alpha}\n  \\tensor{V}{^\\alpha_{,\\beta}}\n  \\\\\n  \\pdv{\\vec{e}_{\\alpha'}}{x^{\\beta'}} &=\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}}\n  \\pdv{x^\\beta} \\qty[ \\tensor{\\Lambda}{^\\alpha_{\\alpha'}} \\vec{e}_\\alpha ]\n  \\\\ &=\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}} \\qty[\n    \\tensor{\\Lambda}{^\\alpha_{\\alpha'}} \\pdv{x^\\beta} \\vec{e}_\\alpha +\n    \\vec{e}_\\alpha \\pdv{x^\\beta} \\tensor{\\Lambda}{^\\alpha_{\\alpha'}}\n  ]\n  \\\\ &=\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}}\n  \\tensor{\\Lambda}{^\\alpha_{\\alpha'}}\n  \\tensor{\\Gamma}{^\\mu_{\\alpha\\beta}} \\vec{e}_\\mu +\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}}\n  \\tensor{\\Lambda}{^\\alpha_{\\alpha',\\beta}}\n  \\vec{e}_\\alpha\n  \\\\ &\\neq\n  \\tensor{\\Lambda}{^\\beta_{\\beta'}}\n  \\tensor{\\Lambda}{^\\alpha_{\\alpha'}}\n  \\tensor{\\Gamma}{^\\mu_{\\alpha\\beta}} \\vec{e}_\\mu,\n\\end{align*}\n%\nso we have shown that $\\pdv*{\\vec{e}_{\\alpha'}}{x^{\\beta'}}$ does not transform like a tensor, and since $\\tensor{V}{^\\mu}$ does, the contraction $V^\\mu \\tensor{\\Gamma}{^\\beta_{\\mu\\alpha}}$ cannot transform like a tensor either.\n\nAccording to Carroll, the precise transformation is\n%\n\\begin{displaymath}\n  \\tensor{\\Gamma}{^{\\nu'}_{\\mu'\\lambda'}} =\n  \\tensor{\\Lambda}{^\\mu_{\\mu'}}\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  \\tensor{\\Lambda}{^{\\nu'}_\\nu}\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\lambda}} -\n  \\tensor{\\Lambda}{^\\mu_{\\mu'}}\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  \\tensor{\\Lambda}{^{\\nu'}_{\\mu,\\lambda}},\n\\end{displaymath}\n%\nwhere $\\tensor{\\Lambda}{^{\\nu'}_{\\mu,\\lambda}}$ is the second derivative of the coordinate transformation; note the relative minus sign on the inhomogeneous term. Now we add the two expressions, in order to show that their sum transforms like a tensor:\n%\n\\begin{align*}\n  \\tensor{V}{^{\\nu'}_{,\\lambda'}} +\n  V^{\\mu'} \\tensor{\\Gamma}{^{\\nu'}_{\\mu'\\lambda'}} &=\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  V^\\nu \\tensor{\\Lambda}{^{\\nu'}_{\\nu,\\lambda}} +\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  \\tensor{\\Lambda}{^{\\nu'}_\\nu} \\tensor{V}{^\\nu_{,\\lambda}} +\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  \\tensor{\\Lambda}{^{\\nu'}_\\nu} V^\\mu\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\lambda}} -\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  V^\\nu \\tensor{\\Lambda}{^{\\nu'}_{\\nu,\\lambda}}\n  \\\\ &=\n  \\tensor{\\Lambda}{^\\lambda_{\\lambda'}}\n  \\tensor{\\Lambda}{^{\\nu'}_\\nu}\n  \\qty(\n    \\tensor{V}{^\\nu_{,\\lambda}} +\n    V^\\mu \\tensor{\\Gamma}{^\\nu_{\\mu\\lambda}}\n  )\n\\end{align*}\n%\nThe first and last terms cancelled after renaming the dummy index, since the mixed second derivative is symmetric in its lower indices. So it does in fact transform like a tensor equation, meaning $\\tensor{V}{^\\nu_{;\\lambda}}$ is a tensor!\n\n\n\\textbf{18}\n\nVerify Equation 5.78:\n%\n\\begin{displaymath}\n  \\left.\n  \\begin{array}{r}\n    \\vec{e}_{\\hat\\alpha} \\cdot \\vec{e}_{\\hat\\beta} 
\\equiv\n    g_{\\hat\\alpha \\hat\\beta} =\n    \\delta_{\\hat\\alpha \\hat\\beta}\n    \\\\\n    \\tilde\\omega^{\\hat\\alpha} \\cdot \\tilde\\omega^{\\hat\\beta} \\equiv\n    g^{\\hat\\alpha \\hat\\beta} =\n    \\delta^{\\hat\\alpha \\hat\\beta}\n  \\end{array}\n  \\right\\}\n\\end{displaymath}\n\nFor the basis \\emph{vectors}, we have\n%\n\\begin{align*}\n  g_{\\hat{r} \\hat{r}} &=\n  \\vec{e}_{\\hat{r}} \\cdot \\vec{e}_{\\hat{r}} =\n  \\vec{e}_r \\cdot \\vec{e}_r =\n  g_{rr} =\n  1\n  \\\\\n  g_{\\hat\\theta \\hat\\theta} &=\n  \\vec{e}_{\\hat\\theta} \\cdot \\vec{e}_{\\hat\\theta} =\n  \\qty(\\frac{1}{r} \\vec{e}_\\theta) \\cdot \\qty(\\frac{1}{r} \\vec{e}_\\theta) =\n  \\frac{1}{r^2} (\\vec{e}_\\theta \\cdot \\vec{e}_\\theta) =\n  \\frac{1}{r^2} g_{\\theta\\theta} =\n  1\n  \\\\\n  g_{\\hat{r} \\hat\\theta} &=\n  \\vec{e}_{\\hat{r}} \\cdot \\vec{e}_{\\hat\\theta} =\n  \\vec{e}_r \\cdot \\qty(\\frac{1}{r} \\vec{e}_\\theta) =\n  \\frac{1}{r} (\\vec{e}_r \\cdot \\vec{e}_\\theta) =\n  \\frac{1}{r} g_{r\\theta} =\n  0\n  \\\\\n  g_{\\hat\\theta \\hat{r}} &=\n  g_{\\hat{r} \\hat\\theta} =\n  0\n\\end{align*}\n%\nSo it is indeed true that $g_{\\hat\\alpha \\hat\\beta} = \\delta_{\\hat\\alpha \\hat\\beta}$.\n\nNow for the basis \\emph{one-forms}, we have\n%\n\\begin{align*}\n  g^{\\hat{r} \\hat{r}} &=\n  \\tilde\\omega^{\\hat{r}} \\cdot \\tilde\\omega^{\\hat{r}} =\n  \\tilde\\dd{r} \\cdot \\tilde\\dd{r} =\n  g^{rr} =\n  1\n  \\\\\n  g^{\\hat\\theta \\hat\\theta} &=\n  \\tilde\\omega^{\\hat\\theta} \\cdot \\tilde\\omega^{\\hat\\theta} =\n  (r \\tilde\\dd\\theta) \\cdot (r \\tilde\\dd\\theta) =\n  r^2 (\\tilde\\dd\\theta \\cdot \\tilde\\dd\\theta) =\n  r^2 g^{\\theta\\theta} =\n  r^2 (1/r^2) =\n  1\n  \\\\\n  g^{\\hat{r} \\hat\\theta} &=\n  \\tilde\\omega^{\\hat{r}} \\cdot \\tilde\\omega^{\\hat\\theta} =\n  \\tilde\\dd{r} \\cdot (r \\tilde\\dd\\theta) =\n  r (\\tilde\\dd{r} \\cdot \\tilde\\dd\\theta) =\n  r g^{r\\theta} =\n  0\n  \\\\\n  g^{\\hat\\theta \\hat{r}} &=\n  g^{\\hat{r} \\hat\\theta} =\n  0\n\\end{align*}\n%\nSo it is indeed true that $g^{\\hat\\alpha \\hat\\beta} = \\delta^{\\hat\\alpha \\hat\\beta}$.\n\n\n\n\\textbf{19}\nRepeat the calculations going from Equations 5.81 to 5.84, with $\\tilde\\dd{r}$ and $\\tilde\\dd\\theta$ as your bases. 
Show that they form a coordinate basis.\n\n\\begin{align*}\n  \\tilde\\dd{r} &=\n  \\cos\\theta \\tilde\\dd{x} + \\sin\\theta \\tilde\\dd{y} =\n  \\pdv{\\xi}{x} \\tilde\\dd{x} + \\pdv{\\xi}{y} \\tilde\\dd{y}\n  \\\\\n  \\pdv{\\xi}{x} &=\n  \\cos\\theta; \\quad\n  \\pdv{\\xi}{y} =\n  \\sin\\theta\n  \\\\\n  \\pdv{y} \\pdv{\\xi}{x} &=\n  \\pdv{x} \\pdv{\\xi}{y} \\implies\n  \\pdv{y} (x/r) = \\pdv{x} (y/r),\n\\end{align*}\n%\nwhich is true, so we have shown that at least $\\tilde\\dd{r}$ may be part of a coordinate basis.\n%\n\\begin{align*}\n  \\tilde\\dd\\theta &=\n  -\\frac{1}{r} \\sin\\theta \\tilde\\dd{x} +\n   \\frac{1}{r} \\cos\\theta \\tilde\\dd{y} =\n  \\pdv{\\eta}{x} \\tilde\\dd{x} +\n  \\pdv{\\eta}{y} \\tilde\\dd{y}\n  \\\\\n  \\pdv{\\eta}{x} &=\n  -\\frac{1}{r} \\sin\\theta; \\quad\n  \\pdv{\\eta}{y} =\n   \\frac{1}{r} \\cos\\theta\n  \\\\\n  \\pdv{y} \\pdv{\\eta}{x} &=\n  \\pdv{x} \\pdv{\\eta}{y} \\implies\n  \\pdv{y} \\qty[ -\\frac{1}{r} \\sin\\theta ] =\n  \\pdv{x} \\qty[  \\frac{1}{r} \\cos\\theta ],\n\\end{align*}\n%\nwhich is also true, and thus we have shown that $\\tilde\\dd{r}$ and $\\tilde\\dd\\theta$ form a coordinate basis.\n\n\n\\textbf{20}\nFor a non-coordinate basis $\\{ \\vec{e}_\\mu \\}$, let $\\tensor{c}{^\\alpha_{\\mu\\nu}} \\vec{e}_\\alpha = \\grad_{\\vec{e}_\\mu} \\vec{e}_\\nu - \\grad_{\\vec{e}_\\nu} \\vec{e}_\\mu$. Use this in place of Equation 5.74 to derive a more general expression for Equation 5.75.\n\n$\\boldsymbol{c}$ is antisymmetric w.r.t. its bottom indices.\n%\n\\begin{align*}\n  \\tensor{c}{^\\alpha_{\\mu\\nu}} \\vec{e}_\\alpha +\n  \\tensor{c}{^\\alpha_{\\nu\\mu}} \\vec{e}_\\alpha &=\n  (\\grad_{\\vec{e}_\\mu} \\vec{e}_\\nu - \\grad_{\\vec{e}_\\nu} \\vec{e}_\\mu) +\n  (\\grad_{\\vec{e}_\\nu} \\vec{e}_\\mu - \\grad_{\\vec{e}_\\mu} \\vec{e}_\\nu) =\n  0\n  \\\\ \\implies\n   \\tensor{c}{^\\alpha_{\\mu\\nu}} \\vec{e}_\\alpha &=\n  -\\tensor{c}{^\\alpha_{\\nu\\mu}} \\vec{e}_\\alpha\n  \\\\ \\implies\n   \\tensor{c}{^\\alpha_{\\mu\\nu}} &=\n  -\\tensor{c}{^\\alpha_{\\nu\\mu}}\n\\end{align*}\n%\nExpanding the covariant derivatives in the original expression, we get\n\\begin{align*}\n  \\tensor{c}{^\\alpha_{\\mu\\nu}} \\vec{e}_\\alpha &=\n  \\vec{e}_{\\nu;\\mu} - \\vec{e}_{\\mu;\\nu}\n  \\\\ &=\n  (\\vec{e}_{\\nu,\\mu} - \\vec{e}_\\alpha \\tensor{\\Gamma}{^\\alpha_{\\nu\\mu}}) -\n  (\\vec{e}_{\\mu,\\nu} - \\vec{e}_\\alpha \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}})\n  \\\\ &=\n  \\vec{e}_\\alpha\n  (\\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} - \\tensor{\\Gamma}{^\\alpha_{\\nu\\mu}})\n  \\\\\n  \\tensor{c}{^\\alpha_{\\mu\\nu}} &=\n  \\tensor{\\Gamma}{^\\alpha_{\\mu\\nu}} - \\tensor{\\Gamma}{^\\alpha_{\\nu\\mu}}\n\\end{align*}\n%\nNow we recall the result from Exercise 16, but without assuming symmetry of the Christoffel symbols\n%\n\\begin{align*}\n  g_{\\alpha\\beta,\\mu} + g_{\\alpha\\mu,\\beta} - g_{\\beta\\mu,\\alpha} &=\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\nu\\beta} +\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu} +\n  \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu} +\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}} g_{\\alpha\\nu} -\n  \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g_{\\nu\\mu} -\n  \\tensor{\\Gamma}{^\\nu_{\\mu\\alpha}} g_{\\beta\\nu}\n  \\\\ &=\n  {\\color{red} \\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} g_{\\beta\\nu}} +\n  {\\color{green} \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} g_{\\alpha\\nu}} +\n  {\\color{blue} \\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} g_{\\nu\\mu}} +\n  {\\color{green} \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}} g_{\\alpha\\nu}} -\n  
{\\color{blue} \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}} g_{\\nu\\mu}} -\n  {\\color{red} \\tensor{\\Gamma}{^\\nu_{\\mu\\alpha}} g_{\\beta\\nu}}\n  \\\\ &=\n  g_{\\beta\\nu}\n  (\\tensor{\\Gamma}{^\\nu_{\\alpha\\mu}} - \\tensor{\\Gamma}{^\\nu_{\\mu\\alpha}}) +\n  g_{\\alpha\\nu}\n  (\\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} + \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}}) +\n  g_{\\nu\\mu}\n  (\\tensor{\\Gamma}{^\\nu_{\\alpha\\beta}} - \\tensor{\\Gamma}{^\\nu_{\\beta\\alpha}})\n  \\\\ &=\n  g_{\\beta\\nu} \\tensor{c}{^\\nu_{\\alpha\\mu}} +\n  g_{\\alpha\\nu}\n  (\\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} +\n   \\tensor{\\Gamma}{^\\nu_{\\mu\\beta}} +\n   \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} -\n   \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}}) +\n  g_{\\nu\\mu} \\tensor{c}{^\\nu_{\\alpha\\beta}}\n  \\\\ &=\n  g_{\\beta\\nu} \\tensor{c}{^\\nu_{\\alpha\\mu}} +\n  g_{\\nu\\mu} \\tensor{c}{^\\nu_{\\alpha\\beta}} +\n  g_{\\alpha\\nu}\n  (2 \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} + \\tensor{c}{^\\nu_{\\mu\\beta}})\n  \\\\\n  2 g_{\\alpha\\nu} \\tensor{\\Gamma}{^\\nu_{\\beta\\mu}} &=\n  g_{\\alpha\\beta,\\mu} +\n  g_{\\alpha\\mu,\\beta} -\n  g_{\\beta\\mu,\\alpha} -\n  c_{\\beta\\alpha\\mu} -\n  c_{\\mu\\alpha\\beta} -\n  c_{\\alpha\\mu\\beta}\n  \\\\\n  \\tensor{\\Gamma}{^\\gamma_{\\beta\\mu}} &=\n  \\frac{1}{2}\n  g^{\\alpha\\gamma} (\n    g_{\\alpha\\beta,\\mu} +\n    g_{\\alpha\\mu,\\beta} -\n    g_{\\beta\\mu,\\alpha} -\n    c_{\\beta\\alpha\\mu}  -\n    c_{\\mu\\alpha\\beta}  -\n    c_{\\alpha\\mu\\beta}\n  ),\n\\end{align*}\n%\nwhere indices on $c$ have been lowered with the metric, e.g. $c_{\\beta\\alpha\\mu} = g_{\\beta\\nu} \\tensor{c}{^\\nu_{\\alpha\\mu}}$, and in the last step we contracted with $g^{\\alpha\\gamma}$.\n\n\n\n\n\\textbf{21}\nA uniformly accelerated observer has world line\n%\n\\begin{displaymath}\n  t(\\lambda) = a \\sinh\\lambda,\n  \\quad\n  x(\\lambda) = a \\cosh\\lambda\n\\end{displaymath}\n\n(a) Show that the tangent to his world line (which is parameterized by $\\lambda$) is orthogonal to the spacelike line parameterized by $a$.\n\nThe line tangent to his world line is\n%\n\\begin{displaymath}\n  \\vec{V} \\to\n  \\dv{\\lambda} (t, x) =\n  (a \\cosh\\lambda, a \\sinh\\lambda).\n\\end{displaymath}\n%\nThe line parameterized by $a$ is\n%\n\\begin{displaymath}\n  \\vec{W} \\to\n  \\dv{a} (t, x) =\n  (\\sinh\\lambda, \\cosh\\lambda)\n\\end{displaymath}\n%\nIf they are orthogonal, then their dot product must be zero\n%\n\\begin{displaymath}\n  \\vec{V} \\cdot \\vec{W} =\n  -(a \\cosh\\lambda \\sinh\\lambda) + (a \\sinh\\lambda \\cosh\\lambda) =\n  0,\n\\end{displaymath}\n%\nwhich it is.\n\n(b) To prove that this defines a valid coordinate transform from $(\\lambda,a)$ to $(t,x)$, we show that the determinant of the transformation matrix is non-zero.\n%\n\\begin{align*}\n  \\det\\mqty( \\pdv*{t}{\\lambda} & \\pdv*{t}{a} \\\\\n             \\pdv*{x}{\\lambda} & \\pdv*{x}{a} ) &=\n  \\pdv{t}{\\lambda} \\pdv{x}{a} - \\pdv{t}{a} \\pdv{x}{\\lambda}\n  \\\\ &=\n  a \\cosh^2\\lambda - a \\sinh^2\\lambda = a\n  \\\\ &\\neq 0,\n\\end{align*}\n%\nand so it is indeed a valid coordinate transform.\n\nTo plot the curves parameterized by $a$, we take\n%\n\\begin{align*}\n  -t^2 + x^2 &=\n  a^2 (\\cosh^2\\lambda - \\sinh^2\\lambda)\n  \\\\ &=\n  a^2,\n\\end{align*}\n%\nwhich gives us a family of timelike hyperbolae (the observers' world lines), depending on the chosen value of $a$.\n\nTo plot the curves parameterized by $\\lambda$, we take\n%\n\\begin{align*}\n  x &= a \\cosh\\lambda \\implies a = x / \\cosh\\lambda\n  \\\\\n  t &= a \\sinh\\lambda = x \\sinh\\lambda / \\cosh\\lambda = x \\tanh\\lambda,\n\\end{align*}\n%\nwhich gives us a family of space-like lines, depending on the chosen 
value of $\\lambda$.\n\nA plot of these curves is given in Figure \\ref{fig:ch5-problem-21}, from which it is clear that only half of the $t$--$x$ plane is covered.\n\nWhen $\\abs{t} = \\abs{x}$, then $a = 0$, since $-t^2 + x^2 = a^2$. We already found that the determinant of the coordinate transformation is $a$, so this would make the determinant $0$, making it singular.\n\n\\begin{figure}[ht]\n  \\centering\n  \\includegraphics[width=0.75\\textwidth]{img/ch5_problem_21}\n  \\caption{Lines of constant $\\lambda$ and $a$ in Problem 21.}\n  \\label{fig:ch5-problem-21}\n\\end{figure}\n\n\n(c) Find the metric tensor and Christoffel symbols in $(\\lambda,a)$ coordinates.\n\nFirst we find the basis vectors:\n%\n\\begin{align*}\n  \\vec{e}_\\lambda &=\n  a (\\cosh\\lambda \\vec{e}_t + \\sinh\\lambda \\vec{e}_x),\n  \\\\\n  \\vec{e}_a &=\n  \\sinh\\lambda \\vec{e}_t + \\cosh\\lambda \\vec{e}_x.\n\\end{align*}\n\nNow we find the components of the metric tensor $\\vb{g}$ as\n%\n\\begin{align*}\n  g_{\\lambda\\lambda} &=\n  a^2\n  ( \\cosh\\lambda \\vec{e}_t +\n    \\sinh\\lambda \\vec{e}_x )^2\n  \\\\ &=\n  a^2\n  ( \\cosh^2\\lambda \\eta_{tt} +\n    \\sinh^2\\lambda \\eta_{xx} +\n    2 \\sinh\\lambda \\cosh\\lambda \\eta_{tx} )\n  \\\\ &=\n  a^2 (\\sinh^2\\lambda - \\cosh^2\\lambda)\n  \\\\ &=\n  -a^2\n  \\\\\n  g_{aa} &=\n  (\\sinh\\lambda \\vec{e}_t + \\cosh\\lambda \\vec{e}_x)^2\n  \\\\ &=\n  \\sinh^2\\lambda \\eta_{tt} +\n  \\cosh^2\\lambda \\eta_{xx} +\n  2 \\sinh\\lambda \\cosh\\lambda \\eta_{tx}\n  \\\\ &=\n  1\n  \\\\\n  g_{\\lambda a} = g_{a \\lambda} &=\n  a (\\cosh\\lambda \\vec{e}_t + \\sinh\\lambda \\vec{e}_x)\n    (\\sinh\\lambda \\vec{e}_t + \\cosh\\lambda \\vec{e}_x)\n  \\\\ &=\n  a\n  ( \\cosh\\lambda \\sinh\\lambda (\\eta_{tt} + \\eta_{xx}) +\n    (\\cosh^2\\lambda + \\sinh^2\\lambda) \\eta_{tx} )\n  \\\\ &=\n  0\n  \\\\\n  \\vb{g} &\\underset{(\\lambda,a)}{\\to}\n  \\mqty( -a^2 & 0 \\\\ 0 & 1 )\n\\end{align*}\n%\nNow for the Christoffel symbols, since we know this is a coordinate basis, we can use\n%\n\\begin{displaymath}\n  \\tensor{\\Gamma}{^\\gamma_{\\beta \\mu}} =\n  \\frac{1}{2} g^{\\alpha \\gamma}\n  (g_{\\alpha \\beta , \\mu} + g_{\\alpha \\mu , \\beta} - g_{\\beta \\mu , \\alpha})\n\\end{displaymath}\n%\n\\begin{align*}\n  \\tensor{\\Gamma}{^\\lambda_{\\lambda \\lambda}} &=\n  \\frac{1}{2} g^{\\alpha \\lambda}\n  (g_{\\alpha \\lambda , \\lambda} + g_{\\alpha \\lambda , \\lambda} - g_{\\lambda \\lambda , \\alpha}) =\n  \\frac{1}{2} g^{a\\lambda} (-g_{\\lambda\\lambda,a})\n  \\\\ &=\n  0\n  \\\\\n  \\tensor{\\Gamma}{^a_{a a}} &=\n  \\frac{1}{2} g^{\\alpha a}\n  (g_{\\alpha a , a} + g_{\\alpha a , a} - g_{a a , \\alpha})\n  \\\\ &=\n  0\n  \\\\\n  \\tensor{\\Gamma}{^\\lambda_{\\lambda a}} &=\n  \\frac{1}{2} g^{\\alpha \\lambda}\n  (g_{\\alpha \\lambda , a} + g_{\\alpha a , \\lambda} - g_{\\lambda a , \\alpha}) =\n  \\frac{1}{2} g^{\\lambda\\lambda} g_{\\lambda\\lambda,a} =\n  \\frac{1}{2} (-a^{-2}) (-2a)\n  \\\\ &=\n  1/a\n  \\\\\n  \\tensor{\\Gamma}{^a_{\\lambda a}} &=\n  \\frac{1}{2} g^{\\alpha a}\n  (g_{\\alpha \\lambda , a} + g_{\\alpha a , \\lambda} - g_{\\lambda a , \\alpha}) =\n  \\frac{1}{2} g^{\\lambda a} g_{\\lambda\\lambda,a}\n  \\\\ &=\n  0\n  \\\\\n  \\tensor{\\Gamma}{^\\lambda_{a a}} &=\n  \\frac{1}{2} g^{\\alpha \\lambda}\n  (g_{\\alpha a , a} + g_{\\alpha a , a} - g_{a a , \\alpha})\n  \\\\ &=\n  0\n  \\\\\n  \\tensor{\\Gamma}{^a_{\\lambda \\lambda}} &=\n  \\frac{1}{2} g^{\\alpha a}\n  (g_{\\alpha \\lambda , \\lambda} + g_{\\alpha \\lambda , \\lambda} - 
g_{\\lambda \\lambda , \\alpha}) =\n  \\frac{1}{2} g^{aa} (-g_{\\lambda\\lambda,a}) =\n  \\frac{1}{2} \\cdot 2 \\cdot a\n  \\\\ &=\n  a\n\\end{align*}\n\n\n\n\n\\textbf{22}\n%\n\\begin{align*}\n  U^\\alpha \\grad_\\alpha V^\\beta = W^\\beta &\\implies\n  U^\\alpha \\tensor{V}{^\\gamma_{;\\alpha}} = W^\\gamma\n  \\\\ &\\implies\n  g_{\\gamma\\beta} U^\\alpha \\tensor{V}{^\\gamma_{;\\alpha}} =\n  g_{\\gamma\\beta} W^\\gamma\n  \\\\ &\\implies\n  U^\\alpha V_{\\beta;\\alpha} = W_\\beta\n  \\\\ &\\implies\n  U^\\alpha \\grad_\\alpha V_\\beta = W_\\beta\n\\end{align*}\n%\nIn the third step we used $g_{\\gamma\\beta;\\alpha} = 0$ to move the metric inside the covariant derivative and lower the index on $V$.\n\n\n\\end{document}", "meta": {"hexsha": "7438652e082e8d2a15c66f90643c41608d6c901e", "size": 43538, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/textbook/tex/gr-ch5-notes.tex", "max_stars_repo_name": "dwysocki/ASTP-760", "max_stars_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/textbook/tex/gr-ch5-notes.tex", "max_issues_repo_name": "dwysocki/ASTP-760", "max_issues_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/textbook/tex/gr-ch5-notes.tex", "max_forks_repo_name": "dwysocki/ASTP-760", "max_forks_repo_head_hexsha": "da116f346bfa40fa3f9620782a2c85e9e9654e37", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.3674351585, "max_line_length": 577, "alphanum_fraction": 0.5588221783, "num_tokens": 17952, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5658552116176293}}
{"text": "\\documentclass[a4paper, 11 pt, article, accentcolor=tud7b]{tudreport}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\n\\title{CNuVS Exercise 10}\n\\author{Nils Rollshausen, Daniel Drodt}\n\\subtitle{Nils Rollshausen, Daniel Drodt}\n\n\\begin{document}\n\t\\maketitle\n\t\\section{M/M/1 Queues}\n\t$\\lambda = 20, \\mu = \\frac{1000ms}{40ms/r} = 25$\n\t\\subsection*{a) System time}\n\t$T = \\frac{1}{\\mu - \\lambda} = \\frac{1}{5} = 0.2s$\n\t\n\t\\subsection*{b) Average number of requests}\n\t$N = \\lambda \\cdot T = 20 \\cdot 0.2 = 4$\n\t\n\t\\subsection*{c) System utilization}\n\t$\\rho = \\frac{\\lambda}{\\mu} = \\frac{20}{25} = 0.8 = 80\\%$\n\t\n\t\\subsection*{d) Average idle time in 15min}\n\t$t_{idle} = 15 \\cdot 60 \\cdot (1 - \\rho) = 900 \\cdot 0.2 = 180s$\n\t\n\t\\subsection*{e) Probability of 3 requests}\n\t$p_{3} = (1 - \\rho) \\cdot \\rho^3 = 0.2 \\cdot 0.8^3 = 0.1024 = 10.24\\%$\n\t\n\t\\section{M/M/m Queues I}\n\t\n\t$\\lambda_{1} = 0.1, \\lambda_{2} = 0.2, \\lambda_{3} = 0.3, \\mu = \\frac{1}{2.5} = 0.4$\n\t\n\t\\subsection*{a) Utilization}\n\t$$\\rho_{1} = \\frac{0.1}{0.4} = 0.25$$ \n\t$$\\rho_{2} = \\frac{0.2}{0.4} = 0.5 $$\n\t$$\\rho_{3} = \\frac{0.3}{0.4} = 0.75$$\n\t\n\t\\subsection*{b) Idle probability}\n\t$$p_{0} = (1-\\rho_{1}) \\cdot (1-\\rho_{2}) \\cdot (1-\\rho_{3}) = 0.75 \\cdot 0.5 \\cdot 0.25 = 0.09375 = 9.375\\%$$\n\t\n\t\\subsection*{c) System time}\n\t$$T_{1} = \\frac{1}{0.4 - 0.1} = \\frac{10}{3} \\approx 3.3333m$$\n\t$$T_{2} = \\frac{1}{0.4 - 0.2} = 5m$$\n\t$$T_{3} = \\frac{1}{0.4 - 0.3} = 10m$$\n\t\n\t\\subsection*{d) Number of jobs}\n  $$N_{1} = 0.1 \\cdot T_{1} = \\frac{1}{3} \\approx 0.3333$$\n\t$$N_{2} = 0.2 \\cdot T_{2} = 0.2 \\cdot 5 = 1$$\n\t$$N_{3} = 0.3 \\cdot T_{3} = 0.3 \\cdot 10 = 3$$\n\t\n\t\\section{M/M/m Queues II}\n\t\n\t\\subsection*{a) Arrival rate and utilization}\n\t$$\\lambda = \\lambda_{1} + \\lambda_{2} + \\lambda_{3} = 0.1 + 0.2 + 0.3 = 0.6$$\n\t$$\\rho = \\frac{\\lambda}{m \\cdot \\mu} = \\frac{0.6}{3 \\cdot 0.4} = 0.5 = 50\\%$$\n\t\n\t\\subsection*{b) Probability of empty system}\n\t\\begin{align*}\n\t  p_{0} &= \\frac{1}{(\\sum_{k=0}^{m-1} \\frac{(m \\cdot \\rho)^{k}}{k!}) + \\frac{(m \\cdot \\rho)^{m}}{m!} \\cdot \\frac{1}{1 - \\rho}} \\\\\n\t        &= \\frac{1}{(\\sum_{k=0}^{2} \\frac{(3 \\cdot 0.5)^{k}}{k!}) + \\frac{(3 \\cdot 0.5)^{3}}{6} \\cdot 2} \\\\\n\t        &= \\frac{1}{1 + 1.5 + \\frac{9}{8} + \\frac{9}{8}} \\\\\n\t        &= \\frac{1}{4.75} \\approx 0.2105\n\t\\end{align*}\n\t\n\t\\subsection*{c) Average system time}\n\t$$\\delta = p_{0} \\cdot \\frac{(m \\cdot \\rho)^{m}}{m! \\cdot (1-\\rho)} = p_{0} \\cdot \\frac{1.5^3}{3} = 1.125 \\cdot p_{0} \\approx 0.2368 $$\n\t$$ T = \\frac{1}{\\mu} \\cdot (1 + \\frac{\\delta}{m \\cdot (1-\\rho)}) = 2.5 \\cdot (1 + \\frac{0.2368}{1.5}) \\approx 2.8947m$$\n\t\n\t\\subsection*{d) Average number of jobs}\n\t$$N = m \\cdot \\rho + \\frac{\\rho \\cdot \\delta}{1 - \\rho} = 1.5 + \\frac{0.5 \\cdot 0.2368}{0.5} = 1.5 + 0.2368 \\approx 1.7368$$\n\t\n\t\\section{Queueing Theory}\n\t\n\t\\subsection*{a) M/M/1 Queues}\n\tA M/M/1 queue describes a queue with a single serving unit, where both customer interarrival times and serving time are exponentially distributed. As such, they are useful models for most simple real-world queueing scenarios.\n\t\n\t\\subsection*{b) The Markov Process}\n\tA Markov process is a discrete stochastic process in which the probabilities for the next state depend only on the current state. 
Additionally, a Markov process requires that the process is memoryless (i.e. the probability of a state change does not change with the time already spent in that state), homogeneous, and that process times are exponentially (or geometrically) distributed.\n\t\n\t\\subsection*{c) Post Office}\n\tA post office with four cash desks is probably best modeled by an M/M/4 queue. Even if separate queues exist for each counter, customers most likely switch queues when any desk is idle and do not queue at every counter independently. Thus, the customers behave as if they were waiting in a single queue, even if there are physically separate lines at each counter.\n\t\n\t\\subsection*{d) Birth-Death processes}\n\tBirth-Death processes are a specialization of Markov processes in which, from any state $k$, the process can only move to an adjacent state (i.e. $k \\rightarrow k+1$ or $k \\rightarrow k-1$).\n\t\\end{document}\n", "meta": {"hexsha": "e75db95b913f4d7b7c2120042d7e23cf112b0c75", "size": 4077, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ex10/ex10.tex", "max_stars_repo_name": "rec0de/CNuVS19", "max_stars_repo_head_hexsha": "52d07fe5c4380af707c63f718aa9533044224a3e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ex10/ex10.tex", "max_issues_repo_name": "rec0de/CNuVS19", "max_issues_repo_head_hexsha": "52d07fe5c4380af707c63f718aa9533044224a3e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex10/ex10.tex", "max_forks_repo_name": "rec0de/CNuVS19", "max_forks_repo_head_hexsha": "52d07fe5c4380af707c63f718aa9533044224a3e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4069767442, "max_line_length": 378, "alphanum_fraction": 0.6244787834, "num_tokens": 1642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.5658551986873236}}
{"text": "\\section{Dynamic Programming}\n\\subsection{LIS $O(n\\log{n})$}\n\\lstinputlisting{\"./dynamicProgramming/lis.cc\"}\n\\subsection{LCS $O(n\\log{n})$}\n\\lstinputlisting{\"./dynamicProgramming/lcs.cc\"}\n\\subsection{Sparse-Table}\n\\lstinputlisting{\"./dynamicProgramming/sparse-table.c\"}\n\\subsection{Improved by quadrilateral inequality}\n\\lstinputlisting{\"./dynamicProgramming/inequ.cc\"}\n\\subsection{Improved by Slope}\n\\lstinputlisting{\"./dynamicProgramming/slope.cc\"}\n", "meta": {"hexsha": "5b3e38d013d025dc459fbc9e3a53f46ae6404bc3", "size": 451, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dynamicProgramming.tex", "max_stars_repo_name": "Abreto/acm-icpc-template", "max_stars_repo_head_hexsha": "43552abf6d03aa5958dfca785aa538548a0e563b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dynamicProgramming.tex", "max_issues_repo_name": "Abreto/acm-icpc-template", "max_issues_repo_head_hexsha": "43552abf6d03aa5958dfca785aa538548a0e563b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dynamicProgramming.tex", "max_forks_repo_name": "Abreto/acm-icpc-template", "max_forks_repo_head_hexsha": "43552abf6d03aa5958dfca785aa538548a0e563b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.5833333333, "max_line_length": 55, "alphanum_fraction": 0.7827050998, "num_tokens": 120, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424295406087, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.5658341869462764}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amssymb,amsfonts,amsthm,amsmath}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[usenames,dvipsnames,svgnames]{xcolor}\n\\usepackage{hyperref,url}\n\\hypersetup{\n    colorlinks=true,\n    citecolor=MidnightBlue,\n    urlcolor=Bittersweet,\n}\n\\newcommand{\\<}{\\langle}\n\\renewcommand{\\>}{\\rangle}\n\\newcommand{\\iprod}[2]{\\left\\langle #1 , #2 \\right\\rangle}\n\n% Theorems\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{remark}{Remark}\n\\newtheorem{corollary}{Corollary}%[section]\n\\newtheorem{proposition}{Proposition}%[section]\n\\newtheorem{definition}{Definition}%[section] % number this the same as theorem and lemma\n\n\n\n\n\\title{Analysis Toolbox: Strong convexity and Lipschitz continuity of gradients}\n\\author{\nStephen Becker \\\\\nApplied Math, U.\\ Colorado Boulder\n\\texttt{stephen.becker@colorado.edu}\n}\n\\date{Created Sep 27 2019, last edited \\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\subsection*{Lipschitz continuity of derivative and/or strong convexity of $f$}\nThe definition of Lipschitz continuity of $\\nabla f$  (with constant $L$) is\n\\begin{equation} \\label{eq:LL}\n    \\forall x, y\\quad \\| \\nabla f(x) - \\nabla f(y) \\| \\le L \\|x-y\\|\n\\end{equation}\nand the definition of $f$ being $\\mu$ strongly convex means that the function $x \\mapsto f(x) - \\frac{\\mu}{2}\\|x\\|^2$ is convex. In the lines below, if $L$ or $\\mu$ appears, then we are assuming the gradient is Lipschitz with constant $L$ or $f$ is strongly convex with constant $\\mu$, respectively. Most references to Nesterov's book are to his first edition~\\cite{Nesterov_2004}, not the recent 2018  edition~\\cite{Nesterov_2018}.\n\nThese two inequalities are very helpful; see, e.g., Thm 2.1.5 and Thm 2.1.10 from \\cite{Nesterov_2004}.\n\\begin{align}\n    f(y) &\\le f(x) + \\iprod{\\nabla f(x)}{ y - x } + \\frac{L}{2}\\|x-y\\|^2 \\label{eq:L} \\\\\n    f(y) &\\ge f(x) + \\iprod{\\nabla f(x)}{ y - x } + \\frac{\\mu}{2}\\|x-y\\|^2 \\label{eq:mu}\n\\end{align}\nIf we drop convexity but keep Lipschitz continuity of the gradient, then the first equation is still true, but the second equation is not true with $\\mu=0$, but it is true with $\\mu = -L$.  This is often written as\n$\\left| f(y) - ( f(x) + \\iprod{\\nabla f(x)}{ y - x } )\\right| \\le \\frac{L}{2}\\|x-y\\|^2$.\n\nThe main inequalities can be summarized by: \n% Not sure how to number; see https://tex.stackexchange.com/a/103894 for a way that won't work if I want this format\n\\begin{equation} \\label{eq:big}\n\\left.\\begin{aligned}\nL^{-1} \\| \\nabla f(x) - \\nabla f(y) \\|^2  \\quad\\text{\\small\\textcolor{red}{(a)}} \\\\\n\\mu \\|x-y\\|^2 \n\\quad\\text{\\small\\textcolor{red}{(b)}} \\\\\n\\frac{\\mu L}{\\mu + L} \\|x-y\\|^2  + \\frac{1}{\\mu+L}\\| \\nabla f(x) - \\nabla f(y) \\|^2\n\\quad\\text{\\small\\textcolor{red}{(c)}}\n\\end{aligned} \\right\\rbrace\n\\le \\< \\nabla f(x) - \\nabla f(y), x-y \\> \n\\le \n\\left\\lbrace\\begin{aligned}\n\\text{\\small\\textcolor{red}{(d)}} \\quad& L \\|x-y\\|^2  \\\\\n\\text{\\small\\textcolor{red}{(e)}} \\quad& \\mu^{-1}  \\| \\nabla f(x) - \\nabla f(y) \\|^2\n\\end{aligned} \\right.\n\\end{equation}\nThe inequality {\\small\\textcolor{red}{(a)}} %left-most $\\le$ above in the line for $L$ \nreally follows from the co-coercivity of gradients; this result is actually surprisingly strong, since it makes implicit use of the Baillon-Haddad theorem. 
The result {\\small\\textcolor{red}{(e)}} for $\\mu$ also requires that $f$ be continuously differentiable. \nThe {\\small\\textcolor{red}{(c)}} inequality \nassumes both strong convexity and Lipschitz continuity of the gradient; see \\cite[Thm. 2.1.12]{Nesterov_2004} for a derivation.\n\n\n\n\n\\paragraph{Sub-optimality bounds} Added Sept 17 2019. For unconstrained smooth optimization, if $x^\\star$ is a minimizer, then $\\nabla f(x^\\star) = 0$. Note there are 3 equivalent notions of optimality: $x$ is optimal if\n\\begin{equation}\n    \\|x - x^\\star\\|=0, \\quad\n    f(x) - f^\\star = 0, \\quad\n    \\|\\nabla f(x)\\| = 0\n\\end{equation}\nand these would be ``iff'' statements if we assume the optimal solution is unique. Now, given a Lipschitz continuous derivative, we can bound\n\\begin{align}\n \\|\\nabla f(x) \\| = \\| \\nabla f(x) - \\nabla f(x^\\star)\\| &\\le L \\|x-x^\\star\\| \\quad \\text{by \\eqref{eq:LL}}\\\\\n f(x) - f^\\star &\\le \\frac{L}{2}\\|x-x^\\star\\|^2\n \\quad\\text{by \\eqref{eq:L}} \\\\\n \\|\\nabla f(x) \\|^2 &\\le 2L \\left( f(x) - f^\\star\\right) \\quad\\text{by Eq. (9.14) in \\cite{BoydVandenbergheBook}} \\label{eq:33}\n\\end{align}\nand given $\\mu$ strong convexity, we can bound in the other direction:\n\\begin{align}\n\\|x-x^\\star\\|^2 &\\le \\frac{1}{\\mu^2}\\|\\nabla f(x)\\|^2 \n\\quad \\text{by \\eqref{eq:big}  {\\small\\textcolor{red}{(b)}} and  {\\small\\textcolor{red}{(e)}} }\n\\\\\n\\|x-x^\\star\\|^2 &\\le \\frac{2}{\\mu}\\left( f(x) - f^\\star\\right)\n\\quad \\text{by \\eqref{eq:mu}, with $x=x^\\star$, $y=x$} \\\\\nf(x) - f^\\star &\\le \\frac{1}{2\\mu}\\|\\nabla f(x)\\|^2 \\quad\\text{by Eq. (9.9) in \\cite{BoydVandenbergheBook}. This is PL}\n\\label{eq:44}\n\\end{align}\nGiven both $L$ and $\\mu$, we can combine the bounds and bound any one of the 3 error metrics in terms of another, e.g., $\\|\\nabla f(x) \\|^2 \\le \\frac{2L^2}{\\mu}\\left( f(x) - f^\\star\\right)$ and $f(x) - f^\\star \\le \\frac{L}{2\\mu^2}\\|\\nabla f(x)\\|^2$.\nBut these are not good bounds; the bounds in Eq~\\eqref{eq:33} and \\eqref{eq:44} are better. Note: \\eqref{eq:44} is the Polyak-Lojasiewicz (PL) inequality, see \\href{https://arxiv.org/abs/1608.04636}{Karimi, Nutini, Schmidt} for details.\n
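\nAs a quick numerical sanity check, the chain of inequalities in \\eqref{eq:big} can be verified for a strongly convex quadratic $f(x)=\\frac12 x^\\top A x$ with $A \\succ 0$, where $\\nabla f(x) = Ax$ and $\\mu$, $L$ are the smallest and largest eigenvalues of $A$ (a minimal sketch; the variable names are ours):\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(0)\n\nn = 5\nB = rng.standard_normal((n, n))\nA = B @ B.T + np.eye(n)                  # symmetric positive definite\nmu, L = np.linalg.eigvalsh(A)[[0, -1]]   # strong convexity / Lipschitz constants\n\nfor _ in range(1000):\n    x, y = rng.standard_normal(n), rng.standard_normal(n)\n    g, d = A @ x - A @ y, x - y          # gradient difference, point difference\n    ip = g @ d                           # <grad f(x) - grad f(y), x - y>\n    tol = 1e-9\n    assert (g @ g) / L <= ip + tol                             # (a)\n    assert mu * (d @ d) <= ip + tol                            # (b)\n    assert mu*L/(mu+L)*(d @ d) + (g @ g)/(mu+L) <= ip + tol    # (c)\n    assert ip <= L * (d @ d) + tol                             # (d)\n    assert ip <= (g @ g) / mu + tol                            # (e)\n\\end{verbatim}\n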
\n\\begin{thebibliography}{AZQRY16}\n\n\\bibitem[BC11]{bauschke2011convex}\nH.~H. Bauschke and P.~L. Combettes.\n\\newblock \\href{https://link.springer.com/book/10.1007/978-1-4419-9467-7}{{\\em Convex Analysis and Monotone Operator Theory in Hilbert Spaces}}, 1st edition.\n\\newblock Springer, 2011.\n\n\\bibitem[BC17]{bauschke2017convex}\nH.~H. Bauschke and P.~L. Combettes.\n\\newblock \\href{https://link.springer.com/book/10.1007/978-3-319-48311-5}{{\\em Convex Analysis and Monotone Operator Theory in Hilbert Spaces}}, 2nd edition.\n\\newblock Springer, 2017.\n\n\\bibitem[BV04]{BoydVandenbergheBook}\nS.~Boyd and L.~Vandenberghe.\n\\newblock \\href{http://www.stanford.edu/~boyd/cvxbook/}{{\\em Convex Optimization}}.\n\\newblock Cambridge University Press, 2004.\n\n\\bibitem[Nes04]{Nesterov_2004}\nYu.~Nesterov.\n\\newblock \\href{https://link.springer.com/book/10.1007/978-1-4419-8853-9}{{\\em Introductory Lectures on Convex Optimization: A Basic Course}}, volume~87 of {\\em Applied Optimization}.\n\\newblock Kluwer, Boston, 2004.\n\n\\bibitem[Nes18]{Nesterov_2018}\nYu.~Nesterov.\n\\newblock \\href{https://link.springer.com/book/10.1007/978-3-319-91578-4}{{\\em Lectures on Convex Optimization}}.\n\\newblock Springer International Publishing, 2018.\n\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "18f0b40a31c50038fe767bc9f8efc9a33802866f", "size": 6633, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "inequalities.tex", "max_stars_repo_name": "stephenbeckr/AIMS", "max_stars_repo_head_hexsha": "d920c87a6da0c44f4c814fc3d992f210b1be1983", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-09-23T08:12:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T21:52:41.000Z", "max_issues_repo_path": "inequalities.tex", "max_issues_repo_name": "stephenbeckr/AIMS", "max_issues_repo_head_hexsha": "d920c87a6da0c44f4c814fc3d992f210b1be1983", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "inequalities.tex", "max_forks_repo_name": "stephenbeckr/AIMS", "max_forks_repo_head_hexsha": "d920c87a6da0c44f4c814fc3d992f210b1be1983", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-02-18T11:55:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-22T04:02:44.000Z", "avg_line_length": 47.3785714286, "max_line_length": 432, "alphanum_fraction": 0.6846072667, "num_tokens": 2417, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.8459424353665381, "lm_q1q2_score": 0.565834185259396}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                            Fifth Chapter                            %\n%                   Evaluation and Experimentations                   %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter{Evaluation and Experimentation}\n\\label{cha:evaluation_and_experimentation}\n\n\\graphicspath{{Chapter6-Evaluation/Figs/}}\n\nIn this chapter, we study the performances of our solver to solve problems formulated on manifolds as well as the performances of our posture generator.\nWe then present a use case of our solver on manifolds with a problem of inertial identification.\nAnd finally, we present an approach to make use of posture generation in real-environment with data acquired directly from the robot.\n\n\\section{On the performance of formulation with Manifolds}\n\\label{sec:On_the_performance_of_formulation_with_manifolds}\n\nIn Chapter~\\ref{chapter:optimization_on_noneuclidean_manifolds} we presented our developments for a new nonlinear SQP solver dedicated to non-Euclidean Manifolds.\nWe then took advantage of it in the posture generator framework, with use case examples presented in Chapter~\\ref{ch:PG}.\nThe choice of using optimization on manifolds was driven by the intuition that such an approach would lead to relatively faster and more robust convergence of resolution when the search space is a non-Euclidean manifold (due to the reduction of the numbers of constraints and variables).\n\nRobotics optimization problems may be complex and many aspects play a role in the efficiency of their resolution.\nTo isolate the influence of optimizing on manifolds, we consider a toy problem: cube stacking.\n%In a typical robotic posture generation problem, the search manifold is a composition of several instances of $\\mathbb{R}^n$ and one $SO(3)$ per robot, and the equations involving the $SO(3)$ variables are quite complex.\n%Thus it may be difficult to extract the actual influence of the use of manifold formulation on the resolution.\n%In order to evaluate the influence of the formulation and resolution with manifolds, we choose to study a problem different from a typical robotic posture generation.\nGiven a set of unit cubes, each one defined on $\\mathbb{R}^3 \\times SO(3)$, we want to find a configuration in which the cubes are all inside a box, while not interpenetrating with each other.\nFor any cube $C_i$, we denote $V_i = \\{v_0, v_1, \\ldots, v_7\\}$  the set of all its vertices, $\\vec{t_i}\\in\\mathbb{R}^3$ and $R_i\\in SO(3)$ respectively represent its translation and rotation w.r.t the world frame.\nTo ensure that the cubes are inside the box, we write constraints that enforce each corner of each cube to be inside the box.\nFor each plane composing the box, we denote $\\vec{n}$ its normal toward the inside of the box, and $d$ the signed distance from the plane to the origin along $\\vec{n}$.\nThe constraint for each cube $C_i$ being `above' a plane defined by $\\{d, \\vec{n}\\}$ is of dimension 8 (1 per vertex) and can be written as:\n\\begin{equation}\n  \\forall v\\in V_i,\\ (\\vec{t_i} + R_i v)\\cdot \\vec{n} \\geq d\n\\end{equation}\n\nTo avoid interpenetration of the cubes, we could use the usual collision avoidance constraints as presented in Section~\\ref{sec:problem_formulations}.\nBut the use of the exact mesh of the cubes would generate gradient discontinuities of the constraints.\nApproximating the mesh with STP-BV would allow avoiding those 
\nApproximating the mesh with STP-BV would avoid those discontinuities, but if the STP-BV closely approximates the exact mesh, the gradient remains nearly discontinuous.\nInstead, we propose another approach that uses non-Euclidean manifolds: for each pair of cubes $C_i,\\ C_j$, we require a plane $P_{ij}$ to separate them.\nThe plane's pose can be parametrized by its normal $\\vec{n}\\in S^2$ and by $d\\in\\mathbb{R}$, the signed distance from the plane to the origin along $\\vec{n}$, i.e.\\ by a variable on $\\mathbb{R} \\times S^2$.\nThus we can write a constraint of dimension 16 (1 per vertex) such that $C_i$ is above $P_{ij}$ and $C_j$ is below $P_{ij}$ as follows:\n\\begin{align}\n  \\begin{split}\n    &\\forall v\\in V_i,\\ (\\vec{t_i} + R_i v)\\cdot \\vec{n} \\geq d \\\\\n    &\\forall v\\in V_j,\\ (\\vec{t_j} + R_j v)\\cdot \\left(-\\vec{n}\\right) \\geq -d\n  \\end{split}\n\\end{align}\n\nIn order to simulate gravity, we minimize the potential energy of all the cubes (simplified by a factor mass times gravity):\n\n\\begin{equation}\n  f = \\sum\\limits_i \\vec{t_i}\\cdot \\vec{z}\n\\end{equation}\n\nWe consider the problem of stacking $n$ cubes in an open-top box composed of 5 planes (the ground and 4 walls).\nIn~\\Figref{fig:cubes}, we illustrate the case of stacking 3 cubes, with the initial configuration on the left and a solution on the right.\n\\begin{figure}\n\\centering\n  \\includegraphics[width=.8\\linewidth]{3cubes.png}\n  \\caption{Problem of stacking 3 red cubes in a blue box, separating each pair of cubes by a yellow plane. Initial (left) and final (right) configurations.}\n\\label{fig:cubes}\n\\end{figure}\n\nThere is one plane for each pair of cubes, so there are $n(n-1)/2$ planes.\nThus the search manifold is:\n\\begin{equation}\n  \\mathcal{M} = {\\left( \\mathbb{R}^3\\times SO(3) \\right)}^n \\times {\\left( \\mathbb{R} \\times S^2 \\right)}^{\\frac{n(n-1)}{2}} \\nonumber\n\\end{equation}\nThe problem contains 5 constraints of dimension 8 per cube to fit them in the box and $n(n-1)/2$ constraints of dimension 16 to avoid the interpenetration of cubes.\nWe thus have a problem of dimension $4.5n+1.5n^2$ with $32n+8n^2$ constraints.\n
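\nFor illustration, the containment and separation constraints are straightforward to evaluate; the short sketch below (a minimal numerical sketch, with helper names of our own and unit cubes assumed centered on their frame origin) computes their residuals for given cube poses, a nonnegative residual meaning the corresponding inequality is satisfied:\n\\begin{verbatim}\nimport numpy as np\n\n# Vertices of a unit cube centered on its own frame.\nVERTS = np.array([[x, y, z] for x in (-.5, .5)\n                            for y in (-.5, .5)\n                            for z in (-.5, .5)])\n\ndef above_plane(t, R, n, d):\n    # 8 residuals of (t + R v) . n >= d, one per vertex v.\n    return (t + VERTS @ R.T) @ n - d\n\ndef separation(t_i, R_i, t_j, R_j, n, d):\n    # 16 residuals: cube i above the plane (n, d), cube j below it.\n    return np.concatenate([above_plane(t_i, R_i, n, d),\n                           above_plane(t_j, R_j, -n, -d)])\n\ndef potential_energy(translations):\n    # Objective f = sum_i t_i . z (mass times gravity factored out).\n    return sum(t[2] for t in translations)\n\\end{verbatim}\n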
\nIn order to compare the resolution with and without the use of manifolds, we also formulate this problem over a Euclidean space.\nTo do so, each variable on $SO(3)$ is replaced by a variable on $\\mathbb{R}^4$, while each variable on $S^2$ is replaced by one on $\\mathbb{R}^3$.\nIn both cases, a unit-norm constraint on the variable is added to the problem to force those variables onto the manifolds.\nThis results in a problem on\n\\begin{equation}\n  \\mathcal{M}={\\left( \\mathbb{R}^3\\times \\mathbb{R}^4 \\right)}^n \\times {\\left( \\mathbb{R} \\times \\mathbb{R}^3 \\right)}^{\\frac{n(n-1)}{2}} = \\mathbb{R}^{5n+2n^2} \\nonumber\n\\end{equation}\nthat is, a problem of dimension $5n+2n^2$ with $32.5n+8.5n^2$ constraints, which is $\\frac{n(n+1)}{2}$ more variables and constraints than with the manifold formulation.\n\nWe solve these two problems for different numbers of cubes and compare the results in terms of number of iterations before convergence, convergence time, and time spent per iteration.\nIn each resolution, both problems (real-space and manifold formulations) are initialized with the same randomized initial guess.\nThe resolutions with {\\tt PGSolver} are set up with the same set of parameters, and the ones with the {\\tt CFSQP} solver use its default parameters.\nThe initial positions of the cubes are chosen randomly, and each plane is initialized at a position between the two cubes it separates.\nWe display the results of these tests in~\\Figref{fig:timings-cubes}.\n\\begin{figure}[htpb]\n  \\centering\n  \\input{Chapter6-Evaluation/Figs/timingsCubes.tikz}\n  \\caption{Comparison of resolutions with and without using manifolds with $\\mu=10^{-8}$. Red represents the results with {\\tt PGSolver} on the manifold formulation, blue on the real-space formulation and green with {\\tt CFSQP} on the real-space formulation.}\n\\label{fig:timings-cubes}\n\\end{figure}\n\nWith 300 resolutions per case, approximately $98\\%$ converged when using the manifold formulation versus $99.5\\%$ with the non-manifold formulation with {\\tt PGSolver}, whereas the success rate of {\\tt CFSQP} drops steadily from $100\\%$ for 2 cubes to $70\\%$ for 7 cubes.\nConcerning the resolutions with {\\tt PGSolver}, we observe that the numbers of iterations are essentially similar for the two formulations, but the time spent per iteration is consistently smaller in the case of a resolution with manifolds, which is in agreement with our expectations; consequently, the convergence time is consistently shorter for the formulation with manifolds.\nThe resolutions with {\\tt CFSQP} take up to 4 times more iterations, and each iteration is on average 3 times longer than with {\\tt PGSolver}, which makes resolutions with {\\tt PGSolver} on the manifold formulation on average 7 times faster than {\\tt CFSQP} on the real-space formulation for this specific type of problem.\n\nDuring our experimentations, we noticed that the regularization applied to the Hessian of the problem plays an important role in the convergence speed.\nIn particular, the minimum value $\\mu$ for its diagonal terms is important.\nWe observed that for values of $\\mu$ between $10^{-8}$ and $10^{-2}$, the behaviors of both resolutions are quite consistent (the influence of $\\mu$ is small in that span), with the best results observed for $10^{-8}$ (presented in~\\Figref{fig:timings-cubes}), and the formulation with manifolds is consistently faster than the one without manifolds.\nValues outside of that span tend to degrade both resolutions: too high values damp the Hessian, which translates into larger numbers of iterations, while too small values make the Hessian closer to rank deficient, in which case we observe larger numbers of internal iterations of the QP solver, and thus longer times per iteration.\n\nThis study shows, on a simple example that makes heavy use of non-Euclidean manifolds, that solving such a problem with optimization on manifolds not only lets the user benefit from a simpler and more intuitive formulation, but also outperforms the classical approach.\nFor this specific problem, {\\tt PGSolver} clearly outperforms {\\tt CFSQP} in terms of success rate as well as convergence speed.\n
\n\\section{Evaluation of the Posture Generation}\n\\label{sec:evaluation_of_the_posture_generation}\n\nIn an effort to evaluate the performance of our posture generator framework and solver on manifolds, we devised a campaign of tests in which we solve problems that are known to be feasible and compare the success rates and computation times of different resolution approaches for different types of problems.\nWe always search for viable solutions, where the joint and torque limits are respected, collisions are avoided, and static stability is guaranteed.\nTo generate problems that are known to be feasible, we start by computing the poses of 4 frames that the robot can reach simultaneously with its right foot, left foot, right gripper and left gripper, by querying the direct kinematics for a random configuration $q_\\text{rand}$ of the robot that is within its joint limits.\nGiven those 4 frames, we can create a posture generation problem where the robot has to reach those frames with its end-effectors while respecting its viability constraints.\nIn particular, we require the contacts on the feet to be fixed unilateral friction contacts (with the forces in the friction cones) and the contacts on the hands to be fixed bilateral contacts (where the forces can be applied in any direction).\nIf feasible, this problem can easily be solved by providing the configuration $q_\\text{rand}$ as initial guess.\nIf a solution $q^*$ is found, the problem is deemed feasible and is recorded for further use.\n\nUsing this process, we compute three types of problems with different sets of constraints:\n\\begin{itemize}\n  \\item 2 Contacts: only the feet are in contact, the hands are free of constraints.\n  \\item 3 Contacts: both feet and the right gripper are in contact, the left hand is free.\n  \\item 4 Contacts: both feet and both hands are in contact.\n\\end{itemize}\n
\nSome initial tests, in which we solved the recorded problems with 4 contacts starting from the half-sitting configuration $q_\\text{half-sitting}$ as initial guess, showed that a solution is found for only 35\\% of the problems.\nThis is not a surprising result, as the configuration to reach is very far from the initial posture.\nWe noticed that most of the randomly generated configurations put the robot in a very twisted posture that seems difficult to reach for the optimization algorithm when starting from the half-sitting configuration.\n\nWe computed another set of feasible problems in which the value of $q_\\text{rand}$ is taken closer to the half-sitting, by taking the average between a random configuration and the half-sitting:\n\\begin{equation}\n  q_\\text{rand} \\leftarrow \\frac{q_\\text{rand}+q_\\text{half-sitting}}{2}\n\\end{equation}\nThe resolution of those feasible problems showed a success rate of 85\\%.\nThis implies that the initial guess, and in particular its distance from the solution, which we coin the \\emph{initial distance}, is of crucial importance in the resolution of posture generation problems.\nThe initial distance for a problem where $q_0$ is the initial guess and $q^*$ is a solution is defined as $d=\\|q_0-q^*\\|^2$.\n\nWe propose to evaluate the effect of the initial distance on the success rate and on the convergence time of the posture generator.\nTo do so, each feasible problem is solved several times, with the initial guess increasingly closer to the solution.\nGiven a randomly chosen initial guess $q_0$, and $q^*$ being the known solution of the problem, we solve the problem starting from a configuration $q_0^{n\\%}$ defined as:\n\\begin{equation}\n  q_0^{n\\%} = n q_0 + (1-n)q^*\n\\end{equation}\nwith $n$ successively taking the values $1.0$, $0.8$, $0.6$, $0.4$, and $0.2$.\n
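\nThis interpolation is easy to reproduce; the sketch below (our own illustration, assuming for simplicity a vector-space parametrization of the configuration, whereas the quaternion components of the real $q$ would call for an interpolation on the manifold) generates the five initial guesses together with their initial distances:\n\\begin{verbatim}\nimport numpy as np\n\ndef interpolated_guesses(q0, q_star, fractions=(1.0, 0.8, 0.6, 0.4, 0.2)):\n    # q0^{n%} = n*q0 + (1-n)*q*, increasingly close to the solution.\n    q0, q_star = np.asarray(q0), np.asarray(q_star)\n    for n in fractions:\n        q = n * q0 + (1.0 - n) * q_star\n        d = float(np.sum((q - q_star) ** 2))  # initial distance\n        yield n, q, d\n\\end{verbatim}\n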
\nAfter aggregating the results, we are able to estimate the influence of the initial distance on the success rate and convergence time of the optimization.\nThroughout our work with our solver, we observed that the choice of Hessian approximation method has a substantial effect on the resolution.\nWe take here the opportunity to assess this observation by solving series of problems with the solver using different Hessian update methods.\nAs we explained in Chapter~\\ref{chapter:optimization_on_noneuclidean_manifolds}, the Hessian can be updated as a whole, or individually, in which case a list of Hessians approximating each constraint's second-order derivative is maintained and combined into the global Hessian matrix.\nWe compare the results obtained with three update methods, namely BFGS, BFGS with self-scaling, and SR1, using whole or individual updates for each of them.\nWe gather those results after solving 250 different posture generation problems, 5 times each, starting increasingly close to the solution.\nWe sample the range of initial distances into 20 segments of the same length and report the average results on each segment as a data point in~\\Figref{fig:evalPG}.\n\\begin{figure}[htb]\n\\centering\n  \\includegraphics[width=\\linewidth]{evalPG/resWithSR1Grouped.png}\n  \\caption{Comparison of success rates and convergence times of the posture generation problem resolution with respect to the distance between initial guess and solution, for sets of feasible problems with 2, 3 and 4 contact constraints and for different choices of Hessian update methods.}\n\\label{fig:evalPG}\n\\end{figure}\n\nThe results obtained show that no single Hessian update method is much better than all others in every case.\nThe BFGS and SR1 approaches, with individual or grouped Hessians, present similar and consistent behaviors on the 3 types of problems.\nThe BFGS with self-scaling update is quite inconsistent: it shows the best success rates and convergence speeds on problems with 4 contact constraints, but the worst ones on problems with 2; on problems with 3 contacts, the individual update fares well while the grouped one has the worst computation times.\n\nThis experiment shows that the closer the initial guess is to the solution, the more likely the solver is to converge to a solution, and it gives us a quantitative estimate of the expected success rate and convergence time with respect to the initial distance.\n\nAside from the BFGS with self-scaling methods, all update methods usually reach convergence in a few seconds, taking longer when starting further from the solution.\nOn average, a solution is found fastest for problems with only 2 contacts, within 2 seconds; problems with 3 contacts take a little longer, up to 3 seconds; and problems with 4 contacts are the longest to solve, taking up to 4 seconds.\n
\nAlthough those computation times are long, and make it inconvenient to compute large numbers of postures, as is often done in contact planning, they remain of the same order of magnitude as the ones reported in~\\cite{brossette:RAM:2013}, \\cite{escande:iros:2006}, \\cite{bouyarmane2011autonomous} and \\cite{hauser:ijrr:2008}, which were all based on off-the-shelf solvers, while we use an open solver that we developed and are able to modify or even specialize.\n\nIt would be interesting to run more tests like those to study the influence of the different solver options and to find optimal strategies for setting the solver's options when solving posture generation problems.\nAutomatically choosing the update method and other options based on the structure of the problem at hand could further increase the general performance of our posture generator.\n\n\\FloatBarrier\n\\section{Application to Inertial Parameters Identification}\n\\label{sec:inertial_parameters}\n\nAlthough it was developed with the idea of solving posture generation problems, our solver and the formulation on manifolds can be used for different types of problems.\nIn this section, we present an application of our optimization algorithm to a problem of inertial parameter identification that was published in~\\cite{traversaro:iros:2016}.\n\n\\subsection{Physical Consistency of Inertial Parameters}\n\\label{sub:physical_consistency_of_inertial_parameters}\n\nThe dynamics of a rigid body is completely defined by its mass distribution:\n\\begin{equation}\n  \\rho(.):\\mathbb{R}^3 \\mapsto\\mathbb{R}_{\\geq 0}\n\\end{equation}\nThis function defines the density of the rigid body in 3D space; it is strictly positive on points that belong to the rigid body and null everywhere else.\nThe complete dynamics of a rigid body can be captured by a set of parameters $\\pi\\in\\mathbb{R}^{10}$ representing its mass $m$, the position of its center of mass $c$, and its 3D inertia matrix $I_B$.\nBy definition, those parameters are functionals of $\\rho(.)$.\nThose parameters can in some cases be obtained from the Computer-Aided Design (CAD) model of the body, but it is often necessary to evaluate them experimentally.\nTheir identification can be done by measuring the body acceleration $A^g$, twist $V$ and the external wrench $F$ applied to it for $N$ different situations, and finding the $\\pi^*$ that minimizes the error in the Newton-Euler equation for those values:\n\\begin{equation}\n\\label{eq:classicIdentif}\n  \\pi^* = \\argmin_{\\pi\\in\\mathbb{R}^{10}} \\sum_{i=0}^N \\|Y(A^g_i,V_i)\\pi -F_i\\|^2\n\\end{equation}\nwhere $Y(A^g_i,V_i)$ is a matrix representing the inertial effects in the Newton-Euler equation.\nTo take into account the physical properties of the inertial parameters, the \\emph{physical consistency} constraint is traditionally used; it enforces that the mass is positive and the inertia matrix positive definite.\nIn~\\cite{traversaro:iros:2016}, we prove that the \\emph{physical consistency} constraint is not enough to ensure that there exists a mass distribution generating a given set of inertial parameters; thus, we propose an alternative formulation, which we call \\emph{full physical consistency}, that ensures it.\n
In this novel formulation, the inertial parameters are parametrized by an element $\\theta \\in \\mathfrak{P} = \\mathbb{R}_{\\ge 0} \\times \\mathbb{R}^3 \\times  SO(3) \\times \\mathbb{R}_{\\ge 0}^3$, and \\emph{full physical consistency} is ensured.\nIn particular, the components of $\\theta$ are:\n\\begin{itemize}\n    \\item $m \\in \\mathbb{R}_{\\ge 0}$ the mass of the body\n    \\item $c \\in \\mathbb{R}^3$ the center of mass of the body\n    \\item $Q \\in SO(3)$ the rotation matrix between the body frame and the frame of principal axes at the center of mass\n    \\item $L \\in \\mathbb{R}_{\\ge 0}^3$ the second central moments of mass along the principal axes\n\\end{itemize}\nWe devise a functional $\\pi_p(\\theta):\\mathfrak{P}\\mapsto\\mathbb{R}^{10}$ that maps this new parametrization to the corresponding inertial parameters.\nHowever, the optimization variable now lives on a non-Euclidean manifold, because $\\mathfrak{P}$ includes $SO(3)$.\nThus, we propose to solve that problem with our solver on manifolds.\n
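\nThe mapping $\\pi_p$ itself is simple to write down: $I_C = Q\\,\\mathrm{diag}(L_y+L_z,\\ L_x+L_z,\\ L_x+L_y)\\,Q^\\top$ is the inertia at the center of mass, and a parallel-axis term expresses it in the body frame. The sketch below is a minimal transcription of this mapping (the code and its names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef skew(v):\n    # Skew-symmetric matrix such that skew(v) @ u = v x u.\n    return np.array([[0., -v[2], v[1]],\n                     [v[2], 0., -v[0]],\n                     [-v[1], v[0], 0.]])\n\ndef pi_p(m, c, Q, L):\n    # Principal moments of inertia from the second central moments L.\n    J = np.array([L[1] + L[2], L[0] + L[2], L[0] + L[1]])\n    I_C = Q @ np.diag(J) @ Q.T          # inertia at the center of mass\n    I_B = I_C - m * skew(c) @ skew(c)   # transported to the body frame\n    # pi = (m, m*c, vech(I_B)); m >= 0 and L >= 0 give, by construction,\n    # fully physically consistent inertial parameters.\n    vech = [I_B[0, 0], I_B[0, 1], I_B[0, 2], I_B[1, 1], I_B[1, 2], I_B[2, 2]]\n    return np.concatenate(([m], m * np.asarray(c), vech))\n\\end{verbatim}\n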
\n\\subsection{Resolution with optimization on Manifolds}\n\\label{sub:resolution_with_optimization_on_manifolds}\n\nThe formulation of the problem becomes:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:finalProblem}\n    \\argmin_{\\theta \\in \\mathbb{R}\\times\\mathbb{R}^3\\times SO(3) \\times \\mathbb{R}^3} &\\ \\sum_{i = 1}^N \\left\\| Y(A^g_i, V_i) \\pi(\\theta) - F_i \\right\\|^2 \\\\\n    \\mbox{s.t.} &\\ m \\geq 0,\\ L_x \\geq 0,\\ L_y \\geq 0,\\ L_z \\geq 0\n\\end{aligned}\n\\end{equation}\n\nWe can solve it with our solver on manifolds.\nThis has an immediate advantage: we can write problem~\\eqref{eq:finalProblem} directly, without the need to add any parametrization-related constraints.\nBecause there are fewer variables and fewer constraints, it is also faster to solve.\nTo assess this, we compared the resolution of~\\eqref{eq:finalProblem} formulated with each of the three parametrizations of $SO(3)$ available, namely native $SO(3)$, unit quaternion and rotation matrix.\nWe solved the three formulations with our {\\tt PGSolver}, and the last two with an off-the-shelf solver ({\\tt CFSQP}~\\cite{cfsqp:manual}), using the dataset presented in Section~\\ref{sub:experiments}.\nThe formulation with native $SO(3)$ was consistently solved faster.\nWe observed timings around $0.5$s for it, and over $1$s for the non-manifold formulations with {\\tt CFSQP}.\nThe mean time per iteration was also the lowest with the native formulation (at least $30\\%$ lower when compared to all other possibilities).\n\nWorking directly with manifolds also has an advantage that we do not leverage here, but that could be useful for future work: at each iteration, the variables of the problem represent a \\emph{fully physically consistent} set of inertial parameters.\nThis is not the case with the other formulations we discussed, as the (additional) constraints are guaranteed to be satisfied only at the end of the optimization process.\nHaving physically meaningful intermediate values can be useful to evaluate additional functions that presuppose it (additional 
constraints, external monitoring\\ldots).\nIt can also be leveraged for real-time applications where only a short time is repeatedly allocated to the inertial identification, so that when the optimization process is stopped after a few iterations, the output is physically valid.\n\n\\subsection{Experiments}\n\\label{sub:experiments}\nWe ran some experiments with the iCub robot as an example.\nIt is a full-body humanoid with 53 degrees of freedom~\\cite{metta2010icub}.\nTo validate the presented approach, we used the six-axis force/torque (F/T) sensor embedded in iCub's right arm to collect experimental F/T measurements.\nWe locked the elbow, wrist and hand joints of the arm, simulating the presence of a rigid body directly attached to the F/T sensor, a scenario similar to the one in which an unknown payload needs to be identified~\\cite{kubus2008line}.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{overpic}[width=0.68\\textwidth,natwidth=1235,natheight=742]{arm3.png}\n\\put(5,10){FT sensor}\n\\put(13,13){\\vector(1,1){18}}\n\\put(38,50){Upper arm}\n\\put(43,49){\\vector(0,-1){12}}\n\\put(65,45){Forearm}\n\\put(70,44){\\vector(-1,-2){7}}\n\\end{overpic}\n\\caption{CAD drawing of the iCub arm used in the experiments. The six-axis F/T sensor used for validation is visible in the middle of the upper arm link.}\n\\label{fig:cadArm}\n\\end{figure}\n\nWe generated five 60-second trajectories in which the three shoulder joints were reaching successive random joint positions using minimum-jerk-like trajectories.\nEach trajectory is decomposed into sub-trajectories traveling between two consecutive random positions in $10$s, $5$s, $2$s, $1$s and $0.5$s respectively (as a result, the trajectories are ordered from the slowest to the fastest).\nWe played those trajectories on the robot and sampled the F/T sensor and joint encoder outputs at $100$Hz.\nWe used the joint positions, velocities and accelerations with the kinematic model of the robot to compute $A^g$ and $V$ of the F/T sensor for each time sample.\n
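\nIn practice, the joint velocities and accelerations were obtained by Savitzky--Golay filtering of the joint positions (order 2, with window sizes of $499$, $41$, $21$, $9$ and $7$ samples for the five trajectory speeds); a minimal sketch with SciPy (the function name and the fixed default window are ours):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import savgol_filter\n\ndef joint_kinematics(q, dt=0.01, window=41, order=2):\n    # Smooth the sampled joint positions (100 Hz -> dt = 0.01 s) and\n    # differentiate them once and twice with a Savitzky-Golay filter.\n    q_f = savgol_filter(q, window, order, axis=0)\n    dq  = savgol_filter(q, window, order, deriv=1, delta=dt, axis=0)\n    ddq = savgol_filter(q, window, order, deriv=2, delta=dt, axis=0)\n    return q_f, dq, ddq\n\\end{verbatim}\n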
\nWe solved the inertial identification problem~\\eqref{eq:classicIdentif} using a classical linear algorithm, and the one using the proposed \\emph{fully physically consistent} parametrization~\\eqref{eq:finalProblem} with our solver on manifolds.\nWe report the identified inertial parameters in Table~\\ref{table:results}.\nIt is interesting to highlight that for slow datasets (sub-trajectory times of $10$s or $5$s) the unconstrained optimization problem~\\eqref{eq:classicIdentif} results in inertial parameters that are not \\emph{fully physically consistent}.\nIn particular, this is due to the low values of the angular velocities and accelerations, which do not properly excite the inertial parameters; the parameters are then \\emph{numerically not identifiable}.\n\\begin{figure}\n\\centering\n  \\includegraphics[width=\\linewidth]{tableResults.png}\n  \\caption{Inertial parameters identified with the different datasets and the different optimization problems.\n  Inertial parameters identified on the $\\mathbb{R}^{10}$ optimization manifold that are not \\emph{fully physically consistent} are highlighted.\nMasses are expressed in $kg$, first moments of mass in $kg.m$, inertia matrix elements in $kg.m^2$.}\n\\label{table:results}\n\\end{figure}\n\\FloatBarrier\nThe proposed optimization problem clearly cannot identify those parameters either, as the identified values are an order of magnitude larger than the ones estimated for faster datasets; nevertheless, it always estimates inertial parameters that are \\emph{fully physically consistent}.\nFor faster datasets (sub-trajectory times of $1$s or $0.5$s) the results of the two optimization problems are the same, because the high values of angular velocities and accelerations make it possible to identify all the parameters perfectly.\nWhile it is possible to identify all the inertial parameters of a single rigid body, this is not the case when identifying the inertial parameters of a complex structure such as a humanoid robot, for which both structurally~\\cite{ayusawa2014identifiability} and numerically~\\cite{pham1991essential} non-identifiable parameters exist.\nIn this latter application, the enforcement of \\emph{full physical consistency} will always be necessary to get meaningful results.\n\n\\section{Application to contact planning in real environments}\n\\label{sec:application_of_contact_planning_on_real_environment}\n\nWhen an exact model of the environment is not available, it is necessary to plan a robot's motion based on data acquired by its sensors.\nIn this section, we investigate the generation of `viable' postures in an environment acquired by an RGBD camera as a point cloud.\nOur goal here is to generate postures with constraints that describe contacts between the robot and the acquired point cloud representing the environment.\n
To do so, we need to formulate contact and collision avoidance constraints between geometric features of the robot and a point cloud.\nThis can be done in two different ways:\n\\begin{itemize}\n  \\item formulate contacts with the raw point cloud;\n  \\item pre-process the point cloud to extract geometric features with which contacts can be formulated.\n\\end{itemize}\n\nFormulating contact constraints between geometric features of the robot and a raw point cloud would require the development of new contact formulations, and the high number of data points may induce longer computation times for some constraints of the optimization problem.\nPre-processing the point cloud data in order to extract geometric features on which contacts can be generated has the advantage of simplicity, as the posture generation formulation is unchanged.\nAlso, assuming that the environment is static, the result of the pre-processing can be reused for several posture generations, which is particularly advantageous in the context of multi-contact planning, where the posture generator is queried many times in a row.\n\nAlthough the pre-processing of the point cloud takes a long time, it is done only once, and one can assume that the constraint evaluations in the pre-processed case will be faster than with the raw point cloud, simply because simpler geometric entities are used.\nSo, for a few posture generations, the formulation on the raw point cloud may be faster, but as we do more generations on the same dataset, the cost of pre-processing is amortized by the cheaper constraint evaluations.\nThus, we decided to take the pre-processing approach to generate postures in a sensor-acquired environment.\n\nTo extract geometric features that are relevant for posture generation, we must deal with two kinds of situations:\n\\begin{itemize}\n  \\item the models of the objects in the environment are known: in this case pre-processing consists mainly in dealing with recognition, model superposition, and handling uncertainties.\n  In brief, once model superposition is achieved, we can use the 3D model in the PG as in~\\cite{escande:ras:2013,bouyarmane:ar:2012, escande:icra:2016};\n  \\item the models of the obstacles and the environment are not known (e.g.\\ disaster or outdoor environments, such as the Fukushima disaster that inspired the DARPA Robotics Challenge): models of the environment need to be built from the robot's embedded sensors in an egocentric way.\n  This section deals with this case, and we describe how we construct planar surfaces from the 3D point cloud data and feed them to the PG.\\@\n\\end{itemize}\n\nIn robotics, the use of 3D-based vision for recognition and navigation in known or partially unknown environments was first applied to mobile robots moving in a flat environment, for example by coupling it with a SLAM system~\\cite{whitty:acra:2012}.\nAnother approach consists in extracting surfaces from the point cloud, and then linking them to the known environment or simply considering them as obstacles to be avoided~\\cite{poppinga:iros:2008}.\n
Since working on raw point clouds is costly because of the high number of data points, this extraction has also been enhanced~\\cite{biswas:icra:2012} in order to run in real time.\nThis approach has recently been experimented with on a humanoid robot in~\\cite{maier:humanoids:2012}, where two methods are combined: surface extraction from a point cloud, and voxel-based 3D decomposition of the environment~\\cite{nakhaei:humanoids:2008}.\nStill, since the robot only navigates in a flat environment and does not perform manipulation tasks, the surfaces extracted from the 3D point cloud are projected down to a 2D plane, on which the navigation and collision avoidance processes are based.\nHumanoid robots make it possible to navigate more complex environments, and some work has been done to make a humanoid robot go down a ramp~\\cite{lutz:iros:2012} or climb stairs~\\cite{osswald:iros:2012}.\n\nIn order to enable a robot to analyze and plan a motion inside a 3D sensor-acquired environment, we propose to extract the relevant surfaces of an acquired point cloud, so as to directly have a global picture of the environment and determine the convex planar surfaces with which the robot can make contact.\n\n\\subsection{Building an understandable environment}\n\\label{sub:building_an_understandable_environment}\n\nThe simplest entities that our PG is able to deal with, and that can correctly describe the robot's environment, are convex polygonal planar surfaces.\nTherefore, starting from an acquired point cloud, we extract a relevant set of such geometric entities to be used as the surroundings of the robot.\n\n\\begin{figure}\n\\centering\n  \\includegraphics[width=\\linewidth]{complete_pipeline_modified.pdf}\n  \\caption{Top: flowchart describing the main elements of our algorithm and the type of data that is passed between them. Bottom: the data throughout the process, illustrated in the case of our first experiment.}\n\\label{fig:full_pipeline}\n\\end{figure}\n\n\\Figref{fig:full_pipeline} illustrates the major steps of this point cloud treatment.\nWe use Willow Garage's Point Cloud Library\\footnote{\\url{http://pointclouds.org/}} (PCL)~\\cite{rusu:icra:2011} extensively for processing the point cloud.\n\n\\paragraph{Acquisition of the point cloud from an RGB and depth sensor}\nThe point cloud representing the scene is acquired by an Asus Xtion Pro camera.\nThe points are defined by their space coordinates and colors.\nWe do not use the color information except for display purposes.\nIt may, however, be useful for future developments to match object models with sensor data and to perform color-based segmentation algorithms.\n\n\\paragraph{Filtering}\nIn order to reduce the computation time and improve the efficiency of our point cloud treatment algorithm, we filter out the points further than 5 meters from the camera (beyond which the sensor data is not reliable) and use a voxelized grid approach to downsample the remaining point cloud.\nThis consists of creating a 3D voxel grid over the point cloud and replacing the points in each voxel by their centroid.\nThis step reduces the number of points to process by a factor of 5 to 6.\n
\n\\paragraph{Region growing segmentation}\nWe divide the global point cloud scene into clusters of points that belong to the same flat area, using a region growing algorithm~\\cite{poppinga:iros:2008} that groups neighboring points with similar normals into clusters.\n\n\\paragraph{Planar extraction}\nFor each cluster, we use a plane segmentation to find the plane model that fits the highest number of points.\nThe outlying points can then either be filtered out, or we can try to fit another plane to them.\n\n\\paragraph{Planar projection and convex hull generation}\nWe extract the convex hull of the projection of each cluster on its respective fitting plane.\nAfter this step, each planar surface of the scene is represented by a frame composed of the barycentre of the set of points and the convex hull of the surface.\n\n\\paragraph{Re-orientation and transfer to the planner}\nAfter re-orienting each frame to express it with respect to the world frame (taking into account the initial orientation of the camera), the list of \\{frame + convex hull\\} can be sent to the planner as a list of contact surface candidates.\n
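\nThe whole pipeline is easy to prototype. The sketch below is our own illustration using the Open3D Python library as a stand-in for the PCL components we actually use ({\\tt pcl::VoxelGrid}, {\\tt pcl::ProjectInliers}, {\\tt pcl::ConvexHull}), with DBSCAN clustering standing in for the region growing segmentation:\n\\begin{verbatim}\nimport numpy as np\nimport open3d as o3d\n\ndef extract_planar_surfaces(pcd, voxel=0.02, eps=0.05, min_points=50):\n    # Filtering: downsample with a voxel grid (one centroid per voxel).\n    pcd = pcd.voxel_down_sample(voxel_size=voxel)\n    # Segmentation into clusters (stand-in for region growing on normals).\n    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))\n    surfaces = []\n    for k in range(labels.max() + 1):\n        cluster = pcd.select_by_index(np.where(labels == k)[0])\n        # Planar extraction: RANSAC plane model fitting the most points.\n        model, inliers = cluster.segment_plane(distance_threshold=0.01,\n                                               ransac_n=3,\n                                               num_iterations=200)\n        inlier_cloud = cluster.select_by_index(inliers)\n        # Convex hull of the (near-planar) inlier points.\n        hull, _ = inlier_cloud.compute_convex_hull()\n        surfaces.append((model, hull))  # plane (a, b, c, d) and hull mesh\n    return surfaces\n\\end{verbatim}\n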
Therefore, it is necessary to re-orientate the surfaces before sending them to the planner. To do so, we simply apply a rotation matrix (that is computed from the initial camera orientation) to each of our data set's frame and origin to settle that problem. From there, the transfer of the surfaces to the planner can be done without any specific issue.\n\n%Re-orientation is done as a last step for the sake of performance: obviously we need to re-orient only a few frames, compared to an early re-orientation of the point cloud that would require to apply a transformation on thousands of points.\n\n\n\\subsection{Constraints for surfaces extracted from point clouds}\n\n%Generating postures in which a contact between convex polygonal plane surfaces requires ensuring that the intersection area between the two surfaces is large enough to support the contact.\n%One could use the formulation presented in Section~\\ref{sec:integration_on_non_inclusive_contacts_in_posture generation}.\n%Or more simply a constraint of convex polygon inclusion, which is sufficient for cases such as the ones we study here, where the support surfaces are wide enough for the robot to lean on.\n\n%%We adjusted slightly our planner and posture generator to handle contacts between convex polygonal plane surfaces.\n%%The main modification made in the posture generator deals with properly writing the constraints that enforce the inclusion of one surface into another one.\n%%In our previous implementation, contacts are searched between rectangular patches attached to the robot body or the environment.\n\n%Such constraint can be written by enforcing that all the points of polygon $S_i$ are located on the left side of all the segments of polygon $S_j$ (provided that the points of $S_j$ are ordered counterclockwise around its normal).\n%%For a couple of coplanar surfaces $S_i$, $S_j$ respectively represented by $n$ and $m$ points, this gives rise to a constraint of dimension $n \\times m$:\n%Given $S_i$ and $S_j$ two surfaces which contours are defined respectively by the sets of points ${p_0, p_1, \\ldots, p_n}$ and ${q_0, q_1, \\ldots, q_m}$ and $\\vec{n}$ a vector normal to $S_i$, the inclusion of $S_i$ in $S_j$ can be written as the following constraint:\n\n%\\begin{equation}\n%\\forall k \\in [0, n],\\ \\forall l \\in [0, m],\\ \\left[\\overrightarrow{q_l p_k}\\times\\overrightarrow{q_l q_{l+1}}\\right].\\vec{n} \\leq 0\n%\\end{equation}\n\n%\\begin{algorithm}\n%\\caption{Surface inclusion constraints}\n%\\label{alg:surf_inclusion}\n%\\begin{algorithmic}\n%\\State{Let $S_i$ and $S_j$ be two coplanar plane surfaces}\n%\\State{$S_i = {p_0, p_1, \\ldots, p_n}$ and $S_j = {q_0, q_1, \\ldots, q_m}$}\n%\\State{$\\vec{N}$ is $S_i$'s normal vector}\n%\\For{$k = 0 \\to n$}\n%\\For{$l = 0 \\to m$}\n%\\State{Constraint $: \\left[\\overrightarrow{q_l p_k}\\times\\overrightarrow{q_l q_{l+1}}\\right].\\vec{N} \\leq 0$}\n%\\EndFor{}\n%\\EndFor{}\n%\\end{algorithmic}\n%\\end{algorithm}\n\nOnce the surfaces are defined, we can choose which ones are suitable for the robot to make contact with, using the flat contact constraint formulations described previously.\n%Although it is not a mandatory step, it allows reducing the amount of exploration during the planning phase by removing undesired or inappropriate pairs of robot/environment surfaces.\nFor the time being, this is determined by heuristics that are defined depending on the situation.\nIf we want the robot to walk on various surfaces, only those that have a normal vector closely aligned 
with the gravity field would be selected as potential candidates (so as to eliminate the walls and other surfaces on which the robot cannot walk).\nSimilarly, only surfaces located at a certain height can be considered for hand contact, etc.\n\nFor collision avoidance with the environment, we consider each surface generated by our point cloud treatment algorithm as a thin 3D body.\nBasically, we extrude each surface by a few centimeters in the direction opposite to its normal (provided that this normal is pointing toward the outside of the real body) and create a convex hull surface using {\\tt QHull}~\\cite{qhull:acm:1996}.\n\n\\subsection{Results}\n\\label{sub:results_pcl_plannif}\n\nWe illustrate this approach with two planning scenarios where the HRP-2 robot is required to move $2$m forward, using the planner presented in~\\cite{bouyarmane:humanoids:2010}.\nIn both scenarios, this results in climbing on a table (the first one is $71$cm high, the second one $53$cm high) with the help of various surrounding objects.\nThe knowledge of the environment and surrounding objects is obtained from a point cloud captured with an RGBD camera.\n\nAll the computations of the following experiments are performed on a single thread of an Intel(R) Core(TM) i7--3840QM CPU at 2.80GHz, with 16GB of RAM\\@.\nThis work was done prior to the development of our posture generator and therefore makes use of a previous version of it that is presented in~\\cite{bouyarmane:humanoids:2010}.\nOur plan is now to use our Posture Generator in this framework.\n\n\\paragraph{Plan 1: irregular stairs}\nIn this first experiment, the robot has to walk up some irregular stairs made of several random pieces of furniture to reach its goal.\nThe filtered point cloud was split into 6 plane surfaces.\nThe whole cloud processing was done in $2.7$ s.\nThe planner generated a sequence of 11 nodes, some of which are depicted in~\\Figref{fig:table-climbing-simulation-stair}{}.\nWe notice that the robot climbs the stairs one by one without ever putting both feet on the same step and without any noticeable problem.\nIn total, 23 nodes were generated and the planning time was $98.4$ s.\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{hrp2stairs.png}\n  \\caption{Table climbing simulation using irregular stairs. Of the 11 nodes of the path, we depicted the nodes 1, 4, 6, 8, 10 and 11.}\n\\label{fig:table-climbing-simulation-stair}\n\\end{figure}\n\n\\paragraph{Plan 2: helping motion with the hand}\nThis second experiment was designed to showcase a more complex plan involving the use of the HRP-2 upper limbs.\nIn this experiment, we extracted 11 plane surfaces from the point cloud.\nThe point cloud processing took $2.7$ s.\nThe planner generated a path of 19 nodes, some of which are depicted in~\\Figref{fig:crapahut-simulation}{}.\nTo climb on the table, the robot uses its left arm and walks on an inclined slope before climbing a step at the end of the slope, once again with the help of its left arm as support.\nIn total, 40 nodes were generated and the planning time was $122.3$ s.\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\linewidth]{hrp2slope.png}\n  \\caption{Slope and step climbing simulation. 
Of the 19 nodes of the path, we depicted the nodes 1, 7, 12, 15, 17 and 18.}\n\\label{fig:crapahut-simulation}\n\\end{figure}\n\n\\subsection{Discussion}\n\\label{sub:discussion_planning_pcl}\n\nThis work is a first step toward a fully sensory-perception-based multi-contact planner with posture generation on acquired environment data.\nWe devised a data processing algorithm that extracts relevant information from a point cloud to generate an environment in which classical posture generation formulations can be used.\n%Our main goal was to devise a process the acquired data to be able to extract the relevant informations from it that would allow to generation on acquired environment.\n%It raises several interesting questions on the way to adapt our MCP\\@.\nIn this work, we extract convex planar surfaces from the acquired point cloud and generate postures where the robot makes contact with those surfaces.\n%The simulation results revealed that our MCP and PG do not require major adjustments to handle egocentric sensory data.\nThis could straightforwardly be extended to extract other geometric primitives such as spheres and cylinders from the point cloud.\n\nA direct extension of this work would be to recognize known objects in the environment and use their 3D models in the posture generation.\nFor example, even in a disaster situation such as the Fukushima nuclear power plant, the available inside exploration videos show that many objects kept their shape and were not totally destroyed (e.g.\\ doors, stairs, ladders, etc.).\nHaving their models would thus still make it possible to rely on 3D models to plan contacts and generate postures.\nFor objects of complex shapes, our approach for making contact with complex surfaces based on a Catmull-Clark approximation presented in Section~\\ref{sec:contact_with_complex_surfaces} could be leveraged.\nIn fact, even for unknown objects and parts of the environment, it would be possible to generate a mesh of their surfaces from the acquired point cloud and a smooth parametrization of it on which contacts can be generated through our PG.\nSince a point cloud acquired from a single point of view provides only partial information, the acquired meshes should be updated and improved during the execution of the plan with the acquisition of new point clouds from different points of view.\n\nAlternatively, postures could be generated directly with contacts on point clouds.\n%The obvious point by point approach where each point is considered as a contact candidate would be prohibitively expensive for the MCP.\nApproximating the mesh with a Catmull-Clark surface and smoothly parametrizing the contact location on it could be a solution.\nBut again, that would require some pre-processing of the point cloud to generate the surfaces.\nIn~\\cite{hauser:isrr:2013}, a contact formulation between robot and point clouds is presented, where the mesh representing each object is thickened by a thin layer, and a little interpenetration of those layers is allowed.\nA similar formulation could be used in posture generation to describe the contact constraints when a point cloud is involved.\n\n%PCL provides only partial information, it is then necessary to drive the planner by the mission objective and also a perceptual one (e.g.\\ SLAM).\n\nBesides the issue of generating postures on acquired point cloud data, this study raises some questions regarding the handling of uncertainties in posture generation.\nSince the acquired point cloud data is subject to measurement errors, it would be 
beneficial to be able to generate postures that are robust to those errors.\nFor example, given a plane $P$ extracted from a point cloud, its location in space is known with a limited precision $\\delta$ in its orientation and position.\nIt would serve our purpose to devise a formulation that guarantees that for any imprecision smaller than a given $\\delta$, there exists a solution configuration in which the contact with $P$ is maintained.\nBut such a formulation is not straightforward to devise, and as of now, we have not found any promising leads, except using a space-grid discretisation of the possible uncertainty and generating a solution posture for each point of the grid.\nAlthough this could be solved fairly quickly in most cases by initializing one posture generation with the solution found for a neighboring grid point, such an approach does not guarantee that solutions exist for any point between the grid points.\n\n%Besides the issue of generating postures on acquired point cloud data, this study raises some questions regarding the handling of uncertainties in multi-contact planning and posture generation.\n%%One advantage of our approach is to avoid having to precisely position the robot in the environment prior to the plan execution.\n%%Yet, for now, the positioning is done once and for all with the acquisition of the point cloud, before planning.\n%When executing the plan, the robot might deviate from it, for example, a support might move, or a foot might contact a few centimeters away from where it was planned to, or the support might not be at its expected position because of measurement errors in the acquired point cloud.\n%We thus need to close the loop between the planning and its execution.\n%To do so, we need robust detection of discrepancies to be made by the robot.\n%This can be achieved by combining point cloud-based SLAM with contact sensing.\n%Once the robot has acquired knowledge of its deviation or of the position error of the support, it has to adapt to it.\n%This adaptation should not be time-consuming so as to not interrupt the execution for too long.\n%A slight deviation can be recovered by simply positioning with care the next contacts and the closed-loop multi-contact controller shall work on guarded-motions basis.\n%However, a bigger deviation might make the next contact stances infeasible.\n%In this later case, re-planning the next contacts is necessary to go back to the plan.\n%How many contacts have to be re-planned depends on the context.\n%In difficult situations, changes in contacts might cascade up to requiring an entire re-planning.\n%Recognizing the situation should be the task of a local planner that re-plans as few steps as possible.\n%In case too much contacts must be reprocessed, the re-planning phase can be stopped before it ends and resumed at the next step.\n\n%This partial planning approach can also be seen in the context of semi-autonomous motion: an operator gives the overall direction with, for example, a joystick, and the planner reactively finds a sequence of a few contact sets to move as closely as possible in this desired direction.\n%The operator is thus in charge of preventing the robot from getting stuck while the planner only concentrates on finding the correct contacts over a short time window.\n\n%Another question stems from the partial knowledge of the environment: it is not possible to give a guide path as we used to do with the 3D models.\n%This guide path will necessarily be very crude, either a straight line to the desired 
position or a plan in a known environment before it was changed (for example in the case of a disaster in a plant).\n%The planning must then be driven also by the need of getting information about the environment, for example reaching a viewpoint allowing to see parts of the environment that were hidden before, filling empty spaces and possibly adding new supports on-the-fly.\n%Planning is then only partial since necessary part of the environment might be unknown.\n\n%Later on, the discovery of the environment might be improved by the use of other sensors.\n%One can then imagine having the robot test a contact to ensure a given surface is fit for support or that it is precisely at the position measured by vision.\n%If the surface is not, this is another kind of discrepancy in the plan that needs to be handled by re-planning.\n\n%We present preliminary results in extending our previous work in multi-contact planning to operate directly on environment that is built on-the-fly from a robot's embedded sensors.\n%This first study makes use of depth camera and implemented modules from the PCL which extract planar surfaces that are fed to our MCP\\@.\n%We intentionally seek for technical implementations that minimize the changes on our existing MCP software and illustrate successful plans generated on the basis of 3D point clouds solely.\n\n%Of course, this does not mean that we are fully satisfied since our results also suggest additional future work.\n\n%Although we choose to treat the case of not having 3D models, we believe that the implementation of an MCP with knowledge of the 3D model is necessary.\n%For example, even in a disaster situation as in Fukushima nuclear power plants, the inside exploration videos available show that many objects kept their shape and were not totally destroyed (e.g.\\ door, stairs, ladders, etc.).\n%So having their models would then still permit our MCP to rely on 3D models to plan contacts for motion.\n%PCL provides only partial information, it is then necessary to drive the planner by the mission objective and also a perceptual one (e.g.\\ SLAM).\n%Of course, our ultimate future plan is to handle uncertainties, control and planning recovery of discrepancies when they occur.\n\n%\\section{Examples of postures generation}\n%\\label{sec:examples_of_postures_generation}\n\n%In this section, we present some scenarios solved with our posture generator and solver.\n%\\paragraph{Application of a desired force}\n%In the context of using a robot to help humans in the construction of an aircraft, we formulate a problem where the HRP-4 robot is required to apply a desired force on a given point of the airplanes structure.\n%We denote $f_g$ the target value of the force in direction $\\vec{d}$ and $\\vec{f_c}$ the actual force at the contact point and the could simply implement the following function $f_{target}(f_c)$ to minimize to get the desired result:\n%\\begin{equation}\n  %f_{target}(f_c) = {\\vec{f_c}\\cdot\\vec{d}-f_g)}^2\n%\\end{equation}\n%In addition to that cost function, the robot needs keep its foot on the ground, its left hand is used to lean on a beam of the structure to allow a longer reach with the right hand that is in contact with the wall.\n%Those constraints must be satisfied while respecting the joint and torque limits of the robot, maintaining balance and avoiding auto-collisions.\n%The result is depicted in~\\Figref{fig:comanoid}\n%\\begin{figure}\n  %\\centering\n  %\\includegraphics[width=0.6\\linewidth]{comanoid.png}\n  %\\caption{HRP-4 
applying a desired force on a contact point with its right hand}\n%\\label{fig:comanoid}\n%\\end{figure}\n\n%\\paragraph{Generating postures in highly constrained environment}\n%In this scenario, the HRP-4 robot is required to evolve in a highly constrained environment, in the sense that many obstacles limit its displacement, and it must avoid collision with them.\n%The goal is to reach a panel of circuit breakers (in green on~\\Figref{fig:getafe}) to activate them.\n%We generated a sequence of steps to reach that panel and then some postures to reach different points of this panel.\n%In figure~\\ref{fig:getafe}, we show some extracts of this sequence of postures.\n%In each of them the two feet are in planar contact with the ground, alternatively bearing forces, while always respecting the joint, torque limits, collision avoidance and stability constraints.\n%On the last posture the top left corner of the panel is reached with the tip of the right hand of the robot.\n%\\begin{figure}\n  %\\centering\n  %\\includegraphics[width=\\linewidth]{getafeSequence.png}\n  %\\caption{Extracts of a sequence of postures with HRP-4 entering a constrained environment to reach a point on a panel of circuit breakers (in green)}\n%\\label{fig:getafe}\n%\\end{figure}\n\n\n\\section{Conclusion}\n\\label{sec:conclusion_chap_6}\n\nIn this section, we presented several evaluations and potential applications of our solver and posture generator.\nWe first evaluated the influence of the formulation on manifolds on a problem making substantial use of them and showed that, as expected, this formulation outperforms the classical approach in terms of convergence times.\nWe then proposed an approach to estimate the success rate of our posture generator with respect to the distance between the initial guess and the solution.\nWe then presented a different application of our solver: the inertial identification of a rigid body.\nIn that case, working with manifolds ensures the \\emph{full physical consistency} of the inertial parameters in any situation and all along the optimization process.\nThere again, the formulation with manifolds outperforms the classical approach.\nWe then presented an application of posture generation to the problem of planning in a real environment, proposing a method to segment a sensory-acquired environment so that postures can be generated on it.\nFinally, we presented some use-case scenarios in which our posture generator has been used effectively.\n", "meta": {"hexsha": "a9d523dad6f96988463a3baec29730f17cab9ee6", "size": 71033, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter6-Evaluation/chapter6.tex", "max_stars_repo_name": "stanislas-brossette/phd-thesis", "max_stars_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter6-Evaluation/chapter6.tex", "max_issues_repo_name": "stanislas-brossette/phd-thesis", "max_issues_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter6-Evaluation/chapter6.tex", "max_forks_repo_name": "stanislas-brossette/phd-thesis", "max_forks_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_forks_repo_licenses": 
["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 90.1434010152, "max_line_length": 762, "alphanum_fraction": 0.780510467, "num_tokens": 16848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388083214156, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5658118526426477}}
{"text": "%% This file is part of the CroMagnon project\n%% Copyright 2015 David W. Hogg\n\n% To-Do\n% -----\n% - Clean up all mentions of ``DWH'' in the code.\n\n\\documentclass[12pt]{article}\n\\usepackage{url}\n\\input{vc}\n\n% useless formatting\n\\linespread{1.08333} % 10/13 spacing\n\\setlength{\\parindent}{2\\baselineskip}\\addtolength{\\parindent}{-1.25ex}\n\\setlength{\\parskip}{0ex}\n\n% math definitions\n\\newcommand{\\normal}{N}\n\\newcommand{\\unitvec}[1]{\\hat{#1}}\n\\newcommand{\\ehat}{\\unitvec{e}}\n\\newcommand{\\xhat}{\\unitvec{x}}\n\\newcommand{\\yhat}{\\unitvec{y}}\n\\newcommand{\\zhat}{\\unitvec{z}}\n\\newcommand{\\transpose}{^{\\mathsf{T}}}\n\\newcommand{\\inverse}{^{-1}}\n\\newcommand{\\given}{\\,|\\,}\n\\newcommand{\\like}{L}\n\\newcommand{\\setof}[1]{\\left\\{{#1}\\right\\}}\n\\newcommand{\\dd}{\\mathrm{d}}\n\n\\begin{document}\\sloppy\\sloppypar\n\n\\section*{Inferring a 3-d Gaussian variance tensor \\\\\n          from 2-d projections at unknown angles}\n\\noindent\nDavid W. Hogg (NYU) (SCDA) (MPIA)\n\n\\bigskip\n\n\\section{Problem statement}\n\nIn a three-dimensional (3-d) space there exists a 3-d multivariate\nGaussian-shaped density blob $\\rho(x)$\n\\begin{eqnarray}\n  \\rho(x) &=& a\\,\\normal(x\\given 0, V)\n  \\quad ,\n\\end{eqnarray}\nwhere $a$ is an amplitude and $\\normal(x\\given\\mu, V)$ is a Gaussian\npdf for $x$ with mean $\\mu$ and variance tensor $V$.\nWithout loss of generality we have presumed zero mean $\\mu$, and a\nvariance tensor $V$ with principal axes aligned with the coordinate\naxes $\\ehat_1$, $\\ehat_2$, and $\\ehat_3$ directions:\n\\begin{eqnarray}\n  V &=& \\sum_{d=1}^3 \\sigma^2_d \\, \\ehat_d\\cdot\\ehat_d\\transpose\n  \\quad ,\n\\end{eqnarray}\nwhere $V$ is a symmetric $3\\times3$ tensor, the vectors are column\northonormal unit vectors (so the vector products\n$\\ehat_d\\cdot\\ehat_d\\transpose$ are outer products), the $V_d$ are\nscalar eigenvalues (principal variances).\nStill without loss of generality we can assume that the corresponding\neigenvalues of the tensor are ordered\n\\begin{eqnarray}\n  \\sigma^2_1 \\geq \\sigma^2_2 \\geq \\sigma^2_3 > 0\n  \\quad .\n\\end{eqnarray}\n\nWe don't get to observe this Gaussian blob or its variance tensor\ndirectly.\nInstead, we get $N$ two-dimensional (2-d) images $y_n$, each of which\ncontains a noisy 2-d projection of the 3-d Gaussian blob,\nprojected along an unknown three-vector direction $\\zhat_n$.\nIn detail, each image $y_n$ is a square array of $M$ pixels at integer\ncoordinates $\\xi_m$ in the projection plane, and each image pixel $m$\ncontains an evaluation of a shifted (that is, non-zero mean),\narbitrarily rotated (in 2-d), 2-d Gaussian blob plus some additive iid\nGaussian noise.\nNone of the projection directions $\\zhat_n$, the 2-d rotations, and\nthe 2-d shifts are known; only the noisy images are given.\n\nThe variance tensor of the 2-d Gaussian blob appearing in image $y_n$\nis given by the projection of the 3-d variance tensor:\n\\begin{eqnarray}\n  V_n &=& P_n\\transpose\\cdot V\\cdot P_n\n  \\quad ,\n\\end{eqnarray}\nwhere $P_n$ is a $3\\times 2$ rectangular projection matrix.\nThe projection matrices $P_n$ have three degrees of freedom or three\nparameters (in some sense), which can be thought of as the Euler\nangles, which are two angles on the sphere to define the direction of\n$\\zhat_n$ and one more angle to define the direction of the x-axis of\nthe image in the projection plane.\nDWH: INSERT MATH HERE.\n\nDWH: SHOW EXAMPLE DATA HERE.\n\nThe question 
is:\nGiven these data, \\emph{can we infer the 3-d variance tensor $V$?}\nAnd if we can, how accurately, and what does it depend on?\nWe are going to attempt the inference by maximizing a marginalized\nlikelihood, marginalizing out all $5\\,N$ unknown projection, rotation,\nand shift parameters.\n\nThere is another similar problem we could pose in which each datum\n$y_n$ is a \\emph{sampling} from a two-dimensional projection of the\nGaussian blob, possibly also with a uniform background (dc) component.\nThat's beyond our current scope!\n\n\\section{Marginalized likelihood and sampling}\n\nThe likelihood we are going to use is based on the image formation\nmodel\n\\begin{eqnarray}\n  y_{nm} &=& a\\,\\normal(\\xi_m\\given\\mu_n,V_n) + \\sigma\\,g_{nm}\n  \\quad,\n\\end{eqnarray}\nwhere $y_{nm}$ is the $m$th pixel of image $y_n$,\n$a$ is the overall amplitude of the 3-d Gaussian blob,\n$\\xi_m$ is the 2-d vector location of pixel $m$,\n$\\mu_n$ is the 2-d shift or center of the projected Gaussian blob,\n$V_n$ is the projected 2-d variance tensor,\n$\\sigma$ is the amplitude of the iid noise,\nand $g_{nm}$ is a draw from a univariate Gaussian of zero mean and unit variance.\n(Apologies for overloading the $\\sigma$ variable.\nAlternate suggestions welcome.)\nThis image formation model implies this (unmarginalized)\nlog-likelihood (dropping an additive constant):\n\\begin{eqnarray}\n  -2\\,\\ln\\like(a,V,\\setof{\\phi_n}_{n=1}^N) &=& \\sum_{n=1}^N \\sum_{m=1}^M \\frac{[y_{nm} - Y_{nm}]^2}{\\sigma^2}\n  \\\\\n  Y_{nm} &\\equiv& a\\,\\normal(\\xi_m\\given\\mu_n,V_n)\n  \\\\\n  V_n &\\equiv& P_n\\transpose\\cdot V\\cdot P_n\n  \\\\\n  \\phi_n &\\equiv& \\setof{P_n, \\mu_n}\n  \\quad,\n\\end{eqnarray}\nwhere the (unmarginalized) likelihood depends on the 3-d parameters\n$\\setof{a,V}$ (4 degrees of freedom) and also the projection\nparameters $\\setof{\\phi_n}_{n=1}^N$ ($5\\,N$ degrees of freedom).\n\nNow what we really want is to marginalize out the projections and\noffsets.\nThe marginalized log-likelihood (again, dropping a constant) looks like\n\\begin{eqnarray}\n  -2\\,\\ln\\like(a,V) &=& -2\\,\\sum_{n=1}^N\\ln\\int\\exp(-\\frac{1}{2}\\,\\sum_{m=1}^M \\frac{[y_{nm} - Y_{nm}]^2}{\\sigma^2})\\,p(\\phi_n)\\,\\dd\\phi_n\n  \\quad,\n\\end{eqnarray}\nwhere $p(\\phi)$ is the prior over the 5 nuisance parameters for any\nindividual datum, and the likelihood is now a function only of\n$\\setof{a,V}$ because we have marginalized out all $5\\,N$ nuisance\nparameters.\nNote that this horror is a log of an integral of an exponential of\nsomething simple.\nThat's going to hurt (and going to suggest to our competitors to use\nE-M, but we won't be so weak).\nThe prior $p(\\phi)$ over nuisance parameters will be isotropic in\nprojection angles, isotropic in rotations in the 2-d space, and\nuninformative---somehow---in offsets.\n\nImagine we had a large, dense, fair set of $K$ samples\n$\\phi_k\\equiv\\setof{P_k, \\mu_k}$ from the prior $p(\\phi)$ over\nnuisance parameters.\nThen we could approximate the marginalization integrals with sums, and\nobtain (once again dropping a constant)\n\\begin{eqnarray}\n  -2\\,\\ln\\like(a,V) &\\approx& -2\\,\\sum_{n=1}^N\\ln\\frac{1}{K}\\,\\sum_{k=1}^K\\exp(-\\frac{1}{2}\\,\\sum_{m=1}^M \\frac{[y_{nm} - Y_{km}]^2}{\\sigma^2})\n  \\\\\n  Y_{km} &\\equiv& a\\,\\normal(\\xi_m\\given\\mu_k,V_k)\n  \\\\\n  V_k &\\equiv& P_k\\transpose\\cdot V\\cdot P_k\n  \\quad,\n\\end{eqnarray}\nwhere now the logsumexp has reared its head.\n\nDWH: Now notice that we don't need the full set of 
samples to\napproximate this integral well.\n\n\\section{Method and implementation}\n\nDWH: To make fake data, we do the following...\n\nDWH: How to re-sample from the prior, cleanly?  Or burn in?  Or whatever.\n\nDWH: Do we have to re-sample from the full prior, or can we sample in\na neighborhood?  If the latter, how do we figure out whether our\nneighborhood was big enough?\n\nDWH: How to initialize?\n\n\\section{Experiments}\n\n\\section{Discussion}\n\n\n\\bigskip\n\nEverything in this project, including this document itself, is\navailable at \\giturl.  If you want this exact version, clone at git\nhash \\texttt{\\githash~(\\gitdate)}.\n\n\\end{document}\n", "meta": {"hexsha": "feb9e180350af1ed775e36c8e52ac3f494dec43d", "size": 7326, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/old/gaussian.tex", "max_stars_repo_name": "davidwhogg/CryoEM", "max_stars_repo_head_hexsha": "4ced7aa2216efeb9cdb657ab4ef47bf7ba0709e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-06-18T15:48:00.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-18T15:48:00.000Z", "max_issues_repo_path": "papers/old/gaussian.tex", "max_issues_repo_name": "davidwhogg/CroMagnon", "max_issues_repo_head_hexsha": "4ced7aa2216efeb9cdb657ab4ef47bf7ba0709e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/old/gaussian.tex", "max_forks_repo_name": "davidwhogg/CroMagnon", "max_forks_repo_head_hexsha": "4ced7aa2216efeb9cdb657ab4ef47bf7ba0709e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.0886699507, "max_line_length": 143, "alphanum_fraction": 0.7263172263, "num_tokens": 2278, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673269042767, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5657217884328091}}
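As a concrete illustration of the sample-based approximation of the marginalized likelihood above, here is a minimal numpy/scipy sketch (the function name and array shapes are our own assumptions; rendering the model images $Y_{km}$ from the prior samples $\\phi_k$ is left out):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import logsumexp\n\ndef neg2_log_like(y, Y, sigma):\n    # y: (N, M) observed images (flattened pixels); Y: (K, M) model\n    # images rendered at K samples phi_k from the prior p(phi);\n    # sigma: noise amplitude.  Returns the Monte Carlo estimate of\n    # -2 ln L(a, V) using a numerically stable logsumexp.\n    chi2 = ((y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2) / sigma**2\n    return -2.0 * (logsumexp(-0.5 * chi2, axis=1)\n                   - np.log(Y.shape[0])).sum()\n\\end{verbatim}\n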
{"text": "\\section{Prob 1. a)}\n\n\\lstinputlisting{poisson.py}\n\nOutput the Poisson distribution with input ($\\lambda$, k) = (1,0), (5,10), (3,21), and (2.6, 40). \n\\lstinputlisting{poisson.txt}\n", "meta": {"hexsha": "b50e0299be40ed884b2da547b856e1c36f05b92e", "size": 181, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Hand_in_exercise_1/poisson.tex", "max_stars_repo_name": "rywjhzd/Numerical-Recipes-In-Astrophysics", "max_stars_repo_head_hexsha": "1f4bf40c504cd5f0117a9986c2756dcfd5bfc5c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Hand_in_exercise_1/poisson.tex", "max_issues_repo_name": "rywjhzd/Numerical-Recipes-In-Astrophysics", "max_issues_repo_head_hexsha": "1f4bf40c504cd5f0117a9986c2756dcfd5bfc5c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Hand_in_exercise_1/poisson.tex", "max_forks_repo_name": "rywjhzd/Numerical-Recipes-In-Astrophysics", "max_forks_repo_head_hexsha": "1f4bf40c504cd5f0117a9986c2756dcfd5bfc5c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.8571428571, "max_line_length": 98, "alphanum_fraction": 0.6685082873, "num_tokens": 66, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8128673269042767, "lm_q2_score": 0.6959583187272711, "lm_q1q2_score": 0.5657217781806314}}
{"text": "\n\\section{Convergence Diagnostics} % (fold)\n%\\label{sec:convergence_diagnostics}\n\nValid inferences from sequences of MCMC samples are based on the assumption that samples are derived from the true posterior distribution of interest. Theory guarantees this condition as the number of iterations approaches infinity. It is important, therefore, to determine the minimum number of samples required to ensure a reasonable approximation to the target posterior density. Unfortunately, no universal threshold exists across all problems, so convergence must be assessed independently each time MCMC estimation is performed. The procedures for verifying convergence are collectively known as convergence diagnostics.\n\nOne approach to analyzing convergence is analytical, whereby the variance of the sample at different sections of the chain are compared to that of the limiting distribution. These methods use distance metrics to analyze convergence, or place theoretical bounds on the sample variance, and though they are promising, they are generally difficult to use and are not prominent in the MCMC literature. More common is a statistical approach to assessing convergence. Statistical techniques, rather than considering the properties of the theoretical target distribution, only consider the statistical properties of the observed chain. Reliance on the sample alone restricts such convergence criteria to heuristics, and hence, convergence cannot be guaranteed. Although evidence for lack of convergence using statistical convergence diagnostics will correctly imply lack of convergence in the chain, the absence of such evidence will not \\emph{guarantee} convergence in the chain. Nevertheless, negative results for one or more criteria will provide some measure of assurance to users that their sample will provide valid inferences.\n\nFor most simple models, convergence will occur quickly, sometimes within the first several hundred iterations, after which all remaining samples of the chain may be used to calculate posterior quantities. For many more complex models, convergence requires a significantly longer burn-in period; sometimes  orders of magnitude more samples are needed. Frequently, lack of convergence will be caused by poor \\emph{mixing} (Figure \\ref{fig:mix}). Mixing refers to the degree to which the Markov chain explores the support of the posterior distribution. Poor mixing may stem from inappropriate proposals (if one is using the Metropolis-Hastings sampler) or from attempting to estimate models with highly correlated variables.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[height=3in]{poormixing.pdf}\n\\caption{An example of a poorly-mixing sample in two dimensions. Notice that the chain is trapped in a region of low probability relative to the mean (dot) and variance (oval) of the true posterior quantity.}\n\\label{fig:mix}\n\\end{center}\n\\end{figure}\n\n\\subsection{Informal Methods}\n\nThe most straightforward approach for assessing convergence is based on simply plotting and inspecting traces and histograms of the observed MCMC sample. If the trace of values for each of the stochastics exhibits asymptotic behaviour\\footnote{Asymptotic behaviour implies that the variance and the mean value of the sample stays relatively constant over some arbitrary period.} over the last $m$ iterations, this may be satisfactory evidence for convergence. 
A similar approach involves plotting a histogram for every set of $k$ iterations (perhaps 50-100) beyond some burn-in threshold $n$; if the histograms are not visibly different among the sample intervals, this is some evidence for convergence. Note that such diagnostics should be carried out for each stochastic estimated by the MCMC algorithm, because convergent behaviour by one variable does not imply evidence for convergence for other variables in the model. An extension of this approach can be taken when multiple parallel chains are run, rather than just a single, long chain. In this case, the final values of $c$ chains run for $n$ iterations are plotted in a histogram; just as above, this is repeated every $k$ iterations thereafter, and the histograms of the endpoints are plotted again and compared to the previous histogram. This is repeated until consecutive histograms are indistinguishable.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=3in]{metastable.pdf}\n\\caption{An example of metastability in a two-dimensional parameter space. The chain appears to be stable in one region of the parameter space for an extended period, then unpredictably jumps to another region of the space.}\n\\label{fig:metas}\n\\end{center}\n\\end{figure}\n\nAnother \\emph{ad hoc} method for detecting convergence is to examine the traces of several MCMC chains initialized with different starting values. Overlaying these traces on the same set of axes should (if convergence has occurred) show each chain tending toward the same equilibrium value, with approximately the same variance. Recall that the tendency for some Markov chains to converge to the true (unknown) value from diverse initial values is called \\emph{ergodicity}. This property is guaranteed by the reversible chains constructed using MCMC, and should be observable using this technique. Again, however, this approach is only a heuristic method, and cannot always detect lack of convergence, even though chains may appear ergodic.\n\nA principal reason that evidence from informal techniques cannot guarantee convergence is a phenomenon called metastability. Chains may appear to have converged to the true equilibrium value, displaying excellent qualities by any of the methods described above. However, after some period of stability around this value, the chain may suddenly move to another region of the parameter space (Figure \\ref{fig:metas}). This period of metastability can sometimes be very long, and therefore escape detection by these convergence diagnostics. Unfortunately, there is no statistical technique available for detecting metastability.\n\n\\subsection{Formal Methods}\n\nAlong with the \\emph{ad hoc} techniques described above, a number of more formal methods exist which are prevalent in the literature. These are considered more formal because they are based on existing statistical methods, such as time series analysis.\n\nPyMC currently includes functions for two formal convergence diagnostic procedures. The first, proposed by \\citet{Geweke:1992gm}, is a time-series approach that compares the mean and variance of segments from the beginning and end of a single chain.\n\\begin{equation}\nz = \\frac{\\bar{\\theta}_a - \\bar{\\theta}_b}{\\sqrt{Var(\\theta_a) + Var(\\theta_b)}}\n\\end{equation}\nwhere $a$ is the early interval and $b$ the late interval. If the z-scores (theoretically distributed as standard normal variates) of these two segments are similar, it can provide evidence for convergence. 
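\nFor illustration, a naive numpy rendering of this z-score is given below (our own sketch: it uses plain sample variances in the denominator, whereas careful implementations, including PyMC's \\code{geweke} described next, estimate the segment variances from spectral densities; a z-score near zero indicates similar segment means):\n\\begin{verbatim}\nimport numpy as np\n\ndef geweke_z(trace, first=0.1, last=0.5):\n    # Compare the mean of the first 10% of the chain with the mean of\n    # its last 50%, normalising by the summed segment variances.\n    x = np.asarray(trace)\n    a = x[: int(first * len(x))]\n    b = x[int((1.0 - last) * len(x)):]\n    return (a.mean() - b.mean()) / np.sqrt(a.var() + b.var())\n\\end{verbatim}\n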
PyMC calculates z-scores of the difference between various initial segments along the chain, and the last 50\\% of the remaining chain. If the chain has converged, the majority of points should fall within 2 standard deviations of zero.\n\nDiagnostic z-scores can be obtained by calling the \\code{geweke} function. It accepts either (1) a single trace, (2) a Node or Stochastic object, or (3) an entire Model object.\n\n\\paragraph{Method Usage}\n\\begin{verbatim}\ngeweke(pymc_object, first=0.1, last=0.5, intervals=20)\n\\end{verbatim}\n\\begin{itemize}\n\n\\item \\verb=pymc_object=: The object that is or contains the output trace(s).\n\n\\item \\verb=first= (optional): First portion of chain to be used in Geweke diagnostic. Defaults to 0.1 (i.e. first 10\\% of chain).\n\n\\item \\verb=last= (optional): Last portion of chain to be used in Geweke diagnostic. Defaults to 0.5 (i.e. last 50\\% of chain).\n\n\\item \\verb=intervals= (optional): Number of sub-chains to analyze. Defaults to 20.\n\\end{itemize}\n\nThe resulting scores are best interpreted graphically, using the \\code{geweke_plot} function. This displays the scores in series, in relation to the 2 standard deviation boundaries around zero. Hence, it is easy to see departures from the standard normal assumption.\n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[height=6in]{geweke.png}\n\\caption{Sample plots of Geweke z-scores for a variable using \\textbf{geweke_plot}. The occurrence of the scores well within 2 standard deviations of zero does not indicate lack of convergence (top), while deviations exceeding 2 standard deviations suggest that additional samples are required to achieve convergence (bottom).}\n\\label{fig:geweke}\n\\end{center}\n\\end{figure}\n\n\\code{geweke_plot} takes either a single set of scores, or a dictionary of scores (output by \\code{geweke} when an entire Sampler is passed) as its argument:\n\n\\paragraph{Method Usage}\n\\begin{verbatim}\ngeweke_plot(scores, name='geweke', format='png', suffix='-diagnostic', \\\\\n                path='./', fontmap = {1:10, 2:8, 3:6, 4:5, 5:4}, verbose=1)\n\\end{verbatim}\n\\begin{itemize}\n\n\\item \\verb=scores=: The object that contains the Geweke scores. Can be a list (one set) or a dictionary (multiple sets).\n\n\\item \\verb=name= (optional): Name used for output files. For multiple scores, the dictionary keys are used as names.\n\n\\item \\verb=format= (optional): Graphic output file format (defaults to \\emph{png}).\n\n\\item \\verb=suffix= (optional): Suffix to filename (defaults to \\emph{-diagnostic}).\n\n\\item \\verb=path= (optional): The path for output graphics (defaults to working directory).\n\n\\item \\verb=fontmap= (optional): Dictionary containing the font map for the labels of the graphic.\n\n\\item \\verb=verbose= (optional): Verbosity level for output (defaults to 1).\n\\end{itemize}\n\nTo illustrate, consider a model \\code{my_model} that is used to instantiate an MCMC sampler. 
The sampler is then run for a given number of iterations:\n\\begin{verbatim}\n\t>>> S = pm.MCMC(my_model)\n\t>>> S.sample(10000, burn=5000)\n\\end{verbatim}\nIt is easiest simply to pass the entire sampler \\code{S} to the \\code{geweke} function:\n\\begin{verbatim}\n\t>>> scores = pm.geweke(S, intervals=20)\n\t>>> pm.Matplot.geweke_plot(scores)\n\\end{verbatim}\nAlternatively, individual stochastics within \\code{S} can be analyzed for convergence:\n\\begin{verbatim}\n\t>>> trace = S.alpha.trace()\n\t>>> alpha_scores = pm.geweke(trace, intervals=20)\n\t>>> pm.Matplot.geweke_plot(alpha_scores, 'alpha')\n\\end{verbatim}\nAn example of convergence and non-convergence of a chain using \\code{geweke_plot} is given in Figure \\ref{fig:geweke}. \n\n\nThe second diagnostic provided by PyMC is the \\citet{raftery} procedure. This approach estimates the number of iterations required to reach convergence, along with the number of burn-in samples to be discarded and the appropriate thinning interval. A separate estimate of both quantities can be obtained for each variable in a given model.\n\nAs the criterion for determining convergence, the Raftery and Lewis approach uses the accuracy of estimation of a user-specified quantile. For example, we may want to estimate the quantile $q=0.975$ to within $r=0.005$ with probability $s=0.95$. In other words,\n\n\\begin{equation}\n\tPr(|\\hat{q}-q| \\le r) = s\n\\end{equation}\n\nFrom any sample of $\\theta$, one can construct a binary chain:\n\n\\begin{equation}\n\tZ^{(j)} = I(\\theta^{(j)} \\le u_q)\n\\end{equation}\n\nwhere $u_q$ is the quantile value and $I$ is the indicator function. While $\\{\\theta^{(j)}\\}$ is a Markov chain, $\\{Z^{(j)}\\}$ is not necessarily so. In any case, the serial dependency among $Z^{(j)}$ decreases as the thinning interval $k$ increases. A value of $k$ is chosen to be the smallest value such that the first order Markov chain is preferable to the second order Markov chain.\n\nThis thinned sample is used to determine the number of burn-in samples. This is done by comparing the remaining samples from burn-in intervals of increasing length to the limiting distribution of the chain. An appropriate value is one for which the truncated sample's distribution is within $\\epsilon$ (arbitrarily small) of the limiting distribution. See \\citet{raftery} or \\citet{Gamerman:1997tb} for computational details. Estimates for sample size tend to be conservative.\n\nThis diagnostic is best used on a short pilot run of a particular model, and the results used to parameterize a subsequent sample that is to be used for inference.\n\n\n\\paragraph{Method Usage}\n\\begin{verbatim}\nraftery_lewis(pymc_object, q, r, s=.95, epsilon=.001, verbose=1)\n\\end{verbatim}\n\\begin{itemize}\n\n\t\\item \\verb=pymc_object=: The object that is or contains the output trace(s). 
\n\n    \\item \\verb=q=: Desired quantile to be estimated.\n\n    \\item \\verb=r=: Desired accuracy for quantile.\n\n    \\item \\verb=s= (optional): Probability of attaining the requested accuracy (defaults to 0.95).\n\n    \\item \\verb=epsilon= (optional): Half width of the tolerance interval required for the q-quantile (defaults to 0.001).\n\n    \\item \\verb=verbose= (optional): Verbosity level for output (defaults to 1).\n\\end{itemize}\n\nThe code for \\code{raftery_lewis} is based on the FORTRAN program \\emph{gibbsit}, which was written by Steven Lewis.\n\nFor example, consider again a sampler \\code{S} run for some model \\code{my_model}:\n\\begin{verbatim}\n\t>>> S = pm.MCMC(my_model)\n\t>>> S.sample(10000, burn=5000)\n\\end{verbatim}\nOne can pass either the entire sampler \\code{S} or any stochastic within \\code{S} to the \\code{raftery_lewis} function, along with suitable arguments. Here, we have chosen $q=0.025$ (the lower limit of the equal-tailed 95\\% interval) and error $r=0.01$:\n\\begin{verbatim}\n\t>>> pm.raftery_lewis(S, q=0.025, r=0.01)\n\\end{verbatim}\nThis yields diagnostics as follows for each stochastic of \\code{S}, as well as a dictionary containing the diagnostic quantities:\n\n\\begin{verbatim}\n\t========================\n\tRaftery-Lewis Diagnostic\n\t========================\n\n\t937 iterations required (assuming independence) to achieve 0.01 accuracy\n\twith 95 percent probability.\n\n\tThinning factor of 1 required to produce a first-order Markov chain.\n\n\t39 iterations to be discarded at the beginning of the simulation (burn-in).\n\n\t11380 subsequent iterations required.\n\n\tThinning factor of 11 required to produce an independence chain.\n\\end{verbatim}\n\nAdditional convergence diagnostics are available in the\n\\href{http://lib.stat.cmu.edu/R/CRAN/}{R statistical package}, via the\n\\href{http://www-fis.iarc.fr/coda/}{CODA module}. PyMC includes a method\n\\code{coda} for exporting model traces in a format that may be\ndirectly read by CODA.\n\n\\paragraph{Method Usage}\n\\begin{verbatim}\npm.utils.coda(pymc_object)\n\\end{verbatim}\n\\begin{itemize}\n\n\\item \\verb=pymc_object=: The PyMC sampler for which output is desired.\n\n\\end{itemize}\nCalling \\verb=coda= yields two files, one containing raw trace values (suffix\n\\verb=.out=) and another containing indices to the trace values (suffix\n\\verb=.ind=).\n\n% section convergence_diagnostics (end)\n\n\\subsection{Autocorrelation Plots} % (fold)\n%\\label{sec:autocorrelation_plots}\n\nSamples from MCMC algorithms are usually autocorrelated, due partly to the\ninherent Markovian dependence structure. The degree of autocorrelation can\nbe quantified using the autocorrelation function:\n\\begin{eqnarray*}\n    \\rho_k &=& \\frac{\\mbox{Cov}(X_t,\nX_{t+k})}{\\sqrt{\\mbox{Var}(X_t)\\mbox{Var}(X_{t+k})}} \\\\\n            &=& \\frac{E[(X_t - \\theta)(X_{t+k} - \\theta)]}{\\sqrt{E[(X_t -\n\\theta)^2] E[(X_{t+k} - \\theta)^2]}}\n\\end{eqnarray*}\nPyMC includes a function for plotting the autocorrelation function for each\nstochastic in the sampler (Figure \\ref{fig:autocorr}). 
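\nFor illustration, the sample version of $\\rho_k$ can also be computed directly (a minimal sketch with our own naming, not the PyMC implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef autocorr(trace, maxlag=100):\n    # Sample autocorrelation rho_k for k = 0..maxlag, following the\n    # definition above (theta is estimated by the sample mean).\n    x = np.asarray(trace) - np.mean(trace)\n    n, var = len(x), np.var(x)\n    return np.array([np.dot(x[:n - k], x[k:]) / ((n - k) * var)\n                     for k in range(maxlag + 1)])\n\\end{verbatim}\n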
PyMC's plotting function allows users\nto examine the relationship among successive samples within sampled chains.\nSignificant autocorrelation suggests that chains require thinning prior to\nuse of the posterior statistics for inference.\n\n\\begin{figure}[h]\n        \\begin{center}\n        \\includegraphics[scale=0.4]{autocorr.png}\n    \\end{center}\n    \\caption{Sample autocorrelation plots for two Poisson variables from\nthe coal mining disasters example model.}\n    \\label{fig:autocorr}\n\\end{figure}\n\n\\begin{verbatim}\nautocorrelation(pymc_object, name, maxlag=100, format='png', suffix='-acf',\npath='./', fontmap = {1:10, 2:8, 3:6, 4:5, 5:4}, verbose=1)\n\\end{verbatim}\n\\begin{itemize}\n\t\\item \\verb=pymc_object=: The object that is or contains the output\ntrace(s).\n\n\t\\item \\verb=name=: Name used for output files.\n\n\t\\item \\verb=maxlag=: The highest lag interval for which\nautocorrelation is calculated.\n\n\t\\item \\verb=format= (optional): Graphic output file format\n(defaults to \\emph{png}).\n\n\t\\item \\verb=suffix= (optional): Suffix to filename (defaults to\n\\emph{-acf}).\n\n\t\\item \\verb=path= (optional): The path for output graphics\n(defaults to working directory).\n\n\t\\item \\verb=fontmap= (optional): Dictionary containing the font map\nfor the labels of the graphic.\n\n\t\\item \\verb=verbose= (optional): Verbosity level for output\n(defaults to 1).\n\\end{itemize}\n\nUsing any given model \\code{my_model} as an example, autocorrelation plots can be obtained simply by passing the sampler for that model to the \\code{autocorrelation} function (within the \\code{Matplot} module) directly:\n\\begin{verbatim}\n\t>>> S = pm.MCMC(my_model)\n\t>>> S.sample(10000, burn=5000)\n\t>>> pm.Matplot.autocorrelation(S)\n\\end{verbatim}\nAlternatively, variables within a model can be plotted individually. For example, a hypothetical variable \\code{beta} that was estimated using sampler \\code{S} will yield an autocorrelation plot as follows:\n\\begin{verbatim}\n\t>>> pm.Matplot.autocorrelation(S.beta)\n\\end{verbatim}\n\n% section autocorrelation_plots (end)\n\n\\section{Goodness of Fit} % (fold)\n%\\label{sec:goodness_of_fit}\n\nChecking for model convergence is only the first step in the evaluation of MCMC model outputs. It is possible for an entirely unsuitable model to converge, so additional steps are needed to ensure that the estimated model adequately fits the data. One intuitive way of evaluating model fit is to compare model predictions with actual observations. In other words, the fitted model can be used to simulate data, and the distribution of the simulated data should resemble the distribution of the actual data.\n\nFortunately, simulating data from the model is a natural component of the Bayesian modelling framework. Recall, from the discussion on imputation of missing data, the posterior predictive distribution:\n\n\\begin{equation}\n\tp(\\tilde{y}|y) = \\int p(\\tilde{y}|\\theta) f(\\theta|y) d\\theta\n\\end{equation}\n\nHere, $\\tilde{y}$ represents some hypothetical new data that would be expected, taking into account the posterior uncertainty in the model parameters. Sampling from the posterior predictive distribution is easy in PyMC. The code looks identical to the corresponding data stochastic, with two modifications: (1) the node should be specified as deterministic and (2) the statistical likelihoods should be replaced by random number generators. 
As an example, consider the Poisson data likelihood of the coal mining disasters example:\n\\begin{verbatim}\n\t@pm.stochastic(observed=True, dtype=int)\n\tdef disasters(  value = disasters_array,\n\t                early_mean = early_mean,\n\t                late_mean = late_mean,\n\t                switchpoint = switchpoint):\n\t    \"\"\"Annual occurrences of coal mining disasters.\"\"\"\n\t    return pm.poisson_like(value[:switchpoint],early_mean) +\n\t pm.poisson_like(value[switchpoint:],late_mean)\n\\end{verbatim}\nThis is a mixture of Poisson processes, one with a higher early mean and another with a lower late mean. Here is the corresponding sample from the posterior predictive distribution:\n\\begin{verbatim}\n\t@pm.deterministic\n\tdef disasters_sim(early_mean = early_mean,\n\t                late_mean = late_mean,\n\t                switchpoint = switchpoint):\n\t    \"\"\"Coal mining disasters sampled from the posterior predictive distribution\"\"\"\n\t    return concatenate( (pm.rpoisson(early_mean, size=switchpoint),\n\t pm.rpoisson(late_mean, size=n-switchpoint)))\n\\end{verbatim}\nNotice that \\code{@pm.stochastic} has been replaced with \\code{@pm.deterministic} and \\code{pm.poisson_like} with \\code{pm.rpoisson}. The simulated values from each of the Poisson processes are concatenated together before returning them.\n\nThe degree to which simulated data correspond to observations can be evaluated in at least two ways. First, these quantities can simply be compared visually. This allows for a qualitative comparison of model-based replicates and observations. If there is poor fit, the true value of the data may appear in the tails of the histogram of replicated data, while a good fit will tend to show the true data in high-probability regions of the posterior predictive distribution (Figure~\\ref{fig:gof}).\n\n\\begin{figure}[ht!]\n        \\begin{center}\n        \\includegraphics[height=3in]{gof.png}\n    \\end{center}\n    \\caption{Data sampled from the posterior predictive distribution of a model for some observation \\textbf{x}. The true value of \\textbf{x} is shown by the dotted red line.}\n    \\label{fig:gof}\n\\end{figure}\n\nThe Matplot module in PyMC provides an easy way of producing such plots, via the \\code{gof_plot} function. To illustrate, consider a single observed data point \\code{x} and an array of values \\code{x_sim} sampled from the posterior predictive distribution. The histogram is generated by calling:\n\n\\begin{verbatim}\n\tpm.Matplot.gof_plot(x_sim, x, name='x')\n\\end{verbatim}\n\nA second approach for evaluating goodness of fit using samples from the posterior predictive distribution involves the use of a statistical criterion. For example, the Bayesian p-value \\citep{Gelman:1996gp} uses a discrepancy measure that quantifies the difference between data (observed or simulated) $x$ and the expected value $e$, conditional on some model. One such discrepancy measure is the Freeman-Tukey statistic \\citep{Brooks:2000il}:\n\n\\begin{equation}\n\tD(x|\\theta) = \\sum_j (\\sqrt{x_j}-\\sqrt{e_j})^2\n\\end{equation}\n\nModel fit is assessed by comparing the discrepancies from observed data to those from simulated data. On average, we expect the difference between them to be zero; hence, the Bayesian p-value is simply the proportion of simulated discrepancies that are larger than their corresponding observed discrepancies:\n\n\\begin{equation}\n\tp = Pr[ D(\\text{sim}) > D(\\text{obs}) ]\n\\end{equation}\n\nIf $p$ is very large (e.g. 
$>0.975$) or very small (e.g. $<0.025$), this implies that the model is not consistent with the data, and thus is evidence of lack of fit. Graphically, data and simulated discrepancies plotted together should be clustered along a 45 degree line passing through the origin, as shown in Figure~\\ref{fig:deviate}.\n\n\\begin{figure}[ht!]\n        \\begin{center}\n        \\includegraphics[height=3in]{deviates.png}\n    \\end{center}\n    \\caption{Plot of deviates of observed and simulated data from expected values. The cluster of points symmetrically about the 45 degree line (and the reported p-value) suggests acceptable fit for the modeled parameter.}\n    \\label{fig:deviate}\n\\end{figure}\n\nThe \\code{discrepancy} function in the \\code{diagnostics} module can be used to generate discrepancy statistics from arrays of data, simulated values, and expected values:\n\n\\begin{verbatim}\n\tD = pm.diagnostics.discrepancy(observed, simulated, expected)\n\\end{verbatim}\n\nA call to this function returns two arrays of discrepancy values (one for observed data and one for simulated data), which can be passed to the \\code{discrepancy_plot} function in the Matplot module to generate a scatter plot, and if desired, a p-value:\n\n\\begin{verbatim}\n\tpm.Matplot.discrepancy_plot(D, name='D', report_p=True)\n\\end{verbatim}\n\nAdditional optional arguments for \\code{discrepancy_plot} are identical to other PyMC plotting functions.\n\n% section goodness_of_fit (end)\n\n", "meta": {"hexsha": "1d581fb8ad9ed770cbcd5fa66806d5cbe222979d", "size": 23710, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/modelchecking.tex", "max_stars_repo_name": "matthew-brett/pymc", "max_stars_repo_head_hexsha": "3a31613f056e7993a449d89bafef5fdaa40d47e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-12-03T09:42:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-06T19:23:29.000Z", "max_issues_repo_path": "docs/modelchecking.tex", "max_issues_repo_name": "matthew-brett/pymc", "max_issues_repo_head_hexsha": "3a31613f056e7993a449d89bafef5fdaa40d47e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-09-27T02:00:41.000Z", "max_issues_repo_issues_event_max_datetime": "2016-09-27T02:15:32.000Z", "max_forks_repo_path": "docs/modelchecking.tex", "max_forks_repo_name": "matthew-brett/pymc", "max_forks_repo_head_hexsha": "3a31613f056e7993a449d89bafef5fdaa40d47e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-10-27T13:27:32.000Z", "max_forks_repo_forks_event_max_datetime": "2017-10-27T13:27:32.000Z", "avg_line_length": 65.3168044077, "max_line_length": 1369, "alphanum_fraction": 0.7741037537, "num_tokens": 5647, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5657217698995188}}
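A minimal numpy sketch of the Freeman-Tukey discrepancy and the resulting Bayesian p-value, as defined in the document above (our own illustration; \\code{pm.diagnostics.discrepancy} is the PyMC implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef freeman_tukey(x, e):\n    # D(x|theta) = sum_j (sqrt(x_j) - sqrt(e_j))^2\n    return ((np.sqrt(x) - np.sqrt(e)) ** 2).sum(axis=-1)\n\ndef bayes_p(observed, simulated, expected):\n    # Proportion of simulated discrepancies larger than the observed one.\n    return (freeman_tukey(simulated, expected)\n            > freeman_tukey(observed, expected)).mean()\n\\end{verbatim}\n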
{"text": "\\chapter{Disjoint product}\n\nI remind that\n$\\coprod X = \\bigcup_{i\\in\\dom X} (i,X_i)$\nfor every indexed family~$X$ of sets.\n\n\\begin{obvious}\n$\\coprod X\\in\\mathbf{Set}(\\dom X,\\im X)$.\n\\end{obvious}\n\n\\begin{defn}\nI will call disjoint join of an indexed family~$\\mathcal{X}$ of filters the following reloid: $\\coprod\\mathcal{X} = \\bigsqcup_{i\\in\\dom f}(\\{i\\}\\times^{\\mathsf{RLD}}\\mathcal{X}_i)$.\n\\end{defn}\n\n\\section{Some funcoids}\n\n\\begin{prop}\n$\\supfun{x\\mapsto(i,x)}\\mathcal{X}=\\{i\\}\\times^{\\mathsf{RLD}}\\mathcal{X}$ for every filter~$\\mathcal{X}$.\n\\end{prop}\n\n\\begin{proof}\n$\\supfun{x\\mapsto(i,x)}\\mathcal{X}=\\bigsqcap\\setcond{\\rsupfun{x\\mapsto(i,x)}X}{X\\in\\up\\mathcal{X}}=\\bigsqcap\\setcond{\\{i\\}\\times X}{X\\in\\up\\mathcal{X}}=\\{i\\}\\times^{\\mathsf{RLD}}\\mathcal{X}$.\n\\end{proof}\n\n\\begin{prop}\n$\\supfun{(x\\mapsto(i,x))^{-1}}\\mathcal{X}=\\im(\\mathcal{X}|_{\\{i\\}})$ for a filter~$\\mathcal{X}$ on the set $\\mathscr{U}\\cup\\{{\\mathscr{U}}\\}$ where $\\mathscr{U}$~is\na Grothendieck universe.\n\\end{prop}\n\n\\begin{proof}\n$\\supfun{(x\\mapsto(i,x))^{-1}}\\mathcal{X}=\\bigsqcap\\setcond{\\rsupfun{(x\\mapsto(i,x))^{-1}}X}{X\\in\\up\\mathcal{X}}=\\bigsqcap\\setcond{\\setcond{x}{(i,x)\\in X}}{X\\in\\up\\mathcal{X}}=\\bigsqcap\\setcond{x\\in\\im X|_{\\{i\\}}}{X\\in\\up\\mathcal{X}}=\\im\\bigsqcap\\setcond{x\\in X|_{\\{i\\}}}{X\\in\\up\\mathcal{X}}=\\im(\\mathcal{X}|_{\\{i\\}})$.\n\\end{proof}\n\n\\section{Cartesian product of funcoids}\n\n\\subsection{Definition and basic properties}\n\n\\begin{defn}\n\\emph{Cartesian product} of an indexed family~$f$ of funcoids is\na funcoid \\[ \\prod^{(J)} f=\\bigsqcup_{i\\in\\dom f}((x\\mapsto(i,x))\\circ f_i\\circ(x\\mapsto(i,x)^{-1}). \\]\n\\end{defn}\n\n\\begin{prop}\n$\\supfun{\\prod^{(J)}f}\\mathcal{X}=\\coprod_{i\\in\\dom f}\\supfun{f_i\\circ(x\\mapsto(i,x)^{-1}}\\mathcal{X}$.\n\\end{prop}\n\n\\begin{proof}\n\\begin{multline*}\n\\coprod_{i\\in\\dom f}\\supfun{f_i\\circ(x\\mapsto(i,x)^{-1}}\\mathcal{X}=\\\\\n\\bigsqcup_{i\\in\\dom f}(\\{i\\}\\times^{\\mathsf{RLD}}\\supfun{f_i\\circ(x\\mapsto(i,x)^{-1}}\\mathcal{X})=\\\\\n\\bigsqcup_{i\\in\\dom f}(\\supfun{x\\mapsto(i,x)}\\supfun{f_i\\circ(x\\mapsto(i,x)^{-1}}\\mathcal{X})=\\\\\n\\supfun{\\prod^{(J)}f}\\mathcal{X}.\n\\end{multline*}\n\\end{proof}\n\n\\subsection{Projections}\n\n\\begin{thm}\n$f_i$~can be restored from the value of $\\prod^{(J)}f=f_i$.\n\\end{thm}\n\n\\begin{proof}\n$f_i = (x\\mapsto(i,x)^{-1})\\circ\\prod^{(J)}f\\circ(x\\mapsto(i,x)^{-1})$\n(taken into account that $x\\mapsto(i,x)^{-1}$ is a principal funcoid).\n\\end{proof}\n\n\\section{Arrow product of funcoids}\n\n\\begin{defn}\n\\emph{Arrow product} of an indexed family~$f$ of funcoids is\na funcoid \\[ \\prod^{\\rightarrow} f=\\bigsqcup_{i\\in\\dom f}((x\\mapsto(i,x))\\circ f_i. 
\\]\n\\end{defn}\n\n\\begin{prop}\n$\\supfun{\\prod^{\\rightarrow}f}\\mathcal{X}=\\coprod_{i\\in\\dom f}\\supfun{f_i}\\mathcal{X}$.\n\\end{prop}\n\n\\begin{proof}\n\\begin{multline*}\n\\coprod_{i\\in\\dom f}\\supfun{f_i}\\mathcal{X}=\\\\\n\\bigsqcup_{i\\in\\dom f}(\\{i\\}\\times^{\\mathsf{RLD}}\\supfun{f_i}\\mathcal{X})=\\\\\n\\bigsqcup_{i\\in\\dom f}(\\supfun{x\\mapsto(i,x)}\\supfun{f_i}\\mathcal{X})=\\\\\n\\supfun{\\prod^{\\rightarrow}f}\\mathcal{X}.\n\\end{multline*}\n\\end{proof}\n\n\\subsection{Projections}\n\n\\begin{defn}\n\\emph{Arrow projections}\n$\\pi^{\\rightarrow}_i = (x\\mapsto(i,x))^{-1}$.\n\\end{defn}\n\n\\begin{thm}\n$\\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=f_i$.\n\\end{thm}\n\n\\begin{proof}\nBecause $\\pi^{\\rightarrow}_i$ is a principal funcoid, we have\n\\[ \\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=\n\\bigsqcup_{j\\in\\dom f}((x\\mapsto(i,x))^{-1}\\circ(x\\mapsto(j,x))\\circ f_j). \\]\nBut $(x\\mapsto(i,x))^{-1}\\circ(x\\mapsto(j,x))$ is the identity if $i=j$ or\nempty otherwise. So\n$\\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=f_i$.\n\\end{proof}\n\n\\section{Cartesian product of reloids}\n\n\\subsection{Definition and basic properties}\n\n\\begin{defn}\n\\emph{Cartesian product} of an indexed family~$f$ of reloids is\na reloid \\[ \\prod^{(J)} f=\\bigsqcup_{i\\in\\dom f}((x\\mapsto(i,x))\\circ f_i\\circ(x\\mapsto(i,x))^{-1}). \\]\n\\end{defn}\n\n\\begin{conjecture}\n$\\prod^{(J)}(g\\circ f)=\\prod^{(J)}g\\circ\\prod^{(J)}f$.\n\\end{conjecture}\n\n\\subsection{Projections}\n\n\\begin{thm}\n$f_i$~can be restored from the value of $\\prod^{(J)}f$.\n\\end{thm}\n\n\\begin{proof}\n$f_i = (x\\mapsto(i,x))^{-1}\\circ\\prod^{(J)}f\\circ(x\\mapsto(i,x))$.\n\\end{proof}\n\n\\section{Arrow product of reloids}\n\n\\begin{defn}\n\\emph{Arrow product} of an indexed family~$f$ of reloids is\na reloid \\[ \\prod^{\\rightarrow} f=\\bigsqcup_{i\\in\\dom f}((x\\mapsto(i,x))\\circ f_i). \\]\n\\end{defn}\n\n\\subsection{Projections}\n\n\\begin{defn}\n\\emph{Arrow projections}\n$\\pi^{\\rightarrow}_i = (x\\mapsto(i,x))^{-1}$.\n\\end{defn}\n\n\\begin{prop}\n$\\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=f_i$.\n\\end{prop}\n\n\\begin{proof}\nBecause $x\\mapsto(i,x)$ is a principal funcoid, we have\n$\\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=\n\\pi^{\\rightarrow}_i\\circ\\bigsqcup_{j\\in\\dom f}((x\\mapsto(j,x))\\circ f_j)=\n\\bigsqcup_{j\\in\\dom f}((x\\mapsto(i,x))^{-1}\\circ(x\\mapsto(j,x))\\circ f_j)$.\n\nBut $(x\\mapsto(i,x))^{-1}\\circ(x\\mapsto(j,x))$ is the identity if $i=j$ or empty otherwise. 
So $\\pi^{\\rightarrow}_i\\circ\\prod^{\\rightarrow}f=f_i$.\n\\end{proof}", "meta": {"hexsha": "a1aab30dcca6003ab78c6d4b334f2335c967d51d", "size": 4982, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-disjoint.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-disjoint.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-disjoint.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.1419354839, "max_line_length": 319, "alphanum_fraction": 0.6597751907, "num_tokens": 2103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711908591638, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.5655798106679941}}
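To make the product constructions above concrete, here is a small worked instance added for illustration (reading $(i,X_i)$ in the first formula of this chapter as $\{i\}\times X_i$), for the index set $\{1,2\}$:
\[ \coprod X = (\{1\}\times X_1)\cup(\{2\}\times X_2), \qquad
\prod^{\rightarrow} f = ((x\mapsto(1,x))\circ f_1)\sqcup((x\mapsto(2,x))\circ f_2). \]
Composing the arrow product with $\pi^{\rightarrow}_1 = (x\mapsto(1,x))^{-1}$ annihilates the $i=2$ summand, because $(x\mapsto(1,x))^{-1}\circ(x\mapsto(2,x))$ is empty, and recovers $f_1$, in accordance with the projection results above.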
{"text": "\\documentclass[10pt]{report}\n\n\\usepackage{subcaption} % for subfigures\n\\usepackage{amsthm} % for QED\n\\usepackage{mathtools} % for delimiter\n\n\\usepackage{listings} % for code\n\\lstset{ \n\tlanguage=R,\n\tbasicstyle=\\footnotesize\\ttfamily,\n\tnumbers=none,\n\tstepnumber=1,\n\tnumbersep=8pt,\n\tshowspaces=false,\n\tshowstringspaces=false,\n\tshowtabs=false,\n\tframe=single,\n\ttabsize=2,\n\tcaptionpos=t,\n\tbreaklines=true,\n\tbreakatwhitespace=false\n} \n\n\\usepackage{float} % for figure [H]\n\\usepackage{booktabs} % for tabular\n\\usepackage{caption} % for \\caption*\n\\usepackage[export]{adjustbox} % for valign=t\n\\usepackage{array} % for column type m\n\\usepackage{verbatim}\n\\usepackage{graphicx}\n%\\graphicspath{ {imgs/} }\n\\usepackage{fancyhdr}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\n%%%%%% Pagination\n\\setlength{\\topmargin}{-.3 in}\n\\setlength{\\oddsidemargin}{0in}\n\\setlength{\\evensidemargin}{0in}\n\\setlength{\\textheight}{9.in}\n\\setlength{\\textwidth}{6.5in}\n\n%Title page\n\\newcommand{\\hwTitle}{Homework \\#3}\n\\newcommand{\\hwCourse}{Applied Statistics}\n\\newcommand{\\hmClassInstructor}{Professor Lulu Kang}\n\n\\title{\n\t\\vspace{2in}\n\t\\textmd{\\textbf{\\hwCourse\\\\\\hwTitle}}\\\\\n\t\\vspace{0.3in}\\large{\\textit{\\hmClassInstructor}}\n\t\\vspace{3in}\n}\n\\author{\\textbf{Zhihao Ai}}\n\\date{}\n\n%Header setting. \n\\pagestyle{fancy}\n\\fancyhead[L]{Zhihao Ai}\n\\fancyhead[C]{Math 484}\n\\fancyhead[R]{Homework 3}\n%%%%%%\n\n%Global setting.\n%\\everymath{\\displaystyle}\n\\setlength\\parindent{0pt}\n\n%Custom general commands.\n\\newcommand{\\ds}{\\displaystyle}\n\\newcommand{\\ts}{\\textstyle}\n\n\\newcolumntype{N}{ >$ c <$}\n\\newcolumntype{M}[1]{>{\\centering\\arraybackslash $}m{#1}<{$}}\n\n\\newcommand{\\abs}[1] {\\left| #1 \\right|}\n\n\\DeclarePairedDelimiter\\autoparen{(}{)}\n\\newcommand{\\pa}[1]{\\autoparen*{#1}}\n\n\\newcommand{\\var} {\\text{var}}\n\n\\newcommand{\\m}[1] {\\mathbf{#1}}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Problem 1}\n(Ex. 6.5) \\textbf{Brand preference}\n\\begin{enumerate}\n\t\\item [a.]\n\tObtain the scatter plot matrix and the correlation matrix. What information do these diagnostic aids provide here?\n\t\n\tScatter plot matrix:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\vspace{-4ex}\n\t\t\\includegraphics[width=.5\\linewidth]{p1/5a.png}\n\t\t\\vspace{-5ex}\n\t\\end{figure}\n\tCorrelation matrix:\n\t\\lstinputlisting{p1/5a.txt}\n\tFrom the scatter plot matrix, we can see there is a strong correlation between $Y$ and $X_1$. From the correlation matrix, we can see that the correlation between $Y$ and $X_1$ are stronger than that between $Y$ and $X_2$ ($0.89>0.39$). Also, there is no linear correlation between $X_1$ and $X_2$ since the correlation is 0.\n\t\n\t\\item [b.]\n\tFit regression model (6.1) to the data. State the estimated regression function. How is $b_1$ interpreted here?\n\t\n\tAfter fitting regression model to the data, we obtain the following coefficients:\n\t\\lstinputlisting{p1/5b.txt}\n\tSo, the estimated regression function is $\\hat{Y} = 37.65 + 4.425 x_1 + 4.375 x_2$. The interpretation is that when sweetness ($X_2$) is fixed, the brand liking ($Y$) will increase by $b_1 = 4.425$ units if mosture content ($X_1$) increases by 1 unit.\n\t\n\t\\item [c.]\n\tObtain the residuals and prepare a box plot of the residuals. 
What information does this plot provide?\n\t\n\tThe residuals are\n\t\\lstinputlisting{p1/5c.txt}\n\tBox plot of the residuals:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.35\\linewidth]{p1/5c.png}\n\t\\end{figure}\n\tThe plot shows that the residuals are symmetrically centered around 0.\n\t\n\t\\item [d.]\n\tPlot the residuals against $\\hat{Y}, X_1, X_2$ and $X_1 X_2$ on separate graphs. Also prepare a normal probability plot. Interpret the plots and summarize your findings.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{subfigure}[b]{.4\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p1/5d1.png}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{.4\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p1/5d2.png} \n\t\t\\end{subfigure}\\\\\n\t\t\\begin{subfigure}[b]{.4\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p1/5d3.png} \n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{.4\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p1/5d4.png} \n\t\t\\end{subfigure} \n\t\\end{figure}\n\tThere are no linear relationships shown in the plots above, indicating constancy of error variance and that the model fits the data well. The plot of residuals against $X_1 X_2$ indicates there is no interaction effect present. \n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.5\\linewidth]{p1/5dqq.png}\n\t\\end{figure}\n\tThe normal QQ-plot above shows a strong linear correlation, confirming that the error terms are fairly normally distributed.\n\t\n\t\\item [e.]\n\tConduct the Breusch-Pagan test for constancy of the error variance, assuming $\\log \\sigma_i^2 = \\gamma_0 + \\gamma_1 X_{i1} + \\gamma_2 X_{i2}$; use $\\alpha = .01$. State the alternatives, decision rule, and conclusion.\n\t\n\tThe alternatives are\n\t\\begin{align*}\n\t\tH_0 &: \\gamma_1 = \\gamma_2 = 0\\\\\n\t\tH_a &: \\gamma_1 \\ne 0 \\text{ or } \\gamma_2 \\ne 0\n\t\\end{align*}\n\tThe BP-statistic is\n\t\\[\n\t\\chi^2_{BP} = \\frac{SSR^*}{p} \\div \\pa{\\frac{SSE}{n}}^2\n\t\\]\n\twhere $SSR^*$ is the regression sum of squares when regressing $e^2$ on $X$.\n\tThe decision rule is\n\t\\begin{align*}\n\t\\text{If } \\chi^2_{BP} \\le \\chi^2(0.99; 2) = 9.21034, \\text{ conclude } H_0\\\\\n\t\\text{If } \\chi^2_{BP} > \\chi^2(0.99; 2) = 9.21034, \\text{ conclude } H_a\n\t\\end{align*}\n\tAccording to the ANOVA tables of two models, we have\n\t\\[\n\t\\chi^2_{BP} = \\frac{67.34+5.06}{3} \\div \\pa{\\frac{94.30}{16}}^2 = 0.694759 < 9.21034\n\t\\]\n\tTherefore, we conclude $H_0$, that the error variance is constant.\n\t\n\t\\item [f.]\n\tConduct a formal test for lack of fit of the first-order regression function; use $\\alpha = .01$. State the alternatives, decision rule, and conclusion.\n\t\n\tThe alternatives are\n\t\\begin{align*}\n\t\tH_0 &: E(Y) = \\beta_0 + \\beta_1 X_1 + \\beta_2 X_2 \\\\\n\t\tH_a &: E(Y) \\ne \\beta_0 + \\beta_1 X_1 + \\beta_2 X_2\n\t\\end{align*}\n\tThe test statistic is\n\t\\[\n\tF^* = \\frac{SSLF}{c-3} \\div \\frac{SSPE}{n-c}\n\t\\]\n\tThe decision rule is\n\t\\begin{align*}\n\t\t\\text{If } F^* \\le F(1-\\alpha; c-3, n-c) = 6.63, \\text{ conclude } H_0\\\\\n\t\t\\text{If } F^* > F(1-\\alpha; c-3, n-c) = 6.63, \\text{ conclude } H_a\n\t\\end{align*}\n\tThe ANOVA table is shown below:\n\t\\lstinputlisting{p1/5f.txt}\n\tFrom the table, we have $F^* = (37.30/5)/(57.00/8) = 1.047018 < 6.63$. So, we conclude $H_0$, that $E(Y) = \\beta_0 + \\beta_1 X_1 + \\beta_2 X_2$.\n\t\n\\end{enumerate}\n\n\\section*{Problem 2}\n(Ex. 
6.18) \\textbf{Commercial properties}\n\\begin{enumerate}\n\t\\item [a.]\n\tPrepare a stem-and-leaf plot for each predictor variable. What information do these plots provide?\n\t\n\tStem-and-leaf plot for $X_1$:\n\t\\lstinputlisting{p2/18a1.txt}\n\tStem-and-leaf plot for $X_2$:\n\t\\lstinputlisting{p2/18a2.txt}\n\tStem-and-leaf plot for $X_3$:\n\t\\lstinputlisting{p2/18a3.txt}\n\tStem-and-leaf plot for $X_4$:\n\t\\lstinputlisting{p2/18a4.txt}\n\tFrom the plots, we can see that $X_1$ ranges from 0 to 20, $X_2$ ranges from 2 to 14.6, $X_3$ ranges from 0 to 0.73 and mostly takes values less than 0.1, $X_4$ ranges from 30000 to 480000.\n\t\n\t\\item [b.]\n\tObtain the scatter plot matrix and the correlation matrix. Interpret these and state your principal findings.\n\t\n\tScatter plot matrix:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.65\\linewidth]{p2/18b.png}\n\t\\end{figure}\n\tCorrelation matrix:\n\t\\lstinputlisting{p2/18b.txt}\n\tThe scatter plot matrix shows the relationships among the variables and the correlation matrix provides the correlation coefficients. We can see that there is no strong linear correlation among the variables, but the correlation between $Y$ and $X_2$, and also the correlation between $Y$ and $X_4$, are relatively high.\n\t\n\t\\item [c.]\n\tFit regression model (6.5) for four predictor variables to the data. State the estimated regression function. \n\t\n\tAfter fitting the regression model to the data, we obtain the following coefficients:\n\t\\lstinputlisting{p2/18c.txt}\n\tSo, the estimated regression function is $\\hat{Y} = 12.2 - 0.142 x_1 + 0.282 x_2 + 0.619 x_3 + 0.00000792 x_4$.\n\t\n\t\\item [d.]\n\tObtain the residuals and prepare a box plot of the residuals. Does the distribution appear to be fairly symmetrical?\n\t\n\tThe residuals are stored in the fitted model's residuals list. Below is the box plot of the residuals:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[width=.4\\linewidth]{p2/18d.png}\n\t\\end{figure}\n\tThe distribution appears to be fairly symmetrical, with just a few outliers.\n\t\n\t\\item [e.]\n\tPlot the residuals against $\\hat{Y}$, each predictor variable, and each two-factor interaction term on separate graphs. Also prepare a normal probability plot. 
Analyze your plots and summarize your findings.\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_yhat.png}\n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x1.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x2.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x3.png} \n\t\t\\end{subfigure}\\\\\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x4.png}\n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x1x2.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x1x3.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x1x4.png} \n\t\t\\end{subfigure}\\\\\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x2x3.png}\n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x2x4.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18e_x3x4.png} \n\t\t\\end{subfigure}%%\n\t\t\\begin{subfigure}[b]{.25\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{p2/18eqq.png} \n\t\t\\end{subfigure}\n\t\\end{figure}\n\tIn the plots with $x$-axis representing $X_3, X_1 X_3, X_2 X_3$ and $X_3 X_4$, the residuals become more concentrated around 0 as the predictors increase. The S shape in the QQ-plot indicates the errors are not normally distributed.\n\t\n\t\\item [f.]\n\tCan you conduct a formal test for lack of fit here?\n\t\n\tNo, because there are no observations with the same values of predictor variables.\n\t\n\t\\item [g.]\n\tDivide the 81 cases into two groups, placing the 40 cases with the smallest fitted values $\\hat{Y}_i$ into group 1 and the remaining cases into group 2. Conduct the Brown-Forsythe test for constancy of the error variance, using $\\alpha = .05$. State the decision rule and conclusion.\n\t\n\tThe alternatives are\n\t\\begin{align*}\n\t\tH_0 &: \\text{error variance is constant}\\\\\n\t\tH_a &: \\text{error variance is not constant}\n\t\\end{align*}\n\tThe test statistic is\n\t\\[\n\tt_{BF}^* = \\frac{\\bar{d}_1 - \\bar{d}_2}{s\\sqrt{\\frac{1}{n_1} + \\frac{1}{n_2}}}\n\t\\]\n\tThe decision rule is\n\t\\begin{align*}\n\t\t\\text{If } t_{BF}^* \\le t(1-\\alpha/2; n-2) = 1.99045, \\text{ conclude } H_0\\\\\n\t\t\\text{If } t_{BF}^* > t(1-\\alpha/2; n-2) = 1.99045, \\text{ conclude } H_a\n\t\\end{align*}\n\tWe can obtain $s$ from the pooled variance,\n\t\\[\n\ts = \\sqrt{\\frac{\\sum (d_{i1} - \\bar{d}_1)^2 + \\sum (d_{i2} - \\bar{d}_2)^2}{n-2}} = \\sqrt{\\frac{20.3091 + 22.4447}{81-2}} = 0.7357\n\t\\]\n\tThen the test statistic is given by\n\t\\[\n\tt_{BF}^* = \\frac{0.8696 - 0.7793}{0.7357\\sqrt{\\frac{1}{40} + \\frac{1}{41}}} = 0.5521\n\t\\]\n\tSince $t_{BF}^* = 0.5521 < 1.99045$, we conclude $H_0$, that the error variance is constant.\n\t\n\\end{enumerate}\n\n\\section*{Problem 3}\n(Ex. 6.19) Refer to \\textbf{Commercial properties} Problem 6.18. 
Assume that regression model (6.5) for four predictor variables with independent normal error terms is appropriate.\n\\begin{enumerate}\n\t\\item [a.]\n\tTest whether there is a regression relation; use $\\alpha = .05$. State the alternatives, decision rule, and conclusion. What does your test imply about $\\beta_1, \\beta_2, \\beta_3,$ and $\\beta_4$? What is the $P$-value of the test?\n\t\n\tThe alternatives are\n\t\\begin{align*}\n\t\tH_0 &: \\beta_1 = \\beta_2 = \\dots = \\beta_{p-1} = 0\\\\\n\t\tH_a &: \\text{not all } \\beta_i=0, i=1,\\dots,p-1\n\t\\end{align*}\n\tThe test statistic is\n\t\\[\n\tF^* = \\frac{MSR}{MSE}\n\t\\]\n\tThe decision rule is\n\t\\begin{align*}\n\t\\text{If } F^* \\le F(1-\\alpha; p-1, n-p) = 2.492049, \\text{ conclude } H_0\\\\\n\t\\text{If } F^* > F(1-\\alpha; p-1, n-p) = 2.492049, \\text{ conclude } H_a\n\t\\end{align*}\n\tThe ANOVA table is shown below:\n\t\\lstinputlisting{p3/19a.txt}\n\tSince the test statistic is $F^* = \\frac{(14.819+72.802+8.381+42.325)/4}{98.231/76} = 26.75543 > 2.492049$, we conclude $H_a$, that not all $\\beta_i$ are zero. The $P$-value is $7.272\\times 10^{-14}$.\n\t\n\t\\item [b.]\n\tEstimate $\\beta_1, \\beta_2, \\beta_3,$ and $\\beta_4$ jointly by the Bonferroni procedure, using a 95 percent family confidence coefficient. Interpret your results.\n\t\n\tWe can obtain the standard errors of $\\hat{\\beta}_i$ from the summary table below:\n\t\\lstinputlisting{p3/19b.txt}\n\tThe confidence limits with family confidence coefficient 0.95 are\n\t\\[\n\t\\hat{\\beta}_i \\pm B\\cdot s\\{\\hat{\\beta}_i\\}\n\t\\]\n\twhere $B = t(1-\\alpha/2g; n-p) = t(0.99375; 76) = 2.558541$. So, the Bonferroni 95\\% joint CIs for $\\beta_1, \\beta_2, \\beta_3,$ and $\\beta_4$ are\n\t\\begin{align*}\n\t\t\\beta_1 &: -0.142034 \\pm 2.558541 \\cdot 0.021343 = (-0.196641, -0.087427)\\\\\n\t\t\\beta_2 &: 0.282017 \\pm 2.558541 \\cdot 0.063172 = (0.1203888, 0.443645)\\\\\n\t\t\\beta_3 &: 0.619344 \\pm 2.558541 \\cdot 1.086813 = (-2.161312, 3.4)\\\\\n\t\t\\beta_4 &: 0.000008 \\pm 2.558541 \\cdot 0.000001 = (0.000004, 0.000011)\n\t\\end{align*}\n\tmeaning that we are 95\\% confident the $\\beta_i$'s will fall in the intervals simultaneously.\n\t\n\t\\item [c.]\n\tCalculate $R^2$ and interpret this measure.\n\t\n\tFrom the ANOVA table, we have\n\t\\[\n\tR^2 = \\frac{SSR}{SSTO} = \\frac{14.819+72.802+8.381+42.325}{14.819+72.802+8.381+42.325+98.231} = \\frac{138.327}{236.558} = 0.584749\n\t\\]\n\tIt measures the proportionate reduction of total variation in $Y$ associated with the use of the set of $X$ variables.\n\\end{enumerate}\n\n\\section*{Problem 4}\n(Ex. 6.20) Refer to \\textbf{Commercial properties} Problem 6.18. Assume that regression model (6.5) for four predictor variables with independent normal error terms is appropriate. The researcher wishes to obtain simultaneous interval estimates of the mean rental rates for four typical properties specified as follows:\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{*{5}{N}}\n\t\t& 1 & 2 & 3 & 4 \\\\ \\midrule\n\t\tX_1: & 5.0 & 6.0 & 14.0 & 12.0 \\\\\n\t\tX_2: & 8.25 & 8.50 & 11.50 & 10.25 \\\\\n\t\tX_3: & 0 & 0.23 & 0.11 & 0 \\\\\n\t\tX_4: & 250000 & 270000 & 300000 & 310000 \\\\\n\t\\end{tabular}\n\\end{table}\nObtain the family of estimates using a 95 percent family confidence coefficient. 
Employ the most efficient procedure.\n\nApplying the Bonferroni procedure, we have \n\\[\nB = t(1-\\alpha/2g; n-p) = t(0.99375; 76) = 2.558541\n\\]\nSo the CIs are\n\\begin{align*}\n\tY_{h1} &: \\hat{Y}_{h1} \\pm B\\hat{\\sigma} \\sqrt{x_{h1}' (X'X)^{-1} x_{h1}^{}} = 15.79813 \\pm 2.558541 \\cdot 1.137102 \\cdot 0.244601 = (15.08651, 16.50975)\\\\\n\tY_{h2} &: \\hat{Y}_{h2} \\pm B\\hat{\\sigma} \\sqrt{x_{h2}' (X'X)^{-1} x_{h2}^{}} = 16.02754 \\pm 2.558541 \\cdot 1.137102 \\cdot 0.207519 = (15.4238, 16.63128)\\\\\n\tY_{h3} &: \\hat{Y}_{h3} \\pm B\\hat{\\sigma} \\sqrt{x_{h3}' (X'X)^{-1} x_{h3}^{}} = 15.90072 \\pm 2.558541 \\cdot 1.137102 \\cdot 0.195411 = (15.33221, 16.46923)\\\\\n\tY_{h4} &: \\hat{Y}_{h4} \\pm B\\hat{\\sigma} \\sqrt{x_{h4}' (X'X)^{-1} x_{h4}^{}} = 15.84339 \\pm 2.558541 \\cdot 1.137102 \\cdot 0.227928 = (15.18027, 16.50651)\n\\end{align*}\n\n\\section*{Problem 5}\nSuppose that $\\m{X}= [\\m{x}_1, \\dots, \\m{x}_k]$, where $\\m{x}_i$, $i=1, \\dots k$ are linearly independent columns of $\\m{X}$ (one of $\\m{x}_i$ can be a column of $\\m{1}$ if intercept is included in the model). Show that $\\var(\\hat{\\beta}_k) \\ge \\sigma^2(\\m{x}_k' \\m{x}_k)^{-1}$ with equality if and only if $\\m{x}_k' \\m{x}_j = 0, j=0,1,\\dots,k-1$.\n\nSince $\\var(\\hat{\\beta}_k) = \\sigma^2 \\pa{\\m{X}' \\m{X}}^{-1}_{k,k}$, \n\\begin{align*}\n\\var(\\hat{\\beta}_k) \\ge \\sigma^2(\\m{x}_k' \\m{x}_k)^{-1} \n&\\Leftrightarrow \n\\sigma^2 \\pa{\\m{X}' \\m{X}}^{-1}_{k,k} \\ge \\sigma^2(\\m{x}_k' \\m{x}_k)^{-1}\\\\\n&\\Leftrightarrow \n\\pa{\\m{X}' \\m{X}}^{-1}_{k,k} \\ge (\\m{x}_k' \\m{x}_k)^{-1}\n\\end{align*}\n\\iffalse\nPartition $\\m{X}$ into two blocks then we have\n\\[\n\\m{X} = \n\t\\left[\n\t\\begin{array}{c}\n\t\\m{X}_{-k}\\\\ \\hline\n\t\\m{x}'_k\n\t\\end{array}\n\t\\right]\n,\n\\m{X}' =\n\t\\left[\n\t\\begin{array}{c|c}\n\t\\m{X}'_{-k} & \\m{x}_k\n\t\\end{array}\n\t\\right]\n\\]\nwhere $\\m{x}_k$ is the $k$-th row of $\\m{X}$, and $\\m{X}_{-k}$ is the matrix with $\\m{x}_k$ removed. By block matrix multiplication, we obtain\n\\[\n\\m{X}' \\m{X}= \\m{X}'_{-k} \\m{X}_{-k} + \\m{x}_k \\m{x}'_k\n\\]\n\\fi\nDividing $\\m{X}' \\m{X}$ into four blocks, we have\n\\[\n\\m{X}' \\m{X} =\n\t\\left[\n\t\\begin{array}{c|c}\n\tA & B \\\\ \\hline\n\tB' & C\n\t\\end{array}\n\t\\right]\n\\]\nwhere $C_{1\\times1} = \\m{x}_k' \\m{x}_k$. Taking the inverse of $\\m{X}' \\m{X}$,\n\\[\n(\\m{X}' \\m{X})^{-1} = \n\t\\left[\n\t\\begin{array}{c|c}\n\tD & E \\\\ \\hline\n\tE' & F\n\t\\end{array}\n\t\\right]\n\\]\nAccording to the theorem on the inverse of a partitioned symmetric matrix, \n\\[\nF = (C - B'A^{-1}B)^{-1} = C^{-1} + C^{-1}B'(A-BC^{-1}B')^{-1} BC^{-1}\n\\]\nSubstituting $C= \\m{x}_k' \\m{x}_k$ into the equation above,\n\\begin{align*}\n\t\\pa{\\m{X}' \\m{X}}^{-1}_{k,k} \n\t&= F\\\\\n\t&= C^{-1} + C^{-1}B'(A-BC^{-1}B')^{-1} BC^{-1}\\\\\n\t&= (\\m{x}_k' \\m{x}_k)^{-1} + (\\m{x}_k' \\m{x}_k)^{-1} B'(A-B(\\m{x}_k' \\m{x}_k)^{-1}B')^{-1} B(\\m{x}_k' \\m{x}_k)^{-1}\n\\end{align*}\nBecause $\\m{x}_k' \\m{x}_k$ represents the sum of squares of the $k$-th column, $\\m{x}_k' \\m{x}_k\\ge 0$, and $(\\m{x}_k' \\m{x}_k)^{-1}\\ge 0$. \nSince $\\m{x}_i$ are linearly independent columns, and $(\\m{x}_k' \\m{x}_k)^{-1}\\ge 0$, $(\\m{x}_k' \\m{x}_k)^{-1} B'(A-B(\\m{x}_k' \\m{x}_k)^{-1}B')^{-1} B(\\m{x}_k' \\m{x}_k)^{-1}\\ge 0$; so $\\pa{\\m{X}' \\m{X}}^{-1}_{k,k} \\ge (\\m{x}_k' \\m{x}_k)^{-1}$. 
\nBecause $B = [\\m{x}_1, \\m{x}_2, \\dots ,\\m{x}_{k-1}]'\\m{x}_k$ and $B' = \\m{x}'_k [\\m{x}_1, \\m{x}_2, \\dots ,\\m{x}_{k-1}]$, $B=B'=\\m{0}$ if and only if $\\m{x}_k' \\m{x}_j = 0$ for $j=0,1,\\dots,k-1$; with $B=B'=\\m{0}$, $\\pa{\\m{X}' \\m{X}}^{-1}_{k,k} = (\\m{x}_k' \\m{x}_k)^{-1}$. \nTherefore, by proving $\\pa{\\m{X}' \\m{X}}^{-1}_{k,k} \\ge (\\m{x}_k' \\m{x}_k)^{-1}$ with equality if and only if $\\m{x}_k' \\m{x}_j = 0, j=0,1,\\dots,k-1$, we have shown that $\\var(\\hat{\\beta}_k) \\ge \\sigma^2(\\m{x}_k' \\m{x}_k)^{-1}$ with equality if and only if $\\m{x}_k' \\m{x}_j = 0, j=0,1,\\dots,k-1$.\n\\end{document}\n\n", "meta": {"hexsha": "687105e27288e3616eda20c065a8b3872f0336ec", "size": 18382, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW3/Math-484-HW3.tex", "max_stars_repo_name": "ZhihaoAi/MATH-484-Assignments", "max_stars_repo_head_hexsha": "9817add32fbcb46b58849ec923aa73a47daa9584", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW3/Math-484-HW3.tex", "max_issues_repo_name": "ZhihaoAi/MATH-484-Assignments", "max_issues_repo_head_hexsha": "9817add32fbcb46b58849ec923aa73a47daa9584", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW3/Math-484-HW3.tex", "max_forks_repo_name": "ZhihaoAi/MATH-484-Assignments", "max_forks_repo_head_hexsha": "9817add32fbcb46b58849ec923aa73a47daa9584", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3618843683, "max_line_length": 347, "alphanum_fraction": 0.6655967795, "num_tokens": 7097, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105720171531, "lm_q2_score": 0.8499711756575749, "lm_q1q2_score": 0.565579806192399}}
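As a quick numerical cross-check of the Bonferroni computations in Problems 3 and 4 above, here is a short Python sketch (an added illustration, not part of the original assignment; it assumes SciPy is available and reuses the estimate and standard error of $\hat{\beta}_1$ reported in Problem 3b):

\begin{verbatim}
from scipy import stats

g, alpha, n, p = 4, 0.05, 81, 5               # 4 intervals, 95% family confidence
B = stats.t.ppf(1 - alpha / (2 * g), n - p)   # t(0.99375; 76) ~ 2.558541

b1, se1 = -0.142034, 0.021343                 # estimate and SE of beta_1
ci = (b1 - B * se1, b1 + B * se1)             # ~ (-0.196641, -0.087427)
\end{verbatim}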
{"text": "\\chapter{Relativistic Free Field Theories}\n\n\\section{The Lorentz Invariance}\n\nThe metric for ($3+1$)-dimensional flat space-time is chosen to be\n\\begin{equation}\n\tg_{\\mu\\nu}=g^{\\mu\\nu}=\\mathrm{diag}(+1,-1,-1,-1).\n\\end{equation}\nThe Lorentz transformation ${\\Lambda^{\\mu}}_{\\nu}$ satisfies\n\\begin{equation}\n{\\Lambda^{\\mu}}_{\\alpha}{\\Lambda^{\\nu}}_{\\beta} g^{\\alpha\\beta} = g^{\\mu\\nu}.\n\\end{equation}\nFrom this we have\n\\begin{equation}\n\tg^{\\gamma\\alpha}{\\Lambda^{\\mu}}_{\\alpha}{\\Lambda^{\\nu}}_{\\beta} g_{\\mu\\nu} \n\t= g^{\\gamma\\alpha}g_{\\alpha\\beta} \n\t\\quad \\Longrightarrow \\quad\n\t{\\Lambda_{\\nu}}^{\\gamma}{\\Lambda^{\\nu}}_{\\beta} \n\t= {\\delta^{\\gamma}}_{\\beta},\n\\end{equation}\nThe inverse Lorentz transformation satisfies:\n\\begin{equation}\n\t{(\\Lambda^{-1})^{\\mu}}_{\\nu} = {\\Lambda_{\\nu}}^{\\mu}.\n\\end{equation}\nThe infinitesimal transformation is denoted as\n\\begin{equation}\n\\begin{aligned}\n\t{\\Lambda^{\\mu}}_{\\nu} &= \\delta^{\\mu}_{\\nu}+\\delta{\\omega^{\\mu}}_{\\nu}, \\\\\n\t{(\\Lambda^{-1})^\\mu}_\\nu &= \\delta^{\\mu}_{\\nu}-\\delta{\\omega^\\mu}_\\nu.\n\\end{aligned}\n\\end{equation}\nwhich means $\\delta {\\omega^\\mu}_\\nu = -\\delta {\\omega_\\nu}^\\mu$.\nWe can further use the metric tensor $g_{\\mu\\nu}$ to lower the indices and get $\\delta\\omega_{\\alpha\\beta} = -\\delta\\omega_{\\beta\\alpha}$, i.e., the infinitesimal parameter $\\delta \\omega_{\\mu\\nu}$ is anti-symmetric for ($\\mu \\leftrightarrow \\nu$).\n\nIn general, a representation of Lorentz group $U_R(\\Lambda)$ can be parametrized as:\n\\begin{equation}\n\tU_R(\\Lambda) = \\exp\\left(\\frac{i}{2}\\omega_{\\mu\\nu}M_R^{\\mu\\nu}\\right).\n\\end{equation}\nAnother useful parametrization is\n\\begin{equation}\n\t\\theta_i \\equiv \\frac{1}{2}\\varepsilon_{ijk}\\omega_{jk}, \\quad \n\t\\beta_i \\equiv \\omega_{i0}.\n\\end{equation}\nA new set of generators are:\n\\begin{equation}\n\tJ_i \\equiv \\frac{1}{2}\\varepsilon_{ijk}M^{jk}, \\quad \n\tK_i \\equiv M^{i0},\n\\end{equation}\nwhere $J_i$'s are the generators of the spatial rotations, and $K_i$'s are the generators of Lorentz boosts.\n\nIn the fundamental representation, the generators are represented by\n\\begin{equation}\n\\begin{aligned}\n\tJ_1 &= \\left[\\begin{array}{cccc} 0 & & & \\\\ & 0 & & \\\\ & & 0 & -i \\\\ & & i & 0 \\end{array}\\right], & \n\tJ_2 &= \\left[\\begin{array}{cccc} 0 & & & \\\\ & 0 & & i \\\\ & & 0 & \\\\ & -i & & 0 \\end{array}\\right], &\n\tJ_3 &= \\left[\\begin{array}{cccc} 0 & & & \\\\ & 0 & -i & \\\\ & i & 0 & \\\\ & & & 0 \\end{array}\\right], \\\\\n\tK_1 &= \\left[\\begin{array}{cccc} 0 & -i & & \\\\ -i & 0 & & \\\\ & & 0 & \\\\ & & & 0 \\end{array}\\right], & \n\tK_2 &= \\left[\\begin{array}{cccc} 0 & & -i & \\\\ & 0 & & \\\\ -i & & 0 & \\\\ & & & 0 \\end{array}\\right], &\n\tK_3 &= \\left[\\begin{array}{cccc} 0 & & & -i \\\\ & 0 & & \\\\ & & 0 & \\\\ -i & & & 0 \\end{array}\\right].\n\\end{aligned}\n\\end{equation}\nThe Lie algebra of the Lorentz algebra can be explicitly done using the fundamental representation. 
\nThe result is\n\\begin{equation}\n\\begin{aligned}\n\t\\left[J_i, J_j\\right] &= i \\varepsilon_{ijk} J_k, \\\\\n\t\\left[J_i, K_j\\right] &= i \\varepsilon_{ijk} K_k, \\\\\n\t\\left[K_i, K_j\\right] &= -i\\varepsilon_{ijk} J_k.\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsection{Left and Right Spinors}\nWe introduce a new set of generators:\n\\begin{equation}\n\tN_i^{L} \\equiv \\frac{J_i - i K_i}{2}, \\quad\n\tN_i^{R} \\equiv \\frac{J_i + i K_i}{2}.\n\\end{equation}\nThey satisfy two independent $\\mathfrak{su}(2)$ algebras:\n\\begin{equation}\n\\begin{aligned}\n\t\\left[N_i^L, N_j^L \\right] &= i\\varepsilon_{ijk}N_k^L, \\\\\n\t\\left[N_i^R, N_j^R \\right] &= i\\varepsilon_{ijk}N_k^R, \\\\\n\t\\left[N_i^L, N_j^R \\right] &= 0.\n\\end{aligned}\n\\end{equation}\nThat is, the Lorentz algebra is isomorphic to the direct sum of two $\\mathfrak{su}(2)$ algebras,\n\\begin{equation}\n\t\\mathfrak{so}(3,1) \\approx \\mathfrak{su}_L(2)\\oplus\\mathfrak{su}_R(2).\n\t\\label{eq:Lorentz-alg-decomp}\n\\end{equation}\nFrom Eq.~(\\ref{eq:Lorentz-alg-decomp}), we know that the representations of the Lorentz algebra can be labelled by $j_L$ and $j_R$.\nNote that the fundamental representation corresponds to\n\\begin{equation*}\n\t\\left(j_L=\\frac{1}{2},j_R=\\frac{1}{2}\\right).\n\\end{equation*}\nThe specific form of the group element is\n\\begin{equation}\n\t\\Lambda(\\vec\\theta,\\vec\\beta)\n\t=\\exp\\left[i(\\vec\\theta+i\\vec\\beta)\\cdot \\vec N^L + i(\\vec\\theta-i\\vec\\beta)\\cdot \\vec N^R\\right].\n\\end{equation}\nThe spinor representations are those with $j_L=1/2$ or $j_R=1/2$. \nSpecifically, we define the left-hand spinor $\\psi_L$ and right-hand spinor $\\psi_R$ that transform as:\n\\begin{equation}\\label{eq:qft-left-right-spinor-rep}\n\\begin{aligned}\n\t\\Lambda_L(\\vec\\theta,\\vec\\beta)\\psi_L \n\t&= \\exp\\left(\\frac{i}{2}\\vec\\theta\\cdot\\vec\\sigma-\\frac{1}{2}\\vec\\beta\\cdot\\vec\\sigma \\right) \\psi_L, \\\\\n\t\\Lambda_R(\\vec\\theta,\\vec\\beta)\\psi_R \n\t&= \\exp\\left(\\frac{i}{2}\\vec\\theta\\cdot\\vec\\sigma+\\frac{1}{2}\\vec\\beta\\cdot\\vec\\sigma \\right) \\psi_R.\n\\end{aligned}\n\\end{equation}\nUsing the fact that $\\sigma^2 \\cdot \\vec\\sigma^* \\cdot\\sigma^2 = -\\vec\\sigma$, the left-hand and the right-hand representations are related by:\n\\begin{equation}\n\\begin{aligned}\n\t\\sigma^2 \\Lambda_L^* \\sigma^2 &= \\Lambda_R, & \\sigma^2 \\Lambda_L^T \\sigma^2 &= \\Lambda_L^{-1}, \\\\\n\t\\sigma^2 \\Lambda_R^* \\sigma^2 &= \\Lambda_L, & \\sigma^2 \\Lambda_R^T \\sigma^2 &= \\Lambda_R^{-1}.\n\\end{aligned}\n\\end{equation}\nFor this reason, the left-hand and right-hand spinors can be interchanged by\n\\begin{equation}\n\\begin{aligned}\n\t\\sigma^2 \\psi_L^* &\\sim \\chi_R, & \\psi_L^\\dagger \\sigma^2 &\\sim \\chi^\\dagger_R, \\\\\n\t\\sigma^2 \\psi_R^* &\\sim \\chi_L, & \\psi^\\dagger_R \\sigma^2 &\\sim \\chi^\\dagger_L.\n\t\\label{eq:left-right-spinor-rel}\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsection{The Invariant Symbols}\nThe invariant symbols can be thought of as the Clebsch-Gordan coefficients that help to form singlets.\nThe first singlet comes from the decomposition\n\\begin{equation*}\n\t\\frac{1}{2}\\otimes \\frac{1}{2} \\approx 0 \\oplus 1.\n\\end{equation*}\nCorrespondingly, we can check that for spinors of each handedness, the quadratic forms\n\\begin{equation}\n\t\\psi_L^T\\sigma^2\\chi_L \\quad \\text{or} \\quad \n\t\\psi_R^T\\sigma^2\\chi_R\n\t\\label{eq:inner-product-inv-symbol}\n\\end{equation}\nare singlets.\nWe can define the first invariant symbol as\\footnote{We 
use the dotted symbol to denote the right-hand spinor indices.}\n\\begin{equation}\n\t\\varepsilon^{ab} = \\varepsilon^{\\dot a \\dot b} = i(\\sigma^2)_{ab}, \\quad\n\t\\varepsilon_{ab} = \\varepsilon_{\\dot a \\dot b} = -i(\\sigma^2)_{ab}.\n\\end{equation}\nThe symbol $\\varepsilon^{ab}$ or $\\varepsilon_{ab}$ also serves as the index raising/lowering symbol, i.e.,\n\\begin{equation}\n\t\\varepsilon^{ab}\\psi_b = \\psi^a,\\ \n\t\\varepsilon_{ab}\\psi^b = \\psi_a.\n\\end{equation}\nThe singlet (\\ref{eq:inner-product-inv-symbol}) is then defined as the inner product of two spinors:\n\\begin{equation}\n\t\\psi\\cdot\\chi \n\t\\equiv \\varepsilon_{ab}\\psi^a\\chi^b\n\t= \\psi^a\\chi_{a}\n\t= -\\varepsilon_{ba}\\psi^a\\chi^b\n\t= -\\psi_b\\chi^b.\n\\end{equation}\nIn addition, because of (\\ref{eq:left-right-spinor-rel}), the expressions\n\\begin{equation*}\n\t\\psi_L^\\dagger \\chi_R \\quad \\text{and} \\quad \\psi_R^\\dagger \\chi_L\n\\end{equation*}\nare also singlets.\n\nBesides, we know there should be another invariant symbol from the decomposition\n\\begin{equation*}\n\t\\left(\\frac{1}{2}, 0\\right) \\otimes \\left(0,\\frac{1}{2}\\right)\n\t\\approx \\left(0, 0\\right) \\oplus \\cdots.\n\\end{equation*}\nFor this reason, we are searching for the symbol $M$ such that the expression\n\\begin{equation*}\n\tM^\\mu_{a\\dot b} \\psi^a_L \\chi^{\\dot b}_R\n\\end{equation*}\ntransforms as a Lorentz vector.\nThe matrix $M^\\mu$ should transform as\n\\begin{equation*}\n\tM^\\mu \\longrightarrow \\Lambda_L^T \\cdot M^\\mu \\cdot \\Lambda_R = {\\Lambda^\\mu}_\\nu M^\\nu.\n\\end{equation*}\nUsing the fact that $\\sigma^2 \\cdot \\Lambda_L^T \\cdot \\sigma^2 = \\Lambda_L^{-1}$, the above equation becomes\n\\begin{equation*}\n\t\\left(\\sigma^2 M^\\mu\\right) \\longrightarrow \\Lambda_L^{-1} \\cdot \\left(\\sigma^2 M^\\mu\\right)\\cdot \\Lambda_R.\n\\end{equation*}\nWe then show that the matrices $\\sigma^\\mu = (\\sigma^0,\\vec\\sigma)$ satisfy the requirement.\nFirstly, for the spatial rotation,\n\\begin{equation}\n\t\\Lambda_L(\\vec\\theta,\\vec 0) = \\Lambda_R(\\vec\\theta,\\vec 0) = \\exp\\left(i\\vec\\theta\\cdot \\frac{\\vec\\sigma}{2}\\right).\n\\end{equation}\nThe Pauli matrices transform as\n\\begin{equation*}\n\t\\left(1-i\\delta\\vec\\theta\\cdot\\frac{\\vec\\sigma}{2}\\right)\\sigma^j\\left(1+i\\delta\\vec\\theta\\cdot \\frac{\\vec\\sigma}{2}\\right)\n\t= \\sigma^j + i\\delta\\theta_i \\left(-i \\varepsilon_{ijk}\\sigma^k \\right).\n\\end{equation*}\nSecondly, for the boosts,\n\\begin{equation}\n\t\\Lambda_{L/R}(\\vec 0, \\vec\\beta) = \n\t\\exp\\left(\\mp\\vec\\beta\\cdot \\frac{\\vec\\sigma}{2}\\right).\n\\end{equation}\nThe Pauli matrices transform as\n\\begin{equation*}\n\t\\left(1+\\delta\\vec\\beta\\cdot\\frac{\\vec\\sigma}{2}\\right)\\sigma^\\mu \\left(1+\\delta\\vec\\beta\\cdot \\frac{\\vec\\sigma}{2}\\right) = \\begin{cases}\n\t\t \\sigma^0 + i\\delta\\beta_i \\cdot (-i\\sigma^i), & \\mu = 0 \\\\\n\t\t \\sigma^j + i\\delta\\beta_j (-i\\sigma^0), & \\mu = j\n\t\\end{cases}.\n\\end{equation*}\nWe have thus shown that\n\\begin{equation}\n\t\\psi_L^T \\sigma^2 \\sigma^\\mu \\chi_R\n\\end{equation}\nis a Lorentz vector.\nFurthermore, from (\\ref{eq:left-right-spinor-rel}), we know that\n\\begin{equation}\n\t\\eta_R^\\dagger \\sigma^\\mu \\chi_R\n\\end{equation}\nis also a Lorentz vector.\nSimilarly, consider the Lorentz vector \n\\begin{equation*}\n\tN^\\mu_{\\dot a b} \\psi^{\\dot a}_R \\chi^{b}_L,\n\\end{equation*}\nwhich together with $\\sigma^2$ should transform 
as\n\\begin{equation*}\n\t\\left(\\sigma^2 N^\\mu\\right) \\longrightarrow \n\t\\Lambda_R^{-1} \\cdot \\left(\\sigma^2 N^\\mu\\right)\\cdot \\Lambda_L.\n\\end{equation*}\nWe can check that $\\bar\\sigma^\\mu = (\\sigma^0,-\\vec\\sigma)$ satisfies the requirement, and thus \n\\begin{equation}\n\t\\eta_L^\\dagger \\bar\\sigma^\\mu \\chi_L\n\\end{equation}\nis also a Lorentz vector.\n\n\n\n\\subsection{Lorentz-invariant Lagrangian}\nIn relativistic quantum field theory, the Lagrangian should be a singlet under Lorentz transformation.\nDifferent free fields correspond to different representations of the Lorentz algebra.\nThe symmetry under Lorentz transformation also restricts the possible terms that can appear in the Lagrangian.\n\n\\subsubsection{Scalar Field}\nThe simplest case is when $j_L=j_R = 0$, corresponding to the scalar field, which we denote as $\\phi(x)$.\nSince the field itself is a singlet, any polynomial of the field in principle can appear in the theory.\nWhen considering the free theory, we restrict our attention to the quadratic terms.\nWe require the field theory to have a dynamical term, which contains derivatives of the field.\nThe derivative operator $\\partial^\\mu$ transforms as the fundamental representation.\nTo be Lorentz invariant, the allowed free theory can only be\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{KG}} = \\frac{1}{2}\\partial^\\mu \\phi \\partial_\\mu \\phi -\\frac{m^2}{2}\\phi^2 \n\t\\simeq -\\frac{1}{2}\\phi (\\partial^2+m^2) \\phi.\n\\end{equation}\n\nFor a general discussion, we consider the field theory on $d$-dimensional space-time.\nNote that the space-time Fourier transformation is defined as\n\\begin{equation}\n\\begin{aligned}\n\t\\tilde{\\phi}(k) &= \\int d^{d}x e^{ik\\cdot x} \\phi(x), \\\\ \n\t\\phi(x) &= \\int \\frac{d^{d}k}{(2\\pi)^{d}} e^{-ik\\cdot x}\\tilde{\\phi}(k),\n\\end{aligned}\n\\end{equation}\nwhere the inner product of a $d$-momentum and a $d$-coordinate is\n\\begin{equation}\n\tk\\cdot x \\equiv \\omega t-\\vec k\\cdot \\vec x.\n\\end{equation}\nThe action can be expressed as\n\\begin{equation}\n\tS_{\\mathrm{KG}} = \\int \\frac{d^d k}{(2\\pi)^d} \\frac{1}{2} \\tilde{\\phi}^*(k)(k^2-m^2)\\tilde{\\phi}(k).\n\\end{equation}\n\n\\subsubsection{Vector Field}\nIf we choose $j_L=j_R=1/2$, the field transforms as a Lorentz vector.\nWe denote the field as $A^\\mu(x)$.\nSome possible quadratic forms for the vector field that form singlets are\n\\begin{equation}\n\tA^\\mu A_\\mu,\\ (\\partial_\\mu A^\\mu)^2,\\ A^\\nu \\partial^2 A_\\nu,\\ \n\t\\varepsilon_{\\mu\\nu\\rho\\lambda} \\partial^\\mu A^\\nu \\partial^\\rho A^\\lambda.\n\\end{equation}\nFor the field theory describing the electromagnetic field, we further require gauge symmetry, i.e., invariance under\n\\begin{equation}\n\tA^\\mu(x) \\rightarrow A^\\mu(x) + \\partial^\\mu \\alpha(x).\n\\end{equation}\nGauge invariance forbids the first term, and forces the second and third terms to combine as\n\\begin{equation*}\n\t(\\partial_\\mu A^\\mu)^2 - A^\\nu \\partial^2 A_\\nu\n\t\\sim \\frac{1}{2}(\\partial^\\mu A^\\nu - \\partial^\\nu A^\\mu)(\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu)\n\t\\equiv \\frac{1}{2} F^{\\mu\\nu}F_{\\mu\\nu},\n\\end{equation*}\nwhere we have defined the field-strength tensor\n\\begin{equation}\n\tF^{\\mu\\nu}\\equiv (\\partial^\\mu A^\\nu - \\partial^\\nu A^\\mu)\n\t= \\left[\\begin{array}{cccc}\n\t\t0 & -E_1 & -E_2 & -E_3 \\\\\n\t\tE_1 & 0 & -B_3 & B_2 \\\\\n\t\tE_2 & B_3 & 0 & -B_1 \\\\\n\t\tE_3 & -B_2 & B_1 & 0\n\t\\end{array} 
\\right],\n\\end{equation}\nwhere we notice that the fields are expressed in terms of the potential as\n\\begin{equation}\n\t\\vec E = -\\partial_t \\vec A - \\vec\\nabla A^0, \\quad \\vec B = \\vec\\nabla \\times \\vec A.\n\\end{equation}\nNote that the fourth term is called the \\textit{theta term}, which can be written as a boundary term\n\\begin{equation}\n\t\\varepsilon_{\\mu\\nu\\rho\\lambda} \\partial^\\mu A^\\nu \\partial^\\rho A^\\lambda\n\t= \\partial^\\mu (\\varepsilon_{\\mu\\nu\\rho\\lambda} A^\\nu \\partial^\\rho A^\\lambda).\n\\end{equation}\nThe Lagrangian describing the electromagnetic field is given by\n\\begin{equation}\n\t\\mathcal{L}_{\\mathrm{Maxwell}} = -\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu}.\n\\end{equation}\n\n\n\\subsubsection{Spinor Field}\n\nBased on the previous discussion, the Lagrangian for spinor fields can contain\n\\begin{equation}\n\t\\psi_L^\\dagger \\bar\\sigma^\\mu \\partial_\\mu \\psi_L,\\ \n\t\\psi_R^\\dagger \\sigma^\\mu \\partial_\\mu \\psi_R,\\ \n\t\\psi_L^\\dagger \\psi_R,\\ \\psi_R^\\dagger \\psi_L,\\ \n\t\\psi_L \\cdot \\psi_L,\\ \\psi_R \\cdot \\psi_R.\n\\end{equation}\nThe Dirac field describes the theory with both left-hand and right-hand spinors.\nThe Lagrangian is\n\\begin{equation}\n\t\\mathcal{L}_{\\mathrm{Dirac}}\n\t= \\bar\\psi \\left(i\\gamma^\\mu \\partial_\\mu - m\\right)\\psi,\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\t\\psi = \\left(\\begin{array}{c}\n\t\t\\psi_L \\\\ \\psi_R\n\t\\end{array}\\right),\\ \n\t\\bar\\psi = \\left(\\begin{array}{cc}\n\t\t\\psi_R^\\dagger & \\psi_L^\\dagger\n\t\\end{array}\\right),\\ \n\t\\gamma^\\mu = \\left(\\begin{array}{cc}\n\t\t0 & \\sigma^\\mu \\\\\n\t\t\\bar\\sigma^\\mu & 0\n\t\\end{array}\\right).\n\\end{eqnarray}\nIn addition, we could consider using the last two terms as the mass term; the resulting theory is the \\textit{Majorana field theory}:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal{L}^{L}_{\\mathrm{Majorana}}\n\t&= \\psi_L^\\dagger \\left(i\\bar\\sigma^\\mu \\partial_\\mu -m \\sigma^2 \\right) \\psi_L, \\\\\n\t\\mathcal{L}^{R}_{\\mathrm{Majorana}}\n\t&= \\psi_R^\\dagger \\left(i\\sigma^\\mu \\partial_\\mu -m \\sigma^2 \\right) \\psi_R. 
\n\\end{aligned}\n\\end{equation} \nFor the spinor basis, the Lorentz algebra is generated by\n\\begin{equation}\\label{eq:qft-diract-generator}\n\tS^{\\mu\\nu} = \\frac{i}{4}[\\gamma^\\mu, \\gamma^\\nu].\n\\end{equation}\nThe Lorentz group is represented by\n\\begin{equation}\\label{eq:qft-dirac-rep}\n\t\\Lambda_{\\frac{1}{2}} = \\exp\\left(\\frac{i}{2}\\omega_{\\mu\\nu} S^{\\mu\\nu}\\right).\n\\end{equation}\nUsing the familiar parametrization,\n\\begin{equation}\n\tS^{i0} = \\frac{i}{2}\\left[\\begin{array}{cc}\n\t\t\\sigma^i & 0 \\\\ 0 & -\\sigma^i\n\t\\end{array}\\right], \\quad \n\tS^{ij} = \\frac{1}{2}\\epsilon^{ijk} \\left[\\begin{array}{cc}\n\t\t\\sigma^k & 0 \\\\ 0 & -\\sigma^k\n\t\\end{array}\\right],\n\\end{equation}\nwhich agree with the transformation property (\\ref{eq:qft-left-right-spinor-rep}).\n\n\n\n\\section{Canonical Quantization}\n\n\\subsection{Scalar Field}\n\nFor the Klein-Gordon Lagrangian\n\\begin{equation}\n\t\\mathcal L = -\\frac{1}{2}\\phi(x)(\\partial^2+m^2)\\phi(x),\n\\end{equation}\nthe equation of motion is:\n\\begin{equation}\\label{eq:rkg-eom}\n\\begin{aligned}\n\t&\\ \\partial_\\mu \\left[\\frac{\\partial \\mathcal L}{\\partial(\\partial_\\mu \\phi)}\\right] - \\frac{\\partial \\mathcal L}{\\partial \\phi} = 0 \\\\\n\t\\Rightarrow &\\\n\t(\\partial_t^2-\\nabla^2+m^2)\\phi(\\bm x,t) = 0.\n\\end{aligned}\n\\end{equation}\nThe (classical) solution to Eq.~(\\ref{eq:rkg-eom}) is proportional to the plane wave:\n\\begin{equation}\n\t\\phi_{\\bm k}(\\bm x, t) \\propto e^{-i\\omega_{\\bm{k}}t+i\\bm{k}\\cdot\\bm{x}} + e^{i\\omega_{\\bm{k}}t-i\\bm{k}\\cdot\\bm{x}},\n\\end{equation}\nwhere the energy is $\\omega_{\\bm{k}}=\\sqrt{\\bm{k}^2+m^2}$ and $\\bm k$ is the conserved momentum.\nThe general solution to Eq.~(\\ref{eq:rkg-eom}) is\n\\begin{equation}\n\t\\phi(\\bm x,t) \\propto \\int \\frac{d^{3} k}{(2\\pi)^{3}} \\left(\n\t\ta_{k}e^{-i\\omega_{\\bm{k}}t+i\\bm{k}\\cdot\\bm{x}} + \n\t\ta^*_{k}e^{i\\omega_{\\bm{k}}t-i\\bm{k}\\cdot\\bm{x}} \n\t\\right),\n\\end{equation}\nwhere $a_k$'s are arbitrary c-numbers.\n\nThe canonical quantization promotes the coefficients $a_{k}/a_{k}^*$ to the particle annihilation/creation operators $a_{k}/a_{k}^\\dagger$, with the commutation relation\n\\begin{equation}\n\t[a_{k}, a_{p}^\\dagger] = (2\\pi)^{3} \\delta^{3}(\\bm{k}-\\bm{p}).\n\\end{equation}\n\n\\subsubsection{Single-particle States}\nThe single-particle state with momentum $\\bm k$ is created by the $a_{k}^{\\dagger}$ operator acting on the vacuum:\n\\begin{equation}\n\t|\\bm{k}\\rangle \\equiv \\sqrt{2\\omega_{\\bm k}} a_{k}^{\\dagger}|0\\rangle,\n\t\\label{eq:rel-single-particle}\n\\end{equation}\nwhere $|\\bm{k}\\rangle$ is a state with a single particle of momentum $\\bm{k}$.\nThe factor of $\\sqrt{2 \\omega_{\\bm k}}$ in (\\ref{eq:rel-single-particle}) is a convention to ensure Lorentz invariance.\nTo compute the normalization of one-particle states, we start by requiring the vacuum state to be of unit norm:\n\\begin{equation}\n\t\\langle 0|0\\rangle=1,\n\\end{equation}\nwhich, together with the canonical commutation relation of particle annihilation and creation operators, leads to\n\\begin{equation}\n\t\\langle\\bm{p}|\\bm{k}\\rangle \n\t= 2\\sqrt{\\omega_{\\bm p} \\omega_{\\bm k}}\\left\\langle 0\\left|a_{p} a_{k}^{\\dagger}\\right| 0\\right\\rangle\n\t= 2 \\omega_{\\bm p}(2\\pi)^{3} \\delta^{3}(\\bm{p}-\\bm{k}).\n\\end{equation}\nThe identity operator for one-particle states under such norm is\n\\begin{equation}\n\t1=\\int \\frac{d^{3} p}{(2\\pi)^{3}} 
\\frac{1}{2\\omega_{\\bm p}}|\\bm{p}\\rangle\\langle\\bm{p}|, \\label{eq:rel-identity}\n\\end{equation}\nwhich we can check with\n\\begin{equation*}\n\t|\\bm{k}\\rangle\n\t=\\int \\frac{d^{3} p}{(2\\pi)^{3}} \\frac{1}{2\\omega_{\\bm p}}|\\bm{p}\\rangle\\langle\\bm{p}|\\bm{k}\\rangle\n\t=\\int \\frac{d^{3} p}{(2\\pi)^{3}} \\frac{1}{2\\omega_{\\bm p}} 2\\omega_{\\bm p}(2\\pi)^3 \\delta^3(\\bm{p}-\\bm{k})|\\bm{p}\\rangle\n\t=|\\bm{k}\\rangle.\n\\end{equation*}\nWe see that the identity operator (\\ref{eq:rel-identity}) under such convention is Lorentz invariant, since it can be expressed as\n\\begin{equation}\n\t1 = 2\\pi \\int \\frac{d^{3} p d\\omega}{(2\\pi)^{4}} \\delta(\\omega^2-{\\bm{p}}^2-m^2) |\\bm p\\rangle\\langle \\bm p|.\n\\end{equation}\n\nThe single-particle states defined above can be used to fix the normalization:\n\\begin{equation}\n\t\\langle \\bm k|\\phi(\\bm x,0)|0\\rangle = e^{-i \\bm k\\cdot \\bm x},\n\\end{equation}\nleading to the field expansion\n\\begin{equation}\n\t\\phi(x)\n\t=\\int \\frac{d^{3} k}{(2\\pi)^{3}} \\frac{1}{\\sqrt{2\\omega_{\\bm k}}}\\left(a_k \n\t\te^{-i k \\cdot x}+a_k^{\\dagger} e^{i k \\cdot x}\\right).\n\\end{equation}\n\n\\subsubsection{Hamiltonian}\n\nWe can obtain the Hamiltonian for the Klein-Gordon field using the Legendre transformation:\n\\begin{equation}\n\\begin{aligned}\n\tH &= \\int d^3 x\\ \\left[\\pi(x) \\dot{\\phi}(x) - \\mathcal L(x) \\right] \\\\\n\t&= \\int d^3 x\\ \\frac{1}{2} \\left[\\pi^2 + (\\nabla \\phi)^2 + m^2 \\phi^2 \\right],\n\\end{aligned}\n\\end{equation}\nwhere the canonical momentum is defined as\n\\begin{equation}\n\\begin{aligned}\n\t\\pi(x) &= \\frac{\\partial \\mathcal L}{\\partial \\dot{\\phi}} = \\dot{\\phi}(x) \\\\\n\t&= -i\\int \\frac{d^{3} k}{(2\\pi)^{3}} \\sqrt{\\frac{\\omega_{k}}{2}}\\left(a_k \n\t\te^{-i k \\cdot x} - a_k^{\\dagger} e^{i k \\cdot x}\\right).\n\\end{aligned}\n\\end{equation}\nThe $\\pi^2$ term expands as\n\\begin{equation}\n\t\\pi^2(x) = \\int \\frac{d^{3} k_1}{(2\\pi)^{3}} \\frac{d^{3} k_2}{(2\\pi)^{3}}\n\t\t\\frac{\\sqrt{\\omega_{k_1} \\omega_{k_2}}}{2} \\left(a^\\dagger_{k_1}a_{k_2}e^{i(k_1-k_2)x} - a^\\dagger_{k_1} a^\\dagger_{k_2} e^{i(k_1+k_2)x} + h.c.\\right).\n\\end{equation}\nWe note that after integrating over $\\bm x$, the phase factor $e^{i(k_1-k_2)x}$ produces a delta function in $k_1$ and $k_2$.\nThe $a^\\dagger_{k_1} a^\\dagger_{k_2}$ terms will finally be cancelled by other terms.\nWe temporarily ignore such terms.\nThe contribution from the first term is then\n\\begin{equation}\n\t\\int d^3 x\\ \\pi^2(x) = \\int \\frac{d^3 k}{(2\\pi)^3} \\frac{\\omega_k}{2} a_k^\\dagger a_k + h.c.\n\\end{equation}\nThe second term is\n\\begin{equation}\n\t(\\nabla \\phi)^2 = \\int \\frac{d^{3} k_1}{(2\\pi)^{3}} \\frac{d^{3} k_2}{(2\\pi)^{3}}\n\t\t\\frac{\\bm k_1 \\cdot \\bm k_2}{2\\sqrt{\\omega_{k_1}\\omega_{k_2}}} \\left(a^\\dagger_{k_1}a_{k_2}e^{i(k_1-k_2)x} - a^\\dagger_{k_1} a^\\dagger_{k_2} e^{i(k_1+k_2)x} + h.c.\\right).\n\\end{equation}\nThe third term is\n\\begin{equation}\n\tm^2 \\phi^2 = \\int \\frac{d^{3} k_1}{(2\\pi)^{3}} \\frac{d^{3} k_2}{(2\\pi)^{3}}\n\t\t\\frac{m^2}{2\\sqrt{\\omega_{k_1}\\omega_{k_2}}} \\left(a^\\dagger_{k_1}a_{k_2}e^{i(k_1-k_2)x} + a^\\dagger_{k_1} a^\\dagger_{k_2} e^{i(k_1+k_2)x} + h.c.\\right).\n\\end{equation}\nAll three contributions sum up as\n\\begin{equation}\n\\begin{aligned}\n\tH &= \\int \\frac{d^3 k}{(2\\pi)^3} \\frac{1}{2}\\left(\\frac{\\omega_k}{2} + \\frac{\\bm k^2+m^2}{2 \\omega_k}\\right) \\left(a_k^\\dagger a_k + h.c. 
\\right) \\\\\n\t&= \\int \\frac{d^3 k}{(2\\pi)^3} \\omega_k \\left(a_k^\\dagger a_k +\\frac{1}{2} \\right).\n\\end{aligned}\n\\end{equation}\n\nWe can now check that the $a^\\dagger a^\\dagger$ terms indeed have no contribution, as the total contribution for each momentum $k$ is\n\\begin{equation}\n\t-\\frac{\\omega_k}{2} + \\frac{\\bm k^2}{2\\omega_k} + \\frac{m^2}{2\\omega_k} = 0.\n\\end{equation}\n\nThe Hamiltonian in the operator form also makes it manifest that\n\\begin{equation}\n\tH |\\bm k\\rangle = \\omega_k |\\bm k\\rangle.\n\\end{equation}\n\n\n\n\n\\subsubsection{Correlation Function}\nConsider the two-point correlation (propagator):\n\\begin{equation}\n\\begin{aligned}\n\ti\\Delta(x_1-x_2) &\\equiv \\langle 0|T \\phi(x_1) \\phi(x_2) |0\\rangle \\\\\n\t&= \\theta(t_1-t_2) \\langle 0|\\phi(x_1) \\phi(x_2) |0\\rangle \n\t+ \\theta(t_2-t_1) \\langle 0|\\phi(x_2) \\phi(x_1) |0\\rangle.\n\\end{aligned}\n\\end{equation}\nNote that\n\\begin{equation}\n\t\\langle 0|\\phi(x_1) \\phi(x_2) |0\\rangle\n\t= \\int\\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{2\\omega_k} e^{i\\bm k\\cdot (\\bm x_1-\\bm x_2)-i\\omega_{\\bm k}\\tau},\n\\end{equation}\nwhere $\\tau =t_1-t_2$.\nThe propagator can be written as\n\\begin{equation}\n\\begin{aligned}\n\ti\\Delta(x_1-x_2) \n\t&= \\int\\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{2\\omega_k} e^{i\\bm k\\cdot (\\bm x_1-\\bm x_2)}\\left[e^{-i\\omega_{\\bm k}\\tau}\\theta(\\tau)+e^{i\\omega_{\\bm k}\\tau}\\theta(-\\tau)\\right] \\\\\n\t&= \\int\\frac{d^{3} k}{(2\\pi)^{3}} e^{i\\bm k\\cdot (\\bm x_1-\\bm x_2)}\\int \\frac{d\\omega}{2\\pi i}\\frac{-e^{i\\omega\\tau}}{\\omega^2-\\omega_k^2+i\\epsilon} \\\\\n\t&= \\int\\frac{d^{4} k}{(2\\pi)^{4}} e^{-i k\\cdot (x_1-x_2)}\\frac{i}{k^2-m^2+i\\epsilon}.\n\\end{aligned}\n\\end{equation}\nWe have used the identity\n\\begin{equation*}\n\t\\frac{1}{2\\omega_k} \\left[e^{-i\\omega_{\\bm k}\\tau}\\theta(\\tau)+e^{i\\omega_{\\bm k}\\tau}\\theta(-\\tau)\\right] \n\t= \\int \\frac{d\\omega}{2\\pi i} \\frac{-e^{i\\omega\\tau}}{\\omega^2-\\omega_k^2+i\\epsilon},\n\\end{equation*}\nwhere an infinitesimal number $\\epsilon$ is included to move the singularities away from the real axis.\nAny final result shall take the ($\\epsilon \\rightarrow 0^+$) limit.\nSometimes the infinitesimal $\\epsilon$ will be absorbed into the mass, i.e., $m^2 \\rightarrow m^2-i\\epsilon$.\n\n\n\\subsection{Vector Field}\n\nAlthough forbidden by gauge invariance, we consider a vector field with a nonzero mass term.\nActually, a vector field can obtain mass from spontaneous symmetry breaking.\nFor example, the W and Z bosons in the weak interaction have nonzero mass.\nThe action with the mass term is:\n\\begin{equation}\n\tS = \\int \\frac{d^4 k}{(2\\pi)^4} \\tilde{A}^{\\mu *}(k) \\left(-k^2 g_{\\mu\\nu}+k_\\mu k_\\nu + m^2 g_{\\mu\\nu} \\right) \\tilde{A}^\\nu(k).\n\\end{equation}\nThe equation of motion for the action is\n\\begin{equation}\n\\begin{aligned}\n\t\\frac{\\delta S}{\\delta \\tilde{A}^{\\mu *}(k)} = 0 \n\t\\quad \\Longrightarrow \\quad \n\t\\left(-k^2 g_{\\mu\\nu}+k_\\mu k_\\nu + m^2 g_{\\mu\\nu} \\right) \\tilde{A}^\\nu(k) = 0.\n\\end{aligned}\n\\end{equation}\nThis equation is sometimes called the \\textit{Proca equation}.\nNote that the Proca equation implies\n\\begin{equation}\n\t\\partial_\\mu A^\\mu = k_\\nu \\tilde{A}^\\nu = 0.\n\\end{equation}\nSo the Proca equation becomes\n\\begin{equation}\n\t\\left(-k^2 + m^2 \\right) \\tilde{A}_\\mu(k) = 0,\n\\end{equation}\nwhich is similar to the Klein-Gordon field with multiple 
components.\n\n\\subsubsection{Polarization Vectors}\nSince we are now dealing with a field with space-time indices, it is helpful to introduce a set of basis vectors.\nThe general solution to the Proca equation is also a plane wave labelled by momentum $\\bm k$.\nFor each momentum, we introduce a \\textit{longitudinal polarization vector}\n\\begin{equation}\n\t\\epsilon(\\bm k,3) \\equiv \\left(\\frac{|\\bm k|}{m}, \\frac{\\bm k}{|\\bm k|}\\frac{k_0}{m} \\right)\n\\end{equation}\nand two \\textit{transverse polarization vectors}\n\\begin{equation}\n\t\\epsilon(\\bm k, 1) \\equiv (0, \\bm \\epsilon(\\bm k, 1)), \\quad\n\t\\epsilon(\\bm k, 2) \\equiv (0, \\bm \\epsilon(\\bm k, 2)),\n\\end{equation}\nsatisfying the orthogonality relations\n\\begin{equation}\n\t\\bm \\epsilon(\\bm k, 1)\\cdot \\bm k = \\bm \\epsilon(\\bm k, 2)\\cdot \\bm k = k^\\mu \\epsilon_\\mu(\\bm k, 3) = 0.\n\\end{equation}\nSo these three polarization vectors together with the 4-momentum $k$ form a basis for space-time.\nFor notational convenience, we define\n\\begin{equation}\n\t\\epsilon(\\bm k,0) \\equiv \\frac{k}{m}.\n\\end{equation}\nThe vector field modes can then be written as\n\\begin{equation}\n\tA_\\mu(\\bm k, \\lambda; x) \\propto e^{-i\\omega_k t + i\\bm k \\cdot \\bm x} \\epsilon_\\mu(\\bm k, \\lambda).\n\\end{equation}\n\nThe condition $k \\cdot \\tilde A = 0$ is satisfied if we require no mode in the $\\epsilon(\\bm k, 0)$ polarization.\nThen the vector field is basically three independent scalar fields, leading to the field expansion:\n\\begin{equation}\n\tA^\\mu(x) = \\int \\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{\\sqrt{2\\omega_k}}\n\t\\sum_{j=1}^3 \\left[\\epsilon^\\mu(\\bm k, j) a_{k,j} e^{-ik\\cdot x} + \n\t\\epsilon^{\\mu}(\\bm k, j) a^\\dagger_{k,j} e^{ik\\cdot x}\\right].\n\\end{equation}\nA single-particle state with polarization vector $\\epsilon(\\bm k, j)$ is defined as\n\\begin{equation}\n\t|k,\\epsilon_j\\rangle = \\epsilon(\\bm k, j) \\sqrt{2\\omega_k} a^\\dagger_{k,j}|0\\rangle.\n\\end{equation}\n\n\n\\subsubsection{Massless Polarization Vectors}\n\nFor the massless vector field, the polarization $\\epsilon(\\bm k, 3)$ is not well-defined.\nWe modify the definition to\n\\begin{equation}\n\\begin{aligned}\n\t\\epsilon(\\bm k, 0) &\\equiv (1,0,0,0), \\\\\n\t\\epsilon(\\bm k, 3) &\\equiv (0,0,0,1).\n\\end{aligned}\n\\end{equation}\nNote that we have chosen the spatial axes so that the momentum points in the $z$-direction.\n\nWe can add two types of virtual particles generated by $a^\\dagger_{k,0}$ and $a^\\dagger_{k,3}$ respectively, which are usually called the \\textit{scalar photons} and \\textit{longitudinal photons}.\nHowever, the gauge fixing condition requires\n\\begin{equation}\n\t\\partial_\\mu A^\\mu(x)|\\psi\\rangle = 0 \\quad \\Longrightarrow \\quad\n\t(a_{k,0}-a_{k,3})|\\psi\\rangle = 0\n\\end{equation}\nfor all states $|\\psi\\rangle$ in the gauge-fixed Hilbert space.\n\nWe can show that the scalar and longitudinal modes are just the result of gauge transformations.\nThe physical polarizations are the transverse modes, and the field expansion is\n\\begin{equation}\n\tA^\\mu(x) = \\int \\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{\\sqrt{2\\omega_k}}\n\t\\sum_{j=1}^2 \\left[\\epsilon^\\mu(\\bm k, j) a_{k,j} e^{-ik\\cdot x} + \n\t\\epsilon^{\\mu}(\\bm k, j) a^\\dagger_{k,j} e^{ik\\cdot x}\\right].\n\\end{equation}\n\n\n\\subsubsection{Correlation Function}\nTo obtain the correlation for the massless vector field, we consider a modified Lagrangian:\n\\begin{equation}\n\t\\mathcal L = 
These three polarization vectors, together with the 4-momentum $k$, form a basis for space-time vectors.\nFor notational convenience, we define\n\\begin{equation}\n\t\\epsilon(\\bm k,0) \\equiv \\frac{k}{m}.\n\\end{equation}\nA plane-wave mode of the vector field can then be written as\n\\begin{equation}\n\tA_\\mu(\\bm k, \\lambda; x) \\propto e^{-i\\omega_k t + i\\bm k \\cdot \\bm x} \\epsilon_\\mu(\\bm k, \\lambda).\n\\end{equation}\n\nThe condition $k \\cdot \\tilde A = 0$ is satisfied if we require no mode in the $\\epsilon(\\bm k, 0)$ polarization.\nThen the vector field is basically three independent scalar fields, leading to the field expansion:\n\\begin{equation}\n\tA^\\mu(x) = \\int \\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{\\sqrt{2\\omega_k}}\n\t\\sum_{j=1}^3 \\left[\\epsilon^\\mu(\\bm k, j) a_{k,j} e^{-ik\\cdot x} + \n\t\\epsilon^{\\mu}(\\bm k, j) a^\\dagger_{k,j} e^{ik\\cdot x}\\right].\n\\end{equation}\nA single-particle state with polarization vector $\\epsilon(\\bm k, j)$ is defined as\n\\begin{equation}\n\t|k,\\epsilon_j\\rangle = \\epsilon(\\bm k, j) \\sqrt{2\\omega_k} a^\\dagger_{k,j}|0\\rangle.\n\\end{equation}\n\n\n\\subsubsection{Massless Polarization Vectors}\n\nFor the massless vector field, the polarization $\\epsilon(\\bm k, 3)$ is not well-defined.\nWe modify the definition to\n\\begin{equation}\n\\begin{aligned}\n\t\\epsilon(\\bm k, 0) &\\equiv (1,0,0,0), \\\\\n\t\\epsilon(\\bm k, 3) &\\equiv (0,0,0,1).\n\\end{aligned}\n\\end{equation}\nNote that we have chosen the spatial axes so that the momentum points in the $z$-direction.\n\nWe can add two types of virtual particles generated by $a^\\dagger_{k,0}$ and $a^\\dagger_{k,3}$ respectively, which are usually called the \\textit{scalar photons} and \\textit{longitudinal photons}.\nHowever, the gauge fixing (Gupta--Bleuler) condition requires\n\\begin{equation}\n\t\\partial_\\mu A^{\\mu(+)}(x)|\\psi\\rangle = 0 \\quad \\Longrightarrow \\quad\n\t(a_{k,0}-a_{k,3})|\\psi\\rangle = 0\n\\end{equation}\nfor all states $|\\psi\\rangle$ in the gauge-fixed Hilbert space, where $A^{\\mu(+)}$ denotes the positive-frequency (annihilation) part of the field.\n\nWe can show that the scalar and longitudinal modes are just the result of gauge transformations.\nThe physical polarizations are the two transverse modes, and the field expansion is\n\\begin{equation}\n\tA^\\mu(x) = \\int \\frac{d^{3} k}{(2\\pi)^{3}}\\frac{1}{\\sqrt{2\\omega_k}}\n\t\\sum_{j=1}^2 \\left[\\epsilon^\\mu(\\bm k, j) a_{k,j} e^{-ik\\cdot x} + \n\t\\epsilon^{\\mu}(\\bm k, j) a^\\dagger_{k,j} e^{ik\\cdot x}\\right].\n\\end{equation}\n\n\n\\subsubsection{Correlation Function}\nTo obtain the correlation for the massless vector field, we consider a modified Lagrangian:\n\\begin{equation}\n\t\\mathcal L = -\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu}-\\frac{\\xi}{2}(\\partial_\\mu A^\\mu)^2.\n\\end{equation}\nIn momentum space:\n\\begin{equation}\n\t\\tilde{\\mathcal L}_k = \\tilde{A}^\\mu(-k)\\left(-k^2 g_{\\mu\\nu}+(1-\\xi)k_\\mu k_\\nu\\right) \\tilde{A}^\\nu(k).\n\\end{equation}\nTo construct the inverse matrix we make a general symmetric ansatz\n\\begin{equation}\n\t(G_\\gamma^{-1})^{\\mu\\nu}(k)=A\\left(k^{2}\\right) g^{\\mu\\nu}+B\\left(k^{2}\\right) k^{\\mu} k^{\\nu}.\n\\end{equation}\nRequiring that\n\\begin{equation}\n\t(G_\\gamma)_{\\mu\\sigma}(k)(G_\\gamma^{-1})^{\\sigma\\nu}(k) = \\delta_\\mu^\\nu,\n\\end{equation}\nand comparing the coefficients, we get the conditions\n\\begin{equation}\\label{eq:qft-propagator-equation}\n\\begin{aligned}\n\t-k^{2} A\\left(k^{2}\\right) &=1, \\\\\n\t\\xi k^{2} B\\left(k^{2}\\right) &=(1-\\xi) A\\left(k^{2}\\right).\n\\end{aligned}\n\\end{equation}\nIn the case $\\xi=0$ these equations are not compatible. Without the gauge-fixing term the matrix $(G_\\gamma)_{\\mu \\nu}$ cannot be inverted (since the determinant vanishes) and the Feynman propagator cannot be constructed. If $\\xi \\neq 0$, however, no problems arise and the system of equations (\\ref{eq:qft-propagator-equation}) is solved by\n\\begin{equation}\n\tA\\left(k^{2}\\right)=-\\frac{1}{k^{2}}, \\quad B\\left(k^{2}\\right)=\\frac{\\xi-1}{\\xi} \\frac{1}{\\left(k^{2}\\right)^{2}},\n\\end{equation}\nwhich leads to\n\\begin{equation}\n\tG_\\gamma^{\\mu\\nu}(k) = \\frac{1}{k^2}\\left(-g^{\\mu\\nu}+\\frac{\\xi-1}{\\xi}\\frac{k^\\mu k^\\nu}{k^2}\\right).\n\\end{equation}\nDifferent choices of $\\xi$ correspond to different gauge fixings.\nThe Feynman gauge corresponds to $\\xi=1$, and the propagator takes the simplest form\n\\begin{equation}\n\tG_\\gamma^{\\mu\\nu}(k) = \\frac{-g^{\\mu\\nu}}{k^2}.\n\\end{equation}\n\n\n\n\\subsection{Spinor Field}\nThe equation of motion for the Dirac field is\n\\begin{equation}\n\\begin{aligned}\n\t\\partial_\\mu\\frac{\\partial\\mathcal{L}}{\\partial(\\partial_\\mu\\psi)} - \\frac{\\partial \\mathcal L}{\\partial \\psi} = 0 \n\t\\quad &\\Longrightarrow \\quad\n\t\\bar\\psi(i\\overleftarrow{\\cancel \\partial}-m) = 0, \\\\\n\t\\partial_\\mu\\frac{\\partial\\mathcal{L}}{\\partial(\\partial_\\mu\\bar\\psi)} - \\frac{\\partial \\mathcal L}{\\partial \\bar\\psi} = 0 \n\t\\quad &\\Longrightarrow \\quad\n\t(i\\overrightarrow{\\cancel \\partial}-m)\\psi = 0.\n\\end{aligned}\n\\end{equation}\nThis EOM is a matrix equation.\nThe general solution of the Dirac equation can be written as a linear combination of plane waves (with positive and negative energy):\\footnote{Note that we have chosen to put the $+$ sign into the exponential, rather than having $p^{0}<0$.}\n\\begin{equation}\n\t\\psi_p(x) = \\begin{cases}\n\t\tu(p) e^{-i p \\cdot x} & p^{0}>0 \\\\\n\t\tv(p) e^{+i p \\cdot x} & p^{0}<0\n\t\\end{cases}, \\quad p^{2}=m^{2}.\n\\end{equation}\nIn momentum space, $u(p)$ and $v(p)$ satisfy:\n\\begin{equation}\n\t\\left[\\begin{array}{cc}\n\t\t-m & p \\cdot \\sigma \\\\ p\\cdot \\bar\\sigma & -m\n\t\\end{array} \\right] u_s(p) = \n\t\\left[\\begin{array}{cc}\n\t\t-m & -p \\cdot \\sigma \\\\ -p\\cdot \\bar\\sigma & -m\n\t\\end{array} \\right] v_s(p) = 0.\n\\end{equation}\nFor a massive Dirac field, we can choose the rest frame where $p = (m,0,0,0)$; the matrix equation becomes\\footnote{We first consider the case where there is only one spatial dimension. 
It corresponds to the choice of coordinates such that the momentum points in the $z$ direction.}\n\\begin{equation}\n\\begin{aligned}\n\t\\left[\\begin{array}{cc}\n\t\t-m & m \\\\\n\t\tm & -m\n\t\\end{array}\\right] u_s = 0 \n\t\\quad &\\Longrightarrow \\quad\n\tu_s = \\sqrt{m}\\left[\\begin{array}{c}\n\t\t\\xi_s \\\\ \\xi_s\n\t\\end{array}\\right], \\\\\n\t\\left[\\begin{array}{cc}\n\t\tm & m \\\\\n\t\tm & m\n\t\\end{array}\\right] v_s = 0 \n\t\\quad &\\Longrightarrow \\quad\n\tv_s = \\sqrt{m}\\left[\\begin{array}{c}\n\t\t\\eta_s \\\\ -\\eta_s\n\t\\end{array}\\right],\n\\end{aligned}\n\\end{equation}\nwhere $\\xi$ and $\\eta$ each have two independent solutions.\nFor example, four linearly independent solutions (up to normalization) are\n\\begin{equation}\n\tu_{\\uparrow} = \\left[\\begin{array}{c} 1 \\\\ 0 \\\\ 1 \\\\ 0 \\end{array}\\right], \\quad\n\tu_{\\downarrow} = \\left[\\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\\\ 1 \\end{array}\\right], \\quad\n\tv_{\\uparrow} = \\left[\\begin{array}{c} -1 \\\\ 0 \\\\ 1 \\\\ 0 \\end{array}\\right], \\quad\n\tv_{\\downarrow} = \\left[\\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\\\ -1 \\end{array}\\right].\n\\end{equation}\nThe Dirac spinor is a complex four-component object, with eight real degrees of freedom. \nThe equations of motion reduce it to four degrees of freedom, which, as we will see, can be interpreted as spin up and spin down for particle and antiparticle.\n\n\n\\subsubsection{Solution in General Frame}\nTo derive a more general expression, we can solve the equations again in the boosted frame and match the normalization. \nIf $p=(E,0,0,p_z)$ then\n\\begin{equation}\n\tp \\cdot \\sigma=\\left[\\begin{array}{cc}\n\t\tE-p_{z} & 0 \\\\\n\t\t0 & E+p_{z}\n\t\\end{array}\\right], \\quad \n\tp \\cdot \\bar{\\sigma}=\\left[\\begin{array}{cc}\n\t\tE+p_{z} & 0 \\\\\n\t\t0 & E-p_{z}\n\t\\end{array}\\right].\n\\end{equation}\nLet $a=\\sqrt{E-p_{z}}$ and $b=\\sqrt{E+p_{z}}$, then $m^{2}=\\left(E-p_{z}\\right)\\left(E+p_{z}\\right)=a^{2} b^{2}$ and the Dirac equation becomes\n\\begin{equation}\n\t\\left[\\begin{array}{cccc}\n\t\t-a b & 0 & a^{2} & 0 \\\\\n\t\t0 & -a b & 0 & b^{2} \\\\\n\t\tb^{2} & 0 & -a b & 0 \\\\\n\t\t0 & a^{2} & 0 & -a b\n\t\\end{array}\\right] u_{s}(p) = \n\t\\left[\\begin{array}{cccc}\n\t\ta b & 0 & a^{2} & 0 \\\\\n\t\t0 & a b & 0 & b^{2} \\\\\n\t\tb^{2} & 0 & a b & 0 \\\\\n\t\t0 & a^{2} & 0 & a b\n\t\\end{array}\\right] v_{s}(p) = 0.\n\\end{equation}\n\nThe solutions are\n\\begin{equation}\n\tu_{s}=\\left(\\begin{array}{ll}\n\t\\left[\\begin{array}{ll}\n\t\ta & 0 \\\\\n\t\t0 & b\n\t\\end{array}\\right] \\xi_{s} \\\\\n\t\\left[\\begin{array}{ll}\n\t\tb & 0 \\\\\n\t\t0 & a\n\t\\end{array}\\right] \\xi_{s}\n\t\\end{array}\\right), \\quad \n\tv_{s}=\\left(\\begin{array}{ll}\n\t\\left[\\begin{array}{ll}\n\t\ta & 0 \\\\\n\t\t0 & b\n\t\\end{array}\\right] \\eta_{s} \\\\\n\t-\\left[\\begin{array}{ll}\n\t\tb & 0 \\\\\n\t\t0 & a\n\t\\end{array}\\right] \\eta_{s}\n\t\\end{array}\\right).\n\\end{equation}\nUsing\n\\begin{equation}\n\t\\sqrt{p \\cdot \\sigma}=\\left[\\begin{array}{cc}\n\t\t\\sqrt{E-p_{z}} & 0 \\\\\n\t\t0 & \\sqrt{E+p_{z}}\n\t\\end{array}\\right], \\quad \n\t\\sqrt{p \\cdot \\bar{\\sigma}}=\\left[\\begin{array}{cc}\n\t\t\\sqrt{E+p_{z}} & 0 \\\\\n\t\t0 & \\sqrt{E-p_{z}}\n\t\\end{array}\\right],\n\\end{equation}\nwe can write more generally\n\\begin{equation}\n\tu_{s}(p) = \\left(\\begin{array}{c}\n\t\t\\sqrt{p \\cdot \\sigma} \\xi_{s} \\\\\n\t\t\\sqrt{p \\cdot \\bar{\\sigma}} \\xi_{s}\n\t\\end{array}\\right), \\quad \n\tv_{s}(p) = \\left(\\begin{array}{c}\n\t\t\\sqrt{p 
\\cdot \\sigma} \\eta_{s} \\\\\n\t\t-\\sqrt{p \\cdot \\bar{\\sigma}} \\eta_{s}\n\t\\end{array}\\right),\n\\end{equation}\nwhere the square root of a matrix can be defined by changing to the diagonal basis, taking the square root of the eigenvalues, then changing back to the original basis. \nIn practice, we will usually pick $p$ along the $z$ axis, so we do not need to know how to make sense of $\\sqrt{p \\cdot \\sigma}$. Then the four solutions are\n\\begin{equation}\\label{eq:qft-dirac-solutions}\n\\begin{aligned}\n\tu^{1}(p) &= \\left(\\begin{array}{c}\n\t\t\\sqrt{E-p_{z}} \\\\ 0 \\\\\n\t\t\\sqrt{E+p_{z}} \\\\ 0\n\t\\end{array}\\right), & \n\tu^{2}(p) &= \\left(\\begin{array}{c}\n\t\t0 \\\\ \\sqrt{E-p_{z}} \\\\\n\t\t0 \\\\ \\sqrt{E+p_{z}}\n\t\\end{array}\\right), \\\\\n\tv^{1}(p) &= \\left(\\begin{array}{c}\n\t\t\\sqrt{E-p_{z}} \\\\ 0 \\\\\n\t\t-\\sqrt{E+p_{z}} \\\\ 0\n\t\\end{array}\\right), & \n\tv^{2}(p) &= \\left(\\begin{array}{c}\n\t\t0 \\\\ \\sqrt{E-p_{z}} \\\\\n\t\t0 \\\\ -\\sqrt{E+p_{z}}\n\t\\end{array}\\right).\n\\end{aligned}\n\\end{equation}\nIn any frame the $u^{s}$ are the positive-frequency electron solutions, and the $v^{s}$ are the negative-frequency solutions, or equivalently, positive-frequency positrons.\n\nFor massless spinors, $p_{z}=\\pm E$ and the explicit solutions in Eq. (\\ref{eq:qft-dirac-solutions}) are four-component spinors with a single non-zero component, describing states of fixed helicity. \nThe spinor solutions for massless electrons are sometimes called polarizations, and are useful for computing polarized electron scattering amplitudes.\n\nFor Weyl spinors, there are only four real degrees of freedom off-shell and two real degrees of freedom on-shell. \nRecalling that the Dirac equation splits up into separate equations for $\\psi_{L}$ and $\\psi_{R}$, the Dirac spinors with zeros in the bottom two rows will be $\\psi_{L}$ and those with zeros in the top two rows will be $\\psi_{R}$. \nSince $\\psi_{L}$ and $\\psi_{R}$ have two degrees of freedom each, these must be particle and antiparticle of the same helicity. \nThe embedding of Weyl spinors into fields this way induces irreducible unitary representations of the Poincar\\'e group for $m=0$.\n\n\\subsubsection{Normalization and Spin Sum}\nThe normalization chosen this way gives the orthogonality relations:\n\\begin{equation}\\label{eq:qft-dirac-otho-1}\n\\begin{aligned}\n\t\\bar{u}^{r}(p) u^{s}(p) &= +2 m \\delta^{r s}, \\\\\n\t\\bar{v}^{r}(p) v^{s}(p) &= -2 m \\delta^{r s}.\n\\end{aligned}\n\\end{equation} \nThis is the (conventional) normalization for the spinor inner product for massive Dirac spinors. 
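For example, with the explicit solutions in Eq. (\\ref{eq:qft-dirac-solutions}) and $\\gamma^{0}$ in the Weyl basis (which swaps the upper and lower two-component blocks), $\\bar{u}^{1}(p) u^{1}(p) = u^{1\\dagger}(p)\\gamma^{0} u^{1}(p) = 2\\sqrt{(E-p_{z})(E+p_{z})} = 2m$, in agreement with the first relation.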
\nIt is also easy to check that\n\\begin{equation}\n\t\\bar u_s(p) v_{s'}(p) = \\bar v_s(p) u_{s'}(p) = 0.\n\\end{equation}\nWe can further check that additional orthogonality relations hold:\n\\begin{equation}\n\\begin{aligned}\n\tu^{r \\dagger}(p) u^{s}(p) &= 2 \\omega_{\\bm p} \\delta^{r s}, \\\\\n\tv^{r \\dagger}(p) v^{s}(p) &= 2 \\omega_{\\bm p} \\delta^{r s}.\n\\end{aligned}\n\\end{equation}\nAnd if we define $\\bar p \\equiv (E,-\\vec p)$, there is another set of orthogonality relations:\n\\begin{equation}\n\tu^{r\\dagger}(p) v^{s}(\\bar p) = \n\tv^{r\\dagger}(p) u^{s}(\\bar p) =0.\n\\end{equation}\nA useful identity is the spin sum identity:\n\\begin{equation}\n\\begin{aligned}\n\t\\sum_{s} u^{s}(p) \\bar{u}^{s}(p) &= \\cancel p+m, \\\\\n\t\\sum_{s} v^{s}(p) \\bar{v}^{s}(p) &= \\cancel p-m.\n\\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Field Expansion and Correlation}\nThe Dirac field expansion is\n\\begin{equation}\n\\begin{aligned}\n\t\\psi(x) &=\\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{\\sqrt{2 \\omega_{\\mathbf{p}}}} \n\t\t\\sum_{s}\\left(a_{\\mathbf{p}}^{s} u^{s}(p) e^{-i p \\cdot x}\n\t\t+b_{\\mathbf{p}}^{s \\dagger} v^{s}(p) e^{i p \\cdot x}\\right), \\\\\n\t\\bar{\\psi}(x) &=\\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{\\sqrt{2 \\omega_{\\mathbf{p}}}} \n\t\t\\sum_{s}\\left(b_{\\mathbf{p}}^{s} \\bar{v}^{s}(p) e^{-i p \\cdot x}\n\t\t+a_{\\mathbf{p}}^{s \\dagger} \\bar{u}^{s}(p) e^{i p \\cdot x}\\right).\n\\end{aligned}\n\\end{equation}\nNow let us investigate the propagator\n\\begin{equation}\n\\begin{aligned}\n\tiD_{F,\\alpha\\beta}(x_1-x_2) &= \\langle0|T\\psi_\\alpha(x_1)\\bar\\psi_\\beta(x_2)|0\\rangle \\\\\n\t&= \\theta(\\tau) \\langle0|\\psi_\\alpha(x_1)\\bar\\psi_\\beta(x_2)|0\\rangle - \\theta(-\\tau) \\langle0|\\bar\\psi_\\beta(x_2)\\psi_\\alpha(x_1)|0\\rangle.\n\\end{aligned}\n\\end{equation}\nOn the RHS, the first term is\n\\begin{equation*}\n\\begin{aligned}\n\t\\langle0|\\psi_\\alpha(x_1)\\bar\\psi_\\beta(x_2)|0\\rangle \n\t&= \\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{2 \\omega_{\\mathbf{p}}} \\left[\\sum_s u_\\alpha^s(p)\\bar u_\\beta^s(p)\\right]e^{-i p\\cdot (x_1-x_2)} \\\\\n\t&= (i\\cancel \\partial+m)_{\\alpha\\beta}\\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{2 \\omega_{\\mathbf{p}}} e^{-i p\\cdot (x_1-x_2)}.\n\\end{aligned}\n\\end{equation*}\nFor the second term:\n\\begin{equation*}\n\\begin{aligned}\n\t\\langle0|\\bar\\psi_\\beta(x_2)\\psi_\\alpha(x_1)|0\\rangle\n\t&= \\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{2 \\omega_{\\mathbf{p}}} \\left[\\sum_s \\bar v_\\beta^s(p)v_\\alpha^s(p)\\right]e^{i p\\cdot (x_1-x_2)} \\\\\n\t&= -(i\\cancel \\partial + m)_{\\alpha\\beta}\\int \\frac{d^{3} p}{(2 \\pi)^{3}} \\frac{1}{2 \\omega_{\\mathbf{p}}} e^{i p\\cdot (x_1-x_2)}.\n\\end{aligned}\n\\end{equation*}\nTogether, the Dirac propagator is:\n\\begin{equation}\n\\begin{aligned}\n\tiD_F(x_1-x_2) &= (i\\cancel \\partial+m)i\\Delta(x_1-x_2) \\\\\n\t&= \\int\\frac{d^{4} p}{(2\\pi)^{4}} e^{-i p\\cdot (x_1-x_2)}\\frac{i(\\cancel p+m)}{p^2-m^2+i\\epsilon}.\n\\end{aligned}\n\\end{equation}\n\n\n\n\n\\section{Path-integral Quantization}\n\n\\subsection{Scalar Field}\nConsider the action for a free field with source\n\\begin{equation}\n\tS_0[\\phi,J]\n\t= \\int d^dx\\left[\\mathcal{L}_0 + J(x)\\cdot\\phi(x) \\right].\n\\end{equation}\nIn the path integral formalism, we consider the partition function \n\\begin{equation}\n\tZ_0[J] = \\int D[\\phi] \\exp(iS_0[\\phi,J])\n\t\\equiv Z[0] \\exp(iW_0[J]),\n\\end{equation}\nwhere we have 
introduced a new quantity\n\\begin{equation}\n\\begin{aligned}\n\tW_0[J] = -\\frac{1}{2}\\int d^dx_1 d^dx_2 J(x_1)\\Delta_0(x_1-x_2)J(x_2).\n\\end{aligned}\n\\end{equation}\nFor the free field, the free propagator $\\Delta_0(x_1-x_2)$ satisfies:\n\\begin{equation}\n\ti\\Delta_0(x_1-x_2) = \\langle 0| T\\phi(x_1)\\phi(x_2)|0\\rangle\n\t= \\frac{\\delta}{i\\delta J(x_1)}\\frac{\\delta}{i\\delta J(x_2)} iW_0[J].\n\\end{equation}\nNow we evaluate the propagator in the path-integral formalism.\nIn momentum space, the free action (with source) is \n\\begin{equation*}\n\t\\frac{1}{V}\\sum_k \\left[\\frac{1}{2}\\tilde\\phi^*(k)( k^2-m^2)\\tilde\\phi(k)+\\tilde J^*(k)\\cdot\\tilde\\phi(k)+\\tilde\\phi^*(k)\\cdot\\tilde J(k)\\right].\n\\end{equation*}\nFor a real field, $\\tilde\\phi^*(k) = \\tilde\\phi(-k)$.\nFor our convenience, we have expressed the momentum integral as a summation.\nIndeed, considering a $d$-dimensional box of size $L^d$, the momentum along each axis is a multiple of $2\\pi/L$, so when $L\\rightarrow \\infty$ the summation approaches an integral,\n\\begin{equation*}\n\t\\frac{1}{V}\\sum_k \\rightarrow \\int \\frac{d^d k}{(2\\pi)^d}.\n\\end{equation*}\nOmitting the $1/V$ factor, the summation can be formally expressed as\n\\begin{equation}\n\t\\frac{1}{4}\\mathbf{v}^T \\cdot \\mathbf M\\cdot \\mathbf{v} + \\frac{1}{2}\\mathbf{j}^T \\cdot \\mathbf{v},\n\\end{equation}\nwhere\n\\begin{equation*}\n\t\\mathbf v = \\bigoplus_{|\\mathbf k|} \\left[\n\t\\begin{array}{c}\n\t\t\\tilde{\\phi}(k) \\\\ \n\t\t\\tilde{\\phi}^*(k) \n\t\\end{array}\\right],\\ \n\t\\mathbf M = \\bigoplus_{|\\mathbf k|} \\left[\n\t\\begin{array}{cc} \n\t\t0 & k^2-m^2 \\\\ \n\t\tk^2-m^2 & 0 \n\t\\end{array}\\right],\\ \n\t\\mathbf j = \\bigoplus_{|\\mathbf k|} \\left[\n\t\\begin{array}{c}\n\t\t\\tilde{J}^*(k) \\\\ \n\t\t\\tilde{J}(k) \n\t\\end{array}\\right].\n\\end{equation*}\nNote that in the above expression, we have made an infinitesimal shift of the mass ($m^2 \\rightarrow m^2 - i\\epsilon$) to ensure the convergence of the Gaussian integral.\nThe integration variables $v_i$ are not real.\nTo use the real Gaussian integral formula, we make use of a unitary transformation: \n\\begin{equation*}\n\t\\mathbf U = \\frac{1}{\\sqrt 2} \\left[\\begin{array}{cc}\n\t\t1 & 1 \\\\\n\t\t-i & i\n\t\\end{array}\\right], \\quad\n\t\\mathbf U \\cdot \\left[\n\t\\begin{array}{c}\n\t\t\\tilde{\\phi}(k) \\\\ \n\t\t\\tilde{\\phi}^*(k) \n\t\\end{array}\\right] \n\t= \\frac{1}{\\sqrt 2}\\left[\n\t\\begin{array}{c}\n\t\t\\tilde\\phi(k)+\\tilde\\phi^*(k) \\\\ \n\t\t-i\\tilde\\phi(k)+i\\tilde\\phi^*(k)\n\t\\end{array}\\right]\n\t\\equiv \\left[\n\t\\begin{array}{c}\n\t\t\\tilde\\phi_1(k) \\\\ \n\t\t\\tilde\\phi_2(k) \n\t\\end{array}\\right].\n\\end{equation*}\nThe path integral then becomes a real field integral.\nRecall the real Gaussian integral formula:\n\\begin{equation}\n\t\\int d\\mathbf x \\exp\\left(-\\frac{1}{2}\\mathbf{x}^T \\cdot \\mathbf A \\cdot \\mathbf{x} + \\mathbf{B}^T \\cdot \\mathbf{x}\\right) \n\t= \\sqrt{\\frac{(2\\pi)^N}{\\det{\\mathbf A}}}\\exp\\left(\\frac{1}{2}\\mathbf{B}^T \\cdot \\mathbf{A}^{-1} \\cdot \\mathbf{B}\\right).\n\t\\label{eq:real-gaussian-integral}\n\\end{equation}\n
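The formula follows from completing the square (assuming $\\mathbf A$ is symmetric and, after the $i\\epsilon$ shift, invertible):\n\\begin{equation*}\n\t-\\frac{1}{2}\\mathbf{x}^T \\cdot \\mathbf A \\cdot \\mathbf{x} + \\mathbf{B}^T \\cdot \\mathbf{x}\n\t= -\\frac{1}{2}\\left(\\mathbf x-\\mathbf A^{-1}\\mathbf B\\right)^T \\cdot \\mathbf A \\cdot \\left(\\mathbf x-\\mathbf A^{-1}\\mathbf B\\right) + \\frac{1}{2}\\mathbf{B}^T \\cdot \\mathbf{A}^{-1} \\cdot \\mathbf{B},\n\\end{equation*}\nafter which the shifted integral is a standard Gaussian that produces the determinant factor.\n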
For the field integral, we absorb the $(2\\pi)^{N/2}$ factor into the measure, and express the path integral for the Gaussian field as:\n\\begin{equation}\n\tW_0[J] \n\t= -\\frac{i}{4}\\int \\frac{d^d k}{(2\\pi)^d} \\mathbf j^T_k \\cdot \\mathbf M^{-1}_k \\cdot \\mathbf j_k\n\t= -\\frac{1}{2} \\int \\frac{d^d k}{(2\\pi)^d}  \\tilde{J}^*(k) \\tilde{\\Delta}_0(k) \\tilde{J}(k).\n\\end{equation}\nThis gives the propagator in the momentum space:\n\\begin{equation}\n\t\\tilde{\\Delta}_0(k) = \\frac{i}{k^2-m^2}\n\t\\quad \\Longrightarrow \\quad \n\t\\Delta_0(x_1-x_2) = i\\int\\frac{d^{4} k}{(2\\pi)^{4}} \\frac{e^{-i k\\cdot (x_1-x_2)}}{k^2-m^2}.\n\\end{equation}\n\n\n\\subsubsection{From Field to Force}\nConsider two static particles described by the delta functions $J_a(x) = \\delta^{(3)}(\\bm x - \\bm x_a)$; together the source is\n\\begin{equation}\n\tJ(x) = J_1(x) + J_2(x).\n\\end{equation}\nInserting this source into\n\\begin{equation*}\n\tW_0[J] = -\\frac{1}{2}\\int d^4x_1 d^4 x_2 J(x_1) \\Delta_0(x_1-x_2) J(x_2)\n\\end{equation*}\nand omitting the self-energy terms $J_1^2(x), J_2^2(x)$, the cross term of $W_0[J]$ is\n\\begin{equation}\n\\begin{aligned}\n\tW_0[J] &= -\\int d^4 y_1 d^4 y_2\\, J_1(y_1) J_2(y_2) \\int \\frac{d^4 k}{(2\\pi)^4} \\frac{e^{-ik^0(y_1^0-y_2^0)} e^{i\\bm k\\cdot (\\bm y_1-\\bm y_2)}}{k^2-m^2} \\\\\n\t&= -\\int  dt \\int \\frac{d^4 k}{(2\\pi)^4}\\, 2\\pi\\delta(k^0) \\frac{e^{i\\bm k\\cdot (\\bm x_1-\\bm x_2)}}{k^2-m^2} \\\\\n\t&= \\left(\\int dt \\right)\\int \\frac{d^3 k}{(2\\pi)^3} \\frac{e^{i\\bm k\\cdot (\\bm x_1-\\bm x_2)}}{\\bm k^2 + m^2},\n\\end{aligned}\n\\end{equation}\nwhere the integral over $y_1^0-y_2^0$ produces $2\\pi\\delta(k^0)$ (leaving an overall $\\int dt$), and setting $k^0=0$ flips the sign of the denominator, $k^2-m^2 \\rightarrow -(\\bm k^2+m^2)$.\nRecall that the partition function is actually infinite:\n\\begin{equation}\n\tZ_0 \\sim \\langle 0| e^{-i H_0 T} |0\\rangle \\sim e^{-iET} \\quad \\Longrightarrow \\quad\n\tW_0 = -E T,\n\\end{equation}\nwhere $E$ is the energy.\nWriting $\\bm r \\equiv \\bm x_1 - \\bm x_2$, and $u \\equiv \\cos\\theta$ with $\\theta$ the angle between $\\bm k$ and $\\bm r$, the volume form is $dk \\cdot kd\\theta \\cdot  2\\pi k \\sin \\theta = 2\\pi k^2 dk\\, du$, and the integral is\n\\begin{equation}\n\\begin{aligned}\n\tE &= -\\int \\frac{d^3 k}{(2\\pi)^3} \\frac{e^{i \\bm k\\cdot \\bm r}}{\\bm k^2 + m^2} \\\\\n\t&= - \\frac{1}{(2\\pi)^2} \\int_0^\\infty k^2 dk \\int_{-1}^1 du \\frac{e^{ikru}}{k^2 +m^2} \\\\\n\t&= -\\frac{1}{2\\pi^2 r} \\int_0^\\infty k  \\frac{\\sin kr}{k^2 +m^2} dk.\n\\end{aligned}\n\\end{equation}\nSince the integrand is even in $k$, we can extend the integral to\n\\begin{equation}\n\\begin{aligned}\n\tE &= -\\frac{1}{4\\pi^2 r} \\int_{-\\infty}^\\infty k  \\frac{\\sin kr}{k^2 +m^2} dk \\\\\n\t&= \\frac{i}{4\\pi^2 r} \\int_{-\\infty}^\\infty \\frac{k e^{ikr}}{k^2 +m^2} dk.\n\\end{aligned}\n\\end{equation}\nThe residue theorem gives\n\\begin{equation}\n\t\\int_{-\\infty}^\\infty \\frac{k e^{ikr}}{k^2 +m^2} dk = \\pi ie^{-mr}.\n\\end{equation}\n
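This follows by closing the contour in the upper half plane (valid for $r>0$), where the integrand has its only pole at $k=im$; the residue theorem then gives $2\\pi i \\cdot \\frac{im\\, e^{-mr}}{2im} = \\pi i e^{-mr}$.\n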
So we get the potential between the two particles:\n\\begin{equation}\\label{eq:field-to-force}\n\tV(r) = -\\frac{e^{-mr}}{4\\pi r},\n\\end{equation}\nand the attractive force is\n\\begin{equation}\n\tF(r) = -\\frac{dV}{dr} = -(1+mr)\\frac{e^{-mr}}{4\\pi r^2}.\n\\end{equation}\nWe see that in the massless case, the force reduces to the long-range Coulomb form $F \\propto 1/r^2$, while in the massive field theory, the force is short-ranged, with decay length $1/m$ inversely proportional to the mass.\n\n\n\\subsection{Vector Field}\nWe define the gauge fixing function\n\\begin{equation*}\n\tG(A) = \\partial_\\mu A^\\mu(x) -\\omega(x)\n\\end{equation*}\nand impose $G(A) = 0$.\nThe gauge transformation has the form:\n\\begin{equation*}\n\tA^\\alpha_\\mu(x) = A_\\mu(x) + \\partial_\\mu \\alpha(x).\n\\end{equation*}\nWe then have\n\\begin{equation*}\n\t1 \\propto \\int D[\\alpha] \\det\\left(\\frac{\\delta G(A^\\alpha)}{\\delta \\alpha}\\right) \\delta(G(A^\\alpha)).\n\\end{equation*}\n
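Note that for this linear gauge-fixing function the determinant is field-independent: since $G(A^\\alpha) = \\partial_\\mu A^\\mu + \\partial^2\\alpha - \\omega$, we have $\\delta G(A^\\alpha)/\\delta\\alpha = \\partial^2$, so the factor $\\det(\\partial^2)$ can be pulled out of the path integral.\n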
Inserting this identity into the path-integral formula gives\n\\begin{equation*}\n\tZ[J] \\propto \\det\\left(\\partial^2 \\right) \\int D[\\alpha]D[A] e^{iS[A,J]} \\delta(\\partial_\\mu A^\\mu -\\omega(x)).\n\\end{equation*}\nThe left-hand side does not depend on $\\omega(x)$.\nWe can therefore integrate over $\\omega(x)$ with a Gaussian weight:\n\\begin{equation*}\n\\begin{aligned}\n\tZ[J] &\\propto \\int D[\\omega] e^{-i\\int d^d x \\frac{\\omega^2}{2\\xi}} \\int D[\\alpha]D[A] e^{iS[A,J]}\n\t\\delta(\\partial_\\mu A^\\mu-\\omega) \\\\\n\t&\\propto \\int D[A] \\exp\\left\\{i \\left[S[A,J]-\\int d^d x \\frac{1}{2\\xi}(\\partial_\\mu A^\\mu)^2 \\right]\\right\\}.\n\\end{aligned}\n\\end{equation*}\nIn momentum space, the modified Lagrangian is \n\\begin{equation*}\n\t\\tilde{\\mathcal{L}}_\\xi(k) = \\tilde{A}^\\mu(k)\\left[\n\t\t-k^2 g_{\\mu\\nu}+\\left(1-\\frac{1}{\\xi}\\right)k_\\mu k_\\nu\n\t\t\\right] \\tilde{A}^\\nu(-k) +\n\t\t\\tilde{J}_\\mu(k) \\tilde{A}^\\mu(-k) +\n\t\t\\tilde{A}^\\mu(k) \\tilde{J}_\\mu(-k).\n\\end{equation*}\nIn the momentum space, the photon propagator is\n\\begin{equation}\\label{eq:qft-photon-momentum-propagator}\n\\begin{aligned}\n\t\\tilde G^{\\mu\\nu}(k) \n\t&= \\left[-k^2 g_{\\mu\\nu}+\\left(1-\\frac{1}{\\xi}\\right)k_\\mu k_\\nu\\right]^{-1} \\\\\n\t&= \\frac{1}{k^2}\\left(-g^{\\mu\\nu}+(1-\\xi)\\frac{k^\\mu k^\\nu}{k^2}\\right).\n\\end{aligned}\n\\end{equation}\nThus, the partition function is\n\\begin{equation}\n\t\\frac{Z_{\\mathrm{Maxwell}}[J]}{Z_{\\mathrm{Maxwell}}[0]}\n\t= \\exp\\left[-\\frac{i}{2}\\int d^dx_1 d^dx_2 J_\\mu(x_1) G^{\\mu\\nu}(x_1-x_2) J_\\nu(x_2) \\right],\n\\end{equation}\nwhere the real-space propagator is\n\\begin{equation}\n\tG^{\\mu\\nu}(x_1-x_2) = \\int \\frac{d^d k}{(2\\pi)^d} e^{-ik\\cdot(x_1-x_2)}\\frac{1}{k^2}\\left(-g^{\\mu\\nu}+(1-\\xi)\\frac{k^\\mu k^\\nu}{k^2}\\right).\n\\end{equation}\nNote that the propagator is related to the two-point correlation:\n\\begin{equation}\n\\begin{aligned}\n\t\\langle 0|T A^\\mu(x_1) A^\\nu(x_2) |0\\rangle\n\t&= \\left.\\frac{1}{Z_{\\mathrm{Maxwell}}[0]}\\frac{\\delta}{i\\delta J_\\mu(x_1)}\\frac{\\delta}{i\\delta J_\\nu(x_2)} Z_{\\mathrm{Maxwell}}[J]\\right|_{J=0} \\\\\n\t&= iG^{\\mu\\nu}(x_1-x_2).\n\\end{aligned}\n\\end{equation}\n\n\n\\subsection{Spinor Field}\nConsider the partition function with Grassmann sources\n\\begin{equation}\n\tZ_{\\mathrm{Dirac}}[\\bar\\eta,\\eta]\n\t= \\int D[\\bar\\psi,\\psi] \\exp\\left[i\\int d^dx \\left(\\mathcal{L}_{\\mathrm{Dirac}}+\\bar{\\eta}\\psi + \\bar\\psi\\eta \\right) \\right].\n\\end{equation}\nIn momentum space:\n\\begin{equation}\n\tS = \\int\\frac{d^d k}{(2\\pi)^d} \\left[\n\t\t\\tilde{\\bar\\psi}(k)(\\cancel{k}-m)\\tilde{\\psi}(k) +\n\t\t\\tilde{\\bar\\eta}(k) \\tilde{\\psi}(k) +\n\t\t\\tilde{\\bar\\psi}(k) \\tilde{\\eta}(k)\n\t\\right].\n\\end{equation}\nUsing the Gaussian integral formula (for Grassmann variables), the partition function is:\n\\begin{equation}\n\\begin{aligned}\n\t\\frac{Z_{\\mathrm{Dirac}}[\\bar\\eta,\\eta]}{Z_{\\mathrm{Dirac}}[0,0]}\n\t&= \\exp\\left[-i\\int \\frac{d^d k}{(2\\pi)^d} \\tilde{\\bar\\eta}(k)\\frac{1}{\\cancel{k}-m}\\tilde\\eta(k)\\right] \\\\\n\t&= \\exp\\left[-i\\int d^dx_1 d^d x_2 \\bar{\\eta}(x_1)\\cdot D_F(x_1-x_2)\\cdot \\eta(x_2) \\right],\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\tD_F(x_1-x_2) = \\int \\frac{d^d k}{(2\\pi)^d} \\frac{e^{-i k \\cdot (x_1-x_2)}}{\\cancel{k}-m}\n\t= \\int \\frac{d^d k}{(2\\pi)^d} \\frac{\\cancel{k}+m}{k^2-m^2} e^{-i k \\cdot (x_1-x_2)}.\n\\end{equation}\nNote that the propagator is\n\\begin{equation}\n\\begin{aligned}\n\t\\langle 0| T \\psi^\\alpha(x_1) \\bar\\psi^\\beta(x_2) |0\\rangle\n\t&= \\left.\\frac{1}{Z_{\\mathrm{Dirac}}[0,0]}\\frac{\\delta}{i\\delta \\bar{\\eta}_\\alpha(x_1)}\\frac{i\\delta}{\\delta\\eta_\\beta(x_2)} Z_{\\mathrm{Dirac}}[\\bar\\eta,\\eta]\\right|_{\\eta=\\bar\\eta=0} \\\\\n\t&= i D^{\\alpha\\beta}_F(x_1-x_2),\n\\end{aligned}\n\\end{equation}\nwhere the sign in the variational derivative comes from the anti-commutation relation of the fermionic fields.\n\n\n\n\n", "meta": {"hexsha": "68cae7cb2ea478f7da7f06e843229a6c4a18bb34", "size": 49078, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/Relativistic.tex", "max_stars_repo_name": "jayren3996/Notes_on_QFT", "max_stars_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/Relativistic.tex", "max_issues_repo_name": "jayren3996/Notes_on_QFT", "max_issues_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/Relativistic.tex", "max_forks_repo_name": "jayren3996/Notes_on_QFT", "max_forks_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4160337553, "max_line_length": 341, "alphanum_fraction": 0.6647173886, "num_tokens": 18760, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397349, "lm_q2_score": 0.7154240079185319, "lm_q1q2_score": 0.5655643760250509}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{setspace}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\n\n\n\\title{Chapter 8\\\\Sturm-Liouville Theory}\n\\author{solutions by Hikari}\n\\date{September 2021}\n\n\\begin{document}\n\n\\newcommand{\\pdv}[2]{\\frac{\\partial#1}{\\partial#2}}\n\\newcommand{\\V}{\\mathbf}\n\\newcommand{\\M}{\\mathrm}\n\\newcommand{\\VE}{\\hat{\\V{e}}}\n\\newcommand{\\br}[2]{\\langle#1|#2\\rangle}\n\n\\maketitle\n\n\\section*{8.2 Hermitian Operators}\n\n\\paragraph{8.2.1}\nWhen multiplied by $e^{-x}$, the Laguerre\u2019s ODE becomes\n\\[\ne^{-x}xy''+e^{-x}(1-x)y'+e^{-x}ay=0\n\\]\nThen $\\frac{d}{dx}(e^{-x}x)=e^{-x}(1-x)$, so the ODE is self-adjoint if the boundary term $e^{-x}x\\big[v^*u'-(v^*)'u\\big]_a^b$ vanishes. \nThe inner product becomes \n\\[\n\\br{v}{u}=\\int_a^bv^*(x)e^{-x}\\mathcal{L}(x)u(x)dx=\\int_a^b\\big[v^*(x)\\mathcal{L}(x)u(x)\\big]e^{-x}dx\n\\]\nwhich means the ODE will be self-adjoint if we let $e^{-x}$ be the weighting function.\n\n\\paragraph{8.2.2}\nWhen multiplied by $e^{-x^2}$, the Hermite ODE becomes\n\\[\ne^{-x^2}y''-2xe^{-x^2}y'+2\\alpha e^{-x^2}y=0\n\\]\nThen $\\frac{d}{dx}(e^{-x^2})=-2xe^{-x^2}$, so the ODE is self-adjoint if the boundary term $e^{-x^2}\\big[v^*u'-(v^*)'u\\big]_a^b$ vanishes. \nThe inner product becomes \n\\[\n\\br{v}{u}=\\int_a^bv^*(x)e^{-x^2}\\mathcal{L}(x)u(x)dx=\\int_a^b\\big[v^*(x)\\mathcal{L}(x)u(x)\\big]e^{-x^2}dx\n\\]\nwhich means the ODE will be self-adjoint if we let $e^{-x^2}$ be the weighting function.\n\n\\paragraph{8.2.3}\nWhen multiplied by $\\frac{1}{\\sqrt{1-x^2}}$, the Chebyshev ODE becomes\n\\[\n\\sqrt{1-x^2}\\,y''-\\frac{x}{\\sqrt{1-x^2}}y'+\\frac{n^2}{\\sqrt{1-x^2}}y=0\n\\]\nThen $\\frac{d}{dx}(\\sqrt{1-x^2})=-\\frac{x}{\\sqrt{1-x^2}}$, so the ODE is self-adjoint if the boundary term $\\sqrt{1-x^2}\\big[v^*u'-(v^*)'u\\big]_a^b$ vanishes. 
\nThe inner product becomes \n\\[\n\\br{v}{u}=\\int_a^bv^*(x)\\frac{1}{\\sqrt{1-x^2}}\\mathcal{L}(x)u(x)dx=\\int_a^b\\big[v^*(x)\\mathcal{L}(x)u(x)\\big]\\frac{1}{\\sqrt{1-x^2}}dx\n\\]\nwhich means the ODE will be self-adjoint if we let $\\frac{1}{\\sqrt{1-x^2}}$ be the weighting function.\n\n\\paragraph{8.2.4}\nThe boundary condition for $p_0(x)y''+p_1(x)y'+p_2(x)=0$ to be self-adjoint is \n\\[\nw(x)p_0(x)\\big[v^*u'-(v^*)'u\\big]_a^b=0\n\\]\nfor every $u,v$, with $w(x)$ being the weighting function.\n\\begin{alignat*}{3}\n    & \\textit{Legendre:}\\qquad &&  (1-x^2)\\big[v^*u'-(v^*)'u\\big]_{-1}^{1}=0 && \\qquad \\textit{because $1-x^2=0$ at $x=-1,1$}\\\\\n    & \\textit{Chebyshev:}\\qquad &&  \\sqrt{1-x^2}\\big[v^*u'-(v^*)'u\\big]_{-1}^{1}=0 && \\qquad \\textit{because $\\sqrt{1-x^2}=0$ at $x=-1,1$}\\\\\n    & \\textit{Hermite:}\\qquad &&  e^{-x^2}\\big[v^*u'-(v^*)'u\\big]_{-\\infty}^{\\infty}=0 && \\qquad \\textit{because $e^{-x^2}$ approaches zero faster than}\\\\\n    & && &&\\qquad \\textit{the polynomial $v^*u'-(v^*)'u$ at $-\\infty,\\infty$}\\\\\n    & \\textit{Laguerre:}\\qquad &&  e^{-x}x\\big[v^*u'-(v^*)'u\\big]_{0}^{\\infty}=0 && \\qquad \\textit{because $x=0$, and $e^{-x}$ approaches zero}\\\\\n    & && &&\\qquad \\textit{faster than the polynomial $x\\big[v^*u'-(v^*)'u\\big]$ at $\\infty$}\n\\end{alignat*}\n\n\\paragraph{8.2.5}\nThe eigenvectors of an Hermitian operator with different eigenvalues are orthogonal (chapter 6) and therefore linearly independent (because if $au_1+bu_2=0$, then $\\br{au_1+bu_2}{au_1+bu_2}=|a|^2\\br{u_1}{u_1}+|b|^2\\br{u_2}{u_2}=0$, which means $a=b=0$).\n\n\\paragraph{8.2.6}\n(a) \nLet $u=\\frac{1+x}{1-x}$, so $x=\\frac{u-1}{u+1}$, $dx=\\frac{2}{(u+1)^2}du$, and $u=0,\\infty$ when $x=-1,1$. The integral becomes\n\\[\n\\int_0^\\infty\\frac{1}{2}\\frac{u-1}{u+1}(\\ln u)\\frac{2}{(u+1)^2}du\n=\\int_0^\\infty\\frac{u-1}{(u+1)^3}\\ln u\\,du\n\\]\n\\[\n=\\frac{-u}{(u+1)^2}\\ln u\\Big|_0^\\infty-\\int_0^\\infty\\frac{-u}{(u+1)^2}\\frac{1}{u}du\n=\\frac{-u}{(u+1)^2}\\ln u\\Big|_0^\\infty-\\frac{1}{u+1}\\Big|_0^\\infty\n=(0-0)-(0-1)=1\n\\]\n\n(b) \nThe boundary term \n\\[\n(1-x^2)\\left[x\\frac{1}{1-x^2}-\\frac{1}{2}\\ln\\left(\\frac{1+x}{1-x} \\right) \\right]_{-1}^1\n\\]\ndoes not vanish (diverges), so the proof of the ODE being self-adjoint failed, and therefore the eigenfunctions are not necessarily orthogonal.\n\n\\paragraph{8.2.7}\nThe boundary term \n\\[\n\\sqrt{1-x^2}\\left[\\frac{-x}{\\sqrt{1-x^2}}-0 \\right]_{-1}^1=-2\n\\]\ndoes not vanish, so the proof of the ODE being self-adjoint failed, and therefore the eigenfunctions are not necessarily orthogonal.\n\n\n\\paragraph{8.2.8}\n(We prove the case when $w(x)=1$ (or similarly a constant), though I am not sure about the correctness when $w(x)$ is a general function of $x$. But if $w(x)$ is the weighting function, which means $\\br{u_m}{u_n}=\\int_a^bu_m^*u_nw\\,dx$, then there is no problem in the following proof.)\n\nWe have $(pu_n')'=-\\lambda_nwu_n$ from the equation, and $\\br{u_m}{u_n}=\\int_a^bu_m^*u_ndx=\\delta_{mn}$ by the orthogonality.\nIntegrating $\\br{u_m'}{u_n'}$ by parts:\n\\[\n\\br{u_m'}{u_n'}=\\int_a^b(u_m')^*pu_n'dx\n\\]\n\\[\n=u_m^*pu_n'\\big|_a^b-\\int_a^bu_m^*(pu_n')'dx\n\\]\n\\[\n=u_m^*pu_n'\\big|_a^b+\\int_a^bu_m^*\\lambda_nwu_ndx\n\\]\n\\[\n=u_m^*pu_n'\\big|_a^b+\\lambda_n\\br{u_m}{u_n}=\\lambda_n\\delta_{mn}\n\\]\nwhich means $u_m',u_n'$ are orthogonal, as long as the boundary condition $u_m^*pu_n'\\big|_a^b=0$ is satisfied. 
\\\\(Or equivalently $(u_m^*)'pu_n\\big|_a^b=0$, because we have $\\big[u_m^*pu_n'-(u_m^*)'pu_n \\big]_a^b=0$ from the boundary conditions making  $u_m,u_n$ orthogonal.)\n\n\\paragraph{8.2.9}\nAssume linear dependence, so $\\varphi_n=\\sum_{i=1}^{n-1}a_i\\varphi_i$, then \n\\[\nA\\varphi_n=\\lambda_n\\varphi_n=\\sum_{i=1}^{n-1}\\lambda_na_i\\varphi_i\n\\]\nand also\n\\[\nA\\varphi_n=A(\\sum_{i=1}^{n-1}a_i\\varphi_i)=\\sum_{i=1}^{n-1}a_i\\lambda_i\\varphi_i\n\\]\nso\n\\[\n\\sum_{i=1}^{n-1}a_i(\\lambda_n-\\lambda_i)\\varphi_i=0\n\\]\nNote that $\\lambda_n-\\lambda_1\\neq0$, so the equation implies $\\varphi_1,\\cdots,\\varphi_{n-1}$ are linearly dependent. Therefore, the linear dependence between $\\varphi_1,\\cdots,\\varphi_n$ implies the linear dependence between $\\varphi_1,\\cdots,\\varphi_{n-1}$, and by repeating the process, we will find $\\varphi_1,\\varphi_2$ being linearly dependent, which is impossible because it implies $\\lambda_1=\\lambda_2$ ($\\lambda_2\\varphi_2=A\\varphi_2=Ak\\varphi_1=\\lambda_1k\\varphi_1=\\lambda_1\\varphi_2$). So $\\varphi_1,\\cdots,\\varphi_n$ must be linearly independent.\n\n\\paragraph{8.2.10}\n(a)\nUsing Eq 8.15, the ODE will be self-adjoint if multiplied by\n\\[\n\\frac{1}{1-x^2}e^{\\int\\frac{-(2\\alpha\n+1)x}{1-x^2}dx}=\\frac{1}{1-x^2}e^{(\\alpha+\\frac{1}{2})\\ln(1-x^2)}=(1-x^2)^{\\alpha-\\frac{1}{2}}\n\\]\nso the ODE becomes\n\\[\n\\left\\{(1-x^2)^{\\alpha+\\frac{1}{2}}\\frac{d^2}{dx^2}-(2\\alpha+1)x(1-x^2)^{\\alpha-\\frac{1}{2}}\\frac{d}{dx}+n(n+2\\alpha)(1-x^2)^{\\alpha-\\frac{1}{2}} \\right\\}C_n^{(\\alpha)}(x)=0\n\\]\nwhich is self-adjoint because \n\\[\n\\frac{d}{dx}(1-x^2)^{\\alpha+\\frac{1}{2}}=-(2\\alpha+1)x(1-x^2)^{\\alpha-\\frac{1}{2}}\n\\]\n\n(b)\nThe boundary conditions will be satisfied if we choose the interval to be $[-1,1]$\\,:\n\\[\n(1-x^2)^{\\alpha+\\frac{1}{2}}\\big[v^*u'-(v^*)'u \\big]_{-1}^1=0\n\\]\nso the eigenfunctions of different eigenvalues will be orthogonal if the inner product is defined as\n\\[\n\\br{C_m}{C_n}=\\int_{-1}^1C_m^*C_n(1-x^2)^{\\alpha-\\frac{1}{2}}\\,dx\n\\]\n\n\\section*{8.3 ODE Eigenvalue Problems}\n\n\\paragraph{8.3.1}\nLet $y=\\sum_{j=0}^\\infty a_jx^{s+j}$ and substitute:\n\\[\n(1-x^2)\\sum_{j=0}^\\infty a_j(s+j)(s+j-1)x^{s+j-2}-2x\\sum_{j=0}^\\infty a_j(s+j)x^{s+j-1}+n(n+1)\\sum_{j=0}^\\infty a_j x^{s+j}=0\n\\]\nThe coefficients of each order must be zero, so\n\\begin{alignat*}{2}\n    & x^{s-2}:\\qquad && a_0s(s-1)=0\\\\\n    & x^{s-1}:\\qquad && a_1(s+1)s=0\\\\\n    & x^{s+j}:\\qquad && a_{j+2}(s+j+2)(s+j+1)-a_j(s+j)(s+j-1)-2a_j(s+j)+n(n+1)a_j=0\\\\\n    & && a_{j+2}=\\frac{(s+j)(s+j+1)-n(n+1)}{(s+j+1)(s+j+2)}a_j\n\\end{alignat*}\n\n(a)\n$a_0\\neq0$, so $s(s-1)=0$.\n\\medskip\n\n(b)\nLet $s=0$ and $a_1=0$ (so $a_3,a_5,\\cdots$ vanish), then \n\\[\na_{j+2}=\\frac{j(j+1)-n(n+1)}{(j+1)(j+2)}a_j=\\frac{(j-n)(j+n+1)}{(j+1)(j+2)}a_j\n\\]\n\\[\ny_{even}=\\sum_{j\\; even}a_jx^j=a_0\\left[1-\\frac{n(n+1)}{2!}x^2+\\frac{(n-2)n(n+1)(n+3)}{4!}x^4-\\cdots \\right]\n\\]\n\n(c) \nLet $s=1$, then $a_1(s+1)s=0$ implies $a_1=0$, and\n\\[\na_{j+2}=\\frac{(j+1)(j+2)-n(n+1)}{(j+2)(j+3)}a_j=\\frac{(j+1-n)(j+2+n)}{(j+2)(j+3)}a_j\n\\]\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=a_0\\left[x-\\frac{(n-1)(n+2)}{3!}x^3+\\frac{(n-3)(n-1)(n+2)(n+4)}{5!}x^5-\\cdots \\right]\n\\]\n\n(d)\nFor $y_{even}$, let $u_k=a_{2k}$, so\n\\[\nu_{k+1}=\\frac{2k(2k+1)-n(n+1)}{(2k+1)(2k+2)}u_k\n\\]\nthen the series at $x=\\pm 1$ becomes\n\\[\ny_{even}=\\sum_{j\\;even}a_jx^j=\\sum_{k=0}^\\infty u_k(x^2)^k=\\sum_{k=0}^\\infty u_k\n\\]\nand 
\n\\[\n\\frac{u_k}{u_{k+1}}=\\frac{(2k+1)(2k+2)}{2k(2k+1)-n(n+1)}=1+\\frac{1}{k}+\\frac{B(k)}{k^2}\n\\]\nwhere $B(k)$ is bounded for large $k$, so by Gauss' test the series diverges.\n\\medskip\n\nSimilarly, for $y_{odd}$, let $u_k=a_{2k}$, so\n\\[\nu_{k+1}=\\frac{(2k+1)(2k+2)-n(n+1)}{(2k+2)(2k+3)}u_k\n\\]\nthen the series at $x=\\pm 1$ becomes\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=\\sum_{k=0}^\\infty xu_k(x^2)^k=\\pm\\sum_{k=0}^\\infty u_k\n\\]\nand \n\\[\n\\frac{u_k}{u_{k+1}}=\\frac{(2k+2)(2k+3)}{(2k+1)(2k+2)-n(n+1)}=1+\\frac{1}{k}+\\frac{B(k)}{k^2}\n\\]\nwhere $B(k)$ is bounded for large $k$, so by Gauss' test the series diverges.\n\\medskip\n\n(e)\nFrom the recurrence relations of $y_{even}$ and $y_{odd}$, if $n$ is a non-negative even integer, then $y_{even}$ will terminate at $a_nx^n$; if $n$ is a non-negative odd integer, then $y_{odd}$ will terminate at $a_{n-1}x^n$. So the series are converted into finite polynomials.\n\n\\paragraph{8.3.2}\nWhen multiplied by $e^{-x^2}$, the equation becomes\n\\[\ne^{-x^2}y''-2xe^{-x^2}y'+2\\alpha e^{-x^2}y=0\n\\]\nso\n\\[\n\\frac{d}{dx}\\big[e^{-x^2}\\big]=-2xe^{-x^2}\n\\]\nand \n\\[\ne^{-x^2}\\big[v^*u'-(v^*)'u \\big]_{-\\infty}^\\infty=0\n\\]\nwhich means the ODE is self-adjoint (Hermitian).\n\n\\paragraph{8.3.3}\nLet $y=\\sum_{j=0}^\\infty a_jx^{s+j}$ and substitute:\n\\[\n\\sum_{j=0}^\\infty a_j(s+j)(s+j-1)x^{s+j-2}-2x\\sum_{j=0}^\\infty a_j(s+j)x^{s+j-1}+2\\alpha\\sum_{j=0}^\\infty a_j x^{s+j}=0\n\\]\nThe coefficients of each order must be zero, so\n\\begin{alignat*}{2}\n    & x^{s-2}:\\qquad && a_0s(s-1)=0\\\\\n    & x^{s-1}:\\qquad && a_1(s+1)s=0\\\\\n    & x^{s+j}:\\qquad && a_{j+2}(s+j+2)(s+j+1)-2a_j(s+j)+2\\alpha a_j=0 \\\\\n    & && a_{j+2}=a_j\\frac{2(s+j-\\alpha)}{(s+j+1)(s+j+2)}\n\\end{alignat*}\n\n(a)\n$s=0$:\n\\[\na_{j+2}=a_j\\frac{2(j-\\alpha)}{(j+1)(j+2)}\n\\]\n\\[\ny_{even}=\\sum_{j\\;even}a_jx^j=a_0\\left[1+\\frac{2(-\\alpha)}{2!}x^2+\\frac{2^2(-\\alpha)(2-\\alpha)}{4!}x^4+\\cdots \\right]\n\\]\n\n$s=1$:\n\\[\na_{j+2}=a_j\\frac{2(j+1-\\alpha)}{(j+2)(j+3)}\n\\]\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=a_0\\left[x+\\frac{2(1-\\alpha)}{3!}x^3+\\frac{2^2(1-\\alpha)(3-\\alpha)}{5!}x^5+\\cdots \\right]\n\\]\n\n(b)\nFor $y_{even}$, let $u_k=a_{2k}$, then \n\\[\nu_{k+1}=u_k\\frac{2(2k-\\alpha)}{(2k+1)(2k+2)}\n\\]\n\\[\n\\lim_{k\\to\\infty}\\frac{a_{2k+2}x^{2k+2}}{a_{2k}x^{2k}}=\\lim_{k\\to\\infty}\\frac{u_{k+1}}{u_k}x^2=\\lim_{k\\to\\infty}\\frac{4x^2k-2x^2\\alpha}{4k^2+6k+2}=\\lim_{k\\to\\infty}\\frac{x^2}{k}=0\n\\]\nso by the ratio test the series converges.\n\\medskip\n\nSimilarly, for $y_{odd}$, let $u_k=a_{2k}$, then \n\\[\nu_{k+1}=u_k\\frac{2(2k+1-\\alpha)}{(2k+2)(2k+3)}\n\\]\n\\[\n\\lim_{k\\to\\infty}\\frac{a_{2k+2}x^{2k+3}}{a_{2k}x^{2k+1}}=\\lim_{k\\to\\infty}\\frac{u_{k+1}}{u_k}x^2=\\lim_{k\\to\\infty}\\frac{4x^2k+2x^2(1-\\alpha)}{4k^2+10k+6}=\\lim_{k\\to\\infty}\\frac{x^2}{k}=0\n\\]\nso by the ratio test the series converges.\n\\medskip\n\n$e^{x^2}=\\sum_{k=0}^\\infty\\frac{x^{2k}}{k!}$, so the ratio of successive terms is\n\\[\n\\lim_{k\\to\\infty}\\frac{x^{2(k+1)}}{(k+1)!}\\frac{k!}{x^{2k}}=\\lim_{k\\to\\infty}\\frac{x^2}{k+1}=\\lim_{k\\to\\infty}\\frac{x^2}{k}\n\\]\nwhich is the same as the ratios of successive terms of $y_{even}$ and $y_{odd}$.\n\\medskip\n\n(c)\nFrom the recurrence relations of $y_{even}$ and $y_{odd}$, if $\\alpha$ is a non-negative even integer, then $y_{even}$ will terminate at $a_{\\alpha}x^{\\alpha}$; if $\\alpha$ is a non-negative odd integer, then $y_{odd}$ will terminate at $a_{\\alpha-1}x^{\\alpha}$. So the series are converted into finite polynomials.
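\n\nFor example, for $\\alpha=2$ the recurrence for $y_{even}$ gives $a_2=-2a_0$ and $a_4=0$, so\n\\[\ny_{even}=a_0(1-2x^2),\n\\]\nwhich is proportional to the Hermite polynomial $H_2(x)=4x^2-2$ (they are equal for $a_0=-2$).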
\n\n\\paragraph{8.3.4}\nLet $L_n(x)=\\sum_{j=0}^\\infty a_jx^{s+j}$ and substitute:\n\\[\nx\\sum_{j=0}^\\infty a_j(s+j)(s+j-1)x^{s+j-2}+(1-x)\\sum_{j=0}^\\infty a_j(s+j)x^{s+j-1}+n\\sum_{j=0}^\\infty a_jx^{s+j}=0\n\\]\nThe coefficients of each order must be zero, so\n\\begin{alignat*}{3}\n    & x^{s-1}:\\qquad && a_0s(s-1)+a_0s=a_0s^2=0\\qquad && s=0\\\\\n    & x^{s+j}:\\qquad && a_{j+1}(j+1)j+a_{j+1}(j+1)-a_jj+na_j=0\\qquad\\qquad && a_{j+1}=a_j\\frac{j-n}{(j+1)^2}\n\\end{alignat*}\nso\n\\[\ny=\\sum_{j=0}^\\infty a_jx^j=a_0\\left[1+\\frac{(-n)}{1^2}x+\\frac{(-n)(1-n)}{1^2\\cdot2^2}x^2+\\cdots \\right]\n\\]\n\nFrom the recurrence relation of $y$, if $n$ is a non-negative integer, then $y$ will terminate at $a_{n}x^{n}$, and the series will be converted into a finite polynomial.\n\n\\paragraph{8.3.5}\nLet $T_n=\\sum_{j=0}^\\infty a_jx^{s+j}$ and substitute:\n\\[\n(1-x^2)\\sum_{j=0}^\\infty a_j(s+j)(s+j-1)x^{s+j-2}-x\\sum_{j=0}^\\infty a_j(s+j)x^{s+j-1}+n^2\\sum_{j=0}^\\infty a_jx^{s+j}=0\n\\]\n\nThe coefficients of each order must be zero, so\n\\begin{alignat*}{2}\n    & x^{s-2}:\\qquad && a_0s(s-1)=0\\\\\n    & x^{s-1}:\\qquad && a_1(s+1)s=0\\\\\n    & x^{s+j}:\\qquad && a_{j+2}(s+j+2)(s+j+1)-a_j(s+j)(s+j-1)-a_j(s+j)+n^2a_j=0\\\\\n    & && a_{j+2}=a_j\\frac{(s+j-n)(s+j+n)}{(s+j+1)(s+j+2)}\n\\end{alignat*}\n\nFor $s=0$, let $a_1=0$, then\n\\[\na_{j+2}=a_j\\frac{(j-n)(j+n)}{(j+1)(j+2)}\n\\]\n\\[\ny_{even}=\\sum_{j\\;even}a_jx^j=a_0\\left[1+\\frac{(-n)n}{2!}x^2+\\frac{(2-n)(-n)n(2+n)}{4!}x^4+\\cdots \\right]\n\\]\n\nFor $s=1$, $a_1=0$, and\n\\[\na_{j+2}=a_j\\frac{(j+1-n)(j+1+n)}{(j+2)(j+3)}\n\\]\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=a_0\\left[x+\\frac{(1-n)(1+n)}{3!}x^3+\\frac{(3-n)(1-n)(1+n)(3+n)}{5!}x^5+\\cdots \\right]\n\\]\n\\medskip\n\nTo test the convergence at $x=\\pm1$, let $u_k=a_{2k}$:\n\n$y_{even}:$\n\\[\nu_{k+1}=u_k\\frac{(2k-n)(2k+n)}{(2k+1)(2k+2)}\n\\]\n\\[\ny_{even}=\\sum_{j\\;even}a_jx^j=\\sum_{k=0}^\\infty u_k(x^2)^k=\\sum_{k=0}^\\infty u_k\n\\]\n\\[\n\\frac{u_k}{u_{k+1}}=\\frac{4k^2+6k+2}{4k^2-n^2}=1+\\frac{3}{2k}+\\frac{B(k)}{k^2}\n\\]\nso $y_{even}$ converges at $x=\\pm1$ by Gauss' test.\n\\medskip\n\n$y_{odd}:$\n\\[\nu_{k+1}=u_k\\frac{(2k+1-n)(2k+1+n)}{(2k+2)(2k+3)}\n\\]\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=\\sum_{k=0}^\\infty xu_k(x^2)^k=\\pm\\sum_{k=0}^\\infty u_k\n\\]\n\\[\n\\frac{u_k}{u_{k+1}}=\\frac{4k^2+10k+6}{4k^2+4k+1-n^2}=1+\\frac{3}{2k}+\\frac{B(k)}{k^2}\n\\]\nso $y_{odd}$ converges at $x=\\pm1$ by Gauss' test.\n\n\\paragraph{8.3.6}\nLet $U_n=\\sum_{j=0}^\\infty a_jx^{s+j}$ and substitute:\n\\[\n(1-x^2)\\sum_{j=0}^\\infty a_j(s+j)(s+j-1)x^{s+j-2}-3x\\sum_{j=0}^\\infty a_j(s+j)x^{s+j-1}+n(n+2)\\sum_{j=0}^\\infty a_jx^{s+j}=0\n\\]\nThe coefficients of each order must be zero, so\n\\begin{alignat*}{2}\n    & x^{s-2}:\\qquad && a_0s(s-1)=0\\\\\n    & x^{s-1}:\\qquad && a_1(s+1)s=0\\\\\n    & x^{s+j}:\\qquad && a_{j+2}(s+j+2)(s+j+1)-a_j(s+j)(s+j-1)-3a_j(s+j)+n(n+2)a_j=0\\\\\n    & && a_{j+2}=a_j\\frac{(s+j)(s+j+2)-n(n+2)}{(s+j+1)(s+j+2)}=a_j\\frac{(s+j-n)(s+j+n+2)}{(s+j+1)(s+j+2)}\n\\end{alignat*}\nChoose $s=1$, then $a_1=0$, and\n\\[\ny_{odd}=\\sum_{j\\;even}a_jx^{1+j}=a_0\\left[x+\\frac{(1-n)(3+n)}{3!}x^3+\\frac{(3-n)(1-n)(3+n)(5+n)}{5!}x^5+\\cdots \\right]\n\\]\n\nFrom the recurrence relation, if $n$ is a positive odd integer, then $y_{odd}$ will terminate at $a_{n-1}x^{n}$, and the series will be converted into a finite polynomial.\n\n\\section*{8.4 Variation Method}\n\n\\paragraph{8.4.1}\n(a) 
\n\\[\n\\br{\\varphi}{\\varphi}=\\int_0^\\infty4\\alpha^3x^2e^{-2\\alpha x}dx=4\\alpha^3\\left[2\\frac{e^{-2\\alpha x}}{(-2\\alpha)^3} \\right]_0^\\infty=1\n\\]\n\n(b)\n\\[\n\\langle x^{-1}\\rangle=\\int_0^\\infty 4\\alpha^3xe^{-2\\alpha x}dx=4\\alpha^3\\left[-\\frac{e^{-2\\alpha x}}{(-2\\alpha)^2} \\right]_0^\\infty=\\alpha\n\\]\n\n(c)\n\\[\n\\langle\\frac{d^2}{dx^2}\\rangle=\\int_0^\\infty4\\alpha^3xe^{-\\alpha x}\\frac{d^2}{dx^2}(xe^{-\\alpha x})dx\n\\]\n\\[\n=4\\alpha^5\\int_0^\\infty x^2e^{-2\\alpha x}dx-8\\alpha^4\\int_0^\\infty xe^{-2\\alpha x}dx\n\\]\n\\[\n=4\\alpha^5\\left[2\\frac{e^{-2\\alpha x}}{(-2\\alpha)^3} \\right]_0^\\infty-8\\alpha^4\\left[-\\frac{e^{-2\\alpha x}}{(-2\\alpha)^2} \\right]_0^\\infty\n\\]\n\\[\n=\\alpha^2-2\\alpha^2=-\\alpha^2\n\\]\n\n(d)\n\\[\n\\left\\langle\\varphi\\left|-\\frac{1}{2}\\frac{d^2}{dx^2}-\\frac{1}{x} \\right|\\varphi \\right\\rangle=\\frac{\\alpha^2}{2}-\\alpha\n\\]\n\\[\n\\frac{d}{d\\alpha}\\left[\\frac{\\alpha^2}{2}-\\alpha\\right]=\\alpha-1=0\n\\]\nso \n\\[\n\\alpha=1\n\\]\nand the minimum value of the expectation value is\n\\[\n\\left[\\frac{\\alpha^2}{2}-\\alpha\\right]_{\\alpha=1}=-\\frac{1}{2}\n\\]\n\n\n\n\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "f974d5696809871a4913d8b36bef6766b6b73819", "size": 15961, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mathematical Methods for Physicists/Chapter 08/main.tex", "max_stars_repo_name": "hikarimusic2002/Solutions", "max_stars_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematical Methods for Physicists/Chapter 08/main.tex", "max_issues_repo_name": "hikarimusic2002/Solutions", "max_issues_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematical Methods for Physicists/Chapter 08/main.tex", "max_forks_repo_name": "hikarimusic2002/Solutions", "max_forks_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.4730021598, "max_line_length": 560, "alphanum_fraction": 0.6020926007, "num_tokens": 7668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185318, "lm_q2_score": 0.7905303137346446, "lm_q1q2_score": 0.5655643654331337}}
{"text": "%%%\n% This advanced example of a B+ tree is supposed to help creating/modifying B+\n% trees using the btreevis package. It covers all possibilities of extending and\n% shrinking a B+ tree. Hence, it is a good way to learn and understand what are\n% the necessary steps to create and modify B+ trees, i.e. which macros/styles\n% have to be used/modified.\n%\n% All sections and subsections provide a header describing what has to be done\n% and why. Each (sub-)section contains exactly a single tikzpicture containing\n% inline comments to describe single parts of the header description.\n%\n% The (sub-)sections are described in terms of the (sub-)section before. So, if\n% a change is described, it is a change with respect to the (sub-)section before.\n%%%\n\n\\documentclass{article}\n\n\\usepackage{hyperref}\n\\usepackage[german]{babel}\n\n% Use btreevis package.\n\\usepackage{btreevis}\n\n\\begin{document}\n  \\tableofcontents\n  \\clearpage\n\n  %%%\n  % Create an intially empty B+ tree with one node containing 3 key cells and\n  % 4 pointer cells.\n  %%%\n  \\section{Initial B$^+$ tree}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1                 % No scaling necessary.\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate (empty) content of (tree) level 0 (root level).\n    \\CreateBTreeNode{0}{1}{{}}{\\levelZeroNodeOne}\n\n    % Generate B+ tree with a single node (no content).\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne};\n\n    % As there is just a single node by now, no nodes need to be connected.\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % This section describes all possibilities of extending a B+ tree:\n  %   - Add values to an existing node when a node is not yet full.\n  %   - Add values to an existing node when a node is already full.\n  %   - Add a new level to the B+ tree and connect the new nodes/leaves.\n  %   - Add a new node to an existing level and connect the new node.\n  %%%\n  \\section{Extending the B$^+$ tree}\n\n  %%%\n  % Values 3 and 9 are added to the existing (empty) node.\n  % Hence, the node is neither full before nor after the inserts.\n  %\n  % This is probably the easiest thing to do: we simply add the values 3 and 9\n  % to the comma-separated list of values (which is empty by now) as third\n  % argument of the \\CreateBTreeNode macro. The third values is treated as empty\n  % and thus the cell will also have no content in it.\n  %%%\n  \\subsection{Insert 3 and 9}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1                 % No scaling necessary.\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we add 3 and 9 to the comma-separated list, i.e. 
{3, 9}.\n    \\CreateBTreeNode{0}{1}{{3, 9}}{\\levelZeroNodeOne}\n\n    % Generate B+ tree with a single node.\n    % No change here because we still have only a single node.\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne};\n\n    % As there is still just a single node by now, no nodes need to be connected.\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Value 10 is added to the existing node containing two values by now.\n  % The node is full afterwards.\n  %\n  % As the workflow is the same, this should be as easy to understand as the\n  % last step: we again add the value 10 to the comma-separated list of values.\n  %%%\n  \\subsection{Insert 10}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1                 % No scaling necessary.\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we add 10 to the comma-separated list, i.e. {3, 9, 10}.\n    \\CreateBTreeNode{0}{1}{{3, 9, 10}}{\\levelZeroNodeOne}\n\n    % Generate B+ tree with a single node.\n    % No change here because we still have only a single node.\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne};\n\n    % As there is still just a single node by now, no nodes need to be connected.\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Values 1 and 2 are added to the existing node which is already full. Because\n  % the node is already full we need to split the node into two and add another\n  % level to the tree.\n  %\n  % Several things have to be done:\n  %   - Add two new calls of the \\CreateBTreeNode macro to generate a total of\n  %     three node contents: one root node and its two child nodes.\n  %   - When generating the tree, we need two new 'child' clauses to utilize the\n  %     tree environment of TikZ to position the nodes properly. We use the newly\n  %     generated contents as node contents.\n  %   - As we have two tree levels, the \\ConnectBTreeNodes and the\n  %     \\ConnectBTreeLeaves macros have to be used to connect the nodes in a\n  %     proper way. Hence, we introduce one call for each of these two macros.\n  %   - Because of the two tree levels it is also recommended to add a style for\n  %     the new level of the tree.\n  %%%\n  \\subsection{Insert 1 and 2}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1,                % No scaling necessary.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 8em,\n      level distance = 6em\n    }\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we just have a single value (9) in the comma-separated list because\n    % we only have two subsequent nodes.\n    \\CreateBTreeNode{0}{1}{{9}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % We add two new calls of the \\CreateBTreeNode macro to generate the contents\n    % of the subsequent nodes. One node contains three values, i.e. the comma-\n    % separated list is {1, 2, 3}, and the second node contains two values, i.e.\n    % {9, 10}.\n    % We also have to provide the correct level number (1) as first argument of\n    % both calls as well as the correct node number for each call as second\n    % argument, i.e. 
1 and 2 respectively.\n    % As last argument we need to specify two new variables in which the\n    % respective node content is stored.\n    \\CreateBTreeNode{1}{1}{{1, 2, 3}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{9, 10}}{\\levelOneNodeTwo}\n\n    % Generate B+ tree with one root and two child nodes.\n    % Here we add two 'child' clauses generating two new tree nodes. Because\n    % these nodes are on the subsequent tree level (level number 1), we use the\n    % two contents generated for this level as node contents.\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne}\n    child { node[BTreeNodeDefault] {\\levelOneNodeOne} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeTwo} };\n\n    % Because the tree now consists of three nodes and two levels, we need to\n    % use the \\ConnectBTreeNodes macro to connect all the nodes vertically.\n    % We connect from level 0 (first argument) node 1 (second argument) to\n    % level 1 (third argument). On level 1 we have 2 nodes to connect (fourth\n    % argument) and the node to start with on level 1 has number 1 (fifth\n    % argument).\n    \\ConnectBTreeNodes{0}{1}{1}{2}{1}\n\n    % Because the tree now consists of more than one leaf node, we need to\n    % use the \\ConnectBTreeLeaves macro to connect all the leaves horizontally.\n    % We connect all leaf nodes of level 1 (first argument) and there are 2 leaf\n    % nodes to connect. Remember: the \\ConnectBTreeLeaves macro always starts\n    % connecting from node number 1 of the supplied level, so no node to start\n    % with has to be defined.\n    \\ConnectBTreeLeaves{1}{2}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Values 6 and 4 are added to the first node of the B+ tree. Hence, this node\n  % is overfilled and has to be split into two nodes. So another node must be\n  % added to the existing leaf level of the tree.\n  %\n  % To accomplish this, the following steps have to be done:\n  %   - The root node has to be modified. Because we add another node on the\n  %     subsequent level, the root node needs to hold another value, i.e. 3.\n  %   - Obviously, one new call to the \\CreateBTreeNode is necessary to generate\n  %     the content of the new leaf node. Thus, also a new variable to hold this\n  %     content has to be declared. Furthermore, the node is located between the\n  %     two existing leaf nodes. 
Because of the consecutive numbering of the\n  %     nodes, we also have to provide the correct node numbers for the new and\n  %     the last leaf node!\n  %   - A new child has to be added to the tree containing the newly generated\n  %     node content.\n  %   - The \\ConnectBTreeNodes and \\ConnectBTreeLeaves calls have to be adjusted\n  %     according to the new number of leaf nodes.\n  %%%\n  \\subsection{Insert 6 and 4}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1,                % No scaling necessary.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 8em,\n      level distance = 6em\n    }\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we have to add a value (3) to the comma-separated list.\n    \\CreateBTreeNode{0}{1}{{3, 9}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % We add one new call of the \\CreateBTreeNode macro to generate the content of\n    % one additional leaf node. Because of the splitting, the new node is located\n    % between the two already existing leaves. Hence, we also have to adjust the\n    % node numbers of this level: the node previously numbered 2 now has node\n    % number 3 and the new leaf node gets node number 2. Also a new variable is\n    % introduced and the variables are used according to the node numbering.\n    % Summary:\n    %   - Node 1 holds values 1 and 2.\n    %   - Node 2 holds values 3, 4 and 6.\n    %   - Node 3 holds values 9 and 10.\n    \\CreateBTreeNode{1}{1}{{1, 2}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{3, 4, 6}}{\\levelOneNodeTwo}\n    \\CreateBTreeNode{1}{3}{{9, 10}}{\\levelOneNodeThree}\n\n    % Generate B+ tree with one root and three child nodes.\n    % Here we add another 'child' clause generating one new tree node. As before\n    % we use the previously generated node contents.\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne}\n    child { node[BTreeNodeDefault] {\\levelOneNodeOne} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeTwo} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeThree} };\n\n    % Because the root node now has three child nodes, we need to modify\n    % the \\ConnectBTreeNodes call accordingly.\n    % We connect from level 0 (first argument) node 1 (second argument) to\n    % level 1 (third argument). On level 1 we now have 3 nodes to connect (fourth\n    % argument) and the node to start with on level 1 has number 1 (fifth\n    % argument).\n    \\ConnectBTreeNodes{0}{1}{1}{3}{1}\n\n    % Because the tree now consists of one more leaf node, we also need to\n    % use the \\ConnectBTreeLeaves macro accordingly.\n    % We connect all leaf nodes of level 1 (first argument) and now there are 3\n    % of those to connect.\n    \\ConnectBTreeLeaves{1}{3}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Values 5 and 7 are added to the second node of the B+ tree. Hence, this node\n  % is overfilled and has to be split into two nodes. So another node must be\n  % added to the existing leaf level of the tree.\n  %\n  % To accomplish this, the following steps have to be done:\n  %   - The root node has to be modified. Because we add another node on the\n  %     subsequent level, the root node needs to hold another value, i.e. 
5.\n  %   - Once again, a new call to the \\CreateBTreeNode is necessary to generate\n  %     the content of the new leaf node. Thus, also a new variable to hold this\n  %     content has to be declared. Furthermore, the node is located between the\n  %     second and the third existing leaf nodes. Because of the consecutive\n  %     numbering of the nodes, we also have to adjust the node numbers for the\n  %     new and all following leaf nodes.\n  %   - A new child has to be added to the tree containing the newly generated\n  %     node content.\n  %   - The \\ConnectBTreeNodes and \\ConnectBTreeLeaves calls have to be adjusted\n  %     according to the new number of leaf nodes.\n  %   - The tikzpicture options are changed as well: the sibling distance of\n  %     level 1 is reduced to 7em and the level distance is increased to 8em.\n  %     This is not obligatory but playing around with the sibling and level\n  %     distances often helps to make the B+ tree look more beautiful (e.g. the\n  %     nodes do not overlap or the orientation of the arrows is better).\n  %%%\n  \\subsection{Insert 5 and 7}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1,                % No scaling necessary.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 7em, % Smaller distance between siblings.\n      level distance = 8em    % Larger distance between levels.\n    }\n  ]\n    % Set number of pointers per node (this should be done always upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we have to add a value (5) to the comma-separated list.\n    \\CreateBTreeNode{0}{1}{{3, 5, 9}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % We add one new call of the \\CreateBTreeNode macro to generate the content of\n    % one additional leaf node. Because of the splitting, the new node is located\n    % between the second and the third existing leaves. Hence, we also have to\n    % adjust the node numbers of this level: the node previously numbered 3 now\n    % has node number 4 and the new leaf node gets node number 3. Node number 2\n    % keeps its number. Also a new variable is introduced and the variables are\n    % used according to the node numbering.\n    % Summary:\n    %   - Node 1 holds values 1 and 2.\n    %   - Node 2 holds values 3 and 4.\n    %   - Node 3 holds values 5, 6 and 7.\n    %   - Node 4 holds values 9 and 10.\n    \\CreateBTreeNode{1}{1}{{1, 2}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{3, 4}}{\\levelOneNodeTwo}\n    \\CreateBTreeNode{1}{3}{{5, 6, 7}}{\\levelOneNodeThree}\n    \\CreateBTreeNode{1}{4}{{9, 10}}{\\levelOneNodeFour}\n\n    % Generate B+ tree with one root and four child nodes.\n    % Here we add another 'child' clause generating one new tree node. As before\n    % we use the previously generated node contents.\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne}\n    child { node[BTreeNodeDefault] {\\levelOneNodeOne} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeTwo} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeThree} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeFour} };\n\n    % Because the root node now has four child nodes, we need to modify\n    % the \\ConnectBTreeNodes call accordingly.\n    % We connect from level 0 (first argument) node 1 (second argument) to\n    % level 1 (third argument). 
On level 1 we now have 4 nodes to connect (fourth\n    % argument) and the node to start with on level 1 has number 1 (fifth\n    % argument).\n    \\ConnectBTreeNodes{0}{1}{1}{4}{1}\n\n    % Because the tree now consists of one more leaf node, we also need to\n    % use the \\ConnectBTreeLeaves macro accordingly.\n    % We connect all leaf nodes of level 1 (first argument) and now there are 4\n    % of those to connect.\n    \\ConnectBTreeLeaves{1}{4}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Value 8 is added to the third leaf node. This node is overfilled and is split\n  % into two nodes. Afterwards, the leaf level of the tree consists of five nodes\n  % but the root node only has four pointers. Thus, an additional level has to be\n  % added to the tree. The tree afterwards consists of\n  %   - one root node holding one value, i.e. {5}.\n  %   - two nodes on level 1 holding values {3} and {7, 9}, respectively.\n  %   - five leaf nodes on level 2 holding their respective values.\n  %\n  % This is one of the hardest modifications of an existing B+ tree. The\n  % following steps are necessary:\n  %   - The root node has to be modified. Because we add another level, the root\n  %     node now holds the smallest value of its right subtree, i.e. 5.\n  %   - The level of the existing leaf nodes has to be changed to 2 and also\n  %     the variable names should be changed accordingly. Although the renaming\n  %     is not obligatory as long as the contents are used correctly afterwards,\n  %     it is still recommended to keep it traceable and avoid errors.\n  %   - Two new (inner) nodes have to be generated on level 1. Remember: these\n  %     have to be numbered consecutively, i.e. {1}{1} and {1}{2} as the first\n  %     two arguments, respectively.\n  %   - One new leaf node has to be generated on level 2, holding the values 7\n  %     and 8. This node is generated between the nodes previously numbered 3 and\n  %     4. Hence, node number 3 keeps its number, the newly generated node gets\n  %     number 4 and the content previously numbered 4 is now node number 5.\n  %   - The new level has to be added to the tree environment as well.\n  %   - The existing \\ConnectBTreeNodes call has to be modified because the root\n  %     node now only has two child nodes. Furthermore, two new calls of\n  %     \\ConnectBTreeNodes are necessary to draw the connections between the two\n  %     nodes of level 1 and the 5 leaf nodes of level 2.\n  %   - The \\ConnectBTreeLeaves call has to be modified according to the new\n  %     level of the leaf nodes and the increased number of leaves.\n  %   - The tikzpicture options are changed as well:\n  %       - The sibling distance of level 1 is increased to 18em. This is\n  %         recommended because of the new level. The sibling distance has to\n  %         be increased almost every time when adding a new level.\n  %       - The level distance of level 1 is decreased to 6em. This is just a\n  %         minor adjustment.\n  %       - A new style for level 2 is introduced.\n  %       - The whole tikzpicture is scaled down to 0.7 now because the tree got\n  %         higher and wider.\n  %       - A new style is introduced to scale the nodes too. This is necessary\n  %         because nodes have to be scaled in TikZ separately.
So, we just use\n  %         the BTreeNodeDefault style and add a scale clause for the nodes\n  %         afterwards.\n  %     This is not obligatory but playing around with the sibling and level\n  %     distances often helps to make the B+ tree look more beautiful (e.g. the\n  %     nodes do not overlap or the orientation of the arrows is better).\n  %%%\n  \\subsection{Insert 8}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 0.7,              % Scaling to 0.7 necessary.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 18em,% Larger distance between siblings.\n      level distance = 6em    % Smaller distance between levels.\n    },\n    level 2/.style = {        % Level 2 tree style recommended.\n      sibling distance = 7em, % Just like the style used for level 1 previously.\n      level distance = 6em\n    },\n    BTreeNodeScaled/.style = {% New B+ tree node style.\n      BTreeNodeDefault,       % Uses the predefined B+ tree node style.\n      nodes = { scale = 0.7 } % Scales all nodes to 0.7.\n    }\n  ]\n    % Set number of pointers per node (this should always be done upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Because a new level is added, the root has to hold the smallest value of\n    % its right subtree, i.e. 5.\n    \\CreateBTreeNode{0}{1}{{5}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % Here we generate the new level of the B+ tree consisting of two nodes.\n    % Because we push down the leaf level by 1, the new level has level number 1.\n    % Numbering is done consecutively, as always, starting with node number 1.\n    % The names of the variables are chosen as before to avoid any confusion.\n    % Summary:\n    %   - Node 1 holds value 3.\n    %   - Node 2 holds values 7 and 9.\n    \\CreateBTreeNode{1}{1}{{3}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{7, 9}}{\\levelOneNodeTwo}\n\n    % Generate content of (tree) level 2.\n    % We add one new call of the \\CreateBTreeNode macro to generate the content\n    % of one additional leaf node. Because of the splitting, the new node is\n    % located between the third and the fourth existing leaves. Hence, we also\n    % have to adjust the node numbers of this level: the node previously numbered\n    % 4 now has node number 5 and the new leaf node gets node number 4. Node\n    % number 3 keeps its number.
Also a new variable is introduced and the variables are used\n    % according to the node numbering.\n    % Because a new level is added, the leaves are now located at level number 2.\n    % This is changed in the first argument for all leaf nodes accordingly.\n    % Also the names of the variables are changed accordingly to avoid confusion.\n    % Summary:\n    %   - Node 1 holds values 1 and 2.\n    %   - Node 2 holds values 3 and 4.\n    %   - Node 3 holds values 5 and 6.\n    %   - Node 4 holds values 7 and 8.\n    %   - Node 5 holds values 9 and 10.\n    \\CreateBTreeNode{2}{1}{{1, 2}}{\\levelTwoNodeOne}\n    \\CreateBTreeNode{2}{2}{{3, 4}}{\\levelTwoNodeTwo}\n    \\CreateBTreeNode{2}{3}{{5, 6}}{\\levelTwoNodeThree}\n    \\CreateBTreeNode{2}{4}{{7, 8}}{\\levelTwoNodeFour}\n    \\CreateBTreeNode{2}{5}{{9, 10}}{\\levelTwoNodeFive}\n\n    % Generate B+ tree with one root, two inner nodes and five leaf nodes.\n    % Here we add the level according to the new structure of the B+ tree.\n    \\node[BTreeNodeScaled] {\\levelZeroNodeOne}\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeOne}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeOne} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeTwo} }\n    }\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeTwo}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeThree} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeFour} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeFive} }\n    };\n\n    % Connect nodes level 0 -> level 1.\n    % Because the root now only has two child nodes, we just change the fourth\n    % argument accordingly.\n    \\ConnectBTreeNodes{0}{1}{1}{2}{1}\n    % Connect nodes level 1 node 1 -> level 2 nodes 1 and 2.\n    % Because we added a new level, we also need to add additional calls to the\n    % \\ConnectBTreeNodes macro.\n    % This call is used to connect the first node of level 1 with its two child\n    % nodes of level 2, starting from node number 1 of level 2. So, the nodes\n    % 1 and 2 of level 2 are child nodes of node 1 of level 1.\n    \\ConnectBTreeNodes{1}{1}{2}{2}{1}\n    % Connect nodes level 1 node 2 -> level 2 nodes 3, 4 and 5.\n    % This call is used to connect the second node of level 1 with its three\n    % child nodes of level 2, starting from node number 3 of level 2. So, the\n    % nodes 3, 4 and 5 of level 2 are child nodes of node 2 of level 1.\n    \\ConnectBTreeNodes{1}{2}{2}{3}{3}\n\n    % Because the leaf nodes are now located at level 2 and there is one more\n    % leaf node, we need to adjust the \\ConnectBTreeLeaves call.\n    % We connect all leaf nodes of level 2 (first argument) and now there are 5\n    % of those to connect.\n    \\ConnectBTreeLeaves{2}{5}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % This section describes all possibilities of shrinking a B+ tree:\n  %   - Delete a value in a leaf node, delete corresponding value in inner node\n  %     accordingly and connect nodes/leaves\n  %   - Change a value in a node/leaf and connect nodes/leaves\n  %   - Delete a whole level of the B+ tree and connect nodes/leaves accordingly\n  %%%\n  \\section{Shrinking the B$^+$ tree}\n\n  %%%\n  % Value 7 is removed from the B+ tree, specifically from leaf number 4. After\n  % removing 7 from leaf number 4, this leaf has an insufficient number of keys.\n  % Hence, it is merged with its left sibling, leaf number 3, which contains the\n  % keys 5, 6 and 8 after the merge.
Also the parent node has to be updated,\n  % because it now has two child nodes (instead of three). Thus, one key has to\n  % be removed, i.e. 7.\n  %\n  % These are the necessary steps:\n  %   - One of the existing \\CreateBTreeNode calls has to be removed, preferably\n  %     the one of node number 5 because we only have four leaf nodes afterwards.\n  %   - The values of node number 3 need to be changed. In essence, value 8 has\n  %     to be added to the comma-separated list supplied as third argument.\n  %   - Because we removed one \\CreateBTreeNode call, we have to remove one child\n  %     from the tree as well.\n  %   - The last call of the \\ConnectBTreeNodes macro has to be updated. Now\n  %     there are only two instead of three child nodes.\n  %   - The \\ConnectBTreeLeaves call has to be modified. Only four leaves need to\n  %     be connected (instead of formerly five).\n  %   - The tikzpicture options are not altered (not obligatory). However, the\n  %     sibling/level distances could be adjusted here if desired (e.g. to reduce\n  %     the horizontal distance between the leaves).\n  %%%\n  \\subsection{Delete 7}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 0.7,              % Scaling to 0.7.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 18em,% No changes.\n      level distance = 6em\n    },\n    level 2/.style = {        % Level 2 tree style recommended.\n      sibling distance = 7em, % No changes.\n      level distance = 6em\n    },\n    BTreeNodeScaled/.style = {% No changes.\n      BTreeNodeDefault,\n      nodes = { scale = 0.7 }\n    }\n  ]\n    % Set number of pointers per node (this should always be done upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % No modifications to the root node.\n    \\CreateBTreeNode{0}{1}{{5}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % On level 1 we just have to alter the values of the second node. We remove\n    % value 7, so node number 2 of level 1 just contains a single value, i.e. 9.\n    \\CreateBTreeNode{1}{1}{{3}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{9}}{\\levelOneNodeTwo}\n\n    % Generate content of (tree) level 2.\n    % We remove one call of the \\CreateBTreeNode macro (the last one), so only\n    % four leaf nodes are generated on level 2.
The node numbering is\n    % straightforward, from 1 to 4.\n    % To be consistent, we also use the variable names corresponding to the\n    % respective number of each node.\n    % Summary:\n    %   - Node 1 holds values 1 and 2.\n    %   - Node 2 holds values 3 and 4.\n    %   - Node 3 holds values 5, 6 and 8.\n    %   - Node 4 holds values 9 and 10.\n    \\CreateBTreeNode{2}{1}{{1, 2}}{\\levelTwoNodeOne}\n    \\CreateBTreeNode{2}{2}{{3, 4}}{\\levelTwoNodeTwo}\n    \\CreateBTreeNode{2}{3}{{5, 6, 8}}{\\levelTwoNodeThree}\n    \\CreateBTreeNode{2}{4}{{9, 10}}{\\levelTwoNodeFour}\n\n    % Generate B+ tree with one root, two inner nodes and four leaves.\n    % Here we remove one node (\\levelTwoNodeFive) from the right subtree.\n    \\node[BTreeNodeScaled] {\\levelZeroNodeOne}\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeOne}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeOne} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeTwo} }\n    }\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeTwo}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeThree} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeFour} }\n    };\n\n    % Connect nodes level 0 -> level 1.\n    % No changes, the root still has two child nodes.\n    \\ConnectBTreeNodes{0}{1}{1}{2}{1}\n    % Connect nodes level 1 node 1 -> level 2 nodes 1 and 2.\n    % No changes, the left subtree does not change.\n    \\ConnectBTreeNodes{1}{1}{2}{2}{1}\n    % Connect nodes level 1 node 2 -> level 2 nodes 3 and 4.\n    % Here the fourth parameter needs to be changed to 2 because we removed one\n    % leaf node from the right subtree and thus have one child node fewer to\n    % connect.\n    \\ConnectBTreeNodes{1}{2}{2}{2}{3}\n\n    % Here we need to change the last argument because of the altered number of\n    % leaf nodes (four instead of formerly five).\n    % We connect all leaf nodes of level 2 and now there are 4 of those to\n    % connect.\n    \\ConnectBTreeLeaves{2}{4}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Value 10 is removed from the B+ tree, specifically from leaf number 4. After\n  % removing 10 from leaf number 4, this leaf has an insufficient number of keys.\n  % Hence, it is merged with its left sibling, leaf number 3. Because after this\n  % merge node number 3 would contain one key too many, the node must be split.\n  % Hence, we can keep the two existing nodes and just alter their values\n  % accordingly. Node number 3 holds the two smaller values, i.e. 5 and 6, and\n  % node number 4 holds the two greater values, i.e. 8 and 9.\n  % Because of the B+ tree characteristics, the content of the parent node also\n  % needs to be changed. The right pointer of a key in the parent node has to\n  % hold the smallest value of the subtree the pointer points to.
In this case\n  % this is 8.\n  %\n  % The following steps need to be done:\n  %   - Remove value 10 from the content of the last leaf node\n  %     (\\levelTwoNodeFour).\n  %   - Insert value 8 into the content of the last leaf node (\\levelTwoNodeFour)\n  %     in front of value 9.\n  %   - The value of the parent node has to be changed to 8 (formerly 9).\n  %   - The \\ConnectBTreeNodes and \\ConnectBTreeLeaves calls need not be\n  %     changed because the number of nodes does not change, only their\n  %     values.\n  %   - As before, the tikzpicture options are not altered (not obligatory).\n  %%%\n  \\subsection{Delete 10}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 0.7,              % Scaling to 0.7.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 18em,% No changes.\n      level distance = 6em\n    },\n    level 2/.style = {        % Level 2 tree style recommended.\n      sibling distance = 7em, % No changes.\n      level distance = 6em\n    },\n    BTreeNodeScaled/.style = {% No changes.\n      BTreeNodeDefault,\n      nodes = { scale = 0.7 }\n    }\n  ]\n    % Set number of pointers per node (this should always be done upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % No modifications to the root node.\n    \\CreateBTreeNode{0}{1}{{5}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % On level 1 we just have to alter the value of the second node. We replace\n    % value 9 with value 8 (to comply with the B+ tree characteristics).\n    \\CreateBTreeNode{1}{1}{{3}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{8}}{\\levelOneNodeTwo}\n\n    % Generate content of (tree) level 2.\n    % We just alter the comma-separated value lists of the last two calls to\n    % {5, 6} and {8, 9}, respectively.\n    % Summary:\n    %   - Node 1 holds values 1 and 2.\n    %   - Node 2 holds values 3 and 4.\n    %   - Node 3 holds values 5 and 6.\n    %   - Node 4 holds values 8 and 9.\n    \\CreateBTreeNode{2}{1}{{1, 2}}{\\levelTwoNodeOne}\n    \\CreateBTreeNode{2}{2}{{3, 4}}{\\levelTwoNodeTwo}\n    \\CreateBTreeNode{2}{3}{{5, 6}}{\\levelTwoNodeThree}\n    \\CreateBTreeNode{2}{4}{{8, 9}}{\\levelTwoNodeFour}\n\n    % Generate B+ tree with one root, two inner nodes and four leaves.\n    % No changes necessary because we just altered the contents of the nodes,\n    % neither their location nor their numbering.\n    \\node[BTreeNodeScaled] {\\levelZeroNodeOne}\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeOne}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeOne} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeTwo} }\n    }\n    child {\n      node[BTreeNodeScaled] {\\levelOneNodeTwo}\n      child { node[BTreeNodeScaled] {\\levelTwoNodeThree} }\n      child { node[BTreeNodeScaled] {\\levelTwoNodeFour} }\n    };\n\n    % Connect nodes level 0 -> level 1.\n    % No changes, the root still has two child nodes.\n    \\ConnectBTreeNodes{0}{1}{1}{2}{1}\n    % Connect nodes level 1 node 1 -> level 2 nodes 1 and 2.\n    % No changes, the left subtree does not change.\n    \\ConnectBTreeNodes{1}{1}{2}{2}{1}\n    % Connect nodes level 1 node 2 -> level 2 nodes 3 and 4.\n    % No changes, the right subtree does not change either.\n    \\ConnectBTreeNodes{1}{2}{2}{2}{3}\n\n    % Again no changes, because the leaf node level and the number of leaf nodes\n    % do not change.\n    
\\ConnectBTreeLeaves{2}{4}\n  \\end{tikzpicture}\n  \\end{center}\n\n  %%%\n  % Value 6 is removed from the B+ tree, specifically from leaf number 3. Thus,\n  % this leaf contains an insufficient number of keys and is merged with its\n  % right sibling (because no left sibling exists). After this merge, the parent\n  % of the resulting node would just have a single child node, which is not\n  % B+ tree compliant. Hence, we need to remove the whole mid level (level 1) of\n  % the tree because three leaf nodes can be covered by a single root node with\n  % two keys and three pointers. So, the resulting B+ tree consists of:\n  %   - A root node holding the values 3 and 5 (smallest value of the respective\n  %     right subtree).\n  %   - Three leaf nodes holding the values {1, 2}, {3, 4} and {5, 8, 9},\n  %     respectively.\n  %\n  % This is one of the hardest modifications of an existing B+ tree. The\n  % following steps are necessary:\n  %   - The values of the root node need to be changed. The comma-separated list\n  %     is {3, 5} afterwards.\n  %   - The calls of \\CreateBTreeNode for level 2 can be removed.\n  %   - One call of \\CreateBTreeNode needs to be added for level 1. Previously,\n  %     level 1 consisted of two nodes, now it consists of three nodes. The\n  %     numbering of the nodes has to be correct (1 - 3) and the variable names\n  %     are chosen according to the level and the node number, e.g. for node 2 of\n  %     level 1, the variable is named \\levelOneNodeTwo.\n  %   - When generating the tree, all clauses generating level 2 are removed and\n  %     a clause to generate a third node on level 1 is added. The content of the\n  %     nodes is assigned accordingly.\n  %   - Only the first call of \\ConnectBTreeNodes has to be kept, the other two\n  %     subsequent calls are removed. The kept call needs to be changed because\n  %     we now have three instead of two nodes on level 1.\n  %   - The \\ConnectBTreeLeaves call has to be altered too. The level of the leaf\n  %     nodes has changed (now 1, formerly it was 2) and the number of leaves has\n  %     changed (now 3 instead of previously 4).\n  %   - The tikzpicture options are shortened. The style for level 2 of the tree\n  %     is removed (not used anymore). Also the sibling and level distance of\n  %     level 1 is changed, both to 8em. Last but not least, the scaling is\n  %     reset/removed - the global tikzpicture scaling was reset to 1 and the\n  %     BTreeNodeScaled style was removed because no node scaling is needed\n  %     anymore.\n  %%%\n  \\subsection{Delete 6}\n  \\begin{center}\n  \\begin{tikzpicture}[\n    BTreeDefault,             % Use default B+ tree style.\n    scale = 1,                % No scaling necessary.\n    level 1/.style = {        % Level 1 tree style recommended.\n      sibling distance = 8em, % Smaller distance between siblings.\n      level distance = 8em    % Larger distance between levels.\n    },\n  ]\n    % Set number of pointers per node (this should always be done upfront).\n    \\SetNoOfPoinersPerNode{4}\n\n    % Generate content of (tree) level 0 (root level).\n    % Here we just need to change the comma-separated list to contain 3 and 5.\n    \\CreateBTreeNode{0}{1}{{3, 5}}{\\levelZeroNodeOne}\n\n    % Generate content of (tree) level 1.\n    % On level 1 we now have all the unaltered leaf nodes as well as the altered\n    % leaf node holding the values 5, 8 and 9.
The node numbers are chosen\n    % consecutively as required and the variable names are chosen accordingly.\n    \\CreateBTreeNode{1}{1}{{1, 2}}{\\levelOneNodeOne}\n    \\CreateBTreeNode{1}{2}{{3, 4}}{\\levelOneNodeTwo}\n    \\CreateBTreeNode{1}{3}{{5, 8, 9}}{\\levelOneNodeThree}\n\n    % Generate content of (tree) level 2.\n    % Removed because the tree has only two levels after the deletion of 6.\n\n    % Generate B+ tree with one root and three leaf nodes.\n    % We removed all clauses generating level 2 and added a clause to add another\n    % child to the root node (holding the values 5, 8 and 9).\n    \\node[BTreeNodeDefault] {\\levelZeroNodeOne}\n    child { node[BTreeNodeDefault] {\\levelOneNodeOne} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeTwo} }\n    child { node[BTreeNodeDefault] {\\levelOneNodeThree} };\n\n    % Connect nodes level 0 -> level 1.\n    % We changed only the fourth argument, i.e. 3, because the root now has three\n    % instead of two child nodes.\n    \\ConnectBTreeNodes{0}{1}{1}{3}{1}\n\n    % Connect nodes level 1 -> level 2.\n    % Removed because the tree has only two levels after the deletion of 6.\n\n    % We changed both arguments. The first one because the leaf nodes were\n    % relocated from level 2 to level 1 and the second argument because the\n    % number of leaves changed from 4 to 3.\n    \\ConnectBTreeLeaves{1}{3}\n  \\end{tikzpicture}\n  \\end{center}\n\\end{document}\n", "meta": {"hexsha": "e1fb448910ae338cd3c1939353ba839c5621ebc9", "size": 37793, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "example-advanced.tex", "max_stars_repo_name": "danielkocher/btreevis", "max_stars_repo_head_hexsha": "e1f0d1f3096552d8295c489b3f49b3b68de0f783", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2016-12-29T15:49:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-23T07:10:06.000Z", "max_issues_repo_path": "example-advanced.tex", "max_issues_repo_name": "danielkocher/btreevis", "max_issues_repo_head_hexsha": "e1f0d1f3096552d8295c489b3f49b3b68de0f783", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "example-advanced.tex", "max_forks_repo_name": "danielkocher/btreevis", "max_forks_repo_head_hexsha": "e1f0d1f3096552d8295c489b3f49b3b68de0f783", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-11-24T21:10:20.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-18T06:29:51.000Z", "avg_line_length": 47.24125, "max_line_length": 81, "alphanum_fraction": 0.6777180959, "num_tokens": 10492, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.79053032607222, "lm_q1q2_score": 0.5655643646665997}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsthm}\n\n% collectivised from: https://www.overleaf.com/learn/latex/Theorems_and_proofs#Theorem_styles\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\n\\theoremstyle{remark}\n\\newtheorem*{remark}{Remark}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n%\n\n\\begin{document}\n\n\\begin{remark}\n    This statement is true, I guess.\n\\end{remark}\n\n\\begin{definition}[Fibration]\n    A fibration is a mapping between two topological spaces that has the homotopy lifting property for every space $X$.\n\\end{definition}\n\n\\begin{lemma}\n    Given two line segments whose lengths are $a$ and $b$ respectively there \n    is a real number $r$ such that $b=ra$.\n\\end{lemma}\n    \n\\begin{proof}\n    To prove it by contradiction try and assume that the statement is false,\n    proceed from there and at some point you will arrive to a contradiction.\n\\end{proof}\n\n\n\\newpage    \n\\section{Super vertex coloring}\n\n\\begin{definition}[Super vertex coloring]\n    Let $G$ be an undirected graph.\\\\ $c: V(G) \\rightarrow \\{1, 2, ... , k\\}$ is a super vertex coloring iff: \n    $$\\forall v_1, v_2 \\in V(G), c(v_1) = c(v_2) \\Rightarrow v_1 \\notin N(v_2), \\forall u \\in N(v_1), v_2\\notin N(u)$$\n    We say that $G$ is $n$-super colorable if we can find a super vertex coloring using $n$ colors.\n\\end{definition}\n\n%\\subsection{Using a $K_n$ as a tool}\nLet $K_n$ be a complete graph. We can clearly find a super vertex coloring by assigning a different color to each vertex. Also by [1] (lemma 2), We\ncan find an edge coloring using $n$ colors (respectively $n-1$) if $n$ is odd (respectively even).\n\n\\begin{theorem}\n    If $G$ admitt a super vertex coloring using $n$ colors, then we can find and $n$ or $n-1$ coloring of it's edges.\n\\end{theorem}\n\n\\begin{proof} \nThe idea is to use the $K_n$ graph as a tool to decide which color to assign to each edge. \\\\\nLet $G$ be a $n$-super colorable graph. Call $c_g$ such a coloration. Let $K_n$ and denote by $(k_i)$ it's vertices. As stated earlier,\nwe can find a $n$ (or $n-1$) edge coloration of it. We shall call this edge coloration $e_k$.  \\\\\nThen, assign to the $k_i$ a different number in $\\{1,... , n\\}$. Call $c_k$ such a coloration. \\\\\nLet $v_i v_j$ be an edge of $G$. Find $k_1, k_2$ the vertices of $K_n$ such that $c_g(v_i) = c_k(k_1)$ and $c_g(v_j) = c_k(k_2)$ assign to $v_iv_j$ the color \n$e_k(k_1k_2)$. \\\\\nNow let's prove by contradiction that such a coloring of the edges is proper. Assume the existence of $v_1, v_2, v_3$ such that $v_1v_2, v_2v_3 \\in E(G)$ are assigned to the same color.\nLet $k_1, k_2, k_3$ such that $c_g(v_i) = c_k(k_i)$. As $e_k$ is a proper edge coloring, if $e_k(k_1k_2) = e_k(k_2k_3)$, then $k_1=k_3$. Thus, it would mean that\n$c_g(v_1) = c_g(v_3)$, ie $c_g$ is not a super vertex coloring of $G$.\n\\end{proof}\n\n\\section{Special graphs}\n\n\\subsection{Trees}\n\n\\begin{lemma}\n    Let $G$ be a tree with maximum degree $\\Delta$. Then $G$ is $\\Delta + 1$ super colorable.\n\\end{lemma}\n\n\\begin{proof}\n    The proof is done by induction on $|E(G)|$.\\\\\n    When $|E(G)| = 0$, $\\Delta = 0$ and a super coloration is simply assigning the same color to each vertex.\\\\\n    Let $G$ be a graph with $\\Delta$ maximum degree. Find $v$ such that $d(v)=1$, call $v'$ it's only neighbour. \n    As per the induction property $G-v$ admitts a $\\Delta +1$ super vertex coloring. 
\n\n\\section{Special graphs}\n\n\\subsection{Trees}\n\n\\begin{lemma}\n    Let $G$ be a tree with maximum degree $\\Delta$. Then $G$ is $\\Delta + 1$ super colorable.\n\\end{lemma}\n\n\\begin{proof}\n    The proof is done by induction on $|E(G)|$.\\\\\n    When $|E(G)| = 0$, $\\Delta = 0$ and a super coloring is simply assigning the same color to each vertex.\\\\\n    Let $G$ be a tree with maximum degree $\\Delta$. Find $v$ such that $d(v)=1$ and call $v'$ its only neighbour. \n    As per the induction property, $G-v$ admits a $\\Delta +1$ super vertex coloring. As the degree of $v'$ in $G-v$ is at most $\\Delta -1$, at least one color \n    is missing among the colors of $v'$ and its neighbours. Choose this color for $v$.\n\\end{proof}\n\n\\subsection{Interval graphs}\n\n\\begin{lemma}\n    Let $G$ be an interval intersection graph with maximum degree $\\Delta$. Then $G$ is $\\Delta+1$ super colorable. \n\\end{lemma}\n\n\\begin{proof}\n    The proof is done by induction on the number of intervals.\\\\\n    The result is trivial with one interval. \\\\ \\\\\n    Let $G$ be an interval graph. Call $I_i$ its intervals and $v_i$ the associated vertices, and denote by $l_i$ and $r_i$ the left and right endpoints of $I_i$.\n    We can order the intervals $I_1, I_2, \\ldots$ such that $\\forall i,\\ r_i < r_{i+1}$.\n    Consider $G-v_1$. By the induction property, it admits a $\\Delta(G-v_1)+1$ super vertex coloring. \\\\\n    If $\\Delta(G) > \\Delta(G-v_1)$, then just assign to $v_1$ the newly available color.\\\\ \\\\\n    Suppose $d_G(v_1) = k-1 < \\Delta(G)$. The neighbours of $v_1$ are $v_2, \\ldots, v_k$. Observe that any two neighbours of $v_1$ are themselves neighbours in $G$:\n    indeed, they share at least $r_1$ as a common point. \\\\\n    Moreover, we can state that $\\forall i, j \\in \\{2, \\ldots, k\\},\\ i<j \\Rightarrow N(v_i) \\subseteq N(v_j)$:\\\\\n    Let $i, j$ be such that $v_i, v_j$ and $v_1$ are neighbours. Let $v^*$ be a neighbour of $v_i$ such that $v^* \\notin N(v_1)$. \n    \\begin{itemize}\n        \\item As $v_i$ and $v^*$ are neighbours, $l^* < r_i$; so $l^* < r_j$ by construction.\n        \\item As $v^*$ and $v_1$ are not neighbours, $r_1<l^*$.\n        \\item As $v_j$ and $v_1$ are neighbours, $l_j<r_1$.\n    \\end{itemize}\n    Thus, $l_j< l^* < r_j$; so $I^*$ and $I_j$ share a common point in $l^*$. \\\\\n    So each vertex at distance at most two from $v_1$ is either $v_k$ or a neighbour of $v_k$. As $d_{G-v_1}(v_k)$ is at most $\\Delta(G)-1$, we deduce that at least one color has to be missing\n    among the neighbours of $v_k$ (including $v_k$ itself); choose this color for $v_1$.  \n\\end{proof}\n\n\\section{References}\n[1] V.A. Bojarshinov, Edge and total coloring of interval graphs, Discrete Applied Mathematics 114 (2001) 23-28\n\n\\end{document}", "meta": {"hexsha": "cfc80dc1a227b5a81fc7c297de075f2827edffda", "size": 5452, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/1/1.tex", "max_stars_repo_name": "Qiselong/Internship-GSCOP21", "max_stars_repo_head_hexsha": "c33b09fb889181d5d40c785e964f7748a82dc61f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/1/1.tex", "max_issues_repo_name": "Qiselong/Internship-GSCOP21", "max_issues_repo_head_hexsha": "c33b09fb889181d5d40c785e964f7748a82dc61f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/1/1.tex", "max_forks_repo_name": "Qiselong/Internship-GSCOP21", "max_forks_repo_head_hexsha": "c33b09fb889181d5d40c785e964f7748a82dc61f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.1171171171, "max_line_length": 185, "alphanum_fraction": 0.6757153338, "num_tokens": 1782, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7905303186696747, "lm_q1q2_score": 0.5655643593706411}}
{"text": "\\documentclass[11pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage[left=2.5cm, right=2.5cm, top=2.5cm, bottom=2.5cm]{geometry}\n\\usepackage{hyperref}\n\\usepackage{graphicx}\n\\usepackage{times}\n\\usepackage[bottom]{footmisc}\n\\usepackage{listings}\n\\usepackage{bm}\n\\usepackage{amsmath}\n\\usepackage{algpseudocode}\n\\usepackage{algorithm}\n\\usepackage{titlesec}\n\\usepackage{tikz}\n\\usepackage{tikz-qtree}\n\n\\titleformat{\\subsubsection}[runin]{}{}{}{}[]\n\n\\title{\\textbf{Advanced Machine Learning -- Homework 3}}\n\\author{Anna Berger, Ashwin Sadananda Bhat (AML 25)}\n\n\\setlength{\\parindent}{0pt}\n\n\\usepackage{booktabs}\n\n\\begin{document}\n\\maketitle\n  \n \\section*{Exercise 1}\n To investigate the effect of dimensionality reduction and PCA we apply the Bayesian approach to the predict whether a person has breast cancer or not. There are two possible classes: benign (labelled as 2) and malicious (labelled as 4). In the assignment we refer to them as B and M respectively.\n \n  From the data we can estimate the prior distributions of these classes by counting the number of B's and M's and normalising them by the total number of people: $$ priors \\approx (0.65, 0.35) $$ \n \n Hereafter we define the error as the ratio of misclassified objects.\n \n \n \\subsection*{Part a}\n \n \\begin{enumerate}\n\t \\item \\textbf{Univariate classifiers}\n\t\\begin{enumerate}\n\t\t\\item \\textbf {x2 (Clump Thickness) only} \\\\\n\t\tIn this case, the feature is the second column of the dataset which represents clump thickness.\n\t\tFirst, we estimate the mean and the variance of this feature for each class:\n\t\t$$\\bar{x}_{2B} \\approx 2.936, \\quad var(x_{2B}) \\approx 2.739, $$\n\t\t$$\\bar{x}_{2M} \\approx 7.133, \\quad var(x_{2M}) \\approx 6.215.$$\n\t\t\n\t\tThen we fit the Gaussians using these estimations and apply the Bayes rule to classify objects.\n\t\tThe statistics for this classifier can be seen from the Table \\ref{tab:results-x2}:\n\t\t\n\t\t\\begin{table}[H]\n\t\t\t\\centering\n\t\t\t\t\\begin{tabular}{lrr}\n\t\t\t\t\t\\toprule\n\t\t\t\t\t & \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\t\t\tError & 0.137 & 0.146 \\\\\t\n\t\t\t\t\tPrecision (malicious) & 0.907 &0.868 \\\\\n\t\t\t\t\tRecall (malicious) & 0.678 & 0.6875 \\\\\n\t\t\t\t\t\\bottomrule\n\t\t\t\t\\end{tabular}\n\t\t\t\\caption{Univariate classifier (feature x2).}\n\t\t\t\\label{tab:results-x2}\n\t\t\\end{table}\n\t\t\n\t\tThe confusion matrices for train and validation sets are shown in the Figures \\ref{fig:conf_train_x2.png} and \\ref{fig:conf_val_x2.png}.\n\t\t\n\t\t\\begin{figure}[H]\\centering\n\t\t\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_x2.png}\n\t\t\t\t\t\\caption{Feature x2, train}\\label{fig:conf_train_x2.png}\n\t\t\t\t\\end{minipage}\n\t\t\t\t\\begin{minipage}{0.49\\textwidth}\n \t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_x2.png}\n\t\t\t\t\\caption{Feature x2, val}\\label{fig:conf_val_x2.png}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\n\t\\item \\textbf {First principal component} \\\\\n\tWe perform SVD on the covariance matrix for the total dataset and obtain eigenvalues (diagonal matrix $s$) and eigenvectors for it (columns of matrix $U$). 
The first column of matrix $U$ is the first principal component, which covers 68.7\\% of the variance of the dataset and looks as follows:\n\t$$ [-0.3059, -0.4085, -0.3859, -0.3280 , -0.2293, -0.4522, -0.2830 , -0.3581, -0.1329]^\\tau $$\n\t\n\tThen, we project all the datapoints on the obtained vector.\n\t\n\tAfter that, we estimate the mean and the variance of the projected dataset:\n\t$$\\bar{x}_{pca1B} \\approx -4.718, \\quad var(x_{pca1B}) \\approx 3.566, $$\n\t$$\\bar{x}_{pca1M} \\approx -17.729, \\quad var(x_{pca1M}) \\approx 19.710.$$\n\t\n\tThen we fit the Gaussians using these estimations and apply the Bayes rule to classify objects.\n\tThe statistics for this classifier can be seen in Table \\ref{tab:results-pca-1}:\n\t\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\begin{tabular}{lrr}\n\t\t\t\\toprule\n\t\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\tError & 0.037 & 0.014 \\\\\t\n\t\t\tPrecision (malicious) & 0.932 & 0.969 \\\\\n\t\t\tRecall (malicious) & 0.965 & 0.989 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\caption{Univariate classifier (first principal component).}\n\t\t\\label{tab:results-pca-1}\n\t\\end{table}\n\t\n\tThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_pca_1.png} and \\ref{fig:conf_val_pca_1.png}.\n\t\n\t\\begin{figure}[H]\\centering\n\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_pca_1.png}\n\t\t\t\\caption{The first principal component, train set}\\label{fig:conf_train_pca_1.png}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_pca_1.png}\n\t\t\\caption{The first principal component, val set}\\label{fig:conf_val_pca_1.png}\n\t\t\\end{minipage}\n\t\\end{figure}\t\t\n\t\t\n\t\\end{enumerate}\n\n\t\\item \\textbf{Bivariate classifiers}\n\t\\begin{enumerate}\n\t\t\\item \\textbf{x2 and x7 (Clump Thickness and Bare Nuclei)} \\\\\n\t\tIn this case, the features are the second and the seventh columns of the dataset, which represent clump thickness and bare nuclei.\n\t\t\n\t\tFirst, we estimate the mean and the covariance of these features for each class:\n\t\t\\[\n\t\t\\bar{x}_{27B} \\approx (2.936,  1.365), \\quad \n\t\t\\Sigma_{27B} \\approx \n\t\t\\begin{bmatrix}\n\t\t2.739 & 0.220   \\\\\n\t\t0.220 & 1.470   \\\\\n\t\t\\end{bmatrix}.\n\t\t\\]\n\t\t\n\t\t\\[\n\t\t\\bar{x}_{27M} \\approx (7.133,  7.706), \\quad \n\t\t\\Sigma_{27M} \\approx \n\t\t\\begin{bmatrix}\n\t\t6.215 & 0.082   \\\\\n\t\t0.082 & 9.082   \\\\\n\t\t\\end{bmatrix}.\n\t\t\\]\n\t\t\n\t\tThen we fit the Gaussians using these estimations and apply the Bayes rule to classify objects.\n\t\tThe statistics for this classifier can be seen in Table \\ref{tab:results-x2-x7}:\n\t\t\n\t\t\\begin{table}[H]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{lrr}\n\t\t\t\t\\toprule\n\t\t\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\t\tError & 0.059 & 0.054 \\\\\t\n\t\t\t\tPrecision (malicious) & 0.916 & 0.918 \\\\\n\t\t\t\tRecall (malicious) & 0.916 & 0.927 \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Bivariate classifier (features x2, x7).}\n\t\t\t\\label{tab:results-x2-x7}\n\t\t\\end{table}\n\t\t\n\t\tThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_x2_x7.png} and 
\\ref{fig:conf_val_x2_x7.png}.\n\t\t\n\t\t\\begin{figure}[H]\\centering\n\t\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_x2_x7.png}\n\t\t\t\t\\caption{Features x2, x7, train}\\label{fig:conf_train_x2_x7.png}\n\t\t\t\\end{minipage}\n\t\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_x2_x7.png}\n\t\t\t\t\\caption{Features x2, x7, val}\\label{fig:conf_val_x2_x7.png}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\n\t\t\n\t\\item \\textbf{First two principal components} \\\\\n\tNow we take the first two columns of the matrix $U$ --- the first two principal components. They cover 76.1\\% of the variance of the dataset.\n\tThe first principal component was already shown in the previous part; the second principal component is given by:\n\t $$ [0.0756,  0.2158,  0.1710, -0.2682,  0.1916, -0.7441,  0.0371,  0.4670,  0.1917]^\\tau$$\n\t\n\t Then, we project all the datapoints on these vectors.\n\t\n\tAfter that, we estimate the mean and the covariance of the projected dataset:\n\t\n\t\\[\n\t\\bar{x}_{pca12B} \\approx (-4.718,  0.659), \\quad \n\t\\Sigma_{pca12B} \\approx \n\t\\begin{bmatrix}\n\t3.567 & -0.129   \\\\\n\t-0.129 & 0.932   \\\\\n\t\\end{bmatrix}.\n\t\\]\n\t\n\t\\[\n\t\\bar{x}_{pca12M} \\approx (-17.729,  0.210), \\quad \n\t\\Sigma_{pca12M} \\approx \n\t\\begin{bmatrix}\n\t19.709 & -3.593  \\\\\n\t-3.593 & 12.923   \\\\\n\t\\end{bmatrix}.\n\t\\]\n\t\n\t\n\tThen we fit the Gaussians using these estimations and apply the Bayes rule to classify objects.\n\n\t\n\t\\begin{table}[H]\n\t\t\\centering\n\t\t\\begin{tabular}{lrr}\n\t\t\t\\toprule\n\t\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\tError & 0.037 & 0.021 \\\\\t\n\t\t\tPrecision (malicious) & 0.927 & 0.959 \\\\\n\t\t\tRecall (malicious) & 0.972 & 0.979 \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\caption{Bivariate classifier (first two principal components).}\n\t\t\\label{tab:results-pca-12}\n\t\\end{table}\n\t\n\tThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_pca_12.png} and \\ref{fig:conf_val_pca_12.png}.\n\t\n\t\\begin{figure}[H]\\centering\n\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_pca_12.png}\n\t\t\t\\caption{First two principal components, train set}\\label{fig:conf_train_pca_12.png}\n\t\t\\end{minipage}\n\t\t\\begin{minipage}{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_pca_12.png}\n\t\t\t\\caption{First two principal components, val set}\\label{fig:conf_val_pca_12.png}\n\t\t\\end{minipage}\n\t\\end{figure}\t\n\t\n\t\t\n\t\\end{enumerate}\n\t\n\t\\item \\textbf{Multivariate classifiers}\n\t\\begin{enumerate}\n\t\t\\item \\textbf{The principal components which explain 80\\% of the variance} \\\\\n\t\tWe found that the first three principal components are needed to cover 80\\% of the variance of the whole dataset (82.3\\%, to be precise).\n\t\tThe third principal component is given by:\n\t\t$$ [0.8107, -0.0523,  0.0440, -0.5190, -0.1229, 0.1079, -0.0952, -0.1489, -0.1033]^\\tau $$\n\t\t\n\t\tWe repeat the described procedure: project all the data points on the first three eigenvectors and estimate the mean and the covariance of the projected dataset for both classes:\n\t\t\n\t\t\\[\n\t\t\\bar{x}_{pca123B} \\approx (-4.718, 0.659,  1.059), \\quad \n\t\t\\Sigma_{pca123B} \\approx \n\t\t\\begin{bmatrix}\n\t\t3.567 & -0.129 & -0.635   \\\\\n\t\t-0.129 & 0.932 & 0.312  \\\\\n\t\t-0.635 & 0.312 & 
1.691 \\\\\n\t\t\\end{bmatrix}.\n\t\t\\]\n\t\t\n\t\t\\[\n\t\t\\bar{x}_{pca123M} \\approx (-17.729,  0.210, 1.437), \\quad \n\t\t\\Sigma_{pca123M} \\approx \n\t\t\\begin{bmatrix}\n\t\t19.709 & -3.593 & 4.411 \\\\\n\t\t-3.593 & 12.923 & -0.471   \\\\\n\t\t4.411 & -0.471 & 9.053 \\\\\n\t\t\\end{bmatrix}.\n\t\t\\]\n\t\t\n\t\tThe statistics for this classifier can be seen in Table \\ref{tab:results-pca-123}:\n\t\t\n\t\t\\begin{table}[H]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{lrr}\n\t\t\t\t\\toprule\n\t\t\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\t\tError & 0.037 & 0.022 \\\\\t\n\t\t\t\tPrecision (malicious) & 0.921 & 0.959 \\\\\n\t\t\t\tRecall (malicious) & 0.979 & 0.979 \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular}\n\t\t\t\\caption{First three principal components.}\n\t\t\t\\label{tab:results-pca-123}\n\t\t\\end{table}\n\t\t\n\t\tThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_pca_123.png} and \\ref{fig:conf_val_pca_123.png}.\n\t\t\n\t\t\\begin{figure}[H]\\centering\n\t\t\t\\begin{minipage}{0.49\\linewidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_pca_123.png}\n\t\t\t\t\\caption{First three principal components, train set}\\label{fig:conf_train_pca_123.png}\n\t\t\t\\end{minipage}\n\t\t\t\\begin{minipage}{0.49\\linewidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_pca_123.png}\n\t\t\t\t\\caption{First three principal components, val set}\\label{fig:conf_val_pca_123.png}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\t\n\t\t\n\t\t\\item \\textbf{All inputs} \\\\\n\t\tIn this case the features are all the columns from the second to the tenth.\n\t\tWe again estimate the mean and the covariance of the dataset (the number of dimensions is equal to 9, which is why we decided not to overload the report with the $9 \\times 9$ matrices), then fit the Gaussians for both classes and apply the Bayes rule to classify the objects.\n\t\t\n\t\tThe statistics for this classifier can be seen in Table \\ref{tab:results-all}:\n\t\t\n\t\t\\begin{table}[H]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{lrr}\n\t\t\t\t\\toprule\n\t\t\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\t\t\tError & 0.048 & 0.033 \\\\\t\n\t\t\t\tPrecision (malicious) & 0.897 & 0.931 \\\\\n\t\t\t\tRecall (malicious) & 0.972 & 0.979 \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Multivariate classifier (all inputs).}\n\t\t\t\\label{tab:results-all}\n\t\t\\end{table}\n\t\t\n\t\tThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_all.png} and \\ref{fig:conf_val_all.png}.\n\t\t\n\t\t\\begin{figure}[H]\\centering\n\t\t\t\\begin{minipage}{0.49\\linewidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_all.png}\n\t\t\t\t\\caption{All inputs, train set}\\label{fig:conf_train_all.png}\n\t\t\t\\end{minipage}\n\t\t\t\\begin{minipage}{0.49\\linewidth}\n\t\t\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_all.png}\n\t\t\t\t\\caption{All inputs, val set}\\label{fig:conf_val_all.png}\n\t\t\t\\end{minipage}\n\t\t\\end{figure}\t\n\t\t\n\t\\end{enumerate}\n\t\n\t\\end{enumerate}
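\n\nThe following sketch summarises the pipeline used throughout Part a (our own reconstruction for illustration, not the exact homework code; \\texttt{X} and \\texttt{y} are assumed to be NumPy arrays holding the 9 features and the 2/4 class labels):\n\n\\begin{lstlisting}[language=Python]\n# Sketch of the Gaussian Bayes pipeline described above (illustrative\n# reconstruction). X: array of shape (n_samples, 9); y: labels 2 (B) / 4 (M).\nimport numpy as np\nfrom scipy.stats import multivariate_normal\n\ndef fit_gaussian_bayes(X, y):\n    # Estimate the prior, mean and covariance of each class.\n    params = {}\n    for label in np.unique(y):\n        Xc = X[y == label]\n        params[label] = (len(Xc) / len(X),\n                         Xc.mean(axis=0),\n                         np.cov(Xc, rowvar=False))\n    return params\n\ndef predict(X, params):\n    # Bayes rule: pick the class maximising prior * class-conditional density.\n    labels = sorted(params)\n    scores = np.column_stack(\n        [p * multivariate_normal.pdf(X, mean=m, cov=c, allow_singular=True)\n         for p, m, c in (params[l] for l in labels)])\n    return np.array(labels)[scores.argmax(axis=1)]\n\ndef pca_project(X, k):\n    # SVD of the covariance matrix; project onto the first k eigenvectors.\n    U, s, _ = np.linalg.svd(np.cov(X, rowvar=False))\n    return X @ U[:, :k]\n\\end{lstlisting}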
\n\n\\subsection*{Part b}\nIn this exercise we explored several versions of the classifier. The univariate classifier using one raw feature from the data showed the worst performance, as it didn't contain enough information for good prediction. Moreover, it's always hard to choose the most representative feature manually. The bivariate classifier using two features performed much better, but it wasn't in the top according to validation error (and the problem of choosing the most representative pair of features is even harder).\n\nWe obtained the lowest validation error with the approach using the first principal component. We think it offers a reasonable number of parameters to estimate: enough to capture the patterns in the data and small enough to avoid overfitting to the training data. We got a slightly bigger validation error with the first two and the first three principal components; hence, we don't need to add more components, as they don't improve the results.\n\nIn the task of cancer detection, recall for the malicious class is extremely important. The best approach with the first principal component showed the best recall for the malicious class as well as the best precision for this class, again suggesting that the first principal component covers the majority of the important information in this dataset.\n\nIn the last experiment with all inputs, we saw that even using all raw features from the dataset, we can't achieve the same performance as when applying PCA to extract information from the data.\n\n\\section*{Exercise 2}\n\\subsection*{Part a}\n\nIn this exercise we apply the logistic classifier (the implementation from the scikit-learn python library, namely LogisticRegression) to this problem. The regularization parameter (for l2 regularization) is set to 1.  Our training set consists of all provided features (9 columns). The adaptation of inputs (concatenating a 1 as the first coordinate of each input) and outputs (transformation from probabilities to the labels 0 and 1) is done inside the algorithm provided by scikit-learn, so we don't have to do it manually.
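\n \nA minimal sketch of this setup (assuming the data is already split into \\texttt{X\\_train}, \\texttt{y\\_train}, \\texttt{X\\_val} and \\texttt{y\\_val}; note that scikit-learn exposes the regularization strength through its inverse \\texttt{C}, so \\texttt{C=1.0} corresponds to the value 1 used here):\n\n\\begin{lstlisting}[language=Python]\nfrom sklearn.linear_model import LogisticRegression\n\n# l2-regularized logistic regression; C is the inverse regularization strength.\nclf = LogisticRegression(penalty='l2', C=1.0)\nclf.fit(X_train, y_train)  # the intercept term is handled internally\n\n# Error as defined above: the ratio of misclassified objects.\nval_error = 1.0 - clf.score(X_val, y_val)\n\\end{lstlisting}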
\n \nThe results for the logistic classifier can be seen in Table \\ref{tab:results-log}:\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{lrr}\n\t\t\\toprule\n\t\t& \\textbf{Train} & \\textbf{Validation}  \\\\ \\midrule\n\t\tError & 0.034 & 0.021 \\\\\t\n\t\tPrecision (malicious) & 0.957 & 0.978 \\\\\n\t\tRecall (malicious) & 0.944 & 0.958 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption{Logistic classifier (all inputs).}\n\t\\label{tab:results-log}\n\\end{table}\n\nThe confusion matrices for train and validation sets are shown in Figures \\ref{fig:conf_train_log} and \\ref{fig:conf_val_log}.\n\n\\begin{figure}[H]\\centering\n\t\\begin{minipage}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/conf_train_log.png}\n\t\t\\caption{Logistic classifier, train set}\\label{fig:conf_train_log}\n\t\\end{minipage}\n\t\\begin{minipage}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{figures/conf_val_log.png}\n\t\t\\caption{Logistic classifier, val set}\\label{fig:conf_val_log}\n\t\\end{minipage}\n\\end{figure}\t\n\nThe logistic classifier achieves the same validation error (0.021) as the Bayesian approach with the first two principal components from the previous exercise. However, the recall on the validation set is slightly higher for the Bayesian approach with the first two principal components, and we suppose that recall for the malicious class is essential for the problem of cancer detection.\n\n\\subsection*{Part b}\nIn Exercise 3 we used a generative approach to classify the objects: we explicitly modelled the distributions of classes $p(x| B)$ and $p(x | M)$ to determine class-conditional densities for each class individually. To get posterior probabilities, we applied Bayes theorem. Having found posterior probabilities, we used decision theory to determine class membership for each new input. It is a rather demanding approach in which one needs to estimate a lot of parameters ($n + \\frac{n(n + 1)}{2}$ per class in this particular case, where $n$ is the number of features; for $n = 9$ this amounts to $9 + 45 = 54$ parameters per class); therefore, we need a large training set. However, it can be useful when we need to sample new datapoints from the distribution.\n\nIn Exercise 4 we used a discriminative approach to modelling: we first determined the posterior class probabilities $p(B | x)$ and $p(M | x)$ and then used standard decision theory to assign each new input its category. The advantage of this approach is that it is less computationally expensive and requires fewer parameters to estimate, which may lead to a better predictive performance (especially when data is scarce). On the other hand, by using this approach, we lose some information about the structure of the class-conditional densities.\n\n \n\\end{document}\n\n", "meta": {"hexsha": "901461ddc3cc22e7ce7c9bd2c187e4187e0f5cda", "size": 16872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw/hw-3/hw_3.tex", "max_stars_repo_name": "melaanya/twente-aml", "max_stars_repo_head_hexsha": "3c483ca4c570c67370759bddc8e372e2c100e257", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw/hw-3/hw_3.tex", "max_issues_repo_name": "melaanya/twente-aml", "max_issues_repo_head_hexsha": "3c483ca4c570c67370759bddc8e372e2c100e257", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw/hw-3/hw_3.tex", "max_forks_repo_name": "melaanya/twente-aml", "max_forks_repo_head_hexsha": "3c483ca4c570c67370759bddc8e372e2c100e257", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9375, "max_line_length": 702, "alphanum_fraction": 0.7230915126, "num_tokens": 5242, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8244619263765707, "lm_q1q2_score": 0.5655392196592688}}
{"text": "\\section{Calculus}\n\n\\vspace{0.5cm}\\subsection{Simplifying Algebraic Terms}\n\n % \\Arrow[xoffset=-1cm]{Text}\n\n % Exercise 1\n\n \\vspace{0.5cm}Exercise 1: \n\n \\begin{equation}\n     \\setlength{\\jot}{10pt}\n     \\begin{WithArrows}\n        [12x+5x \\cdot 2-(10x-8x)]+18x \\div 3 & \\Arrow{Apply parentheses rule \\bref{eq-22}} \\\\\n        =[12x+5x \\cdot 2-10x+8x]+18x \\div 3 & \\Arrow{Substract $-10x+8$} \\\\\n        = [12x+5x \\cdot 2-2x]+18x \\div 3 & \\Arrow{Multiply $5x \\cdot 2$} \\\\\n        = 20x+18x \\div 3 \\\\\n        = 20x+6x \\\\\n        = 26x\n     \\end{WithArrows}\n     \\nonumber\n \\end{equation}\n\n \\flushleft Exercise 2:\n\n \\begin{equation}\n     \\setlength{\\jot}{10pt}\n     \\begin{WithArrows}\n         x-((x-4)-(14+2x))+1 & \\Arrow{Apply parentheses rule \\bref{eq-24}} \\\\\n         \\Leftrightarrow x-(x-4-14-2x)+1 & \\Arrow{\\bref{eq-22}} \\\\\n         \\Leftrightarrow x-x+4+14+2x+1 \\\\ \n         \\Leftrightarrow 19+2x & \\Arrow{\\bref{eq-29}} \\\\\n         \\Leftrightarrow 2x+19\n     \\end{WithArrows}\n     \\nonumber\n \\end{equation}\n\n \\newpage \n\n\\vspace{0.5cm}\\subsection{Square Root Equations}\n\n% Exercise 1\n\n\\vspace{0.5cm} Exercise 1: \n\n\\begin{equation}\n    \\setlength{\\jot}{10pt}\n    \\begin{WithArrows}\n        2x + \\sqrt{x^2 + 9} = 4x +3 & \\Arrow{$-2x$} \\\\\n        \\sqrt{x^2 + 9} = 2x+3 & \\Arrow{$(\\;)^2$} \\\\\n        x^2 + 9 = (2x+3)^2 & \\Arrow{Apply first binomial rule \\bref{eq-1}} \\\\\n        x^2 + 9 = 4x^2 + 2x \\cdot 2 \\cdot 3 + 9 & \\Arrow{Summarize} \\\\ \n        x^2 + 9 = 4x^2 + 12x + 9 & \\Arrow{$-x^2-9$} \\\\\n        0 = 3x^2 + 12x \\\\\n        0 = \\underbrace{x}_\\text{$x_1$}\\underbrace{(3x+12)}_\\text{$x_2$}\n    \\end{WithArrows}\n    \\nonumber\n\\end{equation}\n\n\\begin{equation}\n    \\Rightarrow x_1 = 0\n    \\quad \\mid \\quad \\text{Verify for $x_1$:} \\quad 2 \\cdot 0 + \\sqrt{0^2 + 9} = 3\n    \\nonumber\n\\end{equation}\n\n\\begin{equation}\n    \\Rightarrow x_2 = 3x+12 = 0 \\quad \\Leftrightarrow \\quad x_2 = -4\n    \\quad \\mid \\quad \\text{Verify for $x_2$:} \\quad 4 \\cdot (-4) + 3 = -13\n    \\nonumber\n\\end{equation}\n\n\\begin{equation}\n    \\Rightarrow 3 = -13\n    \\nonumber\n\\end{equation}\n\n\\begin{equation}\n    Q.E.D\n    \\nonumber\n\\end{equation}\n\n% Exercise 2\n\nExercise 2: \n\n\\begin{equation}\n    \\setlength{\\jot}{5pt}\n    \\begin{WithArrows}\n        \\sqrt{x+2} + \\sqrt{x-1} = 3 & \\Arrow{$(\\;)^2$} \\\\\n        (\\sqrt{x+2} + \\sqrt{x-1})^2 = 3^2 & \\Arrow{Apply first binomial rule \\bref{eq-1}} \\\\\n        \\sqrt{x+2}^2 + 2\\cdot\\sqrt{x+2}\\cdot\\sqrt{x-1} + \\sqrt{x-1}^2 = 9 & \\Arrow{Square, and apply rule \\bref{eq-35}} \\\\\n        x+2 + 2\\sqrt{(x+2)\\cdot(x-1)} + x-1 = 9 & \\Arrow{$x+2 + x-1$} \\\\ \n        2x+1 + 2\\sqrt{(x+2)\\cdot(x-1)} = 9 & \\Arrow{$-2x-1$} \\\\\n        2\\sqrt{(x+2)\\cdot(x-1)} = -2x+8 & \\Arrow{$\\div 2$} \\\\\n        \\sqrt{(x+2)\\cdot(x-1)} = -x+4 & \\Arrow{Apply third binomial rule \\bref{eq-3}} \\\\\n        \\sqrt{x^2-1x+2x-2} = -x+4 & \\Arrow{$(\\;)^2$} \\\\\n        x^2-1x+2x-2 = (-x+4)^2 & \\Arrow{No binomial rule applies for $(-x+4)^2$} \\\\\n        x^2+1x-2 = x^2-2\\cdot4x+16 & \\Arrow{$-x^2-1x+2$} \\\\\n        0 = -9x+18 & \\Arrow{$+9x$} \\\\\n        9x = 18 & \\Arrow{$\\div9$} \\\\\n        x = 2\n    \\end{WithArrows}\n    \\nonumber\n\\end{equation}\n\n% Exercise 3\n\nExercise 3:\n\n\\begin{align*}\n    \\sqrt{-3x-1-\\sqrt{4x+5}}&=1 &&\\text{$(\\;)^2$}&\\\\[1.25ex]\n    
-3x-1-\\sqrt{4x+5}&=1 &&\\text{$+3x+1$}&\\\\[1.25ex]\n    -\\sqrt{4x+5}&=3x+2 &&\\text{$(\\;)^2$}&\\\\[1.25ex]\n    4x+5 &= (3x+2)^2 &&\\text{Apply first binomial rule \\bref{eq-1}}&\\\\[1.25ex]\n    4x+5 &= 9x^2 + 2 \\cdot 3x \\cdot 2 + 4&\\\\[1.25ex]\n    4x+5 &= 9x^2 + 12x + 4 &&\\text{$-4x-5$}&\\\\[1.25ex]\n    0 &= 9x^2 + 8x - 1 &&\\text{$\\div 9$}&\\\\[1.25ex]\n    0 &= x^2 + \\frac{8}{9}\\cdot x - \\frac{1}{9} &&\\text{Apply \\textbf{PQ} formula}&\\\\[1.25ex]\n    x_{1,2}&=-{\\frac {p}{2}}\\pm {\\sqrt {\\left({\\frac {p}{2}}\\right)^{2}-q}} = -\\frac{4}{9} \\pm \\sqrt{\\frac{16}{81}+\\frac{9}{81}} = -\\frac{4}{9} \\pm \\frac{5}{9} &&\\text{$p = \\frac{8}{9}$, $q = -\\frac{1}{9}$}&\\\\[1.25ex]\n    \\Rightarrow x_1 &= \\frac{1}{9}, \\quad x_2 = -1 &&\\text{Verify both in the original equation}&\\\\[1.25ex]\n    x_1 &\\Leftrightarrow \\sqrt{-\\frac{11}{3}} = 1 \\quad \\text{(impossible)} &\\\\[1.25ex]\n    x_2 &\\Leftrightarrow 1 = 1 &\\\\[2ex]\n    \\Rightarrow L &= \\{-1\\} &\n\\end{align*}\n\n% Exercise 4\n\nExercise 4:\n\n\\begin{align*}\n    x = \\sqrt{x+20} &&\\text{$(\\;)^2$}&\\\\[1.25ex]\n    x^2 = x+20 &&\\text{$-x-20$}&\\\\[1.25ex]\n    \\Leftrightarrow x^2-x-20 = 0&\\\\[1.25ex]\n    \\Leftrightarrow (x-5)(x+4) = 0&\\\\[1.25ex]\n    \\Rightarrow x = 5 \\quad \\lor \\quad x = -4&\\\\[1.25ex]\n    \\Rightarrow \\sqrt{5+20} = 5 \\quad \\mid \\quad \\sqrt{-4+20} = \\sqrt{16} = 4 \\neq -4&\\\\[1.25ex]\n    \\Rightarrow L = \\{5\\} &\n\\end{align*}\n\n% Changing the Subject of an Equation \n\n\\newpage \n\n\\vspace{0.5cm}\\subsection{Changing the Subject of an Equation}\n\n% Exercise 1\n\n\\vspace{0.5cm} Exercise 1: \n\n\\begin{align*}\n    \\frac{6x+7}{9} - \\frac{10x+7}{18} = \\frac{9x+5}{14} - \\frac{9x-16}{20} &&\\text{Find the lowest common denominator.}&\\\\[1.25ex]\n    \\Leftrightarrow \\frac{2(6x+7)}{2 \\cdot 9} - \\frac{10x+7}{18} = \\frac{10(9x+5)}{14 \\cdot 10} - \\frac{7(9x-16)}{20 \\cdot 7} &\\\\[1.25ex]\n    \\Leftrightarrow  \\frac{12x+14}{18} - \\frac{10x+7}{18} = \\frac{90x+50}{140} - \\frac{63x-112}{140} &\\\\[1.25ex]\n    \\Leftrightarrow  \\frac{12x+14-(10x+7)}{18} = \\frac{90x+50-(63x-112)}{140} &\\\\[1.25ex]\n    \\Leftrightarrow  \\frac{12x+14-10x-7}{18} = \\frac{90x+50-63x+112}{140} &\\\\[1.25ex]\n    \\Leftrightarrow \\frac{2x+7}{18} = \\frac{27x+162}{140} &&\\text{$\\cdot 18$}&\\\\[1.25ex]\n    \\Leftrightarrow  2x+7 = \\frac{18(27x+162)}{140} &&\\text{Reduce the fraction with $2$.}&\\\\[1.25ex]\n    \\Leftrightarrow 2x+7 = \\frac{9(27x+162)}{70} &&\\text{$\\cdot 70$}&\\\\[1.25ex]\n    \\Leftrightarrow 140x+490 = 9(27x+162)&\\\\[1.25ex]\n    \\Leftrightarrow 140x+490 = 243x+1458 &&\\text{$-140x$}&\\\\[1.25ex]\n    \\Leftrightarrow 490 = 103x + 1458 &&\\text{$-1458$}&\\\\[1.25ex]   \n    \\Leftrightarrow -968 = 103x &&\\text{Apply \\bref{eq-29}}&\\\\[1.25ex]\n    \\Leftrightarrow 103x = -968 &&\\text{$\\div 103$}&\\\\[1.25ex]\n    \\Leftrightarrow x = -\\frac{968}{103}&\n\\end{align*}\n\n% Exercise 2\n\n\\newpage Exercise 2: \n\n\\begin{align*}\n    \\frac{3x-9}{6x-1} = \\frac{4x-16}{8x-5} &&\\text{$\\cdot(8x-5) \\cdot(6x-1)$}&\\\\[1.25ex]\n    \\Leftrightarrow (3x-9)\\cdot(8x-5)=(4x-16)\\cdot(6x-1)&\\\\[1.25ex]\n    \\Leftrightarrow 24x^2-15x-72x+45 = 24x^2-4x-96x+16 &&\\text{$-24x^2$}&\\\\[1.25ex]\n    \\Leftrightarrow -15x-72x+45=-4x-96x+16 &&\\text{$-45$}&\\\\[1.25ex]\n    \\Leftrightarrow -15x-72x = -4x-96x-29&\\\\[1.25ex]\n    \\Leftrightarrow -87x = -100x-29&&\\text{$+100x$}&\\\\[1.25ex]\n    \\Leftrightarrow 13x = -29&&\\text{$\\div13$}&\\\\[1.25ex]\n    \\Leftrightarrow x = -\\frac{29}{13}&\n\\end{align*}\n\n% Exercise 3\n\n\\begin{align*}\n    x^2+(x-2)^2 = 10 &\\\\[1.25ex]\n    \\Leftrightarrow x^2 + (x-2)(x-2)=10&\\\\[1.25ex]\n    \\Leftrightarrow x^2+x^2-2x-2x+4=10&\\\\[1.25ex]\n    \\Leftrightarrow 2x^2-4x+4 = 10 &&\\text{$-10$}&\\\\[1.25ex]\n    
\\Leftrightarrow 2x^2-4x-6 = 0 &&\\text{$\\div 2$}&\\\\[1.25ex]\n    \\Leftrightarrow x^2-2x-3 = 0 &&\\text{Apply \\textbf{PQ} formula.}&\\\\[1.25ex]\n    \\Leftrightarrow x_1 = 3 \\quad \\mid \\quad x_2 = -1 &\\\\[1.25ex]\n    \\Rightarrow L = \\{-1, 3\\} \\quad \\text{(both values satisfy the original equation)} &\n\\end{align*}\n\n\n\\newpage \n\n% Exponential Equations\n\n\\vspace{0.5cm}\\subsection{Exponential Equations}\n\n\\vspace{0.5cm}Exercise 1:\n\n\\begin{align*}\n    \\frac{(2^{-4})^{-5}\\cdot2^{17}}{(2^{-3})^{-6}\\cdot(2^{-4})^3} &&\\text{Apply \\bref{eq-50}.} &\\\\[1.25ex]\n    = \\frac{2^{(-4)\\cdot(-5)}\\cdot 2^{17}} {2^{(-3)\\cdot(-6)}\\cdot2^{(-4)\\cdot 3}}&\\\\[1.25ex]\n    = \\frac{2^{20} \\cdot 2^{17}}{2^{18} \\cdot 2^{-12}} &&\\text{Apply \\bref{eq-48} and \\bref{eq-49}}&\\\\[1.25ex]\n    = 2^{20 + 17 - 18 -(-12)} &\\\\[1.25ex]\n    = 2^{20 + 17 - 18 + 12} &\\\\[1.25ex]\n    = 2^{31} &\n\\end{align*}\n\nExercise 2:\n\n\\begin{align*}\n    \\frac{3^7 \\cdot (3^{-2})^3}{3^{-4} \\cdot 3^7} \\div \\frac{(3^4)^{-3}}{(3^{-2})^{-6}} &&\\text{Apply rule \\bref{eq-50}} &\\\\[1.25ex]\n    = \\frac{3^7 \\cdot 3^{(-2) \\cdot 3}}{3^{-4} \\cdot 3^7} \\div \\frac{3^{4 \\cdot {(-3)}}}{3^{(-2) \\cdot (-6)}} &&\\text{Apply rule \\bref{eq-14} and \\bref{eq-48}} &\\\\[1.25ex]\n    = \\frac{3^7 \\cdot 3^{-6}}{3^{-4+7}} \\cdot \\frac{3^{12}}{3^{-12}} &\\\\[1.25ex]\n    = \\frac{3^7 \\cdot 3^{-6} \\cdot 3^{12}}{3^3 \\cdot 3^{-12}}&&\\text{Apply \\bref{eq-48} and \\bref{eq-49}}&\\\\[1.25ex]\n    = 3^{7-6+12-3-(-12)}&\\\\[1.25ex]\n    = 3^{22}&\n\\end{align*}\n\nExercise 3:\n\n\\begin{align*}\n    \\frac{12x^{-2} y^3}{8z^2} \\cdot \\frac{4y^{-2}z}{3x^{-5}} \\div \\frac{6z^{-3}}{2y^{-4}z} &&\\text{Apply rule \\bref{eq-14}.}&\\\\[1.25ex]\n    =\\frac{12x^{-2} y^3}{8z^2} \\cdot \\frac{4y^{-2}z}{3x^{-5}} \\cdot \\frac{2y^{-4}z}{6z^{-3}}&&\\text{Reduce $\\frac{12\\cdot4\\cdot2}{8\\cdot3\\cdot6}$ and apply \\bref{eq-48} and \\bref{eq-49}.}&\\\\[1.25ex]\n    =\\frac{2}{3}x^{-2-(-5)}\\cdot y^{3-2-4}\\cdot z^{1+1-2-(-3)}&\\\\[1.25ex]\n    =\\frac{2}{3}x^3 y^{-3} z^3 &&\\text{Apply rule \\bref{eq-55}.}&\\\\[1.25ex]\n    =\\frac{2}{3}\\frac{x^3 z^3}{y^3}&\n\\end{align*}\n\nExercise 4:\n\n\\begin{align*}\n    \\frac{2^4 \\cdot x^5 \\cdot y^7 \\cdot z^8}{8 \\cdot x^2 \\cdot y^5 \\cdot z^{10}} \\div \\frac{2 \\cdot x^2 \\cdot y^5 \\cdot z^8}{16 \\cdot x^4 \\cdot y^3 \\cdot z^5}&&\\text{Apply rule \\bref{eq-14}.}&\\\\[1.25ex]\n    = \\frac{2^4 \\cdot x^5 \\cdot y^7 \\cdot z^8}{8 \\cdot x^2 \\cdot y^5 \\cdot z^{10}} \\cdot \\frac{16 \\cdot x^4 \\cdot y^3 \\cdot z^5}{2 \\cdot x^2 \\cdot y^5 \\cdot z^8}&&\\text{Reduce $\\frac{2^4}{8} \\cdot \\frac{16}{2} = 2 \\cdot 8 = 16$ and apply \\bref{eq-48}.}&\\\\[1.25ex]\n    = 16 \\cdot \\frac{x^{5+4}\\cdot y^{7+3}\\cdot z^{8+5}}{x^{2+2}\\cdot y^{5+5}\\cdot z^{10+8}}&\\\\[1.25ex]\n    = 16 \\cdot \\frac{x^9\\cdot y^{10}\\cdot z^{13}}{x^4 \\cdot y^{10}\\cdot z^{18}} &&\\text{Apply rule \\bref{eq-49}.}&\\\\[1.25ex]\n    = 16 \\cdot x^5 \\cdot z^{-5} &&\\text{Apply rule \\bref{eq-55}.}&\\\\[1.25ex]\n    = \\frac{16 \\cdot x^5}{z^5}&\n\\end{align*}\n\nExercise 5:\n\n\\begin{align*}\n    \\frac{4x^{2-m}y^{3\\cdot m}}{7z^{m-n}} \\div \\frac{5z^{m+n}x^{3-m}}{14y^{1-2m}}&&\\text{Apply rule \\bref{eq-14}.}&\\\\[1.25ex]\n    = \\frac{4x^{2-m}y^{3\\cdot m}}{7z^{m-n}} \\cdot \\frac{14y^{1-2m}}{5z^{m+n}x^{3-m}}&&\\text{Reduce $\\frac{4}{7} \\cdot \\frac{14}{5}$ and apply \\bref{eq-48} and \\bref{eq-49}.}&\\\\[1.25ex]\n    = \\frac{8}{5}\\cdot \\frac{x^{2-m-(3-m)}\\cdot y^{3m+1-2m}}{z^{m-n+m+n}}&\\\\[1.25ex]\n    = \\frac{8}{5}\\cdot 
\\frac{x^{-1}\\cdot y^{m+1}}{z^{2m}}&&\\text{Apply rule \\bref{eq-55}.}&\\\\[1.25ex]\n    = \\frac{8}{5} \\frac{y^{m+1}}{x\\cdot z^{2m}}&\n\\end{align*}\n\n\\newpage\n\n% Radical Expressions \n\n\\vspace{0.5cm}\\subsection{Simplifying Radical Expressions}\n\n\\vspace{0.5cm}Exercise 1:\n\n\\begin{align*}\n    a^{\\frac{2}{5}} \\cdot \\sqrt[5]{a^3} &&\\text{Apply rule \\bref{eq-46}.}&\\\\[1.25ex]\n    = a^{\\frac{2}{5}} \\cdot (a^3)^{\\frac{1}{5}} &&\\text{Apply rule \\bref{eq-50}.}&\\\\[1.25ex]\n    = a^{\\frac{2}{5}} \\cdot a^{\\frac{3}{5}} &&\\text{Apply rule \\bref{eq-48}.}&\\\\[1.25ex]\n    = a^{\\frac{2}{5} + \\frac{3}{5}}&\\\\[1.25ex]\n    = a^1 &&\\text{Apply rule \\bref{eq-54}.}&\\\\[1.25ex]\n    = a &\n\\end{align*}\n\nExercise 2:\n\n\\begin{align*}\n    &\\sqrt{a \\sqrt[3]{a^2}} \\quad \\div \\quad \\left(a\\sqrt{a^{-3}\\sqrt{a^{-1}}}\\right)&&\\text{Apply rule \\bref{eq-47}.}&\\\\[1.25ex]\n    &= \\sqrt{a^1 \\cdot a^{\\frac{2}{3}}} \\quad \\div \\quad \\left(a^1 \\cdot \\sqrt{a^{-3} \\cdot a^{-\\frac{1}{2}}}\\right) &&\\text{Apply rule \\bref{eq-48} and \\bref{eq-62}.}&\\\\[1.25ex]\n    &= \\left(a^{1+\\frac{2}{3}}\\right)^\\frac{1}{2} \\quad \\div \\quad \\left(a^1 \\cdot \\left(a^{-3-\\frac{1}{2}}\\right)^\\frac{1}{2}\\right)&&\\text{Apply rule \\bref{eq-50}.}&\\\\[1.25ex]\n    &= a^{\\frac{5}{3}\\cdot \\frac{1}{2}} \\quad \\div \\quad \\left(a^1 \\cdot a^{-\\frac{7}{2}\\cdot \\frac{1}{2}}\\right)&&\\text{Simplify and apply rule \\bref{eq-48}.}&\\\\[1.25ex]\n    &= a^{\\frac{5}{6}} \\div a^{1-\\frac{7}{4}}&\\\\[1.25ex]\n    &= a^{\\frac{5}{6}} \\div a^{-\\frac{3}{4}}&&\\text{Apply rule \\bref{eq-49}.}&\\\\[1.25ex]\n    &= a^{\\frac{5}{6}-(-\\frac{3}{4})}&\\\\[1.25ex]\n    &= a^{\\frac{19}{12}}\n\\end{align*}\n\nExercise 3: \n\\begin{align*}\n    \\sqrt[3]{(a^2)^{-5} \\cdot \\sqrt[4]{a^{16}}} &&\\text{Apply rule \\bref{eq-62}, \\bref{eq-50}, and \\bref{eq-47}.}&\\\\[1.25ex]\n    = \\left(a^{-10} \\cdot a^{\\frac{16}{4}}\\right)^{\\frac{1}{3}}&\\\\[1.25ex]\n    = \\left(a^{-10} \\cdot a^4\\right)^{\\frac{1}{3}}&&\\text{Apply rule \\bref{eq-48}.}&\\\\[1.25ex]\n    = \\left(a^{-10+4}\\right)^{\\frac{1}{3}}&\\\\[1.25ex]\n    = \\left(a^{-6}\\right)^{\\frac{1}{3}}&&\\text{Apply rule \\bref{eq-50} and \\bref{eq-55}.}&\\\\[1.25ex]\n    = a^{-2}\n    = \\frac{1}{a^2}\n\\end{align*}\n\nExercise 4:\n\\begin{align*}\n    &\\frac{1}{\\sqrt[3]{a\\sqrt[5]{a^{-20}}}} &&\\text{Apply rule \\bref{eq-62} and \\bref{eq-47}.}&\\\\[1.25ex]\n    &= \\frac{1}{\\left(a^1 \\cdot a^{\\frac{-20}{5}}\\right)^{\\frac{1}{3}}}&&\\text{$\\frac{-20}{5}$ equals $-4$.}&\\\\[1.25ex]\n    &= \\frac{1}{\\left(a^1 \\cdot a^{-4}\\right)^{\\frac{1}{3}}}&&\\text{Apply rule \\bref{eq-48}.}&\\\\[1.25ex]\n    &= \\frac{1}{\\left(a^{1-4}\\right)^{\\frac{1}{3}}}\n    = \\frac{1}{(a^{-3})^{\\frac{1}{3}}} &&\\text{Apply rule \\bref{eq-50}.}&\\\\[1.25ex]\n    &= \\frac{1}{a^{-1}} = a^1 = a\n\\end{align*}\n\nExercise 5:\n\\begin{align*}\n    &\\sqrt[3]{a\\sqrt{a}} \\div \\sqrt{a^{-3}\\sqrt[4]{a^6}} &&\\text{Apply rule \\bref{eq-62} and \\bref{eq-47}.}&\\\\[1.25ex]\n    &= \\left(a^1 \\cdot a^{\\frac{1}{2}}\\right)^{\\frac{1}{3}} \\div \\left(a^{-3} \\cdot a^{\\frac{6}{4}}\\right)^{\\frac{1}{2}}&&\\text{Apply rule \\bref{eq-50}.}&\\\\[1.25ex]\n    &= \\left(a^{\\frac{1}{3}} \\cdot a^{\\frac{1}{6}}\\right) \\div \\left(a^{-\\frac{3}{2}} \\cdot a^{\\frac{3}{4}}\\right)&&\\text{Apply rule \\bref{eq-48}.}&\\\\[1.25ex]\n    &= \\left(a^{\\frac{1}{3}+\\frac{1}{6}}\\right) \\div \\left(a^{\\frac{-3}{2}+\\frac{3}{4}}\\right) = a^{\\frac{1}{2}} \\div 
a^{\\frac{-3}{4}}&&\\text{Apply rule \\bref{eq-49}.}&\\\\[1.25ex]\n    &= a^{\\frac{1}{2}-\\left(\\frac{-3}{4}\\right)} = a^{\\frac{1}{2} + \\frac{3}{4}}&\\\\[1.25ex]\n    &= a^{\\frac{5}{4}}\n\\end{align*}\n", "meta": {"hexsha": "5b95b16f61c48894e3a56ceb9b2a466b66b6fde2", "size": 13129, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/sections/calculus/calculus.tex", "max_stars_repo_name": "StevenGreve/mathematics", "max_stars_repo_head_hexsha": "0763cbc4b44169300adf48dc2fea3d07a6f5cb68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-21T23:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-21T23:42:14.000Z", "max_issues_repo_path": "src/sections/calculus/calculus.tex", "max_issues_repo_name": "StevenGreve/mathematics", "max_issues_repo_head_hexsha": "0763cbc4b44169300adf48dc2fea3d07a6f5cb68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/sections/calculus/calculus.tex", "max_forks_repo_name": "StevenGreve/mathematics", "max_forks_repo_head_hexsha": "0763cbc4b44169300adf48dc2fea3d07a6f5cb68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.8121019108, "max_line_length": 244, "alphanum_fraction": 0.5297433163, "num_tokens": 6356, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5655229110928345}}
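The exponent and radical simplifications above can also be machine-checked. The following minimal sketch assumes Python with SymPy is available; it is only a verification aid (not part of the original exercise set) and reproduces the results of Exponential Equations Exercise 1 and radical-expressions Exercise 2.
\begin{verbatim}
# Verification sketch (assumes SymPy); confirms two of the worked exercises.
import sympy as sp

# Exponential Equations, Exercise 1: the power tower collapses to 2^31.
two = sp.Integer(2)
expr = (two**-4)**-5 * two**17 / ((two**-3)**-6 * (two**-4)**3)
assert expr == two**31

# Radical expressions, Exercise 2: the quotient simplifies to a^(19/12).
a = sp.symbols('a', positive=True)
lhs = sp.sqrt(a * sp.root(a**2, 3)) / (a * sp.sqrt(a**-3 * sp.sqrt(1/a)))
assert sp.simplify(lhs - a**sp.Rational(19, 12)) == 0
print("both checks pass")
\end{verbatim}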
{"text": "\\documentclass{ximera}\n\\input{../preamble}\n\\title{Exercises: Trigonometric Substitions}\n%%%%%\\author{Philip T. Gressman}\n\n\\begin{document}\n\\begin{abstract}\nVarious exercises relating to trigonometric substitutions.\n\\end{abstract}\n\\maketitle\n\n\nCompute the indefinite integrals below. Since there are many possible answers (which differ by constant values), use the given instructions if needed to choose which possible answer to use. Do not forget absolute value signs inside logarithms when they are needed.\n\n\\begin{exercise}%[APEX0608TRIGSB20]\n\\[ \\int x^2\\sqrt{1-x^2}\\ dx = \\answer{\\frac{1}{8}\\arcsin x-\\frac{1}{8}x\\sqrt{1-x^2}(1-2x^2)}+C\\] \n(Choose your answer to equal $0$ at $x = 0$.)\n%\n%\n\\end{exercise}\n\n\\begin{exercise}%[APEX0608TRIGSB18]\n\\[  \\int \\frac {1}{(x^2+1)^2}\\ dx = \\answer{\\frac{1}{2}\\left(\\arctan x+\\frac{x}{x^2+1}\\right)} +C\\]\n(Choose your answer to equal $0$ at $x = 0$.)\n%\n%\n\\end{exercise}\n\n\\begin{exercise}%[APEX0608TRIGSB26]\n\\[  \\int \\frac{x^2}{\\sqrt{x^2+3}}\\ dx = \\answer{\\frac{1}{2}x\\sqrt{x^2+3}-\\frac{3}{2}\\ln\\left|\\frac{\\sqrt{x^2+3}}{\\sqrt{3}}+\\frac{x}{\\sqrt{3}}\\right|} +C\\]\n(Choose your answer to equal $0$ at $x = 0$.)\n%\n%\n\\end{exercise}\n\n\\section*{Sample Quiz Questions}\n\\begin{question}%%%%%[TrigSub001]\n\nCompute the integral \n\\[\\int_{-2}^{2}\\frac{5}{(5-x^2)^{3/2}}~dx.\\]\n(Hints won't be revealed until after you choose a response.)\n\\begin{multiplechoice}\n\\choice{\\(\\displaystyle 2\\)}\n\\choice{\\(\\displaystyle 3\\)}\n\\choice[correct]{\\(\\displaystyle 4\\)}\n\\choice{\\(\\displaystyle 5\\)}\n\\choice{\\(\\displaystyle 6\\)}\n\\choice{\\(\\displaystyle 7\\)}\n\\end{multiplechoice}\n\\begin{feedback}\nBegin by making the trig substitution \\(x=\\sqrt{5}\\sin \\theta\\). \\begin{hint} It follows that \n\\[ \\begin{aligned} \\int\\frac{5}{(5-x^2)^{3/2}}~dx & = \\int \\frac{5}{(5-(\\sqrt{5}\\sin \\theta)^2)^{3/2}} \\cdot (\\sqrt{5}\\cos \\theta)~d \\theta \\\\\n & = \\int (\\cos \\theta)^{-2}~d \\theta \\\\ & = \\int (\\sec \\theta)^{2} ~ d \\theta = (\\tan \\theta) + C. \\end{aligned} \\] \\begin{hint}\nTo finish, use the inversion identity \\[\\tan \\theta = \\frac{x}{\\sqrt{5-x^2}}.\\]\nTherefore \\[\\int_{-2}^{2}\\frac{5}{(5-x^2)^{3/2}}~dx = \\left.\\frac{x}{\\sqrt{5-x^2}}\\right|_{-2}^{2} = \\left(2\\right) - \\left(-2\\right) = 4.\\] \\end{hint} \\end{hint}\n\\end{feedback}\n\n\\end{question}\n\n\\begin{question}%%%%%[TrigSub017]\n\nCompute the integral \n\\[\\int_{-1}^{1}\\frac{3}{(3+x^2)^{3/2}}~dx.\\]\n(Hints won't be revealed until after you choose a response.)\n\\begin{multiplechoice}\n\\choice{\\(\\displaystyle \\frac{1}{2}\\)}\n\\choice[correct]{\\(\\displaystyle 1\\)}\n\\choice{\\(\\displaystyle \\frac{3}{2}\\)}\n\\choice{\\(\\displaystyle 2\\)}\n\\choice{\\(\\displaystyle \\frac{5}{2}\\)}\n\\choice{\\(\\displaystyle 3\\)}\n\\end{multiplechoice}\n\\begin{feedback}\nBegin by making the trig substitution \\(x=\\sqrt{3}\\tan \\theta\\). \\begin{hint} It follows that \n\\[ \\begin{aligned} \\int\\frac{3}{(3+x^2)^{3/2}}~dx & = \\int \\frac{3}{(3+(\\sqrt{3}\\tan \\theta)^2)^{3/2}} \\cdot (\\sqrt{3}\\sec^2 \\theta)~d \\theta \\\\\n & = \\int (\\sec \\theta)^{-1}~d \\theta \\\\ & = \\int (\\cos \\theta) ~ d \\theta = (\\sin \\theta) + C. 
\\end{aligned} \\] \\begin{hint}\nTo finish, use the inversion identity \\[\\sin \\theta = \\frac{x}{\\sqrt{3+x^2}}.\\]\nTherefore \\[\\int_{-1}^{1}\\frac{3}{(3+x^2)^{3/2}}~dx = \\left.\\frac{x}{\\sqrt{3+x^2}}\\right|_{-1}^{1} = \\left(\\frac{1}{2}\\right) - \\left(-\\frac{1}{2}\\right) = 1.\\] \\end{hint} \\end{hint}\n\\end{feedback}\n\n\\end{question}\n\n\\begin{question}%%%%%[TrigSub033]\n\nCompute the integral \n\\[\\int_{4}^{5}\\frac{16\\sqrt{x^2-16}}{x^{4}}~dx.\\]\n(Hints won't be revealed until after you choose a response.)\n\\begin{multiplechoice}\n\\choice{\\(\\displaystyle \\frac{4}{125}\\)}\n\\choice{\\(\\displaystyle \\frac{1}{25}\\)}\n\\choice{\\(\\displaystyle \\frac{6}{125}\\)}\n\\choice{\\(\\displaystyle \\frac{7}{125}\\)}\n\\choice{\\(\\displaystyle \\frac{8}{125}\\)}\n\\choice[correct]{\\(\\displaystyle \\frac{9}{125}\\)}\n\\end{multiplechoice}\n\\begin{feedback}\nBegin by making the trig substitution \\(x=4\\sec \\theta\\). \\begin{hint} It follows that \n\\[ \\begin{aligned} \\int\\frac{16\\sqrt{x^2-16}}{x^{4}}~dx & = \\int \\frac{16\\sqrt{(4\\sec \\theta)^2-16}}{(4\\sec \\theta)^{4}} \\cdot (4\\sec \\theta\\tan \\theta)~d \\theta \\\\\n & = \\int (\\sec \\theta)^{-3}(\\tan \\theta)^{2}~d \\theta \\\\ & = \\int (\\sin \\theta)^{2}(\\cos \\theta) ~ d \\theta = \\frac{1}{3}(\\sin \\theta)^{3} + C. \\end{aligned} \\] \\begin{hint}\nTo finish, use the inversion identity \\[\\sin \\theta = \\frac{\\sqrt{x^2-16}}{x}.\\]\nTherefore \\[\\int_{4}^{5}\\frac{16\\sqrt{x^2-16}}{x^{4}}~dx = \\left.\\frac{1}{3}\\frac{(x^2-16)^{3/2}}{x^{3}}\\right|_{4}^{5} = \\left(\\frac{9}{125}\\right) - \\left(0\\right) = \\frac{9}{125}.\\] \\end{hint} \\end{hint}\n\\end{feedback}\n\n\\end{question}\n\n\n\n\\section*{Sample Exam Questions}\n\n\\begin{question}%%%%%[2016C.03]\n\nCompute the value of the integral below.\n\\[ \\int_0^{\\frac{1}{\\sqrt{2}}} \\frac{1}{(1-x^2)^{\\frac{3}{2}}} dx \\]\n\\begin{multiplechoice}\n\\choice{\\(0\\)}\n\\choice[correct]{\\(1\\)}\n\\choice{\\(2\\)}\n\\choice{\\(3\\)}\n\\choice{\\(4\\)}\n\\choice{none of these}\n\\end{multiplechoice}\n\n\\end{question}\n\n\\begin{question}%%%%%[2017C.06]\n\nEvaluate \\(\\displaystyle \\int_0^3 \\frac{dx}{(25-x^2)^{3/2}}\\).\n\\begin{multiplechoice}\n\\choice{\\(0\\)}\n\\choice{\\(\\displaystyle \\frac{1}{100}\\)}\n\\choice[correct]{\\(\\displaystyle \\frac{3}{100}\\)}\n\\choice{\\(\\displaystyle \\frac{5}{100}\\)}\n\\choice{\\(\\displaystyle \\frac{7}{100}\\)}\n\\choice{none of these}\n\\end{multiplechoice}\n\n\\end{question}\n\n\n\\end{document}\n", "meta": {"hexsha": "d2e40a306c04a192ef747e3ac2bcabf438d9f1ca", "size": 5311, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "techniques/11trigsubpractice.tex", "max_stars_repo_name": "ptgressman/math104", "max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "techniques/11trigsubpractice.tex", "max_issues_repo_name": "ptgressman/math104", "max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "techniques/11trigsubpractice.tex", "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": 
["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6666666667, "max_line_length": 264, "alphanum_fraction": 0.6315194879, "num_tokens": 2020, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.5654971803694423}}
{"text": "\\chapter{Lecture 27 Jul 06th 2018}%\n\\label{chp:lecture_27_jul_06th_2018}\n% chapter lecture_27_jul_06th_2018\n\n\\section{Polynomial Ring}%\n\\label{sec:polynomial_ring}\n% section polynomial_ring\n\n\\subsection{Polynomials}%\n\\label{sub:polynomials}\n% subsection polynomials\n\n\\begin{defn}[Polynomials]\n\\label{defn:polynomials}\n  Let $R$ be a ring and $x$ a variable. Let\n  \\begin{equation*}\n    R[x] = \\left\\{ f(x) = \\sum_{i=0}^{m} a_i x^i : m \\in \\mathbb{N} \\cup \\{0\\}, \\, a_i \\in R, 0 \\leq i \\leq m \\right\\}.\n  \\end{equation*}\n  Each element in $R[x]$ is called a \\hldefn{polynomial} in $x$ over $R$. If $a_m \\neq 0$, we say that $f(x)$ has \\hldefn{degree} $m$, denoted by $\\deg f = m$, and we say that $a_m$ is the \\hldefn{leading coefficient} of $f(x)$.\n\n  If $\\deg f = 0$, then $f(x) = a_0 \\in R$. In this case, we call $f(x)$ a \\hldefn{constant polynomial}. Note if\n  \\begin{equation*}\n    f(x) = 0 \\iff a_0 = a_1 = ... = a_m = 0,\n  \\end{equation*}\n  we define $\\deg 0 = - \\infty$, and $f(x)$ is called a \\hldefn{zero polynomial}.\n\\end{defn}\n\nFor\n\\begin{gather*}\n  f(x) = a_0 + a_1 x + \\hdots + a_m x^m \\\\\n  g(x) = b_0 + b_1 x + \\hdots + b_n x^n\n\\end{gather*}\nin $R[x]$. If $m \\leq n$, we can define $a_i = 0$ for $m + 1 \\leq i \\leq n$. Then the addition and multiplication on $R[x]$ can be defined as\n\\begin{align*}\n  f(x) + g(x) &= (a_0 + b_0) + (a_1 + b_1) x + \\hdots + (a_n + b_n) x^n \\\\\n  f(x) g(x) &= (a_0 + a_1 x + \\hdots + a_m x^m) (b_0 + b_1 x + \\hdots + b_n x^n) \\\\\n            &= a_0 b_0 + (a_1 b_0 + a_1 b_0) x + (a_2 b_0 + a_1 b_1 + a_0 b_2) x^2 + \\hdots \\\\\n            &\\quad + (a_m b_m) x^{m + n} \\\\\n            &= c_0 + c_1 x + \\hdots + c_{m + n} x^{m + n}\n\\end{align*}\nwhere $c_i = a_0 b_i + a_1 b_{i - 1} + \\hdots + a_{i - 1} b_1 + a_i b_0$.\n\n\\begin{propo}[Ring is a Subring of Its Polynomial Ring]\n\\label{propo:ring_is_a_subring_of_its_polynomial_ring}\n  Let $R$ be a ring and $x$ a variable.\n  \\begin{enumerate}\n    \\item $R[x]$ is a ring\n    \\item $R$ is a subring of $R[x]$\n    \\item If $Z = Z(R)$ denote the center of $R$, then the center of $R[x]$ is $Z[x]$. In particular, $x$ is in the center of $R[x]$.\n  \\end{enumerate}\n\\end{propo}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \\item \\textbf{Checking all 9 properties}: Let \n          \\begin{gather*}\n            f(x) = a_0 + a_1 x + \\hdots + a_m x^m \\\\\n            g(x) = b_0 + b_1 x + \\hdots + b_n x^n \\\\\n            h(x) = d_0 + d_1 x + \\hdots + d_k x^k\n          \\end{gather*}\n          be in $R[x]$.\n      \\begin{itemize}\n        \\item (\\textbf{Closed under addition and multiplication})\n          Suppose, WLOG, that $m \\leq n$. Let $a_i = 0$ for $m + 1 \\leq i \\leq n$. Then\n          \\begin{equation*}\n            f(x) + g(x) = (a_0 + b_0) + (a_1 + b_1) x + \\hdots + (a_n + b_n) x^n\n          \\end{equation*}\n          and we observe that $a_i + b_i \\in R$ for $0 \\leq i \\leq n$ since $R$ is a ring. And so $f(x) + g(x) \\in R[x]$. Also, we have\n          \\begin{align*}\n            f(x) g(x) = c_0 + c_1 x + \\hdots + c_{m + n} x^{m + n}\n          \\end{align*}\n          where $c_i = a_0 b_i + a_1 b_{i - 1} + \\hdots + a_{i - 1} b_1 + a_i b_0 \\in R$ for $1 \\leq i \\leq m + n$. And so $f(x) g(x) \\in R[x]$.\n        \\item (\\textbf{Commutativity of Addition}) Suppose, WLOG, that $m \\leq n$. Let $a_i = 0$ for $m + 1 \\leq i \\leq n$. 
Then\n          \\begin{align*}\n            f(x) + g(x) &= (a_0 + b_0) + (a_1 + b_1) x + \\hdots + (a_n + b_n) x^n \\\\\n                        &= (b_0 + a_0) + (b_1 + a_1) x + \\hdots + (b_n + a_n) x^n \\\\\n                        &= g(x) + f(x)\n          \\end{align*}\n        \\item (\\textbf{Zero and Unity}) It is clear that the zero and unity of $R$ are the zero and unity of $R[x]$ respectively, since\n          \\begin{equation*}\n            f(x) + 0 = f(x) = 0 + f(x)\n          \\end{equation*}\n          and\n          \\begin{equation*}\n            1 f(x) = f(x) = f(x) \\cdot 1.\n          \\end{equation*}\n        \\item (\\textbf{Associativity}) Suppose, WLOG, that $m \\leq n \\leq k$. Let $a_i = b_j = 0$ for $m + 1 \\leq i \\leq k$ and $n + 1 \\leq j \\leq k$. Then\n          \\begin{align*}\n            &f(x) + [ g(x) + h(x) ]\\\\\n            &= f(x) + [ (b_0 + d_0) + (b_1 + d_1) x + \\hdots + (b_k + d_k) x^k ] \\\\\n            &= (a_0 + b_0 + d_0) + (a_1 + b_1 + d_1) x + \\hdots + (a_k + b_k + d_k) x^k \\\\\n            &= [(a_0 + b_0) + (a_1 + b_1) x + \\hdots + (a_k + b_k) x^k] + h(x) \\\\\n            &= [ f(x) + g(x) ] + h(x)\n          \\end{align*}\n          and if we use the summation notation for $f(x), g(x)$ and $h(x)$, we have\n          \\begin{align*}\n            f(x) [ g(x) h(x) ] &= f(x) \\left[ \\left( \\sum_{j=0}^{n} b_j x^j \\right)\\left( \\sum_{l=0}^{k} d_l x^l \\right) \\right] \\\\\n                               &= \\left[ \\sum_{i=0}^{m} a_i x^i \\right] \\left[ \\sum_{j=0}^{n} \\sum_{l=0}^{k} b_j d_l x^{j + l} \\right] \\\\\n                               &= \\sum_{i=0}^{m} \\sum_{j=0}^{n} \\sum_{l=0}^{k} a_i b_j d_l x^{i + j + l} \\\\\n                               &= \\left[ \\sum_{i=0}^{m} \\sum_{j=0}^{n} a_i b_j x^{i + j} \\right] \\left[ \\sum_{l=0}^{k} d_l x^l \\right] \\\\\n                               &= \\left[ \\left( \\sum_{i=0}^{m} a_i x^i \\right) \\left( \\sum_{j=0}^{n} b_j x^j \\right) \\right] h(x) \\\\\n                               &= [ f(x) g(x) ] h(x)\n          \\end{align*}\n        \\item (\\textbf{Inverse}) Since $R$ is a ring, and in particular an abelian group under addition, for each $a_i \\in R$, $0 \\leq i \\leq m$, we have that $\\exists (-a_i) \\in R$ such that $a_i + (-a_i) = 0$. 
Particularly, we have that\n          \\begin{equation*}\n            - f(x) = ( - a_0 ) + ( - a_1 ) x + ( - a_2 ) x^2 + \\hdots + ( - a_m ) x^m\n          \\end{equation*}\n          is the inverse of $f(x) \\in R[x]$.\n        \\item (\\textbf{Distributivity}) Again, using the summation notation, since $R$ is a ring, we have\n          \\begin{align*}\n            &f(x) [ g(x) + h(x) ] \\\\\n            &= \\left[ \\sum_{i=0}^{m} a_i x^i \\right] \\left[ \\sum_{j=0}^{n} b_j x^j + \\sum_{l=0}^{k} d_l x^l \\right] \\\\\n            &= \\left[ \\sum_{i=0}^{m} a_i x^i \\right] \\left[ \\sum_{j=0}^{k} (b_j + d_j) x^j \\right] \\\\\n            &= \\sum_{i=0}^{m} \\sum_{j=0}^{k} a_i (b_j + d_j) x^{i + j} = \\sum_{i=0}^{m} \\sum_{j=0}^{k} (a_i b_j + a_i d_j) x^{i + j} \\\\\n            &= \\sum_{i=0}^{m} \\sum_{j=0}^{k} a_i b_j x^{i + j} + \\sum_{i=0}^{m} \\sum_{j=0}^{k} a_i d_j x^{i + j} \\\\\n            &= \\sum_{i=0}^{m} \\sum_{j=0}^{n} a_i b_j x^{i + j} + \\sum_{i=0}^{m} \\sum_{j=0}^{k} a_i d_j x^{i + j} \\\\\n            &= f(x) g(x) + f(x) h(x).\n          \\end{align*}\n          The proof for the other side is similar.\n        \\end{itemize}\n        With that, we have that $R[x]$ is a ring.\n\n      \\item We already have that $R$ is a ring, and so it suffices to prove that $R \\subseteq R[x]$. This is, however, rather simple, since $\\forall r \\in R$, we have that $r$ is a constant polynomial, and so $r \\in R[x]$, and therefore $R \\subseteq R[x]$.\n\n      \\item Let\n        \\begin{gather*}\n          f(x) = a_0 + a_1 x + a_2 x^2 + \\hdots + a_m x^m \\in Z[x] \\\\\n          g(x) = b_0 + b_1 x + b_2 x^2 + \\hdots + b_n x^n \\in R[x].\n        \\end{gather*}\n        We have that\n        \\begin{equation*}\n          f(x) g(x) = \\sum_{i=0}^{m} \\sum_{j=0}^{n} a_i b_j x^{i + j}.\n        \\end{equation*}\n        Since $a_i \\in Z$ for $0 \\leq i \\leq m$, we have\n        \\begin{equation*}\n          f(x) g(x) = \\sum_{i=0}^{m} \\sum_{j=0}^{n} b_j a_i x^{i + j} = \\sum_{j=0}^{n} \\sum_{i=0}^{m} b_j a_i x^{j + i} = g(x) f(x)\n        \\end{equation*}\n        for any $g(x) \\in R[x]$. And so $Z[x] \\subseteq Z(R[x])$.\n\n        For $\\supseteq$, $f(x) \\in Z(R[x]) \\implies \\forall b \\in R \\subseteq R[x]$ we have $f(x) b = bf(x)$. It follows that\n        \\begin{equation*}\n          \\forall 0 \\leq i \\leq m \\quad a_i b = b a_i\n        \\end{equation*}\n        and so $a_i \\in Z(R)$, which implies that $Z(R[x]) \\subseteq Z[x]$. Therefore, $Z(R[x]) = Z[x]$.\n  \\end{enumerate}\\qed\n\\end{proof}\n\n\\begin{warning}\n  Although $f(x) \\in R[x]$ can be used to define a function from $R \\to R$, the polynomial is not the same as the function it defines. For example, if $R = \\mathbb{Z}_2$, then $\\mathbb{Z}_2[x]$ is an infinite set, but there are only $4$ different functions from $\\mathbb{Z}_2 \\to \\mathbb{Z}_2$.\n\\end{warning}\n\n\\begin{propo}[Polynomial Ring is an Integral Domain]\n\\label{propo:polynomial_ring_is_an_integral_domain}\n  Let $R$ be an integral domain. Then\n  \\begin{enumerate}\n    \\item $R[x]$ is an integral domain.\n    \\item If $f(x) \\neq 0$ and $g(x) \\neq 0$ in $R[x]$, then\\sidenote{In order to preserve this for when we have the case of $\\deg 0$, we have to define $\\deg 0 = - \\infty$. 
Otherwise, say we define $\\deg 0 = -1$: taking $f = 0$, we have $fg = 0$, so $\\deg (fg) = \\deg f + \\deg g$ would force $-1 = -1 + \\deg g$, i.e. $\\deg g = 0$ for every $g$, which is absurd.}\n      \\begin{equation*}\n        \\deg (fg) = \\deg f + \\deg g\n      \\end{equation*}\n    \\item The units in $R[x]$ are $R^*$, the units in $R$.\n  \\end{enumerate}\n\\end{propo}\n\n\\begin{proof}\n  We shall prove $(1)$ and $(2)$ together.\n  \\begin{enumerate}\n    \\item[1 \\& 2.] Suppose $f(x) \\neq 0 \\neq g(x) \\in R[x]$, say\n      \\begin{gather*}\n        f(x) = a_0 + a_1 x + \\hdots + a_m x^m \\quad a_m \\neq 0 \\\\\n        g(x) = b_0 + b_1 x + \\hdots + b_n x^n \\quad b_n \\neq 0.\n      \\end{gather*}\n      Then\n      \\begin{equation*}\n        f(x) g(x) = a_m b_n x^{m + n} + \\hdots + a_0 b_0.\n      \\end{equation*}\n      Now since $R$ is an integral domain, we have that $a_m b_n \\neq 0$ and so $f(x) g(x) \\neq 0$. Thus $R[x]$ is an integral domain. Moreover, we see that\n      \\begin{equation*}\n        \\deg (fg) = m + n = \\deg f + \\deg g.\n      \\end{equation*}\n\n      \\setcounter{enumi}{2}\n    \\item Suppose that $u(x) \\in R[x]$ is a unit of $R[x]$ with inverse $u^{-1}(x)$ which we shall write as $v(x)$. Since $u(x) v(x) = 1$, by $(2)$, we have that\n      \\begin{equation}\\label{eq:polynomial_ring_is_an_integral_domain_eq1}\n        \\deg u + \\deg v = \\deg 1 = 0.\n      \\end{equation}\n      Now by $(1)$, $R[x]$ is an integral domain, and so since $u(x) v(x) = 1$, we have that $u(x) \\neq 0 \\neq v(x)$. Therefore, $\\deg u, \\deg v \\geq 0$, which implies that we must have $\\deg u = 0 = \\deg v$ from \\cref{eq:polynomial_ring_is_an_integral_domain_eq1}. Therefore, units in $R[x]$ are from $R^*$.\n  \\end{enumerate}\\qed\n\\end{proof}\n\n\\begin{note}\n  Recall that $\\mathbb{Z}_n$ is an integral domain if and only if $n = p$ is prime. If $n \\neq p$, then, e.g., for $\\mathbb{Z}_4[x]$, we have\n  \\begin{equation*}\n    2x\\cdot 2x = 4x^2 = 0\n  \\end{equation*}\n  and so\n  \\begin{equation*}\n    \\deg (2x) + \\deg (2x) \\neq \\deg (4x^2) = \\deg (2x \\cdot 2x).\n  \\end{equation*}\n\\end{note}\n\n% subsection polynomials (end)\n\n\\subsection{Factorization of Polynomials}%\n\\label{sub:factorization_of_polynomials}\n% subsection factorization_of_polynomials\n\n\\begin{defn}[Division of Polynomials]\\index{Division of Polynomials}\n\\label{defn:division_of_polynomials}\n  Let $R$ be a commutative ring and $f(x), g(x) \\in R[x]$. We say that $f(x)$ divides $g(x)$, denoted as $f(x) \\, | \\, g(x)$, if $\\exists q(x) \\in R[x]$ such that\n  \\begin{equation*}\n    g(x) = q(x) f(x)\n  \\end{equation*}\n\\end{defn}\n\n\\begin{defn}[Monic Polynomial]\\index{Monic Polynomial}\n\\label{defn:monic_polynomial}\n  Let $R$ be a commutative ring and $f(x) \\in R[x]$. $f(x)$ is monic if its leading coefficient is $1$.\n\\end{defn}\n\nWe shall prove the following proposition next class.\n\n\\begin{propononum}\n  Let $R$ be an integral domain, and $f(x), \\, g(x) \\in R[x]$ be monic polynomials. 
If $f(x) \\, | \\, g(x)$ and $g(x) \\, | \\, f(x)$, then $f(x) = g(x)$.\n\\end{propononum}\n\n% subsection factorization_of_polynomials (end)\n\n% section polynomial_ring (end)\n\n% chapter lecture_27_jul_06th_2018 (end)\n", "meta": {"hexsha": "ad926f74548a0a7e7f8039f3ff5f55d44d4bc376", "size": 11674, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "PMATH347S18/lectures/lec27.tex", "max_stars_repo_name": "japorized/TeX_notes", "max_stars_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-09-28T21:23:05.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-21T01:41:27.000Z", "max_issues_repo_path": "PMATH347S18/lectures/lec27.tex", "max_issues_repo_name": "japorized/TeX_notes", "max_issues_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-29T17:58:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-29T17:58:51.000Z", "max_forks_repo_path": "PMATH347S18/lectures/lec27.tex", "max_forks_repo_name": "japorized/TeX_notes", "max_forks_repo_head_hexsha": "5814c8682addc5dd6f9a323758f87e4c4ca57b8e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2017-09-27T20:55:58.000Z", "max_forks_repo_forks_event_max_datetime": "2017-09-27T20:55:58.000Z", "avg_line_length": 50.3189655172, "max_line_length": 327, "alphanum_fraction": 0.5350351208, "num_tokens": 4654, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5654971761407769}}
{"text": "\\section{The Knapsack Problem}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12)}\n    \\begin{itemize}\n      \\item Coins values: $x_{1} \\dots x_{n}$\n      \\item Amount: $v$\n      \\item Is it possible to make change for $v$?\n    \\end{itemize}\n  \\end{exampleblock}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(2), Problem 7.1 (Subset sum))}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{1}\n\t  \\item Without repetition ($0/1$)\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{description}\n\t\\item[Subproblem:] $C[i, w]$: Make change for $w$ using only values of $x_{1} \\dots x_{i}$?\n\t\\item[Goal:] $C[n,v]$\n\t  \\pause\n\t\\item[Make choice:] Using value $x_{i}$ or not?\n\t\\item[Recurrence:] \n\t  \\[\n\t\tC[i,w] = C[i-1, w] \\lor (C[i-1, w-x_{i}] \\textcolor{blue}{\\land w \\ge x_{i}})\n\t  \\]\n\t  \\pause\n\t\\item[Init:]\n\t  \\begin{align*}\n\t\tC[i,0] &= \\text{true}  \\\\\n\t\tC[0,w] &= \\text{false}, \\text{if } w > 0 \\\\\n\t\tC[0,0] &= \\text{true}\n\t  \\end{align*}\n\t\\item[Time:] $O(nv)$\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(1))}\n\t\\begin{enumerate}[(1)]\n\t  \\item Unbounded repetition ($\\infty$)\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{description}\n\t\\item[Subproblem:] $C[i, w]$: Make change for $w$ using only values of $x_{1} \\dots x_{i}$?\n\t\\item[Goal:] $C[n,v]$\n\t  \\pause\n\t\\item[Make choice:] Using value $x_{i}$ or not?\n\t\\item[Recurrence:] \n\t  \\[\n\t\tC[i,w] = C[i-1, w] \\lor (C[\\textcolor{red}{i}, w-x_{i}] \\land w \\ge x_{i})\n\t  \\]\n\t  \\pause\n\t\\item[Init:]\n\t  \\begin{align*}\n\t\tC[i,0] &= \\text{true}, \\forall i = 0 \\dots n  \\\\\n\t\tC[0,w] &= \\text{false}, \\text{if } w > 0\n\t  \\end{align*}\n\t\\item[Time:] $O(nv)$\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(1))}\n\t\\begin{enumerate}[(1)]\n\t  \\item Unbounded repetition ($\\infty$)\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{description}\n\t\\item[Subproblem:] $C[w]$: Possible to make change for $w$?\n\t\\item[Goal:] $C[v]$\n\t  \\pause\n\t\\item[Make choice:] What is the first coin to use?\n\t\\item[Recurrence:] \n\t  \\[\n\t\tC[w] = \\bigvee_{\\textcolor{red}{i: \\; x_i \\le w}} C[w-x_i]\n\t  \\]\n\t  \\pause\n\t\\item[Init:] $C[0] = \\text{true}$\n\t\\item[Time:] $O(nv)$\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(1))}\n\t\\begin{enumerate}[(1)]\n\t  \\item Unbounded repetition ($\\infty$)\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\[\n\tC[i,w] \\text{\\emph{ vs. 
}} C[w]\n  \\]\n\n  \\[\n\tC[i,w] = C[i-1, w] \\lor (C[i, w-x_{i}] \\land w \\ge x_{i})\n  \\]\n\n  \\[\n\tC[w] = \\bigvee_{i: \\; x_i \\le w} C[w-x_i]\n  \\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(3))}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{2}\n\t  \\item Unbounded repetition with $\\le k$ coins\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{description}\n\t\\item[Subproblem:] $C[i,w,l]$: Possible to make change for $w$ with $\\le l$ coins of values of $x_{1} \\dots x_{i}$?\n\t\\item[Goal:] $C[n,v,k]$\n\t  \\pause\n\t\\item[Make choice:] Using value $x_{i}$ or not? \n\t\\item[Recurrence:] \n\t  \\[\n\t\tC[i,w,l] = C[i-1,w,l] \\lor \\left( C[\\textcolor{red}{i}, w-x_{i}, \\textcolor{red}{l-1}] \\land w \\ge x_{i} \\right)\n\t  \\]\n\t  \\pause\n\t\\item[Init:]\n\t  \\begin{align*}\n\t\tC[0,0,l] &= \\text{true}, \\quad C[0,w,l] = \\text{false}, \\text{if } w > 0 \\\\\n\t\tC[i,0,l] &= \\text{true}, \\quad C[i,w,0] = \\text{false}, \\text{if } w > 0\n\t  \\end{align*}\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{The change-making problem}\n  \\begin{exampleblock}{The change-making problem (Problem 7.12(3))}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{2}\n\t  \\item Unbounded repetition with $\\le k$ coins\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\pause\n  \\begin{description}\n\t\\item[Subproblem:] $C[w,l]$: Possible to make change for $w$ with $\\le l$ coins?\n\t\\item[Goal:] $C[v,k]$\n\t  \\pause\n\t\\item[Make choice:] What is the first coin to use?\n\t\\item[Recurrence:] \n\t  \\[\n\t\tC[w,l] = \\bigvee_{i: \\; x_i \\le w} C[w-x_i, l-1]\n\t  \\]\n\t  \\pause\n\t\\item[Init:]\n\t  \\begin{align*}\n\t\tC[0,l] &= \\text{true}, \\\\\n\t\tC[w,0] &= \\text{false}, \\text{if } w > 0\n\t  \\end{align*}\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "1871e48300994d25fce09faca8770ed3de6c55b6", "size": 4553, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/knapsack-dp.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/knapsack-dp.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/knapsack-dp.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 27.2634730539, "max_line_length": 116, "alphanum_fraction": 0.5848890841, "num_tokens": 1701, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5654971596189796}}
{"text": "\\section{Model of implicit particle-in-cell}\nWe developed the implicit particle-in-cell (PIC) code, based on the scheme suggested by Lapenta et al.~\\cite{Lapenta2006} and improved for the relativistic case by Noguchi et al.\\cite{Noguchi2007}. This scheme is described in details in these papers and below we present only a short description. The common assumption for all particle-in-cell codes is the consideration of the plasma as a superposition of a large number of finite elements, called super-particles. Distribution function can be represented as the sum of members for all super-particles: \n\\begin{equation}\nF \\left( \\textbf{x},\\textbf{p},t \\right) =\\sum _{p=1}^{N_{s}}S \\left( x-x_{{p}} \\right) \nS \\left( y-y_{{p}} \\right) S \\left( z-z_{{p}} \\right) \\delta \\left( \\textbf{p}-\\textbf{p}_{{p}} \\right), \n\\end{equation}\nwhere $\\textbf{x}$ and $\\textbf{p}$ are coordinates and the momentum of a particle , respectively, $S$ is a shape-function, and $\\delta$ is the Dirac's delta-function. The shape-function is chosen in form of b spline. Also we introduce interpolation function $W$ which represents the part of a particle which is contained in the grid cell:\n\\begin{equation}\nW \\left(x_{{c}} - x_{{p}} \\right)= \\int\\limits_{-\\infty}^{+\\infty}S(x-x_{{p}})\\phi(x)dx,\n\\end{equation} \nwhere $c$ and $p$ are indexes of the cell and the particle, respectively, and $\\phi$ equals 1 inside the cell and 0 outside. Using this function we can determine electric and magnetic field for each particle as\n\\begin{equation}\n\\begin{aligned}\n&\\textbf{E}_{p} = \\sum_{c}\\textbf{E}_c W \\left(x_{{c}} - x_{{p}} \\right),\n\\\\\n&\\textbf{B}_{p} = \\sum_{c}\\textbf{B}_c W \\left(x_{{c}} - x_{{p}} \\right).\n\\end{aligned}\n\\end{equation}\n\nThen we use particle fields to solve equations of motion for particles:\n\\begin{equation}\n\\begin{aligned}\n&{\\frac {d\\textbf{x}_p}{dt}}=\\textbf{v}_p,\n\\\\\n&{\\frac {d\\textbf{p}_p}{dt}}=q_p \\left(\\textbf{E}_{p}+\\frac{\\textbf{v}_p\\times\\textbf{B}_{p}}{c}\\right),\n\\end{aligned}\n\\end{equation}\nwhere $q_p$ is the particle charge and $\\textbf{p}_p$ is the momentum.\n\nAt the same time, using interpolation functions we can determine macroscopic plasma parameters, such as the electric flux and the electric density :\n\\begin{equation}\n\\begin{aligned}\\label{rhoj}\n&\\rho_c=\\sum_p q_p W \\left(x_{{c}} - x_{{p}} \\right),\n\\\\\n&\\textbf{J}_c=\\sum_p q_p \\textbf{v}_p W \\left(x_{{c}} - x_{{p}} \\right).\n\\end{aligned}\n\\end{equation}\n\nThe difficulty comes from the fact that particles coordinates and fields are coupled and we can not separate the equations of motion and Maxwell equations. The first possible solution is the explicit method: equations of fields use particle velocities on the previous time step and vice versa. Explicit approach is rather simple, but it has strong restrictions on stability. The other possible method is the implicit approach which is more complicated, but at the same time more stable. The main idea of it is to predict particle velocity at the next time step, using the field at the next time step and then use velocity in the implicit scheme for the field.\n\nIn the implicit approach we should introduce intermediate values $F^{n+\\theta}=\\theta F^{n+1} + \\left(1-\\theta\\right) F^n$, where $F^n$ is the value at the n-th time step and $F^{n+\\theta}$ the value at the intermediate time step. 
$\\theta$ is the parameter of the scheme and should be greater than or equal to $\\frac{1}{2}$ for stability. Using this notation, the time discretization of the Maxwell equations reads:\n\\begin{equation}\n\\begin{aligned}\\label{maxwell}\n&\\nabla\\times\\textbf{E}^{n+\\theta}+\\frac{\\textbf{B}^{n+1}-\\textbf{B}^n}{c\\Delta t}=0,\n\\\\\n&\\nabla\\times\\textbf{B}^{n+\\theta}-\\frac{\\textbf{E}^{n+1}-\\textbf{E}^n}{c\\Delta t}=\\frac{4\\pi}{c}\\textbf{J}^{n+\\theta},\n\\\\\n&\\nabla\\cdot\\textbf{E}^{n+\\theta}=4\\pi\\rho^{n+\\theta},\n\\\\\n&\\nabla\\cdot\\textbf{B}^n=0.\n\\end{aligned}\n\\end{equation}\n\nThe time discretization of the equations of particle motion reads\n\\begin{equation}\\label{particlemoving}\n\\begin{aligned}\n&\\textbf{x}_p^{n+1}=\\textbf{x}_p^{n}+\\overline{\\textbf{v}}_p\\Delta t,\n\\\\\n&\\textbf{p}_p^{n+1}=\\textbf{p}_p^{n}+q_p\\Delta t\n\\left(\\textbf{E}_p^{n+\\theta}\\left(\\textbf{x}_p^{n+\\theta}\\right)+\\frac{\\overline{\\textbf{v}}_p\\times\\textbf{B}_p^n\\left(\\textbf{x}_p^{n+\\theta}\\right)}{c}\\right),\n\\end{aligned}\n\\end{equation}\nwhere $\\overline{\\textbf{v}}_p$ is the average velocity. It should be noted that the electric field is evaluated at time level $n+\\theta$, while the magnetic field is evaluated at time level $n$. If $\\theta = \\frac{1}{2}$, the scheme has second-order accuracy in $\\Delta t$. Finally, we can express the average velocity explicitly using $\\textbf{v}_p^{n}$ and $\\textbf{E}_p^{n+\\theta}$: \n\\begin{equation}\\label{averagev}\n\\overline{\\textbf{v}}_p = \\widehat{\\textbf{v}}_p+\\frac{q_p\\Delta t}{2m_p\\Gamma_p}\\alpha_p^{n}\\textbf{E}_p^{n+\\theta}\\left(\\textbf{x}_p^{n+\\theta}\\right),\n\\end{equation}\nwhere we use the following notations:\n\\begin{equation}\n\\widehat{\\textbf{v}}_p=\\alpha_p^{n}\\gamma_p^n\\textbf{v}_p^n,\n\\end{equation} \n\\begin{equation}\n\\alpha_p^n=\\frac{1}{\\Gamma_p\\left(1+\\left(\\frac{q_p\\Delta t}{2m_p\\Gamma_p}\\right)^2\\left|\\textbf{B}_p^n\\right|^2\\right)}\\left(I-\\frac{q_p\\Delta t}{2m_p\\Gamma_p}I\\times\\textbf{B}_p^n+\\left(\\frac{q_p\\Delta t}{2m_p\\Gamma_p}\\right)^2\\textbf{B}_p^n\\textbf{B}_p^n\\right),\n\\end{equation}\n\\begin{equation}\n\\Gamma_p=\\frac{q_p\\Delta t}{2m_p c^2}\\,\\textbf{E}_p^n\\cdot\\textbf{v}_p^n+\\gamma_p^n,\n\\end{equation}\nwhere $I$ is the identity tensor and $\\gamma_p^n$ is the gamma factor of the particle at time level $n$. Using expression (\\ref{averagev}) for the average velocity, we can expand the expressions (\\ref{rhoj}) for the charge and current density in a series in the term $\\textbf{x}_p^n-\\textbf{x}_p^{n+\\theta}$ and substitute them into the Maxwell equations (\\ref{maxwell}). 
After solving the algebraic equations, combining all terms, and eliminating the magnetic field, we finally obtain the implicit second-order elliptic equation for $\\textbf{E}^{n+\\theta}$:\n\\begin{eqnarray}\\label{electricfield}\n\\left(c\\theta\\Delta t\\right)^2 \\left(-\\nabla^2\\textbf{E}^{n+\\theta}-\\nabla\\nabla\\cdot\\left(\\mu^n\\cdot\\textbf{E}^{n+\\theta}\\right)\\right)+\\epsilon^n\\textbf{E}^{n+\\theta}=\n\\nonumber\\\\\n\\textbf{E}^{n}-c\\theta\\Delta t\\left(\\frac{4\\pi\\widehat{\\textbf{J}}}{c}-\\frac{\\Delta t}{2}\\nabla\\cdot\\widehat{\\Pi}-\\nabla\\times\\textbf{B}^n\\right)\n-\\left(c\\theta\\Delta t\\right)^2 4\\pi \\nabla\\widehat{\\rho},\n\\end{eqnarray}\nwhere we used the following notations:\n\\begin{equation}\n \\epsilon^n=I+\\mu^n ,\n\\end{equation}\n\\begin{equation}\n\\mu^n=-\\sum_p\\frac{2\\pi q_p^2\\theta\\Delta t^2}{m_p}\\alpha_p^n W\\left(x-x_p^n\\right),\n\\end{equation}\n\\begin{equation}\n\\widehat{\\Pi}=\\sum_p q_p\\widehat{\\textbf{v}}_p\\widehat{\\textbf{v}}_p W\\left(x-x_p^n\\right),\n\\end{equation}\n\\begin{equation}\n\\widehat{\\textbf{J}}=\\sum_p q_p \\widehat{\\textbf{v}}_p W\\left(x-x_p^n\\right),\n\\end{equation}\n\\begin{equation}\n\\widehat{\\rho}=\\rho^n - \\theta \\Delta t \\nabla\\cdot\\left(\\widehat{\\textbf{J}} -\\frac{\\Delta t}{2}\\nabla\\cdot\\widehat{\\Pi} \\right).\n\\end{equation}\n\nThe spatial discretization of the equation for the electric field (\\ref{electricfield}) is performed with a finite-volume scheme, and then the system of linear equations is solved using the generalized minimal residual (GMRES) algorithm. After that we can explicitly evaluate the magnetic field using the first Maxwell equation in (\\ref{maxwell}), and finally, once the fields are computed, we can solve the equations of motion (\\ref{particlemoving}) using an iterative nonlinear solver.\n\nOur code is fully three-dimensional and parallelized with MPI, which makes it well suited to distributed computing, and it can be executed on a wide class of computers.\n", "meta": {"hexsha": "05233b95a7b39b9500eca61fb28471dfdbe2c440", "size": 7598, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/Physica2016/model.tex", "max_stars_repo_name": "eskyhome/PICpp", "max_stars_repo_head_hexsha": "3365e0e36ba46a87e7a406670ed2dbd9f60ef219", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-05-16T01:41:35.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-17T05:13:23.000Z", "max_issues_repo_path": "papers/Physica2016/model.tex", "max_issues_repo_name": "eskyhome/PICpp", "max_issues_repo_head_hexsha": "3365e0e36ba46a87e7a406670ed2dbd9f60ef219", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/Physica2016/model.tex", "max_forks_repo_name": "eskyhome/PICpp", "max_forks_repo_head_hexsha": "3365e0e36ba46a87e7a406670ed2dbd9f60ef219", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-05-16T01:45:16.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-26T06:43:58.000Z", "avg_line_length": 73.0576923077, "max_line_length": 659, "alphanum_fraction": 0.7286127928, "num_tokens": 2476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362849986365571, "lm_q2_score": 0.6039318337259584, "lm_q1q2_score": 0.5654523161166823}}
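The role of the parameter $\theta$ is easiest to see on a scalar model problem. The sketch below (an illustration in Python, not the paper's solver) applies the $\theta$-weighted implicit step to $dF/dt = \lambda F$, where, as in the full scheme, the implicit update can be solved for the new value in closed form:
\begin{verbatim}
# Sketch (not the paper's code): theta-weighted implicit step for dF/dt = lam*F,
# discretized as (F1 - F0)/dt = lam * (theta*F1 + (1 - theta)*F0), solved for F1.
# theta = 1/2 gives second-order accuracy; theta >= 1/2 is required for stability.

def theta_step(F0, lam, dt, theta=0.5):
    return F0 * (1 + (1 - theta) * lam * dt) / (1 - theta * lam * dt)

F, lam, dt = 1.0, -2.0, 0.05
for _ in range(10):                  # integrate to t = 0.5
    F = theta_step(F, lam, dt)
print(F)                             # ~0.3676, close to exp(-1) = 0.36788
\end{verbatim}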
{"text": "%auto-ignore\n\\providecommand{\\MainFolder}{..}\n\\documentclass[\\MainFolder/Text.tex]{subfiles}\n\\begin{document}\n\\section{Summary of IBL-infinity theory for cyclic DGA}\\label{BV:Summary}\nLet $(V,\\Pair,m_1,m_2)$ be a cyclic $\\DGA$ of degree $n$ of finite type.\nWe have:\n\\begin{itemize}\n    \\item Cyclic cochains $\\CycC(V) = \\CDBCyc V[2-n]$,\n    \\item Canonical dIBL-algebra $\\OPQ_{110}$, $\\OPQ_{210}$, $\\OPQ_{120}$ on $\\CycC(V)$ denoted by $\\dIBL(\\CycC(V))$,\n    \\item Canonical Maurer-Cartan element $\\MC = (\\MC_{10})$,\n    \\item Twisted dIBL-algebra $\\OPQ_{110}^\\MC$, $\\OPQ_{210}$, $\\OPQ_{120}$ on $\\CycC(V)$ denoted by $\\dIBL^\\MC(\\CycC(V))$.\n\\end{itemize}\nRecall from Appendix~\\ref{Section:AppEqDefPrCoPr} that, up to signs and permutation of variables, $\\OPQ_{210}$ is the cyclization ${}^+$ of the Gerstenhaber bracket $[\\cdot,\\cdot]$ and that it holds $\\BVOp_{\\CycMRM} = \\ProdCyc\\circ\\OPQ_{120}$, where~$\\BVOp_{\\CycMRM}$ is an extension of the Schwarz's $\\BV$-operator on polynomial functions on $V$ to $\\CycC(V)$ and~$\\ProdCyc$ is the cyclic shuffle coproduct. \n\nLet $(V',\\Pair',m_1',m_2')$ be another cyclic $\\DGA$ of degree $n$ of finite type, and let \n\\begin{equation}\\label{Eq:DefRetr}\n\\begin{tikzcd}[column sep=large]\n(V,m_1)\\arrow[loop left]{l}{\\Htp}\\arrow[shift left]{r}{\\pi}&\\arrow[shift left]{l}{\\iota} (V',m_1')\n\\end{tikzcd}\n\\end{equation}\nbe a deformation retraction.\nWe have the following:\n\\begin{itemize}\n    \\item $\\IBLInfty$-morphism $\\HTP: \\dIBL(\\CycC(V))\\rightarrow \\dIBL(\\CycC(V'))$ with components $\\HTP_{klg}: \\hat{\\Ext}_k \\CycC(V) \\rightarrow \\hat{\\Ext}_l \\CycC(V')$ such that \n $$ \\HTP_{110}=\\iota^*: (\\CycC(V),\\hat{\\OPQ}_{110})\\longrightarrow(\\CycC(V'),\\hat{\\OPQ}'_{110}) $$\n is a quasi-isomorphism.\n The number $\\HTP_{klg}(\\Susp^k \\psi_1\\otimes\\dotsb\\otimes\\psi_k)(\\Susp^{l}\\omega_1\\otimes\\dotsb\\otimes\\omega_l)$\nfor $\\psi_1$,~$\\dotsc$, $\\psi_k \\in \\CDBCyc V$ and $\\omega_1 = \\omega_{11}\\dotsb\\omega_{1 s_1}$,~$\\dotsc$, $\\omega_l = \\omega_{l1}\\dotsb\\omega_{ls_l}\\in \\BCyc V'$, where $\\Susp$ is the formal symbol of degree $n-3$, is computed via summation over ribbon graphs with interior vertices decorated with $\\psi_1$, $\\dotsc$, $\\psi_k$, interior edges decorated with the propagator --- the Schwartz kernel $\\Prpg$ of $\\Htp$ --- and exterior vertices decorated with~$\\omega_{ij}$ at the $i$-th boundary component and with $j$ respecting the orientation.\nWe will refer to such graphs shortly as \\emph{Feynman graphs.}\n    \\item Pushforward Maurer-Cartan element $\\PMC \\coloneqq\\HTP_* \\MC$ with components $\\PMC_{lg} \\in \\hat{\\Ext}_l\\CycC(V)$.\n    The number $\\PMC_{lg}(\\Susp^l\\omega_1\\otimes \\dotsc \\otimes \\omega_l)$ is computed via summation over Feynman graphs as above with trivalent internal vertices $\\MC_{10}$.\n    \\item Twisted $\\IBLInfty$-morphism $\\HTP^\\MC: \\dIBL^\\MC(\\CycC(V)) \\rightarrow \\dIBL^\\PMC(\\CycC(V'))$ with components $\\HTP^\\MC_{klg}: \\hat{\\Ext}_k \\CycC(V) \\rightarrow \\hat{\\Ext}_l \\CycC(V')$ such that\n $$\\begin{multlined}[t]\\HTP_{110}^\\MC: (\\CycC(V),\\hat{\\OPQ}_{110}^\\MC)\\longrightarrow(\\CycC(V'),{\\hat{\\OPQ}'^\\PMC}_{110}) \\\\ =\\iota^* + \\HTP_{210}\\circ_1 \\MC_{10} + \\frac{1}{2!} \\HTP_{310} \\circ_{1,1}(\\MC_{10}, \\MC_{10}) + \\dotsb\\end{multlined} $$\nis a quasi-isomorphism.\nThe number $\\HTP_{klg}(\\Susp^k 
\\psi_1\\otimes\\dotsb\\otimes\\psi_k)(\\Susp^{l}\\omega_1\\otimes\\dotsb\\otimes\\omega_l)$ is computed via summation over Feynman graphs with interior vertices $\\psi_1$, $\\dotsc$, $\\psi_k$ and $\\MC_{10}$.\n\\end{itemize}\nRecall that an $\\IBLInfty$-quasi-isomorphism is automatically an $\\IBLInfty$-homotopy equivalence; this has been proven in \\cite[Theorem~1.2]{Cieliebak2015} via obstruction theory.\nObstruction theory also gives the existence of a minimal model of any $\\IBLInfty$-algebra, which is the content of~\\cite[Theorem~1.3]{Cieliebak2015}.\nWhat has not been proven yet, but what we suspect is true, is the following:\n\\begin{enumerate}[label=(\\arabic*)]\n \\item For a different choice of the deformation retraction \\eqref{Eq:DefRetr}, the corresponding $\\IBLInfty$-morphisms $\\HTP$ and $\\HTP'$ are $\\IBLInfty$-homotopic.\n \\item If $\\iota$ is in addition a morphism of Poincar\\'e $\\DGA$'s (perhaps simply-connected), then $\\dIBL^\\MC(\\CycC(V))$ and $\\dIBL^{\\MC'}(\\CycC(V'))$ are $\\IBLInfty$-homotopy equivalent (perhaps even the Maurer-Cartan elements $\\MC'$ and $\\HTP_{*}\\MC$ are gauge equivalent).\n%Therefore, in~\\eqref{Eq:DefRetr}, if, e.g., $V'$ is a subalgebra and $\\iota$ is an inclusion, then $\\dIBL^\\MC(\\CycC(V))$ and $\\dIBL^{\\MC'}(\\CycC(V'))$ are $\\IBLInfty$-homotopic.\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "ef80e69abc1e620e7e814c36d038a3c596311fc2", "size": 4557, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Subfiles/BV_Sumary.tex", "max_stars_repo_name": "p135246/phd-thesis", "max_stars_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Subfiles/BV_Sumary.tex", "max_issues_repo_name": "p135246/phd-thesis", "max_issues_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Subfiles/BV_Sumary.tex", "max_forks_repo_name": "p135246/phd-thesis", "max_forks_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.9574468085, "max_line_length": 544, "alphanum_fraction": 0.693877551, "num_tokens": 1656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916134888613, "lm_q2_score": 0.6548947290421275, "lm_q1q2_score": 0.5654306167730331}}
{"text": "\\lab{Newton's Method}{Newton's Method}\n\\label{lab:NewtonsMethod}\n\\objective{Newton's method, the classical method for finding the zeros of a function, is one of the most important algorithms of all time.\nIn this lab, we implement Newton's method in arbitrary dimensions and use it to solve a few interesting problems.\nWe also explore in some detail the convergence (or lack of convergence) of the method under various circumstances.}\n\n\\section*{Iterative Methods} % ================================================\n\nAn \\emph{iterative method} is an algorithm that must be applied repeatedly to obtain a result.\nThe general idea behind any iterative method is to make an initial guess at the solution to a problem, apply a few easy computations to better approximate the solution, use that approximation as the new initial guess, and repeat until done.\nMore precisely, let $F$ be some function used to approximate the solution to a problem.\nStarting with an initial guess of $x_0$, compute\n\\begin{equation}x_{k+1} = F(x_k)\\label{eq:iter-summary}\\end{equation}\nfor successive values of $k$ to generate a sequence $(x_k)_{k=0}^\\infty$ that hopefully converges to the true solution.\nIf the terms of the sequence are vectors, they are denoted by $\\x_k$.\n\nIn the best case, the iteration converges to the true solution $x$, written $\\lim_{k\\rightarrow\\infty} x_k = x$ or $x_k\\rightarrow x$.\nIn the worst case, the iteration continues forever without approaching the solution.\nIn practice, iterative methods require carefully chosen \\emph{stopping criteria} to guarantee that the algorithm terminates at some point.\nThe general approach is to continue iterating until the difference between two consecutive approximations is sufficiently small, and to iterate no more than a specific number of times.\nThat is, choose a very small $\\epsilon > 0$ and an integer $N\\in\\mathbb{N}$, and update the approximation using (\\ref{eq:iter-summary}) until either\n\\begin{equation} % Stopping Criteria.\n|x_k - x_{k-1}| < \\epsilon\n\\qquad \\text{or} \\qquad\nk > N.\n\\end{equation}\n\nThe choices for $\\epsilon$ and $N$ are significant: a ``large'' $\\epsilon$ (such as $10^{-6}$) produces a less accurate result than a ``small'' $\\epsilon$ (such $10^{-16}$), but demands less computations; a small $N$ (10) also potentially lowers accuracy, but detects and halts nonconvergent iterations sooner than a large $N$ (10,000).\nIn code, $\\epsilon$ and $N$ are often named \\li{tol} and \\li{maxiter}, respectively (or similar).\n\nWhile there are many ways to structure the code for an iterative method, probably the cleanest way is to combine a \\li{for} loop with a \\li{break} statement.\nAs a very simple example, let $F(x) = \\frac{x}{2}$.\nThis method converges to $x = 0$ independent of starting point.\n\n\\begin{lstlisting}\n>>> F = lambda x: x / 2\n>>> x0, tol, maxiter = 10, 1e-9, 8\n>>> for k in range(maxiter):           # Iterate at most N times.\n...     print(x0, end='  ')\n...     x1 = F(x0)                      # Compute the next iteration.\n...     if abs(x1 - x0) < tol:          # Check for convergence.\n...         break                       # Upon convergence, stop iterating.\n...     
x0 = x1                         # Otherwise, continue iterating.\n...\n<<10  5.0  2.5  1.25  0.625  0.3125  0.15625  0.078125>>\n\\end{lstlisting}\n\nIn this example, the algorithm terminates after $N=8$ iterations (the maximum number of allowed iterations) because the tolerance condition $|x_k - x_{k-1}| < 10^{-9}$ is not met fast enough.\nIf $N$ had been larger (say $40$), the iteration would have quit early due to the tolerance condition.\n\n\\section*{Newton's Method in One Dimension} % =================================\n\n\\emph{Newton's method} is an iterative method for finding the zeros of a function.\nThat is, if $f:\\mathbb{R}\\rightarrow\\mathbb{R}$, the method attempts to find a $\\bar{x}$ such that $f(\\bar{x}) = 0$.\nBeginning with an initial guess $x_0$, calculate successive approximations for $\\bar{x}$ with the recursive sequence\n\\begin{equation}\nx_{k+1} = x_k - \\frac{f(x_k)}{f'(x_k)}.\n\\label{eq:newton-1d-def}\n\\end{equation}\n\nThe sequence converges to the zero $\\bar{x}$ of $f$ if three conditions hold:\n\\begin{enumerate}\n\\item $f$ and $f'$ exist and are continuous,\n\\item $f'(\\bar{x})\\neq0$, and\n\\item $x_0$ is ``sufficiently close'' to $\\bar{x}$.\n\\end{enumerate}\nIn applications, the first two conditions usually hold.\nIf $\\bar{x}$ and $x_0$ are not ``sufficiently close,'' Newton's method may converge very slowly, or it may not converge at all.\nHowever, when all three conditions hold, Newton's method converges quadratically, meaning that the maximum error is squared at every iteration.\nThis is very quick convergence, making Newton's method as powerful as it is simple.\n\n\\begin{figure}[h] % Iterations of Newton's method.\n\\centering\n\\includegraphics[width=.7\\textwidth]{figures/newton_iters.pdf}\n\\caption{\nNewton's method approximates the zero of a function (blue) by choosing as the next approximation the $x$-intercept of the tangent line (red) that goes through the point $(x_k, f(x_k))$.\nIn this example, $f(x) = e^x - 2$, which has a zero at $\\bar{x} = \\log(2)$.\nSetting $x_0 = 2$ and using (\\ref{eq:newton-1d-def}) to iterate, we have $x_1 = x_0 - \\frac{f(x_0)}{f'(x_0)} = 2 - \\frac{e^2 - 2}{e^2} \\approx 1.2707$.\nSimilarly, $x_2 \\approx 0.8320$, $x_3 \\approx .7024$, and $x_4 \\approx 0.6932$.\nAfter only a few iterations, the zero $\\log(2)\\approx 0.6931$ is already computed to several digits of accuracy.}\n\\label{fig:newton}\n\\end{figure}\n\n\\begin{problem}\n\\label{prob:newton-basic}\nWrite a function that accepts a function $f$, an initial guess $x_0$, the derivative $f'$, a stopping tolerance defaulting to $10^{-5}$, and a maximum number of iterations defaulting to $15$.\nUse Newton's method as described in (\\ref{eq:newton-1d-def}) to compute a zero $\\bar{x}$ of $f$.\nTerminate the algorithm when $|x_k - x_{k-1}|$ is less than the stopping tolerance or after iterating the maximum number of allowed times.\nReturn the last computed approximation to $\\bar{x}$, a boolean value indicating whether or not the algorithm converged, and the number of iterations completed.\n\nTest your function against functions like $f(x) = e^x - 2$ (see Figure \\ref{fig:newton}) or $f(x) = x^4 - 3$.\nCheck that the computed zero $\\bar{x}$ satisfies $f(\\bar{x}) \\approx 0$.\nAlso consider comparing your function to \\li{scipy.optimize.newton()}, which accepts similar arguments.\n\\end{problem}\n\n\\begin{info}\nNewton's method can be used to find zeros of functions that are hard to solve for analytically.\nFor example, the function $f(x) = 
\\begin{info}\nNewton's method can be used to find zeros of functions that are hard to solve for analytically.\nFor example, the function $f(x) = \\frac{\\sin(x)}{x}-x$ is not defined at $x = 0$, but it can be extended to a continuous function by defining $f(0)=1$.\nNewton's method can then be used to compute the zeros of this function.\n\\end{info}\n\n\\begin{problem} % Application to interest rates.\n\\label{prob:newton-interest}\nSuppose that an amount of $P_1$ dollars is put into an account at the beginning of years $1, 2,..., N_1$ and that the account accumulates interest at a fractional rate $r$ (so $r = .05$ corresponds to $5\\%$ interest).\nIn addition, at the beginning of years $N_1 + 1, N_1 + 2, ..., N_1 + N_2$, an amount of $P_2$ dollars is withdrawn from the account, and the account balance is exactly zero after the withdrawal at year $N_1 + N_2$.\nThen the variables satisfy\n\\[\nP_1[(1+r)^{N_1} - 1] = P_2[1-(1+r)^{-N_2}].\n\\]\n\nWrite a function that, given $N_1$, $N_2$, $P_1$, and $P_2$, uses Newton's method to determine $r$.\nFor the initial guess, use $r_0 = 0.1$.\n\\\\(Hint: Construct $f(r)$ such that when $f(r)=0$, the equation is satisfied.\nAlso compute $f'(r)$.)\n\nTo test your function, if $N_1 =30, N_2 =20, P_1 =2000$, and $P_2 =8000$, then $r\\approx 0.03878$.\n(From Atkinson, page 118). % TODO: better citation.\n\\end{problem}\n\n\\subsection*{Backtracking} % --------------------------------------------------\n\nNewton's method may not converge for a variety of reasons.\nOne potential problem occurs when the step from $x_k$ to $x_{k+1}$ is so large that the zero is stepped over completely.\n\\emph{Backtracking} is a strategy that combats the problem of overstepping by moving only a fraction of the full step from $x_k$ to $x_{k+1}$.\nThis suggests a slight modification to (\\ref{eq:newton-1d-def}),\n\\begin{equation}\nx_{k+1} = x_k - \\alpha\\frac{f(x_k)}{f'(x_k)},\\qquad\\alpha\\in(0,1].\n\\label{eq:newton-backtracking-1d}\n\\end{equation}\nNote that setting $\\alpha = 1$ results in the exact same method defined in (\\ref{eq:newton-1d-def}), but for $\\alpha \\in (0,1)$, only a fraction of the step is taken at each iteration.\n\n\\begin{problem} % Implement backtracking.\nModify your function from Problem \\ref{prob:newton-basic} so that it accepts a parameter $\\alpha$ that defaults to $1$.\nIncorporate (\\ref{eq:newton-backtracking-1d}) to allow for backtracking.\n\nTo test your modified function, consider $f(x)=x^{1/3}$.\nThe command \\li{x**(1/3.)} fails when \\li{x} is negative, so the function can be defined with NumPy as follows.\n\\begin{lstlisting}\nimport numpy as np\nf = lambda x: np.sign(x) * np.power(np.<<abs>>(x), 1./3)\n\\end{lstlisting}\nWith $x_0=.01$ and $\\alpha=1$, the iteration should \\textbf{not} converge.\nHowever, setting $\\alpha=.4$, the iteration should converge to a zero that is close to $0$.\n\\label{prob:newton-1d-backtracking}\n\\end{problem}\n\nThe backtracking constant $\\alpha$ is significant, as it can result in faster convergence or convergence to a different zero (see Figure \\ref{fig:newton-backtracking-multi-result}).\nHowever, it is not immediately obvious how to choose an optimal value for $\\alpha$.\n\n
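For intuition, only the update rule inside the loop needs to change; the calls below assume your modified function from Problem \\ref{prob:newton-1d-backtracking} accepts \\li{alpha} as a keyword argument, and the derivative formula is that of $x^{1/3}$ away from $0$.\n\n\\begin{lstlisting}\n>>> f = lambda x: np.sign(x) * np.power(np.<<abs>>(x), 1./3)\n>>> Df = lambda x: np.power(np.<<abs>>(x), -2./3) / 3.\n>>> # Inside the loop: x1 = x0 - alpha*f(x0)/Df(x0).\n>>> newton(f, .01, Df, alpha=1)     # The iterates oscillate and diverge.\n>>> newton(f, .01, Df, alpha=.4)    # Converges toward the zero at 0.\n\\end{lstlisting}\n\n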
\\begin{figure}[H]\n\\centering\n\\includegraphics[width=.7\\textwidth]{figures/backtracking.pdf}\n\\caption{Starting at the same initial value but using different backtracking constants can result in convergence to two different solutions.\nThe blue line converges to $\\tilde{\\x} = (0,-1)$ with $\\alpha = 1$ in 5 iterations of Newton's method, while the orange line converges to $\\hat{\\x} = (3.75,.25)$ with $\\alpha = 0.4$ in $15$ iterations.\nNote that the points in this example are $2$-dimensional, which is discussed in the next section.}\n\\label{fig:newton-backtracking-multi-result}\n\\end{figure}\n\n\\begin{problem} % Searching for a good backtracking constant.\n\\label{prob:newton-backtracking-search}\nWrite a function that accepts the same arguments as your function from Problem \\ref{prob:newton-1d-backtracking} except for $\\alpha$.\nUse Newton's method to find a zero of $f$ using various values of $\\alpha$ in the interval $(0,1]$.\nPlot the values of $\\alpha$ against the number of iterations performed by Newton's method.\nReturn a value for $\\alpha$ that results in the lowest number of iterations.\n\nA good test case for this problem is the function $f(x) = x^{1/3}$ discussed in Problem \\ref{prob:newton-1d-backtracking}.\nIn this case, your plot should show that the optimal value for $\\alpha$ is actually closer to $.3$ than to $.4$.\n% resemble the following figure.\n% \\begin{figure}[H]\n% \\includegraphics[width=.7\\textwidth]{figures/alpha_iters.pdf}\n% \\end{figure}\n\\end{problem}\n\n\\section*{Newton's Method in Higher Dimensions} % =============================\n\nNewton's method can be generalized to work on functions with a multivariate domain and range.\nLet $f:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ be given by $f(\\x)=[f_1(\\x)\\ f_2(\\x)\\ \\ldots\\ f_n(\\x)]\\trp$, with $f_i:\\mathbb{R}^n\\to\\mathbb{R}$ for each $i$.\nThe derivative $Df:\\mathbb{R}^n\\rightarrow\\mathbb{R}^{n\\times n}$ is the $n\\times n$ Jacobian matrix of $f$.\n\\[\nDf =\n\\left[\\begin{array}{ccc}\n\\frac{\\partial f_1}{\\partial x_1} & \\cdots & \\frac{\\partial f_1}{\\partial x_n} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f_n}{\\partial x_1} & \\cdots & \\frac{\\partial f_n}{\\partial x_n} \\\\\n\\end{array}\\right]\n\\]\n\nIn this setting, Newton's method seeks a vector $\\bar{\\x}$ such that $f(\\bar{\\x}) = \\0$, the vector of $n$ zeros.\nWith backtracking incorporated, (\\ref{eq:newton-backtracking-1d}) becomes\n\\begin{equation}\n\\x_{k+1} = \\x_k - \\alpha{Df(\\x_k)}^{-1}{f(\\x_k)}.\n\\label{eq:newton-nd}\n\\end{equation}\nNote that if $n = 1$, (\\ref{eq:newton-nd}) is exactly (\\ref{eq:newton-backtracking-1d}) because in that case, $Df(x)^{-1} = 1/f'(x)$.\n\nThis vector version of Newton's method terminates when the maximum number of iterations is reached or the difference between successive approximations is less than a predetermined tolerance $\\epsilon$ with respect to a vector norm, that is, $||\\x_k - \\x_{k-1}|| < \\epsilon$.\n\n\\begin{problem} % Newton's method in n dimensions.\nModify your function from Problems \\ref{prob:newton-basic} and \\ref{prob:newton-1d-backtracking} so that it can compute a zero of a function $f:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ for any $n\\in\\mathbb{N}$.\nTake the following tips into consideration.\n\\begin{itemize}\n\\item If $n > 1$, $f$ should be a function that accepts a 1-D NumPy array with $n$ entries and returns another NumPy array with $n$ entries.\nSimilarly, $Df$ should be a function that accepts a 1-D array with $n$ entries and returns an $n\\times n$ array.\nIn other words, $f$ and $Df$ are callable functions, but $f(\\x)$ is a vector and $Df(\\x)$ is a matrix.\n\n\\item \\li{np.isscalar()} may be useful for determining whether or not $n > 1$.\n\n\\item Instead of computing $Df(\\x_k)^{-1}$ directly at each step, solve the system $Df(\\x_k)\\y_k = f(\\x_k)$ and set $\\x_{k+1} = \\x_k - \\alpha\\y_k$.\nIn other words, use \\li{la.solve()} instead of \\li{la.inv()}.
\n% Always avoid taking matrix inverses when possible.\n\n\\item The stopping criterion now requires using a norm function instead of \\li{abs()}.\n\\end{itemize}\n\nAfter your modifications, carefully verify that your function still works in the case that $n=1$, and that your functions from Problems \\ref{prob:newton-interest} and \\ref{prob:newton-backtracking-search} also still work correctly.\nIn addition, your function from Problem \\ref{prob:newton-backtracking-search} should now work for any $n \\in \\mathbb{N}$ as well.\n\\label{prob:newton-nd-implementation}\n\\end{problem}\n\n\\begin{problem}\nBioremediation involves the use of bacteria to consume toxic wastes.\nAt a steady state, the bacterial density $x$ and the nutrient concentration $y$ satisfy the system of nonlinear equations\n\\begin{align*}\n\\gamma xy - x(1 + y) &= 0 \\\\\n-xy + (\\delta - y)(1 + y) &= 0,\n\\end{align*}\nwhere $\\gamma$ and $\\delta$ are parameters that depend on various physical features of the system.%\n\\footnote{This problem is adapted from exercise 5.19 of \\cite{heath2002scientific} and the notes of Homer Walker.}\n\nFor this problem, assume the typical values $\\gamma = 5$ and $\\delta = 1$, for which the system has solutions at $(x, y) = (0, 1), (0, -1)$, and $(3.75, .25)$.\nWrite a function that finds an initial point $\\x_0 = (x_0,y_0)$ such that Newton's method converges to either $(0, 1)$ or $(0, -1)$ with $\\alpha = 1$, and to $(3.75, .25)$ with $\\alpha = 0.55$.\nAs soon as a valid $\\x_0$ is found, return it (stop searching).\n\\\\(Hint: search within the rectangle $[-\\frac{1}{4},0]\\times[0,\\frac{1}{4}]$.)\n\\end{problem}\n\n
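To illustrate the \\li{la.solve()} tip on this system, the snippet below performs a single Newton step; the Jacobian is computed by hand from the equations above, and the starting point is an arbitrary choice from the hinted rectangle.\n\n\\begin{lstlisting}\n>>> import numpy as np\n>>> from scipy import linalg as la\n\n>>> gamma, delta = 5, 1\n>>> f = lambda x: np.array([gamma*x[0]*x[1] - x[0]*(1 + x[1]),\n...                         -x[0]*x[1] + (delta - x[1])*(1 + x[1])])\n>>> Df = lambda x: np.array([[gamma*x[1] - (1 + x[1]), gamma*x[0] - x[0]],\n...                          [-x[1], -x[0] + delta - 1 - 2*x[1]]])\n\n>>> xk = np.array([-0.2, 0.2])          # Arbitrary starting point.\n>>> yk = la.solve(Df(xk), f(xk))        # Solve Df(xk) yk = f(xk).\n>>> xk - yk                             # One full Newton step (alpha=1).\n\\end{lstlisting}\n\n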
\\subsection*{Basins of Attraction} % ------------------------------------------\n\nWhen a function $f$ has many zeros, the zero that Newton's method converges to depends on the initial guess $x_0$.\nFor example, the function $f(x)=x^2-1$ has zeros at $-1$ and $1$.\nIf $x_0<0$, then Newton's method converges to $-1$; if $x_0 > 0$ then it converges to $1$ (see Figure \\ref{fig:basins-quadratic}).\nThe regions $(-\\infty, 0)$ and $(0, \\infty)$ are called the \\emph{basins of attraction} of $f$.\nStarting in one basin of attraction leads to finding one zero, while starting in another basin yields a different zero.\n\nWhen $f$ is a polynomial of degree greater than 2, the basins of attraction are much more interesting.\nFor example, the basins of attraction for $f(x) = x^3-x$ are shown in Figure \\ref{fig:basins-cubic}.\nThe basin for the zero at the origin is connected, but the other two basins are disconnected and share a kind of symmetry.\n\n\\begin{figure}[H] % 1-D basins of attraction.\n\\centering\n\\begin{subfigure}{.48\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/basins_quadratic.pdf}\n    \\caption{Basins of attraction for $f(x) = x^2 -1$.}\n    % With $\\alpha=1$, blue values for $x_0$ converge to -1 and orange values converge to 1.}\n    \\label{fig:basins-quadratic}\n\\end{subfigure}\n\\quad\n\\begin{subfigure}{.48\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/basins_cubic.pdf}\n    \\caption{Basins of attraction for $f(x) = x^3 -x$.}\n        % With $\\alpha=1$, blue values converge to $-1$, orange converge to 0, and green converge to 1.}\n    \\label{fig:basins-cubic}\n\\end{subfigure}\n\\caption{Basins of attraction with $\\alpha=1$.\nSince choosing a different value for $\\alpha$ can change which zero Newton's method converges to, the basins of attraction may change for other values of $\\alpha$.}\n\\end{figure}\n\nIt can be shown that Newton's method converges in any Banach space with only slightly stronger hypotheses than those discussed previously. % \\footnote{See Chapter 7 of Volume 1 for details.}\nIn particular, Newton's method can be performed over the complex plane $\\mathbb{C}$ to find imaginary zeros of functions.\nPlotting the basins of attraction over $\\mathbb{C}$ yields some interesting results.\n\nThe zeros of $f(x) = x^3 - 1$ are $1$ and $-\\frac{1}{2} \\pm \\frac{\\sqrt{3}}{2}i$.\nTo plot the basins of attraction for $f(x) = x^3-1$ on the square complex domain $X = \\{a+bi \\mid a \\in [-\\frac{3}{2},\\frac{3}{2}], b \\in [-\\frac{3}{2},\\frac{3}{2}]\\}$,\ncreate an initial grid of complex points in this domain using \\li{np.meshgrid()}.\n% Create the real and imaginary parts of the points separately, and then use \\li{np.meshgrid()} to turn them into a single grid of complex numbers.\n\n\\begin{lstlisting}\n>>> x_real = np.linspace(-1.5, 1.5, 500)    # Real parts.\n>>> x_imag = np.linspace(-1.5, 1.5, 500)    # Imaginary parts.\n>>> X_real, X_imag = np.meshgrid(x_real, x_imag)\n>>> X_0 = X_real + 1j*X_imag                # Combine real and imaginary parts.\n\\end{lstlisting}\n\nThe grid $X_0$ is a $500\\times500$ array of complex values to use as initial points for Newton's method.\nArray broadcasting makes it easy to compute an iteration of Newton's method at every grid point.\n\n\\begin{lstlisting}\n>>> f = lambda x: x**3 - 1\n>>> Df = lambda x: 3*x**2\n>>> X_1 = X_0 - f(X_0)/Df(X_0)\n\\end{lstlisting}\n\nAfter enough iterations, the $(i,j)$th element of the grid $X_k$ corresponds to the zero of $f$ that results from using the $(i,j)$th element of $X_0$ as the initial point.\nFor example, with $f(x) = x^3 - 1$, each entry of $X_k$ should be close to $1$, $-\\frac{1}{2} + \\frac{\\sqrt{3}}{2}i$, or $-\\frac{1}{2} - \\frac{\\sqrt{3}}{2}i$.\nEach entry of $X_k$ can then be assigned a value indicating which zero it corresponds to.\n
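One possible way to perform this assignment, assuming the zeros are stored in a 1-D array \\li{zeros} and \\li{X_k} holds the grid after the final iteration, is to broadcast against \\li{zeros} and take an \\li{np.argmin()} along the new axis.\n\n\\begin{lstlisting}\n>>> zeros = np.array([1, -.5 + .5j*np.sqrt(3), -.5 - .5j*np.sqrt(3)])\n>>> # Index of the nearest zero for each initial point in the grid.\n>>> Y = np.argmin(np.<<abs>>(X_k[:,:,np.newaxis] - zeros), axis=2)\n\\end{lstlisting}\n\n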
Some results of this process are displayed below.\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/fractal_hw.png}\n    \\caption{Basins of attraction for $f(x) = x^3-1$.}\n    \\label{fig:fractal_hw}\n\\end{subfigure}\n\\begin{subfigure}{.49\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{figures/fractal_ex.png}\n    \\caption{Basins of attraction for $f(x) = x^3-x$.}\n    \\label{fig:fractal_ex}\n\\end{subfigure}\n\\caption{Newton fractals: basins of attraction of two cubic polynomials over $\\mathbb{C}$.}\n\\label{fig:newton-basins}\n\\end{figure}\n\n\\begin{info} % Newton fractals\nNotice that in some portions of Figure \\ref{fig:fractal_hw}, whenever red and blue try to come together, a patch of green appears in between.\nThis behavior repeats on an infinitely small scale, producing a fractal.\nBecause it arises from Newton's method, this kind of fractal is called a \\emph{Newton fractal}.\n\nNewton fractals show that the long-term behavior of Newton's method is \\textbf{extremely} sensitive to the initial guess $x_0$.\nChanging $x_0$ by a small amount can change the output of Newton's method in a seemingly random way.\nThis phenomenon is called \\emph{chaos} in mathematics.\n\\end{info}\n\n\\begin{problem} % Plot the basins of attraction.\nWrite a function that accepts a function $f:\\mathbb{C}\\rightarrow\\mathbb{C}$, its derivative $f':\\mathbb{C}\\rightarrow\\mathbb{C}$, an array \\li{zeros} of the zeros of $f$, bounds $[r_{\\text{min}},r_{\\text{max}},i_{\\text{min}},i_{\\text{max}}]$ for the domain of the plot, an integer \\li{res} that determines the resolution of the plot, and the number of iterations \\li{iters} to run the iteration.\nCompute and plot the basins of attraction of $f$ in the complex plane over the specified domain in the following steps.\n\\begin{enumerate}\n\\item Construct a \\li{res}$\\times$\\li{res} grid $X_0$ over the domain $\\{a+bi \\mid a \\in [r_{\\text{min}},r_{\\text{max}}], b \\in [i_{\\text{min}},i_{\\text{max}}]\\}$.\n\n\\item Run Newton's method (without backtracking) on $X_0$ \\li{iters} times, obtaining the \\li{res}$\\times$\\li{res} array $X_{k}$.\nTo avoid the additional computation of checking for convergence at each step, do not use your function from Problem \\ref{prob:newton-nd-implementation}.\n\n\\item $X_k$ cannot be visualized directly because its values are complex.\nSolve this issue by creating another \\li{res}$\\times$\\li{res} array $Y$.\nTo compute the $(i,j)$th entry $Y_{i,j}$, determine which zero of $f$ is closest to the $(i,j)$th entry of $X_k$.\nSet $Y_{i,j}$ to the index of this zero in the array \\li{zeros}.\nIf there are $R$ distinct zeros, each $Y_{i,j}$ should be one of $0,1,\\ldots,R-1$.\n\\\\(Hint: \\li{np.argmin()} may be useful.)\n\n\\item Use \\li{plt.pcolormesh()} to visualize the basins.\nRecall that this function accepts three array arguments: the $x$-coordinates (in this case, the real components of the initial grid), the $y$-coordinates (the imaginary components of the grid), and an array indicating color values ($Y$).\nSet \\li{cmap=\"brg\"} to get the same color scheme as in Figure \\ref{fig:newton-basins}.\n\\end{enumerate}\n\nTest your function using $f(x) = x^3-1$ and $f(x)=x^3-x$.\nThe resulting plots should resemble Figures \\ref{fig:fractal_hw} and \\ref{fig:fractal_ex}, respectively (perhaps with the colors permuted).\n\\end{problem}\n\n\\begin{comment} % No Additional Material for now.\n\n\\newpage\n\n\\section*{Additional Material} % ==============================================\n\n\\subsection*{Newton's Method for Optimization} % ------------------------------\n\n\\subsection*{Other Algorithms Based on Newton's Method} % ---------------------\n\n% \\subsection*{Fractals} % ----------------------------------------------------\n\n% % Totally irrelevant.\n% Another well-studied fractal in the complex plane is the Mandelbrot set.\n% It is defined as the points $c \\in \\mathbb{C}$ for which the sequence\n% \\[z_k = z_{k-1}^2 + c\\]\n% is bounded.\n\n\\subsection*{Visualizing Complex Functions} % ---------------------------------\n\nPlot the result of \\li{np.angle()} on $X_k$ as an alternative to the index assignment.\n\n\\end{comment}\n", "meta": {"hexsha": "834e2a634468c6a71b6d923ef5e6dfd77146ab44", "size": 22863, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acme-material/Labs/Volume1/NewtonsMethod/NewtonsMethod.tex", "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_issues_repo_path": "acme-material/Labs/Volume1/NewtonsMethod/NewtonsMethod.tex", "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, 
"max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_forks_repo_path": "acme-material/Labs/Volume1/NewtonsMethod/NewtonsMethod.tex", "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.968, "max_line_length": 392, "alphanum_fraction": 0.7060753182, "num_tokens": 6614, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947290421275, "lm_q2_score": 0.8633916134888613, "lm_q1q2_score": 0.5654306167730331}}
{"text": "\\chapter{Conclusions and Outlook}\n\\label{ch:conclusion}\n\nThis thesis focuses on the plasma dynamics at the tokamak periphery.\n%\nDespite its importance for the success of the magnetic confinement fusion program, the development of a model for the tokamak periphery has been hindered by the fact that this region is characterized by fluctuation levels of order unity, by the presence of both closed and open magnetic flux surfaces, and by a wide range of temperatures and densities that result in a wide range of collisionalities.\n%\nThese challenges, as shown in the present thesis, can be overcome by the use of moment expansion methods with a suitable set of basis functions that allows a convenient expression of the integro-differential Coulomb collision operator.\n\nIn \\cref{ch:dk}, a moment-hierarchy model is developed from a first-principles based, full-F, drift-kinetic model, suitable to describe the plasma dynamics in the SOL region of tokamak devices at arbitrary collisionality.\n%\nTaking advantage of the separation between the turbulent and gyromotion scales, a gyroaveraged Lagrangian and its corresponding equations of motion are obtained.\n%\nThe gyroaveraged distribution function is then expanded into a Hermite-Laguerre basis, and the coefficients of the expansion are related to the lowest-order gyrofluid moments.\n%\nThe fluid moment expansion of the Coulomb operator in terms of irreducible Hermite polynomials is reviewed, and its respective particle moments are written in terms of coefficients of the Hermite-Laguerre expansion, relating both expansions.\n%\nThis allows us to express analytically the moments of the collision operator in terms of guiding-center moments.\n%\nA moment-hierarchy that describes the evolution of the guiding-center moments is derived, together with a Poisson's equation accurate up to second order.\n%\nThe resulting set of equations is then used to derive a fluid model in the high collisionality limit.\n%\nThe results of this chapter are published in \\citet{Jorge2017}.\n\nIn \\cref{ch:gk}, a full-F gyrokinetic moment-hierarchy able to evolve the turbulent plasma dynamics in both the tokamak edge and SOL regions is derived.\n%\nTaking advantage of the spatial scale separation between turbulent fluctuations and magnetic field gradients, and the low-frequency of the fluctuations compared to the ion gyrofrequency, a single-particle Lagrangian is obtained using two successive noncanonical coordinate transformations in order to take into account fluctuations present at the $k_\\perp \\rho_s$ scale.\n%\nSuch transformations are derived using Lie transform perturbation theory.\n%\nThe resulting gyrokinetic equation is then projected onto a Hermite-Laguerre polynomial basis, allowing us to express the gyroaverage of plasma quantities in a closed analytical form.\n%\nThe electrostatic fields are evolved using a gyrokinetic formulation of Maxwell's equations, expressed in terms of coefficients of the moment-hierarchy expansion coefficients.\n\n\\cref{ch:op} complements the gyrokinetic moment-hierarchy model of \\cref{ch:gk} by deriving a moment-hierarchy formulation of the full-F gyrokinetic Coulomb collision operator, valid in both the electrostatic and in the electromagnetic regime.\n%\nThe Coulomb collision operator at arbitrary $k_\\perp \\rho_i$ is ported to a phase-space coordinate system suitable to describe magnetized plasmas, i.e., to guiding-center and gyrocenter coordinate systems, and projected onto a Hermite-Laguerre basis.\n%\nThis 
allows us to describe the plasma dynamics and turbulence in the tokamak periphery at arbitrary collisionalities and fills a gap in the literature by providing full Coulomb moments for full-F gyrofluid models.\n\n%In \\cref{ch:4ddk}, \n\nIn \\cref{ch:epw}, following \\citet{Jorge2018a}, the effect of full Coulomb collisions on electron-plasma waves is studied by taking into account both electron-electron and electron-ion collisions.\n%\nThe proposed framework is particularly efficient, as the number of polynomials needed in order to obtain convergence is low enough to allow multiple scans to be performed, particularly a comparison between several collision operators at arbitrary collisionalities.\n%\nWhile the use of electron-ion collisions alone leads to a damping rate slightly smaller than the one evaluated with the full Coulomb operator, a Lenard-Bernstein or a Dougherty collision operator yields damping rates that deviate by up to 50\\% from the Coulomb one.\n%\nAn eigenmode analysis reveals major differences between the spectra of the full Coulomb and simplified collision operators.\n%\nIn addition, the eigenspectrum shows the presence of purely damped modes that correspond to the entropy mode.\n%\nWe demonstrate that the entropy mode needs a full Coulomb collision operator for its proper description, deriving an analytical dispersion relation for the entropy mode that accurately reproduces the numerical results.\n\nFinally, in \\cref{ch:dwi}, the linear properties of the drift-wave instability are described at arbitrary collisionalities for the first time.\n%\nThe analysis shows that the corrections introduced by the full Coulomb collision operator with respect to simplified collision operators, presently used in state-of-the-art codes, are qualitatively and quantitatively significant at the relevant collisionality regime of operation of future nuclear fusion devices such as ITER.\n%\nIndeed, the drift-wave growth rate is seen to deviate by factors of order unity from fluid and kinetic models based on simplified collision operators. 
\n%\nThe results of \\cref{ch:dwi} are published in \\citet{Jorge2018}.\n\nWith the present work, a crucial step towards a predictive model of tokamak turbulence has been accomplished.\n%\nAlthough the ordering used in this work when deriving the gyrokinetic moment-hierarchy equation is, in principle, applicable to describe the plasma dynamics in the whole machine, we focus on the tokamak periphery region as collisions are expected to limit the number of terms needed in the expansion, making moment-expansion simulations more efficient than standard numerical methods.\n%\nAs a first step of the numerical implementation of the proposed models, we have considered their linear versions.\n%\nHowever, plasma dynamics at the tokamak periphery is essentially turbulent, therefore requiring the development of nonlinear simulations.\n%\nIn this setting, future extensions of the present work should include the development of sheath boundary conditions for moment-hierarchy non-linear simulations.\n%\nFinally, in order to properly address the treatment of peeling-ballooning modes and the drift-Alfv\u00e9n coupling in the edge region, an extension of the model derived here to include electromagnetic perturbations will be addressed in a future publication \\citep{Frei2019}.", "meta": {"hexsha": "cada769a19294ff76eb7185d2e4cc29bf0465c2f", "size": 6812, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "IST Version/main/ch8_conclusion.tex", "max_stars_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_stars_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IST Version/main/ch8_conclusion.tex", "max_issues_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_issues_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IST Version/main/ch8_conclusion.tex", "max_forks_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_forks_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.0540540541, "max_line_length": 400, "alphanum_fraction": 0.8226658837, "num_tokens": 1369, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8633916029436189, "lm_q2_score": 0.6548947357776795, "lm_q1q2_score": 0.5654306156824286}}
{"text": "\\documentclass[12pt]{article}\n\n\\begin{document}\n\n\\section*{Chapter 1}\n\n\t\\subsection*{Part 1: Introduction}\n\n\t(skip)\n\n\t\\subsection*{Part 2: Sets and Subsets}\n\n\t\\begin{enumerate}\n\t\t\\item Determine whether each of the following statements is true or false:\n\t\t\\begin{enumerate}\n\t\t\t\\item For each set $ A, A \\in 2^A $ \\\\\n\t\t\t\tTrue. $ A \\in \\left\\{ A \\right\\} $ by definition\n\t\t\t\\item For each set $ A, A \\subset 2^A $ \\\\\n\t\t\t\tFalse \\\\\n\t\t\t\tex: $ A = \\left\\{ 1 \\right\\} $\\\\\n\t\t\t\t$ 2^A = \\left\\{ \\left\\{ 1 \\right\\} \\right\\} $ \\\\\n\t\t\t\t$ A \\subset 2^A \\leftrightarrow 1 \\in \\left\\{ 1 \\right\\} \\wedge 1 \\in\n\t\t\t\\item For each set $ A, {A} \\subset 2^A $\n\t\t\t\\item For each set $ A, \\emptyset \\subset 2^A $\n\t\t\t\\item For each set $ A, \\emptyset \\in 2^A $\n\t\t\t\\item There are no members of the set $ \\left\\{ \\emptyset \\right\\} $\n\t\t\t\\item Let A and B be sets. If $\\displaystyle A \\subset B$, then $ 2^A \\subset 2^B $\n\t\t\t\\item There are two distinct objects that belong to the set $ \\left\\{ \\emptyset \\left\\{ \\emptyset \\right\\} \\right\\} $\n\t\t\\end{enumerate}\n\t\t\\item Let A, B, C be sets. Prove if $ A \\subset B $ and $ B \\subset C $, then $ A \\subset C $\n\t\t\\item Let $ A_1,...,A_n $ be sets. Prove that if $ A_1 \\subset A_2, A_2 \\subset A_3, ..., A_{n-1} \\subset A_n, A_n \\subset A_1 $ then $ A_1 = A_2 = ... = A_n $\n\t\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "c87c1adebc2b42f929fb05600492ebb223748d26", "size": 1324, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter1.tex", "max_stars_repo_name": "jadekler/git-latex-topology", "max_stars_repo_head_hexsha": "bb3436f616c27d5cdcdcfa2b883af48ab16ddcce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter1.tex", "max_issues_repo_name": "jadekler/git-latex-topology", "max_issues_repo_head_hexsha": "bb3436f616c27d5cdcdcfa2b883af48ab16ddcce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter1.tex", "max_forks_repo_name": "jadekler/git-latex-topology", "max_forks_repo_head_hexsha": "bb3436f616c27d5cdcdcfa2b883af48ab16ddcce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8285714286, "max_line_length": 161, "alphanum_fraction": 0.6057401813, "num_tokens": 508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7577943767446201, "lm_q1q2_score": 0.5654199331885965}}
{"text": "\\subsection{Two dimensional uniform dataset with holes}\n\\label{sec:2d-holes}\n\nIn order to test for the adaptability of the randomized SOM to different topologies, we created a two dimensional uniform dataset with holes of various size and at random positions (see figure \\ref{fig:2D-holes:results}B). Such holes are known to pose difficulties to the regular SOM since neurons whose codewords are over a hole (or in the immediate vicinity) are attracted by neurons outside the holes from all sides. Those neurons hence become dead units that never win the competition. In the RSOM, this problem exists but is less severe thanks to the absence of regularity in the underlying neural topology and the loose constraints (we use 2 neighbors to build the topology). This can be observed in \\ref{fig:2D-holes:results}B) where the number of dead units is rather small and some holes are totally devoid of any neurons. Furthermore, when a sample that does not belong to the original distribution is presented, it can be observed that the answer of the map is maximal for a few neurons only (see figure \\ref{fig:2D-holes:results}G)).\n\n\n%The second experiment is designed to investigate how the SOM algorithms cope with two-dimensional uniformly distributed data points with holes ($x_1, x_2 \\sim \\mathcal{U}(0, 1)$). Therefore, in this case the input to the maps is two-dimensional Euclidean points on a plane, where we put some holes at random places on the plane (see Figure~\\ref{Fig:persistence_exp2b} {\\bfseries \\sffamily B}, blue discs). We train again both the VSOM and the Kohonen maps over $25000$ samples for $25000$ epochs and each map consists of $1024$ neurons. For the VSOM we define the topology of the map using a blue noise distribution (see Figure~\\ref{Fig:experiment2b} {\\bfseries \\sffamily A}) and for the Kohonen we use the standard rectangular Euclidean grid. After convergence both algorithms generate proper topographic maps covering the  input space. Figure~\\ref{Fig:experiment2b} {\\bfseries \\sffamily B} shows the mapping of VSOM learning algorithm (white discs and black segments) along with the input space (blue discs). We observe that not too many neurons cover the holes.\n\nThis observation is also supported by our topological analysis shown in figure \\ref{fig:2D-holes:analysis}. Figures\n\\ref{fig:2D-holes:analysis}A, B, and C show the persistent barcodes where we can see the lifespan of each (birth, death)\npair (for more details about how we compute these diagrams see Section~\\ref{sec:tda}). We observe that both the SOM and\nthe RSOM capture both the $H0$- and $H1$-homology of the input space, however the RSOM seems to have more persistent \ntopological features for the $H1$-homology (orange lines). This means that the RSOM can capture more accurately the \nholes which are present in the input space. Roughly speaking we have about eight holes (see figure~\\ref{fig:2D-holes:results}B) and we count about eight persistent features (the longest line segments\nin the barcode diagrams) for RSOM in \\ref{fig:2D-holes:analysis}C. On the other hand, the important persistent features\nfor the SOM are about five. In a similar way the persistent diagrams in figures~\\ref{fig:2D-holes:analysis}C, D, and E\nshow that both RSOM and SOM capture in a similar way both the $H0$- and $H1$-homology features, although the \nRSOM (panel F) captures more holes as the isolated orange points away from the diagonal line indicate. 
This\nis because the pairs that are further away from the diagonal are the most important, meaning that they represent the\ntopological features that persist longest during the filtration process.\nFurthermore, we measure the Bottleneck distance between the persistence diagrams of the input space and those of the\nSOM and RSOM. The SOM's persistence diagram for $H0$ is closer to the input space (SOM: $0.000829$, RSOM: $0.001$),\nwhile the RSOM's persistence diagram is closer to the input's for $H1$ (SOM: $0.00478$, RSOM: $0.0037$).\nFinally, we ought to point out that the scales of panels A (D) and B, C (E, F) are not the same, since the\nself-organization process compresses information when mapping the input space to the neural one.\n\n\n% On the upper part of this figure, we can see that the 1-homology (or H1, blue segments, blue dots) are more elongated for RSOM (last column) when compared to the regular SOM (central column). These segments correspond to the lifespan of H1 features when the $\\alpha$ radius is increased and we can observe that the RSOM is closer to the actual distribution (left column) when compared to SOM. More precisely, we observe that the number of holes in figure~\\ref{fig:2D-holes:results} is eight. Furthermore, we notice that the number of persistent blue segments (first raw of figure~\\ref{fig:2D-holes:analysis}) in persistent barcodes of figure~\\ref{fig:2D-holes:analysis} is about eight for the input space and about the same for the RSOM. The regular SOM expresses less persistent blue segments. The length of the blue or red segments in the barcodes indicate the lifespan  of the filtration under the variation of radius $\\alpha$. The more a segment survives the more important that topological feature is. Therefore, we can claim that the RSOM has about six persistent (significant) blue segments meaning that there are eight \\emph{holes} in the corresponding neural space. On the contrary, regular SOM has less persistent segments of 1-homology (blue lines) implying that it covers more regularly the input space. Similar results are shown in the second raw of figure~\\ref{fig:2D-holes:analysis}, where the persistent diagrams display the pairs (birth, death). Every time the radius $\\alpha$ increases new pairs are born or die as we have already described in Section~\\ref{sec:topo}. \n\n\n% However, 0-homology (H0, red segments) of RSOM and SOM behavior is very similar but also quite different from the actual distribution. \n\n% \\npr{ Here, an explanation of what are the red lines would be necessary. Also, concerning H0, I'm not sure how to explain the difference with SOM, RSOM and the actual distribution and what does that mean.}\n\n%Differences in $H0$ are attributed to the different neural space topology the two algorithms, the Kohonen and the VSOM, use. VSOM's neurons are placed not on an rectangular Euclidean grid as Kohonen's do, but instead on a random (following a blue noise distribution) Euclidean grid. \n\n% \\npr{ Where is the referenced figure below? Where is the explanation concerning birht death rate on the analysis figure ?}\n\n\n% In Figures~\\ref{Fig:experiment2b} {\\bfseries \\sffamily C}-{\\bfseries \\sffamily H} we show the receptive fields of six randomly chosen units from the VSOM  map. We see that different stimuli are captured by different units implying that the topographic map of VSOM map is well-formed. 
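\n\nThe barcodes and Bottleneck distances discussed above can also be recomputed with off-the-shelf TDA tooling; the snippet below is a minimal sketch using the \\texttt{ripser} and \\texttt{persim} Python packages (this is not the pipeline of Section~\\ref{sec:tda}, and the array shapes and sizes are placeholders).\n\\begin{verbatim}\nimport numpy as np\nfrom ripser import ripser          # Vietoris-Rips persistence\nfrom persim import bottleneck      # Bottleneck distance\n\nsamples = np.random.uniform(0, 1, (1000, 2))   # placeholder input set\ncodewords = samples[:256]                      # placeholder codewords\n\ndgm_in = ripser(samples, maxdim=1)['dgms']     # [H0, H1] diagrams\ndgm_map = ripser(codewords, maxdim=1)['dgms']\n\nfin = lambda d: d[np.isfinite(d[:, 1])]        # drop infinite H0 bars\nd0 = bottleneck(fin(dgm_in[0]), fin(dgm_map[0]))\nd1 = bottleneck(dgm_in[1], dgm_map[1])\n\\end{verbatim}\n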
\n\n\n\n\\begin{figure}\n  \\includegraphics[width=\\columnwidth]{experiment-2D-holes.pdf}\n  \\vspace{2mm}\n  \\includegraphics[width=\\columnwidth]{figures/colormap.pdf}\n  %\n  \\caption{%\n  %\n  {\\bfseries \\sffamily Two dimensional uniform dataset with holes (results)}\n  %\n  Randomized SOM made of $1024$ neurons with a $2$-nearest neighbors induced topology. Model has been trained for $25,000$ epochs on two-dimensional points drawn from a uniform distribution on the unit square with holes of various sizes and random positions. \\textbf{A} Map topology in neural space. \\textbf{B} Map topology in data space. \\textbf{C to H} Normalized distance map for six random samples. The \\textbf{G} point has been purposely set outside the point distribution. Normalization has been performed for each sample in order to enhance contrast but this prevents comparison between maps.\n  %\n  }\n  \\label{fig:2D-holes:results}\n\\end{figure}\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=\\columnwidth]{experiment-2D-holes-analysis.pdf}\n  %\n  \\caption{{\\bfseries \\sffamily Two dimensional uniform dataset with holes (analysis)}\n  Persistent Barcodes of \\textbf{A} input space, \\textbf{B} SOM, and \\textbf{RSOM}.\n  The blue and orange line segments represent the $H0$- and $H1$-homology, respectively. This means\n  that blue color represents connected segments within the space and orange color reflects the holes\n  within the space. The longer the line segment the more important the corresponding topological\n  feature. \\textbf{D} illustrates the persistent diagram for the input space. \\textbf{E} and \\textbf{F}\n  depict the persistent diagrams for SOM and RSOM, respectively. Again blue dots indicate $H0$-homology\n  features and orange dots represent $H1$-homological features.}\n  \\label{fig:2D-holes:analysis}\n\\end{figure}\n", "meta": {"hexsha": "885704ca15b97c6c9bd15b35b8b4e3243b313ee4", "size": 8629, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article-overleaf/03-results-A.tex", "max_stars_repo_name": "rougier/VSOM", "max_stars_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-11-20T06:27:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:20:28.000Z", "max_issues_repo_path": "article-overleaf/03-results-A.tex", "max_issues_repo_name": "rougier/VSOM", "max_issues_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article-overleaf/03-results-A.tex", "max_forks_repo_name": "rougier/VSOM", "max_forks_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-03T04:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T04:41:57.000Z", "avg_line_length": 118.2054794521, "max_line_length": 1588, "alphanum_fraction": 0.7837524626, "num_tokens": 2094, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7577943712746406, "lm_q1q2_score": 0.5654199291072315}}
{"text": "\\chapter{Appendix}\n\n\\begin{algorithm}[H]\n\t\\caption{Branch-and-Bound algorithm for a minimization problem}\n\t\\label{alg:lit:bab}\n\t\\begin{algorithmic}[1]\n\t\t\\Statex\n\t\t\\State $Incumbent=\\infty$\n\t\t\\State Initialize the tree with root node, $n_0$, and solve the relaxed LP\n\t\t% \\Let{$BestBound$}{LP solution}\n\t\t\\Let{$Nodes$}{$\\{n_0\\}$} \\Comment{\\emph{The set of nodes to be fathomed}}\n\t\t\\Statex\n\t\t\\While{$Nodes\\neq \\emptyset$}\n\t\t\t\\State Select a node $n\\in Nodes$\n\t\t\t\\State Choose a relaxed variable, $x_i$, in the LP solution\n\t\t\t\\State Branch on $x_i$, splitting its domain into two subsets, creating nodes $n_a$, $n_b$\n\t\t\t\\State Solve the two relaxed LPs of each node.\n\t\t\t\\For{$n_{new} \\in \\{n_a, n_b\\}$}\n\t\t\t\t\\If{$LPsol$ is integer}\n\t\t\t\t\t\\If{$LPsol < Incumbent$}\n\t\t\t\t\t\t\\Let{$Incumbent$}{$LPsol$} \\Comment{\\emph{Found new best solution}}\n\t\t\t\t\t\\ElsIf{$LPsol \\geq Incumbent$}\n\t\t\t\t\t\t\\Let{$Nodes$}{$Nodes\\cup\\{n_{new}\\}$}\n\t\t\t\t\t\\EndIf\n\t\t\t\t\\ElsIf{$LPsol$ is not integer}\n\t\t\t\t\t\\If{$LPsol \\leq BestBound$}\n\t\t\t\t\t\t\\Let{$Nodes$}{$Nodes\\cup\\{n_{new}\\}$}\n\t\t\t\t\t\\ElsIf{$LPsol > BestBound$}\n\t\t\t\t\t\t\\Let{$Nodes$}{$Nodes\\setminus\\big(n_{new}\\cup Children(n_{new})\\big)$} \\Comment{\\emph{Pruning}}\n\t\t\t\t\t\t\\State Continue to next node\n\t\t\t\t\t\\EndIf\n\t\t\t\t\\EndIf\n\t\t\t\t\\If{$LPsol < BestBound$}\n\t\t\t\t\t\\Let{$BestBound$}{$LPsol$}\n\t\t\t\t\\EndIf\n\t\t\t\t% \\State Current node is fathomed\n\t\t\t\\EndFor\n\t\t\\EndWhile\n\t\t\\Statex\n\t\t\\State \\Return $Incumbent$\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lstlisting}[caption={Full MiniZinc model for CP sub-problems},label={lst:appen:CPmodel},language=minizinc]\n% Constraint Programming model for a station's sub-problem\ninclude \"cumulative.mzn\";\ninclude \"disjunctive.mzn\";\ninclude \"redefinitions.mzn\";\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% INSTANCE INITIALISATION\nint: nTasks;\nint: nPrecs;\nint: maxLoad;  % maximum makespan\nset of int: TASK;\nset of int: PREC = 1..nPrecs;\nset of int: TIME = 0..maxLoad;\narray[TASK] of int: dur; % duration\narray[TASK] of set of TASK: suc; % set of successors\narray[TASK,TASK] of int: forwSU; % forward setup times\narray[TASK,TASK] of int: backSU; % backward setup times\narray[TASK] of set of TASK: followForw; % allowed followers in forward load\narray[TASK] of set of TASK: followBack; % allowed followers in backward load\narray[TASK] of set of TASK: precedeForw; % allowed preceders in forward load\narray[TASK] of set of TASK: precedeBack; % allowed preceders in backward load\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% DECISION VARIABLES\narray[TASK] of var TIME: s; % start time\narray[TASK,TASK] of var TIME: spair; % start time pairings\narray[TASK,TASK] of var bool: y; % forward direction following\narray[TASK,TASK] of var bool: z; % backward direction following\nvar TIME: load;  % load\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% CONSTRAINTS\n% Only one follower in either station load direction\nconstraint\n\tforall (\n\t\ti in TASK\n\t)(\n\t\t  sum( j in followForw[i] )( y[i,j] )\n\t\t+ sum( j in followBack[i] )( z[i,j] )\n\t\t== 1 \n\t);\n% Only one preceder in either station load direction\nconstraint\n\tforall (\n\t\tj in TASK\n\t)(\n\t\t  sum( i in precedeForw[j] )( y[i,j] )\n\t\t+ sum( i in 
\\begin{lstlisting}[caption={Full MiniZinc model for CP sub-problems},label={lst:appen:CPmodel},language=minizinc]\n% Constraint Programming model for a station's sub-problem\ninclude \"cumulative.mzn\";\ninclude \"disjunctive.mzn\";\ninclude \"redefinitions.mzn\";\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% INSTANCE INITIALISATION\nint: nTasks;\nint: nPrecs;\nint: maxLoad;  % maximum makespan\nset of int: TASK;\nset of int: PREC = 1..nPrecs;\nset of int: TIME = 0..maxLoad;\narray[TASK] of int: dur; % duration\narray[TASK] of set of TASK: suc; % set of successors\narray[TASK,TASK] of int: forwSU; % forward setup times\narray[TASK,TASK] of int: backSU; % backward setup times\narray[TASK] of set of TASK: followForw; % allowed followers in forward load\narray[TASK] of set of TASK: followBack; % allowed followers in backward load\narray[TASK] of set of TASK: precedeForw; % allowed preceders in forward load\narray[TASK] of set of TASK: precedeBack; % allowed preceders in backward load\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% DECISION VARIABLES\narray[TASK] of var TIME: s; % start time\narray[TASK,TASK] of var TIME: spair; % start time pairings\narray[TASK,TASK] of var bool: y; % forward direction following\narray[TASK,TASK] of var bool: z; % backward direction following\nvar TIME: load;  % load\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% CONSTRAINTS\n% Only one follower in either station load direction\nconstraint\n\tforall (\n\t\ti in TASK\n\t)(\n\t\t  sum( j in followForw[i] )( y[i,j] )\n\t\t+ sum( j in followBack[i] )( z[i,j] )\n\t\t== 1 \n\t);\n% Only one preceder in either station load direction\nconstraint\n\tforall (\n\t\tj in TASK\n\t)(\n\t\t  sum( i in precedeForw[j] )( y[i,j] )\n\t\t+ sum( i in precedeBack[j] )( z[i,j] )\n\t\t== 1 \n\t);\n% Exactly one backward setup\nconstraint\n\tsum( \n\t\ti in TASK, j in followBack[i]\n\t)(\n\t\tz[i,j]\n\t) == 1\n\t;\n% Precedence constraints\nconstraint\n\tforall ( \n\t\ti in TASK, j in suc[i] \n\t)(\n\t\ts[i] + dur[i] + forwSU[i,j]*y[i,j] <= s[j]\n\t);\n% Forward station load respects setup times\nconstraint\n\tforall (\n\t\ti in TASK, j in followForw[i] \n\t)(\n\t\ty[i,j] <-> ( s[i] + dur[i] + forwSU[i,j] == s[j] )\n\t);\n% Backward setup times count towards the station load\nconstraint\n\tforall (\n\t\ti in TASK\n\t)(\n\t\t  s[i] + dur[i]\n\t\t+ sum( \n\t\t\tj in followBack[i]\n\t\t  )(\n\t\t  \tbackSU[i,j]*z[i,j]\n\t\t  )\n\t\t<= load\n\t);\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% REDUNDANT CONSTRAINTS\n% Cumulative Global\nconstraint\n\tcumulative(\n\t\t[ spair[i,j] \t\t\t| i in TASK, j in TASK ],\n\t\t[ dur[i] + forwSU[i,j] \t| i in TASK, j in TASK ],\n\t\t[ y[i,j]\t\t\t\t| i in TASK, j in TASK ],\n\t\t1\n\t);\nconstraint\n\tforall( \n\t\ti in TASK, j in TASK\n\t)(\n\t\ts[i] == spair[i,j]\n\t);\n% Fix some ordering variables to zero\nconstraint\n\tforall (\n\t\ti in TASK, j in TASK\n\twhere\n\t\tnot( j in followForw[i] )\n\t)(\n\t\ty[i,j] == 0\n\t);\nconstraint\n\tforall (\n\t\ti in TASK, j in TASK\n\twhere\n\t\tnot( j in followBack[i] )\n\t)(\n\t\tz[i,j] == 0\n\t);\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% OBJECTIVE\nann: my_search;\n% Solve\nsolve :: my_search\nminimize load;\n%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~%\n% OUTPUT\noutput\nif full_output == 0 then    \n  [\"load = \" ++ show(load) ++ \"\\n\"]\nelseif full_output == 1 then\n  [\"load = \" ++ show(load) ++ \"\\n\"] ++\n  [\"start = \" ++ show(s) ++ \"\\n\"]\nelse\n  [\"\"]\nendif;\n\\end{lstlisting}\n\n\n\n\\begin{table}[tpb]\n\t\\centering\n\t\\caption{Breakdown of all 1076 adapted SBF2 instances}\n\t\\vspace{2mm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\tClass & Creator & \\#Inst. 
& $n$ & $m$ & $|E|$ & $OS$ \\\\\n\t\t\\midrule\\midrule\n\t\t1 &  & 108 & 7-21 & 2-8 & 6-27 & \\\\\n\t\t & {\\tt mertens}  & 6 & 7 & 2-6 & 6 & 52.40\\\\\n\t\t  & {\\tt bowman8}  & 1 & 8 & 5 & 8 & 75.00\\\\\n\t\t  & {\\tt jaeschke} & 5 & 9 & 3-8 & 11 & 83.33\\\\\n\t\t  & {\\tt jackson}  & 6 & 11 & 3-8 & 13 & 58.18\\\\\n\t\t  & {\\tt mansoor}  & 3 & 11 & 2-4 & 11 & 60.00\\\\\n\t\t  & {\\tt mitchell} & 6 & 21 & 3-8 & 27 & 70.95\\\\\\midrule\n\t\t2 &  & 112 & 25-30 & 3-14 & 32-40 & \\\\\n\t\t  & {\\tt roszieg} & 6 & 25 & 4-10 & 32 & 71.67\\\\\n\t\t  & {\\tt heskia}   & 6 & 28 & 3-8 & 40 & 22.49\\\\\n\t\t  & {\\tt buxey}    & 7 & 29 & 7-13 & 36 & 50.74\\\\\n\t\t  & {\\tt sawyer30} & 9 & 30 & 5-14 & 32 & 44.83\\\\\\midrule\n\t\t3 &  & 176 & 32-58 & 3-31 & 38-82 & \\\\\n\t\t  & {\\tt lutz1}   & 6 & 32 & 6-11 & 38 & 83.47\\\\\n\t\t  & {\\tt gunther}  & 7 & 35 & 7-14 & 43 & 59.50\\\\\n\t\t  & {\\tt kilbrid}  & 10 & 45 & 3-10 & 62 & 44.60\\\\\n\t\t  & {\\tt hahn}     & 5 & 53 & 4-8 & 82 & 83.82\\\\\n\t\t  & {\\tt warnecke} & 16 & 58 & 14-31 & 70 & 59.10\\\\\\midrule\n\t\t4 &  & 224 & 70-83 & 7-63 & 86-112 &  \\\\\n\t\t & {\\tt tonge} & 16 & 70 & 7-23 & 86 & 59.04 \\\\\n\t\t  & {\\tt wee-mag} & 24 & 75 & 31-63 & 87 & 22.70 \\\\\n\t\t  & {\\tt arc83} & 16 & 83 & 8-21 & 112 & 59.10 \\\\\\midrule\n\t\t5 &  & 144 & 89-94 & 12-49 & 116-181 &  \\\\\n\t\t  & {\\tt lutz2} & 11 & 89 & 24-49 & 116 & 77.60 \\\\\n\t\t  & {\\tt lutz3} & 12 & 89 & 12-23 & 116 & 77.60 \\\\\n\t\t  & {\\tt mukherje} & 13 & 94 & 13-25 & 181 & 44.80 \\\\\\midrule\n\t\t6 &  & 208 & 111-148 & 7-51 & 175-176 &  \\\\\n\t\t  & {\\tt arc111} & 17 & 111 & 9-27 & 176 & 40.40 \\\\\n\t\t  & {\\tt barthold} & 8 & 148 & 7-14 & 175 & 25.80 \\\\\n\t\t  & {\\tt barthol2} & 27 & 148 & 25-51 & 175 & 25.80 \\\\\\midrule\n\t\t7 &  & 104 & 297-297 & 25-50 & 423-423 &  \\\\\n\t\t  & {\\tt scholl} & 26 & 297 & 25-50 & 423 & 58.20 \\\\\\midrule\n\t\t\\multicolumn{2}{l}{Overall} & 1076 & 7-297 & 2-31 & 6-82 & \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{tab:appen:dataSBF2}\n\\end{table}\n\n\\begin{table}[tpb]\n\t\\tiny\n\t\\caption{Full results of FSBF-2 with (\\ref{eq:mip:valIneq1}) on classes 1,2 and 3}\n\t\\centering\n\t\\vspace{2mm}\n\t\\begin{tabular}{cclrrrrrr}\n\t\t\\toprule\n\t\tClass & Alpha & Creator & \\#Nodes & \\%Gap & \\#No solution & \\#Optimal & \\%Optimal & Runtime(s) \\\\\\midrule\\midrule\n\t\t1 & 1.00 & {\\tt mertens}\t& 117 & 0.00 & 0 & 6/6 & 100.00 & 0.46 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 0 & 0.00 & 0 & 1/1 & 100.00 & 0.40 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 249 & 0.00 & 0 & 5/5 & 100.00 & 0.60 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 3,798 & 0.00 & 0 & 6/6 & 100.00 & 6.31 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 5,237 & 0.00 & 0 & 3/3 & 100.00 & 3.95 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 234,143 & 0.79 & 0 & 5/6 & 83.33 & 682.81 \\\\\n\t\t& 0.75\t& {\\tt mertens}\t& 80 & 0.00 & 0 & 6/6 & 100.00 & 0.30 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 0 & 0.00 & 0 & 1/1 & 100.00 & 0.25 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 106 & 0.00 & 0 & 5/5 & 100.00 & 0.40 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 4,456 & 0.00 & 0 & 6/6 & 100.00 & 8.91 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 4,010 & 0.00 & 0 & 3/3 & 100.00 & 2.52 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 188,922 & 1.02 & 0 & 4/6 & 66.67 & 799.00 \\\\\n\t\t& 0.50\t& {\\tt mertens}\t& 80 & 0.00 & 0 & 6/6 & 100.00 & 0.32 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 0 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 143 & 0.00 & 0 & 5/5 & 100.00 & 0.47 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 4,456 & 0.00 & 0 & 6/6 & 100.00 
& 8.81 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 3,902 & 0.00 & 0 & 3/3 & 100.00 & 2.70 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 166,621 & 1.02 & 0 & 4/6 & 66.67 & 737.33 \\\\\n\t\t& 0.25\t& {\\tt mertens}\t& 80 & 0.00 & 0 & 6/6 & 100.00 & 0.37 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 0 & 0.00 & 0 & 1/1 & 100.00 & 0.06 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 107 & 0.00 & 0 & 5/5 & 100.00 & 0.42 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 4,456 & 0.00 & 0 & 6/6 & 100.00 & 8.76 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 1,394 & 0.00 & 0 & 3/3 & 100.00 & 1.21 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 166,876 & 1.02 & 0 & 4/6 & 66.67 & 738.00 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 43,437 & 0.16 & 0 & 101/108 & 93.52 & 166.57 \\\\\\midrule\n\t\t2 & 1.00 & {\\tt roszieg}\t& 269,369 & 12.92 & 0 & 0/6 & 0.00 & 1800.47 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 401,337 & 12.40 & 0 & 0/6 & 0.00 & 1800.89 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 55,003 & 28.79 & 0 & 0/7 & 0.00 & 1800.70 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 30,589 & 28.26 & 0 & 0/9 & 0.00 & 1800.84 \\\\\n\t\t& 0.75\t& {\\tt roszieg}\t& 211,653 & 8.15 & 0 & 0/6 & 0.00 & 1800.44 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 375,894 & 8.60 & 0 & 0/6 & 0.00 & 1800.88 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 26,939 & 25.80 & 0 & 0/7 & 0.00 & 1800.70 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 23,766 & 25.86 & 0 & 0/9 & 0.00 & 1800.86 \\\\\n\t\t& 0.50\t& {\\tt roszieg}\t& 364,062 & 5.29 & 0 & 0/6 & 0.00 & 1800.41 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 354,638 & 6.84 & 0 & 0/6 & 0.00 & 1800.68 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 38,177 & 16.03 & 0 & 0/7 & 0.00 & 1800.57 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 34,790 & 16.35 & 0 & 0/9 & 0.00 & 1800.67 \\\\\n\t\t& 0.25\t& {\\tt roszieg}\t& 364,552 & 5.29 & 0 & 0/6 & 0.00 & 1800.41 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 330,465 & 3.81 & 0 & 0/6 & 0.00 & 1800.71 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 85,544 & 8.13 & 0 & 0/7 & 0.00 & 1800.64 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 71,434 & 7.64 & 0 & 0/9 & 0.00 & 1800.59 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 168,899 & 13.76 & 0 & 0/112 & 0.00 & 1800.66 \\\\\\midrule\n\t\t3 & 1.00 & {\\tt lutz1}\t& 52,738 & 10.98 & 0 & 1/6 & 16.67 & 1733.60 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 37,277 & 31.30 & 0 & 0/7 & 0.00 & 1800.91 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 45,651 & 24.87 & 1 & 0/10 & 0.00 & 1801.75 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 74,365 & 14.70 & 0 & 0/5 & 0.00 & 1801.53 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 4,813 & 58.44 & 0 & 0/16 & 0.00 & 1802.23 \\\\\n\t\t& 0.75\t& {\\tt lutz1}\t& 49,364 & 10.60 & 0 & 0/6 & 0.00 & 1800.53 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 39,351 & 22.45 & 0 & 0/7 & 0.00 & 1800.90 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 85,124 & 18.87 & 2 & 0/10 & 0.00 & 1801.61 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 112,021 & 7.18 & 0 & 0/5 & 0.00 & 1801.44 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 4,580 & 48.45 & 1 & 0/16 & 0.00 & 1802.40 \\\\\n\t\t& 0.50\t& {\\tt lutz1}\t& 115,017 & 2.81 & 0 & 2/6 & 33.33 & 1706.07 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 26,716 & 18.32 & 0 & 0/7 & 0.00 & 1800.84 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 49,411 & 17.51 & 1 & 0/10 & 0.00 & 1801.55 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 108,078 & 7.62 & 0 & 0/5 & 0.00 & 1801.18 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 7,121 & 41.94 & 1 & 0/16 & 0.00 & 1802.23 \\\\\n\t\t& 0.25\t& {\\tt lutz1}\t& 63,440 & 1.76 & 0 & 4/6 & 66.67 & 733.06 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 31,915 & 11.11 & 0 & 0/7 & 0.00 & 1800.79 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 100,912 & 5.77 & 3 & 0/10 & 0.00 & 1801.41 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 161,258 & 4.92 & 0 & 0/5 & 0.00 & 1801.18 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 7,657 & 
31.58 & 0 & 0/16 & 0.00 & 1802.41 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 46,060 & 19.56 & 9 & 7/176 & 3.98 & 1759.67 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{tab:appen:mipfsbf}\n\\end{table}\n\n\n\\begin{table}[tpb]\n\t\\tiny\n\t\\caption{Full results of SCBF-2 with (\\ref{eq:mip:valIneq1}) on classes 1,2 and 3}\n\t\\centering\n\t\\vspace{2mm}\n\t\\begin{tabular}{cclrrrrrr}\n\t\t\\toprule\n\t\tClass & Alpha & Creator & \\#Nodes & \\%Gap & \\#No solution & \\#Optimal & \\%Optimal & Runtime(s) \\\\\\midrule\\midrule\n\t\t1 & 1.00 & {\\tt mertens}\t& 623 & 0.00 & 0 & 6/6 & 100.00 & 0.31 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 568 & 0.00 & 0 & 1/1 & 100.00 & 0.36 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 739 & 0.00 & 0 & 5/5 & 100.00 & 0.43 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 11,149 & 0.00 & 0 & 6/6 & 100.00 & 9.00 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 10,954 & 0.00 & 0 & 3/3 & 100.00 & 5.38 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 1,123,442 & 11.10 & 0 & 0/6 & 0.00 & 1800.25 \\\\\n\t\t& 0.75\t& {\\tt mertens}\t& 500 & 0.00 & 0 & 6/6 & 100.00 & 0.30 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 142 & 0.00 & 0 & 1/1 & 100.00 & 0.21 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 596 & 0.00 & 0 & 5/5 & 100.00 & 0.28 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 18,033 & 0.00 & 0 & 6/6 & 100.00 & 11.11 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 9,303 & 0.00 & 0 & 3/3 & 100.00 & 5.04 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 957,732 & 11.81 & 0 & 0/6 & 0.00 & 1800.24 \\\\\n\t\t& 0.50\t& {\\tt mertens}\t& 500 & 0.00 & 0 & 6/6 & 100.00 & 0.27 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 34 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 665 & 0.00 & 0 & 5/5 & 100.00 & 0.26 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 18,033 & 0.00 & 0 & 6/6 & 100.00 & 10.85 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 11,228 & 0.00 & 0 & 3/3 & 100.00 & 6.38 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 1,116,662 & 12.23 & 0 & 0/6 & 0.00 & 1800.23 \\\\\n\t\t& 0.25\t& {\\tt mertens}\t& 500 & 0.00 & 0 & 6/6 & 100.00 & 0.29 \\\\\n\t\t&\t\t& {\\tt bowman8}\t& 49 & 0.00 & 0 & 1/1 & 100.00 & 0.10 \\\\\n\t\t&\t\t& {\\tt jaeschke}\t& 545 & 0.00 & 0 & 5/5 & 100.00 & 0.28 \\\\\n\t\t&\t\t& {\\tt jackson}\t& 18,033 & 0.00 & 0 & 6/6 & 100.00 & 10.73 \\\\\n\t\t&\t\t& {\\tt mansoor}\t& 12,718 & 0.00 & 0 & 3/3 & 100.00 & 7.20 \\\\\n\t\t&\t\t& {\\tt mitchell}\t& 1,117,030 & 12.23 & 0 & 0/6 & 0.00 & 1800.24 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 244,811 & 1.97 & 0 & 84/108 & 77.78 & 403.17 \\\\\\midrule\n\t\t2 & 1.00 & {\\tt roszieg}\t& 991,067 & 20.31 & 0 & 0/6 & 0.00 & 1800.30 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 726,379 & 11.69 & 2 & 0/6 & 0.00 & 1800.60 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 685,017 & 16.08 & 4 & 0/7 & 0.00 & 1800.50 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 530,560 & 27.54 & 4 & 0/9 & 0.00 & 1800.57 \\\\\n\t\t& 0.75\t& {\\tt roszieg}\t& 734,178 & 18.44 & 0 & 0/6 & 0.00 & 1800.24 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 756,204 & 17.72 & 2 & 0/6 & 0.00 & 1800.56 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 558,135 & 24.44 & 3 & 0/7 & 0.00 & 1800.59 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 497,528 & 29.45 & 4 & 0/9 & 0.00 & 1800.49 \\\\\n\t\t& 0.50\t& {\\tt roszieg}\t& 1,195,487 & 16.73 & 0 & 0/6 & 0.00 & 1800.31 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 711,764 & 13.75 & 2 & 0/6 & 0.00 & 1800.52 \\\\\n\t\t&\t\t& {\\tt buxey}\t& 609,114 & 22.60 & 2 & 0/7 & 0.00 & 1800.50 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 556,436 & 19.70 & 4 & 0/9 & 0.00 & 1800.53 \\\\\n\t\t& 0.25\t& {\\tt roszieg}\t& 1,193,838 & 16.73 & 0 & 0/6 & 0.00 & 1800.28 \\\\\n\t\t&\t\t& {\\tt heskia}\t& 652,072 & 11.64 & 2 & 0/6 & 0.00 & 1800.49 
\\\\\n\t\t&\t\t& {\\tt buxey}\t& 580,369 & 23.58 & 0 & 0/7 & 0.00 & 1800.48 \\\\\n\t\t&\t\t& {\\tt sawyer30}\t& 614,005 & 25.58 & 2 & 0/9 & 0.00 & 1800.47 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 701,617 & 19.75 & 31 & 0/112 & 0.00 & 1800.47 \\\\\\midrule\n\t\t3 & 1.00 & {\\tt lutz1}\t& 716,824 & 18.94 & 0 & 0/6 & 0.00 & 1800.37 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 482,696 & 10.68 & 5 & 0/7 & 0.00 & 1800.65 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 318,856 & -- & 10 & 0/10 & 0.00 & 1801.17 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 450,264 & 21.67 & 0 & 0/5 & 0.00 & 1800.85 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 170,586 & -- & 16 & 0/16 & 0.00 & 1801.47 \\\\\n\t\t& 0.75\t& {\\tt lutz1}\t& 644,760 & 16.40 & 0 & 0/6 & 0.00 & 1800.38 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 525,585 & 21.13 & 4 & 0/7 & 0.00 & 1800.62 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 320,037 & -- & 10 & 0/10 & 0.00 & 1801.08 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 414,795 & 27.77 & 0 & 0/5 & 0.00 & 1800.86 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 167,942 & -- & 16 & 0/16 & 0.00 & 1801.36 \\\\\n\t\t& 0.50\t& {\\tt lutz1}\t& 473,510 & 15.23 & 0 & 0/6 & 0.00 & 1800.42 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 463,846 & 28.02 & 2 & 0/7 & 0.00 & 1800.60 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 323,799 & -- & 10 & 0/10 & 0.00 & 1800.96 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 363,883 & 18.67 & 0 & 0/5 & 0.00 & 1800.92 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 135,786 & -- & 16 & 0/16 & 0.00 & 1801.48 \\\\\n\t\t& 0.25\t& {\\tt lutz1}\t& 475,795 & 16.00 & 0 & 0/6 & 0.00 & 1800.33 \\\\\n\t\t&\t\t& {\\tt gunther}\t& 427,559 & 23.37 & 2 & 0/7 & 0.00 & 1800.50 \\\\\n\t\t&\t\t& {\\tt kilbrid}\t& 356,573 & -- & 10 & 0/10 & 0.00 & 1800.97 \\\\\n\t\t&\t\t& {\\tt hahn}\t& 310,303 & 26.20 & 0 & 0/5 & 0.00 & 1800.70 \\\\\n\t\t&\t\t& {\\tt warnecke}\t& 111,564 & -- & 16 & 0/16 & 0.00 & 1801.41 \\\\[1mm]\n\t\t\\multicolumn{2}{l}{Overall} & & 326,284 & 12.20 & 117 & 0/176 & 0.00 & 1801.00 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{tab:appen:mipSCBF}\n\\end{table}\n", "meta": {"hexsha": "21f0349298713258692acb75d3e271d121cfd3ec", "size": 16295, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_stars_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_stars_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_issues_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_issues_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assembly Task Scheduling/sualbsp-2-master/thesis/chap_appendix.tex", "max_forks_repo_name": "BillChan226/ODA-Multi-Manipulator", "max_forks_repo_head_hexsha": "6863558a9fdb946fb16c67a7660172154274cfe0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.9947229551, "max_line_length": 115, "alphanum_fraction": 0.495857625, "num_tokens": 8327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389817407017, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.5654199164703305}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{titletoc}\n\\usepackage{titlesec}\n\\usepackage{geometry} \n\\usepackage{fontspec, xunicode, xltxtra}\n\\usepackage{float}\n\\usepackage{cite}\n\\usepackage{amsmath}\n\\usepackage{listings}\n\\usepackage{titletoc}\n\n\\geometry{left=3cm,right=3cm,top=3cm,bottom=3cm}\n\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\logit}{logit}\n\\DeclareMathOperator*{\\var}{var}\n\\DeclareMathOperator*{\\cov}{cov}\n\\DeclareMathOperator*{\\expec}{E}\n\\DeclareMathOperator*{\\deriv}{d}\n\\DeclareMathOperator*{\\const}{constant}\n\n\\begin{document}\n\\title{\\textsf{Homework 4 for Bayesian Data Analysis}}\n\\author{Fan JIN\\quad (2015011506)}\n\\maketitle\n\n\\section*{Question 5.10a}\n{\n    $$p(\\mu, \\tau | y) \\propto p(\\mu, \\tau) p(y | \\mu, \\tau) = p(\\mu, \\tau) \\cdot \\prod_{j=1}^{J} {N(\\bar{y}_{.j} | \\mu, \\sigma_j^2 + \\tau^2)}$$\n    $$\\propto \\tau^{-1} \\cdot \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)^{-1/2} \\exp{ \\left( - \\frac{(\\bar{y}_{.j} - \\mu)^2}{2(\\sigma_j^2 + \\tau^2)} \\right) }},$$\n    where $\\sigma_j^2 = \\sigma^2 / n_j$ is the variance of the $j$-th group.\n\n    As $\\tau \\rightarrow 0$, the posterior pdf $p(\\mu, \\tau | y)$ is dominated by $\\tau^{-1}$, which is not integrable. Therefore, the posterior distribution is improper.\n}\n\n\\section*{Question 5.10b}\n{\n    $$p(\\mu, \\tau | y) \\propto p(\\mu, \\tau) p(y | \\mu, \\tau) = p(\\mu, \\tau) \\cdot \\prod_{j=1}^{J} {N(\\bar{y}_{.j} | \\mu, \\sigma_j^2 + \\tau^2)}$$\n    $$= \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)^{-1/2} \\exp{ \\left( - \\frac{(\\bar{y}_{.j} - \\mu)^2}{2(\\sigma_j^2 + \\tau^2)} \\right) }}$$\n    $$= \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot \\exp{\\left[ -\\frac{1}{2} \\sum_{j=1}^{J} {\\frac{(\\bar{y}_{.j} - \\mu)^2}{\\sigma_j^2 + \\tau^2}} \\right]}$$\n    $$= \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot \\exp{\\left[ -\\frac{1}{2} {\\left( A(\\mu-B)^2 + C \\right)} \\right]}$$\n    $$= \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot \\exp{\\left[ -\\frac{A}{2} (\\mu-B)^2 \\right]} \\cdot \\exp{(-\\frac{C}{2})},$$\n    where $$A = \\sum_{j=1}^{J} {\\frac{1}{(\\sigma_j^2 + \\tau^2)}},$$\n    $$C = \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}^2}{(\\sigma_j^2 + \\tau^2)}} \\right] - A^{-1} \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}}{(\\sigma_j^2 + \\tau^2)}} \\right]^2 .$$\n\n    It follows that \n    $$\\int_{-\\infty}^{\\infty} {p(\\mu, \\tau | y) \\deriv{\\mu}} \\propto \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot A^{-1/2} \\cdot \\exp{(-\\frac{C}{2})}.$$\n\n    As $\\tau \\rightarrow 0$, the integral above goes to a finite constant. 
As $\\tau \\rightarrow \\infty$, we have\n    $$A \\rightarrow J \\tau^{-2},$$\n    $$C \\rightarrow \\tau^{-2} \\cdot \\left[ \\sum_{j=1}^{J}{\\bar{y}_{.j}^2} - \\frac{1}{J} (\\sum_{j=1}^{J}{\\bar{y}_{.j}})^2 \\right],$$\n    and the integral above is therefore dominated by\n    $$\\tau^{-J} \\cdot J^{-1/2} \\tau \\cdot \\exp{\\left(-\\frac{D}{2} \\tau^{-2}\\right)} \\propto \\tau^{-(J-1)} \\exp{\\left(-\\frac{D}{2} \\tau^{-2}\\right)} \\rightarrow \\tau^{-(J-1)},$$ \n    where $$D = \\sum_{j=1}^{J}{\\bar{y}_{.j}^2} - \\frac{1}{J} (\\sum_{j=1}^{J}{\\bar{y}_{.j}})^2 > 0.$$\n    Since $\\tau^{-(J-1)}$ is integrable at infinity if $J > 2$, the posterior distribution is proper:\n    $$\\int_{0}^{\\infty} { \\left\\{ \\int_{-\\infty}^{\\infty} {p(\\mu, \\tau | y) \\deriv{\\mu}} \\right\\} \\deriv{\\tau}} < \\infty.$$\n}\n\n\\section*{Question 5.10c}\n{\n    With only two groups in the hierarchical model, we don't have enough information for inference: the hyperparameters $(\\mu, \\tau)$ are estimated precisely only when $J$ is large. An alternative is to use a model other than the hierarchical model, e.g., to assume the two schools are independent of each other and to assign an independent prior distribution to each school (i.e. treat the two schools separately), as shown below:\n    $$y_{1j} \\sim^{\\mathrm{i.i.d.}} N(\\theta_1, \\sigma^2), \\quad y_{2j} \\sim^{\\mathrm{i.i.d.}} N(\\theta_2, \\sigma^2),$$\n    where $\\sigma^2$ is known, and $$\\theta_1 \\sim N(\\mu_1, \\tau_1^2), \\quad \\theta_2 \\sim N(\\mu_2, \\tau_2^2)$$ with hyperparameters $(\\mu_1, \\mu_2, \\tau_1, \\tau_2)$. Thus, $p(\\mu_1, \\tau_1 | \\bar{y}_{1j})$ and $p(\\mu_2, \\tau_2 | \\bar{y}_{2j})$ can be obtained separately.\n\n}\n\n\\section*{Question 5.12}\n{\n    First, we have\n    $$\\theta_j | \\mu, \\tau, y \\sim N( \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\mu}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}}, \\frac{1}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}}).$$\n    It follows that $$\\expec{(\\theta_j | \\mu, \\tau, y)} = \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\mu}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}},$$\n    $$\\var{(\\theta_j | \\mu, \\tau, y)} = \\frac{1}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}}.$$\n\n    Second, we have\n    $$\\mu | \\tau, y \\sim N( \\frac{\\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2} \\bar{y}_{.j}}}{\\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2}}}, \\left( \\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2}} \\right)^{-1} ).$$\n    It follows that $$\\expec{(\\mu | \\tau, y)} = \\frac{\\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2} \\bar{y}_{.j}}}{\\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2}}},$$\n    $$\\var{(\\mu | \\tau, y)} = (\\sum_{j=1}^{J}{\\frac{1}{\\sigma_j^2+\\tau^2}})^{-1}.$$\n\n    Therefore, the posterior expectation and variance of $\\theta_j$ conditional on $\\tau$ and $y$ are (renaming the summation index to $k$ to avoid a clash with the fixed index $j$):\n    $$\\expec{(\\theta_j | \\tau, y)} = \\expec_{\\mu | \\tau, y}{\\left[ \\expec{(\\theta_j | \\mu, \\tau, y)} \\right]} $$\n    $$= \\expec_{\\mu | \\tau, y}{\\left[ \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\mu}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} \\right]}$$\n    $$= \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\expec{(\\mu | \\tau, y)}}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}}$$\n    $$= \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\frac{\\sum_{k=1}^{J}{\\frac{1}{\\sigma_k^2+\\tau^2} \\bar{y}_{.k}}}{\\sum_{k=1}^{J}{\\frac{1}{\\sigma_k^2+\\tau^2}}}}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}},$$\n    and\n    $$\\var{(\\theta_j | \\tau, y)} = \\expec_{\\mu | \\tau, y}{\\left[ \\var{(\\theta_j | \\mu, \\tau, y)} \\right]} + \\var_{\\mu | \\tau, y}{\\left[ \\expec{(\\theta_j | \\mu, \\tau, y)} \\right]}$$\n    $$= \\frac{1}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} + \\var_{\\mu | \\tau, y}{\\left[ \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\mu}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} \\right]}$$\n    $$= \\frac{1}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} + \\left( \\frac{\\frac{1}{\\tau^2}}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} \\right)^2 \\var{(\\mu | \\tau, y)}$$\n    $$= \\frac{1}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} + \\left( \\frac{\\frac{1}{\\tau^2}}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}} \\right)^2 \\left(\\sum_{k=1}^{J}{\\frac{1}{\\sigma_k^2+\\tau^2}}\\right)^{-1}.$$\n\n}\n\n\\section*{Question 5.15a}\n{\n    Build the hierarchical model below:\n    $$y_j | \\theta_j, \\sigma_j \\sim N(\\theta_j, \\sigma_j^2),$$\n    where $$y_j = \\logit{(y_{1j} / n_{1j})} - \\logit{(y_{0j} / n_{0j})}$$ is the log-odds (observed data), and $$\\sigma_j^2 = (y_{1j})^{-1} + (n_{1j} - y_{1j})^{-1} + (y_{0j})^{-1} + (n_{0j} - y_{0j})^{-1}$$ is the approximated sampling variance (assumed known). The model parameters are $$\\theta_j \\sim^{\\mathrm{i.i.d.}} N(\\mu, \\tau^2)$$ with hyperparameters $(\\mu, \\tau)$.\n\n    Assign a noninformative hyperprior distribution $p(\\mu, \\tau) \\propto 1$. It follows from Question 5.10b that\n    $$p(\\tau | y) = \\int_{-\\infty}^{\\infty} {p(\\mu, \\tau | y) \\deriv{\\mu}} \\propto \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot A^{-1/2} \\cdot \\exp{(-\\frac{C}{2})},$$ where $A$, $C$ are defined in my answer to Question 5.10b:\n    $$A = \\sum_{j=1}^{J} {\\frac{1}{(\\sigma_j^2 + \\tau^2)}},$$\n    $$C = \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}^2}{(\\sigma_j^2 + \\tau^2)}} \\right] - A^{-1} \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}}{(\\sigma_j^2 + \\tau^2)}} \\right]^2 .$$\n\n    Although the expression is a bit complicated for analysis, it can be easily calculated numerically. Below is the output in R:\n    \\begin{figure}[H]\n        \\centering\n        \\includegraphics[width = 0.8\\linewidth]{tau.posterior.png}\n        \\caption{Posterior density of $\\tau$}\n    \\end{figure}\n\n}\n\n\\section*{Question 5.15b}\n{\n    Based on what we obtained in Question 5.12 about the posterior mean and variance of $\\theta_j$ conditional on $\\tau$, we get the plots below. 
\n    \\begin{figure}[H]\n        \\centering\n        \\includegraphics[width = 1.0\\linewidth]{theta.posterior.means.png}\n        \\caption{Posterior means of $\\theta_j$}\n    \\end{figure}\n    \\begin{figure}[H]\n        \\centering\n        \\includegraphics[width = 1.0\\linewidth]{theta.posterior.vars.png}\n        \\caption{Posterior variances of $\\theta_j$}\n    \\end{figure}\n    \n}\n\n\\section*{Question 5.15c}\n{\n    First, we obtained the posterior joint distribution for hyperparameters in Question 5.10b:\n    $$p(\\mu, \\tau | y) \\propto \\left[ \\prod_{j=1}^{J} {(\\sigma_j^2 + \\tau^2)} \\right]^{-1/2} \\cdot \\exp{\\left[ -\\frac{A}{2} (\\mu-B)^2 \\right]} \\cdot \\exp{(-\\frac{C}{2})},$$\n    where $$A = \\sum_{j=1}^{J} {\\frac{1}{(\\sigma_j^2 + \\tau^2)}},$$\n    $$C = \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}^2}{(\\sigma_j^2 + \\tau^2)}} \\right] - A^{-1} \\left[ \\sum_{j=1}^{J} {\\frac{\\bar{y}_{.j}}{(\\sigma_j^2 + \\tau^2)}} \\right]^2 .$$\n\n    Second, we have \n    $$p(\\theta_j | \\mu, \\tau, y) \\propto (V_j)^{-1/2} \\exp{(-\\frac{(\\theta_j - \\hat{\\theta}_j)^2}{2V_j})}, $$\n    where $$\\hat{\\theta}_j = \\frac{\\frac{1}{\\sigma_j^2} \\bar{y}_{.j} + \\frac{1}{\\tau^2} \\mu}{\\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2}}, $$\n    $$V_j = \\left( \\frac{1}{\\sigma_j^2} + \\frac{1}{\\tau^2} \\right)^{-1}.$$\n\n    It follows that \n    $$p(\\theta_j | y) = \\int_{0}^{\\infty} {\\int_{-\\infty}^{\\infty} {p(\\mu, \\tau | y) \\cdot p(\\theta_j | \\mu, \\tau, y) \\deriv{\\mu}} \\deriv{\\tau}}.$$ The median of $\\theta_j | y$ is obtained as well.\n}\n\n\\section*{Question 5.15d}\n{\n    First, draw $\\tau$ from $p(\\tau | y)$. Then draw $\\mu$ from $p(\\mu | \\tau, y)$. Finally, draw $\\theta_j$ from $p(\\theta_j | \\mu, \\tau, y)$.\n\n    \\begin{figure}[H]\n        \\centering\n        \\includegraphics[width = 1.0\\linewidth]{theta.simulations.png}\n        \\caption{Simulations of $\\theta_j$s}\n    \\end{figure}\n}\n\n\\section*{Question 5.15e}\n{\n    I failed to simulate $y_{1j}$ and $y_{0j}$ from the $\\theta_j$s simulated in Question 5.15d, because we cannot obtain the likelihood $$p(y_j | \\theta_j, \\sigma_j^2) = N(\\theta_j, \\sigma_j^2)$$ with $\\sigma_j^2$ unknown for the hypothetical new study, although $\\theta_j$ has been simulated in Question 5.15d.\n}\n\n\\section*{Source Code in R:}\n{\n    Raw file: http://39.106.23.58/files/BayesianHW4.7z\n\n    \\begin{lstlisting}[language=R]\n        plot.increment = 0.0005\n        dat = read.csv(\"data.csv\", header=T)\n        \n        logit <- function(x) {\n          return (log(x / (1 - x)))\n        }\n        \n        log.odds = logit(dat$treated.deaths / dat$treated.total) - logit(dat$control.deaths / dat$control.total)\n        std.err = sqrt((dat$treated.deaths)^(-1) + \n                         (dat$treated.total - dat$treated.deaths)^(-1) + \n                         (dat$control.deaths)^(-1) +\n                         (dat$control.total - dat$control.deaths)^(-1))\n        \n        dat = cbind(dat, log.odds)\n        dat = cbind(dat, std.err)\n        \n        J = nrow(dat)\n        \n        # Question 5.15a\n        \n        tau.posterior <- function(tau, dat) {\n          J = nrow(dat)\n          \n          A = 0\n          for (j in 1:J) {\n            A = A + 1 / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          S1 = 0\n          for (j in 1:J) {\n            S1 = S1 + (dat$log.odds[j]) / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          S2 = 0\n        
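  # S2 accumulates sum_j log.odds[j]^2 / (std.err[j]^2 + tau^2); together with\n          # S1 and A it gives C = S2 - S1^2 / A, matching Question 5.10b\n        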
  for (j in 1:J) {\n            S2 = S2 + (dat$log.odds[j]^2) / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          C = S2 - S1^2 / A\n          \n          # P is the log of prod_j (std.err[j]^2 + tau^2)^(-1/2), exponentiated below\n          P = 0\n          for (j in 1:J) {\n            P = P + (-1/2) * log(dat$std.err[j]^2 + tau^2)\n          }\n          P = exp(P)\n          \n          return (P * A^(-1/2) * exp(-C/2))\n          \n        }\n        \n        taus = seq(0 + plot.increment, 1, by = plot.increment)\n        taus.posterior = unlist(lapply(taus, tau.posterior, dat))\n        taus.posterior = taus.posterior / sum(taus.posterior) / plot.increment # rescale so the density integrates to 1\n        \n        png(\"tau.posterior.png\", width = 800, height = 600)\n        plot(taus, taus.posterior)\n        dev.off()\n        \n        # Question 5.15b\n        \n        theta.posterior.mean <- function(tau, dat) {\n          J = nrow(dat)\n          \n          S0 = 0\n          for (j in 1:J) {\n            S0 = S0 + 1 / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          S1 = 0\n          for (j in 1:J) {\n            S1 = S1 + (dat$log.odds[j]) / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          ret = numeric(J) # one entry per study (was hard-coded to 12)\n          for (j in 1:J) {\n            A = dat$log.odds[j] / dat$std.err[j]^2 + 1 / tau^2 * S1 / S0\n            B = 1 / dat$std.err[j]^2 + 1 / tau^2\n            ret[j] = A / B\n          }\n          \n          return (ret)\n        }\n        \n        theta.posterior.var <- function(tau, dat) {\n          J = nrow(dat)\n          \n          S0 = 0\n          for (j in 1:J) {\n            S0 = S0 + 1 / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          ret = numeric(J) # one entry per study (was hard-coded to 12)\n          for (j in 1:J) {\n            A = 1 / tau^2\n            B = 1 / dat$std.err[j]^2 + 1 / tau^2\n            ret[j] = 1 / B + (A / B)^2 / S0\n          }\n          \n          return (ret)\n        }\n        \n        theta.posterior.means = lapply(taus, theta.posterior.mean, dat)\n        theta.posterior.means = matrix(unlist(theta.posterior.means), ncol=J, byrow=T)\n        \n        theta.posterior.vars = lapply(taus, theta.posterior.var, dat)\n        theta.posterior.vars = matrix(unlist(theta.posterior.vars), ncol=J, byrow=T)\n        \n        png(\"theta.posterior.means.png\", width = 800, height = 600)\n        matplot(taus, theta.posterior.means, type=\"l\", lty=1, main=\"Posterior means of theta\")\n        dev.off()\n        \n        png(\"theta.posterior.vars.png\", width = 800, height = 600)\n        matplot(taus, theta.posterior.vars, type=\"l\", lty=1, main=\"Posterior variances of theta\")\n        dev.off()\n        \n        # Question 5.15c\n        \n        index.MLE = which.max(taus.posterior) # posterior mode of tau, used as a crude point estimate for theta_j\n        tau.MLE = taus[index.MLE]\n        \n        # Helper densities for Questions 5.15c and 5.15d\n        \n        p.mu.tau.c.dat <- function(mu, tau, dat) {\n          J = nrow(dat)\n          \n          A = 0\n          for (j in 1:J) {\n            A = A + 1 / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          S1 = 0\n          for (j in 1:J) {\n            S1 = S1 + (dat$log.odds[j]) / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          S2 = 0\n          for (j in 1:J) {\n            S2 = S2 + (dat$log.odds[j]^2) / (dat$std.err[j]^2 + tau^2)\n          }\n          \n          C = S2 - S1^2 / A\n          \n          B = S1 / A # B reuses the sum already accumulated in S1\n          \n          P = 0\n          for (j in 1:J) {\n            P = P + (-1/2) * log(dat$std.err[j]^2 + tau^2)\n          }\n          P = exp(P)\n          \n          return (P * exp(-A/2 * (mu - B)^2) * exp(-C/2))\n        }\n        \n        p.theta.c.mu.tau.dat <- function(j, theta.j, mu, tau, dat) {\n          J = nrow(dat)\n          \n          A = 1 / dat$std.err[j]^2\n          B = 1 / tau^2\n          \n          theta.j.hat = (A * dat$log.odds[j] + B * mu) / (A + B)\n          V.j = 1 / (A + B)\n          \n          return (V.j^(-1/2) * exp(-1/2 * (theta.j - theta.j.hat)^2 / V.j))\n        }\n        \n        # Question 5.15d\n        \n        T = 1000\n        samples = matrix(NA, T, J)\n        \n        for (k in 1:T) {\n          U = runif(1)\n          temp = 1:length(taus)\n          # taus.posterior is a density, so multiply by the grid step to get\n          # probabilities before accumulating the inverse-CDF sum\n          temp[1] = taus.posterior[1] * plot.increment\n          for (j in 2:length(taus)) {\n            temp[j] = temp[j - 1] + taus.posterior[j] * plot.increment\n            if (temp[j] > U) {\n              tau.sampled = taus[j] # sample tau\n              break\n            }\n          }\n          \n          J = nrow(dat)\n          A = 0\n          for (j in 1:J) {\n            A = A + 1 / (dat$std.err[j]^2 + tau.sampled^2)\n          }\n          S3 = 0\n          for (j in 1:J) {\n            S3 = S3 + (dat$log.odds[j]) / (dat$std.err[j]^2 + tau.sampled^2)\n          }\n          B = S3 / A\n          mu.sampled = rnorm(1, mean = B, sd = 1 / sqrt(A)) # sample mu\n          \n          J = nrow(dat)\n          for (j in 1:J) {\n            C = 1 / dat$std.err[j]^2\n            D = 1 / tau.sampled^2\n            \n            theta.j.hat = (C * dat$log.odds[j] + D * mu.sampled) / (C + D)\n            V.j = 1 / (C + D)\n            theta.j.sampled = rnorm(1, theta.j.hat, sqrt(V.j)) # sample theta_j\n            \n            samples[k, j] = theta.j.sampled\n          }\n        }\n        \n        png(\"theta.simulations.png\", width = 800, height = 600)\n        par(mfrow=c(4,6))\n        for (j in 1:J) {\n          hist(samples[, j], main=\"Simulations of theta\", pch = j)\n        }\n        dev.off()\n\n    \\end{lstlisting}\n}\n\n\\clearpage\n\\end{document}\n", "meta": {"hexsha": "ce58b286033bf49414beb5958a1b1165f90b6914", "size": 17133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW4/Homework4.tex", "max_stars_repo_name": "goldsail/BayesianHomework", "max_stars_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-07-07T18:55:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-07T18:55:43.000Z", "max_issues_repo_path": "HW4/Homework4.tex", "max_issues_repo_name": "kingium/BayesianHomework", "max_issues_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW4/Homework4.tex", "max_forks_repo_name": "kingium/BayesianHomework", "max_forks_repo_head_hexsha": "d5506faccbf4d0b7b696c7c2bcb42d020bb0d357", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.8184143223, "max_line_length": 426, "alphanum_fraction": 0.5025973268, "num_tokens": 6504, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.7577943603346811, "lm_q1q2_score": 0.5654199123889653}}
{"text": "\\section{Ensembles}\n\nFor a given macrostate $(N, V, E)$, a statistical system, at any time t, may be in any one of an extremely large number of distinct microstates. As time passes, the system transitions between microstates, with the result that, over a long enough time period the behaviour of the systems is ``averaged'' over the collection of microstates which have been visited by the system.\nA useful approach is to consider, at a single instant of time, a large (infinite) number of ``copies'' of the same system, existing in all possible microstates that satisfy the macroscopic conditions. Then, we can expect the average behaviour of any system for this collection (or ensemble) to be identical with the time-averaged behaviour of the system. This forms the basis of the so\u2013called ensemble theory.\n\nIn the previous section, we considered an isolated system where we could keep track of the dynamics of every particle and use that to calculate the values of extensive, macroscopic properties of the system. An important aspect of this was the conservation of energy for the system. We refer to such systems as a \\emph{micro-canonical ensemble} --- the system is isolated with no heat flux and no change in the number of particles, hence the internal energy of the system is constant and it is described entirely by the Hamiltonian dynamics\n\nWe also saw (a couple of sections earlier) that we can equate the entropy of a system with the accessible volume of the phase space of that system. This was part of what motivated us to study the microscopic dynamics of the system using molecular dynamics.  We'd now like to consider some slightly more realistic cases where, for example, we may want to allow for multiple systems in contact.\n\nWe'll start by considering the simplest case: two (isolated) systems in contact, without any exchange of energy. If the state of system 1 corresponds to the region of phase space $\\Gamma^1$ and similarly, the state of system 2 to $\\Gamma^2$ then the state of the composite system $1\\cup 2$ corresponds to the phase space regions given by the Cartesian product $\\Gamma^{1\\cup 2}=\\Gamma^1\\times\\Gamma^2$ and the volume of this accessible volume of phase space is given by $|\\Gamma^{1\\cup 2}|=|\\Gamma^1||\\Gamma^2|$. From this, it's easy to see that the entropy of the composite system is given by\n\\begin{eqnarray*}\n\tS &=& k_B\\ln|\\Gamma^{1\\cup 2}| = k_B\\ln(|\\Gamma^1||\\Gamma^2|)\\\\\n\t\t&=& k_B\\ln|\\Gamma^1| + k_B\\ln|\\Gamma^2| = S^1 + S^2,\n\\end{eqnarray*}\nwhich is fortunate, since entropy is an extensive variable and we therefore expect it to be additive.\nIn this example the two sub-system were completely isolated from each other; the dynamics of one system had no influence on the dynamics of the other. This condition of dynamic independence corresponds to the independence of the observables that pertain to these sub-systems.\n\nNow, let's look at how the entropy of a composite system changes if we allow for the exchange of energy between the two sub-systems. 
This has the effect of increasing the accessible volume of the phase space, since exchange of energy means that there are more possible configurations for the overall system.\n\nWithout energy exchange, the volume of the phase space accessible to the total system is given by\n$$\n\t|\\Gamma_0| = |\\Gamma^1||\\Gamma^2|.\n$$\nOnce we allow for the exchange of energy, this becomes\n$$\n\t|\\Gamma| = \\sum_{E^1}|\\Gamma^1(E^1)||\\Gamma^2(E_\\text{tot}-E^1)|.\n$$\nThat is, we must now consider all possible configurations where the total energy of the composite system is partitioned between the two sub-systems. This volume is bounded below by $|\\Gamma_0|$ since the expression for $|\\Gamma_0|$ is just one of the terms in the sum. At first glance, it may look like this increase in the volume of the accessible phase space is enormous (we have added many more possible configurations); however, the volume of the accessible phase space for the composite system corresponds almost entirely to the states where $E^1=E^1_\\text{eqm}$ and $E^2=E^2_\\text{eqm}$. As a consequence, for large enough $N$, the difference between $|\\Gamma|$ and $|\\Gamma^1(E^1_\\text{eqm})||\\Gamma^2(E^2_\\text{eqm})|$ is negligible. It's not too hard to show (see section 3.7 of \\emph{Statistical Mechanics in a Nutshell}) that the contribution to the accessible phase space volume due to the exchange of energy between the two systems is of order $\\sqrt{N}$ compared with the total system size $N$ --- small enough to be negligible when $N$ is large.\n\n\\subsection{The canonical ensemble}\n\nRather than considering a perfectly isolated system, for which energy is conserved, a more realistic experimental situation may be to consider a system S in thermal contact with some much larger reservoir R. This has the effect of holding the total system at constant temperature. In such a situation we want to be able to calculate the average value $\\langle A\\rangle$ of an observable $A$ for the system S; we are not interested in the state of the reservoir R, except to the extent that it helps us determine the state of S.\n\nIf, for the composite system S$\\cup$R, we write $x_S,x_R$ for the state, then the average value of the observable $A$ is given by\n$$\n\t\\langle A\\rangle = \\frac{1}{|\\Gamma|}\\int_{\\Gamma}dx_Sdx_RA(x_S),\n$$\nwhere $\\Gamma$ is the region of the phase space for the composite system, when it has a total internal energy of $E$.\n\nIn order to make explicit the parts of the total phase space that are accessible to the composite system, we write the above expression as the Cartesian product of the phase space for the system and the reservoir, i.e.\n$$\n\t\\langle A\\rangle =\\frac{1}{|\\Gamma|}\\int dx_SA(x_S)\\times\\int dx_R\\delta(H^R(x_R)-(E-H^S(x_S))).\n$$\nThe delta function in the last term is zero, except when $x_S$ and $x_R$ in the two sub-systems take values such that $H^S+H^R=E$. 
That is, the delta function defines the accessible phase space volume when the two sub-systems can exchange internal energy between them, but the total internal energy of the composite system is conserved.\n\nRecalling the fundamental postulate of statistical mechanics ($S=k_B\\ln|\\Gamma|$), we rewrite the last expression to replace the phase space volume with the corresponding expression for entropy:\n$$\n\t\\int dx_R\\delta(H^R(x_R)-(E-H^S(x_S))) \\simeq \\exp\\left(\\frac{1}{k_B}S^R(E-H^S)\\right).\n$$\n\nSince $H^S$ is much less than $E$, it makes sense to now expand the exponential in a Taylor series about $E$:\n$$\n\t\\exp\\left(\\frac{1}{k_B}S^R(E-H^S)\\right)\\simeq \\exp\\left(\\frac{1}{k_B}S^R(E)\\right)\\exp\\left(\\frac{-1}{k_B}\\left.\\frac{\\partial S^R}{\\partial E}\\right|_E H^S(x_S)\\right)\\ldots\n$$\n\nEarlier in the course, we identified $\\frac{\\partial S}{\\partial E}$ with $\\frac{1}{T}$, and, for a canonical ensemble, $T$ is constant (that's the point of the reservoir) so we write the expected value of the observable as\n$$\n\t\\langle A\\rangle \\simeq \\frac{1}{|\\Gamma|}\\int dx_SA(x_S)\\times\\exp\\left(\\frac{1}{k_B}S^R(E)\\right)\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right).\n$$\n\nIt remains to make the normalisation precise. When we do this, the factors of $\\exp\\left(\\frac{1}{k_B}S^R(E)\\right)$ cancel from the integral and its normalisation and we get\n\\begin{equation}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dx_SA(x_S)\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right),\n\t\\label{eq:z1}\n\\end{equation}\nwhere $Z$ is known as the \\emph{partition function} and is given by\n\\begin{equation}\n\tZ = \\int dx_S\\exp\\left(\\frac{-H^S(x_S)}{k_BT}\\right).\n\t\\label{eq:z2}\n\\end{equation}\n\nOne way to think about equations \\ref{eq:z1} and \\ref{eq:z2} is that we no longer need to keep track of which parts of phase space are accessible. Instead we can integrate over the entire phase space and each region is weighted appropriately, according to $\\exp\\left(\\frac{-H}{k_BT}\\right)$ --- the so-called \\emph{Boltzmann factor}, a probability density in the phase space.\n\nSimilar to the case of the micro-canonical ensemble, for the canonical ensemble, the contributions to $A$ are dominated by the part of phase space that corresponds to when the system's internal energy is at the equilibrium value. \n\nYou should read through, and understand, section 3.12 of \\emph{Statistical Mechanics in a Nutshell} and sections 4.1--4.3 of \\emph{Statistical Mechanics Made Simple} for some extra details and explanation.\n\n\\subsection{Other ensembles}\nWe can follow a similar approach to what we did for the canonical ensemble and generalise to other types of ensembles. 
If $f_i$ is some variable that we want to hold fixed and $X_i$ is the conjugate extensive variable, then putting the system in contact with a reservoir with which it can exchange $X_i$ gives\n\\begin{equation*}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dx_SA(x_S)\\exp\\left(\\frac{f_iX^S_i(x_S)}{k_BT}\\right),\n\\end{equation*}\nwhere we have used the relation $\\left.\\frac{\\partial S^R}{\\partial X_i^R}\\right|_E=-\\frac{f_i}{T}$.\n\nCalculating the partition function proceeds similarly (see SMiaN, section 3.13) and gives\n\\begin{equation*}\n\tZ \\simeq \\exp\\left(\\frac{TS^S(X_i^*)+f_iX_i^*}{k_BT}\\right),\n\\end{equation*}\nwhere $X_i^*$ is the value of $X_i$ which maximises the value of the exponential.\n\n\\subsubsection{The grand-canonical ensemble}\nRather than holding $N$, the number of particles, fixed, we can allow it to fluctuate --- due to, for example, chemical reactions --- and instead fix $\\mu$, the chemical potential. The corresponding ensemble for this case is the so-called \\emph{grand-canonical ensemble} (also referred to simply as the \\emph{grand ensemble}). In this case, the expected value of an observable $A$ is given by\n$$\n\t\\langle A\\rangle = \\frac{1}{Z}\\sum_{N=1}^{\\infty}\\int\\mathrm{d}xA(x)\\exp\\left(-\\frac{H_N(x)-\\mu N}{k_B T}\\right)\n$$\nand the corresponding partition function is given by\n$$\n\tZ=\\exp\\left(-\\frac{E-TS-\\mu N}{k_B T}\\right).\n$$\n(See section 3.15 of SMiaN and section 4.4 of SMMS for more discussion of the grand-canonical ensemble.)\n\n\\subsubsection{The $p$--$T$ ensemble}\nThe \\emph{$p$--$T$ ensemble} is one specific example of a generalised ensemble. In this case the pressure and temperature are fixed, while the internal energy and volume (their conjugate variables) are allowed to fluctuate.\nUsing the generalised formula above, and dropping the subscripts that we were previously using to denote the system, the $p$--$T$ ensemble average is given by\n \\begin{equation*}\n\t\\langle A\\rangle = \\frac{1}{Z}\\int dxA(x)\\exp\\left(-\\frac{E(x)+pV(x)}{k_BT}\\right),\n\\end{equation*}\nwhile the partition function is given by\n$$\n\t\\ln Z = -\\frac{E-TS+pV}{k_BT}.\n$$\nThe quantity in the numerator is the \\emph{Gibbs free energy}. (See section 3.14 of SMiaN.)\n\n\\subsection{Information theory and the Gibbs Formula for entropy}\nTo finish off this section, we'll look at one last application of entropy --- not because it is particularly useful to physical systems in statistical mechanics, but because it gives a result that forms the basis of information theory.\n\nWe'll return to considering a generalised ensemble, like the one we looked at a couple of sections earlier. However, in this case we'll assume that the phase space is discretized and that the index $n$ runs over all of the microstates of the system. 
If we consider an intensive variable $f$ and its corresponding extensive variable $X$, then the expression for the expected value of any observable $A$ is\n$$\n\t\\langle A \\rangle = \\frac{1}{Z}\\sum_n A_n\\exp\\left(\\frac{fX_n}{k_BT}\\right),\n$$\nwhile the partition function $Z$ is given by\n$$\n\tZ =\\sum_n\\exp\\left(\\frac{fX_n}{k_BT}\\right).\n$$\n\nThe partition function is related to the thermodynamic potentials (see the previous section on generalised ensembles and SMiaN sections 3.12 and 3.13) via\n\\begin{equation}\n\t\\ln Z = \\frac{TS+f\\langle X\\rangle}{k_BT}.\n\t\\label{eq:gibbZ}\n\\end{equation}\n\nNow, for any individual microstate $n$ the probability is therefore given by\n$$\n\tp_n = \\frac{1}{Z}\\exp\\left(\\frac{fX_n}{k_BT}\\right).\n$$\nTaking the log of both sides of this expression gives\n$$\n\t\\ln p_n = \\frac{fX_n}{k_BT} - \\ln Z,\n$$\nwhich after substituting \\ref{eq:gibbZ} for $\\ln Z$ gives\n$$\n\t\\ln p_n = \\frac{1}{k_BT}(fX_n -TS -f\\langle X\\rangle).\n$$\n\nNow we can calculate the expected value of both sides of the above equation:\n$$\n\t\\langle \\ln p_n \\rangle = \\sum_n p_n\\ln p_n = \\frac{1}{k_BT}(f\\langle X\\rangle -TS -f\\langle X \\rangle) = -\\frac{S}{k_B}.\n$$\n\nAfter rearranging this for $S$ (and making explicit the sum for the expected value of $p_n$) we arrive at the \\emph{Gibbs formula for entropy}:\n$$\n\tS = -k_B\\sum_np_n\\ln p_n.\n$$\n\nAlthough this is elegant, it's generally useless in the context of physical systems since the number of microstates it would be necessary to sum over is far too large to be computationally practical and, in any case, we generally don't know the probability distribution $p_n$ for each of the microstates. The value of this expression lies in its application to other systems, particularly information theory, where it can be used to quantify the amount of information in, for example, a digital signal, in which case the $p_n$ represent the probability of receiving the $n$-th possible value in the list of signals.\n\n\\subsection{Calculating things with ensembles}\nLet $\\rho(q,p,t)$ be the probability density function that gives the probability that a system will be in state $(q,p)$ at time $t$. Now, consider an ensemble of systems characterised by $\\rho$, for any choices of $(q,p)$. If $\\rho(q,p,t) = \\rho(H(q,p))$ --- i.e. $\\rho$ is autonomous --- then $\\frac{\\partial \\rho}{\\partial t}=0$ and we say that the system is stationary.\n\nThe canonical ensemble average $\\langle f\\rangle$ of some physical quantity $f(q,p)$ is\n$$\n\t \\langle f\\rangle = \\frac{\\int f(q,p)\\rho(q,p)\\mathrm{d}q\\mathrm{d}p}{\\int\\rho(q,p)\\mathrm{d}q\\mathrm{d}p}.\n$$\nFor fixed $N$, as we saw above, the natural choice for the probability density function is \n$$\n\\rho(q,p) \\propto \\exp\\left(-\\frac{H(q,p)}{k_BT}\\right)\n$$ \nand if we normalise this, so that $\\int \\rho =1,$ we have\n$$\n\t\\rho(q,p) = \\frac{1}{Z}\\exp\\left(-H(q,p)/k_BT\\right).\n$$\nFor $N$ particles, in 3-D the normalisation factor is\n$$\n\tZ(T,N,V) = \\frac{1}{h^{3N}}\\int\\mathrm{d}^{3N}q\\mathrm{d}^{3N}p\\exp(-H(q,p)/k_BT),\n$$\nthe so-called partition function.\n\n\nWe often write the integral as an infinite sum, both for ease of notation, and because it makes sense for discrete systems like the QM systems we will meet soon. I.e. $Z=\\sum_n\\exp(-H_n/k_BT)$.\n\nMost quantities of interest for an ensemble can be calculated in two ways: as a sum over states, and as derivatives of the partition function. 
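Both routes are easy to check numerically. The R sketch below (a toy example: a hypothetical two-level system with energies $0$ and $\\epsilon$, in units where $k_B = 1$) computes $\\langle E\\rangle$ once as a weighted sum over states and once as a numerical derivative of $\\ln Z$ with respect to $\\beta = 1/k_BT$:\n\\begin{lstlisting}[language=R]\n# Two-level system: energies 0 and eps; beta = 1/(k_B T), with k_B = 1.\neps  <- 1.5\nbeta <- 0.7\nE    <- c(0, eps)\n\nZ     <- sum(exp(-beta * E))\nE.sum <- sum(E * exp(-beta * E)) / Z   # route 1: sum over states\n\nlnZ     <- function(b) log(sum(exp(-b * E)))\nh       <- 1e-6                        # route 2: <E> = -d(ln Z)/d(beta)\nE.deriv <- -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)\n\nc(E.sum, E.deriv)  # the two values agree to within the finite-difference error\n\\end{lstlisting}\n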
Analytically, for example:\n\n\\subsubsection{Internal energy}\nThe average value of the internal energy of an ensemble can be found by summing over each energy state $E_n$ of the system, weighted by the corresponding probability $p_n$. (In the following, we're going to use the notation $\\beta =1/k_BT$. The quantity $\\beta$ is referred to as the inverse temperature.)\n\n\\begin{align*}\n\t\\langle E \\rangle = \\sum_n E_n p_n &= \\frac{\\sum_n E_n\\exp(-\\beta E_n)}{Z}\\\\\n\t&= -\\frac{\\partial Z}{\\partial \\beta}\\frac{1}{Z} = -\\frac{\\partial}{\\partial \\beta} \\ln(Z).\n\\end{align*}\n\n\\subsubsection{Specific heat}\nLet $c_v$ be the specific heat per particle in a system with constant volume. This means that raising the temperature of the system by one unit requires $\\frac{\\partial\\langle E\\rangle}{\\partial T}$ units of heat. The total specific heat of the system, with $N$ particles, is given by\n\\begin{align*}\n\tNc_v = \\frac{\\partial \\langle E \\rangle}{\\partial T} &=  \\frac{\\partial \\langle E \\rangle}{\\partial \\beta} \\frac{\\mathrm{d}\\beta}{\\mathrm{d}T}\\\\\n\t&=\\frac{1}{k_BT^2} \\frac{\\partial^2 \\ln Z}{\\partial \\beta^2} \\mbox{ since } \\frac{\\mathrm{d}\\beta}{\\mathrm{d}T} = -\\frac{1}{k_BT^2} \\mbox{ and } \\langle E \\rangle = -\\frac{\\partial \\ln Z}{\\partial \\beta}.\n\\end{align*}\n\nWe can expand the derivative of $\\langle E \\rangle$ with respect to $\\beta$ as a sum over states. This gives\n\\begin{align*}\n\tNc_v &= \\frac{-1}{k_BT^2} \\frac{\\partial}{\\partial \\beta} \\frac{\\sum_n E_n \\exp(-\\beta E_n)}{\\sum_n\\exp(-\\beta E_n)}\\\\\n\t&= \\frac{-1}{k_BT^2}\\left[-\\frac{\\sum_n E_n^2\\exp(-\\beta E_n)}{Z}+ \\left(\\frac{\\sum_n E_n\\exp(-\\beta E_n)}{Z}\\right)^2\\right]\\\\\n\t&= \\frac{1}{k_BT^2}(\\langle E^2 \\rangle - \\langle E\\rangle^2).\n\\end{align*}\nThat is, the specific heat of the system is given by the mean-square fluctuation in the internal energy of the system at constant temperature, divided by $k_BT^2$.\n\n\\subsubsection{Entropy}\nWe have\n$$\n\tS = -k_B\\sum_n p_n\\ln(p_n).\n$$\n(This is the Gibbs formula for entropy that we saw a couple of pages ago.) Using $p_n = \\frac{\\exp(-\\beta E_n)}{Z}$, along with the properties of logs, one gets (do the calculation yourself --- it's not a tough one)\n$$\n\tS = \\frac{\\langle E \\rangle}{T} +k_B\\ln(Z).\n$$\nRearranging gives\n$$\n\t\\langle E \\rangle -TS = -\\frac{1}{\\beta} \\ln(Z) = A(T,V,N).\n$$\n\nThe quantity $A(T,V,N)$ is the so-called Helmholtz free energy. It has the property that $S = -\\left.\\frac{\\partial A}{\\partial T}\\right|_{N,V}$.\n\n\\subsubsection{Simple harmonic oscillator}\nA classical simple harmonic oscillator has $H(q,p) =p^2/2m + m\\omega^2q^2/2$ where $\\omega$ is the angular frequency of the oscillator. As an exercise, calculate the following for an ensemble consisting of a single oscillator.\n\\begin{itemize}\n\t\\item The partition function $Z$;\n\t\\item The Helmholtz free energy $A_{\\omega}(T)$;\n\t\\item The average internal energy $\\langle E \\rangle_{\\omega}(T)$.\n\\end{itemize}\n\n\\subsection{Recommended reading}\nMost of the notes in this section follow closely the second half of chapter 3 in \\emph{Statistical Mechanics in a Nutshell}; specifically, sections 3.6--3.18. Chapter four of \\emph{Statistical Mechanics made Simple} covers the same material in sections 4.0--4.4. It gives some intuitive and succinct explanations, but I find it to be of more use \\emph{after} you've already looked at SMiaN. Also useful, and with a slightly different presentation (perhaps with more traditional notation), is \\emph{Entropy, Order Parameters, and Complexity}. 
Here the content is spread around a bit over chapters three through six. Much of the useful content, including some relevant to early sections of these notes, is in sections 3.1, 3.5, 6.1, 6.2, 6.3, and 5.3.\n", "meta": {"hexsha": "ffae8ae9fbe2d4e36da98c5e208fba8558d749b3", "size": 18474, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "07-ensembles.tex", "max_stars_repo_name": "wvan478/708Notes2018", "max_stars_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-02-28T20:47:25.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-10T19:05:54.000Z", "max_issues_repo_path": "07-ensembles.tex", "max_issues_repo_name": "wvan478/708Notes2018", "max_issues_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2018-03-07T20:07:07.000Z", "max_issues_repo_issues_event_max_datetime": "2018-04-18T20:53:19.000Z", "max_forks_repo_path": "07-ensembles.tex", "max_forks_repo_name": "wvan478/708Notes2018", "max_forks_repo_head_hexsha": "0a4ef44e261f2892b0e927aeadf7f06adda1b80b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2018-02-26T21:38:05.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-15T22:54:11.000Z", "avg_line_length": 76.6556016598, "max_line_length": 1059, "alphanum_fraction": 0.7401212515, "num_tokens": 5180, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5653936736591051}}
{"text": "\\chapter{Backgroud}\n\nAs the proposed architecture and algorithm extend existing work on Probabilistic Graphical Models, a substantial amount of background is required culminating with the new approach being derived. A robust understanding of these concepts is required for this project to implement and design appropriate evaluations for the proposed architecture.\n\n\\section{Generative Models}\\label{S:Generative-Models}\n\nThis project works with neural network based generative models.\nGenerative models are a powerful way to model data. With generative models, we aim to learn a model that can both create the training data, represent it, and reconstruct it using the representation the model has learnt to create. Generative models can map input data from raw values to higher level features. Hinton~\\cite{hinton:32723:vv}, gave a compelling argument for higher level features in the context of generative models. \\begin{quote} ``Consider, for example, a set of images of a dog. Latent variables such as the position, size, shape and color of the dog are a good way of explaining the complicated, higher-order correlations between the individual pixel intensities, and some of these latent variables are very good predictors of the class label.''\\end{quote}\n\nGenerative models model a distribution over a collection of binary variables $X$, where $X$ is comprised of variables which can be observed or unobserved. The observed variables are referred to as \\texttt{visible} variables or units ($v$). Conversely, unobserved variables correspond to the hidden units of the neural network ($h$). With these terms defined, the joint distribution that generative models model can be expressed as $P(X)$ where $X$ is comprised of $h,v$. Collections of these units, are often referred to as `patterns` or `vectors` in that they are represented by a vector or pattern of bits. For instance in the context of an image, a visible pattern is the pixels of the image flattened into a one dimensional vector.\n\n\\section{Directed PGMs}\n\nThe new approach proposed in this project uses both the two classes of Probabilistic Graphical Models, Directed and Undirected. A Directed PGM, is a notation for factorising over a collection of stochastic variables.\nGiven $X$, the variables in the generative model, and $parent_i$ the parent unit of $x_i$, the distribution over $X$ in a directed PGM is defined by the following factorisation:\n\\begin{equation}\\label{eq:sbn-factorisation}\nP(X) = \\prod_i P(x_i | parent_i)\n\\end{equation}\nConnections between units in the Directed PGM express causation~\\cite{Pearl:1988:PRI:52121}. They provide an expressive way to represent a collection of related, stochastic variables. See figure \\ref{F:PGM-example} for the minimal example, where variable $A$ is dependent on $B$. As a result this network is often referred to as a Belief Network or Bayesian Network. A Belief Network's causal dependencies are expressed as a conditional probability table also know as `factor'. For example in figure \\ref{F:PGM-example} the probabilities of $A$ being in a given state are dependent on $B$. 
The joint distribution over $A$ and $B$, as in equation \\ref{eq:sbn-factorisation}, can be expressed as:\n$$P(A,B) = P(A|B)P(B)$$\n\\begin{wrapfigure}{r}{0.2\\textwidth}\n\\begin{center}\n  \\includegraphics[width = 0.1\\textwidth]{Assets/PGM_Example_1.png}\n\\caption{Minimal Directed PGM, showing an observed variable `A` and its hidden cause `B`.}\n\\label{F:PGM-example}\n\\end{center}\n\\end{wrapfigure}\n\n\\subsection{Explaining Away in Directed PGMs}\\label{S:Explaining-Away}\n\nA common task in generative models is, given that a parent unit (a cause) is in some state, to infer the state of a child variable. In directed PGMs this is trivial to calculate. The opposite task, inferring the state of causes given the effects of those causes, is also desirable~\\cite{salakhutdinov2009learning}. It becomes problematic in Directed PGMs as the causal relationships give rise to the effect of `Explaining Away'. The canonical example of `Explaining Away', the \\texttt{Burglar, Earthquake, Alarm} problem~\\cite{Barber:2012:BRM:2207809}, is illustrated in figure~\\ref{F:Explaining-Away}.\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 0.4\\textwidth]{Assets/Explaining_Away.png}\n\\caption{The famous \\texttt{Burglar, Earthquake, Alarm} network showing a minimal case of explaining away.}\n\\label{F:Explaining-Away}\n\\end{center}\n\\end{figure}\nKnowledge of the state of the \\texttt{alarm} makes \\texttt{burglar} and \\texttt{earthquake} dependent. The \\texttt{alarm} is the observable variable ($v$) and the \\texttt{burglar} and \\texttt{earthquake} are the hidden `causes' ($h$). For example, if the \\texttt{alarm} is true and we see news of an earthquake in the area, our belief that we have been burgled decreases. Expressed (again exemplifying equation \\ref{eq:sbn-factorisation}) as a joint probability over $A$, $B$ and $E$, where these are the states of \\texttt{alarm}, \\texttt{burglar} and \\texttt{earthquake} respectively:\n$$\nP(A,B,E) = P(A|B,E)P(B)P(E)\n$$\n\n\\subsection{Directed PGMs in Neural Networks: The Sigmoid Belief Network}\n\nA Belief network can be expressed as a neural network, where conditional probabilities are parameterised as weights. This network is called a Sigmoid Belief Network (SBN) as the probability of a variable $x_i$ being 1 depends on its ancestor variables $parent_i$: the weighted sum into $x_i$, $\\phi_i$, is passed through the sigmoid function ($ \\sigma(x)=1/(1+e^{-x})$). This is equivalent to a Perceptron using a sigmoid activation function, which ensures that the output is a valid probability (between 0 and 1).\nSBNs take a naive approach to causes, where each hidden unit represents a single, simple cause. Formally, $\\phi_i$ is a weighted sum of the activations of parent nodes:\n$$ \\phi_i = \\sum_{j \\in parent_i} W_{ij}x_j$$\nand\n$$\nP(x_i = 1 | parent_i) = \\sigma(\\phi_i)\n$$\n\n\\section{Undirected PGMs}\n\nUndirected PGMs do not represent causation, instead merely capturing a dependency between two units. 
These pairwise dependencies change the structure of the factorisation: a factor $\\Phi$ is introduced between each pair of connected variables $x_i,x_j$, resulting in the factorisation:\n$$\nP(X) = \\frac{1}{Z} \\prod_{i,j} \\Phi(x_i, x_j)\n$$\nThe introduction of the normalisation $Z$ (often referred to as the partition function) adds nontrivial complexity to performing inference in Undirected PGMs~\\cite{Salakhutdinov08learningand}: a sum over all $2^N$ configurations of $x$ is required.\n\nUndirected PGMs do not capture causal relationships, meaning that computing the state of a variable given another is no longer hampered by the effect of explaining away. However, their recurrent structure, while expressive, introduces intractability in practice.\n\n\\subsection{Undirected PGMs in Neural Networks: The Boltzmann Machine}\nAn Undirected PGM expressed as a neural network is referred to as a Boltzmann Machine, where connections encode dependencies with an associated weight. Where $W_{ij}$ is the weight between variables $x_i$ and $x_j$, the factor $\\Phi$ is expressed as:\n$$\n\\Phi(x_i, x_j) = e^{x_ix_jW_{ij}}\n$$\nThe Boltzmann machine has been proposed in various forms from different domains throughout the years. For instance it was presented in the non-stochastic context of the Hopfield network in~\\cite{Hopfield01041982}. Hinton and Sejnowski~\\cite{geoffreye.hintonterrencej.sejnowski1983} also proposed the Boltzmann machine. An example Boltzmann Machine is shown in figure \\ref{F:Boltzmann-Machine}; there we see that the Boltzmann Machine can be recurrent, expressing complex dependencies between variables. This recurrence makes inferring the state of a subset of variables based on knowledge of another subset non-trivial as the size of the network grows.\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 0.4\\textwidth]{Assets/Boltzmann_Machine.png}\n\\caption{A Boltzmann Machine, the blue shaded nodes representing the observed variables, and the non-shaded nodes the latent variables.}\n\\label{F:Boltzmann-Machine}\n\\end{center}\n\\end{figure}\n\n\\subsection{Restricted Boltzmann Machines}\n\n\nA Boltzmann Machine's architecture can be altered to alleviate inference shortcomings. The restriction was originally proposed by \\cite{Smolensky:1986vy}, and then later revitalised with a training algorithm that operates on the deeper architecture of the DBN~\\cite{geoffreye.hintonterrencej.sejnowski1983}. The restriction requires the network to be a two layer bipartite network, the layers corresponding to the observed (visible) and latent (hidden) units. Connections are forbidden within the layer of hidden units and within the layer of visible units respectively. An example Restricted Boltzmann Machine architecture is shown in figure~\\ref{F:Restricted-Boltzmann-Machine} (with the biases omitted for simplicity). The collection of hidden units, forming a layer, is referred to as the \\texttt{hidden layer}. The collection of visible units is referred to as the \\texttt{visible layer}.\n\n\\begin{figure}[h]\n\\begin{center}\n  \\includegraphics[width = 0.6\\textwidth]{Assets/RBM_Example.png}\n\\caption{An example Restricted Boltzmann Machine with four hidden units, and five visible units. Note that the edges between units are not directed --- representing a dependency not a cause. 
This means that the weights are symmetric.}\n\\label{F:Restricted-Boltzmann-Machine}\n\\end{center}\n\\end{figure}\n\nThe probability of the RBM (again ignoring biases) being in a given configuration is the joint probability of $h$ and $v$:\n$$ P(h,v) = \\frac{1}{Z} \\prod_{j,i} e^{h_jv_iW_{ji}} $$\nTaking logs this becomes:\n$$ \\log P(h,v) = \\sum_{j,i} h_jv_i  W_{ji}  - \\log Z $$\n$ Z $ is the partition function, which normalises the probability of the joint. Calculating this would require summing over all $2^N$ configurations of $h$ and $v$, which is intractable for practical numbers of units. For instance a 28 by 28 image corresponds to 784 visible units, and for, say, 10 hidden units, this would amount to $ 2^{784} \\times 2^{10} $ possible configurations. We opt to work in terms of $P^\\star$, the non-normalised probability of the joint over $h \\text{ and } v$.\nSo we arrive at:\n\n\\begin{equation}\\label{eq:LogPJoint}\n   \\log P^\\star(h, v) = \\sum_{i,j} {h_j v_i W_{ji}}\n\\end{equation}\n\n\\section{Sampling in Generative Models}\n\n\\subsection{Why sampling is important}\n\nSampling is used when the distribution we want to work with is intractable to calculate analytically, which happens to be the case in the model proposed in this report. As mentioned in \\ref{S:Generative-Models}, the power of generative models is their ability to represent, reconstruct and be trained on data. These tasks all require sampling from configurations/distributions of the hidden and visible units, often conditioned on one or the other. There are two cases:\n\\begin{itemize}\n  \\item Sampling from $P(v|h)$, which is known as running the generative model, where the model \\emph{generates} data based on a hidden representation.\n  \\item Sampling from $P(h|v)$, which is known as \\emph{inverting} a generative model (this is also referred to as \\emph{inference}). The process is called `inverting' because instead of the model generating the data (sampling from $P(v|h)$) we instead try to infer a representation given data, $P(h|v)$. It is the process of reasoning about what we do not know, given that of which we do.\n\\end{itemize}\n\nSampling from $P(v|h)$ and $P(h|v)$ is required to train generative models, as often the gradient to be climbed/descended involves calculating a probability over all the units in the generative model. Training a network based generative model involves calculating a weight update, which in turn requires inferring a hidden representation given a training item. The converse, sampling from $P(v|h)$, is also required~\\cite{dempster1977maximum}.\n\n\\subsection{The sampling technique: Gibbs Sampling}\n\nGibbs sampling is a special case of Markov Chain Monte Carlo~\\cite{hastings70}, a technique for drawing samples from a complex distribution. Sampling from the probability mass (or `joint distribution`) of a generative model is a common use case for Gibbs sampling~\\cite{Pearl:1988:PRI:52121}.\n\nGibbs sampling explores the desired probability distribution, taking samples of that distribution's state. One must leave iterations of `exploration' (of the probability mass) between draws of a sample to ensure that the samples are independent~\\cite{casella1992explaining}. The process of taking a step between states is referred to as a \\texttt{Gibbs iteration} or a \\texttt{Gibbs Step}. 
Formally, the algorithm is described in algorithm~\\ref{Alg:Gibbs-Sampling}.\n\n\\begin{algorithm}[!ht]\n \\KwData{A vector $x$ indexed by $j$.}\n \\KwResult{A new sample $x'$}\n Let $ x_{\\smallsetminus j}$ be all components that make up the vector $x$ except $x_j$\\;\n Initialisation: begin with $x$; we are going to produce a sample $x'$\\;\n  \\For{$k$ many iterations}{\n   \\For{each component in $x$, $x_j$}{\n     Draw a sample $x_j'$ from $P(x_j \\mid x_{\\smallsetminus j})$\\;\n     Update the current value of $x_j$ in $x$ with $x_j'$\\;\n   }\n  }\n \\caption{The Gibbs Sampling Algorithm}\\label{Alg:Gibbs-Sampling}\n\\end{algorithm}\n\n\\subsubsection{Mixing Time of the Gibbs Sampler}\n\nMCMC methods aim to approximate a distribution by exploring likely states. As we often start this process from a random state, it's important that enough Gibbs steps are taken before a sample is drawn. This is because the random state may not be close to any part of the true distribution we want to sample from, so by running the chain for many iterations we increase the likelihood of samples being from the desired distribution.
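\n\nAs a toy illustration (a sketch only --- the bivariate normal target and every number here are hypothetical, chosen because its conditionals are easy to sample from in R):\n\\begin{lstlisting}[language=R]\n# Gibbs sampling a bivariate normal with correlation rho: each Gibbs\n# iteration draws x | y and then y | x, both of which are normal.\nset.seed(1)\nrho <- 0.95\nn.steps <- 5000\nx <- 10; y <- -10   # a deliberately implausible starting state\nsamples <- matrix(NA, n.steps, 2)\nfor (k in 1:n.steps) {\n  x <- rnorm(1, rho * y, sqrt(1 - rho^2))  # draw x given y\n  y <- rnorm(1, rho * x, sqrt(1 - rho^2))  # draw y given x\n  samples[k, ] <- c(x, y)\n}\ncolMeans(samples[-(1:500), ])  # drop early samples; means settle near (0, 0)\n\\end{lstlisting}\nDropping those early samples is exactly the waiting-before-sampling that this passage describes. 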
Calling back to the example in section \\ref{S:Explaining-Away}, instead of a single \\texttt{burglar} or \\texttt{earthquake} becoming dependent given the \\texttt{alarm}, there is more in the realm of hundreds of burlgars and earthquakes all suddenly dependent.\n\nBeing able to efficiently sample from $P(h|v)$ is also required for training generative models~\\cite{elidan2007using} making Sigmoid Belief Networks impractical for not only inference, but also training.\n\n\n\\subsection{Gibbs Sampling in a Boltzmann Machine}\n\nPerforming Gibbs sampling appears trivial in a Boltzmann Machine, in that to find the probability of a given unit being active, a weighted input to that node is passed through a sigmoid function. However, in practice the recurrent nature of Boltzmann Machines makes sampling intractable, as updating a node will change the probabilities of those connected. This requires a long Gibbs chain to sample from, intractably long in practice.\n%However, it was shown that given unlimited training time Boltzmann Machines could be trained, out performing the state of the art models of the time \\todocite{This}.\n\nRecall that $ x_{\\smallsetminus} j$ be all components that make up $x$ vector except $x_j$ and that a Boltzmann Machine has symmetric weights ($ W_{ji} = W_{ij} $),\n$$\nP(x_j = 1, x_{\\smallsetminus}j) = \\frac{1}{1 + e^{-\\sum_i w_{ji}x_i}}\n$$\nThat is, Gibbs sampling in a Boltzmann Machine amounts to use the Sigmoid function of the weighted inputs~\\cite{neal1992:connectionist}.\n\n\\subsection{Gibbs Sampling in RBMs}\\label{S:Gibbs-Sampling-RBM}\n\n\nA big payoff for the restriction in an RBM is inverting the model becomes tractable, as the latent variables no longer become dependant given the observed ones. This is illustrated in figure~\\ref{F:Restricted-Boltzmann-Machine} the hidden unit $h_1$ is not dependent on $h_2$, regardless if we know anything about the visible units. Firstly, this is the opposite of a SBN, where knowledge of the visible units makes the hidden units dependent. Secondly, by removing the recurrence present in Boltzmann Machines, it reduces the expressiveness of the RBM network. This does make the RBM useable in practice, as the Gibbs sampling process can stop after one Gibbs step\\cite{fischer2014training}.\n\nIn order to describe Gibbs sampling in the new architecture proposed, it must first be explained for a standard RBM --- The process of Gibbs sampling is as follows:\n  \\begin{itemize}\n    \\item One must sample from $P(h|v)$ giving a hidden state $h'$\n    \\item Using this hidden state, a visible state is then generated, $v'$, by sampling from $P(v'|h')$. This process of generating a hidden pattern, and subsequent visible pattern is referred to as a Gibbs step.\n    \\item This chain of Gibbs steps between sampling from $P(h|v)$ and $P(v|h)$ can then be repeated as desired, the longer the chain the closer the samples will be to the true joint distribution that the model has learnt. 
\subsection{Gibbs Sampling in RBMs}\label{S:Gibbs-Sampling-RBM}


A big payoff of the restriction in an RBM is that inverting the model becomes tractable, as the latent variables are no longer dependent given the observed ones. This is illustrated in figure~\ref{F:Restricted-Boltzmann-Machine}: the hidden unit $h_1$ is not dependent on $h_2$, regardless of what we know about the visible units. Firstly, this is the opposite of an SBN, where knowledge of the visible units makes the hidden units dependent. Secondly, removing the recurrence present in Boltzmann Machines reduces the expressiveness of the network, but it also makes the RBM usable in practice, as the Gibbs sampling process can stop after one Gibbs step~\cite{fischer2014training}.

In order to describe Gibbs sampling in the new architecture proposed, it must first be explained for a standard RBM. The process of Gibbs sampling is as follows:
  \begin{itemize}
    \item One must sample from $P(h|v)$, giving a hidden state $h'$.
    \item Using this hidden state, a visible state $v'$ is then generated by sampling from $P(v'|h')$. This process of generating a hidden pattern, and subsequently a visible pattern, is referred to as a Gibbs step.
    \item This chain of Gibbs steps, alternating between sampling from $P(h|v)$ and $P(v|h)$, can then be repeated as desired; the longer the chain, the closer the samples will be to the true joint distribution that the model has learnt. For training an RBM, Hinton~\cite{Hinton:2006:FLA:1161603.1161605} showed that one step is often enough in practice, as one step is enough to infer a direction in which to adjust the weights.
  \end{itemize}
  The process of updating the hidden, then visible layers forms what is referred to as the \texttt{Gibbs Chain} and is visualised at layer level in Figure \ref{F:Gibbs_Chain}.
  \begin{figure}[h]
    \begin{center}
      \includegraphics[width=0.8\textwidth]{Assets/RBM-Gibbs-Chain.png}
    \end{center}
    \caption{A figure illustrating a Gibbs chain where left to right indicates a Gibbs iteration. Note this is \emph{not} a PGM.}
    \label{F:Gibbs_Chain}
  \end{figure}

  \subsubsection{The Gibbs update}\label{S:Gibbs-Update}

    Denote the Gibbs sampler's probability of setting a variable $h_j$ to 1 as:
    \begin{equation}\label{eq-p1-gibbs-full}
    P_{Gibbs}(h_j = 1 \mid v, h_{k \neq j})
    \end{equation}
    We can refer to the probability in equation \ref{eq-p1-gibbs-full} as $p_1$, and the converse $P_{Gibbs}(h_j = 0 \mid v, h_{k \neq j})$ can be referred to as $p_0$.
    Then, since $p_0 = 1 - p_1$, by the product rule of probabilities:
    $$
    \begin{aligned}
    \frac{p_1}{1 - p_1} = \frac{p_1}{p_0} &= \frac{P^\star(h,v \text{ where } h_j = 1)}{P^\star(h,v \text{ where } h_j = 0)} \\
    p_1 &= \frac{1}{1 + \frac{P^\star(h,v \text{ where } h_j = 0)}{P^\star(h,v \text{ where } h_j = 1)} }
    = \frac{1}{1 + e^{-\psi_j}}
  \end{aligned}
    $$
  where the final step uses the definition of $\psi_j$ given in equation~\ref{psi-gibbs-update-rbm} below.
  To update a hidden unit $h_j$
  we find $ P(h_j = 1 | v) $ where $v$ is an input pattern. In the context of an image, $ v $ would be the pixel values, where each pixel corresponds to a visible unit $v_i$.
  The probability of a given hidden unit activating is:
    \begin{equation}\label{eq:Hid-Gibbs-Update}
    P(h_j = 1 | v) = \sigma(\psi_j)
    \end{equation}
    Where $\psi_j = \sum_i W_{ji}v_i + W_{0j}$ is the weighted sum into the $j$th hidden unit and $\sigma()$ is the Sigmoid function, also known as the Logistic function, $\sigma(x)=1/(1+e^{-x})$. Figure \ref{F:PSI} illustrates $\psi_j$ for an example RBM.
    \begin{figure}[h]
    \begin{center}
      \includegraphics[width = 0.8\textwidth]{Assets/PSI_and_PHI.png}
    \caption{A diagram showing $\psi_j$, the weighted sum into the $j$th hidden unit. Note that $W_{0j}$ is the hidden bias, represented as a unit that is always on with a weight into each hidden unit.}
    \label{F:PSI}
    \end{center}
    \end{figure}
    As the weights are symmetric, sampling from the visible layer given a hidden state is similar. That is, $P(v_i = 1 | h)$, where $h$ is the entire hidden vector, is given by:
    \begin{equation}\label{eq:Vis-Gibbs-Update}
     P(v_i = 1 | h) = \sigma(\phi_{i})
    \end{equation}
    Where $\phi_i = \sum_j W_{ji}h_{j} + W_{0i}$ is the weighted sum into the $i$th visible unit. Both $\phi_i$ and $\psi_j$ can be expressed in an alternative but useful way:
    \begin{equation}
    \phi_i = \log P^\star(v,h \text{ where } v_i = 1) - \log P^\star(v,h \text{ where } v_i = 0)
    \end{equation}
    \begin{equation}\label{psi-gibbs-update-rbm}
    \psi_j = \log P^\star(h,v \text{ where } h_j = 1) - \log P^\star(h,v \text{ where } h_j = 0)
    \end{equation}
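Putting equations \ref{eq:Hid-Gibbs-Update} and \ref{eq:Vis-Gibbs-Update} together, one full Gibbs step for an RBM can be sketched as follows (again with hypothetical names, reusing \texttt{sigmoid} from the earlier listing; \texttt{W} has one row per hidden unit, and \texttt{b\_hid}, \texttt{b\_vis} are the bias vectors):
\begin{verbatim}
def rbm_gibbs_step(v, W, b_hid, b_vis, rng=np.random.default_rng()):
    # Upward pass: psi_j = sum_i W[j, i] * v[i] + b_hid[j]
    h = (rng.random(b_hid.shape) < sigmoid(W @ v + b_hid)).astype(int)
    # Downward pass: phi_i = sum_j W[j, i] * h[j] + b_vis[i]
    v_new = (rng.random(b_vis.shape) < sigmoid(W.T @ h + b_vis)).astype(int)
    return v_new, h
\end{verbatim}
Because the hidden units are conditionally independent given $v$ (and vice versa), each pass is a single vectorised operation rather than a sweep over individual units; this is what makes stopping after one step practical.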
\subsubsection{Reconstructions: visualising what the RBM has learnt}\label{SS:RBM-Reconstructions}

RBMs create an internal representation of a given input by sampling from $P(h|v)$. They can also generate a faux input given an internal representation (an $h$). Performing one Gibbs iteration, that is, sampling the hidden units given a `clamped' input (sampling from $P(h|v)$, where a clamped input is one in which the visible units are set to an input pattern) and then using the generated hidden state to produce a faux input (sampling from $P(v|h_{sampled})$), results in a \texttt{reconstruction}. The model tries to reconstruct the input based on the internal representation it has learnt.

\subsubsection{RBM Fantasies: The Free-Phase of a Generative model}\label{S:Dreams}

Where reconstructions try to recreate a supplied input vector, performing many Gibbs iterations with no input pattern clamped, and then taking a sample, is referred to as free-phase sampling of the model. It allows the samples to explore the probability mass that the model has built up during training. Sampling from these wanderings creates what are referred to as `fantasies' or `dreams'.
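Under the same hypothetical names as in the sketches above, free-phase sampling is simply the Gibbs step iterated from a random visible state:
\begin{verbatim}
def fantasy(W, b_hid, b_vis, n_steps=1000, rng=np.random.default_rng()):
    # Start from a random visible pattern and let the chain wander
    v = rng.integers(0, 2, size=b_vis.shape)
    for _ in range(n_steps):
        v, h = rbm_gibbs_step(v, W, b_hid, b_vis, rng)
    return v
\end{verbatim}
How large \texttt{n\_steps} must be before the fantasies reflect the learnt distribution is exactly the mixing-time question raised earlier.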
{"text": "\\subsection{Countermeasures}\n\n\\begin{frame}[t]\n\\begin{center}\\Large Algebraic Security (1/3)\\end{center}\n\n\\begin{columns}\n    \\setlength{\\leftmargini}{0.25cm}\n    \\setlength{\\leftmarginii}{0.5cm}\n    \\column{0.1\\linewidth}{}\n    \\column{0.35\\linewidth}{\n        {\\large Security Model:}\n        \\vspace{0.25cm}\n        \\begin{enumerate}\n            \\item<1-> \\alert{random} bits allowed \\\\\n                \\begin{itemize}\n                    \\item as in classic masking\n                    \\item model~\\alert{unpredictability}\n                    \\item in WB impl. as {\\bf pseudorandom}\n                \\end{itemize}\n            \n            \\item<2-> {\\bf Goal:} \\\\\n                  any $f \\in span\\{v_i\\}$ is \\alert{unpredictable}\n                  \n            \\item<3-> {\\bf isolated} from obfuscation problems\n        \\end{enumerate}\n    }\n    \\column{0.5\\linewidth}{\n        \\includegraphics[height=7cm]{MaskedCircuitWB.png}\n    }\n\\end{columns}\n\\end{frame}\n\n\\newcommand\\bias{\\varepsilon}\n\\newcommand\\adv{\\mathcal{A}}\n\\newcommand\\Advan{\\mathsf{Adv}}\n\n\\begin{frame}[t]\n\\begin{center}\\Large Algebraic Security (2/3)\\end{center}\n\n\\begin{columns}\n    \\setlength{\\leftmargini}{0.25cm}\n    \\setlength{\\leftmarginii}{0.5cm}\n    \\column{0.1\\linewidth}{}\n    \\column{0.35\\linewidth}{\n        {\\large Adversary:}\n        \\vspace{0.25cm}\n        \\begin{enumerate}\n            \\item<1-> chooses plaintext/key pairs\n            \n            \\item<2-> chooses $f \\in span\\{v_i\\}$\n            \\item<3-> tries to {\\bf predict} values of this function \\\\ \n                  (i.e. before random bits are sampled)\n            \n            %\\item<4-> succeeds,\\\\\n            %      if \\alert{only} $f$ matches\n        \\end{enumerate}\n    }\n    \\column{0.5\\linewidth}{\n        \\includegraphics[height=7cm]{MaskedCircuitWB.png}\n    }\n\\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n\\begin{center}\\Large Algebraic Security (3/3)\\end{center}\n\n\\CenterBlock{10cm}{\n\\begin{block}{Proposition}\n\nLet ${\\color{blue}F} = \\{f(x,\\cdot,\\cdot) ~\\mid~ f(x,r_e,r_c) \\in span\\{v_i\\},~ x \\in \\mathbb{F}_2^N\\}.$\n\nLet ${\\color{red}e} = -\\log_2{\\big(1/2 + \\max_{f \\in {\\color{blue}F}}{bias(f)}\\big)}$.\n\nThen for any adversary $\\adv$ choosing $Q$ inputs\n$$\n    \\Advan[\\adv] \\le min(2^{Q-|r_c|}, 2^{-{\\color{red}e}Q}).\n$$\n\\end{block}\n}\n\n\\pause \n\n\\CenterBlock{10cm}{\n\\begin{block}{Corollary}\nLet ${\\color{green!40!black}k}$ be a positive integer. 
Then for any adversary $\adv$
$$
    \Advan[\adv] \le 2^{-{\color{green!40!black}k}} \text{~if~}
    e > 0 \text{~and~} |r_c| \ge k\cdot(1+\frac{1}{{\color{red}e}}).
$$
\end{block}
}

\pause

\center{\bf Information-theoretic security!}
\end{frame}

\begin{frame}[t]
    \Title{Minimalist Quadratic Masking Scheme}
    \begin{columns}
    \setlength{\leftmargini}{0.5cm}
    \column{0.1\linewidth}{}
    \column{0.3\textwidth}{
        \only<1>{
            {\Large Masking scheme}
            \vspace{0.25cm}
            \begin{itemize}
                \item \alert{quadratic} decoder:\\
                      $(a,b,c) \mapsto ab\oplus c$
                \item set of {\bf gadgets} \\
                \item provably secure {\bf composition}
            \end{itemize}
        }
        \only<2->{
            {\Large Security}
            \vspace{0.25cm}
            \begin{enumerate}
                \item algorithm to verify \\
                      that bias $\ne 1/2$
                \item max. degree on $r$: 4
            \end{enumerate}
            \onslide<3->{

            \vspace{0.3cm}

            ~~~~$\Rightarrow$ bias $\le 7/16$

            \vspace{0.3cm}

            ~~~~for 80-bit security \\
            ~~~~we need $|r_c| \ge 940$
            }
        }
    }
    \column{0.5\textwidth}{
        \input{figures/mqms.tex}
    }
    \end{columns}
\end{frame}

\begin{frame}
\Title{Proof-of-concept masked AES-128}

\CenterBlock{10cm}{
\vspace{0.5cm}
\begin{enumerate}
    \item MQMS + 1st-order Boolean masking
    \item 31,783 $\to$ 2,588,743 gates expansion (x81)
    \item 16 Mb code / 1 Kb RAM / 0.05s per block on a laptop
    \item (unoptimized)
\end{enumerate}
}

\vspace{0.5cm}

\center{\Large
    \href{https://github.com/cryptolu/whitebox}{github.com/cryptolu/whitebox}
}

\end{frame}
{"text": "If you haven't downloaded and unzipped \\href{https://libaoj.in/courses/2021f/MATH3341/zip/Math.3341.zip}{\\texttt{Math.3341.zip}}. Download and unzip it under \\verb|H:| (H Drive if you are working on the Remote Lab). Change the current working directory by typing \\verb|cd H:\\Math.3341\\Math.3341.Lab.05| in the Command Window, and type \\verb|edit lab_05_script| in the Command Window to edit \\verb|lab_05_script.m|.\n\n%---------------------------------------------\n\\section{Formatting Numerical Values}\n%---------------------------------------------\n\\begin{enumerate}[(a)]\n    \\item Define a variable \\verb|x|, of which the value is $e^{\\pi}$.\n    \\item Define a cell array \\verb|formatOptions|, of which the entries are listed as follows:\n        \\begin{enumerate}[(1)]\n            \\item \\verb|rat|\n            \\item \\verb|longeng|\n            \\item \\verb|longg|\n            \\item \\verb|longe|\n            \\item \\verb|long|\n            \\item \\verb|shorteng|\n            \\item \\verb|shortg|\n            \\item \\verb|shorte|\n            \\item \\verb|short|\n        \\end{enumerate}\n    \\item Use a for-loop to output \\verb|x| in the above formats (do \\textsc{not} change the order).\n\\end{enumerate}\n%---------------------------------------------\n\\section{Formatting Data using \\lstinline[style=MATLAB]{fprintf}}\n%---------------------------------------------\n\\begin{enumerate}[(a)]\n    \\item Define \\verb|x| to be column vector ranging from $0$ to $2 \\pi$ with $25$ entries, and define \\verb|y1|, \\verb|y2|, \\verb|y3| as follows\n        $$\n        y_1 = \\sin(x/2), \\quad\n        y_2 = \\sin(x), \\quad\n        y_3 = \\sin(2x).\n        $$ \\item Concatenate column vectors \\verb|x|, \\verb|y1|, \\verb|y2|, \\verb|y3|, and store the new 2-D array to \\verb|data|.\n    \\item Print out the heading in the Command Window using \\verb|fprintf|, where the heading of the output is \\verb|x|, \\verb|sin(x/2)|, \\verb|sin(x)|, \\verb|sin(2x)|, whose widths are $9$. The heading should be left-justified.\n    \\item Then use a for-loop to loop over each row of \\verb|data|: use \\verb|fprintf| to print out the numerical values, which have width $9$ with $6$ decimal places, in the Command Window. All numerical values should be left-justified.\n\\end{enumerate}\n\n%---------------------------------------------\n\\section{Formatting Data for \\LaTeX{}}\n%---------------------------------------------\nThis part we will format \\verb|data| (defined above) for \\LaTeX{}.\n\\begin{enumerate}[(a)]\n    \\item Set the output filename to \\verb|sin.tex|, and the permission to \\verb|w| (write mode) in \\verb|fopen| and store the file handle to the variable \\verb|fileHandle|.\n    \\item Use \\verb|fprintf| to print out the setup for \\verb|table| and \\verb|tabular| environments. The output should be as follows\n        \\begin{lstlisting}[style=TeX]\n\\begin{table}[!hbtp]\n\\centering\n\\caption{Sine functions}\n\\label{tab:sin}\n\\begin{tabular}{lcrr}\n\\toprule\n\\midrule\n\\bottomrule\n\\end{tabular}\n\\end{table}\n        \\end{lstlisting}\n    \\item Print out the heading of the data, whose column width is $11$ between \\verb|\\toprule| and \\verb|\\midrule|. 
The expected output is as follows:
        \begin{lstlisting}[style=TeX]
        $x$ & $\sin(x/2)$ &   $\sin(x)$ &  $\sin(2x)$ \\
        \end{lstlisting}
    \item Print out the numerical values of each row in \verb|data| between \verb|\midrule| and \verb|\bottomrule| using a for-loop. Each number has width $9$ and $6$ decimal places. Also, each number should be enclosed by a pair of \verb|$| and separated by \verb|&|. The expected output for one of the rows should be as follows
        \begin{lstlisting}[style=TeX]
$ 0.000000$ & $ 0.000000$ & $ 0.000000$ & $ 0.000000$ \\
        \end{lstlisting}
    \item Print the content of \verb|sin.tex| by calling \verb|type('sin.tex')|.
\end{enumerate}
%---------------------------------------------
\section{Plotting Multiple Functions using for-loop}
%---------------------------------------------
\begin{enumerate}[(a)]
    \item Define a cell array \verb|styles|. The elements are plotting styles, i.e.,
        \begin{enumerate}[(1)]
            \item solid line with circle as the marker;
            \item dashdot line with diamond as the marker;
            \item dashed line with triangle (up) as the marker.
        \end{enumerate}
    \item Define another cell array \verb|y|, of which the entries are \verb|y1|, \verb|y2|, and \verb|y3|.
    \item Then use a for-loop to plot each entry of \verb|y| versus \verb|x| in the same figure window with the above styles (in the same order).
    \item Set the legend, labels, grid, and title. Change the range of the $x$-axis to $[0, 2\pi]$, and that of the $y$-axis to $[-1, 1]$. Set the following properties as you did in the last lab. The expected result is shown in Figure \ref{fig:sin}.
    \begin{itemize}
        \item \verb|XTick| to \verb|[0, pi / 2, pi, 3 * pi / 2, 2 * pi]|;
        \item \verb|XTickLabel| to \verb|{'0', '$\pi/2$', '$\pi$', '$3 \pi/2$', '$2\pi$'}|;
        \item \verb|GridLineStyle| to \verb|'--'|;
        \item \verb|Box| to \verb|'on'|;
        \item \verb|BoxStyle| to \verb|'full'|.
    \end{itemize}
    \item Then save the plot using the following lines of commands:
        \begin{lstlisting}[style=MATLAB]
name = 'lab_05_plot';
fig = figure(1);                         % Set figure i as current figure window
set(fig, 'PaperPositionMode', 'auto');   % Set paper position mode to 'auto'
pos = get(fig, 'PaperPosition');         % Get figure window paper position
set(fig, 'PaperSize', [pos(3) pos(4)]);  % Set figure paper size
print(fig, '-dpdf', name);               % Save figure
        \end{lstlisting}
\end{enumerate}
%---------------------------------------------
Type \verb|diary('lab_05_output.txt')| in the Command Window, run the script file \verb|lab_05_script.m|, and type \verb|diary off| in the Command Window. Upload \verb|lab_05_output.txt|, \verb|sin.tex|, and \verb|lab_05_script.m| to the folder \verb|src| on Overleaf.

On Overleaf, open \verb|body.tex| under the folder \verb|LaTeX|. In the last section of the report, you will reproduce Section \ref{sec:bol} using \LaTeX{}. You may find the following helpful:

\begin{itemize}
    \item You may use environments such as \verb|align|, \verb|figure|, and \verb|table|.
    \item You may use \verb|\includegraphics[width=amount unit]{/path/to/figure.pdf}| to specify the width of a figure.
In our case, the width of the figure is \verb|0.75\textwidth|.
    \item For special characters, you may look them up in \href{https://libaoj.in/files/LaTeX.Mathematical.Symbols.pdf}{\LaTeX{}.Mathematical.Symbols.pdf}.
    \item You may use \verb|\input{/path/to/sin.tex}| to include the table you got from MATLAB.
\end{itemize}

Recompile and submit the PDF file generated by Overleaf to WyoCourses.
%---------------------------------------------

\newpage
\section{Basics of \LaTeX{}}
\label{sec:bol}
\subsection{Sine functions}
For given $x \in [0, 2\pi]$ with step size $\pi/12$, we can obtain the evaluations of \eqref{eq:y1}, \eqref{eq:y2}, \eqref{eq:y3} at $x$ (see Table \ref{tab:sin}), and the corresponding plot (see Figure \ref{fig:sin}).
\begin{align}
y_1 & = \sin(x/2) \label{eq:y1} \\
y_2 & = \sin(x)   \label{eq:y2} \\
y_3 & = \sin(2x)  \label{eq:y3}
\end{align}
\input{../Math.3341.Lab.05.ans/sin.tex}
\begin{figure}[!hbtp]
    \centering
    \includegraphics[width=0.75\textwidth]{../Math.3341.Lab.05.ans/lab_05_plot.pdf}
    \caption{Sine functions}
    \label{fig:sin}
\end{figure}
{"text": "\\documentclass[a4paper,12pt]{report}\n\n\\usepackage{amsmath,amsfonts,mathtools}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\n\\begin{document}\n\\title{PHY293 Abridged}\n\\author{Aman Bhargava}\n\\date{September 2019}\n\\maketitle\n\n\\tableofcontents\n\n\\section{Introduction}\nProf. Grisouard has put up some pretty solid \n\\href{https://github.com/PHY293-Grisouard/Chapters-2019}{notes} written in jupyter notebooks. \nThose offer a great comprehensive guide for the course contents. The goal here is to give\na very concise overview of the things you need to know (NTK) to answer exam questions. Unlike\nsome of our other courses, you don't need to be very intimately familiar with the derivations\nof everything in order to solve the problems (though it certainly doesn't hurt). Think of this \nas a really good cheat sheet.\n\n\\chapter{Simple Harmonic Oscillators (SHO's)}\n\\section{Set Up}\nPretty much the same as covered last year.\n$$F_{restorative} = -kx$$\n$$F_{net} = ma$$\n$$ma + kx = 0$$\nWe let $\\omega_0^2 = k/m$ and rename $a$ to $\\ddot{x}$\n\\section{ODE and Solution}\n\\paragraph{ODE:}\n$$\\ddot{x} + \\omega_0^2 x = 0$$\n\\paragraph{Solution:}\n$$x(t) = a cos(\\omega_0 t + \\theta)$$\nSolve for 2 unknowns (usually $a$ and $\\theta$) based on two initial conditions.\n\n\\chapter{Dampened Harmonic Oscillators (DHO's)}\n\\section{Set Up}\nWe have all the same forces from last time plus the force of friction.\n\n$$F_{friction} = F_f = -bv$$\n\n\\section{ODE and Solutions}\n\\paragraph{ODE}\nWe let $\\gamma = b/m$ so that:\n$$\\ddot{x} + \\gamma \\dot{x} + \\omega_0^2 x = 0$$\n\n\\subsection{General Solution}\n\nAnd for underdampened and over dampened, the overall solution looks like:\n$$x(t) = ae^{rt}$$\nWhere $a, r \\in \\mathbb{C}$, remembering that $$e^{i \\theta} = i sin(\\theta) + cos(\\theta)$$\n\nThis is actually really cool, because the existence of a complex component in $r$ is what \nenables oscillation. \n\nIf we populate the initial ODE with the derivatives of our $x(t)$, we get:\n$$ar^2e^{rt} + \\gamma are^{rt} + \\omega_0^2 ae^rt = 0$$\n$$r^2 + \\gamma r + \\omega_0^2 = 0$$\n\nThis results in a simple quadratic for which you can solve for $r$ using the quadratic formula.\n\n$$x = \\frac{b \\pm \\sqrt{b^2 - 4ac}}{2a}$$\n$$r = \\frac{\\gamma \\pm \\sqrt{\\gamma^2 - 4w_0^2}}{2}$$\n\nThere will be up to 2 $r$ values. Since the solution space is a 2-D vector space, any \nlinear combination of vectors (i.e. values of $r$) is still a valid vector.\n\nYou might be wondering how $a$ figures into this whole thing. That's just based on initial\nconditions, really. \n\n\\subsection{Regieme 1: Underdampened}\nIf the dampening is weak enough, there will still be \\textbf{some} oscillation before the \noscillator comes to rest. In this case, the general solution is:\n\n$$x(t) = A_0 e^{-\\gamma t/2} cos(w_dt + \\phi)$$\nWhere $w_d^2 = w_{dampened}^2 = \\omega_0^2 - \\frac{\\gamma^2}{4}$\n\nYou get this by solving for $r$ with the quadratic formula, plugging the two imaginary \nroots in, then taking just the real components because you know that $x(t)$ is real.\n\n\\subsubsection{Logarithmic Decrement}\nThis quantity is the $ln$ of the ratio of 1 peak to the next.\n\n$$\\frac{\\gamma T_d}{2}$$\n\nWhere $T_d$ is the period based on $w_d$.\n\n\n\\subsection{Regieme 2: Heavily Dampened}\nIn this case both roots are real. This occurs when $\\gamma^2-4\\omega_0^2 > 0$\n\nLet $r_p, r_m$ be the two real roots. 
Then the solution is:

$$x(t) = a_pe^{r_pt} + a_me^{r_mt}$$

You can solve for the $a$ values at leisure with two initial conditions.

\subsection{Regime 3: Critically Dampened}
When the discriminant of the quadratic is zero (i.e. $\gamma^2 = 4\omega_0^2$), the single
repeated root gives fewer unknowns than you have initial conditions to satisfy.
Therefore, the new solution for this exact case is:

$$x(t) = (A + Bt)e^{-\gamma t/2}$$

And it decays the fastest and is pretty cool. Yay.

\section{Energy}
$$E = K + U$$
$$E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$$
$$\frac{dE}{dt} = -bv^2$$

\subsection{Underdampened Energy}
The equations get really messy so we assume that $\omega_0 >> \gamma/2$ and $\omega_d \approx \omega_0$. After
you sub everything in and use the pythagorean identity to cancel out the squared sine/cosine,
you get:

$$E = \frac{1}{2} kA_0^2e^{-\gamma t}$$

\subsection{Critical and Overdampened Energy}
These ones are really easy, they just decay and that's it. Just remember the equations for
energy and use the velocity equation.

\section{Q-Factor}
$Q$ is a measure of the tendency to oscillate divided by the tendency to dampen.
$$Q = \frac{\omega_0}{\gamma}$$
Let $\tau = \frac{1}{\gamma}$ be proportional to the lifetime of the oscillator. Then
$n = \frac{\tau}{T_0}$ is the number of cycles in a lifetime.
$$Q = 2\pi n$$


$Q$ is also proportional to the rate at which dampening removes energy from the system. Then
$$E_n = E_0e^{-\gamma t}$$
represents the amount of mechanical energy in the system at time $t$.
$$\frac{E_n - E_{n+1}}{E_n} = \ldots = \frac{2\pi}{Q}$$

Therefore $Q$ is the energy in the oscillator divided by the amount of energy lost over the
following \textbf{radian}.
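These relationships are easy to sanity-check numerically. The following sketch (plain NumPy, with hypothetical parameter values) evaluates the underdamped solution and the quantities above:
\begin{verbatim}
import numpy as np

def underdamped_x(t, A0, omega0, gamma, phi=0.0):
    # x(t) = A0 e^{-gamma t / 2} cos(omega_d t + phi), valid for gamma < 2 omega0
    omega_d = np.sqrt(omega0**2 - gamma**2 / 4.0)
    return A0 * np.exp(-gamma * t / 2.0) * np.cos(omega_d * t + phi)

omega0, gamma = 10.0, 0.5
omega_d = np.sqrt(omega0**2 - gamma**2 / 4.0)
Q = omega0 / gamma                             # ~20: weak damping
log_dec = gamma * (2 * np.pi / omega_d) / 2.0  # ln(peak_n / peak_{n+1})
\end{verbatim}
With $Q \approx 20$ the amplitude barely decays over a single cycle, consistent with $\omega_d$ being nearly equal to $\omega_0$ here.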
\chapter{Forced Oscillators}
\section{Equation of Motion ODE}
$$\ddot{x} + \omega_0^2 x = F_0' \cos(\omega t)$$
Where $\omega_0^2 = k/m$, $F_0' = F_0/m$, and $\omega$ is the driving frequency.

\section{Solution: Undamped Forced Oscillator}
$$x(t) = A(\omega) \cos(\omega t - \delta(\omega))$$
Where
$$\delta(\omega) = 0 \text{ if } \omega < \omega_0, \text{ and } \pi \text{ otherwise.}$$
If you shake it too fast, it'll vibrate in exactly opposite phase.
$$A(\omega) = \left|\frac{\omega_0^2}{\omega_0^2 - \omega^2}A_f\right|$$

\subsection{$\omega = \omega_0$}
Perfect resonance makes the amplitude tend toward infinity.
$$x(t) = \frac{A_f \omega_0}{\sqrt{2}} t \cos(\omega_0t - \frac{\pi}{4})$$

\section{Solution: Forced Damped Oscillators}
Equation of Motion: $\ddot{x} + \gamma \dot{x} + \omega_0^2 x = A_f \omega_0^2 \cos(\omega t)$

\paragraph{General Solution: } Same as before
$$x(t) = A(\omega) \cos(\omega t - \delta(\omega))$$

\paragraph{Updates to $A$ and $\delta(\omega)$: }
$$A(\omega) = \frac{\omega_0^2}{\sqrt{\gamma^2\omega^2+(\omega_0^2 - \omega^2)^2}}A_f$$
$$\tan[\delta(\omega)] = \frac{\gamma \omega}{\omega_0^2-\omega^2}$$

\paragraph{Maximum $A(\omega)$} occurs when $\omega^2 = \omega_0^2 - \frac{\gamma^2}{2}$. In other words,
$$\omega_{max}^2 = \omega_0^2\left[1-\frac{1}{2Q^2}\right]$$

$$\therefore A_{max} = \frac{Q}{\sqrt{1-\frac{1}{4Q^2}}}A_f$$

\section{Transient Response}
When you suddenly start a damped, forced oscillator, you have one mode that is the long-term response and one
that is the transient response (as if there were no forcing).

$$x_{free} = A_{free} e^{-\gamma t / 2}\cos(\omega_d t + \phi)$$
$$x_{forced} = A(\omega) \cos(\omega t - \delta(\omega))$$

$x_{free}$ tends toward zero and only exists in the beginning, so it is the transient response.

\section{Power in FDHO's}
$$-\dot{E}(\omega, t) = P(\omega, t) = bv^2 = bV^2(\omega)\sin^2(\omega t-\delta)$$
Where $V$ is the amplitude of the velocity.

\subsection{Average Power Per Cycle}
$$\bar{P}(\omega) = \frac{F_0^2}{2m} \frac{\gamma}{\gamma^2 + (\omega_0^2/\omega - \omega)^2}$$
\paragraph{Maximizing Case: } $\bar{P}$ is maximized when $\omega = \omega_0$.

\subsection{Power vs. $\omega$}
Let us plot $\bar{P}(\omega)$ vs. $\omega / \omega_0$. The full width at half height is $\omega_{fwhh} = \gamma$.
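The full-width-at-half-height claim is quick to verify numerically from the formula for $\bar{P}(\omega)$ above (a sketch with hypothetical parameter values):
\begin{verbatim}
import numpy as np

def avg_power(w, F0=1.0, m=1.0, w0=10.0, gamma=0.5):
    # Average power per cycle of the driven, damped oscillator
    return (F0**2 / (2 * m)) * gamma / (gamma**2 + (w0**2 / w - w)**2)

w = np.linspace(8.0, 12.0, 100_001)
P = avg_power(w)
above_half = w[P >= P.max() / 2]
print(above_half[-1] - above_half[0])   # ~0.5, i.e. equal to gamma
\end{verbatim}
Solving $(\omega_0^2/\omega - \omega)^2 = \gamma^2$ exactly gives two half-height frequencies whose difference is precisely $\gamma$, so the numerical result is not a grid artefact.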
\chapter{Coupled Oscillators}
\paragraph{Main Takeaways}
\begin{itemize}
\item Motion of coupled systems is a linear combination of orthogonal forms of motion.
\item Motion can be found via a matrix eigenvalue system (2x2 for two masses).
\item Each normal mode has a unique frequency common to all elements.
\item `Eigen' = `normal'.
\item \textbf{Beating} = when frequencies of modes are very close.
\end{itemize}

\subsection{How to Solve Coupled Systems}
From the tutorial note:
\subsubsection{Determine Equations of Motion}
You must give each mass its own coordinate system, where deviation in one direction adds to the zero
coordinate and deviation in the other subtracts from it.

Implement $f = ma$ and $f = -kx$ to create equations of motion for each mass depending on its position
and the position of its neighbouring masses.

Now we have $$m\ddot{x_a} = \ldots$$ $$\ldots$$ $$m\ddot{x_c} = \ldots$$

\subsubsection{Assume $\exists$ a normal mode}
This implies that $$x_a = A \cos(\omega t)$$ $$\ldots$$ $$x_c = C \cos(\omega t)$$

All we need to solve for now is $\omega$ and the coefficients $A$ through $C$. To do so, we sub our definitions for $x_a, x_b,$ and $x_c$
into our initial EoM, which gives us relationships between $A, B, C, \omega$ (the cosines cancel out!).

Now we just have to put everything in a matrix equation of the form:
$$M\begin{bmatrix}A\\B\\C\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$

which implies that $\det(M) = 0$. This will give us $3$ different eigenfrequencies (or however many degrees
of freedom we have). From there, we just have to solve for the coefficients $A, B, C$ via initial conditions,
remembering that $x_a = A\cos(\omega t)$ and so on!

\subsection{Review: Results from Linear Algebra}
\begin{itemize}
\item $Av = \lambda v$, $(A-\lambda I)v = 0$
\item We get non-trivial solutions for $v$ if $\det(A - \lambda I) = 0$
\item A matrix is diagonalizable if $\exists n$ distinct eigenvalues $\lambda$ (sufficient, not necessary)
\item Each solution to the initial $Ax = b$ is unique because the eigenvectors form a basis.
\end{itemize}

\paragraph{Connecting Eigen Things to Real Things}
\begin{itemize}
\item Each $\lambda_i = \omega_i^2$
\end{itemize}

\chapter{Standing Waves}
\paragraph{Travelling wave: } a travelling disturbance where no matter travels on average.
\paragraph{Continuum: } infinite coupled oscillators $\to$ infinitely many modes can arise, characterized
by nodes and antinodes.

Boundaries give rise to standing waves (think about boundaries on a plucked guitar string).

\section{Expectations and Takeaways}
\begin{enumerate}
\item Wave equation is $$\frac{\partial^2 y}{\partial t^2} = v^2\frac{\partial^2y}{\partial x^2}$$
\item Phase speed equation is $$v = \sqrt{\frac{T}{\mu}}$$
\item \textbf{Separation of variables} is when you assume the solution factors as $y(x,t) = f(x)h(t)$, which splits the wave equation into two ODEs.
\item Wavenumbers $k_n$ essentially count the antinodes in the given mode, angular frequencies $\omega_n$ are the frequencies at which mode $n$ vibrates,
and wavelengths $\lambda_n$ are the physical lengths of each wave in mode $n$.
\item Orthogonality relates to how different modes of oscillation in standing waves don't interact.
\item Total energy = sum of energy in each mode of oscillation.
\end{enumerate}

\section{The Taut String}
Properties of a taut string:
\begin{itemize}
\item $\tan\theta = \frac{dy}{dx} \ll 1$ where $x$ is along the string and $y$ is across the string.
\item Vibrations occur in only one plane
\item \textbf{Phase speed } is $v^2 = T/\mu$ with $T$ as \textbf{tension} and $\mu$ as \textbf{mass per unit length of string}.
\end{itemize}

\paragraph{WAVE EQUATION: } $$\frac{\partial^2 y}{\partial t^2} = v^2 \frac{\partial^2y}{\partial x^2}$$

\section{Solution to Wave Equation}
We assume that the solution is of the form $$y(x, t) = f(x)h(t)$$ so that it is separable and works nicely
with the wave equation. When we sub this into the wave equation, we get:

$$\frac{1}{h(t)} \frac{d^2h(t)}{dt^2} = -\omega^2 = v^2 \frac{1}{f(x)} \frac{d^2f(x)}{dx^2}$$

So we can still have an $\omega^2$ that fits well into our SHO ODE:

$$h(t) = H \cos(\omega t + \phi)$$ $$f(x) = A \cos(kx) + B \sin(kx)$$ where $k = \omega/v$

\subsection{Boundary Conditions}
We know that $y(t, 0) = y(t, L) = 0$ because the string is secured on either end. This forces the $A$ term
in the equation $f(x) = A \cos(kx) + B \sin(kx)$ to be zero, so now $f(x) = B \sin(kx)$. We also need
$\sin(kL) = 0$, which holds only when $kL$ is an integer multiple of $\pi$:

$$k_n = \frac{n \pi}{L} \,\,, \,\, n \in \mathbb{N}$$

Recall that $k = \omega/v$, so now $\omega_n = \frac{n \pi v}{L}$.

\subsection{Final Solution}
$$y_n(x, t) = C_n \cos(\omega_n t + \phi_n)\sin(k_n x)$$
$$y(x, t) = \sum_{n=1}^{\infty} y_n(x, t)$$

\subsection{Surrounding Math}
We need some tools to do proofs and reasoning about these standing and travelling waves.
$$f \odot g = \frac{2}{L} \int_0^L f(x)g(x)\, dx$$
which represents the inner product of the vector space, and
$\delta_{mn}$, which is 1 when $m=n$ (and 0 otherwise).
$$\sin(k_n x) \odot \sin(k_m x) = \delta_{mn}$$

because the modes are orthogonal to each other.

\subsection{Energy Considerations}
$$E_n = \frac{1}{4} \mu L \omega_n^2 A_n^2$$

\chapter{Travelling Waves}
The real general solution to the wave equation:
$$\frac{\partial^2 y}{\partial t^2} = v^2 \frac{\partial^2 y}{\partial x^2}$$
$$y(x, t) = f(x-vt) + g(x+vt)$$

It can be shown via trig sums that this is the same as the standing wave solution when you
put it between two boundaries.

\paragraph{Most common travelling wave function: } $$y(x, t) = A \sin\left(\frac{2 \pi}{\lambda} (x \pm vt)\right)$$
$$= A \sin(k(x\pm vt))$$
$$= A \sin(kx \pm \omega t)$$
\begin{itemize}
\item $\lambda$ = wavelength
\item $k$ = wavenumber
\item $\omega^2 = v^2k^2$, where $k$ is the wave number.
\item $\frac{2\pi}{\omega}$ = period
\end{itemize}

\section{Energy Considerations}
$$K = \frac{1}{2} \mu \left(\frac{\partial y}{\partial t}\right)^2$$
$$U = \frac{1}{2} \mu v^2 \left(\frac{\partial y}{\partial x}\right)^2$$

$\langle\cdot\rangle$ corresponds to integration over one wavelength, so $\langle f \rangle = \int_0^{\lambda} f(x)\, dx$.

To find $\langle E \rangle$ we can apply the above formulae for $K$ and $U$ to the general solution
$y(x, t)=A \cos(kx-\omega t)$.

$$\langle E \rangle = \frac{1}{2} \mu \lambda A^2 \omega^2$$

\section{Power Considerations}
Since these waves are unbounded travelling entities, it's useful to consider how much
energy they transport past a certain point per second.

We start with instantaneous energy $E = \mu A^2 \omega^2 \sin^2(kx - \omega t)$. Then
$$\bar{P} = \frac{\omega \langle E \rangle}{2 \pi} = \frac{\mu \lambda A^2 \omega^3}{4 \pi} = \frac{\mu A^2 \omega^2 v}{2}$$

\section{Reflection and Transmission}
When the medium changes discontinuously, you get reflection and transmission. The boundary conditions
at the discontinuity are:
\paragraph{Boundary Condition 1: } matching frequency, amplitude, and phase.
\paragraph{Boundary Condition 2: } transverse (across-string) forces must match.

Therefore we know that
$$y_T(x, t) = A_T \cos(\omega t - k_T x)$$ $$y_R = A_R \cos (\omega t + k_R x)$$

have the same $\omega t$ term, and $k_R, k_T > 0$ so they propagate in the right directions. The reflected
wave is in the same original medium, so its spatial frequency is $k_R = k_I = \omega/v_1$, while $k_T = \omega/v_2$.

Now the only numbers that aren't known are $A_R, A_T$. According to boundary condition 1, $A_I + A_R = A_T$.
Now we use boundary condition 2 to solve for
$$\frac{A_T}{A_I} = \frac{2k_1}{k_1+k_2} = \pmb{\tau_{12}} = \frac{2 \sqrt{\mu_1}}{\sqrt{\mu_1} + \sqrt{\mu_2}} $$
$$\frac{A_R}{A_I} = \frac{k_1 - k_2}{k_1 + k_2} = \pmb{\rho_{12}} = \frac{\sqrt{\mu_1} - \sqrt{\mu_2}}{\sqrt{\mu_1} + \sqrt{\mu_2}}$$

\section{Travelling Waves in Multiple Dimensions}
The wave equation becomes: $$\frac{\partial^2 f}{\partial t^2} = v^2\nabla^2f = v^2\left[\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}\right]$$

\chapter{Dispersion}
For non-dispersive waves (the ones we have been talking about up to this point), the following relationship holds between angular
frequency, wave number $k$, and velocity $v$: $$\omega^2 = k^2v^2$$

For dispersive waves, you have a non-linear relationship. Therefore different waves disperse at different rates.
Let's think about the phenomenon of beating: when you have similar-frequency waves going past each other, you get these ``beats'' from
interference patterns (look up a picture to solidify your understanding!).

Since we have different frequencies going at different rates, the beats now move! Let's have an example:

\section{Example of Dispersive Waves}
Let the following relationship hold for angular frequency and speed of a wave ($g$ is actually just gravity, this is a weird example):
$$\omega = \sqrt{gk}$$
$$v = \omega/k = \sqrt{g/k}$$

The only thing you can really be asked on the exam is to find the group velocity (according to Shayne, at least):
$$v_g = \frac{d\omega}{dk} \approx \frac{\Delta \omega}{\Delta k}$$

Group velocity is just the speed of the little wave packets. It's an absolute speed that a stationary observer would see. If you have
two different waves interfering at $\omega_1$ and $\omega_2$, use the $\Delta$ form to find the speed of the packets!

\chapter{Special Relativity}
\section{Introduction}
An intuitive grasp of this stuff is pretty easy. Just watch the minute physics videos about it, they do a great job.

The key points are that light speed is constant no matter your frame of reference. In other words, it looks like $v_1 + c = c$,
regardless of what $v_1$ is. Therefore, the way you would think to transform a spacetime diagram to be from another person's
reference frame is not correct. You use this cool Lorentz transformation thing:

\section{Formulae}
\paragraph{The Lorentz factor} comes up all the time: $$\gamma(v) = \frac{1}{\sqrt{1-v^2/c^2}}$$
\subsection{Time Dilation}
Let's say you pass by some dude on a planet at close to the speed of light. Both of you will see the other acting really slowly. In other
words, \textbf{the moving clock is always observed to be slower than the stationary clock}. Therefore, for this relationship, $\Delta t'$
is just \textbf{the other clock}:
$$\Delta t' = \gamma(v)\Delta t$$

\subsection{Length Contraction}
The same idea applies, with \textbf{the moving object always observed to be scrunched up more than the stationary object}:
$$L=\frac{L_0}{\gamma(v)}$$

\subsection{Relativistic Events: Location and Time}
Let's say you're watching a dude running close to the speed of light ($v$) who is about to detonate a cake. If the dude does so at
position $x$ and time $t$ from his perspective, the position $x_{new}$ and time $t_{new}$ that you observe are given by the following
equations.
$$t_{new}=\gamma\left(t-\frac{v}{c^2}x\right)$$
$$x_{new}=\gamma(x-vt)$$

\subsection{Relativistic Energy and Momentum}
$$p=\gamma(v)m_0v$$
$$E = \gamma mc^2$$
$$KE=E-mc^2=\gamma mc^2-mc^2=(\gamma-1)mc^2$$

\paragraph{Invariant Energy: }
$$m_0^2c^4=E^2-p^2c^2$$

Apparently the doppler shift equation (relativistic) is given, so don't worry too much about that.
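As a quick worked check of these formulae: at $v = 0.6c$ we get $\gamma = 1/\sqrt{1 - 0.36} = 1.25$, so a moving clock is observed to run slow by a factor of $1.25$ ($\Delta t' = 1.25\,\Delta t$), a moving metre stick is observed contracted to $L = 1/1.25 = 0.8\,$m, and the kinetic energy is $KE = (1.25 - 1)mc^2 = 0.25\,mc^2$.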
\subsection{4-Vectors}
In conventional 3-D space, you can have nice invariant properties of objects, like relative distances. In relativistic 4-D space
you can have nice invariant 4-vectors that tell you the relative distance of objects and events in 4-D space!

$$\Delta x = (c\Delta t,\Delta x, \Delta y, \Delta z) = (c\Delta t, \pmb{r})$$

\paragraph{The Inner Product} (the thing that is actually invariant) is given by:
$$A = (A_0, \pmb{A}); \, B = (B_0, \pmb{B})$$
$$A\cdot B=A_0B_0 - \pmb{A}\cdot \pmb{B}$$

Where the bolded $\pmb{A}$ and $\pmb{B}$ are vectors of the three spatial coordinates, and $A_0/B_0$ is the $c\Delta t$ component of the 4-vector.

The other form of 4-vector is as follows:

$$...$$



\chapter{Quantum}
\section{Particle Energy, Wave Energy}
Here is a hodge podge of fun properties about waves (and particles, cause they're waves too). Shoutout to De Broglie.
\begin{itemize}
\item Energy of particle/wave with angular frequency: $E = \hslash \omega$
\item Momentum of particle/wave with angular frequency: $p = \hslash \omega/c$. Now is as good a time as any to tell you that $k=\omega/c$ (for light), which is really annoying and confusing.
\end{itemize}

\section{Compton Scattering}
When a high energy photon with wavelength $\lambda_0$ collides with an electron, the photon scatters off at angle $\theta$ with respect to its original direction. The wavelength of the photon
changes accordingly to $\lambda'$:

$$\lambda' - \lambda_0 = \frac{R}{mc}(1-\cos\theta)$$
\begin{itemize}
\item $\lambda'$ is the final wavelength of the photon.
\item $\lambda_0$ is the initial wavelength of the photon.
\item $R$ is some constant we're given.
\item $m$ is the mass of the electron.
\item $c$ is the speed of light.
\item $\theta$ is the angle of deflection of the photon with respect to its initial line of attack.
\end{itemize}

\section{Photoelectric Effect}
If you yeet some photons at a piece of metal, the electrons get ejected if the energy of the light packets is high enough (if the frequency of light is high enough).
$$E_x = \hslash \omega - \phi$$
where $\omega$ is the frequency of light and $\phi$ is the binding energy of the electrons. $E_x$ is the energy of the ejected electrons.

\section{Hydrogen Model}
$$\omega_{mn}=2\pi R_0 c\left(\frac{1}{n^2} - \frac{1}{m^2}\right)$$

\section{De Broglie Waves}
$$\lambda = \frac{h}{p} = \frac{h}{mv}$$
$$E = \hslash \omega$$
$$p = \hslash k; \,\,\,\,\,\, k = \omega/c$$

\section{Wave-Particle Duality}
\subsection{Uncertainty Principle}
$$\Delta x\Delta p_x \geq \frac{\hslash}{2}$$
\subsection{Double Slit Experiment}
If you shine a laser through two slits (or anything really), the two slits will act as sources of the exact same frequency light, and they will
interfere with each other to create diffraction patterns on the wall. A difference in path length equal to an integer multiple of the wavelength makes for
constructive interference; a difference of an integer and a half wavelengths makes for destructive interference.

$$m \lambda = d\sin \theta$$
Where \begin{itemize}
\item $m$ is some integer (0 is the central node, 1 is the next, etc.)
\item $\lambda$ is the wavelength of light (or particles) coming through.
\item $d$ is the distance between the slits.
\item When the equation holds true for an integer value of $m$, you get a constructive point. If it's an integer and a half, it's a destructive point.
\end{itemize}


\section{Black Body Radiation}
A body absorbs and emits radiation with the exact same efficiency for any given frequency.
A black body is a perfect absorber and a perfect emitter.
$$P=\sigma T^4$$
$P$ is the power per square meter of the emitter surface, $\sigma$ is the Stefan-Boltzmann constant, and $T$ is the temperature.

$$f_{max} \propto T$$
In other words, the frequency of radiation that is absorbed/emitted most is proportional to temperature.



\chapter{Matter Waves and The Schrödinger Equation}
This problem is basically just $\pmb{A}\vec{x} = \lambda \vec{x}$ as follows:
$$\left(\frac{p^2}{2m} + V\right) \Psi = E\Psi$$
\begin{itemize}
\item $p$ is the momentum
\item $m$ is the mass
\item $V$ is a conservative field of potential energy.
\item $E$ is the energy.
\end{itemize}

\section{Normalization and Probability}
Solve for $N$ such that $$\int_{-\infty}^{\infty} |N\psi(x)|^2\, dx = 1$$

The probability of the particle being between two points $a$ and $b$ is just the integral $\int_a^b |\psi(x)|^2\, dx$, as long as $\psi$ is
normalized.

\section{Energy Considerations}
\paragraph{Quantum Kinetic and Potential Energy} The kinetic energy is given by: $$K = \frac{\hat{p}^2}{2m}\psi(x) = -\frac{\hslash^2}{2m}\nabla^2\psi(x)$$
Where
\begin{itemize}
\item $\hat{p} = -i\hslash \frac{d}{dx}$
\end{itemize}
and the potential energy comes from whatever potential field the particle sits in: $U = V(x)\psi(x)$.

\section{Infinite Well}
This is a pretty common question to ask. The high level explanation is as follows:
\begin{itemize}
\item There is a square well of potential energy with infinitely high side walls.
\item The particle is trapped inside. It cannot tunnel out because of the infinitely high walls.
\item At the edges of the well and beyond, the wave function must be 0.
\end{itemize}
This is starting to sound a lot like the boundary conditions for standing waves! You can derive this relationship with a bunch
of complicated math, but here's the final formula for the energy of each of the possible standing wave patterns that a particle
can occupy:
$$E_n = \frac{n^2\pi^2\hslash^2}{2mL^2}$$
Where $n$ is any positive integer, $m$ is the mass of the particle, and $L$ is the length of the well.

\paragraph{The actual equation} for the wave is as follows: $$\Psi(x) = C\sin(Ax)$$
Where $AL = n\pi$.

\subsection{Superposition}
When you \textbf{measure} the particle, there is only one energy level $n$ that tells you the frequency of the standing wave and, by extension,
the energy at that level.

Before you observe the particle, it is in a superposition of a bunch of $n$ values. However, once you observe the particle, it becomes constrained
to one $n$ value. This is why the photoelectric effect still has discrete energy levels!

\section{Expectation Values}
Honestly you can get this from our vector calculus course pretty well. We know that the probability distribution is given by $P = |\psi|^2$. From vector calculus,
we also know how to get the expected values of things from probability distributions. They like to write this with annoying complex conjugates but it's the same thing.
$$\langle x \rangle = \int_{-\infty}^{\infty}\psi^* x \psi\, dx = \int_{-\infty}^{\infty}|\psi|^2 x\, dx = \int_{-\infty}^{\infty}P(x)\, x\, dx$$

So yeah, nothing really to worry about. Same stuff, different variable names.
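For instance, here is a quick numerical check (hypothetical well length; not from the course) of $\langle x \rangle$ for the normalized $n=1$ infinite-well state $\psi(x) = \sqrt{2/L}\sin(\pi x/L)$:
\begin{verbatim}
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 10001)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)  # n = 1 infinite-well state
# <x> = integral of |psi|^2 x dx, approximated by a Riemann sum
exp_x = np.sum(psi**2 * x) * (x[1] - x[0])
print(exp_x)                                    # ~0.5: the middle of the well
\end{verbatim}
The symmetric ground state gives $\langle x \rangle = L/2$, as you'd guess from the probability density alone.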
\section{Operators}
Operators are applied to the wave function $\Psi$ to get physical values out of the wave function. For some reason, those values are called eigenvalues. Whatever.
Here are the operators.
\begin{enumerate}
\item Position operator $\hat{x}$: $\hat{x}\Psi = x\Psi; \hat{r}\Psi=r\Psi$
\item Momentum operator $\hat{p}$: $\hat{p}_x=-i\hslash\frac{\partial}{\partial x}$
\item Kinetic energy operator $\hat{K}$: $\hat{K}=-\frac{\hslash^2}{2m}\nabla^2$
\item Total energy operator $\hat{H}$: $\hat{H} = \hat{K} + \hat{U}$
\item Potential energy is just from any fields of potential energy the particle happens to be riding on.
\end{enumerate}

\section{Schrödinger's For Free Particle}
Remember the general travelling wave equation? Yup, it's back baby!
$$\Psi=A\cos(kx-\omega t)+iA\sin(kx-\omega t)$$
And here's how you get all those parameter values:
$$p=\hslash k; \,\,\,K=\frac{1}{2}mv^2;\,\,E=\hslash\omega$$
Just remember that $k$ and $K$ are different things ($k$ is the spatial frequency, $K$ is the kinetic energy of the particle).


\end{document}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage[margin=0.9in, a4paper]{geometry}\n\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{hyperref}\n\n\\newtheorem*{claim}{Claim}\n\n\\thispagestyle{empty}\n\n\\begin{document}\n  \\section*{Finding a colour with AA contrast ratio} % (fold)\n  \\label{sec:the_problem}\n\n  The \\href{https://www.w3.org/TR/2008/REC-WCAG20-20081211/#visual-audio-contrast-contrast}{WCAG AA contrast requirement} for text requires a contrast ratio between the text and the background of at least 4.5:1.\n  Given a background colour, suppose we want to find a colour that satisfies this contrast ratio as quickly as possible.\n\n  The two most opposite colours are white (\\texttt{\\#ffffff}) and black (\\texttt{\\#000000}), so it makes sense to try those first.\n\n  \\begin{claim}\n    Every colour has a contrast ratio of at least 4.5:1 with at least one of white  or black.\n  \\end{claim}\n\n  \\begin{proof}\n    The WCAG defines the \\emph{contrast ratio} of two colours to be\n    \\begin{equation*}\n      \\frac{L_1 + 0.05}{L_2 + 0.05},\n    \\end{equation*}\n    where $L_1$ is the relative luminance of the lighter colour, and $L_2$ is the relative luminance of the darker colour.\n    (See \\href{https://www.w3.org/TR/WCAG20-TECHS/G17.html}{WCAG technique G17}.)\n\n    The relative luminance of white is $L_\\text{white} = 1$, and the relative luminance of black is $L_\\text{black} = 0$.\n\n    Suppose there were a colour with relative luminance $L$, which has insufficient contrast with both white and black.\n    It must satisfy both:\n    \\begin{equation*}\n      \\frac{L_\\text{white} + 0.05}{L + 0.05} < 4.5\n      \\hspace{1cm} \\text{and} \\hspace{1cm}\n      \\frac{L + 0.05}{L_\\text{black} + 0.05} < 4.5\n    \\end{equation*}\n    The luminance of a colour always satisfies $0 \\leq L \\leq 1$, so we can simplify these to:\n    \\begin{equation*}\n      L > \\frac{11}{60} = 0.18333\\ldots\n      \\hspace{1cm} \\text{and} \\hspace{1cm}\n      L < \\frac{7}{40} = 0.175.\n    \\end{equation*}\n    This is a contradiction, which means there is no colour which has insufficient contrast with both white and black.\n\n    This means we can always find a colour with sufficient contrast in at most two lookups: first we try white, then we try black.\n  \\end{proof}\n\n  % section the_problem (end)\n\n  \\section*{Finding a colour with AAA contrast ratio} % (fold)\n  \\label{sec:finding_a_colour_with_aaa_contrast_ratio}\n\n  The enhanced contrast requirement requires a contrast ratio between the text and the background of at least 7.\n  Can we use white and black to find a guaranteed colour with this contrast ratio?\n\n  It turns out not: if you try to repeat the proof above with a contrast ratio of 7, not 4.5, you don't get a contradiction.\n  Instead, yopu learn that a colour has insufficient contrast with both white and black if and only if\n  \\begin{equation*}\n    0.1 < L < 0.3\n  \\end{equation*}\n  and there are colours with this luminance.\n\n  If you work through the greys, you find \\texttt{\\#5a5a5a}, which has a relative luminance of 0.102, a contrast ratio 6.897:1 with white and 3.045:1 with black.\n  If you go right to the middle, \\texttt{\\#7f7f7f} has a contrast ratio 4.004:1 with white and 5.245:1 with black.\n\n  Indeed, there are no colours that have a contrast ratio of 7:1 with \\texttt{\\#7f7f7f}.\n  (For proof, imagine such a colour with luminance $L$, and consider the cases $L<L_\\text{\\texttt{\\#7f7f7f}}$ and 
$L>L_\\text{\\texttt{\\#7f7f7f}}$.\n  In both cases you discover the contrast is less than 7.)\n  As you keep increasing your contrast requirement, some colours become unusable.\n  Eventually you reach the maximum contrast ratio of 21:1, when the only colours you can use are black and white.\n\n  % section finding_a_colour_with_aaa_contrast_ratio (end)\n\n\\end{document}\n", "meta": {"hexsha": "14e0dbf859075a2b89032a1b8d566addc6639539", "size": 3753, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/2019-08-tint-colours/wcag-black-and-white.tex", "max_stars_repo_name": "alexwlchan/alexwlchan.net", "max_stars_repo_head_hexsha": "f34c4b11dcebd5b6eb65de6caa7fe320fff7a25a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 33, "max_stars_repo_stars_event_min_datetime": "2017-09-25T13:53:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T11:59:05.000Z", "max_issues_repo_path": "assets/2019-08-tint-colours/wcag-black-and-white.tex", "max_issues_repo_name": "alexwlchan/alexwlchan.net", "max_issues_repo_head_hexsha": "f34c4b11dcebd5b6eb65de6caa7fe320fff7a25a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 94, "max_issues_repo_issues_event_min_datetime": "2017-09-09T07:27:25.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T02:45:39.000Z", "max_forks_repo_path": "assets/2019-08-tint-colours/wcag-black-and-white.tex", "max_forks_repo_name": "alexwlchan/alexwlchan.net", "max_forks_repo_head_hexsha": "f34c4b11dcebd5b6eb65de6caa7fe320fff7a25a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2017-09-25T13:59:34.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-07T09:23:02.000Z", "avg_line_length": 46.3333333333, "max_line_length": 211, "alphanum_fraction": 0.7186250999, "num_tokens": 1137, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7606506418255928, "lm_q1q2_score": 0.5652912227353315}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsthm}\n\\usepackage{breqn}\n\\usepackage{enumitem}\n\\usepackage{tcolorbox}\n\\usepackage{hyperref}\n\\title{Notes on Quantum Complexity}\n\\author{S.G. Schoeman}\n\\date{August, 2021}\n\\usepackage{physics}\n\\usepackage{amsfonts}\n\\newtheorem{definition}{Definition}\n\\newtheorem{fact}[definition]{Fact}\n\\newtheorem{formula}[definition]{Formula}\n\\newtheorem{equations}[definition]{Equations}\n\\newtheorem{External Info}[definition]{External Info}\n\\newtheorem{treatment}[definition]{Treatment}\n\\begin{document}\n\n\\maketitle\n\n\\begin{tcolorbox}[title=Definition: Linear Vector Space]\\begin{definition}[Linear Vector Space]\\label{0}There are these axioms, you see... \n \n This definition has no references is thus rank $0$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Field]\\begin{definition}[Field]\\label{1}A field is the numbers over which a vector space is defined.\n \n This definition references: (\\ref{0}), and is thus rank 1.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Linear Independence of Vectors]\\begin{definition}[Linear Independence of Vectors]\\label{2}A set of vectors $\\mathbb{V}$ is said to be linearly independent if the only such\nlinear relation $$a_1v_1+a_2v_2+\\ldots+a_nv_n=0,$$ where all $v_i\\in\\mathbb{V}$ and all\n$a_i\\in\\mathbb{C}$, is the trivial one with all $a_i = 0$. If the set of vectors\nis not linearly independent, we say they are linearly dependent. \n \n This definition references: (\\ref{0}), and is thus rank 1.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Vector Space Dimension]\\begin{definition}[Vector Space Dimension]\\label{3}The dimension of a vector space is equal to the minimum number of linearly independent vectors it requires to be spanned.\n \n This definition references: (\\ref{0}), (\\ref{2}), and is thus rank 2.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Fact]\\begin{fact}[]\\label{4}Any vector $\\ket{V}$ in an $n$-dimensional space can be written as a\nlinear combination of $n$ linearly independent vectors $\\ket{1}\\ldots\\ket{n}$.\n \n This fact references: (\\ref{0}), (\\ref{2}), (\\ref{3}), and is thus rank 3.\\end{fact}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Vector Basis]\\begin{definition}[Vector Basis]\\label{5}A set of $n$ linearly independent vectors in an $n$-dimensional space\n is called a basis.\n \n This definition references: (\\ref{0}), (\\ref{2}), (\\ref{3}), and is thus rank 3.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Coordinates of a Vector w.r.t. a Basis]\\begin{definition}[Coordinates of a Vector w.r.t. 
a Basis]\\label{6}The coordinates of a vector $\\ket{v}$ in a basis $\\{\\ket{j_i}:i\\in I\\}$\n are the coefficients of the expansion $$\\ket{v}=v_1\\ket{j_1}+\\ldots+v_n\\ket{j_n}$$ \n of $\\ket{v}$ in that basis.\n \n This definition references: (\\ref{5}), (\\ref{4}), and is thus rank 4.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Fact]\\begin{fact}[]\\label{7}The Hamiltonian $H$ of a spring-mass harmonic oscillator is given by\\begin{equation}H=\\frac{p^2}{2m}+\\frac{\\omega^2x^2}{2}\\end{equation}\n where $\\omega^2=k/m$ is the classical frequency of the oscillating system.\n \n This fact has no references and is thus rank $0$.\\end{fact}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Fact]\\begin{fact}[]\\label{8}The inverted harmonic oscillator changes the sign of the harmonic oscillator,\n so it has Hamiltonian\\begin{equation}H=\\frac{p^2}{2m}-\\frac{\\omega^2x^2}{2}\\end{equation}\n where $\\omega^2=k/m$ is the classical frequency of the oscillating system.\n \n This fact references: (\\ref{7}), and is thus rank 1.\\end{fact}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Fact]\\begin{fact}[]\\label{9}Put $\\omega^2=m^2-\\lambda$, then $\\lambda<m^2$ describes the\n regular harmonic oscillator and $\\lambda>m^2$ describes the inverted one, while\n $m^2=\\lambda$ is simply a free particle; that is, a particle such that its potential energy\n is independent of its position.\n \n This fact references: (\\ref{7}), (\\ref{8}), and is thus rank 2.\\end{fact}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Formula]\\begin{formula}[]\\label{10}Consider the following system. \n $$\\psi(x,t)=\\mathcal{N}(t)\\exp\\left(-\\frac{1}{2}\\omega_r x^2\\right)$$\n with initial conditions $\\psi(x,0)=\\psi_0,~\\mathcal{N}(0)=\\mathcal{N}_0$, \n and where $\\omega_r=m$.\n \n This formula has no references and is thus rank $0$.\\end{formula}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Hermitian Operator]\\begin{definition}[Hermitian Operator]\\label{11}A Hermitian operator is an operator that is equal \n to its own conjugate transpose.\n \n This definition has no references and is thus rank $0$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Hermitian Matrix]\\begin{definition}[Hermitian Matrix]\\label{12}A Hermitian matrix is a complex square \n matrix that is equal to its own conjugate transpose.\n \n This definition has no references and is thus rank $0$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Equations]\\begin{equations}[]\\label{13}The equations of motion for the oscillator are $$\\dot{x}=\n \\frac{\\partial H}{\\partial p}=\\frac{p}{m},~~~~\\dot{p}=-\\frac{\\partial H}{\\partial x}=\n -m\\omega^2 x$$ which, through elimination of $\\dot{p}$, becomes\n $$\\ddot{x}=-\\omega^2x$$ which is easily solved by\n $$x(t)=A\\cos(\\omega t+\\phi)$$ where $x_0=x(0)=A\\cos(\\phi)$. 
Note then that\n $$E=T+V=\\frac{1}{2}m\\dot{x}^2+\\frac{1}{2}m\\omega^2x^2=\\frac{1}{2}mA^2\\omega^2$$\n where $E$ is the energy of the classical system.\n \n This equations references: (\\ref{7}), and is thus rank 1.\\end{equations}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Fact]\\begin{fact}[]\\label{14}The equation for a quantum oscillator with state $\\ket{\\psi}$ is\n $$i\\hbar\\frac{d}{dt}\\ket{\\psi}=H\\ket{\\psi}$$\n \n This fact has no references is thus rank $0$.\\end{fact}\\end{tcolorbox}\n\\begin{tcolorbox}[title=External Info: AdS/CFT correspondence]\\begin{External Info}[AdS/CFT correspondence]\\label{15}\n \n This External Info references: \\{\\href{https://en.wikipedia.org/wiki/AdS/CFT_correspondence}{link}\\}, [\\ref{3}], and is thus rank $\\infty$.\\end{External Info}\\end{tcolorbox}\n\\begin{tcolorbox}[title=External Info: Einstein-Rosen Bridge (Wormhole)]\\begin{External Info}[Einstein-Rosen Bridge (Wormhole)]\\label{16}\n \n This External Info references: \\{\\href{https://en.wikipedia.org/wiki/Wormhole}{link}\\}, [\\ref{3}], and is thus rank $\\infty$.\\end{External Info}\\end{tcolorbox}\n\\begin{tcolorbox}[title=External Info: Complexity Equals Action]\\begin{External Info}[Complexity Equals Action]\\label{17}\n \n This External Info references: [\\ref{5}], and is thus rank $\\infty$.\\end{External Info}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Vector Space]\\begin{definition}[Vector Space]\\label{18}A vector space (also called a linear space) is a set of objects called vectors, which may be added together and multiplied (\"scaled\") by numbers, called scalars. They obey specific axioms...\n \n This definition references: (\\ref{0}), \\{\\href{https://en.wikipedia.org/wiki/Vector_space}{link}\\}, and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Binary Operation]\\begin{definition}[Binary Operation]\\label{19}In mathematics, a binary operation or dyadic operation is a calculation that combines two elements (called operands) to produce another element. 
More formally, a binary operation is an operation of arity two.\n \n This definition references: \\{\\href{https://en.wikipedia.org/wiki/Binary_operation}{link}\\}, and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Inner Product Space]\\begin{definition}[Inner Product Space]\\label{20}An inner product space, or a Hausdorff pre-Hilbert space, is a vector space with a binary operation called an inner product.\n \n This definition references: (\\ref{18}), (\\ref{19}), \\{\\href{https://en.wikipedia.org/wiki/Inner_product_space}{link}\\}, and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Complete Metric Space]\\begin{definition}[Complete Metric Space]\\label{21}In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M.\n \n This definition references: \\{\\href{https://en.wikipedia.org/wiki/Complete_metric_space}{link}\\}, and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Hilbert Space]\\begin{definition}[Hilbert Space]\\label{22}A Hilbert space H is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product.\n \n This definition references: (\\ref{20}), (\\ref{21}), \\{\\href{https://en.wikipedia.org/wiki/Hilbert_space#Definition_and_illustration}{link}\\}, and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Definition: Unitary Operator]\\begin{definition}[Unitary Operator]\\label{23}In functional analysis, a unitary operator is a surjective bounded operator on a Hilbert space preserving the inner product. Unitary operators are usually taken as operating on a Hilbert space, but the same notion serves to define the concept of isomorphism between Hilbert spaces.\n \n This definition references: \\{\\href{https://en.wikipedia.org/wiki/Unitary_operator}{link}\\}, (\\ref{22}), (\\ref{20}), and is thus rank $\\infty$.\\end{definition}\\end{tcolorbox}\n\\begin{tcolorbox}[title=Treatment: Quantum Treatment of an Inverted Harmonic Oscillator (IHO)]\\begin{treatment}[Quantum Treatment of an Inverted Harmonic Oscillator (IHO)]\\label{24}A reference state $\\ket{\\psi_R}$ is transformed, via a unitary operator $\\hat{\\mathcal{U}}$, into target state $\\ket{\\psi_T}$: $$\\ket{\\psi_T}=\\hat{\\mathcal{U}}\\ket{\\psi_R}$$ Now, say that the operator is simply a time-evolution operator $\\hat{\\mathcal{U}}=e^{-iHt}$, so $$\\ket{\\psi_T}=e^{-iHt}\\ket{\\psi_R}$$\n \n This treatment references: (\\ref{23}), and is thus rank $\\infty$.\\end{treatment}\\end{tcolorbox}\n\\section*{External References}\n\\begin{enumerate}[label={[\\arabic*]}]\\item A. Bhattacharyya, S. Das, S.S. Haque, B. Underwood. \\textit{Cosmological Complexity.} (2020). arXiv:2001.08664\n\\item R. Jefferson, R.C. Myers. \\textit{Circuit complexity in quantum field theory.} (2017). arXiv:1707.08570\n\\item T. Ali, A. Bhattacharyya, S.S. Haque, E.H. Kim, N. Moynihan, J. Murugan. \\textit{Chaos and Complexity in Quantum Mechanics.} (2020). arXiv:1905.13534\n\\item T. Ali, A. Bhattacharyya, S.S. Haque, E.H. Kim, N. Moynihan. \\textit{Time Evolution of Complexity: A Critique of Three Methods.} (2019). arXiv:1810.02734\n\\item S.S. Haque, C. Jana, B. Underwood. \\textit{Saturation of Thermal Complexity of Purification.} 
(2021). arXiv:2107.08969\n\\item A.R. Brown, D.A. Roberts, L. Susskind, B. Swingle, Y. Zhao. \\textit{Complexity Equals Action.} (2016). arXiv:1509.07876\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "906b5dbf1b51aff5aad57b603b08d9daeba6068a", "size": 10969, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bin/notes_on_quantum_complexity.tex", "max_stars_repo_name": "ZR000X/MathCourses", "max_stars_repo_head_hexsha": "57135a351c194a9e8fe15dec26a39125d6758e21", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bin/notes_on_quantum_complexity.tex", "max_issues_repo_name": "ZR000X/MathCourses", "max_issues_repo_head_hexsha": "57135a351c194a9e8fe15dec26a39125d6758e21", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bin/notes_on_quantum_complexity.tex", "max_forks_repo_name": "ZR000X/MathCourses", "max_forks_repo_head_hexsha": "57135a351c194a9e8fe15dec26a39125d6758e21", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.0984848485, "max_line_length": 479, "alphanum_fraction": 0.7454644908, "num_tokens": 3274, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.743167997235783, "lm_q1q2_score": 0.5652912221462716}}
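The rank bookkeeping used throughout the notes above (rank $0$ for an item with no references, one plus the maximum rank of its references otherwise, and rank $\infty$ as soon as an external link is involved) is mechanical enough to automate. Below is a minimal Python sketch under the assumption that the reference graph is acyclic; the \texttt{refs} table and function name are hypothetical illustrations, not part of the original notes.

\begin{verbatim}
import math
from functools import lru_cache

# Each label maps to the labels it references; "ext" marks an external link.
refs = {0: [], 2: [0], 3: [0, 2], 4: [0, 2, 3], 15: ["ext", 3]}

@lru_cache(maxsize=None)
def rank(label):
    if label == "ext":
        return math.inf            # an external link forces rank infinity
    referenced = [rank(r) for r in refs[label]]
    return 1 + max(referenced) if referenced else 0

assert rank(0) == 0 and rank(3) == 2 and rank(4) == 3
assert rank(15) == math.inf
\end{verbatim}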
{"text": "\\section{Solving probabilistic property evaluation}\n\\label{sect:prob-solutions}\n\nWe propose different solutions to the MPPE and PPE problems.\nFor the former, we resort to SSAT solving.\nFor the latter, in addition to SSAT solving,\nwe present solutions based on signal probability calculation,\nweighted model counting,\nand probabilistic model checking.\n\n\\subsection{Solving MPPE and PPE via SSAT}\nNote that the entire miter SPBN can be directly converted to a CNF formula by Tseitin transformation~\\cite{Tseitin1983}\nsince all vertices except for those in $X,Z,W$ are error-free after the standardization.\nIn the following,\nlet $\\pf_M$ be the CNF formula converted from the miter SPBN $G_M$,\nwhich contains vertex sets\n$X=\\{x_1,\\ldots,x_n\\}$,\n$Y=\\{y_1,\\ldots,y_m\\}$,\n$Z=\\{z_1,\\ldots,z_l\\}$, and\n$W=\\{w_1,\\ldots,w_q\\}$ as shown in~\\cref{fig:prob-spbn-miter}.\nObserve that $X \\cup Z \\cup W$ is a base set for $\\pf_M$.\nGiven a parameter assignment $\\pi$ to $X$,\nwe define the corresponding weighting function $\\wt:X\\cup Z\\cup W\\mapsto[0,1]$ for $\\pf_M$ as\n$\\wt(x_i)=\\pi(x_i)$,\n$\\wt(y_j)=p_{y_j}$, and\n$\\wt(w_k)=p_{w_k}$ for all\n$x_i \\in X$,\n$y_j \\in Y$, and\n$w_k \\in W$.\nThe weighting function $\\wt$ will be used throughout our discussion.\n\n\\begin{theorem}\n    \\label{thm:prob-ppe-ssat}\n    The probabilistic property evaluation (PPE) of $\\pf_M$ under a parameter assignment $\\pi$ can be expressed\n    by the following SSAT formula $\\Qf_\\mathrm{PPE}(\\pi)$:\n    \\begin{align}\n        \\label{eq:prob-ppe-ssat}\n        \\random{\\pi(x_1)}x_1,\\ldots,\\random{\\pi(x_n)}x_n,\n        \\random{p_{z_1}}z_1,\\ldots,\\random{p_{z_l}}z_l,\n        \\random{p_{w_1}}w_1,\\ldots,\\random{p_{w_q}}w_q,\n        \\exists y_1,\\ldots,\\exists y_m.\n        \\pf_M.\n    \\end{align}\n\\end{theorem}\n\\begin{proof}\n    Let $A$ be the event $\\pf_M=\\top$ and $\\Lambda=\\av{X \\cup Z \\cup W}$.\n    By the law of total probability,\n    $\\Pr[A]=\\sum\\limits_{\\as\\in\\Lambda}\\Pr[\\as]\\Pr[A\\mid\\as]$,\n    where $\\Pr[\\as]=\\wt(\\as)$ ($\\wt$ is the weighting function defined previously) and\n    $\\Pr[A\\mid\\as]$ is the conditional probability of event $A$ under the assignment $\\as$.\n    Notice that $\\Pr[A\\mid\\as]=\\pcf{\\pf_M}{\\as}$ since\n    $X \\cup Z \\cup W$ is a base set for $\\pf_M$.\n    As a result,\n    $\\Pr[A]=\\sum\\limits_{\\as\\in\\Lambda}\\wt(\\as)\\pcf{\\pf_M}{\\as}$,\n    which equals the satisfying probability of the SSAT formula $\\Qf_\\mathrm{PPE}(\\pi)$.\n\\end{proof}\n\n\\begin{theorem}\n    \\label{thm:prob-mppe-ssat}\n    The maximum probabilistic property evaluation (MPPE) of $\\pf_M$ can be expressed\n    by the following SSAT formula $\\Qf_\\mathrm{MPPE}$:\n    \\begin{align}\n        \\label{eq:prob-mppe-ssat}\n        \\exists x_1,\\ldots,\\exists x_n,\n        \\random{p_{z_1}}z_1,\\ldots,\\random{p_{z_l}}z_l,\n        \\random{p_{w_1}}w_1,\\ldots,\\random{p_{w_q}}w_q,\n        \\exists y_1,\\ldots,\\exists y_m.\n        \\pf_M.\n    \\end{align}\n\\end{theorem}\n\\begin{proof}\n    By the same argument in the proof of~\\cref{thm:prob-ppe-ssat},\n    given an assignment $\\as_X$ over $X$,\n    the SSAT formula:\n    \\begin{align*}\n        \\Qf_\\mathrm{MPPE}(\\as_X)=\n        \\random{p_{z_1}}{z_1},\\ldots,\\random{p_{z_l}}{z_l},\n        \\random{p_{w_1}}{w_1},\\ldots,\\random{p_{w_q}}{w_q},\n        \\exists y_1,\\ldots,\\exists y_m.\n        \\pcf{\\pf_M}{\\as_X}\n    \\end{align*}\n    computes the 
satisfying probability of the miter under the assignment $\\as_X$.\n    According to the SSAT semantics,\n    the outermost existential quantification of primary inputs $X$\n    ensures to find an optimal assignment $\\as_X^*$\n    such that the satisfying probability of $\\Qf_\\mathrm{MPPE}(\\as_X^*)$ is maximized.\n    Hence the SSAT formula $\\Qf_\\mathrm{MPPE}$ computes the maximum satisfying probability of the miter.\n\\end{proof}\n\nNote that the only difference between $\\Qf_\\mathrm{MPPE}$ and $\\Qf_\\mathrm{PPE}$\nlies in the quantification for the primary inputs $X$.\nAlthough SSAT provides a convenient language for expressing both MPPE and PPE problems,\nits solvers to date remain immature to handle formulas of practical sizes in our considered application.\nOne of the main inefficiencies can be attributed to representing $\\pf_M$ in CNF,\nwhich results in the additional quantification of the intermediate circuit variables $Y=\\{y_1,\\ldots,y_m\\}$.\nIt motivates the development of a new SSAT solver as we present below.\n\n\\subsubsection{BDD-based SSAT solving}\n\n\\begin{algorithm}[t]\n    \\caption{BDD-based SSAT solving: \\texttt{BddSsatSolve}}\n    \\label{alg:bddssat}\n    \\begin{algorithmic}[1]\n        \\REQUIRE $\\Qf=Q_1 v_1,\\ldots,Q_n v_n.\\pf$\n        \\ENSURE $\\spb{\\Qf}$\n        \\STATE $N := \\texttt{BuildReducedOrderedBdd}(\\pf,(v_1,\\ldots,v_n))$\\label{code:bddssat-build-bdd}\n        \\STATE $Q := Q_1 v_1,\\ldots,Q_n v_n$\n        \\RETURN $\\texttt{BddSsatRecur}(N,Q)$\\label{code:bddssat-recursive}\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n    \\caption{The recursive step of \\texttt{BddSsatSolve}: \\texttt{BddSsatRecur}}\n    \\label{alg:bddssat-recursive}\n    \\begin{algorithmic}[1]\n        \\REQUIRE An ROBDD node $N$ and a prefix $Q$\n        \\ENSURE $\\spb{N=\\top}$ under $Q$\n        \\IF{($N$ is a terminal node)}\\label{code:bddssat-recursive-constant-start}\n        \\RETURN $\\nodesp{N}$\\label{code:bddssat-recursive-constant-end}\n        \\ENDIF\n        \\IF{($\\nodevisit{N}=\\false$)}\n        \\IF{($Q(\\nodevar{N})=\\random{p}$)}\n        \\STATE $\\nodesp{N}:=(1-p)\\cdot\\texttt{BddSsatRecur}(\\nodeelse{N},Q)+p\\cdot\\texttt{BddSsatRecur}(\\nodethen{N},Q)$\n        \\label{code:bddssat-recursive-random}\n        \\ELSE\n        \\STATE $\\nodesp{N}:=\\max\\{\\texttt{BddSsatRecur}(\\nodeelse{N},Q),\\texttt{BddSsatRecur}(\\nodethen{N},Q)\\}$\n        \\label{code:bddssat-recursive-exist}\n        \\ENDIF\n        \\STATE $\\nodevisit{N} := \\true$\n        \\ENDIF\n        \\RETURN $\\nodesp{N}$\n    \\end{algorithmic}\n\\end{algorithm}\n\nWe propose a BDD-based solver to enhance the scalability of SSAT solving.\nThe procedure \\texttt{BddSsatSolve} is outlined in~\\cref{alg:bddssat},\nwhich takes as input an SSAT formula $\\Qf=Q_1 v_1,\\ldots,Q_n v_n.\\pf$.\nAt~\\cref{code:bddssat-build-bdd},\na \\textit{reduced ordered BDD} (ROBDD) of $\\pf$ is built with a variable ordering following the quantification order.\nAt~\\cref{code:bddssat-recursive},\na recursive procedure \\texttt{BddSsatRecur},\nsketched in~\\cref{alg:bddssat-recursive},\nis called to calculate the satisfying probability of $\\Qf$.\nIn the pseudo code, for an ROBDD node $N$,\n$\\nodethen{N}$ and $\\nodeelse{N}$ denote its $\\mathrm{then}$- and $\\mathrm{else}$-child, respectively;\n$\\nodeval{N}$ equals 0 (resp. 1) if $N$ is a 0-terminal (resp. 
1-terminal) node;\n$\\nodevisit{N}$ is a flag initialized to \\false and records whether $N$ has been processed;\n$\\nodevar{N}$ and $\\nodesp{N}$ denote the control variable and the satisfying probability of node $N$, respectively.\nIn~\\cref{alg:bddssat-recursive},\n\\crefrange{code:bddssat-recursive-constant-start}{code:bddssat-recursive-constant-end} implement the first and second computation rules of SSAT in~\\cref{sect:background-ssat};\n\\Cref{code:bddssat-recursive-random} and \\cref{code:bddssat-recursive-exist} implement the third and fourth rules corresponding to the randomized and existential quantification of the variable, respectively.\n\nNote that \\texttt{BddSsatSolve} runs in time linear in the number of BDD nodes (as each node is processed only once).\nTherefore the computational complexity is dominated by constructing the ROBDD of $\\pf$.\nNote also that if the outermost variables in the quantification order are existentially quantified, e.g.,\nthose in $\\Qf_\\mathrm{MPPE}$ of~\\cref{thm:prob-mppe-ssat},\nthe corresponding assignments to the existentially quantified variables\nthat maximize the satisfying probability of $\\pf$ can be obtained.\nSpecifically,\nthe assignment to an outermost existential variable $\\nodevar{N}$ can be derived by recording which of\n\\texttt{BddSsatRecur}($\\nodeelse{N},Q$) and\n\\texttt{BddSsatRecur}($\\nodethen{N},Q$)\ncontributes to the maximum probability in~\\cref{code:bddssat-recursive-exist} of~\\cref{alg:bddssat-recursive}.\n\n\\subsubsection{Signal probability with BDD-based SSAT}\nWe exploit the developed BDD-based SSAT solver to compute signal probabilities as defined in~\\cref{def:prob-signal-prob,def:prob-signal-prob-max}.\nGiven an SPBN $G=(V,E)$,\nthe satisfying probability of any $v \\in V$ can be obtained by first encoding the problem into an SSAT instance\nand then solving the SSAT formula by \\texttt{BddSsatSolve}.\n\nNotice that formula $\\pf$ need not be represented in CNF for \\texttt{BddSsatSolve}.\nIn the application of circuit verification,\nformula $\\pf$ can be directly input to \\texttt{BddSsatSolve} as a circuit.\nWithout Tseitin transformation from circuit to CNF formula,\nthe algorithm avoids introducing extra variables.\nConsequently, the innermost existential quantification $\\exists y_1,\\ldots,\\exists y_m$\nin~\\cref{eq:prob-ppe-ssat} of~\\cref{thm:prob-ppe-ssat}\nand~\\cref{eq:prob-mppe-ssat} of~\\cref{thm:prob-mppe-ssat} is removed.\nThe utilization of circuit structures makes the calculation of signal probability\n(and therefore the computation of PPE and MPPE) on an SPBN more efficient and scalable\nusing the proposed BDD-based SSAT solver.\nEmpirical evidence suggests that our proposed SSAT solver outperforms, and is far more scalable than, other CNF-based SSAT solvers.\n\nWe note that signal probability calculation via BDDs has been used for power estimation of integrated circuits~\\cite{Najm1994}.\nAs we formulate PPE as a signal probability problem on SPBN,\nany prior method for signal probability calculation can be used to solve PPE.\nHowever, it is worth emphasizing that prior methods for signal probability calculation,\nsuch as Monte Carlo simulation, cannot solve MPPE.\nThe BDD-based method has unique value over previous endeavors for signal probability calculation\nbecause of its generality in solving both PPE and MPPE.\nOn the other hand,\nsince we have established the connections of MPPE and PPE to SSAT formulations,\nSSAT solvers can also be applied to calculate the maximum signal probability as well 
as signal probability.\n\nBDD is a well-studied data structure but known for its memory limitation.\nTechniques have been highly developed in the 1990s to extend its scalability.\nPractical experience suggests that BDD-based computation remains competitive to other formulations\ndue to the fact that other tools,\nsuch as SSAT and model counting,\nare still in their early development.\nIn our experiments over ISCAS\\,'85~\\cite{ISCAS85-benchmark} and EPFL~\\cite{EPFL-benchmark} benchmark suites,\nthe BDD-based approach stands as the most scalable one over other formulations to be discussed.\n\n\\subsubsection{Generalization to dependent random variables}\nThe proposed BDD-based SSAT solver is advantageous over other SSAT solvers for its ability to handle randomly quantified variables whose probability distributions are mutually dependent.\nNote that according to the SSAT syntax,\nthere is no support to describe the joint behavior among randomly quantified variables.\nThat is, every randomly quantified variable acts independently of each other.\nThis assumption limits the expressiveness of SSAT.\nIn this paper, we combine our BDD-based SSAT solver with previous methods~\\cite{Marculescu1998,Miskov-Zivanov2006}\nto represent joint probability distribution of random variables,\nand propose a novel SSAT solver that is capable of expressing mutual dependence among randomly quantified variables.\nA joint probability distribution of random variables is represented as an algebraic decision diagram (ADD)~\\cite{Marculescu1998,Miskov-Zivanov2006}.\nAfter such an ADD representing joint probability distribution is constructed,\nthe original BDD (which is also an ADD) of the Boolean formula $\\pf$ is conjoined with the ADD.\nFinally, the value of the SSAT formula can be calculated by traversing the merged ADD similar to the prior independent counterpart.\n\nFollowing the above strategy,\nnow we explain how to extend the proposed PPE and MPPE framework to approach PBNs whose random variables are mutually dependent.\nAfter the distillation operation,\nthe mutually dependent random variables on erroneous vertices are converted to correlated AIs.\nGiven the joint probability distribution of random variables,\nan ADD is built to represent the mutual dependence among AIs.\nSimilarly, if PIs are correlated,\nanother ADD can be built to describe their correlation.\nAfter multiplying the ADDs with the BDD of the circuit under evaluation,\nthe average (PPE) or the maximum (MPPE) violating probability can be computed through traversing the product ADD.\nTherefore, the proposed PPE framework is generalized to dependent PBNs,\nwhose random variables have mutual dependent probability distribution.\n\n\\subsection{Solving PPE via weighted model counting}\nThe following theorem states that an SSAT formula of the form\nin~\\cref{thm:prob-ppe-ssat} is equivalent to a weighted model counting instance.\n\\begin{theorem}\n    The SSAT formula:\n    \\begin{align*}\n        \\Qf=\n        \\random{p_{x_1}}x_1,\\ldots,\\random{p_{x_n}}x_n,\n        \\exists y_1,\\ldots,\\exists y_m.\\pf,\n    \\end{align*}\n    where $X=\\{x_1,\\ldots,x_n\\}$ is a base set for $\\pf$,\n    is equivalent to a weighted model counting instance of $\\pf$\n    under a weighting function $\\wt$ such that\n    $\\wt(x_i)=p_{x_i}$ for every $x_i\\in X$.\n\\end{theorem}\n\\begin{proof}\n    Let $A$ be the event that $\\pf=\\top$.\n    According to~\\cref{thm:prob-ppe-ssat},\n    we have 
$\\Pr[A]=\\sum\\limits_{\\as\\in\\av{X}}\\wt(\\as)\\pcf{\\pf}{\\as}$.\n    Since $X$ is a base set for $\\pf$,\n    $\\Pr[A]$ can be simplified to $\\sum\\limits_{\\as\\models\\pf}\\wt(\\as)$,\n    which is the weight of $\\pf$.\n\\end{proof}\nBy the above theorem, weighted model counting is clearly applicable to PPE.\n\n\\begin{algorithm}[p]\n    \\caption{Formula rewriting for unweighted model counting: \\texttt{WmcRewriting}}\n    \\label{alg:wmcrewriting}\n    \\begin{algorithmic}[1]\n        \\REQUIRE A formula $\\pf$, a base set $\\base$ for $\\pf$,\n        a wt. func. $\\wt$ s.t. $\\forall x\\in\\base.\\wt(x)=\\frac{k}{2^n}$\n        \\ENSURE A formula $\\pf'$, a base set $\\base'$ for $\\pf'$,\n        a wt. func. $\\wt'$ s.t. $\\forall x\\in\\base'.\\wt'(x)=\\frac{1}{2}$\n        \\STATE $\\pf':=\\pf,\\base':=\\base$\n        \\FORALL{($x\\in\\base$)}\n        \\STATE $var:=x,wt:=\\wt(x)$\n        \\WHILE{($wt\\neq\\frac{1}{2}$)}\n        \\STATE $inv:=\\bot$\n        \\IF{($wt>\\frac{1}{2}$)}\n        \\STATE $wt:=1-wt,inv:=\\top$\n        \\ENDIF\n        \\STATE $\\pf':=\\pf'\\land((inv \\oplus var) \\equiv (y_{var} \\land z_{var}))$\n        \\STATE $\\base':=\\base'\\setminus\\{var\\}\\cup\\{y_{var}\\}$\n        \\STATE $\\wt'(y_{var})=\\frac{1}{2}$\n        \\STATE $var=z_{var},wt=2 \\cdot wt$\n        \\ENDWHILE\n        \\STATE $\\base':=\\base'\\cup\\{var\\},\\wt'(var)=\\frac{1}{2}$\n        \\ENDFOR\n        \\RETURN $(\\pf',\\base',\\wt')$\n    \\end{algorithmic}\n\\end{algorithm}\n\nWhile exact model counting can be extended to the weighted version at little extra cost,\nmore effort is required to achieve approximate weighted model counting~\\cite{SATHandbook-ModelCounting}.\nTo enhance the scalability of solving PPE via model counting,\nwe show how to rewrite a weighted model counting instance into an equivalent unweighted formula.\nThe rewriting procedure \\texttt{WmcRewriting} is outlined in~\\cref{alg:wmcrewriting}.\n\nThe following lemma explains the rationale behind \\texttt{WmcRewriting}.\n\\begin{lemma}\n    \\label{lemma:prob-rewrite}\n    Let $\\pf$ be a Boolean formula with a base set $\\base$ and\n    a weighting function $\\wt$ over $\\base$.\n    Given an arbitrary variable $x\\in\\base$,\n    we construct $\\base'$, $\\pf'$, and $\\wt'$ as follows.\n    Let $\\base'=\\base\\setminus\\{x\\}\\cup\\{y_x,z_x\\}$ with\n    $y_x,z_x$ being newly introduced fresh variables.\n    Let $\\pf'=\\pf \\land ((inv \\oplus x) \\equiv (y_x \\land z_x))$,\n    where $inv$ is either $\\top$ or $\\bot$ to determine the sign of $x$.\n    If $inv=\\bot$, let $\\wt'(y_x)\\wt'(z_x)=\\wt(x)$;\n    else, let $\\wt'(y_x)\\wt'(z_x)=1-\\wt(x)$.\n    For any other variable $v\\in\\base'$,\n    $\\wt'(v)=\\wt(v)$.\n    After this construction, $\\base'$ is a base set for $\\pf'$, and $\\wt'(\\pf')=\\wt(\\pf)$.\n\\end{lemma}\n\\begin{proof}\n    First, we show that $\\base'$ is a base set for $\\pf'$.\n    Clearly any assignment $\\as'$ over $\\base'$ can be transformed into\n    an assignment $\\as$ over $\\base$ by substituting $\\as'(y_x),\\as'(z_x)$ into the formula\n    $((inv \\oplus x) \\equiv y_x \\land z_x)$ to derive the truth value of $x$.\n    The fact that $\\base$ being a base set for $\\pf$ implies $\\base'$ being a base set for $\\pf'$.\n\n    Second, we prove that $\\wt'(\\pf')=\\wt(\\pf)$.\n    We only show the case for $inv=\\bot$,\n    since the other case can be established similarly.\n    Consider any $\\as$ over $\\base$.\n    There are two cases: $\\as(x)=1$ 
and $\\as(x) = 0$.\n    In both cases, we derive corresponding assignments $\\as'$ such that\n    $\\pcf{\\pf}{\\as}=\\pcf{\\pf'}{\\as'}$ and\n    $\\wt(\\as)=\\sum\\wt'(\\as')$.\n    In the first case $\\as(x)=1$,\n    the corresponding $\\as'$ over $\\base'$ is obtained by assigning\n    $(\\as'(y_x),\\as'(z_x))$ to $(1,1)$ and\n    $\\as'(v)=\\as(v)$ for any other $v\\in\\base'$.\n    Obviously $\\pcf{\\pf}{\\as}=\\pcf{\\pf'}{\\as'}$,\n    and $\\wt(\\as)=\\wt'(\\as')$ because $\\wt(x)=\\wt'(y_x)\\wt'(z_x)$.\n    In the second case $\\as(x)=0$,\n    there are three possible assignments $\\as'$ over $\\base'$,\n    namely $(\\as'(y_x),\\as'(z_x))=(0,0),(0,1),(1,0)$,\n    and $\\as'(v)=\\as(v)$ for any other $v\\in\\base'$.\n    Denote the three assignments by $\\as_1',\\as_2',\\as_3'$.\n    Note that $\\pcf{\\pf}{\\as}=\\pcf{\\pf'}{\\as_1'}=\\pcf{\\pf'}{\\as_2'}=\\pcf{\\pf'}{\\as_2'}$,\n    and $\\wt(\\as)=\\wt'(\\as_1')+\\wt'(\\as_2')+\\wt'(\\as_3')$.\n\n    The above analysis shows that any $\\as\\models\\pf$ can be transformed into one or multiple $\\as'$\n    such that $\\as'\\models\\pf'$ and $\\wt(\\as)=\\sum\\wt'(\\as')$.\n    As a result, $\\wt'(\\pf')=\\wt(\\pf)$.\n\\end{proof}\n\n\\begin{theorem}\n    \\label{thm:prob-rewrite}\n    Given a weighted model counting instance $\\pf$ and\n    a weighting function $\\wt$ over a base set $\\base$ of $\\pf$ such that\n    for any $x\\in\\base$, $\\wt(x)$ has the form of $k/2^n$\n    for some $n\\in\\mathbb{N}$,\n    $k$ an odd integer,\n    and $k < 2^n$,\n    \\texttt{WmcRewriting} in~\\cref{alg:wmcrewriting} derives\n    $\\pf'$ and $\\wt'$ over $\\base'$ such that $\\base'$ is a base set for $\\pf'$,\n    $\\wt'(\\pf')=\\wt(\\pf)$,\n    and $\\wt'(x)=\\frac{1}{2}$ for every $x\\in\\base'$.\n\\end{theorem}\n\\begin{proof}\n    To show the correctness of \\texttt{WmcRewriting},\n    we divide the task into two parts.\n    First, we show that \\texttt{WmcRewriting} always terminates.\n    Second, we prove that when it terminates, the claimed properties hold.\n\n    Consider a variable $x\\in\\base$ with $\\wt(x)=\\frac{k}{2^n}$.\n    Observe that the loop invariant of the \\textsc{while} loop is $wt=\\frac{h}{2^m}$,\n    for some $m\\in\\mathbb{N}$, $h$ an odd integer, and $h<2^m$.\n    Moreover, after an iteration, the denominator of $wt$ will be halved,\n    while the numerator remains an odd integer.\n    As a result, $wt$ equals $1/2$ after $n-1$ iterations and the \\textsc{while} loop terminates.\n\n    Inside the \\textsc{while} loop,\n    the truth value of $inv$ is decided by comparing $wt$ with $\\frac{1}{2}$,\n    and the corresponding formula rewriting and weight assignments are made as described in~\\cref{lemma:prob-rewrite}.\n    According to~\\cref{lemma:prob-rewrite},\n    $\\base'$ is a base set for $\\pf'$ and $\\wt'(\\pf')=\\wt(\\pf)$.\n    Furthermore, $\\wt'(x)=\\frac{1}{2}$ for every $x\\in\\base'$ as assigned in the algorithm.\n\\end{proof}\nBy the above theorem,\na weighted model counting instance whose weighting function is specialized as above\ncan be rewritten and solved by any (either exact or approximate) unweighted model counting engine,\nsince $\\wt(\\pf)=\\wt'(\\pf')=\\sum\\limits_{\\as\\models\\pf'}\\wt'(\\as)=\\#\\pf'/2^{|\\base'|}$,\nwhere $|\\base'|$ is the cardinality of the set $\\base'$.\nWe remark that, given a variable $x$ and its weight $p=k/2^n$,\nthe cost, in terms of the number of added variables and clauses,\nof \\texttt{WmcRewriting} is linear to $n$.\nIn a way, $n$ can be interpreted as 
the degree of precision\nif we attempt to apply \\texttt{WmcRewriting} to approximate any arbitrary probability.\n\n\\subsection{Solving PPE via probabilistic model checking}\nProbabilistic model checking (PMC) verifies stochastic systems modeled by the variants of Markov chains, e.g.,\ndiscrete-time Markov chains (DTMCs),\ncontinuous-time Markov chains,\nand Markov decision processes,\nagainst properties specified in probabilistic temporal logics.\nWe show how to calculate signal probability with the probabilistic model checker \\prism~\\cite{Kwiatkowska2002PRISM},\nwhich provides a high level modeling language to specify probabilistic systems.\n\nWe convert an SPBN to a DTMC and encode the calculation of signal probabilities\nby probabilistic computation tree logic (PCTL)~\\cite{Hansson1989}.\nTo illustrate, we briefly introduce the syntax of \\prism.\nThe basic components in \\prism are \\texttt{modules}.\nA module consists of two parts: \\texttt{variables} and \\texttt{commands}.\nVariables describe possible states of a module,\nwhile commands describe its state transitions.\nA command is composed of a \\texttt{guard} and some \\texttt{updates}.\nFor example, ``$x=0 \\rightarrow 0.8:(x'=0) + 0.2:(x'=1);$''\nis a command with a guard ``$x=0$'' and an update rule ``$0.8:(x'=0) + 0.2:(x'=1)$''.\nWhen the guard is met, i.e., $x=0$ at the current time slot,\n$x$ will remain at value $0$ with probability $0.8$ or change to $1$ with probability $0.2$ at the next time slot.\n\nIf the guards of multiple commands are met simultaneously at the same time slot,\n\\prism will choose exactly one of them to execute uniformly at random.\nHowever, the vertices of an SPBN $G=(V,E)$ operate concurrently.\nThat is, at one time slot, multiple vertices have to be executed,\nprovided that every fanin of those vertices is evaluated.\nWithout proper control, \\prism will randomly choose one of these vertices to execute,\nand may bias the probabilities specified by the SPBN.\nTo prevent such bias, we introduce a fresh variable for each vertex\nto enforce an topological execution order of logic gates in the SPBN.\nThis construction eliminates the possibility of the multi-vertex execution at one time slot,\nand thus preserving the probability specified in the update rules.\nSince $G$ is a combinational design,\nthis enforcement of ordering will not affect the system behavior.\n\nGiven an SPBN $G=(V,E)$ with a parameter assignment $\\pi$ to $V_I$,\nthe procedure to compute the signal probability of an arbitrary vertex with \\prism is as follows.\n\\begin{enumerate}\n    \\item Sort $V$ into a topological order.\n    \\item For each vertex $v\\in V$, create a module with two Boolean variables:\n          $x_v$ represents the output variable of vertex $v$,\n          and $y_v$ enforces the execution order in the DTMC.\n          Both variables are initialized to $0$.\n          Let $u$ be the vertex preceding $v$ in the topological order.\n    \\item If $v\\in V_I$, add a command:\n          ``$(y_u=1 \\enskip\\&\\enskip y_v=0) \\rightarrow\n              \\pi(v):(x_v'=1 \\enskip\\&\\enskip y_v'=1)+\n              1-\\pi(v):(x_v'=0 \\enskip\\&\\enskip y_v'=1)$''\n          to the module of $v$.\n    \\item If $v\\in V\\setminus V_I$, add a command:\n          ``$(y_u=1 \\enskip\\&\\enskip y_v=0) \\rightarrow\n              p_v:(x_v'=\\neg f_v \\enskip\\&\\enskip y_v'=1)+\n              1-p_v:(x_v'=f_v \\enskip\\&\\enskip y_v'=1)$''\n          to the module of $v$.\n    \\item Compute the signal probability of any 
$v\\in V$ by specifying a PCTL formula $P_{=?}(F(x_v=1))$.\n\\end{enumerate}\n\nThe added commands describe the probabilistic behavior of a vertex.\nObserve that in both commands,\na vertex $v$ is executed only after its preceding vertex $u$ is executed,\nand this order is enforced by the variable $y_v$.\nAfter the execution of $v$,\n$y_v$ is set to 1 to trigger its successive executions.\nThe PCTL formula $P_{=?}(F(x_v=1))$ computes the probability of $x_v=1$ in the future.\nSince in our DTMC each node is executed only once,\nthe PCTL formula computes the signal probability of $v$.\n\nWe remark that this transformation naively encodes each gate into a module in the DTMC and suffers from the state-space explosion problem.\nThe search for a better encoding remains future work.", "meta": {"hexsha": "9ba6914e54d7107df45d58c5a8084678da9545d3", "size": 24318, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/prob-design-eval/technique.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "paper/prob-design-eval/technique.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/prob-design-eval/technique.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.6821192053, "max_line_length": 207, "alphanum_fraction": 0.7052800395, "num_tokens": 7140, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5651570824442039}}
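As a companion to the \texttt{BddSsatRecur} pseudocode in the paper above, here is a minimal Python sketch of the same bottom-up evaluation over an ROBDD; the node encoding and names are illustrative assumptions, not the solver's actual implementation. Randomized variables contribute a weighted sum over the two cofactors, existential variables take the maximum, and memoisation plays the role of the visited flag.

\begin{verbatim}
from functools import lru_cache

# A node is True/False (terminal) or (var, else_child, then_child).
# The prefix maps each variable to ('R', p) or ('E', None); the BDD
# variable order is assumed to follow the quantification order.
def ssat_value(root, prefix):
    @lru_cache(maxsize=None)
    def recur(node):
        if node is True:
            return 1.0                  # rule for the 1-terminal
        if node is False:
            return 0.0                  # rule for the 0-terminal
        var, else_child, then_child = node
        kind, p = prefix[var]
        lo, hi = recur(else_child), recur(then_child)
        if kind == 'R':                 # randomized: weighted sum
            return (1 - p) * lo + p * hi
        return max(lo, hi)              # existential: best branch
    return recur(root)

# (x or y) with prefix R^0.5 x, E y: the existential y can always
# repair x = 0, so the satisfying probability is 1.0.
bdd = ('x', ('y', False, True), True)
print(ssat_value(bdd, {'x': ('R', 0.5), 'y': ('E', None)}))
\end{verbatim}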
{"text": "\\documentclass[a4paper,10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{graphicx}\n\\usepackage{float}\n\\usepackage{hyperref}\n\n%opening\n\\title{GPGPU Prac 2: The Mandelbrot Set}\n\\author{Antonio Peters}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{The Mandelbrot Set}\nThe Mandelbrot Set is a collection of complex numbers $c$ such that iterating the equation\n\\begin{equation}\n  z_{i+1} = z_i^2 + c, \\qquad z_0=0\n\\end{equation}\ndoes not tend to infinity.\n\\linebreak\nFor this to hold true, firstly the number must be within radius $2$ of the origin, $0$. The set generated can be seen in Figure \\ref{mandelimg}\n\\begin{figure}[H] \\label{mandelimg}\n\\includegraphics[width=\\textwidth]{Mandelset_hires.png}\n\\caption{The Mandelbrot Set}\n\\end{figure}\nThe most interesting notion about this is the fact that the pattern generated by the Mandelbrot Set is recursive, each node on the surface of the main lobe has on its surface a replicate set of nodes and similarly for these subnodes extending infinitely. In order to study this however we need to reduce this amount, we do this by setting the size of the image being generated and setting an upper limit to the number of iterations the set goes through. In the case of this task these particular settings were set to $4096$x$4096$ for the image size and $10000$ for the maximum number of iterations.\n\n\\section{CPU Implementation}\nThe code used to generate a CPU implementation of the Mandelbrot Set was originally taken from \\url{https://rosettacode.org/wiki/Mandelbrot_set#C}, the code was then re-factored to be better ported to a GPU such that the only differences between them were the essential elements being timed. The code works by nesting two for-loops and iterating over the X- and Y-axis and calculating the whether or not the point, scaled to fit on the $(-2,-2i) to (2,2i)$ plane, is in the Mandelbrot Set for the given number of iterations. This linear method generated the set in $115453.000000$ milliseconds, setting a baseline for the GPU implementation to work on.\n\n\\section{Naive GPU Implementation}\nThe GPU implementation was created by taking the general algorithm of the CPU implementation and assigning a thread to each point of the set, further divided into blocks and grids. Since the the 10000 iterations for set inclusion rely on each other, this could not be parallelized. Once calculated, the data is then transferred back to the CPU, to be written to a file in the exact same manner as the CPU implementation to minimize the differences between them to only that of the actual algorithm implementation. The time taken by this method is $5341.878418$ milliseconds, a fraction of the time of that of the CPU time. However, further analysis through the NVIDIA Visual Profiler shows that this too can be improved on. Although the process uses $100\\%$ of its computation capacity, it does not use any computation/memory transfer capabilities to overlap data transfers.\n\n\\section{Optimized GPU Implementation}\nDue to the nature of the Mandelbrot Set code, no data is read in to be used, it simply takes generates an x,y-coordinate using the threads data and then scales it before computing the point so there are no reads which need optimizing and there is only one transfer of data, namely from the GPU to the CPU, so this is the only way in which memory can be optimized. Therefore, streams were introduced to allow for asynchronous data transfers. 
Using two streams an execution time of $5335.469727$ milliseconds, a very minor speed up, but it also alludes to the fact that with larger data sets, the overhead cost of having streams can be made up for with the ability to asynchronously transfer data.\n\\linebreak\n\\linebreak\nAlthough compute occupancy could not be increased, an approach with fewer threads was looked at with data iterations in the kernel, meaning each thread calculates more than one point value. This was tested with two calculations per thread for a time of $5309.734863$, again only a minor speed up given that the data set is large with an unparallizable loop of $10000$ iterations but it is the only way to increase the computational optimization of the program.\n\\linebreak\n\\linebreak\nSince there is only one kernel and no kernel memory reads, there is no possibility for latency optimization other that the streams, which have already been explored.\n\\linebreak\n\\linebreak\nBy utilizing both streams, asynchronous data transfers, and multiple iterations per thread we get a compounded speed up with an execution time of $5273.914551$, which is faster than either of the optimizations on their own.\n\n\\end{document}\n", "meta": {"hexsha": "7bc2948f8b7507760a48b705458186c9d302ab5b", "size": 4602, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "apeters/Prac 2/prac2.tex", "max_stars_repo_name": "RhodesCS2016/gpgpu", "max_stars_repo_head_hexsha": "944301860e7048b788f530154ab2ecac649dbd74", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-02-16T15:58:38.000Z", "max_stars_repo_stars_event_max_datetime": "2016-02-16T15:58:38.000Z", "max_issues_repo_path": "apeters/Prac 2/prac2.tex", "max_issues_repo_name": "RhodesCS2016/gpgpu", "max_issues_repo_head_hexsha": "944301860e7048b788f530154ab2ecac649dbd74", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "apeters/Prac 2/prac2.tex", "max_forks_repo_name": "RhodesCS2016/gpgpu", "max_forks_repo_head_hexsha": "944301860e7048b788f530154ab2ecac649dbd74", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 95.875, "max_line_length": 874, "alphanum_fraction": 0.7979139505, "num_tokens": 1062, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850402140659, "lm_q2_score": 0.8006919997179627, "lm_q1q2_score": 0.5651164352200231}}
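For reference, the per-point work that every CPU iteration and every GPU thread performs in the report above is the standard escape-time test; the following is a minimal Python sketch of that kernel body (the names and the pixel-mapping helper are illustrative, not the report's actual C/CUDA code).

\begin{verbatim}
def mandelbrot_iterations(cx, cy, max_iter=10000):
    # Escape-time count for c = cx + cy*i; max_iter means "in the set".
    zx = zy = 0.0
    for i in range(max_iter):
        if zx * zx + zy * zy > 4.0:      # |z| > 2: the orbit escapes
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return max_iter

def pixel_to_point(px, py, size=4096):
    # Map a pixel of the 4096x4096 image onto the region [-2, 2] x [-2, 2].
    scale = 4.0 / size
    return px * scale - 2.0, py * scale - 2.0
\end{verbatim}

Because the iterations for a single point depend on each other, all of the parallelism discussed in the report comes from running this body for many pixels at once.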
{"text": "%!TEX root = ../thesis.tex\n\\chapter{Future Work}\n\\label{future_work}\n\n\\section{Publicly verifiable zero knowledge proof}\n\\label{future_work:zkp_ver}\n\nIn the scheme presented in Chapter~\\ref{solution}, only the requestor can verify the proof produced by the data processor crafted for her request. The output of the processing must be remain private. For that reason, is stored encrypted in the blockchain. As the output is a crucial input to the verification algorithm, only the requestor can validate the proof. This gives the requestor an opportunity to defy data processor's legitimacy and validity forcing the data processor to resort to a third part trusted entity that will enforce the data requestor to disclose the output of the process. To eliminate this, the verification algorithm must be run on the blockchain, where anyone can validate the proof, without revealing the output of the processing. The input of the verification algorithm must be encrypted. To achieve this, the proof generation algorithm must include the encryption procedure that encrypts the output. This ensures that the ciphertext corresponds to the output of the processing. Implementing the verification algorithm in the blockchain eliminates, also, the need of the salt (\u00a7~\\ref{solution:proof}). The output is encrypted and an adversary cannot brute-force it efficiently.\n\n\\section{Secure Multi party computation}\n\\label{future_work:mpc}\n\nIn the setting of \\textit{multiparty computation}~\\cite{Ben-Or:1988:CTN:62212.62213}, sets of two or more parties with private inputs wish to jointly compute some (predetermined) function of their inputs~\\cite{mpc}. Secure multiparty computation assume malicious behavior by a subset of participant entities.\n\nThe two important requirements of any secure computation protocol are \\textit{privacy} and \\textit{correctness}. The privacy requirement states that the parties should not learn anything else than the output of the computation and the correctness requirement states that each party should receive its correct output~\\cite{mpc}.\n\nInformally, consider $n$ parties with private inputs $x_1, x_2, \\dots, x_n$. The parties want to compute the outcome of the function $f(x_1, x_2, \\dots, x_n)$ where the respective inputs remain private.\n\nThe first secure multiparty computation problem was described by Yao in~\\cite{Yao:1982:PSC:1398511.1382751} and is called the \\textit{Yao's Millionaires' problem}. The problem discusses two millionaires, Alice and Bob, who are interested in knowing which of them is richer without revealing their actual wealth. Specifically, Yao\u2019s millionaires\u2019 problem is the problem of computing the predicate, $a \\geq b$ where $a$ is Alice's holding and $b$ is Bob's holding, without disclosing anything more than the result to either party~\\cite{mpc_ioannidis}.\n\nMulti-party computation protocols have a wide range of applications. They can be used in voting systems where each party votes for a candidate and they want to compute the winner of the voting without revealing their vote. Another use case it that of auctions. Several parties are bidding for a product where the winning party and maximum bid should be determined, without revealing bids of other parties\n\nMost MPC protocols make use of \\textit{secret sharing}. A secret sharing scheme allows a value to be shared among $n$ parties where some of the parts or all of them are needed in order to reconstruct the secret~\\cite{Kamm:2015:ASM:2836836}. 
Usually, there is a threshold $t$ so that at least $t$ parts of the secret are needed to reconstruct the original secret.\n\nUsing an MPC protocol based on secret sharing eliminates the need for symmetric key exchange and dataset decryption and exposure (\u00a7~\\ref{solution:flow:pr_req}, \u00a7~\\ref{solution:flow:pr_data}). Data queries can be computed in a distributed way with the use of an MPC cluster of data processors. In addition, datasets can be split between data processors without them having access to the data in its entirety~\\cite{DBLP:journals/corr/ZyskindNP15}. This way, the need for a trusted processor is eliminated, as the data processors do not have access to the unciphered dataset.\n\n\\section{Fees}\n\\label{future_work:fees}\n\nAs discussed in~\\ref{solution:treat_model:mrequestor}, a malicious requestor can threaten the system by flooding the network with multiple requests, aiming to prevent other requests from being fulfilled. A scalable \\textit{payment system} could be used as a countermeasure to this type of attack, where a requestor pays a fixed price per byte processed. The price can decrease as the data size increases.\n\nA payment system helps to prevent DDoS attacks, as each request has a cost and the attack becomes more expensive as the number of requests increases. A rational malicious requestor has to gain significantly more than the cost of the attack for it to be profitable. Moreover, a payment system will incentivize the data processors to follow the protocol honestly and contribute to the prosperity of the network.\n\n\\section{Privacy Preserving Queries}\n\\label{future_work:ppq}\n\nThe algorithms (queries) the system supports are not privacy-preserving. The privacy of the individuals in the dataset must be protected with various privacy-preserving techniques such as differential privacy~\\cite{differential_privacy}, k-anonymity~\\cite{Samarati98protectingprivacy} and l-diversity~\\cite{Aggarwal2008}. These techniques can be applied either by the data controllers before handing over the dataset to the processor, or by the processors themselves where a requestor wants the de-identification of the dataset as a processing procedure.\n\n\\section{Reputation system}\n\\label{future_work:ranking_system}\n\nA \\textit{decentralized reputation system}~\\cite{trust_is_risk}, resilient to Sybil attacks, could eliminate the need for the centralized trust model of a public key infrastructure (PKI), which relies on a trusted entity to authenticate, identify and verify the participants of the system. The users in such systems build trust relationships among themselves and measure trustworthiness in a setting where assets can be exchanged between them~\\cite{trust_is_risk}. The benefits of such systems are the \\textit{pseudonymity} of the participants, true decentralization and increased trust in the system itself.\n\nA data controller whose datasets are of bad quality or fabricated with malicious intent could be identified by the users of the system and gain a bad reputation. It is evident that the users of the system should choose datasets whose owners are highly trusted. The same applies to data processors and even the requestors themselves.\n\n\\section{Analytics}\n\\label{future_work:analytics}\n\nThe activity of the system could be analyzed by various tools to measure the functionality and the prosperity of the system. To achieve that, the blockchain should be indexed and monitored constantly by an analytic node that stores all the needed information in a database. 
Various charts and results can be derived from such analysis. For example:\n\n\\begin{itemize}\n  \\item Transaction chart\n  \\item Address growth chart\n  \\item Hash rate growth chart\n  \\item Block difficulty\n  \\item Pending transactions per minute\n  \\item Network transaction fees\n  \\item Total dataset processing requests\n  \\item Total datasets\n  \\item Total processors\n  \\item Total data controllers\n  \\item Most trusted processors or controllers\n  \\item Most used datasets\n  \\item Most used algorithms\n  \\item Top 10 charts\n\\end{itemize}\n\n\\section{Consent}\n\\label{future_work:consent}\n\nA key aspect that is missing from the current solution is the implementation of \\textit{dynamic consent}, which can empower data owners. With the use of smart contracts, data access policies can be implemented that data controllers are obligated to comply with. That way, data owners have total control over their data and can decide when and by whom their data are accessed.\n\nModeling dynamic consent in smart contracts should be carefully analyzed, taking into account design issues related to the smart contract lifecycle, the state variables required for storing the contract\u2019s information, and access restrictions on those variables.\n\nNeisse et al.~\\cite{DBLP:journals/corr/NeisseSF17} proposed the following three models for data accountability and provenance tracking that comply with the GDPR (\u00a7~\\ref{problem:regulations}):\n\n\\begin{enumerate}\n  \\item Contract for a specific controller: A contract where the data subject creates a contract tailored for each data controller\n  \\item Contract for specific data: A contract where each subject\u2019s data instance is shared among all data controllers\n  \\item Contract for multiple data subjects: A contract where the data controller specifies how the data from all subjects are treated by the controller\n\\end{enumerate}\n\nThe first contract is more adequate for sensitive data~\\cite{DBLP:journals/corr/NeisseSF17}, such as health data, since a subject-controller relationship~\\cite{Azaria2016} is created where each patient has a different contract for each controller accessing their data, providing fine-grained access control and provenance information. On the other hand, it has the highest cardinality among the other two types~\\cite{DBLP:journals/corr/NeisseSF17}. The second contract suffers from direct linkability, as a unique subject address is needed, compromising patients\u2019 privacy~\\cite{DBLP:journals/corr/NeisseSF17}. 
Lastly, the third one has the lowest cardinality among the others but also has the lower level of customization due to the limited number of contract options~\\cite{DBLP:journals/corr/NeisseSF17}.\n", "meta": {"hexsha": "b244692cbeb63fae2357d1937c930077e5d8521b", "size": 9616, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/future_work.tex", "max_stars_repo_name": "cnasikas/thesis", "max_stars_repo_head_hexsha": "e5bfd9d293fb7b8024863e389c9a2bc1f5d53509", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-06-21T17:44:21.000Z", "max_stars_repo_stars_event_max_datetime": "2018-06-21T17:44:21.000Z", "max_issues_repo_path": "chapters/future_work.tex", "max_issues_repo_name": "cnasikas/thesis", "max_issues_repo_head_hexsha": "e5bfd9d293fb7b8024863e389c9a2bc1f5d53509", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/future_work.tex", "max_forks_repo_name": "cnasikas/thesis", "max_forks_repo_head_hexsha": "e5bfd9d293fb7b8024863e389c9a2bc1f5d53509", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 114.4761904762, "max_line_length": 1205, "alphanum_fraction": 0.8111480865, "num_tokens": 2049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7057850340255386, "lm_q1q2_score": 0.5651164319432698}}
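To make the threshold behaviour of the secret sharing discussed in the chapter above concrete, here is a minimal Python sketch of Shamir's $t$-of-$n$ scheme over a prime field; the modulus and function names are toy illustrative choices, not a vetted parameterisation.

\begin{verbatim}
import random

P = 2**61 - 1  # a Mersenne prime used here only as a toy modulus

def share(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the shared polynomial at x = 0.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

parts = share(42, t=3, n=5)
assert reconstruct(parts[:3]) == 42   # any 3 of the 5 parts suffice
assert reconstruct(parts[2:]) == 42
\end{verbatim}

Fewer than $t$ shares reveal nothing about the secret, which is exactly the property that lets a dataset be split across data processors without any of them seeing it in the clear.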
{"text": "% Standard Article Definition\n\\documentclass[]{article}\n\n% Page Formatting\n\\usepackage[margin=1in]{geometry}\n\\setlength\\parindent{0pt}\n\n% Graphics\n\\usepackage{graphicx}\n\n% Math Packages\n\\usepackage{physics}\n\\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\\usepackage{mathtools}\n\n% Extra Packages\n\\usepackage{listings}\n\\usepackage{hyperref}\n\n% Section Heading Settings\n\\usepackage{enumitem}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\renewcommand*{\\thesection}{Problem \\arabic{section}}\n\\renewcommand*{\\thesubsection}{\\alph{subsection})}\n\\renewcommand*{\\thesubsubsection}{\\quad \\quad \\roman{subsubsection})}\n\n%Custom Commands\n\\newcommand{\\Rel}{\\mathcal{R}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\toI}{\\xrightarrow{\\textsf{\\tiny I}}}\n\\newcommand{\\toS}{\\xrightarrow{\\textsf{\\tiny S}}}\n\\newcommand{\\toB}{\\xrightarrow{\\textsf{\\tiny B}}}\n\n\\newcommand{\\divisible}{ \\ \\vdots \\ }\n\\newcommand{\\st}{\\ : \\ }\n\n\n% Theorem Definition\n\\newtheorem{definition}{Definition}\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{proposition}{Proposition}\n\n\n%opening\n\\title{MATH 5301 Elementary Analysis - Homework 10}\n\\author{Jonas Wagner}\n\\date{2021, November 12\\textsuperscript{th}}\n\n\\begin{document}\n\n\\maketitle\n\n% Problem 1 ----------------------------------------------\n\\section{}\nCompute the derivatives of the following functions:\n\n% Derivative\n\\begin{definition}\n    Let $f : (a,b) \\to \\R$ be a function.    \n    \\begin{enumerate}\n        \\item The \\emph{\\underline{derivative of the function at point $x_0$}} is defined as\n        \\[\n            f'(x_0) := \\lim_{x \\to x_0} \\cfrac{f(x) - f(x_0)}{x - x_0}\n        \\]\n        \\item If the derivative is defined at $x_0$, then it is \\emph{\\underline{differentiable at $x_0$}}.\n        \\item If the derivative is defined for all $x_0 \\in (a,b)$, then the function $f$ is said to be \\emph{\\underline{differentialble}}.\n        \\item When $f$ is differentiable, the \\underline{\\emph{derivative}} of $f(x)$ is defined as:\n        \\[\n            f'(x) := \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\n        \\]\n    \\end{enumerate}\n\\end{definition}\n\n%Part a\n\\subsection{\n    $x^2 \\sin{\\frac{1}{x}}$\n}\nLet\n\\[\n    f(x) = x^2 \\sin{\\frac{1}{x}}\n\\]\nThen, by definition, the derivative of $f(x)$ is calculated as\n\\begin{align*}\n    f'(x) \n    &= \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            (x + h)^2 \\sin(\\frac{1}{x + h}) - x^2 \\sin(\\frac{1}{x})\n            }{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            \\dv{h}\\qty((x + h)^2 \\sin(\\frac{1}{x + h}) - x^2 \\sin(\\frac{1}{x}))\n            }{\n                \\dv{h} h\n            }\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            2 (x + h) \\sin(\\frac{1}{x + h}) + (x + h)^2 \\qty(\\frac{-1}{(x+h)^2}) \\cos(\\frac{1}{x + h})\n            }{1}\\\\\n    &= \\lim_{h \\to 0} 2 (x + h) \\sin(\\frac{1}{x + h}) - \\cos(\\frac{1}{x + h})\\\\\n    \\Aboxed{\n        f'(x)\n        &= 2 x \\sin(\\frac{1}{x}) - \\cos(\\frac{1}{x})\n        }\n\\end{align*}\n\n%Part b\n\\subsection{\n    $\\cfrac{e^x + e^{-x}}{2}$\n}\nLet\n\\[\n    f(x) = \\cosh(x) = \\cfrac{e^x + e^{-x}}{2}\n\\]\nThen, by definition, 
the derivative of $f(x)$ is calculated as\n\\begin{align*}\n    f'(x) \n    &= \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            \\cfrac{e^{(x+h)} + e^{-(x + h)}}{2} - \\cfrac{e^x + e^{-x}}{2}\n            }{h}\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x + h} + e^{-x - h} - e^{x} - e^{-x}\n        }{\n            2 h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x} e^{h} + e^{-x} e^{-h} - e^{x} - e^{-x}\n        }{\n            2 h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            \\dv{h} \\qty(e^{x} e^{h} + e^{-x} e^{-h} - e^{x} - e^{-x})\n        }{\n            \\dv{h} 2 h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x} e^{h} - e^{-x} e^{-h}\n        }{\n            2\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x + h} - e^{-x - h}\n        }{\n            2\n        }\\\\\n    \\Aboxed{\n        f'(x)\n        &= \\cfrac{e^{x} - e^{-x}}{2} = \\sinh(x)\n        }\n\\end{align*}\n\n%Part c\n\\subsection{\n    $\\cfrac{e^x - e^{-x}}{2}$\n}\nLet\n\\[\n    f(x) = \\sinh(x) = \\cfrac{e^x - e^{-x}}{2}\n\\]\nThen, by definition, the derivative of $f(x)$ is calculated as\n\\begin{align*}\n    f'(x) \n    &= \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            \\cfrac{e^{(x+h)} - e^{-(x + h)}}{2} - \\cfrac{e^x - e^{-x}}{2}\n            }{h}\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x + h} - e^{-x - h} - e^{x} + e^{-x}\n        }{\n            2 h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            \\dv{h} \\qty(e^{x + h} - e^{-x - h} - e^{x} + e^{-x})\n        }{\n            \\dv{h} 2 h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            e^{x + h} + e^{-x - h}\n        }{\n            2\n        }\\\\\n    \\Aboxed{\n        f'(x)\n        &= \\cfrac{e^{x} + e^{-x}}{2} = \\cosh(x)\n        }\n\\end{align*}\n\n%Part d\n\\subsection{\n    $e^x + e^{e^x} + e^{e^{e^x}}$\n}\nLet\n\\[\n    f(x) = e^x + e^{e^x} + e^{e^{e^x}}\n\\]\nThen, by definition, the derivative of $f(x)$ is calculated as\n\\begin{align*}\n    f'(x) \n    &= \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{e^{x+h} + e^{e^{x+h}} + e^{e^{e^{x+h}}} - \\qty(e^x + e^{e^x} + e^{e^{e^x}})\n            }{h}\\\\\n    &= \\lim_{h \\to 0} \\cfrac{e^{x+h} - e^x}{h}\n        + \\lim_{h \\to 0} \\cfrac{e^{e^{x + h}} - e^{e^x}}{h}\n        + \\lim_{h \\to 0} \\cfrac{e^{e^{e^{x + h}}} - e^{e^{e^x}}}{h}\\\\\n    &= \\lim_{h \\to 0} \\cfrac{\n        \\dv{h} \\qty(e^{x+h} - e^x)\n        }{\\dv{h} h}\n        + \\lim_{h \\to 0} \\cfrac{\n            \\dv{h} \\qty(e^{e^{x + h}} - e^{e^x})\n            }{\\dv{h} h}\n        + \\lim_{h \\to 0} \\cfrac{\n            \\dv{h} \\qty(e^{e^{e^{x + h}}} - e^{e^{e^x}})\n            }{\\dv{h} h}\\\\\n    &= \\lim_{h \\to 0} \n            \\cfrac{(1)e^{x+h}}{1} \n        + \\lim_{h \\to 0} \n            \\cfrac{\n                (1)(e^{x+h})(e^{e^{x+h}})\n                }{1}\n        + \\lim_{h \\to 0} \n            \\cfrac{\n                (1)(e^{x+h})(e^{e^{x+h}})(e^{e^{e^{x+h}}})\n                }{1}\\\\\n    &= \\lim_{h \\to 0} e^{x+h}\n        + \\lim_{h \\to 0} e^{x + h} e^{e^{x + h}}\n        + \\lim_{h \\to 0} e^{x + h} e^{e^{x + h}} e^{e^{e^{x + 
h}}}\\\\\n    \\Aboxed{\n        f'(x)\n        &=  e^{x} + e^{x} e^{e^{x}} + e^{x} e^{e^{x}} e^{e^{e^{x}}}\n        = e^{x} + e^{x + e^{x}} + e^{x + e^{x} + e^{e^{x}}}\n    }\n\\end{align*}\n\n\\newpage\n%Part e\n\\subsection{\n    $x^{x^{x^x}}$\n}\nLet\n\\[\n    f(x) = x^{x^{x^{x}}}\n\\]\nThen, by definition, the derivative of $f(x)$ is calculated as\n\\begin{align*}\n    f'(x) \n    &= \\lim_{h \\to 0} \\cfrac{f(x + h) - f(x)}{h}\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            (x + h)^{(x + h)^{(x + h)^{(x + h)}}} - x^{x^{x^{x}}}\n        }{\n            h\n        }\\\\\n    &= \\lim_{h \\to 0} \n        \\cfrac{\n            \\dv{h} \\qty((x + h)^{(x + h)^{(x + h)^{(x + h)}}} - x^{x^{x^{x}}})\n        }{\n            \\dv{h} h\n        }\\\\\n    &= \\lim_{h \\to 0}\n        \\cfrac{\n            \\dv{h} \\qty((x + h)^{(x + h)^{(x + h)^{(x + h)}}})\n        }{\n            1\n        }\n    \\intertext{\n        Exponent Rule: $a^b = e^{b \\ln{a}}$\n    }\n    &= \\lim_{h \\to 0} \\dv{h} \\qty(\n        e^{\\qty(\n            (x + h)^{(x + h)^{(x + h)}}\n            \\ln(x + h)\n        )}\n    )\n    \\intertext{\n        Chain Rule: $\\dv{f}{x} = \\dv{f}{u} \\dv{u}{x}$\n    }\n    &= \\lim_{h \\to 0} \n        \\dv{e^{u}}{u}\\dv{u}{h}\n    \\intertext{\n        with $u = (x + h)^{(x + h)^{(x + h)}} \\ln(x + h)$\n    }\n    &= \\lim_{h \\to 0} e^{u} \\dv{h} \\qty(\n        (x + h)^{(x + h)^{(x + h)}} \\ln(x + h)\n    )\\\\\n    &= \\lim_{h \\to 0} e^{u} \\Big[\n        \\qty((x + h)^{(x + h)^{(x + h)}}) \\frac{1}{x + h}\\\\\n        &\\quad + \\ln(x + h) \\dv{h} e^{\n            (x + h)^{(x + h)}\n            \\ln(x + h)\n        }\n    ]\\\\\n    &\\vdots\\\\\n    &= \\lim_{h \\to 0}\n        (x + h)^{(x + h)^{(x + h)^{(x + h)}}} \\\\\n        &\\qquad \\Bigg[(x + h)^{(x + h)^{(x + h)}-1}\\\\\n            &\\qquad \\qquad + \\Bigg(\n                (x + h)^{(x + h)^{(x + h)}} \\ln(x + h) \\Big(\n                    (x + h)^{h + x - 1}\\\\\n                    &\\qquad \\qquad \\qquad \\qquad \n                    + (x + h)^{x + h} \\ln(x + h) \\qty(\n                        1 + \\ln(x + h)\n                    )\n                \\Big)\n            \\Bigg)\n        \\Bigg]\n\\end{align*}\n\nThen\n\\[\\boxed{\n    f'(x) = x^{x^{x^{x}}} \\qty(\n        x^{x^{x}}\\ln(x) \\qty(\n            x^{x}\\ln(x) \\qty(\\ln(x) + 1) + x^{x - 1}\n        )\n        +x^{x^{x} - 1}\n    )\n}\\]\n\n% Problem 2\n\\newpage\n\\section{}\n\n% Differentiable Function\n\\begin{definition}\n    A \\emph{\\underline{differentiable function}} over $(a,b)$ is a function $f$ such that $f'(x)$ exists for all $x \\in (a,b)$.\n\\end{definition}\n\n% Bounded Function\n\\begin{definition}\n    A \\emph{\\underline{bounded function}} is a function such that \n    \\[\n        \\exists_{N \\in \\R} \\st \\forall_{x \\in (a,b)} \\abs{f(x)} < N\n    \\]\n\\end{definition}\n\n% Unbounded Function\n\\begin{definition}\n    An \\emph{\\underline{unbounded function}} is a function that is not bounded.\n\\end{definition}\n\n\\begin{definition}\n    A function $f : X \\to Y$ is \\emph{\\underline{Uniformly Continuous}} if \n    \\[\n        \\forall_{\\epsilon > 0} \\exists_{\\delta(\\epsilon) > 0} \\st \\forall_{x,y \\in X} \\norm{x - y} < \\delta \\implies \\norm{f(x) - f(y)} < \\epsilon\n    \\]\n    Note: this is stricter than continuity itself, the difference being that a single $\\delta(\\epsilon)$ must work for all $x,y \\in X$; whereas a \\emph{\\underline{Continuous}} function only requires that this is true 
for some $\\delta$ dependent on $\\epsilon$, $x$, and $y$:\n    \\[\n        \\forall_{\\epsilon > 0} \\forall_{x,y \\in X} \\exists_{\\delta(\\epsilon,x,y) > 0} \\st \\norm{x - y} < \\delta \\implies \\norm{f(x) - f(y)} < \\epsilon\n    \\]\n\\end{definition}\n\n%Part a\n\\subsection{Prove the following:}\n\\begin{theorem}\n    If $f \\st (-1,1) \\to \\R$ is a differentiable unbounded function, then $f'$ is also unbounded on $(-1,1)$.\n    \\begin{proof}\n        When $f$ is unbounded,\n        \\[\n            \\forall_{N \\in \\R} \\exists_{x \\in (-1,1)} \\st \\abs{f(x)} \\geq N\n        \\]\n        Suppose, for contradiction, that $f'$ were bounded, i.e.\n        \\[\n            \\exists_{M \\in \\R} \\st \\forall_{x \\in (-1,1)} \\abs{f'(x)} \\leq M\n        \\]\n        By the Mean Value Theorem, for any $x \\in (-1,1)$ there exists a $\\xi$ between $0$ and $x$ with $f(x) - f(0) = f'(\\xi) x$, so that\n        \\[\n            \\abs{f(x)} \\leq \\abs{f(0)} + M \\abs{x} \\leq \\abs{f(0)} + M\n        \\]\n        This bounds $f$ on $(-1,1)$, contradicting the assumption that $f$ is unbounded. Therefore $f'$ must be unbounded as well.\n    \\end{proof}\n\\end{theorem}\n\n%Part b\n\\subsection{\n    Provide an example of bounded differentiable function on $[-1,1]$ with an unbounded derivative.\n}\n\\begin{theorem}\n    $f(x) = \\sqrt{x + 1}$ is a bounded function on $[-1,1]$, differentiable on $(-1,1]$, but has an unbounded derivative.\n    \\begin{proof}\n        Clearly, $f(x)$ is fully defined and bounded on $[-1,1]$.\n        The domain of $\\sqrt{x + 1}$ is $\\{x \\in \\R \\st x \\geq -1\\}$ while the range for $x \\in [-1,1]$ is $\\{y \\in [0,\\sqrt{2}]\\}$.\n\n        The derivative of $f$ is defined as \n        \\[\n            f'(x) = \\cfrac{1}{2 \\sqrt{x + 1}}\n        \\]\n        which is bounded on $[a,1]$ for any $a > -1$, but an asymptote exists at $x = -1$ in which \n        \\[\n            \\lim_{x \\to -1^+} \\cfrac{1}{2 \\sqrt{x + 1}} = + \\infty\n        \\]\n        so $f'$ is clearly unbounded.\n    \\end{proof}\n\\end{theorem}\n\n%Part c\n\\subsection{Prove the following:}\n\\begin{theorem}\n    If $f \\st (-1,1) \\to \\R$ is a differentiable function, such that $f'$ is bounded on $(-1,1)$, then $f$ is uniformly continuous.\n    \\begin{proof}\n        $f'$ bounded on $(-1,1)$ means\n        \\[\n            \\exists_{M > 0} \\st \\forall_{x \\in (-1,1)} \\abs{f'(x)} \\leq M\n        \\]\n        For any $x, y \\in (-1,1)$, the Mean Value Theorem gives a $\\xi$ between $x$ and $y$ with $f(x) - f(y) = f'(\\xi)(x - y)$, hence\n        \\[\n            \\norm{f(x) - f(y)} \\leq M \\norm{x - y}\n        \\]\n        i.e. $f$ is Lipschitz. Given $\\epsilon > 0$, choose $\\delta(\\epsilon) = \\epsilon / M$; then $\\norm{x - y} < \\delta$ implies $\\norm{f(x) - f(y)} \\leq M \\norm{x - y} < \\epsilon$, with the single $\\delta$ valid for all $x, y$.\n        Therefore $f$ is uniformly continuous, i.e:\n        \\[\n            \\forall_{\\epsilon > 0} \\exists_{\\delta(\\epsilon) > 0} \\st \\forall_{x,y \\in (-1,1)} \\norm{x - y} < \\delta \\implies \\norm{f(x) - f(y)} < \\epsilon\n        \\]\n
    \\end{proof}\n\\end{theorem}\n\n% Problem 3\n\\newpage\n\\section{}\nFind $f^{(n)}(0)$ for the following functions:\n\n% n-th Derivative\n\\begin{definition}\n    Let $f : (a,b) \\to \\R$ be a differentiable function.\n    \\begin{enumerate}\n        \\item If $f' : (a,b) \\to \\R$ is also differentiable, $f$ is called \\emph{\\underline{twice differentiable}}.\n        \\item The derivative of $f'$ is then denoted as $f''$ and is called the \\emph{\\underline{Second Derivative}}.\n        \\item The \\emph{\\underline{n-th Derivative}}, denoted by $f^{(n)} : (a,b) \\to \\R, n \\in \\N$, is defined by repeating differentiation on the next derivative:\n        \\begin{enumerate}\n            \\item $n = 0, \\ f^{(0)} = f$\n            \\item If $f^{(n)}$ is defined, for $n \\geq 0$, then $f^{(n+1)} = \\dv{x} (f^{(n)})$.\n        \\end{enumerate}\n        \\item If $f$ has derivatives up until the order $n, n \\geq 1$, then $f$ is \\emph{\\underline{n-times differentiable}}.\n        \\item If $f^{(n)}$ is continuous, then $f$ is \\emph{\\underline{n-times continuously differentiable}}; which is also known as $f$ being of class $C^n$.\n    \\end{enumerate}\n\\end{definition}\n\n% Leibnitz Formula\n\\begin{theorem}\n    \\textbf{Leibnitz Formula:}\n    Let $f,g : (a,b) \\to \\R$ be $n$-times differentiable functions. \n    Then for $1\\leq m \\leq n$, the m-th derivative of $f(x)g(x)$ is given by:\n    \\[\n        \\dv[m]{x}\\qty(f(x) g(x)) = \\sum_{k=0}^m \\mqty(m \\\\ k) f^{(m-k)}(x) g^{(k)}(x)\n    \\]\n    \\begin{proof}\n        Proof provided in the lecture notes: Theorem 6.20\n    \\end{proof}\n\\end{theorem}\n\n%Part a\n\\subsection{\n    $\\sin(ax)\\cos(bx)$\n}\nIt is known, and can be proven using Euler's identity, that the derivatives of sinusoidal functions follow the following progression:\n    \\begin{enumerate}\n        \\item $\\dv{x} \\cos(x) = - \\sin(x)$\n        \\item $\\dv{x} -\\sin(x) = - \\cos(x)$\n        \\item $\\dv{x} -\\cos(x) = \\sin(x)$\n        \\item $\\dv{x} \\sin(x) = \\cos(x)$\n    \\end{enumerate}\nAdditionally, from the chain rule, the m-th derivative of a domain-scaled function $f(a x)$ is given by\n\\[\n    \\dv[m]{x} f(a x) = a^{m} f^{(m)}(u), \\ u = a x\n\\]\n\nLet $f_1(x) = \\sin(ax)$ and $f_2(x) = \\cos(bx)$. By Euler's formula,\n\\[\n    f_1(x) = \\sin(ax) = \\cfrac{e^{j a x} - e^{- j a x}}{2 j}\n\\]\nand\n\\[\n    f_2(x) = \\cos(bx) = \\cfrac{e^{j b x} + e^{-j b x}}{2}\n\\]\n\nThe m-th derivatives of $f_1$ and $f_2$ follow directly from these complex exponential forms:\n\\[\n    f_1^{(m)}(x) \n    = a^m (j)^{m} \\cfrac{e^{j a x} - (-1)^{m} e^{- j a x}}{j 2}\n    = a^m (j)^{m - 1} \\cfrac{e^{j a x} + (-1)^{m - 1} e^{- j a x}}{2}\n\\]\nand \n\\[\n    f_2^{(m)}(x) \n    = b^m (j)^{m} \\cfrac{e^{j b x} + (-1)^{m} e^{- j b x}}{2}\n\\]\n\nBy the Leibnitz Formula, the m-th derivative of $f_1(x) f_2(x)$ can be calculated as follows:\n\\[\n    \\dv[m]{x}\\qty(f_1(x) f_2(x)) \n    = \\sum_{k=0}^m \\mqty(m \\\\ k) f_1^{(m-k)}(x) f_2^{(k)}(x)\n\\]\n
\\begin{align*}\n    \\dv[m]{x}\\qty(\\sin(ax) \\cos(bx)) \n        &= \\sum_{k=0}^m \\mqty(m \\\\ k) \n            \\qty(a^{m - k} (j)^{m - k - 1} \\cfrac{e^{j a x} + (-1)^{m - k - 1} e^{- j a x}}{2})\n            \\qty(b^k (j)^{k} \\cfrac{e^{j b x} + (-1)^{k} e^{- j b x}}{2})\\\\\n    \\Aboxed{\n    \\dv[m]{x}\\qty(\\sin(ax) \\cos(bx)) \n        &= \\sum_{k=0}^m \\mqty(m \\\\ k) \\qty(a^{m - k} b^k) \n            \\qty((j)^{m - k - 1} \\cfrac{e^{j a x} + (-1)^{m - k - 1} e^{- j a x}}{2})\n            \\qty((j)^{k} \\cfrac{e^{j b x} + (-1)^{k} e^{- j b x}}{2})\n    }\\\\\n\\end{align*}\nUsing Euler's formula again, this can be converted back into many different configurations of sinusoidal forms; regardless, it can be evaluated for $f^{(m)}(0)$ within this complex exponential form:\n\\begin{align*}\n    f^{(m)} (0) \n    &= \\sum_{k=0}^m \\mqty(m \\\\ k) \\qty(a^{m - k} b^k) \n        \\qty((j)^{m - k - 1} \\cfrac{e^{j a (0)} + (-1)^{m - k - 1} e^{- j a (0)}}{2})\n        \\qty((j)^{k} \\cfrac{e^{j b (0)} + (-1)^{k} e^{- j b (0)}}{2})\\\\\n    &= \\sum_{k=0}^m \\mqty(m \\\\ k) \\qty(a^{m - k} b^k) (j)^{m - 1}\n        \\qty(\\cfrac{1 + (-1)^{m - k - 1}}{2})\n        \\qty(\\cfrac{1 + (-1)^{k}}{2})\n    \\intertext{\n        The factor $\\frac{1 + (-1)^{k}}{2}$ vanishes unless $k \\divisible 2$, and the factor $\\frac{1 + (-1)^{m - k - 1}}{2}$ vanishes unless $(m - k - 1) \\divisible 2$; for even $k$ both factors survive exactly when $(m - 1) \\divisible 2$.\n    }\n    &= \\begin{cases}\n        (j)^{m - 1} \\sum_{k \\divisible 2} \\mqty(m \\\\ k) a^{m - k} b^{k}\n        & (m - 1) \\divisible 2\\\\\n        0 & m \\divisible 2\n    \\end{cases}\n\\end{align*}\nSince $(j)^{m - 1} = (-1)^{(m - 1)/2}$ for odd $m$, ultimately this means that \n\\[\n    f^{(m)}(0) = \\begin{cases}\n        \\sum_{k = 0}^{(m - 1)/2} \\mqty(m \\\\ 2k) a^{m - 2 k} b^{2 k}\n        &(m - 1) \\divisible 4\\\\\n        - \\sum_{k = 0}^{(m - 1)/2} \\mqty(m \\\\ 2k) a^{m - 2 k} b^{2 k}\n        &(m + 1) \\divisible 4\\\\\n        0\n        & m \\divisible 2\n    \\end{cases}\n\\]\nAs a quick check, $m = 1$ gives $f'(0) = a$, matching $\\dv{x}\\qty(\\sin(ax)\\cos(bx)) = a\\cos(ax)\\cos(bx) - b\\sin(ax)\\sin(bx)$ evaluated at $x = 0$.\n    \n%Part b\n\\subsection{\n    $x^k \\sin{\\frac{1}{x}}$\n}\nAssuming that $k \\in \\N$, let\n\\[\n    f(x) = x^k \\sin{\\frac{1}{x}}\n\\]\n\nLet $f_1(x) = x^k$ and $f_2(x) = \\sin(\\frac{1}{x})$.\n\nBy the Leibnitz Formula, the m-th derivative of $f_1(x) f_2(x)$ can be calculated as follows:\n\\[\n    \\dv[m]{x}\\qty(f_1(x) f_2(x)) \n    = \\sum_{l=0}^m \\mqty(m \\\\ l) f_1^{(m-l)}(x) f_2^{(l)}(x)\n\\]\nTherefore,\n\\[\n    \\dv[m]{x} x^k \\sin(\\frac{1}{x})\n        = \\sum_{l = 0}^m \\mqty(m \\\\ l)\n            \\dv[m - l]{x} x^k\n            \\dv[l]{x} \\sin(\\frac{1}{x})\n\\]\n
\nIt is known by the power rule, that\n\\[\n    \\dv[m]{x} x^k = \\begin{cases}\n        \\frac{k!}{(k - m)!} x^{k - m} &m \\leq k\\\\\n        0 &m > k\n    \\end{cases}\n\\]\n\nEach derivative $\\dv[l]{x} \\sin(\\frac{1}{x})$ contributes at worst a factor on the order of $x^{-2l}$, so the $l$-th term in the sum above behaves like $x^{k - m - l}$ near $0$, the worst case being $l = m$. Provided $2m \\leq k$, every term therefore vanishes as $x \\to 0$, and the limit defining the derivative at $0$ gives\n\\[\n    \\boxed{f^{(m)}(0) = 0 \\quad \\text{for } 2m \\leq k}\n\\]\nFor larger $m$ the limit defining $f^{(m)}(0)$ fails to exist, so the claim cannot be extended to all $m$.\n\n\\newpage\n%Part c\n\\subsection{\n    $f(x) =\n    \\begin{cases}\n        e^{-\\frac{1}{x^2}}, &x > 0\\\\\n        0,                  &x \\leq 0\n    \\end{cases}$\n}\n\nSince $\\lim_{x \\to 0^+} - \\frac{1}{x^2} = - \\infty$,\n\\[\n    \\lim_{x \\to 0^+} e^{-\\frac{1}{x^2}} = 0\n\\]\n\nAdditionally, since $\\forall_{x \\leq 0} f(x) = 0$,\n\\[\n    \\forall_{x \\leq 0} \\ f'(x) = 0\n\\]\n\nBy the Chain rule, for $x > 0$,\n\\[\n    \\dv{x} e^{- \\frac{1}{x^2}} = \\frac{2}{x^3} e^{-\\frac{1}{x^2}}\n\\]\nSimilarly, by the chain rule and product rule, for $x > 0$,\n\\[\n    \\dv[2]{x} e^{- \\frac{1}{x^2}} \n    = \\dv{x} \\frac{2}{x^3} e^{-\\frac{1}{x^2}}\n    = \\frac{-6}{x^4} e^{-\\frac{1}{x^2}} + \\frac{2}{x^3} \\frac{2}{x^3} e^{-\\frac{1}{x^2}}\n    = \\frac{2}{x^6} (2 - 3 x^2) e^{-\\frac{1}{x^2}}\n\\]\nAn induction with the chain and product rules shows that, for $x > 0$,\n\\[\n    \\dv[m]{x} e^{- \\frac{1}{x^2}}\n    = Q_m\\qty(\\frac{1}{x}) e^{-\\frac{1}{x^2}}\n\\]\nfor some polynomial $Q_m$, and since the exponential decay of $e^{-\\frac{1}{x^2}}$ beats any polynomial growth in $\\frac{1}{x}$,\n\\[\n    \\lim_{x \\to 0^+} \\dv[m]{x} e^{- \\frac{1}{x^2}} = 0\n\\]\nTherefore $f^{(n)}(0) = 0$ for every $n$, since the one-sided limits agree; in particular $f$ is smooth on all of $\\R$. \nThis is the standard example of a smooth function that is not analytic at $0$: every derivative vanishes there, yet $f$ is not identically zero on any neighborhood of $0$.\n\n% Problem 4\n\\newpage\n\\section{}\nConstruct an example of an infinitely differentiable function $f(x)$ such that $f(x) = 0$ for $x \\leq 0$, $f(x) = 1$ for $x \\geq 1$ and $f(x)$ is strictly monotone on the interval $(0,1)$.\n\nUsing such a function you could construct, for example, a monotone function $g(x)$ such that $\\lim_{x \\to +\\infty} g(x) = 0$ but $\\lim_{x \\to +\\infty} g'(x) \\neq 0$. 
(How?)\n\n\\begin{definition}\n    Let $f : \\R \\to \\R$ be defined as\n    \\[\n        f(x) := \\begin{cases}\n            0   &x \\leq 0\\\\\n            \\frac{1 - \\cos(\\pi x)}{2} & x \\in (0,1)\\\\\n            1   &x \\geq 1\n        \\end{cases}\n    \\]   \n\\end{definition}\n\n\n\\begin{theorem}\n    $f$ is strictly monotone on $(0,1)$, is continuously differentiable, and satisfies $f(x) = 0$ for $x \\leq 0$ and $f(x) = 1$ for $x \\geq 1$. However, $f$ is only of class $C^1$: it fails to be twice differentiable at $x = 0$ and $x = 1$, so it does not fully answer the problem.\n    \\begin{proof}\n        \\begin{enumerate}\n            \\item $f$ is strictly monotone over $(0,1)$ means\n                \\[\n                    \\forall_{x \\in (0,1)} \\ f'(x) > 0 \n                \\]\n                Since within the interval $(0,1)$, \n                $f(x) = \\frac{1 - \\cos(\\pi x)}{2}$, \n                \\[\n                    f'(x) = \\frac{\\pi}{2} \\sin(\\pi x)\n                \\]\n                which is strictly positive over $(0,1)$, implying $f(x)$ is strictly increasing.\n                By the definition of a monotone, $f$ is therefore a monotone function.\n            \\item $f$ is differentiable on all of $\\R$: on $(0,1)$ the derivative is $\\frac{\\pi}{2}\\sin(\\pi x)$, elsewhere it is $0$, and the one-sided values agree at $x = 0$ and $x = 1$ because $\\sin(0) = \\sin(\\pi) = 0$, so $f \\in C^1(\\R)$. On $(0,1)$ the n-th derivative is\n                \\[\n                    f^{(n)}(x) = \\begin{cases}\n                        \\frac{\\pi^n}{2} \\sin(\\pi x) &n + 3 \\divisible 4\\\\\n                        \\frac{\\pi^n}{2} \\cos(\\pi x) &n + 2 \\divisible 4\\\\\n                        -\\frac{\\pi^n}{2} \\sin(\\pi x) &n + 1 \\divisible 4\\\\\n                        -\\frac{\\pi^n}{2} \\cos(\\pi x) &n \\divisible 4\n                    \\end{cases}\n                \\]\n                while $f^{(n)}(x) = 0$ for $x \\in (-\\infty, 0] \\cup [1,\\infty)$. For $n = 2$ the one-sided values at $x = 0$ disagree: $f''(0^+) = \\frac{\\pi^2}{2} \\neq 0 = f''(0^-)$ (and similarly at $x = 1$), so $f$ is \\emph{not} twice differentiable at the endpoints, and the infinite differentiability requirement is not met by this construction.\n            \\item $f(x) = 0$ for $x \\leq 0$ is true by the piecewise definition of $f(x)$\n            \\item $f(x) = 1$ for $x \\geq 1$ is true by the piecewise definition of $f(x)$\n        \\end{enumerate}\n    \\end{proof}\n\\end{theorem}\n\nPerhaps it is possible to use such a function to construct the desired function $g(x)$ from one of the monotone pieces on that region, but my guess is that such a function would be monotone for all $x$ instead of only within the region described, as with the cosine construction above.\nPerhaps a sigmoid variant would be useful; I believe that any variation based on the cosine function with the piecewise definition will not satisfy the smoothness condition (at least with all real values).\n\n% \\newpage\n% \\begin{definition}\n%     Let $g : (0,\\infty) \\to \\R$ be defined in terms of $f$ as:\n%     \\[\n%         g(x) := x\\cfrac{f(x) - 1}\n%         = \\begin{cases}\n%             \\cfrac{e^{x}(-1 - \\cos(\\pi x))}{2x}\n%             & 0 < x < 1\\\\\n%             0 & x \\geq 0\n%         \\end{cases}\n%     \\]\n% \\end{definition}\n\n% \\begin{theorem}\n%     The function $g(x)$ converges to 0, \n%     \\[\n%         \\lim_{x \\to +\\infty} g(x) = 0\n%     \\]\n%     but the same is not the case for $g'(x)$:\n%     \\[\n%         \\lim_{x \\to +\\infty} \\neq 0\n%     \\]\n%     \\begin{proof}\n%         \\begin{enumerate}\n%             \\item Clearly, by definition of $g(x)$, $g(x)$ converges to 0.\n%             \\item The derivative does not converge to zero:\n%                 \\begin{align*}\n%                     g'(x) = \\begin{cases}\n                        \n%                     \\end{cases}\n%                 \\end{align*}\n%         \\end{enumerate}\n%     \\end{proof}\n% \\end{theorem}\n
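\nSince the cosine-based $f$ above is only $C^1$, the following is a brief added sketch (the standard textbook construction, not part of my original attempt) of a fully smooth choice built from the Problem 3c function, together with one way to answer the ``(How?)'' above:\n\\[\n    h(x) := \\begin{cases}\n        e^{-1/x} & x > 0\\\\\n        0 & x \\leq 0\n    \\end{cases},\n    \\qquad\n    f(x) := \\cfrac{h(x)}{h(x) + h(1 - x)}\n\\]\nAs in Problem 3c, $h$ is smooth; the denominator is always positive, so $f$ is smooth, $f(x) = 0$ for $x \\leq 0$, $f(x) = 1$ for $x \\geq 1$, and $f'(x) > 0$ on $(0,1)$. For the second part one may then take\n\\[\n    g(x) := \\sum_{n = 1}^{\\infty} 2^{-n} \\qty(1 - f\\qty(4^{n}(x - n)))\n\\]\nwhich is smooth (near any point only finitely many summands are non-constant) and monotonically decreases to $0$; yet just to the right of each integer $n$ the $n$-th summand drops by $2^{-n}$ over a window of width $4^{-n}$, so $g'$ there reaches size on the order of $2^{n}$ and $g'(x) \\not\\to 0$ as $x \\to +\\infty$.\n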
\n% Problem 5\n\\newpage\n\\section{}\nFind the following limits\n\nTrigonometric Derivative Reference:\n\\begin{align*}\n    \\dv{x} \\sin(x) &= \\cos(x)\\\\\n    \\dv{x} \\cos(x) &= -\\sin(x)\\\\\n    \\dv{x} \\tan(x) &= \\sec^2(x)\\\\\n    \\dv{x} \\sin[-1](x) &= \\frac{1}{\\sqrt{1-x^2}}\\\\\n    \\dv{x} \\cos[-1](x) &= -\\frac{1}{\\sqrt{1-x^2}}\\\\\n    \\dv{x} \\tan[-1](x) &= \\frac{1}{1 + x^2}\n\\end{align*}\n\n%Part a\n\\subsection{\n    $\\lim_{x \\to 0} \\frac{\\tan{x} - x}{x^3}$\n}\nApplying L'H\\^opital's rule repeatedly to the successive $\\frac{0}{0}$ forms,\n\\begin{align*}\n    \\lim_{x \\to 0} \\frac{\\tan{x} - x}{x^3}\n        &= \\lim_{x \\to 0} \n            \\frac{\\dv{x} \\qty(\\tan{x} - x)}{\\dv{x} x^3}\\\\\n        &= \\lim_{x \\to 0}\n            \\frac{\\sec^2{x} - 1}{3 x^2}\\\\\n        &= \\lim_{x \\to 0}\n            \\frac{2 \\sec^2{x}\\tan{x}}{6 x}\\\\\n        &= \\lim_{x \\to 0}\n            \\frac{(4 \\sin[2](x) + 2) \\sec[4](x)}{6}\\\\\n        &= \\frac{(4 (0)^2 + 2) (1)^4}{6}\\\\\n    \\Aboxed{\n        \\lim_{x \\to 0} \\frac{\\tan{x} - x}{x^3}\n        &= \\frac{2}{6} = \\frac{1}{3}\n    }\n\\end{align*}\n\n\\newpage\n%Part b\n\\subsection{\n    $\\lim_{x \\to 0} \\frac{\\arctan(\\arcsin{x}) - \\arcsin(\\arctan{x})}{\\sin{x} - \\tan{x}}$\n}\n\nA first attempt with L'H\\^opital's rule gives\n\\begin{align*}\n    \\lim_{x \\to 0} \\frac{\\arctan(\\arcsin{x}) - \\arcsin(\\arctan{x})}{\\sin{x} - \\tan{x}}\n    &= \\lim_{x \\to 0}\n        \\frac{\n            \\dv{x} \\qty(\\tan[-1](\\sin[-1](x)) - \\sin[-1](\\tan[-1](x)))\n        }{\n            \\dv{x} \\qty(\\sin(x) - \\tan(x))\n        }\\\\\n    &= \\lim_{x \\to 0}\n        \\frac{\n            \\cfrac{1}{\\qty(1 + \\sin[-1](x)^2)\\sqrt{1-x^2}}\n            - \\cfrac{1}{\\sqrt{1 - \\tan[-1](x)^2}\\,(1 + x^2)}\n        }{\n            \\cos(x) - \\sec^2(x)\n        }\n\\end{align*}\nbut both the numerator and the denominator still tend to $0$, so this is inconclusive. Comparing Taylor expansions instead: from\n\\[\n    \\sin[-1](x) = x + \\frac{x^3}{6} + \\frac{3x^5}{40} + O(x^7), \\qquad\n    \\tan[-1](x) = x - \\frac{x^3}{3} + \\frac{x^5}{5} + O(x^7),\n\\]\ncomposing the two series in either order gives the same expansion\n\\[\n    \\tan[-1](\\sin[-1](x)) = x - \\frac{x^3}{6} + \\frac{13 x^5}{120} + O(x^7) = \\sin[-1](\\tan[-1](x)) + O(x^7),\n\\]\nso the numerator is $O(x^7)$, while\n\\[\n    \\sin(x) - \\tan(x) = -\\frac{x^3}{2} + O(x^5).\n\\]\nTherefore the ratio is $O(x^4)$, and\n\\begin{equation*}\n    \\centering\n    \\boxed{\n        \\lim_{x \\to 0} \\frac{\\arctan(\\arcsin{x}) - \\arcsin(\\arctan{x})}{\\sin{x} - \\tan{x}}\n        = 0\n    }\n\\end{equation*}\n\n%Part c\n\\subsection{\n    $\\lim_{x \\to +\\infty} \\frac{x^{\\ln{x}}}{(\\ln{x})^x}$\n}\n\n\\begin{align*}\n    \\lim_{x \\to +\\infty} \\frac{x^{\\ln{x}}}{(\\ln{x})^x}\n    &= \\lim_{x \\to + \\infty}\n        x^{\\ln(x)} \\ln(x)^{-x}\\\\\n    &= \\lim_{x \\to + \\infty}\n        e^{\\ln(x^{\\ln(x)})} e^{\\ln(\\ln(x)^{-x})}\\\\\n
    &= \\lim_{x \\to + \\infty}\n        e^{\\ln(x) \\ln(x)} e^{- x \\ln(\\ln(x))}\\\\\n    &= \\lim_{x \\to + \\infty}\n        \\frac{\n            e^{(\\ln(x))^2}\n        }{\n            e^{x \\ln(\\ln(x))}\n        }\\\\\n    &= \\lim_{x \\to + \\infty}\n        e^{(\\ln(x))^2 - x \\ln(\\ln(x))}\n    % &= \\lim_{x \\to +\\infty} \n    %     \\frac{\n    %         \\dv{x} x^{\\ln{x}}\n    %     }{\n    %         \\dv{x} (\\ln{x})^x\n    %     }\\\\\n    % &= \\lim_{x \\to +\\infty} \n    %     \\frac{\n    %         \\dv{x^{\\ln{x}}}{\\ln{x}}{x} \\dv{\\ln{x}}{x}\n    %     }{\n    %         \\dv{x} e^{\\ln{(\\ln{x})^x}}\n    %     }\\\\\n    % &= \\lim_{x \\to +\\infty} \n    %     \\frac{\n    %         \\ln{x} x^{\\ln{x} - 1} \\frac{1}{x}\n    %     }{\n    %         \\dv{\n    %             x \\ln(\\ln(x))\n    %         }{x}\n    %         e^{x \\ln(\\ln{x})}\n    %     }\\\\\n    % &= \\lim_{x \\to +\\infty} \n    %     \\frac{\n    %         \\frac{\\ln{x} x^{\\ln{x} - 1}}{x}\n    %     }{\n    %         \\qty(\n    %             \\ln(\\ln(x)) \n    %             + \\frac{1}{\\ln(x)}\n    %         )\n    %         e^{\n    %             x \\ln(\\ln{x})\n    %         }\n    %     }\\\\\n    % &= \\lim_{x \\to +\\infty}\n    %     \\frac{\n    %         x^{\\ln(x) - 2} \\ln(x)\n    %     }{\n    %         \\qty(\\ln(x))^x\n    %         \\qty(\n    %             \\ln(\\ln(x) + \\frac{1}{\\ln(x)})\n    %         )\n    %     }\\\\\n    % &= \\lim_{x \\to +\\infty}\n    %     \\cfrac{\n    %         x^{\\ln(x) - 3}\n    %         \\qty(\n    %             2 (\\ln(x))^2 - 2 \\ln(x) + 1\n    %         )\n    %     }{\n    %         x^{-1}\n    %         (\\ln(x))^{x - 2}\n    %         \\qty(\n    %             x \n    %             + x (\\ln(x))^2 (\\ln(\\ln(x)))^2\n    %             + 2 x \\ln(x) \\ln(\\ln(x))\n    %             + \\ln(x)\n    %             - 1\n    %         )\n    %     }\n\\end{align*}\n\nFor $x \\geq e^{e}$ we have $\\ln(\\ln(x)) \\geq 1$, so $x \\ln(\\ln(x)) \\geq x$, while $(\\ln(x))^2$ grows far slower than $x$; hence the exponent $(\\ln(x))^2 - x \\ln(\\ln(x))$ tends to $-\\infty$. In other words, $(\\ln(x))^x$ eventually dominates $x^{\\ln(x)}$: although both exponents grow, $x \\ln(\\ln(x))$ beats $(\\ln(x))^2$.\n\nTherefore, \n\\[\n    \\lim_{x \\to + \\infty} \\frac{x^{\\ln(x)}}{(\\ln(x))^x} = 0\n\\]\n
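\nTo make the growth comparison in the exponent explicit (an added check, not in the original derivation), two applications of L'H\\^opital's rule give\n\\[\n    \\lim_{x \\to + \\infty} \\cfrac{x}{(\\ln(x))^2}\n    = \\lim_{x \\to + \\infty} \\cfrac{x}{2 \\ln(x)}\n    = \\lim_{x \\to + \\infty} \\cfrac{x}{2} = + \\infty,\n\\]\nso $x \\ln(\\ln(x)) \\geq x \\gg (\\ln(x))^2$ for $x \\geq e^{e}$, confirming that the exponent diverges to $-\\infty$.\n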
\n% Problem 6\n\\newpage\n\\section{}\nFind an example of a function $f(x)$ which is continuous at every point of the interval $(0,1)$, but is not differentiable at any point of $(0,1)$.\n\nRead about the construction of the function, which is differentiable at every point of $(0,1)$ but whose derivative is discontinuous at every point of $(0,1)$.\n\n\\subsection*{Continuous but not differentiable}\nThis type of function is known as a fractal curve. An example of this is the \\emph{Weierstrass function} which for $(0,1)$ can be defined as:\n\\begin{definition}\n    Let $f : (0,1) \\to \\R$ be defined by\n    \\[\n        f(x) := \\sum_{n = 0}^{\\infty} a^n \\cos(b^n \\pi x)\n    \\]\n    where $0 < a < 1$,\n    $b$ is a positive odd integer $\\geq 7$,\n    and\n    \\[\n        ab > 1 + \\frac{3}{2} \\pi\n    \\]    \n\\end{definition}\nAs proven by Weierstrass, the function $f$ is continuous everywhere but differentiable nowhere.\n\n\\subsection*{Differentiable function with a discontinuous derivative}\nAn example construction of a differentiable function whose derivative is discontinuous on a dense subset of $(0,1)$ was found in \\url{https://math.stackexchange.com/a/423279}. (A derivative cannot actually be discontinuous at \\emph{every} point: derivatives are pointwise limits of continuous functions, so their points of continuity form a dense set.)\n\nReading through this example, the process involves using the differentiable function:\n\\[\n    f(x) = \\begin{cases}\n        x^2 \\sin(1/x) & x \\neq 0\\\\\n        0 & x = 0        \n    \\end{cases}\n\\]\nwhose derivative is discontinuous at $x = 0$\n\\[\n    f'(x) = \\begin{cases}\n        2 x \\sin(\\frac{1}{x}) - \\cos(\\frac{1}{x}) & x \\neq 0\\\\\n        0 & x = 0\n    \\end{cases}\n\\]\n\nThis function is then modified to add additional discontinuities,\nlike having discontinuous derivatives at $x = 0$ and $x = 1$:\n\\[\n    f(x) = \\begin{cases}\n        x^2 (1 - x)^2 \\sin(\\frac{1}{\\pi x (1-x)}) & 0 < x < 1\\\\\n        0 & \\text{else}\n    \\end{cases}\n\\]\n\nThis can then be expanded to place discontinuities at arbitrary points $x_m$ and $x_n$ with $x_m < x_n$:\n\\[\n    f_{m,n}(x) = \\begin{cases}\n        (x_m - x)^2 (x_n - x)^2 \\sin(\\frac{1}{\\pi (x_m - x) (x_n - x)}) & x_m < x < x_n\\\\\n        0 &\\text{else}\n    \\end{cases}\n\\]\n\nFinally, a sum of these functions can be formed to achieve the goal of a differentiable function whose derivative is discontinuous on a dense set:\n\\[\n    F_N(x) = \\sum_{n = 1}^{N} \\sum_{m=1}^{2^n} f_{m,n}(x)\n\\]\nwhere the discontinuity points $x_k$ are selected from a sequence that is dense in $(0,1)$.\n\nThe limit of this sum as $N \\to \\infty$ essentially develops a function similar to the fractal examples, but with differentiability intact.\n\n\\end{document}\n", "meta": {"hexsha": "9be79dee0b873b8dcde01658a4783af0aaa363d8", "size": 34658, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW10/MATH5301-HW10.tex", "max_stars_repo_name": "jonaswagner2826/MATH5301", "max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z", "max_issues_repo_path": "Homework/HW10/MATH5301-HW10.tex", "max_issues_repo_name": "jonaswagner2826/MATH5301", "max_issues_repo_head_hexsha": 
"40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW10/MATH5301-HW10.tex", "max_forks_repo_name": "jonaswagner2826/MATH5301", "max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2515779982, "max_line_length": 361, "alphanum_fraction": 0.4511801027, "num_tokens": 13224, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.8006920020959544, "lm_q1q2_score": 0.5651164269881654}}
{"text": "\\documentclass[english]{pkupaper}\n\n\\usepackage{lipsum}\n\\usepackage{essay-def}\n\\usepackage{clrscode}\n\\usepackage{appendix}\n\\usepackage{mathrsfs}\n\\usepackage{indentfirst}\n\\usepackage{multirow}\n\\usepackage{makecell}\n%\\allowdisplaybreaks[4]\n\n\\newenvironment{eqt}{\\begin{equation}\\begin{aligned}}{\\end{aligned}\\end{equation}}\n\n\\newcommand{\\titlemark}{Final project of Numerical Linear Algebra}\n\n\\title{\\titlemark}\n\\author{1500010611 \u6c6a\u794e\u975e}\n\\date{}\n\\bibliography{DeepL}\n\n\\begin{document}\n\\maketitle\n\n\\section{Introduction}\nThis report is organized as follows: Section \\ref{sec:dotb} describes how to use MAC scheme to discretize the Stokes equation and to attain the Saddle Point Problem. In Section \\ref{sec:prob1}, we introduce two types of DGS: DGS in parallel \\cite{mmfts} and DGS in sequence (in the notes); we describe the routine of V-Cycle multi-grid method. Section \\ref{sec:prob2} introduces Uzawa Iteration, Inexact Uzawa Iteration and Inexact Uzawa Iteration based on V-Cycle multi-grid. We rigorously prove that Exact Uzawa Iteration will converge in at most 2 iteration under fair assumption. The numerical results is shown in Section \\ref{sec:num}. In Appendix \\ref{appendix}, we modify V-Cycle multi-grid method based on DGS a little bit and achieve great improvement on the original one. \n\n\\section{Description of the background}\n\\label{sec:dotb}\nFor 2-dimensional Stokes Equation in the region $\\Omega=(0,1)^2$\n\\begin{equation}\n\\label{stokes}\n\\begin{aligned}\n-\\Delta \\mfu +\\nabla p =\\mfF,\\\\\n-\\nabla \\cdot \\mfu = 0.\n\\end{aligned}\n\\end{equation}\nwhere $\\mfu=(u,v)$ is the velocity, $p$ is the pressure, $\\mfF=(f,g)$ is the external force.\n\nThe boundary condition is given by\n\\begin{eqt}\n&\\left.\\frac{\\p u}{\\p \\nu}\\right|_{y=0}=b, \\quad \\left.\\frac{\\p u}{\\p \\nu}\\right|_{y=1}=t,\\\\\n&\\left.\\frac{\\p v}{\\p \\nu}\\right|_{x=0}=l, \\quad \\left.\\frac{\\p v}{\\p \\nu}\\right|_{x=1}=r,\\\\\n&\\left.u\\right|_{x=0}=\\left.u\\right|_{x=1}=0, \\quad \\left.v\\right|_{y=0}=\\left.v\\right|_{y=1}=0,\n\\end{eqt}\nwhere $\\nu$ is the outer normal vector.\n\nWe consider the external force to be specifies as \n\\begin{equation}\n\\begin{aligned}\n&f(x,y)=-4\\pi^2(2\\cos(2\\pi x)-1)\\sin(2\\pi y)+x^2,\\\\\n&g(x,y)=4\\pi^2(2\\cos(2\\pi y)-1)\\sin(2\\pi x).\n\\end{aligned}\n\\end{equation}\n\nThe true solution is given by\n\\begin{eqt}\n\\label{true_sol}\n&u(x,y)=(1-\\cos(2\\pi x))\\sin (2\\pi y),\\\\\n&v(x,y)=-(1-\\cos(2\\pi y))\\sin (2\\pi x),\\\\\n&p(x,y)=\\frac{x^3}{3}-\\frac{1}{12}.\n\\end{eqt}\n\nBased on the true solution, we can calculate the boundary condition as follows:\n\\begin{eqt}\nb(x)=-\\left.\\frac{\\p u}{\\p y}\\right|_{y=0}=-2\\pi (1-\\cos(2\\pi x)),\\\\\nt(x)=\\left.\\frac{\\p u}{\\p y}\\right|_{y=1}=2\\pi (1-\\cos(2\\pi x)),\\\\\nl(x)=-\\left.\\frac{\\p v}{\\p x}\\right|_{x=0}=2\\pi (1-\\cos(2\\pi y)),\\\\\nr(x)=\\left.\\frac{\\p v}{\\p x}\\right|_{x=1}=-2\\pi (1-\\cos(2\\pi y)).\\\\\n\\end{eqt}\n\nFor given scale of lattice $N$, and the step size $h=1/N$, we can discretize this PDE. 
We define\n\\begin{eqt}\n&u_{i,j-\\frac{1}{2}}\\approx u(ih,(j-\\frac{1}{2})h), \\quad &0\\leq i \\leq N, 1\\leq j\\leq N,\\\\\n&v_{i-\\frac{1}{2},j}\\approx v((i-\\frac{1}{2})h,jh), &1\\leq i \\leq N, 0\\leq j\\leq N,\\\\\n&p_{i-\\frac{1}{2},j-\\frac{1}{2}}\\approx p((i-\\frac{1}{2})h,(j-\\frac{1}{2})h), &1\\leq i \\leq N, 1\\leq j\\leq N,\\\\\n&f_{i,j-\\frac{1}{2}}=f(ih,(j-\\frac{1}{2})h), & 1\\leq i \\leq N-1, 1\\leq j\\leq N,\\\\\n&g_{i-\\frac{1}{2},j}=g((i-\\frac{1}{2})h,jh), & 1\\leq i \\leq N, 1\\leq j\\leq N-1,\\\\\n&b_i=b(ih), \\quad t_i=t(ih), & 1\\leq i\\leq N-1,\\\\\n&l_j=l(jh), \\quad r_j=r(jh), & 1\\leq j\\leq N-1.\n\\end{eqt}\nHere we use $\\approx$ to indicate that $u_{i,j-\\frac{1}{2}}, v_{i-\\frac{1}{2},j}, p_{i-\\frac{1}{2},j-\\frac{1}{2}}$ are the numerical solutions on the corresponding sites instead of the true solution. Our notation here is slightly different from the notation in the assignment. \n\nBy the boundary condition, we have $u_{0,j+\\frac{1}{2}}=u_{N,j+\\frac{1}{2}}=v_{i+\\frac{1}{2},0}=v_{i+\\frac{1}{2},N}=0$. For the discrete forms of $u$, $v$, we have $N(N-1)$ free variables respectively. For the discrete form of $p$, we have $N^2$ free variables. We denote $U\\in\\mbR^{(N-1)\\times N}$ with $U_{i,j}=u_{i,j-\\oot}$, $V\\in \\mbR^{N\\times(N-1)}$ with $V_{i,j}=v_{i-\\oot,j}$,  $P\\in\\mbR^{N\\times N}$ with $P_{i,j}=p_{i-\\oot,j-\\oot}$, $F\\in\\mbR^{(N-1)\\times N}$ with $F_{i,j}=f_{i,j-\\oot}$ and $G\\in \\mbR^{N\\times(N-1)}$ with $G_{i,j}=g_{i-\\oot,j}$.\n\n\\subsection{Definitions and notations}\nHere we state some useful definitions and notations.\n\\begin{definition}\nFor a matrix $A\\in \\mbR^{m\\times n}$, we define two mappings $\\phi, \\psi: \\mbR^{m\\times n}\\to \\mbR^{mn\\times 1}$. $\\phi(A)$ is with entry $\\phi(A)_{i+m(j-1)}=A_{i,j}$. $\\psi(A)$ is with entry $\\psi(A)_{n(i-1)+j}=A_{i,j}$. \n$A_{i,.}$ represents the $i$-th row of $A$; $A_{.,j}$ represents the $j$-th column of $A$. \n\\end{definition}\nFrom the above definition, we find that \n\\begin{eqt}\n\\psi(A)=\\phi(A^T),\n\\end{eqt} \n\\begin{eqt}\n&\\phi(A)=\\begin{bmatrix}\nA_{.,1}\\\\\n \\vdots \\\\\n A_{.,n}\n\\end{bmatrix},\\quad \\psi(A)=\\begin{bmatrix}\nA_{1,.}\\\\\n \\vdots \\\\\n A_{n,.}\n\\end{bmatrix}.\n\\end{eqt}\nSuppose $a_1, a_2 \\dots a_n \\in\\mbR^{n\\times 1}$. Then, the inverse mappings satisfy\n\\begin{eqt}\n\\phi^{-1}(\\begin{bmatrix}\na_1\\\\\n \\vdots \\\\\na_n\n\\end{bmatrix})=[a_1, a_2, \\dots a_n],\\quad \\psi^{-1}(\\begin{bmatrix}\na_1\\\\\n \\vdots \\\\\na_n\n\\end{bmatrix})=\\begin{bmatrix}\na_1^T\\\\\n \\vdots \\\\\na_n^T\n\\end{bmatrix}.\n\\end{eqt}\n\nWe use $\\otimes$ to represent the Kronecker product.\n
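\nFor concreteness, $\\phi$ and $\\psi$ are just the two storage orders of a matrix; a minimal NumPy sketch of this correspondence (ours, added for illustration, not part of the original code):\n\\begin{verbatim}\nimport numpy as np\n\nm, n = 3, 4\nA = np.arange(m * n).reshape(m, n)\n\nphi_A = A.reshape(m * n, 1, order='F')  # stack columns: phi(A)_{i+m(j-1)} = A_{i,j}\npsi_A = A.reshape(m * n, 1, order='C')  # stack rows:    psi(A)_{n(i-1)+j} = A_{i,j}\n\n# psi(A) = phi(A^T):\nassert (psi_A == A.T.reshape(m * n, 1, order='F')).all()\n\\end{verbatim}\n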
\\subsection{MAC scheme}\nWe use the MAC scheme to discretize \\ref{stokes}. \n\nWe denote $T_{N-1}\\in \\mbR^{(N-1)\\times (N-1)}$ with entries\n\\begin{eqt}\nT_{N-1}=\\begin{bmatrix}\n2&-1\\\\\n-1&\\ddots&\\ddots\\\\\n&\\ddots&\\ddots&-1\\\\\n&&-1&2\\\\\n\\end{bmatrix},\n\\end{eqt}\nand $S_{N-1}\\in \\mbR^{(N-1)\\times N}$ with entries\n\\begin{eqt}\nS_{N-1}=\\begin{bmatrix}\n-1&1\\\\\n&\\ddots&\\ddots\\\\\n&&-1&1\\\\\n\\end{bmatrix}.\n\\end{eqt}\nWe also set $S_{N-1}^{(2)}=S^T_{N-1}S_{N-1}\\in \\mbR^{N\\times N}$. We can verify that $S_{N-1}^{(2)}$ is with entries\n\\begin{eqt}\\label{s2}\nS^{(2)}_{N-1}=\\begin{bmatrix}\n1&-1\\\\\n-1&2&\\ddots\\\\\n&\\ddots&\\ddots&\\ddots\\\\\n&&\\ddots&2&-1\\\\\n&&&-1&1\\\\\n\\end{bmatrix}.\n\\end{eqt}\n\nWe start with $u$. For $1\\leq i \\leq N-1, 2\\leq j\\leq N-1$, we have \n\\begin{equation}\n\\label{fij}\n-\\frac{1}{h^2}(u_{i+1,j-\\oot}+u_{i-1,j-\\oot}+u_{i,j+\\oot}+u_{i,j-\\frac{3}{2}}-4u_{i,j-\\oot})+\\frac{1}{h}(p_{i+\\oot,j-\\oot}-p_{i-\\oot,j-\\oot})=f_{i,j-\\oot}.\n\\end{equation}\n\nStacking \\ref{fij} with $1\\leq i\\leq N-1$, we have \n\\begin{eqt}\n\\label{fij2}\n\\frac{1}{h^2}\\left(T_{N-1}U_{.,j}+2U_{.,j}-U_{.,j-1}-U_{.,j+1}\\right)+\\frac{1}{h}S_{N-1}P_{.,j}=F_{.,j}.\n\\end{eqt}\n\nFor $1\\leq i \\leq N-1, j=1$, we have \n\\begin{eqt}\n\\label{bi}\n-\\frac{1}{h^2}(u_{i+1,j-\\oot}+u_{i-1,j-\\oot}+u_{i,j+\\oot}-3u_{i,j-\\oot}+hb_i)+\\frac{1}{h}(p_{i+\\oot,j-\\oot}-p_{i-\\oot,j-\\oot})=f_{i,j-\\oot}.\n\\end{eqt}\n\nStacking \\ref{bi} with $1\\leq i\\leq N-1$, we have \n\\begin{eqt}\n\\label{bi2}\n\\frac{1}{h^2}\\left(T_{N-1}U_{.,j}+U_{.,j}-U_{.,j+1}-hb\\right)+\\frac{1}{h}S_{N-1}P_{.,j}=F_{.,j}.\n\\end{eqt}\n\nFor $1\\leq i \\leq N-1, j=N$, we have \n\\begin{eqt}\n\\label{ti}\n-\\frac{1}{h^2}(u_{i+1,j-\\oot}+u_{i-1,j-\\oot}+u_{i,j-\\frac{3}{2}}-3u_{i,j-\\oot}+ht_i)+\\frac{1}{h}(p_{i+\\oot,j-\\oot}-p_{i-\\oot,j-\\oot})=f_{i,j-\\oot}.\n\\end{eqt}\n\nStacking \\ref{ti} with $1\\leq i\\leq N-1$, we have \n\\begin{eqt}\n\\label{ti2}\n\\frac{1}{h^2}\\left(T_{N-1}U_{.,j}+U_{.,j}-U_{.,j-1}-ht\\right)+\\frac{1}{h}S_{N-1}P_{.,j}=F_{.,j}.\n\\end{eqt}\n\nWe denote $A_N\\in \\mbR^{N(N-1)\\times N(N-1)}$, $B^{(1)}_N\\in\\mbR^{N(N-1)\\times N^2}$ and $F^{(bt)}\\in \\mbR^{N(N-1)\\times1}$:\n\\begin{eqt}\n\\label{an}\nA_N=\\begin{bmatrix}\nI_{N-1}+T_{N-1}&-I_{N-1}\\\\\n-I_{N-1}&2I_{N-1}+T_{N-1}&-I_{N-1}\\\\\n&\\ddots&\\ddots&\\ddots\\\\\n&&\\ddots&2I_{N-1}+T_{N-1}&-I_{N-1}\\\\\n&&&-I_{N-1}&I_{N-1}+T_{N-1}\\\\\n\\end{bmatrix},\n\\end{eqt}\n\n\\begin{eqt}\\label{bn1}\nB_N^{(1)}=\\begin{bmatrix}\nS_{N-1}\\\\\n&\\ddots\\\\\n&&S_{N-1}\\\\\n\\end{bmatrix}=I_N\\otimes S_{N-1}, \\quad F^{(bt)}=\\phi(F)+\\frac{1}{h}\\begin{bmatrix}\nb\\\\\n0\\\\\n\\vdots\\\\\n0\\\\\n\\end{bmatrix}+\\frac{1}{h}\\begin{bmatrix}\n0\\\\\n\\vdots\\\\\n0\\\\\nt\\\\\n\\end{bmatrix}.\n\\end{eqt}\nIndeed, $A_N=S_{N-1}^{(2)}\\otimes I_{N-1}+I_{N}\\otimes T_{N-1}$.\n\nTherefore, we can combine \\ref{fij2}, \\ref{bi2}, \\ref{ti2} together in the matrix form:\n\\begin{eqt}\n\\label{u1}\n&\\frac{1}{h^2}\nA_N\\phi(U)+\\frac{1}{h}\nB_N^{(1)}\\phi(P)=F^{(bt)}.\n\\end{eqt}\n\n\nThen, we deal with $v$. We denote by $e^{(i)}_N\\in \\mbR^{N\\times 1}$ the vector whose $i$-th entry is $1$ and whose other entries are $0$. 
We denote $R_N^{(i)}\\in \\mbR^{(N-1)\\times N^2}$:\n\\begin{eqt}\n\\label{rni}\nR_N^{(i)}=\\begin{bmatrix}\n-(e_{N}^{(i)})^T & (e_{N}^{(i)})^T\\\\\n&\\ddots&\\ddots\\\\\n&&-(e_{N}^{(i)})^T & (e_{N}^{(i)})^T\\\\\n\\end{bmatrix}=S_{N-1}\\otimes(e_{N}^{(i)})^T.\n\\end{eqt}\nIt is easy to verify that \n\\begin{eqt}\nR_N^{(i)}\\phi(P)=\\begin{bmatrix}\n-(e_{N}^{(i)})^T & (e_{N}^{(i)})^T\\\\\n&\\ddots&\\ddots\\\\\n&&-(e_{N}^{(i)})^T & (e_{N}^{(i)})^T\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nP_{.,1}\\\\\n\\vdots\\\\\nP_{.,n}\\\\\n\\end{bmatrix}\n=\\begin{bmatrix}\nP_{i,2}-P_{i,1}\\\\\n\\vdots\\\\\nP_{i,N}-P_{i,N-1}\\\\\n\\end{bmatrix}.\n\\end{eqt}\n\nFor $2\\leq i \\leq N-1, 1\\leq j\\leq N-1$, we have \n\\begin{equation}\n\\label{gij}\n-\\frac{1}{h^2}(v_{i-\\oot,j+1}+v_{i-\\oot,j-1}+v_{i+\\oot,j}+v_{i-\\frac{3}{2},j}-4v_{i-\\oot,j})+\\frac{1}{h}(p_{i-\\oot,j+\\oot}-p_{i-\\oot,j-\\oot})=g_{i-\\oot,j}.\n\\end{equation}\nStacking \\ref{gij} with $1\\leq j\\leq N-1$, we have\n\\begin{eqt}\n\\label{gij2}\n\\frac{1}{h^2}\\left(T_{N-1}V_{i,.}^T+2V_{i,.}^T-V_{i+1,.}^T-V_{i-1,.}^T\\right)+\\frac{1}{h}R_N^{(i)}\\phi(P)=G_{i,.}^T.\n\\end{eqt}\n\nFor $i=1, 1\\leq j \\leq N-1$, we have \n\\begin{eqt}\n\\label{lj}\n-\\frac{1}{h^2}(v_{i-\\oot,j+1}+v_{i-\\oot,j-1}+v_{i+\\oot,j }-3v_{i-\\oot,j}+hl_j)+\\frac{1}{h}(p_{i-\\oot,j+\\oot}-p_{i-\\oot,j-\\oot})=g_{i-\\oot,j}.\n\\end{eqt}\nStacking \\ref{lj} with $1\\leq j\\leq N-1$, we have\n\\begin{eqt}\n\\label{lj2}\n\\frac{1}{h^2}\\left(T_{N-1}V_{i,.}^T+V_{i,.}^T-V_{i+1,.}^T-hl\\right)+\\frac{1}{h}R_N^{(i)}\\phi(P)=G_{i,.}^T.\n\\end{eqt}\n\nFor $i=N, 1\\leq j \\leq N-1$, we have \n\\begin{eqt}\n\\label{rj}\n-\\frac{1}{h^2}(v_{i-\\oot,j+1}+v_{i-\\oot,j-1}+v_{i-\\frac{3}{2},j }-3v_{i-\\oot,j}+hr_j)+\\frac{1}{h}(p_{i-\\oot,j+\\oot}-p_{i-\\oot,j-\\oot})=g_{i-\\oot,j}\n.\\end{eqt}\nStacking \\ref{rj} with $1\\leq j\\leq N-1$, we have\n\\begin{eqt}\n\\label{rj2}\n\\frac{1}{h^2}\\left(T_{N-1}V_{i,.}^T+V_{i,.}^T-V_{i-1,.}^T-hr\\right)+\\frac{1}{h}R_N^{(i)}\\phi(P)=G_{i,.}^T\n.\\end{eqt}\n\nWe denote $B_N^{(2)}\\in\\mbR^{N(N-1)\\times N^2}$ and $G^{(lr)}\\in \\mbR^{N(N-1)\\times1}$:\n\n\\begin{eqt}\\label{bn2}\nB_N^{(2)}=\\begin{bmatrix}R_N^{(1)}\\\\\n\\vdots\\\\\nR_N^{(N)}\\\\\n\\end{bmatrix},\\quad G^{(lr)}=\\psi(G)+\\frac{1}{h}\\begin{bmatrix}\nl\\\\\n0\\\\\n\\vdots\\\\\n0\\\\\n\\end{bmatrix}+\\frac{1}{h}\\begin{bmatrix}\n0\\\\\n\\vdots\\\\\n0\\\\\nr\\\\\n\\end{bmatrix}.\n\\end{eqt}\nTherefore, we can combine \\ref{gij2}, \\ref{lj2}, \\ref{rj2} together in the following matrix form:\n\\begin{eqt}\n\\frac{1}{h^2}A_N\\psi(V)+\\frac{1}{h}B_N^{(2)}\\phi(P)=G^{(lr)}.\n\\end{eqt}\n\nWe also have the following linear constraints from $\\nabla \\cdot \\mfu=0$. 
For $1\\leq i\\leq N$ and $1\\leq j\\leq N$, we have\n\\begin{eqt}\n\\label{divu}\n-\\frac{1}{h}(u_{i,j-\\oot}-u_{i-1,j-\\oot})-\\frac{1}{h}(v_{i-\\oot,j}-v_{i-\\oot,j-1})=0.\n\\end{eqt}\n\nWe observe that \n\\begin{eqt}\n(B_N^{(1)})^T\\phi(U)=\\begin{bmatrix}\nS_{N-1}^T\\\\\n&\\ddots\\\\\n&&S_{N-1}^T\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nU_{.,1}\\\\\n\\vdots\\\\\nU_{.,N}\\\\\n\\end{bmatrix}=\\begin{bmatrix}\nS_{N-1}^TU_{.,1}\\\\\n\\vdots\\\\\nS_{N-1}^TU_{.,N}\\\\\n\\end{bmatrix},\n\\end{eqt}\nwhere\n\\begin{eqt}\nS_{N-1}^TU_{.,j}=\\begin{bmatrix}\n-1\\\\\n1&\\ddots\\\\\n&\\ddots&-1\\\\\n&&1\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nU_{1,j}\\\\\n\\vdots\\\\\nU_{N-1,j}\n\\end{bmatrix}=\\begin{bmatrix}\nU_{0,j}-U_{1,j}\\\\\n\\vdots\\\\\nU_{N-1,j}-U_{N,j}\n\\end{bmatrix}.\n\\end{eqt}\nTherefore, we have\n\\begin{eqt}\n\\label{bn1t}\n(B_N^{(1)})^T\\phi(U)= \\sum_{j=1}^Ne_N^{(j)}\\otimes\\begin{bmatrix}\nU_{0,j}-U_{1,j}\\\\\n\\vdots\\\\\nU_{N-1,j}-U_{N,j}\n\\end{bmatrix}.\n\\end{eqt}\nWe also observe that\n\\begin{eqt}\n&(B_N^{(2)})^T\\psi(V)=\\left[(R_N^{(1)})^T, \\dots (R_N^{(N)})^T\\right]\\begin{bmatrix}\n(V_{1,.})^T\\\\\n\\vdots\\\\\n(V_{N,.})^T\\\\\n\\end{bmatrix}\\\\\n=&\\left[S_{N-1}^T\\otimes e_N^{(1)}, \\dots S_{N-1}^T\\otimes e_N^{(N)}\\right]\\begin{bmatrix}\n(V_{1,.})^T\\\\\n\\vdots\\\\\n(V_{N,.})^T\\\\\n\\end{bmatrix}=\\sum_{i=1}^N\\left(S_{N-1}^T\\otimes e_N^{(i)}\\right)(V_{i,.})^T,\n\\end{eqt}\nwhere\n\\begin{eqt}\n&\\left(S_{N-1}^T\\otimes e_N^{(i)}\\right)(V_{i,.})^T\n=\\begin{bmatrix}\n-e_N^{(i)}\\\\\ne_N^{(i)}&\\ddots\\\\\n&\\ddots&-e_N^{(i)}\\\\\n&&e_N^{(i)}\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nV_{i,1}\\\\\n\\vdots\\\\\nV_{i,N}\\\\\n\\end{bmatrix}=\\begin{bmatrix}\nV_{i,0}-V_{i,1}\\\\\n\\vdots\\\\\nV_{i,N-1}-V_{i,N}\\\\\n\\end{bmatrix}\\otimes e_N^{(i)}.\n\\end{eqt}\nTherefore, \n\\begin{eqt}\n\\label{bn2t}\n(B_N^{(2)})^T\\psi(V)=\\sum_{i=1}^N\\begin{bmatrix}\nV_{i,0}-V_{i,1}\\\\\n\\vdots\\\\\nV_{i,N-1}-V_{i,N}\\\\\n\\end{bmatrix}\\otimes e_N^{(i)}.\n\\end{eqt}\nStacking \\ref{divu} for $1\\leq i\\leq N$ and $1\\leq j\\leq N$, we get\n\\begin{eqt}\n\\frac{1}{h}\\lp B_N^{(1)}\\rp^T\\phi(U)+\\frac{1}{h}\\lp B_N^{(2)}\\rp^T\\psi(V)=0.\n\\end{eqt}\n\nIn summary, we denote $\\mfA\\in\\mbR^{2N(N-1)\\times2N(N-1)}$, $\\mfB\\in \\mbR^{2N(N-1)\\times N^2}, \\mfU \\in\\mbR^{2N(N-1)\\times1}, \\mfP\\in \\mbR^{N^2\\times1}, \\mfF \\in\\mbR^{2N(N-1)\\times1}$:\n\\begin{equation}\n\\mfA=\\frac{1}{h^2}\\begin{bmatrix}\nA_N\\\\\n&A_N\n\\end{bmatrix},\\, \\mfB=\\frac{1}{h}\\begin{bmatrix}\nB^{(1)}_N\\\\\nB^{(2)}_N\n\\end{bmatrix},\\,\\mfU=\\begin{bmatrix}\n\\phi(U)\\\\\n\\psi(V)\n\\end{bmatrix},\\, \\mfP=\n\\phi(P),\\, \\mfF=\\begin{bmatrix}\nF^{(bt)}\\\\\nG^{(lr)}\n\\end{bmatrix},\n\\end{equation}\nwhere $A_N$ is defined in \\ref{an}; $B_N^{(1)}$ and  $F^{(bt)}$ are defined in \\ref{bn1}; $B_N^{(2)}$ and $G^{(lr)}$ are defined in \\ref{bn2}.\n\nThen we derive the following Saddle Point Problem as the discretization of the Stokes equation \\ref{stokes} with boundary conditions:\n\\begin{equation}\n\\label{spp}\n\\begin{bmatrix}\n\\mfA&\\mfB\\\\\\mfB^T&0\n\\end{bmatrix}\\begin{bmatrix}\n\\mfU\\\\\\mfP\n\\end{bmatrix}=\\begin{bmatrix}\n\\mfF\\\\0\n\\end{bmatrix}.\n\\end{equation}\nWe can also rescale \\ref{spp} by introducing $\\tilde \\mfA = h^2\\mfA, \\tilde \\mfB = h^2\\mfB$ and $\\tilde \\mfF=h^2\\mfF$. Then, \\ref{spp} turns into \n\\begin{equation}\n\\label{rspp}\n\\begin{bmatrix}\n\\tilde\\mfA&\\tilde\\mfB\\\\\\tilde\\mfB^T&0\n\\end{bmatrix}\\begin{bmatrix}\n\\mfU\\\\\\mfP\n\\end{bmatrix}=\\begin{bmatrix}\n\\tilde\\mfF\\\\0\n\\end{bmatrix}.\n\\end{equation}\nIn our numerical experiments, the residual is computed for \\ref{rspp} rather than for \\ref{spp}. \n
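\nAs a sanity check on the Kronecker-product identities used above and in the next section, the following minimal NumPy sketch (ours, added for illustration; $N$ and all variable names are arbitrary) assembles the blocks and verifies them numerically:\n\\begin{verbatim}\nimport numpy as np\n\nN = 8\nI = np.eye\nT = 2 * I(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)   # T_{N-1}\nS = np.eye(N - 1, N, k=1) - np.eye(N - 1, N)                  # S_{N-1}\nS2 = S.T @ S                                                  # S^{(2)}_{N-1}\n\nA_N = np.kron(S2, I(N - 1)) + np.kron(I(N), T)                # A_N as in (an)\nB1 = np.kron(I(N), S)                                         # B_N^{(1)} as in (bn1)\nB2 = np.vstack([np.kron(S, I(N)[i:i + 1, :]) for i in range(N)])  # rows R_N^{(i)}\n\nassert np.allclose(B1.T @ B1, np.kron(I(N), S2))\nassert np.allclose(B2.T @ B2, np.kron(S2, I(N)))\n\\end{verbatim}\nThe two assertions are exactly the identities for $\\lp B_N^{(1)}\\rp^TB_N^{(1)}$ and $\\lp B_N^{(2)}\\rp^TB_N^{(2)}$ derived in the next section.\n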
\n\\section{V-Cycle Multi-Grid Based on DGS (Problem 1)}\n\\label{sec:prob1}\nWe consider using the V-cycle multi-grid method to solve \\ref{spp}, with the Distributive Gauss-Seidel (DGS) Iteration as the smoother. Before we state the algorithm, we make several preparations: we calculate $\\mfB^T\\mfB$. Actually,\n\\begin{eqt}\n\\mfB^T\\mfB=\\frac{1}{h^2}\\left(\\lp B^{(1)}_N\\rp^T B^{(1)}_N+\\lp B_N^{(2)}\\rp^T B_N^{(2)}\\right).\n\\end{eqt}\nFrom \\ref{s2} and \\ref{bn1}, $\\lp B_N^{(1)}\\rp^T B_N^{(1)}=I_N\\otimes (S_{N-1}^TS_{N-1})=I_N\\otimes S_{N-1}^{(2)}$. From \\ref{bn2} and \\ref{rni}, \n\\begin{eqt}\n&\\lp B_N^{(2)}\\rp^T B_N^{(2)}=\\sum_{i=1}^N\\left(R_N^{(i)}\\right)^TR_N^{(i)}=\\sum_{i=1}^N\\left(S_{N-1}\\otimes \\left(e_N^{(i)}\\right)^T\\right)^T\\left(S_{N-1}\\otimes \\left(e_N^{(i)}\\right)^T\\right)\\\\\n=&\\sum_{i=1}^NS_{N-1}^{(2)}\\otimes\\left(e_N^{(i)}\\left(e_N^{(i)}\\right)^T\\right)=S_{N-1}^{(2)}\\otimes I_N.\n\\end{eqt}\n\nIf we denote\n\\begin{eqt}\nb^{(1)}=[2,3,\\dots, 3,2],\\quad b^{(2)}=[3,4,\\dots, 4,3],\n\\end{eqt}\nthen we can write $\\mfB^T\\mfB$ explicitly:\n\\begin{eqt}\n\\mfB^T\\mfB=\\frac{1}{h^2}\\begin{bmatrix}\n\\mathrm{diag}(b^{(1)})&-I_N\\\\\n-I_N&\\mathrm{diag}(b^{(2)})&\\ddots\\\\\n&\\ddots&\\ddots&\\ddots\\\\\n&&\\ddots&\\mathrm{diag}(b^{(2)})&-I_N\\\\\n&&\\ddots&-I_N&\\mathrm{diag}(b^{(1)})\\\\\n\\end{bmatrix}.\n\\end{eqt}\n\nWe then introduce a diagonal matrix $\\mfD\\in \\mbR^{N^2\\times N^2}$ as the inverse of the diagonal part of $\\mfB^T\\mfB$, i.e.\n\\begin{eqt}\n\\label{mfD}\n\\mfD=h^2\\mathrm{diag}([d^{(1)}, d^{(2)}, \\dots d^{(2)}, d^{(1)}]),\n\\end{eqt}\nwhere $d^{(1)}, d^{(2)}\\in\\mbR^N$ with entries \n\\begin{eqt}\nd^{(1)}=[1/2,1/3,\\dots, 1/3,1/2],\\quad d^{(2)}=[1/3,1/4,\\dots, 1/4,1/3].\n\\end{eqt}\n\nFor $1\\leq i,j\\leq N$, we can classify $(i,j)$ into three groups:\n\\begin{itemize}\n\\item Inner unit: $2\\leq i\\leq N-1$ and $2\\leq j\\leq N-1$;\n\\item Edge unit: $i=1,N$, $2\\leq j\\leq N-1$ or $2\\leq i\\leq N-1$, $j=1,N$;\n\\item Point unit: $i=1,N$, $j=1,N$.\n\\end{itemize}\nGiven a matrix $B'\\in \\mbR^{N\\times N}$, if we let its inner units be $1/4$, edge units be $1/3$ and point units be $1/2$, then we directly have\n\\begin{eqt}\n\\label{btb}\n\\frac{1}{h^2}\\mathrm{diag}(\\mfD)=\\phi(B').\n\\end{eqt}\n\n\\subsection{DGS Iteration}\nSuppose the initial values are $\\mfU^{(0)}=\\begin{bmatrix}\\phi(U^{(0)})\\\\ \\psi(V^{(0)})\\end{bmatrix}$ and $\\mfP^{(0)}=\\phi(P^{(0)})$. Let $k=0$, $\\mfA=D_{\\mfA}-L_{\\mfA}-U_{\\mfA}$. 
\nThe DGS iteration is defined as follows:\n\nGiven $\\mfU^{(k)}$ and $\\mfP^{(k)}$, we first use one Gauss-Seidel sweep to update the velocity $\\mfU^{(k+1/2)}$:\n\\begin{eqt}\n\\label{mfU}\n\\mfU^{(k+1/2)}=\\mfU^{(k)}+(D_\\mfA-L_\\mfA)^{-1}(\\mfF-\\mfB\\mfP^{(k)}-\\mfA\\mfU^{(k)}).\n\\end{eqt}\n\nWe denote $Q\\in\\mbR^{N\\times N}$ and write $\\mfQ=\\phi(Q)$, then calculate $\\mfQ$ as follows:\n\\begin{eqt}\n\\label{mfQ}\n\\mfQ=\\mfD(-\\mfB^T \\mfU^{(k+1/2)}).\n\\end{eqt}\n\nFinally, we update $\\mfU^{(k+1)}$ and $\\mfP^{(k+1)}$ by\n\\begin{eqt}\n\\label{updup}\n\\mfU^{(k+1)}=\\mfU^{(k+1/2)}+\\mfB\\mfQ, \\quad \\mfP^{(k+1)}=\\mfP^{(k)}-\\mfB^T\\mfB\\mfQ.\n\\end{eqt}\n\nThis update routine is introduced in \\cite{mmfts}. We shall point out that it is quite similar to the DGS iteration in the notes; nevertheless, this routine is performed in parallel while the routine in the notes is performed in sequence. \n\nHere we show that the update routine in \\cite{mmfts} has only minor differences from DGS (notes) if we perform DGS (notes) in parallel. We denote $D'\\in\\mbR^{N\\times N}$ with $0$ on its inner units, $1$ on its edge units and $2$ on its point units. Then we denote $\\tilde \\mfD\\in \\mbR^{N^2\\times N^2}$ with entries $\\tilde \\mfD=\\frac{1}{h^2}\\mathrm{diag}(\\phi(D'))$. We assert that the update rule of DGS (notes) in parallel can be written in the matrix form as: \n\\begin{itemize}\n\\item Update $\\mfU^{(k+1/2)}$ by \\ref{mfU}.\n\\item Calculate $\\mfQ$ by \\ref{mfQ}.\n\\item Update $\\mfU^{(k+1)}$ and $\\mfP^{(k+1)}$ by\n\\begin{eqt}\n\\label{dgsp}\n\\mfU^{(k+1)}=\\mfU^{(k+1/2)}+\\mfB\\mfQ, \\quad \\mfP^{(k+1)}=\\mfP^{(k)}-(\\mfB^T\\mfB+\\tilde \\mfD)\\mfQ.\n\\end{eqt}\n\\end{itemize}\nThe minor difference is that DGS (notes) in parallel has an additional $\\tilde\\mfD$.\n\nFor DGS (notes) in parallel, the calculation of $\\mfU^{(k+1/2)}$ is the same as in the notes. In the notes, $r_{i,j}$, the residual of the divergence equation \\ref{divu}, is calculated through:\n\\begin{eqt}\n\\label{divequ}\nr_{i,j}=\\frac{u^{(k+1/2)}_{i,j-\\oot}-u^{(k+1/2)}_{i-1,j-\\oot}}{h}+\\frac{v^{(k+1/2)}_{i-\\oot,j}-v^{(k+1/2)}_{i-\\oot,j-1}}{h}.\n\\end{eqt}\n\nWe shall point out that \\ref{divequ} is the major difference between updating in sequence and updating in parallel: $r_{i,j}$ in parallel only considers the values of $u$ and $v$ at step $(k+1/2)$, while $r_{i,j}$ in sequence uses values of $u$ and $v$ that have already been updated towards step $k+1$. If we write $R\\in \\mbR^{N\\times N}$ with entries $r_{i,j}$ and $\\mfR=\\phi(R)$, then \\ref{divequ} is equivalent to\n\\begin{eqt}\n\\label{mfR}\n\\mfR=-\\mfB^T\\mfU^{(k+1/2)}.\n\\end{eqt}\n\nThe calculation of $\\delta_{i,j}$ is based on the classification of $(i,j)$:\n\\begin{itemize}\n\\item $(i,j)$ is an inner unit: $\\delta_{i,j}=\\frac{h}{4}r_{i,j}$; \n\\item $(i,j)$ is an edge unit: $\\delta_{i,j}=\\frac{h}{3}r_{i,j}$; \n\\item $(i,j)$ is a point unit: $\\delta_{i,j}=\\frac{h}{2}r_{i,j}$.\n\\end{itemize}\n\nWe write $\\mfQ$ as $\\mfQ=\\phi(Q)$, where $Q\\in \\mbR^{N\\times N}$ is with entries $q_{i,j}$. 
Directly from the definition of $\\mfD$ and \\ref{btb}, we have\n\\begin{eqt}\nq_{i,j} = h\\,\\delta_{i,j}.\n\\end{eqt}\n\nFor the inner unit $(i,j)$, the update of  $\\mfU^{(k+1)}$ is given by\n\\begin{eqt}\n\\label{uinner}\nu_{i-1,j-\\oot}^{(k+1)}=u_{i-1,j-\\oot}^{(k+1/2)}+\\delta_{i,j}, \\quad u_{i,j-\\oot}^{(k+1)}=u_{i,j-\\oot}^{(k+1/2)}-\\delta_{i,j},\\\\\nv_{i-\\oot,j-1}^{(k+1)}=v_{i-\\oot,j-1}^{(k+1/2)}+\\delta_{i,j}, \\quad v_{i-\\oot,j}^{(k+1)}=v_{i-\\oot,j}^{(k+1/2)}-\\delta_{i,j}.\n\\end{eqt}\nThe update of $\\mfP^{(k+1)}$ is given by\n\\begin{eqt}\np_{i-\\oot,j-\\oot}^{(k+1)}=p_{i-\\oot,j-\\oot}^{(k)}-\\frac{4}{h}\\delta_{i,j}, \\quad p_{i+\\oot,j-\\oot}^{(k+1)}=p_{i+\\oot,j-\\oot}^{(k)}+\\frac{1}{h}\\delta_{i,j}, \\quad p_{i-\\frac{3}{2},j-\\oot}^{(k+1)}=p_{i-\\frac{3}{2},j-\\oot}^{(k)}+\\frac{1}{h}\\delta_{i,j},\n\\end{eqt}\n\\begin{eqt}\n\\label{pinner}\np_{i-\\oot,j+\\oot}^{(k+1)}=p_{i-\\oot,j+\\oot}^{(k)}+\\frac{1}{h}\\delta_{i,j}, \\quad p_{i-\\oot,j-\\frac{3}{2}}^{(k+1)}=p_{i-\\oot,j-\\frac{3}{2}}^{(k)}+\\frac{1}{h}\\delta_{i,j}.\n\\end{eqt}\n\nFor an edge unit, we take $2\\leq i\\leq N-1$, $j=N$ for example. The influence of $\\delta_{i,N}$ on $\\mfU^{(k+1)}$ is given by\n\\begin{eqt}\n\\label{uedge}\n&u_{i-1,N-\\oot}^{(k+1)}=u_{i-1,N-\\oot}^{(k+1/2)}+\\delta_{i,N}, \\  u_{i,N-\\oot}^{(k+1)}=u_{i,N-\\oot}^{(k+1/2)}-\\delta_{i,N}, \\ v_{i-\\oot,N-1}^{(k+1)}=v_{i-\\oot,N-1}^{(k+1/2)}+\\delta_{i,N}.\n\\end{eqt}\nThe update rule of $\\mfP^{(k+1)}$ is given by\n\\begin{eqt}\np_{i-\\oot,N-\\oot}^{(k+1)}=p_{i-\\oot,N-\\oot}^{(k)}-\\frac{4}{h}\\delta_{i,N}, \\quad p_{i+\\oot,N-\\oot}^{(k+1)}=p_{i+\\oot,N-\\oot}^{(k)}+\\frac{1}{h}\\delta_{i,N},\\\\\np_{i-\\frac{3}{2},N-\\oot}^{(k+1)}=p_{i-\\frac{3}{2},N-\\oot}^{(k)}+\\frac{1}{h}\\delta_{i,N}, \\quad p_{i-\\oot,N-\\frac{3}{2}}^{(k+1)}=p_{i-\\oot,N-\\frac{3}{2}}^{(k)}+\\frac{1}{h}\\delta_{i,N}.\n\\end{eqt}\nFor DGS \\cite{mmfts}, we shall have $p_{i-\\oot,N-\\oot}^{(k+1)}=p_{i-\\oot,N-\\oot}^{(k)}-\\frac{3}{h}\\delta_{i,N}$ instead.\n\nFor a point unit, we take $i=N, j=N$ for example. The update of  $\\mfU^{(k+1)}$ is given by\n\\begin{eqt}\n\\label{upoint}\n&u_{N-1,N-\\oot}^{(k+1)}=u_{N-1,N-\\oot}^{(k+1/2)}+\\delta_{N,N}, \\ v_{N-\\oot,N-1}^{(k+1)}=v_{N-\\oot,N-1}^{(k+1/2)}+\\delta_{N,N}.\n\\end{eqt}\n\nThe update of $\\mfP^{(k+1)}$ is given by\n\\begin{eqt}\n&p_{N-\\oot,N-\\oot}^{(k+1)}=p_{N-\\oot,N-\\oot}^{(k)}-\\frac{4}{h}\\delta_{N,N}, \\ p_{N-\\frac{3}{2},N-\\oot}^{(k+1)}=p_{N-\\frac{3}{2},N-\\oot}^{(k)}+\\frac{1}{h}\\delta_{N,N}, \\\\\n& p_{N-\\oot,N-\\frac{3}{2}}^{(k+1)}=p_{N-\\oot,N-\\frac{3}{2}}^{(k)}+\\frac{1}{h}\\delta_{N,N}.\n\\end{eqt}\nFor DGS \\cite{mmfts}, we shall have $p_{N-\\oot,N-\\oot}^{(k+1)}=p_{N-\\oot,N-\\oot}^{(k)}-\\frac{2}{h}\\delta_{N,N}$ instead.\n\nThe minor difference between DGS \\cite{mmfts} and DGS (notes) in parallel is thus the coefficient of $\\delta_{i,j}$ in updating $p_{i-\\oot,j-\\oot}^{(k+1)}$ for the edge units and the point units. That is the reason why we introduce the matrix $\\tilde \\mfD$. The update rule in the notes can be understood as calculating \\ref{dgsp} column by column, i.e.\n\\begin{equation}\n\\mfB\\mfQ=\\sum_l\\mfB_l\\mfQ_l, \\quad \\lp\\tilde \\mfD+\\mfB^T\\mfB\\rp\\mfQ=\\sum_l\\lp\\tilde \\mfD + \\mfB^T\\mfB\\rp_l\\mfQ_l.\n\\end{equation} \nHere we shall emphasize that $\\mfB\\in\\mbR^{2N(N-1)\\times N^2}$ and $\\mfQ\\in \\mbR^{N^2\\times1}$.\n\nWe shall point out that the update rule in the notes shall be performed sequentially: we calculate $r_{i,j}$ with the values already updated earlier in the same sweep; otherwise, we might face divergence.\n
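\nA dense-matrix NumPy sketch of one parallel DGS sweep in the matrix form \\ref{mfU}--\\ref{updup} (ours, for exposition only; a practical implementation applies these operations stencil-wise on the staggered grids):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import solve_triangular\n\ndef dgs_sweep(A, B, D, U, P, F):\n    # A, B: saddle-point blocks; D: inverse of diag(B^T B) as in (mfD);\n    # U, P: current velocity / pressure iterates; F: right-hand side.\n    M = np.tril(A)                        # D_A - L_A (lower triangle of A)\n    U = U + solve_triangular(M, F - B @ P - A @ U, lower=True)   # (mfU)\n    Q = D @ (-(B.T @ U))                  # (mfQ)\n    U = U + B @ Q                         # (updup)\n    P = P - (B.T @ B) @ Q\n    return U, P\n\\end{verbatim}\n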
Namely, we must calculate $r_{i,j}$ during the update process itself; otherwise, we might face divergence.

\subsection{V-Cycle Multi-Grid method}
We then introduce the V-Cycle multi-grid method. We first consider the restriction operator.
\subsubsection{Restriction}
We consider the following matrix $W^{(1)}_N\in\mbR^{N\times 2N}$,
\begin{eqt}
W^{(1)}_N=\begin{bmatrix}
1&1\\
&&1&1\\
&&&&\dots&\dots\\
&&&&&&1&1\\
\end{bmatrix}=I_N\otimes \begin{bmatrix}1&1\end{bmatrix},
\end{eqt} and $W^{(2)}_N\in\mbR^{(N-1)\times (2N-1)}$,
\begin{eqt}
W_N^{(2)}=
\begin{bmatrix}
1&2&1\\
&&1&2&1\\
&&&&\dots&\dots&\dots\\
&&&&&&1&2&1\\
\end{bmatrix}=\begin{bmatrix}0&W_{N-1}^{(1)}\end{bmatrix}+\begin{bmatrix}W_{N-1}^{(1)}&0\end{bmatrix}.
\end{eqt}

At the $u$- and $v$-grid points, we consider six-point restrictions, and at the $p$-grid points, a four-point cell-centered restriction. In stencil notation, the restriction operators are ($*$ indicates the position of the coarse-grid point)
\begin{eqt}
\label{rh2h}
R_{h/2,h}^u=\frac{1}{8}\begin{bmatrix}1&2&1\\
&*\\
1&2&1\\\end{bmatrix}, \quad R_{h/2,h}^v=\frac{1}{8}\begin{bmatrix}
1&&1\\
2&*&2\\
1&&1\\\end{bmatrix}, \quad R_{h/2,h}^p=\frac{1}{4}\begin{bmatrix}
1&&1\\
&*&\\
1&&1
\end{bmatrix}.
\end{eqt}

We can write \ref{rh2h} element-wise:
\begin{eqt}
U^{h}_{i,j}=\frac{1}{8}(U^{h/2}_{2i-1,2j-1}+2U^{h/2}_{2i,2j-1}+U^{h/2}_{2i+1,2j-1}+U^{h/2}_{2i-1,2j}+2U^{h/2}_{2i,2j}+U^{h/2}_{2i+1,2j}).
\end{eqt}
Therefore, the restriction operator for $u$ is given by
\begin{eqt}
R_{h/2,h}^u=\frac{1}{8}W_N^{(1)}\otimes W_N^{(2)}.
\end{eqt}
Namely, we have
\begin{eqt}
\phi(U^{h})=R_{h/2,h}^u\phi(U^{h/2}).
\end{eqt}
Similarly, for $v$ we have
\begin{eqt}
R_{h/2,h}^v=R_{h/2,h}^u, \quad \psi(V^{h})=R_{h/2,h}^v\psi(V^{h/2}).
\end{eqt}
For $p$ we have
\begin{eqt}
R_{h/2,h}^p=\frac{1}{4}W^{(1)}_N\otimes W^{(1)}_N, \quad \phi(P^{h})=R_{h/2,h}^p\phi(P^{h/2}).
\end{eqt}
The restriction of $f,g$ is analogous to that of $u,v$ respectively, i.e., $R_{h/2,h}^f=R_{h/2,h}^g=R_{h/2,h}^u$. We denote
\begin{eqt}
R_{h/2,h}^U=R_{h/2,h}^F=\begin{bmatrix}
R_{h/2,h}^u\\
&R_{h/2,h}^u\\
\end{bmatrix}.
\end{eqt}

\subsubsection{Lifting}
The lifting operation corresponds to the transpose of the restriction operation. We denote $W^{(3)}_N\in \mbR^{2N\times N}$ with $W^{(3)}_N=\left(W^{(1)}_N\right)^T$ and $W^{(4)}_N \in \mbR^{(2N-1)\times(N-1)}$ with $W^{(4)}_N=\left(W^{(2)}_N\right)^T$.

For $u$, we consider the following element-wise lifting operation
\begin{eqt}
\label{u2i2j}
&U^{h/2}_{2i,2j-1}=U^{h/2}_{2i,2j}=U^{h}_{i,j}, \quad U^{h/2}_{2i-1,2j-1}=U^{h/2}_{2i-1,2j}=\frac{1}{2}U^{h}_{i-1,j}+\frac{1}{2}U^{h}_{i,j}.
\end{eqt}
Therefore, we have
\begin{eqt}
R_{h,h/2}^u=\frac{1}{2}W_N^{(3)}\otimes W_N^{(4)}, \quad R_{h,h/2}^u\phi(U^{h})=\phi(U^{h/2}).
\end{eqt}
Similarly, we have $R_{h,h/2}^v=R_{h,h/2}^u$. For $p$, we have the following element-wise lifting operation
\begin{eqt}
P_{2i,2j}^{h/2}=P_{2i,2j-1}^{h/2}=P_{2i-1,2j}^{h/2}=P_{2i-1,2j-1}^{h/2}=P_{i,j}^{h}.
\end{eqt}
Then, we have
\begin{eqt}
R_{h,h/2}^p=W_N^{(3)}\otimes W_N^{(3)}, \quad R_{h,h/2}^p\phi(P^{h})=\phi(P^{h/2}).
\end{eqt}
We denote
\begin{eqt}
R_{h,h/2}^U=\begin{bmatrix}
R_{h,h/2}^u\\
&R_{h,h/2}^u\\
\end{bmatrix}.
\end{eqt}

We shall point out that $R_{h,h/2}^u=4\lp R_{h/2,h}^u\rp^T$ and $R_{h,h/2}^p=4\lp R_{h/2,h}^p\rp^T$. This results from the fact that our multi-grid method is designed for the problem \ref{spp}, which is not rescaled. If we adapted this multi-grid method to the rescaled problem \ref{rspp}, we would have $\tilde R_{h,h/2}^u=\lp\tilde R_{h/2,h}^u\rp^T$ and $\tilde R_{h,h/2}^p= \lp\tilde  R_{h/2,h}^p\rp^T$.
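As a sanity check on the Kronecker constructions above, the following small sketch (assuming dense NumPy matrices; illustrative only) builds $W^{(1)}_N$, $W^{(2)}_N$ and verifies $R_{h,h/2}^u=4\lp R_{h/2,h}^u\rp^T$ numerically:
\begin{verbatim}
import numpy as np

def W1(N):
    # W^(1)_N = I_N kron [1 1], of size N x 2N
    return np.kron(np.eye(N), np.array([[1.0, 1.0]]))

def W2(N):
    # W^(2)_N of size (N-1) x (2N-1), two shifted copies of W^(1)_{N-1}
    W = W1(N - 1)
    z = np.zeros((N - 1, 1))
    return np.hstack([z, W]) + np.hstack([W, z])

N = 4
R_restrict = np.kron(W1(N), W2(N)) / 8.0        # R^u_{h/2,h}
R_lift     = np.kron(W1(N).T, W2(N).T) / 2.0    # R^u_{h,h/2}
assert np.allclose(R_lift, 4.0 * R_restrict.T)
\end{verbatim}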
\subsection{V-Cycle}
\label{ssec:vcyc}
We describe the update routine of the V-Cycle. We have a stopping criterion $\epsilon$ and three parameters $v_1, v_2$ and $L$ for the V-Cycle. We denote the $k$-th grid to be the grid with step size $2^{k-n}$. Suppose $N=2^{n}$ and $L=2^l$. We denote by $\mfA^{(k)}, \mfB^{(k)}$ the matrices $\mfA, \mfB$ on the $k$-th grid and by $\mfF^{(k)}$ the initial residual on the $k$-th grid, i.e., $\mfF^{(0)}=\mfF$.

We start with $k=0$. We set the initial value $\mfU^{(k)}=0, \mfP^{(k)}=0$ and apply DGS $v_1$ times to get an approximate solution $\mfU^{(k)}$ and $\mfP^{(k)}$ to
\begin{eqt}
\label{aubpf}
\mfA^{(k)} \mfU^{(k)}+\mfB^{(k)}\mfP^{(k)}=\mfF^{(k)}, \quad \lp\mfB^{(k)}\rp^T\mfU^{(k)}=0,
\end{eqt}
and record them. Then, we compute the residual $\mfF_k$,
\begin{eqt}
\label{mfFk}
\mfF_k=\mfF^{(k)}-\lp\mfA^{(k)} \mfU^{(k)}+\mfB^{(k)}\mfP^{(k)}\rp.
\end{eqt}
We let $r_h=h^2\mfF_k$ be the residual for the rescaled problem \ref{rspp} and calculate $\|r_h\|_2$. If $\|r_h\|_2<\epsilon$, we stop the algorithm. Otherwise, we restrict $\mfF_k$ to the $(k+1)$-th grid,
\begin{eqt}
\label{fkp1}
\mfF^{(k+1)}=R_{2^kh,2^{k+1}h}^F\mfF_k,
\end{eqt}
replace $k=0$ with $k=1$ and move to Stage 1.

In Stage 1, we start with $k=1$. Given $k<n-l$, we set the initial value $\mfU^{(k)}=0, \mfP^{(k)}=0$ and apply DGS $v_1$ times to get an approximate solution $\mfU^{(k)}$ and $\mfP^{(k)}$ to \ref{aubpf} and record them. Then, we compute the residual $\mfF_k$ by \ref{mfFk}, restrict $\mfF_k$ onto the $(k+1)$-th grid to get $\mfF^{(k+1)}$ by \ref{fkp1}, and replace $k$ with $k+1$ until $k=n-l$.

For $k=n-l$, we set the initial value $\mfU^{(k)}=0, \mfP^{(k)}=0$ and apply DGS $v_1$ times to get an approximate solution $\mfU^{(k)}$ and $\mfP^{(k)}$ to \ref{aubpf}. We let $\mfU^{[n-l]}=\mfU^{(n-l)}$ and $\mfP^{[n-l]}=\mfP^{(n-l)}$. We move forward to Stage 2.

In Stage 2, we start with $k=n-l$. Given $k$, we lift $\mfU^{[k]}$, $\mfP^{[k]}$ to the $(k-1)$-th grid, and update
\begin{eqt}
\mfU_{k-1}=\mfU^{(k-1)}+R_{2^kh,2^{k-1}h}^U\mfU^{[k]},\quad  \mfP_{k-1}=\mfP^{(k-1)}+R_{2^kh,2^{k-1}h}^p\mfP^{[k]},
\end{eqt}
where $\mfU^{(k-1)}, \mfP^{(k-1)}$ are the values recorded in Stage 1. Then, we use $\mfU_{k-1}$ and $\mfP_{k-1}$ as the initial value and run DGS $v_2$ times to get the approximate solution $\mfU^{[k-1]}$ and $\mfP^{[k-1]}$ to \ref{aubpf} with $k$ replaced by $k-1$. Then we replace $k$ by $k-1$ until $k=0$.

With $k=0$, we calculate the residual $\mfF_k$ by \ref{mfFk} and let $r_h=h^2 \mfF_k$. If $\|r_h\|_2<\epsilon$, we stop the algorithm. Otherwise, we restrict $\mfF_k$ to the $1$st grid, i.e., calculate $\mfF^{(1)}$ by \ref{fkp1}, replace $k=0$ by $k=1$ and return to Stage 1.

We will elaborate on the selection of $\epsilon$ in Section \ref{sec:num}.
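For reference, here is a schematic sketch of one sweep of the two-stage routine just described. The helpers \texttt{dgs(level, F, sweeps, init=None)}, \texttt{residual}, \texttt{restrict\_F}, \texttt{lift\_U} and \texttt{lift\_P} are hypothetical callbacks standing in for the operators defined above:
\begin{verbatim}
# One V-Cycle sweep; levels = n - l, F0 is the residual on the finest grid.
def v_cycle(levels, F0, dgs, residual, restrict_F, lift_U, lift_P, v1, v2):
    # Stage 1: descend, smoothing v1 times from zero, restricting residuals
    F, U, P = [F0], [], []
    for k in range(levels + 1):
        Uk, Pk = dgs(k, F[k], v1)
        U.append(Uk); P.append(Pk)
        if k < levels:
            F.append(restrict_F(k, residual(k, F[k], Uk, Pk)))
    # Stage 2: ascend, lifting the correction and smoothing v2 times
    Uc, Pc = U[levels], P[levels]
    for k in range(levels, 0, -1):
        init = (U[k - 1] + lift_U(k, Uc), P[k - 1] + lift_P(k, Pc))
        Uc, Pc = dgs(k - 1, F[k - 1], v2, init)
    return Uc, Pc
\end{verbatim}
The outer loop, which checks $\|r_h\|_2<\epsilon$ on the finest grid and re-enters Stage 1, wraps around this sweep.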
\section{V-Cycle Multi-Grid Based on Uzawa (Problem 2, 3, 4)}
\label{sec:prob2}
The routine of the V-Cycle is the same as the one in Section \ref{ssec:vcyc}.
\subsection{Uzawa Iteration}
We introduce the routine of the Uzawa Iteration. Suppose we have the initial value $\mfP_0$. We start from $k=0$. Given $\mfP_k$, we solve for $\mfU_{k+1}$ from
\begin{eqt}
\label{aufbp}
\mfA\mfU_{k+1}=\mfF-\mfB\mfP_k.
\end{eqt}
Then, we update the pressure
\begin{eqt}
\mfP_{k+1}=\mfP_k+\alpha \mfB^T\mfU_{k+1},
\end{eqt}
where $\alpha$ is a parameter. Regarding the selection of $\alpha$, note that
\begin{eqt}
\mfP_{k+1}=(I-\alpha \mfB^T\mfA^{-1}\mfB)\mfP_k+\alpha\mfB^T\mfA^{-1}\mfF.
\end{eqt}
We prove the following proposition in the notes:
\begin{proposition}
$\arg\min\limits_{\alpha>0} \rho(I-\alpha \mfB^T\mfA^{-1}\mfB)$ is given by
\begin{eqt}
\label{alpha}
\alpha^*=\frac{2}{\lambda_{min}(\mfB^T\mfA^{-1}\mfB)+\lambda_{max}(\mfB^T\mfA^{-1}\mfB)}.
\end{eqt}
\end{proposition}
\begin{proof}
We denote $\mfC =  \mfB^T\mfA^{-1}\mfB$. Because $\mfA$ is positive definite, $\mfC$ is positive semi-definite. Suppose $\lambda$ is an eigenvalue of $I-\alpha\mfC$ and $\mfx\in\mbR^{N^2}$ is a corresponding eigenvector, i.e.
\begin{eqt}
\lambda\mfx = (I-\alpha\mfC)\mfx = \mfx-\alpha\mfC\mfx.
\end{eqt}
As a result, $\mfx$ is an eigenvector of $\mfC$ and we have
\begin{eqt}
\mfC\mfx=\frac{1-\lambda}{\alpha}\mfx.
\end{eqt}
Then $\frac{1-\lambda}{\alpha}$ is an eigenvalue of $\mfC$. Suppose $\frac{1-\lambda}{\alpha}=\beta$; then $\lambda = 1-\alpha\beta$. Therefore, we have
\begin{eqt}
\arg\min\limits_{\alpha>0} \rho(I-\alpha\mfC) = \arg\min\limits_{\alpha>0} \max\limits_{\lambda_{min}(\mfC)\leq\beta\leq\lambda_{max}(\mfC)}|1-\alpha\beta|.
\end{eqt}
Since $|1-\alpha\beta|$ attains its maximum at the endpoints of the interval, the minimizer equalizes $|1-\alpha\lambda_{min}(\mfC)|$ and $|1-\alpha\lambda_{max}(\mfC)|$; directly by the properties of Chebyshev polynomials, \ref{alpha} holds. Q.E.D.
\end{proof}

We shall point out that $\mfB^T\mfA^{-1}\mfB$ is singular. Note that $\begin{bmatrix}\mfA&\mfB\\\mfB^T&0\end{bmatrix}$ is singular, because both $\begin{bmatrix}\mfU\\\mfP\end{bmatrix}$ and $\begin{bmatrix}\mfU\\\mfP+c\mathbf{1}\end{bmatrix}$ are solutions to \ref{spp}, where $c$ is any real number. Therefore, by the definition of the Schur complement, $\mfB^T\mfA^{-1}\mfB$ is singular and $\lambda_{min}(\mfB^T\mfA^{-1}\mfB)=0$. Our later analysis tells us that the eigenvalues of $\mfB^T\mfA^{-1}\mfB$ consist only of $0$ and $1$. Therefore, the optimal choice for $\alpha$ by \ref{alpha} is $2$. Nevertheless, $\alpha=1$ achieves the best performance. We explain this in the next subsubsection.

\subsubsection{Convergence of Uzawa Iteration}
We observe that $\mfB^T\mfA^{-1}\mfB=\lp B_N^{(1)}\rp^TA_N^{-1}B_N^{(1)}+\lp B_N^{(2)}\rp^TA_N^{-1}B_N^{(2)}$. Notice that $A_N=S_{N}^{(2)}\otimes I_{N-1}+I_N\otimes T_{N-1}$. We know that $T_{N-1}$ has spectrum $\{\eta_m\}_{m=1}^{N-1}$, where $\eta_m=2\lp1-\cos\frac{m\pi}{N}\rp=4\sin^2\frac{m\pi}{2N}$. The corresponding eigenvector for $\eta_m$ is
\begin{eqt}
t_m=\left[\sin\frac{m\pi}{N}, \sin\frac{2m\pi}{N}, \dots, \sin\frac{(N-1)m\pi}{N}\right]^T.
\end{eqt}
$S_{N}^{(2)}$ has spectrum $\{\xi_n\}_{n=1}^{N}$, where $\xi_n=2\lp1-\cos \frac{(n-1)\pi}{N}\rp=4\sin^2\frac{(n-1)\pi}{2N}$. The corresponding eigenvector for $\xi_n$ is
\begin{eqt}
s_n = \left[ \cos\frac{(n-1)\pi}{2N}, \cos\frac{3(n-1)\pi}{2N}, \dots, \cos\frac{(2N-1)(n-1)\pi}{2N}  \right]^T.
\end{eqt}
Therefore, $A_N$ has spectrum $\{\lambda_{m,n}\}_{1\leq m\leq N-1, 1\leq n\leq N}$, where $\lambda_{m,n}=\eta_m+\xi_n=4\sin^2\frac{m\pi}{2N}+4\sin^2\frac{(n-1)\pi}{2N}$. The corresponding eigenvector for $\lambda_{m,n}$ is $s_n\otimes t_m$.
We can verify that
\begin{eqt}
&\lp S_{N}^{(2)}\otimes I_{N-1}\rp(s_n\otimes t_m)=\lp S_{N}^{(2)}s_n \rp \otimes t_m=\xi_n s_n\otimes t_m,\\
&\lp I_N\otimes T_{N-1}\rp(s_n\otimes t_m)=s_n\otimes\lp  T_{N-1} t_m \rp=\eta_m s_n\otimes t_m.\\
\end{eqt}
We denote $a^{(m,n)}=s_n\otimes t_m/l_{m,n}$, where
\begin{eqt}
l_{m,n}=\|s_n\otimes t_m\|_2=\left\{\begin{aligned}
&\frac{N}{2}, \quad &n>1\\
&\frac{N}{\sqrt{2}}, \quad &n=1\\
\end{aligned}\right.
\end{eqt}
Therefore, we have the spectral decomposition of $A_N^{-1}$ as follows:
\begin{eqt}
A_N^{-1}=\sum_{m=1}^{N-1}\sum_{n=1}^{N}\lambda_{m,n}^{-1}a^{(m,n)}\lp a^{(m,n)}\rp^T.
\end{eqt}
Notice that $a^{(m,n)}_{i,j}=l_{m,n}^{-1}\sin\frac{m\pi i}{N}\cos\frac{(n-1)\pi(2j-1)}{2N}$. Therefore, based on \ref{bn1t} and \ref{bn2t} in Section \ref{sec:dotb},
\begin{eqt}
(B_N^{(1)})^Ta^{(m,n)}=&l_{m,n}^{-1}\sum_{j=1}^Ne_N^{(j)} \otimes \begin{bmatrix}
a^{(m,n)}_{0,j}-a^{(m,n)}_{1,j}\\
\vdots\\
a^{(m,n)}_{N-1,j}-a^{(m,n)}_{N,j}
\end{bmatrix} \\
=&l_{m,n}^{-1}\sum_{j=1}^N\cos\frac{(n-1)\pi(2j-1)}{2N}e_N^{(j)}\otimes\begin{bmatrix}
\sin\frac{0\cdot m\pi}{N}-\sin\frac{m\pi}{N}\\
\vdots\\
\sin\frac{m\pi (N-1)}{N}-\sin\frac{m\pi N}{N}
\end{bmatrix}\\
=&-2\sin\frac{m\pi}{2N}l_{m,n}^{-1}s_n\otimes \tilde t_m,
\end{eqt}
\begin{eqt}
(B_N^{(2)})^Ta^{(m,n)}=&l_{m,n}^{-1}\sum_{j=1}^N\begin{bmatrix}
a^{(m,n)}_{0,j}-a^{(m,n)}_{1,j}\\
\vdots\\
a^{(m,n)}_{N-1,j}-a^{(m,n)}_{N,j}
\end{bmatrix}  \otimes e_N^{(j)} \\
=&l_{m,n}^{-1}\sum_{j=1}^N\begin{bmatrix}
\sin\frac{0\cdot m\pi}{N}-\sin\frac{m\pi}{N}\\
\vdots\\
\sin\frac{m\pi (N-1)}{N}-\sin\frac{m\pi N}{N}
\end{bmatrix}\otimes \cos\frac{(n-1)\pi(2j-1)}{2N}e_N^{(j)}\\
=&-2\sin\frac{m\pi}{2N}l_{m,n}^{-1}\tilde t_m\otimes s_n,
\end{eqt}
where $\tilde t_m\in \mbR^{N}$ has entries
\begin{eqt}\tilde t_m=\begin{bmatrix}
\cos \frac{m\pi}{2N}\\
\cos \frac{3m\pi}{2N}\\
\vdots\\
\cos \frac{m\pi(2N-1)}{2N}\\
\end{bmatrix}=s_{m+1}.
\end{eqt}
We denote $\delta_{m,n}=4\sin^2\frac{m\pi}{2N}\lambda_{m,n+1}^{-1}=\frac{\sin^2\frac{m\pi}{2N}}{\sin^2\frac{m\pi}{2N}+\sin^2\frac{n\pi}{2N}}$ and
\begin{eqt}S^{(m,n)}=\lp s_{m+1}\otimes s_{n+1}\rp\lp s_{m+1}\otimes s_{n+1}\rp^T=\lp s_{m+1}s_{m+1}^T\rp\otimes \lp s_{n+1} s_{n+1}^T\rp.
\end{eqt}
Note that
\begin{eqt}
&\lp\sum_{n=1}^{N-1}s_{n+1} s_{n+1}^T \rp_{i,j}=\sum_{n=1}^{N-1} \cos\frac{n\pi(2i-1)}{2N}\cos\frac{n\pi(2j-1)}{2N}\\
=&\frac{1}{2}\sum_{n=1}^{N-1}\lp\cos\frac{(i+j-1)n\pi}{N}+\cos\frac{(i-j)n\pi}{N}\rp.
\end{eqt}
Because $N$ is even, we have
\begin{equation}
\sum_{n=1}^{N-1}\cos\frac{kn\pi}{N}=\left\{\begin{aligned}
&N-1, \quad &k=0\\
&0, & k\neq 0 ,\ 2\nmid k\\
&-1, & k\neq 0 ,\ 2\mid k\\
\end{aligned}\right.
\end{equation}
Therefore,
\begin{eqt}\sum_{n=1}^{N-1}s_{n+1} s_{n+1}^T=\frac{N}{2}I_N-\frac{1}{2}E_N,
\end{eqt}
where $E_N\in \mbR^{N\times N}$ is the matrix with all entries equal to $1$.
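Before assembling $\mfB^T\mfA^{-1}\mfB$, we record the elementary identity that collapses the double sum below:
\begin{eqt}
\delta_{m,n}+\delta_{n,m}=\frac{\sin^2\frac{m\pi}{2N}}{\sin^2\frac{m\pi}{2N}+\sin^2\frac{n\pi}{2N}}+\frac{\sin^2\frac{n\pi}{2N}}{\sin^2\frac{n\pi}{2N}+\sin^2\frac{m\pi}{2N}}=1.
\end{eqt}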
Then, we have
\begin{eqt}
&\mfB^T\mfA^{-1}\mfB=\lp B_N^{(1)}\rp^TA_N^{-1}B_N^{(1)}+\lp B_N^{(2)}\rp^TA_N^{-1}B_N^{(2)}\\
=&\sum_{m=1}^{N-1}\sum_{n=1}^{N}4\sin^2\frac{m\pi}{2N}\lambda_{m,n}^{-1}l_{m,n}^{-2}\lp\lp s_{m+1}s_{m+1}^T\rp\otimes \lp s_n s_n^T \rp+\lp s_{n}s_{n}^T\rp\otimes \lp s_{m+1} s_{m+1}^T \rp\rp \\
=&\sum_{m=1}^{N-1}\sum_{n=1}^{N-1}\frac{4\delta_{m,n}}{N^2}\lp S^{(m,n)}+S^{(n,m)}\rp+\sum_{m=1}^{N-1}\frac{2}{N^2}\lp S^{(0,m)}+S^{(m,0)}\rp\\
=&\frac{4}{N^2}\sum_{m=1}^{N-1}\sum_{n=1}^{N-1}\lp \delta_{m,n}+\delta_{n,m}\rp S^{(m,n)}+\frac{2}{N^2}\sum_{m=1}^{N-1}\lp S^{(0,m)}+S^{(m,0)}\rp\\
=&\frac{4}{N^2}\sum_{m=1}^{N-1}\sum_{n=1}^{N-1}S^{(m,n)}+\frac{2}{N^2}\sum_{m=1}^{N-1}\lp S^{(0,m)}+S^{(m,0)}\rp\\
=&\frac{1}{N^2}(NI_N-E_N)\otimes(NI_N-E_N)+\frac{1}{N^2}E_N\otimes (NI_N-E_N)+\frac{1}{N^2}(NI_N-E_N)\otimes E_N\\
=&I_N\otimes I_N-\frac{1}{N^2}E_N\otimes E_N=I_{N^2}-\lp\frac{s_1\otimes s_1}{N}\rp\lp\frac{s_1\otimes s_1}{N}\rp^T.
\end{eqt}
Let us denote $\mfv=\frac{s_1\otimes s_1}{N}$. Then $\mfv^T\mfv=\|\mfv\|_2^2=\frac{N^2}{N^2}=1$. As a result,
\begin{eqt}
\label{lprp}
&\lp\mfB^T\mfA^{-1}\mfB\rp^2 = \lp I_{N^2}-\mfv\mfv^T\rp^2=I_{N^2}-2\mfv\mfv^T+\mfv\mfv^T\mfv\mfv^T=I_{N^2}-\mfv\mfv^T=\mfB^T\mfA^{-1}\mfB.
\end{eqt}
\ref{lprp} tells us that $\mfB^T\mfA^{-1}\mfB$ is a projection matrix. Based on \ref{lprp}, we have the following proposition:

\begin{proposition}
\label{uzawa_conv}
We take $\alpha = 1$. If $\mfF\in\mathrm{range}(\mfB)$, then the exact Uzawa Iteration converges in at most 2 iterations. Namely, if $\mfP_0=0$, $\mfP_1 = \mfB^T\mfA^{-1}\mfF$, $\mfU_2=\mfA^{-1}(\mfF-\mfB\mfP_1)$ and $\mfP_2 = (I-\mfB^T\mfA^{-1}\mfB)\mfP_1+\mfB^T\mfA^{-1}\mfF$, then $\mfU_2$ and $\mfP_2$ are the exact solution to \ref{spp}.
\end{proposition}
\begin{proof}
Since $\mfF\in\mathrm{range}(\mfB)$, we can write $\mfF=\mfB\mfM$ for some $\mfM$. By \ref{lprp} we have $\lp\mfB^T\mfA^{-1}\mfB\rp^2=\mfB^T\mfA^{-1}\mfB$. Then
\begin{eqt}
\mfB^T\mfU_2 = \mfB^T\mfA^{-1}\mfF-\mfB^T\mfA^{-1}\mfB\mfP_1=\mfB^T\mfA^{-1}\mfB\mfM-\lp\mfB^T\mfA^{-1}\mfB\rp^2\mfM=0,
\end{eqt}
\begin{eqt}
\mfA\mfU_2+\mfB\mfP_2&=\mfF-\mfB\mfP_1+\mfB(2I-\mfB^T\mfA^{-1}\mfB)\mfP_1=\mfF+\mfB(I-\mfB^T\mfA^{-1}\mfB)\mfP_1\\
&=\mfF+\mfB(I-\mfB^T\mfA^{-1}\mfB)\mfB^T\mfA^{-1}\mfF\\
&=\mfB\mfM+\mfB(I-\mfB^T\mfA^{-1}\mfB)\mfB^T\mfA^{-1}\mfB\mfM\\
&=\mfB\mfM+\mfB\lp\mfB^T\mfA^{-1}\mfB-\lp\mfB^T\mfA^{-1}\mfB\rp^2\rp\mfM=\mfB\mfM=\mfF.
\end{eqt}
Hence $\mfU_2$ and $\mfP_2$ solve \ref{spp}. Q.E.D.
\end{proof}
\subsection{Inexact Uzawa Iteration}
\label{ssec:num_iui}
The difference between the Inexact Uzawa Iteration and the Uzawa Iteration is that we do not solve \ref{aufbp} exactly. Instead, we find an approximate solution $\tilde \mfU_{k+1}$ such that
\begin{eqt}
\|\mfA\tilde\mfU_{k+1}-(\mfF-\mfB\mfP_k)\|_2\leq\tau,
\end{eqt}
where $\tau$ is a parameter of the Inexact Uzawa Iteration. We take $\mfU_k$ as the initial value and apply the Conjugate Gradient method to obtain $\tilde\mfU_{k+1}$.

\subsection{Inexact Uzawa Iteration Based On V-Cycle multi-grid}
Here we introduce another approach to solve \ref{spp}. We again use the Inexact Uzawa Iteration, but we solve the subproblem approximately by V-Cycle multi-grid instead of CG as in Section \ref{ssec:num_iui}. We choose Gauss-Seidel Iteration as the smoother for the V-Cycle multi-grid.
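A minimal sketch of the CG-based variant, assuming $\mfA$, $\mfB$ and $\mfF$ are available as SciPy-compatible arrays or operators (illustrative, not our actual implementation):
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import cg

def inexact_uzawa(A, B, F, alpha=1.0, tau=1e-8, iters=2):
    U = np.zeros(A.shape[0])
    P = np.zeros(B.shape[1])
    for _ in range(iters):
        # warm-started inner CG solve of A U = F - B P (note: tol here is
        # a relative tolerance, while the report bounds the absolute residual)
        U, info = cg(A, F - B @ P, x0=U, tol=tau)
        P = P + alpha * (B.T @ U)
    return U, P
\end{verbatim}
With $\alpha=1$ and $\mfF\in\mathrm{range}(\mfB)$, two outer iterations suffice by Proposition \ref{uzawa_conv}.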
\section{Numerical results}
\label{sec:num}
\subsection{DGS Iteration (Problem 1)}
We apply the V-Cycle multi-grid method to solve the saddle point problem \ref{spp}.
\begin{itemize}
\item `DGS-s' denotes our implementation of DGS from the notes; `s' abbreviates `sequential'.
\item `DGS-p' denotes our implementation of DGS from \cite{mmfts}; `p' abbreviates `parallel'.
\end{itemize}

We evaluate the selection of the parameters $L, v_1$ and $v_2$ by three criteria: `time' denotes CPU time; `VC' denotes the number of V-Cycles; $e_N$ is the error with respect to the true solution, i.e.,
\begin{eqt}
e_N = h \left( \sum_{j=1}^{N} \sum_{i=1}^{N-1} \left| u_{i,j-\oot} - u\lp x_i, y_{j-\oot} \rp \right|^2 + \sum_{j=1}^{N-1} \sum_{i=1}^{N} \left| v_{i-\oot,j} - v\lp x_{i-\oot}, y_j \rp \right|^2 \right)^{1/2},
\end{eqt}
where $u(x,y)$ and $v(x,y)$ are the true solution \ref{true_sol} to the Stokes equation \ref{stokes}.

In our numerical experiments, for simplicity, we take $v_1=v_2=v=[10, 20, 40, 80, 160]$. For $N=64, 128, 256$, we set the stopping criterion $\epsilon=10^{-8}$ and the maximum number of V-Cycles to be $500$.

\input{project/DGS-64}
\input{project/DGS-128}
\input{project/DGS-256}

From Tables \ref{DGS-64}, \ref{DGS-128} and \ref{DGS-256}, we observe that DGS-s takes more iterations and longer time than DGS-p to reach $\|r_h\|_2<\epsilon$, and DGS-s is more likely to exceed the maximum number of V-Cycles, especially when $v$ is small or $L$ is large. Nevertheless, DGS-s achieves a much lower $e_N$ than DGS-p. We see that sometimes, with the same parameters, $\|r_h\|_2$ in DGS-s is much larger than in DGS-p, yet $e_N$ in DGS-s is much smaller. One possible explanation is that the solution from DGS-p more readily satisfies
\begin{eqt}
\label{mainstokes}
\mfA \mfU+\mfB\mfP=\mfF,
\end{eqt}
while the solution from DGS-s more readily satisfies the incompressibility condition. Our stopping criterion halts when the residual of \ref{mainstokes} is small. Therefore, DGS-s finds a solution that is much closer to the true solution of \ref{stokes}.

For $N = 512, 1024, 2048$, we set the stopping criterion $\epsilon =10^{-6}$ and the maximum number of V-Cycles to be $100$. For $N = 512, 1024$, we take $v = [20, 40, 80, 160]$. For $N=2048$, we take $v = [40, 80, 160]$. The result of $N=512$ is shown in Table \ref{DGS-512}; the result of $N=1024$ is shown in Table \ref{DGS-1024}; the result of $N=2048$ is shown in Table \ref{DGS-2048}.

\input{project/DGS-512}
\input{project/DGS-1024}
\input{project/DGS-2048}

We find that the numerical results are not ideal. DGS-p does not reach the required precision of $\|r_h\|_2$ within the maximum number of V-Cycles, and DGS-s does not find a solution that is close to the true solution. In Appendix \ref{appendix}, we modify the V-Cycle multi-grid based on DGS. Our modification leads to a great improvement in time and $e_N$ for both DGS-p and DGS-s.

\subsection{Uzawa Iteration (Problem 2)}
We apply the V-Cycle multi-grid method to solve the saddle point problem \ref{spp}. We use the Uzawa Iteration as the smoother instead of DGS.

We assume that $v_1=v_2=v$.
For $N=64, 128$, we set the stopping criterion $\epsilon = 10^{-8}$. We take $\alpha = [0.5, 0.75, 1, 1.5, 2]$. We set the maximum number of V-Cycles to be $100$. We take $L=[4, 16]$.
The result of $N=64$ is shown in Table \ref{uzawa-64}; the result of $N=128$ is shown in Table \ref{uzawa-128}. From Table \ref{uzawa-64} and Table \ref{uzawa-128}, we observe that $\alpha = N^2$ is the best choice of $\alpha$: with $\alpha = N^2$, after 2 Uzawa Iterations, $\|r_h\|_2$ already drops below $10^{-8}$. If we set $\alpha=2$, the performance is terrible, although $\alpha=2$ is the optimal choice by \ref{alpha}. We also observe that the Uzawa Iteration performs better as $\alpha$ gets closer to $1$. This is consistent with our proof.

\input{project/uzawa-64}
\input{project/uzawa-128}

For $N=256, 512$, we set the maximum number of V-Cycles to be $20$. We take $L=[4, 16, 64]$. The result of $N=256$ is shown in Tables \ref{uzawa-256-1} and \ref{uzawa-256-2}; the result of $N=512$ is shown in Tables \ref{uzawa-512-1} and \ref{uzawa-512-2}. With $\alpha=1$, the Uzawa Iteration converges within 2 iterations. Therefore, the number of V-Cycles is $0$.
\input{project/uzawa-256-1}
\input{project/uzawa-256-2}
\input{project/uzawa-512-1}
\input{project/uzawa-512-2}

For $N=1024, 2048$, we set the stopping criterion $\epsilon = 10^{-6}$ and set the maximum number of V-Cycles to be $10$. We take $L=[4, 16, 64]$. The result of $N=1024$ is shown in Tables \ref{uzawa-1024-1} and \ref{uzawa-1024-2}; the result of $N=2048$ is shown in Tables \ref{uzawa-2048-1} and \ref{uzawa-2048-2}. This numerically verifies Proposition \ref{uzawa_conv}.
\input{project/uzawa-1024-1}
\input{project/uzawa-1024-2}
\input{project/uzawa-2048-1}
\input{project/uzawa-2048-2}

\subsection{Inexact Uzawa Iteration (Problem 3)}
We apply the V-Cycle multi-grid method to solve the saddle point problem \ref{spp}. We use the Inexact Uzawa Iteration as the smoother instead of DGS.

We assume that $v_1=v_2=v$. For $N=64, 128$, we set the stopping criterion $\epsilon = 10^{-8}$. Based on the result in Section \ref{ssec:num_iui}, we fix $\alpha = 1$. We take $\tau=[10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}]$. We set the maximum number of V-Cycles to be $100$. The result of $N=64$ is shown in Table \ref{ieuzawa-64}; the result of $N=128$ is shown in Table \ref{ieuzawa-128}. We observe that even with an inexact solution to the subproblem \ref{aufbp}, Inexact Uzawa converges remarkably fast with $\alpha = 1$. This is consistent with our proof.

\input{project/ieuzawa-64}
\input{project/ieuzawa-128}

For $N=256, 512$, we set the stopping criterion $\epsilon = 10^{-7}$ and set the maximum number of V-Cycles to be $20$. We take $\tau=[10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}]$ and $L=[4,16,64]$. The result of $N=256$ is shown in Tables \ref{ieuzawa-256-1} and \ref{ieuzawa-256-2}; the result of $N=512$ is shown in Tables \ref{ieuzawa-512-1} and \ref{ieuzawa-512-2}.
\input{project/ieuzawa-256-1}
\input{project/ieuzawa-256-2}
\input{project/ieuzawa-512-1}
\input{project/ieuzawa-512-2}

For $N=1024, 2048$, we set the stopping criterion $\epsilon = 10^{-6}$ and set the maximum number of V-Cycles to be $10$. We take $\tau=[10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}]$ and $L=[4, 16, 64, 256]$.
The result of $N=1024$ is shown in Table \\ref{ieuzawa-1024-1} and \\ref{ieuzawa-1024-2}; the result of $N=2048$ is shown in Table \\ref{ieuzawa-2048-1} and \\ref{ieuzawa-2048-2}.\n\n\\input{project/ieuzawa-1024-1}\n\\input{project/ieuzawa-1024-2}\n\\input{project/ieuzawa-2048-1}\n\\input{project/ieuzawa-2048-2}\n\nIn summary, with $\\alpha=1$, even with inexact solution to the subproblem, Inexact Uzawa converges in 2 iterations. Namely, Inexact Uzawa does not use the structure of V-Cycle multi-grid because VCs in the numerical results are $0$. We shall point out that forcing Inexact Uzawa to use the structure of V-Cycle multi-grid will greatly deteriorate the performance. This can be verified in Subsection \\ref{ssec:iuibo}.\n\n\\subsection{Inexact Uzawa Iteration Based On V-Cycle multi-grid (Problem 4)}\n\\label{ssec:iuibo}\nWe apply Inexact Uzawa Iteration Based On V-Cycle multi-grid to solve the saddle point problem \\ref{spp}. `iter' denotes the number of outer loop.\n\nWe assume that $v_1=v_2=v$. For $N=64, 128$, we set the stopping criterion $\\epsilon = 10^{-8}$. Based on the result in Section \\ref{ssec:num_iui}, we fix $\\alpha = 1$. We take $\\tau=[10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}]$. We set the maximum number of V-Cycle to be $100$ and the maximum number of outer loop to be $100$. The result of $N=64$ is shown in Table \\ref{ieuzawaVC-64}; the result of $N=128$ is shown in Table \\ref{ieuzawaVC-128}. We observe that if we set $\\tau$ too big, i.e., $\\tau\\leq 10^{-7}$, Inexact Uzawa will converge slowly. But if we select an appropriate $\\tau$, Inexact Uzawa converges in an incredibly fast speed, namely 2 iteration. This is consistent with our proof.\n\n\\input{project/ieuzawaVC-64}\n\\input{project/ieuzawaVC-128}\n\nFor $N=256, 512$, we set the stopping criterion $\\epsilon = 10^{-7}$. We take $\\tau=[10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}]$. The result of $N=256$ is shown in Table \\ref{ieuzawaVC-256-1} and \\ref{ieuzawaVC-256-2}; the result of $N=512$ is shown in Table \\ref{ieuzawaVC-512-1} and \\ref{ieuzawaVC-512-2}.\n\\input{project/ieuzawaVC-256-1}\n\\input{project/ieuzawaVC-256-2}\n\\input{project/ieuzawaVC-512-1}\n\\input{project/ieuzawaVC-512-2}\n\nFor $N=1024, 2048$, we set the stopping criterion $\\epsilon = 10^{-6}$. We take $\\tau=[10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}]$. The result of $N=1024$ is shown in Table \\ref{ieuzawaVC-1024-1} and \\ref{ieuzawaVC-1024-2}; the result of $N=2048$ is shown in Table \\ref{ieuzawaVC-2048-1} and \\ref{ieuzawaVC-2048-2}.\n\n\\input{project/ieuzawaVC-1024-1}\n\\input{project/ieuzawaVC-1024-2}\n\\input{project/ieuzawaVC-2048-1}\n\\input{project/ieuzawaVC-2048-2}\n\nWith a proper choice of $L$ and $\\tau$, i.e. $L=4, \\tau=10^{-8}$, Ineaxt Uzawa based on V-Cycle multi-grid can converge in 2 iterations.\n\n\\section{Conclusion}\nOur work for numerical solutions to Stokes equation ends up here, but new ideas for improvement and continuous passion for research will never fade. We give a brief summary of what we have done and an outlook of what we can improve.\n\nWe try to solve Stokes equation numerically. Through MAC scheme, we formulate a saddle point problem. However, the equation itself is undetermined. Therefore, we introduce V-Cycle multi-grid method and choose Distributive Gauss Seidel Iteration and Uzawa Iteration as smoother. For DGS, we find that whether to update in parallel or in sequence and the management of edge units and point units influence the convergence speed of the algorithm. 
We also tried other selections of restriction and lifting operators, but they affect the convergence speed only slightly. The numerical results of DGS are not ideal. In Appendix \ref{appendix}, we show that projecting the residual of $\mfB^T\mfU=0$ to the coarse grid greatly improves the performance of DGS, especially DGS-s. With this modification of the V-Cycle, DGS-s outperforms Uzawa and Inexact Uzawa.

For Uzawa, we observe and rigorously prove that Uzawa converges in at most $2$ iterations. This is because $\mfB^T\mfA^{-1}\mfB$ is a projection matrix, which can be proved through spectral analysis. Our numerical experiments are consistent with this proposition. Further numerical results show that even with an inexact solution to the subproblem, Inexact Uzawa still converges in at most $2$ iterations. That means using Uzawa as a smoother in the V-Cycle is unnecessary: Uzawa itself solves the saddle point problem efficiently. Specifically, Uzawa turns the original underdetermined problem into a sequence of well-posed subproblems (because $\mfA$ is positive definite). `$\textbackslash$' in MATLAB and the Conjugate Gradient method may not be the ideal solvers for the subproblem. We could enhance the speed of Inexact Uzawa by choosing an efficient solver for positive definite problems, e.g. spectral methods.


\appendix
\section{Appendix: Modification of V-Cycle based on DGS}
\label{appendix}
From the numerical experiments, we find that the V-Cycle based on DGS does not perform as well as we expected. Therefore, we modify the V-Cycle multi-grid slightly. Note that we have not used the restriction operator for $P$. We project the residual of $\mfB^T\mfU=0$ to the coarse grid. Specifically, for the saddle point problem on the coarse grid, we do not solve \ref{spp}. Instead, we solve
\begin{equation}
\label{sppm}
\begin{bmatrix}
\mfA&\mfB\\\mfB^T&0
\end{bmatrix}\begin{bmatrix}
\mfU\\\mfP
\end{bmatrix}=\begin{bmatrix}
\mfF\\\mfR
\end{bmatrix},
\end{equation}
where $\mfR$ is the residual of $\mfB^T\mfU=0$ from the finer grid. For DGS-p and DGS-s, we only need to add $R_{i,j}$ to \ref{divequ}. The calculation of $r_h$ in the V-Cycle becomes
\begin{eqt}
r_h=h^2\begin{bmatrix}\mfF^{(k)}-\lp\mfA^{(k)} \mfU^{(k)}+\mfB^{(k)}\mfP^{(k)}\rp\\
-\lp\mfB^{(k)}\rp^T\mfU^{(k)}\end{bmatrix},
\end{eqt}
where $k=0$. The numerical experiments show that this modification achieves a great improvement.
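A one-line sketch of this modified residual (assuming dense NumPy arrays; illustrative only):
\begin{verbatim}
import numpy as np

# Modified fine-grid residual: the divergence residual -B^T U is kept
# and carried to the coarse grid instead of being dropped.
def modified_residual(A, B, F, U, P, h):
    r_momentum = F - (A @ U + B @ P)
    r_div = -(B.T @ U)
    return h**2 * np.concatenate([r_momentum, r_div])
\end{verbatim}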
The result of $N=512$ is shown in Table \\ref{DGS_mod-512}; the result of $N=1024$ is shown in Table \\ref{DGS_mod-1024}; the result of $N=2048$ is shown in Table \\ref{DGS_mod-2048}.\n\n\\input{project/DGS_mod-512}\n\\input{project/DGS_mod-1024}\n\\input{project/DGS_mod-2048}\n\nTaking $L=4$ and $v=10,20,40$, DGS-s converges fast and converges to a precise solution. Notably, DGS-s is faster than Uzawa and Inexact Uzawa. This is because although Uzawa can converge in 2 iterations, solving the subproblem in Uzawa is quite time-costly. Also, because Uzawa can converge in 2 iterations, the modification of the V-Cycle multi-grid method does not affect the performance of Uzawa and Inexact Uzawa.\n\n\\printbibliography\n\\end{document}", "meta": {"hexsha": "9a411f2374c18440d428d4acfab57b6439d8b249", "size": 52224, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex/Project.tex", "max_stars_repo_name": "YiifeiWang/Numerical-solutions-to-Stokes-Equation", "max_stars_repo_head_hexsha": "ab0af1779ae5a43951a125ada81ecc26e3512f64", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Latex/Project.tex", "max_issues_repo_name": "YiifeiWang/Numerical-solutions-to-Stokes-Equation", "max_issues_repo_head_hexsha": "ab0af1779ae5a43951a125ada81ecc26e3512f64", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex/Project.tex", "max_forks_repo_name": "YiifeiWang/Numerical-solutions-to-Stokes-Equation", "max_forks_repo_head_hexsha": "ab0af1779ae5a43951a125ada81ecc26e3512f64", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.4077578051, "max_line_length": 917, "alphanum_fraction": 0.6408739277, "num_tokens": 22667, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.800691997339971, "lm_q1q2_score": 0.5651164236314635}}
{"text": "\n\n    \\filetitle{system}{System matrices for unsolved model}{model/system}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\n[A,B,C,D,F,G,H,J,List,Nf] = system(M)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{M} {[} model {]} - Model object whose system matrices will be\n  returned.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{A}, \\texttt{B}, \\texttt{C}, \\texttt{D}, \\texttt{F},\n  \\texttt{G}, \\texttt{H} ,\\texttt{J} {[} numeric {]} - Matrices\n  describing the unsolved system, see Description.\n\\item\n  \\texttt{List} {[} cell {]} - Lists of measurement variables,\n  transition variables includint their auxiliary lags and leads, and\n  shocks as they appear in the rows and columns of the system matrices.\n\\item\n  \\texttt{Nf} {[} numeric {]} - Number of non-predetermined\n  (forward-looking) transition variables (multiplied by the first\n  \\texttt{Nf} columns of matrices \\texttt{A} and \\texttt{B}).\n\\end{itemize}\n\n\\paragraph{Options}\\label{options}\n\n\\begin{itemize}\n\\item\n  \\texttt{'linear='} {[} \\emph{\\texttt{@auto}} \\textbar{} \\texttt{true}\n  \\textbar{} \\texttt{false} {]} - Compute the model using a linear\n  approach, i.e.~differentiating around zero and not the currently\n  assigned steady state.\n\\item\n  \\texttt{'select='} {[} \\emph{\\texttt{true}} \\textbar{} \\texttt{false}\n  {]} - Automatically detect which equations need to be\n  re-differentiated based on parameter changes from the last time the\n  system matrices were calculated.\n\\item\n  \\texttt{'sparse='} {[} \\texttt{true} \\textbar{} \\emph{\\texttt{false}}\n  {]} - Return matrices \\texttt{A}, \\texttt{B}, \\texttt{D}, \\texttt{F},\n  \\texttt{G}, and \\texttt{J} as sparse matrices; can be set to\n  \\texttt{true} only in models with one parameterization.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nThe system before the model is solved has the following form:\n\n\\begin{verbatim}\nA E[xf;xb] + B [xf(-1);xb(-1)] + C + D e = 0\n\nF y + G xb + H + J e = 0\n\\end{verbatim}\n\nwhere \\texttt{E} is a conditional expectations operator, \\texttt{xf} is\na vector of non-predetermined (forward-looking) transition variables,\n\\texttt{xb} is a vector of predetermined (backward-looking) transition\nvariables, \\texttt{y} is a vector of measurement variables, and\n\\texttt{e} is a vector of transition and measurement shocks.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "5132c131ddc1cbd0f7027e1b62f87be0446aa71f", "size": 2461, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/model/system.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/model/system.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": 
"-help/model/system.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 32.3815789474, "max_line_length": 72, "alphanum_fraction": 0.7139374238, "num_tokens": 756, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006919925839875, "lm_q2_score": 0.7057850278370112, "lm_q1q2_score": 0.5651164202747616}}
{"text": "% -------------------------------------------------------- %\n% Cuckoo Hashing\n% by: Isai Barajas Cicourel\n\n\n% -------------------------------------------------------- %\n% Document Start\n\n\\section{\\textbf{Cuckoo Hashing}}\n\n\n% -------------------------------------------------------- %\n% Particular Caes\n\n\\subsection{Particular Case}\n\\par\nCuckoo hashing is a (sequential) hashing algorithm in which a newly added item displaces any earlier item occupying the same slot. \nIn this test we modify the sequential Cuckoo hashing in order to change the sequential hashing algorithm to concurrent hashing.\n\\par\n\n\n% -------------------------------------------------------- %\n% Solution Information\n\n\\subsection{Solution}\n\\par\nWe break up each method call into a sequence of phases, where each phase adds, removes, or displaces a single item $x$.\nWe use a two-entry array \\textit{table[]} of tables, and two independent hash functions, (denoted as \\textit{hash0()} and \\textit{hash1()} in the code) mapping the set of possible keys to entries in the array. \n\\par\n\\begin{lstlisting}[frame=single,breaklines=true]\n  public TCuckooHashSet(int capacity) {\n    locks = new Lock[2][LOCKS];\n    table = (T[][]) new Object[2][capacity];\n    size = capacity;\n    for (int i = 0; i < 2; i++) {\n      for (int j = 0; j < LOCKS; j++) {\n        locks[i][j] = new ReentrantLock();\n      }\n    }\n  }\n  private final int hash0(Object x) {\n    return Math.abs(x.hashCode() % size);\n  }\n  private final int hash1(Object x) {\n    random.setSeed(x.hashCode());\n    return random.nextInt(size);\n  }\n\\end{lstlisting}\n\\par\nTo test whether a value $x$ is in the set, \\textit{contains(x)} tests whether either table[0][h0(x)] or $table[1][h1(x)]$ is equal to $x$. Similarly, \\textit{remove(x)} checks whether $x$ is in either $table[0][h0(x)]$ or $table[1][h1(x)]$, and removes it if found.\n\\begin{lstlisting}[frame=single,breaklines=true]\n  public boolean remove(T x) {\n    if (x == null) {\n      throw new IllegalArgumentException();\n    }\n    int h0 = hash0(x);\n    Lock lock0 = locks[0][h0 % LOCKS];\n    try {\n      lock0.lock();\n      if (x.equals(table[0][h0])) {\n        table[0][h0] = null;\n        return true;\n      } else {\n        int h1 = hash1(x);\n        Lock lock1 = locks[1][h1 % LOCKS];\n        try {\n          lock1.lock();\n          if (x.equals(table[1][h1])) {\n            table[1][h1] = null;\n            return true;\n          }\n          return false;\n        } finally {\n          lock1.unlock();\n        }\n      }\n    } finally {\n      lock0.unlock();\n    }\n  }\n\\end{lstlisting}\n\\par\nThe \\textit{add(x)} method is the most interesting. It successively removes conflicting items until every key has a slot. To add $x$, the method swaps $x$ with $y$, the current occupant of $table[0][h0(x)]$. 


% -------------------------------------------------------- %
% Experiment

\subsection{Experiment Description}
\par
The test creates $8$ threads that perform \textit{enq()} and \textit{deq()} operations sequentially and in parallel.
The dequeued values are checked, and if a value is missing a failed assertion is thrown.
\par

% -------------------------------------------------------- %
% Results

\subsection{Observations and Interpretations}

\par
When executing the test, it fails on the parallel part due to a missing value in the `parallel both' test.
\begin{lstlisting}[frame=single,breaklines=true]
Exception in thread "Thread-5" Exception in thread "Thread-7" junit.framework.AssertionFailedError: Th 0	DeqThread: missing value: 3
\end{lstlisting}


% -------------------------------------------------------- %
% Results

\subsection{Proposed changes to fix the problem}

\par
By adding terminal output through System.out in the \textit{remove()} method, or by debugging the file, the execution works. It thus becomes a problem of observation, since the failed assertion only happens when the execution is not observed.
\begin{lstlisting}[frame=single,breaklines=true]
compile-test:
.sequential add, contains, and remove
.parallel add
.parallel remove
.parallel both

Time: 0.071

OK (4 tests)
\end{lstlisting}



% -------------------------------------------------------- %
% Why?

\subsection{Proposed solution}
\par
It is not clear why observation would affect the execution; it is possible that the issue lies in the propagation of values in memory, which the debug implementation forces to happen constantly.
{"text": "\n \\FloatBarrier\n\n\\section{Example: Demand response programs}\\label{sec:dr_example}\n\nThis is an example of multiple populations used to implement demand response programs in smart grids \\cite{barreto2013design, barreto2014incentives}. In this case, we assume that each user must decide how to distribute its electricity usage along a day. Particularly, \nagents might have conflicting interests because they might impose externalities on the society through the price signals, i.e., the aggregated demand might affect the profit of agents. This conflict can be seen as a game between agents, in which each agent is selfish and  endeavors to maximize independently its own welfare. \n\nIn this problem we model the daily electricity allocation problem as a multi-population game with nonlinear fitness functions. Particularly, each agent can implement an evolutionary dynamic to find the best distribution of resources. Note that when implemented locally by each user, the evolutionary dynamics lead to the global efficient equilibrium (In this case the fitness is equal to the marginal utility of each agent). \n\nA particular feature of this problem is that the Nash equilibrium of the system is inefficient. Hence, \nwe introduce an incentives scheme (indirect revelation mechanism) to maximize the aggregated surplus of the population.\nThe main feature of this mechanism is that it does not require private information from users, and employs a one dimensional message space to coordinate the demand profile of agents. These properties facilitate the distributed implementation of the mechanism. The mechanism entrusts the computation tasks among users, who should maximize its own utility function based the aggregated demand (that is calculated and broadcasted by a central agent). Thus, users avoid revelation of private information (e.g., preferences), but are required to report the aggregated consumption of their appliances during some time periods.\n\n\n\n\n\n\n\n\\subsubsection{Problem Formulation}\n\n\n\nWe consider a population composed by $N$ consumers defined as $\\mathcal{V} = {1,\\ldots.N}$. Also, let us divide a period of 24 hours in a set of $T$ time intervals denoted $\\tau = \\{\\tau_1,\\ldots,\\tau_T\\}$.\nFormally, we define the set $\\tau$ as a partition of $[0,24)$, where \n $\\cup_{t\\in\\{1,\\ldots,T\\}} \\tau_t = \\tau$ and $\\cap_{t\\in\\{1,\\ldots,T\\}} \\tau_t = \\varnothing$.\n%\nLet $q_i^t$ be the electricity consumption of the $i\\th$ user in the $t\\th$ time interval. \nThe daily electricity consumption of the $i\\th$ user is represented by the vector $\\bs{q}_i=[q_i^1,\\ldots,q_i^T]^\\top\\in \\Re_{\\geq 0}^{T}$.\nThe population consumption at a given time $t$ is defined by the vector $\\bs{q}^t = [q_1^t,, q_2^t\\ldots,q_N^t]^\\top\\in \\Re_{\\geq 0}^{N}$.\nOn the other hand, the joint electricity consumption of the whole population is denoted by $\\bs{q} = [\\bs{q}_1^\\top,\n\\ldots, \\bs{q}_N^\\top]^\\top$. \nWithout loss of generality, we assume that the electricity consumption of the $i\\th$ user  satisfies $q_i^t\\geq 0$,  in each time instant $t$.\nA \\emph{valuation function} $v_i^t(q_i^t)$ models the \\emph{valuation} that the $i\\th$ user gives to an electricity consumption of $q_i^t$ units in the $t\\th$ time interval. Finally, let $p(\\cdot):\\Re\\rightarrow\\Re$ be the price of electricity charged to consumers. 
The aggregated consumption at a given time $t$ is defined as $||\bs{q}^t||_1 = \sum_{j=1}^N q_j^t$.
Moreover, the daily valuation is
$v_i(\bs{q}_i)=\sum_{t=1}^T v_i^t(q_i^t)$.

Now, assuming that the electricity generation cost is the same for all $t$, we can express the profit function of each individual as
%
\begin{equation}\label{eq:u_i_}
 U_i(\bs{q}) = v_i(\bs{q}_i) - \sum_{t=1}^T q_i^t p\Big( \norm{\bs{q}^t}_1 \Big),
\end{equation}
%
where
$p:\Re_+ \to \Re_+$ is the unitary price function.
The consumers' welfare is maximized by solving \cite{Johari09}
%
\begin{equation}\label{eq:opt_problem}
\begin{aligned}
& \underset{\bs{q}}{\text{maximize}}
& &  \sum_{i=1}^N U_i(\bs{q}) =  \sum_{i=1}^N\left( v_i(\bs{q}_i) - \sum_{t=1}^T q_i^t p\left( \norm{\bs{q}^t}_1 \right) \right) \\
& \text{subject to}
& & q_i^t \geq 0, \quad i \in\{1,\ldots,N\}, \ t \in\{1,\ldots,T\}.
\end{aligned}
\end{equation}



\subsubsection{Incentives}

The solution of the optimization problem in Eq.~(\ref{eq:opt_problem}) is inefficient in a strategic environment, i.e., when individuals are rational and selfish \cite{barreto2013design, Johari09}. In such cases, the analysis of strategic interactions among rational agents is made using game theory \cite{fudenberg98}.
In particular, the Nash equilibrium (a solution concept in game theory) is sub-optimal; however, we can show that if we add an incentive to the individual cost function of each player, the Nash equilibrium of the game with incentives can be made efficient in the sense of Pareto \cite{barreto2013design, barreto2014incentives}.

In particular, our DR scheme with incentives models the case when all agents keep their valuation of electricity to themselves and have autonomous control of their consumption. However, in order to give the agents an incentive to modify their behavior for the good of the population, the central entity sends them an incentive (e.g., a price signal or reward) to indirectly control their load.

Consider the new cost function for the $i^{th}$ agent:
\begin{equation}\label{eq:game2}
W_i(q_i,\bs{q}_{-i})
= v_i(q_i) -  q_i p\left( \norm{\bs{q}^t}_1 \right) + I_i(\bs{q}),
\end{equation}
where the incentives are of the form
\begin{equation}\label{eq:I_i}
I_i(\bs{q}) = \left( \norm{\bs{q}_{-i}^t}_1\right) \left( h_i(\norm{\bs{q}_{-i}})  - p\left( \norm{\bs{q}^t}_1 \right) \right).
\end{equation}


The form of this incentive is inspired by the
Vickrey-Clarke-Groves mechanism and the Clarke pivot rule \cite{AlgorithmicG}.
%
We assign incentives according to the contribution made by an agent
to the society. In particular, the function $h_i:\Re \to \Re$ is a design parameter that estimates the externalities introduced by each individual.
It can be shown that these incentives can lead to an optimal equilibrium in a strategic environment.
In this DR approach we consider that the utility sends a two-dimensional signal to each customer, namely $(q,I_i)$, and each customer responds with some consumption $q_i$.
Note that the incentives modify the price paid by each user according to their relative consumption; two different users receive different incentives as long as their consumptions differ.
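As an illustration of Eq.~(\ref{eq:I_i}), the following is a small sketch (in Python, purely for illustration; the price function $p$ and the design function $h$ are supplied by the caller):
\begin{verbatim}
import numpy as np

# Incentives for one time slot:
# I_i = ||q_{-i}||_1 * ( h(||q_{-i}||_1) - p(||q||_1) ),
# computed for all agents at once from the consumption vector q.
def incentives(q, p, h):
    total = q.sum()
    q_minus = total - q          # ||q_{-i}||_1 for each agent i
    return q_minus * (h(q_minus) - p(total))
\end{verbatim}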
\subsubsection{Simulations}

In this section, we illustrate some ideas of efficiency and the decentralized implementation of the incentives mechanism. We select some functions used previously in the literature. On the one hand, we define the family of valuation functions as
\begin{equation}\label{eq:valuation_sim}
 v(\bs{q}^k,\alpha_i^k) = v_i^k (q_i^k) = \alpha_i^k \log(1+q_i^k),
\end{equation}
where $\alpha_i^k>0$ is the parameter that characterizes the valuation of the $i\th$ agent at the $k\th$ time instant.
On the other hand, the generation cost function is defined as
%
\begin{equation}\label{eq:cost_sim}
 C(\|\bs{q}\|_1) = \beta ({\|\bs{q}\|_1})^2 + b {\|\bs{q}\|_1},
\end{equation}
and the unitary price function is
%
\begin{equation}\label{eq:p_sim}
 p(\|\bs{q}\|_1) = \frac{C(\|\bs{q}\|_1)}{\|\bs{q}\|_1} = \beta \|\bs{q}\|_1 + b.
\end{equation}
%
Note that the generation cost only depends on the aggregated consumption, not on the time of the day. Furthermore,
%
the fitness function of the system with incentives is
%
\begin{equation}\label{eq:fitness_without_i_sim}
F_i^k( \bs{q}^k)  =  \frac{\alpha_i^k }{1+q_i^k}
 - 2\beta \left( \sum_{j=1}^N q_j^k  \right).
 \end{equation}
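Evaluating Eq.~(\ref{eq:fitness_without_i_sim}) for the whole population can be vectorized over agents and time slots; the sketch below (in Python, illustrative only) assumes \texttt{q} and \texttt{alpha} are $N\times T$ arrays:
\begin{verbatim}
import numpy as np

# Fitness F_i^k = alpha_i^k/(1+q_i^k) - 2*beta*sum_j q_j^k,
# with agents as rows and time slots as columns.
def fitness(q, alpha, beta):
    return alpha / (1.0 + q) - 2.0 * beta * q.sum(axis=0, keepdims=True)
\end{verbatim}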
The evolution of the utility, demand, and incentives under different dynamics is shown in Figs.~\ref{fig:dynamics_u} and \ref{fig:dynamics_i}. Note that despite using the same initial condition, the evolution of the system is different under each dynamical model. In particular, the BNN and Smith dynamics converge faster to the optimum, in contrast with the Logit and replicator dynamics. This is achieved by means of a fast decrease in the power consumption.

\begin{figure}[hbt]
 \centering
 \includegraphics[width=.75\textwidth]{./images/evolution_u.eps}
 \caption{Evolution of profit and costs for four different dynamics.}
 \label{fig:dynamics_u}
\end{figure}


\begin{figure}[hbt]
 \centering
 \includegraphics[width=.75\textwidth]{./images/evolution_i.eps}
 \caption{Evolution of the incentives with four different dynamics.}
 \label{fig:dynamics_i}
\end{figure}



The incentives in Fig.~\ref{fig:dynamics_i} show that, in the long run, all dynamics converge to the same level of incentives. In particular, the Smith dynamics requires more incentives at all times, except for the Logit dynamics, which has a sudden increase in the incentives close to the equilibrium point.

From Fig.~\ref{fig:dynamics_i} it is not clear which dynamical model moves the state of the system to the optimal equilibrium using fewer resources. To answer this question, we simulate the total amount of incentives used by each model.
Thus, let us define the aggregated incentives in a society at a particular time $t$ as
\begin{equation}
 I_d (t) = \sum_{i\in\mathcal{P}} \frac{1}{|S|} \sum_{k\in S} I_i \left( \bs{q}^k (t) \right).
\end{equation}
Now, the total accumulated incentives from $t_0$ to $t$ are defined as
\begin{equation}
 \varPhi_d (t) = \int_{t_0}^t I_d (\tau) d\tau.
\end{equation}
Thus, $\varPhi_d (t)$ gives a measure of the total amount of subsidies required by the system with dynamic $d$ in the time interval $[t_0, t]$.
In this case we do not have a reference against which to compare the subsidy requirements of each evolutionary dynamic. Hence, we compare the subsidy requirements with the average requirements over all the dynamics implemented.
%
In order to see which dynamic requires more resources, we plot the cumulative resources required by each dynamic relative to the average.
Hence, we define the cumulative incentives as
%
\begin{equation}
CI_d = \frac{ \varPhi_d (t) }{ \sum_{d\in \mathcal{D}} \varPhi_d (t) }.
\end{equation}
%
Fig.~\ref{fig:integral} shows the results of the simulation of the relative subsidies required by each model of evolutionary dynamics.



The Smith dynamics requires many more resources over the whole time span, particularly during the first stages, while the Logit dynamics has the lowest incentive requirements. However, BNN has the lowest incentives in the long run.

\begin{figure}[hbt]
 \centering
 \includegraphics[width=.75\textwidth]{./images/accumulated_i.eps}
 \caption{Accumulated incentives during the evolution of the algorithm.}
 \label{fig:integral}
\end{figure}



Fig.~\ref{fig:final_state} shows the final demand profile of each agent. Note that the final state corresponds to the state of each population at the equilibrium.

\begin{figure}[hbt]
 \centering
 \includegraphics[width=.75\textwidth]{./images/final_state.eps}
 \caption{Final demand profile of each agent.}
 \label{fig:final_state}
\end{figure}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage{acronym}\n\\usepackage{amsmath}\n\\usepackage{forest}\n\\usepackage{tikz}\n\\usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows}\n\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{1em}\n\n\\title{Mathematical background for CTRNN simulations}\n\\author{\u00d8yvin Halfdan Thuv}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Equations for the Dynamics}\nEquations are from an article\\cite{beer-1995} of R. D. Beer, who did lots of interesting research into these networks.\n\n\\subsection{Neuron Activation Potential}\nThe activation potential (or firing-frequency) of each neuron is calculated by applying the sigmoid function in equation \\ref{sigmoid-function} to the sum of the bias and the membrane potential.\n\\begin{equation}\n  \\label{sigmoid-function}\n  \\sigma(x) = \\frac{1}{1 + e^{-x}}\n\\end{equation}\n\n\\subsection{Membrane Potentials}\nEvery neuron in a CTRNN maintains a state, the membrane potential. Neuron activation potentials at some specific network input will therefore vary depending on the current neuron states.\n\nThe change in membrane potential for each neuron is governed by the equation:\n\\begin{equation}\n  \\label{membrane-potential-equation}\n  \\dot{y}_i = \\frac{1}{\\tau_i}(-y_i + \\sum_{j=1}^{N}w_{ji}\\sigma(y_j + \\theta_j) + I_i)\n\\end{equation}\n\\vspace{1em}\nwhere\\\\\n\\begin{tabular}{ll}\n  $\\tau_i$: & time-constant of post-synaptic neuron\\\\\n  $y_i$: & membrane potential of post-synaptic neuron\\\\\n  $w_{ji}$: & weight of connection between pre-synaptic neuron $j$ and post-synaptic neuron\\\\\n  $\\sigma(x)$: & the sigmoid activation function in equation \\ref{sigmoid-function}\\\\\n  $y_j$: & membrane potential of pre-synaptic neuron $j$\\\\\n  $\\theta_j$: & bias (input sensivity) of pre-synaptic neuron $j$\\\\\n  $I_i$: & any external input (such as a sensor reading) to post-synaptic node\\\\\n\\end{tabular}\\par\n\\subsubsection{Procedure}\nThe clj-ctrnn library (at least currently) uses the \\textit{forward Euler method} for approximating the solution to equation \\ref{membrane-potential-equation} at a series of timesteps.\n\nFor each step in time, $t$, with $t<\\tau_i$, clj-ctrnn will\n\\begin{enumerate}\n\\item Approximate the rate of membrane potential change, $\\frac{\\Delta y}{\\Delta t}$, for all neurons\n\\item Synchronously update all membrane potentials $y_{i+1} = y_i + \\frac{\\Delta y}{\\Delta t} \\cdot t$\n\\item Commit the new network state\n\\end{enumerate}\n\n\\section{Specific Properties of the Model}\nSome interesting properties of CTRNNs are bifurcative neurons and center-crossing networks.\n\\subsubsection{Bifurcative Neurons}\nFor neurons that connect to themselves with a synaptic connection $\\geq4$, equation \\ref{membrane-potential-equation} will have two solutions. In other words, the membrane potential may converge towards one of two possible steady-states.\n\nWhich one it converges to depends on previous membrane potential, so incjected current can trigger steady-state change. 
This can be a very useful property in pulse circuits or sensor readings.\n\nThe following equations give the solutions for the left and right bifurcation points, respectively:\n\\begin{equation}\n  lb(w_{ii},\\theta) = 2 \\cdot \\ln(\\frac{\\sqrt{w_{ii}} + \\sqrt{w_{ii} - 4}}{2}) - \\frac{w_{ii} + \\sqrt{w_{ii} \\cdot (w_{ii} - 4)}}{2} - \\theta\n\\end{equation}\n\\begin{equation}\n  rb(w_{ii},\\theta) = -2 \\cdot \\ln(\\frac{\\sqrt{w_{ii}} + \\sqrt{w_{ii} - 4}}{2}) - \\frac{w_{ii} - \\sqrt{w_{ii} \\cdot (w_{ii} - 4)}}{2} - \\theta\n\\end{equation}\n\n\\subsubsection{Center-Crossing Networks}\nSince the activation potential is calculated using the sigmoid function in equation \\ref{sigmoid-function}, changes in input will yield the most change in activation potential where the existing input causes a neuron activation of $\\approx 0.5$.\n\nFor a network with set weights, maximum inter-neuron sensitivity can thus be achieved by setting the bias, $\\theta_i$, of every neuron to the value that aligns its sum of bias and input with 0:\n\\begin{equation}\n  \\theta_i = \\frac{-\\sum_{j=1}^{N}{w_{ji}}}{2}\n\\end{equation}\n\n\\section{Calculations for the Tests}\nFor some of the tests, expected values are manually calculated.\n\n\\subsection{Single Neuron Network Test}\nA network with a single neuron with one self-connection is one of the simplest tests. It has the following topology:\\par\n\\begin{figure}[h]\n  \\centering\n  \\begin{tikzpicture}[\n      neuron/.style={\n        draw,\n        circle,\n        inner sep=1em,\n        font=\\small\n      }\n    ]\n    %% Neurons\n    \\node[neuron] (n-1) {\\parbox[c][5em]{5em}{%\n        $\\tau = 1$\\\\\n        $\\theta = -5$\\\\\n    }};\n    %% Self 1\n    \\node[below=of n-1] (n-1-self) {$w=5$};\n    \\draw (n-1) edge[-, bend right=50] (n-1-self);\n    \\draw (n-1-self) edge[->, bend right=50] (n-1);\n  \\end{tikzpicture}\\par\n\\end{figure}\nGiven equation \\ref{membrane-potential-equation}, the membrane potential $y$ at each timestep $t$ of size $s$ can be written as:\\par\n\n\\begin{equation}\n  \\footnotesize\n  y(t,s) = y(t-s) + s \\cdot \\dot{y}(t-s)\n\\end{equation}\n\nThis can be expanded and simplified based on the neuron and network information above:\n\\begin{equation}\n  \\footnotesize\n  \\begin{aligned}\n    \\dot{y_i}(t) & = \\frac{1}{\\tau_i}(-y_i(t) + \\sum_{j=1}^{N}w_{ji}\\sigma(y_j(t) + \\theta_j) + I_i)\\\\\n    & \\Downarrow\\\\\n    \\dot{y}(t) & = \\frac{1}{1}(-y(t) + 5 \\cdot \\sigma(y(t) - 5))\\\\\n    & = -y(t) + 5 \\cdot \\sigma(y(t) - 5)\\\\\n    & \\Downarrow\\\\\n    y(t,s) & = y(t-s) + s \\cdot (-y(t-s) + 5 \\cdot \\sigma(y(t-s) - 5))\n  \\end{aligned}\n\\end{equation}\nWith timestep $s = 0.01$, the neuron membrane potential can be approximated as follows for 5 steps to calculate reference values for testing:\n\\begin{equation}\n  \\footnotesize\n  \\begin{aligned}\n    y(0.00) & = 0.000000\\\\\n    y(0.01) & = 0.000000 + 0.01 \\cdot (-0.000000 + 5 \\cdot \\frac{1}{1 + e^{-(0.000000 - 5)}}) = 0.000335\\\\\n    y(0.02) & = 0.000335 + 0.01 \\cdot (-0.000335 + 5 \\cdot \\frac{1}{1 + e^{-(0.000335 - 5)}}) = 0.000666\\\\\n    y(0.03) & = 0.000666 + 0.01 \\cdot (-0.000666 + 5 \\cdot \\frac{1}{1 + e^{-(0.000666 - 5)}}) = 0.000994\\\\\n    y(0.04) & = 0.000994 + 0.01 \\cdot (-0.000994 + 5 \\cdot \\frac{1}{1 + e^{-(0.000994 - 5)}}) = 0.001319\\\\\n    y(0.05) & = 0.001319 + 0.01 \\cdot (-0.001319 + 5 \\cdot \\frac{1}{1 + e^{-(0.001319 - 5)}}) = 0.001641\\\\\n  \\end{aligned}\n\\end{equation}\n\n\\subsection{Two Neuron Network}\nA more 
complex two neuron center-crossing network with bifurcative neurons has the following topology:\\par\n\\begin{figure}[h]\n  \\label{two-neuron-network}\n  \\centering\n  \\begin{tikzpicture}[\n      neuron/.style={\n        draw,\n        circle,\n        inner sep=1em,\n        font=\\small\n      }\n    ]\n    %% Neurons\n    \\node[neuron] (n-1) {\\parbox[c][5em]{5em}{%\n        $\\tau = 0.5$\\\\\n        $\\theta = -5$\\\\\n    }};\n    \\node[right=of n-1] (spacer) {};\n    \\node[right=of spacer,neuron] (n-2) {\\parbox[c][5em]{5em}{%\n        $\\tau = 0.5$\\\\\n        $\\theta = 5$\\\\\n    }};\n    %% Synapses\n    %% Self 1\n    \\node[below=of n-1] (n-1-self) {$w=5$};\n    \\draw (n-1) edge[-, bend right=50] (n-1-self);\n    \\draw (n-1-self) edge[->, bend right=50] (n-1);\n    %% Self 2\n    \\node[below=of n-2] (n-2-self) {$w=5$};\n    \\draw (n-2) edge[-, bend right=50] (n-2-self);\n    \\draw (n-2-self) edge[->, bend right=50] (n-2);\n    %% 1 -> 2\n    \\node[below=of spacer] (n-1-n-2) {$w=-10$};\n    \\draw (n-1) edge[-,bend right=5] (n-1-n-2);\n    \\draw (n-1-n-2) edge[->,bend right=5] (n-2);\n    %% 2 -> 1\n    \\node[above=of spacer] (n-2-n-1) {$w=10$};\n    \\draw (n-2) edge[-,bend right=5] (n-2-n-1);\n    \\draw (n-2-n-1) edge[->,bend right=5] (n-1);\n  \\end{tikzpicture}\n\\end{figure}\nThe properties were chosen as follows:\n\\begin{enumerate}\n\\item The neurons are given  self-weights of 5 ($w_{ii}>4$).\n\\item The first neuron is given a bias of -5.\n\\item A synaptic connection from neuron two to neuron one is created, so that on average it will influence neuron one near the center of activations ($w=10$).\n\\item Neuron two is then set up with exactly opposite bias and opposite synaptic input from neuron one.\n\\end{enumerate}\n\n\\begin{figure}[h]\n  \\caption{Example plot of the activations of the neurons in \\ref{two-neuron-network}.}\n  \\includegraphics[width=0.9\\textwidth]{plot.pdf}\n\\end{figure}\n\n\\begin{thebibliography}{9}\n\n\\bibitem{beer-1995}\n  Randall D. Beer\n  \\textit{On the Dynamics of Small Continuous-Time Recurrent Neural Networks},\n  Dept. of Computer Engineering and Science and Dept. 
of Biology,\n  Case Western Reserve University,\n  Cleveland, OH 44106,\n  1995.\n\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "38805c1d1e9db6362af6be872d035e14fdde718b", "size": 8468, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/maths.tex", "max_stars_repo_name": "oyvinht/clj-ctrnn", "max_stars_repo_head_hexsha": "f31418fb34d106b26fd06b59ee78de8450142004", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-06-21T13:31:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-30T10:31:24.000Z", "max_issues_repo_path": "tex/maths.tex", "max_issues_repo_name": "oyvinht/clj-ctrnn", "max_issues_repo_head_hexsha": "f31418fb34d106b26fd06b59ee78de8450142004", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/maths.tex", "max_forks_repo_name": "oyvinht/clj-ctrnn", "max_forks_repo_head_hexsha": "f31418fb34d106b26fd06b59ee78de8450142004", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.1067961165, "max_line_length": 237, "alphanum_fraction": 0.6719414265, "num_tokens": 2821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5651111036326282}}
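The forward-Euler procedure described in the Procedure subsection above is easy to reproduce outside of clj-ctrnn (which itself is a Clojure library). The following Python sketch, written here purely as a cross-check, integrates the single-neuron test network ($\tau=1$, $\theta=-5$, $w=5$, $I=0$) and reproduces the five hand-computed reference values:

\begin{verbatim}
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Single-neuron test network: tau = 1, theta = -5, w = 5, I = 0
tau, theta, w, I = 1.0, -5.0, 5.0, 0.0
s = 0.01  # timestep size

y = 0.0
for step in range(5):
    # forward Euler: y(t) = y(t-s) + s * ydot(t-s)
    ydot = (-y + w * sigmoid(y + theta) + I) / tau
    y += s * ydot
    print(f"y({(step + 1) * s:.2f}) = {y:.6f}")
# prints 0.000335, 0.000666, 0.000994, 0.001319, 0.001641
\end{verbatim}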
{"text": "\\section{Problem Statement}\n\\label{sec:ps}\nIn this section, we formalize the problem introduced in Section~\\ref{sec:intro}, \nproviding also some definitions to better explain the problem and the proposed solution.\n\nFirst, we define some concepts used in the problem definition.\n\\begin{definition}\n    A \\emph{term} is a sequence of characters.\n\\end{definition}\n\\begin{definition}\n    A \\emph{topic} is a set of at least two terms.\n\\end{definition}\n\\begin{definition}\n    A \\emph{tweet} $t$ is a pair $(\\mathit{time}, \\mathit{tokens})$ where\n    $\\mathit{time}$ is a timestamp that indicates the moment in time the \n    tweet has been posted and $\\mathit{tokens}$ is a set of terms mentioned\n    in the text of the tweet.\n\\end{definition}\n\\begin{definition}\n    A topic $q$ is \\emph{in} a tweet $t = (\\mathit{time}, \\mathit{tokens})$ if \n    $\\mathit{q} \\subseteq \\mathit{tokens}$.\n\\end{definition}\n\\begin{definition}\n    A \\emph{period of time} $p$ is a pair $(\\mathit{start}, \\mathit{end})$ where\n    $\\mathit{start}$ and $\\mathit{end}$ are timestamps, such that \n    $\\mathit{start} \\leq \\mathit{end}$, indicating the start and \n    the end time of $p$, respectively.\n\\end{definition}\n\\begin{definition}\n    A tweet $t = (\\mathit{time}, \\mathit{tokens})$ is \\emph{in} a period of \n    time $p = (\\mathit{start}, \\mathit{end})$ if \n    $\\mathit{start} \\leq time < \\mathit{end}$.\n\\end{definition}\n\\begin{definition}\n   Let $p$ be a period of time, $T$ be a set of tweets and $s \\in \\rm I\\!R$ be\n   such that $0 \\leq s \\leq 1$.\n    Let\n   $T' = \\{t_1, \\dots, t_k\\}$ be the set of tweets of $T$ that are in $p$. \n\n   A topic $q$ is $s$\\emph{-popular} in $T$ in the period $p$ if it is in at least\n   $x = s * k$ tweets of $T'$.\n\\end{definition}\n\\begin{definition}\n    Let $P = \\{p_1, \\dots, p_h\\}$ be a set of periods of time, \n    $T = \\{t_1, \\dots, t_k\\}$ be a set of tweets and\n    $s, r \\in \\rm I\\!R$ be such that $0 \\leq s, r \\leq 1$.\n\n    A topic $q$ is $r$\\emph{-consistent-}$s$\\emph{-popular} in $T$ in the periods $P$ \n    if it is $s$-popular in $T$ in at least $y = r * h$ periods of time in $P$.\n \\end{definition}\n\n\\paragraph{Input}\nAs input of our problem we are given:\n\\begin{itemize}\n    \\item The list of $n$ tweets \n$T = [\\mathit{t}_1, \\dots, \\mathit{t}_n]$, where each tweet\n$\\mathit{t}_i = (\\mathit{time}_i, \\mathit{tokens}_i)$;\n%  and $\\mathit{time}_1 \\leq \\mathit{time}_2 \\leq \\dots \\mathit{time}_n$;\n    \\item The two real numbers $s, r \\in \\rm I\\!R$\n    such that $0 \\leq s, r \\leq 1$;\n    \\item Two timestamps $\\mathit{start}$, $\\mathit{end}$ and a number $a \\in \\rm I\\!N$\n    such that $a > 0$.\n    The timestamps $\\mathit{start}$ and $\\mathit{end}$ indicate the start of the first period \n    of time we consider and the end of the last one, respectively. 
The parameter\n    $a$ indicates the duration of each period of time, in seconds.\n    The periods of time we consider are\n\n    $P = \\{(\\mathit{start}, \\mathit{start} + a),\n              (\\mathit{start} + a, \\mathit{start} + 2 a), \n    \\dots , (\\mathit{start} + k a, \\mathit{start} + (k+1) a)\\}$\n    such that $\\mathit{start} + k a < \\mathit{end} \\leq start + (k+1) a$\n\n\\end{itemize}\n\n\\paragraph{Output}\n\nThe output is the set of $r$-consistent-$s$-popular topics in $T$ in \nthe periods of time $P$.\n", "meta": {"hexsha": "b5979206b669e13957b2f4274854880304749a96", "size": 3278, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/Report/03-problem_statement.tex", "max_stars_repo_name": "masinag/popular_twitter_topics_mining", "max_stars_repo_head_hexsha": "b86e05d7700cfca4dbf9db67cde50664d99e60f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/Report/03-problem_statement.tex", "max_issues_repo_name": "masinag/popular_twitter_topics_mining", "max_issues_repo_head_hexsha": "b86e05d7700cfca4dbf9db67cde50664d99e60f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/Report/03-problem_statement.tex", "max_forks_repo_name": "masinag/popular_twitter_topics_mining", "max_forks_repo_head_hexsha": "b86e05d7700cfca4dbf9db67cde50664d99e60f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.4936708861, "max_line_length": 94, "alphanum_fraction": 0.6412446614, "num_tokens": 1140, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5651111032400599}}
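The definitions above translate directly into a brute-force algorithm. The following Python sketch is illustrative only (it is not the implementation accompanying the report, and it enumerates only two-term topics for brevity); it computes the $r$-consistent-$s$-popular topics for a small input:

\begin{verbatim}
from itertools import combinations

def popular_topics(tweets, start, end, a, s, r):
    # tweets: list of (time, tokens) pairs, where tokens is a set of terms
    periods = []
    t0 = start
    while t0 < end:
        periods.append((t0, t0 + a))
        t0 += a

    # candidate topics: all two-term subsets of the observed terms
    terms = sorted(set().union(*(tok for _, tok in tweets)))
    candidates = [frozenset(c) for c in combinations(terms, 2)]

    result = []
    for q in candidates:
        popular = 0
        for (p_start, p_end) in periods:
            in_p = [tok for (time, tok) in tweets
                    if p_start <= time < p_end]
            hits = sum(1 for tok in in_p if q <= tok)
            # s-popular: q is in at least x = s*k of the k tweets in the
            # period (an empty period has k = 0, so the bound holds trivially)
            if hits >= s * len(in_p):
                popular += 1
        # r-consistent-s-popular: s-popular in at least y = r*h periods
        if popular >= r * len(periods):
            result.append(set(q))
    return result
\end{verbatim}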
{"text": "\\documentclass[11pt]{article}\n\\usepackage{cctbx_preamble}\n\\usepackage{amscd}\n\n\\title{Restraint Gradients}\n\\author{\\lucjbourhis \\and \\rjgildea}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section{Bond distance restraint}\n\nThe distance between a pair of atoms is restrained to a target value $r_o$.\nThe weighted least-squares residual is defined as\n\\begin{equation}\nR = w(r_c - r_o)^2\n\\end{equation}\nwhere $r_c$ is the calculated distance given the current structure model.\n\nUsing the chain rule, the derivative of the residual with respect to the\ndistance $r_c$ is\n\\begin{equation}\n\\partialder{R}{r_c} = 2w(r_c - r_o).\n\\end{equation}\nGiven that\n\\begin{equation*}\nr_c = u^\\frac{1}{2},\n\\end{equation*}\nwhere\n\\begin{equation*}\nu = (x_a - x_b)^2 + (y_a - y_b)^2 + (z_a - z_b)^2,\n\\end{equation*}\nthe derivative of $r_c$ with respect to the Cartesian coordinate $x_a$ is then\n\\begin{equation}\n\\partialder{r_c}{x_a} = \\partialder{r_c}{u} \\partialder{u}{x_a}= \\frac{(x_a - x_b)}{r_c}.\n\\label{eqn:r_derivative}\n\\end{equation}\nTherefore the derivative of the residual with respect to $x_a$ is\n\\begin{equation}\n\\partialder{R}{x_a} = \\partialder{R}{r_c} \\partialder{r_c}{x_a}= \\frac{2 w (r_c - r_o)(x_a - x_b)}{r_c}.\n\\end{equation}\n\n\n\n\\section{Bond similarity restraint}\n\nThe distances between two or more atom pairs are restrained to be equal\nby minimising the weighted variance of the distances, where the\nleast-squares residual is defined as the population variance biased estimator\n\\begin{equation}\nR(r_1,...,r_n) = \\frac{\\sum_{i = 1}^n {w_i(r_i - \\mean{r})^2}}\n                      {\\sum_{i = 1}^n {w_i}}.\n\\end{equation}\nIt is easier to compute the derivatives by using the alternative form\n\\begin{align}\nR &= \\mean{r^2} - \\mean{r}^2 \\nonumber\\\\\n&= \\frac{\\sum_{i = 1}^n {w_i r_i^2}}{\\sum_{i = 1}^n {w_i}} - \n    \\left(\\frac{\\sum_{i = 1}^n {w_i r_i}}{\\sum_{i = 1}^n {w_i}}\\right)^2.\n\\end{align}\nThe derivative of the residual with respect to a distance $r_j$ is then\n\\begin{align}\n\\partialder{R}{r_j} &=\n  \\frac{2 w_j r_j}{\\sum_{i=1}^n{w_i}}\n  - \\frac{2 w_j \\sum_{i=1}^n w_i r_i}{(\\sum_{i=1}^n w_i)^2}\\nonumber\\\\\n&= \\frac{2 w_j}{\\sum_{i=1}^n{w_i}}(r_j - \\mean{r}).\n\\end{align}\nFrom \\eqnref{r_derivative},  the derivative of the residual with respect to $x_a$ is therefore\n\\begin{align}\n\\partialder{R}{x_a} &= \\partialder{R}{r_j} \\partialder{r_j}{x_a}\\nonumber\\\\\n&= \\frac{2 w_j (r_j - \\mean{r})(x_a - x_b)}{r_j \\sum_{i=1}^n {w_i}}.\n\\end{align}\n\n\\section{Restraints involving symmetry}\n\nLet's consider a restraint involving a site $x$ which is outside the asymmetric unit,\n\\begin{equation}\nR(x, \\ldots)\n\\end{equation}\nwhere $\\ldots$ stands for other sites. There is a symmetry $M$ such that $x=My$\nfor some site $y$ in the asymmetric unit. So our restraint residual is the\ncomposition of two functions, $R$ and $M$,\n\\newcommand{\\rasu}{R_{\\text{asu}}}\n\\begin{equation}\n\\rasu(y, \\ldots) = R(My, \\ldots)\n\\end{equation}\nand we need to apply the chain rule correctly to get the gradient with respect\nto $y$. 
The best way to work it out is to go back to the basics: the gradient\narises from a linear approximation of a function\\footnote{The $O$ notation means\n``terms at least quadratic in $\\delta x$'' and the superscript $T$ stands for ``transpose''.},\n\\begin{equation}\nf(x + \\delta x) = f(x) + \\grad{f(x)}^T \\delta x + O(\\delta x^2).\n\\end{equation}\n\nSo we consider $\\rasu(y+\\delta y, \\ldots)$ for a small $\\delta y$.\n\\begin{align}\n\\rasu(y+\\delta y, \\ldots) &= R(My + M\\delta y, \\ldots)\\nonumber\\\\\n& = R(My, \\ldots) + \\grad[x]{R(My, \\ldots)}^T M\\delta y +  O(\\delta x^2)\\nonumber\\\\\n\\intertext{by definition of the gradient of $R$ with respect to $x$.}\n&= R(My, \\ldots) + \\left(M^T \\grad[x]{R(My, \\ldots)}\\right)^T \\delta y\n +  O(\\delta x^2)\\nonumber\n\\end{align}\nOne then reads off the gradient with respect to $y$:\n\\begin{equation}\n\\grad[y]{\\rasu(y, \\ldots)} = M^T \\grad[x]{R(My, \\ldots)}\n\\end{equation}\nFinally, $M$ is a space group symmetry operation and is therefore an\northogonal transformation (i.e. one which preserves distances and angles),\nwhich means that in Cartesian coordinates, $M^T = M^{-1}$.\n\n\\bibliography{cctbx_references}\n\n\\end{document}", "meta": {"hexsha": "756b1e69651dd932d6c5faff6b52ecbcbc45725e", "size": 4140, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cctbx/geometry_restraints/gradients.tex", "max_stars_repo_name": "rimmartin/cctbx_project", "max_stars_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 155, "max_stars_repo_stars_event_min_datetime": "2016-11-23T12:52:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T15:35:44.000Z", "max_issues_repo_path": "cctbx/geometry_restraints/gradients.tex", "max_issues_repo_name": "rimmartin/cctbx_project", "max_issues_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 590, "max_issues_repo_issues_event_min_datetime": "2016-12-10T11:31:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T23:10:09.000Z", "max_forks_repo_path": "cctbx/geometry_restraints/gradients.tex", "max_forks_repo_name": "rimmartin/cctbx_project", "max_forks_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 115, "max_forks_repo_forks_event_min_datetime": "2016-11-15T08:17:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T15:30:14.000Z", "avg_line_length": 36.6371681416, "max_line_length": 104, "alphanum_fraction": 0.6913043478, "num_tokens": 1451, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911056, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.5651110996493106}}
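As a numerical sanity check of the bond-distance formulas above, the following short Python sketch (written for this note; it does not use the cctbx API) evaluates $R$ and $\partial R/\partial x_a$ and compares the analytical gradient against a finite difference:

\begin{verbatim}
import math

def bond_residual_and_grad(xa, xb, r0, w):
    # R = w*(rc - r0)^2, with rc the distance between atoms a and b
    diff = [a - b for a, b in zip(xa, xb)]
    rc = math.sqrt(sum(d * d for d in diff))
    R = w * (rc - r0) ** 2
    # dR/dxa_k = 2*w*(rc - r0)*(xa_k - xb_k)/rc; the gradient at b is the negative
    grad_a = [2 * w * (rc - r0) * d / rc for d in diff]
    return R, grad_a

# Finite-difference check on the x-component (all numbers arbitrary)
xa, xb, r0, w = [1.0, 2.0, 3.0], [0.0, 0.5, 1.0], 2.0, 1.5
R, g = bond_residual_and_grad(xa, xb, r0, w)
h = 1e-6
R_plus, _ = bond_residual_and_grad([xa[0] + h, xa[1], xa[2]], xb, r0, w)
print(g[0], (R_plus - R) / h)  # the two values should agree closely
\end{verbatim}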
{"text": "\\documentclass{standalone}\n\\begin{document}\n\t\\chapter{Surds}\n\t\\section{Introduction}\n\t\\quad Consider numbers $\\sqrt{64}, \\sqrt{16}$. These can be represented as exact quantities by writing $8$ and $4$. There are however other numbers which cannot be expressed as exact quantities using other symbols.\\\\\n\t\n\t\n\tThere is an option of expressing them as corrected decimals without however preserving their full value. Instead, we choose to keep the form $\\sqrt{a}$ which preserves the full value of the numbers.\n\t\\section{Examples}\n\t\\begin{multicols}{2}\n\t\t\\noindent\n\t\t\\begin{alignat*}{2}\n\t\t\ta)\\quad\n\t\t\t\\sqrt{2} & = \\sqrt{16\\times3}          \\\\\n\t\t\t& = \\sqrt{16} \\times \\sqrt{3} \\\\\n\t\t\t& = 4\\sqrt{3}                 \n\t\t\\end{alignat*}\n\t\t\\begin{alignat*}{2}\n\t\t\tb)\\quad\n\t\t\t\\sqrt{72} & = \\sqrt{8\\times9}         \\\\\n\t\t\t& = \\sqrt{9}\\times \\sqrt{8} \\\\\n\t\t\t& = 3\\sqrt{8}               \n\t\t\\end{alignat*}\n\t\t\\begin{alignat*}{2}\n\t\t\t\\quad\n\t\t\t\\sqrt{360} & = \\sqrt{180} \\times \\sqrt{2} \\\\\n\t\t\t& =\\sqrt{36}\\times\\sqrt{10}    \\\\\n\t\t\t& =6\\sqrt{10}                  \n\t\t\\end{alignat*}\n\t\t\\begin{alignat*}{2}\n\t\t\td)\\quad\n\t\t\t& \\quad\\left(1+2\\sqrt{3}\\right)\\left(2+3\\sqrt{5}\\right) \\\\\n\t\t\t& = 2-\\sqrt{3}  - 10\\sqrt{3}                            \\\\\n\t\t\t& =-28-\\sqrt{3}                                         \n\t\t\\end{alignat*}\n\t\t\\begin{alignat*}{2}\n\t\t\te)\\footnote[1]\\quad\n\t\t\t& \\quad\\left(2-3\\sqrt{5}\\right)\\left(2+3\\sqrt{5}\\right) \\\\\n\t\t\t& = 4+6\\sqrt{5}-6\\sqrt{5}-9(5)                          \\\\\n\t\t\t& =-41                                                  \n\t\t\\end{alignat*}\n\t\\end{multicols}\n\t\\footnotetext[1]{This expression shows that for products of the form $\\left(a+b\\sqrt{c}\\right)\\left(a-b\\sqrt{c}\\right)$ the surds will vanish.}\n\t\\newpage\n\t\\section{Rationalizing the denominator}\n\t\\quad \\emph{Consider the fraction:} \n\t$$\\frac{1}{1+\\sqrt{2}}$$\n\tThis fraction contains a surd, thus, making it irrational. 
To rationalize said fraction, one multiplies it by the conjugate of the denominator over itself, so that the surds in the denominator cancel \\textit{(see \\footnotemark[1])}.\\\\\n\t\n\t\n\t\\emph{Continuing...}\n\t\\begin{alignat*}{2}\n\t\t\\frac{1}{1+\\sqrt{2}} & = \\frac{1}{1+\\sqrt{2}} \\times \\frac{1-\\sqrt{2}}{1-\\sqrt{2}} \\\\\n\t\t& =\\frac{1-\\sqrt{2}}{1-2}                                     \\\\\n\t\t& =\\sqrt{2}-1                                                 \n\t\\end{alignat*} \n\\end{document}", "meta": {"hexsha": "d129ef5744e0b72bc88a9cc1c2e49dc72c6b16af", "size": 2307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pure Mathematics/Surds.tex", "max_stars_repo_name": "Girogio/My-LaTeX", "max_stars_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-12T11:45:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-30T21:47:25.000Z", "max_issues_repo_path": "Pure Mathematics/Surds.tex", "max_issues_repo_name": "Girogio/My-LaTeX", "max_issues_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pure Mathematics/Surds.tex", "max_forks_repo_name": "Girogio/My-LaTeX", "max_forks_repo_head_hexsha": "706ec7cb4d62af1b5a9ad7547589889240c755bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4736842105, "max_line_length": 217, "alphanum_fraction": 0.5526657997, "num_tokens": 794, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911056, "lm_q2_score": 0.7401743505760728, "lm_q1q2_score": 0.5651110908975395}}
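A further worked example in the same style (added here for illustration), reusing the conjugate product already computed in example e) above:

\begin{alignat*}{2}
	\frac{1}{2-3\sqrt{5}} & = \frac{1}{2-3\sqrt{5}} \times \frac{2+3\sqrt{5}}{2+3\sqrt{5}} \\
	& = \frac{2+3\sqrt{5}}{4-45} \\
	& = -\frac{2+3\sqrt{5}}{41}
\end{alignat*}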
{"text": "\\chapter{Notation}\n\\label{appendix:notation}\n\n$\n\\begin{array}{cl}\n\\hat{{}} & \\text{predicted output} \\\\\n\\hat{y} & \\text{predicted output of the variable $y$} \\\\\n\n\n%%%%%%%%%%%\n\\\\ \\\\ \\\\ \\\\ \n\\mathbb{Z} & \\text{the set of integers; zahlen is the German word for numbers} \\\\\n\\bigoplus & \\mathrm{the\\;earth} \\\\\n\\bigodot & \\mathrm{the\\;sun} \\\\\nc & \\mathrm{speed\\;of\\;light}\\\\\nG & \\mathrm{Newton's\\;gravitational\\;constant} \\\\\nC & \\mathrm{circumference} \\\\\nS & \\mathrm{distance} \\\\\n\\end{array}\n$", "meta": {"hexsha": "e05bd009dace57ce1ce76fb679099aa8b78477ab", "size": 487, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX Notes/Appendix/notation.tex", "max_stars_repo_name": "Sz593/coursera_ml_notes", "max_stars_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeX Notes/Appendix/notation.tex", "max_issues_repo_name": "Sz593/coursera_ml_notes", "max_issues_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeX Notes/Appendix/notation.tex", "max_forks_repo_name": "Sz593/coursera_ml_notes", "max_forks_repo_head_hexsha": "098e0bd81759312b2c14640772112e9accb63a93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.35, "max_line_length": 81, "alphanum_fraction": 0.6221765914, "num_tokens": 169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8723473879530492, "lm_q2_score": 0.647798211152541, "lm_q1q2_score": 0.5651050774195769}}
{"text": "\\chapter{Statistical physics}\n\n\\section{Appreciating the relevance of statistics}\n\nConsider a box of gas with 1 billion particles.\nIt is impractical to model that by 1 billion equations of motion.\nHowever, we can still say something useful,\nbecause \\emph{statistics} allows us to \\emph{summarize} the gas.\nWith statistics, we can talk about \\emph{macroscopic behavior},\nbut we can't talk about individual particles;\nwe get the summary and we sacrifice the details.\n\nNow we can talk about the \\emph{distribution} of the velocity of the particles,\nsuch as \\emph{50\\% of the particles are slower than something}.\n\nStatistical physics is macro-physics.\nThe idea is we consider a statistics of the system.\nWe look at the big picture instead of looking at each particle.\nThere are many particles.\nWe cannot say anything about one particle.\nWhat is an example of \\emph{statistical ensemble}?\n\n\\section{Getting used to probability and statistics}\n\n\\ExerciseAnswer{(Discrete probability) Roll a fair six-faced die once. What is the probability of getting the three-dotted face?}{\\(3/6\\).}\n\\ExerciseAnswer{(Joint probability of independent events) Roll a fair six-faced die three times. What is the probability of getting the three-dotted face three times?}{%\n\\((3/6) \\times (3/6) \\times (3/6)\\).}\n\n\\ShowAnswers\n\nA \\emph{distribution} of a set \\(\\Omega\\) describes how members of \\(\\Omega\\) are distributed.\nLet \\(f\\) be the density of that distribution.\nThen \\(f(x)\\) describes the tendency of values to gather around \\(x\\)?\nValues tend to gather near the peaks of \\(f\\).\n\n\\section{Using the Maxwell\\textendash{}Boltzmann distribution of speed (for what?)}\n\nAn example question that statistical physics (statistical mechanics) can answer is\n``What is the probability of finding a particle with a given speed?''\nFor example, see the probability density function of the Maxwell\\textendash{}Boltzmann distribution.\n\nMaxwell distribution is a chi-distribution with 3 degrees of freedom.\n\nDon't remember the equation.\nTo be a physicist, you don't need to remember this; you can always go to Wikipedia or open a book.\nThe important thing is that you know \\emph{what it means} and \\emph{what it's useful for}.\nThe density of \\emph{Maxwell\\textendash{}Boltzmann distribution} is \\(f(v)\\).\nThe number \\(\\int_A f\\) describes the \\emph{probability of finding a particle\nwhose speed is in the set (the range) \\(A\\)}.\nLet that sink for a moment, especially if you aren't yet comfortable with probability theory.\nThe density of \\emph{Maxwell\\textendash{}Boltzmann distribution} is\n???\n\\Formula{\n    \\NoNumber\n    f(v) = \\parenthesize{ \\frac{m}{2\\pi k T} }^{3/2} 4 \\pi v^2 \\exp \\parenthesize{ - \\frac{mv^2}{2kT} }\n}\nWho got that? 
How?\n\n% https://en.wikipedia.org/wiki/Temperature\nTODO paraphrase this Wikipedia text:\nBased on the historical development of the kinetic theory of gases, temperature is proportional to the average kinetic energy of the random motions of the constituent microscopic particles\n\n% https://en.wikipedia.org/wiki/Maxwell\u2013Boltzmann_distribution\n\nStatistical mechanics explains thermodynamics.\n\n% https://en.wikipedia.org/wiki/Thermodynamics\n\n\\emph{mole} is\n\nChemistry?\n\nEntropy?\n\nCanonical ensemble?\n\nStatistical ensemble?\n\n% http://demonstrations.wolfram.com/BoseEinsteinFermiDiracAndMaxwellBoltzmannStatistics/\n", "meta": {"hexsha": "5a1b979362daa725f9aa6ed9f01362bdbb87be12", "size": 3314, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research/physics/statistical.tex", "max_stars_repo_name": "edom/work", "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "research/physics/statistical.tex", "max_issues_repo_name": "edom/work", "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_forks_repo_path": "research/physics/statistical.tex", "max_forks_repo_name": "edom/work", "max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "avg_line_length": 41.425, "max_line_length": 188, "alphanum_fraction": 0.7718768859, "num_tokens": 826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971211, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5650445780535597}}
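To make the statement about \(\int_A f\) concrete, here is a small Python sketch (invented for these notes; the molecule mass and temperature are merely plausible values) that evaluates the Maxwell–Boltzmann speed density and integrates it over a speed range with the trapezoidal rule:

\begin{verbatim}
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mb_density(v, m, T):
    # f(v) = (m/(2*pi*k*T))^(3/2) * 4*pi*v^2 * exp(-m*v^2/(2*k*T))
    a = (m / (2 * math.pi * K_B * T)) ** 1.5
    return a * 4 * math.pi * v ** 2 * math.exp(-m * v ** 2 / (2 * K_B * T))

def prob_speed_in(lo, hi, m, T, n=10000):
    # P(lo <= speed <= hi) = integral of f over [lo, hi], trapezoidal rule
    dv = (hi - lo) / n
    total = 0.5 * (mb_density(lo, m, T) + mb_density(hi, m, T))
    total += sum(mb_density(lo + i * dv, m, T) for i in range(1, n))
    return total * dv

m, T = 4.65e-26, 300.0  # roughly an N2 molecule at room temperature
print(prob_speed_in(0.0, 500.0, m, T))   # fraction slower than 500 m/s
print(prob_speed_in(0.0, 5000.0, m, T))  # nearly all of the probability
\end{verbatim}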
{"text": "\\title{Ideals, Varieties and \\Mtwo}\n\\titlerunning{Ideals, Varieties and \\Mtwo}\n\\toctitle{Ideals, Varieties and \\Mtwo}\n\\author{Bernd Sturmfels\\thanks{Partially supported by\nthe National Science Foundation (DMS-9970254).}}\n\\authorrunning{B. Sturmfels}\n% \\institute{University of California at Berkeley, Department of Mathematics,\n%   Berkeley, CA 94720, USA}\n\\maketitle\n\n\\begin{abstract}\nThis chapter introduces \\Mtwo commands for\nsome elementary computations in algebraic geometry.\nFamiliarity with Gr\\\"obner bases is assumed.\n\\end{abstract}\n\nMany students and researchers alike have their first encounter with\nGr\\\"ob\\-ner bases through the delightful text books \\cite{CLO1} and \\cite{CLO2}\nby David Cox, John Little and Donal O'Shea. This chapter illustrates\nthe use of \\Mtwo for some computations discussed in these books.\nIt can be used as a supplement for an advanced undergraduate course or \nfirst-year graduate course in computational algebraic geometry. The\nmathematically advanced reader will find this chapter a useful summary\nof some basic \\Mtwo commands.\n\n\\section{A Curve in Affine Three-Space}\n\nOur first example concerns geometric objects in\n(complex) affine 3-space. We start by\nsetting up the ring of polynomial functions with rational coefficients.\n\\beginOutput\ni1 : R = QQ[x,y,z]\\\\\n\\emptyLine\no1 = R\\\\\n\\emptyLine\no1 : PolynomialRing\\\\\n\\endOutput\nVarious monomial orderings are available in \\Mtwo; since we did not specify\none explicitly, the monomials in the ring ${\\tt R}$ will be sorted in \ngraded reverse lexicographic order  \\cite[\\S I.2, Definition 6]{CLO1}.\nWe define an ideal generated by two polynomials\nin this ring and assign it to the variable named \n{\\tt curve}.\n\n\\beginOutput\ni2 : curve = ideal( x^4-y^5, x^3-y^7 )\\\\\n\\emptyLine\n\\               5    4     7    3\\\\\no2 = ideal (- y  + x , - y  + x )\\\\\n\\emptyLine\no2 : Ideal of R\\\\\n\\endOutput\nWe compute the reduced Gr\\\"obner basis of our ideal:\n\\beginOutput\ni3 : gb curve\\\\\n\\emptyLine\no3 = | y5-x4 x4y2-x3 x8-x3y3 |\\\\\n\\emptyLine\no3 : GroebnerBasis\\\\\n\\endOutput\nBy inspecting leading terms (and using \\cite[\\S 9.3, Theorem 8]{CLO1}),\nwe see that our ideal {\\tt curve} does indeed \ndefine a one-dimensional affine variety. This can be tested directly\nwith the following commands in \\Mtwo:\n\\beginOutput\ni4 : dim curve\\\\\n\\emptyLine\no4 = 1\\\\\n\\endOutput\n\\beginOutput\ni5 : codim curve\\\\\n\\emptyLine\no5 = 2\\\\\n\\endOutput\nThe {\\it degree} of a curve in complex affine $3$-space is the \nnumber of intersection points with a general plane. It coincides\nwith the degree  \\cite[\\S 6.4]{CLO2} of the projective closure\n\\cite[\\S 8.4]{CLO1} of our curve, which we compute as follows:\n\\beginOutput\ni6 : degree curve\\\\\n\\emptyLine\no6 = 28\\\\\n\\endOutput\nThe Gr\\\"obner basis in {\\tt o3} contains two polynomials which are not\nirreducible: they contain a factor of $x^3$. This shows that our curve\nis not irreducible over ${\\bf Q}$. 
We first extract the components\nwhich are transverse to the plane $x=0$:\n\\beginOutput\ni7 : curve1 = saturate(curve,ideal(x))\\\\\n\\emptyLine\n\\               2       5    4   5    3\\\\\no7 = ideal (x*y  - 1, y  - x , x  - y )\\\\\n\\emptyLine\no7 : Ideal of R\\\\\n\\endOutput\nAnd next we extract the component which lies in the plane $x=0$:\n\\beginOutput\ni8 : curve2 = saturate(curve,curve1)\\\\\n\\emptyLine\n\\             3   5\\\\\no8 = ideal (x , y )\\\\\n\\emptyLine\no8 : Ideal of R\\\\\n\\endOutput\nThe second component is a multiple line. Hence our input ideal was not radical.\nTo test equality of ideals we use the command {\\tt ==}\\indexcmd{==} .\n\\beginOutput\ni9 : curve == radical curve\\\\\n\\emptyLine\no9 = false\\\\\n\\endOutput\nWe now replace our curve by its first component:\n\\beginOutput\ni10 : curve = curve1\\\\\n\\emptyLine\n\\                2       5    4   5    3\\\\\no10 = ideal (x*y  - 1, y  - x , x  - y )\\\\\n\\emptyLine\no10 : Ideal of R\\\\\n\\endOutput\n\\beginOutput\ni11 : degree curve\\\\\n\\emptyLine\no11 = 13\\\\\n\\endOutput\nThe ideal of this curve is radical:\n\\beginOutput\ni12 : curve == radical curve\\\\\n\\emptyLine\no12 = true\\\\\n\\endOutput\nNotice that the variable ${\\bf z}$ does not appear\namong the generators of the ideal. Our curve consists of\n$13$ straight lines (over {\\bf C}) parallel to the {\\tt z}-axis.\n\n\\section{Intersecting Our Curve With a Surface}\n\nIn this section we explore basic operations on ideals,\nstarting with those described in \\cite[\\S 4.3]{CLO1}.\nConsider the following surface in affine $3$-space:\n\\beginOutput\ni13 : surface = ideal( x^5 + y^5 + z^5 - 1)\\\\\n\\emptyLine\n\\             5    5    5\\\\\no13 = ideal(x  + y  + z  - 1)\\\\\n\\emptyLine\no13 : Ideal of R\\\\\n\\endOutput\nThe union of the curve and the surface is represented by the \nintersection of their ideals:\n\\beginOutput\ni14 : theirunion = intersect(curve,surface)\\\\\n\\emptyLine\n\\              6 2      7      2 5    5    5    5      2       5 5    1 $\\cdot\\cdot\\cdot$\\\\\no14 = ideal (x y  + x*y  + x*y z  - x  - y  - z  - x*y  + 1, x y  + y  $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no14 : Ideal of R\\\\\n\\endOutput\nIn this example this coincides with the product of the two ideals:\n\\beginOutput\ni15 : curve*surface == theirunion\\\\\n\\emptyLine\no15 = true\\\\\n\\endOutput\nThe intersection of the curve and the surface is represented by the \nsum of their ideals. 
We get a finite set of points:\n\\beginOutput\ni16 : ourpoints = curve + surface\\\\\n\\emptyLine\n\\                2       5    4   5    3   5    5    5\\\\\no16 = ideal (x*y  - 1, y  - x , x  - y , x  + y  + z  - 1)\\\\\n\\emptyLine\no16 : Ideal of R\\\\\n\\endOutput\n\\beginOutput\ni17 : dim ourpoints\\\\\n\\emptyLine\no17 = 0\\\\\n\\endOutput\nThe number of points is sixty five:\n\\beginOutput\ni18 : degree ourpoints\\\\\n\\emptyLine\no18 = 65\\\\\n\\endOutput\nEach of the points is multiplicity-free:\n\\beginOutput\ni19 : degree radical ourpoints\\\\\n\\emptyLine\no19 = 65\\\\\n\\endOutput\nThe number of points coincides with the number of \nmonomials not in the initial ideal \\cite[\\S 2.2]{CLO2}.\nThese are called the {\\it standard monomials}.\n\\beginOutput\ni20 : staircase = ideal leadTerm ourpoints\\\\\n\\emptyLine\n\\                2   5   5   5\\\\\no20 = ideal (x*y , z , y , x )\\\\\n\\emptyLine\no20 : Ideal of R\\\\\n\\endOutput\nThe {\\tt basis} command can be used to list all the standard monomials\n\\beginOutput\ni21 : T = R/staircase;\\\\\n\\endOutput\n\\beginOutput\ni22 : basis T\\\\\n\\emptyLine\no22 = | 1 x x2 x3 x4 x4y x4yz x4yz2 x4yz3 x4yz4 x4z x4z2 x4z3 x4z4 x3y $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              1       65\\\\\no22 : Matrix T  <--- T\\\\\n\\endOutput\n\nThe assignment of the quotient ring to the global variable {\\tt T} had a side\neffect: the variables {\\tt x}, {\\tt y}, and {\\tt z} now have values in that\nring.\nTo bring the variables of {\\tt R} to the fore again, we must say:\n\\beginOutput\ni23 : use R;\\\\\n\\endOutput\nEvery polynomial function on our 65 points can be written uniquely\nas a linear combination of these standard monomials. This \nrepresentation can be computed using the normal form command {\\tt \\%}\\indexcmd{\\%}.\n\n\\beginOutput\ni24 : anyOldPolynomial = y^5*x^5-x^9-y^8+y^3*x^5\\\\\n\\emptyLine\n\\       5 5    9    5 3    8\\\\\no24 = x y  - x  + x y  - y\\\\\n\\emptyLine\no24 : R\\\\\n\\endOutput\n\\beginOutput\ni25 : anyOldPolynomial {\\char`\\%} ourpoints\\\\\n\\emptyLine\n\\       4     3\\\\\no25 = x y - x y\\\\\n\\emptyLine\no25 : R\\\\\n\\endOutput\nClearly, the normal form is zero if and only the polynomial is in the ideal.\n\\beginOutput\ni26 : anotherPolynomial = y^5*x^5-x^9-y^8+y^3*x^4\\\\\n\\emptyLine\n\\       5 5    9    8    4 3\\\\\no26 = x y  - x  - y  + x y\\\\\n\\emptyLine\no26 : R\\\\\n\\endOutput\n\\beginOutput\ni27 : anotherPolynomial {\\char`\\%} ourpoints\\\\\n\\emptyLine\no27 = 0\\\\\n\\emptyLine\no27 : R\\\\\n\\endOutput\n\n\n\\section{Changing the Ambient Polynomial Ring}\n\nDuring a \\Mtwo session it sometimes becomes necessary to change the\nambient ring in which the computations takes place. Our original\nring, defined in {\\tt i1}, is the polynomial ring in three variables\nover the field  {\\bf Q} of rational numbers\nwith the graded reverse lexicographic order. 
In this section \ntwo modifications are made: first we replace the field of coefficients\nby a finite field, and later we replace the  monomial order\nby an elimination order.\n\nAn important operation in algebraic geometry is \nthe decomposition of algebraic varieties\ninto irreducible components \\cite[\\S 4.6]{CLO1}.\nAlgebraic algorithms for this purpose are based on the\n{\\it primary decomposition} of ideals \\cite[\\S 4.7]{CLO1}.\nA future version of \\Mtwo will have an implementation of\nprimary decomposition over any polynomial ring.\nThe current version of \\Mtwo has a command\n{\\tt decompose} for finding all the minimal primes of an ideal,\nbut, as it stands, this works only over a finite field.\n\nLet us change our coefficient field to the field with $101$ elements:\n\\beginOutput\ni28 : R' = ZZ/101[x,y,z];\\\\\n\\endOutput\n\nWe next move our ideal from the previous section into the new ring\n(fortunately, none of the coefficients of its generators have 101 in the\ndenominator):\n\n\\beginOutput\ni29 : ourpoints' = substitute(ourpoints,R')\\\\\n\\emptyLine\n\\                2       5    4   5    3   5    5    5\\\\\no29 = ideal (x*y  - 1, y  - x , x  - y , x  + y  + z  - 1)\\\\\n\\emptyLine\no29 : Ideal of R'\\\\\n\\endOutput\n\\beginOutput\ni30 : decompose ourpoints'\\\\\n\\emptyLine\n\\                                                                       $\\cdot\\cdot\\cdot$\\\\\no30 = \\{ideal (z + 36, y - 1, x - 1), ideal (z + 1, y - 1, x - 1), idea $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no30 : List\\\\\n\\endOutput\nOops, that didn't fit on the display, so let's print them out one per line.\n\\beginOutput\ni31 : oo / print @@ print;\\\\\nideal (z + 36, y - 1, x - 1)\\\\\n\\emptyLine\nideal (z + 1, y - 1, x - 1)\\\\\n\\emptyLine\nideal (z - 6, y - 1, x - 1)\\\\\n\\emptyLine\nideal (z - 14, y - 1, x - 1)\\\\\n\\emptyLine\nideal (z - 17, y - 1, x - 1)\\\\\n\\emptyLine\n\\        3      2              2                      3    2     2      $\\cdot\\cdot\\cdot$\\\\\nideal (x  - 46x  + 28x*y - 27y  + 46x + y + 27, - 16x  + x y + x  - 15 $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\            2                                            2             $\\cdot\\cdot\\cdot$\\\\\nideal (- 32x  - 16x*y + x*z - 16x - 27y - 30z - 14, - 34x  - 14x*y + y $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\          2                                         2            2     $\\cdot\\cdot\\cdot$\\\\\nideal (44x  + 22x*y + x*z + 22x - 26y - 30z - 6, 18x  + 12x*y + y  + 1 $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\            2                                           2            2 $\\cdot\\cdot\\cdot$\\\\\nideal (- 41x  + 30x*y + x*z + 30x + 38y - 30z + 1, - 26x  - 10x*y + y  $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\          2                                            2            2  $\\cdot\\cdot\\cdot$\\\\\nideal (39x  - 31x*y + x*z - 31x - 46y - 30z + 36, - 32x  - 13x*y + y   $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\            2                                          2            2  $\\cdot\\cdot\\cdot$\\\\\nideal (- 10x  - 5x*y + x*z - 5x - 40y - 30z - 17, - 37x  + 35x*y + y   $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\endOutput\nIf we just want to see the degrees of the irreducible components, then\nwe say:\n\\beginOutput\ni32 : ooo / degree\\\\\n\\emptyLine\no32 = \\{1, 1, 1, 1, 1, 30, 6, 6, 6, 6, 6\\}\\\\\n\\emptyLine\no32 : List\\\\\n\\endOutput\nNote that the expressions ${\\tt oo}$ \nand ${\\tt ooo}$ refer to the previous and\nprior-to-previous output lines 
respectively.\n\n\\medskip\n\nSuppose we wish to compute the $x$-coordinates of our sixty five points.\nThen we must use an elimination order, for instance, the\none described in \\cite[\\S 3.2, Exercise 6.a]{CLO1}.\nWe define a  new polynomial ring with the elimination order\nfor $\\{y,z\\} > \\{x\\}$ as follows:\n\\beginOutput\ni33 : S = QQ[z,y,x, MonomialOrder => Eliminate 2]\\\\\n\\emptyLine\no33 = S\\\\\n\\emptyLine\no33 : PolynomialRing\\\\\n\\endOutput\nWe move our ideal into the new ring,\n\\beginOutput\ni34 : ourpoints'' = substitute(ourpoints,S)\\\\\n\\emptyLine\n\\              2        5    4     3    5   5    5    5\\\\\no34 = ideal (y x - 1, y  - x , - y  + x , z  + y  + x  - 1)\\\\\n\\emptyLine\no34 : Ideal of S\\\\\n\\endOutput\nand we compute the reduced Gr\\\"obner basis in this new order:\n\\beginOutput\ni35 : G = gens gb ourpoints''\\\\\n\\emptyLine\no35 = | x13-1 y-x6 z5+x5+x4-1 |\\\\\n\\emptyLine\n\\              1       3\\\\\no35 : Matrix S  <--- S\\\\\n\\endOutput\nTo compute the elimination ideal we use the following command:\n\\beginOutput\ni36 : ideal selectInSubring(1,G)\\\\\n\\emptyLine\n\\             13\\\\\no36 = ideal(x   - 1)\\\\\n\\emptyLine\no36 : Ideal of S\\\\\n\\endOutput\n\n\\section{Monomials Under the Staircase}\n\nInvariants of an algebraic variety, such as its dimension\nand degree, are computed from an initial monomial ideal.\nThis computation amounts to the combinatorial task\nof analyzing the collection of standard monomials,\nthat is, the monomials under the staircase \\cite[Chapter 9]{CLO1}.\nIn this section we demonstrate some basic operations on\nmonomial ideals in \\Mtwo.\n\nLet us create a non-trivial staircase in three dimensions\nby taking the third power of the initial monomial from line {\\tt i20}.\n\\beginOutput\ni37 : M = staircase^3\\\\\n\\emptyLine\n\\              3 6   2 4 5   2 9   7 4     2 10     7 5   6 2 5     12  $\\cdot\\cdot\\cdot$\\\\\no37 = ideal (x y , x y z , x y , x y , x*y z  , x*y z , x y z , x*y  , $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\no37 : Ideal of R\\\\\n\\endOutput\nThe number of current generators of this ideal equals\n\\beginOutput\ni38 : numgens M\\\\\n\\emptyLine\no38 = 20\\\\\n\\endOutput\nTo see all generators we can transpose the matrix of minimal generators:\n\\beginOutput\ni39 : transpose gens M\\\\\n\\emptyLine\no39 = \\{-9\\}  | x3y6   |\\\\\n\\      \\{-11\\} | x2y4z5 |\\\\\n\\      \\{-11\\} | x2y9   |\\\\\n\\      \\{-11\\} | x7y4   |\\\\\n\\      \\{-13\\} | xy2z10 |\\\\\n\\      \\{-13\\} | xy7z5  |\\\\\n\\      \\{-13\\} | x6y2z5 |\\\\\n\\      \\{-13\\} | xy12   |\\\\\n\\      \\{-13\\} | x6y7   |\\\\\n\\      \\{-13\\} | x11y2  |\\\\\n\\      \\{-15\\} | z15    |\\\\\n\\      \\{-15\\} | y5z10  |\\\\\n\\      \\{-15\\} | x5z10  |\\\\\n\\      \\{-15\\} | y10z5  |\\\\\n\\      \\{-15\\} | x5y5z5 |\\\\\n\\      \\{-15\\} | x10z5  |\\\\\n\\      \\{-15\\} | y15    |\\\\\n\\      \\{-15\\} | x5y10  |\\\\\n\\      \\{-15\\} | x10y5  |\\\\\n\\      \\{-15\\} | x15    |\\\\\n\\emptyLine\n\\              20       1\\\\\no39 : Matrix R   <--- R\\\\\n\\endOutput\nNote that this generating set is not minimal; see {\\tt o48} below.\nThe number of standard monomials equals\n\\beginOutput\ni40 : degree M\\\\\n\\emptyLine\no40 = 690\\\\\n\\endOutput\nTo list all the standard monomials we first create the residue ring\n\\beginOutput\ni41 : S = R/M\\\\\n\\emptyLine\no41 = S\\\\\n\\emptyLine\no41 : QuotientRing\\\\\n\\endOutput\nand then we ask for a vector space basis of the residue 
ring:\n\\beginOutput\ni42 : basis S\\\\\n\\emptyLine\no42 = | 1 x x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x14y x14yz x14 $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              1       690\\\\\no42 : Matrix S  <--- S\\\\\n\\endOutput\nLet us count how many standard monomials there are of a given degree.\nThe following table represents the Hilbert function\nof the residue ring.\n\\beginOutput\ni43 : tally apply(flatten entries basis(S),degree)\\\\\n\\emptyLine\no43 = Tally\\{\\{0\\} => 1  \\}\\\\\n\\            \\{1\\} => 3\\\\\n\\            \\{10\\} => 63\\\\\n\\            \\{11\\} => 69\\\\\n\\            \\{12\\} => 73\\\\\n\\            \\{13\\} => 71\\\\\n\\            \\{14\\} => 66\\\\\n\\            \\{15\\} => 53\\\\\n\\            \\{16\\} => 38\\\\\n\\            \\{17\\} => 23\\\\\n\\            \\{18\\} => 12\\\\\n\\            \\{19\\} => 3\\\\\n\\            \\{2\\} => 6\\\\\n\\            \\{3\\} => 10\\\\\n\\            \\{4\\} => 15\\\\\n\\            \\{5\\} => 21\\\\\n\\            \\{6\\} => 28\\\\\n\\            \\{7\\} => 36\\\\\n\\            \\{8\\} => 45\\\\\n\\            \\{9\\} => 54\\\\\n\\emptyLine\no43 : Tally\\\\\n\\endOutput\nThus the largest degree of a standard monomial is nineteen,\nand there are three standard monomials of that degree:\n\\beginOutput\ni44 : basis(19,S)\\\\\n\\emptyLine\no44 = | x14yz4 x9yz9 x4yz14 |\\\\\n\\emptyLine\n\\              1       3\\\\\no44 : Matrix S  <--- S\\\\\n\\endOutput\nThe most recently defined ring involving {\\tt x}, {\\tt y}, and {\\tt z} was\n{\\tt S}, so all computations involving those variables are done in the\nresidue ring {\\tt S}.\n%% The current ring {\\tt S} is the residue ring. All calculations\n%% are done in {\\tt S}. \nFor instance, we can also obtain the\nstandard monomials of  degree nineteen as follows:\n\\beginOutput\ni45 : (x+y+z)^19\\\\\n\\emptyLine\n\\            14   4          9   9         4   14\\\\\no45 = 58140x  y*z  + 923780x y*z  + 58140x y*z\\\\\n\\emptyLine\no45 : S\\\\\n\\endOutput\nAn operation on ideals which will occur frequently throughout this\nbook is the computation of minimal free resolutions. 
This is done as follows:\n\\beginOutput\ni46 : C = res M\\\\\n\\emptyLine\n\\       1      16      27      12\\\\\no46 = R  <-- R   <-- R   <-- R   <-- 0\\\\\n\\                                      \\\\\n\\      0      1       2       3       4\\\\\n\\emptyLine\no46 : ChainComplex\\\\\n\\endOutput\nThis shows that our ideal {\\tt M} has sixteen minimal generators.\nThey are the entries in the leftmost matrix of the chain complex {\\tt C}:\n\\beginOutput\ni47 : C.dd_1\\\\\n\\emptyLine\no47 = | x3y6 x7y4 x2y9 x2y4z5 x11y2 xy12 x6y2z5 xy7z5 xy2z10 x15 y15 x $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              1       16\\\\\no47 : Matrix R  <--- R\\\\\n\\endOutput\nThis means that four of the twenty generators in {\\tt o39} were redundant.\nWe construct the set consisting of the four redundant generators\nas follows:\n\\beginOutput\ni48 : set flatten entries gens M - set flatten entries C.dd_1\\\\\n\\emptyLine\n\\            6 7   10 5   5 10   5 5 5\\\\\no48 = Set \\{x y , x  y , x y  , x y z \\}\\\\\n\\emptyLine\no48 : Set\\\\\n\\endOutput\nHere {\\tt flatten entries} turns the matrix ${\\tt M}$ into a single list.\nThe command {\\tt set} turns that list into a set, to which we\ncan apply the difference operation for sets.\n\nLet us now take a look at the first syzygies \n(or {\\it minimal S-pairs} \\cite[\\S 2.9]{CLO1})\namong  the sixteen minimal generators.\nThey correspond to the columns of the second matrix in our resolution {\\tt C}:\n\\beginOutput\ni49 : C.dd_2\\\\\n\\emptyLine\no49 = \\{9\\}  | -y3 -x4 0   -z5 0   0   0   0   0   0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{11\\} | 0   y2  0   0   0   -x4 0   0   -z5 0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{11\\} | x   0   -y3 0   0   0   0   0   0   -z5 0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{11\\} | 0   0   0   xy2 -y3 0   -x4 0   x5  y5  0   -z5 0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{13\\} | 0   0   0   0   0   y2  0   0   0   0   0   0   0   -x4 0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{13\\} | 0   0   x   0   0   0   0   -y3 0   0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{13\\} | 0   0   0   0   0   0   y2  0   0   0   0   0   0   0   - $\\cdot\\cdot\\cdot$\\\\\n\\      \\{13\\} | 0   0   0   0   x   0   0   0   0   0   -y3 0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{13\\} | 0   0   0   0   0   0   0   0   0   0   0   xy2 -y3 0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   0   0   0   y2  0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   x   0   0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   0   0   0   0   y $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   x   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   0   0   x   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\      \\{15\\} | 0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              16       27\\\\\no49 : Matrix R   <--- R\\\\\n\\endOutput\nThe first column represents the S-pair between the\nfirst generator $x^3 y^6 $ and the third generator $x^2 y^9$.\nIt is natural to form the {\\it S-pair graph} with $16$ vertices and\n$27$ edges represented by  this matrix. 
According to the\ngeneral theory described in \\cite{MS}, this is a planar graph\nwith $12$ regions. The regions correspond to the $12$ second syzygies,\nthat is, to the columns of the matrix\n\\beginOutput\ni50 : C.dd_3\\\\\n\\emptyLine\no50 = \\{12\\} | z5  0   0   0   0   0   0   0   0   0   0   0   |\\\\\n\\      \\{13\\} | 0   z5  0   0   0   0   0   0   0   0   0   0   |\\\\\n\\      \\{14\\} | 0   0   z5  0   0   0   0   0   0   0   0   0   |\\\\\n\\      \\{14\\} | -y3 -x4 0   0   0   0   0   0   0   0   0   0   |\\\\\n\\      \\{14\\} | 0   0   -y5 z5  0   0   0   0   0   0   0   0   |\\\\\n\\      \\{15\\} | 0   0   0   0   z5  0   0   0   0   0   0   0   |\\\\\n\\      \\{15\\} | 0   0   0   0   -x5 z5  0   0   0   0   0   0   |\\\\\n\\      \\{16\\} | 0   0   0   0   0   0   z5  0   0   0   0   0   |\\\\\n\\      \\{16\\} | 0   y2  0   0   -x4 0   0   0   0   0   0   0   |\\\\\n\\      \\{16\\} | x   0   -y3 0   0   0   0   0   0   0   0   0   |\\\\\n\\      \\{16\\} | 0   0   0   0   0   0   -y5 z5  0   0   0   0   |\\\\\n\\      \\{16\\} | 0   0   0   -y3 0   -x4 0   0   0   0   0   0   |\\\\\n\\      \\{16\\} | 0   0   0   0   0   0   0   -y5 z5  0   0   0   |\\\\\n\\      \\{17\\} | 0   0   0   0   0   0   0   0   0   z5  0   0   |\\\\\n\\      \\{17\\} | 0   0   0   0   0   0   0   0   0   -x5 z5  0   |\\\\\n\\      \\{17\\} | 0   0   0   0   0   0   0   0   0   0   -x5 z5  |\\\\\n\\      \\{18\\} | 0   0   0   0   y2  0   0   0   0   -x4 0   0   |\\\\\n\\      \\{18\\} | 0   0   x   0   0   0   -y3 0   0   0   0   0   |\\\\\n\\      \\{18\\} | 0   0   0   0   0   y2  0   0   0   0   -x4 0   |\\\\\n\\      \\{18\\} | 0   0   0   x   0   0   0   -y3 0   0   0   0   |\\\\\n\\      \\{18\\} | 0   0   0   0   0   0   0   0   -y3 0   0   -x4 |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   0   0   0   y2  0   0   |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   x   0   0   0   0   0   |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   0   0   0   0   y2  0   |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   0   x   0   0   0   0   |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   0   0   0   0   0   y2  |\\\\\n\\      \\{20\\} | 0   0   0   0   0   0   0   0   x   0   0   0   |\\\\\n\\emptyLine\n\\              27       12\\\\\no50 : Matrix R   <--- R\\\\\n\\endOutput\nBut we are getting ahead of ourselves. Homological algebra and resolutions\nwill be covered in the next chapter, and monomial ideals\nwill appear in the chapter of Ho\\c{s}ten and Smith.\nLet us return to Cox, Little and O'Shea \\cite{CLO2}.\n\n\\section{Pennies, Nickels, Dimes and Quarters}\n\nWe now come to an application of Gr\\\"obner bases which appears in\n\\cite[Section 8.1]{CLO2}: {\\sl Integer Programming}. This is the problem of\n minimizing a linear objective function over the set of non-negative \ninteger solutions of a system of linear equations.  We demonstrate\nsome techniques for doing this in \\Mtwo. 
Along the way, we learn about\nmultigraded polynomial rings and how to compute\nGr\\\"obner bases with respect to monomial orders defined by weights.\nOur running example is the linear system defined by the matrix:\n\\beginOutput\ni51 : A = \\{\\{1, 1, 1, 1\\},\\\\\n\\           \\{1, 5,10,25\\}\\}\\\\\n\\emptyLine\no51 = \\{\\{1, 1, 1, 1\\}, \\{1, 5, 10, 25\\}\\}\\\\\n\\emptyLine\no51 : List\\\\\n\\endOutput\nFor the algebraic study of integer programming problems, a good starting\npoint is to work in a multigraded polynomial ring, here in four variables:\n\\beginOutput\ni52 : R = QQ[p,n,d,q, Degrees => transpose A]\\\\\n\\emptyLine\no52 = R\\\\\n\\emptyLine\no52 : PolynomialRing\\\\\n\\endOutput\nThe degree of each variable is the corresponding column vector of the matrix {\\tt A}.\nEach variable represents one of the four coins in the U.S. currency system:\n\\beginOutput\ni53 : degree d\\\\\n\\emptyLine\no53 = \\{1, 10\\}\\\\\n\\emptyLine\no53 : List\\\\\n\\endOutput\n\\beginOutput\ni54 : degree q\\\\\n\\emptyLine\no54 = \\{1, 25\\}\\\\\n\\emptyLine\no54 : List\\\\\n\\endOutput\nEach monomial represents a collection of coins. For instance, suppose\nyou own four pennies, eight nickels, ten dimes, and three quarters:\n\\beginOutput\ni55 : degree(p^4*n^8*d^10*q^3)\\\\\n\\emptyLine\no55 = \\{25, 219\\}\\\\\n\\emptyLine\no55 : List\\\\\n\\endOutput\nThen you have a total of 25 coins worth two dollars and nineteen cents.\nThere are nine possible ways of having 25 coins of the same value, including this one:\n\\beginOutput\ni56 : h = basis(\\{25,219\\}, R)\\\\\n\\emptyLine\no56 = | p14n2d2q7 p9n8d2q6 p9n5d6q5 p9n2d10q4 p4n14d2q5 p4n11d6q4 p4n8 $\\cdot\\cdot\\cdot$\\\\\n\\emptyLine\n\\              1       9\\\\\no56 : Matrix R  <--- R\\\\\n\\endOutput\nFor just counting the number of columns of this matrix\nwe can use the command\n\\beginOutput\ni57 : rank source h\\\\\n\\emptyLine\no57 = 9\\\\\n\\endOutput\nHow many ways can you make change for ten dollars using $100$ coins?\n\\beginOutput\ni58 : rank source basis(\\{100,1000\\}, R)\\\\\n\\emptyLine\no58 = 182\\\\\n\\endOutput\nA typical integer programming problem is this: among all 182 ways of\nexpressing ten dollars using 100 coins, which one uses the fewest dimes?\nWe set up the Conti-Traverso algorithm \\cite[\\S 8.1]{CLO2} for \nanswering this question. 
We use the following ring with the lexicographic\norder and with the variable order:\ndimes (d) before pennies (p) before nickels (n) before quarters (q).\n\\beginOutput\ni59 : S = QQ[x, y, d, p, n, q, \\\\\n\\          MonomialOrder => Lex, MonomialSize => 16]\\\\\n\\emptyLine\no59 = S\\\\\n\\emptyLine\no59 : PolynomialRing\\\\\n\\endOutput\nThe option {\\tt MonomialSize} advises \\Mtwo to use more space to store the\nexponents of monomials, thereby avoiding a potential overflow.\n\nWe define an ideal with one generator for each column of the matrix A.\n\\beginOutput\ni60 : I = ideal( p - x*y, n - x*y^5, d - x*y^10, q - x*y^25)\\\\\n\\emptyLine\n\\                             5           10           25\\\\\no60 = ideal (- x*y + p, - x*y  + n, - x*y   + d, - x*y   + q)\\\\\n\\emptyLine\no60 : Ideal of S\\\\\n\\endOutput\nThe integer program is solved by normal form reduction with respect\nto the following Gr\\\"obner basis consisting of binomials.\n\\beginOutput\ni61 : transpose gens gb I\\\\\n\\emptyLine\no61 = \\{-6\\}  | p5q-n6     |\\\\\n\\      \\{-4\\}  | d4-n3q     |\\\\\n\\      \\{-3\\}  | yn2-dp     |\\\\\n\\      \\{-6\\}  | yp4q-dn4   |\\\\\n\\      \\{-4\\}  | yd3-pnq    |\\\\\n\\      \\{-6\\}  | y2p3q-d2n2 |\\\\\n\\      \\{-5\\}  | y2d2n-p2q  |\\\\\n\\      \\{-7\\}  | y2d2p3-n5  |\\\\\n\\      \\{-6\\}  | y3p2q-d3   |\\\\\n\\      \\{-6\\}  | y3dp2-n3   |\\\\\n\\      \\{-5\\}  | y4p-n      |\\\\\n\\      \\{-6\\}  | y5n-d      |\\\\\n\\      \\{-8\\}  | y6d2-pq    |\\\\\n\\      \\{-16\\} | y15d-q     |\\\\\n\\      \\{-7\\}  | xq-y5d2    |\\\\\n\\      \\{-5\\}  | xn-y3p2    |\\\\\n\\      \\{-2\\}  | xd-n2      |\\\\\n\\      \\{-2\\}  | xy-p       |\\\\\n\\emptyLine\n\\              18       1\\\\\no61 : Matrix S   <--- S\\\\\n\\endOutput\nWe fix the quotient ring, so the reduction to normal form\nwill happen automatically.\n\\beginOutput\ni62 : S' = S/I\\\\\n\\emptyLine\no62 = S'\\\\\n\\emptyLine\no62 : QuotientRing\\\\\n\\endOutput\nYou need at least two dimes to express one dollar with ten coins.\n\\beginOutput\ni63 : x^10 * y^100\\\\\n\\emptyLine\n\\       2 6 2\\\\\no63 = d n q\\\\\n\\emptyLine\no63 : S'\\\\\n\\endOutput\nBut you can express ten dollars with a hundred coins none of which is a dime.\n\\beginOutput\ni64 : x^100 * y^1000\\\\\n\\emptyLine\n\\       75 25\\\\\no64 = n  q\\\\\n\\emptyLine\no64 : S'\\\\\n\\endOutput\nThe integer program is infeasible if and only if the normal form still\ncontains the variable $x$ or the variable $y$. For instance, you cannot\nexpress ten dollars with less than forty coins:\n\\beginOutput\ni65 : x^39 * y^1000\\\\\n\\emptyLine\n\\       25 39\\\\\no65 = y  q\\\\\n\\emptyLine\no65 : S'\\\\\n\\endOutput\nWe now introduce a new term order on the polynomial ring, defined\nby assigning a weight to each variable. Specifically, we assign\nweights for each of the coins. 
For instance,\nlet pennies have weight 5, nickels weight 7, \ndimes weight 13 and quarters weight 17.\n\\beginOutput\ni66 : weight = (5,7,13,17)\\\\\n\\emptyLine\no66 = (5, 7, 13, 17)\\\\\n\\emptyLine\no66 : Sequence\\\\\n\\endOutput\nWe set up a new ring with the resulting weight term order, and work modulo\nthe same ideal as before in this new ring.\n\\beginOutput\ni67 : T = QQ[x, y, p, n, d, q, \\\\\n\\                Weights => \\{\\{1,1,0,0,0,0\\},\\{0,0,weight\\}\\},\\\\\n\\                MonomialSize => 16]/\\\\\n\\            (p - x*y, n - x*y^5, d - x*y^10, q - x*y^25);\\\\\n\\endOutput\nOne dollar with ten coins:\n\\beginOutput\ni68 : x^10 * y^100\\\\\n\\emptyLine\n\\       5 2 3\\\\\no68 = p d q\\\\\n\\emptyLine\no68 : T\\\\\n\\endOutput\nTen dollars with one hundred coins:\n\\beginOutput\ni69 : x^100 * y^1000\\\\\n\\emptyLine\n\\       60 3 37\\\\\no69 = p  n q\\\\\n\\emptyLine\no69 : T\\\\\n\\endOutput\nHere is an optimal solution which involves all four types of coins:\n\\beginOutput\ni70 : x^234 * y^5677\\\\\n\\emptyLine\n\\       2 4 3 225\\\\\no70 = p n d q\\\\\n\\emptyLine\no70 : T\\\\\n\\endOutput\n", "meta": {"hexsha": "46da6eb7f7377e931c2061a063c66e7aaad11653", "size": 28013, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/ComputationsBook/chapters/varieties/chapter-m2.tex", "max_stars_repo_name": "d-torrance/Macaulay2-web-site", "max_stars_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-27T08:01:17.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-27T08:01:17.000Z", "max_issues_repo_path": "Book/ComputationsBook/chapters/varieties/chapter-m2.tex", "max_issues_repo_name": "d-torrance/Macaulay2-web-site", "max_issues_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2018-04-17T19:52:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-07T01:08:10.000Z", "max_forks_repo_path": "Book/ComputationsBook/chapters/varieties/chapter-m2.tex", "max_forks_repo_name": "d-torrance/Macaulay2-web-site", "max_forks_repo_head_hexsha": "edb1d0b607c5aa00ffbcf403f2403961c6d6083a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-01-08T16:48:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-10T21:19:02.000Z", "avg_line_length": 32.9564705882, "max_line_length": 93, "alphanum_fraction": 0.5901545711, "num_tokens": 10740, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527631, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5650445766538489}}
{"text": "%!TEX program = xelatex\n\n\\documentclass[12pt,a4paper]{article}\n\\usepackage{xeCJK}\n\\usepackage{amsmath}\n\\setmainfont{Times New Roman}\n\\usepackage{setspace}\n\\usepackage{caption}\n\\usepackage{graphicx, subfig}\n\\usepackage{float}\n\\usepackage{listings}\n\\usepackage{booktabs}\n\\usepackage{setspace}%\u4f7f\u7528\u95f4\u8ddd\u5b8f\u5305\n\\usepackage{mathtools}\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n    \\newcommand{\\dd}{\\mathrm{d}}\n\\usepackage{tcolorbox}\n    \\tcbuselibrary{xparse}\n        \\DeclareTotalTCBox{\\verbbox}{ O{green} v !O{} }\n            {fontupper=\\ttfamily,nobeforeafter,tcbox raise base,\n             arc=0pt,outer arc=0pt,top=0pt,bottom=0pt,left=0mm,\n             right=0mm,leftrule=0pt,rightrule=0pt,toprule=0.3mm,\n             bottomrule=0.3mm,boxsep=0.5mm,bottomrule=0.3mm,boxsep=0.5mm,\n             colback=#1!10!white,colframe=#1!50!black,#3}{#2}\n\\usepackage{color}\n\\usepackage{textcomp}\n\\definecolor{listinggray}{gray}{0.9}\n\\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}\n\\lstset{\n\tbackgroundcolor=\\color{lbcolor},\n\ttabsize=4,\n\trulecolor=,\n\tlanguage=matlab,\n        basicstyle=\\scriptsize,\n        upquote=true,\n        aboveskip={1.5\\baselineskip},\n        columns=fixed,\n        showstringspaces=false,\n        extendedchars=true,\n        breaklines=true,\n        prebreak = \\raisebox{0ex}[0ex][0ex]{\\ensuremath{\\hookleftarrow}},\n        frame=single,\n        showtabs=false,\n        showspaces=false,\n        showstringspaces=false,\n        identifierstyle=\\ttfamily,\n        keywordstyle=\\color[rgb]{0,0,1},\n        commentstyle=\\color[rgb]{0.133,0.545,0.133},\n        stringstyle=\\color[rgb]{0.627,0.126,0.941},\n}\n\n\\begin{document} \n\\title{homework12}\n\t\\author{11611118 \u90ed\u601d\u6e90}  \n\\begin{spacing}{1.5}%%\u884c\u95f4\u8ddd\u53d8\u4e3adouble-space\n\n\\section{Problem 1}\n\nProblem: Maximize $f(x_1,x_2)$.\n\\[\nf(x_1,x_2) = x_1(P_1-a_1x_1-b_1x_2)+x_2(P_2-a_2x_1-b_2x_2)-F-C_1x_1-C_2x_2\n\\]\n1. Slove the problem by calculus.\n\\begin{equation*}\n\t\\begin{aligned}\n\t\t\\frac{\\partial f(x_1,x_2)}{\\partial x_1} \n        &= P_1-C_1-2a_1x_1-a_2x_2-b_1x_2 \\\\ \n        \\frac{\\partial f(x_1,x_2)}{\\partial x_2} \n        &= P_2-C_2-a_2x_1-b_1x_1-2b_2x_2 \\\\ \n        \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_1^2} \n        &= -2a_1 \\\\ \n        \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_1 \\partial x_2} \n        &= -a_2-b_1 \\\\ \n        \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_2^2} \n        &= -2b_2 \\\\ \n\t\\end{aligned}\n\\end{equation*}\n\nThere should be :\n\\begin{equation*}\n\t\\begin{aligned}\n\t\t\\frac{\\partial f(x_1,x_2)}{\\partial x_1} &= 0 \\\\\n        \\frac{\\partial f(x_1,x_2)}{\\partial x_2} &= 0 \\\\\n        \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_1 \\partial x_2} &< 0 \\\\\n        (\\frac{\\partial^2 f(x_1,x_2)}{\\partial x_1 \\partial x_2})^2 &\n        - \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_1^2} \\times\n        \\frac{\\partial^2 f(x_1,x_2)}{\\partial x_2^2} < 0 \n\t\\end{aligned}\n\\end{equation*}\n\n\\newpage\nSo :\n\\begin{equation*}\n    \\begin{aligned}\n        x_2 &= \n        \\frac{2a_1(P_2-C_2)-(a_2+b_1)(P_1-C_1)}{4a_1b_2-(a_2+b_1)^2}  \\\\\n        x_1 &= \\frac{P_1-C_1}{2a_1} - \\frac{a_2+b_1}{2a_1} \\times\n        \\frac{2a_1(P_2-C_2)-(a_2+b_1)(P_1-C_1)}{4a_1b_2-(a_2+b_1)^2}  \\\\\n        & \\qquad \\qquad a_2 + b_1 > 0 \\\\\n        & \\qquad \\qquad (a_2 + b_1)^2-4a_1b_2 < 0 \\\\\n    \\end{aligned}\n\\end{equation*}\n2. 
2. Solve the problem via MATLAB.\n\n\\begin{lstlisting}[language=matlab]\na1 = 1; a2 = 2; b1 = 1; b2 = 3;\nc1 = 1; c2 = 2; P1 = 5; P2 = 3; F = 3;\n% maximize f by minimizing its negative\nf = @(x)(-1*(x(1)*(P1-a1*x(1)-b1*x(2)) + ...\n          x(2)*(P2-a2*x(1)-b2*x(2)) ...\n          - F - c1*x(1) - c2*x(2)));\nopts = optimoptions(@fmincon,'Algorithm','interior-point');\nproblem = createOptimProblem('fmincon','x0',[0;0],'objective',f,'options',opts);\n[x,fval,eflag,output] = fmincon(problem)\n\\end{lstlisting}\n\n\\newpage\n\\section{Problem 2}\n\nProblem: Minimize $f(x_1,x_2)$, such that $x_1+2x_2=12$.\n\\[\nf(x_1,x_2) = \\frac{23}{x_1} + x_1 + \\frac{30}{x_2} - 6x_2\n\\]\n1. Please present a short review of the Lagrange Multiplier Method. \\\\\\\\\nThe Lagrange multiplier method is an optimization technique, mainly used to solve optimization problems subject to constraints. The basic idea is to transform a constrained optimization problem with $N$ variables and $K$ constraints into an unconstrained optimization problem in $(N+K)$ variables by introducing Lagrange multipliers. \\\\\\\\\nThe standard form of a constrained optimization problem can be expressed by the following formula:\n\\begin{equation}\n    \\begin{aligned}\n        min \\quad & f(x)\\\\\n        s.t.: \\quad & g_i(x) \\leq 0, i=1,...,m\\\\\n              \\quad & h_j(x) = 0, j=1,...,t\\\\\n    \\end{aligned}\n\\end{equation}\nThe basic idea of the Lagrange multiplier method is to transform the constrained optimization problem in formula (1) into an unconstrained optimization problem. Its concrete implementation is as follows:\n\\begin{equation}\n    \\begin{aligned}\n        L(x;\\alpha,\\beta) = f(x) + \\sum_{i=1}^m \\alpha_i g_i(x) + \\sum_{j=1}^t \\beta_j h_j(x)\n    \\end{aligned}\n\\end{equation}\nHere $L$ is the Lagrangian, and $\\alpha$, $\\beta$ are the Lagrange multipliers, satisfying ${\\alpha}_i \\geq 0$. $x$ is the primal variable, $\\alpha$ and $\\beta$ are the dual variables. \n\n\\newpage\n\\noindent For the problem:\n\\begin{equation*}\n    \\begin{aligned}\n        min \\quad & f(x)\\\\\n        s.t.: \\quad & g(x) = 0\n    \\end{aligned}\n\\end{equation*}\n\n\\noindent From a geometric point of view, the goal is to look for the points on the surface determined by the constraint equation $g(x)=0$ that minimize the objective function. \\\\\\\\\nIt is not difficult to draw the following conclusions: \\\\\n1. For any point on the constraint surface, the gradient $\\nabla g(x)$ at that point is orthogonal to the constraint surface \\\\\n2. For the optimal solution $x^*$, the gradient of the objective function at this point, \n$\\nabla f(x^*)$, is also orthogonal to the constraint surface \\\\\\\\\nHence, at an optimum, the two gradients must be parallel, pointing in the same or in opposite directions. That is, there exists $\\lambda\\not=0$ such that: \n\\[\n    \\nabla f(x^*) + \\lambda \\nabla g(x^*) = 0\n\\]\n
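\noindent As a minimal illustration of the method (a toy example, separate from the assigned problem): to minimize $f(x)=x^2$ subject to $g(x)=x-1=0$, form $L(x;\lambda)=x^2+\lambda(x-1)$; stationarity gives $2x+\lambda=0$ together with $x=1$, hence $\lambda=-2$, and the constrained minimizer is $x^*=1$.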
\n\noindent 2. Solve the problem by the Lagrange Multiplier Method (LMM).\n\n\\begin{equation*}\n    \\begin{aligned}\n        \\nabla f(\\vec{x}) &= (1-\\frac{23}{x_1^2},-6-\\frac{30}{x_2^2}) \\\\\n        g(\\vec{x}) &= x_1 + 2x_2 - 12 \\\\\n        \\nabla g(\\vec{x}) &= (1,2)\n    \\end{aligned}\n\\end{equation*}\nSolving $\\nabla f(x^*) + \\lambda \\nabla g(x^*) = 0$ together with $g(\\vec{x})=0$ gives:\n\\[\n\\vec{x} = (2.229, 4.885),\\quad f(\\vec{x}) = -10.62\n\\]\n\n\\newpage\n\\section{Problem 3}\n$A\\ Least\\ Squares\\ Plane$ --- Given the four points $(x_k,y_k,z_k)$\n\\[\n(0,0,0),\\quad (0,1,1),\\quad (1,1,1),\\quad (1,0,-1)\n\\]\nfind the values of $A$, $B$ and $C$ to minimize the sum of squared errors\n\\[\n\\sum_{k=1}^4 (Ax_k + By_k + C - z_k)^2\n\\]\nif the points must lie in the plane \n\\[\nz=Ax+By+C\n\\]\n\\begin{equation*}\n    \\begin{aligned}\n        W &= \\sum_{k=1}^4 (Ax_k + By_k + C - z_k)^2 \\\\\n          &= 2A^2+2B^2+4C^2+2AB+4AC+4BC-4B-2C+3 \\\\\n        \\frac{\\partial W}{\\partial A}\n          &= 4A+2B+4C = 0 \\\\\n        \\frac{\\partial W}{\\partial B}\n          &= 4B+2A+4C-4 = 0 \\\\\n        \\frac{\\partial W}{\\partial C}\n          &= 8C+4A+4B-2 = 0 \\\\\n    \\end{aligned}\n\\end{equation*} \nWe get $A=-0.5,B=1.5,C=-0.25$. \\\\\nThen we have:\n\\[\n\\mathbf{H(\\vec{x})} = \n\\begin{bmatrix} \n4 & 2 & 4 \\\\\n2 & 4 & 4 \\\\\n4 & 4 & 8 \n\\end{bmatrix}\n\\]\nwhose leading principal minors are $4$, $12$ and $32$, all positive. Hence $\\mathbf{H}$ is positive definite and $W$ attains its minimum at $(A,B,C)=(-0.5,\\,1.5,\\,-0.25)$.\n\n\n\n\\end{spacing}\n\n\\end{document}", "meta": {"hexsha": "68d0e306bbe6894b98c91dc934ec6693733b81c8", "size": 7514, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW12/HW12.tex", "max_stars_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_stars_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-30T11:32:36.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-30T11:32:36.000Z", "max_issues_repo_path": "Homework/HW12/HW12.tex", "max_issues_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_issues_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW12/HW12.tex", "max_forks_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_forks_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0, "max_line_length": 370, "alphanum_fraction": 0.6268299175, "num_tokens": 2824, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8080672204860316, "lm_q1q2_score": 0.5650445764375202}}
{"text": "\\section{Quadric Surfaces}\r\n\\noindent\r\nQuadric surfaces extend parabolas and other shapes composed of at most squared terms into 3D.\r\n\r\n\\input{./differentialMultivariableCalculus/paraboloids}\r\n\\input{./differentialMultivariableCalculus/hyperboloids}\r\n\\input{./differentialMultivariableCalculus/hyperbolicParaboloids}\r\n\\input{./differentialMultivariableCalculus/ellipsoids}\r\n\\input{./differentialMultivariableCalculus/3DCylinders}", "meta": {"hexsha": "86d9b8551c0c3f27481ff7cbdcb1c5812b7b55d5", "size": 429, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/quadricSurfaces.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/quadricSurfaces.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/quadricSurfaces.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 47.6666666667, "max_line_length": 94, "alphanum_fraction": 0.8461538462, "num_tokens": 117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5650445651252431}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{The multivariate gaussian}\n\n\\marginpar{Monday\\\\ 2020-10-19, \\\\ compiled \\\\ \\today}\n\nThis will be a once-in-a-lifetime calculation, we need the result practically but it is good to have seen the derivation once. \n\nThe Multi-Variate Normal is in general \n%\n\\begin{align}\n\\mathcal{N} (\\vec{x}, \\vec{\\mu}, C) = \n\\frac{1}{(2 \\pi )^{n/2} \\sqrt{\\det C}}\n\\exp(- \\frac{1}{2} \\qty(\\vec{x} - \\vec{\\mu})^{\\top} C^{-1} (\\vec{x} - \\vec{\\mu}))\n\\,.\n\\end{align}\n\nGaussians have many applications: with lots of data points, both the likelihood and the posterior converge to Gaussians with the same mean.\nWe have many analytical results about Gaussians.\n\n\\paragraph{Normalization}\n\nWe define \\(V = C^{-1}\\), the precision matrix. \nThis is diagonalized as \\(O^{\\top} V O\\), where \\(\\Lambda \\) is diagonal with eigenvalues \\(\\lambda _k\\) and \\(O\\) is orthogonal. \nThen, the integral of the exponential \\(\\exp( -\\vec{y}^{\\top} V \\vec{y}/2)\\) can be expressed as \n%\n\\begin{align}\n\\int \\dd[n]{y} \\exp(- \\frac{1}{2} \\vec{y}^{\\top} O^{\\top} V O \\vec{y}) \\det O &= \n\\int \\dd[n]{y} \\exp(- \\frac{1}{2} \\lambda _k y_k^2) \n \\\\\n&= \\prod_{k=1}^{n} \\int \\dd{y}_k \\exp(- \\frac{1}{2} \\lambda _k y_k^2)   \\\\\n&= \\prod_{k=1}^{n} \\sqrt{\\frac{2 \\pi }{\\lambda _k}} = \\frac{(2\\pi)^{n/2}}{\\sqrt{\\det \\Lambda }} = (2 \\pi )^{n/2} \\sqrt{\\det C} \n\\,.\n\\end{align}\n\n\\paragraph{Marginalization}\n\nWe have an \\(n\\)-variate MVN, which we want to marginalize over \\(M\\) parameters: the integral we want to perform is \n%\n\\begin{align}\n\\int \\dd{x_{n-M+1}} \\dots \\dd{x_n} \\frac{1}{(2 \\pi )^{n/2} \\sqrt{\\det C}} \\exp(- \\frac{1}{2} \\vec{y}^{\\top} V \\vec{y})\n\\,.\n\\end{align}\n\nWe partition the precision matrix into \\(M\\) and \\(n-M\\)-dimensional blocks: \n%\n\\begin{align}\nV = \\left[\\begin{array}{cc}\nV_{aa} & V_{ab} \\\\ \nV_{ab} & V_{bb}\n\\end{array}\\right]\n\\,.\n\\end{align}\n\nNote that there is no one-to-one correspondence with the inverse matrix: in general \\(V_{aa} \\neq C^{-1}_{aa}\\).\nThe argument of the exponential can be written, with the notation \\(\\vec{y} = \\vec{x} - \\vec{\\mu}\\):\n%\n\\begin{align}\n- \\frac{1}{2} \\vec{y}^{\\top} V \\vec{y} &= \n\\vec{y}_a^{\\top} V_{aa} \\vec{y}_a + \n\\vec{y}_b^{\\top} V_{bb} \\vec{y}_b + \n2\\vec{y}_a^{\\top} V_{ab} \\vec{y}_b \n\\,.\n\\end{align}\n\nWe then use the square completion formula: \n%\n\\begin{align}\n\\frac{1}{2} \\vec{z}^{\\top} A \\vec{z} + \\vec{b}^{\\top} \\vec{z} + c \n= \\frac{1}{2} \\qty(\\vec{z} + A^{-1} \\vec{b})^{\\top} A \\qty(\\vec{z} + A^{-1} \\vec{b}) - \\frac{1}{2} \\vec{b}^{\\top} A^{-1} \\vec{b} + c\n\\,,\n\\end{align}\n%\nwhich generalizes the procedure used to solve second-degree equations. \nWe identify \\(A = V_{bb}\\), \\(\\vec{z} = \\vec{y}_b\\), \\(\\vec{b} = V_{ab}^{\\top} \\vec{y}_a\\),  \\(c = \\vec{y}_a V_{aa} \\vec{y}_a / 2\\). \n\n\\todo[inline]{Check calculation! 
something is wrong here}\n\nFinally, we get that the marginalized distribution is still Gaussian, and is written as\n%\n\\begin{align}\n\\mathcal{N}(\\vec{x}_a | \\vec{\\mu}_a, (V_{aa} - V_{ab} V_{bb}^{-1} V_{ba})^{-1})\n\\,.\n\\end{align}\n\nThe new covariance can be expressed more simply after some matrix algebra (using the block matrix inversion formula): \nwe can show that \n%\n\\begin{align}\nC_{aa} = \\qty(V^{-1})_{aa} = \\qty(V_{aa} - V_{ab} V_{bb}^{-1} V_{ba})^{-1}\n\\,,\n\\end{align}\n%\nso the marginal PDF is just \\(\\mathcal{N}(\\vec{x}_a| \\mu_a, C_{aa})\\).\nMarginalizing for Gaussians can then be done fully analytically in \\(\\mathcal{O}(1)\\) time: we just discard the unnecessary parts of the covariance and mean. \n\nThe Hessian of the log-PDF (calculated at the mean) is minus the inverse of the covariance matrix: \\(H = - V = - C^{-1}\\). \n\n% \\todo[inline]{Hold on\\dots is this right?}\n\nThen, \\(\\sigma _m = \\sqrt{(- H)^{-1}_{mm}}\\).\n\n\\paragraph{Conditioning}\n\nNow we want to compute the conditional probability of a certain part of the parameter vector, \\(\\vec{x}_b\\), if we fix the rest of the vector to values \\(\\vec{x}_a\\). \nThis can also be done analytically: now, the mean will be \\(\\vec{\\mu}_{b | a} = \\vec{\\mu}_b - V^{-1}_{bb} V_{ba} \\qty(\\vec{x}_a - \\vec{\\mu}_a)\\), while the covariance is \\(C_{b | a} = V^{-1}_{bb}\\).\n% This almost looks like the wrong thing, \\(C_{bb} = V^{-1}_{bb}\\), but in this case it is actually correct.\n\nIf we instead fix all the other parameters, the error on the remaining one is \\(\\sigma _m = \\sqrt{(-H_{mm})^{-1}}\\): this time we invert the single matrix element.\nThis will usually be much smaller than what we would get when marginalizing. \n\n\\paragraph{Summing}\n\nIf both \\(\\vec{x}\\) and \\(\\vec{y}\\) are MVN distributed and independent, then \n%\n\\begin{align}\n\\mathbb{P}(\\vec{z}) = \\mathcal{N} (\\vec{z} | \\vec{\\mu}_x + \\vec{\\mu}_y, C_x + C_y)\n\\,.\n\\end{align}\n\nThe way to prove this is to start from the fact that \n%\n\\begin{align}\n\\mathbb{P}(z) &= \\int \\dd{x} \\dd{y} \\mathbb{P}(x) \\mathbb{P}(y) \\delta (z - (x + y))  \\\\\n&= \\int \\dd{x} \\mathbb{P}(x) \\mathbb{P}(z-x) \n\\,,\n\\end{align}\n%\na convolution. The convolution of two Gaussians is a Gaussian, so we can just calculate the mean and covariance, and we are done: \n%\n\\begin{align}\n\\expval{x + y} = \\expval{x} + \\expval{y}\n\\,,\n\\end{align}\n%\nwhile the covariance is (implying a tensor product between the vectors):\n%\n\\begin{align}\n\\expval{(\\vec{x} + \\vec{y})(\\vec{x} + \\vec{y})} - \\expval{\\vec{x}+\\vec{y}} \\expval{\\vec{x}+\\vec{y}} \n&= \\expval{\\vec{x} \\vec{x}} - \\expval{\\vec{x}}^2 + \\expval{\\vec{y} \\vec{y}} - \\expval{\\vec{y}}^2\n\\,,\n\\end{align}\n%\nusing the property that \\(\\expval{\\vec{x} \\vec{y}} - \\expval{\\vec{x}} \\expval{\\vec{y}} = 0\\) for independent variables.\n\nNote that independence \\(\\implies\\) zero correlation, but not the other way around. For example, if \\(x\\) is normally distributed with mean zero, \\(x^2\\) is \\emph{uncorrelated} with it (the correlation would be given by \\(\\expval{x^3} = 0\\)); however \\(x\\) and \\(x^2\\) are related two-to-one, certainly not independent. 
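As a concrete two-dimensional sanity check of marginalizing versus conditioning (not from the lecture, but immediate to verify): take
%
\begin{align}
C = \left[\begin{array}{cc}
\sigma_1^2 & \rho \sigma_1 \sigma_2 \\ 
\rho \sigma_1 \sigma_2 & \sigma_2^2
\end{array}\right]
\,,
\qquad
V = C^{-1} = \frac{1}{1 - \rho^2}
\left[\begin{array}{cc}
1/\sigma_1^2 & - \rho /(\sigma_1 \sigma_2) \\ 
- \rho /(\sigma_1 \sigma_2) & 1/\sigma_2^2
\end{array}\right]
\,.
\end{align}
%
Marginalizing over \(x_2\) leaves the variance \(C_{11} = \sigma_1^2\), while conditioning on \(x_2\) gives \(V_{11}^{-1} = \sigma_1^2 (1 - \rho^2) \leq \sigma_1^2\): fixing the other parameter always shrinks the error, consistently with the statement above.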
\n\n\\end{document}\n", "meta": {"hexsha": "01e7ef56a2ea7618795989d4bd95c9cda0e86b11", "size": 5817, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_third_semester/astrostatistics_cosmology/oct19.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_third_semester/astrostatistics_cosmology/oct19.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_third_semester/astrostatistics_cosmology/oct19.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 38.78, "max_line_length": 315, "alphanum_fraction": 0.6307374936, "num_tokens": 2120, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.808067204308405, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5650445651252431}}
{"text": "\\documentclass[aps,prd,amsmath,amssymb,superscriptaddress,onecolumn,\r\nnofootinbib,showpacs,preprintnumbers]{revtex4-1}\r\n\r\n\\usepackage{hyperref,color,subfigure,soul}\r\n\\usepackage{amsmath}\r\n\\usepackage{amssymb}\r\n\\usepackage[utf8]{inputenc}\r\n\\usepackage{graphicx}% Include figure files\r\n\\usepackage{dcolumn}% Align table columns on decimal point\r\n\\usepackage{bm}% bold math\r\n\\usepackage{physics}\r\n\\usepackage{cleveref}\r\n\\usepackage{tikz}\r\n\\usepackage{mathtools}\r\n\r\n\\newcommand{\\pp}{\\pi\\pi }\r\n\\newcommand{\\ma}{\\mathcal{A}}\r\n\\newcommand{\\maI}[1]{\\mathcal{A}^{(#1)}}\r\n\\newcommand{\\mvI}[1]{\\mathcal{V}^{(#1)}}\r\n\r\n\\newcommand{\\az}{\\alpha_0}\r\n\\newcommand{\\apr}{\\alpha^\\prime}\r\n\\newcommand{\\as}{\\alpha_s}\r\n\\newcommand{\\at}{\\alpha_t}\r\n\r\n\\begin{document}\r\n\r\n\\section{General $\\pp$ Scattering Amplitude}\r\nWe are interested in the structure of the the isospin invariant $\\pp$ scattering amplitude:\r\n%\r\n\t\\begin{equation}\r\n\t\\frac{1}{32\\pi}\\mel{\\pi_k(p_3)\\pi_l(p_4)}{T}{\\pi_i(p_1)\\pi_j(p_2)} \\equiv i (2 \\pi)^4 \\delta^4(p_1 + p_2 - p_3 - p_4) \\;  \\mathcal{M}_{ijkl}(s,t,u) \r\n\t\\end{equation}\r\n%\r\nwhere the Cartesian indices denote isospin indices and $s + t + u = 16 \\; m_\\pi^2$ are the standard Mandelstam variables.\r\nWe can factor out the isospin dependence and define three scalar amplitudes representing the scattering amplitdes in the s-, t-, and u-channels respectively:\r\n%\r\n\t\\begin{equation} \\label{scalaramps}\r\n\t\\mathcal{M}_{ijkl}(s,t,u) = \\delta_{ij}\\delta_{kl} \\; A(s,t,u) + \\delta_{ik}\\delta_{jl} \\; B(s,t,u) + \\delta_{il}\\delta_{jk} \\; C(s,t,u).\r\n\t\\end{equation}\r\n%\r\nThe three scalar amplitudes in \\cref{scalaramps} are not independent thanks to symmetries of the initial and final state pions. Bose symmetry lets us freely interchange pions in the final state, i.e. amplitudes are invariant from exchange $t \\leftrightarrow u$. Additionally, crossing symmetry relates cross-channel amplitudes must by the interchange of a initial and final state pion, ($i\\leftrightarrow k$ or $i \\leftrightarrow j$ or similarly $s \\leftrightarrow t$ and $s \\leftrightarrow u$). This gives us relations:\r\n%\r\n\t\\begin{equation} \\label{symmetries}\r\n\t B(s,t,u) = A(t,s,u) \\quad \\textrm{and} \\quad C(s,t,u) = A(u,t,s). \r\n\t\\end{equation}\r\n%\r\n\r\nThus we can write \\cref{scalaramps} in terms of a single amplitude:\r\n%\r\n\t\\begin{equation}\\label{matrixelement}\r\n\t\\mathcal{M}_{ijkl}(s,t,u) =  \\delta_{ij}\\delta_{kl} \\; A(s, t,u) + \\delta_{ik}\\delta_{jl} \\; A(t, s, u) + \\delta_{il}\\delta_{jk} \\; A(u, t,s).\r\n\t\\end{equation}\r\n\t\\subsection{Isospin Amplitudes}\r\n\r\nSimilar to \\cite{JPACCollaboration2018} we want to look at amplitudes which have a well-defined isopin in the $s$-channel which we can project out. 
We define two-pion isospin states by:\r\n%\r\n\t\\begin{equation} \\label{isospin}\r\n\t\\ket{\\pi^i \\pi^j}_I = P^{I}_{ijkl} \\ket{\\pi^k \\pi^l}\r\n\t\\end{equation}\r\n%\r\nwhere \r\n%\r\n\t\\begin{equation} \\label{projectors}\r\n\tP^{(0)}_{ijkl} = \\frac{1}{3}\\delta_{ij}\\delta_{kl},  \\quad P^{(1)}_{ijkl} = \\frac{1}{2}(\\delta_{ik}\\delta_{jl}-\\delta_{il}\\delta_{jk}),  \\quad \\textrm{and} \\quad   P^{(2)}_{ijkl} = \\frac{1}{2}\r\n\t(\\delta_{ik}\\delta_{jl} + \\delta_{il}\\delta_{jk}) - \\frac{1}{3} \\delta_{ij}\\delta_{kl}.\r\n\t\\end{equation}\r\n%\r\n\r\nCombining \\cref{matrixelement} and \\cref{projectors} we get:\r\n%\r\n\t\\begin{equation}\\label{iso-decomp}\r\n\t\\mathcal{M}_{ijkl}(s,t,u) = P^{(0)}_{ijkl} \\; A^{(0)}(s,t,u) + P^{(1)}_{ijkl}  \\; A^{(1)}(s,t,u) +  P^{(2)}_{ijkl} \\; A^{(2)}(s,t,u).\r\n\t\\end{equation}\r\n%\r\nAlso comparing \\cref{matrixelement,iso-decomp} we can write our isospin amplitudes in terms of the original scalar amplitude: \r\n%\r\n\t\\begin{align} \\label{matrix}\r\n\t\t\\begin{bmatrix} \r\n\t\tA^{(0)}(s,t,u) \\\\ A^{(1)} (s,t,u) \\\\ A^{(2)}(s,t,u)\r\n\t\t\\end{bmatrix} \r\n\t=\r\n\t\t\\begin{bmatrix*}[r]\r\n\t\t\t3 & 1 & 1 \\\\ \t0 & 1 & -1 \\\\ 0 & 1 & 1 \r\n\t\t\\end{bmatrix*}\r\n\t\t\\begin{bmatrix}\r\n\t\tA(s,t,u) \\\\ A(t,s,u) \\\\ A(u,t,s) \r\n\t\t\\end{bmatrix}\r\n\t\\end{align}\r\n%\r\nBy projecting out specific partial waves of the isospin-definite amplitudes \\cref{matrix} we can study the resonance content of the scattering amplitude.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%% VENEZIANO AMPLITUDE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\section{Veneziano Model}\r\n\r\nHistorically a dual resonance formalism for $\\pp$ scattering takes the form of decomposing the scalar amplitude $A(s,t,u)$ into symmetric meromorphic functions of two variables and imposing Bose and crossing symmetries. Specifically this means:\r\n\t\\begin{equation}\r\n\t\tA(s,t,u) = [ \\ma(t,u) - \\ma(s,u) - \\ma(s,t) ]\r\n\t\t\\label{dual_decomp}\r\n\t\\end{equation}\r\nfor $\\ma(s,t) = \\ma(t,s)$ being chosen to have the correct analytic (i.e. dynamical resonance) structure in the energy variables.\r\nThe original Veneziano amplitude selected:\r\n\t\\begin{equation}\r\n\t\t\\ma(s,t) = \\frac{\\Gamma( 1 - \\alpha(s) ) \\Gamma(1- \\alpha(t))}{\\Gamma(1- \\alpha(s) - \\alpha(t))}\r\n\t\t\\label{eq:orig_venez}\r\n\t\\end{equation}\r\nwhich has infinitely many simple pole singularities when $\\alpha(s)$ (or $\\alpha(t)$) are integers and the correct Regge behavior ( $\\ma(s,t) \\sim s^{\\alpha(t)}$) at high energies. 
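To see the resonance content explicitly, expand \cref{eq:orig_venez} about one of these poles (a standard computation using the poles of the $\Gamma$ function): near $\alpha(s) = n$, with $n \geq 1$,
	\begin{equation}
		\ma(s,t) \approx \frac{1}{\alpha(s)-n} \; \frac{\alpha(t)\left(\alpha(t)+1\right)\cdots\left(\alpha(t)+n-1\right)}{(n-1)!} ,
	\end{equation}
a simple pole whose residue is a polynomial of degree $n$ in $\alpha(t)$, so the pole at $\alpha(s)=n$ exchanges resonances of spin $\ell \leq n$.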
Analogous decompositions for the cross channel amplitudes are defined by simple interchange of Mandelstam variables in \\cref{dual_decomp}:\r\n\t\\begin{equation}\r\n\t\t\\text{t-channel:} \\quad A(t,s,u) \\qquad \\text{u-channel:} \\quad A(u,t,s).\r\n\t\\end{equation}\r\nThen isospin-definite scattering amplitudes can be assembled by combinations of \\cref{eq:orig_venez}, using \\cref{dual_decomp} in \\cref{matrix}.\r\n\\begin{align} \\label{isoamps}\r\n\t\t\\begin{bmatrix}\r\n\t\tA^{(0)}(s,t,u) \\\\ \r\n\t\tA^{(1)}(s,t,u) \\\\ \r\n\t\tA^{(2)}(s,t,u) \r\n\t\t\\end{bmatrix}\r\n\t= \r\n\t\t\\begin{bmatrix*}[r]\r\n\t\t\t-3 & -3 & 1 \\\\ \r\n\t\t\t -2 & \\;2 & \\;0 \\\\ \r\n\t\t\t\t\\; 0 & \\; 0 & -2 \r\n\t\t\\end{bmatrix*}\r\n\t\t\\begin{bmatrix} \r\n\t\t\\ma(s,t) \\\\ \r\n\t\t\\ma(s,u) \\\\ \r\n\t\t\\ma(t,u)\r\n\t\t\\end{bmatrix} \r\n\t\\end{align}\r\n\t\r\n\t This approach, however, requires the assumption of exact exchange degeneracy between trajectories, so that the couplings and masses of the isospin-0 trajectories depend on the parameterization of the isospin-1 trajectory. It also provides very little freedom in changing the features predicted by the model.\r\n\r\n\\subsection{Phenomenological Approach}\r\nWe wish to construct a model that takes advantage of the appealing features of the Veneziano amplitude, most notably the Regge behavior at high energies, but with more flexibility in describing resonance structures in scattering data. \r\nTo incorporate multiple trajectories (i.e. $\\sigma, \\rho, f_2, ...$) we take a bottom-up approach, noticing that the direct-channel resonance content of the isospin amplitudes should come from the Bose-symmetric combinations:\r\n\t\\begin{equation}\r\n\t\t[ \\ma^{(I)}(s,t) + (-1)^I \\ma^{(I)}(s,u) ] \r\n\t\t\\label{direct}\r\n\t\\end{equation}\r\nwhere we now allow our (at this point completely general) symmetric functions to be different for each isospin channel. This freedom will allow us to parameterize each channel independently (a trajectory $\\alpha^{(I)}(s)$ for each isospin channel) and more accurately describe resonance masses and widths. \r\n\r\nIn order to also satisfy crossing symmetry, we must add background terms containing poles in $t$ and $u$ in other isospin channels. To this end, we consider the most general sum of background terms that are Bose-symmetric:\r\n\t\\begin{align}\r\n\t\t\\text{Even Isospin (symmetric): }& \\qquad \\sum_{I^\\prime} C_{II^\\prime} \\; \\ma^{(I^\\prime)}(t,u) \\nonumber \\\\\r\n\t\t\\text{Odd Isospin (antisymmetric): }& \\qquad \\sum_{I^\\prime} C_{II^\\prime} \\; [ \\ma^{(I^\\prime)}(s,t) - \\ma^{(I^\\prime)}(s,u) ].\r\n\t\t\\label{background}\r\n\t\t\\end{align}\r\nWe immediately see that isospin-1 poses a problem, as the only way to construct a background term that is antisymmetric under $t \\leftrightarrow u$ from symmetric scalar functions introduces $s$ dependence, and therefore the possibility of unwanted direct-channel resonance contributions from the background terms. 
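(Spelled out: at a pole $\alpha^{(I^\prime)}(s) = n$ the antisymmetric combination $\maI{I^\prime}(s,t) - \maI{I^\prime}(s,u)$ in \cref{background} has a residue of the form $R(\alpha(t)) - R(\alpha(u))$, odd under $t \leftrightarrow u$, and therefore it feeds only odd partial waves in the $s$-channel; these are precisely the ``wrong parity'' contributions discussed below.)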
Our choice for the functional form of $\\maI{I}$ must provide a way to eliminate these unwanted poles.\r\n\r\nImposing crossing symmetry on $A^{(I)}(s,t,u)$ as a sum of \\cref{direct,background} we can derive the most general form of a dual-resonance model for $\\pp$ scattering using different isospin-definite scalar functions:\r\n    \\begin{align} \\label{isospin!}\r\n          A^{(I)}(s,t,u) &= (-1)^I \\big[ \\maI{1}(s,t) + \\maI{I}(s,t) \\big ] + \\big [ \\maI{1}(s,u) +  (-1)^I \\; \\maI{I}(s,u) \\big ] + \\big[ \\maI{1}(t,u) - \\maI{I}(t,u)\\big] \\nonumber \\\\\r\n          &+ \\frac{1}{2}  \\big[1-(-1)^I\\big]  \\sum_{I^\\prime}  \\; C_{II^\\prime} \\bigg [ \\maI{I^\\prime}(s,t) + (-1)^{I+I^\\prime} \\maI{I^\\prime}(s,u)\\bigg]   \\\\\r\n          &+ \\frac{1}{2} \\big[1 + (-1)^I\\big] \\sum_{I^\\prime} C_{II^\\prime}\\;\\maI{I^\\prime}(t,u) \\nonumber.\r\n    \\end{align}\r\nHere the coefficients $C_{II^\\prime}$ are the elements of the isospin crossing matrix \r\n\t\\begin{align}\r\n\tC= \r\n\t\\setlength\\arraycolsep{6pt}\r\n\t \t\\begin{bmatrix*}[r]\r\n\t \t\\dfrac{2}{3} & 2 & \\dfrac{10}{3} \\\\[1.2em]\r\n\t \t\\dfrac{2}{3} & 1 & -\\dfrac{5}{3} \\\\[1.2em]\r\n\t \t\\dfrac{2}{3} & -1 & \\dfrac{1}{3} \r\n\t \t\\end{bmatrix*},\r\n\t\\end{align}\r\nand allow the isospin projections of the $t$ and $u$ channels to be related to \\cref{isospin!} by permutations of Mandelstam variables.\r\n The extra factors of $\\maI{1}$ are required to maintain crossing symmetry and come from the necessary $s$-dependence of anti-symmetric background terms. \r\n \r\n For our choice of scalar function we use an infinite sum of Euler function-like terms:\r\n \t\\begin{equation}\r\n \t\t\\maI{I}(s,t) = \\sum_{n=1}^{n_{\\text{max}}} \\sum_{m=n}^{\\infty} \\sum_{k=0}^{m} \\; c^{(I)}_{m,k} \\frac{\\Gamma( m- \\alpha^{(I)}(s) ) \\Gamma( m - \t\\alpha^{(I)}(t))}{\\Gamma(m + k - \\alpha^{(I)}(s) - \\alpha^{(I)}(t))}\r\n \t\\end{equation}\r\nWith appropriate choices of $c^{(I)}_{m,k}$ (cf. the charmonium paper) we can define:\r\n\t\\begin{equation} \\label{single-pole}\r\n    \t\\mathcal{V}^{(I)}_n(s,t) = \\bigg [\\frac{2n - \\as^{(I)} - \\at^{(I)}}{(n- \\as^{(I)})(n-\\at^{(I)})} \\bigg] \\sum_{i=0}^n a^{(I)}_{n,i} \\; (\\as^{(I)} + \\at^{(I)} )^{i} \\; \\bigg [ \\frac{\\Gamma(N + 1 - \\as^{(I)}) \\; \\Gamma (N + 1 - \\at^{(I)})}{\\Gamma ( N + 1 - n) \\; \\Gamma (N+ n + 1 - \\as^{(I)} - \\at^{(I)})} \\bigg ] \r\n\t\\end{equation}\r\nwhere $\\as^{(I)} \\equiv \\az^{(I)} + {\\apr}^{(I)} \\; s$ is the linear Regge trajectory of isospin-$I$.\r\nWe see that \\cref{single-pole} retains the usual behavior of the Veneziano amplitude \\cref{eq:orig_venez} when $\\as \\geq N$, and thus at high energies the Regge behavior is preserved. The low energy behavior (in the resonance region) is a sum of simple poles in the $s$ and $t$ channels when $\\as$ or $\\at = n$ respectively. The pole at $\\as = n$ has a polynomial residue of $\\mathcal{O}(n)$ in both $s$ and $t$ and will thus have partial waves of spin $0 \\leq \\ell \\leq n$. 
The coefficients $a_{n,i}^{(I)}$ allow us to control the strengths of the couplings to individual resonant intermediate states and/or decouple unwanted resonances (discussed below).\r\n\r\nThus the full model becomes \\cref{isospin!} with\r\n\t\\begin{equation}\r\n\t\\maI{I}(s,t) = \\sum_{n=1}^{n_{\\text{max}}} \\mvI{I}_n(s,t).\r\n\t\\end{equation}\r\nThe inputs are then the parameterizations of the Regge trajectories for each channel, chosen to reproduce physical resonance masses, and the ``resonant region cutoff'' $N$, above which Regge behavior dominates. Fitting to scattering data, we can extract couplings (and therefore partial widths) to individual resonant intermediate states. \r\n \r\n %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{Resonances}\r\nFrom \\cref{direct} we see that $A^{(I)}(s,t,u)$ will have direct-channel poles of isospin-$I$ at every $\\as = n$. It will also have ``wrong parity'' contributions from the background, as discussed above.\r\nLooking first at the physical poles with the correct Bose symmetry (suppressing isospin indices for ease of notation):\r\n\t\\begin{align} \\label{pole}\r\n\tA^{(I)}(m_n^2,t,u) \\sim& \\frac{1}{s -  m_n^2} \\sum_{i=0}^n a^{(I)}_{n,i} \\; \\bigg [ \\big(\\alpha(m_n^2) + \\alpha(t) \\big)^i + (-1)^{I} \\big(\\alpha(m_n^2) + \\alpha(u) \\big)^i \\bigg]\r\n\t\\end{align}\r\nwhere \r\n\t\\begin{equation}\r\n\tm_n^2 \\equiv {m_n^{(I)}}^2 = \\frac{n - \\az^{(I)}}{{\\apr}^{(I)}}\r\n\t\\end{equation}\t\r\nis the mass of the resonance of isospin-$I$ at $\\as = n$.\r\nLooking at the residue of \\cref{pole}, we can redefine the coefficients $a_{n,i}^{(I)}$ such that\r\n\t\\begin{equation} \\label{residue}\r\n\t\\text{Res }[ A^{(I)}(m_n^2,t,u)] = \\sum_{\\ell=0}^n \\gamma_{n,\\ell}^{(I)} \\; [1 + (-1)^{\\ell+I}] \\; P_\\ell(z_s)\r\n\t\\end{equation}\r\nwhere we've used:\r\n\t\\begin{equation}\r\n\tt(s,z_s) = -2k^2(s)(1-z_s) \\quad \\text{ and } \\quad u(s, z_s) = -2k^2(s)(1+z_s)\r\n\t\\end{equation}\r\n\twith $z_s$, the $s$-channel scattering angle, $k(s) = \\sqrt{s - 4m^2_\\pi}/2$ and Legendre polynomials $P_\\ell$. We see that Bose symmetry ($I+\\ell$ even) is explicitly enforced in \\cref{residue}. We additionally have the coefficients $\\gamma_{n,\\ell}^{(I)}$ related to the partial decay widths of each partial wave by\r\n\t\\begin{equation}\r\n\t\\Gamma^{(I)}_\\ell = \\frac{ - k(m^2_n)}{2m^2_n} \\int^1_{-1} dz_s P_\\ell \\; \\text{Res }A^{(I)}(m^2_n, z_s) = (2\\ell +1) \\frac{ - k(m^2_n)}{2m^2_n} \\gamma^{(I)}_{n,\\ell}.\r\n\t\\end{equation}\r\nHere we can address the problem of unwanted poles of wrong parity. These partial widths are completely undetermined and must instead be fit to data. However, once we distinguish different Regge trajectories, couplings to resonances that do not have the natural Bose symmetry of that trajectory (i.e. $I+\\ell$ even) are unphysical. For example the $\\rho$ trajectory containing isospin-1 resonances ($\\rho(770), \\rho(1450), \\rho_3(1690)$, etc.) should not contain spin-even resonances.  We can set these couplings identically to zero, leaving $\\lceil n/2 \\rceil$ free couplings for the physical poles: \t\r\n\t\t\\begin{equation}\r\n\t\t\\gamma_{n,\\ell}^{(I)} \\equiv 0 \\qquad \\text{for odd} \\; \\ell+I.\r\n\t\t\\end{equation}\r\n\r\nWe note that the unwanted background poles are exactly of this form, and setting these unphysical couplings to zero eliminates them. 
For example, taking $I = 0$ in \\cref{isospin!}, \r\n    \\begin{align} \\label{Azero}\r\n    A^{(0)}(s,t,u) &= \\maI{0}(s,t) + \\maI{0}(s,u) - \\frac{1}{3} \\, \\maI{0}(t,u) \\nonumber \\\\[0.5em]\r\n                        &\\; + \\maI{1}(s,t) + \\maI{1}(s,u) + 3 \\, \\maI{1}(t,u)  \\\\[0.5em] \r\n                        &\\; + \\frac{10}{3} \\, \\maI{2}(t,u) \\nonumber,\r\n    \\end{align}\r\nthe contributing poles come from terms of the form \r\n\t\\begin{equation}\r\n\t\t\t[ \\ma^{(I^\\prime)}(s,t) + (-1)^I \\ma^{(I^\\prime)}(s,u) ] \\sim \\sum_{\\ell=0}^n \\gamma_{n,\\ell}^{(I^\\prime)} \\; [1 + (-1)^{\\ell+I}] \\; P_\\ell(z_s)\r\n\t\\end{equation}\r\nwhere for even (odd) $I$ the background isospin $I^\\prime$ is odd (even). Thus, for $I=0$ Bose symmetry keeps only the even partial waves, so the isospin-1 background can contribute only through couplings to even-spin resonances on the isospin-1 trajectory, which are identically zero. Similarly, we can in general always decouple the odd partial waves on the isospin-0 and isospin-2 trajectories to prevent unphysical contributions to the isospin-1 amplitude.\r\n\r\n\\end{document}", "meta": {"hexsha": "66c065d0b70379ff09ca2fcc7713d96ceba509f3", "size": 15040, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/phen-venz-pipi.tex", "max_stars_repo_name": "dwinney/veneziano-pipi", "max_stars_repo_head_hexsha": "2536bdec37e746c6bff98996420bd0fbccae7354", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/phen-venz-pipi.tex", "max_issues_repo_name": "dwinney/veneziano-pipi", "max_issues_repo_head_hexsha": "2536bdec37e746c6bff98996420bd0fbccae7354", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/phen-venz-pipi.tex", "max_forks_repo_name": "dwinney/veneziano-pipi", "max_forks_repo_head_hexsha": "2536bdec37e746c6bff98996420bd0fbccae7354", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.7547169811, "max_line_length": 669, "alphanum_fraction": 0.6511303191, "num_tokens": 5166, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672089305841, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.565044563292875}}
{"text": "% Part: first-order-logic\n% Chapter: model-theory\n% Section: partial-iso\n\n\\documentclass[../../../include/open-logic-section]{subfiles}\n\n\\begin{document}\n\n\\olfileid{mod}{bas}{pis}\n\\section{Partial Isomorphisms}\n\n\\begin{defn}\n  Given two !!{structure}s $\\Struct{M}$ and $\\Struct{N}$, a\n  \\emph{partial isomorphism} from $\\Struct{M}$ to $\\Struct{N}$ is a\n  finite partial function $p$ taking arguments in $\\Domain M$ and returning\n  values in $\\Domain N$, which satisfies the isomorphism conditions from\n  \\olref[iso]{defn:isomorphism} on its domain:\n  \\begin{enumerate}\n  \\item $p$ is !!{injective};\n  \\item for every !!{constant}~$c$: if $p(\\Assign{c}{M})$ is defined,\n    then $p(\\Assign{c}{M}) = \\Assign{c}{N}$;\n  \\item for every $n$-place !!{predicate} $P$: if $a_1$, \\dots, $a_n$\n    are in the domain of $p$, then $\\langle a_1, \\dots, a_n\\rangle \\in\n    \\Assign P M$ if and only if $\\langle p(a_1), \\dots, p(a_n) \\rangle\n    \\in \\Assign P N$;\n  \\item for every $n$-place !!{function} $f$: if $a_1$, \\dots, $a_n$\n    are in the domain of $p$, then $p(\\Assign f M (a_1, \\dots,a_n))\n    = \\Assign f N (p(a_1), \\ dots, p(a_n))$.\n  \\end{enumerate}\n  That $p$ is finite means that $\\dom{p}$ is finite.\n\\end{defn}\n\nNotice that the empty function~$\\emptyset$ is always a partial\nisomorphism between any two !!{structure}s.\n\n\\begin{defn}\\ollabel{defn:partialisom}\n  Two !!{structure}s $\\Struct{M}$ and $\\Struct{N}$, are\n  \\emph{partially isomorphic}, written $\\Struct{M} \\iso[p]\n  \\Struct{N}$, if and only if there is a non-empty set $I$\n  of partial isomorphisms between $\\Struct{M}$ and $\\Struct{N}$\n  satisfying the \\emph{back-and-forth} property:\n  \\begin{enumerate}\n  \\item (\\emph{Forth}) For every $p \\in I$ and $a \\in \\Domain M$\n    there is $q \\in I$ such that $p \\subseteq q$ and $a$ is\n    in the domain of $q$;\n  \\item (\\emph{Back}) For every $p \\in I$ and $b \\in \\Domain N$\n    there is $q \\in I$ such that $p \\subseteq q$ and $b$ is\n    in the range of $q$.\n  \\end{enumerate}\n\\end{defn}\n\n\\begin{thm}\\ollabel{thm:p-isom1}\n  If $\\Struct{M} \\iso[p] \\Struct{N}$ and $\\Struct{M}$ and\n  $\\Struct{N}$ are !!{enumerable}, then $\\Struct{M} \\iso\n  \\Struct{N}$.\n\\end{thm}\n\n\\begin{proof}\n  Since $\\Struct{M}$ and $\\Struct{N}$ are !!{enumerable}, let $\\Domain{M} =\n  \\{a_0, a_1, \\ldots \\}$ and $\\Domain{N} = \\{b_0, b_1, \\ldots \\}$. 
Starting\n  with an arbitrary $p_0 \\in I$, we define an increasing\n  sequence of partial isomorphisms $p_0 \\subseteq p_1 \\subseteq p_2\n  \\subseteq \\cdots$ as follows:\n  \\begin{enumerate}\n  \\item if $n+1$ is odd, say $n = 2r$, then using the Forth property\n    find a $p_{n+1} \\in I$ such that $p_n \\subseteq p_{n+1}$\n    and $a_r$ is in the domain of $p_{n+1}$;\n  \\item if $n+1$ is even, say $n+1 =2r$, then using the Back property\n    find a $p_{n+1} \\in I$ such that $p_n \\subseteq p_{n+1}$\n    and $b_r$ is in the range of $p_{n+1}$.\n  \\end{enumerate}\nIf we now put:\n\\[\np = \\bigcup_{n\\ge 0} p_n,\n\\]\nwe have that $p$ is an isomorphism between $\\Struct{M}$ and\n$\\Struct{N}$.\n\\end{proof}\n\n\\begin{prob}\n  Show in detail that $p$ as defined in\n  \\olref[mod][bas][pis]{thm:p-isom1} is in fact an isomorphism.\n\\end{prob}\n\n\\begin{thm}\\ollabel{thm:p-isom2}\n  Suppose $\\Struct{M}$ and $\\Struct{N}$ are !!{structure}s for a\n  purely relational !!{language} (!!a{language} containing only\n  !!{predicate}s, and no !!{function}s or constants). Then if\n  $\\Struct{M} \\iso[p] \\Struct{N}$, also $\\Struct{M} \\elemequiv\n  \\Struct{N}$.\n\\end{thm}\n\n\\begin{proof}\n  By induction on !!{formula}s, one shows that if $a_1$, \\dots, $a_n$ and\n  $b_1$, \\dots, $b_n$ are such that there is a partial isomorphism $p$\n  mapping each $a_i$ to $b_i$ and $s_1(x_i) =a_i$ and $s_2(x_i) =b_i$\n  (for $i =1$, \\dots,~$n$), then $\\Sat{M}{!A}[s_1]$ if\n  and only if $\\Sat{N}{!A}[s_2]$. The case for $n=0$\n  gives $\\Struct{M} \\elemequiv \\Struct{N}$.\n\\end{proof}\n\n\\begin{rem}\nIf !!{function}s are present, the previous result is still true, but\none needs to consider the isomorphism induced by $p$ between the\nsub!!{structure} of $\\Struct{M}$ generated by $a_1$, \\dots, $a_n$ and the\nsub!!{structure} of $\\Struct{N}$ generated by $b_1$, \\dots, $b_n$.\n\\end{rem}\n\nThe previous result can be ``broken down'' into stages by establishing a\nconnection between the number of nested quantifiers in a !!{formula} and\nhow many times the relevant partial isomorphisms can be extended.\n\n\\begin{defn}\n  For any !!{formula} $!A$, the \\emph{quantifier rank} of $!A$, denoted\n  by $\\QuantRank{!A} \\in \\Nat$, is recursively defined as\n  the highest number of nested quantifiers in $!A$.  Two\n  !!{structure}s $\\Struct{M}$ and $\\Struct{N}$ are \\emph{$n$-equivalent},\n  written $\\Struct{M} \\elemequiv[n] \\Struct{N}$, if they agree on all\n  sentences of quantifier rank less than or equal to~$n$.\n\\end{defn}\n\n\\begin{prop}\\ollabel{prop:qr-finite}\n  Let $\\Lang{L}$ be a finite purely relational !!{language}, i.e., a\n  !!{language} containing finitely many !!{predicate}s and !!{constant}s,\n  and no !!{function}s. Then for each $n \\in \\Nat$ there are\n  only finitely many first-order !!{sentence}s in the !!{language}\n  $\\Lang{L}$ that have quantifier rank no greater than $n$, up to\n  logical equivalence.\n\\end{prop}\n\n\\begin{proof}\n  By induction on $n$.\n\\end{proof}\n
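For example (written in ordinary notation): $\forall x\, \exists y\, R(x,y)$ has
quantifier rank~$2$, since the two quantifiers are nested, while $\exists
x\, P(x) \land \exists y\, Q(y)$ has quantifier rank~$1$, since they are
not.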
If $\\mathbf{a} \\in \\Domain{M}^{<\\omega}$ and $a \\in \\Domain{M}$, then\n  $\\mathbf{a}a$ represents the \\emph{concatenation} of $\\mathbf{a}$ with $a$.\n\\end{defn}\n\n\\begin{defn}\n  Given !!{structure}s $\\Struct{M}$ and $\\Struct{N}$, we define\n  relations $I_n \\subseteq \\Domain M^{<\\omega} \\times \\Domain N^{<\\omega}$ between\n  sequences of equal length, by recursion on $n$ as follows:\n   \\begin{enumerate}\n   \\item $I_0(\\mathbf{a},\\mathbf{b})$ if and only if $\\mathbf{a}$ and\n     $\\mathbf{b}$ satisfy the same atomic !!{formula}s in  $\\Struct{M}$\n     and  $\\Struct{N}$; i.e., if $s_1(x_i) = a_i$ and $s_2(x_i) =\n     b_i$ and $!A$ is atomic with all !!{variable}s among\n     $x_1$, \\dots,~$x_n$, then $\\Sat{M}{!A}[s_1]$ if and\n     only if~$\\Sat{N}{!A}[s_2]$.\n   \\item $I_{n+1} (\\mathbf{a},\\mathbf{b})$ if and only if for every\n     $a\\in A$ there is a $b\\in B$ such that $I_n\n     (\\mathbf{a}a,\\mathbf{b}b)$, and vice-versa.\n   \\end{enumerate}\n\\end{defn}\n\n\n\\begin{defn}\n  Write $\\Struct{M} \\approx_n \\Struct{N}$ if\n  $I_n(\\emptyseq,\\emptyseq)$ holds of $\\Struct{M}$ and\n  $\\Struct{N}$ (where $\\emptyseq$ is the empty sequence).\n\\end{defn}\n\n\\begin{thm}\\ollabel{thm:b-n-f}\n  Let $\\Lang{L}$ be a purely relational !!{language}. Then $I_n\n  (\\mathbf{a},\\mathbf{b})$ implies that for every $!A$ such that\n  $\\QuantRank{!A} \\le n$, we have $\\Sat{M}{!A}[\\mathbf{a}]$ if and\n  only if $\\Sat{N}{!A}[\\mathbf{b}]$ (where again $\\mathbf{a}$\n  satisfies $!A$ if any $s$ such that $s(x_i) = a_i$ satisfies\n  $!A$). Moreover, if $\\Lang{L}$ is finite, the converse also holds.\n\\end{thm}\n\n\\begin{proof}\n  The proof that $I_n(\\mathbf{a},\\mathbf{b})$ implies that\n  $\\mathbf{a}$ and $\\mathbf{b}$ satisfy the same !!{formula}s of\n  quantifier rank no greater than $n$ is by an easy induction on\n  $!A$. For the converse we proceed by induction on $n$, using\n  \\olref{prop:qr-finite}, which ensures that for each $n$\n  there are at most finitely many non-equivalent !!{formula}s of that\n  quantifier rank.\n\n  For $n=0$ the hypothesis that $\\mathbf{a}$ and $\\mathbf{b}$ satisfy\n  the same quantifier-free !!{formula}s gives that they satisfy the same\n  atomic ones, so that $I_0(\\mathbf{a},\\mathbf{b})$.\n\n  For the $n+1$ case, suppose that $\\mathbf{a}$ and $\\mathbf{b}$\n  satisfy the same !!{formula}s of quantifier rank no greater than\n  $n+1$; in order to show that $I_{n+1}(\\mathbf{a},\\mathbf{b})$\n  suffices to show that for each $a \\in \\Domain M$ there is a $b \\in\n  \\Domain N$ such that $I_n(\\mathbf{a}a,\\mathbf{b}b)$, and by the\n  inductive hypothesis again suffices to show that for each $a \\in\n  \\Domain M$ there is a $b \\in \\Domain N$ such that $\\mathbf{a}a$ and\n  $\\mathbf{b}b$ satisfy the same !!{formula}s of quantifier rank no\n  greater than $n$.\n\n  Given $a \\in \\Domain M$, let $!T^a_n$ be set of !!{formula}s\n  $!B(x,\\mathbf{y})$ of rank no greater than $n$ satisfied by\n  $\\mathbf{a}a$ in $\\Struct{M}$; $\\tau^a_n$ is finite, so we can\n  assume it is a single first-order !!{formula}. It follows that\n  $\\mathbf{a}$ satisfies $\\lexists[x][!T^a_n(x,\\mathbf{y})]$, which\n  has quantifier rank no greater than $n+1$. 
By hypothesis\n  $\\mathbf{b}$ satisfies the same !!{formula} in $\\Struct{N}$, so that\n  there is a $b \\in \\Domain N$ such that $\\mathbf{b}b$ satisfies\n  $!T^a_n$; in particular, $\\mathbf{b}b$ satisfies the same\n  !!{formula}s of quantifier rank no greater than $n$ as\n  $\\mathbf{a}a$. Similarly one shows that for every $b \\in \\Domain N$\n  there is $a\\in \\Domain M$ such that $\\mathbf{a}a$ and $\\mathbf{b}b$\n  satisfy the same !!{formula}s of quantifier rank no greater than $n$,\n  which completes the proof.\n\\end{proof}\n\n\\begin{cor}\\ollabel{cor:b-n-f}\n  If $\\Struct{M}$ and $\\Struct{N}$ are purely relational !!{structure}s\n  in a finite !!{language}, then $\\Struct{M} \\approx_n\\Struct{N}$ if and\n  only if $\\Struct{M} \\elemequiv[n] \\Struct{N}$. In particular\n  $\\Struct{M} \\elemequiv \\Struct{N}$ if and only if for each $n$,\n  $\\Struct{M} \\approx_n \\Struct{N}$ .\n\\end{cor}\n\n\\end{document}\n", "meta": {"hexsha": "0346af4bd22cfb6bec56ebd72b920b2878e3104d", "size": 9468, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/model-theory/basics/partial-iso.tex", "max_stars_repo_name": "jzc/OpenLogic", "max_stars_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 754, "max_stars_repo_stars_event_min_datetime": "2015-01-13T20:57:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T23:18:26.000Z", "max_issues_repo_path": "content/model-theory/basics/partial-iso.tex", "max_issues_repo_name": "jzc/OpenLogic", "max_issues_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 229, "max_issues_repo_issues_event_min_datetime": "2015-01-12T23:00:15.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T19:14:08.000Z", "max_forks_repo_path": "content/model-theory/basics/partial-iso.tex", "max_forks_repo_name": "jzc/OpenLogic", "max_forks_repo_head_hexsha": "5948483c1d08c25664dc12ac8350e9ae34986b31", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 241, "max_forks_repo_forks_event_min_datetime": "2015-02-28T22:05:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T18:47:05.000Z", "avg_line_length": 42.2678571429, "max_line_length": 82, "alphanum_fraction": 0.6501901141, "num_tokens": 3406, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5648813306671477}}
{"text": "\\section{Results}\nResults for two different cases are being presented, one the one-dimensional Ising Model and another for the the two-dimensional Ising model. We begin with looking at the. 1D Ising model.\n\n\\subsection{1D Ising model}\n\\subsubsection{Fitting with linear regression}\nFor linear regression we got coefficients of $\\bm{J}$ in \\eqref{eq:1d-ising-linreg} as the following presented in figure \\ref{fig:reg-coef-heatmap},\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[b]{0.9\\textwidth}\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda0.001}.pdf}\n        \\caption{$\\lambda=10^{-3}$}\n        \\label{fig:linreg-hm-1e-3}\n    \\end{subfigure} \\\\\n    \\begin{subfigure}[b]{0.9\\textwidth}\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda0.1}.pdf}\n        \\caption{$\\lambda=10^{-1}$}\n        \\label{fig:linreg-hm-1e-1}\n    \\end{subfigure} \\\\\n    \\begin{subfigure}[b]{0.9\\textwidth}\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=1]{../fig/{regression_ising_1d_heatmap_lambda10.0}.pdf}\n    \\caption{$\\lambda=10^{1}$}\n        \\label{fig:linreg-hm-1e2}\n    \\end{subfigure}\n    \\caption{Heat map plots of the $\\bm{J}$ in \\eqref{eq:1d-ising-linreg} retrieved from OLS, Ridge and Lasso. Gathered using $N_\\mathrm{train}=5000$.}\n    \\label{fig:reg-coef-heatmap}\n\\end{figure}\n\nThe $R^2$ score of the OLS, Ridge and Lasso can be seen in figure \\ref{fig:linreg-r2},\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/r2_ols_ridge_lasso.pdf}\n    \\caption{$R^2$ score for different Ordinary Least Squares(OLS), Ridge and Lasso regression. 
Retrieved $N_\\mathrm{train}=5000$ and $N_\\mathrm{test}=5000$ on a 1D Ising model of size $L=20$.}\n    \\label{fig:linreg-r2}\n\\end{figure}\n\nThe bias-variance decomposition for Ridge and Lasso using bootstrap and cross validation can be viewed in figure \\ref{fig:linreg-bias-variance-decomp-ridge} and \\ref{fig:linreg-bias-variance-decomp-lasso}.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/ridge_bs_bias_variance_analysis.pdf}\n        \\caption{Bootstrap.}\n        \\label{fig:linreg-bias-variance-decomp-bs-ridge}\n    \\end{subfigure}%\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/ridge_cv_bias_variance_analysis.pdf}\n        \\caption{$k$-fold Cross Validation.}\n        \\label{fig:linreg-bias-variance-decomp-cv-ridge}\n    \\end{subfigure}\n    \\caption{A bias-variance decomposition of Ridge regression using bootstrapping\\ref{fig:linreg-bias-variance-decomp-bs-ridge} and cross-validation\\ref{fig:linreg-bias-variance-decomp-cv-ridge}.}\n    \\label{fig:linreg-bias-variance-decomp-ridge}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/lasso_bs_bias_variance_analysis.pdf}\n        \\caption{Bootstrap.}\n        \\label{fig:linreg-bias-variance-decomp-bs-lasso}\n    \\end{subfigure}%\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/lasso_cv_bias_variance_analysis.pdf}\n        \\caption{$k$-fold Cross Validation.}\n        \\label{fig:linreg-bias-variance-decomp-cv-lasso}\n    \\end{subfigure}\n    \\caption{A bias-variance decomposition of Lasso regression using bootstrapping\\ref{fig:linreg-bias-variance-decomp-bs-lasso} and cross-validation\\ref{fig:linreg-bias-variance-decomp-cv-lasso}.}\n    \\label{fig:linreg-bias-variance-decomp-lasso}\n\\end{figure}\n\n\\subsubsection{Fitting with a neural network}\nBy setting the output activation function to the identity and by having zero hidden layers, we are essentially performing a regression analysis on the 1D Ising model. We generate the same amount of data by inputing the same RNG(random number generator) seed. 
A fit using $N_\\mathrm{train}=400$, $N_\\mathrm{train}=5000$ and $N_\\mathrm{test}=5000$ for $\\lambda=10^{-3}, 10^{-1}, 10^1$ can be seen in figure \\ref{fig:mlp_coefs}.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.0001_N800}.pdf}\n        \\caption{$N_\\mathrm{train}=400$, $\\lambda=10^{-3}$}\n        \\label{fig:mlp-reg-heatmap400-lmb-3}\n    \\end{subfigure} \\qquad \\qquad \\qquad\n    \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.0001_N100000}.pdf}\n        \\caption{$N_\\mathrm{train}=5000$, $\\lambda=10^{-3}$}\n        \\label{fig:mlp-reg-heatmap5000-lmb-3}\n    \\end{subfigure} \\\\\n        \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.1_N800}.pdf}\n        \\caption{$N_\\mathrm{train}=400$, $\\lambda=10^{-1}$}\n        \\label{fig:mlp-reg-heatmap400-lmb-1}\n    \\end{subfigure} \\qquad \\qquad \\qquad\n    \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda0.1_N100000}.pdf}\n        \\caption{$N_\\mathrm{train}=5000$, $\\lambda=10^{-1}$}\n        \\label{fig:mlp-reg-heatmap5000-lmb-1}\n    \\end{subfigure} \\\\\n        \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda10.0_N800}.pdf}\n        \\caption{$N_\\mathrm{train}=400$, $\\lambda=10^1$}\n        \\label{fig:mlp-reg-heatmap400-lmb1}\n    \\end{subfigure} \\qquad \\qquad \\qquad\n    \\begin{subfigure}[b]{0.4\\textwidth}\n        \\centering\n        \\includegraphics[trim={1.5cm 3.5cm 0 3.5cm},clip, scale=0.6]{../fig/{mlp_ising_1d_heatmap_lambda10.0_N100000}.pdf}\n        \\caption{$N_\\mathrm{train}=5000$, $\\lambda=10^1$}\n        \\label{fig:mlp-reg-heatmap5000-lmb1}\n    \\end{subfigure} \\\\\n    \\caption{Heat map plot of the coefficients of $\\bm{J}$ in \\eqref{eq:1d-ising-linreg} using neural networks with different regularizations for $\\lambda=10^{-3}, 10^{-1}, 10^1$.}\n    \\label{fig:mlp-coefs}\n\\end{figure}\n\nThe $R^2$ score of the neural network using L$^1$, L$^2$ and no regularization can be seen in figure \\ref{fig:mlp-r2},\n\\begin{figure}[H]\n    \\centering\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/{mlp_r2_ols_ridge_lasso800}.pdf}\n        \\caption{$N_\\mathrm{train}=400$}\n        \\label{fig:mlp-r2-800}\n    \\end{subfigure}%\n    \\begin{subfigure}[b]{0.5\\textwidth}\n        \\centering\n        \\includegraphics[scale=0.5]{../fig/{mlp_r2_ols_ridge_lasso100000}.pdf}\n        \\caption{$N_\\mathrm{train}=5000$}\n        \\label{fig:mlp-r2-5000}\n    \\end{subfigure}\n    \\caption{$R^2$ score for the neural network using L$^1$ (Lasso), L$^2$ (Ridge) and no regularization (OLS). 
Retrieved $N_\\mathrm{train}=400$ on the left and $N_\\mathrm{train}=5000$ on the right, for a 1D Ising model of size $L=20$.}\n    \\label{fig:mlp-r2}\n\\end{figure}\n\n% TODO: rerun mlp regression with N_samples = 10000 as I run for too much :|\n\n\\subsection{2D Ising model}\nAs stated in section \\ref{sec:2d-ising-model} about the 2D Ising model, the classification will focus on evaluating the phases of different lattice configurations, and whether or not a configuration is below or above a critical temperature. We begin by listing the results from the logistic regression.\n\\subsubsection{Classification through logistic regression}\nIn logistic regression we investigated the behavior of the classification and compared it to that of SciKit Learn\\cite{scikit-learn}, using the standard logistic regression method\\footnote{See \\href{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html}{Logistic Regression documentation}} and \nSciKit Learn's SGD (Stochastic Gradient Descent) implementation\\footnote{See \\href{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html}{SGD documentation}}. This gave the results found in figure \\ref{fig:logreg-accuracy-sklearn-comparison}.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/logistic_accuracy_sklearn_comparison.pdf}\n    \\caption{The accuracy for our implementation of logistic regression versus that of SciKit learn.}\n    \\label{fig:logreg-accuracy-sklearn-comparison}\n\\end{figure}\n\n\\subsubsection{Classification through neural networks}\nFor classifying the states through a neural network, we looked at several different hyperparameters. All runs were made using $N_\\mathrm{samples}=10000$ unless stated otherwise. The training fraction was 0.5. We start by comparing two different cost functions and their layer outputs,\n\\begin{itemize}\n    \\item Cross entropy with softmax layer output \\eqref{eq:ce-mlp-cost}\n    \\item MSE with sigmoidal layer output \\eqref{eq:mse-mlp-cost}\n\\end{itemize}\nThe behavior of these cost functions over the epochs can be seen in figure \\ref{fig:mlp-cost-function-comparison},\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_cost_functions.pdf}\n    \\caption{A comparison in accuracy scores between the MSE and (CE) Cross Entropy loss functions over 500 epochs. The output layer for MSE is sigmoidal, the output layer for CE is softmax. The learning parameter was $\\eta=0.001$ and we used the inverse learning rate \\eqref{eq:inverse-eta}.}\n    \\label{fig:mlp-cost-function-comparison}\n\\end{figure}\n\nWe then wish to investigate the effects of having different initial weights. Given the initial weights \\textit{large} and \\textit{default} as listed in section \\ref{sec:nn-weights}, we get the results as seen in figure \\ref{fig:mlp-epoch-init-weights},\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_weight_inits.pdf}\n    \\caption{A comparison in accuracy scores between the initial weights \\textit{large} and \\textit{default} as listed in section \\ref{sec:nn-weights}. The run was for 500 epochs. The cost function was set to cross entropy and had softmax output activation.
The learning parameter was $\\eta=0.001$ and we used the inverse learning rate \\eqref{eq:inverse-eta}.}\n    \\label{fig:mlp-epoch-init-weights}\n\\end{figure}\n\n% TODO: insert the lambda results here\n\nAn investigation into different layer activations (section \\ref{sec:layer-acts}) was performed for both the MSE and the CE cost function. The results for MSE can be seen in figure \\ref{fig:mlp-epoch-activations-mse} and for cross entropy in figure \\ref{fig:mlp-epoch-activations-log-loss}.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for MSE as cost function. The run was for 500 epochs. The learning rate was set with the inverse learning rate \\eqref{eq:inverse-eta} with an $\\eta_0=0.001$ and $\\lambda=0.0$.}\n    \\label{fig:mlp-epoch-activations-mse}\n\\end{figure}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for cross entropy as cost function. The run was for 500 epochs. The learning rate was set with the inverse learning rate \\eqref{eq:inverse-eta} with an $\\eta_0=0.001$ and $\\lambda=0.0$.}\n    \\label{fig:mlp-epoch-activations-log-loss}\n\\end{figure}\n\nWe then move on to an investigation of different L$^2$ regularization strengths $\\lambda$ versus different constant learning rates $\\eta$. A run with 500 epochs, cross entropy and sigmoidal hidden layer activation can be seen in figure \\ref{fig:mlp-eta-lambda},\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_lambda_eta.pdf}\n    \\caption{The accuracy as a function of the L$^2$ regularization parameter $\\lambda$ and constant learning rate $\\eta$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The hidden layer was 10 neurons large.}\n    \\label{fig:mlp-eta-lambda}\n\\end{figure}\n\nA comparison of the accuracy score \\eqref{eq:mlp-accuracy} as a function of the L$^2$ regularization parameter and the hidden layer size (the number of neurons) can be viewed in figure \\ref{fig:mlp-lambda-neurons}.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_lambda_neurons.pdf}\n    \\caption{The accuracy score as a function of the L$^2$ regularization parameter $\\lambda$ and the number of neurons. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The learning rate was set with the inverse learning rate \\eqref{eq:inverse-eta} with an $\\eta_0=0.001$.}\n    \\label{fig:mlp-lambda-neurons}\n\\end{figure}\n\nThe accuracy score \\eqref{eq:mlp-accuracy} as a function of the hidden layer size (the number of neurons) and the training data size as a percentage of the $N_\\mathrm{samples}=10000$ training data can be viewed in figure \\ref{fig:mlp-neurons-ts}.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_neurons_training_size.pdf}\n    \\caption{The accuracy score as a function of the number of neurons and the training data size as a percentage of $N_\\mathrm{samples}=10000$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The learning rate was set with the inverse learning rate \\eqref{eq:inverse-eta} with an $\\eta_0=0.001$ and $\\lambda=0.0$.
}\n    \\label{fig:mlp-neurons-ts}\n\\end{figure}\n\nThe accuracy score \\eqref{eq:mlp-accuracy} as a function of the hidden layer size (the number of neurons) and the learning rate $\\eta$ can be viewed in figure \\ref{fig:mlp-neurons-eta}.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_neurons_eta.pdf}\n    \\caption{The accuracy score as a function of the number of neurons and the learning rate $\\eta$. The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation. The regularization strength was set to $\\lambda=0.0$.}\n    \\label{fig:mlp-neurons-eta}\n\\end{figure}\n\nThe accuracy score \\eqref{eq:mlp-accuracy} as a function of the L$^2$ regularization strength $\\lambda$ and the mini batch size in the SGD (algorithm \\ref{alg:sgd}) can be viewed in figure \\ref{fig:mlp-lambda-mb}.\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_lambda_mini_batch_size.pdf}\n    \\caption{The accuracy score as a function of the L$^2$ regularization strength $\\lambda$ and the mini batch size in the SGD (algorithm \\ref{alg:sgd}). The run was for 500 epochs and with cross entropy as cost function, softmax output and sigmoidal hidden layer activation, and the inverse learning rate \\eqref{eq:inverse-eta} with $\\eta_0=0.001$.}\n    \\label{fig:mlp-lambda-mb}\n\\end{figure}\n\nAfter choosing a set of optimal parameters seen in table \\ref{tab:opt_ols_result}, we get the results seen in figure \\ref{fig:mlp-epoch-activations-mse-optimal} for the MSE cost function and in figure \\ref{fig:mlp-epoch-activations-log-loss-optimal} for the CE cost function.\n\\begin{table}[H]\n    \\centering\n    \\caption{A set of optimal parameters.}\n    \\begin{tabular}{l l} % 2 columns\n        \\specialrule{.1em}{.05em}{.05em}\n        Parameters & Values \\\\ \\hline\n        $N_\\mathrm{neurons}$    & 10 \\\\\n        $\\lambda$               & 0.1 \\\\\n        $N_\\mathrm{mb}$         & 20 \\\\\n        $N_\\mathrm{epochs}$     & 500 \\\\\n        $\\eta$ (constant)       & 0.001 \\\\\n        \\specialrule{.1em}{.05em}{.05em}\n    \\end{tabular}\n    \\label{tab:opt_ols_result}\n\\end{table}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse2.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for MSE as cost function using \\textit{optimal parameters}. The optimal parameters can be viewed in table \\ref{tab:opt_ols_result}.}\n    \\label{fig:mlp-epoch-activations-mse-optimal}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss2.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for cross entropy as cost function using \\textit{optimal parameters}. The optimal parameters can be viewed in table \\ref{tab:opt_ols_result}.}\n    \\label{fig:mlp-epoch-activations-log-loss-optimal}\n\\end{figure}\n\nAnother set of optimal parameters, listed in table \\ref{tab:opt_ols_result2}, was also chosen; this provided the results seen in figures \\ref{fig:mlp-epoch-activations-mse-optimal2} and \\ref{fig:mlp-epoch-activations-log-loss-optimal2}.
\n% Taking the last 50 accuracy scores gives, with the tanh activation function, Accuracy$=0.999(21)$.\n\\begin{table}[H]\n    \\centering\n    \\caption{A second set of optimal parameters.}\n    \\begin{tabular}{l l} % 2 columns\n        \\specialrule{.1em}{.05em}{.05em}\n        Parameters & Values \\\\ \\hline\n        $N_\\mathrm{neurons}$    & 20 \\\\\n        $\\lambda$               & 0.1 \\\\\n        $N_\\mathrm{mb}$         & 30 \\\\\n        $N_\\mathrm{epochs}$     & 500 \\\\\n        $\\eta$ (constant)       & 0.001 \\\\\n        \\specialrule{.1em}{.05em}{.05em}\n    \\end{tabular}\n    \\label{tab:opt_ols_result2}\n\\end{table}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_mse_optimal3.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for MSE as cost function using \\textit{optimal parameters}. The optimal parameters can be viewed in table \\ref{tab:opt_ols_result2}.}\n    \\label{fig:mlp-epoch-activations-mse-optimal2}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=1.0]{../fig/mlp_epoch_activations_log_loss_optimal3.pdf}\n    \\caption{A comparison in accuracy scores between the hidden layer activation functions (see section \\ref{sec:layer-acts}) for cross entropy as cost function using \\textit{optimal parameters}. The optimal parameters can be viewed in table \\ref{tab:opt_ols_result2}.}\n    \\label{fig:mlp-epoch-activations-log-loss-optimal2}\n\\end{figure}", "meta": {"hexsha": "a883ec4370ec7a9418566d4ff04a59d35db5bf10", "size": 18262, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/results.tex", "max_stars_repo_name": "hmvege/FYSSTK4155-Project2", "max_stars_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/results.tex", "max_issues_repo_name": "hmvege/FYSSTK4155-Project2", "max_issues_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/results.tex", "max_forks_repo_name": "hmvege/FYSSTK4155-Project2", "max_forks_repo_head_hexsha": "3cf617399f99026cbcd79f8153d3196ebd86c7cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.7560137457, "max_line_length": 425, "alphanum_fraction": 0.7197459205, "num_tokens": 5487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.7662936377487305, "lm_q1q2_score": 0.5648796188866436}}
{"text": "\\documentclass[%\n %twocolumn,\n %preprint,\n onecolumn,\n amsmath, amssymb, aps, pra, 10pt\n]{revtex4-2}\n\\usepackage{amsmath}\n\\usepackage{appendix}\n\\usepackage[colorlinks,citecolor=blue,urlcolor=black,bookmarks=false,hypertexnames=true]{hyperref} \n\\begin{document}\n\\title{Invertible Fractional Calculus}% Force line breaks with \\\n\\author{Luke A. Siemens}\n\\email{luke.siemens@lsiemens.com}\n\\noaffiliation\n\\date{\\today}\n\\maketitle\n\nFractional calculus has two aspects that I find deeply unsatisfying: the multiple incompatible definitions and that composition of operator with itself forms a semigroup not an abelian group. In this paper I demonstrate a series of conditions under which a version of fractional calculus can be uniquely defined and forms an abelian group. First I will explicitly define which operator will be extended to a fractional version. It must reproduce differntiation and integration and also have the desired algebraic properties. Then I will use Ramanujan's Master Theorem (RMT) to define the extension of the operator, and investigate its properties.\n\nLet the proposed fractional calculus operator $J^\\alpha$ have the following properties: for $\\alpha \\in \\mathbb{Z}$ it can reproduce repeated integration and differentiation, it is a linear operator, that $J^\\alpha$ forms an abelian group when acting on some suitable subset of differentiable functions and it is analytic in the parameter $\\alpha$. First let us identify a suitable operator to extend into a fractional calculus with nice algebra, and function space for which this operator is well behaved. Let the operator $J^k$ for $k \\in \\mathbb{Z}$ be,\n\\begin{equation}\nJ^k f(x) := \\begin{cases} \\frac{1}{(k-1)!}\\int_{-\\infty}^x (x - t)^{k - 1}f(t)dt & k \\geq 1 \\\\ f(x) & k = 0 \\\\ \\frac{d^{\\left|k\\right|}}{dx^{\\left|k\\right|}}f(x) & k \\leq -1 \\end{cases}\n\\label{integer_calculus}\n\\end{equation}\nUsing this operator a space of functions where the operator is well behaved is any function that is bounded by positive exponentials. Let us call this set $\\mathbb{S}$,\n\\begin{equation}\n\\mathbb{S} := \\left\\lbrace f \\in C^\\omega(\\mathbb{C}) \\middle| (\\forall n \\in \\mathbb{Z}^+)(\\exists a_n \\in \\mathbb{M}, b_n \\in \\mathbb{R}), b_n > 0, \\frac{d^n}{dx^n}f(x) \\in \\mathbb{B}(a_n, b_n) \\right\\rbrace\n\\label{exponentialy_bounded}\n\\end{equation}\nWhere the sets $\\mathbb{M}$ and $\\mathbb{B}$ are defined as follows,\n\\[\\mathbb{M} := \\left\\lbrace a \\in C(\\mathbb{R}) \\middle| (\\forall x, y \\in \\mathbb{R}), x > y, a(x) > a(y) \\geq 0 \\right\\rbrace\\]\n\\[\\mathbb{B}(a, b) := \\left\\lbrace f \\in C^\\omega(\\mathbb{C}) \\middle| (\\forall x, x_0 \\in \\mathbb{R}), x \\leq x_0, |f(x)| \\leq a(x_0)e^{bx} \\right\\rbrace\\]\nSome properties of this operator and the space on which it acts is investigated in the appendix. Note that set $\\mathbb{S}$ is a subset of $C^{\\omega}(\\mathbb{C})$, it is a vector space and if $f(x) \\in \\mathbb{S}$ then $J^k f(x) \\in \\mathbb{S}$. Also if $f(x) \\in \\mathbb{S}$ then $J^k J^m f(x) = J^{k + m} f(x)$, where $k, m \\in \\mathbb{Z}$.\n\n\\section{Fractional extension using RMT}\nNow that we have a suitable linear operator to extend, we need a procedure to actually extend the operator. We will use RMT for this purpose as given by G. H. Hardy \\cite[p.~186]{hardy1940ramanujan}. 
Given a sequence $\\phi(k)$, where $k \\in \\mathbb{Z}^+$, define\n\\[g(u) = \\sum_{k=0}^\\infty \\frac{\\phi(k)(-u)^k}{k!}\\]\nIf the series converges and its Mellin transform exists then the following result holds,\n\\[\\int_0^{\\infty} u^{s-1}g(u)du = \\Gamma(s)\\phi(-s)\\]\nwhere $\\phi(-s)$ is the analytic interpolation of the sequence $\\phi(k)$ subject to some growth constraints \\cite[p.~188--189]{hardy1940ramanujan}.\nUsing RMT, fractional calculus can be defined in the following manner. For some function $f(x) \\in \\mathbb{S}$, consider the sequence of values $\\left. J^{k}f(x)\\right|_{x = x_0}$ produced by the operator $J^k$ at a point $x_0$. Every version of fractional calculus consists of producing an interpolation over some or all of this sequence. RMT can be used to find one such interpolation. Let us define $\\phi(k)$ in terms of a function $f(x) \\in \\mathbb{S}$,\n\\[\\phi(k) = \\left. J^{-k}f(x)\\right|_{x = x_0} = \\left. \\frac{d^k}{dx^k}f(x) \\right|_{x = x_0}\\]\nIn this case $g(u)$ is,\n\\[g(u) = \\sum_{k=0}^\\infty \\frac{(-u)^k}{k!} \\left. \\frac{d^k}{dx^k}f(x)\\right|_{x=x_0}\\]\nNote that $g(u)$ is the Taylor expansion of $f(x_0 - u)$ in terms of $u$. Now using $f(x_0 - u)$ in the Mellin transform,\n\\[\\int_0^{\\infty} u^{\\alpha-1}f(x_0 - u)du = \\Gamma(\\alpha)\\phi(-\\alpha) = \\left. \\Gamma(\\alpha)J^{\\alpha}f(x)\\right|_{x=x_0}\\]\nUsing the substitution $t = x_0 - u$, the integral becomes,\n\\[\\int_{x_0}^{-\\infty} -(x_0 - t)^{\\alpha-1}f(t)dt = \\left. \\Gamma(\\alpha)J^{\\alpha}f(x) \\right|_{x=x_0}\\]\nFinally, rearranging and applying the result for arbitrary $x$,\n\\[J^{\\alpha}f(x) = \\frac{1}{\\Gamma(\\alpha)} \\int_{-\\infty}^{x} (x - t)^{\\alpha-1} f(t)dt\\]\nThe operator $J^{\\alpha}$ is only defined on the region $\\mathfrak{R}(\\alpha) \\geq 1$, but since it is constructed by RMT it is analytic on this region. So the complete operator must be defined in terms of the analytic continuation of the interpolation produced by RMT. In the case of this operator, its domain can be extended by using integration by parts, given $f(x) \\in \\mathbb{S}$,\n\\[J^{\\alpha}\\frac{d}{dx}f(x) = \\frac{1}{\\Gamma(\\alpha)}\\int_{-\\infty}^x (x-t)^{\\alpha-1} \\left( \\frac{d}{dt} f(t) \\right)dt = \\frac{\\alpha-1}{\\Gamma(\\alpha)}\\int_{-\\infty}^x (x-t)^{\\alpha-2}f(t)dt=\\frac{1}{\\Gamma(\\alpha-1)}\\int_{-\\infty}^x (x-t)^{\\alpha-2}f(t)dt = J^{\\alpha-1}f(x)\\]\nUsing this repeatedly allows for $J^{\\alpha}$ to be extended to the region $\\mathfrak{R}(\\alpha-n) \\geq 1$ for $n \\in \\mathbb{Z}^+$,\n\\begin{equation}\nJ^{\\alpha}\\frac{d^n}{dx^n}f(x)=J^{\\alpha-n}f(x)\n\\label{analytic_continuation}\n\\end{equation}\nTherefore using RMT to interpolate between the derivatives of $f(x)$ yields an analytic function that interpolates between the derivatives and integrals. However, the operator $J^{\\alpha}$ produced from RMT is not unique. Consider any function $\\psi$ with\n\\[\\psi(\\alpha) \\in C^{\\omega}(\\mathbb{C}), \\psi(k) = 0, k \\in \\mathbb{Z}^-\\]\nand define\n\\[\\left. J'^{\\alpha}f(x)\\right|_{x=x_0} = \\left. J^{\\alpha}f(x)\\right|_{x=x_0} + \\psi(\\alpha)\\]\nEvaluating the function at the points $\\alpha = k$ for $k \\in \\mathbb{Z}^-$,\n\\[\\left. J'^{k}f(x)\\right|_{x=x_0} = \\left. J^{k}f(x)\\right|_{x=x_0} + \\psi(k) = \\left. J^{k}f(x)\\right|_{x=x_0}, k \\in \\mathbb{Z}^-\\]\nThis new function $\\left. J'^{\\alpha}f(x) \\right|_{x=x_0}$ also satisfies the basic properties expected of fractional calculus.
This demonstrates that using RMT to define a fractional calculus produces many of the algebraic properties I am looking for, but it is not unique. In order to define a unique fractional calculus operator using RMT, an additional constraint must be added. Let us denote an arbitrary compatible fractional calculus operator as $I^\\alpha$. Define a new operator $R^\\alpha$ as,\n\\[R^\\alpha f(x) = I^\\alpha f(x) - J^\\alpha f(x)\\]\nwhere $J^\\alpha$ is the operator found using RMT. If we can prove that $R^\\alpha = 0$ when some additional constraint is applied, then that constraint would force the operator $I^\\alpha$ to be uniquely defined. To start, apply the generalized Leibniz rule, given in \\textit{Fractional Integrals and Derivatives} \\cite[p.~280]{samko1993fractional}, to the operator $I^\\alpha$,\n\\[I^\\alpha f(x)g(x) = \\sum_{n=0}^\\infty \\binom{-\\alpha}{n}\\left( I^{\\alpha + n}f(x) \\right)\\left( \\frac{d^n}{dx^n} g(x)\\right) = \\sum_{n=0}^\\infty \\binom{-\\alpha}{n}\\left( \\left(J^{\\alpha + n} + R^{\\alpha + n}\\right)f(x) \\right)\\left( \\frac{d^n}{dx^n} g(x)\\right)\\]\nThen subtracting $J^\\alpha f(x)g(x)$ from both sides, \n\\begin{equation}\nI^\\alpha f(x)g(x) - \\sum_{n=0}^\\infty \\binom{-\\alpha}{n}\\left( J^{\\alpha + n}f(x) \\right)\\left( \\frac{d^n}{dx^n} g(x)\\right) = R^\\alpha f(x)g(x) = \\sum_{n=0}^\\infty \\binom{-\\alpha}{n}\\left( R^{\\alpha + n}f(x) \\right)\\left( \\frac{d^n}{dx^n} g(x)\\right)\n\\label{RemainderOperatorProductRule}\n\\end{equation}\nNow that we are set up, look at the set of ODEs $\\frac{d^n}{dx^n}f(x) = f(x)$. From this the following general statement can be made,\n\\[\\exists f(x) \\forall n \\in \\mathbb{Z}, \\frac{d^n}{dx^n}f(x) = f(x), f(0) = 1\\]\ngiven that negative integers in the index $n$ are interpreted as repeated integrals of the form $\\int_{-\\infty}^x f(t)dt$. This statement is true and admits only one solution $f(x) = e^x$. Let us generalize this statement to a fractional form,\n\\begin{equation}\n\\exists f(x) \\forall \\alpha \\in \\mathbb{C}, \\frac{d^\\alpha}{dx^\\alpha}f(x) = f(x), f(0) = 1\n\\label{ConstaintOnFractionalCalculus}\n\\end{equation}\nFrom statement \\eqref{ConstaintOnFractionalCalculus} we can conclude that if $f(x)$ exists it must be $f(x) = e^x$, since it necessitates that $\\frac{d}{dx}f(x) = f(x)$ and $f(0) = 1$. We will now require that statement \\eqref{ConstaintOnFractionalCalculus} applies to fractional calculus. Using this condition, $R^\\alpha$ can be calculated in the case where $f(x) = e^x$: the constraint gives $I^\\alpha e^x = e^x$, and since $J^\\alpha e^x = e^x$, it follows that $R^\\alpha e^x = 0$. Now apply this result to \\eqref{RemainderOperatorProductRule} with $f(x) = e^x$ and $g(x) = e^{-x}h(x)$ with $h(x) \\in \\mathbb{S}$.\n\\[R^\\alpha h(x) = \\sum_{n=0}^\\infty \\binom{-\\alpha}{n}\\left( R^{\\alpha + n}e^x \\right)\\left( \\frac{d^n}{dx^n} h(x)e^{-x}\\right) = \\sum_{n=0}^\\infty \\binom{-\\alpha}{n} 0 \\left( \\frac{d^n}{dx^n} h(x)e^{-x}\\right) = 0\\]\nGiven statement \\eqref{ConstaintOnFractionalCalculus} is true, then $R^\\alpha = 0$, so the operator $J^{\\alpha}$ is the compatible fractional calculus operator. This operator is still only defined for $\\mathfrak{R}(\\alpha) \\geq 1$.
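As an informal numerical sanity check (a short NumPy/SciPy sketch, not part of the argument above), the identity $J^{\\alpha}e^x = e^x$ can be verified directly from the integral form with the substitution $u = x - t$:\n\\begin{verbatim}\n# Sketch: check J^alpha e^x = e^x numerically. With u = x - t,\n# J^alpha e^x = e^x / Gamma(alpha) * integral_0^inf u^(alpha-1) e^(-u) du.\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import gamma\n\ndef J_exp(alpha, x):\n    val, _ = quad(lambda u: u**(alpha - 1) * np.exp(-u), 0, np.inf)\n    return np.exp(x) * val / gamma(alpha)\n\nfor alpha in (1.0, 1.5, 2.5):\n    print(alpha, J_exp(alpha, 1.0), np.exp(1.0))  # all approximately e\n\\end{verbatim}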
Using the analytic continuation of the functions produced by RMT \\eqref{analytic_continuation}, a general form of the operator can be defined as,\n\\begin{equation}\nJ^{\\alpha}f(x) = \\frac{1}{\\Gamma(\\alpha+n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-1}\\left( \\frac{d^n}{dt^n} f(t) \\right)dt\n\\label{fractional_calculus}\n\\end{equation}\nwhere $f \\in \\mathbb{S}$, $n \\in \\mathbb{Z}^+$, $\\alpha \\in \\mathbb{C}$ and $\\mathfrak{R}(\\alpha + n) \\geq 1$.\n\n\\section{Properties of $J^{\\alpha}$}\nNow that we have $J^{\\alpha}$ uniquely defined, we can check if it maintains the properties that were built into the operator $J^k$. Note that the operator \\eqref{fractional_calculus} is a linear operator for all $\\alpha \\in \\mathbb{C}$ and that for $\\alpha \\in \\mathbb{Z}$ it is equivalent to the operator $J^k$. Given $f \\in \\mathbb{S}$, $\\alpha \\in \\mathbb{C}$, $n \\in \\mathbb{Z}^+$ and $\\mathfrak{R}(\\alpha + n) \\geq 1$ then, using Leibniz's integral rule,\n\\[\\frac{d}{dx} J^{\\alpha} f(x) = \\frac{d}{dx}\\frac{1}{\\Gamma(\\alpha + n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-1} \\left( \\frac{d^n}{dt^n} f(t) \\right)dt = \\frac{\\alpha+n-1}{\\Gamma(\\alpha+n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-2} \\left( \\frac{d^n}{dt^n} f(t) \\right)dt = J^{\\alpha-1}f(x)\\]\nand using integration by parts,\n\\[J^{\\alpha} \\left( \\frac{d}{dx} f(x) \\right) = \\frac{1}{\\Gamma(\\alpha+n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-1} \\left( \\frac{d^{n+1}}{dt^{n+1}} f(t) \\right)dt = -\\frac{1-n-\\alpha}{\\Gamma(\\alpha+n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-2} \\left( \\frac{d^n}{dt^n} f(t) \\right)dt = J^{\\alpha-1}f(x)\\]\nSo the operator $J^\\alpha$ and the derivative commute, and applying this property repeatedly yields $\\frac{d^m}{dx^m}J^{\\alpha}f(x) = J^{\\alpha-m}f(x) = J^{\\alpha}\\frac{d^m}{dx^m}f(x)$, where $m \\in \\mathbb{Z}^+$.\n\nGiven $f \\in \\mathbb{S}$, $\\alpha, \\beta \\in \\mathbb{C}$, $n,m \\in \\mathbb{Z}^+$, $\\mathfrak{R}(\\alpha + n) \\geq 1$ and $\\mathfrak{R}(\\beta + m) \\geq 1$ then,\n\\[J^{\\alpha}J^{\\beta} f(x) = \\frac{1}{\\Gamma(\\alpha + n)}\\int_{-\\infty}^x (x-t)^{\\alpha+n-1} \\left( \\frac{d^n}{dt^n} \\frac{1}{\\Gamma(\\beta+m)}\\int_{-\\infty}^t (t-s)^{\\beta+m-1} \\left( \\frac{d^m}{ds^m} f(s) \\right)ds \\right)dt\\]\nRearranging and using the fact that derivatives commute with $J^{\\alpha}$,\n\\[J^{\\alpha}J^{\\beta} f(x) = \\frac{d^{n+m}}{dx^{n+m}}\\frac{1}{\\Gamma(\\alpha+n)\\Gamma(\\beta+m)}\\int_{-\\infty}^x \\int_{-\\infty}^t (x-t)^{\\alpha+n-1}(t-s)^{\\beta+m-1} f(s) ds dt\\]\nSwapping the order of integration,\n\\[J^{\\alpha}J^{\\beta} f(x) = \\frac{d^{n+m}}{dx^{n+m}}\\frac{1}{\\Gamma(\\alpha+n)\\Gamma(\\beta+m)} \\int_{-\\infty}^x \\int_{s}^x (x-t)^{\\alpha+n-1}(t-s)^{\\beta+m-1} f(s) dt ds\\]\nUsing a change of variables with $t=x-(x-s)r$ and rearranging yields,\n\\[J^{\\alpha}J^{\\beta} f(x) = \\frac{d^{n+m}}{dx^{n+m}}\\frac{1}{\\Gamma(\\alpha+n)\\Gamma(\\beta+m)} \\int_{-\\infty}^x (x-s)^{\\alpha+\\beta+n+m-1} f(s)ds \\int_0^1 r^{\\alpha+n-1}(1 - r)^{\\beta+m-1}dr\\]\nIdentifying the beta integral, $\\int_0^1 r^{\\alpha+n-1}(1-r)^{\\beta+m-1}dr=\\frac{\\Gamma(\\alpha+n)\\Gamma(\\beta+m)}{\\Gamma(\\alpha+\\beta+n+m)}$, and then rearranging,\n\\[J^{\\alpha}J^{\\beta} f(x) = \\frac{d^{n+m}}{dx^{n+m}}\\frac{1}{\\Gamma(\\alpha+\\beta+n+m)} \\int_{-\\infty}^x (x-s)^{\\alpha+\\beta+n+m-1} f(s)ds = \\frac{d^{n+m}}{dx^{n+m}} J^{\\alpha+\\beta+n+m} f(x) = J^{\\alpha + \\beta} f(x)\\]\nRelaxing constraints on $J^\\alpha$ to allow Schwartz distributions enables
us to reconstruct other standard fractional calculus definitions. Denote the Heaviside function as $H(x)$. The Riemann-Liouville fractional integral ${}_aI_x^\\alpha$ can be defined as,\n\\[{}_aI_x^\\alpha f(x) = \\frac{1}{\\Gamma(\\alpha)}\\int_a^x (x - t)^{\\alpha - 1}f(t)dt = \\frac{1}{\\Gamma(\\alpha)}\\int_{-\\infty}^x (x - t)^{\\alpha - 1}H(t - a)f(t)dt = J^\\alpha \\left(H(x - a)f(x)\\right)\\]\nThe Riemann-Liouville fractional derivative ${}_aD_x^\\alpha f(x)$, where $n = \\lceil \\alpha \\rceil$, is\n\\[{}_aD_x^\\alpha f(x) = \\frac{d^n}{dx^n} {}_aI_x^{n - \\alpha} f(x) = \\frac{d^n}{dx^n} J^{n - \\alpha} \\left(H(x - a)f(x)\\right) = J^{-\\alpha} \\left(H(x - a)f(x)\\right)\\]\nThe Caputo derivative ${}_a^C D_x^\\alpha f(x)$ is,\n\\[{}_a^C D_x^\\alpha f(x) = \\frac{1}{\\Gamma(n - \\alpha)} \\int_a^x (x - t)^{n - \\alpha - 1} \\left( \\frac{d^n}{dt^n}f(t) \\right) dt = J^{n - \\alpha} \\left(H(x - a)\\frac{d^n}{dx^n}f(x)\\right) \\]\nSo $J^\\alpha$ is a commuting linear operator, with the property $J^{\\alpha}J^{\\beta} f(x) = J^{\\alpha+\\beta} f(x) = J^{\\beta}J^{\\alpha} f(x)$, where $\\alpha,\\beta \\in \\mathbb{C}$ and $f \\in \\mathbb{S}$. Also, given that Schwartz distributions are allowed, when $a = -\\infty$ and $f(x) \\in \\mathbb{S}$, then $J^\\alpha f(x) = {}_{a}I_x^\\alpha f(x) = {}_{a}D_x^{-\\alpha} f(x) = {}_{a}^C D_x^{-\\alpha} f(x)$.\n\n\\section{Conclusion}\nWe have seen that RMT can be used to extend a calculus operator to a fractional version, and that the fractional calculus operator described is compatible with multiple definitions of fractional calculus operators given the constraints applied. Also, this fractional calculus operator uniquely satisfies the conditions we imposed on fractional calculus when statement \\eqref{ConstaintOnFractionalCalculus} is added as an additional constraint. Finally, the operator can be generalized. If we modified the operator $J^k$ to use a different lower limit of integration and found a corresponding space of functions on which the operator is invertible, then it should be possible to apply RMT to define a unique fractional calculus on that set of functions. So if the lower limit of integration were set to $\\infty i$, the resulting operator should be a version of fractional calculus defined on a set that includes periodic functions.\n\n\\clearpage\n\n\\appendix*\n\\section{The integral as an invertible linear operator}\nAs a starting point for defining a fractional calculus we will first construct an integral operator having properties compatible with what I am looking for in fractional calculus. This operator, which we will call $J^k$, should be a commuting linear operator that can reproduce repeated integration and differentiation. Also we will construct a function space $\\mathbb{S}$ where the operator $J^k$ is well-behaved.\n\n\\subsection{Definitions}\nLet the operator $J^k$ for $k \\in \\mathbb{Z}$ be,\n\\[ J^k f(x) := \\begin{cases} \\frac{1}{(k-1)!}\\int_{-\\infty}^x (x - t)^{k - 1}f(t)dt & k \\geq 1 \\\\ f(x) & k = 0 \\\\ \\frac{d^{\\left|k\\right|}}{dx^{\\left|k\\right|}}f(x) & k \\leq -1 \\end{cases} \\]\nUnder this operator, any function whose absolute value, along with the absolute values of its derivatives, is bounded by exponentials of the form $ae^{bx}$, with $a > 0$ and $b > 0$, is well-behaved. This set of functions can be expanded to produce a function space.
Let us call this space $\\mathbb{S}$,\n\\[ \\mathbb{S} := \\left\\lbrace f \\in C^\\omega(\\mathbb{C}) \\middle| (\\forall n \\in \\mathbb{Z}^+)(\\exists a_n \\in \\mathbb{M}, b_n \\in \\mathbb{R}), b_n > 0, \\frac{d^n}{dx^n}f(x) \\in \\mathbb{B}(a_n, b_n) \\right\\rbrace \\]\nWhere $\\mathbb{M}$ is a set of positive monotonic increasing functions and $\\mathbb{B}$ is a set of analytic functions for which the absolute value of the function along the real line is bounded by the exponential $e^{bx}$ rescaled by the function $a(x_0)$. The sets $\\mathbb{M}$ and $\\mathbb{B}$ are defined as follows,\n\\[\\mathbb{M} := \\left\\lbrace a \\in C(\\mathbb{R}) \\middle| (\\forall x, y \\in \\mathbb{R}), x > y, a(x) > a(y) \\geq 0 \\right\\rbrace\\]\n\\[\\mathbb{B}(a, b) := \\left\\lbrace f \\in C^\\omega(\\mathbb{C}) \\middle| (\\forall x, x_0 \\in \\mathbb{R}), x \\leq x_0, |f(x)| \\leq a(x_0)e^{bx} \\right\\rbrace\\]\n\n\\subsection{Properties of $\\mathbb{S}$}\nLooking at the properties of the space $\\mathbb{S}$, let us assume the following. Given $f, g \\in \\mathbb{S}$, by definition there exist functions such that the following is true for $x_0 \\geq x$, with $a_n, c_n \\in \\mathbb{M}$ and $b_n, d_n \\in \\mathbb{R}$,\n\\[\\left| \\frac{d^n}{dx^n}f(x) \\right| \\leq a_n(x_0)e^{b_n x}, \\left| \\frac{d^n}{dx^n} g(x) \\right| \\leq c_n(x_0)e^{d_n x}\\]\nTo simplify the argument, we will assume that $b_n \\geq d_n$.\n\\[\\left| \\frac{d^n}{dx^n} \\left( f(x) + g(x) \\right) \\right| \\leq \\left| \\frac{d^n}{dx^n} f(x) \\right| + \\left| \\frac{d^n}{dx^n} g(x) \\right| \\leq a_n(x_0)e^{b_n x} + c_n(x_0)e^{d_n x} \\leq \\left(a_n(x_0)e^{(b_n - d_n) x_0} + c_n(x_0)\\right)e^{d_n x}\\]\n\\[\\left| \\frac{d^n}{dx^n} (\\alpha f(x)) \\right| \\leq \\left|\\alpha\\right| \\left| \\frac{d^n}{dx^n} f(x) \\right| \\leq \\left| \\alpha \\right| a_n(x_0)e^{b_n x}\\]\nSo the set $\\mathbb{S}$ is a function space. Now investigating the action of derivatives and integrals on this space, let us check if the set $\\mathbb{S}$ is closed under repeated differentiation.\n\\[\\left| \\frac{d^n}{dx^n} \\left(\\frac{d^m}{dx^m} f(x)\\right) \\right| \\leq \\left| \\frac{d^{n+m}}{dx^{n+m}} f(x) \\right| \\leq a_{n+m}(x_0)e^{b_{n+m} x}\\]\nIn the case of integration we will check for closure in two steps: first, if $n = 0$,\n\\[\\left| \\frac{d^0}{dx^0} \\left( \\int_{-\\infty}^x f(t)dt \\right) \\right| \\leq \\int_{-\\infty}^x \\left| f(t) \\right|dt \\leq \\int_{-\\infty}^x a_0(x_0)e^{b_0 t}dt = a_0(x_0)\\frac{1}{b_0}e^{b_0 x}\\]\nand then in the case of $n \\geq 1$, using Leibniz's integral rule,\n\\[\\left| \\frac{d^n}{dx^n} \\left( \\int_{-\\infty}^x f(t)dt \\right) \\right| \\leq \\left| \\frac{d^{n-1}}{dx^{n-1}} f(x) \\right| \\leq a_{n-1}(x_0)e^{b_{n-1} x}\\]\nUsing the result above repeatedly extends it to repeated integration. In the case of $m>n$, \n\\[\\left| \\frac{d^n}{dx^n} \\left(\\frac{1}{(m-1)!} \\int_{-\\infty}^x (x - t)^{m-1} f(t)dt\\right) \\right| \\leq a_0(x_0)\\frac{1}{{b_0}^{m-n}}e^{b_0 x}\\]\nand in the case of $n \\geq m$,\n\\[\\left| \\frac{d^n}{dx^n} \\left(\\frac{1}{(m-1)!} \\int_{-\\infty}^x (x - t)^{m-1} f(t)dt\\right) \\right| \\leq a_{n-m}(x_0)e^{b_{n-m} x}\\]\nThis shows that repeated integrals and derivatives of functions in $\\mathbb{S}$ exist and are in the space $\\mathbb{S}$.\n\n\\subsection{Properties of $J^k$}\nConsider the operator $J^k$ acting on functions in the space $\\mathbb{S}$.
Note that the operator is a linear operator for all $k \\in \\mathbb{Z}$ and that for $\\left| k \\right| > 1$ the operator $J^k$ can be constructed by repeated application of either $J^1$ or $J^{-1}$. Also note that the set $\\mathbb{S}$ is closed under the action of $J^k$. Given that the function $f(x) \\in \\mathbb{S}$, and using the fundamental theorem of calculus and Leibniz's integral rule, we can show that, \n\\[J^1J^{-1} f(x) = \\int_{-\\infty}^x \\frac{d}{dt} f(t)dt = f(x) - 0 = J^0 f(x)\\]\n\\[J^{-1}J^1 f(x) = \\frac{d}{dx} \\int_{-\\infty}^x f(t)dt = f(x)\\frac{d}{dx}x - 0 = J^0 f(x)\\]\nSo the following identity holds: $J^1J^{-1} f(x) = J^0 f(x) = J^{-1}J^1 f(x)$. Applying this identity repeatedly and using the fact that $J^k$ can be constructed by repeated application of $J^1$ and $J^{-1}$ leads to the result,\n\\begin{equation}\nJ^kJ^m f(x) = J^{k+m} f(x)\n\\label{additiveProperty}\n\\end{equation}\n\n\\bibliographystyle{plain}\n\\bibliography{fractional_calculus.bib}\n\\end{document}\n", "meta": {"hexsha": "cdb3107c912edff688f3e569949a9fbee850e3f4", "size": 20422, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/fractional_calculus/rmt_fractional_calculus_and_notes/invertible_fractional_calculus.tex", "max_stars_repo_name": "lsiemens/lsiemens.github.io", "max_stars_repo_head_hexsha": "d93bf32c8e849b6514aea0f8eb582c42543a53d6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-16T18:16:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-16T18:16:07.000Z", "max_issues_repo_path": "theory/fractional_calculus/rmt_fractional_calculus_and_notes/invertible_fractional_calculus.tex", "max_issues_repo_name": "lsiemens/lsiemens.github.io", "max_issues_repo_head_hexsha": "d93bf32c8e849b6514aea0f8eb582c42543a53d6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-03-08T23:16:36.000Z", "max_issues_repo_issues_event_max_datetime": "2015-12-29T02:17:16.000Z", "max_forks_repo_path": "theory/fractional_calculus/rmt_fractional_calculus_and_notes/invertible_fractional_calculus.tex", "max_forks_repo_name": "lsiemens/lsiemens.github.io", "max_forks_repo_head_hexsha": "d93bf32c8e849b6514aea0f8eb582c42543a53d6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-16T18:16:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-16T18:16:10.000Z", "avg_line_length": 126.0617283951, "max_line_length": 929, "alphanum_fraction": 0.6698168642, "num_tokens": 7347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.564879618396156}}
{"text": "\\documentclass{tufte-handout}\n\\usepackage[utf8]{inputenc}\n\\usepackage{tikz}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\n\\usepackage{color}\n\\newcommand{\\red}[1]{{\\color{red} #1}}\n\\usepackage{booktabs}\n\\title{Red Scare!}\n\\begin{document}\n\\maketitle\n\\begin{abstract}\nOh no! All I wanted was to write a straightforward reachability exercise.\nBut some of the vertices have turned red, giving me cruel ideas for much harder questions! \n\\end{abstract}\n\\section{Problems}\nIn every problem of this exercise, we consider a graph $G$ with vertex set $V(G)$ and edge set $E(G)$.\nThe graph can be directed or undirected.\nEvery graph in this exercise is simple (no multiple edges between any pair of vertices) and unweighted.\nWe fix the notation $n=|V(G)|$ and $m=|E(G)|$.\n\n\\begin{marginfigure}\n \\begin{tikzpicture}[yscale=.4, xscale = .7]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$, label = left:$s$] at (0,0) {};\n   \\node (1) [label=below:$1$] at (1,0) {};\n   \\node (2) [label=below:$2$] at (2,0) {};\n   \\node (3) [label=below:$3$, label = right:$t$] at (3,0) {};\n   \\node (4) [label=above:$4$, fill=red, draw, inner sep =1.5pt] at (1.5,1) {};\n   \\node (5) [label=left:$5$, fill=red, draw, inner sep =1.5pt] at (.5,3) {};\n   \\node (6) [label=above:$6$] at (1.5,3) {};\n   \\node (7) [label=right:$7$, fill=red, draw, inner sep =1.5pt] at (2.5,3) {};\n   \\draw (0) -- (1) -- (2) -- (3);\n   \\draw (0) -- (4) -- (3);\n   \\draw (0) -- (5) -- (6) -- (7) -- (3);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{Example graph $G_{\\text{ex}}$ corresponding to the file {\\tt G-ex}.}\n\\end{marginfigure}\n\nEvery graph comes with two specified vertices $s,t\\in V(G)$ called the \\emph{start} and \\emph{end} vertices, and a subset $R\\subseteq V(G)$ of \\emph{red} vertices.\nIn particular, $R$ can inlude $s$ and $t$.\nWe fix the notation $r= |R|$.\nIn the example graph $G_{\\text{ex}}$, we have $s=0$, $t=3$, and $R=\\{4,5,7\\}$.\n\nAn \\emph{$s,t$-path} is a sequence of distinct vertices $v_1,\\ldots, v_l$ such that $v_1=s$, $v_l=t$, and $v_iv_{i+1}\\in E(G)$ for each $i\\in\\{1,\\ldots,l-1\\}$.\nThe \\emph{length} of such a path is $l-1$, the number of edges.\nNote that this definition requires the vertices on a path to be distinct, this is sometimes called a \\emph{simple} path.\n\nThe problems we want solved for each graph are the following:\n\\begin{description}\n  \\item[None] Return the length of a shortest $s,t$-path internally avoiding $R$.\n    To be precise, let $P$ be the set of $s,t$-paths using no vertices from $R$ except maybe $s$ and $t$ themselves. 
Let $l(p)$ denote the length of a path $p$.\n    Return $\\min\\{\\,l(p)\\colon p\\in P\\,\\}$.\n    If no such path exists, return `-1'.\n    Note that if the edge $st$ exists then the answer is 1, no matter the colour of $s$ or $t$.\n    In $G_{\\text{ex}}$, the answer is 3 (because of the path 0, 1, 2, 3.)\n  \\item[Some] Return `true' if there is a path from $s$ to $t$ that includes at least one vertex from $R$.\n    Otherwise, return `false.'\n    In $G_{\\text{ex}}$, the answer is `true' (in fact, two such paths exist: the path 0, 4, 3 and the path 0, 5, 6, 7, 3.)\n  \\item [Many] Return the maximum number of red vertices on any path from $s$ to $t$.\n    To be precise, let $P$ be the set of $s,t$-paths and let $r(p)$ denote the number of red vertices on a path $p$.\n    Return $\\max\\{\\,r(p)\\colon p\\in P\\,\\}$.\n    If no path from $s$ to $t$ exists, return `-1'.\n    In $G_{\\text{ex}}$, the answer is 2 (because of the path 0, 5, 6, 7, 3.)\n  \\item [Few] Return the minimum number of red vertices on any path from $s$ to $t$.\n    To be precise, let $P$ be the set of $s,t$-paths and let $r(p)$ denote the number of red vertices on a path $p$.\n    Return $\\min\\{\\,r(p)\\colon p\\in P\\,\\}$.\n    If no path from $s$ to $t$ exists, return `-1'.\n    In $G_{\\text{ex}}$, the answer is 0 (because of the path 0, 1, 2, 3.)\n  \\item [Alternate] Return `true' if there is a path from $s$ to $t$ that alternates between red and non-red vertices.\n    To be precise, a path $v_1,\\ldots, v_l$ is \\emph{alternating} if for each $i\\in\\{1,\\ldots,l-1\\}$, exactly one endpoint of the edge $v_iv_{i+1}$ is red.\n    Otherwise, return `false.'\n    In $G_{\\text{ex}}$, the answer is `true' (because of the path 0, 5, 6, 7, 3.)\n\\end{description}\n\n\\subsection{Requirements}\n\\paragraph{Solved instances.}\nFor three of the problems (I\u2019m not telling which), you need to be able to handle \\emph{all} instances.\nFor the remaining two problems, you will not be able to solve all instances, but should be able to solve roughly half of them.\nYour solutions must run in polynomial time. \nIf you do have a polynomial-time implementation that takes more than 1 hour on some instance, just abort it and report this in your report.\\sidenote{However, this should not happen. As a guideline, I have a non-optimised Python implementation that solves all the instances in a single run in half an hour on a low-powered 2012 laptop.}\n\nFor two of the problems (I\u2019m not telling which), you will not be able to write an algorithm that works for all graphs, because the problems are hard in general.\nFor one of those problems, you should be able to argue for computational hardness with a simple reduction.\nThe other problem will probably mystify you.\nA sophisticated hardness argument exists in the research literature (but not in the course text book)---you are welcome to try to find it and include a reference in your report.\nBut you are not required to find an explanation, and absolutely not required to come up with the reduction yourself.
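\nTo fix ideas about what a polynomial-time routine can look like, here is a minimal sketch for problem \\emph{None}---a plain breadth-first search that never enters a red vertex other than $t$. (This assumes an adjacency-list dictionary {\\tt adj} and a set {\\tt red}; it is an illustration, not a prescribed solution.)\n\\begin{verbatim}\n# Sketch for problem None: BFS that skips red vertices,\n# except that s and t themselves may be red.\nfrom collections import deque\n\ndef none_length(adj, red, s, t):\n    dist = {s: 0}\n    queue = deque([s])\n    while queue:\n        u = queue.popleft()\n        if u == t:\n            return dist[u]\n        for v in adj[u]:\n            if v not in dist and (v == t or v not in red):\n                dist[v] = dist[u] + 1\n                queue.append(v)\n    return -1\n\\end{verbatim}\nOn $G_{\\text{ex}}$ this returns 3, matching the example above.\n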
\n\n\\paragraph{Universality.}\nYour algorithms must run in polynomial time on a well-defined class of graphs.\n\u201cWell-defined class\u201d means something like \u201call graphs,\u201d\\sidenote{\u201cAll graphs\u201d means all simple, loop-less, unweighted graphs.} \u201cdirected graphs,\u201d \u201cundirected graphs,\u201d \u201cbipartite graphs,\u201d \u201cacyclic graphs,\u201d \u201cgraphs of bounded treewidth,\u201d \u201cplanar graphs,\u201d \u201cexpanders\u201d or even a combination of these.\n\nIn particular, you are allowed to do something like this: \n\\begin{quotation}\n  \\vspace*{-3ex} \\begin{tabbing}\n  if \\= (isBipartite($G$)) then\\\\\n  \\> $\\cdots\\qquad\\qquad$\\=\\# run the Strumpf--Chosa algorithm \\\\\n  else \\\\\n  \\> print(``?!'') \\>\\# problem is NP-hard for non-bipartite graphs, so give up\n\\end{tabbing}\n\\end{quotation}\n\nOn the other hand, you are not allowed to base your algorithm on specific knowledge of which graphs are in the {\\tt data} directory,\nFor an extreme example, the following would not be allowed:\n\\begin{quotation}\n  \\vspace*{-3ex}\n  \\begin{tabbing}\n  if \\= (filename == \"rusty-1-17\")) then\\\\\n  \\> print(\"14\") \\=\\# solved by hand\\\\\n  else \\\\\n  \\> print(``?'') \\>\\# no idea what to do \n\\end{tabbing}\n\\end{quotation}\n\n\\paragraph{Libraries.}\nThis exercise focusses on choosing \\emph{between} algorithms, not implementing them.\nThus, you are \\emph{not} required to write these algorithms from scratch.\nFor instance, if you need a minimum spanning tree, and you already have a working implementation of Prim\u2019s algorithm, you are free to reuse that.\nIn particular, you are free to use a library implementation of these algorithms. \nYou are also free to use implementations from books or from other people, provided that you are not violating any intellectual property rights.\n(It goes without saying that you properly cite the source of these implementations in your report.)\n\nYou are highly encouraged to use your own implementation of standard graph algorithms that you may have made for some other exercise.\nIf you do this, then separate that implementation in your source code, maybe by leaving it in its own file. \nAttribute all the original authors in the source code.\n\n\\paragraph{Deliverables.}\nHand in \n\\begin{enumerate}\n  \\item a report; follow the skeleton in {\\tt doc/report.pdf}. 
\n  \\item a text file {\\tt results.txt} with all the results, as specified in the report.\n  \\item the programs you have written to answer the questions, including any scripts you need to run these programs on the instances, and a README file that explains how to recreate {\\tt results.txt} by running your programs.\n\\end{enumerate}\n\n\\newpage\n\\section{Appendix: Gallery of Graphs}\n\nThis gallery consists of descriptions and drawings of many of the graphs in the {\\tt data} directory.\nIdeally, these descriptions are useful for finding mistakes in your code.\nIn particular, for many of these graphs it is obvious what the correct answers are.\nSome of the graphs are \\emph{random} graphs---they have no structure and are pretty boring.\nOthers, such as the Word graphs, have a lot of structure.\n\n\\subsection{Individual graphs}\n\\begin{marginfigure}\n \\begin{tikzpicture}[scale=.4]\n   \\node  at (1,-2) {\\tt P3-0};\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$, label = above:$s$] at (0,0) {};\n   \\node (1) [label=below:$1$, fill=red, draw, inner sep =1.5pt] at (1,0) {};\n   \\node (2) [label=below:$2$, label = above:$t$] at (2,0) {};\n   \\draw (0) -- (1) -- (2);\n \\end{scope}\n \\end{tikzpicture}\n\\quad\n \\begin{tikzpicture}[scale=.4]\n   \\node  at (1,-2) {\\tt P3-1};\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$, label = above:$s$] at (0,0) {};\n   \\node (1) [label=below:$1$, label = above:$t$] at (1,0) {};\n   \\node (2) [label=below:$2$, fill=red, draw, inner sep =1.5pt] at (2,0) {};\n   \\draw (0) -- (1) -- (2);\n \\end{scope}\n \\end{tikzpicture}\n\n \\bigskip\n \\begin{tikzpicture}[scale=.4]\n   \\node  at (0,-2) {\\tt K3-0};\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};\n   \\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};\n   \\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};\n   \\draw (0) -- (1) -- (2) -- (0);\n \\end{scope}\n \\end{tikzpicture}\n\\quad\n \\begin{tikzpicture}[scale=.4]\n   \\node  at (0,-2) {\\tt K3-1};\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};\n   \\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};\n   \\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};\n   \\draw [->] (0) -- (1);\n   \\draw [->] (1) -- (2);\n   \\draw [->] (0) -- (2);\n \\end{scope}\n \\end{tikzpicture}\n\\quad\n \\begin{tikzpicture}[scale=.4]\n   \\node  at (0,-2) {\\tt K3-2};\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=135:$0$, label = 45:$s$] at (90:1cm) {};\n   \\node (1) [label=210:$1$, fill=red, draw, inner sep =1.5pt] at (210:1cm) {};\n   \\node (2) [label=285:$2$, label = 15:$t$] at (330:1cm) {};\n   \\draw [->] (1) -- (0);\n   \\draw [->] (1) -- (2);\n   \\draw [->] (0) -- (2);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{Paths and triangles with various choices of orientation and redness.}\n \\end{marginfigure}\n\n The {\\tt data} directory contains a small number of individual graphs, typically of very small size.\n This includes $G_{\\text{ex}}$, a number of small graphs on 3 vertices shown to the right, and an all-red dodecahedron.\n It is a good idea to use these graphs initially to ensure that your parser works, your graph data structure makes sense, etc.\nThese
graphs also serve as an invitation to create more toy examples by hand while you test your code, so ensure that everything works on very small graphs.\nKeep some good, clear drawings of `your' graphs around; they help immensely when finding mistakes.\n\n\\subsection{Word graphs}\nIn the word graphs, each vertex represents a five-letter word of English.\nFor $k\\in\\{1,2\\}$, an edge joins $u$ and $v$ if the corresponding words are anagrams, or if they differ in exactly $k$ positions.\nFor instance ``begin'' and ``binge'' are neighbours, and so are ``turns'' and ``terns'' for $k=1$.\n\nThe word graphs come in two flavours.\nThe \\emph{rusty word} graphs are guaranteed to include ``begin,'' ``ender,'' and ``rusty.''\nThe vertex corresponding to ``rusty'' is coloured red; no other vertices are red.\n\nThe filenames are {\\tt rusty-$k$-$n$}.\n\\begin{figure}\n\\[\n\\begin{tikzpicture}[\n    xscale = 1.5,\n    every node/.style={ draw,rectangle,\n      rounded corners, inner sep = 1.5pt, font =\\sf\\small }\n    ]\n  \\node (rungs) at (3,2) {RUNGS}; \n  \\node (sings) at (1,2) {SINGS}; \n  \\node (begin) [label = below:$s$] at (0,0) {BEGIN};\n  \\node (rents) at (4,1) {RENTS};\n  \\node (ender) [label = below:$t$] at (4,0) {ENDER};\n  \\node (rests) at (5,1) {RESTS};\n  \\node (binge) at (1,0) {BINGE};\n  \\node (singe) at (1,1) {SINGE};\n  \\node (rings) at (2,2) {RINGS};\n  \\node (rusty) [fill = red] at (5,3) {RUSTY};\n  \\node (rente) at (3,1) {RENTE};\n  \\node (runts) at (4,2) {RUNTS};\n  \\node (enter) at (3,0) {ENTER};\n  \\node (rusts) at (5,2) {RUSTS};\n  \\node (tinge) at (2,0) {TINGE};\n  \\node (tings) at (2,1) {TINGS};\n  \\node (runty) at (4,3) {RUNTY};\n\\draw (rungs) -- (runts) ;\n\\draw (sings) -- (tings) ;\n\\draw (sings) -- (rings) ;\n\\draw (sings) -- (singe) ;\n\\draw (begin) -- (binge) ;\n\\draw (rents) -- (rests) ;\n\\draw (rents) -- (rente) ;\n\\draw (rents) -- (runts) ;\n\\draw (ender) -- (enter) ;\n\\draw (rests) -- (rusts) ;\n\\draw (binge) -- (tinge) ;\n\\draw (binge) -- (singe) ;\n\\draw (singe) -- (tinge) ;\n\\draw (rings) -- (tings) ;\n\\draw (rusty) -- (rusts) ;\n\\draw (rusty) -- (runty) ;\n\\draw (rente) -- (enter) ;\n\\draw (runts) -- (rusts) ;\n\\draw (runts) -- (runty) ;\n\\draw (tinge) -- (tings) ;\n\\draw (rungs) -- (rings) ;\n  \\end{tikzpicture}\n\\]\n\\caption{{\\tt rusty-1-17}}\n\\end{figure}\n\nThe \\emph{common word} graphs use the same adjacency structure, and always include `start' and `ender.'
\nA word is red if it is uncommon (like `ender'); just under half the words are uncommon.\nThe filenames for these graphs are {\\tt common-$n$}.\n\n\\subsection{Grids}\n\n\\begin{marginfigure}\n\\begin{tikzpicture}[scale = .5, every node/.style={circle, fill, inner sep =1.5pt}]\n  \\foreach \\x in {0,1,2,3,4} {\n    \\foreach \\y in {0,1,2,3,4} {\n      \\node (\\x_\\y) at (\\x,\\y) {};\n    }\n  }\n  \\node [label = left:$s$] at (0,0) {};\n  \\node [label = right:$t$] at (4,4) {};\n  \\draw (1_4) -- (2_4);\n  \\draw (1_4) -- (0_4);\n  \\draw (1_4) -- (1_3);\n  \\draw (1_4) -- (0_3);\n  \\draw (1_3) -- (2_3);\n  \\draw (1_3) -- (2_4);\n  \\draw (1_3) -- (1_2);\n  \\draw (1_3) -- (0_2);\n  \\draw (1_3) -- (0_3);\n  \\draw (1_2) -- (2_2);\n  \\draw (1_2) -- (2_3);\n  \\draw (1_2) -- (1_1);\n  \\draw (1_2) -- (0_2);\n  \\draw (1_2) -- (0_1);\n  \\draw (1_1) -- (2_1);\n  \\draw (1_1) -- (2_2);\n  \\draw (1_1) -- (1_0);\n  \\draw (1_1) -- (0_0);\n  \\draw (1_1) -- (0_1);\n  \\draw (1_0) -- (2_0);\n  \\draw (1_0) -- (2_1);\n  \\draw (1_0) -- (0_0);\n  \\draw (0_4) -- (0_3);\n  \\draw (0_2) -- (0_3);\n  \\draw (0_2) -- (0_1);\n  \\draw (0_0) -- (0_1);\n  \\draw (3_1) -- (3_0);\n  \\draw (3_1) -- (3_2);\n  \\draw (3_1) -- (2_0);\n  \\draw (3_1) -- (2_1);\n  \\draw (3_1) -- (4_2);\n  \\draw (3_1) -- (4_1);\n  \\draw (3_0) -- (2_0);\n  \\draw (3_0) -- (4_0);\n  \\draw (3_0) -- (4_1);\n  \\draw (3_3) -- (3_2);\n  \\draw (3_3) -- (3_4);\n  \\draw (3_3) -- (2_2);\n  \\draw (3_3) -- (2_3);\n  \\draw (3_3) -- (4_3);\n  \\draw (3_3) -- (4_4);\n  \\draw (3_2) -- (2_1);\n  \\draw (3_2) -- (2_2);\n  \\draw (3_2) -- (4_2);\n  \\draw (3_2) -- (4_3);\n  \\draw (3_4) -- (2_3);\n  \\draw (3_4) -- (2_4);\n  \\draw (3_4) -- (4_4);\n  \\draw (2_0) -- (2_1);\n  \\draw (2_1) -- (2_2);\n  \\draw (2_2) -- (2_3);\n  \\draw (2_3) -- (2_4);\n  \\draw (4_2) -- (4_1);\n  \\draw (4_2) -- (4_3);\n  \\draw (4_3) -- (4_4);\n  \\draw (4_0) -- (4_1);\n  \\node [fill=red,draw,inner sep = 1.5pt] at (1,0) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (1,1) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (1,2) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (1,3) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (3,1) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (3,2) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (3,3) {};\n  \\node [fill=red,draw,inner sep = 1.5pt] at (3,4) {};\n\\end{tikzpicture}\n\\caption{The grid for $N=5$, represented by {\\tt grid-5-0}.}\n\\end{marginfigure}\n\nThe Grid graphs consist of $N^2$ vertices that represent integer coordinates $(x,y)$ for $x,y\\in\\{0,\\ldots,N-1\\}$.\nEach vertex $(x,y)$ is connected to $(x-1,y)$, $(x,y-1)$, and $(x-1,y-1)$, provided that these vertices exist.\nThe red vertices form a maze-like structure in the graph:\nEvery second row is red, except for the topmost or bottommost vertex, alternating between the two.\nThere is a unique $s,t$-path avoiding all red vertices, and a shortest alternating path following the diagonal.\n\nGrid graphs of various sizes are represented by {\\tt grid-$N$-0}.\nEach of these graphs comes with two variants.\nIn {\\tt grid-$N$-1}, some random red vertices have turned non-red (so there are `holes' in the hedges).\nIn {\\tt grid-$N$-2}, some random non-red vertices have turned red (so some passages are blocked).\n\n\n\\subsection{Walls}\n\n\\begin{marginfigure}\n \\begin{tikzpicture}[scale=.4]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$, label=left:$s$] at (0,0) {};\n   \\node (1)
[label=below:$1$] at (1,0) {};\n   \\node (2) [label=below:$2$] at (2,0) {};\n   \\node (3) [label=below:$3$,fill=red,draw,inner sep = 1.5pt] at (3,0) {};\n   \\node (4) [label=above:$4$] at (3,1) {};\n   \\node (5) [label=above:$5$] at (2,1) {};\n   \\node (6) [label=above:$6$] at (1,1) {};\n   \\node (7) [label=above:$7$, label=left:$t$] at (0,1) {};\n   \\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{The single-brick wall, {\\tt wall-p-1}.}\n \\end{marginfigure}\n\nBricks are arranged like a wall of height $2$.\nHere are three bricks with overlap $1$:\n\\begin{marginfigure}\n \\begin{tikzpicture}[scale=.4]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$] at (0,0) {};\n   \\node (1) [label=below:$1$] at (1,0) {};\n   \\node (2) [label=below:$$] at (2,0) {};\n   \\node (3) [label=below:$$] at (3,0) {};\n   \\node (4) [label=above:$4$] at (3,1) {};\n   \\node (5) [label=above:$5$] at (2,1) {};\n   \\node (6) [label=above:$6$] at (1,1) {};\n   \\node (7) [label=above:$7$] at (0,1) {};\n   \\node (8) [label=below:$$] at (4,0) {};\n   \\node (9) [label=below:$$] at (5,0) {};\n   \\node (10) [label=below:\\small$10$]  at (5,-1) {};\n   \\node (11) [label=below:\\small$11$]  at (4,-1) {};\n   \\node (12) [label=below:\\small$12$]  at (3,-1) {};\n   \\node (13) [label=below:\\small$13$]  at (2,-1) {};\n   \\node (14) [label=below:\\small$14$]  at (6,0) {};\n   \\node (15) [label=below:\\small$15$,fill=red,draw,inner sep = 1.5pt]  at (7,0) {};\n   \\node (16) [label=above:\\small$16$]  at (7,1) {};\n   \\node (17) [label=above:\\small$17$]  at (6,1) {};\n   \\node (18) [label=above:\\small$18$]  at (5,1) {};\n   \\node (19) [label=above:\\small$19$]  at (4,1) {};\n   \\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);\n   \\draw (3)--(8)--(9)--(10)--(11)--(12)--(13)--(2);\n   \\draw (9)--(14)--(15)--(16)--(17)--(18)--(19)--(8);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{Three bricks with overlap 1, {\\tt wall-p-3}.}\n \\end{marginfigure}\n\n \\begin{marginfigure}\n \\begin{tikzpicture}[scale=.4]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$, label=left:$s$] at (0,0) {};\n   \\node (1) [label=below:$1$] at (1,0) {};\n   \\node (2) [label=below:$2$] at (2,0) {};\n   \\node (3) [label=below:$$] at (3,0) {};\n   \\node (4) [label=above:$4$] at (3,1) {};\n   \\node (5) [label=above:$5$] at (2,1) {};\n   \\node (6) [label=above:$6$] at (1,1) {};\n   \\node (7) [label=above:$7$, label=left:$t$] at (0,1) {};\n   \\node (8) [label=above:$8$] at (4,0) {};\n   \\node (9) [label=above:$9$] at (5,0) {};\n   \\node (10) [label=below:\\small$$]  at (6,0) {};\n   \\node (11) [label=below:\\small$11$]  at (6,-1) {};\n   \\node (12) [label=below:\\small$12$]  at (5,-1) {};\n   \\node (13) [label=below:\\small$13$]  at (4,-1) {};\n   \\node (14) [label=below:\\small$14$]  at (3,-1) {};\n   \\node (15) [label=below:\\small$15$]  at (7,0) {};\n   \\node (16) [label=below:\\small$16$]  at (8,0) {};\n   \\node (17) [label=below:\\small$17$,fill=red,draw,inner sep = 1.5pt]  at (9,0) {};\n   \\node (18) [label=above:\\small$18$]  at (9,1) {};\n   \\node (19) [label=above:\\small$19$]  at (8,1) {};\n   \\node (20) [label=above:\\small$20$]  at (7,1) {};\n   \\node (21) [label=above:\\small$21$]  at (6,1) {};\n   \\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);\n   \\draw 
(3)--(8)--(9)--(10)--(11)--(12)--(13)--(14)--(3);\n   \\draw (10)--(15)--(16)--(17)--(18)--(19)--(20)--(21)--(10);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{Three bricks with overlap 0, {\\tt wall-z-3}.}\n \\end{marginfigure}\n\n \\begin{marginfigure}\n \\begin{tikzpicture}[scale=.4]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (0) [label=below:$0$] at (0,0) {};\n   \\node (1) [label=below:$1$] at (1,0) {};\n   \\node (2) [label=below:$2$] at (2,0) {};\n   \\node (3) [label=below:$3$] at (3,0) {};\n   \\node (4) [label=above:$4$] at (3,1) {};\n   \\node (5) [label=above:$5$] at (2,1) {};\n   \\node (6) [label=above:$6$] at (1,1) {};\n   \\node (7) [label=above:$7$] at (0,1) {};\n   \\node (8) [label=below:$8$] at (4,0) {};\n   \\node (9) [label=below:$9$] at (5,0) {};\n   \\node (10) [label=below:$10$]  at (6,0) {};\n   \\node (11) [label=below:$11$,fill=red,draw,inner sep = 1.5pt]  at (7,0) {};\n   \\node (12) [label=above:$12$]  at (7,1) {};\n   \\node (13) [label=above:$13$]  at (6,1) {};\n   \\node (14) [label=above:$14$]  at (5,1) {};\n   \\node (15) [label=above:$15$]  at (4,1) {};\n   \\draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);\n   \\draw (3)--(8)--(9)--(10)--(11)--(12)--(13)--(14)--(15)--(8);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{Two bricks with negative overlap, {\\tt wall-n-2}.}\n \\end{marginfigure}\n\nThe Wall graphs are a family consisting of $N$ overlapping 8-cycles called \\emph{bricks}.\nThe bricks are laid in a wall of height 2, with various intervals of overlap.\nEach wall has a single red vertex $w$, the rightmost vertex at the same level as vertex $0$.\nThese graphs are interesting instances for finding paths from $s$ to $t$ through the red vertex.\nThey should help you avoid some obvious pitfalls when developing an algorithm for the problem \\emph{Some}.\n\nThe Walls with overlap 1, called {\\tt wall-p-$N$}, allow an $s,t$-path through $w$.\n\nThe Walls with overlap 0, called {\\tt wall-z-$N$}, allow a walk from $s$ to $t$ through $w$, but this walk will use $N-2$ vertices twice. \nIn particular, such a walk is not a path, and your algorithm for Problem \\emph{Some} should not be fooled by it.\n\nThe Walls with negative overlap, called {\\tt wall-n-$N$}, also allow a walk from $s$ to $t$ through $w$, but this walk would use $N-2$ edges twice.\nAgain, such a walk is not a path.\n
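\nAs an aside (our illustration, not part of the data set): a tempting but wrong approach to \\emph{Some} is to concatenate a shortest $s,w$-path with a shortest $w,t$-path. The plain-Python sketch below shows that the result is merely a walk through $w$ and must still be checked for repeated vertices; on the overlap-0 and negative-overlap walls it is exactly the kind of non-path walk described above.\n\\begin{verbatim}\nfrom collections import deque\n\ndef shortest_path(adj, u, v):\n    # BFS over an adjacency dict {vertex: iterable of neighbours}\n    prev, seen, q = {u: None}, {u}, deque([u])\n    while q:\n        x = q.popleft()\n        if x == v:\n            path = []\n            while x is not None:\n                path.append(x)\n                x = prev[x]\n            return path[::-1]\n        for y in adj[x]:\n            if y not in seen:\n                seen.add(y)\n                prev[y] = x\n                q.append(y)\n    return None\n\ndef naive_some(adj, s, t, w):\n    # Pitfall: this is a walk through w, not necessarily a path!\n    p1, p2 = shortest_path(adj, s, w), shortest_path(adj, w, t)\n    if p1 is None or p2 is None:\n        return None\n    walk = p1 + p2[1:]\n    return walk, len(set(walk)) == len(walk)\n\\end{verbatim}\n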
\n\\section{Ski}\nSkiFree\\footnote{Read more at Wikipedia\u2019s SkiFree page, the author\u2019s site {\\tt ski.ihoc.net}, or (amazingly) SkiFree Fan Fiction at {\\tt www.fanfiction.net/game/SkiFree/}.} is an ancient computer game by Chris Pirih, part of the Microsoft Entertainment Pack in 1991, and going back to VT100 VAX/VMS terminals, ultimately inspired by Activision\u2019s \\emph{Skiing} for the Atari 2600 console.\n\nTime to show you can handle yourself off-piste. Get from the start to the goal, avoiding the trees, dogs, rocks, etc.\nBeware of the dreaded Yeti, who is said to lurk at the red nodes of the mountain and will certainly chase you down and eat you if you pass him.\n\nIn each level, the player moves down, and either one step left or right.\n(Some moves are blocked by obstacles.)\n\n\\begin{figure*}[h]\n\\includegraphics[width=7cm]{ski.png}  \n\\quad\n\\begin{tikzpicture}[scale = .5, every node/.style={circle, fill, inner sep =1.5pt}]\n  \\node  (0) [label = above:$0\\,\\,s$] at (0,0) {};\n  \\node  (1) [label = above:$1$] at (-1,-1) {};\n  \\node  (2) [label = above:$2$] at (1,-1) {};\n  \\node  (3) [label = above:$3$] at (-2,-2) {};\n  \\node  (4) at (0,-2) {};\n  \\node  (5) [label = above:$5$] at (2,-2) {};\n  \\node  (6) [label = above:$6$] at (-3,-3) {};\n  \\node  (7) at (-1,-3) {};\n  \\node  (8) at (1,-3) {};\n  \\node  (9) [label = above:$9$] at (3,-3) {};\n  \\node (10) at (-4,-4) {};\n  \\node (11) at (-2,-4) {};\n  \\node (12) [label = left:$12$, fill = red, draw] at (0,-4) {};\n  \\node (13) at (2,-4) {};\n  \\node (14) at (4,-4) {};\n  \\node (15) at (-5,-5) {};\n  \\node (16) at (-3,-5) {};\n  \\node (17) at (-1,-5) {};\n  \\node (18) at (1,-5) {};\n  \\node (19) at (3,-5) {};\n  \\node (20) at (5,-5) {};\n  \\node (21) at (-6,-6) {};\n  \\node (22) at (-4,-6) {};\n  \\node (23) at (-2,-6) {};\n  \\node (24) at (0,-6) {};\n  \\node (25) at (2,-6) {};\n  \\node (26) at (4,-6) {};\n  \\node (27) at (6,-6) {};\n  \\node (29) at (-5,-7) {};\n  \\node (30) at (-3,-7) {};\n  \\node (31) at (-1,-7) {};\n  \\node (32) at (1,-7) {};\n  \\node (33) at (3,-7) {};\n  \\node (34) at (5,-7) {};\n  \\node (35) at (7,-7) {};\n  \\node (36) [label= below:$36\\,\\,t$] at (0,-8) {};\n  \\draw [->] (0) -- (1);\n  \\draw [->] (0) -- (2);\n  \\draw [->] (1) -- (3);\n  \\draw [->] (2) -- (4);\n  \\draw [->] (2) -- (5);\n  \\draw [->] (3) -- (6);\n  \\draw [->] (3) -- (7);\n  \\draw [->] (4) -- (7);\n  \\draw [->] (4) -- (8);\n  \\draw [->] (5) -- (8);\n  \\draw [->] (6) -- (11);\n  \\draw [->] (7) -- (11);\n  \\draw [->] (8) -- (12);\n  \\draw [->] (9) -- (13);\n  \\draw [->] (9) -- (14);\n  \\draw [->] (10) -- (15);\n  \\draw [->] (11) -- (16);\n  \\draw [->] (12) -- (18);\n  \\draw [->] (13) -- (18);\n  \\draw [->] (14) -- (19);\n  \\draw [->] (14) -- (20);\n  \\draw [->] (15) -- (22);\n  \\draw [->] (16) -- (22);\n  \\draw [->] (16) -- (23);\n  \\draw [->] (17) -- (23);\n  \\draw [->] (17) -- (24);\n  \\draw [->] (18) -- (24);\n  \\draw [->] (18) -- (25);\n  \\draw [->] (19) -- (25);\n  \\draw [->] (19) -- (26);\n  \\draw [->] (20) -- (26);\n  \\draw [->] (20) -- (27);\n  \\draw [->] (21) -- (29);\n  \\draw [->] (22) -- (29);\n  \\draw [->] (23) -- (30);\n  \\draw [->] (23) -- (31);\n  \\draw [->] (24) -- (31);\n  \\draw [->] (24) -- (32);\n  \\draw [->] (25) -- (33);\n  \\draw [->] (26) -- (33);\n  \\draw [->] (26) -- (34);\n  \\draw [->] (27) -- (35);\n  \\draw [->] (29) -- (36);\n  \\draw [->] (30) -- (36);\n  \\draw [->] (31) -- (36);\n  \\draw [->] (32) -- (36);\n  \\draw [->] (33) -- (36);\n  \\draw [->] (34) -- (36);\n  \\draw [->] (35) -- (36);\n\\end{tikzpicture}\n\\caption{A Ski level and the corresponding graph.\n  This graph is described in {\\tt ski-illustration}.\nThe Yeti is lurking at node 12.\nGraphics from {\\tt ski.ihoc.net}.}\n\\end{figure*}\n\n\\subsection{Increasing numbers}\nEach \\emph{Increase} graph is generated from a sequence $\\alpha_1,\\ldots, \\alpha_n$ of unique integers with $0\\leq\\alpha_i\\leq 2n$.\n(The random process is this: Pick a subset of size $n$ from $\\{0,\\ldots, 2n\\}$ and arrange the elements in random order.)\nWe set $s=\\alpha_1$ and $t=\\alpha_n$.\nOdd numbers are red.\nThere is an edge from  $\\alpha_i$ to $\\alpha_j$ if $i<j$ and $\\alpha_i<\\alpha_j$.\n
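\nAs a sketch (ours, not the official generator of the data set), the stated random process in plain Python:\n\\begin{verbatim}\nimport random\n\ndef increase_instance(n):\n    # pick a subset of size n from {0,...,2n}, in random order\n    labels = random.sample(range(2 * n + 1), n)\n    s, t = labels[0], labels[-1]\n    red = [a for a in labels if a % 2 == 1]  # odd numbers are red\n    edges = [(labels[i], labels[j])\n             for i in range(n) for j in range(i + 1, n)\n             if labels[i] < labels[j]]       # i < j and alpha_i < alpha_j\n    return labels, s, t, red, edges\n\\end{verbatim}\n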
\n\\begin{figure}\n \\begin{tikzpicture}[]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (2)  [label=below:$2$, label = left:$s$] at (0,0) {};\n   \\node (13) [label=below:$13$, fill=red, draw] at (1,0) {};\n   \\node (1)  [label=below:$1$,  fill=red, draw] at (2,0) {};\n   \\node (11) [label=below:$11$, fill=red, draw] at (3,0) {};\n   \\node (15) [label=below:$15$, fill=red, draw] at (4,0) {};\n   \\node (3)  [label=below:$3$,  fill=red, draw] at (5,0) {};\n   \\node (4)  [label=below:$4$] at (6,0) {};\n   \\node (5)  [label=below:$5$, label = right:$t$, fill=red, draw] at (7,0) {};\n   \\draw [->] (2) edge[] (13);\n   \\draw [->] (2) edge [bend left] (11);\n   \\draw [->] (2) edge [bend left] (15);\n   \\draw [->] (2) edge [bend left] (3);\n   \\draw [->] (2) edge [bend left] (4);\n   \\draw [->] (2) edge [bend left] (5);\n   \\draw [->] (13) edge [bend left] (15);\n   \\draw [->] (1) edge [] (11);\n   \\draw [->] (1) edge [bend left] (15);\n   \\draw [->] (1) edge [bend left] (3);\n   \\draw [->] (1) edge [bend left] (4);\n   \\draw [->] (1) edge [bend left] (5);\n   \\draw [->] (11) edge []  (15);\n   \\draw [->] (3) edge [] (4);\n   \\draw [->] (3) edge [bend left] (5);\n   \\draw [->] (4) edge [] (5);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{\\tt increase-n8-1.0}\n  \\end{figure}\n\n\n\\begin{figure}\n \\begin{tikzpicture}[]\n \\begin{scope}[every node/.style={circle, fill, inner sep =1.5pt}]\n   \\node (11)  [label=below:$11$, label = left:$s$] at (0,0) {};\n   \\node (15) [label=below:$15$, fill=red, draw] at (1,0) {};\n   \\node (1)  [label=below:$1$,  fill=red, draw] at (2,0) {};\n   \\node (8) [label=below:$8$] at (3,0) {};\n   \\node (13) [label=below:$13$, fill=red, draw] at (4,0) {};\n   \\node (4)  [label=below:$4$] at (5,0) {};\n   \\node (6)  [label=below:$6$] at (6,0) {};\n   \\node (12)  [label=below:$12$, label = right:$t$] at (7,0) {};\n   \\draw [->] (11) edge[] (15);\n   \\draw [->] (11) edge [bend left] (13);\n   \\draw [->] (11) edge [bend left] (12);\n   \\draw [->] (1) edge [] (8);\n   \\draw [->] (1) edge [bend left] (13);\n   \\draw [->] (1) edge [bend left] (4);\n   \\draw [->] (1) edge [bend left] (6);\n   \\draw [->] (1) edge [bend left] (12);\n   \\draw [->] (8) edge [] (13);\n   \\draw [->] (8) edge [bend left] (12);\n   \\draw [->] (4) edge [] (6);\n   \\draw [->] (4) edge [bend left] (12);\n   \\draw [->] (6) edge [] (12);\n \\end{scope}\n \\end{tikzpicture}\n \\caption{\\tt increase-n8-2.0}\n  \\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "b7500aa46df9232d75366a6a5d6835cc287ab66b", "size": 29031, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "red-scare/doc/red-scare.tex", "max_stars_repo_name": "Sebastian-ba/DoDoBing", "max_stars_repo_head_hexsha": "6edcc18de22ad76505d2c13ac6a207a2c274cc95", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-09-25T11:59:20.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-20T12:55:21.000Z", "max_issues_repo_path": "red-scare/doc/red-scare.tex", "max_issues_repo_name": "ITU-2019/DoDoBing", "max_issues_repo_head_hexsha": "6edcc18de22ad76505d2c13ac6a207a2c274cc95", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, 
"max_issues_repo_issues_event_min_datetime": "2017-09-25T12:04:51.000Z", "max_issues_repo_issues_event_max_datetime": "2017-11-13T07:51:40.000Z", "max_forks_repo_path": "red-scare/doc/red-scare.tex", "max_forks_repo_name": "ITU-2019/DoDoBing", "max_forks_repo_head_hexsha": "6edcc18de22ad76505d2c13ac6a207a2c274cc95", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.5900900901, "max_line_length": 387, "alphanum_fraction": 0.589266646, "num_tokens": 10972, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5648796174151802}}
{"text": "\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\\begin{document}\n\\logo\n\\rulename{Two Moving Averages}\n\\tblofcontents\n\\ruledescription{This price-only rule takes positions based on two moving averages, and a minimum threshold K. It allocates a positive position when the shorter moving average is above K fraction higher than the other average and vice versa.}\n\\ruleparameters\n{Minimum ratio}{0.01}{Threshold difference in averages.}{$\\minimumratio$}\n{Short price average length}{2}{Number of days to include in the fast moving average.}{$\\averagelengthshort$}\n{Long price average length}{2}{Number of extra days to include in slow moving average.}{$\\averagelengthlong$}\n{Maximum allocation}{1.0}{Maximum size of position to take.}{$Z_{max}$}\n\\stoptable\n\n\\section{Equation}\n\n\\begin{equation}\n\\bigcontribution(\\currenttime, \\averagelengthshort, \\price) = \\frac{1}{\\averagelengthshort} \\sum_{\\dummyiterator=0}^{\\averagelengthshort-1} \\price(\\currenttime - \\dummyiterator)\\\\\n\\end{equation}\n\n\\begin{equation}\n\\bigcontribution(\\currenttime, \\averagelengthlong, \\price) = \\frac{1}{\\averagelengthlong} \\sum_{\\dummyiterator=0}^{\\averagelengthlong-1} \\price(\\currenttime - \\dummyiterator)\\\\\n\\end{equation}\n\n\\begin{equation}\n\\bigcontributiontime = \\frac{\\bigcontributionshort}{\\bigcontributionlong} \\\\\n\\end{equation}\n\n\\begin{equation}\n\\position_\\currenttime = \n\\begin{cases} \n\\mbox{$Z_{max}$,} & \\mbox{if } \\bigcontributiontime > 1 + 2\\minimumratio \\\\ \n\\mbox{$\\frac{\\bigcontributiontime - 1 - \\minimumratio}{\\minimumratio}$,} & \\mbox{if } 1 + \\minimumratio < \\bigcontributiontime < 1 + 2\\minimumratio \\\\\n\\mbox{0.0,} & \\mbox{if } 1/(1 + \\minimumratio) < \\bigcontributiontime < 1 + \\minimumratio \\\\\n\\mbox{$-\\frac{1/\\bigcontributiontime - 1 - \\minimumratio}{\\minimumratio}$,} & \\mbox{if } 1/(1 + 2\\minimumratio) < \\bigcontributiontime < 1/(1 + \\minimumratio) \\\\\n\\mbox{$-Z_{max}$,} & \\mbox{if } \\bigcontributiontime < 1/(1 + 2\\minimumratio) \\\\\n\\end{cases}\n\\end{equation}\n\n\\hspace{200mm}\n\\\\\n\\noindent where $\\position_\\currenttime$ is the portfolio allocation at time $\\currenttime$, and $\\price = \\price(\\currenttime)$ is the value of the price series.\n\n\\hspace{200mm}\n\\hspace{200mm}\n\n\\keyterms\n\\furtherlinks\n\n\\end{document}\n", "meta": {"hexsha": "9d583b5bfe296691debefdbc4cb5146fae7a33f2", "size": 2292, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/TwoMovingAverages.tex", "max_stars_repo_name": "pawkw/infertrade", "max_stars_repo_head_hexsha": "48231c2c026b4163291e299cd938969401ca6a4a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_issues_repo_path": "docs/strategies/tex/TwoMovingAverages.tex", "max_issues_repo_name": "pawkw/infertrade", "max_issues_repo_head_hexsha": "48231c2c026b4163291e299cd938969401ca6a4a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/TwoMovingAverages.tex", "max_forks_repo_name": "pawkw/infertrade", "max_forks_repo_head_hexsha": 
"48231c2c026b4163291e299cd938969401ca6a4a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 43.2452830189, "max_line_length": 242, "alphanum_fraction": 0.7421465969, "num_tokens": 730, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5648796095464163}}
{"text": "\nIn this section, we deal with only with the FirstOrderType2R case.\n\n\n  \\begin{equation}\n    \\begin{array}{l}\n      \\label{eq:full-toto1-ter}\n      M x_{k+1} = M x_{k} +h \\theta f(x_{k+1},t_{k+1}) +h(1-\\theta)f(x_{k},t_{k}) + h r_{k+\\gamma} \\\\[2mm]\n      y_{k+\\gamma} =  h(t_{k+\\gamma},x_{k+\\gamma},\\lambda _{k+\\gamma}) \\\\[2mm]\n      r_{k+\\gamma} = g(t_{k+\\gamma},\\lambda_{k+\\gamma})\\\\[2mm]\n    \\end{array}\n\\end{equation}\n\n \\paragraph{Newton's linearization of the first line of~(\\ref{eq:full-toto1-ter})} The first line of the  problem~(\\ref{eq:full-toto1-ter}) can be written under the form of a residue $\\mathcal R$ depending only on $x_{k+1}$ and $r_{k+\\gamma}$ such that \n\\begin{equation}\n  \\label{eq:full-NL3}\n  \\mathcal R (x_{k+1},r _{k+\\gamma}) =0\n\\end{equation}\nwith $$\\mathcal R(x,r) = M(x - x_{k}) -h\\theta f( x , t_{k+1}) - h(1-\\theta)f(x_k,t_k) - h r. $$\nThe solution of this system of nonlinear equations is sought as a limit of the sequence $\\{ x^{\\alpha}_{k+1},r^{\\alpha}_{k+\\gamma} \\}_{\\alpha \\in \\NN}$ such that\n \\begin{equation}\n   \\label{eq:full-NL7}\n   \\begin{cases}\n     x^{0}_{k+1} = x_k \\\\ \\\\\n     r^{0}_{k+\\gamma} = (1-\\gamma ) r_{k} + \\gamma r^0_{k+1}  = r_k \\\\ \\\\     \n     \\mathcal R_L( x^{\\alpha+1}_{k+1},r^{\\alpha+1}_{k+\\gamma}) = \\mathcal\n     R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+\\gamma})  + \\left[ \\nabla_{x} \\mathcal\n     R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+\\gamma})\\right] (x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1} ) + \\\\[2mm]\n     \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\left[ \\nabla_{r} \\mathcal R(x^{\\alpha}_{k+1},r^{\\alpha}_{k+\\gamma})\\right] (r^{\\alpha+1}_{k+\\gamma} - r^{\\alpha}_{k+\\gamma} ) =0\n \\end{cases}\n\\end{equation}\n\\begin{ndrva}\n  What about $r^0_{k+\\gamma}$ ?\n\\end{ndrva}\n\nThe residu free is also defined (useful for implementation only):\n\\[\\mathcal R _{\\free}(x) \\stackrel{\\Delta}{=}  M(x - x_{k}) -h\\theta f( x , t_{k+1}) - h(1-\\theta)f(x_k,t_k).\\]\nWe get\n\\begin{equation}\n  \\mathcal R (x^{\\alpha}_{k+1},r^{\\alpha}_{k+\\gamma}) = \\fbox{$\\mathcal R^{\\alpha}_{k+1} \\stackrel{\\Delta}{=}  \\mathcal R_{\\free}(x^{\\alpha}_{k+1} )  - h r^{\\alpha}_{k+\\gamma}$}\\label{eq:full-rfree-1}\n\\end{equation}\n\n\\[  \\mathcal R\n_{\\free}(x^{\\alpha}_{k+1} )=\\fbox{$ \\mathcal R _{\\free, k+1} ^{\\alpha} \\stackrel{\\Delta}{=}  M(x^{\\alpha}_{k+1} - x_{k}) -h\\theta f( x^{\\alpha}_{k+1} , t_{k+1}) - h(1-\\theta)f(x_k,t_k)$}\\]\n \nThe computation of the Jacobian of $\\mathcal R$ with respect to $x$, denoted by $   W^{\\alpha}_{k+1}$ leads to \n\\begin{equation}\n   \\label{eq:full-NL9}\n   \\begin{array}{l}\n    W^{\\alpha}_{k+1} \\stackrel{\\Delta}{=} \\nabla_{x} \\mathcal R (x^{\\alpha}_{k+1})= M - h  \\theta \\nabla_{x} f(  x^{\\alpha}_{k+1}, t_{k+1} ).\\\\\n \\end{array}\n\\end{equation}\nAt each time--step, we have to solve the following linearized problem,\n\\begin{equation}\n   \\label{eq:full-NL10}\n    \\mathcal R^{\\alpha}_{k+1} + W^{\\alpha}_{k+1} (x^{\\alpha+1}_{k+1} -\n    x^{\\alpha}_{k+1}) - h  (r^{\\alpha+1}_{k+\\gamma} - r^{\\alpha}_{k+\\gamma} )  =0 ,\n\\end{equation}\nBy using (\\ref{eq:full-rfree-1}), we get\n\\begin{equation}\n  \\label{eq:full-rfree-2}\n  \\mathcal R _{\\free}(x^{\\alpha}_{k+1})  - h  r^{\\alpha+1}_{k+\\gamma}   + W^{\\alpha}_{k+1} (x^{\\alpha+1}_{k+1} -\n    x^{\\alpha}_{k+1})  =0 \n\\end{equation}\n\n%\\fbox\n{\n  \\begin{equation}\n    \\boxed{ x^{\\alpha+1}_{k+1} = 
\nSolving (\\ref{eq:full-rfree-2}) for $x^{\\alpha+1}_{k+1}$ gives\n%\\fbox\n{\n  \\begin{equation}\n    \\boxed{ x^{\\alpha+1}_{k+1} = h(W^{\\alpha}_{k+1})^{-1}r^{\\alpha+1}_{k+\\gamma} +x^\\alpha_{\\free}}\n  \\end{equation}\n}\nwith:\n\\begin{equation}\n  \\boxed{x^\\alpha_{\\free}\\stackrel{\\Delta}{=}x^{\\alpha}_{k+1}-(W^{\\alpha}_{k+1})^{-1}\\mathcal R_{\\free,k+1}^{\\alpha} \\label{eq:full-rfree-12}}\n\\end{equation}\n\nThe matrix $W$ is clearly nonsingular for small $h$.\n\nNote that the linearization is equivalent to the case (\\ref{eq:rfree-2}) and (\\ref{eq:rfree-12}) with $\\gamma=1$ and replacing $r_{k+1}$ by $r_{k+\\gamma}$.\n\n \\paragraph{Newton's linearization of the second  line of~(\\ref{eq:full-toto1-ter})}\nThe same operation is performed with the second equation of (\\ref{eq:full-toto1-ter})\n\\begin{equation}\n  \\begin{array}{l}\n    \\mathcal R_y(x,y,\\lambda)=y-h(t_{k+\\gamma},\\gamma x + (1-\\gamma) x_k ,\\lambda) =0\\\\ \\\\\n  \\end{array}\n\\end{equation}\nwhich is linearized as\n\\begin{equation}\n  \\label{eq:full-NL9y}\n  \\begin{array}{l}\n    \\mathcal R_{Ly}(x^{\\alpha+1}_{k+1},y^{\\alpha+1}_{k+\\gamma},\\lambda^{\\alpha+1}_{k+\\gamma}) = \\mathcal\n    R_{y}(x^{\\alpha}_{k+1},y^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma}) +\n    (y^{\\alpha+1}_{k+\\gamma}-y^{\\alpha}_{k+\\gamma})- \\\\[2mm] \\qquad  \\qquad \\qquad \\qquad  \\qquad \\qquad\n    \\gamma C^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1}) - D^{\\alpha}_{k+\\gamma}(\\lambda^{\\alpha+1}_{k+\\gamma}-\\lambda^{\\alpha}_{k+\\gamma})=0\n  \\end{array}\n\\end{equation}\n\nThis leads to the following linear equation\n\\begin{equation}\n  \\boxed{y^{\\alpha+1}_{k+\\gamma} =  y^{\\alpha}_{k+\\gamma}\n  -\\mathcal R^{\\alpha}_{y,k+1}+ \\\\\n  \\gamma C^{\\alpha}_{k+1}(x^{\\alpha+1}_{k+1}-x^{\\alpha}_{k+1}) +\n  D^{\\alpha}_{k+\\gamma}(\\lambda^{\\alpha+1}_{k+\\gamma}-\\lambda^{\\alpha}_{k+\\gamma})}. 
\\label{eq:full-NL11y}\n\\end{equation}\nwith,\n\\begin{equation}\n     \\begin{array}{l}\n  C^{\\alpha}_{k+\\gamma} = \\nabla_xh(t_{k+1}, x^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma} ) \\\\ \\\\\n  D^{\\alpha}_{k+\\gamma} = \\nabla_{\\lambda}h(t_{k+1}, x^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma})\n \\end{array}\n\\end{equation}\nand\n\\begin{equation}\\fbox{$\n\\mathcal R^{\\alpha}_{yk+1} \\stackrel{\\Delta}{=} y^{\\alpha}_{k+\\gamma} - h(x^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma})$}\n \\end{equation}\n\nNote that the linearization is equivalent to the case (\\ref{eq:NL11y}) by replacing $\\lambda_{k+1}$ by $\\lambda_{k+\\gamma}$ and $x_{k+1}$ by $x_{k+\\gamma}$.\n\n \\paragraph{Newton's linearization of the third  line of~(\\ref{eq:full-toto1-ter})}\nThe same operation is performed with the third equation of (\\ref{eq:full-toto1-ter})\n\\begin{equation}\n  \\begin{array}{l}\n    \\mathcal R_r(r,\\lambda)=r-g(\\lambda,t_{k+1}) =0\\\\ \\\\  \\end{array}\n\\end{equation}\nwhich is linearized as\n\\begin{equation}\n  \\label{eq:full-NL9}\n  \\begin{array}{l}\n      \\mathcal R_{L\\lambda}(r^{\\alpha+1}_{k+\\gamma},\\lambda^{\\alpha+1}_{k+\\gamma}) = \\mathcal\n      R_{r,k+\\gamma}^{\\alpha} + (r^{\\alpha+1}_{k+\\gamma} - r^{\\alpha}_{k+\\gamma}) - B^{\\alpha}_{k+\\gamma}(\\lambda^{\\alpha+1}_{k+\\gamma} -\n      \\lambda^{\\alpha}_{k+\\gamma})=0\n    \\end{array}\n  \\end{equation}\n\\begin{equation}\n  \\label{eq:full-rrL}\n  \\begin{array}{l}\n    \\boxed{r^{\\alpha+1}_{k+\\gamma} = g(\\lambda ^{\\alpha}_{k+\\gamma},t_{k+\\gamma}) -B^{\\alpha}_{k+\\gamma}\n      \\lambda^{\\alpha}_{k+\\gamma} + B^{\\alpha}_{k+\\gamma} \\lambda^{\\alpha+1}_{k+\\gamma}}       \n  \\end{array}\n\\end{equation}\nwith,\n\\begin{equation}\n     \\begin{array}{l}\n  B^{\\alpha}_{k+\\gamma} = \\nabla_{\\lambda}g(\\lambda ^{\\alpha}_{k+\\gamma},t_{k+\\gamma})\n \\end{array}\n\\end{equation}\nand the  residue for $r$:\n\\begin{equation}\n\\boxed{\\mathcal\n      R_{rk+\\gamma}^{\\alpha} = r^{\\alpha}_{k+\\gamma} - g(\\lambda ^{\\alpha}_{k+\\gamma},t_{k+\\gamma})}\n  \\end{equation}\nNote that the linearization is equivalent to the case (\\ref{eq:rrL}) by replacing $\\lambda_{k+1}$ by $\\lambda_{k+\\gamma}$ and $x_{k+1}$ by $x_{k+\\gamma}$.\n\n\\paragraph{Reduction to a linear relation between  $x^{\\alpha+1}_{k+1}$ and\n$\\lambda^{\\alpha+1}_{k+\\gamma}$}\n\nInserting (\\ref{eq:full-rrL}) into~(\\ref{eq:full-rfree-12}), we get the following linear relation between $x^{\\alpha+1}_{k+1}$ and\n$\\lambda^{\\alpha+1}_{k+1}$, \n\n\\begin{equation}\n   \\begin{array}{l}\n     x^{\\alpha+1}_{k+1} = h(W^{\\alpha}_{k+1} )^{-1}\\left[g(\\lambda^{\\alpha}_{k+\\gamma},t_{k+\\gamma}) +\n    B^{\\alpha}_{k+\\gamma} (\\lambda^{\\alpha+1}_{k+\\gamma} - \\lambda^{\\alpha}_{k+\\gamma}) \\right ] +x^\\alpha_{free}\n\\end{array}\n\\end{equation}\nthat is \n\\begin{equation}\n  \\begin{array}{l}\n\\boxed{x^{\\alpha+1}_{k+1}=x_p + h (W^{\\alpha}_{k+1})^{-1}    B^{\\alpha}_{k+\\gamma} \\lambda^{\\alpha+1}_{k+\\gamma}}\n   \\end{array}\n  \\label{eq:full-rfree-13}\n\\end{equation}\nwith \n\\begin{equation}\n  \\boxed{x_p \\stackrel{\\Delta}{=}  h(W^{\\alpha}_{k+1} )^{-1}\\left[g(\\lambda^{\\alpha}_{k+\\gamma},t_{k+\\gamma}) -B^{\\alpha}_{k+\\gamma} (\\lambda^{\\alpha}_{k+\\gamma}) \\right ] +x^\\alpha_{free}}\n\\end{equation}\n\n\n\\paragraph{Reduction to a linear relation between  $y^{\\alpha+1}_{k+\\gamma}$ and\n$\\lambda^{\\alpha+1}_{k+\\gamma}$}\n\nInserting (\\ref{eq:full-rfree-13}) into 
(\\ref{eq:full-NL11y}), we get the following linear relation between $y^{\\alpha+1}_{k+1}$ and $\\lambda^{\\alpha+1}_{k+1}$, \n\\begin{equation}\n   \\begin{array}{l}\n y^{\\alpha+1}_{k+1} = y_p + \\left[ h \\gamma C^{\\alpha}_{k+\\gamma} ( W^{\\alpha}_{k+1})^{-1}  B^{\\alpha}_{k+1} + D^{\\alpha}_{k+1} \\right]\\lambda^{\\alpha+1}_{k+1}\n   \\end{array}\n\\end{equation}\nwith \n\\begin{equation}\ny_p = y^{\\alpha}_{k+1} -\\mathcal R^{\\alpha}_{yk+1} + \\gamma C^{\\alpha}_{k+1}(x_q) - D^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1} \n\\end{equation}\nthat is \n\\begin{equation}\\boxed{\ny_p =  h(x^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma}) + \\gamma C^{\\alpha}_{k+1}(x_q) - D^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1} }\n\\end{equation}\n\\textcolor{red}{\n  \\begin{equation}\n   \\boxed{ x_q=(x_p -x^{\\alpha}_{k+1})\\label{eq:full-xqq}}\n  \\end{equation}\n}\n\n\n\\paragraph{The linear case}\n\\begin{equation}\n  \\begin{array}{lcl}\n    y_p &=&  h(x^{\\alpha}_{k+\\gamma},\\lambda^{\\alpha}_{k+\\gamma}) + \\gamma C^{\\alpha}_{k+1}(x_q) - D^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1}\\\\\n        &=&  C^{\\alpha}_{k+1} x^{\\alpha}_{k+\\gamma} + D^{\\alpha}_{k+1}\\lambda^{\\alpha}_{k+\\gamma}  + \\gamma C^{\\alpha}_{k+1}(x_q) - D^{\\alpha}_{k+1} \\lambda^{\\alpha}_{k+1} \\\\\n        &=& C^{\\alpha}_{k+1}  (x^{\\alpha}_{k+\\gamma} + \\gamma x_p - \\gamma x^{\\alpha}_{k+1} ) \\\\\n        &=& C^{\\alpha}_{k+1}  ((1-\\gamma) x_{k} + \\gamma x_{free} ) \\text {since } x_p =x_{free} \n\\end{array}\n\\end{equation}\n\n\n\n\n\\paragraph{Implementation details}\n\nFor the moment (Feb. 2011), we set $x_q=(1-\\gamma) x_{k} + \\gamma x_{free} $ in the linear case.\nThe nonlinear case is not yet implemented since we need to\nchange the management of \\texttt{H\\_alpha} Relation to be able to compute the mid--point values.\n
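\nFor orientation only, here is a minimal Python sketch of the $x$-update above in the decoupled case $r \\equiv 0$ (scalar state; the vector field $f$ and all names are ours, and the actual Siconos implementation is in C++ and differs):\n\\begin{verbatim}\nimport numpy as np\n\nM, h, theta = 1.0, 1e-2, 0.5\n\ndef f(x, t):            # example smooth vector field (ours)\n    return -np.sin(x) + 0.1 * t\n\ndef df_dx(x, t):        # its derivative w.r.t. x\n    return -np.cos(x)\n\ndef newton_step(x_k, t_k):\n    t_k1 = t_k + h\n    x = x_k                              # alpha = 0 initial guess\n    for _ in range(20):\n        R_free = (M * (x - x_k) - h * theta * f(x, t_k1)\n                  - h * (1 - theta) * f(x_k, t_k))\n        W = M - h * theta * df_dx(x, t_k1)\n        x_free = x - R_free / W          # boxed x_free formula\n        if abs(x_free - x) < 1e-12:\n            break\n        x = x_free                       # x^{alpha+1} = x_free when r = 0\n    return x\n\\end{verbatim}\n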
% things that remain to  do\n%\n% \\begin{itemize}\n% \\item implement the function \\texttt{BlockVector  computeg(t,lambda)} and \\texttt{SimpleVector computeh(t,x,lambda)} which takes into account the values of the argument and return and vector\n% \\item remove temporary computation in Relation of {\\verb Xq, \\verb g_alpha and \\verb H_alpha }. This should be stored somewhere else. (in the  node of the graph)\n% \\end{itemize}\n\n\n\n\n\n\n\n\n\\clearpage\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"DevNotes\"\n%%% End: \n", "meta": {"hexsha": "ff1b5aaa128cf7063a64fc635a4226c57c52eb24", "size": 10402, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/sphinx/devel_guide/notes/MCP-FullThetaGamma.tex", "max_stars_repo_name": "ljktest/siconos", "max_stars_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 137, "max_stars_repo_stars_event_min_datetime": "2015-06-16T15:55:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T06:01:59.000Z", "max_issues_repo_path": "docs/sphinx/devel_guide/notes/MCP-FullThetaGamma.tex", "max_issues_repo_name": "ljktest/siconos", "max_issues_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 381, "max_issues_repo_issues_event_min_datetime": "2015-09-22T15:31:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-14T09:05:23.000Z", "max_forks_repo_path": "docs/sphinx/devel_guide/notes/MCP-FullThetaGamma.tex", "max_forks_repo_name": "ljktest/siconos", "max_forks_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2015-08-06T22:57:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T20:30:20.000Z", "avg_line_length": 42.8065843621, "max_line_length": 253, "alphanum_fraction": 0.611517016, "num_tokens": 4122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5648796095464161}}
{"text": "\\section{Results}\n\\label{Spline_1}\n\n\\subsection{Results}\n \n In our case we simulated the results and found following outputs.\nPlotting the second order curve for given data of IMU with just least square method. We get\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{./figures/lsm.jpg}\n\\caption{In this case Blue is actual curve traced by IMU this are points with noise.And green is plotted curve. With least square method.}\n\\end{figure}\n\nThen according to the paper we fused data of camera and IMU so we get following graph.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{./figures/AllOutput.jpg}\n\\caption{Multiple result Graph}\n\\end{figure}\n\nHere we have 3 curved fitted as shown as red which gives better results and it also uses data of camera shown with black dots. So here according to the above formula scale factor of different section is different it is according to the quality of reading by camera.\nFor example, in our case \n\\begin{equation}\n\\lambda_1=0.5 ,\\lambda_2=0.5 ,\\lambda_3=0.1\n\\end{equation}\n\nIn 3rd section camera readings were not accurate so taken into smaller scalar value. This camera data further improved the result. And we get minimum error and good fit. With the given spline fitting method.\n\nSo if we look into different of error with traditional Least square method with Jung and Taylor method. It gives good results. Almost we get the same curve.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{./figures/ErrorC.jpg}\n\\caption{A. Red curve for method with one curve and least square method. \nB. Green curve is using the given method by Jung and Taylor.}\n\\end{figure}\n\nThis graph has error comparison with and without Jung and Taylor method of split curve fitting and scaling camera input.\nSo it is better to follow the given first method in the paper. It results into considerable reduction in error. which is clearly seen in graph.\n  ", "meta": {"hexsha": "63af9d5076a8edb4e8647c263f55823bd48ac679", "size": 1875, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/text/Results.tex", "max_stars_repo_name": "rohit517/Scale-Estimation-Monocular-SLAM", "max_stars_repo_head_hexsha": "ec86d42b83f2574db7b1e22b12cc531b09062c45", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/text/Results.tex", "max_issues_repo_name": "rohit517/Scale-Estimation-Monocular-SLAM", "max_issues_repo_head_hexsha": "ec86d42b83f2574db7b1e22b12cc531b09062c45", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/text/Results.tex", "max_forks_repo_name": "rohit517/Scale-Estimation-Monocular-SLAM", "max_forks_repo_head_hexsha": "ec86d42b83f2574db7b1e22b12cc531b09062c45", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.0769230769, "max_line_length": 265, "alphanum_fraction": 0.7861333333, "num_tokens": 439, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5648796061025219}}
{"text": "\\subsubsection{Vertical and Whole-day-ahead Forecasts without Retraining}\n\\label{vert}\n\nThe upper-right in Figure \\ref{f:inputs} shows an alternative way to\n    generate forecasts for a test day before it has started:\nFirst, a seasonally-adjusted time series $a_t$ is obtained from a vertical\n    time series by STL decomposition.\nThen, the actual forecasting model, trained on $a_t$, makes an $H$-step-ahead\n    prediction.\nLastly, we add the $H$ seasonal na\\\"{i}ve forecasts for the seasonal component\n    $s_t$ to them to obtain the actual predictions for the test day.\nThus, only one training is required per model type, and no real-time data is\n    used.\nBy decomposing the raw time series, all long-term patterns are assumed to be\n    in the seasonal component $s_t$, and $a_t$ only contains the level with\n    a potential trend and auto-correlations.\nThe models in this family are:\n\\begin{enumerate}\n\\item \\textit{fnaive},\n      \\textit{pnaive}:\n          Sum of STL's trend and seasonal components' na\\\"{i}ve forecasts\n\\item \\textit{vholt},\n      \\textit{vses}, and\n      \\textit{vtheta}:\n          Exponential smoothing without calibration and seasonal\n                       fit\n\\item \\textit{vets}:\n          ETS calibrated as described by \\cite{hyndman2008b}\n\\item \\textit{varima}:\n          ARIMA calibrated as described by \\cite{hyndman2008a}\n\\end{enumerate}\nAs mentioned in Sub-section \\ref{unified_cv}, we include the sum of the\n    (seasonal) na\\\"{i}ve forecasts of the STL's trend and seasonal components\n    as forecasts on their own:\nFor \\textit{fnaive}, we tune the \"flexible\" $ns$ parameter, and for\n    \\textit{pnaive}, we set it to a \"periodic\" value.\nThus, we implicitly assume that there is no signal in the remainder $r_t$, and\n    predict $0$ for it.\n\\textit{fnaive} and \\textit{pnaive} are two more simple benchmarks.\n", "meta": {"hexsha": "949a485ebf1c6d88c9452215fed4d8d2f6d04da5", "size": 1847, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/3_mod/7_models/3_vert.tex", "max_stars_repo_name": "webartifex/urban-meal-delivery-paper-demand-forecasting", "max_stars_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-25T19:40:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-25T19:40:56.000Z", "max_issues_repo_path": "tex/3_mod/7_models/3_vert.tex", "max_issues_repo_name": "webartifex/urban-meal-delivery-demand-forecasting", "max_issues_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/3_mod/7_models/3_vert.tex", "max_forks_repo_name": "webartifex/urban-meal-delivery-demand-forecasting", "max_forks_repo_head_hexsha": "9ee3396a24ce20c9886b4cde5cfe2665fd5a8102", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.175, "max_line_length": 78, "alphanum_fraction": 0.7217108825, "num_tokens": 497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7371581568543043, "lm_q1q2_score": 0.5648796016776519}}
{"text": "\\graphicspath{{Chapter2/Figs/}}\n\n\\chapter{Multi-Omics Factor Analysis (MOFA), a Bayesian model for integration of multi-omics data}\n\nThe work described in this Chapter results from a collaboration with Wolfgang Huber's group at the EMBL (Heidelberg, Germany). It has been peer-reviewed and published in \\cite{Argelaguet2018}. The method was conceived by Florian Buettner, Oliver Stegle and me. I performed most of the mathematical derivations and implementation, but with significant contributions from Damien Arnol and Britta Velten. The CLL data application was led by Britta Velten whereas the single-cell application was led by me, but with joint contributions in either cases. Florian Buettner, Wolfgang Huber and Oliver Stegle supervised the project.\\\\\nThe article was jointly written by Britta Velten and me, with contributions from all authors.\n\n\\section{Theoretical foundations}\n\n\\subsection*{Mathematical notation} \\label{section:mathematical_notation}\n\n\\begin{itemize}[noitemsep]\n\t\\item[--] Matrices are denoted with bold capital letters: $\\bfW$\n\t\\item[--] Vectors are denoted with bold non-capital letters: $\\bfw$. If the vector comes from a matrix, we will use a single index to indicate the row that it comes from. If two indices are used, the first one corresponds to the row and the second one to the column. The symbol '$:$' denotes the entire row/column. For instance, $\\bfw_{i}$ refers to the $i$th row from the $\\bfW$ matrix, whereas $\\bfw_{:,j}$ refers to the $j$th column.\n\t\\item[--] Scalars are denoted with non-bold and non-capital letters: $w$. If the scalar comes from a 1-dimensional array (a vector), a single subscript will indicate its position in the vector. If the scalar comes from a 2-dimensional array, two indices will be shown at the bottom: the first one corresponding to the row and the second one to the column. For instance, $w_{i,j}$ refers to the value from the $i$th row and the $j$th column of the matrix $\\bfW$, and $w_i$ to the $i$th value of the vector $\\bfw$.\n\t\\item[--] $\\boldzero_k$ is a zero vector of length $k$.\n\t\\item[--] $\\I_k$ is the identity matrix with rank $k$.\n\t\\item[--] $\\E_q[x]$ denotes the expectation of $x$ under the distribution $q$. When the expectations are taken with respect to the same distribution many times, we will avoid cluttered notation and we will instead use $\\la x \\ra$.\n\t\\item[--] $\\Ndist{x}{\\mu,\\sigma^2}$: $x$ follows a univariate normal distribution with mean $\\mu$ and variance $\\sigma^2$.\n\t\\item[--] $\\Gdist{x}{a,b}$: $x$ follows a gamma distribution with shape and rate parameters $a$ and $b$.\n\t\\item[--] $\\Bdist{x}{a, b}$: $x$ follows a beta distribution with shape and rate parameters $a$ and $b$.\n\t\\item[--] $\\text{Ber}(x|\\theta)$: $x$ follows a Bernoulli distribution with parameter $\\theta$.\n\t\\item[--] $\\mathds{1}_0$: Dirac delta function centered at 0.\n\t\\item[--] $Tr(\\bfX)$: Trace of the matrix \\bfX\n\\end{itemize}\n\n\\subsection*{Graphical notation for probabilistic models}\n\nProbabilistic models can be represented in a diagrammatic format (i.e. a graph or a network) that offers a compact visual representation of complicated systems of probability distributions \\cite{Bishop2006}. 
In a graphical model the relationship between the nodes becomes more explicit, namely their conditional independence properties which allow the joint distribution over all variables to be factorised into a series of simpler products involving subsets of variables \\cite{Bishop2006}. The basic unit of a network is the node, which represents the different types of variables, including observed variables, unobserved probabilistic variables and unobserved parameters. The nodes are connected by unidirectional edges (arrows) which capture the conditional independence relationship between the variables.\n\nFor this thesis we adapted the graphical notations from~\\cite{Dietz2010-technical-report-graphs}:\n\n\\begin{center}\n  \\begin{tabular}{m{8cm} m{2cm}}\n    Observed variables & \\tikz{\\node[obs](){$Y$}} \\\\\n    Unobserved probabilistic variables & \\tikz{\\node[latent](){$\\theta$}} \\\\\n    Unobserved parameters & \\tikz{\\node[latent,double, double distance=1pt](){$\\theta$}} \\\\\n    Repetition of node $\\theta_n$ for $n\\in\\llbracket 1;N \\rrbracket$ & \\tikz{\\node[latent](theta){$\\theta_n$}; \\plate[] {plateN} {(theta)} {$N$};} \\\\\n    Conditional dependency between nodes: $p(Y,\\theta) = p(Y|\\theta)p(\\theta)$ & \\tikz{%\n            \\node[latent]   (theta) {$\\theta$};\n            \\node[obs, xshift=1.5cm] (Y) {$Y$};\n            \\edge{theta}{Y}}\n  \\end{tabular}\n\\end{center}\n% For simplicity, fixed hyperparameters are not represented on the graphical model. Unobserved parameters are only represented when optimised together with the unobserved probabilistic variables.\n\n\n\n\\input{Chapter2/bayes}\n\n\\input{Chapter2/factor_analysis}", "meta": {"hexsha": "fa7f2438e62bf3b9bd14a39f8db5ce72cfa0bdad", "size": 4802, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter2/introduction.tex", "max_stars_repo_name": "rargelaguet/thesis", "max_stars_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-01-08T13:01:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T07:24:40.000Z", "max_issues_repo_path": "Chapter2/introduction.tex", "max_issues_repo_name": "rargelaguet/thesis", "max_issues_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter2/introduction.tex", "max_forks_repo_name": "rargelaguet/thesis", "max_forks_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-09T04:47:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T08:25:50.000Z", "avg_line_length": 94.1568627451, "max_line_length": 810, "alphanum_fraction": 0.7448979592, "num_tokens": 1249, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5648755887892248}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\n\\subsection{Circuit characterization of a matroid}\nIn the paper of Oxley \\cite{ox_paper} it is said that \"A set in a matroid that is not independent is called dependent.The hereditary property,(I2), means that a matroid is uniquely determined by its collection of maximal independent sets, which are called bases, or by its collection of minimal dependent sets, which are called circuits.\" \\\\\n\\indent The circuits of a graph are a rather intuitive concept, and so it is natural to define the cycle matroid $M(G)$ of a graph in terms of its circuits. These circuits are the \"edge sets of cycles in a graph $G\".$ All dependent sets in a matroid contain a circuit.\\\\\n\\indent A circuit of a matroid is defined as a minimally dependent set due to the fact that a circuit is a dependent set with the minimum number of edges to be dependent. i.e if you remove one edge we have an independent set.\n\n\\begin{defn} By using (I1)\u2013(I3) , it is not difficult to show that the collection $\\mathcal{C}$ of circuits of a matroid M has the following three properties:\\\\\n(C1) The empty set is not in $\\mathcal{C}$\\\\\n(C2) No member of $\\mathcal{C}$ is a proper subset of another member of $\\mathcal{C}$\\\\\n(C3) if $ C_1 $ and $ C_2 $ are distinct members of $ C $ and \n$ e \\in C_1 \\cap C_2 $, then $ (C_1 \\cup C_2 ) \\setminus \\{e\\} $ contains a member of $\\mathcal{C}$ \n \\end{defn}\n \n\\noindent The following two proofs appear in Oxley's text and shows how the circuits define a matroid and how circuits appear in graphs.\\cite{ox_book}\n\n \\begin{thm}\n Let M be a matroid and $\\mathcal{C}$ be its collection of circuits. Then $\\mathcal{C}$ satisfies (C1) - (C3)\n  \\end{thm}\n\\begin{proof}\n\\noindent $(C1)$ is obvious as by $(I1)$ the empty set must always be an independent set.\n \n\\noindent $(C2)$ is also straightforward because any $ C \\in \\mathcal{C} $ is a minimally independent set by definition. Therefore , if there exists a $ C_1 \\in \\mathcal{C} $ such that $ C_1 \\subset C $ then $ C_1 \\in \\mathcal{C} $ and $ C $ is not a minimally dependent subset of E. 
\n\\noindent The following two proofs appear in Oxley's text and show how the circuits define a matroid and how circuits appear in graphs \\cite{ox_book}.\n\n \\begin{thm}\n Let $M$ be a matroid and $\\mathcal{C}$ be its collection of circuits. Then $\\mathcal{C}$ satisfies (C1)--(C3).\n  \\end{thm}\n\\begin{proof}\n\\noindent $(C1)$ is obvious as by $(I1)$ the empty set must always be an independent set.\n \n\\noindent $(C2)$ is also straightforward because any $ C \\in \\mathcal{C} $ is a minimally dependent set by definition. Therefore, if there existed a $ C_1 \\in \\mathcal{C} $ such that $ C_1 \\subsetneq C $, then $ C $ would not be a minimally dependent subset of $E$. \n \n \\vspace{2mm}\n \n \\noindent $(C3)$ Let $ A, B \\in \\mathcal{C} $ and suppose (seeking a contradiction) that $ (A \\cup B) \\setminus \\{e\\} $, where $ e \\in A \\cap B $, does not contain a circuit.\\\\\n \\noindent Then $ (A \\cup B) \\setminus \\{e\\} $ is independent and therefore in $\\mathcal{I}$.\n \n \\vspace{2mm}\n \n\\noindent The set $ A \\setminus B $ is non-empty: the circuits $A$ and $B$ are distinct, and by (C2) neither is a subset of the other.\\\\\n Let $ s \\in A \\setminus B \\implies s \\in A$\\\\\n \\noindent as $A$ is in $\\mathcal{C}$ it is minimally dependent.\n \\noindent $\\implies A \\setminus \\{s\\} \\in \\mathcal{I}$, i.e.\\ it is independent.\n \n \\vspace{2mm}\n \n\\noindent Let $J$ be a maximal independent subset of $A \\cup B$ with the following properties: $A \\setminus \\{s\\} \\subset J $ and therefore $ s \\notin J $ (otherwise $J$ would contain the circuit $A$); and as $B$ is a circuit there must be some element $t \\in B$ that is not in $J$; $s$ and $t$ are distinct since $s \\notin B$.\n \n \\vspace{2mm}\n \n\\noindent $\\implies |J| $ must be at most equal to $|(A \\cup B) \\setminus \\{s,t\\}| $\\\\\n $\\implies |J| \\leq |(A \\cup B) \\setminus \\{s,t\\}| = |(A \\cup B)| - 2 < |(A \\cup B) \\setminus \\{e\\}|$\n \n \\vspace{2mm} \n \n \\noindent Now by (I3), since $J$ and $(A \\cup B) \\setminus \\{e\\}$ are both independent and $|J| < |(A \\cup B) \\setminus \\{e\\}|$, we can add to $J$ an element of $(A \\cup B) \\setminus \\{e\\}$ that is not in $J$ while preserving independence; this contradicts the maximality of $J$ as an independent subset of $A \\cup B$.\n \n\\noindent Therefore, $(A \\cup B) \\setminus \\{e\\}$ must contain a circuit.\\\\\n \\end{proof}\n \n\\end{document}", "meta": {"hexsha": "734dcb8ede39a7e3c5e145d2f26d2b52675ebc0f", "size": 3579, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeXPdfs/sections/char1.tex", "max_stars_repo_name": "emcd123/Matroids", "max_stars_repo_head_hexsha": "f1ab7a5164a60b753ba429ef7ba9ce36517d4439", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeXPdfs/sections/char1.tex", "max_issues_repo_name": "emcd123/Matroids", "max_issues_repo_head_hexsha": "f1ab7a5164a60b753ba429ef7ba9ce36517d4439", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeXPdfs/sections/char1.tex", "max_forks_repo_name": "emcd123/Matroids", "max_forks_repo_head_hexsha": "f1ab7a5164a60b753ba429ef7ba9ce36517d4439", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-21T18:03:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-21T18:03:07.000Z", "avg_line_length": 66.2777777778, "max_line_length": 341, "alphanum_fraction": 0.6848281643, "num_tokens": 1130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.564875587427747}}
{"text": "\\documentclass[12pt]{article} \\input{physics1}\n\\begin{document}\n\n\\noindent\nName: \\rule[-1ex]{0.55\\textwidth}{0.1pt}\nNetID: \\rule[-1ex]{0.2\\textwidth}{0.1pt}\n\n\\section*{NYU Physics I---Final Exam}\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nAssume that Professor Hogg has a mass of exactly $70\\,\\kg$. Now\nconsider a very realistic, stone statue of Professor Hogg that is\ntwice his size in every direction (twice as tall, and realistic). What\nwould be the mass of this statue, approximately?\n(From Lecture, 2018-09-06.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nRoughly what is terminal velocity for a 1-liter soda bottle, filled\nwith soda (or pop or juice or whatever), falling from a tall building?\n(From Problem Set 1.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nWe spent time talking about two vectors, $\\vec{v}_3$ and $\\vec{v}_4$,\nwhich were the velocities of the rock on a no-air-resistance trajectory.\nWhat was wrong with this picture, that we drew?\n\\marginpar{\\includegraphics[width=1in]{../jpg/wrong_vectors.png}}\n(From Term Exam 1.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nConsider a $1\\,\\kg$ stone thrown upwards at an initial upward velocity of\nmagnitude $10\\,\\mps$. Plot the vertical velocity (with upwards\npositive) as a function of time for the first 2 seconds of its\ntrajectory. Use a gravitational acceleration of $10\\,\\mpss$. Label\nyour axes quantitatively. Ignore air resistance.\n(From Problem Set 2.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nWhat is the magnitude $|\\vec{N}|$ of the normal force acting on a package of\nmass $M$ sitting on the floor of an elevator that is accelerating\nupwards with acceleration magnitude $a$. The elevator is an ordinary\nelevator in an ordinary building in New York City.\n(From Problem Set 3.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA car is going around a banked turn faster than the speed for which\nthe bank is designed. Therefore the car is relying on lateral friction\nto make the turn!  Draw a free-body diagram for the car, showing at\nleast the gravitational force, the normal force, and the frictional\nforce.\n(From Problem Set 4.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nYou have a block of mass $m$ on an inclined plane, inclined at an\nangle $\\theta=15\\,\\deg$ to the horizontal. The coefficient of friction\nis $\\mu=0.9$. What is the magnitude of the frictional force on the\nblock? The acceleration due to gravity is $g$.\nYou must leave your answer in terms of $\\mu$, $m$, $g$, $\\theta$, or\nwhatever you need to deliver a correct symbolic answer.\nOnce again, state any assumptions you need to make.\n(From Term Exam 2.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA roller coaster zooms over the top of a hill of radius $4\\,\\m$ (such that the\ncenter-of-mass of the car and people are moving on a circular trajectory of\nradius $4\\,\\m$). At what speed $v$ should the roller coaster go if the people in\nthe car are going to nearly feel weightless? 
Use $g=10\\,\\mpss$, and give\na numerical answer in $\\mps$.\n(From Lecture 2018-09-27.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA package of mass $m$ is moving at a horizontal velocity of magnitude $v$ at $t=0$\nrelative to the floor.  It scrapes to a halt on the floor. How much\nheat $Q$ is generated by the friction, from $t=0$ until when the\npackage comes to rest?\n(From Problem Set 5.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA $0.16\\,\\kg$ ball is dropped from a height of $1\\,\\m$ onto a concrete\nfloor. If the bounce takes only $10^{-4}\\,\\s$, what was the (approximate)\nmagnitude of the force on the ball from the floor? Give a numerical\nanswer in $\\N$.\n(From Lecture 2018-10-11.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nWithout using a calculator, estimate the sine of the angle $1.2~\\deg$.\nUse the small-angle approximation! (\\emph{Hint:} Degrees aren't radians.)\n(From Term Exam 3.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nIf I stretch a spring, I am experiencing Hooke's law. What is the\nstress and what is the strain in this case? Answer in words!\n(From Lecture 2018-10-18.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA typical adult is holding her or his left arm at a right angle, so\nthe upper arm is pointing straight down, and the forearm is pointing\nhorizontally forwards. The hand is oriented palm-up. The arm is\nholding a $6\\,\\kg$ grocery bag by its handle in the hand. Draw a diagram\nshowing the forces acting on the forearm+hand system, at roughly the locations\nwhere they are acting. You can think of it as being the forces on the forearm+hand\nbones if that makes more sense.\n(From Problem Set 7.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nIf you have a potential of the form\n$$\nU(x) = A\\,x^3 - B\\,x + C\n$$ where $A$ and $B$ and $C$ are positive constants, find a location ($x$ position) at which\nthe force is zero.\n(From Term Exam 4.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nIf I tell you that the position $x$ of a particle is moving with\ntime according to the equation:\n\\begin{equation}\nx(t) = B\\,\\e^{-\\frac{\\Gamma}{2}\\,t}\\,\\cos (2\\pi\\,f\\,t + \\theta)\n\\quad ,\n\\end{equation}\nthen what are the units of $B$, $\\Gamma$, $f$ and $\\theta$?\n(From Problem Set 8.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA bungy cord has a natural (unstretched) length of $5\\,\\m$ and it\nstretches by $1\\,\\m$ for every $400\\,\\N$ of applied force. How much\nenergy is the bungy cord storing when it is stretched to\na total length of $7\\,\\m$? Give a numerical answer in $\\J$.\n(From Problem Set 9.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nHow is it that the astronauts in the Space Station are weightless?\nWrite a grammatically correct answer in one sentence, in fewer than 17\nwords.  Box your sentence!\n(From Term Exam 5.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA car of mass $M$ is accelerating at acceleration $a$ in the $x$\ndirection. Its center of mass is a height $h$ above the ground.  
What\nis the net torque on the car, if any, when you consider the reference point to be\na point that is stationary and \\emph{on the ground}?\n(From Problem Set 10.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nWhat is escape velocity from the surface of the Earth? That is, how\nfast do you have to move to get out of Earth's gravity?\nIf you don't remember $G$ or $M$ for the Earth (and you shouldn't remember\nthose!) then remember that $g = G\\,M/R^2$ and the radius $R$ of the Earth is\nabout $6300\\,\\km$.\nGive a numerical answer in $\\mps$.\n(From Problem Set 11.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nIf the Earth were four times less massive than it is, but were still\non a circular orbit at its current orbital radius (semi-major axis),\nhow much longer or shorter would the year be?  (From Worksheet on\norbits.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nSketch an orbit of roughly eccentricity 0.9. Most importantly: Show the\npoint about which the object is orbiting, and make sure your pericenter\nand apocenter distances make sense. Don't worry about getting it all right,\njust roughly!\n(From Term Exam 6.)\n\n\\vfill\n~\n\\clearpage\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nWhat, roughly, is $\\gamma$ for the astronauts in the Space Station,\naccording to a scientist stationary in the Earth rest frame?\n(From Problem Set 13.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nA particle of mass $M$ is at rest. It decays into two identical\nphotons traveling in the positive and negative $x$ directions.\nWhat are the 4-momenta of these two photons?\n(From Problem Set 14.)\n\n\\vfill\n\n\\paragraph{\\problemname~\\theproblem:}\\refstepcounter{problem}%\nTwo balls of putty of mass $m$ are fired towards each other in outer space.\nThey collide and stick perfectly, without losing any putty or\nradiating any sound or heat or anything else. Is the combined blob\nof putty created by this collision slightly more than, slightly less\nthan, or exactly equal to $2\\,m$ in mass? And say why, in fewer than\n17 words. Make your answer correct to at least first order in\n$\\beta^2$. That is, the weakly relativistic limit. But I only need\na qualitative answer, don't calculate anything! 
(From Lecture\n2018-12-13.)\n\n\\vfill\n~\n\\end{document}\n", "meta": {"hexsha": "5367c80936dd79104683c79ed91f80a60e5d44a2", "size": 8756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/physics1_final.tex", "max_stars_repo_name": "davidwhogg/Physics1", "max_stars_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-11-13T03:48:56.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-13T03:48:56.000Z", "max_issues_repo_path": "tex/physics1_final.tex", "max_issues_repo_name": "davidwhogg/Physics1", "max_issues_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 29, "max_issues_repo_issues_event_min_datetime": "2016-10-07T19:48:57.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-29T22:47:25.000Z", "max_forks_repo_path": "tex/physics1_final.tex", "max_forks_repo_name": "davidwhogg/Physics1", "max_forks_repo_head_hexsha": "6723ce2a5088f17b13d3cd6b64c24f67b70e3bda", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.1646586345, "max_line_length": 92, "alphanum_fraction": 0.7519415258, "num_tokens": 2485, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.564875587427747}}
{"text": "\\section{Group Cohomology Model for SPT}\nSo we've been discussing this Dijkgraaf-Witten theory.\nBecause there were questions about this last time,\nlet me mention the big picture.\n\nBefore discussing topological phases,\nall models were free fermion models.\nWe could exactly solve them,\nunderstand everything about them,\nthe ground state wave function,\nthe topological invariant and so on.\n\nNow with strongly interacting systems,\nthey are not free.\nFor bosons,\nyou need strongly interaction,\nbecause if not they will condensed into a superfluid.\n\nThese group cohomology models are kind of like free fermion system,s\nin that they are exactly solvable classes of models.\nYou could exactly write down a path integral you could exactly compute,\nand exact ground sate wave functions.\nThey are idealized exactly solvable models analogous to the free fermion case.\n\nOf course these are not free,\nthey are strongly interacting.\n\nOnce we work with these exactly solvable modes,\nthese TQFTs,\nthis opens the entire door to a huge class of exactly solvable TQFTs with idea\nHamiltonians,\nand so on.\nThis is the tip of the iceberg.\n\nWe start off discussing the path integrals,\nthen discuss the ground state wave function.\n\nWe're discussing the 1+1D case.\nWhat we did was triangulate spacetime.\n\nWe add a branching structure,\nwhich is a local ordering of vertices.\n\nThen we put group elements on vertices.\n\nAnd then we define the path integral,\nwhich depends on a background gauge field $A$,\nwhich is essentially the twist you can put on these non-contractible paths.\n\\begin{align}\n    Z(M^2, A) &=\n    \\frac{1}{|G|^{N_V}}\n    \\sum_\\left\\{{g_i\\right\\}}\n    \\prod_{\\Delta_2 \\ni (i,j,k)}\n    \\nu_2^{S\\left( \\Delta_2 \\right)} \\left( g_i, g_j, g_k \\right)\n\\end{align}\nSo then this $\\nu_2$ is a homogeneous 2-cocycle.\nSo then if you write\n\\begin{align}\n    \\nu_2\\left( g_i, g_j, g_k \\right)\n    &=\n    \\nu_2^{\\sigma\\left( g_i^{-1} \\right)}\n    \\left( 1, g_{ij}, g_{ik} \\right)\n\\end{align}\nit doesn't change if you multiply everything by $g$,\nwhere\n$g_{ij} := g_{i}^{-1} g_j$.\nAnd hen I can define the in-homogeneous 2-cochain which is\n\\begin{align}\n    \\omega_2(g_{ij}, g_{ik}) :=\n    \\nu_2\\left( 1, g_{ij}, g_{ij} g_{jk} \\right)\n\\end{align}\nand the reason they needed to satisfy the 2-cocycle equation is that we needed\nthis topological action phase\n\\begin{align}\n    e^{iS_{\\mathrm{top}}\\left( \\left\\{ g_i \\right\\} \\right)}\n    \\prod_{\\Delta_2 \\ni (i,j,k)}\n    \\nu_2^{S\\left( \\Delta_2 \\right)} \\left( g_i, g_j, g_k \\right)\n\\end{align}\nto be retriangulation-invariant,\nwhich led us to the 2-cycle equation.\n\nI also said this \n$\\frac{1}{|G|^{N_V}} \\sum_\\left\\{{g_i\\right\\}}$\nwhich is averaging over gauge transformations is actually unnecessarily.\n\nWe could have instead,\nif you had some triangle,\njust defined $g_{ij},g_{jk},g_{ik}$,\nand have gauge field $A_{ij} = g_{ij}$.\n\nWhat we picked here is a flat gauge field.\nFlat means that\n$g_{ij}, g_{jk}, g_{ki} = 1$.\nAnd so the action is just the product over all triangulations $\\Delta_2$\nof the 2-cocycles.\n\\begin{align}\n    Z\\left( M^2, A \\right)\n    &=\n    \\prod_{\\Delta_2}\n    \\omega_2^{S\\left( \\Delta_2 \\right)}\n    \\left( g_{ij}, g_{jk} \\right)\n\\end{align}\nThe $S\\left( \\Delta_2 \\right)$ is the orientation of the triangulation.\nThis path integral is independent of the choice of gauge.\nLet's say we have a gauge transformation 
I also said this \n$\\frac{1}{|G|^{N_V}} \\sum_{\\left\\{ g_i \\right\\}}$,\nwhich is averaging over gauge transformations, is actually unnecessary.\n\nWe could have instead,\nif you had some triangle,\njust defined $g_{ij},g_{jk},g_{ik}$,\nand taken the gauge field $A_{ij} = g_{ij}$.\n\nWhat we picked here is a flat gauge field.\nFlat means that\n$g_{ij}\\, g_{jk}\\, g_{ki} = 1$.\nAnd so the action is just the product over all 2-simplices $\\Delta_2$\nof the 2-cocycles.\n\\begin{align}\n    Z\\left( M^2, A \\right)\n    &=\n    \\prod_{\\Delta_2}\n    \\omega_2^{S\\left( \\Delta_2 \\right)}\n    \\left( g_{ij}, g_{jk} \\right)\n\\end{align}\nThe $S\\left( \\Delta_2 \\right)$ is the orientation of the 2-simplex.\nThis path integral is independent of the choice of gauge.\nLet's say we have a gauge transformation $h_i$ and we define\n$g_{ij}:= g_i^{-1} g_j$.\nThen the gauge transformation can be defined to act like\n\\begin{align}\n    g_{ij} \\to\n    h_i^{-1} g_i^{-1}g_j h_j\n\\end{align}\nif I think of taking\n\\begin{align}\n    g_i \\to g_i h_i\n\\end{align}\nby right multiplication.\nTo see this path integral is independent of gauge transformations,\ntake a group element assigned to a single vertex,\nget the new group element,\nthen show the path integral is independent of this transformation.\n\nOne way to convince yourself that this would work is that you could draw your\ninitial system,\nsay here,\nwhich has some choice of $g_i$,\nthen you could draw the exact copy of this.\nSay you have some vertex $g_i$.\nAnd then we change it to $g_i h_i$,\na gauge transformation.\nYou could get from this picture to this picture\nby doing a series of Pachner moves.\nOne way to think how it works\nis to think of it as an evolution of a 2D system from some initial time to a late\ntime,\nso we have a 3D system and now we triangulate the 3D system.\nDraw some additional lines,\nand there's a way of triangulating this system.\nSo we have a cobordism\nfrom the initial slice to the final slice.\n\nIn Pachner moves,\nyou could think of it as stacking a simplex of one higher dimension.\nIf I had a single triangle,\nand I do a Pachner move that adds an extra vertex in the middle,\nthen you could think of it as pasting a 3-simplex on top of this triangle,\nso the original triangle is just the bottom of the 3-simplex tetrahedron.\nAnd so the 1-3 Pachner move is pasting a 3-simplex.\nYou could think of these Pachner moves as starting with some initial\nconfiguration\nand some final configuration:\nI have a 3-manifold that interpolates between them,\nI triangulate the manifold,\nand all I'm doing is the sequence of Pachner moves of 3-simplices that I\ncontinuously paste onto here to get the final configuration.\nThis whole thing is invariant under retriangulation and Pachner moves,\nso we could start with some $g_i$ and do some triangulation of 3-manifolds,\nand those are some series of Pachner moves,\nand ultimately we end up with $g_i h_i$ on that vertex.\n\nUltimately,\nthis is an explanation that we can do Pachner moves to implement the gauge\ntransformation $g_i \\to g_i h_i$.\nThis then means that this whole amplitude $Z\\left( M^2, A\\right)$\nis gauge-invariant.\n\nAnd because it's gauge invariant,\nwhen we do the sum,\nwhich is just picking different group elements on the vertices,\nwe don't have to do the sum,\nbecause changing the group elements on the vertices is equivalent to doing a\ngauge transformation.\nBecause of that,\nthe path integral we wrote there is just a phase,\nan element of $U(1)$.\n\n\\begin{question}\n    What is the gauge field?\n\\end{question}\n\nIf you have a cylinder,\nyou could have holonomy when you go around a loop.\nYou could do that by cutting the cylinder surface open along its axis,\nand then gluing it back with a twist.\nSo if this is $g_i,g_j,g_k,\\ldots$ on the bottom edge,\nI would just identify it with a twist $hg_i,hg_j,hg_k,\\ldots$,\nwhich effectively determines some gauge transformations.\n\nThere's a slightly different way.\nYou could define the gauge field separately on the links.\nThat is exactly the same as doing what I said.\n\n\\begin{question}\n    Where do we encode the fact that we are describing physical systems which\n    are gapped and have a unique ground state?\n\\end{question}\nThere's no notion of gapped yet,\nbecause there's no Hamiltonian defined yet.\n
After I define it,\nI will show these do define a gapped system.\n\n\\begin{question}\n    Can we say this describes only gapped systems?\n\\end{question}\nDepends what you mean by describe.\nI'm going to do it right now,\nso in a minute,\nusing this path integral\nI can define a wave function on the boundary.\nThat wave function on the boundary is going to be the unique gapped ground state\nof a certain ideal exactly solvable Hamiltonian.\nYou could ask,\ncan the ground state also be the ground state of a gapless Hamiltonian.\nI don't know if there is a proof of whether that can or cannot happen.\n\nBut this thing gives a lot of information.\nThis can't really describe a gapless system in a comprehensive way,\nalthough it could describe gapped systems.\n\nAny questions about the exact setup?\nIt's a bit strange,\nbecause in condensed matter,\nwe want these on the vertices,\nbut if you're a field theorist you don't care.\nIt's awkward but you will see why.\n\n\\begin{question}\n    What will change in the formula with gauge fields?\n\\end{question}\nLet me write it in this one over here.\nYou think of $\\omega_2\\left( g_{ij}, g_{jk} \\right)$\nas\n$\\omega_2\\left( A_{ij}, A_{jk} \\right)$.\nBut maybe it's more clear:\nif there is a background gauge field $A_{ij}$,\nwhich is flat,\nthen the path integral is\n\\begin{align}\n    Z\\left( M^2, A \\right)\n    &=\n    \\prod_{\\Delta_2}\n    \\omega_2^{S\\left( \\Delta_2 \\right)}\n    \\left( A_{ij}, A_{jk} \\right)\n\\end{align}\nand then if I want,\nI can sum over gauge transformations to get\n\\begin{align}\n    Z\\left( M^2, A \\right)\n    &=\n    \\frac{1}{|G|^{N_V}}\n    \\sum_{\\left\\{ g_i \\right\\}}\n    \\prod_{\\Delta_2}\n    \\omega_2^{S\\left( \\Delta_2 \\right)}\n    \\left( g_i^{-1} A_{ij} g_j, g_j^{-1} A_{jk} g_k \\right)\n\\end{align}\nSo I fix my flat background gauge field with $A_{ij}$ on the links.\nThen I write down the amplitude for each 2-simplex,\nand because this amplitude is invariant,\nif I do a gauge transformation,\nthat changes\n$A_{ij} \\to g_{i}^{-1} A_{ij} g_j$.\n\nAnd the point is that with this formula\nI can do what I said about cutting open the cylinder and gluing together with a\ntwist.\nMaybe that was a cleaner way of presenting it in terms of $A$,\nbut I didn't write it explicitly.\nThis is probably the most comprehensive definition.\n\nIf you take any two triangulations,\nyou could definitely construct this manifold,\nwhich is this piece times $I$ basically.\nAnd you can triangulate the 3-manifold.\nIt's clear that this is basically this with some 3-simplices attached to it.\nYou could just think of it one by one.\n\n\\begin{question}\n    Why did you introduce this picture?\n\\end{question}\nI wanted to introduce this picture to show you could change $g_i$ on a vertex to\n$g_i h_i$ by Pachner moves.\n\n\\begin{question}\n    Did you really show that?\n\\end{question}\nI just sketched it.\nI showed this $Z$ is invariant under Pachner moves,\nand then I showed you could get from one to another.\nI have a cobordism that I triangulate,\nand I have 3-simplices I glue on one-by-one,\neach of which is equivalent to a Pachner move.\n\n\\begin{question}\n    We have lines that are intersecting the triangle.\n\\end{question}\nThat might be a function of how you look at it.\nIt might be useful to take a simple example.\nSuppose you have a triangle,\nwith a vertex in the centre and rays to the vertices.\nThen I want to transform this $g_i$ on the centre vertex to\n$g_i'$.\nI just do a Pachner move to delete the vertex.\n
And then I do another Pachner move to reintroduce the vertex with $g_i'$ on the\nvertex.\nJust play with a bunch of examples; it works.\nBut this is the general proof.\n\nSo far we defined the path integral by triangulating and labelling the group\nelements coupled to this background gauge field,\nand that defines a path integral.\nNow we discuss the wave function.\nRemember from TQFT,\nif I evaluate a path integral on a manifold with a boundary,\nthen I get a state on the boundary.\n\nImagine a 1D system on a circle like this.\nI can write down a wave function for the group elements that live on the edge of\nmy manifold,\nand I write down a path integral where I sum over all the group elements in the\nbulk\n\\begin{align}\n    \\Psi\\left( \\left\\{ g_i^{\\textrm{edge}} \\right\\} \\right)\n    &=\n    \\frac{1}{|G|^{N_{v,bulk}}}\n    \\sum_{\\left\\{ g_i^{bulk} \\right\\}}\n    \\prod_{\\Delta^2\\ni (i, j, k)}\n    \\nu_2^{S\\left( \\Delta^2 \\right)}\n    \\left( g_{i}, g_{j}, g_{k} \\right)\n\\end{align}\nSo now I have a wave function defined on a chain of sites on a circle.\nAnd by studying the wave function,\nI can learn some interesting things.\n\nFirstly,\nwe have a system that has a global symmetry.\nHere,\nthe global symmetry would act just by left multiplication.\nIf I multiply every group element on the edge by $g$,\nyou can see what that does is \\ldots.\nIt's easy to see this in the special case.\nSuppose you have vertices on the boundary, say 4 vertices.\nSince the bulk is retriangulation-invariant,\nI can do enough retriangulations such that there is only a single vertex\nin the bulk, say it's called $g_*$,\nand on the edge we have $g_1, g_2, g_3, g_4$.\nSuppose $g_*$ has the largest ordering number,\nso the arrow lines go like this (toward the centre vertex and around the\nedges clockwise).\nBut because $\\nu_2$ is homogeneous,\nI can multiply by $g^{-1}$ everywhere,\nand I could just raise it to $\\sigma(g)$,\nwhich is the complex conjugate symbol if it's anti-unitary.\n\\begin{align}\n    \\Psi\\left( \\left\\{ g g_i^{\\text{edge}} \\right\\} \\right)\n    &=\n    \\frac{1}{|G|}\n    \\sum_{g_*}\n    \\prod_{\\Delta^2}\n    \\left[ \\nu_2^{S\\left( \\Delta_2 \\right)}\n    \\left( g_i, g_j, g^{-1} g_* \\right)\\right]^{\\sigma(g)}\\\\\n    &=\n    \\left[ \\Psi\\left( \\left\\{ g_i^{\\textrm{edge}} \\right\\} \\right)\n    \\right]^{\\sigma(g)}\n\\end{align}\nwhere I relabelled $g^{-1} g_* \\to g_*$.\n\nThe local Hilbert space on the edge is\n\\begin{align}\n    \\mathcal{H}_i &=\n    \\operatorname{span}\\left( \\left\\{ \\ket{g_i} \\right\\} \\right)\n\\end{align}\nThe degrees of freedom compose the Hilbert space on the boundary.\n\nThe physical degrees of freedom are the group elements on the vertices;\nthe $A$'s are just a background gauge field which tells you whether you have a\ntwist on the boundaries.\n\nEven though this is technically a sum over $g_*$,\nthe actual $g_*$ in the bulk doesn't matter,\nbecause we can just retriangulate to completely get rid of $g_*$.\nSo the amplitude shouldn't even depend on $g_*$ at all.\n\n\\begin{question}\n    What is $\\sigma(g)$?\n\\end{question}\nTo be clear,\n\\begin{align}\n    \\sigma(g) &=\n    \\begin{cases}\n        * & \\text{if $g$ is an anti-unitary symmetry}\\\\\n        1 & \\text{if $g$ is a unitary symmetry}\\\\\n    \\end{cases}\n\\end{align}\n\n\n\\begin{question}\n    Why is it 2+1D?\n\\end{question}\nUsually spacetime is $M^{d+1} = \\Sigma^d\\times \\mathbb{R}$,\n
but in TQFT,\nwe consider arbitrary spacetime manifolds.\nFor example,\nfor states on the edge of a circle,\nyou could think of the radial direction as ``time''.\nI still like to say 2+1,\nbecause if I just say 3D it's not clear whether I mean 2+1D or 3+1D.\n\nAnyway,\nwe can actually get rid of the bulk vertices by Pachner moves.\nI can just imagine that the triangulation looks something like this.\n(triangulation without vertices in the bulk)\nSo each term in the sum should be independent of the bulk.\nAnd so we can actually just pick an arbitrary $g_*$\nand write the wave function without the sum.\n\\begin{align}\n    \\Psi\\left( \\left\\{ g_i^{\\textrm{edge}} \\right\\} \\right)\n    &=\n    \\prod_{\\Delta_2}\n    \\left[ \n    \\nu_2\\left( g_i, g_{j}, g_* \\right)\n    \\right]^{S\\left( \\Delta_2 \\right)}\n\\end{align}\nwhich means that the wave function is just a phase\n\\begin{align}\n    \\left|\n    \\Psi\\left( \\left\\{ g^{\\textrm{edge}} \\right\\} \\right)\n    \\right|\n    = 1\n\\end{align}\nSo there is no intrinsic topological order,\nwhich means we can disentangle by a constant depth circuit.\nThe state itself is\n\\begin{align}\n    \\ket{\\Psi_{1D}}\n    &=\n    \\sum_{\\left\\{ g_i^{\\textrm{edge}} \\right\\}}\n    \\prod_{\\Delta_2}\n    \\left[ \\nu_2 \\left( g_i, g_j, g_* \\right) \\right]^{S\\left( \\Delta_2 \\right)}\n    \\ket{\\left\\{\n    g_i^{\\textrm{edge}}\n    \\right\\}}\n\\end{align}\nAnd so we can define a new basis\n\\begin{align}\n    \\ket{\\left\\{ g_i' \\right\\}}\n    &=\n    U\\ket{\\left\\{ g_i \\right\\}}\\\\\n    &=\n    \\prod_{\\Delta_2}\n    \\left[ \\nu_2\\left( g_i, g_j, g_* \\right) \\right]^{S(\\Delta_2)}\n    \\ket{\\left\\{ g_i \\right\\}}\n\\end{align}\nThis unitary is just a local unitary circuit.\nThat means\nwe can view this as a constant depth local unitary.\n\n\nIt's a local unitary because every term only depends on the $g_i$ and $g_j$ that\nare close to each other with this $g_*$ in the bulk.\nIt's constant depth,\nbecause I could just imagine that I do these links on the edge all at the same\ntime,\nand then apply the links to the center vertex.\n\nSo with depth 2 I could implement the sequence of phases.\nYou could also redefine the basis as the original $\\ket{g_i}$ times a unitary of\nconstant depth.\n\nIt's a fairly trivial state in the new basis.\nIn the new basis,\n\\begin{align}\n    \\ket{\\Psi_{1D}}\n    &=\n    \\sum_{\\left\\{ g_i' \\right\\}}\n    \\ket{\\left\\{ g_i' \\right\\}}\\\\\n    &=\n    \\bigotimes_i\\left( \n    \\sum_{g_i'} \\ket{g_i'}\n    \\right)\n\\end{align}\nwhich is just a trivial product state,\nhence no intrinsic topological order.\n\nBut this $U$ does not actually preserve the symmetry,\nunless the cohomology class of $\\nu_2$ is trivial,\nthat is, the class $[\\nu_2]$\nis trivial in $H^2\\left( G, U(1) \\right)$.\n\nIf I do a symmetry transformation,\nI'm changing the group elements on the edge.\nThe cocycle itself is not invariant:\n\\begin{align}\n    \\nu_2\\left( gg_i, gg_j, g_* \\right) \\ne\n    \\nu_2\\left( g_i, g_j, g_* \\right)\n\\end{align}\nbecause to get the symmetry you also need to transform the $g_*$.\nOnly the trivial cocycle would satisfy the above equality.\n\nLet me just say that\ntwo cocycles in the same $H^2(G, U(1))$ class\ndefine wave functions that can be related by a constant depth symmetric circuit.\nThe idea is that if I take some cocycle $\\nu_2' = \\nu_2\\, d\\mu_1$,\nand if I draw a picture with $g_i, g_j, g_k, g_l$ on four corners,\nthen the $\\mu$'s in the bulk will come with complex conjugation,\n
and the only one left is the $\\mu$ on the boundary,\nso you get a factor of $\\mu\\left( g_i, g_j \\right)$ on the boundary because\nthere is no $g_*$ anymore.\nYou get a different wave function if you change the cocycle by a coboundary,\nbut it only differs by factors involving the group elements on the boundary,\nso we won't have this issue with the $g_*$ that spoils the symmetry, and you can\nget a symmetric constant depth circuit.\n\nThe path integral defines a wave function that is symmetric.\nIt has no intrinsic topological order.\nIt can be disentangled by a constant depth circuit,\nbut it cannot be disentangled by a constant depth symmetric circuit.\nThe cohomology class determines equivalence classes of ground state wave\nfunctions that cannot be connected to each other by a constant depth symmetric\ncircuit.\n\n\\begin{question}\n    Is it easy to see how the edge transforms projectively?\n\\end{question}\nNot yet.\nSo far I've only defined the ground state wave function on a ring.\nWhat I could do is look at the entanglement of the wave function:\nI can look at the reduced density matrix on an interval,\nand how it decomposes into a product of terms.\nYou should be able to do that,\nbut I haven't gone through the analysis myself.\n\n\\begin{question}\n    What is the symmetry of the constant depth circuit?\n\\end{question}\nThe symmetry group is $G$.\nI said that on each site,\nwe have $\\ket{g_i}$ that takes this form,\nand the symmetry is\n\\begin{align}\n    \\ket{g_i} \\to \\ket{ gg_i}\n\\end{align}\nwhich is the symmetry action on the state.\nThis constant depth circuit $U$ doesn't respect that symmetry,\nbecause if it did,\nwe would need to have this equality:\n\\begin{align}\n    \\nu_2\\left( gg_i, gg_j, g_* \\right) =\n    \\nu_2\\left( g_i, g_j, g_* \\right)\n\\end{align}\n\nNow that we have this wave function,\nwe can also write down an ideal Hamiltonian.\nThere's a one-line way of writing down the ideal Hamiltonian.\n\n\\subsection{Ideal Hamiltonian}\nI can start with the trivial Hamiltonian\n\\begin{align}\n    H_{\\text{trivial}} &=\n    \\sum_{i}\n    H_{i,\\text{trivial}}\n\\end{align}\nwhere\n\\begin{align}\n    H_{i,\\text{trivial}}\n    &=\n    -\\ket{\\phi_i}\\bra{\\phi_i}\n\\end{align}\nwhere\n\\begin{align}\n    \\ket{\\phi_i} &=\n    \\sum_{g_i} \\ket{g_i}\n\\end{align}\nIt's like the paramagnet.\nEach site is a uniform superposition over all the $g$'s.\n\nThen I can write the SPT Hamiltonian as\n\\begin{align}\n    H_{SPT} &=\n    U H_{\\text{trivial}} U^\\dagger\n\\end{align}\nwhere $U$ is the constant depth circuit that disentangles \n$\\ket{\\Psi_{1D}}$.\nSo I start with the trivial one and I do this basis transformation.\n\nA more useful way of thinking is the following.\nThe ground state wave function is a path integral defined on a disk,\nwhere we are summing over group elements in the interior.\nLet's focus on just some segment of the chain,\nwith vertices\n$\\ldots,g_{i-1},g_i,g_{i+1},\\ldots$.\nNow let's introduce a bulk vertex $g_i'$\nthat has links to each of\n$g_{i-1},g_i,g_{i+1}$.\nSo then imagine doing a bulk transformation.\nThe new wave function is now\n\\begin{align}\n    \\Psi\\left( \\ldots, g_{i-1}, g_i', g_{i+1},\\ldots \\right)\n    &=\n    \\frac{1}{|G|}\n    \\sum_{g_i}\n    \\nu_2\\left( g_{i-1}, g_i, g_i' \\right)\n    \\nu_2^*\\left( g_{i-1}, g_i', g_{i+1} \\right)\n    \\Psi\\left( \\ldots, g_{i-1}, g_i, g_{i+1}, \\ldots \\right)\n\\end{align}\nSo that's how they are related under a bulk transformation.\nAnd now I can just write 
down the Hamiltonian\n\\begin{align}\n    H &=\n    -\\sum_i\n    \\sum_{g_i, g_i'}\n    \\frac{\\nu_2\\left( g_{i-1}, g_i, g_i' \\right)}{\n    \\nu_2\\left( g_i, g_i', g_{i+1}\\right)\n    }\n    \\ket{\\ldots, g_i', \\ldots}\n    \\bra{\\ldots, g_i, \\ldots}\n\\end{align}\nSo now we have a ground state and the Hamiltonian.\nNow we can study the system on an open chain with boundaries,\nand we have a completely solvable model.\n\n\\begin{question}\n    We considered a Hamiltonian that takes one edge site and changes it to\n    a different group element;\n    why don't we consider changing several sites simultaneously?\n\\end{question}\nThat's just doing it twice.\n\nThe beautiful thing about this group cohomology model\nis that everything we do generalizes to higher dimensional models.\nI could have $D$-cochains and $D$-cocycles.\nThat's why it's a nice way of thinking about a system.\nThe MPS description, on the other hand,\nis deeply related to this group cohomology construction,\nwhich is really what's going on behind the scenes.\n\n\\section{Higher dimensions}\nFirstly,\nlet me define the higher-dimensional cohomology groups.\nConsider some $n$-cochain\n\\begin{align}\n    \\omega_n \\in C^n_\\rho (G, M).\n\\end{align}\n$M$ is going to be an Abelian group,\nwhich used to be $U(1)$,\nwith a $G$-action $\\rho$.\nIt is a map\n\\begin{align}\n    \\rho: G \\times M &\\to M\n\\end{align}\nso this $M$ is a $G$-module.\nAnd this $\\omega_n$ has values\n\\begin{align}\n    \\omega_n\\left( g_1,\\ldots, g_n \\right) \\in M\n\\end{align}\nThen define the coboundary map\n\\begin{align}\n    d: C^n \\to C^{n+1}\n\\end{align}\nand the way we define this is \n\\begin{align}\n    d\\omega(g_1, \\ldots, g_{n+1})\n    &=\n    \\rho_{g_1}\\left( \n    \\omega\\left( g_2,\\ldots, g_{n+1} \\right)\n    \\right)\n    \\times\n    \\prod_{j=1}^{n}\n    \\left[\\omega^{\\left( -1 \\right)^j}\n    \\left( g_1,\\ldots, g_{j-1}, g_j g_{j+1}, g_{j+2}, \\ldots, g_{n+1} \\right)\n    \\right]\n    \\times\n    \\left[\n    \\omega\\left( g_1,\\ldots, g_n \\right)\n    \\right]^{\\left( -1 \\right)^{n+1}}\n\\end{align}\nThis is a complicated expression,\nbut it does have the property that\n\\begin{align}\n    d^2\\omega = 1\n\\end{align}\nEven though this thing looks weird,\none way of thinking about where this complicated expression comes from is the\ntriangulation invariance we want to impose.\nWe want our path integral to be independent of triangulation.\nThis invariance under triangulation literally gives you this.\nThat's geometrically where this complicated expression actually comes from.\n\nNow we have this sequence of cochains.\n\\begin{align}\n    C_\\rho^{n-1}\n    \\xrightarrow{d_{n-1}}\n    C_\\rho^n\n    \\xrightarrow{d_n}\n    C_\\rho^{n+1}\n    \\to \\cdots\n\\end{align}\nCohomology groups are\n\\begin{align}\n    \\frac{\\ker d_n}{\\im d_{n-1}}\n    =\n    H_\\rho^n\\left( G, M \\right)\n    =\n    \\frac{Z_\\rho^{n}(G, M)}{B_\\rho^n(G,M)}\n\\end{align}\nThe $n$-cocycles are\n\\begin{align}\n    Z_\\rho^n(G, M) &=\n    \\left\\{\n    \\omega \\in C_\\rho^n\n    \\mid\n    d\\omega = 1\n    \\right\\}\n    =\n    \\ker d_n\n\\end{align}\nand the $n$-coboundaries are\n\\begin{align}\n    B_\\rho^n &=\n    \\left\\{ \n    \\omega \\in C_\\rho^n\n    \\mid\n    \\omega = d\\mu\\,\n    \\text{ for some }\n    \\mu\\in C_\\rho^{n-1}\n    \\right\\}\n    =\n    \\im d_{n-1}\n\\end{align}\nTime-reversal actually complex conjugates things,\nbecause it's anti-unitary.\nSo we need this $\\rho$ for SPTs.\n
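(Another note added in these notes, not from the lecture: to make the coboundary map\nconcrete, here is a small Python sketch of $d$ acting on inhomogeneous cochains for\n$G=\\mathbb{Z}_N$, assuming the trivial action $\\rho$ and $M=U(1)$, together with a\ncheck that $d(d\\mu)=1$; the group and all names are my own choices.)\n\\begin{verbatim}\n# sketch: coboundary map d on inhomogeneous cochains, trivial rho\nimport cmath\nfrom itertools import product\n\nN = 4\nG = range(N)  # G = Z_N, written additively\n\ndef d(omega, n):\n    # coboundary of an n-cochain omega: G^n -> U(1)\n    def domega(*g):  # g has n + 1 entries\n        val = omega(*g[1:])  # rho_{g_1} term (action is trivial here)\n        for j in range(1, n + 1):\n            merged = g[:j-1] + ((g[j-1] + g[j]) % N,) + g[j+1:]\n            val *= omega(*merged) ** (-1) ** j\n        return val * omega(*g[:n]) ** (-1) ** (n + 1)\n    return domega\n\ndef mu(g):  # an arbitrary 1-cochain\n    return cmath.exp(2j * cmath.pi * g / N**2)\n\nomega2 = d(mu, 1)     # a 2-coboundary, hence a 2-cocycle\ncheck = d(omega2, 2)  # should be identically 1\nassert all(abs(check(a, b, c) - 1) < 1e-9\n           for a, b, c in product(G, repeat=3))\nprint('d(d(mu)) = 1 on Z_4')\n\\end{verbatim}\n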
", "meta": {"hexsha": "55f07ae9c45b32f8803b841aa0b4ffc3bd0d4100", "size": 23579, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phys733/lecture20.tex", "max_stars_repo_name": "ehua7365/umdphysnotes", "max_stars_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-11T12:53:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-11T12:53:46.000Z", "max_issues_repo_path": "phys733/lecture20.tex", "max_issues_repo_name": "ehua7365/umdphysnotes", "max_issues_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phys733/lecture20.tex", "max_forks_repo_name": "ehua7365/umdphysnotes", "max_forks_repo_head_hexsha": "00e4e2b6aba3d03baaec5caa36903e5135b014de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.1678035471, "max_line_length": 80, "alphanum_fraction": 0.7147461724, "num_tokens": 7048, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.793105941403651, "lm_q2_score": 0.7122321964553657, "lm_q1q2_score": 0.564875586667723}}
{"text": "% !TEX root = ../main_lecture_notes.tex\n\\chapter{Security of blockchain systems}\\label{chap:security}\nThe security evaluation of blockchain systems consists in calculating the probability of a successful attack on the blockchain. We will focus, in \\cref{sec:double_spending}, on the double spending attack which is concern for \\PoW powererd cryptocurrency like teh bitcoin one. Security is also at risk when the node have an incentive to deviate from the prescribed protocol. \\cref{sec:blockwithholding} discusses the opportunity for miner of \\PoW equipped blockchain to resort to blockwithholding strategy to optimize their revenue. \n\n\\section{Double-spending in PoW}\\label{sec:double_spending}\nA double spending attack aims at generating a concurrent blockchain to replace the main one. Consider the following scenario\n\\begin{enumerate}\n\t\\item Marie sends to John BTC$10$\n\t\\item The transaction from Marie to John is recorded in the blockchain\n\t\\item John is advised for $\\alpha$ confirmation, that is for $\\alpha-1$ block to be appended after the block where the Marie to John transaction is recorded\n\t\\item Once $\\alpha$ confirmations have been sent, John ships the good\n\t\\item Meanwhile, Marie has started working on her own blockchain version where the Marie to John transaction is replaced by a Marie to Marie transaction\n\t\\item At the shipment date the main blockchain is ahead by $z$ blocks \n\t\\item Marie's goal is then to work on her blockchain branch to catch up with the main branch. If she manages to to that then her branch will replace the public branch and she recovers her bitcoin. She can therefore spend these bitcoins again hence the name double spending.\n\\end{enumerate}\nThe race between the two competing branches of the blockchain is summarized on \\cref{fig:dp_illustration}.\n\\begin{figure}[ht!]\n\\begin{center}\n\\begin{tikzpicture}[-, >=stealth', auto, semithick, node distance=1cm]\n% \\tikzstyle{block} = [rectangle, draw, fill=blue!20,\n%     text width=5em, text centered, rounded corners]\n\\tikzstyle{block}=[rectangle, fill=black,draw=black,thick,text=black,scale=1.5]\n\\tikzstyle{block}=[rectangle, fill=white,draw=black,thick,text=black,scale=1.5]\n\\tikzstyle{confirmed block}=[rectangle, fill=white,draw=blue,thick,text=black,scale=1.5]\n\\tikzstyle{bad block}=[rectangle, fill=white,draw=red,thick,text=black,scale=1.5]\n\\node[block]    (1)                     {\\tiny $\\text{M}\\rightarrow \\text{J}$};\n\\node[block]    (2)[right of=1]                     {};\n\\node[block]    (3)[right of=2]                     {};\n\\node[block]    (4)[right of=3]                     {};\n\\node[confirmed block]    (5)[right of=4]                     {};\n\n\\node[bad block]    (6)[below of=1]         {\\tiny $\\text{M}\\rightarrow \\text{M}$};\n\\node[block]    (7)[right of=6]         {};\n\\node[block]    (8)[right of=7]         {};\n\\path\n(1) edge[ left]     node{}     (2)\n(2) edge[ left]     node{}     (3)\n(3) edge[ left]     node{}     (4)\n(4) edge[ left]     node{}     (5)\n(6) edge[ left]     node{}     (7)\n(7) edge[ left]     node{}     (8);\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Double spending race illustrated, here we have $\\alpha = 4$ and $z = 2$}\n\\label{fig:dp_illustration}\n\\end{figure}\n\\subsection{Random walk model}\\label{ssec:double_spending_rw}\nWe define a discrete time stochastic process $(R_n)_{n\\geq0}$ equal to the difference in length between the public and the private branch of the blockchain. 
At each time step a block is found; it belongs to the main branch with probability $p$ and to the attacker's branch with probability $q=1-p$. The parameter $p$ represents the proportion of hashpower owned by the honest miners, while $q$ is that of the attacker. We have\n$$\nR_0 = z\\text{, and  }R_n = z+Y_1+\\ldots+ Y_n.\n$$\nThe $Y_i$'s are \\iid random variables such that \n$$\n\\mathbb{P}(Y=1) = p\\in (0,1)\\text{, and }\\mathbb{P}(Y=-1) = 1-p=q,\n$$ \n$(R_n)_{n\\geq0}$ is therefore a random walk on $\\mathbb{Z}$. We assume that $p>q$ so that the attacker does not hold more than half of the total hashpower. Define the double spending time as \n$$\n{\\tau_0} = \\inf\\{n>0\\text{ ; }R_n = 0\\}.\n$$\nOur goal is to study the distribution of this stopping time with respect to the filtration \n$$\n\\mathcal{F}_n = \\sigma(Y_1,\\ldots, Y_n),\\text{ }n\\geq1.\n$$ \nAn illustration of this first-hitting time problem is provided in \\cref{fig:double_spending_time}.\n\\begin{figure}[ht!]\n\\begin{center}\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-1,0) -- (9,0) coordinate[label = {below:$n$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,3) coordinate[label = {left:$R_n$}] (ymax);\n  %Lower linear boundary\n\n \n  %Stochastic process trajectory\n  \n  \\draw (0,0) node[blue,left] {} node{};\n  \\draw[very thick,blue,-] (0,1) -- (1,1) node[pos=0.5, above] {} ;\n  \\draw[very thick,dashed,blue] (1,1) -- (1,1.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (1,1.5) -- (2,1.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (2,1.5) -- (2,2) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (2,2) -- (3,2) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (3,2) -- (3,1.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (3,1.5) -- (4,1.5)node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (4,1.5) -- (4,1) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (4,1) -- (5,1) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (5,1) -- (5,0.5) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (5,0.5) -- (6,0.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue,-] (6,0.5) -- (6,1) node[pos=0.5, above] {};\n   \\draw[very thick,blue,-] (6,1) -- (7,1) node[pos=0.5, above] {};\n    \\draw[very thick,dashed,blue,-] (7,1) -- (7,0.5) node[pos=0.5, above] {};\n     \\draw[very thick,blue,-] (7,0.5) -- (8,0.5) node[pos=0.5, above] {};\n     \\draw[very thick,dashed,blue,-] (8,0.5) -- (8,0) node[pos=0.5, above] {};\n  %Jump Times\n  \\draw (1,0) node[black,below] {$1$} node{ \\color{black}$\\bullet$};\n  \\draw (2,0) node[black,below] {$2$} node{ \\color{black}$\\bullet$};\n  \\draw (3,0) node[black,below] {$3$} node{ \\color{black}$\\bullet$};\n  \\draw (4,0) node[black,below] {$4$} node{ \\color{black}$\\bullet$};\n  \\draw (5,0) node[black,below] {$5$} node{ \\color{black}$\\bullet$};\n  \\draw (6,0) node[black,below] {$6$} node{ \\color{black}$\\bullet$};\n  \\draw (7,0) node[black,below] {$7$} node{ \\color{black}$\\bullet$};\n  \\draw (8,0) node[black,below] {$8$} node{ \\color{black}$\\bullet$};\n  %Level of the counting process\n   \\draw (0,0) node[black,below left] {$0$} node{ \\color{black}$\\bullet$};\n   \\draw (0,0.5) node[black,left] {$1$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1) node[black,left] {$z=2$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1.5) node[black,left] {$3$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2) node[black,left] {$4$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2.5) 
node[black,left] {$5$} node{ \\color{black}$\\bullet$};\n\n  % %Aggregated Capital gains\n%  \\draw (0,1.5) node[blue,below right] {$\\mu_1$} node{ \\color{blue}$-$};\n%  \\draw (0,2.25) node[blue,left] {$\\mu_2$} node{ \\color{blue}$-$};\n%  \\draw (0,3.75) node[blue,left] {$\\mu_3$} node{ \\color{blue}$-$};\n  %Ruin time = First-crossing time\n%  \\draw (5,0) node[black,above right] {${\\tau_0}_u$} node{ \\color{black}$\\times$};\n%  \\draw[dotted,black] (0,3.28) -- (5,3.28);\n%  \\draw[dotted,black] (5,0) -- (5,3.28);\n\\end{tikzpicture}\n\\end{center}\n\\caption{Illustration of the first-hitting time problem of a double spending attack.}\n\\label{fig:double_spending_time}\n\\end{figure}\nLet us denote by \n$$\n\\mathbb{P}_z(\\cdot) = \\mathbb{P}(\\cdot|R_0 = z)\\text{ and }\\mathbb{E}_z(\\cdot) = \\mathbb{E}(\\cdot|R_0 = z).\n$$\nWe are interested for now in the conditional distribution of ${\\tau_0}$ given that $R_0 = z$.\n\\subsubsection{Double spending probability}\\label{sssec:double_spending_rw_dsp}\nThe double spending probability is defined as \n$$\n\\phi(z)=\\mathbb{P}_z({\\tau_0} <\\infty),\n$$\nand is given in the following result.\n\\begin{theo}\nIf $p>q$ then \n$$\n\\phi(z) = \\left(\\frac{q}{p}\\right)^z.\n$$\n\\end{theo}\n\\noindent We give two proofs of this result. The first one uses a simple first step analysis exploiting the Markov property of the random walk. The second one uses martingales and the optional stopping theorem.\\\\\n\n\\underline{\\textit{Proof 1:}}\\\\\nUsing a first step analysis, we have \n\\begin{equation}\\label{eq:difference_equation}\n\\phi(z) = p\\phi(z+1)+(1-p)\\phi(z-1),\\text{ }z\\geq1.\n\\end{equation}\nWe also have the boundary conditions\n\\begin{equation}\\label{eq:boundary_conditions_double_spending}\n\\phi(0) = 1\\text{ and }\\underset{z\\rightarrow +\\infty}{\\lim}\\phi(z) = 0.\n\\end{equation}\nEquation \\eqref{eq:difference_equation} is a linear difference equation of order $2$ associated to the characteristic equation\n$$\npx^2 - x + 1-p = 0,\n$$\nwhich has two real roots,\n$$\nr_1 = 1, \\text{ and }r_2 = \\frac{1-p}{p}.\n$$\nThe solution of \\eqref{eq:difference_equation} is given by \n$$\n\\phi(z)=A+B\\left(\\frac{1-p}{p}\\right)^z,\n$$\nwhere $A$ and $B$ are constants. 
Using the boundary conditions \\eqref{eq:boundary_conditions_double_spending}, we deduce that\n$$\n\\phi(z) = \\left(\\frac{1-p}{p}\\right)^z,\n$$\nas announced.\\\\\nFor the second proof we need the notion of a martingale\n\\begin{definition}\nA stochastic process $(X_n)_{n\\geq0}$ is called a martingale with respect to a filtration $\\mathcal{F}_n$ if\n\\begin{itemize}\n  \\item[(i)] $X_n$ is $\\mathcal{F}_n$-adapted\n  \\item[(ii)] $\\mathbb{E}(|X_n|)<\\infty\\text{ for }n\\geq0$ \n  \\item[(iii)] $\\mathbb{E}(X_n|\\mathcal{F}_{n-1}) = X_{n-1}$\n\\end{itemize} \n\\end{definition}\n\\noindent and the optional stopping theorem.\n\\begin{theo}\nLet $T$ be a stopping time for the martingale $(X_n)_{n\\geq0}$, then it holds that \n$$\n\\mathbb{E}(X_T) = \\mathbb{E}(X_0) \n$$\nin each of the following situations\n\\begin{itemize}\n\\item[(i)] $T$ is bounded almost surely \n\\item[(ii)] There exists $c>0$ such that $|X_{T\\land n}|<c$ for every $n>0$.\n\\item[(iii)] $\\mathbb{E}(T)<\\infty$, and, for some $K>0$ we have that \n$$\n|X_n(\\omega) - X_{n-1}(\\omega)|\\leq K,\\text{ }\\forall (n,\\omega).\n$$\n\\end{itemize}\n\n\\end{theo}\n\\underline{\\textit{Proof 2:}}\\\\\nDefine the process \n$$\nX_n = \\exp\\left[sR_n- n\\kappa_Y(s)\\right],\\text{ for }n\\in\\mathbb{N}\\text{ and }s\\in\\mathbb{R}, \n$$\nwhere\n$$\n\\kappa_Y(s) = \\log\\left[\\mathbb{E}\\left(e^{sY}\\right)\\right]\n$$\nis the cumulant generating function of $Y$. \n\\begin{lemma}\\label{lem:wald_martingale_RW}\nTake $s$ so that $\\kappa_Y(s)<\\infty$, then $(X_n)_{n\\geq0}$ is an $\\mathcal{F}_n$-martingale.\n\\end{lemma}\n\\begin{proof}\nDenote by $M_Y(s) = \\mathbb{E}(e^{sY})$ the moment generating function of $Y$; we have that \n\\begin{eqnarray*}\n\\mathbb{E}(X_n|\\mathcal{F}_{n-1})&=&\\mathbb{E}\\left\\{\\exp\\left[sR_n - n\\kappa_Y(s)\\right]|\\mathcal{F}_{n-1}\\right\\}\\\\\n&=&\\exp\\left[sR_{n-1} - n\\kappa_Y(s)\\right]\\mathbb{E}\\left[\\exp\\left(sY_{n}\\right)|\\mathcal{F}_{n-1}\\right]\\\\\n&=&\\exp\\left[sR_{n-1} \\right]M_Y(s)^{-n} M_Y(s)\\\\\n&=& X_{n-1}.\n\\end{eqnarray*}\n\\end{proof} \nThe equation $\\kappa_Y(s) = 0$ is equivalent to \n$$\npe^s+qe^{-s} = 1,\n$$\nwhich admits $\\gamma =\\log(q/p)$ as its only non-zero solution. The process $(\\e^{\\gamma R_n})_{n\\geq0}$ is an $\\mathcal{F}_n$-martingale. Define $\\tau_a = \\inf\\{n\\geq 0\\text{ ; }R_n = a\\}$, for $a>z$. Consider the stopping time $\\tau = \\tau_0\\land\\tau_a$; we have that for any $n>0$, \n$$\n\\mathbb{P}( \\tau=\\infty) \\leq \\mathbb{P}( \\tau > n) \\leq \\mathbb{P}( |R_n| \\leq a) \\underset{n\\rightarrow \\infty}{\\longrightarrow} 0.\n$$\nWe can therefore apply the optional stopping theorem at $\\tau$ to get\n\\begin{eqnarray*}\n \\mathbb{E}(X_{\\tau}) = \\mathbb{E}(X_{0})&\\Leftrightarrow& \\mathbb{P}(\\tau = \\tau_0) + [1- \\mathbb{P}(\\tau = \\tau_0)]\\e^{\\gamma a}= \\e^{\\gamma z}\\\\\n &\\Leftrightarrow& \\mathbb{P}(\\tau = \\tau_0) = \\frac{\\e^{\\gamma z}-\\e^{\\gamma a}}{1-\\e^{\\gamma a}}.\n\\end{eqnarray*}\nWe then let $a\\rightarrow\\infty$ in the above equation (noting that $\\gamma<0$) to conclude that \n$$\n\\mathbb{P}(\\tau_0 <\\infty) =\\left(\\frac{q}{p}\\right)^z.\n$$\n\\begin{exercise}\nJust for fun\n\\begin{itemize}\n\\item What is $\\phi(z)$ when $p \\leq q$? \n\\item Compute $\\mathbb{P}(\\tau_0 \\leq \\tau_a)$ if $p = q = 1/2$\n\\end{itemize}\n\\end{exercise}\n
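As a sanity check, the theorem can be confronted with a quick Monte Carlo experiment. The following sketch (not part of the analysis above; it truncates the infinite horizon at a large finite value, and all names are ours) should reproduce $(q/p)^z$:\n\\begin{verbatim}\n# Monte Carlo estimate of phi(z) = P_z(tau_0 < infinity), for p > q\nimport random\n\ndef simulate_phi(z, q, trials=100_000, horizon=10_000):\n    # truncate the infinite horizon at a large finite value\n    hits = 0\n    for _ in range(trials):\n        r = z\n        for _ in range(horizon):\n            r += 1 if random.random() < 1 - q else -1\n            if r == 0:\n                hits += 1\n                break\n    return hits / trials\n\nq, z = 0.3, 2\nprint(simulate_phi(z, q), (q / (1 - q)) ** z)  # both close to 0.184\n\\end{verbatim}\n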
In practice the number of blocks $z$ is actually a random variable \n$$\nZ = (\\alpha -M)_+,\n$$ \nwhere $M$ corresponds to the number of blocks the attacker managed to mine while the vendor waits for $\\alpha$ confirmations. If we assume that a block mined by the honest miners is a success while a block mined by the attacker is a failure, then $M$ actually counts the number of failures before $\\alpha$ successes. We have that $M\\sim \\text{Neg-Bin}(\\alpha, p)$, where $M$ has a probability mass function (\\pmf) given by \n$$\n\\mathbb{P}(M = m) = \\binom{\\alpha+m-1}{m}p^\\alpha q^m.\n$$\nWhenever $Z = 0$, double spending occurs right away as $\\phi(0) =1$. To derive the double spending probability, we condition upon the values of $Z$ via the law of total probability \n$$\n\\mathbb{P}( \\text{Double Spending}) = \\mathbb{P}(M\\geq \\alpha)+\\sum_{m = 0}^{\\alpha-1}\\mathbb{P}(M = m)\\,\\phi(\\alpha - m) = \\mathbb{P}(M\\geq \\alpha)+\\sum_{m = 0}^{\\alpha-1}\\binom{\\alpha+m-1}{m}q^\\alpha p^m,\n$$\nsince $p^\\alpha q^m\\left(q/p\\right)^{\\alpha-m} = q^\\alpha p^m$. The analysis conducted here is similar to that of \\citet{rosenfeld2014analysis}.\n\\subsubsection{Double spending time}\\label{sssec:double_spending_rw_dst}\nIn the block mining world time is money. Every hour spent computing hashes is costly in terms of energy. It is then very interesting to know whether a double spending attack is meant to last long or not. Intuitively, we can think that if it is to occur, it should occur at an early stage, because as $p>1/2$ our random walk $(R_n)_{n\\geq0}$ will eventually drift toward $+\\infty$. The following result provides the probability distribution of $\\tau_0$ when $R_0 = z$.\n\\begin{theo}\nIf $z = 0$ then $\\tau_0=0$ almost surely. If $z>0$ then $\\tau_0$ admits a \\pmf given by \n$$\n\\mathbb{P}_z(\\tau_0 = n)=\n\\frac{z}{n}\\binom{n}{(n-z) / 2}p^{(n-z) / 2}q^{(n\n+z) / 2}\\text{ if }n\\geq z\\text{ and }n-z\\text{ is even},\n$$\nand $0$ otherwise.\n\\end{theo}\n
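Before turning to the proof, here is a small simulation sketch (ours, not from the original analysis; \\texttt{horizon} truncates the support, and all names are arbitrary) comparing empirical frequencies of $\\tau_0$ with the stated \\pmf:\n\\begin{verbatim}\n# empirical check of the pmf of tau_0 against the closed form\nimport random\nfrom math import comb\n\ndef pmf_tau0(z, n, p):\n    # closed form; 0 unless n >= z and n - z is even\n    if n < z or (n - z) % 2:\n        return 0.0\n    u = (n - z) // 2  # number of upward jumps\n    return z / n * comb(n, u) * p**u * (1 - p)**(u + z)\n\ndef sim_tau0(z, p, horizon=200):\n    r = z\n    for n in range(1, horizon + 1):\n        r += 1 if random.random() < p else -1\n        if r == 0:\n            return n\n    return None  # not absorbed before the horizon\n\nz, p, trials = 2, 0.7, 200_000\nhits = [sim_tau0(z, p) for _ in range(trials)]\nfor n in (2, 4, 6):\n    print(n, hits.count(n) / trials, pmf_tau0(z, n, p))\n\\end{verbatim}\n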
\\begin{proof}\nWe start by showing the following lemma, sometimes referred to as the Markov hitting time theorem.\n\\begin{lemma}\\label{lem:markov_hitting_time}\n\\begin{equation}\\label{eq:markov_hitting_time}\n\\mathbb{P}_z(\\tau_0 = n) = \\frac{z}{n}\\mathbb{P}_z(R_n = 0)\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nIf $z = 0$ then $\\tau_0 = 0$ almost surely and both sides of \\eqref{eq:markov_hitting_time} equal $0$ for $n\\geq1$. Assume that $z\\geq1$; we have that $\\mathbb{P}_z(\\tau_0 = n) = 0$ and $\\mathbb{P}_z(R_n = 0) = 0$ whenever $n<z$ or $n-z$ is odd. The rest of the proof is by induction on $n\\geq1$. When $n = 1$ we have that \n$$\n\\mathbb{P}_z(\\tau_0 = 1) = 0 = \\frac{z}{1}\\mathbb{P}_z(R_1 = 0),\\text{ for }z>1, \n$$\nand \n$$\\mathbb{P}_1(\\tau_0 = 1) = q = \\frac{1}{1}\\mathbb{P}_1(R_1 = 0),\\text{ for }z=1. \n$$\nThe property holds for $n=1$. Assume that it holds for some $n\\geq1$. The law of total probability yields\n\\begin{eqnarray*}\n\\mathbb{P}_z(\\tau_0 = n+1)&=&\\sum_{y\\in\\{-1,1\\}}\\mathbb{P}_z(\\tau_0 = n+1|Y_1 = y)\\mathbb{P}(Y_1 = y)\\\\\n&=&\\sum_{y\\in\\{-1,1\\}}\\mathbb{P}_{z+y}(\\tau_0 = n)\\mathbb{P}(Y_1 = y)\\\\\n&=&\\sum_{y\\in\\{-1,1\\}}\\frac{z+y}{n}\\mathbb{P}_{z+y}(R_n = 0)\\mathbb{P}(Y_1 = y)\n\\end{eqnarray*}\nThe second equality holds thanks to the strong Markov property. We further undo the law of total probability.\n\\begin{eqnarray}\n\\mathbb{P}_z(\\tau_0 = n+1)&=&\\sum_{y\\in\\{-1,1\\}}\\frac{z+y}{n}\\mathbb{P}_{z}(Y_1 = y|R_{n+1} = 0)\\mathbb{P}_z(R_{n+1} = 0)\\nonumber\\\\\n&=&\\frac{\\mathbb{P}_z(R_{n+1} = 0)}{n}\\left[z+\\mathbb{E}(Y_1|R_{n+1}=0)\\right]\\label{eq:law_total_probability_undone}\n\\end{eqnarray}\nSince the $Y_i$'s are \\iid, it holds that \n$$\n\\mathbb{E}(Y_1|R_{n+1}=0) = \\mathbb{E}(Y_i|R_{n+1}=0)\\text{, } i = 1, \\ldots, n+1,\n$$\nand it follows that \n$$\n\\mathbb{E}(Y_1|R_{n+1}=0) = \\frac{1}{n+1}\\sum_{i =1}^{n+1}\\mathbb{E}(Y_i|R_{n+1}=0) = \\frac{-z}{n+1}.\n$$\nInserting the above expression in \\eqref{eq:law_total_probability_undone} yields \n$$\n\\mathbb{P}_z(\\tau_0 = n+1) = \\frac{z}{n+1}\\mathbb{P}_z(R_{n+1} = 0).\n$$\n\\end{proof}\n\\begin{remark}\nThis proof is direct, simple and inspired by \\citet{Hofstad2008}. It is possible to make it shorter by taking advantage of the ballot theorem. Indeed, consider again the first hitting problem of \\cref{fig:double_spending_time} and reverse the timeline. It corresponds to that of a random walk $(S_n)_{n\\geq0}$ that starts at $0$, makes upward jumps with probability $q$, and aims at reaching the level $z$ without crossing the $x$ axis, see \\cref{fig:time_reversed_double_spending_time}.\n\\begin{figure}[ht!]\n\\begin{center}\n \\subfloat[Original first hitting problem]\n {\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-1,0) -- (9,0) coordinate[label = {below:$n$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,3) coordinate[label = {left:$R_n$}] (ymax);\n  %Lower linear boundary\n\n \n  %Stochastic process trajectory\n  \n  \\draw (0,0) node[blue,left] {} node{};\n  \\draw[very thick,blue,-] (0,1) -- (1,1) node[pos=0.5, above] {} ;\n  \\draw[very thick,dashed,blue] (1,1) -- (1,1.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (1,1.5) -- (2,1.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (2,1.5) -- (2,2) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (2,2) -- (3,2) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (3,2) -- (3,1.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (3,1.5) -- (4,1.5)node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (4,1.5) -- (4,1) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (4,1) -- (5,1) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (5,1) -- (5,0.5) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (5,0.5) -- (6,0.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue,-] (6,0.5) -- (6,1) node[pos=0.5, above] {};\n   \\draw[very thick,blue,-] (6,1) -- (7,1) node[pos=0.5, above] {};\n    \\draw[very thick,dashed,blue,-] (7,1) -- (7,0.5) node[pos=0.5, above] {};\n     \\draw[very thick,blue,-] (7,0.5) -- (8,0.5) node[pos=0.5, above] {};\n     \\draw[very thick,dashed,blue,-] (8,0.5) -- (8,0) node[pos=0.5, above] {};\n  %Jump Times\n  \\draw (1,0) node[black,below] {$1$} node{ \\color{black}$\\bullet$};\n  \\draw (2,0) node[black,below] {$2$} node{ \\color{black}$\\bullet$};\n  \\draw (3,0) node[black,below] {$3$} node{ \\color{black}$\\bullet$};\n  \\draw (4,0) node[black,below] {$4$} node{ \\color{black}$\\bullet$};\n  \\draw (5,0) node[black,below] {$5$} node{ \\color{black}$\\bullet$};\n  \\draw (6,0) node[black,below] {$6$} node{ \\color{black}$\\bullet$};\n  \\draw (7,0) node[black,below] {$7$} node{ \\color{black}$\\bullet$};\n  \\draw (8,0) node[black,below] {$8$} node{ \\color{black}$\\bullet$};\n  %Level of the 
counting process\n   \\draw (0,0) node[black,below left] {$0$} node{ \\color{black}$\\bullet$};\n   \\draw (0,0.5) node[black,left] {$1$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1) node[black,left] {$z=2$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1.5) node[black,left] {$3$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2) node[black,left] {$4$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2.5) node[black,left] {$5$} node{ \\color{black}$\\bullet$};\n\n  % %Aggregated Capital gains\n%  \\draw (0,1.5) node[blue,below right] {$\\mu_1$} node{ \\color{blue}$-$};\n%  \\draw (0,2.25) node[blue,left] {$\\mu_2$} node{ \\color{blue}$-$};\n%  \\draw (0,3.75) node[blue,left] {$\\mu_3$} node{ \\color{blue}$-$};\n  %Ruin time = First-crossing time time\n%  \\draw (5,0) node[black,above right] {${\\tau_0}_u$} node{ \\color{black}$\\times$};\n%  \\draw[dotted,black] (0,3.28) -- (5,3.28);\n%  \\draw[dotted,black] (5,0) -- (5,3.28);\n\\end{tikzpicture}\n\\label{subfig:ds_RW_time}\n}\n\\hskip1em\n\\subfloat[Time reversed first hitting problem]{\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-1,0) -- (9,0) coordinate[label = {below:$n$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,3) coordinate[label = {left:$S_n$}] (ymax);\n  %Lower linear boundary\n\n \n  %Stochastic process trajectory\n  \n  \\draw (0,0) node[blue,left] {} node{};\n  \\draw[very thick,blue,-] (0,0) -- (1,0) node[pos=0.5, above] {} ;\n  \\draw[very thick,dashed,blue] (1,0) -- (1,0.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (1,0.5) -- (2,0.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (2,0.5) -- (2,1) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (2,1) -- (3,1) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (3,1) -- (3,0.5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (3,0.5) -- (4,0.5)node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (4,0.5) -- (4,1) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (4,1) -- (5,1) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue] (5,1) -- (5,1.5) node[pos=0.5, right] {};  \n  \\draw[very thick,blue,-] (5,1.5) -- (6,1.5) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,blue,-] (6,1.5) -- (6,2) node[pos=0.5, above] {};\n   \\draw[very thick,blue,-] (6,2) -- (7,2) node[pos=0.5, above] {};\n    \\draw[very thick,dashed,blue,-] (7,2) -- (7,1.5) node[pos=0.5, above] {};\n     \\draw[very thick,blue,-] (7,1.5) -- (8,1.5) node[pos=0.5, above] {};\n     \\draw[very thick,dashed,blue,-] (8,1.5) -- (8,1) node[pos=0.5, above] {};\n  %Jump Times\n  \\draw (1,0) node[black,below] {$1$} node{ \\color{black}$\\bullet$};\n  \\draw (2,0) node[black,below] {$2$} node{ \\color{black}$\\bullet$};\n  \\draw (3,0) node[black,below] {$3$} node{ \\color{black}$\\bullet$};\n  \\draw (4,0) node[black,below] {$4$} node{ \\color{black}$\\bullet$};\n  \\draw (5,0) node[black,below] {$5$} node{ \\color{black}$\\bullet$};\n  \\draw (6,0) node[black,below] {$6$} node{ \\color{black}$\\bullet$};\n  \\draw (7,0) node[black,below] {$7$} node{ \\color{black}$\\bullet$};\n  \\draw (8,0) node[black,below] {$8$} node{ \\color{black}$\\bullet$};\n  %Level of the counting process\n   \\draw (0,0) node[black,below left] {$0$} node{ \\color{black}$\\bullet$};\n   \\draw (0,0.5) node[black,left] {$1$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1) node[black,left] {$z=2$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1.5) node[black,left] {$3$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2) node[black,left] 
{$4$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2.5) node[black,left] {$5$} node{ \\color{black}$\\bullet$};\n\n  % %Aggregated Capital gains\n%  \\draw (0,1.5) node[blue,below right] {$\\mu_1$} node{ \\color{blue}$-$};\n%  \\draw (0,2.25) node[blue,left] {$\\mu_2$} node{ \\color{blue}$-$};\n%  \\draw (0,3.75) node[blue,left] {$\\mu_3$} node{ \\color{blue}$-$};\n  %Ruin time = First-crossing time\n%  \\draw (5,0) node[black,above right] {${\\tau_0}_u$} node{ \\color{black}$\\times$};\n%  \\draw[dotted,black] (0,3.28) -- (5,3.28);\n%  \\draw[dotted,black] (5,0) -- (5,3.28);\n\\end{tikzpicture}\n\\label{subfig:time_reversed_ds_RW_time}\n}\n\\end{center}\n\\caption{Another look at the first hitting time problem.}\n\\label{fig:time_reversed_double_spending_time}\n\\end{figure}\nNote that \\cref{subfig:time_reversed_ds_RW_time} is the reflection of \\cref{subfig:ds_RW_time} with respect to the $y$ axis. We have equivalently\n$$\n\\mathbb{P}_z(\\tau_0 = n) = \\mathbb{P}(S_k>0,\\text{ }1\\leq k\\leq n|S_n = z, S_0 = 0)\\,\\mathbb{P}(S_n = z|S_0=0),\n$$\nand, by the ballot theorem,\n$$\n\\mathbb{P}(S_k>0,\\text{ }1\\leq k\\leq n|S_n = z, S_0 = 0) = \\frac{(n+z)/2-(n-z)/2}{n} = \\frac{z}{n}.\n$$\nFor a proof of the ballot theorem, see \\citet{Renault2007}. For a general formulation and an application to queueing, see \\citet{Takacs1962}.\n\\end{remark}\nTo complete the proof, we just note that \n$$\n\\mathbb{P}_z(R_n = 0) = \\binom{n}{(n-z) / 2}p^{(n-z) / 2}q^{(n\n+z) / 2},\n$$\nas it corresponds to trajectories of $(R_n)_{n \\geq0}$ starting at $R_0 = z$ and ending at $0$, made of $(n-z)/2$ upward jumps and $(n+z)/2$ downward ones.\n\\end{proof}\nJust like in the previous section, the actual double spending time depends on the value of the random variable $Z = (\\alpha -M)_+$.\n\n\\subsection{Counting process model}\\label{sec:counting_process}\nOur aim is to go from the discrete time framework of the previous section to a continuous time one. To do so, we will model the lengths of the blockchain branches as counting processes. We will consider renewal processes and more specifically Poisson processes. We start by giving some reminders on the exponential distribution and counting processes before studying the double spending time distribution.\n\n\\subsubsection{Poisson process, Exponential distributions and friends}\\label{sssec:exp_distribution}\n\\begin{definition}\\label{def:counting_process}\nA counting process $(N_t)_{t\\geq0}$ is a continuous time stochastic process that counts the occurrences of an event over time, such that\n\\begin{equation*}\nN_0=0\\text{ and }N_t=\\sum_{k=1}^{+\\infty}\\mathbb{I}_{T_k\\leq t},\n\\end{equation*}\nwhere $T_1,T_2,T_3,\\ldots$ denote the arrival times, with the convention that $T_0=0$. 
Let $\\Delta^T_0,\\Delta^T_1,\\Delta^T_2,\\ldots$ be the sequence of inter-arrival times defined as\n$$\n\\Delta^T_k=T_{k+1}-T_{k}\\text{, }k=0,1,2\\ldots.\n$$\n\\end{definition}\nA trajectory of a counting process is given in \\cref{fig:trajectory_counting_process}.\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{tikzpicture}[scale=1]\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-1,0) -- (8,0) coordinate[label = {below:$t$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,5.5) coordinate[label = {left:$N_t$}] (ymax);\n  %Lower linear boundary\n\n\n  %Stochastic process trajectory\n\n  \\draw (0,0) node[blue,left] {} node{};\n  \\draw[very thick,blue,-] (0,0) -- (1,0) node[pos=0.5, above] {$\\Delta^T_0$} ;\n  \\draw[very thick,dashed,blue] (1,0) -- (1,1) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (1,1) -- (1.75,1) node[pos=0.5, above] {$\\Delta^T_1$};\n  \\draw[very thick,dashed,blue] (1.75,1) -- (1.75,2) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (1.75,2) -- (2.75,2) node[pos=0.5, above] {$\\Delta^T_2$};\n  \\draw[very thick,dashed,blue] (2.75,2) -- (2.75,3) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (2.75,3) -- (5,3)node[pos=0.5, above] {$\\Delta^T_3$};\n  \\draw[very thick,dashed,blue] (5,3) -- (5,4) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (5,4) -- (6,4) node[pos=0.5, above] {$\\Delta^T_4$};\n  \\draw[very thick,dashed,blue] (6,4) -- (6,5) node[pos=0.5, right] {};\n  \\draw[very thick,blue,-] (6,5) -- (8,5) node[pos=0.5, above] {$\\Delta^T_5$};\n  %Jump Times\n  \\draw (1,0) node[blue,below] {$T_1$} node{ \\color{blue}$\\bullet$};\n  \\draw (1.75,0) node[blue,below] {$T_2$} node{ \\color{blue}$\\bullet$};\n  \\draw (2.75,0) node[blue,below] {$T_3$} node{ \\color{blue}$\\bullet$};\n  \\draw (5,0) node[blue,below] {$T_4$} node{ \\color{blue}$\\bullet$};\n  \\draw (6,0) node[blue,below] {$T_5$} node{ \\color{blue}$\\bullet$};\n  %Level of the counting process\n   \\draw (0,0) node[black,left] {$0$} node{ \\color{black}$\\bullet$};\n   \\draw (0,1) node[black,left] {$1$} node{ \\color{black}$\\bullet$};\n   \\draw (0,2) node[black,left] {$2$} node{ \\color{black}$\\bullet$};\n   \\draw (0,3) node[black,left] {$3$} node{ \\color{black}$\\bullet$};\n   \\draw (0,4) node[black,left] {$4$} node{ \\color{black}$\\bullet$};\n   \\draw (0,5) node[black,left] {$5$} node{ \\color{black}$\\bullet$};\n\\end{tikzpicture}\n\\end{center}\n\\caption{Trajectory of the counting process $(N_t)_{t\\geq0}$.}\n\\label{fig:trajectory_counting_process}\n\\end{figure}\n\\begin{definition}\\label{def_poisson_process}\nA Poisson process $(N_t)_{t\\geq0}$ is a counting process whose inter-arrival times are \\iid exponential random variables.\n\\end{definition}\n\\begin{remark}\\label{def_renewal_process}\nA Poisson process belongs to the family of renewal processes, which are counting processes with \\iid inter-arrival times.\n\\end{remark}\n\\begin{definition}\\label{def:exp_dist}\nA random variable $X$ is exponentially distributed, $X\\sim\\ExpDist(\\lambda)$, if it has \\pdf\n$$\nf_X(x) =\\begin{cases} \n\\lambda\\e^{-\\lambda x},&\\text{ if }x>0,\\\\\n0,&\\text{ otherwise}.\n\\end{cases}\n$$\n\\end{definition}\n
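The definition of the Poisson process translates directly into a simulation recipe: cumulate \\iid exponential inter-arrival times until a horizon is exceeded. A minimal sketch (assuming only Python's standard library; the names are ours):\n\\begin{verbatim}\n# simulate the arrival times of a Poisson process of intensity lam\nimport random\n\ndef poisson_arrivals(lam, t_max):\n    # return the arrival times T_1 < T_2 < ... up to time t_max\n    times, t = [], 0.0\n    while True:\n        t += random.expovariate(lam)  # Delta_k ~ Exp(lam)\n        if t > t_max:\n            return times\n        times.append(t)\n\narrivals = poisson_arrivals(lam=2.0, t_max=10.0)\nprint(len(arrivals))  # N_10, around lam * t_max = 20 on average\n\\end{verbatim}\n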
For reasons that will become clear later on, we need to introduce the joint distribution of the order statistics of a uniform random sample.\n\\begin{prop}\\label{def:uniform_OS_dist}\nLet $U_1,\\ldots, U_n$ be a sample of \\iid uniform random variables on $(a,b)$. Denote by \n$$\nU_{(1)}\\leq U_{(2)}\\leq\\ldots\\leq U_{(n)}\n$$\nthe order statistics of such a sample. The joint distribution of $(U_{(1)},\\ldots,  U_{(n)})$ is given by \n$$\nf_{(U_{(1)},\\ldots, U_{(n)})}(u_1,\\ldots, u_n)=\\frac{n!}{(b-a)^n}\\mathbb{I}_{a<u_1<\\ldots< u_n<b}(u_1,\\ldots, u_n),\n$$\nand we denote $(U_{(1)},\\ldots,  U_{(n)})\\sim \\text{OS}_n([a,b])$.\n\\end{prop}\n\\begin{proof}\nLet $g:\\mathbb{R}^n\\mapsto \\mathbb{R}_+$ be measurable and bounded. We have that\n$$\n\\mathbb{E}[g(U_{(1)},\\ldots,U_{(n)})]=\\mathbb{E}\\left[\\sum_{\\sigma\\in\\mathcal{S}_n}\ng(U_{\\sigma(1)},\\ldots,U_{\\sigma(n)})\n\\mathbb{I}_{U_{\\sigma(1)} <\\ldots< U_{\\sigma(n)}}\n\\right],\n$$\nwhere $\\mathcal{S}_n$ is the set of all permutations of $\\{1,\\ldots,n \\}$. We note that\n\\begin{eqnarray*}\n\\mathbb{E}\\left[g(U_{\\sigma(1)},\\ldots,U_{\\sigma(n)})\n\\mathbb{I}_{U_{\\sigma(1)} <\\ldots< U_{\\sigma(n)}}\n\\right]&=& \\mathbb{E}\\left[g(U_{1},\\ldots,U_{n})\n\\mathbb{I}_{U_{1} <\\ldots< U_{n}}\\right]\\\\\n&=&\\int_{\\mathbb{R}^{n}}g(u_{1},\\ldots,u_{n})\n\\mathbb{I}_{u_{1} <\\ldots< u_{n}}\\frac{1}{(b-a)^n}\\text{d}\\lambda_{n}(u_1,\\ldots, u_n).\n\\end{eqnarray*}\nIt then follows that\n$$\n\\mathbb{E}[g(U_{(1)},\\ldots,U_{(n)})]=\\int_{\\mathbb{R}^{n}}g(u_{1},\\ldots,u_{n})\n\\mathbb{I}_{u_{1} <\\ldots< u_{n}}\\frac{n!}{(b-a)^n}\\text{d}\\lambda_{n}(u_1,\\ldots, u_n).\n$$\n\\end{proof}\nWe require some additional results about the gamma distribution.\n\\begin{prop}\\label{prop:erlang_dist}\nLet $\\Delta^T_1,\\ldots, \\Delta^T_n$ be \\iid exponential $\\text{Exp}(\\lambda)$ random variables, and define the sequence $T_k=\\sum_{i=1}^{k}\\Delta^T_i, k=1,\\ldots, n$.\n\\begin{enumerate}\n\\item The $T_{k}$'s are gamma distributed, $T_{k}\\sim\\GammaDist(k,\\lambda)$, with \\pdf \n$$\nf_{T_k}(t)=\\frac{t^{k-1}e^{-\\lambda t}\\lambda^k}{\\Gamma(k)},\\text{ }t>0,\n$$\nwhere $\\Gamma(k) = \\int_{0}^{\\infty}\\e^{- x}x^{k-1}\\text{d}x$.\n\\item The joint distribution of $(T_1,\\ldots, T_n)$ has \\pdf given by\n$$\nf_{(T_1,\\ldots, T_n)}(t_1,\\ldots, t_n)=\\lambda^n e^{-\\lambda t_n }\\mathbb{I}_{0<t_1<\\ldots< t_n}(t_1,\\ldots, t_n)\n$$\n\\item $[(T_1,\\ldots, T_n)|T_{n+1}=t]\\sim \\text{OS}_{n}([0,t])$\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n\\begin{enumerate}\n\\item We use induction on $k\\geq1$. For $k=1$ we have that $\\Delta^T_1= T_1$, so the property holds. Assume that the property holds true for some $k$ and consider $k+1$. We note that $T_{k+1}=T_k+\\Delta^T_{k+1}$, then\n\\begin{eqnarray*}\nf_{T_{k+1}}(t)&=&\\int_{0}^{t}f_{T_k}(x)f_{\\Delta^T_{k+1}}(t-x)\\text{d}x\\\\\n&=&\\int_{0}^{t}\\frac{x^{k-1}e^{-\\lambda x}\\lambda^k}{(k-1)!}\\lambda e^{-\\lambda(t-x)}\\text{d}x\\\\\n&=&\\frac{e^{-\\lambda t}\\lambda^{k+1}}{(k-1)!}\\frac{t^k}{k}=\\frac{t^k e^{-\\lambda t}\\lambda^{k+1}}{k!}.\n\\end{eqnarray*}\n\\begin{exercise}\nCan you propose another way to show this result? 
Without using induction.\n\\end{exercise}\n\\item Let $g:\\mathbb{R}^n\\mapsto \\mathbb{R}_+$ be measurable and bounded, we have\n\\begin{eqnarray*}\n\\mathbb{E}[g(T_{1},\\ldots,T_n)]&=& \\mathbb{E}[g(\\Delta^T_{1},\\Delta^T_1+\\Delta^T_2\\ldots,\\Delta^T_1+\\ldots+ \\Delta^T_n)]\\\\\n&=&\\int_{\\mathbb{R}^n}g(t_{1},\\ldots,t_1+\\ldots+ t_n)f_{(\\Delta^T_{1},\\ldots,\\Delta^T_n)}(t_1,\\ldots,t_n)\\text{d}\\lambda_n(t_{1},\\ldots,t_n)\\\\\n&=&\\int_{\\mathbb{R}_+^n}g(t_{1},\\ldots,t_1+\\ldots+ t_n)\\lambda^n\\e^{-\\lambda(t_1+\\ldots+t_n)}\\text{d}\\lambda_n(t_{1},\\ldots,t_n)\\\\\n\\end{eqnarray*}\nLet us apply the change of variable \n$$\n\\Phi:(u_{1},\\ldots,u_n)\\mapsto(u_1, u_{2}-u_1,\\ldots,u_n-u_{n-1}):=(t_1,\\ldots, t_n),\n$$\nminding the change in the integration domain, as \n$$\n\\Phi(\\mathbb{R}_+\\times ]u_1,\\infty[\\times \\ldots\\times ]u_{n-1},\\infty[) = \\mathbb{R}^n_+, \n$$\nand the Jacobian $\\left|\\frac{\\text{d}\\Phi}{\\text{d}u}\\right|=1$. It follows that\n$$\n\\mathbb{E}[g(T_{1},\\ldots,T_n)]=\\int_{\\mathbb{R}^n} g(u_1,\\ldots, u_n)\\lambda^{n}e^{-\\lambda u_n}\\mathbb{I}_{0<u_1<\\ldots<u_n}(u_1,\\ldots, u_n)\\text{d}\\lambda_{n}(u_1,\\ldots, u_n).\n$$\n\\item We have that \n\\begin{eqnarray*}\nf_{T_1,\\ldots, T_n|T_{n+1}}(t_1,\\ldots,t_n|t)\n&=&\\frac{f_{T_1,\\ldots, T_n,T_{n+1}}(t_1,\\ldots,t_n,t)}{f_{T_{n+1}}(t)}\\\\\n&=&\\frac{n!}{t^n}\\mathbb{I}_{0<t_1<\\ldots< t_n<t}(t_1,\\ldots, t_n, t).\n\\end{eqnarray*}\n\\end{enumerate}\n\\end{proof}\nThe fact that the Poisson process is a Lévy process will be useful later on, so here it is.\n\\begin{prop}\nThe following statements are equivalent\n\\begin{enumerate}\n\\item $(N_t)_{t\\geq 0}$ is a Poisson process\n\\item The stochastic process $(N_t)_{t\\geq 0}$ has\n\\begin{itemize}\n\\item[(i)] independent increments, meaning that for $0<t_1\\leq\\ldots\\leq t_n$, the random variables\n$N_{t_1},N_{t_2}-N_{t_1},\\ldots,N_{t_{n}}-N_{t_{n-1}}$\nare independent,\n\\item[(ii)] stationary increments, in the sense that the event frequency distribution over some time period of duration $s>0$ only depends on $s$. Indeed, we have that \n$$\nN_{t+s}-N_t\\sim\\PoissonDist(\\lambda s),\\text{ for }s,t\\geq 0.\n$$\n\\end{itemize}\n\\end{enumerate}\nStochastic processes with independent and stationary increments are called Lévy processes.\n\\end{prop}\n\\begin{proof}\n\\underline{$1 \\Rightarrow 2$}\\\\\nAssume that $(N_t)_{t\\geq0}$ is a Poisson process and let $0<t_1<\\ldots <t_n$ be some times. Consider the following probability \n$$\n\\mathbb{P}\\left(N_{t_1} = j_1, N_{t_2}-N_{t_1} = j_2,\\ldots, N_{t_n}-N_{t_{n-1}} = j_n\\right)\n$$\nsuch that $j_1,\\ldots, j_n\\in \\mathbb{N}$. We can rewrite it as \n$$\n\\mathbb{P}\\left(T_{k_1}\\leq t_1<T_{k_1+1}, T_{k_2}\\leq t_2<T_{k_2+1},\\ldots, T_{k_n} \\leq t_n<T_{k_n+1}\\right),\n$$\nwhere $k_i = j_1 + \\ldots +j_i, i = 1,\\ldots, n$. 
The fact that the Poisson process is a L\\'evy process will be useful later on, so here it is.\n\\begin{prop}\nThe following statements are equivalent:\n\\begin{enumerate}\n\\item $(N_t)_{t\\geq 0}$ is a Poisson process;\n\\item The stochastic process $(N_t)_{t\\geq 0}$ has\n\\begin{itemize}\n\\item[(i)] independent increments, meaning that for $0<t_1\\leq\\ldots\\leq t_n$, the random variables\n$N_{t_1},N_{t_2}-N_{t_1},\\ldots,N_{t_{n}}-N_{t_{n-1}}$\nare independent;\n\\item[(ii)] stationary increments, in the sense that the event frequency distribution over some time period of duration $s>0$ only depends on $s$. Indeed, we have that \n$$\nN_{t+s}-N_t\\sim\\PoissonDist(\\lambda s),\\text{ for }s,t\\geq 0.\n$$\n\\end{itemize}\n\\end{enumerate}\nStochastic processes with independent and stationary increments are called L\\'evy processes.\n\\end{prop}\n\\begin{proof}\n\\underline{$1 \\Rightarrow 2$}\\\\\nAssume that $(N_t)_{t\\geq0}$ is a Poisson process and let $0<t_1<\\ldots <t_n$ be some times. Consider the following probability \n$$\n\\mathbb{P}\\left(N_{t_1} = j_1, N_{t_2}-N_{t_1} = j_2,\\ldots, N_{t_n}-N_{t_{n-1}} = j_n\\right)\n$$\nsuch that $j_1,\\ldots, j_n\\in \\mathbb{N}$. We can rewrite it as \n$$\n\\mathbb{P}\\left(T_{k_1}\\leq t_1<T_{k_1+1}, T_{k_2}\\leq t_2<T_{k_2+1},\\ldots, T_{k_n} \\leq t_n<T_{k_n+1}\\right),\n$$\nwhere $k_i = j_1 + \\ldots +j_i, i = 1,\\ldots, n$. Conditioning with respect to $T_{k_n+1}$ yields\n\\begin{eqnarray*}\n&&\\mathbb{P}\\left(T_{k_1}\\leq t_1<T_{k_1+1}, T_{k_2}\\leq t_2<T_{k_2+1},\\ldots, T_{k_n}\\leq t_n<T_{k_n+1}\\right)\\\\\n&=&\\int_{t_n}^{+\\infty}\\mathbb{P}\\left(T_{k_1}<t_1<T_{k_1+1}, T_{k_2}<t_2<T_{k_2+1},\\ldots, T_{k_n}<t_n|T_{k_n +1}=t\\right)f_{T_{k_n+1}}(t)\\text{d}\\lambda(t)\\\\\n&=& \\int_{t_n}^{+\\infty}\\binom{k_n}{j_1,\\ldots, j_n}\\left(\\frac{t_1}{t}\\right)^{j_1}\\left(\\frac{t_2-t_1}{t}\\right)^{j_2}\\ldots \\left(\\frac{t_n-t_{n-1}}{t}\\right)^{j_n}\\frac{e^{-\\lambda t}t^{k_n}\\lambda^{k_n + 1}}{k_n!}\\text{d}\\lambda(t)\\\\\n&=&\\frac{e^{-\\lambda t_1}\\left(\\lambda t_1\\right)^{j_1}}{j_1!}\\frac{\\left[\\lambda\\left(t_2-t_1\\right)\\right]^{j_2}e^{-\\lambda(t_2 - t_1)}}{j_2!}\\ldots \\frac{\\left[\\lambda\\left(t_n-t_{n-1}\\right)\\right]^{j_n}e^{-\\lambda(t_n - t_{n-1})}}{j_n!}.\n\\end{eqnarray*}\nFrom the second to the third equality we simply require that, among the $k_n$ uniform random variables, $j_1$ of them fall inside $(0,t_1)$, $j_2$ inside $(t_1,t_2)$, and so on.\n\n\n\\noindent \\underline{$2 \\Rightarrow 1$}\\\\\nWe aim at showing that $(T_1,\\ldots, T_n)$ has \\pdf given by\n\\begin{equation}\\label{eq:joint_density_arrival_times}\nf_{T_1,\\ldots, T_n}(t_1,\\ldots, t_n)=\\lambda^n e^{-\\lambda t_n}\\mathbb{I}_{0< t_1<\\ldots<t_n}.\n\\end{equation}\nLet $t_1,\\ldots, t_n$ and $h$ be nonnegative real numbers such that\n$$\nt_1<t_1+h<t_2<\\ldots<t_n<t_n+h.\n$$ \nWe have\n\\begin{eqnarray*}\n&&\\mathbb{P}(t_1<T_1<t_1+h,\\ldots,t_n<T_n<t_n+h)\\\\\n&=&\\mathbb{P}(N_{t_1}=0,N_{t_1+h}-N_{t_1}=1,\\ldots,N_{t_n}-N_{t_{n-1}+h}=0,N_{t_n+h}-N_{t_n}\\geq 1)\\\\\n&=&e^{-\\lambda t_1}e^{-\\lambda h}\\lambda h e^{-\\lambda [t_2-(t_1+h)]}e^{-\\lambda h}\\lambda h\\ldots e^{-\\lambda [t_n-(t_{n-1}+h)]}[1-e^{-\\lambda h}] \\\\\n&=&e^{-\\lambda t_n}\\lambda^{n-1}h^{n-1}[1-e^{-\\lambda h}].\n\\end{eqnarray*}\nDivide by $h^{n}$ and let $h$ go to $0$ to get \\eqref{eq:joint_density_arrival_times}. 
After applying a change of variable (the reciprocal of that used in the proof of \\cref{prop:erlang_dist}) to recover the joint distribution of $(\\Delta^{T}_1, \\ldots, \\Delta^{T}_n)$, we see that the latter is precisely that of an \\iid sample of size $n$ of exponential random variables.\n\\end{proof}\nLast but not least, we establish the order statistic property of the Poisson process.\n\\begin{prop}\nConditionally on the event $\\{N_t=n\\}$, the jump times $T_1,\\ldots,T_n$ have the same distribution as the order statistics of an \\iid sample of $n$ uniform random variables on $(0,t)$, namely it holds that\n$$\n[T_1,\\ldots,T_n|N_t=n]\\sim \\text{OS}_{n}([0,t]).\n$$\n\\end{prop}\n\\begin{proof}\nFor any measurable and bounded $g:\\mathbb{R}^n\\rightarrow\\mathbb{R}_+$, we have\n\\begin{eqnarray*}\n&&\\mathbb{E}\\left[g(T_1,\\ldots, T_n)|N_t = n\\right]\\\\\n&=&\\frac{\\mathbb{E}\\left[g(T_1,\\ldots, T_n)\\mathbb{I}_{N_t = n}\\right]}{\\mathbb{P}(N_t = n)}\\\\\n&=&\\frac{\\mathbb{E}\\left[g(T_1,\\ldots, T_n)\\mathbb{I}_{T_n \\leq t}\\mathbb{I}_{T_{n+1} > t}\\right]}{\\mathbb{P}(N_t = n)}\\\\\n&=&\\frac{n!}{e^{-\\lambda t}(\\lambda t)^n}\\int_{\\mathbb{R}^{n+1}}g(t_1,\\ldots, t_n)\\mathbb{I}_{t_n\\leq t<t_{n+1}}(t_n,t_{n+1})f_{T_1,\\ldots, T_{n+1}}(t_1,\\ldots, t_{n+1})\\text{d}\\lambda_{n+1}(t_1,\\ldots, t_{n+1})\\\\\n&=&\\frac{n!}{e^{-\\lambda t}(\\lambda t)^n}\\int_{\\mathbb{R}^{n}}\\int_{t}^{+\\infty}g(t_1,\\ldots, t_n)\\mathbb{I}_{0<t_1<\\ldots< t_n\\leq t}(t_1,\\ldots,t_{n})\\lambda^{n+1} e^{-\\lambda t_{n+1}}\\text{d}\\lambda_{n+1}(t_1,\\ldots, t_{n+1})\\\\\n&=&\\frac{n!}{e^{-\\lambda t}(\\lambda t)^n}\\int_{\\mathbb{R}^{n}}g(t_1,\\ldots, t_n)\\mathbb{I}_{0<t_1<\\ldots< t_n\\leq t}(t_1,\\ldots,t_{n})\\lambda^{n} \\text{d}\\lambda_{n}(t_1,\\ldots,t_n)\\e^{-\\lambda t}\\\\\n&=&\\int_{\\mathbb{R}^{n}}g(t_1,\\ldots, t_n)\\frac{n!}{t^n}\\mathbb{I}_{0<t_1<\\ldots< t_n\\leq t}(t_1,\\ldots,t_{n})\\text{d}\\lambda_{n}(t_1,\\ldots,t_{n}).\n\\end{eqnarray*}\n\\end{proof}\n
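The order statistic property can be visualized with a short simulation: generate many Poisson paths on $(0,t)$, keep those with exactly $n$ jumps, and compare the average jump times with the means $jt/(n+1)$ of uniform order statistics. A minimal Python sketch, assuming \\texttt{numpy}; all names and parameter values are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nlam, t, n = 6.0, 1.0, 4\n\njump_times = []\nfor _ in range(100_000):\n    T = rng.exponential(scale=1/lam, size=50).cumsum()  # ample arrivals\n    T = T[T <= t]\n    if T.size == n:                # condition on the event {N_t = n}\n        jump_times.append(T)\njump_times = np.array(jump_times)\n\nprint(jump_times.mean(axis=0))                     # empirical E[T_j | N_t = n]\nprint([j * t / (n + 1) for j in range(1, n + 1)])  # order statistic means\n\\end{verbatim}\n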
\\noindent The order statistic property of the Poisson process will be useful to solve the first hitting time problem arising later on. To fully benefit from this nice property, we need to introduce the Appell and Abel-Gontcharov polynomials.\\\\\n\n\\noindent Let $U=\\{u_i, \\, i\\geq 1\\}$ be a non-decreasing sequence of real numbers. To $U$ is attached a (unique) family of Appell polynomials of degree $n$ in $x$, $\\{A_{n}(x\\vert U),\\, n\\geq 0\\}$, defined as follows.\n\\begin{definition} \nStarting with $A_0(x\\vert U)=1$, the $A_n(x\\vert U)$'s satisfy the differential equations\n\\begin{equation}\\label{eq:DifferentialEquationAppellPolynomial}\nA_{n}^{(1)}(x|U)=nA_{n-1}(x|U),  \n\\end{equation}\nwith the border conditions\n\\begin{equation}\\label{eq:Border AppellPolynomial}\nA_{n}(u_{n}|U)=0, \\quad n\\geq 1.\n\\end{equation}\nSo, each $A_n$ has the integral representation\n\\begin{equation}\\label{eq:IntegralRepresentationAppellPolynomials}\nA_{n}(x|U)=n!\\int_{u_{n}}^{x}\\left[\\int_{u_{n-1}}^{y_{n}}\\text{d}y_{n-1} \\ldots \\int_{u_{1}}^{y_{2}}\\text{d}y_{1}\\right]\\text{d}y_{n}, \\quad n\\geq 1.\n\\end{equation}\n\\end{definition}\n\nIn parallel, to $U$ is attached a (unique) family of Abel-Gontcharov (A-G) polynomials of degree $n$ in $x$, $\\{G_{n}(x\\vert U),\\, n\\geq 0\\}$.\n\\begin{definition} \nStarting with $G_0(x\\vert U)=1$, the $G_n(x\\vert U)$'s satisfy the differential equations\n\\begin{equation}\\label{eq:DifferentialEquationAGPolynomials}\nG_n^{(1)}(x|U)=nG_{n-1}(x|{\\cal E}U), \n\\end{equation}\nwhere ${\\cal E}U$ is the shifted family $\\{u_{i+1}, \\, i\\geq 1\\}$, and with the border conditions\n\\begin{equation}\\label{eq:Border AGPolynomial}\nG_{n}(u_{1}|U)=0, \\quad  n\\geq 1.\n\\end{equation}\nSo, each $G_n$ has the integral representation\n\\begin{equation}\\label{eq:IntegralRepresentationAGPolynomials}\nG_{n}(x|U)=n!\\int_{u_{1}}^{x}\\left[\\int_{u_{2}}^{y_{1}}\\text{d}y_{2} \\ldots \\int_{u_{n}}^{y_{n-1}}\\text{d}y_{n}\\right]\\text{d}y_{1}, \\quad n\\geq 1.\n\\end{equation}\n\\end{definition}\n\\noindent The Appell and A-G polynomials are closely related through the identity\n\\begin{equation}\\label{eq:link_AG_Appell}\nG_{n}(x|u_{1},\\ldots,u_{n})=A_{n}(x|u_{n},\\ldots,u_{1}), \\quad n\\geq 1.\n\\end{equation}\nThe two families (i.e. for all $n\\geq 0$), however, are distinct and enjoy different properties. From \\eqref{eq:IntegralRepresentationAppellPolynomials} and \\eqref{eq:IntegralRepresentationAGPolynomials}, it is clear that the polynomials $A_n$ and $G_n$ can be interpreted in terms of the joint distribution of the vector $(U_{(1)},\\ldots,U_{(n)})$ of uniform order statistics. \n\\begin{prop}\nFor $0\\leq u_1 \\leq \\ldots \\leq u_{n} \\leq x \\leq 1$,\n\\begin{equation}\\label{eq:ProbabilisticInterpretationAppellPolynomials}\nP[U_{(1)} > u_1, \\ldots, U_{(n)} > u_{n}\\, \\mbox{ and }\\, U_{(n)}\\leq x]= \nA_n(x \\vert u_1, \\ldots, u_{n}), \\quad n\\geq1.\n\\end{equation}\nFor $0\\leq x \\leq u_1 \\leq \\ldots \\leq u_{n} \\leq 1$,\n\\begin{equation}\\label{eq:ProbabilisticInterpretationAGPolynomials}\nP[U_{(1)} \\leq u_1, \\ldots, U_{(n)} \\leq u_{n}\\, \\mbox{ and }\\, U_{(1)}> x]\n= (-1)^n \\, G_n(x \\vert u_1, \\ldots, u_{n}), \\quad n\\geq 1.\n\\end{equation}\n\\end{prop}\nThe representations \\eqref{eq:ProbabilisticInterpretationAppellPolynomials} and \\eqref{eq:ProbabilisticInterpretationAGPolynomials} will play a key role for solving first-hitting problems that involve Poisson processes. Numerically, it will be necessary to evaluate some special values of the polynomials. To this end, it is convenient to use the following recursive relations. 
\n\\begin{prop} \nThe Appell polynomials are computed through the expansion\n\\begin{equation}\\label{eq:appell_polynomials_recursion_1}\nA_{n}(x|U)=\\sum_{k=0}^{n}\\binom{n}{k}A_{n-k}(0|U)x^{k}, \\quad n\\geq 1,\n\\end{equation}\nwhere the $A_{n}(0|U)$'s are obtained recursively from \n\\begin{equation}\\label{eq:appell_polynomials_recursion_2}\nA_{n}(0|U)=-\\sum_{k=1}^{n} \\binom{n}{k} A_{n-k}(0|U)u_{n}^{k}, \\quad n\\geq1.\n\\end{equation}\nThe A-G polynomials are computed through the recursion\n\\begin{equation}\\label{eq:recursion_AG_polynomials}\nG_{n}(x|U)=x^{n}-\\sum_{k=0}^{n-1} \\binom{n}{k} u_{k+1}^{n-k} G_{k}(x|U), \\quad n\\geq 1.\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nThe Maclaurin expansion of $A_n(x|U)$ gives \\eqref{eq:appell_polynomials_recursion_1} as\n$$\nA_n(x|U) = \\sum_{k = 0}^n\\frac{A_n^{(k)}(0|U)}{k!}x^k = \\sum_{k = 0}^n\\binom{n}{k}A_{n-k}(0|U)x^k.\n$$\nEvaluation at $x = u_n$ then provides \\eqref{eq:appell_polynomials_recursion_2}. Regarding \\eqref{eq:recursion_AG_polynomials}, first note that \n$$\nG_n^{(k)}(u_{k+1}|U) = \\begin{cases}\nn!,&\\text{ if }k = n,\\\\\n0,&\\text{ otherwise.}\n\\end{cases}\n$$\nAny polynomial $R(x)$ of degree $n$ can therefore be written as \n$$\nR(x)=\\sum_{k = 0}^n\\frac{R^{(k)}(u_{k+1})}{k!}G_k(x|U).\n$$\nBy expanding $x^n$ in this way one gets \\eqref{eq:recursion_AG_polynomials}.\n\\end{proof}
\nHereafter are a couple of useful properties.\n\\begin{prop}\\label{prop:Appell_AG_polynomials_properties}\n\\begin{enumerate}\n\\item For any $a\\in \\mathbb{R}$ and $b\\neq 0$, it holds that\n\\begin{equation}\nA_{n}(x|a+bU) = b^n A_n\\left((x-a)/b \\, |U\\right), \\quad n\\geq 1, \\label{eq:LinearTransform}\n\\end{equation}\nwith the same identity for $G_n$.\n\\item We have \n\\begin{eqnarray}\n&&A_{n}(x|1,\\ldots,n) = x^{n-1}(x-n), \\label{eq:ExplicitExpressionIntAppellPolynomials}\\\\\n&&G_{n}(x|0,\\ldots,n-1)= x(x-n)^{n-1}.    \\label{eq:ExplicitExpressionIntAGPolynomials}\n\\end{eqnarray}\n\\item Let $\\{X_{n}\\text{ , }n\\geq1\\}$ be a sequence of \\iid nonnegative random variables, with partial sums $S_{n}=\\sum_{k=1}^{n}X_{k}$ and $S_0=0$. Then, for $n\\geq 1$,\n\\begin{eqnarray}\n&&\\mathbb{E} \\left[A_{n}(x|S_{1},\\ldots,S_{n})\\vert S_{n}\\right] = x^{n-1}(x-S_{n}), \\label{eq:ExplicitExpressionPartialSumsAppellPolynomials}\\\\\n&&\\mathbb{E}\\left[G_{n}(x|S_{0},\\ldots,S_{n-1})\\vert S_{n}\\right] = x(x-S_{n})^{n-1}.    \\label{eq:ExplicitExpressionPartialSumsAGPolynomials}\n\\end{eqnarray}\n\\end{enumerate} \n\\end{prop}\n\\begin{proof}\n\\begin{enumerate}\n  \\item Let us use induction on $n\\geq1$. Take $n = 1$: we have\n  $$\n  A_1(x|a + bu_1) = \\int_{a+bu_1}^{x} A_1^{(1)}(y|a+bu_1)\\text{d}y = x - a - bu_1 \n  $$\n  and \n   $$\n  A_1(x|u_1) = x-u_1,\n  $$\n  so the property holds for $n = 1$. Assume that it holds true for some $n$ and consider $n+1$. We have\n  \\begin{eqnarray*}\n  A_{n+1}(x|a+bU) &=& \\int^{x}_{a+b u_{n+1}}A_{n+1}^{(1)}(y|a+bU)\\text{d}y\\\\\n  &=& (n+1)\\int^{x}_{a+b u_{n+1}}A_{n}(y|a+bU)\\text{d}y\\\\\n  &=&b^n(n+1)\\int^{x}_{a+b u_{n+1}}A_{n}\\left(\\frac{y-a}{b}\\Big|U\\right)\\text{d}y\\\\\n  &=&b^{n+1}(n+1)\\int^{\\frac{x-a}{b}}_{u_{n+1}}A_{n}\\left(z\\Big|U\\right)\\text{d}z\\\\\n  &=&b^{n+1}\\int^{\\frac{x-a}{b}}_{u_{n+1}}A_{n+1}^{(1)}\\left(z\\Big|U\\right)\\text{d}z\\\\\n  &=&b^{n+1}A_{n+1}\\left(\\frac{x-a}{b}\\Big|U\\right),\n  \\end{eqnarray*}\n  so the property holds for $n+1$. There is no need to repeat the argument for the $G_n(.|U)$'s thanks to \\eqref{eq:link_AG_Appell}.\n\\item Again induction on $n\\geq1$. Take $n = 1$: we have\n$$\nA_1(x|1) = x-1.\n$$\nAssume that the result holds true for some $n$ and consider $n+1$. We have \n\\begin{eqnarray*}\nA_{n+1}(x|1,\\ldots, n,n+1) &=& \\int_{n+1}^{x}A_{n+1}^{(1)}(y|1,2,\\ldots, n+1)\\text{d}y\\\\\n&=& (n+1)\\int_{n+1}^{x}A_{n}(y|1,2,\\ldots, n)\\text{d}y\\\\\n&=& (n+1)\\int_{n+1}^{x}y^{n-1}(y-n)\\text{d}y\\\\\n&=& x^n[x - (n + 1)].\n\\end{eqnarray*}\nFor identity \\eqref{eq:ExplicitExpressionIntAGPolynomials}, write \n\\begin{eqnarray*}\nG_{n}(x|0,\\ldots, n-1)&=&G_{n}(x-n|-n ,\\ldots,-1)\\\\\n&=&(-1)^nG_{n}(n-x|n,\\ldots, 1)\\\\\n&=&(-1)^nA_{n}(n-x|1,\\ldots, n)\\\\\n&=&(-1)^n(n-x)^{n-1}(-x)\\\\\n&=&x(x-n)^{n-1}.\n\\end{eqnarray*}\n\\item Again induction on $n\\geq1$. Take $n = 1$: we have\n$$\n\\mathbb{E}[A_1(x|S_1)|S_1] = x-S_1.\n$$\nAssume that the result holds true for some $n$ and consider $n+1$. We have \n\\begin{eqnarray*}\n\\mathbb{E}\\left[A_{n+1}(x|S_1,\\ldots, S_n,S_{n+1})|S_{n+1}\\right]&=&\\mathbb{E}\\left[\\int_{S_{n+1}}^x A_{n+1}^{(1)}(y|S_1,\\ldots, S_n,S_{n+1})\\text{d}y\\Big|S_{n+1}\\right]\\\\\n&=&(n+1)\\mathbb{E}\\left[\\int_{S_{n+1}}^x A_{n}(y|S_1,\\ldots, S_n)\\text{d}y\\Big|S_{n+1}\\right]\\\\\n&=&(n+1)\\int_{S_{n+1}}^x\\mathbb{E}\\left[A_{n}(y|S_1,\\ldots, S_n)|S_{n+1}\\right]\\text{d}y\\text{ (Fubini)}\\\\\n&=&(n+1)\\int_{S_{n+1}}^x\\mathbb{E}\\left\\{\\mathbb{E}\\left[A_{n}(y|S_1,\\ldots, S_n)|S_n,S_{n+1}\\right]|S_{n+1}\\right\\}\\text{d}y\\\\\n&=&(n+1)\\int_{S_{n+1}}^x\\mathbb{E}\\left\\{\\mathbb{E}\\left[A_{n}(y|S_1,\\ldots, S_n)|S_n\\right]|S_{n+1}\\right\\}\\text{d}y\\\\\n&=&(n+1)\\int_{S_{n+1}}^x\\mathbb{E}\\left[y^{n-1}(y-S_n)|S_{n+1}\\right]\\text{d}y\\\\\n&=&(n+1)\\int_{S_{n+1}}^x y^n-\\frac{n}{n+1}S_{n+1}y^{n-1} \\text{d}y\\\\\n&=&x^n(x-S_{n+1}),\n\\end{eqnarray*}\nwhere the penultimate equality uses $\\mathbb{E}(S_n|S_{n+1})=\\frac{n}{n+1}S_{n+1}$, which follows from the exchangeability of the $X_k$'s. For identity \\eqref{eq:ExplicitExpressionPartialSumsAGPolynomials}, write \n\\begin{eqnarray*}\n\\mathbb{E}\\left[G_{n}(x|S_0,\\ldots, S_{n-1})|S_{n}\\right]&=&\\mathbb{E}\\left[G_{n}(x-S_n|S_0 - S_n,\\ldots, S_{n-1}-S_n)|S_{n}\\right]\\\\\n&=&(-1)^n\\mathbb{E}\\left[G_{n}(S_n-x|S_n,\\ldots, S_{n}-S_{n-1})|S_{n}\\right]\\\\\n&=&(-1)^n\\mathbb{E}\\left[A_{n}(S_n-x|S_{n}-S_{n-1},\\ldots, S_n)|S_{n}\\right]\\\\\n&=&(-1)^n(S_n-x -S_n)(S_n-x)^{n-1}\\\\\n&=&x(x-S_n)^{n-1},\n\\end{eqnarray*}\nwhere the penultimate equality uses \\eqref{eq:ExplicitExpressionPartialSumsAppellPolynomials} together with the fact that the reversed partial sums $(S_n-S_{n-1},\\ldots,S_n-S_1,S_n)$ are distributed as $(S_1,\\ldots,S_n)$ given $S_n$.\n\\end{enumerate}\n\\end{proof}\n
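The recursions \\eqref{eq:appell_polynomials_recursion_1}--\\eqref{eq:recursion_AG_polynomials} are straightforward to implement, and the closed forms \\eqref{eq:ExplicitExpressionIntAppellPolynomials} and \\eqref{eq:ExplicitExpressionIntAGPolynomials} provide a convenient sanity check. Here is a minimal Python sketch; the function names are ours.\n\\begin{verbatim}\nfrom math import comb\n\ndef appell(n, x, U):\n    # A_n(x | u_1,...,u_n): build A_m(0|U) recursively, then expand in x\n    a0 = [1.0]\n    for m in range(1, n + 1):\n        a0.append(-sum(comb(m, k) * a0[m - k] * U[m - 1]**k\n                       for k in range(1, m + 1)))\n    return sum(comb(n, k) * a0[n - k] * x**k for k in range(n + 1))\n\ndef abel_gontcharov(n, x, U):\n    # G_n(x | u_1,...,u_n) via the recursion, with u_{k+1} = U[k]\n    G = [1.0]\n    for m in range(1, n + 1):\n        G.append(x**m - sum(comb(m, k) * U[k]**(m - k) * G[k]\n                            for k in range(m)))\n    return G[n]\n\nn, x = 5, 7.0\nprint(appell(n, x, range(1, n + 1)), x**(n - 1) * (x - n))    # both 4802.0\nprint(abel_gontcharov(n, x, range(n)), x * (x - n)**(n - 1))  # both 112.0\n\\end{verbatim}\n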
\n\\subsubsection{A tiny bit of insurance risk theory}\\label{sssec:insurance_risk_theory}\nIn what follows we will use classical results from insurance risk theory, so we first provide some context. The financial reserve of a nonlife insurance company can be modelled by a continuous-time stochastic process $(R_t)_{t\\geq0}$ defined as \n\\begin{equation}\\label{eq:risk_process}\nR_t = u + P_t - L_t,\\text{ }t\\geq0,\n\\end{equation}\nwhere \n\\begin{itemize}\n  \\item $R_0 = u>0$ is the initial reserve,\n  \\item $(P_t)_{t\\geq0}$ is the premium income process,\n  \\item $(L_t)_{t\\geq0}$ is the liability of the insurance company, that is, the sum of all the compensations paid to the policyholders.\n\\end{itemize}\nThe classical model assumes that premiums are collected linearly in time at some rate $c$ per time unit. Namely, we have \n$$\nP_t = c\\cdot t.\n$$\nThe number of claims is governed by a counting process $(N_t)_{t\\geq0}$, usually a standard Poisson process with intensity $\\lambda$. To each claim is associated a randomly sized compensation distributed as $U$. The claim sizes $U_1,U_2,\\ldots$ form a sequence of \\iid random variables independent of $(N_t)_{t\\geq0}$. We therefore have \n$$\nL_t = \\sum_{k = 1}^{N_t}U_k.\n$$\nThe name of the game is to compute the ruin probabilities \n\\begin{equation}\\label{eq:ruin_probabilities}\n\\psi(u,T) = \\mathbb{P}(\\tau_u \\leq T),\\text{ and }\\psi(u) = \\mathbb{P}(\\tau_u < \\infty),\n\\end{equation} \nwhere $\\tau_u = \\inf\\{t\\geq0\\text{ ; }R_t \\leq 0\\}$ is the ruin time. This first hitting time problem is illustrated in \\cref{fig:ruin_time_CL}.\n\\begin{figure} \n\\begin{center}\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-0.5,0) -- (5.5,0) coordinate[label = {below:$t$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,4) coordinate[label = {right:$R_t$}] (ymax);\n   %Initial reserves\n  \\draw (0,2) node[black,left] {$u$} node{};\n % % %Reserve trajectory\n  \\draw[thick, blue,-] (0,2) -- (2,3) node[pos=0.5, above] {};\n  \\draw[thick, dashed, blue] (2,3) -- (2,1) node[pos=0.5, left] {\\color{black}$U_1$};\n  \\draw[thick, blue] (2,1) -- (3,1.5) node[pos=0.5, above] {};\n  \\draw[thick, dashed, blue] (3,1.5) -- (3, 0.5) node[pos=0.5, left] {\\color{black}$U_2$};\n  \\draw[thick, blue] (3,0.5) -- (5, 1.5) node[pos=0.5, above] {};\n   \\draw[thick, dashed, blue] (5,1.5) -- (5, -0.5) node[pos=0.5,above left] {\\color{black}$U_3$};\n\n  %Claim arrival times \n  \\draw (2,0) node[black,below] {$T_1$} node{ \\color{black}$\\bullet$};\n  \\draw (3,0) node[black,below] {$T_2$} node{ \\color{black}$\\bullet$};\n  \\draw (5,0) node[black,below left] {$\\tau_u$} node{ \\color{black}$\\bullet$};\n\\end{tikzpicture}\n\\end{center}\n\\caption{Illustration of the classical insurance ruin problem.}\n\\label{fig:ruin_time_CL}\n\\end{figure}\nThis enables one to select an initial reserve level $u$ so that the ruin probability is low enough. The premium rate is usually set to compensate the losses due to claims with \n$$\n\\mathbb{E}(P_t) = (1+\\eta)\\mathbb{E}(L_t)\\text{ or equivalently } c = (1+\\eta)\\lambda\\mathbb{E}(U).\n$$\nDefine the claim surplus process as\n$$\nS_t = u - R_t = L_t-P_t,\\text{ }t\\geq 0.\n$$\nThe following result is the counterpart of \\cref{lem:wald_martingale_RW} for L\\'evy processes.\n\\begin{prop}\\label{prop:wald_martingale_levy}\nIf $(S_t)_{t\\geq0}$ is a L\\'evy process then, for any $\\theta\\in \\mathbb{R}$ such that $\\kappa(\\theta)=\\log\\mathbb{E}\\left(e^{\\theta S_1}\\right)<\\infty$, the process\n$$\nM_t = \\exp\\left[\\theta S_t-t\\kappa(\\theta)\\right]\\text{, }t\\geq0,\n$$\nis a martingale.\n\\begin{proof}\nNote that for a L\\'evy process, we have that \n\\[\n\\mathbb{E}\\left(e^{\\theta S_t}\\right) =\\left[\\mathbb{E}\\left(e^{\\theta S_1}\\right)\\right]^t. 
\n\\]\nIt follows that \n\\begin{eqnarray*}\n\\mathbb{E}(M_t|\\mathcal{F}_s) &=&\\mathbb{E}\\left\\{\\exp\\left[\\theta S_t-t\\kappa(\\theta)\\right]|\\mathcal{F}_s\\right\\} \\\\\n&=&\\e^{-t\\kappa(\\theta)}\\mathbb{E}\\left\\{\\exp\\left[\\theta (S_t-S_s) + \\theta S_s\\right]|\\mathcal{F}_s\\right\\}\\\\\n&=&\\e^{-t\\kappa(\\theta)}\\e^{\\theta S_s}\\mathbb{E}\\left(\\e^{\\theta (S_t-S_s)}\\right)\\\\\n&=&\\e^{-t\\kappa(\\theta)}\\e^{\\theta S_s}\\e^{(t-s)\\kappa(\\theta)}\\\\\n&=&\\e^{\\theta S_s -s\\kappa(\\theta)}.\n\\end{eqnarray*}\n\\end{proof}\n\\end{prop}\nThe infinite time horizon ruin probability may be evaluated using the following result.\n\\begin{prop}\\label{prop:ruin_proba_levy}\nIf $S_t\\overset{\\textbf{a.s.}}{\\rightarrow} -\\infty$, and there exists $\\gamma>0$ such that $\\left(e^{\\gamma S_t}\\right)_{t\\geq0}$ is a martingale, then\n$$\n\\mathbb{P}(\\tau_u<\\infty)=\\frac{e^{-\\gamma u}}{\\mathbb{E}\\left[e^{\\gamma \\xi(u)}|\\tau_u<\\infty\\right]},\n$$\nwhere $\\xi(u)=S_{\\tau_u}-u$ denotes the deficit at ruin.\n\\end{prop}\n\\begin{proof}\nWe apply the optional stopping theorem to $(e^{\\gamma S_t})_{t\\geq0}$ at time $T\\land \\tau_u$ for some $T>0$ to get \n\\begin{eqnarray*}\n1 = \\mathbb{E}\\left(\\e^{\\gamma S_0}\\right)&=& \\mathbb{E}\\left(\\e^{\\gamma S_{T\\land \\tau_u}}\\right)\\\\  \n&=& \\mathbb{E}\\left(\\e^{\\gamma S_{\\tau_u}}\\mathbb{I}_{\\tau_u<T} + \\e^{\\gamma S_{T}}\\mathbb{I}_{\\tau_u\\geq T}\\right).\\\\\n\\end{eqnarray*} \nLetting $T\\rightarrow\\infty$ yields\n\\begin{eqnarray*}\n1 &=&\\mathbb{E}\\left(\\e^{\\gamma S_{\\tau_u}}\\mathbb{I}_{\\tau_u<\\infty}\\right)\\\\\n&=&\\e^{\\gamma u}\\mathbb{E}\\left(\\e^{\\gamma \\xi(u)}|\\tau_u<\\infty\\right)\\psi(u),\n\\end{eqnarray*} \nthanks to $S_t\\rightarrow -\\infty$, monotone and dominated convergence ($\\e^{\\gamma S_{T}}<\\e^{\\gamma u}$ on $\\{\\tau_u\\geq T\\}$).\n\\end{proof}\nWe will use \\cref{prop:ruin_proba_levy} to derive the double spending probability within a continuous time framework in the next sections.\n\\subsubsection{Double spending probability}\\label{sssec:double_spending_counting_process_dsp}\nThe \\textit{Proof-of-Work} protocol implies a steady block arrival, every $10$ minutes for the bitcoin blockchain. Each trial (of the network) for mining a block is independent of the others and succeeds with a very small probability; the overall number of successes is therefore binomially distributed and very well approximated by a\nPoisson random variable. This justifies the Poisson process assumption made in the sequel to model the block arrival.\\\\\n\n\\noindent Denote by $(z+ N_t)_{t\\geq 0}$ and $(M_t)_{t\\geq 0}$ the number of blocks found by the honest miners and the attackers, respectively. Double spending occurs at time \n$$\n\\tau_z = \\inf\\{t\\geq0\\text{ ; }z+N_t = M_t\\}.\n$$\nAssume that $(N_t)_{t\\geq 0}$ and $(M_t)_{t\\geq 0}$ are Poisson processes with intensities $\\lambda$ and $\\mu$ such that $\\lambda>\\mu$. 
Let $(T_i)_{i\\geq0}$ and $(S_i)_{i\\geq0}$ be the arrival times of $(N_t)_{t\\geq 0}$ and $(M_t)_{t\\geq 0}$, respectively.\nThe first-hitting problem along with its notation is illustrated in \\cref{fig:ds_poisson_process_illustrated}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-0.5,0) -- (5.5,0) coordinate[label = {above:$t$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,5) coordinate[label = {right:$n$}] (ymax);\n %Length of the honest chain\n  \\draw[thick,blue,-] (0,3) -- (2,3) node[pos=0.5, above] {} ;\n  \\draw[thick,blue] (2,3) -- (2,4) node[pos=0.5, above] {};\n  \\draw[thick,blue] (2,4) -- (5.5,4) node[pos=0.5, right] {};\n  % %Length of the Malicious chain\n  \\draw[very thick,dashed,red,-] (0,0) -- (0.75,0) node[pos=0.5, above] {} ;\n  \\draw[very thick,dashed,red] (0.75,0) -- (0.75,1) node[pos=0.5, right] {};\n  \\draw[very thick,dashed,red] (0.75,1) -- (1.25,1) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,red] (1.25,1) -- (1.25,2) node[pos=0.5, right] {};\n  \\draw[very thick,dashed,red] (1.25,2) -- (2.5,2) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,red] (2.5,2) -- (2.5,3) node[pos=0.5, right] {};\n  \\draw[very thick,dashed,red] (2.5,3) -- (5,3) node[pos=0.5, right] {};\n  \\draw[very thick,dashed,red] (5,3) -- (5,4) node[pos=0.5, above] {};\n  \\draw[very thick,dashed,red] (5,4) -- (5.5,4) node[pos=0.5, above] {};\n  %Jump Times of the malicious chain\n  \\draw (0.75,0) node[red,below] {$S_1$} node{ \\color{red}$\\bullet$};\n  \\draw (1.25,0) node[red,below] {$S_2$} node{ \\color{red}$\\bullet$};\n  \\draw (2.5,0) node[red,below] {$S_3$} node{ \\color{red}$\\bullet$};\n  \\draw (5,0) node[black,below] {$S_4=\\tau_z$} node{ \\color{black}$\\bullet$};\n  % %Jump Times of the honest chain\n  \\draw (2,0) node[blue,below] {$T_1$} node{ \\color{blue}$\\bullet$};\n  % %Aggregated Capital gains\n  \\draw (0,1) node[black,left] {$1$} node{ \\color{black}$-$};\n  \\draw (0,2) node[black,left] {$2$} node{ \\color{black}$-$};\n  \\draw (0,3) node[black,left] {$z$} node{};\n  \\draw (0,4) node[black,left] {$4$} node{ \\color{black}$-$};\n  % %Ruin time = First-meeting time\n  % \\draw (7,0) node[black,below] {$\\tau_z$} node{ \\color{black}$\\times$};\n  % \\draw[dotted,black] (7,3) -- (7,0);\n\\end{tikzpicture}\n\\end{center}\n\\caption{Illustration of the double spending problem within a continuous time framework.}\n\\label{fig:ds_poisson_process_illustrated}\n\\end{figure}\nThe double spending probability is given by the following result.\n\\begin{theo}\\label{theo:ds_probability_Poisson_process}\nThe double spending probability is given by \n$$\n\\mathbb{P}(\\tau_z < \\infty) = \\left(\\frac{\\mu}{\\lambda}\\right)^z.\n$$\n\\end{theo}\n\\begin{proof}\nWe make an analogy with \\cref{sssec:insurance_risk_theory}, where $z$ is the initial reserve, $N_t$ is the premium income and $M_t$ is the liability of an insurance company. Define the claim surplus process as \n$$\nS_t = M_t - N_t,\\text{ }t\\geq0.\n$$\nNote that $(S_t)_{t\\geq0}$ is a L\\'evy process such that $S_t\\rightarrow -\\infty$ because $\\lambda >\\mu$. 
The equation \n$$\n\\kappa(s) =\\log\\mathbb{E}\\left(e^{s S_1}\\right) = 0\n$$\nis equivalent to \n$$\n\\mu \\e^s + \\lambda \\e^{-s}- (\\mu+\\lambda) =0,\n$$\nwhich has a unique positive solution given by \n$$\n\\gamma = \\log\\left(\\frac{\\lambda}{\\mu}\\right).\n$$\nIt follows from \\cref{prop:wald_martingale_levy} that $(\\e^{\\gamma S_t})_{t\\geq 0}$ is a martingale, and applying \\cref{prop:ruin_proba_levy} yields the double spending probability \n$$\n\\mathbb{P}(\\tau_z<\\infty) = \\left(\\frac{\\mu}{\\lambda}\\right)^z,\n$$\nsince $\\xi(z)=0$, ruin occurring exactly at level $z$ because $(S_t)_{t\\geq0}$ has unit jumps.\n\\end{proof}\n
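This formula is easy to corroborate by simulation. Since only the order of block discoveries matters, each new block (of either chain) is an attacker block with probability $\\mu/(\\lambda+\\mu)$, independently of the past, so it suffices to track the embedded random walk. A minimal Python sketch, assuming \\texttt{numpy}; the parameters and the truncation level are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(3)\nlam, mu, z = 6.0, 2.0, 3\np = mu / (lam + mu)              # next block is an attacker block w.p. p\n\n# the positive drift of the deficit makes late catch-ups negligible,\n# so truncating at max_steps induces only a tiny bias\nn_sims, max_steps = 50_000, 400\nsteps = np.where(rng.random((n_sims, max_steps)) < p, -1, 1)\ndeficit = z + np.cumsum(steps, axis=1)    # z + N - M at block arrivals\nprint((deficit.min(axis=1) <= 0).mean())  # Monte Carlo estimate\nprint((mu / lam) ** z)                    # exact value, 1/27 ~ 0.037\n\\end{verbatim}\n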
\\subsubsection{Double spending time}\\label{sssec:double_spending_counting_process_dst}\nJust like in \\cref{sssec:double_spending_rw_dst}, we are interested in the time required to complete a double spending attack. Accounting for the cost of electricity, we can approximate the operational cost per time unit by \n$$\nc = \\pi_W\\cdot W\\cdot q,\n$$\nwhere \n\\begin{itemize}\n  \\item $\\pi_W$ is the electricity price per kWh,\n  \\item $W$ is the electricity consumed by the network per time unit,\n  \\item  $q$ is the attacker's share of the network hashpower.\n\\end{itemize}\nThe double spending cost then reduces to $\\tau_z\\cdot c$. The following result provides a formula for the \\pdf of $\\tau_z$.\n\\begin{theo}\nIf $(N_t)_{t\\geq0}$ is a Poisson process and $(M_t)_{t\\geq0}$ is a renewal process then the \\pdf of $\\tau_z$ is given by\n\\begin{equation}\\label{eq:DS_time_pdf}\nf_{\\tau_z}(t)=\\mathbb{E}\\left[\\frac{z}{z+N_t}f_{S_{N_t+z}}(t)\\right],\\text{ for }t\\geq0.\n\\end{equation}\n\\end{theo}\n\\begin{proof}\nThe event $\\{\\tau_z\\in(t,t+\\text{d}t)\\}$, for $t\\geq0$, corresponds to the exact time at which the double-spending attack is successful as the malicious chain takes over the honest one. At time $t=0$, the honest chain is ahead by $z\\geq 1$ blocks. Assuming that later, at time $t>0$, the honest miners have managed to add $N_t=n\\in\\mathbb{N}$ blocks to the chain, the malicious chain must be of length $M_{t^{-}}=n+z-1$ at some time $t^{-}<t$ and jump to the level $n+z$ exactly at $t$. Conditioning over the values of $\\{N_t\\text{ , }t\\geq0\\}$ yields\n\\begin{equation}\\label{eq:Tauz}\n\\{\\tau_z\\in(t,t+\\text{d}t)\\}=\\bigcup_{n=0}^{+\\infty}\\{\\tau_z\\in(t,t+\\text{d}t)\\}\\cap\\{N_t=n\\}.\n\\end{equation}\nIn the case where $N_t=0$, the only requirement is that the $z^{\\text{th}}$ jump of $(M_t)_{t\\geq0}$ occurs at time $t$. It then follows that\n\\begin{equation}\\label{eq:TauzNt0}\n\\{\\tau_z\\in(t,t+\\text{d}t)\\}\\cap\\{N_t=0\\}=\\{S_{z}\\in(t,t+\\text{d}t)\\}\\cap\\{N_t=0\\},\n\\end{equation}\nand consequently\n\\begin{equation}\\label{eq:PDFTauzNt0}\nf_{\\tau_z|N_t}(t|0)=f_{S_z}(t),\\text{ }t\\geq0,\n\\end{equation}\nwhere $f_{\\tau_z|N_t}(t|0)$ denotes the conditional \\pdf of $\\tau_z$ given that $N_t=0$. On the set $\\{N_t\\geq1\\}$, one needs to make sure that $\\{M_t\\text{ , }t\\geq0\\}$ behaves properly by constraining its jump times so that it does not reach $N_s+z$ at any time $s<t$ and performs the $(n+z)^{\\text{th}}$ jump at $t$. Hence, it holds that\n\\begin{equation*}\n\\{\\tau_z\\in(t,t+\\text{d}t)\\}\\cap\\{N_t\\geq1\\}=\\bigcup_{n=1}^{+\\infty}\\bigcap_{k=1}^{n}\\{T_k\\leq S_{z+k-1}\\}\\cap\\{S_{z+n}\\in(t,t+\\text{d}t)\\}\\cap\\{N_t=n\\}.\n\\end{equation*}\nApplying the law of total probability yields\n\\begin{eqnarray}\n&&\\mathbb{P}\\left(\\{\\tau_z\\in(t,t+\\text{d}t)\\}\\cap\\{N_t\\geq1\\}\\right)\\nonumber\\\\\n&=&\\sum_{n=1}^{+\\infty}\\mathbb{P}\\left[\\bigcap_{k=1}^{n}\\{T_k\\leq S_{z+k-1}\\}\\cap\\{S_{z+n}\\in(t,t+\\text{d}t)\\}\\Big\\rvert N_t=n\\right]\\mathbb{P}[N_t=n].\\label{eq:LawOfTotalProbabilityTh1}\n\\end{eqnarray}\nBy virtue of the order statistic property, the successive jump times $(T_1,\\ldots,T_n)$ are distributed as the order statistics $(V_{1:n},\\ldots,V_{n:n})$ of a sample of $n$ \\iid random variables uniformly distributed on $(0,t)$. The conditional probability in \\eqref{eq:LawOfTotalProbabilityTh1} may be rewritten as\n\\begin{eqnarray}\n&&\\mathbb{P}\\left[\\bigcap_{k=1}^{n}\\{V_{k:n}\\leq S_{z+k-1}\\}\\cap\\{S_{z+n}\\in(t,t+\\text{d}t)\\}\\right]\\nonumber\\\\\n&=&\\mathbb{P}\\left[\\bigcap_{k=1}^{n}\\{U_{k:n}\\leq S_{z+k-1}/ t\\}\\cap\\{S_{z+n}\\in(t,t+\\text{d}t)\\}\\right]\\nonumber\\\\\n&=&\\mathbb{P}\\left[\\bigcap_{k=1}^{n}\\{U_{k:n}\\leq S_{z+k-1}/t\\}\\big\\rvert S_{z+n}\\in(t,t+\\text{d}t)\\right]\\mathbb{P}[S_{z+n}\\in(t,t+\\text{d}t)]\\nonumber\\\\\n&=&\\mathbb{E}\\left\\{(-1)^{n}G_{n}[0\\big\\rvert S_{z}/t,\\ldots, S_{z+n-1} / t] \\big\\rvert S_{z+n}\\in(t,t+\\text{d}t)\\right\\}\\mathbb{P}[S_{z+n}\\in(t,t+\\text{d}t)]\\nonumber\\\\\n&=&(-1/t)^{n}\\mathbb{E}\\left\\{G_{n}[0\\big\\rvert S_{z},\\ldots, S_{z+n-1}] \\big\\rvert S_{z+n}\\in(t,t+\\text{d}t)\\right\\}\\mathbb{P}[S_{z+n}\\in(t,t+\\text{d}t)],\\nonumber\\\\\n&&\\label{eq:AGPolynomialsProof}\n\\end{eqnarray}\nwhere $U_{1:n},\\ldots,U_{n:n}$ denote the order statistics of a sample of $n$ \\iid uniform random variables on $(0,1)$, and $G_n(.|.)$ corresponds to the sequence of A-G polynomials defined in \\cref{sssec:exp_distribution}. The last equality in \\eqref{eq:AGPolynomialsProof} follows from the first identity of \\cref{prop:Appell_AG_polynomials_properties}.\nInserting \\eqref{eq:AGPolynomialsProof} into \\eqref{eq:LawOfTotalProbabilityTh1} and letting $\\text{d}t$ be small enough yields\n\\begin{eqnarray}\nf_{\\tau_z|N_t\\geq1}(t)&=&\\sum_{n=1}^{+\\infty}(-1/t)^{n}\\mathbb{E}\\left\\{G_{n}[0\\big\\rvert S_{z},\\ldots,S_{z+n-1}] \\big\\rvert S_{z+n}=t\\right\\}\\nonumber\\\\\n&\\times&f_{S_{z+n}}(t)\\mathbb{P}[N_t=n]\\label{eq:cond_pdf_tau_N_greater_than_1}.\n\\end{eqnarray}\nWe now simplify the conditional expectation of the A-G polynomials. 
We have that\n\\begin{eqnarray} \n\\mathbb{E}\\left\\{G_{n}(0\\big\\rvert S_{z},\\ldots,S_{z+n-1}) \\rvert S_{z+n}=t\\right\\}&=& \\mathbb{E}\\left\\{G_{n}(-S_z\\rvert 0,\\ldots,S_{z+n-1}-S_z) \\rvert S_{z+n}=t\\right\\} \\nonumber\\\\\n&=& \\mathbb{E}\\left\\{\\mathbb{E}\\left[G_{n}(-S_z\\rvert 0,\\ldots,S_{z+n-1}-S_z)\\rvert S_{z}, S_{z+n}\\right] \\rvert S_{z+n}=t\\right\\} \\nonumber\\\\\n&=& \\mathbb{E}\\left\\{(-S_z)(-S_z-S_{n+z}+S_z)^{n-1} \\rvert S_{z+n}=t\\right\\} \\nonumber\\\\\n&=& (-1)^n\\mathbb{E}\\left\\{S_z S_{n+z}^{n-1} \\rvert S_{z+n}=t\\right\\} \\nonumber\\\\\n&=& (-1)^nt^{n-1}\\frac{z}{z+n}t =(-t)^n\\frac{z}{z+n}, \\label{eq:AG_polynomial_simplification}\n\\end{eqnarray}\nwhere the third equality uses \\eqref{eq:ExplicitExpressionPartialSumsAGPolynomials} applied to the partial sums $(S_{z+k}-S_z)_{k\\geq0}$, and the last one uses $\\mathbb{E}(S_z\\rvert S_{z+n}=t)=\\frac{z}{z+n}t$. Inserting \\eqref{eq:AG_polynomial_simplification} into \\eqref{eq:cond_pdf_tau_N_greater_than_1} yields\n\\begin{equation*}\nf_{\\tau_z|N_t\\geq1}(t)=\\sum_{n=1}^{+\\infty}\\frac{z}{z+n}f_{S_{z+n}}(t)\\mathbb{P}(N_t=n).\n\\end{equation*}\nThe final step consists in adding the case $N_t=0$ to the sum, therefore writing\n\\begin{equation*}\nf_{\\tau_z}(t)=\\sum_{n=0}^{+\\infty}\\frac{z}{z+n}f_{S_{z+n}}(t)\\mathbb{P}(N_t=n),\n\\end{equation*}\nwhich is the announced result \\eqref{eq:DS_time_pdf}.  \n\\end{proof}\n\\begin{exercise}\nAssume that $(M_t)_{t\\geq0}$ is a Poisson process with intensity $\\mu$, and compute \n\\[\n\\mathbb{P}(\\tau_z<\\infty) = \\int_0^{\\infty}f_{\\tau_z}(t)\\text{d}t.\n\\]\n\\end{exercise}\n\\begin{remark}\nJust like in the random walk framework of \\cref{ssec:double_spending_rw}, the number $z$ is actually a random variable defined as \n$$\nZ = (\\alpha - M_{T_\\alpha})_+,\n$$\nwhere $T_\\alpha$ is the arrival time of the $\\alpha^{\\text{th}}$ block in the main branch of the blockchain. If $(M_t)_{t\\geq0}$ is a Poisson process with intensity $\\mu$ then $M_{T_\\alpha}$ is mixed Poisson distributed with parameter $\\mu\\cdot T_{\\alpha}$. We have that \n\\begin{eqnarray*}\n\\mathbb{P}(M_{T_\\alpha} = m) &=& \\int_{0}^{\\infty}\\frac{\\e^{-\\mu t}(\\mu t)^m}{m!}\\frac{\\e^{-\\lambda t}t^{\\alpha-1}\\lambda ^\\alpha}{(\\alpha -1)!}\\text{d}t\\\\\n&=&\\frac{\\mu^m\\lambda^{\\alpha}}{m!(\\alpha-1)!}\\int_{0}^{\\infty}\\e^{-t(\\mu + \\lambda)}t^{m+\\alpha-1}\\text{d}t\\\\\n&=&\\binom{m+\\alpha-1}{m}\\left(\\frac{\\lambda}{\\lambda+\\mu}\\right)^{\\alpha}\\left(\\frac{\\mu}{\\lambda+\\mu}\\right)^{m}.\n\\end{eqnarray*}\nThe number of blocks found by the attacker until the vendor's transaction gets $\\alpha$ confirmations is therefore governed by a negative binomial distribution.\n\\end{remark}\n\\noindent For further results on the distribution of $\\tau_z$ under different sets of assumptions, the reader is referred to \\citet{Goffard2019}.\n\n\\section{Block withholding in PoW}\\label{sec:blockwithholding}\nThe constant operational cost and the infrequent capital gains make mining blocks on a \\PoW blockchain a risky activity. One way to tackle this problem is to engage in a block-withholding strategy. The goal is to build up a stock of blocks and release them publicly at well-chosen times so as to fork the chain and make a part of the mining activities of competitors worthless, depending on which branch is followed up in the longer run. We focus here on a simplified version\\footnote{For the sake of tractability.} of the original block-withholding strategy called selfish mining and described in \\citet{Eyal2014}. \\\\\n\n\\noindent We start by introducing a risk model to follow the financial reserves of a miner over time. 
The ruin probability and the expected profit given that ruin did not occur play the role of risk and performance indicators, which will allow us to compare the solo mining strategy to the selfish mining strategy.\n\n\\subsection{Ruin and expected profit of a miner}\\label{ssec:selfish_mining}\nA miner, referred to as Sam, starts operating with an initial surplus level $u>0$. Mining blocks generates steady operational costs of amount $c>0$ per unit of time, most notably due to electricity consumption. The entire network of miners appends blocks to the blockchain at an exponential rate $\\lambda$, which means that the length of the blockchain over time is governed by a Poisson process $(N_t)_{t\\geq0}$ with intensity $\\lambda$. We assume that Sam owns a fraction $q\\in(0,1/2)$ of the overall computing power, which implies that each block is published by Sam with probability $q$. The number of blocks found by Sam, and therefore the number of rewards of size $b>0$ he collects up to time $t\\geq 0$, is a thinned Poisson process $(\\tilde{N}_t)_{t\\geq0}$ with intensity $\\lambda$ and thinning parameter $q$, that is, a Poisson process with intensity $q\\lambda$. Sam's surplus $R_t$ at time $t$ is then given by  \n\\begin{equation}\\label{eq:surplus_protocol}\nR_t = u + b\\cdot\\tilde{N}_t - c\\cdot t,\\qquad t\\ge 0.\n\\end{equation}\nThe stochastic process $(R_t)_{t\\geq0}$ is referred to as the dual risk process in risk theory, see for instance \\citet{Avanzi2007}. Our goal is to measure the profitability of mining blocks on the blockchain, but subject to a ruin constraint. In this context, the time of ruin $\\tau_u = \\inf\\{t\\geq0:\\text{ }R_t = 0\\}$ is defined as the first time when the surplus reaches zero, i.e.\\ the miner runs out of money and cannot continue to operate. \n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tikzpicture}\n  %Origin and axis\n  \\coordinate (O) at (0,0);\n  \\draw[->] (-0.5,0) -- (5.5,0) coordinate[label = {below:$t$}] (xmax);\n  \\draw[->] (0,-0.5) -- (0,4) coordinate[label = {right:$R_t$}] (ymax);\n   %Initial reserves\n  \\draw (0,3) node[black,left] {$u$} node{};\n % % %Surplus trajectory\n  \\draw[thick, blue,-] (0,3) -- (2,1) node[pos=0.5, above] {};\n  \\draw[thick, dashed, blue] (2,1) -- (2,2) node[pos=0.5, above left] {\\color{black}$b$};\n  \\draw[thick, blue] (2,2) -- (3.5,0.5) node[pos=0.5, above] {};\n  \\draw[thick, dashed, blue] (3.5,0.5) -- (3.5, 1.5) node[pos=0.5, above left] {\\color{black}$b$};\n  \\draw[thick, blue] (3.5,1.5) -- (5, 0) node[pos=0.5, above] {};\n\n  %Block finding Times \n  \\draw (2,0) node[black,below] {$T_1$} node{ \\color{black}$\\bullet$};\n  \\draw (3.5,0) node[black,below] {$T_2$} node{ \\color{black}$\\bullet$};\n  \\draw (5,0) node[black,below] {$\\tau_u$} node{ \\color{black}$\\bullet$};\n\\end{tikzpicture}\n\\end{center}\n\\caption{Ruin time in the miner risk model.}\n\\label{fig:ruin_time_miner}\n\\end{figure}\nThe riskiness of the mining business may be assessed via the finite-time and infinite-time horizon ruin probabilities defined as \n\\begin{equation}\\label{eq:ruin_probabilities_miner}\n\\psi(u,t) = \\mathbb{P}(\\tau_u\\leq t),\\text{ and }\\psi(u)=\\mathbb{P}(\\tau_u<\\infty), \n\\end{equation}\nrespectively. The rewards earned through mining must in expectation exceed the operational cost per time unit. The latter condition translates into $q\\lambda b > c$, and is referred to as the net profit condition in standard risk theory, see \\citet{Asmussen_2010}. In particular, this implies $\\psi(u)<1$, i.e.\\ ruin does not occur almost surely. 
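\nBefore deriving closed-form expressions, it is instructive to estimate these quantities by brute-force simulation. The sketch below draws reward times at rate $q\\lambda$, detects ruin (which, since the surplus only decreases between rewards, can only happen just before a reward is collected), and compares the estimate with the exponential form $e^{-\\theta^\\ast u}$ established in \\cref{prop:ruin_proba_and_value_func} below. It is a minimal Python sketch, assuming \\texttt{numpy} and \\texttt{scipy}; the parameter values are toy values satisfying the net profit condition.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\nrng = np.random.default_rng(4)\nu, b, c, q, lam = 5.0, 2.0, 1.0, 0.5, 1.5   # q*lam*b = 1.5 > c = 1\n\n# ruin occurs iff c*T_k >= u + b*(k-1) for some k >= 1, where T_k is the\n# k-th reward time (rate q*lam); truncating at n_jumps is harmless\n# thanks to the positive drift of the surplus\nn_sims, n_jumps = 10_000, 1_000\nT = rng.exponential(scale=1/(q*lam), size=(n_sims, n_jumps)).cumsum(axis=1)\nk = np.arange(1, n_jumps + 1)\nprint((c * T >= u + b * (k - 1)).any(axis=1).mean())\n\n# compare with exp(-theta* u), theta* > 0 root of c*x + q*lam*(exp(-b*x) - 1)\nf = lambda x: c*x + q*lam*(np.exp(-b*x) - 1.0)\ntheta_star = brentq(f, 1e-8, 10.0)\nprint(np.exp(-theta_star * u))              # ~ 0.11\n\\end{verbatim}\n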
The profitability for a time horizon $t>0$ is now measured by\n\\begin{equation}\\label{eq:value_function_finite_time}\nV(u,t) = \\mathbb{E}(R_t\\cdot \\mathbb{I}_{\\tau_u>t}).\n\\end{equation}\nCorrespondingly, $V(u,t)$ is the expected surplus level at time $t$, where in the case of ruin up to $t$ this surplus is $0$ (i.e., due to the ruin event the surplus is frozen at $0$). In terms of a conditional expectation, one can equivalently express $V(u,t)$ as the probability to still be alive at $t$ times the expected value of the surplus at that time given that ruin has not occurred:\n\\[V(u,t)= (1-\\psi(u,t))\\cdot \\mathbb{E}(R_t\\vert {\\tau_u>t}).\\]\n The following result provides formulas for the ruin probabilities and $V(u,t)$ in this setting.\n\\begin{prop}\\label{prop:ruin_proba_and_value_func} \n\n\\begin{enumerate}\n\\item For any $u\\ge 0$, the infinite-time ruin probability is given by\n\\begin{equation}\\label{eq:infinite_time_ruin_proba}\n\\psi(u) =e^{-\\theta^\\ast u},\n\\end{equation}\nwhere $\\theta^\\ast$ is the positive solution in $\\theta$ of the equation \n\\begin{equation}\\label{eq:cl_equation}\n{c}\\,\\theta + q\\lambda \\, (e^{-b\\, \\theta }-1)=0.\n\\end{equation}\n\\item For any $u\\ge 0$, the finite-time ruin probability is given by \n\\begin{equation}\\label{eq:finite_time_ruin_proba}\n\\psi(u,t) = \\sum_{n = 0}^{\\infty}\\frac{u}{u+bn}\\;\\mathbb{P}\\left[\\tilde{N}_{\\frac{u+bn}{c}} = n\\right]\\mathbb{I}_{\\left\\{t>\\frac{u+bn}{c}\\right\\}}. \n\\end{equation}\n\\item For any $u\\ge 0$, the expected surplus at time $t$ in case ruin has not occurred until then can be written as \n\\begin{equation}\\label{eq:value_function_finite_time_prop}\nV(u,t) = \\mathbb{E}\\left[\\left(u+b\\tilde{N}_t - ct\\right)_+(-1)^{\\tilde{N}_t}G_{\\tilde{N}_t}\\left(0\\;\\Big\\rvert \\left\\{\\frac{u}{ct}\\land 1,\\ldots, \\frac{u+(\\tilde{N}_t-1)b}{ct}\\land 1\\right\\}\\right) \\right],\n\\end{equation}\nwhere $(.)_+$ denotes the positive part, $\\land$ stands for the minimum operator and\\\\ $\\left(G_n(\\cdot\\rvert\\{\\ldots\\})\\right)_{n\\in\\mathbb{N}}$ is the sequence of Abel-Gontcharov polynomials defined in \\cref{sssec:exp_distribution}. \n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n\\begin{enumerate}\n  \\item Define the process \n  \\[\n  S_t = ct -  b\\cdot\\tilde{N}_t,\\text{ }t\\geq0.\n  \\]\n  It is L\\'evy and such that $S_t\\rightarrow -\\infty$. Note that \n  \\[\n  \\kappa(\\theta) = \\log\\left\\{\\mathbb{E}\\left[\\e^{\\theta(c-b\\tilde{N}_1)}\\right]\\right\\} = \\theta c+q\\lambda(\\e^{-b\\theta}-1).\n  \\]\n  The equation $\\kappa(\\theta) = 0$ has only one positive solution $\\theta^{\\ast}$. The process $(\\e^{\\theta^{\\ast}S_t})_{t\\geq0}$ is a martingale, and we then apply \\cref{prop:ruin_proba_levy} to get \n  $$\n  \\psi(u) = \\e^{-\\theta^{\\ast}u},\n  $$\n  noting that $\\xi(u) = 0$ as ruin occurs exactly.\n  \\item The ruin time $\\tau_u$ may be rewritten as \n\\begin{equation*}\n\\tau_u = \\inf\\left\\{t\\geq 0\\text{ ; }\\tilde{N}_t = {ct}/{b} - {u}/{b}\\right\\}.\n\\end{equation*}\nNote that ruin can only occur at the specific times \n$$\nt_k = \\frac{u+bk}{c}\\text{, }k \\geq0,\n$$\nwhen the function $t\\mapsto {ct}/{b} - {u}/{b}$ reaches integer levels. For $t>0$, define the set of indices $\\mathcal{I} = \\{k\\geq0\\text{ ; }t_k\\leq t\\}$. 
The finite-time ruin probability can then be written as \n\\begin{equation*}\n\\psi(u,t)=\\sum_{k\\in\\mathcal{I}}\\mathbb{P}(\\tau_u = t_k).\n\\end{equation*}\nDenoting by $T_l$ the $l^{\\text{th}}$ jump time of $(\\tilde{N}_t)_{t\\geq0}$, we have that \n\\begin{eqnarray*}\n\\mathbb{P}(\\tau_u = t_k) &=& \\mathbb{P}\\left(\\bigcap_{l = 1}^{k}\\{T_l\\leq t_{l-1}\\}\\cap\\{T_{k+1}> t_k\\}\\right)\\\\\n&=& \\mathbb{P}\\left(\\bigcap_{l = 1}^{k}\\{T_l\\leq t_{l-1}\\}\\cap\\{T_{k} \\leq t_k\\}\\cap\\{T_{k+1} > t_k\\}\\right)\\\\\n&=& \\mathbb{P}\\left(\\bigcap_{l = 1}^{k}\\{T_l\\leq t_{l-1}\\}\\cap\\{\\tilde{N}_{t_k} = k\\}\\right)\\\\\n&=& \\mathbb{P}\\left(\\bigcap_{l = 1}^{k}\\{T_l\\leq t_{l-1}\\}|\\tilde{N}_{t_k} = k\\right)\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&\\mathbb{P}\\left(\\bigcap_{l = 1}^{k}\\{U_{(l)}\\leq t_{l-1}/t_k\\}\\right)\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&(-1)^kG_{k}(0|t_0/t_k, \\ldots, t_{k-1}/t_k)\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&(-1)^kG_{k}\\left\\{0|u/(u+bk), \\ldots, [u+b(k-1)]/(u+bk)\\right\\}\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&(-1)^k\\left(\\frac{1}{u+bk}\\right)^{k}G_{k}\\left\\{0|u, \\ldots, [u+b(k-1)]\\right\\}\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&(-1)^k\\left(\\frac{b}{u+bk}\\right)^{k}G_{k}\\left\\{-u/b|0, \\ldots, k-1\\right\\}\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=&(-1)^k\\left(\\frac{b}{u+bk}\\right)^{k}\\left(-\\frac{u}{b}\\right)\\left(-\\frac{u}{b}-k\\right)^{k-1}\\mathbb{P}(\\tilde{N}_{t_k} = k)\\\\\n&=& \\frac{u}{u+bk}\\mathbb{P}(\\tilde{N}_{t_k} = k).\n\\end{eqnarray*}\n\\item Using the tower property, we can express the value function \\eqref{eq:value_function_finite_time} as \n\\begin{eqnarray}\nV(u,t) &=&  \\mathbb{E}\\left[\\mathbb{E}\\left(R_t\\mathbb{I}_{\\tau_u>t}\\Big\\rvert \\tilde{N}_t\\right)\\right]\\nonumber\\\\\n&=&\\mathbb{E}\\left[\\left(u+b\\tilde{N}_t - ct\\right)\\mathbb{E}\\left(\\mathbb{I}_{\\tau_u>t}\\Big\\rvert \\tilde{N}_t\\right)\\right]\\nonumber\\\\\n&=&\\mathbb{E}\\left[\\left(u+b\\tilde{N}_t - ct\\right)\\mathbb{E}\\left(\\prod_{k = 1}^{\\tilde{N}_t}\\mathbb{I}_{\\{T_k \\leq t_{k-1}\\land t\\}}\\mathbb{I}_{\\{u+b\\tilde{N}_t-ct>0\\}}\\Big\\rvert \\tilde{N}_t\\right)\\right]\\nonumber\\\\\n&=&\\mathbb{E}\\left[\\left(u+b\\tilde{N}_t - ct\\right)_+\\mathbb{P}\\left(\\bigcap_{k = 1}^{\\tilde{N}_t}\\{T_k \\leq t_{k-1}\\land t\\}\\Big\\rvert \\tilde{N}_t\\right)\\right]\\nonumber\\\\\n&=&\\mathbb{E}\\left[\\left(u+b\\tilde{N}_t - ct\\right)_+\\mathbb{P}\\left(\\bigcap_{k = 1}^{\\tilde{N}_t}\\{U_{k:\\tilde{N}_t} \\leq \\frac{t_{k-1}}{t}\\land 1\\}\\right)\\right],\n\\end{eqnarray}\nwhere $U_{1:n},\\ldots, U_{n:n}$ denote the order statistics of $n$ \\iid standard uniform random variables. Using the interpretation of Abel-Gontcharov polynomials as joint probabilities of uniform order statistics then yields \\eqref{eq:value_function_finite_time_prop}.\n\\end{enumerate}\n\\end{proof}\n\\noindent The reader who wants to learn more on the use of Appell and Abel-Gontcharov polynomials to solve first passage problems involving point processes is referred to \\citet{Goffard2017}. The main issue with formula \\eqref{eq:value_function_finite_time_prop} is its lack of tractability: it is not a closed-form expression because of the infinite series hidden in the expectation over $\\tilde{N}_t$. Note also that the evaluation of the higher-order A-G polynomials can result in numerical instabilities when using the recurrence relation. 
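\nThe finite-time ruin probability \\eqref{eq:finite_time_ruin_proba} is, in contrast, straightforward to evaluate: for a fixed $t$, the indicator truncates the series to finitely many terms. A minimal Python sketch, assuming \\texttt{numpy} and \\texttt{scipy}, with illustrative parameter values.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import poisson\n\ndef finite_time_ruin(u, t, b, c, q, lam):\n    # sum over n with t_n = (u + b*n)/c <= t; N-tilde_{t_n} ~ Poisson(q*lam*t_n)\n    n_max = int(np.floor((c * t - u) / b))\n    if n_max < 0:\n        return 0.0          # ruin cannot happen before time u/c\n    n = np.arange(n_max + 1)\n    t_n = (u + b * n) / c\n    return float(np.sum(u / (u + b * n) * poisson.pmf(n, q * lam * t_n)))\n\nprint(finite_time_ruin(u=5.0, t=50.0, b=2.0, c=1.0, q=0.5, lam=1.5))\n\\end{verbatim}\n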
The workaround is to replace the fixed time horizon $t$ by an independent exponential random time horizon $T$ with mean $t$, that is, $T\\sim \\text{Exp}(1/t)$. We hence consider the ruin probability and expected profit at this random time horizon, defined by\n\\begin{equation}\\label{eq:value_function_exp_time_horizon}\n\\widehat{\\psi}(u,t):= \\mathbb{E}[\\psi(u,T)] \\text{ and }\\widehat{V}(u,t):= \\mathbb{E}[V(u,T)] = \\mathbb{E}(R_T\\mathbb{I}_{\\tau_u>T}).\n\\end{equation} \nSimple expressions for these quantities are given in the following result.\n\\begin{theo}\\label{theo:psi_V_T_solo_mining}\nFor any $u\\geq0$, we have\n\\begin{equation*}\n\\widehat{\\psi}(u,t) = e^{\\rho^\\ast u},\n\\end{equation*}\nand \n\\begin{equation*}\n\\widehat{V}(u,t) = u+(q\\lambda b-c)t\\left(1-e^{\\rho^\\ast u }\\right),\n\\end{equation*}\nwhere $\\rho^\\ast$ is the negative solution of the equation\n\\begin{equation}\\label{eq:equation_rho}\n-c\\rho + q\\lambda(e^{b\\rho}-1) = 1/t.\n\\end{equation}\nThe solution $\\rho^\\ast$ of \\eqref{eq:equation_rho} is given by \n\\begin{equation*}\n  \\rho^{\\ast}=-\\frac{q\\lambda t+1}{ct}\n  -\\frac{1}{b} \\,{\\rm W} \\left[-\\frac{q\\lambda\n    \\,b}{c}\\,{e^{-b\\,\\left(\\frac{q\\lambda t+1}{ct}\\right)}}\n  \\right],\n  \\end{equation*}\n  where $W(.)$ denotes the Lambert $W$ function, which satisfies\n  $$\n  W(z)\\e^{W(z)} = z,\\text{ for }z\\in\\mathbb{C}.\n  $$\n\\end{theo}\n\\begin{proof}\nLet $0<h<u/c$, so that ruin cannot occur in the interval $(0,h)$. We distinguish three cases:\n\\begin{itemize}\n  \\item[(i)] $T>h$ and there is no block discovery in the interval $(0,h)$,\n  \\item[(ii)] $T<h$ and there is no block discovery in the interval $(0,T)$,\n  \\item[(iii)] there is a block discovery before time $T$ and in the interval $(0,h)$. \n\\end{itemize}\nAll the other events have negligible probabilities when letting $h\\rightarrow0$.\nLet $T_1$ be the arrival time of the first block. We have that  \n\\begin{equation*}\n  \\widehat{\\psi}(u,t)=e^{-h(1/t + q\\lambda)}\\,\\widehat{\\psi}(u-ch,t) +\\int\\limits_0^h q\\lambda\\, e^{-s(1/t + q\\lambda)}\\,\\widehat{\\psi}(u-cs+b,t)ds,\n  \\end{equation*}\nand \n\\begin{eqnarray*}\n  \\widehat{V}(u,t)& =&e^{-h(1/t + q\\lambda)}\\,\\widehat{V}(u-ch,t)+\\int\\limits_0^h\\frac1t\\, e^{-s(1/t + q\\lambda)}\\,(u-cs)ds\\\\\n  &&\\quad+\\int\\limits_0^h q\\lambda\\, e^{-s(1/t + q\\lambda)}\\,\\widehat{V}(u-cs+b,t)ds.\n  \\end{eqnarray*}\nNow we take the derivative with respect to $h$ and set $h=0$ to obtain\n\\begin{equation}\\label{eq:ODE_psi}\nc\\widehat{\\psi}'(u,t) + \\left(\\frac{1}{t} +  q\\lambda\\right)\\widehat{\\psi}(u,t) - q\\lambda \\widehat{\\psi}(u+b,t)=0,\n\\end{equation}\n and\n\\begin{equation}\\label{eq:ODE_V}\nc\\widehat{V}'(u,t) + \\left(\\frac{1}{t} +  q\\lambda\\right)\\widehat{V}(u,t) - q\\lambda \\widehat{V}(u+b,t) - \\frac{u}{t} =0,\n\\end{equation}\nwhere the derivatives are taken with respect to the first argument. Equations \\eqref{eq:ODE_psi} and \\eqref{eq:ODE_V} are advanced functional differential equations, and \\eqref{eq:ODE_psi} is actually the homogeneous counterpart of \\eqref{eq:ODE_V}. For \\eqref{eq:ODE_psi} consider the form \n\\begin{equation}\\label{eq:potential_solution_psi}\n\\widehat{\\psi}(u,t) = A_1e^{\\rho u } + B_1,\\text{ }u \\ge 0. \n\\end{equation}\nSubstituting in \\eqref{eq:ODE_psi}, together with the boundary condition $\\widehat{\\psi}(0,t) = 1$, yields\n\\begin{equation*}\n\\begin{cases}\n0&=ct\\rho + \\left(1+q\\lambda t\\right)-q\\lambda te^{\\rho b}, \\\\\n0&=B_1, \\\\\n1&= A_1+B_1,\n\\end{cases}\n\\end{equation*}\nwhere $A_1, B_1$ and $\\rho$ are constants to be determined. 
We have that $B_1 = 0$ and $A_1= 1$. \nThe equation\n\\begin{equation*}\nct\\rho + \\left(1+q\\lambda t\\right)-q\\lambda te^{\\rho b} = 0\n\\end{equation*}\nhas two solutions on the real line, one negative and one positive. To ensure that $\\underset{u\\rightarrow \\infty}{\\lim}\\widehat{\\psi}(u,t) = 0$, we must take the negative solution, which we denote by $\\rho^\\ast$. We finally get\n\\[\n\\widehat{\\psi}(u,t) = \\e^{\\rho^{\\ast} u}.\n\\]\nNow for \\eqref{eq:ODE_V} consider the form\n\\begin{equation}\\label{eq:potential_solution_V}\n\\widehat{V}(u,t) = A_2e^{\\rho u }+B_2u + C_2,\\text{ }u \\ge 0, \n\\end{equation}\nwhere $A_2, B_2,C_2$ and $\\rho$ are constants to be determined. Substituting \\eqref{eq:potential_solution_V} in \\eqref{eq:ODE_V} together with the boundary condition yields the system of equations \n\\begin{equation*}\n\\begin{cases}\n0&=ct\\rho + \\left(1+q\\lambda t\\right)-q\\lambda te^{\\rho b}, \\\\\n0&= B_2\\left(1+tq\\lambda\\right)-q\\lambda tB_2 - 1,\\\\\n0&=B_2ct+C_2(1+tq\\lambda) - q\\lambda t B_2b-q\\lambda tC_2, \\\\\n0&=A_2+C_2.\n\\end{cases}\n\\end{equation*}\nWe then have $A_2 = -t(q\\lambda b - c)$, $B_2 = 1$, $C_2 = t(q\\lambda b - c)$. As $A_2<0$, we again have to choose the negative solution $\\rho^\\ast<0$ of \n\\begin{equation*}\nct\\rho + \\left(1+q\\lambda t\\right)-q\\lambda te^{\\rho b} = 0\n\\end{equation*}\nin order to ensure that $\\widehat{V}(u,t)$ remains positive. Substituting $A_2,B_2,C_2$ and $\\rho^{\\ast}$ in \\eqref{eq:potential_solution_V} yields the result.\n\\end{proof}\n\\begin{ex}\nConsider a miner with hashpower $q = 0.1$. We are going to illustrate the difference between taking a fixed time horizon as opposed to an exponentially distributed one. Let the BTC price be $\\$36,303.27$ and the block-finding reward be $\\text{BTC}6.25$. The reward then amounts to $\\$226,895$. Suppose the network consumes $10,913,757.70\\text{ kWh}$ of electricity per hour and that the miner operates in an area where the electricity price is $\\$0.09$ per kWh; the operational cost is then $c = \\$98,223.81$ per hour. The time unit is the hour, so that $\\lambda = 6$. \\cref{fig:rp_V_exp_fixed_time} shows the ruin probability and the expected profit given that ruin did not occur as functions of the initial reserves $u$ for $t = 24$.\n\\begin{figure}[!ht]\n  \\begin{center}\n      \\subfloat[$\\psi(u), \\psi(u,t)$ and $\\widehat{\\psi}(u,t)$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rp_exp_fixed_time}\n      \\label{sub:rp_exp_fixed_time}\n                         }\n                         \\hskip1em\n    \\subfloat[$V(u, t)-u, \\widehat{V}(u,t)-u$ and $\\mathbb{E}(R_t) - u$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/V_exp_fixed_time}\n      \\label{sub:V_exp_fixed_time}\n                         }\n\n    \\caption{Ruin probabilities and expected profit given that ruin did not occur.}\n    \\label{fig:rp_V_exp_fixed_time}\n  \\end{center}\n\\end{figure}\nAn exponential time horizon tends to put more weight on smaller time horizons, which results in a smaller ruin probability (\\cref{sub:rp_exp_fixed_time}) and a higher expected profit (\\cref{sub:V_exp_fixed_time}).\n\\end{ex}\n
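The root $\\rho^\\ast$ of \\cref{theo:psi_V_T_solo_mining} is readily evaluated numerically. Below is a minimal Python sketch reproducing the setting of the example above, assuming \\texttt{numpy} and \\texttt{scipy}; we use the principal branch of the Lambert $W$ function, which yields the negative root for these parameter values (the assertion checks this rather than taking it for granted).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import lambertw\n\nq, lam, b, c, t = 0.1, 6.0, 226_895.0, 98_223.81, 24.0\n\na = (q*lam*t + 1) / (c*t)\nrho = -a - lambertw(-(q*lam*b/c) * np.exp(-b*a), k=0).real / b\nassert rho < 0                   # negative root, here rho ~ -4.2e-06\n\nu = 1e6\npsi_hat = np.exp(rho * u)                              # ruin probability\nV_hat = u + (q*lam*b - c) * t * (1 - np.exp(rho * u))  # expected profit\nprint(psi_hat, V_hat)\n\\end{verbatim}\n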
\n\\subsection{Ruin and expected profit of a selfish miner}\\label{ssec:selfish_mining}\nLet $(N_t)_{t\\geq0}$ be the homogeneous Poisson process with intensity $\\lambda$ that governs the block discovery process of the entire network. As in the previous section, we consider a miner named Sam who owns a share $q\\in(0,1)$ of the computing power, so that a newly found block belongs to Sam with probability $q$.  We keep track of Sam's lead over the honest chain in terms of number of blocks via a Markov jump process\n$$\nX_t=Z_{N_t}, \\text{ }t\\geq0,\n$$ \nwhere $(Z_{k})_{k \\geq 0}$ is a homogeneous Markov chain with finite state space $E = \\{0,1,0^\\ast\\}$, which we now describe. Consider some step $n\\geq0$.\n\\begin{itemize}\n\\item If Sam is not hiding any block, then $Z_n = 0$:\n\\begin{itemize}\n\\item if Sam finds the next block, he stores it in a buffer and $Z_{n+1} = 1$; \n\\item if the other miners discover a block, then Sam's buffer remains empty and $Z_{n+1}=0$. \n\\end{itemize}\nIn both cases, Sam is not collecting any reward.\n\\item If Sam is hiding one block at that time, then $Z_n = 1$:\n\\begin{itemize} \n\\item if he then finds a new block, he broadcasts both blocks immediately (which resets the Markov chain to $Z_{n+1}=0$) and collects two rewards;\n\\item if the others find a block, then Sam also releases his block, leading to a fork situation characterized by $Z_{n+1} = 0^\\ast$. At that moment Sam is not collecting any rewards.\n\\end{itemize}\n\\item If a fork situation is present at that time ($Z_n = 0^{\\ast}$), then \n\\begin{itemize}\n\\item if Sam finds a new block, he appends it to his branch of the chain, collects the reward for two blocks, and $Z_{n+1} = 0$;  \n\\item if the others find a block, then \n\\begin{itemize}\n\\item they append it to Sam's branch with a probability $0\\leq \\gamma \\leq 1$, in which case Sam gets the reward for one block; \n\\item if the block is mined on top of the competing branch, then Sam earns nothing. \n\\end{itemize}\nIn both cases, the number of hidden blocks then becomes $Z_{n+1}=0$.\n\\end{itemize}\n\\end{itemize}\nLet $(\\xi_n)_{n\\geq1}$ be a sequence of \\iid Bernoulli variables with parameter $q$; we have that \n\\begin{equation}\\label{eq:transition_function_selfish}\nZ_n = g\\left[Z_{n-1},\\xi_{n}\\right] = \\begin{cases}\n0,&\\text{ if } Z_{n-1} =0^\\ast,\\\\\n0,&\\text{ if } Z_{n-1} =1\\text{ }\\&\\text{ }\\xi_{n} =1,\\\\\n0,&\\text{ if } Z_{n-1} =0\\text{ }\\&\\text{ }\\xi_{n} =0,\\\\\n0^\\ast,&\\text{ if } Z_{n-1} =1\\text{ }\\&\\text{ }\\xi_{n} =0,\\\\\n1,&\\text{ if } Z_{n-1} =0 \\text{ }\\&\\text{ }\\xi_{n} = 1.\\\\\n\\end{cases}\n\\end{equation} \nThe process $(Z_n)_{n\\geq0}$ is a Markov chain with transition graph provided in \\cref{fig:transition_graph}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=3cm]\n\\tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.8]\n\\node[state]    (1)                     {$0$};\n\\node[state]    (2)[right of=1]         {$1$};\n\\node[state]    (3)[above of=2]         {$0^{\\ast}$};\n\\path\n(1) edge[loop left]     node{$1-q$}        (1)\n    edge[bend left]     node{$q$}          (2)\n(2) edge[bend left]      node{$q$}         (1)\n    edge[bend right]      node{$1-q$}      (3)\n(3) edge[bend right, above]      node{$1$}         (1);\n\\end{tikzpicture}\n\\end{center}\n\\caption{Transition graph of the Markov chain $(Z_k)_{k\\geq0}$ representing the stock of blocks retained by Sam when implementing the simplified selfish mining strategy.}\n\\label{fig:transition_graph}\n\\end{figure} \nThe selfish mining strategy alters the reward collecting process. 
The surplus process of Sam introduced in \\eqref{eq:surplus_protocol} now becomes \n\\begin{equation}\\label{eq:surplus_selfish}\nR_t = u-c\\cdot t+b\\cdot\\sum_{n = 1}^{N_t}f\\left[Z_{n-1},\\xi_n, \\zeta_n\\right],\n\\end{equation}\nwhere the $(\\xi_n)_{n\\ge 1}$ and $(\\zeta_n)_{n\\ge 1}$ are \\iid Bernoulli random variables with parameters $q$ and $\\gamma$, respectively, and \n\\begin{equation}\\label{eq:gain_function_selfish}\nf\\left[Z_{n-1},\\xi_n, \\zeta_{n}\\right] = \\begin{cases}\n0,&\\text{ if } Z_{n-1} =0,\\\\\n0,&\\text{ if } Z_{n-1} =0^\\ast\\text{ }\\&\\text{ }\\xi_{n} =0\\text{ }\\&\\text{ }\\zeta_{n} =0,\\\\\n1,&\\text{ if } Z_{n-1} =0^\\ast\\text{ }\\&\\text{ }\\xi_{n} =0\\text{ }\\&\\text{ }\\zeta_{n} =1,\\\\\n2,&\\text{ if } Z_{n-1} =0^\\ast\\text{ }\\&\\text{ }\\xi_{n} =1,\\\\\n2,&\\text{ if } Z_{n-1} =1\\text{ }\\&\\text{ }\\xi_{n} =1.\\\\\n\\end{cases}\n\\end{equation} \n\\noindent It is interesting to see whether selfish mining is still profitable for Sam if the possibility of ruin is included in the analysis. His average earning per time unit is now given by \n\\begin{equation}\\label{eq:average_earning_selfish}\n\\frac{b}{t}\\,\\mathbb{E}\\left[\\sum_{k = 1}^{N_t}f(Z_{k-1},\\xi_k,\\zeta_k)\\right] - c.\n\\end{equation}\nThis quantity can be determined if we assume that Sam has been mining in a selfish way for quite some time already, so that we can consider the Markov chain to be in stationarity with stationary probabilities\n$$\n\\mathbb{P}(Z = 0)=\\frac{1}{1+2q-q^2},\\text{ }\\mathbb{P}(Z = 1)=\\frac{q}{1+2q-q^2},\\text{ and }\\mathbb{P}(Z = 0^{\\ast})=\\frac{q(1-q)}{1+2q-q^2}.\n$$\nThe quantities $U_k:=f\\left(Z_{k-1},\\xi_k,\\zeta_k\\right),\\text{ }k\\geq1,$ then have a \\pmf $p_U(\\cdot): =\\mathbb{P}(U = \\cdot)$ given by \n$$\np_U(0) =\\frac{1+q(1-q)+q(1-q)^2(1-\\gamma)}{1+2q-q^2},\\text{ }p_U(1)=\\frac{\\gamma q(1-q)^2}{1+2q-q^2},\\text{ and }p_U(2)=\\frac{q^2+q^2(1-q)}{1+2q-q^2},\n$$\nand the net profit condition correspondingly reads\n\\begin{equation*}\nb\\lambda\\frac{\\gamma q(1-q)^2 + 4q^2-2q^3}{1+2q-q^2} - c>0.\n\\end{equation*}\nThe profitability of selfish mining consequently depends on the interplay between the probabilities $q$ and $\\gamma$, and not all values of $q$ and $\\gamma$ will lead to a positive expected profit. 
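\nThe stationary probabilities and the distribution of the per-block reward $U$ are easy to double-check by simulating the chain. A minimal Python sketch, assuming \\texttt{numpy}; the values of $q$ and $\\gamma$ are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(5)\nq, gamma, n_steps = 0.3, 0.5, 1_000_000\n\nz, rewards, visits = 0, 0, {0: 0, 1: 0, 2: 0}  # state 2 codes the fork state 0*\nfor _ in range(n_steps):\n    visits[z] += 1\n    xi = rng.random() < q\n    if z == 0:\n        z, r = (1, 0) if xi else (0, 0)\n    elif z == 1:\n        z, r = (0, 2) if xi else (2, 0)\n    else:\n        r = 2 if xi else (1 if rng.random() < gamma else 0)\n        z = 0\n    rewards += r\n\nd = 1 + 2*q - q**2\nprint(visits[0]/n_steps, 1/d)                  # stationary P(Z = 0)\nprint(visits[1]/n_steps, q/d)                  # stationary P(Z = 1)\nprint(visits[2]/n_steps, q*(1-q)/d)            # stationary P(Z = 0*)\nprint(rewards/n_steps,\n      (gamma*q*(1-q)**2 + 4*q**2 - 2*q**3)/d)  # E[U]\n\\end{verbatim}\n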
Define \n\\begin{equation*}\n\\widehat{\\psi}_z(u,t)\\equiv \\mathbb{E}\\left[\\psi_z(u,T)\\right] = \\mathbb{E}\\left(\\psi(u,T)\\Big \\rvert Z_0 = z\\right)\n\\text{ and }\\widehat{V}_z(u,t)\\equiv \\mathbb{E}[V_z(u,T)] = \\mathbb{E}\\left(R_T\\mathbb{I}_{\\tau_u>T}\\Big \\rvert Z_0 = z\\right).\n\\end{equation*}\n\\begin{theo}\\label{theo:psi_V_selfish}\nFor any $u\\ge 0$, the ruin probability and expected profit of a selfish miner are given by\n\\begin{equation*}\\label{ruin_selfish_app}\n\\widehat{\\psi}_0(u,t) =\n{\\it C_1}\\,{{\\rm e}^{\\rho_1\\,u}}+{{\\rm e}^{\\rho_2\\,u}} \\left[ {\\it C_2}\\,\n\\cos \\left( \\rho_3\\,u \\right) +{\\it C_3}\\,\\sin \\left( \\rho_3\\,u \\right)\\right],\n\\end{equation*}\nand \n\\begin{equation*}\n\\widehat{V}_0(u,t)={\\it A_1}\\,{{\\rm e}^{\\rho_1\\,u}}+{{\\rm e}^{\\rho_2\\,u}} \\left[ {\\it A_2}\\,\n\\cos \\left( \\rho_3\\,u \\right) +{\\it A_3}\\,\\sin \\left( \\rho_3\\,u \\right)\\right] +u+C,\n\\end{equation*}\nfor suitable constants $C_1,C_2,C_3$, $A_1,A_2,A_3$, $C$ and exponents $\\rho_1,\\rho_2,\\rho_3$ depending on the model parameters, whose explicit expressions may be found in \\citet{Hansjoerg2022}.\n\\end{theo}\n\\begin{proof}\nSee \\citet{Hansjoerg2022}.\n\\end{proof}\n\nThe truth is that following the protocol is always more profitable on average; the question is then: why engage in selfish mining at all?\\\\\n\nThree reasons:\n\\begin{enumerate}\n  \\item The relative revenue of a selfish miner can be greater than his fair share.\n  \\begin{itemize}\n    \\item This is pivotal when the number of cryptocurrency units that will be issued is bounded. By mining selfishly over the course of the cryptocurrency minting period, the ultimate share of the selfish miner is greater than that of the honest miners.\n  \\end{itemize}\n  \\item Honest miners waste resources and might therefore quit, making malicious miners more prominent.\n  \\item Selfish mining slows down the pace of block arrivals, leading to a downward adjustment of the cryptopuzzle difficulty.\n\\end{enumerate}\nReason 1 is the one invoked in the paper of \\citet{Eyal2014}. Reason 2 is probably difficult to study, as we would have to model the resilience of honest miners; it likely requires a game-theoretic framework. We will try to illustrate reason 3 numerically in \\cref{ssec:solo_mining_VS_selfish_mining}.\n\n\\subsection{Solo mining versus selfish mining when including a difficulty adjustment}\\label{ssec:solo_mining_VS_selfish_mining}\nLet the time unit be the hour. Since the Bitcoin blockchain protocol is designed to ensure that one block of confirmed transactions is added to the blockchain about every ten minutes, the block arrival intensity in our model is $\\lambda = 6$. The reward $b$ is determined by the number $n_{BTC}$ of bitcoins earned when finding a block and the price $\\pi_{BTC}$ of the bitcoin. For the illustrations in this paper, we use the data of January 1, $2020$, when $n_{BTC}=12.5$ and $\\pi_{BTC} =\\$7,174.74$\\footnote{Source:\\href{https://www.blockchain.com/}{blockchain.com}}, so that the reward amounts to\n$$\nb = n_{BTC}\\times \\pi_{BTC} = \\$89,684.25.\n$$\nWe assume that the operational cost of mining reduces to the electricity consumed when computing hashes. On January 1,  $2020$, the yearly consumption of the network was estimated by the Cambridge Bitcoin Electricity Consumption Index\\footnote{Source:\\href{https://www.cbeci.org/}{CBEI}} to $72.1671$ TWh.\\footnote{The choice of this concrete date for the illustrations in this paper is somewhat arbitrary. 
Choosing another date and estimate of the Bitcoin price will, however, not crucially change the conclusions as long as the reward for finding a block compensates the operational cost.\n} We denote by \n$$\nW = \\frac{72.1671\\times 10^9}{365.25\\times 24}\n$$\nthe electricity consumption of the network expressed in kWh. We let the operational cost $c$ of a given miner be proportional to its share $p\\in(0,1)$ of the network computing power with \n$$\nc = p\\times W \\times \\pi_W,\n$$  \nwhere $\\pi_W$ denotes the price of the electricity where the miner is located, expressed in USD per kWh. Mining (at least on the bitcoin blockchain) boils down to drawing random numbers, referred to as hashes, uniformly inside the set $\\{0,\\ldots, 2^{256}-1\\}$. The block is mined if the computed hash is smaller than some target $T$. Calibrating $T$ helps to maintain a steady flow of blocks in the blockchain. In the bitcoin blockchain, the target $T$ is set so as to ensure that $6$ blocks are generated per hour. The target may be estimated by comparing $2^{256}$ to the number of hashes computed by the network in ten minutes. On January 1, 2020, the network was computing $97.01$\\footnote{Source: \\href{https://www.blockchain.com/}{blockchain.com}} exahashes per second. The number of hashes required on average to mine a block is given by the difficulty, defined as $D = T_{\\max}/T$. Define $H= 97.01 \\times 10^{18} \\times 3600$ as the number of hashes computed per hour by the network. We then set the target so that $H / D = 6$, which is equivalent to \n\\[\nT = \\frac{6\\,T_{\\max}}{H}.\n\\] \nThe difficulty/target is adjusted every $2,016$ blocks by changing the target $T$ to \n$$\nT^\\ast = T\\times \\frac{t^\\ast}{336},\n$$\nwhere $t^\\ast$ is the time (in hours) it took to mine $2,016$ blocks (here $2016/6=336$ hours is the time it should have taken to mine $2,016$ blocks). The difficulty adjustment was studied in \\citet{Bowden2020}, who concluded that the block arrival process is well captured by a non-homogeneous Poisson process. Following their terminology, we refer to the time elapsed between two difficulty adjustments as a \\textit{segment}.\\\\\n\n\\noindent We now want to quantitatively address whether selfish mining is worthwhile when considering the possibly implied adjustment of the cryptopuzzle difficulty. We do so in a simplified setup, where only Sam may switch between selfish mining and following the protocol, whereas everyone else follows the protocol. Concretely, we compute the ruin probability and expected profit of Sam over two segments when\n\\begin{enumerate}\n\\item[(i)]  he is following the protocol during both segments,\n\\item[(ii)] he applies selfish mining during the first segment and resumes following the protocol during the second segment. \n\\end{enumerate}\n\n\\noindent For (i), we compute the expected profit using the result of \\cref{theo:psi_V_T_solo_mining} by setting the average time horizon to $t = 672$ (the number of hours in four weeks). We assume that the arrival intensity of the blocks in the blockchain remains unchanged over the two segments with $\\lambda = 6$, as does the cryptopuzzle difficulty $T$.\\\\\n\n\\noindent For (ii), we proceed as follows. Selfish mining slows down the pace at which the blocks are added to the blockchain, and the number of blocks wasted exactly corresponds to the number of passages of the Markov chain $(Z_n)_{n\\geq0}$ through the state $0^\\ast$. 
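\n\nAs an aside, the calibration just described can be reproduced in a few lines of Python (a sketch of ours; all numbers are taken from the text, except the electricity price, which is purely illustrative):\n\\begin{verbatim}\n# Reward and cost calibration of January 1, 2020 (numbers from the text).\nn_btc, pi_btc = 12.5, 7174.74     # coins per block and bitcoin price\nb = n_btc * pi_btc                # block reward in USD (~ $89,684)\n\nW = 72.1671e9 / (365.25 * 24)     # network consumption in kWh per hour\np, pi_w = 0.2, 0.05               # hashpower share; illustrative price\nc = p * W * pi_w                  # operational cost in USD per hour\n\nH = 97.01e18 * 3600               # hashes computed per hour\nT_max = 2**256\nT = 6 * T_max / H                 # target giving 6 blocks per hour\nD = T_max / T                     # difficulty: hashes per block (= H/6)\n\\end{verbatim}\n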
We know that once stationarity is reached, the probability for $(Z_n)_{n\\geq0}$ to be in state $0^\\ast$ is \n$$\n\\frac{q(1-q)}{1+2q-q^2}.\n$$\nWe therefore approximate the arrival process in this first segment by a homogeneous Poisson process with intensity \n $$\n\\lambda_1 = \\lambda \\times\\left(1- \\frac{q(1-q)}{ 1+2q-q^2}\\right).\n$$ \nWhen blocks are being withheld, the average time required to mine $2,016$ blocks increases from $336$ to $t_1 = 2016/\\lambda_1$. We hence compute the ruin probability and expected surplus over the first segment using \\cref{theo:psi_V_selfish} with time horizon $t_1$. Selfish mining during the first segment then leads to a downward adjustment of the cryptopuzzle difficulty to $T_2 = T\\times{t_1}/{336}$ that will be in force during the second segment. As the miner resumes following the protocol on that second segment, the block arrival process again becomes a homogeneous Poisson process with intensity $\\lambda_2$, to be determined as follows. Let $H$ be the number of hashes computed per hour (hashrate) by the network. The number of blocks mined per hour is then given by \n$$\n\\lambda_2 = \\frac{T_2\\times H}{T_{\\max}}.\n$$\n\nThe miner's ruin probability and surplus over the second segment are now computed using the formulas of \\cref{theo:psi_V_T_solo_mining}, using as initial wealth the miner's expected surplus over the first segment\n\\[\nu_2 = \\widehat{V}_0(u,t_1), \n\\]\na block arrival intensity equal to $\\lambda_2$ and a reduced time horizon equal to $t_2=2016/\\lambda_2$.\\\\  \n\n\\noindent We consider a miner who owns a share $p = 0.2$ of the network computing power. If he follows the protocol, then the net profit condition holds if $\\pi_W<0.065$. If he withholds blocks on the first segment, then the net profit condition frontier depends on the electricity price and the hashpower, provided that his connectivity is fixed, for instance at $q=0.75$. \\cref{fig:nbc_segments} shows the net profit condition frontiers for a miner who mines selfishly on Segment $1$ and resumes following the protocol on Segment $2$. \n\\begin{figure}[!ht]\n  \\begin{center}\n    \\subfloat[Net profit condition frontier on Segment 1.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/nbc_segment1_12a}\n      \\label{sub:nbc_segment1}\n                         }\n                         \\hskip1em\n    \\subfloat[Net profit condition frontier on Segment 2.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/nbc_segment2_12b}\n      \\label{sub:nbc_segment2}\n                         }\n    \\caption{Net profit condition frontier on Segments 1 and 2 for a selfish miner.}\n    \\label{fig:nbc_segments}\n  \\end{center}\n\\end{figure}\n\\noindent In the absence of ruin considerations, selfish mining on Segment $1$ is profitable only if the price of electricity is lower than $0.058$, see \\cref{sub:nbc_segment1}. Only when the selfish miner owns the totality of the hashpower is selfish mining as profitable as following the protocol. On Segment $2$, the profitability is always greater than when following the protocol. The profitability on Segment $2$ holds in our case if the electricity price is lower than $0.074$, see \\cref{sub:nbc_segment2}. It is interesting to note that by increasing the hashpower beyond a certain threshold, we actually lower the profitability during Segment $2$. At higher hashpower levels, the probability of the Markov chain $(Z_n)_{n\\geq 0}$ visiting state $0^\\ast$ becomes small. 
In that case fewer blocks are being wasted, which in turn reduces the downward adjustment of the cryptopuzzle difficulty and hence the profitability during Segment 2.\\\\\n\n\\noindent Figure \\ref{fig:rp_difficulty_adjusment} shows the ruin probability of a selfish miner and of a miner following the protocol as a function of initial wealth for a range of electricity prices. \n\\begin{figure}[!ht]\n  \\begin{center}\n      % \\subfloat[$\\pi_W = 0.04$.]{\n      % \\includegraphics[width=0.45\\textwidth]{Figures/rp_difficulty_adjusment_4}\n      % \\label{sub:rp_difficulty_adjusment_4}\n      %                    }\n                         \\hskip1em\n    \\subfloat[$\\pi_W = 0.05$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rp_difficulty_adjusment_5_13a}\n      \\label{sub:rp_difficulty_adjusment_5}\n                         }\n                         \\hskip1em\n       \\subfloat[$\\pi_W = 0.06$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rp_difficulty_adjusment_6_13b}\n      \\label{sub:rp_difficulty_adjusment_6}\n                         }\n                         \\hskip1em\n    \\subfloat[$\\pi_W = 0.07$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rp_difficulty_adjusment_7_13c}\n      \\label{sub:rp_difficulty_adjusment_7}\n                         }\n                         \\hskip1em\n       \\subfloat[$\\pi_W = 0.08$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rp_difficulty_adjusment_8_13d}\n      \\label{sub:rp_difficulty_adjusment_8}\n                         }\n                         \\hskip1em\n    % \\subfloat[$\\pi_W = 0.09$.]{\n    %   \\includegraphics[width=0.45\\textwidth]{../Figures/rp_difficulty_adjusment_9}\n    %   \\label{sub:rp_difficulty_adjusment_9}\n    %                      }\n    \\caption{Ruin probability over two segments as a function of initial wealth of a miner following the protocol (solid) and a selfish miner (dashed) for various electricity prices, with hashpower $p=0.2$ and connectivity $q=0.75$.}\n    \\label{fig:rp_difficulty_adjusment}\n  \\end{center}\n\\end{figure}\nFrom a ruin probability perspective, the downward difficulty adjustment does not compensate for the risk involved in implementing selfish mining. 
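\n\nTo make the two-segment mechanism concrete, here is a minimal Python sketch (ours, purely illustrative) of the block arrival intensities and time horizons derived above, with $q=0.75$ as in the figures:\n\\begin{verbatim}\n# Two-segment timing under the difficulty adjustment (illustrative).\nlam, q = 6.0, 0.75                  # blocks per hour; connectivity\n\n# Segment 1: selfish mining wastes the blocks orphaned in state 0*.\nwaste = q*(1-q) / (1 + 2*q - q**2)  # stationary probability of 0*\nlam1 = lam * (1 - waste)            # slowed-down block intensity\nt1 = 2016 / lam1                    # segment duration in hours (> 336)\n\n# Segment 2: the target is scaled by t1/336, so blocks arrive faster\n# once the miner follows the protocol again.\nlam2 = lam * t1 / 336               # equivalently T2*H/T_max\nt2 = 2016 / lam2                    # reduced horizon (< 336)\nprint(lam1, t1, lam2, t2)\n\\end{verbatim}\n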
\n\nFigure \\ref{fig:rev_difficulty_adjusment} displays the expected profit of a selfish miner and of a miner following the protocol as a function of initial wealth for a range of electricity prices.\n\\begin{figure}[!ht]\n  \\begin{center}\n      % \\subfloat[$\\pi_W = 0.04$.]{\n      % \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_4}\n      % \\label{sub:rev_difficulty_adjusment_4}\n      %                    }\n                         \\hskip1em\n    \\subfloat[$\\pi_W = 0.05$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_5_14a}\n      \\label{sub:rev_difficulty_adjusment_5}\n                         }\n                         \\hskip1em\n       \\subfloat[$\\pi_W = 0.06$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_6_14b}\n      \\label{sub:rev_difficulty_adjusment_6}\n                         }\n                         \\hskip1em\n    \\subfloat[$\\pi_W = 0.07$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_7_14c}\n      \\label{sub:rev_difficulty_adjusment_7}\n                         }\n                         \\hskip1em\n       \\subfloat[$\\pi_W = 0.08$.]{\n      \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_8_14d}\n      \\label{sub:rev_difficulty_adjusment_8}\n                         }\n                         \\hskip1em\n    % \\subfloat[$\\pi_W = 0.09$.]{\n    %   \\includegraphics[width=0.45\\textwidth]{../Figures/rev_difficulty_adjusment_9}\n    %   \\label{sub:rev_difficulty_adjusment_9}\n    %                      }\n    \\caption{Expected profit over two segments as a function of initial wealth of a miner following the protocol (solid) and a selfish miner (dashed) for various electricity prices, with hashpower $p=0.2$ and connectivity $q=0.75$.}\n    \\label{fig:rev_difficulty_adjusment}\n  \\end{center}\n\\end{figure}\n One can observe the different profit and loss profiles of a selfish miner and of a miner following the protocol on both segments. If the net profit condition holds when following the protocol, then it also holds for the second segment when blocks were withheld during the first segment. For electricity prices $\\pi_W = 0.05, 0.06$, the expected profit as a function of $u$ reaches a plateau of level \n\\begin{equation}\\label{eq:plateau_protocol}\n(b\\lambda q-c)\\times t\n\\end{equation}\nwhen following the protocol, and \n\\begin{equation}\\label{eq:plateau_selfish}\n(b\\lambda_2 q-c)\\times t_2,\n\\end{equation}\nwhen selfish mining is applied during the first segment. For $\\pi_W = 0.05$, the plateau when following the protocol \\eqref{eq:plateau_protocol} is higher than the plateau when withholding blocks \\eqref{eq:plateau_selfish}, but the expected profit at lower initial wealth is greater for the selfish miner, see Figure \\ref{sub:rev_difficulty_adjusment_5}. The exact opposite holds when $\\pi_W = 0.06$, see Figure \\ref{sub:rev_difficulty_adjusment_6}. The latter is probably the most desirable situation for a selfish miner. For $\\pi_W = 0.07$, the net profit condition no longer holds when following the protocol, which entails a loss; it does hold on the second segment for the selfish miner, but this does not compensate for the loss incurred during the first segment, see Figure \\ref{sub:rev_difficulty_adjusment_7}. Selfish mining at least slightly mitigates the losses in this case. 
For electricity prices $\\pi_W > 0.074$, the net profit condition breaks down in each case, resulting in huge losses for both the selfish and the honest miner (cf.\\ Figure \\ref{sub:rev_difficulty_adjusment_8}).\\\\\n\n\n\\noindent The above analysis allowed us to distinguish situations where selfish mining can be considered worthwhile from those where it may not. In particular, it turns out that selfish mining can be advisable when following the protocol is not profitable. \n\n% \\section{Nothing-at-stake in PoS}\\label{sec:NaS}\n% \\newpage", "meta": {"hexsha": "a65759033a0d6640426e2d7cf39bbae399514f00", "size": 101652, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture_notes/includes/security.tex", "max_stars_repo_name": "LaGauffre/BLOCKASTICS", "max_stars_repo_head_hexsha": "4087304a4fb6fe55b5e8746315f524eddedc72e8", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lecture_notes/includes/security.tex", "max_issues_repo_name": "LaGauffre/BLOCKASTICS", "max_issues_repo_head_hexsha": "4087304a4fb6fe55b5e8746315f524eddedc72e8", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture_notes/includes/security.tex", "max_forks_repo_name": "LaGauffre/BLOCKASTICS", "max_forks_repo_head_hexsha": "4087304a4fb6fe55b5e8746315f524eddedc72e8", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-21T08:20:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-21T08:20:38.000Z", "avg_line_length": 61.9074299635, "max_line_length": 1109, "alphanum_fraction": 0.67259867, "num_tokens": 37528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.7931059560743422, "lm_q1q2_score": 0.5648755777388322}}
{"text": "\\label{sec:theory}\n\\subsection{Mathematical Framework}\n\\label{sec:mathFramework}\nLet be $\\bar{\\theta}(t)$ a vector describing the plant status in the phase space; the dynamic of the plant, including the control system, can be summarized by the following equation~\\cite{MathFrameworkMC2013}:\n\\begin{equation}\n\\frac{\\partial \\bar{\\theta}}{\\partial t} = \\bar{H}(\\theta(t),t)\n\\label{eq:SystemDynamics}\n\\end{equation}\nIn the above equation it is assumed the time differentiability in the phase space. Performing an arbitrary decomposition of the phase space, the following statement is obtained:\n\\begin{equation}\n\\bar{\\theta}=\\binom{\\bar{x}}{\\bar{v}}\n\\label{eq:firstDecomposition}\n\\end{equation}\nThe decomposition is made in such a way that $\\bar{x}$ represents the unknowns solved by RELAP-7, while $\\bar{v}$ are the variables directly controlled by the control system (i.e., RAVEN). Equation~\\ref{eq:SystemDynamics} can now be rewritten as follows:\n\\begin{equation}\n\\begin{cases}\n\\dfrac{\\partial \\bar{x}}{\\partial t} = \\bar{F}(\\bar{x},\\bar{v},t) \\\\\n\\dfrac{\\partial \\bar{v}}{\\partial t} = \\bar{V}(\\bar{x},\\bar{v},t) \\\\\n\\end{cases}\n\\label{eq:generalSystemEquation}\n\\end{equation}\nNote that the function $\\bar{V}(\\bar{x},\\bar{v},t)$ representing the control system, does not depend on the knowledge of the complete status of the system but on a restricted subset (i.e. control variables) $\\bar{C}$:\n%Why is V still a function of x and v?\n\\begin{equation}\n\\begin{cases}\n\\dfrac{\\partial \\bar{x}}{\\partial t} = \\bar{F}(\\bar{x},\\bar{v},t) \\\\\n\\bar{C} = \\bar{G}(\\bar{x},t) \\\\\n\\dfrac{\\partial \\bar{v}}{\\partial t} = \\bar{V}(\\bar{x},\\bar{v},t)\n\\end{cases}\n\\label{eq:generalSystemEquationwithControl}\n\\end{equation}\nThe system of equations in Eq.~\\ref{eq:generalSystemEquationwithControl} is fully coupled and has commonly been solved by an operator splitting approach. The reasons for this choice are several:\n\\begin{itemize}\n\\item Control system reacts with an intrinsic delay\n\\item The reaction of the control system might move the system between two different discrete states and\ntherefore numerical errors will be always of first order unless the discontinuity is treated explicitly.\n\\end{itemize}\nRAVEN uses this approach to solve Eq.~\\ref{eq:generalSystemEquationwithControl} which now becomes:\n\\begin{equation}\n\\begin{cases}\n\\dfrac{\\partial \\bar{x}}{\\partial t} = \\bar{F}(\\bar{x},\\bar{v}_{t_{i-1}},t) \\\\\n\\bar{C} = \\bar{G}(\\bar{x},t) & t_{i-1}\\leq t\\leq t_{i} = t_{i-1} + \\Delta t_{i}\\\\\n\\dfrac{\\partial \\bar{v}}{\\partial t} = \\bar{V}(\\bar{x},\\bar{v}_{t_{i-1}},t) \\\\\n\\end{cases}\n\\label{eq:generalSystemEquationwithControlSplitting}\n\\end{equation}\nEven if all information needed is contained in $\\bar{x}$ and $\\bar{v}$, it is not often practical and efficient to implement the control logic for complex system . 
Consequently, a system of auxiliary variables has been introduced.\n\nThe auxiliary variables are those that, in statistical analysis, are artificially added (when possible) to the phase space of non-Markovian systems to recover Markovian behavior, so that only the information of the previous time step is needed to determine the future status of the system.\nThus, the introduction of the auxiliary variables into the mathematical framework leads to the following formulation of Eq.~\\ref{eq:generalSystemEquationwithControlSplitting}:\n\\vspace{-2mm}\n\\begin{equation}\n\\begin{cases}\n\\dfrac{\\partial \\bar{x}}{\\partial t} = \\bar{F}(\\bar{x},\\bar{v}_{t_{i-1}},t) \\\\\n\\bar{C} = \\bar{G}(\\bar{x},t) & t_{i-1}\\leq t\\leq t_{i} = t_{i-1} + \\Delta t_{i}\\\\\n\\dfrac{\\partial \\bar{a}}{\\partial t} = \\bar{A}(\\bar{x},\\bar{C},\\bar{a}_{t_{i-1}},\\bar{v}_{t_{i-1}},t) \\\\\n\\dfrac{\\partial \\bar{v}}{\\partial t} = \\bar{V}(\\bar{x},\\bar{C},\\bar{v}_{t_{i-1}},\\bar{a},t)\n\\end{cases}\n\\label{eq:generalSystemEquationwithControlSplittingAndAux}\n\\end{equation}\n\n\\subsection{Software Overview}\nRAVEN~\\cite{alfonsiMC} is plugged into the software environment MOOSE~\\cite{MOOSE}. MOOSE is a computer simulation framework, developed at Idaho National Laboratory (INL), which simplifies the process for predicting the behavior of complex systems and developing non-linear, multi-physics simulation tools. MOOSE provides the algorithms for the solution of generic partial differential equations, and all the manipulation tools needed to extract information from the solution field. This framework has been used to construct and develop the thermal-hydraulic code RELAP-7, giving enormous flexibility in the coupling procedure with RAVEN.\n\nRELAP-7 is the next-generation nuclear reactor system safety analysis code. It will become the main reactor systems simulation toolkit for the RISMC (\\textbf{R}isk \\textbf{I}nformed \\textbf{S}afety \\textbf{M}argin \\textbf{C}haracterization)~\\cite{mandelliANS_RISMC} project and the next-generation tool in the RELAP reactor safety/systems analysis application series.\n\nRAVEN has been developed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., \\verb!C++!, \\verb!Python!) and coupling with MOOSE-based applications and beyond. The code consists of four main modules:\n\\vspace{-5mm}\n\\begin{itemize}\n\\itemsep0em\n\\item RAVEN/RELAP-7 interface\n\\item Python Control Logic\n\\item External Python Manager\n\\item Graphical User Interface\n\\end{itemize}\n\\vspace{-5mm}\nThe RAVEN/RELAP-7 interface, coded in \\verb!C++!, is the container of all the tools needed to interact with RELAP-7/MOOSE. It has been designed to be general and pluggable with different solvers simultaneously, in order to allow easier and faster development of the control logic/PRA capabilities for multi-physics applications.\nThe interface provides all the capabilities to extract the monitored quantities and accordingly modify the controlled parameters in the RELAP-7/MOOSE calculation.\n\nThe control logic module is used to drive a RAVEN/RELAP-7 simulation, as the real plant control system would do. It is implemented by the user via \\verb!Python! scripting. The reason for this choice is to preserve the generality of the approach in the initial phases of the project, so that further specialization (pre-generated control logic blocks) is possible and inexpensive. 
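\n\nAs an illustration, a user-defined control law of this kind might look as follows. This is a minimal sketch of ours: the hook name \\verb!control_function! and the attribute names on the \\verb!monitored!, \\verb!controlled! and \\verb!auxiliary! containers are hypothetical placeholders, not the actual RAVEN API.\n\\begin{verbatim}\n# Hypothetical sketch of a RAVEN-style Python control logic script.\ndef control_function(monitored, controlled, auxiliary):\n    # Trip the scram once a monitored temperature exceeds a setpoint.\n    if monitored.clad_temperature > 1000.0:  # illustrative setpoint (K)\n        auxiliary.scram_started = True\n    if auxiliary.scram_started:\n        # Ramp the controlled pump head down by 10% per control step.\n        controlled.pump_head = max(0.0, 0.9 * controlled.pump_head)\n\\end{verbatim}\n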
The implementation of the control logic via \\verb!Python! is rather convenient and flexible. The user only needs to know a few \\verb!Python! syntax rules in order to build an input. Though the scripting is simple by itself, the GUI will provide tools to automate its construction in order to minimize user effort.\n\nThe core of the PRA analysis is contained in an external driver/manager. It consists of a \\verb!Python! framework that contains the capabilities and interfaces to drive a PRA analysis. Its basic infrastructure in connection with the DET module will be discussed in section~\\ref{sec:CPUInfrastructure}.\n\nAs previously mentioned, a GUI is not required to run RAVEN, but it represents an added value to the whole code. The GUI is compatible with all the capabilities currently available in RAVEN. Its development is performed using \\verb!PyQt4!, which is a \\verb!Python! interface for \\verb!Qt! (https://qt-project.org). \\verb!Qt! is a popular library used for cross-platform GUI development. RAVEN's GUI is developed starting from Peacock, which is a GUI interface for generic MOOSE-based applications. Because RELAP-7 is rather different from the other MOOSE-based applications, much effort has been required to specialize Peacock for RAVEN, and further additions will continue to follow RELAP-7 development.\n\\vspace{-5mm}\n", "meta": {"hexsha": "2e839367caf8bca4c73c16dac3b245d0c836aa62", "size": 7561, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/L3MilestoneDETSept2013/theory.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "papers/L3MilestoneDETSept2013/theory.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "papers/L3MilestoneDETSept2013/theory.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 87.9186046512, "max_line_length": 708, "alphanum_fraction": 0.7684168761, "num_tokens": 2002, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.6076631698328916, "lm_q1q2_score": 0.5647867055145601}}
{"text": "\\appendix\n\\chapter{Drift-Kinetic Basis Transformation}\n\\label{app:tlkpj}\n\nIn the present Appendix, we derive the expressions for the coefficients $T_{alk}^{pj}$ appearing in \\cref{eq:tlkpj}. These coefficients allows us to express up to order $\\epsilon \\epsilon_\\nu$ the relation between fluid $\\bm M_{a}^{lk}$ and guiding-center $N_{a}^{lk}$ moments via \\cref{eq:CoulDKmom}.\nAs a first step, we define a transformation similar to \\cref{eq:tlkpj} but with isotropic temperatures between both {bases}\n\n\\be\n    \\begin{split}\n        c_a^l P_l(\\xi_a)L_k^{l+1/2}(c_a^2)=&\\sum_{p=0}^{l+2k}\\sum_{j=0}^{k+\\floor{l/2}}\\overline T_{lka}^{pj} H_p\\left(\\frac{v_\\parallel-u_{\\parallel a}}{v_{th a}}\\right) L_j\\left(\\frac{v^{'2}_{\\perp}}{v_{th a}^2}\\right),\n    \\end{split}\n    \\label{eq:deftbarlkpj}\n\\ee\n\n\\noindent with the inverse transformation\n\n\\be\n    \\begin{split}\n        H_p\\left(\\frac{v_\\parallel-u_{\\parallel a}}{v_{th a}}\\right)L_j\\left(\\frac{v^{'2}_{\\perp}}{v_{th a}^2}\\right) =& \\sum_{l=0}^{p+2j}\\sum_{k=0}^{j+\\floor{p/2}}\\left(\\overline T^{-1}\\right)_{pj a}^{lk} \\\\\n        &\\times c_a^l P_l(\\xi_a)L_k^{l+1/2}(c_a^2),\n    \\end{split}\n\\ee\n\nThe relation between the coefficients $\\left(\\overline T^{-1}\\right)_{pj}^{lk}$ and $\\overline T_{lk}^{pj}$ is given by\n\n\\be\n    \\left(\\overline T^{-1}\\right)_{pj}^{lk}=\\frac{\\sqrt{\\pi}2^p p!(l+1/2)k!}{(k+l+1/2)!}\\overline T_{lk}^{pj}.\n\\ee\n\nBy integrating both sides of \\cref{eq:deftbarlkpj} over the whole velocity space, we can write $\\overline{T}_{lk}^{pj}$ as\n%\n\\begin{align}\n    T_{lk}^{pj}=\\int P_l(\\xi)c^l L_k^{l+1/2}(c^2)\\frac{H_p(s_\\parallel) L_j(s_\\perp^2)}{2^p p! \\sqrt{\\pi}}e^{-s_\\parallel^2-s_\\perp^2}ds_\\parallel ds_\\perp^2,\n\\label{eq:deftlkpj1}\n\\end{align}\n%\nwhere we suppressed the species index $a$ for simplicity, and find\n%\n\\begin{equation}\n    \\begin{split}\n        \\overline T_{lk}^{pj}&=\\sum_{q=0}^{\\floor{l/2}}\\sum_{v=0}^{\\floor{p/2}}\\sum_{i=0}^{k}\\sum_{r=0}^{q}\\sum_{s=0}^{\\text{min}(j,i)}\\sum_{m=0}^{k-i}\\frac{(-1)^{q+i+j+v+m}}{2^{\\frac{3l+p}{2}+m+v-r}}\\\\\n        &\\times\\binom{l}{q}\\binom{2(l-q)}{l}\\binom{q}{r}\\binom{r}{j-s}\\binom{r}{i-s}\\binom{s+r}{s}{r!}{}\\\\\n        &\\times\\frac{(k-i+l-1/2)!(l+p+2(m-r-v)-1)!!}{(p-2v)!(k-i-m)!(l+m-1/2)!v!m!}.\n    \\end{split}\n\\end{equation}\n\nWe then integrate both sides of \\cref{eq:tlkpj} with weights $H_l(s_{\\parallel a})L_j(s_{\\perp a}^2)$, with the argument transformation\n\n\\begin{equation}\n\\begin{split}\n        H_p(s_{\\parallel a})=\\left(\\frac{T_a}{T_{\\parallel a}}\\right)^{p/2}\\sum_{k=0}^{\\floor{p/2}}&\\frac{p!}{k!(p-2k)!}\\left(1-\\frac{T_{\\parallel a}}{T_a}\\right)^kH_{p-2k}\\left(\\frac{v_\\parallel-u_{\\parallel a}}{v_{th a}}\\right),\n\\end{split}\n\\end{equation}\n\n\\noindent and\n\n\\begin{equation}\n\\begin{split}\n    L_j(s_{\\perp a}^2)=\\sum_{k=0}^{j}&\\binom{j}{j-k}\\left(\\frac{T_a}{T_{\\perp a}}\\right)^k \\left(1-\\frac{T_a}{T_{\\perp a}}\\right)^{j-k}L_{k}\\left(\\frac{v_{\\perp}^{'2}}{v_{th a}^2}\\right),\n\\end{split}\n\\end{equation}\n\n\\noindent to find the relation between the isotropic and anisotropic temperature coefficients\n\n\\be\n    \\begin{split}\n        T_{alk}^{pj}=&\\sum_{m=0}^{l+2k}\\sum_{n=0}^{k+\\floor{l}{2}}\\sum_{z=0}^{n}\\sum_{d=0}^{\\floor{m/2}}\\binom{n}{n-z}\\frac{m!\\delta_{z,j}\\delta_{p,m-2d}}{d!(m-2d)!}\\\\\n        &\\times\\left(\\frac{T_{\\parallel 
a}}{T_a}\\right)^{p/2}\\left(\\frac{T_{\\perp a}}{T_a}\\right)^{z}\\left(1-\\frac{T_a}{T_{\\parallel a}}\\right)^{d}\\left(1-\\frac{T_{\\perp a}}{T_a}\\right)^{n-z}{\\overline{T}_{lk}^{m n}},\n    \\end{split}\n    \\label{eq:tlkpjexact}\n\\ee\n\n\\be\n    \\begin{split}\n        \\left(T^{-1}_a\\right)_{pj }^{lk}=&\\sum_{z=0}^{j}\\sum_{d=0}^{\\floor{p/2}}\\sum_{t=0}^{p-2d+2z}\\sum_{v=0}^{z-d+\\floor{p/2}}\\binom{j}{j-z}\\frac{p!\\delta_{l,t}\\delta_{k,v}}{d!(p-2d)!}\\\\\n        &\\times\\left(\\frac{T_{ a}}{T_{\\parallel a}}\\right)^{p/2}\\left(\\frac{T_{a}}{T_{\\perp a}}\\right)^{z}\\left(1-\\frac{T_{\\parallel a}}{T_{ a}}\\right)^{d}\\left(1-\\frac{T_{ a}}{T_{\\perp a}}\\right)^{j-z}\\left(\\overline{T^{-1}}\\right)_{p-2d z}^{t v}.\n    \\end{split}\n    \\label{eq:tlkpjminus1exact}\n\\ee\n\nA more efficient algorithm can be found as follows.\n%\nFirst, we expand the product $P_l(\\xi)c^l L_k^{l+1/2}(c^2)$ into products of $s_\\parallel$ and $s_\\perp^2$ in order to write \\cref{eq:deftlkpj1} in terms of $s_\\parallel$ and $s_\\perp^2$ only\n%\n\\begin{align}\n    P_l(\\xi)c^l L_k^{l+1/2}(c^2)&=\\sum_{i=0}^{\\floor{l/2}}\\sum_{m=0}^k\\sum_{r=0}^{m+i}\\binom{2l-2i}{l}\\binom{l}{i}\\binom{m+i}{r}\\frac{(-1)^{i+m}(l+k+1/2)!}{2^l(k-m)!(l+m+1/2)!m!}\\nonumber\\\\\n    &\\times\\frac{s_\\parallel^{l-2i+2r}s_\\perp^{2(m+i-r)}}{(T_\\parallel/T)^{l/2-i+r}(T_\\perp/T)^{m+i-r}}.\n\\end{align}\n%\nWe then perform the parallel and perpendicular integrations separately, using the fact that\n%\n\\begin{equation}\n    \\int_{-\\infty}^{\\infty} x^n \\frac{H_p(x)}{2^p p! \\sqrt{\\pi}}e^{-x^2}dx=\\frac{n!}{2^n}\\frac{1-\\mod(n-p,2)}{\\left(\\frac{n-p}{2}\\right)!p!},\n\\label{eq:inthp1}\n\\end{equation}\n%\nand\n%\n\\begin{equation}\n        \\int_0^{\\infty} x^m L_j(x) e^{-x}dx = m!\\binom{m}{m-j}(-1)^j.\n\\label{eq:intlj1}\n\\end{equation}\n%\nFinally, we apply \\cref{eq:inthp1,eq:intlj1} to \\cref{eq:deftlkpj1}, yielding\n%\n\\begin{align}\n    \\overline T_{lk}^{pj}&=\\sum_{i=0}^{\\floor{l/2}}\\sum_{m=0}^k\\sum_{r=0}^{m+i}\\binom{2l-2i}{l}\\binom{l}{i}\\binom{m+i}{r}\\frac{(-1)^{i+m+j}(l+k+1/2)!}{2^l(k-m)!(l+m+1/2)!m!}\\nonumber\\\\\n    &\\times\\binom{m+i-r}{m+i-r-j}\\frac{(l-2i+2r)!}{2^{l-2i+2r}}\\frac{1-\\mod(l-p,2)}{\\left(\\frac{l-p}{2}-i+r\\right)!p!}(m+i-r)!.\n\\end{align}\n\n\\chapter{Expressions for the Moments of the Collision Operator}\n\\label{app:cabmoments}\n\nIn the present Appendix, we present the expressions for the guiding-center moments of the collision operator relevant for the fluid model in \\cref{sec:fluidmodel}.\nThe collision operator moments satisfy particle conservation\n\\be\n    C_{ab}^{00}=0,\n\\ee\n\n\\noindent and momentum conservation {at lowest order}\n\n\\be\n    C_{aa}^{10}=0,\n\\ee\n\n\\begin{equation}\n    C_{ei}^{10}=-\\frac{m_i}{m_e}\\frac{v_{th\\parallel i}}{v_{th \\parallel e}}C_{ie}^{10}{+O({m_e/m_i})}.\n\\end{equation}\n\n\\noindent Both the like-species and electron-ion operators satisfy energy conservation exactly, {while the ion-electron operator satisfies \\cref{eq:cabenergy} at zeroth order in $\\delta_a$}\n\n\\begin{equation}\n    T_{\\parallel a}C_{ab}^{20}-\\sqrt{2}T_{\\perp a} C_{ab}^{01} = 0.\n    \\label{eq:cabenergy}\n\\end{equation}\n\n{The remaining moments $C_{ab}^{pj}$, in the linear transport regime with $\\Delta T_a/T_a = (T_{\\parallel a} - T_{\\perp a})/T_a \\sim N^{11}\\sim N^{30} \\sim (u_{\\parallel e}-u_{\\parallel i})/v_{the} \\sim \\delta_a$, for ion-electron collisions are given by}\n\n\\begin{align}\n    C_{ie}^{10}&=-\\frac{m_e}{m_i}\\frac{v_{th 
\\parallel i}}{v_{th \\parallel e}}C_{ei}^{10},\\\\\n    C_{ie}^{20}&=\\sqrt{2}\\nu_{ei}\\frac{m_e}{m_i}\\left(\\frac{T_e-T_i}{T_i}\\right)-\\frac{2 \\sqrt{2} \\nu_{ei}}{3}\\frac{m_e}{m_i}\\frac{T_e}{T_i}\\frac{\\Delta T_i}{T_i},\\\\\n    C_{ie}^{01}&=-2\\nu_{ei}\\frac{m_e}{m_i}\\left(\\frac{T_e-T_i}{T_i}\\right)-\\frac{2 \\nu_{ei}}{3}\\frac{m_e}{m_i}\\frac{T_e}{T_i}\\frac{\\Delta T_i}{T_i},\\\\\n    C_{ie}^{30}&=-\\nu_{ei}\\sqrt{\\frac{3}{2}}\\frac{m_e}{m_i}\\frac{Q_{\\parallel i}}{n T_i v_{thi}},\\\\\n    C_{ie}^{11}&=3 \\nu_{ei}\\frac{m_e}{m_i}\\frac{Q_{\\perp i}}{n T_i v_{thi}},\n\\end{align}\n\n\\noindent {for electron-ion collisions}\n\n{\n\\begin{align}\n    C_{ei}^{10} &= -\\frac{\\sqrt{2}\\nu_{ei}}{6 \\pi^{3/2}}\\frac{u_{\\parallel e}-u_{\\parallel i}}{v_{the}}+\\frac{\\sqrt{2}\\nu_{ei}}{10 \\pi^{3/2}}\\frac{Q_{\\parallel e}+2 Q_{\\perp e}}{n T_e v_{the}},\\\\\n    C_{ei}^{20}&=-\\frac{2 \\sqrt{2}\\nu_{ei}}{15 \\pi^{3/2}}\\frac{\\Delta T_e}{T_e},\\\\\n    C_{ei}^{30}&= \\frac{\\sqrt{3}\\nu_{ei}}{10 \\pi^{3/2}}\\frac{u_{\\parallel e}-u_{\\parallel i}}{v_{the}}-\\frac{ \\nu_{ei}}{70 \\sqrt{3} \\pi^{3/2}}\\frac{31 Q_{\\parallel e} - 2 Q_{\\perp e}}{n T_e v_{the}},\\\\\n    C_{ei}^{11}&= \\frac{ \\nu_{ei}}{5 \\sqrt{2} \\pi^{3/2}}\\frac{u_{\\parallel e}-u_{\\parallel i}}{v_{the}}+\\frac{ \\nu_{ei}}{150 \\sqrt{2} \\pi^{3/2}}\\frac{Q_{\\parallel e}-94 Q_{\\perp e}}{n T_e v_{the}},\n\\end{align}\n}\n\\noindent {and for like-species collisions}\n{\n\\begin{align}\n    C_{aa}^{20}&=0,\\\\\n    C_{aa}^{30}&=-\\frac{2 \\sqrt{2}}{125 \\sqrt{3} \\pi^{3/2}}\\frac{\\nu_{aa}}{n T_a v_{tha}}\\left(19 Q_{\\parallel a}-7 Q_{\\perp a}\\right),\\\\\n    C_{aa}^{11}&=-\\frac{2}{375 \\pi^{3/2}}\\frac{\\nu_{aa}}{n T_a v_{tha}}\\left(7 Q_{\\parallel a}-121 Q_{\\perp a}\\right).\n\\end{align}\n}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Spherical Basis Tensors}\n\\label{app:basistensors}\n\nWe start with the definition of the $\\mathbf{Y}^l(\\mathbf v)$ tensor in terms of spherical basis tensors $\\mathbf e^{lm}$ in \\cref{eq:ylvfull}.\n%\nFor the $l=1$ case, \\cref{eq:ylvfull} yields\n%\n\\begin{equation}\n    \\mathbf Y^1(\\mathbf v) = \\mathbf v = \\sqrt{\\frac{4 \\pi}{3}}v \\sum_{m=-1}^1 Y_{1m}(\\phi,\\theta) \\mathbf e^{1m}.\n\\label{eq:Y1defe}\n\\end{equation}\n%\nThe spherical basis vectors $\\mathbf e^{1m}$ can then be derived from \\cref{eq:Y1defe} by decomposing the vector $\\mathbf v$ in spherical coordinates as\n%\n\\begin{equation}\n    \\mathbf v = v\\left(\\sin \\phi \\cos \\theta \\mathbf e_x+\\sin \\phi \\sin \\theta \\mathbf e_y+\\cos \\phi \\mathbf e_z\\right),\n\\end{equation}\n%\nand using the identities for the spherical harmonics\n%\n\\begin{equation}\n    Y_{1m}(\\phi,\\theta)=\n\\begin{cases}\n    \\sqrt{\\frac{3}{8 \\pi}}\\sin \\phi e^{-i \\theta}, &m=-1,\\\\\n    \\sqrt{\\frac{3}{4 \\pi}}\\cos \\phi, &m=0,\\\\\n    -\\sqrt{\\frac{3}{8 \\pi}}\\sin \\phi e^{i \\theta}, &m=1,\\\\\n\\end{cases}\n\\end{equation}\n%\ntherefore obtaining\n%\n\\begin{equation}\n    \\mathbf e^{1m}=\n\\begin{cases}\n    \\frac{\\mathbf e_x-i \\mathbf e_y}{\\sqrt{2}}, &m=-1,\\\\\n    \\mathbf e_z, &m=0,\\\\\n    -\\frac{\\mathbf e_x+i \\mathbf e_y}{\\sqrt{2}}, &m=1.\\\\\n\\end{cases}\n\\end{equation}\n\nWe now construct spherical basis tensors $\\mathbf e^{lm}$ from the spherical basis vectors $\\mathbf e^{1m}$ leveraging the techniques developed for the angular momentum formalism in quantum mechanics \\citep{Zettili2009,Snider2018}.\n%\nIndeed, the basis vectors 
$\\mathbf e^{1m}$ are eigenvectors of the angular momentum matrix $G_z$\n%\n\\begin{equation}\n    G_z=i\n  \\begin{pmatrix}\n    0 & -1 & 0\\\\\n    1 &  0 & 0\\\\\n    0 &  0 & 0\n  \\end{pmatrix},\n\\end{equation}\n%\nwith eigenvalue $m$, that is\n%\n\\begin{equation}\n    G_z \\cdot \\mathbf e^{1m} = m \\mathbf e^{1m}.\n\\end{equation}\n%\nIn general, the angular momentum matrices along any axis $n\\in\\{x,y,z\\}$ are given by\n%\n\\begin{equation}\n    G_{n} = -i \\mathbf e_{n} \\cdot \\epsilon,\n\\label{eq:gnmatr}\n\\end{equation}\n%\nwith $\\epsilon$ the standard Levi-Civita tensor.\n%\nIn index notation, \\cref{eq:gnmatr} can be written as\n%\n\\begin{equation}\n    \\left({G_{n}}\\right)_{kl}=-i\\sum_{j=1}^3\\left(e_{n}\\right)_j \\epsilon_{jkl}.\n\\end{equation}\n%\nThe raising $G_+$ and lowering $G_-$ operators (corresponding to the ladder operators in quantum mechanics) are defined by\n%\n\\begin{equation}\n    G_{\\pm}=G_x \\pm i G_y.\n\\end{equation}\n%\nThey allow us to obtain the basis vectors $\\mathbf e^{1\\pm1}$ from $\\mathbf e^{10}$ using\n%\n\\begin{equation}\n    G_{\\pm}\\,\\mathbf e^{10}\\propto\\mathbf e^{1\\pm1}.\n\\end{equation}\n%\nFinally, we note that the dual basis $\\mathbf e^{1}_m = (\\mathbf e^{1m})^* = (-1)^m \\mathbf e^{1-m}$, together with $\\mathbf e^{1m}$, satisfy\n%\n\\begin{equation}\n    \\mathbf e^{1m} \\cdot \\mathbf e^{1}_{m'} = \\delta_{m,m'}.\n\\label{eq:orthoe}\n\\end{equation}\n\nTo obtain the spherical tensor basis $\\mathbf e^{l m}$ for the irreducible tensors $\\mathbf Y^{l}$, we start with the spherical basis tensor\n%\n\\begin{equation}\n    \\mathbf e^{ll}=\\mathbf e^{11}\\mathbf e^{11}...\\mathbf e^{11},\n\\end{equation}\n%\nformed by the product of $l$ basis vectors $\\mathbf e^{11}$.\n%\nIndeed, similarly to $\\mathbf Y^l(\\mathbf v)$, this tensor is of rank $l$, symmetric, and traceless between any of its indices, as $\\mathbf e^{11}\\cdot \\mathbf e^{11} = 0$.\n%\nFurthermore, we note that $\\mathbf e^{ll}$ is an eigenvector with eigenvalue $l$ of the angular momentum tensor $G_z^{l}$, with $G_n^{l}$ defined by\n%\n\\begin{equation}\n    \\left[G_n^{l} \\cdot T^l\\right]_{j k ... 
l}=\\sum_{j'k'...l'}\\left\\{\\left[G_n\\right]_{jj'}\\delta_{kk'}...\\delta_{ll'}+\\delta_{jj'}\\left[G_n\\right]_{kk'}...\\delta_{ll'}+...+\\delta_{jj'}\\delta_{kk'}...\\left[G_n\\right]_{ll'}\\right\\} T^l_{j'k'...l'},\n\\end{equation}\n%\nwhere $T^l$ is an arbitrary tensor of rank $l$.\n%\nThe remaining basis tensor elements $\\mathbf e^{l m}$ can be obtained by applying the tensorial lowering operator $G^l_-=G_x^{l}-i G_y^{l}$ to $\\mathbf e^{ll}$, namely\n%\n\\begin{equation}\n    \\mathbf e^{l m} = \\sqrt{\\frac{(l+m)!}{(2l)!(l-m)!}}G^{l}_-\\cdot^{l-m}\\mathbf e^{ll},\n\\label{eq:elmbasistensor}\n\\end{equation}\n%\nwith $m=-l,-l+1,...,-1,0,1,...,l$.\n%\nThe normalization factor in \\cref{eq:elmbasistensor} is obtained by requiring that the contravariant $\\mathbf e^{lm}$ and the covariant $\\mathbf e^l_m=(\\mathbf e^{lm})^*$ basis tensors form an orthonormal basis, i.e.,\n%\n\\begin{equation}\n    \\mathbf e^{l m} \\cdot \\mathbf e^{l}_{m'} = \\delta_{m,m'}.\n\\end{equation}\n%\nFor computational purposes, we note that the tensor $\\mathbf e^{l m}$ can also be written as a function of the basis vectors $\\mathbf e^{1m}$ as \\citep{Snider2018}\n%\n\\begin{equation}\n    \\mathbf e^{l m}=N_{lm}\\sum_{n=0}^{\\floor{\\frac{l+m}{2}}}a_n^{lm}\\left\\{(\\mathbf e^{11})^{m+n}(\\mathbf e^{1-1})^{n}(\\mathbf e^{10})^{l-m-2n}\\right\\}_{TS},\n\\end{equation}\n%\nwhere $N_{lm}=\\sqrt{(l+m)!(l-m)!2^{l-m}/(2l)!}$ and $a_n^{lm}=l!/[2^n n!(m+n)!(l-m-2n)!]$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Gyrokinetic Basis Transformation}\n\\label{app:tlkpj1}\n\nIn this section, we derive a closed form expression for the $T_{lkm}^{pj}$ and $(T^{-1})^{lkm}_{pj}$ coefficients defined in \\cref{eq:plljhplj,eq:plljhpljminus1}.\n%\nBy multiplying \\cref{eq:plljhplj} by a Hermite and a Laguerre polynomial and by an exponential of the form $e^{-\\overline v^2}$, and integrating over the whole $\\overline v_\\parallel$ and $\\overline \\mu$ space, we obtain the following integral expression for $T_{lkm}^{pj}$\n%\n\\begin{equation}\n    T_{lkm}^{pj}=\\frac{v_{tha}^{m-l}}{2^p p! \\sqrt{\\pi}} \\int \\frac{\\overline v^l}{\\overline v_\\perp^m} P_l^m\\left(\\frac{\\overline v_\\parallel}{\\overline v}\\right) L_k^{l+1/2}\\left(\\frac{\\overline v_{tha}^m}{\\overline v_\\perp^m}\\right) H_p\\left(\\frac{\\overline v_{\\parallel a}}{v_{tha}}\\right)L_j\\left(\\frac{\\overline v_\\perp^2}{v_{tha}^2}\\right) e^{-\\frac{v^2}{v_{tha}^2}}\\frac{d \\bm v}{2 \\pi}.\n\\label{eq:appbas1}\n\\end{equation}\n%\nWe first write the integrand in \\cref{eq:appbas1} in terms of $\\overline \\xi=\\overline v_\\parallel/\\overline v$ and $\\overline v$ coordinates using the basis transformation in \\cref{eq:plljhpljminus1}, yielding\n%\n\\begin{equation}\n\\begin{split}\n     T_{lkm}^{pj}&= \\sum_{l'=0}^{p+2j}\\sum_{k'=0}^{j+\\floor{p/2}}\\frac{(l+1/2)k!}{(l+k+1/2)!}T_{l'k'}^{pj}\\\\\n     &\\times\\int_{-1}^1 \\frac{P_l^m(\\overline \\xi) P_{l'}(\\overline \\xi)}{(1-\\overline \\xi)^2} d \\overline \\xi \\int_0^\\infty x_a^{(l+l'-m+1)/2}L_k^{l+1/2}(x_a)L_{k'}^{l'+1/2}(x_a) dx_a,\n\\end{split}\n\\label{eq:appbas2}\n\\end{equation}\n%\nwhere we used the fact that $(T^{-1})_{lk}^{pj}=T_{lk}^{pj}\\sqrt{\\pi}2^p p! k! 
(l+1/2)/(k+l+1/2)!$ \\citep{Jorge2017}.\n%\nThe first integral in \\cref{eq:appbas2} is performed by expanding $P_{l}$ as a finite sum of the form\n%\n\\begin{equation}\n    P_{l}(x)=\\sum_{s=0}^l c_s^l x^s,\n\\end{equation}\n%\nwith the coefficients $c_s^l=2^l[(l+s-1)/2]!/[s!(l-s)!((s-l-1)/2)!]$, and using the relation between associated Legendre functions $P_l^m(x)$ and Legendre polynomials $P_l(x)$\n%\n\\begin{equation}\n    P_l^m(x) = (-1)^m(1-x^2)^{m/2}\\frac{d^m P_l(x)}{dx^m}.\n\\end{equation}\n%\nThe second integral in \\cref{eq:appbas2} is performed by using the expansion of the associated Laguerre polynomials in \\cref{eq:asslaguerre}.\n%\nThe $T_{lkm}^{pj}$ coefficient can then be written as\n%\n\\begin{align}\n    T_{lkm}^{pj}&=\\sum_{l'=0}^{p+2j}\\sum_{k'=0}^{j+\\floor{p/2}} T_{l'k'}^{pj}\\frac{(l'+1/2)k'!}{(l'+k'+1/2)!} \\sum_{m_1=0}^k\\sum_{m_2=0}^{k'}\\sum_{s_1=m}^l\\sum_{s_2=0}^{l'}L_{k m_1}^l L_{k' m_2}^{l'}\\nonumber\\\\\n    &\\times \\frac{c_{s_1}^l c_{s_2}^{l'}}{2} \\frac{s_1!}{(s_1-m)!} \\frac{\\left[1+(-1)^{s_1+s_2-m}\\right]}{s_1+s_2+1-m}\\left(m_1+m_2+\\frac{l+l'-m+1}{2}\\right)!.\n\\end{align}\n%\nThe inverse transformation coefficients $(T^{-1})^{lkm}_{pj}$ defined by \\cref{eq:plljhpljminus1} can be found similarly, yielding\n%\n\\begin{equation}\n    (T^{-1})^{lkm}_{pj}=\\frac{2^p p! \\sqrt{\\pi} k! (l+1/2)(l-m)!}{(k+l+1/2)!(l+m)!}T_{lkm}^{pj}.\n\\end{equation}", "meta": {"hexsha": "7fbc1abb7314115ef594a24963243dcfe993a225", "size": 16277, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "IST Version/tail/appendix.tex", "max_stars_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_stars_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IST Version/tail/appendix.tex", "max_issues_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_issues_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IST Version/tail/appendix.tex", "max_forks_repo_name": "rogeriojorge/Rogerio_PhD_Thesis", "max_forks_repo_head_hexsha": "955be22ad75b54d44a3c2d1499098824e7500d38", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.2414772727, "max_line_length": 395, "alphanum_fraction": 0.627326903, "num_tokens": 6734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8670357666736773, "lm_q2_score": 0.6513548511303336, "lm_q1q2_score": 0.5647479527264078}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel} \n\\usepackage{amsmath,amssymb,amsthm}\n\\usepackage{hyperref}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[theorem]\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newcommand\\numberthis{\\stepcounter{equation}{1}\\tag{\\theequation}}\n\n\\title{Mathematics Handout - Logaritmics}\n\\author{Daniel Frederico Lins Leite}\n\\date{July 2016}\n\n\\begin{document}\n\\section{Introduction}\n\\begin{definition}[Log Definition]\\label{definitions:log}\n\t\\begin{align*}\n\t\tlog_a b=c \\\\\n\t\ta^{log_a b}=a^c \\\\\n\t\tb=a^c \\\\\n\t\tlog_a b=log_a a^c \\\\\n\t\tlog_a b= c\t\t\n\t\\end{align*}\n\\end{definition}\n\\begin{theorem}[Logarithm Product Rule]\\label{log:product.rule}\n\t\\begin{align*}\n\tlog_x{(A*B)}=log_x A + log_x B\n\t\\end{align*}\n\t\\begin{proof}\n\t\t\\begin{align}\t\t\t\n\t\t\tx^l = A \\\\\n\t\t\tlog_x {x^l} = log_x A \\\\\n\t\t\tl = log_x A \\label{logproductrulel} \\\\ \t\t\t\n\t\t\t\\nonumber\\\\\n\t\t\tx^m = B \\\\\n\t\t\tlog_x {x^m} = log_x B \\\\\n\t\t\t\\label{logproductrulem} m = log_x B \\\\\n\t\t\t\\nonumber\\\\\n\t\t\tx^n = A*B \\\\\n\t\t\tlog_x{x^n} = log_x{(A*B)} \\\\\n\t\t\t\\label{logproductrulen}n = log_x{(A*B)} \\\\\n\t\t\t\\nonumber\\\\\n\t\t\tlog_x{(A*B)} = n \\\\\n\t\t\tx^n = A*B \\\\\n\t\t\tx^n\t= x^l*x^m \\\\\n\t\t\tx^n\t= x^{l+m} \\\\\n\t\t\tn = l + m && \\text{use (\\ref{logproductrulel}) (\\ref{logproductrulem}) \t(\\ref{logproductrulen})}\\\\\n\t\t\tlog_x{(A*B)} = log_x A + log_x B\t\t\n\t\t\\end{align}\n\t\\end{proof}\t\n\\end{theorem}\n\\begin{theorem}[Logarithm Power Rule]\\label{log:power.rule}\n\t\\begin{align*}\n\tlog_x{A^B}=B*log_x(A)\n\t\\end{align*}\n\t\\begin{proof}\n\t\t\\begin{align}\n\t\tlog_x{(A^B)} \\\\\n\t\tlog_x{(\\prod_{n=1}^{B}A)}\\\\\n\t\t\\sum_{n=1}^{B}(log_x{A}) && \\text{use (\\ref{log:product.rule})}\\\\\n\t\tlog_x{A}*\\sum_{n=1}^{B}1\\\\\n\t\tB*log_x{A}\n\t\t\\end{align}\n\t\\end{proof}\n\\end{theorem}\n\\end{document}", "meta": {"hexsha": "dfe2fb8ba2fddd871d91a37e645f17215623d3c0", "size": 1813, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "texts/math/Handout.Log.tex", "max_stars_repo_name": "xunilrj/sandbox", "max_stars_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-04-01T17:18:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-12T05:23:23.000Z", "max_issues_repo_path": "texts/math/Handout.Log.tex", "max_issues_repo_name": "xunilrj/sandbox", "max_issues_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-05-24T13:36:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T06:44:20.000Z", "max_forks_repo_path": "texts/math/Handout.Log.tex", "max_forks_repo_name": "xunilrj/sandbox", "max_forks_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-09-20T01:07:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-22T14:55:38.000Z", "avg_line_length": 25.9, "max_line_length": 101, "alphanum_fraction": 0.6365140651, "num_tokens": 709, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7341195385342972, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5645968326197354}}
{"text": "\\chapter{ALGORITHMS}\n% \\addcontentsline{toc}{chapter}{Algorithms}\n\n\\section{Sample Algorithm}\nIn Algorithm~\\ref{alg1} we show how to calculate $y=x^n$.\n\n\\begin{algorithm}\n  \\caption{Calculate $y = x^n$}\n  \\label{alg1}\n  \\begin{algorithmic}\n%     \\input{algorithms/yxn.alg}\n    \\REQUIRE $n \\geq 0 \\vee x \\neq 0$\n    \\ENSURE $y = x^n$\n\n    \\STATE $y \\leftarrow 1$\n    \\IF{$n < 0$}\n      \\STATE $X \\leftarrow 1 / x$\n      \\STATE $N \\leftarrow -n$\n    \\ELSE\n      \\STATE $X \\leftarrow x$\n      \\STATE $N \\leftarrow n$\n    \\ENDIF\n\n    \\WHILE{$N \\neq 0$}\n      \\IF{$N$ is even}\n\t\\STATE $X \\leftarrow X \\times X$\n\t\\STATE $N \\leftarrow N / 2$\n      \\ELSE[$N$ is odd]\n\t\\STATE $y \\leftarrow y \\times X$\n\t\\STATE $N \\leftarrow N - 1$\n      \\ENDIF\n    \\ENDWHILE\n\n\n%     \\endinput\n  \\end{algorithmic}\n\\end{algorithm}\n\n\\endinput\n", "meta": {"hexsha": "38274d6c9afa5ece1d6fbe93accc18c280b3e2c9", "size": 814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algorithms.tex", "max_stars_repo_name": "anthonyjclark/missouristate-thesis-latex-template", "max_stars_repo_head_hexsha": "c3970883934f5ac64e7ee5564317a4ca59430835", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "algorithms.tex", "max_issues_repo_name": "anthonyjclark/missouristate-thesis-latex-template", "max_issues_repo_head_hexsha": "c3970883934f5ac64e7ee5564317a4ca59430835", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithms.tex", "max_forks_repo_name": "anthonyjclark/missouristate-thesis-latex-template", "max_forks_repo_head_hexsha": "c3970883934f5ac64e7ee5564317a4ca59430835", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-05-07T03:17:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-30T16:07:19.000Z", "avg_line_length": 20.35, "max_line_length": 57, "alphanum_fraction": 0.5884520885, "num_tokens": 308, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5645968242604272}}
{"text": "\\section{Real World Signals: Respiratory Sinus Arrhythmia from RR-Intervals}\n\n\\begin{enumerate}[label=\\alph*), leftmargin=*]\n%% a)\n\\item\n%\n\nThe standard and averaged periodograms with different window lengths $L \\in \\{ 50, 100, 200 \\}$ are obtained and illustrated in figure \\ref{fig:2_4_a}.\nIn detail, Bartlett's method periodogram is used, where:\n\n\\begin{enumerate}[label=\\arabic*)]\n    \\item the signal is split to $M$ non-overlapping segments of length $L$\n    \\item the standard periodogram (rectangular window) of each segment is calculated\n    \\item the $M$ periodograms are averaged\n\\end{enumerate}\n\nIncreasing $M$ (or decreasing $L$) trades frequency resolution for variance, which is reflected in \\ref{fig:2_4_a},\nwhere the standard periodogram, which can be treated as a special case of the Bartlett's method with $M = 1$, has finest frequency resolution but largest variance,\nwhile for $L = 50$ and $M \\rightarrow \\mathtt{max}$, variance is minimised in price of larger frequency resolution.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/standard-trial1}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/averaged-trial1}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/standard-trial2}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/averaged-trial2}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/standard-trial3}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/a/averaged-trial3}\n    \\end{subfigure}\n    \\caption{RRI: standard and averaged periodograms with different window lengths $L$.}\n    \\label{fig:2_4_a}\n\\end{figure}\n\n%% b)\n\\item\n%\n\nThe three trials were carried out under different breathing conditions, which is reflected by the Breaths per Minute (BPM) rate.\nTable \\ref{tab:2_4_b} summarises the conditions and the corresponding expected and observed BPM measurements for each trial.\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|c|c||c|}\n\\hline\n& \\textbf{Conditions} & \\textbf{Expected Range} (BPM) & \\textbf{Observation} (BPM) \\\\\n\\hline\n\\hline\n\\textbf{Trial 1} & normal \\& unconstraint breathing & $12 - 35$ & $19.69$ \\\\\n\\hline\n\\textbf{Trial 2} & fast breathing & $35 - 50$ & $49.92$ \\\\\n\\hline\n\\textbf{Trial 3} & slow breathing & $8 - 15$ & $15$ \\\\\n\\hline\n\\end{tabular}\n\\caption{RRI: conditions and observed 
BPM.}\n\\label{tab:2_4_b}\n\\end{table}\n\nThe observed peaks for the three trials are at frequencies $f_{1} = 0.1641\\ Hz$, $f_{2} = 0.416\\ Hz$ and $f_{3} = 0.125\\ Hz$.\nFor trial 1, both the standard and the averaged periodograms have a peak at $f_{1} = 0.1641\\ Hz$; however, the averaged periodograms, due to their inadequate frequency resolution,\nfail to capture its harmonics, which are hardly visible even in the standard periodogram. The trial 2 peak at $f_{2} = 0.416\\ Hz$ and its second and third harmonics are visible\nusing any of the provided methods. Lastly, the trial 3 peak at $f_{3} = 0.125\\ Hz$ and its harmonics are clearly captured by both the standard and the averaged periodograms.\n\n\\pagebreak\n\n%% c)\n\\item\n%\n\nThe power spectral density of the three trials is estimated by fitting AR processes of order $p$, and figures \\ref{fig:2_4_c_1}, \\ref{fig:2_4_c_2} and \\ref{fig:2_4_c_3} are obtained.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/c/standard_ar-trial1}\n    \\caption{RRI Trial 1: AR process spectral estimation.}\n    \\label{fig:2_4_c_1}\n\\end{figure}\n\nFor trial 1, for $p \\geq 11$, the fundamental frequency $f_{1} = 0.1641\\ Hz$ and its first three harmonics are identified, corroborating the results of the previous part.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/c/standard_ar-trial2}\n    \\caption{RRI Trial 2: AR process spectral estimation.}\n    \\label{fig:2_4_c_2}\n\\end{figure}\n\nFigure \\ref{fig:2_4_c_2} depicts the AR spectral estimate of trial 2, which again for $p \\geq 11$ adequately captures the fundamental frequency and its first three harmonics.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/assets/c/standard_ar-trial3}\n    \\caption{RRI Trial 3: AR process spectral estimation.}\n    \\label{fig:2_4_c_3}\n\\end{figure}\n\nLast but not least, for $p \\geq 16$ the trial 3 AR($p$) model resolves all first six harmonics, verifying the respiration rate reported in the previous part.\n\nOverall, the AR process models can provide reliable estimates when the generative process order $p$ is known, or when the noise power is negligible compared to the signal power.\nWhen under-modelling is performed ($p < p_{true}$), the estimates cannot identify the frequency peaks, while in the presence of noise, over-modelling ($p > p_{true}$) can\nlead to overfitting noise and thus provide spurious peaks. 
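\n\nAs an aside, the AR($p$) spectral estimate used above can be sketched in a few lines of Python via the Yule-Walker equations (this sketch is ours and not the coursework code; the order and frequency grid are arbitrary):\n\\begin{verbatim}\n# AR(p) PSD estimate via Yule-Walker (illustrative sketch).\nimport numpy as np\n\ndef ar_psd(x, p, nfft=1024):\n    x = x - np.mean(x)\n    n = len(x)\n    # Biased autocorrelation estimates r[0..p]\n    r = np.array([np.dot(x[:n-k], x[k:]) / n for k in range(p + 1)])\n    # Solve the Yule-Walker equations R a = -r[1:]\n    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])\n    a = np.linalg.solve(R, -r[1:])\n    sigma2 = r[0] + np.dot(a, r[1:])   # driving-noise variance\n    # PSD = sigma2 / |1 + sum_k a_k exp(-i w k)|^2\n    w = np.linspace(0, np.pi, nfft)\n    A = 1 + np.exp(-1j * np.outer(w, np.arange(1, p + 1))) @ a\n    return w, sigma2 / np.abs(A)**2\n\\end{verbatim}\n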
Nonetheless, compared to the standard and averaged periodogram methods, the trade-off between variance and frequency resolution\nis not present in the case of AR models.\n\n%\n\\end{enumerate}", "meta": {"hexsha": "08d46624b4f181bcd64fe9481bd62718d2b12f8a", "size": 6218, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/index.tex", "max_stars_repo_name": "filangel/ASPMI", "max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z", "max_issues_repo_path": "tex/report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/index.tex", "max_issues_repo_name": "AmjadHisham/ASPMI", "max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/index.tex", "max_forks_repo_name": "AmjadHisham/ASPMI", "max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z", "avg_line_length": 48.2015503876, "max_line_length": 178, "alphanum_fraction": 0.7444515922, "num_tokens": 1838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8354835452961425, "lm_q1q2_score": 0.5645902046450271}}
{"text": "\\chapter{Methods}\n\\begin{comment}\n\t0. Verlet Step\n\t1. Describe the Lenard Jones Potential\n\t2. Describe the Berendsen Thermostat\n\t3. Describe the Gupta \n\t\n\\end{comment}\n\\begin{comment}\n\tAtoms are discreticed into the Postitions, Velocities and Forces \n\tSecond propagate the atom in the discriticed realm given constant Forces\n\t\t-> Velocity-Verlet Integration\n\t\n\tCompute Forces somehow\n\t-> Potentials\n\tLJ Potential \n\tGupta Potential\n\t\n\tThermodynamic Effects\n\tBerendsen Thermostat\n\t\t->Gentle Way of resealing the Velocities\n\t\t\n\\end{comment}\n%end general\n\\begin{comment}\ngoals of a md simulation\n- determine the motion of each atom\n\t-> solve equation of motion \n\t-> compute forces\n\t-> From the dirivation of the potential energy we can get the forces\n\n- disciticed in time\n\n\\end{comment}\nThe goal of a molecular dynamics simulation is to determine the motion of each individual atom. To achieve this we have to solve the equations of motion and compute the forces that affect each atom.\n\\begin{equation}\n\t\\label{potentialForceEquation}\n\t\\overrightarrow{f_{k}}(\\overrightarrow{r_{k}}) = -\\cfrac{\\displaystyle\\partial}{\\displaystyle \\partial \\overrightarrow{r_{k}}} E_{\\mathrm{pot}}(\\{\\overrightarrow{r_{k}}\\}) \n\\end{equation}\nThe forces can be derived from the potential energy as shown in Eq.  \\ref{potentialForceEquation} \\cite[][]{molDymCourse}. \n\n\n\\section{Integration}\n\\begin{comment}\n- obtain force from the previous equation\n- dirivation in v. r.. from the mass\n- system can be discribed just by the velocity and postion of each atom\n- in the simulation the positions and velocities of the individual atoms have to be tracked and updated accordingly\n- most used  is the Velocity-Verlet Algorithm\n\\end{comment}\nFrom Eq. \\ref{potentialForceEquation} we can obtain the force acting on each individual atom with its acceleration, velocity and position as shown in  Eq. \\ref{potentialForceEquationForEachAtom}. \n\\begin{equation}\n\t\\label{potentialForceEquationForEachAtom}\n\t\\overrightarrow{f_{i}} = \\cfrac{\\displaystyle\\partial E_{\\mathrm{pot}}}{\\displaystyle \\partial \\overrightarrow{r_{i}}} = m_{i}\\overrightarrow{a_{i}} = m_{i}\\overrightarrow{\\dot{v_{i}}} = m_{i}\\overrightarrow{\\ddot{r_{i}}}\n\\end{equation}\nAs is shown the system can be described just by the velocities and positions of the atoms. \nIn the implementation we hold these values in a container-class to ensure that they are consecutive in  memory, which speeds up the simulation. \n\\par\n%to porpagate\nIn the next step we need to propagate the simulation forward in time. A popular algorithms to update the positions and velocities is the Velocity-Verlet integration algorithm.\nThe scheme is often split up into two steps, the prediction step shown in the Eqs.  \\ref{verletPrediction1} and \\ref{verletPrediction2}, and the correction step shown in Eq. \\ref{verletCorrection} \\cite[cf. ][]{molDymCourse}. 
The force which is needed in the second step can be updated beforehand, based on the new positions from the first step.\n%Verlet steps\n\\begin{equation}\n\t\\label{verletPrediction1}\n\t\\overrightarrow{v_{i}}(t+\\Delta t/2) = \n\t\\overrightarrow{v_{i}}(t) + \n\t\\frac{\\overrightarrow{f_{i}}(t)\\Delta t}{2m_{i}}\n\\end{equation}\n\n\\begin{equation}\n\t\\label{verletPrediction2}\n\t\\overrightarrow{r_{i}}(t+ \\Delta t) = \n\t\\overrightarrow{r_{i}}(t) + \\overrightarrow{v_{i}}(t + \\Delta t/2)\\Delta t\n\\end{equation}\n\n\\begin{equation}\n\t\\label{verletCorrection}\n\t\\overrightarrow{v_{i}}(t+\\Delta t) = \\overrightarrow{v_{i}}(t+\\Delta t/2) +\n\t\\frac{\\overrightarrow{f_{i}}(t + \\Delta t)\\Delta t}{2m_{i}}\n\\end{equation}\nWe are now able to propagate atoms forward under a constant force; next, we have to actually model the forces affecting each atom.\n\n\n\\section{Lennard-Jones Potential}\n\\begin{comment}\n- pair potential\n\\end{comment}\n%General Formula for the dirivation\nA popular pair potential to model interatomic interactions is the Lennard-Jones potential. \nThe goal is to model the attractive van der Waals force and the Pauli repulsion. \n\\begin{equation}\n\tV(r) = 4\\epsilon\\bigg[\\Big(\\frac{\\sigma}{r}\\Big)^{12}- \\Big(\\frac{\\sigma}{r}\\Big)^{6} \\bigg]\n\\end{equation}\nAs shown in Eq. \\ref{potentialForceEquation} we can derive the forces from the potential energy. \nIn Eq. \\ref{potentialEquationAtoms} we formulate the equation for each atom. \nWe obtain the force acting on atom $k$ by summing, over all other atoms $i$, the derivative of the pair potential between atoms $k$ and $i$ times the unit vector pointing from atom $k$ to atom $i$. \n\\begin{equation}\n\t\\label{potentialEquationAtoms}\n\t\\overrightarrow{f_{k}} = -\\frac{\\partial E_{\\mathrm{pot}}}{\\partial  \\overrightarrow{r_{k}}} = \\sum_{i}^{}\\frac{\\partial V}{\\partial r_{ik}} \\hat{r_{ik}}\n\\end{equation}\n%Formulation of the lj potential\nTo implement the equation we have to derive $\\partial V/ \\partial r_{ik}$ analytically, which was done using the symbolic Python library sympy.
\n%Potential derivation with python\n\\begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder]\n%\\prompt{In}{incolor}{4}{\\boxspacing}\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n\\PY{k+kn}{import} \\PY{n+nn}{sympy} \\PY{k}{as} \\PY{n+nn}{sp}\n\\PY{k+kn}{import} \\PY{n+nn}{warnings}\n\\PY{n}{warnings}\\PY{o}{.}\\PY{n}{filterwarnings}\\PY{p}{(}\\PY{l+s+s1}{\\PYZsq{}}\\PY{l+s+s1}{ignore}\\PY{l+s+s1}{\\PYZsq{}}\\PY{p}{)}\n\\PY{n}{sp}\\PY{o}{.}\\PY{n}{init\\PYZus{}printing}\\PY{p}{(}\\PY{p}{)}\n\\PY{n}{eps} \\PY{o}{=} \\PY{n}{sp}\\PY{o}{.}\\PY{n}{Symbol}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{e}\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\n\\PY{n}{sig} \\PY{o}{=} \\PY{n}{sp}\\PY{o}{.}\\PY{n}{Symbol}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{s}\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\n\\PY{n}{rad} \\PY{o}{=} \\PY{n}{sp}\\PY{o}{.}\\PY{n}{Symbol}\\PY{p}{(}\\PY{l+s+s2}{\\PYZdq{}}\\PY{l+s+s2}{r}\\PY{l+s+s2}{\\PYZdq{}}\\PY{p}{)}\n\\PY{n}{energyRad} \\PY{o}{=} \\PY{l+m+mi}{4} \\PY{o}{*} \\PY{n}{eps} \\PY{o}{*} \\PY{p}{(}\\PY{p}{(}\\PY{n}{sig}\\PY{o}{/}\\PY{n}{rad}\\PY{p}{)}\\PY{o}{*}\\PY{o}{*}\\PY{l+m+mi}{12} \\PY{o}{\\PYZhy{}} \\PY{p}{(}\\PY{n}{sig}\\PY{o}{/}\\PY{n}{rad}\\PY{p}{)}\\PY{o}{*}\\PY{o}{*}\\PY{l+m+mi}{6}\\PY{p}{)}\n\\PY{n}{energyRad}\\PY{o}{.}\\PY{n}{diff}\\PY{p}{(}\\PY{n}{rad}\\PY{p}{)}\n\t\\end{Verbatim}\n\\end{tcolorbox}\n\nWith the code snippet above, the derivative of the potential was obtained as shown in Eq. \\ref{potentialEquationAtomsDirivative}.\n\\begin{equation}\n\t\\label{potentialEquationAtomsDirivative}\n\t\\frac{\\partial V}{\\partial r} = 4 \\epsilon \\left(\\frac{6 \\sigma^{6}}{r^{7}} - \\frac{12 \\sigma^{12}}{r^{13}}\\right)\n\\end{equation}\nThe result from Eq. \\ref{potentialEquationAtomsDirivative} can be used to calculate the interatomic forces from Eq. \\ref{potentialEquationAtoms}. \nWe can optimize the implementation by exploiting Newton's third law: the force on atom $i$ due to atom $k$ is equal in magnitude and opposite in direction (and therefore in sign) to the force on atom $k$ due to atom $i$, so each pair of atoms only has to be evaluated once. \n\\par \nIn the simulation, this algorithm scales as $O(N^2)$ with the number of atoms $N$. This can be reduced to linear order $O(N)$ if we only consider atoms up to a certain distance (the cutoff distance) around each atom. This is a good approximation as the forces get smaller as the interatomic distance increases. The next goal is to generate a list containing the atoms that lie inside this cutoff distance, which can be done using domain decomposition \\cite[cf.][]{molDymCourse}.\n\\section{Berendsen Thermostat}\n\\begin{comment}\n- couple the moleclular system to a larger heat bath\n- thermostat controls the heat of the simulation so the system does not melt or evaporate\n\\end{comment}\nThe initial state of the atomic system is often far away from equilibrium. To prevent the system from melting or evaporating, thermostats are used to control the temperature, i.e. the kinetic energy, of the atoms. A rather simple thermostat, which was used in the simulation, is the Berendsen thermostat. \n\\par\nIt is typically implemented by rescaling the velocities by a certain factor as shown in Eq. \\ref{coupelingFunction}. The factor is composed of the target temperature $T_{0}$, the current temperature $T$, the time step of the simulation $\\Delta t$ and the relaxation time $\\tau$.
\n\\begin{equation}\n\t\\label{coupelingFunction}\n\t\\lambda = \\sqrt{1 + \\bigg(\\frac{T_{0}}{T} -1\\bigg)\\frac{\\Delta t}{\\tau}}\n\\end{equation}\nControlling the temperature of the simulation by rescaling the velocities of the atoms is possible because the temperature of the $N$ atoms is connected to their kinetic energy via the following equation \\cite[cf.][]{molDymCourse}. \n\\begin{equation}\n\t\\label{bolzmann}\n\t\\frac{3}{2} N k_{\\mathrm{B}} T = \\sum_{i} \\frac{1}{2} m_{i} v_{i}^2\n\\end{equation}\nWith Eq. \\ref{bolzmann} the current temperature can be calculated and put into Eq. \\ref{coupelingFunction}.\n\n\n\\section{Embedded-Atom Method Potentials}\n\\begin{comment}\n- describe Units \n- better discribed in the course\n- good model for metalic systems\n- work with gold clusters \n\n\\end{comment}\nEmbedded-atom potentials are a good model for metallic bonding in solid or liquid states of different materials. The method used in the code was developed by Gupta \\cite{gupta} and the parameters were taken from Cleri \\& Rosato \\cite{rosato}. \n\\par \n%TODO explain a bit\n\\begin{comment}\npair potentials cant model crystals as they have 3 independet elastic constants\nput pair potentials have only two -> cauchy pressure\nrather heuristic description in the lecture, like basic chemistry\nE_pot = E_repulsion + E_embedding (embedding energy is connected to electric density)\nE_repulustion is typically a pair potential\nE_emedding is a funtiunal of the density in r\n\\end{comment}\nIn contrast to pair potentials, the potential energy is calculated from the two contributions shown in Eq. \\ref{EAM-basic}. The first contribution, $E_{\\mathrm{repulsion}}$, is modeled as a pair potential. The second contribution, $E_{\\mathrm{bonding}}$, models the bonding mediated by the electron density between the metal atoms. \n\\begin{equation}\n\t\\label{EAM-basic}\n\tE_{\\mathrm{pot}}  = E_{\\mathrm{repulsion}} + E_{\\mathrm{bonding}}\n\\end{equation}\n \nEqs. \\ref{rosatto-repulsion} and \\ref{rosatto-bonding} show how $E_{\\mathrm{repulsion}}$ and $E_{\\mathrm{bonding}}$ were implemented in the code. The parameters $A$, $\\xi$, $p$ and $q$ are also taken from the paper of Cleri and Rosato and can be fitted to different types of metals \\cite{rosato}. \n\\begin{equation}\n \t\\label{rosatto-repulsion}\n \t\\mathrm{E}_{\\mathrm{R}}^{i} = \\sum_{j} A e^{-p(r_{ij}/r_{0} - 1)}\n \\end{equation}\n\n\\begin{equation}\n\t\\label{rosatto-bonding}\n\t%\\mathrm{}_{}^{}\n\t\\mathrm{E}_{\\mathrm{B}}^{i} = - \\big\\{\\sum_{j}\\xi^{2} e^{-2q(r_{ij}/r_{0}-1)} \\big\\}^{1/2}\n\\end{equation}\n\nTo finalize the implementation, the forces between the atoms also have to be calculated. For this, Eq. \\ref{potentialEquationAtoms} can be considered again.
The expression for the derivative has to be obtained analytically, analogously to the Lennard-Jones case; a sketch of this derivation is given below.\n", "meta": {"hexsha": "f44ba81ee34091d668fdc393a101d2ac64f650f4", "size": 10707, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "AALatex/Tex/methods.tex", "max_stars_repo_name": "cmoser8892/MoleDymCode", "max_stars_repo_head_hexsha": "9077289a670c6cb0ed9e1daac5a03b51c83bc6fb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "AALatex/Tex/methods.tex", "max_issues_repo_name": "cmoser8892/MoleDymCode", "max_issues_repo_head_hexsha": "9077289a670c6cb0ed9e1daac5a03b51c83bc6fb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AALatex/Tex/methods.tex", "max_forks_repo_name": "cmoser8892/MoleDymCode", "max_forks_repo_head_hexsha": "9077289a670c6cb0ed9e1daac5a03b51c83bc6fb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.564516129, "max_line_length": 530, "alphanum_fraction": 0.7342859811, "num_tokens": 3233, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430394931456, "lm_q2_score": 0.679178699175393, "lm_q1q2_score": 0.5644946483916369}}
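In the same spirit as the Lennard-Jones derivation above, the pairwise factors of the Cleri--Rosato potential can be differentiated symbolically. A minimal sympy sketch (the symbol names are illustrative; the chain rule through the square root of $E_{\\mathrm{B}}$ still has to be applied on top of the pair-term derivative):\n\\begin{verbatim}\nimport sympy as sp\n\nA, xi, p, q, r, r0 = sp.symbols("A xi p q r r_0", positive=True)\n\npair_rep = A * sp.exp(-p * (r / r0 - 1))          # one pair term of E_R\npair_den = xi**2 * sp.exp(-2 * q * (r / r0 - 1))  # one pair term under the root of E_B\n\nprint(sp.simplify(sp.diff(pair_rep, r)))  # expected: -A*p*exp(-p*(r/r_0 - 1))/r_0\nprint(sp.simplify(sp.diff(pair_den, r)))  # expected: -2*q*xi**2*exp(-2*q*(r/r_0 - 1))/r_0\n\\end{verbatim}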
{"text": "% !TeX root = ../assembly-lines-analysis-presentation.tex\n% !TeX encoding = UTF-8\n% !TeX spellcheck = en_GB\n\n\\section{Modelling and solution technique}  \n  \\subsection{Evaluation of performance measures}\n    \\begin{frame}{Time To Done}\n      \\begin{equation*}\n        \\mbox{TTD}(k,\\omega) := \\left\\{ \\begin{array}{ll}\n          \\displaystyle \\sum_{\\gamma\\in \\phi_k} P_{k, \\gamma,\\omega} \\cdot (R(k,\\gamma) + Z(k,\\gamma)), &  \\mbox{ if } \\sigma_k=\\mbox{\\em producing}\n          \\medskip\\\\\n          \\mbox{TTD}(k-1,\\omega) + V(k), & \\mbox{ if } \\sigma_k=\\mbox{\\em idling}\n          \\medskip\\\\\n          0, & \\mbox{ if } \\sigma_k=\\mbox{\\em done}\n        \\end{array} \\right.\n        \\label{eq:ttd}\n      \\end{equation*}\n      \n      \\begin{itemize}\n        \\item $P_{k,\\gamma,\\omega}$ probability weight that $\\pl{WS}_k$ is in phase $\\gamma$ according to $\\omega$\n        \\item $R(k,\\gamma)$ \\textit{remaining time} in phase $\\gamma$ of $\\pl{WS}_k$\n        \\item $Z(k,\\gamma)$ \\textit{execution time} of phases of $\\pl{WS}_k$ that follow $\\gamma$\n        \\item $V(k)$ \\textit{production time} of $\\pl{WS}_k$\n      \\end{itemize}\n      \n      \\begin{center}\\hspace{-0.5cm}\\scalebox{0.8}{\\input{img/assembly_line_TTD}}\\end{center}\n      \n      Backward recursive evaluation\n    \\end{frame}\n    \n    \\begin{frame}{Time To Idle}\n      \\begin{equation*}\n        \\mbox{TTI}(k,\\omega) := \\left\\{ \\begin{array}{ll}\n          \\max\\{\\mbox{TTD}(k,\\omega),\\mbox{TTI}(k+1,\\omega)\\}, & \\mbox{ if } \\sigma_k\\in\\{\\mbox{\\em producing}, \\mbox{\\em done}\\}\n          \\medskip\\\\\n          0, & \\mbox{ if } \\sigma_k=\\mbox{\\em idling}\n        \\end{array} \\right.\n        \\label{eq:tti}\n      \\end{equation*}\n      \n      \\begin{itemize}\n        \\item $\\mbox{TTI}(k,\\omega)=\\max\\{ \\mbox{TTD}(k,\\omega), \\ldots, \\mbox{TTD}(k+n,\\omega) \\}$\n        \\begin{itemize}\n          \\item $\\pl{WS}_j$ producing/done $\\forall j \\in [k,k+n]$\n          \\item either $\\pl{WS}_{k+n}$ last workstation or $\\pl{WS}_{k+n+1}$ idling\n        \\end{itemize}\n        \\item $\\pl{WS}_k$ becomes idle when the bottleneck finishes its production\n      \\end{itemize}\n      \n      \\begin{center}\\hspace{-1.5cm}\\scalebox{0.8}{\\input{img/assembly_line_TTI}}\\end{center}\n      \n      Forward recursive evaluation\n    \\end{frame}\n    \n    \\begin{frame}{Time To Start Next}\n      \\begin{equation*}\n        \\mbox{TTSN}(k,\\omega) := \\max\\{ \\mbox{TTI}(k,\\omega), \\mbox{TTD}(k-1,\\omega) \\}\n        \\label{eq:ttn}\n      \\end{equation*}\n      \n      \\vspace{1em}\n      \\begin{center}\\hspace{-1.5cm}\\scalebox{0.8}{\\input{img/assembly_line_TTSN}}\\end{center}\n      \n      Forward and backward recursive evaluation\n    \\end{frame}\n    \n  \\subsection{Disambiguation of phases}\n    \\begin{frame}{Disambiguation of observed phases}\n      Resolve observed (producing) phases' ambiguity\n      \\begin{itemize}\n        \\item steady-state probability that $\\pl{WS}_k$ is in phase $\\gamma$ according to $\\omega$\n      \\end{itemize}\n      Given observation $\\phi_k$ for workstation $\\pl{WS}_k$\n      \\begin{itemize}\n        \\item we compute probability $P_{k,\\gamma,\\omega}$\n        \\item that it was actually $\\gamma$ that produced $\\phi_k$\n      \\end{itemize}\n      \n     \t\\begin{equation*}\n       \tP_{k,\\gamma,\\omega} = \\frac{\\tilde{\\pi}(\\gamma)}{\\displaystyle 
\\sum_{\\gamma' \\in \\phi_k} \\tilde{\\pi}(\\gamma')}\n       \t\\label{eq:probabilityObservation}\n     \t\\end{equation*}\n     \t\n     \t\\begin{itemize}\n       \t\\item $\\tilde{\\pi}(\\gamma)$ steady-state probability of phase $\\gamma$ in an isolated model of $\\pl{WS}_k$\n     \t\\end{itemize}\n    \\end{frame}\n    \n    \\begin{frame}{Isolated workstation model}\n      The isolated workstation model represents a workstation\\\\\n      repeatedly processing a product\n      \\begin{itemize}\n        \\item one product being processed\n        \\item after its production, it's moved back to the entry point of the workstation\n      \\end{itemize}\n      \n      \\begin{center}\\scalebox{0.6}{\\input{img/isolated_workstation}}\\end{center}\n      \n      It can be used for two reasons\n      \\begin{itemize}\n        \\item steady-state probabilities of producing phases are independent\n        \\item the inspection is at steady-state\n        \\begin{itemize}\n          \\item arrivals and productions can be considered in equilibrium\n        \\end{itemize}\n      \\end{itemize}\n    \\end{frame}\n    \n  \\subsection{Remaining time}\n    \\begin{frame}{Remaining time}\n      Evaluation of $F_{R(k,\\gamma)}(t) = $ CDF of $R(k,\\gamma)$\n      \\begin{itemize}\n        \\item $R(k,\\gamma)$ \\textit{remaining time} in phase $\\gamma$ of $\\pl{WS}_k$\n      \\end{itemize}\n      \n      \\vspace{1em}\n      Problem!\n      \\begin{itemize}\n        \\item remaining times of enabled GEN transitions are \\textit{dependent}\n        \\item joint probabilities don't allow for a compositional approach\n      \\end{itemize}\n      \n      \\vspace{1em}\n      $\\sfrac{1}{3}$ \\textit{Immediate} approximation\n      \\begin{itemize}\n        \\item assume that phase $\\gamma$ is inspected at its ending\n        \\begin{itemize}\n          \\item $\\tilde{F}_{R(k, \\gamma)}(t) = 1 \\quad \\forall t$\n        \\end{itemize}\n        \\item represents an \\textit{upper bound}\n      \\end{itemize}\n      \n      \\vspace{1em}\n      $\\sfrac{2}{3}$ \\textit{Newly enabled} approximation\n      \\begin{itemize}\n        \\item assume that phase $\\gamma$ is inspected at its beginning\n        \\begin{itemize}\n          \\item $\\tilde{F}_{R(k, \\gamma)}(t) = F_{\\gamma}(t)$ \n          \\item $F_{\\gamma}(t)$ original CDF of the duration of $\\gamma$\n        \\end{itemize}\n        \\item represents a \\textit{lower bound}\n      \\end{itemize}\n    \\end{frame}\n    \n    \\begin{frame}{Remaining time}\n      $\\sfrac{3}{3}$ \\textit{Independent remaining times} approximation\n      \\begin{itemize}\n        \\item consider the remaining times of ongoing phases as \\textit{independent}\n        \\item represents a (better) \\textit{lower bound}\n      \\end{itemize}\n      \n      \\begin{block}{Theorem: positive correlation \\& stochastic order}\n        If $\\hat{R}$ is an independent version of vector $R$ of positively correlated remaining times of ongoing phases, then $\\hat{R} \\geq_{st} R$\n      \\end{block}\n      \n      \\vspace{1em}\n      Steady-state distribution of $\\hat{R}(k,\\gamma)$\\\\\n      computed according to the Key Renewal Theorem\\footnote{\\scriptsize Serfozo, R., 2009. Basics of applied stochastic processes.
Springer Science \\& Business Media.}\n      \\begin{equation*}\n        \\tilde{F}_{R(k,\\gamma)}(t) = \\frac{1}{\\mu} \\int_{0}^{t} [1 - F_{\\gamma}(s)]ds\n      \\end{equation*}\n      \\begin{itemize}\n        \\item $\\mu$ expected value of $F_{\\gamma}(t)$\n      \\end{itemize}\n    \\end{frame}\n    \n  \\subsection{Execution and production time}\n    \\begin{frame}{Execution and production time}\n      Evaluation of $F_{Z(k,\\gamma)}(t)$ and $F_{V(k)}$\n      \\begin{itemize}\n        \\item $Z(k,\\gamma)$ \\textit{execution time} of phases of $\\pl{WS}_k$ that follow $\\gamma$\n        \\item $V(k)$ \\textit{production time} of $\\pl{WS}_k$\n      \\end{itemize}\n    \n      \\vspace{1em}\n      CDFs of $Z(k,\\gamma)$ and $V(k)$ are computed as transient probabilities\n      \n      \\begin{itemize}\n        \\item $F_{Z(k,\\gamma)}$ transient probability from phase after $\\gamma$ to final phase of $\\pl{WS}_k$\n        \\item $F_{V(k)}$ transient probability from first phase to final phase of $\\pl{WS}_k$\n      \\end{itemize}\n      \n      \\vspace{2em}\n      Upper/lower bounds for \\textit{TTD}, \\textit{TTI} and \\textit{TTSN} can be evaluated\n      \\begin{itemize}\n        \\item \\textit{convolution} and \\textit{max} operations maintain stochastic order\n      \\end{itemize}\n    \\end{frame}", "meta": {"hexsha": "08c6588817f3e20c59088b4c8452f49521f4abec", "size": 7638, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phd/presentations/imt/assembly-line-analysis/body/modelling_solution_technique.tex", "max_stars_repo_name": "oddlord/uni", "max_stars_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "phd/presentations/imt/assembly-line-analysis/body/modelling_solution_technique.tex", "max_issues_repo_name": "oddlord/uni", "max_issues_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phd/presentations/imt/assembly-line-analysis/body/modelling_solution_technique.tex", "max_forks_repo_name": "oddlord/uni", "max_forks_repo_head_hexsha": "a1226bd41b0208d0aac08c15c3372a759df0cb63", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.6276595745, "max_line_length": 168, "alphanum_fraction": 0.6022518984, "num_tokens": 2341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438950947024555, "lm_q2_score": 0.6688802735722128, "lm_q1q2_score": 0.5644647818108268}}
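To make the recursive structure of the performance measures concrete, a deliberately simplified sketch is given below: it replaces every distribution by a single expected value, whereas the actual method propagates CDFs through \\textit{convolution} and \\textit{max} operations (the \\texttt{ws[k]} objects with \\texttt{state}, \\texttt{phases} and \\texttt{V} attributes are purely illustrative, and taking the maximum of expected values is itself an approximation):\n\\begin{verbatim}\ndef ttd(k, ws):\n    """Expected Time To Done of workstation k (backward recursion)."""\n    s = ws[k]\n    if s.state == "producing":\n        return sum(P * (R + Z) for (P, R, Z) in s.phases)  # phases in phi_k\n    if s.state == "idling":\n        return ttd(k - 1, ws) + s.V   # wait for the upstream station, then produce\n    return 0.0                        # done\n\ndef tti(k, ws):\n    """Expected Time To Idle of workstation k (forward recursion)."""\n    if ws[k].state == "idling":\n        return 0.0\n    downstream = tti(k + 1, ws) if k + 1 < len(ws) else 0.0\n    return max(ttd(k, ws), downstream)\n\ndef ttsn(k, ws):\n    """Expected Time To Start Next of workstation k."""\n    upstream = ttd(k - 1, ws) if k > 0 else 0.0\n    return max(tti(k, ws), upstream)\n\\end{verbatim}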
{"text": "\\documentclass{ximera}\n\\input{../preamble}\n\\title{Trigonometric Substitions}\n%%%%%\\author{Philip T. Gressman}\n\n\\begin{document}\n\\begin{abstract}\n  We practice executing trigonometric substitutions.\n\\end{abstract}\n\\maketitle\n\n\\section*{(Video) Calculus: Single Variable}\n\\youtube{D_2N1OLQjyk}\n\n\\section*{Online Texts}\n\\begin{itemize}\n\\item \\link[OpenStax II 3.3: Trigonometric Substitution]{https://openstax.org/books/calculus-volume-2/pages/3-3-trigonometric-substitution}\n\\item \\link[Ximera OSU: Trigonometric Substitution]{https://ximera.osu.edu/mooculus/calculus2/trigonometricSubstitution/titlePage}\n\\item \\link[Community Calculus 8.3: Trigonometric Substitution]{https://www.whitman.edu/mathematics/calculus_online/section08.03.html}\n\\end{itemize}\n\n\n\\section*{Examples}\n\n\\begin{example}\n(c.f.~APEX Calculus Example 6.4.2) Compute the indefinite integral\n\\[ \\int \\frac{dx}{\\sqrt{5 + x^2}}. \\]\n\\begin{itemize}\n\\item The structure of this integrand is adapted to a trigonometric substitution of \\wordChoice{\\choice{secant}\\choice{sine}\\choice[correct]{tangent}} type, so we let $x = \\sqrt{5} \\answer{\\tan \\theta}$ (type out the word theta). This gives $dx = \\sqrt{5} \\answer{\\sec^2 \\theta}~d \\theta$.\n\\item We use the identity $\\sec^2 \\theta = \\tan^2 \\theta + 1$, the quantity $5 + x^2$ can be rewritten using our substitution to equal $\\answer{5 \\sec^2 \\theta}$. Therefore\n\\[ \\int \\frac{dx}{\\sqrt{5+x^2}} dx = \\int \\answer{\\sec \\theta} \\, d \\theta = \\answer{\\ln | \\sec \\theta + \\tan \\theta|} + C. \\]\n(When writing out the indefinite integral, don't forget absolute values.)\n\\item We construct a reference triangle compatible with our substitution. In this case, that means we need a right triangle with an angle $\\theta$ which satisfies $x/\\sqrt{5} = \\tan \\theta$:\n\\begin{center}\n\\begin{image}\n\\includegraphics[width=4in]{images/trigsub02.png}\n\\end{image}\n\\end{center}\n\\item Using this reference triangle, we get that\n\\[ \\tan \\theta = \\frac{x}{\\sqrt{5}} \\qquad \\sec \\theta = \\answer{ \\frac{\\sqrt{x^2+5}}{\\sqrt{5}}}. \\]\nUsing these formulas, we can write\n\\[ \\ln |\\sec \\theta + \\tan \\theta| = \\ln \\left| \\answer{\\frac{x}{\\sqrt{5}} + \\frac{\\sqrt{x^2+5}}{\\sqrt{5}}} \\right| . \\]\n\\item We conclude that\n\\[ \\int \\frac{dx}{\\sqrt{x^2+5}} = \\ln | x + \\sqrt{x^2 + 5} | + C. \\]\n(Note that the factor of $\\sqrt{5}$ in the denominator of the logarithm is not necessary when we already have an arbitrary constant $C$ in our answer.)\n\\end{itemize}\n\\end{example}\n\n\\begin{example}\nCompute the indefinite integral\n\\[ \\int \\frac{dx}{x^2 \\sqrt{x^2 - 3}}. \\]\n\\begin{itemize}\n\\item The structure of this integrand is adapted to a trigonometric substitution of \\wordChoice{\\choice[correct]{secant}\\choice{sine}\\choice[correct]{tangent}} type, so we let $x = \\sqrt{3} \\answer{\\tan \\theta}$. This gives $dx = \\sqrt{3} \\answer{\\sec \\theta \\tan \\theta}~d \\theta$.\n\\item We use the identity $\\sec^2 \\theta = \\tan^2 \\theta + 1$, the quantity $x^2 - 3$ can be rewritten using our substitution to equal $\\answer{3 \\tan^2 \\theta}$. Therefore\n\\[ \\int \\frac{dx}{x^2 \\sqrt{x^2 - 3}} = \\int \\answer{ \\frac{\\sqrt{3} \\sec \\theta \\tan \\theta}{3 \\sec^2 \\theta \\sqrt{3} \\tan \\theta}} d \\theta = \\answer{\\frac{\\sin \\theta}{3}} + C.\\] \n\\item We construct a reference triangle compatible with our substitution. 
In this case, that means we need a right triangle with an angle $\\theta$ which satisfies $x/\\sqrt{3} = \\sec \\theta$:\n\\begin{center}\n\\begin{image}\n\\includegraphics[width=4in]{images/trigsub03.png}\n\\end{image}\n\\end{center}\n\\item Using this reference triangle, we get that\n\\[ \\sin \\theta = \\answer{\\frac{\\sqrt{x^2-3}}{x}}.\\]\n\\item We conclude that\n\\[ \\int \\frac{dx}{x^2 \\sqrt{x^2-3}} = \\answer{ \\frac{\\sqrt{x^2-3}}{3x}} + C. \\]\n\\end{itemize}\n\\end{example}\n\n\\begin{example}\n(cf.~APEX Calculus Example 6.4.1) Compute the indefinite integral\n\\[ \\int \\sqrt{9 - x^2} dx. \\]\n\\begin{itemize}\n\\item The structure of this integrand is adapted to a trigonometric substitution of \\wordChoice{\\choice{secant}\\choice[correct]{sine}\\choice{tangent}} type, so we let $x = 3 \\answer{\\sin \\theta}$. This gives $dx = 3 \\answer{\\cos \\theta}~d \\theta$.\n\\item Using the identity $\\cos^2 \\theta + \\sin^2 \\theta = 1$, the expression $9 - x^2$ can be rewritten using the formula above as $\\answer{(3 \\cos \\theta)^2}$. Therefore\n\\[ \\int \\sqrt{9-x^2} dx = \\int \\answer{9 (\\cos \\theta)^2} d \\theta. \\]\n\\item We use the power reduction formula $\\cos^2 t = (1 + \\cos 2t)/2$ and conclude\n\\[ \\int \\sqrt{9-x^2} dx = \\frac{9}{2} \\int \\left( 1 + \\cos 2 \\theta \\right) d \\theta = \\frac{9}{2} \\left[ \\theta + \\frac{1}{2} \\sin 2 \\theta \\right] + C. \\]\n\\item Now we construct a reference triangle which is compatible with the substitution we made. In this case, this means we need a right triangle for which $x/3 = \\sin \\theta$:\n\\begin{center}\n\\begin{image}\n\\includegraphics[width=4in]{images/trigsub01.png}\n\\end{image}\n\\end{center}\n\\item Using the reference triangle above, we have the identities\n\\[ \\sin \\theta = \\frac{x}{3} \\qquad \\cos \\theta = \\answer{ \\frac{\\sqrt{9-x^2}}{3}} \\qquad \\theta = \\arcsin \\frac{x}{3}. \\]\n\\item The quantity $\\sin 2 \\theta$ is difficult to write as a function of $x$ as it stands, so we first use the double-angle formula $\\sin 2 \\theta = 2 \\sin \\theta \\cos \\theta$, which gives $\\frac{1}{2} \\sin 2 \\theta = \\sin \\theta \\cos \\theta$. We therefore conclude that\n\\[ \\int \\sqrt{9-x^2} dx = \\answer{ \\frac{9}{2} \\left( \\arcsin \\frac{x}{3} + \\frac{x \\sqrt{9-x^2}}{9} \\right)} + C.
\\]\n(Note: Ximera will interpret $\\sin^{-1} x$ as $1/(\\sin x)$, so write $\\arcsin$ when you mean inverse sine.)\n\\end{itemize}\n\\end{example}\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "e6a288170893888b6eec387d276844da8db67f39", "size": 5527, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "techniques/11trigsubwarm.tex", "max_stars_repo_name": "ptgressman/math104", "max_stars_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "techniques/11trigsubwarm.tex", "max_issues_repo_name": "ptgressman/math104", "max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "techniques/11trigsubwarm.tex", "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.5729166667, "max_line_length": 289, "alphanum_fraction": 0.6884385743, "num_tokens": 1838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710086, "lm_q2_score": 0.8438951005915208, "lm_q1q2_score": 0.564464774609475}}
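The three antiderivatives above can also be double-checked mechanically by differentiation, for instance with sympy (a small sketch; each printed difference should simplify to $0$ on the relevant domain):\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols("x", positive=True)\nchecks = [\n    (1 / sp.sqrt(5 + x**2), sp.log(x + sp.sqrt(x**2 + 5))),\n    (1 / (x**2 * sp.sqrt(x**2 - 3)), sp.sqrt(x**2 - 3) / (3 * x)),\n    (sp.sqrt(9 - x**2),\n     sp.Rational(9, 2) * (sp.asin(x / 3) + x * sp.sqrt(9 - x**2) / 9)),\n]\nfor integrand, antiderivative in checks:\n    # d/dx of the antiderivative minus the integrand; should print 0\n    print(sp.simplify(sp.diff(antiderivative, x) - integrand))\n\\end{verbatim}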
{"text": "\\documentclass[aps,pra,notitlepage,amsmath,amssymb,letterpaper,12pt]{revtex4-1}\n\\usepackage{amsthm}\n\\usepackage{graphicx}\n\\graphicspath{{./CW1LaTeXimages/}}\n \n\\newenvironment{problem}[2][Problem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{solution}{\\begin{proof}[Solution]}{\\end{proof}}\n \n% --------------------------------------------------------------\n%                   Document Begins Here\n% --------------------------------------------------------------\n \n\\begin{document}\n \n\\title{Homework 1: Remarks on Derivatives}\n\\author{Alexis \\surname{Ford}}\n\\author{Nengyin (Helen) \\surname{Zhu}}\n\\affiliation{CS 510 -- Computing for Scientists}\n\\date{\\today}\n\n\\begin{abstract}\n\tThis paper will give an overview of the derivative, namely, the definition of the derivative of a function, as well as what it means to take the derivative of a function. \n\\end{abstract}\n\n\\maketitle\n\n\\section{Definition}\nThe derivative of a function $f(x)$, denoted $f'(x)$, is defined as\n\\begin{equation}\nf'(x)=\\lim_{h\\rightarrow0}\\frac{f(x+h)-f(x)}{h}.\n\\end{equation}\nIn words, the derivative is the instantaneous rate of change of a function $f(x)$ at $x$, but let us take a more intuitive look. Notice that $\\frac{f(x+h)-f(x)}{h}$ bears a strong resemblnce to the formula used to find the slope, $m$, of a line defined by two points, $(x, y)$ and $(x_1,y_1)$:\n\\begin{equation}\nm=\\frac{y_1-y}{x_1-x}.\n\\end{equation}\nIf we say $h=x_1-x$, and $f(x)$ is the function such that $f(x)=y$ and $f(x_1)=y_1$, we need only note that $h=x_1-x\\Rightarrow x_1=x+h$ before the two fractions become identical. Thus, what the derivative is really doing, is finding a slope. \n\nIt is easy enough to find a slope given two points, $a$ and $b$, and we can even estimate the slope of a nonlinear function at a particular point $x$ by creating a secant line using points around our point of interest. The derivative, however, is far more clever than that. By takig the limit as the distance between $a$ and $b$ approaches $0$, we can get the exact, instantaneous rate of change at our point $x$. That is, we get the slope of the line tangent to $f(x)$ at $x$.\n\n\\section{An Illustration}\nLet us take a look at some pictures to get a better idea of what exactly is going on. Consider the function $f(x)=\\frac{1}{10}(x^4-10x^2+9)$, and say we want to find the slope of this function at $x=2$. Let us start by looking at an approximation of the slope using a secant line through the points $(1,0)$ and $(3,0)$.\n\n\\begin{figure}[h!] \n  \\includegraphics[width=0.4\\textwidth]{horizSec}\n  \\caption{Secant using points $a=1$ and $b=3$, yielding the blue, horizontal line.\\\\ (Image created in WinPlot by Alexis Ford)}\n  \\label{fig:fig1}\n\\end{figure}\n\nWe can easily calculate (and see) that the slope of out blue secant line is $0$. If we move our points closer to $x=2$, say $a=1.5$ and $b=2.5$, we can see that the slope is closer to what we imagine our tangent line's slope should be (we can calculate the slope to be roughly $-1.69$), but we can get closer still.\n\n\\begin{figure}[h!] 
\n  \\includegraphics[width=0.4\\textwidth]{closerSec} \n  \\caption{Secant using points $a=1.5$ and $b=2.5$, yielding the blue line.\\\\ (Image created in WinPlot by Alexis Ford)}\n  \\label{fig:fig2}\n\\end{figure}\n\nBy taking the derivative, which forces our two points as close together as possible, we get our exact tangent line with slope $f'(2)=-\\frac{4}{5}$.\n\n\\begin{figure}[h!] \n  \\includegraphics[width=0.4\\textwidth]{tangent} \n  \\caption{A red tangent line at $x=2$ with slope $f'(2)=-\\frac{4}{5}$.\\\\ (Image created in WinPlot by Alexis Ford)}\n  \\label{fig:fig3}\n\\end{figure}\n \n\\section{Conclusion}\nDerivatives are a powerful tool; be sure not to drink and derive.\n\\end{document}\n\n%%% INSTRUCTOR COMMENTS %%%\n%\n%  Very thorough! Hopefully you can use these notes when teaching calculus next.\n%  LaTeX is a wonderful thing for creating professional-looking documents.\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "583dc0cea189cfb9e5eee038cd84cf26244bf80d", "size": 3997, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "testlatex.tex", "max_stars_repo_name": "chapman-cs510-2016f/cw-01-nv-shen", "max_stars_repo_head_hexsha": "bf273af15284a5fba837957f533ec7066d78e490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "testlatex.tex", "max_issues_repo_name": "chapman-cs510-2016f/cw-01-nv-shen", "max_issues_repo_head_hexsha": "bf273af15284a5fba837957f533ec7066d78e490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "testlatex.tex", "max_forks_repo_name": "chapman-cs510-2016f/cw-01-nv-shen", "max_forks_repo_head_hexsha": "bf273af15284a5fba837957f533ec7066d78e490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.5921052632, "max_line_length": 477, "alphanum_fraction": 0.6925193895, "num_tokens": 1166, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.843895106480586, "lm_q1q2_score": 0.5644647674081223}}
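The numbers in the illustration above are easy to reproduce; the following minimal Python sketch prints the secant slopes of the same function as the two points move together (the function and points are those used in the homework, nothing else is assumed):\n\\begin{verbatim}\n# Secant slopes of f(x) = (x**4 - 10*x**2 + 9)/10 around x = 2\n# approach the tangent slope f'(2) = -4/5 as the points move together.\nf = lambda x: (x**4 - 10 * x**2 + 9) / 10\n\nfor h in [1.0, 0.5, 0.1, 0.01, 0.001]:\n    print(h, (f(2 + h) - f(2 - h)) / (2 * h))\n# h = 1.0 gives 0.0 (the horizontal secant), h = 0.5 gives -0.6,\n# and the slopes tend towards -0.8 = -4/5.\n\\end{verbatim}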
{"text": "\n\\subsection{Hotelling's lemma}\n\n\n", "meta": {"hexsha": "be15a2dd437bf4578571b9aaaebc4e13bd026c31", "size": 34, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/economics/producer/04-02-hotellingLemma.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/economics/producer/04-02-hotellingLemma.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/economics/producer/04-02-hotellingLemma.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 6.8, "max_line_length": 30, "alphanum_fraction": 0.7352941176, "num_tokens": 9, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5643183473963982}}
{"text": "\\chapter{Introduction to Finite-Difference Time-Domain Methods}\\label{ch:FDTD}\n\n\\begin{flushright}{\\it\n``Since Newton, mankind has come to realize that the laws of physics\\\\\nare always expressed in the language of differential equations.''\\\\\n- Steven Strogatz} %\\\\\n%\\SWcomment[(https://youtu.be/O85OWBJ2ayo?t=44)]\n\\end{flushright}\n%\n\\vspace{2em}\nThis chapter introduces some important concepts needed to understand finite-difference time-domain (FDTD) methods. These techniques are what the implementation of the physical models presented later on in this document are based on. \nBy means of a simple mass-spring system and the 1D wave equation, the notation and terminology used throughout this document will be explained. \nUnless denoted otherwise, the theory presented in this chapter and the notation have been taken from \\cite{theBible}. %Before diving into the mathematics, let us go over some useful terminology.\n\n\\section{Differential equations}\\label{sec:differentialEquations}\nDifferential equations are used to describe the motion of dynamic systems, including vibrations in musical instruments. In this work, these equations are used to describe, among others, the movement of a string, an instrument body and the air pressure in an acoustic tube.\n\nA characteristic feature of these equations is that, rather than an absolute value or \\textit{state} of a system, such as displacement from the equilibrium of a string, or the pressure in a tube, the time derivative of its state -- its velocity -- or the second time derivative -- its acceleration -- is described. From this, the absolute state of the system can then be computed.\n%\nThis state is usually described by the variable $u$ which is always a function of time, i.e., $u=u(t)$. If the system is distributed in space, $u$ also becomes a function of space, i.e., $u = u(x,t)$, or with two spatial dimensions, $u = u(x,y,t)$, etc. Though this work only describes systems of up to two spatial dimensions, one can easily extend to three dimensions \\cite{Hamilton2016} and potentially higher-dimensional systems% \\cite{Bustamante2017}\\todo{maybe not such a relevant reference..}\n. See Section \\ref{sec:dimensions} for more information on dimensions.\n% one could potentially extend to systems of infinite spatial dimensions evolving over time!\n\nIf $u$ is univariate, and only a function of time, the differential equation that describes the motion of the system is called an \\textit{ordinary differential equation} (ODE). Various ways to describe the second derivative in time of $u$, or the acceleration of $u$ are\n%\n\\begin{equation}\\nonumber\n    \\begin{aligned}\n        \\frac{d^2 u}{d t^2} & \\quad \\text{(Leibniz's notation)},\\\\\n        \\ddot u\\ \\  &\\quad \\text{(Newton's notation)},\\\\[3pt]\n        D_t^2 u& \\quad \\text{(Euler's notation)}.\n    \\end{aligned}\n\\end{equation}\n%\nLeibniz's notation could be considered the most standard notation but is not necessarily compact. Newton's notation on the other hand allows for an ultra compact notation using dots above the function to denote time-derivatives. For this reason, Newton's notation will be used for ODEs in isolation. The drawback of this notation is that it can only be used for univariate functions. Finally, Euler's notation indicates a derivative using an operator which can be applied to a function. 
\n \nIf $u$ is also a function of at least one spatial dimension, the equation of motion is called a \\textit{partial differential equation} (PDE).\nThe literature uses different types of notation for taking (continuous-time) partial derivatives. Applied to a state variable $u$ these can look like \n\\begin{equation}\\nonumber\n    \\begin{aligned}\n        \\frac{\\partial^2 u}{\\partial t^2} & \\quad \\text{(Leibniz's notation)},\\\\\n        u_{tt}\\:\\,& \\quad \\text{(subscript notation)},\\\\[3pt]\n        \\ptt u\\: & \\quad \\text{(Euler's notation)},\n    \\end{aligned}\n\\end{equation}\n% \n%\n% \\begin{equation}\\nonumber\n%     \\begin{aligned}\n%         \\frac{\\partial^2 u}{\\partial t^2}, \\quad\n%         u_{tt},\\quad\n%         \\ptt u\n%     \\end{aligned}\n% \\end{equation}\n%\nwhere the subscript notation could be seen as the partial derivative counterpart to Newton's notation due to its compactness. In the remainder of this document, Euler's notation will be used for PDEs, due to its similarity to operators in discrete time (introduced in Section \\ref{sec:FDoperators}). Also, it allows for more compactness when creating bigger operators with multiple (connected) systems (see e.g. Chapter \\ref{ch:tromba}). Moreover, state-of-the-art literature in the field of FDTD methods for sound synthesis uses this notation as well \\cite{Bilbao2018}.\n\n% Often-used partial derivatives and their meanings \\todo{maybe not yet as this is super general still} are shown below\n\n% \\begin{minipage}[c]{0.49\\textwidth}\n%     \\begin{align*}\n%         \\ptt u &\\quad \\text{(acceleration)}\\\\\n%         \\pt u &\\quad \\text{(velocity)}\n%     \\end{align*}\n% \\end{minipage}\n% \\begin{minipage}[c]{0.49\\textwidth}\n%     \\begin{align*}\n%     \\pxx u &\\quad \\text{(curvature)}\\\\\n%     \\px u &\\quad \\text{(slope)}\n%     \\end{align*}\n% \\end{minipage}\n\n\\subsection{Dimensions and degrees of freedom}\\label{sec:dimensions}\nAll objects in the physical world are three-dimensional (3D) as they have a non-zero width, length and depth. Moreover, these objects can move in these three dimensions and thus have three translational \\textit{degrees of freedom (DoF)} (the three rotational DoF are ignored here). \nTo reduce the complexity of the models describing physical systems as well as computational complexity (computational cost), simplifications can be made to reduce both the dimensionality of the spatial distribution of a physical object as well as that of the translational DoF. \n\nGenerally, the spatial distribution of an object can be simplified if one (or more) of the dimensions are small, relative to the wavelengths of interest. A guitar string, for instance, has much greater length than its width or depth and can therefore be reduced to a one-dimensional (1D) system. If a 3D description were to be kept, the relative displacement between two locations on one cross-section along the length of the string would be taken into account. One could imagine that this displacement will always be orders of magnitude smaller than the relative displacement of two points along the string length and is thus negligible. Similarly, the thickness of a drum membrane is much smaller than its length and width and can therefore be simplified to a two-dimensional (2D) system.\\footnote{In this work, `1D' and `2D' will also be used to abbreviate `one dimension' and `two dimensions'.}  \n\nThe translational DoF, on the other hand, describe how many ``coordinates'' a state variable includes.
\nIn much of the literature on FDTD methods in the field of musical acoustics, the state variable only has one coordinate. In most string models, for example, only the transverse displacement in one polarisation is considered (see Chapter \\ref{ch:stiffString}) and the other polarisation as well as the longitudinal motion of the string (motion along the string length) are ignored. In other words, every point along the string can only move up and down, not side-to-side and not forward and back. Although this greatly simplifies the system at hand and reduces computational complexity, this is not what happens in reality. Nonlinear effects such as pitch glides due to tension modulation caused by high-amplitude string vibration are not present in the simplified model and have not been included in this project.\\todo{think about references to this project/work in this chapter that should only be focused on background} \n\nWork has been done on strings with dual (transverse) polarisation by Desvages \\cite{Desvages2018} and Desvages and Bilbao \\cite{Desvages2016} using FDTD methods. Models including longitudinal string vibration, where the longitudinal and transversal displacements are couples can be found in \\cite{theBible,Bilbao2009spring}.\nIn \\cite{Villeneuve2019}, Villeneuve and Leonard present a mass-spring network where the state of every individual mass has three translational DoF. Due to these additional DoF, these networks do capture the aforementioned effects, but greatly increase the computational complexity of the models.\n\nAlthough the dimensionality reduction ignores some of the physical processes, surprisingly realistic sounding models can be made despite these simplifications. Due to computational considerations, all models used in this work thus only have 1 translational DoF.\n\n\\subsubsection{Notation}\nWhen describing the state of a system, the spatial dimensions it is distributed over appear in the argument of the state variable. For example, the state of a 2D system with 1 translational DoF, is written as $u(x,y,t)$.\n\nThe translational DoF, on the other hand, determines the number of coordinates that the state variable describes. A 1D system with 3 translational DoF can thus be written as $\\mathbf{u}(x,t)$ where $\\mathbf{u}$ is a vector containing the coordinates for all three translational DoF.  \n\n% What this means for the notation introduced in the previous section is  the number of arguments for the state variable. As all systems (in this context) have one temporal dimension, the state of a 1D system -- such as a string -- is described using $u(x,t)$, a 2D system -- such as a membrane -- is described using $u(x,y,t)$, and the state of a simple zero-dimensional (0D) mass-spring system as $u(t)$.\n\n\\subsection{Ranges of definition and domains}\\label{sec:domains}\nWhen modelling physical systems, one needs to provide a \\textit{range of definition} over which they are defined. For a 1D system $u = u(x,t)$, ranges of definition must be given for $x$ and $t$. Usually, the temporal range $t\\geq 0$, meaning that the system is defined for non-negative time. \n\nIn space, the range of definition is usually referred to as a (spatial) \\textit{domain}, denoted by the symbol $\\D$. Using the example above, $x$ may be defined over $\\D$, which is written as $x\\in \\D$. 
For analysis purposes, infinite domains ($\\D = \\mathbb{R} = (-\\infty, \\infty)$) or semi-infinite domains ($\\D = \\mathbb{R}^+ = [0, \\infty)$) may be used, but for implementation purposes, a finite domain needs to be established. For higher-dimensional systems, one needs to define higher-dimensional domains. A 2D system $u=u(x,y,t)$, for simplicity assumed to be rectangular, may be defined over `horizontal domain' $\\D_x$ and `vertical domain' $\\D_y$, which are both 1D domains. The system is then defined for $(x,y)\\in \\D$ where $\\D = \\D_x\\times \\D_y$. \n\n\\section{Discretisation using FDTD methods}\\label{sec:discUsingFDTD}\nDifferential equations are powerful tools to describe the motion of physical systems. Despite this, only a few of these have a closed-form, or analytical, solution. More complex systems require methods that do not perfectly solve, but rather \\textit{approximate} the solutions to these equations. FDTD methods are the most straightforward approach to numerically approximate differential equations. These methods are considered to be among the most general and flexible techniques in terms of the systems they can model, and, frankly, relatively simple to understand once some familiarity with them is obtained. The main concern with these methods is the numerical stability of the eventual approximation. Conditions for stability can be mathematically derived and will be introduced in Section \\ref{sec:stabilityAnalysis}.\n\n% \\SWcomment[It is important to note that a discrete FD scheme is an \\textit{approximation} to a continuous PDE, not a sampled version of it. This means that the resulting schemes are rarely an exact solution to the original continuous equation.]\n\nFDTD methods essentially subdivide a continuous differential equation into discrete points in time and space, a process called \\textit{discretisation}. Once an ODE or PDE is discretised using these methods, it is called a \\textit{finite-difference (FD) scheme} which approximates the original differential equation. See Figure \\ref{fig:discretisation}. In the following, for generality and ease of explanation, a 1D system will be used, and (again) the theory and notation follow \\cite{theBible}.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{tikzpicture}[->,node distance=3cm,\n        thick,main node/.style={circle,draw}]\n    \n        \\node[] (image) at (0,0) {\n        \\includegraphics[width=\\textwidth]{figures/fdtd/discretisation.eps}\n        };\n    \n        \\draw [-stealth](-0.8,0) -- (0.3,0);\n        \\draw [-stealth](-5.75,1.2) -- (-5.75,-1.5);\n        \\node[] (t) at (-6, -1.2) {\\small $t$};\n        \n        \\draw[thin, ->, >=stealth] (5.0,0.88) arc (90:-90:0.25);\n        \\node[] () at (5.5, 0.6) {\\small $\\tfrac{1}{\\fs}$};\n        \n        \\draw[thin, ->, >=stealth] (5.0,0.30) arc (90:-90:0.25);\n        \\node[] () at (5.5, 0.02) {\\small $\\tfrac{1}{\\fs}$};\n\n        \\draw[thin, ->, >=stealth] (5.0,-0.28) arc (90:-90:0.25);\n        \\node[] () at (5.5, -0.56) {\\small $\\tfrac{1}{\\fs}$};\n\n        \\draw[thin, ->, >=stealth] (5.0,-0.86) arc (90:-90:0.25);\n        \\node[] () at (5.5, -1.12) {\\small $\\tfrac{1}{\\fs}$};\n\n      \\end{tikzpicture}\n      \\caption{A continuous PDE is discretised to a FD scheme. In a PDE, time passes continuously, whereas for a FD scheme, time passes in finite increments with a duration of the reciprocal of the sample rate $\\fs$.
\\label{fig:discretisation}}\n\\end{figure}\n\n\\subsection{Grid functions \\todo{FULL DOC SWEEP: check capitalisation of headings throughout document (DOING NOW)}} \\label{sec:gridFunctions}\nThe first step to approximate continuous PDEs is to define a discrete \\textit{grid} over time and space. See Figure \\ref{fig:gridExp}. A system described by state $u = u(x,t)$ defined over time $t$ and one spatial dimension $x$, can be discretised to a \\textit{grid function} $u_l^n$. Here, integers $l$ and $n$ describe the spatial and temporal indices respectively, and arise from the discretisation of the continuous variables $x$ and $t$, according to $x=lh$ and $t=nk$. The spatial step $h$, also called the \\textit{grid spacing}, describes the distance (in m) between two neighbouring \\textit{grid points}, and is closely related to the stability of the FD scheme. The temporal step $k$, or \\textit{time step}, is the time (in s) between two consecutive temporal indices and can be calculated as $k=1/\\fs$ for a sample rate $\\fs$ (in Hz). In many audio applications $\\fs = 44100$ Hz, which is the sample rate that will be used in this work (unless denoted otherwise).\n\n% \\subsubsection{Discrete Domains}\nAs mentioned in Section \\ref{sec:domains}, a 1D system needs to be defined over a temporal range of definition and one spatial domain.\nIn discrete time, $t \\geq 0$ is discretised to $n \\in \\mathbb{N}^0$.\\footnote{In this work, $\\mathbb{N}^0$ is used to denote the set of non-negative integers ($\\mathbb{N}^0 = 0, 1, 2, \n\\hdots$).} \nThe spatial domain $\\D$ can be subdivided into $N$ equal sections, or intervals, of length $h$ (see Figure \\ref{fig:gridExp}). The grid points describing the state of the system are placed at the edge of each interval, including the end points. The spatial range of interest then becomes $l\\in \\{0, \\hdots, N\\}$ and the total number of grid points is $N+1$, which is one more than the number of intervals.\n\n%The grid points interact with their neighbouring grid points according to the model at hand.\n\n% At the ends of each section there needs to be grid point describing the discrete state of the system at each sections needs to be \n\n%Consider a (continuous-time) system, $u = u(x,t)$ defined over time $t\\geq 0$ and space $x\\in \\D$ with domain $\\D = [0, L]$. Time can be subdivided in \n\nTo summarise, for a 1D system\n\\begin{equation*}\n    u(x,t) \\approxeq u_l^n \\quad \\text{with} \\quad x=lh \\quad \\text{and} \\quad t = nk,\n\\end{equation*}\n\\begin{equation*}\n    l\\in\\{0, \\hdots, N\\}\\qaq n \\in \\mathbb{N}^0.\n\\end{equation*}\n\n\\begin{figure}[h]\n    \\centering\n    % \\subfloat[If $N=5$ there are 5 sections of length $h$ and 6 grid points describing the state of the system ($l=\\{0, \\hdots, 5\\}$).\\label{fig:gridExp1}]{\\includegraphics[width=0.45\\textwidth]{figures/fdtd/gridExplanation.pdf}}\\hspace{0.06\\textwidth}\n    % \\subfloat[If $N$ is large (as is usually the case), The 1D system is divided into $N$ sections of length $h$ and $l=\\{0, \\hdots, N\\}$.\\label{fig:gridExp2}]{\\includegraphics[width=0.45\\textwidth]{figures/fdtd/gridExplanation2.pdf}}\n    \\includegraphics[width=\\textwidth]{figures/fdtd/gridFigure3.pdf}\n    \\caption{The spatio-temporal grid that appears when a 1D system $u(x,t)$ with $x\\in \\D$ is discretised to a grid function $\\uln$. The spatial domain $\\D$ is divided into $N$ intervals of length $h$ and spatial range of interest $l=\\{0, \\hdots, N\\}$.
Time is subdivided into time steps of duration $k$ and together with the discretised domain, forms a grid over space and time. Some grid points are labelled with the appropriate grid function. \\label{fig:gridExp}}\n\\end{figure}\n\n\n\\subsection{Finite-difference operators}\\label{sec:FDoperators}\nNow that the state variable has a discrete counterpart, this leaves the derivatives to be discretised, or approximated. We start by introducing \\textit{shift operators} that can be applied to a grid function and `shift' its indexing, either temporally or spatially. These are denoted by $e_s$, where subscript $s$ denotes the type and direction of the shift. Forward and backward shifts in time, together with the identity operation are\n% \n\\begin{equation}\\label{eq:temporalShifts}\n    e_{t+}\\uln = u_l^{n+1},\\quad e_{t-}\\uln = u_l^{n-1}, \\quad \\text{and} \\quad 1\\uln = \\uln.\n\\end{equation}\n%\nSimilarly, forward and backward shifts in space are\n%\n\\begin{equation}\\label{eq:spatialShifts}\n    e_{x+}\\uln = u_{l+1}^n,\\quad \\text{and}\\quad e_{x-}\\uln = u_{l-1}^n.\n\\end{equation}\n%\n\\todo{many figures for shift and FD operators}These shift operators are rarely used in isolation, though they do appear in energy analysis techniques detailed in Section \\ref{sec:energyAnalysis}. The operators do, however, form the basis of commonly used \\textit{finite-difference (FD) operators}. The first-order derivative in time can be discretised in three different ways. The forward, backward and centred \\todo{FULL DOC SWEEP: check centred instead of centered} difference operators are\n%\n\\todo{these spacings are different in overleaf...}\n\\begin{subnumcases}{\\pt \\approxeq\\label{eq:discFirstTime}}\n        \\dtp &$\\!\\!\\!\\!\\triangleq \\frac{1}{k}\\left(e_{t+} - 1\\right),$\\label{eq:forwardTimeOperator}\\\\\n        \\dtm &$\\!\\!\\!\\!\\triangleq \\frac{1}{k}\\left(1 - e_{t-}\\right),$\\label{eq:backwardTimeOperator}\\\\\n        \\dtd &$\\!\\!\\!\\!\\triangleq \\frac{1}{2k}\\left(e_{t+} - e_{t-}\\right),$\\label{eq:centredTimeOperator}\n\\end{subnumcases}\nwhere ``$\\triangleq$'' means ``equal to by definition''. These operators can then be applied to grid function $\\uln$ to get\n\\begin{subnumcases}{\\pt u \\approxeq\\label{eq:discFirstTimeU}}\n    \\dtp \\uln &$\\!\\!\\!\\! = \\frac{1}{k}\\left(u_l^{n+1} - \\uln\\right),$\\label{eq:forwardTimeOperatorU}\\\\\n    \\dtm \\uln &$\\!\\!\\!\\! = \\frac{1}{k}\\left(\\uln - u_l^{n-1}\\right),$\\label{eq:backwardTimeOperatorU}\\\\\n    \\dtd \\uln &$\\!\\!\\!\\! = \\frac{1}{2k}\\left(u_l^{n+1} - u_l^{n-1}\\right),$\\label{eq:centredTimeOperatorU}\n\\end{subnumcases}\nand all approximate the first-order time derivative of $u$. Note that the centred difference has a division by $2k$ as the time difference between $n+1$ and $n-1$ is, indeed, twice the time step. 
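As a concrete illustration, these temporal operators may be written in code as follows (a minimal Python sketch; it is assumed that the states $u^{n-1}$, $u^n$ and $u^{n+1}$ are held in separate variables, as is common in FDTD implementations, and the variable names are illustrative):\n\\begin{verbatim}\nfs = 44100.0   # sample rate (Hz)\nk = 1.0 / fs   # time step (s)\n\ndef dt_forward(u_next, u):        # forward difference: (u^{n+1} - u^n) / k\n    return (u_next - u) / k\n\ndef dt_backward(u, u_prev):       # backward difference: (u^n - u^{n-1}) / k\n    return (u - u_prev) / k\n\ndef dt_centred(u_next, u_prev):   # centred difference: (u^{n+1} - u^{n-1}) / (2k)\n    return (u_next - u_prev) / (2.0 * k)\n\\end{verbatim}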
Similar operators exist for a first-order derivative in space, where the forward, backward and centred differences are
\begin{subnumcases}{\px \approxeq\label{eq:discFirstSpace}}
    \dxp &$\!\!\!\!\triangleq \frac{1}{h}\left(e_{x+} - 1\right),$\label{eq:forwardSpaceOperator}\\
    \dxm &$\!\!\!\!\triangleq \frac{1}{h}\left(1 - e_{x-}\right),$\label{eq:backwardSpaceOperator}\\
    \dxd &$\!\!\!\!\triangleq \frac{1}{2h}\left(e_{x+} - e_{x-}\right),$\label{eq:centredSpaceOperator}
\end{subnumcases}
and when applied to $\uln$ are
\begin{subnumcases}{\px u \approxeq\label{eq:discFirstSpaceU}}
    \dxp \uln&$\!\!\!\!= \frac{1}{h}\left(u_{l+1}^n- \uln\right),$\\
    \dxm \uln&$\!\!\!\!= \frac{1}{h}\left(\uln - u_{l-1}^n\right),$\\
    \dxd \uln&$\!\!\!\!= \frac{1}{2h}\left(u_{l+1}^n - u_{l-1}^n\right).$\label{eq:centredSpaceOperatorU}
\end{subnumcases}
Higher-order differences can be approximated through a composition of first-order difference operators, i.e., by multiplying their definitions.\footnote{Alternatively, one could first apply one operator to a grid function, expand it, and thereafter apply the other operator to all individual grid functions in the result of the first expansion.} The second-order difference in time may be approximated using
\begin{equation}\label{eq:discSecondTime}
    \ptt \approxeq \dtp\dtm = \dtt \triangleq \frac{1}{k^2}\left(e_{t+}-2+e_{t-}\right),
\end{equation}
where ``$2$'' denotes twice the identity operation. This can be done similarly for the second-order difference in space
\begin{equation}\label{eq:discSecondSpace}
    \pxx \approxeq \dxp\dxm = \dxx \triangleq \frac{1}{h^2}\left(e_{x+}-2+e_{x-}\right),
\end{equation}
both of which can be applied to a grid function $\uln$ in a similar fashion. Figure \ref{fig:operators} shows the \textit{stencils} of the operators introduced above. A stencil shows the grid points needed to perform the operation of a FD operator.
\begin{figure}[h]
    \centering
    \subfloat[Temporal operators.\label{fig:temporalOperators}]{\includegraphics[width=0.45\textwidth]{figures/fdtd/operatorsT.pdf}}\hspace{0.06\textwidth}
    \subfloat[Spatial operators.\label{fig:spatialOperators}]{\includegraphics[width=0.45\textwidth]{figures/fdtd/operatorsX.pdf}}
    \caption{The stencils of various FD operators applied to the grid point highlighted with a red square. Black grid points are used in the calculation, and white grid points are not. The averaging operators follow the same pattern.\label{fig:operators}}
\end{figure}
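In implementation, it is often convenient to apply $\dxx$ to all grid points at once by collecting the coefficients of its stencil in a (sparse) matrix. The sketch below assumes, for simplicity, that the end points are held at zero, anticipating the fixed boundary conditions of Section \ref{sec:1DWaveDisc}; the values of $N$ and $h$ and the test function are purely illustrative.

\setlstMAT
\begin{lstlisting}[caption={A sketch of $\dxx$ as a sparse matrix acting on the interior grid points.}, label=alg:dxxMatrixSketch]
N = 30; h = 1 / N;           % illustrative number of intervals and grid spacing
e = ones(N-1, 1);

% tridiagonal [1 -2 1] / h^2 for the interior points l = 1, ..., N-1
Dxx = spdiags([e, -2*e, e], -1:1, N-1, N-1) / h^2;

u = sin(pi * h * (1:N-1)');  % a test grid function (zero at the end points)
dxxU = Dxx * u;              % approximates the second spatial derivative
\end{lstlisting}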
Also useful, for instance in energy analysis and when working with interleaved grids, are averaging operators, all of which approximate the identity operation. The temporal forward, backward and centred averaging operators are
\begin{subnumcases}{1 \approxeq \label{eq:averagingTime}}
    \mtp & $\!\!\!\!\triangleq \frac{1}{2}\left(e_{t+} + 1\right),$\label{eq:forwardAvgTime}\\
    \mtm & $\!\!\!\!\triangleq \frac{1}{2}\left(1 + e_{t-}\right),$\label{eq:backwardAvgTime}\\
    \mtd & $\!\!\!\!\triangleq \frac{1}{2}\left(e_{t+} + e_{t-}\right).$\label{eq:centredAvgTime}
\end{subnumcases}
Notice how these definitions differ from the difference operators in \eqref{eq:discFirstTime}: the terms in the parentheses are added rather than subtracted, and rather than a division by the time step $k$ there is a division by $2$. Finally, the centred averaging operator does not have an extra division by $2$ as in \eqref{eq:centredTimeOperator}.
Applied to $\uln$, Eqs. \eqref{eq:averagingTime} become
\begin{subnumcases}{\uln \approxeq \label{eq:averagingTimeU}}
    \mtp \uln & $\!\!\!\!= \frac{1}{2}\left(u_l^{n+1}+ \uln\right),$\label{eq:forwardAvggTimeU}\\
    \mtm \uln & $\!\!\!\!= \frac{1}{2}\left(\uln + u_l^{n-1}\right),$\label{eq:backwardAvggTimeU}\\
    \mtd \uln & $\!\!\!\!= \frac{1}{2}\left(u_l^{n+1} + u_l^{n-1}\right).$\label{eq:centredAvggTimeU}
\end{subnumcases}
%
Similarly, the spatial averaging operators are
\begin{subnumcases}{1 \approxeq \label{eq:averagingSpace}}
    \mxp & $\!\!\!\!\triangleq \frac{1}{2}\left(e_{x+} + 1\right),$\label{eq:forwardAvgSpace}\\
    \mxm & $\!\!\!\!\triangleq \frac{1}{2}\left(1 + e_{x-}\right),$\label{eq:backwardAvgSpace}\\
    \mxd & $\!\!\!\!\triangleq \frac{1}{2}\left(e_{x+} + e_{x-}\right),$\label{eq:centredAvgSpace}
\end{subnumcases}
and when applied to $\uln$
\begin{subnumcases}{\uln \approxeq \label{eq:averagingSpaceU}}
    \mxp \uln & $\!\!\!\!= \frac{1}{2}\left(u_{l+1}^n+ \uln\right),$\label{eq:forwardAvgSpaceU}\\
    \mxm \uln & $\!\!\!\!= \frac{1}{2}\left(\uln + u_{l-1}^n\right),$\label{eq:backwardAvgSpaceU}\\
    \mxd \uln & $\!\!\!\!= \frac{1}{2}\left(u_{l+1}^n + u_{l-1}^n\right).$\label{eq:centredAvgSpaceU}
\end{subnumcases}
Finally, using the forward and backward averaging operators, second-order temporal and spatial averaging operators can be created according to
\begin{equation}
    1 \approxeq \mtt = \mtp\mtm \triangleq \frac{1}{4}\left(e_{t+}+2+e_{t-}\right),
\end{equation}
and
\begin{equation}
    1 \approxeq \mxx = \mxp\mxm \triangleq \frac{1}{4}\left(e_{x+}+2+e_{x-}\right).
\end{equation}

Operators and derivatives in 2D will be discussed in Chapter \ref{ch:2Dsyst}.


\subsubsection{Accuracy}
As FDTD methods approximate continuous systems, the resulting solution is rarely exact. To determine the accuracy of the FD operators above, one can perform a \textit{Taylor series analysis}. The Taylor series is an infinite sum; the expansion of a function $f$ about a point $a$ is defined as
\begin{equation}
    f(x) = \sum_{n=0}^{\infty} \frac{(x-a)^n}{n!}f^{(n)}(a),
\end{equation}
where superscript $(n)$ denotes the $n$\th derivative of $f$ with respect to $x$.
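As will be derived below, the forward and backward differences are first-order accurate, whereas the centred difference is second-order accurate. As an empirical preview of these results, the sketch below measures the respective errors for a test function; the function $\sin(t)$, the evaluation point and the step sizes are arbitrary choices for this illustration.

\setlstMAT
\begin{lstlisting}[caption={A sketch estimating the order of accuracy of the forward and centred differences empirically.}, label=alg:accuracySketch]
t = 1; dtExact = cos(t);    % test point and true derivative of sin(t)
for k = [1e-2, 5e-3, 2.5e-3]
    errForward = abs((sin(t+k) - sin(t)) / k - dtExact);
    errCentred = abs((sin(t+k) - sin(t-k)) / (2*k) - dtExact);
    % halving k roughly halves errForward but quarters errCentred
    fprintf('k = %.4f: forward %.3e, centred %.3e\n', k, errForward, errCentred);
end
\end{lstlisting}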
The analysis will be performed on the temporal operators in this section, but also applies to the spatial operators presented.

Using the continuous function $u=u(t)$ and following Bilbao's ``slight abuse of notation'' in \cite{theBible}, one may apply FD operators to continuous functions according to
\begin{equation}\label{eq:forwardTimeCont}
    \dtp u(t) = \frac{u(t+k) - u(t)}{k}\ .
\end{equation}
%
Assuming that $u$ is infinitely differentiable, $u(t+k)$, i.e., $u$ at the next time step (in continuous time), can be approximated using a Taylor series expansion of $u$ about $t$ according to
\begin{equation}\label{eq:taylorStartForward}
    u(t+k) = u(t) + k \dot u + \frac{k^2}{2} \ddot u + \frac{k^3}{6} \dot{\ddot{u}} + \OO(k^4).
\end{equation}
Here (following Newton's notation introduced in Section \ref{sec:differentialEquations}), the dot describes a single temporal derivative and $\OO$ absorbs the remaining terms in the expansion. The power of $k$ in the argument of $\OO$ describes the order of accuracy: the higher the power of $k$, the more accurate the approximation. Equation \eqref{eq:taylorStartForward} can be rewritten as
\begin{equation*}
    \frac{u(t+k) - u(t)}{k} = \dot u + \frac{k}{2} \ddot u +\frac{k^2}{6} \dot{\ddot{u}} + \OO(k^3),
\end{equation*}
which, using Eq. \eqref{eq:forwardTimeCont}, can be written as
\begin{equation}
    \dtp u(t) = \dot u + \OO(k).
\end{equation}
This says that the forward difference operator approximates the continuous first-order derivative with an additional error term that depends on $k$.
As the power of $k$ in $\OO$'s argument is $1$, the forward operator is first-order accurate. One can also observe that, as expected, the error gets smaller as the time step $k$ gets smaller, which indicates that higher sample rates result in more accurate simulations (through $k=1/\fs$).

One can arrive at a similar result for the backward operator. Applying Eq. \eqref{eq:backwardTimeOperator} to $u(t)$ yields
\begin{equation}
    \dtm u(t) = \frac{u(t) - u(t-k)}{k}.
\end{equation}
One can then approximate $u(t-k)$ by performing a Taylor series expansion of $u$ about $t$ according to
\begin{align}
    u(t-k) &= u(t) + (-k) \dot u + \frac{(-k)^2}{2} \ddot u +\frac{(-k)^3}{6}\dot{\ddot{u}} + \OO(k^4),\label{eq:taylorStartBackward}\\
    \frac{u(t-k) - u(t)}{k} &= -\dot u + \frac{k}{2} \ddot u - \frac{k^2}{6}\dot{\ddot{u}} + \OO(k^3),\nonumber\\
    \dtm u(t) &= \dot u + \OO(k).
\end{align}
Notice that the sign of the terms absorbed by $\OO$ does not matter.

Applying the centred operator in Eq. \eqref{eq:centredTimeOperator} to $u(t)$ yields
\begin{equation}
    \dtd u(t) = \frac{u(t+k) - u(t-k)}{2k},
\end{equation}
indicating that, to find the order of accuracy for this operator, both Eqs. \eqref{eq:taylorStartForward} and \eqref{eq:taylorStartBackward} are needed. Subtracting the latter from the former (the even powers of $k$ cancel) yields
\begin{align}
    u(t+k) - u(t-k) &= 2k\dot u + \frac{2k^3}{6}\dot{\ddot{u}} + \OO(k^5),\nonumber\\
    \frac{u(t+k) - u(t-k)}{2k} &= \dot u + \OO(k^2),\nonumber\\
    \dtd u(t) &= \dot u + \OO(k^2),
\end{align}
which shows that the centred difference operator is second-order accurate.

As a first-order derivative indicates the \textit{slope} of a function, the differences in accuracy between the above operators can be visualised as in Figure \ref{fig:taylor}.
It can be observed that the derivative approximation -- the slope -- of the centred operator matches the true derivative of $u$ at $t$ much more closely.

\begin{figure}[h]
    \includegraphics[width=\textwidth]{figures/fdtd/taylor.eps}
    \caption{\label{fig:taylor} The accuracy of the forward, backward and centred difference operators in \eqref{eq:discFirstTime} visualised. The centred difference operator much more closely approximates the derivative, or the slope, of $u$ at $t$ when compared to the forward and backward difference operators.}
\end{figure}

Higher-order differences, such as the second-order difference in time in Eq. \eqref{eq:discSecondTime}, can also be applied to $u(t)$ to get
\begin{equation}
    \dtt u(t) =  \frac{u(t+k) - 2u(t) + u(t-k)}{k^2},
\end{equation}
which can be proven to be second-order accurate by adding Eqs. \eqref{eq:taylorStartForward} and \eqref{eq:taylorStartBackward}:
\begin{align}
    u(t+k) + u(t-k) &= 2 u(t) + k^2 \ddot u + \OO(k^4),\nonumber\\
    \frac{u(t+k) -2 u(t) + u(t-k)}{k^2} &= \ddot u + \OO(k^2),\nonumber\\
    \dtt u(t) &= \ddot u + \OO(k^2).
\end{align}

The accuracy of the averaging operators can be found in the same way and follows a similar pattern:
\begin{equation}
    \begin{aligned}
    \mtp u(t) = u(t) + \OO(k),\quad &\mtm u(t)= u(t) + \OO(k),\\
    \mtd u(t)= u(t) + \OO(k^2), \quad &\mtt u(t)= u(t) + \OO(k^2).
    \end{aligned}
\end{equation}

\subsection{Identities}
For working with FD schemes, either for implementation or analysis, it can be extremely useful to rewrite the operators presented above as equivalent combinations of other operators. These equalities are called \textit{identities} and, for future reference, some useful ones are listed below:\footnote{The $\pm$ operators in a single equation are either all `$+$' or all `$-$' and are used for a more compact notation.}
\begin{subequations}
    \begin{align}
        \dtt &= \frac{2}{k}\left(\dtd- \dtm\right),\label{eq:identity1}\\
        \dtd &= \dtp\mtm = \dtm\mtp\label{eq:identity2},\\
        \mu_{t\pm} &= \pm\frac{k}{2}\delta_{t\pm} + 1\label{eq:identity3},\\
        \delta_{t\pm} &= \pm\frac{2}{k}(\mu_{t\pm} - 1),\label{eq:identityLip}\\
        \mtd &= k\dtd + e_{t-}.\label{eq:identity4}
    \end{align}
\end{subequations}
That these equalities hold can easily be proven by expanding the operators defined in Section \ref{sec:FDoperators}. Naturally, these identities also hold for the spatial operators by simply substituting the `$t$' subscripts for `$x$'.
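As an example, identity \eqref{eq:identity2} follows directly from the definitions in Section \ref{sec:FDoperators}:
\begin{align*}
    \dtp\mtm &= \frac{1}{k}\left(e_{t+} - 1\right)\cdot\frac{1}{2}\left(1 + e_{t-}\right)\\
    &= \frac{1}{2k}\left(e_{t+} + e_{t+}e_{t-} - 1 - e_{t-}\right) = \frac{1}{2k}\left(e_{t+} - e_{t-}\right) = \dtd,
\end{align*}
where $e_{t+}e_{t-} = 1$, as shifting a grid function forwards and then backwards in time leaves it unchanged. The remaining identities can be verified in the same way.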
\section{The mass-spring system}\label{sec:massSpringSystem}
Though mass-spring systems are a complete physical modelling field of their own (see Chapter \ref{ch:physMod}), they are also sound-generating systems and lend themselves well to illustrating and explaining FDTD methods in practice. Starting with the continuous-time ODE, the current section follows the discretisation process to a FD scheme using the operators described in Section \ref{sec:FDoperators}. Finally, the scheme is rewritten to an update equation that can be implemented, and the output of the system is shown.

\subsection{Continuous time}\label{sec:massSpringCont}
Using dots to indicate a temporal derivative, the ODE of a simple mass-spring system is defined as
\begin{equation}\label{eq:massSpringPDE}
    M\ddot u = -Ku,
\end{equation}
where $u = u(t)$ is the distance from the equilibrium position (in m), $M>0$ is the mass (in kg) and $K\geq 0$ is the spring constant (in N/m). Equation \eqref{eq:massSpringPDE} can be written as
\begin{equation}\label{eq:massSpringCompact}
    \ddot u = -\omega_0^2u,
\end{equation}
with angular frequency (in rad/s)
\begin{equation}\label{eq:omega0MassSpring}
    \omega_0 = \sqrt{K/M}.
\end{equation}
This way of writing the mass-spring ODE is more compact and can more directly be related to the fundamental frequency $f_0 = \omega_0 / 2 \pi$ (in Hz) of the system.

Apart from the choices of $K$ and $M$, the behaviour of the mass-spring system is determined by its \textit{initial conditions}, namely $u(0)$ and $\pt u(0)$, i.e., the displacement and velocity of the mass at $t = 0$. If the initial conditions are non-zero, the path that the displacement of the mass follows over time is sinusoidal (see Figure \ref{fig:massSpring}), which is also why the mass-spring system is often referred to as the \textit{simple harmonic oscillator}. The amplitude of the sinusoid is determined by the initial conditions, whereas the frequency is determined by $M$ and $K$.

\input{introduction/massSpringTikz.tex}

\subsubsection{Intuition}
The behaviour of the mass-spring system in Eq. \eqref{eq:massSpringPDE} arises from two basic laws of physics: \textit{Newton's second law} and \textit{Hooke's law}.

Starting with Newton's second law -- \textit{force equals mass times acceleration} -- and relating this to the variables used in Eq. \eqref{eq:massSpringPDE} yields an expression for force
\begin{equation}\label{eq:newton2nd}
    F = M\ddot u.
\end{equation}
This equation in isolation can be used to, for example, calculate the force necessary to accelerate a mass of $M$ kg at $\ddot u$ m/s$^2$. Next, the force generated by the spring follows Hooke's law:
\begin{equation}\label{eq:hookesLaw}
    F = -Ku,
\end{equation}
which simply states that the force generated by a spring with stiffness $K$ is proportional to the displacement $u$ and opposite in direction. In other words, the further the spring is extended (from the equilibrium $u=0$), the more force will be generated in the opposite direction. Finally, as the sole force acting on the mass is the one generated by the spring, the two expressions for the force $F$ can be set equal to each other and yield the equation for the mass-spring system in \eqref{eq:massSpringPDE}.

The sinusoidal behaviour of the mass-spring system, or at least the fact that the mass ``gets pulled back'' to the equilibrium, is apparent from the minus sign in Eq. \eqref{eq:hookesLaw}. The frequency of the sinusoid depends on the value of $K$, as the ``pull'' happens to a higher degree for a higher spring stiffness.
That the frequency of the system is also dependent on the mass $M$ can be explained by the fact that a lighter object is more easily moved and vice versa, which is apparent from Eq. \eqref{eq:newton2nd}. In other words, the pull of the spring has a greater effect on the acceleration of a lighter object than on a heavier one. Both parameters affect the frequency through the definition of $\omega_0$ in Eq.
\eqref{eq:omega0MassSpring}.

Finally, if $u = 0$, there is no spring force present and the acceleration is zero. This is exactly what Newton's first law states: if the net force acting on an object is zero, its velocity will be constant. If the mass is not in motion, this means that it remains stationary. If it is, at the exact moment that $u=0$, the velocity is unchanged.

\subsection{Discrete time}
Following the discretisation process introduced in Section \ref{sec:discUsingFDTD}, one can approximate the ODE in Eq. \eqref{eq:massSpringPDE}. The displacement of the mass is approximated using
\begin{equation}
    u(t) \approx u^n,
\end{equation}
with time $t = nk$, time step $k = 1/\fs$, sample rate $\fs$ and temporal index $n \in \mathbb{N}^0$. Note that the ``grid function'' does not have a subscript $l$ as $u$ is not distributed in space, and is now simply called a \textit{time series}.

Using the operators found in Section \ref{sec:FDoperators}, Eq. \eqref{eq:massSpringPDE} can be discretised as follows:
\begin{equation}\label{eq:massSpringFDS}
    M \dtt \un = -K\un,
\end{equation}
which is the first appearance of a FD scheme in this work. Expanding the $\delta_{tt}$ operator yields
\begin{equation*}
    \frac{M}{k^2}\left(u^{n+1}-2\un+u^{n-1}\right) = -K\un,
\end{equation*}
and solving for $u^{n+1}$ results in the following recursion or \textit{update equation}:
\begin{equation}\label{eq:massSpringUpdate}
    u^{n+1} = \left(2-\frac{Kk^2}{M} \right)\un - u^{n-1},
\end{equation}
which can be implemented in a programming language such as \texttt{MATLAB}.

\subsection{Implementation and output}\label{sec:massSpringImplementation}
A simple \texttt{MATLAB} script implementing the mass-spring system described in this section is shown in Appendix \ref{app:massSpringCode}. The most important part of the algorithm happens in a for-loop recursion, where update equation \eqref{eq:massSpringUpdate} is implemented. At the end of each loop, the system states are updated and prepared for the next iteration; a condensed sketch is given below.

To be able to start the simulation of the scheme, the initial conditions given in Section \ref{sec:massSpringCont} must be discretised at $n=0$. As $n$ is only defined for non-negative values, the forward difference operator is used. A simple way to obtain a sinusoidal motion with an amplitude of $1$ is to set the initial conditions as follows:
%
\begin{equation}
    u^0 = 1 \quad \text{and} \quad \dtp u^0 = 0.
\end{equation}
The latter equality can be expanded and solved for $u^1$ to obtain its definition:
\begin{align*}
    &\frac{1}{k}\left(u^1 - u^0\right) = 0,\\[-1em]
    \xLeftrightarrow{\mystrut\ u^0 = 1\ }\quad &u^1 - 1 = 0,\\[0.1em]
 &u^1 = 1.
\end{align*}
In short, setting $u^0 = u^1 = 1$ yields an oscillatory behaviour with an amplitude of $1$.
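The sketch below condenses this procedure; it follows update equation \eqref{eq:massSpringUpdate} and the initial conditions derived above, and is not a substitute for the full script in Appendix \ref{app:massSpringCode}. The parameter values are illustrative and chosen such that $f_0 = 440$ Hz through Eq. \eqref{eq:omega0MassSpring}.

\setlstMAT
\begin{lstlisting}[caption={A sketch of the mass-spring recursion in Eq. \eqref{eq:massSpringUpdate}.}, label=alg:massSpringSketch]
fs = 44100; k = 1 / fs;       % sample rate (in Hz) and time step (in s)
M = 1; f0 = 440;              % mass (in kg) and desired frequency (in Hz)
K = M * (2 * pi * f0)^2;      % spring constant via omega_0 = sqrt(K/M)
lengthSound = fs;             % simulate one second

u = 1; uPrev = 1;             % initial conditions u^1 = u^0 = 1
out = zeros(lengthSound, 1);

for n = 1:lengthSound
    uNext = (2 - K * k^2 / M) * u - uPrev;  % update equation
    out(n) = u;                             % record the output
    uPrev = u; u = uNext;                   % update system states
end
\end{lstlisting}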
Note that any other non-zero initial condition will also yield oscillatory behaviour, but likely with a different amplitude.

\begin{figure}[b]
    \includegraphics[width=\textwidth]{figures/fdtd/massSpringOutput.eps}
    \caption{The time-domain and frequency-domain output of a mass-spring system with $f_0 = 440$ Hz. \label{fig:massSpringOutput}}
\end{figure}

The values for $K$ and $M$ are restricted by the stability condition
\begin{equation}
    k < 2\sqrt{\frac{M}{K}},
\end{equation}
which will be elaborated on in Section \ref{sec:stabilityAnalysis}. If this condition is not satisfied, the system will exhibit (exponential) growth and is \textit{unstable}.

The output of the system can be obtained by `recording' the displacement of the mass and listening to this at the given sample rate $\fs$. An example of this can be found in Figure \ref{fig:massSpringOutput}, where the frequency of oscillation is $f_0 = 440$ Hz.

\section{The 1D wave equation}\label{sec:1DWave}
Arguably the most important PDE in the field of physical modelling for sound synthesis is the 1D wave equation. It can be used to describe transverse vibration in an ideal string, longitudinal vibration in an ideal bar or the pressure in an acoustic tube (see Chapter \ref{ch:brass}). Although the behaviour of this equation alone does not appear in the real world as such -- as no physical system is ideal -- it is extremely useful as a test case and a basis for more complicated models.

\subsection{Continuous time}\label{sec:1DwaveContTime}
The 1D wave equation is a PDE that describes the motion of a system distributed in one dimension of space. Consider the state of a 1D system $u=u(x,t)$ of length $L$ (in m), defined for time $t\geq 0$ and $x\in \D$ with $\D = [0, L]$. The PDE describing its motion is
\begin{equation}\label{eq:1DwavePDE}
    \ptt u = c^2 \pxx u,
\end{equation}
where $c$ is the wave speed of the system (in m/s). If the PDE is used to model an ideal string, the wave speed can be defined as $c = \sqrt{T / \rho A}$, with tension $T$ (in N), material density $\rho$ (in kg/m$^3$) and cross-sectional area $A$ (in m$^2$). If, instead, it is used to model the pressure in an acoustic tube, $c$ is the speed of sound in air. Figure \ref{fig:1DWavePropagation} shows the wave propagation of the 1D wave equation excited using a raised cosine.
\def\figWidth{0.32}
\begin{figure}[h]
    \centering
    \subfloat[$t = 0$ ms.\label{fig:1DWaveProp1}]{\includegraphics[width=\figWidth\textwidth]{figures/fdtd/1DWaveProp1.eps}}\hfill
    \subfloat[$t = 1$ ms.\label{fig:1DWaveProp2}]{\includegraphics[width=\figWidth\textwidth]{figures/fdtd/1DWaveProp2.eps}}\hfill
    \subfloat[$t = 2$ ms.\label{fig:1DWaveProp3}]{\includegraphics[width=\figWidth\textwidth]{figures/fdtd/1DWaveProp3.eps}}
    \caption{Wave propagation in the 1D wave equation in Eq. \eqref{eq:1DwavePDE} with $c \approx 127$ m/s.\label{fig:1DWavePropagation}}
\end{figure}


\subsubsection{Intuition}
As with the mass-spring system in Section \ref{sec:massSpringSystem}, the working of the PDE in \eqref{eq:1DwavePDE} arises from Newton's second law, even though this connection might be less apparent.

The 1D wave equation in \eqref{eq:1DwavePDE} states that the acceleration of $u(x,t)$ at location $x$ is determined by the second-order spatial derivative of $u$ at that same location (scaled by a constant $c^2$). In the case that $u$ describes the transverse displacement of an ideal string, this second-order derivative denotes the \textit{curvature} of this string. As $c^2$ is always positive, the sign (or direction) of the acceleration is fully determined by the sign of the curvature. In other words, a `positive' curvature at location $x$ along the ideal string yields a `positive' or upwards acceleration at that same location.

What a `positive' or `negative' curvature implies is more easily seen when we take a simple function describing a parabola, $y(x) = x^2$, and take its second derivative to get $y''(x) = 2$. The answer is a positive number, which means that $y$ has a positive curvature.

So, what does this mean for the 1D wave equation? As a positive curvature implies a positive or upwards acceleration as per Eq. \eqref{eq:1DwavePDE}, $u$ with a positive curvature at a location $x$ will start to move upwards, and vice versa. Of course, the state of a physical system such as $u$ will rarely have a perfect parabolic shape, but the argument still applies. See Figure \ref{fig:curvature} for a visualisation of the forces acting on $u$ due to curvature.

How the 1D wave equation relates to Newton's second law becomes apparent by slightly rewriting Eq. \eqref{eq:1DwavePDE}. Recalling the definition of $c$ for an ideal string, one can rewrite the 1D wave equation as
\begin{equation*}
    \rho A \ptt u = T \pxx u,
\end{equation*}
where $\rho A$ describes the \textit{mass per unit length} of the string.
As the forces present in the system act on infinitesimally small portions of the string, Newton's second law appears after a multiplication by $dx$:
\begin{equation*}
    \underbrace{\rho A \ptt u dx}_{ma} = \underbrace{T \pxx u dx}_{F},
\end{equation*}
where $\rho A dx$ is the mass of a (tiny) portion of the string of length $dx$ (in m), $\ptt u$ is the acceleration of that portion and $T \pxx u dx$ describes the force acting on that portion, yielding Newton's second law.

\begin{figure}[h]
    \centering
    \includegraphics[width=\textwidth]{figures/fdtd/curvature.eps}
    \caption{\label{fig:curvature} The forces acting on the 1D wave equation described by $u(x,t)$ due to curvature. The arrows indicate the direction and magnitude of the force, and simultaneously the acceleration, as these are connected through Eq. \eqref{eq:1DwavePDE}.}
\end{figure}

\subsubsection{Boundary conditions}
When a system is distributed in space, \textit{boundary conditions} must be determined. Recalling that $x$ is defined over domain $\D = [0, L]$, the boundaries, or end points, of the system are located at $x=0$ and $x=L$. Two often-used alternatives for the boundary conditions are
%
\begin{subequations}\label{eq:boundaryCond1DWave}
    \begin{align}
        u(0, t) = u(L, t) &= 0\quad \text{(Dirichlet, fixed)},\label{eq:contDirichlet}\\
        \px u(0, t) = \px u(L, t) &= 0\quad \text{(Neumann, free)}.\label{eq:contNeumann}
    \end{align}
\end{subequations}
%
The Dirichlet boundary condition says that, at the end points of the system, the state is 0 at all times. The Neumann condition, on the other hand, says that the slope at the end points needs to be 0, but that the end points are free to move transversely. In the former case, incoming waves invert after reaching the boundary, whereas in the latter, incoming waves are reflected without inversion. See Figure \ref{fig:boundaryCondsCont}.

If both boundaries of the 1D wave equation share the same condition, the fundamental frequency of the simulation can be calculated using
\begin{equation}\label{eq:fundamentalFreq}
    f_0 = \frac{c}{2L}\ .
\end{equation}

\begin{figure}[t]
    \centering
    \subfloat[The Dirichlet boundary condition in Eq. \eqref{eq:contDirichlet} fixes the boundary, which causes the incoming waves to invert.\label{fig:dirichlet}]{\includegraphics[width=0.45\textwidth]{figures/fdtd/dirichletCont.eps}}\hspace{0.06\textwidth}
    \subfloat[The Neumann or free boundary condition in Eq. \eqref{eq:contNeumann} fixes the slope at the boundary, causing the incoming waves to not invert.\label{fig:neumann}]{\includegraphics[width=0.45\textwidth]{figures/fdtd/neumannCont.eps}}
    \caption{The behaviour of the 1D wave equation with (a) Dirichlet or (b) Neumann boundary conditions.\label{fig:boundaryCondsCont}}
\end{figure}

\subsubsection{Scaling}
As this work follows much of Bilbao's \textit{Numerical Sound Synthesis} \cite{theBible}, it is worth addressing a major discrepancy between the PDEs and FD schemes that appear there and those used here. Non-dimensionalisation, or \textit{scaling}, is extensively used in \cite{theBible} and much of the literature published around that time (e.g.
\cite{Bilbao2009Modular,Bilbao2009spring}), and can be useful to reduce the number of parameters used to describe a system.

Scaling techniques normalise the domain $x\in[0, L]$ to $x' \in [0, 1]$ with $x' = x/L$. The 1D wave equation in \eqref{eq:1DwavePDE} can then be rewritten as
\begin{equation}\label{eq:scaled1Dwave}
    \ptt u = \gamma^2\partial_{x'x'}u,
\end{equation}
where the scaled wave speed $\gamma = c/L$ has units of frequency. The scaling removes the necessity for both $c$ and $L$: simply specifying the scaled wave speed $\gamma$ is enough to parametrise the behaviour of the system. The parameter reduction becomes more apparent for more complex systems and could greatly simplify the models used, at least in notation and parameter control.

Although this parameter reduction might be useful for resonators in isolation, when multiple resonators interact with each other (see Part \ref{part:interactions}), it is better to keep the systems dimensional. As a big part of this work includes interaction between multiple resonators, only dimensional systems will appear here.

\subsection{Discrete time}\label{sec:1DWaveDisc}
Coming back to the PDE presented in Eq. \eqref{eq:1DwavePDE}, we continue by finding a discrete-time approximation for it. As explained in Section \ref{sec:gridFunctions}, a continuous state variable $u = u(x,t)$ can be discretised using $x=lh$ with grid spacing $h$ (in m) and $t=nk$ with time step $k$ (in s). The grid function $\uln$ approximating $u$ can then be indexed by spatial index $l \in \{0, \hdots, N\}$, where $N$ is the number of intervals between the grid points, and temporal index $n\in \mathbb{N}^0$. Continuing with the approximations of the derivatives in the 1D wave equation, the most straightforward discretisation of Eq. \eqref{eq:1DwavePDE} is the following FD scheme:
\begin{equation}\label{eq:1DwaveFDS}
    \dtt \uln = c^2 \dxx \uln.
\end{equation}
Other schemes exist (see e.g. \cite{theBible}), but are excluded as they have not been used in this work. Expanding the operators using the definitions given in Section \ref{sec:FDoperators} yields
\begin{equation}
    \frac{1}{k^2}\left(u_l^{n+1}-2 \uln + u_l^{n-1}\right) = \frac{c^2}{h^2} \left(u_{l+1}^n - 2 \uln + u_{l-1}^n\right),
\end{equation}
and solving for $u_l^{n+1}$ yields
\begin{equation}\label{eq:1DwaveUpdate}
    u_l^{n+1} = \left(2-2\lambda^2\right) \uln  + \lambda^2\left(u_{l+1}^n + u_{l-1}^n\right) - u_l^{n-1}.
\end{equation}
Here,
\begin{equation}\label{eq:courantNumber}
    \lambda = \frac{ck}{h}
\end{equation}
is called the \textit{Courant number} and plays a big role in the stability and quality of the FD scheme. More specifically, $\lambda$ needs to abide by the (famous) Courant-Friedrichs-Lewy condition, or \textit{CFL condition} for short \cite{Courant1928},
\begin{equation}\label{eq:CFL}
    \lambda \leq 1,
\end{equation}
which acts as a stability condition for scheme \eqref{eq:1DwaveFDS}. More details on this are given in Section \ref{sec:quality1DWave}.

As $c$, $k$ and $h$ are interdependent due to the CFL condition, it is useful to rewrite Eq.
\eqref{eq:CFL} in terms of known variables.
As the time step $k$ is based on the sample rate and thus (usually) fixed, and $c$ is a user-defined wave speed, the CFL condition can be rewritten in terms of the grid spacing $h$:
\begin{equation}\label{eq:1DWaveStabilityCond}
    h \geq ck,
\end{equation}
%
which, in implementation, is used as a stability condition for the scheme. See Section \ref{sec:stabilityAnalysis} for more information on how to derive a stability condition from a FD scheme.

\subsubsection{Stencil}
As was done for several FD operators in Figure \ref{fig:operators}, it can be useful to visualise the \textit{stencil}, or region of operation, of a FD scheme. A stencil of a scheme visualises which grid values are necessary to calculate the state at the next time step $u_l^{n+1}$. Figure \ref{fig:stencil1DWave} shows the stencil for scheme \eqref{eq:1DwaveFDS} and -- in essence -- visualises the various shifts of the grid function in Eq. \eqref{eq:1DwaveUpdate}. One could visualise this stencil as being placed on the left-most points of the grid shown in Figure \ref{fig:gridExp}. The update equation then iterates this stencil over the entire domain and calculates all values of $u_l^{n+1}$ based on known values of $u_l^n$ and $u_l^{n-1}$.

\begin{figure}[h]
    \centering
    \includegraphics[width=0.8\textwidth]{figures/fdtd/1DWaveStencil.eps}
    \caption{The stencil, or region of operation, for the FD scheme in \eqref{eq:1DwaveFDS}. The time steps of the various grid points are colour-coded by yellow ($n+1$), light blue ($n$) and dark blue ($n-1$).\label{fig:stencil1DWave}}
\end{figure}

\subsubsection{Boundary conditions and virtual grid points}
The end points of the discrete domain are located at $l = 0$ and $l = N$.
Substituting these locations into Eq. \eqref{eq:1DwaveUpdate} shows that grid points outside of the defined domain are needed, namely $u_{-1}^n$ and $u_{N+1}^n$. These can be referred to as \textit{virtual grid points} and can be accounted for by discretising the boundary conditions in Eq. \eqref{eq:boundaryCond1DWave}. Discretising these (using the most accurate centred spatial difference operator for the Neumann condition) yields
\begin{subequations}
    \begin{align}
        u_0^n = u_N^n &= 0 \quad\text{(Dirichlet, fixed)}\label{eq:discreteDirichlet},\\
        \delta_{x\cdot} u_0^n = \delta_{x\cdot} u_N^n &= 0 \quad \text{(Neumann, free)}.\label{eq:discreteNeumann}
    \end{align}
\end{subequations}
If Dirichlet boundary conditions are used, the states of the boundary points will always be zero and can therefore be excluded from the calculations. The range of calculation then simply becomes $l\in\{1,\hdots, N-1\}$ and no virtual grid points are needed when performing the update.

If, on the other hand, Neumann conditions are used, the range of calculation remains $l\in\{0,\hdots, N\}$ and definitions for the virtual grid points need to be found.
Expanding the operators in Eq.
\eqref{eq:discreteNeumann} and solving for $u_{-1}^n$ and $u_{N+1}^n$ provides the definitions for these virtual grid points based on values inside the defined domain:

\begin{minipage}[c]{0.49\textwidth}
    \begin{align*}
        &\frac{1}{2h} \left(u_1^n - u_{-1}^n\right) = 0,\\
        &u_1^n - u_{-1}^n = 0,\\
        &u_{-1}^n = u_1^n.
    \end{align*}
\end{minipage}
\begin{minipage}[c]{0.49\textwidth}
    \begin{align*}
        &\frac{1}{2h} \left(u_{N+1}^n - u_{N-1}^n\right) = 0,\\
        &u_{N+1}^n - u_{N-1}^n = 0,\\
        &u_{N+1}^n = u_{N-1}^n.
    \end{align*}
\end{minipage}
\\
\\
\noindent At the boundaries, the update equation in Eq. \eqref{eq:1DwaveUpdate} will then have the above definitions for the virtual grid points substituted, and will become
\begin{equation}\label{eq:1DWaveLeftBound}
    u_0^{n+1} = \left(2-2\lambda^2\right) u_0^n  + 2\lambda^2 u_1^n - u_0^{n-1},
\end{equation}
and
\begin{equation}\label{eq:1DWaveRightBound}
    u_N^{n+1} = \left(2-2\lambda^2\right) u_N^n  + 2\lambda^2 u_{N-1}^n - u_N^{n-1},
\end{equation}
at the left and right boundary, respectively.

\subsection{Implementation}\label{sec:output1DWave}
This section provides information on the excitation and output of the system. See Appendix \ref{app:1DWave} for a \texttt{MATLAB} implementation of the 1D wave equation.

\subsubsection{Excitation}
A simple way to excite the system is to initialise the state using a raised cosine, or Hann window. More information on excitations will be given in Part \ref{part:exciters}, but for completeness, the formula for a discrete raised cosine will be given here.

The discrete raised cosine can be parametrised by its centre location $l_0$ and width $w$ (in `grid spacings'), from which the start index $l_\text{s}$ and end index $l_\text{e}$ can be calculated according to
\begin{equation}
    l_\text{s} = l_0 - \floor[w/2]\qaq l_\text{e} = l_0 + \floor[w/2],
\end{equation}
where $\floor[\cdot]$ denotes the flooring operation, which needs to be used as all the above variables are integers. Furthermore, both $l_\text{s}$ and $l_\text{e}$ must fall into the defined spatial range of calculation. Then, a raised cosine with an amplitude of $1$ can be calculated and used as an initial condition for the system according to
\begin{equation}\label{eq:raisedCos}
    u_l^1 = u_l^0 =
    \begin{cases}
        0.5 - 0.5\cos\left(\frac{2\pi (l - l_\text{s})}{w}\right), & l_\text{s} \leq l \leq l_\text{e},\\
        0, & \text{otherwise}.
    \end{cases}
\end{equation}
As done for the implementation of the mass-spring system in Section \ref{sec:massSpringImplementation}, both $u_l^0$ and $u^1_l$ are initialised with the same state, so as to only have an initial displacement, and not an initial velocity.

In \texttt{MATLAB}, an easier way to obtain a raised cosine is to use the \texttt{hann(w)} function, which returns a raised cosine (or Hann window) of width \texttt{w} (in grid points).

\subsubsection{Output and modes}
After the system is excited, one can retrieve the output of the system by selecting a grid point $l_\text{out}$ and listening to that at the given sample rate $\fs$.
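Before turning to a concrete example, the sketch below combines the raised cosine in Eq. \eqref{eq:raisedCos} with update equation \eqref{eq:1DwaveUpdate} and Dirichlet boundary conditions; the full script can be found in Appendix \ref{app:1DWave}. The parameter values follow Table \ref{tab:1DWaveParams}, and the index shifts account for \texttt{MATLAB}'s one-based indexing.

\setlstMAT
\begin{lstlisting}[caption={A sketch of the 1D wave equation scheme with a raised-cosine excitation and Dirichlet boundary conditions.}, label=alg:1DWaveSketch]
fs = 44100;                     % sample rate (in Hz)
N = 30; lambda = 1;             % number of intervals and Courant number
lengthSound = fs;               % simulate one second

% raised-cosine initial conditions
l0 = round(0.2 * N); w = 4;     % centre location and width
ls = l0 - floor(w / 2); le = l0 + floor(w / 2);
u = zeros(N+1, 1);              % state vector for l = 0, ..., N
u(ls+1:le+1) = 0.5 - 0.5 * cos(2 * pi * ((ls:le)' - ls) / w);
uPrev = u;                      % no initial velocity
uNext = zeros(N+1, 1);

lOut = 3;                       % output location
out = zeros(lengthSound, 1);

for n = 1:lengthSound
    l = 2:N;                    % moving points; the boundaries stay at zero
    uNext(l) = (2 - 2 * lambda^2) * u(l) ...
        + lambda^2 * (u(l+1) + u(l-1)) - uPrev(l);
    out(n) = u(lOut + 1);
    uPrev = u; u = uNext;       % update system states
end
\end{lstlisting}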
An example using the parameters in Table \ref{tab:1DWaveParams} and Dirichlet boundary conditions is shown in Figure \ref{fig:1DWaveOutput}.

\begin{table}[h]
    \begin{center}
    \begin{tabular}{|l|c|c|}
        \hline
        Name & Symbol (unit) & Value\\ \hline
        \multicolumn{3}{|l|}{\bf User-defined parameters}\\ \hline
        Length & $L$ (m) & $1$\\
        Wave speed & $c$ (m/s) & $1470$\\
        Sample rate & $\fs$ (Hz) & $44100$ \\\hline
        \multicolumn{3}{|l|}{\bf Derived parameters}\\ \hline
        Fundamental frequency & $f_0$ (Hz) & $735$\\
        No. of intervals & $N$ (-) & $30$ \\
        Time step & $k$ (s)& $\approx 2.27\cdot 10^{-5}$ \\
        Grid spacing & $h$ (m)& $\approx 0.033$ \\
        Courant number & $\lambda$ (-)& $1$ \\\hline
        \multicolumn{3}{|l|}{\bf Excitation and output}\\ \hline
        Centre location& $l_0$ (-)& $0.2N$\\
        Width& $w$ (-)& 4\\
        Output location & $l_\text{out}$ (-)& 3 \\\hline
    \end{tabular}
    \caption{Parameters for the 1D wave equation example used in this section. The user-defined parameters have been chosen such that $\lambda = 1$. \label{tab:1DWaveParams}}
    \end{center}
\end{table}

As can be seen from Figure \ref{fig:1DWaveOutput}, the output of the 1D wave equation contains many peaks in the frequency spectrum on top of the fundamental frequency. These are called \textit{harmonic partials}, or \textit{harmonics} for short, and arise from the various modes of vibration present in the system (see Figure \ref{fig:modes}). Although the PDE has not been implemented using modal synthesis (discussed in Chapter \ref{ch:physMod}), the system can still be decomposed into different modes of vibration, each corresponding to a harmonic frequency. These modes are assumed to vibrate independently, and their weighted sum yields the eventual behaviour of the system.\footnote{Modes of the vibrating string were first discovered by Sauveur in 1701, who said that ``\textit{especially at night}'' he observed ``\textit{other small sounds}'' on top of the fundamental frequency, and coined the terms `node' and `harmonic' \cite{Sauveur1701}.}

\begin{figure}[h]
    \includegraphics[width=\textwidth]{figures/fdtd/oneDWaveOutput.eps}
    \caption{The time-domain and frequency-domain output of the 1D wave equation with $f_0 = 735$ Hz and $\fs = 44100$ Hz ($N = 30$ and $\lambda = 1$) and Dirichlet boundary conditions. The system is initialised with the raised cosine described in Eq. \eqref{eq:raisedCos} with $l_0 = 0.2N$ and $w=4$, and the output is retrieved at $l_\text{out} = 3$. \label{fig:1DWaveOutput}}
\end{figure}

\begin{figure}[h]
    \includegraphics[width=\textwidth]{figures/fdtd/modes.eps}
    \caption{The first $10$ modal shapes of the 1D wave equation with Dirichlet boundary conditions defined for $x\in[0, L]$ (only shown for mode $1$). The modes are normalised to have the same amplitude and vibrate at their respective modal frequencies, with the extremes indicated by the black and the grey plot. The number of the shape can be determined by the number of antinodes present in the shape. \label{fig:modes}}
\end{figure}
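For reference, a frequency-domain plot like the one in Figure \ref{fig:1DWaveOutput} can be obtained from the recorded output \texttt{out} of the sketch above using the discrete Fourier transform; the normalisation and plotting choices below are only one possible option.

\setlstMAT
\begin{lstlisting}[caption={A sketch of how a magnitude spectrum of the recorded output can be plotted.}, label=alg:spectrumSketch]
Y = abs(fft(out));                              % magnitude spectrum
fAxis = (0:length(out)-1) * fs / length(out);   % frequency axis (in Hz)
plot(fAxis(1:end/2), 20 * log10(Y(1:end/2) / max(Y)));  % in dB, up to fs/2
xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)');
\end{lstlisting}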
The number of modes present in the continuous PDE of the 1D wave equation is theoretically infinite. The number present in the discrete FD scheme, however, is determined by the number of moving points in the system. If Dirichlet boundary conditions are used, this means that there are $N-1$ modes, and $N+1$ modes for Neumann boundary conditions. If the CFL condition is satisfied with equality, the frequencies of these modes are integer multiples of the fundamental: $f_m = mf_0$ for mode number $m \in \{1, \hdots, N-1\}$ for Dirichlet, and $m \in \{0, \hdots, N\}$ when using Neumann boundary conditions. The frequencies of the harmonics -- and even the modal shapes -- can be analytically derived using modal analysis, as will be explained in Section \ref{sec:modalAnalysis}.

The amplitude of the different modes depends on the excitation location (and type) and the output location.
Figure \ref{fig:1DWaveOutput}, for example, seemingly shows that the system only exhibits $24$ modes, rather than the $29$ ($N-1$) predicted. As the system is excited at $0.2N$, or in other words, at $1/5$\th of the length of the system, every $5$\th mode will be attenuated.
To understand how and why this happens, one can refer to Figure \ref{fig:modes} and see that every $5$\th modal shape has a node at $1/5$\th of its length. If the system is excited exactly there, this modal shape will not obtain any energy and will thus not resonate. Similarly, if the system is excited exactly in the middle, every 2\textsuperscript{nd} modal frequency will be attenuated, as there is a node present in the corresponding modal shape. The output would then only contain odd-numbered modes.

\subsection{Stability and simulation quality}\label{sec:quality1DWave}
As shown in Eq. \eqref{eq:CFL}, the Courant number needs to abide by the CFL condition in order for the scheme to be stable. A system is regarded as \textit{unstable} if it exhibits (exponential) unbounded growth. If Neumann (free) boundary conditions are used, it is possible that the system drifts off over time. This does not mean that the system is unstable; such behaviour is actually entirely physically possible!\footnote{Imagine a `free' guitar string where the ends are not connected to the nut and bridge of a guitar. The string can be taken far away from the guitar without it breaking or exploding.}

Besides stability, the value of $\lambda$ is closely related to the quality of the simulation.
If $\lambda = 1$, scheme \eqref{eq:1DwaveFDS} actually yields an exact solution to Eq. \eqref{eq:1DwavePDE}, which is quite uncommon in the realm of differential equations! See Figure \ref{fig:1DWaveDisp1}. Equivalently, if Eq. \eqref{eq:1DWaveStabilityCond} is satisfied with equality, the FD scheme is an exact solution to the PDE, and if $h$ deviates from this condition, the quality of the simulation decreases.

If $\lambda < 1$, the quality of the simulation decreases due to an effect called \textit{numerical dispersion}. Dispersion is a phenomenon where some frequencies travel faster through a medium than others, which is desired in some models (see e.g. Chapter \ref{ch:stiffString}). Numerical dispersion, however, which is due to numerical inaccuracy, never is! Figure \ref{fig:1DWaveDisp09} shows an example where $\lambda = 0.9$, and one can observe that the wave propagation does not match the ideal case shown in Figure \ref{fig:1DWaveDisp1}. Moreover, bandlimiting effects occur, meaning that the highest frequency that the system can generate decreases. See Figure \ref{fig:1DWaveBandwidth}. Higher modes get `squished' together and are no longer exact multiples of the fundamental.
Section \ref{sec:modalAnalysis} elaborates on how to calculate the exact modal frequencies of a FD implementation of the 1D wave equation.

Finally, if $\lambda > 1$ the system becomes unstable. An example is shown in Figure \ref{fig:1DWaveDisp1001}. Unstable behaviour usually comes in the form of high frequencies (around the Nyquist frequency of $\fs / 2$) growing without bounds.

\def\figSpacing{0.01\textwidth}
\def\figWidth{0.32\textwidth}

\begin{figure}[t]
    \centering
    \subfloat[$\lambda = 1$\label{fig:1DWaveDisp1}]{\includegraphics[width=\figWidth]{figures/fdtd/ulnLambda1.eps}}\hspace{\figSpacing}
    \subfloat[$\lambda = 0.9$ \label{fig:1DWaveDisp09}]{\includegraphics[width=\figWidth]{figures/fdtd/ulnLambda09.eps}}\hspace{\figSpacing}
    \subfloat[$\lambda = 1.001$\label{fig:1DWaveDisp1001}]{\includegraphics[width=\figWidth]{figures/fdtd/ulnLambda1001.eps}}
    \caption{Grid function $\uln$ visualised $\sim\!100$ samples after excitation. (a) If $\lambda = 1$, the solution is exact. (b) If $\lambda < 1$, dispersive behaviour shows. (c) If $\lambda > 1$, the CFL condition in Eq. \eqref{eq:CFL} is not satisfied and the system is unstable.\label{fig:1DWaveDispersion}}
\end{figure}

\begin{figure}[t]
    \centering
    \subfloat[$\lambda = 1$\label{fig:1DWaveBand1}]{\includegraphics[width=\figWidth]{figures/fdtd/bandwidthLambda1.eps}}\hspace{\figSpacing}
    \subfloat[$\lambda = 0.9$ \label{fig:1DWaveBand09}]{\includegraphics[width=\figWidth]{figures/fdtd/bandwidthLambda09.eps}}\hspace{\figSpacing}
    \subfloat[$\lambda = 0.5$\label{fig:1DWaveBand05}]{\includegraphics[width=\figWidth]{figures/fdtd/bandwidthLambda05.eps}}
    \caption{Frequency spectra of the simulation output. The Courant number is set to (a) $\lambda = 1$, (b) $\lambda = 0.9$ and (c) $\lambda = 0.5$. One can observe that for lower values of $\lambda$ the bandwidth of the output decreases drastically. \label{fig:1DWaveBandwidth}}
\end{figure}

In what situation would the stability condition then not be satisfied with equality? As mentioned in Section \ref{sec:gridFunctions}, a continuous domain $\D = [0,L]$ for a system of length $L$ needs to be divided into $N$ equal sections of length $h$ in the discretisation process. A logical way to calculate $N$ would be to divide $L$ by the value of $h$ obtained from Eq. \eqref{eq:1DWaveStabilityCond} satisfied with equality, to get the highest possible simulation quality. However, this division might not result in an integer value, which $N$ must be! To stay as close to the stability condition as possible, the following calculations are performed in order:
\begin{equation}\label{eq:orderOfCalc}
    h := ck, \quad N := \floor[\frac{L}{h}], \quad h := \frac{L}{N}, \quad \lambda := \frac{ck}{h}.
\end{equation}
In other words, Eq. \eqref{eq:1DWaveStabilityCond} is satisfied with equality and used to calculate integer $N$. After this, $h$ is recalculated based on $N$ and used to calculate the Courant number using Eq. \eqref{eq:courantNumber}. This process ensures that $N$ is an integer and that the CFL condition is satisfied, though not necessarily with equality.
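In \texttt{MATLAB}, these calculations take only a few lines; the sketch below uses the values of the worked example that follows.

\setlstMAT
\begin{lstlisting}[caption={A sketch of the order of calculations in Eq. \eqref{eq:orderOfCalc}.}, label=alg:orderOfCalcSketch]
fs = 44100; k = 1 / fs;     % sample rate (in Hz) and time step (in s)
L = 1; c = 1500;            % system length (in m) and wave speed (in m/s)

h = c * k;                  % stability condition satisfied with equality
N = floor(L / h);           % integer number of intervals (here, N = 29)
h = L / N;                  % recalculate the grid spacing based on N
lambda = c * k / h;         % Courant number (here, approximately 0.986)
\end{lstlisting}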
To understand why $h$ needs to be recalculated, consider the following example: the 1D wave equation defined over domain $\D = [0, L]$ where $L = 1$. Furthermore, we say that the system should produce a fundamental frequency of $f_0 = 750$ Hz, which requires a wave speed of $c = 1500$ m/s according to Eq. \eqref{eq:fundamentalFreq}. If we use the commonly used sample rate of $\fs = 44100$ Hz, and recalling that $k = 1/\fs$, these values can be filled into \eqref{eq:1DWaveStabilityCond} satisfied with equality, and yield $h \approx 0.034$. If we divide the length by the grid spacing, we get $L / h = 29.4$, meaning that exactly $29.4$ intervals of size $h$ fit in the domain $\D$. However, the number of intervals needs to be an integer, and -- using Eq. \eqref{eq:orderOfCalc} -- we get $N = 29$. If $h$ is not recalculated according to \eqref{eq:orderOfCalc}, the total length will be $29$ times the grid spacing $h$. This results in $L \approx 0.986$, which is slightly less than the original length of $1$. Although the CFL condition will be satisfied with equality, the fundamental frequency will be slightly higher than desired: $f_0 \approx 760.34$ Hz. If $h$ is recalculated based on $N$, then $L$ and $f_0$ will be unchanged, and the system will have the correct fundamental frequency. The Courant number $\lambda \approx 0.986$ is still very close to satisfying the condition in Eq. \eqref{eq:CFL} with equality, and the decrease in quality will be perceptually irrelevant -- or at the very least, less perceptually relevant than the change in $f_0$ if $h$ is not recalculated.

\subsubsection{Intuition}
It might not be immediately clear why too low a value for $h$ might cause instability. Some intuition is provided in \cite[Fig. 6.9]{theBible}, but here I would like to provide an alternative, hopefully more tangible, way to see this.

In a FD implementation of the 1D wave equation, grid points can only affect their direct neighbours, as seen in update equation \eqref{eq:1DwaveUpdate}. Using the values in Table \ref{tab:1DWaveParams} as an example, $N=30$, and if $\lambda = 1$, it takes exactly $30$ samples, or iterations of Eq. \eqref{eq:1DwaveUpdate}, for a wave to travel from one boundary to the other.

If $h$ were chosen twice as big, so that there are only half as many intervals between the grid points as per Eq. \eqref{eq:orderOfCalc} ($N=15$), the grid points could be set to `affect' their neighbours to a lesser degree. This way, the wave still takes the same amount of time to travel between the boundaries and the fundamental frequency stays approximately the same. This is essentially what happens when $\lambda < 1$ (in this case $\lambda = 0.5$) and can be observed from the update equation in Eq. \eqref{eq:1DwaveUpdate}; the effect that the neighbouring grid points have on each other will indeed be less.
The output of the system will have approximately the same fundamental frequency as if $\lambda = 1$, but its partials will be detuned due to numerical dispersion, as explained in this section.

If, on the other hand, $h$ were chosen twice as small, so that there are twice as many intervals between the grid points ($N = 60$), it is impossible for the waves to travel from one boundary to the other in $30$ samples. If the grid points could interact with their second neighbours, this would be possible, but the FD scheme in \eqref{eq:1DwaveFDS} does not allow for this. Indeed, as $\lambda = 2$ in this case, the effect that the grid points have on each other will be disproportionate. In a way, the grid points have too much energy that they cannot lose to their neighbours, because their effect should have reached their second neighbours over the course of one sample. The way to solve this would be to halve the time step $k$ (or double the sample rate $\fs$), which would allow grid points to interact with their second neighbours over the course of one old time step (as this is now divided into two new time steps). This also shows in the fact that $\lambda = 1$ again (as halving $k$ cancels out halving $h$), and grid points transfer their energy to their neighbours proportionately again.

\subsubsection{Possible solution}
One of the main contributions of the PhD project is published in paper \citeP[G] and summarised in Chapter \ref{ch:dynamicGrid}, where a `fractional' number of intervals is introduced. This removes the necessity of the flooring operation in Eq. \eqref{eq:orderOfCalc} and circumvents the recalculation of $h$, so that the stability condition is always satisfied with equality while the correct fundamental frequency is retained.
{"text": "\\chapter{Feedback gives rise to fixed point equation.}\n\nWhat happens if we feed the output\nof a function \\(f\\) back to its input?\nBut firstly, what do we mean by feeding it back?\nWe mean we wire the input and the output together so that they are the same,\nso that the function's input \\(x\\) is the same as the function's output \\(f~x\\).\nThat statement immediately translates to the equation \\( x = f~x \\),\nwhich is a \\emph{fixed point equation}.\nTherefore equalizing the output of a function to its input gives rise to a fixed point equation.\n\nBut feedback is more general than that.\nFeedback is the case where the input also depends on the output.\nIf the input is \\(x\\) and the output is \\(f~x\\),\nwe can express feedback as the equation \\( x = g~(f~x)~x \\),\nwhich we can generalize to \\( x = g~f~x \\)\nby letting \\(g\\) apply \\(x\\) to \\(f\\) internally.\nWe shall call the equation \\( x = g~f~x \\) the \\emph{general feedback equation}.\n\nBut it turns out that we can always write \\( x = g~f~x \\) as \\( x = h~x \\)\nwhere \\( h = g~f \\).\nWe can always write \\( f~x = x \\) as \\( g~x = 0 \\) where \\( g~x = f~x - x \\).\nWe can also always write \\( x = g~f~x \\) as \\( 0 = g~f~x \\).\nWhat about squared fixed point equation \\( (f~x)^2 - x^2 = 0 \\)?\nThe equation \\( g~x = 0 \\) doesn't mean that \\(g\\) is zero everywhere.\nIt means that \\(g\\) is zero at \\(x\\), and nothing more, unless quantified.\n\nA fixed point equation is an example of functional equations.\nOther examples are Cauchy's functional equations.\n\n\\section{Feedback is a special case of fixed point.}\n\nThe \\emph{general fixed point equation} is \\(x = f ~ x\\).\nIn this case, we say that \\(x\\) is a fixed point of \\(f\\).\n\n\\begin{m:thm}[Feedback as special case of fixed point]\nThe general feedback equation is a special case of the general fixed point equation.\n\\begin{proof}\nStart with the general feedback equation \\( f ~ x = g ~ f ~ x \\).\nDo eta conversion, producing \\( f = g ~ f \\).\nThen, rename \\(f\\) to \\(x\\) and rename \\(g\\) to \\(f\\),\nproducing \\( x = f ~ x \\),\nwhich is the general fixed point equation.\n\\end{proof}\n\\end{m:thm}\n\nThat theorem relates cybernetics and fixed point theory.\nThus a feedback is a fixed point of a higher-order function\nin the sense that if \\(f = g f\\), then \\(f\\) is a fixed point of \\(g\\).\n\n\\begin{m:lem}\n    If \\( f~x = g~x \\) for all \\(x\\), then \\(f = g\\).\n    \\begin{proof}\n        We can prove this using set theory.\n        A function is an injective relation.\n        A relation is a set of ordered pairs.\n    \\end{proof}\n\\end{m:lem}\n\nGenerally, the equation \\(f~x = x\\) doesn't hold for all \\(x\\),\nbecause if it does, \\(f\\) becomes the identity function.\nThere are two points of view:\nyou can fix \\(f\\) and let \\(x\\) vary, or you can fix \\(x\\) and let \\(f\\) vary. 
\n\nThe set of fixed points of \\(f\\) is \\(\\sbn{x}{x = f~x}\\).\n\n\\begin{m:lem}\n    If \\( x = f~x \\) for all \\(x\\),\n    then \\(f\\) is the identity function.\n\\begin{proof}\n    The step from the second to the third equation\n    comes from the previous lemma.\n    \\begin{align*}\n        x &= f~x & \\text{from the antecedent of this lemma}\n        \\\\\n        \\fid~x &= f~x & \\text{since \\(\\fid~x = x\\) by definition}\n        \\\\\n        \\fid &= f\n    \\end{align*}\n\\end{proof}\n\\end{m:lem}\n\n\\section{Recurrence relations are an example of feedback.}\n\nConsider the function \\( f~x = y \\).\nIf we know \\(x\\), we can compute \\(y\\).\nIf we know \\(y\\), we can limit the possibilities of \\(x\\).\nThey depend on each other.\nBut they aren't feedback.\nWhy?\nWhat do we mean by `depend'?\n\nUsually we can rewrite a recurrence relation as a fixed point equation.\nFor example, we can rewrite \\( x_k = f~x_{k-1} \\)\nto another equation \\( x = f \\circ (d~x) \\),\nand then to \\( x = (f \\circ d)~x \\),\nwhere \\( d \\) is the discrete unit delay function,\nwhich is defined as \\( d~x~k = x~(k-1) \\).\nRecurrence relations describe feedback.\nA difference equation is a special case of a recurrence relation.\nA difference equation is a discrete version of a differential equation.\n\n\\[\n    f~x = c \\cdot (s - x)\n\\]\n\n\\section{Closed-loop control is feedback.}\n\nA control system has input signal \\(x\\), control signal \\(y\\), output parameter \\(z\\),\ncontrol function \\(c\\),\nresponse function \\(r\\),\nand follows the equations\n\\( z~t = r~z~y~t \\)\nand\n\\( y~t = c~y~x~t \\)\nwhere \\(t\\) represents time.\nWe use two equations to model the case\nthat changing the control signal\ndoes not instantaneously change the output parameter.\nThe control function affects the output parameter indirectly through the control signal.\nThe control function cannot directly change the output parameter.\nThe response function models physical limitations.\n\\begin{align*}\n    z &= r~z~y\n    \\\\ y &= c~y~x\n\\end{align*}\nwhich can also be stated as the vector feedback equation\n\\begin{align*}\n    \\bmat{z \\\\ y} &= \\bmat{r~z~y \\\\ c~y~x}\n\\end{align*}\n\nThat system of equations suggests that a control system is just two feedback loops.\nA control system is a system of feedback equations.\nHowever, a system of feedback equations is still a special case of the general feedback equation.\n\n\\section{Is this a general feedback equation?}\n\nAn equation \\(f~x = y\\) is a feedback equation iff \\(f\\) appears in \\(y\\),\nthat is if \\(f\\) appears in both sides of the equation.\nA feedback equation is just another name for a functional equation.\nA feedback equation describes a subset of implicit functions.\n\nThe \\emph{general feedback equation} is \\(f ~ x = g ~ f ~ x\\).\nIt can model every feedback system.\nWe give the name \\emph{system function} to \\(f\\)\nand \\emph{wiring function} to \\(g\\).\n\nAn \\emph{open} system or a \\emph{pure feedforward} system\nis a system whose \\(g\\) does not use \\(f\\).\n\n\\section{What are examples of feedback?}\n\nThe differential equation \\( f~x = d~f~x + d~(d~f)~x \\)\nis a special case of the general feedback equation \\( f~x = g~f~x \\)\nwith \\( g~f~x = d~f~x + d~(d~f)~x \\).\nWe can also eta-convert \\(g\\) to \\( g = d + d^2 \\).\n\nThe Fibonacci equation \\( f~x = f~(x-1) + f~(x-2) \\)\nis a special case of the general feedback equation \\( f~x = g~f~x \\)\nwith \\( g~f = f \\circ (s~1) + f \\circ (s~2) \\)\nwhere 
\\(s~a~b = b - a\\) is flipped subtraction.\n\nIndeed all differential equations and difference equations\nare special cases of the general feedback equation.\n\nPID (proportional-integral-derivative)\ncontrol systems are special cases of the general feedback equation\nwith \\(g~f~x = c_p \\cdot x + c_i \\cdot i~f~0~x + c_d \\cdot d~f~x\\).\n\nIIR (infinite impulse response) filters are special cases of the general feedback equation.\n\nFunctional equations are special cases of the general feedback equation.\n\nThermostats are special cases of the general feedback equation.\n\nOrganisms, organizations, and planets are\nalso special cases of the general feedback equation.\n\n\\section{What are positive and negative feedback?}\n\nWe say that a system \\( f~x = g~f~x \\) has positive feedback iff \\( d~f~x > 0 \\).\nWe say that it has negative feedback iff \\( d~f~x < 0 \\).\n\n\\section{What is feedback from the dynamical systems point of view?}\n\nAccording to \\r{A}str\\\"om and Murray \\cite{AstromMurrayFeedback}:\n\\begin{quote}\nA \\emph{dynamical system} is a system whose behavior changes over time, often in response\nto external stimulation or forcing. The term \\emph{feedback} refers to a situation in which\ntwo (or more) dynamical systems are connected together such that each system\ninfluences the other and their dynamics are thus strongly coupled.\n\\cite[p. 1]{AstromMurrayFeedback}\n\\end{quote}\n\n\\section{What are examples of fixed points?}\n\nEverything is a fixed point of the identity function \\(\\fb{x}{x}\\).\nThe constant \\(c\\) is a fixed point of the constant function \\(\\fb{x}{c}\\).\nThe function \\(\\exp\\) is a fixed point of the real derivative operator \\(d\\)\nbecause \\(d ~ \\exp = \\exp\\).\n\nConsider \\(d^4\\), the real derivative operator applied four times.\nThe function \\(\\exp\\) is a fixed point of \\(d^4\\).\nThe sine function is another fixed point.\nThe cosine function is yet another fixed point.\n\nEigenvalue-eigenvector pairs are related to fixed points.\nIf \\(a \\cdot x = c \\cdot x\\) where \\(a\\) is a matrix, \\(x\\) is a vector, and \\(c\\) is a scalar,\nthen \\(x\\) is a fixed point of \\(g\\) where \\(g~x = (a \\cdot x) / c\\).\n", "meta": {"hexsha": "2ee29f0b068a1833d1ae63be62054e0a8a6fa142", "size": 8174, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research/feedback.tex", "max_stars_repo_name": "edom/work", "max_stars_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "research/feedback.tex", "max_issues_repo_name": "edom/work", "max_issues_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-02T18:37:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T00:55:32.000Z", "max_forks_repo_path": "research/feedback.tex", "max_forks_repo_name": "edom/work", "max_forks_repo_head_hexsha": "df55868caa436efc631e145a43e833220b8da1d0", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-10-02T15:20:22.000Z", "max_forks_repo_forks_event_max_datetime": "2018-10-02T15:20:22.000Z", "avg_line_length": 38.9238095238, "max_line_length": 96, "alphanum_fraction": 0.6858331294, "num_tokens": 2314, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5643183426637195}}
{"text": "%!TEX program = xelatex\n\n\\documentclass[12pt,a4paper]{article}\n\\usepackage{xeCJK}\n\\usepackage{amsmath}\n%\\setmainfont{Times New Roman}\n\\usepackage{setspace}\n\\usepackage{caption}\n\\usepackage{graphicx, subfig}\n\\usepackage{float}\n\\usepackage{listings}\n\\usepackage{booktabs}\n\\usepackage{setspace}%\u4f7f\u7528\u95f4\u8ddd\u5b8f\u5305\n\\usepackage{mathtools}\n\\usepackage{amsfonts}\n\\usepackage{amssymb,amsmath}\n\\newcommand{\\dd}{\\mathrm{d}}\n\\usepackage{enumitem}\n\n\\begin{document} \n\\title{homework10}\n\t\\author{11611118 \u90ed\u601d\u6e90}  \n\n\\begin{spacing}{1.2}  \n\\section{Homework 10}\n\\textit{Controlling a population}---The fish and game department in a certain state is planning to issue hunting permits to control the deer population(one deer per permit). It is known that if the deer population falls below a certain level m, the deer will become extinct. It is also known that if the deer population rises above the carrying capacity $M$, the population will decrease back to $M$ through disease and malnutrition.\n\\begin{enumerate}[label={\\textbf{\\alph*.}}]\n    \\item Discuss the reasonableness of the following model for the growth rate of the deer population as a function of time:\n    \\[\n        \\frac{\\dd P}{\\dd t} = rP(M-P)(P-m)\n    \\]\n    where $P$ is the population of the deer and $r$ is a positive constant of proportionality. Include a phase line.\n    \\\\\\\\\n    There should be $P >= 0$.  \\\\\\\\\n    While the $P = m$, the $\\frac{dP}{dt} = 0$, the $P$ will stay at value $m$.  \\\\\\\\\n    While the $P < m$, the $\\frac{dP}{dt} <= 0$, the $P$ will tend to equal 0 or stay at value $m$. As we known, if the deer population falls below a certain level $m$, the deer will become extinct. \\\\\\\\\n    While the $m < P < M$, the $\\frac{dP}{dt} > 0$ and as the $P$ increase, the $\\frac{dP}{dt}$ will firstly increase and match the maximum and then decrease, which also matches the rule of deer population development. \\\\\\\\\n    While the $P = M$, the $\\frac{dP}{dt} = 0$, the $P$ will stay at value $M$, which also matches the rule of deer population development.  \\\\\\\\\n    While the $P > M$, the $\\frac{dP}{dt} <= 0$, the $P$ will decrease until $P=M$, and then $\\frac{dP}{dt} = 0$ and $P$ will not change, which also matches the rule of deer population development. \n    \\newpage\n\n    \\item Explain how this model differs from the logistic model $\\dd P/\\dd t=rP(M-P)$. Is it better or worse than the logistic model?\n    \\\\\\\\\n    In the logistic model, if $P <= m$, the population will grow, which means that the rule if the deer population falls below a certain level $m$, the deer will become extinct was ignored in this model. But in our new model, we can simulate the extinct behavior of beer population, so our new model is much better than the logistic model, since it could remain people do not to hunt too many deer which will destroy the deer population.\\\\\\\\\n\n    \\item Show that if $P>M$ for all $t$, then $\\lim_{t\\to\\infty}P(t)=M$. \n\t\\\\\\\\\n\tIf $P > M$, the $\\frac{dP}{dt} < 0$, and if $P = M$, the $\\frac{dP}{dt} = 0$, we also known that $P$ is continuous, which means that if $P > M$ at all $t$, the $P$ will decrease until $P(t)=M$ and then become stable, so we can get that $\\lim_{t\\to\\infty}P(t)=M$.\\\\\\\\\n\n    \\item Discuss the solutions to the differential equation. What are the equilibrium points of the model? 
Explain the dependence of the steady-state value of $P$ on the initial values of $P$. About how many permits should be issued?\n    \\\\\\\\\n    Equilibrium points: $P=0$, $P=m$, and $P=M$.\\\\\n    If $P_0<m$, $P_{steady-state}=0$ \\\\\n    If $P_0=m$, $P_{steady-state}=m$ \\\\\n    If $P_0>m$, $P_{steady-state}=M$ \\\\\n    The population of deer should be kept as small as possible, but we cannot let $P<m$, so the best solution is to hold $P$ at $P=m$. The number of permits issued should be $(P_{current}-m)$.\n    \n\n\n\n\\end{enumerate}\n\\end{spacing}\n\\end{document}", "meta": {"hexsha": "c2acbdfc24a6a601dfefe0838bd56a07dcd7750e", "size": 3821, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Latex/HW10/HW10.tex", "max_stars_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_stars_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-30T11:32:36.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-30T11:32:36.000Z", "max_issues_repo_path": "Latex/HW10/HW10.tex", "max_issues_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_issues_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex/HW10/HW10.tex", "max_forks_repo_name": "c235gsy/Sustech_Mathematical-Modeling", "max_forks_repo_head_hexsha": "e2187b3d181185af4927255c50b4c08ba2a5fb3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.6507936508, "max_line_length": 441, "alphanum_fraction": 0.6888249149, "num_tokens": 1133, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5643183390530399}}
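The phase-line reasoning in the answers above can be checked numerically. Below is a minimal forward-Euler sketch; the values of $r$, $m$, $M$, the step size, and the horizon are illustrative assumptions, not part of the assignment.
\begin{verbatim}
def simulate(P0, r=1e-6, m=100.0, M=1000.0, dt=1e-3, steps=100_000):
    # Forward-Euler integration of dP/dt = r * P * (M - P) * (P - m).
    P = P0
    for _ in range(steps):
        P += dt * r * P * (M - P) * (P - m)
    return P

for P0 in (50.0, 100.0, 500.0, 1500.0):
    print(P0, "->", round(simulate(P0), 2))
# expected limits: P0 < m -> 0, P0 = m -> m, m < P0 <= M -> M, P0 > M -> M
\end{verbatim}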
{"text": "Thesis aimed to cover, explain and experiment with new approaches to facial recognition. The research in Chapter\\,\\ref{chapter:research} listed and covered mathematical and engineering principles of construction of convolutional neural networks. It stumbled upon some drawbacks of such design and proposed available solutions. It discusses convolutional neural networks as well as the newest capsule network approach. This throughout review of principles behind successful identification is later transformed into a state-of-the-art market scan, providing a quick insight into publicly available solutions as well as available frameworks. Fundamental benefits and advantages of each framework and solution is listed in the Chapter\\,\\ref{chapter:solutions}.\n\nThe very same chapter also provides an overview of available training data sets are listed, with focus on in-the-wild image data. This review includes a short description of each, accompanied by basic metric. Large scale face databases are listed as well.\n\nThis knowledge is later leveraged to create own implementation of a capsule network. The Chapter\\,\\ref{chapter:implementation} covers the actual design and implementation of this solution. A part from this implementation we also demonstrated how such model can be trained and shown the results.\n\n\\section{Experiment discussion}\nOverall our implementation proved itself to be successful in recognition of 75\\,\\% individuals when trained with 11 identities. When trained to learn 42 individuals, the achieved performance drops to  This accuracy decreases dramatically with either more identities added or less training data per identity. These effects correlate but the actual cause may be n one factor only. That would require testing against a data set greater in magnitude, while balanced in amount of images per identity.\n\nIn the case of a model trained to recognize 11 different identities, we had available 50 or more images per each individual. These images covered different settings, angles and poses. This might have proved to be a great benefit to our model. On the other hand, the fewer amount of individuals to recognise there is much less parameters to be learned in the network and therefore the error margin is smaller. And since our data are small images of $32\\times32$ pixels, the capsule network struggles to successfully match a greater amount of identities.\n\n\\section{Improvements and suggestions}\n\nA capsule network provides great power at smaller scales, however, when it is facing big problems, it demands great computational powers and resources. Hence quick prototyping and full scale training result in big differences in network configurations and therefore the observed behaviours. To achieve better results with this type of network, over more identities, it would be necessary to provide sufficient amount of input data, great GPU resources. Enhancing detail on the input data form $32\\times32$ pixels to more, comes with a great cost as well, since nearly every layer's parameter count is dependent on amount of input pixels. That means the resource demand is raising again. This problem may seem easy to bypass\\,--\\,forced retraining of the capsule network on a new label. This is topic is even newer than capsule networks itself and so far was not solved sufficiently.\n\nCapsule networks are a still young approach to machine learning. First announcement of this approach is dated to 2017, which didn't yet provided enough time for this technology to mature. 
It features new, never-before-tried algorithms and requires more advanced mathematics and thinking than the standard, by now classical, convolutional neural network. This thesis covered one of the first attempts to publicize and advertise the possibilities and capabilities of a capsule neural network in the field of facial recognition. The success rate of the experiments does not reach the state of the art in the field; however, the results can hopefully serve as a basis for future research.\n", "meta": {"hexsha": "ad850bb93baf4107ef09251b402ffc714561ac6e", "size": 4006, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "project-16-chapter-conclusion.tex", "max_stars_repo_name": "tumido/capsnet-face-thesis", "max_stars_repo_head_hexsha": "eed6796b63ba5d4cc3d178850e09b00085c660cf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "project-16-chapter-conclusion.tex", "max_issues_repo_name": "tumido/capsnet-face-thesis", "max_issues_repo_head_hexsha": "eed6796b63ba5d4cc3d178850e09b00085c660cf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project-16-chapter-conclusion.tex", "max_forks_repo_name": "tumido/capsnet-face-thesis", "max_forks_repo_head_hexsha": "eed6796b63ba5d4cc3d178850e09b00085c660cf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 235.6470588235, "max_line_length": 882, "alphanum_fraction": 0.8220169745, "num_tokens": 736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5643183379310409}}
{"text": "\\chapter{Definition of terms}\n\n\n\n\\begin{center}\n\\begin{longtable}{lp{10cm}}\n\\caption{Definition of terms.}\\\\\n\\label{tab:definition-of-terms}\n\\vspace{1em}\n%\\hline\n%\\textbf{First entry} & \\textbf{Second entry}\\\\\n%\\hline\n\\endfirsthead\n\\multicolumn{2}{c}{\\captionlabelfont\\captionfont\\tablename\\  \\thetable{}: \\rmfamily Definition of terms (continued).} \\\\\n\\vspace{1em}\n%\\hline\n%\\textbf{First entry} & \\textbf{Second entry} \\\\\n%\\hline\n\\endhead\n%\\hline \n\\multicolumn{2}{r}{\\textit{Continued on next page}} \\\\\n\\endfoot\n%\\hline\n\\endlastfoot\n$f(\\cdot{})$&true state operator (nonlinear function)\\\\\n$\\hat{f}(\\cdot{})$&supposed state operator (nonlinear function)\\\\\n$h(\\cdot{})$&true measurement operator (nonlinear function)\\\\\n$\\hat{h}(\\cdot{})$&supposed measurement operator (nonlinear function)\\\\\n$g(\\cdot{})$&nonlinear transformation of the model output\\\\\n$\\boldsymbol\\Theta$&constrained parameter space\\\\\n$\\mathbf{\\Omega}$&constrained state space\\\\\n$\\mathbb{R}$&real number space\\\\\n$\\boldsymbol\\theta$&true parameter values pertaining to the state operator\\\\\n$\\hat{\\boldsymbol\\theta}$&estimate of the parameter values pertaining to the state operator\\\\\n$\\boldsymbol\\phi$&true parameter values pertaining to the measurement operator\\\\\n$\\sigma$&standard deviation of the normal distribution\\\\\n$\\mu$&mean of the normal distribution\\\\\n$\\nu_{t+1}$&random perturbation of the true output at time $t$\\\\\n$\\psi_{t+1}$&random perturbation of the true forcing at time $t$\\\\\n$\\omega_{t+1}$&random perturbation of the true state at time $t$\\\\\n$u_t$&true forcing at time $t$\\\\\n$\\tilde{u}_t$&observed forcing at time $t$\\\\\n$x_t$&true state at time $t$\\\\\n$\\tilde{x}_t$&observed state at time $t$\\\\\n$\\hat{x}_t$&predicted state at time $t$\\\\\n$y_t$&true output at time $t$\\\\\n$\\tilde{y}_t$&observed output at time $t$\\\\\n$\\hat{y}_t$&predicted output at time $t$\\\\\n$\\boldsymbol\\varepsilon$&observation-prediction residual vector\\\\\n$\\mathcal{N}$&normal probability distribution\\\\\n$L(\\cdot{})$&log likelihood\\\\\n$p(\\boldsymbol\\theta|\\tilde{\\mathbf{x}})$&posterior probability of $\\boldsymbol{\\theta}$ given the observations $\\tilde{\\mathbf{x}}$\\\\\n$p(\\boldsymbol\\theta)$&prior probability of $\\boldsymbol{\\theta}$\\\\\n$p(\\tilde{\\mathbf{x}}|\\boldsymbol\\theta)$&likelihood of the observations $\\tilde{\\mathbf{x}}$ given the parameters $\\boldsymbol{\\theta}$\\\\\n$p(\\tilde{\\mathbf{x}})$&probability of the observations $\\tilde{\\mathbf{x}}$, or \\textit{evidence}\\\\\n$s^2$&estimator of variance $\\sigma^2$\\\\\n$n_p$&number of parameters in the vector $\\hat{\\boldsymbol{\\theta}}$\\\\\n$n_o$&number of observations in the vector $\\tilde{\\mathbf{x}}$\\\\\n$n_e$&number of evaluations of $\\hat{f}(\\cdot{})$ within a parameter optimization context\\\\\n\\end{longtable}\n\\end{center}\n\n", "meta": {"hexsha": "1b3d2795a620b0706003d3ebd0d4c981ad2d5c5e", "size": 2725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "syllabus/tex/appendices/definition-of-terms.tex", "max_stars_repo_name": "jspaaks/inverse-modeling-2017", "max_stars_repo_head_hexsha": "f60bc1848734c3560c7e6051216bb49b35535f48", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "syllabus/tex/appendices/definition-of-terms.tex", "max_issues_repo_name": 
"jspaaks/inverse-modeling-2017", "max_issues_repo_head_hexsha": "f60bc1848734c3560c7e6051216bb49b35535f48", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2017-02-21T09:45:01.000Z", "max_issues_repo_issues_event_max_datetime": "2017-03-22T08:47:56.000Z", "max_forks_repo_path": "syllabus/tex/appendices/definition-of-terms.tex", "max_forks_repo_name": "jspaaks/inverse-modeling-2017", "max_forks_repo_head_hexsha": "f60bc1848734c3560c7e6051216bb49b35535f48", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.253968254, "max_line_length": 138, "alphanum_fraction": 0.7181651376, "num_tokens": 851, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.785308580887758, "lm_q1q2_score": 0.564318333198362}}
{"text": "\\subsection{Cox Proportional Hazard Regression Model}\n\n\\noindent{\\bf Description}\n\\smallskip\n\n\nThe Cox (proportional hazard or PH) is a semi-parametric statistical approach commonly used for analyzing survival data.\nUnlike non-parametric approaches, e.g., the Kaplan-Meier estimates (Section \\ref{sec:kaplan-meier}), which can be used to analyze single sample of survival data or to compare between groups of survival times, the Cox PH models the dependency of the survival times on the values of {\\it explanatory variables} (i.e., covariates) recorded for each individual at the time origin. Our focus is on covariates that do not change value over time, i.e., time-independent covariates, and that may be categorical (ordinal or nominal) as well as continuous-valued. \\\\  \n\n\n\\smallskip\n\\noindent{\\bf Usage}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\it%\n{\\tt{}-f }path/\\/{\\tt{}Cox.dml}\n{\\tt{} -nvargs}\n{\\tt{} X=}path/file\n{\\tt{} TE=}path/file\n{\\tt{} F=}path/file\n{\\tt{} R=}path/file\n{\\tt{} M=}path/file\n{\\tt{} S=}path/file\n{\\tt{} T=}path/file\n{\\tt{} COV=}path/file\n{\\tt{} RT=}path/file\n{\\tt{} XO=}path/file\n{\\tt{} MF=}path/file\n{\\tt{} alpha=}double\n{\\tt{} fmt=}format\n\n}\n\n\\smallskip\n\\noindent{\\bf Arguments --- Model Fitting/Prediction}\n\\begin{Description}\n\\item[{\\tt X}:]\nLocation (on HDFS) to read the input matrix of the survival data containing: \n\\begin{Itemize}\n\t\\item timestamps,\n\t\\item whether event occurred (1) or data is censored (0),\n\t\\item feature vectors\n\\end{Itemize}\n\\item[{\\tt Y}:]\nLocation (on HDFS) to the read matrix used for prediction \n\\item[{\\tt TE}:]\nLocation (on HDFS) to read the 1-column matrix $TE$ that contains the column indices of the input matrix $X$ corresponding to timestamps (first entry) and event information (second entry)\n\\item[{\\tt F}:]\nLocation (on HDFS) to read the 1-column matrix $F$ that contains the column indices of the input matrix $X$ corresponding to the features to be used for fitting the Cox model\n\\item[{\\tt R}:] (default:\\mbox{ }{\\tt \" \"})\nIf factors (i.e., categorical features) are available in the input matrix $X$, location (on HDFS) to read matrix $R$ containing the start (first column) and end (second column) indices of each factor in $X$;\nalternatively, user can specify the indices of the baseline level of each factor which needs to be removed from $X$. 
If $R$ is not provided, by default all variables are considered to be continuous-valued.\n\\item[{\\tt M}:]\t\t\t\t\t\t\t\nLocation (on HDFS) to store the results of Cox regression analysis including regression coefficients $\\beta_j$s, their standard errors, confidence intervals, and $P$-values  \n\\item[{\\tt S}:] (default:\\mbox{ }{\\tt \" \"})\nLocation (on HDFS) to store a summary of some statistics of the fitted model including number of records, number of events, log-likelihood, AIC, Rsquare (Cox \\& Snell), and maximum possible Rsquare \n\\item[{\\tt T}:] (default:\\mbox{ }{\\tt \" \"})\nLocation (on HDFS) to store the results of Likelihood ratio test, Wald test, and Score (log-rank) test of the fitted model\n\\item[{\\tt COV}:]\nLocation (on HDFS) to store the variance-covariance matrix of $\\beta_j$s; note that parameter {\\tt COV} needs to be provided as input to prediction.\n\\item[{\\tt RT}:]\nLocation (on HDFS) to store matrix $RT$ containing the order-preserving recoded timestamps from $X$; note that parameter {\\tt RT} needs to be provided as input for prediction.\n\\item[{\\tt XO}:]\nLocation (on HDFS) to store the input matrix $X$ ordered by the timestamps; note that parameter {\\tt XO} needs to be provided as input for prediction.\n\\item[{\\tt MF}:]\nLocation (on HDFS) to store column indices of $X$ excluding the baseline factors if available; note that parameter {\\tt MF} needs to be provided as input for prediction.\n\\item[{\\tt P}:] \nLocation (on HDFS) to store matrix $P$ containing the results of prediction\n\\item[{\\tt alpha}:] (default:\\mbox{ }{\\tt 0.05})\nParameter to compute a $100(1-\\alpha)\\%$ confidence interval for $\\beta_j$s \n\\item[{\\tt tol}:] (default:\\mbox{ }{\\tt 0.000001})\nTolerance (epsilon) used in the convergence criterion\n\\item[{\\tt moi}:] (default:\\mbox{ }{\\tt 100})\nMaximum number of outer (Fisher scoring) iterations\n\\item[{\\tt mii}:] (default:\\mbox{ }{\\tt 0})\nMaximum number of inner (conjugate gradient) iterations, or~0 if no maximum\nlimit provided\n\\item[{\\tt fmt}:] (default:\\mbox{ }{\\tt \"text\"})\nMatrix file output format, such as {\\tt text}, {\\tt mm}, or {\\tt csv};\nsee read/write functions in SystemML Language Reference for details.\n\\end{Description}\n\n\n \\smallskip\n \\noindent{\\bf Usage: Cox Prediction}\n \\smallskip\n \n {\\hangindent=\\parindent\\noindent\\it%\n \t{\\tt{}-f }path/\\/{\\tt{}Cox-predict.dml}\n \t{\\tt{} -nvargs}\n \t{\\tt{} X=}path/file\n \t{\\tt{} RT=}path/file\n \t{\\tt{} M=}path/file\n \t{\\tt{} Y=}path/file\n \t{\\tt{} COV=}path/file\n \t{\\tt{} MF=}path/file\n \t{\\tt{} P=}path/file\n \t{\\tt{} fmt=}format\n \t\n }\\smallskip\n \n% \\noindent{\\bf Arguments --- Prediction}\n% \\begin{Description}\n% \t\\item[{\\tt X}:]\n%\tLocation (on HDFS) to read the input matrix of the survival data sorted by the timestamps including: \n%\t\\begin{Itemize}\n%\t\t\\item timestamps,\n%\t\t\\item whether event occurred (1) or data is censored (0),\n%\t\t\\item feature vectors\n%\t\\end{Itemize}\n% \t\\item[{\\tt RT}:]\n% \tLocation to read column matrix $RT$ containing the (order preserving) recoded timestamps from X (output by {\\tt Cox.dml})\n% \t\\item[{\\tt M}:]\n% \tLocation to read matrix $M$ containing the fitted Cox model (see below for the schema) \n% \t\\item[{\\tt Y}:]\n%\tLocation to the read matrix used for prediction    \n% \t\\item[{\\tt COV}:] \n% \tLocation to read the variance-covariance matrix of the regression coefficients (output by {\\tt Cox.dml})\n% \t\\item[{\\tt MF}] \n% \tLocation to store 
column indices of $X$ excluding the baseline factors if available (output by {\\tt Cox.dml})\n% \t\\item[{\\tt P}] \n% \tLocation to store matrix $P$ containing the results of prediction\n% \t\\item[{\\tt fmt}:] (default:\\mbox{ }{\\tt \"text\"})\n% \tMatrix file output format, such as {\\tt text}, {\\tt mm}, or {\\tt csv}\n% \\end{Description}\n \n\n\n\\noindent{\\bf Details}\n\\smallskip\n\n \nIn the Cox PH regression model, the relationship between the hazard function---i.e., the instantaneous rate of event occurrence at a given time---and the covariates is described as\n\\begin{equation}\nh_i(t)=h_0(t)\\exp\\Bigl\\{ \\sum_{j=1}^{p} \\beta_jx_{ij} \\Bigr\\}, \\label{eq:coxph}\n\\end{equation} \nwhere the hazard function for the $i$th individual ($i\\in\\{1,2,\\ldots,n\\}$) depends on a set of $p$ covariates $x_i=(x_{i1},x_{i2},\\ldots,x_{ip})$, whose importance is measured by the magnitude of the corresponding coefficients \n$\\beta=(\\beta_1,\\beta_2,\\ldots,\\beta_p)$. The term $h_0(t)$ is the baseline hazard and corresponds to the hazard when all covariates equal 0. \nIn the Cox PH model the hazard function for the individuals may vary over time; however, the baseline hazard is estimated non-parametrically and can take any form.\nNote that re-writing~(\\ref{eq:coxph}) we have \n\\begin{equation*}\n\\log\\biggl\\{ \\frac{h_i(t)}{h_0(t)} \\biggr\\} = \\sum_{j=1}^{p} \\beta_jx_{ij}.\n\\end{equation*}\nThus, the Cox PH model is essentially a linear model for the logarithm of the hazard ratio, and the hazard of event for any individual is a constant multiple of the hazard of any other. \n%Consequently, the Cox model is a proportional hazard model.\nWe follow similar notation and methodology as in~\\cite[Sec.~3]{collett2003:kaplanmeier}.\nFor completeness we briefly discuss the equations used in our implementation.\n\n\n\\textbf{Factors in the model.} \nNote that if some of the feature variables are factors they need to be {\\it dummy coded} as follows. \nLet $\\alpha$ be such a variable (i.e., a factor) with $a$ levels. \nWe introduce $a-1$ indicator (or dummy coded) variables $X_2,X_3\\ldots,X_a$ with $X_j=1$ if $\\alpha=j$ and 0 otherwise, for $j\\in\\{ 2,3,\\ldots,a\\}$.\nIn particular, one of the $a$ levels of $\\alpha$ will be considered as the baseline and is not included in the model.\nIn our implementation, the user can specify a baseline level for each factor (as the choice of baseline level is arbitrary). \nOn the other hand, if for a given factor $\\alpha$ no baseline is specified by the user, the most frequent level of $\\alpha$ will be considered as the baseline.   \n\n\n\\textbf{Fitting the model.}\nWe estimate the coefficients of the Cox model by minimizing the negative log-likelihood.\nIn particular, the Cox PH model is fitted using a trust region Newton method with conjugate gradient~\\cite{Nocedal2006:Optimization}.\n%The likelihood for the PH hazard model is given by\n%\\begin{equation*}\n%\\prod_{i=1}^{n} {\\Bigg\\{ \\frac{\\exp(\\vec{\\beta}^\\top\\vec{x_i})}{\\sum_{l\\in %R(t_i)\\exp(\\vec{\\beta}\\vec{x}_l)}} \\Biggr\\}}^\\delta_i,\n%\\end{equation*}\n%where $\\delta_i$ is an event indicator, which is 0 if the $i$th survival time is censored or 1 otherwise, and $R(t_i)$ is the risk set defined as the set of individuals who die at time $t_i$ or later.\nDefine the risk set $R(t_j)$ at time $t_j$ to be the set of individuals who die at time $t_j$ or later. \nThe PH model assumes that survival times are distinct. 
In order to handle tied observations\nwe use the \\emph{Breslow} approximation of the likelihood function\n\\begin{equation*}\n\\mathcal{L}=\\prod_{j=1}^{r} \\frac{\\exp(\\beta^\\top s_j)}{{\\biggl\\{ \\sum_{l\\in R(t_j)} \\exp(\\beta^\\top x_l) \\biggr\\}}^{d_j}},\n\\end{equation*}\nwhere $d_j$ is the number of individuals who die at time $t_j$ and $s_j$ denotes the element-wise sum of the covariates for those individuals who die at time $t_j$, $j=1,2,\\ldots,r$, i.e.,\nthe $h$th element of $s_j$ is given by $s_{hj}=\\sum_{k=1}^{d_j}x_{hjk}$, where $x_{hjk}$ is the value of the $h$th variable ($h\\in \\{1,2,\\ldots,p\\}$) for the $k$th of the $d_j$ individuals ($k\\in\\{ 1,2,\\ldots,d_j \\}$) who die at the $j$th death time ($j\\in\\{ 1,2,\\ldots,r \\}$).  \n\n\\textbf{Standard error and confidence interval for coefficients.}\nNote that the variance-covariance matrix of the estimated coefficients $\\hat{\\beta}$ can be approximated by the inverse of the Hessian evaluated at $\\hat{\\beta}$. The square roots of the diagonal elements of this matrix are the standard errors of the estimated coefficients.  \nOnce the standard errors of the coefficients $se(\\hat{\\beta})$ are obtained, we can compute a $100(1-\\alpha)\\%$ confidence interval using $\\hat{\\beta}\\pm z_{\\alpha/2}se(\\hat{\\beta})$, where $z_{\\alpha/2}$ is the upper $\\alpha/2$-point of the standard normal distribution.\nIn {\\tt Cox.dml}, we utilize the built-in function {\\tt inv()} to compute the inverse of the Hessian. Note that this built-in function can be used only if the Hessian fits in the main memory of a single machine.   \n\n\n\\textbf{Wald test, likelihood ratio test, and log-rank test.}\nIn order to test the {\\it null hypothesis} that all of the coefficients $\\beta_j$s are 0, our implementation provides three statistical tests: the {\\it Wald test}, the {\\it likelihood ratio test}, and the {\\it log-rank test} (also known as the {\\it score test}). \nLet $p$ be the number of coefficients.\nThe Wald test is based on the test statistic ${\\hat{\\beta}}^2/{se(\\hat{\\beta})}^2$, which is compared to percentage points of the Chi-squared distribution to obtain the $P$-value.\nThe likelihood ratio test relies on the test statistic $-2\\log\\{ {L}(\\textbf{0})/{L}(\\hat{\\beta}) \\}$ ($\\textbf{0}$ denotes a zero vector of size $p$) which has an approximate Chi-squared distribution with $p$ degrees of freedom under the null hypothesis that all $\\beta_j$s are 0.\nThe log-rank test is based on the test statistic \n$l=\\nabla^\\top L(\\textbf{0}) {\\mathcal{H}}^{-1}(\\textbf{0}) \\nabla L(\\textbf{0})$, \nwhere $\\nabla L(\\textbf{0})$ is the gradient of $L$ and $\\mathcal{H}(\\textbf{0})$ is the Hessian of $L$ evaluated at \\textbf{0}. Under the null hypothesis that $\\beta=\\textbf{0}$, $l$ has a Chi-squared distribution on $p$ degrees of freedom.\n\n\n% Scoring\n\\textbf{Prediction.}\nOnce the parameters of the model are fitted, we compute the following predictions together with their standard errors\n\\begin{itemize}\n\t\\item linear predictors,\n\t\\item risk, and\n\t\\item estimated cumulative hazard. \n\\end{itemize}\nGiven the feature vector $X_i$ for individual $i$, we obtain the above predictions at time $t$ as follows.\nThe linear predictors (denoted as $\\mathcal{LP}$) as well as the risk (denoted as $\\mathcal{R}$) are computed relative to a baseline whose feature values are the mean of the values in the corresponding features.\nLet $X_i^\\text{rel} = X_i - \\mu$, where $\\mu$ is a row vector that contains the mean values for each feature.  
\nWe have  $\\mathcal{LP}=X_i^\\text{rel} \\hat{\\beta}$ and $\\mathcal{R}=\\exp\\{ X_i^\\text{rel}\\hat{\\beta} \\}$.\nThe standard errors of the linear predictors $se\\{\\mathcal{LP} \\}$ are computed as the square root of ${(X_i^\\text{rel})}^\\top V(\\hat{\\beta}) X_i^\\text{rel}$, and the standard errors of the risk $se\\{ \\mathcal{R} \\}$ are given by the square root of \n${(X_i^\\text{rel} \\odot \\mathcal{R})}^\\top V(\\hat{\\beta}) (X_i^\\text{rel} \\odot \\mathcal{R})$, where $V(\\hat{\\beta})$ is the variance-covariance matrix of the coefficients and $\\odot$ is the element-wise multiplication.     \n\nWe estimate the cumulative hazard function for individual $i$ by\n\\begin{equation*}\n\\hat{H}_i(t) = \\exp(\\hat{\\beta}^\\top X_i) \\hat{H}_0(t), \n\\end{equation*}\nwhere $\\hat{H}_0(t)$ is the \\emph{Breslow estimate} of the cumulative baseline hazard given by\n\\begin{equation*}\n\\hat{H}_0(t) = \\sum_{j=1}^{k} \\frac{d_j}{\\sum_{l\\in R(t_{(j)})} \\exp(\\hat{\\beta}^\\top X_l)}.\n\\end{equation*}\nIn the equation above, as before, $d_j$ is the number of deaths, and $R(t_{(j)})$ is the risk set at time $t_{(j)}$, for $t_{(k)} \\leq t \\leq t_{(k+1)}$, $k=1,2,\\ldots,r-1$.\nThe standard error of $\\hat{H}_i(t)$ is obtained using the estimate\n\\begin{equation*}\nse\\{ \\hat{H}_i(t) \\} = \\sum_{j=1}^{k} \\frac{d_j}{ {\\left[ \\sum_{l\\in R(t_{(j)})} \\exp(X_l\\hat{\\beta}) \\right]}^2 } + J_i^\\top(t) V(\\hat{\\beta}) J_i(t),\n\\end{equation*}\nwhere \n\\begin{equation*}\nJ_i(t) = \\sum_{j=1}^{k} d_j \\frac{\\sum_{l\\in R(t_{(j)})} (X_l-X_i)\\exp \\{ (X_l-X_i)\\hat{\\beta} \\}}{ {\\left[ \\sum_{l\\in R(t_{(j)})} \\exp\\{(X_l-X_i)\\hat{\\beta}\\} \\right]}^2  },\n\\end{equation*}\nfor $t_{(k)} \\leq t \\leq t_{(k+1)}$, $k=1,2,\\ldots,r-1$. \n\n\n\\smallskip\n\\noindent{\\bf Returns}\n\\smallskip\n\n  \nBelow we list the results of fitting a Cox regression model stored in matrix {\\tt M} with the following schema:\n\\begin{itemize}\n\t\\item Column 1: estimated regression coefficients $\\hat{\\beta}$\n\t\\item Column 2: $\\exp(\\hat{\\beta})$\n\t\\item Column 3: standard error of the estimated coefficients $se\\{\\hat{\\beta}\\}$\n\t\\item Column 4: ratio of $\\hat{\\beta}$ to $se\\{\\hat{\\beta}\\}$ denoted by $Z$  \n\t\\item Column 5: $P$-value of $Z$ \n\t\\item Column 6: lower bound of $100(1-\\alpha)\\%$ confidence interval for $\\hat{\\beta}$\n\t\\item Column 7: upper bound of $100(1-\\alpha)\\%$ confidence interval for $\\hat{\\beta}$.\n\\end{itemize}\nNote that above $Z$ is the Wald test statistic which is asymptotically standard normal under the hypothesis that $\\beta=\\textbf{0}$.\n\nMoreover, {\\tt Cox.dml} outputs two log files {\\tt S} and {\\tt T} containing summary statistics of the fitted model as follows.\nFile {\\tt S} stores the following information \n\\begin{itemize}\n\t\\item Line 1: total number of observations\n\t\\item Line 2: total number of events\n\t\\item Line 3: log-likelihood (of the fitted model)\n\t\\item Line 4: AIC\n\t\\item Line 5: Cox \\& Snell Rsquare\n\t\\item Line 6: maximum possible Rsquare. \n\\end{itemize}\nAbove, the AIC is computed as in (\\ref{eq:AIC}),\nthe Cox \\& Snell Rsquare is equal to $1-\\exp\\{ -l/n \\}$, where $l$ is the log-rank test statistic as discussed above and $n$ is the total number of observations,\nand the maximum possible Rsquare is computed as $1-\\exp\\{ -2 L(\\textbf{0})/n \\}$, where $L(\\textbf{0})$ denotes the initial likelihood. 
\n\n\nFile {\\tt T} contains the following information\n\\begin{itemize}\n\t\\item Line 1: Likelihood ratio test statistic, degree of freedom of the corresponding Chi-squared distribution, $P$-value\n\t\\item Line 2: Wald test statistic, degree of freedom of the corresponding Chi-squared distribution, $P$-value\n\t\\item Line 3: Score (log-rank) test statistic, degree of freedom of the corresponding Chi-squared distribution, $P$-value.\n\\end{itemize}\n\nAdditionally, the following matrices will be stored. Note that these matrices are required for prediction.\n\\begin{itemize}\n\t \\item Order-preserving recoded timestamps $RT$, i.e., contiguously numbered from 1 $\\ldots$ \\#timestamps\n\t \\item Feature matrix ordered by the timestamps $XO$\n\t \\item Variance-covariance matrix of the coefficients $COV$\n\t \\item Column indices of the feature matrix with baseline factors removed (if available) $MF$.  \n\\end{itemize}\n\n\n\\textbf{Prediction}\nFinally, the results of prediction is stored in Matrix $P$ with the following schema\n\\begin{itemize}\n\t\\item Column 1: linear predictors\n\t\\item Column 2: standard error of the linear predictors\n\t\\item Column 3: risk\n\t\\item Column 4: standard error of the risk\n\t\\item Column 5: estimated cumulative hazard\n\t\\item Column 6: standard error of the estimated cumulative hazard.\n\\end{itemize}\n\n\n\n\n\\smallskip\n\\noindent{\\bf Examples}\n\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\tt\n\t\\hml -f Cox.dml -nvargs X=/user/biadmin/X.mtx TE=/user/biadmin/TE\n\tF=/user/biadmin/F R=/user/biadmin/R M=/user/biadmin/model.csv\n\tT=/user/biadmin/test.csv COV=/user/biadmin/var-covar.csv XO=/user/biadmin/X-sorted.mtx fmt=csv\n\t\n}\\smallskip\n\n{\\hangindent=\\parindent\\noindent\\tt\n\t\\hml -f Cox.dml -nvargs X=/user/biadmin/X.mtx TE=/user/biadmin/TE\n\tF=/user/biadmin/F R=/user/biadmin/R M=/user/biadmin/model.csv\n\tT=/user/biadmin/test.csv COV=/user/biadmin/var-covar.csv \n\tRT=/user/biadmin/recoded-timestamps.csv XO=/user/biadmin/X-sorted.csv \n\tMF=/user/biadmin/baseline.csv alpha=0.01 tol=0.000001 moi=100 mii=20 fmt=csv\n\t\n}\\smallskip\n\n\\noindent To compute predictions:\n\n{\\hangindent=\\parindent\\noindent\\tt\n\t\\hml -f Cox-predict.dml -nvargs X=/user/biadmin/X-sorted.mtx \n\tRT=/user/biadmin/recoded-timestamps.csv\n\tM=/user/biadmin/model.csv Y=/user/biadmin/Y.mtx COV=/user/biadmin/var-covar.csv \n\tMF=/user/biadmin/baseline.csv P=/user/biadmin/predictions.csv fmt=csv\n\t\n}\n\n\n", "meta": {"hexsha": "48613b7f7994d15de87a42aabcbc5aa6e493caee", "size": 18049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "system-ml/docs/Algorithms Reference/Cox.tex", "max_stars_repo_name": "alcedo/systemml", "max_stars_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-03-17T18:03:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-25T08:17:09.000Z", "max_issues_repo_path": "system-ml/docs/Algorithms Reference/Cox.tex", "max_issues_repo_name": "alcedo/systemml", "max_issues_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "system-ml/docs/Algorithms Reference/Cox.tex", "max_forks_repo_name": "alcedo/systemml", "max_forks_repo_head_hexsha": "4d371a6d6b52e5517b1411302af3fdd8cd3c156a", 
"max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-11-26T00:43:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-02T06:29:30.000Z", "avg_line_length": 56.403125, "max_line_length": 558, "alphanum_fraction": 0.7154967034, "num_tokens": 5479, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.6825737408694988, "lm_q1q2_score": 0.5642917618997543}}
{"text": "\\documentclass[11pt,a4paper]{report}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm,epsfig,epstopdf,titling,url,array}\n\\usepackage{enumitem}\n\\usepackage{changepage}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem*{cor}{Corollary}\n\\theoremstyle{definition}\n\\newtheorem{defn}{Definition}[section]\n\\newtheorem{conj}{Conjecture}[section]\n\\newtheorem{exmp}{Example}[section]\n\\newtheorem{exercise}{Exercise}[section]\n\\theoremstyle{remark}\n\\newtheorem*{rem}{Remark}\n\\newtheorem*{lem}{Lemma}\n\\newtheorem*{note}{Note}\n\\def\\changemargin#1#2{\\list{}{\\rightmargin#2\\leftmargin#1}\\item[]}\n\\let\\endchangemargin=\\endlist\n\\begin{document}\n\n\n\\section*{Problem}\n\\sloppy Let\u00a0$x_0 < x_1 < \u2026 < x_{n-1}$\u00a0and\u00a0$y_0 < y_1 \u2026 < y_{n - 1}$\u00a0be strictly increasing sequences of positive numbers.\u00a0 Show that for any permutation\u00a0$\\sigma$\u00a0of\u00a0$\\{0, \u2026 , n-1\\}$,\n$$\\sum_{i = 0}^{n-1} x_i y_{\\sigma(i)}< \\sum_{i = 0}^{n-1} x_i y_i$$\nunless\u00a0$\\sigma$\u00a0is the identity permutation.\u00a0 What this says is that when forming dot products of two vectors of distinct values, the product is maximized when the entries appear in the same order.\n\n\\section*{Solution}\n Let $\\sigma$ be an arbitrary permutation of \u00a0$\\{0, \u2026 , n-1\\}$ and define $f(\\sigma) = \\sum_{i = 0}^{n-1} x_i y_{\\sigma(i)}$.   It suffices to show that if $\\sigma$ is not the identity permutation, then there is another permutation $\\tau$ such that $f(\\tau) > f(\\sigma)$.    This is sufficient because there are only finitely many permutations of $\\{0, \u2026 , n-1\\}$ so $f$ must have a maximum that is attained by some permutation.  What we are showing is that no non-identity permutation can attain the maximum.\n \\\\\\\\\n We start by establishing a lemma.\n \\lem  If $x_1< x_2$ and $y_1 < y_2$ are positive real numbers then  $x_1 y_1 + x_2 y_2> x_1 y_2 + x_2 y_1$.\n \\proof Write $x_2 = x_1 + a$ and $y_2 = y_1 + b$ where $a$ and $b$ are positive.  Then $x_1 y_1 + x_2 y_2 = x_1 y_1 + (x_1 + a)(y_1 + b) = 2x_1y_1 + x_1 b + y_1a + ab$ and $x_1 y_2 + x_2 y_1 = x_1(y_1 + b) + (x_1 + a)y_1 = 2x_1y_1 + x_1b + y_1a,$ so since $a$ and $b$ are positive $ab$ is positve and the inequality is proved.\n \\\\\\\\\n Now consider an arbitrary permutation $\\sigma$ of $\\{0, \u2026 , n-1\\}$.  If $\\sigma$ is not the identity, let $i$ be the largest index such that $\\sigma(i)  \\ne i$. So for all $k > i$, $\\sigma(k) = k$.  Since permutations are 1-1 mappings, we must have $\\sigma(i) < i$ (all values above $i$ are taken by the fixed points above $i$).  Let $j = \\sigma(i)$ and let $l$ be the pre-image of $i$  under $\\sigma$,  so $\\sigma(l) = i$.  By the lemma, if we modify $\\sigma$ to make $\\sigma(i) = i$ and $\\sigma(l) = j$, the portion of $\\sum_{i = 0}^{n-1} x_i y_{\\sigma(i)}$ contributed by the indexes $i$ and $l$ will increase while the rest of the terms remain the same.  
So taking $\\tau$ to be the permutation that modifies $\\sigma$ in this way, we have $f(\\tau) > f(\\sigma)$, which establishes the main result.\n\n\n\n\n\n\n\\end{document}\n\n", "meta": {"hexsha": "452a65cd67b7184ac083ef6949fc099cf97758f5", "size": 3020, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "rewardTheRich/rewardTheRich.tex", "max_stars_repo_name": "psteitz/problems", "max_stars_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "rewardTheRich/rewardTheRich.tex", "max_issues_repo_name": "psteitz/problems", "max_issues_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-03T21:08:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-03T21:08:11.000Z", "max_forks_repo_path": "rewardTheRich/rewardTheRich.tex", "max_forks_repo_name": "psteitz/problems", "max_forks_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.652173913, "max_line_length": 792, "alphanum_fraction": 0.6887417219, "num_tokens": 1054, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6406358411176238, "lm_q2_score": 0.8807970654616711, "lm_q1q2_score": 0.5642701688859725}}
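For small $n$ the claim can be verified by brute force over all permutations. A minimal Python sketch (the example sequences are arbitrary choices, not from the problem):
\begin{verbatim}
from itertools import permutations

def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

xs = [1, 3, 4, 8]   # strictly increasing positive sequences
ys = [2, 5, 7, 9]
best = max(permutations(ys), key=lambda p: dot(xs, p))
assert list(best) == ys  # the identity ordering maximizes the dot product
print(best, dot(xs, best))
\end{verbatim}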
{"text": "\\chapter{Perturbation of inputs}\n\\section{Methodology for perturbation}\n\\begin{enumerate}\n    \\item {Zero mean IID (Independent identical distribution) noise of certain standard deviation is added to each of the $n$ points of the graph.}\n    \\item {The span of points is measured for each graph and is used for scaling the Standard deviation accordingly.}\n    \\item {Standard deviation is scaled for each graph instead of scaling the given points in view of getting better precision in calculations.}\n    \\item {The algorithm is run 20 times for each experiment and the average of tour lenght was recorded.}\n\\end{enumerate}\n\\section{Results}\n\\subsection{Perturbation with different standard deviations}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.26]{7.jpg}\n    \\caption{Performance of heuristics on few selected datasets}\n\\end{figure}\n\n\\subsection{Worst case performance of heuristics on perturbation}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.26]{8.jpg}\n    \\caption{Worst case performance among all files for different heuristics}\n\\end{figure}\n\n\\subsection{Performance of heuristics on different size graphs for each standard deviation}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[scale=0.21]{9.jpg}\n    \\includegraphics[scale=0.21]{10.jpg}\n    \\caption{Performance of heuristics on few selected datasets}\n\\end{figure}", "meta": {"hexsha": "6c8d60e7996e0c383b76893bb1779055a4d24a65", "size": 1369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "perturbation.tex", "max_stars_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_stars_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "perturbation.tex", "max_issues_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_issues_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "perturbation.tex", "max_forks_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_forks_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.6333333333, "max_line_length": 147, "alphanum_fraction": 0.7669831994, "num_tokens": 335, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355188, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5642628226511717}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{wrapfig}\n\\usepackage{float}\n\\usepackage{graphicx}\n\\usepackage[margin=1in]{geometry} \n\\usepackage{comment}\n\\usepackage{amsmath,amsthm,amssymb}\n\\usepackage{hyperref}\n% \\usepackage[vlined,linesnumbered,ruled,resetcount]{algorithm2e}\n\\usepackage{algorithm}\n\\usepackage[noend]{algpseudocode}\n\\usepackage{tikz}\n\n% the following nonsense allows you to put a comment below a big algorithm\n\\usepackage{etoolbox}\n\\makeatletter\n\\AfterEndEnvironment{algorithm}{\\let\\@algcomment\\relax}\n\\AtEndEnvironment{algorithm}{\\kern2pt\\hrule\\relax\\vskip3pt\\@algcomment}\n\\let\\@algcomment\\relax\n\\newcommand\\algcomment[1]{\\def\\@algcomment{#1}}\n\n\\renewcommand\\fs@ruled{\\def\\@fs@cfont{\\bfseries}\\let\\@fs@capt\\floatc@ruled\n  \\def\\@fs@pre{\\hrule height.8pt depth0pt \\kern2pt}%\n  \\def\\@fs@post{}%\n  \\def\\@fs@mid{\\kern2pt\\hrule\\kern2pt}%\n  \\let\\@fs@iftopcapt\\iftrue}\n\\makeatother\n\n\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\DeclareMathOperator*{\\val}{val}\n\\DeclareMathOperator*{\\MOSM}{MOSM}\n\\DeclareMathOperator*{\\WOSM}{WOSM}\n\\DeclareMathOperator*{\\best}{best}\n\\DeclareMathOperator*{\\worst}{worst}\n\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\Rgz}{\\mathbb{R}_{\\ge 0}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n\\newcommand{\\Es}[2]{\\mathbb{E}_{#1}\\left[{#2}\\right]}\n\\newcommand{\\E}[1]{\\mathbb{E}\\left[{#1}\\right]}\n\\newcommand{\\ip}[2]{\\left\\langle{#1} , {#2}\\right\\rangle}\n\n\\newcommand{\\M}{\\mathcal{M}}\n\\newcommand{\\W}{\\mathcal{W}}\n\\renewcommand{\\L}{\\mathcal{L}}\n\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{lemma}[definition]{Lemma}\n\\newtheorem{corollary}[definition]{Corollary}\n\\newtheorem{theorem}[definition]{Theorem}\n\\newtheorem{claim}[definition]{Claim}\n\\newtheorem{proposition}[definition]{Proposition}\n\n\\begin{document}\n\n% \\renewcommand{\\qedsymbol}{\\filledbox}\n \n\\title{\n  Efficiently Representing the Set of Stable Matchings by the Rotation Poset\n}\n\\author{\n  Clay Thomas\\\\\n  claytont@princeton.edu\n}\n\\maketitle\n\n\\begin{abstract}\n  Stable matchings are a classical area of study, and\n  the well-known treatment of [Gusfield and Irving. ``The stable\n    marriage problem: structure and algorithms\". MIT press, 1989]\n  gives strong structural results useful for understanding this problem.\n  Namely, the set of stable matchings for an $n\\times n$\n  instance is in a bijection with the set of downward-closed subsets of a\n  certain polynomial-sized partial order (the elements of which correspond to\n  the set of stability-preserving ``partner rotations'').\n  While Gusfield and Irving's treatment is quite elegant,\n  the proofs given are a bit long and abstract.\n  In this paper, we give a slight modification of\n  the Gusfield and Irving algorithm, emphasizing its conceptual basis\n  in the same principles as differed acceptance.\n  We prove correctness mostly by using concrete algorithmic properties.\n  We also provide a more streamlined and\n  unified construction of the partial ordering relations, correcting some\n  slight errors in the original text.\n  Our algorithm naturally handles unacceptable pairs\n  and unequal numbers of men and women.\n\n  We also discuss two recent advancements in the theory of \\emph{counting}\n  stable matchings. 
First, using basic properties of the partial\n  order on rotations (which we reprove), an exponential worst-case upper bound on the\n  number of stable matchings has been shown in \n  [Karlin, Gharan, and Weber. ``A simply exponential upper bound on the maximum\n    number of stable matchings.\" STOC, 2018].\n  Second, upper bounds for the \\emph{expected}\n  number of stable matchings when there are an unequal number of men and women\n  have recently been found in [Ashlagi, Kanoria, and Leshno.\n    ``Unbalanced random matching markets: The stark effect of competition.\"\n    Journal of Political Economy, 2017]\n  (it turns out there are far fewer matches than in the balanced case).\n  For the second result, we provide greatly simplified proofs of some\n  preliminary results in this direction.\n\\end{abstract}\n\n\\section{Introduction}\n\n  Stable matching mechanisms are ubiquitous in theory and in practice,\n  especially in the ``bipartite case'' where agents lie in two disjoint groups\n  and matches are made between members of different groups.\n  % Stable matchings present one of the rare examples in mechanism design without\n  % money where many positive results are possible.\n  The most commonly used stable matching mechanism is ``one-side-proposing\n  deferred acceptance'', which has the nice properties of being simple to\n  implement, fast to execute, and strategyproof for the proposing side\n  \\cite{DubinsMachiavelliGaleShapley81}\n  (moreover, it is the unique mechanism satisfying strategyproofness for \n  one of the sides \\cite{GaleMsMachiavelli85}).\n\n  Many structural properties are known about the set of stable matches for a\n  given instance (known as the \\emph{core} of the instance).\n  For instance, the core forms a distributive lattice with a compact\n  (and algorithmically friendly) representation,\n  and every distributive lattice is the core of some ``not too large''\n  stable marriage instance \\cite{IrvingCountingStable86}.\n  This latter fact means that, a priori, one cannot say much more\n  about the core than one can say about arbitrary distributive\n  lattices\\footnote{\n    One recent result which runs counter to this statement is given in\n    \\cite{KarlinExpUBNumberStable18}, where the authors use the \n    size of the stable marriage instance to\n    bound the number of elements in the distributive lattice of the core.\n    There, the exact meaning of the words ``not too large'' becomes important.\n  }. However, in addition to the structure that the core must necessarily have,\n  it is very interesting to know what qualities the core \\emph{probably} has.\n\n  The ``first'' randomized model of stable matching is to take preferences\n  completely uniform for each agent.\n  For a while, it's been known that in a randomized balanced market,\n  i.e. 
one with the same\n  number of agents on each side, there's probably a large core\n  and a big imbalance between the different sides -- the proposing side gets\n  matches which they favor significantly more \\cite{PittelAverageStable89}.\n  However, in a recent breakthrough paper \\cite{AshlagiUnbalancedCompetition17},\n  it was found that this distinction\n  \\emph{almost vanishes} in large markets when there is an imbalance of just\n  \\emph{one} more agent on one side than the other.\n  In this note we survey this result.\n\n  The basic setup of stable matching has many variations.\n  The central focus of this note is the setting with an unequal number of agents\n  on each side.\n  Some other important variations include adding ``unacceptable matches'' (agents do not\n  rank every agent on the other side), ``indifference'' (agents sometimes don't\n  distinguish their preference between some matches), ``many-to-one matchings''\n  (where several agents from one side are matched to one on the other),\n  and ``couples'' (where certain agents on one side express their \n  preferences/acceptable matches jointly).\n  We use the ``unacceptable matches'' concept as a technical tool,\n  but ignore the subtleties of the other variations.\n\n  \\paragraph{Relation to prior work.}\n  No claim in this paper is original, and much of our treatment and proofs are\n  very similar to that presented in \\cite{GusfieldStableStructureAlgs89}.\n  The point where we start to differ is in proving results about the rotation\n  poset: we use the concepts of women truncating their preference lists and\n  running MPDA to prove most of our claims.\n  This approach has been surveyed in \\cite{ManloveMatchPrefs13}.\n
\n  \\paragraph{Organization.}\n  In section 2, we review the basic properties of deferred acceptance and stable\n  matchings, which will motivate some of the techniques to come.\n  In section 3, we review the case of a balanced market, and discuss which parts\n  of the analysis fail for unbalanced markets.\n  In section 4, we prove some results on unbalanced markets, taking a new proof\n  approach based around deferred acceptance which is closer to the analysis for\n  the balanced market.\n  In section 5, we discuss the proof technique given in\n  \\cite{AshlagiUnbalancedCompetition17}.\n  % and describe its relationship to algorithms previously known about the\n  % structure of the core (from \\cite{GusfieldStableStructureAlgs89}).\n\n\\section{Review: Deferred Acceptance and Stable Matching}\n\n\n  We start with the basic definitions.\n  A matching market is a collection $\\M$ of ``men'' and $\\W$ of ``women'', where\n  each man $m\\in \\M$ has a ranking over women in $\\W$, represented\n  as a list ordered from most preferred to least preferred, and vice versa.\n  Lists may be partial, and agents included on the list of some $a \\in \\M\\cup\\W$\n  are called the acceptable partners of $a$.\n  We write $w_1 \\succ_m w_2$ if $w_1$ is ranked higher than $w_2$ on $m$'s list\n  (or if $w_1$ is acceptable but $w_2$ is not ranked at all).\n  We also denote the fact that $w$ is not an acceptable partner of $m$ by\n  $\\emptyset \\succ_m w$.\n\n  A matching is a one-to-one assignment of men to women, which we denote\n  by $\\mu : \\M\\cup \\W\\to \\M\\cup \\W\\cup\\{\\emptyset\\}$.\n  We write $\\mu(i) = \\emptyset$ if\n  agent $i$ is unmatched.\n\n  A matching $\\mu$ is \\emph{stable} for a set of preferences \n  $P = \\{\\succ_w\\}_{w\\in\\W} \\cup \\{\\succ_m\\}_{m\\in\\M}$\n  if no unmatched man/woman pair \n  $(m,w)\\in \\M\\times \\W$ is blocking under $P$,\n  % we do not simultaneously have\n  % $m \\succ_w \\mu(w)$ and $w \\succ_m \\mu(w)$.\n  where $(m,w)$ is called blocking if we simultaneously have\n  $m \\succ_w \\mu(w)$ and $w \\succ_m \\mu(m)$.\n  A \\emph{pair} $(m,w)$ is called stable under $P$ if $\\mu(m)=w$ in \n  \\emph{some} stable matching,\n  and $m$ is called a stable partner of $w$ (and vice-versa).\n\n  The man-proposing deferred acceptance algorithm is given by\n  Algorithm~\\ref{algoMPDA}. 
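To make the procedure concrete, here is a minimal Python sketch of MPDA\n  (our own illustration, not part of the formal development; the\n  dictionary-based representation of preference lists is an assumption of the\n  sketch):\n  \\begin{verbatim}\ndef mpda(men_prefs, women_prefs):\n    # Man-proposing deferred acceptance; returns {man: woman}.\n    # men_prefs[m] / women_prefs[w]: ordered lists of acceptable partners.\n    rank = {w: {m: i for i, m in enumerate(ps)}\n            for w, ps in women_prefs.items()}\n    next_prop = {m: 0 for m in men_prefs}  # index into m's list\n    match = {}                             # woman -> man\n    unmatched = set(men_prefs)\n    while unmatched:\n        m = unmatched.pop()\n        prefs = men_prefs[m]\n        while next_prop[m] < len(prefs):\n            w = prefs[next_prop[m]]\n            next_prop[m] += 1\n            if m not in rank[w]:\n                continue                   # m is unacceptable to w\n            cur = match.get(w)\n            if cur is None or rank[w][m] < rank[w][cur]:\n                if cur is not None:\n                    unmatched.add(cur)     # w rejects her old match\n                match[w] = m\n                break                      # m is tentatively matched\n        # if m exhausts his list, he simply stays unmatched\n    return {m: w for w, m in match.items()}\n  \\end{verbatim}\n  Note that a man re-enters the unmatched pool only when his tentative\n  partner rejects him, mirroring the while-loop of Algorithm~\\ref{algoMPDA}.\n\n  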
We provide simple proofs of the basic properties of\n  this algorithm.\n\n  \\begin{algorithm}\n    \\caption{MPDA: Man-proposing deferred acceptance algorithm}\n    \\label{algoMPDA}\n  \\begin{algorithmic}[0]\n    \\State Let $U = \\M$ be the set of unmatched men\n    \\State Let $\\mu$ be an all-empty matching\n    \\While { $U\\ne \\emptyset$ and some $m\\in U$ has not proposed to every woman\n      on his list}\n      \\State Pick such an $m$ (in any order)\n      \\State $m$ ``proposes'' to their highest-ranked woman $w$ which \n        they have not yet proposed to\n      \\If {$m \\succ_w \\mu(w)$} \n        \\State If $\\mu(w)\\ne \\emptyset$, add $\\mu(w)$ to $U$\n        \\State Set $\\mu(w) = m$, remove $m$ from $U$ \n      % \\EndIf\n      % \\If {$\\mu(h) = d_0$ and $d\\succ_h d_0$} \n      %   \\State Set $\\mu(h) = d$, remove $d$ from $U$, add $d_0$ to $U$\n      % \\Else \\ \\ $h$ remains matched to $d_0$ and $d$ remains in $U$\n      \\EndIf\n    \\EndWhile\n  \\end{algorithmic}\n  \\end{algorithm}\n\n  Intuitively, this algorithm starts with the men doing whatever they prefer\n  the most, then doing the minimal amount of work to make the matching stable.\n  Indeed, men propose in their order of preference. If a woman $w$ ever\n  rejected a man $m$ they prefer over their current match,\n  then \\emph{remained} with their current match,\n  then $(m,w)$ would clearly create an instability in the final matching.\n\n  \\begin{claim}\n    The output of MPDA is a stable matching.\n  \\end{claim}\n  \\begin{proof}\n    First, observe that the MPDA algorithm terminates\n    because every man will propose to every woman at most once.\n    The claim follows from two simple invariants of the algorithm:\n    \\begin{itemize}\n      \\item Men propose in their order of preference\n      \\item Women can only improve their tentative match over time\n        (and once they are matched, they stay matched)\n    \\end{itemize}\n    Formally, consider a pair $m\\in \\M$, $w\\in \\W$ which\n    is unmatched in the output matching\n    $\\mu$. Suppose for contradiction $w\\succ_m \\mu(m)$ and $m\\succ_w \\mu(w)$.\n    In the MPDA algorithm, $m$ would propose to $w$ before $\\mu(m)$.\n    This means that $w$ received a proposal from a man she preferred over her\n    eventual match $\\mu(w)$, a contradiction.\n  \\end{proof}\n\n  Note that this algorithm gives us a very interesting existence result: it was\n  not at all clear that stable matchings existed before we had this algorithm.\n\n  This next claim allows us to easily prove several corollaries.\n  The proof follows this strategy: although it's not immediately easy to show an\n  event can't happen, you can show it \\emph{can't happen for the first time}.\n\n  \\begin{claim}\\label{claimRejectionUnstable}\n    If a man $m\\in \\M$ is ever rejected by a woman $w\\in \\W$ during some run\n    of MPDA (that is, $m$ proposes to $w$ and $w$ does not accept, or $w$ later\n    abandons $m$ for a man she prefers) then no stable\n    matching can pair $m$ to $w$.\n  \\end{claim}\n  \\begin{proof}\n    Let $\\mu$ be any matching.\n    Suppose that some pair, matched in $\\mu$, is rejected during MPDA.\n    Consider the first time in the run of MPDA where such a rejection\n    occurs, i.e. 
a woman $w$ rejects $\\mu(w)$ but no other woman $w'$ has\n    rejected $\\mu(w')$ so far.\n    In particular, let $w$ reject $m=\\mu(w)$ in favor of $m'\\ne m$\n    (either because $m'$ proposed to $w$,\n    or because $m'$ was already matched to $w$ and $m$ proposed).\n    We have $m'\\succ_w m$, so if $m'$ is unmatched in $\\mu$, then $\\mu$ is\n    unstable.\n    Thus we have $\\mu(m') = w' \\ne w$,\n    and because this is the first time any man has been rejected by a match from\n    $\\mu$, $m'$ has not yet proposed to $w'$.\n    Because men propose in their preference order, we have $w \\succ_{m'} w'$.\n    However, this means $\\mu$ is not stable.\n\n    Thus, no woman can ever reject a stable partner in MPDA.\n  \\end{proof}\n\n  We can now formalize our intuition that DA moves the men down their preference\n  lists the minimal amount required to enforce stability.\n  Interestingly, a completely dual phenomenon occurs for the women's preferences.\n\n  \\begin{corollary}\\label{claimMenBestStable}\\label{claimWomenWorstStable}\n    % Let $\\best(m)$ denote the most preferred match $m$ can achieve in any stable\n    % matching, i.e. the maximum according to $\\succ_m$ of the set\n    % $\\{w\\in \\W: \\exists \\mu:\\text{$\\mu$ is stable and $\\mu(m)=w$}\\}$\n    % (or write $\\best(m)=\\emptyset$ if the above set is empty).\n    In the match returned by MPDA,\n    \\begin{enumerate} \n      \\item every $m\\in \\M$ is paired to\n        his most preferred stable partner.\n      \\item every $w\\in \\W$ is paired to their worst stable\n        match in $\\M$.\n    \\end{enumerate} \n  \\end{corollary}\n  \\begin{proof}\n    Let $m\\in \\M$ and $w\\in \\W$ be paired by MPDA.\n    Every woman that $m$ prefers to $w$ rejected him during the run, so by\n    claim~\\ref{claimRejectionUnstable} none of them is a stable partner of $m$.\n    That is, $w$ is $m$'s most preferred stable partner, which we denote\n    $w=\\best(m)$; this proves part 1.\n    For part 2, let $\\mu$ be any stable matching which does not pair $m$ and $w$.\n    We must have $w \\succ_m \\mu(m)$, because $w=\\best(m)$.\n    If $m \\succ_w \\mu(w)$, then $\\mu$ is not stable.\n    Thus, $w$ cannot be stably matched to any man she prefers less than $m$.\n  \\end{proof}\n\n  The matching output by the MPDA algorithm is independent of the order in which\n  men are selected to propose.\n\n\n  % NOTE:: MAYBE TRY TO IMPROVE THIS TO HOLD MORE GENERALLY::\n  % AT BARE MINIMUM FIX THE USAGE OF DOCTOR!!\n  % THIS RURAL HOSPITAL BUSINESS IS IMPORTANTE!!\n  Using our results so far, we can prove the following weaker version of the rural\n  hospital theorem\\footnote{\n    The full rural hospital theorem \\cite{RothRuralHospital86} \n    properly refers to the setting where many-to-one matching markets are considered \n    (i.e. the residents-and-hospitals problem).\n    The conclusion is that if a hospital does not\n    fill \\emph{all} its openings in \\emph{some} stable outcome,\n    then it will always receive the same doctors\n    (and same number of doctors) in \\emph{every} stable outcome.\n  } which will be key for some of our later results.\n  \\begin{claim}[Rural Hospital Theorem] \\label{claimRuralDoctors}\n    The set of unmatched agents is the same across every stable outcome.\n    % If a set of men $\\overline \\M$ are rejected by every woman during MPDA,\n    % then no stable matching will match any man in $\\overline \\M$.\n    % Moreover, in every stable matching, the set of unmatched men is the same.\n  \\end{claim}\n  \\begin{proof}\n    Let $\\overline \\M$ be the set of men unmatched in MPDA.\n    Observe that each man in $\\overline\\M$ has proposed to every acceptable\n    partner he has over the run of MPDA. 
Thus,\n    claim~\\ref{claimRejectionUnstable} implies that $\\overline\\M$ is unmatched\n    in every stable outcome.\n    On the other hand, reversing the roles of men and women and considering\n    women-proposing deferred acceptance, we can see that the set of matched\n    women is also identical across every stable outcome.\n  \\end{proof}\n\n  The final results of this section strengthen the intuition provided by\n  claims~\\ref{claimWomenWorstStable} and \\ref{claimMenBestStable}, which state\n  that the incentives of women and men are exactly opposed over the set of\n  stable matchings.\n  \\begin{claim}\\label{claimOneUpOneDown}\n    Let $\\mu, \\mu'$ be stable matchings, and say $\\mu(m) = w$, but $\\mu'(m)\\ne w$.\n    Then $\\mu'(m) \\succ_m w$ if and only if $\\mu'(w) \\prec_w m$.\n  \\end{claim}\n  \\begin{proof}\n    $(\\Leftarrow)$ ``If $w$ downgrades, then $m$ upgrades''.\n    Suppose $\\mu'(w) \\prec_w m$. Because $\\mu'$ is stable, yet $m$ and $w$\n    are not matched in $\\mu'$, we must have $\\mu'(m) \\succ_m w$,\n    or else $(m,w)$ would form a blocking pair.\n    (A rephrasing: this direction is easy because the definition of stability\n    immediately makes it impossible for $m$ and $w$ to both downgrade).\n\n    $(\\Rightarrow)$ ``If $w$ upgrades, then $m$ downgrades''.\n    Let $m' = \\mu'(w) \\ne m$ and $w' = \\mu'(m) \\ne w$.\n    Suppose that $m' \\succ_w m$, and for contradiction suppose that $w' \\succ_m w$.\n    Because $\\mu'$ is stable, $(m', w')$ is not a blocking pair,\n    so either $w\\succ_{m'} w'$ or $m\\succ_{w'} m'$.\n    In the first case, $(m',w)$ form a blocking pair in $\\mu$,\n    and in the second case, $(m,w')$ form a blocking pair in $\\mu$.\n    Thus, in either case $\\mu$ is not stable.\n  \\end{proof}\n  \\begin{claim}\\label{claimDominateOposites}\n    Let $\\mu$ and $\\mu'$ be stable matchings.\n    Every man (weakly) prefers $\\mu$ over $\\mu'$ if and only if \n    every woman (weakly) prefers $\\mu'$ over $\\mu$.\n  \\end{claim}\n  \\begin{proof}\n    Suppose each $m\\in\\M$ has $\\mu'(m)\\succeq_m\\mu(m)$.\n    For each $w\\in\\W$ with $\\mu(w)\\ne\\mu'(w)$, we must have\n    $\\mu'(w)\\prec_w \\mu(w)$ by claim~\\ref{claimOneUpOneDown}.\n    The proof for the other direction is identical.\n  \\end{proof}\n\n\\section{The Lattices of Stable Matchings}\n\n  A partial order $\\le$ is a reflexive, transitive, antisymmetric relation.\n  For elements $a,b$ of a partial order, a least upper bound $a\\vee b$ is\n  an element such that\n  $a\\le a\\vee b$ and $b\\le a \\vee b$, and for any \n  $c$ such that $a\\le c$ and $b\\le c$, we have $a\\vee b\\le c$.\n  A greatest lower bound $a\\wedge b$ is defined analogously, interchanging $\\le$\n  with $\\ge$.\n  We also call $a\\vee b$ the join of $a$ and $b$ and $a\\wedge b$ the meet of $a$\n  and $b$.\n  A lattice $L$ is a partial order in which there exist greatest lower bounds and\n  least upper bounds for any $a,b\\in L$.\n  A lattice $L$ is distributive if the join and meet operations satisfy\n  the following equations:\n  \\[ a \\wedge (b \\vee c) = (a\\wedge b)\\vee (a\\wedge c) \\]\n  \\[ a \\vee (b \\wedge c) = (a\\vee b)\\wedge (a\\vee c) \\]\n  An element $a$ of a lattice covers an element $b$, denoted $a \\gtrdot b$,\n  when $a > b$ and no element $c$ exists with $a > c > b$.\n\n  For stable matches $\\mu$, $\\mu'$, we say that $\\mu$ 
\\emph{dominates} $\\mu'$\n  if, for every $m\\in\\M$, we have $\\mu(m)\\succeq_m \\mu'(m)$,\n  that is, if every man is at least as happy with his match in $\\mu$\n  as in $\\mu'$.\n  If $\\mu$ dominates $\\mu'$ we write $\\mu\\le \\mu'$.\n  (We (somewhat misogynistically) say that one matching\n  dominates another based on the men's preferences.\n  However, we visualize starting at the man-optimal stable outcome at the bottom\n  of the stable matching lattice, with the woman-optimal at the top.\n  This is why we write $\\mu \\le \\mu'$ when $\\mu$ dominates $\\mu'$.)\n\n  \\begin{theorem}\n    The collection $\\L$ of all stable matchings of some instance\n    forms a distributive lattice under the dominance ordering $\\le$.\n  \\end{theorem}\n  \\begin{proof}\n    It's easy to see that $\\le$ forms a partial order on $\\L$.\n    We'll show that least upper bounds exist (the proof for greatest lower\n    bounds is identical, interchanging men with women).\n    For stable matchings $\\mu,\\mu'$, define\n    $\\tilde\\mu = \\mu\\vee\\mu'$ such that, for each woman $w$,\n    $\\tilde\\mu(w)$ is the most preferred partner\n    of $w$ among $\\mu(w)$ and $\\mu'(w)$.\n    It's clear that, if $\\tilde\\mu$ is a stable matching,\n    then it is the least upper bound for $\\mu$ and $\\mu'$.\n\n    First, we claim that $\\tilde\\mu$ is a matching.\n    Suppose some man $m$ is the match of two women $w$ and $w'$ in\n    $\\tilde\\mu$. 
Without loss of generality suppose $\\mu(w)=m$,\n    so $m=\\mu(w)\\succ_w \\mu'(w)$,\n    and $\\mu'(w')=m$, so $m=\\mu'(w')\\succ_{w'}\\mu(w')$.\n    Applying claim~\\ref{claimOneUpOneDown} twice,\n    we get that $w=\\mu(m)\\prec_m \\mu'(m)=w'$\n    and also that $w'=\\mu'(m)\\prec_m \\mu(m)=w$,\n    a contradiction.\n\n    Second, we claim that $\\tilde\\mu$ is stable.\n    Suppose that $(m,w)$ is a blocking pair for $\\tilde\\mu$.\n    Certainly the partners of $m$ and $w$ must be from different matchings among\n    $\\mu$ or $\\mu'$, say $\\tilde\\mu(m)=\\mu'(m)$ and \n    $\\tilde\\mu(w)=\\mu(w)\\ne \\mu'(w)$.\n    As $(m,w)$ is blocking, $w\\succ_m\\mu'(m)$ and $m\\succ_w\\mu(w)$.\n    But by the definition of $\\tilde\\mu$, we have $\\mu(w)\\succ_w\\mu'(w)$,\n    so $m\\succ_w\\mu'(w)$ as well, and $\\mu'$ is not stable.\n\n    Finally, we show that the join and meet operations in $\\L$ are distributive.\n    Analogously to the above, we would define $\\mu\\wedge\\mu'$ such that every\n    man gets his preferred partner from $\\mu$ or $\\mu'$.\n    By claim~\\ref{claimOneUpOneDown}, this is equivalent to defining\n    $\\mu\\wedge\\mu'$ such that every woman\n    gets her worst partner from $\\mu$ or $\\mu'$.\n    Thus, the join and meet operations are distributive for the same reason that\n    the operations of min and max distribute over each other.\n    In particular, we can fix a woman $w$ and see that\n    \\[ \\big(\\mu_1 \\wedge (\\mu_2 \\vee \\mu_3) \\big)(w)\n    = \\big( (\\mu_1 \\wedge \\mu_2) \\vee (\\mu_1 \\wedge \\mu_3) \\big)(w) \\]\n    % Thus, for every woman $w$,\n    % equal to the worse of $\\mu_1(w)$ and (the better of $\\mu_2(w)$ and\n    % $\\mu_3(w)$).\n    % On the other hand, we have the better of\n    % (the worse of $\\mu_1(w)$ and $\\mu_2(w)$)\n    % and (the worse of $\\mu_1(w)$ and $\\mu_3(w)$),\n    % which with a little thought you can realize that these two equations are\n    % equal.\n  \\end{proof}\n\n  We will not directly use this theorem, but it serves as motivation for why one\n  might expect a compact representation of $\\L$ to exist.\n  In general, when one encounters a distributive lattice, it's always useful to\n  ask what its join-irreducible elements are, and see if there's a natural\n  mathematical or algorithmic interpretation.\n  \\begin{theorem}[Birkhoff's representation theorem]\n    For any finite distributive lattice $L$, there exists a subset $P$ of $L$\n    (namely, the ``join irreducible elements'' of $L$, which are those such\n    that, whenever $a\\vee b = c$, we have $a=c$ or $b=c$) such that\n    $L$ is isomorphic to the collection of downward-closed subsets of $P$\n    (under the partial order induced by restricting $L$ to $P$), which\n    forms a lattice under set containment.\n  \\end{theorem}\n\n  
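The lattice operations above are easy to compute directly.\n  The following Python sketch (our own illustration, reusing the dictionary\n  conventions of the MPDA sketch in Section 2) computes the join\n  $\\mu\\vee\\mu'$ by giving each woman her preferred partner; the meet\n  interchanges \\texttt{min} and \\texttt{max}:\n  \\begin{verbatim}\ndef join(mu1, mu2, women_prefs):\n    # mu1, mu2: stable matchings as {man: woman} dicts.\n    # Each woman keeps the better of her two partners; by the\n    # theorem above, the result is again a stable matching.\n    inv1 = {w: m for m, w in mu1.items()}\n    inv2 = {w: m for m, w in mu2.items()}\n    result = {}\n    for w, prefs in women_prefs.items():\n        rank = {m: i for i, m in enumerate(prefs)}\n        cands = [m for m in (inv1.get(w), inv2.get(w)) if m is not None]\n        if cands:  # by the rural hospital theorem, w is matched in\n                   # one stable matching iff she is matched in both\n            best = min(cands, key=lambda m: rank[m])\n            result[best] = w\n    return result\n  \\end{verbatim}\n\n  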
\\begin{claim}\n    Let $P$ be some instance of the stable matching problem\n    with man-optimal stable match $\\mu_0$, and let\n    $P'$ be an instance identical to $P$, except some set of women\n    who are all matched in $\\mu_0$ can truncate\n    their preference lists.\n    Let $\\mu$ be the result of $MPDA(P')$.\n    Then $\\mu$ is stable for the original set of preferences $P$ if and only if\n    the set of matched women in $\\mu$ is identical to that in $\\mu_0$.\n    Moreover, if $\\mu$ is stable for $P$ then it is the man-optimal stable match\n    in which each woman receives a match above her point of truncation.\n  \\end{claim}\n  \\begin{proof}\n    If the set of matched agents differs, then $\\mu$ cannot be stable, simply by\n    the rural hospital theorem (claim~\\ref{claimRuralDoctors}).\n\n    For the other direction, suppose that $\\mu$ is not stable for $P$.\n    We'll show that some woman matched in $\\mu_0$ must go unmatched in $\\mu$,\n    using almost the same proof that originally showed that the result of $MPDA$\n    is stable.\n    In particular, suppose $(m,w)$ blocks $MPDA(P')$ (with respect to the\n    original set of preferences $P$).\n    As $w\\succ_m \\mu(m)$, $m$ would propose to $w$ before his current match\n    (or before going unmatched).\n    Note that if $w$ was unmatched in $\\mu_0$, then $w$ does not truncate her\n    preferences, so $w$ would certainly get a match at least as good as $m$.\n    Thus we can assume that $w$ was matched in $\\mu_0$ and truncates her\n    preferences.\n    As $m\\succ_w \\mu(w)$, yet $w$ wound up with a worse outcome than $m$,\n    it must be the case that $w$ rejected $m$ because he was past her truncation\n    point on her list.\n    If $w$ were matched in $MPDA(P')$, she would certainly receive a match above\n    her truncation point, so we would have $\\mu(w)\\succ_w m$.\n    Thus, $w$ cannot be matched in $\\mu$.\n\n    % One fact towards this:\n    % Observe that $MPDA(P')$ produces the man optimal stable match such that each\n    % woman (if matched) receives a match higher than her truncation point.\n  \\end{proof}\n  \\begin{claim}\n    Let $\\mu$ be a stable matching for preferences $P$.\n    For each woman $w$, let $P_w$ denote the preference profile where each\n    preference list is identical except for that of $w$,\n    and $w$ truncates her list from $P$ one place after her stable partner from\n    $\\mu$. The rest of the women truncate their lists\n    just \\emph{before} their match in $\\mu$.\n    Then the collection of matches which cover $\\mu$\n    is contained in the set $\\{\\mu' | \\mu' = MPDA(P_w), \n    \\mu'\\text{ is stable for } P\\}_{w\\in\\W}$, and all of those elements lie\n    above $\\mu$ in the ordering $\\le$.\n  \\end{claim}\n  \\begin{proof}[Proof sketch]\n    The key observation (used again in the next section) is that the sequence\n    of rejections made (ignoring terminal phases) is a valid sequence of\n    rejections for the original MPDA algorithm with women truncating their\n    preferences.\n    In this way one can find a maximal path from the bottom to the top of the\n    lattice.\n  \\end{proof}\n  \\begin{claim}\n    If $\\mu' \\gtrdot \\mu$ is a covering pair in the stable matching lattice\n    then $\\mu'$ and $\\mu$ differ by a rotation.\n  \\end{claim}\n  \\begin{proof}\n    In terms of what we need to eventually show, this won't be quite enough.\n    E.g. 
(12)(23) = (132) is a simple rotation when considered as a permutation,\n    but it's composed of two (comparable) flips and not a simple rotation in the\n    stable marriage instance.\n\n    Here's something a bit more like it: bring this analysis after the\n    algorithm, then say this: ``if $V$ is a simple cycle, then $\\mu_{new}$\n    covers $\\mu_{old}$, because (clearly it's $\\ge$) and in order to be\n    $>\\mu_{old}$, some woman must reject her match (but MPDA finds the\n    man-optimal such a thing).''\n\n    As $\\mu' > \\mu$, some woman $w$ must receive a strictly better match in\n    $\\mu'$ than in $\\mu$.\n  \\end{proof}\n\n  Q: why must the set of rotations found be the same for every execution of the\n  algorithm? One approach to proving this formally: consider execution sequences\n  $P$ and $Q$ which contain different rotations, say $\\rho\\in P\\setminus Q$.\n  Consider the first time during the execution of $Q$ that the following\n  happens: no future execution sequence which starts from the prefix of $Q$\n  which has already been executed will expose $\\rho$.\n  Now, why did the future sequence exist before this step, but it doesn't exist\n  after?\n\n\\section{The Rotation Poset}\n\n  Let's start with an example of how to use the facts proven above.\n  Start with the stable matching instance\n\n  \\begin{tabular}{c | c c | c}\n    $m_1$ & 1 & 2 & \\textbf{3} \\\\\n    $m_2$ & 2 & 1 & 3 \\\\\n    \\cline{2-3}\n    $m_3$ & 3 & 5 & 1 \\\\\n    $m_4$ & 4 & 3 & 5 \\\\\n    $m_5$ & 5 & 4 & \\\\\n  \\end{tabular}\n  \\qquad \\qquad\n  \\begin{tabular}{c | c c c c}\n    $w_1$ & & \\multicolumn{1}{c|}{3} & 2 & 1 \\\\\n    $w_2$ & & \\multicolumn{1}{c|}{ } & 1 & 2 \\\\\n    \\cline{3-5}\n    $w_3$ & \\multicolumn{1}{c|}{2} & 4 & \\textbf{1} & 3 \\\\\n    \\cline{3-3}\n    $w_4$ & & & \\multicolumn{1}{|c}{5} & 4 \\\\\n    $w_5$ & & 4 & \\multicolumn{1}{|c}{3} & 5 \\\\\n  \\end{tabular}\n  \\qquad \\qquad\n  \\begin{tabular}{c}\n    \\begin{tikzpicture}[scale=1]\n      \\node (zero) at (0,-1) {$(12345)$};\n      \\node (l) at (-1,0) {$({\\bf21}345)$};\n      \\node (r) at (1,0) {$(12{\\bf534})$};\n      \\node (b) at (0,1) {$(21534)$};\n      \\node (one) at (0,2) {$(2{\\bf315}4)$};\n      \\draw (zero) -- (l) -- (b) -- (one);\n      \\draw (zero) -- (r) -- (b);\n    \\end{tikzpicture} \n  \\end{tabular}\n\n  Here we write a rotation as a list of (man, woman) pairs which are matched\n  before the rotation is eliminated; eliminating the rotation\n  $[(m_1,w_1),\\ldots,(m_r,w_r)]$ moves each $m_i$ from $w_i$ to $w_{i+1}$\n  (indices taken cyclically).\n  Starting from the man-optimal match, the rotations\n  $\\rho_1=[(1,1),(2,2)]$ and $\\rho_2=[(3,3),(5,5),(4,4)]$\n  are exposed (this is easily seen\n  from the border-separated rankings).\n  Moreover, once both these rotations have been applied, rotation\n  $\\rho_3=[(2,1),(4,3),(3,5)]$ is exposed, and eliminating $\\rho_3$ gets us to\n  the woman-optimal outcome.\n\n  Suppose you flip 1 and 2 first; then woman 2 rejects man 1.\n  This leads to a terminal\n  phase but also finds rotation $\\rho_2$.\n  On the other hand, woman 1 rejecting man 2 would start to find $\\rho_3$, then\n  find $\\rho_2$, then complete $\\rho_3$.\n\n  \\begin{algorithm}\n    \\caption{MOSM to WOSM Conversion Algorithm}\\label{algMosmToWosm}\n  \\begin{algorithmic}[1]\n    \\State Let $\\mu$ be the man-optimal stable matching\n    \\State For each man $m$, let $R(m)$ be the set of women who rejected $m$\n    during the run of $MPDA$.\n    \\State Let $S$ be the set of unmatched women in $\\mu$\n    % \\State\\Comment $S$ will be those women who have upgraded to their optimal stable match\n    \\State Set $pred^1_m = \\emptyset$ for each man $m$\n      \\Comment Store the most recent rotation moving $m$\n    
\\State Label each entry of each woman's preference list with $\\emptyset$\n      \\Comment Store the rotation moving $w$ \\emph{above} certain men\n    \\While { $S \\ne \\W$ }\n      \\State Store $\\tilde\\mu \\leftarrow \\mu$\n      \\Comment $\\tilde\\mu$ stores the most recent \\emph{stable} match encountered\n      \\State Pick any $\\hat w\\in \\W\\setminus S$\n      \\State Let $m = \\mu(\\hat w)$; let $V = [ (m, \\hat w) ]$\n      \\State Set $\\mu(\\hat w) = \\emptyset$ and add $\\hat w$ to $R(m)$\n        \\Comment $\\hat w$ rejects $m$\n\n      % \\While { $m \\ne \\emptyset$ }\n      \\While { $V \\ne [\\ ]$ }\n        \\State Let $pred_{m}^2 = \\emptyset$\n          \\Comment Keep track of predecessor rotations\n        \\State Let $w$ be $m$'s most preferred woman not in $R(m)$\n\n        \\While { $\\tilde\\mu(w) >_w m$ }\n          \\Comment while $w$ has received a better stable match\n          \\State Add $w$ to $R(m)$\n          \\Comment $w$ rejects $m$\n          \\State If $w$ labeled $m$ with $\\rho$, add $\\rho$ to $pred_m^2$\n          \\State \\Comment If rotation $\\rho$ moved \n            $w$ above $m$, then $\\rho$ must precede the current rotation\n          \\State Update $w$ to $m$'s top woman not in $R(m)$ (or set $w$ to $\\emptyset$)\n        \\EndWhile\n\n        % \\State \\Comment If $|\\M|<|\\W|$, then $m$ has had\n        %   his proposal accepted by \\emph{some} woman $w$\n        \\State ( So now $m >_w \\tilde\\mu(w)$ (if $w\\ne \\emptyset$) )\n        \\If { $w=\\emptyset$ or $w\\in S$ }\n        \\label{lineFirstDeciciveCase}\n          \\Comment No stable matching exists rotating partners in $V$\n          \\State Restore $\\mu \\leftarrow \\tilde\\mu$\n          \\State Add all women in $V$ to $S$; Set $V = [\\ ]$\n          % \\State Set $m = \\emptyset$\n\n        \\ElsIf {$w$ appears in $V$}\n          \\Comment New rotation found\n          \\If { $w \\ne \\hat w$ }\n            % \\State Set $m_{float} \\leftarrow \\mu(w) $\n            \\State Swap $\\mu(w) \\leftrightarrow m$\n            \\Comment Subtlety: $w$ does not reject $m$ yet\n          \\EndIf\n          \\State \\Call{BuildNewRotation}{$w$}\n\n        \\ElsIf {$w\\notin V$, $w\\notin S$} \n          \\Comment Continue building rejection chain $V$\n          \\State Swap $\\mu(w) \\leftrightarrow m$; Add $w$ to $R(m)$\n            \\Comment $w$ rejects $\\mu(w)$\n          \\State Append $(m,w)$ to the end of $V$\n        \\EndIf\n        % \\State Set $\\nu(w) \\leftarrow \\nu(w)+1$ and $t \\leftarrow t+1$\n\n      \\EndWhile\n    \\EndWhile\n\n    \\\n    \\Function{BuildNewRotation}{$w$}\n      \\State Suppose $V = [(m_1,w_1), (m_2,w_2), \\ldots,\n        (m_k,w_k)]$ with $w = w_\\ell$ for some $\\ell \\le k$\n      \\State Update $\\tilde\\mu(w_i) = \\mu(w_i)$ for $i=\\ell,\\ell+1,\\ldots, k$\n      \\State Remove $\\rho^* = [(m_\\ell,w_\\ell),\\ldots,(m_k,w_k)]$ from $V$\n      \\State Add rotation $\\rho^*$ with predecessors\n        $\\bigcup_{i=\\ell}^k pred_{m_i}^1\\cup pred_{m_i}^2 $\n        to the rotation poset\n      \\State For each $i=\\ell,\\ldots,k$, set $pred_{m_i}^1 = \\rho^*$\n      \\State For each $i=\\ell,\\ldots,k$, and for each man $m$ between \n      $m_i$ and $m_{i-1}$ (or $m_k$ if $i=\\ell$) on $w_i$'s list:\n      \\State \\qquad $w_i$ labels $m$ with $\\rho^*$\n    \\EndFunction\n  \\end{algorithmic}\n  \\end{algorithm}\n\n  
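Before describing the algorithm in prose, it may help to see its elementary\n  step in isolation. The sketch below (our own illustration; it assumes the\n  convention fixed above, under which eliminating a rotation moves each $m_i$\n  from $w_i$ to $w_{i+1}$, cyclically) applies one rotation to a matching:\n  \\begin{verbatim}\ndef eliminate_rotation(mu, rotation):\n    # mu: {man: woman}; rotation: [(m_1, w_1), ..., (m_r, w_r)],\n    # where mu[m_i] == w_i. After elimination each m_i is matched\n    # to w_{i+1} (indices mod r), i.e. each w_i trades m_i for the\n    # (to her) better man m_{i-1}.\n    new_mu = dict(mu)\n    r = len(rotation)\n    for i, (m, _) in enumerate(rotation):\n        new_mu[m] = rotation[(i + 1) % r][1]\n    return new_mu\n  \\end{verbatim}\n  For instance, eliminating $\\rho_1=[(1,1),(2,2)]$ from the man-optimal\n  matching of the example sends men $1$ and $2$ to women $2$ and $1$\n  respectively, as in the lattice diagram above.\n\n  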
The algorithm starts from the man-optimal stable outcome (MOSM)\n  by running deferred acceptance.\n  Then it triggers a series of ``divorces'', i.e. it breaks a marriage\n  $(m,\\hat w)$ and ``continues running deferred acceptance''.\n  This involves $m$ again proposing down his list until accepted by some woman\n  who prefers $m$ to her current best known stable match.\n  This ``rejection chain'' can involve cycles, where a woman accepts a proposal\n  for the second time. Ignoring cycles for now, the chain can end in one of two\n  ways: a ``terminal phase'' in which an element of $\\overline\\W$\n  (the set of women unmatched in the MOSM)\n  receives a match, or an ``improvement phase'' in which $\\hat w$ receives a\n  better stable match.\n  A terminal phase clearly cannot yield a stable matching, by the rural\n  hospital theorem (claim~\\ref{claimRuralDoctors}).\n  On the other hand, \\cite{AshlagiUnbalancedCompetition17} shows that any\n  improvement phase yields a new stable outcome.\n\n  Some notes:\n  \\begin{itemize}\n    % \\item When we reach line~\\ref{lineFirstDeciciveCase}, we have either\n    %   $m >_w \\mu(w)$ or $m >_w \\tilde\\mu(w)$ ((This comment isn't quite necessary,\n    %   the next line explain why we actually hit all the cases in the big if else))\n    \\item The most recent stable match $\\tilde\\mu$ is stored, while the\n      ``partial match'' $\\mu$ may not correspond to any stable match.\n    \\item When $w\\in V$, we compare $m$ against $w$'s match in $\\tilde\\mu$, not in\n      $\\mu$. This is because we want to make the smallest changes possible that\n      result in new stable outcomes, and $w$ will still make an improvement from\n      the old stable outcome by accepting this $m$.\n    \\item If a cycle occurs during a rejection chain, that cycle is immediately \n      ``recorded'' in $\\tilde\\mu$. This is because this is equivalent to picking\n      $\\hat w$ to be any woman in the simple cycle, and (similar to deferred\n      acceptance) the execution of this algorithm is independent of the order in\n      which women are picked.\n    \\item We actually don't need to wait until the rejection chain hits\n      $\\overline\\W$. Instead, we let $S$ denote the set of women for whom we\n      know that a phase starting with $\\hat w\\in S$ will be terminal. If at any\n      point a woman in $S$ accepts a proposal, we know that the phase we\n      are currently in will be terminal as well.\n  \\end{itemize}\n\n  Formally, \\cite{AshlagiUnbalancedCompetition17} proves that the following\n  all hold during the run of algorithm~\\ref{algMosmToWosm}:\n  \\begin{itemize}\n    \\item $\\tilde\\mu$ is always a stable matching.\n    \\item When a phase is terminal, $\\hat w$ is currently matched in $\\tilde\\mu$\n      to her optimal stable partner. Because we eliminate cycles, any other\n      woman in $V$ would also cause a terminal phase if she was chosen as \n      $\\hat w$. Thus, $S$ always consists of women who have reached their\n      optimal stable match.\n    \\item Every woman is at some point in\n      time matched to every stable partner she has, in order from her least\n      preferred to her most preferred.\n    \\item Algorithm~\\ref{algMosmToWosm} terminates with $\\tilde\\mu$ being the\n      woman-optimal stable outcome.\n  \\end{itemize}\n\n\n\\section{Balanced Random Markets}\n\n  For a man $m$ and woman $w$ in some preference set $P$,\n  define $R_m(w) := |\\{w' : w' \\succeq_m w\\}|$,\n  i.e. the number of women preferred at least as much as $w$,\n  i.e. 
the index of $w$ on $m$'s ordered preference list.\n  We will say that $m$ ranks $w$ better than $w'$ when $R_m(w) < R_m(w')$.\n  Given a matching $\\mu$ stable under preferences $P$,\n  define \n  \\[ R_{men}(\\mu) := \\frac{1}{|\\M\\setminus\\overline\\M|}\n    \\sum_{m\\in \\M\\setminus\\overline\\M} R_m(\\mu(m))\n  \\]\n  That is, $R_{men}(\\mu)$ is the average over matched men of the men's rank of\n  their wives. Define $R_w(m)$ and $R_{women}(\\mu)$ analogously.\n\n  A random matching market is defined by a set of randomly drawn preferences\n  $P = P_{n,m}$ with $n$ men and $m$ women,\n  where each of the $n$ men has a uniformly random ranking over the $m$ women,\n  and each of the $m$ women has a uniformly random ranking over the $n$ men.\n  Given a set of preferences $P$, let $MOSM(P)$ denote the man-optimal stable\n  outcome, i.e. the result of running MPDA (man-proposing deferred acceptance).\n  Likewise, let $WOSM(P)$ denote the woman-optimal stable\n  outcome, i.e. the result of running women-proposing deferred acceptance.\n\n  The crucial observations for dealing with MPDA are the following:\n  \\begin{itemize}\n    \\item The sum of the ranks that the men have for their wives\n      in the MOSM is exactly the number of proposals made during MPDA.\n    \\item When $n\\le m$, MPDA terminates as soon as $n$ distinct women are\n      proposed to. Thus, the sum of ranks of the men is essentially a\n      \\emph{coupon collector} random variable; in the balanced case $m=n$ its\n      expectation is $O(n\\log n)$, so the average rank is $O(\\log n)$.\n  \\end{itemize}\n\n  \\begin{proposition}\n    Let $P$ be a random matching market with $n$ men and $n+k$ women for $k\\ge 1$.\n    Then $\\E{R_{men}(MOSM(P))}\n    = O(\\left(\\frac{n+k} n\\right)\\log\\left(\\frac{n+k}{k}\\right))$.\n    % and $\\E{R_{women}(MOSM(P))} = \\Omega(n/\\log n)$.\n  \\end{proposition}\n  \\begin{proof}\n    Consider running man-proposing deferred acceptance on $P$.\n    At the steps requiring some unmatched man to be selected,\n    select one uniformly at random.\n    Let $Y$ denote the random variable giving the total number of proposals made\n    by men to women.\n    By the so-called ``principle of deferred decisions'', running this algorithm\n    on a random matching market is equivalent to running the following\n    randomized algorithm:\n    women's preferences are taken as input, but men's are not, and\n    whenever a man is ready to propose to a woman, simply select a woman he hasn't\n    yet proposed to uniformly at random.\n\n    Let $Z$ denote a ``coupon collector'' random variable defined by the\n    following random process: at each step, a uniformly random number in $[n+k]$\n    is drawn, and $Z$ returns the number of steps needed for $n$ distinct\n    numbers to be drawn.\n    Note that $Z$ statistically dominates $Y$, because every time a man proposes\n    to a woman, he is at least as likely to propose to an unmatched woman as the\n    coupon collector step is to select an unselected outcome.\n\n    Now, we have $Z = \\sum_{i=1}^n Z_i$, where $Z_i$ is the number of steps\n    needed before the $i$th distinct number is drawn.\n    Observe that $Z_i$ is a geometric random variable with success probability\n    $\\frac{n+k-i+1} {n+k}$. 
Thus:\n    \\begin{align*}\n      \\E{Y} & \\le \\E{Z} \\\\\n        & = \\sum_{i=1}^n \\E{Z_i} \\\\\n        & = \\sum_{i=1}^n \\frac{n+k}{n+k-i+1} \\\\\n        & = (n+k) \\sum_{i=k+1}^{n+k} \\frac{1}{i} \\\\\n        & = (n+k)[H_{n+k} - H_k]  \\\\\n        & = \\Theta\\left( (n+k)[\\log(n+k) - \\log(k)] \\right)  \\\\\n    \\end{align*}\n    This gives exactly the desired bound on \n    $\\E{R_{men}(MOSM(P))} = \\E{Y / n}$.\n  \\end{proof}\n\n  We can use the above proposition to get an informal handle on the expectation\n  of $R_{women}(MOSM(P))$ as well.\n  In expectation, $O\\left((n+k)\\log\\left(\\frac{n+k}{k}\\right)\\right)$\n  proposals are made, so any\n  fixed woman $w$ receives an average of\n  $O\\left(\\log\\left(\\frac{n+k}{k}\\right)\\right)$ proposals.\n  The rank of the men proposing to $w$ is essentially uniformly distributed \n  on $[n]$, and the minimum of $h$ uniformly distributed random variables with\n  expectation $x$ has expectation on the order of $x/h$.\n  Thus, the best-ranked man proposing to $w$ has rank\n  $\\Omega\\left(n / \\log\\left(\\frac{n+k}{k}\\right) \\right)$ on average.\n\n  The previous paragraph can be made formal. A sequence of classical works \n  (in order, \\cite{WilsonAnalysisStable72}, \\cite{KnuthStableCombinatorial97},\n  \\cite{PittelAverageStable89}, \\cite{PittelLikelyStable92}) got\n  quite a good handle on the asymptotic behavior of balanced matching markets.\n  Performing a full analysis can get quite complicated, especially when counting\n  the \\emph{number} of distinct stable matchings\\footnote{ \n    This is likely one reason that \n    \\cite{AshlagiUnbalancedCompetition17} uses different statistics to measure the\n    size of the core, namely, the fraction of agents with multiple distinct\n    partners. For many applications, such as the strategic implication of the\n    size of the core, this is the more important statistic anyway.\n  }.\n\n  % In ((PITEL)), the author gives a much more detailed analysis in order to prove\n  % an upper bound on the expected total number of distinct stable matchings.\n\n  
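As a quick sanity check on the proposition (an illustration only, not part\n  of the formal analysis), one can simulate MPDA on random markets and\n  compare the average number of proposals against the bound\n  $(n+k)[H_{n+k} - H_k]$. The sketch below reuses the \\texttt{mpda} routine\n  from Section 2:\n  \\begin{verbatim}\nimport random\n\ndef random_market(n, k):\n    # Uniformly random full preference lists, n men and n+k women.\n    men = ['m%d' % i for i in range(n)]\n    women = ['w%d' % j for j in range(n + k)]\n    men_prefs = {m: random.sample(women, len(women)) for m in men}\n    women_prefs = {w: random.sample(men, len(men)) for w in women}\n    return men_prefs, women_prefs\n\ndef num_proposals(mu, men_prefs):\n    # Sum over men of the rank (1-indexed) of their wives; by the\n    # first observation above, this equals the number of proposals.\n    return sum(men_prefs[m].index(w) + 1 for m, w in mu.items())\n\nn, k, trials = 50, 5, 200\ntotal = 0.0\nfor _ in range(trials):\n    men_prefs, women_prefs = random_market(n, k)\n    total += num_proposals(mpda(men_prefs, women_prefs), men_prefs)\nbound = (n + k) * sum(1.0 / j for j in range(k + 1, n + k + 1))\nprint(total / trials, '<=', bound)\n  \\end{verbatim}\n\n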
\\section{Unbalanced Random Markets: One Exposition}\n\n  With a little effort, you can extend coupon-collector type arguments to\n  get results in the unbalanced case.\n  This next claim is adapted from \\cite{ImmorlicaHonestyStability05}:\n  \\begin{claim}\n    Consider some matching market $P$ (i.e. a set of preferences\n    for men and women) and fix a woman $w^*$ who is matched under some\n    stable matching for $P$.\n    Construct preference sets $P_n, P_{n-1}, \\ldots, P_1$\n    where in $P_i$ we have $w^*$ ``truncate her list'' after place $i$,\n    that is, $w^*$ reports her top $i$ choices and deems all other matches\n    unacceptable.\n    Let $m_i$ denote the match of $w^*$\n    under $MPDA(P_i)$ (or write $m_i = \\emptyset$ if $w^*$ is unmatched)\n    and let $L = (m_n, m_{n-1}, \\ldots, m_1)$.\n\n    We have the following:\n    \\begin{enumerate}\n      \\item For each $i$, a matching $\\mu$ is stable for $P_i$\n        with $\\mu(w^*)\\ne \\emptyset$\n        if and only if $\\mu$ is stable for $P$\n        and assigns $w^*$ a partner among her top $i$ choices.\n      \\item $L$ is a list containing all the stable partners of $w^*$,\n        possibly with repetition and possibly terminating in a string of\n        $\\emptyset$s (which do not represent stable partners).\n      \\item $L$ is ranked from worst to best according to $\\prec_{w^*}$.\n    \\end{enumerate}\n  \\end{claim}\n  \\begin{proof}\n    (1, $\\Rightarrow$) If $\\mu(w^*) = \\emptyset$, then we immediately know that\n    $\\mu$ is not stable for $P$, by the rural hospital theorem\n    (claim~\\ref{claimRuralDoctors} suffices).\n    Also, if $\\mu$ is not stable for $P_i$, then any pair blocking $\\mu$ under $P_i$ \n    is also blocking under $P$, so $\\mu$ will not be stable for $P$.\n\n    (1, $\\Leftarrow$) Suppose $\\mu$ is stable for $P_i$ and $w^*$ is matched in $\\mu$,\n    and suppose for contradiction that $(m,w)$ is a blocking pair for $\\mu$\n    under $P$. All agents have identical preferences in $P$ and $P_i$ except for\n    $w^*$, so we must have $w = w^*$. Because $w^*$ is matched under\n    $P_i$, her partner $\\mu(w^*)$ is among her top $i$ choices; hence\n    $m \\succ_{w^*} \\mu(w^*)$ forces $m$ to be among her top $i$ choices as well,\n    so $m \\succ_{w^*}^{(i)} \\mu(w^*)$ (where $\\succ_{w^*}^{(i)}$ denotes $w^*$'s\n    preferences in $P_i$). Thus, $(m,w)$ is also blocking for $\\mu$\n    under $P_i$, a contradiction.\n\n    (2, 3) As in the case of full-length preference lists, $MPDA(P_i)$\n    will return a stable matching for $P_i$ -- write $\\mu_i$ for this matching --\n    and it will assign $w^*$ to the\n    worst stable partner $w^*$ has under preferences $P_i$, provided one exists.\n    By part 1, the stable partners of $w^*$ under $P_i$ are exactly the stable\n    partners of $w^*$ under $P$ which are ranked better than place $i$.\n\n    Now, induct on $j$, with the inductive hypothesis being that\n    $(m_n, m_{n-1}, \\ldots, m_{j})$ contains all stable partners ranked above\n    $j$ by woman $w^*$ (in order according to $\\prec_{w^*}$).\n    The base case is simply the fact that $m_n$ is the worst stable partner of\n    $w^*$ (claim~\\ref{claimWomenWorstStable}).\n    Now, for the inductive step, we have two cases.\n    If $m_j$ is still on the preference list of $w^*$ in $P_{j-1}$, then $\\mu_j$\n    will still be a stable match under preferences $P_{j-1}$. 
Again by\n    claim~\\ref{claimWomenWorstStable}, woman $w^*$ will still be matched to\n    $m_j$ under preferences $P_{j-1}$.\n    On the other hand, if $m_j$ is no longer acceptable to $w^*$ in $P_{j-1}$,\n    then $m_{j-1}$ will be the stable partner of $w^*$ which is worst-ranked\n    according to $\\prec_{w^*}$ among those remaining (or $\\emptyset$ if $w^*$ has already reached her\n    best stable match) by the same logic.\n    This finishes the proof of the inductive step and of claims 2 and 3.\n\n    % Without loss of generality, suppose $w$ is matched under any\n    % stable matching for $P$ (if $w$ is unmatched in $MPDA(P)$,\n    % then $m_i=\\emptyset$ for each $i$).\n    % Let $\\mu_i$ be a stable matching for restricted preferences $P_i$.\n    % We claim that $\\mu_i$ is stable under $P$ if and only if\n    % $w$ is matched in $\\mu_i$.\n    % If $w$ is unmatched, then $\\mu_i$ cannot be stable for $P$ by\n    % claim~\\ref{claimRuralDoctors}.\n    % So suppose $w$ is matched in $\\mu_i$.\n\n    % First, we show that if $m_i\\ne \\emptyset$, i.e. $w$ is matched by\n    % $MPDA(P_i)$, then the match $\\mu_i$ returned by $MPDA(P_i)$ is stable\n    % for the original preference set $P$.\n    % Indeed, suppose that no pair $(m',w')$, not matched in $\\mu_i$,\n    % are non-blocking for $P_i$. If $w\\ne w'$, then $(m',w')$ cannot be blocking\n    % for $P$. So consider $w' = w$. If $w \\succ_{m'} \\mu_i(m')$,\n    % then $m'$ proposed to $w$ during $MPDA(P_i)$.\n    % Observe that $w$ only accepts matches above a certain threshold,\n    % and once $w$ is matched, her rank of her partner only increases.\n    % Thus, as long as $w$ eventually received a match, $\\mu_i(w) \\succ_w m'$\n    % (note that this will not be true if $w$ is unmatched in the end).\n\n    % Now, by the above paragraph, when $m_i\\ne\\emptyset$, $MPDA(P_i)$\n    % returns a stable match of $w$ ranked higher than place $i$.\n    % By claim~\\ref{claimWomenWorstStable}, it returns the \\emph{worst}\n    % such stable partner.\n\n    % Now, we induct on $i$ to show that $L$ contains all stable matches of $w$ in\n    % order according to $\\prec_w$.\n    % The base case, that $m_n = \\worst(w)$, is just\n    % claim~\\ref{claimWomenWorstStable}.\n    % because $w$ only accepts matches above some threshold any match, she will only upgrade.\n  \\end{proof}\n\n  We get this immediate corollary:\n  \\begin{corollary}\\label{corTruncateOptimal}\n    Suppose that when a woman $w^*$ truncates her preference list after place\n    $i$, she is unmatched by MPDA.\n    Then $w^*$ has no stable partners ranked better than place $i$.\n  \\end{corollary}\n\n  
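The claim above is effectively an algorithm: by successively truncating\n  $w^*$'s list and re-running MPDA, one recovers all of her stable partners,\n  from worst to best. A minimal sketch (our own illustration, reusing the\n  \\texttt{mpda} routine from Section 2):\n  \\begin{verbatim}\ndef stable_partners(men_prefs, women_prefs, w_star):\n    # Enumerate w_star's stable partners, worst to best, by running\n    # MPDA on successively truncated copies of her list (the list L).\n    partners = []\n    full_list = women_prefs[w_star]\n    for i in range(len(full_list), 0, -1):   # i = n, n-1, ..., 1\n        truncated = dict(women_prefs)\n        truncated[w_star] = full_list[:i]    # P_i: keep top i choices\n        mu = mpda(men_prefs, truncated)\n        m_i = None\n        for m, w in mu.items():\n            if w == w_star:\n                m_i = m\n        if m_i is None:\n            break   # string of empty outcomes: no better stable partners\n        if not partners or partners[-1] != m_i:\n            partners.append(m_i)\n    return partners\n  \\end{verbatim}\n\n  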
We are now ready to prove the main result of this section.\n  We focus on the case of $n$ men and $n+1$ women, for simplicity.\n  The argument below is only a sketch; we have not made it fully formal.\n\n  \\begin{claim}\n    Consider a random matching market with $n$ men and $n+1$ women.\n    Fix some woman $w^*$.\n    With probability approaching $1$ as $n\\to\\infty$, % $1 - O((\\log n)^{-2})$,\n    $w^*$ has no stable partners ranked better than $\\Omega(n/\\log n)$.\n    % fewer than $\\Theta(n\\log n)$ proposals are made by the men\n    % before $MPDA$ terminates. \n    % Moreover, fewer than $\\log n$ proposals are made to $w^*$.\n  \\end{claim}\n  \\begin{proof}[Proof sketch]\n    Suppose $w^*$ truncates her preference list at place $c n / \\log n$ for some\n    constant $c$.\n    Denote this preference list by $P^*$.\n    By corollary~\\ref{corTruncateOptimal}, $w^*$ has a stable partner ranked\n    better than place $c n/\\log n$ if and only if $w^*$ is matched under $MPDA(P^*)$.\n    This occurs if and only if $w^*$ receives a proposal from a man she ranks\n    better than place $c n/\\log n$.\n    But when a man proposes to $w^*$, his rank by $w^*$ is uniformly distributed\n    (over those ranks $w^*$ has not yet seen in the algorithm).\n    To proceed, we upper bound the number of proposals that $w^*$ likely\n    receives, and thus the probability that she gets a proposal ranked higher\n    than $c n/\\log n$. \n\n    Let $Y$ denote the random variable giving the number of proposals $w^*$ sees\n    during the run of $MPDA(P^*)$.\n    First, note that $MPDA(P^*)$ will certainly terminate once all women other\n    than $w^*$ have been proposed to.\n    Thus, $Y$ is statistically dominated by the following variation $Z$ on a coupon\n    collector random variable:\n    define a process which at each step selects a number uniformly at random\n    from $[n+1]$; the process terminates when each number in $[n]$ has been\n    selected, and $Z$ outputs the number of times $n+1$ was selected.\n\n    Now, because the standard coupon collector random variable has expectation\n    $O(n\\log n)$, this modified coupon collector should have expectation\n    $O(\\log n)$. Thus, in expectation, the best-ranked man who proposes to\n    $w^*$ has rank $\\Omega( n /\\log n )$.\n\n    % We again divide $Z$ into ``stages'' $Z = \\sum_{i=1}^n Z_i$, where $Z_i$\n    % gives the number of times $n+1$ is selected between the $i-1$th and $i$th\n    % distinct number from $[n]$ being selected.\n    % Let $N_i$ denote the number of draws between the $i-1$th and $i$th distinct\n    % draw from $[n]$.\n\n    % Let $Y$ denote the number of time $w^*$ is proposed to.\n    % $MPDA$ will terminate as soon as $n$ distinct women are proposed to.\n    % This is still dominated by a coupon collector random variable.\n\n    % In expectation, $\\Theta(\\log n)$ of those proposals go to $w^*$.\n  \\end{proof}\n\n\n\n\\section{\\cite{AshlagiUnbalancedCompetition17} and MOSM to WOSM Conversion}\n\n  For this section, we always assume $|\\W| = |\\M| + k$ for $k > 0$.\n  The main algorithm used in the analysis of\n  \\cite{AshlagiUnbalancedCompetition17} is given here in\n  Algorithm~\\ref{algMosmToWosm}.\n  To understand this algorithm fully, I've actually implemented it\n  at \\url{https://github.com/ClathomasPrime/CompetitiveStableMatching}.\n  It is presented here in a more ``imperative programming'' style than that\n  given in \\cite{AshlagiUnbalancedCompetition17}.\n\n  Finally, \\cite{AshlagiUnbalancedCompetition17} shows that the core of an\n  unbalanced market is probably small by showing that\n  algorithm~\\ref{algMosmToWosm} probably terminates quickly.\n  Thus, there cannot be a big difference between the man-optimal and the\n  woman-optimal stable matches: most agents have unique stable partners, and the\n  average ranks of agents' partners are about the same in all matchings.\n  This is a long and involved probabilistic argument that is quite difficult to\n  carry out. 
The result is:\n\n  \\begin{theorem}[\\cite{AshlagiUnbalancedCompetition17}, Appendix B]\n    Consider any sequence of random matching markets with $n$ men and $n+k$ women\n    (for any $k = k(n) \\ge 1$).\n    For any $\\epsilon>0$, with probability that approaches $1$ as $n\\to\\infty$,\n    each of the following events holds:\n    \\begin{enumerate}\n      \\item Define $ s_k(n) := \\left(\\frac{n+k} n\\right) \n        \\log\\left(\\frac{n+k} k\\right)$.\n        Then for every stable matching $\\mu$,\n        \\begin{align*}\n          R_{men}(\\mu) \\le (1+\\epsilon)s_k(n) &&\n          R_{women}(\\mu) \\ge \\frac n {1 + (1+\\epsilon)s_k(n)}\n        \\end{align*}\n      \\item Less than a $1/\\sqrt{\\log n}$ fraction of the men and women have\n        multiple stable partners. In particular, this fraction approaches $0$ as\n        $n\\to\\infty$.\n        Moreover, the total number of stable partners\n        is at most $n + n/\\sqrt{\\log n}$.\n      \\item There is little difference in ranking between the MOSM and the WOSM:\n        \\begin{align*}\n          \\frac{R_{men}(WOSM)}{R_{men}(MOSM)} \\le 1+\\epsilon &&\n          \\frac{R_{women}(WOSM)}{R_{women}(MOSM)} \\ge 1-\\epsilon\n        \\end{align*}\n    \\end{enumerate}\n  \\end{theorem}\n\n  Comparing this to the results from section 3, we see that when imbalance is\n  present, men \\emph{almost always} do as well in \\emph{any} stable matching as\n  they do in the man-optimal stable matching.\n  Two special cases are worth pointing out:\n\n  \\begin{corollary}[2.2 in \\cite{AshlagiUnbalancedCompetition17}]\n    Suppose $k(n) = 1$ for each $n$.\n    Then for every $\\epsilon>0$, with probability approaching $1$ as\n    $n\\to\\infty$, we have\n    \\begin{align*}\n      R_{men}(\\mu) \\le (1+\\epsilon)\\log n &&\n      R_{women}(\\mu) \\ge (1-\\epsilon)\\frac n {\\log n}\n    \\end{align*}\n    for every stable matching $\\mu$.\n  \\end{corollary}\n\n  \\begin{corollary}[2.3 in \\cite{AshlagiUnbalancedCompetition17}]\n    Suppose there exists a $\\lambda >0$ with $k(n) = \\lambda n$ for each $n$.\n    Let $\\kappa = (1 + \\lambda)\\log(1 + 1/\\lambda)$.\n    Then for every $\\epsilon>0$, with probability approaching $1$ as\n    $n\\to\\infty$, we have\n    \\begin{align*}\n      R_{men}(\\mu) \\le (1+\\epsilon)\\kappa &&\n      R_{women}(\\mu) \\ge (1-\\epsilon)\\frac n {1 + \\kappa}\n    \\end{align*}\n    for every stable matching $\\mu$.\n  \\end{corollary}\n\n  In a follow-up paper \\cite{PittelLikelyStableUnbalanced19}, many of these\n  results are shown to be sharp, and the analysis is extended to count the\n  average number of stable matchings in an unbalanced market.\n  Interestingly, \\cite{PittelLikelyStableUnbalanced19} is not able to improve\n  the rate at which the fraction of agents with multiple stable partners tends\n  to zero as $n\\to\\infty$, even though the arguments in section 4 seem to\n  suggest that this fraction might be about $1/\\log n$.\n\n  % \\subsection{Relation to the rotation poset}\n  It's worth noting that algorithm~\\ref{algMosmToWosm} is almost identical to the\n  algorithm \\emph{minimal-differences} described\n  in~\\cite{GusfieldStableStructureAlgs89} (a book which fully fleshes out\n  ideas initiated in~\\cite{IrvingCountingStable86}).\n  The purpose of algorithm \\emph{minimal-differences} is to traverse the\n  matching lattice in a maximal chain from the MOSM to the WOSM, and along the\n  way identify ``the rotation poset'' (i.e. 
the poset for which the matching\n  lattice is the collection of downwards-closed sets of the poset, which is\n  guaranteed to exist by Birkhoff's representation theorem).\n  The only difference is that algorithm~\\ref{algMosmToWosm} does not explicitly\n  identify the ``rotations'' (i.e. simple improvement cycles).\n  \\cite{AshlagiUnbalancedCompetition17}'s careful analysis of this algorithm\n  for unbalanced random markets essentially says that the rotation poset, and\n  consequently the stable matching lattice, is typically small (in a few very\n  precise ways).\n\n    % It's not clear if the authors of \\cite{AshlagiUnbalancedCompetition17} realize this.\n\n\n  % \\begin{proof}\n  %   Induct on the length of $V$ and the size of $S$.\n  %   If $|V| = 1$, then \n  %   UUGH well this still gets a bit messy. Probably best to use known proofs.\n  %   I think it would work IF you show a converse of the ``lattice neighbors\n  %   covering'' thing: all guys hasse-adjacent in the lattice differ by an\n  %   improvement cycle.\n  % \\end{proof}\n\n\\bibliography{MechDesign}{}\n\\bibliographystyle{alpha}\n\n\\appendix\n\n\\section{The Error in \\cite{GusfieldStableStructureAlgs89}}\n\nThe algorithm of \\cite{GusfieldStableStructureAlgs89} correctly identifies all\nof the rotations in a stable matching instance. However, there is a slight error\nin the construction of the order relations for the rotation poset.\nIn particular, once the rotations are found (via an algorithm essentially\nequivalent to our algorithm~\\ref{algMosmToWosm}), they propose the following:\n\\begin{algorithm}\n  \\caption{Construct rotation ordering}\\label{algRotationOrderGI}\n\\begin{algorithmic}[1]\n  \\For {Each rotation $\\rho$ and pair $(m_i,w_i)\\in \\rho$}\n    \\State Label $w_i$ in $m_i$'s preference list with a type 1 $\\rho$ label\n    \\For {Each $m$ strictly between $m_i$ and $m_{i-1}$ on $w_i$'s list}\n      % \\Comment $w_i$ upgrades $m_i$ to $m_{i-1}$\n      \\State Label $w_i$ in $m$'s preference list with a type 2 $\\rho$ label\n    \\EndFor\n  \\EndFor\n  \\For {Each man $m$}\n    \\State Set $\\rho^* = \\emptyset$\n    \\For {Each woman $w$ on $m$'s preference list, in order}\n      \\If {$w$ has a type 1 label of $\\rho$}\n        \\If { $\\rho^* \\ne \\emptyset$ } \n          Add $\\rho^*$ as a predecessor of $\\rho$\n        \\EndIf\n        \\State Set $\\rho^* = \\rho$\n      \\EndIf\n      \\If { $w$ has a type 2 label of $\\rho$}\n        \\If {$\\rho^*\\ne \\emptyset$}\n          Add $\\rho$ as a predecessor of $\\rho^*$\n        \\EndIf\n      \\EndIf\n    \\EndFor\n  \\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nThe idea behind this algorithm is sound: suppose $m$ has woman $w$ on his\npreference list, and rotation $\\rho$ moves $w$ from below $m$ to above $m$\non her own preference list (this is exactly when $w$ receives a type 2\n$\\rho$ label in $m$'s list).\n\n\n\\end{document}\n", "meta": {"hexsha": "67175291fdc1cf35e6740d226aca9f7b5bbf0c9e", "size": 58678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/sosa.tex", "max_stars_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_stars_repo_head_hexsha": "8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tex/sosa.tex", "max_issues_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_issues_repo_head_hexsha": 
"8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/sosa.tex", "max_forks_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_forks_repo_head_hexsha": "8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.1361771944, "max_line_length": 93, "alphanum_fraction": 0.6791642524, "num_tokens": 17229, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7718435083355187, "lm_q1q2_score": 0.5642628181285607}}
{"text": "\n\\section{Scaling the GNK value}\\label{sec:scaling}\n\nBecause the GNK value is difficult to compute at scale, we review two primary approaches to approximate the GNK value for larger and more realistic networks:\n\n\\begin{itemize}\n    \\item In sections \\ref{sec:sampling_techniques} to \\ref{subsection:selection_of_sampling_method} we describe and employ sampling techniques to reduce the number of minimax optimisations that need to be conducted to approximate the GNK value to sufficient accuracy.\n    \\item And in section \\ref{sec:modified_gnk} we consider a polynomial-time computable proxy inplace of the minimax optimisations in the characteristic function \\eqref{knvalue1} of the GNK value.\n\\end{itemize}\n\n\\subsection{Sampling techniques}\\label{sec:sampling_techniques}\nTo compute the GNK value to a required accuracy, not all of the minimax optimisations $v(S)$ need to be performed, as sampling techniques may be used.\nWe consider two different approaches for bias free sampling of the GNK value:\n\\begin{enumerate}\n    \\item By the inspection of equation \\ref{da_value_eq}, we see that the GNK value of any player $i$ is an average over $v_{i,k}$%(the average value $v(S)$ for coalitions $S$ of size $k$ that include player $i$)\n, and that by randomly sampling coalitions of size $k$ which include $i$ we can sample for estimations of each $v_{i,k}$.\n    \\item By utilizing equation \\ref{convert1} we convert the problem into a standard cooperative game, where we can then compute the GNK value via the many existing sampling techniques developed for approximating the Shapley Value.\n\\end{enumerate}\n\nThe first of these two is an uncomplicated approach consisting of randomly generating coalitions $S\\subset N$, %(implicitly also calculating $v(N\\setminus S)$ via equation \\ref{myeq2})\nand then approximating $v_{i,k}$ by averaging the appropriate $v(S)$, which are then averaged to approximate the GNK value $\\varphi$ via equation \\ref{da_value_eq}. We denote this method `\\textsc{Simple}'.\n\nThe second of these two approaches is more complicated, since it involves converting the problem into a cooperative game and then a selecting a technique to sample the Shapley Value.\nSome of the possible techniques include: Neyman Sampling (`\\textsc{Neyman}') \\citep{CASTRO2017180,1938.10503378}, sampling to minimize a Hoeffding-type inequality (`\\textsc{Hoeffding}') \\citep{2013arXiv1306.4265M}, as well as a random stratified join-order sampling method (`\\textsc{Join}') \\citep{CASTRO2017180}, and unstratified random join-order sampling `ApproShapley' (`\\textsc{Appro}') \\citep{DBLP:journals/cor/CastroGT09}.\nWe consider these alongside our own developed method, the Stratified Finite Empirical Bernstein Sampling method (`SEBM') (as developed in Section \\ref{section:SEBB}).\n\nWe will compare the performance of these sampling approaches in section \\ref{section:performance}, but first we will summarise some of the differences between Shapley Value sampling techniques first.\n\n\\subsection{Differences in sampling approaches}\n\nAll the Shapley Value sampling technique sample over marginal contributions in slightly different ways; but primarily, they differ in whether they employ stratified sampling or not.\nIf they employ stratified sampling, then they sample the marginal contributions by player $i$ and size $k$ and average them to estimate each $\\hat{v}_{i,k}$ (ie. 
The second of these two approaches is more complicated, since it involves converting the problem into a cooperative game and then selecting a technique to sample the Shapley Value.\nSome of the possible techniques include: Neyman Sampling (`\\textsc{Neyman}') \\citep{CASTRO2017180,1938.10503378}, sampling to minimise a Hoeffding-type inequality (`\\textsc{Hoeffding}') \\citep{2013arXiv1306.4265M}, as well as a random stratified join-order sampling method (`\\textsc{Join}') \\citep{CASTRO2017180}, and unstratified random join-order sampling `ApproShapley' (`\\textsc{Appro}') \\citep{DBLP:journals/cor/CastroGT09}.\nWe consider these alongside our own developed method, the Stratified Finite Empirical Bernstein Sampling method (`\\textsc{SEBM}') (as developed in Section \\ref{section:SEBB}).\n\nWe will compare the performance of these sampling approaches in section \\ref{section:performance}, but first we summarise some of the differences between Shapley Value sampling techniques.\n\n\\subsection{Differences in sampling approaches}\n\nAll the Shapley Value sampling techniques sample over marginal contributions in slightly different ways; but primarily, they differ in whether they employ stratified sampling or not.\nIf they employ stratified sampling, then they sample the marginal contributions by player $i$ and size $k$ and average them to estimate each $\\hat{v}_{i,k}$ (i.e.\\ approximating the terms of \\eqref{eq:shapley_value2} and then using \\eqref{shap2}).\nConversely, if they do not employ stratified sampling, then they directly approximate the Shapley Value via equation \\eqref{shapley_value3}.\n\nSecondarily, they differ in whether or not they sample by a join-order process. Sampling over marginal contributions involves calculating the difference between $v(S)$ and $v(S\\cup\\{i\\})$ for various players $i$ and coalitions $S$; one particularly easy way of doing this is to start with the empty coalition $\\emptyset$ and generate a permutation of players that sequentially join the coalition, each making a marginal contribution in turn. In this way $n+1$ evaluations of $v(S)$ can be used to calculate $n$ marginal contributions.\nConversely, if they do not use a join-order process, then the methods randomly select a coalition $S$ and player $i$ and calculate the marginal contribution $v(S\\cup\\{i\\}) - v(S)$, thus taking two evaluations of $v(S)$ for one marginal contribution sample point.
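To make the join-order process concrete, a minimal sketch is given below: one random permutation yields $n$ marginal contributions from $n+1$ evaluations of the value oracle. The oracle \\texttt{v} is again a placeholder assumption, and this is an illustrative sketch rather than the exact \\textsc{Appro} or \\textsc{Join} implementations.

\\begin{verbatim}
import random

def marginal_contributions_join_order(n, v):
    # Sketch: one random join order gives n marginal contributions
    # from n+1 evaluations of the (placeholder) oracle v(S).
    order = list(range(n))
    random.shuffle(order)
    contributions = {}
    S = frozenset()
    prev_value = v(S)                # v(empty set): first evaluation
    for i in order:
        S = S | {i}
        value = v(S)                 # one new evaluation per player
        contributions[i] = value - prev_value  # contribution of i at size |S|
        prev_value = value
    return contributions
\\end{verbatim}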
Between the methods, as shown in Table \\ref{table:stratified_sampling_methods}: \\textsc{Appro} randomly samples in join orders without stratification; \\textsc{Join} randomly samples in join orders with stratification; \\textsc{Hoeffding} samples with stratification and without join orders, so as to minimise a sum of Hoeffding-type concentration inequalities on each of the estimates $\\hat{v}_{i,k}$;\n\\textsc{Neyman} samples with stratification and without join orders, sampling each $\\hat{v}_{i,k}$ proportional to the sample variance of the marginal contributions which make up each estimate;\nand \\textsc{SEBM} samples with stratification and without join orders, sampling the $\\hat{v}_{i,k}$ so as to maximally reduce a complicated concentration inequality on the resultant estimated Shapley Value itself.\n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{|l|l|l|l|}\n\\hline\nMethod & Stratified & Join-Order & Sampling Choice \\\\ \\hline\n\\textsc{Appro} & No & Yes & Random\\\\\n\\textsc{Join} & Yes & Yes & Random\\\\\n\\textsc{Hoeffding} & Yes & No & Hoeffding-type inequality \\\\\n\\textsc{Neyman} & Yes & No & By variance of the strata \\\\\n\\textsc{SEBM} & Yes & No & Specialised inequality\\\\ \\hline\n\\end{tabular}\n\\caption{Different Shapley Value sampling methods and their features}\n\\label{table:stratified_sampling_methods}\n\\end{table}\n\nThe full details of each of the methods can be found in their respective source documents \\citep{CASTRO2017180,2013arXiv1306.4265M,DBLP:journals/cor/CastroGT09} (and Chapter \\ref{chap:stratified_sampling_chapter}).\nThe performance of these different methods is evaluated in section \\ref{section:performance}.\n\n\\subsection{Sampling the GNK value at scale}\\label{section:performance}\n\nTo analyse the performance of approximating the GNK value with different sampling techniques, we calculated the average absolute error in the approximated GNK value for randomly generated electricity networks.\nWe used a known process for generating pseudo-random meshed networks of buses and lines reminiscent of real electricity networks.\nThe particular algorithm is called the `Simple minimum-distance graph' method as expounded by \\cite{hines1} and is given as Algorithm \\ref{alg1}; a Python transcription is sketched after the algorithm.\n\n\\begin{algorithm}[]\n\\caption{Simple minimum-distance graph algorithm}\n\\label{alg1}\n\\begin{algorithmic}\n    \\REQUIRE number of nodes $N$, number of links $m$\n    \\REQUIRE natural numbers $n_i$, such that $\\forall~i~n_i\\leq i$ and also $\\sum_{i=1}^Nn_i=m$\n    \\STATE $M=\\emptyset$ is the set of nodes\n    \\FOR{$a=1:N$}\n        \\STATE Randomly generate planar coordinates for node index $a$, $(x_a ,y_a)$, with uniform distribution\n        \\STATE $M_a=\\emptyset$ is the set of links for node index $a$\n        \\FOR{$k=1:n_a$}\n            \\STATE select a node $b\\in M$ not already linked to $a$, minimising the Euclidean distance to node index $a$:\\\\ $\\quad\\min_b\\quad (x_a-x_b)^2+(y_a-y_b)^2\\quad\\text{s.t.}\\quad (a, b)\\notin M_a$\n            \\STATE Add the link $a$-to-$b$:\\\\ $\\quad M_a=M_a\\cup \\{(a, b)\\}$\\\\ $\\quad M_b=M_b\\cup \\{(b, a)\\}$\n        \\ENDFOR\n        \\STATE Add the node $a$:\\\\ $\\quad M=M\\cup \\{(x_a ,y_a)\\}$\n    \\ENDFOR\n    \\STATE Output $M$ and $M_a$\n\\end{algorithmic}\n\\end{algorithm}
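A direct Python transcription of Algorithm \\ref{alg1} follows; the uniform coordinate range $[0,1]^2$ and the particular data structures are our own assumptions.

\\begin{verbatim}
import random

def minimum_distance_graph(N, n):
    # Sketch of Algorithm 1; n[a] is the number of links that node a
    # makes to previously placed nodes (n[a] <= a, sum(n) = m).
    coords = []                       # planar coordinates of placed nodes
    links = [set() for _ in range(N)]
    for a in range(N):
        xa, ya = random.random(), random.random()
        for _ in range(n[a]):
            # nearest already-placed node not yet linked to a
            b = min((c for c in range(len(coords)) if c not in links[a]),
                    key=lambda c: (xa - coords[c][0]) ** 2
                                + (ya - coords[c][1]) ** 2)
            links[a].add(b)
            links[b].add(a)
        coords.append((xa, ya))
    return coords, links
\\end{verbatim}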
Using this algorithm we considered networks which were randomly generated to have 10 nodes with 12 lines between them, and thus small enough for it to be possible to solve the GNK value exactly.\nIn each of these networks, every line had a randomly generated line limit, and every node was randomly assigned to be either a generator or a consumer of electricity, with a randomly generated linear utility function and randomly generated consumption/generation limits.\n\nBy computing the exact GNK value for these networks we were then able to compute the average absolute error achieved by each of the different sampling methods for different sample budgets.\nFor a given budget, the algorithms called for the computation of different numbers of the bilevel optimisations $v(S)$ (per equation \\ref{knvalue1}).\nBy plotting the average absolute error achieved against the number of unique optimisations called, we can see the performance of the sampling methods, as shown in Figure \\ref{fig:performance_graph1}.\n\nFrom this graph it is seen that the methods which utilised stratification (\\textsc{Hoeffding}, \\textsc{Neyman}, \\textsc{SEBM} and \\textsc{Join}) generally performed better than those that did not use stratification (particularly \\textsc{Simple} and \\textsc{Appro}).\nIn particular, the \\textsc{Join} method, which utilises both stratification and join-order sampling, is quite effective over a range of situations.\n\nOn this graph, the sampling methods which utilised stratification were warm-started with a budget of 200 samples (two from each stratum), as methods like \\textsc{Neyman} required at least two samples per stratum (totalling 200) for a bias-free estimate of the variances which they run on.\nConversely, \\textsc{Simple} and \\textsc{Appro} were able to be run with a smaller sample budget.\nIt is noted that the methods \\textsc{Hoeffding}, \\textsc{Neyman}, \\textsc{SEBM} and \\textsc{Join} perform exactly the same at 200 samples, as they each have two samples per stratum and no extra budget for any difference in logic to operate on.\nIt was also recognised that, because of our 10-bus networks, there are only 1024 unique optimisations $v(S)$ that can be called, and hence the x-axis stops at 1024.\nEach of the methods was run multiple times with increasing sample budget allowances, up until the point where the computation time became prohibitive; in particular, some of the methods which used sampling without replacement would only call a stochastic number of unique optimisations, and hence it became time-prohibitive to sample to perfect accuracy utilising them.\n\n\\subsection{Selection of sampling method}\\label{subsection:selection_of_sampling_method}\n\nFrom Figure \\ref{fig:performance_graph1} we see that \\textsc{Join} was reasonably effective, and it was chosen for all further analysis.\nThe reason for this superiority is the additional power of sampling with stratification and by join orders.\nAs already stated, by using join orders it is possible to calculate $n$ marginal contributions using $n+1$ evaluations of $v(S)$, as opposed to taking two evaluations of $v(S)$ for one marginal contribution sample point.\nThis simple factor outweighed the advantage of the extra sophistication associated with the \\textsc{SEBM} and \\textsc{Neyman} methods, which did not use join-order sampling - even though \\textsc{SEBM} was seen to be more performant for larger sample budgets.\nAnother advantage witnessed in utilising the \\textsc{Join} method was that it did not have the computational overhead of using the formulas present in the \\textsc{SEBM} method; additionally, the join-order sampling allowed each new optimisation that was called to be warm-started by the previous one, as each new addition to the coalition would yield an optimisation similar to the previous one.\nThese factors made \\textsc{Join} appear a simpler and more effective method for our purposes.\n\nTo evaluate the performance of the \\textsc{Join} method in approximating the GNK value for larger networks, we generated random networks of different sizes and estimated the GNK value using 8 simultaneous estimations, and timed how long it took for the average magnitude of error between these estimations to come within one percent of each other.\nA scatter plot of the run-time performance of \\textsc{Join} in approximating the GNK value for variously sized random networks by this process was generated, and the results are shown in Figure \\ref{fig:performance_graph2}.\n\nFrom this figure it is witnessed that the GNK value appears to be doubly exponentially complex to approximate by sampling, and quickly becomes intractable for networks consisting of more than $16$ buses.\nThis double-exponential complexity was somewhat expected, as solving the GNK value exactly is identified as a calculation that involves exponentially many NP-hard computations.\n\nIn light of this realisation we sought to make simplifications to the GNK value itself to ease its intractability.\nSpecifically, we considered a polynomial-time computable proxy in place of the minimax optimisations in the characteristic function \\eqref{knvalue1} of the GNK value.\n\n\\iffigures\n\\input{figs/absolute_error1.tex}\n\\fi\n\n\\iffigures\n\\input{figs/approximating_times1.tex}\n\\fi\n\n\n\n\\subsection{Sampling a modified GNK value (M-GNK) at scale}\\label{sec:modified_gnk}\n\nGiven the intractable nature of the GNK value for large networks, we considered a proxy in place of the minimax optimisations of the characteristic function that defines the GNK value (equation \\ref{knvalue1}).\nParticularly, we considered the relaxation:\n\n\\begin{align}\n\\label{knvalue22}\nv(S) = &+ \\frac{1}{2}\\left(\\sum_{i\\in S} u_i(p_i) - \\sum_{i\\in N\\setminus S}u_i(p_i)\\right)\\nonumber\\\\\n&\\quad\\quad\\quad\\quad\\quad~\\text{s.t.}~ \\left[\\{p_i\\}_{i\\in N}=\\argmax 
\\left(\\sum_{i\\in S} u_i(p_i) + \\sum_{i\\in N\\setminus S}\\epsilon u_i(p_i)\\right)\\right]\\nonumber\\\\\n&- \\frac{1}{2}\\left(\\sum_{i\\in N\\setminus S}u_i(p_i) - \\sum_{i\\in S} u_i(p_i)\\right)\\nonumber\\\\\n&\\quad\\quad\\quad\\quad\\quad~\\text{s.t}~ \\left[\\{p_i\\}_{i\\in N}=\\argmax \\left(\\sum_{i\\in N\\setminus S}u_i(p_i) + \\sum_{i\\in S} \\epsilon u_i(p_i) \\right)\\right]\n\\end{align}\n\nWhere $\\epsilon$ is a small positive value, and where we assume all the DC power constraints apply in both $\\argmax$.\n\nRather than equation \\eqref{knvalue1}, this equation \\eqref{knvalue22} encapsulates the expected payoff advantage of the coalition under a 50:50 coinflip of who goes first, where in each case the leader chooses the powerflows that strictly prioritise their own utility and then the follower's utility is maximised secondarily, we call this proxy GNK value the `M-GNK value'.\\footnote{This dynamic is evident for sufficiently small $\\epsilon$ in the formula, but the same dynamic could also be achieved by a two stage optimisation process.}\nThis expression encodes much of the same dynamic as the original expression (equation \\ref{knvalue1}) but avoids much of the adversarial strategic counter-considerations that make the original expression NP-hard, as the leader is unwilling to sacrifice their utility to harm the follower's utility and vice versa.\n\nThis proxy replaces the two-part NP-hard bilevel problems of equation \\ref{knvalue1}, into two-part single-level linear programming problems.\nThis transformation makes the M-GNK value much easier to compute at scale, but potentially at the cost of being a less perfect description of idealised minimax bargaining - even though no axioms (from section \\ref{the_value_def}) would be lost.\nThus the use of this proxy might potentially see the introduction of artefacts - we discuss the similarity and the resulting numerical differences between the original GNK and M-GNK in section \\ref{sec:GNK_extensions_discussion}.\n\nSome of the runtime statistics of sampling this M-GNK value with \\textsc{Join} sampling for randomly generated network sizes are shown in Figure \\ref{fig:performance_graph4}.\nFrom the figure it is seen that it is readily possible to calculate to about one percent accuracy this M-GNK value in three minutes of runtime for networks of up to the size of about 50 nodes, and with ten minutes of runtime up to about 80 nodes.\\footnote{\\label{note1} All calculations were performed on an Dell Optiplex 9020, with Intel i7 quad core 3.6GHz processor, and all source-code available at:\\\\\n\\href{https://github.com/Markopolo141/The\\_Generalized\\_N-K\\_Value}{github.com/Markopolo141/The\\_Generalized\\_N-K\\_Value}}\n\nUsing this ability to calculate, we can now turn our attention to what this M-GNK value actually looks like in comparison with LMP and VCG for larger networks.\n\n%\\iffigures\n\\input{figs/accuracy_graph1.tex}\n%\\fi\n\n\n\n\\section{Results and evaluation of the GNK value at scale}\\label{sec:results_and_evaluation_of_GNK}\n\n%\\iffigures\n\\input{figs/experiment_table1.tex}\n%\\fi\n\nIn this section we display and discuss some of the behaviour exhibited by the M-GNK, VCG and LMP for randomly generated larger networks.\nParticularly in figures \\ref{fig:experiment_table_11}-\\ref{fig:experiment_table_14} we show the results of these techniques applied to an example 90 bus network (generated by algorithm \\ref{alg1}), consisting of a 50-50 split of small consumers and small generators of 
Some of the runtime statistics of sampling this M-GNK value with \\textsc{Join} sampling for randomly generated network sizes are shown in Figure \\ref{fig:performance_graph4}.\nFrom the figure it is seen that it is readily possible to calculate this M-GNK value to about one percent accuracy in three minutes of runtime for networks of up to about 50 nodes, and with ten minutes of runtime for up to about 80 nodes.\\footnote{\\label{note1} All calculations were performed on a Dell OptiPlex 9020, with an Intel i7 quad-core 3.6GHz processor, and all source code is available at:\\\\\n\\href{https://github.com/Markopolo141/The\\_Generalized\\_N-K\\_Value}{github.com/Markopolo141/The\\_Generalized\\_N-K\\_Value}}\n\nUsing this ability to calculate, we can now turn our attention to what this M-GNK value actually looks like in comparison with LMP and VCG for larger networks.\n\n%\\iffigures\n\\input{figs/accuracy_graph1.tex}\n%\\fi\n\n\n\n\\section{Results and evaluation of the GNK value at scale}\\label{sec:results_and_evaluation_of_GNK}\n\n%\\iffigures\n\\input{figs/experiment_table1.tex}\n%\\fi\n\nIn this section we display and discuss some of the behaviour exhibited by the M-GNK, VCG and LMP for randomly generated larger networks.\nParticularly, in figures \\ref{fig:experiment_table_11}-\\ref{fig:experiment_table_14} we show the results of these techniques applied to an example 90-bus network (generated by algorithm \\ref{alg1}), consisting of a 50-50 split of small consumers and small generators of electricity.\nFrom this exercise we are able to see some of the distinct features of these techniques when applied to a large number of players.\n\nThe most immediate result is that LMP and VCG are nearly identical (figures \\ref{fig:experiment_table_11} and \\ref{fig:experiment_table_12}) while GNK is distinctly different (\\ref{fig:experiment_table_13}).\nThe similarity between VCG and LMP can be explained by a variety of means, but the most informal explanation is that both LMP and VCG implement the same electrical outcome, and allocate payments in proportion to marginal differences about the socially optimal point.\nThe confluence between VCG and LMP is not only witnessed by us, but has also been noted in more general settings where there are many small participants \\citep{NATH2019673, 8430852} (see section \\ref{subsec:summary_discussion_VCG}).%\\ref{subsec:summary_discussion_VCG}\n\nIn figures \\ref{fig:experiment_table_11} and \\ref{fig:experiment_table_12} we see that there are no negative utility imputations, but that positive utility is allocated to those generators who are able to provide power the cheapest, and those consumers who value power the greatest (high x-value \\& low y-value, and low x-value \\& high y-value respectively).\nThis is because the LMP creates prices for power such that the cheapest generators are scheduled to create power which is consumed by the most desiring consumers, up until a marginal point where any further dispatch/consumption would be unmotivated, and because VCG schemes allocate non-negative utilities by their axiomatic construction (per the axiom of individual rationality).\nFrom these graphs we straightforwardly identify those generators/consumers which generate/receive power, as they are rewarded with positive utility.\n\nConversely, we notice in figure \\ref{fig:experiment_table_13} a completely different result for the M-GNK value: particularly, that the M-GNK imputes utility to those consumers who do not receive power (as already noted in section \\ref{sec:features}); but furthermore, it allocates negative net utility to all but the cheapest generators, even those who generate power at the socially optimal point (identified in the previous paragraph as those generators who have positive imputation under LMP, in figure \\ref{fig:experiment_table_11}), and this dynamic is not particularly easy to explain.\n\nOne primary explanation lies in considering the average additional payoff advantage that a high-cost generator brings to a coalition.\nIn particular, consider a 50:50 coin flip about whether the coalition or its complement chooses its strategies first. If the coalition goes first, then the generator will be idle (bringing no benefit to the coalition), whereas if the complement goes first, then the generator may be forced to dispatch (causing a loss to the coalition), because the complement will choose to consume power and the power constraints must be obeyed, giving the complement a greater payoff advantage.\nHence the high-cost generator only brings a negative payoff advantage, which is reflected in the M-GNK value.\n\nOr more succinctly, the way in which the generators get allocated negative utility is that they are targets of being forced to generate at their own deficit, since power-conservation constraints must be obeyed, and this dynamic becomes a negotiating lever in the selection of threat points.\nThis consideration works in reverse for consumers, who can only consume at a positive utility to themselves; thus they are at a bargaining 
advantage which is then reflected in the positive utility they are rewarded with under M-GNK.\n\nThis behaviour in the context of large networks was somewhat unexpected; we summarise the consequences in the following section \\ref{sec:GNK_value_discussion}.\n\n\\subsection{Possible extensions}\\label{sec:GNK_extensions_discussion}\n\nOne primary question is how much of the behaviour exhibited in figure \\ref{fig:experiment_table_13} is the result of using a modified characteristic function, per the M-GNK value (equation \\ref{knvalue22}), over the original characteristic function of the GNK value (equation \\ref{knvalue1}).\nIn order to investigate this question, the error of using this modification across randomly generated networks was calculated and is shown in Figure \\ref{fig:performance_graph5}.\nThis graph shows the cumulative probability of the difference in payments between the GNK and M-GNK - and shows, for instance, that 60\\% of participants could expect their payments under the M-GNK value to be within 10\\% of their payments under the GNK value, and that this similarity increases for larger network sizes.\n\nFrom this graph it is noticed that the GNK and M-GNK values feature a similarity which seems to increase with the size of the network under consideration --- although for computational reasons, this is increasingly difficult to confirm for networks with a size greater than 13 nodes.\nThis limited observation coincides with the expectation that the possible strategic counter-considerations that are discarded by using equation \\ref{knvalue22} over equation \\ref{knvalue1} become less relevant in the context of larger networks.\n\nAnother observation is that, while the GNK remains soluble for networks of fewer than 13 nodes and the M-GNK value remains soluble for up to 80-node networks (to about 1\\% accuracy, as per figure \\ref{fig:performance_graph4}), these numbers might still be regarded as being too small for real-world electrical network modelling.\nLarger networks are expected to be tractable for the M-GNK value with more computing power and/or execution time; however, further methodological improvements may be necessary to make the GNK value (or anything similar to it) capable of bearing on larger and real-world networks.\nSome possible avenues of investigation include employing further approximations such as player clustering (such as implemented by \\cite{DBLP:journals/corr/abs-1903-10965}), or transforming the problem into a non-atomic form, similar to the non-atomic Shapley Value.\nHowever, these technical considerations are secondary to the ethical considerations about the desirability of GNK-type solutions, which we consider in the following section \\ref{sec:GNK_value_discussion}.\n\nAnother outstanding question is what the likely outcome would be, under GNK, in the context of strategising network participants.\nWhile VCG has some explicit consideration of the strategising of individual agents, and LMP is identified to be largely identical to it in the context of large numbers of small players, the consequences of agents strategising under GNK are not considered here.\nAs the GNK is incentive incompatible and also quite complex, this issue constitutes both a major consideration and a difficult question.\nWhile there does exist some work on solution concepts similar to (but more complex than) GNK that are incentive compatible (such as presented by \\cite{myerson1,Salamanca2019}), their investigation and evaluation in the context of electricity networks remain a topic 
for further investigation.\n\n\n%\\iffigures\n\\input{figs/probability_comparrison1.tex}\n%\\fi\n\n\\section{Discussion and conclusion}\\label{sec:GNK_value_discussion}\n\nTo consider the social and ethical value of the GNK value we must loop back to some of the topics explored in Chapter \\ref{cha:background}.\nThere we briefly considered some moral elements of the distribution of resources, including Equality, Efficiency, and the various normative reference points which may be pertinent.\nWe will consider the GNK in light of each of these in turn.\n\n\\subsection{Efficiency}\nIt is easy to see that the GNK value is axiomatically efficient, specifically in terms of maximising the sum of utility, by axiom (equation \\ref{myeq}); and as it is efficient in this regard it is also therefore Pareto optimal.\nThe way this efficiency is implemented is that the GNK designates the electrical outcome which maximises the sum of utility, and issues budget-balanced utility transfers between the players.\nNeither LMP nor VCG has a similar efficiency claim, as the payments both allocate between network participants are not necessarily budget balanced.\nHowever, in order to consider whether this axiomatic efficiency is actually a good thing, we need to consider it in terms of what is socially good.\n\nWhile there are different and potentially competing philosophical notions of what social `efficiency' should mean, the maximisation of the sum of utility is a very classical notion.\nThe implementation of the GNK value assumes that utility is transferable, and the easiest way of comprehending this is in terms of utility as measured by money. Hence, in this context the maximisation of utility might be seen as corresponding to the maximisation of the monetary value of outcomes.\n\nHowever, it is worth noting that this might not be socially desirable, as maximising utility via monetary measurement could potentially be prone to social problems; specifically, there exists some agreement on the diminishing marginal value of money itself, in that the rich are made only a little better off by an amount of money that would be much more appreciated by the poor.\n\nThe GNK value allocates the utilitarian outcome; however, it is not necessarily required that utility be measured in money (even though this is the easiest interpretation).\nIt is also possible to note that there exist non-transferable-utility (NTU) modifications - such as also considered by \\cite{value1} - which could be adapted to yield an NTU GNK value.\nSuch solution concepts could be utilised where utility is not directly transferable, and could incorporate non-linear utilities over possible monetary transactions, potentially resulting in more egalitarian outcomes.\n\nThe potential for modifying the GNK value for application in NTU settings, in order to account for different efficiency measures, is a topic of potential future investigation.\n\n\\subsection{Formal-equality}\n\nAll the techniques considered (GNK, VCG and LMP, etc.) 
satisfy formal-equality, as they only treat different participants differently in relation to specific morally relevant characteristics --- particularly their utility preferences, and how their presence and actions can affect the utility of others.\nIn the electricity context, this involves the utility of electricity and the capacity to deliver/generate electricity at their location, in relation to the utility of that electricity for others.\n\nHowever, whether or not these qualify as morally warranted bases for differential treatment is the subject of a wider discussion, particularly as differential treatment gives rise to differential incentives.\nFor instance, the idea of people being afforded different effective prices for electricity depending on the location of their electrical influence may be regarded as being ethically unfair to some people (especially perhaps between close neighbours whose grid connections are electrically different); however, negating this would also destroy any incentive for people to install additional generation capacity in electrically advantageous locations.\n\nDifferential treatment can be seen as the mirror image of differential incentives, and a comprehensive reflection on the incentives that would be given under LMP against VCG and GNK is beyond the scope of our consideration.\nHowever, we can speculate that there exists a primary difference between VCG and LMP (which give similar results) and GNK, in that LMP and VCG incentivise behaviour that could affect the network operating point; particularly affecting only those generators/consumers which would (or do) generate or consume power at the network operating point.\nGNK, by contrast, is more comprehensive in its consideration, as it affords utility for every possible way that the network could degenerate into adversarial competition; thus we might speculate that the GNK value (or something similar to it) might incentivise robustness in the network.\n\n\\subsection{Heterogeneity of normative points}\n\nThe essential feature of the GNK value is that it is an extension of bargaining solution concepts to multiple players over the restricted strategy space of generalised non-cooperative games.\n%And it inherits the underlying idea that the disagreement point in the non-cooperative game between two parties is the normative reference point from which the cooperative bargaining result is determined.\nThe GNK value inherits the assumption from Nash bargaining that the minimax of the zero-sum game is the point which is (or should be) the disagreement point between any two coalitions in the system, and that all coalitions are equally weighted - however this construction can be questioned.\n\nFirstly, the GNK can be seen to assign equal weight to every possible coalition that could form, irrespective of the likelihood that such coalitions would actually form in competition.\nIt may be possible to construct a weighted GNK value to account for differential likelihoods of different coalitions forming, but that remains a further consideration.\n\nSecondly, the minimax of the zero-sum game identifies a point of maximum strategising to gain payoff advantage specifically over the opponent, irrespective of the absolute payoff to the player.\nIn this way the minimax of the zero-sum game identifies a point of maximally engaged competition between the players.\n%the GNK value integrates this dynamic to consider the space of all possible coalitions and anti-coalitions which could form, and amalgamates all possible minimaxes between them 
into a point of value for each participant in those formations.\nAgainst this consideration, it is worth noting that, while the GNK (and Nash bargaining solution concepts) can be viewed as a description of perfect competition between individuals, it can also directly account for the possibility that groups (or subgroups) of players may be altruistic, in that altruism may be accounted for by a player's utility function including terms associated with positive utility for others.\n\nHowever, this might not be sufficient, and the GNK value contrasts against other similar solutions over non-cooperative games (as briefly mentioned in section \\ref{relating_to_the_old}), such as von Neumann and Morgenstern's solution (per equation \\ref{knvalue3}), where the characteristic function is identified as minimax payoff, not minimax payoff advantage.\nIn this way von Neumann and Morgenstern's characteristic function can be considered descriptive of a point of less totally engaged competition between coalitions - which might be more appropriate or pertinent for electricity networks.\nVon Neumann and Morgenstern's characteristic function is not the only alternative way that the characteristic function could be constructed, but unfortunately these alternative constructions would not necessarily yield the Nash bargaining solution in the two-player case.\n\n\n\n%And the question outstanding is, should resources be distributed above the point of such a specific description of perfect competition?\n\n%From this perspective the GNK contrasts against other solutions (such as LMP and VCG) in regards to the reference point which they build upon.\n%The VCG solution builds upon the idea that the value of a player should be in reference to their 'absence' only at the solution point of the grand coalition, and therefore cannot allocate resources to those who do not interact with such a solution point; and a similar dynamic is present in LMP, in terms of marginal interaction.\n\n%Between the various solutions (LMP, VCG, GNK and possible variants thereof) there are different choices of reference point from which the dynamic occurs, and the question which is pertinent, realistic and morally justified is still ultimately unaddressed.\n%However, one particularly important reference point, which is axiomatic to VCG, is the reference point of the utility of zero.\n%Under VCG the axiom of individual rationality casts the dynamics such that each participant will be rewarded with a non-negative utility.\n%If a zero utility is associated with not participating in the system, then individual rationality guarantees that it is better that any participant participates in the mechanism than not.\n%This is an important reference point which is not manifest in GNK, and we discuss it next.\n\n\\subsection{Wider equality}\\label{sec:wider_equality_gnk}\n\nIn the development of this research it was hoped that the GNK value would ultimately be witnessed to have a similar individual rationality property to VCG.\nIn particular, it was hoped that no participant should be allocated less than zero utility, which might be interpreted as being what they would get if they did not participate in the mechanism.\nThe ethical importance of individual rationality cannot easily be overstated, particularly as a primary notion of ethical exchange is the concept of `euvoluntary' exchange (see section \\ref{sec:reference_points}), which is (at least) where every party is not left worse off for participating.\n\nHowever, it is evident that GNK seems to violate this property, 
as it is possible for participants to be allocated less utility than zero.\nWe would expect that this particular absence of a guarantee for participants would be a major hindrance to GNK's application in electricity systems; hence an ethical failing.\n%Indeed, it is difficult to imagine how a system which would leave participants worse off than if they did not participate could be good for the wellbeing of society.\n\nIn the designing of the GNK value it was hoped that individual rationality would be a property which would be present for larger networks, particularly as it was suspected in section \\ref{subsec:nash_bargaining_endogenous} that if it were possible for players to unilaterally implement an outcome which guaranteed them a utility of zero, then they would be guaranteed a non-negative net utility.\nHowever, in our GNK application to electricity networks, the enforcement of the power conservation constraints (per equation \\ref{dcopf1}) seems to have disallowed this eventuality.\n\nFor instance, under the GNK value in the context of our generalised game modelling DC networks, the power conservation constraint causes the incorporation of unrealistic bargaining manoeuvres, such as participants extortionately threatening to oversupply others.\nIt is very possible that alterations to the GNK value, and to similar types of Shapley Value structures over non-cooperative game solutions, could be made to amend this issue.\n\nIt may be possible, for instance, to reconsider the electrical interaction between participants, perhaps by allowing players' action space to be the selection of a voltage level at their location, rather than directly their power input/output; however, in this context power line limits would be manifest as non-linear constraints on the optimisation problem, subsequently making computation more difficult.\nAlternatively, it might also be possible to impose blackout costs on participants to curtail the possibility of unrealistic bargaining manoeuvres in the GNK value logic.\n\nIn these cases, it might be possible to give participants actions to guarantee themselves a disagreement point which affords them a zero utility, and hence potentially restore individual rationality to the resultant GNK value; however, these investigations are a topic for future research.\n\nAt a broader level of consideration, it is interesting how perfect competition may or may not coincide with what is equal and ethical.\nThe question of when and where these coincide, and particularly whether they might coincide in the context of electricity networks, was part of the motivation of this research.\nUnfortunately, our investigations demonstrated that the most direct application of the GNK value would be expected to fail wider social equality considerations.\n\nIn the next chapter we delve into the details of the sampling techniques that were developed and tested throughout this research, and we provide a final conclusion to our research in Chapter \\ref{cha:conc}. 
\n\n\n\n\n\n", "meta": {"hexsha": "44d422f1bdb5d8cbeb3d376da17785bc5a05555e", "size": 36894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/chapters_old/old_new_solution2.tex", "max_stars_repo_name": "Markopolo141/Thesis_code", "max_stars_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Thesis/chapters_old/old_new_solution2.tex", "max_issues_repo_name": "Markopolo141/Thesis_code", "max_issues_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/chapters_old/old_new_solution2.tex", "max_forks_repo_name": "Markopolo141/Thesis_code", "max_forks_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 117.8722044728, "max_line_length": 593, "alphanum_fraction": 0.8063099691, "num_tokens": 8000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355188, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5642628136059497}}
{"text": "\\chapter{Implicit One-Step Methods and Long-Term Stability}\n\n\\begin{intro}\n  In the first chapter, we studied methods for the solution of IVP and\n  the analysis of their convergence with shrinking step size $h$.  We\n  could gain a priori error estimates from consistency and stability\n  for sufficient small $h$.\n\n  All of these error estimates are based on Gr\u00f6nwall's\n  inequality. Therefore, they contain a term of the form $e^{Lt}$\n  which increases fast with increasing length of the time interval\n  $[t_0,T]$. Thus, the analysis is unsuitable for the study of\n  long-term integration, since the exponential term will eventually\n  outweigh any term of the form $h^p$.\n  \n  On the other hand, for instance our solar system has been moving on\n  stable orbits for several billion years and we do not observe an\n  exponential increase of velocities. Thus, there are in fact\n  applications for which the simulation of long time periods is\n  worthwhile and where exponential growth of the discrete solution\n  would be extremely disturbing.\n\t\n  This chapter first studies conditions on differential equations with\n  bounded long term solutions, and then discusses numerical methods\n  mimicking this behavior.\n\\end{intro}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Monotonic initial value problem}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{example}\n  \\label{ex:linear-model}\n  We consider for $\\lambda \\in \\C$ the linear initial value problem\n  \\begin{equation}\n    \\begin{split}\n      \\label{eq:impl:testproblem}\n      u' & = \\lambda u\\\\\n      u(0) & = 1.\n    \\end{split}\n  \\end{equation}\n  Splitting $\\lambda=\\Re(\\lambda)+i\\Im(\\lambda)$ into its real and\n  imaginary part, the (complex valued) solution to this problem is\n  \\begin{gather*}\n     u(t) = e^{\\lambda t}\n     = e^{\\Re(\\lambda)t}\\bigl(\n     \\cos(\\Im(\\lambda)t)+i\\sin(\\Im(\\lambda)t)\n     \\bigr).\n  \\end{gather*}\n  The behavior of $u(t)$ for $t\\to\\infty$ is determined by the real\n  part of $\\lambda$:\n  \\begin{gather}\n    \\label{eq:impl:cases}\n    \\begin{aligned}\n      \\Re(\\lambda) < 0 & :\\qquad& u(t) &\\to 0 \\\\\n      \\Re(\\lambda) = 0 & :\\qquad& |u(t)| &= 1 \\\\\n      \\Re(\\lambda) > 0 & :\\qquad& u(t) &\\to \\infty\n    \\end{aligned}\n  \\end{gather}\n  Moreover, the solution is bounded for $\\lambda$ with non-positive\n  real part for all points in time $t$.\n\\end{example}\n\n\\begin{remark}\n  Since we deal in the following again and again with eigenvalues of\n  real-valued matrices, we will always consider complex valued IVP\n  hereafter, due to the well known fact, that these eigenvalues can be\n  complex.\n\\end{remark}\n\n\\begin{remark}  \n  Due to Gr\u00f6nwall's inequality and the stability\n  \\slideref{Theorem}{IVP-stability}, the solution to the IVP above\n  admits the estimate $|u(t)| \\le e^{|\\lambda|t} |u(0)|$.  This is\n  seen easily by applying the comparison function $v(t) \\equiv 0$.  As\n  soon as $\\lambda \\neq 0$ has a non-positive real part, this estimate\n  is still correct but very pessimistic and therefore useless for\n  large $t$.  
\\input{definitions/lipschitz-one-sided}\n\n\\begin{remark}\n  The term monotonic from the previous definition is consistent with\n  the term \\emph{monotonically decreasing}, which we know from real-valued\n  functions. We can see this by observing for $y>x$\n  \\begin{gather*}\n    \\bigl(f(t,y)-f(t,x)\\bigr)(y-x) \\le 0\n    \\quad \\Leftrightarrow \\quad f(t,y)-f(t,x) \\le 0.\n  \\end{gather*}\n\\end{remark}\n\n\\input{theorems/stability-monotonic}\n\n\\begin{proof}\n  We consider the auxiliary function $m(t) = \\abs{v(t)-u(t)}^2$ and\n  its derivative\n  \\begin{align*}\n    m'(t) &= 2\\Re\\scal({v'(t)-u'(t)},{v(t)-u(t)}) \\\\\n    &= 2 \\Re\\scal({f\\bigl(t,v(t)\\bigr)-f\\bigl(t,u(t)\\bigr)},{v(t)-u(t)})\n    \\\\\n    &\\le 2 \\nu \\abs{v(t)-u(t)}^2 \\\\\n    &= 2 \\nu m(t).\n  \\end{align*}\n  According to Gr\u00f6nwall's inequality in \\slideref{Lemma}{gronwall}\n  we obtain for $t > t_0$:\n  \\begin{gather*}\n    m(t) \\le m(t_0) e^{2\\nu(t-t_0)}.\n  \\end{gather*}\n  Taking the square root yields the stability\n  estimate~\\eqref{eq:implicit:3}.\n\\end{proof}\n\n\\begin{remark}\n  Analogous to example~\\ref{ex:linear-model}, we obtain from the\n  stability estimate that the difference of two solutions $u(t)$ and\n  $v(t)$ of the differential equation $u'=f(t,u)$ behaves in the limit\n  $t\\to\\infty$ as:\n  \\begin{gather}\n    \\label{eq:implicit:4}\n    \\begin{aligned}\n      \\nu < 0: & \\qquad & \\abs{v(t)-u(t)} &\\to 0 \\\\\n      \\nu = 0: & \\qquad & \\abs{v(t)-u(t)} &\\le \\abs{v(t_0)-u(t_0)}\n    \\end{aligned}\n  \\end{gather}\n\\end{remark}\n\n\\begin{Lemma}{linear-monotonic}\n  The linear differential equation $u'=Au$ with $u(t)\\in \\C^d$ and a\n  diagonalizable matrix function $A(t) \\in \\C^{d\\times d}$ admits the\n  one-sided Lipschitz condition~\\eqref{eq:impl:Lipschitz} with the\n  constant\n  \\begin{gather*}\n    \\nu = \\max_{\\substack{i=1,\\dots,d\\\\t\\in \\R}} \\Re(\\lambda_i).\n  \\end{gather*}\n  Accordingly, this ODE is monotonic if and only if for all\n  eigenvalues $\\lambda_i$ of $A(t)$ there holds\n  \\begin{gather}\n    \\label{eq:implicit:10}\n    \\Re (\\lambda_i) \\le 0.\n  \\end{gather}\n  This is the vector-valued form of example~\\ref{ex:linear-model}.\n\\end{Lemma}\n\n\\begin{proof}\n  For the right hand side of the equation we have\n  \\begin{gather*}\n    \\Re \\scal(A(t)y-A(t)x,y-x)\n    = \\Re \\frac{\\scal(A(t)(y-x),y-x)}{\\abs{y-x}^2}\\abs{y-x}^2\n    \\le \\max_{i=1,\\dots,d}\\Re (\\lambda_i) \\abs{y-x}^2.\n  \\end{gather*}\n  Hence, we already obtain $\\nu \\le \\max_{i=1,\\dots,d}\n  \\Re(\\lambda_i)$. If we now insert for $x-y$ an eigenvector\n  corresponding to an eigenvalue $\\lambda$ for which the maximum is\n  attained, then we obtain equality and therefore $\\nu = \\max_{i=1,\\dots,d}\n  \\Re(\\lambda_i)$.\n\\end{proof}
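As a numerical illustration of the lemma (under the stronger assumption of a normal matrix, where the eigenvalue estimate is immediate), the sketch below computes $\\nu=\\max_i\\Re(\\lambda_i)$ and checks the resulting contraction of the difference of two solutions; the matrix is an arbitrary example of our choosing.

\\begin{verbatim}
import numpy as np

A = np.array([[-2.0, 1.0],
              [-1.0, -2.0]])               # arbitrary example matrix
nu = max(np.linalg.eigvals(A).real)        # one-sided Lipschitz constant
print("nu =", nu)                          # negative => monotonic ODE

# contraction check: |v(t)-u(t)| <= e^{nu t} |v(0)-u(0)|,
# integrating the difference d' = A d with small explicit Euler steps
h, d = 1e-4, np.array([1.0, 1.0])
for _ in range(10000):                     # integrate up to t = 1
    d = d + h * A @ d
print("|v-u| at t=1:", np.linalg.norm(d),
      "bound:", np.exp(nu) * np.linalg.norm([1.0, 1.0]))
\\end{verbatim}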
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Stiff initial value problems}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{example}\n  \\label{ex:impl:2}\n  We consider the IVP\n  \\begin{gather}\n    \\label{eq:implicit:5}\n    u' =\n    \\begin{pmatrix}\n      -1 & 0 \\\\ 0 & -100\n    \\end{pmatrix}u,\n    \\qquad u(0) =\n    \\begin{pmatrix}\n      1 \\\\ 1\n    \\end{pmatrix}.\n  \\end{gather}\n  This has the solution\n  \\begin{gather*}\n    u(t) =\n    \\begin{pmatrix}\n      e^{-t} \\\\ e^{-100t}\n    \\end{pmatrix}.\n  \\end{gather*}\n  We see that the solution has a component which decreases slowly in\n  time with $e^{-t}$ and a second one, which decreases fast with\n  $e^{-100t}$. If we apply the Euler method with step size $h$ to this\n  equation, then we obtain the method step\n  \\begin{gather*}\n    y^{(n)} = y^{(n-1)} + h\n    \\begin{pmatrix}\n      -1 & 0 \\\\ 0 & -100\n    \\end{pmatrix} y^{(n-1)}\n    = \\begin{pmatrix}\n      1-h & 0 \\\\ 0 & 1-100h\n    \\end{pmatrix}y^{(n-1)}\n  \\end{gather*}\n  with the solution\n  \\begin{gather*}\n    y^{(n)} =\n    \\begin{pmatrix}\n      (1-h)^n \\\\ (1-100h)^n\n    \\end{pmatrix}\n  \\end{gather*}\n  If we are interested in the second solution component, the one which\n  decreases fast, we choose $h$ to be small, say $h<1/100$. Thus, for\n  $n\\to\\infty$ we have $y_n\\to 0$, slowly in the first component, fast\n  in the second one, just like the exact solution $u(t)$ of the\n  continuous problem. (Recall that for a fixed step size $h$ the limits\n  $t\\to\\infty$ and $n\\to\\infty$ coincide.)\n\n  If we are just interested in the first, the slow component, at a\n  time where it has changed significantly, then a considerably larger\n  step size is appropriate, say $h=1/10$. For this step size the\n  first solution component is still converging to zero with $y^{(n)}_1\n  = 0.9^n$. For the second one we have however $|y^{(n)}_2| =\n  |(-9)^n| \\to \\infty$. Therefore the approximate solution for this\n  step size diverges for $n\\to\\infty$, very much in contrast to the\n  behavior of the exact solution $u(t)\\to 0$ for $t\\to\\infty$.\n\\end{example}\n\n\\begin{remark}\n  Of course, it would have been possible to ignore the second\n  component in the previous example. But this is not a simple task in\n  general, due to the fact that most solution components are\n  coupled through the equation. In such cases the step size of the\n  Euler method must be chosen to accommodate the ``fast\n  components''. This can lead to significant computational overhead.\n  Therefore, we define in the following the characteristic properties\n  of such problems and develop solution methods specially adapted to\n  them.\n\\end{remark}\n\n\\input{definitions/stiff}\n\n\\begin{remark}\n  Even though we used the term definition, the notion of ``stiffness\n  of an IVP'' has something vague or even inaccurate about it. 
In fact\n  that is due to the very nature of the problems and cannot be fixed.\n  Instead we are forced to sharpen our understanding by means of a few\n  examples.\n\\end{remark}\n\n\\begin{remark}\n  The third condition in the definition of stiffness is rather rare to\n  find in the literature, but it is in general implicitly assumed in\n  the discussion of time stepping methods for stiff IVP. It is important\n  though to realize that the methods of the previous chapter do not\n  cause problems computing a good resolution of the fastest time\n  scales. In this case, $e^{Lt}$ will not be too much greater than\n  $e^{\\nu t}$.\n\\end{remark}\n\n\\begin{example}\n  \\label{ex:impl:2a}\n  First of all we will have a look at equation~\\eqref{eq:implicit:5}\n  in example~\\ref{ex:impl:2}. The first condition of the stiffness\n  definition is fulfilled. The decrease to $1/e$ happens at $t=1$ and\n  at $t=1/100$ for the first and second component, respectively.\n  Thus, the second condition holds as well.\n\n  According to the discussion of example~\\ref{ex:impl:2}, the third\n  condition depends on the purpose of the computation. If we want\n  to compute the solution at time $t=1/100$, we would not denote the\n  problem as stiff. If one is interested in the solution at time\n  $t=1$, at which the second component with $e^{-100}$ is already\n  below typical machine accuracy, the problem is stiff indeed. Here\n  we have seen that Euler's method requires disproportionately small\n  time steps.\n\\end{example}\n\n\\begin{remark}\n  \\label{remark:impl:1}\n  The definition of stiffness and the discussion of the examples\n  reveal that numerical methods are needed which are not just\n  convergent for time steps $h\\to 0$, but which also behave well for\n  fixed step size $h$, even in the presence of time scales clearly\n  below $h$. In this case, methods still have to produce solutions\n  with the correct limit behavior for $t\\to\\infty$.\n\\end{remark}\n\n\\begin{example}\n  The \\define{implicit Euler method} is defined by the one-step\n  formula\n  \\begin{gather}\n    \\label{eq:implicit:17}\n    y_1 = y_0 + hf(t_1, y_1)\n    \\quad \\Leftrightarrow \\quad\n    y_1 - h f(t_1, y_1) = y_0.\n  \\end{gather}\n  Applied to our example~\\eqref{eq:implicit:5}, we observe\n  \\begin{gather*}\n    y^{(n)} =\n    \\begin{pmatrix}\n      1 + h & 0 \\\\ 0 & 1+100h\n    \\end{pmatrix}^{-1}\n    y^{(n-1)}.\n  \\end{gather*}\n  This yields the solution\n  \\begin{gather*}\n    y^{(n)} =\n    \\begin{pmatrix}\n      \\frac1{(1+h)^n} \\\\\n      \\frac1{(1+100h)^n}\n    \\end{pmatrix}\n  \\end{gather*}\n  which converges to zero for $n\\to \\infty$, independent of $h$. Thus,\n  although the implicit Euler method requires in general the solution\n  of a nonlinear system in each step, it allows for much larger time\n  steps than the explicit Euler method, when applied to a stiff problem.\n\n  For a visualization of the implicit and explicit Euler method see the\n  appendix, and the numerical sketch below.\n\\end{example}
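The contrast between the two methods on the stiff example~\\eqref{eq:implicit:5} can be reproduced in a few lines of Python; the step size $h=1/10$ matches the discussion in example~\\ref{ex:impl:2}.

\\begin{verbatim}
import numpy as np

lambdas = np.array([-1.0, -100.0])  # eigenvalues of the diagonal system
h, steps = 0.1, 50
y_exp = np.ones(2)                  # explicit Euler iterate
y_imp = np.ones(2)                  # implicit Euler iterate
for _ in range(steps):
    y_exp = (1 + h * lambdas) * y_exp    # y^(n) = (I + h A) y^(n-1)
    y_imp = y_imp / (1 - h * lambdas)    # y^(n) = (I - h A)^{-1} y^(n-1)
# second component: explicit Euler grows like (-9)^n, implicit Euler decays
print("explicit:", y_exp, " implicit:", y_imp)
\\end{verbatim}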
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{A- and B-stability}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  In this section, we will investigate desirable properties of\n  one-step methods for stiff IVP~\\eqref{eq:implicit:6}. We will first\n  study linear problems of the form\n\\begin{gather}\n  \\label{eq:implicit:6}\n  u'=Au \\qquad u(t_0) = u_0\n\\end{gather}\nand the related notion of A-stability in detail. From the conditions\n  for stiffness we derive the following problem characteristics:\n  \\begin{enumerate}\n  \\item All eigenvalues of the matrix $A$ lie in the left half-plane\n    of the complex plane. With~\\eqref{eq:impl:cases} all solutions are\n    bounded for $t\\to\\infty$.\n  \\item There are eigenvalues close to zero and eigenvalues with a\n    large negative real part.\n  \\item We are interested in time spans which make it necessary that\n    the product $h\\lambda$ of the time step and an arbitrary eigenvalue\n    is allowed to be large.\n  \\end{enumerate}\n  For this case we now want to derive criteria for the boundedness of\n  the discrete solution for $t\\to\\infty$. The important part is not to\n  derive an estimate holding for $h\\to 0$, but one that holds for any\n  value of $h\\lambda$ in the left half-plane of the complex numbers.\n\\end{intro}\n\n\\begin{Definition}{stability-function}\n  The \\define{stability function} $R(z) = R(h \\lambda)$ is the\n  function generated by applying the one-step method\n  \\begin{gather*}\n    y_1 = y_0 + h \\verfahren_h(t_0, y_0)\n  \\end{gather*}\n  to the linear test problem $u'(t)=\\lambda u(t)$. Therefore,\n  \\begin{equation}\n    y_1 = R(h \\lambda) u_0,\n  \\end{equation}\n  and\n  \\begin{equation}\n    y^{(n)} = R(h \\lambda)^n u_0.\n  \\end{equation}\n  The \\define{stability region} of a one-step method is the set\n  \\begin{gather}\n    S = \\bigl\\{ z \\in \\C \\big| \\abs{R(z)} \\le 1 \\bigr\\}.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{example}\n  The stability function of the explicit Euler method is derived as follows:\n  \\begin{equation}\\begin{split}\n      \\label{eq:impl:stabil:expleuler}\n      y_1 &= y_0 + h \\lambda y_0 = (1 + h \\lambda) y_0 \\\\\n      \\Rightarrow R(h \\lambda) &= 1 + h \\lambda \\\\\n      R(z) &= 1 + z \\\\\n    \\end{split}\\end{equation}\n  The stability region of the explicit Euler method is the disc of radius\n  1 around the point $(-1,0)$ in the complex plane (see Figure~\\ref{fig:implicit:stability-euler}).\n\\end{example}\n\n\n\\begin{example}\n  The stability function of the implicit Euler method is derived as follows:\n  \\begin{equation}\\begin{split}\n  \\label{eq:impl:stabil:impleuler}\n    y_1 &= y_0 + h f(t_1, y_1) \\\\\n    y_1 &= y_0 + h \\lambda y_1 \\\\\n    (1 - h \\lambda) y_1 &= y_0 \\\\\n    \\Rightarrow R(h \\lambda) &= \\frac{1}{1- h \\lambda} \\\\\n    R(z) &= \\frac{1}{1-z} \\\\\n  \\end{split}\\end{equation}\nThe stability region of the implicit Euler method is the complement of the\nopen disc of radius 1 around the point $(1,0)$ in the complex plane (see\nFigure~\\ref{fig:implicit:stability-euler}).\n%% illustrative figure?\n\\end{example}
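Stability regions such as those in Figure~\\ref{fig:implicit:stability-euler} can be sketched by evaluating $\\abs{R(z)}$ on a grid in the complex plane; a minimal sketch, assuming matplotlib is available:

\\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 401), np.linspace(-3, 3, 401))
z = x + 1j * y
R_explicit = abs(1 + z)        # explicit Euler stability function
R_implicit = abs(1 / (1 - z))  # implicit Euler stability function

fig, axes = plt.subplots(1, 2)
for ax, R, title in zip(axes, [R_explicit, R_implicit],
                        ["explicit Euler", "implicit Euler"]):
    ax.contourf(x, y, R <= 1)  # shade the stability region |R(z)| <= 1
    ax.set_title(title)
plt.show()
\\end{verbatim}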
\\begin{figure}[tp]\n  \\centering\n  \\includegraphics[width=.47\\textwidth]{fig/stability-euler}\n  \\hfill\n  \\includegraphics[width=.47\\textwidth]{fig/stability-euler2}\n  \\caption{Stability regions of the explicit and implicit Euler\n    methods (blue stable, red unstable)}\n  \\label{fig:implicit:stability-euler}\n\\end{figure}\n\n\\begin{Definition*}{a-stability}{A-stability}\n  A method is called \\define{A-stable} if its stability region contains\n  the left half-plane of $\\C$, hence\n  \\begin{gather}\n    \\{ z \\in \\C | \\Re(z) \\le 0 \\} \\subset S\n  \\end{gather}\n\\end{Definition*}\n\n\\begin{Theorem}{a-stability}\n  Let $\\left\\{y^{(k)}\\right\\}$ be the sequence generated by an\n  A-stable one-step method of step size $h$ for the linear, autonomous IVP\n  \\begin{gather*}\n    u'=Au, \\qquad u(t_0) = u_0\n  \\end{gather*}\n  with a diagonalizable matrix $A$ and initial value $y^{(0)} =\n  u_0$. If additionally all eigenvalues of $A$ have a non-positive\n  real part, then the sequence members $y^{(k)}$ are uniformly bounded\n  for all $h$.\n\\end{Theorem}\n\n\\begin{remark}\n  The notion of A-stability was deliberately given a neutral name by Dahlquist.\n  In particular, note that A-stability does not stand for asymptotic stability.\n\\end{remark}\n\n\\begin{proof}\n  Let $\\{v_\\ell\\}_{\\ell=1,\\dots,d}$ be a basis of $\\C^d$ consisting of\n  eigenvectors of $A$. Such a basis exists since $A$ is\n  diagonalizable. Let $y_0 = \\sum_{\\ell=1}^d \\alpha^\\ell v_\\ell$ be\n  the representation of $y_0$ in this basis. Furthermore, we introduce\n  the representations $g_i = \\sum_{\\ell=1}^d \\gamma_i^\\ell\n  v_\\ell$. Then, we see that equations of the form\n  \\begin{gather*}\n    g_i = y_0 + h \\sum_{j=1}^s a_{ij} A g_j\n  \\end{gather*}\n  decouple into\n  \\begin{gather*}\n    \\gamma_{i}^\\ell = \\alpha^\\ell + h \\sum_{j=1}^s a_{ij} \\lambda_\\ell \\gamma_j^\\ell.\n  \\end{gather*}\n  Similarly, if $y_1 = \\sum_{\\ell=1}^d \\eta^\\ell v_\\ell$ we have the separation\n  \\begin{gather*}\n    y_1 = y_0 + h\\sum_{i=1}^s b_i g_i\n    \\quad\\longrightarrow\\quad\n    \\eta^\\ell = \\alpha^\\ell  + h\\sum_{i=1}^s b_i \\gamma_i^\\ell.\n  \\end{gather*}\n  Thus, instead of a vector-valued problem, the method solves $d$\n  decoupled scalar problems, inside and across time steps. But for each of the scalar problems, the definition of A-stability implies boundedness of the solution, if $\\Re(\\lambda_\\ell) \\le 0$.\n\\end{proof}\n\n\\begin{Theorem}{implicit:a-stability-erk}\n  No explicit Runge-Kutta method is A-stable.\n\\end{Theorem}\n\n\\begin{proof}\n  We show that for such methods $R(z)$ is a polynomial. Then the\n  theorem follows immediately, since the absolute value of a\n  non-constant polynomial tends to infinity as the absolute value of\n  its argument does, so that the stability region cannot contain the\n  whole left half-plane.\n\n  From equation~\\eqref{eq:explicit:1b} it follows that $k_i = \\lambda\n  g_i$. 
If we insert that into the equation~\\eqref{eq:explicit:1a},\n  we obtain\n  \\begin{gather*}\n    g_i = y_0 + h \\sum_{j=1}^{i-1} \\rka_{ij}k_j = y_0 + h\\lambda\n    \\sum_{j=1}^{i-1} \\rka_{ij} g_j.\n  \\end{gather*}\n  With $g_1 = y_0$ and $z=h\\lambda$ one has\n  \\begin{align*}\n    g_2 &= y_0+ \\rka_{21} z y_0 = (1+\\rka_{21}z) y_0\\\\\n    g_3 &= y_0 +  \\rka_{32} z g_2 =  y_0 +  \\rka_{32} z (1+\\rka_{21} z) y_0 =\n    (1+\\rka_{32} z(1+\\rka_{21} z)) y_0.\n  \\end{align*}\n  Therefore one easily shows by induction that $g_j$ results as the\n  product of a polynomial of degree $j-1$ in $z$ with $y_0$.\n  With formula~\\eqref{eq:explicit:1c} we have that $R(z)$ is a\n  polynomial of degree at most $\\rks$.\n\\end{proof}\n\n\n\n\\begin{remark}\n  The notion of A-stability is only applicable to linear problems with\n  diagonalizable matrices. We now consider its extension to\n  nonlinear problems with monotonic right hand sides.\n\\end{remark}\n\n\\input{definitions/b-stability}\n\n\\begin{Theorem}{b-stability}\n  Let $\\left\\{y^{(k)}\\right\\}$ be the sequence generated by a\n  B-stable method for the IVP\n  \\begin{gather*}\n    u'=f(u), \\qquad u(t_0) = u_0\n  \\end{gather*}\n  with initial value $y^{(0)} = u_0$. If the right hand side $f$ is\n  monotonic, then the values $y^{(k)}$ of the sequence are uniformly\n  bounded for $k\\to\\infty$, independent of the time step size $h$.\n\\end{Theorem}\n\n\\begin{proof}\n  The theorem follows immediately by iterating over the definition of\n  B-stability.\n\\end{proof}\n\n\\begin{Corollary}{b-stable-a-stable}\n  A B-stable method is A-stable.\n\\end{Corollary}\n\n\\begin{proof}\n  Apply the method to the linear differential model equation, which is\n  monotonic for $\\Re(\\lambda) \\le 0$. Now, the definition of B-stability\n  implies $\\abs{R(z)} \\le 1$, and thus, the method is A-stable.\n\\end{proof}\n\n\\subsection{L-stability}\n\nAn undesirable feature of complex differentiable functions in the\ncontext of stability of Runge-Kutta methods is the fact that the\nlimit $\\lim_{z\\to \\infty} R(z)$ is well-defined on the \\putindex{Riemann sphere},\nindependent of the\npath chosen to approach this limit in the complex plane. Thus, for any\nreal number $x$, we have\n\\begin{gather}\n  \\lim_{x\\to\\infty} R(x) = \\lim_{x\\to\\infty} R(ix).\n\\end{gather}\n\nThus, a method which has exactly the left half-plane of $\\C$ as its\nstability domain, seemingly a desirable property, has the undesirable\nproperty that components in eigenspaces corresponding to very large\nnegative eigenvalues, and thus decaying very fast in the continuous\nproblem, are decaying very slowly if such a method is applied.\n\nThis gave rise to the following notion of L-stability. We\nnevertheless point out that L-stable methods are not always to be\nconsidered better than A-stable ones, just as it is not always necessary\nto require A-stability. 
\begin{remark}
  The notion of A-stability is only applicable to linear problems with
  diagonalizable matrices. We now consider its extension to
  nonlinear problems with monotonic right hand sides.
\end{remark}

\input{definitions/b-stability}

\begin{Theorem}{b-stability}
  Let $\left\{y^{(k)}\right\}$ be the sequence generated by a
  B-stable method for the IVP
  \begin{gather*}
    u'=f(u), \qquad u(t_0) = u_0,
  \end{gather*}
  with initial value $y^{(0)} = u_0$. If the right hand side $f$ is
  monotonic, then the values $y^{(k)}$ of the sequence are uniformly
  bounded for $k\to\infty$, independent of the time step size $h$.
\end{Theorem}

\begin{proof}
  The theorem follows immediately by iterating over the definition of
  B-stability.
\end{proof}

\begin{Corollary}{b-stable-a-stable}
  A B-stable method is A-stable.
\end{Corollary}

\begin{proof}
  Apply the method to the linear model equation, which is
  monotonic for $\Re(\lambda) \le 0$. Now, the definition of B-stability
  implies $\abs{R(z)} \le 1$, and thus the method is A-stable.
\end{proof}

\subsection{L-stability}

An undesirable feature of complex differentiable functions in the
context of stability of Runge-Kutta methods is the fact that the
limit $\lim_{z\to \infty} R(z)$ is well-defined on the \putindex{Riemann sphere},
independent of the path chosen to approach this limit in the complex
plane. Thus, along the real and the imaginary axes, we have
\begin{gather}
  \lim_{x\to\infty} R(x) = \lim_{x\to\infty} R(ix).
\end{gather}

Thus, a method which has exactly the left half-plane of $\C$ as its
stability domain, seemingly a desirable property, has the undesirable
property that solution components in eigenspaces corresponding to very
large negative eigenvalues, which decay very fast in the continuous
problem, decay very slowly when such a method is applied.

This gave rise to the following notion of L-stability. We
nevertheless point out that L-stable methods are not always to be
considered better than A-stable ones, just as it is not always necessary
to require A-stability. Judgment must be applied according to the
problem being solved.

\begin{Definition}{l-stability}
  An A-stable one-step method is called \define{L-stable}, if for its
  \putindex{stability function} there holds
  \begin{gather}
    \lim_{\Re (z) \to -\infty} \abs{R(z)} = 0.
  \end{gather}
  Some authors refer to L-stable methods as \define{strongly A-stable}.
\end{Definition}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{General Runge-Kutta methods}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{intro}
  According to theorem~\ref{Theorem:implicit:a-stability-erk}, ERK
  methods cannot be A- or B-stable. Thus, they are not suitable for
  long-term integration of stiff IVP. The goal of this chapter is the
  study of methods not suffering from this limitation. The cure will be
  implicit methods, where stages may not only depend on
  known values from the past, but also on the value to be computed.

  We point out at the beginning of this chapter that the main
  drawback of these methods is the fact that they require the solution
  of a typically nonlinear system of equations and thus involve much
  higher computational effort. Therefore, careful judgment should
  always be applied to determine whether a problem is really stiff or
  an implicit method is needed for other reasons.
\end{intro}

\input{definitions/rk}


\begin{example}[Two-stage SDIRK]
  Both SDIRK methods in table~\ref{tab:SDIRK2} are of
  order three; a numerical check of the order conditions follows below.
  \begin{table}[htp]
    \centering
  \begin{gather}
    \def\arraystretch{1.5}
    \begin{array}{c|cc}
      \frac12 - \frac{\sqrt{3}}{6} & \frac12 - \frac{\sqrt{3}}{6} & 0 \\
      \frac12 + \frac{\sqrt{3}}{6} & \frac{\sqrt{3}}{3} & \frac12 - \frac{\sqrt{3}}{6} \\
      \hline
      & \frac12 & \frac12
    \end{array}
    \qquad
    \begin{array}{c|cc}
      \frac12 + \frac{\sqrt{3}}{6} & \frac12 + \frac{\sqrt{3}}{6} & 0 \\
      \frac12 - \frac{\sqrt{3}}{6} & -\frac{\sqrt{3}}{3} & \frac12 + \frac{\sqrt{3}}{6} \\
      \hline
      & \frac12 & \frac12
    \end{array}
  \end{gather}
    \caption{Two-stage SDIRK methods of order 3}
    \label{tab:SDIRK2}
  \end{table}
\end{example}
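The order of the left tableau can be verified from the classical order
conditions up to order three, namely $\sum_i b_i = 1$, $b^Tc = \tfrac12$,
$\sum_i b_i c_i^2 = \tfrac13$ and $b^TAc = \tfrac16$ (a numerical sketch
assuming NumPy):
\begin{verbatim}
import numpy as np

# Order conditions up to 3 for the first SDIRK tableau of the example.
g = np.sqrt(3) / 6
A = np.array([[0.5 - g, 0.0],
              [np.sqrt(3) / 3, 0.5 - g]])
b = np.array([0.5, 0.5])
c = np.array([0.5 - g, 0.5 + g])

print(np.isclose(b.sum(), 1.0))        # order 1
print(np.isclose(b @ c, 1 / 2))        # order 2
print(np.isclose(b @ c**2, 1 / 3))     # order 3, quadrature condition
print(np.isclose(b @ (A @ c), 1 / 6))  # order 3, stage condition
\end{verbatim}
All four checks print \texttt{True}; the second tableau can be verified in
the same way.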
\input{theorems/stability-rk}

\begin{proof}
  Entering $f(u) = \lambda u$ into the definition of the stages $g_i$,
  we obtain the linear system
  \begin{gather*}
    g_i = y_0 + h \sum_{j=1}^s a_{ij}\lambda g_j, \quad i=1,\dots,s.
  \end{gather*}
  In matrix notation with $z=h\lambda$, we obtain
  $(\identity-z A) g = (y_0,\dots,y_0)^T$, where $g$ is the vector
  $(g_1,\dots,g_s)^T$. Likewise, we obtain
  \begin{align*}
    R(z) y_0 = y_1 &= y_0 + h \sum_{i=1}^s b_i \lambda g_i = y_0 + z
    b^T g\\
    &= y_0 + z b^T(\identity-z A)^{-1}
    \begin{pmatrix}
      y_0\\\vdots\\y_0
    \end{pmatrix}
    \\
    &= \left(1+z b^T(\identity-z A)^{-1}
      \begin{pmatrix}
        1\\\vdots\\1
      \end{pmatrix}
    \right) y_0.
  \end{align*}
  In order to prove the second representation, we write the whole Runge-Kutta method
  as a single system of equations of dimension $s+1$:
  \begin{gather*}
    \begin{pmatrix}
      I-z A & 0 \\
      -z b^T & 1
    \end{pmatrix}
    \begin{pmatrix}
      g \\ y_1
    \end{pmatrix}
    = y_0
    \begin{pmatrix}
      1\\\vdots\\1
    \end{pmatrix}.
  \end{gather*}
  Applying Cramer's rule yields the result.
\end{proof}

\input{theorems/stability-examples-rk}
\begin{figure}[tp]
  \centering
  \includegraphics[width=.3\textwidth]{fig/stability-RK2}
  \hfill
  \includegraphics[width=.3\textwidth]{fig/stability-RK4}
  \hfill
  \includegraphics[width=.3\textwidth]{fig/stability-DOPRI5}
  \caption{Stability regions of the modified Euler method, the
    classical Runge-Kutta method of order 4 and the Dormand/Prince
    method of order 5 (blue stable, red unstable)}
  \label{fig:implicit:stability-explicit-rk}
\end{figure}

\input{definitions/theta}

\begin{Theorem}{theta-a-stability}
  The $\theta$-scheme is A-stable for $\theta\ge 1/2$. Furthermore, if
  there exists a constant $c$ such that $\theta-1/2 \le ch$, the
  method is consistent of second order.
\end{Theorem}

\begin{proof}
  Left as a homework question. Additionally, we show stability regions
  for different $\theta$ in figure~\ref{fig:implicit:theta}.
  \begin{figure}[tp]
    \centering
    \includegraphics[width=.45\textwidth]{fig/stability-CR}
    \hfill
    \includegraphics[width=.45\textwidth]{fig/stability-theta6}

    \includegraphics[width=.45\textwidth]{fig/stability-theta7}
    \hfill
    \includegraphics[width=.45\textwidth]{fig/stability-euler2}
    \caption{Stability regions of the $\theta$-scheme with
      $\theta=0.5$ (Crank-Nicolson), $\theta=0.6$, $\theta=0.7$, and
      $\theta=1$ (implicit Euler).}
    \label{fig:implicit:theta}
  \end{figure}
\end{proof}

\subsection{Existence and uniqueness of discrete solutions}

While it was clear that the steps of an explicit Runge-Kutta method
can always be executed, implicit methods require the solution of a
possibly nonlinear system of equations. The solvability of such a
system is not always clear. We will investigate several cases here:
first, Lemma~\ref{Lemma:existence-implicit-1}, based on a Lipschitz
condition on the right hand side. Since this result suffers from a
severe step size constraint, we add
Lemma~\ref{Lemma:existence-implicit-2} for DIRK methods, based on right
hand sides with a one-sided Lipschitz condition. Finally, we present
Theorem~\ref{Theorem:existence-implicit-3} for general Runge-Kutta
methods with a one-sided Lipschitz condition.

\input{theorems/existence-implicit-1}

\begin{proof}
  We prove existence and uniqueness by a fixed-point argument.
  To this
  end, define the vectors
  $k^{(m)}=(k_1^{(m)},\dots,k_s^{(m)})^T\in \R^{s d}$ for $m=1,\dots$
  and the iteration $k^{(m)} = F(k^{(m-1)})$ by
  \begin{gather*}
    k_i^{(m)} = F_i(k^{(m-1)}) = f\left(
      t_0+c_i h, y_0+ h \sum_{j=1}^s a_{ij} k_j^{(m-1)}\right),
    \quad i=1,\dots,s.
  \end{gather*}
  Clearly the vector $k=(k_1,\dots,k_s)^T\in \R^{s d}$ of the
  Runge-Kutta method is a fixed-point of this iteration. Using on
  $\R^{s d}$ the norm $\norm{k} = \max_{i=1,\dots,s} \abs{k_i}$, where
  $\abs{.}$ is the regular Euclidean norm on $\R^d$, we obtain the
  estimate
  \begin{gather*}
    \norm{F(k_1)-F(k_2)}
    \le  \left(h L \max_{i=1,\dots,s} \sum_{j=1}^s \abs{a_{ij}}\right)
    \norm{k_1-k_2}.
  \end{gather*}
  Under assumption~\eqref{eq:existence-implicit-1:1}, the term in
  parentheses is strictly less than one, and thus the mapping $F$ is
  a contraction. The \putindex{Banach fixed-point theorem} yields
  unique existence.
\end{proof}
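The iteration from this proof can be used directly as a (slowly
converging) solver. A minimal sketch for the implicit midpoint rule
($s=1$, $a_{11}=1/2$, $b_1=1$, $c_1=1/2$) applied to $u'=-u$, where $L=1$
and the contraction condition requires $h<2$ (plain Python, no libraries
needed):
\begin{verbatim}
# Fixed-point iteration of the proof for the implicit midpoint rule
# applied to u' = f(u) = -u. Contraction requires
# h * L * max_i sum_j |a_ij| < 1; here L = 1, so we need h/2 < 1.
f = lambda u: -u
a11, h, y0 = 0.5, 0.5, 1.0

k = 0.0                      # initial guess for the single stage slope
for m in range(50):
    k = f(y0 + h * a11 * k)  # k^(m) = F(k^(m-1))

y1 = y0 + h * k              # = 0.6, close to exp(-0.5) = 0.6065...
print(y1)
\end{verbatim}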
\input{theorems/existence-implicit-2}

\begin{proof}
  The proof simplifies compared to the general case of an IRK, since
  each stage depends explicitly on the previous stages and implicitly
  only on itself.  Thus, we can write
  \begin{gather}
    \label{eq:implicit:18}
    g_i = y_0 + v_i + h a_{ii} f(g_i),
    \quad\text{with}\quad
    v_i = h \sum_{j=1}^{i-1} a_{ij} f(g_j).
  \end{gather}
  For linear IVP with a diagonalizable matrix $A$, we have
  \begin{gather*}
    \left(I-ha_{ii} A\right) g_i = y_0 + v_i,
  \end{gather*}
  and assumption~\eqref{eq:existence-implicit-2:1} implies that all
  eigenvalues of $\left(I-ha_{ii} A\right)$ have positive real part;
  thus, the inverse exists and we obtain a unique solution.

  In the nonlinear case, we use a homotopy argument. To this end, we
  introduce the parameter $\tau\in [0,1]$ and set up the family of
  equations
  \begin{gather*}
    g(\tau) = y_0 + \tau v_i + h a_{ii} f(g(\tau)) + (\tau-1) h a_{ii} f(y_0).
  \end{gather*}
  For $\tau=0$ this equation has the solution $g(0) = y_0$, and for
  $\tau=1$ the solution $g(1) = g_i$. By showing that
  $\frac{d}{d\tau}g$ is bounded, we conclude that a solution exists,
  since
  \begin{gather}
    \label{eq:implicit:19}
    g(1) = g(0) + \int_0^1 g'(s)\ds.
  \end{gather}
  There holds
  \begin{gather*}
    g'(\tau) = v_i + h a_{ii} \partial_y f g'(\tau) + h a_{ii} f(y_0).
  \end{gather*}
  Thus
  \begin{align*}
    \abs{g'}^2 &= \scal(g',{v_i + h a_{ii}f(y_0)})
    + h a_{ii} \scal(g', \partial_y f g')\\
    & \le \abs{g'}\abs{v_i + h a_{ii} f(y_0)}
    + h a_{ii} \nu \abs{g'}^2.
  \end{align*}
  Here, we used that by the mean value theorem there holds
  \begin{gather*}
    \scal(\partial_y f u,u) \le \nu \abs{u}^2,\quad\forall u\in \C^d.
  \end{gather*}
  Since by assumption $1-h a_{ii} \nu > 0$, we obtain
  \begin{gather*}
    \abs{g'} \le  \frac{\abs{v_i + h a_{ii} f(y_0)}}{1-h a_{ii} \nu}.
  \end{gather*}
  Thus, we have proved existence of the stages $g_i$. On the other
  hand, $y_1$ is just a fixed linear combination of these values, so
  it exists as well.
  Uniqueness follows immediately from A- or B-stability of the method.
\end{proof}

\input{theorems/existence-implicit-3}

\begin{proof}
  We omit the proof here and refer to~\cite[Theorem IV.14.2]{HairerWanner10}.
\end{proof}

\input{definitions/order-conditions}
\input{theorems/order-conditions}

\begin{proof}
  For the proof, we refer to~\cite[Ch. II, Theorem
  7.4]{HairerNorsettWanner93}. Here, we only observe that
  \begin{gather*}
    \int_0^1 t^{q-1} \dt = \frac1{q}, \qquad
    \int_0^{c_i} t^{q-1} \dt = \frac{c_i^{q}}{q}.
  \end{gather*}
  If we now insert the monomial $t^{q-1}$ at the points $c_i$ into the
  quadrature formula with the quadrature weights $\rkb_i$, then we
  obtain~\eqref{eq:implicit:8}. Similarly, we
  obtain~\eqref{eq:implicit:9} if we insert $t^{q-1}$ at the points
  $c_j$ into the quadrature formulas with weights
  $\rka_{ij}$, $j=1,\dots,s$. In both cases we carry this out for
  all monomials up to the desired degree.  Due to linearity of the
  formulas, exactness then holds for all polynomials up to this degree.
\end{proof}
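Conditions of this type are easy to test numerically. The following
sketch (assuming NumPy; the tableau is the 2-stage Gauß method discussed
below) verifies $\sum_i \rkb_i \rkc_i^{q-1} = 1/q$ for $q\le 4$ and
$\sum_j \rka_{ij}\rkc_j^{q-1} = \rkc_i^q/q$ for $q\le 2$:
\begin{verbatim}
import numpy as np

# Quadrature conditions B(4) and C(2) for the 2-stage Gauss tableau.
g = np.sqrt(3) / 6
c = np.array([0.5 - g, 0.5 + g])
b = np.array([0.5, 0.5])
A = np.array([[0.25, 0.25 - g],
              [0.25 + g, 0.25]])

for q in range(1, 5):   # B(4): sum_i b_i c_i^{q-1} = 1/q
    print(q, np.isclose(b @ c**(q - 1), 1 / q))

for q in range(1, 3):   # C(2): sum_j a_ij c_j^{q-1} = c_i^q / q
    print(q, np.allclose(A @ c**(q - 1), c**q / q))
\end{verbatim}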
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Methods based on quadrature and B-stability}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Gauss-, Radau-, and Lobatto-quadrature}

\begin{intro}
  In this subsection, we review some of the basic facts about quadrature
  formulas based on orthogonal polynomials.
\end{intro}

\begin{Definition}{gauss-quadrature}
  Let $L_n(t)$ be the Legendre polynomial of degree $n$ on $[0,1]$, up
  to scaling,
  \begin{gather*}
    L_n(t) = \frac{d^{n}}{dt^{n}}\bigl(t^n(t-1)^{n}\bigr).
  \end{gather*}
  A quadrature formula which uses the $n$ roots of $L_n$ as its
  quadrature points and the integrals of the Lagrange interpolating
  polynomials as its weights is called \define{Gauß quadrature}, more
  precisely, Gauß-Legendre quadrature.
\end{Definition}

\begin{Definition}{radau-lobatto-quadrature}
  The \define{Radau quadrature} formulas use one end point of the interval
  $[0,1]$ and the roots of orthogonal polynomials of degree $n-1$ as
  their abscissas. We distinguish left and right Radau quadrature
  formulas, depending on which end is included. \define{Lobatto quadrature}
  formulas use both end points and the roots of a polynomial of degree
  $n-2$. The polynomials are
  \begin{xalignat}2
    &\text{Radau left}
    & p_n(t) &= \frac{d^{n-1}}{dt^{n-1}}\bigl(t^n(t-1)^{n-1}\bigr), \\
    &\text{Radau right}
    & p_n(t) &= \frac{d^{n-1}}{dt^{n-1}}\bigl(t^{n-1}(t-1)^{n}\bigr), \\
    &\text{Lobatto}
    & p_n(t) &= \frac{d^{n-2}}{dt^{n-2}}\bigl(t^{n-1}(t-1)^{n-1}\bigr).
  \end{xalignat}
\end{Definition}

\begin{Lemma}{gauss-quadrature}
  A Gauß quadrature formula with $n$ points is exact for polynomials
  of degree $2n-1$. A Radau quadrature formula with $n$ points is
  exact for polynomials of degree $2n-2$. A Lobatto quadrature formula
  with $n$ points is exact for polynomials of degree $2n-3$. The
  quadrature weights of these formulas are positive.
\end{Lemma}

\subsection{Collocation methods}
\begin{intro}
  An alternative to solving the IVP at individual points in time is to
  develop methods which first approximate the solution by a simpler
  function, for example a polynomial.

  Polynomials are especially suitable for computation on
  computers. They are, however, not suited for high-order interpolation
  on large intervals. Therefore, we apply them not on the entire
  interval but rather on subintervals. The subintervals correspond
  to the time steps which we used until now.
\end{intro}

\input{definitions/collocation}
\input{theorems/collocation}

\begin{proof}
  The polynomial $y'(t)$ is of degree $\rks-1$ and therefore uniquely
  defined through the $\rks$ interpolation conditions in
  equation~\eqref{eq:implicit:12}.  We set $y'(t_0 + \rkc_i h) =
  f\bigl(t_0+\rkc_i h, y(t_0+\rkc_i h)\bigr) = k_i$, such that we have
  \begin{gather}
    y'(t_0+t h) = \sum\limits_{j=1}^\rks k_j \cdot L_j(t)
  \end{gather}
  with the Lagrange interpolation polynomials $L_j(t)$.
  By integration we obtain
  \begin{gather}
    g_i = y(t_0 + \rkc_i h)
    = y_0 + h \int\limits_0^{\rkc_i} y'(t_0 + t h) \dt
    = y_0+h \sum_{j=1}^s k_j \int_0^{\rkc_i} L_j(t)\dt,
  \end{gather}
  which defines the coefficients $\rka_{ij}$ by comparison
  with~\eqref{eq:implicit:1a}.  If we integrate up to one instead of
  up to $c_i$, then we obtain the coefficients $\rkb_j$ by comparison
  with~\eqref{eq:implicit:1c}.
\end{proof}


\input{theorems/collocation-equivalence}

\begin{proof}
  Condition $C(s)$ from~\eqref{eq:implicit:9} results in
  $s^2$ interpolation conditions for the $s^2$ coefficients $a_{ij}$.
  Therefore these coefficients are uniquely determined. On the other
  hand,~\eqref{eq:implicit:9} yields for $q<s$:
  \begin{gather*}
    \sum_{j=1}^s \rka_{ij} \rkc_j^q = \frac{\rkc_i^{q+1}}{q+1} =
    \int_{0}^{\rkc_i} t^q \dt.
  \end{gather*}
  As a consequence of linearity we have
  \begin{gather*}
    \sum_{j=1}^s \rka_{ij} p(\rkc_j) = \int_{0}^{\rkc_i} p(t)
    \dt,\qquad\forall p\in \Pol_{s-1}.
  \end{gather*}
  Applying this to the Lagrange interpolation polynomials $L_j(t)$, we
  obtain the coefficients of equation~\eqref{eq:implicit:kolkoef},
  which were in turn computed from the collocation polynomial.
\end{proof}
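The construction of the coefficients from integrals of Lagrange
polynomials is easily reproduced numerically. A sketch (assuming NumPy;
the nodes are the 2-stage Gauß points, and the reference values are the
corresponding known tableau):
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

# Recover the Runge-Kutta coefficients of a collocation method from
#   a_ij = int_0^{c_i} L_j(t) dt,   b_j = int_0^1 L_j(t) dt.
g = np.sqrt(3) / 6
c = np.array([0.5 - g, 0.5 + g])
s = len(c)

A = np.zeros((s, s))
b = np.zeros(s)
for j in range(s):
    # Lagrange polynomial L_j with L_j(c_j) = 1 and L_j(c_i) = 0, i != j
    Lj = P.polyfromroots(np.delete(c, j))
    Lj = Lj / P.polyval(c[j], Lj)
    Ij = P.polyint(Lj)              # antiderivative with value 0 at 0
    b[j] = P.polyval(1.0, Ij)
    A[:, j] = P.polyval(c, Ij)

print(np.allclose(b, [0.5, 0.5]))   # Gauss-2 weights
print(np.allclose(A, [[0.25, 0.25 - g], [0.25 + g, 0.25]]))
\end{verbatim}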
\input{theorems/collocation-order}

\begin{proof}
  The condition on $\pi$ implies that the quadrature rule is exact for
  polynomials of degree $s+r-1$, thus $B(s+r)$ holds.
  We have already shown in the proof of
  Lemma~\ref{Lemma:collocation-equivalence} that $C(s)$
  holds. Therefore, it remains to show $D(r)$.

  First, we observe that, by $C(s)$ and then $B(s+r)$, for
  any $p\le s$ and $q\le r$ there holds
  \begin{gather*}
    \sum_{i=1}^s\sum_{j=1}^s b_i c_i^{q-1} a_{ij} c_j^{p-1}
    = \frac1p \sum_{i=1}^s b_i c_i^{p+q-1}
    = \frac1{p(p+q)}.
  \end{gather*}
  Furthermore, by $B(s+r)$ we have for the same $p$ and $q$:
  \begin{gather*}
    \frac1q\sum_{j=1}^s\bigl(b_j c_j^{p-1} - b_j c_j^{p+q-1}\bigr)
    = \frac1q\left(\frac1p - \frac1{p+q}\right)
    = \frac1{p(p+q)}.
  \end{gather*}
  Subtracting these two identities yields
  \begin{gather*}
    0 = \frac{1}{p(p+q)}-\frac{1}{p(p+q)} =
    \sum_{j} c_j^{p-1} \underbrace{\left(
      \sum_i b_i c_i^{q-1}a_{ij} - \frac1q b_j \bigl(1-c_j^q\bigr)
    \right)}_{=: \xi_j}.
  \end{gather*}
  This holds for $p=1,\dots,s$ and thus amounts to a homogeneous,
  linear system for the variables $\xi_j$ whose Vandermonde-type matrix
  is invertible, since the collocation points $c_j$ are distinct. Thus
  $\xi_j=0$ for all $j$, which is $D(r)$, and the theorem holds.
\end{proof}

\begin{corollary}
  An $\rks$-stage collocation method is at least of order
  $\rks$ and at most of order $2\rks$.
\end{corollary}

\begin{proof}
  The polynomial $\pi(t)$ in~\eqref{eq:implicit:14} is of degree
  $\rks$. As a result, it can at best be orthogonal to all polynomials
  of degree $\rks-1$; otherwise it would have to be orthogonal to
  itself. The transformed Legendre polynomial of degree $\rks$ on
  the interval $[0,1]$ satisfies this condition, and
  theorem~\ref{Theorem:collocation-order} holds true with
  $r=s$. If, on the other hand, $\pi(t)$ is not orthogonal to the
  constants, the theorem only holds true with $r=0$.
\end{proof}


\input{theorems/collocation-continuous}

\begin{proof}
  $y'$ is the interpolation polynomial of degree $s-1$ of $u'$ in the
  interpolation points $t_0+c_1 h, \dots, t_0+c_s h$. There holds
  \begin{gather*}
    \max_{t \in [t_0, t_1]} |y'(t) - u'(t)| \le c \max_{t \in [t_0, t_1]}
    |u^{(s+1)} (t)| \cdot h^s.
  \end{gather*}
  Since $y(t_0) = u(t_0) = y_0$, we now write
  \begin{gather*}
    |y(t) - u(t)| = \left|\int_{t_0}^t \bigl(y'(\tau) - u'(\tau)\bigr)
      \diffd \tau\right|
    \le (t-t_0)\, c h^s \max_{t \in [t_0, t_1]} |u^{(s+1)} (t)|
    \le ch^{s+1} \max_{t \in [t_0, t_1]} |u^{(s+1)} (t)|.
  \end{gather*}
  Since taking a derivative loses one order, we obtain
  \begin{gather*}
    \max_{t \in [t_0, t_1]} |y^{(k)}(t) - u^{(k)}(t)| \le ch^{s-k+1}
    \max_{t \in [t_0, t_1]} |u^{(s+1)} (t)|.
  \end{gather*}
  Defining $C = c \max_{t \in [t_0, t_1]} |u^{(s+1)} (t)|$ yields the desired result.
\end{proof}


\input{definitions/gauss-collocation}

\begin{Example*}{Gauss-collocation-2-3}{2- and 3-stage Gauss collocation methods}
  \mbox{}
  \begin{minipage}[t]{.4\linewidth}
    \input{tables/gauss2}
  \end{minipage}
  \begin{minipage}[t]{.59\linewidth}
    \input{tables/gauss3}
  \end{minipage}
  The two- and three-stage Gauß collocation methods were described by
  Hammer and Hollingsworth and by Kuntzmann and Butcher;
  see~\cite[Tables 7.3, 7.4]{HairerNorsettWanner93}.
\end{Example*}

\input{theorems/gauss-consistency}

\begin{proof}
  Gauß quadrature is exact for polynomials of degree $2s-1$, and the
  polynomial $\pi$ in theorem~\ref{Theorem:collocation-order} is
  orthogonal to all polynomials of degree $s-1$. Therefore, the same
  theorem yields that the method is of order $2s$.
  The same proof applies to Radau and Lobatto
  quadrature rules with their reduced orders.
\end{proof}

\begin{Theorem}{gauss-stability}
% HW p. 181 Example 12.3
  Collocation methods with Gauß, right Radau, and Lobatto quadrature are
  \putindex{B-stable}. The stability region of Gauß collocation is
  exactly the left half-plane of $\C$.
\end{Theorem}

\begin{proof}
  We only prove the theorem for Gauß collocation, where the proof is
  simple and instructive. The proof for Radau and Lobatto collocation
  can be found in~\cite{HairerWanner10}.

  Let $y(t)$ and $z(t)$ be the collocation polynomials according
  to~\eqref{eq:implicit:13} with respect to the initial values $y_0$
  and $z_0$, respectively. Analogous to the proof of
  theorem~\ref{Theorem:stability-monotonic}, we introduce the auxiliary
  function $m(t) = | z(t)-y(t)|^2$. In the collocation points
  $\xi_i = t_0 + \rkc_i h$, there holds
  \begin{multline}
    \label{eq:implicit:20}
    m'(\xi_i) = 2 \Re\left<
      z'(\xi_i)-y'(\xi_i),z(\xi_i)-y(\xi_i)\right>
    \\
    = 2 \Re\left<f(\xi_i,z(\xi_i)) -
      f(\xi_i,y(\xi_i)),z(\xi_i)-y(\xi_i)\right> \le 0.
  \end{multline}
  Since Gauß quadrature is exact for polynomials of degree $2s-1$, we have
  \begin{multline*}
    |z_1-y_1|^2 = m(t_0+h) = m(t_0) + \int_{t_0}^{t_0+h} m'(t)\dt
    \\
    = m(t_0)+h \sum_{i=1}^s b_i m'(\xi_i) \le m(t_0) =  |z_0-y_0|^2.
  \end{multline*}
  Applying this argument to $f(t,u) = \lambda u$, we see
  from~\eqref{eq:implicit:20} that
  \begin{gather*}
    m'(t) = 2\Re(\lambda) \abs{z(t)-y(t)}^2,
  \end{gather*}
  which proves the statement about the stability domain.
\end{proof}

\begin{Example*}{Radau-right-2-3}{2- and 3-stage right Radau collocation methods}
  \mbox{}
  \begin{minipage}[t]{.25\linewidth}
    \input{tables/radau2}
  \end{minipage}
  \begin{minipage}[t]{.58\linewidth}
    \input{tables/radau3}
  \end{minipage}
\end{Example*}

\begin{remark}
  The Radau collocation methods with the right end point of the interval
  $[0,1]$ included in the quadrature set are L-stable. The stability
  regions of the first three are shown in
  Figure~\ref{fig:radau-collocation}.

  Observe that the stability domains grow with the order of the
  method. Also, observe that the computation of $y_1$ coincides with
  that of $g_s$, so a few operations can be saved.
\end{remark}

\begin{figure}[tp]
  \centering
  \includegraphics[width=.3\textwidth]{fig/stability-radau1}
  \hfill
  \includegraphics[width=.3\textwidth]{fig/stability-radau2}
  \hfill
  \includegraphics[width=.3\textwidth]{fig/stability-radau3}
  \caption{Stability domains of right Radau collocation methods with
    one (implicit Euler), two, and three collocation points (left to
    right). Note the different scaling of the coordinate axes in
    comparison with previous figures.}
  \label{fig:radau-collocation}
\end{figure}
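Both statements of the theorem, as well as the L-stability of the right
Radau methods, can be checked numerically through the stability function
$R(z)$ derived earlier. A sketch (assuming NumPy; the tableaux are the
standard 2-stage Gauß and 2-stage right Radau methods):
\begin{verbatim}
import numpy as np

# R(z) = 1 + z b^T (I - zA)^{-1} (1,...,1)^T for a given tableau.
def R(z, A, b):
    s = len(b)
    return 1 + z * (b @ np.linalg.solve(np.eye(s) - z * A, np.ones(s)))

g = np.sqrt(3) / 6
A_gauss = np.array([[0.25, 0.25 - g], [0.25 + g, 0.25]])
b_gauss = np.array([0.5, 0.5])
A_radau = np.array([[5/12, -1/12], [3/4, 1/4]])   # 2-stage right Radau
b_radau = np.array([3/4, 1/4])

print(abs(R(5j, A_gauss, b_gauss)))   # = 1: boundary is the imaginary axis
for x in [-1e2, -1e4, -1e6]:
    print(abs(R(x, A_gauss, b_gauss)),  # -> 1: A-stable, not L-stable
          abs(R(x, A_radau, b_radau)))  # -> 0: L-stable
\end{verbatim}
The printed Gauß values tend to $1$ and the Radau values to $0$, matching
the stability domains in the figures.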
\section{Considerations on implementation}

\begin{intro}
  Implicit Runge-Kutta methods require the solution of a nonlinear
  system of size $\rks\cdot d$, where $\rks$ is the number of stages
  and $d$ the dimension of the system of ODE. DIRK methods are simpler
  and only require the solution of a system of dimension $d$. Thus, we
  should prefer this class of methods, were it not for the following
  order barrier.
\end{intro}

\begin{Theorem}{DIRK-order-barrier}
  A B-stable DIRK method has at most order 4.
\end{Theorem}

\begin{proof}
  See~\cite[Theorem 13.13]{HairerWanner10}.
\end{proof}

\begin{remark}
  In each step of an IRK, we have to solve a (non-)linear system for
  the quantities $g_i$. First, we note that in order to reduce
  round-off errors, it is advantageous to solve for $z_i = g_i-y_0$,
  since, especially for small time steps, $z_i$ is expected to be much
  smaller than $g_i$. Thus, we have to solve the system
  \begin{gather}
    \label{eq:implicit:21}
    z_i = h \sum_{j=1}^{\rks} a_{ij} f(t_0+c_jh, y_0+z_j),
    \quad
    i=1,\dots,\rks.
  \end{gather}
  Using the Runge-Kutta matrix $A$, we rewrite this as
  \begin{gather}
    \label{eq:implicit:22}
    \begin{pmatrix}
      z_1\\\vdots\\z_\rks
    \end{pmatrix}
    = A
    \begin{pmatrix}
      hf(t_0+c_1h, y_0+z_1)
      \\\vdots\\
      hf(t_0+c_\rks h, y_0+z_\rks)
    \end{pmatrix}.
  \end{gather}
  The latter can be used to avoid additional function evaluations by
  computing
  \begin{gather}
    \label{eq:implicit:23}
    y_1 = y_0 + b^TA^{-1}z,
  \end{gather}
  which again is numerically much more stable than evaluating $f$ with
  a possibly large Lipschitz constant.
\end{remark}
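A minimal sketch of one step in this formulation (assuming NumPy; the
2-stage Gauß tableau and the autonomous test problem $u'=-u^3$ are
arbitrary choices, and the nonlinear system~\eqref{eq:implicit:21} is
solved by a plain fixed-point iteration, where a production code would
use a Newton-type method):
\begin{verbatim}
import numpy as np

g = np.sqrt(3) / 6
A = np.array([[0.25, 0.25 - g], [0.25 + g, 0.25]])
b = np.array([0.5, 0.5])
f = lambda u: -u**3   # autonomous, so the time arguments are omitted

h, y0 = 0.1, 1.0
z = np.zeros(2)
for _ in range(50):
    z = h * (A @ f(y0 + z))          # the system for z_1, ..., z_s

y1 = y0 + b @ np.linalg.solve(A, z)  # y_1 = y_0 + b^T A^{-1} z
print(y1)  # close to the exact value (1 + 2h)^(-1/2) = 0.91287...
\end{verbatim}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "notes"
%%% End: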
{"text": "\\documentclass[11pt,reqno]{amsart}\n\\usepackage{amsfonts,amssymb,amscd,amsmath,mathrsfs,amsthm}\n\\usepackage[pdftex]{graphicx}\n\n\\setlength{\\oddsidemargin}{0.25in}  %please do not change\n\\setlength{\\evensidemargin}{0.25in} %please do not change\n\\setlength{\\marginparwidth}{0in} %please do not change\n\\setlength{\\marginparsep}{0in} %please do not change\n\\setlength{\\marginparpush}{0in} %please do not change\n\\setlength{\\topmargin}{0in} %please do not change\n\n\\setlength{\\footskip}{.3in} %please do not change\n\\setlength{\\textheight}{8.75in} %please do not change\n\\setlength{\\textwidth}{6in} %please do not change\n\\setlength{\\parskip}{4pt} %please do not change\n\n\\begin{document}\n\n\\section{Typesetting mathematical formulas}\n\n\\begin{enumerate}\n\n\\item  Typeset\n\\[\n\\delta(x) = \\int_1^{15} \\cos x\\frac{ x\\sin x + 4}{x^2-90} dx \n\\]\n\n\\item Typeset\n\\begin{align}\n\\Gamma(x) = \\sum_{j=0}^\\infty \\binom{j}{2} x^j\n\\end{align}\n\n\\item Typeset $x\\in \\mathbb{R}$, $y\\in \\mathbb{C}$. \n\n\\item Typeset\n\\begin{align*}\nf(x) &= x^n e^{-x^2 \\sin x}\\\\\ng(x) &= x^{2n} + f\\left( \\frac{\\sqrt{x} }{\\cos x} \\right)\n\\end{align*}\n\n\\item Typeset (matrix with [] or bmatrix if amsmath is used)\n\\[\nA = \\left[ \\begin{matrix} a & b \\\\ c & d \\end{matrix} \\right]\n\\]\n\n\\item Typeset\n\\[\nf(x) = \\begin{cases}\n\\displaystyle \\int_1^{30} e^{-xt} \\sinh(t) dt\\text{ if }x>0,\\\\[1.5ex]\n\\displaystyle \\int_{-30}^0 e^{-xt} \\cosh(t) \\text{ if }x\\le 0.\n\\end{cases}\n\\]\n\n\\item Typeset (note the position of the equation number)\n\\begin{align}\n\\begin{split}\nk_n & = \\sum_{\\substack{k\\in\\mathbb{Z} \\\\ k\\neq 0}} k^{-n} (k+n!)\\\\\n\\ell_n & = \\sum_{\\substack{ k\\in\\mathbb{N} \\\\ k>20}} k^{-n} (k+n!)\n\\end{split}\n\\end{align}\n\n\\item Typeset: Define the set $A_n = \\{ 1, \\dots ,n\\}$ and the matrix\n\\[\nB = \\begin{pmatrix}\n a_{1,1} & \\dots & a_{1,n} \\\\\n a_{2,1} & \\dots & a_{2,n} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n a_{n1,} & \\dots & a_{n,n}\n\\end{pmatrix}\n\\]\n\n\\item Typeset\n\\[\n C = \\overset{\\circ}{\\bigcup_{j\\ge 1}} A_j\n\\]\n\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "2872d64e16d46954e123e631a5f5006f1aba58de", "size": 1981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "capstone/script2math.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "capstone/script2math.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "capstone/script2math.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.0759493671, "max_line_length": 69, "alphanum_fraction": 0.6491670873, "num_tokens": 806, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7718434873426303, "lm_q2_score": 0.7310585844894971, "lm_q1q2_score": 0.5642628073041404}}
{"text": "\\documentclass[a4paper]{article}\n\n%% Language and font encodings\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\n%% Sets page size and margins\n\\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}\n\n%% Useful packages\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage[colorlinks=true, allcolors=blue]{hyperref}\n\\usepackage{neuralnetwork}\n\\usepackage{sidecap}\n\\usepackage{hyperref}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\n\\newcommand{\\setreal}{\\mathbb{R}}\n\n\\title{Idea: Adaption of theory to networks with biases}\n\\author{Leander Kurscheidt}\n\n\\begin{document}\n\\maketitle\n\\begin{abstract}\n\\end{abstract}\n\\section{Affine Transformations}\n\nSince our NN \"somewhat\" resembles linear functions following from the rule $\\sigma(x)=\\sigma'(x)x$ (i see this rule as separating the linear from the non-linear part),\ni am curious to see whether ideas of affine spaces and how to integrate them into linear-spaces translate.\n\nI think they do.\n\n\\section{Idea}\n\nAffine subspaces are usually define via $A=v + U_V$, where $U_V$ is a subspace of $V$ and $v$ is a vector of $V$ (see \\href{https://de.wikipedia.org/wiki/Affiner_Unterraum}{(german) wikipedia}). But you can work in them using \\href{https://en.wikipedia.org/wiki/Homogeneous_coordinates}{homogenous coordinates}.\n\nEvery affine transformation can be turned into a linear transformation with a constant 1-input using the \\href{https://en.wikipedia.org/wiki/Augmented_matrix}{Augmented Matrix} trick.\\\\\nFor example the intercept in linear models in statistics is often modeled this way. \\\\\nWe can adapt this trick with only one additional constant input using this matrix:\n\\[\nW'=\\begin{bmatrix}\n    w_{11}       & w_{12} & w_{13} & \\dots & w_{1n} & bias_1\\\\\n    w_{21}       & w_{22} & w_{23} & \\dots & w_{2n} & bias_2\\\\\n    \\hdotsfor{6} \\\\\n\tw_{d1}       & w_{d2} & w_{d3} & \\dots & w_{dn} & bias_n\\\\\n\t0 & \\hdotsfor{3} & 0 & 1\n\\end{bmatrix}\n= \n\\begin{bmatrix}\n    W & & & b\\\\\n\t0 & \\dots & 0 & 1\n\\end{bmatrix}\n\\]\\\\\nthe last line is the difference to the usual construction and should be ommitted in the last layer.\\\\\n\nThis results in:$f'_\\theta(x)=f_{\\theta'}(\\begin{bmatrix}x\\\\1\\end{bmatrix})$\\footnote{slight abuse of notation. We define $\\begin{bmatrix}x\\\\1\\end{bmatrix}:=\\begin{bmatrix}x_1\\\\\\vdots\\\\x_n\\\\1\\end{bmatrix}$},\\\\\nwhere $\\theta'$ is the parameter adpated in the above schema.\\\\\n\nUsually proofs get a lot simple when you just have to wory about keeping one input constant. I think a lot of the proofs from the paper should still hold, for example Leamma 2.1 doesn't change. 
\pagebreak
\section{Proof}

In the following section, neural networks with biases are defined as
\begin{align*}
f_{\theta}(x)=\sigma_{L+1}(\sigma_L(\dots \sigma_2(\sigma_1(xW^0 + b^0)W^1 + b^1)W^2 +b^2)\dots W^L + b^L).
\end{align*}
We call $b^i$ a bias and additionally need the following constraint on the activation functions $\sigma_i$:\\
\begin{align*}
\forall i \in \{L+1, \dots, 1\}.\ \exists c_i.\ \sigma_i(c_i)=1
\end{align*}
For $f_{\theta}$ with $\theta = (W^0, W^1,\dots, W^L)$ we define:
\begin{flalign*}
            &f_{0,\theta^0}(z) := \sigma_1(zW'^0)\text{, where } \theta^0 := (W'^0) \\
            &f_{L,\theta^L} \text{, so that}\\
            &f_{\theta} = f_{L,\theta^L} \circ f_{0,\theta^0}
            \text{ and }\\
            &\theta^L := (W^1,\dots, W^L)
\end{flalign*}

\theoremstyle{definition}
\begin{definition}{Homogeneous Coordinates for Neural Networks}\\
Let $f_{\theta}$ be a neural network with biases. Then the augmented neural network $f_{\theta'}$ is a neural network as defined in Definition 1, (2.1) in [TODO: ref], where\\
$\theta' = (W'^0, \dots, W'^L)$\\
and\\
\[
W'^i=\begin{bmatrix}
    w_{11}^i       & w_{12}^i & w_{13}^i & \dots & w_{1k_i}^i & b_1^i\\
    w_{21}^i       & w_{22}^i & w_{23}^i & \dots & w_{2k_i}^i & b_2^i\\
    \hdotsfor{6} \\
	w_{k_{i+1}1}^i       & w_{k_{i+1}2}^i & w_{k_{i+1}3}^i & \dots & w_{k_{i+1}k_i}^i & b_{k_{i+1}}^i\\
	0 & \hdotsfor{3} & 0 & c_i
\end{bmatrix}
=:
\begin{bmatrix}
    W^i & & & b^i\\
	0 & \dots & 0 & c_i
\end{bmatrix} \in \mathbb{R}^{k_i + 1, k_{i + 1} + 1}  \forall i \in \{0, \dots (L-1)\}
\]\\
\[
W'^i=\begin{bmatrix}
    w_{11}^i       & w_{12}^i & w_{13}^i & \dots & w_{1k_i}^i & b_1^i\\
    w_{21}^i       & w_{22}^i & w_{23}^i & \dots & w_{2k_i}^i & b_2^i\\
    \hdotsfor{6} \\
	w_{k_{i+1}1}^i       & w_{k_{i+1}2}^i & w_{k_{i+1}3}^i & \dots & w_{k_{i+1}k_i}^i & b_{k_{i+1}}^i\\
\end{bmatrix}
=:
\begin{bmatrix}
    W^i & b^i
\end{bmatrix} \in \mathbb{R}^{k_i + 1, K}  \text{ with } i = L
\]
\end{definition}

\begin{theorem}{Equality of the augmented NN}\\
For all NN with biases $f_{\theta}$:\\
$f'_{\theta}(x)=f_{\theta'}(\begin{bmatrix}x\\1\end{bmatrix})$
\end{theorem}

\begin{proof}
    Via induction over $L$.\\
    \begin{enumerate}
        \item For $L=0$:\\
        \begin{align}
            \Theta'&=(W'^0) \\
            W'^0 &= \begin{bmatrix}
                W^0 & b^0
            \end{bmatrix} \text{ because } 0=L \\
            f_{\theta'}(\begin{bmatrix}x\\1\end{bmatrix}) &= \sigma_1(\begin{bmatrix}x\\1\end{bmatrix}W'^0) \\
            &= \sigma_1(\begin{bmatrix}x\\1\end{bmatrix}\begin{bmatrix}
                W^0 & b^0
            \end{bmatrix})\\
            &= \sigma_1(xW^0 + b^0)\\
            &= f_{\theta}(x)
        \end{align}
        \item Induction hypothesis: Theorem 3.1 holds for networks of depth $L$, i.e.\ for $f_{L,\theta'}$.
        \item $L \rightarrow (L+1)$:\\
        \begin{align}
            \theta' &= (W'^0, W'^1,\dots, W'^{(L+1)}) \\
            W'^{0} &= \begin{bmatrix}
                W^0 & & & b^0\\
                0 & \dots & 0 & c_0
            \end{bmatrix}
\\text{ because } 0\\neq(L+1)\\\\\n            \\\\\n            f_{0,\\theta'^0}(\\begin{bmatrix}x\\\\1\\end{bmatrix}) &= \\sigma_1(\\begin{bmatrix}x\\\\1\\end{bmatrix}W'^0)\\\\\n            &=\\begin{bmatrix}\n                \\sigma_1(\\begin{bmatrix}x\\\\1\\end{bmatrix}\\begin{bmatrix}\n                    W^0 & & & b^0\\\\\n                \\end{bmatrix})\\\\\n                \\sigma_1(1*c_0)\n            \\end{bmatrix}\\\\\n            &=\\begin{bmatrix}\n                f_{0,\\theta^0}(x)\\\\\n                1\n            \\end{bmatrix}\\\\\n            f_{\\theta'}(\\begin{bmatrix}x\\\\1\\end{bmatrix}) &=(f_{L,\\theta'^L} \\circ f_{0,\\theta'^0}) (\\begin{bmatrix}x\\\\1\\end{bmatrix})\\\\\n            &= f_{\\theta'^L}(\\begin{bmatrix}\n                f_{\\theta^0}(x)\\\\\n                1\n            \\end{bmatrix}) & \\text{using (12)}\\\\\n            &= (f_{L,\\theta^L} \\circ f_{0,\\theta^0})(x) & \\text{induction hypothesis}\\\\\n            &= f_{\\theta}(x)\n        \\end{align}\n    \\end{enumerate}\n\\end{proof}\n\nLet $g(x) := \\begin{bmatrix}x\\\\1\\end{bmatrix}$ (which is continous). Then, following Theorem 3.1:\\\\\nFor all NN with Bias $f_{\\theta}$:$f'_{\\theta}=f_{\\theta'} \\circ g$\n\n\\begin{theorem}{Transformation of the Data-Distribution}\\\\\nLet $P_{data}$ be Data-Distribution, then $P'_{data} = g(P_{data})$ is the transformed Data-Distribution.\\footnote{The question now arises: What is the distribution of $P'_{data}$? For this innocent and trivial transformation, the answer is harder than it might seem. It is a degenerate distribution and $P'_{data} \\sim \\begin{bmatrix}P_{data}\\\\Dirac\\end{bmatrix}$, where $Dirac$ is the Dirac-Distribution.}\nThen:\\\\\n$E_{x \\sim P_{data}}f_{\\theta}(x) = E_{x \\sim P'_{data}}f_{\\theta'}(x)$\n\\end{theorem}\n\n\\begin{proof}\n    \\begin{align}\n        E_{x \\sim P_{data}}f_{\\theta}(x) &= E_{x \\sim P_{data}} (f_{\\theta'} \\circ g)(x)\\\\\n        &= E_{x \\sim P'_{data}}f_{\\theta'}(x)\n    \\end{align}\n    (18) follows from \\textit{Law of the unconscious statistician}.\n\\end{proof}\n\nThe Rest of the proofs from the \"Fisher-Rao Metric, Geometry ...\"-paper should now follow from Theorem 3.1 and 3.2.\n\n\n\\bibliographystyle{alpha}\n\\bibliography{sample}\n\n\\end{document}", "meta": {"hexsha": "652d873682d435a63e3d2f252ee5000d6b675d81", "size": 8265, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2_fisher_rao_norm/leander_idea.tex", "max_stars_repo_name": "ML-KA/PDG-Theory", "max_stars_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-07-19T20:29:51.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-18T13:09:46.000Z", "max_issues_repo_path": "2_fisher_rao_norm/leander_idea.tex", "max_issues_repo_name": "ML-KA/PDG-Theory", "max_issues_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-10-04T13:36:53.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-04T13:36:53.000Z", "max_forks_repo_path": "2_fisher_rao_norm/leander_idea.tex", "max_forks_repo_name": "ML-KA/PDG-Theory", "max_forks_repo_head_hexsha": "dbbdf93098af3a201bf67449a29c15cd633e2430", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-07-21T19:33:48.000Z", "max_forks_repo_forks_event_max_datetime": "2018-07-21T19:33:48.000Z", 
"avg_line_length": 40.9158415842, "max_line_length": 407, "alphanum_fraction": 0.6120992136, "num_tokens": 2960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7718434925908525, "lm_q1q2_score": 0.5642628066182873}}
{"text": "\\section{PGM to CNF}\n\n\\subsection{ENC 1}\nOur ENC1 encoding for the Cancer Bayesian network can be found in appendix~\\ref{ENC1}. The CNF in dimacs format can be found under report/encodings/cancer/.\n\n\\subsection{ENC 2}\nOur ENC2 encoding for the Cancer Bayesian network can be found in appendix~\\ref{ENC2}. The CNF in dimacs format can be found under report/encodings/cancer/.\n\n\n\n\\section{SRL to CNF}\n\\subsection{Encoding of Monty Hall as CNF}\nAn encoding of problog programs can be generated by our program as follows:\n\\begin{lstlisting}\npython3 scripts/inference.py --problog files/problog/monty_hall.pl\n\\end{lstlisting}\nThe CNF will be shown using the program's predicates. A version of the CNF in dimacs format will be shown as well.\nSee \\texttt{README.MD} for more information.\n\nOur CNF encoding for the given Monty Hall ProbLog program is:\n\\begin{align*}\n    \\land & (open\\_door(2) \\lor prize(2) \\lor prize(3) \\lor \\neg p\\_open\\_door(2)\\_0) \\\\\n    \\land & (open\\_door(2) \\lor prize(2) \\lor \\neg prize(3))                          \\\\\n    \\land & (\\neg open\\_door(2) \\lor \\neg prize(2) \\lor \\neg prize(2))                \\\\\n    \\land & (\\neg open\\_door(2) \\lor \\neg prize(2) \\lor prize(3))                     \\\\\n    \\land & (\\neg open\\_door(2) \\lor \\neg prize(3) \\lor \\neg prize(2))                \\\\\n    \\land & (\\neg open\\_door(2) \\lor \\neg prize(3) \\lor prize(3))                     \\\\\n    \\land & (\\neg open\\_door(2) \\lor p\\_open\\_door(2)\\_0 \\lor \\neg prize(2))          \\\\\n    \\land & (\\neg open\\_door(2) \\lor p\\_open\\_door(2)\\_0 \\lor prize(3))               \\\\\n    \\land & (open\\_door(3) \\lor prize(2) \\lor prize(3) \\lor \\neg p\\_open\\_door(3)\\_0) \\\\\n    \\land & (open\\_door(3) \\lor prize(3) \\lor \\neg prize(2))                          \\\\\n    \\land & (\\neg open\\_door(3) \\lor \\neg prize(2) \\lor \\neg prize(3))                \\\\\n    \\land & (\\neg open\\_door(3) \\lor \\neg prize(2) \\lor prize(2))                     \\\\\n    \\land & (\\neg open\\_door(3) \\lor \\neg prize(3) \\lor \\neg prize(3))                \\\\\n    \\land & (\\neg open\\_door(3) \\lor \\neg prize(3) \\lor prize(2))                     \\\\\n    \\land & (\\neg open\\_door(3) \\lor p\\_open\\_door(3)\\_0 \\lor \\neg prize(3))          \\\\\n    \\land & (\\neg open\\_door(3) \\lor p\\_open\\_door(3)\\_0 \\lor prize(2))               \\\\\n    \\land & (win\\_keep \\lor \\neg prize(1))                                            \\\\\n    \\land & (\\neg win\\_keep \\lor prize(1))                                            \\\\\n    \\land & (win\\_switch \\lor \\neg prize(2) \\lor open\\_door(2))                       \\\\\n    \\land & (win\\_switch \\lor \\neg prize(3) \\lor open\\_door(3))                       \\\\\n    \\land & (\\neg win\\_switch \\lor prize(2) \\lor prize(3))                            \\\\\n    \\land & (\\neg win\\_switch \\lor prize(2) \\lor \\neg open\\_door(3))                  \\\\\n    \\land & (\\neg win\\_switch \\lor \\neg open\\_door(2) \\lor prize(3))                  \\\\\n    \\land & (\\neg win\\_switch \\lor \\neg open\\_door(2) \\lor \\neg open\\_door(3))        \\\\\n    \\land & (\\neg prize(1) \\lor \\neg prize(2))                                        \\\\\n    \\land & (\\neg prize(1) \\lor \\neg prize(3))                                        \\\\\n    \\land & (\\neg prize(2) \\lor \\neg prize(3))                                        
\\\\\n    \\land & (prize(1) \\lor prize(2) \\lor prize(3))\\\\\n    & \\textbf{Weights:}             \\\\\n    & W(p\\_open\\_door(2)\\_0) = 0.5 & W &(\\neg p\\_open\\_door(2)\\_0) = 0.5 \\\\\n    & W(p\\_open\\_door(3)\\_0) = 0.5 & W &(\\neg p\\_open\\_door(3)\\_0) = 0.5 \\\\\n    & W(select\\_door(1)) = 1.00    & W &(\\neg select\\_door(1)) = 0.00    \\\\\n    & W(prize(1)) = 0.33           & W &(\\neg prize(1)) = 1.00           \\\\\n    & W(prize(2)) = 0.33           & W &(\\neg prize(2)) = 1.00           \\\\\n    & W(prize(3)) = 0.33           & W &(\\neg prize(3)) = 1.00           \\\\\n    & W(open\\_door(2)) = 1.00      & W &(\\neg open\\_door(2)) = 1.00      \\\\\n    & W(open\\_door(3)) = 1.00      & W &(\\neg open\\_door(3)) = 1.00      \\\\\n    & W(win\\_keep) = 1.00          & W &(\\neg win\\_keep) = 1.00          \\\\\n    & W(win\\_switch) = 1.00        & W &(\\neg win\\_switch) = 1.00        \\\\\n\\end{align*}\n\n\n\\section{Weighted Model Counting}\n\\subsection{Weighted model counters on above CNFs}\nWe have selected MiniC2D and Cachet as weighted model counters and have executed them on DIMACS versions of the CNFs of the previous tasks. The DIMACS files can be found under report/encodings.\nThe output of the model counters is listed below.\n\n\\subsubsection{MiniC2D}\nMiniC2D needs to be executed with the $-W$ flag in order for it to do weighted model counting.\nThe resulting probability can be read next to ``Count''.\n\n\\begin{lstlisting}[caption={MiniC2D on ENC1 encoding of Cancer network}]\nConstructing CNF... DONE\nCNF stats:\n  Vars=30 / Clauses=74\n  CNF Time\t0.000s\nConstructing vtree (from primal graph)... DONE\nVtree stats:\n  Vtree widths: con<=5, c_con=48 v_con=5\n  Vtree Time\t0.001s\nCounting... DONE\n  Learned clauses      \t0\nCache stats:\n  hit rate   \t75.0%\n  lookups    \t16\n  ent count  \t4\n  ent memory \t0.2 KB\n  ht  memory \t152.6 MB\n  clists     \t1.0 ave, 1 max\n  keys       \t3.0b ave, 3.0b max, 3.0b min\nCount stats:\n  Count Time\t0.000s\n  Count \t0.9999999999999999\nTotal Time: 0.012s\n\\end{lstlisting}\n\n\\begin{lstlisting}[caption={MiniC2D on ENC2 encoding of Cancer network}]\nConstructing CNF... DONE\nCNF stats:\n  Vars=20 / Clauses=30\n  CNF Time\t0.000s\nConstructing vtree (from primal graph)... DONE\nVtree stats:\n  Vtree widths: con<=6, c_con=16 v_con=6\n  Vtree Time\t0.000s\nCounting... DONE\n  Learned clauses      \t0\nCache stats:\n  hit rate   \t23.1%\n  lookups    \t26\n  ent count  \t20\n  ent memory \t1.0 KB\n  ht  memory \t152.6 MB\n  clists     \t1.0 ave, 1 max\n  keys       \t1.8b ave, 3.0b max, 1.0b min\nCount stats:\n  Count Time\t0.000s\n  Count \t1.0000000000000000\nTotal Time: 0.012s\n\\end{lstlisting}\n\n\\begin{lstlisting}[caption={MiniC2D on CNF encoding of Monty Hall}]\nConstructing CNF... DONE\nCNF stats:\n  Vars=10 / Clauses=26\n  CNF Time\t0.000s\nConstructing vtree (from primal graph)... DONE\nVtree stats:\n  Vtree widths: con<=4, c_con=22 v_con=4\n  Vtree Time\t0.000s\nCounting... 
DONE
  Learned clauses       0
Cache stats:
  hit rate    20.0%
  lookups     5
  ent count   4
  ent memory  0.2 KB
  ht  memory  152.6 MB
  clists      1.0 ave, 1 max
  keys        3.2b ave, 4.0b max, 3.0b min
Count stats:
  Count Time  0.000s
  Count       1.0000000000000000
Total Time: 0.011s
\end{lstlisting}

\subsubsection{Cachet}
For Cachet, there is no need to use extra parameters to get a probability.
It is reported next to ``Satisfying probability''.

\begin{lstlisting}[caption={Cachet on ENC1 encoding of Cancer network}]
Number of total components        11
Number of split components        2
Number of non-split components    5
Number of SAT residual formula    12
Number of trivial components      0
Number of changed components      0
Number of adjusted components     0
First component split level       1

Number of Decisions               11
Max Decision Level                5
Number of Variables               30
Original Num Clauses              74
Original Num Literals             172
Added Conflict Clauses            0
Added Conflict Literals           0
Deleted Unrelevant clauses        0
Deleted Unrelevant literals       0
Number of Implications            124
Total Run Time                    0.0163

Satisfying probability            8.72319e-08
Number of solutions               93.6645

\end{lstlisting}

\begin{lstlisting}[caption={Cachet on ENC2 encoding of Cancer network}]
Number of total components        11
Number of split components        2
Number of non-split components    5
Number of SAT residual formula    12
Number of trivial components      0
Number of changed components      0
Number of adjusted components     0
First component split level       1

Number of Decisions               11
Max Decision Level                5
Number of Variables               20
Original Num Clauses              30
Original Num Literals             84
Added Conflict Clauses            0
Added Conflict Literals           0
Deleted Unrelevant clauses        0
Deleted Unrelevant literals       0
Number of Implications            72
Total Run Time                    0.017372

Satisfying probability            1
Number of solutions               1.04858e+06

\end{lstlisting}

\begin{lstlisting}[caption={Cachet on WCNF encoding of Monty Hall}]
Number of total components        4
Number of split components        1
Number of non-split components    2
Number of SAT residual formula    5
Number of trivial components      0
Number of changed components      0
Number of adjusted components     0
First component split level       2

Number of Decisions               4
Max Decision Level                4
Number of Variables               10
Original Num Clauses              26
Original Num Literals             73
Added Conflict Clauses            0
Added Conflict Literals           0
Deleted Unrelevant clauses        0
Deleted Unrelevant literals       0
Number of Implications            26
Total Run Time                    0.016062

Satisfying probability            0.444444
Number of solutions               455.111
\end{lstlisting}

For ENC1, we see that Cachet reports a satisfying probability of almost 0. Similarly, for Monty Hall, we see that we get a probability of 0.44. This is due to the fact that with ENC1, the weights of negated literals are 1, but Cachet expects that $weight(x) + weight(-x) = 1$. In the Monty Hall encoding, we also have weights of negated literals equal to 1, which causes the same problem as with ENC1.
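One possible workaround (our own sketch, not a feature of either tool) is to rescale each weight pair so it sums to one before calling Cachet, and to correct the reported count afterwards, using $\mathrm{WMC} = \bigl(\prod_v (w(v)+w(\neg v))\bigr)\cdot \mathrm{WMC}_{\text{normalized}}$:

\begin{lstlisting}[caption={Normalizing weight pairs for Cachet (Python sketch)}]
def normalize_weights(weights):
    # weights: dict mapping variable -> (w_pos, w_neg).
    # Rescale each pair to sum to 1 and return the factor that
    # recovers the original count: WMC = factor * WMC_normalized.
    normalized, factor = {}, 1.0
    for var, (w_pos, w_neg) in weights.items():
        s = w_pos + w_neg
        normalized[var] = (w_pos / s, w_neg / s)
        factor *= s
    return normalized, factor

# ENC1-style pair: the weight of the negated literal is 1
norm, factor = normalize_weights({"prize(1)": (0.33, 1.0)})
print(norm["prize(1)"], factor)  # (0.2481..., 0.7518...) 1.33
\end{lstlisting}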
\subsection{Difference between the selected WMCs}
%The differences between the different WMCs come from \cite{CHAVIRA2008772}.
%\subsubsection{Cachet vs jointree and recursive conditioning}
%Jointree and recursive conditioning only exploit topological structure thus they take no advantage of the massive determinism available in networks whilst Cachet does this.

\subsubsection{MiniC2D vs.\ Cachet}
MiniC2D and Cachet are weighted model counters that work in different ways. In short, MiniC2D is a top-down compiler that compiles CNFs into SDDs, while Cachet uses formula caching combined with clause learning and component analysis \cite{MiniC2D, Cachet}.

Both weighted model counters use concepts from the SAT literature. They both use clause learning and component caching in order to reuse components that appear again later during search. \\
Cachet also uses other methods from the SAT literature, like an explicit on-the-fly calculation of connected components. This is different in MiniC2D, as it relies on vtrees to identify disconnected CNF components. MiniC2D creates vtrees for CNFs and then creates SDDs based on the created vtrees.
%The way that the compilation is done with MiniC2D is as follows:
%The SDD compilation is driven by the vtree, it uses this to identify disconnected CNF components and it uses a component caching scheme to prevent compiling the same component multiple times.


\subsection{Overview of computational requirements}
We have executed the model counters with various CNFs to build an overview of computational requirements. The files we used for testing can be found under report/encodings. We have used scripts to convert the ``.dsc'' files to ENC1 and ENC2 encodings in DIMACS format. We downloaded the ``.dsc'' files from \url{http://www.bnlearn.com/bnrepository/}.

\subsubsection{Cancer network (small)}
\begin{table}[H]
    \centering
    \begin{tabular}{c|c|c|c|c|c|c|}
    \cline{2-7}
            & \multicolumn{3}{c|}{ENC1} & \multicolumn{3}{c|}{ENC2} \\ \cline{2-7}
      & \textbf{Prob}  & \textbf{Memory}  & \textbf{Runtime} & \textbf{Prob}  & \textbf{Memory}  & \textbf{Runtime} \\ \cline{1-7}
      \textbf{Minic2d} & 1.0  & 0.3 KB    & 0.053s   & 1.0    & 1.0 KB    & 0.050s \\
      \hline
    \textbf{Cachet}  & 0.0  & ?    & 0.016s       & 1.0     & ?    & 0.016s    \\ \cline{1-7}
    \end{tabular}
\end{table}
% updated cancer with total time

\subsubsection{Asia network (small)}
\begin{table}[H]
    \centering
    \begin{tabular}{c|c|c|c|c|c|c|}
    \cline{2-7}
            & \multicolumn{3}{c|}{ENC1} & \multicolumn{3}{c|}{ENC2} \\ \cline{2-7}
      & \textbf{Prob}  & \textbf{Memory}  & \textbf{Runtime} & \textbf{Prob}  & \textbf{Memory}  & \textbf{Runtime} \\ \cline{1-7}
      \textbf{Minic2d} & 1.0  & 1.2 KB    & 0.049s   & 1.0    & 1.9 KB    & 0.05s \\
      \hline
    \textbf{Cachet}  & 0.0  & ?    & 0.018s       & 1.0     & ?
& 0.017s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\n\\subsubsection{Sachs network (small)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{Minic2d} & 0.99707  & 16.8 KB    & 0.075s   & 1.0    & 13.4 KB   & \t0.07s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 0.019s       & 1.0     & ?    & 0.017s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n%sachs also updated\n\n\\subsubsection{Earthquake network (small)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{Minic2d} & 1.0  & 0.6 KB & 0.051s   & 1.0    & 1.0 KB  & \t0.05s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 0.016s       & 1.0     & ?    & 0.017s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\n\\subsubsection{Survey network (small)}\n\\begin{table}[H]\n\\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{Minic2d} & 1.0  & 0.5 KB    & 0.035s   & 1.0    & 2.0 KB    & \t0.052s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 0.016s       & 1.0     & ?    & 0.016s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\n\\subsubsection{Alarm network (medium)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{Minic2d} & 0.999  & 451.1 KB    & 0.217s   & 0.999    & 143.4 KB    & \t0.093s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 0.176s       & 1.0     & ?    & 0.222s    \\\\ \\cline{1-7}\n\\end{tabular}\n\\end{table}\n\n\\subsubsection{Child network (medium)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{Minic2d} & 1.0  & 45.8KB    & 0.076s   & 1.0    & 30.8 KB    & \t0.059s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 0.03s      & 1.0     & ?    
& 0.03s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\n\\subsubsection{Hailfinder network (large)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{MiniC2D} & 0.999  & 591.8 MB & 46.065s   & 1.0    & 25.1 MB    & 2.73s \\\\\n      \\hline\n    \\textbf{Cachet}  & 0.0  & ?    & 58.86s       & 1.0     & ?    & 15.21s    \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\n\\subsubsection{Andes network (very large)}\n\\begin{table}[H]\n    \\centering\n    \\begin{tabular}{c|c|c|c|c|c|c|}\n    \\cline{2-7}\n            & \\multicolumn{3}{c|}{ENC1} & \\multicolumn{3}{c|}{ENC2} \\\\ \\cline{2-7}\n      & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} & \\textbf{Prob}  & \\textbf{Memory}  & \\textbf{Runtime} \\\\ \\cline{1-7}\n      \\textbf{MiniC2D} & 1.0  & 5.5 GB    & 266.605s   & 1.0    & 122.8 MB    & 5.646s \\\\\n      \\hline\n    \\textbf{Cachet}  & ?  & ?    & $>$ 4h (killed)      & ?     & ?    & $>$ 4h (killed)     \\\\ \\cline{1-7}\n    \\end{tabular}\n\\end{table}\n\nFor Cachet, we did not find a way to output memory usage, so we cannot compare that aspect to MiniC2D.\nFrom the results listed above, we can conclude that ENC2 is a better encoding than ENC1 when it comes to computational requirements. In most cases, it has lower memory usage and a lower runtime.\nConcerning the weighted model counters, we clearly see that Cachet is faster than MiniC2D on small to medium networks. On larger networks, Cachet is a lot slower than MiniC2D, and for the very large network, Cachet could not even finish within 4 hours.\n\n\\section{Knowledge compilation}\n\\subsubsection{Vtree with the most compact circuit}\nFor this task, we used both the SDD and MiniC2D packages to generate vtrees and SDDs. The resulting vtrees and their corresponding SDDs can be found under report/1.4/. Subdirectories have been made for each WMC's output for each of the CNFs. The file names in each of these directories follow this naming convention: (SDD\\_vtree\\_type $\\vert$ MiniC2D\\_vtree\\_method)[\\_SDD\\_minimize\\_cardinality]\\_(vtree $\\vert$ sdd)\n\\begin{itemize}\n    \\item \\textbf{ENC1}: SDD built the best vtree for the most compact circuit with the ``-t balanced -m'' flags. The next best circuit can be created via the vtree created by MiniC2D with ``-m 3'' (reverse elimination order).\n  \n    \\item \\textbf{ENC2}: For this encoding, several options result in the best circuits. The options ``-t balanced'' for SDD and ``-m 3'' for MiniC2D are again among the best. As for ENC1, the vtree created by SDD is better than the one from MiniC2D.\n\n    \\item \\textbf{Monty Hall}: Here, the option ``-t balanced -m'' again gives the most compact circuit, closely followed by MiniC2D's ``-m 3''.\n\\end{itemize}\n\n\\subsubsection{Pattern for a good vtree}\nFrom our tests, we have noticed that shallower vtrees result in the best circuits. 
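\n\nTo make this pattern concrete, the following minimal sketch (ours, purely illustrative and independent of the SDD and MiniC2D packages) compares the depth of a balanced vtree with that of a right-linear one over $n$ variables; balanced vtrees have depth $\\lceil \\log_2 n \\rceil$, while right-linear ones have depth $n-1$:\n\\begin{verbatim}\nimport math\n\ndef balanced_depth(n):\n    # a balanced vtree over n leaves has logarithmic depth\n    return math.ceil(math.log2(n)) if n > 1 else 0\n\ndef linear_depth(n):\n    # a right-linear vtree is a 'caterpillar': depth n - 1\n    return n - 1 if n > 1 else 0\n\nfor n in [8, 64, 512]:\n    print(n, balanced_depth(n), linear_depth(n))\n\\end{verbatim}\n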
That shallower vtrees win seems logical to us: a vtree is a binary tree over the CNF variables, and, much as with binary search trees, a shallow shape is generally preferable.\n", "meta": {"hexsha": "afdc76e7c5af621d9cb3847548a74b53b2784518", "size": 18085, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/chapter-1.tex", "max_stars_repo_name": "arminnh/ma2-csai-probabilistic-programming", "max_stars_repo_head_hexsha": "49a12b829b79a6dc7105dbf3b2d6f5c15fea12c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/chapter-1.tex", "max_issues_repo_name": "arminnh/ma2-csai-probabilistic-programming", "max_issues_repo_head_hexsha": "49a12b829b79a6dc7105dbf3b2d6f5c15fea12c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/chapter-1.tex", "max_forks_repo_name": "arminnh/ma2-csai-probabilistic-programming", "max_forks_repo_head_hexsha": "49a12b829b79a6dc7105dbf3b2d6f5c15fea12c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-03-19T11:00:59.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-11T09:59:12.000Z", "avg_line_length": 46.7312661499, "max_line_length": 425, "alphanum_fraction": 0.6333978435, "num_tokens": 6215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7310585669110202, "lm_q1q2_score": 0.5642628014098231}}
{"text": "\\chapter{Morphological Component Analysis}\n\\label{sect_ctm}\n\\section{Introduction}\nThe content of an image is often complex, and there is not a single\ntransform which is optimal to represent all the contained features. For\nexample, the Fourier transform better represents some textures, while\nthe wavelet transform better represents singularities. Even if we limit\nour class of transforms to the wavelet one, decision have to\nbe taken between an isotropic wavelet transform which produce good \nresults for isotropic objects (such stars and galaxies in astronomical \nimages, cells in biological images, etc), or an orthogonal wavelet \ntransform, which is better for images with edges. \nThis has motivated\nthe development of different methods \\cite{wave:donoho98,wave:meyer98,cur:huo99}, \nand the two most frequently discussed approaches are the Matching Pursuit (MP)\n\\cite{wave:mallat93} and the Basis pursuit (BP) \\cite{wave:donoho98}.\nA dictionary ${\\cal D}$ being defined as a collection of waveforms \n$(\\varphi_{\\gamma})_{\\gamma \\in \\Gamma}$,\nthe general principe consists in representing a signal $s$ as a ``sparse''\nlinear combination  of a small number of basis such that:\n\\begin{eqnarray}\n s = \\sum_{\\gamma} a_{\\gamma} \\varphi_{\\gamma}\n\\end{eqnarray}\nor an approximate decomposition\n\\begin{eqnarray}\n s = \\sum_{i=1}^m a_{\\gamma_i} \\varphi_{\\gamma_i} + R^{(m)} .\n\\end{eqnarray}\n \nMatching pursuit \\cite{wave:mallat93,ima:mallat98} method (MP) uses a greedy\nalgorithm which adaptively refines the signal approximation with an\niterative procedure:\n\\begin{itemize}\n\\item Set $s^0 = 0$ and $R^0 = 0$.\n\\item Find the element $\\alpha_k \\varphi_{\\gamma_k}$ which best correlates with the \nresidual.\n\\item Update $s$ and $R$:\n\\begin{eqnarray}\ns^{k+1} & = & s^k + \\alpha_k \\varphi_{\\gamma_k} \\nonumber \\\\\nR^{k+1} & = & s -  s^k .\n\\end{eqnarray}\n\\end{itemize}\nIn case of non orthogonal dictionaries, it has been shown \\cite{wave:donoho98}\n that MP may\nspend most of the time correcting mistakes made in the first few terms,\nand therefore is suboptimal in term of sparsity.\n\n\\bigskip\nBasis pursuit method  \\cite{wave:donoho98} (BP) is a global procedure which\nsynthesizes an approximation $\\tilde{s}$ to $s$ by minimizing a\nfunctional of the type\n\\begin{equation}\n\\|s - \\tilde{s}\\|_{\\ell_2} ^2 + \\lambda \\cdot \\|\\alpha\\|_{\\ell_1}, \n\\quad \\tilde{s} = \\Phi \\alpha.  \n\\label{eqn_mp}\n\\end{equation}\nBetween all possible solutions, the chosen one has the minimum $l^1$ norm.\nThis choice of $l_1$ norm is very important. A $l_2$ norm, as \nused in the method of frames \\cite{wave:daube88b},\n does not preserve the sparsity \\cite{wave:donoho98}. \n \nIn many cases, BP or MP synthesis algorithms are computationally very\nexpensive. We present in the following an alternative approach, that we call \n{\\em Morphological Component Analysis} (MCA), which combines  \n the different available transforms \nin order to benefit of the advantages of each of them. \n\n\\section{The Combined Transformation}\n\nDepending on the content of the data, several transforms \ncan be combined in order to get an optimal representation of all \nfeatures contained in our data set. 
\\section{The Combined Transformation}\n\nDepending on the content of the data, several transforms\ncan be combined in order to get an optimal representation of all\nfeatures contained in our data set. In addition to the ridgelet\nand the curvelet transform, we may want to use the \\`a trous algorithm,\nwhich is very well suited to astronomical data, or the undecimated\nwavelet transform, which is commonly used in the signal processing domain.\n\nOther transforms, such as wavelet packets, the Fourier transform,\nthe pyramidal median transform \\cite{starck:book98},\nor other multiscale morphological transforms,\ncould also be considered. However, we found in practice that\nthese four transforms (i.e. the curvelet, the ridgelet, the \\`a trous algorithm,\nand the undecimated wavelet transform)\nfurnish a very large panel of waveforms,\nwhich is generally rich enough to\nrepresent all the features contained in the data.\n\nIn general, suppose that we are given $K$ linear transforms $T_1,\n\\ldots, T_K$ and let $\\alpha_k$ be the coefficient sequence of an\nobject $x$ after applying the transform $T_k$, i.e. $\\alpha_k = T_k\nx$. We will suppose that for each transform $T_k$ we have available a\nreconstruction rule that we will denote by $T^{-1}_k$, although this is\nclearly an abuse of notation.\n\nTherefore, we search for a vector\n${\\bf \\alpha} = (\\alpha_1, \\dots, \\alpha_{K})$ such that\n\\begin{equation}\ns =  \\Phi {\\bf \\alpha}\n\\end{equation}\nwhere $\\Phi {\\bf \\alpha} = \\sum_{k=1}^K T_k^{-1} \\alpha_k$.\nAs our dictionary is overcomplete, there are infinitely many vectors\nsatisfying this condition, and we need to solve the following optimization\nproblem:\n\\begin{equation}\n\\min_{\\bf \\alpha} \\parallel s - \\Phi {\\bf \\alpha} \\parallel^2 + {\\cal C}({\\bf \\alpha})\n\\end{equation}\nwhere ${\\cal C}$ is a penalty term. We easily see that choosing\n${\\cal C}({\\bf \\alpha}) =  \\parallel {\\bf \\alpha} \\parallel_{\\ell_1}$ leads to the\nBP method, where the dictionary ${\\cal D}$ is composed of the\nbasis elements of the chosen transforms.\n\nTwo iterative methods, {\\em soft-MCA} and {\\em hard-MCA}, allowing us\nto realize such a combined transform, are described in this section.\n\n\\section{Soft-MCA}\n\\label{sect_l1}\n% $l^1$ optimization by soft thresholding\n\nDenoting by $T_1, \\ldots, T_{K}$ the $K$ transform operators,\na solution ${\\bf \\alpha}$ is obtained by minimizing a functional of the form:\n\\begin{eqnarray}\nJ({\\bf \\alpha}) = \\parallel s - \\sum_{k=1}^{K} T_k^{-1} \\alpha_k  \\parallel_2^2 + \\lambda \\sum_k \\parallel \\alpha_k \\parallel_1\n\\label{eqn_min_l1}\n\\end{eqnarray}\nwhere $s$ is the original signal, and $\\alpha_k$ are the coefficients\nobtained with the transform $T_k$.\n\nA simple algorithm to achieve such a solution is \\cite{starck:spie01b,starck:sta02_3}:\n\\begin{enumerate}\n\\item Initialize $L_{\\max}$, the number of iterations $N_i$,\n$\\lambda = L_{\\max}$, and $\\delta_{\\lambda} = \\frac{L_{\\max}}{N_i}$.\n\\item While $\\lambda \\geq 0$ do\n\\item For $k = 1, \\ldots, K$ do\n\\begin{itemize}\n\\item Calculate the residual $R = s - \\sum_k T_k^{-1} \\alpha_k$.\n\\item Calculate the  transform $T_k$ of the residual:\n$r_k = T_k R$.\n\\item Add the residual to  $\\alpha_k$:  $\\alpha_k = \\alpha_k + r_k$.\n\\item Soft threshold the coefficients $\\alpha_k$ with the threshold $\\lambda$.\n\\end{itemize}\n\\item $\\lambda = \\lambda - \\delta_{\\lambda}$, and go to 2.\n\\end{enumerate}\n
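\nAs an illustration of this scheme, here is a minimal one-dimensional sketch (ours, not taken from \\cite{starck:spie01b}), using two orthonormal stand-in transforms: the identity, in which spikes are sparse, and the DCT, in which smooth oscillations are sparse. The signal, thresholds and iteration count are arbitrary choices:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.fft import dct, idct\n\ndef soft(x, t):\n    # soft-thresholding operator\n    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)\n\nn = 256\nsmooth = np.cos(2 * np.pi * 4 * np.arange(n) / n)  # sparse in the DCT\nspikes = np.zeros(n)\nspikes[[40, 200]] = 5.0                            # sparse in the identity\ns = smooth + spikes\n\na_spike = np.zeros(n)  # coefficients in the spike (identity) basis\na_dct = np.zeros(n)    # coefficients in the DCT basis\n\nL_max, n_iter = 5.0, 50\nlam, dlam = L_max, L_max / n_iter\nwhile lam >= 0:\n    r = s - a_spike - idct(a_dct, norm='ortho')\n    a_spike = soft(a_spike + r, lam)\n    r = s - a_spike - idct(a_dct, norm='ortho')\n    a_dct = soft(a_dct + dct(r, norm='ortho'), lam)\n    lam -= dlam\n\nprint(np.linalg.norm(spikes - a_spike),\n      np.linalg.norm(smooth - idct(a_dct, norm='ortho')))\n\\end{verbatim}\nAt the end of the loop, the spike layer and the smooth layer are approximately recovered in their respective coefficient sets, which is the same separation behaviour illustrated on images below.\n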
\nFigure~\\ref{fig_cb1_synt} illustrates the result in the case where the\ninput image contains only lines and Gaussians. In this experiment, we have\ninitialized $L_{\\max}$ to $20$, and $\\delta_{\\lambda}$ to $2$ (10 iterations).\nTwo transform operators\nwere used, the \\`a trous wavelet transform and the ridgelet transform. The\nfirst is well adapted to the detection of Gaussians due to the isotropy of\nthe wavelet function~\\cite{starck:book98}, while the second is optimal\nfor representing lines \\cite{cur:candes99_1}. Figure~\\ref{fig_cb1_synt} top,\nbottom left, and bottom right represent respectively the\noriginal image, the image reconstructed from the \\`a trous wavelet\ncoefficients, and the image reconstructed from the ridgelet\ncoefficients. The addition of both reconstructed images reproduces the\noriginal one.\n\n\\begin{figure}[htb]\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_cb2_line_g.ps,bbllx=2cm,bblly=13.5cm,bburx=13cm,bbury=24.5cm,width=5.5cm,height=5.5cm,clip=}}\n}\n\\centerline{\n\\hbox{\n\\psfig{figure=fig_cb2_line_g_atrou.ps,bbllx=2cm,bblly=13.5cm,bburx=13cm,bbury=24.5cm,width=5.5cm,height=5.5cm,clip=}\n\\psfig{figure=fig_cb2_line_g_rid.ps,bbllx=2cm,bblly=13.5cm,bburx=13cm,bbury=24.5cm,width=5.5cm,height=5.5cm,clip=}\n}}}\n\\caption{Top, original image containing lines and Gaussians. Bottom left,\nimage reconstructed from the \\`a trous wavelet coefficients; bottom right,\nimage reconstructed from the ridgelet coefficients.}\n\\label{fig_cb1_synt}\n\\end{figure}\n\nIn some specific cases where the data are sparse in all bases,\nit has been shown \\cite{cur:huo99,cur:donoho01}\nthat the solution is identical to the one obtained with a\n$\\parallel . \\parallel_0$ penalty term. This is, however, generally not the\ncase.\nThe problem we met in image restoration applications,\nwhen minimizing Equation~\\ref{eqn_min_l1}, is\nthat both the signal and the noise are split across the bases. The way the noise\nis distributed among the coefficients $\\alpha_k$ is not known, which leads to the problem\nthat we do not know at which level we should threshold the coefficients. Using\nthe threshold we would have used with a single transform\nleads to strong over-filtering of the data. Using the $\\ell_1$ optimization\nfor data restoration therefore requires first studying how the noise is\ndistributed among the coefficients. The hard-MCA method does not present\nthis drawback.\n\n\\section{Hard-MCA}\n\\label{sect_l0}\n%  $l^0$ optimization by hard thresholding\nThe following algorithm consists in hard thresholding the residual\nsuccessively on the different bases \\cite{starck:spie01b,starck:sta02_3}.\n\\begin{enumerate}\n\\item For noise filtering, estimate the noise standard deviation $\\sigma$,\nand set $L_{\\min} = k_{\\sigma}$. 
\nOtherwise, set $\\sigma=1$ and $L_{\\min} = 0$.\n\\item Initialize $L_{\\max}$, the number of iterations $N_i$,\n$\\lambda = L_{\\max}$ and $\\delta_{\\lambda} = \\frac{L_{\\max} - L_{\\min}}{N_i}$.\n\\item Set all coefficients $\\alpha_k$ to 0.\n\\item While $\\lambda \\geq L_{\\min}$ do\n\\item For $k = 1, \\ldots, K$ do\n\\begin{itemize}\n\\item Calculate the residual $R = s - \\sum_k T_k^{-1} \\alpha_k$.\n\\item Calculate the transform $T_k$ of the residual:\n$r_k = T_k R$.\n\\item For all coefficients $\\alpha_{k,i}$ do\n\\begin{itemize}\n\\item  Update the coefficients:\nif $\\alpha_{k,i} \\ne 0$ or $| r_{k,i} | > \\lambda \\sigma$\nthen $\\alpha_{k,i}  =  \\alpha_{k,i} + r_{k,i}$.\n\\end{itemize}\n\\end{itemize}\n\\item $\\lambda = \\lambda - \\delta_{\\lambda}$, and go to 5.\n\\end{enumerate}\nFor an exact representation of the data, $k_{\\sigma}$ must be set to 0.\nChoosing $k_{\\sigma} > 0$ introduces a filtering: if a single transform is used,\nit corresponds to the standard $k$-sigma hard thresholding.\n\nIt seems that starting with a high enough $L_{\\max}$ and a high\nnumber of iterations would lead to the $\\ell_0$ optimization solution,\nbut this remains to be proved.\n\n\n\\section{Experiments}\n\n\\subsubsection{Experiment 1: Infrared Gemini Data}\n\\begin{figure}[htb]\n\\centerline{\n\\vbox{\n\\hbox{\n\\psfig{figure=fig_gemini.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n\\psfig{figure=fig_gem_cb_rid.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n\\psfig{figure=fig_gem_cb_cur.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n}\n\\hbox{\n\\psfig{figure=fig_gem_cb_atrou.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n\\psfig{figure=fig_gem_cb_resi_cur_rid_atrou.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n\\psfig{figure=fig_gem_cb_resi_cur_rid.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=5cm,height=5cm,clip=}\n}\n}}\n\\caption{Upper left, galaxy SBS 0335-052 (10 $\\mu$m); upper middle, upper right,\nand bottom left, reconstructions respectively from the ridgelet, the curvelet,\nand the wavelet coefficients. Bottom middle, residual image. Bottom right,\nartifact-free image.}\n\\label{fig_ctm_gemini1}\n\\end{figure}\n$ $ \n\\begin{figure}[htb]\n\\centerline{\n\\vbox{\n\\hbox{\n\\psfig{figure=fig_gemini2_bw.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=7cm,height=7cm,clip=}\n\\psfig{figure=fig_gem2_cb_cur_rid_bw.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=7cm,height=7cm,clip=}\n}\n\\hbox{\n\\psfig{figure=fig_gem2_cb_atrou_bw.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=7cm,height=7cm,clip=}\n\\psfig{figure=fig_gem2_cb_resi_cur_rid_atrou_bw.ps,bbllx=1.8cm,bblly=12.5cm,bburx=14.8cm,bbury=25.5cm,width=7cm,height=7cm,clip=}\n}\n}}\n\\caption{Upper left, galaxy SBS 0335-052 (20 $\\mu$m); upper right, addition\nof the reconstructed images from both the ridgelet and the curvelet coefficients;\nbottom left, reconstruction from the wavelet coefficients; and bottom right,\nresidual image.}\n\\label{fig_ctm_gemini2}\n\\end{figure}\n\\clearpage\nFig.~\\ref{fig_ctm_gemini1} upper left shows\na compact blue galaxy located at 53 Mpc. The data have been\nobtained from the ground with the GEMINI-OSCIR instrument at $10$ $\\mu$m. 
\nThe pixel field of\nview is $0.089^{\\prime\\prime}$/pix, and the source was observed for 1500 s.\nThe data are contaminated by noise and a striping artifact due to the\ninstrument electronics. The same kind of artifact pattern was observed with\nthe ISOCAM instrument \\cite{starck:sta99_1}.\n\nThis image, denoted $D_{10}$, has been decomposed using wavelets, ridgelets, and curvelets.\nFig.~\\ref{fig_ctm_gemini1} upper middle, upper right, and bottom left\nshow the three images $R_{10}$, $C_{10}$, $W_{10}$\nreconstructed respectively from the ridgelets, the curvelets, and the wavelets.\nThe image in Fig.~\\ref{fig_ctm_gemini1} bottom middle shows the residual, i.e.\n$e_{10} = D_{10} - (R_{10} + C_{10} + W_{10})$. Another interesting\nimage is the artifact-free one, obtained by subtracting $R_{10}$ and\n$C_{10}$ from the input data (see Fig.~\\ref{fig_ctm_gemini1} bottom right).\nThe galaxy has been well detected in the wavelet space, while all the striping\nartifacts have been captured by the ridgelets and the curvelets.\n\nFig.~\\ref{fig_ctm_gemini2} upper left shows the same galaxy, but at\n$20$ $\\mu$m. We have applied the same decomposition to $D_{20}$.\nFig.~\\ref{fig_ctm_gemini2} upper right shows the coadded\nimage $R_{20} + C_{20}$, and bottom left and bottom right show the\nwavelet reconstruction $W_{20}$ and the residual\n$e_{20} = D_{20} - (R_{20} + C_{20} + W_{20})$.\n\n\\subsubsection{Experiment 2: A370}\n \n\\begin{figure}[htb]\n\\centerline{\n\\vbox{\n\\hbox{\n\\psfig{figure=a370.ps,bbllx=1.5cm,bblly=6cm,bburx=19.5cm,bbury=24cm,width=7cm,height=7cm,clip=}\n\\psfig{figure=a370_ridcur.ps,bbllx=1.5cm,bblly=6cm,bburx=19.5cm,bbury=24cm,width=7cm,height=7cm,clip=}\n}\n\\hbox{\n\\psfig{figure=a370_atrou.ps,bbllx=1.5cm,bblly=6cm,bburx=19.5cm,bbury=24cm,width=7cm,height=7cm,clip=}\n\\psfig{figure=a370_comb.ps,bbllx=1.5cm,bblly=6cm,bburx=19.5cm,bbury=24cm,width=7cm,height=7cm,clip=}\n}\n}}\n\\caption{Top left, HST image of A370; top right, coadded image from the\nreconstructions from the ridgelet and the curvelet coefficients; bottom\nleft, reconstruction from the \\`a trous wavelet coefficients;\nand bottom right, addition of the three reconstructed images.}\n\\label{fig_a370}\n\\end{figure}\n\nFigure~\\ref{fig_a370} upper left shows the HST A370 image. It contains many\nanisotropic features, such as the gravitational arc and the arclets. The\nimage has been decomposed using three transforms: the ridgelet transform,\nthe curvelet transform, and the \\`a trous wavelet transform. Three images\nhave then been reconstructed from the coefficients of the three bases.\nFigure~\\ref{fig_a370} upper right shows the coaddition of the ridgelet-\nand curvelet-reconstructed images. The \\`a trous reconstructed image\nis displayed in Figure~\\ref{fig_a370} lower left, and the coaddition\nof the three images can be seen in Figure~\\ref{fig_a370} lower right.\nThe gravitational arc and the arclets are all represented in the\nridgelet and the curvelet bases, while all isotropic features are\nbetter represented in the wavelet basis.\n\nWe can see that this Morphological Component Analysis (MCA) allows\nus to automatically separate features in an image which have different\nmorphological aspects. 
It is very different from other techniques such as\nPrincipal Component Analysis or \nIndependent Component Analysis \\cite{mc:cardoso98} where the separation\nis performed via statistical properties.\n\n\n\\clearpage\n\\newpage\n", "meta": {"hexsha": "50a197c73052d36145ffddd9b3fcbaab18e6f3be", "size": 15558, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_mra/doc_mr4/ch_mga.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_mra/doc_mr4/ch_mga.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_mra/doc_mr4/ch_mga.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.8938053097, "max_line_length": 129, "alphanum_fraction": 0.756652526, "num_tokens": 4804, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5642537398726258}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{December 6, 2013}\n\\maketitle\nquiz in 6.2-6.4 fri?\n\n\\section*{section 6.3}\n\\begin{align*}\n  x'&=3x+3y\\\\\n  y'&=3x+8y \\intertext{both x and y are functions of t}\n\\end{align*}\n\\subsection*{example}\nshow that $\\left(\\begin{array}{c}-2e^{2t}\\\\e^t\\end{array}\\right)$ is a solution of $\\mathbb{X}'\\left(\\begin{array}{cc}1&-2\\\\2&6 \\end{array}\\right)\\mathbb{x}$\n\\begin{align*}\n  \\mathbb{X}'&=\\left(\\begin{array}{c}\n      -4e^{2t}\\\\2e^{2t}\n      \\end{array}\\right)\n  &=\\left(\\begin{array}{cc}\n      1&-2\\\\2&6\n      \\end{array}\\right)\n  \\left(\\begin{array}{c}\n      -2e^{2t}\\\\e^{2t}\n      \\end{array}\\right)\n  &=\\left(\\begin{array}{c}\n      -2e^{2t}-2e^{2t}\\\\\n      -4e^{2t}+6e^{2t}\n      \\end{array}\\right)\n  &=\\left(\\begin{array}{c}\n      -4e^{2t}\\\\2e^{2t}\n      \\end{array}\\right)\n\\end{align*}\n\\begin{align*}\n  W&=\\left\\lvert\\begin{array}{cc}y_1&y_2\\\\y_1'&y_2'\\end{array}\\right\\rvert\\\\\n  \\phi_1&=\\left(\\begin{array}{c}\\alpha_1(t)\\\\\\alpha_2(t)\\end{array}\\right)\\\\\n  \\phi_2&=\\left(\\begin{array}{c}\\beta_1(t)\\\\\\beta_2(t)\\end{array}\\right)\\\\\n  W(\\phi_1,\\phi_2)&=\\left(\\begin{array}{cc}\\alpha_1&\\beta_1\\\\ \\alpha_2&\\beta_2 \\end{array}\\right)\n\\end{align*}\n\\subsection*{example}\ncheck if $\\phi$'s are independant solutions\n\\begin{align*}\n  \\mathbb{X}'&=\\left(\\begin{array}{cc}4&-3\\\\-2&-1 \\end{array}\\right)\\mathbb{X}\\\\\n  \\phi_1&=\\left(\\begin{array}{c}e^{-2t}\\\\2e^{-2t} \\end{array}\\right)\\\\\n  \\phi_2&=\\left(\\begin{array}{c}3e^{5t}\\\\e^{5t} \\end{array}\\right)\\\\\n  W(\\phi_1,\\phi_2)&=e^{3t}+6e^{3t}=7e^{3t}\\neq0\n\\end{align*}\nthey are independant\n\n\\subsection*{eigenvalues}\n\\begin{align*}\n  A&=\\left(\\begin{array}{cc}2&3\\\\-4&-5\\end{array}\\right)\n\\end{align*}\nfind the eigenvalue of A\n\\begin{align*}\n  \\left\\lvert\\righ\\left(\\begin{array}{cc}2-\\lambda&3\\\\-4&-5-\\lambda\\end{array}\\right)t\\rvert&=0\\\\\n    (2-\\lambda)(-5-\\lambda)+12&=0\\\\\n    -10-2\\lambda+5\\lambda+\\lambda^2+12&=0\\\\\n    \\lambda^2+3\\lambda+2&=0\\\\\n    (\\lambda+1)(\\lambda+2)&=0\\\\\n    \\lambda&=-1,-2\n\\end{align*}\neigenvector corresponding to $\\lambda=-1$\n\\begin{align*}\n  A\\left(\\begin{array}{c}x_1\\\\x_2\\end{array}\\right)&=-1\\left(\\begin{array}{c}x_1&x_2\\end{array}\\right)\\\\\n  \\left(\\begin{array}{cc}2&3\\\\-4&-5\\end{array}\\right)\\left(\\begin{array}{c}x_1&x_2\\end{array}\\right)&=-\\left(\\begin{array}{c}x_1&x_2\\end{array}\\right)\\\\\n  \\left(\\begin{array}{c}2x_1+3x_2\\\\-4x_1-5x_2\\end{array}\\right)=-\\left(\\begin{array}{c}x_1&x_2\\end{array}\\right)\\\\\n  2x_1+3x_2&=-x_1\\\\\n  -4x_1-5x_2&=x_2\\\\\n  3x_1+3x_2&=0\\\\\n  -4x_1-4x_2&=0\\\\\n  x_1+x_2&=0\\\\\n  x_1&=x_2 \\intertext{choose}\n  x_1&=1,x_2=-1\\\\\n  \\text{eigenvector}&=\\left(\\begin{array}{c}1\\\\-1\\end{array}\\right)\n\\end{align*}\neigenvector corresponding to $\\lambda=-2$ is $\\left(\\begin{array}{c}3\\\\-4\\end{array}\\right)$\n  \nquiz show that eigenvector\n\\end{document}\n", "meta": {"hexsha": "c396209889b832e761f4ae627124bc507d9a1e50", "size": 2933, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "differential equations/diffeq-notes-2013-12-06.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": 
"66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "differential equations/diffeq-notes-2013-12-06.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "differential equations/diffeq-notes-2013-12-06.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7126436782, "max_line_length": 157, "alphanum_fraction": 0.628025912, "num_tokens": 1347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.803173801068221, "lm_q1q2_score": 0.5642537365627825}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{tikz}\n\\usepackage[utf8]{inputenc}\n\\usepackage{aeguill}\n\\usepackage{setspace}\n\n\\usetikzlibrary{graphs,graphdrawing}\n\\usegdlibrary{trees}\n\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{6 f\u00e9vrier, 2015}\n\\maketitle\nwednesday was eulerian graphs (bridges of k\u00f6nigsberg).\n\ncycles are not circuits and trails are not paths\n\n\\section*{3.2 hamiltonian paths}\nEuler is E, edges is E. Hamiltonian graphs is H, vertices is...not\n\na {\\bfseries Hamiltonian path} or {\\bfseries cycle} is a path or cycle that meets every vertex of $G$ exactly once.\n\na graph with hamiltonian cycle is called {\\bfseries hamiltonian}\n\\subsection*{example}\n\\begin{enumerate}\n\\item\nHamiltonian:\n\n\\begin{tikzpicture}[main_node/.style={circle,draw,text=black,inner sep=1pt,outer sep=0pt]}]\n  \\node[main_node] (1) at (-1,-1) {};\n  \\node[main_node] (2) at (1,-1) {};\n  \\node[main_node] (3) at (1,1) {};\n  \\node[main_node] (4) at (-1,1) {};\n  \\draw (1) -- (2) -- (3) -- (4)--(1);\n\\end{tikzpicture}\n\\item\nNot hamiltonian\n\n\\tikz\\path [graphs/.cd, nodes={shape=circle, draw, text=black,inner sep=1pt,outer sep=0pt}]\n  graph [tree layout] { 1 -- {2 -- 3};2--4 }\n  [shift=(0:1)];\n\\item\nis hamiltonian\n\\tikz\\path [graphs/.cd, nodes={shape=circle, draw, text=black,inner sep=1pt,outer sep=0pt}]\n  graph [tree layout] { 1 -- 2;1--3;1--4;1--5;1--6;2--3--4--5--6 }\n  [shift=(0:1)];\n\\end{enumerate}\n\n\\subsection*{theorem (Ore)}\nif $G$ is a graph of order $n\\ge 3$ and $\\forall u,v$ vertices $\\deg(u)+\\deg(v)\\ge n$ then $G$ is hamiltonian\n\n{\\scshape note:} completely nonconstructive proof\n\n\\subsubsection*{proof}\nassume for a contradiction that for all $u,v\\in V(G)$, $\\deg(u)+\\deg(v)\\ge n$ but $G$ is not hamiltonian.\n\nwithout loss of generality, we can assume that $G$ is ``maximal'' with this property. why?\n\n$G$ is finite therefore $G$ is a subgraph  of some complete graph $G\\le K_n$. But $K_n$ is hamiltonian. Since $G$ is not hamiltonian and $K_n$ is then. somewhere added enough edges to $G$ to make it hamiltonian.\n\nAdd edge $xy$ to $G$. The $G\\cup \\{xy\\}$ is hamiltonian, so there is an $x-y$ path in $G$. We have to use the $xy$ edge in the hamiltonian cycle, else the graph would already be hamiltonian. In fact the $x-y$ path is hamiltonian.\n\nlet the $x-y$ path be $x=v_1,\\dots,v_n=y$. If $x$ is adjacent to $v_i$ then $y$ cannot be adjacent to $v_{i-1}$. why? because then you would have a hamiltonian cycle. $v_1v_i\\dots v_nv_{i-1}v_1$\n\nTherefore, for every neighbor of $x$ we can eliminate a possible neighbor of $y$. This means that $\\deg(y)\\le(n-1)-\\deg(x)$. because $n-1$ is max possible deg and $\\deg(x)$ are the things that can't be adjacent to $y$. Now $\\deg(y)+\\deg(x)\\le n-1$ which is a contradiction because $\\deg(y)+\\deg(x)\\ge n$\n\n\\subsection*{corollary}\nif $G$ of order $n\\ge 3$ has the property that for all $v\\in V(G)$ then $\\deg(v)\\ge \\frac{n}{2}$, then $G$ is hamiltonian.\n\\subsubsection*{proof}\n$\\forall u,v\\in V(G)$ then $\\deg(u)+\\deg(v)\\ge n$\n\n\\section*{independent sets}\na subset of $V(G)$ is called {\\bfseries independent} if no vertices are adjacent to one another.\n\nthe maximal cardinality of independent sets is called the {\\bfseries independent number}. 
it is denoted $\\alpha(G)$.\n\nwhat is $\\alpha(K_n)$? 1\n\n\n\\section*{theorem (Chv\u00e1tal--Erd\u0151s)}\nlet $G$ be a graph of order $n\\ge 3$. if the connectivity $\\kappa(G)\\ge \\alpha(G)$, then $G$ is hamiltonian.\n\n\\section*{homework}\n1,7,14\n\\end{document}\n", "meta": {"hexsha": "fab4857d7e62a95c1992243aa6834c2728df201e", "size": 3576, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "graph/graph-notes-2015-02-06.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "graph/graph-notes-2015-02-06.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "graph/graph-notes-2015-02-06.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6421052632, "max_line_length": 303, "alphanum_fraction": 0.6951901566, "num_tokens": 1223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5642537249033088}}
{"text": "%!TEX root = ../main.tex\n\\section{Pure random attachment}\\label{section:pure-random-attachment}\n\n\\subsection{Degree distribution: Theory}\nThe pure random attachment model can be seen as a limiting case of the BA model, where all existing vertices are chosen with equal probability, i.e. $\\Pi = \\Pi_{rnd} \\propto 1$. This preserves growth but removes preferential attachment. \n\nWe start from the master equation in \\autoref{eq:master}, but instead of $\\Pi = k/ 2E(t)$, we use $\\Pi_{rnd} = 1 / N(t)$. Again, we consider the long-time ansatz $n(k, t) \\rightarrow N(t) p_{\\infty}(k)$. Substituting these terms into \\autoref{eq:master}, we have\n\n\\begin{equation}\n\tp_{\\infty}(k) = m p_{\\infty}(k-1) - m p_{\\infty}(k) + \\delta_{k, m}. \n\t\\label{eq:ra-degree-distribution-p-infinity}\n\\end{equation}\n\nConsidering the case of $k > m$, we obtain the recurrence relation\n\\begin{equation}\n\tp_{\\infty}(k) = \\left ( \\frac{m}{m+1} \\right ) p_{\\infty}(k-1)=...= \\left ( \\frac{m}{m+1} \\right )^{k-m} p_{\\infty}(m)\n\t\\label{eq:ra-degree-recurrence-relation}\n\\end{equation}\n\nNow we consider $k=m$. Substituting $k=m$ into \\autoref{eq:ra-degree-distribution-p-infinity} and remembering that $p_{\\infty}(k < m) = 0$, we get\n\\begin{equation}\n\tp_{\\infty}(m) = -mp_{\\infty} + 1, \n\t\\label{eq:ra-degree-k-equal-m}\n\\end{equation}\ngiving us \n\\begin{equation}\n\tp_{\\infty}(m) = \\frac{1}{m+1}. \n\t\\label{eq:ra-degree-p-infinity-m}\n\\end{equation}\n\nCombining this result with \\autoref{eq:ra-degree-recurrence-relation}, we get the following formula for $p_{\\infty}(k)$:\n\\begin{equation}\n\tp_{\\infty}(k) = \\frac{1}{m+1} \\left ( \\frac{m}{m+1}\\right )^{k-m}.\n\t\\label{eq:p-infinity-solution-ra}\n\\end{equation}\n\nFor normalization, we need to check that \n\\begin{equation}\n\t\\sum_{k=m}^{\\infty}p_{\\infty}(k) = \\frac{1}{m+1} \\sum_{k=m}^{\\infty} \\left ( \\frac{m}{m+1}\\right )^{k-m} = 1.\n\t\\label{eq:ra-check-normalization}\n\\end{equation}\nThe terms in the summation form a converging geometric series, with the starting term being zero and common ratio being $m / (m+1)$. Hence we have \n\\begin{equation}\n\t\\sum_{k=m}^{\\infty} \\left ( \\frac{m}{m+1} \\right )^{k-m} = \\frac{1}{1 - [m / (m+1)]}. \n\t\\label{eq:ra-geom-series}\n\\end{equation}\n\nBy substituting this back into the \\autoref{eq:ra-check-normalization}, we can see that normalization is satisfied. \n\nAs we can see from \\autoref{eq:p-infinity-solution-ra}, the resulting degree distribution in this limit is geometric \\citep{Pekoz2013}, indicating that growth alone is not sufficient to produce a scale free structure. \n\n\\subsection{Degree distribution: Numerical analysis}\\label{subsection:ra-numerical-analysis}\nNumerical simulations confirmed that growth alone is not sufficient to produce a scale free structure. \\autoref{fig:ra-fixed-n-degree-dist} shows the simulated raw degree distributions for $N=10^6$ and different $m$. The fat tail was reduced by taking the average of multiple simulations. Already, we can see that it does not follow a power law. After log binning, it follows the theoretical geometric simulation very well for small $k$, as can be seen in \\autoref{fig:ra-fixed-n-logbin}.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[height=0.5\\linewidth]{img/ra-fixed-n-degree-dist}\n    \\caption{Raw degree distribution for random attachment for $N = 10^6$ and $m = 1, 2, 4, 8, 16, 32$. 
}\n    \\label{fig:ra-fixed-n-degree-dist}\n\\end{figure}\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[height=0.5\\linewidth]{img/ra-fixed-n-logbin}\n    \\caption{Data collapse of the degree distribution for networks of size $N=10^2, 10^3, 10^4, 10^5, 10^6, 10^7$}\n    \\label{fig:ra-fixed-n-logbin}\n\\end{figure}\n\nGoodness of fit was tested in the same way as in the previous section. $p$-values obtained from testing against the null hypothesis of a geometric distribution governed by \\autoref{eq:p-infinity-solution-ra} are listed below, showing that it is plausible that the simulated data follow the theoretical distribution. While the $p$-values fluctuate between 0.4 and 0.7, they are all safely above the threshold of $0.1$. \n\n\\begin{center}\n\\begin{tabular}{ c | c }\n$m$ & $p$-value \\\\\n\\hline\n1  & 0.714 \\\\\n2  & 0.446 \\\\\n4  & 0.502 \\\\\n8  & 0.706 \\\\\n16 & 0.456 \\\\\n\\end{tabular}\n\\label{table:ra-ks-test}\n\\captionof{table}{The list of $p$-values for each $m$ for random attachment when compared with synthetic datasets.}\n\\end{center}\n\nTo demonstrate the test's effectiveness, the random attachment simulation data was also tested against the power-law distribution, and we obtained $p = 0$, to an accuracy of $\\pm 0.05$, for all $m$. This shows that we can reject the power-law hypothesis, as expected. \n\n\\subsection{Largest expected degree: Theory}\\label{subsection:largest-expected-degree}\nTo calculate the largest expected degree $k_1$, we require\n\n\\begin{equation}\n\tN \\sum_{k=k_1}^\\infty p_{\\infty}(k) = N \\sum_{k=k_1}^\\infty \\frac{1}{m+1} \\left (\\frac{m}{m+1} \\right )^{k-m} = 1.\n\t\\label{eq:largest-expected-degree-ra-criteria}\n\\end{equation}\n\nThe summation is similar to \\autoref{eq:ra-geom-series}, with a different lower limit. Applying the geometric series summation formula to the terms in the summation, we get \n\\begin{equation}\n\t\\frac{N}{m+1} \\left ( \\frac{m}{m+1} \\right )^{k_1 - m} \\frac{1}{1 - (m / (m+1))} = 1.\n\\end{equation}\n\nRearranging this and taking the logarithm of both sides, we obtain an expression for $k_1$:\n\\begin{equation}\n\tk_1 = \\frac{\\ln N}{\\ln (m+1) - \\ln m} + m.\n\t\\label{eq:largest-degree-ra}\n\\end{equation}\nFrom this, we know that the largest degree grows logarithmically with $N$, as opposed to the power-law growth found for preferential attachment. \n\n\\subsection{Largest expected degree: Numerical analysis}\nThe numerical largest degree was calculated in the same way as described in the previous section. The ratio between $k_1^{\\text{theory}}$ and $k_1^{\\text{numerical}}$ is largely constant. From \\autoref{fig:ra-numerical-theoretical-k1} we can see that the difference between the numerical and theoretical values decreases as $N$ increases, as expected. \n\n\\begin{figure}\n    \\centering\n    \\includegraphics[height=0.5\\linewidth]{img/ra-numerical-theoretical-k1}\n    \\caption{This shows the difference in $k_1$ for different values of $N$ ranging from $100$ to $10^7$ for random attachment. From the graph we can see that the difference between the theoretical and numerical values decreases as $N$ increases.}\n    \\label{fig:ra-numerical-theoretical-k1}\n\\end{figure}\n\nFrom the following table of values, we can also see that the ratio gets closer to 1 for larger $N$. As before, the standard error was estimated from the sample standard deviation. 
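\n\nAs an extra check on these numbers, the sketch below (ours, purely illustrative; it tracks only the degree sequence and leaves the random seed unset) grows a random attachment network and compares the measured $k_1$ against \\autoref{eq:largest-degree-ra}:\n\\begin{verbatim}\nimport math\nimport random\n\ndef random_attachment_degrees(N, m):\n    # start from a complete graph on m + 1 vertices (degree m each);\n    # every new vertex then attaches to m distinct uniform targets\n    deg = [m] * (m + 1)\n    for t in range(m + 1, N):\n        for v in random.sample(range(t), m):\n            deg[v] += 1\n        deg.append(m)\n    return deg\n\nN, m = 100_000, 2\nk1_numerical = max(random_attachment_degrees(N, m))\nk1_theory = math.log(N) / (math.log(m + 1) - math.log(m)) + m\nprint(k1_numerical, round(k1_theory, 1))\n\\end{verbatim}\n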
\n\n\\begin{center}\n\\begin{tabular}{ ||c | c | c | c ||}\n\\hline\nN & $k_1^{\\text{theory}}$ & $k_1^{\\text{numerical}}$ & $k_1^{\\text{numerical}} / k_1^{\\text{theory}} $\\\\ \n\\hline\n$10^2$ & 25    & 21.85  $\\pm$ 0.01 & 0.887 \\\\  \n$10^3$ & 35   & 32.15 $\\pm$ 0.03 & 0.920 \\\\\n$10^4$ & 46   & 42.05   $\\pm$  0.03  & 0.929 \\\\\n$10^5$ & 56  & 52.75  $\\pm$  0.02  & 0.949 \\\\\n$10^6$ & 66  & 63.55  $\\pm$  0.02  & 0.964 \\\\\n$10^7$ & 76 & 73.55 $\\pm$ 0.03  & 0.965 \\\\  \n\\hline\n\\end{tabular}\n\\label{table:ra-numerical-theoretical-ratio}\n\\captionof{table}{This table shows the theoretical and numerical values for the largest expected degree as defined in \\autoref{eq:largest-degree-ra} for random attachment. The errors on $k_1$ are rounded off to 1 significant figure. }\n\\end{center}", "meta": {"hexsha": "023ca204457161741654b0436ee65e05f60d7e8e", "size": 7359, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/pra.tex", "max_stars_repo_name": "lingxz/networks", "max_stars_repo_head_hexsha": "c8c38927271ccb6ed31916b0b92eeab4095e3490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/sections/pra.tex", "max_issues_repo_name": "lingxz/networks", "max_issues_repo_head_hexsha": "c8c38927271ccb6ed31916b0b92eeab4095e3490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/pra.tex", "max_forks_repo_name": "lingxz/networks", "max_forks_repo_head_hexsha": "c8c38927271ccb6ed31916b0b92eeab4095e3490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.9179104478, "max_line_length": 488, "alphanum_fraction": 0.7100149477, "num_tokens": 2373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6370307944803832, "lm_q2_score": 0.8856314738181875, "lm_q1q2_score": 0.5641745213832328}}
{"text": "{\"Hp2_version\":2, \"Hp2_stencil\":\"asst\",\n \"duedate\": \"Thursday 24 May 2018\", \"studentid\":\"200878696\",\n \"semester\": \"S1 2018\", \"course\":\"MATHS 326\", \"upi\": \"\\\\MakeLowercase{aelz176}\",\n \"asstno\":\"4\", \"Buc_tags\": [\"homework\", \"auckland\", \"mathematics\", \"combinatorics\", \"326\",\"test\"]}\n===\n\\section*{Question One}\n\\paragraph{(a)}\nThe symmetry group of the bracelet is $ C_6 $.\n\\begin{center}\\begin{tabular}{|c|c|c|}\\hline\n  \\textbf{Cycle type} & \\textbf{Count} & \\textbf{Fixed colourings}\\\\\\hline\n  $[1^6]$ & 1 & $ 2^6 $\\\\\n  $[2^3]$ & 1 & $ 2^3 $\\\\\n  $[3^2]$ & 2 & $ 2^2 $\\\\\n  $[6]$ & 2 & $ 2^1 $\\\\\\hline\n   & 6 &\\\\\\hline\n\\end{tabular}\\end{center}\n\nSo the number of orbits of $ C_6 $ on the colourings is\n\\begin{displaymath}\n  \\frac{1 \\cdot 2^6 + 1 \\cdot 2^3 + 2 \\cdot 2^2 + 2 \\cdot 2^1}{6} = 14.\n\\end{displaymath}\n\nThere are 14 distinct colourings up to rotation.\n\n\\paragraph{(b)}\nIf we also include reflections, then the symmetry group of the bracelet is $ D_6 $ (of order 12).\n\\begin{center}\\begin{tabular}{|c|c|c|}\\hline\n  \\textbf{Cycle type} & \\textbf{Count} & \\textbf{Fixed colourings}\\\\\\hline\n  $[1^6]$ & 1 & $ 2^6 $\\\\\n  $[2^3]$ & 1 & $ 2^3 $\\\\\n  $[3^2]$ & 2 & $ 2^2 $\\\\\n  $[6]$ & 2 & $ 2^1 $\\\\\\hline\n  $[1^2,2^2] $ & 3 & $ 2^4 $\\\\\n  $[2^3] $ & 3 & $ 2^3 $\\\\\\hline\n   & 12 &\\\\\\hline\n\\end{tabular}\\end{center}\n\nSo the number of orbits of $ D_6 $ on the colourings is\n\\begin{displaymath}\n  \\frac{1 \\cdot 2^6 + 1 \\cdot 2^3 + 2 \\cdot 2^2 + 2 \\cdot 2^1 + 3 \\cdot 2^4 + 3 \\cdot 2^3}{12} = 13.\n\\end{displaymath}\n\nThere are 13 distinct colourings up to rotation and reflection.\n\n\\section*{Question Two}\nThe cube's symmetry group has order 24.\n\\begin{center}\\begin{tabular}{|c|c|c|}\\hline\n  \\textbf{Cycle type on edges} & \\textbf{Count} & \\textbf{Fixed colourings}\\\\\\hline\n  $[1^{12}]$ & 1 & $ 2^{12} $\\\\\n  $[4^3]$ & 6 & $ 2^3 $\\\\\n  $[2^6]$ & 3 & $ 2^6 $\\\\\n  $[3^4]$ & 8 & $ 2^4 $\\\\\n  $[1^2,2^5]$ & 6 & $ 2^7 $\\\\\\hline\n   & 24 &\\\\\\hline\n\\end{tabular}\\end{center}\n\nSo the number of orbits of the group on the edge colourings is\n\\begin{displaymath}\n  \\frac{1 \\cdot 2^{12} + 6 \\cdot 2^3 + 3 \\cdot 2^6 + 8 \\cdot 2^4 + 6 \\cdot 2^7}{24} = 218.\n\\end{displaymath}\n\nThere are 218 distinct colourings up to rotation.\n\n\\section*{Question Thre}\n\\paragraph{(a)}\nLabelling the vertices with the numbers 1 to 5, and writing the edge between vertex 1 and vertex 2 by 12, we have the following:\n\\begin{gather*}\n  12 \\mapsto 12\\\\\n  13 \\mapsto 24 \\mapsto 15 \\mapsto 23 \\mapsto 14 \\mapsto 25 \\mapsto 13\\\\\n  34 \\mapsto 45 \\mapsto 53.\n\\end{gather*}\n\n\\paragraph{(b)}\nThe symmetry group of $ K_5 $ is $ S_5 $. 
We have the following:\n\\begin{center}\\begin{tabular}{|c|c|c|c|}\\hline\n  \\textbf{Cycle type on vertices} & \\textbf{Count} & \\textbf{Cycle type on edges} &\\textbf{Fixed colourings}\\\\\\hline\n  $ [1^5] $ & 1 & $ [1^{10}] $ & $ k^{10} $\\\\\n  $ [1^3,2] $ & 10 & $ [1^4,2^3] $ & $ k^7 $\\\\\n  $ [1^2,3] $ & 20 & $ [1,3^3] $ & $ k^4 $\\\\\n  $ [1, 4] $ & 30 & $ [2,4^2] $ & $ k^3 $\\\\\n  $ [1,2^2] $ & 15 & $ [1^2,2^4] $ & $ k^6 $\\\\\n  $ [2,3] $ & 20 & $ [1,3,6] $ & $ k^3 $\\\\\n  $ [5] $ & 24 & $ [5^2] $ & $ k^2 $\\\\\\hline\n  & 120 &&\\\\\\hline\n\\end{tabular}\\end{center}\n\nHence the cycle index of the symmetry group is\n\\begin{displaymath}\n  \\frac{1}{120}(a_1^{10} + 10a_1^4 a_2^3 + 20a_1 a_3^3 + 30a_2 a_4^2 + 15a_1^2 a_2^4 + 20 a_1 a_3 a_6 + 24a_5^2).\n\\end{displaymath}\n\n\\paragraph{(c)}\nCounting the number of non-isomorphic graphs on five vertices is equivalent to counting the number of 2-colourings of the edges of $ K_5 $.\nHence we need only calculate\n\\begin{displaymath}\n  \\frac{1}{120}(2^{10} + 10(2^7) + 20(2^4) + 30(2^3) + 15(2^6) + 20(2^3) + 24(2^2)) = 34.\n\\end{displaymath}\n\nThere are 34 distinct graphs on five vertices up to isomorphism.\n\n\\section*{Question Four}\nThe symmetry group of the pentagonal prism is $ C_2 \\times C_5 $.\n\\begin{center}\\begin{tabular}{|c|c|c|}\\hline\n  \\textbf{Cycle type} & \\textbf{Count} & \\textbf{Cycle monomial}\\\\\\hline\n  $ [1^7] $ & 1 & $ a_1^7 $\\\\\n  $ [1^2,5] $ & 4 & $ a_1^2 a_5 $\\\\\n  $ [1^5,2] $ & 1 & $ a_1^5 a_2 $\\\\\n  $ [2,5] $ & 4 & $ a_2 a_5 $\\\\\\hline\n   & 10 &\\\\\\hline\n\\end{tabular}\\end{center}\n\nSo the cycle index on the face colourings is\n\\begin{displaymath}\n  \\frac{1}{10}(a_1^7 + 4a_1^2 a_5 + a_1^5 a_2 + 4a_2 a_5).\n\\end{displaymath}\nLet $ x $, $ y $, and $ z $ be the number of green, blue, and red edges respectively; so the generating function for\nthe colorings of a particular edge is $ f(x,y,z) = x + y + z $. Hence, we want the coefficient of $ x^2 y^2 z^2 $ in\n\\begin{displaymath}\n  \\begin{split}\n    \\frac{1}{10}((x + y + z)^7 + 4(x + y + z)^2 (x^5 + y^5 + z^5) &+ (x + y + z)^5 (x^2 + y^2 + z^2)\\\\\n    &{}+ 4(x^2 + y^2 + z^2)(x^5 + y^5 + z^5)).\n  \\end{split}\n\\end{displaymath}\nThis coefficient is\n\\begin{displaymath}\n  \\frac{1}{10}\\left(\\frac{7!}{2!2!2!} + 0 + 3 \\cdot \\frac{5!}{2!2!0!} + 0\\right) = 72.\n\\end{displaymath}\n\nThere are 72 distinct colourings such that each colour appears precisely twice, up to rotation.\n\n\\section*{Question Five}\nThe symmetry group of the dodecahedron has order sixty.\n\\begin{center}\\begin{tabular}{|c|c|c|}\\hline\n  \\textbf{Cycle type} & \\textbf{Count} & \\textbf{Cycle monomial}\\\\\\hline\n  $ [1^{12}] $ & 1 & $ a_1^{12} $\\\\\n  $ [1^2,5^2] $ & 24 & $ a_1^2a_5^2 $\\\\\n  $ [2^6] $ & 15 & $ a_2^6 $\\\\\n  $ [3^4] $ & 20 & $ a_3^4 $\\\\\\hline\n   & 60 &\\\\\\hline\n\\end{tabular}\\end{center}\n\nSo the cycle index on the face colourings is\n\\begin{displaymath}\n  \\frac{1}{60}(a_1^{12} + 24a_1^2a_5^2 + 15a_2^6 + 20a_3^4).\n\\end{displaymath}\nLet $ x $, $ y $, and $ z $ be the number of red, green, and white faces respectively; so the generating function for\nthe colorings of a particular face is $ f(x,y,z) = x + y + z $. 
Hence, we want the coefficient of $ x^3y^3z^6 $ in\n\\begin{displaymath}\n  \\frac{1}{60}\\left((x+y+z)^{12} + 24(x+y+z)^2(x^5+y^5+z^5)^2 + 15(x^2+y^2+z^2)^6 + 20(x^3+y^3+z^3)^4\\right).\n\\end{displaymath}\nThis coefficient is\n\\begin{displaymath}\n  \\frac{1}{60}\\left(\\frac{12!}{3!3!6!} + 0 + 0 + 20\\cdot\\binom{4}{1}\\cdot\\binom{3}{1}\\cdot\\binom{2}{2}\\right) = 312.\n\\end{displaymath}\n", "meta": {"hexsha": "ce691c18107dbfde03274fdc65cd3b8d2ee3f53c", "size": 6015, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "test/Assment4.tex", "max_stars_repo_name": "aelzenaar/bucephalus", "max_stars_repo_head_hexsha": "49cc084a5444ffbde2f850fc1f7b230d3bb8dfbc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "test/Assment4.tex", "max_issues_repo_name": "aelzenaar/bucephalus", "max_issues_repo_head_hexsha": "49cc084a5444ffbde2f850fc1f7b230d3bb8dfbc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2018-11-09T03:00:28.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-02T05:39:55.000Z", "max_forks_repo_path": "test/Assment4.tex", "max_forks_repo_name": "aelzenaar/bucephalus", "max_forks_repo_head_hexsha": "49cc084a5444ffbde2f850fc1f7b230d3bb8dfbc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8064516129, "max_line_length": 128, "alphanum_fraction": 0.5941812136, "num_tokens": 2665, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.709019146082187, "lm_q2_score": 0.7956580976404296, "lm_q1q2_score": 0.5641368249623947}}
{"text": "\\section{Fluxograma do Tableau}\n\\subsection{Problema de Maximiza\u00e7\u00e3o}\n\n\\begin{frame}\n\\frametitle{Fluxograma Tableau Simplex - \\color{pink!100} MAXIMIZA\u00c7\u00c3O}\n\t\\centering\n\t\\begin{tikzpicture} [\n\t\t\t\t\t\t\tauto, node distance = 2cm,\n\t\t\t\t\t\t\tdecisao/.style = { diamond, draw, shape aspect=2, thick, fill=red!20,\n\t\t\t\t\t\t\t                   text width=6em, text badly centered,\n\t\t\t\t\t\t\t                   inner sep=1pt},\n\t\t\t\t\t\t\tbloco/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em },\n\t\t\t\t\t\t\tbloco2/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em, node distance = 4.2cm },\n\t\t\t\t\t\t\textremo/.style  = { ellipse, draw, fill=green!20,\n\t\t\t\t\t\t\t                   text width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=2em },\t\n\t\t\t\t\t\t\tline/.style   = { draw, -latex' },\n\t\t\t\t\t\t\tgoto/.style = {circle, draw, fill=yellow!60, text width=1em,\n\t\t\t\t\t\t\t\t\t\t   text centered, node distance = 1.8cm},\n\t\t\t\t\t\t    bullet/.style = {rectangle, draw, thick, fill=red!80, \n\t\t\t\t\t\t    \t\t\t\ttext width=9em, text centered,\n\t\t\t\t\t\t    \t\t\t\tminimum height=1em},\n\t\t\t\t\t\t]\n\n\t\n\t\t\\node [extremo] (inic) {\\tiny IN\u00cdCIO}; \\pause\n\t\t\n\t\t\\node [bloco, below of = inic] (sbfi) {\\tiny Montar Tableau Simplex com SBF Inicial};\n\t\t\\path [line] (inic) -- (sbfi); \\pause\n\t\t\n\t\t\\node [decisao, below of = sbfi] (conv) {\\tiny Existe Custo (coeficientes) $<$ 0 ?} ;\n\t\t\\path [line] (sbfi) -- (conv); \\pause\n\t\t\n\t\t\\node[bloco, below of = conv] (otimo) {\\tiny Solu\u00e7\u00e3o \u00d3tima};\n\t\t\\path [line] (conv) -- node [near start] {\\scriptsize \\color{red} n\u00e3o} (otimo); \n\t    \\node [extremo] at (2,-7.2) (fim) {\\tiny Fim};\t\n\t    \\path [line] (otimo) |- (2,-6.5) -- (fim); \\pause\n\n\t\t\\node[bloco2, right of = inic] (escolhe) {\\tiny Escolher Vari\u00e1vel Entrar na Base};\n\t\t\\path [line] (conv) -| (2.2, 0) node [near start] {\\scriptsize \\color{red} sim} -- (escolhe); \\pause\n\t\t\n\t\t\\node[bloco, below of = escolhe] (razao) {\\tiny Calcular Raz\u00e3o $ \\frac{b_i}{coluna}$};\n\t\t\\path [line] (escolhe) -- (razao); \\pause\n\n\t\t\\node [decisao, below of = razao] (finito) {\\tiny Existe Raz\u00e3o $\\ge 0$ finita ?} ;\n\t\t\\path [line] (razao) -- (finito); \\pause\n\t\t\n\t\t\\node[bloco, below of = finito] (ilimit) {\\tiny Solu\u00e7\u00e3o Ilimitada};\n\t\t\\path [line] (finito) -- node [near start] {\\scriptsize \\color{red} n\u00e3o} (ilimit); \n\t\t\\path [line] (ilimit) |- (2,-6.5) -- (fim); \\pause\n\n\t\t\\node[bloco2, right of = razao] (saibase) {\\tiny Escolher Vari\u00e1vel Sair da Base};\n\t\t\\path [line] (finito) -| (6.2, -2) node [near start] {\\scriptsize \\color{red} sim} -- (saibase); \\pause\n\t\t\n\t\t\\node[bloco, below of = saibase] (swap) {\\tiny Troca Base e Recalc. 
Tableau};\n\t\t\\path[line] (saibase) -- (swap); \\pause\n\t\t\n\t\t\\node[goto, below of = swap] (pula_a) {\\tiny 1};\n\t\t\\node[goto, left of = sbfi] (pula_b) {\\tiny 1};\n\t\t\\path[line] (pula_b) -- (conv);\n\t\t\\path[line] (swap) -- (pula_a); \\pause\n\t\t\n\t\t%\\only<2>\n\t\t%{\n\t\t\t\\node[bullet] at (7.5,-0.2) (obs1) {\\tiny Most negative objective-function coefficient};\n\t\t\t\\path[line] (obs1) -- (escolhe);\n\t\t%}\n\n\t\t%\\only<3>\n\t\t%{\n\t\t\t\\node[bullet] at (7.7,-1) (obs2) {\\tiny Smallest positive ratio};\n\t\t\t\\path[line] (obs2) -- (saibase);\n\t\t%}\n\t\t\n\t\\end{tikzpicture}\t\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{\\includegraphics[width=0.6cm,height=0.6cm]{number_4.jpg} \\hspace{0.2cm} Steps for Solving the LP via the Simplex Tableau}\n\n\t\\only<1-2>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 4$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 12$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 18$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& 
\\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<3>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 4$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 12$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 18$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<4>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z 
\n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $ \\scriptstyle \\color{red} 4 \\div 0 = \\infty $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 12$\n\t\t\t\t& $ \\scriptstyle \\color{red} 12 \\div 2 = +6 $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 18$\n\t\t\t\t& $ \\scriptstyle \\color{red} 18 \\div 2 = +9 $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<5>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ 
\n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $ \\scriptstyle \\color{red} 4 \\div 0 = \\infty $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 12$\n\t\t\t\t& $ \\scriptstyle \\color{red} 12 \\div 2 = +6 $\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.2cm]{setaesquerda.jpg}\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 18$\n\t\t\t\t& $ \\scriptstyle \\color{red} 18 \\div 2 = +9 $\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.2cm]{setaesquerda.jpg}\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<6>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$ 
\n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $ \\scriptstyle \\color{red} 4 \\div 0 = \\infty $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_2$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 12$\n\t\t\t\t& $ \\scriptstyle \\color{red} 12 \\div 2 = +6 $\n\t\t\t\t& \\scriptsize \\includegraphics[width=0.3cm,height=0.2cm]{setaesquerda.jpg}\n\t\t\t\t& \\scriptsize \\color{red} Sai Base \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 18$\n\t\t\t\t& $ \\scriptstyle \\color{red} 18 \\div 2 = +9 $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<7-8>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} 
\\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $ \\scriptstyle \\color{red} 4 \\div 0 = \\infty $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{red!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 12$\n\t\t\t\t& $ \\scriptstyle \\color{red} 12 \\div 2 = +6 $\n\t\t\t\t& \\scriptsize \\includegraphics[width=0.3cm,height=0.2cm]{setaesquerda.jpg}\n\t\t\t\t& \\scriptsize \\color{red} Sai Base \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 18$\n\t\t\t\t& $ \\scriptstyle \\color{red} 18 \\div 2 = +9 $\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -5$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t% Recome\u00e7a\n\t\\only<9-10>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& 
\\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 4$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 6$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 6$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<11>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 4$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} 
$\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 6$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 6$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<12>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $\\scriptstyle \\color{red} 4 \\div 1 = +4$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle \\color{red}  6 \\div 0 = \\infty$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle 
S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle \\color{red}  6 \\div 3 = +2$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<13>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $\\scriptstyle  \\color{red} 4 \\div 1 = +4$\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setaesquerda.jpg}\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 0 = \\infty$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{yellow!50} 
$\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 3 = +2$\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setaesquerda.jpg}\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<14>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $\\scriptstyle  \\color{red} 4 \\div 1 = +4$\n\t\t\t\t& \n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 0 = \\infty$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{gray!50} 
$\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 3 = +2$\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setaesquerda.jpg}\n\t\t\t\t& \\scriptsize \\color{red} Sai Base\\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<15-16>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 4$\n\t\t\t\t& $\\scriptstyle  \\color{red} 4 \\div 1 = +4$\n\t\t\t\t& \n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 0 = \\infty$\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{red!50} $\\scriptstyle 3$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 6$\n\t\t\t\t& $\\scriptstyle  \\color{red} 6 \\div 3 = 
+2$\n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setaesquerda.jpg}\n\t\t\t\t& \\scriptsize \\color{red} Sai Base\\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle -3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 5/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{gray!50} $\\scriptstyle 30$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t& \n\t\t\t\t& \\includegraphics[width=0.3cm,height=0.3cm]{setacima.jpg}\n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\ \n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Entra\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t& \n\t\t\t\t& \\scriptsize \\color{red} Base\n\t\t\t\t&  \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t& \n\t\t\t\t&  \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\\only<17-20>\n\t{\t\t\n\t\t\\begin{table}\t\t\n\t\t\t\\begin{tabular}{c c c c c c c c c c c}\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Base \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize Z \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_2$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} $\\scriptstyle S_3$ \n\t\t\t\t&\\cellcolor{blue!100} \\color{white} \\scriptsize b\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle S_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1/3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 2$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t    \\cellcolor{blue!100} \\color{red} $\\scriptstyle X_2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\t\t\t\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 6$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8} \n\t\t\t\t\\cellcolor{blue!100} \\color{red} $\\scriptstyle X_1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle -1/3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1/3$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 2$\n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\t\\cellcolor{blue!100} \\color{white} $\\scriptstyle Z$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& 
\\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 0$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 3/2$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 1$\n\t\t\t\t& \\cellcolor{yellow!50} $\\scriptstyle 36$ \n\t\t\t\t&\n\t\t\t\t&\n\t\t\t\t& \\\\\n\t\t\t\t\\cline{1-8}\n\t\t\t\\end{tabular}\n\t\t\\end{table}\t\t\t\n\t}\t\n\t\n\t\n\t\\begin{columns}\n\t\t\\begin{column}{0.5\\textwidth}\n\t\t\\only<1>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tStart from the initial Basic Feasible Solution (BFS).\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<2>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tCheck whether the solution is optimal! (Z row).\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<3>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tChoose the variable with the most negative objective-function coefficient to enter the basis.\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<4>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tFor each row of the Tableau, compute the ratio $\\frac{b_i}{\\text{column}}$.\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<5>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tCheck whether a finite positive ratio exists.\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<6>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tThe row with the smallest finite positive ratio leaves the basis.\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<7>\n\t\t{\n\t\t\t\\underline{\\textbf{Basis Change}}\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tEach row of the table must contain exactly one basic variable (BV), with unit coefficient.\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<8>\n\t\t{\n\t\t\t\\scriptsize Zero out the $X_2$ column, except for the pivot row, which must take the value one. To do so: \n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\t\t\\centering\n\t\t\t\t\t$\n\t\t\t\t\t\t\\begin{matrix}\n\t\t\t\t\t\t\t\\text{Row}'_1 = \\text{Row}_1 \\\\ \n\t\t\t\t\t\t\t\\text{Row}'_3 = \\text{Row}_3 - \\text{Row}_2\\\\ \n\t\t\t\t\t\t\t\\text{Row}'_4 = \\text{Row}_4 + \\frac{5}{2}\\text{Row}_2 \\\\ \n\t\t\t\t\t\t\t\\text{Row}'_2 = \\frac{1}{2}\\text{Row}_2 \\\\\t\t\t\t\t\t \n\t\t\t\t\t\t\\end{matrix}\n\t\t\t\t\t$\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<9>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tNew Tableau !!!!\n\t\t\t\\end{mdframed}\n\t\t\tEach row has exactly one BV, with unit coefficient.\n\t\t}\n\t\t\\only<10>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tCheck whether the solution is optimal! 
(Z row).\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<11>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tChoose the variable with the most negative objective-function coefficient to enter the basis.\n\t\t\t\\end{mdframed}\n\t\t}\t\n\t\t\\only<12>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tFor each row of the Tableau, compute the ratio $\\frac{b_i}{\\text{column}}$.\n\t\t\t\\end{mdframed}\n\t\t}\t\t\n\t\t\\only<13>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tCheck whether a finite positive ratio exists.\n\t\t\t\\end{mdframed}\n\t\t}\t\n\t\t\\only<14>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tThe row with the smallest finite positive ratio leaves the basis.\n\t\t\t\\end{mdframed}\n\t\t}\t\n\t\t\\only<15>\n\t\t{\n\t\t\t\\underline{\\textbf{Basis Change}}\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tEach row of the table must contain exactly one basic variable (BV), with unit coefficient.\n\t\t\t\\end{mdframed}\n\t\t}\t\n\t\t\\only<16>\n\t\t{\n\t\t\t\\scriptsize Zero out the $X_1$ column, except for the pivot row, which must take the value one. To do so: \n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\t\t\\centering\n\t\t\t\t\t$\n\t\t\t\t\t\t\\begin{matrix}\n\t\t\t\t\t\t\t\\text{Row}'_1 = \\text{Row}_1 - \\frac{1}{3}\\text{Row}_3\\\\ \n\t\t\t\t\t\t\t\\text{Row}'_2 = \\text{Row}_2\\\\ \n\t\t\t\t\t\t\t\\text{Row}'_4 = \\text{Row}_4 + \\text{Row}_3 \\\\ \n\t\t\t\t\t\t\t\\text{Row}'_3 = \\frac{1}{3}\\text{Row}_3 \\\\\t\t\t\t\t\t \n\t\t\t\t\t\t\\end{matrix}\n\t\t\t\t\t$\n\t\t\t\\end{mdframed}\n\t\t}\t\t\n\t\t\\only<17>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=orange!80]\n\t\t\t\tNew Tableau !!!!\n\t\t\t\\end{mdframed}\n\t\t\tEach row has exactly one BV, with unit coefficient.\n\t\t}\t\t\n\t\t\\only<18>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=olive!80]\n\t\t\t\tCheck whether the solution is optimal! 
(Z row).\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<19>\n\t\t{\n\t\t\t\\begin{mdframed}[backgroundcolor=red!80]\n\t\t\t\t\\centering\n\t\t\t\tOPTIMAL SOLUTION FOUND !!!!\n\t\t\t\\end{mdframed}\n\t\t}\n\t\t\\only<20>\n\t\t{\n\t\t\t\\begin{table}\n\t\t\t\t\\begin{tabular}{ c | c | c |}\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\cellcolor{blue!100} \\color{white}\\scriptsize Type     & \n\t\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Variable &  \n\t\t\t\t\t\\cellcolor{blue!100} \\color{white} \\scriptsize Value \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t\\cellcolor{green!100} \\multirow{3}{1cm}{BV}  & \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle X_1$   &  \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle 2$   \\\\\n\t\t\t\t\t\\cellcolor{green!100} BV & \\cellcolor{green!100} $\\scriptstyle X_2$   &  \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle 6$   \\\\\n\t\t\t\t\t\\cellcolor{green!100} & \\cellcolor{green!100} $\\scriptstyle S_1$   &  \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle 2$   \\\\\n\t\t\t\t\t\\multirow{2}{1cm}{ NBV} & \n\t\t\t\t\t$\\scriptstyle S_2$   &  \n\t\t\t\t\t$\\scriptstyle 0$   \\\\\n\t\t\t\t\t& $\\scriptstyle S_3$   &  \n\t\t\t\t\t$\\scriptstyle 0$   \\\\\n\t\t\t\t\t\\cellcolor{green!100}Obj. \t\t\t\t & \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle Z$     &  \n\t\t\t\t\t\\cellcolor{green!100} $\\scriptstyle 36$  \\\\\n\t\t\t\t\t\\hline\t\t\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{table}\n\t\t}\n\n\t\t\t\t\n\t\t\\end{column}\n\t\t\\begin{column}{0.5\\textwidth}\n\t\t\t\\only<1>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_1.png}\n\t\t\t}\n\t\t\t\\only<2>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_2.png}\n\t\t\t}\n\t\t\t\\only<3>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_3.png}\n\t\t\t}\n\t\t\t\\only<4>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_4.png}\n\t\t\t}\n\t\t\t\\only<5>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_5.png}\n\t\t\t}\n\t\t\t\\only<6>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_6.png}\n\t\t\t}\n\t\t\t\\only<7-8>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_7.png}\n\t\t\t}\n\t\t\t\\only<9>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_9.png}\n\t\t\t}\n\t\t\t\\only<10>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_2.png}\n\t\t\t}\n\t\t\t\\only<11>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_3.png}\n\t\t\t}\n\t\t\t\\only<12>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_4.png}\n\t\t\t}\n\t\t\t\\only<13>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_5.png}\n\t\t\t}\n\t\t\t\\only<14>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_6.png}\n\t\t\t}\n\t\t\t\\only<15-16>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_7.png}\n\t\t\t}\n\t\t\t\\only<17>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_9.png}\n\t\t\t}\n\t\t\t\\only<18>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_2.png}\n\t\t\t}\n\t\t\t\\only<19-20>\n\t\t\t{\n\t\t\t\t\\includegraphics[width=5.5cm,height=3cm]{Alg_8.png}\n\t\t\t}\n\t\t\\end{column}\n\t\\end{columns}\n\\end{frame}\n\n\n\\subsection{Minimization Problem}\n\\begin{frame}\n\t\\only<1>\n\t{\n\t\t\\frametitle{Simplex Tableau Flowchart - \\color{pink!100} MAXIMIZATION}\n\t}\n\t\\only<2>\n\t{\n\t\t\\frametitle{Simplex Tableau Flowchart - \\color{cyan!100} 
MINIMIZATION}\n\t}\n\n\n\t\\centering\n\t\\only<1>\n\t{\n\t\\begin{tikzpicture} [\n\t\t\t\t\t\t\tauto, node distance = 2cm,\n\t\t\t\t\t\t\tdecisao/.style = { diamond, draw, shape aspect=2, thick, fill=red!20,\n\t\t\t\t\t\t\t                   text width=6em, text badly centered,\n\t\t\t\t\t\t\t                   inner sep=1pt},\n\t\t\t\t\t\t\tbloco/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em },\n\t\t\t\t\t\t\tbloco2/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em, node distance = 4.2cm },\n\t\t\t\t\t\t\textremo/.style  = { ellipse, draw, fill=green!20,\n\t\t\t\t\t\t\t                   text width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=2em },\t\n\t\t\t\t\t\t\tline/.style   = { draw, -latex' },\n\t\t\t\t\t\t\tgoto/.style = {circle, draw, fill=yellow!60, text width=1em,\n\t\t\t\t\t\t\t\t\t\t   text centered, node distance = 1.8cm},\n\t\t\t\t\t\t    bullet/.style = {rectangle, draw, thick, fill=red!80, \n\t\t\t\t\t\t    \t\t\t\ttext width=9em, text centered,\n\t\t\t\t\t\t    \t\t\t\tminimum height=1em},\n\t\t\t\t\t\t]\n\n\t\n\t\t\\node [extremo] (inic) {\\tiny START}; \n\t\t\n\t\t\\node [bloco, below of = inic] (sbfi) {\\tiny Build Simplex Tableau with Initial BFS};\n\t\t\\path [line] (inic) -- (sbfi); \n\t\t\n\t\t\\node [decisao, below of = sbfi] (conv) {\\tiny Any cost (coefficient) $<$ 0 ?} ;\n\t\t\\path [line] (sbfi) -- (conv); \n\t\t\n\t\t\\node[bloco, below of = conv] (otimo) {\\tiny Optimal Solution};\n\t\t\\path [line] (conv) -- node [near start] {\\scriptsize \\color{red} no} (otimo); \n\t    \\node [extremo] at (2,-7.2) (fim) {\\tiny End};\t\n\t    \\path [line] (otimo) |- (2,-6.5) -- (fim); \n\n\t\t\\node[bloco2, right of = inic] (escolhe) {\\tiny Choose Variable to Enter Basis};\n\t\t\\path [line] (conv) -| (2.2, 0) node [near start] {\\scriptsize \\color{red} yes} -- (escolhe); \n\t\t\n\t\t\\node[bloco, below of = escolhe] (razao) {\\tiny Compute Ratio $ \\frac{b_i}{column}$};\n\t\t\\path [line] (escolhe) -- (razao); \n\n\t\t\\node [decisao, below of = razao] (finito) {\\tiny Any finite Ratio $\\ge 0$ ?} ;\n\t\t\\path [line] (razao) -- (finito); \n\t\t\n\t\t\\node[bloco, below of = finito] (ilimit) {\\tiny Unbounded Solution};\n\t\t\\path [line] (finito) -- node [near start] {\\scriptsize \\color{red} no} (ilimit); \n\t\t\\path [line] (ilimit) |- (2,-6.5) -- (fim); \n\n\t\t\\node[bloco2, right of = razao] (saibase) {\\tiny Choose Variable to Leave Basis};\n\t\t\\path [line] (finito) -| (6.2, -2) node [near start] {\\scriptsize \\color{red} yes} -- (saibase);\n\t\t\n\t\t\\node[bloco, below of = saibase] (swap) {\\tiny Swap Basis and Recalc. 
Tableau};\n\t\t\\path[line] (saibase) -- (swap); \n\t\t\n\t\t\\node[goto, below of = swap] (pula_a) {\\tiny 1};\n\t\t\\node[goto, left of = sbfi] (pula_b) {\\tiny 1};\n\t\t\\path[line] (pula_b) -- (conv);\n\t\t\\path[line] (swap) -- (pula_a); \n\t\t\n\t\t\\node[bullet] at (7.5,-0.2) (obs1) {\\tiny Most negative obj. function coef.};\n\t\t\\path[line] (obs1) -- (escolhe);\n\n\t\t\\node[bullet] at (7.7,-1) (obs2) {\\tiny Smallest Positive Ratio};\n\t\t\\path[line] (obs2) -- (saibase);\n\t\t\n\t\\end{tikzpicture}\t\n\t}\n\n\t\\only<2>\n\t{\n\t\\begin{tikzpicture} [\n\t\t\t\t\t\t\tauto, node distance = 2cm,\n\t\t\t\t\t\t\tdecisao/.style = { diamond, draw, shape aspect=2, thick, fill=red!20,\n\t\t\t\t\t\t\t                   text width=6em, text badly centered,\n\t\t\t\t\t\t\t                   inner sep=1pt},\n\t\t\t\t\t\t\tbloco/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em },\n\t\t\t\t\t\t\tbloco2/.style   = { rectangle, draw, thick, fill=blue!20, \n\t\t\t\t\t\t\t\t\t\t\t\ttext width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=1em, node distance = 4.2cm },\n\t\t\t\t\t\t\textremo/.style  = { ellipse, draw, fill=green!20,\n\t\t\t\t\t\t\t                   text width=5em, text centered,\n\t\t\t\t\t\t\t                   minimum height=2em },\t\n\t\t\t\t\t\t\tline/.style   = { draw, -latex' },\n\t\t\t\t\t\t\tgoto/.style = {circle, draw, fill=yellow!60, text width=1em,\n\t\t\t\t\t\t\t\t\t\t   text centered, node distance = 1.8cm},\n\t\t\t\t\t\t    bullet/.style = {rectangle, draw, thick, fill=red!80, \n\t\t\t\t\t\t    \t\t\t\ttext width=9em, text centered,\n\t\t\t\t\t\t    \t\t\t\tminimum height=1em},\n\t\t\t\t\t\t\tdecisao2/.style = { diamond, draw, shape aspect=2, thick, fill=cyan!80,\n\t\t\t\t\t\t\t                   text width=6em, text badly centered,\n\t\t\t\t\t\t\t                   inner sep=1pt},\n\t\t\t\t\t\t    bullet2/.style = {rectangle, draw, thick, fill=cyan!80, \n\t\t\t\t\t\t    \t\t\t\ttext width=9em, text centered,\n\t\t\t\t\t\t    \t\t\t\tminimum height=1em},\n\t\t\t\t\t\t]\n\n\t\n\t\t\\node [extremo] (inic) {\\tiny START}; \n\t\t\n\t\t\\node [bloco, below of = inic] (sbfi) {\\tiny Build Simplex Tableau with Initial BFS};\n\t\t\\path [line] (inic) -- (sbfi); \n\t\t\n\t\t\\node [decisao2, below of = sbfi] (conv) {\\tiny Any cost (coefficient) $>$ 0 ?} ;\n\t\t\\path [line] (sbfi) -- (conv); \n\t\t\n\t\t\\node[bloco, below of = conv] (otimo) {\\tiny Optimal Solution};\n\t\t\\path [line] (conv) -- node [near start] {\\scriptsize \\color{red} no} (otimo); \n\t    \\node [extremo] at (2,-7.2) (fim) {\\tiny End};\t\n\t    \\path [line] (otimo) |- (2,-6.5) -- (fim); \n\n\t\t\\node[bloco2, right of = inic] (escolhe) {\\tiny Choose Variable to Enter Basis};\n\t\t\\path [line] (conv) -| (2.2, 0) node [near start] {\\scriptsize \\color{red} yes} -- (escolhe); \n\t\t\n\t\t\\node[bloco, below of = escolhe] (razao) {\\tiny Compute Ratio $ \\frac{b_i}{column}$};\n\t\t\\path [line] (escolhe) -- (razao); \n\n\t\t\\node [decisao, below of = razao] (finito) {\\tiny Any finite Ratio $\\ge 0$ ?} ;\n\t\t\\path [line] (razao) -- (finito); \n\t\t\n\t\t\\node[bloco, below of = finito] (ilimit) {\\tiny Unbounded Solution};\n\t\t\\path [line] (finito) -- node [near start] {\\scriptsize \\color{red} no} (ilimit); \n\t\t\\path [line] (ilimit) |- (2,-6.5) -- (fim); \n\n\t\t\\node[bloco2, right of = razao] (saibase) 
{\\tiny Choose Variable to Leave Basis};\n\t\t\\path [line] (finito) -| (6.2, -2) node [near start] {\\scriptsize \\color{red} yes} -- (saibase);\n\t\t\n\t\t\\node[bloco, below of = saibase] (swap) {\\tiny Swap Basis and Recalc. Tableau};\n\t\t\\path[line] (saibase) -- (swap); \n\t\t\n\t\t\\node[goto, below of = swap] (pula_a) {\\tiny 1};\n\t\t\\node[goto, left of = sbfi] (pula_b) {\\tiny 1};\n\t\t\\path[line] (pula_b) -- (conv);\n\t\t\\path[line] (swap) -- (pula_a); \n\t\t\n\t\t\\node[bullet2] at (7.5,-0.2) (obs1) {\\tiny Most positive obj. function coef.};\n\t\t\\path[line] (obs1) -- (escolhe);\n\n\t\t\\node[bullet] at (7.7,-1) (obs2) {\\tiny Smallest Positive Ratio};\n\t\t\\path[line] (obs2) -- (saibase);\n\t\t\n\t\\end{tikzpicture}\t\n\t}\n\n\n\\end{frame}\n", "meta": {"hexsha": "b2fc5d74fcc18d3662733cf596f8a44f26cc8758", "size": 50453, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Aula 3/Otimiz_AULA3_B.tex", "max_stars_repo_name": "AndreMarcato/CursoOtimizacao", "max_stars_repo_head_hexsha": "81c0e37b5f4fa6cb491f877c732887aa2d65515a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Aula 3/Otimiz_AULA3_B.tex", "max_issues_repo_name": "AndreMarcato/CursoOtimizacao", "max_issues_repo_head_hexsha": "81c0e37b5f4fa6cb491f877c732887aa2d65515a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Aula 3/Otimiz_AULA3_B.tex", "max_forks_repo_name": "AndreMarcato/CursoOtimizacao", "max_forks_repo_head_hexsha": "81c0e37b5f4fa6cb491f877c732887aa2d65515a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.9780154486, "max_line_length": 132, "alphanum_fraction": 0.5782411353, "num_tokens": 20225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.7956581097540519, "lm_q1q2_score": 0.5641368237669026}}
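The two flowcharts and the worked tableau above compress the whole method into one loop: check the Z row for a negative cost, bring in the most negative column, run the ratio test, pivot, repeat. As a cross-check, here is a minimal Python sketch of that loop. It is not part of the original slides: the function name simplex_tableau, the matrix layout (last row = Z row, last column = b, no explicit Z column) and the iteration cap are illustrative choices, and the driver data is the LP read off the initial tableau above.

import numpy as np

def simplex_tableau(T, basis, max_iters=100):
    """Tableau simplex for a maximization LP in standard form."""
    T = np.asarray(T, dtype=float).copy()
    for _ in range(max_iters):
        z = T[-1, :-1]
        if (z >= 0).all():                      # no cost (coefficient) < 0: optimal
            return T, basis
        col = int(np.argmin(z))                 # entering: most negative Z-row coefficient
        pos = T[:-1, col] > 0
        if not pos.any():                       # no finite ratio >= 0: unbounded LP
            raise ValueError("unbounded LP")
        ratios = np.full(T.shape[0] - 1, np.inf)
        ratios[pos] = T[:-1, -1][pos] / T[:-1, col][pos]   # ratio b_i / column
        row = int(np.argmin(ratios))            # leaving: smallest positive ratio
        basis[row] = col
        T[row] /= T[row, col]                   # pivot row gets a unit coefficient
        for r in range(T.shape[0]):
            if r != row:
                T[r] -= T[r, col] * T[row]      # zero the pivot column elsewhere
    raise RuntimeError("iteration limit reached")

# LP read off the initial tableau above: max 3*x1 + 5*x2
# subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0.
T0 = [[1, 0, 1, 0, 0,  4],
      [0, 2, 0, 1, 0, 12],
      [3, 2, 0, 0, 1, 18],
      [-3, -5, 0, 0, 0, 0]]
T, basis = simplex_tableau(T0, basis=[2, 3, 4])   # columns 2..4 are the slacks
print(T[-1, -1], basis)   # 36.0 and [2, 1, 0], i.e. s1 = 2, x2 = 6, x1 = 2

The run reproduces the slides: X2 enters first (ratio 12/2 = 6 beats 18/2 = 9), then X1 (ratio 6/3 = 2 beats 4/1 = 4), and the loop stops at Z = 36 with x1 = 2 and x2 = 6.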
{"text": "\\section{3D DP}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{3-D DP}\n  \\begin{exampleblock}{Floyd-Warshall algorithm}\n\t\\begin{enumerate}[(1)]\n\t  \\item DP for Floyd-Warshall algorithm for APSP on \\textcolor{red}{directed} graphs\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\begin{description}\n\t\\item[Subproblem:] $\\text{dist}[i,j,k]$: the length of the shortest path from $i$ to $j$ via only nodes in $v_{1} \\cdots v_{k}$\n\t\\item[Goal:] $\\text{dist}[i,j,n], \\forall i,j$\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{3-D DP}\n  \\begin{exampleblock}{Floyd-Warshall algorithm}\n\t\\begin{enumerate}[(1)]\n\t  \\item DP for Floyd-Warshall algorithm for APSP on \\textcolor{red}{directed} graphs\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\begin{description}\n\t\\item[Make choice:] Is $v_{k}$ on the $\\text{ShortestPath}[i,j,k]$?\n\t\\item[Recurrence:] \n\t  \\[\n\t\t\\text{dist}[i,j,k] = \\min \\set{\\text{dist}[i,j,k-1], \\text{dist}[i,k,k-1] + \\text{dist}[k,j,k-1]}\n\t  \\]\n\n\t  \\fignocaption{width = 0.40\\textwidth}{figs/floyd-warshall.png}\n\t  \\pause\n\t\\item[Init:]\n\t  \\[\n\t\t\\text{dist}[i,j,0] = \\left\\{\\begin{array}{ll}\n\t\t  \\textcolor{red}{0} & i = j \\\\\n\t\t  w(i,j) & (i,j) \\in E \\\\\n\t\t  \\infty & \\text{o.w.}\n\t\t\\end{array} \\right.\n\t  \\]\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{3-D DP}\n  \\begin{exampleblock}{Floyd-Warshall algorithm (Problem 6.25)}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{1}\n\t  \\item Routing table for Floyd-Warshall algorithm\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\begin{algorithmic}\n\t\\ForAll{$k \\gets 1 \\dots n$}\n\t  \\ForAll{$i \\gets 1 \\dots n$}\n\t\t\\ForAll{$j \\gets 1 \\dots n$}\n\t\t  \\If{$\\text{dist}[i,j] > \\text{dist}[i,k] + \\text{dist}[k,j]$}\n\t\t\t\\State $\\text{dist}[i,j] \\gets \\text{dist}[i,k] + \\text{dist}[k,j]$\n\t\t\t\\uncover<3->{\\State \\textcolor{red}{$\\text{Go}[i,j] \\gets \\text{Go}[i,k]$}}\n\t\t  \\EndIf\n\t\t\\EndFor\n\t  \\EndFor\n\t\\EndFor\n  \\end{algorithmic}\n\n  \\pause\n  \\vspace{0.50cm}\n  \\centerline{Time: $\\Theta(n^3)$ \\hspace{0.10cm} Space: $\\Theta(n^2)$}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{3-D DP}\n  \\begin{exampleblock}{Floyd-Warshall algorithm (Problem 6.25)}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{1}\n\t  \\item Routing table for Floyd-Warshall algorithm\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\begin{algorithmic}\n\t\t\\ForAll{$i \\gets 1 \\dots n$}\n\t\t  \\ForAll{$j \\gets 1 \\dots n$}\n\t\t\t\\State $\\text{dist}[i,j] \\gets \\infty$\n\t\t\t\\State \\textcolor{red}{$\\text{Go}[i,j] \\gets \\text{Nil}$}\n\t\t  \\EndFor\n\t\t\\EndFor\n\n\t\t\\ForAll{$(i,j) \\in E$}\n\t\t  \\State $\\text{dist}[i,j] \\gets w(i,j)$\n\t\t  \\State \\textcolor{red}{$\\text{Go}[i,j] \\gets j$}\n\t\t\\EndFor\n\n\t\t\\ForAll{$i \\gets 1 \\dots n$}\n\t\t  \\State $\\text{dist}[i,i] \\gets 0$\n\t\t  \\State \\textcolor{red}{$\\text{Go}[i,i] \\gets \\text{Nil}$}\n\t\t\\EndFor\n\t  \\end{algorithmic}\n\t  \\pause\n\t\\column{0.50\\textwidth}\n\t  \\begin{algorithmic}\n\t\t\\Procedure{Path}{$i,j$}\n\t\t  \\If{$\\text{Go}[i,j] = \\text{Nil}$}\n\t\t  \\State Output ``No Path.''\n\t\t  \\EndIf\n\n\t\t  \\Statex\n\t\t  \\State Output ``$i$''\n\t\t  \\While{$i \\neq j$}\n\t\t\t\\textcolor{red}{\\State $i \\gets \\text{Go}[i,j]$}\n\t\t\t\\State Output ``$i$''\n\t\t  \\EndWhile\n\t\t\\EndProcedure\n\t  
\\end{algorithmic}\n  \\end{columns}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{3-D DP}\n  \\begin{exampleblock}{Floyd-Warshall algorithm (Problem 6.29)}\n\t\\begin{enumerate}[(1)]\n\t  \\setcounter{enumi}{2}\n\t  \\item Find minimum-weighted cycle of \\textcolor{red}{directed} graph ($w(e) > 0$)\n\t\\end{enumerate}\n  \\end{exampleblock}\n\n  \\[\n\t\\text{dist}[i,i] \\gets 0 \\implies \\text{dist}[i,i] \\gets \\infty \n  \\]\n\n  \\[\n\t\\forall i: \\text{dist}[i,i] = \\infty\n  \\]\n\n  \\pause\n  \\[\n\t\\textcolor{red}{\\text{Q:}}\\;\\; \\exists e: w(e) < 0\n  \\]\n\n  \\pause\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\[\n\t\t\\exists i: \\text{dist}[i,i] < 0\n\t  \\]\n\t\\column{0.50\\textwidth}\n\t  \\[\n\t\t\\forall i: \\text{dist}[i,i] \\ge 0 \\; (= \\infty)\n\t  \\]\n  \\end{columns}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Shortest paths on undirected graphs}\n  \\fignocaption{width = 0.70\\textwidth}{figs/undirected-negative.png}\n\n  \\centerline{\\url{https://cs.stackexchange.com/q/76578/4911}}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "3a9017705534d8d211b6d8b9c6d7afe541de711e", "size": 4107, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/3d-dp.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/3d-dp.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2017/alg-tutorial-dp-20170619/sections/3d-dp.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 26.1592356688, "max_line_length": 128, "alphanum_fraction": 0.5950815681, "num_tokens": 1591, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5641368188747612}}
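Since this record spells out the Floyd-Warshall pseudocode together with the routing table Go[i,j] of Problem 6.25, a direct Python transcription may be useful. It is not from the slides themselves: nodes are 0-indexed here instead of 1..n, the edge-list encoding and the toy graph are illustrative, and path returns the node list instead of printing it.

import math

def floyd_warshall(n, edges):
    dist = [[math.inf] * n for _ in range(n)]
    go = [[None] * n for _ in range(n)]   # Go[i][j]: first hop on a shortest i -> j path
    for (i, j, w) in edges:
        dist[i][j] = w
        go[i][j] = j
    for i in range(n):
        dist[i][i] = 0                    # set this to math.inf instead to detect
        go[i][i] = None                   # minimum-weight cycles (Problem 6.29)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    go[i][j] = go[i][k]   # the first step now heads toward k
    return dist, go

def path(go, i, j):
    if go[i][j] is None:
        return None                       # no path
    p = [i]
    while i != j:
        i = go[i][j]
        p.append(i)
    return p

dist, go = floyd_warshall(4, [(0, 1, 5), (1, 2, 3), (2, 3, 1), (0, 3, 10)])
print(dist[0][3], path(go, 0, 3))         # 9 [0, 1, 2, 3]

The Go update mirrors the highlighted line Go[i,j] <- Go[i,k] in the slide: the first hop toward j becomes whatever the first hop toward k already was, which keeps path reconstruction linear in the length of the path.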
{"text": "% Copyright 2018 Markus J. Pflaum, licensed under CC BY-NC-ND 4.0\n% main author: \n%   Markus J. Pflaum\n%\n\\section{Orthonormal bases in Hilbert spaces}\n\\label{sec:orthonormal-baes-hilbert-spaces}\n\n\\begin{definition}\n  A (possibly empty) subset $S$ of a Hilbert space $\\hilbertH$ is called\n  an \\emph{orthogonal system} or just \\emph{orthogonal} if for any  two different elements $v,w \\in S$ the relation \n  $\\inprod{v,w} = 0$ holds true. If in addition $\\norm{v}=1$ for all elements $v \\in S$,\n  then the set is called \\emph{orthonormal} or an \\emph{orthonormal system}.  \n  A family $(v_i)_{i\\in I}$ of vectors in $\\hilbertH$ is called \\emph{orthogonal} if \n  $\\inprod{v_i,v_j} = 0$ for all  $i,j\\in I$ with $i\\neq j$ and  \\emph{orthonormal} \n  if in addition $\\|v_i\\| =1$ for all $i\\in I$. \n\\end{definition}\n\n\\para\n  Obviously, the set of orthonormal subsets of a Hilbert space is ordered by set-theoretic inclusion. \n  Therefore, the following definition makes sense. \n\n\\begin{definition}\n  A maximal orthonormal set in a Hilbert space $\\hilbertH$ is called an \\emph{orthonormal basis}\n  or a \\emph{Hilbert basis} of $\\hilbertH$.      \n\\end{definition}\n\n\\begin{proposition}\n  Every  Hilbert space $\\hilbertH$ has an orthonormal basis. \n\\end{proposition}\n\n\\begin{proof}\n  Wothout loss of generality we can assume that $\\hilbertH \\neq \\{ 0\\}$, because \n  $\\emptyset$ is a Hilbert basis for $\\{ 0 \\}$. \n  Let $\\mathscr{O}$ denote the set of orthonormal subsets of $\\hilbertH$. As mentioned before, $\\mathscr{O}$ \n  is ordered by set-theoretic inclusion. Let $\\mathscr{C} \\subset \\mathscr{O}$ be a non-empty chain. \n  Put $U  = \\bigcup_{S\\in \\mathscr{C}} S$. Then  $U$ is an upper bound of $\\mathscr{C}$.\n  So by Zorn's lemma $\\mathscr{O}$ has a maximal element.   \n\\end{proof}\n\n\\begin{remark}\n  \\begin{environmentlist} \n  \\item\n    By slight abuse of language we sometimes call an orthonormal family $(b_i)_{i\\in I}$ in a Hilbert space $\\hilbertH$\n    an \\emph{orthonormal basis} or a \\emph{Hilbert basis} of $\\hilbertH$  \n    if the set $\\{ b_i \\mid i\\in I \\}$ is an orthornormal basis. \n  \\item \n    If on an orthonormal basis $B \\subset \\hilbertH$ a total order relation is given, \n    one calls $B$ an \\emph{ordered Hilbert basis} of $\\hilbertH$. Likewise,  \n    an orthonormal basis of the form $(b_i)_{i\\in I}$ is called \\emph{orderd} if the index set $I$ carries a total order.\n  \\end{environmentlist}\n\\end{remark}\n\n\\begin{proposition}[Pythagorean theorem for orthogonal families]\\label{thm:pythagorean-theorem-infinite-families}\\hspace{1mm}\n  An orthogonal family $(v_i)_{i\\in I}$ in a Hilbert space $\\hilbertH$ is summable if and only if \n  the family of norms $\\left(\\|v_i\\| \\right)_{i\\in I}$ is square summable. In this case one has \n  \\[\n     \\left\\|\\sum_{i\\in I} v_i\\right\\|^2 =  \\sum_{i\\in I} \\|v_i\\|^2 \\ .\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Assume that $\\left(\\|v_i\\| \\right)_{i\\in I}$ is square summable or in other words that the net of partial sums\n  $\\left( \\sum_{i\\in J} \\|v_i\\|^2 \\right)_{J\\in\\mathscr{F} (I)}$  converges to some $s \\in \\R$. \n  For $\\varepsilon >0$ choose a finite $J_\\varepsilon \\subset I$ such that for all finite $J$ with \n  $J_\\varepsilon \\subset J\\subset I$ the relation\n  \\[\n           \\left| s - \\sum_{i\\in J} \\|v_i\\|^2   \\right| < \\frac{\\varepsilon^2}{2}\n  \\]\n  holds true. 
For finite $K\\subset I$ with $K\\cap J_\\varepsilon  =\\emptyset$ one then obtains by the pythagorean theorem for \n  finite orthogonal families, Eq.~(\\ref{eq:pythagorean-theorem}),\n  \\[\n      \\left\\| \\sum_{i\\in K} v_i\\right\\|^2 =  \\sum_{i\\in K}  \\left\\| v_i\\right\\|^2 \\leq \n       \\left| s - \\sum_{i\\in K \\cup J_\\varepsilon} \\|v_i\\|^2   \\right|  + \\left| s - \\sum_{i\\in  J_\\varepsilon} \\|v_i\\|^2   \\right|\n       < \\varepsilon^2 \\ .\n  \\]\n  Hence  $\\left( \\sum_{i\\in J} v_i \\right)_{J\\in\\mathscr{F} (I)}$ is a Cauchy net in $\\hilbertH$, so convergent. \n \n  Now let  $(v_i)_{i\\in I}$ be  summable to $v\\in \\hilbertH$. \n  Then there exists a $J_1 \\in \\mathscr{F} (I)$ such that for all finite $J \\subset I$ \n  containing  $J_1$\n  \\[\n    \\left\\| v -  \\sum_{i\\in J} v_i \\right\\| \\leq 1 \\ .\n  \\] \n  This implies by the pythagorean theorem for finite orthogonal families \n   \\[\n    \\sum_{i\\in J} \\left\\| v_i \\right\\|^2 =  \\left\\|  \\sum_{i\\in J} v_i \\right\\|^2  \\leq \n    \\left(  \\left\\| v -  \\sum_{i\\in J} v_i \\right\\| +  \\left\\| v \\right\\| \\right)^2 \\leq  ( 1 + \\| v\\| )^2 \\ .\n  \\] \n  Hence the net of partial sums $\\left( \\sum_{i\\in J} \\|v_i\\|^2 \\right)_{J\\in\\mathscr{F} (I)}$ is bounded, so convergent\n  since each term $\\|v_i\\|^2$ is $\\geq 0$.  \n\n  By continuity of the inner product and pairwise orthogonality of the $v_i$ we finally obtain in the convergent case\n  \\[\n   \\left\\| \\sum_{i\\in I} v_i \\right\\|^2 = \\inprod{ \\sum_{i\\in I} v_i, \\sum_{j\\in I} v_j} =\n   \\sum_{i\\in I} \\inprod{ v_i, \\sum_{j\\in I} v_j} =  \\sum_{i\\in I} \\sum_{j\\in I} \\inprod{ v_i, v_j} =  \\sum_{i\\in I} \\left\\| v_i \\right\\|^2 \\ .\n  \\]\n\\end{proof}\n\n\\begin{proposition}\n  Let $(v_i)_{i\\in I}$ be an orthonormal family in a Hilbert space $\\hilbertH$. Then for every $v \\in \\hilbertH$ \n  the family $\\left( \\inprod{v,v_i} \\right)_{i\\in I}$  is square summable and \n  \\emph{Bessel's inequality} holds true that is \n  \\[\n    \\sum_{i\\in I} \\left| \\inprod{v,v_i} \\right|^2 \\leq \\| v \\|^2 \\ .\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  \n\\end{proof}\n\n\\begin{theorem}\nLet $B$ be an orthonormal system in a Hilbert space $\\hilbertH$. Then the following are equivalent:\n\\begin{numberlist}\n  \\item\\label{ite:orthonormal-system-maximal} The orthonormal system $B$ is maximal, i.e.\\ a Hilbert basis.\n  \\item\\label{ite:orthonormal-system-total} The orthonormal system $B$ is  \\emph{total} that is for all $v \\in H$    \n      such that $\\inprod{v, b} = 0$ for all $b \\in B$ the equality $v = 0$ holds true.\n  \\item\\label{ite:orthonormal-system-isomorphism}\n     For every $b\\in B$ let $\\hilbertH_b = \\{ r b \\in \\hilbertH \\mid r \\in \\fldK\\}$. Then the canonical map \n     \\[\n       \\iota: \\widehat{\\bigoplus_{b \\in B}} \\hilbertH_b \\to \\hilbertH , \\: \n       (v_b)_{b \\in B} \\mapsto \\sum_{b \\in B} v_b\n     \\] \n     is an isometric isomorphism.  \n  \\item\\label{ite:orthonormal-system-closed-span}\n     The closed linear span $\\clSpan{B}$ coincides with $\\hilbertH$.\n  \\item\\label{ite:orthonormal-system-fourier-expansion} For all $v \\in \\hilbertH$, one has the\n    \\emph{Fourier expansion}\n    \\[ v = \\sum_{b \\in B} \\inprod{v, b} b  \\ . \\]\n  \\item\\label{ite:orthonormal-system-inner-product-expansion} For all $v, w \\in \\hilbertH$, one has \n    \\[ \\inprod{v, w} = \\sum_{b \\in B}\\inprod{v, b} \\inprod{b, w} \\ . 
\\]\n  \\item\\label{ite:orthonormal-system-parsevals-identity} For all $v \\in \\hilbertH$, \\emph{Parseval's identity} holds true that is\n    \\[ \\norm{v}^2 = \\sum_{b \\in B} \\abs{\\inprod{v, b}}^2  \\ . \\] \n\\end{numberlist}\n\\end{theorem}\n\n\\begin{proof}\n \\ref{ite:orthonormal-system-maximal} $\\Rightarrow$ \\ref{ite:orthonormal-system-total}:\n   If $v \\neq 0$, then $\\frac{v}{\\norm{v}}$ is a unit vector orthogonal to each $v_i$. Hence $\\{v\\} \\cup B$ \n   is an orthonormal system which is strictly larger than $B$, contradicting \\ref{ite:orthonormal-system-maximal}.\n\n \\ref{ite:orthonormal-system-total} $\\Rightarrow$ \\ref{ite:orthonormal-system-isomorphism}. \n   First note that by the pythagorean theorem for infinite families, \\Cref{thm:pythagorean-theorem-infinite-families},\n   the canonical map $\\iota: \\widehat{\\bigoplus}_{b \\in B} H_b \\rightarrow H$ is well-defined and an isometry. \n   Hence $\\iota$ is injective. It remains to show that $\\iota$ is surjective. \n   To this end observe that $\\im \\iota$ is closed in $\\hilbertH$ since $\\iota$ is an isometry (the image is complete). \n   If $\\iota$ is not surjective, then $\\im \\iota^\\perp$ is not the zero vector space. \n   Choose $v \\in \\im \\iota^\\perp \\setminus \\{ 0\\}$. \n   Then $v$ is orthogonal to each element of $B$, but $v \\neq 0$. This contradicts \n   \\ref{ite:orthonormal-system-total}, so $\\im \\iota = \\hilbertH$.\n\n  \\ref{ite:orthonormal-system-isomorphism} $\\Rightarrow$ \\ref{ite:orthonormal-system-fourier-expansion}: \n    We can represent any $v \\in \\hilbertH$ in the form $v = \\iota \\left( (v_b)_{b \\in B} \\right) = \n    \\sum_{b\\in B} v_b$ with $\\left( v_b \\right)_{b\\in B} \\in \\widehat{\\bigoplus}_{b \\in B}  H_b$. \n    Write $v_b = r_b \\, b$ for every $b\\in B$, where $r_b \\in \\fldK$ is uniquely determined by $v_b$. \n    Then compute using continuity of the inner product\n    \\[\n      \\inprod{v, b} = \\inprod{\\sum_{c \\in B} v_c, b} = \\sum_{c \\in B} r_c \\inprod{c, b} = r_b \\ .\n    \\]\n    Therefore,\n    \\[\n       v = \\sum_{b \\in B} r_b \\, b = \\sum_{b \\in B} \\inprod{v, b} b \\ .\n    \\]\n\n  \\ref{ite:orthonormal-system-fourier-expansion} $\\Rightarrow$ \\ref{ite:orthonormal-system-inner-product-expansion}:\n  Fourier expansion of $v, w \\in H$ gives $v = \\sum\\limits_{b \\in B} \\inprod{v, b} b$ and \n  $w = \\sum\\limits_{b \\in B} \\inprod{w, b} b$.   Then, by continuity of the inner product,\n  \\[\n    \\inprod{v, w} = \\sum\\limits_{b \\in B} \\inprod{v, b}\\inprod{b, w} \\ .\n  \\]\n\n  \\ref{ite:orthonormal-system-fourier-expansion} $\\Rightarrow$ \\ref{ite:orthonormal-system-closed-span}:\n  Let $v\\in \\hilbertH$. Then $\\sum\\limits_{b\\in J}\\inprod{v, b} b \\in \\Span (B)$ for all finite  $J \\subset B$.\n  But by Fourier expansion $v$ is the limit of the net \n  $\\left( \\sum\\limits_{b\\in J}\\inprod{v, b} b \\right)_{J\\in \\mathscr{F} (B)} $, so $v$ lies in the closure \n  $\\clSpan (B)$.\n\n  \\ref{ite:orthonormal-system-closed-span}  $\\Rightarrow$ \\ref{ite:orthonormal-system-total}:\n  Assume that $\\inprod{v, b} = 0$ for all $b \\in B$. By \\ref{ite:orthonormal-system-closed-span}, $v$ can be written as a limit \n  $v = \\lim\\limits_{n\\to\\infty} v_n$, where $v_n \\in \\Span (B)$ for all $n\\in \\N$. \n  Then  $\\inprod{v, v_n} = 0$ for all $n\\in \\N$ by assumption. 
\n  By continuity of the inner product this implies\n  \\[\n     \\norm{v}^2 = \\lim\\limits_{n\\to\\infty} \\inprod{v, v_n} = 0 \\ ,\n  \\]\n  so $v=0$. \n\n  \\ref{ite:orthonormal-system-inner-product-expansion} $\\Rightarrow$ \\ref{ite:orthonormal-system-parsevals-identity}: \n  Put $v = w$. Then, by assumption,\n  \\[\n   \\norm{v}^2 = \\inprod{v, v} = \\sum\\limits_{b\\in B} \\inprod{v, b} \\inprod{b, v} = \\sum\\limits_{b \\in B} \\abs{\\inprod{v, b}}^2 \\ .\n  \\]\n\n  \\ref{ite:orthonormal-system-parsevals-identity} $\\Rightarrow$ \\ref{ite:orthonormal-system-maximal}:\n  Assume \\ref{ite:orthonormal-system-parsevals-identity} and that \\ref{ite:orthonormal-system-maximal} is not true. \n  Then there exists $v \\in H$ with $\\norm{v} = 1$ and $\\inprod{v, b} = 0$ for all $b \\in B$. But then\n  \\[\n    \\norm{v}^2 = \\sum_{b \\in B} \\abs{\\inprod{v, b}}^2 = 0,\n  \\]\n  which is a contradiction.\n\n \n\\end{proof}", "meta": {"hexsha": "c5a365895095f16694988d47a778282d6ab4c855", "size": 10654, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Example/sections/orthonormal-bases-hilbert-spaces.tex", "max_stars_repo_name": "martinpflaum/latex_to_html", "max_stars_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-11-13T15:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T14:08:26.000Z", "max_issues_repo_path": "Example/sections/orthonormal-bases-hilbert-spaces.tex", "max_issues_repo_name": "martinpflaum/latex_to_html", "max_issues_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-11T13:18:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T22:02:11.000Z", "max_forks_repo_path": "Example/sections/orthonormal-bases-hilbert-spaces.tex", "max_forks_repo_name": "martinpflaum/latex_to_html", "max_forks_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-13T15:22:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-13T15:22:47.000Z", "avg_line_length": 52.4827586207, "max_line_length": 143, "alphanum_fraction": 0.6492397222, "num_tokens": 3953, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7956581024858786, "lm_q1q2_score": 0.5641368186136285}}
{"text": "\\chapter{The reduce Library}\n\nThis manual describes the use of the \\ml{reduce} library, as well as discussing\nits design. The library is intended to ease the burden of performing tedious\nand intellectually trivial tasks of arithmetic such as proving:\n\n\\begin{hol}\\begin{verbatim}\n  |- 2 EXP 6 = 64\n\\end{verbatim}\\end{hol}\n\n\\noindent Anyone trying to prove the above by hand will testify to its extreme\ntediousness. However, using the \\ml{reduce} library, the evaluation of:\n\n\\begin{hol}\\begin{verbatim}\n  REDUCE_CONV \"2 EXP 6\"\n\\end{verbatim}\\end{hol}\n\n\\noindent will perform the above automatically, and probably in far fewer\nprimitive inferences than a human would take. The library will also take care\nof certain boolean expressions. This is mainly for the sake of completeness,\nsince the same effect can be achieved by careful use of rewriting.\n\n\n\\section{The representation of numbers}\n\nThe approach to representing natural number constants in \\HOL\\ is to provide a\nconversion \\ml{num\\_CONV} which generates the definition of any nonzero numeral\nin terms of its predecessor, for example:\n\n\\begin{hol}\\begin{verbatim}\n   #num_CONV \"1\";;\n   |- 1 = SUC 0\n\n   #num_CONV \"256\";;\n   |- 256 = SUC 255\n\\end{verbatim}\\end{hol}\n\n\\noindent This conversion uses \\ml{mk\\_thm}, so could be regarded as\nunsatisfactory; however it is arguably no worse than expanding a constant\ndefined through the normal constant definition mechanism.\n\nThe \\ml{reduce} library uses only the above conversion, together with certain\ndefinitions and preproved theorems concerning the various arithmetic and\nboolean operators, to derive, strictly by inference, reduction theorems for\ncertain expressions.\n\n\n\\section{Sample algorithm}\n\nThe following is an example of an algorithm for reducing addition expressions.\nIn fact, for reasons of efficiency, the library's \\ml{ADD\\_CONV} conversion is\nimplemented in a slightly more sophisticated way, but this example gives the\ngeneral flavour of how the various arithmetic conversions are defined.  Suppose\nwe want to apply the conversion to \\ml{\"m + n\"} where \\ml{m} and \\ml{n} are\nboth numerals. Then\n\n\\begin{itemize}\n  \\item If the first numeral is zero, we need only specialize the first\n  conjunct of the inbuilt definition \\ml{ADD}:\n\n  {\\small\\begin{verbatim}\n    SPEC \"n\" (CONJUNCT1 ADD)\n  \\end{verbatim}}\n\n  \\item If the first numeral is not zero, then we call the conversion\n  recursively to give:\n\n  {\\small\\begin{verbatim}\n    |- p + n = s\n  \\end{verbatim}}\n\n  where \\ml{p} is the predecessor of \\ml{m}, and \\ml{s} the corresponding sum.\n  Now we can apply the following inferences:\n\n  {\\small\\begin{verbatim}\n    |- p + n = s                % [1] From recursive call             %\n    |- (SUC p) + n = SUC(p + n) % [2] Instance of 2nd conjunct of ADD %\n    |- (SUC p) + n = SUC s      % [3] Substituting [1] in [2]         %\n    |- SUC p = m                % [4] SYM (num_CONV m)                %\n    |- SUC s = t                % [5] SYM (num_CONV (s+1))            %\n    |- m + n = t                % [6] Substituting [4] and [5] in [6] %\n  \\end{verbatim}}\n\n  This gives the desired result. 
The above can easily be converted into a\n  simple recursive procedure.\n\n\\end{itemize}\n\n\\section{Using the library}\n\nThe straightforward way to use the library is simply to do:\n\n\\begin{hol}\\begin{verbatim}\n   load_library `reduce`;;\n\\end{verbatim}\\end{hol}\n\n\\noindent This will work only if the \\HOL\\/ being used has been installed\ncorrectly; if this is not the case the user should refer to the \\TUTORIAL\\/ or\nthe \\DESCRIPTION.\n\nThe library provides the following three ML bindings, which should be all that\nis needed for most purposes:\n\n\\begin{itemize}\n\n  \\item \\ml{REDUCE\\_CONV} is a conversion which, when given an expression, will\n  return a theorem expressing its equivalence with a reduced version, which in\n  many cases will be simply a single numeral or boolean literal. For example:\n\n  {\\small\\begin{verbatim}\n    #REDUCE_CONV \"(50 < 51) => (2 * 25) | (60 - 17)\";;\n    |- (50 < 51 => 2 * 25 | 60 - 17) = 50\n\n    #REDUCE_CONV \"(3 * x) + (1 + 2)\";;\n    |- (3 * x) + (1 + 2) = (3 * x) + 3\n\n    #REDUCE_CONV \"(1 < 2) /\\ (2 <= 2)\";;\n    |- 1 < 2 /\\ 2 <= 2 = T\n  \\end{verbatim}}\n\n  \\item \\ml{REDUCE\\_RULE} is a rule which applies the reductions corresponding\n  to \\ml{REDUCE\\_CONV} to subterms of the theorem's conclusion\n\n  \\item \\ml{REDUCE\\_TAC} performs the same reductions on a goal.\n\n\\end{itemize}\n\n\\noindent For more sophisticated use, there are conversions specific to each\noperator, for example \\ml{ADD\\_CONV} and \\ml{OR\\_CONV}. For more details of\nthese, refer to the next chapter. The arithmetic conversions and boolean\nconversions may be loaded separately by loading (with \\ml{loadt} for instance)\n\\ml{arithconv.ml} and \\ml{boolconv.ml} respectively, in the main library\ndirectory.\n", "meta": {"hexsha": "0519632b62e020acd4d25a394409851b05ab8907", "size": 4812, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/num/reduce/Manual/description.tex", "max_stars_repo_name": "dwRchyngqxs/HOL", "max_stars_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 492, "max_stars_repo_stars_event_min_datetime": "2015-01-07T16:36:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T22:18:48.000Z", "max_issues_repo_path": "src/num/reduce/Manual/description.tex", "max_issues_repo_name": "dwRchyngqxs/HOL", "max_issues_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 759, "max_issues_repo_issues_event_min_datetime": "2015-01-01T00:40:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T17:33:39.000Z", "max_forks_repo_path": "src/num/reduce/Manual/description.tex", "max_forks_repo_name": "dwRchyngqxs/HOL", "max_forks_repo_head_hexsha": "3b1931c130fcab243da332adb2c1413c42c59cf9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 126, "max_forks_repo_forks_event_min_datetime": "2015-02-17T03:20:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T00:42:55.000Z", "avg_line_length": 35.9104477612, "max_line_length": 79, "alphanum_fraction": 0.7022028263, "num_tokens": 1314, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580952177051, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5641368183524957}}
{"text": "%!TEX root = /home/renaud/Documents/EPL/tfe/latex/tfe.tex\n\\chapter{The transport model} \\label{chap:transportmodel}\n\\section{The reactive transport equation}\nThe evolution of the concentration of a particular constituent (or tracer) in an aquatic environment is described by the \\textit{reactive transport equation}:\n\\begin{equation}\\label{eq:ADR}\n\t\\frac{\\partial C}{\\partial t} = q-\\nabla \\cdot \\left(\\b uC - \\b K \\nabla C\\right).\n\\end{equation}\nIn this equation, $C$ is the concentration of the constituent in water, expressed in $\\rm{kg/m^d}$, where $d$ is the dimension of the problem; $t$ is the time, expressed in seconds; $q$ is the local net production (i.e. production - destruction) rate of the constituent, expressed in $\\rm{kg/(m^ds)}$; $\\b u$ is the velocity vector whose units are $\\rm{m/s}$; and $\\b K$ is the \\textit{diffusivity tensor}, whose entries are expressed in $\\rm{m^2/s}$. Without loss of generality, we can assume $\\b K$ to be symmetric. This is essentially because the impact of the anti-symmetric part of $\\b K$, if any, may be viewed as additional advection. More details may be found in appendix A of \\cite{deleersnijder2001concept}. Of course, the symmetric tensor $\\b K$ must then be positive-definite in order to represent truly diffusive processes, namely phenomena which tend, at any time and location, to homogenize the concentration of any constituent.\n\nIn equation~\\eqref{eq:ADR}, $\\nabla \\cdot (\\b u C)$ is the \\textit{advective} term, $\\nabla \\cdot (\\b K \\nabla C)$ is the \\textit{diffusive} term, and $q$ is often called the \\textit{source/sink} term. When $q=0$, equation~\\eqref{eq:ADR} becomes the \\textit{advection-diffusion} equation:\n\\begin{equation}\\label{eq:C_PDE_vec}\n \t\\frac{\\partial C}{\\partial t} = -\\nabla \\cdot \\left(\\b uC - \\b K \\nabla C\\right).\n \\end{equation}\n When $q=0$, the tracer is said to be \\textit{passive}.\n\nThe reactive transport equation is omnipresent in this work for several reasons. First, it is a fundamental equation in our purpose to develop consistent compartment models, as we will see in chapter~\\ref{chap:compartment}. Furthermore, we have seen in chapter~\\ref{chap:clustering} that the stability method is based on the knowledge of the \\textit{transition probability matrix}. Computing that matrix in the case of geophysical flows cannot be done unless we understand the dynamics of the flow: we will see in chapter~\\ref{chap:numerical} that efficient numerical methods allowing to track the fate of individual tracer's particles can be devised from equation~\\eqref{eq:ADR}. The transition probability matrix can then easily be approximated from the particles trajectories. Finally, the particles trajectories can also be used to evaluate the concentration, and hence to assess the validity of the compartment models that will be developed further in this work.\n\n\\section{The continuity equation and the Boussinesq approximation}\nIn general, the (possibly time-dependent) velocity field in equation~\\eqref{eq:ADR} is unknown and has to be solved from the Navier-Stokes equations. In this work, we will restrict ourselves to problems where the velocity field is known and stationary. The \\textit{continuity equation} (local mass conservation)\n\\begin{equation} \\label{eq:continuity}\n\t\\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\b u) = 0\n\\end{equation}\nmust be satisfied everywhere on the domain. 
In equation~\\eqref{eq:continuity}, $\\rho$ is the density of the (sea)water mixture. In order to simplify the continuity equation, we make the very common \\textit{Boussinesq approximation}. In the aquatic environment, water is, by far, the dominant constituent. The density of seawater is thus close to that of pure water, $\\rho_w$. The latter depends on the temperature and pressure, but the variations are often very small: this consideration is at the basis of the Boussinesq approximation. Let $\\bar{\\rho}$ and $\\Delta\\rho$ be appropriate reference values of the density and the order of magnitude of its variation. The key assumption in the \\textit{Boussinesq approximation} is that\n\\begin{equation} \\label{eq:bouss_hyp}\n\t\\frac{\\Delta \\rho}{\\bar{\\rho}} \\ll 1.\n\\end{equation}\nTo assess the impact of this assumption on the continuity equation, we consider its dimensionless form. Let $U$, $T$ and $X$ be relevant velocity-, time- and space-scales. We use those parameters to scale the flow variables, leading to the following dimensionless variables (denoted by primes):\n\\begin{equation}\n\t\\rho' = \\frac{\\rho - \\bar{\\rho}}{\\Delta \\rho}, \\quad \\b u' = \\frac{\\b u}{U}, \\quad t' = \\frac{t}{T}, \\quad \\mbox{and} \\quad \\b x' = \\frac{\\b x}{X},\n\\end{equation}\nwhere $\\b x = (y,z)$. The dimensionless version of the continuity equation~\\eqref{eq:continuity} reads then: \n\\begin{equation}\n\t\\frac{\\Delta \\rho}{T}\\frac{\\partial \\rho'}{\\partial t'} + \\frac{U\\Delta \\rho}{X}\\b u' \\cdot \\nabla' \\rho' + \\frac{U(\\bar{\\rho}+ \\rho' \\Delta \\rho)}{X}\\nabla' \\cdot \\b u' = 0.\n\\end{equation}\nMultiplying both sides by $X/(U\\bar{\\rho})$ yields :\n\\begin{equation}\n\t\\frac{X}{UT}\\frac{\\Delta \\rho}{\\bar{\\rho}}\\frac{\\partial \\rho'}{\\partial t'} + \\frac{\\Delta \\rho}{\\bar{\\rho}}\\b u' \\cdot \\nabla' \\rho' + \\left(1+\\frac{\\Delta \\rho}{\\bar{\\rho}}\\rho'\\right)\\nabla' \\cdot \\b u' = 0.\n\\end{equation}\nBy taking~\\eqref{eq:bouss_hyp} into account, this equation simplifies to $\\nabla' \\cdot \\b u' = 0$, or equivalently in dimensional variables \n\\begin{equation}\n\t\\nabla \\cdot \\b u = 0.\n\\end{equation}\n\n\\section{Properties of the solution of the reactive transport equation}\\label{sec:propcontinuous}\nIn this section, we show the most important properties related to the reactive transport equation. To this end, let us first introduce some notations and terminology. Let $\\Omega$ denote the (time independent) domain of interest and $\\partial \\Omega$ its boundary. Let $\\hat{\\b n}$ denote the unit vector, normal to $\\partial \\Omega$ and oriented towards the exterior of the domain. 
The volume of the domain is\n\\begin{equation}\n\t|\\Omega| = \\int_{\\Omega} \\rm d\\Omega.\t\n\\end{equation}\nThe mean tracer concentration over the domain is\n\\begin{equation} \\label{eq:Cmean}\n\t\\bar{C}(t) = \\frac{1}{|\\Omega|}\\int_{\\Omega} C(\\b x,t) \\rm d\\Omega,\n\\end{equation}\nand the variance of the concentration over $\\Omega$ is expressed as\n\\begin{equation} \\label{eq:sigma2continuous}\n \t\\sigma^2(t) = \\frac{1}{|\\Omega|} \\int_{\\Omega} \\left(\\hat C(\\b x,t)\\right)^2 \\rm d\\Omega,\n\\end{equation}\nwhere \n\\begin{equation} \\label{eq:Chat}\n\t\\hat C(\\b x,t) := C(\\b x,t) - \\bar C\t\n\\end{equation}\ndenotes the deviation of the concentration with respect to $\\bar C$.\nThe variance is a measure of the concentration inhomogeneity.\nThe mean net production rate is denoted\n\\begin{equation}\n\t\\bar q(t) = \\frac{1}{|\\Omega|}\\int_\\Omega q\\ \\rm d\\Omega.\n\\end{equation}\nThe flux that enters the domain through the boundary $\\partial \\Omega$ is\n\\begin{equation}\n\t\\Phi = -\\int_{\\partial \\Omega} (\\b uC - \\b K \\cdot \\nabla C)\\cdot \\n\\ \\rm d(\\partial \\Omega).\n\\end{equation}\nAn \\textit{isolated} domain is such that there is no exchange with the environment, and hence no net flux of the fluid crossing the boundary:\n\\begin{equation} \\label{eq:isolateddomain}\n\t\\left[ \\b u \\cdot \\hat{\\b n}\\right]_{\\b x \\in \\partial \\Omega} = 0 \\quad \\mbox{and} \\quad \\left[ (\\b K \\cdot \\nabla C) \\cdot \\hat{\\b n}\\right]_{\\b x \\in \\partial \\Omega} = 0.\n\\end{equation}\nTherefore, $\\Phi=0$ if the domain is isolated.\n\n\\begin{property}\\label{prop:cont1}\n\tThe mean tracer concentration satisfies the following relation\n\t\\begin{equation}\n\t\t\\frac{d\\bar{C}(t)}{dt} = \\bar q + \\frac{\\Phi}{|\\Omega|}.\n\t\\end{equation}\n\\end{property}\n\\begin{proof}\n\tThe time derivative of the mean concentration is expressed as\n\t\\begin{equation} \\label{eq:tracermass1}\n\t\t\\frac{d\\bar{C}(t)}{dt} = \\frac{1}{|\\Omega|} \\frac{d}{dt} \\int_{\\Omega} C(\\b x,t) \\rm d\\Omega = \\frac{1}{|\\Omega|} \\int_{\\Omega} \\frac{\\partial}{\\partial t} C(\\b x,t) \\rm d\\Omega.\n\t\\end{equation}\n\tInjecting~\\eqref{eq:ADR} into~\\eqref{eq:tracermass1}, yields successively\n\t\\begin{subequations}\n\t\\begin{align}\n\t\t\\frac{d\\bar{C}(t)}{dt} &= \\frac{1}{|\\Omega|} \\int_\\Omega q\\ \\rm d\\Omega + \\frac{1}{|\\Omega|} \\int_{\\Omega} -\\nabla \\cdot \\left(\\b uC - \\b K \\nabla C\\right) \\rm d\\Omega\\\\\n\t\t &= \\bar q - \\frac{1}{|\\Omega|} \\int_{\\partial \\Omega} \\left(\\b uC - \\b K \\nabla C\\right) \\cdot \\n \\ \\rm d(\\partial \\Omega)\\\\\n\t\t &= \\bar q + \\frac{\\Phi}{|\\Omega|},\n\t\\end{align}\n\t\\end{subequations}\n\twhere we have used the divergence theorem.\n\\end{proof}\n\n\\begin{corollary}{prop:cont1} \\label{prop:mass-is-constant}\n\tIf the domain is isolated and the tracer is passive, the mean tracer concentration is constant.\n\\end{corollary}\n\\begin{proof}\n\tThis is a straightforward consequence of property~\\ref{prop:cont1} with $\\Phi=0$ (isolated domain) and $\\bar q = 0$ (passive tracer). \n\\end{proof}\n\n\\begin{property}\n\tFor a passive tracer in an isolated domain, if the initial concentration is constant, then the concentration remains constant.\n\\end{property}\n\\begin{proof}\n\tLet $C_0 \\ge 0$ and $C(\\b x, 0) = C_0$. 
At time $t=0$, equation~\\eqref{eq:ADR} becomes\n\t\\begin{equation}\n\t\t\\frac{\\partial C}{\\partial t} = q - C_0 \\nabla \\cdot \\b u.\n\t\\end{equation}\n\tBut $q=0$ since the tracer is passive, and the Boussinesq approximation implies that $\\b u$ is divergence-free (equation~\\eqref{eq:continuity_boussinesq}). Hence\n\t\\begin{equation}\n\t\t\\frac{\\partial C}{\\partial t} = 0,\n\t\\end{equation}\n\tand the concentration remains constant.\n\\end{proof}\n\n\\begin{property}\n\tFor a passive tracer in an isolated domain, the variance of the concentration decreases monotonously with time until the concentration is everywhere equal to the mean concentration. In other words, the tracer concentration gets more and more homogeneous with time until the equilibrium state is reached, which happens when the concentration is constant and uniform everywhere on $\\Omega$.\n\\end{property}\n\\begin{proof}\n\tBy corollary~\\ref{prop:mass-is-constant}, the mean concentration $\\bar C$ is constant. Let $\\hat C(\\b x,t)$ be defined as in~\\eqref{eq:Chat}. It is easy to show that $\\bar C$ satisfies the advection diffusion equation:\n\t \\begin{equation}\n\t \t0 = \\frac{\\partial \\bar C}{\\partial t} = -\\nabla \\cdot \\left(\\b u \\bar C - \\b K \\nabla \\bar C\\right) = - \\bar C \\nabla \\cdot (\\b u - 0) = 0,\n\t \\end{equation}\n\t and thus\n\t \\begin{equation} \\label{prop:variance1}\n\t \t\\frac{\\partial \\hat C}{\\partial t} = -\\nabla \\cdot \\left(\\b u \\hat C - \\b K \\nabla \\hat C\\right).\n\t \\end{equation}\n\t Multiplying equation~\\eqref{prop:variance1} by $\\hat C$ yields, after some calculations\n\t \\begin{equation} \\label{prop:variance2}\n\t \t\\frac{\\partial \\hat C^2}{\\partial t} = - \\nabla \\cdot \\left(\\hat C^2 \\b u \\right) + 2\\nabla \\cdot \\left(\\hat C \\b K \\nabla \\hat C \\right) - 2 \\nabla \\hat C^\\t \\b K \\nabla \\hat C.\n\t \\end{equation}\n\t Now, notice that\n\t \\begin{equation} \\label{prop:variance3}\n\t \t\\int_{\\Omega} \\frac{\\partial \\hat C^2}{\\partial t} \\rm d\\Omega = \\frac{d}{dt} \\int_{\\Omega} \\left( \\hat C(\\b x,t) \\right)^2 \\rm d\\Omega = |\\Omega|\\frac{d\\sigma^2}{dt}.\n\t \\end{equation}\n\t Combining~\\eqref{prop:variance2} and~\\eqref{prop:variance3} yields\n\t \\begin{equation}\n\t \t|\\Omega|\\frac{d\\sigma^2}{dt} = - \\int_{\\Omega} \\nabla \\cdot \\left(\\hat C^2 \\b u \\right) \\rm d\\Omega + 2 \\int_{\\Omega} \\nabla \\cdot \\left(\\hat C \\b K \\nabla \\hat C \\right) \\rm d\\Omega -2 \\int_{\\Omega} \\nabla \\hat C^\\t \\b K \\nabla \\hat C \\rm d\\Omega.\n\t \\end{equation}\n\t Using the divergence theorem and equations~\\eqref{eq:isolateddomain}, we get\n\t \\begin{equation}\n\t \t\t|\\Omega|\\frac{d\\sigma^2}{dt} = - \\underbrace{\\int_{\\partial \\Omega} \\hat C^2 (\\b u \\cdot \\n) \\rm d(\\partial \\Omega)}_{=0} + 2\\underbrace{\\int_{\\partial \\Omega} \\hat C (\\b K \\nabla \\hat C \\cdot \\n) \\rm d(\\partial \\Omega)}_{=0} -2 \\int_{\\Omega} \\nabla \\hat C^\\t \\b K \\nabla \\hat C \\rm d\\Omega.\n\t \\end{equation}\n\t Finally,\n\t \\begin{equation}\n\t \t\\frac{d\\sigma^2}{dt} = -\\frac{2}{|\\Omega|} \\int_{\\Omega} \\nabla \\hat C^\\t \\b K \\nabla \\hat C \\rm d\\Omega.\n\t \\end{equation}\n\t Since $\\b K$ is positive definite, the variance of the concentration will decrease until $\\nabla \\hat C = 0$, hence until $C$ reaches a constant value everywhere on $\\Omega$. 
The only possibility is $\\bar C$, and thus we have shown that\n\t \\begin{equation}\n\t \t\\lim_{t\\rightarrow \\infty} C(\\b x,t) = \\bar C.\n\t \\end{equation}\n\\end{proof}", "meta": {"hexsha": "07e27a865c58d86192e4d1cf6aeffa2b2c4c004f", "size": 12295, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "inputs/advdiffeq/advdiffeq.tex", "max_stars_repo_name": "dufaysr/tfe", "max_stars_repo_head_hexsha": "75c6191e1533da84233d4a38dea3cc3f3884a286", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "inputs/advdiffeq/advdiffeq.tex", "max_issues_repo_name": "dufaysr/tfe", "max_issues_repo_head_hexsha": "75c6191e1533da84233d4a38dea3cc3f3884a286", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "inputs/advdiffeq/advdiffeq.tex", "max_forks_repo_name": "dufaysr/tfe", "max_forks_repo_head_hexsha": "75c6191e1533da84233d4a38dea3cc3f3884a286", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.3121019108, "max_line_length": 967, "alphanum_fraction": 0.7127287515, "num_tokens": 3940, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.7090191214879992, "lm_q1q2_score": 0.56413681398262}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[margin=1in]{geometry} \n\\usepackage{amsmath,amsthm,amssymb,amsfonts}\n\\usepackage{listings}\n \n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n \n\\newenvironment{problem}[2][Problem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n%If you want to title your bold things something different just make another thing exactly like this but replace \"problem\" with the name of the thing you want, like theorem or lemma or whatever\n \n\\begin{document}\n \n%\\renewcommand{\\qedsymbol}{\\filledbox}\n%Good resources for looking up how to do stuff:\n%Binary operators: http://www.access2science.com/latex/Binary.html\n%General help: http://en.wikibooks.org/wiki/LaTeX/Mathematics\n%Or just google stuff\n \n\\title{Assignment 2\\\\ MEEN 357}\n\\author{Jacob Hartzer}\n\\maketitle\n \n \\section*{Task 6}\n\\begin{problem} {1} Machine Epsilon\n\nMachine epsilon for this 9-bit floating-point number would be half the difference between the following floats: 000000000 and 000000001. This can be found as:\n\\begin{align*}\n\\epsilon &= \\frac{-1^0 * 2^{1-3} * 0.00001 - -1^0 * 2^{1-3} * 0.00000}{2} \\\\\n& = \\frac{(0.0000001 \u2013 0.0000000)}{2}\\\\\n& = 0.00000001_2\\\\\n&= 2^{-8}\\\\\n&=0.00390625\n\\end{align*}\n\n\\end{problem} \n\n\\begin{problem}{2} Floats\n \\subsubsection*{i}\n\\begin{align*}\n0\\: 110\\: 11111 &= 2^(2^2+2^1 \u2013 3) x 1.11111 \\\\\n&= 1111.112\\\\\n&= 2^4 + 2^3 +\\dots{}+2^{-2}\\\\\n&= 15.75_10\n\\end{align*}\n\n \\subsubsection*{ii}\n\\begin{align*}\n0\\: 001\\: 00001&= 2^{1-3} * 1.00001\\\\\n&= .0100001_2\\\\\n&= 2^{-2} + 2^{-7 }\\\\\n&= 0.2578125_10\n\\end{align*}\n\n \\subsubsection*{iii}\n\\begin{align*}\n0\\: 000\\: 11111&= 2^{1-3} * 0.11111\\\\\n&= 0.0011111_2\\\\\n&= 2^{-3} +\\dots{}+ 2^{-7}\\\\\n&= 0.2421875_10\n\\end{align*}\n\n \\subsubsection*{iv}\n\\begin{align*}\n0\\: 000\\: 00001&= 2^{1-3} * 0.00001\\\\\n&= 0.0000001_2\\\\\n&= 2^{-7}\\\\\n&= 0.0000078125_10\n\\end{align*}\n\\end{problem}\n\n\n\\begin{problem}{3} Floating Point Binary\n\\begin{center}\n\\begin{tabular}{ |c|c|c|c|c| } \n \\hline\nNumber\t&Number in Binary&\tExponential&\tExponential Binary&\tBinary Float Representation\\\\\\hline\n0\t& 0\t\t& 0\t\t& 000\t& 0\\: 000\\: 00000\\\\\n1\t& 1\t\t& 0+3\t& 011\t& 0\\: 011\\: 00000\\\\\n2\t& 10\t\t& 1+3\t& 100\t& 0\\: 100\\: 00000\\\\\n3\t& 11\t\t& 1+3\t& 100\t& 0\\: 100\\: 10000\\\\\n4\t& 100\t& 2+3\t& 101\t& 0\\: 101\\: 00000\\\\\n5\t& 101\t& 2+3\t& 101\t& 0\\: 101\\: 01000\\\\\n6\t& 110\t& 2+3\t& 101\t& 0\\: 101\\: 10000\\\\\n7\t& 111\t& 2+3\t& 101\t& 0\\: 101\\: 11000\\\\\n8\t& 1000\t& 3+3\t& 110\t& 0\\: 110\\: 00000\\\\\n9\t& 1001\t& 3+3\t& 110\t& 0\\: 110\\: 00100\\\\\n10\t& 1010\t& 3+3\t& 110\t& 0\\: 110\\: 01000\\\\\n11\t& 1011\t& 3+3\t& 110\t& 0\\: 110\\: 01100\\\\\n12\t& 1100\t& 3+3\t& 110\t& 0\\: 110\\: 10000\\\\\n13\t& 1101\t& 3+3\t& 110\t& 0\\: 110\\: 10100\\\\\n14\t& 1110\t& 3+3\t& 110\t& 0\\: 110\\: 11000\\\\\n15\t& 1111\t& 3+3\t& 110 \t& 0\\: 110\\: 11100\\\\\n\n \\hline\n\\end{tabular}\n\\end{center}\n\\end{problem}\n\\end{document}", "meta": {"hexsha": "b50a2fd813d60694297a15ec70a64640b18802d7", "size": 2809, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "A2/a2task6.tex", "max_stars_repo_name": "JHartzer/MEEN_345", "max_stars_repo_head_hexsha": "794890ee37ada10d97280c794508e4c7a4d61337", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "A2/a2task6.tex", "max_issues_repo_name": "JHartzer/MEEN_345", "max_issues_repo_head_hexsha": "794890ee37ada10d97280c794508e4c7a4d61337", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "A2/a2task6.tex", "max_forks_repo_name": "JHartzer/MEEN_345", "max_forks_repo_head_hexsha": "794890ee37ada10d97280c794508e4c7a4d61337", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.09, "max_line_length": 193, "alphanum_fraction": 0.6219295123, "num_tokens": 1320, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191214879992, "lm_q2_score": 0.7956580927949806, "lm_q1q2_score": 0.5641368019583141}}
{"text": "\\documentclass[12pt]{article}\n\n\\pdfoutput=24\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{esvect}\n\\usepackage[toc,page]{appendix}\n\\usepackage{leftidx}\n\\usepackage{color}\n\\usepackage{framed, color}\n\\usepackage{multirow}\n\\usepackage{pdfpages}\n\\usepackage{multicol}\n\\usepackage{wrapfig,lipsum,booktabs}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{mathtools,hyperref}\n\\hypersetup{\n    colorlinks=true,\n    linkcolor=cyan,\n    filecolor=cyan,      \n    urlcolor=red,\n    citecolor=red,\n}\n\n\\usepackage{cleveref}\n\\usepackage{commath}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\renewcommand{\\qedsymbol}{$\\blacksquare$}\n\n%%%%%%%%%%%\n%%%%%%%%%%%  User Defined Commands. (macros)\n%%%%%%%%%%%\n\n\\definecolor{mgreen}{RGB}{25,147,100}\n\\definecolor{shadecolor}{rgb}{1,.8,.1}\n\\definecolor{shadecolor2}{RGB}{245,237,0}\n\\definecolor{orange}{RGB}{255,137,20}\n\\definecolor{orange}{RGB}{245,37,100}\n\n%%%%%%%%%%%\n%%%%%%%%%%%  Graphical Packages\n%%%%%%%%%%%\n\\usepackage{pgfplots}\n\\usetikzlibrary{patterns}\n\\usepackage{mdframed}\n\\usepackage{adjustbox}\n\\usepackage{tcolorbox}\n%\\usepackage{graphics}\n\\usepackage{tikz,ifthen,fp,calc}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usetikzlibrary{plotmarks}\n\\usepackage{graphicx}\n\n%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%% Theorem Styles\n%%%%%%%%%%%%%%%%%\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{prop}{Proposition}[section]\n\\newtheorem{corr}{Corollary}[section]\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\theoremstyle{definition}\n\\newtheorem{remark}{Remark}[section]\n\\newtheorem{fact}{Fact}[section]\n\n\\usepackage[english]{babel}\n\\usepackage{babel,blindtext}\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{exmp}{Example}[section]\n\\usepackage{fullpage}\n\\usepackage{amsfonts}\n\\usepackage{lscape}\n\\usepackage{bbm}\n\n\\usepackage{todonotes}\n\\usepackage{cite}\n\\usepackage{verbatim}\n\\usepackage{bm}\n\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\usepackage[margin=1in]{geometry}\n\\providecommand{\\keywords}[1]{\\textbf{\\textit{Keywords:---}} #1}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\\usepackage{authblk}\n\\usepackage{cite}\n\n\n\\newcommand*{\\affaddr}[1]{#1} % No op here. Customize it for different styles.\n\\newcommand*{\\affmark}[1][*]{\\textsuperscript{#1}}\n%\\newcommand*{\\email}[1]{\\texttt{#1}}\n\n\\title{\\textbf{Mahalanobis distance explained}}\n\\author{}\n\\date{}\n\n\\providecommand{\\keywords}[1]{\\textbf{\\textit{Keywords:}} #1}\n\n\\begin{document} \n\n\\maketitle\n\n\\section{Diagonalizability}\n\nLet $\\mathbf{X}$ be a matrix whose rows are variables and \ncolumns are observations that are centered about the mean. \nThen the covariance of the $\\mathbf{X}$ is given by\n\n\\begin{equation}\n\\mathbf{{\\Large{\\Sigma} }}= cov(\\mathbf{X}) = \\frac{1}{n-1}\\mathbf{ XX^\\text{T}}\n\\end{equation}\n\nAssume we want to have a mapping $\\mathbf{M}$ so that $\\mathbf{Y=M^\\text{T}X}$ has covariance \nthat is diagonal. Then\n\n\\begin{itemize}\n\\item Assume there is no observation with all zero values\n\\item Assume there is no two identical observations, i.e.columns are linearly independent.\n\\item (Rare) things like that which will make covariance matrix singular.\n\\end{itemize}\n\nThen since covariance matrix is symmetric, with positive entries, etc., it is diagonalizable, \ni.e. 
it can be written as \n\n\\begin{equation}\\label{eq:eigendecomposition}\n\\mathbf{{\\Large{\\Sigma} }}  = \\mathbf{U^{-1} D U = U^\\text{T} D U}\n\\end{equation}\n\nor equivalently \n\n\\begin{equation}\\label{eq:eigendecomposition2}\n\\mathbf{U^\\text{T} {\\Large{\\Sigma} }U }  = \\mathbf{D}\n\\end{equation}\n\nwhere $\\mathbf{D}$ is diagonal and, in this case due to properties of covariance matrix, the \nmatrix $\\mathbf{U}$ is orthogonal, i.e. $\\mathbf{U^{-1} = U^\\text{T}}$, i.e. $\\mathbf{UU^\\text{T}} = \\mathbf{I}$\n\nThe columns of $\\mathbf{U^{-1}}$ are eigenvectors of covariance matrix, entries of $\\mathbf{D}$\nare eigenvalues of covariance matrix, which are also variances of different variables in $\\mathbf{X}$.\n\n\nAssume we want to rotate the data, or represent them in a \ncoordinate system where it is represented with variables that are\northogonal, i.e. the collinearity is killed. This is what PCA does.\n\nNow lets pretend we do not know that. Lets say we want to\ntransform $\\mathbf{X}$ into $\\mathbf{Y}$ by a mapping $\\mathbf{M^\\text{T}}$, i.e. $\\mathbf{Y= M^\\text{T}X}$,\nso that covariance of $\\mathbf{Y}$ is diagonal matrix, $\\mathbf{\\hat{D}}$. \\\\\n\\pagebreak\n\nSo, we want $cov(\\mathbf{Y}) = \\frac{1}{n-1}\\mathbf{Y}\\mathbf{Y^\\text{T}}$,\nLets take a look:\n\n\\begin{equation}\\label{eq:diagonalize}\n\\begin{aligned}\n\\mathbf{\\hat{D}} = cov({\\mathbf{Y}}) &= \\frac{1}{n-1}\\mathbf{Y}\\mathbf{Y^\\text{T}}\\\\\n                           &= \\frac{1}{n-1}\\mathbf{M^\\text{T}X}\\mathbf{(M^\\text{T}X)^\\text{T}}\\\\\n                           &= \\frac{1}{n-1} \\mathbf{M^\\text{T}}\\mathbf{XX^\\text{T}}\\mathbf{ M}\\\\\n                           &=  \\mathbf{M^\\text{T}}\\mathbf{\\Sigma}\\mathbf{M}\n\\end{aligned}\n\\end{equation}\n\nSo, we arrived at $\\mathbf{\\hat{D}} = \\mathbf{M^\\text{T}}\\mathbf{\\Sigma}\\mathbf{M}$.\nHence, if you choose $\\mathbf{M}$ to be the same as $\\mathbf{U}$, then\ncovariance of $\\mathbf{Y}$ would be diagonal, and you have $\\mathbf{\\hat{D}}  = \\mathbf{D}$.\\\\\n\nSince, the the decomposition given by Eq.~(\\ref{eq:eigendecomposition})\nis eigen-decomposition of $\\mathbf{\\Sigma}$, we can see this is what PCA does.\n\n\\section{Equivalency}\n\n\\theoremstyle{definition} \n\\begin{definition}\nSuppose the random vectors of $\\mathbf{v}$ and $\\mathbf{w}$ be drawn from a distribution \nwhose associated covariance matrix is given by $\\mathbf{\\Sigma}$. Then define the Malanoblis\ndistance as follows:\n\n \\begin{equation}\\label{eq:mahabDist}\n d = \\mathbf{(v - w)^\\text{T} \\Sigma^{-1} (v - w)} \n \\end{equation}\n\\end{definition}\n\nLets take a look:\n\n\\begin{equation}\\label{eq:equivalency}\n\\begin{aligned}\n\\mathbf{d} &= \\mathbf{(v - w)^\\text{T} \\Sigma^{-1} (v - w)} \\\\\n                 &= \\mathbf{(v - w)^\\text{T}  (U^\\text{T} D U) ^{-1} (v - w)} \\\\\n                 &= \\mathbf{(v - w)^\\text{T}  (U^{-1} D^{-1} U^{-T}) (v - w)} \\\\\n                 &= \\mathbf{(v - w)^\\text{T}  (U^\\text{T} D^{-1} U) (v - w)} \\\\\n                 &= \\mathbf{(U(v - w))^\\text{T}  D^{-1} (U (v - w))} \\\\\n\\end{aligned}\n\\end{equation}\n\nNotice:\n\\begin{itemize}\n\\item The diagonal entries of $\\mathbf{D}$ eigenvalues of covariance matrix,\nwhich are variance of variables in $\\mathbf{X}$. 
So, if the data in $\\mathbf{X}$\nwas scaled by their variances, then this distance was equivalent to\n Euclidean distance.\n\n\\item Lets look at the last term in above equation:\n\n\\begin{equation}\n\\mathbf{ U (v - w)}  = \\mathbf{ U v} - \\mathbf{ U w}\n\\end{equation}\n\nEach of these terms are mappings of $\\mathbf{v}$ and $\\mathbf{w}$\ninto the PCA space of the data $\\mathbf{X}$.\n\\end{itemize}\n\n\nSuppose $\\mathbf{x}$ is a vector and we wish to represent it, \nin the column space of a matrix $\\mathbf{A} = [\\mathbf{A}_1, \\mathbf{A}_2, \\dots \\mathbf{A}_N]$,\nwhere each $\\mathbf{A}_i$ is a column of $\\mathbf{A}$. So, we are looking for constants $y_1, y_2, \\dots, y_N$ so that \n\n\\[\\mathbf{x} = y_1 \\mathbf{A}_1 + y_2 \\mathbf{A}_2 \\dots + \\mathbf{A}_N y_N = \\mathbf{Ay} \\]\n\nHence, $\\mathbf{y = A^{-1} X}$. So, $\\mathbf{y}$ is the mapping of $\\mathbf{x}$ into column space of\n$\\mathbf{A}$. Just like $\\mathbf{Uv}$ which is mapping of $\\mathbf{v}$ into column space of $\\mathbf{U^{-1}}$\nwhose columns are eigenvectors of covariance, i.e. PCA.\n\n\\iffalse\nP.S. I had to choose where to put the exponent \\{-1\\} to indicate inverse of the matrices, \nwhether to the left or right of  $\\mathbf{\\Sigma}$.\nEither way would have made some parts easier, but some other parts harder to follow.\\\\\n\n\nNow, the question is, when you mentioned M. Distance takes\ncare of scales and collinearity, where, in what context you learned that.\nDid the context refer to this definition of distance as M. distance, \nor they were talking about the distance between a point and distribution?\n\\fi\n\\end{document}\n", "meta": {"hexsha": "9464d732ed255527ba90f102dc10e2486963455e", "size": 7952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "arxiv/mahab_equivalency/arXiv.tex", "max_stars_repo_name": "HNoorazar/Kirti", "max_stars_repo_head_hexsha": "fb7108dac1190774bd90a527aaa8a3cb405f127d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "arxiv/mahab_equivalency/arXiv.tex", "max_issues_repo_name": "HNoorazar/Kirti", "max_issues_repo_head_hexsha": "fb7108dac1190774bd90a527aaa8a3cb405f127d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "arxiv/mahab_equivalency/arXiv.tex", "max_forks_repo_name": "HNoorazar/Kirti", "max_forks_repo_head_hexsha": "fb7108dac1190774bd90a527aaa8a3cb405f127d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9958506224, "max_line_length": 119, "alphanum_fraction": 0.6828470825, "num_tokens": 2570, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.810478913248044, "lm_q1q2_score": 0.5640595469390416}}
{"text": "\\section{Up up and away! Adding More Layers to the Atmosphere}\nUp until now we have neglected any vertical movement. This hampers the model, as the rising of warm air that then flows to the poles, cools down and flows back to the tropics is not possible\nas the warm air cannot rise. So let us change that, let's add some vertical motion and some more layers to the atmosphere.\n\nRemember \\autoref{eq:atmos change}? We need this equation for every layer in the atmosphere. This also means that we have to adjust the main loop of the code, which is described  in \n\\autoref{alg:temperature with density}. The $T_a$ needs to change, we need to either add a dimension (to indicate which layer of the atmosphere we are talking about) or we need to add different\nmatrices for each atmosphere layer. Let us define some useful variables in \\autoref{alg:more layers}. We opt for adding a dimension as that costs less memory than defining new arrays \n\\footnote{This has to do with pointers, creating a new object always costs a bit more space than adding a dimension as we need a pointer to the object and what type of object it is whereas with \nadding a dimension we do not need this additional information as it has already been defined}. So $T_a$, and all other matrices that have to do with the atmosphere (so not $T_p$ for instance) \nare no longer indexed by $lat, lon$ but are indexed by $lat, lon, layer$.\n\n\\begin{algorithm}\n    $nlevels \\leftarrow 4$ \\;\n    $heights \\leftarrow \\text{Array with } nlevels \\text{ layers, each with a uniform thickness of } \\frac{100 \\cdot 10^3}{nlevels} m$ \\;\n    \\caption{Definition of variables that are used throughout the code}\n    \\label{alg:more layers}\n\\end{algorithm}\n\nWe also need to change all the gradient functions (\\autoref{alg:gradient x} and \\autoref{alg:gradient y}) to incorporate the atmospheric layers. Additionally we need a new gradient function that \ncalculates the gradient in the $z$ direction (vertical). Let us first change the existing gradient functions to take the atmospheric layer in effect. The changes can be found in \n\\autoref{alg:gradient x layer} and \\autoref{alg:gradient y layer}. Let us improve the gradient in the $y$ direction as well. Since we are using the central difference method (calculating the\ngradient by taking the difference of the next grid cell and the previous grid cell) there is no gradient at the poles. What we can do instead of returning $0$ for those cases is forward \ndifferencing (calculating the gradient by taking the difference of the cell and the next/previous cell, multiplied by $2$ to keep it fair). A special change in both functions is checking whether\n$k$ is equal to \\texttt{NULL}. We do this as sometimes we want to use this function for matrices that does not have the layer dimension. Hence we define a default value for $k$ which is \n\\texttt{NULL}. \\texttt{NULL} is a special value in computer science. It represents nothing. This can be useful sometimes if you declare a variable to be something but it is referring to something\nthat has been deleted or it is returned when some function fails. It usually indicates that something special is going on. 
So here we use it in the special case where we do not want to consider\nthe layer part in the gradient.\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{Matrix (double array) $a$, first index $i$, second index $j$, third index $k$ with default value \\texttt{NULL}}\n    \\Output{Gradient in the $x$ direction}\n    \\eIf{$k == \\texttt{NULL}$}{\n        $grad \\leftarrow \\frac{a[i, (j + 1)\\text{ mod } nlon] - a[i, (j - 1) \\text{ mod } nlon]}{\\delta x[i]}$ \\;\n    }{\n        $grad \\leftarrow \\frac{a[i, (j + 1)\\text{ mod } nlon, k] - a[i, (j - 1) \\text{ mod } nlon, k]}{\\delta x[i]}$ \\;\n    }\n    \\Return{$grad$} \\;\n    \\caption{Calculating the gradient in the $x$ direction}\n    \\label{alg:gradient x layer}\n\\end{algorithm}\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{Matrix (double array) $a$, first index $i$, second index $j$, third index $k$ with default value \\texttt{NULL}}\n    \\Output{Gradient in the $y$ direction}\n    \\eIf{$k == \\texttt{NULL}$}{\n        \\uIf{$i == 0$}{\n            $grad \\leftarrow 2 \\frac{a[i + 1, j] - a[i, j]}{\\delta y}$ \\;\n        }\\uElseIf{$i == nlat - 1$}{\n            $grad \\leftarrow 2 \\frac{a[i, j] - a[i - 1, j]}{\\delta y}$ \\;\n        }\\uElse{\n            $grad \\leftarrow \\frac{a[i + 1, j] - a[i - 1 j]}{\\delta y}$ \\;\n        }\n    }{\n        \\uIf{$i == 0$}{\n            $grad \\leftarrow 2 \\frac{a[i + 1, j, k] - a[i, j, k]}{\\delta y}$ \\;\n        }\\uElseIf{$i == nlat - 1$}{\n            $grad \\leftarrow 2 \\frac{a[i, j, k] - a[i - 1, j, k]}{\\delta y}$ \\;\n        }\\uElse{\n            $grad \\leftarrow \\frac{a[i + 1, j] - a[i - 1 j]}{\\delta y}$ \\;\n        }\n    }\n    \\Return $grad$ \\;\n    \\caption{Calculating the gradient in the $y$ direction}\n    \\label{alg:gradient y layer}\n\\end{algorithm}\n\nWith those changes done, let us define the gradient in the $z$ direction. The function can be found in \\autoref{alg:gradient z layer}. Here $a.dimensions$ is the attribute that tells us how \ndeeply nested the array $a$ is. If the result is $1$ we have just a normal array, if it is $2$ we have a double array (an array at each index of the array) which is also called a matrix and if it \nis $3$ we have a triple array. We need this because we have a one-dimensional case, for when we do not use multiple layers and a three-dimensional case for when we do use multiple layers. 
This \ndistinction is needed to avoid errors being thrown when running the model with one or multiple layers.\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{Matrix (double array) $a$, first index $i$, second index $j$, third index $k$}\n    \\Output{Gradient in the $z$ direction}\n    \\uIf{$a.dimensions == 1$}{\n        \\uIf{$k == 0$}{\n            $grad \\leftarrow \\frac{a[k + 1] - a[k]}{\\delta z[k]}$ \\;\n        }\\uElseIf{$k == nlevels - 1$}{\n            $grad \\leftarrow \\frac{a[k] - a[k - 1]}{\\delta z[k]}$ \\;\n        }\\uElse{\n            $grad \\leftarrow \\frac{a[k + 1] - a[k - 1]}{2\\delta z[k]}$ \\;\n        }\n    } \\uElse {\n        \\uIf{$k == 0$}{\n            $grad \\leftarrow \\frac{a[i, j, k + 1] - a[i, j, k]}{\\delta z[k]}$ \\;\n        }\\uElseIf{$k == nlevels - 1$}{\n            $grad \\leftarrow \\frac{a[i, j, k] - a[i, j, k - 1]}{\\delta z[k]}$ \\;\n        }\\uElse{\n            $grad \\leftarrow \\frac{a[i, j, k + 1] - a[i, j, k - 1]}{2\\delta z[k]}$ \\;\n        }\n    }\n    \n    \\Return $grad$ \\;\n    \\caption{Calculating the gradient in the $z$ direction}\n    \\label{alg:gradient z layer}\n\\end{algorithm}\n\nAs you can see, we have used $\\delta z$ however, we have not defined it yet. Let us do that in \\autoref{alg:gradient z}.\n\n\\begin{algorithm}[hbt]\n    \\For{$k \\in [0, nlevels - 1]$}{\n        $\\delta z[k] \\leftarrow heights[k + 1] - heights[k]$ \\;\n    }\n    $\\delta z[nlevels - 1] \\leftarrow \\delta z [nlevels - 2]$ \\;\n    \\caption{Defining $\\delta z$ for later use throughout the code}\n    \\label{alg:gradient z}\n\\end{algorithm}\n\nLet's incorporate the changes for the Laplacian operator (\\autoref{alg:laplacian}) as well. The new code can be found in \\autoref{alg:laplacian layer}.\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{A matrix (double array) a}\n    \\Output{A matrix (double array) with results for the laplacian operator for each element}\n    \\eIf{$a.dimensions == 2$}{\n        \\For{$lat \\in [1, nlat - 1]$}{\n            \\For{$lon \\in [0, nlon]$}{\n                $output[lat, lon] \\leftarrow \\frac{\\Delta_x(a, lat, (lon + 1) \\text{ mod } nlon) - \\Delta_x(a, lat, (lon - 1) \\text{ mod } nlon)}{\\delta x[lat]} + \\frac{\\Delta_y(a, lat + 1, lon) - \n                \\Delta_y(a, lat - 1, lon)}{\\delta y}$\\;\n            }\n        }\n    }{\n        \\For{$lat \\in [1, nlat - 1]$}{\n            \\For{$lon \\in [0, nlon]$}{\n                \\For{$k \\in [0, nlevels - 1]$}{\n                    $output[lat, lon, k] \\leftarrow \\frac{\\Delta_x(a, lat, (lon + 1) \\text{ mod } nlon, k) - \\Delta_x(a, lat, (lon - 1) \\text{ mod } nlon, k)}{\\delta x[lat]} + \\frac{\\Delta_y(a, \n                    lat + 1, lon, k) - \\Delta_y(a, lat - 1, lon, k)}{\\delta y} + \\frac{\\Delta_z(a, lat, lon, k + 1) - \\Delta_z(a, lat, lon, k + 1)}{2\\delta z[k]}$\\;\n                }\n            }\n        }\n    }\n    \n    \\Return{$ouput$} \\;\n    \\caption{Calculate the laplacian operator over a matrix a}\n    \\label{alg:laplacian layer}\n\\end{algorithm}\n\nOf course we also need to incorporate the new layers in the divergence operator (\\autoref{alg:divergence}). The new changes can be found in \\autoref{alg:divergence layer}. Here we use $w$, the \nvertical wind velocity. 
\n\nOf course we also need to incorporate the new layers in the divergence operator (\\autoref{alg:divergence}). The new changes can be found in \\autoref{alg:divergence layer}. Here we use $w$, the \nvertical wind velocity. We define $w$ in the same way as $u$ and $v$: it is all zeroes in the beginning and has the same dimensions as $u$ and $v$.\n\n\\begin{algorithm}[!hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{A matrix (double array) $a$}\n    \\Output{A matrix (double array) containing the result of the divergence operator taken over that element}\n    $dim_1 \\leftarrow \\text{ Length of } a \\text{ in the first dimension}$ \\;\n    \\For{$i \\in [0, dim_1]$}{\n        $dim_2 \\leftarrow \\text{ Length of } a \\text{ in the second dimension (i.e. the length of the array stored at index } i)$ \\;\n        \\For{$j \\in [0, dim_2]$}{\n            $dim_3 \\leftarrow \\text{ Length of } a \\text{ in the third dimension}$ \\;\n            \\For{$k \\in [0, dim_3]$}{\n                $output[i, j, k] \\leftarrow \\Delta_x(au, i, j, k) + \\Delta_y(av, i, j, k) + \\Delta_z(aw, i, j, k)$ \\;\n            }\n        }\n    }\n    \\Return{$output$} \\;\n    \\caption{Calculate the result of the divergence operator on a vector}\n    \\label{alg:divergence layer}\n\\end{algorithm}\n\nWith all those changes in the functions done, let us incorporate the changes into the model itself. We now need to account for the temperature change throughout the layers. Let us look at the \natmospheric temperature equation again (\\autoref{eq:atmos change}). We need to account for one more thing: the absorption of energy from another layer. The new equation is shown in \n\\autoref{eq:atmos change layer}. Here $k$ is the layer of the atmosphere, $k = -1$ means that you use $T_p$, and $k = nlevels$ means that $T_{a_{nlevels}} = 0$ as that is space. Also, let us\nrewrite the equation a bit so that repeated variables are only written once and factors that cancel are removed, which is done in \\autoref{eq:atmos change layer improved}.\nLet us also clean up the equation for the change in the surface temperature (\\autoref{eq:surface change}) in \\autoref{eq:surface change improved}.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\Delta T_{a_k} = \\frac{\\delta t (\\sigma \\epsilon_{k - 1}T_{a_{k - 1}}^4 + \\sigma \\epsilon_{k + 1}T_{a_{k + 1}}^4 - 2\\epsilon_k\\sigma T_{a_k}^4)}{C_a}\n        \\label{eq:atmos change layer}\n    \\end{equation}\n    \\begin{equation}\n        \\Delta T_{a_k} = \\frac{\\delta t \\sigma (\\epsilon_{k - 1}T_{a_{k - 1}}^4 + \\epsilon_{k + 1}T_{a_{k + 1}}^4 - 2\\epsilon_kT_{a_k}^4)}{C_a}\n        \\label{eq:atmos change layer improved}\n    \\end{equation}\n    \\begin{equation}\n        \\Delta T_p = \\frac{\\delta t (S + \\sigma(4\\epsilon_pT_a^4 - 4T_p^4))}{4C_p}\n        \\label{eq:surface change improved}\n    \\end{equation}\n\\end{subequations}
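\n\nTo see the boundary handling of \\autoref{eq:atmos change layer} concretely before folding it into the full model loop, here is a one-column Python sketch; the constants are illustrative assumptions, not the model's values:\n\\begin{verbatim}\nimport numpy as np\n\nsigma = 5.67e-8   # Stefan-Boltzmann constant\nC_a = 1e7         # illustrative heat capacity\ndt = 60.0         # illustrative time step, delta t\n\ndef layer_deltas(T_p, T_a, eps):\n    # T_a and eps have length nlevels; layer -1 is the surface (T_p)\n    # and layer nlevels is space, with temperature 0.\n    n = len(T_a)\n    dT = np.zeros(n)\n    for k in range(n):\n        below = T_p**4 if k == 0 else eps[k - 1] * T_a[k - 1]**4\n        above = 0.0 if k == n - 1 else eps[k + 1] * T_a[k + 1]**4\n        dT[k] = dt * sigma * (below + above - 2 * eps[k] * T_a[k]**4) / C_a\n    return dT\n\\end{verbatim}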
The new code can be found in \\autoref{alg:temperature layer}.\n\n\\begin{algorithm}[hbt]\n    \\SetAlgoLined\n\n    \\While{\\texttt{TRUE}}{\n        \\For{$lat \\in [-nlat, nlat]$}{\n            \\For{$lon \\in [0, nlot]$}{\n                \\For{$layer \\in [0, nlevels]$}{\n                    $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t ((1 - a[lat, lon])S + \\sigma(4\\epsilon[0](T_a[lat, lon, 0])^4 - 4(T_p[lat, lon])^4))}\n                    {4C_p[lat, lon]}$ \\;\n                    \\uIf{$layer == 0$}{\n                        $T_a[lat, lon, layer] \\leftarrow T_a[lat, lon, layer] + \\frac{\\delta t \\sigma((T_p[lat, lon])^4 - 2\\epsilon[layer](T_a[lat, lon, layer])^4)}\n                        {\\rho[lat, lon, layer]C_a\\delta z[layer]}$ \\;\n                    }\\uElseIf{$layer == nlevels - 1$}{\n                        $T_a[lat, lon, layer] \\leftarrow T_a[lat, lon, layer] + \\frac{\\delta t \\sigma(\\epsilon[layer - 1](T_a[lat, lon, layer - 1])^4 - 2\\epsilon[layer](T_a[lat, lon, layer])^4)}\n                        {\\rho[lat, lon, layer]C_a\\delta z[layer]}$ \\;\n                    }\\uElse{\n                        $T_a[lat, lon, layer] \\leftarrow T_a[lat, lon, layer] + \\frac{\\delta t \\sigma(\\epsilon[layer - 1](T_a[lat, lon, layer - 1])^4 + \\epsilon[layer + 1]T_a[lat, lon, layer + 1] \n                        - 2\\epsilon[layer](T_a[lat, lon, layer])^4)}{\\rho[lat, lon, layer]C_a\\delta z[layer]}$ \\;\n                    }\n                    $t \\leftarrow t + \\delta t$ \\;\n                }\n            }\n        }\n    }\n    \\caption{The main loop of the temperature calculations}\n    \\label{alg:temperature layer}\n\\end{algorithm}\n\nWe also need to initialise the $\\epsilon$ value for each layer. We do that in \\autoref{alg:epsilon}.\n\n\\begin{algorithm}\n    $\\epsilon[0] \\leftarrow 0.75$ \\;\n    \\For{$i \\in [1, nlevels]$}{\n        $\\epsilon[i] \\leftarrow 0.5\\epsilon[i - 1]$\n    }\n    \\caption{Intialisation of the insulation of each layer (also known as $\\epsilon$)}\n    \\label{alg:epsilon}\n\\end{algorithm}\n\nNow we need to add vertical winds, or in other words add the $w$ component of the velocity vectors. We do that by editing \\autoref{alg:stream3}. We change it to \\autoref{alg:velocity}. Here we \nuse gravity ($g$) instead of the coriolis force ($f$) and calculate the change in pressure. Therefore we need to store a copy of the pressure before we do any calculations. This needs to be a\ncopy due to aliasing \\footnote{Aliasing is assigning a different name to a variable, while it remains the same variable. Take for instance that we declare a variable $x$ and set it to be $4$. \nThen we say $y \\leftarrow x$, which you might think is the same as saying they $y \\leftarrow 4$ but behind the screen it is pointing to $x$. 
\n\nNow we need to add vertical winds, or in other words add the $w$ component of the velocity vectors. We do that by editing \\autoref{alg:stream3}. We change it to \\autoref{alg:velocity}. Here we \nuse gravity ($g$) instead of the Coriolis force ($f$) and calculate the change in pressure. Therefore we need to store a copy of the pressure before we do any calculations. This needs to be a\ncopy due to aliasing \\footnote{Aliasing is assigning a different name to a variable, while it remains the same variable. Take for instance that we declare a variable $x$ and set it to be $4$. \nThen we say $y \\leftarrow x$, which you might think is the same as saying $y \\leftarrow 4$, but behind the scenes it is pointing to $x$. So if $x$ changes, then so does $y$.}\n\n\\begin{algorithm}\n    $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon)$ \\;\n    $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon)$ \\;\n    $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon)$ \\;\n    $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon)$ \\;\n    $S_{px} \\leftarrow \\texttt{gradient\\_x}(p, lat, lon)$ \\;\n    $S_{py} \\leftarrow \\texttt{gradient\\_y}(p, lat, lon)$ \\;\n    \\While{\\texttt{TRUE}}{\n        \\For{$lat \\in [1, nlat - 1]$}{\n            \\For{$lon \\in [0, nlon]$}{\n                \\For{$layer \\in [0, nlevels]$}{\n                    $u[lat, lon, layer] \\leftarrow u[lat, lon, layer] + \\delta t \\frac{-u[lat, lon, layer]S_{xu} - v[lat, lon, layer]S_{yu} + f[lat]v[lat, lon, layer] - S_{px}}{\\rho}$ \\;\n                    $v[lat, lon, layer] \\leftarrow v[lat, lon, layer] + \\delta t \\frac{-u[lat, lon, layer]S_{xv} - v[lat, lon, layer]S_{yv} - f[lat]u[lat, lon, layer] - S_{py}}{\\rho}$ \\;\n                    $w[lat, lon, layer] \\leftarrow w[lat, lon, layer] - \\frac{p[lat, lon, layer] - p_o[lat, lon, layer]}{\\delta t\\rho[lat, lon, layer]g}$ \\;\n                }\n            }\n        }\n\n        $p_o \\leftarrow copy(p)$ \\;\n    }\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:velocity}\n\\end{algorithm}\n\nLastly, we need to add the correct indices to the advection algorithm \\autoref{alg:advectionfix}. Let us add it, with \\autoref{alg:advection layer} as a result. Here the ':' means all indices \nof the 3 dimensional matrix.\n\n\\begin{algorithm}\n    $\\alpha_a \\leftarrow 2 \\cdot 10^{-5}$ \\;\n    $\\alpha_p \\leftarrow 1.5 \\cdot 10^{-6}$ \\;\n    $boundary \\leftarrow 7$ \\;\n    \\While{\\texttt{TRUE}}{\n        $T_{add} \\leftarrow T_a + \\delta t \\alpha_a \\nabla^2(T_a) + \\nabla(T_a)$ \\;\n        $T_a \\leftarrow T_a - 0.5T_{add}[boundary:-boundary, :, :] \\text{ //Only subtract } T_{add} \\text{ from } T_a \\text{ for indices in the interval } [-nlat + boundary, nlat - boundary]$. 
\\;\n        $\\rho[boundary: -boundary, :, :] \\leftarrow \\rho - 0.5(\\delta t \\nabla \\rho) \\text{ //Only change the density for indices in the interval } [-nlat + boundary, nlat - boundary]$ \\;\n        $T_p \\leftarrow T_p + \\delta t \\alpha_p \\nabla^2(T_p)$ \\;\n    }\n    \\caption{The main loop for calculating the effects of advection}\n    \\label{alg:advection layer}\n\\end{algorithm}", "meta": {"hexsha": "8493d2da7b2ea6730ea4a18583eeb549e6fc4eaa", "size": 16810, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex-docs/streams/Stream5.tex", "max_stars_repo_name": "balintf/claude", "max_stars_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex-docs/streams/Stream5.tex", "max_issues_repo_name": "balintf/claude", "max_issues_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex-docs/streams/Stream5.tex", "max_forks_repo_name": "balintf/claude", "max_forks_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.1272727273, "max_line_length": 197, "alphanum_fraction": 0.6270672219, "num_tokens": 5214, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.6959583187272711, "lm_q1q2_score": 0.5640595450139179}}
{"text": "\\subsection{Maximum Ratio Combining based Macro diversity}\nDue to the fact that a transmitted packet theoretically can be simultaneously received by all BS (ignorance of background noise), linear combining of signals at each BS can be leveraged so that the output SINR is maximized. Such a scheme is called Maximum Ratio Combining (MRC) based Macro diversity\\qsong{It is better to give a reference to say this is studied by some LPWAN networks}. In this section, the performance of MRC-based Macro diversity in one-shot random access is evaluated.\n\nIn the literature, it has been prove that if the weigh factors involved in MRC context is well designed, the output $\\text{SINR}$ $\\Theta$ of best combiner is expressed as \\qsong{Perhaps we also need a reference for this...}:\n\\begin{align}\n\t\\Theta = \\sum_{y_j \\in \\Phi_{b}}^{} \\text{SINR}_{y_j},\n\\end{align}\nwhere $\\text{SINR}_{y_j}$ is the received SINR at BS with label $y_j$.\n\nThe Laplace Transform (LT) of $\\Theta$ is by definition as follows:\n\\begin{align}\n\t\\label{eq:lt-sinr-mrc-1}\n\t\\mathcal{L}_{\\Theta}\\left( s \\right) &= \\mathbb{E}\\left[ e^{-s\\Theta}\\right] \\nonumber\\\\\n\t&=\\mathbb{E}\\left[ \\exp( -s \\sum_{y_j \\in \\Phi_{b}}^{} \\Theta_{y_j} )\\right]\n\\end{align}\n\nRecall that the received SINR of target BS is defined as:\n\\begin{align}\n\t\\theta &= \\frac{H \\exp(\\chi) r^{-\\gamma}}{I} \\nonumber\\\\\n\t&= \\epsilon r^{-\\gamma}\n\\end{align}\nwhere term $\\epsilon = H\\exp(\\chi)  / I$ is identically independently distributed RV for each BS.\n\\begin{align}\n\t\\label{eq:lt-sinr-mrc-2}\n\t\\mathcal{L}_{\\Theta}\\left( s \\right) &= \\mathbb{E}\\left[ \\prod_{y_j \\in \\Phi_b}^{} \\mathbb{E}_{\\epsilon} \\left[ \\exp( -s \\epsilon r^{-\\gamma}) \\right] \\right] \n\\end{align}\nApplying Campbell theorem to $(\\ref{eq:lt-sinr-mrc-1})$,\n\\begin{align}\n\t\\mathcal{L}_{\\Theta}\\left( s \\right) &= \\exp \\left\\lbrace -\\mathbb{E}_{\\epsilon}\\left[ \\int_{0}^{+\\infty} \\left(1-\\exp(-s\\epsilon r^{-\\gamma} )2 \\pi \\lambda_{b} r dr\\right)  \\right]   \\right\\rbrace \n\\end{align}\n\nLet us focus on the integral $D = \\int_{0}^{+\\infty} \\left(1-\\exp(-s\\epsilon r^{-\\gamma} ) \\right) 2 \\pi \\lambda_{b} r dr $:\n\\begin{align}\n\t\\label{itg:D}\n\tD &= \\int_{0}^{+\\infty} \\left(1-\\exp(-s\\epsilon r^{-\\gamma/2} ) \\right) \\pi \\lambda_{b} dr \\nonumber\\\\\n\t&= \\pi \\lambda_{b} \\int_{0}^{+\\infty} \\left(1-\\exp(-s\\epsilon r^{-\\gamma/2} ) \\right) \\frac{2}{\\gamma} x^{\\frac{2}{\\gamma}  - 1} dx \\text{ (substitution $x = r^{\\gamma/2}$)}\n\\end{align}\nTo further calculate $(\\ref{itg:D})$, we note that it is the expected value of RV $ \\left(    (X/\\epsilon s) ^{-1} \\right)^{\\frac{2}{\\gamma}} $ where $X$  is an exponential RV with unit mean. Since $\\mathbf{E}\\left[ X^{p}\\right] = \\Gamma(1+p)$ by the definition of the gamma function. 
Hence, we have:\n\\begin{align}\n\t\\mathbb{E} \\left[ \\left(    X/(\\epsilon s)  \\right)^{-\\frac{2}{\\gamma}}  \\right] = (s \\epsilon) ^{\\frac{2}{\\gamma}} \\Gamma (1 - \\frac{2}{\\gamma}) \\nonumber.\n\\end{align}\n \nFinally, $(\\ref{eq:lt-sinr-mrc-2})$ is simplified as:\n\\begin{align}\n\t\\mathcal{L}_{\\Theta}\\left( s \\right) &= \\exp(-\\lambda_{b} \\pi \\mathbb{E}\\left[ \\epsilon ^{\\frac{2}{\\gamma}} \\right]  \\Gamma(1-\\frac{2}{\\gamma}) s^{\\frac{2}{\\gamma}})\n\\end{align}\n\nLet us now focus on the term $\\mathbb{E}\\left[ \\epsilon ^{\\frac{2}{\\gamma}} \\right] $:\n\\begin{align}\n\t\\mathbb{E}\\left[ \\epsilon ^{\\frac{2}{\\gamma}} \\right]  = \\mathbb{E}\\left[ \\left( H \\exp(\\chi)\\right)  ^{\\frac{2}{\\gamma}} \\right] \\mathbb{E}\\left[ I ^{-\\frac{2}{\\gamma}}\\right] \n\\end{align}\n\nThe term $\\mathbb{E}\\left[ I ^{-\\frac{2}{\\gamma}}\\right] $ is a negative fractional moment calculation problem, which is also a research subject in the applied mathematics domain. \\qsong{Still need to find some numerical technique in the literature to compute this. I think it is the last difficulty to be solved if we want to obtain a general analytical framework...}\n\nWith the substitution $s = -i\\omega$, the characteristic function (CF) of $\\Theta$ is:\n\\begin{align}\n\\phi_{\\Theta}\\left( \\omega \\right) &= \\exp(-\\lambda_{b} \\pi \\mathbb{E}\\left[ \\epsilon ^{\\frac{2}{\\gamma}} \\right]  \\Gamma(1-\\frac{2}{\\gamma}) \\exp(-i\\pi/\\gamma) \\omega^{\\frac{2}{\\gamma}}),  \n\\end{align}\nwhere $\\omega \\geq 0$.\n\nAs a continuous random variable, the cumulative distribution function $F_{\\Theta}\\left( x \\right)$ of the total SINR $\\Theta$ can be directly derived from its characteristic function $\\phi_{\\Theta}\\left(\\omega\\right)$, for example by use of the Gil-Pelaez theorem~\\cite{gil1951note}. However, directly using the Gil-Pelaez theorem is computationally expensive. Applying mathematical techniques used in the finance domain~\\cite{hirsa2012computational}, we seek to calculate the Fourier transform of $e^{-\\eta x} F_{\\Theta}\\left( x \\right)$, where the term $e^{-\\eta x}$ is a damping function with $\\eta > 0$. 
\n\\begin{align}\n\\label{eq:intermediate_formula_1}\n\\int_{-\\infty}^{+\\infty} e^{i\\omega x} e^{-\\eta x} F_{\\Theta}\\left( x \\right) dx = \\frac{1}{\\eta - i\\omega} \\phi_{\\Theta}\\left( \\omega +i\\eta \\right) \n\\end{align}\nApplying Fourier inversion to ($\\ref{eq:intermediate_formula_1}$), we obtain the expression for $F_{\\Theta}\\left( x \\right)$ as follows:\n\\begin{align}\n\\label{eq:pr_c_m_case2}\nF_{\\Theta}\\left( x \\right)  &= \\frac{e^{\\eta x}}{2\\pi} \\int_{-\\infty}^{+\\infty} e^{-i \\omega x} \\frac{1}{\\eta - i\\omega} \\phi_{\\Theta}\\left( \\omega +i\\eta\\right) d\\omega  \\nonumber\\\\\n&= \\frac{e^{\\eta x}}{\\pi} \\Re\\left\\lbrace  \\int_{0}^{+\\infty} e^{-i \\omega x} \\frac{1}{\\eta - i\\omega} \\phi_{\\Theta}\\left( \\omega +i\\eta\\right) d\\omega\\right\\rbrace. \n\\end{align}\nThe cumulative distribution function $F_{\\Theta}\\left( x \\right)$ can now be derived directly from ($\\ref{eq:pr_c_m_case2}$) using a single numerical integration.\n\n\\subsection{A special case, $\\gamma=4$}\nWe now study the special case where $\\gamma=4$. First,\n\\begin{align}\n\t\\mathbb{E}\\left[ \\left( H \\exp(\\chi)\\right)  ^{\\frac{1}{2}} \\right]  = \\Gamma(1+\\frac{1}{2}) \\exp \\left( \\left( \\frac{\\sqrt{2}\\sigma}{4}\\right) ^2 \\right) = \\Gamma(\\frac{3}{2}) \\exp \\left( \\frac{\\sigma^2}{8} \\right)\n\\end{align}\nReference~\\cite{haenggi2009interference} gives the PDF of the cumulative interference $I$ suffered by a device at the origin:\n\\begin{align}\n\tf_{I}(x) = \\frac{p\\lambda_{m}}{4} (\\frac{\\pi}{x})^{\\frac{3}{2}} \\exp(-\\frac{\\pi^4 p^2\\lambda_{m}^2}{16x}), \\quad x \\geq 0\n\\end{align}\nWe can get the PDF of the cumulative interference $I$ in the presence of Rayleigh fading and log-normal shadowing just by scaling $\\lambda_{m}$ as $ \\lambda_{m} \\exp(\\frac{\\sigma^2}{8})$:\n\\begin{align}\n\tf_{I}(x) = \\frac{    p\\lambda_{m}   \\exp(\\frac{\\sigma^2}{8})  }{4} (\\frac{\\pi}{x})^{\\frac{3}{2}} \\exp(    -\\frac{    \\pi^4    p^2\\lambda_{m}^2     \\exp(\\frac{\\sigma^2}{4})    }{    16x    }    ), \n\\end{align} \nHence,\n\\begin{align}\n\t\\mathbb{E}\\left[ I ^{-\\frac{1}{2}}\\right] &= \\int_{0}^{+\\infty} x^{-\\frac{1}{2}} f_{I}(x) dx \\nonumber\\\\\n\t&= \\frac{4}{    \\pi^{\\frac{5}{2}}  p\\lambda_{m} \\exp(\\frac{\\sigma^2}{8}) }\n\\end{align}\nand\n\\begin{align}\n\t\\mathbb{E}\\left[ \\epsilon ^{\\frac{1}{2}} \\right] & = 4\\Gamma(\\frac{3}{2})\\pi^{-\\frac{5}{2}} p^{-1}\\lambda_{m}^{-1}\\nonumber\\\\\n\t&=2\\pi^{-2} p^{-1}\\lambda_{m}^{-1}\n\\end{align}\nTherefore:\n\\begin{align}\n\t\\mathcal{L}_{\\Theta}\\left( s \\right) &= \\exp(-\\lambda_{b} \\pi 2\\pi^{-2} p^{-1}\\lambda_{m}^{-1} \\Gamma(\\frac{1}{2}) s^{\\frac{1}{2}}) \\nonumber \\\\\n\t&= \\exp(-2\\lambda_{b} p^{-1}\\lambda_{m}^{-1}\\pi^{-\\frac{1}{2}}  s^{\\frac{1}{2}}) \n\\end{align}\nIts CDF and CCDF are:\n\\begin{align}\n\tF_{\\Theta} (x) &=1 -\\erf \\left( \n\t\\frac{\\lambda_{b}p^{-1}\\lambda_{m}^{-1}}{\\sqrt{\\pi x}}\n\t\\right) \\\\\n\tF^c_{\\Theta} (x) &=\\erf(\\frac{\\lambda_{b} p^{-1}\\lambda_{m}^{-1}}{\\sqrt{\\pi x}}) \\nonumber\\\\\n\t&= \\erf(\\frac{1}{\\frac{p\\lambda_{m}}{\\lambda_{b}}\\sqrt{\\pi x}})\n\\end{align}
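\n\nAs a quick numerical sanity check (a sketch with purely illustrative parameter values, not those used elsewhere in this chapter), the damped-CF inversion ($\\ref{eq:pr_c_m_case2}$) can be evaluated with a single quadrature and compared against the closed-form expression above for $\\gamma = 4$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import erf\n\n# Illustrative parameters (assumptions for this sketch only).\nlam_b, lam_m, p, eta = 0.08, 1.0, 0.5, 1.0\nc = 2 * lam_b / (p * lam_m * np.sqrt(np.pi))  # L_Theta(s) = exp(-c*sqrt(s))\n\ndef phi(z):\n    # CF at a complex argument: phi(z) = L_Theta(-i*z).\n    return np.exp(-c * np.sqrt(-1j * z))\n\ndef cdf_inversion(x):\n    f = lambda w: (np.exp(-1j * w * x) * phi(w + 1j * eta)\n                   / (eta - 1j * w)).real\n    val, _ = quad(f, 0.0, np.inf, limit=500)\n    return np.exp(eta * x) * val / np.pi\n\ndef cdf_closed_form(x):\n    return 1.0 - erf(lam_b / (p * lam_m * np.sqrt(np.pi * x)))\n\nfor x in (0.1, 1.0, 10.0):\n    print(x, cdf_inversion(x), cdf_closed_form(x))\n\\end{verbatim}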
\n\nThe packet loss rate performance in the case of applying maximum ratio combining is illustrated in Fig.~\\ref{fig:newpacketlossratemprmrc}. It is not surprising to observe that maximum ratio combining brings a significant performance improvement.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{/Users/qsong/Documents/slotted_aloha_related_project/multiple_recepteur_capacity/new_packet_loss_rate_mpr_mrc}\n\t\\caption{Network packet loss rate with respect to normalized load $p\\lambda_{m}/\\lambda_{b}$ (ANA=analytical, SIM=simulation)}\n\t\\label{fig:newpacketlossratemprmrc}\n\\end{figure}\n", "meta": {"hexsha": "08aea70794bbc8faef561297b7f357bfc1b2b352", "size": 8126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter5/bs_rx_divers_mrc.tex", "max_stars_repo_name": "hansomesong/PhD-Thesis", "max_stars_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter5/bs_rx_divers_mrc.tex", "max_issues_repo_name": "hansomesong/PhD-Thesis", "max_issues_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter5/bs_rx_divers_mrc.tex", "max_forks_repo_name": "hansomesong/PhD-Thesis", "max_forks_repo_head_hexsha": "4bdd0a41012030398a23ae16a66d5a02631f76f5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.8644067797, "max_line_length": 574, "alphanum_fraction": 0.662810731, "num_tokens": 2929, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.6959583187272711, "lm_q1q2_score": 0.5640595386421111}}
{"text": "\\section{Mathematical Modeling}\n\\label{sec:mathmodel}\n\n% \\begin{itemize}\n% \\item \\st{unstable stratified boundary layers (raleigh number estimate)}\n% \\item \\st{justify incompressible N-S}\n% \\item \\st{justification of far-field eddy-viscosity model (M-O)}\n% \\item modeling eddy-viscosity in device \n% \\item vane and turbine representation via penalty function // immersed boundary method\n% \\item cone representation\n% \\end{itemize}\n\n%remember that \\st{} is strikethrough\n%\n% should this all be math modeling?\n%\n\nThe aim of the proposed work is to simulate synthetic\ndust devils in the field. This requires a model of the ambient\nconditions for a representative case, such as Arizona, where\nexperimental data is available from tests that have been\nperformed. Furthermore, for this to be more generally useful in the\nprediction of flows in a variety of conditions, we need a model generally\napplicable to any flow near the surface of the earth.  \n\nThis section details an analysis of surface fluid mechanics, and\ndevelops a mathematical model for turbulence in a thermally stratified\nmedium. We seek to emulate the operation of the apparatus during the\nday, when dust devils are observed to form readily. \nAt these times, the atmospheric surface layer has the following\ncharacter. Incident radiation from the Sun does not significantly\ninteract with the air, which is nearly transparent. Instead, this\nradiation is absorbed by the ground, which causes its temperature to\nrise. This results in a thermal gradient between the hot ground and the\ncooler air. The warm ground conducts heat to the air, causing expansion\nand lowering the density of the air. This reduced density air near the\nsurface is driven upwards by the force of buoyancy.  \n\nFor sufficiently large temperature gradients, the hot surface layer is\nunstable, and as the warm air is driven upwards the flow will transition\nto turbulence. For the typical use case we consider, namely Arizona in\nsummer, the temperature difference can be in excess of 30 Kelvin. \nRayleigh numbers associated with temperature gradients of this magnitude\nare between $10^{9} - 10^{11}$ and therefore meet the criterion\n% ref thermo book here?\nfor transition to a turbulent regime. The flow is that of an unstably\nstratified fluid.  \n\nThis section begins by describing the governing equations of the system\nof interest. It then proceeds to the development of a viscosity model\nused to resolve the large scale features of the solution. Next, models\nused to represent the vanes and turbine, as well as the separation of\nfluid off of these modeled surfaces, are introduced.  Finally, the\nmodels for the computational domain extent and the boundary conditions\nare discussed. \n\n%Note that a complete numerical specification of all the model\n%parameters introduced in this section are provided in a table in\n%appendix \\ref{app:model_param}.\n\n\\subsection{Equations of Fluid Motion}\n\\label{sub_sec:ns_en}\n%\n% do I need to justify this more? 
These are pretty critical, after all\n%\n\nThe equations describing fluid flow with natural convection are,\n\\begin{align}\n  \\frac{\\partial u}{\\partial t} + u \\cdot \\nabla u =& \\,\n  -\\frac{1}{\\rho}\\nabla P + \\nu \\nabla^2 u - g \\frac{T'}{T_0}\\\\\n  \\nabla \\cdot u =& \\, 0 \\\\\n  \\rho c_p \\left( \\frac{\\partial T}{\\partial t} + u \\cdot \\nabla T \\right) =& \\, \\nabla\n \\cdot ( k \\nabla T)\n\\end{align} \nunder the assumption that the temperature variation is small in\ncomparison to the mean temperature of the region. These are the\nincompressible Navier-Stokes equations with the Boussinesq approximation, a representation\nof buoyancy, coupled with the heat equation.  \n%\n% in full document be sure to mention that neglecting coriolis is legit\n% below 50ms\n%\n%\nAs discussed above, we anticipate that the flow will be\nturbulent. Turbulence significantly alters the character of the flow,\nand necessitates either resolving the resulting small scales or\nproviding a model that emulates their impact. In this case, a\nReynolds Averaged Navier-Stokes (RANS) formulation is used, where the \nturbulent viscosity and thermal conductivities are permitted to vary in\nspace, and the diffusivities are decomposed into constant laminar and varying\nturbulent and vane components,  \n\n\\begin{eqnarray*}\n \\nu =& \\nu_{l} + \\nu_{T}(z) + \\nu_{V}(r,z) \\\\\n K =& K_{l} + K_{T}(z) + K_{V}(r,z).\n\\end{eqnarray*}\n\nThis is an effective eddy viscosity model, and the subsequent two\nsections will elaborate on the spatial dependence and character of\n$\\nu_T$ and $K_T$, and of the device components $\\nu_V$ and $K_V$ (written $\\nu_C$ and $K_C$ below). The laminar, base diffusivities \nare $\\nu_l$ and $K_l$, which do not vary in space. \n\n\\subsection{Viscosity Model}\n\nWe use the well-known similarity model of Monin and\nObukhov\\cite{monin2007statistical,monin1954basic} as a guide to the\nspecification of an eddy viscosity model to describe the vertical mixing\nin the atmosphere. This formulation is an extension of the mixing-length\nmodel of Prandtl, where the concepts of gradient diffusion and mixing\nlength were generalized to thermally stratified flow.   \n\n%\n% justify prandtl assumption here\n%\n\nMonin and Obukhov argued that under statistically stationary, horizontally\nhomogeneous conditions, the dynamics of any mean turbulent quantity\n($\\bar f$) in a thermally stratified medium depend only on,  \n\n\\begin{equation}\n\\bar f = f(z,\\frac{g}{T_0},\\rho_0,\\nu_l,K_l,u^*,q).\n\\end{equation}\nAside from near the surface, the laminar diffusivities $\\nu_l$ \nand $K_l$ will be  \nsmall compared to their turbulent counterparts, $\\nu_T$ and $K_T$, and \nare therefore negligible. \nThe remaining five parameters are: the distance from the ground, $z$; the\nbuoyancy coefficient, $\\frac{g}{T_0}$; the density of the fluid,\n$\\rho_0$; a velocity scale, $u^*$ (in particular, the freestream\nvelocity); and the heat flux to the ground, $q$. %\\todo{is this right?}\n% Likewise, if we define $z-z_0$ as an ``effective roughness\n% height'' or displacement distance, we can reasonably neglect $z_0$ from these\n% considerations. While the roughness height can be large (for instance in\n% a cornfield, where the roughness height could reasonably be several\n% meters), for the present study the expectation is that this roughness\n% height will be on the order of centimeters\\cite{oke1987boundary}, and\n% therefore negligible.  \n%\n% add refence to dynamical and physical meteorology \n% \nThese quantities depend on four dimensions: length, time, temperature\nand mass. 
Dimensional analysis implies that this mean turbulent quantity\n($\\bar f$) should then only be a function of a single dimensionless\ngroup\\cite{munson2012fundamentals}. This is chosen to be,\n\\begin{equation}\n \\xi = -\\frac{\\kappa \\frac{g}{T_0} \\frac{q}{c_p \\rho_0} z}{ {u^*}^3}.\n\\end{equation}\nwhere $\\kappa$ is the (dimensionless) Von-Karman constant. \nThe physical meaning of this quantity bears some discussion.  \nThe numerator, $\\kappa \\frac{g}{T_0} \\frac{q}{c_p \\rho_0} $, is\nproportional to the buoyant production of kinetic energy.  The\ndenominator, $\\frac{{u^*}^3}{z}$, is a shear production rate. \n\nThe non-dimensional group $\\xi$ is typically cast into the following \nform, \n\\begin{equation}\n \\xi = \\frac{z}{L_{M-O}}\n\\end{equation}\nwhere $L_{M-O}$ is the famous, ``Monin-Obukhov'' length,\n\\begin{equation}\n L_{M-O} = -\\frac{{u^*}^3}{\\kappa \\frac{g}{T_0} \\frac{q}{c_p \\rho_0}}. \n  \\label{eqn_mo_length}\n\\end{equation}\n\nThis length can be interpreted as the vertical location\nwhere the production of buoyantly generated kinetic energy is\napproximately equal to the energy generated by wind shear. When the\nmagnitude of $L_{M-O}$ is large, the flow is dominated by shear effects,\nand the impact of buoyancy is small. Conversely, a small magnitude of\n$L_{M-O}$ implies that buoyant effects largely dominate the kinetic\nenergy production. Notice also that the sign convention in Equation\n\\ref{eqn_mo_length} is such that for the systems we consider (q>0, heat\nflux from the surface to the air), $L_{M-O}$ will always be\nnegative. This is as expected, as the convection from the high\ntemperature surface to cooler air is unstable. \n\n%The mean quantity $\\bar f$ has a\n%functional representation to the effect,\nIn this case, appropriately normalized mean turbulent quantities should\nbe functions of only the non-dimensional group \n\\begin{equation}\n \\frac{\\bar f}{f_{MO}} = \\phi(\\frac{z}{L_{M-O}})\n\\end{equation}\nwith $f_{MO}$ a normalizing constant with units of $\\bar f$, and $\\phi$\nis a function only of $\\xi$. We are interested in the case where\n$\\xi<0$, which corresponds to heat flux from the ground into the\nair. For instance, the mean velocity field would have scaling,\n$\\frac{u^*}{\\kappa}$ and the temperature fields would be scaled as $T^*\n= \\frac{1}{\\kappa u^*} \\frac{q}{c_p \\rho_0}$. In this way, the mean\nvelocity and temperature fields would have the form,  \n\\begin{eqnarray}\n\\bar u(z) = \\frac{u^*}{\\kappa} \\phi_u(\\frac{z}{L_{M-O}}) \\\\\n\\bar T(z) = T^* \\phi_T(\\frac{z}{L_{M-O}}).\n\\end{eqnarray}\nAs a result, the vertical gradients of the velocity and temperature are\nnecessarily, \n\\begin{eqnarray}\n\\frac{\\partial \\bar u(z)}{\\partial z} = \\frac{u^*}{\\kappa L_{M-O}}\n \\varphi_u(\\frac{z}{L_{M-O}}) \\label{eq:uz} \\\\ \n\\frac{\\partial \\bar T(z)}{\\partial z} = \\frac{T^*}{L_{M-O}}\n \\varphi_T(\\frac{z}{L_{M-O}}) \\label{eq:tz}.\n\\end{eqnarray}\nNotice that $\\phi$ and $\\varphi$ are different universal functions. 
Eddy\nviscosity is defined as, $u'v' = \\nu_T \\frac{\\partial u}{\\partial z}$\\cite{durbin2001statistical}, in which case, using\nequation \\ref{eq:uz}, it must scale as\n\\begin{equation}\n \\nu_T = \\frac{{u^*}^2}{\\frac{\\partial \\bar u}{\\partial z}} = \\frac{u^*\n  \\kappa L_{M-O}}{\\varphi_u(\\xi)}.\n\\end{equation}\nMeanwhile, the eddy thermal diffusivity (defined as $q = c_p \\rho_0 K_T\n\\frac{\\partial T}{\\partial z}$) is \n\\begin{equation}\n K_T = \\frac{q/c_p \\rho_0}{\\frac{\\partial \\bar T}{\\partial z}} = \\frac{u^*\n  \\kappa L_{M-O}}{\\varphi_T(\\xi)}.\n\\label{eqn:eddy_kt}\n\\end{equation}\nNote the difference between $\\varphi_u$ and\n$\\varphi_T$; for turbulent Prandtl numbers near unity ($Pr_T\n\\approx 1$) the two functions are identical. The asymptotic behavior of\n$\\varphi_T$ and $\\varphi_u$ at large and small values of $\\xi$\nprovides guidance to the more general character of the functions. \nOur interest lies in the case where $L_{M-O}<0$, which corresponds to heat flux\nfrom the ground into the air.  \n%\n\n\nThe case where $\\xi \\to -\\infty $ implies $\\frac{z}{L_{M-O}} \\to\n-\\infty $ and $z \\gg L_{M-O}$. This is most readily interpreted as the instance\nwhere $u^* \\to 0$, e.g. the buoyancy-dominated case with no wind\n(free-convection). For this case, the eddy diffusivity $K_T$ can have no\ndependence on $u^*$. Scaling\nanalysis implies that the overall function will not depend on $u^*$ only\nwhen the function $\\varphi$ scales to the $-\\frac{4}{3}$ power. The\nfunction must then be\n\n\\begin{equation}\n K_T = \\frac{1}{C_T} \\left( \\frac{q}{c_p \\rho_0} \\frac{g}{T_0}\n      \\right)^\\frac{1}{3} z^{\\frac{4}{3}}  \\text{ \nfor } z \\gg L_{M-O}. \n\\end{equation}\n\nSo long as the Prandtl number remains constant in space, then\n% todo: provide discussion as to why this is not an unreasonable expectation\nidentical arguments regarding the asymptotic behaviour at large $\\xi$ provide\nthe analogous result for the eddy viscosity's variation with respect to\ndistance from the ground,  \n\\begin{equation}\n \\nu_T = \\frac{1}{C_{\\nu_T}} \\left( \\frac{q}{c_p \\rho_0} \\frac{g}{T_0}\n      \\right)^\\frac{1}{3} z^{\\frac{4}{3}}  \\text{ \nfor } z \\gg L_{M-O}. \n\\end{equation}\n\nThese functions have been found to be broadly applicable and\naccurate\\cite{Foken2006}, and are easily implemented in software.\n\n\\subsection{Eddy Viscosity in the Device}\n\n%However, this is also a more\n%difficult regime to model.\\todo{poor justification} \nThe validation process identified a refinement to the virtual vane\nformulation that results in a better representation of the vane\neffects in a broader range of flows. The thermal and momentum\ndiffusivities are even larger in the device where the flow across the\nvanes produces shear and generates turbulence. The model now includes an\nenhanced turbulent diffusivity in the vortical plume region to account\nfor the effects of vortex shedding from the trailing edge of the vanes,\nwhich is not represented in the virtual vane representation (discussed in\n\\ref{subsec:vane}). \n\n% To successfully accomplish this, source terms\n% for production of diffusivity were formulated to properly\n% account for the generation of turbulence in the region of the\n% vanes. This diffusivity would then convect and diffuse through space. 
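\n\nTo make the scalings concrete, the following Python sketch evaluates the Monin-Obukhov length and the free-convection diffusivity profile above; every numerical value is an illustrative assumption, and $C_T$ is a model constant to be calibrated:\n\\begin{verbatim}\nimport numpy as np\n\nkappa, g, T0, rho0, cp = 0.4, 9.81, 300.0, 1.2, 1004.0\nq = 400.0        # surface heat flux, W/m^2 (illustrative)\nu_star = 0.3     # friction velocity, m/s (illustrative)\nC_T = 1.0        # model constant\n\nbuoyancy_flux = (q / (cp * rho0)) * (g / T0)\nL_MO = -u_star**3 / (kappa * buoyancy_flux)   # Monin-Obukhov length (< 0)\n\ndef K_T_free_convection(z):\n    # Valid in the buoyancy-dominated limit z >> |L_MO|.\n    return buoyancy_flux**(1.0 / 3.0) * z**(4.0 / 3.0) / C_T\n\nprint(L_MO, K_T_free_convection(np.linspace(1.0, 100.0, 5)))\n\\end{verbatim}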
\n% To avoid modeling a temporal and three-dimensionally spatially\n% varying field of diffusivities, we have instead calibrated the field\n% based on data provided by our partners at Georgia Tech. \n\n%This calibration\n%is detailed in Section \\ref{sec:validation}. \nThe eddy-viscosity in the region of the vanes and interior is set based\non scaling relations for a turbulent self-similar circular jet, as\ndescribed in Pope\\cite{pope2000turbulent},\n \n\\begin{equation}\n  \\nu_C = U_0 y_{1/2} \\bar \\nu_C.\n  %\\nu_C(r) = U_0(r) y_{1/2}(r) \\bar \\nu_C\n\\end{equation}\n\nIn this equation, $U_0$ is the peak velocity, and $y_{1/2}$ the jet\nhalf-width (taken to be the dust devil half-width).  \n% In words, we are scaling the\n% calibrated viscosity by the velocity and length scale of the\n% apparatus. $\\bar \\nu_C $ is input, measured from the experimental\n% laboratory.  The diffusivity here is essentially a top hat filter, which\n% radial values interior to the vanes the nominal calibrated value, and\n% those outside the vanes zero, e.g. $\\nu_C(r>r_{\\text{vane}})=0$. \nThe dimensionless constant $\\bar \\nu_C $ is calibrated based on\nexperimental data, and is set to zero outside the device. \nThe thermal diffusivity inside the device, $K_C$, is then fixed with the \nassumption that the Prandtl number is unity.  \n\n%The thermal and momentum diffusivities are expected to be even larger in the\n%device where the flow across the vanes produces shear and generates\n%turbulence. Our model for the diffusivities inside the vanes should\n%therefore be higher than the ambient regions outside the vanes. \n\n\n\n\\subsection{Vane and Turbine Representation}\n\\label{subsec:vane}\nTo rapidly prototype general system configurations, the\ncomputations must be able to explore a large space of possible\ngeometries and settings. This presents a significant meshing and \ncomputational challenge if the detailed flow around the vanes is to be\ncomputed. In the region near the vanes, where a no-slip boundary\ncondition is imposed, the flow will necessarily form a thin momentum\nboundary layer. Resolving this boundary layer requires high resolutions\nimmediately adjacent to the walls. Changing the vane location requires\nthat a new mesh be generated.\nThis is a significant\nchallenge, as the development of a new mesh often requires significant\nhuman effort and time. Furthermore, the process is error-prone, \nand would require that each simulation using a new mesh undergo \ndetailed solution verification. \n\n\n% Your text on the virtual vanes does not provide enough information to\n% know exactly what we did. It is needlessly vague, and does not\n% adequately connect to the real vane geometry. I propose the following\n% more direct and more precise text. Further, the penalty nomenclature\n% is inappropriate.\n\nInstead, we have developed a modeling formulation that does not require\nexplicitly meshing the turning vanes, or any surface. These so-called\n``virtual vanes'' are implemented as a body force that \nis applied in the annular region that contains the vanes. Vane\ngeometry is specified by the angle $\\phi$ a vane makes with a radial\nline as a function of the radial coordinate $r$. 
A unit normal to vane\nsurfaces ${\\bf n}$ is defined as\n%\n\\begin{equation}\n {\\bf n}({\\bf x}) = \\sin \\left(\\phi(r) \\right) \\hat{{\\bf r}}+ \\cos\n  \\left(\\phi(r) \\right) \\hat{{\\bf \\theta}} \n\\end{equation}\n%\nwhere $\\hat{{\\bf r}}$ and $\\hat{{\\bf \\theta}}$ are unit\nvectors in the radial and azimuthal directions, respectively.\nWith this vane-normal vector field specified, a body force ${\\bf f_v}$\nis defined\nthat will drive the velocity in the ${\\bf n}$ direction toward zero,\neffectively turning the flow to be parallel to the vanes. The body\nforce is defined:\n\\begin{equation}\n {\\bf f}_v= -\\frac{1}{\\ell_v}|{\\bf u}|({\\bf u}\\cdot{\\bf n}){\\bf n}\n \\label{eqn:body_force}\n\\end{equation}\nwith ${\\bf u}$ the velocity and $\\ell_v$ a specified length\nscale. $\\ell_v$ represents the distance over which the\nflow evolves under the influence of the body force before the\nvelocity in the normal direction is reduced by a factor of $1/e$. In other words, this is the length scale over which the\nnormal component of the velocity decays exponentially.\nIt is a modeling constant and is specified to be\nthe same order as the separation distance between neighboring vanes in\nthe physical vane configuration, since entry lengths in internal flows\nscale with the width of the channel.\n\nThis virtual vane formulation is similar to the ``actuator disk'' model\ncommonly used to represent the rotor of a wind turbine \\cite{betz}. An actuator \ndisk model is under development and will be detailed in the dissertation.\n\n\\subsection{Solid Surface Representation}\n\\label{subsec:solid_surface}\n\nIn addition to vanes, the SoV device includes impermeable surfaces\nsuch as the wind break (``cone'') on the top of the facility. As with\nthe turning vanes, this is represented without explicitly meshing the\nsurface or imposing a boundary condition at the surface. This allows\nrapid exploration of configurations with different solid surfaces to\ncontrol and manipulate the fluid flow. These solid surfaces are\nrepresented by a body force acting in a region surrounding the wall. \nA body force normal to the surface is defined in this region so\nthat it will drive the normal velocity to zero, resulting in the flow\nmoving only parallel to the virtual surface. \nThe body force is defined as in Equation\n\\ref{eqn:body_force}; however, the length scale $\\ell_v$ is specified to\nbe identical to the width of the surface being represented. This is\ntypically the width of two or three grid cells. While the actual surface we are\nemulating is thinner than this, the numerical method has difficulty\nconverging for surfaces smaller than the grid size.  \n\nForcing models designed to mimic a surface\nare not unique to this project, and the current formulation is\n closely related to (among others)\n``immersed boundary methods'' as used by various other\nresearchers\\cite{doi:10.1146/annurev.fluid.37.061903.175743}. \nThis approach is unique in its use of Babuska's penalty treatment of\nconstraints\\cite{1973fempen,ZAMM:ZAMM19880680925} to enforce the\nbehavior at the boundary. This method was selected because it is easily\nimposed in the FEM context, and the method has been explored in detail\nin the literature. \n%Typically, the enforcement occurs along a domain\n%boundary, but in this work it is used in the interior, \n%and is not imposed as a mathematical constraint but rather as a modeling \n%approach. 
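\n\nFor concreteness, a pointwise evaluation of the virtual vane force can be sketched as follows (Python; the vane angle function and the value of $\\ell_v$ are illustrative assumptions):\n\\begin{verbatim}\nimport numpy as np\n\nl_v = 0.5   # decay length, of the order of the vane spacing (m)\n\ndef vane_normal(x, y, phi_of_r):\n    r = np.hypot(x, y)\n    r_hat = np.array([x, y]) / r\n    theta_hat = np.array([-y, x]) / r\n    return np.sin(phi_of_r(r)) * r_hat + np.cos(phi_of_r(r)) * theta_hat\n\ndef vane_body_force(u, n):\n    # f_v = -(1/l_v) |u| (u . n) n : damps the vane-normal velocity.\n    return -(np.linalg.norm(u) / l_v) * np.dot(u, n) * n\n\nn = vane_normal(1.0, 2.0, phi_of_r=lambda r: np.radians(60.0))\nf = vane_body_force(np.array([3.0, -1.0]), n)\n\\end{verbatim}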
\n\n% We add a penalty term to the weak\n% form of the Navier-Stokes momentum equation described in the subsequent\n% section %\\ref{eq:ns_weak} \n% that has the form, \n% \\begin{equation}\n% P_\\epsilon = \\int_\\Omega ({\\bf f_v} \\cdot v) \\, dx\n% \\end{equation}\n% where ${\\bf f_v}$ is as described in equation \\ref{eqn:body_force}. \n%% As the system is formulated as a variational problem that seeks to\n%% minimize the test function $v$, any velocity that is not aligned with\n%% the vane normal will incur a penalty versus one that does. Note that unlike\n%% some penalty methods, this does not automatically satisfy\n%% continuity. Rather, the velocity field remains divergence free through a\n%% separately enforced constraint.  \n\n\\subsection{Separation Model}\n\nIn the presence of wind, it was found that there was a significant flow\nout through the vanes in the back of the device. This was obviously\ninconsistent with the findings of our colleagues in the field, who\nobserved no outflows out of the back of the device. Moreover, this resulted\nin large inconsistencies between our predictions and the field results,\nalmost certainly because of the kinetic and thermal energy that our vane\nrepresentation was permitting to leave out the back of the device.  \n\nThis exposed a weakness of the turning vane representation outlined\npreviously. When the flow entered the virtual vane forcing region it was\nalways turned to align with the vane angle, even when the forcing was in\nthe opposite direction of the present velocity.\nThis is in contrast to the physical situation, in which we\nexpect the flow to continue along an averaged streamline separating from the \ntrailing edges of the vanes, instead of turning around it. \nThe averaged streamline will continue past the trailing edge of the vane\ndue to the separation of the boundary layer off the edge surface. An\nimage depicting these two cases in shown in Figure \\ref{fig:sep_model}.  \n\n\\begin{figure}[!htb]\n  \\begin{center}\n    \\includegraphics[width = 6 cm]{figs/sep_model}\n    \\caption{Schematic depicting the separation model that extends past\n   the trailing edge of the vanes. In the top case, the flow entering\n   the virtual vane region is forced to align with the vane angle despite\n   this resulting in a reversal of the flow direction. This is a\n   consequence of the forcing function acting on the fluid to ensure the\n   velocity vector aligns with the vane. \n   The second case depicts the separation\n   model, where the flow under certain conditions is not forced and\n   continues to move tangent to the vanes due to \n   the separation of the boundary layer off the trailing edge.} \n    \\label{fig:sep_model}\n  \\end{center}\n\\end{figure}\n\nLet $\\bf n^v$ be the normal vector to the vanes,\nand $\\bf n^r$ the normal vector pointing out of the vane\nregion\\footnote{\\normalsize The superscripts ``v'' and ``r'' stand for\nvane and radial, respectively.}.  
\nThen, $\\bf t^v$ is the tangential vector to the vanes pointing out of\nthe vane region and is defined as,\n\n%\\begin{equation}\n% \\bf t^v = \\left( {\\bf n^v_y},{\\bf -n_x^v} \\right) \\text{sign}\\left[\n%\t    \\left( {\\bf n^v_y},{\\bf -n_x^v} \\right) \\cdot {\\bf n^r} \\right].\n%\\end{equation}\n\\begin{equation}\n \\bf t^v = \\left( {\\bf n^{v^\\perp}} \\right) \\text{sign}\\left(\n\t    {\\bf n^{v^\\perp}} \\cdot {\\bf n^r} \\right).\n\\end{equation}\nHere, ${\\bf n^{v^\\perp}}$ is the vector perpendicular to the normal\nvector of the vanes, which is simply, \n\\begin{equation*}\n {\\bf n^{v^\\perp}} = \\left[ \\begin{array}{c}\nn_x\\\\\nn_y\\\\\n\\end{array}\\right]^{\\perp} = \n \\left[ \\begin{array}{c}\n  -n_y\\\\\n  n_x\\\\\n\t\\end{array}\\right].\n\\end{equation*}\n\n%\n% example of an algorithm\n%\n\\begin{center}\n \\begin{algorithm}\n  \\caption{The crude separation model. This model identifies if the flow\n  is coming into or out of the vane region, and if the velocity vector\n  is in the same direction as the tangent line of the vanes. In the case\n  of the ``special forcing'' the flow is forced as if it was\n  impacting a solid surface. In the\n  algorithm below, $r_0$ is the max radius of vanes, $r_i$ the minimum\n  radius of vanes, and $\\delta$ is the width of the separation region.}\n  \\label{alg:sep}\n  \\begin{algorithmic}[1]\n   \\If{($r_0 > r > r_i$)} \n   \\If{$(r_0 - r) < \\delta$ \\textbf{or} $((r - r_i) < \\delta)$}\n   \\State $\\overrightarrow{n^r} = \\overrightarrow{r}/|r|$\n   \\If{$(r - r_i) < \\delta$} \n   \\State $\\overrightarrow{n^r} = -\\overrightarrow{n^r}$\n   \\EndIf\n   \\State  $\\bf t^v = \\left( {\\bf n^{v^\\perp}} \\right) \\text{sign}\\left(\n\t    {\\bf n^{v^\\perp}} \\cdot {\\bf n^r} \\right)$\n   \\If{$v \\cdot t^v > 0$ \\textbf{and} $v \\cdot n^r < 0 $}\n   \\State ${\\bf n}({\\bf x}) = \\hat r$ \\quad (Special Forcing)\n   \\Else\n   \\State  ${\\bf n}({\\bf x}) = \\sin \\left(\\phi(r) \\right) \\hat{{\\bf r}}+ \\cos\n  \\left(\\phi(r) \\right) \\hat{{\\bf \\theta}} $\n  \\quad (Normal Forcing)\n   \\EndIf\n   \\Else\n   \\State ${\\bf n}({\\bf x}) = \\sin \\left(\\phi(r) \\right) \\hat{{\\bf r}}+ \\cos\n  \\left(\\phi(r) \\right) \\hat{{\\bf \\theta}} $\n  \\quad (Normal Forcing)\n   \\EndIf\n   \\EndIf\n  \\end{algorithmic}\n \\end{algorithm}\n\\end{center}\n%\n\nThe forcing is modified when the velocity vector of the local flow, $\\bf\nu$ is pointing in to the forcing region: ${\\bf u} \\cdot {\\bf n^r} < 0$, and\nwhen the velocity vector is in the same direction as the tangent line to\nthe vanes: ${\\bf u} \\cdot {\\bf t^v} > 0 $. In these instances, the\nforcing acts as if there was a rigid surface past the vane edge, and\ngives the appearance of a special ``no-penetration'' condition for the\nvelocity for these cases. The pseudo-code for this procedure is shown in\nAlgorithm~\\ref{alg:sep}.\n\nThe addition of this simple separation model significantly reduced the\nflow that penetrated the back of the vanes, and produces results\nconsistent with the observations provided by our experimental\ncolleagues.  
\n\n\\subsection{Effect of Surface Roughness}\n\n%%\n%% this does not describe the phenomena being modeled or the precise\n%% model -- rewrite\n%%\n%%\n%% this does not say enough about the surface roughness\n%% motivate that and explain how it is used, than show your estimate \n%% to argue it is small\n%%\n\nSurface roughness effects have been shown to play a role in the\nformation of dust devils and related atmospheric\nphenomena\\cite{oke1987boundary}. For the flat and sandy regions we are\nsimulating, the impact is expected to be a small velocity perturbation \nin the vertical direction. This is modeled as a volumetric \nforcing in a narrow region above the surface,\n\\begin{equation}\n F^{'''}_{z_0} = \\frac{1}{2}\\rho V_z^2/z_{0}, \n\\end{equation}\nwhere $z_{0}$ is the roughness height. We ensure that the energy\nintroduced into the flow is a small fraction of total flow energy by comparing\nthis with the energy flux through the top of the vanes. The total energy\nadded is measured as,  \n\\begin{equation}\n E_{\\text{injected}} = \\int_0^{2\\pi} \\int_0^R \\int_0^{z_0} F^{'''}_{z_0}\n  dz dr d\\theta.  \n\\end{equation}\nR is the outer diameter of the vanes. \nThe value of $E_{\\text{injected}}$ is typically a few percent of the\ntotal kinetic energy flux measured through the top of the\nvanes.\n\n%This general forcing provides additional capabilities including the\n%ability to investigate engineering greater surface roughness or\n%structures that could provide greater ``kick-up'' of the thin thermal\n%layer near the surface into the flowing regime. It can also support more\n%general turning configurations than the virtual vanes outlined above. \n\n\\subsection{Simulation Geometry and Boundary Conditions}\n\\label{sec:bc}\n\nIn this project, two principle modeling regimes are considered. \nOne is the ``thermal-only'' scenario, in which there are no ambient\nvelocities and there is an imposed elevated temperature on the ground.  \nIn the other, there are also ambient winds that contribute to the SoV energy\n(``wind'' cases). \nThe computational domain and boundary conditions for these \ntwo scenarios are described below.\n\n\\textbf{Computational Domain} \n\nAll simulations are performed in a cuboid domain, with six\nfaces.  The domain is denoted $\\Omega \\subset \\mathbb{R}^3$. \nThe domain extents are scaled by the system diameter, D, created by the\nouter vane radius. The extents are defined in terms of $\\{L_x,L_y,L_z\\}$ indicating the \nstreamwise, spanwise and vertical directions, respectively. \nFor both simulation regimes, sensitivity analyses \nwere performed to ensure that the results were not sensitive \nto the domain extents. For the thermal-only case, for which $L_x = L_y$,\nthe system \nextents $L_x/D$ and $L_y/D$ are chosen to be 3. The height ($L_z/D$) is\nthree times the system diameter, which is typically nearly equal to the\nheight of the vanes. This defines the thermal-only domain $\\Omega_T$, \nas $\\Omega_T = \\left[-L_x,L_x \\right] \\times \\left[-L_y,L_y \\right]\n\\times \\left[0,L_z \\right]$.   \n\nFor the wind cases, the streamwise extent is no longer equal to\nthe spanwise length, $L_y$. In these cases, the domain length extends\ntwo diameters in front of the vanes and three behind. The\nspanwise direction is symmetric and extends two diameters in each direction \nfrom the center ($L_y/D = 2$). The height is identical to the\nthermal-only case, at three system diameters ($L_z/D = 3$). 
\n\n\\subsection{Simulation Geometry and Boundary Conditions}\n\\label{sec:bc}\n\nIn this project, two principal modeling regimes are considered. \nOne is the ``thermal-only'' scenario, in which there are no ambient\nvelocities and there is an imposed elevated temperature on the ground.  \nIn the other, there are also ambient winds that contribute to the SoV energy\n(``wind'' cases). \nThe computational domain and boundary conditions for these \ntwo scenarios are described below.\n\n\\textbf{Computational Domain} \n\nAll simulations are performed in a cuboid domain, with six\nfaces.  The domain is denoted $\\Omega \\subset \\mathbb{R}^3$. \nThe domain extents are scaled by the system diameter, D, created by the\nouter vane radius. The extents are defined in terms of $\\{L_x,L_y,L_z\\}$ indicating the \nstreamwise, spanwise and vertical directions, respectively. \nFor both simulation regimes, sensitivity analyses \nwere performed to ensure that the results were not sensitive \nto the domain extents. For the thermal-only case, for which $L_x = L_y$,\nthe system \nextents $L_x/D$ and $L_y/D$ are chosen to be 3. The height ($L_z/D$) is\nthree times the system diameter, which is typically nearly equal to the\nheight of the vanes. This defines the thermal-only domain $\\Omega_T$, \nas $\\Omega_T = \\left[-L_x,L_x \\right] \\times \\left[-L_y,L_y \\right]\n\\times \\left[0,L_z \\right]$.   \n\nFor the wind cases, the streamwise extent is no longer equal to\nthe spanwise length, $L_y$. In these cases, the domain length extends\ntwo diameters in front of the vanes and three behind. The\nspanwise direction is symmetric and extends two diameters in each direction \nfrom the center ($L_y/D = 2$). The height is identical to the\nthermal-only case, at three system diameters ($L_z/D = 3$). Thus, the\nwind domain is defined as $\\Omega_W = \\left[-2D,3D \\right] \\times\n\\left[-L_y,L_y \\right] \\times \\left[0,L_z \\right]$.   \n\nThe boundary for the thermal only case is decomposed as,\n$\\partial \\Omega_T = \\Gamma_G \\bigcup \\Gamma_T \\bigcup \\Gamma_P $. \n$\\Gamma_G$ is the boundary along the ``Ground'', $\\Gamma_T$\nthe ``Top'' boundary, and $\\Gamma_P$ the four periodic ``Sides''. A 3d\ndiagram labeling these boundaries is shown in\nFigure~\\ref{fig:thermal3d}. For this case study (no mean wind),\nperiodic boundary conditions are used on the four sides, with a modified \n``inflow-outflow'' Neumann condition\\cite{gunzburger1989finite} on the\ntop boundary. On the ground, a ``no-slip'' velocity boundary condition is\nimposed, and a Dirichlet condition uniformly fixes\nthe temperature of the surface. \nEach of the $\\Gamma$ boundary terms is defined in the paragraphs below. \nNote that a finite thickness ``Sponge Layer'' is\nindicated on the figure along the top boundary and is defined below. \n\n\\begin{figure}[!htb]\n  \\begin{center}\n    \\includegraphics[width = 14 cm]{figs/thermal_only_3d}\n    \\caption{Domain for the thermal-only\n   scenario. The diagram scale is representative of typical cases. Note\n   the SoV apparatus in the center, which provides perspective on the\n   extent of the domain with respect to the turning vane diameter. The\n   ground, sides and top boundaries are labeled, with the discussion of the\n   precise boundary conditions on each provided in\n   section~\\ref{sec:bc}. Notice also the finite thickness, high\n   viscosity ``sponge layer'' at the top of the domain.} \n    \\label{fig:thermal3d}\n  \\end{center}\n\\end{figure}\n\nThe boundary for the wind cases is decomposed as,\n$\\partial \\Omega_W = \\Gamma_G \\bigcup \\Gamma_T \\bigcup \\Gamma_O \\bigcup\n\\Gamma_I \\bigcup \\Gamma_S $, \nwhere $\\Gamma_G$ is the boundary along the ``Ground'',\n$\\Gamma_T$ the ``Top'' boundary, $\\Gamma_S$ the two ``Sides'',\n$\\Gamma_I$ the inflow boundary, and $\\Gamma_O$ the ``Outflow''  \nboundary.\nThe ``wind'' simulation domain is diagrammed in\nFigure~\\ref{fig:wind3d}, with the boundaries labeled. \nFor this particular study (a heated ground with \nan ambient wind), the wind case has a prescribed inlet boundary layer\nalong the upstream streamwise face ($\\Gamma_I$) for both the temperature\nand the velocity. The ``Ground'' boundary is identical to\nthe thermal-only case. The ``Sides'', ``Outflow'' and ``Top'' are all\nset to modified Neumann boundary conditions. Note that ``Sponge Layers''\nare used on both the back boundary and the top. \n\n\\begin{figure}[!htb]\n  \\begin{center}\n   %\\includegraphics[width = 14 cm]{figs/wind_3d}\n   \\includegraphics[width = 14 cm]{figs/wind_3d}\n    \\caption{Domain for the wind and thermal scenario. The diagram scale\n   is representative of typical cases. Note the SoV apparatus which\n   provides perspective on the extent of the domain with respect to the\n   turning vane diameter. The ground, sides, inflow, back and top\n   boundaries are labeled, with the discussion of the precise boundary\n   conditions on each provided in section~\\ref{sec:bc}. Notice also the\n   finite thickness, high viscosity ``sponge layer'' at the top and back\n   of the domain.}   \n    \\label{fig:wind3d}\n  \\end{center}\n\\end{figure}\n\n\\textbf{Ground Boundary Conditions, $\\Gamma_G$} \n\nFor both the wind and thermal-only cases the ground has a fixed\ntemperature and no-slip velocity boundary conditions. 
This boundary \n($\\Gamma_G$) is modeled with a Dirichlet boundary condition, \n\\begin{align}\n \\overrightarrow{u} &= 0 \\quad \\text{ on } \\Gamma_G \\\\\n T &= T_g,\n\\end{align}\nwhere $\\Gamma_G = \\{(x,y,0) \\subset \\partial \\Omega \\} $. \n\n%\n% http://fenicsproject.org/documentation/dolfin/dev/python/demo/documented/periodic/python/documentation.html \n%\n%\n\\textbf{Periodic Boundary Condition, $\\Gamma_P$} \n\nA periodic boundary condition is used in the thermal only cases, \nalong the streamwise and spanwise boundary faces \n(denoted $\\Gamma_{P,x}$ and $\\Gamma_{P,y}$, respectively). In these\ncases the state variables  \nare constrained to have the same value on the opposite faces of the domain; \nfor instance, in the streamwise direction the boundary conditions are, \n\\begin{align}\n \\overrightarrow{u}(-L_x,y,z) &= \\overrightarrow{u}(L_x,y,z) \\quad \\text{ on } \\Gamma_{P,x} \\\\\n T(-L_x,y,z) &= T(L_x,y,z)\n\\end{align}\nand in the spanwise direction,\n\\begin{align}\n \\overrightarrow{u}(x,-L_y,z) &= \\overrightarrow{u}(x,L_y,z) \\quad \\text{ on } \\Gamma_{P,y} \\\\\n T(x,-L_y,z) &= T(x,L_y,z). \n\\end{align}\nHere $\\Gamma_{P,x} = \\{(-L_x,y,z) \\bigcup (L_x,y,z) \\subset \\partial\n\\Omega \\}$  \nand $\\Gamma_{P,y} = \\{(x,-L_y,z) \\bigcup (x,L_y,z) \\subset \\partial\n\\Omega \\}$. \n\n\\textbf{Inflow Boundary Condition, $\\Gamma_I$} \n\nOn the inflow boundary ($\\Gamma_I$), Dirichlet conditions are used for both\nvelocity and temperature. The boundary-normal, or streamwise, component \nis a function of the surface normal coordinate (z), representing a boundary \nlayer below a uniform velocity, U.\nThe common one-seventh power law model of a turbulent boundary layer is used,   \n\\begin{equation*}\n  u_{\\text{in}}(z) = U \\text{ min }\\left(\\left(\\frac{z}{\\delta}\\right)^{1/7},1\\right)\n  \\label{eq:bl_u}\n\\end{equation*}\nwhere $\\delta$, the boundary layer thickness, is set based on data\nmeasured by our experimental partners in the field. \nThe temperature is assumed to have a similar boundary layer profile,\nbut, as observed in real atmospheric flows, there remains a vertical\ntemperature gradient outside the thin boundary layer. Based on\nresults in the literature a $2/3$ Kelvin per meter gradient is \nimposed\\cite{Blocken2007238}. The thermal inflow is then  \n\\begin{equation*}\n  T_{\\text{in}}(z) = \\Delta T \\left(1- \\text{ min\n\t\t\t}\\left(\\left(\\frac{z}{\\delta}\\right)^{1/7},1\\right)\\right)\n  + T_0 - 2z/3.  \n  \\label{eq:bl_t}\n\\end{equation*}\n% 335+18*tanh(-z/0.1)-z*2/3\nThe inflow boundary is at the surface $x=-L_x$.\n\n\\textbf{Mixed inflow/outflow Boundary Conditions on $\\Gamma_T$,\n$\\Gamma_S$ and $\\Gamma_B$}  \n\nAt outflow boundaries, a homogeneous Neumann condition is appropriate,\n\\begin{align}\n  \\frac{\\partial u}{\\partial n}\\bigg|_{\\Gamma_T} = 0 \\\\\n  \\frac{\\partial T}{\\partial n}\\bigg|_{\\Gamma_T} = 0\n\\end{align}\nHowever, for the cases in this study, \na modified Neumann condition is necessary due to the possibility that there will\nbe an inflow on these boundaries.\nFor example, in the region above the vanes, the concentrated hot plume is\nlifted by buoyancy upward and out of the simulation domain. However, the\nradial inflow towards the apparatus is drawn in by large scale\nconvection cells larger than the system diameter. Thus, our boundary\nconditions must permit inflow along the areas above and external to the\nvanes, while simultaneously permitting outflow in the area above the vanes. 
\n\n% Roy Stogner: Okay, so this DDN is basically no-traction when there's \n% outflow and Tn=v when there's inflow?  We have no-traction for v*n, \n% no-traction for anything else when there's outflow, and Dirichlet\n% v.cross.n = 0 when there's inflow. \n\nTo accomplish this, the boundary condition is,\n\\begin{align}\n  \\frac{\\partial u_n}{\\partial n}\\bigg|_{\\Gamma_T} = 0 \\\\\n  \\text{if } (w<0) \\text{ then}& \\begin{cases}\n    u_t = 0,\\\\\n    T = T_{\\text{in}}\n  \\end{cases} \\\\\n  \\text{ else}& \\begin{cases}\n    \\frac{\\partial u_t}{\\partial n}\\bigg|_{\\Gamma_T} = 0, \\\\  \n    \\frac{\\partial T}{\\partial n}\\bigg|_{\\Gamma_T} = 0\n  \\end{cases}\n\\end{align}\n\nwhere $u_n$ and $u_t$ are normal and tangential components of the velocity, \nrespectively. This boundary condition is applied on the top boundary\n$\\Gamma_T$ ($z=L_z$) and downstream side boundary  $\\Gamma_B$ in the wind case.\n\n\\textbf{Sponge Layer} \n\nFinally, a finite thickness ``sponge layer'' is used in the region adjacent \nto the mixed inflow/outflow boundaries $\\Gamma_T$ and $\\Gamma_B$.\nThis layer artificially increases the momentum diffusivity by\nup to a factor of ten times the nominal value. This was designed to stabilize\nthe modified Neumann boundary conditions which can exhibit an instability\nwhen there is a compact jet of fluid leaving the domain. \nThese regions are referred to by many names in the\nliterature\\cite{doi:10.1146/annurev.fluid.36.050802.121930}, such as\nabsorbing layers, fringe regions, buffer zones, sponges, etc. \n\n\n%%  This\n%% would create small high velocity inflows, and the feedback loop would\n%% result in instabilities and numerical blow-up. Mindful of the fact that\n%% the character of solution not important in this region, and that our\n%% physical interest remains focused on the region inside and in immediate\n%% proximity to the vanes, we introduced a higher diffusivity ``sponge''\n%% region that would diffuse the high velocity exiting jets sufficiently to\n%% prevent numerically un-physical behavior. 
No results are quoted from\n%% this ``sacrificial'' region, as it is not considered physically\n%% meaningful.\n\n", "meta": {"hexsha": "01749cf03730e0bf54d979f3636cf6c5923d98b7", "size": 37072, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "disputatio/propositum/model.tex", "max_stars_repo_name": "nicholasmalaya/paleologos", "max_stars_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-04T17:49:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-04T17:49:42.000Z", "max_issues_repo_path": "disputatio/propositum/model.tex", "max_issues_repo_name": "nicholasmalaya/paleologos", "max_issues_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "disputatio/propositum/model.tex", "max_forks_repo_name": "nicholasmalaya/paleologos", "max_forks_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-01-04T16:08:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-16T19:34:24.000Z", "avg_line_length": 45.7114673243, "max_line_length": 110, "alphanum_fraction": 0.7467360811, "num_tokens": 10093, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104788995148791, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5640595373813312}}
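As an illustration of the sponge layer described above, a minimal sketch of a smoothly ramped diffusivity multiplier (Python/NumPy; the linear ramp shape and the layer thickness are assumptions for illustration, not taken from this study):\n\\begin{verbatim}\n# Sponge-layer multiplier ramping from 1 at the interior edge of the\n# layer to 10 at the boundary (linear ramp and thickness assumed).\nimport numpy as np\n\ndef sponge_factor(z, L_z, thickness=0.2, amplification=10.0):\n    # Fraction of the way into the sponge layer, clipped to [0, 1].\n    s = np.clip((z - (L_z - thickness)) / thickness, 0.0, 1.0)\n    return 1.0 + (amplification - 1.0) * s\n\nz = np.linspace(0.0, 1.0, 6)\nprint(sponge_factor(z, L_z=1.0))\n\\end{verbatim}\n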
{"text": "\\chapter{Visual Exploration of Time Series Datasets}\nIn data science, it is very rare to get a well-defined and detailed problem description. A much more common task is to analyze the data and to detect interesting patterns, outliers, or other phenomena that are fulfilling a complex set of criteria, which results in multiple processing steps. As we cannot comprehensively represent all necessary information only by numbers, we use data visualization. \n\nThis chapter looks at methods for comprehensible and effective visualization of comprehensibly and effectively visualize a time series datasets. These methods enable to study these datasets in detail.\n\n\\section{Dimensionality Reduction}\nUsual datasets that we encounter in computer science have often tens or hundreds of dimensions, far beyond our imagination's capabilities. It makes it unintuitive for us to examine the dataset while also increases computational requirements for its analysis and visualization. For a visual exploration of the dataset, it is necessary to project our data into a lower-dimensional space. In this section, we will focus on the dimensionality reduction techniques, that are specifically used in data visualization.\n\n\\subsection{Principal Component Analysis}\nThe oldest and probably the most used dimensionality reduction technique in computer science is the \\textit{Principal Component Analysis} (PCA) \\cite{vis:pca}. PCA is a linear transformation that determines the orthogonal axes in which the dataset has the largest variances and then projects it onto these axes. They are represented as orthogonal eigenvectors with eigenvalues that correspond to the variance on the vector. Formally we define the PCA transformation as:\n\\begin{equation}\n    PCA(X) = X \\times W_L^T = M\n\\end{equation}\nwhere $X$ ($m \\times n$) is the original dataset with $m$ data points, each having $n$ features, $W^T_L$ ($n \\times l$) is the transposed matrix of $l$ eigenvectors, and $M$ ($m \\times l;~ l \\leq n$) is the lower-dimensional representation of $X$. Usually we choose eigenvectors with the largest eigenvalues that are corresponding to directions with the largest variances.\n\nBecause PCA comes from statistical data analysis, its primary purpose is to capture maximal statistical information in lower-dimensional representation, not data visualization. We usually use eigenvectors with the largest variance to project data into two or three-dimensional embeddings when we want to use it for data visualization. It means that we mainly show global structures without enough detail to see the data's local behavior (Fig.~\\ref{fig:pca-emb}). In combination with incredible speed and efficiency of modern PCA implementations, it is a perfect first step for every visual exploration of multi-dimensional datasets.\n\n\\subsection{t-SNE}\nt-SNE or \\textit{t-Distributed Stochastic Neighbor Embedding} \\cite{vis:tsne} is a popular manifold representation learning technique that is particularly efficient for visualizing high dimensional datasets \\cite{vis:repre-learning}. It transforms the multi-dimensional data into a low-dimensional space, emphasizing the preservation of local similarities and distances from the high-dimensional space. It does so by converting affinities of data points into probabilities. Gaussian joint probabilities represent the higher-dimensional space's affinities, and Student's t-distributions represent the embedded space's affinities. 
Because t-SNE preserves the local structures in a dataset very well (Fig.~\\ref{fig:tsne}), it is one of the most popular techniques for data visualization \\cite{vis:tsne-analysis}.\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}[b]{0.33\\textwidth}\n        \\centering\n        \\includesvg[width=\\textwidth]{img/s_dataset.svg}\n        \\caption{S-shape dataset}\n        \\label{fig:s-shape-sub}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\centering\n        \\includesvg[width=\\textwidth]{img/s_dataset_pca.svg}\n        \\caption{PCA embedding}\n        \\label{fig:pca-emb}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\centering\n        \\includesvg[width=\\textwidth]{img/s_dataset_tsne.svg}\n        \\caption{t-SNE embedding}\n        \\label{fig:tsne}\n    \\end{subfigure}\n    \n    \\caption{Comparison of PCA (b) and t-SNE (c) applied to the S-shape dataset (a), commonly used for benchmarking the dimensionality reduction techniques.}\n    \\label{fig:s-shape}\n\\end{figure}\n\nHowever, the usage of t-SNE comes with several disadvantages:\n\\begin{itemize}\n    \\item It is quite computationally expensive. The common practice is to use some other fast dimensionality reduction method, such as PCA, to reduce the dataset to a reasonable number of dimensions and apply t-SNE afterwards. Another option is to use the optimized approximate Barnes-Hut t-SNE \\cite{vis:barnes-hut-tsne}, which is only available for two or three-dimensional embeddings.\n    \\item As it focuses mainly on preserving the local structure of the data, it does not guarantee that the global structure is preserved correctly. For example, the number of clusters and the distances between them are not always reliable.\n    \\item t-SNE is a stochastic algorithm, and each run could return slightly different results.\n\\end{itemize}\n\n\n\\subsection{UMAP and densMAP}\n\\textit{Uniform Manifold Approximation and Projection} (UMAP) \\cite{vis:umap} is one of the more recent manifold dimension reduction techniques. Similar to t-SNE, it is excellent for visualization, but it also serves as a general non-linear dimensionality reduction transformation. The underlying idea of UMAP comes from Riemannian geometry and algebraic topology. It makes three assumptions about the data:\n\\begin{enumerate}\n    \\item The dataset has a uniform distribution on a Riemannian manifold.\n    \\item The Riemannian metric is locally approximately constant.\n    \\item The manifold is locally connected.\n\\end{enumerate}\nUsing these assumptions, UMAP models the manifold as a fuzzy topological structure, and the resulting low-dimensional representation is the closest possible representation with an equivalent topological structure. In other words, it captures the local similarities of data with respect to their global structure (Fig.~\\ref{fig:fas-umap}).\n\nUsing UMAP over t-SNE comes with several advantages:\n\\begin{itemize}\n    \\item It scales much better than t-SNE. 
It is faster, more efficient, and not restricted to two or three-dimensional embeddings, so it can process even high-dimensional sparse data.\n    \\item Because UMAP is a general dimensionality reduction technique, it is possible to use it in a preprocessing step.\n    \\item Although both methods mainly capture the local similarities in data, UMAP shows better results in preserving the global structure.\n    \\item UMAP can use non-metric distance measures.\n\\end{itemize}\n\nDespite their visualization qualities, both t-SNE and UMAP neglect information about the local density of the original dataset. As they mainly look at the $n$ closest neighbors but not at their density, this can lead to misleading visualizations where a cluster's size and shape primarily represent how many points it contains, rather than the underlying distribution.\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}[b]{0.33\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{img/fashion-pca.png}\n        \\caption{PCA}\n        \\label{fig:fas-pca}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{img/fashion-umap.png}\n        \\caption{UMAP}\n        \\label{fig:fas-umap}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.3\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{img/fashion-densmap.png}\n        \\caption{densMAP}\n        \\label{fig:fas-densmap}\n    \\end{subfigure}\n    \\captionsetup{font={footnotesize}}\n    \\caption{Comparison of PCA (a), UMAP (b), and densMAP (c) on the Fashion MNIST dataset \\cite{vis:fashion-mnist}.}\n    \\label{fig:fashion}\n\\end{figure}\n\nNarayan et al. \\cite{vis:densMAP} introduced density-preserving data visualization derivatives called \\textit{den-SNE} and \\textit{densMAP}. Since both base methods converge by iteratively optimizing their objective functions, the authors added a new term, called the \\textit{local radius}, to represent local densities. In other words, this term represents an average distance to the closest neighbors. We can see the difference in Fig.~\\ref{fig:fashion}, where we applied UMAP and densMAP to the Fashion MNIST dataset \\cite{vis:fashion-mnist}. The densMAP embedding is more spread out due to density preservation.\n\n\\section{Time Series Downsampling for Visualization}\nPlotting time series containing many points comes with several pitfalls. We cannot draw all data points of a time series without overplotting due to restricted space, making it almost impossible to examine them in detail. The second pitfall is that with a rising number of objects to render, we need more computing power. Because we do not want to create a misleading plot, finding the closest representation with visible structures while minimizing information loss is essential.\n\n\\subsection{Simple Downsampling and Piecewise Aggregate Approximation}\nThe most straightforward solution to overplotting in spatial data is sub-sampling the dataset, whose time series equivalent is called downsampling. If we try to sub-sample the data in the same way as in the spatial domain, we quickly encounter problems with the temporal aspect.\n\nAs we cannot randomly select points, the simplest downsampling algorithm picks every $n$-th data point. 
This algorithm is very fast, but since it keeps only $\\frac{1}{n}$ of the original data points, we can use it only on specific types of time series.\n\n\\textit{Piecewise Aggregate Approximation} (PAA) is a simple downsampling algorithm that, instead of removing data points, aggregates them into smaller representations \\cite{vis:paa}. The basic idea is to split the time series into approximately equal-sized buckets and aggregate their data points. We can use any aggregation function like average, mode, or median, based on the application. As the aggregation function combines all data points in the bucket, PAA captures much more information than selecting every $n$-th data point. Because of the simple nature of PAA, the whole process is fast and scalable.\n\n\\subsection{Largest Triangle Downsampling}\nThe algorithms mentioned so far have excellent properties from a computational perspective but not from a visualization standpoint. As they remove or aggregate data points by a fixed value or range, we can lose some visually important information.\n\nBecause of this issue, Steinarsson proposed the Largest Triangle downsampling algorithms, designed for downsampling time series for visual representation \\cite{vis:lttb}. These algorithms select data points by their effective area, similarly to the \\textit{Visvalingam\u2013Whyatt algorithm}. Having data points $X_a$, $X_b$, $X_c$ such that $a < b < c$, where the indices $a$, $b$, $c$ are positions in the time series, the effective area of $X_b$ is the area of the triangle $X_a X_b X_c$ (Fig.~\\ref{fig:effective-area}). The indices $a$, $b$, $c$ are chosen differently in every algorithm, and for the final downsampling, we use the data points with the largest effective area.\n\n\\begin{figure}[h]\n    \\centering\n    \\includesvg[width=\\textwidth]{img/effective-area.svg}\n    \\caption{Effective area when $X_a$, $X_b$, and $X_c$ are directly successive data points (there are no data points between them). The color of the effective area corresponds to the color of a data point; the first and last data points are different as they are part of the downsampled result.}\n    \\label{fig:effective-area}\n\\end{figure}\n\n\\subsubsection{Largest Triangle One Bucket Algorithm}\nThe simplest algorithm proposed by Steinarsson is \\textit{the Largest Triangle One Bucket} (LTOB) algorithm. First, all points are ranked by their effective area, which we compute using directly successive data points. Afterwards, the algorithm removes data points with zero effective area and splits the points into buckets based on the number of data points we want in the downsampled time series. For every bucket, it selects the point with the highest rank (Fig.~\\ref{fig:ltob}). This method's disadvantage is that it only uses the effective area computed from two adjacent points, which can potentially lead to misleading representations.\n\n\\begin{figure}[h]\n    \\centering\n    \\includesvg[width=\\textwidth]{img/ltob.svg}\n    \\caption{For every bucket, the LTOB downsampling algorithm selects data points with the largest effective area within the bucket.}\n    \\label{fig:ltob}\n\\end{figure}\n\n\\subsubsection{Largest Triangle Three Buckets Algorithm}\n\\textit{The Largest Triangle Three Buckets}~(LTTB) algorithm partially solves the problem of the LTOB and searches a much larger area for point exclusion. First, it separates data points into equally-sized buckets, where the first and the last data point get their own buckets, as we do not want to exclude them. 
Second, it computes the effective area for every point by iterating through the buckets from left to right, taking three directly successive buckets $A$, $B$, $C$ at a time. For every point $X_b \\in B$, it computes the effective area as the area $S$ of the triangle $X_a X_b X_c$, where $X_a \\in A$, $X_c \\in C$, such that:\n\\begin{equation}\n    S = \\max_{X_a \\in A,~X_c \\in C} S_{X_a X_b X_c}\n\\end{equation}\nFinally, for every bucket, it selects the point with the largest effective area (Fig.~\\ref{fig:lttb}).\n\nThis algorithm is robust, reasonably efficient, and solves the problems of LTOB. The only remaining problem is the bucket selection when working with data whose values are non-uniformly spread over time.\n\n\\begin{figure}[h]\n    \\centering\n    \\includesvg[width=\\textwidth]{img/lttb.svg}\n    \\caption{LTTB downsampling algorithm.}\n    \\label{fig:lttb}\n\\end{figure}\n\n\\subsubsection{Largest Triangle Dynamic Algorithm}\nBecause both methods above rely on equally-sized buckets, the last algorithm Steinarsson proposed is the \\textit{Largest Triangle Dynamic}~(LTD) algorithm. First, it creates a predefined number of equally-sized buckets. Then it computes the interval \\textit{calmness} as the mean square error of linear regression on that bucket, including the last point from the previous interval and the first point in the next one. Afterwards, it joins the two adjacent intervals with the smallest error and divides the one with the largest error, so the number of buckets remains the same. This iterative process converges to the optimal bucket sizes. The number of iterations is empirically determined, but in the original paper, the author recommends starting with one-tenth of the original time series's size. Afterwards, it uses the LTTB algorithm to select one point from each bucket.\n\nLTD solves the problem of equally-sized buckets but requires a lot of computational time, which makes it hard to apply to long time series.\n\n\n\\subsubsection{Datashader}\nAll of the approaches mentioned so far transform time series into a representation with fewer points, which we then can use for visualization. But as we downsample time series, we lose information based on the number of points we eventually plot. For example, it is not uncommon to have time series consisting of hundreds of thousands of data points, and usually, we downsample to less than one thousand data points for visualization. It is impossible to capture the information precisely, and even with the best possible downsampling algorithms, we lose a lot of information, especially the initial data density. If our task requires studying these underlying properties, we must plot all our data points.\n\\begin{figure}[tbh]\n    \\centering\n     \\includegraphics[width=\\textwidth]{img/datashader.png}\n    \\caption{Datashader and LTTB demonstrated on one million points.}\n    \\label{fig:my_label}\n\\end{figure}\n\nThe \\textit{Datashader} renderer \\cite{vis:datashader} is a complete graphical pipeline from the original data to the final graph. It breaks plotting into multiple computations on intermediate representations.\nThis allows multiple fast and efficient aggregations, such as value counts and averaging. Then Datashader renders the results to the final image's pixels as color saturations. The resulting representation is a raster image that accurately represents the aggregated information. 
Accuracy is limited only by the resolution of the final image and, due to its effectiveness, it can process millions of data points in a reasonable time. Because the raster image loses detail as we zoom in, this method gives us an excellent overview of the whole time series (or dataset); if we want to explore details, however, we need to repeat the processing pipeline all over again.\n\n\n\\section{Chapter Summary}\nIn this chapter, we showed several dimensionality reduction techniques used for visualization. Generally, it is advisable to start with PCA to capture the global data structure and then combine it with either t-SNE, UMAP, or densMAP to examine the details. In our case, we will primarily use UMAP and densMAP as they have several advantages over t-SNE.\n\nFor time series plotting, we can use the LTTB downsampling algorithm if we want a smaller representation for visualization or use the Datashader renderer to render the original data accurately.\n", "meta": {"hexsha": "b1ea7b5118e3722825a45091d9481a5fbc8af5ba", "size": 17414, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapters/2_visual_analysis.tex", "max_stars_repo_name": "H00N24/visual-analysis-of-big-time-series-datasets", "max_stars_repo_head_hexsha": "8c9c14ca5d16f5d9ef8b623c84f92fe62eee1f86", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-07-30T04:07:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T07:28:44.000Z", "max_issues_repo_path": "thesis/chapters/2_visual_analysis.tex", "max_issues_repo_name": "H00N24/visual-analysis-of-big-time-series-datasets", "max_issues_repo_head_hexsha": "8c9c14ca5d16f5d9ef8b623c84f92fe62eee1f86", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/chapters/2_visual_analysis.tex", "max_forks_repo_name": "H00N24/visual-analysis-of-big-time-series-datasets", "max_forks_repo_head_hexsha": "8c9c14ca5d16f5d9ef8b623c84f92fe62eee1f86", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 103.0414201183, "max_line_length": 872, "alphanum_fraction": 0.7860916504, "num_tokens": 3864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891479496523, "lm_q2_score": 0.6859494550081926, "lm_q1q2_score": 0.5639801979497142}}
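\nTo make the bucket-based downsampling discussion above concrete, a simplified sketch of the LTTB idea follows (assuming NumPy; equally-sized buckets and a mean-of-next-bucket anchor, as in LTTB, but without its edge-case handling, so this is an illustration rather than the reference implementation):\n\\begin{verbatim}\n# Simplified LTTB sketch (assumes NumPy).\nimport numpy as np\n\ndef lttb(t, y, n_out):\n    n = len(t)\n    if n_out >= n or n_out < 3:\n        return t, y\n    # Interior points fall into n_out - 2 buckets; endpoints are kept.\n    edges = np.linspace(1, n - 1, n_out - 1).astype(int)\n    idx = [0]\n    for i in range(n_out - 2):\n        lo, hi = edges[i], edges[i + 1]\n        ax, ay = t[idx[-1]], y[idx[-1]]        # last selected point\n        nlo = edges[i + 1]                     # next bucket (mean anchor)\n        nhi = edges[i + 2] if i + 2 < len(edges) else n\n        cx, cy = t[nlo:nhi].mean(), y[nlo:nhi].mean()\n        # Twice the triangle area for each candidate point in the bucket.\n        area = np.abs((ax - cx) * (y[lo:hi] - ay)\n                      - (ax - t[lo:hi]) * (cy - ay))\n        idx.append(lo + int(area.argmax()))\n    idx.append(n - 1)\n    return t[idx], y[idx]\n\nt = np.arange(1000.0)\ny = np.sin(t / 25.0)\ntd, yd = lttb(t, y, 100)\nprint(len(td))  # 100\n\\end{verbatim}\n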
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% PROBLEM 2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section*{Problem 2}\n\nConsider a bare sphere composed of a homogeneous multiplying medium.\n\\begin{enumerate}[a)]\n\\item Give the steady-state, continuous energy diffusion equation. Assume that the diffusion coefficient ($D$) and the average neutrons produced from fission ($\\nu$) are constant for all energies.\n\\item Derive the multigroup equation corresponding to the case where there are three energy groups. Assume that there is no upscattering, all groups are directly coupled, and fission is only induced by the slowest group while only producing neutrons in the fastest group.\n\\item Write the multigroup equation you found as a matrix-equation.\n\\end{enumerate}\n\n", "meta": {"hexsha": "d1319b9b241da8da4f3eaa3d2ddbc5efd53e544c", "size": 747, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/drafts/disc12/disc12_exercise02.tex", "max_stars_repo_name": "mitchnegus/NE150-discussion", "max_stars_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/drafts/disc12/disc12_exercise02.tex", "max_issues_repo_name": "mitchnegus/NE150-discussion", "max_issues_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/drafts/disc12/disc12_exercise02.tex", "max_forks_repo_name": "mitchnegus/NE150-discussion", "max_forks_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.9090909091, "max_line_length": 271, "alphanum_fraction": 0.7269076305, "num_tokens": 151, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5639801852964661}}
{"text": "\\section{Method}\n\\label{sec:method}\n\nIn the following, we introduce the mathematical formulation of the weakly-supervised 3D shape completion problem. Subsequently, we briefly discuss denoising variational auto-encoders (\\DVAEs) \\citep{Kingma2014ICLR,Im2017AAAI} which we use to learn a strong shape prior \\red{that embeds a set of reference shapes in a low-dimensional latent space.} Then, we formally derive our proposed amortized maximum likelihood (\\AML) approach. \\red{Here, we use maximum likelihood to learn an embedding of the observations within the same latent space -- thereby allowing to perform shape completion.} The overall approach is also illustrated in \\figref{fig:method}.\n\n\\subsection{Problem Formulation}\n\\label{subsec:method-problem}\n\n\\input{fig_method_problem}\n\nIn a supervised setting, the task of 3D shape completion can be described as follows: Given a set of incomplete observations $\\mathcal{X} = \\{x_n\\}_{n = 1}^N \\subseteq \\mathbb{R}^R$ and corresponding ground truth shapes $\\mathcal{Y}^* = \\{y_n^*\\}_{n = 1}^N \\subseteq \\mathbb{R}^R$, learn a mapping $x_n \\mapsto y_n^*$ that is able to generalize to previously unseen observations and possibly across object categories. We assume $\\mathbb{R}^R$ to be a suitable representation of observations and shapes; in practice, we resort to occupancy grids and signed distance functions (SDFs) defined on regular grids, \\ie, $x_n, y_n^* \\in \\mathbb{R}^{H \\ntimes W \\ntimes D} \\simeq \\mathbb{R}^R$. Specifically, occupancy grids indicate occupied space, \\ie, voxel $y_{n,i}^* = 1$ if and only if the voxel lies on or inside the shape's surface. To represent shapes with sub-voxel accuracy, SDFs hold the distance of each voxel's center to the surface; for voxels inside the shape's surface, we use negative sign.\nFinally, for the (incomplete) observations, we write $x_n \\in \\{0, 1, \\uk\\}^R$ to make missing information explicit; in particular, $x_{n,i} = \\uk$ corresponds to unobserved voxels, while $x_{n,i} = 1$ and $x_{n,i} = 0$ correspond to occupied and unoccupied voxels, respectively.\n\nOn real data, \\eg, KITTI \\citep{Geiger2012CVPR}, supervised learning is often not possible as obtaining ground truth annotations is labor intensive, \\cf \\citep{Menze2015CVPR,Xie2016CVPR}. Therefore, we target a weakly-supervised variant of the problem instead: Given observations $\\mathcal{X}$ and reference shapes $\\mathcal{Y} = \\{y_m\\}_{m = 1}^M \\subseteq \\mathbb{R}^R$ both of the same, known object category, learn a mapping $x_n \\mapsto \\tilde{y}(x_n)$ such that the predicted shape $\\tilde{y}(x_n)$ matches the unknown ground truth shape $y_n^*$ as close as possible -- or, in practice, the sparse observation $x_n$ while being plausible considering the set of reference shapes, \\cf \\figref{fig:method-problem}. Here, supervision is provided in the form of the known object category. Alternatively, the reference shapes $\\mathcal{Y}$ can also include multiple object categories resulting in an even weaker notion of supervision as the correspondence between observations and object categories is unknown. 
\\red{Except for the object categories, however, the set of reference shapes $\\mathcal{Y}$, and its size $M$, is completely independent of the set of observations $\\mathcal{X}$, and its size $N$, as also highlighted in \\figref{fig:method}.} On real data, \\eg, KITTI, we additionally assume the object locations to be given in the form of 3D bounding boxes in order to extract the corresponding observations $\\mathcal{X}$. In practice, the reference shapes $\\mathcal{Y}$ are derived from watertight, triangular meshes, \\eg, from ShapeNet \\citep{Chang2015ARXIV} or ModelNet \\citep{Wu2015CVPR}.\n\n\\subsection{Shape Prior}\n\\label{subsec:method-prior}\n\n\\red{We approach the weakly-supervised shape completion problem by first learning a shape prior using a denoising variational auto-encoder (\\DVAE). Later, this prior constrains shape inference (see \\secref{subsec:method-inference}) to predict reasonable shapes. In the following, we briefly discuss the standard variational auto-encoder (\\VAE), as introduced by \\cite{Kingma2014ICLR}, as well as its denoising extension, as proposed by \\cite{Im2017AAAI}.}\n\n\\boldparagraph{Variational Auto-Encoder (\\VAE)}\n%\nWe propose to use the provided reference shapes $\\mathcal{Y}$ to learn a \\red{generative} model of possible 3D shapes over a low-dimensional latent space $\\mathcal{Z} = \\mathbb{R}^Q$, \\ie,  $Q \\ll R$. \\red{In the framework of \\VAEs, the joint distribution $p(y, z)$ of shapes $y$ and latent codes $z$} decomposes into $p(y | z)p(z)$ with $p(z)$ being a unit Gaussian, \\ie, $\\mathcal{N}(z;0, I_Q)$ and $I_Q \\in \\mathbb{R}^{Q \\times Q}$ being the identity matrix. \\red{This decomposition allows us to sample $z \\sim p(z)$ and $y \\sim p(y|z)$ to generate random shapes.}\n\\red{For training, however, we additionally need to approximate the posterior $p(z | y)$.} \\red{To this end, the so-called recognition model $q(z | y) \\approx p(z | y)$} takes the form\n%\n\\begin{align}\nq(z | y) &= \\mathcal{N}(z; \\mu(y), \\text{diag}(\\sigma^2(y)))\\label{eq:encoder-decoder}\n\\end{align}\n%\nwhere $\\mu(y), \\sigma^2(y) \\in \\mathbb{R}^Q$ are predicted using the encoder neural network. \\red{The generative model $p(y|z)$ decomposes over voxels $y_i$; the corresponding probabilities $p(y_i | z)$ are represented using Bernoulli distributions for occupancy grids or Gaussian distributions for SDFs:}\n%\n\\begin{align}\n    \\begin{split}\n        p(y_i | z) &= \\text{Ber}(y_i ; \\theta_i(z))\\quad\\text{or}\\\\\n        p(y_i | z) &= \\mathcal{N}(y_i ; \\mu_i(z), \\sigma^2).\\label{eq:decoder}\n    \\end{split}\n\\end{align}\n%\nIn both cases, the parameters, \\ie, $\\theta_i(z)$ or $\\mu_i(z)$, are predicted using the decoder neural network. \\red{For SDFs, we explicitly set $\\sigma^2$ to be constant (see \\secref{sec:training}). Then, $\\sigma^2$ merely scales the corresponding loss, thereby implicitly defining the importance of accurate SDFs relative to occupancy grids as described below.}\n\nIn the framework of variational inference, the parameters of the encoder and the decoder \\red{neural networks} are found by maximizing the likelihood $p(y)$. \\red{In practice, the likelihood is usually intractable and the evidence lower bound is maximized instead, see \\citep{Kingma2014ICLR,Blei2016ARXIV}. 
This results in the following loss to be minimized:}\n%\n\\begin{align}\n\\mathcal{L}_{\\text{VAE}}(w) = - \\mathbb{E}_{q(z |y)}[\\ln p(y|z)] + \\text{KL}(q(z | y)| p(z)).\\label{eq:vae}\n\\end{align}\n%\n\\red{Here,} $w$ are the weights of the encoder and decoder \\red{hidden in the recognition model $q(z | y)$ and the generative model $p(y | z)$, respectively}. \\red{The Kullback-Leibler divergence $\\text{KL}$ can be computed analytically as described in the appendix of \\citep{Kingma2014ICLR}.} The negative log-likelihood $-\\ln p(y|z)$ corresponds to a binary cross-entropy error for occupancy grids \\red{and} a scaled sum-of-squared error for SDFs. The loss $\\mathcal{L}_{\\text{VAE}}$ is minimized using stochastic gradient descent (SGD) by approximating the expectation \\red{using samples}:\n%\n\\begin{align}\n- \\mathbb{E}_{q(z |y)}[\\ln p(y|z)] \\approx - \\frac{1}{L} \\sum_{l = 1}^L \\ln p(y | z^{(l)})\n\\end{align}\n%\n\\red{The required samples $z^{(l)} \\sim q(z|y)$ are computed using the so-called reparameterization trick,\n\\begin{align}\nz^{(l)} = \\mu(y) + \\epsilon^{(l)} \\sigma(y)\\quad\\text{with}\\quad\\epsilon^{(l)} \\sim \\mathcal{N}(\\epsilon; 0, I_Q),\\label{eq:repa}\n\\end{align}\nin order to make $\\mathcal{L}_{\\text{VAE}}$, specifically the sampling process, differentiable.} \\red{In practice, we found $L = 1$ samples to be sufficient -- which conforms with results by \\cite{Kingma2014ICLR}. At test time, the sampling process $z\\sim q(z|y)$ is replaced by the predicted mean $\\mu(y)$.}\n\\red{Overall, the standard VAE allows us to embed the reference shapes in a low-dimensional latent space. In practice, however, the learned prior might still include unreasonable shapes.}\n\n\\boldparagraph{Denoising \\VAE (\\DVAE)}\n%\n\\red{In order to avoid inappropriate shapes being included in our shape prior, we consider a denoising variant of the \\VAE, which allows obtaining a tighter bound on the likelihood $p(y)$.} More specifically, a corruption process $y' \\sim p(y' | y)$ is considered and the corresponding evidence lower bound results in the following loss:\n%\n\\begin{align}\n    \\begin{split}\n        \\mathcal{L}_{\\text{DVAE}}(w) = &- \\mathbb{E}_{q(z | y')}[\\ln p(y|z)]\\\\\n        &+ \\text{KL}(q(z | y')| p(z)).\n    \\end{split}\n\\end{align}\n%\n\\red{Note that the reconstruction error $-\\ln p(y|z)$ is still computed with respect to the uncorrupted shape $y$ while $z$, in contrast to \\eqnref{eq:vae}, is sampled conditioned on the corrupted shape $y'$.} In practice, the corruption process $p(y' | y)$ is modeled using Bernoulli noise for occupancy grids and Gaussian noise for SDFs.\nIn experiments, we found \\DVAEs to learn more robust latent spaces \\red{-- meaning the prior is less likely to contain unreasonable shapes. In the following, we always use \\DVAEs as shape priors.}\n\n\\subsection{Shape Inference}\n\\label{subsec:method-inference}\n\nAfter learning the shape prior, \\red{defining the joint distribution $p(y, z)$ of shapes $y$ and latent codes $z$ as product of generative model $p(y|z)$ and prior $p(z)$}, shape completion can be formulated as a maximum likelihood (\\ML) problem \\red{for $p(y, z)$} over the lower-dimensional latent space $\\mathcal{Z} = \\mathbb{R}^Q$. 
The corresponding negative log-likelihood $-\\ln p(y, z)$ to be minimized can be written as\n%\n\\begin{align}\n\\mathcal{L}_{\\text{ML}}(z) &= - \\sum_{x_i \\neq \\uk} \\ln p(y_i = x_i | z) - \\ln p(z).\\label{eq:ml}\n\\end{align}\n%\nAs the prior $p(z)$ is Gaussian, the negative log-probability $- \\ln p(z)$ is proportional to $\\|z\\|_2^2$ and \\red{constrains the problem to likely, \\ie, reasonable, shapes with respect to the shape prior}. As before, the generative model $p(y | z)$ decomposes over voxels; here, we can only consider actually observed voxels $x_i \\neq \\uk$. \\red{We assume that the learned shape prior can complete the remaining, unobserved voxels $x_i = \\uk$.} Instead of solving \\eqnref{eq:ml} for each observation $x \\in \\mathcal{X}$ independently, however, we follow the idea of amortized inference \\citep{Gersham2014COGSCI} and train a \\red{new} encoder $z(x;w)$ to \\emph{learn} \\ML. To this end, we keep the generative model $p(y|z)$ fixed and train \\red{only} the weights $w$ of the \\red{new} encoder $z(x;w)$ using the \\ML objective as loss:\n%\n\\begin{align}\n    \\begin{split}\n        \\mathcal{L}_{\\text{dAML}}(w) =& - \\sum_{x_i \\neq \\uk} \\ln p(y_i = x_i | z(x; w))\\\\\n        &- \\lambda \\ln p(z(x; w)).\\label{eq:aml}\n    \\end{split}\n\\end{align}\n%\n\\red{Here,} $\\lambda$ controls the importance of the shape prior. The exact form of the probabilities $p(y_i = x_i | z)$ depends on the used shape representation. For occupancy grids, this term results in a cross-entropy error as \\red{both the predicted voxels $y_i$ and the observations $x_i$} are, for $x_i \\neq \\uk$, binary. For SDFs, however, the term is not well-defined as $p(y_i | z)$ is modeled with a continuous Gaussian distribution, while the observations $x_i$ are binary. As a solution, we could compute (signed) distance values along the rays corresponding to observed points (\\eg, following \\citep{Steinbrucker2013ICCV}) in order to obtain continuous observations $x_i \\in \\mathbb{R}$ for $x_i \\neq \\uk$. However, as illustrated in \\figref{fig:method-sdf}, noisy observations cause the distance values along the whole ray to be invalid. This can partly be avoided when relying \\red{only} on occupancy to represent the observations; in this case, free space (\\cf \\figref{fig:method-problem}) observations are partly correct even though observed points may lie within the corresponding shapes.\n\nFor making SDFs tractable (\\ie, to predict sub-voxel accurate, visually smooth and appealing shapes, see \\secref{sec:experiments}) \\red{while using binary observations}, we propose to define $p(y_i = x_i | z)$ through a simple transformation. 
In particular, as $p(y_i | z)$ is modeled using a Gaussian distribution $\\mathcal{N}(y_i ; \\mu_i(z), \\sigma^2)$ where $\\mu_i(z)$ is predicted using the fixed decoder ($\\sigma^2$ is constant), and $x_i$ is binary (for $x_i \\neq \\uk$), we introduce a mapping transforming the predicted \\red{mean SDF value $\\mu_i(z)$ into an occupancy probability} $\\theta_i(\\mu_i(z))$:\n%\n\\begin{align}\np(y_i = x_i | z) = \\text{Ber}(y_i = x_i; \\theta_i(\\mu_i(z)))\n\\end{align}\n%\nAs, \\red{by construction (see \\secref{subsec:method-problem}), occupied voxels have negative sign or value zero in the SDF}, we can derive the occupancy probability $\\theta_i(\\mu_i(z))$ as the probability of a non-positive distance:\n%\n\\begin{align}\n\\theta_i(\\mu_i(z)) &= \\mathcal{N}(y_i \\leq 0; \\mu_i(z), \\sigma^2)\\\\[3px]\n&= \\frac{1}{2} \\left(1 + \\text{erf}\\left(\\frac{- \\mu_i(z)}{\\sigma \\sqrt{2}}\\right)\\right).\\label{eq:sdf}\n\\end{align}\n%\nHere, $\\text{erf}$ is the error function which, in practice, can be approximated following~\\citep{Abramowitz1974}. \\eqnref{eq:sdf} is illustrated in \\figref{fig:method-sdf} where the occupancy probability $\\theta_i(\\mu_i(z))$ is computed as the area under the Gaussian bell curve for $y_i \\leq 0$. This per-voxel transformation can easily be implemented as a non-linear layer and its derivative \\wrt $\\mu_i(z)$\nis, by construction, a Gaussian. \\red{Note that the transformation is correct, not approximate, based on our model assumptions and the definitions in \\secref{subsec:method-problem}.} \\red{Overall, this transformation allows us to easily minimize \\eqnref{eq:aml} for both occupancy grids and SDFs using binary observations. The obtained encoder embeds the observations in the latent shape space to perform shape completion.}\n\n\\begin{figure}[t]\n\t\\vspace*{-\\figskipabove px}\n\t\\centering\n\t\\hfill\n\t\\begin{subfigure}[t]{0.25\\linewidth}\n\t\t\\vspace{0px}\n\t\t\\centering\n\t\t\\includegraphics[height=4.5cm]{fig_method_sdf_2}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.6\\linewidth}\n\t\t\\vspace{3px}\n\t\t\\centering\n\t\t\\hspace*{-12px}\n\t\t\\includegraphics[height=4.5cm]{fig_method_sdf_1}\n\t\\end{subfigure}\n\t\\hfill\n\t\\vspace*{-8px}\n\t\\caption{{{\\bf Left: Problem with SDF Observations.} Illustration of a ray ({\\color{red}red line}) correctly hitting a surface ({\\color{blue}blue line}) causing the (signed) distance values and occupancy values computed for voxels along the ray to be correct (\\cf (a)). A noisy ray, however, causes all voxels along the ray to be assigned incorrect distance values (marked {\\colorbox{red!25}{red}}) \\wrt the true surface ({\\color{blue}blue line}) because the ray ends far behind the actual surface (\\cf (b)). When using occupancy only, in contrast, only the voxels behind the surface are assigned invalid occupancy states (marked {\\colorbox{red!25}{red}}); the remaining voxels are labeled correctly (marked {\\colorbox{green!25}{green}}; \\cf (c)).\n\t\t\t{\\bf Right: Proposed Gaussian-to-Bernoulli Transformation.} For $p(y_i) := p(y_i | z) = \\mathcal{N}(y_i;\\mu_i(z), \\sigma^2)$ ({\\color{blue}blue}), we illustrate the transformation discussed in \\secref{subsec:method-inference} allowing us to use the binary observations $x_i$ (for $x_i \\neq \\uk$) to supervise the SDF predictions. 
This is achieved by transforming the predicted Gaussian distribution to a Bernoulli distribution with occupancy probability $\\theta_i(\\mu_i(z)) = p(y_i \\leq 0)$ ({\\color{blue}blue area}).}}\n\t\\label{fig:method-sdf}\n\t\\vspace*{-\\figskipbelow px}\n\\end{figure}\n\n\\subsection{Practical Considerations}\n\n\\input{fig_data_synthetic}\n\n\\boldparagraph{Encouraging Variety}\n%\nSo far, our \\AML formulation assumes a deterministic encoder $z(x;w)$ which predicts, given the observation $x$, a single code $z$ corresponding to a completed shape. A closer look at \\eqnref{eq:aml}, however, reveals an unwanted problem: the data term scales with the number of observations, \\ie, $|\\{x_i \\neq \\uk\\}|$, while the regularization term stays constant -- with fewer observations, the regularizer gains in importance, leading to limited variety in the predicted shapes because $z(x; w)$ tends towards zero.\n\nIn order to encourage variety, we draw inspiration from the \\VAE shape prior. Specifically, we use a probabilistic recognition model\n%\n\\begin{align}\nq(z|x) = \\mathcal{N}(z; \\mu(x), \\text{diag}(\\sigma^2(x)))\n\\end{align}\n%\n(\\cf \\eqnref{eq:encoder-decoder}) and replace the negative log-likelihood $-\\ln p(z)$ with the corresponding Kullback-Leibler divergence $\\text{KL}(q(z|x)|p(z))$ with $p(z) = \\mathcal{N}(z; 0, I_Q)$. Intuitively, this makes sure that the encoder's predictions ``cover'' the prior distribution -- thereby enforcing variety. Mathematically, the resulting loss, \\ie, \n%\n\\begin{align}\n\\begin{split}\n    \\mathcal{L}_{\\text{AML}}(w) =& - \\red{\\mathbb{E}_{q(z|x)}\\left[\\sum_{x_i \\neq \\uk} \\ln p(y_i = x_i | z)\\right]}\\\\\n    &+ \\lambda \\text{KL}(q(z|x)| p(z)),\\label{eq:daml}\n\\end{split}\n\\end{align}\n%\ncan be interpreted as the result of maximizing the evidence lower bound of a model with observation process $p(x | y)$ (analogously to the corruption process $p(y'|y)$ for \\DVAEs in \\citep{Im2017AAAI} and \\secref{subsec:method-prior}). \\red{The expectation is approximated using samples (following the reparameterization trick in \\eqnref{eq:repa}) and, during testing, the sampling process $z \\sim q(z|x)$ is replaced by the mean prediction $\\mu(x)$.} In practice, we find that \\eqnref{eq:daml} improves the visual quality of the completed shapes. We compare this \\AML model to its deterministic variant \\dAML in \\secref{sec:experiments}.\n\n\\boldparagraph{Handling Noise}\n%\nAnother problem of our \\AML formulation concerns noise. On KITTI, for example, specular or transparent surfaces cause invalid observations -- laser rays passing through these surfaces cause observations to lie within shapes or not get reflected. However, our \\AML framework assumes deterministic, \\ie, trustworthy, observations \\red{-- as can be seen in the reconstruction error in \\eqnref{eq:daml}.} Therefore, we introduce per-voxel weights $\\kappa_i$ computed using the reference shapes $\\mathcal{Y} = \\{y_m\\}_{m=1}^M$:\n%\n\\begin{align}\n\\kappa_i = 1 - \\left(\\frac{1}{M} \\sum_{m = 1}^M y_{m,i}\\right) \\in [0,1]\n\\end{align}\n%\nwhere $y_{m,i} = 1$ if and only if the corresponding voxel is occupied. Applied to observations $x_i = 0$, these are trusted less if they are unlikely under the shape prior. Note that for point observations, \\ie, $x_i = 1$, this is not necessary as we explicitly consider ``filled'' shapes (see \\secref{sec:data}). 
This can also be interpreted as imposing an additional \\red{mean shape prior on the predicted shapes with respect to the observed free space}. In addition, we use a corruption process $p(x' | x)$ consisting of Bernoulli and Gaussian noise during training (analogously to the \\DVAE shape prior).\n", "meta": {"hexsha": "78bd68b98dfac5021bfecaecd9f6f11d7c0b4465", "size": 18846, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sec_method.tex", "max_stars_repo_name": "davidstutz/ijcv2018-improved-shape-completion", "max_stars_repo_head_hexsha": "cae19dc2484c31a60d3162594e020ed63287f6e5", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2018-12-03T08:05:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T04:11:34.000Z", "max_issues_repo_path": "paper/sec_method.tex", "max_issues_repo_name": "davidstutz/arxiv2018-improved-shape-completion", "max_issues_repo_head_hexsha": "cae19dc2484c31a60d3162594e020ed63287f6e5", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sec_method.tex", "max_forks_repo_name": "davidstutz/arxiv2018-improved-shape-completion", "max_forks_repo_head_hexsha": "cae19dc2484c31a60d3162594e020ed63287f6e5", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-01-02T09:59:31.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-31T10:02:20.000Z", "avg_line_length": 112.8502994012, "max_line_length": 1601, "alphanum_fraction": 0.7367080548, "num_tokens": 5412, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972751232808, "lm_q2_score": 0.6477982043529715, "lm_q1q2_score": 0.5639713515394512}}
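\nAs a numerical illustration of the Gaussian-to-Bernoulli transformation above, a minimal sketch (assuming NumPy and SciPy; the value of $\\sigma$ is an arbitrary placeholder):\n\\begin{verbatim}\n# P(y_i <= 0) for y_i ~ N(mu_i, sigma^2), i.e. the occupancy\n# probability theta_i(mu_i(z)) (assumes NumPy/SciPy; sigma is a\n# placeholder, not a value from this work).\nimport numpy as np\nfrom scipy.special import erf\n\ndef occupancy_probability(mu, sigma=0.1):\n    return 0.5 * (1.0 + erf(-mu / (sigma * np.sqrt(2.0))))\n\nmu = np.array([-0.3, 0.0, 0.3])   # predicted mean SDF values\nprint(occupancy_probability(mu))  # approx. [1.0, 0.5, 0.0]\n\\end{verbatim}\n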
{"text": "\\chapter{Potentials}\n\n\\section{Functional forms of force fields}\n\nThe molecular energy can be described as an Taylor expansion in bonds, bends, torsions, etc.\n\\begin{equation}\n \\begin{split}\n U=&\\sum_{\\text{bonds}} U_r\\left(r\\right)+\\sum_{\\text{bends}} U_\\theta\\left(\\theta\\right)+\n   \\sum_{\\text{torsions}} U_\\phi\\left(\\phi\\right)+\\sum_{\\text{out-of-plane bends}} U_\\chi\\left(\\chi\\right)\n   +\\sum_{\\text{non-bonded}}U_{nb}\\left(r\\right)\\\\\n   &+\\sum_{\\text{bond-bond}}U_{bb'}\\left(r,r'\\right)\n   +\\sum_{\\text{bond-bend}}U_{b\\theta'}\\left(r,\\theta\\right)\n   +\\sum_{\\text{bend-bend}}U_{\\theta\\theta'}\\left(\\theta,\\theta'\\right)\\\\\n   &+\\sum_{\\text{bond-torsion}}U_{r\\phi}\\left(r,\\phi,r'\\right)\n   +\\sum_{\\text{bend-torsion}}U_{\\theta\\phi}\\left(\\theta,\\phi,\\theta'\\right)+\\dots\n  \\end{split}\n  \\label{Eq: force field}\n\\end{equation}\nThis expansion is believed to capture all the chemical entities we can think of, such as\natoms, bonds, angles, etc, and physical properties like equilibrium structures, vibrational spectra, etc.\nThe cross terms are not ad-hoc functions, but arise naturally from this expansion. \nFor example, bonds and bends interact, as the bend angle becomes smaller the bond lengths tend to increase.\nTheir inclusion leads to two advantages: 1) they increase the accuracy of the force field\n(especially the vibrational frequencies), and 2) they\nincrease the transferability of the diagonal terms $U_r\\left(r\\right),U_\\theta\\left(\\theta\\right),\nU_\\phi\\left(\\phi\\right),U_\\chi\\left(\\chi\\right)$.\nOn top of the terms in Eq. \\ref{Eq: force field} one can add ad hoc terms, such as hydrogen bonding, that\nare not adequately accounted for otherwise.\n\nEq. \\ref{Eq: force field} is historically referred to as an \\emph{force field}. The name arose from the lowest\norder approximation using only springs with \\emph{force constants}. Force fields have matured\nand have become quite accurate and many parameters exists for a wide range of structure. These parameters\nare crucial and determine the quality of the force field. Unfortunately, deriving high quality parameters\nremains more than a art rather than a science. However, some progress has been made and in the end of the chapter\nsome algorithms are described how to obtain them.\n\nThe terms in Eq. \\ref{Eq: force field} consists of a functional form, force constants\n(a resistance against a change from the optimum value), and a reference value.\nThe functional form is chosen such as to be an accurate description of the true potential energy (either\nknown from experiment or from quantum mechanics), although one can simplify the functional form to decrease\ncomputational evaluation time of the energy at the cost of diminished accuracy. This tradeoff has almost vanished\nfor intra-molecular potentials but is still an issue for the non-bonded terms.\nThe reference value is \\emph{not} the equilibrium value (except by chance).\nFor example, bond lengths are affected by all other terms in the force field and the more strained a molecule\nthe farther the bond equilibrium length will deviate from its reference value. 
This means that one cannot simply\ntake the equilibrium values from experiment.\n\n\\section{Bonded potentials diagonal terms}\n\n\\subsection{Bond-stretching potentials}\n\nThe bond stretching potential describes the change in energy as the bond stretches and contracts.\nThe simplest functional form would be Hooke's law:\n\\begin{equation}\n  U=\\frac{1}{2} k \\left(r-r_0\\right)^2\n\\end{equation}\nwhere $k$ is the force constant and $r_0$ the reference value for the bond. This form is computationally\nvery fast, but not very realistic. It is well known that it is easier to stretch a bond than it is to\ncompress a bond. The `Morse' potential is anharmonic and provides a much better description of the energy\n\\begin{equation}\n  U= D\\left(1-e^{-\\alpha\\left(r-r_0\\right)}\\right)^2\n\\end{equation}\nExpanding around the equilibrium value leads to\n\\begin{equation}\n U=D\\alpha^2\\left(r-r_0\\right)^2\\left[1-\\alpha\\left(r-r_0\\right)+\\frac{7}{12}\\alpha^2\\left(r-r_0\\right)^2\\dots\\right]\n\\end{equation}\nThe first term is the harmonic potential (with $k=2D\\alpha^2$), and for organic structures where distortions\nfrom equilibrium are small, the difference between the potentials is small. However, for larger deviations\nthe Morse potential provides a significantly better description.\nThe Morse potential provides a restoring force which goes to zero at long distances. For minimizations\nstarting far from equilibrium, this could result in non-convergence. Some force fields solved this problem by\nusing modifications of Hooke's law. MM2 added a cubic term, making the bond anharmonic. However, this leads to large\nnegative energies for poor initial geometries with large distortions. MM3 added the quartic term to solve this.\nNote that the $7/12$ terms in the MM2/3 functional forms originate from the Taylor expansion of the Morse potential,\nand the cubic and quartic terms are chosen to mimic the Morse potential for moderate distortions.\nDinur and Hagler proposed a functional form based on inverse bond lengths which follows the true (QM) potential energy\nover an even wider range\n\\begin{equation}\n U=U_0+C_2\\left(\\frac{1}{r}-\\frac{1}{r_0}\\right)^2+C_3\\left(\\frac{1}{r}-\\frac{1}{r_0}\\right)^3\n\\end{equation}\n\n\\noindent\nThe implemented bond-potentials:\n\\begin{itemize}\n\n  \\item{HARMONIC\\_BOND}\n  \\begin{equation}\n  U=\\frac{1}{2} p_0 \\left(r-p_1\\right)^2\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA.\n\n  \\item{CORE\\_SHELL\\_SPRING}\n  \\begin{equation}\n  U=\\frac{1}{2} p_0 r^2\n  \\end{equation}\n  1 argument: $p_0/k_B$ in units of K/\\AA$^2$.\n\n  \\item{MORSE\\_BOND}\n  \\begin{equation}\n  U=p_0\\left[\\left(1-e^{-p_1\\left(r-p_2\\right)}\\right)^2-1\\right]\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K, $p_1$ in \\AA$^{-1}$, and $p_2$ in \\AA.\n\n  \\item{LJ\\_12\\_6\\_BOND}\n  \\begin{equation}\n  U=\\frac{p_0}{r^{12}}-\\frac{p_1}{r^{6}}\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K\\,\\AA$^{12}$, and $p_1/k_B$ in units of K\\,\\AA$^6$.\n\n  \\item{LENNARD\\_JONES\\_BOND}\n  \\begin{equation}\n  U=4 p_0 \\left[\\left(\\frac{p_1}{r}\\right)^{12}-\\left(\\frac{p_1}{r}\\right)^{6}\\right]\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K, $p_1$ in \\AA.\n\n  \\item{BUCKINGHAM\\_BOND}\n  \\begin{equation}\n  U=p_0 e^{-p_1 r}-\\frac{p_2}{r^{6}}\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \\AA$^{-1}$, and $p_2/k_B$ in K\\,\\AA$^6$.\n\n  
\\item{RESTRAINED\\_HARMONIC\\_BOND}\n  \\begin{equation}\n  U=\\begin{cases}\n      \\frac{1}{2} p_0\\left(r-p_1\\right)^2 & \\qquad \\left|r-p_1\\right|\\leq p_2\\\\\n      \\frac{1}{2} p_0 p_2^2+p_0 p_2\\left(\\left|r-p_1\\right|-p_2\\right) & \\qquad \\left|r-p_1\\right|> p_2\n     \\end{cases}\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, and $p_2$ in \\AA.\n\n  \\item{QUARTIC\\_BOND}\n  \\begin{equation}\n  U=\\frac{1}{2} p_0 \\left(r-p_1\\right)^2+ \\frac{1}{3} p_2 \\left(r-p_1\\right)^3+ \\frac{1}{4} p_3 \\left(r-p_1\\right)^4\n  \\end{equation}\n  4 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, $p_2/k_B$ in K/\\AA$^3$, and $p_3/k_B$ in K/\\AA$^4$.\n\n  \\item{CFF\\_QUARTIC\\_BOND}\n  \\begin{equation}\n  U=p_0 \\left(r-p_1\\right)^2+ p_2 \\left(r-p_1\\right)^3+ p_3 \\left(r-p_1\\right)^4\n  \\end{equation}\n  4 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, $p_2/k_B$ in K/\\AA$^3$, and $p_3/k_B$ in K/\\AA$^4$.\n\n  \\item{MM3\\_BOND}\n  \\begin{equation}\n  U=p_0 \\left(r-p_1\\right)^2\\left(1-2.55\\left(r-p_1\\right)+\\frac{7}{12}\\,2.55^2\\,\\left(r-p_1\\right)^2\\right)\n  \\end{equation}\n  2 arguments: $p_0$ in units of mdyne/\\AA\\, molecule, $p_1$ in \\AA.\n\n  \\item{RIGID\\_BOND}\\\\\n  Use for connections between rigid units.\n\n  \\item{FIXED\\_BOND}\\\\\n  Use for bond constraints using the `SHAKE' and `RATTLE' algorithms. Applies to Monte-Carlo, Molecular Dynamics, and minimization.\n\n  \\item{MEASURE\\_BOND}\\\\\n  A histogram of the bond-distance can be computed.\n\n\\end{itemize}\n\n\\subsection{Urey-Bradley potentials}\n\nThe Urey-Bradley potential is sometimes used to account for the repulsion between two atoms bound to a common\natom. In more modern force fields it is replaced by bond/bend cross potentials.\nUrey-Bradley terms are essentially just bonds between 1-3 nearest-neighbor atoms, and \nthe same range of potentials is offered as for 1-2 bonds in RASPA.\n\n\n\\begin{itemize}\n\n  \\item{HARMONIC\\_UREYBRADLEY}\n  \\begin{equation}\n  U=\\frac{1}{2} p_0 \\left(r-p_1\\right)^2\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA.\n\n  \\item{MORSE\\_UREYBRADLEY}\n  \\begin{equation}\n  U=p_0\\left[\\left(1-e^{-p_1\\left(r-p_2\\right)}\\right)^2-1\\right]\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K, $p_1$ in \\AA$^{-1}$, and $p_2$ in \\AA.\n\n  \\item{LJ\\_12\\_6\\_UREYBRADLEY}\n  \\begin{equation}\n  U=\\frac{p_0}{r^{12}}-\\frac{p_1}{r^{6}}\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K\\,\\AA$^{12}$, and $p_1/k_B$ in units of K\\,\\AA$^6$.\n\n  \\item{LENNARD\\_JONES\\_UREYBRADLEY}\n  \\begin{equation}\n  U=4 p_0 \\left[\\left(\\frac{p_1}{r}\\right)^{12}-\\left(\\frac{p_1}{r}\\right)^{6}\\right]\n  \\end{equation}\n  2 arguments: $p_0/k_B$ in units of K, $p_1$ in \\AA.\n\n  \\item{BUCKINGHAM\\_UREYBRADLEY}\n  \\begin{equation}\n  U=p_0 e^{-p_1 r}-\\frac{p_2}{r^{6}}\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \\AA$^{-1}$, and $p_2/k_B$ in K\\,\\AA$^6$.\n\n  \\item{RESTRAINED\\_HARMONIC\\_UREYBRADLEY}\n  \\begin{equation}\n  U=\\begin{cases}\n      \\frac{1}{2} p_0\\left(r-p_1\\right)^2 & \\qquad \\left|r-p_1\\right|\\leq p_2\\\\\n      \\frac{1}{2} p_0 p_2^2+p_0 p_2\\left(\\left|r-p_1\\right|-p_2\\right) & \\qquad \\left|r-p_1\\right|> p_2\n     \\end{cases}\n  \\end{equation}\n  3 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, and $p_2$ in \\AA.\n\n  \\item{QUARTIC\\_UREYBRADLEY}\n  
\\begin{equation}\n  U=\\frac{1}{2} p_0 \\left(r-p_1\\right)^2+ \\frac{1}{3} p_2 \\left(r-p_1\\right)^3+ \\frac{1}{4} p_3 \\left(r-p_1\\right)^4\n  \\end{equation}\n  4 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, $p_2/k_B$ in K/\\AA$^3$, and $p_3/k_B$ in K/\\AA$^4$.\n\n  \\item{CFF\\_QUARTIC\\_UREYBRADLEY}\n  \\begin{equation}\n  U=p_0 \\left(r-p_1\\right)^2+ p_2 \\left(r-p_1\\right)^3+ p_3 \\left(r-p_1\\right)^4\n  \\end{equation}\n  4 arguments: $p_0/k_B$ in units of K/\\AA$^2$, $p_1$ in \\AA, $p_2/k_B$ in K/\\AA$^3$, and $p_3/k_B$ in K/\\AA$^4$.\n\n  \\item{MM3\\_UREYBRADLEY}\n  \\begin{equation}\n  U=p_0 \\left(r-p_1\\right)^2\\left(1-2.55\\left(r-p_1\\right)+\\frac{7}{12}\\,2.55^2\\,\\left(r-p_1\\right)^2\\right)\n  \\end{equation}\n  2 arguments: $p_0$ in units of mdyne/\\AA\\, molecule, $p_1$ in \\AA.\n\n \\item{RIGID\\_UREYBRADLEY}\\\\\n  Use for connections between rigid units.\n\n  \\item{FIXED\\_UREYBRADLEY}\\\\\n  Use for bond constraints using the `SHAKE' and `RATTLE' algorithms. Applies to Monte-Carlo, Molecular Dynamics, and minimization.\n\n  \\item{MEASURE\\_UREYBRADLEY}\\\\\n  A histogram of the Urey-Bradley distance can be computed.\n\n\\end{itemize}\n\n\\subsection{Bending potential}\n\nThe simplest approach for an angle potential is the harmonic potential\n\\begin{equation}\n  U=\\frac{1}{2} k \\left(\\theta-\\theta_0\\right)^2\n\\end{equation}\nAngles are much softer than bonds, especially in zeolites where a Si-O-Si angle ranges \nbetween 135 and 180 degrees.\nA problem with all polynomial representations of angles is that an angle of 180 degrees results\nin a singular point (unless the reference angle is 180 degrees). The case of 0 degrees is not possible\ndue to the repulsion of the $i$ and $k$ atoms in the $i$-$j$-$k$ bend.\nThe singularity is due to the fact that the force expression of such a polynomial\ncontains a factor $1/\\sin\\left(\\theta\\right)$. A common solution is to use a trigonometric function\n\\begin{equation}\n  U=\\frac{1}{2} k \\left[\\cos\\left(\\theta\\right)-\\cos\\left(\\theta_0\\right)\\right]^2\n\\end{equation}\nNote that close to the maximum these potentials have no restoring force, but for small distortions this\nis not a problem.\nThe MM force fields use higher order terms. A sixth-power term was needed to describe the highly bent\nbicyclo[1.1.1]pentane. 
\begin{itemize}

  \item{HARMONIC\_BEND,CORE\_SHELL\_BEND}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left(\theta_{ijk}-p_1\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K/rad$^2$ and $p_1$ in degrees.

  \item{QUARTIC\_BEND}
  \begin{equation}
  U=\frac{1}{2} p_0 \left(\theta_{ijk}-p_1\right)^2+
     \frac{1}{3} p_2 \left(\theta_{ijk}-p_1\right)^3+
     \frac{1}{4} p_3 \left(\theta_{ijk}-p_1\right)^4
  \end{equation}
  4 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ in degrees, $p_2/k_B$ in K/rad$^3$, and $p_3/k_B$ in K/rad$^4$.

  \item{CFF\_QUARTIC\_BEND}
  \begin{equation}
  U=p_0 \left(\theta_{ijk}-p_1\right)^2+
    p_2 \left(\theta_{ijk}-p_1\right)^3+
    p_3 \left(\theta_{ijk}-p_1\right)^4
  \end{equation}
  4 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ in degrees, $p_2/k_B$ in K/rad$^3$, and $p_3/k_B$ in K/rad$^4$.

  \item{HARMONIC\_COSINE\_BEND}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left(\cos\theta_{ijk}-\cos p_1\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K and $p_1$ in degrees.

  \item{COSINE\_BEND}\\
  \begin{equation}
  U=p_0\left(1+\cos\left(p_1\theta_{ijk}-p_2\right)\right)
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K, $p_1$ dimensionless, and $p_2$ in degrees.

  \item{MM3\_BEND}
  \begin{equation}
  \begin{split}
  U=\frac{1}{2}p_0 \left(\theta_{ijk}-p_1\right)^2\Bigl(1-0.014\left(\theta_{ijk}-p_1\right)+
   5.6\times10^{-5}\left(\theta_{ijk}-p_1\right)^2&-7\times10^{-7}\left(\theta_{ijk}-p_1\right)^3\\
   &+2.2\times10^{-8}\left(\theta_{ijk}-p_1\right)^4
   \Bigr)
  \end{split}
  \end{equation}
  2 arguments: $p_0$ in units of mdyne\,\AA/rad$^2$, $p_1$ in degrees.

  \item{MM3\_IN\_PLANE\_BEND}
  \begin{equation}
  \begin{split}
  U=\frac{1}{2}p_0 \left(\theta_{ijk}-p_1\right)^2\Bigl(1-0.014\left(\theta_{ijk}-p_1\right)+
   5.6\times10^{-5}\left(\theta_{ijk}-p_1\right)^2&-7\times10^{-7}\left(\theta_{ijk}-p_1\right)^3\\
   &+2.2\times10^{-8}\left(\theta_{ijk}-p_1\right)^4
   \Bigr)
  \end{split}
  \end{equation}
  2 arguments: $p_0$ in units of mdyne\,\AA/rad$^2$, $p_1$ in degrees. The bend is `in-plane' and only applicable to bends in defined planar trigonal centers.
  The bend depends on the fourth atom of the trigonal center.

  \item{FIXED\_BEND}\\
   Use for bend-angle constraints using the `SHAKE'- and `RATTLE'-algorithms.
Applies to Molecular Dynamics and minimization.
   Does not work (yet) in Monte-Carlo.

  \item{MEASURE\_BEND}\\
  A histogram of the bend angle can be computed.

\end{itemize}

\subsection{Wilson inversion-bend potential}

\begin{figure}[t]
  \centering
  \includegraphics[width=7.5cm]{./Potentials/WilsonAnglePlus.jpg}
  \includegraphics[width=7.5cm]{./Potentials/WilsonAngleMinus.jpg}
  \caption{The definition of the Wilson inversion-bend angle $\chi$.
  On the left a positive Wilson angle, and on the right a negative Wilson angle.}
  \label{Fig: Wilson definition}
\end{figure}

Common planar molecules that contain a double bond or sp$^2$ hybridization form planar groups with trigonal centers.
For example: the carbon and nitrogen centers in formamide, and the carbon centers in benzene.
The mode of motion is different from bond stretching, bending, and internal rotation.
The associated harmonic potential is
\begin{equation}
  U=\frac{1}{2} k \left(\chi\right)^2
\end{equation}
with $\chi$ the out-of-plane angle. Two possible definitions are in use:
\begin{enumerate}
\item{the distance of the central atom from the plane defined by the other three atoms (pyramid height),}
\item{the average angle between any bond that extends from the central atom and the plane defined by the
other two bonds}.
\end{enumerate}
Note that an alternative to the out-of-plane angle is the \emph{improper torsion} using
\begin{equation}
  U=\frac{1}{2} k \left(1-\cos 2\chi\right)
\end{equation}
The out-of-plane potential can also be used for non-planar structures, for example in united-atom models for chiral
centers to avoid inversion of the chiral center. Another example of its use is coordination complexes,
where the plane of the ligands need no longer be defined exactly.
In square planar complexes it is necessary to define an average plane through the ligands
(usually the least-squares plane).
Note that the definition includes one central atom,
which is listed as the second in $a-b-c-d$: $a$, $c$, and $d$ are bonded to the central atom $b$.
The inversion angle potential is the average of the three possible inversion angle terms.
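A one-line check (added, not from the original text) connects the improper-torsion form
above to the harmonic one: with the identity $\cos 2\chi=1-2\sin^2\chi$,
\begin{equation}
  U=\frac{1}{2} k \left(1-\cos 2\chi\right)=k\sin^2\chi\approx k\chi^2
  \qquad\text{for small }\chi,
\end{equation}
so near planarity it behaves like a harmonic out-of-plane potential with twice the
nominal force constant.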
\begin{itemize}
  \item{HARMONIC\_INVERSION}\\
 \begin{equation}
  U=\frac{1}{2}p_0\left(\chi_{ijk}-p_1\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K/rad$^2$ and $p_1$ in degrees.
  \item{HARMONIC\_COSINE\_INVERSION}\\
 \begin{equation}
  U=\frac{1}{2}p_0\left(\cos\left(\chi_{ijk}\right)-\cos\left(p_1\right)\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K and $p_1$ in degrees.
  \item{PLANAR\_INVERSION}\\
 \begin{equation}
  U=p_0\left(1-\cos\left(\chi\right)\right)
  \end{equation}
  1 argument: $p_0/k_B$ in units of K.
  \item{MM3\_INVERSION}\\
  \begin{equation}
  \begin{split}
  U=\frac{1}{2}p_0 \left(\chi-p_1\right)^2\Bigl(1-0.014\left(\chi-p_1\right)+
   5.6\times10^{-5}\left(\chi-p_1\right)^2&-7\times10^{-7}\left(\chi-p_1\right)^3\\
   &+2.2\times10^{-8}\left(\chi-p_1\right)^4
   \Bigr)
  \end{split}
  \end{equation}
  2 arguments: $p_0$ in units of mdyne\,\AA/rad$^2$, $p_1$ in degrees.

  \item{FIXED\_INVERSION\_BEND}\\
   Use for inversion bend-angle constraints using the `SHAKE'- and `RATTLE'-algorithms.
   Applies to Molecular Dynamics and minimization. Does not work (yet) in Monte-Carlo.
\end{itemize}

\begin{figure}[t]
  \centering
  \includegraphics[width=7.5cm]{./Potentials/TorsionAnglePlus.jpg}
  \includegraphics[width=7.5cm]{./Potentials/TorsionAngleMinus.jpg}
  \caption{The definition of the dihedral angle $\phi$: the angle between the planes formed by atoms a-b-c and b-c-d.
   On the left a positive dihedral angle, and on the right a negative dihedral angle.}
  \label{Fig: Torsion definition}
\end{figure}

\subsection{Torsion potential}

Intramolecular rotations about bonds do not occur freely. A possible description with a physical interpretation
is the three-term Fourier expansion
\begin{equation}
U=\frac{V_1}{2}\left[1+\cos\phi\right]+
  \frac{V_2}{2}\left[1-\cos2\phi\right]+
  \frac{V_3}{2}\left[1+\cos3\phi\right]
\end{equation}
\begin{enumerate}
\item{the 1-fold term has been attributed to residual dipole-dipole interactions, to Van der Waals interactions,
or to any other direct interaction between atoms not accounted for otherwise,}
\item{the 2-fold term arises from conjugation or hyperconjugation, being geometrically related to p orbitals,}
\item{and the 3-fold term has a steric (or bonding/anti-bonding) origin.}
\end{enumerate}
The values for the 4-fold and higher terms are small, and it is not known whether these are essential to include.
It may be that Van der Waals and dipole interactions already take care of these effects.
Torsions are even softer than bond angles. All possible values can be found in structures. Therefore, the
energy function must be valid over the entire range, the function must be periodic, and for reasons of
symmetry have stationary points at 0 and 180 degrees. The periodicity is the number of minima for the
potential, usually 3 for an sp$^3$-sp$^3$ bond and 2 for a conjugated bond.

The definition of a torsion includes two central and two terminal atoms. The term `torsional' means
an internal rigid rotation and `dihedral' means a rotation of two vicinal bonds about a middle bond.
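As a concrete illustration (an added example): keeping only the 3-fold term for an
sp$^3$-sp$^3$ bond,
\begin{equation}
  U=\frac{V_3}{2}\left[1+\cos3\phi\right],
\end{equation}
the minima lie where $\cos3\phi=-1$, i.e.\ at the staggered conformations
$\phi=60^\circ,180^\circ,300^\circ$, and maxima of height $V_3$ at the eclipsed
conformations $\phi=0^\circ,120^\circ,240^\circ$, consistent with a periodicity of 3.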
\begin{itemize}
  \item{HARMONIC\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left(\phi_{ijkl}-p_1\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ in degrees.

  \item{HARMONIC\_COSINE\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[\cos\left(\phi_{ijkl}\right)-\cos\left(p_1\right)\right]^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K, $p_1$ in degrees.

  \item{THREE\_COSINE\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{MM3\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0,p_1,p_2$ in units of kcal/mol.

  \item{CFF\_DIHEDRAL}\\
  \begin{equation}
  U=p_0\left[1-\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1-\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{CFF\_DIHEDRAL2}\\
  \begin{equation}
  U=p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1+\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{SIX\_COSINE\_DIHEDRAL}\\
  The Ryckaert-Bellemans potential is often used for alkanes; its use implies exclusion of VDW-interactions
  between the first and last atoms of the dihedral, and $\phi'=\phi-\pi$ is defined according to the
  polymer convention $\phi'(trans)=0$.
  \begin{align}
  U=&\sum_{n=0}^5 p_n \cos^n\left(\phi'_{ijkl}\right)\\
    =&p_0+p_1\cos\left(\phi'_{ijkl}\right)+p_2\cos^2\left(\phi'_{ijkl}\right)+p_3\cos^3\left(\phi'_{ijkl}\right)+
    p_4\cos^4\left(\phi'_{ijkl}\right)+p_5\cos^5\left(\phi'_{ijkl}\right)
  \end{align}
  6 arguments: $p_0/k_B,\dots,p_5/k_B$ in units of K.
  Rewritten in terms of $\phi$ (using $\cos\left(\phi-\pi\right)=-\cos\phi$, which flips the sign of the odd powers) the potential reads
  \begin{equation}
  U=p_0-p_1\cos\left(\phi_{ijkl}\right)+p_2\cos^2\left(\phi_{ijkl}\right)
    -p_3\cos^3\left(\phi_{ijkl}\right)+p_4\cos^4\left(\phi_{ijkl}\right)
    -p_5\cos^5\left(\phi_{ijkl}\right)
  \end{equation}

  \item{TRAPPE\_DIHEDRAL}\\
  \begin{equation}
  U=p_0+p_1\left[1+\cos\left(\phi_{ijkl}\right)\right]+
        p_2\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
        p_3\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  4 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B$ in units of K.

  \item{CVFF\_DIHEDRAL}\\
  \begin{equation}
  U=p_0\left[1+\cos\left(p_1\phi_{ijkl}-p_2\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K, $p_1$ dimensionless, and $p_2$ in degrees.

  \item{OPLS\_DIHEDRAL}\\
  \begin{equation}
  U= \frac{1}{2}p_0+
    \frac{1}{2}p_1\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_3\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  4 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B$ in units of K.

  \item{FOURIER\_SERIES\_DIHEDRAL}\\
  The general form of a Fourier expansion is:
  \begin{equation}
  U=\sum_{n=1}^6\left[a_n\cos\left(n\phi\right)+b_n\sin\left(n\phi\right)\right]
  \end{equation}
  This form uses equilibrium angles of 0 degrees for $n=1,3,5$ and 180 degrees for $n=2,4,6$:
  \begin{equation}
  \begin{split}
  U=&\frac{1}{2}p_0\left[1+\cos\phi\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi\right)\right]+\\
    &\frac{1}{2}p_3\left[1-\cos\left(4\phi\right)\right]+
    \frac{1}{2}p_4\left[1+\cos\left(5\phi\right)\right]+
    \frac{1}{2}p_5\left[1-\cos\left(6\phi\right)\right]
  \end{split}
  \end{equation}
  6 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B,p_4/k_B,p_5/k_B$ in units of K.

  \item{FOURIER\_SERIES\_DIHEDRAL\_2}\\
  The general form of a Fourier expansion is:
  \begin{equation}
  U=\sum_{n=1}^6\left[a_n\cos\left(n\phi\right)+b_n\sin\left(n\phi\right)\right]
  \end{equation}
  This form uses equilibrium angles of 0 degrees for $n=1,3,4,5,6$ and 180 degrees for $n=2$:
  \begin{equation}
  \begin{split}
  U=&\frac{1}{2}p_0\left[1+\cos\phi\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi\right)\right]+\\
    &\frac{1}{2}p_3\left[1+\cos\left(4\phi\right)\right]+
    \frac{1}{2}p_4\left[1+\cos\left(5\phi\right)\right]+
    \frac{1}{2}p_5\left[1+\cos\left(6\phi\right)\right]
  \end{split}
  \end{equation}
  6 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B,p_4/k_B,p_5/k_B$ in units of K.

  \item{FIXED\_DIHEDRAL}\\
   Use for dihedral-angle constraints using the `SHAKE'- and `RATTLE'-algorithms.
   Applies to Molecular Dynamics and minimization. Does not work (yet) in Monte-Carlo.
\end{itemize}

\begin{center}
\shadowbox{
\begin{minipage}{16cm}
\begin{quote}
The following identities are convenient when dealing with torsions:
\begin{equation}
\begin{split}
\cos 1x &= \cos x\\
\cos 2x &= -1+2\cos^2 x\\
\cos 3x &= -3\cos x+4\cos^3x\\
\cos 4x &= 1-8\cos^2x+8\cos^4x\\
\cos 5x &= 5\cos x-20\cos^3x+16\cos^5x\\
\cos 6x &= -1+18\cos^2 x-48\cos^4 x+32\cos^6x
\end{split}
\end{equation}

\begin{equation}
\begin{split}
\sin 1x &= \sin x\\
\sin 2x &= (\sin x) (2\cos x)\\
\sin 3x &= (\sin x) (-1+4\cos^2 x)\\
\sin 4x &= (\sin x ) (-4\cos x +8\cos^3x)\\
\sin 5x &= (\sin x) (1-12\cos^2x+16\cos^4x)\\
\sin 6x &= (\sin x) (6\cos x-32\cos^3x+32\cos^5x)
\end{split}
\end{equation}
\end{quote}
\end{minipage}}
\end{center}
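As an example of how these identities are used (an added sketch): substituting the
expressions for $\cos2\phi$ and $\cos3\phi$ into the three-term series of
THREE\_COSINE\_DIHEDRAL turns it into a polynomial in $\cos\phi$,
\begin{equation}
\begin{split}
 \frac{1}{2}p_0\left[1+\cos\phi\right]&+\frac{1}{2}p_1\left[1-\cos2\phi\right]+
 \frac{1}{2}p_2\left[1+\cos3\phi\right]\\
 &=\left(\frac{1}{2}p_0+p_1+\frac{1}{2}p_2\right)
  +\left(\frac{1}{2}p_0-\frac{3}{2}p_2\right)\cos\phi
  -p_1\cos^2\phi+2p_2\cos^3\phi
\end{split}
\end{equation}
which is the route by which cosine-series and cosine-power (Ryckaert-Bellemans-type)
torsions can be converted into one another.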
\subsection{Improper torsion potential}

\begin{figure}[t]
  \centering
  \includegraphics[width=7.5cm]{./Potentials/ImproperTorsionAnglePlus.jpg}
  \includegraphics[width=7.5cm]{./Potentials/ImproperTorsionAngleMinus.jpg}
  \caption{The most common (CVFF, DLPOLY) definition of the improper dihedral angle
   $\phi$: the angle between the planes formed by atoms `a-c-d' and `c-d-b'.
   On the left a positive improper dihedral angle, and on the right a negative improper dihedral angle.
   The atoms need to be listed in the order `a-c-d-b'. Note that an exchange of atoms `c' and `d' leads to a change
   of sign, but \emph{not} in magnitude.}
  \label{Fig: Improper Torsion definition}
  \includegraphics[width=7.5cm]{./Potentials/ImproperTorsionAngleTypeA.jpg}
  \includegraphics[width=7.5cm]{./Potentials/ImproperTorsionAngleTypeB.jpg}
  \caption{A second definition of the improper dihedral angle (CHARMM, AMBER). The central atom is `c', and the improper torsion
   is entered as `a-b-c-d'. However, an exchange of terminal atoms leads to a change in magnitude, and the improper torsion
   needs to be symmetrized by adding two additional improper torsions `b-d-c-a' and `d-a-c-b' and rescaling the force constant
   by a factor of $1/3$.}
\end{figure}

The improper torsion is an alternative for the out-of-plane angle, and a possible definition is
\begin{equation}
  U=\frac{1}{2} k \left(1-\cos 2\chi\right)
\end{equation}
It is termed `improper torsion' because it simply treats the four atoms in the plane as if they were
bonded in the same way as in a true torsional angle. Note that the definition includes one central atom,
which is listed as the second in $a-b-c-d$: $a$, $c$, and $d$ are bonded to the central atom $b$.
Improper torsions are often used to keep sp$^2$ atoms planar and sp$^3$ atoms in a tetrahedral geometry.

The CHARMM convention is to list the central atom first, while there are no rules on how to order the other three atoms.
Hence, six possibilities exist for the definition of an improper torsion. The AMBER convention is that the out-of-plane
atom is listed in the third position and the order of the other atoms is determined alphabetically by atom type, and
by the atom number (i.e.\ the order in the molecule) when atom types are identical.

\begin{itemize}
  \item{HARMONIC\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left(\phi_{ijkl}-p_1\right)^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ in degrees.

  \item{HARMONIC\_COSINE\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[\cos\left(\phi_{ijkl}\right)-\cos\left(p_1\right)\right]^2
  \end{equation}
  2 arguments: $p_0/k_B$ in units of K, $p_1$ in degrees.

  \item{THREE\_COSINE\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{MM3\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0,p_1,p_2$ in units of kcal/mol.

  \item{CFF\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=p_0\left[1-\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1-\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{CFF\_IMPROPER\_DIHEDRAL2}\\
  \begin{equation}
  U=p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1+\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  3 arguments:
$p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

  \item{SIX\_COSINE\_IMPROPER\_DIHEDRAL}\\
  The Ryckaert-Bellemans potential is often used for alkanes; its use implies exclusion of VDW-interactions
  between the first and last atoms of the dihedral, and $\phi'=\phi-\pi$ is defined according to the
  polymer convention $\phi'(trans)=0$.
  \begin{align}
  U=&\sum_{n=0}^5 p_n \cos^n\left(\phi'_{ijkl}\right)\\
    =&p_0+p_1\cos\left(\phi'_{ijkl}\right)+p_2\cos^2\left(\phi'_{ijkl}\right)+p_3\cos^3\left(\phi'_{ijkl}\right)+
    p_4\cos^4\left(\phi'_{ijkl}\right)+p_5\cos^5\left(\phi'_{ijkl}\right)
  \end{align}
  6 arguments: $p_0/k_B,\dots,p_5/k_B$ in units of K.
  Rewritten in terms of $\phi$ (using $\cos\left(\phi-\pi\right)=-\cos\phi$) the potential reads
  \begin{equation}
  U=p_0-p_1\cos\left(\phi_{ijkl}\right)+p_2\cos^2\left(\phi_{ijkl}\right)
    -p_3\cos^3\left(\phi_{ijkl}\right)+p_4\cos^4\left(\phi_{ijkl}\right)
    -p_5\cos^5\left(\phi_{ijkl}\right)
  \end{equation}

  \item{TRAPPE\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=p_0+p_1\left[1+\cos\left(\phi_{ijkl}\right)\right]+
        p_2\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
        p_3\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  4 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B$ in units of K.

  \item{CVFF\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U=p_0\left[1+\cos\left(p_1\phi_{ijkl}-p_2\right)\right]
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K, $p_1$ dimensionless, and $p_2$ in degrees.

  \item{OPLS\_IMPROPER\_DIHEDRAL}\\
  \begin{equation}
  U= \frac{1}{2}p_0+
    \frac{1}{2}p_1\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_3\left[1+\cos\left(3\phi_{ijkl}\right)\right]
  \end{equation}
  4 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B$ in units of K.

  \item{FOURIER\_SERIES\_IMPROPER\_DIHEDRAL}\\
  The general form of a Fourier expansion is:
  \begin{equation}
  U=\sum_{n=1}^6\left[a_n\cos\left(n\phi\right)+b_n\sin\left(n\phi\right)\right]
  \end{equation}
  This form uses equilibrium angles of 0 degrees for $n=1,3,5$ and 180 degrees for $n=2,4,6$:
  \begin{equation}
  \begin{split}
  U=&\frac{1}{2}p_0\left[1+\cos\phi\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi\right)\right]+\\
    &\frac{1}{2}p_3\left[1-\cos\left(4\phi\right)\right]+
    \frac{1}{2}p_4\left[1+\cos\left(5\phi\right)\right]+
    \frac{1}{2}p_5\left[1-\cos\left(6\phi\right)\right]
  \end{split}
  \end{equation}
  6 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B,p_4/k_B,p_5/k_B$ in units of K.

  \item{FOURIER\_SERIES\_IMPROPER\_DIHEDRAL\_2}\\
  The general form of a Fourier expansion is:
  \begin{equation}
  U=\sum_{n=1}^6\left[a_n\cos\left(n\phi\right)+b_n\sin\left(n\phi\right)\right]
  \end{equation}
  This form uses equilibrium angles of 0 degrees for $n=1,3,4,5,6$ and 180 degrees for $n=2$:
  \begin{equation}
  \begin{split}
  U=&\frac{1}{2}p_0\left[1+\cos\phi\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi\right)\right]+\\
    &\frac{1}{2}p_3\left[1+\cos\left(4\phi\right)\right]+
    \frac{1}{2}p_4\left[1+\cos\left(5\phi\right)\right]+
    \frac{1}{2}p_5\left[1+\cos\left(6\phi\right)\right]
  \end{split}
  \end{equation}
  6 arguments: $p_0/k_B,p_1/k_B,p_2/k_B,p_3/k_B,p_4/k_B,p_5/k_B$ in units of K.

  \item{FIXED\_IMPROPER\_DIHEDRAL}\\
   Use for improper-dihedral-angle constraints using the `SHAKE'- and `RATTLE'-algorithms.
   Applies to Molecular Dynamics and minimization. Does not work (yet) in Monte-Carlo.

\end{itemize}


\section{Non-bonded potentials}

\subsection{Van der Waals potentials}

The general expression for Van der Waals potentials when using a cutoff distance is
\begin{equation}
 U_{ij}^{\text{VDW}}=\begin{cases}
    U_{ij}\left(r_{ij}\right)& \text{if }r_{ij}\leq r_c\\
    0 & \text{otherwise}
   \end{cases}
\end{equation}

\begin{itemize}

  \item{NONE}\\
  \begin{equation}
    U=0
  \end{equation}
  Zero parameters.
\item{$\begin{array}{l}\text{LENNARD\_JONES}\\
      \text{LENNARD\_JONES\_SMOOTHED3}\\
      \text{LENNARD\_JONES\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=
      4 p_0 \left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right]
  \end{equation}
  2 parameters: $p_0/k_B$ in units of K, and $p_1$ in \AA.
\item{$\begin{array}{l}\text{FEYNMAN\_HIBBS\_LENNARD\_JONES}\\
      \text{FEYNMAN\_HIBBS\_LENNARD\_JONES\_SMOOTHED3}\\
      \text{FEYNMAN\_HIBBS\_LENNARD\_JONES\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=4 p_0 \left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right]
      +\frac{\hbar^2}{24 p_2 k_B T} 4 p_0\left[132\left(\frac{p_1}{r}\right)^{12}-30\left(\frac{p_1}{r}\right)^6\right]\frac{1}{r^2}
  \end{equation}
  3 parameters: $p_0/k_B$ in units of K, $p_1$ in \AA, and $p_2$ is the reduced mass in unified atomic mass units.
  The second term is the first-order Feynman-Hibbs quantum correction $\frac{\hbar^2}{24\mu k_B T}\nabla^2 U$;
  evaluating $\nabla^2 U=U''+\frac{2}{r}U'$ for the Lennard-Jones form yields the coefficients 132 and 30.

\item{$\begin{array}{l}\text{FEYNMAN\_HIBBS2\_LENNARD\_JONES}\\
      \text{FEYNMAN\_HIBBS\_LENNARD\_JONES2\_SMOOTHED3}\\
      \text{FEYNMAN\_HIBBS\_LENNARD\_JONES2\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=4 p_0 \left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right]
      +4 p_0\left[132\left(\frac{p_1}{r}\right)^{12}-30\left(\frac{p_1}{r}\right)^6\right]\frac{p_2}{r^2}
  \end{equation}
  3 parameters: $p_0/k_B$ in units of K, $p_1$ in \AA, and $p_2$ in units of \AA$^2$.

  \item{LENNARD\_JONES\_SHIFTED\_FORCE}\\
  \begin{equation}
       U=4 p_0 \left\{\left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right]-
           \left[\left(\frac{p_1}{r_c}\right)^{12}-\left(\frac{p_1}{r_c}\right)^6\right]+
           \left[12\left(\frac{p_1}{r_c}\right)^{12}-6\left(\frac{p_1}{r_c}\right)^{6}\right]\frac{\left(r-r_c\right)}{r_c}\right\}
  \end{equation}
  2 parameters: $p_0/k_B$ in units of K, and $p_1$ in \AA.

  \item{LENNARD\_JONES\_SHIFTED\_FORCE2}\\
  \begin{equation}
       U=4 p_0 \left\{\left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right]
       +\left[6\left(\frac{p_1}{r_c}\right)^{12}-3\left(\frac{p_1}{r_c}\right)^6\right]\frac{r^2}{r_c^2}
       -7\left(\frac{p_1}{r_c}\right)^{12}+4\left(\frac{p_1}{r_c}\right)^6\right\}
  \end{equation}
  2 parameters: $p_0/k_B$ in units of K, and $p_1$ in \AA.
  (Both the energy and the force of this form vanish at the cutoff $r_c$.)

\item{$\begin{array}{l}\text{POTENTIAL\_12\_6}\\
      \text{POTENTIAL\_12\_6\_SMOOTHED3}\\
      \text{POTENTIAL\_12\_6\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=
      \frac{p_0}{r^{12}}-\frac{p_1}{r^6}
  \end{equation}
   2 parameters: $p_0/k_B$ in units of K\,\AA$^{12}$, and $p_1/k_B$ in units of K\,\AA$^6$.

\item{$\begin{array}{l}\text{POTENTIAL\_12\_6\_2\_0}\\
      \text{POTENTIAL\_12\_6\_2\_0\_SMOOTHED3}\\
      \text{POTENTIAL\_12\_6\_2\_0\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=
      \frac{p_0}{r^{12}}+\frac{p_1}{r^6}+\frac{p_2}{r^2}+p_3
  \end{equation}
   4 parameters: $p_0/k_B$ in units of K\,\AA$^{12}$, $p_1/k_B$ in units of K\,\AA$^6$, $p_2/k_B$ in units of K\,\AA$^2$,
   and $p_3/k_B$ in units of K.

\item{$\begin{array}{l}\text{MORSE}\\
      \text{MORSE\_SMOOTHED3}\\
      \text{MORSE\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 \left[\left(1-e^{-p_1\left(r-p_2\right)}\right)^2-1\right]
  \end{equation}
   3 parameters: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, and $p_2$ in units of \AA.

\item{$\begin{array}{l}\text{MORSE2}\\
      \text{MORSE2\_SMOOTHED3}\\
      \text{MORSE2\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 \left[e^{p_1\left(1-r/p_2\right)}-2e^{\left(p_1/2\right)\left(1-r/p_2\right)}\right]
  \end{equation}
   3 parameters: $p_0/k_B$ in units of K, $p_1$ dimensionless (the exponent $p_1\left(1-r/p_2\right)$ is already dimensionless), and $p_2$ in units of \AA.

\item{$\begin{array}{l}\text{MORSE3}\\
      \text{MORSE3\_SMOOTHED3}\\
      \text{MORSE3\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 \left[\left(1-e^{\left(\frac{-\ln{2}}{2^{\nicefrac{1}{6}}-1}\right)\left(\frac{r}{p_2}-2^{\nicefrac{1}{6}}\right)}\right)^2-1\right]
  \end{equation}
   2 parameters: $p_0/k_B$ in units of K, and $p_2$ in units of \AA. This form of the Morse potential resembles the Lennard-Jones potential.

\item{$\begin{array}{l}\text{CFF\_9\_6}\\
      \text{CFF\_9\_6\_SMOOTHED3}\\
      \text{CFF\_9\_6\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=
      \frac{p_0}{r^{9}}-\frac{p_1}{r^6}
  \end{equation}
   2 parameters: $p_0/k_B$ in units of K\,\AA$^{9}$, and $p_1/k_B$ in units of K\,\AA$^6$.

\item{$\begin{array}{l}\text{CFF\_EPS\_SIGMA}\\
      \text{CFF\_EPS\_SIGMA\_SMOOTHED3}\\
      \text{CFF\_EPS\_SIGMA\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U_{ij}=
      p_0 \left[2\left(\frac{p_1}{r}\right)^{9}-3\left(\frac{p_1}{r}\right)^6\right]
  \end{equation}
  2 parameters: $p_0/k_B$ in units of K, and $p_1$ in \AA.

\item{$\begin{array}{l}\text{BUCKINGHAM}\\
      \text{BUCKINGHAM\_SMOOTHED3}\\
      \text{BUCKINGHAM\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
  U=
     p_0 e^{-p_1 r}-\frac{p_2}{r^{6}}
  \end{equation}
  3 parameters: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, and $p_2/k_B$ in K\,\AA$^6$.
  Warning: in the literature sometimes $\rho=\frac{1}{p_1}$ is given;
  $\rho$ is usually around 0.3-0.4 \AA, $p_1$ is usually around 2-4 \AA$^{-1}$.

\item{$\begin{array}{l}\text{BUCKINGHAM2}\\
      \text{BUCKINGHAM2\_SMOOTHED3}\\
      \text{BUCKINGHAM2\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
  U=\begin{cases}
     10^{10}  & r<p_3\\
     p_0 e^{-p_1 r}-\frac{p_2}{r^{6}} & \text{otherwise}
    \end{cases}
  \end{equation}
  4 parameters: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2/k_B$ in K\,\AA$^6$, and $p_3$ in \AA.
  Warning: in the literature sometimes $\rho=\frac{1}{p_1}$ is given;
  $\rho$ is usually around 0.3-0.4 \AA, $p_1$ is usually around 2-4 \AA$^{-1}$.

\item{$\begin{array}{l}\text{MM3\_VDW}\\
      \text{MM3\_VDW\_SMOOTHED3}\\
      \text{MM3\_VDW\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U_{ij}= \begin{cases}
       \sqrt{p_0^i p_0^j}\left[1.84\times10^5 e^{-\frac{12}{P}}-2.25\,P^6\right] & \text{if }P\leq 3.02\\
       \sqrt{p_0^i p_0^j}\, 192.27\, P^2  & \text{if }P>3.02
       \end{cases}
  \end{equation}
  with $P=\frac{p_1^i+p_1^j}{r_{ij}}$ and where $p_1^i$ and $p_1^j$ are the VDW radii of atoms $i$ and $j$,
  and $r_{ij}$ the separation distance in \AA\ between atoms $i$ and $j$. The two branches match at $P=3.02$;
  the quadratic branch replaces the exp-6 form at short range, where the latter turns over. \\
  2 arguments: $p_0$ in units of kcal/mol, $p_1$ in units of \AA.

\item{$\begin{array}{l}\text{MATSUOKA\_CLEMENTI\_YOSHIMINE}\\
      \text{MATSUOKA\_CLEMENTI\_YOSHIMINE\_SMOOTHED3}\\
      \text{MATSUOKA\_CLEMENTI\_YOSHIMINE\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0e^{-p_1 r_{ij}}+p_2e^{-p_3 r_{ij}}
  \end{equation}
   4 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2/k_B$ in units of K, and $p_3$ in units of \AA$^{-1}$.

\item{$\begin{array}{l}\text{GENERIC}\\
      \text{GENERIC\_SMOOTHED3}\\
      \text{GENERIC\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 e^{-p_1 r}-\frac{p_2}{r^4}-\frac{p_3}{r^6}-\frac{p_4}{r^8}-\frac{p_5}{r^{10}}
  \end{equation}
  6 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2/k_B$ in units of K\,\AA$^{4}$, $p_3/k_B$ in units of K\,\AA$^{6}$,
  $p_4/k_B$ in units of K\,\AA$^{8}$, and $p_5/k_B$ in units of K\,\AA$^{10}$.

\item{$\begin{array}{l}\text{PELLENQ\_NICHOLSON}\\
      \text{PELLENQ\_NICHOLSON\_SMOOTHED3}\\
      \text{PELLENQ\_NICHOLSON\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 e^{-p_1 r}-f_6 \frac{p_2}{r^6}-f_8\frac{p_3}{r^8}-f_{10}\frac{p_4}{r^{10}}
  \end{equation}
  with
  \begin{equation}
   f_{2n}=1-\sum_{k=0}^{2n}\frac{\left(p_1 r_{ij}\right)^k}{k!} e^{-p_1 r_{ij}}
  \end{equation}
  5 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2/k_B$ in units of K\,\AA$^{6}$,
  $p_3/k_B$ in units of K\,\AA$^{8}$, and $p_4/k_B$ in units of K\,\AA$^{10}$.

\item{$\begin{array}{l}\text{HYDRATED\_ION\_WATER}\\
      \text{HYDRATED\_ION\_WATER\_SMOOTHED3}\\
      \text{HYDRATED\_ION\_WATER\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U= p_0 e^{-p_1 r}-\frac{p_2}{r^4}-\frac{p_3}{r^6}-\frac{p_4}{r^{12}}
  \end{equation}
  5 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2/k_B$ in units of K\,\AA$^{4}$,
  $p_3/k_B$ in units of K\,\AA$^{6}$, and $p_4/k_B$ in units of K\,\AA$^{12}$.

\item{$\begin{array}{l}\text{MIE}\\
      \text{MIE\_SMOOTHED3}\\
      \text{MIE\_SMOOTHED5}\end{array}$}\\
The Mie potential \cite{Mie1903}
  \begin{equation}
    U=
      \left(\frac{p_0}{r^{p_1}}-\frac{p_2}{r^{p_3}}\right)
  \end{equation}
   4 arguments: $p_0/k_B$ in units of K\,\AA$^{p_1}$, $p_1$ dimensionless,
  $p_2/k_B$ in units of K\,\AA$^{p_3}$, and $p_3$ dimensionless.

\item{$\begin{array}{l}\text{BORN\_HUGGINS\_MEYER}\\
      \text{BORN\_HUGGINS\_MEYER\_SMOOTHED3}\\
      \text{BORN\_HUGGINS\_MEYER\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U_{ij}=p_0 e^{p_1\left(p_2-r_{ij}\right)}-\frac{p_3}{r_{ij}^6}-\frac{p_4}{r_{ij}^8}
  \end{equation}
  5 arguments: $p_0/k_B$ in units of K, $p_1$ in units of \AA$^{-1}$, $p_2$ in units of \AA, $p_3/k_B$ in units of K\,\AA$^{6}$, and
  $p_4/k_B$ in units of K\,\AA$^{8}$.

\item{$\begin{array}{l}\text{HYDROGEN}\\
      \text{HYDROGEN\_SMOOTHED3}\\
      \text{HYDROGEN\_SMOOTHED5}\end{array}$}\\
  \begin{equation}
    U=
      \frac{p_0}{r^{12}}-\frac{p_1}{r^{10}}
  \end{equation}
   2 arguments: $p_0/k_B$ in units of K\,\AA$^{12}$, and $p_1/k_B$ in units of K\,\AA$^{10}$.

\end{itemize}

\subsection{Tail corrections}

\subsubsection*{energy}

\begin{equation}
 U^{\text{Tail}}=\frac{2 \pi}{V}\sum_a \sum_b N_a N_b \left[\int_{r_c}^\infty r^2 U\left(r\right)\, dr\right]
\end{equation}

\begin{tabular}{|l|l|}
\hline
potential & $\int_{r_c}^\infty r^2 U\left(r\right)\, dr$\\
\hline\hline
  LENNARD\_JONES &
      $\frac{4}{3}\,p_0\,p_1^3 \left[\frac{1}{3}\left(\frac{p_1}{r_c}\right)^{9}-\left(\frac{p_1}{r_c}\right)^3\right]$\\
  LENNARD\_JONES\_SHIFTED\_FORCE & -\\
\hline
\end{tabular}
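The tabulated Lennard-Jones entry can be verified directly (an added derivation):
\begin{equation}
 \int_{r_c}^\infty r^2\, 4p_0\left[\left(\frac{p_1}{r}\right)^{12}-\left(\frac{p_1}{r}\right)^6\right] dr
 =4p_0\left[\frac{p_1^{12}}{9\,r_c^{9}}-\frac{p_1^{6}}{3\,r_c^{3}}\right]
 =\frac{4}{3}\,p_0\,p_1^3 \left[\frac{1}{3}\left(\frac{p_1}{r_c}\right)^{9}-\left(\frac{p_1}{r_c}\right)^3\right]
\end{equation}
which is negative for the usual case $r_c>p_1$, so the tail correction lowers the energy.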
\subsubsection*{pressure}
\begin{align}
 P^{\text{Tail}}=&-\sum_a \sum_b \frac{2\pi}{3V} N_a N_b \left[\int_{r_c}^\infty r^2\, r\frac{\partial U\left(r\right)}{\partial r}\, dr\right]\\
    =&\sum_a \sum_b \frac{2\pi}{3V} r_c^3 N_a N_b U\left(r_c\right)+U^{\text{Tail}}
\end{align}
\subsubsection*{chemical potential}
\begin{equation}
 \beta \mu^{\text{Tail}}=2 U^{\text{Tail}}
\end{equation}

\subsection{Electrostatics}

\subsubsection{Charge-charge interaction}
\begin{itemize}
\item{Ewald}\\
The potential energy for a system of charges in a periodic system can be written as
\begin{equation}
 U=U^{\text{real}}+U^{\text{rec}}
\end{equation}
where
\begin{equation}
 \begin{split}
 U^{\text{real}}
  &=\sum_{i<j} q_i q_j \frac{\text{erfc}\left(\alpha r_{ij}\right)}{r_{ij}}\\
 U^{\text{rec}}
  &=\frac{2\pi}{V}\sum_{\mathbf{k}\not=0}\frac{1}{k^2} e^{-\frac{k^2}{4\alpha^2}}
  \left(\left|\sum_{i=1}^N q_i\cos\left(\mathbf{k}\cdot\mathbf{r}_i\right)\right|^2+
   \left|\sum_{i=1}^N q_i\sin\left(\mathbf{k}\cdot\mathbf{r}_i\right)\right|^2\right)
 -\sum_i\frac{\alpha}{\sqrt{\pi}}q_i^2
 \end{split}
\end{equation}
where $q_i$ and $q_j$ are the charges of particles $i$ and $j$, respectively, $\mathbf{r}_i$ the position of atom $i$, $V$ the volume of the cell,
$\alpha$ a damping factor, $\mathbf{k}$ a reciprocal-lattice vector with $k=\left|\mathbf{k}\right|$, and `erfc' the complementary error function. The expression gives the \emph{exact} solution for charges
in a periodic system up to arbitrary precision. One part is computed in `real' space, and the long-range part is more conveniently computed in Fourier space.

\item{CoulombTruncated}
\begin{equation}
 U=\begin{cases}
    \sum_{i<j} \frac{1}{4\pi\epsilon}\frac{q_i q_j}{r_{ij}}& \text{if }r_{ij}\leq r_c\\
    0 & \text{otherwise}
   \end{cases}
\end{equation}

\item{CoulombShifted}
\begin{equation}
 U=\begin{cases}
    \sum_{i<j} \frac{q_i q_j}{4\pi \epsilon }\left(\frac{1}{r_{ij}}-\frac{1}{r_c}\right)& \text{if }r_{ij}\leq r_c\\
    0 & \text{otherwise}
   \end{cases}
\end{equation}

\item{CoulombSmoothed}
\item{Wolf}

\end{itemize}

\subsubsection{Charge-dipole interaction}

\begin{itemize}
\item{Ewald}\\
\item{CoulombTruncated}
\begin{equation}
 U=\begin{cases}
    \sum_{i,j} \frac{1}{4\pi\epsilon} \frac{-q_i}{r^3_{ij}}
      \left({\boldsymbol \mu}_j\cdot\mathbf{r}_{ij}\right)& \text{if }r_{ij}\leq r_c\\
    0 & \text{otherwise}
   \end{cases}
\end{equation}
\end{itemize}


\subsubsection{Dipole-dipole interaction}

\begin{itemize}
\item{Ewald}\\
\item{CoulombTruncated}
\begin{equation}
 U=\begin{cases}
    \sum_{i,j} \frac{1}{4\pi\epsilon}\frac{1}{r^3_{ij}}
    \left[{\boldsymbol \mu}_i\cdot {\boldsymbol \mu}_j-3\frac{\left({\boldsymbol\mu}_i\cdot\mathbf{r}_{ij}\right)
    \left(\mathbf{r}_{ij}\cdot{\boldsymbol \mu}_j\right)}{r^2_{ij}}\right]& \text{if }r_{ij}\leq r_c\\
    0 & \text{otherwise}
   \end{cases}
\end{equation}
\end{itemize}


\section{Bonded potentials cross terms}

\subsection{Bond-bond potential}

\begin{itemize}
  \item{CFF\_BOND\_BOND\_CROSS,CVFF\_BOND\_BOND\_CROSS}\\
  \begin{equation}
  U=p_0\left(r-p_1\right)\left(r'-p_2\right)
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K/\AA$^2$, $p_1$ and $p_2$ in \AA.
\end{itemize}

\subsection{Bond-bend potential}

\begin{itemize}
  \item{CFF\_BOND\_BEND\_CROSS,CVFF\_BOND\_BEND\_CROSS}\\
  \begin{equation}
  U=\left(\theta-p_0\right)\left[p_1\left(r-p_2\right)+p_3\left(r'-p_4\right)\right]
  \end{equation}
  5 arguments: $p_0$ in degrees, $p_1/k_B$ in units of K/\AA/rad, $p_2$ in \AA, $p_3/k_B$ in units of K/\AA/rad, $p_4$ in \AA.

  \item{MM3\_BOND\_BEND\_CROSS}
  \begin{equation}
  U=p_0\left[\left(r-p_1\right)+\left(r'-p_2\right)\right]\left(\theta-p_3\right)
  \end{equation}
  4 arguments: $p_0$ in mdyne/rad, $p_1$ and $p_2$ in \AA, and $p_3$ in degrees.

  \item{TRUNCATED\_HARMONIC}
  \begin{equation}
  U=\frac{1}{2}p_0\left(\theta-p_1\right)^2 e^{-\frac{r_{ij}^8+r_{ik}^8}{p_2^8}}
  \end{equation}
  3 arguments: $p_0/k_B$ in K/rad$^2$, $p_1$ in degrees, and $p_2$ in units of \AA.

  \item{SCREENED\_HARMONIC}
  \begin{equation}
  U=\frac{1}{2}p_0\left(\theta-p_1\right)^2 e^{-\left(\frac{r_{ij}}{p_2}+\frac{r_{ik}}{p_3}\right)}
  \end{equation}
  4 arguments: $p_0/k_B$ in K/rad$^2$, $p_1$ in degrees, $p_2$ and $p_3$ in units of \AA.

  \item{SCREENED\_VESSAL}
  \begin{equation}
  U=\frac{p_0}{8\left(\theta_{ijk}-\pi\right)^2}
    \left[\left(p_1-\pi\right)^2-\left(\theta_{ijk}-\pi\right)^2\right]^2
    e^{-\left(\frac{r_{ij}}{p_2}+\frac{r_{ik}}{p_3}\right)}
  \end{equation}
  4 arguments: $p_0/k_B$ in K/rad$^2$, $p_1$ in degrees, $p_2$ and $p_3$ in units of \AA.

  \item{TRUNCATED\_VESSAL}
  \begin{equation}
  U=p_0\left[
    \theta_{ijk}^{p_2}\left(\theta_{ijk}-p_1\right)^2
     \left(\theta_{ijk}+p_1-2\pi\right)^2-\frac{p_2}{2}\pi^{p_2-1}
      \left(\theta_{ijk}-p_1\right)^2\left(\pi-p_1\right)^3
    \right]
    e^{-\frac{r_{ij}^8+r_{ik}^8}{p_3^8}}
  \end{equation}
  4 arguments: $p_0/k_B$ in K/rad$^{4+p_2}$, $p_1$ in degrees, $p_2$ dimensionless, and $p_3$ in \AA.
  The truncating exponential multiplies the whole angular term.

\end{itemize}

\subsection{Bend-bend potential}

\begin{itemize}
  \item{CFF\_BEND\_BEND\_CROSS,CVFF\_BEND\_BEND\_CROSS}\\
  \begin{equation}
  U=p_0\left(\theta-p_1\right)\left(\theta'-p_2\right)
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ and $p_2$ in units of degrees.

 \item{MM3\_BEND\_BEND\_CROSS}
  \begin{equation}
  U=-p_0\left(\theta-p_1\right)\left(\theta'-p_2\right)
  \end{equation}
  3 arguments: $p_0$ in units of mdyne/rad$^2$, $p_1$ and $p_2$ in units of degrees.
\end{itemize}

\subsection{Bond-torsion potential}

The bond-torsion potential correlates the torsion $i-j-k-l$ with the central bond $j-k$,
or with the two terminating bonds.

\begin{itemize}
 \item{MM3\_BOND\_TORSION\_CROSS}\\
The MM3 bond-torsion potential correlates the torsion $i-j-k-l$ with the central bond $j-k$
  \begin{equation}
  U=\frac{1}{2}p_0\left(r-p_3\right)\left(1+\cos\phi\right)+
    \frac{1}{2}p_1\left(r-p_3\right)\left(1+\cos2\phi\right)+
    \frac{1}{2}p_2\left(r-p_3\right)\left(1+\cos3\phi\right)
  \end{equation}
  4 arguments: $p_0,p_1,p_2$ in units of kcal/mol, $p_3$ the reference length of the central bond in \AA.
\end{itemize}


\subsection{Bend-torsion potential}

\begin{itemize}
  \item{CFF\_BEND\_TORSION\_CROSS,CVFF\_BEND\_TORSION\_CROSS}\\
  \begin{equation}
  U=p_0\left(\theta-p_1\right)\left(\theta'-p_2\right)\cos\phi
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ and $p_2$ in units of degrees.

\item{SMOOTHED\_DIHEDRAL}
  \begin{equation}
  U=p_0\left[1+\cos\left(p_1\phi_{ijkl}-p_2\right)\right]S\left(\theta_{ijk}\right)S\left(\theta_{jkl}\right)
  \end{equation}
  3 arguments: $p_0/k_B$ in units of K, $p_1$ dimensionless, and $p_2$ in degrees.

\item{SMOOTHED\_THREE\_COSINE\_DIHEDRAL}
  \begin{equation}
  U=\left\{\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]\right\}
    S\left(\theta_{ijk}\right)S\left(\theta_{jkl}\right)
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

\item{SMOOTHED\_CFF\_DIHEDRAL}
  \begin{equation}
  U=\left\{p_0\left[1-\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1-\cos\left(3\phi_{ijkl}\right)\right]\right\}
    S\left(\theta_{ijk}\right)S\left(\theta_{jkl}\right)
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

\item{SMOOTHED\_CFF\_DIHEDRAL2}
  \begin{equation}
  U=\left\{p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    p_1\left[1+\cos\left(2\phi_{ijkl}\right)\right]+
    p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]\right\}
    S\left(\theta_{ijk}\right)S\left(\theta_{jkl}\right)
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

\item{NICHOLAS\_DIHEDRAL}
 \begin{equation}
  U=\left\{\frac{1}{2}p_0\left[1+\cos\left(\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_1\left[1-\cos\left(2\phi_{ijkl}\right)\right]+
    \frac{1}{2}p_2\left[1+\cos\left(3\phi_{ijkl}\right)\right]\right\}
    S\left(\theta_{ijk}\right)
  \end{equation}
  3 arguments: $p_0/k_B,p_1/k_B,p_2/k_B$ in units of K.

\item{SMOOTHED\_CFF\_BEND\_TORSION\_CROSS}

\begin{equation}
 U=S\left(\theta_1\right)
 \left[
 p_0\left(\theta_1-p_1\right)\left(\theta_2-p_2\right)\cos\phi
\right]
S\left(\theta_2\right)
\end{equation}
  3 arguments: $p_0/k_B$ in units of K/rad$^2$, $p_1$ and $p_2$ in units of degrees.
\end{itemize}


\noindent The smoothing function $S\left(\theta\right)$ is defined as
\begin{equation}
 S\left(\theta\right)=\begin{cases}
  1 & \theta<\theta_{\text{on}}\\
  \left(\theta_{\text{off}}-\theta\right)^2
       \frac{\theta_{\text{off}}+2\theta-3\theta_{\text{on}}}
       {\left(\theta_{\text{off}}-\theta_{\text{on}}\right)^3}& \theta\geq\theta_{\text{on}}
  \end{cases}
\end{equation}
with $\theta_{\text{on}}=170^\circ$ and $\theta_{\text{off}}=180^\circ$.
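For orientation (an added numerical check): with these values the switch decays smoothly
from $1$ to $0$ over the last ten degrees; for example, halfway,
\begin{equation}
 S\left(175^\circ\right)=\left(180-175\right)^2\,
   \frac{180+2\cdot175-3\cdot170}{\left(180-170\right)^3}
  =\frac{25\cdot 20}{1000}=\frac{1}{2},
\end{equation}
and $S\left(\theta_{\text{on}}\right)=1$ and $S\left(\theta_{\text{off}}\right)=0$, with zero
slope at both endpoints.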
{"text": "\\documentclass{beamer}\n\n\\input{../../shared_slides.tex}\n\\DeclareMathOperator*{\\perm}{perm}\n\n\\title{Matrix scaling}\n\n\\begin{document}\n\\maketitle\n\\frame{\\tableofcontents}\n\n\\section{Introduction}%\n\n\\begin{frame}\n  \\frametitle{Introduction}\n  \\textbf{given:} a matrix $A \\in \\R^{m\\times n}_{\\ge0}$, vectors $r \\in \\R_{>0}^m$ and $c \\in \\R^n_{>0}$\\\\\n  \\textbf{find:} nonneg.\\ diagonal matrices $X$ and $Y$ such that for\n  \\begin{equation}\n    B = XAY\n  \\end{equation}\n  it holds that:\n  \\begin{equation}\n    B \\mathbbm{1}_n = r \\quad \\text{and} \\quad B^T \\mathbbm{1}_m = c\n  \\end{equation}\n  where $\\mathbbm{1}_n = (1, \\dots, 1)$ exactly $n$-times.\n  Equivalently\n  \\begin{equation}\n    \\Vert B_{i,:} \\Vert_1 = r_i \\quad \\text{and} \\Vert B_{:, j} \\Vert = c_j.\n  \\end{equation}\n\n  \\begin{block}{}\n    \\centering\n    In this case $A$ is called $(r,c)$-scalable.\n  \\end{block}\n  \\onslide<2->{%\n    If $\\Vert r \\Vert_1 \\neq \\Vert c \\Vert_1$ this is not possible.\n  }\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Visualization of diagonal scaling}\n\n  \\begin{equation}\n    \\begin{aligned}\n    B = \\begin{bmatrix}\n      x_1 & & & \\\\\n      & x_2 & & \\\\\n      & & \\ddots & \\\\\n      & & & x_m\n    \\end{bmatrix}\n    A\n    \\begin{bmatrix}\n      y_1 & & & \\\\\n      & y_2 & & \\\\\n      & & \\ddots & \\\\\n      & & & y_n\n    \\end{bmatrix}\n    \\\\\n    =\n    \\begin{bmatrix}\n      a_{1,1}x_1y_1 & a_{1,2}x_1y_2 & \\cdots & a_{1,n} x_1y_m \\\\\n      \\vdots   & \\ddots & & \\\\\n      a_{m,1}x_m y_1 & & \\cdots & a_{m,n}x_m y_m\n    \\end{bmatrix}\n    \\end{aligned}\n  \\end{equation}\n  \\begin{block}{\\textbf{Application:} Ill conditioned linear system $Az = b$.}\n    Can multiply both sides by $X$  and substitute $z= Yv$ to get instead\n    \\begin{equation}\n      XAY v = X b\n    \\end{equation}\n  \\end{block}\n\\end{frame}\n\n\\section{Matchings}%\n\\label{sec:}\n\n\\begin{frame}\n  \\frametitle{$(0-1)$ matrices | bipartite graphs}\n  \\begin{minipage}{0.5\\textwidth}\n    \\begin{equation}\n      \\begin{bmatrix}\n        0 & 1 & 1 \\\\\n        1 & 0 & 0 \\\\\n        0 & 1 & 1\n      \\end{bmatrix}\n    \\end{equation}\n  \\end{minipage}\n  \\begin{minipage}{0.35\\textwidth}\n    \\begin{figure}[ht]\n      \\centering\n      \\includegraphics[width=\\textwidth]{bipartite-graph.png}\n      \\caption{bipartite graph}\n    \\end{figure}\n  \\end{minipage}\n\n    \\begin{definition}\n      \\begin{itemize}\n        \\item A \\textbf{matching} is a set of edges without common vertices.\n        \\item A \\textbf{perfect matching} is a matching which covers all vertices.\n      \\end{itemize}\n    \\end{definition}\n    Applications:\n    \\begin{itemize}\n      \\item marriage problem\n      \\item Hitchcock transport problem\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Finding the number of perfect matchings}\n  Finding one is easy (polynomial time).\n  Finding all is in \\# P (i.e.\\ hard!).\n  \\begin{block}{Consider $m=n$, $A\\in \\R^{n \\times n}$}\n    Recall:\n    \\begin{equation}\n      \\begin{aligned}\n        &\\text{(determinant)} \\quad \\det A = \\sum_{\\sigma} \\sign(\\sigma) \\prod_{i=1}^n a_{i,\\sigma(i)}  \\\\\n        &\\text{(permanent)} \\quad \\perm A = \\sum_{\\sigma} \\prod_{i=1}^n a_{i,\\sigma(i)}\n      \\end{aligned}\n    \\end{equation}\n  \\end{block}\n  \\begin{block}{Observation}\n    For a $(0,1)-$matrix $A$, $\\perm A$ is the number 
\section{Permanent}%
\label{sec:}

\begin{frame}
  \frametitle{Lower bounding the permanent}
  % This notion also appears in optimal transport
  \begin{definition}
    A matrix $A \in \R^{m\times n}_{\ge0}$ is called \textbf{doubly stochastic} if the sum of every row and every column is $1$.
  \end{definition}

  \begin{block}{van der Waerden (1926) conjectured}
    For doubly stochastic matrices the following \emph{lower bound} holds
    \begin{equation}
      \perm A \ge \frac{n!}{n^n}.
    \end{equation}
  \end{block}
  \begin{equation}
   \text{Is tight for } A = \begin{bmatrix}
      1/n & \cdots & 1/n \\
      \vdots & \ddots & \vdots \\
      1/n & \cdots & 1/n
    \end{bmatrix}
  \end{equation}
  Proved in 1980.
\end{frame}


\begin{frame}
  \frametitle{Matrix scaling to approximate the permanent}
  \begin{block}{}
    If a $(0, 1)$-matrix $A$ can be scaled to be doubly stochastic, i.e.\ it is $\left(\mathbbm{1}, \mathbbm{1}\right)$-scalable, then
    we can apply the lower bound: since every permutation product picks up each $x_i$ and each $y_j$ exactly once,
    \begin{equation}
      \perm B = \perm (XAY) = \left(\prod_i x_i\right) \left(\prod_j y_j\right) \perm A
    \end{equation}
  \end{block}
\end{frame}

\begin{frame}
  \frametitle{Matrix scaling as an optimization problem}
  \begin{itemize}
    \item \textbf{given:} $A, r, c$
    \item \textbf{find:} $X, Y$ such that $B=XAY$ fulfills $B\mathbbm{1}_n=r$ and $B^T \mathbbm{1}_m=c$.
    \item $m+n$ unknowns
    \item $m+n$ constraints
  \end{itemize}
  Consider the (\emph{nonconvex}) function
  \begin{equation}
    g(x,y) = \langle x, A y \rangle - \langle r, \log x \rangle - \langle c, \log y \rangle
  \end{equation}
  with derivative (coordinatewise)
  \begin{equation}
    \label{eq:grad-original-formulation}
    \begin{aligned}
      \nabla_x g(x,y) = Ay - \frac{r}{x} \\
      \nabla_y g(x,y) = A^T x - \frac{c}{y}.
    \end{aligned}
  \end{equation}
\end{frame}


\begin{frame}
  \frametitle{Reparametrizing this system}
  Via the reparametrization $x= e^\xi$ and $y=e^{\eta}$ we get
  \begin{equation}
    f(\xi, \eta) = \sum_{i,j} a_{i,j} e^{\xi_i + \eta_j} - \langle r, \xi \rangle - \langle c, \eta \rangle
  \end{equation}
  which is \textbf{\emph{convex}} (each term $a_{i,j} e^{\xi_i + \eta_j}$ is convex and the remaining terms are linear).
  Its gradient is given by
  \begin{equation}
    \label{eq:grad-convex-reformulation}
    \frac{\partial f}{\partial \xi_i} = \sum_{j=1}^{n} a_{i,j} e^{\xi_i + \eta_j} - r_i
  \end{equation}
  (and analogously $\frac{\partial f}{\partial \eta_j} = \sum_{i=1}^{m} a_{i,j} e^{\xi_i + \eta_j} - c_j$).
  It is easy to see that the optimality conditions of~\eqref{eq:grad-convex-reformulation} and~\eqref{eq:grad-original-formulation} agree.\\
  $\Rightarrow$ the nonconvex function only has \emph{global} minimizers.
\end{frame}

\begin{frame}
  \frametitle{Matrix scaling as an optimization problem [contd]}
  It is easy to see that a solution $(x,y)$ of
  \begin{equation}
    \begin{aligned}
      Ay - \frac{r}{x} = 0 \\
      A^T x - \frac{c}{y} = 0
    \end{aligned}
  \end{equation}
  defines a solution to the \emph{matrix scaling} problem via $X=\diag x$ and $Y=\diag y$
  \begin{equation}
    \left(\begin{array}{c}
            a_{1 1} y_1 + a_{1 2} y_2 + \cdots \\
            a_{2 1} y_1 + a_{2 2} y_2 + \cdots \\
            \vdots \\
            a_{m 1} y_1 + a_{m 2} y_2 + \cdots \\
          \end{array}\right)
        \begin{array}{c}
          \cdot x_1 = r_1\\
          \cdot x_2 = r_2\\
          \vdots \\
          \cdot x_m = r_m
        \end{array}
      \end{equation}
\end{frame}

\begin{frame}
  \frametitle{}
  The question remains: how to minimize
  \begin{equation}
    g(x,y) = \langle x, A y \rangle - \langle r, \log x \rangle - \langle c, \log y \rangle
  \end{equation}
  \vspace{-0.5cm}
  \begin{block}{alternating minimization}
    Given a problem
    \begin{equation}
      \min_{x,y} \phi(x,y)
    \end{equation}
    repeat:
    \begin{align}
      x_{k+1} &= \argmin_x \phi(x,y_k) \\
      y_{k+1} &= \argmin_y \phi(x_{k+1}, y).
    \end{align}
  \end{block}
    Makes sense as long as the \textcolor{blue}{subproblems are easy} (e.g.\ convex).
  \begin{align}
    &\text{opt.\ cond.\ for $x$:} \quad Ay - \frac{r}{x} = 0 \\
    &\text{opt.\ cond.\ for $y$:} \quad A^T x - \frac{c}{y} = 0
  \end{align}
\end{frame}


% change this to algorithmx package
\begin{frame}
  \frametitle{Sinkhorn's Algorithm}

  \begin{block}{Sinkhorn '60}
    Given $(x_0, y_0)$, for $k=1,\dots$
    \begin{equation}
      \begin{aligned}
        x_{k+1}&= \frac{r}{A y_k} \\
        y_{k+1}&= \frac{c}{A^T x_{k+1}} \\
      \end{aligned}
    \end{equation}
  \end{block}
  Linear convergence if $a_{i,j} > 0$.
  \textbf{Q:} What if $A$ is not $(r,c)$-scalable?
  % Then the optimization problem has no solution as the first-order conditions are never fulfilled
\end{frame}
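% Added worked example (not in the original deck): one Sinkhorn sweep on the
% all-ones 2x2 matrix with target marginals r = c = (1,1).
\begin{frame}
  \frametitle{Sinkhorn on a toy example}
  Take $A=\begin{bmatrix}1 & 1\\ 1 & 1\end{bmatrix}$, $r=c=(1,1)$, and $y_0=(1,1)$:
  \begin{equation}
    x_{1}= \frac{r}{A y_0} = \left(\tfrac{1}{2},\tfrac{1}{2}\right), \qquad
    y_{1}= \frac{c}{A^T x_1} = (1,1),
  \end{equation}
  so after a single sweep $B=X_1AY_1=\frac{1}{2}A$, which is doubly stochastic.
\end{frame}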
{"text": "\n\\chapter{Signed Distance Field Model Development and Data}\\label{ch:appendix-a}\n\n\\section{Detailed Signed Distance Field Model Formulation}\n\n\\subsection{Fixed Distance Model Development}\n\nThe utilization of the signed distance field as a preconditioner for ray tracing\noperations can be modeled as an evaluation of the combined probability space for\nparticles with a current position, $\\vec{p}$, and a next physics event location,\n$\\vec{n}$, after traveling a distance, $d$. The fraction of this probability\nspace in which signed distance values can be used to rule out surface crossings\nfor next surface intersections is then considered to be the theoretical\nutilization of the signed distance field. An initial form for this probability\nspace can found in Equation \\ref{appeq:util_model}.\n\n\\begin{equation}\n  \\label{appeq:util_model}\n\\int_{V_{sphere}}\\int_{V_{track}} p_p(r) p_n(d) \\, \\mathrm{d}V_{sphere}\\mathrm{d}V_{track}\n\\end{equation}\n\nIn this model, the starting location of particles, $\\vec{p}_{p}(r,\\phi,\\theta)$, is\nuniformly distributed, $p_p(r)=1$, throughout a sphere of radius, $R$, centered\nat the origin.  The location of the next event, $\\vec{p}_{n}(d,\\alpha,\\beta)$, where $d$\nis the distance traveled by the particle, $\\alpha$ is the interior angle between\nthe particle's \\textit{position} vector and the particle's sampled direction\nvector, and $\\beta$ represents an azimuthal angle for directions traveled with\nangle of departure, $\\alpha$.  Figure \\ref{fig:model} depicts these variables, $r$,\n$d$, and $\\alpha$ more clearly.\n\nThe outer integral in Equation \\ref{appeq:util_model} represents all possible particle\npositions within the geometric sphere and expands to\n\n\\begin{equation}\n\\int_{0}^{R}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\\int_{V_{track}} r^2\\sin{\\phi} \\, \\mathrm{d}\\phi\n\\mathrm{d}\\theta \\mathrm{d}r \\,  p_n(d) \\mathrm{d}V_{track}\n\\end{equation}\n\nThe inner integral over $V_{track}$ then expands to\n\n\\begin{equation}\n\\small \\int_{0}^{R}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\\int_{0}^{\\infty}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\nr^2\\sin{\\phi} \\, p_n(d) d^2 \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}\\beta \\mathrm{d}d \\, \\mathrm{d}\\phi\n\\mathrm{d}\\theta \\mathrm{d}r\n\\end{equation}\n\nIntegration of $\\phi$, $\\theta$, and $\\beta$ can now be performed with\nthe knowledge that they are symmetric with respect to the problem and\nintegration of $p_n(d)$ does not rely on them.\n\n\\begin{equation}\n\\small 8\\pi^2  \\int_{0}^{R}\\int_{0}^{\\infty}\\int_{0}^{\\pi} p_n(d) \\,\nr^2 \\, d^2 \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}d \\, \\mathrm{d}r\n\\end{equation}\n\nIn order to represent particles traveling a fixed distance, the relationship in Equation \\ref{appeq:pn_fixed}\nis applied.\n\n\\begin{equation}\n  \\label{appeq:pn_fixed}\n  p_n(d) = \\frac{\\delta(d-\\lambda)}{d^{2}}\n\\end{equation}\n\nThe evaluation of this integral then gives a representation of all the query\nspace available to the problem\n\n\\begin{equation}\n  \\label{appeq:A_fixed}\n\\small A = 8\\pi^2  \\int_{0}^{R}\\int_{0}^{\\infty}\\int_{0}^{\\pi} \\delta(d-\\lambda) \\,\nr^2 \\, \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}d \\, \\mathrm{d}r\n\\end{equation}\n\nand represents all geometric query space, labeled $A$, for a sphere of radius,\n$R$ and a fixed distance traveled, $\\lambda$. 
The condition for preconditioner\nutilization without error consideration is as follows\n\n\\begin{equation}\n  SDV(\\vec{p}) + SDV(\\vec{n}) > |\\vec{p}-\\vec{n}|\n  \\label{appeq:condition_no_error}\n\\end{equation}\n\\begin{align*}\n &SDV - \\, signed \\, distance \\, value \\, function \\\\\n &\\vec{p} - \\, particle's \\, current \\, position \\\\\n &\\vec{n} - \\, particle's \\, next \\, event \\, location \\\\\n\\end{align*}\n\nTo apply this within the spherical geometry, the signed distance function of a\nsphere with radius, $R$, from Equation \\eqref{eq:sdf_sphere} is applied\n\n\\begin{equation}\nR-|\\vec{p}| + R - |\\vec{n}| >   |\\vec{p}-\\vec{n}|\n\\end{equation}\n\nThe right hand side of this inequality can be described as the distance\ntraveled, $d$, and the magnitude of $\\vec{p}$ can be represented\nby the variable $r$.\n\n\\begin{equation}\n R-r + R - |\\vec{n}(d,\\alpha,\\beta)| > d\n\\end{equation}\n\nReducing the next event location, $\\vec{n}(d,\\alpha,\\beta)$, into an expression\nin terms of $r$, $d$, and $\\alpha$ requires further examination of the\nproblem. Because the coordinates of $\\vec{n}$ depend on the current particle position,\nthe magnitude of $\\vec{n}$ with respect to the geometry origin must be obtained to get\na correct form for the signed distance value. Again, Figure \\ref{fig:model} depicts the\nvalue of $\\vec{n}$ graphically for reference. The magnitude of $\\vec{n}$ can then be described\nusing the law of cosines as\n\n\\begin{equation}\n|\\vec{n}(d,\\alpha,\\beta)| = \\sqrt{r^2 + d^2 - 2rd \\cos(\\pi-\\alpha)}\n\\end{equation}\nInserting this into the inequality gives\n\n\\begin{equation}\nR-r + R - \\sqrt{r^2 + d^2 + 2rd \\cos{\\alpha}} > d\n\\end{equation}\n\nThe inequality has now been reduced to the three variables seen in Equation\n\\ref{appeq:A_fixed}: $r$, $d$, and $\\alpha$. This can be applied to construct\nlimits of integration representing boundaries of space in which the SDF can be\nutilized. As described in Chapter \\ref{ch:preconditioning}, $\\alpha_{min}$ can\nbe used as a limit on the integral over $\\textrm{d}\\alpha$. It is also mentioned\nthat $\\alpha_{min}$ is undefined until $d > R-r$ as shown in Equations\n\\ref{appeq:alpha_min_below} and \\ref{appeq:alpha_min}.\n\n\\begin{equation}\n  d < R-r : \\alpha_{min} = 0\n  \\label{appeq:alpha_min_below}\n\\end{equation}\n\n\\begin{equation}\n  d > R-r : \\alpha_{min} = \\arccos\\Bigg ( \\frac{(2R-r-d)^2-d^2-r^2}{2 d r} \\Bigg )\n  \\label{appeq:alpha_min}\n\\end{equation}\n\n\\begin{figure}[ht]\n  \\centering\n  \\includesvg{../images/model_cases_fixed_distance}{width=\\textwidth}\n  \\caption[Depiction of utilization model scenarios.]{Depiction of modeling\n    cases. Left: an example of a track for which $d < R - r$. Middle: an example\n    of a track for which $R-r < d < R$ and can be preconditioned.  Right: an\n    example of a track for which $R-r < d < R$ and cannot be preconditioned.}\n  \\label{appfig:modeling_cases}\n\\end{figure}\n\nIn order to account for the fact that the form of $\\alpha_{min}$ is undefined\nuntil $d = R-r$, a Heaviside function is applied before $\\alpha_{min}$ is used as a limit\non the particle's angle of departure from the position vector. 
Similarly,\nbecause the $\\alpha_{min}$ condition is undefined after $d=R$, a Heaviside\nfunction is used to limit the condition to $\\pi$ for any distances traveled\nlarger than $R$.\n\n\\begin{equation}\n  \\small\n  \\begin{split}\n  \\alpha_{min} =& (H(d-(R-r))-H(d-R)) \\arccos\\Bigg ( \\frac{(2R-r-d)^2-d^2-r^2}{2 d r} \\Bigg ) \\\\\n  &+ \\pi \\, H(d-R)\n  \\end{split}\n  \\label{appeq:alpha_min_heaviside}\n\\end{equation}\n\nBy inserting this condition as a lower limit of the $\\mathrm{d}\\alpha$ integration,\nEquation \\ref{appeq:subs_a_cond} will give all utilized space, $US$, in the query space\nof the simulation.\n\n\\begin{equation}\n\\small US = 8\\pi^2  \\int_{0}^{R}\\int_{0}^{\\infty}\\int_{\\boldsymbol{\\alpha_{min}}}^{\\pi} \\delta(d-\\lambda) \\,\nr^2 \\, \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}d \\, \\mathrm{d}r\n\\label{appeq:subs_a_cond}\n\\end{equation}\n\nEvaluating and simplifying this fully formed integral gives us the model\npresented in Chapter \\ref{ch:preconditioning}.\n\n\\begin{equation}\nU_{theoretical} = \\frac{US}{A} =  \\frac{(1-H(\\lambda-R))(2R-\\lambda)(R-\\lambda)}{2R^2}\n\\end{equation}\n\n\\subsection{Sampled Distance Model Development}\n\n%% After the agreement of the simulation results and analytic model for signed\n%% distance field utilization for the fixed distance traveled case, the simulation\n%% was used to produce a similar set of results in which the distance is\n%% sampled based on the standard probability for distance to interaction in a\n%% medium with a cross section, $\\Sigma$, or mean free path $\\lambda\n%% =1/\\Sigma$. This results in the probability distribution function shown in\n%% Equation \\ref{appeq:pn_sampled} for the particle distance traveled in this scenario.\n\n%% \\begin{equation}\n%%   \\label{appeq:pn_sampled}\n%% p_n(d) \\propto \\frac{e^{-\\Sigma d}}{d^{2}} = \\frac{e^{-\\frac{d}{\\lambda}}}{d^{2}}\n%% \\end{equation}\n\n%% \\begin{figure}[ht]\n%% \\centering\n%% \\includesvg{../images/sdf_sampled_dist_results}{width=0.5\\textwidth}\n%% \\caption{Results of the model for the theoretical utilization limit with the\n%% results of the simulation for a sampled distance traveled case.}\n%% \\label{fig:sdf_sampled_dist}\n%% \\end{figure}\n\n%% %has to be in this section for latex reasons. grumble grumble...\n%% \\begin{table*}[!h]\n%%   \\centering\n%%   \\begin{tabular}{lcccc}\n%%           \\multicolumn{5}{l}{\\textbf{\\textit{Source Location:}} <0,0,-1>} \\\\\n%%           \\textbf{Implementation} & \\textbf{ctme (min)} & \\textbf{wall time\n%%             (min)} & \\textbf{time ratio} & \\textbf{precond. utilization}\\\\\n%%           \\hline\n%%           MCNP6 & 0.17 & 0.14 & 1 & N/A \\\\\n%%           DAG-MCNP6 & 1841.33 & 1841.33 & ~11,000 & N/A \\\\\n%%           DAG-MCNP6 w/ SDF & 0.48 & 0.46 & 2.82 & 0.94\\\\\n%%           \\multicolumn{5}{l}{} \\\\\n%%           \\multicolumn{5}{l}{\\textbf{\\textit{Source Location:}} <0,0,10>} \\\\\n%%           \\textbf{Implementation} & \\textbf{ctme (min)} & \\textbf{wall time\n%%             (min)} & \\textbf{time ratio} & \\textbf{precond. utilization}\\\\\n%%           \\hline\n%%           MCNP6 & 0.18 & 0.18 & 1 & N/A \\\\\n%%           DAG-MCNP6 & 11.12 & 11.16 & 62 & N/A \\\\\n%%           DAG-MCNP6 w/ SDF & 0.50 & 0.52 & 2.89 & 0.96 \\\\\n          \n%%   \\end{tabular}\n%%   \\caption{Performance results for an MCNP6 test case involving electron\n%%     transport of a 1 keV-100 keV photon source incident on an Fe/W target. 
5,000\n%%     histories were run in this test problem.}\n%%   \\label{tab:inp066_results}\n%% \\end{table*}\n\n%% Following the same process as in the fixed distance case by plugging Equation \\ref{appeq:pn_sampled} into\n%% Equation \\ref{appeq:subs_a_cond}, the utilization form for the sampled distance case is\n%% shown in Equation \\ref{appeq:sampled_limit}.\n\n%% \\begin{equation}\n%%   \\label{appeq:sampled_limit}\n%%   U_{theoretical} = \\frac{US}{A} = \\frac{ \\frac{1}{2} \\lambda(R - 2 \\lambda) e^{\\frac{-R}{\\lambda}} + \\lambda^2 - \\frac{3}{2} R \\lambda + R^2 }{R^2}\n%% \\end{equation}\n\n%% The results of this set of simulations can be seen in\n%%  Figure \\ref{fig:sdf_sampled_dist}. In this scenario, it is not expected that the\n%% utilization will approach zero when $\\lambda = 100\\, cm$, as the actual distance\n%% sampled may be considerably less than the provided mean free path for the\n%% simulation. Overall utilization values in this scenario for $\\lambda$ from 0 to\n%% 100 cm remain higher than the corresponding fixed distance simulation cases as\n%% is expected in a sampled distance case. Utilization values remain high for\n%% relatively large increases in mesh step size, $h$. This is important to\n%% application of the data structure given concerns regarding its potentially high\n%% memory footprint for large volumes. For example if the utilization of the signed\n%% distance field drops $~20\\%$ when going from a step size of 1 cm to 6.21 cm, but\n%% the memory footprint of the data structure will have decreased by a factor of\n%% $6.21^3$ or $239.5$ as well. The optimization of the mesh step size with respect\n%% to its effect on utilization will also need to be included in future models of\n%% the utilization.\n\n\nThe sampled distance probability distribution presented in Chapter \\ref{ch:preconditioning}\nis as follows:\n\n\\begin{equation}\n  p = \\frac{e^{-\\Sigma d}}{d^{2}} = \\frac{e^{-\\frac{d}{\\lambda}}}{d^{2}}\n\\end{equation}\n\nwith distances sampled using the function\n\n\\begin{equation}\n  d = -\\lambda \\ln(c)\n\\end{equation}\n\nwhere $c$ is randomly sampled with a uniform distribution between 0 and 1.\n\nAn examination of the change of variables in the general form for the utilized\nspace from Equation \\eqref{eq:util_model} gives\n\n\\begin{equation}\n  \\frac{dc}{dd} = -\\frac{c}{\\lambda}\n\\end{equation}\n\n\\begin{equation}\n  d = 0 \\rightarrow c = 1\n\\end{equation}\n\n\\begin{equation}\n  d = \\infty \\rightarrow c = 0\n\\end{equation}\n\nThis results in an integral of the following form for the sampled distance model\n\n\\begin{equation}\n  \\int_{0}^{R}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\\int_{1}^{0}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\n-r^2\\sin{\\phi} \\, \\lambda c \\ln(c)^2 \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}\\beta \\mathrm{d}c \\, \\mathrm{d}\\phi\n\\mathrm{d}\\theta \\mathrm{d}r\n\\end{equation}\n\nAs in the fixed distance case, the condition for $\\alpha_{min}$ is a piece-wise function based on the distance traveled. 
The expression for $d$ changes slightly in this case, however, given that the integral is now performed over the variable $c$.\n\n\\begin{equation}\n  d < R-r : \\alpha_{min} = 0\n\\end{equation}\n\n\\begin{equation}\n  d > R-r : \\alpha_{min} = \\arccos\\Bigg ( \\frac{(2R-r-d)^2-d^2-r^2}{2 d r} \\Bigg\n  )\n\\end{equation}\n\nNow that the distance traveled is being used to construct these two regions in\nthe model, this integral must be separated into two pieces, one with limits of $0$\nto $R-r$ and another with limits $R-r$ to $R$. Based on the distance sampling\ndistribution, these values become\n\n\\begin{equation}\n  d_{min} = R-r \\rightarrow c_{max} = e^{(-\\frac{(R-r)}{\\lambda})}\n\\end{equation}\n\n\\begin{equation}\n  d_{max} = R   \\rightarrow c_{min} = e^{(-\\frac{R}{\\lambda})}\n\\end{equation}\n\nand the resulting integral becomes\n\n\\begin{equation}\n  \\int_{0}^{R}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\\int_{1}^{c_{max}}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\n-r^2\\sin{\\phi} \\, \\lambda c \\ln(c)^2 \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}\\beta \\mathrm{d}c \\, \\mathrm{d}\\phi\n\\mathrm{d}\\theta \\mathrm{d}r +\n\\end{equation}\n\n\\begin{equation}\n  \\int_{0}^{R}\\int_{0}^{2\\pi}\\int_{0}^{\\pi}\\int_{c_{max}}^{c_{min}}\\int_{0}^{2\\pi}\\int_{\\alpha_{min}}^{\\pi}\n-r^2\\sin{\\phi} \\, \\lambda c \\ln(c)^2 \\sin{\\alpha} \\, \\mathrm{d}\\alpha \\mathrm{d}\\beta \\mathrm{d}c \\, \\mathrm{d}\\phi\n\\mathrm{d}\\theta \\mathrm{d}r\n\\end{equation}\n\nThe evaluation of this integral gives the final form of the analytic preconditioner limit from Equation \\eqref{eq:sdf_model_sampled}:\n\n\\begin{align}\n    \\begin{split}\n    U_{sampled} = \\frac{\\frac{1}{2} \\lambda (R - 2 \\lambda) e^{-\\frac{R}{\\lambda}} + \\lambda^{2} - \\frac{3}{2} R \\lambda + R^{2} }{R^{2}}\n  \\end{split}\n\\end{align}\n\n%% It is quite possible that utilization in either th interior or\n%% resion of the sphere is being under represented. Regardless of the cause, examining the contribution to\n%% utilization from each of these regions (shown in\n%%  Figure \\label{fig:util_region_contributions}) is an interesting endeavor.\n\n\n%% \\begin{figure}[ht] \n%% \\centering\n%% \\includesvg{../images/util_contributions}{width=\\textwidth}\n%% \\caption{A plot of the total predicted utilization along with the contriubtions\n%%   from the inner region ($d < R-r$) and outer region ($d > R-r$).}\n%% \\label{fig:util_region_contributions}\n%% \\end{figure}\n\n%% Figure \\ref{fig:util_region_contributions} depicts the utilization of the SDF\n%% data structure for various values of the mean free path, $\\lambda$.\n%% When $\\lambda$ is very small, particles' next event locations\n%% rarely reach the outer region condition of $ d > R - r $. As $\\lambda$ increases, particles are\n%% more likely to enter that region and its contribution increases greatly. Then\n%% as particles begin to travel distances on the order of the sphere radius the\n%% utilization decreases. The interior region utilization acts as one would\n%% expect. When particles travel small distances with respect to the sphere radius,\n%% there is high utilization, but as the particles begin to travel further its\n%% utilization rapidly decreases. 
The contribution from the outer region defines the\n%% significance of using both the current position signed distance value as well as the\n%% next event location's signed distance value to precondition ray fire calls in DAGMC.\n\n\\subsection{Inclusion of Error in Model Development}\n\n\nAs provided in Chapter \\ref{ch:preconditioning}, the condition for avoiding a\nray fire call when including error associated with signed distance value\ninterpolation is:\n\n\\begin{equation}\n  SDV(\\vec{p}) + SDV(\\vec{n}) > |\\vec{p}-\\vec{n}| + 2\\varepsilon(h)\n  \\label{appeq:condition}\n\\end{equation}\n\\begin{align*}\n &SDV - \\, signed \\, distance \\, value \\, function \\\\\n &\\vec{p} - \\, particle's \\, current \\, position \\\\\n &\\vec{n} - \\, particle's \\, next \\, event \\, location \\\\\n &h - \\, mesh \\, step \\, size \\\\\n &\\varepsilon(h) - \\, error \\, evaluation \\, for \\, signed \\, distance \\, values \\\\\n\\end{align*}\n\nThe limits of the $\\alpha_{min}$ condition need to be adjusted yet again to\naccount for the error that will be subtracted from the signed distance values. This becomes a relatively straightforward process in which the error is also subtracted from the arguments to the Heaviside functions in Equation \\ref{appeq:alpha_min_heaviside}. Accounting for interpolation error reduces the maximum possible distance the particle can travel and still be preconditioned. It also reduces the value of $d_{min}$ where $\\alpha_{min}$ becomes non-zero.\n\n\\begin{equation}\n\\small\n\\begin{split}\n\\alpha_{min} =& \\arccos\\Bigg ( \\frac{(2R-r-d-2\\varepsilon)^2-d^2-r^2}{2 d r} \\Bigg )[H(d-(R-r-\\varepsilon))-H(d-(R-\\varepsilon))]  \\\\\n&+ \\pi \\, H(d-(R-\\varepsilon))\n\\end{split}\n\\label{appeq:min_alpha_w_error}\n\\end{equation}\n\nAfter these adjustments to the $\\alpha_{min}$ condition, the integral can be evaluated and simplified to give the form found in Equation \\eqref{eq:sdf_util_sampled_w_error}:\n\n\\begin{equation}\n  US_{sampled} = \\frac{(R-\\varepsilon) (\\frac{1}{2} \\lambda ( R - 2\\lambda - \\varepsilon ) e^{\\frac{-R + \\varepsilon}{\\lambda}} + \\lambda^{2} - \\frac{3}{2}\\lambda(R - \\varepsilon) + (R-\\varepsilon)^{2})}{R^3}\n  \\label{appeq:sdf_util_sampled_w_error}\n\\end{equation}\n\n%% For a direct evaluation of the utilization based on\n%% the mesh step size, $h$, the formula for the error can be substituted for\n%% $\\epsilon$ to give\n\n%% \\begin{equation}\n%% \\small\n%% \\begin{split}\n%% \\alpha_{min} =& (H(d-(R-r-\\sqrt{3}h))-H(d-(R-\\sqrt{3}h))) \\arccos\\Bigg ( \\frac{(2R-r-d-2\\sqrt{3}h)^2-d^2-r^2}{2 d r} \\Bigg ) \\\\\n%% &+ \\pi \\, H(d-(R-\\sqrt{3}h))\n%% \\end{split}\n%% \\end{equation}\n\n", "meta": {"hexsha": "ba92bc1589ed2649c6d7ec2a166255f2b91836b8", "size": 18082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "document/chapters/07-appendix-a.tex", "max_stars_repo_name": "pshriwise/dissertation", "max_stars_repo_head_hexsha": "b32316befa1b8803210ca980594d42adb9794f7d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-04T20:50:48.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-04T20:50:48.000Z", "max_issues_repo_path": "document/chapters/07-appendix-a.tex", "max_issues_repo_name": "pshriwise/dissertation", "max_issues_repo_head_hexsha": "b32316befa1b8803210ca980594d42adb9794f7d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2018-06-09T04:42:07.000Z", "max_issues_repo_issues_event_max_datetime": 
"2018-08-09T21:08:37.000Z", "max_forks_repo_path": "document/chapters/07-appendix-a.tex", "max_forks_repo_name": "pshriwise/dissertation", "max_forks_repo_head_hexsha": "b32316befa1b8803210ca980594d42adb9794f7d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2583732057, "max_line_length": 461, "alphanum_fraction": 0.6967702688, "num_tokens": 5562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833945721304, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5638776898074267}}
{"text": "\\hypertarget{haskell-expressions}{%\n\\section{Haskell Expressions}\\label{haskell-expressions}}\n\nIn each of the following problems some declarations are given. Give the\nmost general type of each declared value, and if the value is not a\nfunction, then also the result of evaluating it.\n\n\\hypertarget{lambda-expressions}{%\n\\subsection{Lambda expressions}\\label{lambda-expressions}}\n\n\\begin{lstlisting}[language=Haskell]\nf01 :: Num a => a -> a\nf01 = \\x -> 2 * x\nf01' = \\x -> 2 * x :: Num a => a -> a\nf01'' () = \\x -> 2 * x :: Num a => () -> a -> a\nf01''' _ = \\x -> 2 * x :: Num a => p -> a -> a\n\nf02 = \\x -> \\y -> x + y :: Num a => a -> a -> a\nf03 = \\x y -> x + y :: Num a => a -> a -> a\nf04 = = \\y -> x + y ==> error (x not known)\nf05 = \\(x, y) -> x + y :: Num a => (a, a) -> a\nf06 = \\[x, y] -> x + y :: Num a => [a] -> a\n\nf07 = [\\x -> x +1, \\x -> 2* x, \\x -> x ^2] :: Num a => [a -> a]\nf08 = head f07 5 :: Num a => a\nf09 = \\x -> x :: p -> p\nf10 = [f09, \\x -> x +1] :: Num a => [a -> a]\nf11 = \\_ -> (\\x -> x +1 , \\() -> 'a') :: Num a => p -> (a -> a, () -> Char)\n\\end{lstlisting}\n\n\\hypertarget{sections}{%\n\\subsection{sections}\\label{sections}}\n\n\\begin{lstlisting}[language=Haskell]\nx ^+^ y = x^2 + y^2\ng01 = (^+^) :: Num a => a -> a -> a\ng02 = (^+^2) :: Num a => a -> a\ng03 = (3^+^) :: Num a => a -> a\ng04 = (3^+^2) :: Num a => a\n\ng05 x y = 2*x + 3*y :: Num a => a -> a -> a\ng06 = (`g05` 2) :: Num a => a -> a\ng07 = (2`g05`) :: Num a => a -> a\ng08 = g06 3 :: Num a => a\ng09 = g07 4 :: Num a => a\n\ng10 x y z = 2*x + 3*y + 4*z :: Num a => a -> a -> a -> a\ng14 x = (g10 (x +1)) :: Num a => a -> a -> a -> a\ng15 = g14 2 3 4 :: Num a => a\n\ng16 n = \\x -> ([(+), (-), (*)] !! n ) x 2 :: Num a => Int -> a -> a\ng17 = g16 1 5 :: Num a => a\n\\end{lstlisting}\n\n\\hypertarget{list-comprehensions}{%\n\\subsection{List comprehensions}\\label{list-comprehensions}}\n\n\\begin{lstlisting}[language=Haskell]\nh01 = [ x | x <- [1 .. 5]] :: (Num a, Enum a) => [a]\nh02 = [2*x | x <- [1 .. 5]] :: (Num a, Enum a) => [a]\nh03 = [ x-y | x <- [1 .. 3], y <- [1 .. 4]] :: (Num a, Enum a) => [a]\nh04 = [ x-y | y <- [1 .. 3], x <- [1 .. 4]] :: (Num a, Enum a) => [a]\nh05 = [ x+y | x <- [1 .. 3], y <- [1 .. 
4], x >= y ] :: (Num a, Enum a, Ord a) => [a]\nh06 = [ head x | x <- [\"dimdi\", \"schnurpsel\", \"zumsel\"]] :: [Char]\nh07 = [ x | (x:_) <- [\"dimdi\", \"schnurpsel\", \"zumsel\"]] :: [Char]\nh08 = [ xs | ('s':xs) <- [\"dimdi\", \"schnurpsel\", \"zumsel\"]] :: [[Char]] ==> [\"chnurpsel\"]\n\\end{lstlisting}\n\n\\clearpage", "meta": {"hexsha": "9936fdc2cfdbba9b4c8417216b23a2cdbe7bf450", "size": 2457, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TSM_AdvPrPa/Excercises/Haskell/04_ExpressionsExercise.tex", "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": ["Beerware"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TSM_AdvPrPa/Excercises/Haskell/04_ExpressionsExercise.tex", "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_licenses": ["Beerware"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TSM_AdvPrPa/Excercises/Haskell/04_ExpressionsExercise.tex", "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": ["Beerware"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "avg_line_length": 35.6086956522, "max_line_length": 85, "alphanum_fraction": 0.4798534799, "num_tokens": 1030, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5638776814479952}}
{"text": "% Copyright 2011-2015 David Hadka.  All Rights Reserved.\r\n%\r\n% This file is part of the MOEA Framework User Manual.\r\n%\r\n% Permission is granted to copy, distribute and/or modify this document under\r\n% the terms of the GNU Free Documentation License, Version 1.3 or any later\r\n% version published by the Free Software Foundation; with the Invariant Section\r\n% being the section entitled \"Preface\", no Front-Cover Texts, and no Back-Cover\r\n% Texts.  A copy of the license is included in the section entitled \"GNU Free\r\n% Documentation License\".\r\n\r\n\\chapter{Defining New Problems}\r\n\\label{chpt:problems}\r\n\r\nThe real value of the MOEA Framework comes not from the algorithms and tools it provides, but the problems that it solves.  As such, being able to introduce new problems into the MOEA Framework is an integral aspect of its use.\r\n\r\nThroughout this chapter, we will show how a simple multiobjective problem, the Kursawe problem, can be defined in Java, C/C++, and in scripting languages.  The formal definition for the Kursawe problem is provided below.\r\n\r\n\\begin{equation}\r\n  \\label{eq:mop}\r\n  \\begin{aligned}\r\n    & \\underset{\\vect{x} \\in \\mathbb{R}^L}{\\text{minimize}}\r\n      & & F(\\vect{x}) = (f_1(\\vect{x}), f_2(\\vect{x})) \\\\\r\n    & \\text{where}\r\n      & & f_1(\\vect{x}) = \\sum_{i=0}^{L-1} -10\\text{e}^{-0.2\\sqrt{x_i^2 + x_{i+1}^2}}, \\\\\r\n    & & & f_2(\\vect{x}) = \\sum_{i=0}^{L} \\left|x_i\\right|^{0.8} + 5\\sin\\left(x_i^3\\right).\r\n  \\end{aligned}\r\n\\end{equation}\r\n\r\n\\begin{important}\r\nThe MOEA Framework only works on minimization problems.  If any objectives in your problem are to be maximized, you can negate the objective value to convert from maximization into minimization.  In other words, by minimizing the negated objective, your are maximizing the original objective.  See section \\ref{sect:maximizing} for additional details on dealing with maximization objectives.\r\n\\end{important}\r\n\r\n\\section{Java}\r\nDefining new problems in Java is the most direct and straightforward way to introduce new problems into the MOEA Framework.  All problems in the MOEA Framework implement the \\class{Problem} interface.  The \\class{Problem} interface defines the methods for characterizing a problem, defining the problem's representation, and evaluating solutions to the problem.  In practice, you should never need to implement the \\class{Problem} interface directly, but can extend the more convenient \\class{AbstractProblem} class.  \\class{AbstractProblem} provides default implementations for many of the methods required by the \\class{Problem} interface.  
The code example below shows the Kursawe problem defined by extending the \\class{AbstractProblem} class.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nimport org.moeaframework.core.Solution;\r\nimport org.moeaframework.core.variable.EncodingUtils;\r\nimport org.moeaframework.core.variable.RealVariable;\r\nimport org.moeaframework.problem.AbstractProblem;\r\n \r\npublic class Kursawe extends AbstractProblem {\r\n \r\n\tpublic Kursawe() {\r\n\t\tsuper(3, 2);\r\n\t}\r\n\r\n\t@Override\r\n\tpublic Solution newSolution() {\r\n\t\tSolution solution = new Solution(numberOfVariables, \r\n\t\t\t\tnumberOfObjectives);\r\n \r\n\t\tfor (int i = 0; i < numberOfVariables; i++) {\r\n\t\t\tsolution.setVariable(i, new RealVariable(-5.0, 5.0));\r\n\t\t}\r\n \r\n\t\treturn solution;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic void evaluate(Solution solution) {\r\n\t\tdouble[] x = EncodingUtils.getReal(solution);\r\n\t\tdouble f1 = 0.0;\r\n\t\tdouble f2 = 0.0;\r\n   \t \r\n\t\tfor (int i = 0; i < numberOfVariables - 1; i++) {\r\n\t\t\tf1 += -10.0 * Math.exp(-0.2 * Math.sqrt(\r\n\t\t\t\t\tMath.pow(x[i], 2.0) + Math.pow(x[i+1], 2.0)));\r\n\t\t}\r\n \r\n\t\tfor (int i = 0; i < numberOfVariables; i++) {\r\n\t\t\tf2 += Math.pow(Math.abs(x[i]), 0.8) +  \r\n\t\t\t\t\t5.0 * Math.sin(Math.pow(x[i], 3.0));\r\n\t\t}\r\n \r\n\t\tsolution.setObjective(0, f1);\r\n\t\tsolution.setObjective(1, f2);\r\n\t}\r\n \r\n}\r\n\\end{lstlisting}\r\n\r\nNote that on line 9 in the constructor, we call \\java{super(3, 2)} to set the number of decision variables (3) and number of objectives (2).  All that remains is defining the \\java{newSolution} and \\java{evaluate} methods.\r\n\r\nThe \\java{newSolution} method is responsible for instantiating new instances of solutions for the problem, and in doing so defines the decision variable types and bounds.  In the \\java{newSolution} method, we start by creating a new \\class{Solution} instance on lines 14-15.  Observe that we must pass the number of decision variables and objectives to the \\class{Solution} constructor.  Next, we define each of the decision variables and specify their bounds on lines 17-19.  For the Kursawe problem, all decision variables are real values ranging between $-5.0$ and $5.0$, inclusively.  Finally, we complete this method by returning the \\class{Solution} instance.\r\n\r\nThe \\java{evaluate} method is responsible for evaluating solutions to the problem.  A solution which has been generated by an optimization algorithm is passed as an argument to the \\java{evaluate} method.  The decision variables contained in this solution are set to the values specified by the optimization algorithm.  The evaluate method must extract these values, evaluate the problem, and set the objective values.\r\n\r\nSince the Kursawe problem contains all real-valued decision variables, we can extract the decision variables into an array using the helper methods of the \\class{EncodingUtils} class on line 26.  Use of \\class{EncodingUtils} is encouraged for extracting the decision variables from a solution.  Then on lines 27 to 38, we use those decision variables to evaluate the Kursawe problem.  Finally, on lines 40-41, we assign the two objectives for this problem.\r\n\r\nAt this point, the problem is completely defined and can be used with the MOEA Framework.  To use this problem with the \\class{Executor}, \\class{Instrumenter} and \\class{Analyzer} classes introduced in \\chptref{chpt:executor}, you pass a direct reference to the problem class using the \\java{withProblemClass} method.  
For example, we can optimize the Kursawe problem we just defined with the following code:\r\n\r\n\\begin{lstlisting}[language=Java]\r\nnew Executor()\r\n\t\t.withProblemClass(Kursawe.class)\r\n\t\t.withAlgorithm(\"NSGAII\")\r\n\t\t.withMaxEvaluations(10000)\r\n\t\t.run();\r\n\\end{lstlisting}\r\n\r\nNote how we pass the reference to the problem with \\java{Kursawe.class}.  The name of the class, \\java{Kursawe}, is followed by \\java{.class}.  The MOEA Framework then calls the constructor for the problem class, which in this case is the empty (no argument) constructor, and proceeds to optimize the problem.\r\n\r\nProblems can also define constructors with arguments.  For example, consider a problem that needs to load data from a file.  For this to work, define a constructor in the problem class that accepts the desired inputs.  In this case, our constructor would be declared as \\java{public ProblemWithArgument(File dataFile) { ... }}.  You can then solve this problem as shown below.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nnew Executor()\r\n\t\t.withProblemClass(ProblemWithArgument.class,\r\n        new File(\"inputFile.txt\"))\r\n\t\t.withAlgorithm(\"NSGAII\")\r\n\t\t.withMaxEvaluations(10000)\r\n\t\t.run();\r\n\\end{lstlisting}\r\n\r\n\\section{C/C++}\r\nIt is often the case that the problem / model / computer simulation you are working with is written in a different programming language, such as C/C++.  With a little work, it is possible to connect that C/C++ problem to the MOEA Framework and optimize its inputs / parameters.  In the following example, we will demonstrate how to connect the MOEA Framework to a simple C program.  We continue using the Kursawe problem, which if written in C would appear as follows:\r\n\r\n\\begin{lstlisting}[language=C]\r\n#include <math.h>\r\n\r\nint nvars = 3;\r\nint nobjs = 2;\r\n \r\nvoid evaluate(double* vars, double* objs) {\r\n\tint i;\r\n\tobjs[0] = 0.0;\r\n\tobjs[1] = 0.0;\r\n\r\n\tfor (i = 0; i < nvars - 1; i++) {\r\n\t\tobjs[0] += -10.0 * exp(-0.2 * sqrt(\r\n\t\t\t\tpow(vars[i], 2.0) + pow(vars[i+1], 2.0)));\r\n\t}\r\n\r\n\tfor (i = 0; i < nvars; i++) {\r\n\t\tobjs[1] += pow(fabs(vars[i]), 0.8) +  \r\n\t\t\t\t5.0 * sin(pow(vars[i], 3.0));\r\n\t}\r\n}\r\n\\end{lstlisting}\r\n\r\nNote how the \\cpp{evaluate} method takes two arguments, \\cpp{vars} and \\cpp{objs}, which coincide with the inputs (the decision variables) and the outputs (the objective values).  Now we need to define how the \\cpp{evaluate} method connects to the MOEA Framework.  This connection is established using the following code.\r\n\r\n\\begin{lstlisting}[language=C]\r\n#include <stdlib.h>\r\n#include \"moeaframework.h\"\r\n \r\nint main(int argc, char* argv[]) {\r\n\tdouble vars[nvars];\r\n\tdouble objs[nobjs];\r\n\r\n\tMOEA_Init(nobjs, 0);\r\n\r\n\twhile (MOEA_Next_solution() == MOEA_SUCCESS) {\r\n\t\tMOEA_Read_doubles(nvars, vars);\r\n\t\tevaluate(vars, objs);\r\n\t\tMOEA_Write(objs, NULL);\r\n\t}\r\n\r\n\tMOEA_Finalize();\r\n\treturn EXIT_SUCCESS;\r\n}\r\n\\end{lstlisting}\r\n\r\nFirst, line 2 includes the \\file{moeaframework.h} file.  This header is provided by the MOEA Framework and defines all the functions needed to communicate with the MOEA Framework.  All such functions begin with the prefix \\cpp{MOEA\\_}.  You can find the \\file{moeaframework.h} file in the source code distribution in the folder \\folder{examples/} along with additional examples.\r\n\r\nLines 4-18 define the main loop for the C/C++ program.  
Lines 5-6 initialize the storage arrays for the decision variables and objectives.  Line 8 calls \\cpp{MOEA\\_Init} to initialize the communication between C/C++ and the MOEA Framework.  The \\cpp{MOEA\\_Init} method takes the number of objectives and constraints as arguments.  Once initialized, we can begin reading and evaluating solutions.  Line 10 loops as long as we successfully read the next solution using \\cpp{MOEA\\_Next\\_solution()}.  Line 11 extracts the real valued decision variables, filling the array \\cpp{vars}.  Line 12 invokes the \\cpp{evaluate} method to evaluate the problem.  This results in the array \\cpp{objs} being filled with the resulting objective values.  Line 13 writes the objectives back to the MOEA Framework.  The second argument to \\cpp{MOEA\\_Write} is \\cpp{NULL} in this example, since the Kursawe problem is unconstrained.  This loop repeats until no more solutions are read.  At this point, the C/C++ program terminates by invoking \\cpp{MOEA\\_Finalize()} and exiting.  The complete source code is shown below.\r\n\r\n\\begin{lstlisting}[language=C]\r\n#include <stdlib.h>\r\n#include <math.h>\r\n#include \"moeaframework.h\"\r\n \r\nint nvars = 3;\r\nint nobjs = 2;\r\n \r\nvoid evaluate(double* vars, double* objs) {\r\n\tint i;\r\n\tobjs[0] = 0.0;\r\n\tobjs[1] = 0.0;\r\n\r\n\tfor (i = 0; i < nvars - 1; i++) {\r\n\t\tobjs[0] += -10.0 * exp(-0.2 * sqrt(\r\n\t\t\t\tpow(vars[i], 2.0) + pow(vars[i+1], 2.0)));\r\n\t}\r\n\r\n\tfor (i = 0; i < nvars; i++) {\r\n\t\tobjs[1] += pow(fabs(vars[i]), 0.8) +  \r\n\t\t\t\t5.0 * sin(pow(vars[i], 3.0));\r\n\t}\r\n}\r\n\r\nint main(int argc, char* argv[]) {\r\n\tdouble vars[nvars];\r\n\tdouble objs[nobjs];\r\n\r\n\tMOEA_Init(nobjs, 0);\r\n\r\n\twhile (MOEA_Next_solution() == MOEA_SUCCESS) {\r\n\t\tMOEA_Read_doubles(nvars, vars);\r\n\t\tevaluate(vars, objs);\r\n\t\tMOEA_Write(objs, NULL);\r\n\t}\r\n\r\n\tMOEA_Finalize();\r\n\treturn EXIT_SUCCESS;\r\n}\r\n\\end{lstlisting}\r\n\r\nYou can save this C code to \\file{kursawe.c} and compile it into an executable.  If using the GNU C Compiler (gcc), you can compile this code with the following command on Linux or Windows.  Note that you will need both \\file{moeaframework.h} and \\file{moeaframework.c} in the same directory as \\file{kursawe.c}.\r\n\r\n\\begin{lstlisting}[language=bash,breakatwhitespace=true]\r\ngcc -o kursawe.exe kursawe.c moeaframework.c -lm\r\n\\end{lstlisting}\r\n\r\nAt this point, we now switch back to Java and define the problem class by extending the \\class{ExternalProblem} class.  We extend the \\class{ExternalProblem} class instead of the \\class{AbstractProblem} class since \\class{ExternalProblem} understands how to communicate with the executable we just compiled.  
The code snippet below shows the complete Java class for this example.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nimport org.moeaframework.core.Solution;\r\nimport org.moeaframework.core.variable.RealVariable;\r\nimport org.moeaframework.problem.ExternalProblem;\r\n\r\npublic class ExternalKursawe extends ExternalProblem {\r\n\r\n\tpublic ExternalKursawe() {\r\n\t\tsuper(\"kursawe.exe\");\r\n\t}\r\n\t\r\n\tpublic int getNumberOfVariables() {\r\n\t\treturn 3;\r\n\t}\r\n\t\r\n\tpublic int getNumberOfObjectives() {\r\n\t\treturn 2;\r\n\t}\r\n\t\r\n\tpublic int getNumberOfConstraints() {\r\n\t\treturn 0;\r\n\t}\r\n\r\n\t@Override\r\n\tpublic Solution newSolution() {\r\n\t\tSolution solution = new Solution(getNumberOfVariables(), \r\n\t\t\t\tgetNumberOfObjectives(), getNumberOfConstraints());\r\n \r\n\t\tfor (int i = 0; i < getNumberOfVariables(); i++) {\r\n\t\t\tsolution.setVariable(i, new RealVariable(-5.0, 5.0));\r\n\t\t}\r\n \r\n\t\treturn solution;\r\n\t}\r\n\r\n}\r\n\\end{lstlisting}\r\n\r\nNote how we still need to define the number of variables, objectives, and constraints in addition to defining the \\java{newSolution} method.  However, we no longer include the \\java{evaluate} method.  Instead, we reference the executable we previously created on line 8.  The MOEA Framework will launch the executable and use it to evaluate solutions to the problem.\r\n\r\nOur work is now complete.  We can now solve this ``external'' version of the Kursawe problem just like the pure Java implementation shown earlier in this chapter.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nnew Executor()\r\n\t\t.withProblemClass(ExternalKursawe.class)\r\n\t\t.withAlgorithm(\"NSGAII\")\r\n\t\t.withMaxEvaluations(10000)\r\n\t\t.run();\r\n\\end{lstlisting}\r\n\r\n\\begin{tip}\r\nIt is helpful to test the C/C++ program manually prior to running it with the MOEA Framework.  Tests can be performed by launching the C/C++ program and manually typing in inputs.  In this example, the program requires 3 real values entered on a single line.\r\n\\end{tip}\r\n\r\n\\begin{lstlisting}[language=Plaintext]\r\n-2.5 1.25 0.05\r\n\\end{lstlisting}\r\n\r\nOnce the enter key is pressed, the program will output the two objectives to the console:\r\n\r\n\\begin{lstlisting}[language=Plaintext]\r\n-13.504159423733775 6.966377092192072\r\n\\end{lstlisting}\r\n\r\nAdditional inputs can be provided, or press Ctrl+D to terminate the program.\r\n\r\n\\section{Scripting Language}\r\nProblems can also be defined in one of the many scripting languages available via the Java Scripting API.  Supported languages include Javascript, Python, Ruby, Scheme, Groovy and BeanShell.  Java SE 6 includes Rhino, a Javascript scripting engine, out-of-the-box.  
The following code snippet shows the Rhino Javascript code for defining the Kursawe problem.\r\n\r\n\\begin{lstlisting}[language=JavaScript]\r\nimportPackage(java.lang);\r\nimportPackage(Packages.org.moeaframework.core);\r\nimportPackage(Packages.org.moeaframework.core.variable);\r\n\r\nfunction getName() {\r\n\treturn \"Kursawe\";\r\n}\r\n\r\nfunction getNumberOfVariables() {\r\n\treturn 3;\r\n}\r\n\r\nfunction getNumberOfObjectives() {\r\n\treturn 2;\r\n}\r\n\r\nfunction getNumberOfConstraints() {\r\n\treturn 0;\r\n}\r\n\r\nfunction evaluate(solution) {\r\n\tx = EncodingUtils.getReal(solution);\r\n\tf1 = 0.0;\r\n\tf2 = 0.0;\r\n\r\n\tfor (i = 0; i < getNumberOfVariables() - 1; i++) {\r\n\t\tf1 += -10.0 * Math.exp(-0.2 * Math.sqrt(\r\n\t\t\t\tMath.pow(x[i], 2.0) + Math.pow(x[i+1], 2.0)));\r\n\t}\r\n \r\n\tfor (i = 0; i < getNumberOfVariables(); i++) {\r\n\t\tf2 += Math.pow(Math.abs(x[i]), 0.8) +  \r\n\t\t\t\t5.0 * Math.sin(Math.pow(x[i], 3.0));\r\n\t}\r\n \r\n\tsolution.setObjective(0, f1);\r\n\tsolution.setObjective(1, f2);\r\n}\r\n\r\nfunction newSolution() {\r\n\tsolution = new Solution(getNumberOfVariables(), \r\n\t\t\tgetNumberOfObjectives());\r\n \r\n\tfor (i = 0; i < getNumberOfVariables(); i++) {\r\n\t\tsolution.setVariable(i, new RealVariable(-5.0, 5.0));\r\n\t}\r\n \r\n\treturn solution;\r\n}\r\n\r\nfunction close() {\r\n\r\n}\r\n\\end{lstlisting}\r\n\r\nNote how all methods defined by the \\class{Problem} interface appear in this code.  Also note how we can invoke Java methods and constructors through the scripting language.  The details of how to implement functions and invoke existing methods depend on the scripting language chosen.  Refer to the documentation for the scripting language for details.\r\n\r\nSave this script to an appropriate file with the correct file extension for the scripting language.  Since the script in this example is written in the Rhino Javascript language, we save the file to \\file{kursawe.js}.  Solving this Javascript version of the Kursawe problem is nearly identical to all previous examples, as shown below.\r\n\r\n\\begin{lstlisting}[language=Java]\r\nnew Executor()\r\n\t\t.withProblemClass(ScriptedProblem.class, \r\n\t\t\t\tnew File(\"kursawe.js\"))\r\n\t\t.withAlgorithm(\"NSGAII\")\r\n\t\t.withMaxEvaluations(10000)\r\n\t\t.run();\r\n\\end{lstlisting}\r\n\r\nThe only difference is on lines 2-3, where we specify the problem class as \\java{ScriptedProblem.class} and pass as an argument the file \\file{kursawe.js}.  The \\java{ScriptedProblem} class loads the file, determines the appropriate scripting engine, and uses that scripting engine to evaluate solutions to the problem.\r\n\r\n\\section{Conclusion}\r\nThis chapter introduced the various means for introducing new problems to the MOEA Framework.  This includes implementing problems in Java, C/C++, and in one of the many supported scripting languages.  Care must be taken when choosing which language to use, as each method has different advantages and drawbacks.  One key consideration is the speed of each method.  The table below shows the wall-clock time for the three methods discussed in this chapter.  
These times were produced on an Intel\\copyright Core\\texttrademark 2 Duo @ 2.13 GHz.\r\n\r\n\\par\r\n\\begin{center}\r\n\\begin{tabular}{ll}\r\n  Method & Time (Seconds) \\\\\r\n  \\hline\r\n  Java & 1.218 \\\\\r\n  C/C++ & 4.011 \\\\\r\n  Scripted (Javascript) & 24.874\r\n\\end{tabular}\r\n\\end{center}\r\n\r\nObserve that using C/C++ incurs an overhead of approximately $0.000278$ seconds per evaluation.  For the simple Kursawe problem used as the example throughout this chapter, the overhead outweighs the evaluation time.  One would expect, however, that larger and more complex problems will benefit from potentially faster C/C++ implementations.  Furthermore, as one would expect, the scripted implementation in Javascript incurs a significant performance penalty.\r\n", "meta": {"hexsha": "7fbbd2d5205526c5b41f392a273334f5c53fb02c", "size": 18000, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/problems.tex", "max_stars_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_stars_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_stars_repo_licenses": ["Intel"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manual/problems.tex", "max_issues_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_issues_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_issues_repo_licenses": ["Intel"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/problems.tex", "max_forks_repo_name": "BrunoGrisci/EngineeringDesignusingMultiObjectiveEvolutionaryAlgorithms", "max_forks_repo_head_hexsha": "6b15dfe67521249ef1747f52a1ef709401eee377", "max_forks_repo_licenses": ["Intel"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.3870967742, "max_line_length": 1101, "alphanum_fraction": 0.7306111111, "num_tokens": 4619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.78793120560257, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5637048962954758}}
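A side note on the timing table above (my own arithmetic, using the wall-clock times from the table and the 10,000 evaluations configured throughout this chapter): the quoted C/C++ overhead of roughly $0.000278$ seconds per evaluation follows directly from the difference in run times,

\[
  \frac{4.011\,\mathrm{s} - 1.218\,\mathrm{s}}{10000} \approx 2.79 \times 10^{-4}\ \mathrm{s\ per\ evaluation},
\]

which agrees with the quoted figure up to rounding.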
{"text": "\\section{Conclusion}\nThe aim of this paper was to introduce a Galois theory for general differential equations in characteristic zero. To accomplish this we had to introduce a large body of algebraic concepts.\n\\subsection{Summary}\nWe are giving a biref summary of the concepts introduced.\n\\subsubsection{Algebras and coalgebras}\nThe primer was clearly the introduction of algebras and a certain type of algebra extensions, the so called Ore-extensions. In summary, for a given algebra $A$ over a ring $R$, an algebra automorphism $\\alpha \\in \\trm{Aut}_{R-\\trm{alg}}(A)$ and an $\\alpha$-derivation $\\delta \\in \\trm{End}_R(A)$ which is\n$$\\delta(a b) = \\alpha(a) \\delta(b) + \\delta(a) b,\\ \\forall a, b \\in A$$\nthe Ore extension $A[X,\\alpha,\\delta]$ is an $A$-algebra, with\n$$X a = \\alpha(a) X - \\delta(a)$$\nfor all $a \\in A$. In addition, we provided the basic concepts of Lie algebras and their enveloping algebras. A Lie algebra is an $R$-module over a ring $R$ with a antisymmetric multiplication map $\\mu$ fulfulling the Jacobi-identity. Their enveloping algebras are unital associative algebras in the above sense. Lie algebras are intimitely connected to derivations and differential rings.\\\\\nCoalgebras a categorically dual to algebras, that is the dual module $C^*$ for any coalgebra $C$ is an algebra over the same ring. Furthermore, the module $\\trm{Hom}_R(C,A)$ is an algebra as well, with convolution\n$$\\mu : \\trm{Hom}_R(C,A) \\otimes \\trm{Hom}_R(C,A) \\longrightarrow \\trm{Hom}_R(C,A),\\ f \\otimes g \\longmapsto \\mu_A \\circ (f \\otimes g) \\circ \\Delta_C$$\nas multiplication and $\\eta_A \\circ \\eps_C$ as unit.\n\\subsubsection{Bialgebras, module algebras and Hopf algebras}\nA bialgebra $(D,\\mu_D,\\eta_D,\\Delta_D,\\eps_D)$ has both, the structure of an algebra $(D,\\mu_D,\\eta_D)$ and that of a coalgebra $(D,\\Delta_D,\\eps_D)$, such that multiplication and unit map are homomorphisms of coalgebras and comultiplication and counit are homomorphisms of algebras.\\\\\nFor a bialgebra $D$, a $D$-module algebra $(A,\\Psi_A)$ is an algebra $(A,\\mu_A,\\eta_A)$ such that\n$$\\Psi (a b) = \\sum_{(d)} \\mu_A(d_{(1)}(a) \\otimes d_{(2)}(b)),\\ \\Psi_A(1_D \\otimes a) = \\eps_D(1_D) a$$\nholds.\\\\\nA Hopf algebra $D$ is a bialgebra, with a bialgebra antihomomorphism $S$:\n$$S : D \\longrightarrow D^{\\trm{copop}}$$\nsuch that $\\eta \\eps(d) = \\mu(id_D \\otimes S(\\Delta(d))) = \\mu(S \\otimes id(\\Delta(d)))$ holds for all $d \\in D$.\n\\subsubsection{Differential modules and their constructs}\nWe introduced the concept of derivations over arbitrary modules over differential rings. Next, we introduced the ring of differential operators and showed that this ring is an Ore extension of the differential ring. This is followed by the defintion of the ring of differential polynmials. We showed that this ring is a module algebra over the ring of differential operators.\\\\\nFurthermore, we discussed the basic Picard-Vessiot theory for linear differential equations in characteristic zero. We showed that any simple differential ring containing all solutions for suchs an equations is isomorphic to the PV ring. Moreover, algebraic elements over the field of constants are constant over the PV ring and any constant algebraic element over the PV ring is already algebraic over the field of constants. We defined the Galois group for this type of equations to be all $k^\\partial$-linear bijections commuting with the derivation $\\partial$. 
Concluding this section, we presented two simple examples, over a non-trivial and a trivial differential ring:\n$$L_1 = \\partial - a, \\ k = \\currfield(z),\\ L_2 = \\partial^2 - a, \\ k = \\currfield,\\ a \\in \\currfield^\\times$$\nin both cases. We computed the PV rings $R_1 = \\currfield(z)[y,y^{-1}], \\partial(y) = a y$ and $R_2 = \\currfield[y,y^{-1}], \\partial(y) = \\sqrt{a} y$, each with Galois group $\\currfield^\\times$.\n\\subsubsection{General theory by Heiderich}\nNext, we introduced the concept of iterative derivations and the universal Taylor homomorphism for differential ring extensions in characteristic zero. This was followed by the definition of the Umemura functor which assigns to each commutative algebra over a differential ring extension the group of algebra automorphisms leaving the image of the differential ring under the Taylor homomorphism fixed and making the following diagram commutative:\n%diagram!\nThis led us to the definition of the Lie-Ritt functor, which assigns to each non-reduced algebra over $K$ the group of infinitesimal coordinate transformations. Heiderich proved that the Umemura functor is such a Lie-Ritt functor. Lastly, we compared the general theory with the PV theory and revisited our previous example $L_2$. There, we computed the Hopf algebra $H = R \\otimes R^{\\Psi_R \\otimes \\Psi_R}$ and saw that its set of prime ideals is indeed isomorphic to the spectrum of the ring of Laurent polynomials over $\\currfield$.\n\\subsection{Outlook}\nAs we only computed a linear example over a trivial differential ring, it would be most interesting to extend this to non-trivial differential rings. Furthermore, the theory is general enough to deal with non-linear differential equations. Rather simple examples such as $\\partial(x) - x^2$ could be a starting point to further our understanding of this intriguing theory.\\\\\nSecondly, the theory developed by Umemura and Heiderich does not cover non-commutative cases, as the definition of the Lie-Ritt functors heavily depends on commutativity of the underlying algebras. An investigation into this would seem promising.\n%We broadly introduced the concepts of module algebras and their Galois theory according to \\cite{Heid10,Heid13}. This concept got applied to a rather simple example. In addition, we compared the classical Galois theory in the sense of Picard-Vessiot with the expanded theory.\\\\\n%\\indent To conclude, this theory provides alternative means not only to linear differential equations but rather to a wide range of similar problems, as iterative derivatives, difference equations and, of course, non-linear (ordinary or partial) differential equations.\n\\newpage\n\\section*{Acknowledgements}\nFirst of all, I would like to thank professor Gro\\ss{}e-Kl\\\"onne for offering me this challenging topic and providing invaluable hints, as well as doctor R\\\"uhling. Furthermore, I am grateful to Greta, Uwe, Anne and Anne W. for supporting me almost endlessly. In addition, Verena and Alex are not to be forgotten, last but not least Christian and Otis.\n\\newpage\n\\section{Appendix}\\label{appendix}\n%\\subsection{Basic category theory}\n%To generalize or abstract certain statements it is convenient to introduce the notion of categories. Here, we give a brief introduction for the sake of clarity following loosely \\cite{Awo} and \\cite{Borc}. % In short, a category is a class of sets $\\trm{Obj}$ sharing a common mathematical structure. 
Maps for two objects in $\\trm{Obj}$ are called morphism on $\\trm{Obj}$ if they preserve the structure. More formally, \n% A category $\\mathcal{C}$ conists of the class of objects $\\trm{Obj}(\\mathcal{C})$ and morphisms for two objects $A, B$ in $\\trm{Obj}(\\mathcal{C})$ which is denoted by $\\mathcal{C}(A,B)$, $\\mathcal{M_C}$ or $\\trm{Morph}_{\\mathcal{C}}(A,B)$. Moreover, if $A, B, C$ are three (not necessarily distinct) objects in $\\mathcal{C}$, the composition of two morphisms $f: A \\longrightarrow B$ and $g : B \\longrightarrow C$ is denoted by $g \\circ f : A \\longrightarrow C$ and is a morphism in $\\mathcal{C}(A,C)$. Lastly, the identity map $id : A \\longrightarrow A$ is a morphism for all objects $A$ in $\\trm{Obj}(\\mathcal{C})$.%if $X$ and $Y$ are two objects in $\\trm{Obj}$ %(with structure maps $\\phi$, $\\psi$ or tuples of structure maps), then a map $f : X \\longrightarrow Y$ is called a morphism in the category of $\\trm{Obj}$ if and only there is a map $\\wt{f} : \\im \\phi \\longrightarrow \\im \\psi$ such that the diagram commutes\\index{Index}{category}\\index{Index}{morphism}\n%%$$\\xymatrix{\n%%X \\ar[r]^f \\ar[d]_\\phi & Y \\ar[d]^\\psi\\\\\n%%\\trm{im} \\phi \\ar[r]_{\\wt{f}} & \\im \\psi.\\\\\n%%}$$\n%%The class of \n%\\bsp Some prominent examples\n%\\bn\n%\\item the category of sets with morphisms simply all maps between two sets.\n%\\index{Index}{category!of sets}\n%\\item the category of pointed spaces $\\trm{PSpc}$, which can be considered as all non-empty sets with a designated element - the basepoint - and its morphisms are maps preserving the basepoints (the mandatory element in each object).\n%\\item the category of abelian groups $\\trm{Abel}$, where morphisms are group homomorphisms,\n%\\index{Index}{category!of groups!abelian}\n%\\item the category of non-abelian groups $\\trm{NAbel}$, where morphisms are also group homomorphisms, in case we do not know if a given group belongs to either of the two categories we simply assign it to $\\trm{Grp}$,\n%\\index{Index}{category!of groups}\n%\\index{Index}{category!of groups!non-abelian}\n%\\item the category of rings $\\trm{Rng}$, with ring homomorphisms as morphisms (note this is a proper subcategory of $\\trm{Abel}$), with its prominent subcategory $\\trm{CRng}$, the category of commutative rings and unital rings $\\trm{URng}$.\n%\\index{Index}{category!of rings}\n%\\index{Index}{category!of rings!commutative}\n%\\index{Index}{category!of rings!unital}\n%\\item the category of differential manifolds, with smooth maps as morphisms,\n%\\index{Index}{category!of differential manifolds}\n%\\item the category of topological spaces $\\trm{Top}$, with continuous maps as morphisms,\n%\\index{Index}{category!of topological spaces}\n%\\en\n%We note, that in general there is no concept of union, products etc. - that is the union or product of to objects in the same category does not necessarily constitute an other object in same category, respectively. Therefore, we avoid talking about sets of certain mathematical objects (rather classes).\n%\\begin{defi}[Covariant and contravariant functors]\n%Let $\\mathcal{C}$ and $\\mathcal{D}$ be two categories with morphisms $\\mathcal{M_C}$ and $\\mathcal{M_D}$, respectively. 
A (co/contravariant) functor $F$ is a pairing of $\\mathcal{C}$ and $\\mathcal{D}$ such that\n%\\bd\n%\\item[covariant]\n%\\bn\n%\\item for all $X \\in \\mathcal{C}$ the image $F(X)$ is an object in $\\mathcal{D}$,\n%\\item for all morphisms $f : X \\longrightarrow Y$, $X, Y \\in \\mathcal{C}$, the map $F(f) : F(X) \\longrightarrow F(Y)$ is a morphism of $\\mathcal{D}$\n%\\item $F(id_X) = id_{F(X)}$.\n%\\en\n%\\item[contravariant] as in covariant except:\n%\\bn\n%\\item for all morphisms $f : X \\longrightarrow Y$, $X, Y \\in \\mathcal{C}$, the map $F(f) : F(Y) \\longrightarrow F(X)$ is a morphism of $\\mathcal{D}$, as well as\n%\\item $F(id_{F(X)}) = id_X$.\n%\\en\n%\\ed\n%\\index{Index}{functor!covariant}\n%\\index{Index}{functor!contravariant}\n%\\end{defi}\n%\\bsp Classical examples are\n%\\bn\n%\\item from main theorem of (classical Galois theory): let $L/K$ be a field extension\n%$$\\trm{Fix} : \\trm{Grp} \\longrightarrow \\trm{CRng},\\ H \\longmapsto L^H := \\{x \\in L : g x = x\\ \\forall g \\in H\\}$$\n%$$\\trm{Gal} : \\trm{CRng} \\longrightarrow \\trm{Grp},\\ M \\longmapsto \\{\\varphi \\in \\trm{Aut}_K(M) : \\varphi\\mid_K = id_K\\},$$\n%both define functors. Although, these functors are only defined on subcategories. In particular, the second functor is only well defined for normal separable fields $K \\subset M \\subset L$.\n%\\item in algebraic geometry, the (pre-) sheave defines a functor from the category of topolocial subspaces (of a variety - affine, projective, ...) to the category of associative algebras over a given ground field,\n%\\item furthermore, the so called forgetful functor: let $\\mathcal{C}$ be a subcategory of $\\mathcal{D}$, then $F : \\mathcal{C} \\longrightarrow \\mathcal{D}$ is called the forgetful functor (informally, we simply omit some of the structure maps from $\\mathcal{C}$), e.g\n%$$F : \\trm{Grp} \\longrightarrow \\trm{Set},$$\n%\\item the set of all automorphisms $\\trm{Aut}$ on a given algebraic category (e.g. $\\trm{Vec}$, $\\trm{Grp}$, $\\trm{Rng}$,...) is also a functor - assigning to an object $X$ the set of all structure preserving bijections $(X,\\phi) \\longrightarrow (X,\\phi)$, i.e.\n%$$\\trm{Aut} : \\trm{AlgCat} \\longrightarrow \\trm{Grp}.$$\n%More generally, any structure preserving bijection (i.e. each map having a two-sided inverse which still preserves $\\phi$) defines the $\\trm{Aut}$ functor on their respective category.\n%\\en\n%\\begin{defi}\n%For a given category $\\mathcal{C}$ we construct a new category $\\mathcal{C}^{\\trm{op}}$ by simply reversing arrows and composition order of morphisms:\n%$$f, g \\in \\mathcal{M_C},\\ g \\circ f \\in \\mathcal{M_C} \\RA f^{\\trm{op}}, g^{\\trm{op}} \\in \\mathcal{M}_{\\mathcal{C}^{\\trm{op}}},\\  (g \\circ f)^{\\trm{op}} := f^{\\trm{op}} \\circ g^{\\trm{op}} \\in \\mathcal{M}_{\\mathcal{C}^{\\trm{op}}}.$$\n%We call $\\mathcal{C}^{\\trm{op}}$ the opposite or dual category of $\\mathcal{C}$.\n%\\index{Index}{category!dual}\n%\\end{defi}\n%We already encountered the opposite category in case of $C$-coalgebras and $C$-algebras (i.e. coalgebras are dual to the algebras).\n%\\begin{defi}\n%We call a category small if it defines an actual set. Otherwise it is called a large category. In addition, we call a category $\\mathcal{C}$ a category with initial object, if there is an object $I$ in $\\mathcal{C}$ such that for all $X$ in $\\mathcal{C}$ there is exactly one morphism $\\iota : I \\longrightarrow X$. 
The dual notion is the category with terminal object: if there is an object $T$ in $\\mathcal{C}$ such that for every object $X$ in $\\mathcal{C}$ there is one morphism $\\tau : X \\longrightarrow T$.\n%\\index{Index}{category!small}\n%\\index{Index}{category!large}\n%\\index{Index}{category!with initial object}\n%\\index{Index}{category!with terminal object}\n%\\end{defi}\n%\\bsp To illustrate the last definitions:\n%\\bn\n%\\item $\\trm{Set}$ is a large category.\n%\\item Usually the class of morphisms for a given category $\\mathcal{C}$ is also large. However, some counter examples are for instance module homomorphisms.\n%\\item The category of unital rings is a category with initial object $(\\zz,+,\\cdot,1)$.\n%\\item Dually, the category of schemes over unital rings is a category with terminal object (by duality: $\\trm{Spec}(\\zz)$).\n%\\en\n%\\begin{defi}\n%A natural transformation $\\zeta$ for two given categories $\\mathcal{C}$, $\\mathcal{D}$ and two functors $F, G$ between the two categories is a family of morphisms such that\n%\\bn\n%\\item it assigns to each object $X \\in \\mathcal{C}$ a morphism $\\zeta_X : F(X) \\longrightarrow G(X)$ and\n%\\item for each morphism $f : X \\longrightarrow Y$ in $\\mathcal{C}$ we have\n%$$\\bao{cc}\n%\\xymatrix{\n%F(X) \\ar[r]^{F(f)} \\ar[d]_{\\zeta_X} & F(Y)\\ar[d]^{\\zeta_Y}\\\\\n%G(X) \\ar[r]_{G(f)} & G(Y)\\\\\n%} & \n%\\xymatrix{\n%F(Y) \\ar[r]^{F(f)} \\ar[d]_{\\zeta_Y} & F(X)\\ar[d]^{\\zeta_X}\\\\\n%G(Y) \\ar[r]_{G(f)} & G(X)\\\\\n%}\\\\\n%\\trm{covariant} & \\trm{contravariant},\\\\\n%\\ea$$\n%where covariant stands for covariant functors $F, G$ and contravariant stands for the other case.\n%\\index{Index}{natural transformation}\n%\\en\n%\\end{defi}\n%Let $\\mathcal{C}$ be a category. For two objects $A, B$ in $\\mathcal{C}$ we denote the class of morphisms $\\mathcal{M}_{\\mathcal{C}}(A,B)$ simply by $\\trm{Hom}_{\\mathcal{C}}(A,B)$ or $\\trm{Hom}(A,B)$ if there is no ambiguity. Every object $A \\in \\mathcal{C}$ defines a functor $F_A := \\trm{Hom}(A,\\_) : \\mathcal{C} \\longrightarrow \\trm{Set}, B \\longmapsto \\trm{Hom}(A,B)$ such that for all maps $\\varphi : B \\longrightarrow B'$:\n%$$F_A(\\varphi)(f) := \\varphi \\circ f,\\ \\forall f \\in F_A(B).$$\n%Each morphism $\\varphi : A' \\longrightarrow A$ in $\\mathcal{C}$ defines a map $f \\longmapsto f \\circ \\varphi : F_A(B) \\longrightarrow F_{A'}(B)$ being natural wrt $B$, i.e. this map is a natural transformation. In particular, the pairing $A \\longmapsto F_A$ is a contravariant functor.\n%\\begin{defi}\n%A functor $F : \\mathcal{C} \\longrightarrow \\trm{Set}$ is called representable if it is isomorphic to $F_A$ for some $A \\in \\mathcal{C}$.\n%\\index{Index}{functor!representable}\n%\\end{defi}\n%\\subsubsection{Direct products and coproducts}\n% %From now on, let $\\mathcal{C}$ is a category with finite products.\n%Given a category $\\mathcal{C}$ and an index set $I$ we define\n%\\begin{defi}[direct product]\n%for a family of objects $\\{X_i : i \\in I\\}$ in $\\mathcal{C}$ the direct product $X$ to be the object in $\\mathcal{C}$ such that for each canonical projection $\\pi_i : X \\longrightarrow X_i$, $i \\in I$ and an indexed family of morphisms $f_i : Y \\longrightarrow X_i$ for all $i \\in I$ and $Y \\in \\mathcal{C}$, there is a unique morphism $f : Y \\longrightarrow X$ making\n%$$\\xymatrix{\n%Y \\ar[rd]_f\\ar[r]^{f_i}&X_i\\\\\n%&X\\ar[u]_{\\pi_i}\\\\\n%}$$\n%commutative. 
Sometimes the direct product is denoted by $\\prod_{i \\in I} X_i$.\n%\\index{Index}{product!direct}\n%\\end{defi}\n%The coproduct is simply:\n%\\begin{defi}[coproduct]\n%For a family of objects $\\{X_i : i \\in I\\}$ in $\\mathcal{C}$ the coproduct $X$ to be the object in $\\mathcal{C}$ such that for each (not necessarily injective) inclusion $\\iota_i : X_i \\longrightarrow X$ and a family of morphisms $f_i : X_i \\longrightarrow Y$ for all $i \\in I$ and $Y \\in \\mathcal{C}$ there exists a unique $f : X \\longrightarrow Y$ making\n%$$\\xymatrix{\n%Y &X_i\\ar[l]_{f_i}\\ar[d]^{\\iota_i}\\\\\n%&X\\ar[lu]^f\\\\\n%}$$\n%commutative. The coproduct $X$ is sometimes denoted by $\\coprod_{i \\in I} X_i$ or $\\bigoplus_{i \\in I} X_i$.\n%\\index{Index}{coproduct}\n%\\end{defi}\n%\\bmk For a finite index set $I$ both notions are equivalent. Consider for instance for any ring $R$ the left module $\\prod_{i \\leq n} R$ and $\\bigoplus_{i \\leq n} R$. However, if $I$ is not finite, then the coproduct is a strict subset of the direct product. Furthermore, the products and coproducts are only unique up to isomorphism (within their respective category).\n%\\begin{defi}\n%$\\mathcal{C}$ is called a category with finite products, if for any finite subcategory of $\\mathcal{C}$ its coproduct is in $\\mathcal{C}$ and there exists a finite object in $\\mathcal{C}$, denoted by $*$ - called the empty product - and an isomorphism:\n%$$S \\times * \\simeq S \\simeq * \\times S$$\n%for all $S \\in \\mathcal{C}$.\n%\\index{Index}{category!with finite product}\n%\\index{Index}{category!empty object}\n%\\end{defi}\n%$\\trm{PSpc}$, the category of pointed spaces is an example of a category with finite products with one element sets as empty products.\n%\\subsubsection{Co-/limits, formal schemes and group schemes}\n%\n%To complete our defintions we need:\n%\\begin{defi} Let $\\mathcal{C}$ be a category.\n%\\bn\n%\\item A diagram of type $\\mathcal{J}$ is a functor $F : \\mathcal{J} \\longrightarrow \\mathcal{C}$, where $\\mathcal{J}$ is an index category and $F$ indexes objects and morphisms in $\\mathcal{C}$. A diagram $F$ of type $\\mathcal{J}$ is called small or finite if $\\mathcal{J}$ is a small or finite category.\n%\\item Let $F : \\mathcal{J} \\longrightarrow \\mathcal{C}$ be a diagram of type $\\mathcal{J}$. 
A cone to $F$ is an object $N$ in $\\mathcal{C}$ and a family of morphisms $\\psi_X : N \\longrightarrow F(X)$ indexed by $X$ in $\\mathcal{J}$ such that for all morphisms $f : X \\longrightarrow Y$ in $\\mathcal{J}$ we get\n%$$F(f) \\circ \\psi_X = \\psi_Y.$$\n%A cone is denoted by $(N,\\psi)$.\n%\\item A co-cone is dual to cone: co-cone of a diagram $F$ is an object $N$ in $\\mathcal{C}$ and family of morphisms $\\psi_X : F(X) \\longrightarrow N$ for every $X$ in $\\mathcal{J}$ such that for any morphism $f : X \\longrightarrow Y$ in $\\mathcal{J}$ we have: $\\psi_X \\circ F(f) = \\psi_Y$.\n%The pair $(N,\\phi)$ denotes the co-cone.\n%\\item A limit of a diagram of type $\\mathcal{J}$ is a cone $(L,\\phi)$ of $F$ such that for any other cone $(N,\\psi)$ of $F$ there exists a unique morphism $u : N \\longrightarrow L$ such that following diagram commutes:\n%$$\\xymatrix{\n%& N \\ar[ldd]_{\\psi_X} \\ar[d]^u \\ar[rdd]^{\\psi_Y}&\\\\\n%&L\\ar[ld]^{\\phi_X} \\ar[rd]_{\\phi_Y}&\\\\\n%F(X) \\ar[rr]_{F(f)} & & F(Y)\\\\\n%}$$\n%\\item A colimit of a diagram $F$ is a co-cone $(L,\\phi)$ of $F$ such that for any other co-cone $(N,\\psi)$ of $F$ there is a unique morphism $u : N \\longrightarrow L$ such that the following diagram commutes:\n%$$\\xymatrix{\n%F(X) \\ar[rd]^{\\phi_X}\\ar[rdd]_{\\psi_X}\\ar[rr]^{F(f)} && F(Y)\\ar[ld]_{\\phi_Y}\\ar[ldd]^{\\psi_Y}\\\\\n%&L\\ar[d]_u&\\\\\n%&N&\\\\\n%}$$\n%\\en\n%\\index{Index}{diagram}\n%\\index{Index}{cone}\n%\\index{Index}{cocone}\n%\\index{Index}{limit}\n%\\index{Index}{colimit}\n%\\end{defi}\n%\\bmk A cone $(N,\\psi)$ of a diagram $F : \\mathcal{J} \\longmapsto \\mathcal{C}$ is characterized by the following commutative diagram:\n%$$\\xymatrix{\n% & F(X) \\ar[dd]^{F(f)} & X \\ar[l]_F \\ar[dd]^f\\\\ \n%N \\ar[ru]^{\\psi_X} \\ar[rd]_{\\psi_Y}& &\\\\\n%& F(Y) & Y\\ar[l]^F.\\\\\n%}$$\n%A co-cone $(N,\\phi)$ of a diagram $F : \\mathcal{J} \\longrightarrow \\mathcal{C}$ is characterized by the following commutative diagram:\n%$$\\xymatrix{\n% & F(X) \\ar[ld]_{\\phi_X}\\ar[dd]^{F(f)} & X \\ar[l]_F \\ar[dd]^f\\\\ \n%N & &\\\\\n%& F(Y) \\ar[lu]^{\\phi_Y}& Y\\ar[l]^F.\\\\\n%}$$\n%\\subsubsection{Monoidal and group category}\n%Let $\\mathcal{C}$ is a category with finite products.\n%\\begin{defi}\n%A monoid $G$ in $\\mathcal{C}$ is a triple $(G, m, e)$, with operation morphism $m : G \\times G \\longrightarrow G$ and unit morphism $e : * \\longrightarrow G$ satisfying the following commutative diagrams:\n%\\bd\n%\\item[associativity] $$\\xymatrix{\n%G \\times G \\times G \\ar[rr]^{id_G \\times m}\\ar[d]_{m \\times id_G} && G\\times G\\ar[d]^m\\\\\n%G \\times G \\ar[rr]_m && G,\\\\\n%}$$\n%\\item[unit] $$\\xymatrix{\n% & G \\ar[ld]_{\\simeq} \\ar[rd]^{\\simeq} & \\\\\n% \\ast \\times G\\ar[d]_{e \\times id_G} & & G \\times \\ast \\ar[d]^{id_G \\times e}\\\\\n% G \\times G \\ar[rd]_{m} & & G \\times G \\ar[ld]^m\\\\\n% & G &\\\\\n%}$$\n%\\ed\n%We denote by $\\trm{Mon}$ the monoidal category.\n%\\index{Index}{category!monoidal}\n%\\end{defi}\n%\\bmk Compare the two diagrams with the definition of associative unital algebras over some ring $R$ (replacing direct products with tensor products, then here the empty product is indeed $* = R$ and unit $\\eta = e$).\\\\\n%For each monoid $G$ we define the opposite monoid $G^{\\trm{op}}$ being the same set with operation $(g, h) \\longmapsto h g$.\n%\\begin{defi}\n%A group $G$ in $\\mathcal{C}$ is a quadruple $(G, m, e, S)$ being a monoid in $\\mathcal{C}$ and $S : G \\longrightarrow G$ defines a commuting 
diagram:\n%$$\\xymatrix{\n%&G\\ar[ld]_\\Delta\\ar[rd]^\\Delta&\\\\\n%G \\times G \\ar[rd]^{S \\times id}\\ar[dd]_{\\pi \\times \\pi} && G \\times G\\ar[ld]_{id \\times S}\\ar[dd]^{\\pi \\times \\pi}\\\\\n%& G \\times G \\ar[d]_m &\\\\\n%\\ast \\ar[r]_e & G & \\ast\\ar[l]^e,\\\\\n%}$$\n%where $\\Delta : G \\longrightarrow G \\times G$ is the diagonal map, $f_1 \\times f_2 = \\left[(g,h) \\longmapsto \\left(f_1(g),f_2(h)\\right)\\right]$ and $\\pi : G \\longrightarrow \\ast \\simeq G/G$ is the trivial projection, such that\n%$$S : G \\longrightarrow G^{\\trm{op}}$$\n%is a group homomorphism.\n%\\index{Index}{category!of groups}\n%\\end{defi}\n%\\subsubsection{Schemes}\n%Now, we introduce some important notations in the field of algebraic geometry. Let $\\trm{UCRng}$ denote the category of unital commutative rings, $\\trm{Mod}_R$ the category of $R$-modules over $R$ in $\\trm{UCRng}$ and $\\trm{Mod}_R \\cap R$ the subcategory of $R$-submodules in $R$ (i.e. ideals). We have\n%\\begin{defi}[Spectrum of ring]\n%The functor\n%$$\\trm{Spec} : \\trm{UCRng} \\longrightarrow \\trm{Set},\\ R \\longmapsto \\{\\mathfrak{p} \\in \\trm{Mod}_R \\cap R :  \\trm{Ann}(R/\\mathfrak{p}) = 0\\}$$\n%assigns to each commutative unital ring $R$ its spectrum, i.e. the set of its prime ideals.\n%\\index{Index}{spectrum}\n%\\end{defi}\n%\\bsp We have:\n%\\bn\n%\\item $\\spec \\zz = \\left\\{(p) : p \\trm{~prime~number}\\right\\} \\cup \\{0\\}$,\n%\\item for $n \\geq 2$, $\\spec \\zz_n = \\left\\{(\\ov{p}) : p \\mid n \\wedge p \\trm{~prime}\\right\\} \\cup \\{\\ov{0}\\}$, in particular the first subset may be empty if $n$ is prime,\n%\\item $\\spec \\zz_2[X]$:\n%$$\\left\\{\\left<f = \\sum_{i \\leq n} f_i X^i\\right> :\\exists m \\in \\nz,\\ X^m \\mid (f + \\ov{1}) \\wedge |\\{f_i : f_i \\neq \\ov{0}\\}| \\in 2 \\nz + 1\\right\\} \\cup \\left\\{\\left<\\ov{0}\\right>, \\left<X\\right>, \\left<X + \\ov{1}\\right>\\right\\},$$ in words:\n%all polynomials $f$ with odd non-zero coefficients and $f \\equiv 1 \\mod X^m$ for some $m \\geq 1$, zero and the two polynomials of degree one.\n%\\item in general, for any field $k$ we have\n%$$\\spec k[X] = \\{(f) : f \\trm{~irreducible}\\} \\cup \\{0\\},$$\n%in case of algebraic closeness, $\\spec k[X] = \\{\\left<X - a\\right> : a \\in k\\} \\cup \\{\\left<0\\right>\\}$.\n%\\item $R$ integral domain, iff $\\{0\\} \\in \\spec R$,\n%\\item $R$ a field, iff $\\{0\\} = \\spec R$.\n%\\en\n%Note, that the non-unitary ring $R := \\left(\\bao{cc} 0 & k\\\\0 & 0\\\\\\ea\\right)$ for some field $k$ has no prime ideal (but a maximal ideal (!)). Thus, unitality ensures the existence of prime ideals. 
% However, let us recall some further definitions.\n%%\\begin{defi}\n%%Let $R = k$ be a algebraically closed field and identify the affine space $A_k^n$ with the coordinate space.\n%%\\bn\n%%\\item For some $S \\subset k[X]$, the set $Z(S) = \\{x \\in k^n : f(x) = 0 \\forall f \\in S\\}$ is called the zero set or algebraic set of $S$.\n%%\\item For each $Z \\subset k^n$ an algebraic set, we have the ideal $I(Z) := \\{f \\in k[X] : f(x) = 0 \\forall x \\in Z\\}$ and call $A(Z) := k[X]/I(Z)$ the coordinate ring of $Z$.\n%%\\item If $A(Z)$ is an integral domain, we call $Z$ an affine variety.\n%%\\item For $k^{n+1}$ we define projective space $\\prjn_k$ the over $k$ as the quotient\n%%$$\\prjn_k := \\left(k^{n+1} \\bsl \\{0\\}\\right)^2 /\\sim,\\ \\sim := \\left\\{(x,y) \\in \\left(k^{n+1} \\bsl \\{0\\}\\right)^2 : \\exists \\lambda \\in k^\\times, y = \\lambda x\\right\\}.$$\n%%\\item A polynomial is called homogeneous of degree $n$, if we have\n%%$$f = \\sum_{|\\alpha| = n} f_\\alpha X_\\alpha, f_\\alpha \\in k, X_\\alpha = \\prod_{i \\leq k} X^{s_i}_{\\alpha_i}\\ \\trm{and}\\ \\sum s_i = n.$$\n%%An ideal is called homogeneous, if it is generated by homogeneous polynomials and a projective algebraic set is simply the zero set of a homogeneous polynomial $f$, i.e. $Z(f) := \\{x \\in \\prjn_k : f(x) = 0\\}$. A projective variety is simply the zero set of homogeneous prime ideal.\n%%\\item Let $Z \\subset k^n$ be an affine variety and $f \\in k[X]$ some polynomial such that $Z(f) \\nsubset Z$. We call $U_f := Z\\bsl Z(f)$ the principal open sets of $Z$.\n%%\\item For some affine variety $Z$, the structure presheaf is defined as the functor $\\trm{Top} \\longrightarrow \\trm{Alg}$ with\n%%$$\\bao{rrcl}\n%%\\mathcal{O}_Z :& \\{U : U \\subset Z\\trm{~open}\\} &\\longrightarrow &\\mathcal{O}_Z(U) := S^{-1}A(Z),\\\\\n%%&&&\\\\\n%%&S &:=& \\{f \\in A(Z) : f(x) \\neq 0 \\forall x \\in U\\}\\\\\n%%\\ea$$\n%%assigning to every open set $U$ the ring of regular functions on $U$ with the following condition:\n%%\\bn\n%%\\item $\\mathcal{O}_Z(\\emptyset) = \\{0\\}$,\n%%\\item $\\mathcal{O}_Z\\mid_{V}(U) = \\mathcal{O}_Z(V)$ for all $V \\subset U$ open,\n%%\\item $\\mathcal{O}_Z\\mid_W \\circ \\mathcal{O}_Z\\mid_V = \\mathcal{O}_W$ for all $W \\subset V \\subset U$ open.\n%%\\en\n%%\\item We call a (structure) presheaf of an affine variety $X$ a sheaf, if there is the following glueing property. Let $\\mathcal{U}$ be an open cover of some open $U \\subset X$. For all $V, W \\in \\mathcal{U}$ and $f_V \\in \\mathcal{O}_X(V), f_W \\in \\mathcal{O}_X(W)$ such that $f_V\\mid_{V \\cap W} = f_W\\mid_{V \\cap W}$ then there exists a unique $f \\in \\mathcal{O}_X(U)$ such that $f\\mid_V = f_V$ and $f\\mid_W = f_W$.\n%%\\item A ringed space for some topological space $X$ consists of the pair $(X, \\mathcal{O}_X)$, where $\\mathcal{O}_X$ is the structure sheaf of $X$.\n%%\\en\n%%\\end{defi}\n%%First, we note that our definition of algebraic sets induces a topology on $k^n$, the so called Zariski-topology by simply putting our algebraic sets as closed subsets of $k^n$. In particular, any affine variety defines an irreducible topological space. The definition of projective sheafs is omitted but the interested reader may consult \\cite{Hart}\\\\\n%%Furthermore, we may reformulate affine varieties as follows:\n%We are omitting the definitions of presheafs, sheafs and ringed spaces and refer the interested reader to \\cite{Hart}.\n%%\\begin{defi}\n%%Let $(X, \\mathcal{O}_X)$ be a ringed space. 
We call $X$ a scheme, if for any open cover $\\mathcal{U}$ of $X$ and $U \\in \\mathcal{U}$ the ringed space $(U, \\mathcal{O}_X\\mid_{U})$ is isomorphic to some affine scheme.\n%%\\end{defi}\n%\\begin{defi}\n%For some ideal $I \\subset R$, the affine scheme $X_I$ is the set of all prime ideals $\\mathfrak{p} \\supset I$. \n%%$$X_I :=  \\{\\mathfrak{p} \\in \\tmr{Spec}(R) : a \\notin \\mthfrak{p}\\}$\n%A principal open set $U(a)$, for some $a \\in R\\bsl R^\\times$, is $\\{\\mathfrak{p} \\in \\trm{Spec} R : a \\notin \\mathfrak{p}\\}$. A scheme $X$ is simply some ringed space $(X,\\mathcal{O}_X)$ such that for every open cover $\\mathcal{U}$ of $X$ the restrictions $\\mathcal{O}_X\\mid_U$ as ringed spaces $(U, \\mathcal{O}_X\\mid_U)$ define affine schemes for all $U \\in \\mathcal{U}$.\n%\\index{Index}{scheme}\n%\\index{Index}{scheme!affine}\n%\\index{Index}{Set!principal open}\n%\\end{defi}\n%In short, a scheme is a (topological) space with structure sheaf $\\mathcal{O}_X$ that is locally isomorphic to some affine variety.\n%\\bsp For any ring $R$ in $\\trm{UCRng}$:\n%$$S^{-1}_{\\mathfrak{p}}(R),\\ S_{\\mathfrak{p}} = R \\bsl \\mathfrak{p},$$\n%the pair $(\\trm{Spec}(R), S^{-1})$ is a scheme.% Here in obuse of notation, we use the multiplative system $S_{\\mathfrak{p}}$ as a functor $\\trm{Set} \\longmapsto \\trm{Mon}$.\n%\\begin{lemm}\n%The following statements are equivalent:\n%\\bn\n%\\item the ringed space $(X, \\mathcal{O}_X)$ is an affine variety,\n%\\item and:\n%\\bn\n%\\item $X$ is an irreducible topological space,\n%\\item $\\mathcal{O}_X$ is a structure sheaf,\n%\\item $X$ is isomorphic to an affine variety.\n%\\en\n%\\en\n%\\end{lemm}\n%A proof can be found in \\cite{Hart}.\n%\\begin{defi}\n%A formal scheme is a functor $X : \\trm{CRng} \\longrightarrow \\trm{Set}$, that is a small filtered colimit of affine schemes. Its category is denoted by $\\trm{FSch}$ - its morphisms are natural transformations. Given a scheme $S$, we define the formal schemes over $S$ as follows, all objects are morphisms $X \\longrightarrow S$ of formal schemes and as morphisms between $X \\longrightarrow S$ and $Y \\longrightarrow S$ all morphisms $X \\longrightarrow Y$, such that\n%$$\\xymatrix{\n%X \\ar[r] \\ar[rd]& Y\\ar[d]\\\\\n%&S\\\\\n%}$$\n%commutes. We denote the formal schemes over $S$ with $\\trm{FSch}_S$.\n%\\index{Index}{scheme!formal scheme}\n%\\end{defi}\n%\\bmk According to \\cite{Strickl} a formal scheme to $\\mathcal{X}$ is as follows: given a small filtered category $\\mathcal{J}$ and a functor $i \\longmapsto X_{i}$ from $\\mathcal{J}$ to $\\mathcal{X} = \\{\\trm{CRng}, \\trm{Set}\\}$ such that\n%$$X = \\lim_{\\substack{\\longrightarrow\\\\i}} X_{i} \\in \\mathcal{X}$$\n%or equivalently $X(R) = \\lim_{\\substack{\\longrightarrow\\\\i}} X_{i}(R)$ for all $R$.\n%\\bsp Two examples to illustrate the definition of formal schemes (following \\cite{Strickl}):\n%\\bn\n%\\item Let $R$ be an object in $\\trm{CRng}$ with unit and $N(R)$ denote its nilradical. The functors\n%$$\\hat{\\mathbb{A}}_n = \\left[R \\longmapsto N(R)^n\\right]$$\n%are a prominent example.\n%\\item Let $X$ be some scheme and $Y = V(I)$ a closed subscheme then\n%$$X_{\\hat{Y}} := \\lim_{\\substack{\\longrightarrow\\\\N}} V(I^N)$$\n%defines a formal scheme.\n%\\en\n%%An scheme is simply a ringed space $(Z,\\mathcal{O}_Z)$ such that for ... 
open cover $\\mathcal{U}$ the restriction $\\mathcal{O}_Z\\mid_U$ is an affine scheme for all $U \\in \\mathcal{U}$.\n%\\begin{defi}\n%A group scheme is a scheme $X = X(G)$ which has a group structure $G$ as well, i.e. a functor $m : X \\times X \\longrightarrow X$. A principal homogeneous space - or torsor - for a given group (group scheme) $G$ is a pair $(G,X)$, with $X$ some set, such that the map\n%$$\\alpha : (X, G) \\longrightarrow (X, X),\\ (x, g) \\longmapsto (x, g x)$$\n%is a bijection.\n%\\index{Index}{group scheme}\n%\\index{Index}{space!principal homogeneous}\n%\\index{Index}{torsor}\n%\\end{defi}\n%\\begin{koro}\n%The following statements are equivalent.\n%\\bn\n%\\item $(G, X)$ is a principle homogeneous space.\n%\\item The action $\\alpha : G \\times X \\longrightarrow X$ is transitive on $X$ and its stabilizer $\\trm{Stab}_G(X)$ is trivial.\n%\\en\n%\\end{koro} \\bws The proof is simply the application of the above definitions.\n%\n%%Again, our notation, not incidentally, resembles the notation of Hopf-algebras. Now an important classification of affine groups:\n%\\begin{defi}\n%An affine group is a representable functor $G : \\trm{Alg} \\longrightarrow \\trm{Set}$ with natural transformation $\\mu : G \\times G \\longrightarrow G$ such that $(G(A),\\mu(A),e(A))$ is a group for all $A \\in \\trm{Alg}$. $G$ is called affine algebraic group if $G$ is represented by a finite presented algebra $A$.\n%\\index{Index}{affine group}\n%\\index{Index}{affine algebraic group}\n%\\end{defi}\n%Formal group laws will be discussed in a later section.\n%\\newpage\n\\subsection{Topological basics}\nWe take some notions from topology as given.\n\\subsubsection{Basis and neighborhood basis}\n\\begin{defi}\nLet $(X, \\tau)$ be a topological space.\n\\bd\n%\\item[Filter] A subset $F \\subset \\tau$ is called a filter if:\n%\\bn\n%\\item for all $A, B \\in F$, we have $A \\cap B \\in F$,\n%\\item the empty set $\\emptyset$ is not in $F$,\n%\\item if $A \\in F$ and $A \\subset B$, then $B \\in F$ for all $B \\subset X$.\n%\\en\n\\item[Basis] A subset $\\beta \\subset \\tau$ is called an open neighborhood basis for $x \\in X$ if:% ein topologischer Raum. 
\n\bn\n\item for every open $V \subset X$ with $x \in V$ there is a $U \in \beta$ open in $X$ such that $x \in U \subset V$,\n\item for all $U, V \in \beta$ we have $x \in U \cap V$.\n\en\nA subset $\beta \subset \tau$ is called an (open) basis of $X$ if\n\bn\n\item all $U \in \beta$ are open in $X$,\n\item every open subset of $X$ is a union of elements in $\beta$.\n\en\n\ed\nA neighborhood basis $\beta$ of $x \in X$ is called a fundamental basis of $x \in X$ if every open neighborhood $U$ of $x$ contains a finite intersection of elements of $\beta$.\n\index{Index}{filter}\n\index{Index}{basis}\n\index{Index}{basis!neighborhood}\n\index{Index}{basis!fundamental}\n\end{defi}\n\subsubsection{Linear topological rings}\nA topological ring $R$ is a ring carrying a topology $\tau_R$ such that addition and multiplication are continuous with respect to the product topology:\n$$+, \cdot \in C(R\times R,R).$$\n\index{Index}{topological ring}\nThe set of ideals in $R$ is a neighborhood basis of $0 \in R$. In general, unions of ideals are not ideals but are contained in larger ideals (e.g. $I \cup J$ is in general not an ideal but is contained in the sum $I + J$).\n\begin{defi}[Linear topological rings]\nA topological ring $R$ is called linear if there is a fundamental neighborhood basis $\beta$ of $0 \in R$.\n\index{Index}{topological ring!linear}\n\end{defi}\nIf $R$ is a linear topological ring with fundamental neighborhood basis $\beta(0)$, then any open neighborhood of zero contains at least one ideal, trivially at least $(0)$, since the intersection of arbitrary ideals is again an ideal. Let $\{I_i \in \beta(0) : i \in \mathcal{I}\}$ be a system of ideals; then the union $\bigcup_{i \in \mathcal{I}} I_i$ is a subset of $R$ containing all finite intersections\n$$\bigcap_{i \in \mathcal{I}'} I_i,\ \mathcal{I}' \subset \mathcal{I},\ |\mathcal{I}'| < \infty,$$\nand hence defines an open neighborhood of zero; on the other hand, these finite intersections are themselves neighborhoods of zero. Consequently, $\beta(0)$ is an open neighborhood basis for zero whose elements are intersection stable. Since an open ideal is also closed (its complement is a union of cosets), we get a clopen basis.
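\nFor a concrete illustration (a sketch, anticipating the $p$-adic example below): take $R = \zz$ with $\beta(0) = \{(p^n) : n \geq 1\}$ for a prime $p$. This is a fundamental neighborhood basis of zero, and each ideal $(p^n)$ is indeed clopen, since its complement is the union of the open cosets\n$$a + (p^n),\ a \in \zz \setminus (p^n).$$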
\n\begin{defi}\nLet $R$ be a linear topological ring with fundamental neighborhood basis of zero $\beta(0)$. $\hat{R}$ is called the completion of $R$ if\n$$\hat{R} \simeq \lim_{\substack{\longleftarrow\\I \in \beta(0)}} R/I,$$\ni.e. the profinite (inverse) limit of $(R/I)_{I \in \beta(0)}$, with ring homomorphisms $R/I \longrightarrow R/J$ for all $I \subset J$ and $\beta(0)$ ordered with respect to inclusion.\n\index{Index}{topological ring!completion of}\n\end{defi}\n\bmk In terms of co/limits, the completion of a linear topological ring $R$ with neighborhood basis of zero $\beta(0)$ is simply the limit of the diagram $(R/I)_{I \in \beta(0)}$.\n\bsp Consider $(0) \neq \idealp = (p) \subset \zz$ and $S := \prod_{n \geq 1} \zz/\idealp^n$ with component-wise ring operations and the inclusion map:\n$$\zz \longrightarrow S, z \longmapsto (z \mod p^n)_{n\geq1}.$$\nFurthermore, there are projections $\pi_{ij} : \zz/\idealp^i \longrightarrow \zz/\idealp^j, x \mod p^i \longmapsto x \mod p^j$ for all $i \geq j$. Clearly, $\beta(0) := \{\idealp^n : n \geq 1\}$ defines a neighborhood system of zero (all intersections contain the zero ideal) and via inclusion a partial order on $\beta(0)$ (in this case even total).\n%\subsection{Commutative algebra}\n%\n%\subsubsection{Invariant and equivariant rings}\n%Given a finite family of polynomials $\mathcal{F} \subset k[x]$, its invariant group is defined to be\n%the set of all $k$-linear maps $\varphi : k[x] \longrightarrow k[x]$ such that $f \circ \varphi = f$ for all $f \in \mathcal{F}$. Conversely:\n%\begin{defi}\n%the invariant ring of a given group $G$ is the set of all polynomials in $k[x]$ such that $f g = f$ for all $g \in G$. This ring is denoted by\n%$$k[x]^G.$$\n%\end{defi}\n%Since the invariant group only acts on the monomials of degree $\geq 1$ we have $g\mid_{k.1_{k[x]}} = id_{k.1_{k[x]}}$. For $n \in \nz$ the most prominent example is the ring of symmetric polynomials:\n%$$k[x]^{S_n} = k[s_1,\ldots,s_n],$$\n%with $s_i = \sum_{\substack{\alpha \in \nz_0^n\\|\alpha| = i}} x^\alpha$ the symmetric polynomials. They are of utmost importance in classical Galois theory.\n%\begin{defi}\n%For a given finite family of polynomials $\mathcal{F}$ we call the set of all $k$-linear maps $\varphi$ commuting with all $f \in \mathcal{F}$ the equivariant group of $\mathcal{F}$. Conversely, for a given group $G$ the set of all polynomials in $k[x]$ commuting with all $g \in G$ is called the equivariant ring of $G$ and is denoted by\n%$$k[x]^G_G.$$\n%\end{defi}\n%\bsp Consider $\varphi = [x^i \longmapsto (-x)^i] \in \trm{End}(k[x])$, with $n = 1$ for all $i \geq 0$. 
The invariant ring is obviously $k[x^2]$.\n\n%Although rather complicated, we simply recall our definition of linear topological rings $(R,+,\\cdot,1,\\tau)$, where the completion, in the above sense, is a colimit, indexed via some neighborhood basis $\\beta(0)$ of zero (also filtered, as all ideals contain zero ideal):\n%\\bd\n%\\item[Index category] all neighborhood basis of zero:\n%$$\\mathcal{J} \\subset \\trm{Mod}_R(R) \\subset \\trm{Mod}_R$$\n%clearly form a sub-category of all $R$-submodules. \n%\\item[Diagram to $F$]  \n%\\item[Co-cone to $F$] \n%\\ed\n%\\newpage\n%\\subsection{Formal group laws}\n%Here we follow \\cite{Strickl}.\n%\\begin{defi}\n%Let $C$ be a commutative ring and $n \\in \\nz$. An $n$-dimensional formal group law $F$ is a formal power series in $C[[x_1,\\ldots,x_n,y_1,\\ldots,y_n]]^n := C[[x,y]]^n$ such that\n%\\bn\n%\\item $F(0,x) = x \\in C[[x]]^n$,\n%\\item $F(x,y) = F(y,x) \\in C[[x,y]]^n$,\n%\\item $F(F(x,y),z) = F(x,F(y,z)) \\in C[[x,y,z]]^n$ and\n%\\item there is an map $m$ on $C[[x]]$ such that $F(m(x),x) = 0$.\n%\\en\n%\\index{Index}{formal group law}\n%\\end{defi}\n%\\bsp Some examples taken from Strickland 2011:\n%\\bn\n%\\item the map $F(x,y) = x + y \\in C[[x,y]]^n$ is called the $n$-dimensional formal additive group law, with $m = [x \\longmapsto -x]$,\n%\\item let $c \\in C$, the map $F(x,y) = x + y + c x y \\in C[[x,y]]$ is a 1-dimensional formal group law, with $m = [x \\longmapsto -x/(1 + c x)]$ (recall if the constant term of a power series is a unit, then the formal power series itself is a unit - hence $1 + c x$ is a unit),\n%\\item if $c \\in C^\\times$, then $F(x,y) = \\frac{x + y}{1 + \\frac{xy}{c^2}}$ is a formal group law, with\n%$m$ as in the first example. It is well-known in relativistic geometry - the so called Lorenz-FGL.\n%\\en\n%\\begin{lemm}\n%Let $F$ be an $n$-dimensional formal group law over some commutative ring $R$. 
There exists a $\\Psi \\in R[[x_1,\\ldots,x_n]]^n$ such that $\\Psi(0) = 0$ and\n%$$F(u, \\Psi(u)) = F(\\Psi(u),u) = 0.$$\n%\\end{lemm}\n%A proof is given in \\cite{Serr}.\n%\\bmk For any $n$-dimensional formal group law $F \\in R[[x,y]]^n$, we define a formal group scheme over $\\trm{Spec}(R)$ via\n%$$\\mathbf{F} : \\trm{CAlg}_R \\longrightarrow \\trm{Grp},\\ A \\longmapsto N(A)^n$$\n%where the operation is given via $(u,v) \\longmapsto F(u,v)$.", "meta": {"hexsha": "c6c4d01cb301b02092732ebf5cd5420251010d2a", "size": 41980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Script_Diff_Gal07/appendix.tex", "max_stars_repo_name": "gmuel/texlib", "max_stars_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Script_Diff_Gal07/appendix.tex", "max_issues_repo_name": "gmuel/texlib", "max_issues_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Script_Diff_Gal07/appendix.tex", "max_forks_repo_name": "gmuel/texlib", "max_forks_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.8080808081, "max_line_length": 970, "alphanum_fraction": 0.6945688423, "num_tokens": 13692, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.563704889170085}}
{"text": "\\chapter{Products in dagger categories with complete ordered Hom-sets}\n\n\\fxwarning{This is a rough draft. It is not yet checked for errors.}\n\n\\begin{note}\n  What I previously denoted $\\prod F$ is now denoted $\\bigodot^{\\text{proj}}_{\\sqcap} F$ (and\n  likewise for $\\mathord{\\coprod}$). The other draft chapters referring to\n  this chapter may be not yet updated.\n\\end{note}\n\n\\begin{prop}\n  ~\n  \\fxwarning{Should we move this to volume 1?}\n  \\begin{enumerate}\n    \\item Every entirely defined monovalued morphism is metamonovalued and metacomplete.\n    \\item Every surjective injective morphism is metainjective and co-metacomplete.\n  \\end{enumerate}\n\n\\end{prop}\n\n\\begin{proof}\nLet's prove the first (the second follows from duality):\n  \nLet $f$ be an entirely defined monovalued morphism.\n\n$\\left( \\bigsqcap G \\right) \\circ f \\sqsubseteq \\bigsqcap_{g \\in G} (g \\circ\nf)$ by monotonicity of composition.\n\nUsing the fact that $f$ is monovalued and entirely defined:\n\n$\\left( \\bigsqcap_{g \\in G} (g \\circ f) \\right) \\circ f^{\\dagger} \\sqsubseteq\n\\bigsqcap_{g \\in G} (g \\circ f \\circ f^{\\dagger}) \\sqsubseteq \\bigsqcap G$;\n\n$\\bigsqcap_{g \\in G} (g \\circ f) \\sqsubseteq \\left( \\bigsqcap_{g \\in G} (g\n\\circ f) \\right) \\circ f^{\\dagger} \\circ f \\sqsubseteq \\left( \\bigsqcap G\n\\right) \\circ f$.\n\nSo $\\left( \\bigsqcap G \\right) \\circ f = \\bigsqcap_{g \\in G} (g \\circ f)$.\n\nLet $f$ be a entirely defined monovalued morphism.\n\n$f \\circ \\left( \\bigsqcup G \\right) \\sqsupseteq \\bigsqcup_{g \\in G} (f \\circ\ng)$ by monotonicity of composition.\n\nUsing the fact that $f$ is entirely defined and monovalued:\n\n$f^{\\dagger} \\circ \\left( \\bigsqcup_{g \\in G} (f \\circ g) \\right) \\sqsupseteq\n\\bigsqcup_{g \\in G} (f^{\\dagger} \\circ f \\circ g) \\sqsupseteq \\bigsqcap G$;\n\n$\\bigsqcup_{g \\in G} (f \\circ g) \\sqsupseteq f \\circ f^{\\dagger} \\circ\n\\bigsqcup_{g \\in G} (f \\circ g) \\sqsupseteq f \\circ \\left( \\bigsqcup G\n\\right)$.\n\nSo $f \\circ \\left( \\bigsqcup G \\right) = \\bigsqcup_{g \\in G} (f \\circ g)$.\n\\end{proof}\n\n\\section{General product in partially ordered dagger category}\n\nTo understand the below better, you can restrict your imagination to the case\nwhen $\\mathcal{C}$ is the category $\\mathbf{Rel}$.\n\n\\subsection{Products}\n\nLet $\\mathcal{C}$ be a dagger category, each Hom-set of which is a complete\nlattice (having order agreed with the dagger).\n\nWe will designate some morphisms as \\emph{principal} and require that\nprincipal morphisms are both metacomplete and co-metacomplete. (For a\nparticular example of the category $\\mathbf{Rel}$, all morphisms are\nconsidered principal.)\n\nLet $\\prod^{(Q)} X$ be an object for each indexed family $X$ of objects.\n\nLet $\\pi$ be a partial function mapping elements $X \\in \\dom \\pi$ (which\nconsists of small indexed families of objects of $\\mathcal{C}$) to indexed\nfamilies $\\prod^{(Q)} X \\rightarrow X_i$ of principal morphisms (called\n\\emph{projections}) for every $i \\in \\dom X$.\n\nWe will denote particular projections as $\\pi^X_i$.\n\n\\begin{defn}\n  If $\\pi$ is defined at $\\lambda\n  j \\in n : \\Src F_j$ and $\\lambda j \\in n : \\Dst F_j$, then\n  \\[ \\bigodot^{\\text{proj}}_{\\sqcap} F = \\bigsqcap_{i \\in \\dom F} ((\\pi^{\\Dst \\circ\n   F_{}}_i)^{\\dagger} \\circ F_i \\circ \\pi^{\\Src \\circ F}_i) . 
\n\nIf $F_i:Y\to X_i$ for all~$i$ for some object~$Y$:\n\[\n  \prod^{\text{proj}}_{\sqcap} F = \bigsqcap_{i \in \dom F} ((\pi^{\Dst\circ F}_i)^{\dagger} \circ F_i) .\n\]\n\nIf $F_i:X_i\to Y$ for all~$i$ for some object~$Y$:\n\[\n  \coprod^{\text{proj}}_{\sqcup} F = \bigsqcup_{i \in \dom F} (F_i \circ \pi^{\Src\circ F}_i) .\n\]\n\n\begin{rem}\n  The morphisms\n  \begin{align*}\n  (\pi^{\Dst \circ F}_i)^{\dagger} \circ F_i \circ \pi^{\Src \circ F}_i \in \Hom \left(\n  \prod^{(Q)}_{j \in n} \Src F_j , \prod^{(Q)}_{j \in n} \Dst F_j\n  \right);\\\n  (\pi^{\Dst \circ F}_i)^{\dagger} \circ F_i \in \Hom \left(\n  Y , \prod^{(Q)}_{j \in n} \Dst F_j\n  \right);\\\nF_i \circ \pi^{\Src \circ F}_i \in \Hom \left(\n  \prod^{(Q)}_{j \in n} \Src F_j , Y \right)\n  \end{align*}\n  are properly defined and, for varying $i \in \dom F$, have the same\n  source and destination, thus the meets and the join in the formulas are\n  properly defined.\n\end{rem}\n\n\begin{rem}\n  Thus, for example,\n\begin{multline*}\n  F_0 \odot^{\text{proj}}_{\sqcap} F_1 = ((\pi^{(\Dst F_0 , \Dst\n  F_1)}_0)^{\dagger} \circ F_0 \circ \pi^{(\Src F_0 , \Src\n  F_1)}_0) \sqcap\\ ((\pi^{(\Dst F_0 , \Dst F_1)}_1)^{\dagger}\n  \circ F_1 \circ \pi^{(\Src F_0 , \Src F_1)}_1)\n\end{multline*}\n  that is, the product is defined by a purely algebraic formula.\n\end{rem}\n\n\begin{lem}\n$F\mapsto\bigsqcup_{i\in\dom F}\phi(F_i)$ for ordinal variadic~$F$ is infinitely\nassociative for any function~$\phi$ defined on all values~$F_i$.\n\end{lem}\n\n\begin{proof}\nI will denote $t(F)=\bigsqcup_{i\in\dom F}\phi(F_i)$. We need\nto prove:\n\begin{widedisorder}\n\item[$t(t\circ S)=t(\concat S)$]\n$t(\concat S)=\n\bigsqcup_{i\in\dom(\concat S)}\phi((\concat S)_i)=\n\bigsqcup_{i\in\dom(\uncurry(S))}\phi((\uncurry(S))_i)$.\n\n$t(t\circ S)=\n\bigsqcup_{i\in\dom S}\phi(tS_i)=\n\bigsqcup_{i\in\dom S}\bigsqcup_{j\in\dom S_i}\phi((S_i)_j)$.\n\nSo, obviously $t(t\circ S)=t(\concat S)$.\n\n\item[$t(\llbracket x\rrbracket)=x$] Obvious.\n\end{widedisorder}\n\end{proof}\n\n\begin{cor}\nAll three above defined products are infinitely associative\nfor ordinal variadic families~$F$.\n\end{cor}\n\n\begin{proof}\nAn obvious consequence, taking into account duality.\n\end{proof}
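\n\nAs a sanity check of the lemma (an illustration, with $\phi$ the identity function and $\bigsqcup = \bigcup$ on sets): for $S = \llbracket \llbracket A, B \rrbracket , \llbracket C \rrbracket \rrbracket$ we get\n\[ t(t \circ S) = (A \cup B) \cup C = A \cup B \cup C = t(\concat S) . \]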
\n\n\begin{prop}\n  $\bigodot^{\text{proj}}_{\sqcap} F = \max \setcond{ \Phi \in \Hom \left( \prod^{(Q)}_{j \in\n  n} \Src F_j , \prod^{(Q)}_{j \in n} \Dst F_j \right)\n  }{ \forall i \in n : \Phi \sqsubseteq (\pi^{\Dst\circ F}_i)^{\dagger} \circ F_i \circ \pi^{\Src\circ F}_i }$.\n\end{prop}\n\n\begin{proof}\n  By definition of meet on a complete lattice.\n\end{proof}\n\n\begin{thm}\n  Let $\pi^X_i$ be metamonovalued morphisms. Let~$I$ be an index set. If $S \in \subsets \prod_{i\in I}\Hom(A_i, B_i)$ for some objects $A_i$, $B_i$ where $i\in I$ then\n  \begin{gather*}\n    \bigsqcap_{f\in S} \bigodot^{\operatorname{proj}}_{\sqcap}f =\n    \bigodot_{i\in I}\bigsqcap_{f\in S}f_i=\n    \bigodot_{i\in I}\bigsqcap\Pr_i S;\\\n    \bigsqcap_{f\in S} \prod^{\operatorname{proj}}_{\sqcap} f =\n    \prod_{i\in I}\bigsqcap_{f\in S}f_i=\n    \prod_{i\in I}\bigsqcap\Pr_i S;\\\n    \bigsqcap_{f\in S} \coprod^{\operatorname{proj}}_{\sqcap} f =\n    \coprod_{i\in I}\bigsqcap_{f\in S}f_i=\n    \coprod_{i\in I}\bigsqcap\Pr_i S.\n  \end{gather*}\n\end{thm}\n\n\begin{proof}\nLet us consider for example the first formula (the two others\nare similar):\n  \begin{align*}\n  \bigsqcap_{f\in S} \bigodot^{\operatorname{proj}}_{\sqcap} f & = \\\n  \bigsqcap_{f\in S} \bigsqcap_{i\in I}((\pi^{\Dst\circ f}_i)^{\dagger} \circ f_i \circ \pi^{\Src\circ f}_i) & = \\\n  \bigsqcap_{i\in I}\bigsqcap_{f\in S}((\pi^{\Dst\circ f}_i)^{\dagger} \circ f_i \circ \pi^{\Src\circ f}_i) & = \\\n  \bigsqcap_{i\in I}((\pi^{\Dst\circ f}_i)^{\dagger} \circ \bigsqcap_{f\in S}f_i \circ \pi^{\Src\circ f}_i) & = \\\n  \bigodot_{i\in I}\bigsqcap_{f\in S}f_i & = \\\n  \bigodot_{i\in I}\bigsqcap\Pr_i S\n  \end{align*}\n(the third equality holds because the $\pi^X_i$ are metamonovalued).\n\end{proof}\n\n\begin{cor}\n~\n\begin{enumerate}\n\item\n  $(a_0 \odot^{\operatorname{proj}}_{\sqcap} b_0) \sqcap (a_1 \odot^{\operatorname{proj}}_{\sqcap} b_1) = (a_0 \sqcap a_1)\n  \odot^{\operatorname{proj}}_{\sqcap} (b_0 \sqcap b_1)$;\n\item\n  $(a_0 \times^{\operatorname{proj}}_{\sqcap} b_0) \sqcap (a_1 \times^{\operatorname{proj}}_{\sqcap} b_1) = (a_0 \sqcap a_1)\n\times^{\operatorname{proj}}_{\sqcap} (b_0 \sqcap b_1)$;\n\item\n  $(a_0 \amalg^{\operatorname{proj}}_{\sqcap} b_0) \sqcap (a_1 \amalg^{\operatorname{proj}}_{\sqcap} b_1) = (a_0 \sqcap a_1)\n  \amalg^{\operatorname{proj}}_{\sqcap} (b_0 \sqcap b_1)$.\n\end{enumerate}\n\end{cor}\n\n\subsection{Product for endomorphisms}\n\nLet $F$ be an indexed family of endomorphisms of $\mathcal{C}$.\n\nI will denote by $\Ob f$ the object (source and destination) of an\nendomorphism $f$.\n\nLet also each $\pi^X_i$ be a monovalued entirely defined morphism (for $i \in\n\dom F$).\n\nThen\n\begin{gather*}\n\bigodot^{\text{proj}}_{\sqcap} F = \bigsqcap_{i \in \dom F} ((\pi^{\lambda j \in n :\n\Ob F_j}_i)^{\dagger} \circ F_i \circ \pi^{\lambda j \in n : \Ob\nF_j}_i);\\\n\bigodot^{\text{proj}}_{\sqcup} F = \bigsqcup_{i \in \dom F} ((\pi^{\lambda j \in n :\n\Ob F_j}_i)^{\dagger} \circ F_i \circ \pi^{\lambda j \in n : \Ob\nF_j}_i)\n\end{gather*}\n(if $\pi$ is defined at $\lambda j \in n : \Ob F_j$).\n\nAbbreviate $\pi_i = \pi^{\lambda j \in n : \Ob F_j}_i$.\n\nSo\n\begin{gather*}\n\bigodot^{\text{proj}}_{\sqcap} F = \bigsqcap_{i \in \dom F} ((\pi_i)^{\dagger} \circ\nF_i \circ \pi_i);\\\n\bigodot^{\text{proj}}_{\sqcup} F = \bigsqcup_{i \in \dom F} ((\pi_i)^{\dagger} \circ\nF_i \circ \pi_i).\n\end{gather*}
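\n\nFor instance (a sketch in $\mathbf{Rel}$, where endomorphisms are digraphs): for digraphs $F_0$, $F_1$ the first formula yields the digraph on $\Ob F_0 \times \Ob F_1$ with\n\[ (x_0, x_1) \rightarrow (y_0, y_1) \Leftrightarrow x_0 \rightarrow y_0 \text{ in } F_0 \text{ and } x_1 \rightarrow y_1 \text{ in } F_1, \]\nwhich is the familiar tensor (categorical) product of digraphs.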
\n\n$\bigodot^{\text{proj}}_{\sqcap} F = \max \setcond{ \Phi \in \End \left( \prod^{(Q)}_{j \in n}\n\Ob F_j \right) }{ \forall i \in n : \Phi\n\sqsubseteq (\pi_i)^{\dagger} \circ F_i \circ \pi_i }$.\n\n$\bigodot^{\text{proj}}_{\sqcup} F = \min \setcond{ \Phi \in \End \left( \prod^{(Q)}_{j \in n}\n\Ob F_j \right) }{ \forall i \in n : \Phi\n\sqsupseteq (\pi_i)^{\dagger} \circ F_i \circ \pi_i }$.\n\nTaking into account that $\pi_i$ is a monovalued entirely defined morphism, we\nget:\n\n\begin{obvious}\n$\bigodot^{\text{proj}}_{\sqcap} F = \max \setcond{ \Phi \in \End \left( \prod^{(Q)}_{j \in\nn} \Ob F_j \right) }{ \forall i \in n : \pi_i\n\in \mathrm{C} (\Phi , F_i) }$.\n\end{obvious}\n\n\begin{obvious}\n$\bigodot^{\text{proj}}_{\sqcup} F = \min \setcond{ \Phi \in \End \left( \prod^{(Q)}_{j \in\nn} \Ob F_j \right) }{ \forall i \in n : \pi_i\n\in \mathrm{C}_{\ast} (\Phi , F_i) }$.\n\end{obvious}\n\n\begin{rem}\n  The above formulas may make it possible to define the product for non-dagger categories\n  (but only for endomorphisms). In this writing I don't introduce a notation\n  for this, however.\n\end{rem}\n\n\begin{cor}\n  $\pi_i \in \mathrm{C} \left( \bigodot^{\text{proj}}_{\sqcap} F , F_i \right)$ and\n  $\pi_i \in \mathrm{C}_{\ast} \left( \bigodot^{\text{proj}}_{\sqcup} F , F_i \right)$\n  for every $i \in \dom F$.\n\end{cor}\n\n\subsection{Category of continuous morphisms}\n\n\begin{defn}\n  The category $\cont (\mathcal{C})$ is defined as follows:\n  \begin{itemize}\n    \item Objects are endomorphisms of the category $\mathcal{C}$.\n    \n    \item Morphisms are triples $(f , a , b)$ where $a$ and $b$ are objects\n    and $f : \Ob a \rightarrow \Ob b$ is an entirely defined\n    monovalued principal morphism of the category $\mathcal{C}$ such that $f\n    \in \mathrm{C} (a , b)$ (in other words, $f \circ a \sqsubseteq b \circ\n    f$).\n    \n    \item Composition of morphisms is defined by the formula $(g , b , c)\n    \circ (f , a , b) = (g \circ f , a , c)$.\n    \n    \item Identity morphisms are $(1^{\mathcal{C}}_{\Ob a} , a , a)$.\n  \end{itemize}\n\end{defn}\n\nIt is really a category:\n\n\begin{proof}\n  We need to prove that: composition of morphisms is a morphism, composition\n  is associative, and identity morphisms can be canceled on the left and on\n  the right.\n  \n  That composition of morphisms is a morphism follows from the properties of generalized\n  continuity.\n  \n  That composition is associative is obvious.\n  \n  That identity morphisms can be canceled on the left and on the right is\n  obvious.\n\end{proof}\n\n\begin{rem}\n  The ``physical'' meaning of this category is:\n  \begin{itemize}\n    \item Objects (endomorphisms of $\mathcal{C}$) are spaces.\n    \n    \item Morphisms are continuous functions between spaces.\n    \n    \item $f \circ a \sqsubseteq b \circ f$ intuitively means that $f$\n    combined with an infinitely small is less than infinitely small combined\n    with $f$ (that is $f$ is continuous).\n  \end{itemize}\n\end{rem}
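\n\nAs an illustration (a sketch for $\mathcal{C} = \mathbf{Rel}$): for a function $f$ and endorelations (digraphs) $a$, $b$ the continuity condition unfolds to\n\[ f \circ a \sqsubseteq b \circ f \Leftrightarrow \forall x, y : (x \mathrel{a} y \Rightarrow f(x) \mathrel{b} f(y)), \]\nso $\cont (\mathbf{Rel})$ is exactly the category $\mathbf{Dig}$ of digraphs and discretely continuous maps considered near the end of this chapter.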
\n\n\begin{defn}\n  $\pi^{\cont (\mathcal{C})}_i = \left( \pi_i , \bigodot^{\text{proj}}_{\sqcap} F ,\n  F_i \right)$.\n\end{defn}\n\n\begin{prop}\n  $\pi_i$ are continuous, that is $\pi^{\cont (\mathcal{C})}_i$\n  are morphisms.\n\end{prop}\n\n\begin{proof}\n  We need to prove $\pi_i \in \mathrm{C} \left( \bigodot^{\text{proj}}_{\sqcap} F , F_i \right)$\n  but that was proved above.\n\end{proof}\n\nLet further $\mathcal{C}$ have sets as objects and $\prod^{(Q)}X=\prod X$ for an indexed family~$X$ of sets and $\pi_i = \Pr_i$ (for $i \in \dom F$).\n\n\begin{lem}\n  $f$ is continuous as a morphism $Y \rightarrow \bigodot^{\text{proj}}_{\sqcap} F$, that is\n  $f \in \Hom_{\cont (\mathcal{C})} \left( Y , \bigodot^{\text{proj}}_{\sqcap} F \right)$, iff all $\pi_i \circ f$ are continuous.\n\end{lem}\n\n\begin{proof}\n  ~\n  \begin{description}\n    \item[$\Rightarrow$] Let $f \in \Hom_{\cont\n    (\mathcal{C})} \left( Y , \bigodot^{\text{proj}}_{\sqcap} F \right)$. Then $f \circ Y\n    \sqsubseteq \left( \bigodot^{\text{proj}}_{\sqcap} F \right) \circ f$; $\pi_i \circ f \circ Y\n    \sqsubseteq \pi_i \circ \left( \bigodot^{\text{proj}}_{\sqcap} F \right) \circ f$; $\pi_i\n    \circ f \circ Y \sqsubseteq F_i \circ \pi_i \circ\n    f$. Thus $\pi_i \circ f$ is continuous.\n    \n    \item[$\Leftarrow$] Let all $\pi_i \circ f$ be continuous. Then\n    $\pi^{\cont (\mathcal{C})}_i \circ f \in\n    \Hom_{\cont (\mathcal{C})} (Y , F_i)$;\n    $\pi_i \circ f \circ Y \sqsubseteq\n    F_i \circ \pi_i \circ f$. We need\n    to prove $Y \sqsubseteq f^{\dagger} \circ \left( \bigodot^{\text{proj}}_{\sqcap} F \right)\n    \circ f$ that is\n    \[ Y \sqsubseteq f^{\dagger} \circ \bigsqcap_{i \in n} ((\pi_i)^{\dagger}\n       \circ F_i \circ \pi_i) \circ f \]\n    for which (because $f$ is metamonovalued) it is enough that\n    \[ Y \sqsubseteq \bigsqcap_{i \in n} (f^{\dagger} \circ (\pi_i)^{\dagger}\n       \circ F_i \circ \pi_i \circ f) , \]\n    which follows from $Y \sqsubseteq \bigsqcap_{i \in n} (f^{\dagger} \circ\n    (\pi_i)^{\dagger} \circ \pi_i \circ f \circ Y)$, which is obvious.\n  \end{description}\n\end{proof}\n\n\begin{thm}\label{cont-pr-pr}\n$\prod^{\text{proj}}_{\sqcap}$ together with $\pi$ is a categorical product in the category $\cont (\mathcal{C})$.\n\end{thm}\n\n\begin{proof}\nCheck\n\url{http://math.stackexchange.com/questions/102632/how-to-check-whether-it-is-a-direct-product/102677\#102677}\n\nI will denote $(\prod f)x=\prod_{i\in\dom f}f_i x$ for an\nindexed family~$f$ of functions.\n\nWe need to prove:\n\begin{enumerate}\n\item $\pi_k\circ\prod f=f_k$;\n\item $\prod_{i\in\dom f}(\pi_i\circ f)=f$.\n\end{enumerate}\nBut it follows from the fact that $\pi_i=\Pr_i$.\n\end{proof}\n\n\section{On duality}\n\nWe will consider duality where both the category $\mathcal{C}$ and the orders on\nHom-sets are replaced with their duals. 
I will denote $A\n\xleftrightarrow{\dual} B$ when two formulas $A$ and $B$ are dual with respect to\nthis duality.\n\n\begin{prop}\n  $f \in \mathrm{C} (\mu, \nu) \xleftrightarrow{\dual} f^{\dagger}\n  \in \mathrm{C} (\nu^{\dagger} , \mu^{\dagger})$.\n\end{prop}\n\n\begin{proof}\n  $f \in \mathrm{C} (\mu, \nu) \Leftrightarrow f \circ \mu\n  \sqsubseteq \nu \circ f \xleftrightarrow{\dual} \mu^{\dagger}\n  \circ f^{\dagger} \sqsupseteq f^{\dagger} \circ \nu^{\dagger}\n  \Leftrightarrow f^{\dagger} \in \mathrm{C} (\nu^{\dagger} ,\n  \mu^{\dagger})$.\n\end{proof}\n\n$f \text{ is entirely defined} \Leftrightarrow f^{\dagger} \circ f \sqsupseteq\n1_{\Src f} \xleftrightarrow{\dual} f^{\dagger} \circ f \sqsubseteq\n1_{\Src f} \Leftrightarrow f \text{ is injective} \Leftrightarrow\nf^{\dagger} \text{ is monovalued}$.\n\n$f \text{ is monovalued} \Leftrightarrow f \circ f^{\dagger} \sqsubseteq\n1_{\Dst f} \xleftrightarrow{\dual} f \circ f^{\dagger} \sqsupseteq\n1_{\Dst f} \Leftrightarrow f \text{ is surjective} \Leftrightarrow\nf^{\dagger} \text{ is entirely defined}$.
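\n\nFor example (a quick check in $\mathbf{Rel}$, where $f^{\dagger}$ is the converse relation): for the function $f = \{(1, a), (2, a)\}$ from $\{1, 2\}$ to $\{a\}$ we have $f^{\dagger} \circ f = \{1, 2\} \times \{1, 2\} \sqsupseteq 1_{\{1, 2\}}$ ($f$ is entirely defined but, matching the duality, not injective) and $f \circ f^{\dagger} = \{(a, a)\} \sqsubseteq 1_{\{a\}}$ ($f$ is monovalued and here also surjective, so $f^{\dagger}$ is entirely defined).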
\n\n\section{Dual products}\n\nThe below is the dual of the above; proofs are omitted as they are dual.\n\nLet $\coprod^{(Q)}X$ be an object for each indexed family~$X$ of objects.\n\nI will call \emph{coprincipal} the morphisms~$f^{\dagger}$ where $f$~is principal. (Usually there is no distinction between\nprincipal and coprincipal.)\n\nLet $\iota$ be a partial function mapping elements $X\in\dom\iota$ (which consists of small indexed families of objects of~$\mathcal{C}$) to indexed families of coprincipal morphisms $X_i\to\coprod^{(Q)}X$ (called \emph{injections}), one for every $i\in\dom X$.\n\nWe will denote particular injections as $\iota_i^X$.\n\nIf $\iota_i = (\pi_i)^{\dagger}$ (but we won't assume this below), then $\iota_i$ being coprincipal is equivalent to $\pi_i$ being principal.\n\nWe will define $\bigodot^{\operatorname{inj}}$, $\prod^{\operatorname{inj}}$, $\coprod^{\operatorname{inj}}$\nby analogy with their $\operatorname{proj}$ counterparts, replacing~$\pi$ by~$\iota^{\dagger}$.\n\nWe will also define $\bigodot^{\operatorname{proj}}_{\sqcup}$, $\bigodot^{\operatorname{inj}}_{\sqcup}$, etc.\ by replacing~$\bigsqcap$ by~$\bigsqcup$.\n\n\subsection{Dual products for endomorphisms}\n\nLet $F$ be an indexed family of endomorphisms of $\mathcal{C}$.\n\n\begin{defn}\n  $\bigodot^{\operatorname{inj}}_{\sqcup} F = \bigsqcup_{i \in \dom F} (\iota^{\lambda j \in n :\n  \Ob F_j}_i \circ F_i^{\dagger} \circ (\iota^{\lambda j \in n :\n  \Ob F_j}_i)^{\dagger})$.\n\end{defn}\n\n\begin{prop}\n  $\bigodot^{\operatorname{inj}}_{\sqcup} F = \min \setcond{ \Phi \in \End \left( \coprod^{(Q)}_{j\n  \in n} \Ob F_j \right) }{ \forall i \in n :\n  \Phi \sqsupseteq \iota^{\lambda j \in n : \Ob F_j}_i \circ\n  F_i^{\dagger} \circ (\iota^{\lambda j \in n : \Ob F_j}_i)^{\dagger} }$.\n\end{prop}\n\n\begin{proof}\n  By duality.\n\end{proof}\n\nAbbreviate $\iota_i = \iota^{\lambda j \in n : \Ob F_j}_i$.\n\nSo $\bigodot^{\operatorname{inj}}_{\sqcup} F = \bigsqcup_{i \in \dom F} (\iota_i \circ F_i^{\dagger}\n\circ (\iota_i)^{\dagger})$ and\n$\bigodot^{\operatorname{inj}}_{\sqcup} F = \min \setcond{ \Phi \in \End \left( \coprod^{(Q)}_{j \in n}\n\Ob F_j \right) }{ \forall i \in n : \Phi\n\sqsupseteq \iota_i \circ F_i^{\dagger} \circ (\iota_i)^{\dagger} }$.\n\nTaking into account that $\iota_i$ is a monovalued entirely defined morphism,\nwe get:\n\n\begin{obvious}\n$\bigodot^{\operatorname{inj}}_{\sqcup} F = \min \setcond{ \Phi \in \End \left( \coprod^{(Q)}_{j \in\nn} \Ob F_j \right) }{ \forall i \in n : \iota_i\n\in \mathrm{C} (F_i^{\dagger} , \Phi) }$.\n\end{obvious}\n\n\begin{cor}\n  $\iota_i \in \mathrm{C} \left( F_i , \bigodot^{\operatorname{inj}}_{\sqcup} F \right)$ for every $i\n  \in \dom F$.\n\end{cor}\n\n\begin{rem}\nThe last two statements don't require that our category is dagger. I omit the proof.\n\end{rem}\n\n\subsection{Category of continuous morphisms}\n\nLet $\iota_i$ be canonical injections.\n\n\begin{defn}\n$\iota^{\cont(\mathcal{C})}_i = \left(\iota_i,F_i,\bigodot^{\operatorname{inj}}_{\sqcup}F\right)$.\n\end{defn}\n\n\begin{obvious}\n$\iota_i$ are continuous, that is $\iota^{\cont(\mathcal{C})}_i$ are morphisms.\n\end{obvious}\n\n\begin{thm}\n$\coprod^{\operatorname{inj}}_{\sqcup}$ together with $\iota$ is a categorical coproduct in the category~$\cont(\mathcal{C})$.\n\end{thm}\n\n\begin{proof}\nDual to theorem~\ref{cont-pr-pr}.\n\end{proof}\n\n\section{Applying this to the theory of funcoids and reloids}\n\n\subsection{Funcoids}\n\n\begin{defn}\n  $\mathbf{Fcd} \eqdef \cont \mathsf{FCD}$.\n\end{defn}\n\nLet $F$ be a family of endofuncoids.\n\n\subsection{Reloids}\n\n\begin{defn}\n  $\mathbf{Rld} \eqdef \cont \mathsf{RLD}$.\n\end{defn}\n\nLet $F$ be a family of endoreloids.\n\nIt is trivial?? that for uniform spaces the infimum product of reloids coincides\nwith the product uniformity.
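\n\nAt least the shape of the formula matches (a sketch, not a proof): a base of the product uniformity on $\prod_{j \in n} X_j$ is given by the finite intersections $\bigcap_{i} (\pi_i \times \pi_i)^{- 1} [U_i]$ of preimages of entourages $U_i$ under the projections, and read in $\mathbf{Rel}$ the preimage $(\pi_i \times \pi_i)^{- 1} [U_i]$ is exactly $(\pi_i)^{\dagger} \circ U_i \circ \pi_i$, as in $\bigsqcap_{i} ((\pi_i)^{\dagger} \circ F_i \circ \pi_i)$.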
It is\ninitial because it has precisely one morphism $o$ (the empty set considered as\na function) to any object $Y$. $o$ is a morphism because $o \\circ\n\\bot^{\\mathsf{FCD} (\\emptyset , \\emptyset)} \\sqsubseteq Y\n\\circ o$.\n\n\\begin{prop}\n  Terminal objects of $\\mathbf{Fcd}$ are exactly\n  $\\uparrow^{\\mathscr{F}} \\{ \\ast \\} \\times^{\\mathsf{FCD}}\n  \\uparrow^{\\mathscr{F}} \\{ \\ast \\} = \\uparrow^{\\mathsf{FCD}} \\{ (\\ast\n  , \\ast) \\}$ where $\\ast$ is an arbitrary point.\n\\end{prop}\n\n\\begin{proof}\n  In order for a function $f : X \\rightarrow \\uparrow^{\\mathsf{FCD}} \\{\n  (\\ast , \\ast) \\}$ be a morphism, it is required exactly $f \\circ X\n  \\sqsubseteq \\uparrow^{\\mathsf{FCD}} \\{ (\\ast , \\ast) \\} \\circ f$\n  \n  $f \\circ X \\sqsubseteq (f^{- 1} \\circ \\uparrow^{\\mathsf{FCD}} \\{\n  (\\ast , \\ast) \\})^{- 1}$; $f \\circ X \\sqsubseteq (\\{ \\ast \\}\n  \\times^{\\mathsf{FCD}} \\langle f^{- 1} \\rangle \\{ \\ast \\})^{- 1}$; $f\n  \\circ X \\sqsubseteq \\langle f^{- 1} \\rangle \\{ \\ast \\}\n  \\times^{\\mathsf{FCD}} \\{ \\ast \\}$ what true exactly when $f$ is a\n  constant function with the value $\\ast$.\n\\end{proof}\n\nIf $n = \\emptyset$ then; $\\bigodot^{\\text{proj}}_{\\sqcap} \\emptyset = \\prod^{\\text{proj}}_{\\sqcap} \\emptyset = \\coprod^{\\text{proj}}_{\\sqcap} \\emptyset = \\max\n\\mathsf{FCD} (\\{\\emptyset\\}, \\{\\emptyset\\}) = \\uparrow^{\\mathscr{F}} \\{ \\emptyset \\}\n\\times^{\\mathsf{FCD}} \\uparrow^{\\mathscr{F}} \\{ \\emptyset \\} =\n\\uparrow^{\\mathsf{FCD}} \\{ (\\emptyset , \\emptyset) \\}$.\n\n\\subsection{Of category~$\\mathbf{Rld}$}\n\nInitial object of $\\mathbf{Rld}$ is the endofuncoid\n$\\bot^{\\mathsf{RLD} (\\emptyset , \\emptyset)}$. It is\ninitial because it has precisely one morphism $o$ (the empty set considered as\na function) to any object $Y$. 
\n\n\subsection{Of category~$\mathbf{Rld}$}\n\nThe initial object of $\mathbf{Rld}$ is the endoreloid\n$\bot^{\mathsf{RLD} (\emptyset , \emptyset)}$. It is\ninitial because it has precisely one morphism $o$ (the empty set considered as\na function) to any object $Y$; $o$ is a morphism because $o \circ\n\bot^{\mathsf{RLD} (\emptyset , \emptyset)} \sqsubseteq Y\n\circ o$.\n\n\begin{prop}\n  Terminal objects of $\mathbf{Rld}$ are exactly\n  $\uparrow^{\mathscr{F}} \{ \ast \} \times^{\mathsf{RLD}}\n  \uparrow^{\mathscr{F}} \{ \ast \} = \uparrow^{\mathsf{RLD}} \{ (\ast\n  , \ast) \}$ where $\ast$ is an arbitrary point.\n\end{prop}\n\n\begin{proof}\n  In order for a function $f : X \rightarrow \uparrow^{\mathsf{RLD}} \{\n  (\ast , \ast) \}$ to be a morphism, it is required exactly that $f \circ X\n  \sqsubseteq \uparrow^{\mathsf{RLD}} \{ (\ast , \ast) \} \circ f$, that is\n  \n  $f \circ X \sqsubseteq (f^{- 1} \circ \uparrow^{\mathsf{RLD}} \{\n  (\ast , \ast) \})^{- 1}$; $f \circ X \sqsubseteq (\{ \ast \}\n  \times^{\mathsf{RLD}} \langle f^{- 1} \rangle \{ \ast \})^{- 1}$; $f\n  \circ X \sqsubseteq \langle f^{- 1} \rangle \{ \ast \}\n  \times^{\mathsf{RLD}} \{ \ast \}$, which holds exactly when $f$ is a\n  constant function with the value $\ast$.\n\end{proof}\n\nIf $n = \emptyset$ then $\bigodot^{\text{proj}}_{\sqcap} \emptyset = \prod^{\text{proj}}_{\sqcap} \emptyset = \coprod^{\text{proj}}_{\sqcap} \emptyset = \max\n\mathsf{RLD} (\{\emptyset\}, \{\emptyset\}) = \uparrow^{\mathscr{F}} \{ \emptyset \}\n\times^{\mathsf{RLD}} \uparrow^{\mathscr{F}} \{ \emptyset \} =\n\uparrow^{\mathsf{RLD}} \{ (\emptyset , \emptyset) \}$.\n\n\section{Canonical product and subatomic product}\n\n\fxwarning{Confusion between filters on products and multireloids.}\n\n\begin{prop}\n  $\Pr^{\mathsf{RLD}}_i |_{\mathfrak{F} (Z)} = \langle \pi_i \rangle$\n  for every index $i$ of a cartesian product $Z$.\n\end{prop}\n\n\begin{proof}\n  If $\mathcal{X} \in \mathfrak{F} (Z)$ then $(\Pr^{\mathsf{RLD}}_i\n  |_{\mathfrak{F} (Z)}) \mathcal{X} = \Pr^{\mathsf{RLD}}_i  \mathcal{X}\n  = \bigsqcap^{\mathscr{F}} \rsupfun{\Pr_i} \mathcal{X} =\n  \bigsqcap \langle \pi_i \rangle \up \mathcal{X} = \langle \pi_i\n  \rangle \mathcal{X}$.\n\end{proof}\n\n\begin{prop}\n  $\prod^{(A)} F = \bigsqcap_{i \in n} \left( \left( \pi^{\mathsf{FCD}\n  \left( \prod_{j \in n} \Dst F_j \right)}_i \right)^{- 1} \circ F_i \circ\n  \pi^{\mathsf{FCD} \left( \prod_{j \in n} \Src F_j \right)}_i\n  \right)$.\n\end{prop}\n\n\begin{proof}\n  $a \mathrel{\left[ \prod^{(A)} F \right]} b \Leftrightarrow \forall i \in\n  \dom F : \Pr^{\mathsf{RLD}}_i a \mathrel{[F_i]}\n  \Pr^{\mathsf{RLD}}_i b \Leftrightarrow \forall i \in \dom F :\n  \left\langle \pi^{\mathsf{FCD} \left( \prod_{j \in n}\n  \Src F_j \right)}_i \right\rangle a \mathrel{[F_i]}\n  \left\langle \pi^{\mathsf{FCD} \left( \prod_{j \in n} \Dst F_j\n  \right)}_i \right\rangle b \Leftrightarrow \forall i \in \dom F : a\n  \mathrel{\left[ \left( \pi^{\mathsf{FCD} \left( \prod_{j \in n}\n  \Dst F_j \right)}_i \right)^{- 1} \circ F_i \circ\n  \pi^{\mathsf{FCD} \left( \prod_{j \in n} \Src F_j \right)}_i\n  \right]} b \Leftrightarrow a \mathrel{\left[ \bigsqcap_{i \in n} \left(\n  \left( \pi^{\mathsf{FCD} \left( \prod_{j \in n} \Dst F_j\n  \right)}_i \right)^{- 1} \circ F_i \circ \pi^{\mathsf{FCD} \left(\n  \prod_{j \in n} \Src F_j \right)}_i \right) \right]} b$ for ultrafilters\n  $a$ and $b$.\n\end{proof}\n\n\begin{cor}\n  $\bigodot^{\text{proj}}_{\sqcap} F = \prod^{(A)} F$ if $F$ is a small indexed family of\n  funcoids.\n\end{cor}
funcoids.\n\\end{cor}\n\n\\section{Further plans}\n\nCoordinate-wise continuity.\n\n\\section{Cartesian closedness}\n\nWe are not only to prove (or maybe disprove) that our categories are cartesian closed, but also to find (if any) explicit formulas for exponential transpose and evaluation.\n\n''Definition'' A category is //cartesian closed// iff:\n\\begin{enumerate}\n\\item It has finite products.\n\\item For each objects $A$, $B$ is given an object $\\operatorname{HOM} ( A , B)$ (//exponentiation//) and a morphism $\\varepsilon_{A, B} : \\operatorname{HOM} ( A , B) \\times A \\rightarrow B$.\n\\item For each morphism $f : Z \\times A \\rightarrow B$ there is given a morphism (//exponential transpose//) $\\sim f : Z \\rightarrow \\operatorname{HOM} ( A , B)$.\n\\item $\\varepsilon_{B,C} \\circ ( \\sim f \\times 1_A) = f$ for $f : A \\rightarrow B \\times C$.\n\\item $\\sim ( \\varepsilon_{B,C} \\circ ( g \\times 1_A)) = g$ for $g : A \\rightarrow \\operatorname{HOM} ( B , C)$.\n\\end{enumerate}\n\nWe will also denote $f\\mapsto (-f)$ the reverse of the bijection $f\\mapsto (\\sim f)$.\n\nOur purpose is to prove (or disprove) that categories $\\mathbf{Dig}$, $\\mathbf{Fcd}$, and $\\mathbf{Rld}$ are cartesian closed. Note that they have finite (and even infinite) products is already proved.\n\nAlternative way to prove:\nyou can prove that the functor $-\\times B$ is left adjoint to the exponentiation $-^B$ where the counit is given by the evaluation map.\n\n\\subsection{Definitions}\n\nCategories $\\mathbf{Dig}$, $\\mathbf{Fcd}$, and $\\mathbf{Rld}$ are respectively categories of:\n\\begin{enumerate}\n\\item discretely continuous maps between digraphs;\n\\item (proximally) continuous maps between endofuncoids;\n\\item (uniformly) continuous maps between endoreloids.\n\\end{enumerate}\n\n''Definition'' //Digraph// is an endomorphism of the category $\\mathbf{Rel}$.\n\nFor a digraph $A$ we denote $\\operatorname{Ob} A$ the set of vertexes or $A$ and $\\operatorname{GR} A$ the set of edges or $A$.\n\n''Definition'' Category $\\mathbf{Dig}$ of digraphs is the category whose objects are digraphs and morphisms are discretely continuous maps between digraphs. That is morphisms from a digraph $\\mu$ to a digraph $\\nu$ are functions (or more precisely morphisms of $\\mathbf{Set}$) $f$ such that $f \\circ \\mu \\sqsubseteq \\nu \\circ f$ (or equivalently $\\mu \\sqsubseteq f^{- 1} \\circ \\nu \\circ f$ or equivalently $f \\circ \\mu \\circ f^{- 1} \\sqsubseteq \\nu$).\n\n''Remark'' Category of digraphs is sometimes defined in an other (non equivalent) way, allowing multiple edges between two given vertices.\n\n\\subsection{Conjectures}\n\n\\begin{conjecture}\n  The categories $\\mathbf{Fcd}$ and $\\mathbf{Rld}$ are\n  cartesian closed (actually two conjectures).\n\\end{conjecture}\n\n\\url{http://mathoverflow.net/questions/141615/how-to-prove-that-there-are-no-exponential-object-in-a-category}\nsuggests to investigate colimits to prove that there are no exponential\nobject.\n\nOur purpose is to prove (or disprove) that categories $\\mathbf{Dig}$, $\\mathbf{Fcd}$, and $\\mathbf{Rld}$ are cartesian closed. 
Note that they have finite (and even infinite) products is already proved.\n\nAlternative way to prove:\nyou can prove that the functor $-\\times B$ is left adjoint to the exponentiation $-^B$ where the counit is given by the evaluation map.\n\nSee \\url{http://www.springer.com/us/book/9780387977102} for another way to prove Cartesian closedness.\n\n\\subsection{Category Dig is cartesian closed}\n\nCategory of digraphs is the simplest of our three categories and it is easy to demonstrate that it is cartesian closed. I demonstrate cartesian closedness of $\\mathbf{Dig}$ mainly with the purpose to show a pattern similarly to which we may probably demonstrate our two other categories are cartesian closed.\n\nLet $G$ and $H$ be digraphs:\n\\begin{itemize}\n\\item $\\operatorname{Ob} \\operatorname{HOM} ( G , H) = ( \\operatorname{Ob} H)^{\\operatorname{Ob} G}$;\n\\item $( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( G , H) \\Leftrightarrow \\forall ( v , w) \\in \\operatorname{GR} G : ( f ( v) , g ( w)) \\in \\operatorname{GR} H$ for every $f, g \\in \\operatorname{Ob} \\operatorname{HOM} ( G , H) = ( \\operatorname{Ob} H)^{\\operatorname{Ob} G}$;\n\\end{itemize}\n\n$\\operatorname{GR} 1_{\\operatorname{HOM} ( B , C)} = \\operatorname{id}_{\\operatorname{Ob} \\operatorname{HOM} ( B , C)} = \\operatorname{id}_{( \\operatorname{Ob} H)^{\\operatorname{Ob} G}}$\n\nEquivalently\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( G , H) \\Leftrightarrow \\forall ( v , w) \\in \\operatorname{GR} G : g \\circ \\{ ( v , w) \\} \\circ f^{- 1} \\subseteq \\operatorname{GR} H$\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( G , H) \\Leftrightarrow g \\circ ( \\operatorname{GR} G) \\circ f^{- 1} \\subseteq \\operatorname{GR} H$\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( G , H) \\Leftrightarrow \\langle f \\times^{( C)} g \\rangle \\operatorname{GR} G \\subseteq \\operatorname{GR} H$\n\nThe transposition (the isomorphism) is uncurrying.\n\n$\\sim f = \\lambda a \\in Z \\lambda y \\in A : f ( a , y)$ that is $( \\sim f) ( a) ( y) = f ( a , y)$.\n\n$( - f) ( a , y) = f ( a) ( y)$\n\nIf $f : A \\times B \\rightarrow C$ then $\\sim f : A \\rightarrow \\operatorname{HOM} ( B , C)$\n\n''Proposition'' Transposition and its inverse are morphisms of $\\mathbf{Dig}$.\n\n''Proof'' It follows from the equivalence $\\sim f : A \\rightarrow \\operatorname{HOM} ( B , C) \\Leftrightarrow \\forall x, y : ( x A y \\Rightarrow ( \\sim f) x ( \\operatorname{HOM} ( B , C))  ( \\sim f) y) \\Leftrightarrow \\\\ \\forall x, y : ( x A y \\Rightarrow \\forall ( v , w) \\in B : ( ( \\sim f) x v , ( \\sim f) y w) \\in C) \\Leftrightarrow \\\\ \\forall x, y, v, w : ( x A y \\wedge v B w \\Rightarrow ( ( \\sim f) x v , ( \\sim f) y w) \\in C) \\Leftrightarrow \\\\ \\forall x, y, v, w : ( ( x , v)  ( A \\times B)  ( y , w) \\Rightarrow ( f ( x , v) , f ( y , w)) \\in C) \\Leftrightarrow f : A \\times B \\rightarrow C$.\n\nEvaluation $\\varepsilon : \\operatorname{HOM} ( G , H) \\times G \\rightarrow H$ is defined by the formula:\n\nThen evaluation is $\\varepsilon_{B, C} = \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}$.\n\nSo $\\varepsilon_{B, C} ( p , q) = ( \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}) ( p , q) = 1_{\\operatorname{HOM}(B,C)} ( p) ( q) = p ( q)$.\n\n''Proposition'' Evaluation is a morphism of $\\mathbf{Dig}$.\n\n''Proof'' Because $\\varepsilon_{B, C} ( p , q) = \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}$.\n\nIt remains to prove:\n\\begin{itemize}\n\\item $\\varepsilon_{B, C} 
\\circ ( \\sim f \\times 1_{A}) = f$ for $f : A \\rightarrow B \\times C$;\n\\item $\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) = g$ for $g : A \\rightarrow \\operatorname{HOM} ( B , C)$.\n\\end{itemize}\n\n''Proof'' $\\varepsilon_{B, C} ( \\sim f \\times 1_{A}) ( a , p) = \\varepsilon_{B, C} ( ( \\sim f) a , p) = ( \\sim f) a p = f ( a , p)$. So $\\varepsilon_{B, C} \\circ ( \\sim f \\times 1_{A}) = f$.\n\n  $(\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A}))) ( p) ( q) = ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) ( p , q) = \\varepsilon_{B, C} ( g \\times 1_{A}) ( p , q) = \\varepsilon_{B, C} ( g p , q) = g ( p) ( q)$. So $\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) = g$.\n\n\\subsection{New attempt}\n\nWe will take $\\times^{(C)}$ as the product?? in the category $\\mathbf{Fcd}$\n\n\\begin{prop}\n$\\rsupfun{\\bigcup\\rsupfun{\\curry f}X}Y=\n\\rsupfun{f}(X\\times Y)$\n[Is the left part always defined?]\n\\end{prop}\n\n\\begin{proof}\n$\\rsupfun{\\bigcup\\rsupfun{\\curry f}X}Y=\\setcond{\\rsupfun{\\bigcup\\rsupfun{\\curry f}X}\\{y\\}}{y\\in Y}=\n\\setcond{\\rsupfun{\\bigcup\\left(\\bigcup_{x\\in X}(\\curry f)x\\right)}\\{y\\}}{y\\in Y}=\n\\setcond{\\bigcup\\rsupfun{\\bigcup_{x\\in X}(\\curry f)x}\\{y\\}}{y\\in Y}=\n\\setcond{\\bigcup\\bigcup_{x\\in X}\\rsupfun{(\\curry f)x)}\\{y\\}}{y\\in Y}=\n\\setcond{\\bigcup_{x\\in X}(\\curry f)x)y}{y\\in Y}=\n\\setcond{((\\curry f)x)y}{x\\in X,y\\in Y}$.\n\n$\\rsupfun{f}(X\\times Y)=\\setcond{f(x,y)}{x\\in X,y\\in Y}$.\n\nSo the thesis.\n\\end{proof}\n\n\\begin{prop}\n$\\supfun{\\bigsqcup\\supfun{\\curry f}x}y =\n\\supfun{f}(x\\times^{\\mathsf{RLD}}y)$.\n\\end{prop}\n\n\\begin{proof}\n$\\supfun{\\bigsqcup\\supfun{\\curry f}x}y =\n\\bigsqcap_{Y\\in\\up y}\\bigsqcup\\supfun{\\bigsqcup\\supfun{\\curry f}x}Y=\n\\bigsqcap_{Y\\in\\up y}\\bigsqcup\\supfun{\\bigsqcap_{X\\in\\up x}\\bigsqcup\\supfun{\\curry f}X}Y=\n\\bigsqcap_{Y\\in\\up y}\\bigsqcap_{X\\in\\up x}\\bigsqcup\\supfun{\\bigsqcup\\supfun{\\curry f}X}Y=\n\\bigsqcap_{Y\\in\\up y}\\bigsqcap_{X\\in\\up x}\\bigcup\\rsupfun{\\bigcup\\rsupfun{\\curry f}X}Y=\n\\bigsqcap_{Y\\in\\up y}\\bigsqcap_{X\\in\\up x}\\supfun{f}(X\\times Y)$.\nBy properties of generalized filter bases\n$\\bigsqcap_{Y\\in\\up y}\\bigsqcap_{X\\in\\up x}\\supfun{f}(X\\times Y)=\\supfun{f}(x\\times^{\\mathsf{RLD}} y)$\n\\end{proof}\n\nLet $G$ and $H$ be endofuncoids. 
By definition:\n\\begin{itemize}\n\\item $\\operatorname{Ob} \\operatorname{HOM} ( G , H) = ( \\operatorname{Ob} H)^{\\operatorname{Ob} G}$;\n\\item $( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( G , H) \\Leftrightarrow \\forall v,w\\in\\atoms^{\\mathscr{F}}\\Ob G : \\supfun{f}v \\times^{\\mathsf{FCD}} \\supfun{g}w \\sqsubseteq H$ for every $f, g \\in \\operatorname{Ob} \\operatorname{HOM} ( G , H) = ( \\operatorname{Ob} H)^{\\operatorname{Ob} G}$;\n\\end{itemize}\n\n$\\operatorname{GR} 1_{\\operatorname{HOM} ( B , C)} = \\operatorname{id}_{\\operatorname{Ob} \\operatorname{HOM} ( B , C)} = \\operatorname{id}_{( \\operatorname{Ob} H)^{\\operatorname{Ob} G}}$\n\nEquivalently\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( A , B) \\Leftrightarrow \\forall v,w\\in\\atoms^{\\mathscr{F}}\\Ob A : g \\circ (v \\times^{\\mathsf{FCD}} w) \\circ f^{- 1} \\sqsubseteq \\operatorname{GR} B$\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( A , B) \\Leftrightarrow g \\circ A \\circ f^{- 1} \\sqsubseteq B$\n\n$( f , g) \\in \\operatorname{GR} \\operatorname{HOM} ( A , B) \\Leftrightarrow \\langle f \\times^{( C)} g \\rangle A \\sqsubseteq B$\n\n\\begin{lem}\n$F\\suprel{\\operatorname{HOM}(A,B)}G \\Leftrightarrow G\\circ A\\circ F^{-1}\\sqsubseteq B$ for sets $F$,~$G$ of functions.\n\\end{lem}\n\n\\begin{proof}\n$F\\suprel{\\operatorname{HOM}(A,B)}G \\Leftrightarrow\n\\exists f\\in F,g\\in G:(f,g)\\in\\operatorname{HOM}(A,B)\\Leftrightarrow\n\\exists f\\in F,g\\in G:g\\circ A\\circ f^{-1}\\sqsubseteq B\\Leftrightarrow\nG\\circ A\\circ F^{-1}\\sqsubseteq B$.\n\\end{proof}\n\n\\begin{prop}\n$\\mathcal{F}\\suprel{\\operatorname{HOM}(A,B)}\\mathcal{G} \\Leftrightarrow \\mathcal{G}\\circ A\\circ \\mathcal{F}^{-1}\\sqsubseteq B$.\n\\end{prop}\n\n\\begin{proof}\n$\\mathcal{F}\\suprel{\\operatorname{HOM}(A,B)}\\mathcal{G} \\Leftrightarrow\n\\forall F\\in\\up\\mathcal{F},G\\in\\up\\mathcal{G}:F\\suprel{\\operatorname{HOM}(A,B)}G\n\\Leftrightarrow\n\\forall F\\in\\up\\mathcal{F},G\\in\\up\\mathcal{G}:G\\circ A\\circ F^{-1}\\sqsubseteq B$ what by properties of generalized filter\nbases is equivalent to $\\mathcal{G}\\circ A\\circ \\mathcal{F}^{-1}\\sqsubseteq B$.\n\\end{proof}\n\nLet $\\sim f=\\bigsqcup\\circ\\supfun{\\curry f}$. Here we consider $\\bigsqcup\\circ\\supfun{\\curry f}$ as a principal funcoid that is binary relation:\n\n\\begin{prop}\n$\\bigsqcup\\circ\\supfun{\\curry f}$ is a complete and co-complete pointfree funcoid.\n\\end{prop}\n\n\\begin{proof}\nIt is obviously a pointfree funcoid.\n\nLet~$X$ be a principal filter.\n$\\bigsqcup\\supfun{\\curry f}X$ is obviously principal.\nSo, it's co-complete.\n\nAnd it is obviously complete.\n\\end{proof}\n\n\\begin{obvious}\n$\\supfun{\\supfun{\\sim f}x}y = \\supfun{f}(x\\times^{\\mathsf{RLD}}y)$.\n\\end{obvious}\n\nLet $f\\in\\Hom(A\\times B,C)$ that is $f\\in Z^{A\\times B}$. 
Then\n$\\curry f\\in (C^B)^A$, $\\supfun{\\curry f}X\\in\\subsets(C^B)$,\n$\\bigsqcup\\supfun{\\curry f}X\\in C^B$.\nThus $\\sim f\\in (\\mathsf{FCD}(B,C))^{\\mathscr{F}(A)}$.\n\nIf $f : A \\times B \\rightarrow C$ then $\\sim f : \\mathscr{F}(A) \\rightarrow \\operatorname{HOM} ( B , C)$\n\n\\fxwarning{Rewrite below:}\n\n\\begin{prop}\nTransposition and its inverse are morphisms of $\\mathbf{Fcd}$.\n\\end{prop}\n\n\\begin{proof}\nIt follows from the equivalence $\\sim f : A \\rightarrow \\operatorname{HOM} ( B , C) \\Leftrightarrow \\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A : ( x\\suprel{A}y \\Rightarrow \\supfun{\\sim f} x \\suprel{\\operatorname{HOM}(B,C)} \\supfun{\\sim f} y) \\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A : (x\\suprel{A}y \\Rightarrow (\\supfun{\\sim f}y)\\circ B\\circ(\\supfun{\\sim f}x)^{-1} \\sqsubseteq C) \\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A : (x\\suprel{A}y \\Rightarrow \\forall p,q\\in\\atoms^{\\mathscr{F}}\\Ob B : (\np\\times^{\\mathsf{FCD}}q\\sqsubseteq B \\Rightarrow\n(\\supfun{\\sim f}y)\\circ(p\\times^{\\mathsf{FCD}}q)\\circ(\\supfun{\\sim f}x)^{-1} \\sqsubseteq C) \\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:\n(x\\suprel{A}y\\land p\\suprel{B}q \\Rightarrow (\\supfun{\\sim f}y)\\circ(p\\times^{\\mathsf{FCD}}q)\\circ(\\supfun{\\sim f}x)^{-1} \\sqsubseteq C) \\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:\n(x\\suprel{A}y\\land p\\suprel{B}q \\Rightarrow \\supfun{\\supfun{\\sim f}x}p\\times^{\\mathsf{FCD}}\\supfun{\\supfun{\\sim f}y}q \\sqsubseteq C) \\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:\n(x\\times^{\\mathsf{RLD}}p\\suprel{A\\times^{(C)}B}y\\times^{\\mathsf{RLD}}q \\Rightarrow \\supfun{f}(x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}\\supfun{f}(y\\times^{\\mathsf{RLD}}q) \\sqsubseteq C) %\\Leftrightarrow \\\\\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:\n((x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}(y\\times^{\\mathsf{RLD}}q)\\sqsubseteq A\\times^{(C)}B \\Rightarrow f\\circ((x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}(y\\times^{\\mathsf{RLD}}q))\\circ f^{-1} \\sqsubseteq C)\n\\Leftarrow \\\\\n\\forall t,s\\in\\atoms(\\Ob A\\times^{\\mathsf{RLD}}\\Ob B):\n(t\\times^{\\mathsf{FCD}}s\\sqsubseteq A\\times^{(C)}B \\Rightarrow f\\circ(t\\times^{\\mathsf{FCD}}s)\\circ f^{-1} \\sqsubseteq C)\n\\Leftrightarrow \\\\\nf\\circ(A\\times^{(C)}B)\\circ f^{-1} \\sqsubseteq C) \\Leftrightarrow\nf : A\\times^{(C)}B \\rightarrow C\n$\n\nBut:\n\n$f : A\\times^{(C)}B \\rightarrow C \\Leftrightarrow f\\circ(A\\times^{(C)}B)\\circ f^{-1} \\sqsubseteq C \\Leftrightarrow??\n\\supfun{f}A\\times^{\\mathsf{FCD}}\\supfun{f}B\\sqsubseteq C\n\\Rightarrow\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:(x\\times^{\\mathsf{RLD}}p\\sqsubseteq A\\land y\\times^{\\mathsf{RLD}}q\\sqsubseteq B \\Rightarrow\n\\supfun{f}(x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}\n\\supfun{f}(y\\times^{\\mathsf{RLD}}q) \\sqsubseteq C) \\Leftrightarrow\n\\forall x,y\\in\\atoms^{\\mathscr{F}}\\Ob A, p,q\\in\\atoms^{\\mathscr{F}}\\Ob B:((x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}(y\\times^{\\mathsf{RLD}}q)\\sqsubseteq A\\times^{(C)}B \\Rightarrow\n\\supfun{f}(x\\times^{\\mathsf{RLD}}p)\\times^{\\mathsf{FCD}}\n\\supfun{f}(y\\times^{\\mathsf{RLD}}q) \\sqsubseteq C)$.\n\nThus the 
thesis follows.\n\\end{proof}\n\nEvaluation $\\varepsilon : \\operatorname{HOM} ( G , H) \\times G \\rightarrow H$ is defined by the formula:\n\nThen evaluation is $\\varepsilon_{B, C} = \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}$.\n\nSo $\\varepsilon_{B, C} ( p , q) = ( \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}) ( p , q) = 1_{\\operatorname{HOM}(B,C)} ( p) ( q) = p ( q)$.\n\n''Proposition'' Evaluation is a morphism of $\\mathbf{Dig}$.\n\n''Proof'' Because $\\varepsilon_{B, C} ( p , q) = \\mathop{\\sim} 1_{\\operatorname{HOM}(B,C)}$.\n\nIt remains to prove:\n\\begin{itemize}\n\\item $\\varepsilon_{B, C} \\circ ( \\sim f \\times 1_{A}) = f$ for $f : A \\rightarrow B \\times C$;\n\\item $\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) = g$ for $g : A \\rightarrow \\operatorname{HOM} ( B , C)$.\n\\end{itemize}\n\n''Proof'' $\\varepsilon_{B, C} ( \\sim f \\times 1_{A}) ( a , p) = \\varepsilon_{B, C} ( ( \\sim f) a , p) = ( \\sim f) a p = f ( a , p)$. So $\\varepsilon_{B, C} \\circ ( \\sim f \\times 1_{A}) = f$.\n\n  $(\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A}))) ( p) ( q) = ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) ( p , q) = \\varepsilon_{B, C} ( g \\times 1_{A}) ( p , q) = \\varepsilon_{B, C} ( g p , q) = g ( p) ( q)$. So $\\sim ( \\varepsilon_{B, C} \\circ ( g \\times 1_{A})) = g$.\n\n\\subsection{By analogy with the proof that Dig is cartesian closed}\n\nThe most obvious way for proof attempt that $\\mathbf{Fcd}$ is cartesian closed is an analogy with the proof that\n$\\mathbf{Dig}$ is cartesian closed.\n\nConsider the long formula above. The proof would arise if we replace $x$ and $y$ in this formula with filters and operations and relations on set element with operations and relations on filters.\n\nThis proof could be simplified in either of two ways:\n\\begin{itemize}\n\\item replace $x$ and $y$ with ultrafilters, see [[Proof for Fcd using ultrafilters]];\n\\item replace $x$ and $y$ with sets (principal filter), see [[Proof for Fcd using sets]].\n\\end{itemize}\n\nThis is not quite easy however, because we need to calculate uncurrying for a entirely defined monovalued principal funcoid (what is essentially the same as a function of a $\\mathbf{Set}$-morphisms) taking either ultrafilters or principal filters as arguments. 
Such (generalized) uncurrying is not quite easy.\n\nTo sum what we need to prove:\n\\begin{itemize}\n\\item Transposition is a morphism.\n\\item Evaluation is a morphism.\n\\item $\\varepsilon_{B,C} \\circ ( \\sim f \\times 1_A) = f$ for $f : A \\rightarrow B \\times C$.\n\\item $\\sim ( \\varepsilon_{B,C} \\circ ( g \\times 1_A)) = g$ for $g : A \\rightarrow \\operatorname{HOM} ( B , C)$.\n\\end{itemize}\n\n\\subsection{Attempt to describe exponentials in Fcd}\n\n\\begin{itemize}\n\\item Exponential object $\\operatorname{HOM}(A,B)$ is the following endofuncoid:\n\\item\\begin{itemize}\n\\item Object $\\operatorname{Ob}\\operatorname{HOM}(A,B) = (\\operatorname{Ob} B)^{\\operatorname{Ob} A}$;\n\\item Graph is $\\operatorname{GR} \\operatorname{HOM} ( A , B) = \\uparrow^{\\mathsf{FCD}} \\setcond{ ( f , g) }{ f, g \\in (\\operatorname{Ob}B)^{\\operatorname{Ob} A} \\wedge \\uparrow^{\\mathsf{FCD}}g \\circ A \\circ \\uparrow^{\\mathsf{FCD}}f^{- 1} \\sqsubseteq B }$.\n\\end{itemize}\n\\item Transposition is uncurrying.\n\\item Evaluation is $\\varepsilon_{A, B} x = \\langle \\dom x \\rangle \\im x$.\n\\end{itemize}\n\nWe need to prove that the above defined are really an exponential and an evaluation.\n\nPossible ways to prove that $\\mathbf{Fcd}$ is cartesian closed follow:\n\nNEW IDEA: Instead take $\\GR\\operatorname{HOM}(A,B) =\n\\uparrow^{\\mathsf{FCD}}\\setcond{(\\dom p,\\im p)}{\np\\in\\End(B)^{\\End(A)}\\land\\supfun{p}A\\sqsubseteq B}$ (what's about other\nkinds of projections?)\n\n\\subsection{Proof for Fcd using sets}\n\nCurrying for sets is $\\langle f \\rangle ( X \\times Y) = \\bigcup \\langle \\langle \\sim f \\rangle X\n\\rangle Y$ (as it's easy to prove). This simple formula gives hope, but...\n\nIt does not work with sets because an analogy for sets of the last equality of the above mentioned long formula would be:\n\n$\\forall X, Y, V, W \\in \\mathscr{P} \\operatorname{Ob} A : \\left( X \\times V \\mathrel{[\nA \\times B]^{\\ast}} Y \\times W \\Rightarrow \\langle f \\rangle ( X \\times V)\n\\mathrel{[ C]^{\\ast}} \\langle f \\rangle ( Y \\times W) \\right) \\Rightarrow \\\\ f : A\n\\times B \\rightarrow C$\n\nbut this implication seems false.\n\nThe most obvious way for proof attempt that $\\mathbf{Fcd}$ is cartesian closed is an analogy with the proof that Dig is cartesian closed.\n\nUse the exponential object, transposition, and evaluation as defined in [[this page|Is category Fcd cartesian closed?]]\n\n\\subsection{Reducing to the fact that Dig is cartesian closed}\nIt is probably a simpler way to prove that $\\mathbf{Fcd}$ is cartesian closed by embedding it into $\\mathbf{Dig}$ (which is [[already known to be cartesian closed|Category Dig is cartesian closed]]).\n\n$\\mathbf{Fcd}$ can be embedded into $\\mathbf{Dig}$ by the formulas:\n\\begin{itemize}\n\\item $A \\mapsto \\langle A \\rangle$;\n\\item $f \\mapsto \\langle f \\rangle$.\n\\end{itemize}\n\nThat this really maps a morphism of $\\mathbf{Fcd}$ into a morphism of $\\mathbf{Dig}$ follows from the fact that $\\langle g\\circ f\\rangle = \\langle g\\rangle\\circ\\langle f\\rangle$.\n\nObviously this embedding (denote it $T$) is an injective (both on objects and morphisms) functor.\n\nWe will define:\n\\begin{itemize}\n\\item $\\varepsilon^{\\mathbf{Fcd}}_{A, B} = T^{-1} \\varepsilon^{\\mathbf{Dig}}_{T A, T B}$;\n\\item $\\sim^{\\mathbf{Fcd}} f = T^{-1} \\sim^{\\mathbf{Dig}} T f$.\n\\end{itemize}\n\nDue to functoriality and injectivity of $T$ it is enough to prove that above defined 
$\\varepsilon^{\\mathbf{Fcd}}_{A, B}$ and $\\sim^{\\mathbf{Fcd}} f$ exist and are morphisms of $\\mathbf{Fcd}$.\n\n$\\varepsilon^{\\mathbf{Dig}}_{T A, T B} \\ne T\\varepsilon^{\\mathbf{Fcd}}_{A, B}$ because $\\varepsilon^{\\mathbf{Dig}}_{T A, T B}$ accepts ordered pairs as the argument and $T \\varepsilon^{\\mathbf{Fcd}}_{A, B}$ accepts sets as the argument. So this is a dead end. Can the proof idea be salvaged?\n\n\\section{Is category Rld cartesian closed?}\n\nWe may attempt to prove that $\\mathbf{Rld}$ is cartesian closed by embedding it into supposedly cartesian closed category $\\mathbf{Fcd}$ by the function $\\rho$:\n\n$\\langle \\rho f \\rangle x = f \\circ x \\quad \\text{and} \\quad \\langle \\rho f^{- 1} \\rangle y = f^{- 1} \\circ y$.\n\nTODO: More to write on this topic.\n", "meta": {"hexsha": "506d1b78d1f3d21708a8ceca5b9a618644ec73f1", "size": 44826, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-product.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-product.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-product.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.775390625, "max_line_length": 602, "alphanum_fraction": 0.659594878, "num_tokens": 16874, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.5637048796084936}}
{"text": "\\documentclass{article}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\\usepackage{hyperref}\n\\usepackage{enumerate}\n\\usepackage[margin=2cm]{geometry}\n\n\\title{Augmented Kalman filter specialization}\n\\author{Darjus Hosszejni}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Model specifications}\n\n\\subsection{General}\n\nThe model, conditional on mixture states (from de Jong), is\n\\begin{align*}\ny_t & = X_t\\beta+Z_t\\alpha_t+G_tu_t, \\qquad t=1,2,\\dots,n, \\\\\n\\alpha_{t+1} & = W_t\\beta+T_t\\alpha_t+H_tu_t, \\qquad t=0,1,2,\\dots,n,\n\\end{align*}\nwhere $\\alpha_0=0$, and $u_t\\sim \\mathcal{N}(0,\\sigma_g^2I)$.\n\n\\subsection{Our case}\n\nDenote $\\gamma_{ts}=d_t\\rho\\sigma\\exp(m_{ts}/2)$. Depending on the mixture state $s$, we have\n\\begin{align*}\ny_t & = m_{ts}+\\alpha_t+v_{ts}u_{t1}, \\\\\n\\alpha_{t+1} & = \\mu(1-\\phi)+a_{ts}\\gamma_{ts}+\\phi\\alpha_t+b_{ts}v_{ts}\\gamma_{ts}u_{t1}+\\sigma\\sqrt{1-\\rho^2}u_{t2}.\n\\end{align*}\n\n\\subsection{Pattern matching}\n\nThe parameters from the general setup are substituted here according to below.\n\\begin{align*}\n\\beta & = \\begin{pmatrix} 1 \\\\ 1 \\\\ \\mu(1-\\phi) \\end{pmatrix}, \\\\\nZ_t & = 1, \\\\\nT_t & = \\phi, \\\\\n\\sigma_g & = 1, \\\\\nG_t & = \\begin{pmatrix} v_{ts} & 0 \\end{pmatrix}, \\\\\nH_0 & = \\begin{pmatrix} 0 & \\frac{\\sigma}{\\sqrt{1-\\phi^2}} \\end{pmatrix}, \\\\\nH_t & = \\begin{pmatrix} b_{ts}v_{ts}\\gamma_{ts} & \\sigma\\sqrt{1-\\rho^2} \\end{pmatrix}, \\\\\nW_0 & = \\begin{pmatrix} 0 & 0 & \\frac1{1-\\phi} \\end{pmatrix}, \\\\\nW_t & = \\begin{pmatrix} 0 & a_{ts}\\gamma_{ts} & 1 \\end{pmatrix}, \\\\\nX_t & = \\begin{pmatrix} m_{ts} & 0 & 0 \\end{pmatrix}.\n\\end{align*}\n\n\\section{Augmented Kalman filter}\n\n\\subsection{General case}\n\nDenote $\\eta_t=F_tu_t$ for $t=0,1,\\dots,n$. 
Then the augmented Kalman filter for $t=1,\\dots,n$ is\n\\begin{align*}\ne_t & = y_t-X_t\\beta-Z_ta_t, \\\\\nD_t & = Z_tP_tZ_t^\\prime+G_tG_t^\\prime, \\\\\nK_t & = (T_tP_tZ_t^\\prime+H_tG_t^\\prime)D_t^{-1}, \\\\\na_{t+1} & = W_t\\beta+T_ta_t+K_te_t, \\\\\nP_{t+1} & = T_tP_tL_t^\\prime+H_tJ_t^\\prime,\n\\end{align*}\nwith values\n\\begin{align*}\na_1 & = W_0\\beta, \\\\\nP_1 & = H_0H_0^\\prime, \\\\\nL_t & = T_t-K_tZ_t, \\\\\nJ_t & = H_t-K_tG_t.\n\\end{align*}\n\nWe also consider extra equations with the decomposition $\\beta=b+B\\mu$, (from Nakajima, Appendix B),\n\\begin{align*}\nf_t & = y_t-X_tb-Z_ta_t^*, \\\\\na_{t+1}^* & = W_tb+T_ta_t^*+K_tf_t, \\\\\nF_t & = X_tB-Z_tA_t^*, \\\\\nA_{t+1}^* & = -W_tB+T_tA_t^*+K_tF_t,\n\\end{align*}\nwith values\n\\begin{align*}\na_1^* & = W_0b, \\\\\nA_1^* & = -W_0B.\n\\end{align*}\nNote that $e_t = f_t-F_t\\mu$ and $a_t = a_t^*-A_t^*\\mu$.\n\nThen the posterior of $\\mu$ given $y$ is $\\mathcal{N}(Q_{n+1}^{-1}q_{n+1},Q_{n+1}^{-1})$, where the prior of $\\mu$ is $\\mathcal{N}(c, C)$, and\n\\begin{align*}\nq_{t+1} & = q_t+F_t^\\prime D_t^{-1}f_t, \\qquad q_1=C^{-1}c, \\\\\nQ_{t+1} & = Q_t+F_t^\\prime D_t^{-1}F_t, \\qquad Q_1=C^{-1}.\n\\end{align*}\n\nThen the log likelihood of $y$ is given by\n\\begin{equation*}\n\\log f(y) = -0.5\\left(\\sum_{t=1}^n\\log\\left(\\lvert\\text{det}(D_t)\\rvert\\right)+\\log\\left(\\lvert\\text{det}(Q_{t+1})\\rvert\\right)+\\sum_{t=1}^n f_t^\\prime D_t^{-1}f_t + c^\\prime C^{-1}c - q_{n+1}^\\prime Q_{n+1}^{-1}q_{n+1}\\right) + \\text{constant}.\n\\end{equation*}\n\n\\subsection{Our case}\n\nWe have\n\\begin{align*}\nF_t & = H_t, \\\\\nb & = \\begin{pmatrix} 1 \\\\ 1 \\\\ 0 \\end{pmatrix}, \\\\\nB & = \\begin{pmatrix} 0 \\\\ 0 \\\\ \\mu(1-\\phi) \\end{pmatrix}, \\\\\nc & = \\mu_\\mu, \\\\\nC & = \\sigma_\\mu^2.\n\\end{align*}\n\nDenote $\\gamma_{ts}=d_t\\rho\\sigma\\exp(m_{ts}/2)$ and $h_{ts}=b_{ts}v_{ts}\\gamma_{ts}$. We don't calculate $e_t$ or $a_t$, we deduce them later. Then\n\\begin{align*}\nD_t & = P_t+v_{ts}^2, \\\\\nK_t & = \\frac{\\phi P_t+h_{ts}v_{ts}}{D_t}, \\\\\nP_{t+1} & = \\phi P_tL_t+h_{ts}j_{t1} + j_{t2}^2, \\\\\nf_t & = y_t - m_{ts} - a_t^*, \\\\\nF_t & = -A_t^*, \\\\\na_{t+1}^* & = a_{ts}\\gamma_{ts}+\\phi a_t^* + K_tf_t, \\\\\nA_{t+1}^* & = \\phi-1+\\phi A_t^*+K_tF_t,\n\\end{align*}\nwith values\n\\begin{align*}\na_1^* & = 0, \\\\\nA_1^* & = -1, \\\\\nP_1 & = \\frac{\\sigma^2}{1-\\phi^2}, \\\\\nL_t & = \\phi-K_t, \\\\\nJ_t & = \\begin{pmatrix} h_{ts}-K_tv_{ts} & \\sigma\\sqrt{1-\\rho^2} \\end{pmatrix}.\n\\end{align*}\nDon't confuse $a_t$ with $a_{ts}$, they are completely different! 
The latter is the constant $a$ specified for the approximating Gaussian mixture in state $s$.\n\nAbout the posterior of $\\mu$ given $y$,\n\\begin{align*}\nq_{t+1} & = q_t+F_tf_t/D_t, \\qquad q_1=\\mu_\\mu/\\sigma_\\mu^2, \\\\\nQ_{t+1} & = Q_t+F_t^2/D_t, \\qquad Q_1=1/\\sigma_\\mu^2.\n\\end{align*}\n\nAnd the log likelihood of $y$,\n\\begin{equation*}\n\\log f(y) = -0.5\\left(\\sum_{t=1}^n\\log\\left(\\lvert D_t\\rvert\\right)+\\log\\left(\\lvert Q_{t+1}\\rvert\\right)+\\sum_{t=1}^n f_t^2/D_t + \\mu_\\mu^2/\\sigma_\\mu^2 - q_{n+1}^2/Q_{n+1}\\right) + \\text{constant}.\n\\end{equation*}\n\n\\section{Simulation smoother}\n\n\\subsection{General case}\n\nThen, for $t=n,\\dots,1$, the simulation smoother is\n\\begin{align*}\nC_t & = F_t(I-G_t^\\prime D_t^{-1}G_t-J_t^\\prime U_tJ_t)F_t^\\prime, \\\\\n\\varepsilon_t & \\sim\\mathcal{N}(0, \\sigma_g^2C_t), \\\\\nV_t & = F_t(G_t^\\prime D_t^{-1}Z_t+J_t^\\prime U_tL_t), \\\\\nr_{t-1} & = Z_t^\\prime D_t^{-1}e_t + L_t^\\prime r_t-V_t^\\prime C_t^{-1}\\varepsilon_t, \\\\\nU_{t-1} & = Z_t^\\prime D_t^{-1}Z_t+L_t^\\prime U_tL_t+V_t^\\prime C_t^{-1}V_t, \\\\\n\\eta_t & = F_t(G_t^\\prime D_t^{-1}e_t+J_t^\\prime r_t) + \\varepsilon_t,\n\\end{align*}\nwith values\n\\begin{align*}\nr_n &=0, \\\\\nU_n &=0, \\\\\n\\eta_0 &= F_0H_0^\\prime r_0 + \\varepsilon_0, \\\\\n\\varepsilon_0 &\\sim\\mathcal{N}(0,\\sigma_g^2C_0), \\\\\nC_0 &= F_0(I-H_0^\\prime U_0H_0)F_0^\\prime.\n\\end{align*}\n\n\\subsection{Our case}\n\n\\begin{align*}\nC_t & = (h_{ts}^2 + \\sigma^2(1-\\rho^2))-(h_{ts}v_{ts})^2/D_t-U_t(H_tJ_t^\\prime)^2, \\\\\n\\varepsilon_t & \\sim\\mathcal{N}(0, C_t), \\\\\nV_t & = h_{ts}v_{ts}/D_t+U_tL_t(h_{ts}j_{t1} + j_{t2}^2), \\\\\nr_{t-1} & = e_t/D_t + L_tr_t-V_t\\varepsilon_t/C_t, \\\\\nU_{t-1} & = 1/D_t+U_tL_t^2+V_t^2/C_t, \\\\\n\\eta_t & = h_{ts}v_{ts}e_t/D_t+(h_{ts}j_{t1} + j_{t2}^2)r_t + \\varepsilon_t,\n\\end{align*}\nwith values\n\\begin{align*}\nr_n &=0, \\\\\nU_n &=0, \\\\\n\\eta_0 &= \\frac{\\sigma^2}{1-\\phi^2}r_0 + \\varepsilon_0, \\\\\n\\varepsilon_0 &\\sim\\mathcal{N}(0,C_0), \\\\\nC_0 &= \\frac{\\sigma^2}{1-\\phi^2}\\left(1-\\frac{\\sigma^2}{1-\\phi^2}U_0\\right).\n\\end{align*}\n\n\\end{document}\n\n", "meta": {"hexsha": "60aa45f9cc01323e927d7fb30cdcfc47544476af", "size": 6079, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/other_calculations/Model_specifications.tex", "max_stars_repo_name": "hdarjus/master-thesis", "max_stars_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-23T12:51:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-23T12:51:22.000Z", "max_issues_repo_path": "thesis/other_calculations/Model_specifications.tex", "max_issues_repo_name": "hdarjus/master-thesis-WU", "max_issues_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/other_calculations/Model_specifications.tex", "max_forks_repo_name": "hdarjus/master-thesis-WU", "max_forks_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-12T00:39:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-12T00:39:19.000Z", "avg_line_length": 33.218579235, "max_line_length": 245, "alphanum_fraction": 0.6219772989, 
"num_tokens": 2725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8652240825770432, "lm_q2_score": 0.6513548646660542, "lm_q1q2_score": 0.5635679152127808}}
{"text": "\\documentclass[inequalities.tex]{subfile}\n\n\\begin{document}\n\t\n\t\\section{Tangent Line Trick}\\label{sec:tangent}\n\tThe \\textit{tangent line trick} is a very useful trick which has been around for quite some time. \\textcite{li_2006} demonstrates some problems with this trick. Here, we will discuss this in detail and show how it can tackle problems.\n\t\n\tImagine that we are given an inequality in $a_{1},\\ldots,a_{n}$. The inequality can be divided into a sum of expressions which is same for $a_{1},\\ldots,a_{n}$. That is it can be written as $f(a_{1})+\\ldots+f(a_{n})\\geq g(a_{1},\\ldots,a_{n})$. Now, we typically try to use Jensen's inequality in cases like this. But in many cases,  $f$ is not convex. Even so, we may still be able to prove something like\n\t\t\\begin{align*}\n\t\t\tf(x)\n\t\t\t\t& \\geq f(\\bar{a})+(x-\\bar{a})f'(\\bar{a})\n\t\t\\end{align*}\n\twhere $\\bar{a}=\\frac{a_{1}+\\ldots+a_{n}}{n}$. Summing this for $x=a_{1},\\ldots,a_{n}$, we have\n\t\t\\begin{align*}\n\t\t\tf(a_{1})+\\ldots+f(a_{n})\n\t\t\t\t& \\geq nf(\\bar{a})\n\t\t\\end{align*}\n\tThe motivation sort of comes from basic calculus. By \\textit{wishful thinking}, we may be able to prove that the slope between the points $(x,f(x))$ and $(\\bar{a},f(\\bar{a}))$ is at least the slope of the tangent line of $f(x)$ at $x=\\bar{a}$. This is where the name comes from. This trick is specially useful if you are given the quantity $a_{1}+\\ldots+a_{n}$. If the expressions involved are homogeneous, then you do not even need this value. You can use homogeneity to impose conditions such as $a+b+c=1$. For example, see this problem.\n\t\t\\begin{problem}\n\t\t\tLet $a,b,c$ be positive real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{a(b+c)}+\\dfrac{1}{b(c+a)}+\\dfrac{1}{c(a+b)}\n\t\t\t\t\t\t& \\geq \\dfrac{27}{2(a+b+c)^{2}}\n\t\t\t\t\\end{align*}\n\t\t\t\n\t\t\t\t\\begin{solution}\n\t\t\t\t\tThe inequality is homogeneous in $a,b,c$. So without loss of generality, we can normalize the inequality assuming that $a+b+c=1$. Then $0<a,b,c<1$, $\\bar{a}=1/3$ and we get the transformation\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\dfrac{1}{a(1-a)}+\\dfrac{1}{b(1-b)}+\\dfrac{1}{c(1-c)}\n\t\t\t\t\t\t\t\t& \\geq \\dfrac{27}{2}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe define\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{1}{a(1-a)}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tand attempt to prove the inequality that is required by tangent line trick. Since\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf'(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{2x-1}{x^{2}(1-x)^{2}}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\twe have $f(1/3)=9/2$ and $f'(1/3)=-27/4$. 
Thus, if we can prove the inequality\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& \\geq \\dfrac{9}{2}-\\dfrac{27}{4}\\left(x-\\dfrac{1}{3}\\right)\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\twe are done since we will have\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(a)+f(b)+f(c)\n\t\t\t\t\t\t\t\t& \\geq 3\\cdot f\\left(\\dfrac{1}{3}\\right)\\\\\n\t\t\t\t\t\t\t\t& = \\dfrac{27}{2}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tSo we should check if the tangent line inequality indeed holds.\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\dfrac{1}{x(1-x)}\n\t\t\t\t\t\t\t\t& \\geq \\dfrac{9}{2}-\\dfrac{27}{4}\\left(x-\\dfrac{1}{3}\\right)\\\\\n\t\t\t\t\t\t\t\\iff \\dfrac{1}{x(1-x)}\n\t\t\t\t\t\t\t\t& \\geq \\dfrac{54-81x+27}{12}\\\\\n\t\t\t\t\t\t\t\t& \\geq \\dfrac{27(1-x)}{4}\\\\\n\t\t\t\t\t\t\t\\iff x(1-x)^{2}\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{4}{27}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe can check if this holds. If $g(x)=x(1-x)^{2}$, then\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tg'(x)\n\t\t\t\t\t\t\t\t& = -2x(1-x)+(1-x)^{2}\\\\\n\t\t\t\t\t\t\t\t& = (1-x)(1-3x)\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe have $g'(x)=0$ if $x\\in\\{1,1/3\\}$ and\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tg''(x)\n\t\t\t\t\t\t\t\t& = -3(1-x)-(1-3x)\\\\\n\t\t\t\t\t\t\t\t& = 6x-4\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tSince $0<x<1$ and $g''(1)=2>0$, $g''(1/3)=-2<0$, we have that $g(x)$ is maximum at $x=1/3$ in the interval $(0,1)$. Thus,\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tx(1-x)^{2}\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{1}{3}\\left(1-\\dfrac{1}{3}\\right)^{2}\\\\\n\t\t\t\t\t\t\t\t& = \\dfrac{27}{4}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe are finally done.\n\t\t\t\t\\end{solution}\n\t\t\t\n\t\t\t\t\\begin{remark}\n\t\t\t\t\tTangent line is a neat trick but it may always not be pretty. But before you go all in, you can easily check if the desired inequality follows from\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(a_{1})+\\ldots+f(a_{n})\n\t\t\t\t\t\t\t\t& \\geq nf(\\bar{a})\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tor not. If it does, there is a good chance that $f(x)\\geq f(\\bar{a})+(x-\\bar{a})f'(\\bar{a})$ holds as well. Another indication that this might work is that equality occurs for $a_{1}=\\ldots=a_{n}=\\bar{a}$. In order to avoid calculation with fractional values, we could also assume $a_{1}+\\ldots+a_{n}=n$ for normalization so $\\bar{a}=1$. Also, we may sometimes have to deal with $\\leq$ instead of $\\geq$. Let us see an example of this type below.\n\t\t\t\t\\end{remark}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}\n\t\t\tGiven positive real numbers $a,b,c$ such that $a+b+c\\geq 3$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{1}{a^{2}+b+c}+\\dfrac{1}{a+b^{2}+c}+\\dfrac{1}{a+b+c^{2}}\n\t\t\t\t\t\t& \\leq 1\n\t\t\t\t\\end{align*}\n\t\t\t\n\t\t\t\t\\begin{solution}\n\t\t\t\t\tWe have equality in the case $a=b=c=1$. So, we may be optimistic that tangent line trick will work here. 
Using $b+c\\geq 3-a$,\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\dfrac{1}{a^{2}+b+c}+\\dfrac{1}{a+b^{2}+c}+\\dfrac{1}{a+b+c^{2}}\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{1}{a^{2}+3-a}+\\dfrac{1}{b^{2}+3-b}+\\dfrac{1}{c^{2}+3-c}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tSo we are done if we can prove that\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\dfrac{1}{a^{2}+3-a}+\\dfrac{1}{b^{2}+3-b}+\\dfrac{1}{c^{2}+3-c}\n\t\t\t\t\t\t\t\t& \\leq 1\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tLet\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{1}{x^{2}+3-x}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe have that\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf'(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{1-2x}{(x^{2}+3-x)^{2}}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tSince $f(1)=1/3$ and $f'(1)=-1/9$, we need to prove the tangent line inequality\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{1}{3}-\\dfrac{1}{9}(x-1)\\\\\n\t\t\t\t\t\t\t\\iff \\dfrac{1}{x^{2}+3-x}\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{4-x}{9}\\\\\n\t\t\t\t\t\t\t\\iff 5x^{2}+3-7x-x^{3}\n\t\t\t\t\t\t\t\t& \\geq 0\\\\\n\t\t\t\t\t\t\t\\iff x^{3}-5x^{2}+7x-3\n\t\t\t\t\t\t\t\t& \\leq 0\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe can see that $(x-3)$ is a factor of this polynomial. So we can factor it easily.\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\iff x^{2}(x-3)-2x(x-3)+x-3\n\t\t\t\t\t\t\t\t& \\leq 0\\\\\n\t\t\t\t\t\t\t\\iff (x-3)(x^{2}-2x+1)\n\t\t\t\t\t\t\t\t& \\leq 0\\\\\n\t\t\t\t\t\t\t\\iff (3-x)(x-1)^{2}\n\t\t\t\t\t\t\t\t& \\geq 0\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tThis is obviously true. And summing up the tangent line inequality, we get the conclusion.\n\t\t\t\t\\end{solution}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[USAMO $2003$]\\label{prob:usamo2003}\n\t\t\tLet $a,b,c$ be positive real numbers. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{(2a+b+c)^{2}}{2a^{2}+(b+c)^{2}}+\\dfrac{(2b+c+a)^{2}}{2b^{2}+(c+a)^{2}}+\\dfrac{(2c+a+b)^{2}}{2c^{2}+(a+b)^{2}}\n\t\t\t\t\t\t& \\leq 8\n\t\t\t\t\\end{align*}\n\t\t\t\n\t\t\t\t\\begin{solution}\n\t\t\t\t\tDue to homogeneity, we can assume that $0<a,b,c<1$ and $a+b+c=1$ without loss of generality. Then we are required to prove\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\t\\dfrac{(1+a)^{2}}{2a^{2}+(1-a)^{2}}+\\dfrac{(1+b)^{2}}{2b^{2}+(1-b)^{2}}+\\dfrac{(1+c)^{2}}{2c^{2}+(1-c)^{2}}\n\t\t\t\t\t\t\t\t& \\leq 8\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe can see that equality occurs for $a=b=c=1/3$. 
Define\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{(1+x)^{2}}{2x^{2}+(1-x)^{2}}\\\\\n\t\t\t\t\t\t\t\t& = \\dfrac{(1+x)^{2}}{3x^{2}-2x+1}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tThe tangent line inequality in this case is\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& \\leq f(\\bar{a})+(x-\\bar{a})f'(\\bar{a})\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\twhere\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf'(x)\n\t\t\t\t\t\t\t\t& = \\dfrac{2(3x^{2}-2x+1)(1+x)-2(1+x)^{2}(3x-1)}{(3x^{2}-2x+1)^{2}}\\\\\n\t\t\t\t\t\t\t\t& = \\dfrac{2(1+x)}{3x^{2}-2x+1}-\\dfrac{2(1+x)^{2}(3x-1)}{(3x^{2}-2x+1)^{2}}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tWe have $f(1/3)=8/3$ and $f'(1/3)=4$ so if we can prove that\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{8}{3}+4\\left(x-\\dfrac{1}{3}\\right)\\\\\n\t\t\t\t\t\t\t\t& = \\dfrac{4(3x+1)}{3}\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tfor $0<x<1$, then the inequality follows.\n\t\t\t\t\t\t\\begin{align*}\n\t\t\t\t\t\t\tf(x)\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{4(3x+1)}{3}\\\\\n\t\t\t\t\t\t\t\\iff \\dfrac{(1+x)^{2}}{3x^{2}-2x+1}\n\t\t\t\t\t\t\t\t& \\leq \\dfrac{12x+3}{3}\\\\\n\t\t\t\t\t\t\t\\iff 36x^{3}-15x^{2}-2x+1\n\t\t\t\t\t\t\t\t\t& \\geq 0\\\\\n\t\t\t\t\t\t\t\\iff (3x-1)^{2}(4x+1)\n\t\t\t\t\t\t\t\t& \\geq 0\n\t\t\t\t\t\t\\end{align*}\n\t\t\t\t\tThis is obviously true. So, we have the inequality.\n\t\t\t\t\\end{solution}\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "22d7f888ee6910480761e4eaaf6a302a62d998a6", "size": 8028, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tangent.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "tangent.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tangent.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.9591836735, "max_line_length": 540, "alphanum_fraction": 0.5401096163, "num_tokens": 3280, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548511303338, "lm_q2_score": 0.865224072151174, "lm_q1q2_score": 0.5635678967104091}}
{"text": " \\documentclass [12pt]{article} \n\n\\usepackage {amsmath}\n\\usepackage {amsthm}\n\\usepackage {amssymb}\n\\usepackage {graphicx} \n\\usepackage {float}\n\\usepackage {multirow}\n\\usepackage {xcolor}\n\\usepackage {algorithmic}\n\\usepackage [ruled,vlined,commentsnumbered,titlenotnumbered]{algorithm2e} \\usepackage {array} \n\\usepackage {booktabs} \n\\usepackage {url} \n\\usepackage {parskip} \n\\usepackage [margin=1in]{geometry} \n\\usepackage [T1]{fontenc} \n\\usepackage {cmbright} \n\\usepackage [many]{tcolorbox} \n\\usepackage [colorlinks = true,\n            linkcolor = blue,\n            urlcolor  = blue,\n            citecolor = blue,\n            anchorcolor = blue]{hyperref} \n\\usepackage {enumitem} \n\\usepackage {xparse} \n\\usepackage {verbatim}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\\lstset { %\n    language=C++,\n    backgroundcolor=\\color{black!5}, % set backgroundcolor\n    basicstyle=\\footnotesize,% basic font setting\n}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{remark}{Remark}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\n\n\n\\DeclareTColorBox {Solution}{}{breakable, title={Solution}} \\DeclareTColorBox {Solution*}{}{breakable, title={Solution (provided)}} \\DeclareTColorBox {Instruction}{}{boxrule=0pt, boxsep=0pt, left=0.5em, right=0.5em, top=0.5em, bottom=0.5em, arc=0pt, toprule=1pt, bottomrule=1pt} \\DeclareDocumentCommand {\\Expecting }{+m}{\\textbf {[We are expecting:} #1\\textbf {]}} \\DeclareDocumentCommand {\\Points }{m}{\\textbf {(#1 pt.)}} \n\n\\begin {document} \n\n\\vspace {1em} \n\\begin {Instruction} \nAdapted From Virginia Williams' lecture notes.\n\\end {Instruction}  \n\n{\\LARGE \\textbf {COMP 285 (NC A\\&T, Spr `22)}\\hfill \\textbf {Lecture 18} } \n\n\\begin{centering}\n\\section*{Strongly Connected Components}\n\\end{centering}\n \n \\section{Connected components in undirected graphs} \n\n A connected component of an undirected graph $ G = (V, E)$ is a maximal set of vertices $S \\subset V$ such that for each $u \\in S$ and $v \\in S$, there exists a path in $G$ from vertex $u$ to vertex $v$.\n\n\\begin{definition}[Formal Definition]\nLet $u \\equiv v$ if and only if $G$ has a path from vertex $u$ to vertex $v$ . This is an equivalence relation (it is symmetric, reflexive, and transitive). Then, a connected component of $G$ is an equivalence class of this relation $\\equiv$. Recall that the equivalence class of a vertex $u$ over a relation $\\equiv$ is the set of all vertices $v$ such that $u \\equiv v$.\n\\end{definition}\n\n \\subsection{Algorithm to find connected components in a undirected graph}\n\nIn order to find a connected component of an undirected graph, we can just pick a vertex and start doing a search (BFS or DFS) from that vertex. All the vertices we can reach from that vertex compose a single connected component. To find all the connected components, then, we just need to go through every vertex, finding their connected components one at a time by searching the graph. Note however that we do not need to search from a vertex v if we have already found it to be part of a previous connected component. Hence, if we keep track of what vertices we have already encountered, we will only need to perform one BFS for each connected component.\n\n\n\\begin{proof}\nWhen searching from a particular vertex v , we will clearly never reach any nodes outside the connected component with DFS or BFS. 
So we just need to prove that we will in fact reach all connected vertices. We can prove this by induction: Consider the vertices at minimum distance $i$ from vertex $v$. Call these vertices ``level $i$'' vertices. If BFS or DFS successfully reaches all vertices at level $i$, then they must reach all vertices at level $i + 1$, since each vertex at distance $i + 1$ from v must be connected to some vertex at distance $i from v$ . This is the inductive step, and for the base case, DFS or BFS will clearly reach all vertices at level $0$ (just v itself). So indeed this algorithm will find each connected component correctly.\n\\end{proof}\n\nThe searches in the above algorithm take total time $O(|E| + |V |),$ because each BFS or DFS call takes linear time in the number of edges and vertices for its component, and each component is only searched once, so all searches will take time linear in the total number of edges and vertices.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.5]{scc_graph.png}\n\\caption{The strongly connected components of a directed graph}\n\\label{fig:scc_graph}\n\\end{figure}\n\n\\section{Connectivity in directed graphs}\nHow can we extend the notion of connected components to directed graphs?\n\n\\begin{definition}[Strongly connected component (SCC)]\nA strongly connected component in a directed graph $G = (V, E)$ is a maximal set of vertices $S \\subset V$ such that each vertex $v \\in S$ has a path to each other vertex $u \\in S$. This is the same as the definition using equivalence classes for undirected graphs, except now $u \\equiv v$ if and only if there is a path from $u$ to $v$ AND a path from $v$ to $u$.\n\\end{definition}\n\n\\begin{definition}[Weakly connected compnent]\nLet $G = (V, E)$ be a directed graph, and let $G'$ be the undirected graph that is formed by replacing each directed edge of $G$ with an undirected edge. Then the weakly connected components of $G$ are exactly the connected components of $G'$.\n\\end{definition}\n\n\\section{Algorithm to find strongly connected components of a directed graph}\n\nThe algorithm we present is essentially two passes of depth-first search, plus some extremely clever additional book-keeping. The algorithm is described in a top-down fashion in Algorithms \\ref{alg:1} to \\ref{alg:3}. Algorithm \\ref{alg:1} describes the top level of the algorithm, and Algorithm \\ref{alg:2} and Algorithm \\ref{alg:3} describe the subroutines DFS-Loop and DFS. Read these procedures carefully before proceeding to the next section.\n\n\\begin{algorithm}\n\\caption{The top level of our SCC algorithm. The $f$-values and leaders are computed in the first and second calls to DFS-Loop, respectively (see below)}\n\\label{alg:1}\n\\begin{algorithmic}\n\\STATE \\textbf{INPUT:} A directed graph $G = (V,E)$, in adjacency list representation. Assume that the vertices $V$ are labeled $1, 2, 3, \\cdots, n$.\n\\STATE $G^{\\text{rev}} \\gets$ the graph $G$ after the orientation of all arcs have been reversed.\n\\STATE Run the DFS-Loop subroutine on $G^{\\text{rev}}$, processing vertices in any arbitrary order, to obtain a finishing time $f(v)$ for each vertex $v \\in V$.\n\\STATE Run the DFS-Loop subrouting on $G$, processing vertices in creasing order of $f(v)$, to assign a ``leader'' to each vertex $v \\in V$. 
The leader of a vertex $v$ will be the source vertex that the DFS that discovered $v$ started from.\n\\STATE The strongly connected components of $G$ correspond to vertices of $G$ that share a common leader.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}\n\\caption{The DFS-Loop subroutine}\n\\label{alg:2}\n\\begin{algorithmic}\n\\STATE \\textbf{INPUT:} A directed graph $G = (V,E)$, in adjacency list representation.\n\\STATE Let global variable $t \\gets 0$ \\texttt{/* This keeps track of the number of vertices that have been fully explored */}\n\\STATE Let global variable $s \\gets $NULL \\texttt{/* This keeps track of the vertex from which the last DFS call was invoked */}\n\\FOR{$i = n, n-1, \\cdots, 1$}\n    \\STATE \\texttt{// In the first call, vertices are labeled $1, 2, \\cdots, n$ arbitrarily. In the second call, vertices are labeled by their $f(v)$-values from the first call.}\n    \\IF{$i$ not yet explored}\n        \\STATE Let $s \\gets i$ \\texttt{/* Set the current source $s$ to $i$ All vertices discovered from the below DFS call will have their leaders set to $s$ */}\n        \\STATE DFS(G, i)\n    \\ENDIF\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{The DFS subroutine. The $f$-values only need to be computed during the first call to DFS-Loop, and the ledaer values only need to be computed during the second call to DFS-Loop.}\n\\label{alg:3}\n\\begin{algorithmic}\n\\STATE \\textbf{INPUT:} A directed graph $G = (V,E)$, in adjacency list representation, and source vertex $i \\in V$.\n\\STATE Mark $i$ as explored. \\texttt{/* It remains explored for the entire duration of the DFS-Loop call */}\n\\STATE leader($i) \\gets s$\n\\FOR{arc($i,j$) in $G$}\n    \\IF{$j$ noto yet explored}\n        \\STATE DFS($G,j$)\n    \\ENDIF\n\\ENDFOR\n\\STATE $t \\gets t + 1$\n\\STATE Let $f(i) \\gets t$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{remark} \nThe algorithm in Algorithm \\ref{alg:1} is a bit different than the one in CLRS/Lecture! The difference is that in these notes, we first run DFS on the reversed graph, and then we run it again on the original; in CLRS, we first run DFS on the original, and then the second time on the reversed graph. Is it the case that one of these two textbooks has messed it up? In fact, it doesn't matter: the SCCs of$G$ are the same as the SCCs of $G^{\\text{rev}}$ , so both algorithms find exactly the same SCC decomposition.\n\\end{remark} \n\nAs we've seen, each invocation of DFS-Loop can be implemented in linear time (i.e., $O(|E|+ |V |)$), so this whole algorithm will take linear time (the bookkeeping of leaders and finishing times just adds a constant number of operations per each node).\n\n\\section{An Example}\n \nBut why on earth should this algorithm work? An example should increase its plausibility (though it certainly doesn't constitute a proof of correctness). Figure \\ref{fig:g_rev_scc} displays a reversed graph $G^{\\text{rev}}$, with its vertices numbered arbitrarily, and the $f$-values computed in the first call to DFS-Loop. In more detail, the first DFS is initiated at node $9$. The search must proceed next to node $6$. DFS then has to make a choice between two different adjacent nodes; we have shown the $f$-values that ensue when DFS visits node $3$ before node $8$.\\footnote{Different choices of which node to visit next generate different sets of $f$-values, but our proof of correctness will apply to all ways of resolving these choices}. 
When DFS visits node $3$ it gets stuck; at this point node $3$ is assigned a finishing time of $1$. DFS backtracks to node $6$, proceeds to node $8$, then node $2$, and then node $5$. DFS then backtracks all the way back to node $9$, resulting in nodes $5, 2, 8, 6$, and $9$ receiving the finishing times $2, 3, 4, 5$, and $6$, respectively. Execution returns to DFS-Loop, and the next (and final) call to DFS begins at node $7$.\n\n\\begin{figure}[ht!]\n\\includegraphics[scale=0.8]{g_rev_scc.png}\n\\caption{Example exuection of the strongly connected components algorithm. Nodes tare labaled arbitrarily and their finishing times are shown.}\n\\label{fig:g_rev_scc}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\includegraphics[scale=0.8]{scc_alg.png}\n\\caption{Example execution of the strongly connected components algorithm. Nodes are labeled by their finishing times and their leaders are shown.}\n\\label{fig:scc_alg}\n\\end{figure}\n\nFigure \\ref{fig:scc_alg} shows the original graph (with all arcs now unreversed), with nodes labeled withtheir finishing times. The magic of the algorithm is now evident, as the SCCs of $G$ present themselves to us in order: since we call DFS on the nodes in decreasing order of their finishing times, the first call to DFS discovers the nodes 7-9 (with leader 9); the second the nodes 1,5, and 6 (with leader 6); and the third the remaining three nodes (with leader 4).\n\n\\end{document}", "meta": {"hexsha": "4c404193a3a659cf503f0abba6b9907517998d6a", "size": 11407, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/lectures/lecture18.tex", "max_stars_repo_name": "facebookEIR/algorithms-course", "max_stars_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-16T02:47:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-16T02:47:46.000Z", "max_issues_repo_path": "assets/lectures/lecture18.tex", "max_issues_repo_name": "facebookEIR/algorithms-course", "max_issues_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/lectures/lecture18.tex", "max_forks_repo_name": "facebookEIR/algorithms-course", "max_forks_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-20T21:52:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-21T03:00:16.000Z", "avg_line_length": 67.4970414201, "max_line_length": 1176, "alphanum_fraction": 0.7431401771, "num_tokens": 3046, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056295505783, "lm_q2_score": 0.8175744761936437, "lm_q1q2_score": 0.5635586890171438}}
{"text": "\\section{Sequential colimits}\n\n\\emph{Note: This chapter currently contains only the statements of the definitions and theorems, but no proofs. I hope to make a complete version available soon.}\n\n\\subsection{The universal property of sequential colimits}\n\nType sequences are diagrams of the following form.\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\nTheir formal specification is as follows.\n\n\\begin{defn}\nAn \\define{(increasing) type sequence} $\\mathcal{A}$ consists of\n\\begin{align*}\nA & : \\N\\to\\UU \\\\\nf & : \\prd{n:\\N} A_n\\to A_{n+1}. \n\\end{align*}\n\\end{defn}\n\nIn this section we will introduce the sequential colimit of a type sequence.\nThe sequential colimit includes each of the types $A_n$, but we also identify each $x:A_n$ with its value $f_n(x):A_{n+1}$. \nImagine that the type sequence $A_0\\to A_1\\to A_2\\to\\cdots$ defines a big telescope, with $A_0$ sliding into $A_1$, which slides into $A_2$, and so forth.\n\nAs usual, the sequential colimit is characterized by its universal property.\n\n\\begin{defn}\n\\begin{enumerate}\n\\item A \\define{(sequential) cocone} on a type sequence $\\mathcal{A}$ with vertex $B$ consists of\n\\begin{align*}\nh & : \\prd{n:\\N} A_n\\to B \\\\\nH & : \\prd{n:\\N} f_n\\htpy f_{n+1}\\circ H_n.\n\\end{align*}\nWe write $\\mathsf{cocone}(B)$ for the type of cones with vertex $X$.\n\\item Given a cone $(h,H)$ with vertex $B$ on a type sequence $\\mathcal{A}$ we define the map\n\\begin{equation*}\n\\mathsf{cocone\\usc{}map}(h,H) : (B\\to C)\\to \\mathsf{cocone}(B)\n\\end{equation*}\ngiven by $f\\mapsto (f\\circ h,\\lam{n}{x}\\mathsf{ap}_f(H_n(x)))$. \n\\item We say that a cone $(h,H)$ with vertex $B$ is \\define{colimiting} if $\\mathsf{cocone\\usc{}map}(h,H)$ is an equivalence for any type $C$. \n\\end{enumerate}\n\\end{defn}\n\n\\begin{thm}\\label{thm:sequential_up}\nConsider a cocone $(h,H)$ with vertex $B$ for a type sequence $\\mathcal{A}$. The following are equivalent:\n\\begin{enumerate}\n\\item The cocone $(h,H)$ is colimiting.\n\\item The cocone $(h,H)$ is inductive in the sense that for every type family $P:B\\to \\UU$, the map\n\\begin{align*}\n\\Big(\\prd{b:B}P(b)\\Big)\\to {}& \\sm{h:\\prd{n:\\N}{x:A_n}P(h_n(x))}\\\\ \n& \\qquad \\prd{n:\\N}{x:A_n} \\mathsf{tr}_P(H_n(x),h_n(x))={h_{n+1}(f_n(x))}\n\\end{align*}\ngiven by\n\\begin{equation*}\ns\\mapsto (\\lam{n}s\\circ h_n,\\lam{n}{x} \\mathsf{apd}_{s}(H_n(x)))\n\\end{equation*}\nhas a section.\n\\item The map in (ii) is an equivalence.\n\\end{enumerate}\n\\end{thm}\n\n\\subsection{The construction of sequential colimits}\n\nWe construct sequential colimits using pushouts.\n\n\\begin{defn}\nLet $\\mathcal{A}\\jdeq (A,f)$ be a type sequence. 
We define the type $A_\\infty$ as a pushout\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\n\\tilde{A}+\\tilde{A} \\arrow[r,\"{[\\idfunc,\\sigma_{\\mathcal{A}}]}\"] \\arrow[d,swap,\"{[\\idfunc,\\idfunc]}\"] & \\tilde{A} \\arrow[d,\"\\inr\"] \\\\\n\\tilde{A} \\arrow[r,swap,\"\\inl\"] & A_\\infty.\n\\end{tikzcd}\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nThe type $A_\\infty$ comes equipped with a cocone structure consisting of\n\\begin{align*}\n\\mathsf{seq\\usc{}in} & : \\prd{n:\\N} A_n\\to A_\\infty \\\\\n\\mathsf{seq\\usc{}glue} & : \\prd{n:\\N}{x:A_n} \\mathsf{in}_n(x)=\\mathsf{in}_{n+1}(f_n(x)).\n\\end{align*}\n\\end{defn}\n\n\\begin{constr}\nWe define\n\\begin{align*}\n\\mathsf{seq\\usc{}in}(n,x)\\defeq \\inr(n,x) \\\\\n\\mathsf{seq\\usc{}glue}(n,x)\\defeq \\ct{\\glue(\\inl(n,x))^{-1}}{\\glue(\\inr(n,x))}.\n\\end{align*}\n\\end{constr}\n\n\\begin{thm}\nConsider a type sequence $\\mathcal{A}$, and write $\\tilde{A}\\defeq\\sm{n:\\N}A_n$. Moreover, consider the map\n\\begin{equation*}\n\\sigma_{\\mathcal{A}}:\\tilde{A}\\to\\tilde{A}\n\\end{equation*}\ndefined by $\\sigma_{\\mathcal{A}}(n,a)\\defeq (n+1,f_n(a))$. Furthermore, consider a cocone $(h,H)$ with vertex $B$.\nThe following are equivalent:\n\\begin{enumerate}\n\\item The cocone $(h,H)$ with vertex $B$ is colimiting.\n\\item The defining square\n\\begin{equation*}\n\\begin{tikzcd}[column sep=large]\n\\tilde{A}+\\tilde{A} \\arrow[r,\"{[\\idfunc,\\sigma_{\\mathcal{A}}]}\"] \\arrow[d,swap,\"{[\\idfunc,\\idfunc]}\"] & \\tilde{A} \\arrow[d,\"{\\lam{(n,x)}h_n(x)}\"] \\\\\n\\tilde{A} \\arrow[r,swap,\"{\\lam{(n,x)}h_n(x)}\"] & A_\\infty,\n\\end{tikzcd}\n\\end{equation*}\nof $A_\\infty$ is a pushout square.\n\\end{enumerate}\n\\end{thm}\n\n\\subsection{Descent for sequential colimits}\n\n\\begin{defn}\nThe type of \\define{descent data} on a type sequence $\\mathcal{A}\\jdeq (A,f)$ is defined to be\n\\begin{equation*}\n\\mathsf{Desc}(\\mathcal{A}) \\defeq \\sm{B:\\prd{n:\\N}A_n\\to\\UU}\\prd{n:\\N}{x:A_n}\\eqv{B_n(x)}{B_{n+1}(f_n(x))}.\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nWe define a map\n\\begin{equation*}\n\\mathsf{desc\\usc{}fam} : (A_\\infty\\to\\UU)\\to\\mathsf{Desc}(\\mathcal{A})\n\\end{equation*}\nby $B\\mapsto (\\lam{n}{x}B(\\mathsf{seq\\usc{}in}(n,x)),\\lam{n}{x}\\mathsf{tr}_B(\\mathsf{seq\\usc{}glue}(n,x)))$.\n\\end{defn}\n\n\\begin{thm}\nThe map \n\\begin{equation*}\n\\mathsf{desc\\usc{}fam} : (A_\\infty\\to\\UU)\\to\\mathsf{Desc}(\\mathcal{A})\n\\end{equation*}\nis an equivalence.\n\\end{thm}\n\n\\begin{defn}\nA \\define{cartesian transformation} of type sequences from $\\mathcal{A}$ to $\\mathcal{B}$ is a pair $(h,H)$ consisting of\n\\begin{align*}\nh & : \\prd{n:\\N} A_n\\to B_n \\\\\nH & : \\prd{n:\\N} g_n\\circ h_n \\htpy h_{n+1}\\circ f_n,\n\\end{align*}\nsuch that each of the squares in the diagram\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[d,swap,\"h_0\"] \\arrow[r,\"f_0\"] & A_1 \\arrow[d,swap,\"h_1\"] \\arrow[r,\"f_1\"] & A_2 \\arrow[d,swap,\"h_2\"] \\arrow[r,\"f_2\"] & \\cdots \\\\\nB_0 \\arrow[r,swap,\"g_0\"] & B_1 \\arrow[r,swap,\"g_1\"] & B_2 \\arrow[r,swap,\"g_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nis a pullback square. 
We define\n\\begin{align*}\n\\mathsf{cart}(\\mathcal{A},\\mathcal{B}) & \\defeq\\sm{h:\\prd{n:\\N}A_n\\to B_n} \\\\\n& \\qquad\\qquad \\sm{H:\\prd{n:\\N}g_n\\circ h_n\\htpy h_{n+1}\\circ f_n}\\prd{n:\\N}\\mathsf{is\\usc{}pullback}(h_n,f_n,H_n),\n\\end{align*}\nand we write\n\\begin{equation*}\n\\mathsf{Cart}(\\mathcal{B}) \\defeq \\sm{\\mathcal{A}:\\mathsf{Seq}}\\mathsf{cart}(\\mathcal{A},\\mathcal{B}).\n\\end{equation*}\n\\end{defn}\n\n\\begin{defn}\nWe define a map\n\\begin{equation*}\n\\mathsf{cart\\usc{}map}(\\mathcal{B}) : \\Big(\\sm{X':\\UU}X'\\to B_\\infty\\Big)\\to\\mathsf{Cart}(\\mathcal{B})\n\\end{equation*}\nwhich associates to any map $h:X'\\to B_\\infty$ a cartesian transformation of type sequences into $\\mathcal{B}$.\n\\end{defn}\n\n\\begin{thm}\nThe operation $\\mathsf{cart\\usc{}map}(\\mathcal{B})$ is an equivalence.\n\\end{thm}\n\n\\subsection{The flattening lemma for sequential colimits}\n\nThe flattening lemma for sequential colimits essentially states that sequential colimits commute with $\\Sigma$. \n\n\\begin{lem}\nConsider\n\\begin{align*}\nB & : \\prd{n:\\N}A_n\\to\\UU \\\\\ng & : \\prd{n:\\N}{x:A_n}\\eqv{B_n(x)}{B_{n+1}(f_n(x))},\n\\end{align*}\nand suppose $P:A_\\infty\\to\\UU$ is the unique family equipped with\n\\begin{align*}\ne & : \\prd{n:\\N}{x:A_n}\\eqv{B_n(x)}{P(\\mathsf{seq\\usc{}in}(n,x))}\n\\end{align*}\nand homotopies $H_n(x)$ witnessing that the square\n\\begin{equation*}\n\\begin{tikzcd}[column sep=7em]\nB_n(x) \\arrow[r,\"g_n(x)\"] \\arrow[d,swap,\"e_n(x)\"] & B_{n+1}(f_n(x)) \\arrow[d,\"e_{n+1}(f_n(x))\"] \\\\\nP(\\mathsf{seq\\usc{}in}(n,x)) \\arrow[r,swap,\"{\\mathsf{tr}_P(\\mathsf{seq\\usc{}glue}(n,x))}\"] & P(\\mathsf{seq\\usc{}in}(n+1,f_n(x)))\n\\end{tikzcd}\n\\end{equation*}\ncommutes. Then $\\sm{t:A_\\infty}P(t)$ satisfies the universal property of the sequential colimit of the type sequence\n\\begin{equation*}\n\\begin{tikzcd}\n\\sm{x:A_0}B_0(x) \\arrow[r,\"{\\total[f_0]{g_0}}\"] & \\sm{x:A_1}B_1(x) \\arrow[r,\"{\\total[f_1]{g_1}}\"] & \\sm{x:A_2}B_2(x) \\arrow[r,\"{\\total[f_2]{g_2}}\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\n\\end{lem}\n\nIn the following theorem we rephrase the flattening lemma using cartesian transformations of type sequences.\n\n\\begin{thm}\nConsider a commuting diagram of the form\n\\begin{equation*}\n\\begin{tikzcd}[column sep=small,row sep=small]\nA_0 \\arrow[rr] \\arrow[dd] & & A_1 \\arrow[rr] \\arrow[dr] \\arrow[dd] &[-.9em] &[-.9em] A_2 \\arrow[dl] \\arrow[dd] & & \\cdots \\\\\n& & & X \\arrow[from=ulll,crossing over] \\arrow[from=urrr,crossing over] \\arrow[from=ur,to=urrr] \\\\\nB_0 \\arrow[rr] \\arrow[drrr] & & B_1 \\arrow[rr] \\arrow[dr] & & B_2 \\arrow[rr] \\arrow[dl] & & \\cdots \\arrow[dlll] \\\\\n& & & Y \\arrow[from=uu,crossing over] \n\\end{tikzcd}\n\\end{equation*}\nIf each of the vertical squares is a pullback square, and $Y$ is the sequential colimit of the type sequence $B_n$, then $X$ is the sequential colimit of the type sequence $A_n$. 
\n\\end{thm}\n\n\\begin{cor}\nConsider a commuting diagram of the form\n\\begin{equation*}\n\\begin{tikzcd}[column sep=small,row sep=small]\nA_0 \\arrow[rr] \\arrow[dd] & & A_1 \\arrow[rr] \\arrow[dr] \\arrow[dd] &[-.9em] &[-.9em] A_2 \\arrow[dl] \\arrow[dd] & & \\cdots \\\\\n& & & X \\arrow[from=ulll,crossing over] \\arrow[from=urrr,crossing over] \\arrow[from=ur,to=urrr] \\\\\nB_0 \\arrow[rr] \\arrow[drrr] & & B_1 \\arrow[rr] \\arrow[dr] & & B_2 \\arrow[rr] \\arrow[dl] & & \\cdots \\arrow[dlll] \\\\\n& & & Y \\arrow[from=uu,crossing over] \n\\end{tikzcd}\n\\end{equation*}\nIf each of the vertical squares is a pullback square, then the square\n\\begin{equation*}\n\\begin{tikzcd}\nA_\\infty \\arrow[r] \\arrow[d] & X \\arrow[d] \\\\\nB_\\infty \\arrow[r] & Y\n\\end{tikzcd}\n\\end{equation*} \nis a pullback square.\n\\end{cor}\n\n\\begin{exercises}\n\\item \\label{ex:seqcolim_shift}\nShow that the sequential colimit of a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nis equivalent to the sequential colimit of its shifted type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & A_3 \\arrow[r,\"f_3\"] & \\cdots.\n\\end{tikzcd}\n\\end{equation*}\n\\item \\label{ex:seqcolim_contr}Consider a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nand suppose that $f_n\\htpy \\mathsf{const}_{a_{n+1}}$ for some $a:\\prd{n:\\N}A_n$. Show that the sequential colimit is contractible.\n\\item Define the $\\infty$-sphere $\\sphere{\\infty}$ as the sequential colimit of\n\\begin{equation*}\n\\begin{tikzcd}\n\\sphere{0} \\arrow[r,\"f_0\"] & \\sphere{1} \\arrow[r,\"f_1\"] & \\sphere{2} \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nwhere $f_0:\\sphere{0}\\to\\sphere{1}$ is defined by $f_0(\\bfalse)\\jdeq \\inl(\\ttt)$ and $f_0(\\btrue)\\jdeq \\inr(\\ttt)$, and $f_{n+1}:\\sphere{n+1}\\to\\sphere{n+2}$ is defined as $\\susp(f_n)$. 
Use \\cref{ex:seqcolim_contr} to show that $\\sphere{\\infty}$ is contractible.\n\\item Consider a type sequence\n\\begin{equation*}\n\\begin{tikzcd}\nA_0 \\arrow[r,\"f_0\"] & A_1 \\arrow[r,\"f_1\"] & A_2 \\arrow[r,\"f_2\"] & \\cdots\n\\end{tikzcd}\n\\end{equation*}\nin which $f_n:A_n\\to A_{n+1}$ is weakly constant in the sense that\n\\begin{equation*}\n\\prd{x,y:A_n} f_n(x)=f_n(y)\n\\end{equation*}\nShow that $A_\\infty$ is a mere proposition.\n\\end{exercises}\n", "meta": {"hexsha": "b96bb1f96fb1a2411b4e6dd84ce8eadac54a77c0", "size": 10679, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/sequences.tex", "max_stars_repo_name": "tadejpetric/HoTT-Intro", "max_stars_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Book/sequences.tex", "max_issues_repo_name": "tadejpetric/HoTT-Intro", "max_issues_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Book/sequences.tex", "max_forks_repo_name": "tadejpetric/HoTT-Intro", "max_forks_repo_head_hexsha": "f4228d6ecfc6cdb119c6e8b0e711fea05b98b2d5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2759856631, "max_line_length": 262, "alphanum_fraction": 0.6740331492, "num_tokens": 4232, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.6893056167854461, "lm_q1q2_score": 0.5635586785806975}}
{"text": "\\section{Recursive CTEs}\nRecursive CTEs iteratively traverse hierarchical data of arbitrary length, and similarly to algorithms contain a base case (stop condition) and its recursive step. The number of times to iterate does not have to be constant.\n\n\\begin{lstlisting}[language=SQL]\nwith recursive r (i) as (\n\tselect 1\n\t-- non-recursive term\n\tunion all\n\t-- recursive term\n\tselect i + 1\n\tfrom r\n\twhere i < 5)\nselect * from r;\n\\end{lstlisting}\n\nRecursive expressions can be used to manipulate data in a hierarchical structure. The non-recursive part defines the base case with eventual deterministic checks, while the recursive table traverses the tree upwards applying conditions.\n\nEvery recursive query can be defined by a simple evaluation algorithm: \n\\begin{lstlisting}[language=Matlab]\nworkingTable = evaluateNonRecursive()\noutput workingTable\nwhile workingTable is not empty\n\tworkingTable = evaluateRecursive(workingTable)\n\toutput workingTable\n\\end{lstlisting}\n\nIt is important to notice that the working table gets fully replaced at every iteration of the second part.\n\nIt is also possible to increment counters within RCTEs, and create trees. To avoid infinite loops in cyclic structures, using \\texttt{UNION} instead of \\texttt{UNION ALL} allows to remove duplicates and checking before writing a new value.\n\nTo avoid loops within graphs, a NOT EXISTS condition can be added to check whether a node has already been discovered. \n\nBest practices to deal with graph-like data involve knowing its cardinality, to be aware of the maximum distance, yet most problems are Turing complete.\n\nPostgres only allows one mention of the recursive relation in the recursive subquery.\n\n\\section{Window functions}\nWindow functions are versatile tools to implement time series analysis, percentiles and cumulative sums. \n\nA window function can be seen as a special case of aggregate, evaluated after every clause but ORDER BY, but opposite to aggregation they do not change the result: only additional columns are computed.\n\nThe term window describes the set of rows on which the window operates, returning values from it. \n\nExample (from Postgres documentation):\n\\begin{lstlisting}\nselect product_name, \n\tprice, \n\tgroup_name, \n\tavg(price) over (\n\t\t\tpartition by group_name\n\t\t\t)\nfrom products\n\tinner join product_groups\n\tusing (group_id);\n\\end{lstlisting}\nThe previous query returns the product name, the price, product group name, along with the average prices of each product group.\n\nAVG works as window function, operating on the subset of rows specified by OVER. \n\nPARTITION BY distributes the rows of the result set into groups, each one of them subject to AVG, returning the average price.\n\nCalculations on window functions are always performed on the result set after the other clauses but ORDER BY.\n\nGeneral window functions syntax:\n\\begin{lstlisting}\nwindow_function(arg1, arg2,..) 
\\section{Window functions}\nWindow functions are versatile tools to implement time series analysis, percentiles and cumulative sums. \n\nA window function can be seen as a special case of an aggregate, evaluated after every clause except ORDER BY; unlike aggregation, however, it does not collapse the result: only additional columns are computed.\n\nThe term window describes the set of rows on which the function operates, returning values computed from it. \n\nExample (from Postgres documentation):\n\\begin{lstlisting}\nselect product_name, \n\tprice, \n\tgroup_name, \n\tavg(price) over (\n\t\t\tpartition by group_name\n\t\t\t)\nfrom products\n\tinner join product_groups\n\tusing (group_id);\n\\end{lstlisting}\nThe previous query returns the product name, the price and the product group name, along with the average price of each product group.\n\nAVG works as a window function, operating on the subset of rows specified by OVER. \n\nPARTITION BY distributes the rows of the result set into groups, each of them subject to AVG, which returns the group's average price.\n\nCalculations in window functions are always performed on the result set after all other clauses except ORDER BY.\n\nGeneral window function syntax:\n\\begin{lstlisting}\nwindow_function(arg1, arg2,..) OVER (\n\t[PARTITION BY partition_expression]\n\t[ORDER BY sort_expression [ASC | DESC] [NULLS {FIRST | LAST }]]\n\t[frame_clause])\n\\end{lstlisting}\n\nTo summarize:\n\\begin{itemize}\n\t\\item The window function is an aggregate function, such as SUM or AVG;\n\t\\item The (optional) partitioning clause divides the rows into multiple groups;\n\t\\item The ordering clause specifies the order of rows within each partition;\n\t\\item The framing clause defines a subset of rows in the current partition to which the window function is applied.\n\\end{itemize}\n\nMultiple window functions can also be applied in the same query. Some other examples which ignore framing are:\n\\begin{itemize}\n\t\\item Ranking:\n\t\\begin{itemize}\n\t\t\\item \\texttt{rank()} and \\texttt{dense\\_rank()}, returning the rank of the current row with or without gaps;\n\t\t\\item \\texttt{row\\_number()};\n\t\t\\item \\texttt{ntile(n)}, distributing data over $n$ buckets;\n\t\\end{itemize}\n\t\\item Distribution:\n\t\\begin{itemize}\n\t\t\\item \\texttt{percent\\_rank()} and \\texttt{cume\\_dist()}, returning the relative rank of the current row or of its peer group (rows that are tied in the ordering);\n\t\\end{itemize}\n\t\\item Navigation:\n\t\\begin{itemize}\n\t\t\\item \\texttt{lag(expr, offset, default)} and \\texttt{lead(expr, offset, default)}, evaluating the expression on the preceding and the following row, respectively.\n\t\\end{itemize}\n\\end{itemize}\n\nFraming can only follow a few specifications:\n\\begin{itemize}\n\t\\item \\texttt{current row};\n\t\\item \\texttt{unbounded preceding} or \\texttt{unbounded following}, first and last row in the partition;\n\t\\item \\texttt{range between unbounded preceding and current row}, default frame with order specified;\n\t\\item \\texttt{range between unbounded preceding and unbounded following}, default frame with order unspecified;\n\t\\item \\texttt{aggregates()}, computing aggregates over all tuples in the current frame;\n\t\\item \\texttt{first\\_value()}, \\texttt{last\\_value()}, \\texttt{nth\\_value()}, evaluating an expression on specific rows of the frame.\n\\end{itemize}\n\n
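For instance (a sketch, assuming a hypothetical table \\texttt{payments(day, amount)}), an explicit frame turns SUM into a running total:\n\n\\begin{lstlisting}[language=SQL]\nselect day,\n\tamount,\n\tsum(amount) over (\n\t\torder by day\n\t\trows between unbounded preceding\n\t\t\tand current row) as running_total\nfrom payments;\n\\end{lstlisting}\n\n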
\\subsection{Segment trees}\nSegment trees are a useful data structure for efficient evaluation, used for storing information related to intervals (segments).\n\nQuerying a static segment tree makes it possible to know, with logarithmic-time operations, which of the stored segments contain a given element.\n\nThe set of existing values is divided into possible interval endpoints, sorted from left to right. All leaves in the tree correspond to the elementary intervals induced by the endpoints, while internal nodes represent the union of their children's intervals. \n\nAny interval is stored in the canonical set of at most two nodes at the same depth, so the required storage is $O(n \\log n)$ for $n$ intervals.\n\nSpace for the leaves is linear, and aggregate functions can be calculated on intervals very quickly. \n\nSegment trees are useful for window functions, since intervals can be seen as windows, either static or sliding. For each call of a window function, a new segment tree is built: construction time is just linear, since ordering has already been performed by the ORDER BY clause.\n\n\\subsection{Other aggregates}\nNewer versions of Postgres allow the calculation of more complex statistical aggregates, such as standard deviation, correlation and linear regression.\n\nMode and percentiles are also supported, yet they require materialization and sorting; therefore they need a special syntax and the clause \\texttt{within group}.\n\n\\subsubsection{Grouping sets}\nA grouping set is a set of columns by which data is grouped, for instance using aggregates. Results for several grouping sets can be combined using UNION ALL.\n\nSince a union requires all result sets to have the same number of columns, the missing columns must be filled with NULL: this is both verbose and slow (one scan per grouping).\n\nGROUPING SETS, a GROUP BY option, allows defining multiple grouping sets in the same query. It is possible to group by one or multiple columns, or even by nothing. Missing values are automatically filled with NULL.\n\nROLLUP generates the hierarchical grouping sets (the $n+1$ prefixes of the column list), while CUBE calculates and applies all $2^n$ grouping possibilities.\n\n
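A sketch (hypothetical table \\texttt{sales(region, product, amount)}) computing per-(region, product) totals, per-region totals and the grand total in a single scan:\n\n\\begin{lstlisting}[language=SQL]\nselect region, product, sum(amount)\nfrom sales\ngroup by grouping sets (\n\t(region, product),\n\t(region),\n\t());\n\\end{lstlisting}\n\n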
\\section{Database clusters}\nStoring a large quantity of data on a single machine has limitations: performance is restricted by the hardware, and in case of failure transactions get lost. \n\nUsing multiple machines is therefore a valid way to run a database: clusters are transparent and present themselves as a single system, while still maintaining consistency between nodes. \n\n\\subsection{ACID vs BASE}\nACID and BASE are two sets of properties used to describe databases. One could be defined as the opposite of the other, and their usage splits between relational models (ACID) and NoSQL (BASE).\n\n\\begin{itemize}\n\t\\item Atomic: all operations in a transaction must either succeed or be reverted;\n\t\\item Consistent: each transaction brings the database from one valid state to another;\n\t\\item Isolated: concurrent transactions do not interfere with one another;\n\t\\item Durable: results are permanent, even in the presence of failures.\n\\end{itemize}\n\n\\begin{itemize}\n\t\\item Basically Available: the cluster keeps operating, yet not all data might be immediately accessible after a failure;\n\t\\item Soft state: data does not have a permanent state and needs to be periodically refreshed;\n\t\\item Eventual consistency: consistency is not guaranteed at all times, but will be reached at some point after a number of updates.\n\\end{itemize}\n\nBoth models have advantages and disadvantages, but BASE properties are generally much looser than ACID's.\n\nBASE does not provide a consistent view of the data, which can lead to confusion, and cannot be used for critical systems; it should therefore only be used when performance is more important than ACID guarantees.\n\n\\subsection{CAP theorem}\nThe CAP theorem states that it is impossible for a distributed database to simultaneously achieve more than two of the following guarantees:\n\\begin{enumerate}\n\t\\item Consistency (every read receives either the most recent write or an error);\n\t\\item Availability (every request receives a response);\n\t\\item Partition tolerance (system operating even with dropped messages or delays).\n\\end{enumerate}\nPartition tolerance is not really optional: some systems cannot be partitioned (servers with no replication), and for a distributed system, not guaranteeing partition tolerance would mean never dropping a message and never having a dying node.\n\nThe probability that at least one node fails increases exponentially with the total number of nodes.\n\nWhen a network partition happens, therefore, the choice is between canceling the operation, decreasing availability, or proceeding to propagate it, risking consistency.\n\nIf a system chooses to provide consistency, it will preserve atomic reads but refuse to respond to some requests.\n\nIf, on the other hand, availability is chosen, the system will respond to all requests, potentially returning outdated information and accepting conflicting writes. \n\n\\subsection{Transaction handling}\nIn distributed systems, transactions should be atomic. This means there are two options for ending one:\n\\begin{itemize}\n\t\\item Commit, the transaction was successful and it can become persistent and visible to all nodes;\n\t\\item Abort, the transaction failed or violates a constraint, therefore the system needs to be reverted to its previous state.\n\\end{itemize}\nSince node crashes in a distributed system are independent, the protocol needs to be extended to ensure atomicity of transactions: a valid method is two-phase commit (2PC). \n\nIt is characterized by the presence of a coordinator node which allows the $n$ agents of a system $A_1, A_2, \\dots, A_n$ to either all persist the changes of a transaction $T$ or all discard them.\n\nThis is a commitment protocol that coordinates all processes participating in a distributed atomic transaction, achieving its purpose by logging the states to aid recovery procedures.\n\nThe two phases of the protocol in a normal execution are:\n\\begin{itemize}\n\t\\item Commit-request phase, in which a coordinator process attempts to prepare all the processes participating in the transaction to take the necessary steps, and collects their votes on whether the operation can be committed;\n\t\\item Commit phase, based on the votes: the result gets communicated to the participants, and the needed actions to commit or abort are taken.\n\\end{itemize}\n\n
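Schematically, the coordinator side of a failure-free run looks like this (pseudocode in the style of the evaluation sketch above; the message and log primitives are assumed):\n\n\\begin{lstlisting}[language=Matlab]\nsend PREPARE to all agents\ncollect votes from all agents\nif all votes are YES\n\twrite COMMIT to the log\n\tsend COMMIT to all agents\nelse\n\twrite ABORT to the log\n\tsend ABORT to all agents\nwait for acknowledgements\n\\end{lstlisting}\n\n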
Message flow in 2PC with 4 agents:\n\\begin{figure}[h]\n\t\\includegraphics[scale=0.4]{2pc.png}\n\t\\centering\n\\end{figure}\n\nIssues of this method include crashes of nodes (both coordinator and agents) and potentially lost messages.\n\n\\subsection{Replication}\nReplication is a technique that consists of keeping a copy of the dataset on each machine. Redundant resources improve query throughput, can accommodate node failures and solve locality issues.\n\nAccess to a replicated entity is typically uniform with access to a single non-replicated entity, and the copying should be transparent to external users. \n\nSystems can also use a master-slave scheme, predominant in high-availability clusters, where a single node is designated to process all requests. \n\n\\subsubsection{Horizontal partitioning}\nHorizontal sharding means that every machine holds only a chunk of the dataset. Query runtimes are improved, yet communication between shards should be minimized because of limited network throughput.\n\nPerformance, in fact, relies heavily on the interconnect used, but sharding lets each node keep much less data in memory, so that datasets even bigger than the capacity of a single machine can be accommodated.\n\nQuery optimization implies processing clauses separately, locally on each node, and aggregating the results later so that the least amount of data is sent.\n\n\\subsubsection{Shard allocation}\nShards can be allocated to the cluster machines in different ways:\n\\begin{itemize}\n\t\\item Every machine receives one shard;\n\t\\item Every machine receives multiple shards.\n\\end{itemize}\nHaving more shards than machines results in better skew handling: the shards can take full advantage of the resources available on the nodes, increasing performance.\n\nAlso, there is no guarantee that the chosen hashing is uniform, and when values spanning different shards are queried, it is likely that they are not all on the same machine.\n\nReplication and sharding can be applied together, combining the benefits of both, but this leads to increased resource consumption.\n\\begin{figure}[h]\n\t\\includegraphics[scale=0.5]{replication_sharding.png}\n\t\\centering\n\\end{figure}\n\n\\subsection{Joins}\nJoins in a distributed system are no different from local ones in a replicated environment, yet they can become complicated when data is sharded across multiple machines. \n\n\\subsubsection{Co-located joins}\nA co-located join is the most efficient way to join two large distributed tables: every table has a common distribution column and is sharded across machines in the same way.\n\nAll rows with the same value of that column, therefore, are always on the same machine, even across different tables, so that relational operations can be performed within the groups.\n\nThe joins can be executed locally on the workers in parallel, returning results to the coordinator node.\n\n\\subsubsection{Distributed joins}\nA distributed join involves data which is sharded across several nodes.\n\nData is usually filtered as much as possible before being sent, but data movement is still necessary for joining: all nodes send their part of the table to a single node, which runs the operations.\n\nThis join only performs well when the number of rows is small; otherwise the node doing the join is overwhelmed by the dataset size and unable to parallelize.\n\n\\subsubsection{Broadcast joins}\nA broadcast join is applicable when only one side of the join has a small set of rows (a small table or a very selective filter).\n\nThe small side is sent to every node in the cluster, then joined locally with the large table on the other side of the join.\n\n
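As a sketch of how co-location is declared in practice (Greenplum-style \\texttt{DISTRIBUTED BY} syntax, hypothetical tables), hashing both tables on the same key lets the join below run locally on every segment:\n\n\\begin{lstlisting}[language=SQL]\ncreate table orders (\n\torder_id int,\n\tcustomer_id int\n) distributed by (customer_id);\n\ncreate table payments (\n\tpayment_id int,\n\tcustomer_id int\n) distributed by (customer_id);\n\n-- co-located: rows with equal customer_id live on the same segment\nselect customer_id, count(*)\nfrom orders\n\tjoin payments using (customer_id)\ngroup by customer_id;\n\\end{lstlisting}\n\n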
\\subsubsection{Considerations}\nAll the previous methods still need to consider table sizes and bandwidth: some operations might be faster when reading the data from remote machines rather than from local disks, since cache size is limited.\n\nIn this case, data can be sharded across multiple nodes, each of them storing a partition of the table and computing local results which are then aggregated.\n\nPostgres supports read replicas, kept in sync for consistent reads: each application can send requests either to the master or to the slaves, and the master itself can also trigger replication.\n\nHowever, Postgres is not natively a distributed database: its derivative Greenplum (still based on Postgres) allows distribution and partitioning of both data and schema. \n\nDistribution and partitioning schemes can be specified when creating a table, adding DISTRIBUTED BY to define how data is spread across segments, or PARTITION BY for partitioning within a segment.\n\nDistributions may be round-robin (as even as possible, non-deterministic) or hashed (rows dispersed according to a hash function).\n\n\\begin{figure}[h]\n\t\\includegraphics[scale=0.38]{greenplum.png}\n\t\\centering\n\\end{figure}\n\nMemSQL is another distributed database system based on SQL, which also allows replication, distributed and co-located joins.\n\nA MemSQL cluster consists of aggregator and leaf nodes: aggregators handle metadata, monitoring and query results, while leaves act as the storage layer and execute queries. \n\nBy default, MemSQL shards tables by their primary key; manual sharding needs to be specified explicitly.\n\nDistributed systems overall have more resources, but their synchronization might be expensive.\n\n\\section{Autonomous database systems}\nAutonomous database systems are designed to remove the burden of managing a DBMS from humans, focusing on physical database design and configuration/query tuning. The first attempts at self-adaptive systems date back to the 1970s, and have now evolved into learned components. \n\nIndexing has been replaced with a neural network which predicts the location of a key, and transactions are scheduled across the machines by learned scheduling with unsupervised clustering. The most promising innovation is learned planning: deep learning algorithms which help run the query planner, estimating the best execution order of the operations. \n\nA self-driving DBMS is therefore a system that configures, manages and optimizes itself for the target database and its workload, using an objective function (throughput, latency) and some constraints. The two components are an agent (which makes decisions and learns over time) and an environment, the subject of the actions. Environments have a state used as feedback for the agent. This setup can help explore unknown configurations (and their consequences) and generalize over applications or hardware.\n\n\\subsection{State modeling}\nState modeling concerns the representation of the environment state, and has to be performed accurately. The model is stochastic, non-stationary and episodic (there is a fixed endpoint), and it can be represented as a Markov decision process. The entire history of the DBMS is encoded in a state vector: its contents, design, workload and resources.\n\nThe table state contains the number of tuples, attributes, cardinality and other values which together contribute to the choice of specific implementations (for instance indexes), but changes are constrained since the number of dimensions of the feature vector always has to stay the same. The content is approximated through the state vector, so there is no precise information on what the actual table looks like. \n\nThe knob configuration state depends on the machine, and does not really scale. Not every configuration is applicable to every server, but one potential solution is to store hardware resources as percentages of the available amount. 
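\n\nSchematically, the state vector above is what the agent consumes at each step of a standard agent--environment loop (pseudocode; the helper names are hypothetical):\n\n\\begin{lstlisting}[language=Matlab]\nstate = encodeState(db)          % contents, design, workload, resources\nwhile tuning\n\taction = chooseAction(state)   % e.g. turn a knob, add an index\n\tapplyAction(action)            % on a replica, as described below\n\treward = runBenchmark()        % objective: throughput or latency\n\tnext = encodeState(db)\n\tlearn(state, action, reward, next)\n\tstate = next\nend\n\\end{lstlisting}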
\n\nFeature vectors can be reduced (e.g.\\ with PCA), which however exacerbates instability; hierarchical models help isolate components to reduce the propagation of changes. \n\nData acquisition is done through targeted experiments while the system is running in production, training the model with end-to-end benchmarks of sample workloads. Instead of running the full system, micro-benchmarks can be run on a subset of components. This method is more effective and requires fewer iterations to converge. \n\nTo avoid slowing down the entire system, training is performed on replicas (master-slave architecture), with an agent inspecting the resources and then propagating the best changes to the master. The application server primarily communicates with the master through reads and writes, and obtaining a complete view of all the operations is sometimes hard, since not everything is sent to the replicas (failed queries, for instance). \n\nAn alternative is imitation learning: the model observes a tuned DBMS and tries to replicate its decisions. The state of the database still needs to be captured to extrapolate why changes have been applied, and the training data will be sparse or noisy.\n\n\\subsection{Reward observation}\nRewards are classified as short-term and long-term, the latter being more problematic since it is difficult to predict workload trends. Transient hardware problems are hard to detect and could mask the true reward of an action, so current benchmarks have to be compared with historical data to reconsider recent events. \n\nMany systems serve both OLTP and OLAP workloads, and changes in the objective function may have a negative impact on either. Generally, multiple policies define the preference of one over the other in mixed environments. \n\nOther common problems concern action selection policies, transparency and human interaction. \n", "meta": {"hexsha": "25d6e6ab0289ab98666fa3b81603a501e0952791", "size": 20506, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Foundations of Data Engineering/lectures/lecture4.tex", "max_stars_repo_name": "mrahtapot/TUM", "max_stars_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 225, "max_stars_repo_stars_event_min_datetime": "2019-10-02T10:49:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:25:38.000Z", "max_issues_repo_path": "Foundations of Data Engineering/lectures/lecture4.tex", "max_issues_repo_name": "mrahtapot/TUM", "max_issues_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-02-16T12:22:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-31T19:35:57.000Z", "max_forks_repo_path": "Foundations of Data Engineering/lectures/lecture4.tex", "max_forks_repo_name": "mrahtapot/TUM", "max_forks_repo_head_hexsha": "b736fc4ae065612dc988b6cb220fcf2f6119a138", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 69, "max_forks_repo_forks_event_min_datetime": "2019-10-02T21:46:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-17T19:27:50.000Z", "avg_line_length": 66.3624595469, "max_line_length": 525, "alphanum_fraction": 0.8054715693, "num_tokens": 4180, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5634415817608917}}
{"text": "\\documentclass[a4paper,man,natbib]{apa6}\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n% Common Packages - Delete Unused Ones %\n\\usepackage{setspace}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\graphicspath{ {./images/} }\n% End Packages %\n\\title{Exercise Assignment 12}\n\\shorttitle{ES12}\n\\author{Brandon Hosley}\n\\date{2018 11 19}\n\\affiliation{Mike Davis}\n%\\abstract{}\n\\begin{document}\n\t\\singlespacing\n\t\\raggedbottom\n\t\\maketitle\n\\subsection{1. You are stranded on a desert island and you only have NAND gates.  Construct the following using only NAND gates. For each of the following show the logic circuit with only NAND gates and the truth table. (Six Points)}\n\\emph{Create a NOT gate.}\\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1not.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c | c }\n\t\t\tInput & Output \\\\\n\t\t\t\\hline\n\t\t\t0 & 1 \\\\\n\t\t\t1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create an AND gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1and.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c | c }\n\t\t\t\\multicolumn{2}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 0 \\\\\n\t\t\t0 & 1 & 1 & 0 \\\\\n\t\t\t1 & 0 & 1 & 0 \\\\\n\t\t\t1 & 1 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create an OR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1or.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c | c }\n\t\t\t\\multicolumn{3}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & B$_{1}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 0 \\\\\n\t\t\t0 & 1 & 1 & 0 & 1 \\\\\n\t\t\t1 & 0 & 0 & 1 & 1 \\\\\n\t\t\t1 & 1 & 0 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create a NOR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1nor.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c | c }\n\t\t\t\\multicolumn{4}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & B$_{1}$ & A$_{2}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 0 & 1 \\\\\n\t\t\t0 & 1 & 1 & 0 & 1 & 0 \\\\\n\t\t\t1 & 0 & 0 & 1 & 1 & 0 \\\\\n\t\t\t1 & 1 & 0 & 0 & 1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create and XOR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1xor.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c | c }\n\t\t\t\\multicolumn{4}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & A$_{2}$ & B$_{2}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 1 & 0 \\\\\n\t\t\t0 & 1 & 1 & 1 & 0 & 1 \\\\\n\t\t\t1 & 0 & 1 & 0 & 1 & 1 \\\\\n\t\t\t1 & 1 & 0 & 1 & 1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Inverting the output of an XOR gate creates an XNOR gate.  
Create an XNOR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p1xnor.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c c | c }\n\t\t\t\\multicolumn{5}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & A$_{2}$ & B$_{2}$ & A$_{3}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 1 & 0 & 1 \\\\\n\t\t\t0 & 1 & 1 & 1 & 0 & 1 & 0 \\\\\n\t\t\t1 & 0 & 1 & 0 & 1 & 1 & 0 \\\\\n\t\t\t1 & 1 & 0 & 1 & 1 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\n\\subsection{2. You are stranded on a desert island and you only have NOR gates.  Construct the following using only NOR gates. For each of the following, show the logic circuit with only NOR gates and the truth table. (Six Points)}\n\\emph{Create a NOT gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2not.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c | c }\n\t\t\tInput & Output \\\\\n\t\t\t\\hline\n\t\t\t0 & 1 \\\\\n\t\t\t1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create an AND gate.} \\\\\n\t\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2and.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c | c }\n\t\t\t\\multicolumn{3}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & B$_{1}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 0 \\\\\n\t\t\t0 & 1 & 1 & 0 & 0 \\\\\n\t\t\t1 & 0 & 0 & 1 & 0 \\\\\n\t\t\t1 & 1 & 0 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create an OR gate.} \\\\\n\t\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2or.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c | c }\n\t\t\t\\multicolumn{2}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 0 \\\\\n\t\t\t0 & 1 & 0 & 1 \\\\\n\t\t\t1 & 0 & 0 & 1 \\\\\n\t\t\t1 & 1 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create a NAND gate.} \\\\\n\t\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2nand.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c | c }\n\t\t\t\\multicolumn{4}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & B$_{1}$ & A$_{2}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 1 & 0 & 1 \\\\\n\t\t\t0 & 1 & 1 & 0 & 0 & 1 \\\\\n\t\t\t1 & 0 & 0 & 1 & 0 & 1 \\\\\n\t\t\t1 & 1 & 0 & 0 & 1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Create an XOR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2xor.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c c | c }\n\t\t\t\\multicolumn{5}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & A$_{2}$ & B$_{2}$ & A$_{3}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 0 & 0 & 1 & 0 \\\\\n\t\t\t0 & 1 & 0 & 1 & 0 & 0 & 1 \\\\\n\t\t\t1 & 0 & 0 & 0 & 1 & 0 & 1 \\\\\n\t\t\t1 & 1 & 0 & 0 & 0 & 1 & 0 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\\emph{Inverting the output of an XOR gate creates an XNOR gate.  
Create an XNOR gate.} \\\\\n\t\\begin{minipage}{2.5in}\n\t\t\\includegraphics[width=\\linewidth]{ES12p2xnor.png}\n\t\\end{minipage}\n\t\\begin{minipage}{2.5in}\n\t\t\\begin{tabular}{ c c | c c c | c }\n\t\t\t\\multicolumn{4}{c}{Input} & & Output \\\\\n\t\t\tA$_{0}$ & B$_{0}$ & A$_{1}$ & A$_{2}$ & B$_{2}$ & \\\\\n\t\t\t\\hline\n\t\t\t0 & 0 & 1 & 0 & 0 & 1 \\\\\n\t\t\t0 & 1 & 0 & 1 & 0 & 0 \\\\\n\t\t\t1 & 0 & 0 & 0 & 1 & 0 \\\\\n\t\t\t1 & 1 & 0 & 0 & 0 & 1 \\\\\n\t\t\\end{tabular}\n\t\\end{minipage} ~\\\\\n\n\\nocite{warford10}\n\\bibliographystyle{apacite}\n\\bibliography{CS} %link to relevant .bib file\n\\end{document}", "meta": {"hexsha": "d9cb2d2b4515b326098268e6c76ad21c8a2dff89", "size": 5995, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2018-Spr Computer Architecture/ES12/bhosl2ES12.tex", "max_stars_repo_name": "bhosley/Schoolwork", "max_stars_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2018-Spr Computer Architecture/ES12/bhosl2ES12.tex", "max_issues_repo_name": "bhosley/Schoolwork", "max_issues_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2018-Spr Computer Architecture/ES12/bhosl2ES12.tex", "max_forks_repo_name": "bhosley/Schoolwork", "max_forks_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.6782178218, "max_line_length": 233, "alphanum_fraction": 0.5507923269, "num_tokens": 2779, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.7981867873410141, "lm_q1q2_score": 0.5633482839226123}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{fullpage}\n\\usepackage{minted}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\\usepackage[utf8x]{inputenc}\n\n\\title{Array manipulations as functional programming}\n\\author{Jim Pivarski}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\setlength{\\parskip}{0.5\\baselineskip}\n\n\\section*{Introduction}\n\nThe central features of an array library like Numpy or Awkward Array simplify if we think of arrays as functions and these features as function composition. A one-dimensional array of \\mintinline{python}{dtype} $d$ (e.g.\\ \\mintinline{python}{int32} or \\mintinline{python}{float64}) can be thought of as a function from integer indexes to members of $d$. Thus,\n\\[ \\mintinline{python}{array[i]} \\]\n\\noindent becomes\n\\[ \\mintinline{python}{array}: \\mathbb{Z} \\to d \\]\n\\noindent because given an integer \\mintinline{python}{i} $\\in \\mathbb{Z}$, it returns a value in $d$.  In Python, this function is the implementation of the array's \\mintinline{python}{__getitem__} method.\n\nSpecified this way, this is a partial function\\footnote{\\url{https://en.wikipedia.org/wiki/Partial_function}}---for some integers, it raises an exception rather than returning a value in $d$. (Integers greater than or equal to the array's length or less than its negated length, if the array implements Python's negative indexing, are outside the bounds of the array and do not return a value.) It can be made into a total function by restricting the domain to $[0, n)$ where $n$ is the length of the array:\n\\[ \\mintinline{python}{array}: [0, n) \\to d. \\]\n\nWe can choose $[0, n)$ as the domain and work with total functions or $\\mathbb{Z}$ as the domain and work with partial functions---it is a matter of the granularity of the type system. Numpy has a single type \\mintinline{python}{ndarray} for all arrays (effectively untyped), Numba has an array type that depends on the array's dimension, and C++ has a \\mintinline{c++}{std::array<dtype, n>} type that depends on the exact size (\\mintinline{c++}{n}) of the array, like our functional description above. As we'll see later, a consequence of this specificity is that the return value of some functions will depend on the values given to that function, a feature known as dependent types\\footnote{\\url{https://en.wikipedia.org/wiki/Dependent_type}}.\n\nIn this note, we'll describe arrays as total functions in a dependent type system.\n\n\\section*{Multidimensional arrays}\n\nNumpy arrays can have arbitrarily many dimensions, referred to as the array's \\mintinline{python}{shape}. The \\mintinline{python}{shape} is a tuple of positive integers specifying the length of each dimension: $(n_1, n_2, \\ldots, n_k)$ is a rank-$k$ tensor ($k = 1$ is a vector, $k = 2$ is a matrix, etc.).\n\nTo get values of type $d$ from a rank-$k$ array of \\mintinline{python}{dtype} $d$, we must specify $k$ integers, each in a restricted domain $[0, n_i)$. In Numpy syntax, this is an implicit Python \\mintinline{python}{tuple} between the square brackets:\n\\[ \\mintinline{python}{array[i1, i2, ..., ik]} \\]\n\\noindent In mathematical syntax, we can represent a $k$-tuple as a cartesian product,\n\\[ [0, n_1) \\times [0, n_2) \\times \\ldots \\times [0, n_k) \\]\n\\noindent so the function corresponding to this array is\n\\[ \\mintinline{python}{array}: [0, n_1) \\times [0, n_2) \\times \\ldots \\times [0, n_k) \\to d. 
\\]\n\nA function with multiple arguments can be replaced with functions of one argument that each return a function, a process known as currying\\footnote{\\url{https://en.wikipedia.org/wiki/Currying}}. For example, the function above can be replaced with\n\\[ \\mintinline{python}{array}: [0, n_1) \\to [0, n_2) \\to \\ldots \\to [0, n_k) \\to d \\]\n\\noindent by noting that\n\\[ \\mintinline{python}{array[i1]} \\]\n\\noindent returns an array of rank $k - 1$ and \\mintinline{python}{dtype} $d$, which is a function\n\\[ \\mintinline{python}{array[i1]}: [0, n_2) \\to \\ldots \\to [0, n_k) \\to d \\]\n\\noindent (and so on, for each dimension). In fact, Numpy's indexing syntax illustrates this clearly:\n\\[ \\mintinline{python}{array[i1, i2, i3] == array[i1][i2][i3]} \\]\n\\noindent for any \\mintinline{python}{i1}, \\mintinline{python}{i2}, \\mintinline{python}{i3} that satisfy a three-dimensional array's domain.\n\n\\section*{Record arrays}\n\nNumpy also has record arrays\\footnote{\\url{https://docs.scipy.org/doc/numpy/user/basics.rec.html}} for arrays of record-like structures (e.g.\\ \\mintinline{c++}{struct} in C). In Numpy, the named fields and their types are considered part of the array's \\mintinline{python}{dtype}, but they are accessed through the same square bracket syntax as elements of the array's \\mintinline{python}{shape}:\n\\[ \\mintinline{python}{array[i1, i2, i3][fieldname]} \\]\n\\noindent where \\mintinline{python}{fieldname} is a string, the name of one of the record's fields. (Numpy does not allow the \\mintinline{python}{fieldname} to be uncurried---it must be in a different set of brackets from \\mintinline{python}{i1}, \\mintinline{python}{i2}, \\mintinline{python}{i3}.)\n\nSince record fields are accessed through a similar syntax, let's consider it part of the array's functional type, making no distinction between \\mintinline{python}{shape} elements and field names. For a record type in which string-valued field names $s_1$, $s_2$, \\ldots, $s_m$ map to \\mintinline{python}{dtypes} $d_1$, $d_2$, \\ldots, $d_m$, we can write\n\\begin{align*}\n\\mintinline{python}{recarray}: [0, n) \\to &\\ s_1 \\to d_1 \\\\\n &\\ s_2 \\to d_2 \\\\\n &\\ \\ldots \\\\\n &\\ s_m \\to d_m\n\\end{align*}\n\\noindent to represent a one-dimensional record array of length $n$. This is a dependent type because the choice of field name determines the return type of the function.\n\nA multidimensional record array can be described as\n\\begin{align*}\n\\mintinline{python}{recarray}: [0, n_1) \\to [0, n_2) \\to \\ldots \\to [0, n_k) \\to &\\ s_1 \\to d_1 \\\\\n &\\ s_2 \\to d_2 \\\\\n &\\ \\ldots \\\\\n &\\ s_m \\to d_m\n\\end{align*}\n\\noindent or as\n\\begin{align*}\n\\mintinline{python}{recarray}: [0, n_1) \\to &\\ s_1 \\to [0, n_2) \\to \\ldots \\to [0, n_k) \\to d_1 \\\\\n &\\ s_2 \\to [0, n_2) \\to \\ldots \\to [0, n_k) \\to d_2 \\\\\n &\\ \\ldots \\\\\n &\\ s_m \\to [0, n_2) \\to \\ldots \\to [0, n_k) \\to d_m\n\\end{align*}\n\\noindent or any other placement of the field name index within the ordered sequence of dimensional indexes. In general, the string indexes (field names) commute with the integer indexes (dimensions). 
This is evident in Numpy's syntax:\n\\begin{align*}\n\\mintinline{python}{recarray[i1][i2][i3][fieldname]} & \\mintinline{python}{ == recarray[i1][i2][fieldname][i3]} \\\\\n & \\mintinline{python}{ == recarray[i1][fieldname][i2][i3]} \\\\\n & \\mintinline{python}{ == recarray[fieldname][i1][i2][i3]}\n\\end{align*}\n\\noindent It is also evident if the array is arranged as a rectilinear table, in which $[0, n_1)$, $[0, n_2)$, \\ldots $[0, n_k)$ form a $k$-dimensional lattice of bounded integers and the field names are an additional dimension, indexed by a finite set of strings with no predefined order. This dimension of category labels is usually called the ``columns'' and all other dimensions are called ``rows.'' In this picture, rearranging the order of the string index and the integer indexes corresponds to selecting a column before a row, rather than a row before a column.\n\n\\section*{Vectorized functions}\n\nNumpy uses so-called ``vectorized'' functions or ``universal'' functions (``ufuncs'') for most calculations\\footnote{\\url{https://docs.scipy.org/doc/numpy/reference/ufuncs.html}}. (These are not to be confused with vectorized instructions in CPU hardware, but are based on a similar idea.) Any function $f$ that maps scalar \\mintinline{python}{dtypes} $d^A$ to $d^B$,\n\\[ f: d^A \\to d^B\\mbox{,} \\]\n\\noindent can be lifted to a vectorized function that maps arrays of \\mintinline{python}{dtype} $d^A$ to arrays of \\mintinline{python}{dtype} $d^B$:\n\\[ \\mbox{ufunc}(f): \\left([0, n) \\to d^A\\right) \\to \\left([0, n) \\to d^B\\right)\\mbox{.} \\]\n\\noindent Note that the \\mintinline{python}{shape} of the array, $[0, n)$ in this case, is the same for the argument type of $\\mbox{ufunc}(f)$ as for its return type.\n\nThis $\\mbox{ufunc}$ functor is a partial application of what would be called ``$\\mbox{map}$'' in most functional languages\\footnote{\\url{https://en.wikipedia.org/wiki/Map_(higher-order_function)}}. The $\\mbox{map}$ functor takes a function and a collection, returning a collection of the same length with the function applied to each element. The $\\mbox{ufunc}$ functor only takes a function, and its result is applied to collections (arrays) later.\n\nSince arrays are themselves functions, applying $\\mbox{ufunc}(f)$ to an array is a composition\\footnote{\\url{https://en.wikipedia.org/wiki/Function_composition}} of the array with $f$. Thus, the following is true for any \\mintinline{python}{i} $\\in [0, n)$:\n\\begin{align*}\n\\underbrace{\\mbox{ufunc}(f)(\\mintinline{python}{array})}_{\\mbox{\\scriptsize array}}(\\mintinline{python}{i}) = & \\,f(\\underbrace{\\mintinline{python}{array}(\\mintinline{python}{i})}_{\\mbox{\\scriptsize scalar}}) \\\\[0.5\\baselineskip]\n\\underbrace{\\mintinline{python}{numpy.vectorize(f)(array)}}_{\\mbox{\\scriptsize array}}\\mintinline{python}{[i]} = & \\,\\mintinline{python}{f(}\\underbrace{\\mintinline{python}{array[i]}}_{\\mbox{\\scriptsize scalar}}\\mintinline{python}{)}\\mbox{.}\n\\end{align*}\n\\noindent by associativity of function composition. 
This composition always applies $f$ to the {\\it output} of the array, never the {\\it input} (function composition is not commutative).\n\\begin{align*}\nf: & \\mbox{ } d^A \\to d^B \\\\\n\\mintinline{python}{array}: & \\mbox{ } [0, n) \\to d^A \\\\\n\\mbox{ufunc}(f)(\\mintinline{python}{array}) = f \\circ \\mintinline{python}{array} : & \\mbox{ } [0, n) \\to d^B\\mbox{.}\n\\end{align*}\n\nUsing associativity again, we should be able to compose a sequence of scalar functions $f: d^A \\to d^B$, $g: d^B \\to d^C$, \\ldots, $h: d^Y \\to d^Z$ before applying them to the array. If the scalar function can be extracted from a ufunc object, it would be possible to compose\n\\[ \\mbox{ufunc}(h) \\circ \\ldots \\circ \\mbox{ufunc}(g) \\circ \\mbox{ufunc}(f) \\]\n\\noindent into a single\n\\[ \\mbox{ufunc}(h \\circ \\ldots \\circ g \\circ f): \\left([0, n) \\to d^A\\right) \\to \\left([0, n) \\to d^Z\\right) \\]\n\\noindent that can be applied to an array. This is an optimization known as loop fusion\\footnote{\\url{https://en.wikipedia.org/wiki/Loop_fission_and_fusion}}, and is often faster than making multiple passes over arrays and possibly allocating large temporary arrays between ufuncs. There have been proposals\\footnote{\\url{https://numpy.org/doc/1.14/neps/deferred-ufunc-evaluation.html}} and external libraries\\footnote{\\url{https://www.weld.rs/weldnumpy}} to add this feature transparently to Numpy. In principle, it could even be an explicit (user-visible) feature of the ufunc object, but to my knowledge, it has never been implemented as such.\n\n
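As a quick numerical check of this identity (a sketch; \\mintinline{python}{numpy.vectorize} stands in for a true compiled ufunc, and the function definitions are arbitrary):\n\\begin{minted}{python}\nimport numpy\n\nf = lambda x: x + 1.0        # f: d^A -> d^B\ng = lambda x: 2.0 * x        # g: d^B -> d^C\n\narray = numpy.arange(5)\n\n# apply ufunc(f), then ufunc(g): two passes over the data\ntwo_passes = numpy.vectorize(g)(numpy.vectorize(f)(array))\n\n# fuse first: a single pass with ufunc(g . f)\none_pass = numpy.vectorize(lambda x: g(f(x)))(array)\n\nassert (two_passes == one_pass).all()\n\\end{minted}\n\n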
\\section*{Array slicing}\n\nWhereas ufuncs compose scalar functions to the output of an array, slicing composes \\mintinline{python}{index} arrays (which are functions) to the input of an array.\n\nThe fact that array slicing is itself composition may not be obvious because of the way that slicing is presented:\n\\[ \\mintinline{python}{array[i:j]} \\]\n\\noindent does not seem to be a composition of two arrays. The first point to make is that all of Numpy's slicing mechanisms---range slices (Python's \\mintinline{python}{slice} operator or \\mintinline{python}{start:stop:step}), \\mintinline{python}{numpy.compress} with boolean arrays, and \\mintinline{python}{numpy.take} with integer arrays---can be rewritten in terms of \\mintinline{python}{numpy.take} with integer arrays:\n\\begin{itemize}\n\\item A range slice \\mintinline{python}{start:stop:step} can be replaced with an integer sequence\n\\[ \\mintinline{python}{range(start, stop, step)} \\]\n(ignoring the effect of negative \\mintinline{python}{start} and \\mintinline{python}{stop}).\n\\item A boolean array \\mintinline{python}{mask} can be replaced with \\mintinline{python}{numpy.nonzero(mask)}.\n\\item An integer array \\mintinline{python}{index} is already an integer array.\n\\end{itemize}\n\nAs an array, \\mintinline{python}{index} is a function $\\mintinline{python}{index}: [0, x) \\to [0, n)$ that can be composed with $\\mintinline{python}{array}: [0, n) \\to d$ to produce\n\\[ \\mintinline{python}{array} \\circ \\mintinline{python}{index}: [0, x) \\to d \\]\n\\noindent Note that \\mintinline{python}{index} is to the right (transforms the {\\it input}) of \\mintinline{python}{array}, whereas $\\mbox{ufunc}(f)$ puts $f$ to the left (transforms the {\\it output}) of \\mintinline{python}{array}.\n\nNumpy uses the same syntax for this function composition, \\mintinline{python}{array[index]}, as it does for function evaluation, \\mintinline{python}{array[i]}, which is potentially confusing. Let's illustrate this with an extended example.\n\n\\subsection*{Example}\n\nConsider two functions that are defined on all non-negative integers (at least).\n\\begin{minted}{python}\ndef f(x):\n    return x**2 - 5*x + 10\ndef g(y):\n    return max(0, 2*y - 10) + 3\n\\end{minted}\n\nThey may be transformed into arrays by sampling \\mintinline{python}{f}, \\mintinline{python}{g}, and $\\mintinline{python}{g} \\circ \\mintinline{python}{f}$ at enough points to avoid edge effects from their finite domains. 
For \\mintinline{python}{f} and \\mintinline{python}{g} above, 100 points in \\mintinline{python}{g} is enough to accept the entire range of \\mintinline{python}{f} when \\mintinline{python}{f} is sampled at 10 points.\n\\begin{minted}{python}\nF   = numpy.array([f(i) for i in range(10)])     # F is f at 10 elements\nG   = numpy.array([g(i) for i in range(100)])    # G is g at 100 elements\nGoF = numpy.array([g(f(i)) for i in range(10)])  # GoF is g\u2218f at 10 elements\n\\end{minted}\n\\noindent Now $\\mintinline{python}{F}: [0, 10) \\to [4, 47)$, $\\mintinline{python}{G}: [0, 100) \\to [3, 192)$, and $\\mintinline{python}{GoF}: [0, 10) \\to [3, 86)$.\n\nIndexing \\mintinline{python}{G} by \\mintinline{python}{F} can be expressed with square-bracket syntax or \\mintinline{python}{numpy.take}, and it returns the same result as the sampled composition \\mintinline{python}{GoF}.\n\\begin{minted}{python}\nG[F]               # \u2192 [13,  5,  3,  3,  5, 13, 25, 41, 61, 85]\nG.take(F)          # \u2192 [13,  5,  3,  3,  5, 13, 25, 41, 61, 85]\nnumpy.take(G, F)   # \u2192 [13,  5,  3,  3,  5, 13, 25, 41, 61, 85]\n\nGoF                # \u2192 [13,  5,  3,  3,  5, 13, 25, 41, 61, 85]\n\\end{minted}\n\\noindent In \\mintinline{python}{GoF}, the functions are composed before being transformed into arrays, and in \\mintinline{python}{G[F]}, the arrays themselves are composed via integer-array indexing.\n\nFunction composition is associative, so we should be able to change the order of two integer-array indexings. To demonstrate this, introduce another array, which need not have integer \\mintinline{python}{dtype}.\n\\begin{minted}{python}\nH = numpy.arange(1000)*1.1\n\\end{minted}\n\\noindent When we compute \\mintinline{python}{H} indexed by \\mintinline{python}{G} indexed by \\mintinline{python}{F}, it shouldn't matter whether the \\mintinline{python}{H[G]} index is computed first or the \\mintinline{python}{G[F]} index is computed first, and we see that this is the case.\n\\begin{minted}{python}\nH[G][F]            # \u2192 [14.3  5.5  3.3  3.3  5.5 14.3 27.5 45.1 67.1 93.5]\nH[G[F]]            # \u2192 [14.3  5.5  3.3  3.3  5.5 14.3 27.5 45.1 67.1 93.5]\n\\end{minted}\n\n\\subsection*{Multidimensional slicing}\n\nIf Numpy's integer-array indexing for multiple dimensions worked the same way as its range-slicing does, then the above would be trivially extensible to any number of dimensions. However, Numpy's integer-array indexing (called ``advanced indexing'')\\footnote{\\url{https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html}} couples iteration over integer arrays supplied to each of the $k$ slots in a rank-$k$ array.\n\nTo work around this caveat, consider rank-$k$ integer arrays in each of the $k$ slots, in which the integer array in slot $i$ has shape $(1, \\ldots, n_i, \\ldots, 1)$. 
For example, a three-dimensional slice\n\\[ \\mintinline{python}{array[start1:stop1, start2:stop2, start3:stop3]} \\]\n\\noindent can be simulated with integer arrays\n\\begin{align*}\n\\mintinline{python}{array[} & \\mintinline{python}{numpy.arange(start1, stop1).reshape(-1, 1, 1),} \\\\\n                            & \\mintinline{python}{numpy.arange(start2, stop2).reshape(1, -1, 1),} \\\\\n                            & \\mintinline{python}{numpy.arange(start3, stop3).reshape(1, 1, -1)]}\n\\end{align*}\n\\noindent because Numpy broadcasts the three integer arrays into a common three-dimensional shape, and the symmetry of these arrays decouples their effects in each dimension.\n\n
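The following sketch (hypothetical shapes and bounds) checks this equivalence numerically:\n\\begin{minted}{python}\nimport numpy\n\narray = numpy.arange(4 * 5 * 6).reshape(4, 5, 6)\n\n# an ordinary three-dimensional range slice\nsliced = array[1:3, 0:2, 2:5]\n\n# the same selection with broadcast integer arrays\nsimulated = array[numpy.arange(1, 3).reshape(-1, 1, 1),\n                  numpy.arange(0, 2).reshape(1, -1, 1),\n                  numpy.arange(2, 5).reshape(1, 1, -1)]\n\nassert (sliced == simulated).all()\n\\end{minted}\n\n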
For example,\n\\begin{align*}\n\\mintinline{python}{array}(\\mintinline{python}{index}): [0, x) \\to & s_1 \\to d^A \\\\\n & s_2 \\to [0, n_2) \\to d^A\\mbox{.}\n\\end{align*}\n\nThe string-index/integer-index can be {\\it partially} commuted: the domain $\\{s_1, s_2\\}$ can be moved from the middle of this function type to the left, like so:\n\\begin{align*}\n\\mintinline{python}{array}: & s_1 \\to [0, n_1) \\to d^A \\\\\n& s_2 \\to [0, n_1) \\to [0, n_2) \\to d^A\\mbox{,}\n\\end{align*}\n\\noindent but it cannot be moved to the right, where the type of each field is different.\n\nSimilarly, if we had nested records, with $s_1$ containing \\mintinline{python}{dtype} $d$ and $s_2$ containing a record with fields $t_1$ and $t_2$, our options for commuting indexes would be limited to one possibility:\n\\begin{align*}\n\\mintinline{python}{array}: [0, n_1) \\to & s_1 \\to d & \\mintinline{python}{array}: &\\ s_1 \\to [0, n_1) \\to d \\\\\n & {s_2 \\to\\ }t_1 \\to d                              & &\\ {s_2 \\to [0, n_1) \\to\\ }t_1 \\to d \\\\\n & \\phantom{s_2 \\to\\ }t_2 \\to d                      & &\\ \\phantom{s_2 \\to [0, n_1) \\to\\ }t_2 \\to d\\mbox{.}\n\\end{align*}\n\n\\vfill\n\n\\section*{Non-rectilinear shapes}\n\nWe can also consider arrays of unequal-length subarrays (``jagged'' or ``ragged'' arrays).\n\nLike records, jagged arrays must be described by a dependent type if the function is to be defined on its whole domain. Just as each value in a record's domain, $\\{s_1, s_2\\}$, can return a different type, each value in a jagged array's domain can return a different type.\n\nFor example, the type of an array like \\mintinline{python}{[[1.1, 2.2, 3.3], [], [4.4, 5.5]]} is\n\\begin{align*}\n\\mintinline{python}{array}: &\\ 0 \\to [0, 3) \\to d^A \\\\\n &\\ 1 \\to [0, 0) \\to d^A \\\\\n &\\ 2 \\to [0, 2) \\to d^A\\mbox{.}\n\\end{align*}\n\\noindent The type description grows with the length of the array---while it may be practical to fully enumerate a record's fields, it's not practical to enumerate a large jagged array's type.\n\nJagged arrays can be passed into ufuncs and sliced for the same reasons as non-rectilinear records: these features are composition to the {\\it output} and {\\it input} of the array, respectively:\n\\begin{align*}\n\\mbox{ufunc}(f)(\\mintinline{python}{array}): &\\ 0 \\to [0, 3) \\to d^B & \\mintinline{python}{array}(\\mintinline{python}{index}): &\\ 0 \\to [0, 2) \\to d^A \\\\\n &\\ 1 \\to [0, 0) \\to d^B                                             & &\\ 1 \\to [0, 3) \\to d^A \\\\\n &\\ 2 \\to [0, 2) \\to d^B                                             & &\\ 2 \\to [0, 3) \\to d^A \\\\\n &                                                                   & &\\ 3 \\to [0, 0) \\to d^A\n\\end{align*}\n\\noindent for $f: d^A \\to d^B$ and \\mintinline{python}{index = [2, 0, 0, 1]}, for example.\n\nNon-rectilinear record types and non-rectilinear shapes can be combined, and these two generators can already produce data types as general as JSON. (Note that the explicit enumeration of dependent types for each array index allows heterogeneous lists and \\mintinline{json}{null}.)\n\nString-valued field indexes can always commute to the left through a jagged dimension, but they can only commute to the right if the domains match for all elements of the jagged dimension. 
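Before the commuting example below, the jagged-array behaviors just described can be sketched with the Awkward Array library itself (a minimal sketch, assuming \\mintinline{python}{awkward} version 1.0 or later; the values are the ones from the text):\n\\begin{minted}{python}\nimport numpy\nimport awkward as ak\n\narray = ak.Array([[1.1, 2.2, 3.3], [], [4.4, 5.5]])\n\nnumpy.sqrt(array)    # ufunc: composes sqrt with the array's output\narray[[2, 0, 0, 1]]  # integer-array slice: composes with the input\n\\end{minted}\n\n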
For example, $\\{s_1, s_2\\}$ can commute through both levels of the following jagged array, but only because it has the same combinations of nested shapes for both $s_1$ and $s_2$.\n\\begin{align*}\n\\mintinline{python}{a}: &\\ s_1 \\to\\mbox{\\hspace{-0.5 cm}} & 0 \\to [0, 3) \\to d^A & \\mbox{\\hspace{0.5 cm}\\mintinline{python}{a}:\\hspace{-0.5 cm}} &\\ 0 \\to &\\ s_1 \\to [0, 3) \\to d^A & \\mintinline{python}{a}: &\\ 0 \\to [0, 3) \\to\\mbox{\\hspace{-0.5 cm}} &\\ s_1 \\to d^A \\\\\n& &\\ 1 \\to [0, 0) \\to d^A                                  & & &\\ s_2 \\to [0, 3) \\to d^B                                & & & s_2 \\to d^B \\\\\n& &\\ 2 \\to [0, 2) \\to d^A                                  & &\\ 1 \\to &\\ s_1 \\to [0, 0) \\to d^B                         & &\\ 1 \\to [0, 0) \\to\\mbox{\\hspace{-0.5 cm}} &\\ s_1 \\to d^A \\\\\n&\\ s_2 \\to\\mbox{\\hspace{-0.5 cm}} & 0 \\to [0, 3) \\to d^B                         & & &\\ s_2 \\to [0, 0) \\to d^B                                & & & s_2 \\to d^B \\\\\n& &\\ 1 \\to [0, 0) \\to d^B                                  & &\\ 2 \\to &\\ s_1 \\to [0, 2) \\to d^B                         & &\\ 2 \\to [0, 2) \\to\\mbox{\\hspace{-0.5 cm}} &\\ s_1 \\to d^A \\\\\n& &\\ 2 \\to [0, 2) \\to d^B                                  & & &\\ s_2 \\to [0, 2) \\to d^B                                & & & s_2 \\to d^B\n\\end{align*}\n\n\\section*{Conclusions}\n\nThe reason Awkward Array can make use of Numpy's ufunc and slicing concepts, despite a much more general data model, is because arrays are functions and these two operations correspond to function composition at the {\\it output} or the {\\it input} of the array.\n\nThis note does not discuss the implementation details of Numpy or Awkward Array, though their scopes are well drawn by technical considerations. Numpy focuses on rectilinear arrays because stride tricks greatly optimize that domain. Awkward Array is more general, but it cannot use stride tricks on non-rectilinear data. Parts of a data structure must be physically separated in memory to allow generalized reshaping. 
However, the fact that slices can be explicitly composed with one another before applying them to an array (the associativity of function composition) has been very useful when dealing with separated data: slices do not need to be propagated all the way down a tree of nested structures---the meaning is preserved by lazy evaluation.\n\n\\end{document}\n", "meta": {"hexsha": "6279a02e4bdd1c8f3589754e2af4739ca092f193", "size": 24756, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/theory/arrays-are-functions.tex", "max_stars_repo_name": "martindurant/awkward-1.0", "max_stars_repo_head_hexsha": "a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/theory/arrays-are-functions.tex", "max_issues_repo_name": "martindurant/awkward-1.0", "max_issues_repo_head_hexsha": "a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/theory/arrays-are-functions.tex", "max_forks_repo_name": "martindurant/awkward-1.0", "max_forks_repo_head_hexsha": "a3221ee1bab6551dd01d5dd07a1d2dc24fd02c38", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.0721649485, "max_line_length": 751, "alphanum_fraction": 0.6976490548, "num_tokens": 7589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.7057850154599562, "lm_q1q2_score": 0.5633482672668911}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.7\\columnwidth]{fig/two_pointers.png}\n    \\caption{Two pointer Technique}\n    \\label{fig:two pointer}\n\\end{figure}\nOn linear data structures, or on implicit linear state space, either a particular targeted item or a consecutive substructure such as a subarray and substring can be searched. \n\nTo find a single item on linear space, we can apply linear search in general, or binary search if the data structure is ordered/sorted with logarithmic cost. In this chapter, we introduce two pointer techniques that are commonly used to solve two types of problems:\n\\begin{enumerate}\n    \\item Searching: To search for an item such as median,  a predefined substructure, and a substructure that satisfy certain conditions such as finding the minimum subarray length wherein the subarray equals to a targeted sum. Or find a substructure satisfy a string pattern. \n    \\item Adjusting: To adjust ordering or arrangement of items in the data structure such as removing duplicates from sorted array. \n\\end{enumerate}\n\n% \\section{Introduction to Two Pointers}\nAs the name suggests, Two pointers technique involves two pointers that start and move with the following two patterns:\n\\begin{enumerate}\n    \\item Equi-directional:  Both pointers start from the beginning of the array, and usually one moves faster and the other slower. Sliding window algorithm can be put into this category. \n    \\item Opposite-directional: One pointer start at the start position and conversely the other pointer starts at the end. These two oppositely posed pointers  move toward each other and usually meet in the middle. \n\\end{enumerate}\nIn the following sections, we will detail on two-pointer technique exemplified on real interview questions.\n\n\\section{Slow-Faster Pointers}\nSuppose we have two pointers, $i$ and $j$, which may or may not start at the start position in the linear data structures, but one move slower ($i$) and the other faster ($j$). Two pointers can decide either a pair or a subarray to solve related problems.  For the case of subarray, the algorithm is called sliding window algorithm. On the span of the array, and at most of three potential sub-spaces exist: from start index to $i$ ($[0, i]$), from $i$ to $j$ ($[i, j]$), and from $j$ to the end index ($[j, n]$).\n\nEven though slow-faster pointers technique rarely given formal introduction in book, it is widely used in algorithms. In sorting, Lumuto's partition in the QuickSort used the slow-faster pointers to divide the whole region into three parts according the comparison result to the pivot: Smaller Items region, Larger Items region, and the unrestricted region. In string pattern matching, fixed sliding window and one we will introduce in this chapter. \n\nIn this section, we explain how two pointers work on two types of linear data structures: Array and Linked List.\n\\subsection{Array}\n\\subsubsection{Remove Duplicates from Sorted Array(L26)} Given a sorted array $a=[0,0,1,1,1,2,2,3,3,4]$, remove the duplicates in-place such that each element appears only once and return the new length. Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory. In the given example, there are in total of 5 unique items and 5 is returned. \n\\paragraph{Analysis} We set both slower pointer $i$ and the faster pointer $j$ at the first item in the array. 
Recall that the slow-fast pointers cut the space of the sorted array into three parts, which we can define as:\n\\begin{enumerate}\n    \\item unique items in region $[0, i]$,\n    \\item untouched items in region $[i+1, j]$,\n    \\item and unprocessed items in region $[j+1, n)$.\n\\end{enumerate}\nIn the process, we compare the items pointed to by the two pointers; once these two items are not equal, we have found a new unique item. We then advance the slow pointer by one position and copy the unique item at the faster pointer into that slot, which removes duplicates of previously recorded values. \n\n\nWith our example, at first $i=j=0$, region one has one item, which is trivially unique, and region two has zero items. Part of the process is illustrated as:\n\\begin{lstlisting}[numbers=none]\ni  j   [0, i]  [i+1, j]   process\n0  0   [0]     []         item 0==0, j+1=1\n0  1   [0]     [0]        item 0==0, j+1=2\n0  2   [0]     [0, 1]     item 0!=1, i+1=1, copy 1 to index 1, j+1=3\n1  3   [0, 1]  [1, 1]     item 1==1, j+1=4\n1  4   [0, 1]  [1, 1, 1]  item 1==1, j+1=5\n1  5   [0, 1]  [1, 1, 1, 2] item 1!=2, i+1=2, copy 2 to index 2, j+1=6\n2  6   [0, 1, 2] [1, 1, 2, 2]\n\\end{lstlisting}\nThe code is given as:\n\\begin{lstlisting}[language=Python]\ndef removeDuplicates(nums) -> int:\n    i, j = 0, 0\n    while j < len(nums):\n        if nums[i] != nums[j]:\n            # Advance i, then copy the new unique item nums[j] there\n            i += 1\n            nums[i] = nums[j]\n        j += 1\n    return i + 1\n\\end{lstlisting}\nAfter calling the above function on our given example, array $a$ becomes $[0, 1, 2, 3, 4, 2, 2, 3, 3, 4]$. Check the source code for the whole visualized process.\n\n\\subsubsection{Minimum Size Subarray Sum(L209)} Given an array of $n$ positive integers and a positive integer $s$, find the minimal length of a contiguous subarray whose $sum \\geq s$. If there isn't one, return 0 instead.\n\\begin{lstlisting}[numbers=none]\nExample: \n\nInput: s = 7, nums = [1,4,1,2,4,3]\nOutput: 2\nExplanation: the subarray [4,3] has the minimal length under the problem constraint.\n\\end{lstlisting}\n\\paragraph{Analysis} In this problem, we need to secure a substructure (a subarray) that not only satisfies a condition ($sum \\geq s$) but also has the minimal length. Naively, we can enumerate all subarrays and search through them to find the minimal length, which requires at least $O(n^2)$ time complexity using a prefix sum. A brute-force sketch of this idea (our own, for comparison) is:\n\\begin{lstlisting}[language=Python]\ndef minSubArrayLenBrute(s: int, nums) -> int:\n    n = len(nums)\n    # prefix[i] is the sum of nums[0:i]\n    prefix = [0] * (n + 1)\n    for i in range(n):\n        prefix[i + 1] = prefix[i] + nums[i]\n    ans = float('inf')\n    # Enumerate all subarrays nums[i:j+1]\n    for i in range(n):\n        for j in range(i, n):\n            if prefix[j + 1] - prefix[i] >= s:\n                ans = min(ans, j - i + 1)\n                break  # longer subarrays starting at i are worse\n    return ans if ans < float('inf') else 0\n\\end{lstlisting}\nHowever, we can use two pointers $i$ and $j$ ($i \\leq j$), both pointing at the first item initially. In this case, these two pointers define a subarray $a[i:j+1]$, and we care about the region $[i, j]$. As we increase pointer $j$, we keep adding positive items into the sum of the subarray, making the subarray sum monotonically increasing. Oppositely, if we increase pointer $i$, we remove positive items from the subarray, making the sum of the subarray monotonically decreasing. The detailed steps of the two-pointer technique in this case are:\n\\begin{enumerate}\n    \\item Get the optimal subarray among all subproblems (subarrays) that start from the current $i$, which is $0$ at first. We accomplish this by forwarding pointer $j$ to include enough items until $sum \\geq s$, at which point we pause and go to the next step. Let's assume pointer $j$ stops at $e_0$.  \n    \\item Get the optimal subarray among all subproblems (subarrays) that end with the current $j$, which is $e_0$ at the moment. 
We do this by forwarding pointer $i$ this time to shrink the window until $sum \\geq s$ no longer holds. Let's assume pointer $i$ stops at index $s_0$. Now, we have found the optimal solution for the subproblems $a[0:i, 0:j]$ (denoting subarrays with the start point in range $[0, i]$ and the end point in range $[0, j]$). \n    \\item Now that $i=s_0$ and $j=e_0$, we repeat steps 1 and 2.\n\\end{enumerate}\nIn our example, we first move $j$ until $j=3$, where the subarray sum is 8. Then we move pointer $i$ until the sum drops below 7, which happens at $i=2$. For the prefix $[1, 4, 1, 2]$, the optimal subarray found so far is $[4, 1, 2]$, with length 3. The Python code is given as:\n\\begin{lstlisting}[language=Python]\ndef minSubArrayLen(s: int, nums) -> int:\n    i, j = 0, 0\n    acc = 0\n    ans = float('inf')\n    while j < len(nums):\n        acc += nums[j]\n        # Shrink the window\n        while acc >= s:\n            ans = min(ans, j - i + 1)\n            acc -= nums[i]\n            i += 1\n        j += 1\n        \n    return ans if ans < float('inf') else 0\n\\end{lstlisting}\n\nBecause both pointers $i$ and $j$ move at most $n$ steps, the total number of operations is at most $2n$, making the time complexity $O(n)$. The above question would be trivial if the maximum subarray length were asked.\n\n\\subsubsection{Minimum Window Substring (L76, hard)} \n Given a string $S$ and a string $T$, find all the minimum windows in $S$ that contain all the characters in $T$, in complexity $O(n)$.\n\\begin{lstlisting}[numbers=none]\nExample:\nInput: S = \"AOBECDBANC\", T = \"ABC\"\nOutput: [\"CDBA\", \"BANC\"]\n\\end{lstlisting}\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.5\\columnwidth]{fig/minimum_window_substring.png}\n    \\caption{The data structures to track the state of the window.}\n    \\label{fig:minimum_window}\n\\end{figure}\n\\paragraph{Analysis} We apply two pointers, with the region between pointers $i$ and $j$ as our candidate substring. For this problem, the condition on the window $[i, j]$ is that it contains all characters from $T$. The intuition is that we keep expanding the window by moving $j$ forward until all characters in $T$ are found. Afterwards, we contract the window so that we can find the minimum window with the condition satisfied. Instead of using another data structure to track the state of the current window, we can depict the pattern $T$ as a dictionary, where the unique characters comprise the keys and the number of occurrences of each character is the value. We use another variable \\texttt{count} to track the number of unique characters not yet fully matched. Together, these track the state of the moving window $[i, j]$: the values of the dictionary indicate how many occurrences of each character the window is still short of, and \\texttt{count} represents how many unique characters are not yet fully found. We depict the state in Fig.~\\ref{fig:minimum_window}. \n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=1.1\\columnwidth]{fig/minimum_window_substring_process.png}\n    \\caption{The partial process of applying two pointers. The grey shaded arrow indicates the pointer that is on the move.}\n    \\label{fig:minimum_window_process}\n\\end{figure}\n\nAlong with the expanding and shrinking of the window that comes with the movement of pointers $i$ and $j$, we track the state as follows:\n\\begin{itemize}\n    \\item When forwarding $j$, we include $S[j]$ in the window. If $S[j]$ is a key in the dictionary, we decrease its value by one. 
Further, if the value reaches the threshold $0$, we decrease \\texttt{count} by one, meaning the window is short of one fewer unique character.\n    \\item Once \\texttt{count=0}, our window satisfies the condition and we can contract it. We then forward $i$, removing $S[i]$ from the window; if it is an existing key in the dictionary, we increase that key's value, meaning the window is now short of one more occurrence. Once the value reaches the threshold of $1$, we increase \\texttt{count}. \n\\end{itemize}\nPart of this process with our example is shown in Fig.~\\ref{fig:minimum_window_process}, and the Python code is given as:\n\\begin{lstlisting}[language=Python]\nfrom collections import Counter\ndef minWindow(s, t):\n  dict_t = Counter(t)\n  count = len(dict_t)\n  i, j = 0, 0\n  ans = []\n  minLen = float('inf')\n  while j < len(s):\n    c = s[j]\n    if c in dict_t:\n      dict_t[c] -= 1\n      if dict_t[c] == 0:\n        count -= 1\n    # Shrink the window\n    while count == 0 and i < j:\n      curLen = j - i + 1\n      if curLen < minLen:\n        minLen = j - i + 1\n        ans = [s[i:j+1]]\n      elif curLen == minLen: \n        ans.append(s[i:j+1])\n\n      c = s[i]\n      if c in dict_t:\n        dict_t[c] += 1\n        if dict_t[c] == 1:\n          count += 1\n      i += 1\n\n    j += 1\n  return ans\n\\end{lstlisting}\n\n% until When the window has all the desired characters, we contract (if possible) and save the smallest window till now. The only difference compared with the above problem is the definition of desirable: we need to compare the state of current window with the required state in T. They can be handled as a hashmap with character as key and frequency of characters as value. \n% \\begin{lstlisting}[language=Python]\n% def minWindow(self, s, t):\n%     dict_t = Counter(t)\n%     state = Counter()\n%     required = len(dict_t)\n\n%     # left and right pointer\n%     i, j = 0, 0\n\n%     formed = 0\n%     ans = float(\"inf\"), None # min len, and start pos\n\n%     while j < len(s):\n%         char = s[j]\n%         # record current state\n%         if char in dict_t:\n%             state[char] += 1\n%             if state[char] == dict_t[char]:\n%                 formed += 1\n\n%         # Try and contract the window till the point where it ceases to be 'desirable'.\n%         # bPrint = False\n%         while i<=j and formed == required:\n%             # if not bPrint:\n%             #     print('found:', s[i:j+1], i, j)\n%             #     bPrint = True\n%             char = s[i]\n%             if j-i+1 < ans[0]:\n%                 ans = j - i + 1, i\n%             # change the state\n%             if char in dict_t:\n%                 state[char] -= 1\n%                 if state[char] == dict_t[char]-1:\n%                     formed -= 1\n\n%             # Move the left pointer ahead,\n%             i += 1    \n        \n%         # Keep expanding the window \n%         j += 1  \n%         # if bPrint:\n%         #     print('move to:', s[i:j+1], i, j)\n%     return \"\" if ans[0] == float(\"inf\") else s[ans[1] : ans[1] + ans[0]]\n% \\end{lstlisting}\n\n% The process would be:\n% \\begin{lstlisting}[numbers=none]\n% found: ADOBEC 0 5\n% move to: DOBECO 1 6\n% found: DOBECODEBA 1 10\n% move to: ODEBAN 6 11\n% found: ODEBANC 6 12\n% move to: ANC 10 13\n% \\end{lstlisting}\n\\subsection{When Two Pointers do not work} The two-pointer technique does not always work on subarray-related problems.\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{What happens if there exist 
negative numbers in the array? } Since the sum of the subarray is no longer monotonically increasing with the number of items between the two pointers, we cannot figure out how to move the two pointers at each step. Instead, (1) we can compute prefix sums, organize them in order, and use binary search to find all possible start indexes; or (2) use a monotone stack (see LeetCode problem 325. Maximum Size Subarray Sum Equals k (hard)).\n\\end{bclogo}\n\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{What if we are to check the maximum average subarray? } \n644. Maximum Average Subarray II (hard). Similarly, the average of a subarray does not follow a certain order with the movement of the two pointers at each side, making it impossible to decide how to move the two pointers.\n\n\\end{bclogo}\n\n\\subsection{Linked List}\n\\subsubsection{Middle of the Linked List(L876)} \n%The simplest example of slow-fast pointers on linked list is to get the middle node of a given linked list. \nGiven a non-empty, singly linked list with head node $head$, return a middle node of the linked list. When the linked list is of odd length, there exists one and only one middle node; when it is of even length, two exist and we return the second middle node.\n\\begin{lstlisting}[numbers=none]\nExample 1 (odd length):\n\nInput: [1,2,3,4,5]\nOutput: Node 3 from this list (Serialization: [3,4,5])\n\nExample 2 (even length):\n\nInput: [1,2,3,4,5,6]\nOutput: Node 4 from this list (Serialization: [4,5,6])\n\\end{lstlisting}\n\n\\paragraph{Analysis} If the data structure were an array, we could compute the position of the middle item directly from the total length. Following this method with a single pointer, we can first iterate over the whole linked list in $O(n)$ time to get the length, and then do another iteration to obtain the middle node. $n + \\frac{n}{2}$ operations are needed, making the time complexity $O(n)$. \n\nHowever, we can apply two pointers simultaneously at the head node, each moving at a different pace: the slow pointer moves one step at a time and the fast one moves two steps instead. When the fast pointer reaches the end, the slow pointer stops at the middle. This slow-faster pointer technique requires only $\\frac{n}{2}$ iterations, which is three times faster than our naive method, although the big-O time complexity still remains $O(n)$. \n\\paragraph{Implementation}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width = 0.98\\columnwidth]{fig/middle_of_linked_list.png}\n    \\caption{Slow-fast pointer to find middle}\n    \\label{fig:slow-faster}\n\\end{figure}\nWe illustrate the process of running the two-pointer technique on our two examples in Fig.~\\ref{fig:slow-faster}. As we can see, when the slow pointer reaches item 3, the faster pointer is at item 5, the last item in the first example, which has odd length. Further, when the slow pointer reaches item 4, the faster pointer reaches the empty successor of the last item in the second example, which has even length. 
Therefore, in the implementation, we check two conditions in the \\texttt{while} loop: \n\\begin{enumerate}\n    \\item For example 1: if the fast pointer has no successor (\\texttt{fast.next==None}), the loop terminates.\n    \\item For example 2: if the fast pointer is invalid (\\texttt{fast==None}), the loop terminates.\n\\end{enumerate}\nThe Python code is as:\n\\begin{lstlisting}[language=Python]\ndef middleNode(head):\n    slow = fast = head\n    while fast and fast.next:        \n        fast = fast.next.next\n        slow = slow.next     \n    return slow\n\\end{lstlisting}\n\n\\subsubsection{Floyd's Cycle Detection (Floyd's Tortoise and Hare)} \n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.8\\columnwidth]{fig/circular_linked_list.png}\n    \\caption{Circular Linked List}\n    \\label{fig:circular_linked_list}\n\\end{figure}\nWhen a linked list has a cycle, as shown in Fig.~\\ref{fig:circular_linked_list}, iterating over the list traps the program in an infinite loop. The pointer starts from the head, traverses to the start of the loop, and then comes back to the start of the loop again and continues this process endlessly. To avoid being stuck in such a ``trap'', we may have to solve the following three problems:\n\\begin{enumerate}\n    \\item Check if there exists a cycle. \n    \\item Check where the cycle starts.\n    \\item Remove the cycle once it is detected.\n\\end{enumerate}\nThe solution uses exactly the same slow-faster pointer traversal of the linked list as our last example. With the slow pointer iterating one item at a time and the faster pointer moving at double pace, these two pointers will definitely meet at one item in the loop. In our example, they meet at node 6. So, is it possible that they meet in the non-loop region that starts from the head and ends at the start node of the loop? The answer is no, because the faster pointer traverses the non-loop region only once and is always ahead of the slow pointer there, making it impossible for them to meet in this region. This method is called Floyd's Cycle Detection, aka Floyd's Tortoise and Hare Cycle Detection. Let's see in more detail how to solve the three problems above with this method. \n\\paragraph{Check Linked List Cycle(L141)}\nCompared with the code in the last example, we only need to check if the \\texttt{slow} and \\texttt{fast} pointers are pointing at the same node: if they are, we are certain that there must be a loop in the list and return \\texttt{True}; otherwise we return \\texttt{False}.\n\\begin{lstlisting}[language=Python]\ndef hasCycle(head):\n    slow = fast = head\n    while fast and fast.next:\n        slow = slow.next\n        fast = fast.next.next\n        if slow == fast:\n            return True\n    return False\n\\end{lstlisting}\n\\paragraph{Check Start Node of Linked List Cycle(L142)} Given a linked list, return the node where the cycle begins. If there is no cycle, return \\texttt{None}.\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.6\\columnwidth]{fig/TQoyH.png}\n    \\caption{Floyd's Cycle finding Algorithm}\n    \\label{fig:floyd_cycle_1}\n\\end{figure}\n\nFor a given linked list, assume the slow and fast pointers meet at a node somewhere in the cycle. As shown in Fig.~\\ref{fig:floyd_cycle_1}, we denote three nodes: the head ($h$), the start node of the cycle ($s$), and the meeting node in the cycle ($m$). 
We denote the distance between $h$ and $s$ by $x$, the distance between $s$ and $m$ by $y$, and the remaining distance around the cycle from $m$ back to $s$ by $z$. Because the faster pointer traverses the list at double speed, when it meets up with the slow pointer, the distance it has traveled ($x+y+z+y$) is two times the distance traveled by the slow pointer ($x+y$):\n\\begin{align}\n    x + 2y + z &= 2(x + y), \\\\\n    z &= x.\n\\end{align}\nFrom the above equations, we obtain that $x$ and $z$ are equal: $x$ is the distance from the head to the starting node of the cycle, $y$ is the distance from the start node to the meeting node, and $z$ is the remaining distance from the meeting node back to the start node. Therefore, after we have detected the cycle as in the last example, we can reset the slow pointer to the head of the linked list. Then we make the slow and the fast pointer both traverse at the same pace, one node at a time, until they meet, at which point we stop the traversal. The node where they stop is the start node of the cycle. The code is given as:\n\n% the meeting point, and making both slow and fast pointer to move one node at a time, they will meet at the starting node of the cycle.\n\n\n% Now, let's try to device the algorithm. Both slow and fast pointer starts at position 0, the node index they travel each step is: [0,1,2,3,...,k] and [0,2,4,6,...,2k] for slow and fast pointer respectively. Therefore, the total distance traveled by the slow pointer is half of the distance travelled by the fat pointer. From the above figure, we have the distance travelled by slow pointer to be $d_s = x+y$, and for the fast pointer $d_f = x+y+z+y = x+2y+z$. With the relation $2*d_s = d_f$. We will eventually get $x = z$. Therefore, by moving slow pointer to the start of the linked list after the meeting point, and making both slow and fast pointer to move one node at a time, they will meet at the starting node of the cycle. (LeetCode problem: 142. Linked List Cycle II (medium)).\n\\begin{lstlisting}[language=Python]\ndef detectCycle(head):\n    slow = fast = head\n\n    def getStartNode(slow, fast, head):\n      # Reset slow pointer      \n      slow = head\n      while fast and slow != fast:\n          slow = slow.next\n          fast = fast.next\n      return slow\n\n    while fast and fast.next:\n        slow = slow.next\n        fast = fast.next.next\n        # A cycle is detected\n        if slow == fast: \n            return getStartNode(slow, fast, head)\n    \n    return None\n\\end{lstlisting}\n\n\\paragraph{Remove Linked List Cycle}\nWe can remove the cycle by redirecting the last node in the cycle, which in the example in Fig.~\\ref{fig:circular_linked_list} is node 6, to an empty node. Therefore, we have to modify the above code to make the \\texttt{slow} and \\texttt{fast} pointers stop at the last node instead of the start node of the loop. This subroutine is implemented as:\n\\begin{lstlisting}[language=Python]\ndef resetLastNode(slow, fast, head):\n    slow = head\n    while fast and slow.next != fast.next:\n        slow = slow.next\n        fast = fast.next\n    fast.next = None\n\\end{lstlisting}\nThe complete code to remove a cycle is provided in Google Colab together with running examples.\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{What if there is not just one, but multiple cycles in the Linked List? 
} \n\\end{bclogo}\n\n\\section{Opposite-directional Pointers}\nAnother variant of the two-pointer technique is to place the two pointers oppositely: one at the beginning and the other at the end of the array. Through the process, they move toward each other until they meet in the middle. Details such as how much each pointer moves or which pointer to move at each step are decided by the specific problem to solve. We just have to make sure that, when we apply this technique, we have considered the whole state space and will not miss out any area, which would make the search incomplete. \n\nThe simplest example of this two-pointer method is to reverse an array or a string in place. For example, when the list $a=[1, 2, 3, 4, 5]$ is reversed, it becomes $[5, 4, 3, 2, 1]$. Of course we can simply allocate a new list and copy the items in reversed order. But with two pointers, we are able to reverse it in-place using only $\\frac{n}{2}$ swaps, through the following code:\n\\begin{lstlisting}[language=Python]\ndef reverse(a):\n  i, j = 0, len(a) - 1\n  while i < j:\n    # Swap items\n    a[i], a[j] = a[j], a[i]\n    i += 1\n    j -= 1\n\\end{lstlisting}\nMoreover, binary search can be viewed as an example of opposite-directional pointers. At first, the two pointers are at the first and the last item of the array. Then, depending on which side of the middle item the target falls, one of the pointers moves forward or backward to the middle point, reducing the search space by half at each step. We also explore another example with this technique.\n\\subsubsection{Two Sum on Sorted Array(L167)} \nGiven an array of integers that is already sorted in ascending order, find two numbers such that they add up to a specific target number. \n\\begin{lstlisting}[numbers=none]\nInput: numbers = [2,7,11,15], target = 9\nOutput: [1,2]\nExplanation: The sum of 2 and 7 is 9. Therefore index1 = 1, index2 = 2 (1-indexed).\n\\end{lstlisting}\n\\paragraph{Analysis} If we simply enumerate all possible pairs, it takes $O(n^2)$ to solve this problem. However, the opposite-directional two pointers give linear performance.\n\nDenote the list as $A=[a_1, a_2, ..., a_{n-1}, a_{n}]$; for the sorted array we have $a_1\\leq a_2 \\leq...\\leq  a_{n-1} \\leq a_n$. The sum of any two items in the array falls within two possible ranges: $[a_1+a_2, a_1+a_n]$ and $[a_1+a_n, a_{n-1}+a_n]$. By placing one pointer $i$ at $a_1$ and the other $j$ at $a_n$ to start with, we get $a_1+a_n$ as the sum. Pointer $i$ can only move forward, accessing larger items. On the other hand, pointer $j$ can only move backward, accessing smaller items. 
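For instance, on the example input above, the pointers evolve as follows (our own walk-through; the decision rule is formalized right after this trace):\n\\begin{lstlisting}[numbers=none]\ni  j   a[i]+a[j]   action\n0  3   2+15 = 17   17 > 9, move j backward\n0  2   2+11 = 13   13 > 9, move j backward\n0  1   2+7  =  9   9 == 9, target found\n\\end{lstlisting}\n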
Now there are three scenarios according to the comparison between the target and the current sum at the two pointers:\n\\begin{enumerate}\n    \\item  If $t == a[i] + a[j]$, the target sum is found.\n    \\item  If $t > a[i] + a[j]$, we have to increase the sum; we can only do this by moving pointer $i$ forward.\n    \\item If $t < a[i] + a[j]$, we have to decrease the sum; we can only do this by moving pointer $j$ backward.\n\\end{enumerate}\nThe Python code is as:\n\\begin{lstlisting}[language=Python]\ndef twoSum(a, target):\n    n = len(a)\n    i, j  = 0, n-1\n    while i < j:\n        temp = a[i] + a[j]\n        if temp == target:\n            # L167 expects 1-indexed positions\n            return [i + 1, j + 1]\n        elif temp < target:\n            i += 1\n        else:\n            j -= 1\n    return []\n\\end{lstlisting}\n\\section{Follow Up: Three Pointers}\nSometimes manipulating two pointers is not enough to distinguish different subspaces; we might need the assistance of another pointer to make things work.\n\\subsubsection{Binary Subarrays With Sum (L930)} In an array $A$ of $0$s and $1$s, how many non-empty subarrays have sum $S$?\n\\begin{lstlisting}[numbers=none]\nExample 1:\nInput: A = [1,0,1,0,1], S = 2\nOutput: 4\nExplanation: \nThe 4 subarrays are listed below:\n[1,0,1], index (0, 2)\n[1,0,1,0], index (0, 3)\n[0,1,0,1], index (1, 4)\n[1,0,1], index (2, 4)\n\\end{lstlisting}\n\\paragraph{Analysis}\nThis problem is highly similar to the minimum size subarray problem we encountered before. We naturally start with two pointers $i$ and $j$ and restrict the subarray in range $[i, j]$ to satisfy the condition $sum\\leq S$. The window is contracted when the condition is violated. We would write the following code:\n\\begin{lstlisting}[language=Python]\ndef numSubarraysWithSum(a, S):\n  i, j = 0, 0\n  win_sum = 0\n  ans = 0\n  while j < len(a):\n    win_sum += a[j]\n    while i < j and win_sum > S:\n      win_sum -= a[i]\n      i += 1\n    if win_sum == S:\n      ans += 1\n      print('({}, {})'.format(i, j))\n    j += 1\n  return ans\n\\end{lstlisting}\nHowever, the above code only returns $3$, instead of $4$ as shown in the example. By printing out pointers $i$ and $j$, we can see the above code is missing case $(2, 4)$. Why? Because we are restricting the subarray sum in range $[i, j]$ to be smaller than or equal to $S$, and $0$s might appear in the front or in the rear of the subarray:\n\\begin{itemize}\n\\item In the process of expanding the subarray, pointer $j$ is moved one step at a time. Thus, even though $0$s appear in the rear of the subarray, the counting is correct.\n\\item However, in the process of shrinking the subarray while the restriction is violated ($sum > S$), we stop right away once $sum \\leq S$, and in the code we end up counting this as only one occurrence. With $0$s at the beginning of the subarray, such as the subarray $[0, 1, 0, 1]$ spanning indexes $1$ to $4$, the count should be two instead of one. \n\\end{itemize}\nThe solution is to add another pointer $i_h$ to handle the missed case: when $sum=S$, count the total occurrences of $0$ at the front. Compared with the above solution, the code differs only slightly, with the additional pointer and one extra \\texttt{while} loop to deal with the case. 
Also, we need to make sure that $i_h < j$; otherwise, the \\texttt{while} loop would fail on an example with only zeros and a target sum of $0$.\n\\begin{lstlisting}[language=Python]\ndef numSubarraysWithSum(a, S):\n  i, i_h, j = 0, 0, 0\n  win_sum = 0\n  ans = 0\n  while j < len(a):\n    win_sum += a[j]\n    while i < j and win_sum > S:\n      win_sum -= a[i]\n      i += 1\n    # Move i_h to count all zeros in the front\n    i_h = i\n    while i_h < j and win_sum == S and a[i_h] == 0:\n      ans += 1\n      i_h += 1\n      \n    if win_sum == S:\n      ans += 1\n    j += 1\n  return ans\n\\end{lstlisting}\n\nWe notice that in this case we have to explicitly restrict $i < j$ and $i_h < j$ due to this special case, while in all our previous examples we did not have to.\n\n\\section{Summary}\nTwo pointers is a powerful tool for solving problems on linear data structures, such as the subarray and substring problems shown in the examples. The ``window'' enclosed between the two pointers can be viewed as a sliding window: it slides forward as the slower pointer moves forward. Two important properties are generally required for this technique to work:\n\\begin{enumerate}\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.7\\columnwidth]{fig/Sliding_window_property.png}\n    \\caption{Sliding Window Property}\n    \\label{fig:slide_window}\n\\end{figure}\n    \\item Sliding window property: whether we move the faster pointer $j$ forward by one or move the slower pointer $i$, we can get the state of the current window at $O(1)$ cost knowing the state of the last window. \n    \n    For example, given an array, imagine that we have a fixed-size window as shown in Fig.~\\ref{fig:slide_window}, which we slide forward one position at a time, computing the sum of each window. The brute-force solution would be of $O(kn)$ complexity, where $k$ is the window size and $n$ is the array size, using two nested \\texttt{for} loops: one to set the starting point, and the other to compute the sum in $O(k)$. However, the sum of the current window ($S_c$) can be computed from that of the last window ($S_l$) and the items that just slid out and in, $a_i$ and $a_j$ respectively. Then $S_c = S_l-a_i+a_j$. Getting the state of the window between two pointers in $O(1)$, as shown in this example, is what we call the sliding window property.\n    \n    Usually, an array with numerical values satisfies the sliding window property if we are to compute its sum or product. For substrings, as shown in our minimum window substring example, we can get the state of the current window from the state of the last window in $O(1)$ with the assistance of a dictionary data structure. For substrings this is more obscure, and the general requirement is that the state of the substring does not depend on the order of the characters (an anagram-like state). \n    \n    \\item Monotonicity: for subarray sum/product, the array should comprise only positive (or only negative) values so that the prefix sum/product is monotonic: moving the faster pointer and the slower pointer forward results in opposite changes to the state. 
The same goes for the substring problems: as we saw in the minimum window substring example, the state changes of \\texttt{count} and of the dictionary values are monotonic, each either increasing or decreasing with the movement of the two pointers.\n% \\begin{lstlisting}[language=Python]\n% def fixedSlideWindow(A, k):\n%     n = len(A)\n%     if k >= n:\n%         return sum(A)\n%     # compute the first window\n%     acc = sum(A[:k])\n%     ans = acc\n%     # slide the window\n%     for i in range(n-k): # i is the start point of the window\n%         j = i + k # j is the end point of the window\n%         acc = acc - A[i] + A[j]\n%         ans = max(ans, acc)\n%     return ans\n% \\end{lstlisting}\n\\end{enumerate}\n\n% The steps of using sliding windows:\n% \\begin{enumerate}\n%     \\item Initialize the left and right pointer;\n%     \\item Handle the right pointer and record the state of the current window;\n%     \\item While the window is in the state of desirable: record the optimal solution, move the left pointer and record the state (change or stay unchanged).\n%     \\item Up till here, the state is not desirable.  Move the right pointer in order to find a desirable window;\n% \\end{enumerate}\n\\section{Exercises}\n\\begin{enumerate}\n\\item 3. Longest Substring Without Repeating Characters\n\\item 674. Longest Continuous Increasing Subsequence (easy)\n\\item 438. Find All Anagrams in a String\n\\item 30. Substring with Concatenation of All Words\n\\item 159. Longest Substring with At Most Two Distinct Characters\n\\item 567. Permutation in String\n\\item 340. Longest Substring with At Most K Distinct Characters\n\\item 424. Longest Repeating Character Replacement\n\\end{enumerate}\n% that data strArray Search is to find a \\textbf{sub-structure} on a given linear data structure( Chapter~\\ref{chapter_linear_data_structure}) or a virtual linear search space. Categorized by the definition of sub-structure:\n% \\begin{itemize}\n%     \\item Define the sub-structure as a \\textbf{particular item}: Usually the worst and average performance is $O(n)$.  \\textbf{Binary search} (Section~\\ref{sec_binary_search}) finds an item within an ordered data structure, each time, the search space is elimilated by half in size, which makes the worst time complexity $O(\\log n)$. Using hashmap can gain us the best complexity of $O(1)$. \n%     \\item Define the sub-structure as a \\textbf{consecutive substructure} indexed by a start and end index (subarray) in the linear data structure, we introduce the \\textbf{Sliding Window Algorithm} (Section~\\ref{sec_pointer_sliding_window}). Compared with the brute force solution, it decrease the complexity from $O(n^2)$ to $O(n)$. If the sub-structure is \\textbf{predefined pattern}, we need pattern matching algorithms. This usually exists in string data structure, and we do string pattern matching (Section~\\ref{}). 
\n% \\end{itemize}\n\n% \\subfile{chapters/learning/search/sliding_window}\n\n\n\n\n\n\n \\end{document}", "meta": {"hexsha": "73a1d1656612c7ba36cb8aaf5efedf25ec6e4042", "size": 35545, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/chapter_advanced_linear_search.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/chapter_advanced_linear_search.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/chapter_advanced_linear_search.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.7047619048, "max_line_length": 1060, "alphanum_fraction": 0.7130398087, "num_tokens": 9503, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5633418731556089}}
{"text": "%-------------------1.1\n\\section{Auto-resizing equation}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{\n\\resizebox{.6\\textwidth}{!}{$\\dot{\\rho}=\n\\dfrac{x^3}{45a^9-23b}$}}{1.1}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\begin{equation*}\\label{eq1}\n\\resizebox{.4\\textwidth}{!}{ % change .4 to 0.5...\n$\\dot{\\rho}=\\dfrac{x^3}{45a^9-23b}$}\n\\end{equation*}\n    \\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n\n%-------------------1.2\n\\section{Form for simplest calculation}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \\newcommand{\\sss}[1]{this.getField(\"#1\").value}\n\\begin{Form}\n\\noindent%\nFill with number \\\\ \n\\small{\\mybox[red]{if it does't work try another PDF viewer}}\\\\ \n\n\\TextField[name=a]{a:} \\\\\n\n\\TextField[name=b]{b:} \\\\\n\n\\TextField[name=c]{c:} \\\\\n\n\\noindent%\n$\\sum = $ \\TextField[name=AvgStat, calculate={\n  event.value = ( \n    \\sss{a} +\n    \\sss{b} +\n    \\sss{c}) ;\n}, readonly, value=0]{} \n\\end{Form}}{1.2}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\documentclass{article}\n\\usepackage{hyperref}\n\\begin{document}\n\\newcommand{\\sss}[1]{this.getField(\"#1\").value}\n\\begin{Form}\n\\noindent%\nFill with number\\\\ \n\n\\TextField[name=a]{a:} \\\\\n\n\\TextField[name=b]{b:} \\\\\n\n\\TextField[name=c]{c:} \\\\\n\\noindent%\n$\\sum = $ \\TextField[name=AvgStat, calculate={\n  event.value = ( \n    \\sss{a} +\n    \\sss{b} +\n    \\sss{c}) ;\n}, readonly, value=0]{} \n\\end{Form}\n\\end{document}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n\n%-------------------1.3\n\\section{Equation in the form of steps}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \\resizebox{.4\\textwidth}{!}{$  \\frac{n_0}{n_1} = q_1 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_2 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_3 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_4 + \n   \\raisebox{-6pt}{$\\ddots$}\n   \\raisebox{-12pt}{+$\\dfrac{\\makebox[\\mywd][l]{$1\\kern30pt$}}\n  {q_{k-1} + \\dfrac{1}\n  {q_k}}$}$}}$}}$}} $}}{1.3}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\documentclass{article}\n\\usepackage{amsmath}\n\\def\\mywd{35pt}\n\\begin{document}\n\\[\n  \\frac{n_0}{n_1} = q_1 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_2 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_3 + \\dfrac{\\makebox[\\mywd][l]{$1$}}\n  {\\makebox[\\mywd][l]{$q_4 + \n   \\raisebox{-6pt}{$\\ddots$}\n   \\raisebox{-12pt}{+$\\dfrac{\\makebox[\\mywd][l]{$1\\kern30pt$}}\n  {q_{k-1} + \\dfrac{1}\n  {q_k}}$}$}}$}}$}}\n\\]\n\\end{document}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n%-------------------1.4\n\\section{One number for multiline equation}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \\begin{equation}\n\\begin{aligned}\nx_{ij} &= d_{ijk}E_k, \\\\ \nx_{ij} &= \\varsigma_{ijk}H_k,\\\\ \nx_{ij} &= s_{ijkl}X_{kl},\\\\ \nx_{ij} &= 
\\xi_{ij}\\delta p,\\\\ \nx_{ij} &= \\alpha_{ij}\\delta T\n\\end{aligned}\n\\end{equation}}{1.4}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\documentclass{article}\n\\usepackage{amsmath}\n\\begin{document}\n\\begin{equation}\n\\begin{aligned}\nx_{ij} &= d_{ijk}E_k, \\\\ \nx_{ij} &= \\varsigma_{ijk}H_k,\\\\ \nx_{ij} &= s_{ijkl}X_{kl},\\\\ \nx_{ij} &= \\xi_{ij}\\delta p,\\\\ \nx_{ij} &= \\alpha_{ij}\\delta T\n\\end{aligned}\n\\end{equation}\n\\end{document}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n%-------------------1.5\n\\section{Matrix in \\textbf{standalone} documentclass}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \\begin{equation*}\n\\begin{matrix} \na_{11} & a_{12} & a_{13}  \\\\\na_{21} & a_{22} & a_{23}  \\\\\na_{31} & a_{32} & a_{33}  \\\\\n\\end{matrix} \n\\end{equation*} }{1.5}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\documentclass[preview,border={-5cm 0cm -5cm -0.1cm}]{standalone}\n\\usepackage{amsmath}\n\\begin{document}\n\\begin{equation*}\n\\begin{matrix} \na_{11} & a_{12} & a_{13}  \\\\\na_{21} & a_{22} & a_{23}  \\\\\na_{31} & a_{32} & a_{33}  \\\\\n\\end{matrix} \n\\end{equation*}\n\\end{document}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n\n%-------------------1.6\n\\section{Multiple lines, one centered label}\n\\begin{tabular}{l | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \\begin{equation} \\label{eq1}\n\\begin{split}\nA & = \\frac{\\pi r^2}{2} \\\\\n & = \\frac{1}{2} \\pi r^2\n\\end{split}\n\\end{equation} }{1.6}\n\\end{minipage}\n& \\begin{minipage}[m]{0.5\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{black!5}{blue!15},numbers=left,basicstyle=\\footnotesize] \n\\begin{equation} \\label{eq1}\n\\begin{split}\nA & = \\frac{\\pi r^2}{2} \\\\\n & = \\frac{1}{2} \\pi r^2\n\\end{split}\n\\end{equation}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\n\n\n\n \n", "meta": {"hexsha": "49186fe3353a45bc2ec476b874d9a9465c0d3bbf", "size": 5293, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/C1.tex", "max_stars_repo_name": "AnMnv/eBook", "max_stars_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-12-16T17:18:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T17:59:04.000Z", "max_issues_repo_path": "source/C1.tex", "max_issues_repo_name": "AnMnv/eBook", "max_issues_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/C1.tex", "max_forks_repo_name": "AnMnv/eBook", "max_forks_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.9460784314, "max_line_length": 99, "alphanum_fraction": 0.6211978084, "num_tokens": 2165, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5633418731556089}}
{"text": "\\chapter{Chromodynamics}\n\nIn this last chapter we want to give a quick introduction to chromodynamics or the theory of strong interaction. This is really just a small overview of how the theory is built and how we can apply our Hamilton formalism. \nLet's start with a fundamental observation:\n\n\\begin{itemize}\n\\item \\textbf{Electrodynamics:} \\\\\nIf we want to seperate an electron and a positron, we have to invest a finite energy.\n\\item \\textbf{Chromodynamics:} \\\\\nIf we want to seperate a quark and an antiquark, we would need an infinite amount of energy. The field lines form a tube and get closer and closer the more we seperate the particles. The field energy and the fluctuations raise, so that we create new quarks which bind again before we can seperate the old ones. This process is called \"color confinement\".\n\\end{itemize}\n\nSo the physical properties of these two interactions are very different but the mathematical description is quite similar, as we will see.\nOne often starts with the postulation of gauge invariance to introduce the theory of chromodynamics. We want to show a different approach with the same result: a matrix-generalization of electrodynamics. \nIn the end, we want to find a Lagrangian again and apply the Hamiltonian method. \\\\\n\nLet us introduce matrices $\\hat{T}^a$, where $a = 1, \\dots, n$ with the following properties:\n\\begin{enumerate}\n\\item $[ \\hat{T}^a , \\hat{T}^b ] = t^{abc} \\hat{T}^c$ \\ with real-valued coefficients $t^{abc}$.\n\\item $\\text{tr}(\\hat{T}^a \\hat{T}^b) = - 2 \\delta^{ab}$.\n\\item $( \\hat{T}^a )^\\dag = - \\hat{T}^a$.\n\\end{enumerate} \n\nFor $n=3$, we see that the following $3 \\times 3$ matrices satisfy the required properties:\n\\begin{align}\n\\hat{T}^1 = \n\\begin{pmatrix}\n0 & 0 & 0 \\\\\n0 & 0 & - 1 \\\\\n0 & 1 & 0 \\\\  \n\\end{pmatrix}, \\ \\ \\\n\\hat{T}^2 = \n\\begin{pmatrix}\n0 & 0 & 1 \\\\\n0 & 0 & 0 \\\\\n- 1 & 0 & 0 \\\\  \n\\end{pmatrix}, \\ \\ \\\n\\hat{T}^3 = \n\\begin{pmatrix}\n0 & - 1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 0 \\\\  \n\\end{pmatrix}.\n\\end{align}\n\nIn group theory, this is often referred to the three-dimensional representation of SU(2). \\footnote{Note that one can define $i \\hat{T}^a \\equiv \\hat{\\tilde{T}}^a$ and get the standard property of hermitian matrices: $( \\hat{\\tilde{T}}^a )^\\dag = - \\hat{\\tilde{T}}^a$.} \\\\\n \nIn nature, we observe the case $n=8$ which corresponds to the group SU(3). It turns out that this group describes strong interaction very well. The number 3 can be referred to the 3 colors of chromodynamics. \\\\\n\nThe following developement of the theory doesn't depend on the number $n$ of matrices. For different $n$, we can get a different description. Just remember that $n=8$ is the case of strong interaction (chromodynamics).\n\n\\pagebreak\n\nWith the help of our matrices, we generalize the theory of electrodynamics in the following way. 
Instead of one electromagnetic potential field $A_{\\mu}$, we take $n$ different electromagnetic potentials $A_{\\mu}^a$:\n\\begin{align*}\n\\arraycolsep=10pt\\def\\arraystretch{1.6}\n\\begin{array}{ccccc}\n &  & Electrodynamics & & Chromodynamics \\\\\nq_i(t) & \\longrightarrow & A_{\\mu}(\\bar{x},t) & \\longrightarrow & A_{\\mu}^a(\\bar{x},t) \\\\\ni & \\longrightarrow & \\{ \\bar{x},\\mu \\} & \\longrightarrow & \\{ \\bar{x},a,\\mu \\}\n\\end{array}\n\\end{align*}\nand define the \"gluon field\" $\\hat{A}_{\\mu}(\\bar{x},t)$:\n\\begin{align}\n\\hat{A}_{\\mu}(\\bar{x},t) = \\sum_{a=1}^n \\ A_{\\mu}^a(\\bar{x},t) \\hat{T}^a.\n\\end{align}\n\nA simple generalization would be:\n\n\\begin{itemize}\n\\item \\textbf{Electrodynamics:} \n\\begin{align}\nL_{el}[A_{\\mu}(\\bar{x}), \\dot{A}_{\\mu}(\\bar{x})] = - \\frac{1}{4} \\displaystyle\\int d^3 x \\ F_{\\mu \\nu}(\\bar{x}) F^{\\mu \\nu}(\\bar{x}) \n\\end{align}\nwith the electromagnetic field tensor $F_{\\mu \\nu} = \\partial_{\\mu} A_{\\nu} - \\partial_{\\nu} A_{\\mu}$.\n\\item \\textbf{Chromodynamics:} \n\\begin{align}\nL_{ch}[A_{\\mu}^a(\\bar{x}), \\dot{A}_{\\mu}^a(\\bar{x})] = \\frac{1}{8} \\displaystyle\\int d^3 x \\ \\text{tr}\\left(\\hat{F}_{\\mu \\nu}(\\bar{x}) \\hat{F}^{\\mu \\nu}(\\bar{x}) \\right) \n\\end{align}\nwith the \"gluon field strength tensor\" $\\hat{F}_{\\mu \\nu} = \\partial_{\\mu} \\hat{A}_{\\nu} - \\partial_{\\nu} \\hat{A}_{\\mu}$.\n\\end{itemize}\n\nWhen we write it in the form\n\\begin{align}\n\\hat{F}_{\\mu \\nu}(\\bar{x}) = \\sum_{a=1}^n F_{\\mu \\nu}^a(\\bar{x}) \\hat{T}^a ,\n\\end{align}\nwe get for the Lagrangian of chromodynamics:\n\\begin{align}\nL_{ch} = - \\frac{1}{4} \\displaystyle\\int d^3 x \\ \\sum_{a=1}^n \\left( F_{\\mu \\nu}^a F^{a, \\mu \\nu} \\right) .\n\\end{align}\n\nBut such a simple generalization doesn't lead to chromodynamics. We have to add another term and take\n\\begin{align}\n\\hat{F}_{\\mu \\nu} &= \\partial_{\\mu} \\hat{A}_{\\nu} - \\partial_{\\nu} \\hat{A}_{\\mu} + g [ \\hat{A}_{\\mu},\\hat{A}_{\\nu} ] \\\\\nF_{\\mu \\nu}^a &= \\partial_{\\mu} A_{\\nu}^a - \\partial_{\\nu} A_{\\mu}^a + g t^{abc} A_{\\mu}^b A_{\\nu}^c\n\\end{align}\nas the gluon field strength tensor, analogous to the electromagnetic field tensor. \\\\\nThis looks very similar to the curvature tensor of differential geometry. In fact, this extra term conserves the antisymmetry and vanishes in the case of electrodynamics ($n=1$). \\\\\nThe equations of motion are now non-linear and the principle of superposition doesn't hold anymore (gluon fields interact with each other).\n\n\\pagebreak\n\nThe Hamiltonian procedure is the same, just a bit more complicated. We define\n\\begin{align}\nA_{\\mu}^a(\\bar{x}) \\ \\ \\ \\longrightarrow \\ \\ \\ \\pi_{\\mu}^a(\\bar{x}) = \\frac{\\delta L}{\\delta \\dot{A}^{a, \\mu}(\\bar{x})}.\n\\end{align}\nThen we build the Hamiltonian and again get a primary constraint.\nIt turns out that the primary constraint has the same form as in electrodynamics:\n\\begin{align}\n\\phi_1 = \\pi_0^a(\\bar{x}) = 0.\n\\end{align}\n\nThe secondary constraint now reads:\n\\begin{align}\n\\phi_2 = \\nabla \\cdot \\bar{\\pi}^a + g t^{abc} \\left( \\bar{A}^b \\bar{\\pi}^c \\right) = 0.\n\\end{align}\n\nThe procedure ends here and both constraints turn out to be first-class.\nThe last term can be interpreted as the appearance of a charge, in analogy to electrodynamics. If this term isn't zero, then we have interaction between different gluons. \\\\\n\n\nWhich transformations do they generate? 
For a phase-space function $g$, the first-class constraints generate\n\\begin{align}\n\\delta g = \\varepsilon_a \\{ g,\\phi_a \\}\n\\end{align}\n\nWe generalize this to arbitrary matrices $\\hat{\\omega}(\\bar{x})$ by letting $\\omega^a(\\bar{x})$ be arbitrary functions:\n\\begin{align}\n\\hat{\\omega}(\\bar{x}) = \\sum_{a=1}^n \\ \\omega^a(\\bar{x}) \\hat{T}^a.\n\\end{align}\n\nAfter calculating the Poisson bracket with the constraints, we see that the space part transforms as\n\\begin{align}\n\\hat{\\bar{A}} \\ \\ &\\longrightarrow \\ \\ \\hat{\\bar{A}}' = e^{- \\hat{\\omega}} \\hat{\\bar{A}} e^{\\hat{\\omega}} + e^{- \\hat{\\omega}} \\nabla e^{\\hat{\\omega}} \\\\\n\\hat{\\bar{E}} \\ \\ &\\longrightarrow \\ \\ \\hat{\\bar{E}}' = e^{- \\hat{\\omega}} \\hat{\\bar{E}} e^{\\hat{\\omega}} \n\\end{align}\nand $\\hat{\\omega}^\\dag = - \\hat{\\omega}$ is anti-hermitian because of the properties of $\\hat{T}^a$. \\\\\nThis is quite similar to electrodynamics, where we had:\n\\begin{align}\n\\bar{A}(\\bar{x}) \\ \\ \\longrightarrow \\ \\ \\bar{A}'(\\bar{x}) = \\bar{A}(\\bar{x}) + \\nabla \\omega(\\bar{x}) .\n\\end{align}\n\nOne realizes that the field strength itself cannot be a physical quantity anymore, but the trace\n\\begin{align}\n\\text{tr} ( (\\hat{\\bar{E}}')^2 ) = \\text{tr} ( e^{- \\hat{\\omega}} \\hat{\\bar{E}} e^{\\hat{\\omega}} e^{- \\hat{\\omega}} \\hat{\\bar{E}} e^{\\hat{\\omega}} ) = \\text{tr} ( \\hat{\\bar{E}}^2 )\n\\end{align}\nis gauge invariant, because the factors $e^{\\pm\\hat{\\omega}}$ cancel cyclically under the trace. \\\\\n\nOur small overview ends with this short introduction. It is left to say that the theory turns out to be very complicated. Because of the non-linearity, gluons can interact with each other and create new gluons. Curious readers are advised to get familiar with Yang--Mills theory.", "meta": {"hexsha": "b035553cd37b83b9581d5fbda3d0fb6351f93736", "size": 7623, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "07_chromodynamics.tex", "max_stars_repo_name": "Spektralzerleger/Hamilton-Systems", "max_stars_repo_head_hexsha": "53ba6a624bda7a6e03acdecbd48d43f79e221823", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-11T22:55:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-11T22:55:50.000Z", "max_issues_repo_path": "07_chromodynamics.tex", "max_issues_repo_name": "Spektralzerleger/Hamilton-Systems", "max_issues_repo_head_hexsha": "53ba6a624bda7a6e03acdecbd48d43f79e221823", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_chromodynamics.tex", "max_forks_repo_name": "Spektralzerleger/Hamilton-Systems", "max_forks_repo_head_hexsha": "53ba6a624bda7a6e03acdecbd48d43f79e221823", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.1610738255, "max_line_length": 354, "alphanum_fraction": 0.6784730421, "num_tokens": 2499, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321843145405, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5633418716783533}}
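Addendum to the record above (not part of the original text): the final invariance statement is easy to check numerically, since the conjugating factors cancel cyclically under the trace. A minimal sketch with NumPy/SciPy, reusing the $n=3$ generators written out earlier:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\n# The n = 3 generators T^a written out earlier in this record.\nT = [np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]]),\n     np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]]),\n     np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])]\n\nrng = np.random.default_rng(0)\nomega = sum(c * t for c, t in zip(rng.normal(size=3), T))  # anti-hermitian\nE = sum(c * t for c, t in zip(rng.normal(size=3), T))      # \"field\" matrix\n\nE_prime = expm(-omega) @ E @ expm(omega)   # gauge-transformed field\n# tr(E'^2) = tr(E^2): only the trace is gauge invariant, not E itself.\nassert np.isclose(np.trace(E_prime @ E_prime), np.trace(E @ E))\n\\end{verbatim}\n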
{"text": "\\section{Multi-armed Bandits}\n\\subsection{Exercise 2.1}\n\\subsubsection*{Q}\nIn $\\varepsilon$-greedy action selection, for the case of two actions and $\\varepsilon = 0.5$, what is the probability that the greedy action is selected?\n\n\\subsubsection*{A}\n0.75.\n\n\n\\subsection{Exercise 2.2: Bandit example}\n\\subsubsection*{Q}\nConsider a $k$-armed bandit problem with $k = 4$ actions, denoted 1, 2, 3, and 4. Consider applying to this problem a bandit algorithm using $\\varepsilon$-greedy action selection, sample-average action-value estimates, and initial estimates of $Q_1(a) = 0$, for all $a$. Suppose the initial sequence of actions and rewards is $A_1 = 1$, $R_1 =1$, $A_2 =2$, $R_2 =1$, $A_3 =2$, $R_3 =2$, $A_4 =2$, $R_4 =2$, $A_5 =3$, $R_5 =0$. On some of these time steps the $\\varepsilon$ case may have occurred, causing an action to be selected at random. On which time steps did this definitely occur? On which time steps could this possibly have occurred?\n\n\\subsubsection*{A}\n$A_2$ and $A_5$ were definitely exploratory. Any of the others \\emph{could} have been exploratory.\n\n\n\\subsection{Exercise 2.3}\n\\subsubsection*{Q}\nIn the comparison shown in Figure 2.2, which method will perform best in the long run in terms of cumulative reward and probability of selecting the best action? How much better will it be? Express your answer quantitatively.\n\n\\subsubsection*{A}\nThe $\\varepsilon = 0.01$ will perform better because in both cases as $t \\to \\infty$ we have $Q_t \\to q_*$. The total reward and probability of choosing the optimal action will therefore be 10 times larger in this case than for $\\varepsilon = 0.1$.\n\n\n\\subsection{Exercise 2.4}\n\\label{ex:2.4}\n\\subsubsection*{Q}\nIf the step-size parameters, $\\alpha_n$, are not constant, then the estimate $Q_n$ is a weighted average of previously received rewards with a weighting different from that given by (2.6). What is the weighting on each prior reward for the general case, analogous to (2.6), in terms of the sequence of step-size parameters?\n\n\\subsubsection*{A}\nLet $\\alpha_0 = 1$, then \n\\begin{equation}\n    Q_{n + 1} = \\left(\\prod_{i=1}^n (1 - \\alpha_i) \\right) Q_1 + \\sum_{i = 1}^{n}  \\alpha_{i} R_{i} \\prod_{k = i + 1}^n\n(1 - \\alpha_k).\n\\end{equation}\nWhere $\\prod_{i=x}^y f(i) \\doteq 1$ if $x > y$.\n\n\\subsection{Exercise 2.5 (programming)}\n\\subsubsection*{Q}\nDesign and conduct an experiment to demonstrate the difficulties that sample-average methods have for non-stationary problems. Use a modified version of the 10-armed testbed in which all the $q_*(a)$ start out equal and then take independent random walks (say by adding a normally distributed increment with mean zero and standard deviation 0.01 to all the $q_*(a)$ on each step). Prepare plots like Figure 2.2 for an action-value method using sample averages, incrementally computed, and another action-value method using a constant step-size parameter, $\\alpha$ = 0.1. Use $\\varepsilon$ = 0.1 and longer runs, say of 10,000 steps.\n\n\\subsubsection*{A}\n\\ProgrammingExercise\n\n\\includegraphics[width=\\textwidth]{\\ProjectDir/data/exercise_output/ex_2_5/learning_curve.png}\n\n\n\\subsection{Exercise 2.6: Mysterious Spikes}\n\\subsubsection*{Q}\nThe results shown in Figure 2.3 should be quite reliable because they are averages over 2000 individual, randomly chosen 10-armed bandit tasks. Why, then, are there oscillations and spikes in the early part of the curve for the optimistic method? 
In other words, what might make this method perform particularly better or worse, on average, on particular early steps?\n\n\\subsubsection*{A}\nAt some point after step 10, the agent will find the optimal action. It will then choose this action greedily. The small step-size parameter (small relative to the initialisation value of 5) means that the estimate of the optimal value will converge slowly towards its true value.\\\\\n\nIt is likely that this true value is less than 5. This means that, due to the small step size, one of the sub-optimal actions will still have a value close to 5. Thus, at some point, the agent begins to act sub-optimally again.\n\n\\subsection{Exercise 2.7: Unbiased Constant-Step-Size Trick}\n\\subsubsection*{Q}\nIn most of this chapter we have used sample averages to estimate action values because sample averages do not produce the initial bias that constant step sizes do (see the analysis in (2.6)). However, sample averages are not a completely satisfactory solution because they may perform poorly on non-stationary problems. Is it possible to avoid the bias of constant step sizes while retaining their advantages on non-stationary problems? One way is to use a step size of\n\\begin{equation}\n    \\beta_t \\doteq \\alpha / \\bar{o}_t,\n\\end{equation}\nwhere $\\alpha > 0$ is a conventional constant step size and $\\bar{o}_t$ is a trace of one that starts at 0:\n\\begin{equation}\n    \\bar{o}_{t+1} = \\bar{o}_t + \\alpha (1 - \\bar{o}_t)\n\\end{equation}\nfor $t \\geq 1$ and with $\\bar{o}_1 \\doteq \\alpha$.\\\\\n\nCarry out an analysis like that in (2.6) to show that $\\beta_t$ is an exponential recency-weighted average \\emph{without initial bias}. \n\n\\subsubsection*{A}\nConsider the answer to \\hyperref[ex:2.4]{Exercise 2.4}. There is no dependence of $Q_k$ on $Q_1$ for $k > 1$ since $\\beta_1 = 1$. It remains to show that the weights in the remaining sum decrease as we look further into the past, that is, that\n\\begin{equation}\n    w_i = \\beta_i \\prod_{k = i + 1}^{n} (1 - \\beta_k)\n\\end{equation}\nincreases with $i$ for fixed $n$. For this, observe that\n\\begin{equation}\n    \\frac{w_{i+1}}{w_i} = \\frac{\\beta_{i+1}}{\\beta_i(1 - \\beta_{i + 1})} = \\frac{1}{1 - \\alpha} > 1\n\\end{equation} \nwhere we have assumed $\\alpha < 1$. If $\\alpha = 1$ then $\\beta_t = 1 \\,\\, \\forall \\, t$.\n\n\\subsection{Exercise 2.8: UCB Spikes}\n\\subsubsection*{Q}\nIn Figure 2.4 the UCB algorithm shows a distinct spike in performance on the 11th step. Why is this? Note that for your answer to be fully satisfactory it must explain both why the reward increases on the 11th step and why it decreases on the subsequent steps. Hint: if $c = 1$, then the spike is less prominent.\n\n\\subsubsection*{A}\nIn the first 10 steps the agent cycles through all of the actions because when $N_t(a) = 0$ then $a$ is considered maximal. On the 11th step the agent will most often then choose greedily. The agent will continue to choose greedily until the uncertainty term $c\\sqrt{\\mathrm{ln}(t) / N_t(a)}$ of one of the other actions grows enough for its upper confidence bound to overtake the greedy action's, in which case the agent begins to explore again, hence reducing rewards.\\\\\n\nNote that, in the long run, $N_t(a) = O(t)$ for the greedy action while $\\mathrm{ln}(t) / t \\to 0$, so the exploration bonus of the frequently chosen action vanishes. 
So this agent is `asymptotically greedy'.\n\n\n\\subsection{Exercise 2.9}\n\\subsubsection*{Q}\nShow that in the case of two actions, the soft-max distribution is the same as that given by the logistic, or sigmoid, function often used in statistics and artificial neural networks.\n\n\\subsubsection*{A}\nLet the two actions be denoted by 0 and 1. Now\n\\begin{equation}\n    \\P{}(A_t = 1) = \\frac{e^{H_t(1)}}{e^{H_t(1)} + e^{H_t(0)}} = \\frac{1}{1 + e^{-x}}, \n\\end{equation}\nwhere $x = H_t(1) - H_t(0)$ is the relative preference of 1 over 0.\n\n\\subsection{Exercise 2.10}\n\\subsubsection*{Q}\nSuppose you face a 2-armed bandit task whose true action values change randomly from time step to time step. Specifically, suppose that, for any time step, the true values of actions 1 and 2 are respectively 0.1 and 0.2 with probability 0.5 (case A), and 0.9 and 0.8 with probability 0.5 (case B). If you are not able to tell which case you face at any step, what is the best expectation of success you can achieve and how should you behave to achieve it? Now suppose that on each step you are told whether you are facing case A or case B (although you still don\u2019t know the true action values). This is an associative search task. What is the best expectation of success you can achieve in this task, and how should you behave to achieve it?\n\n\\subsubsection*{A}\nI assume the rewards are stationary.\\\\\n\nOne should choose the action with the highest expected reward. In the first setting, actions 1 and 2 both have expected reward 0.5 ($0.5 \\times 0.1 + 0.5 \\times 0.9$ and $0.5 \\times 0.2 + 0.5 \\times 0.8$ respectively), so it doesn't matter which you pick.\\\\\n\nIn the second setting one should run a normal bandit method separately on each case. The expected reward from identifying the optimal actions in each case is $0.5 \\times 0.2 + 0.5 \\times 0.9 = 0.55$.\n\n\\subsection{Exercise 2.11 (programming)}\n\\subsubsection*{Q}\nMake a figure analogous to Figure 2.6 for the non-stationary case outlined in Exercise 2.5. Include the constant-step-size $\\varepsilon$-greedy algorithm with $\\alpha=0.1$. 
Use runs of 200,000 steps and, as a performance measure for each algorithm and parameter setting, use the average reward over the last 100,000 steps.\n\n\\subsubsection*{A}\n\\ProgrammingExercise\n\n\\includegraphics[width=\\textwidth]{\\ProjectDir/data/exercise_output/ex_2_11/action_values.png}\n\n\\includegraphics[width=\\textwidth]{\\ProjectDir/data/exercise_output/ex_2_11/parameter_study.png}\n\n", "meta": {"hexsha": "ecf264092108c31ff24c9a064dfd4e2853cd8203", "size": 8868, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/chapters/chapter2/chapter2_content.tex", "max_stars_repo_name": "ElliotMunro200/reinforcement_learning_an_introduction", "max_stars_repo_head_hexsha": "c4fccb46a4bb00955549be3505144ec49f0132e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 234, "max_stars_repo_stars_event_min_datetime": "2018-09-01T00:26:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:55:50.000Z", "max_issues_repo_path": "exercises/chapters/chapter2/chapter2_content.tex", "max_issues_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_issues_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-11-29T21:04:36.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-13T17:11:50.000Z", "max_forks_repo_path": "exercises/chapters/chapter2/chapter2_content.tex", "max_forks_repo_name": "15779235038/reinforcement_learning_an_introduction", "max_forks_repo_head_hexsha": "a0ac9e5da6eaeae14d297a560c499d1a6e579c2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 63, "max_forks_repo_forks_event_min_datetime": "2018-07-31T04:53:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T04:03:43.000Z", "avg_line_length": 70.380952381, "max_line_length": 741, "alphanum_fraction": 0.747631935, "num_tokens": 2503, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.8056321796478255, "lm_q1q2_score": 0.5633418532675735}}
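Addendum (not part of the original answer set): Exercises 2.5 and 2.11 are programming exercises whose outputs appear above only as images, so here is a minimal, self-contained sketch of the non-stationary testbed they describe. The function and variable names are mine, not those of the repository's actual solution code:\n\\begin{verbatim}\nimport numpy as np\n\ndef run_bandit(alpha=None, k=10, steps=10000, eps=0.1, seed=0):\n    # eps-greedy on the drifting 10-armed testbed of Exercise 2.5.\n    # alpha=None -> sample averages; otherwise a constant step size.\n    rng = np.random.default_rng(seed)\n    q_true = np.zeros(k)              # all q*(a) start out equal\n    Q = np.zeros(k)\n    N = np.zeros(k)\n    rewards = np.empty(steps)\n    for t in range(steps):\n        a = rng.integers(k) if rng.random() < eps else int(np.argmax(Q))\n        r = rng.normal(q_true[a], 1.0)\n        N[a] += 1\n        step = 1.0 / N[a] if alpha is None else alpha\n        Q[a] += step * (r - Q[a])     # incremental update\n        rewards[t] = r\n        q_true += rng.normal(0.0, 0.01, size=k)  # random-walk drift\n    return rewards\n\n# The constant step size tracks the drifting values far better:\nprint(run_bandit().mean(), run_bandit(alpha=0.1).mean())\n\\end{verbatim}\n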
{"text": "\\section{Output}\n\\lstinputlisting[style=Plain]{../src/lab_01_output.txt}\n\n\\newpage\n% \\begin{lstlisting}[style=Plain]\n\\begin{verbatim}\n\\section{Basics of \\LaTeX{}}\n\n\\subsection{Simplifying Fractions}\nConsider the function\n$$\nf(x) = \\frac{x^2 - 1}{x + 1}.\n$$\nTo simplify this funciton we can factor the numerator and cancel like terms\n\\begin{align*}\n    f(x)\n       & = \\frac{x^2 - 1}{x + 1}\n    \\\\ & = \\frac{(x - 1) (x + 1)}{x + 1}\n    \\\\ & = x - 1.\n\\end{align*}\n\n\\subsection{Matrix}\nA general $3 \\times 3$ matrix $A$ has the form\n$$\nA =\n\\begin{bmatrix}\n    1 & 2 & 3 \\\\\n    4 & 5 & 6 \\\\\n    7 & 8 & 9 \\\\\n\\end{bmatrix}.\n$$\n\n\\subsection{The Millennium Prize Problems}\n\\emph{The Millennium Prize Problems} are seven problems in mathematics that were stated by\nthe \\textbf{Clay Mathematics Institute} on May 24, 2000. The problems are\n\\begin{enumerate}[(1)]\n  \\item Birch and Swinnerton-Dyer conjecture,\n  \\item Hodge conjecture,\n  \\item Navier\u2013Stokes existence and smoothness,\n  \\item P versus NP problem,\n  \\item Poincar\\'{e} conjecture,\n  \\item Riemann hypothesis,\n  \\item Yang\u2013Mills existence and mass gap.\n\\end{enumerate}\n\\textsc{The Riemann zeta function} is defined for complex $s$ with real part greater than $1$\nby the absolutely convergent infinite series\n$$\n\\zeta(s) = \\sum_{n=1}^{\\infty} \\frac{1}{n^s} = \\frac{1}{1^s} + \\frac{1}{2^s} + \\frac{1}{3^s} + \\cdots\n$$\nThe practical uses of the Riemann hypothesis include many propositions known true under the\nRiemann hypothesis, and some that can be shown to be equivalent to the Riemann hypothesis:\n\\begin{itemize}\n  \\item Distribution of prime numbers,\n  \\item Growth of arithemtic functions,\n  \\item Large prime gap conjecture,\n  \\item Criteria equivalent to the Riemann hypothesis.\n\\end{itemize}\n\\end{verbatim}\n% \\end{lstlisting}\n", "meta": {"hexsha": "7c205fbcd302e39ea6ed6f9d809991cc173029df", "size": 1787, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "courses/template/MATH3341/Math.3341.Lab.01/Math.3341.Lab.01.Report/LaTeX/body.tex", "max_stars_repo_name": "butlerm0405/math3341", "max_stars_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "courses/template/MATH3341/Math.3341.Lab.01/Math.3341.Lab.01.Report/LaTeX/body.tex", "max_issues_repo_name": "butlerm0405/math3341", "max_issues_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "courses/template/MATH3341/Math.3341.Lab.01/Math.3341.Lab.01.Report/LaTeX/body.tex", "max_forks_repo_name": "butlerm0405/math3341", "max_forks_repo_head_hexsha": "524d4e23cd8fab4ab8368df8b7e6b4442f8436f1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.7833333333, "max_line_length": 101, "alphanum_fraction": 0.6939003917, "num_tokens": 580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7520125848754472, "lm_q1q2_score": 0.5633230198563665}}
{"text": "%----------------------------------------------------------\r\n%\r\n\r\n\\documentclass[10pt,letterpaper,oneside,notitlepage]{article}\r\n%\\documentclass{report}%\r\n\\usepackage{algorithm}\r\n\\usepackage{algpseudocode}\r\n\\usepackage{enumitem}\r\n\\usepackage{nomencl}\r\n\\usepackage{amsmath}%\r\n%\\usepackage{amsfonts}%\r\n%\\usepackage{amssymb}%\r\n%\\usepackage{graphicx}\r\n%----------------------------------------------------------\r\n\\makenomenclature\r\n%\\theoremstyle{plain}\r\n%\\newtheorem{acknowledgement}{Acknowledgement}\r\n%\\newtheorem{definition}{Definition}\r\n%\\newtheorem{remark}{Remark}\r\n%\\numberwithin{equation}{section}\r\n%-----------------------------------------------------------\r\n\\begin{document}\r\n\\title{Interpolation of DCMs}\r\n\\author{Bonnie Jonkman}\r\n\\maketitle\r\n\r\n\r\n\\section{Logarithmic maps for DCMs}\r\n\r\nFor any direction cosine matrix (DCM), $\\Lambda$, \r\nthe logarithmic map for the matrix is a skew-symmetric matrix, $\\lambda$:\r\n\r\n\r\n\\begin{equation}\r\n\\label{EqLog}\r\n\\lambda \r\n= \\log( \\Lambda )\r\n= \\begin{bmatrix}\r\n\t\t0          &  \\lambda_3 & -\\lambda_2 \\\\\r\n\t\t-\\lambda_3 &  0         &  \\lambda_1 \\\\\t\r\n\t\t \\lambda_2 & -\\lambda_1 &  0          \r\n\t\\end{bmatrix}\r\n\\end{equation}\r\n\r\n\\section{Matrix exponentials}\r\n\r\nThe angle of rotation for the skew-symmetric matrix, $\\lambda$ is\r\n\\begin{equation}\r\n\\label{EqRotationAng}\r\n\\theta = \\left\\|\\lambda\\right\\| = \\sqrt{{\\lambda_1}^2+{\\lambda_2}^2+{\\lambda_3}^2} \r\n\\end{equation}\r\n\r\nThe matrix exponential is \r\n\\begin{equation}\r\n\\label{EqExp}\r\n \\Lambda = \\exp(\\lambda) =\r\n \\left\\{ \r\n \\begin{matrix}\r\n\tI                                                                              &  \\theta = 0 \\\\\r\n\tI + \\frac{\\sin\\theta}{\\theta}\\lambda + \\frac{1-\\cos\\theta}{\\theta^2}\\lambda^2  &  \\theta > 0 \\\\\r\n \\end{matrix} \r\n \\right.\r\n\\end{equation}\r\n\r\n\r\n\\section{Solving for $\\lambda$}\r\n\r\nIf the logarithmic map and matrix exponential are truly inverses, we need \r\n\\begin{equation}\r\n\\exp(\\log(\\Lambda)) = \\Lambda. 
\r\n\\end{equation}\r\nUsing the expression for $\\lambda$ from Equation \\ref{EqLog}, we get\r\n\\begin{equation}\r\n\\label{EqExpMatrix}\r\n\t\\exp\\left(\r\n \\begin{bmatrix}\r\n\t\t0          &  \\lambda_3 & -\\lambda_2 \\\\\r\n\t\t-\\lambda_3 &  0         &  \\lambda_1 \\\\\t\r\n\t\t \\lambda_2 & -\\lambda_1 &  0          \r\n\t\\end{bmatrix}\r\n\\right)\t= \\Lambda =\r\n \\begin{bmatrix}\r\n  \\Lambda_{11} & \\Lambda_{12} & \\Lambda_{13} \\\\\r\n  \\Lambda_{21} & \\Lambda_{22} & \\Lambda_{23} \\\\\r\n  \\Lambda_{31} & \\Lambda_{32} & \\Lambda_{33} \\\\\r\n \\end{bmatrix}\r\n\\end{equation}\r\n\r\nDoing a little algebra for $\\theta > 0$, Equation \\ref{EqExpMatrix} becomes\r\n\\begin{equation}\r\n\\label{EqMatrixAlgebra}\r\n\\Lambda =\r\n \\begin{bmatrix}\r\n  1-\\frac{1-\\cos\\theta}{\\theta^2}\\left( \\lambda_3^2 + \\lambda_2^2\\right) \r\n&   \\frac{\\sin\\theta}{\\theta}\\lambda_3+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_1\\lambda_2  \r\n&  -\\frac{\\sin\\theta}{\\theta}\\lambda_2+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_1\\lambda_3 \\\\\r\n   -\\frac{\\sin\\theta}{\\theta}\\lambda_3+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_1\\lambda_2\r\n&  1-\\frac{1-\\cos\\theta}{\\theta^2}\\left( \\lambda_3^2 + \\lambda_1^2\\right) \r\n&  \\frac{\\sin\\theta}{\\theta}\\lambda_1+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_2\\lambda_3  \\\\\r\n   \\frac{\\sin\\theta}{\\theta}\\lambda_2+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_1\\lambda_3 \r\n& -\\frac{\\sin\\theta}{\\theta}\\lambda_1+\\frac{1-\\cos\\theta}{\\theta^2}\\lambda_2\\lambda_3 \r\n&  1-\\frac{1-\\cos\\theta}{\\theta^2}\\left( \\lambda_2^2 + \\lambda_1^2\\right)                    \\\\\r\n \\end{bmatrix}\r\n\\end{equation}\r\nIt follows that the trace is \r\n\\begin{eqnarray*}\r\n\\mathrm{Tr}(\\Lambda) &=& 3 - 2\\frac{1-\\cos\\theta}{\\theta^2}(\\lambda_1^2 + \\lambda_2^2 + \\lambda_3^2) \\\\\r\n               &=& 3 - 2\\left(1-\\cos\\theta\\right) \\\\\r\n               &=& 1 + 2\\cos\\theta\r\n\\end{eqnarray*} or\r\n\\begin{equation}\r\n\\theta= \\begin{matrix} \\cos^{-1}\\left(\\frac{1}{2}\\left(\\mathrm{Tr}(\\Lambda)-1\\right)\\right) & \\theta \\in \\left[0,\\pi\\right]\\end{matrix}\r\n\\end{equation}\r\nIt also follows that  \r\n\\begin{equation}\r\n\\Lambda - \\Lambda^T=\\frac{2\\sin\\theta}{\\theta}\r\n \\begin{bmatrix}\r\n  0  &   \\lambda_3  &  -\\lambda_2 \\\\\r\n   -\\lambda_3 &  0 &  \\lambda_1  \\\\\r\n   \\lambda_2 & -\\lambda_1 &  0 \\\\\r\n \\end{bmatrix}\r\n\\end{equation} or, when $\\sin\\theta \\neq 0$\r\n\\begin{equation}\r\n\\label{EqLambdaSinNot0}\r\n\\lambda = \\frac{\\theta}{2\\sin\\theta} \\left( \\Lambda - \\Lambda^T\\right)\r\n\\end{equation}\r\nWe need an equation that works when $\\sin\\theta$ approaches 0, that is, when $\\theta$ approaches 0 or $\\theta$ approaches $\\pi$. When $\\theta$ approaches 0, Equation \\ref{EqLambdaSinNot0} actually behaves well: \r\n\\begin{equation}\r\n\\lim_{\\theta \\to 0}\\frac{\\theta}{2\\sin\\theta} = \\frac{1}{2}\r\n\\end{equation}\r\nand since $\\theta$ is the $l_2$ norm of the individual components of $\\lambda$, it follows that they approach zero, and we get\r\n\\begin{equation}\r\n\\label{EqLambdaTheta0}\r\n\\lambda = 0\r\n\\end{equation}\r\nHowever, when $\\theta$ approaches $\\pi$, $\\Lambda - \\Lambda^T$ approaches 0, and \r\n\\begin{equation}\r\n\\lim_{\\theta \\to \\pi}\\frac{\\theta}{2\\sin\\theta} = \\infty\r\n\\end{equation}\r\nWe need a different method to find $\\lambda$. 
Going back to Equations \\ref{EqExpMatrix} and \\ref{EqMatrixAlgebra}, we can compute the following:\r\n\\begin{equation}\r\n\\Lambda_{11}+\\Lambda_{22}-\\Lambda_{33} = 1 - \\frac{2\\lambda_3^2(1-\\cos\\theta)}{\\theta^2} \r\n\\end{equation}\r\n%\\begin{equation}\r\n%\\Lambda_{11}-\\Lambda_{22}+\\Lambda{33} = 1 - \\frac{1-\\cos\\theta}{\\theta^2}\\left( 2\\lambda_2^2\\right)\r\n%\\end{equation}\r\n%\\begin{equation}\r\n%-\\Lambda_{11}+\\Lambda_{22}+\\Lambda{33} = 1 - \\frac{1-\\cos\\theta}{\\theta^2}\\left( 2\\lambda_1^2\\right)\r\n%\\end{equation}\r\nor\r\n\\begin{equation}\r\n\\label{EqLambda3}\r\n\\lambda_3 = \\pm \\theta\\sqrt{ \\frac{\\left(1 + \\Lambda_{33} - \\Lambda_{11} - \\Lambda_{22}\\right)}{2\\left(1-\\cos\\theta\\right)}   }\r\n\\end{equation}\r\nEquations for the other two components of $\\lambda$ are similar:\r\n\\begin{equation}\r\n\\label{EqLambda1}\r\n\\lambda_1 = \\pm \\theta\\sqrt{ \\frac{\\left(1 + \\Lambda_{11} - \\Lambda_{22} - \\Lambda_{33}\\right)}{2\\left(1-\\cos\\theta\\right)}   }\r\n\\end{equation}\r\n\\begin{equation}\r\n\\label{EqLambda2}\r\n\\lambda_2 = \\pm \\theta\\sqrt{ \\frac{\\left(1 + \\Lambda_{22} - \\Lambda_{11} - \\Lambda_{33}\\right)}{2\\left(1-\\cos\\theta\\right)}   }\r\n\\end{equation}\r\nEquations \\ref{EqLambda3}-\\ref{EqLambda2} give us the magnitude, not the sign, of the vector we are looking for. Here is one possibility for choosing the sign:\r\nIf $\\lambda_1 \\neq 0$, choose $\\mathrm{sign}(\\lambda_1)$ positive. \r\n\\begin{equation}\r\n\\Lambda_{12}+\\Lambda_{21} = \\frac{2\\left(1-\\cos\\theta\\right)}{\\theta^2}\\lambda_1\\lambda_2   \r\n\\end{equation}\r\nso \r\n\\begin{equation}\r\n\\mathrm{sign}(\\lambda_2) = \\mathrm{sign}(\\Lambda_{12}+\\Lambda_{21}) \r\n\\end{equation}\r\nand similarly,\r\n\\begin{equation}\r\n\\mathrm{sign}(\\lambda_3) = \\mathrm{sign}(\\Lambda_{13}+\\Lambda_{31}) \r\n\\end{equation}\r\nIf $\\lambda_1 = 0$, similar arguments can be used to choose $\\mathrm{sign}(\\lambda_2)$ positive, and \r\n\\begin{equation}\r\n\\mathrm{sign}(\\lambda_3) = \\mathrm{sign}(\\Lambda_{23}+\\Lambda_{32}) \r\n\\end{equation}\r\nAt this point, the relationships between the components of $\\lambda$ are set, so we have computed $\\pm\\lambda$. If $\\theta=\\pi$, both values are a solution, so this is good enough.\r\n\r\nIf $\\theta$ is close to $\\pi$, we will need to determine if we have the negative of the solution. 
This is required for numerical stability of the solution.\r\nIn this case, $\\Lambda-\\Lambda^T$ is not exactly zero, so we can look at the sign of the solution we would have computed if we had used Equation \\ref{EqLambdaSinNot0}:\r\n\\begin{equation}\r\n\\Lambda_{23}-\\Lambda_{32} = \\frac{2\\sin\\theta}{\\theta}\\lambda_1\r\n\\end{equation}\r\n\\begin{equation}\r\n\\Lambda_{31}-\\Lambda_{13} = \\frac{2\\sin\\theta}{\\theta}\\lambda_2\r\n\\end{equation}\r\n\\begin{equation}\r\n\\Lambda_{12}-\\Lambda_{21} = \\frac{2\\sin\\theta}{\\theta}\\lambda_3\r\n\\end{equation}\r\nSince $\\sin\\theta \\geq 0$ for $\\theta \\in \\left[0,\\pi\\right]$, the prefactor is non-negative and only the sign information matters here. For numerical reasons, we don't want to use these equations to get the magnitude of the components of $\\lambda$, but we can look at the sign of the element with largest magnitude and ensure our $\\lambda$ has the same sign.\r\n\r\n\\section{Interpolation}\r\n\\subsection{Periodicity of solutions} \r\n\r\nGiven $\\lambda_k = \\lambda \\left( 1 + \\frac{2k\\pi}{\\left\\| \\lambda \\right\\|}\\right)$ for any integer $k$, it follows that \r\n\\begin{equation}\r\n\\theta_k = \\left| 1 + \\frac{2k\\pi}{\\left\\|\\lambda\\right\\|}\\right| \\theta\r\n\\end{equation}\r\nor\r\n\\begin{equation}\r\n\\theta_k =  \\left|\\theta + 2k\\pi\\right|\r\n\\end{equation}\r\n\r\n\\begin{eqnarray*}\r\n\\Lambda_k\r\n&=&  \\exp(\\lambda_k) \\\\\r\n&=&  I + \\frac{\\sin\\theta_k}{\\theta_k}\\lambda_k + \\frac{1-\\cos\\theta_k}{\\theta_k^2}\\lambda_k^2  \\\\\r\n&=&  I + \\frac{\\sin\\left|\\theta + 2k\\pi\\right|}{\\left|\\theta + 2k\\pi\\right|}\\left( \\frac{\\theta+2k\\pi}{\\theta}\\right)\\lambda + \\frac{1-\\cos\\left|\\theta + 2k\\pi\\right|}{\\left|\\theta + 2k\\pi\\right|^2}\\left(\\frac{\\theta+2k\\pi}{\\theta}\\right)^2\\lambda^2  \\\\\r\n&=&  I + \\frac{\\sin\\left|\\theta + 2k\\pi\\right|}{\\theta} \r\n         \\frac{\\theta + 2k\\pi}{\\left|\\theta + 2k\\pi\\right|}\\lambda + \\frac{1-\\cos\\left|\\theta + 2k\\pi\\right|}{\\theta^2}\\lambda^2\\\\\r\n&=&  I + \\frac{\\sin\\theta}{\\theta} \\lambda + \\frac{1-\\cos\\theta}{\\theta^2}\\lambda^2\\\\\r\n&=& \\exp(\\lambda) \\\\\r\n&=& \\Lambda \\\\\r\n\\end{eqnarray*}\r\n\r\nThus, if $\\lambda$ is one solution to $\\log(\\Lambda)$, then so is \r\n$\\lambda_k = \\lambda \\left( 1 + \\frac{2k\\pi}{\\left\\| \\lambda \\right\\|}\\right)$ for any integer $k$.\r\n\r\n\\subsection{Finding values of $\\lambda$ for interpolation} \r\nGiven a set of $\\lambda^j$ to be interpolated, find equivalent $\\tilde{\\lambda}^j$ for integers $j=1,2,\\dots,n$:\r\nSet $\\tilde{\\lambda}^1 = \\lambda^1$.\r\nFor each $j\\in\\left[2,n\\right]$, \r\ncheck to see if $\\tilde{\\lambda}^{j-1}$ is closer (in the $l_2$-norm sense) to \r\n$\\lambda^j$ or $\\lambda^j \\left( 1 + \\frac{2\\pi}{\\left\\| \\lambda^j \\right\\|}\\right)$. \r\nIf the latter, set $\\tilde{\\lambda}^{j}=\\lambda^j \\left( 1 + \\frac{2\\pi}{\\left\\| \\lambda^j \\right\\|}\\right)$ and continue checking if we need to add more $2\\pi$ periods.\r\nOtherwise, check to see if $\\tilde{\\lambda}^{j-1}$ is closer to \r\n$\\lambda^j$ or $\\lambda^j \\left( 1 - \\frac{2\\pi}{\\left\\| \\lambda^j \\right\\|}\\right)$. 
\r\nIf the latter, set $\\tilde{\\lambda}^{j}=\\lambda^j \\left( 1 - \\frac{2\\pi}{\\left\\| \\lambda^j \\right\\|}\\right)$ and continue checking if we need to subtract more $2\\pi$ periods.\r\nOtherwise set $\\tilde{\\lambda}^{j} = \\lambda^j$.\r\n\r\n\r\nInterpolation must occur on the $\\tilde{\\lambda}^{j}$ and not the $\\lambda^j$.\r\n\r\n\\end{document}", "meta": {"hexsha": "21749c643ccc7364d7ab3dc75ade905ad34ad665", "size": 10332, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Docs/TheoryGuide/DCM_Interpolation/DCM_Interpolation.tex", "max_stars_repo_name": "rdamiani/FAST_Aqwa", "max_stars_repo_head_hexsha": "7e5150cc432e7424bbf9877cbd7570755ee0eea8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Docs/TheoryGuide/DCM_Interpolation/DCM_Interpolation.tex", "max_issues_repo_name": "rdamiani/FAST_Aqwa", "max_issues_repo_head_hexsha": "7e5150cc432e7424bbf9877cbd7570755ee0eea8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Docs/TheoryGuide/DCM_Interpolation/DCM_Interpolation.tex", "max_forks_repo_name": "rdamiani/FAST_Aqwa", "max_forks_repo_head_hexsha": "7e5150cc432e7424bbf9877cbd7570755ee0eea8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.4117647059, "max_line_length": 254, "alphanum_fraction": 0.6410181959, "num_tokens": 3597, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5633230073173676}}
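Addendum (not part of the original note): the case analysis derived above translates directly into code. Below is a compact NumPy sketch, using the note's sign convention for the skew-symmetric matrix; the function name is mine, and the $\theta \approx \pi$ sign-disambiguation against $\Lambda - \Lambda^T$ is omitted for brevity since both signs are valid at $\theta = \pi$:\r\n\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef dcm_log(L, tol=1e-6):\r\n    # lambda-vector of log(L), with the convention\r\n    # log(L) = [[0, l3, -l2], [-l3, 0, l1], [l2, -l1, 0]].\r\n    theta = np.arccos(np.clip((np.trace(L) - 1.0) / 2.0, -1.0, 1.0))\r\n    if theta < tol:                         # theta -> 0: lambda -> 0\r\n        return np.zeros(3)\r\n    if np.pi - theta > tol:                 # generic case, sin(theta) != 0\r\n        S = (theta / (2.0 * np.sin(theta))) * (L - L.T)\r\n        return np.array([S[1, 2], S[2, 0], S[0, 1]])\r\n    # theta near pi: magnitudes from the diagonal of L\r\n    # (1 + L_ii - L_jj - L_kk = 1 + 2 L_ii - tr(L)) ...\r\n    lam = theta * np.sqrt(np.maximum(1.0 + 2.0 * np.diag(L) - np.trace(L), 0.0)\r\n                          / (2.0 * (1.0 - np.cos(theta))))\r\n    # ... and signs from the symmetric off-diagonal sums\r\n    if lam[0] > tol:\r\n        if L[0, 1] + L[1, 0] < 0.0: lam[1] = -lam[1]   # sign(l1*l2)\r\n        if L[0, 2] + L[2, 0] < 0.0: lam[2] = -lam[2]   # sign(l1*l3)\r\n    elif lam[1] > tol:\r\n        if L[1, 2] + L[2, 1] < 0.0: lam[2] = -lam[2]   # sign(l2*l3)\r\n    return lam   # at theta = pi, +/-lam are both solutions\r\n\end{verbatim}\r\n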
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Seismic Wave Propagation in Two Dimensions}\n\n\\sslist{example08a.py}\nWe will now expand upon the previous chapter by introducing a vector form of\nthe wave equation. This means that the waves will have not only a scalar\nmagnitude as for the pressure wave solution, but also a direction. This type of\nscenario is apparent in wave types that exhibit compressional and transverse\nparticle motion. An example of this would be seismic waves.\n\nWave propagation in the earth can be described by the elastic wave equation\n\\begin{equation} \\label{eqn:wav} \\index{wave equation}\n\\rho \\frac{\\partial^{2}u_{i}}{\\partial t^2} - \\frac{\\partial\n\\sigma_{ij}}{\\partial x_{j}} = 0\n\\end{equation}\nwhere $\\sigma$ is the stress given by\n\\begin{equation} \\label{eqn:sigw}\n \\sigma _{ij} = \\lambda u_{k,k} \\delta_{ij} + \\mu (\nu_{i,j} + u_{j,i})\n\\end{equation}\nand $\\lambda$ and $\\mu$ represent Lame's parameters. Specifically for seismic\nwaves, $\\mu$ is the propagation materials shear modulus. \nIn a similar process to the previous chapter, we will use the acceleration\nsolution to solve this PDE. By substituting $a$ directly for\n$\\frac{\\partial^{2}u_{i}}{\\partial t^2}$ we can derive the\nacceleration solution. Using $a$ we can see that \\autoref{eqn:wav} becomes\n\\begin{equation} \\label{eqn:wava} \n\\rho a_{i} - \\frac{\\partial\n\\sigma_{ij}}{\\partial x_{j}} = 0\n\\end{equation}\nThus the problem will be solved for acceleration and then converted to \ndisplacement using the backwards difference approximation as for the acoustic\nexample in the previous chapter.\n\nConsider now the stress $\\sigma$. One can see that the stress consists of two\ndistinct terms:\n\\begin{subequations}\n\\begin{equation} \\label{eqn:sigtrace}\n\\lambda u_{k,k} \\delta_{ij}\n\\end{equation}\n\\begin{equation} \\label{eqn:sigtrans}\n\\mu (u_{i,j} + u_{j,i})\n\\end{equation}\n\\end{subequations}\nOne simply recognizes in \\autoref{eqn:sigtrace} that $u_{k,k}$ is the\ntrace of the displacement solution and that $\\delta_{ij}$ is the\nkronecker delta function with dimensions equivalent to $u$. The second term\n\\autoref{eqn:sigtrans} is the sum of $u$ with its own transpose. Putting these\nfacts together we see that the spatial differential of the stress is given by the\ngradient of $u$ and the aforementioned operations. 
This value is then submitted\nto the \\esc PDE as $X$.\n\\begin{python}\ng=grad(u); stress=lam*trace(g)*kmat+mu*(g+transpose(g))\nmypde.setValue(X=-stress) # set PDE values\n\\end{python}\nThe solution is then obtained via the usual method and the displacement is\ncalculated so that the memory variables can be updated for the next time\niteration.\n\\begin{python}\naccel = mypde.getSolution() #get PDE solution for acceleration\nu_p1=(2.*u-u_m1)+h*h*accel #calculate displacement\nu_m1=u; u=u_p1 # shift values by 1\n\\end{python}\n\nSaving the data has been handled slightly differently in this example. The VTK\nfiles generated can be quite large and take a significant amount of time to save\nto the hard disk. To avoid doing this at every iteration a test is devised which\nsaves only at specific time intervals.\n\nTo do this there are two new parameters in our script.\n\\begin{python}\n# data recording times\nrtime=0.0 # first time to record\nrtime_inc=tend/20.0 # time increment to record\n\\end{python}\nCurrently the PDE solution will be saved to file $20$ times between the start of\nthe modelling and the final time step. With these parameters set, an if\nstatement is introduced to the time loop\n\\begin{python}\nif (t >= rtime):\n\tsaveVTK(os.path.join(savepath,\"ex08a.%05d.vtu\"%n),displacement=length(u),\\\n    \tacceleration=length(accel),tensor=stress)\n    \trtime=rtime+rtime_inc #increment data save time\n\\end{python}\n\\verb!t! is the time counter. Whenever the recording time \\verb!rtime! is less\nthen \\verb!t! the solution is saved and \\verb!rtime! is incremented. This\nlimits the number of outputs and increases the speed of the solver.\n\n\\section{Multi-threading}\nThe wave equation solution can be quite demanding on CPU time. Enhancements can\nbe made by accessing multiple threads or cores on your computer. This does not\nrequire any modification to the solution script and only comes into play when\n\\esc is called from the shell. To use multiple threads \\esc is called using\nthe \\verb!-t! option with an integer argument for the number of threads\nrequired. For example\n\\begin{verbatim}\n$escript -t 4 example08a.py\n\\end{verbatim}\nwould call the script in this section and solve it using 4 threads.\n\nThe computation times on an increasing number of cores is outlined in\n\\autoref{tab:wpcores}.\n\n\\begin{table}[ht]\n\\begin{center}\n\\caption{Computation times for an increasing number of cores.}\n\\label{tab:wpcores}\n\\begin{tabular}{| c | c |}\n\\hline\nNumber of Cores & Time (s) \\\\\n\\hline\n1 & 691.0 \\\\\n2 & 400.0 \\\\\n3 & 305.0 \\\\\n4 & 328.0 \\\\\n5 & 323.0 \\\\\n6 & 292.0 \\\\\n7 & 282.0 \\\\\n8 & 445.0 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Vector source on the boundary}\n\\sslist{example08b.py}\nFor this particular example, we will introduce the source by applying a\ndisplacement to the boundary during the initial time steps. The source will\nagain be\na radially propagating wave but due to the vector nature of the PDE used, a\ndirection will need to be applied to the source.\n\nThe first step is to choose an amplitude and create the source as in the\nprevious chapter. 
\n\\begin{python}\nU0=0.01 # amplitude of point source\n# will introduce a spherical source at middle left of bottom face\nxc=[ndx/2,0]\n\n############################################FIRST TIME STEPS AND SOURCE\n# define small radius around point xc\nsrc_length = 40; print(\"src_length = \",src_length)\n# set initial values for first two time steps with source terms\nxb=FunctionOnBoundary(domain).getX()\ny=U0*(cos(length(xb-xc)*3.1415/src_length)+1)*\\\n                whereNegative(length(xb-xc)-src_length)\nsrc_dir=numpy.array([0.,-1.]) # defines direction of point source as down\ny=y*src_dir\n\\end{python}\nwhere \\verb xc  is the source point on the boundary of the model. Note that\nbecause the source is specifically located on the boundary, we have used the\n\\verb!FunctionOnBoundary! call to ensure the nodes used to define the source\nare also located upon the boundary. These boundary nodes are passed to the\nsource as \\verb!xb!. The source direction is then defined as an $(x,y)$ array and multiplied by the \nsource function. The directional array must have a magnitude of $1$,\notherwise the amplitude of the source will become modified. For this\nexample, the source is directed in the $-y$ direction.\n\\begin{python}\nsrc_dir=numpy.array([0.,-1.]) # defines direction of point source as down\ny=y*src_dir\n\\end{python}\nThe function can then be applied as a boundary condition by setting it equal to\n$y$ in the general form.\n\\begin{python}\nmypde.setValue(y=y) #set the source as a function on the boundary\n\\end{python}\nThe final step is to set the initial conditions. Due to the fact that we are\nno longer using the source to define our initial condition to the model, we\nmust set the model state to zero for the first two time steps.\n\\begin{python}\n# initial value of displacement at point source is constant (U0=0.01)\n# for first two time steps\nu=[0.0,0.0]*wherePositive(x)\nu_m1=u\n\\end{python}\n\nIf the source is time progressive, $y$ can be updated during the\niteration stage. This is covered in the following section.\n\n\\begin{figure}[htp]\n\\centering\n\\subfigure[Example 08a at 0.025s ]{\n\\includegraphics[width=3in]{figures/ex08pw50.png}\n\\label{fig:ex08pw50}\n}\n\\subfigure[Example 08a at 0.175s ]{\n\\includegraphics[width=3in]{figures/ex08pw350.png}\n\\label{fig:ex08pw350}\n} \\\\\n\\subfigure[Example 08a at 0.325s ]{\n\\includegraphics[width=3in]{figures/ex08pw650.png}\n\\label{fig:ex08pw650}\n}\n\\subfigure[Example 08a at 0.475s ]{\n\\includegraphics[width=3in]{figures/ex08pw950.png}\n\\label{fig:ex08pw950}\n}\n\\label{fig:ex08pw}\n\\caption{Results of Example 08 at various times.}\n\\end{figure}\n\\clearpage\n\n\\section{Time variant source}\n\n\\sslist{example08b.py}\nUntil this point, all of the wave propagation examples in this cookbook have\nused impulsive sources which are smooth in space but not time. It is, however,\nadvantageous to have a time smoothed source as it can reduce the temporal\nfrequency range and thus mitigate aliasing in the solution. \n\nIt is quite \nsimple to implement a source which is smooth in time. In addition to the\noriginal source function the only extra requirement is a time function. For\nthis example the time variant source will be the derivative of a Gaussian curve\ndefined by the required dominant frequency (\\autoref{fig:tvsource}).\n\\begin{python}\n#Creating the time function of the source.\ndfeq=50 #Dominant Frequency\na = 2.0 * (np.pi * dfeq)**2.0\nt0 = 5.0 / (2.0 * np.pi * dfeq)\nsrclength = 5. 
* t0\nls = int(srclength/h)\nprint('source length',ls)\nsource=np.zeros(ls,'float') # source array\nampmax=0\nfor it in range(0,ls):\n    t = it*h\n    tt = t-t0\n    dum1 = np.exp(-a * tt * tt)\n    source[it] = -2. * a * tt * dum1\n    if (abs(source[it]) > ampmax):\n        ampmax = abs(source[it])\n    time[it]=t\n\\end{python}\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=3in]{figures/source.png}\n\\caption{Time variant source with a dominant frequency of 50Hz.}\n\\label{fig:tvsource}\n\\end{figure}\n\nWe then build the source and the first two time steps via;\n\\begin{python}\n# set initial values for first two time steps with source terms\ny=source[0]*(cos(length(x-xc)*3.1415/src_length)+1)*\\\n        whereNegative(length(x-xc)-src_length)\nsrc_dir=numpy.array([0.,-1.]) # defines direction of point source as down\ny=y*src_dir\nmypde.setValue(y=y) #set the source as a function on the boundary\n# initial value of displacement at point source is constant (U0=0.01)\n# for first two time steps\nu=[0.0,0.0]*whereNegative(x)\nu_m1=u\n\\end{python}\n\nFinally, for the length of the source, we are required to update each new\nsolution in the iterative section of the solver. This is done via;\n\\begin{python}\n# increment loop values\nt=t+h; n=n+1\nif (n < ls):\n    y=source[n]*(cos(length(x-xc)*3.1415/src_length)+1)*\\\n        whereNegative(length(x-xc)-src_length)\n    y=y*src_dir; mypde.setValue(y=y) #set the source as a function on the boundary\n\\end{python}\n\n\\section{Absorbing Boundary Conditions}\nTo mitigate the effect of the boundary on the model, absorbing boundary\nconditions can be introduced. These conditions effectively dampen the wave energy\nas they approach the boundary and thus prevent that energy from being reflected.\nThis type of approach is typically used when a model is shrunk to decrease\ncomputational requirements. In practice this applies to almost all models,\nespecially in earth sciences where the entire planet or a large enough\nportion of it cannot be modelled efficiently when considering small scale\nproblems. 
It is impractical to calculate the solution for an infinite model and thus ABCs allow\nus to create an approximate solution with little to no boundary effect on a\nmodel of solvable size.\n\nTo dampen the waves, we use the method of \\citet{Cerjan1985},\nin which the solution and the stress are multiplied by a damping function defined\non $n$ nodes of the domain adjacent to the boundary, given by;\n\\begin{equation}\n\\gamma =\\sqrt{\\frac{| -\\log( \\gamma _{b} ) |}{n^2}}\n\\end{equation}\n\\begin{equation}\ny=e^{-(\\gamma x)^2}\n\\end{equation}\nThis is applied to the bounding 20--50 points of the model using the location\nspecifiers of \\esc;\n\\begin{python}\n# Define where the boundary decay will be applied.\nbn=30.\nbleft=xstep*bn; bright=mx-(xstep*bn); bbot=my-(ystep*bn)\n# btop=ystep*bn # don't apply to force boundary!!!\n\n# locate these points in the domain\nleft=x[0]-bleft; right=x[0]-bright; bottom=x[1]-bbot\n\ntgamma=0.98   # decay value for exponential function\ndef calc_gamma(G,npts):\n    func=np.sqrt(abs(-1.*np.log(G)/(npts**2.)))\n    return func\n\ngleft  = calc_gamma(tgamma,bleft)\ngright = calc_gamma(tgamma,bleft)\ngbottom= calc_gamma(tgamma,ystep*bn)\n\nprint('gamma', gleft,gright,gbottom)\n\n# calculate decay functions\ndef abc_bfunc(gamma,loc,x,G):\n    func=exp(-1.*(gamma*abs(loc-x))**2.)\n    return func\n\nfleft=abc_bfunc(gleft,bleft,x[0],tgamma)\nfright=abc_bfunc(gright,bright,x[0],tgamma)\nfbottom=abc_bfunc(gbottom,bbot,x[1],tgamma)\n# apply these functions only where relevant\nabcleft=fleft*whereNegative(left)\nabcright=fright*wherePositive(right)\nabcbottom=fbottom*wherePositive(bottom)\n# make sure the inside of the abc is value 1\nabcleft=abcleft+whereZero(abcleft)\nabcright=abcright+whereZero(abcright)\nabcbottom=abcbottom+whereZero(abcbottom)\n# multiply the conditions together to get a smooth result\nabc=abcleft*abcright*abcbottom\n\\end{python}\nNote that the boundary conditions are not applied to the surface, as this is\neffectively a free surface where normal reflections would be experienced.\nSpecial conditions can be introduced at this surface if they are known. The\nresulting boundary damping function can be viewed in\n\\autoref{fig:abconds}.\n\n\\section{Second order Meshing}\nFor stiff problems like the wave equation it is often prudent to implement\nsecond order meshing. This creates a more accurate mesh approximation with some\nincreased processing cost. To turn second order meshing on, the \\verb!rectangle!\nfunction accepts an \\verb!order! 
keyword argument.\n\\begin{python}\ndomain=Rectangle(l0=mx,l1=my,n0=ndx, n1=ndy,order=2) # create the domain\n\\end{python}\nOther pycad functions and objects have similar keyword arguments for higher\norder meshing.\n\nNote that when implementing second order meshing, a smaller timestep is required\nthan for first order meshes, as the second order elements essentially halve the\neffective size of the mesh.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=5in]{figures/ex08babc.png}\n \\label{fig:abconds}\n \\caption{Absorbing boundary conditions for example08b.py}\n\\end{figure}\n\n\\begin{figure}[htp]\n\\centering\n\\subfigure[Example 08b at 0.03s ]{\n\\includegraphics[width=3in]{figures/ex08sw060.png}\n\\label{fig:ex08pw060}\n}\n\\subfigure[Example 08b at 0.16s ]{\n\\includegraphics[width=3in]{figures/ex08sw320.png}\n\\label{fig:ex08pw320}\n} \\\\\n\\subfigure[Example 08b at 0.33s ]{\n\\includegraphics[width=3in]{figures/ex08sw660.png}\n\\label{fig:ex08pw660}\n}\n\\subfigure[Example 08b at 0.44s ]{\n\\includegraphics[width=3in]{figures/ex08sw880.png}\n\\label{fig:ex08pw880}\n}\n\\label{fig:ex08pw}\n\\caption{Results of Example 08b at various times.}\n\\end{figure}\n\\clearpage\n\n\\section{Pycad example}\n\\sslist{example08c.py}\nTo make the problem more interesting we will now introduce an interface to the\nmiddle of the domain. In fact we will use the same domain as before, but with a\ndifferent set of material properties on either side of the interface.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5in]{figures/gmsh-example08c.png}\n\\caption{Domain geometry for example08c.py showing line tangents.}\n\\label{fig:ex08cgeo}\n\\end{center}\n\\end{figure}\n\nIt is simple enough to slightly modify the scripts of the previous sections to\naccept this domain. Multiple material parameters must now be defined and assigned\nto specific tagged areas. Again this is done via\n\\begin{python}\nlam=Scalar(0,Function(domain))\nlam.setTaggedValue(\"top\",lam1)\nlam.setTaggedValue(\"bottom\",lam2)\nmu=Scalar(0,Function(domain))\nmu.setTaggedValue(\"top\",mu1)\nmu.setTaggedValue(\"bottom\",mu2)\nrho=Scalar(0,Function(domain))\nrho.setTaggedValue(\"top\",rho1)\nrho.setTaggedValue(\"bottom\",rho2)\n\\end{python}\nDon't forget that the source boundary must also be tagged and added so it can\nbe referenced \n\\begin{python}\n# Add the subdomains and flux boundaries.\nd.addItems(PropertySet(\"top\",tblock),PropertySet(\"bottom\",bblock),\\\n                                     PropertySet(\"linetop\",l30))\n\\end{python}\nIt is now possible to solve the script as in the previous examples\n(\\autoref{fig:ex08cres}).\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=4in]{figures/ex08c2601.png}\n\\caption{Modelling results of example08c.py at 0.2601 seconds. 
Notice the\nrefraction of the wave front about the boundary between the two layers.}\n\\label{fig:ex08cres}\n\\end{figure}\n\n\n", "meta": {"hexsha": "32553c05e8738d794c439289c12363c06463e999", "size": 16572, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/cookbook/example08.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/cookbook/example08.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/cookbook/example08.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.5782312925, "max_line_length": 100, "alphanum_fraction": 0.7521723389, "num_tokens": 4570, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5633229946801447}}
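Addendum (not part of the original cookbook text): the Cerjan-style taper defined above can be sanity-checked outside \esc with plain NumPy. By construction the damping profile equals 1 at the inner edge of the absorbing strip and tgamma at its outermost node, so each time step multiplies the wavefield near the boundary by a factor just below one:
\begin{python}
import numpy as np

# Taper parameters as in example08b.py: decay value and width in nodes.
tgamma, bn = 0.98, 30
gamma = np.sqrt(abs(-np.log(tgamma) / bn**2))

dist = np.arange(bn + 1)            # node distance from the inner edge
damp = np.exp(-(gamma * dist)**2)   # 1 in the interior, tgamma at the edge

print(damp[0], damp[-1])            # -> 1.0 and ~0.98
\end{python}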
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{xcolor}\n\\usepackage[T1]{fontenc}\n\\usepackage{pagecolor}\n\\usepackage{amssymb}\n\\usepackage{lmodern}\n\\usepackage{mathtools, nccmath}\n\\usepackage{courier}\n\\usepackage[overload]{empheq}\n\\usepackage[inline, shortlabels]{enumitem}\n\\usepackage{amsmath}\n\\usepackage{mathtools} \n\\color{white}\n\\title{Solution to Number theory \\#2}\n\\author{@all.about.mathematics}\n\n\n\\pagecolor{black}\n\\begin{document}\n\\maketitle\n\\section{Problem}\n\\noindent\\fbox{%\n    \\parbox{\\textwidth}{%\n        Is there a power of 2 ($>8$) such that that its digits can be rearranged to form another power of 2? (zeros are not allowed in leading digits, e.g. 032 is not allowed)\n    }%\n}\n\\section{solution}\nWe list out some simple and obvious facts here to help us with the problem:\n\n1. Rearranging digits does not change the total number of digits.\n\n2. At most 4 consecutive powers of 2 can have the same number of digits.\n\n3. A number is equal to its digit sum modulo 9.\n\n\\noindent{Using these, we rephrase the question:}\n\n\\noindent\\fbox{%\n    \\parbox{\\textwidth}{%\n    Is there a power of 2 such that there is another power of 2 with the same number of digits and the same remainder modulo 9?\n    }%\n}\n\nObserving the remainders modulo 9 of some powers of 2, we conjecture that the remainders repeat themselves every 6 powers.\n$$\\because 64\\equiv1 \\mod 9$$\n$$\\therefore 2^{n+6}\\equiv2^n\\times 64\\equiv 2^n \\mod 9$$\nSince the remainders modulo 9 repeat themselves every 6 powers and there can only be at most 4 powers of 2 can have the same digits, the answer to the question is NO.\n\\end{document}", "meta": {"hexsha": "e0b449d14119feae04391305952b958fc8c606fe", "size": 1627, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Number theory/all.about.mathematics' questions/Number theory 2.tex", "max_stars_repo_name": "Nanu00/LaTeX", "max_stars_repo_head_hexsha": "0f08a90c4e9ef78af42797670903636059ca0df2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2020-05-29T17:22:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T18:47:05.000Z", "max_issues_repo_path": "Number theory/all.about.mathematics' questions/Number theory 2.tex", "max_issues_repo_name": "Nanu00/LaTeX", "max_issues_repo_head_hexsha": "0f08a90c4e9ef78af42797670903636059ca0df2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-06-26T07:33:59.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-11T12:14:49.000Z", "max_forks_repo_path": "Number theory/all.about.mathematics' questions/Number theory 2.tex", "max_forks_repo_name": "Shreenabh664/LaTeX", "max_forks_repo_head_hexsha": "675e03f3ec555456b9a2cc714825ec75317848c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-22T07:50:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-08T05:11:14.000Z", "avg_line_length": 33.2040816327, "max_line_length": 175, "alphanum_fraction": 0.7486170867, "num_tokens": 478, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.563322990516849}}
{"text": "\n \n \n \n \n %%%%%%%%%%%%%%%%%%%%%%\n\\section{Numerical Experiments}\n\\label{sec:NumExp}\n\nFor our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks.  In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm.  We also tabulated the average number of ``beads'', or the number intermediate models needed by the algorithm to connect two initial models.  For all of the below experiments, the reported losses and accuracies are on a restricted test set.  For more complete architecture and implementation details, see our \\href{github.com/danielfreeman11/convex-nets}{GitHub page}.\n\n\nThe results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest.  Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix \\ref{symdisc}.  Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required ``beads'' to form a low-loss connection.\n\n\n\n\n\n\n\n%\\begin{table}[ht]\n%  \\centering\n%  \\begin{tabular}{c@{\\quad}cc}\n%    & a & b \\\\\n%    1 & \\includegraphics[width=.3\\textwidth]{../Plots/normlengthquadratics}\\fixedlabel{QUADRATICSfigsA}{1a} \n%      & \\includegraphics[width=.3\\textwidth]{../Plots/numbeadsquadratics}\\fixedlabel{QUADRATICSfigsB}{1b}  \\\\ \\\\\n%    2 & \\includegraphics[width=.3\\textwidth]{../Plots/normlengthcubics}\\fixedlabel{CUBICSfigsA}{2a} \n%      & \\includegraphics[width=.3\\textwidth]{../Plots/numbeadscubics} \\fixedlabel{CUBICSfigsB}{2b} \\\\ \\\\\n%    3 & \\includegraphics[width=.3\\textwidth]{../Plots/normlengthMNIST}\\fixedlabel{MNISTfigsA}{3a} \n%      & \\includegraphics[width=.3\\textwidth]{../Plots/numbeadsMNIST}\\fixedlabel{MNISTfigsB}{3b} \\\\ \\\\\n%    4 & \\includegraphics[width=.3\\textwidth]{../Plots/normlengthCIFAR}\\fixedlabel{CIFARfigsA}{4a} \n%      & \\includegraphics[width=.3\\textwidth]{../Plots/numbeadsCIFAR}\\fixedlabel{CIFARfigsB}{4b} \\\\ \\\\\n%    5 & \\includegraphics[width=.3\\textwidth]{../Plots/normlengthPTB}\\fixedlabel{PTBfigsA}{5a} \n%      & \\includegraphics[width=.3\\textwidth]{../Plots/numbeadsPTB}\\fixedlabel{PTBfigsB}{5b}\n%\t \n%  \\end{tabular}\n%  \\caption{(Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/QUADRATIC.py}{A quadratic regression task}. (2) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/CUBIC.py}{A cubic regression task}. (3) A convnet for \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/MNIST.py}{MNIST}. (4) A convnet inspired by \\href{www.cs.toronto.edu/\\%7Ekriz/cifar.html}{Krizhevsky} for \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/CIFAR10.py}{CIFAR10}. 
(5) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/PTBRNN.py}{A RNN} inspired by \\href{arxiv.org/pdf/1409.2329.pdf}{Zaremba} for PTB next word prediction.}\n%  \\label{FigTable}\n%\\end{table}\n\n\n\n\\subsection{Polynomial Regression}\n\\label{sec:PolyFuncs}\n%%%%%%%%%%%%%%%%%%%%%%\n\n We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization.  For ease-of-analysis, we restricted the training and test data to be strictly contained in the interval $x\\in[0,1]$ and $f(x)\\in[0,1]$.  The number of required beads, and thus the runtime of the algorithm, grew approximately as a power-law, as demonstrated in Table \\ref{FigTable} Fig. 1.  We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix \\ref{visualization}.\n \n\n\\begin{figure} \n%\\begin{figure}[h]\n  \\centering\n%\\begin{center}\n%  \\begin{tabular}{c@{\\quad}cc}\n%    & a & b \\\\\n  \\includegraphics[width=.34\\textwidth]{../Plots/normlengthquadratics}%\\fixedlabel{QUADRATICSfigsA}\\caption{(1a)} \n   \\includegraphics[width=.34\\textwidth]{../Plots/numbeadsquadratics} \\\\ %\\fixedlabel{QUADRATICSfigsB}\\caption{(1b)}  \\\\ \n    \\includegraphics[width=.34\\textwidth]{../Plots/normlengthcubics} %\\fixedlabel{CUBICSfigsA}\\caption{(2a)} \n    \\includegraphics[width=.34\\textwidth]{../Plots/numbeadscubics}\\\\% \\fixedlabel{CUBICSfigsB}\\caption{(2b)} \\\\\n    \\includegraphics[width=.34\\textwidth]{../Plots/normlengthMNIST}%\\fixedlabel{MNISTfigsA}\\caption{(3a)} \n     \\includegraphics[width=.34\\textwidth]{../Plots/numbeadsMNIST}\\\\% \\fixedlabel{MNISTfigsB}\\caption{(3b)} \\\\\n   \\includegraphics[width=.34\\textwidth]{../Plots/normlengthCIFAR}% \\fixedlabel{CIFARfigsA}\\caption{(4a)} \n     \\includegraphics[width=.34\\textwidth]{../Plots/numbeadsCIFAR} \\\\ %\\fixedlabel{CIFARfigsB}\\caption{(4b)} \\\\\n   \\includegraphics[width=.34\\textwidth]{../Plots/normlengthPTB}% \\fixedlabel{PTBfigsA}\\caption{(5a)} \n   \\includegraphics[width=.34\\textwidth]{../Plots/numbeadsPTB} \\\\ %\\fixedlabel{PTBfigsB}\\caption{(5b)}\n  \\caption{(Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/QUADRATIC.py}{A quadratic regression task}. (2) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/CUBIC.py}{A cubic regression task}. (3) A convnet for \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/MNIST.py}{MNIST}. (4) A convnet inspired by \\href{www.cs.toronto.edu/\\%7Ekriz/cifar.html}{Krizhevsky} for \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/CIFAR10.py}{CIFAR10}. (5) \\href{github.com/danielfreeman11/convex-nets/tree/master/LaunchScripts/PTBRNN.py}{A RNN} inspired by \\href{arxiv.org/pdf/1409.2329.pdf}{Zaremba} for PTB next word prediction.}\n  \\label{FigTable}\n\\end{figure}\n%\\end{center}\n\n \n The cubic regression task exhibits an interesting feature around $L_0=.15$ in Table \\ref{FigTable} Fig. 2, where the normalized length spikes, but the number of required beads remains low.  
Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.\n  \n \n%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Convolutional Neural Networks}\n\\label{sec:CNN}\n%%%%%%%%%%%%%%%%%%%%%%\n\n To test the algorithm on larger architectures, we ran it on the MNIST handwritten digit recognition task as well as the CIFAR10 image recognition task, indicated in Table \\ref{FigTable}, Figs. 3 and 4.  Again, the data exhibits strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law.  Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk-understanding that MNIST is highly convex and/or ``easy''.  The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80\\%.\n\n\n%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Recurrent Neural Networks}\n%%%%%%%%%%%%%%%%%%%%%%\n\n To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table \\ref{FigTable} Fig. 5.  Notably, even for a radically different architecture, loss function, and dataset, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets---i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.\n\n\n\n", "meta": {"hexsha": "25d4824afadfd405c59fc9cfbfc5f3ad864712a1", "size": 8063, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Writeup/iclr/experiments.tex", "max_stars_repo_name": "danielfreeman11/convex-nets", "max_stars_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-08-09T00:48:46.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-03T09:04:59.000Z", "max_issues_repo_path": "Writeup/iclr/experiments.tex", "max_issues_repo_name": "danielfreeman11/convex-nets", "max_issues_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Writeup/iclr/experiments.tex", "max_forks_repo_name": "danielfreeman11/convex-nets", "max_forks_repo_head_hexsha": "252a8230845fb2076221113ac8cabfade5152bfb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.6043956044, "max_line_length": 807, "alphanum_fraction": 0.7556740667, "num_tokens": 2286, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5632715163536205}}
{"text": "\\section{Problems to try before some fixed day}\n\n\\prob{}\n{Thue's Note}{}{\n    Let $S$ be a set of all positive integers which can be represented as\n    $a^2+ 5b^2$ for some coprime integers $a,b$. Let $p$ be a prime number\n    such that $p= 4n+ 3$ for some integer $n$. Show that if for some positive\n    integer $k$ the numberk $p$ is in $S$, then $2p$ is in $S$ as well.\n}\n\nKonig's theorem, Dilworth theorem, Max flow Min cut\n", "meta": {"hexsha": "539f583fb25e9145a488d4d2f4e1eb405b9ea5c1", "size": 428, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "for_sharing.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "for_sharing.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "for_sharing.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 35.6666666667, "max_line_length": 77, "alphanum_fraction": 0.6682242991, "num_tokens": 141, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5632715125340693}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage[utf8x]{luainputenc}\n\\usepackage{aeguill}\n%\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\\usepackage{fullpage}\n\\usepackage{fancyhdr}\n\\setlength{\\headheight}{12pt}\n\\pagestyle{fancy}\n\\chead{Linear Algebra}\n\\lhead{October 28, 2015}\n\\rhead{Jon Allen}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\\newcommand{\\proj}{\\text{proj}}\n\\newcommand{\\rank}{\\text{rank}}\n\\newcommand{\\Span}{\\text{Span}}\n\n\\begin{document}\n%\\renewcommand{\\labelenumii}{\\alph{enumii}.}\n%\\renewcommand{\\labelenumiii}{\\alph{enumiii}.}\n%\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\section*{3.3}\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\item\n%3.3 #3\nDecide whether the following sets fo vectors give a basis for the indicated space.\n  \\begin{enumerate}\n  \\item\n  %3.3 #3a\n  $[(1,2,1),(2,4,5),(1,2,3)]:\\mathbb{R}^3$\n\n  \\[\n  \\left[\\begin{array}{rrr}\n  1&2&1\\\\\n  2&4&5\\\\\n  1&2&3\\\\\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  1&2&1\\\\\n  0&0&3\\\\\n  0&0&2\\\\\n  \\end{array}\\right]\n  \\]\n\n  So the span of the vectors has dimension 2, thus it is not a basis of $\\mathbb{R}^3$ which has dimension 3.\n  \\item\n  %3.3 #3b\n  $[(1,0,1),(1,2,4),(2,2,5),(2,2,-1)]:\\mathbb{R}^3$\n\n  Not a basis. The dimension and the cardinality of the basis should be the same.\n  \\item\n  %3.3 #3c\n  $[(1,0,2,3),(0,1,1,1),(1,1,4,4)]:\\mathbb{R}^4$\n\n  Not a basis for the same reason as above.\n  \\item\n  %3.3 #3d\n  $[(1,0,2,3),(0,1,1,1),(1,1,4,4),(2,-2,1,2)]:\\mathbb{R}^4$\n\n  \\[\n  \\left[\\begin{array}{rrrr}\n  1&0&2&3\\\\\n  0&1&1&1\\\\\n  1&1&4&4\\\\\n  2&-2&1&2\\\\\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrrr}\n  1&0&2&3\\\\\n  0&1&1&1\\\\\n  0&1&2&1\\\\\n  0&-2&-3&-4\\\\\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrrr}\n  1&0&2&3\\\\\n  0&1&1&1\\\\\n  0&0&1&0\\\\\n  0&0&1&2\\\\\n  \\end{array}\\right]\n  \\]\n\n  And so the span of the vectors has dimension 4, and so it must be $\\mathbb{R}^4$ since the vectors are in $\\mathbb{R}^4$\n  \\end{enumerate}\n\\setcounter{enumi}{4}\n\\item\n%3.3 #5\nFollowing example 10, for each of the following matrices $A$, give a basis for each of the subspaces $\\mathbf{R}(A),\\mathbf{C}(A),\\mathbf{N}(A),$ and $\\mathbf{N}(A^T)$.\n  \\begin{enumerate}\n  \\setcounter{enumii}{1}\n  \\item\n  %3.3 #5b\n  $A=\n  \\left[\\begin{array}{rrr}\n  1&1&0\\\\\n  2&1&1\\\\\n  1&-1&2\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  1&1&0\\\\\n  0&-1&1\\\\\n  0&-2&2\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  1&0&1\\\\\n  0&1&-1\\\\\n  0&0&0\n  \\end{array}\\right]\n  $\n\n  And so\n  $\\mathbf{R}(A)=\\Span\\left([1,0,1],[0,1,-1]\\right)$,\n  $\\mathbf{N}(A)=\\Span\\left([-1,1,1]\\right)$,\n  and $\\mathbf{C}(A)=\\Span\\left([1,2,1],[1,1,-1]\\right)$.\n\n  We formed the zero row of the \\emph{rref} by $(r_3-r_1)-2(r_2-2r_1)=3r_1-2r_2+r_3$. And so $\\mathbf{N}(A^T)=\\Span\\left([3,-2,1]\\right)$.\n  \\end{enumerate}\n\\setcounter{enumi}{6}\n\\item\n%3.3 #7\nLet $V\\subset \\mathbb{R}^5$ be spanned by $(1,0,1,1,1)$ and $(0,1,-1,0,2)$. By finding the left nullspace of an appropriate matric, give a homogeneous system of equation having $V$ as its solution set. 
Explain how you are using Proposition 3.6.\n\\setcounter{enumi}{8}\n\\item\n%3.3 #9\nSuppose $\\mathbf{u},\\mathbf{v},\\mathbf{w}\\in \\mathbb{R}^n$ form a linearly independent set. Prove that $\\mathbf{u}+\\mathbf{v}, \\mathbf{v}+2\\mathbf{w},$ and $-\\mathbf{u}+\\mathbf{v}+\\mathbf{w}$ likewise form a linearly independent set. \n\nLet us assume, to the contrary, that $\\mathbf{u}+\\mathbf{v}, \\mathbf{v}+2\\mathbf{w},$ and $-\\mathbf{u}+\\mathbf{v}+\\mathbf{w}$ are linearly dependent. Then there exist some $a,b,c\\in \\mathbb{R}$, not all zero, such that $a(\\mathbf{u}+\\mathbf{v})+b(\\mathbf{v}+2\\mathbf{w})+c(-\\mathbf{u}+\\mathbf{v}+\\mathbf{w})=\\mathbf{0}$. Then $(a-c)\\mathbf{u}+(a+b+c)\\mathbf{v}+(2b+c)\\mathbf{w}=\\mathbf{0}$. But $\\mathbf{u},\\mathbf{v},\\mathbf{w}$ are linearly independent, and so $a-c=a+b+c=2b+c=0$. But then $a=c$ so $b+2c=2b+c$ and so $b=c$. If $a=b=c$ and $a+b+c=0$ then $a=b=c=0$. This contradicts our assumption that $a,b,c$ are not all zero, and so $\\mathbf{u}+\\mathbf{v}, \\mathbf{v}+2\\mathbf{w}$ and $-\\mathbf{u}+\\mathbf{v}+\\mathbf{w}$ must be linearly independent.\n\\setcounter{enumi}{9}\n\\item\n%3.3 #10\nSuppose $\\mathbf{v}_1,\\dots,\\mathbf{v}_k$ are nonzero vectors with the property that $\\mathbf{v}_i\\cdot\\mathbf{v}_j=0$ whenever $i\\ne j$. Prove that $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\}$ is linearly independent.\n\nSuppose $c_1\\mathbf{v}_1+c_2\\mathbf{v}_2+\\dots+c_k\\mathbf{v}_k=\\mathbf{0}$.\nThen for any $i$ such that $1\\le i\\le k$ we have\n\\begin{align*}\n  (c_1\\mathbf{v}_1+\\dots+c_k\\mathbf{v}_k)\\cdot\\mathbf{v}_i=\\mathbf{0}\\cdot\\mathbf{v}_i\\\\\n  c_1\\mathbf{v}_1\\cdot\\mathbf{v}_i+\\dots+c_i\\mathbf{v}_i\\cdot\\mathbf{v}_i+\\dots+c_k\\mathbf{v}_k\\cdot\\mathbf{v}_i=0\\\\\n  c_1\\cdot 0+\\dots+c_i||\\mathbf{v}_i||^2+\\dots+c_k\\cdot 0=0\\\\\n  c_i||\\mathbf{v}_i||^2=0\n\\end{align*}\nNow $\\mathbf{v}_i$ is not zero by assumption, and so $||\\mathbf{v}_i||^2>0$. Thus $c_i=0$.\n\nBecause our choice of $c_i$ was arbitrary, we know that $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\}$ is linearly independent.\n\\item\n%3.3 #11\nSuppose $\\mathbf{v}_1,\\dots,\\mathbf{v}_n$ are nonzero, mutually orthogonal vectors in $\\mathbb{R}^n$.\n  \\begin{enumerate}\n  \\item\n  %3.3 #11a\n  Prove that they form a basis for $\\mathbb{R}^n$. (Use Exercise 10)\n\n  We know that they are linearly independent, and there are $n$ of them, therefore they form a basis for an $n$-dimensional subspace. The vectors are in $\\mathbb{R}^{n}$ and so they span a subspace of $\\mathbb{R}^n$. But the only $n$-dimensional subspace of $\\mathbb{R}^n$ is $\\mathbb{R}^n$ and so our set must form a basis for $\\mathbb{R}^n$.\n  \\item\n  %3.3 #11b\n  Given any $\\mathbf{x}\\in \\mathbb{R}^n$, give an explicit formula for the coordinates of $\\mathbf{x}$ with respect to the basis $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_n\\}$.\n\n  Write $\\mathbf{x}=\\sum\\limits_{i=1}^n{c_i\\mathbf{v}_i}$. Taking the dot product of both sides with $\\mathbf{v}_j$ and using the orthogonality from Exercise 10 gives $\\mathbf{v}_j\\cdot\\mathbf{x}=c_j||\\mathbf{v}_j||^2$, and so $c_j=\\frac{\\mathbf{v}_j\\cdot\\mathbf{x}}{||\\mathbf{v}_j||^2}$\n  \\item\n  %3.3 #11c\n  Deduce from your answer to part \\emph{b} that $\\mathbf{x}=\\sum\\limits_{i=1}^n{\\proj_{\\mathbf{v}_i}\\mathbf{x}}$\n\n  Since $\\proj_{\\mathbf{v}_i}\\mathbf{x}=\\frac{\\mathbf{v}_i\\cdot\\mathbf{x}}{||\\mathbf{v}_i||^2}\\mathbf{v}_i=c_i\\mathbf{v}_i$, summing the coordinate formula from part \\emph{b} over $i$ gives $\\mathbf{x}=\\sum\\limits_{i=1}^n c_i\\mathbf{v}_i=\\sum\\limits_{i=1}^n{\\proj_{\\mathbf{v}_i}\\mathbf{x}}$.\n  \\end{enumerate}\n\\setcounter{enumi}{13}\n\\item\n%3.3 #14\nSuppose $\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\in \\mathbb{R}^n$ form a linearly independent set. 
Show that for any $1\\le l<k$, the set $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_l\\}$ is linearly independent as well.\n\nAssume that $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_l\\}$ is linearly dependent. Then there exists some $\\{c_1,\\dots,c_l\\}$ not all zero such that $c_1\\mathbf{v}_1+\\dots+c_l\\mathbf{v}_l=0$. Then $c_1\\mathbf{v}_1+\\dots+c_l\\mathbf{v}_l+0\\mathbf{v}_{l+1}+\\dots+0\\mathbf{v}_k=0$ and so $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\}$ is linearly dependent. And so we have proven the contrapositive.\n\\setcounter{enumi}{20}\n\\item\n%3.3 #21\nLet $A$ be an $m\\times n$ matrix of rank $n$. Suppose $\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\in \\mathbb{R}^n$ and $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\}$ is linearly independent. Prove that $\\{A\\mathbf{v}_1,\\dots,A\\mathbf{v}_k\\}\\subset\\mathbb{R}^m$ is likewise linearly independent. ({\\bfseries N.B.}: If you did not explicitly make use of the assumption that $\\text{rank}(A)=n$, your proof cannot be correct. Why?)\n\\end{enumerate}\n\\section*{3.4}\n\\begin{enumerate}\n\\item\n%3.4 #1\nFind a basis for each of the given subspaces and determine its dimension.\n  \\begin{enumerate}\n  \\setcounter{enumii}{1}\n  \\item\n  %3.4 #1b\n  $V=\\{\\mathbf{x}\\in\\mathbb{R}^4:x_1+x_2+x_3+x_4=0,x_2+x_4=0\\}\\subset\\mathbb{R}^4$\n\n  We need the nullspace of $\\left[\\begin{array}{rrrr}1&1&1&1\\\\0&1&0&1\\end{array}\\right]$ which is $\\Span([-1,0,1,0],[0,-1,0,1])$\n  \\item\n  %3.4 #1c\n  $V=\\left(\\text{Span}\\left((1,2,3)\\right)\\right)^\\perp\\subset\\mathbb{R}^3$\n\n  We need all $\\mathbf{x}$ such that $(\\alpha[1,2,3])\\cdot\\mathbf{x}=0$ or the nullspace of $\\left[\\begin{array}{rrr}1&2&3\\\\0&0&0\\\\0&0&0\\end{array}\\right]$, which is $\\Span([-2,1,0],[-3,0,1])$.\n  \\end{enumerate}\n\\setcounter{enumi}{2}\n\\item\n%3.4 #3\nFor each of the following matrices $A$, give bases for $\\mathbf{R}(A),\\mathbf{N}(A),\\mathbf{C}(A),$ and $\\mathbf{N}(A^T)$. Check dimensions and orthogonality.\n  \\begin{enumerate}\n  \\setcounter{enumii}{1}\n  \\item\n  %3.4 #3b\n  $A=\n  \\left[\\begin{array}{rrr}\n  2&1&3\\\\\n  4&3&5\\\\\n  3&3&3\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  2&1&3\\\\\n  1&0&2\\\\\n  3&3&3\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  0&1&-1\\\\\n  1&0&2\\\\\n  0&3&-3\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrr}\n  1&0&2\\\\\n  0&1&-1\\\\\n  0&0&0\n  \\end{array}\\right]\n  $\n  \n  $r_3-3(r_2-r_3)-3(r_1-2(r_2-r_3))$ to get to the zero row in rref leads to $r_3-3r_2+3r_3-3r_1+6r_2-6r_3=-3r_1+3r_2-2r_3$. 
And so\n  $\\mathbf{R}(A)=\\Span([1,0,2],[0,1,-1])$,\n  $\\mathbf{N}(A)=\\Span([-2,1,1])$,\n  $\\mathbf{C}(A)=\\Span([2,4,3],[1,3,3])$, and\n  $\\mathbf{N}(A^T)=\\Span([-3,3,-2])$.\n  \\setcounter{enumii}{4}\n  \\item\n  %3.4 #3e\n  $A=\n  \\left[\\begin{array}{rrrrr}\n  1 & 1& 0& 1&-1\\\\\n  1 & 1& 2&-1& 1\\\\\n  2 & 2& 2& 0& 0\\\\\n  -1&-1& 2&-3& 3\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrrrr}\n   1& 1& 0& 1&-1\\\\\n   0& 0& 2&-2& 2\\\\\n   0& 0& 2&-2& 2\\\\\n   0& 0& 2&-2& 2\n  \\end{array}\\right]\n  \\Rightarrow\n  \\left[\\begin{array}{rrrrr}\n   1& 1& 0& 1&-1\\\\\n   0& 0& 1&-1& 1\\\\\n   0& 0& 0& 0& 0\\\\\n   0& 0& 0& 0& 0\n  \\end{array}\\right]\n  $\n\n  $(r2-r1)-(r3-2r1)=r1+r2-r3$ and $(r2-r1)-(r4+r1)=-2r1+r2-r4$ are both zero.\n\n  $\\mathbf{R}(A)=\\Span([1,1,0,1,-1],[0,0,1,-1,1])$,\n  $\\mathbf{N}(A)=\\Span([-1,1,0,0,0],[-1,0,1,1,0],[1,0,-1,0,1])$,\n  $\\mathbf{C}(A)=\\Span([1,1,2,-1],[0,2,2,2])$, and\n  $\\mathbf{N}(A^T)=\\Span([1,1,-1,0],[-2,1,0,-1])$.\n  \\end{enumerate}\n\\setcounter{enumi}{7}\n\\item\n%3.4 #8\nIn each case, construct a matrix with the requisite properties or explain why no such matrix exists.\n  \\begin{enumerate}\n  \\item\n  %3.4 #8a\n  The column space has basis $\\left[\\begin{array}{r}1\\\\0\\\\1\\end{array}\\right]$, and the nullspace contains $\\left[\\begin{array}{r}1\\\\2\\\\0\\end{array}\\right]$.\n\n  It must be a $3\\times 3$ matrix. We need the first column to be a multiple of $[1,0,1]$ and the first element of a row should be twice the negative of the second element.\n  $\\left[\\begin{array}{rrr}2&-1&0\\\\0&0&0\\\\2&-1&0\\end{array}\\right]$\n  \\item\n  %3.4 #8b\n  The nullspace contains\n  $\\left[\\begin{array}{r}1\\\\0\\\\1\\end{array}\\right]$,\n  $\\left[\\begin{array}{r}-1\\\\2\\\\1\\end{array}\\right]$, and the row space contains\n  $\\left[\\begin{array}{r}1\\\\1\\\\-1\\end{array}\\right]$\n\n  We notice that $[1,0,1]\\cdot[1,1,-1]=0$ and $[-1,2,1]\\cdot[1,1,-1]=0$ and so we can just make our matrix $[1,1,-1]$\n  \\item\n  %3.4 #8c\n  The column space has basis\n  $\\left[\\begin{array}{r}1\\\\0\\\\1\\end{array}\\right]$,\n  $\\left[\\begin{array}{r}0\\\\1\\\\1\\end{array}\\right]$,\n  and the row space has basis\n  $\\left[\\begin{array}{r}1\\\\1\\\\1\\end{array}\\right]$,\n  $\\left[\\begin{array}{r}2\\\\0\\\\1\\end{array}\\right]$.\n\n  \n  \\[\n  \\left[\\begin{array}{rrr}1&0&0\\\\0&1&0\\\\1&1&0\\end{array}\\right]\\Rightarrow\n  \\left[\\begin{array}{rrr}2&0&1\\\\0&1&0\\\\2&1&0\\end{array}\\right]\\Rightarrow\n  \\left[\\begin{array}{rrr}2&0&1\\\\0&2&0\\\\2&2&2\\end{array}\\right]\\Rightarrow\n  \\left[\\begin{array}{rrr}2&0&1\\\\0&2&1\\\\2&2&2\\end{array}\\right]\n  \\]\n\n  \\item\n  %3.4 #8d\n  The column space and the nullspace both have basis $\\left[\\begin{array}{r}1\\\\0\\end{array}\\right]$.\n\n  $\\left[\\begin{array}{rr}0&1\\\\0&0\\end{array}\\right]$\n  \\item\n  %3.4 #8e\n  The column space and the nullspace both have basis\n  $\\left[\\begin{array}{r}1\\\\0\\\\0\\end{array}\\right]$.\n  \n  The column space basis has the same cardinality as the row space basis. The cardinality of the nullspace basis and the rowspace basis should add up to three. 
But it adds up to two, and so we can't make a matrix.\n\n  \\end{enumerate}\n\\setcounter{enumi}{10}\n\\item\n%3.4 #11\nLet $A=\\left[\\begin{array}{rrrr}1&-1&0&0\\\\0&0&1&-1\\end{array}\\right]$.\n  \\begin{enumerate}\n  %3.4 #11a\n  \\item\n  Given any $\\mathbf{x}\\in \\mathbb{R}^4$, find $\\mathbf{u}\\in \\mathbf{R}(A)$ and $\\mathbf{v}\\in \\mathbf{N}(A)$ so that $\\mathbf{x}=\\mathbf{u}+\\mathbf{v}$.\n\n  $\\mathbf{R}(A)=\\Span([1,-1,0,0],[0,0,1,-1])$ and $\\mathbf{N}(A)=\\Span([1,1,0,0],[0,0,1,1])$ and so $[x_1,x_2,x_3,x_4]=x_1/2([1,-1,0,0]+[1,1,0,0])+x_2/2(-[1,-1,0,0]+[1,1,0,0])+x_3/2([0,0,1,-1]+[0,0,1,1])+x_4/2(-[0,0,1,-1]+[0,0,1,1])$\n\n  Then $\\mathbf{u}=[x_1/2-x_2/2,x_2/2-x_1/2,x_3/2-x_4/2,x_4/2-x_3/2]$ and $\\mathbf{v}=[x_1/2+x_2/2,x_1/2+x_2/2,x_3/2+x_4/2,x_3/2+x_4/2]$\n  %3.4 #11b\n  \\item\n  Given $\\mathbf{b}\\in \\mathbb{R}^2$, give the unique element $\\mathbf{x}\\in \\mathbf{R}(A)$ so that $A\\mathbf{x}=\\mathbf{b}$.\n\n  $\\left[\\begin{array}{rrrr}1&-1&0&0\\\\0&0&1&-1\\end{array}\\right]\\left[\\begin{array}{r}b_1/2\\\\-b_1/2\\\\b_2/2\\\\-b_2/2\\end{array}\\right]=\\left[\\begin{array}{r}b_1\\\\b_2\\end{array}\\right]$\n  \\end{enumerate}\n\\setcounter{enumi}{16}\n\\item\n%3.4 #17\nLet $V\\subset \\mathbb{R}^n$ be a subspace, and suppose you are given a linearly independent set of vectors $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\}\\subset V$. Show that if $\\dim V>k$ then there are vectors $\\mathbf{v}_{k+1},\\dots,\\mathbf{v}_l\\in V$ so that $\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_l\\}$ forms a basis for $V$.\n\nIf $\\dim V>k$ then we can choose some element $\\mathbf{v}_{k+1}\\in V$ and $\\mathbf{v}_{k+1}\\not\\in \\Span\\left(v_1,\\dots,v_k\\right)$. The set $\\{v_1,\\dots,v_{k+1}\\}$ is linearly independent. If $\\dim V>k+1$ then we choose another element from $V$ which is not in the span of our set. We can continue in this way, until there are $\\dim V$ elements in our set. \n\\item\n%3.4 #18\nSuppose $V$ and $W$ are subspaces of $\\mathbb{R}^n$ and $W\\subset V$. Prove that $\\dim W\\le \\dim V$. (\\emph{Hint:} Start with a basis for $W$ and apply Exercise 17.)\n\nWe choose a basis for $W$. This set contains $\\dim W$ vectors. If $\\dim W=\\dim V$ then we are done. Otherwise, we choose some element of $V$ not in $W$ and proceed as in the last question until we have formed a basis for $V$. The size of this set is bigger than the size of the basis of $W$ and so $\\dim W<\\dim V$.\n\\item\n%3.4 #19\nSuppose $A$ is an $n\\times n$ matrix, and let $\\mathbf{v}_1,\\dots,\\mathbf{v}_n\\in \\mathbb{R}^n$. Suppose $\\{A\\mathbf{v}_1,\\dots,A\\mathbf{v}_n\\}$ is linearly independent. Prove that $A$ is nonsingular.\n\nIf there exists a $v_i=0$ then $Av_i=0$ and $\\{Av_1,\\dots,Av_n\\}$ is not linearly independent. So $v_i\\ne 0$ for all $1\\le i\\le n$. Now let us assume that some $v_i=\\alpha_1\\mathbf{v}_1+\\dots+\\alpha_{i-1}\\mathbf{v}_{i-1}+\\alpha_{i+1}\\mathbf{v}_{i+1}+\\dots+\\alpha_n\\mathbf{v}_{n}$. Then $Av_i=\\alpha_1A\\mathbf{v}_1+\\dots+\\alpha_{i-1}A\\mathbf{v}_{i-1}+\\alpha_{i+1}A\\mathbf{v}_{i+1}+\\dots+\\alpha_nA\\mathbf{v}_{n}$. But then $\\{A\\mathbf{v}_1,\\dots,A\\mathbf{v}_n\\}$ is not linearly independent, and so $\\{v_1,\\dots,v_n\\}$ must be linearly independent. Thus $\\Span(\\mathbf{v}_1,\\dots,\\mathbf{v}_n)=\\mathbb{R}^n$.\n\nNow if $\\alpha_1A\\mathbf{v}_1+\\dots+\\alpha_nA\\mathbf{v}_n=0$ then $\\alpha_i=0$ for all $1\\le i\\le n$. And of course $A(\\alpha_1\\mathbf{v}_1+\\dots+\\alpha_n\\mathbf{v}_n)=0$. 
But if $A$ is singular, then there exists some $\\mathbf{x}\\ne \\mathbf{0}$ so that $A\\mathbf{x}=0$. And $\\mathbf{x}\\in \\Span(\\mathbf{v}_1,\\dots,\\mathbf{v}_n)$ so then there exists some $\\alpha_i\\ne 0$. But we have already established that $\\alpha_i=0$ for all $\\alpha_i$ and so we have a contradiction, thus $A$ is nonsingular.\n\\setcounter{enumi}{20}\n\\item\n%3.4 #21\nLet $U$ and $V$ be subspaces of $\\mathbb{R}^n$. Prove that $\\dim(U+V)=\\dim U+\\dim V-\\dim(U\\cap V).$ (\\emph{Hint:} This is a generalization of Exercise 20. Start with a basis for $U\\cap V$, and use Exercise 17.)\n\nRecall that $U\\cap V$ is a subspace of both $U$ and $V$. Now any element in $U+V$ is a linear combination of a basis for $U$ and a basis for $V$. But any elements of $U\\cap V$ in the basis of $U$ can be expressed as a linear combination of elements from the basis in $V$. And so the dimension of $U+V$ is $\\dim U+\\dim V-\\dim(U\\cap V)$.\n\\item\n%3.4 #22\nContinuing Exercise 3.2.10: Let $A$ be an $m\\times n$ matrix, and let $B$ be an $n\\times p$ matrix.\n  \\begin{enumerate}\n  \\item\n  %3.4 #22a\n  Prove that $\\rank(AB)\\le \\rank(A).$ (\\emph{Hint:} Look at part $b$ of Exercise 3.2.10.)\n\n  We showed in 3.2.10 that $\\mathbf{C}(AB)\\subset\\mathbf{C}(A)$. And we know that $\\dim \\mathbf{C}(A)=\\rank(A)$ while $\\dim \\mathbf{C}(AB)=\\rank(AB)$. And so $\\rank(AB)\\le \\rank(A)$.\n  \\item\n  %3.4 #22b\n  Prove that if $n=p$ and $B$ is nonsingular, then $\\rank(AB)=\\rank(A)$\n\n  From 3.2.10d we know that $\\mathbf{C}(AB)=\\mathbf{C}(A)$ and so $\\rank(AB)=\\dim \\mathbf{C}(AB)=\\dim\\mathbf{C}(A)=\\rank(A)$.\n  \\item\n  %3.4 #22c\n  Prove that $\\rank(AB)\\le \\rank(B)$. (\\emph{Hint:} Use part $a$ of Exercise 3.2.10 and Theorem 4.6.)\n\n  $N(B)\\subset N(AB)$ so $\\dim N(B)\\le \\dim N(AB)$. But $\\dim N(B)=p-\\rank(B)$ and $\\dim N(AB)=p-\\rank(AB)$ and so it follows that $p-\\rank(B)\\le p-\\rank(AB)$ or $\\rank(AB)\\le \\rank(B)$.\n  \\item\n  %3.4 #22d\n  Prove that if $m=n$ and $A$ is nonsingular, then $\\rank(AB)=\\rank(B)$.\n\n  We have $\\text{row}_i(AB)=a_{i1}\\text{row}_1(B)+\\dots+a_{in}\\text{row}_n(B)\\in\\Span(\\text{row}_1(B),\\dots,\\text{row}_n(B))$ and so $R(AB)\\subset R(B)$. If $AB=C$ then $B=A^{-1}C$ and $R(B)=R(A^{-1}C)\\subset R(C)=R(AB)$. Since the dimension of the row space is equal to the rank, we have $\\rank(AB)=\\rank(B)$.\n  \\item\n  %3.4 #22e\n  Prove that if $\\rank(AB)=n$, then $\\rank(A)=\\rank(B)=n$.\n\n  We know that $n=\\rank(AB)\\le \\rank (A)\\le n$ because the rank cannot be greater than the number of columns in $A$. And so $n\\le \\rank(A)\\le n$. Similarly $n\\le \\rank(B)\\le n$ because the rank cannot be greater than the number of rows in $B$.\n  \\end{enumerate}\n\\setcounter{enumi}{23}\n\\item\n%3.4 #24\nContinuing Exercise 3.2.10: Let $A$ be an $m\\times n$ matrix.\n  \\begin{enumerate}\n  \\item\n  Use Theorem 2.5 to prove that $\\mathbf{N}(A^TA)=\\mathbf{N}(A)$. (\\emph{Hint:} If $\\mathbf{x}\\in \\mathbf{N}(A^TA)$, then $A\\mathbf{x}\\in \\mathbf{C}(A)\\cap \\mathbf{N}(A^T)$.)\n\n  If $\\mathbf{x}\\in N(A)$ then $(A^TA)\\mathbf{x}=A^T(A\\mathbf{x})=A^T\\mathbf{0}=0$ and so $N(A)\\subset N(A^TA)$. Now if $\\mathbf{x}\\in N(A^TA)$ then $(A^TA)\\mathbf{x}=0=A^T(A\\mathbf{x})$ and $A\\mathbf{x}\\in N(A^T)$. Further $A\\mathbf{x}\\in C(A)$ by proposition 4.10. But $N(A^T)$ and $C(A)$ are orthogonal and so $A\\mathbf{x}=0$. 
Thus $N(A^TA)\\subset N(A)$.\n  \\item\n  Prove that $\\rank(A)=\\rank(A^TA)$.\n\n  $\\dim N(A)=n-\\rank(A)$ and $\\dim N(A^TA)=n-\\rank(A^TA)$, and since $N(A)=N(A^TA)$ it follows that $\\rank(A)=\\rank(A^TA)$.\n  \\item\n  Prove that $\\mathbf{C}(A^TA)=\\mathbf{C}(A^T)$.\n\n  From 3.2.10 we have $C(A^TA)\\subset C(A^T)$ and from above we have $\\dim C(A^TA)=\\rank(A)$. And $C(A^T)=R(A)$, thus $\\dim C(A^T)=\\rank(A)$. And so, because $C(A^TA)\\subset C(A^T)$ and they have the same dimension, we know that $C(A^TA)=C(A^T)$.\n  \\end{enumerate}\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "adb3e536a971a73c8ea6f8953353b59ebe61dfaf", "size": 18639, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "linear/linear-hw-2015-10-28.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear/linear-hw-2015-10-28.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear/linear-hw-2015-10-28.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1307506053, "max_line_length": 733, "alphanum_fraction": 0.6321154568, "num_tokens": 8212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7549149868676284, "lm_q1q2_score": 0.563271508125235}}
{"text": "\\chapter{Key and related works}\\label{ch:2}\n\nIn this chapter, we start by introducing the \\textit{HSM model} by \\cite{antolik} model, which will be the main focus of our experiments. Further, we describe a model by \\cite{klindt}, whose \\textit{separable layer} contribution we will also explore. To provide some background, we finish with a quick look at a few additional related works.\n\n\\section{Antol\u00edk et al. and the DoG layer}\n\nPublished in \\citeyear{antolik}, the paper \\textit{Model Constrained by Visual Hierarchy Improves Prediction of Neural Responses to Natural Scenes} by \\citeauthor{antolik} introduced a three layer DNN model to fit primary visual cortex responses to image stimuli. As its main contribution, it explored incorporating biologically plausible components with traditional DNN methods. Doing so, it managed to outperform\\footnote{For results refer to \\refsection{ch:4.1.2}.}, at the time of writing, other state of the art methods while providing greater interpretability and requiring less free parameters.\n\nThe model is grounded in known hierarchical properties of the early visual system in three ways. First, it assumes that LGN units can be well modeled using difference-of-Gaussians filters. Second, that local population of V1 neurons share inputs from a limited number of such LGN units. Third, that simple cells can be constructed as a combination of several LGN neurons and complex cells similarly from simple cells. Based on these assumptions, the model consists of 3 layers (Fig. \\ref{fig:2.1}): the first represents both the retinal and LGN computation, and the second and third the two levels of V1 neurons.\n\n\\subsection{DoG layer}\\label{ch:2.1.1}\n\nRetinal and LGN computations are modeled together using a set of parallel difference-of-Gaussian filters followed by an identity activation function. In the context of the HSM architecture, we will call this the filter layer. The usage of difference-of-Gaussian filters significantly decreases the number of free parameters introduced by this layer while being grounded in biological reality. Instead of \\texttt{input\\_width}$*$\\texttt{input\\_height} parameters per filter of a fully connected layer, the \\textit{Difference of Gaussians layer (DoG)} is parametrized by only 6 per filter: weight and width of the center and surrounding Gaussians and x, y coordinates of their shared center. The published model contains 9 filters DoG layer. For input $S{x,y}$, weights $\\alpha_1$ and $\\alpha_2$, widths $\\sigma_1$ and $\\sigma_2$, and center $\\mu_x$, $\\mu_y$ the output of a single DoG filter is:\n\n\\begin{equation}\\label{eq:2.1}\n    \\sum_{x,y}^{w,h} S_{x,y}\n\\left(\n{\\frac{\\alpha_1}{ 2 \\sigma_1 \\pi}} e^{\\frac{(x - \\mu_x)^2 + (y - \\mu_y)^2}{2\\sigma_1}} -\n{\\frac{\\alpha_2}{2 (\\sigma_1+\\sigma_2) \\pi}} e^{\\frac{(X - \\mu_x)^2 + (Y - \\mu_y)^2}{ 2(\\sigma_1+\\sigma_2) }}\n\\right)\n\\end{equation}\n\n\\subsection{V1 neurons}\nThe two levels of V1 neurons are modeled as two consecutive fully connected layers. We will call these the hidden and output layers. The output layer has the same size as the number of measured neurons. The hidden layer is proportional to the size of the output layer. In the case of the reported model, its size is 20 \\% of the number of measured neurons. 
Both fully connected layers are followed by a SoftPlus\\footnote{Refer to table \\ref{tab:1.1}.} nonlinearity.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=1\\textwidth]{../figures/02_HSM}\n    \\caption[HSM model architecture]{Three-layer architecture of the \\textit{HSM model} and corresponding neural sections. Figure was adapted from \\citep{antolik}.}\n    \\label{fig:2.1}\n\\end{figure}\n\n\\subsection{Training regime}\\label{ch:2.1.3}\n\nThe published model was trained using a Newton Conjugate-Gradient algorithm\\footnote{\\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_tnc.html}{https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin\\_tnc.html}.} with 100 epochs, each consisting of 10 000 evaluations maximum. The batch size for this optimiser is the entire dataset. In addition to the \\textit{hard regularization} provided by the parameterized filters of the DoG layer, all parameters -- the weights of the DoG layer and of both fully connected layers -- were kept within predefined bounds by the optimiser throughout the fitting. These bounds were also used for random initialization of the parameters, during which values were drawn from uniform distributions with corresponding bounds. No other \\textit{soft} or \\textit{hard regularization} is present in the model. \n\nThe reported values of the two hyperparameters -- the hidden layer size ratio and the number of DoG filters -- were found empirically using two one-dimensional searches through the parameter space.\n\n\\section{Klindt et al. and the separable layer}\\label{ch:2.2}\n\nIn \\citeyear{klindt}, \\citeauthor{klindt} published a paper \\textit{Neural system identification for large populations separating \u201cwhat\u201d and \u201cwhere\u201d} that explored deep convolutional neural networks in the context of V1 neural data. It specifically focused on the estimation of individual neurons\u2019 receptive field locations through a novel approach to readout layer\\footnote{Readout layer is the first fully connected layer after a set of convolutional layers.} factorization, in an effort to allow effective simultaneous fitting of thousands of neurons on relatively little data. In addition to artificial data, it was also evaluated on the same dataset as \\cite{antolik} where it achieved moderate improvements\\footnote{For exact results refer to \\refsection{ch:4.1.2}.} over the HSM model by \\citeauthor{antolik}\n\n\\subsection{Architecture}\n\nThe model (further also referred to as the \\textit{what/where model}) consists of two parts, feature space and receptive fields (Fig. \\ref{fig:2.2}). The feature space is a cascade of -- in the variant for the \\citeauthor{antolik} dataset -- 3 convolutional layers, each followed by a batch normalisation\\footnote{\\citep{2015arXiv150203167I}} and a SoftPlus activation function. The convolution layers have 48 feature maps per layer, with the first one having 13-pixel and the other two 3-pixel kernels. In addition, the convolution layers feature two types of \\textit{soft regularization}: a Laplacian regularization to ensure smoothness of the filters and an L2-based group sparsity regularization\\footnote{For a definition, please refer to the original paper \\citep{klindt}.} to encourage filters to pool from only a small set of feature maps in the previous layer.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=1\\textwidth]{../figures/02_fig2_A2u}\n    \\caption[Klindt et al. 
model]{Architecture of the \\citeauthor{klindt} model, consisting of convolution layers and a factorized readout layer. Figure was adapted from \\citep{klindt}.}\n    \\label{fig:2.2}\n\\end{figure}\n\nThe receptive field consists of a single instance of a specific readout layer (further referred to as the \\textit{separable layer}). It is a fully connected layer factored into two masks per output neuron. The first mask, with dimensionality \\texttt{feat\\_space\\_output\\_spatial\\_dims}, selects the input locations the output neuron will use, in essence its receptive field -- the \\textit{where}. The other, with dimensionality of \\texttt{feat\\_space\\_output\\_channels}, determines what channels of the feature space at each of the locations will make up the neuron\u2019s output -- its \\textit{what}. This factorization achieves two things. It decreases the number of free parameters, from \\texttt{spatial\\_dims}$*$\\texttt{channels} to \\texttt{spa\\-tial\\_dims}$+$\\texttt{channels}, but also enables more direct interpretability of the parameters. Comparing the \\textit{what} masks, for example, allows us to identify similar cell types. Both masks feature L1 regularization to encourage sparsity.\n\nThe model was trained using the ADAM optimiser and early stopping on a 20/80 training set split. Reported hyperparameters, such as the number and size of convolution filters, were found and cross-validated using grid search for each region.\n\n\\section{Related works}\\label{ch:2.3}\nTo provide some background, this section lists a few related works that are relevant to V1 system identification but are not directly used or in any way referenced by our experiments. Specifically, we will introduce a few convolutional architectures, as they have seen an increase in usage for V1 modeling, similarly to how they have taken over classical computer vision\\footnote{Even though they are slowly being superseded by attention/transformer-based networks: \\citep{2019arXiv190605909R}, \\citep{2019arXiv190409925B}, \\citep{dosovitskiy2020image}.}.\n\n\\subsection{Neural convolutional models}\n\\cite{ecker}, inspired by the aforementioned \\cite{klindt}, further explored convolutional architectures. In addition to \\citeauthor{klindt}\u2019s observation that many neurons of early visual processing are similar but only have their receptive fields at different locations, \\citeauthor{ecker}\u2019s model leveraged the fact that even more V1 neurons are functionally the same, if we assume not only arbitrary receptive field positions but also arbitrary orientations. Based on the same two-part, 3-convolutional-layer architecture introduced by \\citeauthor{klindt}, their model used \\textit{Group equivariant convolutions} \\citep{2016arXiv160207576C} instead of traditional convolution layers.\n\nTackling a similar problem from a different angle, \\cite{Walke506956} introduced a convolutional architecture, where the readout locations -- representing the receptive fields of output neurons -- are modulated by a \\textit{shifter side network} based on eye movement data. In addition, the neural outputs are further adjusted by a \\textit{second side network} using behaviour data (running state, pupil dilation), trying to incorporate extra-retinal influences on LGN and further downstream stages of visual processing. For the core network, the model used a three-layer convolutional cascade followed by a neuron-specific linear readout stage. 
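\n\nAll of the above models share the idea of a per-neuron readout on top of a shared convolutional core, much like the separable layer of \\refsection{ch:2.2}. As a purely illustrative sketch -- our own NumPy simplification, not code taken from any of the cited implementations -- such a factorized readout for a single stimulus can be written as:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef separable_readout(features, where, what):\n    # features: (H, W, C) output of the convolutional core\n    # where:    (N, H, W) one spatial mask per output neuron\n    # what:     (N, C)    one channel mask per output neuron\n    # A full readout would need H*W*C weights per neuron;\n    # the factorization needs only H*W + C per neuron.\n    per_channel = np.einsum('hwc,nhw->nc', features, where)\n    return np.einsum('nc,nc->n', per_channel, what)  # (N,) responses\n\\end{verbatim}\n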
\n\n\\subsection{Transfer learning}\n\nMotivated by the observation that lower parts of the visual pathway can be seen as analogous to early layers of classical object recognition neural networks, \\cite{10.1371/journal.pcbi.1006897} explored transfer learning within the domain of V1 system identification. Their proposed architecture used 16 feature maps\\footnote{The output of one filter of a convolutional layer given a specific input.} from the initial section of a VGG-16 network \\citep{VGG16} already trained on an image classification dataset and then fitted a per-output-neuron readout layer on neural data. This approach managed to achieve parity with a 3-convolutional-layer CNN architecture trained entirely on neural data, while requiring a smaller training dataset.\n", "meta": {"hexsha": "c130a153131fdc804f30381f353a104b0af6079c", "size": 10822, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/chapters/02_chap.tex", "max_stars_repo_name": "petrroll/msc-thesis", "max_stars_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "text/chapters/02_chap.tex", "max_issues_repo_name": "petrroll/msc-thesis", "max_issues_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-11-26T12:37:50.000Z", "max_issues_repo_issues_event_max_datetime": "2020-11-26T21:02:36.000Z", "max_forks_repo_path": "text/chapters/02_chap.tex", "max_forks_repo_name": "petrroll/msc-thesis", "max_forks_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-25T21:44:31.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-25T21:44:31.000Z", "avg_line_length": 156.8405797101, "max_line_length": 989, "alphanum_fraction": 0.7977268527, "num_tokens": 2560, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.563271504011042}}
{"text": "\\subsection{Key steps of a \\acrshort{mcts}}%\n\\label{sub:key_components_of_a_mcts}\n\nA \\gls{mcts} algorithm such as \\gls{uct} follows four steps that are repeated indefinitely.\nTheses steps are the selection, the expansion, the simulation and the backpropagation.\nHere, we will give a quick explanation of each of these steps.\nBut, before going in, we have to explain what the tree used in \\gls{mcts} approach looks like.\n\nA \\gls{mcts} tree is a tree as defined in graph theory which is not mandatory to explain here. \nAt the top of the tree, we find the root state, which is the current state of the game, the one for which we search the best action.\nEach node of the tree is a state and nodes are connected by actions. \n\n\\paragraph*{During the selection}\nthe algorithm will descend in the tree from node to node, starting from the root node.\nAt one point, according to a policy, it will stop on a node corresponding to a state \\(s\\) and begin the expansion phase.\n\n\\paragraph*{The expansion}\nphase corresponds to the addition of a new node on the tree.\nIt applies an action \\(a \\in A_{s}\\) chosen according to a policy.\nThen, it adds the resulting state, let's say \\(s'\\) to the tree.\n\n\\paragraph*{The simulation}\nis a Monte-Carlo simulation.\nIt is used to evaluate the quality of a node for a given player.\nIt consists in playing the game randomly from state \\(s'\\) until a terminal state is reached.\nNo modification is done to the tree.\nThe simulation is called a rollout because it is a random playout.\nIt follows what is called the default policy which chooses the action uniformly.\n\n\\paragraph{The backpropagation}\nis the last phase.\nThe result of the rollout is transmitted back to the top of the tree.\nIt follows back the way chosen at the selection.\nWe can then have statistics on each node which, often, are useful to create the policy used during the selection and expansion.\n\nAll theses steps are repeated over and over again.\nThe number of repetition is often limited by a search time which is set beforehand.\nBased on the tree obtained at the end, the player can choose which action apply on the root state.\nOften, the search is repeated on a new tree each turn to avoid the algorithm to be stuck on the same strategy.\nerge\n\n\n", "meta": {"hexsha": "8fc820fee430289d0ef73a1d3cc8bc71cf6cfd38", "size": 2239, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/report/src/sections/mcts/subs/key_components.tex", "max_stars_repo_name": "XanX3601/stochastic_mcts_optimization", "max_stars_repo_head_hexsha": "743ef3df090427750fee55fd69d7646a88d5946a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documents/report/src/sections/mcts/subs/key_components.tex", "max_issues_repo_name": "XanX3601/stochastic_mcts_optimization", "max_issues_repo_head_hexsha": "743ef3df090427750fee55fd69d7646a88d5946a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documents/report/src/sections/mcts/subs/key_components.tex", "max_forks_repo_name": "XanX3601/stochastic_mcts_optimization", "max_forks_repo_head_hexsha": "743ef3df090427750fee55fd69d7646a88d5946a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.0697674419, "max_line_length": 132, "alphanum_fraction": 0.7766860205, "num_tokens": 517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.563271504011042}}
{"text": "\\documentclass[a4]{jgaa-art}\n\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[toc,page]{appendix}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{algorithm}\n\\usepackage[noend]{algpseudocode}\n\\usepackage{hhline}\n\\usepackage{array}\n\\newcolumntype{C}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\n\\newenvironment{definition}[1][Definition]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}]}{\\end{trivlist}}\n\n\\makeatletter\n\\def\\BState{\\State\\hskip-\\ALG@thistlm}\n\\makeatother\n\n\\algnewcommand{\\LineComment}[1]{\\State \\(\\triangleright\\) #1}\n\n\\title{Enhancing PQ-tree Planarity Algorithms for non-adjacent $s$ and $t$ }\n\n\\author{Shoichiro Yamanishi}\n\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nThe PQ-tree based planarity testing algorithm presented\nby Booth and Lueker in \\cite{BL76} requires an st-ordering where $\\{s,t\\}$ must exist.\nWe propose an enhancement to the algorithm that permits st-ordering of any vertex pair.\nThe enhancement is made by introducing a type of circular consecutiveness of pertinent \nleaves denoted by  {\\it complementarily partial},\n where the pertinent leaves can be consecutively arranged only at both ends of the frontier with \none or more non-pertinent leaves in between. \nThe implementation is enhanced with 4 additional templates P7, P8, Q4, and Q5.\nThe correctness of the new algorithm is given following the proof in\n \\cite{EVEN79} and \\cite{LEC67} \non the equivalence of PQ-tree and its new reduction operations to the corresponding bush form.\nThe complexity of the new algorithm stays $O(|N|)$.\n\\end{abstract}\n\n\\section{Introduction}\\label{se:intro}\n\n%Context\n%Purpose\n%Summary\n%Forecast\n\nPQ-tree data structure and the reduction algorithm were first proposed by Booth and Lueker \\cite{BL76} \nin 1976.\nPQ-tree is a rooted ordered tree with three node types: L-node represents a leaf with no children, P-node \npermits any permutation of its children, Q-node has an ordering on its children but it can be reversed.\nPQ-tree is used to represent a set of permissible permutations of elements in $S$, and the reduction\noperation tries to find a consecutive arrangement of subset $U$ of $S$.\nIt has many applications including the planarity testing of an undirected biconnected graph $G(V,E)$.\nBooth and Lueker proposed a planarity testing algorithm in their original paper \\cite{BL76}.\nIt has $O(|N|)$ time complexity, and it depends on a particular ordering on graph vertices called st-ordering,\nwhere the two terminal nodes $s$ and $t$ must be adjacent as in $\\{s,t\\}$.\nAn st-ordering can be found in $O(|N|)$ time with an algorithm such as \\cite{TARJAN86}, assuming $|E| \\le 3|V| - 6$ .\n\nThe algorithm falls in a category called vertex addition.\nEach graph vertex $v$ is processed in an iteration according to the st-ordering.\nConceptually each graph vertex is added to the bush form \\cite{LEC67},\\cite{EVEN79}, and the PQ-tree\nevolves reflecting the state of the bush form.\nEach iteration of the algorithm transforms the PQ-tree for $v$\n and its incident edges.\nIn the beginning of an iteration, the tree leaves that correspond to the incoming graph edges incident to $v$,\nor {\\it pertinent leaves}, are gathered consecutively in the tree by transforming the tree \nin a series of template applications.\nThe minimum connected subtree for all the pertinent leaves, or {\\it pertinent tree}, is 
removed from the tree, \nand then new leaves that correspond to the outgoing graph edges incident to $v$ are added to the tree node \nthat was the pertinent tree root.\nThe st-ordering ensures there is at least one incoming graph edge and one outgoing graph edge at \neach iteration except for the first and the last. \nAt the first iteration the leaves for the outgoing graph edges of the first graph vertex in the st-ordering\nare attached to the initial P-node, and the PQ-tree evolves from there over the iterations.\nAt an iteration, if the pertinent leaves cannot be arranged consecutively, the algorithm aborts and declares the graph\nnon-planar.\nAt the second to last iteration, if all the pertinent leaves are consecutively arranged, the graph is declared planar.\n\nThe PQ-tree itself is an elegant data structure to represent a set of permissible permutations of elements, \nbut its reduction algorithm involves 10 rather cumbersome tree transformation operations called templates, though\neach of them is straightforward and intuitive to understand.\nSince the original algorithm was published, many graph planarity-related algorithms have been proposed,\n but some of them were later proven to have issues \\cite{OZAWA81}, \\cite{JTS89}, \\cite{KANT92}.\nFor example, J\\\"unger, Leipert, and Mutzel \\cite{JUNGER97} discuss the pitfalls and difficulties of using \nPQ-trees for maximal planarization of graphs.\nIt seems to require very careful attention to apply PQ-trees to graph planarity algorithms, despite their apparent \nstraightforwardness and intuitiveness.\n\nAnother data structure called PC-tree was proposed by Shih and Hsu \\cite{HSU99}.\nIt is a rootless tree to express permissible 'circular' permutations. \nHsu \\cite{HSU99} proposes a planarity test algorithm using PC-tree with iterative vertex\n addition along a DFS exploration of graph vertices.\nHsu \\cite{HSU01} compares PQ-tree and PC-tree in terms of planarity testing. \nHsu \\cite{HSU01} mentions testing the 'circular ones' property with PC-tree and \nthe 'consecutive ones' property with PQ-tree.\n\nIn this paper, we will expand the PQ-tree planarity algorithm for any (not necessarily adjacent) $s$, $t$ pair.\nA non-adjacent st-ordering will introduce a new type of consecutiveness through the course of the algorithm on the PQ-tree.\nIt is a type of 'circular' permutation.\nWe define the type of circular permutations that the original algorithm cannot handle \n as {\\it complementarily partial}. \nWe show the insufficiency of the set of the original templates in Booth and Lueker \\cite{BL76} with a specific example, and propose 4 \nnew templates to handle complementarily partial nodes.\nThen we prove the correctness of the new algorithm following Lempel, Even, and Cederbaum \\cite{LEC67} and Even \\cite{EVEN79} in equivalence to the corresponding bush form. \nWe show the time complexity stays $O(|N|)$. 
\nWe then discuss some implementation details.\nThe algorithm mainly consists of two parts called BUBBLE() and REDUCE() in \\cite{BL76}.\nWe show that no change is required for BUBBLE(), but the 4 new templates have to be added to REDUCE().\n\nPQ-tree and its reduction technique are also used for some planarization algorithms such as\n\\cite{OZAWA81}, \\cite{JTS89}, and \\cite{KANT92}.\nIn those algorithms, the costs are calculated per tree node, \nand some tree leaves that have associated graph edges are removed based on the costs to maintain planarity.\nThe costs are basically the number of descendant tree leaves that would have to be removed to \nmake the tree node a certain pertinent or non-pertinent type.\nWe briefly propose an improvement on the cost values.\nWithout the improvement, some graph edges can be removed unnecessarily as a consequence of not handling the {\\it complementarily partial} nodes defined below.\n\n\n\\section{Circular Consecutiveness}\\label{se:insuff}\n\nThe insufficiency of the existing algorithm is best shown by a real example.\nPlease see figures~\\ref{fig:pq_tree_01} and~\\ref{fig:bush_form_01}.\nThey are taken from a snapshot of the original algorithm for the graph and the st-numbering shown in Appendix A.\nFigure~\\ref{fig:pq_tree_01} is the PQ-tree at the 23rd iteration after \nBUBBLE(), and figure~\\ref{fig:bush_form_01} is the corresponding bush form.\nIn this iteration the graph vertex $2$ is to be added.\nThe pertinent leaves that correspond to $\\{8,2\\}$, $\\{3,2\\}$, and $\\{7,2\\}$\n are about to be consecutively arranged in REDUCE().\nThose edges can be consecutively arranged in a circular manner.\nHowever, the reduction fails at Q-node $Q4$ as none of the templates Q1, Q2, \nor Q3 can handle this arrangement.\nIn $Q4$, the pertinent leaves can only be arranged consecutively\n at both ends with one or more non-pertinent leaves in between.\n\nAs shown above, the original algorithm is not capable of handling this type \nof arrangement of the pertinent leaves, with pertinent leaves at both ends.\nThis is a result of using a rooted tree structure to handle circular \nconsecutiveness around the outer face of the corresponding bush form. 
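\nIntuitively, the root of the PQ-tree fixes the point at which the circular order around the outer face is cut open into a linear one, and a complementarily partial arrangement is precisely the case in which the desired consecutive block straddles that cut.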
\nOnce a Q-node has formed in the PQ-tree, if at a later iteration \nthere is a complementarily partial pertinent node, \nthere will be no way to arrange the pertinent nodes consecutively using the \noriginal set of templates, \neven if the corresponding bush form permits circular consecutiveness.\n\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{minipage}[b]{0.64\\textwidth}\n    \\includegraphics[width=\\textwidth]{pq_tree_sample_01}\n    \\caption{PQ-tree}\n    \\label{fig:pq_tree_01}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}[b]{0.35\\textwidth}\n    \\includegraphics[width=\\textwidth]{bush_form_01}\n    \\caption{Bush form}\n    \\label{fig:bush_form_01}\n  \\end{minipage}\n\\end{figure}\n\n\n\\section{Enhancement with New Templates}\\label{se:correction}\nIn the previous section we showed a type of node arrangement in PQ-tree that\nthe original algorithm cannot handle.\nIn this section we first formulate this condition by defining a new\npertinent node type {\\it complementarily partial}, and then\nintroduce 4 new templates.\nThen we discuss other changes required in BUBBLE() and REDUCE().\n\n\\begin{definition}\nA \\emph{P-node} is \\emph{complementarily partial}, if either of the following\nholds:\n\\begin{enumerate}\n\\item \\label{item:P2}It is not a pertinent root, and it satisfies the\ncondition for template $P6$ on the arrangement of child nodes,\ni.e., if there are exactly two singly partial children. (Figure \\ref{fig:pq_tree_02})\n\\item \\label{item:P1}There is exactly one complementarily partial child, and all the\n children are full. (Figure \\ref{fig:pq_tree_03})\n\\end{enumerate}\n\\end{definition}\n\n\\begin{definition}\nA \\emph{Q-node} is \\emph{complementarily partial}, if either of the following\nholds:\n\\begin{enumerate}\n\\item \\label{item:Q2}The children are arranged as the complement of a permissible arrangement\n   for template $Q3$, i.e., if the descendant pertinent leaves can be arranged \n   consecutively at both ends with one or more non-pertinent leaves in between.\n (Figure \\ref{fig:pq_tree_04})\n\\item \\label{item:Q1}There is exactly one complementarily partial child, and all the children\n   are full.\n (Figure \\ref{fig:pq_tree_05})\n\\end{enumerate}\n\\end{definition}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.9\\textwidth]{pq_tree_sample_02}\n  \\caption{P-node Condition 1, Template P7}\n  \\label{fig:pq_tree_02}\n\\end{figure}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.5\\textwidth]{pq_tree_sample_03}\n  \\caption{P-node Condition 2, Template P8}\n  \\label{fig:pq_tree_03}\n\\end{figure}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=\\textwidth]{pq_tree_sample_04}\n  \\caption{Q-node Condition 1, Template Q4}\n  \\label{fig:pq_tree_04}\n\\end{figure}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.55\\textwidth]{pq_tree_sample_05}\n  \\caption{Q-node Condition 2, Template Q5}\n  \\label{fig:pq_tree_05}\n\\end{figure}\n\n\nThe first condition for a Q-node is formally defined using a regular expression \nfor the children as:\n\\[F+((S(D?|E+D))|(E+D))|SE*D\\]\n\nwhere $F$ denotes a full child, $S$ a singly partial child, $E$ a \n non-pertinent child, and $D:=((F|S)F*)$ for notational convenience.\n\nFirst we show there is no need to change BUBBLE() to handle the complementarily\n partial cases.\nIf the PQ-tree permits a complementarily partial arrangement, then the root of the tree\nwill be the root of the pertinent subtree. 
During BUBBLE(), the parent of\nall the pertinent nodes will eventually be found, and there will be no need\nfor a surrogate parent, or {\\it pseudo node} in \\cite{BL76}.\n\nNext, we introduce 4 new templates for each of the 4 conditions shown in the\n definitions above. Basically, Template P7 is a complementary version of Template P6,\nand Template Q4 is a complementary version of Q3. Templates P8 and Q5 are\nfor trivial recursive cases.\n\nFinally, we show the updated REDUCE().\n\n\n\\begin{algorithm}\n\\caption{Template P7}\\label{template_P7}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateP7}{X: reference to a node object}\n\\If {$X.type \\ne P$} \\Return false\n\\EndIf\n\\If {Number of singly partial children $\\ne 2$} \\Return false\n\\EndIf\n\\State $newX \\gets \\text{CreateNewPNode()}$\n\\State Move all the full children of $X$ to $newX$\n\\State $C_1 \\gets \\text{Singly partial child}_1$\n\\State $C_2 \\gets \\text{Singly partial child}_2$\n\\State Remove links of $C_1$ and $C_2$ from $X$\n\\LineComment At this point $X$ contains zero or more empty children only.\n\\State Save the location of $X$ in the PQ-tree to $L_X$\n\\State Unlink $X$ from the PQ-tree\n\\If {Number of empty children of $X > 1$}\n\\State Put $X$ to the empty side of the sibling list of $C_1$\n\\ElsIf {Number of empty children of $X = 1$}\n\\State Put the empty child to the empty side of the sibling list of $C_1$\n\\State Discard $X$\n\\Else\n\\State Discard $X$\n\\EndIf\n\\State Concatenate the children list of $C_2$ to $C_1$'s on the empty sides\n\\State Discard $C_2$\n\\State $C_1.pertinentType \\gets \\textit{ComplementarilyPartial}$\n\\If {Number of full children of $newX \\ge 1$}\n\\State Put $C_1$ under $newX$\n\\State Link $newX$ at $L_X$ in the PQ-tree\n\\State $newX.pertinentType \\gets \\textit{ComplementarilyPartial}$\n\\Else\n\\State Link $C_1$ at $L_X$ in the PQ-tree\n\\EndIf\n\\Return true\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{Template P8}\\label{template_P8}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateP8}{X: reference to a node object}\n\\If {$X.type \\ne P$} \\Return false\n\\EndIf\n\\State $|F| \\gets $ Number of full children of $X$\n\\State $|C| \\gets $ Number of children of $X$\n\\If {$|F|+1 \\ne |C|$} \\Return false\n\\EndIf\n\\State $C_{cp} \\gets $ the non-full child of $X$\n\\If {$C_{cp}.pertinentType \\ne \\textit{ComplementarilyPartial}$} \\Return false\n\\EndIf\n\\State {$X.pertinentType \\gets \\textit{ComplementarilyPartial}$}\n\\State\\Return {true}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{Template Q4}\\label{template_Q4}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateQ4}{X: reference to a node object}\n\\If {$X.type \\ne Q$} \\Return false\n\\EndIf\n\\If {Children of $X$ are not ordered according to the condition for Q4}\n\\Return false\n\\EndIf\n\\For {each singly partial child $C_{sp}$}\n\\State Flatten $C_{sp}$ into $X$ such that the full side of the children list\nof $C_{sp}$ is concatenated to the full immediate sibling of $C_{sp}$\n\\EndFor\n\\State $X.pertinentType \\gets \\textit{ComplementarilyPartial}$\n\\State \\Return {true}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{Template Q5}\\label{template_Q5}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateQ5}{X: reference to a node object}\n\\If {$X.type \\ne Q$} \\Return false\n\\EndIf\n\\State $|F| \\gets $ Number of full children of $X$\n\\State $|C| \\gets $ Number of children of $X$\n\\If 
\n\\begin{algorithm}\n\\caption{Template Q4}\\label{template_Q4}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateQ4}{X: reference to a node object}\n\\If {$X.type \\ne Q$} \\Return false\n\\EndIf\n\\If {Children of $X$ are not ordered according to the condition for Q4}\n\\Return false\n\\EndIf\n\\For {each singly partial child $C_{sp}$}\n\\State Flatten $C_{sp}$ into $X$ such that the full side of the children list\nof $C_{sp}$ is concatenated to the full immediate sibling of $C_{sp}$\n\\EndFor\n\\State $X.pertinentType \\gets \\textit{ComplementarilyPartial}$\n\\State \\Return {true}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{Template Q5}\\label{template_Q5}\n\\begin{algorithmic}[1]\n\\Procedure{TemplateQ5}{X: reference to a node object}\n\\If {$X.type \\ne Q$} \\Return false\n\\EndIf\n\\State $|F| \\gets$ Number of full children of $X$\n\\State $|C| \\gets$ Number of children of $X$\n\\If {$|F|+1 \\ne |C|$} \\Return false\n\\EndIf\n\\LineComment {The check above can be made without calculating $|C|$}\n\\State $C_{cp} \\gets$ the non-full child of $X$\n\\If {$C_{cp}.pertinentType \\ne \\textit{ComplementarilyPartial}$} \\Return false\n\\EndIf\n\\State {$X.pertinentType \\gets \\textit{ComplementarilyPartial}$}\n\\State\\Return {true}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}\n\\caption{REDUCE}\\label{reduce}\n\\begin{algorithmic}[1]\n\\Procedure{REDUCE}{T, S}\n\\LineComment $PERTINENT\\_LEAF\\_COUNT$ is shortened to $PLC$.\n\\LineComment $PERTINENT\\_CHILD\\_COUNT$ is shortened to $PCC$.\n\\State $QUEUE \\gets$ empty list\n\\For {each leaf $X \\in S$}\n\\State place $X$ to the back of $QUEUE$\n\\State $X.PLC \\gets 1$\n\\EndFor\n\\While {$|QUEUE| > 0$}\n\\State remove $X$ from the front of $QUEUE$\n\\If {$X.PLC < |S|$}\n\\Comment $X$ is not $ROOT(T,S)$\n\\State $Y \\gets X.PARENT$\n\\State $Y.PLC \\gets Y.PLC + X.PLC$\n\\State $Y.PCC \\gets Y.PCC - 1$\n\\If {$Y.PCC = 0$}\n\\State place $Y$ to the back of $QUEUE$\n\\EndIf\n\\If {not TEMPLATE\\_L1(X)}\n\\If {not TEMPLATE\\_P1(X)}\n\\If {not TEMPLATE\\_P3(X)}\n\\If {not TEMPLATE\\_P5(X)}\n\\If {not TEMPLATE\\_P7(X)}\n\\If {not TEMPLATE\\_P8(X)}\n\\If {not TEMPLATE\\_Q1(X)}\n\\If {not TEMPLATE\\_Q2(X)}\n\\If {not TEMPLATE\\_Q4(X)}\n\\If {not TEMPLATE\\_Q5(X)}\n\\State {$T \\gets T(\\emptyset, \\emptyset)$}\n\\State {\\bf exit} from the {\\bf while} loop\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\Else\n\\Comment $X$ is $ROOT(T,S)$\n\\If {not TEMPLATE\\_L1(X)}\n\\If {not TEMPLATE\\_P2(X)}\n\\If {not TEMPLATE\\_P4(X)}\n\\If {not TEMPLATE\\_P6(X)}\n\\If {not TEMPLATE\\_P8(X)}\n\\If {not TEMPLATE\\_Q1(X)}\n\\If {not TEMPLATE\\_Q2(X)}\n\\If {not TEMPLATE\\_Q3(X)}\n\\If {not TEMPLATE\\_Q4(X)}\n\\If {not TEMPLATE\\_Q5(X)}\n\\State {$T \\gets T(\\emptyset, \\emptyset)$}\n\\State {\\bf exit} from the {\\bf while} loop\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndIf\n\\EndWhile\n\\Return T\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n
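\nThe nested template checks in REDUCE() amount to a short-circuit cascade, tried in the order listed. A minimal sketch follows; the Node type and the template function names are hypothetical placeholders, not the reference implementation.\n\n\\begin{verbatim}\nstruct Node;  // PQ-tree node; details omitted in this sketch.\n\n// Each template procedure returns true iff it applied to the node.\nbool templateL1(Node&); bool templateP1(Node&); bool templateP3(Node&);\nbool templateP5(Node&); bool templateP7(Node&); bool templateP8(Node&);\nbool templateQ1(Node&); bool templateQ2(Node&); bool templateQ4(Node&);\nbool templateQ5(Node&);\n\n// Cascade for a pertinent node X that is not ROOT(T,S); the root case\n// uses L1, P2, P4, P6, P8, Q1, Q2, Q3, Q4, Q5 instead.\nbool applyNonRootTemplates(Node& x) {\n    return templateL1(x) || templateP1(x) || templateP3(x)\n        || templateP5(x) || templateP7(x) || templateP8(x)\n        || templateQ1(x) || templateQ2(x) || templateQ4(x)\n        || templateQ5(x);\n}\n\\end{verbatim}\n\nIf the cascade returns false, the reduction fails and $T$ is replaced by the null tree, exactly as in REDUCE() above.\n\n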
\\section{Correctness of the New Algorithm}\\label{se:correctness}\n\nWe prove the correctness following the series of lemmas, theorems, and a corollary given in\nSection 8.4 of \\cite{EVEN79} by Even.\nIn that book, the equivalence between the transformations on the bush form \nand the reductions on the PQ-tree is left to the reader on p. 190.\nWe fill the missing part with the following proof.\n\nIn a similar case, Hsu \\cite{HSU01} tries to prove the equivalence between \nPQ-tree and PC-tree in its Theorem 5, but the proof is not sufficient:\nit proves the equivalence from PQ-tree to PC-tree for each of the templates,\nbut the sufficiency of the PQ-tree templates for all the possible\nPC-tree transformations is not given.\n\nThe proof presented here is relatively long, involving many notations and concepts.\nHowever, it may be a useful guide to studying the details of the PQ-tree behavior.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.8\\textwidth]{proof_overview}\n  \\caption{Overview of the Proof of the Correctness of the Proposed Algorithm}\n  \\label{fig:proof_overview}\n\\end{figure}\n\nFigure \\ref{fig:proof_overview} shows an overview of the proof.\nIt proves the equivalence between the bush form with the operations on it and the \ncorresponding PQ-tree with the operations on it.\nThe proof uses two intermediate representations: the marked bush form and its underlying rooted embedded bc-tree.\nA marked bush form is obtained from a bush form by placing a root marker on a cut vertex or an edge on the outer\nface of a block.\nSuch a marker splits the circular arrangement of virtual edges around the bush form into one of three types of \nlinear consecutive arrangements with two designated end points at the marker.\nThe root marker also introduces the root-to-descendants orientation to the bush form.\n\nWe prove in Lemma \\ref{lem:lemma1} that a circular consecutive arrangement of virtual edges \nby arbitrary reorderings of incident edges around cut vertices and flippings of blocks in a bush form is\n equivalent to a linear consecutive arrangement by reordering of cut vertices and flipping of blocks\n from the leaf components toward the root. In the proof we also show that the incident components\n around each cut vertex or block will be arranged in one of 5 types of orderings.\n\nWe then introduce the underlying block-cut tree of the marked bush form, called the rooted embedded bc-tree, \nand prove the equivalence between the rooted embedded bc-tree and the PQ-tree with their operations in Lemmas \\ref{lem:lemma2} and \\ref{lem:lemma3}.\n\nFirst, we introduce some concepts, operations, and notations required for the following discussions.\n\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.4\\textwidth]{marked_bush_form_00}\n  \\caption{A Bush Form}\n  \\label{fig:bush_form}\n\\end{figure}\n\n\\begin{definition}\nTypes of nodes and edges along the outer face of a bush form.\n\\begin{itemize}\n\\item A {\\bf \\emph{virtual node}} of a bush form is a node of degree 1 that represents a copy of a node\nin the original graph \\cite{EVEN79}. In Figure \\ref{fig:bush_form}, $v1$...$v14$ are virtual nodes.\n\\item A {\\bf \\emph{virtual edge}} of a bush form is an edge incident to a virtual node.\n\\item A {\\bf \\emph{pertinent virtual node}} is a virtual node that corresponds to a pertinent leaf\nin the PQ-tree, 
i.e., a virtual node to be merged.\n\\item A {\\bf \\emph{pertinent virtual edge}} is a virtual edge incident to a pertinent virtual node.\n\\end{itemize}\n\\end{definition}\n\n\\begin{definition}\nOperations on a bush form and a marked bush form.\n\\begin{itemize}\n\\item {\\bf \\emph{attach}} is an operation to attach new virtual edges to a vertex $v$ in the bush form.\nAs a result it makes $v$ a cut vertex in the bush form.\n\\item A {\\bf \\emph{reorder}} of a cut vertex in a bush form is the operation to rearrange the circular ordering of\nthe incident edges around the cut vertex.\n\\item A {\\bf \\emph{flip}} of a block in a bush form reverses the combinatorial embedding of the block.\nAs a result, the ordering of the outer face of the block is reversed.\n\\item {\\bf \\emph{merge}} is an operation to merge virtual nodes into a single real node in the bush form. As a result a new\nblock forms in the bush form.\n\\end{itemize}\n\\end{definition}\n\n\\begin{definition}\nIf a bush form is decomposed into maximal connected components by removing a cut vertex $c$ or a block $B$,\nan {\\bf \\emph{active component}} of $c$ or $B$ is a maximal connected component incident \nto $c$ or $B$ that has at least one virtual node in it. \nIn Figure \\ref{fig:bush_form}, $c3$ has three maximal connected components.\nThe component that includes $b5$, $c11$, $c10$, and $b4$ is not an active component. The other two are.\n\\end{definition}\n\n\\begin{definition}\nAn {\\bf \\emph{orienting}} cut vertex or block is a cut vertex or a block in the bush form that has at least 3 incident\nactive components if the corresponding node in the PQ-tree is not the root,\nor at least 2 incident active components if the corresponding node is the root.\nSuch a correspondence is proved in Lemma \\ref{lem:lemma2}.\nIn Figure \\ref{fig:bush_form}, assuming $c1$ corresponds to the root of the PQ-tree, $c1$, $b1$, and $c7$\nare orienting.\nThe components $c3$, $b5$, $c5$, and $c8$ are not.\n\\end{definition}\n\n
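The orienting test itself is mechanical. A minimal sketch, with an encoding and function name assumed here for illustration only:\n\n\\begin{verbatim}\n// Whether a cut vertex or block is orienting, given the number of its\n// incident active components and whether the corresponding PQ-tree\n// node is the root.\nbool isOrienting(int activeComponents, bool correspondsToRoot) {\n    return activeComponents >= (correspondsToRoot ? 2 : 3);\n}\n\\end{verbatim}\n\n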
\\begin{definition}\nAdditional operations on a marked bush form. These are used in the proofs of Lemmas \\ref{lem:lemma2} and \n\\ref{lem:lemma3}.\n\\begin{itemize}\n\\item {\\bf \\emph{interlock}} is an operation to fix the orientation of \none block relative to another. As a result, flipping one of them will flip the other.\n\\item {\\bf \\emph{split}} is an operation to change a node in the bush form to a $k_2$, $k_3$, or $C_4$,\nand distribute the incident edges among them. If the split is for a $k_3$ or $C_4$, \na new block will result in the (marked) bush form.\n\\end{itemize}\n\\end{definition}\n\nThe following definitions are for the marked bush forms.\nIf we place a root marker on a cut vertex, or on an edge of the outer face of a block,\nit will split the circular consecutive arrangement around the bush form into one\nof 3 types. In Figures \\ref{fig:marked_bush_form_cut_vertex} and\n\\ref{fig:marked_bush_form_block}, the dots and line segments in navy blue indicate\nthe marked vertex or edge. The dots in light blue are pertinent virtual nodes.\nThe dots in light yellow are non-pertinent virtual nodes.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_01}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_02}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_03}\n  \\end{minipage}\n  \\caption{Bush forms with the root marker on a cut vertex}\\label{fig:marked_bush_form_cut_vertex}\n\\end{figure}\n\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_04}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_05}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}[b]{0.3\\textwidth}\n    \\includegraphics[width=\\textwidth]{marked_bush_form_06}\n  \\end{minipage}\n  \\caption{Bush forms with the root marker on a block}\\label{fig:marked_bush_form_block}\n\\end{figure}\n\n\n\\begin{definition}\nTypes of linear consecutive arrangements on the marked bush form.\n\\begin{itemize}\n\\item {\\it singly partially consecutive}: the pertinent virtual nodes are arranged \nconsecutively at one end of the linear order.\nIn Figure \\ref{fig:marked_bush_form_cut_vertex} left, \n($v1$, $v3$, $v4$, $v14$, $v13$, $v12$, $v11$, $v5$, $v7$, $v6$, $v8$, $v9$, $v10$, $v2$) is such an arrangement.\nIn Figure \\ref{fig:marked_bush_form_block} left, \n($v10$, $v1$, $v2$, $v14$, $v13$, $v12$, $v11$, $v3$, $v4$, $v5$, $v6$, $v7$, $v9$, $v8$) is such an arrangement.\n\n\\item {\\it doubly partially consecutive}: the pertinent virtual nodes are arranged \nin the middle of the linear order.\nIn Figure \\ref{fig:marked_bush_form_cut_vertex} center,\n($v2$, $v14$, $v13$, $v12$, $v11$, $v1$, $v3$, $v4$, $v5$, $v7$, $v6$, $v8$, $v9$, $v10$) is such an arrangement.\nIn Figure \\ref{fig:marked_bush_form_block} center, \n($v10$, $v11$, $v12$, $v13$, $v14$, $v1$, $v2$, $v3$, $v4$, $v5$, $v6$, $v7$, $v8$, $v9$) is such an arrangement.\n\n\\item {\\it complementarily partially consecutive}: the pertinent virtual nodes are arranged at both ends of\nthe linear order with one or more non-pertinent nodes in the middle.\nIn Figure \\ref{fig:marked_bush_form_cut_vertex} right,\n($v1$, $v2$, $v3$, $v4$, $v5$, $v7$, $v6$, $v9$, $v8$, $v10$, $v11$, $v12$, $v13$, $v14$) is such an arrangement.\nIn Figure \\ref{fig:marked_bush_form_block} right, \n($v5$, $v4$, $v3$, $v2$, $v1$, $v14$, $v13$, $v12$, $v11$, $v10$, $v6$, $v7$, $v8$, $v9$) is such an arrangement.\n\n\\end{itemize}\n\\end{definition}\n\n
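These types, together with the full and empty cases, can be recognized directly from the left-to-right sequence of virtual nodes. A minimal sketch, assuming an encoding of the linear order as a string of 'P' (pertinent) and 'N' (non-pertinent) labels chosen here for illustration:\n\n\\begin{verbatim}\n#include <regex>\n#include <string>\n\nstd::string classify(const std::string& s) {\n    using std::regex; using std::regex_match;\n    if (regex_match(s, regex("P+"))) return "full";\n    if (regex_match(s, regex("N+"))) return "empty";\n    if (regex_match(s, regex("P+N+|N+P+")))\n        return "singly partially consecutive";\n    if (regex_match(s, regex("N+P+N+")))\n        return "doubly partially consecutive";\n    if (regex_match(s, regex("P+N+P+")))\n        return "complementarily partially consecutive";\n    return "not consecutive";\n}\n\\end{verbatim}\n\n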
\\begin{definition}\nA {\\bf \\emph{rooted embedded bc-tree}} is the underlying block-cut vertex tree of a marked bush form.\nThe root marker on the bush form gives a natural root-to-descendants orientation \non the underlying block-cut tree, and the embedding in the bush form determines\n an embedding of the block-cut tree and the embeddings of the blocks.\n\\end{definition}\n\n\\begin{definition}\nThe {\\bf \\emph{pertinent root}} of a rooted embedded bc-tree is the highest (the closest to the root) node \nin the minimal connected subtree that spans the nodes for all the pertinent virtual nodes.\n\\end{definition}\n\nThe left-to-right ordering of the children of the root node is determined as follows.\nIf the root is for a cut vertex, then pick an arbitrary incident component of the root of the marked \nbush form, and arrange the incident components in the counter-clockwise ordering.\nIf the root is for a block, then pick the incident component immediately after the marked edge in the \ncounter-clockwise ordering, and arrange the incident components accordingly.\n\n\n\\begin{definition}\n\\emph{Pertinent types} of the nodes in a rooted embedded bc-tree are recursively defined as follows.\n\\begin{itemize}\n\\item {\\bf \\emph{empty}}:\nEach child of the node $n$ is either a non-pertinent virtual edge or an empty node.\n\\item {\\bf \\emph{singly partial}}:\nThe node $n$ meets one of the following conditions.\n\\begin{itemize}\n\\item $n$ is for a cut vertex, there is one singly partial child, and there is no complementarily partial child.\n\n\\item $n$ is for a cut vertex, there is no singly partial child, at least one empty child, at least one full child, and no complementarily partial child.\n\n\\item $n$ is for a block, there is no complementarily partial child, there is at least one full child, and all the full children are consecutively embedded on the outer face on either side of the parent with possibly at most one singly partial child at the boundary between the full children and the empty ones.\n\n\\item $n$ is for a block, there is no complementarily partial child, there is no full child, and there is exactly one singly partial child immediately next to the parent.\n\\end{itemize}\n\\item {\\bf \\emph{doubly partial}}: \nThe node $n$ meets one of the following conditions.\n\\begin{itemize}\n\\item $n$ is the pertinent root, $n$ is for a cut vertex, and there are exactly two singly partial children.\n\\item $n$ is the pertinent root, $n$ is for a block, there is at least one full child, all the full children are consecutively embedded in the middle of\n the outer face away from the parent, and there is possibly a singly partial child at each of the two boundaries \nbetween full and empty children.\n\n\\item $n$ is the pertinent root, $n$ is for a block, there is no full child,\nand there are exactly two consecutively embedded singly partial children.\n\n\\end{itemize}\n\\item {\\bf \\emph{complementarily partial}}: \nThe node $n$ meets one of the following conditions.\n\\begin{itemize}\n\\item $n$ is not the pertinent root, $n$ is for a cut vertex, and there are exactly two singly partial children.\n\n\\item $n$ is for a cut vertex and the children are all full except for one that is complementarily partial.\n\n\\item $n$ is for a block, and the children are arranged as the complement of the arrangement for doubly partial.\n\n\\item $n$ is for a block and the children are all full except for one that is complementarily partial.\n\\end{itemize}\n\\item {\\bf \\emph{full}}: \nAll the children of $n$ are either pertinent virtual edges or full.\n\\end{itemize}\n\\end{definition}\n\nWe have defined all the necessary types and operations.\nNext we prove the equivalence of a PQ-tree to its bush form.\n\n\n\\begin{lemma}\\label{lem:lemma1}\nThe pertinent virtual nodes in a bush form can be arranged consecutively by arbitrary \nreorder and flip operations if and only if there is a sequence of reorder and flip operations \non any rooted embedded bc-tree, from the leaf nodes toward the root, that arranges the pertinent virtual\nedges in one of the linear consecutive arrangements.\nAt each node of the rooted embedded bc-tree, the operation is such that its children are arranged\nin one of the 5 pertinent types.\n\\end{lemma}\n\n\\begin{proof}\nThe details are given 
in Appendix \\ref{App:AppendixB}.\nThe 'only if' part is trivial.\nThe 'if' part is by induction on $|T_{BF}|$ of the rooted embedded bc-tree $T_{BF}$ of a marked bush form $BF$.\n\\end{proof}\n\n\n\nWe prove the equivalence between the marked bush form \\& its underlying rooted embedded bc-tree, \nand the PQ-tree, with Lemmas \\ref{lem:lemma2} \\& \\ref{lem:lemma3} in the same induction step\n on the number of iterations of the algorithm.\n\n\\begin{lemma}\\label{lem:lemma2}\nThe following holds for a PQ-tree and its bush form:\n\\begin{itemize}\n\\item There is a one-to-one mapping between a P-node and an orienting cut vertex in the marked bush form.\n\\item There is a one-to-one mapping between a Q-node and an orienting block in the marked bush form.\n\\end{itemize}\n\\end{lemma}\n\nLemma \\ref{lem:lemma2} gives the location of the marker on the marked bush form. \nIf the root of the PQ-tree is a P-node, then there is a corresponding orienting cut vertex in the bush form,\nand we place the marker on it.\nIf the root is a Q-node, there is a corresponding block $B$ in the bush form.\nWe place the marker on an edge $e$ of $B$ on the outer face.\nThe edge $e$ is determined as follows.\nThe children of the root Q-node in the left-to-right ordering in the PQ-tree correspond\nto a counter-clockwise ordering of the orienting active components around $B$ in the bush form.\nProceeding from the cut vertex that corresponds to the rightmost child of the Q-node \nin the counter-clockwise orientation, find the first edge $e$ on the outer face of $B$.\n\n\\begin{lemma}\\label{lem:lemma3}\n   The pertinent virtual nodes in the marked bush form \n   can be consecutively arranged into one of the 5 types\n   using a series of reorder and flip operations from the leaves to the root\n   if and only if there is an equivalent series of transformations by templates \n   in REDUCE() for the PQ-tree that arranges the corresponding pertinent\n   leaves into the same consecutive type.\n\\end{lemma}\n\n\\begin{proof}\n   The details of the proofs of Lemmas \\ref{lem:lemma2} \\& \\ref{lem:lemma3} are given in Appendix \\ref{App:AppendixC}.\n   The 'only if' part is trivial. \n   The 'if' part is by induction on the number of iterations of the algorithm.\n   Lemma \\ref{lem:lemma3} is proved by examining exhaustively\n   all the cases of operations on the marked bush form and finding equivalent templates of the PQ-tree.\n   Lemma \\ref{lem:lemma2} is proved by examining all the cases for the births and the deaths of\n   the orienting cut vertices and blocks, and finding equivalent P-nodes and Q-nodes on the PQ-tree. 
\n\\end{proof}\n\n\n\\begin{theorem}\n   The pertinent virtual edges in the bush form can be arranged circularly consecutively\n   if and only if REDUCE() can transform the corresponding PQ-tree such that the pertinent\n   leaves are arranged consecutively in one of the 5 pertinent types.\n\\end{theorem}\n\\begin{proof}\n   This is a direct application of Lemmas \\ref{lem:lemma1}, \\ref{lem:lemma2}, and \\ref{lem:lemma3}.\n\\end{proof}\n\nThis concludes the proof of the correctness of the new algorithm.\n\n\\section{Time Complexity}\\label{se:complexity}\n\n\\begin{theorem}\n   The time complexity of the new algorithm stays $O(|N|)$.\n\\end{theorem}\n\\begin{proof}\n   We follow the discussion given in the original \\cite{BL76} around\n   Theorem 5, which depends on Lemmas 2, 3, and 4 in \\cite{BL76}.\n   Lemma 2 in \\cite{BL76} holds for the new algorithm as there is no change in BUBBLE().\n\n   Lemma 3 in \\cite{BL76} holds for the updated REDUCE() with the new templates as follows.\n   It is easy to see that the required time for P8, Q4, and Q5 is on the order\n   of the number of pertinent children. As for P7, just like P5, the empty children\n   can implicitly stay in the original P-node, and it will be put under\n   the new Q-node. This way, P7 runs in the order of the number of\n   pertinent children.\n\n   Lemma 4 holds for the updated REDUCE() as there are only $|S|$ leaves\n   and at most $O(|S|)$ nonunary nodes in the PQ-tree.\n\n   Theorem 5 holds as Lemmas 2, 3, and 4 hold, and also because\n   the new templates P7, P8, Q4, and Q5 do not apply to the unary nodes.\n\n   In Theorem 5, we substitute $m=|E|$, $n=|V|$, and $SIZE(\\mathbb{S})=|E|$, and hence \n   the algorithm runs in $O(|E|+|V|+|E|) = O(|V|)$, assuming $|E| \\le 3|V| -6$.\n\\end{proof}\n\n\\section{Implementation Issues}\\label{se:implementation}\nIn this section we discuss two topics regarding implementation.\nFirst we discuss the changes required to remove a pertinent tree of the complementarily partial\ntype. Second we propose an improvement to Ozawa-type planarization algorithms with a new\ncost calculation technique.\nA reference implementation is found in github:yamanishi/colameco.\n\n\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.7\\textwidth]{pq_tree_com}\n  \\caption{Removal of a Complementarily Partial Pertinent Tree}\n  \\label{fig:pq_tree_com}\n\\end{figure}\n\n\nRemoval of a complementarily partial pertinent subtree and emitting new leaves\nfrom a new P-node differs from the other cases.\nA complementarily partial pertinent subtree originates from a Q-node\nin template P7 or Q4. All the nodes above this lowest Q-node, including the tree root,\nare pertinent nodes.\nThey will be removed and the Q-node will become the new tree root. \nThe P-node where the new leaves are attached will be put under\nthe Q-node on either end. See Figure \\ref{fig:pq_tree_com}.\n\n\nAs shown in \\cite{JUNGER97}, there has not been a correct $O(|N|^2)$ maximal planarization algorithm using the PQ-tree,\nexcept for the $O(|N||E|)$ add-edge-and-test type that calls an $O(|N|)$ planarity test multiple times.\nHowever, such algorithms, e.g., the first stage of \\cite{JTS89} called\n$PLANARIZE(G)$, generate a planar connected spanning subgraph, which can be used as a base graph for further maximal planarization. \nHere we base our discussion on \\cite{JTS89} and propose an improvement with an additional\nnode type and a cost value. \\cite{JTS89} defines 4 node types: W, B, H, and A, which correspond to\nempty, full, singly partial, and doubly partial, respectively. 
We propose a new type C, which\ncorresponds to 'complementarily partial', and its associated cost value $c$ for each node.\nThe value for $c$ in $COMPUTE1()$ in \\cite{JTS89} is calculated as follows.\n\\begin{enumerate}\n\\item $X$ is a pertinent leaf: $c = 0$.\n\\item $X$ is a full node: $c = 0$.\n\\item $X$ is a partial P-node: $c = a$.\n\\item $X$ is a partial Q-node: $c = \\min\\{\\gamma_1,\\gamma_2\\}$.\n\\end{enumerate}\n\\begin{equation}\n\\begin{aligned}\n  \\gamma_1 &= \\sum_{i \\in P(X)} w_i - \\max_{i \\in P(X)}\\{w_i - c_i\\} \\\\\n  \\gamma_2 &= \\sum_{i \\in P(X)} w_i - \\left(\\max_{i \\in P_L(X)}\\{w_i - h_i\\}\n                                       + \\max_{i \\in P_R(X)}\\{w_i - h_i\\}\\right)\n\\end{aligned}\n\\end{equation}\nwhere $P(X)$ denotes the set of pertinent children of $X$, and $P_L(X)$ means the maximal consecutive sequence of pertinent children of $X$ from the\nleft end such that all the nodes except the rightmost one are full. The rightmost one\nmay be either full or singly partial. $P_R(X)$ is defined similarly from the right end.\n\nAfter the cost calculation in the bottom-up manner, the types of the nodes can be determined\ntop-down from the tree root using the new type $C$.\nIn this way, the algorithm is capable of handling the complementarily partial situations, and \nwould be able to reduce the number of edges removed.\n\n
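As a small, purely illustrative numeric example of the cost calculation (the numbers are hypothetical): suppose a partial Q-node $X$ has $\\sum_{i \\in P(X)} w_i = 7$, $\\max_{i \\in P(X)}\\{w_i - c_i\\} = 1$, $\\max_{i \\in P_L(X)}\\{w_i - h_i\\} = 1$, and $\\max_{i \\in P_R(X)}\\{w_i - h_i\\} = 2$. Then $\\gamma_1 = 7 - 1 = 6$ and $\\gamma_2 = 7 - (1 + 2) = 4$, so $c = \\min\\{6, 4\\} = 4$; under these numbers, keeping a run of pertinent children at each end ($\\gamma_2$) is the cheaper option.\n\n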
\\section{Experiments}\\label{se:experiments}\nThe execution time of the new algorithm is reported in Figure \\ref{fig:plot_02}.\nThe X-axis is the time to process one graph in microseconds, measured with the clock() function of the C++ standard library.\nThe Y-axis indicates the number of vertices in the given biconnected planar graph.\nThe test program was run on a Mac with a 2.8 GHz Core i7, \nand the program was compiled with Apple LLVM 8.0.0.\nThe plot shows the linearity of the algorithm.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.8\\textwidth]{plot_performance_01}\n  \\caption{Processing Time of the New Algorithm}\n  \\label{fig:plot_02}\n\\end{figure}\n\n\nFor all the test cases the new algorithm detected planarity.\nHere the 'random'ness is not rigorous in the mathematical sense:\nit is handled by the pseudo-random number generators std::rand() and std::srand() in C++.\nThe code and the data used for these experiments are found in github:yamanishi/colameco.\n\n\n\n\n\\section{Conclusion}\\label{se:conclusion}\nWe have shown an enhancement to the original PQ-tree planarity algorithm proposed by \\cite{BL76} \nand proved its correctness. The time complexity stays in $O(|N|)$.\nThe enhancement applies not only to the planarity test for graphs, but also to anything \ninvolving circular consecutive arrangements. As far as the author knows, there seem to be\nno applications other than planarity testing.\n\n\\section{Acknowledgements}\nThe author wishes to thank whoever has encouraged and supported him in writing this article.\n\n\\clearpage\n\n\\bibliographystyle{abbrvurl}\n\\bibliography{pq_tree_enhancement}\n\n\\clearpage\n\n% \\appendix\n\\begin{appendices}\n\\section{A Planar Biconnected Graph and an ST-Numbering}\\label{App:AppendixA}\nFollowing is the planar biconnected graph and the \nst-numbering used to produce the PQ-tree and the bush form in Figures \n \\ref{fig:pq_tree_01} and \\ref{fig:bush_form_01}.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\includegraphics[width=0.5\\textwidth]{sample_graph}\n  \\caption{Biconnected Planar Graph Used for Figures \\ref{fig:pq_tree_01} and \\ref{fig:bush_form_01}}\n  \\label{fig:sample_graph}\n\\end{figure}\n\nThe st-numbering:\n\n$(8, 13, 7, 14, 19, 20, 25, 24, 23, 22, 21, 16, 11, 17, 15, 9, 10, 5, 4, 3, 2, 1, 6, 12, 18)$\n\n\n\\section{Proof of Lemma \\ref{lem:lemma1}}\\label{App:AppendixB}\n\n\nThe 'only if' part is trivial. We prove the 'if' part by induction on\n $|T_{BF}|$\nof the rooted embedded bc-tree $T_{BF}$ of a marked bush form $BF$.\nAssume that after some arbitrary reorder and flip operations, the pertinent virtual edges have been arranged\nconsecutively around the bush form.\nThis is equivalent to arranging the pertinent virtual edges into one of the 5 types of \nlinear consecutive arrangements in a marked bush form $BF$.\nIf $|T_{BF}| = 1$, the lemma trivially holds as the bush form consists of one pertinent or non-pertinent virtual node.\nIf $|T_{BF}| = 2$, the lemma trivially holds as the bush form consists of a $k_2$ whose incident nodes are a cut vertex and a virtual node. Technically the node in the $k_2$ is not a cut vertex, but we consider it one for the sake of argument.\n\n\nAssume $|T_{BF}| \\ge 3$.\nWe split $T_{BF}$ at the root node by splitting one connected component $C$ from the rest of $T_{BF}$.\nIf the marker in $BF$ is on a cut vertex $c$, then we split off one connected active component $C$ of $T_{BF}$.\nIf the marker is on an edge $e$ of a block $B$ in $BF$, then we pick the closest cut vertex $c$ \nafter $e$ in the counter-clockwise ordering around $B$ \nsuch that the maximal connected component after\nremoving the block from $c$ is still active.\nLet $n_c$ be the corresponding node of $c$ in $T_{BF}$.\n$C$ is the maximal connected component of $T_{BF} \\setminus B$ that contains $n_c$.\n\nNow $T_{BF}$ is decomposed into two components $D=T_{BF} \\setminus C$ and $C$ at $n_c$, and\n $BF$ is decomposed into two parts $BF_{D}$ and $BF_{C}$ at $c$. 
\n$BF_{D}$ and $BF_{C}$ can be considered two marked bush forms with the marker placed on $c$ in $BF_{C}$.\nWe examine each of the 5 types of linear consecutive arrangements around $BF$, \nand find all the possible combinations of pertinent types of $BF_{D}$ and $BF_{C}$.\nWe observe that Table \\ref{tab:table1} enumerates all the possible combinations.\nWe observe that for each linear consecutive arrangement around $BF$, any other\narrangement of $BF_{D}$ or $BF_{C}$ not shown in Table \\ref{tab:table1} \nwould lead to a non-consecutive arrangement of the pertinent virtual nodes around $BF$.\n\nBy the induction hypothesis, the condition on the pertinent types of the children holds \nfor both $BF_{D}$ and $BF_{C}$.\nFor each of the 14 cases in Table \\ref{tab:table1}, if we re-compose $BF_{D}$ and $BF_{C}$ at $c$ into $BF$, \nthen we can see the condition on the corresponding pertinent type for the case holds for $BF$.\nFor example, if $BF$ is to be singly partial, then there are 6 permissible combinations of the\npertinent types of $BF_{D}$ and $BF_{C}$. We observe that any other combination would lead to a different\npertinent type of $BF$ or to a non-planar arrangement.\nIf $BF_{D}$ ends up full, and $BF_{C}$ singly partial, after some operations from the leaves to the root\nby the induction hypothesis,\nthen it is easy to see that $BF$ will end up singly partial as desired, possibly after flipping\n$BF_{C}$ and reordering all the incident components around $c$ or $B$.\nThe other cases can be proved in the same manner.\n\n\n\\begin{table}[h]\n\\begin {tabular}{|l|l|l|}\n\\hhline{|=|=|=|}\n{\\bf Type on $BF$} & {\\bf Type on $BF_{D}$} & {\\bf Type on $BF_{C}$} \\\\\n\\hhline{|=|=|=|}\nEmpty                   & Empty                   & Empty \\\\\n\\hhline{|=|=|=|}\nSingly Partial          & Empty                   & Singly Partial \\\\\n\\hline\nSingly Partial          & Empty                   & Full           \\\\\n\\hline\nSingly Partial          & Singly Partial \\textsuperscript{*1} & Empty     \\\\\n\\hline\nSingly Partial          & Singly Partial \\textsuperscript{*2} & Full  \\\\\n\\hline\nSingly Partial          & Full                    & Empty          \\\\\n\\hline\nSingly Partial          & Full                    & Singly Partial \\\\\n\\hhline{|=|=|=|}\nDoubly Partial          & Empty                   & Doubly Partial \\\\\n\\hline\nDoubly Partial          & Singly Partial \\textsuperscript{*3} & Empty \\\\\n\\hline\nDoubly Partial          & Singly Partial \\textsuperscript{*2} & Singly Partial \\\\\n\\hline\nDoubly Partial          & Doubly Partial          & Empty          \\\\\n\\hhline{|=|=|=|}\nComplementarily Partial & Singly Partial \\textsuperscript{*4} & Singly Partial \\\\\n\\hline\nComplementarily Partial & Full                    & Complementarily Partial \\\\\n\\hhline{|=|=|=|}\nFull                    & Full                    & Full \\\\\n\\hhline{|=|=|=|}\n\\multicolumn{3}{p{12cm}}{\\textsuperscript{*1}\\footnotesize{If the root is a block, then the pertinent virtual edges are on the $e$ side, not on the $c$ side.}}\\\\\n\\multicolumn{3}{p{12cm}}{\\textsuperscript{*2}\\footnotesize{If the root is a block, then the pertinent virtual edges are on the $c$ side, not on the $e$ side.}}\\\\\n\\multicolumn{3}{p{12cm}}{\\textsuperscript{*3}\\footnotesize{If the root is a block, then the pertinent virtual edges are on $c$. 
\nIf the root is a cut vertex, this row does not apply.}}\\\\\n\\multicolumn{3}{p{12cm}}{\\textsuperscript{*4}\\footnotesize{If the root is a block, then the pertinent virtual edges are on $e$. If the root is a cut vertex, this row does not apply.}}\\\\\n\\end {tabular}\n\\caption{\\textbf{All the arrangements of pertinent types of $BF_{D}$ and $BF_{C}$.}}\n\\label{tab:table1}\n\n\\end{table}\n\n\\section{Proof of Lemma \\ref{lem:lemma2} and \\ref{lem:lemma3}}\\label{App:AppendixC}\n\n\n   At the first iteration, Lemmas \\ref{lem:lemma2} and \\ref{lem:lemma3} trivially hold.\nThe only operation is an attach operation\n   for the initial virtual edges to an initial cut vertex $c$ in the bush form,\n   and the corresponding operation on the PQ-tree is the creation of a new P-node and attaching pertinent\n   leaves to it. The root marker will be placed on $c$.\n\n   By the induction hypothesis, Lemmas \\ref{lem:lemma2} and \\ref{lem:lemma3} hold up to the $i$-th iteration.\n   By Lemmas \\ref{lem:lemma1} and \\ref{lem:lemma2},\n   there is a marked bush form $BF$ whose marker is placed on a cut vertex $c$ if\n   the root of the PQ-tree is a P-node, or on an edge of a block $B$ if the root is a Q-node.\n   Assume we have arranged the pertinent nodes for the $(i+1)$-th iteration by an arbitrary set\n   of reorder and flip operations on the bush form.\n   Then by Lemma \\ref{lem:lemma1}, \n   we can arrange the child components of each node of the rooted embedded bc-tree $T_{BF}$\n   into one of the 5 pertinent types from the descendants toward the root in $T_{BF}$.\n\n   Now we examine each pertinent type for a cut vertex and a block in a marked bush form\n   for their equivalence to their\n   corresponding nodes in the PQ-tree. We use the two additional operations introduced earlier on the marked bush form,\n   interlock and split.\n   Table \\ref{tab:table2} shows all the operations on an orienting cut vertex \n   and their equivalent templates on the PQ-tree.\n   Table \\ref{tab:table3} is for an orienting block.\n\n   The following explains the symbols used in those tables.\n   \\begin {itemize}\n   \\item a square is a component\n   \\item a circle is a cut vertex\n   \\item sky blue indicates a pertinent component.\n   \\item yellow indicates a non-pertinent component.\n   \\item a square enclosing '$p$' is a parent block component. \nThis does not apply if the cut vertex is the root.\n   \\item a circle enclosing '$p$' is a parent cut vertex component.\nThis does not apply if the block is the tree root.\n   \\item a square in sky blue is a full component.\n   \\item a square in yellow is a non-pertinent component.\n   \\item a rectangle enclosing yellow and sky blue squares is a singly partial component.\n   \\item a circle in sky blue with a yellow wedge is a complementarily partial component.\n   \\item a polygon is a block.\n   \\item a grey triangle or a quadrilateral is a block induced by a split operation at a cut vertex.\n   \\item a dashed broken line indicates the interlocking of two blocks.\n   \\end {itemize}\n\n   We observe the rows of those tables cover exhaustively all the cases of the 4 pertinent types. 
(Empty type\n   is not considered.)\n   Then we can prove the equivalence for each case with the aid of the interlock and split operations\n   as well as reorder and flip.\n   For example, take the case of 'Singly Partial 2' for an orienting cut vertex $c$.\n   Originally, $c$\n   has 3 full child components, 3 non-pertinent child components, and a singly partial child component.\n   To make it singly partial, we first reorder the incident ordering of those children so that\n   the full children are arranged consecutively on one side, the non-pertinent children on the other,\n   and the singly partial child between those two, oriented such that the full side of the singly partial\n   child is on the full side of the children. Please note it is not a circular ordering due to the presence\n   of the parent component.\n   To make those changes permanent, we split $c$\n   into a triangle $k_3$ or a quadrilateral $C_4$ and fix the orientation of the singly partial child\n   relative to them with the interlock operation. The 3rd column in Table \\ref{tab:table2} is for the case\n   where $c$ is a root component, and the 5th column is for the case where it is not.\n   We can see those are in fact equivalent to templates P4 and P5, respectively, for the case in which\n   there are both full and non-pertinent components.\n   In the 3rd column, the interlocked $k_3$ and the singly partial child together correspond to \n   the resultant Q-node in P4 with the orientation of the children preserved. If there is more than 1\n   full child component, the full vertex on the $k_3$ becomes a new orienting cut vertex, which corresponds\n   to the new full P-node under the Q-node in P4.  Similarly, in the 5th column, the interlocked $C_4$\n   and the singly partial child together correspond to the resultant Q-node in P5.\n   The other cases can be examined in the same way.\n   We see that in fact, those tables cover all the cases for all the templates for the PQ-tree.\n   In the case of 'Doubly Partial 4', there are no corresponding PQ-tree templates, as those nodes are above the \n   pertinent root.\n   This concludes the proof of Lemma \\ref{lem:lemma3}.\n\n   As for Lemma \\ref{lem:lemma2}, we prove the equivalence in the birth and the death of the orienting cut vertex\n   and the corresponding P-node, and the equivalence of an orienting block to a Q-node.\n\n   An orienting cut vertex is born only at the following location.\n   \\begin{itemize}\n   \\item an attach operation with more than 1 virtual edge to be attached.\n   \\end{itemize}\n\n   Please note that a split operation can produce a $k_3$ or $C_4$, which means 2 or 3 new cut vertices\n   are created. However, the one incident to the parent is not orienting as it has just 2 active components.\n   The one incident to the interlocked singly partial child is not orienting either.\n   The one on the full side is a temporary cut vertex which will be absorbed inside a newly created \n   block on a merge operation later. The remaining one on the non-pertinent side is considered to have been\n   transferred from the original cut vertex. So, effectively, a split operation does not create a new orienting cut vertex.\n\n   An orienting cut vertex will die at the following locations.\n   \\begin{itemize}\n   \\item Becoming full. 
In this case the children of the cut vertex will be merged into a new cut vertex,\n   and eventually it will have two active components and will become non-orienting.\n\n   \\item Becoming complementarily partial, when it has a complementarily partial child.\n   In this case all the full children together with the parent will be merged into a new cut vertex,\n   and eventually it will have two active components and will become non-orienting.\n\n   \\item Singly Partial 1, non-pertinent root, with one non-pertinent child. In this case the new cut vertex\n   on the non-pertinent side of the $k_3$ will have only one child, and it has two active components and it \n   will become non-orienting.\n\n   \\item Singly Partial 3, non-pertinent root, with one non-pertinent child. In this case the new cut vertex\n   on the non-pertinent side of the $k_3$ will have only one child, and it has two active components and it \n   will become non-orienting.\n\n   \\item Singly Partial 4, both for the pertinent-root and the non-pertinent-root cases, with no non-pertinent child. In this case, the $k_3$ created by the split\n   operation does not produce any cut vertex for non-pertinent children and hence the cut vertex vanishes.\n\n   \\item Doubly Partial or Complementarily Partial 3, both for the root and the non-root cases, with no non-pertinent child. \nIn this case, the $C_4$ created by the split\n   operation does not produce any cut vertex for non-pertinent children and hence the cut vertex vanishes.\n   \\end{itemize}\n\n   An orienting block is born under the following condition.\n   \\begin{itemize}\n   \\item a $k_3$ or $C_4$ is generated by a split operation\n   \\end{itemize}\n\n   An orienting block dies at the following locations.\n   \\begin{itemize}\n   \\item Where a singly partial child is interlocked to the parent. In this case there is a singly\n   partial block in the component specified by the singly partial child and the parent.\n   \\item Becoming full. In this case the children of the block will be merged into a new cut vertex,\n   and eventually it will have two active components and will become non-orienting.\n\n   \\item Becoming complementarily partial, when it has a complementarily partial child.\n   In this case all the full children together with the parent will be merged into a new cut vertex,\n   and eventually it will have two active components and will become non-orienting.\n   \\end{itemize}\n\n   For each of the birth and death cases above, there is a corresponding creation or destruction of \n   a P-node or Q-node. For example, take the 3rd death case of an orienting cut vertex, which is singly\n   partial, non-root, and has one non-pertinent child. This corresponds to template P3.\n   Since there is one non-pertinent child, it will be directly placed as one of the two children of the Q-node.\n   Eventually, the original P-node is considered removed from the PQ-tree.\n   We can show the equivalence for the other cases in the same way. We can also show those cases cover all the locations\n   in the templates, and in the attachment and removal of the pertinent subtree, in which new P-nodes and Q-nodes are created\n   and destroyed. This concludes the proof of Lemma \\ref{lem:lemma2}.\n\n\\begin{table}[h]\n  \\def\\arraystretch{1}\n  \\centering\n  \\begin{tabular}\n      {|C{0.22\\textwidth}|C{0.14\\textwidth}|C{0.14\\textwidth}|C{0.05\\textwidth}|C{0.14\\textwidth}|C{0.05\\textwidth}|}\n      \\hline \n      Pertinent type & Original State & Operation for Pert Root & PQ Op. 
& Operation for Non-Pert Root & PQ Op.\\\\\n      \\hline\n      Full&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_01_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_01_root} &\n      P1 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_01_nonroot} &\n      P1 \\\\\n      \\hline\n      Empty &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_11_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_11_root} &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_11_nonroot} &\n      - \\\\\n      \\hline\n      Singly Partial 1&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_02_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_02_root} &\n      P2 & \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_02_nonroot} &\n      P3 \\\\\n      \\hline\n      Singly Partial 2& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_03_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_03_root} &\n      P4 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_03_nonroot} &\n      P5 \\\\\n      \\hline\n      Singly Partial 3& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_04_before} &\n      N/A&\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_04_nonroot} &\n      P5 \\\\\n      \\hline\n      Singly Partial 4& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_05_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_05_root} &\n      P4 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_05_nonroot} &\n      P5 \\\\\n      \\hline\n      Doubly Partial or Complementarily Partial 1&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_06_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_06_root} &\n      P6 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_06_nonroot} &\n      P7 \\\\\n      \\hline\n      Doubly Partial or Complementarily Partial 2&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_07_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_07_root} &\n      P6 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_07_nonroot} &\n      P7 \\\\\n      \\hline\n      Doubly Partial or Complementarily Partial 3&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_08_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_08_root} &\n      P6 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_08_nonroot} &\n      P7 \\\\\n      \\hline\n      Doubly Partial 4&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_10_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_10_root} &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_10_nonroot} &\n      - \\\\\n      \\hline\n      Complementarily Partial&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_09_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_09_root} &\n      P8 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_cv_09_nonroot} &\n      P8 \\\\\n      \\hline\n\n\n\n \\end{tabular}\n \\caption{Operations on an orienting cut vertex and equivalent PQ-tree templates}\n \\label{tab:table2}\n\\end{table}\n\n\n\\begin{table}[h]\n  \\def\\arraystretch{1}\n  \\centering\n  \\begin{tabular}\n      
{|C{0.22\\textwidth}|C{0.14\\textwidth}|C{0.14\\textwidth}|C{0.05\\textwidth}|C{0.14\\textwidth}|C{0.05\\textwidth}|}\n      \\hline \n      Pertinent type & Original State & Operation for Pert Root & PQ Op. & Operation for Non-Pert Root & PQ Op.\\\\\n      \\hline\n      Full&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_01_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_01_root} &\n      Q1 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_01_nonroot} &\n      Q1 \\\\\n      \\hline\n      Empty&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_12_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_12_root} &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_12_nonroot} &\n      - \\\\\n      \\hline\n      Singly Partial 1&\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_02_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_02_root} &\n      Q2 & \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_02_nonroot} &\n      Q2 \\\\\n      \\hline\n      Singly Partial 2& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_03_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_03_root} &\n      Q2 &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_03_nonroot} &\n      Q2 \\\\\n      \\hline\n      Doubly Partial 1& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_04_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_04_root} &\n      Q3 &\n      N/A &\n      - \\\\\n      \\hline\n      Doubly Partial 2& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_05_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_05_root} &\n      Q3 &\n      N/A &\n      - \\\\\n      \\hline\n      Doubly Partial 3& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_06_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_06_root} &\n      Q3 &\n      N/A &\n      - \\\\\n      \\hline\n      Doubly Partial 4& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_11_before} &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_11_nonroot} &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_11_root} &\n      - \\\\\n      \\hline\n      Complementarily Partial 1& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_07_before} &\n      N/A &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_07_nonroot} &\n      Q4 \\\\\n      \\hline\n      Complementarily Partial 2& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_08_before} &\n      N/A &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_08_nonroot} &\n      Q4 \\\\\n      \\hline\n      Complementarily Partial 3& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_09_before} &\n      N/A &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_09_nonroot} &\n      Q4 \\\\\n      \\hline\n      Complementarily Partial 4& \n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_10_before} &\n      N/A &\n      - &\n      \\includegraphics[width=0.1\\textwidth]{bc_transform_bl_10_nonroot} &\n      Q5 \\\\\n      \\hline\n\n \\end{tabular}\n \\caption{Operations on an orienting block and equivalent PQ-tree templates}\n 
\\label{tab:table3}\n\\end{table}\n\n\n\\end{appendices}\n\n\\end{document}\n\n\n\n\n", "meta": {"hexsha": "e684a69fe4c2bd8d9c3c937a7645e53933f3e8ed", "size": 60781, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex", "max_stars_repo_name": "ShoYamanishi/wailea", "max_stars_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-05-15T10:07:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-02T18:38:35.000Z", "max_issues_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex", "max_issues_repo_name": "ShoYamanishi/wailea", "max_issues_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-06-18T17:31:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-18T17:31:13.000Z", "max_forks_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex", "max_forks_repo_name": "ShoYamanishi/wailea", "max_forks_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-10T21:13:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-10T21:13:51.000Z", "avg_line_length": 45.7688253012, "max_line_length": 303, "alphanum_fraction": 0.7370724404, "num_tokens": 16873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.743168019989179, "lm_q1q2_score": 0.5631685424591194}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[english]{babel}\n\\usepackage[utf8x]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\n\\title{MATH 542 Homework 6}\n\\author{Saket Choudhary\\\\skchoudh@usc.edu}\n\n\\begin{document}\n\\maketitle \n\\section*{Problem 1}\n\\subsection*{Problem 1a}\n\nTo find: $f_{y_2,y_4}(y_1,y_3) = \\int_{-\\infty}^{\\infty}f(y_1,y_2,y_2,y_4)dy_2dy_4$\nFor marginalising a MVN, we simply drop the irrelevant terms(terms with respect to which marginalisation is performed, as they integrate to 1)\n\nJoint Marginal distribution of $y_1,y_3$: $f_{y_2,y_4}(y_1,y_3) \\sim N(\\begin{pmatrix}1 \\\\ 3\\end{pmatrix},\\begin{pmatrix}4 & -1 \\\\ -1 &5\\end{pmatrix})$\n\n\\subsection*{Problem 1b}\n$f_{y_1,y_3,y_4}(y_2) \\sim N(3, 5)$\n\n\\subsection*{Problem 1c}\n$z=y_1+2y_2-y_3+3y_4$ \n\nThus, $z=aY$ where $a = \\begin{pmatrix}1 & 2 & -1 & 3 \\end{pmatrix}$ and $Y=\\begin{pmatrix}y_1 & y_2 & y_3 & y_4\\end{pmatrix}'$\n\nThus, $Ez = aE[y] = -4$\n\n\\begin{align*}\nVar(z) &= aVar(y)a'\\\\\n& = 79 \\text{ using `R`}\n\\end{align*}\n\n\\subsection*{Problem 1d}\n\n$z_1=a_1y$ and $z_2 = a_2y$ where $a_1 = \\begin{pmatrix}1 & 1 & -1 & -1 \\end{pmatrix}$\nand $a_2 = \\begin{pmatrix}-3 & 1 & 2 & -2 \\end{pmatrix}$\n\nThen $f_{z_1,z_2} \\sim N(\\begin{pmatrix} \\mu_1 \\\\ \\mu_2 \\end{pmatrix}, S)$\n\n$\\mu_1 = a_1'E[y] = 2$\n\n$\\mu_2 = a_2'E[y] = 9$\n\n$\\Sigma^z_{11} = a_1\\Sigma a_1^T = 11$\n\n$\\Sigma^z_{22} = a_2\\Sigma a_2^T = 154$\n\n$\\Sigma^z_{12} =  \\Sigma^z_{21} =  a_2\\Sigma a_2^T = -6$\n\n\nThus, $Z = \\begin{pmatrix} z_1 \\\\ z_2 \\end{pmatrix}\n\\sim N(\\begin{pmatrix} 2\\\\ 9\\end{pmatrix}, \\begin{pmatrix} 11 & -6\\\\ \n-6 & 154\\end{pmatrix})$ \n\n\\subsection*{Problem 1e}\n\n$\\mu' = \\mu_1 + \\Sigma_{12}\\Sigma_{22}^{-1}(x_2-\\mu_2)$\n\n$Cov = \\Sigma_{11} - \\Sigma_{12}\\Sigma_{22}^{-1}\\Sigma_{21}$\n\n\n\n$\\mu' = \\begin{pmatrix}1 \\\\ 2 \\end{pmatrix} + \\begin{pmatrix} -1 & 2\\\\ 3 & -2 \\end{pmatrix} \\begin{pmatrix} 5 & -4\\\\ -4 & 4\\end{pmatrix}^{-1} \\begin{pmatrix}y_3-3\\\\ y_4+2\\end{pmatrix} = \\begin{pmatrix}1 \\\\ 2 \\end{pmatrix} + \\begin{pmatrix} 1&1.5\\\\ 1 & 0.5 \\end{pmatrix} \\begin{pmatrix}y_3-3\\\\ y_4+2\\end{pmatrix}$\n\n$Cov' = \\begin{pmatrix} 4 & 2\\\\ 2 & 6 \\end{pmatrix}-\\begin{pmatrix} -1 & 2\\\\ 3 & -2 \\end{pmatrix} \\begin{pmatrix} 5 & -4\\\\ -4 & 4\\end{pmatrix}^{-1}\\begin{pmatrix}-1 & 3\\\\ 2 & -2 \\end{pmatrix}=\n\\begin{pmatrix}2 & 2\\\\ 2& 4 \\end{pmatrix}$\n\n $f(y_1, y_2 |y_3, y_4) = N(\\mu', Cov)$\n \n \n \n\\subsection*{Problem 1f}\n\n$\\mu' =  \\begin{pmatrix} 1 \\\\3 \\end{pmatrix} + \\begin{pmatrix} -1 & 2\\\\ 5 &-4 \\end{pmatrix}\\begin{pmatrix} 3 & -2 \\\\-4 & 4\\end{pmatrix}^{-1}\\begin{pmatrix}y_2-2\\\\ y_4+2\\end{pmatrix} = \\begin{pmatrix} 1 \\\\3 \\end{pmatrix} + \\begin{pmatrix} 1 & 1 \\\\ 1 & -0.5\\end{pmatrix}\\begin{pmatrix}y_2-2\\\\ y_4+2\\end{pmatrix}$\n\n$Cov' = \\begin{pmatrix} 4 & 2\\\\ -1 & 3 \\end{pmatrix} - \\begin{pmatrix} -1 & 2\\\\ 5 &-4 \\end{pmatrix}\\begin{pmatrix} 3 & -2 \\\\-4 & 4\\end{pmatrix}^{-1} \\begin{pmatrix} 2 & 6\\\\ 2 & -2\\end{pmatrix} $\n\n\n $f(y_1, y_2 |y_3, y_4) = N(\\mu', Cov)$\n\n\\subsection*{Problem 1g}\n$Cov(y_1,y_3) = -1 $\n\n\\subsection*{Problem 1h}\n\n$\\mu' = 1 - \\begin{pmatrix} 2 & -1 & 2 \\end{pmatrix} \\begin{pmatrix} 6 & 3 & -2\\\\ 3 & 5 & -4 \\\\ -2 & -4 & 4 \\end{pmatrix}^{-1} \\begin{pmatrix} y_2-2\\\\ y_3-3\\\\ y_4+2 \\end{pmatrix} $\n\n$Cov' = 4 - \\begin{pmatrix} 2 & -1 & 2 \\end{pmatrix} \\begin{pmatrix} 6 & 3 & -2\\\\ 3 & 5 & 
-4 \\\\ -2 & -4 & 4 \\end{pmatrix}^{-1} \\begin{pmatrix} 2 \\\\ -1 \\\\ 2 \\end{pmatrix}$\n\n $f(y_1, y_2 |y_3, y_4) = N(\\mu', Cov)$\n\n\n\n\n\n\n\\section*{Problem 2}\nSince $\\sigma_{12} = \\sigma_{13} = \\sigma_{14} = 0$ and $y$ follows a MVN, by Theorem 2.2, $y_1$ is pairwise independent with $y_2,y_3,y_4$\n\n\\section*{Problem 3}\n\n\\begin{align*}\ny_2 - \\Sigma_{21}\\Sigma_{11}^{-1}y_1 &= \\begin{pmatrix} 0_{n-r \\times r} & I_{n-r \\times n-r} \\end{pmatrix} \\begin{pmatrix}y_1 \\\\ y_2\\end{pmatrix} + \\begin{pmatrix} -\\Sigma_{21}\\Sigma_{11}^{-1} & 0_{n \\times r}\\end{pmatrix}\\begin{pmatrix}y_1 \\\\ y_2\\end{pmatrix} \\\\\nE(y_2 - \\Sigma_{21}\\Sigma_{11}^{-1}y_1) &= \\begin{pmatrix} 0_{n-r \\times r} & I_{n-r \\times n-r} \\end{pmatrix} \\begin{pmatrix}Ey_1\\\\ Ey_2 \\end{pmatrix} + \\begin{pmatrix} -\\Sigma_{21}\\Sigma_{11}^{-1} & 0_{n \\times r}\\end{pmatrix}\\begin{pmatrix}Ey_1 \\\\ Ey_2\\end{pmatrix}\\\\\n&= \\mu_2 - \\Sigma_{21}\\Sigma_{11}^{-1}\\mu_1\n\\end{align*}\n\n\\begin{align*}\nCov(y_2 - \\Sigma_{21}\\Sigma_{11}^{-1}y_1 ) &= Cov(\\begin{pmatrix} 0_{n-r \\times r} & I_{n-r \\times n-r} \\end{pmatrix} \\begin{pmatrix}y_1 \\\\ y_2\\end{pmatrix} + \\begin{pmatrix} -\\Sigma_{21}\\Sigma_{11}^{-1} & 0_{n \\times r}\\end{pmatrix}\\begin{pmatrix}y_1 \\\\ y_2\\end{pmatrix})\\\\\n&= Cov(aY+bY)\\\\\n&= (a+b)Var(Y)(a+b)^T\\\\\n&= \\begin{pmatrix} -\\Sigma_{21}\\Sigma_{11}^{-1} & I \\end{pmatrix} \\begin{pmatrix} \n\\Sigma_{11} & \\Sigma_{12}\\\\\n\\Sigma_{21} & \\Sigma_{22}\n\\end{pmatrix}\\begin{pmatrix} -(\\Sigma_{11}^{-1})^T\\Sigma_{21}^T \\\\ I^T \\end{pmatrix}\\\\\n&= \\begin{pmatrix}-\\Sigma_{21}+\\Sigma_{21} & -\\Sigma_{21}\\Sigma_{11}^{-1}\\Sigma_{12}+\\Sigma_{22} \\end{pmatrix} \\begin{pmatrix}-(\\Sigma_{21}\\Sigma_{11}^{-1})^T \\\\ I^T\\end{pmatrix}\\\\\n&= \\Sigma_{22}-\\Sigma_{21}\\Sigma_{11}^{-1}\\Sigma_{12}\n\\end{align*} \n\n\n\\section*{Problem 4}\n\\subsection*{Problem 4a}\n\nGiven $t=\\frac{z}{\\sqrt{\\frac{u}{\\rho}}} \\sim t(\\rho)$ we know the following facts:\n\n\\begin{itemize}\n\\item $Z \\sim N(0,1)$\n\\item $u \\sim \\chi^2_\\rho$\n\\item $Z$ and $u$ are independent\n\\end{itemize}\n\n$t^2 = \\frac{z^2}{{\\frac{u}{\\rho}}} \\sim \\frac{\\chi_1^2}{\\chi_\\rho^2} \\sim F(1,\\rho)$\n\n\\subsection*{Problem 5.3}\nWe consider first the following vector:\n$Z=\\begin{pmatrix} \\bar{Y} & Y_1-Y_2 & Y_2-Y_3 \\dots Y_n-Y_{n-1}\\end{pmatrix}'$\n\nLet's call $X = \\begin{pmatrix} Y_1-Y_2 & Y_2-Y_3 \\dots Y_{n-1}-Y_{n}\\end{pmatrix}'$ so that it allows us to write $\\sum_{i=1}^{n-1}(Y_i-Y_{i+1})^2=X'X$ \n\nNow $Z=\\begin{pmatrix} \\bar{Y} & X \\end{pmatrix}$\n\n\n\\begin{align*}\n\\begin{pmatrix} \\bar{Y} & Y_1-Y_2 & Y_2-Y_3 \\dots Y_{n-1}-Y_{n}\\end{pmatrix}' &= \\begin{pmatrix}\n1/n & 1/n & 1/n & \\dots & 1/n\\\\\n1 & -1 &  0 & \\dots & 0\\\\\n0 & 1 & -1 & \\dots & 0\\\\\n\\vdots \\\\\n0 & 0 & 0 & \\dots & -1\\\\\n\\end{pmatrix} \\begin{pmatrix} Y_1 & Y_2 & Y_3 &\\dots Y_n \\end{pmatrix}'\\\\\n%&= (I-\\frac{1}{n}1_n1_n')\\begin{pmatrix} Y_1 & Y_2 & Y_3 &\\dots Y_n \\end{pmatrix}'\nZ &= AY\n\\end{align*}\n\nAlso $Z \\sim N(A\\mu, A\\Sigma A')$\n\n$A\\mu = \\begin{pmatrix} \\mu  & 0 & 0 \\dots 0 \\end{pmatrix}$\n\n$A\\Sigma A' = AA'$ since $\\Sigma = I$\n\n$AA'  = \\begin{pmatrix} 1/n & 0 & 0 & \\dots & 0\\\\ \n0 & 2 & -1  & \\dots & 0\\\\\n0 & -1 & 2 & -1 & \\dots 0\\\\\n\\vdots \n0 & 0 & 0 & 0 & \\dots 2\n\\end{pmatrix}$\n\nThus, $Z=\\begin{pmatrix}\\bar{Y} & X_{n \\times 1}\\end{pmatrix}'$ is a MVN such that $\\bar{Y}$ and $X$ are independent (since the 
covariance is 0).\n\nWe also know that $\\sum_{i=1}^{n-1}(Y_i-Y_{i+1})^2=X'X = h(X)$.\n\nSince functions of independent random variables are also independent, $\\bar{Y}$ and $h(X) = X'X$ are independent.\n\n\\subsection*{Problem 5.11}\n\n$Z = \\begin{pmatrix} \\phi & 1 & 0 & 0 & \\dots & 0\\\\ 0 & \\phi & 1 & 0 & \\dots & 0\\\\ \n0 & 0 & \\phi & 1 &  \\dots & 0\\\\ \n\\vdots\\\\\n0 & 0 & 0 & 0 & \\dots & 1\\\\ \n\\end{pmatrix} \\begin{pmatrix} y_1\\\\ y_2 \\\\ y_3\\\\ \\vdots\\\\ y_n\\end{pmatrix} = AY$\n\nSince $Y \\sim N(0, \\sigma^2I)$, it follows that $Z \\sim N(0, \\sigma^2 AA^T)$,\n\nwhere $AA^T = \\begin{pmatrix}\\phi^2+1 & \\phi & 0 & 0 & \\dots & 0\\\\ \n\\phi & 1+\\phi^2 & \\phi & 0 & \\dots & 0\\\\\n0 & \\phi & 1+\\phi^2 & \\phi & \\dots & 0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n0 & 0 & 0 & \\dots & 1+\\phi^2 & \\phi\n\\end{pmatrix}$\n\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "78f75968da641ebc17d065aeb43295d395efa751", "size": 7292, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_stars_repo_name": "NeveIsa/hatex", "max_stars_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2015-09-10T02:45:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-10T03:20:47.000Z", "max_issues_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_issues_repo_name": "NeveIsa/hatex", "max_issues_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-09-16T23:11:00.000Z", "max_issues_repo_issues_event_max_datetime": "2015-09-23T21:21:52.000Z", "max_forks_repo_path": "2016_Spring/MATH-542/HW06/hw06.tex", "max_forks_repo_name": "saketkc/hatex", "max_forks_repo_head_hexsha": "c5cfa2410d47c7e43a476a8c8a9795182fe8f836", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2015-09-25T19:06:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-10T03:21:09.000Z", "avg_line_length": 36.6432160804, "max_line_length": 312, "alphanum_fraction": 0.5968184312, "num_tokens": 3468, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5631685379030357}}
{"text": "\\documentclass[../Notes/main.tex]{subfiles}\n\n\\begin{document}\n\\section{Graphs}\n\n\\subsection{Graph Traversal}\n\n\\subsubsection{Breadth First Search}\n\\lstinputlisting[firstline=2]{bfs/bfs.cpp}\n\n\\subsubsection{Recursive Depth First Search}\n\\lstinputlisting[firstline=2]{dfs/dfsRecursive.cpp}\n\n\\subsubsection{Iterative Depth First Search}\n\\lstinputlisting[firstline=2]{dfs/dfsIterative.cpp}\n\n%\\subsection{Topological Sort}\n%\\lstinputlisting[firstline=2]{topoSort/topoSort.cpp}\n\n\\subsection{Shortest Path Algorithms}\n\n\\subsubsection{Dijsktra}\nAll edges have non-negative values\n\\lstinputlisting[firstline=2]{dijsktra/dijsktra.cpp}\n\n\\subsubsection{Bellman Ford}\nEdges can be negative, and it detects negative cycles\n\\lstinputlisting[firstline=2]{bellmanFord/bellmanFord.cpp}\n\n\\subsubsection{Floyd Warshall}\nShortest path from every node to every other node\n\\lstinputlisting[firstline=2]{floydWarshall/floydWarshall.cpp}\n\n\\subsection{Minimum Spanning Tree (MST)}\n\n\\subsubsection{Kruskal}\n\\lstinputlisting[firstline=2]{kruskal/kruskal.cpp}\n\n% \\subsubsection{Prim}\n% \\lstinputlisting[firstline=2]{prim/prim.cpp}\n\n\\subsection{Lowest Common Ancestor (LCA)}\nSupports multiple trees\n\\lstinputlisting[firstline=2]{lca/lca.cpp}\n\n% \\subsection{Strongly Connected Components}\n\n% \\subsubsection{Tarjan}\n% \\lstinpuntlisting[firstline=2]{tarjan/tarjan.cpp}\n\n% \\subsubsection{Korasaju}\n% \\lstinpuntlisting[firstline=2]{korasaju/korasaju.cpp\n\n\\subsection{Max Flow}\n\\lstinputlisting[firstline=2]{dinic/dinic.cpp}\n\n% \\subsection{Heavy Light Decomposition}\n% \\lstinputlisting[firstline=2]{heavyLight/heavyLight.cpp}\n\n\\subsection{Others}\n\\subsubsection{Diameter of a tree}\n\\lstinputlisting[firstline=2]{treeDiameter/treeDiameter.cpp}\n\\end{document}", "meta": {"hexsha": "efbcbb948237afea1dc50228f9eb8214b4d73a8e", "size": 1721, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "graphs/graphs.tex", "max_stars_repo_name": "ignaciohermosillacornejo/apuntes_icpc", "max_stars_repo_head_hexsha": "0cf8935931c776f2899c03f79d4dcc6c09b81373", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-08-19T14:25:54.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-26T06:35:56.000Z", "max_issues_repo_path": "graphs/graphs.tex", "max_issues_repo_name": "ignaciohermosillacornejo/apuntes_icpc", "max_issues_repo_head_hexsha": "0cf8935931c776f2899c03f79d4dcc6c09b81373", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-08-04T23:30:32.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-10T22:23:49.000Z", "max_forks_repo_path": "graphs/graphs.tex", "max_forks_repo_name": "ignaciohermosillacornejo/apuntes_icpc", "max_forks_repo_head_hexsha": "0cf8935931c776f2899c03f79d4dcc6c09b81373", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-12-02T22:44:57.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-17T02:00:22.000Z", "avg_line_length": 27.3174603175, "max_line_length": 62, "alphanum_fraction": 0.8047646717, "num_tokens": 487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680199891789, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5631685343288912}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{multicol}\n\\usepackage[margin=1in]{geometry} \n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath, mathtools}  % mathtools required for box around equations\n\\usepackage{graphicx,float}\n\\usepackage[font={small}]{caption}\n\n\\title{Competitive Programming Team: AB IdeaLab\\\\ at Fall Activities Fair}\n\\author{Sanjit Bhat \\and Alexander Sun}\n\\date{September 13, 2018}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Description}\nHello prospective IdeaLab \\textbf{Competitive Programming Team} (CPT) member!\n\nThis year is AB's first year competing in the \\textbf{American Computer Science League} (ACSL) competition!\nAs you will see with the three samples on the back page, the\nproblems are an interesting mix between Math and Computer Science and cover topics such as:\n\n\\begin{itemize}\n\\item Recursion (see Figure~\\ref{fig:recursion} and Problems~\\ref{prob:easy} and~\\ref{prob:hard})\n\\item Graph Theory (see Figure~\\ref{fig:graph_theory})\n\\item Computer Number Systems (see Problem~\\ref{prob:medium})\n\\item Data Structures\n\\end{itemize}\n\nIf you would like to collaborate with others, learn how to solve these and similar problems,\nand enter the world of Competitive Programming, come to IdeaLab's first meeting,\n\\textbf{September 21, 2018 in the SYSCO lab\\footnote{Location: Upper West, near math department center} at 3:00 PM}.\n\nThe only pre-requisites are a \\textbf{love for solving problems and an open mind}.\nWe value these \\textbf{more} than prior experience coding\n(as we'll be teaching basic coding skills on an ad-hoc basis).\n\n\\textbf{Represent AB, join an inclusive and caring team, and learn a ton in our first year competing in ACSL!}\n\n\\begin{multicols}{2}\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.40\\textwidth]{figures/fibonacci-recursion-tree.png}\n  \\caption{A tree representing the recursive calls made to the Fibonacci function.}\n  \\label{fig:recursion}\n\\end{figure}\n\n%second column\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.20\\textwidth]{figures/graph-theory.png}\n  \\caption{A graph theoretic representation of nodes, directed edges, and weights.}\n  \\label{fig:graph_theory}\n\\end{figure}\n\\end{multicols}\n\n\\section{Easy Problem}\n\\label{prob:easy}\nFind $f(-1)$:\n\\begin{equation*}\nf(x) =\n\\begin{cases}\nx - f(x+1) & \\text{if $x < 3$}\\\\\nf(2x) & \\text{if $3 \\leq x < 5$}\\\\\nx + 1 & otherwise\n\\end{cases}\n\\end{equation*}\n\\vspace{2cm}\n\n\\hfill\n    \\begin{tabular}[b]{|c|}\n        \\hline\n        Your\\\\\n        Answer:\\\\\n        \\\\\n        \\hline\n    \\end{tabular}\n\\hfill\n\n\\section{Medium Problem}\n\\label{prob:medium}\nLet $n$ be any positive base 10 integer from 1 to $2^{12}$ inclusive.\nLet $S(n)$ be the number of 1's in the binary representation of $n$.\nFind the number of possible $n$'s such that $S(n) - S(n + 1) = 3$.\n\\vspace{2cm}\n\n\\hfill\n    \\begin{tabular}[b]{|c|}\n        \\hline\n        Your\\\\\n        Answer:\\\\\n        \\\\\n        \\hline\n    \\end{tabular}\n\\hfill\n\n\\section{Hard Problem}\n\\label{prob:hard}\nAckerman's function is defined as:\n\n\\begin{equation*}\nA(x, y) =\n\\begin{cases}\ny + 1 & \\text{if $x = 0$}\\\\\nA(x - 1, 1) & \\text{if $x \\neq 0$ and $y = 0$}\\\\\nA(x - 1, A(x, y - 1)) & \\text{if $x \\neq 0$ and $y \\neq 0$}\n\\end{cases}\n\\end{equation*}\nFind $A(3, 4)$.\n\\vspace{2cm}\n\n\\hfill\n    \\begin{tabular}[b]{|c|}\n        \\hline\n        Your\\\\\n 
       Answer:\\\\\n        \\\\\n        \\hline\n    \\end{tabular}\n\\hfill\n\\end{document}\n", "meta": {"hexsha": "9f1d47ae1e25fbaff94faa372a12da2ecf4087e8", "size": 3398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "activities-fair-cpt-flier.tex", "max_stars_repo_name": "sanjit-bhat/AB-ACSL", "max_stars_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-12T03:01:29.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-12T03:01:29.000Z", "max_issues_repo_path": "activities-fair-cpt-flier.tex", "max_issues_repo_name": "sanjit-bhat/AB-ACSL", "max_issues_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "activities-fair-cpt-flier.tex", "max_forks_repo_name": "sanjit-bhat/AB-ACSL", "max_forks_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.4032258065, "max_line_length": 116, "alphanum_fraction": 0.6912889935, "num_tokens": 1087, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7577943712746406, "lm_q1q2_score": 0.5631685338379219}}
{"text": "\\subsection{Major blocks}\n\nThe flow of control begins at the bottom left of \\autoref{fig:blockdiag}, with a\nmic providing audio input, which is captured by MATLAB (above) and converted to\nan array of integers. In ideal conditions and availability of modules, this\nwould be done on the arduino, with little to no performance detriment. The time\nbetween serial inputs (based on the baud rate) is utilised to process the\nincoming data into the format used through the program, which is as in the\nDoobit storage system. \n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[height=0.4\\textheight]{fig/blockdiag.pdf}\n    \\caption{Block diagram and work distribution.}\n    \\label{fig:blockdiag}\n\\end{figure}\n\nAfter the signal has been completely read, a Dirac comb (in frequency space) is\ngenerated from a preset range of frequencies. Ideally, a Fourier transform would\nbe utilized for this, but that method carries far too much resolution for just\nidentifying certain frequencies. A limitation that still preserves a huge space\nof applications. Identification of a combination of frequencies (a trivial\nextension of the current algorithm, a simple check at the end, or even better if\nspecific, a smaller starting set) could allow for identifying more complicated\nspeech patterns and more processing on subsets of the input (would be stretching\nthe limits of the arduino) could help in preliminary speech recognition,\npossibly as a low power first gate in an always-on speech recognition system.\nFor example, a lower level circuit that listening for \\emph{\"Alexa\"} and\nactivates a more power hungry and performant circuit for speech recognition.\n\nTo avoid processing all the frequencies, we process the entire comb at once, and\nif it passes a certain threshold, we infer that one or more of the frequencies\nin the comb is present in the sample. 
Proceeding, the comb is split halfway in\nfrequency space and processed depth first to identify the frequencies present,\nrejecting an entire subset at any point where the global maximum of its cross\ncorrelation drops below a given threshold.\n\nA possible optimization, abandoned in the interest of time, is to compute the\ncross correlation at only a few points, using optimization techniques\ninstead of brute force to find the maximum.", "meta": {"hexsha": "6a5f1732ad8aa5287cb535e38769142c6ffea7a5", "size": 2270, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/blockd.tex", "max_stars_repo_name": "sankalpgambhir/ardio", "max_stars_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-29T18:00:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-29T18:00:57.000Z", "max_issues_repo_path": "doc/blockd.tex", "max_issues_repo_name": "sankalpgambhir/ardio", "max_issues_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/blockd.tex", "max_forks_repo_name": "sankalpgambhir/ardio", "max_forks_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.75, "max_line_length": 80, "alphanum_fraction": 0.8022026432, "num_tokens": 485, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5631685295273229}}
{"text": "\\documentclass[10pt]{article}\n\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\\usepackage{stmaryrd}\n\\usepackage{tikz}\n\\usetikzlibrary{shapes, arrows.meta, positioning}\n\n\\tikzstyle{block} = [draw, fill=white, rectangle,\n    minimum height=3em, minimum width=2em]\n\\tikzstyle{sum} = [draw, fill=white, circle, node distance=1cm]\n\\tikzstyle{input} = [coordinate]\n\\tikzstyle{output} = [coordinate]\n\\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]\n\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\B}{\\mathbb{B}}\n\\newcommand{\\stream}[1]{\\ensuremath{\\mathcal{S}_{#1}}}\n\\newcommand{\\zm}{\\ensuremath{z^{-1}}}\n\\newtheorem{theorem}{Theorem}[section]\n\\newcommand{\\code}[1]{\\texttt{#1}}\n\\newcommand{\\I}{\\mathcal{I}}\n\\newcommand{\\D}{\\mathcal{D}}\n\\newcommand{\\distinct}{\\mathit{distinct}}\n\n\\newcommand{\\multiset}[1]{\\mathit{multiset}_{#1}}\n\\newcommand{\\set}[1]{\\mathit{set}_{#1}}\n\n\\title{A model of DDlog}\n\n\\begin{document}\n\\maketitle\n\n\\section{Basic notation}\n\n\\subsection{Types}\n\n$\\N$ is the set of natural numbers, while $\\Z$ is the set of integers.\n\nOur language will deal with a set of base types $B$ that includes\n$\\N$, $\\Z$, \\code{string}, $\\B$ (Booleans), $\\mathbf{unit}$.\n\nDerived types include: tuples, vectors (lists), functions.\n\n$$T = B \\;|\\; T \\times T \\;|\\; T^* \\;|\\; T \\rightarrow T$$\n\n%% We also highlight two special derived types: $\\multiset{T}$: finite\n%% multisets of elements of type $T$, and $\\set{T}$: sets of elements of\n%% type $T$.  We call these two particular types \\emph{collections}.\n\nA typing context is a map that assigns types to variable names.\n$\\Gamma = x_1 : T_1, x_2 : T_2, \\ldots, x_m : T_m$, where $T_i$ is the\ntype of variable $x_i$.\n\n\\subsection{Multisets}\n\nA monoid $(M, 0_M, +_M)$ is a set $M$ with a distinguished element\n$0_M \\in M$ and a binary associative operation $+_M : M \\times M\n\\rightarrow M$, such that $a +_M 0 = 0 +_M a = a$ for any $a \\in M$.\nWhen clear from the context we drop the index $_M$.  A commutative\nmonoid has the property that $a + b = b + a$ for all $a, b \\in M$.  A\ngroup is a monoid where every element $a \\in M$ has a complement $-a\n\\in M$ such that $a + (-a) = 0$.\n\nGiven a monoid (group) $B$, the set of functions between $A$ and $B$\nform a monoid (group): M = $A \\rightarrow B$, defined by $0_M = (x\n\\mapsto 0_B)$ and $f +_M g = x \\mapsto f(x) +_B g(x)$.  We denote the\n\\emph{finite} functions between $A$ and $B$ as $B[A]$; these are\nfunctions where $f(x) \\not= 0_B$ for at most a finite number of values\nof $x \\in A$.  Such a function can be denoted by enumerating the\ninputs that map to non-zero values: $\\{ x_1 \\mapsto a_1, \\dots, x_n\n\\mapsto a_n \\}$.  The values in $B[A]$ can also be thought as being\nkey-value maps with keys in $A$ and values in $B$.\n\nIn particular, \\emph{multisets} with elements of type $A$ are the\nfunctions $\\Z[A]$.  Multiset union is just addition in the $\\Z[A]$\nmonoid.  Note that multisets elements can have negative weights.\nMultisets form a commutative group, since $\\Z$ with addition is a\ngroup.  The zero is the empty multiset $\\{\\} = \\phi$.  
Given a multiset $m \\in \\Z[A]$ and a value $v \\in A$, we overload the array\nindex notation $m[v]$ to denote the \\emph{weight} of the element $v$.\n\nThe ``distinct'' function $\\distinct: \\Z[D] \\rightarrow \\Z[D]$ projects\na multiset into an underlying set, but \\emph{represents the result\n  still as a multiset}.  The definition of distinct is given by\n$\\distinct(m) = \\{ x \\mapsto 1 \\mid m[x] > 0 \\} \\in \\Z[D]$.\n\n\\subsection{Streams}\n\nGiven a monoid $T$, streams $\\stream{T}$ of values of type $T$ are\nfunctions $\\N \\rightarrow T$.  If $S \\in \\stream{T}$ is a stream of\nvalues of type $T$, we use the notation $S[t] = S(t) \\in T$ for\nthe value of the stream at time $t \\in \\N$.\n\nThe $\\zm: \\stream{T} \\rightarrow \\stream{T}$ \\emph{delay operator} is\na function from streams to streams, defined as $(\\zm(S))[0] = 0_T$,\nand $(\\zm(S))[t] = S[t - 1]$ for all $t > 0$.\n\nGiven a function $f: A \\times B \\rightarrow C$ one can naturally\nextend it to operate on streams $f: \\stream{A} \\times \\stream{B}\n\\rightarrow \\stream{C}$ operating pointwise on time: $f(X, Y)[t] =\nf(X[t], Y[t])$ for $t \\in \\N$.  (This applies to unary functions as\nwell, which can be extended to operate over a stream pointwise.)\n\nIn the rest of this text we only consider streams over multisets: at\neach time moment the value of the stream is a multiset.  Since\nmultisets form a group, addition and subtraction on streams over\nmultisets of the same type are well-defined.\n\nGiven a monoid $M$, integration is an operation from streams to\nstreams, $\\I : \\stream{M} \\rightarrow \\stream{M}$.  The integral of a\nstream $S$ is defined using an equation: $\\I(S) = \\zm(\\I(S)) + S$.\nIn the notation used for digital filters this equation can also be\nwritten as $\\I(S) = S / (1 - \\zm)$.  The element of $\\I(S)$ at time\n$t$ is the (prefix) sum of all elements of $S$ at times up to $t$:\n$\\I(S)[t] = \\sum_{i \\leq t} S[i]$, where the sum uses the addition of\nthe underlying monoid $M$.\n\nGiven a group $M$, differentiation is an operation from streams to\nstreams $\\D : \\stream{M} \\rightarrow \\stream{M}$, defined as $\\D(S) =\nS - \\zm(S)$ (we need $M$ to be a group in order to have subtraction).\nAn element of $\\D(S)$ (at time $t$) is the difference between the\ncurrent (time $t$) element of $S$ and the previous (time $t-1$)\nelement of $S$.\n\n\\begin{theorem}\nFor a stream $S$ over a commutative group we have $\\D(\\I(S)) = S$.\n\\end{theorem}\n\n\\begin{proof}\n$\\D(\\I(S)) = \\I(S) - \\zm(\\I(S)) = (\\zm(\\I(S)) + S) - \\zm(\\I(S)) = S$, by\n  substituting the definitions of $\\D$ and $\\I$ and using the\n  commutativity of the group.\n\\end{proof}\n\n\\section{DDlog}\n\n\\subsection{DDlog types}\n\n\\[\\frac{\\code{A}: T, \\code{B}: S}{\\code{(A,B)}: T \\times S}\\]\n\\[\\frac{\\code{A}: T, \\code{B}: S}{\\code{function(x:A):B}: T \\rightarrow S}\\]\n\\[\\frac{\\code{T1}: T_1, \\code{T2}: T_2}{\\code{T1 | T2} : T_1 \\oplus T_2}\\]\n\\[\\frac{\\code{T1}: T_1, \\code{T2}: T_2}{\\code{Constructor\\{f1: T1, f2:\n    T2\\}} : T_1 \\times T_2 \\rightarrow T_1 \\oplus T_2}\\]\n\n\\subsection{DDlog collections}\n\nDDlog programs consume and produce streams of data.  The outside world\nfeeds input streams to, and consumes output streams from, DDlog\ncollections.  A DDlog\ntransaction defines a time step for all the input streams.  On each\ntransaction commit DDlog updates all output streams.  DDlog programs\nonly operate on streams.\n\n
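Before turning to the individual collection kinds, here is a quick worked example of the stream operators above (the example stream is ours): for $S = (\\{a \\mapsto 1\\},\\; \\{b \\mapsto 1\\},\\; \\{a \\mapsto -1\\},\\; 0, \\ldots) \\in \\stream{\\Z[A]}$ we get\n$$\\I(S) = (\\{a \\mapsto 1\\},\\; \\{a \\mapsto 1, b \\mapsto 1\\},\\; \\{b \\mapsto 1\\},\\; \\{b \\mapsto 1\\}, \\ldots)$$\nand applying $\\D$ to $\\I(S)$ recovers $S$, as the theorem above states.\n\n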
Some input and output streams are fed\ndirectly to DDlog programs, while others are processed, as described\nin this section.\n\n\\subsubsection{\\code{input stream IS[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4cm,>=latex]\n    \\node [input, name=input] {};\n    \\node [block, right of=input] (block) {};\n    \\node [block, shape=ellipse, right of=block] (ddlog) {DDlog};\n    \\draw [draw,->] (input) -- node {$IS$} (block) node [below,pos=0.5] {input stream};\n    \\draw [->] (block) -- node {$\\code{IS}$}(ddlog) node [below,pos=0.5] {internal stream};\n\\end{tikzpicture}\n\nA DDlog declaration of \\code{input stream IS[D]} declares a (DDlog\ninternal) stream $\\code{IS} \\in \\stream{\\Z[\\code{D}]}$, where the\nstream's elements are multisets with elements of type \\code{D}.  Given\na stream of input values $IS \\in \\stream{\\Z[D]}$, the contents of the\nDDlog stream \\code{IS} is exactly $IS$.\n\n\\subsubsection{\\code{output stream OS[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4cm,>=latex]\n    \\node [block, shape=ellipse] (ddlog) {DDlog};\n    \\node [block, right of=ddlog] (block) {};\n    \\node [output, right of=block] (output) {};\n    \\draw [draw,->] (ddlog) -- node {\\code{OS}} (block) node [below,pos=0.5] {internal stream};\n    \\draw [->] (block) -- node {$OS$}(output) node [below,pos=0.5] {output stream};\n\\end{tikzpicture}\n\nAs shown in the previous diagram, a DDlog declaration of \\code{output\n  stream OS[D]} declares an (output) stream $OS \\in\n\\stream{\\Z[\\code{D}]}$.  The contents of the output stream $OS$ is\nexactly the contents of the stream \\code{OS}.\n\n\\subsubsection{\\code{input multiset IM[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4cm,>=latex]\n    \\node [input, name=input] {};\n    \\node [block, right of=input] (block) {$\\I(IM)$};\n    \\node [block, shape=ellipse, right of=block] (ddlog) {DDlog};\n    \\draw [draw,->] (input) -- node {$IM$} (block) node [below,pos=0.5] {input stream};\n    \\draw [->] (block) -- node {$\\code{IM}$}(ddlog) node [below,pos=0.5] {internal stream};\n\\end{tikzpicture}\n\nA DDlog declaration of \\code{input multiset IM[D]} declares a stream\n$\\code{IM} \\in \\stream{\\Z[\\code{D}]}$.  Given a stream of input values\n$IM \\in \\stream{\\Z[\\code{D}]}$, the contents of \\code{IM} is given by\n$\\code{IM} = \\I(IM)$, the integral of the stream of inputs.\n\n\\subsubsection{\\code{output multiset OM[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4cm,>=latex]\n    \\node [block, shape=ellipse] (ddlog) {DDlog};\n    \\node [block, right of=ddlog] (block) {$\\D(\\code{OM})$};\n    \\node [output, right of=block] (output) {};\n    \\draw [draw,->] (ddlog) -- node {\\code{OM}} (block) node [below,pos=0.5] {internal stream};\n    \\draw [->] (block) -- node {$OM$}(output) node [below,pos=0.5] {output stream};\n\\end{tikzpicture}\n\nA DDlog declaration of \\code{output multiset OM[D]} declares a stream\n$OM \\in \\stream{\\Z[\\code{D}]}$.  
The contents of the (output) stream\n$OM$ is given by $OM = \\D(\\code{OM})$, the derivative of the stream\n\\code{OM}.\n\n\\subsubsection{\\code{input relation IR[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4.5cm,>=latex]\n    \\node [input, name=input] {};\n    \\node [block, right of=input] (block) {$\\distinct(\\I(IR))$};\n    \\node [block, shape=ellipse, right of=block] (ddlog) {DDlog};\n    \\draw [draw,->] (input) -- node {$IR$} (block) node [below,pos=0.5] {input stream};\n    \\draw [->] (block) -- node {$\\code{IR}$}(ddlog) node [below,pos=0.5] {internal stream};\n\\end{tikzpicture}\n\nA DDlog declaration of \\code{input relation IR[D]} declares a stream\n$\\code{IR} \\in \\stream{\\Z[\\code{D}]}$.  The elements of the stream\n\\code{IR} are defined by $\\code{IR} = \\distinct(\\I(IR))$.\n\n\\subsubsection{\\code{output relation OR[D]}}\n\n\\begin{tikzpicture}[auto, node distance=4.5cm,>=latex]\n    \\node [block, shape=ellipse] (ddlog) {DDlog};\n    \\node [block, right of=ddlog] (block) {$\\D(\\distinct(\\code{OR}))$};\n    \\node [output, right of=block] (output) {};\n    \\draw [draw,->] (ddlog) -- node {\\code{OR}} (block) node [below,pos=0.5] {internal stream};\n    \\draw [->] (block) -- node {$OR$}(output) node [below,pos=0.5] {output stream};\n\\end{tikzpicture}\n\nA DDlog declaration of \\code{output relation OR[D]} declares a stream\n$OR \\in \\stream{\\Z[\\code{D}]}$.  The elements of the stream $OR$ are defined\nby $OR = \\D(\\distinct(\\code{OR}))$.\n\n\\subsection{DDlog operators}\n\nAll DDlog operators: join, group-by, aggregate, projection, etc., are\njust particular instances of functions over multisets.  When computing\nover streams DDlog just applies these functions to the streams\npointwise.  In this section we describe the semantics of these\noperators over a multiset; the semantics over streams is just lifted\npointwise.\n\n\\subsection{DDlog syntax}\n\nA Datalog rule is a term of the form:\n\n\\begin{eqnarray*}\n  rule &::=& atom \\code{ :- } \\mathit{RHSClauses}. \\\\\n  atom &::=& C[X] \\mbox{ where $C$ is a collection and $X$ is a tuple\n    of names} \\\\\n  \\mathit{RHSClauses} &::=& \\mathit{RHSClause} \\\\\n  &|& \\mathit{RHSClause}, \\mathit{RHSClauses} \\\\\n  \\mathit{RHSClause} &::=& \\code{not } atom \\\\\n  &|& expr   \\\\\n  &|& \\code{var } name = expr \\\\\n  &|& \\code{var } name = \\code{FlatMap}( expr ) \\\\\n  &|& \\code{var } name = expr \\code{.groupby}( expr ) \\\\\n\\end{eqnarray*}\n\n\\subsection{Typing rules}\n\n\\code{X}: $T$\n\n\\subsubsection{Multiset semantics}\n\nWe use $\\llbracket t \\rrbracket$ to denote the semantics of a term\n$t$.  The semantics of a $\\mathit{RHSClauses}$ term is a mapping of\nall live variables to a multiset of tuple values.  
In a typing\nenvironment $\\Gamma = x_1 : T_1, x_2 : T_2, \\ldots, x_m : T_m$, the\nsemantics of a \\code{RHSClauses}\n\n\\end{document}\n", "meta": {"hexsha": "01f4c449a160c0979be2df8e5f7dbc45b5a9d2b3", "size": 11952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/model/model.tex", "max_stars_repo_name": "ddlog-dev2/differential-datalog", "max_stars_repo_head_hexsha": "e06d676b5d77d3713837fbca2ffbad4963124d23", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/model/model.tex", "max_issues_repo_name": "ddlog-dev2/differential-datalog", "max_issues_repo_head_hexsha": "e06d676b5d77d3713837fbca2ffbad4963124d23", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/model/model.tex", "max_forks_repo_name": "ddlog-dev2/differential-datalog", "max_forks_repo_head_hexsha": "e06d676b5d77d3713837fbca2ffbad4963124d23", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.9368421053, "max_line_length": 95, "alphanum_fraction": 0.6684236948, "num_tokens": 3974, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5631685295273229}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{pmmeta}\n\\pmcanonicalname{LeastCommonMultiple}\n\\pmcreated{2015-05-06 19:07:25}\n\\pmmodified{2020-02-09}\n\\pmowner{pahio}{2872}\n\\pmmodifier{pahio}{2872}\n\\pmformalizer{ngonga}{}\n\\pmtitle{least common multiple}\n\\pmrecord{32}{35723}\n\\pmprivacy{1}\n\\pmauthor{pahio}{2872}\n\\pmtype{Definition}\n\\pmcomment{trigger rebuild}\n\\pmclassification{msc}{11-00}\n\\pmsynonym{least common dividend}{LeastCommonMultiple}\n\\pmsynonym{lcm}{LeastCommonMultiple}\n\\pmrelated{Divisibility}\n\\pmrelated{PruferRing}\n\\pmrelated{SumOfIdeals}\n\\pmrelated{IdealOfElementsWithFiniteOrder}\n\n\\pmdefines{divides}\n\\pmdefines{multiple}\n\\pmdefines{common multiple}\n\\pmdefines{least common multiple}\n\n\n\\endmetadata\n\n\n\\usepackage{amssymb}\r\n\\usepackage{amsmath}\r\n\\usepackage{amsfonts}\r\n\n\\usepackage{cnl}\n\\usepackage{xcolor}\n\n\\DeclareMathOperator{\\lcm}{lcm}\n\n% TITLE AUTHOR DATE\n\\title{Number Theory,\\\\ LeastCommonMultiple}\n\\date{February 09, 2020}\n\\author{Ngo Nga (ngongatlu)}\n\n\n%- DOCUMENT\n\n\\begin{document}\n\\parskip=\\baselineskip\n\n% Local Defs \n\\def\\natdiv#1#2{{#1}\\mathrel{|}{#2}}\n\\parindent=0pt\n\n\\begin{cnl}\n\n\\Cnlinput{../TeX2CNL/package/cnlinit}\n\n\\bigskip\n\n%-% BEGIN\n\n\\lsection{LeastCommonMultiple}\nIn this section, let $m,\\ d$ be integers.\n\n\\deflabel{divides}\nAssume that $m, \\d$ are integers. Then we say that $d$ \\df{divides} $m$ iff $d\\ne 0$ and there \nexists an integer $r$ such that $m= d\\*r$.\n\\end{definition}\n\n\\dfn{\nWe write $\\natdiv{d}{m}$ iff $d$ divides $m$.\n}\n\n\\dfn{\nWe say that $m$ is a \\df{mutiple of} $d$ iff $d$ divides $m$.\n}\n\n\\dfn{\nWe say that $d$ is \\df{positive} iff $d > 0$.\n}\n\n\\dfn{\nAssume that $f,\\ p,\\ q$ are integers. Then we say $f$ is a \\df{common\\~multiple} of $p$ and $q$ \niff $p$ divides $f$ and $q$ divides $f$.}\n\n\n\\begin{remark}\nWe now introduce the least common multiple: \n\\end{remark}\n\n\\dfn{Assume that $p,\\ q, \\ f$ are positive integers. Then we say that $f$ is a \n\\df{least\\~common\\~multiple} of $p$ and $q$  iff $f$ is a positive common\\~multiple of $p$ and\n $q$ such that for every common\\~multiple $r$ of $p$ and $q$ we have $\\natdiv{f}{r}$.}\n\n\\dfn{ Assume that  $p,\\ q$ are positive integers. Let $\\h{\\df{lcm}}\\ p\\ q$ denote the \nleast\\~common\\~multiple of $p$ and $q$. This exists and is unique.}\n\n\n\n\n\\begin{demark}\n\\textbf{Note:} \\, The definition can be generalized for several \r\nnumbers. \\,The positive {lcm of positive integers is \r\nuniquely determined. 
(Its negative satisfies the same two \nconditions.)\n\n\\subsection*{Properties}\n\n\\begin{enumerate} \n  \\item If\\, $a = \\prod_{i=1}^{m}p_i^{\\alpha_i}$\\, and\\, \n  $b = \\prod_{i=1}^{m}p_i^{\\beta_i}$\\, are the prime factor \n  \\PMlinkescapetext{presentations} of the positive integers $a$ and $b$ ($\\alpha_{i} \\geqq 0$, \\,$\\beta_{i} \\geqq 0$ \\,$\\forall i$), then \n        $$\\mathrm{lcm}\\!(a,\\,b)= \n        \\prod_{i=1}^{m}p_i^{\\max\\{\\alpha_i,\\,\\beta_i\\}}.$$ \nThis can be generalized for the lcm of several numbers.\n  \\item  Because the greatest common divisor has the expression\\, \n  $\\gcd(a,\\,b) = \\prod_{i=1}^{m}p_i^{\\min\\{\\alpha_i,\\,\\beta_i\\}}$, we see that  \n  $$\\gcd(a,\\,b)\\cdot \\mathrm{lcm}\\!(a,\\,b) = ab.$$\nThis formula is sensible only for two integers; it cannot be \ngeneralized for several numbers, i.e., for example,\n       $$\\gcd(a,\\,b,\\,c)\\cdot \\mathrm{lcm}(a,\\,b,\\,c) \\neq abc.$$\n  \\item The preceding formula may be presented in \n  \\PMlinkescapetext{terms} of ideals of $\\mathbb{Z}$; we may \n  replace the integers with the corresponding principal ideals.\\, \n  The formula acquires the form\n      $$((a)+(b))((a)\\cap(b)) = (a)(b).$$\n  \\item This formula is also valid for ideals other than principal ones, and even in systems as general as the Pr\\\"ufer rings; in fact, it could be taken as the defining property of these rings: \\, Let $R$ be a commutative ring with non-zero unity. \\,$R$ is a Pr\\\"ufer ring iff {\\em Jensen's formula}\n $$(\\mathfrak{a}+\\mathfrak{b})(\\mathfrak{a}\\cap\\mathfrak{b}) = \\mathfrak{ab}$$\nis true for all ideals $\\mathfrak{a}$ and $\\mathfrak{b}$ of $R$, with at least one of them having \\PMlinkname{non-zero-divisors}{ZeroDivisor}.\n\\end{enumerate}\n\n\\end{remark}\n\n\\begin{thebibliography}{9}\n\\bibitem{Larsen & McCarthy} M. Larsen and P. McCarthy: {\\em Multiplicative theory of ideals}. Academic Press. New York (1971).\n\\end{thebibliography}\n%%%%%\n%%%%%\n\n\\end{cnl}\n\\end{document}\n", "meta": {"hexsha": "c6c6ec4f499c050e1893e5eee5156e555ec6af06", "size": 4327, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sample-texts/planet-math-11/11-00-LeastCommonMultiple.tex", "max_stars_repo_name": "ngongatlu/CNL-CIC", "max_stars_repo_head_hexsha": "c98f6d9e0cafbde014952cd65d20bc64fb3112b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-22T09:04:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-22T09:04:10.000Z", "max_issues_repo_path": "sample-texts/planet-math-11/11-00-LeastCommonMultiple.tex", "max_issues_repo_name": "ngongatlu/CNL-CIC", "max_issues_repo_head_hexsha": "c98f6d9e0cafbde014952cd65d20bc64fb3112b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sample-texts/planet-math-11/11-00-LeastCommonMultiple.tex", "max_forks_repo_name": "ngongatlu/CNL-CIC", "max_forks_repo_head_hexsha": "c98f6d9e0cafbde014952cd65d20bc64fb3112b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0486111111, "max_line_length": 297, "alphanum_fraction": 0.6815345505, "num_tokens": 1498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.743167997235783, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.5631685211516101}}
{"text": "\\section{Introduction}\n\n\\begin{frame}{Some Physics}\n\n    \\begin{block}{Classical Mechanics}\n        \\begin{itemize}\n            \\item In classical mechanics, the space of all possible states of a system is given by \\emph{phase space}, $M$.\n            \\item ``\\emph{State}'' describes the \\emph{position} and the \\emph{momentum}.\n        \\end{itemize}\n    \\end{block}\n\n    \\begin{block}{Quantum Mechanics}\n        \\begin{itemize}\n            \\item In quantum mechanics, still have $M$ but states are replaced by \\emph{wavefunctions}.\n            \\item In my research, wavefunctions are just \\emph{homogeneous polynomials}, for example:\n            $$ \\phi(\\vb{z}) = z_{1}^{k_{1}}z_{2}^{k_{2}}z_{3}^{k_{3}}, \\qquad \\text{where } k_{1} + k_{2} + k_{3} = k. $$\n        \\end{itemize}\n    \\end{block}\n\n\\end{frame}", "meta": {"hexsha": "e5927ab5054f347ecab5d276b211196a87f9286b", "size": 808, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/intro.tex", "max_stars_repo_name": "bencwbrown/WomenInSTEM-Semester2-2021", "max_stars_repo_head_hexsha": "8222f9ca0696d1f32feb3306f99b9294dd16c75f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/intro.tex", "max_issues_repo_name": "bencwbrown/WomenInSTEM-Semester2-2021", "max_issues_repo_head_hexsha": "8222f9ca0696d1f32feb3306f99b9294dd16c75f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/intro.tex", "max_forks_repo_name": "bencwbrown/WomenInSTEM-Semester2-2021", "max_forks_repo_head_hexsha": "8222f9ca0696d1f32feb3306f99b9294dd16c75f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4, "max_line_length": 123, "alphanum_fraction": 0.6089108911, "num_tokens": 247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392817460333, "lm_q2_score": 0.6370308013713525, "lm_q1q2_score": 0.5631602520944304}}
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n\n\n    \n    \n      \\subsection{reward.m}\n\n\\begin{par}\n\\textbf{Summary:} Compute expectation, variance, and their derivatives of an exponentiated negative quadratic cost $\\exp(-(x-z)'W(x-z)/2)$, where $x\\sim\\mathcal N(m,S)$\n\\end{par} \\vspace{1em}\n\\begin{par}\n\\textbf{Input arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}m:          D-by-1 mean of the state distribution\nS:          D-by-D covariance matrix of the state distribution\nz:          D-by-1 target state\nW:          D-by-D weight matrix\\end{verbatim}\n\\begin{par}\n\\textbf{Output arguments:}\n\\end{par} \\vspace{1em}\n\\begin{verbatim}muR:        1-by-1 expected reward\ndmuRdm:     1-by-D derivative of expected reward wrt input mean\ndmuRdS:     D-by-D derivative of expected reward wrt input covariance matrix\nsR:         1-by-1 variance of reward\ndsRdm:      1-by-D derivative of variance of reward wrt input mean\ndsRdS:      D-by-D derivative reward variance wrt input covariance matrix\\end{verbatim}\n\\begin{par}\nCopyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.\n\\end{par} \\vspace{1em}\n\\begin{par}\nLast modification: 2013-01-20\n\\end{par} \\vspace{1em}\n\n\n\\subsection*{High-Level Steps} \n\n\\begin{enumerate}\n\\setlength{\\itemsep}{-1ex}\n   \\item Compute expected reward\n   \\item Compute the derivatives of the expected reward with respect to the input   distribution (optional)\n   \\item Compute variance of reward\n   \\item Compute the derivatives of the variance of the reward with respect to the input distribution (optional)\n\\end{enumerate}\n\n\\begin{lstlisting}\nfunction [muR, dmuRdm, dmuRdS, sR, dsRdm, dsRdS] = reward(m, S, z, W)\n\\end{lstlisting}\n\n\n\\subsection*{Code} \n\n\n\\begin{lstlisting}\n% some precomputations\nD = length(m); % get state dimension\nSW = S*W;\niSpW = W/(eye(D)+SW);\n\n% 1. expected reward\nmuR = exp(-(m-z)'*iSpW*(m-z)/2)/sqrt(det(eye(D)+SW));\n\n% 2. derivatives of expected reward\nif nargout > 1\n  dmuRdm = -muR*(m-z)'*iSpW;  % wrt input mean\n  dmuRdS = muR*(iSpW*(m-z)*(m-z)'-eye(D))*iSpW/2;  % wrt input covariance matrix\nend\n\n% 3. reward variance\nif nargout > 3\n  i2SpW = W/(eye(D)+2*SW);\n  r2 = exp(-(m-z)'*i2SpW*(m-z))/sqrt(det(eye(D)+2*SW));\n  sR = r2 - muR^2;\n  if sR < 1e-12; sR=0; end % for numerical reasons\nend\n\n% 4. 
derivatives of reward variance\nif nargout > 4\n  % wrt input mean\n  dsRdm = -2*r2*(m-z)'*i2SpW-2*muR*dmuRdm;\n  % wrt input covariance matrix\n  dsRdS = r2*(2*i2SpW*(m-z)*(m-z)'-eye(D))*i2SpW-2*muR*dmuRdS;\nend\n\\end{lstlisting}\n", "meta": {"hexsha": "17e01f5097f65d5dbedc9ceaac90174fe30fa75a", "size": 2598, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/reward.tex", "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_issues_repo_path": "doc/tex/reward.tex", "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_forks_repo_path": "doc/tex/reward.tex", "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "avg_line_length": 29.8620689655, "max_line_length": 168, "alphanum_fraction": 0.6870669746, "num_tokens": 892, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8840392725805823, "lm_q2_score": 0.6370307806984444, "lm_q1q2_score": 0.5631602279800932}}
{"text": "\\section{Classes} % (fold)\n\\label{sec:classes}\n\\begin{questions}\n\n    \\titledquestion{Rational numbers} % (fold)\n    \\label{sub:rational_numbers}\n\n        In this problem, we will write a class that can represent rational numbers,\n        i.e. fractions $\\frac{p}{q}$.\n\n        \\begin{parts}\n            \\part Create a class \\texttt{Rational} which is initialized by two\n            integers, $p$ and $q$, the nominator and denominator\n            \\part Add a method to print the rational number as $p/q$\n            (the \\verb|__str__| or \\verb|__repr__| method is useful).\n            \\part We would like to represent $\\frac{10}{20}$ by $\\frac{1}{2}$\n            instead, hence write a function that computes the greatest common\n            divisor, and ensure that every rational number is simplified\n            \\part Add a method so that we can add two rational numbers with\n            \\texttt{r1 + r2}, here the \\verb|__add__()| method is useful.\n            \\part Add a method to subtract two rational numbers. (\\verb|__sub__|)\n            \\part Add a method to multiply two rational numbers. (\\verb|__mul__|)\n            \\part Add a method to divide two rational numbers. (\\verb|__div__|)\n            \\part Add a method that compares whether two rational numbers are equal.\n            \\part Add a method to convert the rational number to a floating point\n            (the \\verb|__float__()| method may be handy).\n            \\part Add any more functionality that you think is useful but I failed\n            to mention.\n        \\end{parts}\n\n    % titledquestion rational_numbers (end)\n\n    \\titledquestion{Rock Paper Scissors}\n    \\label{sub:rock_paper_scissors}\n\n        In this problem, we will finish an implementation for Rock-Paper-Scissors.\n        We have written some code to get you started, now it's up to you to finish the\n        implementation.\n\n        The code consists of 2 files: game.py and agent.py.\n        The code that implements the actual game is coded in game.py.\n        agent.py defines several agents that can play the game.\n        Download the starter code here using\\\\\n        \\texttt{\\$ git clone https://github.com/schmit/Rock-paper-scissors-startercode.git} (if you have git / use Cloud9)\n        or dowload the code directly:\\\\ \\url{https://github.com/schmit/Rock-paper-scissors-startercode/archive/master.zip}\n        \\begin{parts}\n            \\part Finish the implementation of game.py by implementing the compare function,\n                updating the scores (where a win is 1 point, and a tie or loss 0 points),\n                and finally the summary function that gives some information after the game.\n            \\part Implement the HumanAgent in agent.py, this agent should query the user\n                for the next move, and ensure that the user gives valid input.\n            \\part Implement MyAgent, where you can implement your own strategy,\n                try to beat the InstructorAgent consistently over 100 rounds.\n        \\end{parts}\n\n        Hint: have a look at the Hangman code.\n\n    \\titledquestion{Hangman agent}\n    \\label{sub:hangman_agent}\n\n    Implement your own Hangman computer agent (see exercise~\\ref{sec:control_flow}.\\ref{sub:hangman_1})\n    that is much more effective than the Agent that guesses random characters.\n\n    Make sure you create a new class rather than overwriting the existing \\texttt{Agent}\n    class. 
You can of course inherit from the \\texttt{Agent} class.\n\n    You can update the \\texttt{simulate.py} script to test your implementation.\n\n    \\titledquestion{Sparse and dense vectors}\n    \\label{sub:vector_class}\n    In exercise~\\ref{sec:dictionaries}.\\ref{sub:vector_multiply} you implemented\n    functions for sparse and dense vector multiplications using lists and dictionaries.\n    However, this is a bit clumsy to use in practice.\n    Really, we would like to represent sparse and dense vectors as classes;\n    this way we can overload operators such as \\texttt{+} (\\texttt{\\_\\_add\\_\\_})\n    and get sensible output.\n    For example, using \\texttt{+} on two dense vectors implemented as lists would\n    append the second vector to the first, instead of adding the two together.\n\n    Implement sparse and dense vectors.\n    Both classes should have the following capabilities:\n    \\begin{parts}\n        \\part Print vector\n        \\part Add two vectors (both if other is dense and sparse)\n        \\part Multiply two vectors (both if other is dense and sparse)\n    \\end{parts}\n    Do re-use your code from the previous exercise.\n\n    Hint: \\texttt{isinstance()} might be useful.\n\n    \\titledquestion{Implementing the set class}\n    \\label{sub:set_as_dict}\n\n    Write a class \\texttt{mySet} that has the same basic functionality as\n    the Python \\texttt{set} data structure.\n    Base your implementation on a dictionary.\n\n    \\titledquestion{Binary search tree} % (fold)\n    \\label{sub:binary_search_tree}\n\n        In this exercise, we will implement a binary search tree.\n        See \\url{http://en.wikipedia.org/wiki/Binary_search_tree} for\n        an explanation.\n\n        \\begin{parts}\n            \\part Define a class \\texttt{Node}, and write the constructor, which\n            takes one argument, \\texttt{value}, and initializes the left and right\n            children to None.\n            \\part Write a function to print the tree.\n            \\part Write a function that inserts a new value in the tree at the\n            right location.\n            \\part Write a function that looks up a value in the tree.\n            \\part Write a function that removes a value from the tree.\n        \\end{parts}\n\n    % titledquestion binary_search_tree (end)\n\n    \\titledquestion{Ordinary least squares} % (fold)\n    \\label{sub:ordinary_least_squares}\n\n        Our goal in this exercise is to write our own least-squares solver to solve regression problems:\n        \\[\n            \\arg\\min_{\\beta} \\|y - X\\beta\\|_2\n        \\]\n        See for example statsmodels \\texttt{ols} or \\texttt{LinearRegression}.\n        While one can, and should, use written solvers, it's a good practice exercise.\n\n        \\begin{parts}\n            \\part Set up an OLS class with fit and predict methods, to be coded later\n            \\part Write the fit method using numpy's or scipy's linear algebra module.\n            \\part Now write the predict function that predicts $y_n$ given new $X_n$.\n            \\part Add a function that summarizes the model\n            \\part (Optional) Use Patsy and Pandas to support DataFrames and formulas, similar to \\texttt{R}.\n        \\end{parts}\n\n    % titledquestion ordinary_least_squares (end)\n\\end{questions}\n% section classes (end)\n", "meta": {"hexsha": "e18f1fcd0f5a3f0a77f38f2f27d39b82d8633cd2", "size": 6640, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/tex/classes.tex", "max_stars_repo_name": 
"naskoch/python_course", "max_stars_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-08-10T17:46:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-18T21:09:03.000Z", "max_issues_repo_path": "exercises/tex/classes.tex", "max_issues_repo_name": "naskoch/python_course", "max_issues_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/tex/classes.tex", "max_forks_repo_name": "naskoch/python_course", "max_forks_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-04-24T03:31:02.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-13T07:36:06.000Z", "avg_line_length": 47.7697841727, "max_line_length": 122, "alphanum_fraction": 0.6746987952, "num_tokens": 1540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316860482763, "lm_q2_score": 0.8376199714402812, "lm_q1q2_score": 0.5631584476661533}}
{"text": "%\n% Chapter 8\n%\n\n\\chapter{Statistical methods and systematic uncertainties}\n\\label{syst_unc}\n\n\\section{Introduction}\nIn the search for LFV Higgs decays, a discovery will be the observation of events with the Higgs boson decaying either into \\mutau or \\etau. As the LFV is forbidden in SM, the SM can be taken as the background model, and discovery can be claimed if the observation is not compatible with this background model. The uncertainties resulting from theoretical, experimental, and statistical sources can give rise to an excess in the observation even when there is no signal at all. When an excess is observed, a p-value is computed. The p-value represents the probability that an observed excess is due to statistical fluctuations. The p-value has to be very low to indicate that the excess observed is due to a signal's presence and not merely a statistical fluctuation. However, if no excess is observed, upper exclusion limits can be set on the branching fraction. This chapter describes the statistical methods used for extracting the signal strength, followed by the systematic uncertainties involved in the analysis.\n\n\\section{Statistical methods}\n\\label{stat_meth}\n\nThe results on the branching fraction of the LFV Higgs boson decays to \\mutau, and \\etau are estimated using a profile likelihood method. The SM Higgs boson production cross-sections are used for the signal model, while the branching fraction of the Higgs boson to \\mutau and \\etau remain free parameters. The branching fraction is the parameter of interest. In addition to it, the signal and background model contain nuisance parameters whose values must be derived from collision data. The profile likelihood method is implemented, assuming the asymptotic approximation. Distributions of the BDT discriminator and the collinear mass for signal and various background processes are fitted to collision data. Systematic uncertainties are represented as nuisance parameters, and they can affect the normalization or the shape of the distribution.\n\nPoisson distribution can model the expected number of events and the observed events for the situation at hand. The expected number of events is $\\mu\\cdot s + b$, where $s$ is the expected signal event yields, and $b$ is the expected background event yields. The parameter $\\mu$ is the signal strength modifier, which changes the signal production cross-sections of all the production mechanisms by the same scale of $\\mu$. The likelihood function measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. For our situation, the likelihood function $\\mathcal{L}(data|\\mu)$ is then given by:\n\\begin{equation}\n  \\mathcal{L}(data|\\mu)=\\prod_{i=1}^{bins}\\frac{(\\mu\\cdot s_i + b_i)^{n_i}}{n_{i}!}e^{-\\mu\\cdot s_i - b_i}\n\\end{equation}\nwhere $n_i$ is the number of events observed in the bin $i$ of the distribution, and $s_i$ and $b_i$ are the expected number of signal and background events in that bin, respectively.\n\nThe nuisance parameters which represent the systematic uncertainties are embedded into the likelihood function. The uncertainties considered are taken to be 100\\%-correlated or uncorrelated, thus ensuring that the likelihood function has a clean factorized form~\\cite{ATLAS:2011tau}. However, certain uncertainties have partial correlations across the years. There are also partial correlations among uncertainties between the embedded and MC samples. 
A partially correlated uncertainty is separated into a 100\\%-correlated and an uncorrelated component.\nThe magnitude of the correlated component will be $\\rho$, and that of the uncorrelated part $\\sqrt{1-\\rho^{2}}$, where $\\rho$ is the magnitude of the correlation. For example, a 50\\% correlation will have a correlated component with a magnitude of 0.5 and an uncorrelated part with a magnitude of 0.866.\n\nThe expected signal and background yields are affected by the nuisance parameters, and we parametrize them as $s(\\theta)$ and $b(\\theta)$. There is a default value $\\tilde{\\theta}$ for each component of $\\theta$. The default value reflects our degree of belief about the real value of $\\theta$. The probability distribution function $\\rho(\\theta|\\tilde{\\theta})$ can then be interpreted as a posterior distribution from measurements of $\\tilde{\\theta}$. Using Bayes' theorem:\n\\begin{equation}\n  \\rho(\\theta|\\tilde{\\theta})\\propto\\rho(\\tilde{\\theta}|\\theta)\\cdot\\pi_\\theta(\\theta),\n\\end{equation}\n\nThe priors $\\pi_\\theta(\\theta)$ are taken as flat distributions representing no prior knowledge of $\\theta$. The likelihood of the measurement can be constrained by using the probability distribution function of $\\tilde{\\theta}$. After incorporating the nuisance parameters, the likelihood function becomes:\n\\begin{equation}\n  \\mathcal{L}(data|\\mu,\\theta)=\\prod_{i=1}^{bins}\\frac{(\\mu\\cdot s_i(\\theta) + b_i(\\theta))^{n_i}}{n_{i}!}e^{-\\mu\\cdot s_i(\\theta) - b_i(\\theta)}\\cdot\\rho(\\tilde{\\theta}|\\theta)\n\\end{equation}\n\nIf no excess is observed, upper exclusion limits can be set on the branching fraction using the CL$_\\text{s}$ method~\\cite{Read:2002hq, Read:2000ru, Junk:1999kv}. According to the Neyman--Pearson lemma, the likelihood ratio is the most powerful discriminator. Likelihood ratios with profiled nuisance parameters are generally used for hypothesis testing in searches at the LHC. The test statistic is denoted by $\\tilde{q_\\mu}$ and is given by:\n\\begin{equation}\n\\label{eq:proflik}\n  \\tilde{q_\\mu}=-2\\ln\\frac{\\mathcal{L}(\\text{data}|\\mu,\\hat{\\theta_\\mu})}{\\mathcal{L}(\\text{data}|\\hat{\\mu},\\hat{\\theta})},\\text{ with } 0 \\leq \\hat{\\mu} \\leq \\mu\n\\end{equation}\nwhere $\\hat{\\theta_\\mu}$ refers to the conditional maximum likelihood estimators of $\\theta$, while $\\hat\\mu$ and $\\hat\\theta$ refer to the global maximum likelihood estimators for $\\mu$ and $\\theta$. The constraint $0 \\leq \\hat{\\mu}$ ensures that the signal rate cannot be negative. In contrast, the upper constraint $\\hat{\\mu} \\leq \\mu$ is imposed to guarantee that upward fluctuations of data such that $\\hat{\\mu} > \\mu$ are not considered as evidence against the signal hypothesis.\n\nThe observed value of the test statistic, $\\tilde{q_\\mu}^{obs}$, is calculated for the signal strength $\\mu$ using equation~\\ref{eq:proflik}. Maximum likelihood estimators for the nuisance parameters, under the background-only and signal-plus-background hypotheses, are calculated and are denoted by $\\hat{\\theta_{0}}^{obs}$ and $\\hat{\\theta_\\mu}^{obs}$, respectively. They are used to generate toy MC pseudo-datasets. These pseudo-datasets are used to construct probability distribution functions of the test statistic, $f(\\tilde{q_\\mu}|0,\\hat{\\theta_{0}}^{obs})$ and $f(\\tilde{q_\\mu}|\\mu,\\hat{\\theta_\\mu}^{obs})$, by treating them as if they were real data.\n\n
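Schematically (an illustrative toy sketch only, not the statistical tooling actually used in the analysis; all function names and inputs are ours), the tail probabilities of the observed test statistic in these two ensembles, whose ratio defines the CL$_\\text{s}$ quantity introduced below, can be computed as:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef cl_s(q_obs, q_sb_toys, q_b_toys):\n    # Fraction of s+b toys with a test statistic at least as large\n    # as the observed one (CLs+b), and likewise for b-only toys (CLb).\n    p_mu = np.mean(np.asarray(q_sb_toys) >= q_obs)\n    one_minus_pb = np.mean(np.asarray(q_b_toys) >= q_obs)\n    # Guard against an empty CLb tail in a finite toy ensemble.\n    return p_mu / one_minus_pb if one_minus_pb > 0 else float(\"inf\")\n\\end{verbatim}\n\n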
An illustration of these distributions can be seen in Figure~\\ref{fig:test_stat_dist}.\n\\begin{figure*}[!htpb]\\centering\n \\captionsetup{width=.87\\textwidth,justification=centering}\n \\includegraphics[width=0.85\\textwidth]{plots/chapter8/test_statistic_distri.png}\n \\caption{Test statistic distributions for ensembles of pseudo-data generated for background-only (blue) and signal-plus-background (red) hypotheses~\\cite{ATLAS:2011tau}.}\n \\label{fig:test_stat_dist}\n\\end{figure*}\n\nCL$_\\text{s+b}$ and CL$_\\text{b}$ are the probabilities of obtaining a test statistic at least as extreme as the observed one under the signal-plus-background and background-only hypotheses, respectively. CL$_\\text{s+b}$ measures the incompatibility of the data with the signal-plus-background hypothesis, while CL$_\\text{b}$ measures their incompatibility with the background hypothesis. These quantities alone are not adequate for hypothesis testing. For example, when the signal is so small that both hypotheses are compatible with the observation, a downward fluctuation of the background can lead to a spurious inference of a signal. Also, the incompatibility of the data with the background-only hypothesis alone does not tell us that they are compatible with the signal. The ratio of the two quantities, referred to as CL$_\\text{s}$, deals with both of the situations described above. The 95\\% CL limit is then arrived at by iterating over $\\mu$ until we have CL$_\\text{s}=0.05$. The $\\mu$ thus obtained is denoted as $\\mu^{95\\% \\, \\text{CL}}$ and is said to be excluded at 95\\% CL.\n\n\\begin{gather}\n  p_\\mu=P(\\tilde{q_\\mu}\\geq \\tilde{q_\\mu}^{obs}|\\text{signal-plus-background})=\\int_{\\tilde{q_\\mu}^{obs}}^{\\infty}f(\\tilde{q_\\mu}|\\mu,\\hat{\\theta_\\mu}^{obs})d\\tilde{q_\\mu} \\\\\n  1-p_b=P(\\tilde{q_\\mu}\\geq \\tilde{q_\\mu}^{obs}|\\text{background-only})=\\int_{\\tilde{q_\\mu}^{obs}}^{\\infty}f(\\tilde{q_\\mu}|0,\\hat{\\theta_0}^{obs})d\\tilde{q_\\mu} \\\\\n  \\text{CL}_\\text{s}=\\frac{p_\\mu}{1-p_b}\n\\end{gather}\n\nThe discussion until now pertains to calculating the observed limits when the data is unblinded. However, when the analysis is performed in a blinded manner, we first calculate the expected limits, which are upper exclusion limits calculated using toy datasets of the background-only expectation. An extensive set of pseudo-data is generated using the background-only hypothesis, and CL$_\\text{s}$ and $\\mu^{95\\%\\text{CL}}$ are calculated for each of them. A distribution is built from the $\\mu^{95\\%\\text{CL}}$ values calculated for the pseudo-datasets. We then calculate the median expected limit from the 50\\% quantile of the cumulative distribution function. The $\\pm 1\\sigma$ and $\\pm 2\\sigma$ bands are calculated similarly by integrating the distribution until the appropriate quantiles are reached (see Figure~\\ref{fig:median}). The expected limits can be used to maximize the sensitivity of the search. A more stringent median limit corresponds to a more sensitive search.\n\n\\begin{figure*}[!htpb]\\centering\n  \\captionsetup{width=.87\\textwidth,justification=centering}\n  \\includegraphics[width=0.85\\textwidth]{plots/chapter8/Median.png}\n  \\caption{(left) An example of the differential distribution of possible limits on $\\mu$ for the background-only hypothesis. 
\begin{figure*}[!htpb]\centering
  \captionsetup{width=.87\textwidth,justification=centering}
  \includegraphics[width=0.85\textwidth]{plots/chapter8/Median.png}
  \caption{(left) An example of the differential distribution of possible limits on $\mu$ under the background-only hypothesis. (right) The cumulative probability distribution of the plot on the left, whose 2.5\%, 16\%, 50\%, 84\%, and 97.5\% quantiles define the median expected limit as well as the 68\% and 95\% bands for the expected value of $\mu$ under the background-only hypothesis.}
  \label{fig:median}
\end{figure*}

\section{Systematic uncertainties}

Several sources of experimental and theoretical systematic uncertainty are taken into account as input to the maximum likelihood fit. These uncertainties affect both the normalization and the shape of the distributions of the different processes. A normalization effect is described by a log-normal probability distribution function and corresponds to a multiplicative factor in the signal or background yields. These probability distribution functions are characterized by the parameter $\kappa$ and are well suited for positively valued observables. The log-normal distribution reads:
\begin{equation}
\rho(\theta|\tilde{\theta})=\frac{1}{\sqrt{2\pi}\ln\kappa}\exp\left(-\frac{\ln^2(\theta/\tilde{\theta})}{2(\ln\kappa)^2}\right) \frac{1}{\theta}
\end{equation}
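In this parametrization a normalization uncertainty of, say, 10\% corresponds to $\kappa = 1.10$. A short sketch (ours, with hypothetical numbers) of how such a constraint acts multiplicatively on a yield:

\begin{verbatim}
# Sketch: sample a kappa = 1.10 log-normal nuisance (a 10%
# normalization uncertainty) and apply it to a hypothetical yield.
import numpy as np

rng = np.random.default_rng(7)
kappa = 1.10
# ln(theta) ~ Normal(ln(theta_tilde), ln(kappa)), theta_tilde = 1
theta = rng.lognormal(mean=0.0, sigma=np.log(kappa), size=100000)

nominal_yield = 250.0
varied_yield = nominal_yield * theta  # stays positive by construction
print(theta.mean(), theta.std())      # close to 1.0 and 0.10
\end{verbatim}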
When a systematic uncertainty affects the bins of a distribution differently, it alters the shape of the distribution. Such uncertainties are called shape uncertainties~\cite{Conway:2011in} and are modeled using a linear interpolation method~\cite{Read:1999kh}. Two additional template distributions, obtained by varying the nuisance parameter by $\pm 1$ standard deviation, are used to implement a shape uncertainty, and a parameter that smoothly interpolates between these two templates is added to the likelihood.

As the analysis is categorized into different final states, partial and complete correlations between the uncertainties in the different categories are treated appropriately. They are summarized in Tables~\ref{tab:systematics_mt} and~\ref{tab:systematics_et}.

\input{tables/tab_sysUnc_mt.tex}
\input{tables/tab_sysUnc_et.tex}

The uncertainties in the \tauh reconstruction and identification efficiency are measured in different \pt ranges using a tag-and-probe method and are found to be in the range of 2--3\%. The uncertainties for the different \pt ranges are treated as uncorrelated. These uncertainties are also considered for the embedded \Pgt{}\Pgt background, where they are treated as 50\% correlated with the simulation uncertainties. For the embedded samples, the trigger on the muons that are later replaced by tau leptons carries an uncertainty of about 4\%. This uncertainty is treated as uncorrelated between the three years because of the different triggering criteria. The tracking uncertainty is also considered; it is treated as correlated between the $\text{h}^{\pm}$ and $\text{h}^{\pm}\text{h}^{\mp}\text{h}^{\pm}$ decay modes and as uncorrelated between the $\text{h}^{\pm}\pi^{0}$ and $\text{h}^{\pm}\text{h}^{\mp}\text{h}^{\pm}\pi^{0}$ decay modes.

The uncertainties due to electrons and muons misidentified as \tauh candidates amount to 40\% and 10--70\%, respectively, in different bins of \pt, $\eta$, and \tauh decay mode. The \tauh energy scale uncertainty ranges from 0.7 to 1.2\%; it is treated as uncorrelated between decay modes and as 50\% correlated between embedded and simulated backgrounds. The uncertainty due to the electron (muon) energy scale for misidentified leptons is independent of the \tauh energy scale and amounts to 7\% (1\%). The effect of the lepton energy resolution is found to be negligible for the study under consideration.

The jet energy scale (JES) uncertainty receives contributions from several sources and is evaluated as a function of \pt and pseudorapidity. Its effect is propagated to the BDT discriminator by varying each source of uncertainty by one standard deviation and propagating the variation to the fit template of each process; the resulting uncertainties are of the order of 3--20\%. The jet energy resolution uncertainties are also taken into account, and they mostly impact the \mjj-defined categories. Jets with $\pt < 10 \GeV$ are not considered in the PF candidate list; they fall under the unclustered energy. The unclustered energy scale is considered independently for charged particles, neutral hadrons, photons, and very forward particles; it affects both shape and yield, and the four components are treated as uncorrelated. The efficiency to classify a jet as b-tagged differs between data and simulation, and scale factors depending on jet \pt and $\eta$ are used to correct the simulation. The uncertainties in the measurement of these scale factors are treated as a systematic uncertainty.

The uncertainty in lepton (\Pe, \Pgm) tracking, reconstruction, identification, and isolation is measured with the tag-and-probe method in \Zee and \Zmm events in data and amounts to about 2\%~\cite{Chatrchyan:2012xi, Khachatryan:2015hwa, Khachatryan:2015dfa, CMS:2016gvn}. The uncertainty in the measurement of the muon momentum scale is in the range $0.4-2.7\%$ depending on $\eta$, while for the electron momentum scale it is less than 1\%. The selection of events using electron- and muon-based triggers results in an additional 2\% uncertainty in the yield of the simulated processes.

The misidentification rates in the \ehad and \muhad final states are parameterized with a linear function of the \tauh transverse momentum, and a pair of uncertainties is assigned per fit function. Discriminators with different signal-to-background efficiencies are used to single out \tauh candidates against electrons and muons, which entails an additional 3\% uncertainty for the \ehad channel. The normalization uncertainties in the data-driven estimation of the misidentified lepton backgrounds ($\text{jet}\to\tauh, \Pgm, \Pe$) are taken from the orthogonal control region described in Chapter~\ref{bkg_est}.

An additional shape uncertainty is estimated for the misidentified lepton background in the \PW boson enriched control region, which is defined orthogonally to the signal region in the hadronic channels. The variables $\Delta\phi(\ell, MET)$ are chosen because a clear discrepancy can be seen for them in the \PW boson enriched control region. The discrepancy is obtained by subtracting the contribution of the other backgrounds from the data and dividing the difference by the estimated misidentified lepton background. It is parameterized as a function of \dphimmet for the \muhad channel and as a function of \dphiemet for the \ehad channel. Because no such discrepancy is observed in the signal region, this is not applied as a correction there, and only a shape uncertainty is considered.
The discrepancy in the \PW boson enriched control region, the parametrization used for the shape uncertainty, and the distribution observed in the signal region are shown in Figure~\ref{fig:mutauh_systematics} for the \Hmuhad channel and in Figure~\ref{fig:etauh_systematics} for the \Hehad channel.

\begin{figure}[htbp]
  \centering
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2016.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2017.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2018.pdf}
  }
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/2016.png}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/2017.png}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/2018.png}
  }
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2016SR.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2017SR.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/mutau/dPhiMuMET2018SR.pdf}
  }
  \caption{Shape uncertainty estimated for the \Hmuhad channel. (a) Distribution of $\Delta\phi(\Pgm, MET)$ from the \PW boson enriched control region. (b) The discrepancy seen above is parametrized with a linear function. (c) Distribution of $\Delta\phi(\Pgm, MET)$ from the signal region.}
  \label{fig:mutauh_systematics}
\end{figure}

\begin{figure}[htbp]
  \centering
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2016.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2017.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2018.pdf}
  }
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/2016.png}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/2017.png}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/2018.png}
  }
  \subfigure[]{
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2016SR.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2017SR.pdf}
    \includegraphics[width=0.3\textwidth]{plots/chapter8/systematic/etau/dPhiEMET2018SR.pdf}
  }
  \caption{Shape uncertainty estimated for the \Hehad channel. (a) Distribution of $\Delta\phi(\Pe, MET)$ from the \PW boson enriched control region. (b) The discrepancy seen above is parametrized with a linear function. (c) Distribution of $\Delta\phi(\Pe, MET)$ from the signal region.}
  \label{fig:etauh_systematics}
\end{figure}

The misidentified lepton background in the \emu and \mue final states is affected by different shape uncertainties. The statistical uncertainties arising from the fits of the extrapolation factors, as functions of the spatial separation between the electron and the muon and of the lepton \pt, are taken into account. The extrapolation factors are estimated in the anti-isolated muon control region, which results in an additional uncertainty due to the inversion of the muon isolation, with a combined effect of about 20\% on the normalization.
The predominant source of uncertainty in the simulated background processes, \Zee, \Zmm, \Ztt, \PW{}\PW, \PZ{}\PZ, \PW{}\Pgg, \ttbar, and single top quark production, is the measurement of the cross sections of these processes.

The theoretical uncertainties affecting the Higgs boson production cross sections are the factorization and renormalization scales, the choice of the PDFs, and the strong coupling constant (\as). These uncertainties affect the normalization of the signal shape and are taken from Ref.~\cite{deFlorian:2016spz}. The variations of the QCD scales result in 3.9\%, 0.5\%, 0.9\%, and 0.8\% uncertainties for the ggF, VBF, ZH, and WH cross sections, respectively, while the PDF+\as variations result in 3.2\%, 2.1\%, 1.3\%, and 1.9\% uncertainties. Acceptance effects are also taken into account when varying the renormalization and factorization scales and the PDF+\as.

The uncertainty in the \Htt branching fraction includes a 1.7\% uncertainty due to missing higher-order corrections, a 0.99\% parametric uncertainty in the quark masses, and a 0.62\% parametric uncertainty in \as. In addition, the latest CMS measurement of the \Htt signal strength, $\mu = 0.85^{+0.12}_{-0.11}$, has been used~\cite{CMS:2020dvp}. The uncertainty in the \HWW branching fraction includes a 0.99\% uncertainty due to missing higher-order corrections, a 0.99\% parametric uncertainty in the quark masses, and a 0.66\% parametric uncertainty in \as.

The bin-by-bin uncertainties account for the statistical uncertainties in each bin of the template distributions of every process. The Barlow--Beeston Lite~\cite{Conway:2011in, Barlow:1993dm} approach is used, assigning a single nuisance parameter to scale the sum of the process yields in each bin, constrained by the total uncertainty, instead of requiring separate parameters, one per process. This minimizes the number of parameters needed in the maximum-likelihood fit. These uncertainties are treated as uncorrelated between bins, processes, and categories.

The integrated luminosities of the 2016, 2017, and 2018 data-taking periods are individually known with uncertainties in the 2.3--2.5\% range~\cite{CMS:2017sdi, CMS:2018elu, CMS:2019jhq}, while the total Run~2 (2016--2018) integrated luminosity has an uncertainty of 1.8\%; the improvement in precision reflects the (uncorrelated) time evolution of some systematic effects. The uncertainty in the integrated luminosity affects all processes whose normalization is taken directly from the simulation. Shape uncertainties related to the pileup are considered by varying the weights applied to the simulation. The weight variation is obtained from a 5\% change of the total inelastic cross section used to estimate the number of pileup events in data. Other minimum bias event modeling and simulation uncertainties are estimated to be much smaller than those on the rate and are therefore neglected.

During the 2016 and 2017 data-taking periods, the ECAL Level-1 trigger was affected by pre-firing: a gradual shift in the timing of its inputs in the region $\aeta > 2.0$ caused a specific trigger inefficiency. For events containing an electron (a jet) with \pt larger than $\approx$50~\GeV ($\approx$100~\GeV) in the region $2.5 < \aeta < 3.0$, the efficiency loss is $\approx$10--20\%, depending on \pt, $\eta$, and time. Correction factors were computed from data and applied to the acceptance evaluated with simulation.
The uncertainty due to this correction factor is of the order of 1\%.
{"text": "\n\\subsection{Defining spheres}\n\n\\(x^2+y^2+z^2 = r^2\\)\n\n\\subsection{Volume of a sphere}\n\n\\(V=\\)\n\n\\subsection{Surface area of a sphere}\n\n\\(A=\\)\n\n", "meta": {"hexsha": "a605bd21f8e877c397e4db1047dc8852f44c6e32", "size": 143, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/geometryAlgebraic/02-01-sphere.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/geometryAlgebraic/02-01-sphere.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/geometryAlgebraic/02-01-sphere.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 10.2142857143, "max_line_length": 37, "alphanum_fraction": 0.6293706294, "num_tokens": 49, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9324533107374444, "lm_q2_score": 0.6039318337259583, "lm_q1q2_score": 0.5631382378175056}}
{"text": "\\chapter{Discussion and Outlook}\n\n\\textbf{by Bastian Boll} \\\\\n\nUnfortunately, the alignments obtained by our original approach are fairly underwhelming. They do improve upon the initial guess, but the required computation time is relatively large and the alignments are not precise enough to generate subtitles without additional human help. Upon closer examination of the model predictions one can especially see that predicted probabilities of single words over the time of a TED talk remain almost constant. Our hypothesis of why the models seem to be unable to capture the structure of the data is twofold:\n\\begin{enumerate}\n\t\\item As previously alluded to, the (acoustic) structure of the labels is continuous, but the categorical labels we provided for training are discrete. This makes classification much more challenging because mistaking a word in the audio for an acoustically similar one is treated by cross entropy loss as if the predicted word and the label were completely dissimilar. This is addressed via a word embedding approach described in section \\ref{seq:word_embedding}.\n\t\\item It is challenging to construct a good loss function which allows for the inherent imprecision of our data. We describe an alternative approach to mitigate this problem in section \\ref{seq:interval_seq}.\n\\end{enumerate}\n\n\n\\section{Acoustic word embedding}\n\\label{seq:word_embedding}\n\n\\textbf{by Bastian Boll} \\\\\n\nWith regard to acoustical structure, we need to use full words from the transcript as labels instead of word stems. A statistical survey of the dataset analogous to the one described in section \\ref{sec:reduction_labels} leads to the decision to use at least about $7000$ distinct words.\\\\\nOne can leverage available dictionary resources such as \\href{https://github.com/cmusphinx/cmudict}{cmudict} through \\href{https://www.nltk.org/}{nltk} in order to transcribe each word into a sequence of phonemes. To quantify the acoustic distance between two words, we can construct the sets of 2-shingles of the respective phoneme sequences and compute their Jaccard distance. This defines a distance metric on the set of words. To be able to leverage this distance metric computationally, we compute an embedding of the words into a Euclidean space such that the Euclidean distance corresponds to the above acoustical distance. This embedding process is a non-trivial optimization problem in a very high dimensional space. To make it computationally feasible, we construct a stochastic variant of the objective and iterate by stochastic gradient descent. The idea of  embedding structure into Euclidean spaces is well-known in the research community and has been used in adjacent problem settings \\cite{silfverberg2018sound,goldberg2014word2vec,tang2014learning}.\n\n\n\\section{Interval sequence prediction}\n\\label{seq:interval_seq}\n\n\\textbf{by Bastian Boll} \\\\\nWe assume the error between the computed time $t_w$ for $w$ in the audio and the exact time $w$ to be normally distributed with zero mean. Empirically, the standard deviation of the respective error distribution may be as high as $0.8$ seconds. In order to select an interval which contains a given word in the transcript with high probability, we need to consider larger intervals.\\\\\nThe model is now set up to predict a sequence of eight words for the MFCC features extracted from a $2.0$ second interval. 
\section{Interval sequence prediction}
\label{seq:interval_seq}

\textbf{by Bastian Boll} \\
We assume the error between the computed time $t_w$ of a word $w$ in the audio and its exact time to be normally distributed with zero mean. Empirically, the standard deviation of the respective error distribution may be as high as $0.8$ seconds. In order to select an interval which contains a given word of the transcript with high probability, we need to consider larger intervals.\\
The model is now set up to predict a sequence of eight words for the MFCC features extracted from a $2.0$ second interval. From the middle of the interval, we take two consecutive words as labels, which should be part of the eight predicted words with high probability. As a loss function, we compute the minimal acoustic distance between two consecutive predicted words and the two label words.


\section{Conclusion}

\textbf{by Bastian Boll and Paul Warkentin} \\

Our optimization setting has proven to be well suited to tackle the alignment problem at hand. One can clearly see the positive effect of being able to leverage prior knowledge and to prevent unwanted classification side effects such as a possible confusion of the word order.\\
We have also seen specialized loss function constructions being better geared towards this specific learning problem, yielding superior results on real-world data. However, the computed alignments are still not sufficient for generating e.g. movie subtitles, because local deviations may be on the order of seconds.\\

While we have already outlined ways to improve the prediction results in sections \ref{seq:word_embedding} and \ref{seq:interval_seq}, the gained structural improvements do not seem to fully overcome the difficulty presented by the coarsely labeled dataset.\\
In order to leverage both the availability of coarsely labeled data and the ability to learn lower level features from data with greater time resolution, one could try to devise a weakly supervised approach. Similar endeavors have already been shown to be effective for adjacent problem domains \cite{wei2017personalized, synnaeve2014weakly}. In fact, \cite{serriere2016weakly} even describes proceedings to address very similar alignment problems.

% In summary, we could successfully tackle the problem of automatic subtitle alignment. The results depend on the class of neural network and the loss function that was used during the training process. In the end, the most difficult parts to implement were the nearly exact extraction of the timings of each spoken word as well as the optimization process and loss function considerations. \\

% The latter still presents opportunity for improvement, as the time required is very inconsistent and overall fairly large.
% The problem of the optimizer converging towards wrong local minima could also be addressed by a globalization strategy or by introducing a regularization term to make the computed distribution of words more uniform.\\

% Mastering this type of problem with neural networks may also enable application of similar methods to adjacent problem domains such as indicating the progress of an orchestra playing a classical piece of music.
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{bm}\n\\usepackage{xspace}\n\\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}\n\\setcounter{MaxMatrixCols}{20}\n\\usepackage[capitalise]{cleveref}\n\\usepackage[per-mode=symbol]{siunitx}\n\\usepackage{url}\n\n\\newcommand{\\xk}{\\ensuremath{\\bm{x}_k}\\xspace}\n\\newcommand{\\yk}{\\ensuremath{\\bm{y}_k}\\xspace}\n\\newcommand{\\xkk}{\\ensuremath{\\bm{x}_{k+1}}\\xspace}\n\\newcommand{\\xx}[1]{\\ensuremath{\\bm{x}_{#1}}\\xspace}\n\\newcommand{\\uk}{\\ensuremath{\\bm{u}_k}\\xspace}\n\\newcommand{\\ukk}{\\ensuremath{\\bm{u}_{k+1}}\\xspace}\n\\newcommand{\\uu}[1]{\\ensuremath{\\bm{u}_{#1}}\\xspace}\n\\newcommand{\\duk}{\\ensuremath{\\bm{\\dot{u}}_k}\\xspace}\n\\newcommand{\\dukk}{\\ensuremath{\\bm{\\dot{u}}_{k+1}}\\xspace}\n\\newcommand{\\duu}[1]{\\ensuremath{\\bm{\\dot{u}}_{#1}}\\xspace}\n\\newcommand{\\ek}{\\ensuremath{\\bm{e}_k}\\xspace}\n\\newcommand{\\ekk}{\\ensuremath{\\bm{e}_{k+1}}\\xspace}\n\\newcommand{\\ee}[1]{\\ensuremath{\\bm{e}_{#1}}\\xspace}\n\\newcommand{\\rk}{\\ensuremath{\\bm{r}_k}\\xspace}\n\\newcommand{\\ymin}{\\ensuremath{\\bm{y}_\\text{min}}\\xspace}\n\\newcommand{\\ymax}{\\ensuremath{\\bm{y}_\\text{max}}\\xspace}\n\\newcommand{\\umin}{\\ensuremath{\\bm{u}_\\text{min}}\\xspace}\n\\newcommand{\\umax}{\\ensuremath{\\bm{u}_\\text{max}}\\xspace}\n\\newcommand{\\dumin}{\\ensuremath{\\bm{\\dot{u}}_\\text{min}}\\xspace}\n\\newcommand{\\dumax}{\\ensuremath{\\bm{\\dot{u}}_\\text{max}}\\xspace}\n\\newcommand{\\epsy}{\\ensuremath{\\epsilon_{y}}\\xspace}\n\\newcommand{\\epsu}{\\ensuremath{\\epsilon_{u}}\\xspace}\n\\newcommand{\\epsdu}{\\ensuremath{\\epsilon_{\\dot{u}}}\\xspace}\n\\newcommand{\\epst}{\\ensuremath{\\epsilon_T}\\xspace}\n\\newcommand{\\code}[1]{\\texttt{#1}\\xspace}\n\\newcommand{\\R}{\\ensuremath{\\mathbb{R}\\xspace}}\n\\newcommand{\\ts}{\\ensuremath{t_s}}\n\\newcommand{\\Co}{\\ensuremath{C_1}}\n\\newcommand{\\Cc}{\\ensuremath{C_2}}\n\\def\\pwidth{0.49}\n\n\\begin{document}\n\n\\section{Problem formulation}\n\nThe \\code{MPCProblem} class permits to solve Model Predictive Control (MPC) problems.\nThe class considers the following state space representation of the system\n\\begin{align}\n    \\xkk &= A\\xk + B\\uk\\\\\n    \\yk &= \\Co\\xk\n\\end{align}\nwhere $\\xk \\in \\R^n$, $\\yk \\in \\R^q$, and $\\uk \\in \\R^p$ represent the state of the system, the output, and the control at time step $k$, respectively.\n$A \\in \\R^{n \\times n}$, $B \\in \\R^{n \\times p}$, and $\\Co \\in \\R^{q \\times n}$ are the state, the control, and the output matrices.\nThe matrices given to the class are the discrete time matrices, so if you have the continuous time state space representation of your system, you should first discretize it for a particular sampling time.\n\nIn general, the \\code{MPCProblem} class solves the following quadratic optimization problem over the time horizon $T$:\n\\begin{equation}\n    \\label{eq:min}\n    \\min_s s^\\intercal Q s\n\\end{equation}\nwith\n\\begin{equation}\n    \\label{eq:s}\n    s = [\\bm{e}, \\bm{u}, \\bm{\\dot{u}}, \\epsy, \\epsy, \\epsdu, \\epst]\n\\end{equation}\nsubject to\n\\begin{align}\n    \\uu{0} &= \\bm{u}^* \\label{eq:u0}\\\\\n    \\ek &= \\Co\\xk - \\rk,\\quad k = 0,\\dots,T \\label{eq:error}\\\\\n    -\\epst &\\leq \\ee{T} \\leq \\epst \\label{eq:terminal}\\\\\n    \\ukk &= \\uk + \\duk \\cdot 
\ts,\quad k = 0,\dots,T-1 \label{eq:duk}\\
    \Cc\xkk &\leq \ymax + \epsy,\quad k = 0,\dots,T-1 \label{eq:ymax}\\
    \Cc\xkk &\geq \ymin - \epsy,\quad k = 0,\dots,T-1 \label{eq:ymin}\\
    \ukk &\leq \umax + \epsu,\quad k = 0,\dots,T-1 \label{eq:umax}\\
    \ukk &\geq \umin - \epsu,\quad k = 0,\dots,T-1 \label{eq:umin}\\
    \ukk - \uk &\leq \dumax\cdot\ts + \epsdu,\quad k = 0,\dots,T-1 \label{eq:dumax}\\
    \ukk - \uk &\geq \dumin\cdot\ts - \epsdu,\quad k = 0,\dots,T-1 \label{eq:dumin}\\
    \epsy &\geq 0 \label{eq:epsy}\\
    \epsu &\geq 0 \label{eq:epsu}\\
    \epsdu &\geq 0 \label{eq:epsdu}\\
    \epst &\geq 0 \label{eq:epst}
\end{align}
where
\begin{equation}
    \label{eq:xk}
    \xk = A^{k} \xx{0} + \sum_{i=0}^{k-1} A^{k-i-1}B \uu{i}
\end{equation}

In \cref{eq:min}, $s$ is the minimization variable and $Q$ a diagonal matrix, which can be modified by the user to weight the minimization terms.
The $Q$ matrix must be positive definite, so all entries on the diagonal must be strictly positive.
This is required because of the resolution method used by the solver, which requires problems to be strictly convex.
This kind of problem is simpler to solve, so the solution can be computed very efficiently.
As a consequence, we cannot set diagonal entries to 0, which requires the problem to be expressed in a non-intuitive way.
In standard MPC problems, \xk is part of the state, so that, for example, the error constraint is formulated as
\begin{align}
    \xkk &= A\xk + B\uk\\
    \ek &= \Co\xk - \rk,
\end{align}
which is much easier to read than the combination of \cref{eq:error,eq:xk}.
Clearly, \xk should not be part of the minimization: you want to minimize the error, not the state.
With a solver accepting semi-definite programs, this is not a problem, as we can simply set the diagonal entries of the $Q$ matrix regarding the \xk terms to 0.
With the QuadProg++\footnote{\url{https://github.com/liuq/QuadProgpp}} solver used here this is not possible, and this is the reason why the problem is formulated in a more ``complex'' way.

The $s$ variable is composed of a set of subvariables:
\begin{itemize}
    \item $\bm{e}$, output error: this is the set of output errors for each time step $k$ in the considered time horizon. $\ek \in \R^q$ is the difference between the output of the system ($\Co\xk$) and the target reference value ($\rk$) (constraint in \cref{eq:error}) for $k = 0,\dots,T$.
    \item $\bm{u}$, control: this is the set of control actions for each time step $k$ in the considered time horizon, with $\uk \in \R^p$ for $k = 0,\dots,T-1$. The first value (\uu{0}) is fixed and is a parameter of the problem (\cref{eq:u0}).
    \item $\bm{\dot{u}}$, control derivative: this is the set of control derivatives, i.e., $\duk \in \R^p$ is the difference between \ukk and \uk (constraint in \cref{eq:duk}) for $k = 0,\dots,T-1$. This term is optional and can be disabled when instantiating the problem, disregarding the control derivative when minimizing.
    \item \epsy: slack variable for the violation of output constraints. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
    \item \epsu: slack variable for the violation of control constraints.
This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
    \item \epsdu: slack variable for the violation of control derivative constraints. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and allows no constraint violation.
	\item \epst: slack variable for the terminal constraint. This is optional and can be disabled when instantiating the problem. This removes the slack variable from the optimization problem and transforms the terminal constraint into $\ee{T} = 0$.
\end{itemize}

With respect to the constraints, the interpretation is the following:
\begin{itemize}
    \item \cref{eq:terminal}: this is called the terminal constraint and indicates that, at the end of the time horizon, we want our output error to be zero, or very close to it. Notice that this might make the problem unfeasible, e.g., when setting a very small time horizon. For this reason, the constraint can be disabled. As an alternative to disabling the constraint, it is possible to enable a slack variable for it, so that the constraint can be violated, but at a certain cost.
    \item \cref{eq:duk}: this constraint simply defines that two successive control actions differ by the control derivative. When the minimization on the control derivative is disabled, this constraint is not used.
    \item \cref{eq:ymax,eq:ymin}: constraints on the output limiting the maximum and the minimum output value. Notice that we use a different output matrix than the one of the error constraint (\cref{eq:error}), which we call \Cc. The reason is simple: imagine that our state is composed of the acceleration and the speed of a vehicle and that we want to reach a target speed while limiting the acceleration. If we use the same output matrix this is not possible (if not with some ugly hack), but by using two different output matrices we increase the flexibility while lowering the computational effort. In the document and in the code, $\Cc \in \R^{q_2 \times n}$. These constraints can be violated if the slack variable for the output is enabled. If the slack variable on the output is not enabled, then the \epsy variable is not considered.
    \item \cref{eq:umax,eq:umin}: constraints on the control limiting the maximum and the minimum control value. These constraints can be violated if the slack variable for the control is enabled. If the slack variable on the control is not enabled, then the \epsu variable is not considered.
    \item \cref{eq:dumax,eq:dumin}: constraints on the control derivative limiting the maximum and the minimum control derivative value. The limit is multiplied by the sampling time \ts, so changing the sampling time doesn't require the user to change the \dumax and \dumin values. These constraints can be violated if the slack variable for the control derivative is enabled. If the slack variable on the control derivative is not enabled, then the \epsdu variable is not considered.
    \item \cref{eq:epsy,eq:epsu,eq:epsdu,eq:epst}: these constraints simply force the slack variables to be positive.
\end{itemize}

Finally, \cref{eq:xk} defines \xk in terms of the variables of the problem.
\xx{0} and \uu{0} are the initial state and control action, respectively, while \uk for $k = 1,\dots,T-1$ are the control actions computed by the algorithm.
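To make the structure of the problem concrete, here is a minimal sketch of the same kind of tracking QP (ours, written with the \code{cvxpy} Python modeling library rather than the QuadProg++ backend the class actually uses, and omitting the slack variables and bounds):

\begin{verbatim}
# Sketch: tracking QP with u_0 fixed, x_k eliminated through the
# dynamics, a hard terminal constraint, and unit weights on e and u.
import cvxpy as cp
import numpy as np

T = 60
A = np.array([[0.8187, 0.0], [0.09063, 1.0]])
B = np.array([[0.1812], [0.009365]])
C1 = np.array([[0.0, 1.0]])
x0 = np.zeros(2)
r = 1.0

u = cp.Variable((T, 1))
constraints = [u[0] == 0.0]      # u_0 is a parameter of the problem
cost, x = 0, x0
for k in range(T):
    x = A @ x + B @ u[k]         # roll the dynamics forward
    e = C1 @ x - r               # output error
    cost += cp.sum_squares(e) + cp.sum_squares(u[k])
    if k > 0:                    # control-derivative term, here as a
        cost += cp.sum_squares(u[k] - u[k - 1])  # plain difference
constraints += [C1 @ x - r == 0]  # terminal constraint without slack

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[:5])
\end{verbatim}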
\section{Sample problem}

The library comes with a sample program (\texttt{test-mpc.cc}) that solves an MPC problem where the state is described by the following system of differential equations:
\begin{equation}
    \dot{\bm{x}} = \begin{cases}
        \dot{a} = -\frac{1}{\tau} a + \frac{1}{\tau} u\\
        \dot{v} = a
    \end{cases}
\end{equation}
The system state represents the acceleration and the speed of a vehicle, where the acceleration is subject to an actuation lag modeled as a first order lag with a time constant $\tau$.
By writing the system in the standard state-space representation we obtain
\begin{align}
    \dot{\bm{x}} &= G\bm{x} + H\bm{u}\\
    \bm{y} &= \Co\bm{x}
\end{align}
where
\begin{equation}
    G = \begin{bmatrix}-\frac{1}{\tau} & 0\\1 & 0\end{bmatrix},\quad H = \begin{bmatrix}\frac{1}{\tau}\\0\end{bmatrix},\quad \Co = \begin{bmatrix}0 & 1\end{bmatrix}
\end{equation}
We discretize the continuous time representation by transforming $G$ and $H$ into $A$ and $B$, respectively, so that the state-space representation becomes
\begin{align}
    \xkk &= A\xk + B\uk\\
    \yk &= \Co\xk
\end{align}
where
\begin{equation}
    A = e^{G \cdot \ts},\quad B = \int_0^{\ts} e^{G \lambda} d\lambda \, H
\end{equation}
obtaining
\begin{equation}
    A = \begin{bmatrix}
        e^{-\frac{\ts}{\tau}} & 0\\
        \tau\left(1 - e^{-\frac{\ts}{\tau}}\right) & 1\\
    \end{bmatrix},\quad
    B = \begin{bmatrix}
        1 - e^{-\frac{\ts}{\tau}}\\
        \ts + \tau\left(e^{-\frac{\ts}{\tau}} - 1\right)
    \end{bmatrix}
\end{equation}
$A$, $B$, and $\Co$ are thus the matrices we pass to the \texttt{MPCProblem} class given a particular choice of the time constant $\tau$ and the sampling time \ts.

In the provided example, we set $\ts = \SI{0.1}{\second}$ and $\tau = \SI{0.5}{\second}$, obtaining
\begin{equation}
    A = \begin{bmatrix}
        0.8187 & 0\\
        0.09063 & 1\\
    \end{bmatrix},\quad
    B = \begin{bmatrix}
        0.1812\\
        0.009365\\
    \end{bmatrix}
\end{equation}
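These numbers are easy to reproduce; a short sketch (ours) using the standard augmented-matrix trick for the zero-order-hold discretization:

\begin{verbatim}
# Sketch: discretize (G, H) with sampling time ts via the matrix
# exponential; expm([[G, H], [0, 0]] * ts) = [[A, B], [0, I]].
import numpy as np
from scipy.linalg import expm

tau, ts = 0.5, 0.1
G = np.array([[-1.0 / tau, 0.0], [1.0, 0.0]])
H = np.array([[1.0 / tau], [0.0]])

M = expm(np.block([[G, H], [np.zeros((1, 3))]]) * ts)
A, B = M[:2, :2], M[:2, 2:]
print(A)  # ~[[0.8187, 0], [0.0906, 1]]
print(B)  # ~[[0.1813], [0.0094]], matching the above up to rounding
\end{verbatim}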
The initial state $\xx{0}$ and the initial control $\uu{0}$ are set to $[0, 0]^\intercal$ and $[0]$, respectively, the reference $\rk$ to $[1]$, and the time horizon $T$ to 60 steps (\SI{6}{\second}); the terminal constraint is enabled, and there is no bound on the output, control, and control derivative variables.
Finally, all the weights in the $Q$ matrix are set to 1.

In the root folder you find a bash script (\texttt{test-mpc.sh}) that runs the test application under different output parameters.
If the script finds an \texttt{R} installation, it will also plot the results to some PDF files.

The first example (\cref{fig:test01}) shows the results for the default parameters, i.e., the ones previously described.
The graph shows the different quantities considered in the optimization problem, i.e., the actual speed and the target speed, the acceleration, the control input, and the control derivative.
As expected, the solution brings the system to the target speed while also minimizing the effort on the control and the control derivative.

\begin{figure}[h]
    \centering
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test01}
        \caption{Results for default parameters.}
        \label{fig:test01}
    \end{minipage}
    \hfill
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test02}
        \caption{Results for lower weights on control and control derivative.}
        \label{fig:test02}
    \end{minipage}
\end{figure}

In the second example (\cref{fig:test02}) we lower the weights for the control and the control derivative terms in the minimization, setting them to $0.01$ instead of $1$.
The control and the control derivative are now much larger, as we told our solver that we do not care too much about minimizing them.
This results in a faster settling time.

In the third example (\cref{fig:test03}) we use the parameters of the second but add upper and lower bounds on the control actions, i.e., \SI{1}{\meter\per\second\squared} and \SI{-1}{\meter\per\second\squared}, respectively.
The result is very similar to the second, the only difference being that the control action is ``truncated'' at \SI{1}{\meter\per\second\squared} as per the constraint.
This causes a slightly larger settling time with respect to the second example.

\begin{figure}[h]
    \centering
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test03}
        \caption{Results for lower weights on control and control derivative, plus bounded control action.}
        \label{fig:test03}
    \end{minipage}
    \hfill
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test04}
        \caption{Results for lower weights on control and control derivative, plus bounded control and control derivative action.}
        \label{fig:test04}
    \end{minipage}
\end{figure}

In the fourth example (\cref{fig:test04}), w.r.t.
the third, we add a constraint on the control derivative as well, i.e., $\SI{-0.5}{\meter\per\second\cubed} \leq \duk \leq \SI{0.5}{\meter\per\second\cubed}$.
The result is pretty evident, both in terms of control derivative bounds and in terms of control input.
The bound on the control derivative causes a ``linear'' increase and decrease of the control action.
The additional bound causes the settling time to increase.

The fifth example (\cref{fig:test05}) is the same as the second (i.e., we lower the weights for control and control derivative) but, in addition, we add a bound on the maximum acceleration (\SI{0.6}{\meter\per\second\squared}).
The plot shows that the system reaches the target speed, but the acceleration never exceeds the bound set by the constraint.

\begin{figure}[h]
    \centering
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test05}
        \caption{Results for lower weights on control and control derivative, plus bound on acceleration.}
        \label{fig:test05}
    \end{minipage}
    \hfill
    \begin{minipage}[t]{\pwidth\textwidth}
        \includegraphics[width=\textwidth]{./fig/test06}
        \caption{Results for lower weights on control and control derivative, plus bound on acceleration and slack variable for output bounds enabled.}
        \label{fig:test06}
    \end{minipage}
\end{figure}

The final example (\cref{fig:test06}) enables the slack variable for the output constraint set in the fifth example, setting the weight penalizing the slack variable in the $Q$ matrix to 10.
In this case the solver is allowed to violate the constraint on the acceleration, as can be seen in the plot.
The constraint bound is highlighted by the gray dashed line, and the solver exceeds this limit, reaching roughly \SI{0.75}{\meter\per\second\squared}.

\end{document}
{"text": "\\documentclass[prb,preprint]{revtex4-1} \n% The line above defines the type of LaTeX document.\n% Note that AJP uses the same style as Phys. Rev. B (prb).\n\n% The % character begins a comment, which continues to the end of the line.\n\n\\usepackage{amsmath}  % needed for \\tfrac, \\bmatrix, etc.\n\\usepackage{amsfonts} % needed for bold Greek, Fraktur, and blackboard bold\n\\usepackage{graphicx} % needed for figures\n\n\\DeclareMathOperator{\\dd}{d\\!}\n\\DeclareMathOperator{\\ddd}{\\mathrm{d}}\n\n\n\\begin{document}\n\n\\title{Demystifying the Lagrangian formalism for field theories}\n\n\\author{Gerd Wagner}\n\\email{gerdhwagner@t-online.de} % optional\n\\affiliation{Mayener Str. 131, 56070 Koblenz, Germany} % optional second address\n% If there were a second author at the same address, we would put another \n% \\author{} statement here.  Don't combine multiple authors in a single\n% \\author statement.\n%\\affiliation{mailing address}\n% Please provide a full mailing address here.\n\n\\author{Matthew W. Guthrie}\n\\email{matthew.guthrie@ucf.edu}\n\\affiliation{Department of Physics, University of Central Florida, Orlando, FL 32816}\n\n% See the REVTeX documentation for more examples of author and affiliation lists.\n\n\\date{\\today}\n\n\\begin{abstract} This paper expands on previous work\\cite{guthrie2019demystifying} to derive and motivate the Lagrangian formulation of field theories.\nIn the process, we take three deliberate steps.\nFirst, we give the definition of the action and derive Euler-Lagrange equations for field theories.\nSecond, we prove the Euler-Lagrange equations are independent under arbitrary coordinate transformations and motivate that this independence is desirable for field theories in physics.\nWe then use the Lagrangian for Electrodynamics as an example field Lagrangian and prove that the related Euler-Lagrange equations lead to Maxwell's equations.\n\\end{abstract}\n\n\\maketitle\n\n\n\\section{Introduction}\n\nWhen Lagrangian field theory is introduced it is often presented as a generalization of the Lagrangian formalism of classical mechanics, and this profoundly shapes physicists' understanding of the subject.\nFor the most common modern example, see Goldstein's treatment of classical field theories \\cite{goldstein2002classical}, which appears in one of the common graduate texts in classical mechanics.\nThe approach of this paper differs from traditional approaches in that we start by presenting the Lagrangian formulation of field theories as a purely mathematical formalism.\nWe find that the Lagrangian formulation has very well defined coordinate and field transformation properties.\nBecause we consider these properties extremely useful for physical field theories, the desire to find Lagrangians for these theories in order to turn their field equations into Euler-Lagrange equations is well motivated.\nAs a proof of concept, we provide the Lagrangian for Electrodynamics as a definition and show that it leads to Maxwell's equations.\nThe domain of this work is purely non-relativistic which means neither the Lagrangian formulation of the field theories nor the treatment of Electrodynamics requires invoking concepts from special relativity. 
\section{Definition of the Lagrangian formalism for fields}\label{definition}

The experienced reader may recognize the symbols and names used in this section.
Nonetheless, this section should be thought of as containing purely mathematical definitions and conclusions.
We define a field as a function $\psi(t,x)$ of time $t$ and of three spatial coordinates denoted by $x$.
The field's values may be multidimensional.
A well-known example is the electric field, which has a direction in space and whose values are thus three-dimensional.

A Lagrangian $\mathcal{L}$ of $\psi$ is defined as a function that may depend on $\psi$ itself as well as on its time and spatial derivatives:
\begin{equation}
\mathcal{L} = \mathcal{L}\left(\psi, \frac{\partial \psi}{\partial t}, \frac{\partial \psi}{\partial x}\right).
\end{equation}
The action $S$ for two points $t_1$ and $t_2$ in time and a three-dimensional area of space $A$ is defined as the following integral of the Lagrangian:
\begin{equation}
S := \int\limits_{t_1}^{t_2} \int_{A} \mathcal{L}\left(\psi, \frac{\partial \psi}{\partial t}, \frac{\partial \psi}{\partial x}\right) \dd x^3 \dd t.
\end{equation}


Next, we are interested in the conditions that $\mathcal{L}$ must fulfill to make $S$ stationary.
To do so, we consider arbitrary but small variations $\delta\psi$ of the field and calculate the resulting variation $\delta S$ of $S$. We require the variations $\delta\psi$ to vanish at $t_1$ and $t_2$ as well as on the surface of $A$:
\begin{equation}\label{integrands}
\delta S = \int\limits_{t_1}^{t_2} \int_{A}
\frac{\partial \mathcal{L}}{\partial \psi} \cdot \delta \psi
+ \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \; \delta \left(\frac{\partial \psi} {\partial t}\right)
+ \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}}  \; \delta \left(\frac{\partial \psi} {\partial x}\right)
\dd x^3 \dd t.
\end{equation}
If we consider the possibly multidimensional values of $\psi$ indexed by $j$ and the three spatial dimensions indexed by $i$, the integrands in equation \eqref{integrands} can be rewritten as:
\begin{align}
\frac{\partial \mathcal{L}}{\partial \psi} \; \delta \psi
&= \sum_{j} \frac{\partial \mathcal{L}}{\partial \psi_{j}} \; \delta \psi_{j}, \\
\frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \; \delta \left(\frac{\partial \psi} {\partial t}\right)
&= \sum_{j} \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi_{j}}{\partial t}} \; \delta \left(\frac{\partial \psi_{j}} {\partial t}\right), \\
\frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \; \delta \left(\frac{\partial \psi} {\partial x}\right)
&= \sum_{i,j} \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi_{j}}{\partial x_{i}}} \; \delta \left(\frac{\partial \psi_{j}} {\partial x_{i}}\right).
\end{align}

We now integrate the second summand by parts over time and the third summand by parts over space.
To do so, we use the identities
$\delta \left(\frac{\partial \psi} {\partial t}\right)
= \frac{\partial \psi_2} {\partial t} - \frac{\partial \psi_1} {\partial t}
= \frac{\partial (\psi_2 - \psi_1)} {\partial t}
= \frac{\partial \delta \psi} {\partial t}$
and
$\delta \left(\frac{\partial \psi} {\partial x}\right)
=
\frac{\partial \psi_2} {\partial x} - \frac{\partial \psi_1} {\partial x}
= \frac{\partial (\psi_2 - \psi_1)} {\partial x}
= \frac{\partial \delta \psi} {\partial x}$.
The integration is performed as follows:
\begin{equation} \label{calcDeltaSSection2}
\begin{split}
\delta S = \int\limits_{t_1}^{t_2} \int_{A}
\frac{\partial \mathcal{L}}{\partial \psi} \delta \psi
-\left(\frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \right)\right) \delta \psi
-\left(\frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \right)\right) \delta \psi \;
\dd x^3 \dd t \\
+ \int_{A} \int\limits_{t_1}^{t_2} \frac{\partial}{\partial t} \left(\frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \; \delta \psi \right) \dd t \dd x^3
+ \int\limits_{t_1}^{t_2}
\int_{A} \frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \; \delta \psi \right) \dd x^3  \dd t,
\end{split}
\end{equation}
with ``$\frac{\partial}{\partial x} \cdot$'' denoting the divergence with respect to the coordinates $x$ (we refrain from using the usual ``$\nabla \cdot$'' because later the divergence with respect to variables other than $x$ will occur).

The second integral vanishes as a result of the fundamental theorem of calculus and because $\delta \psi(t_1) = \delta \psi(t_2) = 0$.
The third integral vanishes from the use of Gauss's theorem and because $\delta \psi(x) = 0$ for any $x$ on the surface of $A$. Thus,
\begin{equation}
\delta S = \int\limits_{t_1}^{t_2} \int_{A}
\frac{\partial \mathcal{L}}{\partial \psi} \delta \psi
-\left(\frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \right)\right) \delta \psi
-\left(\frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \right)\right) \delta \psi \;\;
\dd x^3 \dd t.
\end{equation}
If we use the same index conventions for the possibly multidimensional values of $\psi$ and the three spatial dimensions as we did above, the last two summands of the integrand become:
\begin{align}
\frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \right) \; \delta \psi
&= \sum_j \frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi_j}{\partial t}} \right) \; \delta \psi_j \\
\left(\frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \right)\right) \; \delta \psi
&= \sum_j \left(\sum_i \frac{\partial}{\partial x_i} \; \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi_j}{\partial x_i}} \right)\right) \; \delta \psi_j.
\end{align}

The last rewrite of $\delta S$ we wish to do is
\begin{equation}
\delta S = \int\limits_{t_1}^{t_2} \int_{A}
\left(
\frac{\partial \mathcal{L}}{\partial \psi}
-\frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \right)
-\frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \right)\right)
\delta \psi \;
\dd x^3 \dd t.
\end{equation}
Because $\delta \psi$ is arbitrary except for its boundary conditions, the only way to make $S$ stationary (which is equivalent to requiring that $\delta S = 0$) is that $\mathcal{L}$ fulfills the condition


\begin{equation}
0 = \frac{\partial \mathcal{L}}{\partial \psi}
-\frac{\partial}{\partial t} \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} \right)
-\frac{\partial}{\partial x} \cdot \left( \frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} \right).
\end{equation}

This is the Euler-Lagrange equation for field theory.
It can be seen as a counterpart of the Euler-Lagrange equation for classical particles discussed in previous work \cite{guthrie2019demystifying}.
The procedure of looking for a condition to make $S$ stationary under a Lagrangian $\mathcal{L}\left(\psi, \frac{\partial \psi}{\partial t}, \frac{\partial \psi}{\partial x}\right)$ is called the Lagrangian formalism for field theory.
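As a quick illustration (our example, not part of the original derivation), consider a single real field with the Lagrangian
\begin{equation}
\mathcal{L} = \frac{1}{2}\left(\frac{\partial \psi}{\partial t}\right)^2 - \frac{1}{2}\,\frac{\partial \psi}{\partial x} \cdot \frac{\partial \psi}{\partial x}.
\end{equation}
Then $\frac{\partial \mathcal{L}}{\partial \psi} = 0$, $\frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial t}} = \frac{\partial \psi}{\partial t}$, and $\frac{\partial \mathcal{L}}{\partial \frac{\partial \psi}{\partial x}} = -\frac{\partial \psi}{\partial x}$, so the Euler-Lagrange equation reduces to the wave equation
\begin{equation}
\frac{\partial^2 \psi}{\partial t^2} - \frac{\partial}{\partial x} \cdot \frac{\partial \psi}{\partial x} = 0.
\end{equation}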
We use these to find the condition for\n\\begin{equation}\nS = \\int\\limits_{t_1}^{t_2} \\int_{\\bar{A}} \\bar{\\mathcal{L}}\\left(\\bar{\\psi}, \\frac{\\partial \\bar{\\psi}}{\\partial t}, \\frac{\\partial \\bar{\\psi}}{\\partial \\bar{x}}\\right) \\dd \\bar{x}^3 \\dd t \n= \\int\\limits_{t_1}^{t_2} \\int_{\\bar{A}} \\mathcal{L}\\left(F(\\bar{\\psi}), \\frac{\\partial F(\\bar{\\psi})}{\\partial t}, \\frac{\\partial F(\\bar{\\psi})}{\\partial f}\\right) \n\\left| \\det \\frac{\\partial f}{\\partial \\bar{x}} \\right| \\dd \\bar{x}^3 \\dd t \n\\end{equation}\nto become stationary.\nEquation \\eqref{ELGTransformed} follows from repeating the considerations of section \\ref{definition}.\nTo prove equation \\eqref{ELGUntransformed}, we look at\n\\begin{equation}\nS = \\int\\limits_{t_1}^{t_2} \\int_{\\bar{A}} \\mathcal{L}\\left(F(\\bar{\\psi}), \\frac{\\partial F(\\bar{\\psi})}{\\partial t}, \\frac{\\partial F(\\bar{\\psi})}{\\partial f}\\right) \n\\left| \\det \\frac{\\partial f}{\\partial \\bar{x}} \\right| \\dd \\bar{x}^3 \\dd t \n\\end{equation}\nwhich, using the transformation formula of multidimensional integrals (see appendix \\ref{TranformationFormula}), can be transformed into\n\\begin{equation}\nS = \\int\\limits_{t_1}^{t_2} \\int_{f(\\bar{A})} \\mathcal{L}\\left(F(\\bar{\\psi}), \\frac{\\partial F(\\bar{\\psi})}{\\partial t}, \\frac{\\partial F(\\bar{\\psi})}{\\partial f}\\right) \n\\dd f^3 \\dd t,\n\\end{equation}\nwhere $f(\\bar{A})$ is the image of $\\bar{A}$ under the coordinate transformation $f$.\nBased on this formula, the variation $\\delta S$ of $S$ is given by\n\\begin{equation}\n\\delta S = \\int\\limits_{t_1}^{t_2} \\int_{f(\\bar{A})} \n\\frac{\\partial \\mathcal{L}}{\\partial F} \\; \\delta F\n+ \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial t}} \\; \\delta \\left(\\frac{\\partial F}{\\partial t}\\right)\n+ \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial f}} \\; \\delta \\left(\\frac{\\partial F} {\\partial f}\\right)\n\\dd f^3 \\dd t,\n\\end{equation}\nwhere \n\\begin{equation} \\label{deltaFDefinition}\n\\delta F = \\frac{\\partial F}{\\partial \\bar{\\psi}} \\delta \\bar{\\psi}.\n\\end{equation}\nIntegration by parts of the second and the third term leads to\n\\begin{equation} \\label{calcDeltaSSection3}\n\\begin{split}\n\\delta S = \\int\\limits_{t_1}^{t_2} \\int_{f(\\bar{A})} \n\\frac{\\partial \\mathcal{L}}{\\partial F} \\delta F\n-\\left(\\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial t}} \\right)\\right) \\delta F\n-\\left(\\frac{\\partial}{\\partial f} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial f}} \\right)\\right) \\delta F \\;\n\\dd f^3 \\dd t \\\\\n+ \\int_{f(\\bar{A})} \\int\\limits_{t_1}^{t_2} \\frac{\\partial}{\\partial t} \\left(\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial t}} \\; \\delta F \\right) \\dd t \\dd f^3\n+ \\int\\limits_{t_1}^{t_2} \n\\int_{f(\\bar{A})} \\frac{\\partial}{\\partial f} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial f}} \\; \\delta F \\right) \\dd f^3 \\dd t,\n\\end{split}\n\\end{equation}\nwhere the identities $\\delta \\left(\\frac{\\partial F} {\\partial t}\\right)\n= \\frac{\\partial F_2} {\\partial t} - \\frac{\\partial F_1} {\\partial t}\n= \\frac{\\partial (F_2 - F_1)} {\\partial t}\n= \\frac{\\partial \\delta F} {\\partial t}$ \nand\n$\\delta \\left(\\frac{\\partial F} {\\partial f}\\right)\n= 
\\frac{\\partial F_2} {\\partial f} - \\frac{\\partial F_1} {\\partial f}\n= \\frac{\\partial (F_2 - F_1)} {\\partial f}\n= \\frac{\\partial \\delta F} {\\partial f}$ \nwere used.\nFollowing from equation \\eqref{deltaFDefinition}, $\\delta F$ vanishes at $t_1$ and $t_2$ as $\\delta \\bar{\\psi}$ does, and the second term in \\eqref{calcDeltaSSection3} is zero because of the fundamental theorem of calculus.\nUsing Gauss's theorem, the third term can be transformed into an integral over the surface of $f(\\bar{A})$, which we denote by $\\partial (f(\\bar{A}))$.\nThis surface is the same as the image of the surface of $\\bar{A}$ under $f$:\\footnote{The simplest way to picture this equation is to imagine a real area in space, which is described from within two systems of coordinates.}\n\\begin{equation} \\label{surfaceEquality}\n\\partial (f(\\bar{A})) = f(\\partial \\bar{A}).\n\\end{equation}\nTo show that the third term vanishes, we will prove that $\\delta F$ is zero for any $x \\in \\partial (f(\\bar{A}))$.\n\nFirst, let $x$ be an element of $\\partial (f(\\bar{A}))$.\\footnote{The simplest way to picture this element is to imagine a real point on the surface of the area in space, which is described from within two systems of coordinates.}\nThen for $x$ there exists a unique $\\bar{x} \\in \\partial \\bar{A}$, which is defined by $x=f(\\bar{x})$. We use the fact from above that $\\delta \\bar{\\psi}(\\bar{x}) = 0$ and recall that the variation $\\delta \\bar{\\psi}$ is a difference between two fields (which we denote $\\bar{\\psi}_1$ and $\\bar{\\psi}_2$) such that\n\\begin{equation}\n\\delta \\bar{\\psi} = \\bar{\\psi}_2 - \\bar{\\psi}_1.\n\\end{equation}\nThe value of $F$ considered as a function of $x$ is given by \n\\begin{equation} \\label{defineFOfx}\nF(x) = F(\\bar{\\psi}(\\bar{x})) \\;\\; \\text{with} \\;\\; \\bar{x} \\;\\; \\text{defined through} \\;\\; x=f(\\bar{x}) \n\\iff \\bar{x} = f^{-1}(x).\n\\end{equation}\nFor a discussion of formula \\eqref{defineFOfx}, see appendix \\ref{appendixDiscussionTransform}.\nThe variation $\\delta F$ that results from the difference $\\delta \\bar{\\psi}$ between $\\bar{\\psi}_1$ and $\\bar{\\psi}_2$ is given by\n\\begin{equation}\n\\delta F(x) = F(\\bar{\\psi}_2(\\bar{x})) - F(\\bar{\\psi}_1(\\bar{x})) \n= F(\\bar{\\psi}_1(\\bar{x}) + \\delta \\bar{\\psi} (\\bar{x})) - F(\\bar{\\psi}_1(\\bar{x})) \n= \\frac{\\partial F}{\\partial \\bar{\\psi}} \\delta \\bar{\\psi} (\\bar{x}),\n\\end{equation}\nwhere the last equality holds to first order in the small variation $\\delta \\bar{\\psi}$.\nBecause $\\delta \\bar{\\psi}(\\bar{x})$ is zero by assumption, $\\delta F(x)$ is zero, too, which completes the argument.\n\nAs for $\\delta S$, we are now left with\n\\begin{equation}\n\\delta S = \\int\\limits_{t_1}^{t_2} \\int_{f(\\bar{A})} \n\\left(\n\\frac{\\partial \\mathcal{L}}{\\partial F}\n-\\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial t}} \\right) \n-\\frac{\\partial}{\\partial f} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial f}} \\right)\\right) \\delta F \\;\\;\n\\dd f^3 \\dd t.\n\\end{equation}\nBecause of equation \\eqref{deltaFDefinition}, $\\delta F$ is as arbitrary as $\\delta \\bar{\\psi}$. 
Thus, the only way for $\\delta S$ to vanish is\n\\begin{equation} \\label{lastEqToProveInvariance}\n0 = \n\\frac{\\partial \\mathcal{L}}{\\partial F}\n-\\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial t}} \\right) \n-\\frac{\\partial}{\\partial f} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial F}{\\partial f}} \\right).\n\\end{equation}\nIf we now replace $F$ and $f$ according to their definitions by $\\psi$ and $x$, this equation turns into equation \\eqref{ELGUntransformed}, which finishes the proof.\n\n\\section{Application to physics: The Lagrangian for Electrodynamics}\nThe transformation properties for the Euler-Lagrange equations for field theories that we found on a purely mathematical basis make the Euler-Lagrange formalism for field theories desirable for physics.\nTo make the formalism useful for physics, we must turn the physical field equations into Euler-Lagrange equations.\nThis first requires finding a Lagrangian for the physical field theory in question.\nThis has been done for many existing physical field theories, such as electrodynamics, general relativity, Schr\\"odinger's equation, Dirac's equation, the Klein--Gordon equation, and the Standard Model of particle physics.\n\nAs an example, we define the Lagrangian of electrodynamics and show that Maxwell's equations can be derived from its Euler-Lagrange equations.\nWe start with some remarks on Maxwell's equations that can be found in greater detail in several textbooks, Jackson \\cite{jackson1999classical} or Griffiths \\cite{griffiths2017introduction} being two common examples.\n\n\nWith\n\\begin{itemize}\n  \\item $E$ denoting the three components of the electric field,\n  \\item $B$ denoting the three components of the magnetic field,\n  \\item $j$ denoting the three components of the electric current density,\n  \\item $\\rho$ denoting the electric charge density,\n  \\item $\\mu_0$ denoting the permeability of empty space,\n  \\item $\\epsilon_0$ denoting the permittivity of empty space,\n  \\item $c$ denoting the speed of light,\n\\end{itemize}\nMaxwell's equations in empty space are given by\n\\begin{align}\n\\nabla \\times E &= - \\frac{\\partial B}{\\partial t}, \\label{Faraday} \\\\\n\\nabla \\times B &= \\mu_0 j +  \\frac{1}{c^2} \\frac{\\partial E}{\\partial t}, \\\\\n\\nabla \\cdot E &= \\frac{\\rho}{\\epsilon_0}, \\\\\n\\nabla \\cdot B &= 0, \\label{noMonopole}\n\\end{align}\nwhere\n\\begin{equation}\n\\mu_0 = \\frac{1}{\\epsilon_0 c^2}.\n\\end{equation}\nBecause $\\nabla \\cdot B = 0$, there exists some vector potential $A$ such that\n\\begin{equation} \\label{BbyA}\nB = \\nabla \\times A.\n\\end{equation}\nWith that, equation \\eqref{Faraday} can be rewritten as\n\\begin{equation}\n\\nabla \\times \\left( E + \\frac{\\partial A}{\\partial t} \\right) = 0.\n\\end{equation}\nWhen the curl of a field is zero, the field can be expressed as the gradient of a scalar potential $\\phi$. 
Thus, we can write\n\\begin{equation} \\label{EbyAPhi}\nE + \\frac{\\partial A}{\\partial t} = - \\nabla \\phi\n\\iff\nE = - \\nabla \\phi - \\frac{\\partial A}{\\partial t}.\n\\end{equation}\nRecall that equations \\eqref{BbyA} and \\eqref{EbyAPhi} were obtained from equations \\eqref{noMonopole} and \\eqref{Faraday}, which the potentials therefore satisfy automatically.\nWe now use equations \\eqref{BbyA} and \\eqref{EbyAPhi} to express the remaining two Maxwell equations using only $A$ and $\\phi$:\n\\begin{align}\n\\nabla \\times ( \\nabla \\times A) &= \\mu_0 j + \\frac{1}{c^2} \\frac{\\partial}{\\partial t} \\left(- \\nabla \\phi - \\frac{\\partial A}{\\partial t} \\right) \\label{MaxwellAPhi1} \\\\\n\\nabla \\cdot \\left( -\\nabla \\phi - \\frac{\\partial A}{\\partial t} \\right) &= \\frac{\\rho}{\\epsilon_0} . \\label{MaxwellAPhi2}\n\\end{align}\nWe assert that these equations are the Euler-Lagrange equations of the Lagrangian\n\\begin{equation}\n\\mathcal{L} = \\epsilon_0 \\frac{E^2 - c^2 B^2}{2} - \\rho\\phi + j \\cdot A\n= \\epsilon_0 \\frac{(-\\nabla\\phi - \\frac{\\partial A}{\\partial t})^2 - c^2 (\\nabla \\times A)^2}{2} - \\rho\\phi + j \\cdot A.\n\\end{equation}\nTo prove this assertion, we first calculate the Euler-Lagrange equation for $\\phi$,\n\\begin{equation} \\label{ELPhi}\n0 = \\frac{\\partial \\mathcal{L}}{\\partial \\phi}\n-\\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial t}} \\right) \n-\\frac{\\partial}{\\partial x} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial x}} \\right).\n\\end{equation}\nWe calculate the terms separately,\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\phi} &= -\\rho \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial t}} &= 0\n\\implies \\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial t}}\\right) = 0 \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial x}}\n&= \\frac{\\epsilon_0}{2} \\cdot 2 \\left(-\\nabla \\phi - \\frac{\\partial A}{\\partial t} \\right) (-1)\n= \\epsilon_0 \\left(\\nabla \\phi + \\frac{\\partial A}{\\partial t} \\right) \\\\\n\\frac{\\partial}{\\partial x} \\cdot \\left(\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial \\phi}{\\partial x}} \\right)\n&= \\nabla \\cdot \\left[\\epsilon_0 \\left(\\nabla \\phi + \\frac{\\partial A}{\\partial t} \\right)\\right].\n\\end{align}\nSubstituting these results into equation \\eqref{ELPhi} yields\n\\begin{equation}\n0 = -\\rho - 0 -\\nabla \\cdot \\left[\\epsilon_0 \\left(\\nabla \\phi + \\frac{\\partial A}{\\partial t} \\right)\\right],\n\\end{equation}\nwhich is equivalent to equation \\eqref{MaxwellAPhi2}.\nWith that, the first part of the proof is done.\n\nFor the second part, we must prove that\n\\begin{equation} \\label{ELA}\n0 = \\frac{\\partial \\mathcal{L}}{\\partial A}\n-\\frac{\\partial}{\\partial t} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A}{\\partial t}} \\right) \n-\\frac{\\partial}{\\partial x} \\cdot \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A}{\\partial x}} \\right)\n\\end{equation}\nis equivalent to equation \\eqref{MaxwellAPhi1}.\nWe do this for the first component $A_1$ of $A$ only (note that equation \\eqref{ELA} actually represents one equation for each component of $A$).\nAgain, we calculate the terms separately:\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial A_1} &= j_1 \\\\\n
\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A_1}{\\partial t} }\n&= \\frac{\\epsilon_0}{2} \\cdot 2 \\left(-\\frac{\\partial \\phi}{\\partial x_1} - \\frac{\\partial A_1}{\\partial t}\\right) (-1)\n= \\epsilon_0 \\left(\\frac{\\partial \\phi}{\\partial x_1} + \\frac{\\partial A_1}{\\partial t}\\right) \\\\\n\\frac{\\partial}{\\partial t} \\left(\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A_1}{\\partial t} } \\right)\n&= \\epsilon_0 \\frac{\\partial}{\\partial t} \\left(\\frac{\\partial \\phi}{\\partial x_1} + \\frac{\\partial A_1}{\\partial t}\\right)\n\\end{align}\n\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A_1}{\\partial x} }\n&= - \\frac{\\epsilon_0 c^2}{2} \\frac{\\partial}{\\partial \\frac{\\partial A_1}{\\partial x}}\n\\left[\n\\left(\\frac{\\partial A_3}{\\partial x_2} - \\frac{\\partial A_2}{\\partial x_3} \\right)^2\n+ \\left(\\frac{\\partial A_1}{\\partial x_3} - \\frac{\\partial A_3}{\\partial x_1} \\right)^2\n+ \\left(\\frac{\\partial A_2}{\\partial x_1} - \\frac{\\partial A_1}{\\partial x_2} \\right)^2\n\\right] \\\\\n&=\n- \\frac{\\epsilon_0 c^2}{2} \n\\left(\n\\begin{array}{c} \n0\n\\\\\n2 \\left(\\frac{\\partial A_2}{\\partial x_1} - \\frac{\\partial A_1}{\\partial x_2} \\right) (-1)\n\\\\\n2 \\left(\\frac{\\partial A_1}{\\partial x_3} - \\frac{\\partial A_3}{\\partial x_1} \\right)\n\\end{array}\n\\right)\n= \n- \\epsilon_0 c^2\n\\left(\n\\begin{array}{c} \n0\n\\\\\n\\left(\\frac{\\partial A_1}{\\partial x_2} - \\frac{\\partial A_2}{\\partial x_1} \\right)\n\\\\\n\\left(\\frac{\\partial A_1}{\\partial x_3} - \\frac{\\partial A_3}{\\partial x_1} \\right)\n\\end{array}\n\\right) \\\\ \\nonumber \\\\\n\\frac{\\partial}{\\partial x} \\cdot \\frac{\\partial \\mathcal{L}}{\\partial \\frac{\\partial A_1}{\\partial x} }\n&= - \\epsilon_0 c^2\n\\left( \\frac{\\partial^2 A_1}{{\\partial x_2}^2} - \\frac{\\partial^2 A_2}{\\partial x_2 \\partial x_1}\n+ \\frac{\\partial^2 A_1}{{\\partial x_3}^2} - \\frac{\\partial^2 A_3}{\\partial x_3 \\partial x_1} \\right).\n\\end{align}\nSubstituting these results into equation \\eqref{ELA} yields\n\\begin{equation}\n0= j_1 \n- \\epsilon_0 \\frac{\\partial}{\\partial t} \\left(\\frac{\\partial \\phi}{\\partial x_1} + \\frac{\\partial A_1}{\\partial t} \\right)\n+ \\epsilon_0 c^2 \n\\left( \\frac{\\partial^2 A_1}{{\\partial x_2}^2} - \\frac{\\partial^2 A_2}{\\partial x_2 \\partial x_1}\n+ \\frac{\\partial^2 A_1}{{\\partial x_3}^2} - \\frac{\\partial^2 A_3}{\\partial x_3 \\partial x_1} \\right).\n\\end{equation}\nUsing $\\mu_0 = \\frac{1}{\\epsilon_0 c^2}$, this can be rearranged as\n\\begin{equation}\n - \\frac{\\partial^2 A_1}{{\\partial x_2}^2} + \\frac{\\partial^2 A_2}{\\partial x_2 \\partial x_1}\n- \\frac{\\partial^2 A_1}{{\\partial x_3}^2} + \\frac{\\partial^2 A_3}{\\partial x_3 \\partial x_1}\n= \\mu_0 j_1 \n+ \\frac{1}{c^2} \\frac{\\partial}{\\partial t} \\left(-\\frac{\\partial \\phi}{\\partial x_1} - \\frac{\\partial A_1}{\\partial t} \\right).\n\\end{equation}\nThe right-hand side of this equation is equal to the first component of the right-hand side of equation \\eqref{MaxwellAPhi1}.\nNow we need to prove that the first component of $\\nabla \\times \\left( \\nabla \\times A\\right)$ equals the left-hand side of this equation.\nTo do so, we use the identity\n$\\nabla \\times \\left( \\nabla \\times A\\right) = \\nabla\\left(\\nabla \\cdot A\\right) - \\Delta A$:\n\\begin{align}\n[\\nabla(\\nabla \\cdot A) - \\Delta A]_1 &=\n\\frac{\\partial}{\\partial x_1} \\left( \\frac{\\partial A_1}{\\partial x_1} + \\frac{\\partial A_2}{\\partial x_2} + \\frac{\\partial A_3}{\\partial x_3} \\right)\n- 
\\left( \\frac{\\partial^2 A_1}{{\\partial x_1}^2} + \\frac{\\partial^2 A_1}{{\\partial x_2}^2} + \\frac{\\partial^2 A_1}{{\\partial x_3}^2} \\right) \\nonumber \\\\\n&=\n\\frac{\\partial^2 A_2}{\\partial x_1 \\partial x_2} + \\frac{\\partial^2 A_3}{\\partial x_1 \\partial x_3}\n- \\frac{\\partial^2 A_1}{{\\partial x_2}^2} - \\frac{\\partial^2 A_1}{{\\partial x_3}^2}. \n\\end{align}\nWith this, we have shown that the Euler-Lagrange equation for $A_1$ is equivalent to the first component of equation \\eqref{MaxwellAPhi1}. To finish the proof, the last calculation need only be repeated for the remaining Euler-Lagrange equations and components of equation \\eqref{MaxwellAPhi1}.\n\n\\section{Conclusion}\nThe Euler-Lagrange formalism for field theories was presented as a purely mathematical framework that provides us with field equations which are invariant under any coordinate and field transformation as long as the associated Lagrangian $\\mathcal{L}$ has the very simple and well-defined transformation property given by equation \\eqref{LagrTransform}.\nBased on this mathematical result, we are well motivated to reformulate physical field equations in such a way that they become Euler-Lagrange equations.\nThe critical part of this reformulation is finding the Lagrangian for the field theory that we want to reformulate.\n\n\\appendix\n\n\\section{Discussion of the coordinate and field transformation defined at the beginning of section \\ref{sectionInvariance}} \\label{appendixDiscussionTransform}\nThe transformations\n\\begin{align}\n  &x=f(\\bar{x}) \\;\\; \\text{for the spatial coordinates}\\\\\n  &\\psi=F(\\bar{\\psi}) \\;\\; \\text{for the fields}\n\\end{align}\ncan be given precise meaning if we consider a particle from within two coordinate systems $\\bar{T}$ and $T$.\nIf in $\\bar{T}$ the particle's coordinates are given by $\\bar{x}$, then in $T$ they are given by $x = f(\\bar{x})$.\n\nNext, we assume that in $\\bar{T}$ there is a field $\\bar{\\psi}$ which at the particle's position $\\bar{x}$ has the value $\\bar{\\psi}(\\bar{x})$.\\footnote{Of course, the field may have multiple components, as it would in the case of an electric field.}\nWe now want to know the field's value at the particle's position $x$ in $T$, which we denote by $\\psi(x)$; this is where the transformation $F$ of the field comes into play.\n$F$ is meant to be defined in such a way that $\\psi(x)$ is given by\\footnote{A notable special case is when $F$ is the identity function.\nEquation \\eqref{defFByArg} then reads $$\\psi(x) = \\bar{\\psi}(\\bar{x}) \\iff \\psi(f(\\bar{x})) = \\bar{\\psi}(\\bar{x}).$$\nFields that transform this way are called scalar fields.\nThe Higgs field is a famous example of a scalar field.}\n\\begin{equation} \\label{defFByArg}\n  \\psi(x) = F(\\bar{\\psi}(\\bar{x})) \\iff \\psi(f(\\bar{x})) = F(\\bar{\\psi}(\\bar{x})).\n\\end{equation}\nThe second equation allows us to write the functional identity:\n\\begin{equation} \\label{defFByFunctional}\n  \\psi \\circ f = F \\circ \\bar{\\psi} \\;\\; \\text{defined on the coordinates $\\bar{x}$ of $\\bar{T}$}.\n\\end{equation}\nThese results allow us to state two parts of section \\ref{sectionInvariance} more precisely.\nIn fact, equations \\eqref{defFByArg} and \\eqref{defFByFunctional} were used in these parts:\n\\begin{itemize}\n  \\item\n  In equations \\eqref{lastEqToProveInvariance} and \\eqref{LagrTransform} we use the derivative $\\frac{\\partial F(\\bar{\\psi})}{\\partial f}$.\n  For this, according to equations \\eqref{defFByFunctional} and 
\\eqref{defFByArg}, the following equation holds:\n  \\begin{equation}\n    \\frac{\\partial F(\\bar{\\psi})}{\\partial f} = \\frac{\\partial \\psi}{\\partial f} = \\frac{\\partial \\psi}{\\partial x},\n  \\end{equation}\n  which is used to establish the equivalence of equations \\eqref{lastEqToProveInvariance} and \\eqref{ELGUntransformed}.\n  \\item\n  With equation \\eqref{defFByArg} it is clear that $F(x)$, which is used in equation \\eqref{defineFOfx}, is well-defined.\n\\end{itemize}\n\n\n\\section{Alternative to the transformation formula of multidimensional integrals} \\label{TranformationFormula}\n\nThe proof of the transformation formula of multidimensional integrals is not trivial.\nA heuristic explanation is given by Sterman \\cite{Sterman}.\nThere,\n\\begin{equation}\n\\left|\\det \\frac{\\partial f}{\\partial \\bar{x}} \\right| = \\left|\\det \\frac{\\partial x}{\\partial \\bar{x}} \\right|\n\\end{equation}\nis written as $\\dd x^3 / \\dd \\bar{x}^3$ and is explained to be the ratio of the differentials in the transformed and untransformed coordinates.\n\n\\bibliographystyle{unsrt}\n\\bibliography{bibliography}\n\n\\end{document}\n", "meta": {"hexsha": "60ef87fefc5982390f495cd9c1790bf0a16c7604", "size": 33567, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LagrangeField.tex", "max_stars_repo_name": "mwguthrie/lagrangian", "max_stars_repo_head_hexsha": "ac16747a58b967399ed0f66ae4de661a26bc7194", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LagrangeField.tex", "max_issues_repo_name": "mwguthrie/lagrangian", "max_issues_repo_head_hexsha": "ac16747a58b967399ed0f66ae4de661a26bc7194", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LagrangeField.tex", "max_forks_repo_name": "mwguthrie/lagrangian", "max_forks_repo_head_hexsha": "ac16747a58b967399ed0f66ae4de661a26bc7194", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.8155893536, "max_line_length": 353, "alphanum_fraction": 0.7098340632, "num_tokens": 10585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696748, "lm_q2_score": 0.7122321842389469, "lm_q1q2_score": 0.5630411355732132}}
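As a simple illustration of this ratio for the paper above (assuming a linear change of coordinates, a case not discussed in the source): for $x = f(\bar{x}) = M\bar{x}$ with a constant invertible matrix $M$, the Jacobian matrix is $M$ itself, so
\begin{equation}
\left| \det \frac{\partial f}{\partial \bar{x}} \right| = \left| \det M \right|
\qquad \text{and} \qquad
\dd x^3 = \left| \det M \right| \dd \bar{x}^3,
\end{equation}
i.e.\ the volume element is rescaled by the constant factor $\left| \det M \right|$.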
{"text": "\n\\subsection{Autoregressive Distributed Lag (ARDL) model}\n\nInclude lagged y and lagged x (and current x)\n\n", "meta": {"hexsha": "4229ed85d765efc601d2f4f29b4662805a963db1", "size": 106, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/probability/stochasticMultiple/02-01-ARDL.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/probability/stochasticMultiple/02-01-ARDL.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/probability/stochasticMultiple/02-01-ARDL.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.6666666667, "max_line_length": 56, "alphanum_fraction": 0.7735849057, "num_tokens": 28, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8128673359709796, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5630260384550803}}
{"text": "%              %\n%%            %%\n%%% PREAMBLE %%%\n%%            %%\n%              %\n\n% Document class\n\\documentclass[11pt]{beamer}\n\n\\usetheme{Boadilla}\n\\usecolortheme{beaver}\n\\useinnertheme{rectangles}\n\n\\setbeamertemplate{navigation symbols}{}\n\n% Font\n\\usepackage{fontspec}\n\\setmainfont{Latin Modern Roman}\n\n% Language and typography\n\\usepackage{polyglossia}\n\\setdefaultlanguage{english}\n\\setotherlanguage{french}\n\n\\usepackage{csquotes}\n\n\\usepackage{microtype}\n\n% Mathematics\n\\usepackage{mathtools}\n\\usepackage{cases}\n\\usepackage{systeme}\n\n% Floats\n\\usepackage{float}\n\\usepackage{booktabs}\n\\usepackage{multicol}\n\n% References\n\\usepackage{cleveref}\n\n\n%              %\n%%            %%\n%%% DOCUMENT %%%\n%%            %%\n%              %\n\n% Information\n\\title{Mathematics}\n\\subtitle{Systems of equations}\n\\author[A. Quenon]{Alexandre Quenon}\n\n% Text\n\\begin{document}\n% *** Title page *** %\n\\begin{frame}\n\t\\titlepage\n\\end{frame}\n\n\n% *** Contents *** %\n\\begin{frame}\n\t\\frametitle{Overview}\n\t\n\t\\tableofcontents\n\\end{frame}\n\n\n% *** Tutorial *** %\n%-----\n\\section{Useful packages}\n\n\\begin{frame}\n\t\\frametitle{Packages for systems of equations}\n\n\tSome packages very useful for mathematics are listed here below:\n\t\\begin{itemize}\n\t\t\\item \\emph{mathtools} which is mainly an upgrade of the very well-known \\enquote{amsmath} package (the backbone for mathematics with \\LaTeX{}),\n\t\t\\item \\emph{cases} which provides the \\texttt{numcases} command to number all lines of a system of equations, and\n\t\t\\item \\emph{systeme} which proposes tools to improve the display of the variables of the system.\n\t\\end{itemize}\n\\end{frame}\n\n\n%-----\n\\section{Functions defined by domain}\n\n\\begin{frame}\n\t\\frametitle{Functions defined by domain}\n\t\n\t\\structure{Tool}: \\texttt{cases} environment, \\textbackslash\\textbackslash{} before starting a new line, maximum one \\& per line.\n\tMust be included inside another mathematical equation environment.\n\t\n\tFor examples:\n\t\\begin{equation}\n\t\ta = \\begin{cases}\n\t\t\tx^2 + 2\t\t\t\t\t& \\text{if} x<2  \\\\\n\t\t\t\\int x-3\\, \\mathrm{d}x\t& \\text{if} x \\geq 2\n\t\t\\end{cases}\n\t\\end{equation}\n\t\\begin{equation}\n\t\ta = \\begin{dcases*}\n\t\t\tx^2 + 2\t\t\t\t\t& if  $x<2$  \\\\\n\t\t\t\\int x-3\\, \\mathrm{d}x\t& otherwise\n\t\t\\end{dcases*}\n\t\\end{equation}\n\t\n\t\n\t\\structure{Extra}: a starred version makes the right column \\emph{text-mode} instead of \\emph{math-mode}.\n\tA \\texttt{dcases} variant makes the environment \\emph{displaystyle}.\n\tAn \\texttt{rcases} variant creates a closing bracket on the right side.\n\\end{frame}\n\n\n%-----\n\\section{Systems of equations}\n\n\\begin{frame}\n\t\\frametitle{Systems of equations}\n\t\\framesubtitle{One number for the whole system}\n\t\n\t\\structure{Tool}:  \\texttt{cases} environment (same as for defined-by-domain function).\n\t\n\tExample:\n\t\\begin{align}\n\t\t\\begin{cases}\n\t\t\tx  +2y - z  = 1 \\\\\n\t\t\tx  -3y + 2z = 4 \\\\\n\t\t\t-x +y  +z   = 0 \n\t\t\\end{cases}\n\t\\end{align}\n\t\n\t\n\t\\structure{Issues}:\n\t\\begin{enumerate}\n\t\t\\item only one number the whole system but it would be useful to number each line of the system $\\rightarrow$ see the \\emph{cases} package,\n\t\t\\item there is no alignment between the variables like it is sometimes done in algebra $\\rightarrow$ see the \\emph{systeme} 
package.\n\t\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Systems of equations}\n\t\\framesubtitle{Numbering all lines of the system (1)}\n\t\n\t\\structure{Tool}: \\texttt{numcases} environment from the \\emph{cases} package, \\textbackslash\\textbackslash{} before starting a new line, maximum one \\& per line.\n\t\n\tExamples:\n\t\\begin{numcases}{}\n\t\tx  +2y - z  = 1 \\\\\n\t\tx  -3y + 2z = 4 \\\\\n\t\t-x +y  +z   = 0 \n\t\\end{numcases}\n\t\n\t\\begin{numcases}{a=}\n\t\tx^2 + 2\t\t\t\t\t& if  $x<2$  \\\\\n\t\t\\int x-3\\, \\mathrm{d}x\t& otherwise\n\t\\end{numcases}\n\t\n\t\n\t\\structure{Extra}: it corresponds to \\texttt{dcases*} with all lines numbered, which means that:\n\t\\begin{itemize}\n\t\t\\item it is directly in \\emph{displaystyle},\n\t\t\\item the right column is in \\emph{text-mode}.\n\t\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Systems of equations}\n\t\\framesubtitle{Numbering all lines of the system (2)}\n\t\n\t\\structure{Numbering style}: \\texttt{subnumcases} uses the same number and adds a letter in the equation tag.\n\t\n\tExample:\n\t\\begin{subnumcases}{}\n\t\tx  +2y - z  = 1 \\\\\n\t\tx  -3y + 2z = 4 \\\\\n\t\t-x +y  +z   = 0 \n\t\\end{subnumcases}\n\\end{frame}\n\n\\begin{frame}\n\t\\frametitle{Systems of equations}\n\t\\framesubtitle{Alignment on variables}\n\t\n\t\\structure{Tool}: \\texttt{systeme} command from the \\emph{systeme} package, commas (,) used to separate equations.\n\tWorks outside any math environment as well as inside \\texttt{equation}.\n\t\n\tExample:\n\t\\begin{equation}\n\t\t\\systeme{%\n\t\t\tx  +2y - z  = 1,\n\t\t\tx  -3y + 2z = 4,\n\t\t\t-x +y  +z   = 0}\n\t\\end{equation}\n\t\n\t\n\t\\structure{Issue}: the numbering counter used by the \\texttt{systeme} command does not seem to work properly with \\LaTeX{}'s internal equation counter.\n\tConsequently, use it inside an \\texttt{equation} environment.\n\\end{frame}\n\n\n\n\n% *** END *** %\n\\end{document}", "meta": {"hexsha": "0545c9375cd63b8618ddbf9b470e6c5a9ca0e0ec", "size": 4939, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tutorials/C100__Maths_systems/Quick_reference.tex", "max_stars_repo_name": "Arkh42/LaTeX_magic", "max_stars_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tutorials/C100__Maths_systems/Quick_reference.tex", "max_issues_repo_name": "Arkh42/LaTeX_magic", "max_issues_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tutorials/C100__Maths_systems/Quick_reference.tex", "max_forks_repo_name": "Arkh42/LaTeX_magic", "max_forks_repo_head_hexsha": "fb17aab27bae727267605897c6d00ab65b097f23", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.9757281553, "max_line_length": 163, "alphanum_fraction": 0.6780724843, "num_tokens": 1581, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5630260341924204}}
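A minimal usage sketch for the \texttt{rcases} variant mentioned in the tutorial above (mathtools syntax; the equations are chosen arbitrarily for illustration):
\begin{equation}
\begin{rcases}
x + 2y = 5 \\
x - y = 2
\end{rcases}
\implies x = 3,\; y = 1.
\end{equation}
Here the brace closes on the right side, which is convenient when a whole system implies a single conclusion.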
{"text": "\\chapter{Data Fusion and State Estimation}\nThis chapter is dedicated to explaining the mathematical methods and models used to fuse data generated by the cameras and smartphone. It details the design and implementation of an EKF using MATLAB software. Kim et al. \\cite{kim2011kalman} and source code provided by Patel served as the foundation of this chapter. All files pertaining to the filter have been included on the accompanying CD.\n\nA good starting point for this chapter would be to define the states of interest of our system. We choose our states to be various parameters relating to the human gait and the bodies pose and position in the inertial frame. The model used in this work has various constants (the lengths of the rigid beams making up the limbs), but many changing parameters (the angles these rigid beams make with each other). These angles quantify the position of the various joints during steady state running and have therefore been chosen as states.\n\nWe cannot directly measure these angles since we have no sensors directly connected to them. All the system states are predicted by the EKF using various measurements relating directly and indirectly to them. The following table specifies the parameters used to symbolize the states of the system.\n   \n\\begin{table}[!ht]\n\\captionsetup{width=0.8\\linewidth, font=small}  \n\\centering\n\\label{statesForEkf}\n\\begin{tabular}{ll}\nState & Description \\\\\n$x_{body}$  \t&x Position of body w.r.t. the inertial frame\\\\\n$y_{body}$     &y Position of body w.r.t. the inertial frame\\\\\n$z_{body}$     &z Position of body w.r.t. the inertial frame\\\\\n$\\phi_{body}$      &Roll of body w.r.t. the inertial frame\\\\\n$\\theta_{body}$    &Pitch of body w.r.t. the inertial frame\\\\\n$\\psi_{body}$      &Yaw of body w.r.t. the inertial frame\\\\\n$\\theta_{LH}$     &Pitch of left thigh w.r.t. left hip\\\\\n$\\psi_{LH}$      &Yaw of left thigh w.r.t. left hip\\\\\n$\\theta_{LK}$    &Pitch of left calf w.r.t. left knee\\\\\n$\\theta_{LA}$   &Pitch of left foot w.r.t. left ankle\\\\\n$\\theta_{RH}$    &Pitch of right thigh w.r.t. right hip\\\\\n$\\psi_{RH}$      &Yaw of right thigh w.r.t. right hip\\\\\n$\\theta_{RK}$   &Pitch of the right calf w.r.t. right knee\\\\\n$\\theta_{RA}$  &Pitch of the right foot w.r.t. the right ankle\\\\\n\\end{tabular}\n\\caption{Table showing the different states of the model to be determined by the EKF}\n\\end{table}\n\n\\newpage\nOur system is not only concerned with these positional and angular elements, but also how they change over time. These rates where defined as the derivative of the states with respect to time. The first derivative serving as velocity and angular velocity, while the second derivative serves as acceleration and angular acceleration. Here the vector \\textbf{x} serves as a element of our state vector \\textbf{X}.\n\n$$\\textbf{x} = [ x_{body} \\; y_{body} \\; z_{body} \\; \\phi_{body} \\;    \t\t    \\theta_{body}  \\; \\psi_{body}\\;  \\theta_{LH}\\; \\psi_{LH}   \\;\\theta_{LK} \t\\;\\theta_{LA} \t\\;\\theta_{RH} \\;\\psi_{RH}   \t\\;\\theta_{RK}   \\; \\theta_{RA}  \t\t]$$ \n\n$$\t\\textbf{X}=[\\textbf{x} \\; \\dot{\\textbf{x}} \\; \\ddot{\\textbf{x}}] $$\n\nThe vector \\textbf{x} contains 14 elements. From the above equations it is clear that our state vector \\textbf{X} contains 42 elements. 
With the states of our system defined, we must discuss state estimation and the critical elements of the EKF.\n\n\\section{State Estimation}\nThe key purpose of the Kalman Filter is the ability to estimate the various states of our system. To understand how this estimation occurs, we first need to understand the KF. The following diagram shows the critical parts of the KF.\n\n\\begin{figure}[!ht] \n\\captionsetup{width=0.8\\linewidth, font=small}  \n\\includegraphics[width=0.6\\linewidth]{figures/kf.png}\n\\caption{Figure showing the interplay of various elements of the KF, adapted from \\cite{kfkfkf}}\n\\label{fig:kf}\n\\end{figure}\n\nThe critical elements of the KF as seen in the diagram are: the states themselves, a prediction model, an update model, measurements, initialization values, and various uncertainty models. \n\nThe first underlying equation of the KF is the \\textit{process equation}, where the state value at the next time instance $ X_{k+1} $ is predicted by applying the transition matrix $ F_{k+1} $ to the state values at the current time instance $ X_{k} $ and adding a zero-mean Gaussian noise term $ w_{k} $. The covariance of the noise term $ w_{k} $ is collected in the process noise matrix $ Q $. \n\n$$ X_{k+1} = F_{k+1}X_{k} + w_{k}  $$\n\nAnother underlying equation of the KF is the \\textit{measurement equation}. Here the observable measurements contained in $ Y_k $ are related to the states at the current time instance $ X_{k} $ through the measurement matrix $ H_{k} $. A zero-mean Gaussian noise term $ v_{k} $ is added to account for measurement uncertainty. The various measurement uncertainties are collected in the measurement noise matrix $ R $. \n\n$$ Y_k = H_{k}X_{k} + v_{k} $$\n\n\nAs discussed in the literature review, the KF can only be applied to linear systems. The EKF is the extension of the filter so that it may be applied to nonlinear systems. To introduce the workings of the EKF we start from a classic nonlinear system described by the state space model\n\n$$ X_{k+1} = f(k,X_{k}) + w_{k}  $$\n\n$$ Y_k = h(k,X_{k}) + v_{k} $$\n\nIn this representation $ w_{k} $ and $ v_{k} $ are consistent with the definitions given for the KF and are contained in matrices $ Q $ and $ R $, respectively. From the state space equations one can see that we now predict states using a nonlinear transition function $ f $ and relate them to measurements through a nonlinear function $ h $.\n\nThese functions can be linearized through their Jacobian matrices $F$ and $H$, respectively.\n\n$$ F_{k+1,k} = \\frac{\\partial f(k,X_{k})}{\\partial x} $$\n\n$$ H_{k} = \\frac{\\partial h(k,X_{k})}{\\partial x} $$\n\nThese are the fundamental equations of the KF and EKF. We can now use them to derive the process and measurement equations for our system.\n\n\\section{Process Equations}\nThe fundamental assumption made when deriving the prediction equations for our states was that the acceleration (both linear and angular) was constant between sampling intervals. It therefore follows that the positional states of the filter $[x_{body} \\; y_{body} \\; z_{body}]$ and the angular states of the filter 
$[\\phi_{body} \\; \\ldots \\; \\theta_{RA}]$ could be predicted using: \n\n$$ \\ddot{p}_{k+1} = \\ddot{p}_{k} + \\sigma_{\\ddot{p}}^{2} $$\n$$ \\dot{p}_{k+1} = \\dot{p}_{k} + \\ddot{p}_{k}T + \\sigma_{\\dot{p}}^{2} $$\n$$ p_{k+1} = p_{k} + \\dot{p}_{k}T + \\sigma_{p}^{2} $$\n\nfor the positional states, and:\n\n$$ \\ddot{\\alpha}_{k+1} = \\ddot{\\alpha}_{k} + \\sigma_{\\ddot{\\alpha}}^{2} $$\n$$ \\dot{\\alpha}_{k+1} = \\dot{\\alpha}_{k} + \\ddot{\\alpha}_{k}T + \\sigma_{\\dot{\\alpha}}^{2} $$\n$$ \\alpha_{k+1} = \\alpha_{k} + \\dot{\\alpha}_{k}T + \\sigma_{\\alpha}^{2} $$\n\nfor the angular states.\n\nThese prediction equations were created in MATLAB using symbolic functions. The $\\sigma$ term associated with each equation accounts for the prediction uncertainty contained in the $Q$ matrix. By adjusting these values we can improve the filter performance. This is discussed further in a later section.\n\n\\section{Measurement Equations}\nThe measurement equations for the lower limbs were generated using inverse kinematics. This requires a model of the limbs, which is built from the components below.\n\n\\subsection{Euler Matrices}\nThe following matrices are the rotational matrices for rotating a point in 3D space about a certain axis. \n\n$$\nRoll(\\phi) = \n\\begin{bmatrix} \n1 & 0 & 0 \\\\ \n0 & \\cos{\\phi} & -\\sin{\\phi} \\\\ \n0 & \\sin{\\phi} & \\cos{\\phi}  \n\\end{bmatrix}\n$$\n\n\n$$\nPitch(\\theta) = \n\\begin{bmatrix} \n\\cos{\\theta} & 0 & \\sin{\\theta} \\\\ \n0 & 1 & 0 \\\\ \n-\\sin{\\theta} & 0 & \\cos{\\theta}  \n\\end{bmatrix}\n$$\n\n\n$$\nYaw(\\psi) = \n\\begin{bmatrix} \n\\cos{\\psi} & -\\sin{\\psi} & 0 \\\\ \n\\sin{\\psi} & \\cos{\\psi} & 0 \\\\ \n0 & 0 & 1  \n\\end{bmatrix}\n$$\n\n\n\\subsection{Direction Cosine Matrix}\n\n\n\\subsection{Forward Kinematics}\n\n\\textbf{front}\\\\\nright knee\n$$ p1xyz = bodyY + bodyZ + R1 * Thigh $$\nleft knee\n$$ p2xyz = bodyY + bodyZ + R1 * Thigh $$\nright foot\n$$ p3xyz = bodyY + bodyZ + R1 * Thigh + R2 * Calf + R3 * Foot $$\nleft foot\n$$ p4xyz = bodyY + bodyZ + R1 * Thigh + R2 * Calf + R3 * Foot $$\n\n\\textbf{back}\\\\\nright calf\n$$ p1xyz = bodyY + bodyZ + R1 * Thigh + R2 * 0.5 * Calf $$\nleft calf\n$$ p2xyz = bodyY + bodyZ + R1 * Thigh + R2 * 0.5 * Calf $$\nright heel\n$$ p3xyz = bodyY + bodyZ + R1 * Thigh + R2 * Calf $$\nleft heel\n$$ p4xyz = bodyY + bodyZ + R1 * Thigh + R2 * Calf $$\n\n\\section{Camera Matrix}\n\nIt is important to understand the different parameters that mathematically quantify cameras. These parameters can be divided into \\textit{intrinsic} and \\textit{extrinsic}. Extrinsic camera variables relate to the camera's position in the inertial frame and the direction the camera is facing. These can be summarized by the extrinsic camera matrix \n\n$$[ R \\, |\\, \\boldsymbol{t}] = \n\\left[ \\begin{array}{ccc|c} \nr_{1,1} & r_{1,2} & r_{1,3} & t_1 \\\\\nr_{2,1} & r_{2,2} & r_{2,3} & t_2 \\\\\nr_{3,1} & r_{3,2} & r_{3,3} & t_3 \\\\\n\\end{array} \\right]$$\n\n\n\\section{Q Matrix, R Matrix and Initialization}\nThis section will discuss the final components of the EKF, namely the $Q$ matrix containing the various process noise variances, the $R$ matrix containing the various measurement noise variances, and the initial state values.\n\n\\subsection{Defining the Q Matrix}\nFrom the above equations we can see that the $Q$ matrix must have dimensions of $n \\times n$, where $n$ is the total number of states. As previously defined in this section, our filter operates over 42 states, giving $Q$ a size of $42 \\times 42$. 
All the variance parameters will be contained on the diagonal of the matrix, with all other entries being zero.\n\nTo find the initial values of these uncertainties, the derivatives of the various accelerations were taken. The maximum element from that set was selected as the uncertainty parameter.\n\n\\subsection{Defining the R Matrix}\nThe $R$ matrix relates to the measurement variables and must therefore have a size of $m \\times m$, where $m$ is the number of inputs to the EKF. These are properties of the sensors themselves and can be found in the relevant data sheets for the smartphone IMU. As for the cameras, a relatively large uncertainty of about 5 pixels was assumed.\n\n\\subsection{Choosing Initial States}\nThe subject was stationary during the initial stages of the run. This allows us to initialize our state vector with all states initially zero. The filter will therefore track both the transient and the steady-state estimation of a running subject, giving insight into the filter's performance under different conditions. \n\n\n\\subsection{Initializing the Covariance Matrix P}\nFollowing on from the previous section, zeroing the initial states of the system allows us to use a relatively small initial covariance matrix, due to the relative certainty we have that the states are truly zero.\n", "meta": {"hexsha": "706ee400a84b8823c2a2ab32a118e91d27320e9d", "size": 10771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "body/fusion.tex", "max_stars_repo_name": "hendrikjoosten/bachelors-latex", "max_stars_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-10T22:31:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-10T22:31:04.000Z", "max_issues_repo_path": "body/fusion.tex", "max_issues_repo_name": "wolvexyz/thesis-latex", "max_issues_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "body/fusion.tex", "max_forks_repo_name": "wolvexyz/thesis-latex", "max_forks_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.9540816327, "max_line_length": 537, "alphanum_fraction": 0.720824436, "num_tokens": 2960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673087708699, "lm_q2_score": 0.6926419704455589, "lm_q1q2_score": 0.5630260144578338}}
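To make the constant-acceleration prediction step of the chapter above concrete, here is a minimal Python sketch for a single state triple $(p, \dot{p}, \ddot{p})$; the sampling period and noise values are placeholder assumptions, and this is a simplified stand-in for the author's MATLAB implementation, not a reproduction of it.
\begin{verbatim}
import numpy as np

T = 0.01  # placeholder sampling period in seconds

# Transition matrix encoding the chapter's prediction equations:
# p_{k+1} = p_k + p_dot_k * T, p_dot_{k+1} = p_dot_k + p_ddot_k * T,
# p_ddot_{k+1} = p_ddot_k.
F = np.array([[1.0, T,   0.0],
              [0.0, 1.0, T  ],
              [0.0, 0.0, 1.0]])

Q = np.diag([1e-4, 1e-3, 1e-2])  # placeholder process-noise variances

def predict(x, P):
    # One KF/EKF prediction step: propagate the state and its covariance.
    return F @ x, F @ P @ F.T + Q

x0 = np.zeros(3)       # states initialized to zero, as in the chapter
P0 = 1e-3 * np.eye(3)  # small initial covariance, as argued above
x1, P1 = predict(x0, P0)
\end{verbatim}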
{"text": "\\section{Introduction}\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{tikzpicture}[-, node distance = 3cm, auto]\n        \\node[anchor=north](H1){\\(H_1\\)};\n        \\node[anchor=north](H1_d1) at (3, 2){.};\n        \\node[anchor=north](H1_d2) at (3, -2){.};\n\n        \\path(H1) edge node {}(H1_d1);\n        \\path(H1) edge node {}(H1_d2);\n        \\path(H1_d1) edge [bend left] node {}(H1_d2);\n        \\path(H1_d1) [dashed] edge node {}(H1_d2);\n\n        \\node[anchor=north](H2) at (3.9, 0){\\(H_2\\)};\n        \\node[anchor=north](H2_d1) at (6.9, 2){.};\n        \\node[anchor=north](H2_d2) at (6.9, -2){.};\n\n        \\path(H2) edge node {}(H2_d1);\n        \\path(H2) edge node {}(H2_d2);\n        \\path(H2_d1) edge [bend left] node {}(H2_d2);\n\n        \\node[anchor=north](A) at (7.8, 0){\\(A\\)};\n        \\node[anchor=north](A_d1) at (10.8, 2){.};\n        \\node[anchor=north](A_d2) at (10.8, -2){.};\n        \n        \\path(A) edge node {}(A_d1);\n        \\path(A) edge node {}(A_d2);\n        \\path(A_d1) edge [bend left] node {}(A_d2);\n\n    \\end{tikzpicture}\n    \\caption{Ambulance Decision Problem} \n    \\label{Ambulance_Problem}\n\\end{figure}\n\n{\\Large\\textbf{States:}}\n\n\\begin{enumerate}\n    \\item \\(A\\) = Ambulance\n    \\item \\(H_i\\) = Hospital i\n\\end{enumerate}\n\n{\\Large\\textbf{Notation:}}\n\\begin{itemize}\n    \\item \\(\\Lambda\\) = total number of patients that need to be hospitalised\n    \\item \\(p_i\\) = proportion of patients going to Hospital i (\\(p_i\\Lambda\\) \n    = number of patients going to hospital i)\n    \\item \\(d_i\\) = distance from Hospital i\n    \\item \\(\\hat{c_i}\\) = capacity of hospital i\n    \\item \\(W(c, \\lambda\\, \\mu)\\) = waiting time in the system function\n    \\item \\(\\mu_i\\) = service rate of hospital i\n    \\item \\(\\lambda_i^o\\) = arrival rate of other patients to the hospital \n    (not by ambulance)\n    \\item \\(C_i(p_i) = d_i + W(c = \\hat{c_i},\\hspace{0.1cm} \\lambda = \n    p_i\\Lambda + \\lambda_i^o, \\hspace{0.1cm} \\mu = \\mu_i)\\)\n\\end{itemize}\n", "meta": {"hexsha": "5ff0a3f684bd5149b72671de9c0add184825bc4d", "size": 1952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/main/Introduction/main.tex", "max_stars_repo_name": "11michalis11/AmbulanceDecisionGame", "max_stars_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/main/Introduction/main.tex", "max_issues_repo_name": "11michalis11/AmbulanceDecisionGame", "max_issues_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 20, "max_issues_repo_issues_event_min_datetime": "2020-04-20T09:08:31.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-23T11:09:25.000Z", "max_forks_repo_path": "tex/main/Introduction/main.tex", "max_forks_repo_name": "11michalis11/AmbulanceDecisionGame", "max_forks_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.2456140351, "max_line_length": 79, "alphanum_fraction": 0.5701844262, "num_tokens": 718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.7606506635289835, "lm_q1q2_score": 0.5630141283314151}}
{"text": "\\chapter{Short-cutting heuristics}\n\n\n\\section{Simple Heuristic}\n\\begin{enumerate}\n    \\item {The order of appearance of points in the hamiltonian cycle is the order of their appearance in the Euler tour. (For each of the points, we retain the first occurence of the point and discard the remaining).}\n    % \\item Remove repeated points in the Euler tour to get a Hamiltonian cycle.\n    \\item Time complexity is $O(n)$.\n\\end{enumerate}\n% \\includegraphics[scale=0.5]{simple.png}\n\n\\section{Tri-Opt Heuristic}\n\n\\begin{enumerate}\n    \\item In this heuristic we take a Euler tour and greedily remove the repeated vertices which have higher heuristic value.\n    \\item The heuristic value used is sum of distances from the vertex to adjacent vertices minus distance between the adjacent vertices.\n    \\item We can see that this is better than previous one but we are performing it on a Euler tour, hence there is still room for imprevement.\n    \\item Time complexity is $O(n)$.\n\\end{enumerate}\n% \\includegraphics[scale=0.5]{simple.png}\n\n\n\n\\section{Tri-Comp Heuristic}\n\n\\begin{enumerate}\n    \\item This heuristic is applied on Multi graph (H) instead of one Euler tour.\n    \\item Here we start with vertices of order greater than two and greedily remove its edges until its order is two.\n    \\item The idea is that in the final hamiltonian cycle which we need to arrive by short cutting has degree two for all the vertices.\n    \\item Two things we need to do are - pair up the free vertices formed greedily and make sure the process does not result in two disjoint components\n    \\item The heuristic value here is sum of distances between paired up vertices and distace of two edges that remained with our vertex.\n    \\item Since our problem is in 2d space, each vertex in MST will have a degree of maximum 5, so our multi graph will have a maximum degree of 6. This property highly affect the theoritical complexity of our heuristic.\n    \\item Time complexity for checking the graph connectivity is $O(n)$ and therefore, time complexity for the tri-compt heuristic is $O(n^2)$.\n\\end{enumerate}\n% \\includegraphics[scale=0.5]{simple.png}\n\n\\section{DIH-Tri-Comp Heuristic}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.3]{5.jpg}\n    \\caption{Pseudo code for DIH heuristic}\n\\end{figure}\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.3]{4.jpg}\n    \\caption{Applying DIH on $T$(left) to get $T'$(right)}\n\\end{figure}\n\\begin{enumerate}\n    \\item This heuristic is DIH(Degree Increasing Heuristic) optimization applied on the MST $T$ to get a new tree $T'$, then creating a multigraph $H'$ from $T'$, followed by applying Tri-compt heuristic on the multigraph $H'$.\n    \\item We want to increase the order of a vertex in MST by adding the vertices  of its children to itself\n    \\item We can see that the tree $T'$ is no longer MST but this process preserves the Euler tours i.e. the set of Euler tours of this tree $T'$ is super set of that of the MST $T$.\n    \\item This way we are applying heuristic on a bigger space than that of the former one and might have a chance of getting better results.\n    \\item  DIH can be implemented with $O(n)$, so comp heuristic is the bottleneck here. 
The overall time complexity is $O(n^2)$.\n\\end{enumerate}\n\\vspace{7in}\n\\section{Performance analysis of heuristics}\n\nWe measured the performance of the four heuristics over 100 TSP problems from TSPLIB, which is a library of sample instances for the TSP (and related problems) from various sources and of various types.\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[scale=0.22]{6.jpg}\n    \\caption{Performance of heuristics on different families of datasets}\n\\end{figure}", "meta": {"hexsha": "840868c5175527ddc307862b97e9fa584a6ba7dc", "size": 3678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "work-done.tex", "max_stars_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_stars_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "work-done.tex", "max_issues_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_issues_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "work-done.tex", "max_forks_repo_name": "jaiteshp/CS6100-Christofides-ShortCutting-Heurisitcs-Project-Report", "max_forks_repo_head_hexsha": "5819512e75e1a145faa47d4a0168a03d2e9c5c33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.46875, "max_line_length": 228, "alphanum_fraction": 0.7509516041, "num_tokens": 915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7606506418255927, "lm_q1q2_score": 0.5630141035478264}}
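A minimal Python sketch of the Simple Heuristic described in the report above (the function name is mine); it keeps the first occurrence of each vertex in the Euler tour, which is exactly the $O(n)$ short-cutting step:
\begin{verbatim}
def simple_shortcut(euler_tour):
    # Keep the first occurrence of each vertex; discard later repeats.
    seen = set()
    cycle = []
    for v in euler_tour:
        if v not in seen:
            seen.add(v)
            cycle.append(v)
    return cycle

# An Euler tour of a doubled tree revisits vertices; short-cutting
# removes the repeats: [0, 1, 2, 1, 3, 1, 0] -> [0, 1, 2, 3].
print(simple_shortcut([0, 1, 2, 1, 3, 1, 0]))
\end{verbatim}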
{"text": "\\section{The Sorting Problem}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Algorithms}\n  % \\begin{center}\n  %   An algorithm is a sequence of operations \\\\\n  %   that transform the input into the output.\n  % \\end{center}\n  \\begin{center}\n\tWhat is an algorithm? \\qquad \\pause What is computation?\n  \\end{center}\n\n  \\pause\n\n  \\fignocaption{width = 0.50\\textwidth}{figs/algorithm-def.pdf}\n\n  \\pause\n  \\vspace{-0.60cm}\n  \\begin{center}\n\t\\textcolor{red}{Correctness!}\n  \\end{center}\n\n  \\pause\n\n  \\begin{description}[Effectiveness:]\n\t\\item[Definiteness:] precisely defined operations\n\t  \\pause\n\t\\item[Finiteness:] termination\n\t  \\pause\n\t\\item[Effectiveness:] a reasonable model; basic operations % RAM {\\scriptsize (Random-Access Machine)} model\n\t  \\pause\n\t  \\begin{itemize}\n\t\t% \\item unrealistic: \\texttt{sort} operation\n\t\t% \\item realistic: arithmetic, data movement, and control\n\t\t%   \\pause\n\t\t\\item for sorting: compare, swap\n\t  \\end{itemize}\n  \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Sorting}\n  The sorting problem:\n  \\begin{description}\n\t\\item[Input:] A sequence of $n$ integers $A$:\\seq{$a_1, a_2, \\cdots, a_n$}.\n\t\\item[Output:] A permutation $A'$:\\seq{$a'_1, a'_2, \\ldots, a'_n$} of $A$ \\emph{s.t.} $a'_1 \\le a'_2 \\le \\cdots \\le a'_n$ {\\small (non-decreasing order)}.\n  \\end{description}\n\n  \\[\n\t3\\quad 1\\quad 4\\quad 2\\quad \\Longrightarrow 1\\quad 2\\quad 3\\quad 4\n  \\]\n\n  \\pause\n  \\vspace{0.50cm}\n\n  \\fignocaption{width = 0.50\\textwidth}{figs/sorting-alg-def.pdf}\n  % sortable\n  % A little more formalism: ordering relation ``$<$'' on $A$.\n\n  % \\vspace{0.20cm}\n  % $\\forall a, b, c \\in A$,\n  % \\begin{description}[Transitivity:]\n  %   \\item[Trichotomy:] $a < b, a = b, a > b$\n  %   \\item[Transitivity:] $a < b \\land b < c \\implies a < c$\n  % \\end{description}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Inversions}\n  \\[\n\tA = a_1\\quad a_2\\quad \\cdots\\quad a_i\\quad \\cdots\\quad a_j\\quad \\cdots\\quad a_n.\n  \\]\n\n  \\begin{center}\n\tIf $i < j$ and $a_{i} > a_{j}$, then $(a_i, a_j)$ is an \\textcolor{red}{\\bf inversion}.\\\\[8pt] \\pause\n\t\\textcolor{blue}{Adjacent} inversion: $(a_i, a_{i+1})$\n  \\end{center}\n\n  \\pause\n  \\vspace{-0.50cm}\n\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\fignocaption{width = 0.50\\textwidth}{figs/inversions-example.pdf}\n\t\\column{0.50\\textwidth}\n\t{\\small\n\t  \\begin{center}\n\t\t\\#inversions = 3\\\\\n\t\t\\#adjacent inversions = 2\n\t  \\end{center}\n\t}\n  \\end{columns}\n\n  \\pause\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\fignocaption{width = 0.50\\textwidth}{figs/inversions-example-nonincreasing.pdf}\n\t\\column{0.50\\textwidth}\n\t{\\small\n\t  \\begin{center}\n\t\t\\#inversions = 3 + 2 + 1 = 6\\\\\n\t\t\\#adjacent inversions = 3\n\t  \\end{center}\n\t}\n  \\end{columns}\n\n  \\pause\n  \\begin{columns}\n\t\\column{0.50\\textwidth}\n\t  \\fignocaption{width = 0.50\\textwidth}{figs/inversions-example-nondecreasing.pdf}\n\t\\column{0.50\\textwidth}\n\t{\\small\n\t  \\begin{center}\n\t\t\\#inversions = 0\\\\\n\t\t\\#adjacent inversions = 0\n\t  \\end{center}\n\t}\n  \\end{columns}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Inversions}\n  \\begin{center}\n\t\\fbox{\\textcolor{blue}{Theorem:} $A$ is sorted $\\iff$ $A$ has no adjacent inversions.}\n  \\end{center}\n\n  \\begin{align*}\n\t\\onslide<2->{A \\text{ is sorted } \\Longrightarrow A 
\\text{ has no adjacent inversions}.}\n  \\end{align*}\n\n  \\vspace{-0.50cm}\n\n  \\begin{align*}\n\t\\onslide<3->{A \\text{ has no adjacent inversions } &\\Longrightarrow \\forall i \\in [1,n-1]: a_{i} \\le a_{i+1} \\\\}\n\t  \\onslide<4->{&\\Longrightarrow A \\text{ is sorted}.}\n  \\end{align*}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "ce2f6ee52f0b5c619504e787f219ae0a81f6557c", "size": 3513, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/sorting-problem.tex", "max_stars_repo_name": "hengxin/algorithm-lectures", "max_stars_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-04-20T06:57:57.000Z", "max_stars_repo_stars_event_max_datetime": "2017-11-12T19:07:16.000Z", "max_issues_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/sorting-problem.tex", "max_issues_repo_name": "hengxin/algorithm-lectures", "max_issues_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithm-lecture-bubblesort-for-cs-application/sections/sorting-problem.tex", "max_forks_repo_name": "hengxin/algorithm-lectures", "max_forks_repo_head_hexsha": "cf00b0d2d88da6e20d37c36d1f49ca6c1a0669ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-12T10:36:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-12T10:36:11.000Z", "avg_line_length": 25.8308823529, "max_line_length": 155, "alphanum_fraction": 0.635354398, "num_tokens": 1285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8289388040954683, "lm_q1q2_score": 0.5629975786615661}}
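The inversion counts on the slides above are easy to check mechanically. Below is a small Python sketch (ours, not part of the original lecture; the function names are made up) that counts inversions and adjacent inversions and checks the theorem that a sequence is sorted iff it has no adjacent inversions:

\begin{verbatim}
def inversions(a):
    # pairs (i, j) with i < j and a[i] > a[j]
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if a[i] > a[j])

def adjacent_inversions(a):
    # positions i with a[i] > a[i + 1]
    return sum(1 for i in range(len(a) - 1) if a[i] > a[i + 1])

for a in ([3, 1, 4, 2], [4, 3, 2, 1], [1, 2, 3, 4]):
    print(a, inversions(a), adjacent_inversions(a))
    # Theorem from the slide: sorted <=> no adjacent inversions
    assert (adjacent_inversions(a) == 0) == (a == sorted(a))
\end{verbatim}

On the slide's three examples this reports (3, 2), (6, 3), and (0, 0), matching the figures.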
{"text": "\\chapter{Rate-Energy Tradeoff}\nIn this section, we first define the rate-energy (R-E) region for the proposed system, then formulate the characterization into a general optimization problem. On top of it, we decouple the spatial and frequency design, investigate the lower bound of the superposed waveform, consider the PAPR constraint, and extend the approach to MIMO cases.\n\n\\section{Rate-Energy Region}\\label{sec:rate-energy-region}\n  \\input{rate-energy-tradeoff/rate-energy-region.tex}\n\n\\section{Problem Formulation}\\label{sec:problem-formulation}\n  \\input{rate-energy-tradeoff/problem-formulation.tex}\n\n\\section{Iterative Algorithms}\\label{sec:iterative-algorithms}\n  \\input{rate-energy-tradeoff/iterative-algorithms.tex}", "meta": {"hexsha": "f192f4857fecbce92570104a064e173a265dedd0", "size": 726, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/thesis/rate-energy-tradeoff/rate-energy-tradeoff.tex", "max_stars_repo_name": "SnowzTail/signal-optimisation-for-wireless-information-and-power-transmission", "max_stars_repo_head_hexsha": "f53382f99610becd8d78ee34cc9c3d49d2c7f61b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2019-07-10T21:31:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T18:01:41.000Z", "max_issues_repo_path": "tex/thesis/rate-energy-tradeoff/rate-energy-tradeoff.tex", "max_issues_repo_name": "SnowzTail/signal-optimisation-for-wireless-information-and-power-transmission", "max_issues_repo_head_hexsha": "f53382f99610becd8d78ee34cc9c3d49d2c7f61b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/thesis/rate-energy-tradeoff/rate-energy-tradeoff.tex", "max_forks_repo_name": "SnowzTail/signal-optimisation-for-wireless-information-and-power-transmission", "max_forks_repo_head_hexsha": "f53382f99610becd8d78ee34cc9c3d49d2c7f61b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-06-12T23:20:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T18:01:46.000Z", "avg_line_length": 66.0, "max_line_length": 344, "alphanum_fraction": 0.8085399449, "num_tokens": 168, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387998695209, "lm_q2_score": 0.6791786861878392, "lm_q1q2_score": 0.5629975650255054}}
{"text": "%\\section{Land Physics}\n{\\bf \\Large \n\\begin{tabular}{ccc}\n\\hline\n  Corresponding author & : & Tsuyoshi Yamaura\\\\\n\\hline\n\\end{tabular}\n}\n\n\n\\subsection{Ocean physics: slab model}\n\nThe ocean slab model estimates sea temperature tendencies using a single-layered model.\nThe governing equations of the internal energy $E$ (J/m$^2$) and mass of ocean $M$ (kg/m$^2$) are\n\\begin{align}\n  \\frac{\\partial E}{\\partial t} &= G + e_{prec} - e_{evap} + Q_{ext}, \\label{eq:Ocean-Tdt} \\\\\n  \\frac{\\partial M}{\\partial t} &= F_{prec} - F_{evap},\n\\end{align}\nwhere\n$G$ is the downward surface heat flux (J/m$^2$/s);\n$e_{prec}$ and $e_{evap}$ are the downward surface internal energy flux (J/m$^2$/s) of the precipitation and evaporation, respectively;\n$Q_{ext}$ is external heat source (J/m$^2$/s);\nand $F_{prec}$ and $F_{evap}$ are the surface mass flux (kg/m$^2$/s) of the precipitation and evaporation, respectively.\n\nThe internal energy E (J/m$^2$) is\n\\begin{align}\n E = c_l M T,\n\\end{align}\nwhere $M$ and $T$ are total mass of water (kg/m$^2$) and temperature, and\n$c_l$ is the specific heat capacity of water (J/K/kg),\n\n\nThe surface preciptaion flux is\n\\begin{align}\n F_{prec} &= F_{rain} + F_{snow},\n\\end{align}\nwhere $F_{rain}$ and $F_{snow}$ are the surface flux of rain and snow, respectively.\nThe internal energy fluxes are\n\\begin{align}\n e_{prec} &= c_lT_{rain}F_{rain} + ( c_iT_{snow} - L_f ) F_{snow}, \\\\\n e_{evap} &= c_lT_{evap}F_{evap},\n\\end{align}\nwhere $c_i$ is the specific heat capacity of ice (J/K/kg);\n$T_{rain}, T_{snow}$ and $T_{evap}$ are the temperature of rain, snow, and evaporated water, respectively;\nNote that the fluxes of the rain and snow are positive for the downward direction and that of the evapolation is positive for the upward.\nThese ground surface fluxes are calculated in the surface scheme.\n\n\nIn the calculation of change of temperature, the mass change is taken into the account.\nHowever, the mass change is ignored after the calculation and then $M = \\rho_w D$,\nwhere\n$\\rho_w$ is the water density (kg/m$^3$),\nand $D$ is the water depth of the slab model.\n\nEq. 
(\\ref{eq:Ocean-Tdt}) is discretized as follows:\n\\begin{align}\n  T^{n+1}\n &= \\frac{ C_w T^n + \\Delta t ( G + e_{prec} - e_{evap} + Q_{ext} ) }{ C_w + \\Delta t c_l ( F_{prec} - F_{evap} ) }, \\nonumber\\\\\n &= T^n + \\Delta t \\frac{ G + e_{prec} - e_{evap} + Q_{ext} - c_l ( F_{prec} - F_{evap} ) T^n }{ C_w + \\Delta t c_l ( F_{prec} - F_{evap} ) },\n\\end{align}\nwhere\n$C_w$ is the heat capacity of the slab layer (J/K/m$^2$) and $C_w = \\rho_w c_l D$.\n\nNote that the internal energy is not conserved, since the mass change is ignored.\n\n\n\\subsection{Sea ice}\n\\subsubsection{Governing equation}\nThe budget equations for the mass (kg/m$^2$) and internal energy (J/m$^2$) in the ocean and sea ice are\n\\begin{align}\n \\frac{\\partial M_i}{\\partial t} &= f_i ( F_{prec}- F_{subl} ) - m_{mlt} + m_{frz}, \\\\\n \\frac{\\partial E_i}{\\partial t} &= f_i ( G_i - G_{oi} + e_{prec} - e_{subl} ) - c_lT_0m_{mlt} + ( c_iT_0 - L_f ) m_{frz}, \\\\\n \\frac{\\partial M}{\\partial t} &= (1-f_i) ( F_{prec} - F_{evap} ) + m_{mlt} - m_{frz}, \\\\\n \\frac{\\partial E}{\\partial t} &= (1-f_i) ( G_o + e_{prec} - e_{evap} ) + f_i G_{oi} + c_lT_0m_{mlt} - ( c_iT_0 - L_f )m_{frz},\n\\end{align}\nwhere\n$f_i$ is the fraction of the sea ice;\n$m_{mlt}$ and $m_{frz}$ are the mass changes (kg/m$^2$/s) due to the melting of ice and the freezing of sea water, respectively;\n$G_i, G_o$ and $G_{oi}$ are the heat fluxes (J/m$^2$/s) at the ice-atmosphere, ocean-atmosphere, and ice-ocean surfaces, respectively;\n$F_{subl}$ is the upward mass flux due to the sublimation of ice (kg/m$^2$/s);\nand $e_{subl}$ is the upward internal energy flux at the surface (J/m$^2$/s) of the sublimated ice as\n\\begin{align}\n e_{subl} &= (c_iT_{subl}-L_f)F_{subl}.\n\\end{align}\n\nAs noted in the previous section, the mass change in the sea water is ignored after the calculation of the change in the ocean temperature.\n\n$G_{oi}$ is estimated by the diffusion equation\n\\begin{align}\n G_{oi} &= \\nu_i \\frac{T_i - T}{D_i/2}, \\\\\n D_i &= \\frac{M_i}{\\rho_i f_i},\n\\end{align}\nwhere $T_i$ is the temperature of the sea ice;\n$\\rho_i$ is the density of ice (kg/m$^3$);\nand $\\nu_i$ and $D_i$ are the thermal conductivity of ice (J/K/m$^3$/s) and the depth of the sea ice (m), respectively.\n\nThe fraction is estimated as\n\\begin{align}\n f_i &= \\sqrt{ \\frac{M_i}{M_c} },\n\\end{align}\nwhere $M_c$ is the critical ice mass (kg/m$^2$).\n\nThe amount of the melting during a time step is estimated to satisfy the conservation of mass and internal energy of the sea ice as\n\\begin{align}\n M_{mlt} &= \\int_t^{t+\\Delta t} m_{mlt} dt \\nonumber\\\\\n &= \\min\\left\\{ \\max\\left\\{ \\frac{ c_i(T_i-T_0) }{ (c_l-c_i) T_0 + L_f } M_i, 0\\right\\}, M_i\\right\\}.\n\\end{align}\n\nThe amount of the freezing is estimated to satisfy the conservation of mass and internal energy of the ocean as\n\\begin{align}\n M_{frz} &= \\int_t^{t+\\Delta t} m_{frz} dt \\nonumber\\\\\n &= \\min\\left\\{ \\max\\left\\{ \\frac{ c_l \\rho_w D ( T_0 - T ) }{ (c_l - c_i)T_0 + L_f }, 0\\right\\}, \\rho_w D\\right\\},\n\\end{align}\nwhere $T_0$ is the freezing temperature.\n\n\n\\subsubsection{Time integration}\nThe governing equations are solved by a splitting method.\nIn the first step, the mass and internal energy budgets of the ice without the phase change are solved.\nIn the second step, the melting of sea ice is estimated from the mass and internal energy conservation.\nIn the third step, the temperature change of the ocean is calculated in the ocean scheme.\nIn the last step, the freezing of ocean water is estimated and the mass and temperature of ice 
and ocean are updated.\n\nThe following summarizes the sequence of the calculation.\nHere, the superscript ``$n$'' indicates the quantities at the time step $n$, and ``$n_1$'', ``$n_2$'', ``$n_3$'', and ``$n+1$'' are those after the calculation of the first, second, third, and last steps, respectively.\n\n\n\\begin{description}\n\n\\item[First step]\n\\begin{align}\n \\Delta M_i^{n_1} &= \\Delta t f_i^n ( F_{prec} - F_{subl} ), \\\\\n \\Delta E_i^{n_1} &= \\Delta t f_i ( G_i - G_{oi} + e_{prec} - e_{subl} ), \\\\\n M_i^{n_1} &= M_i^n + \\Delta M_i^{n_1}, \\\\\n T_i^{n_1} &= T_i^n + \\frac{ \\Delta E_i^{n_1} - ( c_iT_i^n - L_f ) \\Delta M_i^{n_1} }{c_i M_i^{n_1}}.\n\\end{align}\n\n\\item[Second step]\n\\begin{align}\n M_{mlt} &= \\min\\left\\{ \\max\\left\\{ \\frac{ c_i(T_i^{n_1}-T_0)M_i^{n_1} }{ (c_l-c_i) T_0 + L_f }, 0\\right\\}, M_i^{n_1}\\right\\}, \\\\\n M_i^{n_2} &= M_i^{n_1} - M_{mlt}, \\\\\n T_i^{n_2} &= T_i^{n_1} + \\frac{ - c_l T_0  + ( c_iT_i^{n_1} - L_f )}{c_i M_i^{n_2}}M_{mlt}.\n\\end{align}\n\n\\item[Third step] (Ocean model)\n\\begin{align}\n \\Delta E^{n_3} &= \\Delta t \\{ (1-f_i)(G_o + e_{prec} - e_{evap}) + f_i G_{oi} \\} + c_l T_0 M_{mlt}, \\\\\n \\Delta M^{n_3} &= \\Delta t (1-f_i)(F_{prec}-F_{evap}) + M_{mlt}, \\\\\n T^{n_3} &= T^n + \\frac{ \\Delta E^{n_3} - c_l T^n \\Delta M^{n_3} }{c_l \\{\\rho_w D + \\Delta M^{n_3} \\}}.\n\\end{align}\n\n\\item[Fourth step]\n\\begin{align}\n M_{frz} &= \\min\\left\\{ \\max\\left\\{ \\frac{ c_l \\rho_w D ( T_0 - T^{n_3} ) }{ (c_l - c_i)T_0 + L_f }, 0\\right\\}, \\rho_w D\\right\\}, \\\\\n M_i^{n+1} &= M_i^{n_2} + M_{frz}, \\\\\n T_i^{n+1} &= T_i^{n_3} + ( T_0 - T_i^{n_3} ) \\frac{M_{frz}}{M_i^{n+1}}, \\\\\n T^{n+1} &= T^{n_3} + \\frac{ - ( c_i T_0 - L_f ) + c_l T^{n_3} }{c_l (\\rho_w D - M_{frz}) } M_{frz}.\n\\end{align}\n\n\\end{description}\n\n\\subsection{Sea surface albedo}\n\\subsubsection{Nakajima et al. (2000) model}\n\\citet{nakajima_2000} provides the albedo for the short wave on the sea surface $A$:\n\\begin{align}\n  A = \\exp \\left[ \\sum_{i=1}^{3} \\sum_{j=1}^{5} C_{ij} t^{j-1} \\mu_0^{i-1} \\right],\n\\end{align}\nwhere\n$C_{ij}$ are the empirical optical parameters,\n$t$ is the flux transmissivity for short-wave radiation,\nand $\\mu_0$ is the cosine of the solar zenith angle.\n\n\n\\subsection{Roughness length}\n\\subsubsection{Miller et al. (1992) model}\n\\citet{miller_1992} provides the roughness length over the tropical ocean,\nbased on numerical calculations by combining smooth surface values\nwith the Charnock relation for the aerodynamic roughness length\nand constant values for heat and moisture in accordance with the \\citet{Smith_1988,Smith_1989} suggestions:\n\\begin{align}\n  z_0 &= 0.11\\nu_*/u_* + 0.018u_*^2/g, \\label{eq: z_0} \\\\\n  z_t &= 0.40\\nu_*/u_* + 1.4 \\times 10^{-5}, \\label{eq: z_t} \\\\\n  z_q &= 0.62\\nu_*/u_* + 1.3 \\times 10^{-4}, \\label{eq: z_q}\n\\end{align}\nwhere $\\nu_*$ is the kinematic viscosity of air ($\\sim 1.5 \\times 10^{-5}$ m$^2$/s), and $z_0, z_t$,\nand $z_q$ are the roughness lengths for momentum, heat, and vapor, respectively.\n\n\\subsubsection{Moon et al. 
(2007) model}\n\\citet{moon_2007} provides the air--sea momentum flux at high wind speeds\nbased on the coupled wave--wind model simulations for hurricanes.\nFirst, the wind speed $U$ at 10-m height is estimated from the previous roughness length $z_0$, as follows:\n\\begin{align}\n  U = \\frac{u_{*}}{\\kappa} \\ln \\frac{10}{z_0},\n\\end{align}\nwhere\n$u_{*}$ is the friction velocity (m/s)\nand $\\kappa$ is the von K\\'arm\\'an constant.\nThen, a new roughness length $z_0$ is iteratively estimated from the wind speed:\n\\begin{equation}\n  z_0   = \\left\\{\n  \\begin{array}{lll}\n    \\frac{0.0185}{g} u_{*}^2 & \\mathrm{for} & U < 12.5, \\\\\n    \\left[ 0.085 \\left( -0.56 u_{*}^2 + 20.255 u_{*} + 2.458 \\right) - 0.58 \\right] \\times 10^{-3} & \\mathrm{for} & U \\ge 12.5.\n   \\end{array} \\right.\n\\end{equation}\n\nFurthermore, \\citet{Fairall_2003} provides the roughness lengths for heat and vapor\nusing that for momentum, as follows:\n\\begin{align}\n  z_t &= \\frac{ 5.5 \\times 10^{-5} }{ ( z_0 u_{*} / \\nu_{*} )^{0.6} }, \\\\\n  z_q &= z_t.\n\\end{align}\n\n", "meta": {"hexsha": "5dbfa4bdda4dc6307d20db427c871cb12eef054d", "size": 9437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/descriptions/ocean.tex", "max_stars_repo_name": "slayoo/scale", "max_stars_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2020-06-14T11:12:31.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T05:29:55.000Z", "max_issues_repo_path": "doc/descriptions/ocean.tex", "max_issues_repo_name": "slayoo/scale", "max_issues_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-29T03:38:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-30T05:08:47.000Z", "max_forks_repo_path": "doc/descriptions/ocean.tex", "max_forks_repo_name": "slayoo/scale", "max_forks_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-07-10T10:39:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-28T22:20:41.000Z", "avg_line_length": 44.3051643192, "max_line_length": 216, "alphanum_fraction": 0.6635583342, "num_tokens": 3426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637577007394, "lm_q2_score": 0.6548947425132314, "lm_q1q2_score": 0.5629892752473827}}
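As a sanity check of the discretized slab-ocean update above, here is a minimal Python sketch (ours, not part of the SCALE documentation; the constant values, the default depth, and the function name are all assumptions):

\begin{verbatim}
# Constants are assumed typical values, not taken from SCALE itself.
C_L = 4186.0     # specific heat of water c_l (J/K/kg)
RHO_W = 1000.0   # water density rho_w (kg/m^3)

def slab_ocean_step(T, dt, G, e_prec, e_evap, Q_ext,
                    F_prec, F_evap, D=10.0):
    # One step of T^{n+1} = T^n + dt * (G + e_prec - e_evap + Q_ext
    #   - c_l (F_prec - F_evap) T^n) / (C_w + dt c_l (F_prec - F_evap))
    C_w = RHO_W * C_L * D
    num = G + e_prec - e_evap + Q_ext - C_L * (F_prec - F_evap) * T
    den = C_w + dt * C_L * (F_prec - F_evap)
    return T + dt * num / den

# 100 W/m^2 of net heating warms a 10 m slab by about 0.0086 K per hour:
print(slab_ocean_step(290.0, 3600.0, 100.0, 0.0, 0.0, 0.0, 0.0, 0.0))
\end{verbatim}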
{"text": "\\documentclass[11pt]{article}\n\\usepackage{enumitem}\n\\usepackage{amsmath}\n\\begin{document}\n\\title{MAT1830 --- Assignment 2}\n\\author{Dylan Pinn --- 24160547}\n\\maketitle\n\n\\section*{Question 1}\nUse truth tables to determine whether $(a \\land \\neg b) \\to (b \\lor c)$ is logically equivalent to $\\neg a \\lor b \\lor c$.\n\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n$a$ & $b$ & $c$ & $a \\land \\neg $ & $b \\lor c$ & $(a \\land \\neg b) \\to (b \\lor c)$ & $\\neg a \\lor b \\lor c$ \\\\ \\hline\nT & T & T & F & T & T & T \\\\ \\hline\nT & T & F & F & T & T & T \\\\ \\hline\nT & F & T & T & T & T & T \\\\ \\hline\nT & F & F & T & T & T & F \\\\ \\hline\nF & T & T & F & T & T & T \\\\ \\hline\nF & T & F & F & F & T & T \\\\ \\hline\nF & F & T & F & T & T & T \\\\ \\hline\nF & F & F & F & F & F & T \\\\ \\hline\n\\end{tabular}\n\nBecause they have different values they are not equivalent.\n\n\\break\n\n\\section*{Question 2}\nUse laws of logic to show that $\\neg (p \\lor \\neg (p \\to q)) \\lor p$ is a tautology.\n\n\\begin{equation}\n\t\\neg (p \\lor \\neg (p \\to q)) \\lor p\n\\end{equation}\n\\begin{equation} \\label{eq:1}\n\t\\neg (p \\lor \\neg (\\neg p \\lor q)) \\lor p\n\\end{equation}\nEquation \\ref{eq:1} uses Implication law.\n\\begin{equation} \\label{eq:2}\n\t\\neg (p \\lor \\neg (\\neg p) \\land \\neg q) \\lor p)\n\\end{equation}\nEquation \\ref{eq:2} uses De Morgan's law.\n\\begin{equation} \\label{eq:3}\n\t\\neg (p \\lor p \\land \\neg q) \\lor p\t\n\\end{equation}\nEquation \\ref{eq:3} uses Double Negation law.\n\\begin{equation} \\label{eq:4}\n\t\\neg (p \\land \\neg q) \\lor p\n\\end{equation}\nEquation \\ref{eq:4} uses Idempotent law.\n\\begin{equation} \\label{eq:5}\n\t\\neg p \\land \\neg (\\neg q) \\lor p\n\\end{equation}\nEquation \\ref{eq:5} uses De Morgan's law.\n\\begin{equation} \\label{eq:6}\n\t\\neg  p \\lor q \\lor p\n\\end{equation}\nEquation \\ref{eq:6} uses Double Negation law.\n\\begin{equation} \\label{eq:7}\n\tT \\lor q\n\\end{equation}\nEquation \\ref{eq:7} uses Inverse law.\n\\begin{equation} \\label{eq:8}\n\tT\n\\end{equation}\nEquation \\ref{eq:8} uses Annihilation law.\n\nThis shows that it is a tautology.\n\n\\break\n\n\\section*{Question 3}\nFor each of the following statements, write down the statement's contrapositive and then write down the statement's negation.\n\n\\begin{enumerate}[label=(\\roman*)]\n\\item \"If the function is differentiable, then it is continuous.\"\n\n\t\\textbf{Contrapositive}\n\t\n\t\"If it isn't continuous, then the function is not differentiable.\"\n\t\n\t\\textbf{Negation}\n\t\n\t\"There exists a continuous function that is not differentiable.\"\n\n\\item \"If a weak key was used, then the encryption wasn't secure.\"\n\n\t\\textbf{Contrapositive}\n\t\n\t\"If the encryption was secure, then a weak key wasn't used.\"\n\t\n\t\\textbf{Negation}\n\t\n\t\"The encryption was secure and a weak key was used.\"\n\t\n\\end{enumerate}\n\\break\n\n\\section*{Question 4}\n\nPei Ann has been dealt two cards from a standard 52 card deck. She holds one in her left hand and one in her right.  
\n\nLet $p$ be the proposition \"The card in Pei Ann's left hand is an ace\".\n\nLet $q$ be the proposition \"The card in Pei Ann's right hand is an ace\".\n\nLet $r$ be the proposition \"The card in Pei Ann's left hand is a club\".\n\nLet $s$ be the proposition \"The card in Pei Ann's right hand is a club\".\n\nWrite down propositions (using just $p$, $q$, $r$, $s$ and logical connectives) corresponding to the following statements.\n\n\\begin{enumerate}[label=(\\roman*)]\n\n\\item Neither of Pei Ann's cards is an ace.\n\n$$\\neg p \\land \\neg q$$\n\n\\item If Pei Ann has the ace of clubs in her right hand, then she doesn't have a club in her left hand.\n\n$$(q \\land s) \\to \\neg r $$\n\n\\item Pei Ann has the ace of clubs and another club.\n\n$$((p \\land r) \\lor (q \\land s)) \\land (r \\land s)$$\n\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "ead77242582ad6d0a68885bc8239747b3dcd11ce", "size": 3633, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignments/assignment-02.tex", "max_stars_repo_name": "dylanpinn/MAT1830", "max_stars_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-03-01T22:58:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T03:41:28.000Z", "max_issues_repo_path": "assignments/assignment-02.tex", "max_issues_repo_name": "dylanpinn/MAT1830", "max_issues_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-03-05T13:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2018-06-03T06:46:04.000Z", "max_forks_repo_path": "assignments/assignment-02.tex", "max_forks_repo_name": "dylanpinn/MAT1830", "max_forks_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-27T03:41:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T15:55:08.000Z", "avg_line_length": 27.9461538462, "max_line_length": 125, "alphanum_fraction": 0.6608863198, "num_tokens": 1268, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947425132314, "lm_q2_score": 0.8596637505099168, "lm_q1q2_score": 0.5629892705381508}}
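The equivalence claimed in Question 1 can also be verified by enumerating the truth table in a few lines of Python (our own check, not part of the assignment):

\begin{verbatim}
from itertools import product

lhs = lambda a, b, c: (not (a and not b)) or (b or c)  # (a & ~b) -> (b | c)
rhs = lambda a, b, c: (not a) or b or c                # ~a | b | c

rows = list(product([True, False], repeat=3))
for a, b, c in rows:
    print(a, b, c, lhs(a, b, c), rhs(a, b, c))
assert all(lhs(*r) == rhs(*r) for r in rows)  # same value in every row
\end{verbatim}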
{"text": "\\section{Proof Outline}\n\n\\begin{itemize}\n\n\\item A logical region $L$ is a set of abstract memory locations $a_i, a_j, ... \\in A$.\n\n\\item A coloring function $c : A \\rightarrow C$ is a function from abstract memory locations to ``colors''.\n\n\\item A partitioning $P_{L,c} : C \\rightarrow 2^A$ of a region $L$ with a coloring function $c$ is a function from a color to a subregion of L, satisfying:\n\n\\begin{itemize}\n\n\\item $\\forall L,c,i,a . a \\in L \\wedge c(a) = i \\leftrightarrow a \\in P_{L,c}(i)$\n\n\\item $\\forall L,c,i_1,i_2 . i_1 \\neq i_2 \\rightarrow P_{L,c}(i_1) * P_{L,c}(i_2)$\n\n\\end{itemize}\n\n\\item A static effect $E$ is a set of tuples $\\langle L, op \\rangle$ where $op \\in \\{ read, write \\} \\cup \\{ reduce_f : f \\text{is a reduction function} \\}$.\n\n\\item Physical memory locations are represented by the set $m_1, m_2, ... \\in M$, along with a mapping function $\\alpha : M \\rightarrow A$ that describes which abstract location a physical memory location corresponds to.\n\n\\item A dynamic trace $D = \\langle \\hat E, \\hat O \\rangle$ is a directed acyclic graph whose nodes $\\hat E$ are actual memory operations $\\langle m, op \\rangle$ and edges $\\hat O$ describe a partial ordering $\\hat e_1 \\prec \\hat e_2$ of those memoory operations.  (Hmm...  Need notation that makes it clear that the same memory operation can be performed on the same memory address multiple times.)\n\n\\item Soundness of effects: If $\\vdash t : ^ET$ then the mapping function $\\alpha$ and dynamic trace $D = \\langle \\hat E, \\hat O \\rangle$ that results from evaluating $t$ have the following properties:\n\n\\begin{itemize}\n\n\\item $\\forall m . \\langle m, read \\rangle \\in \\hat E \\rightarrow \\exists L . \\alpha(m) \\in L \\wedge \\langle L, read \\rangle \\in E$\n\n\\item $\\forall m . \\langle m, write \\rangle \\in \\hat E \\rightarrow \\exists L . \\alpha(m) \\in L \\wedge \\langle L, write \\rangle \\in E$\n\n\\item $\\forall m, f . \\langle m, reduce_f \\rangle \\in \\hat E \\rightarrow ( \\exists L . \\alpha(m) \\in L \\wedge \\langle L, reduce_f \\rangle \\in E ) \\vee ( \\exists L_1, L_2 . \\alpha(m) \\in L_1 \\wedge \\langle L_1, read \\rangle \\in E \\wedge \\alpha(m) \\in L_2 \\wedge \\langle L_2, write \\rangle \\in E )$\n\n\\end{itemize}\n\n\\item Two dynamic subtraces $D_1 = \\langle \\hat E_1, \\hat O_1 \\rangle$, and $D_2 = \\langle \\hat E_2, \\hat O_2 \\rangle$ are ``memory ordered'' (written $D_1 \\prec_D D_2$) within a larger trace $D = \\langle \\hat E_1 \\cup \\hat E_2 \\cup \\hat E', \\hat O_1 \\cup \\hat O_2 \\cup \\hat O' \\rangle$ if $D_2$ sees all the results of $D_1$'s memory operations and $D_1$ sees none of the results of $D_2$'s memory operations:\n\n\\begin{tabular}{l@{}l@{}l}\n$D_1 \\prec_D D_2 \\leftrightarrow \\forall$ & $\\hat e_1 = \\langle m_1, op_1 \\rangle \\in \\hat E_1,$ \\\\\n& $\\hat e_2 = \\langle m_2, op_2 \\rangle \\in \\hat E_2 . \\big($ & $m_1 \\neq m_2 \\vee$  \\\\\n&& $( op_1 = read \\wedge op_2 = read ) \\vee$ \\\\\n&& $( op_1 = reduce_f \\wedge op_2 = reduce_f ) \\vee$ \\\\\n&& $( \\hat e_1 \\prec \\hat e_2 \\in \\hat O' ) \\big)$\n\\end{tabular}\n\n\\item Note that if $D_1$ and $D_2$ have no memory addresses in common, then you have $D_1 \\prec_D D_2$ and $D_2 \\prec_D D_1$ for all D.  Maybe $\\prec$ is the wrong symbol to use?\n\n\\item Tasks are annotated with a coherence requirements $H_{excl}, H_{atom} \\subseteq A$.  
The default annotation is $H_{excl} = \\bigcup_{\\langle L, op \\rangle \\in E} L, H_{atom} = \\emptyset$.\n\n\\item The runtime enforces an execution order $\\prec_E$ between two tasks $S_1$ and $S_2$ as follows:\n\n\\begin{itemize}\n\n\\item Strict ordering: when the two tasks have exclusive coherence requirements on two regions that can't be proven disjoint (i.e. $\\not\\vdash H_{excl_1} * H_{excl_2}$), we enforce $S_1 \\prec_E S_2$.\n\n\\item Serializability: when the two tasks have atomic coherence requirements on two regions that can't be proven disjoint, we enforce $S_1 \\prec_E S_2 \\vee S_2 \\prec_E S_1$.\n\n\\end{itemize}\n\n\\item Execution order is stronger than memory order: $(S_1 \\prec_E S_2) \\rightarrow ( \\forall \\hat e_1 \\in \\hat E_1, \\hat e_2 \\in \\hat E_2 . \\hat e_1 \\prec \\hat e_2 ) \\rightarrow (\\forall D. D_1 \\prec_D D_2)$.\n\n\\item Coherence of sibling tasks:  If sibling tasks $S_1$ and $S_2$ are program ordered (i.e. $S_1 \\prec_P S_2$) within their parent task:\n\n\\begin{itemize}\n\n\\item Overlap in exclusivity requirements guarantees memory ordering: $H_{excl_1} \\cap H_{excl_2} \\neq \\emptyset \\rightarrow D_1 \\prec_D D_2$.  (If $\\vdash (E_1 \\cap H_{excl_1}) * (E_2 \\cap H_{excl_2})$, soundness of effects guarantees disjointness of memory addresses and therefore memory ordering.  If not, the runtime enforces execution order and therefore memory ordering.)\n\n\\item Overlap in atomic requirements guarantees serializability: $H_{atom_1} \\cap H_{atom_2} \\neq \\emptyset \\rightarrow D_1 \\prec_D D_2 \\vee D_2 \\prec_D D_1$.  (Parallels proof above.)\n\n\\end{itemize}\n\n\\end{itemize}\n", "meta": {"hexsha": "4d6dd2ca8984cb1d4e33aea030495507e6e10afa", "size": 4823, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/popl2013/outline.tex", "max_stars_repo_name": "lightsighter/LegionOrigins", "max_stars_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-10T06:29:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T20:56:13.000Z", "max_issues_repo_path": "doc/popl2013full/outline.tex", "max_issues_repo_name": "lightsighter/LegionOrigins", "max_issues_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/popl2013full/outline.tex", "max_forks_repo_name": "lightsighter/LegionOrigins", "max_forks_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.1756756757, "max_line_length": 410, "alphanum_fraction": 0.6979058677, "num_tokens": 1615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.859663743319094, "lm_q2_score": 0.6548947357776795, "lm_q1q2_score": 0.5629892600386089}}
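To make the ``memory ordered'' definition above concrete, here is a small Python sketch of the predicate $D_1 \prec_D D_2$ (an illustration of ours, not from the outline; the event encoding is an assumption):

\begin{verbatim}
def memory_ordered(E1, E2, O_prime):
    # Events are (event_id, m, op) triples; op is 'read', 'write',
    # or ('reduce', f).  O_prime is a set of (id1, id2) order edges.
    # This encoding is our own choice, made only for illustration.
    for (id1, m1, op1) in E1:
        for (id2, m2, op2) in E2:
            ok = (m1 != m2
                  or (op1 == 'read' and op2 == 'read')
                  or (isinstance(op1, tuple) and op1[0] == 'reduce'
                      and op1 == op2)
                  or (id1, id2) in O_prime)
            if not ok:
                return False
    return True
\end{verbatim}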
{"text": "\\chapter{Miscellaneous Services}\n\nThe modules described in this chapter provide miscellaneous services\nthat are available in all Python versions.  Here's an overview:\n\n\\begin{description}\n\n\\item[math]\n--- Mathematical functions (\\code{sin()} etc.).\n\n\\item[rand]\n--- Integer random number generator.\n\n\\item[whrandom]\n--- Floating point random number generator.\n\n\\item[array]\n--- Efficient arrays of uniformly typed numeric values.\n\n\\end{description}\n", "meta": {"hexsha": "b7a726e72c91687ff0cf5af37cc4142de6243113", "size": 449, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Doc/libmisc.tex", "max_stars_repo_name": "AtjonTV/Python-1.4", "max_stars_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_stars_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Doc/libmisc.tex", "max_issues_repo_name": "AtjonTV/Python-1.4", "max_issues_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_issues_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Doc/libmisc.tex", "max_forks_repo_name": "AtjonTV/Python-1.4", "max_forks_repo_head_hexsha": "2a80562c5a163490f444181cb75ca1b3089759ec", "max_forks_repo_licenses": ["Unlicense", "TCL", "DOC", "AAL", "X11"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.380952381, "max_line_length": 68, "alphanum_fraction": 0.7616926503, "num_tokens": 92, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5629545813871741}}
{"text": "\\section{Scheduling and Partial Orders}\n\n\\frame{\n{Part 3: Scheduling and Partial Orders}\n\n\\tableofcontents[currentsection,hideallsubsections, firstsection=1, sections={1-3}]\n}\n\n\\subsection{Scheduling}\n\n\\begin{frame}\n  \\frametitle{Using DAGs for Scheduling}\n\n  Let's consider again using DAGs for calculating course prerequisites.\\medskip\n\n  \\begin{columns}[T]\n    \\column{0.3\\textwidth}\n    $18.01 \\rightarrow 6.042$\\\\\n    $18.01 \\rightarrow 18.02$\\\\\n    $18.01 \\rightarrow 18.03$\n    \\column{0.35\\textwidth}\n    $6.001 \\rightarrow 6.034$\\\\\n    $6.042 \\rightarrow 6.046$\\\\\n    $8.02 \\rightarrow 6.002$\\\\\n    $18.03, 6.002 \\rightarrow 6.004$\n    \\column{0.35\\textwidth}\n    $6.001, 6.004 \\rightarrow 6.033$\\\\\n    $6.033 \\rightarrow 6.857$\\\\\n    $6.046 \\rightarrow 6.840$\n  \\end{columns}\n\n  \\vfill\n\n  We say that $u$ is a \\structure{indirect prerequisite} of $v$ if there is a positive length walk in graph $R$:\n\n    \\begin{center}\n      $18.01 \\rightarrow 6.042 \\rightarrow 6.046 \\rightarrow 6.840$\n    \\end{center}\n\\end{frame}\n\n\n\n\\begin{frame}{DAGs and Scheduling}{Minimal, Minimum, Maximal, Maximum of a DAG}\n  \\begin{columns}\n    \\column{0.4\\textwidth}\n      \\includegraphics[width=1\\textwidth]{../img/greedy_schedule}\n\n      \\column{0.6\\textwidth}\n    \\begin{itemize}\n    \\item A \\structure{minimal} course is does not have any prerequisites:\n    \\begin{itemize}\n      \\item $\\emptyset \\to 18.01$, $\\emptyset \\to 6.001$, $\\emptyset \\to  8.02$\n    \\end{itemize}\\bigskip\n\n    \\item A \\structure{minimum} course is an indirect prerequisite of {\\bf all} courses.\n    \\begin{itemize}\n      \\item none in this example!\n      \\item if we add a course $x \\to \\{18.01, 8.02, 6.001\\}$, then $x$ would be the minimum.\n    \\end{itemize}\\bigskip\n\n    \\item \\structure{Maximal} and \\structure{maximum} courses have a similar definition.\n    \\begin{itemize}\n      \\item $\\{18.02, 6.840, 6.857, 6.034\\}\\to \\emptyset$ are maximal.\n    \\end{itemize}\n    \\end{itemize}\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}{DAG and Scheduling}{How to Schedule}\n\n  \\begin{columns}\n    \\column{0.4\\textwidth}\n      \\includegraphics[width=1\\textwidth]{../img/greedy_schedule}\n\n    \\column{0.6\\textwidth}\n      If we have the graph of course requirements, how do we select the courses for each semester? 
\\medskip\n\n      \\structure{Greedy Scheduling}:\n      \\begin{enumerate}\n      \\item Identify Minimal Subjects;\n      \\item Add Minimal Subjects to Schedule;\n      \\item Remove Minimal Subjects;\n      \\item Return to Step 1\n      \\end{enumerate}\n  \\end{columns}\\medskip\n\n  Schedule:\\\\\n  $\\{18.01, 8.02, 6.001\\} \\to \\{18.02, 6.042, 18.03, 6.002, 6.034\\} \\to \\{6.046, 6.004\\} \\to \\{ 6.840, 6.033\\} \\to 6.857$\n  % A code sketch of this greedy procedure is given after these slides.\n\\end{frame}\n\n\n\\begin{frame}{DAG and Scheduling}{Anti-Chains}\n  \\begin{columns}\n  \\column{0.4\\textwidth}\n    \\includegraphics[width=1\\textwidth]{../img/greedy_schedule}\n\n  \\column{0.6\\textwidth}\n\n    \\begin{itemize}\n    \\item An \\structure{anti-chain} is a set of vertices (courses) where there is no direct or indirect requisite relation among them.\\medskip\n\n    \\item This means that the courses in an anti-chain can be taken in any order, even all at the same time.\\medskip\n\n    \\item Members of an anti-chain are \\structure{incomparable}: It is not possible to say which one comes first.\\medskip\n\n    \\item A relation graph can have multiple anti-chains. Example:\n    \\begin{itemize}\n      \\item $\\{6.046, 6.004\\}$\n      \\item $\\{6.046, 18.03, 6.001\\}$\n    \\end{itemize}\n    \\end{itemize}\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}{DAG and Scheduling}{Chains and Topological Sort}\n\n  \\begin{block}{Chains}\n    Just like an anti-chain is a set of vertices that have no relation among themselves, a \\structure{chain} is a set of vertices that {\\bf all} have a relation among themselves.\n  \\end{block}\\medskip\n\n  Using chains and anti-chains, we can define a \\structure{Topological Sort}. A topological sort is an ordering of all vertices in $G$ that obeys the requisite relations.\n  \\begin{itemize}\n    \\item 18.01, 6.001, 8.02, 6.002, 18.03, 6.034, 6.042, 18.02, 6.004, 6.046, 6.033, 6.840, 6.857\n    \\item 6.001, 8.02, 6.002, 18.01, 6.034, 18.03, 18.02, 6.042, 6.004, 6.046, 6.033, 6.857, 6.840\n  \\end{itemize}\n  If $G$ has anti-chains, it will also have multiple topological sorts.\n\n\\end{frame}\n\n\\begin{frame}{DAG and Scheduling}{Parallel Processing}\n\n  We can use the same way of thinking to describe \\structure{parallel scheduling} of tasks.\\bigskip\n\n  \\begin{itemize}\n    \\item $n$ tasks have to be executed by $p$ processors.\n    \\item some pairs of tasks have a {\\bf prerequisite} relation.\n    \\item \\structure{Minimum Parallel Time}: minimum time to complete all tasks (assuming no limits on $p$)\n      \\begin{itemize}\n        \\item Minimum Parallel Time = Maximum Chain Size\n      \\end{itemize}\n    \\item \\structure{Maximum Parallel Load}: value of $p$ necessary to achieve the Minimum Parallel Time\n    \\begin{itemize}\n      \\item Maximum Parallel Load $\\leq$ Maximum Anti-chain Size\n    \\end{itemize}\n  \\end{itemize}\n\\end{frame}\n\n%\\section{Partial Orders and Equivalence}\n\n\\begin{frame}{Partial orders: Transitivity}\n\n  In a graph $G$, if there is a walk from $u$ to $v$, and a walk from $v$ to $w$, then there is a walk from $u$ to $w$.\n  \\begin{center}\n    \\begin{tikzpicture}[scale=.5,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex] (a) at (0,0) {u};\n      \\node[vertex] (b) at (3,0) {v};\n      \\node[vertex] (c) at (6,0) {w};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (b) to (c);\n      \\draw[dotted] (a) to[bend right] (c);\n    \\end{tikzpicture}\n  \\end{center}\\bigskip\n\n  Representing this in terms of 
\\structure{walk relation} in $G$:\n  \\begin{equation*}\n    uG^+v \\land vG^+w \\implies uG^+w\n  \\end{equation*}\\bigskip\n\n  \\begin{block}{Definition: Transitive Relations}\n    A relation {\\bf R} is transitive if:\\hspace{1cm} $xRy \\land yRz \\implies xRz$\n  \\end{block}\n\\end{frame}\n\n\\begin{frame}{Partial Orders: Asymmetry}\n\n  In an \\structure{acyclic digraph G}, we can observe that for any two vertices $v$ and $u$, if there is a walk from $v$ to $u$, then there is no walk from $u$ to $v$.\\bigskip\n\n  \\begin{block}{Definition: Asymmetric Relation}\n    A relation {\\bf R} is asymmetric if: $uRv \\implies \\text{NOT}(vRu)$\n  \\end{block}\n\\end{frame}\n\n\\begin{frame}{Strict Partial Order}\n\n  A relation $R$ is a \\structure{Strict Partial Order} (SPO) {\\bf iff} it is {\\bf Transitive} and {\\bf Asymmetric}.\\bigskip\n\n  Examples:\n  \\begin{itemize}\n  \\item The $\\subset$ relation on sets\n  \\item The ``indirect prerequisite'' relationship on subjects.\n  \\item The $<$ relationship on $\\mathbb{R}$\n  \\end{itemize}\\bigskip\n\n  Another way to say it is that $R$ is a \\structure{SPO} {\\bf iff} $R$ is the walk relation $D^+$ for some DAG $D$.\n\\end{frame}\n\n\\begin{frame}{Path Total Orders}\n\n    A \\structure{Strict Partial Order} is also \\structure{Path Total} if, for any two distinct elements, one will always be ``greater than'' another.\n\n    \\bigskip\n\n    Example: $<$ on $\\mathbb{R}$: if $x,y \\in \\mathbb{R},\n    x\\neq y \\implies x>y \\text{ or } y>x$\n\n    \\bigskip\n\n    Counter-Example: $\\subset$ in POW($\\mathbb{N}$): $\\{1,3\\} \\not\\subset \\{2,5\\} \\not\\subset \\{1,3\\}$\n\n    \\vfill\n\n    \\begin{itemize}\n    \\item Relation $R$ is \\structure{path total}: if $x \\neq y \\implies xRy \\lor yRx$\n    \\item This means there are \\alert{no incomparable elements}\n    \\end{itemize}\n\n    In a \\structure{path total} relation, the whole graph is a\n    \\structure{chain}\n  \\begin{center}\n    \\begin{tikzpicture}[scale=.8,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex] (a) at (0,0) {};\n      \\node[vertex] (b) at (1,0) {};\n      \\node[vertex] (c) at (2,0) {};\n      \\node[vertex] (d) at (4,0) {};\n      \\node[vertex] (e) at (5,0) {};\n      \\node[vertex] (f) at (6,0) {};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (b) to (c);\n      \\draw[dotted] (c) to (d);\n      \\draw[edge] (d) to (e);\n      \\draw[edge] (e) to (f);\n    \\end{tikzpicture}\n  \\end{center}\n\\end{frame}\n\n\\begin{frame}{Weak Partial Order}\n\n  A \\structure{weak partial order} is the same as a \\structure{strict\n    partial order} R, except that $aRa$ always holds:\n  \\begin{center}\n    \\begin{tikzpicture}[scale=.8,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex] (a) at (0,0) {};\n      \\node[vertex] (b) at (1,0) {};\n      \\node[vertex] (c) at (2,0) {};\n      \\node[vertex] (d) at (4,0) {};\n      \\node[vertex] (e) at (5,0) {};\n      \\node[vertex] (f) at (6,0) {};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (b) to (c);\n      \\draw[dotted] (c) to (d);\n      \\draw[edge] (d) to (e);\n      \\draw[edge] (e) to (f);\n      \\draw[edge] (a) to[loop] (a);\n      \\draw[edge] (b) to[loop] (b);\n      \\draw[edge] (c) to[loop] (c);\n      \\draw[edge] (d) to[loop] (d);\n      \\draw[edge] (e) to[loop] (e);\n      \\draw[edge] (f) to[loop] (f);\n    \\end{tikzpicture}\n  \\end{center}\n\n  \\bigskip\n\n  \\begin{itemize}\n  \\item Examples: $\\subseteq$ on 
sets, $\\leq$ on $\\mathbb{R}$\n  \\item Weak Partial Orders define the property of\n    \\structure{Reflexivity}\n  \\item Relation $R$ on $A$ is \\structure{reflexive} {\\bf iff} $aRa,\n    \\forall a\\in A$\n  \\end{itemize}\n\n  Another way to define a weak partial order is that $R$ is a WPO {\\bf iff}\\\\ $R = D^*$ for some DAG $D$\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Asymmetry and Antisymmetry}\n\n  {\\larger\n\n    \\begin{block}{Asymmetry}\n      \\begin{itemize}\n      \\item Reflexivity is \\alert{never} allowed\n      \\item R is \\structure{asymmetric} {\\bf iff}:\n        \\begin{equation*}\n          xRy \\implies \\text{NOT}(yRx)\n        \\end{equation*}\n      \\end{itemize}\n\n    \\end{block}\n\n    \\begin{block}{Antisymmetry}\n      \\begin{itemize}\n      \\item Reflexivity is \\alert{sometimes} allowed\n      \\item R is \\structure{antisymmetric} {\\bf iff}\n        \\begin{equation*}\n          xRy \\implies \\text{NOT}(yRx), \\text{ for } x \\neq y\n        \\end{equation*}\n      \\end{itemize}\n    \\end{block}\n  }\n\\end{frame}\n\n\n\\subsection{Partial Orders and Isomorphisms}\n\n\\begin{frame}{Partial Orders and Isomorphism}{Proper Subset Relation}\n\n  The proper subset relation $A \\subset B$ represents a partial order.\\bigskip\n\n  For example, on the following collection of subsets of $\\{1,2,3,5,10,15,30\\}$, the proper subset relation looks as follows:\n\n\\begin{center}\n    \\begin{tikzpicture}[scale=1.2,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex,label={1}] (a) at (0,1) {};\n      \\node[vertex,label={1,3}] (b) at (2,0) {};\n      \\node[vertex,label={1,5}] (c) at (2,1) {};\n      \\node[vertex,label={1,2}] (d) at (2,2) {};\n      \\node[vertex,label={1,3,5,15}] (e) at (4,0) {};\n      \\node[vertex,label={1,2,5,10}] (f) at (4,2) {};\n      \\node[vertex,label={1,2,3,5,10,15,30}] (g) at (5,1) {};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (a) to (c);\n      \\draw[edge] (a) to (d);\n      \\draw[edge] (c) to (e);\n      \\draw[edge] (b) to (e);\n      \\draw[edge] (c) to (f);\n      \\draw[edge] (d) to (f);\n      \\draw[edge] (e) to (g);\n      \\draw[edge] (f) to (g);\n    \\end{tikzpicture}\n  \\end{center}\n\\end{frame}\n\n\\begin{frame}{Partial Orders and Isomorphism}{Proper Divide Relation}\n\n  The \\structure{proper divide} relation is defined as $a R b$ if $a|b$ and $a \\neq b$.\\bigskip\n\n  We can see that, for the set $\\{1,2,3,5,10,15,30\\}$, the proper subset relation and the proper division relation have {\\bf the same relationship DAG}.\\bigskip\n\n  This means that the two relations are \\structure{{\\bf isomorphic}}.\n\n  \\begin{center}\n    \\begin{tikzpicture}[scale=1.2,auto,swap]\n      \\tikzset{edge/.style = {->,>=latex'}}\n      \\node[vertex,label={1}] (a) at (0,1) {};\n      \\node[vertex,label={3}] (b) at (2,0) {};\n      \\node[vertex,label={5}] (c) at (2,1) {};\n      \\node[vertex,label={2}] (d) at (2,2) {};\n      \\node[vertex,label={15}] (e) at (4,0) {};\n      \\node[vertex,label={10}] (f) at (4,2) {};\n      \\node[vertex,label={30}] (g) at (5,1) {};\n      \\draw[edge] (a) to (b);\n      \\draw[edge] (a) to (c);\n      \\draw[edge] (a) to (d);\n      \\draw[edge] (c) to (e);\n      \\draw[edge] (b) to (e);\n      \\draw[edge] (c) to (f);\n      \\draw[edge] (d) to (f);\n      \\draw[edge] (e) to (g);\n      \\draw[edge] (f) to (g);\n    \\end{tikzpicture}\n  \\end{center}\n\\end{frame}\n\n\\begin{frame}{Isomorphism}\n\n  
\\begin{itemize}\n    \\item Two graphs are \\structure{isomorphic} if they have the same \\structure{structure of vertices and edges}, up to renaming the vertices\\bigskip\n\n    \\item More formally, two graphs $G_1, G_2$ are \\structure{isomorphic} if there is a relation $M$ which is an \\emph{edge-preserving matching} between their vertices.\\bigskip\n\n    \\item $G_1 \\text{ isomorphic } G_2 \\iff \\exists \\text { bijection\n    } M:V_1\\to V_2$\\\\\\hfill with $(u,v) \\in E_1 \\iff\n      (M(u),M(v)) \\in E_2$\n\n    \\end{itemize}\n\\end{frame}\n\n%% TODO: Remember why this is important and then re-add to the lecture\n% \\begin{frame}\n%   \\frametitle{Isomorphism, $\\subset$ and partial orders}\n%\n%   {\\larger\n%     {\\bf Theorem:} Every strict p.o. $R$ is isomorphic to some\n%     collection of sets partially ordered by $\\subset$.\n%\n%     \\bigskip\n%\n%     {\\bf Proof (by construction):}\n%     \\begin{itemize}\n%     \\item Map element $a$ to the set of elements below it.\n%     \\item in other words, $a$ maps to $\\{b \\in A| bRa \\lor b = a\\}$\\\\\n%       \\hfill(remember that NOT$(aRa)$)\n%     \\item in other words, $f(a) ::= R^{-1}(a) \\cup \\{a\\}$\n%     \\end{itemize}\n%\n%     \\bigskip\n%\n%     {\\bf Example:} from divides\n%     \\begin{itemize}\n%     \\item $f(10) = 1|10, 2|10, 5|10, \\cup \\{10\\} = \\{1,2,5,10\\}$\n%     \\item $f(3) = 1|3, \\cup \\{3\\} = \\{1,3\\}$\n%     \\end{itemize}\n%   }\n% \\end{frame}\n\n\\subsection{Equivalence Relations}\n\n\\begin{frame}\n  \\frametitle{Symmetric Relations and Equivalence Relations}\n\n  \\begin{itemize}\n    \\item If there is a walk from $u$ to $v$ and a walk from $v$ to\n      $u$, then we say that $u$ and $v$ are \\structure{strongly\n      connected}.\n      \\begin{itemize}\n        \\item $uG^*v$ and $vG^*u$\n      \\end{itemize}\\bigskip\n\n    \\item The relation $R$ is \\structure{symmetric} if $aRb \\implies bRa$.\n    \\begin{itemize}\n      \\item The walk relation of a \\structure{strongly connected} graph is symmetric.\n    \\end{itemize}\\bigskip\n\n    \\item An \\structure{equivalence relation} $R$ is: transitive,\n      symmetric and reflexive.\\bigskip\n\n    \\item This means that $R$ is an \\structure{equivalence relation} {\\bf iff} $R$ is the \\structure{strongly connected} relation of some DiGraph.\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Equivalence Relations Examples}\n\n  The definitions of the last slide allow us to formally define an \\emph{equivalence relation}.\\bigskip\n\n  {\\larger\n    Examples:\n    \\begin{itemize}\n    \\item Equality: $=$\n    \\item $\\equiv $(mod n)\n    \\item Same Size, Same Color, etc.\n    \\end{itemize}\n  }\\bigskip\n\n  It may seem that an equivalence relation is too obvious to need a definition (especially for numbers!), but this can be useful when we want to define equivalence for more complex things, like sets.\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Relation Properties: Graphical Review}\n\n  {\\larger\n\n    \\begin{columns}\n      \\column{0.5\\textwidth}\n      \\begin{center}\n      Reflexive:\\\\\n      \\begin{tikzpicture}[scale=1,auto,swap]\n        \\tikzset{edge/.style = {->,>=latex'}}\n        \\node[vertex] (a) at (0,0) {};\n        \\node[vertex] (b) at (1,0) {};\n        \\node[vertex] (c) at (0,1) {};\n        \\draw[edge] (a) to[loop] (a);\n        \\draw[edge] (b) to[loop](b);\n        \\draw[edge] (c) to[loop] (c);\n        \\draw[edge] (c) to (b);\n      \\end{tikzpicture}\n\n      \\bigskip\n\n      
Transitive:\\\\\n      \\begin{tikzpicture}[scale=1,auto,swap]\n        \\tikzset{edge/.style = {->,>=latex'}}\n        \\node[vertex] (a) at (0,0) {};\n        \\node[vertex] (b) at (1,0) {};\n        \\node[vertex] (c) at (2,0) {};\n        \\draw[edge] (a) to (b);\n        \\draw[edge] (b) to (c);\n        \\draw[dotted] (a) to[bend right] (c);\n      \\end{tikzpicture}\n      \\end{center}\n      \\column{0.5\\textwidth}\n      \\begin{center}\n      Asymmetric:\\\\\n      \\begin{tikzpicture}[scale=1,auto,swap]\n        \\tikzset{edge/.style = {->,>=latex'}}\n        \\node[vertex] (a) at (0,0) {};\n        \\node[vertex] (b) at (1,1) {};\n        \\node[vertex] (c) at (2,0) {};\n        \\draw[edge] (a) to (b);\n        \\draw[edge] (b) to (c);\n        \\draw[edge] (a) to (c);\n      \\end{tikzpicture}\n\n      \\bigskip\n\n      Symmetric:\\\\\n      \\begin{tikzpicture}[scale=1,auto,swap]\n        \\tikzset{edge/.style = {->,>=latex'}}\n        \\node[vertex] (a) at (0,0) {};\n        \\node[vertex] (b) at (1,0) {};\n        \\node[vertex] (c) at (2,0) {};\n        \\draw[edge] (a) to[bend left] (b);\n        \\draw[edge] (b) to[bend left] (a);\n        \\draw[edge] (b) to[bend left] (c);\n        \\draw[edge] (c) to[bend left] (b);\n      \\end{tikzpicture}\n\n      \\end{center}\n    \\end{columns}\n  }\n\\end{frame}\n\n\n% TODO: Remember why this is important and re-introduce it!\n% \\begin{frame}\n%   \\frametitle{Representing Equivalence}\n%   {\\larger\n%\n%     \\begin{itemize}\n%     \\item For a total function $f:A\\rightarrow B$\n%     \\item We can define an equivalence relation: $\\equiv_f$ on $A$:\n%       \\begin{equation}\n%         a \\equiv_f a' \\iff f(a) = f(a')\n%       \\end{equation}\n%\n%       \\bigskip\n%\n%     \\item {\\bf Theorem:} Relation $R$ on set $A$ is an equiv. 
relation\n%       {\\bf iff}: $R$ is $\\equiv_f$ for some $f:A\\rightarrow B$\n%\n%       \\bigskip\n%\n%     \\item {\\bf Example:} $\\equiv$ (mod n) is $\\equiv_f$\n%       \\structure{where} $f(k) ::= \\text{rem}(k,n)$\n%     \\end{itemize}\n%   }\n% \\end{frame}\n%\n% \\begin{frame}\n%   \\frametitle{Equivalence and Partition}\n%\n%   {\\larger\n%     \\begin{itemize}\n%     \\item We define a \\structure{partition} $\\Pi$ of a set $A$, where\n%       $\\$Pi$ is a collection of subsets of $A$ that cover all elements\n%       but do not overlap.\n%\n%       \\bigskip\n%\n%       {\\bf Example:} For A = $\\{a,b,c,d,e\\}$ one partition could be:\n%       $\\{a,b\\},\\{c,e\\},\\{d\\}$\n%\n%\n%       \\bigskip\n%\n%     \\item We define a relatin $\\equiv_{\\Pi}$ on A: $a \\equiv_{\\Pi} a'$\n%       if both $a$ and $a'$ are in the same subset of $\\Pi$\n%\n%       \\bigskip\n%\n%     \\item A relation $R$ on set $A$ is an equivalence relation {\\bf\n%       iff} $R$ is $\\equiv_{\\Pi}$ for some partition $\\Pi$ of $A$.\n%     \\end{itemize}\n%\n%   }\n% \\end{frame}\n", "meta": {"hexsha": "8c16c73550a247d076a7f66665fa1fb86d5a8d7c", "size": 18333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week04/03_Scheduling.tex", "max_stars_repo_name": "caranha/MathCS", "max_stars_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-09-13T18:59:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T02:14:56.000Z", "max_issues_repo_path": "week04/03_Scheduling.tex", "max_issues_repo_name": "caranha/MathCS", "max_issues_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week04/03_Scheduling.tex", "max_forks_repo_name": "caranha/MathCS", "max_forks_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.9390243902, "max_line_length": 198, "alphanum_fraction": 0.6041018928, "num_tokens": 6309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5629545804098984}}
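The greedy scheduling procedure from the slides (identify minimal subjects, schedule them, remove them, repeat) can be written directly in Python. This sketch is ours, not part of the lecture materials; the prerequisite table transcribes the course DAG from the slides:

\begin{verbatim}
PREREQS = {  # course -> direct prerequisites, copied from the slides
    '18.01': [], '6.001': [], '8.02': [],
    '6.042': ['18.01'], '18.02': ['18.01'], '18.03': ['18.01'],
    '6.034': ['6.001'], '6.046': ['6.042'], '6.002': ['8.02'],
    '6.004': ['18.03', '6.002'], '6.033': ['6.001', '6.004'],
    '6.857': ['6.033'], '6.840': ['6.046'],
}

def greedy_schedule(prereqs):
    # 1. identify minimal subjects; 2. schedule them; 3. remove them;
    # 4. repeat -- exactly the four steps on the slide.
    remaining = {c: set(p) for c, p in prereqs.items()}
    semesters = []
    while remaining:
        minimal = {c for c, p in remaining.items() if not p}
        if not minimal:
            raise ValueError('prerequisite cycle: not a DAG')
        semesters.append(sorted(minimal))
        remaining = {c: p - minimal for c, p in remaining.items()
                     if c not in minimal}
    return semesters

for i, sem in enumerate(greedy_schedule(PREREQS), 1):
    print('Semester', i, ':', sem)
\end{verbatim}

Running it prints the same five semesters as the slide's schedule; the number of semesters equals the maximum chain size (minimum parallel time), and the largest semester is bounded by the maximum anti-chain size.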
{"text": "\\section{Locality (almost done!)}\n\\begin{theorem}\nWe discovered that $S^\\sca_\\ast(X)\\hookrightarrow S_\\ast(X)$ is a subcomplex, and this induced an isomorphism in homology.\n\\end{theorem}\nWe talked about subdivision and the cone construction, the latter of which dealt with a star-shaped region, relative to some point (which we can safely assume is the origin) $b$. If $\\sigma:\\Delta^n\\to X$ is a map, then $b\\ast \\sigma:S_n(X)\\to S_{n+1}(X)$ where $\\ast$ is the join. We did all of this before. The property that this had is that it's a homotopy between $1$ and $\\eta_b\\epsilon$, i.e., $db\\ast + b\\ast d = 1-\\eta_b\\epsilon$. This is called equation $(\\ast)$. Look above for the definition of $\\eta_b$ and $\\epsilon$. Hopefully you remember this story.\n\nThe subdivision operator $\\$:S_\\ast(X)\\to S_\\ast(X)$ for any space $X$ is natural, so it's enough to say what $\\$\\iota_n$ and $\\$\\iota_0$ is. Define $\\$\\iota_0=\\iota_0$, and define $\\$\\iota_n=b_n\\ast\\$(d\\iota_{n-1})$ where $b_n$ is the barycenter of the $n$-simplex. The standard simplex is star-shaped relative to its barycenter, so by naturality, it suffices to do this for $\\iota_n$. The two key properties are the following.\n\\begin{theorem}\n\\begin{enumerate}\n\\item $\\$$ is a chain map.\n\\item There is a chain homotopy $T:\\$\\sim 1$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nLet's try to prove that it's a chain map. We'll use induction on $n$. It's enough to show that $d\\$\\iota_n=\\$ d\\iota_n$, because:$$d\\$\\sigma=d\\$\\sigma_\\ast\\iota_n=\\sigma_\\ast d\\$\\iota_n=\\sigma_\\ast \\$d\\iota_n=\\$ d\\sigma_\\ast\\iota_n=\\$ d\\sigma$$\nWe declared $d\\$\\iota_0=d\\iota_0=0$. But also $\\$d\\iota_0=\\$0=0$, so this works.\n\nFor $n\\geq 1$, we want to compute $d\\$\\iota_n$. This is:\n\\begin{align*}\nd\\$\\iota_n & =d(b_n\\ast \\$ d\\iota_n) & \\\\\n & = (1-\\eta_b\\epsilon-b_n\\ast d)(\\$ d\\iota_n) & \\text{by $(\\ast)$}\n\\end{align*}\nWhat happens when $n=1$? Well:\n$$\\eta_b\\epsilon\\$d\\iota_1 = \\eta_b\\epsilon \\$(c^0_1 - c^0_0)=\\eta_b\\epsilon(c^0_1 - c^0-0)=0$$\nBecause $\\epsilon$ takes sums of coefficients, which is $1+(-1)=0$. Let's continue.\n\\begin{align*}\nd\\$\\iota_n & = ... & \\\\\n & = \\$d\\iota_n - b_n\\ast d\\$ d\\iota_n & \\\\\n & = \\$d\\iota_n - b_n\\$d^2\\iota_n &\\\\\n & = \\$d\\iota_n & \\text{because $d^2=0$}\n\\end{align*}\nSo we're done.\n\nTo define the chain homotopy $T$, we'll just write down a formula and not justify it. We just need to define $T\\iota_n$ by naturality. So define:\n\\begin{equation*}\nT\\iota_n = \\begin{cases}\n0 & n=0\\\\\nb_{n}\\ast(\\$\\iota_n - \\iota_n- Td\\iota_n)\\in S_{n+1}(\\Delta^n) & n>0\n\\end{cases}\n\\end{equation*}\nThis is because $T:S_n(X)\\to S_{n+1}(X)$ such that $dT+Td=\\$-1$. I'm confused about this, so help me out. Hmm. The term $\\$\\iota_n - \\iota_n-Td\\iota_n$ is an $n$-chain. We're going to do this by induction. Again, we need to check only on the universal case.\n\nWhen $n=0$, $dT\\iota_0 + Td\\iota_0 = 0+0 = 0 = \\$\\iota_0 - \\iota_0$ because $\\$\\iota_0=\\iota_0$. Now let's induct. For $n\\geq 1$, let's start by computing $dT\\iota_n$. 
This is:\n\\begin{align*}\ndT\\iota_n & = d(b_n\\ast(\\$\\iota_n - \\iota_n - Td\\iota_n)) & \\\\\n& = (1-b_n\\ast d)(\\$\\iota_n - \\iota_n - Td\\iota_n) & \\text{by $(\\ast)$}\\\\\n & = \\$\\iota_n-\\iota_n-Td\\iota_n-b_n\\ast (d\\$\\iota_n - d\\iota_n - dTd\\iota_n)\n\\end{align*}\nWe can ignore the $\\eta_b\\epsilon$ part because we're in dimension $\\geq 1$. All we want now is that $b_n\\ast(d\\$\\iota_n - d\\iota_n - dTd\\iota_n)=0$. We can do this via induction, because $Td\\iota_n$ is in dimension $n$:\n\\begin{align*}\ndTd\\iota_n & = -Td(d\\iota_n)+\\$ d\\iota_n - d\\iota_n\\\\\n& = \\$d\\iota_n - d\\iota_n\\\\\n& = d\\$\\iota_n - d\\iota_n\n\\end{align*}\nThis means that $d\\$\\iota_n-d\\iota_n - dTd\\iota_n=0$, so we're done.\n\\end{proof}\n\\begin{corollary}\n$\\$^k\\sim 1:S_\\ast(X)\\to S_\\ast(X)$. I.e., we're iterating subdivision. We want $T_k$ such that $dT_k+T_kd=\\$^k-1$.\n\\end{corollary}\n\\begin{proof}\n$dT+Td=\\$-1$. Let's apply $\\$$ to this. We get $\\$dT+\\$Td=\\$^2-\\$$. Sum up these two things, so we get $dT+Td+\\$dT+\\$Td = \\$^2-1$. But now, $\\$d=d\\$$, so the left hand side is $dT+d\\$T + Td+\\$Td = d(\\$+1)T + (\\$+1)Td$, i.e., $d(\\$+1)T+(\\$+1)Td=\\$^2-1$. So define $T_2=(\\$+1)T$, and continuing, you see that $T_k=(\\$^{k-1}+\\$^{k-2}+\\cdots+1)T=\\left(\\sum^{k-1}_{i=0}\\$^i\\right)T$.\n\\end{proof}\n\\begin{prop}[Almost completes the proof of locality]\nLet $\\sca$ be a cover of $X$. For every chain $c\\in S_n(X)$, there is a $k\\geq 0$ such that $\\$^kc\\in S^\\sca_n(X)$. This is the geometric thing we have to prove.\n\\end{prop}\n\\begin{proof}\nWe may assume that $c = \\sigma:\\Delta^n\\to X$ is a single simplex; this makes sense because you can take the maximum of the $k$'s over the terms of the sum. A great trick is the following: define an open cover $\\mathscr{U}$ of $\\Delta^n$ defined by $\\mathscr{U}:=\\{\\sigma^{-1}(\\mathrm{Int}(A))|A\\in\\sca\\}$. This is a cover, a basic result from topology. Then we use the Lebesgue covering lemma:\n\\begin{lemma}[Lebesgue covering lemma]\nWe'll pretend this is part of 18.901. Let $M$ be a compact metric space (e.g. $\\Delta^n$), and let $\\mathscr{U}$ be an open cover. Then there is $\\epsilon> 0$ such that for all $x\\in M$, there is $B_\\epsilon(x)\\subseteq U$ for some $U\\in \\mathscr{U}$.\n\\end{lemma}\n\\begin{proof}\nOmitted, may be in 18.901. Or even 18.100B.\n\\end{proof}\nLet's apply this to the cover we constructed. What we want is that for all $\\epsilon>0$, there is a $k$ such that the diameter of the simplices in $\\$^k\\iota_n$ is less than $\\epsilon$. Let's do that.\n\\begin{question}\nHow small are these subdivided simplices in $\\$^k\\iota_n$?\n\\end{question}\nFor example, suppose $\\sigma:\\Delta^n\\to\\Delta^n$ is something in the subdivision. These are all affine simplices, i.e., it's determined by where the simplices of $\\Delta^n$ go to in $\\sigma$. We can write $\\sigma=\\langle v_0,\\cdots,v_n\\rangle$. It could be in $\\mathbf{R}^N$ if you wanted; maybe it's easier to think of it this way. The barycenter is $\\frac{\\sum_{i=0}^nv_i}{n+1}$. 
Let's compute:\n\\begin{align*}\n|b-v_i| & =\\left|\\frac{v_0+\\cdots+v_n-(n+1)v_i}{n+1}\\right|\\\\\n& =\\left|\\frac{(v_0-v_i)+(v_1-v_i)+\\cdots+(v_n-v_i)}{n+1}\\right|\\\\\n & \\leq \\frac{n}{n+1}\\max_{i,j}|v_i-v_j|\\\\\n & = \\frac{n}{n+1}\\mathrm{diam}(\\img\\sigma)\n\\end{align*}\nThe following lemma completes the proof: iterating it gives $\\mathrm{diam}(\\tau)\\leq\\left(\\frac{n}{n+1}\\right)^k\\mathrm{diam}(\\sigma)$ for simplices $\\tau$ in $\\$^k\\sigma$, and there's always a $k$ such that $\\left(\\frac{n}{n+1}\\right)^k\\mathrm{diam}(\\sigma)<\\epsilon$.\n\\begin{lemma}\nLet $\\tau$ be a simplex in $\\$\\sigma$ where $\\sigma$ is an affine $n$-simplex. Then $\\mathrm{diam}(\\tau)\\leq \\frac{n}{n+1}\\mathrm{diam}(\\sigma)$.\n\\end{lemma}\n\\begin{proof}\nLet's write $\\tau=\\langle w_0,\\cdots,w_n\\rangle$ with $w_0=b$ the barycenter of $\\sigma$, and $\\sigma=\\langle v_0,\\cdots,v_n\\rangle$. We saw:\n\\begin{align*}\n|b-w_i| & \\leq\\max_i|b-v_i|\\\\\n & \\leq \\frac{n}{n+1}\\mathrm{diam}(\\sigma)\n\\end{align*}\nFor the other cases, well, we use induction on $n$, since the remaining vertices span a simplex of the subdivision of a face of $\\sigma$:\n\\begin{align*}\n|w_i-w_j|& \\leq \\mathrm{diam}(\\text{simplex in }\\$d\\sigma)\\\\\n& \\leq \\frac{n-1}{n}\\mathrm{diam}(d\\sigma)\\\\\n& \\leq \\frac{n}{n+1}\\mathrm{diam}(\\sigma)\n\\end{align*}\nWe're almost there. We'll finish the proof of locality on Friday.\n\\end{proof}\n\\end{proof}\n", "meta": {"hexsha": "85c90c0e28d516c29303e5ead92425ee4ac9feaa", "size": 6981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old-905/lec-13-locality-end-ish.tex", "max_stars_repo_name": "ichung/algtop-notes", "max_stars_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-04-26T15:00:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-27T22:47:06.000Z", "max_issues_repo_path": "old-905/lec-13-locality-end-ish.tex", "max_issues_repo_name": "ichung/algtop-notes", "max_issues_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2018-03-13T17:54:37.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-13T17:59:46.000Z", "max_forks_repo_path": "old-905/lec-13-locality-end-ish.tex", "max_forks_repo_name": "ichung/algtop-notes", "max_forks_repo_head_hexsha": "3f5d3189e2082716a69fccc1711d02ed848552d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-21T18:15:11.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-13T17:38:04.000Z", "avg_line_length": 67.125, "max_line_length": 565, "alphanum_fraction": 0.6602205988, "num_tokens": 2662, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5629545767187616}}
{"text": "\\section{Evaluation}\n\\label{sec:Evaluation}\nThis section contains the evaluation and the graphical representation of the measurements. The following subsections contain the measurement results of the individual tasks listed in chapter \\ref{subsec:Measurements}. The aim is to determine the sound velocity and the attenuation coefficient of different materials. The exact descriptions of the tasks can be found in the assignment \\cite{ultrasound}.\n\n%-----------------------------------------------------------------------------------\n\\subsection{Wave Length Calculation}\n\\label{subsec:wave_length_calculation}\nThe following table \\ref{tab:wave_length_calculation} shows the calculated wave length $\\lambda$ of the ultrasound wave in air, distilled water and aluminium at 1 kHz, 1 MHz and 5 MHz. All literature longitudinal sound velocities are at 20 \\textdegree C room temperature.\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r||c|c|c|c}\n\t\t& $\\boldsymbol{c_L}$ ($\\frac{\\si{m}}{\\si{s}}$) \\cite{kohlrausch} & $\\boldsymbol{\\lambda}$ \\textbf{@ 1 kHz} (m) & $\\boldsymbol{\\lambda}$ \\textbf{@ 1 MHz} (m) & $\\boldsymbol{\\lambda}$ \\textbf{@ 5 MHz} (m) \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{Air} & 344 & 0.34 & $344\\cdot 10^{-6}$ & $68.8\\cdot 10^{-6}$ \\\\\n\t\t\\textbf{Distilled Water} & 1483 & 1.48 & $1.48\\cdot 10^{-3}$ & $297\\cdot 10^{-6}$ \\\\\n\t\t\\textbf{Aluminium} & 6400 & 6.40 & $6.40\\cdot 10^{-3}$ & $1.28\\cdot 10^{-3}$ \\\\\n\t\\end{tabular}\n\t\\caption{Calculated wave lengths $\\lambda$ for different solids and liquids at the audible frequency 1 kHz and the two ultrasound frequencies 1 MHz and 5 MHz. The calculations were carried out in the software Microsoft Office Excel.}\n\t\\label{tab:wave_length_calculation}\n\\end{table}\n\n%-----------------------------------------------------------------------------------\n\\subsection{Longitudinal Ultrasound Velocities in Solids}\n\\label{subsec:Longitudinal_Ultrasound_Velocities_in_Solids}\nTable \\ref{tab:Longitudinal_Ultrasound_Velocities_in_Solids} shows the calculated longitudinal ultrasound velocities from the measurements. These results were obtained using linear regression. This was done in the software QtiPlot. To use linear regression, equation \\ref{eq:sound_velocity} has to be rearranged, so that it relates the time of flight with the distance and an offset $b$ has to be added:\n\n\\begin{equation}\n\\Delta t = \\frac{2x}{c_L} + b\n\\label{eq:sound_velocity_linear_regression}\n\\end{equation}\n\nThis equation \\ref{eq:sound_velocity_linear_regression} can now be used in QtiPlot.\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r||c|c|c}\n\t\t& $\\boldsymbol{c_L}$ \\textbf{@ 1 MHz} ($\\frac{\\si{m}}{\\si{s}}$) & $\\boldsymbol{c_L}$ \\textbf{@ 5 MHz} ($\\frac{\\si{m}}{\\si{s}}$) & \\textbf{Literature Value} ($\\frac{\\si{m}}{\\si{s}}$) \\cite{kohlrausch} \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{PMMA} & $(2751\\pm 5)$ & $(2721\\pm 10)$ & $\\approx 2700$ \\\\\n\t\t\\textbf{Aluminium} & - & $(6387\\pm 30)$ & $\\approx 6400$ \\\\\n\t\t\\textbf{Copper} & - & $(4680\\pm 17)$ & $\\approx 4700$ \\\\\n\t\t\\textbf{Brass} & - & $(4627\\pm 46)$ & $\\approx 4700$ \\\\\n\t\\end{tabular}\n\t\\caption{Measured longitudinal ultrasound velocities $c_L$ in different solids. Furthermore, the literature value of $c_L$ is stated. 
The literature values are from a physics book from 1968.}\n\t\\label{tab:Longitudinal_Ultrasound_Velocities_in_Solids}\n\\end{table}\n\nThe longitudinal ultrasound velocities of the metals could not be measured with a frequency of 1 MHz. Figure \\ref{fig:ghost} shows the measured \\flqq ghost signals\\frqq\\ (signals which are overlapping). Due to this, the longitudinal ultrasound velocities of the metals were only measured with a frequency of 5 MHz, which worked without any issues.\n\nFigures \\ref{fig:Longitudinal_Ultrasound_Velocity_of_PMMA} and \\ref{fig:Longitudinal_Ultrasound_Velocity_of_Aluminium} show the plots obtained from QtiPlot for PMMA and aluminium respectively.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.94]{Longitudinal_Ultrasound_Velocity_of_PMMA}\n\t\\caption{This plot shows the time of flight $\\Delta t$ in relation to the distance $x$ for PMMA. Linear regression can then be used on these discrete values to obtain the longitudinal ultrasound velocity of PMMA. The fitted curve is shown in red and the discrete measurements are shown as black dots.}\n\t\\label{fig:Longitudinal_Ultrasound_Velocity_of_PMMA}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.94]{Longitudinal_Ultrasound_Velocity_of_Aluminium}\n\t\\caption{Linear regression is used here as well, this time to obtain the longitudinal ultrasound velocity of aluminium. The measurement had to be done at a frequency of 5 MHz because of ghost signals (see figure \\ref{fig:ghost}). The fit is shown in red and the measurements as black dots.}\n\t\\label{fig:Longitudinal_Ultrasound_Velocity_of_Aluminium}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.5]{ghost}\n\t\\caption{When metals are measured with a transducer that has an oscillation frequency of 1~MHz, \\flqq ghost signals\\frqq\\ occur. This is due to the much higher longitudinal ultrasound velocity of the metals, which leads to overlapping signals, also known as \\flqq ghost signals\\frqq. To get rid of this effect, the oscillation frequency can be increased.}\n\t\\label{fig:ghost}\n\\end{figure}\n\n\\newpage\n%-----------------------------------------------------------------------------------\n\\subsection{Attenuation Coefficient of PMMA}\n\\label{subsec:Attenuation_Coefficient_of_PMMA}\nThe results for the attenuation coefficient $\\mu$ of PMMA are shown in table \\ref{tab:Attenuation_Coefficient_of_PMMA} below. The attenuation coefficient $\\mu$ was measured with an oscillation frequency of 1 MHz and 5 MHz.\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r||c|c}\n\t\t& $\\boldsymbol{\\mu}$ \\textbf{@ 1 MHz} ($m^{-1}$) & $\\boldsymbol{\\mu}$ \\textbf{@ 5 MHz} ($m^{-1}$) \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{PMMA} & $(16\\pm 1)$ & $(43\\pm 3)$ \\\\\n\t\\end{tabular}\n\t\\caption{Results of the non-linear fit carried out in the software QtiPlot. The unit of the attenuation coefficient $\\mu$ has to be m$^{-1}$ due to the fact that the exponent of $e$ has to be dimensionless and the distance has the unit m: $\\si{m}^{-1}\\cdot \\si{m} = 1$.}\n\t\\label{tab:Attenuation_Coefficient_of_PMMA}\n\\end{table}\n\nTo obtain these results, non-linear regression was used. The necessary exponential relationship of the absorption is shown in equation \\ref{eq:absorption}. 
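For reference, such a non-linear fit can also be reproduced outside of QtiPlot. The following Python sketch mirrors the procedure; the amplitude and distance values are placeholders for illustration, not our measurement data:\n\\begin{verbatim}\n# Fit A(x) = A0*exp(-mu*x) to amplitude-vs-distance data.\n# Placeholder values, not the actual measurements.\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef absorption(x, A0, mu):\n    return A0 * np.exp(-mu * x)\n\nx = np.array([0.02, 0.04, 0.06, 0.08, 0.10])  # distance in m\nA = np.array([0.90, 0.66, 0.47, 0.35, 0.25])  # amplitude in V\n\npopt, pcov = curve_fit(absorption, x, A, p0=(1.0, 10.0))\nmu, mu_err = popt[1], np.sqrt(np.diag(pcov))[1]\nprint('mu =', round(mu, 1), '1/m +-', round(mu_err, 1), '1/m')\n\\end{verbatim}\n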
This relationship was used in QtiPlot to create the fit shown in figure \\ref{fig:Attenuation_Coefficient_of_PMMA}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Attenuation_Coefficient_of_PMMA}\n\t\\caption{Semi-log plot of the amplitude $A$ as a function of the distance $x$, which shows the attenuation of the signal in PMMA. The black and red lines represent the exponential fits of the attenuation. The fits look like linear functions because of the logarithmic y-axis. Furthermore, the attenuation at a lower frequency is less than at a higher frequency, just as expected (see section \\ref{subsec:Absorption}). The last red measurement value is noticeably off. This is probably due to the small amplitude, which leads to measurement inaccuracies.}\n\t\\label{fig:Attenuation_Coefficient_of_PMMA}\n\\end{figure}\n\n\\newpage\n%-----------------------------------------------------------------------------------\n\\subsection{Longitudinal Ultrasound Velocities in Liquids}\n\\label{subsec:Longitudinal_Ultrasound_Velocities_in_Liquids}\nTable \\ref{tab:Longitudinal_Ultrasound_Velocities_in_Liquids} shows the results of the fits obtained from QtiPlot. The longitudinal ultrasound velocities $c_L$ differ quite a bit from each other. In other words, the temperature of a liquid such as water is not negligible when doing calculations with sound waves.\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r||c|c}\n\t\t& $\\boldsymbol{c_L}$ \\textbf{@ 5 MHz} ($\\frac{\\si{m}}{\\si{s}}$) & \\textbf{Literature Value} ($\\frac{\\si{m}}{\\si{s}}$) \\cite{kohlrausch} \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{Water 21 \\textdegree C} & $(1488\\pm 3)$ & $\\approx 1483$ \\\\\n\t\t\\textbf{Water 49 \\textdegree C} & $(1538\\pm 5)$ & $\\approx 1540$ \\\\\n\t\t\\textbf{Saltwater 40 \\textdegree C} & $(1612\\pm 6)$ & $\\approx 1620$ \\\\\n\t\\end{tabular}\n\t\\caption{Measured longitudinal ultrasound velocities $c_L$ in water and saltwater at different temperatures. The saltwater has a salt concentration of approx. 10 \\% (90 g salt / 0.901 L). Furthermore, the literature value of $c_L$ is stated.}\n\t\\label{tab:Longitudinal_Ultrasound_Velocities_in_Liquids}\n\\end{table}\n\nFigure \\ref{fig:Longitudinal_Ultrasound_Velocity_of_Water} shows an example plot from QtiPlot for distilled water with a temperature of 21 \\textdegree C. Only an oscillation frequency of 5 MHz was used to obtain all measurements.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Longitudinal_Ultrasound_Velocity_of_Water}\n\t\\caption{This plot shows the linear fit and the discrete measurement values for distilled water with a temperature of 21 \\textdegree C. The time of flight was measured as a function of the travelled distance. The measurement points were not taken at evenly spaced distances (the spacing $\\Delta x$ between two measured values was not always the same). The statistical error is quite low, which indicates that the measured values are mutually very consistent.}\n\t\\label{fig:Longitudinal_Ultrasound_Velocity_of_Water}\n\\end{figure}\n\n\\newpage\n%-----------------------------------------------------------------------------------\n\\subsection{Transversal Ultrasound Velocity in PMMA}\n\\label{subsec:Transversal_Ultrasound_Velocity_in_PMMA}\nA special transversal transducer was used to measure the transverse time of flight of a sound wave in PMMA. An oscillation frequency $f$ of 1 MHz was used to obtain the measurement results. 
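The velocity extraction works exactly as in the longitudinal case. A minimal Python sketch of the fit (with placeholder time-of-flight values, not our measurement data, and including the additional $(0,0)$ point discussed below) could look like this:\n\\begin{verbatim}\n# Fit Delta t = 2x/c_T + b and recover c_T from the slope.\n# Placeholder values, not the actual measurements.\nimport numpy as np\n\nx  = np.array([0.0, 0.040, 0.080])       # distance in m\ndt = np.array([0.0, 56.9e-6, 113.9e-6])  # time of flight in s\n\nslope, offset = np.polyfit(x, dt, 1)     # linear regression\nc_T = 2.0 / slope\nprint('c_T =', round(c_T), 'm/s')\n\\end{verbatim}\n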
Table \\ref{tab:Transversal_Ultrasound_Velocity_in_PMMA} shows the calculated transversal ultrasound velocity $c_T$ for PMMA.\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r||c|c}\n\t\t& $\\boldsymbol{c_T}$ \\textbf{@ 1 MHz} ($\\frac{\\si{m}}{\\si{s}}$) & \\textbf{Literature Value} ($\\frac{\\si{m}}{\\si{s}}$) \\cite{kohlrausch} \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{PMMA (Transversal)} & $(1405\\pm 14)$ & $\\approx 1350$ \\\\\n\t\\end{tabular}\n\t\\caption{To measure the transversal time of flight, a special kind of ultrasound shear gel had to be used in conjunction with a special transducer. The transducer was manufactured to be used at an oscillation frequency of 1 MHz.}\n\t\\label{tab:Transversal_Ultrasound_Velocity_in_PMMA}\n\\end{table}\n\nOnly two measurements were obtained, but it is possible to use $0$ as another measurement value. This is due to the fact that at a distance of $0$ the time of flight will always be $0$ as well. Figure \\ref{fig:Transversal_Ultrasound_Velocity_of_PMMA} shows the plot obtained from QtiPlot.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Transversal_Ultrasound_Velocity_of_PMMA}\n\t\\caption{Measurement values of the transversal time of flight $\\Delta t$ in PMMA as a function of the distance $x$. With only two measurement values, QtiPlot obviously cannot calculate the statistical error. Due to this, an additional measurement value at a distance of $0$ m was set to $0$ s. Furthermore, linear regression was used to fit the red linear function.}\n\t\\label{fig:Transversal_Ultrasound_Velocity_of_PMMA}\n\\end{figure}\n", "meta": {"hexsha": "155ad425059d5f32e761808a53df8a604d5e9d65", "size": 11476, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "glaL4_W_12_Ultrasound/sections/evaluation.tex", "max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "glaL4_W_12_Ultrasound/sections/evaluation.tex", "max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glaL4_W_12_Ultrasound/sections/evaluation.tex", "max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.6329113924, "max_line_length": 583, "alphanum_fraction": 0.7351864761, "num_tokens": 3233, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7799928951399098, "lm_q1q2_score": 0.5629545730276245}}
{"text": "\\listfiles\n\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\n\\DeclarePairedDelimiter\\floor{\\lfloor}{\\rfloor}\n\\DeclarePairedDelimiter\\ceil{\\lceil}{\\rceil}\n\\DeclareMathOperator{\\cl}{cl}\n\\DeclareMathOperator{\\spn}{span}\n\\DeclareMathOperator{\\E}{E}\n\\DeclareMathOperator{\\St}{St}\n\\def\\Z{\\mathbb{Z}}\n\\def\\N{\\mathbb{N}}\n\\def\\R{\\mathbb{R}}\n\\def\\Q{\\mathbb{Q}}\n\\def\\K{\\mathbb{K}}\n\\def\\T{\\mathbb{T}}\n\\def\\B{\\mathcal{B}}\n\\def\\XX{\\mathfrak{X}}\n\\def\\YY{\\mathfrak{Y}}\n\\def\\AA{\\mathfrak{A}}\n\\def\\ZZ{\\mathfrak{Z}}\n\\def\\BB{\\mathcal{B}}\n\\def\\UU{\\mathcal{U}}\n\\def\\MM{\\mathcal{M}}\n\\def\\M{\\mathfrak{M}}\n\\def\\l{\\lambda}\n\\def\\L{\\Lambda}\n\\def\\<{\\langle}\n\\def\\>{\\rangle}\n\n\\newcommand{\\todo}[1]{{\\leavevmode\\color{orange}#1}}\n\n\\usepackage[a4paper,margin=1in]{geometry}\n\n\\setlength{\\parindent}{0cm}\n\\setlength{\\parskip}{1em}\n\n\\title{Zero-Knowledge Proofs}\n\\date{}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Honesty in the Digital World}\n\nIn the analogue world, there exists devices called lie detectors. Their efficiency is based on a premise that when a person is lying, they exhibit signs such as sweating and an increased heart rate, which can be detected.\n\nThe field of zero-knowledge proofs is an attempt to build a protocol that behaves like a lie detector, but in the digital realm. Suppose Alice wishes to prove to Bob that she is rich. She could send a bank statement to Bob to prove this; however, by doing so, Bob learns not only the fact that Alice is rich, but other potentially sensitive data such as her income sources and spending habits. A zero-knowledge proof would allow Bob to learn only of the facts that he needs to learn, and nothing else.\n\n\\subsection*{Definitions}\n\nThe founding paper of this field was TBD. They gave three criteria a proof system should satisfy to be a zero-knowledge proof system. We will define a ``proof system\" more carefully later, but for now we can think of it as a system for a \\textit{prover} to convince a \\textit{verifier} of the truth of some \\textit{statement}.\n\n\\begin{enumerate}\n\\item{Completeness.} If the statement is true, then the prover can convince the verifier.\n\\item{Soundness.} If the statement is false (i.e. the prover is ``cheating\"), the prover cannot convince the verifier.\n\\item{Zero-Knowledge.} The verifier, even if cheating, does not learn any information except for the validity of the statement.\n\\end{enumerate}\n\n\\subsection*{Applications}\n\nTBD. digital signature, election fraud, mix-nets, verifiable outsourced computation, ring/group signatures, zerocoin, malicious to honest-but-curious, lie prevention, proof-carrying data\n\n\\section*{Zero-knowledge Proof Systems\u2028}\n\nFirst, we revisit and try to make more precise our definition of a zero-knowledge proof system from section TBD. To do this, we first review one definition of the complexity class $NP$ (nondeterministic polynomial-time); it turns out that the definition of an interactive proof system will be a modification to the proof systems used in $NP$.\n\nTBD.\n\n\\subsection*{A Zero-Knowledge Proof System for Graph Isomorphism}\n\nLet us define a proof system for graph isomorphism. For concreteness, we encode our graphs by labelling the vertices with consecutive natural numbers, and giving the graph as an edgelist. 
Hence, the following two graphs (which happen to be isomorphic) are encoded as TBD.\n\nThe decisional graph isomorphism problem is: given two graphs $G_0$ and $G_1$, determine if $G_0$ and $G_1$ are isomorphic. It is easy to see that this problem is in $NP$: the witness is an isomorphism between the graphs, or more concretely a permutation of the vertices, encoded as a list of $|V|$ integers.\n\nNow let us describe an interactive proof system for this problem. The prover is trying to prove that $G_0$ and $G_1$ are isomorphic, and furthermore, that he possesses an isomorphism $w : G_0 \\to G_1$. The interactive proof consists of $r$ \\textit{rounds}, and each round consists of three \\textit{steps}. In the first step, the prover randomly permutes the vertices of $G_0$ to obtain a graph $H$, and sends $H$ over. In the second step, the verifier sends a single bit $b \\in \\{0, 1\\}$. In the third step, the prover sends an isomorphism between $G_b$ and $H$.\n\nThe verifier rejects if, at any round, he fails to verify the isomorphism, and accepts if after $r$ rounds he has not rejected.\n\n\\textbf{Exercise}: show that this proof system is complete, sound and zero-knowledge.\n\n\\textbf{Completeness}: We will show that in the case that the statement is true (the prover really possesses a witness $w$), he will convince an honest verifier. We can show this for every round. In the first step, let the isomorphism between $G_0$ and $H$ be $h$. Suppose the verifier sends $b = 0$; then the prover can simply send $h$. Otherwise, if the verifier sends $b = 1$, then the prover can send $h \\circ w^{-1}$, which is an isomorphism between $G_1$ and $H$.\n\n\\textbf{Soundness}: Suppose the graphs are not isomorphic, and a cheating prover is trying to convince an honest verifier. The prover sends a graph $H$ which can be isomorphic to $G_0$ or $G_1$, but not both. Say $H$ is not isomorphic to $G_c$ for some $c \\in \\{0, 1\\}$. With probability at least $\\frac{1}{2}$ the verifier will choose $b = c$, and then the prover cannot possibly produce an isomorphism between $G_b$ and $H$ in the last step.\n\nWith $r$ rounds, the chance that a cheating prover is not caught is at most $2^{-r}$.\n\n\\textbf{Zero-Knowledge}: We assume an arbitrary verifier, who might be crafting his queries to try to learn something about $w$, interacts with an honest prover. It is clear that in the case that the verifier is honest, he learns nothing about $w$, since at each round the verifier learns either\n\n\\begin{enumerate}\n\\item{A random isomorphism $G_0 \\sim H$, or}\n\\item{A random isomorphism $G_0 \\sim H$ composed with $w^{-1}$, which is itself just a random isomorphism $G_1 \\sim H$.}\n\\end{enumerate}\n\nThis is equivalent to sampling $r$ independent random isomorphisms out of $G_0$ (respectively $G_1$); since this distribution does not depend on $w$, the protocol is zero-knowledge.\n\nHowever, this proof is not sufficient; \\todo{there are protocols where an honest verifier learns nothing, but a dishonest one does! TBD.} Reference: Theorem 2 of GMW.\n\n\\subsection*{Removing Interactivity}\n\nThe proof system presented above is interactive, that is, the messages that a verifier and prover send at various steps depend on previous messages. It is also \\textbf{public-coin}, that is, an honest verifier sends uniformly random challenges. \\todo{Example of an interactive, non-public-coin proof system?}\n\nIn a public-coin protocol, if there is a randomness source (e.g. 
the NIST random beacon) trusted by both parties, then the verifier can use that instead of generating his own randomness. If the challenges are big enough, the Fiat-Shamir heuristic allows one to derive a non-interactive proof system from a public-coin interactive proof system under some conditions. The heuristic is to use a collision-resistant hash function to deterministically generate the random challenges. The prover computes a transcript where the verifier's challenges are replaced by outputs of the hash function, applied to the transcript up to that point.\n\n\\subsection*{Statements}\n\nThe protocol above allows a prover to convince a verifier about statements of the form ``this graph is isomorphic to that one\" in zero-knowledge. Now, we try to formalize the notion of a ``statement\" so we can generalize it.\n\nTBD.\n\n\\subsection*{Arguments vs Proofs}\n\n\n\\subsection*{AC-SAT}\n\n\\section*{Efficient ZKPs based on the Discrete Log Problem}\n\n\\section*{Linear-Time ZKPs for AC-SAT}\n\n\\section*{Pairing-Based SNARKs}\n\n\n\\end{document}\n", "meta": {"hexsha": "42bebbbf572a94e158751d0d60f8ebc5af558933", "size": 7765, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "zkp-workshop-master/notes.tex", "max_stars_repo_name": "counterfactual/research", "max_stars_repo_head_hexsha": "f6ed10464725bc3b076d22d4676d909943e69f5d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2018-08-09T01:06:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-02T05:51:12.000Z", "max_issues_repo_path": "zkp-workshop-master/notes.tex", "max_issues_repo_name": "counterfactual/research", "max_issues_repo_head_hexsha": "f6ed10464725bc3b076d22d4676d909943e69f5d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-03-09T07:54:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-09T03:22:27.000Z", "max_forks_repo_path": "zkp-workshop-master/notes.tex", "max_forks_repo_name": "counterfactual/research", "max_forks_repo_head_hexsha": "f6ed10464725bc3b076d22d4676d909943e69f5d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.1937984496, "max_line_length": 592, "alphanum_fraction": 0.7657437218, "num_tokens": 2042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928951399098, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5629545730276245}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\nThe research consists of 3 objectives:\n\\begin{itemize}\n\t\\item \\textit{Optimization:} Determine the parcel's optimal beacon range.\n\t\\item \\textit{Discovery:} Determine whether exchanging parcels improves the solution.\n\t\\item \\textit{Discovery:} Determine how the delivery strategy improves the solution.\n\\end{itemize}\nTo determine whether the solution is improved, we make use of the cost function used by Gendreau et al. This function rates the solution based on a trade-off between operational costs and customer satisfaction. \\cite{gendreau2006neighborhood}\n$$\nC = \\sum_{k \\in M}{T_k} + \\alpha \\sum_{v \\in V} \\text{max} \\left\\{ 0, t_v - l_v \\right\\} + \\beta \\sum_{k \\in M} \\text{max} \\left\\{0, \\bar{t_k} - l_0 \\right\\}\n$$\nWith $T_k$ the total travel time on route $R_k$ and $\\alpha$I and $\\beta$ weighting parameters. In order of appearance the terms symbolize the following values:\n\\begin{enumerate}\n\t\\item The total travel time on route\n\t\\item The sum of lateness over all pick-up and delivery locations\n\t\\item The sum of overtime over all vehicles\n\\end{enumerate}\n\n\\subsubsection{Independant Variables}\n\\begin{enumerate}\n\t\\item $c$ radius of circle in which contact with the parcel can be established.\n\t\\item Making use of exchanging parcels between agents or not.\n\t\\item The used delivery strategy.\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "13a05cfae052507f35ea2ad27c3d24d6cea54606", "size": 1392, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/subfiles/objectives.tex", "max_stars_repo_name": "JDevlieghere/MAS", "max_stars_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-04-27T13:48:02.000Z", "max_stars_repo_stars_event_max_datetime": "2016-04-27T13:48:02.000Z", "max_issues_repo_path": "report/subfiles/objectives.tex", "max_issues_repo_name": "JDevlieghere/MAS", "max_issues_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/subfiles/objectives.tex", "max_forks_repo_name": "JDevlieghere/MAS", "max_forks_repo_head_hexsha": "b00d2e5fa487b74a31f4092086d94ed01e348822", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.7142857143, "max_line_length": 242, "alphanum_fraction": 0.7593390805, "num_tokens": 375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.562954567381936}}
{"text": "\\section{Bias of OLS estimators}\n\n", "meta": {"hexsha": "64921b8469fb3fea38c59e509b7c687adef00451", "size": 34, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/olsInference/01-00-bias.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/olsInference/01-00-bias.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/olsInference/01-00-bias.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 11.3333333333, "max_line_length": 32, "alphanum_fraction": 0.7647058824, "num_tokens": 10, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7217431943271998, "lm_q1q2_score": 0.562954567381936}}
{"text": "\\chapter{Households}\n\\index{Households%\n@\\emph{Households}}%\n\n  \\section{Demographics}\n    A measure $\\omega_{1,t}$ of individuals with heterogeneous working ability $e \\in\\mathcal{E}\\subset\\mathbb{R}_{++}$ is born in each period $t$ and live for $E+S$ periods, with $S\\geq 4$.\\footnote{Theoretically, the model exposition of the model works without loss of generality for $S\\geq 3$. However, because we are calibrating the ages outside of the economy to be one-fourth of $S$ (e.g., ages 21 to 100 in the economy, and ages 1 to 20 outside of the economy), we need $S$ to be at least 4.} The population of age-$s$ individuals in any period $t$ is $\\omega_{s,t}$. Households are termed ``youth'', and do not participate in market activity, during ages $1\\leq s\\leq E$. The households enter the workforce and economy in period $E+1$ and remain in the workforce until they unexpectedly die or live until age $s=E+S$.\\footnote{We model the population with households age $s\\leq E$ outside of the workforce and economy in order most closely match the empirical population dynamics.} The population of agents of each age in each period, $\\omega_{s,t}$, evolves according to the following function,\n    \\begin{equation}\\label{EqPopLawofmotion}\n      \\begin{split}\n        \\omega_{1,t+1} &= \\sum_{s=1}^{E+S} f_s\\omega_{s,t}\\quad\\forall t \\\\\n        \\omega_{s+1,t+1} &= (1 + i_s - \\rho_s)\\omega_{s,t}\\quad\\forall t\\quad\\text{and}\\quad 1\\leq s \\leq E+S-1\n      \\end{split}\n    \\end{equation}\n    where $f_s\\geq 0$ is an age-specific fertility rate, $i_s$ is an age-specific immigration rate, $\\rho_s$ is an age specific mortality hazard rate,\\footnote{The parameter $\\rho_s$ is the probability that a household of age $s$ dies before age $s+1$.} and $1+i_s-\\rho_s$ is constrained to be nonnegative. The total population in the economy $N_t$ at any period is simply the sum of individuals in the economy, the population growth rate in any period $t$ from the previous period $t-1$ is $g_{n,t}$, $\\tilde{N}_t$ is the working age population, and $\\tilde{g}_{n,t}$ is the working age population growth rate in any period $t$ from the previous period $t-1$.\n    \\begin{equation}\\label{EqPopDef}\n      N_t\\equiv\\sum_{s=1}^{E+S} \\omega_{s,t} \\quad\\forall t\n    \\end{equation}\n    \\begin{equation}\\label{EqPopGrowth}\n      g_{n,t+1} \\equiv \\frac{N_{t+1}}{N_t} - 1 \\quad\\forall t\n    \\end{equation}\n    \\begin{equation}\\label{EqPopWkDef}\n      \\tilde{N}_t\\equiv\\sum_{s=E+1}^{E+S} \\omega_{s,t} \\quad\\forall t\n    \\end{equation}\n    \\begin{equation}\\label{EqPopWkGrowth}\n      \\tilde{g}_{n,t+1} \\equiv \\frac{\\tilde{N}_{t+1}}{\\tilde{N}_t} - 1 \\quad\\forall t\n    \\end{equation}\n\n\n  \\section{Households}\n  Consumer's are forward-looking, intertemporal optimizers.  Their objective is the maximize the expected, discounted value of lifetime utility.  Expectations are taken over mortality risk, the only source of uncertainty in the model.  Individuals are heterogenous with repeat to age and lifetime income group.  We define the expected, discounted lifetime utility at time $t$ for an individual in lifetime income group $j$ and age $s$ to be $U_{j,s,t}$.  
We assume that utility is additively separable across periods and thus write expected, discounted lifetime utility as:\n    \\begin{equation}\\label{EqUtilMax}\n      \\begin{split}\n        &U_{j,s,t} = \\sum_{u=0}^{E+S-s}\\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right] u\\left(c_{j,s+u,t+u},n_{j,s+u,t+u},b_{j,s+u+1,t+u+1}\\right) \\\\\n        &\\text{where}\\quad \\rho_{s-1}=0 \\\\\n        &\\text{and} \\quad u\\left(c_{j,s,t},n_{j,s,t},b_{j,s+1,t+1}\\right) = \\frac{\\left(c_{j,s,t}\\right)^{1-\\sigma} - 1}{1-\\sigma} ... \\\\\n        &\\qquad\\qquad + e^{g_y t(1-\\sigma)}\\chi^n_s\\left(b\\left[1 - \\left(\\frac{n_{j,s,t}}{\\tilde{l}}\\right)^\\upsilon\\right]^\\frac{1}{\\upsilon} + k\\right) + \\rho_s\\chi^b\\frac{\\left(b_{j,s+1,t+1}\\right)^{1-\\sigma} - 1}{1-\\sigma} \\\\\n        &\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\forall j,t\\quad\\text{and}\\:E+1\\leq s\\leq E+S\n      \\end{split}\n    \\end{equation}\n\n\\noindent\\noindent The parameter $\\beta\\in(0,1)$ represents the individual's rate of time preference.  The quantities $c_{j,s,t}$, $n_{j,s,t},$ and $b_{j,s,t}$ are total consumption of a composite consumption good, labor supply, and asset holdings, respectively.  The parameter $\\sigma \\geq 1$ is the coefficient of relative risk aversion, $\\upsilon$ is a measure of the elasticity of labor supply, and $\\tilde{l}$ is the total time endowment of the individual.  The utility weight for the disutility of labor is given by the age-dependent parameters $\\chi^{n}_{s}$.  The parameter $g_y$ is a constant growth rate of labor-augmenting technological progress, which we explain in more detail in the firm's problem.\\footnote{The term with the growth rate $e^{g_y t(1-\\sigma)}$ must be included in the period utility function because consumption and bequests will be growing at rate $g_y$ and this term stationarizes the individual Euler equation by making the marginal disutility of labor grow at the same rate as the marginal benefits of consumption and bequests.  This is the same balanced growth technique as that used in \\citet{MertensRavn:2011}.}  The disutility of labor term in the utility function looks nonstandard, but is simply the upper quadrant of an ellipse that closely approximates the standard constant relative risk aversion utility of leisure functional form.\\footnote{Appendix \\ref{AppEllipseUtil} describes how the elliptical function closely matches the more standard utility of leisure of the form $\\frac{(\\tilde{l}-n_{j,s,t})^{1-\\eta} - 1}{1-\\eta}$. The parameters $b$ and $k$ are the scale and shift parameters describing the elliptical form.  This elliptical utility function forces an interior solution that automatically respects both the upper and lower bound of labor supply, which greatly simplifies the computation of equilibrium.  
For a more in-depth discussion, see \\citet{EvanPhillips:2015}.}\n\n \\begin{figure}[htb]\\centering \\captionsetup{width=4.0in}\n\\caption{\\label{fig:hh_tree}\\textbf{Summary of the Individual Problem}}\n\\begin{tikzpicture}\n\\tikzstyle{every node}=[auto,every node/.style={rectangle,draw, text centered, text width=1.3cm,minimum height=0.8cm },node distance=3cm]\n\\tikzset{%\nlevel 1/.style={sibling distance = 2cm, level distance=3cm,edge from parent path={(\\tikzparentnode.south) -- (\\tikzchildnode.north)}},\nlevel 2/.style={sibling distance = 1cm,level distance=3cm}\n%level 4/.style={sibling distance = 1.1cm,level distance=3cm}\n}\n  \\node (0){$U_{j,s,t}$}\n    child {node (1) {$\\tilde{c}_{j,s,t}$}\n    child {node {}}\n    child {node {}}\n    child {node {}}\n    child {node (2){$c_{i,j,s,t}, i=1,...,I$}   \nchild {node (3) {}}\n            child {node {}}\n            child {node {}}\n            child {node {$X_{m,t}, m=1,..,M$}       \n             child {node {$X_{m,c,t} \\ \\ $}}\n             child {node {$\\ \\ X_{m,nc,t}$}}}\n             child {node {}}\n                        }\n                  }\nchild {node {$n_{j,s,t}$}}\nchild {node {$bq_{j,s,t}$}} ;\n\n\\node at (0) [xshift=+2.6cm,right,draw=none, text width=6cm]{Utility, $U_{j,s,t}$, is a CRRA function of consumption, leisure, and bequests};\n\\node at (1) [xshift=+4.5cm, right,draw=none, text width=6cm]{Composite consumption, $\\tilde{c}_{j,s,t}$, is a Stone-Geary function of distinct consumption goods};\n\\node at (2) [xshift=+3.0cm, right,draw=none, text width=6cm]{Consumption goods, $c_{i,j,s,t}$, are determined by a fixed coefficient mix of production goods };\n\\node at (3) [xshift=+5.0cm, right,draw=none, text width=6cm]{Production goods for each industry, $X_{m,t}$, are determined by a CES function of production goods from each sector};\n\\end{tikzpicture}\n\\end{figure}\n\n\n    Households choose consumption of a composite consumption good, $c_{j,s+u,t+u}$, labor supply, $n_{j,s+u,t+u},$ and asset holdings, $b_{j,s+u+1,t+u+1}$, to maximize the expected, discounted, lifetime utility subject to their per-period budget constraint.  Total consumption of the composite good is made up of discretionary consumption, $\\tilde{c}_{j,s,t}$, and minimum required purchases of each consumption good, $\\bar{c}_{i,s}$.  Thus the consumer's choice is over $\\tilde{c}_{j,s,t}$, which together with the minimum required purchases determines total composite consumption: $c_{j,s,t}=\\tilde{c}_{j,s,t}+\\sum_{i=1}^{I}\\bar{c}_{i,s}$.  It is therefore the case that the minimum required purchases affect the household's ability to smooth consumption over time.  We discuss the composite consumption good in more detail in Section \\ref{sec:subutil}.  This composite good is age-dependent, and thus the price of the composite consumption good varies with age $s$.  We denote the gross-of-tax price of the composite consumption good for households of age $s$ in period $t$ as $\\tilde{p}_{s,t}$ and the gross-of-tax price for good $i$ at time $t$ as $p_{i,t}$. 
The households' per period budget constraint is:\n    \n    \\begin{equation}\\label{EqBC}\n      \\begin{split}\n        \\sum_{i=1}^{I} p_{i,t}\\bar{c}_{i,s} + \\tilde{p}_{s,t}\\tilde{c}_{j,s,t} + b_{j,s+1,t+1} \\leq \\left(1 + r_t\\right) b_{j,s,t} + w_t e_{j,s}&n_{j,s,t} + \\frac{BQ_{j,t}}{\\lambda_j\\tilde{N}_t} - T_{j,s,t} \\\\\n        \\quad\\text{where}\\quad b_{j,s,1} = 0 \\\\\n        &\\text{for} \\quad E+1\\leq s \\leq E+S \\quad \\forall j,t\n      \\end{split}\n    \\end{equation}\n\n\\noindent\\noindent Here, $r_{t}$ and $w_{t}$ are the real interest rate and the wage rate on a unit of effective labor.  The variable $e_{j,s}$ denotes the effective labor units of an individual from lifetime income group $j$ and age $s$.  An individual's labor income is thus determined by her choice of $n_{j,s,t}$ units of labor times her measure of effective labor units, $e_{j,s}$, times the wage per unit of effective labor.  An individual's effective labor units vary over the life-cycle, as the age subscript implies.  $BQ_{j,t}$ denotes aggregate bequests left by those in lifetime income group $j$ at time $t$.  We divide this number by the number of individuals in lifetime income group $j$ at time $t$, given by $\\lambda_{j}\\tilde{N}_{t}$, to determine the amount of bequests received by each household in lifetime income group $j$.\\footnote{This distribution of bequests is just a placeholder. The goal is to find suitable data to calibrate the process describing the transmission of bequests between individuals of different ages and lifetime income groups.}  The last term in the budget constraint, $T_{j,s,t}$, gives the total taxes paid by the individual.  These include all non-consumption taxes and are based on tax functions for separate tax sources that we estimate based on a microsimulation model.  We discuss the parameterization and calibration of these functions below. \n\n    The Lagrangian for the individual's problem can be written as:\n     \\begin{equation}\\label{eqn:hh_prob_lagrangian}\n      \\begin{split}\n     \\mathcal{L} =  \\max_{\\left\\{ \\substack{\\tilde{c}_{j,s+u,t+u},\\\\\\ {n}_{j,s+u,t+u},\\\\\\ {b}_{j,s+u+1,t+u+1}}\\right\\}_{u=0}^{E+S-s}}  &  \\sum_{u=0}^{E+S-s}\\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right]  \\frac{\\left(c_{j,s+u,t+u}\\right)^{1-\\sigma} - 1}{1-\\sigma} + ...\\\\\n  &   e^{g_y (t+u)(1-\\sigma)}\\chi^n_{s+u}\\left(b\\left[1 - \\left(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\right)^\\upsilon\\right]^\\frac{1}{\\upsilon} + k\\right) + \\rho_{s+u}\\chi^b\\frac{\\left(b_{j,s+u+1,t+u+1}\\right)^{1-\\sigma} - 1}{1-\\sigma}  +... \\\\\n       & \\lambda_{j,s+u,t+u}\\left\\{ (1+r_{t+u})b_{j,s+u,t+u} + w_{t+u}e_{j,s+u}n_{j,s+u,t+u}+\\frac{BQ_{j,t+u}}{\\lambda_{j}\\tilde{N}_{t+u}}-T_{j,s+u,t+u} -... \\right.\\\\\n       & \\left. \\sum_{i=1}^{I}p_{i,t+u}\\bar{c}_{i,s+u}-\\tilde{p}_{s+u,t+u}\\tilde{c}_{j,s+u,t+u} -b_{j,s+u+1,t+u+1} \\right\\}\n        \\end{split}\n    \\end{equation}\n    \n    \n    \n\\noindent\\noindent Taking derivatives with respect to $\\{\\tilde{c}_{j,s+u,t+u},n_{j,s+u,t+u},b_{j,s+u+1,t+u+1}\\}$ gives us the necessary conditions for each $j,s$ and $t$.  
The necessary conditions with respect to the discretionary consumption of the composite consumption good, $\\tilde{c}_{j,s+u,t+u}$, labor supply, $n_{j,s+u,t+u}$, and asset holdings, $b_{j,s+u+1,t+u+1}$, are:\n  \n    \\begin{equation}\\label{Eqcfoc}\n      \\begin{split}\n     \\frac{\\partial \\mathcal{L}}{\\partial \\tilde{c}_{j,s+u,t+u}}  = \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right] c_{j,s+u,t+u}^{-\\sigma} - \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right]  \\lambda_{j,s+u,t+u} \\tilde{p}_{s+u,t+u} = 0, \\forall u\n        \\end{split}\n    \\end{equation}\n\n    \\begin{equation}\\label{Eqnfoc}\n      \\begin{split}\n      \\frac{\\partial \\mathcal{L}}{\\partial n_{j,s+u,t+u}} & = \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right] e^{g_y (t+u)(1-\\sigma)}\\chi^n_{s+u}\\biggl(\\frac{b}{\\tilde{l}}\\biggr)\\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^{\\upsilon-1}\\Biggl[1 - \\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^\\upsilon\\Biggr]^{\\frac{1-\\upsilon}{\\upsilon}} \\\\\n      & -  \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right]\\lambda_{j,s+u,t+u} \\left( w_{t+u} e_{j,s+u} - \\frac{\\partial T_{j,s+u,t+u}}{\\partial n_{j,s+u,t+u}} \\right)= 0, \\forall u\n        \\end{split}\n    \\end{equation}\n\n    \\begin{equation}\\label{Eqbfoc}\n      \\begin{split}\n      \\frac{\\partial \\mathcal{L}}{\\partial b_{j,s+u+1,t+u+1}} & = \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right] \\rho_{s+u}\\chi^b\\bigl(b_{j,s+u+1,t+u+1}\\bigr)^{-\\sigma} - \\beta^u\\left[\\prod_{v=s-1}^{s+u-1}(1-\\rho_v)\\right] \\lambda_{j,s+u,t+u}  \\\\\n      & + \\beta^{u+1}\\left[\\prod_{v=s-1}^{s+u}(1-\\rho_v)\\right] \\lambda_{j,s+u+1,t+u+1} \\left( 1 + r_{t+u+1} - \\frac{\\partial T_{j,s+u+1,t+u+1}}{\\partial b_{j,s+u+1,t+u+1}} \\right)= 0, \\forall u\n      \\end{split}\n    \\end{equation}\n\n Note that the term $\\frac{\\partial T_{j,s+u,t+u}}{\\partial n_{j,s+u,t+u}}$ gives the change in total taxes from additional labor supply, and $\\frac{\\partial T_{j,s+u+1,t+u+1}}{\\partial b_{j,s+u+1,t+u+1}}$ gives the change in total taxes from additional savings.  The tax functions that define the total taxes paid will take into account the interactions between tax sources, for example how increasing capital income by saving more impacts the marginal tax rate on labor income in a system that progressively taxes labor income.  Rearranging the equations above to solve each for $\\lambda_{j,s+u,t+u}$, we get the following:\n    \\begin{equation}\n      \\begin{split}\n      \\lambda_{j,s+u,t+u} = \\frac{c_{j,s+u,t+u}^{-\\sigma}}{\\tilde{p}_{s+u,t+u}} \\nonumber\n      \\end{split}\n    \\end{equation}\n\n    \\begin{equation}\n      \\begin{split}\n      \\lambda_{j,s+u,t+u} = \\frac{e^{g_y (t+u)(1-\\sigma)}\\chi^n_{s+u}\\biggl(\\frac{b}{\\tilde{l}}\\biggr)\\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^{\\upsilon-1}\\Biggl[1 - \\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^\\upsilon\\Biggr]^{\\frac{1-\\upsilon}{\\upsilon}}}{ w_{t+u} e_{j,s+u} - \\frac{\\partial T_{j,s+u,t+u}}{\\partial n_{j,s+u,t+u}} }  \\nonumber\n      \\end{split}\n    \\end{equation}\n\n    \\begin{equation}\n      \\begin{split}\n      \\lambda_{j,s+u,t+u} = \\rho_{s+u}\\chi^b\\bigl(b_{j,s+u+1,t+u+1}\\bigr)^{-\\sigma} + \\beta (1-\\rho_{s+u}) \\lambda_{j,s+u+1,t+u+1} \\left( 1 + r_{t+u+1} - \\frac{\\partial T_{j,s+u+1,t+u+1}}{\\partial b_{j,s+u+1,t+u+1}} \\right)\n        \\end{split}  \\nonumber\n    \\end{equation}\n\n    These three equations can then be reduced to just two equations that must hold for all $j,s,$ and $t$.  
The first relates the marginal utility of consumption of the composite good to the marginal utility of labor:\n    \\begin{equation}\\label{EqcEuler}\n      \\begin{split}\n      & \\frac{ c_{j,s+u,t+u}^{-\\sigma}}{\\tilde{p}_{s+u,t+u}} \\\\\n      & = \\frac{ e^{g_y (t+u)(1-\\sigma)}\\chi^n_{s+u}\\biggl(\\frac{b}{\\tilde{l}}\\biggr)\\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^{\\upsilon-1}\\Biggl[1 - \\biggl(\\frac{n_{j,s+u,t+u}}{\\tilde{l}}\\biggr)^\\upsilon\\Biggr]^{\\frac{1-\\upsilon}{\\upsilon}} } { w_{t+u} e_{j,s+u} - \\frac{\\partial T_{j,s+u,t+u}}{\\partial n_{j,s+u,t+u}} }\n       \\end{split}\n    \\end{equation}\n\n    \\noindent\\noindent The second equation is the intertemporal Euler equation for savings, including the utility effects of bequests:\n    \\begin{equation}\\label{EqbEuler}\n      \\begin{split}\n      & \\frac{ c_{j,s+u,t+u}^{-\\sigma}}{\\tilde{p}_{s+u,t+u}} = \\rho_{s+u}\\chi^b\\bigl(b_{j,s+u+1,t+u+1}\\bigr)^{-\\sigma}  + \\frac{ \\beta(1-\\rho_{s+u}) c_{j,s+u+1,t+u+1}^{-\\sigma}} {\\tilde{p}_{s+u+1,t+u+1}} \\times \\left( 1 + r_{t+u+1} - \\frac{\\partial T_{j,s+u+1,t+u+1}}{\\partial b_{j,s+u+1,t+u+1}} \\right)\n      \\end{split}\n    \\end{equation}\n\n   \n    \\subsection{Household's Portfolio Problem}\\label{sec:portfolio}\n    \nHouseholds are assumed to have constant elasticity of substitution (CES) preferences over debt and equity in their portfolio.  These preferences allow for investor portfolios that are a mix of debt and equity despite the two assets having differential returns.  Thus, while our model does not include idiosyncratic or aggregate uncertainty, these CES preferences account for the premium paid to the riskier asset through the preference parameters. To match lifecycle portfolio changes, we may consider allowing the CES preference parameters to vary by age.  Given the CES preferences, the household's total assets, $a_{j,s,t}$, are given by (\\textcolor{red}{NOTE that if we include these notations we should change our notation for assets from $b$ to $a$. We need to also think about the notation for equity - I use $e$ below, but we are already using that for effective labor units.}): \n\n\\begin{equation}\n\\label{eqn:ces_port}\na_{j,s,t}= \\left[\\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b_{j,s,t}^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}+(1-\\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e_{j,s,t}^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}\\right]^{\\frac{\\ve_{a,s}}{1+\\ve_{a,s}}}\n\\end{equation}\n    \nThe parameter $\\ve_{a,s}$ is the elasticity of substitution between bonds and stocks in the asset portfolio of an age $s$ household.  $\\gamma_{a,s}$ is the taste parameter in these preferences (\\textcolor{red}{I've written this in the most flexible way, where both parameters depend upon age, but perhaps we just want the taste parameter to vary by age while the rate of substitution is held constant.}).\n\nSince neither firms nor governments default in our model, bonds issued by firms and by the government will all have the same pretax rate of return, $r_{b,t}$.  \n\nIt's not clear whether all firm equity will have the same return.  It seems that multinationals, with the ability to shift profits, may have higher returns than domestic corporations.  In general, firms will earn economic profits and thus have a rate of return in excess of the risk-free return, $r_{b,t}$.  If it is the case that some firms earn greater returns than others, I think we may just assume that households own a diversified equity portfolio of all firms and thus earn the average return on equity.  
Thus we'll have either a market return for equity that is the average of the heterogeneous returns across firms, or a return that is the same for all firms.  We will denote the pretax market return on equity as $r_{e,t}$.\n\nWe introduce further notation for the after-tax gross return on each asset.  The after-tax gross return on bonds for a household of age $s$, lifetime income group $j$, and in year $t$ is given by:\n\n\\begin{equation}\n\\rho_{b,j,s,t}=(1+r_{b,t}(1-\\tau^{int}(y_{j,s,t})))\n\\end{equation}\n\n\\textcolor{red}{Note that I've again shifted some notation.  We had used $a$ to denote total income for tax purposes, while I use $y$ above (since I've used $a$ for total assets).  Also, I'm writing the marginal tax rate on interest income as a function of total income, $\\tau^{int}(y_{j,s,t})$, rather than using the total tax function.  I think this is helpful for expository purposes.  Is it ok to use the marginal rather than the average tax function?  Seems like we could use either interchangeably.  But if we want to use the total tax function, the notation for the above would be: } \n\n\\begin{equation}\n\\rho_{b,j,s,t}=(1+r_{b,t}) - \\frac{\\partial T_{j,s,t}}{\\partial b_{j,s,t}}\n\\end{equation}\n\n\\textcolor{red}{Either way, we'll want to be sure that when we calibrate these functions we account for the mix of tax-exempt and taxable interest realized by households and how that changes across age and income group.  The microsimulation model should be able to help us here.}\n\n\nThe after-tax gross return on equity for a household of age $s$, lifetime income group $j$, and in year $t$ is given by:\n\n\\begin{equation}\n\\rho_{e,j,s,t}=(1+r_{e,t}(1-\\tau^{cap}(y_{j,s,t})))\n\\end{equation}\n\nHere, $\\tau^{cap}$ will be the tax rate on capital income - some mix of the tax on dividends and capital gains from corporate and non-corporate entities.  Recall that we do not explicitly model capital gains realizations, nor do we track dividend issues from each representative firm.  Instead, the return to the firms (both from dividends and capital gains) is put into the per-period return on equity, $r_{e,t}$.  Thus the tax rate on these returns must be a weighted average of the taxes on dividends and capital gains, where the weighting is given by data on the share of income from capital gains and dividends by age and income group.\n\nWe can now write the gross after-tax return on the household's total portfolio:\n\n\\begin{equation}\n\\label{eqn:port_return}\n\\rho_{a,j,s,t}a_{j,s,t}=\\rho_{b,j,s,t}b_{j,s,t}+\\rho_{e,j,s,t}e_{j,s,t}\n\\end{equation}\n\nThe optimal portfolio is given by the household choosing bonds and stocks to maximize Equation \\ref{eqn:port_return} subject to Equation \\ref{eqn:ces_port}.  
The Lagrangian for this problem is:\n\n\\begin{equation}\n\\mathcal{L} =  \\max_{b_{j,s,t},e_{j,s,t}} \\rho_{b,j,s,t}b_{j,s,t} + \\rho_{e,j,s,t}e_{j,s,t} + \\lambda_{j,s,t}\\left(a_{j,s,t} -  \\left[\\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b_{j,s,t}^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}+(1-\\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e_{j,s,t}^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}\\right]^{\\frac{\\ve_{a,s}}{1+\\ve_{a,s}}}\\right)\n\\end{equation}\n\nSince every variable is subscripted by $j,s,t$ we drop these and write the necessary conditions as:\n\n\\begin{equation}\n\\label{eqn:port_foc_b}\n\\frac{\\partial \\mathcal{L}}{\\partial b}: \\rho_{b} = \\lambda \\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b^{\\frac{1}{\\ve_{a,s}}}\\left[\\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}+(1-\\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}\\right]^{\\frac{-1}{1+\\ve_{a,s}}}\n\\end{equation}\n\n\\begin{equation}\n\\label{eqn:port_foc_e}\n\\frac{\\partial \\mathcal{L}}{\\partial e}: \\rho_{e} = \\lambda (1- \\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e^{\\frac{1}{\\ve_{a,s}}}\\left[\\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}+(1-\\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}\\right]^{\\frac{-1}{1+\\ve_{a,s}}}\n\\end{equation}\n       \n\\begin{equation}\n\\label{eqn:port_foc_lambda}\n\\frac{\\partial \\mathcal{L}}{\\partial \\lambda}: a = \\left[\\gamma_{a,s}^{\\frac{-1}{\\ve_{a,s}}}b^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}+(1-\\gamma_{a,s})^{\\frac{-1}{\\ve_{a,s}}}e^{\\frac{1+\\ve_{a,s}}{\\ve_{a,s}}}\\right]^{\\frac{\\ve_{a,s}}{1+\\ve_{a,s}}}\n\\end{equation}\n              \nWe can use Equations \\ref{eqn:port_foc_b} and \\ref{eqn:port_foc_e} to find the ratio of bonds to stocks as (again, suppressing the subscripts):\n\n\\begin{equation}\n\\label{eqn:port_ratio}\n\\frac{b}{e}= \\left(\\frac{\\rho_{b}}{\\rho_{e}}\\right)^{\\ve_{a,s}}\\frac{\\gamma_{a,s}}{(1-\\gamma_{a,s})}\n\\end{equation}\n     \nEquation \\ref{eqn:port_foc_lambda} implies that the bracketed term satisfies $\\left[\\cdot\\right]^{\\frac{-1}{1+\\ve_{a,s}}}=a^{\\frac{-1}{\\ve_{a,s}}}$, so Equations \\ref{eqn:port_foc_b} and \\ref{eqn:port_foc_e} can be solved for the demand for bonds and stocks separately: $b=\\gamma_{a,s}\\left(\\rho_{b}/\\lambda\\right)^{\\ve_{a,s}}a$ and $e=(1-\\gamma_{a,s})\\left(\\rho_{e}/\\lambda\\right)^{\\ve_{a,s}}a$.  Substituting these back into \\ref{eqn:port_foc_lambda} gives $\\lambda=\\left(\\gamma_{a,s}\\rho_{b}^{1+\\ve_{a,s}}+(1-\\gamma_{a,s})\\rho_{e}^{1+\\ve_{a,s}}\\right)^{\\frac{1}{1+\\ve_{a,s}}}$, which is exactly the portfolio return $\\rho_{a,j,s,t}$ defined in Equation \\ref{eqn:port_rate} below.  The demands are therefore:\n\n\\begin{equation}\n\\label{eqn:bond_demand}\nb_{j,s,t} = \\left(\\frac{\\rho_{b,j,s,t}}{\\rho_{a,j,s,t}}\\right)^{\\ve_{a,s}}\\gamma_{a,s}a_{j,s,t}\n\\end{equation}     \n\n\\begin{equation}\n\\label{eqn:equity_demand}\ne_{j,s,t} = \\left(\\frac{\\rho_{e,j,s,t}}{\\rho_{a,j,s,t}}\\right)^{\\ve_{a,s}}(1-\\gamma_{a,s})a_{j,s,t}\n\\end{equation}     \n\nWe can thus find the return to the portfolio, using $\\rho_{a,j,s,t}a_{j,s,t}=\\rho_{b,j,s,t}b_{j,s,t}+\\rho_{e,j,s,t}e_{j,s,t}=\\lambda_{j,s,t}a_{j,s,t}$, as:\n\n\\begin{equation}\n\\label{eqn:port_rate}\n\\rho_{a,j,s,t} = \\left(\\gamma_{a,s}\\rho_{b,j,s,t}^{1+\\ve_{a,s}}+(1-\\gamma_{a,s})\\rho_{e,j,s,t}^{1+\\ve_{a,s}}\\right)^{\\frac{1}{1+\\ve_{a,s}}}\n\\end{equation}     \n\n\\subsubsection{After-tax return differentials}\n\nI don't think we have any problem in that households have different after-tax returns.  Each will hold a different portfolio because of this, but none will have corner solutions, because of the CES preferences.  So it's no problem if, for example, the after-tax return to stocks exceeds the return to bonds.  This just means households hold relatively more stocks.  \n\nWhen considering the firms' problems we do need to think about different households having different after-tax returns to equity.  
The solution there will be that the firm just chooses a representative household when thinking about maximizing firm value.  This household will be termed the ``marginal investor\".  However, we will need to think a bit about which household in the microsimulation model represents this investor.\n       \n       \n    \\subsection{Household's Subutility Function}\\label{sec:subutil}\n    \n    Household preferences over the composite consumption good are modeled as a Stone-Geary function. The aggregate discretionary consumption of the composite good is defined as follows.\n    \\begin{equation} \\label{eqn:comp_cons}\n        \\tilde{c}_{j,s,t}  = \\prod_{i=1}^I \\left( c_{i,j,s,t} - \\bar c_{i,s} \\right) ^{\\alpha_{i,s}} \n    \\end{equation}\n\nHere, $c_{i,j,s,t}$ is consumption of good $i$ by a household of type $j$ and age $s$ at time $t$.  There are $I$ total goods and $\\bar{c}_{i,s}$ represents the minimum consumption amount for each good at each age.  The parameters $\\alpha_{i,s}$ are the share parameters (and $\\sum_{i=1}^{I} \\alpha_{i,s}=1$).  They correspond to the share of income, after minimum expenditure amounts, that is spent on each good at each age.  Allowing the minimum consumption amounts and the share parameters to vary by age helps to incorporate life-cycle profiles of consumption into the model.  For example, we do not explicitly model household formation decisions, but some of the effects of changes in household composition over the life-cycle are captured through the parameters of the Stone-Geary function.  For example, the minimum required expenditure on shelter may be higher in the middle of the life-cycle when household size is larger.  The minimum consumption amounts also mean that the composition of consumption will vary with income, even though all households have the same utility function.\n\nThe consumer chooses $c_{i,j,s,t}$ to maximize Equation \\ref{eqn:comp_cons} subject to the budget constraint:\n\n    \\begin{equation} \\label{eqn:cons_budgetcons}\n        \\sum_{i=1}^{I} p_{i,t}(c_{i,j,s,t}-\\bar{c}_{i,s})  = \\tilde{p}_{s,t}\\tilde{c}_{j,s,t}\n    \\end{equation}\n\n\\noindent where $p_{i,t}$ is the gross of tax price of good $i$ at time $t$ and $\\tilde{p}_{s,t}$ is the gross of tax price of the discretionary component of the composite consumption good consumed by those of age $s$ at time $t$.  
Maximization of \ref{eqn:comp_cons} subject to \ref{eqn:cons_budgetcons} yields the Lagrangian:

    \begin{equation} \label{eqn:cons_lagrangian}
       \mathcal{L} =  \max_{\{c_{i,j,s,t}\}_{i=1}^{I}}  \prod_{i=1}^I \left( c_{i,j,s,t} - \bar c_{i,s} \right) ^{\alpha_{i,s}}  + \lambda \left(\tilde{p}_{s,t}\tilde{c}_{j,s,t} - \sum_{i=1}^{I} p_{i,t}(c_{i,j,s,t}-\bar{c}_{i,s})\right)
    \end{equation}

    which has $I$ first-order conditions (for each $j$, $s$, $t$):

      \begin{equation} \label{eqn:cons_FOC}
      \begin{split}
       & \frac{\partial \mathcal{L}}{\partial c_{i,j,s,t}} = \frac{\alpha_{i,s} \prod_{i=1}^I \left( c_{i,j,s,t} - \bar c_{i,s} \right) ^{\alpha_{i,s}}}{(c_{i,j,s,t}-\bar{c}_{i,s})}-\lambda p_{i,t} = 0, \forall \ i  \\
       & \implies  \frac{\alpha_{i,s} \prod_{i=1}^I \left( c_{i,j,s,t} - \bar c_{i,s} \right) ^{\alpha_{i,s}}}{(c_{i,j,s,t}-\bar{c}_{i,s})} = \lambda p_{i,t}, \forall \ i \\
       & \implies  \frac{\alpha_{i,s} \prod_{i=1}^I \left( c_{i,j,s,t} - \bar c_{i,s} \right) ^{\alpha_{i,s}}}{ p_{i,t}(c_{i,j,s,t}-\bar{c}_{i,s})} = \lambda, \forall \ i \\
       & \implies \frac{\alpha_{i,s}}{p_{i,t}(c_{i,j,s,t}-\bar{c}_{i,s})}=\frac{\alpha_{k,s}}{p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}, \forall \ i,k \\
       & \implies c_{i,j,s,t}= \frac{\alpha_{i,s} p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}{\alpha_{k,s} p_{i,t}} + \bar{c}_{i,s}, \ \forall \ i,k
       \end{split}
    \end{equation}

    Now substitute the last line of \ref{eqn:cons_FOC} into the budget constraint (Equation \ref{eqn:cons_budgetcons}):

          \begin{equation} \label{eqn:cons_solve}
      \begin{split}
       & \tilde{p}_{s,t}\tilde{c}_{j,s,t} = \sum_{i=1}^{I}p_{i,t}(c_{i,j,s,t}-\bar{c}_{i,s}) \\
       & \implies  \tilde{p}_{s,t}\tilde{c}_{j,s,t} = \sum_{i=1}^{I}p_{i,t}\left[ \frac{\alpha_{i,s} p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}{\alpha_{k,s} p_{i,t}} + \bar{c}_{i,s}- \bar{c}_{i,s}\right] \\
       & \implies  \tilde{p}_{s,t}\tilde{c}_{j,s,t} = \sum_{i=1}^{I}\left[ \frac{\alpha_{i,s} p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}{\alpha_{k,s}}\right] \\
       & \implies  \tilde{p}_{s,t}\tilde{c}_{j,s,t} = \frac{ p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}{\alpha_{k,s}} \underbrace{\sum_{i=1}^{I}\alpha_{i,s}}_{=1} \\
        & \implies  \tilde{p}_{s,t}\tilde{c}_{j,s,t} = \frac{ p_{k,t}(c_{k,j,s,t}-\bar{c}_{k,s})}{\alpha_{k,s}} \\
        & \implies  c_{k,j,s,t}  = \frac{\alpha_{k,s} \tilde{p}_{s,t}\tilde{c}_{j,s,t}}{p_{k,t}} + \bar{c}_{k,s},  \forall \ k  \\
       \end{split}
    \end{equation}

    Thus, total consumption of each good $i$, $c_{i,j,s,t}$, is given by the minimum consumption amount plus the good's share of the total expenditures remaining after the minimum expenditures on all goods are made (the ``supernumerary'' expenditure).  A numerical sketch of this allocation rule is given below.
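The following Python sketch illustrates the allocation rule in Equation \ref{eqn:cons_solve}; the parameter values are made up for illustration, and the sketch uses the composite price index $\tilde{p}_{s,t}$ derived in the next subsection:

\begin{verbatim}
# Stone-Geary allocation sketch with illustrative parameter values.
import numpy as np

alpha = np.array([0.5, 0.3, 0.2])  # share parameters alpha_{i,s}, sum to 1
cbar  = np.array([0.2, 0.1, 0.0])  # minimum consumption amounts cbar_{i,s}
p     = np.array([1.0, 1.5, 2.0])  # gross-of-tax prices p_{i,t}
ctil  = 2.0                        # composite consumption ctilde_{j,s,t}

ptil = np.prod((p / alpha) ** alpha)   # composite price index ptilde_{s,t}
c = alpha * ptil * ctil / p + cbar     # demand for each good i

assert np.isclose(p @ (c - cbar), ptil * ctil)         # budget constraint
assert np.isclose(np.prod((c - cbar) ** alpha), ctil)  # aggregator recovered
\end{verbatim}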
We derive the price of the age-$s$ composite consumption good in period $t$, $\tilde{p}_{s,t}$, by using the demand for good $i$ from Equation \ref{eqn:cons_solve} in the function defining aggregate discretionary consumption, Equation \ref{eqn:comp_cons}:

              \begin{equation} \label{eqn:composite_price}
      \begin{split}
      & \tilde{c}_{j,s,t} = \prod_{i=1}^{I}(c_{i,j,s,t}-\bar{c}_{i,s})^{\alpha_{i,s}} \\
      &\implies \tilde{c}_{j,s,t} = \prod_{i=1}^{I}\left( \frac{\alpha_{i,s} \tilde{p}_{s,t}\tilde{c}_{j,s,t}}{p_{i,t}} + \bar{c}_{i,s}-\bar{c}_{i,s}\right)^{\alpha_{i,s}} \\
      &\implies \tilde{c}_{j,s,t} = \prod_{i=1}^{I} \left( \frac{\alpha_{i,s} \tilde{p}_{s,t}\tilde{c}_{j,s,t}}{p_{i,t}} \right)^{\alpha_{i,s}} \\
      &\implies \tilde{c}_{j,s,t} =  \tilde{p}_{s,t}\tilde{c}_{j,s,t} \prod_{i=1}^{I}\left( \frac{\alpha_{i,s}}{p_{i,t}} \right)^{\alpha_{i,s}} \\
      &\implies \frac{\tilde{p}_{s,t}\tilde{c}_{j,s,t}}{\tilde{c}_{j,s,t}} =  \prod_{i=1}^{I}\left( \frac{p_{i,t}}{\alpha_{i,s}} \right)^{\alpha_{i,s}} \\
       &\implies \tilde{p}_{s,t} =  \prod_{i=1}^{I}\left( \frac{p_{i,t}}{\alpha_{i,s}} \right)^{\alpha_{i,s}} \\
       \end{split}
    \end{equation}

    This composite good price is then used in the household's intertemporal optimization problem described in Equation \ref{eqn:hh_prob_lagrangian}.  With the parameters and endogenous variables in hand, we then use Equation \ref{eqn:cons_solve} to find the $c_{i,j,s,t}$.

    \subsection{Relating Consumption and Production Goods}\label{sec:prod_cons_map}

    Our model contains $I$ consumption goods and $M$ production goods.  We denote the quantity of production good $m$ in period $t$ as $X_{m,t}$.  We relate the output of the production sectors and the consumption goods using a fixed coefficient model.  That is, we assume each consumption good is made up of a mix of the outputs of different production sectors.  This means that the composition of these consumption goods does not respond to prices.  The weights that determine the mix for each consumption good are given in the matrix $\Pi$.  Element $\pi_{i,m}$ of the matrix $\Pi$ corresponds to the percentage contribution of the output of sector $m$ to the production of good $i$.  The total supply of good $i$ in the economy at time $t$ is thus given by:

             \begin{equation} \label{eqn:mix_cons}
             c_{i,t} = \sum_{m=1}^{M}\pi_{i,m}X_{m,t}
    	\end{equation}

 \noindent And the price of a unit of consumption good $i$ at time $t$ is:

             \begin{equation} \label{eqn:mix_cons_price}
             p_{i,t} = \sum_{m=1}^{M}\pi_{i,m}p_{m,t},
    	\end{equation}

    \noindent where $p_{m,t}$ is the price of output of production sector $m$ at time $t$.

    \subsection{Preferences for Corporate vs. Noncorporate Goods}\label{sec:pref_corp_noncorp}

    Production sectors may contain corporate and non-corporate producers, each facing different tax treatment.  If the output from corporate and non-corporate entities were perfect substitutes and the producers had the same production technology, consumers would end up consuming only the output from the sector with the lowest after-tax cost of production.
\citet{GK1989} propose a model where different production sectors use different technologies, which can give rise to an equilibrium where both the corporate and non-corporate sectors produce the same good.  We take a different tack: following \citet{FR1993}, we allow production technologies to vary across industries, but not across sectors within an industry.  Both sectors produce output in equilibrium because output across sectors is not perfectly substitutable.  For example, food outside the home from a corporate chain restaurant is not the same as food outside the home from a small, family-owned restaurant.  Specifically, we define consumer preferences such that demand for the composite production good (combining output from the corporate and non-corporate sectors) for production sector $m$ at time $t$, $X_{m,t}$, is a constant elasticity of substitution (CES) function of the output from the corporate and non-corporate sectors, $X_{m,t,C}$ and $X_{m,t,NC}$, respectively:

                  \begin{equation} \label{eqn:comp_output}
             X_{m,t} = \left[\gamma_{m}^{\frac{1}{\ve_{3}}}X_{m,t,C}^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}X_{m,t,NC}^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{\ve_{3}}{(\ve_{3}-1)}},
    	\end{equation}

	\noindent where $\ve_{3}$ is the elasticity of substitution between corporate and non-corporate output and is assumed to be constant across industries.  The share parameter in the CES function, $\gamma_{m}$, is allowed to vary across industries and will be identified by the fraction of corporate-produced output in each industry.  The CES function thus explains the existence of corporate and non-corporate production within each industry as well as the different shares of corporate output across industries.  Because of these preferences, changes in corporate and non-corporate tax treatment will have differential impacts across consumers of different ages and income levels.
	Consumers choose $X_{m,t,C}$ and $X_{m,t,NC}$ to maximize \ref{eqn:comp_output} subject to:

	 \begin{equation} \label{eqn:comp_output_cons}
             p_{m,t}X_{m,t} = p_{m,t,C}X_{m,t,C}+p_{m,t,NC}X_{m,t,NC},
    	\end{equation}

\noindent where $p_{m,t,C}$ and $p_{m,t,NC}$ are the prices of output from the corporate and non-corporate firms in production industry $m$, respectively.  Note that these prices are determined through the firms' profit maximization problems and the zero economic profit condition for firms.
The constrained optimization problem consumers face is:

 \begin{equation} \label{eqn:comp_output_lagrangian}
	\begin{split}
	 \mathcal{L} = \left[\gamma_{m}^{\frac{1}{\ve_{3}}}X_{m,t,C}^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}X_{m,t,NC}^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{\ve_{3}}{(\ve_{3}-1)}} + \lambda_{m,t}\left(p_{m,t}X_{m,t} - p_{m,t,C}X_{m,t,C}-p_{m,t,NC}X_{m,t,NC}\right)
  	\end{split}
\end{equation}

    The FOCs are:

\begin{equation} \label{eqn:comp_output_foc_C}
	\begin{split}
       	&  \frac{\partial \mathcal{L}}{\partial X_{m,t,C}} = \gamma_{m}^{\frac{1}{\ve_{3}}} X_{m,t,C}^{\frac{-1}{\ve_{3}}} \left[\gamma_{m}^{\frac{1}{\ve_{3}}}X_{m,t,C}^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}X_{m,t,NC}^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{1}{(\ve_{3}-1)}} - \lambda_{m,t} p_{m,t,C} = 0
      	 \end{split}
\end{equation}

   and

\begin{equation} \label{eqn:comp_output_foc_NC}
	\begin{split}
       	&  \frac{\partial \mathcal{L}}{\partial X_{m,t,NC}} = (1-\gamma_{m})^{\frac{1}{\ve_{3}}} X_{m,t,NC}^{\frac{-1}{\ve_{3}}} \left[\gamma_{m}^{\frac{1}{\ve_{3}}}X_{m,t,C}^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}X_{m,t,NC}^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{1}{(\ve_{3}-1)}} - \lambda_{m,t} p_{m,t,NC} = 0
	\end{split}
\end{equation}

    Solving these two necessary conditions, we can find the demand for corporate and non-corporate output in industry $m$ as a function of the prices of output from each sector of industry $m$, the price of the composite production good, the demand for the composite production good, and the parameters:

\begin{equation} \label{eqn:demand_XmtC}
	X_{m,t,C} = \frac{\gamma_{m}p_{m,t}X_{m,t}}{p_{m,t,C}^{\ve_{3}}\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]}
\end{equation}

    and

    \begin{equation} \label{eqn:demand_XmtNC}
	X_{m,t,NC} = \frac{(1-\gamma_{m})p_{m,t}X_{m,t}}{p_{m,t,NC}^{\ve_{3}}\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]}
\end{equation}

To determine $p_{m,t}$, note that the CES subutility function defining preferences over corporate and non-corporate output within a production industry is linearly homogeneous.  Because the subutility function is linearly homogeneous, we know that the associated indirect utility function is homogeneous of degree one in $X_{m,t}$.  Letting $V(\cdot)$ represent the indirect utility function, this means that $V(p_{m,t,C},p_{m,t,NC},\lambda X_{m,t}) = \lambda V(p_{m,t,C},p_{m,t,NC}, X_{m,t})$.  The linear homogeneity of the utility function also means that the indirect utility function is homogeneous of degree $-1$ in prices.  That is, $V(\lambda p_{m,t,C},\lambda p_{m,t,NC}, X_{m,t}) = \frac{V(p_{m,t,C},p_{m,t,NC}, X_{m,t})}{\lambda}$. Linear homogeneity of the utility function then means that:

\begin{equation}
V(p_{m,t,C},p_{m,t,NC}, X_{m,t}) = \frac{p_{m,t}X_{m,t}}{e(p_{m,t,C},p_{m,t,NC})},
\end{equation}

\noindent where $e(p_{m,t,C},p_{m,t,NC})$ is the minimum expenditure needed to obtain a unit of the composite good at the given prices.
Rearranging, and substituting the demands \ref{eqn:demand_XmtC} and \ref{eqn:demand_XmtNC} into the CES aggregator to evaluate $V(\cdot)$, we have:

 \begin{equation}
 \label{eqn:price_comp}
 \begin{split}
& e(p_{m,t,C},p_{m,t,NC}) = \frac{p_{m,t}X_{m,t}}{V(p_{m,t,C},p_{m,t,NC}, X_{m,t})}\\
&\implies e(p_{m,t,C},p_{m,t,NC}) = p_{m,t}X_{m,t}/ \\
& \left[\gamma_{m}^{\frac{1}{\ve_{3}}}\left( \frac{\gamma_{m}p_{m,t}X_{m,t}}{p_{m,t,C}^{\ve_{3}}\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]}\right)^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}\left(\frac{(1-\gamma_{m})p_{m,t}X_{m,t}}{p_{m,t,NC}^{\ve_{3}}\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]}\right)^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{\ve_{3}}{(\ve_{3}-1)}}\\
&\implies e(p_{m,t,C},p_{m,t,NC}) = \left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]\left[\gamma_{m}^{\frac{1}{\ve_{3}}}\left( \frac{\gamma_{m}}{p_{m,t,C}^{\ve_{3}}}\right)^{\frac{(\ve_{3}-1)}{\ve_{3}}}+(1-\gamma_{m})^{\frac{1}{\ve_{3}}}\left(\frac{(1-\gamma_{m})}{p_{m,t,NC}^{\ve_{3}}}\right)^{\frac{(\ve_{3}-1)}{\ve_{3}}}\right]^{\frac{\ve_{3}}{(1-\ve_{3})}}\\
&\implies e(p_{m,t,C},p_{m,t,NC}) = \left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]^{\frac{\ve_{3}}{(1-\ve_{3})}}\\
&\implies e(p_{m,t,C},p_{m,t,NC}) =
\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]^{\frac{1}{(1-\ve_{3})}}\\
\end{split}
\end{equation}

\noindent In the second step, the factor $p_{m,t}X_{m,t}$ cancels and the common bracketed term is pulled out of the CES aggregate; the third step uses $\gamma_{m}^{\frac{1}{\ve_{3}}}\gamma_{m}^{\frac{(\ve_{3}-1)}{\ve_{3}}} = \gamma_{m}$ and $p_{m,t,C}^{-\ve_{3}\frac{(\ve_{3}-1)}{\ve_{3}}} = p_{m,t,C}^{1-\ve_{3}}$.  Thus we have the price of the corporate--non-corporate composite good from production industry $m$ at time $t$ as:
\begin{equation}
 e(p_{m,t,C},p_{m,t,NC})=p_{m,t}=\left[\gamma_{m}p_{m,t,C}^{1-\ve_{3}}+(1-\gamma_{m})p_{m,t,NC}^{1-\ve_{3}}\right]^{\frac{1}{(1-\ve_{3})}}
 \end{equation}
Here $e(p_{m,t,C},p_{m,t,NC})=p_{m,t}$ because a unit of ``utility'' is really a unit of output of the composite production good $X_{m,t}$.
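As a quick numerical check (a sketch with made-up parameter values), the demands in Equations \ref{eqn:demand_XmtC} and \ref{eqn:demand_XmtNC} exhaust the budget and reproduce the composite quantity when $p_{m,t}$ is set to this price index:

\begin{verbatim}
import numpy as np

gamma, eps = 0.7, 2.0   # gamma_m and elasticity epsilon_3 (illustrative)
pC, pNC = 1.2, 0.9      # corporate and non-corporate output prices

D = gamma * pC ** (1 - eps) + (1 - gamma) * pNC ** (1 - eps)
p = D ** (1 / (1 - eps))   # composite price p_{m,t}

X = 1.0                                       # composite demand X_{m,t}
XC = gamma * p * X / (pC ** eps * D)          # eqn:demand_XmtC
XNC = (1 - gamma) * p * X / (pNC ** eps * D)  # eqn:demand_XmtNC

assert np.isclose(pC * XC + pNC * XNC, p * X)    # budget is exhausted
agg = (gamma ** (1 / eps) * XC ** ((eps - 1) / eps)
       + (1 - gamma) ** (1 / eps) * XNC ** ((eps - 1) / eps)) ** (eps / (eps - 1))
assert np.isclose(agg, X)                        # CES aggregator returns X
\end{verbatim}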
{"text": "\\documentclass[./Thesis.tex]{subfiles}\n\\begin{document}\n\n\\chapter{Exponentiation}\n\\label{chap:exponentiation}\n\n\\epigraph{\n  Accurate reckoning. \\\\\n  The entrance into the knowledge of all existing\n  things and all obscure secrets.\n}{\n  Translated from the Rhind Egyptian Mathematical Papyrus.\n  \\cite{rhind-papyrus}\n}\n\nIn this chapter we will investigate formalizing exponentiation and the\nalgorithms to compute it. As \\Agda{} is a proof relevant language, these things\nare one and the same \\cite{hott-book}.\n% TODO: Rewrite\n\\section{Inductive Definitions}\n\\label{sec:inductive-definitions}\n\n\\begin{code}[hide]\n  open import Algebra using (CommutativeMonoid)\n  module Exponentiation where\n  open import Relation.Nullary.Decidable using (False)\n  open import Relation.Nullary.Negation using (contradiction)\n  open import AKS.Nat using (\u2115; zero; suc; _+_; _*_; _<_; lte; _\u225f_)\n  open import AKS.Nat using (n%m<m)\n  open import Data.Nat.Properties using (*-identity\u02e1; *-assoc; *-comm)\n  open import Relation.Binary.PropositionalEquality using (_\u2261_; _\u2262_; module \u2261-Reasoning) renaming (refl to \u2261-refl; sym to \u2261-sym; cong to \u2261-cong)\n\\end{code}\n\nBefore formalizing a piece of mathematics its often useful to step\nback and think about the desired semantics of your formalism. In this case we\nask the question what are the semantics of exponentiation. Middle and Elementary\nstudents often learn that exponentiation is ``repeated'' multiplication. This\nthought is made more precise below:\n\\begin{align}\n  \\label{eqn:exp-first-idea}\n  x^n = \\underbrace{x \\times x \\times \\dots \\times x}_{n - 1 \\text{ multiplications}}\n\\end{align}\nThe equation above lacks parentheses, arbitrarily assigning right precedence\nthe equation becomes:\n\\begin{align}\n  \\label{eqn:exp-second-idea}\n  x^n = x \\times (x \\times (x \\times \\dots (x \\times x) \\dots ))\n\\end{align}\nThis leads to an obvious inductive definition to compute $x^n$ just multiply $x$\nby $x^{n -1}$ with a base case of $x^0 = 1$. This definition cleanly bakes a law\nof exponents $x^{1+n} = x^1 \\times x^n = x \\times x^n$ into itself. This\nidea is formalized below. \\\\\n\n\\begin{code}[hide]\n  module Basic (C : Set) where\n    open \u2261-Reasoning \n    infixr 8 _^_\n\\end{code}\n\\begin{code}\n    _^_ : \u2115 \u2192 \u2115 \u2192 \u2115\n    x ^ zero = 1\n    x ^ suc n = x * x ^ n\n\\end{code} \\\\\n\nWe demonstrate the correctness of this definition by proving one of the\nfundamental laws of exponentiation,\nthe exponentiational homomorphism $x^{n + m} = x^n x^m$.\n\\begin{code}\n    ^-homo : \u2200 x n m \u2192 x ^ (n + m) \u2261 x ^ n * x ^ m\n\\end{code}\nThe type above just makes the quantifiers explicit. The base case below\nis more interesting. We first simplify and unwrap the definitions then\ntack a $1$ onto the exponential. This is allowable as $1$ is the\nmultiplicative unit. Finally we re-wrap definitions, notice that\n$x \\string^{} \\, 0$ is defined to be $1$ so the simplifier can work in reverse.\n\\begin{code}\n    ^-homo x zero m = begin\n      x ^ (0 + m)   \u2261\u27e8\u27e9\n      x ^ m         \u2261\u27e8 \u2261-sym (*-identity\u02e1 (x ^ m)) \u27e9\n      1 * x ^ m     \u2261\u27e8\u27e9\n      x ^ 0 * x ^ m \u220e\n\\end{code}\nThe inductive step starts similarlly to the base case in so far as we\nunfold the defintions. Then we invoke the inductive hypothesis which in \\Agda{}\nis just a recursive call to our proof. 
Notice that the induction is well founded,
as the natural $n$ decreases by one at each inductive step.
Lastly we associate the multiplications correctly.
\begin{code}
    ^-homo x (suc n) m = begin
      x ^ (suc n + m)     ≡⟨⟩
      x ^ suc (n + m)     ≡⟨⟩
      x * x ^ (n + m)     ≡⟨ ≡-cong (λ e → x * e) (^-homo x n m) ⟩
      x * (x ^ n * x ^ m) ≡⟨ ≡-sym (*-assoc x (x ^ n) (x ^ m)) ⟩
      (x * x ^ n) * x ^ m ≡⟨⟩
      (x ^ suc n) * x ^ m ∎
\end{code}
\section{Don't Repeat Yourself}
\label{sec:dont-repeat-yourself}
In the previous section we assumed that the base of our exponential was a
natural number. This definition restricts even basic mathematics, as expressions
like $(-1)^3$ and $\pi^2$ are not well typed. A mathematician might suggest
using a base of complex numbers, as the naturals, integers, and reals are all
subclasses of $\mathbb{C}$. This solution has two problems. First, subtyping
(the type-theoretic analog of subclassing) is an open problem in a language like
\Agda{} \cite{algebraic-subtyping}. Second, there are types which are not
subclasses of $\mathbb{C}$ that we would like to exponentiate. A notable example
comes from linear algebra: the $n$th entry of the Fibonacci sequence can be found by
raising a certain matrix to the $n$th power \cite{linear-algebra}.
\begin{align}
  \label{eqn:fib-example}
  \rowarrowsep=-2pt
  \begin{gmatrix}[p]
    F_{n - 1} & F_{n} \\
    F_{n} & F_{n + 1}
  \end{gmatrix}
  =
  {
  \begin{gmatrix}[p]
    0 & 1 \\
    1 & 1
  \end{gmatrix}
  }^n
\end{align}
In the coming chapters we will need to exponentiate non-number types, so we need
to step back and generalize our base. In the previous section the proofs
required almost no properties of multiplication: just an
associative binary operation equipped with an identity element. Those
constraints are precisely the definition of a monoid. A quick sanity check shows that all the
types listed above form a monoid under their respective multiplication
operators. \\

We can generalize our definition by creating a module that takes any monoid.
Our monoid laws are proven over an arbitrary \textit{setoid}. A setoid is a
packaging of some type $C$ and an equivalence relation $\AgdaFunction{\_≈\_}$ over
that type. For instance the
naturals form a setoid with $\AgdaFunction{\_≡\_}$ as the equivalence relation.
\begin{code}[hide]
  open import Algebra using (Monoid; CommutativeMonoid)
  open import Relation.Binary using (Setoid)
\end{code}
\begin{code}
  module Exponentiation₁ {c ℓ} (M : Monoid c ℓ) where
    open Monoid M public
      using (ε; _∙_; ∙-cong; ∙-congˡ; ∙-congʳ; identityˡ; identityʳ; assoc)
      using (_≈_; setoid)
      renaming (Carrier to C)
    open Setoid setoid public using (refl; sym)
    open import Relation.Binary.Reasoning.Setoid setoid public
\end{code}
From this point on bases are values of type C, the
binary operation is $∙$, and the identity element is
$\varepsilon$. This leads to a more general inductive definition:
\begin{code}
    _^_ : C → ℕ → C
    x ^ zero = ε
    x ^ suc n = x ∙ x ^ n
\end{code}
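The same generalization can be mirrored in an ordinary programming language. The following Python sketch (an illustration, not part of the formal development) parameterizes the inductive definition by an arbitrary monoid, given as an identity element and an associative operation:

\begin{verbatim}
def monoid_pow(op, identity, x, n):
    """x ^ n in the monoid (C, op, identity), by induction on n."""
    return identity if n == 0 else op(x, monoid_pow(op, identity, x, n - 1))

# The one definition now powers numbers, strings (the free monoid), ...
assert monoid_pow(lambda a, b: a * b, 1, 2, 10) == 1024
assert monoid_pow(lambda a, b: a + b, "", "ab", 3) == "ababab"
\end{verbatim}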
We can update the homomorphism proof as well, keeping in mind that we are now
working over an arbitrary setoid instead of the built-in equality type. This is
reflected in the following new type of $\AgdaFunction{\^{}-homo}$.
% TODO: Define setoid
\begin{code}[hide]
    infixr 8 _^_
\end{code}
\begin{code}
    ^-homo : ∀ x n m → x ^ (n + m) ≈ x ^ n ∙ x ^ m
\end{code}
The base case closely resembles the original proof with one change: adding on
the identity element is only true up to setoid equality. This is
reflected in the combinator used to string the proof steps together.
\begin{code}
    ^-homo x zero m = begin
      x ^ (0 + m)   ≡⟨⟩
      x ^ m         ≈⟨ sym (identityˡ (x ^ m)) ⟩
      ε ∙ x ^ m     ≡⟨⟩
      x ^ 0 ∙ x ^ m ∎
\end{code}
Not all functions are congruent over an arbitrary setoid, so congruence has to be
proven for each specific function. Luckily, by definition, all monoidal binary
operations are congruent. This fact is witnessed by the
proof-generating function $\AgdaFunction{∙-congˡ}$.
\begin{code}
    ^-homo x (suc n) m = begin
      x ^ (suc n + m)     ≡⟨⟩
      x ^ suc (n + m)     ≡⟨⟩
      x ∙ x ^ (n + m)     ≈⟨ ∙-congˡ (^-homo x n m) ⟩
      x ∙ (x ^ n ∙ x ^ m) ≈⟨ sym (assoc x (x ^ n) (x ^ m)) ⟩
      (x ∙ x ^ n) ∙ x ^ m ≡⟨⟩
      (x ^ suc n) ∙ x ^ m ∎
\end{code}
Another core law of exponentiation is the fact that exponentiation distributes
over multiplication, or in our case the binary operator:
$(x \cdot y)^n = x^n \cdot y^n$. Unfortunately this law is false over
non-commutative monoids. For instance the free monoid, commonly called the list
data structure \cite{awodey}, is a counterexample to this law:
\begin{align}
  \label{eqn:dist-counterexample}
       ([1] \, \diamond \, [2])^2
     = [1, 2]^2
     = [1, 2, 1, 2]
  \neq [1, 1, 2, 2]
     = [1, 1] \, \diamond \, [2, 2]
     = [1]^2 \, \diamond \, [2]^2
\end{align}
This means we need to strengthen our assumptions and work with commutative
monoids. As every commutative monoid is a monoid, all our previous work holds
true, and this can be witnessed by passing the correct modules around.
Note that we now have access to the $\AgdaFunction{comm}$ property, a proof that the
binary operator is commutative.
\begin{code}
  module Exponentiation₂ {c ℓ} (M : CommutativeMonoid c ℓ) where
    open CommutativeMonoid M using (monoid; comm) public
    open Exponentiation₁ monoid public
\end{code}
The type and base case for this proof are quite similar to the proof above. Note
that for any monoid $\varepsilon = \varepsilon^2$.
\begin{code}
    ∙-^-dist : ∀ x y n → (x ∙ y) ^ n ≈ x ^ n ∙ y ^ n
    ∙-^-dist x y zero = begin
      (x ∙ y) ^ 0   ≡⟨⟩
      ε             ≈⟨ sym (identityˡ ε) ⟩
      ε ∙ ε         ≡⟨⟩
      x ^ 0 ∙ y ^ 0 ∎
\end{code}
The inductive step contains the meat of this proof. We first apply the inductive
hypothesis, then we reassociate the terms until $y$ and $x \,\,\, \hat{} \,\,\, n$ are siblings.
Only then can we apply the commutativity axiom. Note the repeated uses of
$\AgdaFunction{∙-cong}$ to ignore irrelevant parts of the expression tree. The proof concludes by
reassociating the terms in reverse.
\begin{code}
    ∙-^-dist x y (suc n) = begin
      (x ∙ y) ^ suc n           ≡⟨⟩
      (x ∙ y) ∙ (x ∙ y) ^ n     ≈⟨ ∙-congˡ (∙-^-dist x y n) ⟩
      (x ∙ y) ∙ (x ^ n ∙ y ^ n) ≈⟨ assoc x y (x ^ n ∙ y ^ n) ⟩
      x ∙ (y ∙ (x ^ n ∙ y ^ n)) ≈⟨ ∙-congˡ (sym (assoc y (x ^ n) (y ^ n))) ⟩
      x ∙ ((y ∙ x ^ n) ∙ y ^ n) ≈⟨ ∙-congˡ (∙-congʳ (comm y (x ^ n))) ⟩
      x ∙ ((x ^ n ∙ y) ∙ y ^ n) ≈⟨ ∙-congˡ (assoc (x ^ n) y (y ^ n)) ⟩
      x ∙ (x ^ n ∙ (y ∙ y ^ n)) ≈⟨ sym (assoc x (x ^ n) (y ∙ y ^ n)) ⟩
      (x ∙ x ^ n) ∙ (y ∙ y ^ n) ≡⟨⟩
      x ^ suc n ∙ y ^ suc n     ∎
\end{code}
We end this section by analyzing the run time of this algorithm. The run time
can be represented by the simple recurrence below. Assume that the binary operation
runs in $\BigO{1}$ time\footnote
{In general this assumption is false, but we
  will use this algorithm to exponentiate modular rings.
}.
\begin{align}
  \label{eqn:slow-recurrence}
  \begin{split}
    T(0) &= \BigO{1} \\
    T(n) &= T(n - 1) + \BigO{1}
  \end{split}
\end{align}
This recurrence clearly solves to $T(n) = \BigO{n}$. This linear result obscures
the exponential run time of this algorithm. The key to realizing this is to ask
``What does $n$ represent in that recurrence?'' It represents the full integer, not
the length of the integer's binary representation. This insight shows that an inductively
defined exponentiation algorithm runs exponentially slower, measured in the bit length
of the exponent, than the binary operation itself.
\section{Exponential Speedup}
\label{sec:exponential-speedup}
The solution to this exponential problem can be found by raising a base to
powers of two. In the following equation assume $n$ is a power of two, $n = 2^\ell$:
\begin{align}
  \label{eqn:squaring-even}
  x^n = x^{2^{\, \ell}}
  {}  = x^{2 \cdot 2^{\ell - 1}}
  {}  = \Parens*{x^2}^{2^{\ell - 1}}
  {}  = \Parens*{x^2}^{n / 2}
\end{align}
This equation provides a simple algorithm to exponentiate a base to a power of
two: just square the base and recurse on half the exponent. The run time of this
algorithm is modeled by the recurrence given below.
\begin{align}
  \label{eqn:power-of-two-recurrance}
  \begin{split}
    T(0) &= \BigO{1} \\
    T(n) &= T(n / 2) + \BigO{1}
  \end{split}
\end{align}
This well-known recurrence yields a complexity of $T(n) = \BigO{\Lg{n}}$, an
exponential improvement. Unfortunately this algorithm only works correctly for
powers of two, because equation \ref{eqn:squaring-even} requires $n$ to be
even. The defining feature of powers of two is that they are all even,
disregarding $1$.
So if we can find a parallel equation to equation
\ref{eqn:squaring-even} for odd numbers, we will be able to exponentiate any power
quickly.
\begin{align}
  \label{eqn:squaring-odd}
  x^n = x^{n - 1 + 1} = x x^{n - 1} = x x^{2\Parens*{n - 1} / 2} = x \Parens*{x^2}^{\Parens*{n - 1} / 2}
\end{align}
We can combine these two ideas (\ref{eqn:squaring-even} and \ref{eqn:squaring-odd})
into a single case-based definition.
\begin{align}
  \label{eqn:squaring-def-division}
  \begin{split}
    x^0 &= 1 \\
    x^n &=
    \begin{cases}
      \Parens*{x^2}^{n / 2} & n \text{ is even} \\
      x \Parens*{x^2}^{\Parens*{n - 1} / 2} & n \text{ is odd}
    \end{cases}
  \end{split}
\end{align}
This leads straight into a similar case-based recurrence.
\begin{align}
  \label{eqn:squaring-recurrance}
  \begin{split}
    T(0) &= \BigO{1} \\
    T(n) &=
    \begin{cases}
      T(n / 2) + \BigO{1} & n \text{ is even} \\
      T(n / 2) + 2 \BigO{1} & n \text{ is odd} \\
    \end{cases}
  \end{split}
\end{align}
The cases in this recurrence collapse together as the constant $2$ folds into
the big-O term $\BigO{1}$. This makes the recurrence equal to
\ref{eqn:power-of-two-recurrance}, so the two have equal run time.
\section{A Binary Digression}
\label{sec:a-binary-digression}
We could attempt to translate definition \ref{eqn:squaring-def-division} directly to
\Agda{}. While this is possible, the definition obstructs proof writing, as
checking for parity and integer division are non-trivial constructions in
\Agda{}. Instead we can apply a couple of well-known tricks from the world
of bitwise programming. First, division by two is the same as a bit shift right ($\gg$)
by 1. In the following derivation $n_i$ represents the $i$th bit of $n$ and
$n_{\ell - 1} n_{\ell - 2} \dots n_{1} n_{0}$ represents the bitstring of $n$.
\begin{align}
  \Floor{n / 2}
  {} &= \Floor[\big]{\Parens[\big]{n_{\ell - 1} n_{\ell - 2} \dots n_{1} n_{0}} / 2} \\
  {} &= \Floor[\big]{\Parens[\big]{2^{\ell - 1}n_{\ell - 1} + 2^{\ell - 2} n_{\ell - 2} + \dots + 2^1 n_1 + 2^0 n_0} / 2} \\
  {} &= \Floor[\big]{\Parens[\big]{2^{\ell - 1}n_{\ell - 1}} / 2}
      + \Floor[\big]{\Parens[\big]{2^{\ell - 2} n_{\ell - 2}} / 2}
      + \dots
      + \Floor[\big]{\Parens[\big]{2^1 n_1} / 2}
      + \Floor[\big]{\Parens[\big]{2^0 n_0} / 2} \\
  {} &= 2^{\ell - 2}n_{\ell - 1}
      + 2^{\ell - 3} n_{\ell - 2}
      + \dots
      + 2^0 n_1 \\
  {} &= n_{\ell - 1} n_{\ell - 2} \dots n_{1} = n \gg 1
\end{align}
The term $\Floor[\big]{\Parens[\big]{2^0 n_0} / 2}$ disappears because the numerator is less
than $2$, a quirk of integer division. By a similar analysis, the
parity of an integer can be determined by the first bit of the integer's binary
representation: if the first bit is set then the integer is odd. These two
tricks allow us to rewrite our definition in terms \Agda{} will like.
\begin{align}
  \label{eqn:squaring-def-shift}
  \begin{split}
    x^0 &= 1 \\
    x^n &=
    \begin{cases}
      \Parens*{x^2}^{n \gg 1} & n_0 = 0𝕓 \\
      x \Parens*{x^2}^{n \gg 1} & n_0 = 1𝕓
    \end{cases}
  \end{split}
\end{align}
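Definition \ref{eqn:squaring-def-shift} transcribes directly into a language with machine integers; the following Python sketch (illustrative, not part of the formal development) uses exactly the bitwise tricks above:

\begin{verbatim}
def fast_pow(x, n):
    """x ** n by repeated squaring, inspecting one bit of n per call."""
    if n == 0:
        return 1
    half = fast_pow(x * x, n >> 1)            # (x^2) ^ (n >> 1)
    return half if n & 1 == 0 else x * half   # extra factor of x when odd

assert all(fast_pow(3, n) == 3 ** n for n in range(64))
\end{verbatim}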
\Agda{} does not come with bitwise operator primitives, so we will need to build
our own binary number type. Note that at every level of recursion we only need
to inspect the last element of the bitstring, so we can drop that element.
Thus if we store the binary number as a snoc list we can perform the
required operations in $\BigO{1}$. A snoc list is a purely functional linked
list that supports push/pop operations at the back of the list in
$\BigO{1}$ time \cite{okasaki}. The figure below is a snoc list encoding of the
number $10$.
\begin{align}
  \label{eqn:snoc-binary-10}
  [] :: 1𝕓 :: 0𝕓 :: 1𝕓 :: 0𝕓
\end{align}
This encoding has one major flaw: numbers do not have a unique encoding, as a user
can add arbitrarily many leading zeros. Take for instance the two encodings of
$3$ given below.
\begin{align}
  [] :: 0𝕓 :: 1𝕓 :: 1𝕓 \stackrel{?}{=} [] :: 1𝕓 :: 1𝕓
\end{align}
We can fix this issue by fusing the empty list and the most significant digit
together \cite{donnacha}. This idea generates a simple \Agda{} data type; note
that $\AgdaInductiveConstructor{𝕓1ᵇ}$ is the fused base case.
\begin{code}
  infixl 5 _0ᵇ _1ᵇ
  data 𝔹⁺ : Set where
    𝕓1ᵇ : 𝔹⁺
    _0ᵇ _1ᵇ : 𝔹⁺ → 𝔹⁺
\end{code}
This encoding forbids any leading zeros, yet representing the number $0$
would require exactly one leading zero. We special-case $0$ with its own
constructor.
\begin{code}
  infixr 4 +_
  data 𝔹 : Set where
    𝕓0ᵇ : 𝔹
    +_ : 𝔹⁺ → 𝔹

  three : 𝔹
  three = + 𝕓1ᵇ 1ᵇ
\end{code}
The following code defines a function to reify a binary encoded natural number
into \Agda{}'s primitive naturals. We define a helper function to multiply a number
by $2$. We could have used the term $2 * n$, but the simplifier transforms that
expression into $n + (n + 0)$, which unnecessarily complicates proofs
in the following sections.
\begin{code}
  2* : ℕ → ℕ
  2* n = n + n

  ⟦_⇓⟧⁺ : 𝔹⁺ → ℕ
  ⟦ 𝕓1ᵇ  ⇓⟧⁺ = 1
  ⟦ x 0ᵇ ⇓⟧⁺ = 2* ⟦ x ⇓⟧⁺
  ⟦ x 1ᵇ ⇓⟧⁺ = 1 + 2* ⟦ x ⇓⟧⁺

  ⟦_⇓⟧ : 𝔹 → ℕ
  ⟦ 𝕓0ᵇ ⇓⟧ = 0
  ⟦ + x ⇓⟧ = ⟦ x ⇓⟧⁺

  evaluation-test : ⟦ + 𝕓1ᵇ 0ᵇ 1ᵇ 0ᵇ ⇓⟧ ≡ 10
  evaluation-test = ≡-refl
\end{code}
In order to develop a way to upcast naturals to our binary representation we
need to introduce a formal specification of integer division. To make use of
integer division we usually require both the quotient
and the remainder, so we create a type that bundles these results together. This type
also contains a proof of correctness of the division algorithm along with a
proof that the remainder is less than the divisor.
\begin{code}
  record Euclidean (n : ℕ) (m : ℕ) : Set where
    constructor Euclidean✓
    field
      q : ℕ
      r : ℕ
      division : n ≡ r + m * q
      r<m : r < m
\end{code}
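For comparison, here is a Python analogue of this bundle (a sketch; the record's two proof fields become runtime assertions):

\begin{verbatim}
def euclidean(n, m):
    """Quotient and remainder of n / m, with the record's two properties
    checked at runtime instead of proven."""
    assert m != 0
    q, r = divmod(n, m)
    assert n == r + m * q   # the `division` field
    assert r < m            # the `r<m` field
    return q, r

assert euclidean(10, 2) == (5, 0)
assert euclidean(11, 2) == (5, 1)
\end{verbatim}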
The code above is just a type; we also require a function that produces the type
for a given dividend and divisor. We must ensure that the divisor is not equal
to $0$. To do this we introduce two new concepts: a boolean function
$\AgdaFunction{\_≟\_}$ which returns true if the two inputs are equal, and
a type which is only inhabited if a boolean function returns
$\AgdaDatatype{false}$. These two concepts combine to require that any user of
$\AgdaFunction{\_div\_}$ proves that the divisor is not equal to $0$. The
actual implementation of this function is omitted, as it makes use of complicated
primitive functions provided by \Agda{}.
\begin{code}[hide]
  open import Data.Nat.DivMod using (_/_; _%_; m≡m%n+[m/n]*n)
  open import Relation.Binary.PropositionalEquality.TrustMe using (trustMe)
\end{code}
\begin{code}
  _div_ : ∀ (n m : ℕ) {≢0 : False (m ≟ 0)} → Euclidean n m
\end{code}
\begin{code}[hide]
  n div (suc m) = Euclidean✓ (n / suc m) (n % suc m) div-proof (n%m<m n (suc m))
    where
    div-proof : n ≡ n % suc m + suc m * (n / suc m)
    div-proof rewrite *-comm (suc m) (n / suc m) = m≡m%n+[m/n]*n n m
\end{code}
Now we can write a function that reifies non-zero naturals into our binary
representation. We begin by dividing the input by $2$. Note that we can
determine the parity of the input by observing the remainder after dividing by
$2$. In the case where the remainder is $0$ we know the input is even, so we
recurse, adding a $\AgdaInductiveConstructor{0ᵇ}$ to the result. \Agda{} infers
that the quotient must be greater than $0$, as the input is greater than $0$.
When the remainder is equal to $1$ the quotient could be $0$, so we perform a
case analysis. If the quotient is $0$ we introduce our base case of
$\AgdaInductiveConstructor{𝕓1ᵇ}$; otherwise we recurse and add
$\AgdaInductiveConstructor{1ᵇ}$ to the result, as the input was odd.
\begin{code}[hide]
  {-# TERMINATING #-}
\end{code}
\begin{code}
  ⟦_⇑⟧⁺ : ∀ (n : ℕ) {≢0 : False (n ≟ 0)} → 𝔹⁺
  ⟦ suc n ⇑⟧⁺ with suc n div 2
  ... | Euclidean✓ (suc q) 0 _ _ = ⟦ suc q ⇑⟧⁺ 0ᵇ
  ... | Euclidean✓ zero    1 _ _ = 𝕓1ᵇ
  ... | Euclidean✓ (suc q) 1 _ _ = ⟦ suc q ⇑⟧⁺ 1ᵇ

  ⟦_⇑⟧ : ℕ → 𝔹
  ⟦ zero ⇑⟧ = 𝕓0ᵇ
  ⟦ suc n ⇑⟧ = + ⟦ suc n ⇑⟧⁺
\end{code}
Lastly, we need a proof that $\AgdaFunction{⟦\_⇓⟧}$ and $\AgdaFunction{⟦\_⇑⟧}$
form an isomorphism. After applying the inductive hypothesis the proof
can be offloaded to an automated ring solver. For this reason we exclude
the body of the proof.
\begin{code}
  ℕ→𝔹→ℕ : ∀ n → ⟦ ⟦ n ⇑⟧ ⇓⟧ ≡ n
\end{code}
\begin{code}[hide]
  ℕ→𝔹→ℕ n = trustMe
\end{code}
\section{Mutually Recursive Proofs}
\label{sec:mutally-recursive-proofs}
In this section we redefine our exponential in line with
\ref{eqn:squaring-def-shift}. Then we prove that the fast exponential is
equivalent to the inductively defined exponential up to setoid equality. In
order to do this we need to rename the old exponential and all its associated
proofs.
To accomplish this we tack an (i)nductive onto all the old functions and proofs.
\begin{code}
  module Exponentiation₃ {c ℓ} (M : CommutativeMonoid c ℓ) where
    open Exponentiation₂ M
      renaming (_^_ to _^ⁱ_; ^-homo to ^ⁱ-homo; ∙-^-dist to ∙-^ⁱ-dist)
\end{code}
With that renaming out of the way, the definitions follow directly from
\ref{eqn:squaring-def-shift}.
\begin{code}
    _^ᵇ⁺_ : C → 𝔹⁺ → C
    x ^ᵇ⁺ 𝕓1ᵇ = x
    x ^ᵇ⁺ (b 0ᵇ) = (x ∙ x) ^ᵇ⁺ b
    x ^ᵇ⁺ (b 1ᵇ) = x ∙ (x ∙ x) ^ᵇ⁺ b

    _^ᵇ_ : C → 𝔹 → C
    x ^ᵇ 𝕓0ᵇ = ε
    x ^ᵇ (+ b) = x ^ᵇ⁺ b

    _^_ : C → ℕ → C
    x ^ n = x ^ᵇ ⟦ n ⇑⟧
\end{code}
As $\AgdaFunction{\_\^{}ᵇ\_}$ is defined by recursion on a value of type
$\AgdaDatatype{𝔹}$, any proof about $\AgdaFunction{\_\^{}ᵇ\_}$ will have to case
on an input of type $\AgdaDatatype{𝔹}$ in order for the proof term to simplify.
So we create a lemma $\AgdaFunction{loop}$ that is true for any value of type
$\AgdaDatatype{𝔹}$, then we instantiate that lemma at $⟦ \, n ⇑⟧ :
\AgdaDatatype{𝔹}$. This will leave a term of the form $⟦ \, ⟦ \, n ⇑⟧ ⇓⟧$, which is
isomorphic to $n$.
\begin{code}
    x^n≈x^ⁱn : ∀ x n → x ^ n ≈ x ^ⁱ n
    x^n≈x^ⁱn x n = begin
      x ^ n ≈⟨ loop x ⟦ n ⇑⟧ ⟩
      x ^ⁱ ⟦ ⟦ n ⇑⟧ ⇓⟧ ≡⟨ ≡-cong (λ t → x ^ⁱ t) (ℕ→𝔹→ℕ n) ⟩
      x ^ⁱ n ∎
\end{code}
\begin{code}[hide]
      where
\end{code}
This lemma only proves the desired result for values of type $\AgdaDatatype{𝔹}$;
we require another similar lemma (produced below) for values of type $\AgdaDatatype{𝔹⁺}$.
\begin{code}
        loop : ∀ x b → x ^ᵇ b ≈ x ^ⁱ ⟦ b ⇓⟧
        loop x 𝕓0ᵇ = refl
        loop x (+ b) = begin
          x ^ᵇ (+ b)    ≡⟨⟩
          x ^ᵇ⁺ b       ≈⟨ loop⁺ x b ⟩
          x ^ⁱ ⟦ b ⇓⟧⁺  ≡⟨⟩
          x ^ⁱ ⟦ + b ⇓⟧ ∎
\end{code}
\begin{code}[hide]
          where
\end{code}
Cases two and three simplify to extremely similar expressions, so we factor the
common pattern out into the function $\AgdaFunction{even}$. This poses a problem
though: $\AgdaFunction{even}$ calls $\AgdaFunction{loop⁺}$ and
$\AgdaFunction{loop⁺}$ calls $\AgdaFunction{even}$. This pattern is called
mutual recursion, and \Agda{} supports it nicely.
To signal to \\Agda{} that two\nfunctions are mutually recursive you just declare both their type signatures\nbefore providing a body to either function.\n\\begin{code}\n          even : \u2200 x b \u2192 (x \u2219 x) ^\u1d47\u207a b \u2248 x ^\u2071 2* \u27e6 b \u21d3\u27e7\u207a\n          loop\u207a : \u2200 x b \u2192 x ^\u1d47\u207a b \u2248 x ^\u2071 \u27e6 b \u21d3\u27e7\u207a\n\\end{code}\nThe idea behind $\\AgdaFunction{even}$ is quite simple, we just apply the\ninductive hypothesis and then we can work with proofs about the inductively\ndefined exponential.\n\\begin{code}\n          even x b = begin\n            (x \u2219 x) ^\u1d47\u207a b               \u2248\u27e8 loop\u207a (x \u2219 x) b \u27e9\n            (x \u2219 x) ^\u2071 \u27e6 b \u21d3\u27e7\u207a          \u2248\u27e8 \u2219-^\u2071-dist x x \u27e6 b \u21d3\u27e7\u207a  \u27e9\n            x ^\u2071 \u27e6 b \u21d3\u27e7\u207a \u2219 x ^\u2071 \u27e6 b \u21d3\u27e7\u207a \u2248\u27e8 sym (^\u2071-homo x \u27e6 b \u21d3\u27e7\u207a \u27e6 b \u21d3\u27e7\u207a) \u27e9\n            x ^\u2071 (\u27e6 b \u21d3\u27e7\u207a + \u27e6 b \u21d3\u27e7\u207a)    \u2261\u27e8\u27e9\n            x ^\u2071 2* \u27e6 b \u21d3\u27e7\u207a            \u220e\n\\end{code}\nThe simplifier can deduce most $\\AgdaFunction{loop\u207a}$ and then we apply\n$\\AgdaFunction{even}$. The only difference in the even and odd cases is a\nsingle $x$ on the outside of the proof.\n\\begin{code}\n          loop\u207a x \ud835\udd531\u1d47 = begin\n            x ^\u1d47\u207a \ud835\udd531\u1d47      \u2261\u27e8\u27e9\n            x              \u2248\u27e8 sym (identity\u02b3 x) \u27e9\n            x \u2219 \u03b5          \u2261\u27e8\u27e9\n            x \u2219 x ^\u2071 0     \u2261\u27e8\u27e9\n            x ^\u2071 1         \u2261\u27e8\u27e9\n            x ^\u2071 \u27e6 \ud835\udd531\u1d47 \u21d3\u27e7\u207a \u220e\n          loop\u207a x (b 0\u1d47) = begin\n            x ^\u1d47\u207a (b 0\u1d47)    \u2261\u27e8\u27e9\n            (x \u2219 x) ^\u1d47\u207a b   \u2248\u27e8 even x b \u27e9\n            x ^\u2071 2* \u27e6 b \u21d3\u27e7\u207a \u2261\u27e8\u27e9\n            x ^\u2071 \u27e6 b 0\u1d47 \u21d3\u27e7\u207a \u220e\n          loop\u207a x (b 1\u1d47) = begin\n            x ^\u1d47\u207a (b 1\u1d47)          \u2261\u27e8\u27e9\n            x \u2219 (x \u2219 x) ^\u1d47\u207a b     \u2248\u27e8 \u2219-cong\u02e1 (even x b) \u27e9\n            x \u2219 x ^\u2071 2* \u27e6 b \u21d3\u27e7\u207a   \u2261\u27e8\u27e9\n            x ^\u2071 (1 + 2* \u27e6 b \u21d3\u27e7\u207a) \u2261\u27e8\u27e9\n            x ^\u2071 \u27e6 b 1\u1d47 \u21d3\u27e7\u207a       \u220e\n\\end{code}\nBeyond the correctness of the algorithm, this proof has many uses. 
If we ever need to prove a
property of exponentials we can prove that fact with the inductive definition.
Then we can apply this proof to get a version of that property that works on
fast exponentiation for free.

\end{document}
{"text": "\n\\subsection{Bundle projection}\n\nThis is a projection from any point on any of the fibres, to the underlying base manifold.\n\n", "meta": {"hexsha": "84ad48167d52796c95bb84ba94d62e265b29fe97", "size": 125, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/geometry/manifoldsTopological/07-02-projection.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/geometry/manifoldsTopological/07-02-projection.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/geometry/manifoldsTopological/07-02-projection.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.8333333333, "max_line_length": 90, "alphanum_fraction": 0.784, "num_tokens": 27, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8688267830311354, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5628244417566343}}
{"text": "\\chapter{On-policy control with approximation}\n\n\\section{Summary}\n\n\\subsection{Episodic Semi-gradient control}\n\nThe general formulation of episodic semi-gradient control equation~\\ref{eq:general formula on-policy control with approximation} can be combined with one-step SARSA leading to equation~\\ref{eq:episodic semi-gradient one step sarsa}. The algorithm itself will still need to use a $\\epsilon$ greedy policy to allow exploration.\n\n\\begin{equation}\n\tw_{t+1} = w_t + \\alpha \\big[ U_t - \\hat{q}(S_t, A_t, W_t) \\big] \n\\label{eq:general formula on-policy control with approximation}\n\\end{equation}\n\n\\begin{equation}\nw_{t+1} = w_t + \\alpha \\big[R_{t+1} + \\gamma\\hat{q}(S_{t+1}, A_{t+1},W_t) - \\hat{q}(S_t, A_t, W_t) \\big] \\nabla \\hat{q}(S_t, A_t, W_t)\n\\label{eq:episodic semi-gradient one step sarsa}\n\\end{equation}\n\n\\subsection{Semi-gradient n-step sarsa}\nThe n-step SARSA leads to update of equation~\\ref{eq:semi-gradient n-step sarsa update}.\n\n\\begin{equation}\n\\begin{split}\nG_{t:t+n} & = R_{t+1} + \\gamma R_{t+2} + ... + \\gamma^{n-1} R_{t+n} \\\\\nw_{t+1} & = w_t + \\alpha \\big[G_{t:t+n} + \\gamma \\hat{q}(s_{t}, A_{t}, W_{t+n-1}) \\big] \\nabla \\hat{q}(S_{t}, A_{t}, W_{t+n-1})\n\\end{split}\n\\label{eq:semi-gradient n-step sarsa update}\n\\end{equation}\n\n\\subsection{Average Reward: A new problem setting for continuing tasks}\nThe previous definitions of reward work great with episodic tasks, but turn out to be problematic with continuous tasks. Equation~\\ref{eq:average reward definition}\nintroduces the \\textbf{reward rate}, which represents the average amount you will have in a certain state. \n\n\\begin{equation}\n\\begin{split}\nr(\\pi) & = \\lim\\limits_{h \\rightarrow \\infty} \\frac{1}{h}\\sum_{t=1}^{h} \\EX[R_t | S_0, A_{0:t-1} \\sim \\pi] \\\\\n& = \\lim_{t \\rightarrow \\infty} \\EX[R_t | S_0, A_{0:t-1} \\sim \\pi] \\\\\n& = \\sum_s \\mu_{\\pi}(s)\\sum_a \\pi(a|s) \\sum_{s', r}p(s', r| s, a)\n\\end{split}\n\\label{eq:average reward definition}\n\\end{equation}\n\nThe steady state distribution $\\mu_{\\pi}$, is a special case for which equation~\\ref{eq:the steady state distribution} holds. The MDP must be \\textbf{ergodic}, eg. the starting state and any early decisions made only have a short term effect. Otherwise the limit of equation~\\ref{eq:average reward definition} is not guaranteed to exist. The policies that have a maximum $r(\\pi)$ value are called the optimal policies.\n\n\\begin{equation}\n\\sum_s \\mu_{\\pi}(s)\\sum_a \\pi(a|s) p(s'| s, a)=\\mu(s')\n\\label{eq:the steady state distribution}\n\\end{equation}\n\n\\begin{equation}\n\\mu(s) = \\lim_{t \\rightarrow \\infty} \\Pr\\{S_t = s | A_{0:t-1} \\sim \\pi\\} \n\\end{equation}\n\n\\begin{equation}\nG_t = R_{t+1} - r(\\pi) + R_{t+2} - r(\\pi) + R_{t+3} - r(\\pi) + ...\n\\label{eq:differential return}\n\\end{equation}\n\nThe differential return can be used to setup the values functions (equation~\\ref{eq:difference value function update}). 
Returns are then measured relative to the average reward, giving the \textbf{differential return}:

\begin{equation}
G_t = R_{t+1} - r(\pi) + R_{t+2} - r(\pi) + R_{t+3} - r(\pi) + ...
\label{eq:differential return}
\end{equation}

The differential return can be used to set up the value functions (equation~\ref{eq:differential value functions}).

\begin{equation}
\begin{split}
v_{\pi}(s) & = \sum_a \pi(a|s) \sum_{s',r}p(s',r|s, a)\big[ r - r(\pi) + v_{\pi}(s') \big] \\
q_{\pi}(s, a) & = \sum_{r, s'} p(s', r | s, a)\Big[ r - r(\pi) + \sum_{a'}\pi(a' | s')q_{\pi}(s', a') \Big] \\
v_*(s) & = \max_a \sum_{r, s'} p(s', r | s, a)\Big[ r - \max_{\pi}r(\pi) + v_*(s') \Big] \\
q_*(s, a) & = \sum_{r, s'} p(s', r | s, a)\Big[ r - \max_{\pi}r(\pi) + \max_{a'}q_{*}(s', a') \Big] \\
\end{split}
\label{eq:differential value functions}
\end{equation}

The TD errors then become equation~\ref{eq:TD errors differential form}, and the weight update of semi-gradient Sarsa becomes equation~\ref{eq:semi-gradient Sarsa differential form update rule}.

\begin{equation}
\begin{split}
\delta_t & = R_{t+1} - \bar{R}_t + \hat{v}(S_{t+1}, w_t) - \hat{v}(S_t, w_t) \\
\delta_t & = R_{t+1} - \bar{R}_t + \hat{q}(S_{t+1}, A_{t+1}, w_t) - \hat{q}(S_t, A_t, w_t) \\
\end{split}
\label{eq:TD errors differential form}
\end{equation}

\begin{equation}
w_{t+1} = w_t + \alpha \delta_t \nabla \hat{q}(S_t, A_t, w_t)
\label{eq:semi-gradient Sarsa differential form update rule}
\end{equation}
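As a concrete illustration, here is a minimal Python sketch of one-step differential semi-gradient Sarsa with linear function approximation, following the TD error and update rule above; \texttt{env}, \texttt{policy} and \texttt{features} are hypothetical placeholders to be supplied:

\begin{verbatim}
import numpy as np

def differential_sarsa(env, policy, features, dim,
                       alpha=0.01, beta=0.01, steps=100_000):
    """One-step differential semi-gradient Sarsa with linear q-hat."""
    w = np.zeros(dim)   # weights of the linear value function
    r_bar = 0.0         # running estimate of the reward rate
    s = env.reset()
    a = policy(s, w)
    for _ in range(steps):
        s_next, r = env.step(s, a)
        a_next = policy(s_next, w)
        # differential TD error (second line of the TD-error equation)
        delta = (r - r_bar
                 + features(s_next, a_next) @ w
                 - features(s, a) @ w)
        r_bar += beta * delta                 # average-reward estimate
        w += alpha * delta * features(s, a)   # semi-gradient weight update
        s, a = s_next, a_next
    return w, r_bar
\end{verbatim}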
+ R_{t+n} + \\bar{R}_{t+n-1} + \\hat{q}(S_{t+n}, A_{t+n}, W_{t+n-1}) \\\\\n\\delta_t & = G_{t:t+n} - \\hat{q}(S_t, A_t, W)\n\\end{split}\n\\label{eq:n-step differential TD-error}\n\\end{equation}\n\n\\begin{algorithm}\n\\begin{algorithmic}\n\t\\State $W, S_0 \\gets \\text{Init}$\n\t\\State $\\pi(s)=\\epsilon - \\text{greedy}$\n\t\\Loop \\space \\textbf{for each step t in the episode}\n\t\\State $A_t = \\pi(S_t)$\n\t\\State $S_{t+1}, R_t = system(S_t, A_t)$\n\t\\State $\\tau = t -n + 1$ ($\\tau$ is the time whose estimate is being updated)\n\t\\If {$\\tau \\ge 0$}\n\t\t\\State $\\delta \\gets \\sum_{i=\\tau+1}^{\\tau + n} (R_i - \\bar{R}) + \\hat{q}(S_{\\tau + n}, A_{r+n}, W) - \\hat{q}(S_{\\tau}, A_{\\tau}, W)$\n\t\t\\State $\\bar{R} \\gets \\bar{R} + \\beta \\delta$\n\t\t\\State $w \\gets \\alpha \\delta \\nabla \\hat{q}(S_{\\tau}, A_{\\tau}, w)$\n\t\\EndIf\n\t\\EndLoop\n\\end{algorithmic}\n\\label{alg:differential semi-gradient n-step sarsa}\n\\caption{Differential semi-gradient n-step sarsa}\\end{algorithm}\n\\section{Exercises}\n\n\\subsection{Exercise 10.1 page 248}\n\\textbf{We have no explicitly considered or given pseudo code for any Monte-Carlo methods in this chapter. what would they be like? Why is it reasonable not to give pseudo code for them? How would they perform on the Mountain Car Task.}\n\nThe monte carlo algorithm replaces $U_t$ with the actual return of an episode $G_t$ in equation~\\ref{eq:general formula on-policy control with approximation}. This is by far the simplest way, and the algorithm is trivial.\n\n\\subsection{Exercise 10.2 page 248}\n\\textbf{Give pseudo code for semi-gradient one-step Expected Sarsa for control.}\n\\begin{algorithmic}\n\\State $W \\gets \\text{Init}$\n\\State $\\pi(s)=\\epsilon - \\text{greedy}$\n\\Loop \\space \\textbf{for each episode}\n\t\\State $S, A \\gets S_{init}, \\pi(S_{init})$\n\t\\Loop \\space \\textbf{for each step in the episode}\n\t\t\\State Take action $A$, observe $R$ and $S'$\n\t\t\\State $A_{exp} \\gets \\EX_{\\pi}[A' | S']$\n\t\t\\State $W \\gets W + \\alpha \\big[R + \\gamma\\hat{q}(S', A_{exp}, W) - \\hat{q}(S, A, W) \\big] \\nabla \\hat{q}(S, A, W)$\t\t\n\t\t\\State $S, A \\gets S', \\pi(S')$\n\t\\EndLoop\n\\EndLoop\n\\label{alg:approximated one step exptected SARSA}\n\\end{algorithmic}\n\n\\subsection{Exercise 10.3 page 248}\n\\textbf{Why do the results show in Figure 10.4 (book page 248) have higher standard errors at large n than at small n?}\nThe standard error $SE=\\frac{\\sigma}{\\sqrt{n}}$ increase as $n$ goes up. As the policy considers more actions. 
\n\n\\subsection{Exercise 10.4 page 250}\n\\textbf{Give pseudocode for a differential version of semi-gradient Q-learning}\n\n\\begin{algorithmic}\n\\State $W \\gets \\text{Init}$\n\\State $\\pi(s)=\\epsilon - \\text{greedy}$\n\\Loop \\space \\textbf{for each episode}\n\\State $S \\gets S_{init}$\n\\State $\\bar{R} \\gets 0$\n\\Loop \\space \\textbf{for each step in the episode}\n\\State $A \\gets \\pi(S)$\n\\State Take action $A$, observe $R$ and $S'$\n\\State $\\delta \\gets R -\\bar{R} + \\max_a\\hat{q}(S', a, W) - \\hat{q}(S, A, W)$\n\\State $\\bar{R} \\gets \\bar{R} + \\beta \\delta$\n\\State $W \\gets W + \\alpha \\delta \\nabla \\hat{q}(S, A, W)$\n\\State $S \\gets S'$\n\\EndLoop\n\n\\EndLoop\n\\end{algorithmic}\n\n\\subsection{Exercise 10.5 page 250}\n \\textbf{What equations are needed (beyond equation~10.10 from the book page 250) to specify the differential version of TD(0)?}\n \n Equation 10.10 from the book, $\\delta_t = R_{t+1} - \\bar{R}_t + \\hat{v}(S_{t+1}, w_t) - \\hat{v}(S_t, w_t) $, can be combined with the state-value update rule $w_{t+1} = w_t + \\alpha \\delta_t \\nabla \\hat{v}(S_t, w_t)$ and the average-reward update $\\bar{R}_{t+1} = \\bar{R}_t + \\beta \\delta_t$ to form the full differential TD(0) algorithm.\n\n\\subsection{Exercise 10.6 page 251}\n\n\\begin{itemize}\n\t\\item The MDP produces rewards +1,0,+1,0,... for any policy.\n\t\\item Ergodicity is violated: there is no stationary distribution $\\mu$. \n\t\\item The average reward is still well defined, what is it? \\textbf{The total reward is roughly 0.5 times the number of steps taken, so the average reward is 0.5}.\n\t\\item As the partial sums of $R_{t+1} - r(\\pi)$ keep oscillating, the limit defining the value function is not well defined. An alternative value function (equation~\\ref{eq: ex10.6 alternative value function}) can be used. \n\t\\item A and B are the two states of this MDP, with +1,0 rewards as before, but from A the first reward is 1 while from B it is 0. What are the differential values of states A and B?\n\\end{itemize}\n\n\\begin{equation}\nv_{\\pi}(s) = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty} \\sum_{t=0}^h \\gamma^t \\big( \\EX[R_{t+1} | S_0=s] - r(\\pi) \\big)\n\\label{eq: ex10.6 alternative value function}\n\\end{equation}\n\nIf $A = [1, 0, 1, 0, 1, 0, ... ]$ then $v(A)=(1-0.5)+(0-0.5)\\gamma+(1-0.5)\\gamma^2+...$; for $\\gamma \\rightarrow 1$ this becomes $v=0.5 - 0.5 + 0.5 - 0.5 + ...$, which is the geometric series of equation~\\ref{eq:ex 10.6 geometric series} with $a = 0.5$ and $r = -1$, resulting in $v(A)=\\frac{0.5}{1-(-1)}=\\frac{1}{4}$.\n\n\\begin{equation}\n\\sum_{k=0}^\\infty ar^k = \\frac{a}{1-r} \n\\label{eq:ex 10.6 geometric series}\n\\end{equation}\n\nIf $B = [0, 1, 0, 1, 0, 1, ... ]$ and $\\gamma \\rightarrow 1$ then $V(B) = (0-0.5) + (1-0.5) + (0-0.5) + ... = -0.5 + 0.5 - 0.5 + 0.5 - ...$; notice that $V(B) = -V(A)$, so $V(B) = -\\frac{1}{4}$.\n\n\\subsection{Exercise 10.7 page 251}\nThe average reward is $r(\\pi) = 1/3$.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}\n\t\n\t\\node at (2, 0) (a) {A};\n\t\\node at (4, -3) (b) {B};\n\t\\node at (0, -3) (c) {C};\n\t\n\t\\draw [->, auto, bend left] (a) to node {+0} (b);\n\t\\draw [->, auto, bend left] (b) to node {+0} (c);\n\t\\draw [->, auto, bend left] (c) to node {+1} (a);\n\t\\end{tikzpicture}\n\t\\caption{MDP ex 10.7}\n\t\\label{fig:mdp ex 10.7}\n\\end{figure}\n\nFor $\\gamma \\rightarrow 1$ we get the following series: $v(A) = -\\frac{1}{3} - \\frac{1}{3} + \\frac{2}{3} - \\frac{1}{3} - \\frac{1}{3} + \\frac{2}{3} + ...$. We could split it up into 3 series, but notice that if we group each $-\\frac{1}{3} - \\frac{1}{3}$ into $-\\frac{2}{3}$, it becomes one geometric series $-\\frac{2}{3} + \\frac{2}{3} - \\frac{2}{3} + ...$ with $a=-\\frac{2}{3}$ and $r=-1$. So $v(A)=\\frac{-2/3}{1-(-1)}=-\\frac{1}{3}$.\n\n\n\\begin{equation}\n\\begin{split}\nv_{\\pi}(B) & = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty} \\sum_{t=0}^h \\gamma^t \\big( \\EX[R_{t+1} | S_0=B] - r(\\pi) \\big)\\\\\n& = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty} -\\sum_{t=0}^h \\gamma^{3t} \\frac{1}{3} + \\sum_{t=0}^h \\gamma^{3t+1} \\frac{2}{3} -\\sum_{t=0}^h \\gamma^{3t+2} \\frac{1}{3} \\\\\n& = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty}  \\sum_{t=0}^h \\gamma^{3t} \\Big( -\\frac{1}{3} + \\frac{2}{3}\\gamma - \\frac{1}{3}\\gamma^2 \\Big)\\\\\n& =  0\n\\end{split}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nv_{\\pi}(C) & = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty} \\sum_{t=0}^h \\gamma^t \\big( \\EX[R_{t+1} | S_0=C] - r(\\pi) \\big)\\\\\n& = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty}\\sum_{t=0}^h \\gamma^{3t} \\frac{2}{3} - \\sum_{t=0}^h \\gamma^{3t+1} \\frac{1}{3} -\\sum_{t=0}^h \\gamma^{3t+2} \\frac{1}{3} \\\\\n& = \\lim_{\\gamma \\rightarrow 1} \\lim_{h \\rightarrow \\infty}  \\sum_{t=0}^h \\gamma^{3t} \\Big( \\frac{2}{3} - \\frac{1}{3}\\gamma - \\frac{1}{3}\\gamma^2 \\Big)\\\\\n& = \\frac{1}{3}\n\\end{split}\n\\end{equation}\n\nSo: $V(A) = -1/3$, $V(B)=0$ and $V(C)=1/3$.
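\n\nAs a sanity check, the following standalone Python sketch evaluates the alternative value function numerically with $\\gamma$ close to 1, and also verifies that the differential TD errors of the next exercise vanish; it is an illustration, not code from the book.\n\n\\begin{verbatim}\nr_bar, gamma, T = 1 / 3, 0.999, 30_000\ncycles = {'A': [0, 0, 1], 'B': [0, 1, 0], 'C': [1, 0, 0]}  # reward cycles\nv = {s: sum(gamma**t * (rs[t % 3] - r_bar) for t in range(T))\n     for s, rs in cycles.items()}\nprint({s: round(x, 3) for s, x in v.items()})\n# ~ {'A': -0.333, 'B': 0.0, 'C': 0.334}\nfor s, s2, R in [('A', 'B', 0), ('B', 'C', 0), ('C', 'A', 1)]:\n    print(s, '->', s2, round(R - r_bar + v[s2] - v[s], 3))  # all ~ 0.0\n\\end{verbatim}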
\n\n\\subsection{Exercise 10.8 page 251}\n\n$\\bar{R}$ is in steady state and fixed at $\\frac{1}{3}$.\n\n\\begin{table}\n\\begin{center}\n\t\\begin{tabular}{c | c}\n\t\tTransition & $\\delta=R_{t+1} - \\bar{R}$ \\\\\n\t\t\\hline\n\t\tA, B & $0 -\\frac{1}{3}=-\\frac{1}{3}$ \\\\\n\t\tB, C & $0 -\\frac{1}{3}=-\\frac{1}{3}$ \\\\\n\t\tC, A & $1 -\\frac{1}{3}= \\frac{2}{3}$ \\\\\n\t\\end{tabular}\t\n\\end{center}\n\\caption{$\\delta$ using a simple error}\n\\end{table}\n\nUsing equation $10.10$ from the book page 250: $\\delta = R_{t+1} - \\bar{R} + \\hat{v}(S_{t+1}, W) - \\hat{v}(S_t, W)$.\n\n\\begin{table}[H]\n\\begin{center}\n\t\\begin{tabular}{c | c}\n\t\t$X$ & $V(X)$ \\\\\n\t\t\\hline\n\t\tA & $-\\frac{1}{3}$ \\\\\n\t\tB & $0$ \\\\\n\t\tC & $ \\frac{1}{3}$ \\\\\n\t\\end{tabular}\n\\end{center}\n\\label{tab:value function evaluated in previous exercise}\t\n\\caption{value function calculated in the previous exercise}\n\\end{table}\n\n\\begin{table}[H]\n\\begin{center}\n\t\\begin{tabular}{c | c}\n\t\tTransition & $\\delta = R_{t+1} - \\bar{R} + \\hat{v}(S_{t+1}, W) - \\hat{v}(S_t, W)$ \\\\\n\t\t\\hline\n\t\tA, B & $0 - \\frac{1}{3} + 0 + \\frac{1}{3} = 0$ \\\\\n\t\tB, C & $0 - \\frac{1}{3} + \\frac{1}{3} - 0 = 0$ \\\\\n\t\tC, A & $1 - \\frac{1}{3} - \\frac{1}{3} - \\frac{1}{3}=0$ \\\\\n\t\\end{tabular}\n\t\\caption{$\\delta$ using the differential value function update}\n\\end{center}\n\\end{table}\n\nThe update using the $\\delta$ that includes the value function is stable, as the value function estimate is in steady state.\n\n\\subsection{Exercise 10.9 page 255}\nPage 35 of the book describes the \\textbf{exponential recency-weighted average without initial bias}, equation~\\ref{eq:ex 10.9 Exponential recency-weighted average without initial bias}.
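\nSince $\\bar{o}_0 \\doteq 0$, the first update uses $\\beta_1 = \\alpha/\\bar{o}_1 = \\alpha/\\alpha = 1$, so the initial value of $\\bar{R}$ is completely overwritten. In code, the trick amounts to keeping one extra running quantity; a minimal sketch with placeholder names:\n\n\\begin{verbatim}\nclass UnbiasedStepSize:\n    def __init__(self, alpha):\n        self.alpha, self.o_bar = alpha, 0.0\n\n    def next(self):\n        # o_n = o_{n-1} + alpha * (1 - o_{n-1}), starting from o_0 = 0\n        self.o_bar += self.alpha * (1.0 - self.o_bar)\n        return self.alpha / self.o_bar  # first call returns exactly 1.0\n\\end{verbatim}\n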
Using this $\\beta$ instead of the static one will get rid of the bias.\n\n\\begin{equation}\n\\begin{split}\n\\beta & = \\frac{\\alpha}{\\bar{o}_n} \\\\\n\\bar{o}_n & = \\bar{o}_{n-1} + \\alpha (1-\\bar{o}_{n-1})\n\\end{split}\n\\label{eq:ex 10.9 Exponential recency-weighted average without initial bias}\n\\end{equation}", "meta": {"hexsha": "aebc29c1d25fe0dc4c32ae6d11e8539950ec8fb5", "size": 14158, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RL/notes/TeX_files/chapter10.tex", "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "RL/notes/TeX_files/chapter10.tex", "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RL/notes/TeX_files/chapter10.tex", "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.3511705686, "max_line_length": 418, "alphanum_fraction": 0.6543297076, "num_tokens": 5306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911057, "lm_q2_score": 0.7371581741774411, "lm_q1q2_score": 0.5628082892216323}}
{"text": "\n\\section{The \\removecopyiii algorithm (final contract)}\n\\label{sec:removecopyiii}\n\nIn this section we extend the contracts of \\specref{removecopy}\nand \\specref{removecopyii} by introducing a \nlogic function, which describes the relationship between the elements of\ninput range \\inl{a[0..n-1]} and the output range \\inl{b[0..\\\\result-1]}.\nNote that we have shown in the previous section that\n\\inl{\\\\result} equals \\inl{CountNotEqual(a, n, v)}.\n\n\\subsection{A closer look on the properties of \\removecopy}\n\\label{sec:formal-view-remove}\n\nFigure~\\ref{fig:removecopy-trip} shows a modified version of the\nFigure~\\ref{fig:removecopy}. We left out the indices of values that were not copied into the\ntarget array. Furthermore we have added a dashed arrow which points to the\nindex that corresponds to the \\emph{one past the end} location of the input and\noutput range.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=0.70\\textwidth]{Figures/remove_copy_partition.pdf}\n\\end{center}\n\\caption{\\label{fig:removecopy-trip} Partitioning the input of \\removecopy}\n\\end{figure}\n\\FloatBarrier\n\nThese arrows between the indices of the array~\\inl{b} and array~\\inl{a} define\nthe following sequence~$p$ of seven indices. The index of the \\emph{one past the end} is underlined. \n$p = (1, 2, 5, 7, 8, 10, \\underline{11})$\n\n\n%\\clearpage \n\nMore generally, we refer to the sequence~$p$ as \\emph{partitioning sequence} of \\removecopy for the array \\inl{a[0..n-1]}.\nFor the \\textbf{length of a partitioning sequence} $m$ we get the following \\textbf{strictly monotone increasing} sequence:\n\\begin{align}\n  \\label{eq:remove-monotone}\n  0 &\\leq  p_0 < ... < p_{m} = n \\\\\n  \\intertext{and the open index intervals}\n  %\\label{eq:remove-interval}\n  \\nonumber\n  (p_i,&p_{i+1}) && \\forall i: 0 \\leq i < m\\\\\n  \\intertext{mark \\textbf{consecutive ranges} of the value \\inl{v}\n  in the source array, that is,}\n  \\label{eq:between}\n  a[k] &= v &&\\forall k: p_i < k < p_{i+1}\\\\\n  \\intertext{Additionally, the half open interval}\n  \\nonumber\n  [0,&p_{0})\\\\ \n  \\intertext{also marks another \\textbf{consecutive range} of the value \\inl{v} in the\n  source array:}\n  \\label{eq:beginning}\n  a[k] &= v &&\\forall k: 0 \\leq k < p_{0}\\\\\n  \\intertext{Another observation is that} \n  \\label{eq:a_nv}\n  a[p_i] &\\neq v &&\\forall i: 0 \\leq i < m\\\\\n  \\intertext{holds. 
Finally, we have}\n  \\label{eq:b_eqa}\n  a[p_i] &= b[i] &&\\forall i: 0 \\leq i < m\\\\\n  \\intertext{which, together with inequality~\\eqref{eq:a_nv}, \n  states that the target does not contain the value~\\inl{v}:}\n  \\nonumber\n  %\\label{eq:final}\n  b[i] &\\neq v &&\\forall i:0 \\leq i < m \n\\end{align} \n\n\\subsection{More lemmas on \\CountNotEqual}\n\nOur formalization of the properties of \\S\\ref{sec:formal-view-remove}\nrelies on the logic function \\logicref{CountNotEqual}.\nWe also rely on the logic function \\logicref{FindNotEqual} and\nthe lemmas of \\logicref{CountFindNotEqual} in the following listing, which provide more\nfacts about \\CountNotEqual and \\FindNotEqual.\n\n\\input{Listings/CountFindNotEqual.acsl.tex}\n\n\\clearpage\n\n\\subsection{Formalizing the properties of the partitions}\n\nThe function~\\RemovePartition, whose axiomatic definition is given in \nListings~\\ref{logic:RemovePartition-1} and~\\ref{logic:RemovePartition-2},\ndefines the partitioning sequence~$p$ from \\S\\ref{sec:formal-view-remove}.\n\n\n\\begin{logic}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={1-53}, style=acsl-block, frame=single]{Source/RemovePartition.acsl}\n\\end{minipage}\n\\caption{\\Label{logic:RemovePartition-1} The logic function \\RemovePartition (1)}\n\\input{Listings/RemovePartition.acsl.labels.tex}\n\\input{Listings/RemovePartition.acsl.index.tex}\n\\end{logic}\n\n\\FloatBarrier\n\nBefore we begin to relate the various lemmas\nto the formulas from \\S\\ref{sec:formal-view-remove}, we want to remind\nthe reader that logic functions (and predicates) must be total, that is, they must\nbe defined for all possible argument values.\n\n\\begin{logic}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={54-103}, style=acsl-block, frame=single]{Source/RemovePartition.acsl}\n\\end{minipage}\n\\caption{\\Label{logic:RemovePartition-2}The logic function \\RemovePartition (2)}\n\\end{logic}\n\n\\FloatBarrier\n\nThe lemmas for \\RemovePartition are related to the properties of \n\\S\\ref{sec:formal-view-remove} in the following way.\n\n\\begin{itemize}\n  \\item Property \\eqref{eq:remove-monotone} is expressed by the lemmas\n  \\RemovePartitionEmpty, \\RemovePartitionLeft,\n  \\RemovePartitionRight, and \\RemovePartitionStrictlyWeakIncreasing.\n\n  \\item Properties~\\eqref{eq:between} and~\\eqref{eq:beginning}\n  are described by the lemma \\RemovePartitionSegment.\n\n  \\item Property~\\eqref{eq:a_nv} is expressed by the lemma \\RemovePartitionNotEqual.\n\n  \\item Property~\\eqref{eq:b_eqa} is formulated using the predicate \\logicref{Remove}.\n\\end{itemize}\n\n\\clearpage\n\nWe would like to point out the lemma \\RemovePartitionCore, which subsumes\nthe statements of the subsequent lemmas \\RemovePartitionUpper,\n\\RemovePartitionNotEqual,\\\\\nand \\RemovePartitionCount.\nWhile these three lemmas add nothing new,\nwe have kept them because they correspond directly to individual properties\nof \\S\\ref{sec:formal-view-remove}.\nThe question may arise why there is the lemma \\RemovePartitionCore in the first place.\nThe answer is that we found the individual properties so intertwined\nthat we were not able to verify them separately but only in their joint embodiment.\n\n\n\\subsection{The predicate \\Remove}\n\nThe predicate \\logicref{Remove} primarily serves \nto improve the readability of our specification \\specref{removecopyiii}.\nAs mentioned before, this predicate encapsulates Property~\\eqref{eq:b_eqa}\nfrom 
\\S\\ref{sec:formal-view-remove}.\nNote that \\logicref{Remove} also contains an overloaded version of\n\\Remove which will be used for the specification of the \\emph{in-place}\nvariant \\specref{remove} of \\removecopy.\n\n\\input{Listings/Remove.acsl.tex}\n\n\\clearpage\n\n\\subsection{Formal specification of \\removecopyiii}\n\nThe following listing shows the formal specification of \\specref{removecopyiii}.\nThe additional postcondition \\inl{remove} makes use of the predicate \\logicref{Remove}\nwhich we have just described.\nFurthermore, we again have the postcondition~\\inl{unchanged} which states that\nthe source array~\\inl{a[0..n-1]} does not change.\n\n\\input{Listings/remove_copy3.h.tex}\n\n\\subsection{Implementation of \\removecopyiii}\n\\label{sec:removecopyiii:impl}\n\nWe now discuss some aspects of the implementation of \\implref{removecopyiii}.\nWe introduce the loop invariant~\\inl{mapping}.\nThis invariant states that the variable~\\inl{i} will\nalways be less than or equal to the result of \\inl{RemovePartition(a, n, v, k)}.\nWe also add the assertion~\\inl{mapping} to our implementation as a stepping stone\nfor the provers to verify the correctness of this loop invariant.
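\n\nFor orientation, here is a minimal sketch of the loop shape under discussion, using the usual \\inl{value_type}/\\inl{size_type} conventions; the verified implementation, with its ACSL annotations, is the one in the external listing below, not this sketch.\n\n\\begin{verbatim}\nsize_type remove_copy(const value_type* a, size_type n,\n                      value_type* b, value_type v)\n{\n    size_type k = 0;           /* number of elements copied so far */\n    for (size_type i = 0; i < n; ++i) {\n        if (a[i] != v) {\n            b[k++] = a[i];     /* b[k] = a[p_k], cf. property (eq:b_eqa) */\n        }\n        /* empty else-branch in the verified version (see below) */\n    }\n    return k;                  /* equals CountNotEqual(a, n, v) */\n}\n\\end{verbatim}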
\n\nSomewhat surprisingly, in order to reduce excessive verification times we had to add\nan else-branch to our implementation that, besides the assertion \\inl{unchanged},\nis empty.\n\nRegarding the assertion \\inl{update}, one might wonder why we do not simply write\n\\inl{\\\\at(a[i], Pre)}. \nHowever, this expression would be wrong because the index~\\inl{i}\nwould then be interpreted as\n\\inl{\\\\at(i,Pre)}, which doesn't make sense for a local variable.\n\\wpframac consequently rejects this expression with the following error message.\n\n\\begin{small}\n\\begin{verbatim}\n       Warning: unbound logic variable i. Ignoring code annotation\n\\end{verbatim}\n\\end{small}\n\n\\clearpage\n\nWe could explicitly refer to the current value of~\\inl{i} by using\nthe subexpression \\inl{\\\\at(i,Here)} inside the assertion~\\inl{update}.\nWe preferred, however, to introduce the predicate \\logicref{At}\nto simplify the comparison of array elements in programme states \nwhere the particular index variable isn't visible.\n\n\\input{Listings/At.acsl.tex}\n\nThe second argument of \\At is interpreted at the programme point where it appears,\nthat is, \\inl{Here}.\nUsing this auxiliary logic function, the assertion \\inl{update}\nis arguably more readable.\n\n\\input{Listings/remove_copy3.c.tex}\n\n\\clearpage\n\n", "meta": {"hexsha": "ace0fc9aa92945fe1e16297e44914e1520d8c413", "size": 8072, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/mutating/remove_copy3.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/mutating/remove_copy3.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/mutating/remove_copy3.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 37.896713615, "max_line_length": 123, "alphanum_fraction": 0.7666005946, "num_tokens": 2278, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911057, "lm_q2_score": 0.7371581684030623, "lm_q1q2_score": 0.5628082848129878}}
{"text": "\\documentclass[../report/main.tex]{subfiles}\n \n\\begin{document}\n\n% The asterix after \\subsection disables section numbering\n\\subsection*{Problem 1 Part B}\nDetermine the number of refrigerators to be shipped from plants to warehouses, and then warehouses to retailers to minimize the cost.  \\textcolor{red}{For part B warehouse 2 is closed along with all associated routes.  Changes to the problem statement or solution relative to Problem 1 Part A are highlighted in red.}\n\n\\subsection*{Solution}\n\\textcolor{red}{There is no solution when warehouse 2 is closed.  The following is the error message that is returned by Matlab function linprog():\\\\\\\\Exiting: One or more of the residuals, duality gap, or total relative error has grown 100000 times greater than its minimum value so far:\\\\\\indent the primal appears to be infeasible (and the dual unbounded).\\\\ \\indent(The dual residual $<$ TolFun=1.00e-08.)\\\\\\\\So Matlab tells us that there is no feasible solution.  Why is that?  If you look back at the network diagram and the supply and demand tables, you'll note that with warehouse 2 out of commission, retailers 5, 6, and 7 can only receive shipments from warehouse 3 and their total demand is 450 units.  At the same time, warehouse 3 can only receive shipments from plant 3 and 4.  The total supply capacity of those plants is only 400 units.  Therefore, warehouse 3 gets 50 less units from plants 3 and 4 than are demanded from retailers 5, 6, and 7.}\n\\subsection*{Linear Program Formulation}\n\\begin{enumerate}[1.]\n\t\\item Overall idea of problem\n\t\\begin{itemize}\n\t\t\\item Refrigerators moving from $n=4$ plants to $q=\\textcolor{red}{2}$ warehouses to $m=7$ retailers.\n\t\t\\item Not all plants deliver to all warehouses.\n\t\t\\item Not all warehouses deliver to all retailers.\n\t\t\\item Costs of shipping from plants to warehouses vary by pair.\n\t\t\\item Costs of shipping from warehouses to retailers vary by pair.\n\t\t\\item Each plant has a capacity in terms of number of refrigerators it can supply.\n\t\t\\item Each retailer has a capacity in terms of number of refrigerators it demands.\n\t\t\\item\\textcolor{red}{Warehouse 2 has closed and all associate routes have been eliminated.}\t\t\n\t\\end{itemize}\n\t\\item What is the goal?  What are you trying to achieve?\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Identify variables\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Identify constraints\n\t\\begin{itemize}\n\t\t\\item All constraints from part A remain in effect with the addition of two new constraints:\n\t\t\\item\\textcolor{red}{$np_{12} + np_{22} + np_{32} + np_{42} = 0$}\t\t\n\t\t\\item\\textcolor{red}{$nw_{23} + nw_{24} + nw_{25} + nw_{26} = 0$}\t\t\n\t\\end{itemize}\n\t\\item Identify inputs and outputs that you can control\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Specify all quantities mathematically\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Check the model for completeness and correctness\n\t\\begin{itemize}\n\t\\item All variables are positive.\n\t\\end{itemize}\n\\end{enumerate}\n\\subsection*{ Matlab Code}\n\\textcolor{red}{Code minimally changed from part A.  Only changes are 2 additional constraints (16 total equations) in the linear equality matrix and vector.  
\n\\subsection*{Linear Program Formulation}\n\\begin{enumerate}[1.]\n\t\\item Overall idea of problem\n\t\\begin{itemize}\n\t\t\\item Refrigerators moving from $n=4$ plants to $q=\\textcolor{red}{2}$ warehouses to $m=7$ retailers.\n\t\t\\item Not all plants deliver to all warehouses.\n\t\t\\item Not all warehouses deliver to all retailers.\n\t\t\\item Costs of shipping from plants to warehouses vary by pair.\n\t\t\\item Costs of shipping from warehouses to retailers vary by pair.\n\t\t\\item Each plant has a capacity in terms of the number of refrigerators it can supply.\n\t\t\\item Each retailer has a demand in terms of the number of refrigerators it requires.\n\t\t\\item\\textcolor{red}{Warehouse 2 has closed and all associated routes have been eliminated.}\t\t\n\t\\end{itemize}\n\t\\item What is the goal?  What are you trying to achieve?\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Identify variables\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Identify constraints\n\t\\begin{itemize}\n\t\t\\item All constraints from part A remain in effect with the addition of two new constraints:\n\t\t\\item\\textcolor{red}{$np_{12} + np_{22} + np_{32} + np_{42} = 0$}\t\t\n\t\t\\item\\textcolor{red}{$nw_{23} + nw_{24} + nw_{25} + nw_{26} = 0$}\t\t\n\t\\end{itemize}\n\t\\item Identify inputs and outputs that you can control\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Specify all quantities mathematically\n\t\\begin{itemize}\n\t\t\\item Unchanged from part A.\n\t\\end{itemize}\n\t\\item Check the model for completeness and correctness\n\t\\begin{itemize}\n\t\\item All variables are positive.\n\t\\end{itemize}\n\\end{enumerate}\n\\subsection*{ Matlab Code}\n\\textcolor{red}{Code minimally changed from part A.  The only changes are 2 additional constraints (16 total equations) in the linear equality matrix and vector.  Identical code from part A is not shown below (to save space).}\n\\lstinputlisting{../problem_one/partB_changes_fromA.m}\n\\end{document}", "meta": {"hexsha": "7f63f55019b12e906b2fe791235458d9697cf417", "size": 3384, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "problem_one/partB.tex", "max_stars_repo_name": "OSU-CS-325/Project_Three_LP", "max_stars_repo_head_hexsha": "88301202f62a44a1b17a98bbc33c0efdb4a9d458", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem_one/partB.tex", "max_issues_repo_name": "OSU-CS-325/Project_Three_LP", "max_issues_repo_head_hexsha": "88301202f62a44a1b17a98bbc33c0efdb4a9d458", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem_one/partB.tex", "max_forks_repo_name": "OSU-CS-325/Project_Three_LP", "max_forks_repo_head_hexsha": "88301202f62a44a1b17a98bbc33c0efdb4a9d458", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-24T18:35:38.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-24T18:35:38.000Z", "avg_line_length": 62.6666666667, "max_line_length": 962, "alphanum_fraction": 0.759751773, "num_tokens": 911, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.763483758172699, "lm_q1q2_score": 0.5628082843714288}}
{"text": "% ---------------------------------------------------------------------\n\\documentclass{article}\n\n\\usepackage{nberpreamble}\n\\title{\\bfseries Notes on Simulating Power}\n\\author{\\sffamily Prepared by Mauricio C\\'aceres}\n\\date{\\sffamily \\today}\n\n\\usepackage{pgfplots}  % awesome plotting\n\\usepackage{tikz}      % vector graphics!\n\\usetikzlibrary{\n  arrows,\n  patterns,\n  positioning,\n  calc,\n  fit,\n  intersections,\n  decorations.text,\n  decorations.markings,\n  decorations.pathmorphing,\n  shadows.blur\n}\n\n\\pgfplotsset{\n  compat      = newest,\n  axis x line = middle,\n  axis y line = center,\n  tick align  = outside,\n  yticklabels = {,,},\n  xticklabels = {,,},\n  xtick       = {0},\n  ytick       = {0}\n}\n\n\\renewcommand{\\displayoptions}{\n  \\maketitle\n  \\pagenumbering{arabic}\n}\n\n% ---------------------------------------------------------------------\n\\begin{document}\n\\displayoptions\n\n%----------------------------------------------------------------------\n\\section{Parametric Power}\n\\label{sec:parametric_power}\n\nWe typically consider the model\n\\begin{equation}\nY_i = \\alpha + \\beta T_i + \\gamma X_i + \\varepsilon_i\n\\end{equation}\n\nfor some treatment $T_i$ at the individual level and the OLS estimator $\\widehat{\\beta}_{OLS}$ the difference in outcomes for the treatment and control groups. For $H_0: \\beta = 0$, consider the rejection probability function\n\\begin{equation}\n\\pi_{N}(\\beta) = P\\set{\\text{reject $H_0$} | \\beta}\n\\end{equation}\n\nFor $\\beta = 0$, this is $\\alpha$ the probability of Type I error or \\textit{significance}: How likely are we to make a mistake? For $\\beta = \\widetilde{\\beta} \\ne 0$ this is \\textit{power}, the probability of rejecting the null: How likely are we to get it right? $\\widehat{\\beta}_{OLS}$ is $\\sqrt{n}$-consistent, that is,\n\\[\n  \\sqrt{n} \\left(\\widehat{\\beta}_{OLS} - \\beta\\right)\n  \\xrightarrow{D} N(0, V_{\\widehat{\\beta}})\n\\]\n\nwhere $\\beta$ is the true mean. 
Relying on large-sample asymptotics, we can visualize $\\pi_N(\\beta)$,\n\\begin{figure}[H]\n  \\centering\n  \\caption{Power of a Test}\n  \\begin{tikzpicture}\n    \\begin{axis}[name=plot1\n      %,title=\n      ,width=9cm\n      ,height=5cm\n      ,ymin=0\n      ,ymax=0.5\n      ,xmin=-1.5\n      ,xmax=5.5\n      ,domain=-6:6\n      ,xticklabels = {-1,0,1,2,3,4}\n      ,xtick = {-1,0,1,2,3,4}\n      ]\n      \\addplot [fill=black!20,smooth,domain=1.96:6] {exp( - (x - 2.8)^2 / 2) / sqrt(2 * pi * 1)} \\closedcycle;\n      \\addplot [smooth,domain=-6:1.96] {exp( - (x - 2.8)^2 / 2) / sqrt(2 * pi * 1)};\n      \\draw [-] (axis cs:2.8, 0) -- (axis cs:2.8, 0.399);\n      \\draw [<->] (axis cs:2, 0.1) -- (axis cs:2.76, 0.1);\n      \\node[above] at (axis cs:2.4, 0) {$t_{\\kappa}$};\n      \\node[above] at (axis cs:2.8, 0.4) {$\\widetilde{\\beta} / SE$};\n      \\draw [->] (axis cs:4.75, 0.3) to[bend left=25] (axis cs:4.2, 0.115)\n        node[above] at (axis cs:4.75, 0.3) {$\\pi_N(\\widetilde{\\beta}) = \\kappa$};\n    \\end{axis}\n    \\begin{axis}[name=plot2\n      ,title={}\n      ,at={($(plot1.south) + (-2.125cm, 4cm)$)}\n      ,anchor=south\n      ,width=9cm\n      ,height=5cm\n      ,ymin=0\n      ,ymax=0.5\n      ,xmin=-3.5\n      ,xmax=3.5\n      ,domain=-4:4\n      ,xticklabels = {-2,-1,0,1}\n      ,xtick = {-2,-1,0,1}\n      ]\n      \\addplot [fill=black!20,smooth,domain=-1.96:1.96] {exp( - (x - 0)^2 / 2) / sqrt(2 * pi * 1)} \\closedcycle;\n      \\addplot [smooth,domain=-4:-1.96] {exp( - (x - 0)^2 / 2) / sqrt(2 * pi * 1)};\n      \\addplot [smooth,domain=1.96:4] {exp( - (x - 0)^2 / 2) / sqrt(2 * pi * 1)};\n      \\draw [smooth] (axis cs:0, 0) -- (axis cs:0, 0.5);\n      \\node[above] at (axis cs:1.96, 0.1) {$t_{1 - \\frac{\\alpha}{2}}$};\n      \\node[above] at (axis cs:0.9, 0.4) {$H_0: \\beta = 0$};\n      \\draw [->] (axis cs:-2.1, 0.2) to[bend right=25] (axis cs:-1.3, 0.115)\n        node[above] at (axis cs:-2.1, 0.2) {$\\pi_N(0) = 1 - \\alpha$};\n    \\end{axis}\n    \\draw [dashed] (3.66, 4) -- (3.66, 1.5);\n  \\end{tikzpicture}\n  \\label{fig:power_of_a_test}\n\\end{figure}\n\nA \\textit{parametric} approach to power estimates $\\widetilde{\\beta}$ given $\\alpha, \\kappa, N, SE$ and terms it the \\textit{minimum detectable effect}, MDE, or it estimates $N$ given $\\alpha, \\kappa, SE, MDE$.\n\n%----------------------------------------------------------------------\n\\section{Simulated Confidence Interval}\n\\label{sec:simulated_confidence_interval}\n\nHowever, it is possible to follow a \\textit{non-parametric} approach to estimating power. Note there are $C = \\left(\\begin{smallmatrix} N \\\\ NP \\end{smallmatrix}\\right)$ ways to treat $PN$ individuals. If we estimate $\\widehat{\\beta}_{OLS}$ for each $c = 1, \\ldots, C$, then we would know the exact distribution of our estimator for the treatment effect under the null given the data. Thus we could compute an exact $p$-value and determine whether to reject the null.\n\nEven for modestly-sized data, $C$ will be intractably large. 
Hence we simulate $K$ draws from the possible treatment-control arrangements, $T_{ik}$ such that $\\sum^{N}_{i = 1} T_{ik} = PN$, and estimate\n\\begin{equation}\nY_i = \\alpha + \\beta_k T_{ik} + \\gamma X_i + \\varepsilon_i\n\\label{eq:ri_ci_reg}\n\\end{equation}\n\nHere $\\widehat{\\beta}_k$ will be distributed around $0$ and a $1 - \\alpha$ CI under the null is given by\n\\begin{equation}\n\\widehat{CI}_{1 - \\alpha} = \\left(\\widehat{F}^{-1}(\\alpha / 2), \\widehat{F}^{-1}(1 - \\alpha / 2)\\right)\n\\label{eq:ri_ci_hat}\n\\end{equation}\n\nwith $\\widehat{F}(\\widehat{\\beta}_k)$ the empirical cdf of $\\widehat{\\beta}_k$. This approach is appealing compared to a parametric approach because it naturally takes into account the correlation structure of the errors, whereas a parametric approach requires making an assumption about $V_{\\widehat{\\beta}}$. If we have historical data on our study population (we can think of historical data as data on a population where our treatment had no effect, or as the counterfactual of what would happen were we to treat our population with no effect), then we can simulate the CI above, and say that we expect to be able to reject effects outside the confidence interval.\n\n%----------------------------------------------------------------------\n\\section{Simulated Power}\n\\label{sec:simulated_power}\n\nNote, however, that this says nothing about power. In fact, power at either end of the confidence interval should be about $0.5$ (if $\\widehat{\\beta}_{OLS}$ were to be symmetrically distributed). Typically we look for a power level of $0.8$ or $0.9$. We could assume that the true effect of $T_i$ is $\\widetilde{\\beta}$, and estimate\n\\begin{equation}\n  \\begin{array}{r@{\\hskip 4.5pt}l}\n    \\widetilde{Y}_{ik} & = Y_{i} + \\widetilde{\\beta} T_{ik} \\\\\n    \\widetilde{Y}_{ik} & = \\alpha + \\beta_k T_{ik} + \\gamma X_{i} + \\varepsilon_{i}\n  \\end{array}\n\\label{eq:ri_power_reg}\n\\end{equation}\n\nIn this case, $\\widehat{\\beta}_k$ will be distributed around $\\widetilde{\\beta}$ but the shape of the distribution would not have changed. Thus we can estimate power as\n\\begin{equation}\n\\widehat{\\kappa} = \\dfrac{1}{K} \\sum^{}_{k} 1\\left(\\widehat{\\beta}_k \\notin \\widehat{CI}_{1 - \\alpha}\\right)\n\\label{eq:ri_power_hat}\n\\end{equation}\n\nand we can search for $\\widetilde{\\beta}$ such that $\\widehat{\\kappa} \\approx \\kappa$ for some desired power level $\\kappa$ (note $\\widehat{\\kappa} \\xrightarrow{P} P\\left(\\beta_k \\notin CI_{1 - \\alpha}\\right) = \\kappa$). That is, we look for a $\\widetilde{\\beta}$ that causes us to reject the null a $\\kappa$ portion of the time. This is more complicated if $Y_i$ is binary. The approach works to obtain a CI under the null but the subsequent search does not map trivially. One idea is to randomly swap successes to failures (or the converse) based on $\\widetilde{\\beta}$. 
Consider\n\\begin{equation}\n  \\begin{array}{r@{\\hskip 4.5pt}l}\n    \\widetilde{Y}_{ik} & = Y_{i} (1 - T_{ik}) + (Y_i + S_{ik}) T_{ik}  = Y_i + S_{ik} T_{ik} \\\\\n    \\widetilde{Y}_{ik} & = \\alpha + \\beta_k T_{ik} + \\gamma X_{i} + \\varepsilon_{i}\n  \\end{array}\n\\label{eq:ri_power_reg_binary}\n\\end{equation}\n\nwhere $S_{ik}$ is constructed as follows:\n\\begin{itemize}\n  \\item Let\n    \\begin{align*}\n    T_k & = \\sum^{}_{i} T_{ik}\n    \\quad\\quad\n    S_k = \\sum^{}_{i} T_{ik} Y_i\n    \\\\\n    \\widetilde{\\beta} & \\in \\left[-\\dfrac{S_k}{T_k}, \\dfrac{T_k - S_k}{T_k}\\right] \\\\\n    S^1_k & = \\widetilde{\\beta} T_k \\\\\n    S^2_k & =\n      \\begin{cases}\n      \\dfrac{S_k}{T_k} - S^1_k & \\widetilde{\\beta} > 0  \\\\[9pt]\n        \\dfrac{T_k - S_k}{T_k} - S^1_k & \\widetilde{\\beta} < 0\n    \\end{cases} \\\\\n    \\end{align*}\n\n  \\item Construct the set $\\varsigma_k$ with $S^1_k$ entries equal to $1(\\widetilde{\\beta} > 0) - 1(\\widetilde{\\beta} < 0)$ and $S^2_k$ entries equal to $0$.\n\n  \\item For $i$ such that $T_{ik} = 1$, draw $s$ from $\\varsigma_k$ \\textit{without} replacement and set $S_{ik} = s$ ($S_{ik} = 0$ otherwise).\n\\end{itemize}\n\nNote that $T^{-1}_k \\sum^{}_{i} S_{ik} T_{ik} = \\widetilde{\\beta}$, hence\n\\begin{align*}\nE\\widehat{\\beta}_k & = E\\left[\\widetilde{Y}_{ik} | T_{ik} = 1\\right] - E\\left[\\widetilde{Y}_{ik} | T_{ik} = 0\\right] \\\\\n& = E\\left[Y_i + S_{ik} | T_{ik} = 1\\right] - E\\left[Y_i | T_{ik} = 0\\right] \\\\\n& = EY_i + E\\left[S_{ik} | T_{ik} = 1\\right] - EY_i \\\\\n& = \\widetilde{\\beta}\n\\end{align*}\n\nSo $\\widehat{\\beta}_k$ will be distributed around $\\widetilde{\\beta}$. Now we can outline a general simulation procedure (a code sketch follows the list):\n\\begin{enumerate}\n\\item Estimate a $1 - \\alpha$ CI for $\\widehat{\\beta}_{OLS}$ under $H_0: \\beta = 0$ using \\Cref{eq:ri_ci_reg} and \\Cref{eq:ri_ci_hat}.\n\n\\item Choose a starting MDE, $\\widetilde{\\beta}$, and estimate $\\widehat{\\beta}_k$ for $k = 1, \\ldots, K$ using \\Cref{eq:ri_power_reg} if $Y_i$ is continuous or \\Cref{eq:ri_power_reg_binary} if $Y_i$ is binary.\n\n\\item Estimate power using \\Cref{eq:ri_power_hat}.\n\n\\item If $\\widehat{\\kappa} < \\kappa$ then increase $\\widetilde{\\beta}$; if $\\widehat{\\kappa} > \\kappa$ then decrease $\\widetilde{\\beta}$.\n\n\\item Continue until $\\left|\\widehat{\\kappa} - \\kappa\\right| < \\epsilon$ for $\\epsilon$ small.\n\\end{enumerate}
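\n\nFor the continuous-outcome case, steps 1--3 fit in a few lines. The following is a minimal sketch in which the data $(y, X)$, the treated share $P$, and all helper names are hypothetical placeholders, not code from these notes.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef beta_hat(y, T, X):\n    """OLS coefficient on the treatment indicator T."""\n    Z = np.column_stack([np.ones(len(y)), T, X])\n    return np.linalg.lstsq(Z, y, rcond=None)[0][1]\n\ndef simulated_power(y, X, P, beta_tilde, K=1000, alpha=0.05, seed=0):\n    rng = np.random.default_rng(seed)\n    n = len(y)\n    draws = [(rng.permutation(n) < int(P * n)).astype(float)\n             for _ in range(K)]                 # treatment draws T_k\n    # Step 1: CI under the null from the randomization distribution.\n    b_null = np.array([beta_hat(y, T, X) for T in draws])\n    lo, hi = np.quantile(b_null, [alpha / 2, 1 - alpha / 2])\n    # Step 2: add the hypothesized effect to treated units, re-estimate.\n    b_alt = np.array([beta_hat(y + beta_tilde * T, T, X) for T in draws])\n    # Step 3: power = share of estimates outside the null CI.\n    return np.mean((b_alt < lo) | (b_alt > hi))\n\\end{verbatim}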
\n\n%----------------------------------------------------------------------\n\\section{Monte Carlo Simulations}\n\\label{sec:monte_carlo_simulations}\n\nConsider some data-generating process (DGP) for $w_i = (y_i, x_i, T_i)$,\n\\[\n  x_i \\sim F_x\n  \\quad\\quad\n  T_i \\sim \\text{Bernoulli}(P)\n  \\quad\\quad\n  T_i \\indep x_i\n  \\quad\\quad\n  y_i | x_i, T_i \\sim F_y\n\\]\n\nwhere\n\\[\n\\beta = E(y_i | T_i = 1, x_i) - E(y_i | T_i = 0, x_i)\n\\]\n\nThe aim is to compute the power of testing $\\widehat{\\beta}_{OLS}$ against our simulated confidence interval. For $m = 1, \\ldots, M$:\n\\begin{enumerate}\n\\item Generate two draws from the DGP, $w_{1m}$ with $P = 0$ and $w_{2m}$ with $P \\in (0, 1)$.\n\n\\item Compute $\\widehat{\\kappa}_{1m}, \\widehat{\\kappa}_{2m}$ from our power simulation procedure and $\\widehat{\\beta}_{1m}, \\widehat{\\beta}_{2m}$ from OLS.\n\n\\item Construct a $1 - \\alpha$ confidence interval for $\\beta$, $\\widehat{CI}^M_{1 - \\alpha}$, using \\Cref{eq:ri_ci_hat} and compute\n\\[\n  \\overline{\\kappa}^M = \\dfrac{1}{M} \\sum^{}_{m} 1\\left(\n      \\widehat{\\beta}_{2m} \\notin \\widehat{CI}^M_{1 - \\alpha}\n  \\right)\n\\]\n\n\\item It should be the case that $\\widehat{\\kappa}_{1m}$ are distributed around $\\alpha$ and $\\widehat{\\kappa}_{2m}$ are distributed around $\\overline{\\kappa}^M$.\n\\end{enumerate}\n\n% We use the procedure above to see if we can empirically match what power should be under known conditions. Consider the standard formula for MDE\n% \\begin{equation}\n% MDE = (z_{\\alpha / 2} + z_{\\kappa}) \\sqrt{\\dfrac{\\sigma_\\varepsilon^2}{P (1 - P) N}}\n% \\end{equation}\n%\n% We can recover power as\n% \\begin{equation}\n% \\kappa = \\Phi\\left(MDE \\sqrt{\\dfrac{P (1 - P) N}{\\sigma^2_\\varepsilon}} - t_{\\alpha / 2}\\right)\n% \\end{equation}\n\n\n% TODO: Finish writing this out. // 2016-09-02 14:33 EDT\n% TODO: Actually code this up // 2016-09-02 14:33 EDT\n\n% ---------------------------------------------------------------------\n\\end{document}\n", "meta": {"hexsha": "00c3cd42132500b32829997f7a9cb8a841fae1b5", "size": 11824, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/power-simulation-notes.tex", "max_stars_repo_name": "mcaceresb/stata-power", "max_stars_repo_head_hexsha": "8d224e77b389bbb049158166f75e9c4bb10a088e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-06-20T21:43:56.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-12T15:43:37.000Z", "max_issues_repo_path": "notes/power-simulation-notes.tex", "max_issues_repo_name": "arlionn/stata-power", "max_issues_repo_head_hexsha": "8d224e77b389bbb049158166f75e9c4bb10a088e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/power-simulation-notes.tex", "max_forks_repo_name": "arlionn/stata-power", "max_forks_repo_head_hexsha": "8d224e77b389bbb049158166f75e9c4bb10a088e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-08-06T16:32:12.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-05T17:05:13.000Z", "avg_line_length": 44.9581749049, "max_line_length": 663, "alphanum_fraction": 0.6224627876, "num_tokens": 4080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286834, "lm_q2_score": 0.7634837527911057, "lm_q1q2_score": 0.5628082804043434}}
{"text": "%%Note: You can only use \\section command, you are not allowed, per TTU Graduate School, use\r\n%%\\subsection command for ghigher level subheadings. At most level 2 subheadings are allowed.\r\n\\chapter{Cyber-Physical Systems} \r\n\\label{Cyber-Physical Systems Chapter}\r\n\r\n\\section[URE Generation Mechanisms]{URE Generation Mechanisms}\r\n\r\nElectronic digital circuitry utilizes many periodic signals generated by components such as clocks and oscillators. As part of their inherent operation, these signals are often directly coupled or mixed and modulated both 1) Intentionally for proper operation of the device and 2) Unintentionally resulting from imperfect filtering of high frequency signals or the operation of nonlinear components. This behavior in digital devices leads to generated URE being conducted on to the power infrastructure as shown in Figure \\ref{fig:ure_model}. All periodic signals used in the operation of an electronic device are potentially, and independently, filtered by some process before being directly conducted as URE outside the device often through powerline emissions.  Additionally, each periodic signal may mix (multiply) with other periodic signals before being conducted as URE.\r\n\r\n\\begin{figure}[tb]\r\n\t\\includegraphics[width=\\textwidth]{./model_results/URE_Model.jpg}\r\n\t\\centering\r\n\t\\caption{Model of URE generation mechanisms within an electronic device.  URE is composed of the direct conduction components and the intermodulation components of periodic signals utilized within the URE generating device.}\r\n\t\\label{fig:ure_model}\r\n\\end{figure}\r\n\r\nThe simplified model in Figure \\ref{fig:ure_model} does not take in to account the continued mixing and addition of previously multiplied signals which could theoretically continue nearly indefinitely resulting in a significantly increased model complexity.  Given that no assumption was made with regard to the periodic signals ($P_n$), the additional frequency components generated from continued addition and multiplication can be folded in to additional $P$ components or different periodic signal structures. \r\n\r\n\\section[Model Development]{Model Development}\r\n\r\nAnalytically, the received complex URE time domain signal, $r$, can be written as the sum of the direct and intermodulation responses\r\n\\begin{equation}\r\n\tr ={} c \\ast \\left(\\sum^N_{n=1} p_{n} \\ast h_{n} + \\sum^N_{i\\neq{}j} \\left(p_{i} \\ast g_{i,j}\\right)\\left(p_{j} \\ast g_{j,i}\\right)\\right) \\label{eq:uremodel1}\r\n\\end{equation}\r\nwhere $r$ is the time domain received signal, $c$ is the URE channel gain, $p_{n}$ are periodic signals used within the devices, $h_{n}$ are the direct power gains for each signal, and $g_{i,j}$ are mixing product gains for component cross-talk and intermodulation, $i$ and $j$ are sequences $1$ to $N$, and $N$ is the number of periodic signals. 
Note that Equation \\ref{eq:uremodel1} assumes no signal mixes with itself.\r\n\r\nUsing the Convolution Theorem to transform $r$ into its Power Spectral Density (PSD), $\\hat{r}$, yields\r\n\\begin{equation}\r\n    \\hat{r} ={} \\hat{c} \\left( \\underbrace{\\sum^{N}_{n=1} \\hat{p}_{n}\\hat{h}_{n}}_\\text{Direct Conduction} + \\underbrace{\\sum^N_{i\\neq{}j} \\hat{p}_{i}\\hat{g}_{i,j} \\ast  \\hat{p}_{j}\\hat{g}_{j,i}}_\\text{Mixing Products} \\right) \\label{eq:uremodel2}\r\n\\end{equation}\r\nwhere $\\hat{p}$ is the PSD of $p$ and $\\hat{g}$, $\\hat{h}$, and $\\hat{c}$ are the Fourier transforms of the $g$, $h$, and $c$ filter responses, respectively.\r\n\r\nEvaluation of the trivial case where $p_1 = \\sin(2\\pi f_1 t)$, $p_2 = \\sin(2\\pi f_2 t)$, $c = h_1 = h_2 = g_1 = g_2 = \\delta(t)$, and $f_1 \\gg f_2$ shows the following \r\n\\begin{equation} \\label{eq:uremodelsimpletime}\r\n\\begin{split}\r\n\tr & = \\sin(2\\pi f_1 t) + \\sin(2\\pi f_2 t) + \\sin(2\\pi f_1 t)\\sin(2\\pi f_2 t) \\\\\r\n\t\t& = \\sin(2\\pi f_1 t) + \\sin(2\\pi f_2 t) + \\frac{1}{2}\\cos(2\\pi(f_1 - f_2)t) -  \\frac{1}{2}\\cos(2\\pi(f_1 + f_2)t)\\\\\r\n\\end{split}\r\n\\end{equation}\r\nwhich yields the following positive frequency spectral component magnitudes\r\n\\begin{equation} \\label{eq:uremodelsimplefreq}\r\n\t\\hat{r} = \\underbrace{\\frac{1}{2}\\delta(f - f_1) + \\frac{1}{2}\\delta(f - f_2)}_\\text{Unmodulated Frequency Peaks} + \\underbrace{\\frac{1}{4}\\delta\\left(f - (f_1 - f_2)\\right) + \\frac{1}{4}\\delta\\left(f - (f_1 + f_2)\\right)}_\\text{Modulation Sidebands}\\\\\r\n\\end{equation}\r\n \r\nExamination of Equation \\ref{eq:uremodelsimplefreq} shows that for the simple case of two periodic signals, two unmodulated frequency tones and two modulation sidebands are present.  The number of potential frequency peaks within a URE spectrum grows at a rate of $\\mathcal{O}(n^2)$ for each additional periodic signal and $\\mathcal{O}(n^2)$ for each additional harmonic associated with the periodic signals, resulting in combined growth of $\\mathcal{O}(n^2) \\times \\mathcal{O}(n^2) = \\mathcal{O}(n^4)$.  For instance, $10$ carriers, with $10$ harmonics each, can generate nearly $10000$ peaks within the frequency domain. The number of potential peaks within the spectrum of a URE signal can be significant, resulting in an overdetermined system with a very large feature space, which DASP is presented to address.   \r\n\r\nThe continuous-time Fourier Series expansion of a periodic signal shows that it is composed of an infinite sum of its respective harmonics; therefore, aligning and summing its harmonic content maximizes detection of the signal.  Although the fundamental frequencies of the periodic signals comprising $r$ are unknown, the HASP algorithm provides a method of aligning harmonics of all periodic signals within a given band regardless of the fundamental frequency.  Additionally, the trigonometric product identities show that the intermodulation and mixing of periodic signals results in the sums and differences of the mixing frequencies, as shown in Equation \\ref{eq:uremodelsimplefreq}.  The MASP, CMASP, and SCAP algorithms provide a method for detecting these modulations and aligning the frequency mixing sums and differences when the underlying carrier and modulation frequencies are unknown.  
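\r\n\r\nFor a quick numerical illustration of Equation \\ref{eq:uremodelsimplefreq}, the short script below recovers exactly the four spectral peaks; the sample rate and tone frequencies are arbitrary example values chosen to fall on FFT bins, not measurement parameters from this work.\r\n\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\nfs, n = 1024.0, 4096                       # tones fall on FFT bins\r\nt = np.arange(n) / fs\r\nf1, f2 = 200.0, 30.0\r\nr = (np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)\r\n     + np.sin(2*np.pi*f1*t) * np.sin(2*np.pi*f2*t))\r\nspec = 2 * np.abs(np.fft.rfft(r)) / n      # amplitude spectrum\r\nfreqs = np.fft.rfftfreq(n, 1/fs)\r\nprint(freqs[spec > 0.1])                   # [ 30. 170. 200. 230.]\r\nprint(spec[spec > 0.1].round(2))           # [ 1.  0.5  1.  0.5]\r\n\\end{verbatim}\r\n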
As shown in Sections \\ref{Harmonically Aligned Signal Projection}, \\ref{Modulation Aligned Signal Projection}, \\ref{Spectral Correlation Aligned Projection}, \\ref{Cross-Modulation Aligned Signal Projection} the HASP, MASP, SCAP, and CMASP algorithms provide a method for aligning and summing harmonics and their cross-products regardless of their underlying fundamental frequency, carrier frequency, and modulation frequency and therefore maximize features relevant to device detection and classification.  Additionally, Section \\ref{Frequency Aligned Signal Projection} describes the Frequency Aligned Signal Projection which provides a method for aligning frequencies over time for further characterization of signal harmonic components and their respective time and frequency modulations.\r\n\r\n\\section[DASP Methodology]{DASP Methodology}\r\n\r\nAlignment of signal components, such as harmonics, modulations, and frequencies, with the DASP algorithms result in 2-D images which need to be further processed for extraction of features and eventually used for testing and training in a machine learning framework.   As shown in Figure \\ref{fig:dasp_methodology}, raw time domain signal captures are transformed through a DASP process in to a 2-D structure and further processed through an image scaling, segmentation, and summing process.   Statistical features and processed DASP images are then utilized for training and testing using LDA, k-NN, and CNN machine learning processes.  \r\n\r\n\\begin{figure}[tb]\r\n\t\\includegraphics[width=\\textwidth]{./model_results/model.jpg}\r\n\t\\centering\r\n\t\\caption{DASP processing and feature generation methodology.  The DASP algorithms transform a 1-D vector into a 2-D image which is subsequently transformed, segmented, or processed through an image processing algorithm.  The DASP images are then utilized to classify URE from electrical devices through a variety of feature extraction and learning methods.}\r\n\t\\label{fig:dasp_methodology}\r\n\\end{figure}", "meta": {"hexsha": "a298508c032f76779dfe4cda3a7dc85eee5f7f6b", "size": 8004, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "research/dissertation/ch02_cyberphysical.tex", "max_stars_repo_name": "argodev/learn", "max_stars_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "research/dissertation/ch02_cyberphysical.tex", "max_issues_repo_name": "argodev/learn", "max_issues_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 15, "max_issues_repo_issues_event_min_datetime": "2020-01-28T22:25:10.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-11T23:21:02.000Z", "max_forks_repo_path": "research/dissertation/ch02_cyberphysical.tex", "max_forks_repo_name": "argodev/learn", "max_forks_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 138.0, "max_line_length": 1691, "alphanum_fraction": 0.7737381309, "num_tokens": 2012, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5628082755541396}}
{"text": "\\addchap[Notation]{Notation}\\label{chapter:notation}\n\n\\section*{Problems}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $T \\in \\mathbb{N}$ & number of time slots \\\\\n    $\\delta \\in \\mathbb{R}_{>0}$ & length of a time slot \\\\\n    $d \\in \\mathbb{N}$ & number of dimensions \\\\\n    $\\mathcal{X} \\subset \\mathbb{R}^d$ & decision space \\\\\n    $m_k \\in \\mathbb{N}$ & upper bound of dimension $k$ \\\\\n    $M_k \\subset \\mathbb{N}_0$ & allowed values of dimension $k$ (in the discrete case) \\\\\n    $\\mathcal{M} = \\mathcal{X}$ & set of all configurations (in the discrete case) \\\\\n    $f_t(x) \\in \\mathbb{R}_{\\geq 0}$ & hitting cost of action $x \\in \\mathcal{X}$ during time slot $t$ \\\\\n    $\\beta_k \\in \\mathbb{R}_{>0}$ & switching cost of dimension $k$ \\\\\n    $\\lambda_t \\in \\mathbb{N}_0^e$ & load profile during time slot $t$ \\\\\n    $c_k \\in \\mathbb{R}_{\\geq 0}$ & time-independent hitting cost of dimension $k$ \\\\\n\\end{tabularx}\n\n\\section*{Data Center Model}\n\n\\subsection*{Dispatching}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $m_k \\in \\mathbb{N}$ & maximum number of servers of type $k$ \\\\\n    $l_k^{max} \\in \\mathbb{N}$ & maximum number of jobs a server of type $k$ can process in a single time slot \\\\\n    $e \\in \\mathbb{N}$ & number of load types \\\\\n    $\\lambda_{t,i} \\in \\mathbb{N}_0$ & number of jobs of type $i$ during time slot $t$ \\\\\n    $\\lambda_t \\in \\mathbb{N}_0$ & total load during time slot $t$ \\\\\n    $\\lambda^{max} \\in \\mathbb{N}$ & maximum total load of feasible load profiles \\\\\n    $\\mathcal{Z}$ & set of job assignments of jobs of individual load types to server types \\\\\n    $z_{t,k,i} \\in [0,1]$ & fraction of jobs of type $i$ assigned to servers of type $k$ during time slot $t$ \\\\\n    $l_{t,k,i} \\in [0,\\lambda^{max}]$ & (fractional) number of jobs of type $i$ assigned to servers of type $k$ during time slot $t$ \\\\\n    $l_{t,k} \\in [0,\\lambda^{max}]$ & (fractional) number of jobs assigned to servers of type $k$ during time slot $t$ \\\\\n    $s_k(l) \\in [0,1]$ & utilization (or speed) of a server of type $k$ with load $l$ \\\\\n    $\\theta_k \\in [0,1]$ & maximum allowed utilization of a server of type $k$ \\\\\n    $g_{t,k}(l) \\in \\mathbb{R}_{\\geq 0}$ & operating cost of a server of type $k$ during time slot $t$ when $l$ jobs are processed on the server \\\\\n    $q_{t,k,i}(l) \\in \\mathbb{R}_{\\geq 0}$ & cost of processing a job of type $i$ on a server of type $k$ during time slot $t$ when $l$ jobs are processed on the server \\\\\n\\end{tabularx}\n\n\\subsection*{Energy Cost}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $e_{t,k}(s) \\in \\mathbb{R}_{\\geq 0}$ & energy cost of a server of type $k$ with utilization $s$ during time slot $t$ \\\\\n    $\\nu_{t,k}(p) \\in \\mathbb{R}_{\\geq 0}$ & energy cost of a server of type $k$ with energy consumption $p$ during time slot $t$ \\\\\n    $\\phi_k(s) \\in \\mathbb{R}_{\\geq 0}$ & energy consumption of a server of type $k$ with utilization $s$ \\\\\n    $\\Phi_k(s) \\in \\mathbb{R}_{\\geq 0}$ & power consumption of a server of type $k$ with utilization $s$ \\\\\n    $\\Phi_k^{max} \\in \\mathbb{R}_{\\geq 0}$ & power consumption of a server of type $k$ on full load \\\\\n    $\\Phi_k^{min} \\in \\mathbb{R}_{\\geq 0}$ & power consumption of an idling server of type $k$ \\\\\n    $c_{t,i} \\in \\mathbb{R}_{\\geq 0}$ & energy cost of energy source $i$ per unit of 
energy during time slot $t$ \\\\\n\\end{tabularx}\n\n\\subsection*{Revenue Loss}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\gamma \\in \\mathbb{R}_{\\geq 0}$ & revenue loss factor, i.e., lost revenue per unit of delay \\\\\n    $r_{t,i}(d) \\in \\mathbb{R}_{\\geq 0}$ & revenue loss of jobs of type $i$ with an average delay $d$ during time slot $t$ \\\\\n    $d_{t,k,i}(l) \\in \\mathbb{R}_{\\geq 0}$ & average delay of a job of type $i$ processed on a server of type $k$ during time slot $t$ where the total load on the server is $l$ \\\\\n    $\\delta_{t,k,i} \\in \\mathbb{R}_{\\geq 0}$ & constant delay when processing a job of type $i$ on a server of type $k$ during time slot $t$ \\\\\n    $\\mu_k \\in \\mathbb{R}_{\\geq 0}$ & service rate of a server of type $k$ \\\\\n    $\\delta_i \\in \\mathbb{R}_{\\geq 0}$ & minimal detectable delay of jobs of type $i$ \\\\\n    $\\eta_{k,i} \\in \\mathbb{R}_{\\geq 0}$ & processing time of a job of type $i$ on a server of type $k$ \\\\\n\\end{tabularx}\n\n\\subsection*{Switching Cost}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\epsilon_k \\in \\mathbb{R}_{\\geq 0}$ & additional energy consumed by toggling a server of type $k$ on and off \\\\\n    $\\delta_k \\in \\mathbb{R}_{\\geq 0}$ & delay in migrating connections or data of a server of type $k$ before it can be powered down \\\\\n    $\\tau_k \\in \\mathbb{R}_{\\geq 0}$ & wear-and-tear costs of toggling a server \\\\\n    $\\rho_k \\in \\mathbb{R}_{\\geq 0}$ & perceived risk associated with toggling a server of type $k$ \\\\\n    $\\xi_k \\in \\mathbb{R}_{>0}$ & normalized switching cost measuring the minimum duration a server of type $k$ must be asleep to outweigh the switching cost \\\\\n\\end{tabularx}\n\n\\subsection*{Networks}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\iota \\in \\mathbb{N}$ & number of data centers \\\\\n    $\\zeta \\in \\mathbb{N}$ & number of geographically centered request sources \\\\\n    $\\delta_{t,j,s} \\in \\mathbb{R}_{\\geq 0}$ & network delay incurred by routing a request from source $s$ to data center $j$ during time slot $t$ \\\\\n    $\\xi$ & number of energy sources \\\\\n    $p_{t,i,j} \\in \\mathbb{R}_{\\geq 0} \\cup \\{\\infty\\}$ & energy from source $i$ available at data center $j$ during time slot $t$ \\\\\n    $u_{t,i} \\in \\mathbb{R}_{\\geq 0}$ & average profit per unit of energy from source $i$ during time slot $t$ \\\\\n    $q_{t,i} \\in [0,1]$ & minimum fraction for energy from source $i$ during time slot $t$ \\\\\n    $\\delta_{t,i,j} \\in \\mathbb{R}_{\\geq 0}$ & remaining energy requirement of data center $j$ during time slot $t$ after all energy sources up to source $i$ were used \\\\\n\\end{tabularx}\n\n\\section*{Complexity}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\mathcal{O}(C)$ & complexity of computing the hitting costs $f_t$ \\\\\n    $\\mathcal{O}(O_{\\epsilon}^d)$ & convergence rate of a convex optimization finding an $\\epsilon$-optimal solution in $d$ dimensions \\\\\n    $\\mathcal{O}(R_{\\epsilon})$ & convergence rate of a root finding method finding an $\\epsilon$-optimal root \\\\\n    $\\mathcal{O}(I_{\\epsilon})$ & convergence rate of a quadrature method finding an $\\epsilon$-optimal integral \\\\\n\\end{tabularx}\n\n\\section*{Algorithms}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\tau$ & current time slot \\\\\n    $\\hat{x}$ & optimal value of $x$ with respect to some optimization \\\\\n    $X \\in 
\\mathcal{X}^t$ & schedule, i.e., sequence of configurations over the time horizon $t$ \\\\\n    $x, y \\in \\mathcal{X}$ & configuration \\\\\n    $i, j \\in [m_k]_0$ & value of dimension $k$ \\\\\n    $x_{k \\gets j}$ & configuration $x$ after updating dimension $k$ to $j$ \\\\\n\\end{tabularx}\n\n\\section*{Implementation}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $\\chi_{\\tau}(t, x) \\in \\bigcup_{n=1}^{\\infty} \\mathbb{R}_{\\geq 0}^n$ & hitting cost of action $x \\in \\mathcal{S}$ at time slot $t \\in [T]$ given the information from time slot $\\tau$ \\\\\n    $\\mathcal{P}_t \\in \\left(\\bigcup_{n=1}^{\\infty} \\mathbb{N}_0^n\\right)^e$ & predicted load profile for time slot $t$, i.e., vector of predicted loads for each job type\\\\\n\\end{tabularx}\n\n\\section*{Miscellaneous}\n\n\\nopagebreak\\begin{tabularx}{\\textwidth}{p{100pt}X}\n    $[n] := \\{1, \\dots, n\\}$ & range of natural numbers from $1$ to $n$ \\\\\n    $[n]_0 := \\{0\\} \\cup [n]$ & range of integers from $0$ to $n$ \\\\\n    $[a : b] :=\\newline \\{a, a+1, \\dots, b\\}$ & range of integers from $a$ to $b$ \\\\\n    $(x)_a^b :=\\newline \\max\\{a, \\min\\{b, x\\}\\}$ & uni-dimensional projection of $x \\in \\mathbb{R}$ onto $[a,b]$ \\\\\n    $(x)^+ := \\max\\{0, x\\}$ & uni-dimensional projection of $x \\in \\mathbb{R}$ onto $[0, \\infty)$ \\\\\n    $x_{\\text{min}}$ & smallest entry of the vector $x$ \\\\\n    $x_{\\text{max}}$ & largest entry of the vector $x$ \\\\\n\\end{tabularx}", "meta": {"hexsha": "5e80621ef0a5047bd5bb3d3cc3dfe2dda4c6e9d6", "size": 8049, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/pages/notation.tex", "max_stars_repo_name": "jonhue/bachelors-thesis", "max_stars_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/pages/notation.tex", "max_issues_repo_name": "jonhue/bachelors-thesis", "max_issues_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-09-08T11:45:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-05T07:47:11.000Z", "max_forks_repo_path": "thesis/pages/notation.tex", "max_forks_repo_name": "jonhue/bachelors-thesis", "max_forks_repo_head_hexsha": "17f760c5b1394a364a2fca1108e7a997201460aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-10-14T12:01:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-14T12:01:49.000Z", "avg_line_length": 64.9112903226, "max_line_length": 189, "alphanum_fraction": 0.6410734253, "num_tokens": 2750, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711756575749, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.5626153706494843}}
{"text": "\\iffalse\nIt's unfair to say that mathematicians aren't real doctors, we perform surgeries all the time. In this class we'll introduce the notion of a topological manifold via simplicial (delta) complexes. Spend a day or two doing examples and go over several notions like orientation, cobordism and of course surgery.\n\nKeywords: simplicial complex, manifold, orientation, cobordism, surgery\n\nType: Lecture\nHomework: Recommended\nPrereqs: None\n\\fi\n\n\n\n\\input{../preamble}\n\\rhead{\\scshape Mathcamp 2017 : All things Manifoldy}\n\\begin{document}\n\\title{Modular origami}\n\\author{Apurva Nakade}\n\\thispagestyle{fancy}\n\\maketitle\n\n\n\\section{Orientation of surfaces}\nLet us start with surfaces. A surface is called orientable if has two sides. A sphere or a torus are easily seen to be orientable. In fact any surface that can be \\textit{embedded} in $\\R^3$ is orientable. A Mobius strip on the other hand (which is not a surface as it has a boundary) is not orientable. With a little more mental effort one might be able to see that a Klein bottle or a Projective space are not orientable. We want an equivalent definition which can be used on gluing diagrams.\n\nWe begin by defining orientation of a polygon. An \\textbf{orientation of a polygon} is simply a cyclic ordering of it's vertices. For example this is one of the two possible ways to orient a triangle with vertices $(0,1,2)$.\n\n\\begin{center}\n\t\\begin{tabular}{c}\n\t\t\\centering \\includegraphics[height=3cm]{../noImageAvailable}\n\t\\end{tabular}\n\\end{center}\n\n\\begin{exercise}\n\tThis is related to the fact that the polygon has two sides via the right hand rule in physics. Do you see the connection?\n\\end{exercise}\n\nAn \\textbf{orientation} of (a gluing diagram of) a surface is a compatible choice of orientation for each triangle, where the orientations of two adjacent triangles need to be compatible in the following way,\n\\begin{center}\n\t\\begin{tabular}{c c c}\n\t\t\\centering \\includegraphics[height=3cm]{../noImageAvailable} & \\: & \\centering \\includegraphics[height=3cm]{../noImageAvailable}\n\t\\end{tabular}\n\\end{center}\nThis forces an edge to be directed in two different directions in adjacent triangles.\n\n\\begin{exercise}\n\tAdd the diagonals to the standard gluing diagrams to obtain triangulations and try to find orientations for them. Conclude that $S^2,T$ are orientable and $\\R\\P^2,K$ are not.\n\\end{exercise}\n\n\\begin{exercise}\n\tUse gluing diagrams to show that $T\\#T$ is orientable. Generalize this to argue that $T ^{\\# n}$ are orientable.\n\\end{exercise}\n\nThis same method also allows us to understand manifolds in higher dimensions as well.\n\n\n\n\n\n\n\n\\section{Simplices}\nAs we go to higher dimensions polygons (or is it polytopes?) become whacky are themselves quite hard to understand. So instead of looking at arbitrary polygons we look at the simplest polygons: triangles. 
We'll upgrade the definition of a triangle to a simplex, which can live in arbitrary dimensions.\n\n\\begin{definition}\n\tAn $n$-dimensional \\textbf{simplex} $\\Delta^n$ is any set which is homeomorphic to the following region in $\\R^{n}$.\n\t\\begin{align}\n\t\t\\{ (x_1, x_2, \\cdots, x_n) : x_1 + \\cdots + x_n \\le 1 \\mbox{ and each } x_i \\ge 0  \\}\n\t\\end{align}\n\tFor example, for $n = 2$ this is the triangle with vertices $(0,0)$, $(1,0)$, and $(0,1)$.\n\\end{definition}\nA 1-dimensional simplex is a segment and a 2-dimensional simplex is a triangle.\nSimplices are a topologist's best friend.\n\nNote that by adding extra diagonal lines we could have made the gluing diagrams entirely out of triangles.\nWe represent a simplex by its set of vertices.\n\n\\begin{figure}[h]\n\t\\centering \\includegraphics{../noImageAvailable}\n\t\\caption{Gluing diagram with labeled simplices}\n\t\\label{}\n\\end{figure}\n\nAs with origami, you can glue these simplices any way you want and get really interesting objects. We're going to glue them to create manifolds.\n\n\\section{Manifolds from simplices}\nA delta complex is a collection of simplices glued along their faces.\n\n\\begin{definition}\n\tFaces of a simplex followed by examples.\n\\end{definition}\n\n\\begin{definition}\n\tDefinition of a delta complex followed by several examples.\n\\end{definition}\n\nTalk about spheres, tori and projective spaces in 3 dimensions.\n\n\\begin{ques}\n\tWhen is a delta complex a manifold?\n\\end{ques}\n\nExamples of manifolds and non-manifolds.\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "3e069ead8ca2ee42f792a34c2214e2e7b1ff28d8", "size": 4226, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "01 All things manifoldy/04 All things manifoldy.tex", "max_stars_repo_name": "apurvnakade/mc2017", "max_stars_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01 All things manifoldy/04 All things manifoldy.tex", "max_issues_repo_name": "apurvnakade/mc2017", "max_issues_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01 All things manifoldy/04 All things manifoldy.tex", "max_forks_repo_name": "apurvnakade/mc2017", "max_forks_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.1296296296, "max_line_length": 494, "alphanum_fraction": 0.7702318978, "num_tokens": 1084, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859598, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.5625511356440303}}
{"text": "\\section{Communication}\n\\label{sec:comm}\n\nEach of the two MPI implementations consists of two communication steps, but they have different dimensions in each one.\n\n\\begin{itemize}\n\t\\item \\textbf{Bucket count communication}. In the first implementation, each process has to send each of their local bucket sizes to a single process. So for the complete matrix of all bucket sizes (this matrix has size $P \\times B$) each element requires a single Send/Receive. This makes the complexity for this step $\\Theta(P \\cdot B)$\n\t\tWith the addition of load balance control, this entire matrix has to be sent to every processor, making complexity becomes $\\Theta(P^2 \\cdot B)$. Although this complexity is higher for the version that should provide the best results, this is because the main overhead is not in the bucket size communication, but in the communication of the elements themselves. The number of buckets is usually really small (only 256 buckets for $g=8$), and the input can consist in millions of elements, this difference is clear.\n\n\t\\item \\textbf{Keys communication}. Each process has to send their local keys to the apropriate destination. So for $N$ keys, the complexity of communication becomes $O(N)$. Notice that this is $O$ notation and not $\\Theta$ because some of the keys may already be in the appropriate process, requiring only a local copy in memory. The difference is in the balance of the keys. For the first version, the number of keys that a single process sends or receives can be anything between 0 and $N$, and it may happen that the weight of communication will be only in a subset of all processes, increasing communication delay. But in the second implementation, each process is garanteed to send exactly $N/P$ keys and receive that same amount back from other processes. This may have a huge impact in communication delay, and consequentely, in the overall performance of the algorithm.\n\n\\end{itemize}\n\nAlso note that this analysis refers only to a single iteration of radix sort. 
Every iteration has the same complexity, so the overall complexity equals that of a single iteration multiplied by the number of iterations.\n", "meta": {"hexsha": "323aa55983cbb1d27ee2064f251affce75082ae1", "size": 2167, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/first/report_files/7_comm.tex", "max_stars_repo_name": "naps62/parallel-sort", "max_stars_repo_head_hexsha": "23ffbc48e06c4ad79d41a103e09a750c5c4eef56", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2015-02-02T00:03:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-14T05:12:23.000Z", "max_issues_repo_path": "doc/first/report_files/7_comm.tex", "max_issues_repo_name": "naps62/parallel-sort", "max_issues_repo_head_hexsha": "23ffbc48e06c4ad79d41a103e09a750c5c4eef56", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-05T16:08:06.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-05T17:02:53.000Z", "max_forks_repo_path": "doc/first/report_files/7_comm.tex", "max_forks_repo_name": "naps62/parallel-sort", "max_forks_repo_head_hexsha": "23ffbc48e06c4ad79d41a103e09a750c5c4eef56", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2015-07-10T18:32:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-08T18:50:18.000Z", "avg_line_length": 144.4666666667, "max_line_length": 878, "alphanum_fraction": 0.7909552377, "num_tokens": 460, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5625511188925539}}
{"text": "\\section{Electric charges, fields, and potentials}\n\nBiological tissue contains a high number of mobile electric charges (mostly ions, with \\ce{Na+}, \\ce{Cl-}, and \\ce{K+} the most abundant ones \\cite{Martinsen2015a}). Each charge is an electric monopole that emits an electric field, whose magnitude decays as $1/r^2$ (where $r$ is the distance from the charge). The electric field vectors point outwards for positive charges, and inwards for negative charges. All these electric fields summate linearly to form the total electric field vector $\\E(\\loc,t)$ at location $\\loc$ and time $t$.\n\nThese last facts (and more) are concisely described by the first of Maxwell's equations, also known as Gauss's law \\cite{Feynman2013}:\n%\n\\begin{equation}\n\\label{eq:gauss}\n\\div{\\E(\\loc,t)} = \\frac{\\rho(\\loc,t)}{\\eps_0},\n\\end{equation}\n%\nwhere $\\rho(\\loc,t)$ (in \\si{\\coulomb\\per\\metre^3}) is the charge density at location $\\loc$, and $\\eps_0 \\approx \\SI{10}{\\pF\\per\\m}$ is the distributed capacitance (permitivity) of free space. (For an intuitive explanation of the $\\div$ notation, see \\cref{sec:div_curl}).\n\nIn the complete Maxwell's equations (\\cref{appendix_maxwell}), the electric and the magnetic fields are coupled, and cannot be described separately. When neither field changes too quickly over time however, the equations become decoupled. The electric field is then determined only by the locations of electric charges, and the existence of the magnetic field can be safely ignored. In electrophysiological conditions, this so called ``quasi-static'' assumption is met (\\cite{Nunez2006,Plonsey2007}). The electric field is then completely described by the equations of electrostatics, namely \\cref{eq:gauss} and the following \\cref{eq:irrotational} -- even under normal time-varying conditions.\n%\n\\begin{equation}\n\\label{eq:irrotational}\n\\curl{\\E(\\loc,t)} = \\vb{0}\n\\end{equation}\n%\nA consequence of \\cref{eq:irrotational} is that a potential function can be defined for the electric field \\cite{Feynman2013}. This is the electric field potential, $\\phi(\\loc,t)$. It is a scalar field that is defined such that\n%\n\\begin{equation}\n\\label{eq:potential}\n\\E(\\loc,t) = -\\grad{\\phi(\\loc,t)};\n\\end{equation}\n%\ni.e. such that the electric field points from locations of high potential to locations of lower potential. This is useful because the scalar field $\\phi(\\loc,t)$ is easier to reason about than the vector field $\\E(\\loc,t)$, while it contains the same amount of information. From \\cref{eq:potential}, it is clear that $\\phi(\\loc,t)$ is only defined up to a constant. A natural choice of absolute electric potential is to shift the electric potential $\\phi(\\loc,t)$ so that a point with no charges around (or equivalently, surrounded by positive and negative charges that balance each other out) has $\\phi(\\loc,t) = \\SI{0}{\\volt}$ (see \\cref{sec:appendix_potential}).\n\nIn neuroscience, the electric potential $\\phi(\\loc,t)$ outside cells is often called the `local field potential' (LFP) -- especially when only frequencies below about \\SI{500}{\\hertz} are considered.\\footnotemark{} The adjective `local' is a bit misleading: the LFP is not more `local' than any other electric field potential. The LFP, $\\phi(\\loc,t)$, is the quantity that we measure in sharp wave-ripple detection.\\phantomsection\\label{def:LFP}\n\n\\footnotetext{The naming for $\\phi(\\loc,t)$ is in general inconsistent. 
It is variably denoted by `voltage', `electric field potential', `electric potential', `field potential', `potential field', or simply `potential'.}\n\nAlthough the above equations are complete, it is not clear from them how the LFP arises from charge distributions in the brain. The following section derives a more explicit formula for $\\phi(\\loc,t)$ in terms of free charges (i.e. mostly ions).\n\n\n\n\n\\section{Electric potential in neural tissue}\n\n\\Cref{eq:gauss,eq:irrotational} are valid from the scale of atoms to the scale of the brain and beyond; material (or tissue) properties appear implicitly as nanoscale variations in the charge density term. When working at macroscopic scales (as we do), it is easier however to use a slightly different but equivalent formulation, where material properties are explicitly built into the equation. This formulation rests on two ideas. The first is the separation of charges $\\rho$ into bound charges $\\rho_\\bound$ and free charges $\\rho_\\free$ (which are mostly ions in the brain): $\\rho = \\rho_\\bound + \\rho_\\free$. The second is the assumption that the polarisation of materials is directly proportional to the electric field strength. This is a common assumption, that is largely valid for brain tissue in normal conditions \\cite{Nunez2006}. It can then be shown \\cite{Feynman2013} that \\cref{eq:gauss,eq:irrotational} are equivalent to:\n%\n\\begin{align}\n\\div(\\eps_r(\\loc,t)\\; \\E(\\loc,t))\n    &= \\frac{\\rho_\\free(\\loc,t)}{\\eps_0}   \\label{eq:gauss_brain} \\\\\n\\curl{\\E(\\loc,t)}\n    &= \\vb{0},                             \\label{eq:irrot_brain}\n\\end{align}\n%\nwhere $\\eps_r(\\loc,t)$ is a dimensionless material property. $\\eps_r(\\loc,t) = \\eps(\\loc,t) / \\eps_0$, with $\\eps(\\loc,t)$ the absolute permittivity of the material at location $\\loc$ and $\\eps_r(\\loc,t)$ the relative permittivity of that material (also known as the dielectric constant; sometimes denoted by $\\kappa$). $\\eps$ and $\\eps_r$ are scalars for isotropic materials, and $3 \\cross 3$ matrices for anisotropic materials.\n\nBroadly, the brain tissue relevant for sharp wave-ripples can be divided into two tissue types with different material properties. (See \\cref{fig:neuropil}). The first type is the seawater-like fluid inside and in between cells. It has a relative permittivity $\\eps_r \\approx 75$ and is moderately conductive due to the free ions it contains (conductivity $\\sigma \\approx \\SI{1}{\\siemens\\per\\metre}$; compare with $\\approx \\SI{1e7}{\\siemens\\per\\metre}$ for most metals) \\cite{Michel2017,Martinsen2015,Marszalek1991,Nunez2006}. The other type is the lipid bilayer membranes, separating the insides from the outside of cells, organelles, and vesicles. They have a relative permittivity $\\eps_r \\approx 5$ and are ordinarily not conductive ($\\sigma \\approx \\SI{1e-6}{\\siemens\\per\\metre}$)\\footnotemark{} \\cite{Marszalek1991,Weaver2003}.\n\n\\footnotetext{Translated to the scale of neural tissue, a patch of membrane (which is $\\approx \\SI{5}{\\nano\\metre}$ thick \\cite{Goodsell2014}) of area $1 \\cross 1 \\si{\\micro\\metre^2}$ has a resistance of $\\approx \\SI{5}{\\giga\\ohm}$. 
A similar `slice' of extracellular fluid with a volume of $1 \\si{\\micro\\metre^2} \\cross \\SI{5}{\\nano\\metre}$ has a resistance of $\\approx \\SI{5}{\\kilo\\ohm}$.}\n%\n% Conductivity \u03c3 [S/m = S m^2 / m^3]\n% Resistivity \u03c1 = 1 / \u03c3 [\u03a9 m]\n% Conductance G [S = A/V]\n%\n% wikipedia:\n% \u03c1 = R A / l  <=>  R  = \u03c1 l / A  = l / (A \u03c3)\n% => 1 / \u03c3 = A / (G l)\n% or G = \u03c3 * A / l\n% The longer, the less conductive. OK.\n\n% Extracellular space occupies 0.02 - 0.2 of cortex\n% Neurons: 0.4 - 0.5\n% Glia: 0.3 - 0.5\n\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{EM}\n\\caption{\\textbf{The environment in which recordings are made.} \\emph{Figure\nadapted from \\cite{Knott2008}}. Scanning electron micrograph of a slice of mouse cortex. (Rat hippocampal tissue looks similar, see e.g. \\cite{Martin2017}). This type of tissue is called ``neuropil''. It consists of the long and narrow excrescences of neurons (dendrites and axons) and of glial cells, seen here in cross and through section. Note that cell bodies are much larger than the cross sections of dendrites and axons as seen here; a typical neural cell body (``soma'') of \\SI{15}{\\micro\\metre} wide would be larger than this image (which is \\SI{7}{\\micro\\metre} wide). Some landmarks are the many mitochondria (two of which are annotated with red *); two myelinated axons (green *); and all the lipid bilayers separating cells from their environment (thin black lines, two of which are highlighted in blue). The pink inset (\\SI{1}{\\micro\\meter} by \\SI{1.24}{\\micro\\meter}) shows two ``boutons'' (axon terminals) synapsing onto a dendrite. Note the synaptic vesicles (the many small dark circles), which encapsulate neurotransmitter molecules, ready for release in the synaptic clefts (black and white arrows).}\n\\label{fig:neuropil}\n\\end{figure}\n\n\nThe relative permittivity $\\eps_r(\\loc,t)$ is therefore not uniform even throughout small regions in the brain. As a consequence, \\cref{eq:gauss_brain,eq:irrot_brain} together with \\cref{eq:potential} do not have a simple solution expressing the electric potential in terms of excess ion charges, and must instead be solved numerically.\n\nWe can however get some sense of the behavior of the electric potential $\\phi(\\loc,t)$ by assuming a uniform permittivity $\\eps$ throughout the neural tissue. (This assumption is often implicitly made -- and is rarely even mentioned -- in the electrophysiology literature \\cite{Plonsey2007,Nunez2006,Destexhe2013}). It can then be shown that \\cref{eq:gauss_brain,eq:irrot_brain,eq:potential} have an intuitive and straightforward solution for the electric potential \\cite{Feynman2013}. When we divide the tissue into many tiny volumes $\\dd{V}$ that each contain a net free charge density $\\rho_\\free(t)$ at time $t$, the electric potential at any point and time can be calculated as\n%\n\\begin{equation}\n\\label{eq:charge_sum}\n\\phi(\\loc,t) = \\frac{1}{4 \\pi \\eps} \\int_V \\frac{\\rho_\\free(t)}{r} \\dd{V},\n\\end{equation}\n% \nwhere $r$ is the distance from the measurement location $\\loc$ to each tiny volume $\\dd{V}$, and where we sum over the whole region $V$ where free charges are present.\n\n\\Cref{eq:charge_sum} is a symbolic translation of the claim made in the introduction of this chapter, which I summarise here: Each free charge emits a potential field that decays as $1/r$. This potential field is positive around positive ions, and negative around negative ions. All these potential fields summate linearly, to form the total electric potential $\\phi(\\loc,t)$.\n\n
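As a purely illustrative numerical sketch (assuming a uniform permittivity, and not part of any recording pipeline described here), the discretised version of \\cref{eq:charge_sum} for a set of point charges can be written as:\n\n\\begin{verbatim}\n# Illustrative sketch only: discretised version of the equation\n# above, summing q / (4 pi eps r) over point charges.\nimport numpy as np\n\neps = 75 * 8.854e-12   # assumed uniform permittivity (F/m)\n\ndef phi(loc, charge_locs, charges):\n    # loc: (3,) point; charge_locs: (n, 3); charges: (n,) in coulomb\n    r = np.linalg.norm(charge_locs - loc, axis=1)\n    return np.sum(charges / (4 * np.pi * eps * r))\n\\end{verbatim}\n\n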
This description is already useful to `explain' multiple phenomena observed in actual $\\phi(\\loc,t)$-recordings in the brain, as the next section shows.\n", "meta": {"hexsha": "def17cc71856523d643cff4e9877134d7daf23fc", "size": 10127, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modules/Scraps/voltage/Electrostatics.tex", "max_stars_repo_name": "tfiers/master-thesis", "max_stars_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-23T01:39:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-23T01:39:24.000Z", "max_issues_repo_path": "modules/Scraps/voltage/Electrostatics.tex", "max_issues_repo_name": "tfiers/master-thesis", "max_issues_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 46, "max_issues_repo_issues_event_min_datetime": "2018-09-18T16:38:12.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-10T22:37:35.000Z", "max_forks_repo_path": "modules/Scraps/voltage/Electrostatics.tex", "max_forks_repo_name": "tfiers/master-thesis", "max_forks_repo_head_hexsha": "3e97128eeb18827b03da90817fe6f6985c84ad80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.8924731183, "max_line_length": 1120, "alphanum_fraction": 0.7505677891, "num_tokens": 2753, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.7025300698514777, "lm_q1q2_score": 0.5625102148444329}}
{"text": "\\documentclass[10pt]{article}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[all]{xy}\n\n\\usepackage[english]{babel}\n\n\\usepackage[nottoc]{tocbibind}\n\n\\usepackage{amsmath,amsthm,amssymb,color,latexsym}\n\\usepackage{geometry}        \n\\geometry{letterpaper}    \n\\usepackage{graphicx}\n\\usepackage{physics}\n\n\\usepackage{listings}\n\\usepackage[table,xcdraw]{xcolor}\n\n\\usepackage[export]{adjustbox}\n\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.5,0.5,0.5}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.95,0.95,0.92}\n\\usepackage{multirow}\n\\usepackage{float}\n\n\\lstdefinestyle{mystyle}{\n  backgroundcolor=\\color{backcolour},   commentstyle=\\color{codegreen},\n  keywordstyle=\\color{magenta},\n  numberstyle=\\tiny\\color{codegray},\n  stringstyle=\\color{codepurple},\n  basicstyle=\\ttfamily\\footnotesize,\n  breakatwhitespace=false,         \n  breaklines=true,                 \n  captionpos=b,                    \n  keepspaces=true,                 \n  numbers=left,                    \n  numbersep=5pt,                  \n  showspaces=false,                \n  showstringspaces=false,\n  showtabs=false,                  \n  tabsize=2\n}\n\n\\lstset{style=mystyle}\n\n\\usepackage{graphicx}\n\\graphicspath{ {./images/} }\n\n\n\\newtheorem{problem}{Problem}\n\n\\newenvironment{solution}[1][\\it{Solution}]{\\textbf{#1. } }{$\\square$}\n\n\n\\begin{document}\n\\noindent CS 6140: Machine Learning Spring 2021\\hfill Homework Assignment \\#4\\\\\nSakshi Suman\\hfill suman.sak@northeastern.edu\n\n\\hrulefill\n\n\n\\begin{problem}\n\nConsider a logistic regression problem where $\\mathcal{X}=\\mathbb{R}^{d}$ and $\\mathcal{Y}=\\{-1,+1\\}$. Derive the weight update rule that maximizes the conditional likelihood assuming that a data set $\\mathcal{D}=\\left\\{\\left(x_{i}, y_{i}\\right)\\right\\}_{i=1}^{n}$ is given.\n\n\\end{problem}\n\\begin{solution}\n\nTo frame the learning problem as parameter estimation, let the data set $\\mathcal{D}=\\left\\{\\left(\\mathbf{x}_{i}, y_{i}\\right)\\right\\}_{i=1}^{n}$ is an i.i.d. sample from a fixed but unknown probability distribution $p(\\mathbf{x}, y) .$ Even more specifically, we can assume that the data generating process randomly draws a data point $\\mathrm{x},$ a realization of the random vector $\\left(X_{0}=1, X_{1}, \\ldots, X_{d}\\right),$ according to $p(\\mathbf{x})$ and then sets its class label $Y$ according to the Bernoulli distribution\n\n\\begin{equation*}\n\\scalebox{1.4}{\n$p(y \\mid \\mathbf{x}) = \n        \\begin{cases}\n            \\left(\\frac{1}{1+e^{-\\omega^{\\top} \\mathbf{x}}}\\right)^{\\frac{1+y}{2}} &\\text{ for } y=1 \\\\\n\t\t\t  \\left(1-\\frac{1}{1+e^{-\\omega^{\\top} \\mathbf{x}}}\\right)^{\\frac{1-y}{2}} &\\text{ for } y=-1\n        \\end{cases}$\n        }\n    \\end{equation*}\n\nwhere $\\omega=\\left(\\omega_{0}, \\omega_{1}, \\ldots, \\omega_{d}\\right)$ is a set of unknown coefficients we want to recover (or learn) from the observed data $\\mathcal{D}$. 
Based on the principles of parameter estimation, we can estimate $\\mathbf{w}$ by maximizing the conditional likelihood of the observed class labels $\\mathbf{y}=\\left(y_{1}, y_{2}, \\ldots, y_{n}\\right)$ given the inputs $\\mathbf{X}=\\left(\\mathbf{x}_{1}^{\\top}, \\mathbf{x}_{2}^{\\top}, \\ldots, \\mathbf{x}_{n}^{\\top}\\right)$.\nWe shall first write the conditional likelihood function $p(\\mathbf{y} \\mid \\mathbf{X}, \\mathbf{w}),$ or simply $l(\\mathbf{w}),$ as\n$$\nl(\\mathbf{w})=\\prod_{i=1}^{n} p\\left(y_{i} \\mid \\mathbf{x}_{i}, \\mathbf{w}\\right).\n$$\n\nThe parameter vector that maximizes the likelihood is\n\n$$\n\\begin{aligned}\n\\mathbf{w}_{\\mathrm{ML}} &=\\underset{\\mathbf{w}}{\\arg \\max }\\{l(\\mathbf{w})\\} \\\\\n&=\\underset{\\mathbf{w}}{\\arg \\max }\\left\\{\\prod_{i=1}^{n} p\\left(y_{i} \\mid \\mathbf{x}_{i}, \\mathbf{w}\\right)\\right\\} .\n\\end{aligned}\n$$\n\nThe likelihood function is\n$$\nl(\\mathbf{w})=\\prod_{i=1}^{n}\\left(\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right)^{\\frac{1+y_{i}}{2}} \\cdot\\left(1-\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right)^{\\frac{1-y_{i}}{2}}.\n$$\n\nAs $\\log(x)$ is a strictly increasing function, maximizing the likelihood is equivalent to maximizing the log-likelihood function $ll(\\mathbf{w})=\\log (l(\\mathbf{w}))$:\n$$\nll(\\mathbf{w})=\\sum_{i=1}^{n}\\left(\\left(\\frac{1+y_{i}}{2}\\right) \\cdot \\log \\left(\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right)+\\left(\\frac{1-y_{i}}{2}\\right) \\cdot \\log \\left(1-\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right)\\right).\n$$\n\n(Equivalently, minimizing the negative log-likelihood, i.e., the cross-entropy, is the same as maximizing the likelihood.) Using the identity $\\log \\left(1-\\frac{1}{1+e^{-z}}\\right)=-z+\\log \\left(\\frac{1}{1+e^{-z}}\\right)$, the log-likelihood simplifies to\n\n$$\nll(\\mathbf{w})=\\sum_{i=1}^{n}\\left(\\left(\\frac{y_{i}-1}{2}\\right) \\mathbf{w}^{\\top} \\mathbf{x}_{i}+\\log \\left(\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right)\\right).\n$$\n\nThere is no closed-form solution to $\\nabla ll(\\mathbf{w})=\\mathbf{0}$, so we have to proceed with iterative optimization methods. The goal is therefore to calculate the gradient $(\\nabla ll(\\mathbf{w}))$ and Hessian $\\left(H_{ll(\\mathbf{w})}\\right)$ in order to specify the update rule of the Newton-Raphson method, as a function of the inputs $\\mathbf{X}$, the class labels $\\mathbf{y},$ and the current parameter vector. 
We can calculate the first and second partial derivatives of $ll(\\mathbf{w})$ as follows:\n\n$$\n\\begin{aligned}\n\\frac{\\partial ll(\\mathbf{w})}{\\partial w_{j}} &=\\sum_{i=1}^{n}\\left(\\left(\\frac{y_{i}-1}{2}\\right) \\cdot x_{i j}-\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}} \\cdot e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}} \\cdot\\left(-x_{i j}\\right)\\right) \\\\\n&=\\sum_{i=1}^{n} x_{i j} \\cdot\\left(\\left(\\frac{y_{i}-1}{2}\\right)+\\frac{e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right) \\\\\n&=\\sum_{i=1}^{n} x_{i j} \\cdot\\left(\\left(\\frac{y_{i} + 1}{2}\\right)-\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right) \\\\\n&=\\mathbf{f}_{j}^{\\top}\\left(\\left(\\frac{\\mathbf{y}+\\mathbf{1}}{2}\\right)-\\mathbf{p}\\right)\n\\end{aligned}\n$$\n\nwhere $\\mathbf{f}_{j}$ is the $j$-th column (feature) of data matrix $\\mathbf{X}, \\mathbf{y}$ is an $n$-dimensional column vector of class labels, $\\mathbf{1}$ is a vector containing $n$ 1s and $\\mathbf{p}$ is an $n$-dimensional column vector of (estimated) posterior probabilities $p_{i}=P\\left(Y_{i}=1 \\mid \\mathbf{x}_{i}, \\mathbf{w}\\right),$ for $i=1, \\ldots, n$. Considering partial derivatives for every component of $\\mathbf{w}$, we have\n$$\n\\nabla ll(\\mathbf{w})=\\mathbf{X}^{T}\\left(\\left(\\frac{\\mathbf{y} + \\mathbf{1}}{2}\\right)-\\mathbf{p}\\right).\n$$\n\nThe second partial derivatives of the log-likelihood function can be found as\n$$\n\\begin{aligned}\n\\frac{\\partial^{2} ll(\\mathbf{w})}{\\partial w_{j} \\partial w_{k}} &=\\sum_{i=1}^{n} x_{ij} \\cdot \\frac{e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}{\\left(1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}\\right)^{2}} \\cdot\\left(-x_{i k}\\right) \\\\\n&=-\\sum_{i=1}^{n} x_{i j} \\cdot \\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}} \\cdot\\left(1-\\frac{1}{1+e^{-\\mathbf{w}^{\\top} \\mathbf{x}_{i}}}\\right) \\cdot x_{i k} \\\\\n&=-\\mathbf{f}_{j}^{\\top} \\mathbf{P}(\\mathbf{I}-\\mathbf{P}) \\mathbf{f}_{k},\n\\end{aligned}\n$$\n\nwhere $\\mathbf{P}$ is an $n \\times n$ diagonal matrix with $P_{i i}=p_{i}=P\\left(Y_{i}=1 \\mid \\mathbf{x}_{i}, \\mathbf{w}\\right)$ and $\\mathbf{I}$ is an $n \\times n$ identity matrix. The Hessian matrix $H_{ll(\\mathbf{w})}$ can now be calculated as\n$$\nH_{ll(\\mathbf{w})}=-\\mathbf{X}^{T} \\mathbf{P}(\\mathbf{I}-\\mathbf{P}) \\mathbf{X}.\n$$\n\nThe weight update rule of the Newton-Raphson method is then\n$$\n\\mathbf{w}^{(t+1)}=\\mathbf{w}^{(t)}+\\left(\\mathbf{X}^{\\top} \\mathbf{P}^{(t)}\\left(\\mathbf{I}-\\mathbf{P}^{(t)}\\right) \\mathbf{X}\\right)^{-1} \\mathbf{X}^{\\top}\\left(\\left(\\frac{\\mathbf{y}+\\mathbf{1}}{2}\\right)-\\mathbf{p}^{(t)}\\right).\n$$\n\n\\end{solution}\n\n\n
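\nAs an illustrative aside to the solution above (a minimal sketch, not part of the required derivation; $newton\\_step$ is just an illustrative name), one Newton-Raphson step can be written in NumPy as:\n\n\\begin{lstlisting}[language=Python]\n# Minimal sketch of one Newton-Raphson step for the update rule above.\n# X includes a leading column of ones; y holds labels in {-1, +1}.\nimport numpy as np\n\ndef newton_step(w, X, y):\n    p = 1.0 / (1.0 + np.exp(-X @ w))      # p_i = P(Y_i = 1 | x_i, w)\n    grad = X.T @ ((y + 1) / 2 - p)        # gradient of ll(w)\n    S = p * (1 - p)                       # diagonal of P(I - P)\n    H = X.T @ (S[:, None] * X)            # X^T P (I - P) X\n    return w + np.linalg.solve(H, grad)   # w + H^{-1} grad\n\\end{lstlisting}\n\n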
\\begin{problem}\nConsider a logistic regression problem with its initial solution obtained through the OLS regression; i.e., $\\mathbf{w}^{(0)}=\\left(\\mathbf{X}^{T} \\mathbf{X}\\right)^{-1} \\mathbf{X}^{T} \\mathbf{y},$ in the context of the code provided in class (week 6). Recall that $\\mathbf{x}$ was drawn from a mixture of two Gaussian distributions with $\\operatorname{dim}\\{\\mathbf{x}\\}=2$ (before adding a column of ones) and that $y \\in\\{0,1\\}$. You probably noticed that the initial separation line is consistently closer to the data points of class 0.\\\\\\\\\na) (10 points) Why was this the case? Draw a picture (if possible) to support your argument.\\\\\nb) (5 points) Devise a better initial solution by modifying the standard formula $\\mathbf{w}^{(0)}=\\left(\\mathbf{X}^{T} \\mathbf{X}\\right)^{-1} \\mathbf{X}^{T} \\mathbf{y}$.\\\\\nc) (5 points) Now again consider the case where $y \\in\\{-1,+1\\}$. What is the form of the modified solution from part (b) in this case?\n\\end{problem}\n\\begin{solution}\n\n\\subsection*{a)}\n\nLet us add a component $x_{0}=1$ to each input $\\left(x_{1}, \\ldots, x_{d}\\right)$. This extends the input space to $\\mathcal{X}=\\mathbb{R}^{d+1}$ but, fortunately, it also leads us to a simplified notation in which the decision boundary in $\\mathbb{R}^{d}$ can be written as $\\mathbf{w}^{\\top} \\mathbf{x}=0$, where $\\mathbf{w}=\\left(w_{0}, w_{1}, \\ldots, w_{d}\\right)$ is a set of weights and $\\mathbf{x}=\\left(x_{0}=1, x_{1}, \\ldots, x_{d}\\right)$ is any element of the input space. The actual inputs are $d$-dimensional.\n\nThe plane $w_0 + w_1x_1 + w_2x_2 + \\ldots + w_dx_d = 0$ and the plane $y = 0$ intersect in a $(d-1)$-dimensional plane (a line when $d = 2$) in $d + 1$ dimensions, where $d$ dimensions come from $\\mathcal{X}$ and the extra dimension comes from $y$.\n\nThis surface intersects the $d$ dimensions coming from $\\mathcal{X}$ with intercepts\n\n$\\left(x_1=-\\frac{w_0}{w_1}, x_2=0, \\ldots , x_d = 0, y = 0\\right), \\left(x_1=0, x_2=-\\frac{w_0}{w_2}, \\ldots , x_d = 0, y = 0\\right), \\ldots,$\n\n$ \\left(x_1=0, x_2=0, \\ldots , x_d = -\\frac{w_0}{w_d}, y = 0\\right)$ .\n\nIn the class demo, $d = 2$. The data is obtained from a Gaussian distribution with the mean for class $0$ at $(1, 2)$ and the mean for class $1$ at $(6, 4)$. The above plane becomes $w_0 + w_1x_1 + w_2x_2 = 0$ and $y = 0$. The intersection of these planes is a line in the $x_1x_2$ plane. This is highlighted in red as shown below:\n\n\\begin{center}\n\\includegraphics[width=16cm, keepaspectratio]{Initial_Separation_Line_Class}\n\\end{center}\n\n\\begin{center}\n\\includegraphics[width=16cm, keepaspectratio]{Problem2}\n\\end{center}\n\n\\subsection*{b)}\n\nHowever, a better initial solution can be devised with a good understanding of the two classes. Instead of taking the separation plane/line as the intersection of $\\mathbf{w}^{\\top} \\mathbf{x}=0$ and $y = 0$ (initial separation line 1) or as the intersection of $\\mathbf{w}^{\\top} \\mathbf{x}=0$ and $y = 1$ (initial separation line 2), we could take a value that favours both classes equally, which in this case is the intersection of $\\mathbf{w}^{\\top} \\mathbf{x}=0$ and $y = 0.5$ (optimal separation line), as it lies between $y = 0$ and $y = 1$ and, more importantly, is equidistant from the two planes in Euclidean space. Intersection with a plane such as $y = 0.3$ (suboptimal separation line) would be sub-optimal. This gives a modified initial solution as\n\n$$\n\\mathbf{w}^{(0)}=\\left(\\mathbf{X}^{T} \\mathbf{X}\\right)^{-1} \\mathbf{X}^{T} (\\mathbf{y} - 0.5)\n$$\nwhere the value $0.5$ above is subtracted from every class label.\n\\\\\n\nThe modified line of intersection can be seen below:\n\n\\begin{center}\n\\includegraphics[width=16cm, keepaspectratio]{Final_Separation_Line_Class}\n\\end{center}\n\nThis definitely looks like a better initial solution than the previous one.\n\n\\subsection*{c)}\n\nIn this case, $y \\in \\{-1, +1\\}$, so the mean of the possible class labels is $0$. This means that the solution used in the actual code given on the class website should be fine. 
Hence, we can use\n\n$$\n\\mathbf{w}^{(0)}=\\left(\\mathbf{X}^{T} \\mathbf{X}\\right)^{-1} \\mathbf{X}^{T} \\mathbf{y}.\n$$\n\n\\end{solution}\n\n\n\\begin{problem}\n(40 points) Consider two classification concepts given in Figure 1, where $x \\in \\mathcal{X}=[-6,6] \\times$ $[-4,4], y \\in \\mathcal{Y}=\\{-1,+1\\}$ and $p(y \\mid x) \\in\\{0,1\\}$ is defined in the drawing.\n\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{Problem3}\n\\caption{Two concepts where examples that fall within any of the three $3 \\times 3$ (panel A) or $1 \\times 1$ (panel B) squares are labeled positive and the remaining examples (outside each of the squares but within $\\mathcal{X}$ ) are labeled negative. The position of the point $x=\\left(x_{1}, x_{2}\\right)$ in the upper left-hand corner for each square is shown in the picture. Consider the horizontal axis to be $x_{1}$ and the vertical axis to be $x_{2}$.}\n\\end{figure}\n\n\nYour experiments in this question will rely on generating a data set of size $n \\in\\{250,1000,10000\\}$ drawn from a uniform distribution in $\\mathcal{X}$ and labeled according to the rules from Figure $1$; e.g., $P(Y=1 \\mid x)=1$ if the randomly drawn $x$ is inside any of the three squares in either of the two panels, and $P(Y=1 \\mid x)=0$ otherwise. The goal of the following two problems will be to train and evaluate classifiers created from the data generated in this way. You can use any library you want in this assignment and do programming in Python, MATLAB, $R$ or $C / C++$. Your code should be easy to run for each question and sub-question below so that we can replicate your results to the maximum extent possible.\n\nConsider single-output feed-forward neural networks with one or two hidden layers such that the number of hidden neurons in each layer is $h_{1} \\in\\{1,4,12\\}$ and $h_{2} \\in\\{0,3\\},$ respectively, with $h_{2}=0$ meaning that there is no second hidden layer. Consider one of the standard objective functions as your optimization criterion and use early stopping and regularization as needed. Consider a hyperbolic tangent activation function in each neuron and the output but you are free to experiment with others if you'd like to. For each of the architectures, defined by a parameter combination $\\left(h_{1}, h_{2}\\right),$ evaluate the performance of each model using classification accuracy, balanced accuracy, and area under the ROC curve as your performance criteria. To evaluate the performance of your models use cross-validation. However, to evaluate the performance of performance evaluation, generate another very large data set on a fine grid in $\\mathcal{X}$. Then use the predictions from your trained model on all these points to determine the ``true'' performance. You can threshold your predictions in the middle of your prediction range (i.e., at 0.5 if you are predicting between 0 and 1) to determine binary predictions of your models and then compare those with the true class labels you generated on the fine grid.\n\nProvide meaningful comments about all aspects of this exercise (performance results for different network architectures, accuracy of cross-validation, run time, etc.). 
The comments should not just re-state the results but rather capture trends and give reasoning as to why certain behavior was observed.\n\\end{problem}\n\n\\begin{solution}\n\nFor code, please refer to \\textbf{NeuralNet.ipynb}.\n\n\\subsection*{Results}\n\n\n\n\n\n\n\\begin{table}[H]\n\\tiny\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\rowcolor[HTML]{ECF4FF} \n\\cellcolor[HTML]{ECF4FF}                        & \\cellcolor[HTML]{ECF4FF}                     & \\cellcolor[HTML]{ECF4FF}                     & \\multicolumn{4}{l|}{\\cellcolor[HTML]{ECF4FF}Balanced Accuracy}                                                                    & \\multicolumn{4}{l|}{\\cellcolor[HTML]{ECF4FF}Classification Accuracy}                                                              & \\multicolumn{4}{l|}{\\cellcolor[HTML]{ECF4FF}Area Under the ROC Curve}                                                             \\\\ \\cline{4-15} \n\\cellcolor[HTML]{ECF4FF}                        & \\cellcolor[HTML]{ECF4FF}                     & \\cellcolor[HTML]{ECF4FF}                     & \\multicolumn{2}{l|}{\\cellcolor[HTML]{F8FF00}Uniform}            & \\multicolumn{2}{l|}{\\cellcolor[HTML]{FFFFC7}Fine}               & \\multicolumn{2}{l|}{\\cellcolor[HTML]{F8FF00}Uniform}            & \\multicolumn{2}{l|}{\\cellcolor[HTML]{FFFFC7}Fine}               & \\multicolumn{2}{l|}{\\cellcolor[HTML]{F8FF00}Uniform}            & \\multicolumn{2}{l|}{\\cellcolor[HTML]{FFFFC7}Fine}               \\\\ \\cline{4-15} \n\\multirow{-3}{*}{\\cellcolor[HTML]{ECF4FF}$n$}     & \\multirow{-3}{*}{\\cellcolor[HTML]{ECF4FF}$h_1$} & \\multirow{-3}{*}{\\cellcolor[HTML]{ECF4FF}$h_2$} & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B & \\cellcolor[HTML]{329A9D}Case A & \\cellcolor[HTML]{96FFFB}Case B \\\\ \\hline\n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{9AFF99}                        & \\cellcolor[HTML]{FE0000}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.5                            & 0.5                            & 0.5                            & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.72   & \\cellcolor[HTML]{EFEFEF}0.964  & \\cellcolor[HTML]{EFEFEF}0.7188 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.5645                         & 0.4226                         & 0.5992                         & 0.5025                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{9AFF99}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FE0000}1}  & \\cellcolor[HTML]{F8A102}3                    & 0.5                            & 0.5                            & 0.5468                         & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.72   & \\cellcolor[HTML]{C0C0C0}0.964  & \\cellcolor[HTML]{C0C0C0}0.7164 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.4991                         & 0.4676                         & 0.688                          & 0.5117                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{9AFF99}                        & \\cellcolor[HTML]{FD6864}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.5714                         & 0.5                            & 0.5891                    
     & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.716  & \\cellcolor[HTML]{EFEFEF}0.964  & \\cellcolor[HTML]{EFEFEF}0.7516 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.7624                         & 0.447                          & 0.7611                         & 0.6907                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{9AFF99}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FD6864}4}  & \\cellcolor[HTML]{F8A102}3                    & 0.7403                         & 0.5                            & 0.7262                         & 0.517                          & \\cellcolor[HTML]{C0C0C0}0.84   & \\cellcolor[HTML]{C0C0C0}0.964  & \\cellcolor[HTML]{C0C0C0}0.8296 & \\cellcolor[HTML]{C0C0C0}0.9679 & 0.876                          & 0.5582                         & 0.8869                         & 0.7826                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{9AFF99}                        & \\cellcolor[HTML]{FFCCC9}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.8258                         & 0.598                          & 0.8421                         & 0.4994                         & \\cellcolor[HTML]{EFEFEF}0.888  & \\cellcolor[HTML]{EFEFEF}0.964  & \\cellcolor[HTML]{EFEFEF}0.8868 & \\cellcolor[HTML]{EFEFEF}0.9676 & 0.9592                         & 0.8097                         & 0.9498                         & 0.7132                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\multirow{-6}{*}{\\cellcolor[HTML]{9AFF99}250}   & \\multirow{-2}{*}{\\cellcolor[HTML]{FFCCC9}12} & \\cellcolor[HTML]{F8A102}3                    & 0.925                          & 0.5939                         & 0.8756                         & 0.5378                         & \\cellcolor[HTML]{C0C0C0}0.936  & \\cellcolor[HTML]{C0C0C0}0.956  & \\cellcolor[HTML]{C0C0C0}0.9047 & \\cellcolor[HTML]{C0C0C0}0.9651 & 0.9806                         & 0.6856                         & 0.9465                         & 0.7522                         \\\\ \\hline\n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{32CB00}                        & \\cellcolor[HTML]{FE0000}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.5                            & 0.5                            & 0.5                            & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.73   & \\cellcolor[HTML]{EFEFEF}0.973  & \\cellcolor[HTML]{EFEFEF}0.7188 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.5784                         & 0.3753                         & 0.4915                         & 0.4966                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{32CB00}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FE0000}1}  & \\cellcolor[HTML]{F8A102}3                    & 0.5                            & 0.5                            & 0.597                          & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.73   & \\cellcolor[HTML]{C0C0C0}0.973  & \\cellcolor[HTML]{C0C0C0}0.7539 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.6095                         & 0.563                          & 0.7428                         & 0.5052                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{32CB00}                        & \\cellcolor[HTML]{FD6864}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.633                          & 0.5                            & 0.6404                    
     & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.779  & \\cellcolor[HTML]{EFEFEF}0.973  & \\cellcolor[HTML]{EFEFEF}0.7832 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.78                           & 0.4325                         & 0.8282                         & 0.6443                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{32CB00}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FD6864}4}  & \\cellcolor[HTML]{F8A102}3                    & 0.7083                         & 0.5                            & 0.7846                         & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.813  & \\cellcolor[HTML]{C0C0C0}0.973  & \\cellcolor[HTML]{C0C0C0}0.8505 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.8764                         & 0.4511                         & 0.9223                         & 0.4485                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{32CB00}                        & \\cellcolor[HTML]{FFCCC9}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.7919                         & 0.5                            & 0.8056                         & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.859  & \\cellcolor[HTML]{EFEFEF}0.973  & \\cellcolor[HTML]{EFEFEF}0.8734 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.9246                         & 0.8589                         & 0.9528                         & 0.8417                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\multirow{-6}{*}{\\cellcolor[HTML]{32CB00}1000}  & \\multirow{-2}{*}{\\cellcolor[HTML]{FFCCC9}12} & \\cellcolor[HTML]{F8A102}3                    & 0.9233                         & 0.5662                         & 0.8846                         & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.948  & \\cellcolor[HTML]{C0C0C0}0.974  & \\cellcolor[HTML]{C0C0C0}0.9206 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.9857                         & 0.5164                         & 0.9708                         & 0.504                          \\\\ \\hline\n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{009901}                        & \\cellcolor[HTML]{FE0000}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.5                            & 0.5                            & 0.5                            & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.7184 & \\cellcolor[HTML]{EFEFEF}0.9699 & \\cellcolor[HTML]{EFEFEF}0.7188 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.5895                         & 0.4685                         & 0.4982                         & 0.5156                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{009901}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FE0000}1}  & \\cellcolor[HTML]{F8A102}3                    & 0.5                            & 0.5                            & 0.5                            & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.7184 & \\cellcolor[HTML]{C0C0C0}0.9699 & \\cellcolor[HTML]{C0C0C0}0.7188 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.6229                         & 0.5404                         & 0.6598                         & 0.5215                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{009901}                        & \\cellcolor[HTML]{FD6864}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.6286                         & 0.5                            & 0.6506                    
     & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.7853 & \\cellcolor[HTML]{EFEFEF}0.9699 & \\cellcolor[HTML]{EFEFEF}0.7771 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.6932                         & 0.5407                         & 0.8395                         & 0.613                          \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\cellcolor[HTML]{009901}                        & \\multirow{-2}{*}{\\cellcolor[HTML]{FD6864}4}  & \\cellcolor[HTML]{F8A102}3                    & 0.755                          & 0.5                            & 0.8019                         & 0.5                            & \\cellcolor[HTML]{C0C0C0}0.8325 & \\cellcolor[HTML]{C0C0C0}0.9699 & \\cellcolor[HTML]{C0C0C0}0.8523 & \\cellcolor[HTML]{C0C0C0}0.9688 & 0.8894                         & 0.5129                         & 0.9048                         & 0.7529                         \\\\ \\cline{2-15} \n\\rowcolor[HTML]{C0C0C0} \n\\cellcolor[HTML]{009901}                        & \\cellcolor[HTML]{FFCCC9}                     & \\cellcolor[HTML]{FFCE93}0                    & 0.8703                         & 0.5085                         & 0.8175                         & 0.5                            & \\cellcolor[HTML]{EFEFEF}0.9031 & \\cellcolor[HTML]{EFEFEF}0.9704 & \\cellcolor[HTML]{EFEFEF}0.8709 & \\cellcolor[HTML]{EFEFEF}0.9688 & 0.9601                         & 0.8795                         & 0.9412                         & 0.7865                         \\\\ \\cline{3-15} \n\\rowcolor[HTML]{EFEFEF} \n\\multirow{-6}{*}{\\cellcolor[HTML]{009901}10000} & \\multirow{-2}{*}{\\cellcolor[HTML]{FFCCC9}12} & \\cellcolor[HTML]{F8A102}3                    & 0.9354                         & 0.5                            & 0.9271                         & 0.6089                         & \\cellcolor[HTML]{C0C0C0}0.9514 & \\cellcolor[HTML]{C0C0C0}0.9699 & \\cellcolor[HTML]{C0C0C0}0.9455 & \\cellcolor[HTML]{C0C0C0}0.9741 & 0.9879                         & 0.5582                         & 0.9857                         & 0.8204                         \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\n\n\\subsection*{Setup}\n\nA single-output feed-forward neural network with one/two hidden layers is modelled such that the number of hidden neurons in each hidden layer is $h_1 \\in \\{1, 4, 12\\}$ and $h_2 \\in \\{0, 3\\}$, respectively, with $h_2 = 0$ meaning that there is no second hidden layer. I used binary cross-entropy standard objective function as my optimization criterion. I ran the model for 1000 epochs (configurable hyperparameter). I used Adam optimizer in order to converge to a local optimum. There is no other early stopping criterion in my training. I prefer to use regularization only for $y$. I map $y = -1 \\text{ to } y = 0$ and $y = 1 \\text{ to } y = 1$. There is no regularization for $X$ as the data set values are mostly comparable and picked from a uniform distribution from $[-6, 6] \\text{ and } [-4, 4]$ respectively. The expected mean is any way zero and the variance is low. I also don't see any threat of outliers. I didn't use any special initialization for weights. I went with default initialization provided by PyTorch which is choosing weights from \\textbf{Uniform Distribution} from $\\left(-\\frac{1}{\\sqrt{d}}, \\frac{1}{\\sqrt{d}}\\right)$, where $d$ is the number of features. I used a hyperbolic tangent ($tanh$) activation function in each activation. 
The output layer is estimated using a sigmoid function, and the threshold I used to classify negatives and positives is 0.5. For each of the architectures, defined by a parameter combination $(h_1, h_2)$, I evaluated the performance of each model using balanced accuracy, classification accuracy, and area under the ROC curve. These are the outputs of my function $get\\_metrics$. To evaluate the performance of my model, I used $k$-fold stratified cross-validation (with $k = 5$), splitting the sample into $80-20$ ($80\\%$ for training and $20\\%$ for testing). To evaluate the performance of performance evaluation, I generated another very large dataset on a fine grid in $\\mathcal{X}$. The points in this grid are spaced at intervals of $0.1$ both horizontally and vertically. I calculated the predictions from my trained model on all these points to determine the true performance. In order to calculate the classification accuracy and balanced accuracy, I used the thresholded predictions, and to calculate the area under the curve I used soft predictions.\n\nIn the above table, the columns are appropriately named. 'Uniform' refers to values obtained from the uniformly drawn data and 'Fine' refers to data from the fine grid (spaced at $0.1$). 'Case A' refers to the relatively balanced case and 'Case B' refers to the highly imbalanced case. The coloring of columns $n$, $h_1$, and $h_2$ is mapped as per the expected performance of the model.\n\n\n\\subsection*{Plots}\n\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/1}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/2}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/3}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/4}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/5}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/6}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/7}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/8}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/9}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/10}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/11}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/12}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/13}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/14}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/15}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/16}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/17}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/18}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/19}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/20}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/21}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/22}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, 
keepaspectratio]{./3/23}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/24}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/25}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/26}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/27}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/28}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/29}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/30}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/31}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/32}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/33}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/34}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/35}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/36}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/37}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/38}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/39}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/40}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/41}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/42}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/43}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/44}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/45}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/46}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/47}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/48}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/49}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/50}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/51}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/52}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/53}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{./3/54}\n\\end{figure}\n\n\\subsection*{How to read Plots}\n\nEach of the 18 combinations of $(n, h_1, h_2)$ has 3 plots above, so the total number of plots is 54. The first plot of the $3$ contains $2$ subplots, one containing the predictions of my network in the balanced scenario and the other containing the predictions in the imbalanced scenario. Each of the subplots includes outputs from all $5$ networks, $1$ from every iteration of $5$-fold cross-validation. The second plot contains $5$ different subplots on the fine grid, $1$ for every iteration of $5$-fold cross-validation, for the balanced scenario. 
\n\n\n\\subsection*{Observations and Reasoning}\n\n\\begin{enumerate}\n  \\item The first observation is that, for the balanced scenario (Case A) and a particular $n$, the balanced accuracy and the classification accuracy increase as we go down the table. This means the model learns better as we increase the number of activations.\n  \\item The area under the ROC curve is different in almost every case because I used soft predictions to calculate its value. It is expected to differ because the model is trained with a different configuration of activations each time. Training is also stochastic, so the exact value depends on the random state and the system on which the program is running. The general trend, however, is that for a particular $n$ it also increases as we go down the table. There can be exceptions, especially when there is little data and few activations. For example, $\\left(n, h_1, h_2\\right) = \\left(250, 1, 0\\right)$ and $\\left(n, h_1, h_2\\right) = \\left(250, 1, 3\\right)$.\n  \\item In the imbalanced class scenario (since we are choosing data from a uniform distribution), it might happen that a test fold contains points belonging to only one class, as we are using $k$-fold cross-validation with $k = 5$. In this case, balanced accuracy and classification accuracy would be $1.0$. However, the area under the ROC curve would be \\texttt{np.nan}. When we take the average of all the $k=5$ values, we still get a \\texttt{np.nan}. That is why we sometimes see a \\texttt{np.nan} for the area under the ROC curve in Case B. In such cases, the balanced accuracy is definitely above $0.5$, as one of the $k=5$ entries is a $1.0$.\n  \\item In the imbalanced scenario, Case B, we can see that the classification accuracy is high while the balanced accuracy is around $0.5$. Balanced accuracy is defined as the average of sensitivity (true positive rate) and specificity (true negative rate). Classification accuracy is defined as the proportion of correct predictions. This is why we should not use classification accuracy as a measure in the case of imbalanced classes. We should rather prefer other performance metrics such as balanced accuracy or the Matthews correlation coefficient. The results show that there is no learning in this case.\n  \\item It can be noticed from the above table that, in most cases, when the number of activations in layer 1/layer 2 is small, the model does not learn the decision boundary properly. In the case of imbalanced data, the values of $h_1$ and $h_2$ we took in this problem are too small to learn the boundary. In the case of balanced data, when the values of $h_1$ and $h_2$ are small, it doesn't learn the boundary properly. When they are relatively high, $(h_1, h_2) = (12, 3)$, it learns the boundary much better. I tried increasing the values even more, $(h_1, h_2) = (100, 10)$, and it learns even better. However, in the case of imbalanced classes, a configuration of $(h_1, h_2) = (1000, 20)$ also doesn't seem to learn the boundary well.
 This might mean that a neural network is not the right model for this problem, or that we should try adding more activations and layers.\n  \\item Each cross-validation test split is also only 20 percent of the data. Since the probability of getting a positive label in the imbalanced data is very low, it can happen that very few positives are present in the training data (especially when $n$ is small), and the test data points are then also not predicted correctly.\n  \\item The above conclusions are also in line with the low values for the area under the ROC curve obtained in various scenarios. For example, the AUC increases as we increase the number of activations and decreases as the class imbalance increases.\n  \\item The results are also in line with the plots. However, the results only provide the average case, since we use $k$-fold Cross Validation here with $k = 5$. If we want to know the performance of our model for each iteration of cross-validation, the plots are good tools. We can say that when the number of activations is small, the learning is not so great. Still, in some cases there is learning in at least one scenario out of the $5$ iterations of cross-validation. For example, the case when $\\left(h_1, h_2\\right) = \\left(4, 0\\right)$. As we increase the number of activations, the results get better for each iteration and the final output looks much better. One more observation: when $\\left(h_1, h_2\\right) = \\left(1, 0\\right)$, the decision boundary is sometimes close to a straight line, because the situation here is close to (but not exactly) logistic regression.\n\nOverall, it was a very fun exercise and I enjoyed it the most. I hope I covered almost all points. Kindly contact me if there is any discrepancy during evaluation.\n\\end{enumerate}\n\n\n\n\n\\end{solution}\n\n\n\\begin{problem}\nMatrix factorization with applications.\n\na) (10 points) Derive the optimization steps for the ALS algorithm by finding formula $\\# 1$ and formula $\\# 2$ in the pseudocode listed above.\n\nb) (20 points) Consider now that some values in $\\mathbf{X}$ are missing (e.g., the rows of $\\mathbf{X}$ are users and the columns of $\\mathbf{X}$ are movie ratings, when available) and that we are interested in carrying out matrix completion using matrix factorization presented above. We would like to use the ALS algorithm, but the problem is that we must exclude all missing values from optimization. Derive now a modified ALS algorithm (formulas $\\# 1$ and $\\# 2$) to adapt it for matrix completion. Hint: consider adding an indicator matrix $\\mathbf{W}$ to the optimization process, where $w_{ij}=1$ if $x_{ij}$ is available and $w_{ij}=0$ otherwise.\n\nc) (20 points) Consider now a MovieLens database available at\n$$\n\\text{http://grouplens.org/datasets/movielens/}\n$$\nand find a data set most appropriate to evaluate your algorithm from the previous step; e.g., one of the $100 \\mathrm{k}$ data sets. Now, implement the ALS algorithm on your data set and evaluate it using the mean-squared-error as the criterion of success. You can randomly remove $25 \\%$ of the ratings, train a recommendation system, and then test it on the test set. You will have to make certain decisions yourselves, such as initialization of $\\mathbf{U}$ and $\\mathbf{V}$, convergence criterion, or picking $k$ and $\\lambda$.\n\nd) (10 points) Describe your full experimentation process (e.g., how did you vary $k$) and observations from (c).
 Additionally, can you provide some reasoning as to what $k$ is and what matrices $\\mathbf{U}$ and $\\mathbf{V}$ are?\n\ne) (10 points) Compare your method against the baseline that fills in every missing movie rating value $x_{ij}$ as an average over all users who have rated the movie $j$. Discuss your empirical findings.\\end{problem}\n\n\\begin{solution}\n\\subsection*{a)}\n\nConsider an $n \\times d$ real-valued data matrix $\\mathbf{X}=\\left(\\mathbf{x}_{1}^{T}, \\mathbf{x}_{2}^{T}, \\ldots, \\mathbf{x}_{n}^{T}\\right) = \\left(\\mathbf{x}_{1}, \\mathbf{x}_{2}, \\ldots, \\mathbf{x}_{d}\\right)$. We use the letter $i$ to denote rows and $j$ to denote columns. Let us approximate this matrix using the following factorization\n$$\n\\hat{\\mathbf{X}}=\\mathbf{U} \\mathbf{V}^{T}\n$$\nwhere $\\mathbf{U}=\\left(\\mathbf{u}_{1}^{T}, \\mathbf{u}_{2}^{T}, \\ldots, \\mathbf{u}_{n}^{T}\\right)$ is an $n \\times k$ and $\\mathbf{V}=\\left(\\mathbf{v}_{1}^{T}, \\mathbf{v}_{2}^{T}, \\ldots, \\mathbf{v}_{d}^{T}\\right)$ is a $d \\times k$ matrix of real numbers, and where $k<n, d$ is a parameter to be explored and determined. The value $x_{ij}$ in $\\mathbf{X}$ can be approximated by $\\mathbf{u}_{i}^{T} \\mathbf{v}_{j}$; the $i$-th row $\\mathbf{x}_{i}^{T}$ of $\\mathbf{X}$ can be approximated by $\\mathbf{u}_{i}^{T} \\mathbf{V}^{T},$ giving $\\hat{\\mathbf{x}}_{i}=\\mathbf{V} \\mathbf{u}_{i}$; and the $j$-th column $\\mathbf{x}_{j}$ of $\\mathbf{X}$ can be approximated by $\\mathbf{U}\\mathbf{v}_{j},$ giving $\\hat{\\mathbf{x}}_{j}=\\mathbf{U} \\mathbf{v}_{j}$. The matrix factorization process can be formulated as the following minimization\n$$\n\\min _{\\mathbf{U}, \\mathbf{V}} \\sum_{i, j}\\left(x_{i j}-\\mathbf{u}_{i}^{T} \\mathbf{v}_{j}\\right)^{2}+\\lambda\\left(\\sum_{i}\\left\\|\\mathbf{u}_{i}\\right\\|^{2}+\\sum_{j}\\left\\|\\mathbf{v}_{j}\\right\\|^{2}\\right)\n$$\nwhich minimizes the sum-of-squared-errors between real values $x_{i j}$ and reconstructed values $\\hat{x}_{i j}=\\mathbf{u}_{i}^{T} \\mathbf{v}_{j}$. The regularization parameter $\\lambda \\geq 0$ is user-selected. This problem can be directly solved using gradient descent, but we will attempt a slightly different approach. To do this, we can see that for a fixed $\\mathbf{V}$ we can find the optimal vectors $\\mathbf{u}_{i}$ by minimizing\n$$\nL_{1} = \\left\\|\\mathbf{V} \\mathbf{u}_{i}-\\mathbf{x}_{i}\\right\\|^{2}+\\lambda \\cdot\\left\\|\\mathbf{u}_{i}\\right\\|^{2}\n$$\nwhich can be solved in closed form using OLS regression for every $i$. Taking the derivative and equating it to $0$, we get\n\n\n\\begin{align*}\n\t\\pdv{L_1}{\\mathbf{u}_i} &= \\pdv{}{\\mathbf{u}_i}\\left(\\mathbf{u}_i^{T}\\mathbf{V}^{T}\\mathbf{V}\\mathbf{u}_i - \\mathbf{u}_i^{T}\\mathbf{V}^{T}\\mathbf{x}_{i} - \\mathbf{x}_{i}^{T}\\mathbf{V}\\mathbf{u}_i + \\mathbf{x}_{i}^{T}\\mathbf{x}_{i} + \\lambda\\mathbf{u}_i^{T}\\mathbf{u}_i\\right)\\\\\n\t&= 2\\mathbf{V}^{T}\\mathbf{V}\\mathbf{u}_i - \\mathbf{V}^{T}\\mathbf{x}_i - \\mathbf{V}^{T}\\mathbf{x}_i + 0 + 2\\lambda\\mathbf{u}_i\\\\\n\t&= 2\\left(\\left(\\mathbf{V}^{T}\\mathbf{V}+\\lambda\\mathbf{I}\\right)\\mathbf{u}_i - \\mathbf{V}^{T}\\mathbf{x}_i\\right)\\\\\n\t&= 0\\\\\n\\end{align*}\n$$\n\t\\implies \\boxed{\\mathbf{u}_i = \\left(\\mathbf{V}^{T}\\mathbf{V}+\\lambda\\mathbf{I}\\right)^{-1}\\mathbf{V}^{T}\\mathbf{x}_i}\n$$\n\nWe can equivalently express the optimization for the vectors $\\mathbf{v}_{j}$ and find the solution for every $j$.
 That is, we minimize\n$$\nL_{2} = \\left\\|\\mathbf{U} \\mathbf{v}_{j}-\\mathbf{x}_{j}\\right\\|^{2}+\\lambda \\cdot\\left\\|\\mathbf{v}_{j}\\right\\|^{2}\n$$\nwhich can be solved in closed form using OLS regression for every $j$. Note that $\\mathbf{x}_j$ here is the $j$-th column vector of the matrix $\\mathbf{X}$. Taking the derivative and equating it to $0$, we get\n\n\\begin{align*}\n\t\\pdv{L_2}{\\mathbf{v}_j} &= \\pdv{}{\\mathbf{v}_j}\\left(\\mathbf{v}_j^{T}\\mathbf{U}^{T}\\mathbf{U}\\mathbf{v}_j - \\mathbf{v}_j^{T}\\mathbf{U}^{T}\\mathbf{x}_{j} - \\mathbf{x}_{j}^{T}\\mathbf{U}\\mathbf{v}_j + \\mathbf{x}_{j}^{T}\\mathbf{x}_{j} + \\lambda\\mathbf{v}_j^{T}\\mathbf{v}_j\\right)\\\\\n\t&= 2\\mathbf{U}^{T}\\mathbf{U}\\mathbf{v}_j - \\mathbf{U}^{T}\\mathbf{x}_j - \\mathbf{U}^{T}\\mathbf{x}_j + 0 + 2\\lambda\\mathbf{v}_j\\\\\n\t&= 2\\left(\\left(\\mathbf{U}^{T}\\mathbf{U}+\\lambda\\mathbf{I}\\right)\\mathbf{v}_j - \\mathbf{U}^{T}\\mathbf{x}_j\\right)\\\\\n\t&= 0\\\\\n\\end{align*}\n$$\n\t\\implies \\boxed{\\mathbf{v}_j = \\left(\\mathbf{U}^{T}\\mathbf{U}+\\lambda\\mathbf{I}\\right)^{-1}\\mathbf{U}^{T}\\mathbf{x}_j}\n$$\n\n\\subsection*{b)}\n\nI use the same notation as in part a) (e.g., $i$ denotes rows and $j$ denotes columns). Suppose some values in $\\mathbf{X}$ are missing. Introduce an indicator matrix $\\mathbf{W}$ into the optimization process, where $w_{ij}=1$ if $x_{ij}$ is available and $w_{ij}=0$ otherwise. The matrix factorization process can be modified and formulated as the following minimization (the three forms below are equivalent because $w_{ij} \\in \\{0, 1\\}$ implies $w_{ij}^2 = w_{ij}$),\n$$\n\\min _{\\mathbf{U}, \\mathbf{V}} \\left\\|\\mathbf{W} \\odot \\left(\\mathbf{X}-\\mathbf{U} \\mathbf{V}^{T}\\right)\\right\\|^{2}+\\lambda\\left(\\left\\|\\mathbf{U}\\right\\|^{2}+\\left\\|\\mathbf{V}\\right\\|^{2}\\right)\n$$\n$$\\text{(or)}$$\n$$\n\\min _{\\mathbf{U}, \\mathbf{V}} \\left\\|\\sqrt{\\mathbf{W}} \\odot \\left(\\mathbf{X}-\\mathbf{U} \\mathbf{V}^{T}\\right)\\right\\|^{2}+\\lambda\\left(\\left\\|\\mathbf{U}\\right\\|^{2}+\\left\\|\\mathbf{V}\\right\\|^{2}\\right)\n$$\n$$\\text{(or)}$$\n$$\n\\min _{\\mathbf{U}, \\mathbf{V}} \\sum_{i, j}w_{ij}\\left(x_{i j}-\\mathbf{u}_{i}^{T} \\mathbf{v}_{j}\\right)^{2}+\\lambda\\left(\\sum_{i}\\left\\|\\mathbf{u}_{i}\\right\\|^{2}+\\sum_{j}\\left\\|\\mathbf{v}_{j}\\right\\|^{2}\\right)\n$$\n\nwhere $\\odot$ denotes the Hadamard product (element-wise product).\n\nThe above objective function minimizes the sum-of-squared-errors between the real non-missing values $x_{ij}$ and the reconstructed values $\\hat{x}_{i j}=\\mathbf{u}_{i}^{T} \\mathbf{v}_{j}$. The regularization parameter $\\lambda \\geq 0$ is user-selected. This problem can be directly solved using gradient descent, but we will attempt a slightly different approach. To do this, we can see that for a fixed $\\mathbf{V}$ we can find the optimal vectors $\\mathbf{u}_{i}$ by minimizing\n\\begin{align*}\nL_{1} &= \\left\\|\\sqrt{\\mathbf{W}^{i}}\\left(\\mathbf{V} \\mathbf{u}_{i}-\\mathbf{x}_{i}\\right)\\right\\|^{2}+\\lambda \\cdot\\left\\|\\mathbf{u}_{i}\\right\\|^{2}\\\\\n      &= \\left(\\mathbf{V}\\mathbf{u}_i-\\mathbf{x}_i\\right)^{T}\\mathbf{W}^{i}\\left(\\mathbf{V}\\mathbf{u}_i-\\mathbf{x}_i\\right) + \\lambda\\mathbf{u}_i^{T}\\mathbf{u}_i\n\\end{align*}\nwhere $\\mathbf{W}^{i}$ is the diagonal matrix whose diagonal entries are the elements of the $i^{\\text{th}}$ row of the matrix $\\mathbf{W}$, i.e., $\\mathbf{W}^{i} = \\text{diag}\\left(w_{i1}, w_{i2}, ..., w_{id}\\right)$.\n\nThis problem can be solved in closed form using OLS regression for every $i$.
 Taking the derivative and equating it to $0$, we get\n\n\n\\begin{align*}\n\t\\pdv{L_1}{\\mathbf{u}_i} &= \\pdv{}{\\mathbf{u}_i}\\left(\\left(\\mathbf{V}\\mathbf{u}_i-\\mathbf{x}_i\\right)^{T}\\mathbf{W}^{i}\\left(\\mathbf{V}\\mathbf{u}_i-\\mathbf{x}_i\\right) + \\lambda\\mathbf{u}_i^{T}\\mathbf{u}_i\\right)\\\\\n\t&= -2\\mathbf{V}^{T}\\mathbf{W}^{i}\\left(\\mathbf{x}_i-\\mathbf{V}\\mathbf{u}_i\\right) + 2\\lambda\\mathbf{u}_i\\\\\n\t&= 2\\left(-\\mathbf{V}^{T}\\mathbf{W}^{i}\\mathbf{x}_i+\\mathbf{V}^{T}\\mathbf{W}^{i}\\mathbf{V}\\mathbf{u}_i+\\lambda\\mathbf{u}_i\\right)\\\\\n\t&= 0\\\\\n\\end{align*}\n$$\n\t\\implies \\boxed{\\mathbf{u}_i = \\left(\\mathbf{V}^{T}\\mathbf{W}^{i}\\mathbf{V}+\\lambda\\mathbf{I}\\right)^{-1}\\mathbf{V}^{T}\\mathbf{W}^{i}\\mathbf{x}_i}\n$$\n\nWe can equivalently express the optimization for the vectors $\\mathbf{v}_{j}$ and find the solution for every $j$. That is, we minimize\n\\begin{align*}\nL_{2} &= \\left\\|\\sqrt{\\mathbf{W}^{j}}\\left(\\mathbf{U} \\mathbf{v}_{j}-\\mathbf{x}_{j}\\right)\\right\\|^{2}+\\lambda \\cdot\\left\\|\\mathbf{v}_{j}\\right\\|^{2}\\\\\n      &= \\left(\\mathbf{U}\\mathbf{v}_j-\\mathbf{x}_j\\right)^{T}\\mathbf{W}^{j}\\left(\\mathbf{U}\\mathbf{v}_j-\\mathbf{x}_j\\right) + \\lambda\\mathbf{v}_j^{T}\\mathbf{v}_j\n\\end{align*}\n\nwhere $\\mathbf{W}^{j}$ is the diagonal matrix whose diagonal entries are the elements of the $j^{\\text{th}}$ column of the matrix $\\mathbf{W}$, i.e., $\\mathbf{W}^{j} = \\text{diag}\\left(w_{1j}, w_{2j}, ..., w_{nj}\\right)$.\n\nThis problem can be solved in closed form using OLS regression for every $j$. Note that $\\mathbf{x}_j$ here is the $j$-th column vector of the matrix $\\mathbf{X}$. Taking the derivative and equating it to $0$, we get\n\n\\begin{align*}\n\t\\pdv{L_2}{\\mathbf{v}_j} &= \\pdv{}{\\mathbf{v}_j}\\left(\\left(\\mathbf{U}\\mathbf{v}_j-\\mathbf{x}_j\\right)^{T}\\mathbf{W}^{j}\\left(\\mathbf{U}\\mathbf{v}_j-\\mathbf{x}_j\\right) + \\lambda\\mathbf{v}_j^{T}\\mathbf{v}_j\\right)\\\\\n\t&= -2\\mathbf{U}^{T}\\mathbf{W}^{j}\\left(\\mathbf{x}_j-\\mathbf{U}\\mathbf{v}_j\\right) + 2\\lambda\\mathbf{v}_j\\\\\n\t&= 2\\left(-\\mathbf{U}^{T}\\mathbf{W}^{j}\\mathbf{x}_j+\\mathbf{U}^{T}\\mathbf{W}^{j}\\mathbf{U}\\mathbf{v}_j+\\lambda\\mathbf{v}_j\\right)\\\\\n\t&= 0\\\\\n\\end{align*}\n$$\n\t\\implies \\boxed{\\mathbf{v}_j = \\left(\\mathbf{U}^{T}\\mathbf{W}^{j}\\mathbf{U}+\\lambda\\mathbf{I}\\right)^{-1}\\mathbf{U}^{T}\\mathbf{W}^{j}\\mathbf{x}_j}\n$$\n\n\\subsection*{c) \\&  d)}\n\nFor code, please refer to \\textbf{MatrixFactorization.ipynb}.\n\nI chose the \\texttt{ml-100k} dataset\\cite{movielens100} from MovieLens. This data set consists of:\n\\begin{itemize}\n\t\\item 100,000 ratings (1-5) from 943 users on 1,682 movies. \n\t\\item Each user has rated at least 20 movies. \n\t\\item Simple demographic info for the users (age, gender, occupation, zip)\n\\end{itemize}\n\nThe data was collected through the MovieLens web site (movielens.umn.edu) during the seven-month period from September 19th, 1997 through April 22nd, 1998. The data has been cleaned up: users who had fewer than 20 ratings or did not have complete demographic information were removed from the data set. Detailed descriptions of the data files can be found in the dataset's README.\n\nThe file \\texttt{ml-100k/u.data} contains $4$ columns, namely \\texttt{user\\_id}, \\texttt{movie\\_id}, \\texttt{rating} and \\texttt{timestamp}. There are $100,000$ entries in total. There are a total of $n = 943$ users and $d = 1,682$ movies.
 Since $100000 \\ne 943 \\times 1682$, there must be missing values in the matrix $\\textbf{X}$ with $n$ rows and $d$ columns. Our goal is to represent this matrix as the product of $\\textbf{U}$ and $\\textbf{V}^{T}$ as per the objective function mentioned in parts a) and b). To take care of the missing values, an indicator matrix $\\textbf{W}$ is formed such that $w_{ij} = 0$ if $x_{ij}$ is missing, else $w_{ij} = 1$. Since the input is highly sparse, I am unable to extract an exact $25 \\%$ of the set to use as testing data. If I tried to extract $25 \\%$ randomly, there is a chance that some columns of the training data would be all zeroes, which would make the training less effective. Hence, for every user I chose $100$ columns from the training set where the input is dense (i.e., from the set of top $1000$ movies with the most user ratings). If this does not work for some reason on your computer, please try re-running or reducing the value $100$. This strategy was double-checked with the Professor during office hours, who confirmed that it is valid. So, I have training and test data matrices of the same size as the input, $n \\times d$, but the training data matrix contains $100n$ extra missing values, shifted to the test data matrix as per the logic mentioned above. I also made sure that the training and test datasets are disjoint. Indirectly, the split is also a hyperparameter (although not a standard one). Feel free to try different values. The implementation of ALS is fairly straightforward, as it amounts to finding the vectors $\\textbf{u}_i$ and $\\textbf{v}_j$ as shown above.\n\nThe most important hyperparameter in the entire exercise is $k$. I varied $k$ according to the values in the list \\texttt{k\\_list = [5, 10, 20, 40, 80]}. $k$ represents the so-called \\textbf{Concepts} or \\textbf{Latent Factors}. If we know that the data comes from $k$ categories, we can simply use that domain knowledge and achieve great performance. In this example of movies and ratings, $k$ can be thought of as the number of \\textbf{genres} of the movies. This hyperparameter is similar to the number of components $y$ in the \\textbf{EM algorithm}: there too, if we have prior knowledge that the data comes from $y$ distributions, we can estimate the parameters of the actual distributions effectively. Because of the presence of latent variables, \\textbf{matrix factorization} can be thought of as an unsupervised learning problem. It is well known in the industry as \\textbf{Collaborative Filtering}. In this problem, I have considered the weights to be uniform (either $0$ or $1$, depending on whether values are missing or available). Research on this is still active, and we can even have non-uniform weights\\cite{nonuniform_mf} to obtain a better low-rank matrix factorization for real-world problems.\n\nThe matrices \\textbf{U} and \\textbf{V} are initialized randomly using the function \\texttt{numpy.random.random\\_sample}, which returns values from the 'continuous uniform' distribution over the interval \\texttt{[0.0, 1.0)}.
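\n\nTo make the update loop concrete, the following is a minimal \\texttt{numpy} sketch of the two closed-form updates derived in part b); the matrix names mirror the text, while the function name and loop structure are illustrative rather than my exact notebook code:\n\n\\begin{lstlisting}[language=Python]\n# Minimal sketch (illustrative) of one weighted ALS sweep from part b).\nimport numpy as np\n\ndef als_step(X, W, U, V, lmd):\n    n, d = X.shape\n    k = U.shape[1]\n    for i in range(n):   # u_i = (V^T W^i V + lambda I)^{-1} V^T W^i x_i\n        Wi = np.diag(W[i, :])\n        U[i] = np.linalg.solve(V.T @ Wi @ V + lmd * np.eye(k),\n                               V.T @ Wi @ X[i, :])\n    for j in range(d):   # v_j = (U^T W^j U + lambda I)^{-1} U^T W^j x_j\n        Wj = np.diag(W[:, j])\n        V[j] = np.linalg.solve(U.T @ Wj @ U + lmd * np.eye(k),\n                               U.T @ Wj @ X[:, j])\n    return U, V\n\n# Random initialization over [0.0, 1.0), as described in the text.\nU = np.random.random_sample((943, 5))\nV = np.random.random_sample((1682, 5))\n\\end{lstlisting}\n\nUsing \\texttt{np.linalg.solve} instead of an explicit matrix inverse is a standard numerical choice; it solves the same linear systems as the boxed formulas.\n\n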
The final output matrices are:\n\n\\begin{enumerate}\n\t\\item $\\mathbf{U} \\in \\mathbb{R}^{n \\times k}$, a user embedding matrix, where row $i$ is the embedding for user $i$.\n\t\\item $\\mathbf{V} \\in \\mathbb{R}^{d \\times k}$, an item embedding matrix (in this case the items are movies), where row $j$ is the embedding for item (movie) $j$.\n\\end{enumerate}\n\nAnother hyperparameter involved is the regularization constant $\\lambda$. It is a value chosen from the list \\texttt{lmd\\_list = [0.1, 2, 5, 10, 25, 50, 100]} once per iteration. The number of iterations (epochs) is another hyperparameter. It is a value chosen from the list \\texttt{epochs\\_list = [1, 2, 5, 10]} once per iteration, and the model is trained incrementally to avoid over-training. Refer to my code for a better understanding.\n\nThe evaluation of the predictions is done using $mean\\_square\\_error$, which considers only non-zero/non-missing values from the input.\n\nAfter this entire experiment, the best combination of hyperparameters (with the corresponding train and test errors) obtained for my experiment is as follows:\n\n\\begin{lstlisting}[language=Python]\n{'epochs': 10,\n 'k': 5,\n 'lambda': 5,\n 'train_error': 0.6672904167634547,\n 'test_error': 0.8236433053485411}\n\\end{lstlisting}\n\n\nThe whole experiment takes approximately $15$ minutes to run and get results. I highly recommend using Google Colab (setup below) in order to verify the results:\n\n\\texttt{https://colab.research.google.com/drive/1u8FoCOaL9ugfm8udb7MYYgFeOyLQPObW?usp=sharing}\\\\\n\nA random seed is already set to $42$ in order to replicate the results. Make sure to change the path to the file in the Data Cell numbered [2].\\\\\n\n\\subsection*{e)}\n\nIn order to see how the model performs compared to the mean, I took the training dataset and filled those $100n$ values transferred to the test data with the mean of the rest of the non-zero values in the corresponding column. This strategy has been quadruple-checked with the Professor in office hours. Once the mean is imputed in those removed cells of the training data, I call the $get\\_mse$ method and calculate the mean squared error using those $100n$ values where the cells in the test data have non-zero ratings. This is done to see how our model performs when compared to the mean prediction.
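\n\nFor reference, the baseline computation looks roughly like the following sketch (illustrative; the real notebook uses its own helper names):\n\n\\begin{lstlisting}[language=Python]\n# Sketch of the mean baseline: impute each held-out cell with the column\n# mean over the observed (non-zero) training ratings for that movie.\nimport numpy as np\n\ndef mean_baseline(X_train):\n    X_pred = X_train.astype(float).copy()\n    for j in range(X_train.shape[1]):\n        observed = X_train[:, j][X_train[:, j] > 0]\n        col_mean = observed.mean() if observed.size else 0.0\n        X_pred[X_train[:, j] == 0, j] = col_mean\n    return X_pred\n\\end{lstlisting}\n\n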
Here are the results:\n\n\\begin{lstlisting}[language=Python]\n{'k': 5,\n 'lambda': 5,\n 'epochs': 10,\n 'train_error': 0.6672904167634547,\n 'test_error': 0.8236433053485411,\n 'mean_error': 0.3025307958188201}\n\\end{lstlisting}\n\nFrom the above results, it can be noticed that for $k = 5$ and $\\lambda = 5$ with $10$ epochs, we get a train\\_error of $0.667$, a test\\_error of $0.824$ and a mean\\_error of $0.303$. The train\\_error and test\\_error are comparable and also quite small ($< 1$) in magnitude. This means that our model is robust. However, compared to the test\\_error, the mean\\_error is considerably smaller. Although there is a chance that our model performs poorly compared to the basic mean prediction, I think this is a misleading conclusion, because our observations are based on training on $943 \\times 1682$ data points and validating against $943 \\times 100$ data points. This setup is somewhat prone to over-fitting, and I see a chance of high variance in my experiment. However, I was unable to avoid it, as I wanted to select records such that I have at least some non-zero ratings for every movie in the training data. The model, however, turned out to be robust. I am anyway left with only $943 \\times 100$ records to compute the mean squared error and compare the mean prediction with my model predictions. If we had more testing data, it might well turn out that I get a higher mean\\_error. The opposite could also happen if my model has high variance, which is unlikely, as the difference between the train error and the test error is very small. However, one more important point to note is that mean predictions are not bad in this user--movie rating problem, because if $p$ persons like/dislike a particular movie, it is most likely that the $\\left(p+1\\right)^{\\text{th}}$ person also likes/dislikes it. The distribution of mean ratings for different samples of a particular movie can be assumed to approximately follow a normal distribution with low variance, tending almost to a Dirac delta spike, by the weak law of large numbers\\cite{lln}.\n\n\n\\end{solution}\n\n\n\\begin{problem}\nProve the representational equivalence of a three-layer neural network with a linear activation function in all neurons and a single-layer neural network with the same activation function. Assume a single-output network.\n\\end{problem}\n\n\\begin{solution}\n\\begin{figure}[H]\n\\includegraphics[width=16cm, keepaspectratio]{NeuralNetwork}\n\\caption{A three-layer neural network}\n\\end{figure}\n\nThe above neural network consists of $1$ input layer, $2$ hidden layers and $1$ output layer. Here, $a^{[i]}$ represents the vector of activations present in the $i^{\\text{th}}$ layer, and $w^{[i]}$ represents the vector of weights for the activations present in the $i^{\\text{th}}$ layer. $X$ is the input feature vector of $d$ dimensions. Hidden layer 1 contains $d_1$ activations and a bias, and hidden layer 2 contains $d_2$ activations and a bias. $\\hat{y}$ represents the output. $a^{[i]}_{j}$ represents the $j^{\\text{th}}$ activation in the $i^{\\text{th}}$ layer. $w^{[i]}_{jk}$ represents the weight connecting the $j^{\\text{th}}$ activation present in the $i^{\\text{th}}$ layer to the $k^{\\text{th}}$ activation present in the $(i-1)^{\\text{th}}$ layer. $x_i$ represents the $i^{\\text{th}}$ input feature.\n\nLet $d_0 = d$ for notational simplicity.
 Let us consider a linear activation function $h(z) = c_1z +c_2$.\n\nApplying the activation function to the activations, we can construct linear combinations of the activations in the $(l-1)^{\\text{th}}$ layer, after which the $j^{\\text{th}}$ activation in the $l^{\\text{th}}$ layer looks like\n\\begin{align*}\na^{[l]}_{j} &= \\sum_{i = 1}^{d_{l-1}} w^{[l]}_{ji}h\\left(a^{[l-1]}_{i}\\right) + w^{[l]}_{j0}\\\\\n\t\t\t  &= \\sum_{i = 1}^{d_{l-1}} w^{[l]}_{ji}\\left(c_1a^{[l-1]}_{i}+c_2\\right) + w^{[l]}_{j0}\\\\\n\t\t\t  &= \\sum_{i = 1}^{d_{l-1}} c_1w^{[l]}_{ji}a^{[l-1]}_{i}+\\sum_{i = 1}^{d_{l-1}}c_2 w^{[l]}_{ji} + w^{[l]}_{j0}\\\\\n\t\t\t  &= \\sum_{i = 1}^{d_{l-1}} {w^{'}}^{[l]}_{ji}a^{[l-1]}_{i} + {w^{'}}^{[l]}_{j0}\\\\\n\t\t\t  &= \\sum_{i = 0}^{d_{l-1}} {w^{'}}^{[l]}_{ji}a^{[l-1]}_{i}\n\\end{align*}\n\nwhere in the above equations the constants are absorbed, and the new weights, denoted $w^{'}$, are linear transformations of the original ones.\n\nLet us now represent the activations of layer $l$ in terms of layer $l - 2$:\n\\begin{align*}\na^{[l]}_{j} &= \\sum_{i = 0}^{d_{l-1}} {w^{'}}^{[l]}_{ji}a^{[l-1]}_{i}\\\\\n\t\t\t  &= \\sum_{i = 0}^{d_{l-1}} {w^{'}}^{[l]}_{ji}\\left(\\sum_{k = 0}^{d_{l-2}} {w^{'}}^{[l-1]}_{ik}a^{[l-2]}_{k}\\right)\\\\\n\t\t\t  &= \\sum_{k = 0}^{d_{l-2}} {w^{''}}^{[l]}_{jk}a^{[l-2]}_{k}\n\\end{align*}\n\nwhere the constants are again absorbed, and the new weights, denoted $w^{''}$, are linear transformations of the ones obtained in the previous steps.\n\nSince the above transformation is linear (similar to Linear Regression), we can skip hidden layer 1 and go directly to hidden layer 2 with suitably transformed weights, and then skip layer 2 and go to layer 3 (the output layer) with suitably transformed weights. The final representation of the neural network from layer 0 (the input layer) to layer 3 (the output layer) can be written as\n\n$$\na^{[3]}_{j} = \\sum_{i = 0}^{d_0} {w^{'''}}^{[0]}_{ji}a^{[0]}_{i}\n$$\n\nor\n\n\\begin{equation}\n\\hat{y} = \\sum_{i = 0}^{d} {w^{'''}}^{[0]}_{ji}x_{i}\n\\end{equation}\n\nIf we have a $1$-layer neural network with $1$ output and a linear activation function, it can be thought of as Linear Regression, whose general form is\n\n\\begin{equation}\n\\hat{y} = \\sum_{i = 0}^{d} {w}^{[0]}_{ji}x_{i}\n\\end{equation}\n\nEquations $(1)$ and $(2)$ are representationally equivalent. Hence, proved.\n\n\\end{solution}\n
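\nThe claim is also easy to check numerically. Below is a small illustrative sketch (taking $h(z) = z$, i.e., $c_1 = 1$, $c_2 = 0$, for simplicity) that composes three random affine layers and recovers a single equivalent affine map:\n\n\\begin{lstlisting}[language=Python]\n# Numerical check (illustrative): composing linear layers gives one linear layer.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nd, d1, d2 = 4, 6, 3\nW1, b1 = rng.normal(size=(d1, d)),  rng.normal(size=d1)\nW2, b2 = rng.normal(size=(d2, d1)), rng.normal(size=d2)\nw3, b3 = rng.normal(size=(1, d2)),  rng.normal(size=1)\n\nx = rng.normal(size=d)\ny_deep = w3 @ (W2 @ (W1 @ x + b1) + b2) + b3  # three-layer network\n\nW = w3 @ W2 @ W1                  # collapsed weight matrix\nb = w3 @ (W2 @ b1 + b2) + b3      # collapsed bias\ny_flat = W @ x + b                # equivalent single layer\n\nassert np.allclose(y_deep, y_flat)\n\\end{lstlisting}\n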
\n\n\\begin{problem}\nLet $A, B, C,$ and $D$ be binary input variables (features). Give decision trees to represent the following Boolean functions:\n\na) (3 points) $A \\wedge \\bar{B}$\n\nb) (3 points) $A \\vee(\\bar{B} \\wedge C)$\n\nc) (4 points) $A \\oplus \\bar{B}$\\\\\nwhere $\\bar{A}$ is the negation of $A$ and $\\oplus$ is an exclusive OR operation.\n\\end{problem}\n\n\\begin{solution}\nWe can replace 'T' with $1$ and 'F' with $0$ and still obtain the same outputs in the end.\n\\begin{figure}[H]\n\\includegraphics[width=5.5cm, keepaspectratio]{Problem6_a}\n\\caption{a) $A \\wedge \\bar{B}$}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=5.5cm, keepaspectratio]{Problem6_b}\n\\caption{b) $A \\vee(\\bar{B} \\wedge C)$}\n\\end{figure}\n\\begin{figure}[H]\n\\includegraphics[width=5.5cm, keepaspectratio]{Problem6_c}\n\\caption{c) $A \\oplus \\bar{B}$}\n\\end{figure}\n\n\n\\end{solution}\n\n\n\\medskip\n\n\\begin{thebibliography}{9}\n\\bibitem{movielens100}\nF. Maxwell Harper and Joseph A. Konstan. 2015. \\textit{The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages.}\n\\\\\\texttt{DOI=http://dx.doi.org/10.1145/2827872}\n\n\\bibitem{nonuniform_mf}\nXiangnan He, Jinhui Tang, Xiaoyu Du, Richang Hong, Tongwei Ren, and Tat-Seng Chua. 2018. \\textit{Fast Matrix Factorization with Non-Uniform Weights on Missing Data}\n\\\\\\texttt{URL: https://arxiv.org/pdf/1811.04411.pdf}\n\n\\bibitem{lln}\nRoutledge, R. (2016, October 12). \\textit{Law of Large Numbers. Encyclopedia Britannica.}\n\\\\\\texttt{URL: https://www.britannica.com/science/law-of-large-numbers}\n\\end{thebibliography}\n\n\n\\end{document}\n", "meta": {"hexsha": "61778e605981b16a9720bb0189fed25ee1c7bc72", "size": 64474, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Matrix Factorization/main.tex", "max_stars_repo_name": "sakshisuman12/deeplearing", "max_stars_repo_head_hexsha": "efa2769ee6f115fb138321e6a5f3ac633a567926", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Matrix Factorization/main.tex", "max_issues_repo_name": "sakshisuman12/deeplearing", "max_issues_repo_head_hexsha": "efa2769ee6f115fb138321e6a5f3ac633a567926", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Matrix Factorization/main.tex", "max_forks_repo_name": "sakshisuman12/deeplearing", "max_forks_repo_head_hexsha": "efa2769ee6f115fb138321e6a5f3ac633a567926", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.9781209781, "max_line_length": 2303, "alphanum_fraction": 0.6535192481, "num_tokens": 20556, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952052, "lm_q2_score": 0.8006920068519376, "lm_q1q2_score": 0.5625102015295739}}
{"text": "\\section{Initial Experiments}\n\nWe start by finding the ranking of recommended nodes in an existing network. We consider a synthetic network $G(N,m,f,h)$ with fixed parameters $N=1000$, $m=2$, $f=0.2$, and with $h$ varied over $[0.0, 1.0]$ with step-size $0.1$.\n\nWe show in figure \\ref{fig-hom} the initial structure of the network along with the degree distribution and degree growth for the network, which is grown considering only preferential attachment and homophily, as in equation \\ref{pref-hom}. Our results agree with \\cite{karimi2018homophily}, showing similar network characteristics.\n
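\nFor reference, the attachment rule of equation \\ref{pref-hom} can be sketched as follows (illustrative Python; this assumes the standard homophilous preferential-attachment form of \\cite{karimi2018homophily}, where a new node attaches to an existing node with probability proportional to the homophily term times the node's degree, and all names are stand-ins):\n\n\\begin{verbatim}\n# Sketch (illustrative) of homophilous preferential attachment.\nimport random\n\ndef attachment_probs(degrees, groups, new_group, h):\n    # Unnormalized weight of node j: homophily(new node, j) * degree(j).\n    # h weights same-group links, 1 - h weights cross-group links;\n    # max(k, 1) lets isolated nodes still be chosen.\n    weights = [(h if g == new_group else 1.0 - h) * max(k, 1)\n               for k, g in zip(degrees, groups)]\n    total = sum(weights)\n    return [w / total for w in weights]\n\ndef pick_target(degrees, groups, new_group, h):\n    probs = attachment_probs(degrees, groups, new_group, h)\n    return random.choices(range(len(degrees)), probs)[0]\n\\end{verbatim}\n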
\n\\begin{figure}[hbt!]\n\t\\centering\n\t\\includegraphics[scale=0.23]{images/proposal_figure_1}\n\t\\caption{Network grown using preferential attachment and homophily. The minority fraction has been fixed to 0.2, and the number of edges added per iteration is 2. The homophily parameter ranges from 0.0 (complete heterophily) to 1.0 (complete homophily). Tile A shows a visualization of the network with 100 nodes, for different homophily parameters. For Tiles B and C, the network contains 1000 nodes, averaged over 5 iterations.}\n\t\\label{fig-hom}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_1}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.05.}\n\t\\label{fig-rank_1}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_2}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.1.}\n\t\\label{fig-rank_2}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_3}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.2.}\n\t\\label{fig-rank_3}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_4}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.3.}\n\t\\label{fig-rank_4}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_5}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.4.}\n\t\\label{fig-rank_5}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.28]{images/proposal_figure_3_6}\n\t\\caption{Fraction of minorities recommended in the top 5 ranks for the two node groups. On the left-hand side are the recommendations provided to minority nodes, while on the right-hand side are the recommendations provided to majority nodes. The 3 different ranking schemes are shown with three colors. The network considered has 1000 nodes, with a minority fraction of 0.5.}\n\t\\label{fig-rank_6}\n\\end{figure}\n\nFor the network with varying homophily, we run our Multi-Armed Bandits ranking approach for all the nodes, as devised in \\cite{radlinski2008learning}. We take the number of slots to be 5, thus training for 5 ranking spots per node in the network. The underlying MAB algorithm used is $\\epsilon$-Greedy (with $\\epsilon=0.1$). We use the sample-average method for estimating the action values, where the actions are the choices of which node in the network to recommend. The reward function returns 1 when the recommended node is chosen by our clicking model, and 0 if the recommended node is not chosen. Details about the node choice model are provided in the Methods section. We train our recommender system for $10^5$ iterations. We then examine at which positions nodes are ranked, depending on the node group asking for recommendations. We also run the TopRank algorithm \\cite{lattimore2018toprank} and the Adamic-Adar Index to compare our results. Figures \\ref{fig-rank_1}--\\ref{fig-rank_6} show our results for different minority fractions.\n
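\nA minimal sketch of the per-slot $\\epsilon$-greedy learner with sample-average updates is shown below (illustrative Python, in the spirit of \\cite{radlinski2008learning}; the class and variable names are stand-ins, and the click model is abstracted away):\n\n\\begin{verbatim}\n# Sketch (illustrative) of one epsilon-greedy bandit for a single ranking slot.\nimport random\n\nclass EpsGreedySlot:\n    def __init__(self, n_arms, eps=0.1):\n        self.eps = eps\n        self.counts = [0] * n_arms    # times each candidate node was tried\n        self.values = [0.0] * n_arms  # sample-average reward estimates\n\n    def select(self):\n        if random.random() < self.eps:                # explore\n            return random.randrange(len(self.values))\n        return max(range(len(self.values)),           # exploit\n                   key=lambda a: self.values[a])\n\n    def update(self, arm, reward):\n        # Incremental sample average: Q <- Q + (r - Q) / n.\n        self.counts[arm] += 1\n        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]\n\n# One bandit per ranking slot; reward is 1 if the click model picks the\n# shown node, 0 otherwise.\nslots = [EpsGreedySlot(n_arms=999) for _ in range(5)]\n\\end{verbatim}\n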
\nFor the completely heterophilic case, a sharp difference can be seen between the Adamic-Adar ranking and the Ranked-Bandits ranking. This is expected, since the Adamic-Adar Index does not consider the homophily value when making link predictions: in this case minority nodes always have majority nodes as their neighbours, and those majority nodes in turn have minority neighbours, so minority nodes end up being recommended other minority nodes. The Ranked-Bandit, however, takes the homophily parameter into account, and hence its recommendations are better aligned with the idea of homophily.\n\n\\bigskip", "meta": {"hexsha": "096e209370bcb63ede0df356448e103024fdbb13", "size": 5674, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/proposal/initial_experiments.tex", "max_stars_repo_name": "dvaruas/minority_recommendations", "max_stars_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/proposal/initial_experiments.tex", "max_issues_repo_name": "dvaruas/minority_recommendations", "max_issues_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/proposal/initial_experiments.tex", "max_forks_repo_name": "dvaruas/minority_recommendations", "max_forks_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.48, "max_line_length": 1075, "alphanum_fraction": 0.7999647515, "num_tokens": 1384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5625101981883526}}
{"text": "\\section{Abstract}\n\tThe project is composed of two parts:\n\t\\begin{itemize}\n\t\t\\item Implementing an Integer Linear Programming model with the Cplex API\n\t\t\\item Implementing a meta-heuristic solution method\n\t\\end{itemize}\n\tFor the second part, my method of choice is a genetic algorithm; details on the implementation and design decisions are in section 3.\n\tBoth solution methods will be applied to the following problem:\n\t\\blockquote{A company produces boards with holes used to build electric frames.  Boards are positioned over a machine and a drill moves over the board, stops at the desired positions and makes the holes.  Once a board is drilled, a new board is positioned and the process is iterated many times.  Given the position of the holes on the board, the company asks us to determine the hole sequence that minimizes the total drilling time, taking into account that the time needed for making a hole is the same and constant for all the holes.}\n\tThe two approaches are then tested using benchmark instances from the literature and compared to spot differences in results (optimality, execution time, reliability, ...)\n\t", "meta": {"hexsha": "68bb2a1257d1aa6b79f5317391d7d4ca31b236f8", "size": 1126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/abstract.tex", "max_stars_repo_name": "abeccaro/MeMOC-project", "max_stars_repo_head_hexsha": "74d6b79ac72ed573c280478820a221424fc138f5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-02-07T13:28:49.000Z", "max_stars_repo_stars_event_max_datetime": "2018-02-07T13:28:49.000Z", "max_issues_repo_path": "report/sections/abstract.tex", "max_issues_repo_name": "abeccaro/MeMOC-project", "max_issues_repo_head_hexsha": "74d6b79ac72ed573c280478820a221424fc138f5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/abstract.tex", "max_forks_repo_name": "abeccaro/MeMOC-project", "max_forks_repo_head_hexsha": "74d6b79ac72ed573c280478820a221424fc138f5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 102.3636363636, "max_line_length": 540, "alphanum_fraction": 0.7992895204, "num_tokens": 236, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5625101882147147}}
{"text": "\\documentclass[conference]{IEEEtran}\n\\usepackage{graphicx}\n\\usepackage[strings]{underscore}\n\\usepackage{biblatex}\n\n%Title\n\\title{Exploring Otsu's Method}\n\\author{\n    \\IEEEauthorblockN{Sotheanith Sok}\n    \\IEEEauthorblockA{\n        Department of Engineering\\\\ \n        California State University of Long Beach\\\\\n        Sotheanith.Sok@student.csulb.edu\n    }\n}\n\\date{September 9 2021}\n\n\\addbibresource {refs.bib}\n\n\\begin{document}\n\\maketitle\n\n\\section{Introduction}\nIn computer vision, image segmentation is the process of dividing an image into multiple segments; these segments simplify the image, which makes it easier to analyze for purposes such as boundary locating, object finding, and many more. One of the simpler types of image segmentation is splitting a gray-scale image into a foreground area and a background area by picking a pixel intensity threshold that separates the two areas. Unfortunately, picking the pixel intensity threshold manually is not efficient, as any given threshold only works for a select group of images. Thus, a more adaptive approach is necessary, and Otsu's method is one such approach.\n\nOtsu's method, named after Nobuyuki Otsu, is an algorithm for automatically determining a pixel intensity threshold that separates the background area from the foreground area of any gray-scale image. This is possible under the assumption that the histogram generated from the pixel intensities and pixel counts of a given image has a bi-modal distribution, with one peak representing the background area and the other representing the foreground area \\cite{wikipedia-contributors:2020}.\n\n\\begin{figure}[!htb]\n    \\centering\n    \\includegraphics[width=\\linewidth]{otsu_example.png}\n    \\caption{Original image, binary image, and histogram used or generated by Otsu's method \\cite{scipy-lectures:2020}.}\n\\end{figure}\n\n\\section{The Algorithm}\nIn order to find the pixel intensity threshold, Otsu's method performs an exhaustive search over all pixel intensity values and selects the value that minimizes the within-class variance (equivalently, maximizes the between-class variance). For any given image, the algorithm starts by generating a histogram representing all pixel intensities and the numbers of pixels that have such intensities \\cite{tay:2020}. Then, according to \\cite{greensted:2010}, the within-class variance can be represented as\n\\[\\sigma_w^2 = W_f\\sigma_f^2 + W_b\\sigma_b^2\\]\nwhere:\n\\begin{description}\n    \\item[$\\sigma_w^2$] is the within-class variance.\n    \\item[$W_f$] is the weight of the foreground area.\n    \\item[$\\sigma_f^2$] is the variance of the foreground area.\n    \\item[$W_b$] is the weight of the background area.\n    \\item[$\\sigma_b^2$] is the variance of the background area.\n\\end{description}\nThe weights of the foreground area and the background area can be calculated with the following formulas\n\\[W_b =  \\frac{\\sum_{i=0}^{t-1} P(i)}{\\sum_{i=0}^{all} P(i)} \\]\n\\[W_f =  \\frac{\\sum_{i=t}^{all} P(i)}{\\sum_{i=0}^{all} P(i)} \\]\nwhere:\n\\begin{description}\n    \\item[$W_b$] is the weight of the background area.\n    \\item[$W_f$] is the weight of the foreground area.\n    \\item[$t$] is the pixel intensity threshold.\n    \\item[$P(i)$] is the pixel count at intensity $i$.\n\\end{description}\nThe remaining factors, the variances of the background area and the foreground area, can be calculated from the mean of each area with the following formulas\n\\[\\mu_b = \\frac{\\sum_{i=0}^{t-1} i P(i)}{\\sum_{i=0}^{t-1}P(i)}\\]\n\\[\\sigma_b^2 = \\frac{\\sum_{i=0}^{t-1}(i-\\mu_b)^2 \\times P(i)}{\\sum_{i=0}^{t-1}P(i)}\\]\n\\[\\mu_f = \\frac{\\sum_{i=t}^{all} i P(i)}{\\sum_{i=t}^{all}P(i)}\\]\n\\[\\sigma_f^2 = \\frac{\\sum_{i=t}^{all}(i-\\mu_f)^2 \\times P(i)}{\\sum_{i=t}^{all}P(i)}\\]\nwhere:\n\\begin{description}\n    \\item[$\\mu_b$] is the mean of the background area.\n    \\item[$\\mu_f$] is the mean of the foreground area.\n    \\item[$\\sigma_b^2$] is the variance of the background area.\n    \\item[$\\sigma_f^2$] is the variance of the foreground area.\n    \\item[$t$] is the intensity threshold.\n    \\item[$P(i)$] is the pixel count at intensity $i$.\n\\end{description}\nThe entire process is repeated for all pixel intensities, and the threshold that produces the smallest within-class variance (equivalently, the largest between-class variance) is picked as the final pixel intensity threshold.\n
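\nAs a minimal sketch (illustrative; assuming an 8-bit gray-scale image stored as a numpy array, with all names hypothetical), the exhaustive search can be written directly from the formulas above:\n\n\\begin{verbatim}\n# Illustrative sketch of Otsu's exhaustive threshold search (8-bit image).\nimport numpy as np\n\ndef otsu_threshold(image):\n    hist, _ = np.histogram(image, bins=256, range=(0, 256))\n    levels = np.arange(256)\n    best_t, best_var = 0, np.inf\n    for t in range(1, 256):\n        back, fore = hist[:t], hist[t:]\n        if back.sum() == 0 or fore.sum() == 0:\n            continue  # one class is empty; variance undefined\n        w_b = back.sum() / hist.sum()\n        w_f = fore.sum() / hist.sum()\n        mu_b = (levels[:t] * back).sum() / back.sum()\n        mu_f = (levels[t:] * fore).sum() / fore.sum()\n        var_b = (((levels[:t] - mu_b) ** 2) * back).sum() / back.sum()\n        var_f = (((levels[t:] - mu_f) ** 2) * fore).sum() / fore.sum()\n        within = w_b * var_b + w_f * var_f  # within-class variance\n        if within < best_var:\n            best_t, best_var = t, within\n    return best_t\n\\end{verbatim}\n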
\n\\section{Limitation}\nWhile Otsu's method performs well on most gray-scale images, it still has problems when processing images that do not produce a bi-modal distribution with high peaks and deep valleys \\cite{kang-and-atul:2019}.\n\\begin{figure}[!htb]\n    \\centering\n    \\includegraphics[width=\\linewidth]{small_object_image.png}\n    \\caption{Otsu's method performance on an image with a small foreground area \\cite{altphotos:2017}.}\n\\end{figure}\nTo start with, any image that has a foreground area significantly smaller than its background area tends not to do well with Otsu's method, due to the low pixel count in the foreground area, which is often necessary for generating a high peak in the image's histogram.\n\\begin{figure}[!htb]\n    \\centering\n    \\includegraphics[width=\\linewidth, scale=1.5]{noisy_image.png}\n    \\caption{Otsu's method performance on an image that contains a lot of noise \\cite{qrcode:2012}.}\n\\end{figure}\nThen, there are images with a lot of noise. This type of image also does not do well with Otsu's method, because the noise in such images often prevents the formation of a bi-modal distribution in the histogram.\n\\begin{figure}[!htb]\n    \\centering\n    \\includegraphics[width=\\linewidth]{non-uniform illumination_image.png}\n    \\caption{Otsu's method performance on an image with non-uniform illumination \\cite{bird:2014}.}\n\\end{figure}\nLast but not least, Otsu's method performs poorly on images with non-uniform illumination because it uses a single global threshold to separate the background area and the foreground area. Put simply, for such an image, a section of the background area might have the same illumination as another section of the foreground area, and vice versa.\n\n\\section{Final Thought}\nOtsu's method is a great choice for finding a global intensity threshold for splitting a gray-scale image into a foreground area and a background area.
 However, due to the nature of the algorithm, which relies heavily on an image's intensities having a bi-modal distribution, it should not be applied universally to all gray-scale images. To overcome the limitations of Otsu's method, preprocessing techniques such as image denoising should be used before applying Otsu's method to an image.\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "0036be30a660a8a32e1362b20a435cf40625e66e", "size": 6522, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework 2/main.tex", "max_stars_repo_name": "sotheanith/CECS-553-Collection", "max_stars_repo_head_hexsha": "8a52ca6fa3e4a6579d17bf67b668bc83fce361d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Homework 2/main.tex", "max_issues_repo_name": "sotheanith/CECS-553-Collection", "max_issues_repo_head_hexsha": "8a52ca6fa3e4a6579d17bf67b668bc83fce361d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework 2/main.tex", "max_forks_repo_name": "sotheanith/CECS-553-Collection", "max_forks_repo_head_hexsha": "8a52ca6fa3e4a6579d17bf67b668bc83fce361d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.5510204082, "max_line_length": 691, "alphanum_fraction": 0.7559030972, "num_tokens": 1677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.5624168780846504}}
{"text": "\\graphicspath{{Pics/}}\n\n\n\\section{First Portion}\n\t\n\n\t\\lem{}{Let the incircle and excircle (opposite to $A$) of $\\triangle ABC$ meet $BC$ at $D$ and $E$ resp. Suppose $F$ is the antipode of $D$ wrt the incircle.\n\t\t\n\t\\begin{enumerate}\n\t\t\\item Prove that $A,F,E$ are collinear.\n\t\t\\item Let $M$ be the midpoint of $DE$. Prove that $MI$ meets $AD$ at its midpoint. \n\t\\end{enumerate}\n\t\n\t\\figdf{.5}{basics_lem_2}{}}\n\t\n\n\n\t\\lem{}{Let the incircle of $\\triangle ABC$ meet $AB$ and $AC$ at $X$ and $Y$ resp. $BI$ and $CI$ meet $XY$ at $P$ and $Q$ respectively. Prove that $BPQC$ is cyclic. (In fact $BP\\perp CP$ and $BQ\\perp CQ$)\n\t\\figdf{.5}{basics_lem_2}{}}\n\t\n\n\n\t\\lem{}{$AD$ is an altitude of $\\triangle ABC$. $E,F$ are on $AC,AB$ so that $AD,BE,CF$ are concurrent. Prove $\\angle EDA=\\angle FDA$.}\n\t\n\n\n\t\\lem{}{Let $AD$ be an altitude of $\\triangle ABC$ and $E\\in \\bigodot ABC$ so that $AE\\parallel BC$. Prove that $D,G,E$ are collinear where $G$ is the centroid of $\\triangle ABC$.\n\t\\figdf{.5}{basics_lem_3}{}}\n\t\n\n\n\t\\prob{}{}{B}{Let $O$ be the circumcenter of $\\triangle ABC$ and let $A',B',C'$ be the reflections of $O$ in $BC,CA,AB$ resp. Prove that $AA',BB',CC'$ are concurrent.}\n\t\n\n\n\t\\prob{}{}{B}{Let $D,E$ be on sides $AC,AB$ of $\\triangle ABC$ resp. such that $BE=CD$. Let $\\bigodot ABC \\cap \\bigodot ADE= P$. Prove that $PB=PC$.}\n\t\n\n\n\t\\prob{}{}{B}{Let a line $PQ$ touch circles $S_1$ and $S_2$ at $P$ and $Q$ resp. Prove that the radical axis of $S_1$ and $S_2$ passes through the midpoint of $PQ$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $\\omega_1,\\omega_2,\\omega_3$ be $3$ circles. Prove that the $3$ radical axes of $\\omega_1$ and $\\omega_2$, $\\omega_2$ and $\\omega_3$, and $\\omega_3$ and $\\omega_1$ are either concurrent or parallel.}\n\t\n\n\n\t\\prob{}{}{B}{Two equal-radius circles $\\omega_1$ and $\\omega_2$ are centered at points $O_1$ and $O_2$. A point $X$ is reflected through $O_1$ and $O_2$ to get points $A_1$ and $A_2$. \n\t\tThe tangents from $A_1$ to $\\omega_1$ touch $\\omega_1$ at points $P_1$ and $Q_1$, and \n\t\tthe tangents from $A_2$ to $\\omega_2$ touch $\\omega_2$ at points $P_2$ and $Q_2$. \n\t\tIf $P_1Q_1$ and $P_2Q_2$ intersect at $Y$, prove that $Y$ is equidistant from $A_1$ and $A_2$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $BD,CE$ be the altitudes of $\\triangle ABC$, $H$ its orthocenter, and $M$ the midpoint of $BC$. If the ray $MH$ meets $\\bigodot ABC$ at the point $K$, prove that $AK,BC,DE$ are concurrent.}\n\t\n\n\n\t\\prob{}{}{B}{Two circles $\\omega$ and $\\Gamma$ touch each other internally at $P$, with $\\omega$ inside $\\Gamma$. Let $AB$ be a chord of $\\Gamma$ which touches $\\omega$ at $D$. Let $PD\\cap \\Gamma=Q$. Prove that $QA=QB$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $AD$ be a symmedian of $\\triangle ABC$ with $D$ on $\\bigodot ABC$. Let $M$ be the midpoint of $AD$. Prove that $\\angle BMD=\\angle CMD$ and that $A,M,O,D$ are cyclic, where $O$ is the circumcenter of $\\triangle ABC$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ A, B $ be two fixed points and let $ P $ be a varying point such that $ \\frac{PA}{PB} $ is constant. Prove that the locus of $ P $ is a circle.}\n\t\n\n\n\t\\prob{}{}{B}{Prove that $ r_1+r_2+r_3=4R+r $ ($ R, r, r_1, r_2, r_3 $ are the circumradius, inradius and the three exradii of a triangle, respectively)}\n\t\n\n\n\t\\prob{}{}{B}{Let $ M $ be the midpoint of the altitude $ BE $ in $ \\triangle ABC $ and suppose that the excircle opposite to $ B $ touches $ AC $ at $ Y $.
 Then $ MY $ passes through the incenter $ I $.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ ABC $ be a triangle, and draw isosceles triangles $ \\triangle DBC, \\triangle AEC, \\triangle ABF $ external to $ \\triangle ABC $ (with $ BC; CA; AB $ as their respective bases). Prove that the lines through $ A; B; C $ perpendicular to $ EF; FD; DE $, respectively, are concurrent.}\n\n\n\n\t\\prob{}{}{B}{In a triangle $ ABC $ we have $ AB = AC $. A circle which is internally tangent to the circumscribed circle of the triangle is also tangent to the sides $ AB; AC $ at the points $ P $ and $ Q $, respectively. Prove that the midpoint of $ PQ $ is the center of the inscribed circle of the triangle $ ABC $.}\n\t\n\n\t\\prob{}{}{B}{\\textbf{Nagel Point $ N $}: If the excircles of $ ABC $ touch $ BC; CA; AB $ at $ D; E; F $, then the intersection point of $ AD; BE; CF $ is called the \\textbf{Nagel Point} $ N $. Prove that\n\t\t\n\t\\begin{enumerate}\n\t\t\\item $ I; G; N $ are collinear. ($ G $ centroid, $ I $ incenter.)\n\t\t\\item $ GN = 2 \\cdot IG $.\n\t\t\\item \\textbf{Spieker center} $ S $: The incircle of the medial triangle is called the Spieker circle, and its center is the \\textbf{Spieker center} $ S $. Prove that $ S $ is the midpoint of $ IN $.\n\t\\end{enumerate}}\n\n\t\n\t\n\\section{Second Portion}\n\n\n\t\\prob{}{}{B}{Let $PB$ and $PC$ be tangent to $\\bigodot ABC$. Let $D,E,F$ be the projections of $A$ on $BC,PB,PC$ resp. Prove that $AD^2=AE\\times AF$.}\n\t\n\n\t\\prob{}{}{B}{Let $D$ and $E$ be on $AB$ and $AC$ such that $DE \\parallel BC$. $P$ is an arbitrary point inside $\\triangle ADE$. $PB,PC\\cap DE=F,G$. Let $\\bigodot PDG \\cap \\bigodot PFE=Q$. Prove that $A,P,Q$ are collinear.}\n\t\n\n\t\\prob{}{}{B}{Let $AB$ and $CD$ be chords in a circle of center $O$, with $A, B, C, D$ distinct, and with the lines $AB$ and $CD$ meeting at a right angle at point $E$. Let also $M$ and $N$ be the midpoints of $AC$ and $BD$ respectively. If $MN \\bot OE$, prove that $AD \\parallel BC$.}\n\t\n\n\t\\prob{}{}{B}{Circles $ \\mathcal{C}_1$ and $ \\mathcal{C}_2$ intersect at $ A$ and $ B$. Let $ M\\in AB$. A line through $ M$ (different from $ AB$) cuts circles $ \\mathcal{C}_1$ and $ \\mathcal{C}_2$ at $ Z,D,E,C$ respectively such that $ D,E\\in ZC$. Perpendiculars at $ B$ to the lines $ EB,ZB$ and $ AD$ respectively cut circle $ \\mathcal{C}_2$ in $ F,K$ and $ N$. Prove that $ KF=NC$.}\n\t\n\n\t\\prob{}{}{B}{Let $D$ be a point on side $AC$ of triangle $ABC$. Let $E$ and $F$ be points on the segments $BD$ and $BC$ respectively, such that $\\angle BAE = \\angle CAF$. Let $P$ and $Q$ be points on $BC$ and $BD$ respectively, such that $EP$ and $FQ$ are both parallel to $CD$. Prove that $\\angle BAP = \\angle CAQ$.}\n\t\n\n\t\\prob{}{}{B}{In the non-isosceles triangle $ABC$ an altitude from $A$ meets side $BC$ in $D$. Let $M$ be the midpoint of $BC$ and let $N$ be the reflection of $M$ in $D$. The circumcircle of triangle $AMN$ intersects the side $AB$ in $P\\ne A$ and the side $AC$ in $Q\\ne A$. Prove that $AN,\\ BQ$ and $CP$ are concurrent.}\n\t\n\n\t\\prob{}{}{B}{In triangle $ABC$, the interior and exterior angle bisectors of $ \\angle BAC$ intersect the line $BC$ in $D$ and $E$, respectively. Let $F$ be the second point of intersection of the line $AD$ with the circumcircle of the triangle $ ABC$. Let $O$ be the circumcenter of the triangle $ ABC $ and let $D'$ be the reflection of $D$ in $O$.
 Prove that $ \\angle D'FE = 90^{\\circ}$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ABCD$ be a convex quadrilateral such that the line $BD$ bisects the angle $ABC.$ The circumcircle of triangle $ABC$ intersects the sides $AD$ and $CD$ in the points $P$ and $Q,$ respectively. The line through $D$ and parallel to $AC$ intersects the lines $BC$ and $BA$ at the points $R$ and $S,$ respectively. Prove that the points $P, Q, R$ and $S$ lie on a common circle.}\n\t\n\n\n\t\\prob{}{}{B}{The incircle of triangle $ABC$ touches $BC$, $CA$, $AB$ at points $A_1$, $B_1$, $C_1$, respectively. The perpendicular from the incenter $I$ to the median from vertex $C$ meets the line $A_1B_1$ in point $K$. Prove that $CK$ is parallel to $AB$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $X$ be an arbitrary point inside the circumcircle of a triangle $ABC$. The lines $BX$ and $CX$ meet the circumcircle in points $K$ and $L$ respectively. The line $LK$ intersects $BA$ and $AC$ at points $E$ and $F$ respectively. Find the locus of points $X$ such that the circumcircles of triangles $AFK$ and $AEL$ touch.}\n\t\n\n\n\t\\prob{}{}{B}{Let $BD$ be a bisector of triangle $ABC$. Points $I_a$, $I_c$ are the incenters of triangles $ABD$, $CBD$ respectively. The line $I_aI_c$ meets $AC$ in point $Q$. Prove that $\\angle DBQ = 90^\\circ$.}\n\t\n\n\n\t\\prob{}{}{B}{Given a right-angled triangle $ABC$ with hypotenuse $AB$. Let $M$ be the midpoint of $AB$ and $O$ be the center of the circumcircle $\\omega$ of triangle $CMB$. The line $AC$ meets $\\omega$ for the second time in point $K$. Segment $KO$ meets the circumcircle of triangle $ABC$ in point $L$. Prove that segments $AL$ and $KM$ meet on the circumcircle of triangle $ACM$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $BN$ be a median of triangle $ABC$. $M$ is a point on $BC$. $S$ lies on $BN$ such that $MS\\parallel AB$. $P$ is a point such that $SP\\perp AC$ and $BP\\parallel AC$. $MP$ cuts $AB$ at $Q$. Prove that $QB=QP$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ABCD$ be a convex quadrilateral with $AB$ parallel to $CD$. Let $P$ and $Q$ be the midpoints of $AC$ and $BD$, respectively. Prove that if $\\angle ABP=\\angle CBD$, then $\\angle BCQ=\\angle ACD$.}\n\t\n\n\n\t\\prob{}{}{B}{Point $P$ lies inside a triangle $ABC$. Let $D,E$ and $F$ be the reflections of the point $P$ in the lines $BC,CA$ and $AB$, respectively. Prove that if the triangle $DEF$ is equilateral, then the lines $AD,BE$ and $CF$ intersect in a common point.}\n\t\n\n\n\t\\prob{}{}{B}{Let $\\triangle{ABC}$ be an acute-angled triangle. The circle with diameter $AB$ intersects the sides $AC$ and $BC$ at points $E$ and $F$ respectively. The tangents drawn to the circle through $E$ and $F$ intersect at $P$. Show that $P$ lies on the altitude through the vertex $C$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $\\gamma$ be a circle and let $P$ be a point outside $\\gamma$. Let $PA$ and $PB$ be the tangents from $P$ to $\\gamma$ (where $A, B \\in \\gamma$). A line passing through $P$ intersects $\\gamma$ at points $Q$ and $R$. Let $S$ be a point on $\\gamma$ such that $BS \\parallel QR$. Prove that $SA$ bisects $QR$.}\n\t\n\n\n\t\\prob{}{}{B}{Given is a convex quadrilateral $ABCD$ with $AB=CD$. Draw the triangles $ABE$ and $CDF$ outside $ABCD$ so that $\\angle{ABE} = \\angle{DCF}$ and $\\angle{BAE}=\\angle{FDC}$. Prove that the midpoints of $\\overline{AD}$, $\\overline{BC}$ and $\\overline{EF}$ are collinear.}\n\t\n\n\n\t\\prob{}{}{B}{Let $P$ be a point outside circle $C$. Let $PA$ and $PB$ be the tangents to the circle drawn from $P$. Choose a point $K$ on $AB$.
Suppose that the circumcircle of triangle $PBK$ intersects $C$ again at $T$. Let ${P}'$ be the reflection of $P$ with respect to $A$. Prove that\n\t\t\\[ \\angle PBT = \\angle {P}'KA \\]}\n\t\n\n\n\t\\prob{}{}{B}{Consider a circle $C_1$ and a point $O$ on it. Circle $C_2$ with center $O$ intersects $C_1$ in two points $P$ and $Q$. $C_3$ is a circle which is externally tangent to $C_2$ at $R$ and internally tangent to $C_1$ at $S$, and suppose that $RS$ passes through $Q$. Suppose $X$ and $Y$ are the second intersection points of $PR$ and $OR$ with $C_1$. Prove that $QX$ is parallel to $SY$.}\n\t\n\n\n\t\\prob{}{}{B}{In triangle $ABC$ we have $\\angle A=\\frac{\\pi}{3}$. Construct $E$ and $F$ on the extensions of $AB$ and $AC$ respectively such that $BE=CF=BC$. Suppose that $EF$ meets the circumcircle of $\\triangle ACE$ in $K$ ($K\\not \\equiv E$). Prove that $K$ is on the bisector of $\\angle A$.}\n\t\n\n\n\t\\prob{}{}{B}{In triangle $ABC$, $\\angle A=90^{\\circ}$ and $M$ is the midpoint of $BC$. Point $D$ is chosen on segment $AC$ such that $AM=AD$ and $P$ is the second intersection point of the circumcircles of triangles $\\Delta AMC,\\Delta BDC$. Prove that the line $CP$ bisects $\\angle ACB$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $C_1,C_2$ be two circles such that the center of $C_1$ is on the circumference of $C_2$. Let $C_1,C_2$ intersect each other at points $M,N$. Let $A,B$ be two points on the circumference of $C_1$ such that $AB$ is a diameter of it. Let lines $AM,BN$ meet $C_2$ for the second time at $A',B'$, respectively. Prove that $A'B'=r_1$ where $r_1$ is the radius of $C_1$.}\n\t\n\n\n\t\\prob{}{}{B}{Given a triangle $ABC$, let $P$ lie on the circumcircle of the triangle and be the midpoint of the arc $BC$ which does not contain $A$. Draw a straight line $l$ through $P$ so that $l$ is parallel to $AB$. Denote by $k$ the circle which passes through $B$, and is tangent to $l$ at the point $P$. Let $Q$ be the second point of intersection of $k$ and the line $AB$ (if there is no second point of intersection, choose $Q = B$). Prove that $AQ = AC$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ABCD$ be a cyclic quadrilateral in which the internal angle bisectors of $\\angle ABC$ and $\\angle ADC$ intersect on the diagonal $AC$. Let $M$ be the midpoint of $AC$. The line parallel to $BC$ which passes through $D$ cuts $BM$ at $E$ and circle $ABCD$ in $F$ ($F \\neq D$). Prove that $BCEF$ is a parallelogram.}\n\t\n\n\n\t\\prob{}{}{B}{The side $BC$ of the triangle $ABC$ is extended beyond $C$ to $D$ so that $CD = BC$. The side $CA$ is extended beyond $A$ to $E$ so that $AE = 2CA$. Prove that, if $AD=BE$, then the triangle $ABC$ is right-angled.}\n\t\n\n\n\t\\prob{}{}{B}{$ABCD$ is a cyclic quadrilateral inscribed in the circle $\\Gamma$ with $AB$ as diameter. Let $E$ be the intersection of the diagonals $AC$ and $BD$. The tangents to $\\Gamma$ at the points $C,D$ meet at $P$. Prove that $PC=PE$.}\n\t\n\n\n\t\\prob{}{}{B}{The quadrilateral $ABCD$ is inscribed in a circle. The point $P$ lies in the interior of $ABCD$, and $\\angle PAB = \\angle PBC = \\angle PCD = \\angle PDA$. The lines $AD$ and $BC$ meet at $Q$, and the lines $AB$ and $CD$ meet at $R$. Prove that the lines $PQ$ and $PR$ form the same angle as the diagonals of $ABCD$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ABCD$ be a cyclic quadrilateral with opposite sides not parallel. Let $X$ and $Y$ be the intersections of $AB,CD$ and $AD,BC$ respectively. 
Let the angle bisector of $\\angle AXD$ intersect $AD,BC$ at $E,F$ respectively, and let the angle bisector of $\\angle AYB$ intersect $AB,CD$ at $G,H$ respectively. Prove that $EFGH$ is a parallelogram.}\n\t\n\n\n\t\\prob{}{}{B}{Triangle $ABC$ is given with centroid $G$ and circumcentre $O$ such that $GO$ is perpendicular to $AG$. Let $A'$ be the second intersection of $AG$ with the circumcircle of triangle $ABC$. Let $D$ be the intersection of lines $CA'$ and $AB$ and $E$ the intersection of lines $BA'$ and $AC$. Prove that the circumcentre of triangle $ADE$ is on the circumcircle of triangle $ABC$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $M$ be the midpoint of the side $AC$ of $ \\triangle ABC$. Let $P\\in AM$ and $Q\\in CM$ be such that $PQ=\\frac{AC}{2}$. Let $(ABQ)$ intersect with $BC$ at $X\\not= B$ and $(BCP)$ intersect with $BA$ at $Y\\not= B$. Prove that the quadrilateral $BXMY$ is cyclic.}\n\t\n\n\n\t\\prob{}{}{B}{Let a triangle $ ABC$ be given together with its internal angle bisector $ BD$ ($ D\\in BC$). The line $ BD$ intersects the circumcircle $ \\Omega$ of triangle $ ABC$ at $ B$ and $ E$. Circle $ \\omega$ with diameter $ DE$ cuts $ \\Omega$ again at $ F$. Prove that $ BF$ is the symmedian line of triangle $ ABC$.}\n\t\n\n\n\t\\prob{}{}{B}{$ \\Delta ABC$ is a triangle such that $ AB \\neq AC$. The incircle of $ \\Delta ABC$ touches $ BC, CA, AB$ at $ D, E, F$ respectively. $ H$ is a point on the segment $ EF$ such that $ DH \\bot EF$. Suppose $ AH \\bot BC$; prove that $ H$ is the orthocenter of $ \\Delta ABC$.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ ABC $ be a triangle and let $ P $ be a point on the angle bisector $ AD $, with $ D $ on $ BC $. Let $ E, F $ and $ G $ be the intersections of $ AP, BP $ and $ CP $ with the circumcircle of the triangle, respectively. Let $ H $ be the intersection of $ EF $ and $ AC $, and let $ I $ be the intersection of $ EG $ and $ AB $. Determine the locus of the intersection of $ BH $ and $ CI $ as $ P $ varies.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ D; E; F $ be points on the sides $ BC; CA; AB $ respectively, of $ \\triangle ABC $. Let $ P; Q; R $ be the second intersections of $ AD; BE; CF $ respectively, with the circumcircle of $ \\triangle ABC $.\n\n\tShow that \\[\\frac{AD}{PD}+\\frac{BE}{QE}+\\frac{CF}{RF}\\geq 9\\]}\n\n\n\n\t\\prob{}{}{B}{Points $ D $ and $ E $ lie on sides $ AB $ and $ AC $ of triangle $ ABC $ such that $ DE \\parallel BC $. Let $ P $ be an arbitrary point inside $ ABC $. The lines $ PB $ and $ PC $ intersect $ DE $ at $ F $ and $ G $, respectively. If $ O_1 $ is the circumcenter of $ PDG $ and $ O_2 $ is the circumcenter of $ PFE $, show that $ AP \\parallel O_1O_2 $.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ ABC $ be a triangle. A circle passing through $ A $ and $ B $ intersects segments $ AC $ and $ BC $ at $ D $ and $ E $, respectively. Lines $ AB $ and $ DE $ intersect at $ F $, while lines $ BD $ and $ CF $ intersect at $ M $. Prove that $ MF = MC $ if and only if $ MB \\cdot MD = MC^2 $.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ O $ and $ I $ be the circumcenter and incenter of triangle $ ABC $, respectively. Let $ \\omega_A $ be the excircle of triangle $ ABC $ opposite to $ A $; let it be tangent to $ AB, AC, BC $ at $ K, M, N $, respectively. Assume that the midpoint of segment $ KM $ lies on the circumcircle of triangle $ ABC $. Prove that $ O; N; I $ are collinear.}\n\t\n\n\n\t\\prob{}{}{B}{Let $ ABCD $ be a cyclic quadrilateral. Let $ AB \\cap CD = P $ and $ AD \\cap BC = Q $. 
Let the tangents from $ Q $ meet the circumcircle of $ ABCD $ at $ E $ and $ F $. Prove that $ P; E; F $ are collinear.}", "meta": {"hexsha": "abcbfbcb54d441019ec616c7ff98cc2f271ff877", "size": 16859, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "geo/basics.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "geo/basics.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geo/basics.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 66.1137254902, "max_line_length": 465, "alphanum_fraction": 0.6492081381, "num_tokens": 5661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5624168722940524}}
{"text": "\\lab{Wavelet Denoising and Compression}{Denoising and Compression}\n\n\\objective{This lab presents applications of the Discrete Wavelet Transform in\nimage denoising and compression.}\n\n\\section*{The PyWavelets Module}\nPyWavelets is a Python library for Wavelet Analysis. It provides convenient and\nefficient methods to calculate the one and two-dimensional discrete Wavelet\ntransform, as well as much more. Assuming that the package has been installed on\nyour machine, type the following to get started:\n\\begin{lstlisting}\n>>> import pywt\n\\end{lstlisting}\nPerforming the basic discrete Wavelet transform is very simple.\nBelow, we compute the one-dimensional transform for a sinusoidal signal.\n\\begin{lstlisting}\n>>> import numpy as np\n>>> f = np.sin(np.linspace(0,8*np.pi, 256))\n>>> fw = pywt.wavedec(f, 'haar')\n\\end{lstlisting}\nThe variable \\li{fw} is now a list of arrays, starting with the final approximation\nframe, followed by the various levels of detail coefficients, just like the output\nof the wavelet transform function that you coded in the previous lab.\nPlot the level 2 detail and verify that it resembles a blocky sinusoid.\n\\begin{lstlisting}\n>>> from matplotlib import pyplot as plt\n>>> plt.plot(fw[-2], linestyle='steps')\n>>> plt.show()\n\\end{lstlisting}\nWe can give alter the arguments to the \\li{wavedec} function to use different\nwavelets or obtain different levels of the wavelet transform. The second\npositional argument, as you will notice, is a string that gives the name of the\nwavelet to be used. We first used the Haar wavelet, with which you are already\nfamiliar. PyWavelets supports a number of different Wavelets, however, which you can\nlist by executing the following code:\n\\begin{lstlisting}\n>>> # list the available Wavelet families\n>>> print pywt.families()\n['haar', 'db', 'sym', 'coif', 'bior', 'rbio', 'dmey']\n>>> # list the available wavelets in the coif family\n>>> print pywt.wavelist('coif')\n['coif1', 'coif2', 'coif3', 'coif4', 'coif5']\n\\end{lstlisting}\nWe can also include optional arguments \\li{mode} and \\li{level} when calling the\n\\li{wavedec} function. Using these arguments, you can adjust the mode for dealing\nwith border distortion and the level of the Wavelet decomposition, respectively.\n\n\\begin{figure}[t]\n    \\includegraphics[width=\\linewidth]{dwt2.pdf}\n    \\caption{Level 1 Wavelet decomposition of the Lena image.\n    The upper left is the approximation frame, and the remaining\n    plots are the detail coefficients.}\n    \\label{fig:dwt2}\n\\end{figure}\n\nNow we illustrate how to perform a two-dimensional Wavelet transform using\nPyWavelets. We will work with the traditional Lena image, performing a\ntwo level wavelet transform using the Daubechies 4 Wavelet.\n\\begin{lstlisting}\n>>> import scipy.misc\n>>> lena = scipy.misc.lena()\n>>> lw = pywt.wavedec2(lena, 'db4', level=2)\n\\end{lstlisting}\nThe variable \\li{lw} is a list of tuples of arrays. The first entry of the list is\nsimply the level 2 approximation frame. The second entry of the list is a tuple of\nthe level 2 detail coefficients $LH$, $HL$, and $HH$ (in that order). 
The remaining\nentries of the list are tuples containing the lower level detail coefficients.\nThus, to plot the level 1 $HL$ detail coefficients, we can execute the following code:\n\\begin{lstlisting}\n>>> HL1 = lw[-1][1]\n>>> plt.imshow(np.abs(HL1), cmap=plt.cm.Greys_r, interpolation='none')\n>>> plt.show()\n\\end{lstlisting}\nThe output of this code should be a plot resembling the lower left plot given in Figure\n\\ref{fig:dwt2}.\n\nWe have only introduced a couple of the basic tools available in PyWavelets. There\nare of course many more functions and methods that facilitate a more comprehensive\nWavelet analysis. In the remainder of this lab, we will explore three particular \napplications of Wavelet analysis in the realm of image processing and compression.\n\n%\\section*{Edge Detection}\n%It is often useful to identify the edges of objects and figures\n%represented in images. The edge information can be used to classify images\n%and group them with other similar images (this is part of a field called\n%\\textit{computer vision}), to segment the image into component parts, to\n%sharpen blurry images, to filter out unnecessary details of the image,\n%and so forth. Of course, our human eyes are very adept at recognizing edges,\n%but enabling a computer to do the same is much more difficult. An edge can\n%be thought of as a discontinuity in the image or a region of high contrast\n%in either color or brightness. We can therefore leverage the high-frequency\n%detail coefficients of the wavelet transform to detect the edges. Execute the\n%following code:\n%\\begin{lstlisting}\n%>>> # calculate one level of wavelet coefficients\n%>>> coeffs = pywt.wavedec2(lena,'haar', level=1)\n%\\end{lstlisting}\n%\n%Note that the approximation coefficients are very close to the original\n%image, while the detail coefficients are much more sparse, and roughly\n%capture the edges in the image. In particular, the upper right coefficients\n%emphasize the vertical edges, the lower left coefficients emphasize the\n%horizontal edges, and the lower right coefficients emphasize the diagonal\n%edges.\n%\n%\\begin{problem}\n%Now zero out the approximation coefficients and use your inverse DWT\n%function to recreate the image. Plot its absolute value. This image is\n%a fairly good representation of the edges. If we add this to the original\n%image, we can increase the contrast at the edges (that is, make the dark\n%side darker, and the light side lighter). Do this, and plot the original\n%image side-by-side with the sharpened image. What do you notice? There\n%are many image-sharpening techniques, and those based on wavelets\n%are more sophisticated than what we have done here, but this gives the\n%basic idea.\n%\\end{problem}\n%the above section needs work, or maybe should be taken out completely.\n\n\\section*{Noise Removal}\nNoise in an image can be defined as unwanted visual artifacts that\nobscure the true image. Images can acquire noise from a variety of\nsources, including the camera, transmission, and image processing\nalgorithms. Noise can be completely random and incoherent (as in\nFigure \\ref{fig:incoherent}), or it can be coherent and display\nvisual patterns (Figure \\ref{fig:coherent}). 
In this section, we will\nfocus on reducing a particular type of random noise in images, called\n\\textit{Gaussian white noise}.\n\n\\begin{figure}[t]\n\\minipage{0.49\\textwidth}\n    \\includegraphics[width=\\linewidth]{phantom_random.pdf}\n    \\caption{The Phantom image with incoherent noise}\n    \\label{fig:incoherent}\n\\endminipage\\hfill\n\\minipage{0.49\\textwidth}\n    \\includegraphics[width=\\linewidth]{phantom_coherent.pdf}\n    \\caption{The Phantom image with coherent noise}\n    \\label{fig:coherent}\n\\endminipage\n\\end{figure}\n\nAn image that is distorted by Gaussian white noise is one in which\nevery pixel has been perturbed by a small amount, such that the\nperturbations are normally distributed. We can easily add such noise\nto an image using the \\li{np.random.normal} function.\n\n\\begin{lstlisting}\n>>> noisyLena = lena + np.random.normal(scale=20, size=lena.shape)\n>>> plt.imshow(noisyLena, cmap=plt.cm.Greys_r)\n>>> plt.show()\n\\end{lstlisting}\n\nGiven an image with Gaussian white noise, how do we go about reducing\nthe noise level? Our approach will be based on the idea of thresholding.\nIt turns out that images are often sparse in the wavelet basis,\nparticularly in the high-frequency details. The Gaussian noise, however,\nis very high frequency, and thus its wavelet transform will be\nconcentrated in high-frequency wavelet coefficients (of magnitude\nroughly proportional to the standard deviation of the noise). We can therefore\nreduce the noise while preserving the true image by shrinking the\ndetail coefficients via hard or soft thresholding.\n\nGiven a positive threshold value $\\tau$, hard thresholding sets\nevery wavelet coefficient whose magnitude is less than $\\tau$ to\nzero, while leaving the remaining coefficients untouched. Soft\nthresholding also zeros out all coefficients of magnitude less than\n$\\tau$, but in addition maps every other coefficient $\\beta$ to\n$\\beta - \\tau$ if $\\beta > 0$ or $\\beta + \\tau$ if $\\beta < 0$.\n\nImplementing these simple thresholding algorithms in Python is\nstraightforward, but PyWavelets already provides this functionality.\nThe following code gives an example.\n\n\\begin{lstlisting}\n>>> A = np.arange(-4,5).reshape(3,3)\n>>> A\narray([[-4, -3, -2],\n       [-1,  0,  1],\n       [ 2,  3,  4]])\n>>> pywt.thresholding.hard(A,1.5)\narray([[-4, -3, -2],\n       [ 0,  0,  0],\n       [ 2,  3,  4]])\n>>> pywt.thresholding.soft(A,1.5)\narray([[-2.5, -1.5, -0.5],\n       [ 0. ,  0. ,  0. ],\n       [ 0.5,  1.5,  2.5]])\n\\end{lstlisting}\n\nOnce the coefficients have been thresholded, we take the inverse\nwavelet transform to recover the denoised image. This can be done\nby calling the \\li{waverec2} function, providing the list of Wavelet\ncoefficients as well as the name of the desired Wavelet as arguments.\nThe threshold value is generally a function of the variance of the noise,\nand in real situations, we do not know what this variance is. In fact,\nnoise variance estimation in images is a research area in its own\nright, but this goes beyond the scope of this lab, and so we will\nassume that we already have a decent estimate of the variance.\n\n\\begin{figure}[t]\n    \\includegraphics[width=\\linewidth]{denoise.pdf}\n    \\caption{Noisy Lena (left), denoised using hard thresholding (center), \n    and denoised using soft thresholding (right).}\n    \\label{fig:denoise}\n\\end{figure}\n\n\\begin{problem}\nWrite functions that implement the hard and soft thresholding\ntechniques. The inputs should be a list of wavelet coefficients\nin the usual form, as well as the threshold value. The output\nshould be the thresholded wavelet coefficients (also in\nthe usual form). Remember that we only want to threshold the\ndetail coefficients, and not the approximation coefficients.\nYou should therefore leave the first entry of the input\ncoefficient list unchanged.\n\\end{problem}\n
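\nFor reference, one possible sketch of such functions is given below for the one-dimensional coefficient format (a list of arrays); the function names are our own, and the two-dimensional format would additionally require applying the same maps inside each tuple of detail arrays.\n\\begin{lstlisting}\nimport numpy as np\n\ndef hard_threshold(coeffs, tau):\n    # leave the approximation frame (first entry) untouched\n    new_coeffs = [coeffs[0]]\n    for detail in coeffs[1:]:\n        new_coeffs.append(np.where(np.abs(detail) < tau, 0., detail))\n    return new_coeffs\n\ndef soft_threshold(coeffs, tau):\n    new_coeffs = [coeffs[0]]\n    for detail in coeffs[1:]:\n        # shrink magnitudes by tau, zeroing anything below tau\n        new_coeffs.append(np.sign(detail) * np.maximum(np.abs(detail) - tau, 0.))\n    return new_coeffs\n\\end{lstlisting}\n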
\n\\begin{problem}\nCreate a noisy version of the Lena image by adding Gaussian\nwhite noise of mean 0 and standard deviation $\\sigma = 20$ (i.e. \\li{scale=20}).\nCompute four levels of the wavelet coefficients using the Daubechies 4 Wavelet, \nand input these into your\nthresholding functions (with $\\tau = 3\\sigma$ for the hard threshold,\nand $\\tau = 3\\sigma/2$ for the soft threshold). Reconstruct the\ntwo denoised images, and then plot these together alongside the\nnoisy image. Your output should match Figure \\ref{fig:denoise}.\n\nWhat do you notice? How does lowering or raising the\nthreshold affect the reconstructed images? What happens if you use\na different Wavelet?\n\\end{problem}\n\n\\section*{Image Compression}\nWe now turn to the problem of image compression. Explicitly saving\nthe value of every pixel in an image can be very costly in both\nstorage and transmission, and numerous image compression techniques\nhave been developed over the years to deal with this problem.\nTransform methods have long played an important role in these\ntechniques; the popular JPEG image compression standard is based on\nthe discrete cosine transform. Starting from the early 1990's, much\nresearch has gone into compression methods using the discrete wavelet\ntransform, with great success. The JPEG2000 compression standard\nand the FBI Fingerprint Image database, along with other systems,\ntake the wavelet approach.\n\nThe general framework for compression is fairly straightforward. First,\nthe image to be compressed undergoes some form of preprocessing (this\ncan include subtracting out its mean, tiling the image, or perhaps\nnothing at all). Next, the wavelet coefficients are computed using some\nspecially constructed wavelet (JPEG2000 uses either the\nCohen-Daubechies-Feauveau 9/7 or 5/3 wavelet) and then \\textit{quantized},\n a process that we will explain shortly. The quantized coefficients are\n then grouped in a particular way and passed through an entropy encoder\n (such as Huffman coding, run length coding, or arithmetic coding). This\n coding step comes from the realm of information theory, and we will not\n worry about it in this lab. What you have left is a compact stream of bits\n that can then be saved or transmitted much more efficiently than the\n original image. All of the above steps are invertible, allowing us to\n reconstruct the image from the bitstream.\n\n The step in this process that we will focus on is quantization. Put simply,\n quantization is a process whereby the coefficients are converted into\n integers. If the coefficients are floating-point numbers, then this\n process introduces some loss of precision, and we call this \\textit{\n lossy compression}. In the situation where all the coefficients are already\n integers, it is possible to compress the image without any loss of precision,\n and this is called \\textit{lossless compression}. Quantization can be\n performed in a variety of ways, and we will explore one particular method\n called a \\emph{uniform null-zone quantizer}. 
Given a coefficient $x$, we assign\nto it an integer $q$ given by\n\\begin{equation*}\nq =\n \\begin{cases}\n   \\lceil x / \\delta - t/2 \\rceil, &  x \\geq 0\\\\\n   \\lfloor x / \\delta + t/2 \\rfloor, & x < 0\n \\end{cases}\n\\end{equation*}\nwhere $1 \\leq t \\leq 2$ and $\\delta > 0$ are adjustable parameters.\nThe inverse process, called de-quantization, consists of recovering\nthe coefficient $y$ from the quantized value $q$ via the equation\n\\begin{equation*}\n y =\n  \\begin{cases}\n   (q - 1/2 + t/2)\\delta, & q > 0\\\\\n   (q + 1/2 - t/2)\\delta, & q < 0\\\\\n   0,                    & q = 0\n  \\end{cases}\n \\end{equation*}\nWhat we are essentially doing is mapping all wavelet coefficients that\nfall in the interval $[-\\delta,\\delta]$ to 0 and all wavelet coefficients\nin the interval $[j\\delta,(j+1)\\delta]$ to $(2j+1)\\delta/2$ for integers\n$j \\geq 1$ and $j \\leq -2$. This greatly reduces the number of distinct\ncoefficient values (indeed, most of the coefficients are mapped to\nzero), allowing us, at the cost of some precision, to store less\ninformation. The larger we choose $\\delta$, the more compression we\ncan achieve, albeit with a correspondingly larger loss in precision.\n\\begin{problem}\nWrite functions \\li{quantize} and \\li{dequantize} based on the discussion above.\nIn both cases, the inputs should be a list of wavelet coefficients in\nstandard form, the $\\delta$ parameter, and the $t$ parameter with default\nvalue of 2. The functions should return the quantized (respectively,\nde-quantized) list of wavelet coefficients.\n\nFor the Lena image, calculate the complete set of wavelet coefficients and then\nquantize and de-quantize the coefficients, and reconstruct the image.\nDo this for a few different values of $\\delta$ (with $t=2$), and observe how\nthe image is distorted. Try using different Wavelets. Keep in mind that there are\nspecially designed Wavelets used in image compression that are not available\nin PyWavelets, so the distortion will be greater than what would be tolerated in\nreal-life settings.\n\\end{problem}\n
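\nAs a point of reference, one way these maps can be implemented is sketched below (assuming the one-dimensional coefficient format; for the two-dimensional format the same maps would be applied inside each tuple of detail arrays). The coefficient value $x=0$ is sent to $q=0$ explicitly, since the null zone contains it.\n\\begin{lstlisting}\nimport numpy as np\n\ndef quantize(coeffs, delta, t=2.):\n    def q(x):\n        pos = np.ceil(x/delta - t/2.)\n        neg = np.floor(x/delta + t/2.)\n        # null zone: x == 0 maps to q == 0\n        return np.where(x > 0, pos, np.where(x < 0, neg, 0.))\n    return [q(c) for c in coeffs]\n\ndef dequantize(coeffs, delta, t=2.):\n    def y(q):\n        pos = (q - 0.5 + t/2.)*delta\n        neg = (q + 0.5 - t/2.)*delta\n        return np.where(q > 0, pos, np.where(q < 0, neg, 0.))\n    return [y(c) for c in coeffs]\n\\end{lstlisting}\n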
YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5624168665034538}}
{"text": "\\documentclass[12pt,a4paper,notitlepage]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\input magstyle.sty\n\n% Title Page\n\\title{Calculation of Demagnetisation Field using domain truncation}\n\\author{Gabriel Balaban for the FinMag project at the University of Southampton}\n\\begin{document}\n\\maketitle\n\\abstract{}\nIn this document the mathematics of the python modules \\texttt{prob\\_testcases.py} and \\texttt{solver\\_nitsche.py} are explained.\nThe demagnetisation field is found using an approach that truncates an infinite domain and uses two scalar potentials.\n\n\\newpage\n\\section{The Demagnetisation Field inside a Magnetic Body} \nThe demagnetisation field $\\dmag$ related to a magnetic body is generated by its magnetisation field $\\magn$ ,\n and tends to oppose it.In this context the region in space occupied by the magnetic object of interest \n(usually a ferromagnetic material) is denoted by  $\\omM$.\nThe complement of this region $\\omV$ is considered to be a vacuum, which extends to infinity and over which\nthe demagnetisation field\n$\\dmag $ is also defined.\nAccording to basic electrodynamics the two fields are related by\n\n\\begin{equation}\\label{dmagdef} \\nabla \\cdotp \\dmag = - \\nabla \\cdotp \\magn \\  \\mbox{in} \\ \\omM \\end{equation}\n\n\\noindent The assumption is also made that there are no free currents and that the electric displacement field does not change over time so that\n\\begin{equation}\\label{nocurl}  \\nabla \\times \\dmag = 0\\end{equation}\n\n\\noindent This means that we can introduce a magnetic potential function $\\phi$, such that\n\n\\[ \\dmag = - \\nabla \\phi \\  \\mbox{in} \\ \\ \\mathbb{R}^3 \\]\n\n\\noindent which means that \n\n\\[ - \\nabla^2 \\phi = -\\nabla \\cdotp \\magn \\ \\ \\mbox{in} \\ \\omM \\]\n\n\\noindent so that our demagnetisation field can be obtained by solving a Poisson problem.\n\n\\section{Poisson Problem for the Scalar Potential}\nThe poisson problem for the scalar potential $\\phi$ reads\n\\begin{equation}\\label{phidef} \n\\left\\{\n\\begin{array}{lr}\n- \\nabla^2 \\phi = - \\nabla \\cdotp \\magn  & \\mbox{in } \\ \\omM \\\\\n- \\nabla^2 \\phi = 0 & \\mbox{in } \\ \\omV \n\\end{array}\n\\right.\n\\end{equation}\n\\noindent with corresponding boundary conditions\n\n\\begin{subequations}\n\\begin{align}\n&\\ \\left[ \\frac{\\partial \\phi}{\\partial n} \\right] = n \\cdotp \\magn  &\\mbox{ on }  \\domM \\label{jumpbc} \\\\\n&\\lim_{\\mvec{r} \\in \\omM \\rightarrow \\domM} \\phi =  \\lim_{\\mvec{r} \\in \\omV \\rightarrow \\domM} \n\\phi & \\mbox{on } \\domM \\label{contbc} \\\\\n&\\phi \\rightarrow 0 \\ & \\mbox{as} \\ |\\rvec| \\in \\omV \\rightarrow \\infty \\label{openbc}\n\\end{align}\n\\end{subequations}\n\n\\noindent Where $n$ denotes the unit outward normal. The prescribed jump in the normal derivative (denoted by $[ ]$) is a \nconsequence of the divergence theorem applied to the definition of $\\dmag$, equation~(\\ref{dmagdef}).\nUsing equation~(\\ref{nocurl}) and Stokes' theorem we see that the tangential component of $\\nabla \\phi$ or $\\dmag$ is\ncontinuous over $\\domM$. Which means that the scalar potential only differs by a constant as it crosses $\\domM$.\nThis constant is set to 0, which means that $\\phi$ is continuous over $\\domM$. 
Finally we expect the demag field $\\dmag$ \nto decay to 0 far away from the magnetic body, which results in the open boundary condition\n$| \\phi | \\rightarrow 0  \\ \\mbox{as} \\ |\\rvec| \\in \\omV \\rightarrow \\infty$ for the potential $\\phi$. \n\n\n\\section{Truncation of the Domain}\nSince a computer cannot simulate an infinite domain there needs to be some way of dealing with the open boundary condition~(\\ref{openbc})\n\\[\\phi \\rightarrow 0 \\ \\mbox{as} \\ \\mvec{r} \\in \\omV \\rightarrow \\infty \\]\nHere the domain $\\omV$ is truncated, using the rule of thumb that it should be about 5 times as large as $\\omM$ in order to get\ndecent results. \n\n\\section{The Discontinuous Galerkin formulation for $\\phi$}\\label{dgsection}\nThe discontinuous Galerkin formulation is a finite element discretization scheme where the\nsolution is allowed to be discontinuous across element boundaries. The solution space for\n$\\phi$ is restricted to\n\\begin{equation}\\label{dgdef} \n V = \\{ v \\in H^1 (T) \\mbox{ for all } T \\subset \\Omega \\mbox{ s.t. } v \\mbox{ may be discontinuous over } \\ \\partial T \\}\n\\end{equation}\n\n\\noindent where $\\Omega$ denotes the mesh and $T$ a finite element domain in $\\Omega$. More detail regarding this\ndefinition is given in Appendix~\\ref{appdgspace}. After integrating\nequation~(\\ref{phidef}) by parts we obtain the formulation.\n\\\\\n\\\\\nFind $\\phi \\in V$ such that\n\\begin{equation}\\label{dgform}  \n \\sum_T \\Big( \\int_T \\nabla \\phi \\cdotp \\nabla v \\ dx - \\int_{\\partial T} \\partial_n \\phi \\  v \\ dS \\Big) = \\sum_T \\int_T fv \\ dx\n\\end{equation}\n \\\\\nholds for all test functions $v \\in V $. Here $f(\\rvec) = - \\nabla \\cdotp \\magn$ for $\\rvec \\in \\omM$ and $f(\\rvec) = 0$\nfor $\\rvec \\in \\omV$.\n\n\\section{Nitsche's Method for $\\phi$}\nContinuing with the discontinuous Galerkin formulation we reintroduce the notation for the jump\nof a function $A$ across a boundary with predefined '+' and '--' sides. \n\\[ [A] = A^+ - A^- \\]\nSimilarly the average is defined to be\n\\[ <A> = \\frac{1}{2} (A^+ + A^-) \\]\n\\\\\nA simple calculation gives the following relation for the jump of the product of two functions.\n\\[ [AB] = [A]<B> + <A>[B] \\]\nwhich we can now use on the boundary term of equation~(\\ref{dgform}). The sum over boundaries of each element \nvisits every facet twice, so we can instead replace it with a sum over each facet $\\sum_S$, taking into account that\nwe can have different values on both sides of a facet. \n\\begin{eqnarray*}\n- \\sum_T \\int_{\\partial T} \\partial_n \\phi \\  v \\ dS &=&  - \\sum_S \\int_S [\\nabla \\phi v ] \\cdotp n \\ dS \\\\\n&=&  - \\sum_S \\int_S ( [\\nabla \\phi] <v> + <\\nabla \\phi> [v]) \\cdotp n \\ dS\n\\end{eqnarray*}\nHere $n$ has been fixed in one of the possible directions along a facet. \n\nNitsche's method applied to our problem~(\\ref{dgform}) consists of adding two additional terms that are equal to 0 \nfor the true solution, but which are allowed to carry a tolerable value for the discrete solution.\nThe first one is\n\\[ <\\partial_n v> [\\phi] \\]\nwhich is added for symmetry and the next one is\n\\[\\gamma h^{-1} [\\phi][v] \\]\nwhich is added for stability. Here $h$ denotes the smallest diameter of an element domain in the mesh, and\n$\\gamma$ is a parameter that we choose. Experiments with the code have shown that a high $\\gamma$ favours better\ncontinuity at the cost of matching the normal derivative jump condition, and vice versa.
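\nAs an illustration of how terms like these can be assembled in practice, the sketch below uses the legacy FEniCS/DOLFIN form language, which we assume here to be the kind of toolkit that \\texttt{solver\\_nitsche.py} builds on; the mesh, parameter values and source term are purely illustrative, and the magnetisation terms on $\\domM$ as well as all exterior boundary handling are omitted for brevity.\n\\begin{verbatim}\nfrom dolfin import *\n\nmesh = UnitCubeMesh(10, 10, 10)   # illustrative stand-in for the truncated domain\nV = FunctionSpace(mesh, 'DG', 1)  # discontinuous Lagrange elements\nphi, v = TrialFunction(V), TestFunction(V)\nn = FacetNormal(mesh)\nh = CellSize(mesh)                # called CellDiameter in later DOLFIN versions\ngamma = Constant(10.0)            # the stability parameter discussed above\nf = Constant(0.0)                 # stand-in for -div(M) inside the magnetic core\n\n# consistency, symmetry and penalty terms on the interior facets\na = inner(grad(phi), grad(v))*dx \\\n    - inner(avg(grad(phi)), jump(v, n))*dS \\\n    - inner(avg(grad(v)), jump(phi, n))*dS \\\n    + gamma/avg(h)*inner(jump(phi, n), jump(v, n))*dS\nL = f*v*dx\n\nA, b = assemble(a), assemble(L)   # assemble only; no jump data applied here\n\\end{verbatim}\n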
\n\nWe can now formulate the Nitsche formulation for $\\phi$. Since we are only interested in applying a jump condition\n$ [\\nabla \\phi]  \\cdotp n =  M \\cdotp n $ on the magnetic core boundary $\\domM$, we can sew two continuous functions\ntogether with Nitsche's method to get the result we need. Let $\\phi = (\\phi_0,\\phi_1)$ where\n$\\supp(\\phi_1) \\subset \\omM$ and $\\supp(\\phi_0) \\subset \\omV$. The Nitsche variational problem is to find $\\phi = (\\phi_0,\\phi_1)$ such that\n\n\\begin{align*}\n&\\int_{\\omM} \\nabla \\phi_1 \\cdotp \\nabla v_1 \\ dx + \\int_{\\omV} \\nabla \\phi_0 \\cdotp \\nabla v_0 \\ dx \\\\\n&- \\int_{\\domM} <\\nabla \\phi> [v]  + <\\partial_n v> [\\phi] - \\gamma h^{-1} [\\phi][v] \\ dS \\\\\n&= \\int_{\\domM} M  \\cdotp n <v> \\ dS - \\int_{\\omM} \\nabla \\cdotp M v_1 \\ dx\n\\end{align*}\n\nThe global solution $\\phi_{tot}$ is then obtained by adding together $\\phi_1$ and $\\phi_0$ and then halving the value\nof the degrees of freedom of $\\phi_{tot}$ on $\\domM$.\n\n\n\\section{The Fredkin-Koehler approach} \nThe Fredkin-Koehler approach splits the magnetic potential $\\phi$ into two parts $\\phi_1$ and $\\phi_2$, \n\\[ \\phi = \\phi_1 + \\phi_2 \\]\n\\noindent in order to get $\\phi_2$ using the hybrid boundary/finite element method (FEM/BEM).\n\n\\noindent $\\phi_1$ is defined to be the solution of the Neumann problem:\n\n\\[ \n\\left\\{\n\\begin{array}{lr}\n- \\nabla^2 \\phi_1 = - \\nabla \\cdotp \\magn  & \\mbox{in } \\ \\omM \\\\\n \\frac{\\partial \\phi_1}{\\partial n}  = -n \\cdotp \\magn & \\mbox{on } \\  \\partial \\omM \\\\\n \\phi_1 = 0 & \\mbox{on} \\ \\ \\omV\n\\end{array}\n\\right. \n\\]\n\n\n\\noindent and $\\phi_2$ is defined to be the solution of the Laplace problem\n\\[ \n\\left\\{\n\\begin{array}{lr}\n- \\nabla^2 \\phi_2 = 0 & \\mbox{in } \\ \\omM \\cup \\omV \\\\\n\\mbox{jump}(\\phi_2) = \\phi_1 & \\mbox{on} \\ \\domM \\\\\n\\ \\phi_2 \\rightarrow 0 & \\ \\mbox{as} \\ \\mvec{r} \\in \\omV \\rightarrow \\infty\n\\end{array}\n\\right. \n\\]\n\n\n\\noindent Adding together the two potentials $\\phi_1$ and $\\phi_2$ should result in the solution to the truncated version of\nproblem~(\\ref{phidef}). \n\n\\section*{Appendix}\n\\appendix\n\\section{Explanation of the definition of the discontinuous finite element space (\\ref{dgdef})}\\label{appdgspace}\nIn section \\ref{dgsection} describing the discontinuous FE space the following definition is made:\n\\[\n V = \\{ v \\in H^1 (T) \\mbox{ for all } T \\subset \\Omega \\mbox{ s.t. } v \\mbox{ may be discontinuous over } \\ \\partial T \\}\n\\]\nHere $H^1(T)$ is the Sobolev space \n\\[ H^1(T) = \\left\\{ v \\in L^2(T) \\ : \\ \\frac{\\partial v}{\\partial x_i} \\in L^2(T) \\ \\forall \\\ni \\in \\{1,\\ldots,\\dim(\\Omega)\\}  \\right\\} \\]\n\\\\\nwhere the partial derivatives are to be understood in the weak sense.\nThe possible discontinuities along element boundaries of a function $v \\in V$ mean that a weak derivative cannot be \ndefined globally. However, when we restrict $v$ to a single finite element domain it becomes continuous, which \nmeans we can define the global discontinuous function space $V$ by patching together local continuous function spaces. 
\n\n\n\n\\end{document}          \n", "meta": {"hexsha": "f852032ce4b0fa12c287236ba5357f9f20e9c7a0", "size": 9425, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/demag/demagnetisationfield.tex", "max_stars_repo_name": "davidcortesortuno/finmag", "max_stars_repo_head_hexsha": "9ac0268d2c0e45faf1284cee52a73525aa589e2b", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-03-24T07:43:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T10:42:27.000Z", "max_issues_repo_path": "doc/demag/demagnetisationfield.tex", "max_issues_repo_name": "davidcortesortuno/finmag", "max_issues_repo_head_hexsha": "9ac0268d2c0e45faf1284cee52a73525aa589e2b", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2018-03-26T15:08:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-10T16:11:14.000Z", "max_forks_repo_path": "doc/demag/demagnetisationfield.tex", "max_forks_repo_name": "davidcortesortuno/finmag", "max_forks_repo_head_hexsha": "9ac0268d2c0e45faf1284cee52a73525aa589e2b", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2018-04-09T11:50:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-10T09:23:25.000Z", "avg_line_length": 48.5824742268, "max_line_length": 144, "alphanum_fraction": 0.7088594164, "num_tokens": 2974, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.5624168592177227}}
{"text": "\\graphicspath{{Chapter4/Figs/simulations/}{Chapter4/Figs/scrna/}{Chapter4/Figs/scmet/}{Chapter4/Figs/scnmt/}}\n\n\\section{Model description}\n\nIn MOFA+ we introduce two key novelties, both in the model aspect and in the inference scheme. In the model side we introduce a principled approach for modelling multi-omic data set where the samples are structured into non-overlapping groups, where groups typically correspond to batches, donors or experimental conditions. In the inference side we implement a stochastic inference algorithm to improve scalability and enable inference with large single-cell datasets.\n\nFormally, we generalise the model to a disjoint set of $M$ input views (i.e. groups of features) and $G$ input groups (i.e. groups of samples). The data is factorised according to the following model:\n\\begin{equation} \\label{mofa_master_equation}\n\t\\mathbf{Y}^{m}_{g} = \\mathbf{Z}_{g} (\\mathbf{W}^{m})^T + \\bepsilon^{m}_{g}\n\\end{equation}\nwhere $\\bfZ_{g} \\in \\R^{N_{g} \\times K}$ are a set of $G$ matrices that contain the factor values for the $g$-th group and $\\bfW^{m} \\in \\R^{D_m \\times K}$ are a set of $M$ matrices that define the feature weights for the $m$-th view. $\\bepsilon^{m}_{g} \\in \\R^{D_m}$ captures the residuals, or the noise for each feature in each group. Notice that if $G=1$ then the model simplifies to the MOFA framework presented in Chapter 3. \n\nIt is important to get the intuition for the multi-group formulation right. In the factor analysis setting, the aim is not to capture differential changes in \\textit{mean} levels between the groups but rather to exploit the covariation patterns of the features to identify which sources of variability (i.e. latent Factors) are consistently found across multiple groups and which ones are exclusively found within a single group. This is symmetric to the interpretation of the multi-view framework in MOFA v1: the absolute levels of the features are not compared across views, only the covariation patterns are of interest. To achieve this, the features are centered per view and also per group before fitting the model. \\Cref{fig:mofa2_overview} summarises the MOFA+ pipeline.\n\nAs in MOFA v1, the linearity assumptions leads to an interpretable latent space that be visualised and employed for a range of downstream analyses, including clustering, inference of non-linear differentiation trajectories, denoising and feature selection, among others. The most important extension is the generalisation of the variance decomposition analysis, where a value of variance explained per view and group is obtained for every factor. For example, imagine that Factor 1 in \\Cref{fig:mofa2_overview}b corresponds to cell cycle variation, the variance decomposition analysis indicates that cell cycle is a driver of cell-to-cell heterogeneity largely in views 2 and 3, but with only minor influence in view 1. Also, this effect is manifested in groups 1 and 2, but not in group 3. 
\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.00\\linewidth]{mofa2_overview}\n\t\\caption[]{\\textbf{Multi-Omics Factor Analysis v2 (MOFA+) provides an unsupervised framework for the integration of multi-group and multi-view (single-cell) data.}\\\\\n\t(a) Model overview: the input data consists of multiple datasets structured into M views and G groups. Views consist of non-overlapping sets of features that often represent different assays. Analogously, groups consist of non-overlapping sets of samples that often represent different conditions or experiments. Missing values are allowed in the input data. MOFA+ exploits the covariation between the features to learn a low-dimensional representation of the data ($\\bfZ$) defined by $K$ latent factors that capture the global sources of variability. The weights ($\\bfW$) provide a measure of feature importance. Model inference is performed using (GPU-accelerated) stochastic variational inference. \\\\\n\t(b) The trained MOFA+ model can be queried for a range of downstream analyses: variance decomposition, inspection of feature weights, visualisation of factors and other applications such as clustering, inference of non-linear differentiation trajectories, denoising and feature selection.\n\t}\n\t\\label{fig:mofa2_overview}\n\\end{figure}\n\n\n\\subsection{Model priors and likelihood}\n\n\\subsubsection{Prior on the weights}\n\nThis remains the same as in MOFA v1. We adopt a two-level sparsity prior with an Automatic Relevance Determination (ARD) prior per factor and view, and a (reparametrised \\cite{Titsias2011}) feature-wise spike-and-slab prior:\n\\begin{equation}\n\tp(\\hat{w}_{dk}^m,s_{dk}^m) = \\Ndist{\\hat{w}_{dk}^{m}}{0, 1/\\alpha_{k}^{m}}  \\text{Ber}(s_{dk}^{m} \\,|\\,\\theta_{k}^{m})\n\\end{equation}\nwith the corresponding conjugate priors for $\\theta$ and $\\alpha$:\n\\begin{align}\n\tp(\\theta_k^m) &= \\Bdist{\\theta_k^m}{a_0^\\theta,b_0^\\theta}\\\\\n\tp(\\alpha_k^m) &= \\Gdist{\\alpha_k^m}{a_0^\\alpha, b_0^\\alpha}\n\\end{align}\n\nAs discussed in Chapter 3, the aim of the ARD prior is to encourage sparse associations between factors and views, such that the weight vector $\\bfw_{:,k}^m$ is shrunk to zero if the factor $k$ does not explain any variation in view $m$. The aim of the spike-and-slab prior is to push individual weights to zero to yield a more interpretable solution.\n\n\\subsubsection{Prior on the factors}\n\nIn MOFA v1 we adopted an isotropic Gaussian prior which assumes an unstructured latent space \\textit{a priori}:\n\\begin{equation}\n\tp(z_{nk}) = \\Ndist{z_{nk}}{0,1}\n\\end{equation}\nThis is the assumption that we want to break. Following the same logic as for the weights, the integration of multiple groups of samples requires a flexible prior distribution that defines the existence of non-overlapping groups, such that the model encourages sparse linkages between factors and groups. 
To formalise this intuition we simply need to extrapolate the sparsity prior from the weights to the factors:\n\\begin{align}\n\tp(\\hat{z}_{nk}^g,s_{nk}^g) &= \\mathcal{N} (\\hat{z}_{nk}^g \\,|\\, 0, 1/\\alpha_k^g)\\, \\text{Ber}(s_{nk}^g \\,|\\,\\theta_k^g) \\\\\n\tp(\\theta_k^g) &= \\Bdist{\\theta_k^g}{a_0^\\theta,b_0^\\theta} \\\\\n\tp(\\alpha_k^g) &= \\Gdist{\\alpha_k^g}{a_0^\\alpha, b_0^\\alpha},\n\\end{align}\nwhere $g$ is the index of the sample groups.\n% Notice that the spike-and-slab prior is introduced for completeness but is not necessarily required, and can be disabled by fixing $\\E[\\theta_k^g]=1$.\n\n\\subsubsection{Prior on the noise}\n\nThe variable $\\bepsilon$ captures the residuals, or the noise, which is assumed to be normally distributed and heteroskedastic. In MOFA v2 we generalise the noise to have an estimate per feature and per group. This is important to capture the case where some features may be highly variable in one group but not variable in other groups.\n\n\\begin{align}\n\tp(\\bepsilon^{m}_{g}) &= \\Ndist{\\bepsilon^{m}_{g}}{0,(\\btau^{m}_{g})^{-1}\\I} \\\\\n\tp(\\tau^{m}_{g}) &= \\prod_{d=1}^{D_m} \\Gdist{\\tau^{m}_{g}}{a_0^\\tau, b_0^\\tau}\n\\end{align}\n% \\begin{align}\n% \tp(\\epsilon^{m,g}_d) &= \\Ndist{\\epsilon^{m,g}_d}{0,1/\\tau_d^{m,g}} \\\\\n% \tp(\\tau_{d}^{m,g}) &= \\Gdist{\\tau_{d}^{m,g}}{a_0^{\\tau}, b_0^{\\tau}}\n% \\end{align}\n\nIn addition, as in MOFA v1, non-Gaussian noise models can also be defined, but unless otherwise stated, we will always assume Gaussian residuals.\n\n% \\subsubsection{Likelihood}\n% Altogether, this results in the following likelihood:\n% \\begin{equation}\n% \tp(\\bfY|\\bfW,\\bfZ,\\bTau) = \\prod_{m=1}^{M} \\prod_{g=1}^{G} \\Ndist{\\mathbf{Y}^{m}_{g}}{\\mathbf{Z}_{g} (\\mathbf{W}^{m})^T,(\\btau^{m}_{g})^{-1} \\I}\n% \t% p(\\bfY|\\bfW,\\bfZ,\\bTau) = \\prod_{m=1}^{M} \\prod_{g=1}^{G} \\prod_{d=1}^{D_m} \\prod_{n=1}^{N} \\Ndist{y_{nd}^{mg}}{\\bfz_{ng}^T\\bfw_{d}^{mg},1/\\tau_{d}^{mg}}\n% \t% p(y_{nd}^m) = \\Ndist{y_{nd}^m}{\\bfz_{n,:}\\bfw_{d,:}^{mT},1/\\tau_d^m},\n% \\end{equation}\n\n\\subsubsection{Graphical model}\n\nIn summary, the updated model formulation introduces symmetric two-level sparsity priors in both the weights and the factors. The corresponding graphical model is shown below:\n\\begin{figure}[H]\n\t\\centering\t\n\t\\input{graphical_models/mofa2}\n\t\\caption{\\textbf{Graphical model for MOFA+.}\\\\\n\tThe white circles represent hidden variables that are inferred by the model, whereas the grey circles represent the observed variables. There are a total of five plates, each one representing a dimension of the model: $M$ for the number of views, $G$ for the number of groups, $K$ for the number of factors, $D_m$ for the number of features in the $m$-th view and $N_g$ for the number of samples in the $g$-th group.\n\t}\n\t\\label{fig:MOFA2}\n\\end{figure}\n\n\\subsubsection{Guidelines on the definition of views and groups} \\label{section:mofa2_guidelines_views_groups}\n\n\\begin{itemize}\n\t\\item \\textbf{Views}: views typically correspond to different assays, but there is flexibility in their definition and the user can explore different definitions of views. For example, one could divide the RNA expression data into three views corresponding to mRNA, rRNA and miRNA. 
Similarly, one can quantify DNA methylation and chromatin accessibility data over different genomic contexts (enhancers, promoters, etc.).\n\n\t\\item \\textbf{Groups}: groups are generally motivated by the experimental design, but the user can also explore data-driven formulations. There is no \\textit{right} or \\textit{wrong} definition of groups; depending on the hypothesis to be explored, some definitions will be more useful than others.\n\\end{itemize}\n\n\\subsubsection{Model selection} \\label{section:mofa2_model_selection}\n\nAs discussed in \\Cref{section:mofa_robustness}, the inference procedure depends on the parameter initialisation. When using random initialisation, the Factors can vary between different model instances and a model selection step is advised. I realised that this was not a user-friendly solution and that it requires a lot of computational resources when applying the model to large datasets. To simplify model training in MOFA+ we initialise the Factors using the principal components from the concatenated data set. In practice, we observe faster convergence times and better ELBO estimates when initialising with the PCA solution (\\Cref{fig:mofa2_init}).\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{mofa2_init}\n\t\\caption[]{\n\t\\textbf{Comparison of PCA and Random initialisation in MOFA}.\\\\ Data was simulated from the generative model with the following dimensions: $M=2$ modalities, $G=2$ groups, $D=1000$ features, $N=1000$ samples and $K=10$ factors. The dashed lines mark the iteration at which the model converged.\n\t}\n\t\\label{fig:mofa2_init}\n\\end{figure}\n\n% COPIED\n% \\subsection{Solving the rotational invariance problem} \\label{section:mofa2_rotational_invariance}\n\n% Conventional Factor Analysis is invariant to rotations in the latent space \\cite{Zhao2009}. To demonstrate this property, let us apply an arbitrary rotation to the weights and the factors, specified by the rotation matrix $\\bfR \\in \\R^{K \\times K}$:\n% \\begin{align*}\n% \t\\tilde{\\bfZ} &= \\bfZ \\bfR^{-1} \\\\\n% \t\\tilde{\\bfW} &= \\bfR \\bfW\n% \\end{align*}\n% First, note that the model likelihood is unchanged by this rotation, irrespective of the prior distribution used.\n% \\begin{equation*}\n% \t\tp(\\bfY | \\tilde{\\bfZ} \\tilde{\\bfW}, \\tau) = p(\\bfY | \\bfZ \\bfR^{-1} \\bfR \\bfW, \\tau) = p(\\bfY | \\bfZ \\bfW, \\tau)\n% \\end{equation*}\n% However, the prior distributions of the factors and the weights are only invariant to rotations when using isotropic Normal priors:\n% \\begin{equation*}\n% \t\\ln p(\\bfW) \\propto \\sum_{k=1}^{K} \\sum_{d=1}^{D} w_{d,k}^2 = \\mathrm{Tr}(\\bfW^T \\bfW) = \\mathrm{Tr}(\\bfW^T \\bfR^{-1} \\bfR \\bfW) = \\mathrm{Tr}(\\tilde{\\bfW^T} \\tilde{\\bfW})\n% \\end{equation*}\n% where we have used the property $\\bfR^{T} = \\bfR^{-1}$ that applies to rotation matrices. The same derivation follows for the factors $\\bfZ$.\\\\\n% In practice, this property renders conventional Factor Analysis unidentifiable, hence limiting its interpretation and applicability. Sparsity assumptions, however, partially address the rotational invariance problem \\cite{Hore2015}.\n\n% It is important to remark that the factors are nonetheless invariant to permutations. This implies that under different initial conditions, the order of the factors is not necessarily the same in independent model fittings. 
To address this we manually sort factors \\textit{a posteriori} based on total variance explained.\n\n\n% \\subsection{Stochastic variational inference algorithm}\n\n% In \\Cref{section:stochastic_variational_inference} I have explained how to derive a stochastic variational inference (SVI) algorithm for a general Bayesian model using an adapted version of the algorithm introduced in \\cite{Hoffman2012}.\\\\\n% To apply the SVI algorithm to MOFA the first step is to choose the \\textit{local} and \\textit{global} dimensions such that the \\textit{local} dimension will be factorised in the ELBO and where the stochastic gradients apply. \n\n% In single-cell studies we expect increasingly large datasets in the number of cells, but the number of features are expected to remain roughly constant. Thus, the natural dimension to define as \\textit{local} is the axis of samples. In the case of MOFA+, the variables that are classified as \\textit{local} are the Factors $\\mathbf{Z_g} = \\{ z_{nk}^g \\}$, which due to the reparametrisation of the spike-and-slab prior consists on the element-wise product of two matrices: $\\hat{\\mathbf{Z_g}}$ and $\\mathbf{S_g}$. All other hidden variables are global. \n\n% This leads to the following SVI algorithm:\n% \\begin{algorithm}[h!]\n%   \\caption{Stochastic mean-field variational inference for MOFA+}\n%   \\begin{algorithmic}[1]\n% \t\\State Initialise randomly the parameters of the global variables \\{$\\bf\\tau^{gm}$, $\\bf\\hat{W}^m$, $\\bfS^\\bfm$, $\\bf\\alpha^m, \\bf\\theta^m$, $\\bf\\alpha^g$, $\\bf\\theta^g$\\}.\n% \t\\State Initialise the step size $\\rho^{(t=0)}$\n% \t\\Repeat\n% \t    \\State \\text{sample $\\mathcal{B}$ a mini-batch of samples of size $S << N$}\n% \t\t\\For{\\text{each local variational parameter $\\phi_{nk}^g$ of nodes \\{$\\hat{z_{nk}^g}$, $s_{nk}^g$\\} such that $n$ is in batch $\\mathcal{B}$}} \\\\\n% \t\t\t\\State $\\phi_{nk}^{(t+1)}$ is the updated parameter $\\phi_{nk}$ following the classic VI update equation \\\\\n%       \t\\EndFor\n% \t\t\\For{\\text{each global variational parameter $\\lambda$ of nodes \\{$\\bf\\tau^{gm}$, $\\bf\\hat{\\bfW}^m$, $\\bfS^m$, $\\bf\\alpha^m, \\bf\\theta^m$, $\\bf\\alpha^g$, $\\bf\\theta^g$\\}}}\n% \t\t     \\State\n%         \t\t\\begin{align} \\label{eq_elbo_factorised} \\begin{split}\n%             \t\\lambda^{(t + 1)} &= (1-\\rho ^{(t)})\\lambda^{(t)} +  \\rho ^{(t)} \\lambda_{\\mathcal{B}}^{(t+1)}\n%             \\end{split} \\end{align}\n%             \\State \\text{where $\\lambda_{\\mathcal{B}}^{(t+1)}$ is the updated parameter $\\lambda$ following the classic VI update equation,}\n%             \\State \\text{but considering the selected batch $\\mathcal{B}$ repeated $N/S$ times instead of the full dataset.}\n%       \t\\EndFor\n% \t\\Until{ELBO convergence}\n% \t\\end{algorithmic}\n% \t\\label{MOFAstochasticascent}\n% \\end{algorithm}\n\n% \\subsection{Theoretical comparison with published methods}\n% As discussed in Chapter 3, a variety of factor analysis models exist with the aim of perfoming multi-view and/or multi-group data integration. A comparison of multi-view method is shown in \\Cref{GFAtable}. 
Here we specifically focus on the methods that have been applied to single-cell data:\n% \\begin{table}[h]\n% \t\\begin{tabular}{@{}llllll} \n% \t\t\\toprule\n% \t\t{\\textbf{Method}} & {\\textbf{\\parbox{2.0cm}{Scales to\\\\$>1e5$ cells?}}} & {\\textbf{\\parbox{2.0cm}{Multi-view}}} & {\\textbf{\\parbox{2.0cm}{Multi-group}}} & {\\textbf{\\parbox{2.0cm}{Missing values}}} & {\\textbf{\\parbox{2.0cm}{Likelihoods}}} \\\\ \\toprule\n% \t\tslalom \\cite{Buettner2017} & No & No & No & No & ZI gaussian \\\\\\midrule\n% \t\tpCMF \\cite{Durif2019} & Yes & No & No & No & ZI poisson \\\\\\midrule\n% \t\tZIFA \\cite{Pierson2015} & No & No & No & No & ZI gaussian \\\\\\midrule\n% \t\tscVI \\cite{Lopez2018} & Yes & No & Yes & No & ZI negative binomial \\\\\\midrule\n% \t\tMSFA \\cite{DeVito2019} & No & No & Yes & No & Gaussian \\\\\\midrule\n% \t\tZINB-WaVE \\cite{Risso2018} & No & No & No & No & ZI negative binomial \\\\\\midrule\n% \t\tAJIVE \\cite{Feng2018} & No & Yes & No & No & NA \\\\\\midrule\n% \t\tDIABLO \\cite{Singh2018} & Yes & Yes & No & * & NA \\\\\\midrule\n% \t\tscHPF \\cite{Levitin2019} & Yes & No & No & Yes & Negative binomial \\\\\\midrule\n% \t\tMOFA \\cite{Argelaguet2018} & No & Yes & No & Yes & Gaussian/Poisson/Bernoulli \\\\\\midrule\n% \t\tMOFA+ & Yes & Yes & Yes & Yes & Gaussian/Poisson/Bernoulli \\\\\\midrule\n% \t\\end{tabular}\n% \t\\caption{Overview of Factor analysis methods for single-cell data.}\n% \t% \\label{XXX}\n% \\end{table}\n\n\n\\subsection{A note on the implementation}\n\nThe core of MOFA+ is implemented in Python, and the downstream analysis and visualisations are implemented in R. GPU acceleration is implemented using CuPy \\cite{Okuta2017}, an open-source matrix library accelerated with NVIDIA CUDA. To facilitate adoption of the method, we deployed MOFA+ as open-source software\\footnote{\\url{https://github.com/bioFAM/MOFA2}} with multiple tutorials and a web-based analysis workbench\\footnote{\\url{http://www.ebi.ac.uk/shiny/mofa/}}.%, hopefully enabling a user-friendly exploration of complex single-cell datasets.\n\n\n\n", "meta": {"hexsha": "d2e522d8b30708f22e76d289a4d31bbff973cc77", "size": 17413, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter4/model_description.tex", "max_stars_repo_name": "rargelaguet/thesis", "max_stars_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2021-01-08T13:01:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T07:24:40.000Z", "max_issues_repo_path": "Chapter4/model_description.tex", "max_issues_repo_name": "rargelaguet/thesis", "max_issues_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter4/model_description.tex", "max_forks_repo_name": "rargelaguet/thesis", "max_forks_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-09T04:47:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-04T08:25:50.000Z", "avg_line_length": 88.3908629442, "max_line_length": 993, "alphanum_fraction": 0.738298972, "num_tokens": 4956, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8198933447152497, "lm_q2_score": 0.6859494678483918, "lm_q1q2_score": 0.5624054034998636}}
{"text": "%to make section start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Calculus}\n\t\\lettrine[lines=4]{\\color{BrickRed}I}n the Arithmetic chapter of this book we have written extensively on various theorems using abstract numbers in order to extend the scope of the validity of the latter. However we have only discussed a few on how we should handle these abstract numbers. This is what we will see now.\n\nAs you may know already know, a number may be considered by making an abstraction of the nature of the objects that constitutes the group that it characterizes and as well as how to codify it (Indo-Arabic numbers, Roman numbers or other system...). We then say that the number is an \"\\NewTerm{abstract number}\\index{abstract number}\" and when we handle these kinds of objects we say that we are doing \"algebra calculus\" or also \"\\NewTerm{literal calculation}\".\n\n\\textbf{Definition (\\#\\mydef):} \"\\NewTerm{Literal calculation}\\index{Literal calculation}\" is the fact to calculate with variables (that is to say with letters) as you would with numbers.\n\nFor the mathematicians it is often not advantageous to work with numerical values (1,2,3,...) because they represent only specific cases. What look for physicists, engineers and mathematicians are universally applicable relations in a most general framework as possible.\n\nThese abstract numbers today commonly named \"\\NewTerm{variable}\\index{variables}\" are often represented by the Latin alphabet (for which the first letters of the Latin alphabet $a, b, c, ...$ often denote imposed values and the last $..., x, y, z$ variables values), Greek alphabet (also much used to represent more or less complex mathematical operators) and the Hebrew alphabet (to a lesser extent).\n\nAlthough these symbols can represent any number, there are however some as in  physics or mathematics which may represent constants named \"\\NewTerm{Universal constants}\\index{Universal constants}\" such as the speed of light $c$, the gravitational constant $G$, the value of $\\pi$, the Euler number $e$, etc.\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIt seems that the letters to represent numbers were used for the first time by Vi\u00e8te in the middle of the 16th century (but the notation of exponents did not exist at this time).\n\t\\end{tcolorbox}\t\n\n\tA variable is therefore likely to take different numerical values. All these values can vary according to the character of the problem considered. Let us recall (we had already defined this in the section on Numbers of the Arithmetic chapter) that given two numbers $a$ and $b$ such that $a<b$, then:\n\n\\begin{enumerate}\n\t\\item[R1.] We name \"\\NewTerm{domain of definition}\\index{domain of definition}\" of a variable, all numerical values it is likely to take between two specified limits (endpoints) or on a set (like $\\mathbb{N}, \\mathbb{R},\\mathbb{R}^+,$ etc.).\n\t\n\t\\item[R2.] We name \"\\NewTerm{closed interval with endpoints $a$ and $b$}\\index{closed interval}\", the set of all numbers $x$ between these two values and we denote as example as follows:\n\t\n\t\n\t\\item[R3.] We name \"\\NewTerm{open interval with endpoints $a$ and $b$}\\index{open interval}\", the set of all numbers $x$ between these two values not included and we denote it as example as follows:\n\t\n\t\n\t\\item[R4.] 
We name \"\\NewTerm{interval closed, left open right}\\index{semi-open interval}\" or \"\\NewTerm{semi-closed left}\\index{semi-closed interval}\" the following relation as example:\n\t\n\t\n\t\\item[R5.] We name \"\\NewTerm{interval open left, closed right}\" or \"\\NewTerm{semi-closed right}\" the following relation as example:\n\t\n\\end{enumerate}\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIf the variable $x$ can take all possible negative and positive values we write therefore: $\\left] -\\infty,+\\infty \\right[$ where the symbol \"$\\infty$\" means \"infinite\". Obviously there can be combinations of open infinite right intervals with left endpoint and vice versa.\n\t\\end{tcolorbox}\t\n\n\\textbf{Definition (\\#\\mydef):} We name \"\\NewTerm{neighborhood of $a$}\\index{neighborhood (functional analysis)}\", any open interval containing $a$ (it's a simple concept that we will use later to define a continuous function). So:\n\t\nis a neighborhood of $a$.\n\n\\subsection{Equations and Inequations}\n\nElementary algebra consists starting from the definitions of addition, subtraction, multiplication, and power and their properties (associativity, distributivity, commutativity, identity element, inverse, ...) - this is according to which set we are working with a body or a commutative abelian group or not (\\SeeChapter{see section Set Theory}) - to handle within a fixed goal \"\\NewTerm{algebraic equations}\\index{algebraic equations}\" linking together variables and constants.\n\nWe will define afterwards what an equation and an inequality are but first we want to define some of their properties:\n\nLet $A$ and $B$ be any two polynomials (or monomials)  - see definitions a little further - the expressions:\n\t\nsatisfy the following properties:\n\t\\begin{enumerate}\n\t\t\\item[P1.] We can always add or substract from the two members of an equation or inequality same polynomial by obtaining an equivalent equation or inequality (i.e. with the same solutions or reductions). We then say that the equality or inequality remain \"true\" by the operations of addition or subtraction member to member.\n\t\t\\item[P2.] If we multiply or if we divide both members of an equation or inequality by the same positive number we also get an equivalent equation or inequality (we have already seen this in the sections befow). We then say that the equality or inequality remains \"true\" by the operation of multiplication or division member to member.\n\t\t\\item[P3.] If we multiply or if we divide both sides of an inequality by the same negative number and if we reverse the direction of inequality, then we get an inequality or equivalent equation.\n\t\\end{enumerate}\n\t\n\t\\subsubsection{Equations}\n\t\\textbf{Definition (\\#\\mydef):} An \"\\NewTerm{equation}\\index{equation}\" is a relation of equality between all abstract values (i.e.: two algebraic expressions) or not all abstract (since we're talking about equations with one unknown, two unknowns, three unknowns and some constants...) interconnected by various operators.\n\n\tThe perfect mastery of elementary algebra is fundamental in physics and mathematics and in the industry!!! Since there are an infinite number of types of equations, we will not present them all here. 
It is the role of the teacher/trainer to train, over several years (2-3 years on average), the minds of their audience to solve many different configurations of algebraic equations (presented as everyday problems, purely mathematical problems or geometric problems), so that students learn to manipulate these equations without error, with logical and rigorous reasoning (practice makes perfect...)!

In other words: a teacher/trainer and an ad hoc institution are irreplaceable for acquiring knowledge and experience, and for getting feedback on that experience!

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We have attempted below to make a simple generalization of the basic rules of elementary algebra. This generalization will be easier to understand for the reader who already has the habit of manipulating abstract quantities.
\end{tcolorbox}

Thus, let $a, b, c, d, e, \ldots, x, y$ be abstract numbers that can take any numerical value (we stay within the classical high-school numbers...).

Let $\Xi$ (the Greek capital letter "Xi") represent one or more abstract numbers (variables) combined between them in any way, so that we may or may not have different and distinguishable algebraic monomials (one abstract term) or polynomials (poly = many). We are therefore doing here a kind of abstraction of the abstraction or, if you prefer, working with a variable of several variables.

Properties (in fact these are more examples than properties...):
\begin{enumerate}
	\item[P1.] We will always have $\Xi=\Xi$ if and only if the term $\Xi$ on the left of the equality is the same term as the $\Xi$ on the right of the equality. If this condition is satisfied then we have:
	
	Otherwise:
	
	where we therefore exclude the case where all the $\Xi$ above are identical to each other (otherwise we get property P1 again).
	\item[P2.] If $\Xi\neq 0$ we have:
	
	which verifies the symbolism of the equation $\Xi=\Xi$ only in the case where the elements are identical between them (we obviously exclude the case of a zero denominator).

	\item[P3.] If all the $\Xi$ are identical, then:
			
	Otherwise we have:
	
	which cannot be written in a simple condensed form. It may also happen that:
	
	with the $\Xi$ on the right of the equality identical to none, one or more of the $\Xi$ in the left member of the equality.

	\item[P4.] We can have:
	
	without the numerators or denominators necessarily being equal (we exclude, of course, the case where the denominator is equal to zero).

	Otherwise we can also have:
	
	But do not forget (\SeeChapter{see section Operators}) that in the general case, where the numerator and denominator are not equal:
	
	
	\item[P5.] If all denominators are strictly equal, we have:
	
but it is nevertheless not impossible to have:
	
with the $\Xi$ on the right of the equality identical to any one or more of those in the left member of the equality, or it is even quite possible to have:
	

	\item[P6.] Let $\pm$ represent either the addition or the subtraction symbol; we then have (for a given sign pattern):
	
if all the $\Xi$ are identical to each other, or if the combination of an indeterminate number of $\Xi$ is equal to the $\Xi$ on the right of the equality.

	Otherwise we will have (meaning that the result will be some undetermined monomial or polynomial):
	

	\item[P7.]
If all the base $\Xi$ raised to a power are strictly identical, we have:
	
\end{enumerate}
if and only if the bases $\Xi$ are equal (or can be decomposed so as to be equal); the powers $\Xi$ are not necessarily equal.

From these 7 basic rules/properties/examples, we can solve, simplify, or show whether or not a simple equation has solutions relative to a given problem or statement.

Thus, consider an operand, or a sequence of arbitrary operations on one or more abstractions of abstractions, among which one (or more) of the numeric values is unknown (the others being known). We should then be able to prove that a relation (statement) like:

has existing solutions or not (that is to say, is True or False).

In the case of an equation involving the absolute value (\SeeChapter{see section Arithmetic Operators}), of the type:
\[ |x| = c \]
with the second member $c$ strictly positive (otherwise the relation would be nonsense), this is of course equivalent, from the definition of the absolute value, to writing:
\[ x = c \quad \text{or} \quad x = -c \]

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
\textbf{R1.} The presence of the absolute value in an algebraic equation in which we seek solutions often doubles the number of solutions.\\

\textbf{R2.} An equation is named a "\NewTerm{conditional equation}\index{condition equation}" when there are numbers in the set of definition that are not solutions (which is actually the most common case). Conversely, if every number of the definition set is a solution of the equation, then the equation is named an "\NewTerm{identity equation}\index{identity equation}".
\end{tcolorbox}

We sometimes have to solve a "\NewTerm{system of equations}\index{system of equations}".
\begin{itemize}
	\item What is it? It is a set of at least two equations to solve (solving is not always the same as simplifying an expression!).

	\item What is the specificity of such a system? The solutions of the system are the intersection of the solutions of all the equations to be solved (for detailed examples see the sections on Linear Algebra and Numerical Methods). 

	\item What is the usage? It is endless (see the different chapters of this book): these systems solve problems involving applications of mathematics to other fields (finance, engineering, operational research, etc.).
\end{itemize}		
Because of the unlimited variety of applications, it is difficult to establish precise general rules of solution (\SeeChapter{see section Theoretical Computing}); the procedures below will be useful only if the problem can be formulated as equations, and they can at least help a little to avoid some errors:
\begin{enumerate}
	\item If we have a problem statement already written, we read it several times carefully and consider the given facts, the number of unknowns to find and their domains of definition (summarizing the statement on a sheet of paper, or anywhere else, is often useful for large problems!).
	\item Select letters that represent the unknown quantities. This is one of the decisive steps in the search for solutions. Sentences containing words such as: find, what, how, where, when should help you identify the unknown quantities.
	\item Where possible, make a drawing (in your head or on paper) with captions.
Of course, this is most of the time possible only for problems with 1, 2 or 3 unknowns.
	\item List the known facts and relations about the unknown quantities. A relation can be described by an equation in which statements written as normal sentences appear on one or both sides of the equals sign.
	\item After the previous step, write one or more equations that accurately describe what was stated in sentences.
	
	\item Solve the equation or the system of equations formulated in the previous steps, using one of the many existing heuristic techniques.
	\item Check the solutions obtained in the previous step against the initial problem statement (check that each solution is consistent with the conditions of the statement).
\end{enumerate}
Some methods of resolution of systems of equations are treated in detail in the section on Theoretical Computing and also in the section on Linear Algebra (you will then better understand the procedure above).
 
 \subsubsection{Inequations}
 Previously we saw that an equation is composed of an equality between various calculations with different terms (with at least one "unknown" or "abstract number"), and that:
 	\begin{enumerate}
 	  \item "Solving an equation" is the process of calculating the value(s) of the unknown that satisfy the equality (when a solution exists!)
 	  
 	  \item "Simplifying an equation" is the process of mathematically minimizing the number of terms (factoring, eliminating...)
 	  
 	  \item "Developing an equation" is the process of expanding all the terms.
 	\end{enumerate}
Why do we need to recall the definition of an equation? Simply because, for the inequation, it is almost the same intellectual process! The difference? Where the equation is an equality, the inequation is an inequality (...): like the equation, the inequation is composed of various calculations with different terms interconnected by various operators, with at least one unknown.

Main differences between equalities and inequalities:

\begin{enumerate}
	\item Equality: symbolized by the symbol $=$
	\item Inequality: symbolized by the strict or non-strict order relations: $<, \leq, \geq, >$
\end{enumerate}
	
When we solve an inequality, our unknown may take a whole range of values that satisfy the inequality. We then say that the solution of the inequality is a "\NewTerm{set of values}\index{set of values}". This is the fundamental difference between an equality (\underline{isolated} solutions) and an inequality (a \underline{range} of solutions)!

Let us refresh our memory about the signs that we can meet in an inequality:
\begin{itemize}
	\item $<$: Must be read "\NewTerm{strictly inferior to}\index{strictly inferior symbol}" or "\NewTerm{strictly less than}\index{strictly less than}". In this case the target numerical value is not included in the range, and we can then represent the range (interval) with an open square bracket ]... or ...[ next to the target value.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\
\begin{flushleft}
Writing $x<5$ means $x \in ]-\infty,5[$ and $x<-5$ means $x \in ]-\infty,-5[$ .
\end{flushleft}
\end{tcolorbox}
\item $>$: Must be read "\NewTerm{strictly superior to}\index{strictly superior to}" or "\NewTerm{strictly greater than}\index{strictly greater than}".
In this case the target numerical value is also not included in the range (interval), and we can again represent the range with an open square bracket ]... or ...[ next to the target value.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\
\begin{flushleft}
Writing $x>5$ means $x \in ]5,+\infty[$ and $x>-5$ means $x \in ]-5,+\infty[$ .
\end{flushleft}
\end{tcolorbox}
\item $\leq$: Must be read "\NewTerm{inferior or equal to}\index{symbol inferior or equal to}" or "\NewTerm{less than or equal to}\index{symbol less than or equal to}". In this case, the target numerical value is included in the range (interval), and we can then represent the range with a closed square bracket [... or ...].
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\
\begin{flushleft}
Writing $x\leq 5$ means $x \in ]-\infty,5]$ and $x\leq-5$ means $x \in ]-\infty,-5]$ .
\end{flushleft}
\end{tcolorbox}
\item $\geq$: Must be read "\NewTerm{superior or equal to}\index{symbol superior or equal to}" or "\NewTerm{greater than or equal to}\index{symbol greater than or equal to}". In this case, the target numerical value is included in the range (interval), and we can then represent the range with a closed square bracket [... or ...].
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\
\begin{flushleft}
Writing $x\geq 5$ means $x \in [5,+\infty[$ and $x\geq-5$ means $x \in [-5,+\infty[$ .
\end{flushleft}
\end{tcolorbox}
\end{itemize}
The objective of inequalities is most of the time (apart from aesthetic purposes) to find at least one numeric value that delimits the solution domain (of all the abstract terms - variables - of the inequality) satisfying the inequality.

There are many ways to represent the domains of definition of variables which satisfy an inequality. We will see through small examples some of the most common possibilities:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\
Consider a linear inequality (of first degree) in a single unknown $x$, on which we impose an arbitrary particular constraint, for example (of course, the expression could contain more terms...):
\[ x<2 \]
where in the above inequation we have already simplified away all the superfluous terms.

Solving the inequality amounts to looking for the $x$ values less than $2$. Of course, there is not only one solution in $\mathbb{R}$ but a whole set (interval) of solutions, and this is the principle of inequalities!\\

To solve the inequality, we first observe the type of inequality imposed ("strict" or "or equal"). Then, in high-school classes (and not only there, sometimes...), we traditionally represent the set $\mathbb{R}$ by a table such as:

We intuitively know that the solution of our inequality includes all values below $2$ ($2$ being itself excluded from the solutions) down to $-\infty$. We then write this interval or domain as follows:
\[ x\in\left]-\infty,2\right[ \]
Then we can represent the set of solutions in tabular form (it helps understanding and prepares the student for solving systems of equations and inequalities, and for the study of the variations of functions).
For this, we take the template of the previous table and place in it our target value (we have only one in this particular example, but sometimes there can be several, because there are singularities or roots at certain values of the definition domain), that is to say the value $2$:

and finally, we delimit with color (...) the set of solutions from $-\infty$ to $+2$ excluded:

At the value $2$, we do not forget to mark the sign $....[$ to show that this value is excluded from the solutions. And that's it; the concept can be extrapolated to much more complex inequalities.
\end{tcolorbox}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
\textbf{R1.} Sometimes, instead of representing tables as we have done above, some teachers (this is a purely artistic choice) ask their students to shade the boxes of the table and to draw small circles inside..., or they use small arrows, or draw the graph of the inequality (the latter method is certainly aesthetic but takes time in complex cases; an example is given below...).\\

\textbf{R2.} For inequalities of degree greater than 1, it is necessary (see later what that means exactly) first to determine the roots of the inequality, which determine the intervals, and then, by trial and error, to determine which intervals are to be rejected or kept.
\end{tcolorbox}
We can also (just as with equations) sometimes have to solve a "system of inequalities". What is it? It is a set of at least two inequalities to solve. The peculiarity of the system? The set of solutions of the system is the intersection of the solutions of every inequality.

For example, consider the system of three inequalities (plotted further below):
\[ \begin{cases} x_1+x_2\geq 3\\ 2x_1-x_2\leq 5\\ -x_1+2x_2\leq 3 \end{cases} \]
In other words, the method is the same as before, with the difference that our table (representing the solution areas) will include an additional line for each additional inequality in the system, plus one synthesis line which is the projection of the possible solution areas of the system.

Thus, a system of $n$ inequalities will obviously have a summary table with $n+1$ lines.

Mathematically, the solution areas (there may be several, possibly disjoint) can be written as a set of domains:

Systems of inequalities are very common in many problems of mathematics, physics, econometrics, etc.
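As a small numerical aside (a minimal Python sketch, not part of the original solving method; the candidate points are arbitrary), membership of a point in the solution domain of the system above is just the conjunction of the three inequalities:

\begin{verbatim}
# Test candidate points against the system
#   x1 + x2 >= 3,  2*x1 - x2 <= 5,  -x1 + 2*x2 <= 3
def in_region(x1, x2):
    return (x1 + x2 >= 3) and (2*x1 - x2 <= 5) and (-x1 + 2*x2 <= 3)

for p in [(2, 1), (3, 1), (0, 0), (4, 3)]:
    print(p, in_region(*p))
# (2, 1) lies on the boundary x1 + x2 = 3  -> True
# (0, 0) violates x1 + x2 >= 3             -> False
\end{verbatim}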
It is important to practice solving them during your studies, with the help of your teacher.

For example, here is a possible representation of the solution domain of the previous system of inequalities:
\begin{center}
\begin{tikzpicture}

    \draw[gray!50, thin, step=0.5] (-1,-3) grid (5,4);
    \draw[very thick,->] (-1,0) -- (5.2,0) node[right] {$x_1$};
    \draw[very thick,->] (0,-3) -- (0,4.2) node[above] {$x_2$};

    \foreach \x in {-1,...,5} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x};
    \foreach \y in {-3,...,4} \draw (-0.05,\y) -- (0.05,\y) node[right] {\tiny\y};

    \fill[blue!50!cyan,opacity=0.3] (8/3,1/3) -- (1,2) -- (13/3,11/3) -- cycle;

    \draw (-1,4) -- node[below,sloped] {\tiny$x_1+x_2\geq3$} (5,-2);
    \draw (1,-3) -- (3,1) -- node[below left,sloped] {\tiny$2x_1-x_2\leq5$} (4.5,4);
    \draw (-1,1) -- node[above,sloped] {\tiny$-x_1+2x_2\leq3$} (5,4);

\end{tikzpicture}
\end{center}

\subsection{Remarkable Identities}
The remarkable identities are somewhat magical relations that we use most often for factoring or solving algebraic equations in all fields of Applied Mathematics. They have an important place and must absolutely be mastered by the reader.

Let us recall some notions already studied in the section on Set Theory and in the chapter on Arithmetic (we assume the concept of neutral element to be known, as it was already defined):
\begin{itemize}
	\item Commutativity:
	\[ a+b=b+a, \qquad ab=ba \]
	\item Associativity:
	\[ (a+b)+c=a+(b+c), \qquad (ab)c=a(bc) \]
	\item Distributivity:
	\[ a(b+c)=ab+ac \]
\end{itemize}
Similar properties are of course valid for the subtraction operation, with the adequate domain of definition.

\pagebreak
We can check with numerical values (by replacing each abstract number by a randomly chosen number), or by expansion (which would be better, so that you are sure you have understood what we have been talking about until now), that the following algebraic identities are satisfied (they are the best known ones):
\begin{enumerate}
	\item Second degree identity:
	\[ (a+b)^2=a^2+2ab+b^2 \]
	\item Third degree identity:
	\[ (a+b)^3=a^3+3a^2b+3ab^2+b^3 \]
\end{enumerate}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We can very well put this into practice with $(a+(c+d))^2=(a+b)^2$, where we have obviously set $b:=c+d$ (we do an "abstraction of the abstraction" or, more commonly, a "change of variable").
And therefore:

\end{tcolorbox}
We can thus notice that, in general, to calculate the expansion of $(a+b)^n$ we use the expansion of $(a+b)^{n-1}$, that is to say the identity calculated for the previous value of $n$.

We can also see a pattern if we put everything together:

We thus notice the following properties for $a$ and $b$:
\begin{enumerate}
	\item The powers of $a$ decrease from $n$ to $0$ ($a^0=1$, so it is not written in the last term).
	
	\item The powers of $b$ increase from $0$ to $n$ ($b^0=1$, so it is not written in the first term).
	
	\item In each term, the sum of the powers of $a$ and $b$ is equal to $n$.
	
	\item The multiplier coefficients in front of each term are calculated by summing the multipliers of two terms of the expansion obtained with the previous value of $n$ (see figure below).
\end{enumerate}
The so-named "\NewTerm{binomial coefficients}\index{Binomial coefficient}" can then be obtained by the construction of the "\NewTerm{Pascal triangle}\index{Pascal triangle}" below:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/pascal_triangle.jpg}
	\caption{Pascal Triangle}
\end{figure}
where each element is given by (\SeeChapter{see chapter Probability}):
\[ \binom{n}{p}=\frac{n!}{p!\,(n-p)!} \]
with $n,p\in \mathbb{N}^{+}$.
\begin{theorem}
We can then prove that:
\[ (a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^{n-k}b^{k} \]
which is the famous "\NewTerm{Newton binomial}\index{Newton binomial}" (which we will reuse many times in different chapters and sections of this book), also named the "\NewTerm{Binomial theorem}\index{Binomial theorem}".
\end{theorem}
\begin{dem}
This relation can be proved quite simply by induction: assuming the relation true at rank $n$, we calculate it at rank $n+1$:

The relation then holds at rank $n+1$ whenever it holds at rank $n$; since it holds at rank $1$, it is true for any $n$.
\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem} 
Regarding remarkable identities with negative values, there is no need to memorize the location of the sign "$-$". Just make a change of variable and, once the expansion is made, replace the variable back (inverse change of variable).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\

and so on for any finite power $n$.
\end{tcolorbox}
We can of course mix styles (...), as in this particularly famous example:

along with some additional remarkable practical relations that are often used in small classes for exercises:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
When, starting from the right-hand side (in a simplified numerical form), the teacher asks his students as an exercise to find the factorization on the left of the equality, there is no other way but to proceed by successive trials.
\end{tcolorbox}
For information, we also get the following famous expansion, immediately deducible from what we have seen before:

which is valid for any value of $b$ and is named the "\NewTerm{binomial expansion}\index{binomial expansion}".
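As a quick sanity check of the binomial theorem above, here is a minimal Python sketch using only the standard library (the values of $a$, $b$ and $n$ are arbitrary); it also prints the first rows of the Pascal triangle:

\begin{verbatim}
# Numerical check of (a+b)^n = sum_k C(n,k) a^(n-k) b^k  (a, b, n arbitrary)
from math import comb

a, b, n = 3, 5, 7
lhs = (a + b)**n
rhs = sum(comb(n, k) * a**(n - k) * b**k for k in range(n + 1))
print(lhs, rhs, lhs == rhs)        # 2097152 2097152 True

# The coefficients comb(n, k) are exactly the rows of the Pascal triangle:
for row in range(6):
    print([comb(row, k) for k in range(row + 1)])
\end{verbatim}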
\n\t\n\tFor small $b$ we could neglect all terms involving $b^2$ or higher powerf of $b$, giving us the approximation relation:\n\t\n\tof $b\\ll 1$.\n\t\n\tOf course there is still a lot of more useful relations with monomial (from which the biggest part is only a generalization of those presented above) that the reader will discover by his own reasoning and according to his practice and in this book through the different chapters.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIt is of course possible to multiply polynomials between them and distribute the multiplicative terms. Conversely, it is often asked to students in small classes to do the reverse procedure (\"factoring\" or \"decompose\" a polynomial) so they get used to the handling of remarkable identities. Decomposed into a product of factors is an important operation in mathematics, since it is thus possible to reduce the study of complicated expressions to the study of several simpler expressions having interesting properties depending on the context.\n\t\\end{tcolorbox}\n\t\n\t\\subsection{Polynomials}\n\t\\textbf{Definition (naive version \\#\\mydef):} We name \"\\NewTerm{algebraic univariate polynomial $P (x)$}\\index{Algebraic univariate polynomial}\" a function of degree $n\\in \\mathbb{N}$ which is written:\n\t\n\tor in a more condensed manner by:\n\t\n\t where the \"\\NewTerm{dominant factor}\\index{dominant factor}\" of a polynomial, also named \"\\NewTerm{leading coefficient}\\index{dominant factor}\", is the coefficient of the monomial of highest degree $a_n$ and the \"\\NewTerm{leading term}\\index{leading term}\" is simply $a_nx^n$.\n\t \n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} The subscript $n$ of $P (x)$ is most of time omitted as explicitly defined in the expression.\\\\\n\t\n\t\\textbf{R2.} The reader who has read the section of Set Theory, probably remember that the set of all polynomial of degree $n$ or lower form a vector space structure!\n\t\\end{tcolorbox}\n\t\\textbf{Definition (set theory version \\#\\mydef):} Let $k$ be a ring (\\SeeChapter{see section Set Theory}) and $n\\in \\mathbb{N}^{*}$, the \"\\NewTerm{polynomial ring}\\index{polynomial ring}\" in $n$ indeterminate (or variables) $k[X_1,...,X_n]$ is constructed from an elementary polynomial, named \"\\NewTerm{monomial}\\index{monomial}\" of the form:\n\t\n\twhere $\\lambda \\in k$ is the \"\\NewTerm{coefficient of the monomial}\\index{coefficient of the monomial}\", $m_1,m_2,...,m_n$ are positive non-null integers and where $X_1^{m_1}...X_m^{m_n}$ forms the \"\\NewTerm{literal part of the monomial}\\index{literal part of a monomial}\". Thus, by construction, a polynomial is a sum of a finite number of monomials named the \"\\NewTerm{terms of the polynomial}\\index{terms of a polynomial}\".\n\t\n\tTherefore, the common special case used in small classes and presented at the beginning is $k[X]$, that is to say the ring of univariate polynoms with coefficients in $k$. Indeed most of time engineers and students deals with \"ring of univariate polynoms with coefficient in $\\mathbb{R}$ and denoted by $\\mathbb{R}[X]$. 
Any element of $k[X]$ is therefore written as:
\[ \sum_{i=0}^{n}a_iX^i \]
with $a_i\in k$ (most of the time $a_i\in \mathbb{R}$), $i=0,\ldots,n$ and $n\in \mathbb{N}$.
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} Notice that the powers $i$ are always positive or null in $k[X]$!\\

\textbf{R2.} We say that two monomials are "similar" if they have the same literal part.
\end{tcolorbox}
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomials.jpg}
	\caption{Some polynomials plotted with R 3.2.1 (see my book on R)}
\end{figure}
The "\NewTerm{limiting behavior}\index{limiting behavior}" of a function describes what happens to the function as $x\rightarrow \pm\infty$. The degree of a polynomial and the sign of its leading coefficient dictate its limiting behavior. In particular:
\begin{itemize}
	\item If the degree of a polynomial $f(x)$ is even and the leading coefficient is positive, then $f(x)\rightarrow +\infty$ as $x\rightarrow \pm\infty$.
	\item If $f(x)$ is an even degree polynomial with negative leading coefficient, then $f(x)\rightarrow -\infty$ as $x\rightarrow \pm\infty$. 
	\item If $f(x)$ is an odd degree polynomial with positive leading coefficient, then $f(x)\rightarrow -\infty$ as $x\rightarrow -\infty$ and $f(x)\rightarrow +\infty$ as $x\rightarrow +\infty$.
 
	\item If $f(x)$ is an odd degree polynomial with negative leading coefficient, then $f(x)\rightarrow +\infty$ as $x\rightarrow -\infty$ and $f(x)\rightarrow -\infty$ as $x\rightarrow +\infty$.
\end{itemize}
These results are summarized in the table below.


\textbf{Definition (\#\mydef):} We name "\NewTerm{turning point}\index{turning point}" a point at which the graph changes direction from increasing to decreasing or from decreasing to increasing.
\begin{figure}[H]
	\centering
	\includegraphics[scale=0.5]{img/algebra/turning_points.jpg}
	\caption{Turning points explicitly highlighted}
\end{figure}
The function $f$ above is a $4^{\text{th}}$ degree polynomial and has $3$ turning points. The maximum number of turning points of a polynomial function is always one less than the degree of the function.

\textbf{Definition (\#\mydef):} We name "\NewTerm{root}\index{root of a polynomial}" or "\NewTerm{zero of a (univariate) polynomial}\index{zero of a univariate polynomial}" the $x$ values such that the "\NewTerm{polynomial equation}\index{polynomial equation}" $P(x)=0$ is satisfied, on the condition that at least one of the $a_i$ with $i>0$ is not null.

If the polynomial admits one or more roots $r_n$, we can then obviously factorize it as (we will prove this more rigorously further below):

so that when $x$ takes the value of one of the roots $r_n$, the expression above is zero. This is what we name, by convention, "\NewTerm{factorizing a polynomial}\index{factorize a polynomial}".

Algebraic identities are particular forms of polynomial functions. Indeed, consider a constant $c$ and a variable $x$, and:

We see that if we put:

we fall back on:

\textbf{Definitions (\#\mydef):}
\begin{enumerate}
	\item[D1.] A polynomial in one indeterminate is named a "\NewTerm{univariate polynomial}\index{univariate polynomial}"; a polynomial in more than one indeterminate is named a "\NewTerm{multivariate polynomial}\index{multivariate polynomial}".
A polynomial with two indeterminates is named a "\NewTerm{bivariate polynomial}\index{bivariate polynomial}".
	
	A famous example of a multivariate polynomial in pure mathematics, given many times in undergraduate courses, is: 
	

	\item[D2.] In the case of polynomials in more than one indeterminate, a polynomial is named "\NewTerm{homogeneous of degree $n$}\index{homogeneous polynomial of degree $n$}" if all its non-zero terms have degree $n$ (the example just above is such a polynomial!).
\end{enumerate}

\pagebreak
\subsubsection{Euclidean Division of Polynomials}
Let us now place ourselves in the ring $k[X]$. If $P(X)\in k[X]$, we denote by $\text{deg}(P)$ the degree of the polynomial $P(X)$ with coefficients in the ring $k$ (real or complex... whatever!)
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
By convention:
\[ \text{deg}(0)=-\infty \]
\end{tcolorbox}
\begin{theorem}
Given two polynomials $u(X),v(X)\in k[X]$, of respective degrees $k$ and $m$, with $k,m>0$, there are two unique polynomials $q(X),r(X)\in k[X]$ such that:
\[ u(X)=q(X)\,v(X)+r(X) \]
and:
\[ \text{deg}(r)<\text{deg}(v)=m \]
where $q(X)$ is the "\NewTerm{quotient polynomial}\index{quotient polynomial}" and $r(X)$ the "\NewTerm{residual polynomial}\index{residual polynomial}".
\end{theorem}
\begin{dem}
If $u(X) = 0$ the result is obvious. Let us suppose that $u(X)\neq 0$ and prove the existence by induction on the degree $k$ of $u(X)$.

If $k = 0$ then $q(X) = 0$ (since $m>0$), and therefore $r(X) = u(X)$ will do the job.

Now let us suppose the statement true for any $k\leq n$... (this assumption costs us nothing...):

Let $u(X)$ be of degree $k=n+1$. If $m>n+1$ then $q(X) = 0$ and $r(X) = u(X)$ again do the job.

Otherwise, if $m\leq n+1$, then by writing ($u_{n+1}$ being the $(n+1)$-th coefficient of the polynomial $u(X)$ and $v_m$ the $m$-th coefficient of $v(X)$):
\[ u(X)-\frac{u_{n+1}}{v_m}X^{\,n+1-m}\,v(X) \]
we reduce $u(X)$ to a polynomial of degree $\leq n$, since $v(X)$ is of degree $m$ (and exists)!

Indeed, the term:
\[ \frac{u_{n+1}}{v_m}X^{\,n+1-m}\,v(X) \]
removes (at least) the term of highest degree $u_{n+1}X^{n+1}$.

By the induction hypothesis, there are $f(X)$ and $g(X)$ such that:
\[ u(X)-\frac{u_{n+1}}{v_m}X^{\,n+1-m}\,v(X)=f(X)\,v(X)+g(X) \]
with $\text{deg}(g)<m$. So, after rearranging:
\[ u(X)=\left(f(X)+\frac{u_{n+1}}{v_m}X^{\,n+1-m}\right)v(X)+g(X) \]
and therefore:
\[ q(X)=f(X)+\frac{u_{n+1}}{v_m}X^{\,n+1-m}, \qquad r(X)=g(X) \]
do the job!

So by induction we see that the Euclidean division exists in the polynomial ring $k[X]$.
\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
This proof allowed us, in the section on Set Theory, to show that this ring is "principal".
\end{tcolorbox}
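The long division worked out in the example below can be cross-checked mechanically; here is a minimal SymPy sketch (\texttt{sympy.div} performs polynomial Euclidean division), applied to the same dividend and divisor as in the example that follows:

\begin{verbatim}
# Euclidean division of x^3 + x^2 by x - 1 with SymPy
from sympy import symbols, div

x = symbols('x')
q, r = div(x**3 + x**2, x - 1, x)
print(q)   # x**2 + 2*x + 2   (quotient polynomial)
print(r)   # 2                (residual polynomial)
\end{verbatim}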
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
We will see only one example, as the idea is always the same. We want to divide $x^3+x^2$ by $x-1$; we get:
\begin{equation}
	\renewcommand{\arraystretch}{1.2}
	\renewcommand{\arraycolsep}{2pt}
	  \begin{array}{rrrr|rrr}
	 x^3&+x^2 &   &\cdot(-1)&x  &-1 &  \\
	\cline{5-7}
	-x^3&+x^2 &   &  &x^2&+2x&+2\\
	\cline{1-2}
	    &2x^2 &   &  &   &   &  \\
	    &-2x^2&+2x&  &   &   &  \\
	    \cline{2-3}
	    &     &2x &  &   &   &  \\
	    &     &-2x&+2&   &   &  \\
	              \cline{3-4}
	    &     &   &+2&   &   &  \\ 
	  \end{array}
\end{equation}	
so the quotient is $x^2+2x+2$ and the remainder is $2$.
\end{tcolorbox}


\pagebreak
\subsubsection{Factorization Theorem of Polynomials}
We will now prove an important theorem which is in fact originally illustrated (among others) by the remarkable identities we saw above:


\begin{theorem}
If a polynomial $P(x)\in k[X]$ with coefficients in $k$, of degree $n\geq 1$, has a root $x=r$ in the ring $k$, then we can factorize $P(x)$ by $(x - r)$ such that:
\[ P(x)=(x-r)\,Q(x) \]
where $Q$ is a polynomial of degree $n-1$ (and can therefore be a simple monomial).

In other words, to "\NewTerm{factorize a polynomial}\index{factorization of polynomial}" is to write it as a product of monomials (in the general case, of polynomials). When applied not only to polynomials but also simply to numbers, factorization is an operation that transforms a sum into a product!
\end{theorem}
\begin{dem}
The idea is to perform the Euclidean division of $P(x)$ by $(x-r)$. According to the previous theorem, there exists a pair $(Q, R)$ of polynomials such that:
\[ P(x)=(x-r)\,Q(x)+R(x) \]
and, according to the result of the previous theorem on Euclidean division:
\[ \text{deg}(R)<\text{deg}(x-r) \]
But $\text{deg}(x-r)=1$, so $\text{deg}(R)=0$ (or $-\infty$ by convention); $R(x)$ is therefore a constant polynomial function. Moreover, by hypothesis, $r$ is a root of $P(x)$. We therefore have:
\[ 0=P(r)=(r-r)\,Q(r)+R(r)=R(r) \]
So $R(r)=0$. Therefore $R(x)$ is the zero polynomial and the theorem is practically proved. It remains to prove that $\text{deg}(Q)=n-1$, which is an immediate consequence of the relation:
\[ \text{deg}(P)=\text{deg}(x-r)+\text{deg}(Q) \]
Hence:
\[ \text{deg}(Q)=n-1 \]
\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
From this factorization property, sometimes named the "\NewTerm{factorization theorem}\index{factorization theorem}", we can give a foretaste of a much more important theorem:
\begin{theorem}
Let us show that if we have a polynomial function $P(x)\in k[X]$ of degree $n\in \mathbb{N}$ with coefficients in $k$, then it has at most a finite number $n$ of roots (some possibly coinciding) in $k$.
\end{theorem}

\begin{dem}
First, because $P(x)$ has a degree (order), $P(x)$ is not the zero polynomial function.
Then, let us argue by contradiction:

If the function $P(x)$ has $p$ roots with $p>n$ (more roots than the degree...), then, noting these roots $r_1,...,r_p$, we have, by the previous factorization theorem (applied $p$ times):

where $Q$ is a polynomial of degree:

Now, since by definition a polynomial is a polynomial if and only if its degree (order) belongs to $\mathbb{N}$, and here $\text{deg}(Q)=n-p<0$, the polynomial $Q$ must be the zero polynomial, such that:

It follows that:

This contradicts the initial hypothesis that $P$ is not the zero polynomial, hence:

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}

\subsubsection{Diophantine equation}
If we generalize the concept of univariate polynomial to several variables, such as:

then we call a "\NewTerm{Diophantine equation}\index{Diophantine equation}" an equation of the form:

where $P$ is a polynomial with integer (or rational) coefficients for which one seeks the roots strictly in $\mathbb{N}$ or $\mathbb{Q}$. Classical Diophantine equations are for example:
\begin{itemize}
	\item The linear Diophantine equations:
	
	
	\item The Pythagorean triples:
	\[ x^2+y^2=z^2 \]
\end{itemize}
For the general proof of the latter, the reader will have to wait a little, until the authors of this book have themselves had the time to understand the proof (...).

\subsubsection{First order univariate Polynomial and Equations}
Given the linear function:
\[ P_1(x)=ax+b \]
If $a\neq 0$, the first degree equation:
\[ ax+b=0 \]
has a simple closed-form root, given obviously by:
\[ r=-\frac{b}{a} \]
such that $P_1(r)=0$.
When $b=0$ this polynomial is simply linear; in the general case it is named an "\NewTerm{affine function}\index{affine function}".

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} If the coefficients of the univariate polynomial of degree $1$ are all such that $a,b\in \mathbb{R}$, then the root also belongs to $\mathbb{R}$.\\

\textbf{R2.} If one of the coefficients of the univariate polynomial of degree $1$ belongs to $\mathbb{C}$, then the root also belongs to $\mathbb{C}$.\\

\textbf{R3.} If both coefficients of the univariate polynomial of degree $1$ belong to $\mathbb{C}$, then the root belongs to $\mathbb{C}$ or to $\mathbb{R}$.\\

\textbf{R4.} We say that two polynomial equations are "\NewTerm{equivalent}\index{equivalent polynomials}" if they admit the same solutions.
\end{tcolorbox}
Here are also some properties of univariate first order polynomials, which we give without proof as they seem very intuitive to us (except on reader request):
\begin{enumerate}
	\item[P1.] If we add (or respectively subtract) the same number to each member of the equation (to the left of the "$=$" sign, and also to the right), we get an equation that has the same solutions as the original equation (whatever its degree).

	\item[P2.] If we multiply (or respectively divide) each member of an equation (to the left of the "$=$" sign, and also to the right) by the same non-null number, we get an equation that has the same solutions as the original equation (whatever its degree).
\end{enumerate}
The method should be general enough to be applied to all equations of the same kind, be built on the four basic arithmetic operations (addition, subtraction, multiplication and division) and the extraction of roots.
When we can find the solutions (roots) of an equation from its coefficients using only the previous operations (that is to say, in a "\NewTerm{closed form}\index{closed form}"), we say that the equation can be solved by "\NewTerm{radicals}\index{radicals}".

\subsubsection{Second order univariate Polynomial and Equations}
Given the univariate polynomial with coefficients in $\mathbb{R}$ (trinomial of second degree):
\[ P_2(x)=ax^2+bx+c \]
If we represent this univariate polynomial in the plane, this gives us:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_orientation.jpg}
	\caption{Typical orientation for second degree polynomials}
\end{figure}
If we differentiate this function (\SeeChapter{see section Differential and Integral Calculus}) and search for the point at which the derivative is equal to zero, we always find the optimum at the vertex of the parabola (which also lies on its symmetry axis):
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_optimum.jpg}
	\caption{Optimum (vertex) of the parabola}
\end{figure}
If $a\neq 0$, then we have:

We then obtain two roots, which we denote by:
\[ r_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a} \]
such that $P(r_{1,2})=0$, and we define a new term, sometimes named the "determinant of the polynomial" and most commonly the "\NewTerm{discriminant of the polynomial}\index{discriminant of the polynomial}":
\[ \Delta=b^2-4ac \]
Finally:
\[ r_{1,2}=\frac{-b\pm\sqrt{\Delta}}{2a} \]
If the second order univariate polynomial in $x$ has two roots, we can then factorize it in an irreducible form (following the factorization theorem proved earlier) as:
\[ P_2(x)=a(x-r_1)(x-r_2) \]
We can also easily prove, from the expression of the roots and by doing simple algebra, the "\NewTerm{Vieta relations}\index{Vieta relations}" (on request of the readers we can detail the developments if necessary):
\[ r_1+r_2=-\frac{b}{a}, \qquad r_1r_2=\frac{c}{a} \]

Depending on the sign of $a$ and that of the discriminant $\Delta$, we have:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_second_order_signature.jpg}
	\caption{Sign of a second degree polynomial depending on $a$ and $\Delta$}
\end{figure}
Therefore:
\begin{itemize}
	\item If $\Delta<0$ our polynomial has no real roots and cannot be factorized into a product of monomials with real ($\mathbb{R}$) factors, but only with complex ones ($\mathbb{C}$). Therefore (it is recommended to have read first the part about Complex Numbers in the section Numbers of this book):
	
	and we know that we can write any complex number in a condensed form (Euler formula); as the complex roots of a polynomial of the second degree are conjugate (we already know this jargon), we have:
	
	where (recall) $r$ is the modulus of the complex roots (equal for both) and $\varphi$ the argument of the complex roots (equal in absolute value).
	
	\item If $\Delta=0$ the polynomial equation has a single solution (a "\NewTerm{double root}\index{double root}", or "\NewTerm{root of multiplicity $2$}"), which is obviously:
	\[ r=-\frac{b}{2a} \]
	
	\item If $\Delta>0$ the polynomial equation has two solutions, defined by the general relations which we have already given above:
	
\end{itemize}
Regarding the complex case, let us take as an example the following quadratic polynomial:
\[ P(x)=x^2+1 \]
which admits only two complex roots, namely $\mathrm{i}$ and $-\mathrm{i}$.
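As a small numerical companion to this discussion, here is a minimal Python sketch of the quadratic formula (the standard-library \texttt{cmath.sqrt} handles a negative discriminant transparently); the coefficients are arbitrary examples covering the three signs of $\Delta$:

\begin{verbatim}
# Roots of a*x^2 + b*x + c via the quadratic formula, for any sign of Delta
import cmath

def quadratic_roots(a, b, c):
    s = cmath.sqrt(b*b - 4*a*c)        # complex square root of Delta
    return (-b + s) / (2*a), (-b - s) / (2*a)

print(quadratic_roots(1, 0, 1))    # (1j, -1j): roots of x^2 + 1, Delta < 0
print(quadratic_roots(1, -3, 2))   # ((2+0j), (1+0j)):           Delta > 0
print(quadratic_roots(1, 2, 1))    # ((-1+0j), (-1+0j)): double root, Delta = 0
\end{verbatim}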
In the real plane this polynomial can be plotted with Maple 4.00b by:

\texttt{>plot(x\string^2+1,x=-5..5);}
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_complex_solutions_in_real_plane.jpg}
	\caption[]{Plot example of a polynomial of degree $2$ which admits only complex solutions}
\end{figure}
where we see clearly that there are no real solutions (zeros). Placing ourselves instead in the complex plane $\mathbb{C}$, we have:

\texttt{>plot3d(abs((re+I*im)\string^2+1),re=-2..2,im=0..2,view=[-2..2,-2..2,0..2],\\
orientation=[-130,70],contours=50,style=PATCHCONTOUR,axes=frame,\\
grid=[100,100],numpoints=10000);}
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_complex_solutions_in_complex_plane.jpg}
	\caption[]{The same polynomial, now using the complex representation}
\end{figure}
where the two zeros are visible on the imaginary axis at $-1$ and $+1$. Obviously, the first time we see a function plotted over complex values, we try to find the parabola corresponding to the purely real case. To do this, we simply cut the surface above in two along the imaginary axis, and we then get:

\texttt{>plot3d(abs((re+I*im)\string^2+1),re=-2..2,im=0..2,view=[-2..2,-2..2,0..2],\\
orientation=[-130,70],contours=50,style=PATCHCONTOUR,axes=frame,\\
grid=[100,100],numpoints=10000);}
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/polynomial_complex_solutions_in_complex_plane_cutted.jpg}
	\caption[]{A little zoom on the same polynomial}
\end{figure}
where we clearly find our parabola again, visible on the cut surface. So we can ask ourselves whether the complex numbers are a natural extension of our conventional space, beyond our physical senses and our common measuring devices...!

\paragraph{Irrational Equations}\mbox{}\\\\
The practitioner must always get into the habit of verifying the solutions in the original equation, to make sure they lie in the domain of definition of the function. Indeed, solving an equation can produce values that do not satisfy the original equation; these are what we name "\NewTerm{extraneous solutions}\index{polynomial extraneous solution}", and this is typically the case with irrational equations. 

\textbf{Definition (\#\mydef):} An "\NewTerm{irrational equation}\index{irrational equation}" is an equation in which the unknown is under a radical (that is to say, in the typical case, under a square root).

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Consider the following equation:

To solve it, first we can redistribute:

We raise to the power of $2$:

We simplify a little:

and again:

We raise to the power of $2$ again:

We simplify one last time:

We get two candidate solutions, $x_1=-2$ and $x_2=2$. But only the solution $x_1=-2$ satisfies the proposed equation. Indeed, if we put the first solution into the original equation we get:

But if we put in the second one:
	
\end{tcolorbox}
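The phenomenon of extraneous solutions can be reproduced with any irrational equation; the following minimal SymPy sketch uses $\sqrt{x+2}=x$ (an illustrative equation chosen for this sketch, not the one of the example above). Squaring gives $x^2-x-2=0$, with roots $-1$ and $2$, but only $2$ survives the check in the original equation:

\begin{verbatim}
# Extraneous solutions: squaring sqrt(x+2) = x gives x^2 - x - 2 = 0
# (illustrative equation for this sketch, not the book's example)
from sympy import symbols, sqrt, solve, Eq

x = symbols('x', real=True)
candidates = solve(Eq(x**2 - x - 2, 0), x)     # [-1, 2] after squaring
for r in candidates:
    print(r, Eq(sqrt(r + 2), r))               # check in the original equation
# sqrt(1) equals 1, not -1  ->  x = -1 is extraneous
print(solve(Eq(sqrt(x + 2), x), x))            # SymPy keeps only [2]
\end{verbatim}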
\pagebreak
\paragraph{Gold Number}\mbox{}\\\\
There is a second order univariate polynomial whose solution is famous around the world. The solution is a value named the "\NewTerm{golden ratio}\index{golden ratio}" or "\NewTerm{Divine proportion}\index{divine proportion}" (...) and is found in architecture, aesthetics and phyllotaxis (that is to say, in the arrangement of the leaves around the stems of plants).

This number is:
\[ \varphi=\frac{1+\sqrt{5}}{2}\approx 1.618 \]
The golden ratio is also an algebraic number (a complex number that is a root of a non-zero polynomial in one variable with rational coefficients) and even an algebraic integer (a complex number that is a root of some monic polynomial with coefficients in $\mathbb{Z}$), as it is a solution of:
\[ x^2-x-1=0 \]
There is a very elegant way to bring out this polynomial using the $\mathcal{Z}$ transform, which we will discuss in the section on Functional Analysis.

\subsubsection{Third order univariate Polynomial and Equations}
Even if it is rare to have to solve a univariate polynomial of the $3$rd degree in theoretical physics or in engineering, doing so is quite recreational and shows a good example of an already mature mathematical reasoning (we owe these developments to Scipione del Ferro and Gerolamo Cardano, mathematicians of the 16th century...).

Given the equation:
\[ ax^3+bx^2+cx+d=0 \]
with coefficients in $\mathbb{R}$ (to begin with...). At first, the reader will see that the reasoning we applied to polynomials of order smaller than $3$ breaks down when the order is greater (except, obviously, for special simple cases...).

We will avoid the problem by using a subtle but quite justified change of variables.

Thus, nothing prevents us from putting:
\[ x=X-\frac{b}{3a} \]
and, dividing the polynomial of degree 3 by $a$, from writing:

By grouping the terms of the same order:

let us write (nothing, but really nothing, prevents us from doing this):
\[ X^3+pX+q=0 \tag{1} \]
where (1) is solved if and only if $X$ is known, and where $p, q$ are in any case determined by the original coefficients.

The polynomial\footnote{The first time I had to solve such a polynomial was to calculate the nutation of a gyroscope, and the second time was for the calculation of the horizon of a Black Hole based on the Schwarzschild metric with cosmological constant, which in natural units was: $\dfrac{\Lambda}{3}r^3-r-2M=0$}:

being of odd degree, admits (as can be seen from any visual plot of such a polynomial with real coefficients) at least one real root, named the "\NewTerm{certain root}\index{certain root}", and at most three roots. The reader will easily check by himself, with a graphical representation of an odd degree polynomial, that this is trivial!

Now let us make another subtle change of variable (we have the right to do this):
\[ X=u+v \]
by imposing the condition that $u, v$ must be such that $3uv=-p$ (nothing prevents us from imposing such a constraint); then we have:

Therefore we have:

We can very well make an analogy between the two equations (1') and (2') and the Vieta relations we obtained for the polynomial of degree 2, which we recall were:
\[ r_1+r_2=-\frac{b}{a}, \qquad r_1r_2=\frac{c}{a} \]
except that we now have (we adopt another notation for these intermediate roots):

which gives us for the polynomial $P$, by imposing (always by analogy) $a=1$, a new equation:
\[ Z^2+qZ-\frac{p^3}{27}=0 \]
for which $z_1,z_2$ are the roots.
The latter equation has for discriminant:
\[ \Delta=q^2+\frac{4p^3}{27} \]
Let us now take it scenario by scenario:
\begin{enumerate}
	\item If $\Delta >0$, the equation in $Z$ admits two solutions $z_1,z_2$ whose sum will give us indirectly the value of $X$, since by definition $X=u+v$, $z_1=u^3$ and $z_2=v^3$.
We see that we have all the ingredients to find the first root of the original equation, which will be the certain root (or "\NewTerm{certain zero}\index{certain zero}"). So:
	
	Since $\Delta>0$ and the outer roots are cube roots, we necessarily have $X_1\in \mathbb{R}$ if all the coefficients of the equation are indeed in $\mathbb{R}$.
	
	\item If $\Delta=0$, as we know, the equation in $Z$ admits a double root, and since in the discriminant $q$ appears squared, this necessarily means that $p$ is negative.

	The polynomial $P$ therefore also has a double root, and the same holds for the original equation. We also saw that for a second degree polynomial, if the discriminant is zero, the roots are:
	
	Then by analogy:
	
	
	\item If $\Delta<0$ we must again use complex numbers, as we did in our study of the polynomial of degree $2$. Thus, we know that the equation in $Z$ admits two complex solutions such that:
	
	and once again, as the roots are conjugate, we can write in condensed form:
	
	and as:
	
	we therefore have:
	
	As $u_k,v_k$ are conjugate, we necessarily have $X_k\in \mathbb{R}$.
\end{enumerate}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Consider the following equation:

We therefore have:

and therefore:

We therefore have:

\end{tcolorbox}
The polynomials of degree three are therefore indeed solvable by radicals.

\subsubsection{Fourth order univariate Polynomial and Equations}
The univariate polynomial equation to solve here is:
\[ ax^4+bx^3+cx^2+dx+e=0 \]
with $a\neq 0$.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We owe the method of resolution of the $4$th degree polynomial to the Italian mathematician Ludovico Ferrari, also of the 16th century.
\end{tcolorbox}
If we divide by $a$ we have:

And by putting:
\[ x=y-\frac{b}{4a} \]
the equation will be reduced to:

where we see that the coefficient in front of $y^3$ vanishes. Thus, any polynomial of the type:

can be written in the following form:
\[ x^4+c''x^2+d''x+e''=0 \]
By putting:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If $d''=0$ the equation to solve is in reality a "\NewTerm{bisquare equation}\index{bisquare equation}". The change of variable $X=y^2$ then reduces the problem to a polynomial equation of the second degree (which we know how to solve easily).
\end{tcolorbox}
We now introduce a parameter $t$ (that we will choose wisely afterwards) and rewrite the polynomial equation as follows:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If the reader expands and distributes all the terms of the previous relation, he will obviously fall back on $x^4+c''x^2+d''x+e''=0$.
\end{tcolorbox}
The underlying idea is to try to ensure that the bracketed part of the previous expression can be written as a square:

because in this case, using:

our polynomial equation can then be written:

and we would only have to solve two polynomial equations of the second degree (which we already know how to do).

But for us to be able to write:

the expression of second degree on the left of the equality must have only one root.
But we saw, in our study of polynomial equations of the second degree, that this means the discriminant is zero:

and that the root is then given by:

which corresponds in our case to:

and therefore to:

with:

So finally, if $t$ is such that $4(2t-c'')(t^2-e'')={d''}^2$, then we have:

since the fundamental polynomial theorem gives us, for a polynomial of the second degree with zero discriminant, a unique root:

To conclude, it is enough to find a number $t$ satisfying the following relation:

which is a degree $3$ polynomial equation that we already know how to solve using the Cardano method.

No general method of this kind exists for polynomials of degree greater than or equal to $5$, as we will prove during our study of Galois Theory (\SeeChapter{see section Set Algebra}).

\subsubsection{Trigonometric Polynomials}
\textbf{Definition (\#\mydef):} We name "\NewTerm{trigonometric polynomial of degree $N$}\index{trigonometric polynomial}" any finite sum of the type:
\[ \sum_{n=-N}^{N}c_n e^{\mathrm{i}nx} \]
where $c_n\in \mathbb{C}$.

A trigonometric polynomial can also be written using the usual trigonometric functions, with the following changes:

either by using Euler's formula (\SeeChapter{see section Numbers}):
\[ e^{\mathrm{i}x}=\cos x+\mathrm{i}\sin x \]
which we can also rewrite as:

By putting:

it comes:

We will discuss extensively, in the section Sequences and Series, how to use these polynomials in the context of the study of Fourier series.

\subsubsection{Cyclotomic Polynomials}
If $n$ is an integer (belonging to $\mathbb{N}$) and $x$ a complex number (belonging to $\mathbb{C}$), we name "\NewTerm{cyclotomic polynomial}\index{cyclotomotic polynomial}", traditionally denoted by $\Phi_n$, the product of all the monomials:
\[ (x-\alpha) \]
where $\alpha$ runs over the primitive $n$-th roots of unity in $\mathbb{C}$. In other words:

To recall: an $n$-th root of unity (sometimes named a "\NewTerm{Moivre number}\index{Moivre number}") is a complex number whose $n$-th power is equal to $1$.

Thus, the set of all $n$-th roots of unity is given by:
\[ G_n=\{z\in\mathbb{C} \mid z^n=1\} \]
which is a cyclic group (see the section Set Theory and also the section Set Algebra).

We then name "\NewTerm{$n$-th primitive root of unity}\index{primitive root of unity}" any element of this group generating it.

The elements of $G_n$ are of the type:
\[ e^{\frac{2\mathrm{i}\pi k}{n}} \]
with $k\in \mathbb{Z}$. We then write the set $G_n$ in the form:

A small example related to cyclotomic polynomials (more examples will be given below):
\[ x^4-1=(x-1)(x+1)(x-\mathrm{i})(x+\mathrm{i}) \]
with the roots:
\[ 1,\; -1,\; \mathrm{i},\; -\mathrm{i} \]
which are therefore the $4$th roots of unity (in other words, each of these numbers raised to the power $4$ is equal to $1$).
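A quick numerical check of these notions (a minimal Python sketch; \texttt{cmath} is the standard library and \texttt{cyclotomic\_poly} is a real SymPy function) lists the $4$th roots of unity and the cyclotomic polynomial $\Phi_4$:

\begin{verbatim}
# The 4th roots of unity e^(2*i*pi*k/4) and the cyclotomic polynomial Phi_4
import cmath
from sympy import cyclotomic_poly, symbols

roots = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]
print([complex(round(z.real, 12), round(z.imag, 12)) for z in roots])
# [(1+0j), 1j, (-1+0j), -1j]: each raised to the power 4 gives 1

x = symbols('x')
print(cyclotomic_poly(4, x))   # x**2 + 1, whose roots i and -i are the
                               # primitive 4th roots of unity
\end{verbatim}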
\subsubsection{Cyclotomic Polynomials}
If $n$ is an integer (belongs to $\mathbb{N}$) and $x$ a complex number (belongs to $\mathbb{C}$), we name "\NewTerm{cyclotomic polynomial}\index{cyclotomic polynomial}" the polynomial, traditionally denoted by $\Phi_n$, that we define as being the product of all monomials:

where $\alpha$ runs over the primitive $n$-th roots of unity. In other words:

To recall, an $n$-th root of unity (sometimes named a "\NewTerm{Moivre number}\index{Moivre number}") is a complex number whose $n$-th power is equal to $1$.

Thus, the set of all $n$-th roots of unity is given by:

which is a cyclic group (see the section Set Theory and also the section Set Algebra).

Then we name "\NewTerm{$n$-th primitive root of unity}\index{primitive root of unity}" any element of this group that generates it.

The elements of $G_n$ are of the type:

with $k\in \mathbb{Z}$. We then write the set $G_n$ in the form:

A small example of a cyclotomic polynomial (more examples will be given below):

with:

which are therefore the $4$th roots of unity (in other words: each of these numbers raised to the power $4$ is equal to $1$). They form the group $G_4$, and this group can be generated only by $\mathrm{i}$ and $-\mathrm{i}$ (group generators, according to what was seen in the section Set Theory).

So a cyclotomic polynomial is the product of factors that is written:

with $k\in \{0,\ldots,n-1\}$ coprime with $n$ (so that the corresponding root is primitive).

We will see with the examples below that if $n$ is even then:

and if $n$ is odd:


\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
For $n$ up to $30$, the cyclotomic polynomials are:

\end{tcolorbox}
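\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
As a quick complement to the examples above (these are standard facts), the first cyclotomic polynomials are:
\[
\Phi_1(x)=x-1,\qquad \Phi_2(x)=x+1,\qquad \Phi_4(x)=x^2+1
\]
and, more generally:
\[
x^n-1=\prod_{d\mid n}\Phi_d(x)
\]
For $n=4$ this gives $x^4-1=\Phi_1(x)\,\Phi_2(x)\,\Phi_4(x)=(x-1)(x+1)(x^2+1)$, in agreement with the $4$th roots of unity $\{1,-1,\mathrm{i},-\mathrm{i}\}$ mentioned above.
\end{tcolorbox}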
\subsubsection{Legendre Polynomials}
\textbf{Definition (\#\mydef):} The Legendre polynomials are defined by (it is strongly recommended to read the sections of Differential and Integral Calculus and also of Functional Analysis before continuing):
\[
P_n(x)=\frac{(-1)^n}{2^n\,n!}\,\frac{\mathrm{d}^n}{\mathrm{d}x^n}\Big[(1-x^2)^n\Big]
\]
where $P_n$ is therefore a polynomial of degree $n$. We will see these polynomials again in the resolution of differential equations in physics (heat propagation, quantum physics, quantum chemistry, etc.). In most books the Legendre polynomials are written in the following equivalent form:
\[
P_n(x)=\frac{1}{2^n\,n!}\,\frac{\mathrm{d}^n}{\mathrm{d}x^n}\Big[(x^2-1)^n\Big]
\]
We will focus here only and uniquely on the properties that are actually used in the other physics sections of this book!

Let us prove that, with the following definition of the functional scalar product (see the sections Functional Analysis and Vector Calculus), the Legendre polynomials are orthogonal. This is a very important property for our study of Quantum Chemistry!

\begin{dem}
Let $P$ be a polynomial of degree $\leq n-1$. It suffices to prove that $\langle P_n | P \rangle =0$, that is to say that $P_n$ is orthogonal to the space of polynomials of degree less than $n$. Indeed, we have:

integrating by parts we get:

Caution!!! In the boundary term above, only the factor $(1-x^2)^n$ is derived! Since this factor has zeros of order $n$ at $x=\pm 1$, all its derivatives up to order $n-1$ still vanish at $x=\pm 1$, which justifies that this term is equal to zero.

Continuing in this way we get, after $n$ integrations by parts:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The remaining integral is zero, since the $n$-th derivative of $P$, a polynomial of degree at most $n-1$, is zero.
\end{tcolorbox}
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
Here are some useful properties of the Legendre polynomials for the section of Quantum Chemistry:
\begin{enumerate}
\item[P1.] We have $P_n(1)=1$:
\begin{dem}

and using the Leibniz Formula (\SeeChapter{see section Differential and Integral Calculus}) we have:

Therefore:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}

\item[P2.] We have $P_n(-x)=P_n(x)$ if $n$ is even:
\begin{dem}
If $n$ is even:

is an even function and therefore:

is even.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}

\item[P3.] We have $P_n(-x)=-P_n(x)$ if $n$ is odd:
\begin{dem}
If $n$ is odd:

is an odd function and therefore:

is odd.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
\end{enumerate}
\begin{theorem}
We will now prove the validity of the following recurrence relation for the $P_n$ (a relation that we use in physics):

for $n \geq 1$.
\end{theorem}
\begin{dem}
$xP_n(x)$ is a polynomial of degree $n+1$; there exist therefore $a_j\in \mathbb{R}$ such that this polynomial can be expressed as a linear combination of the family of polynomials constituting the orthonormal basis (the basis that enables us to generate/build $xP_n(x)$):

Therefore we can write:

but if we choose $k\leq n-2$ (because $xP_k$ is then of degree at most $n-1$):

Therefore:

that is to say that $a_k=0$. Then it follows:

By the properties of the Legendre polynomials proved previously, we can write the equalities:

and:

hence:

The dominant coefficient of $P_n$, denoted by $\text{dom}(P_n(x))$, is defined (for recall) as the coefficient of the monomial of the highest degree. Thus, it is given by:

Therefore:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The reader will verify if needed for a given $n$ that:

\end{tcolorbox}
The relation:

we got earlier imposes that the dominant coefficient of the polynomial of the linear combination be equal to the dominant coefficient of the polynomial $xP_n$ (we have eliminated the $(-1)^n$, which simplifies):

After simplification we get:

and which finally easily gives:

The relation:

becomes therefore:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
The first Legendre polynomials are (as the reader can check from the definition above):
\[
P_0(x)=1,\quad P_1(x)=x,\quad P_2(x)=\frac{3x^2-1}{2},\quad P_3(x)=\frac{5x^3-3x}{2},
\]
\[
P_4(x)=\frac{35x^4-30x^2+3}{8},\quad P_5(x)=\frac{63x^5-70x^3+15x}{8}
\]

The graphs of these polynomials (up to $n = 5$) are shown below:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/legendre_polynomials.jpg}
\caption{First five Legendre Polynomials (source: Wikipedia)}
\end{figure}


\begin{flushright}
\begin{tabular}{l c}
\circled{90} & \pbox{20cm}{\score{3}{5} \\ {\tiny 70 votes,  56.29\%}} 
\end{tabular} 
\end{flushright}

%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Set Algebra}
\lettrine[lines=4]{\color{BrickRed}W}e will approach in this book the study of set structures very pragmatically (since you must remember that this book is dedicated to engineers). Thus, only a minimum of formalism will be used, and only the proofs of the elements that we consider absolutely essential to the engineer will be presented here. Moreover, numerous proofs will be made by example, and we will focus largely on the algebraic theory of groups, as it has a prominent place in physics, almost more than any other set structure.

\subsection{Groups Algebra and Geometry}
The symmetries of geometric figures, of crystals and of all the other items of macroscopic physics have been the subject of observations and studies for centuries.
In modern terms, the symmetries of a given object form a "group".

Since the mid-19th century, group theory has undergone a huge expansion, and its applications to quantum mechanics and to the theory of elementary particles have developed throughout the 20th century.

In a letter of 1877 to the mathematician Adolf Meyer, Sophus Lie wrote that he created the theory of groups in January 1873. These are the groups he named "\NewTerm{continuous groups}\index{continuous groups}" and that are named today "\NewTerm{Lie groups}\index{Lie groups}". Lie sought to expand the use of groups from the domain of algebraic equations, where Galois had introduced them, to differential equations.

Since 1871, the notion of the infinitesimal generator of a one-parameter group of transformations appeared in his work. It is the set of infinitesimal generators of the one-parameter subgroups of a continuous group that forms what we name today a "\NewTerm{Lie algebra}\index{Lie algebra}".

It was Weyl and Wigner who showed the preeminent role of the theory of groups and of their representations, in particular in the new quantum mechanics that Heisenberg and Dirac were developing. The general idea of representation theory is to try to study a group by making it act on a vector space in a linear way: we try to see the group as a group of matrices (hence the term "representation"). We can then, from the relatively well-known properties of the automorphism group of the vector space (\SeeChapter{see section Set Theory}), deduce some group properties of major interest.

We can consider the theory of group representations as a vast generalization of Fourier analysis. Its development is continuous and has had, since the mid-20th century, countless applications in differential geometry, ergodic theory, probability theory, number theory, the theory of automorphic forms, that of dynamical systems, as well as in physics, chemistry, molecular biology and signal processing. Currently, entire branches of mathematics and physics depend on it.

Before we begin, we refer the reader to the section on Set Theory to get a refresh on the structure and fundamental properties that make up a Group, and also to the section of Linear Algebra (because we use some results proved in it).

\subsubsection{Cyclic Groups}
The cyclic group (whose definition has already been given in the section of Set Theory) will serve us as a basis for the study of finite groups. Moreover, rather than making generalized expansions, we preferred to take specific examples to present the idea of a cyclic group (more suitable for an engineering approach).

We will then take the very nice example of the hours of a clock... with three different approaches that will successively (!) and simply address the concept of a cyclic group.
\begin{enumerate}
\item First approach:

Let us imagine a clock with a needle which can take $12$ possible positions (but no intermediate positions).
We will denote in a special way the $12$ possible positions: $\overline{0},\overline{1},\overline{2},...,\overline{11}$ (the line above the numbers is not innocent!).

Nothing prevents us from defining an addition on all of these positions, for example:
\[
\overline{7}+\overline{8}=\overline{3}
\]
which is similar to the results we get when, in our daily life, we do calculations manipulating time.

\item Second approach (first extension):

If we observe a Watch or a Clock well, we notice that every time we add $12$ (or subtract $12$...) to an hour value of our Watch/Clock, we fall back on the same well-defined set of numbers, which are also in $\mathbb{Z}$. Therefore (obviously, as far as a Watch/Clock is concerned, only the first positive values are of interest to us most of the time, but here we do math so we generalize a bit...):

Here we fall back on a concept we have already seen in the section of Number Theory. These are congruence classes, and all of these classes form the quotient set $\mathbb{Z}/12\mathbb{Z}$. If we endow this quotient set with an addition law, it is normally easy to observe that it is an internal law of the quotient set, that it is associative, that there exists a neutral element and that every element has an inverse (opposite).

Thus, this quotient set equipped only with the addition law (if we add the multiplication we can form a ring) is a commutative group.

\item Third approach (second and last extension):

Let us see a third and final approach, which shows why the quotient group is cyclic.

If we project the rotation of the hands of our Watch/Clock (all rotations in set algebra are traditionally clockwise!) into $\mathbb{C}$ and define:

We then have $x^{12}=x^0=1$ and:

which is why the quotient group $(\mathbb{Z}/12\mathbb{Z},+)$ is named a "\NewTerm{cyclic group}\index{cyclic group}" (by group isomorphism, according to what has been seen in the section Set Theory). Its isomorphic image is denoted by $C_{12}$ and all its elements are of modulus $1$. It is common to denote any complex number of modulus $1$ as follows:

If we represent in $\mathbb{C}$ the isomorphic set $C_{12}$ we then get on the unit circle a polygon with $12$ vertices, as shown in the figure below:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/c_12_cyclic_group.jpg}
\caption{$C_{12}$ Cyclic Group}
\end{figure}
Furthermore, the number of elements of $\mathbb{Z}/12\mathbb{Z}$ being finite, $(\mathbb{Z}/12\mathbb{Z},+)$ is a finite group, unlike the group $(\mathbb{Z},+)$ which is itself a discrete infinite group.

This concept of finiteness is perhaps most obvious with the example that we will do afterwards with $\mathbb{Z}/4\mathbb{Z}$, where the reader will observe that this set has the same number of elements as $C_4$ (a small worked summary of this clock example is given just after this list).
\end{enumerate}
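\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
To fix ideas, here is a minimal worked summary of the correspondence described in this list (our notation; we assume the projection of the third approach has the form $x^k$ for the position $\overline{k}$, with $x$ on the unit circle):
\[
\overline{9}+\overline{5}=\overline{14\bmod 12}=\overline{2}
\qquad\text{and}\qquad
x^{9}\,x^{5}=x^{14}=x^{12}\,x^{2}=1\cdot x^{2}=x^{2}
\]
so the addition of hours and the multiplication of the corresponding complex numbers follow exactly the same table: this is the group isomorphism between $(\mathbb{Z}/12\mathbb{Z},+)$ and $C_{12}$.
\end{tcolorbox}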
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Mathematicians name $C_n$ the "group of $n$-th roots of unity". An $n$-th root of unity (sometimes named a "\NewTerm{Moivre's number}\index{Moivre's number}") is a complex number whose $n$-th power is equal to $1$. Moreover, for a given integer $n$, all $n$-th roots of unity are located on the unit circle and are the vertices of a regular $n$-sided polygon having a vertex of affix $1$.
\end{tcolorbox}
What particularly interests physicists at first are the representations of finite groups (and also of the continuous groups that we will see later). Thus, the representative of $\mathbb{Z}/n\mathbb{Z}$ is known to us: it is the rotation in the complex plane, given, as we have shown during our study of complex numbers in the section Numbers, by:

with $k\in[0,n-1]$. This representative is a subgroup of the group of rotations $\text{O}(2)$ which will be discussed further. The group of rotations of the plane is itself a subgroup of the linear group $\text{GL}(2)$ (we will give a precise definition and an example further below).

In fact, mathematicians are able to prove that all quotient groups $\mathbb{Z}/n\mathbb{Z}$ are cyclic, up to an isomorphism with $C_n$, and they then say that $\mathbb{Z}/n\mathbb{Z}$ is a finite quotient of the monogenic group $\mathbb{Z}$...

This approach is perhaps a bit abstract for the Padawan... So, if the reader remembers the section Set Theory, we saw a very precise definition of what the cyclicity of a group is: a group $G$ is said to be cyclic if $G$ is generated by the powers of at least one of its elements $a\in G$, named the "generator", such that:

Let us check whether this is the case for the group:

which is a textbook case.

We will denote the elements that make up this group:

This being done, we should be careful: in the set definition of a cyclic group we speak of "powers" if the internal law of the group is multiplication, but if the internal law is addition, then we have:

The first generator of the group $G=\left\lbrace \mathbb{Z}/4\mathbb{Z},+ \right\rbrace$ is $1$. Indeed:
\[
\overline{1}=\overline{1},\quad \overline{1}+\overline{1}=\overline{2},\quad \overline{1}+\overline{1}+\overline{1}=\overline{3},\quad \overline{1}+\overline{1}+\overline{1}+\overline{1}=\overline{0}
\]
The second generating element of the same group is $3$:
\[
\overline{3}=\overline{3},\quad \overline{3}+\overline{3}=\overline{2},\quad \overline{3}+\overline{3}+\overline{3}=\overline{1},\quad \overline{3}+\overline{3}+\overline{3}+\overline{3}=\overline{0}
\]
On the other hand, the reader can check that $2$ is not a generator of this group!

In fact, regarding the groups $G=\left\lbrace \mathbb{Z}/n\mathbb{Z},+ \right\rbrace$, mathematicians are able to prove that only the elements that are coprime with $n$ are generators (that is to say, the elements whose greatest common divisor with $n$ is $1$); see the small check below.
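\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
As a small check of this last statement (our example): in $\mathbb{Z}/12\mathbb{Z}$, the clock group seen above, the generators are exactly the classes coprime with $12$:
\[
\overline{1},\ \overline{5},\ \overline{7},\ \overline{11}
\]
while, for instance, $\overline{2}$ only generates the even hours $\{\overline{0},\overline{2},\overline{4},\overline{6},\overline{8},\overline{10}\}$.
\end{tcolorbox}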
That's all for our introduction to cyclic groups for engineers. Let us now turn to another group category.

\subsubsection{Transformations Groups}
The transformation groups, which include the rotation groups, are the ones that most interest physicists, especially in the areas of continuum mechanics, chemistry, quantum physics and art... Mathematicians obviously appreciate the study of rotation groups in the context of geometry (but not only!), and computer scientists equally appreciate linear groups. We have also seen an example of a rotation group just before.

\textbf{Definition (\#\mydef):} We name "\NewTerm{linear group of order $n$}\index{linear group of order $n$}" and we denote by $\text{GL}(n)$ the invertible $n\times n$ matrices, also named "\NewTerm{regular matrices}\index{regular matrices}" (i.e. their determinant is not zero, according to what we saw in the section Linear Algebra), whose components are in any field or ring $K$ (the field $\mathbb{R}$ or the field $\mathbb{C}$ most of the time):

The group is so named because the columns of an invertible matrix are linearly independent.

We will consider here as obvious that $\text{GL}(n)$ is a group: the matrix multiplication is associative and each matrix of $\text{GL}(n)$ has an inverse by definition (as the determinant is not null). On the other hand, the product of two regular matrices gives a regular matrix that is again invertible.

A simple and important example of a linear group is the sub-"\NewTerm{group of affine transformations}\index{group of affine transformations}" of the plane, which is traditionally noted (this is intuitive):

with $a,b,c,d,\alpha,\beta\in \mathbb{R},ad-bc\neq 0$ (we will see why and how this latter condition arises a little further below).

\textbf{Definition (\#\mydef):} The "\NewTerm{affine group}\index{affine group}" or "\NewTerm{general affine group}\index{general affine group}" of any affine space $A$ over a field $K$ is the group, denoted $\text{Aff}(A)$ or $\text{Aff}(n,K)$, of all invertible affine transformations from the space into itself. It is a "\NewTerm{Lie group}\index{Lie group}" if $K$ is the real or complex field or the quaternions.

Let us take a small practical example:

which, applied to a circle, gives:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/affine_group_transformation_example.jpg}
\caption{Simple affine transformations on a circle}
\end{figure}
This transformation is a way to define the ellipses as images of a circle by an affine transformation.

The coefficients $\alpha,\beta$ are irrelevant for the shape of the image figure. In fact, they obviously only induce translations of the figures. So we can do without them if we only seek to distort the original figure.

Therefore it remains:

which can be written in matrix form:

The affine transformation is therefore reduced to the matrix:

and as we have seen in the section of Linear Algebra, matrix multiplication is associative but is not commutative, so the composition of such transformations is not either.

The neutral element is the matrix:

and the inverse of $F$ is:

and as we have imposed $ad-bc\neq 0$, every element has an inverse. Thus, this linear affine group is not commutative but... is indeed a group... (a small numerical illustration is given below).
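\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A minimal numerical illustration of the previous paragraphs (our example, with $\alpha=\beta=0$): take the matrix
\[
F=\begin{pmatrix}2&0\\0&1\end{pmatrix},\qquad \det F=2\neq 0
\]
It sends a point $(x,y)$ of the unit circle $x^2+y^2=1$ to $(x',y')=(2x,y)$, whose image therefore satisfies $\left(\frac{x'}{2}\right)^2+{y'}^2=1$: an ellipse, as in the figure above. Its inverse is simply $F^{-1}=\begin{pmatrix}1/2&0\\0&1\end{pmatrix}$.
\end{tcolorbox}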
As we will see, all the "classic" Lie groups are subgroups of $\text{GL}(n)$.

\textbf{Definition (\#\mydef):} We name "\NewTerm{special linear group of order $n$}\index{special linear group of order $n$}" and denote by $\text{SL}(n)$ the invertible (square $n\times n$) matrices with coefficients in an arbitrary field and with determinant equal to unity:

This is obviously a subgroup of $\text{GL}(n)$.

Returning to the previous example and remembering that the determinant of a square two-dimensional matrix is (\SeeChapter{see section Linear Algebra}):

we notice geometrically what it means to have a unit determinant in this case! Indeed, we will see in the section of Linear Algebra, during our geometric interpretation of the determinant, that the determinant corresponds to a surface. Thus, having $ad-bc$ equal to unity allows us, regardless of the order of the transformations, to keep the area equal to $1$. Thus, the special linear group is a group of transformations that preserves areas.

\textbf{Definition (\#\mydef):} We name "\NewTerm{orthogonal real group of order $n$}\index{orthogonal real group of order $n$}" and denote by $\text{O}(n)$ the invertible orthogonal (square $n\times n$) matrices (see the section Linear Algebra for a refresh on what orthogonal matrices are):

Furthermore, we proved in the section Linear Algebra, in our study of rotation matrices, that $A^TA=I_n$ implies that $\det(A)=\pm 1$.

This is the case for example of the $\text{O}(2)$ matrix seen previously (it belongs to the orthogonal group but also to the group of rotations that we will see further below):

which is orthogonal, as it is easy to check (just multiply it by its transpose to see that you get the identity matrix; see the small check below).
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
$\text{O}(1)$ is made up of the trivial matrices $[1]$ and $[-1]$, which are simply one-component vectors or just... simple scalars.
\end{tcolorbox}
\textbf{Definition (\#\mydef):} If $A\in \text{O}(n)$ and we have $\det(A)=1$, then we get a subgroup of $\text{O}(n)$ named the "\NewTerm{special real orthogonal group of order $n$}\index{special real orthogonal group of order $n$}", therefore defined by:

The rotation matrix given previously is part of this group since its determinant is equal to unity! Furthermore, this group occupies a very special place in physics and we will meet it again during our study of quantum physics.

The subgroup $\text{SO}(2)$, also sometimes named the "\NewTerm{circle group}\index{circle group}" and denoted $S^1$, which we had also studied in the section of Euclidean Geometry, has a representative given by the matrix:

and occupies a special place in the family of the $\text{SO}(n)$ groups with $n$ greater than unity. Indeed, it is the only one that is commutative. Moreover, it is isomorphic to the $e^{\mathrm{i}\theta}$, that is to $\text{U}(1)$, the multiplicative group of complex numbers of modulus $1$. This is also the proper symmetry group of a circle and the continuous equivalent of $C_n$.
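\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For the reader who wants to do the small check just mentioned, with the standard rotation matrix (the representative alluded to above):
\[
R(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix},\qquad
R(\theta)^TR(\theta)=\begin{pmatrix}\cos^2\theta+\sin^2\theta&0\\0&\cos^2\theta+\sin^2\theta\end{pmatrix}=I_2
\]
and $\det R(\theta)=\cos^2\theta+\sin^2\theta=1$, so $R(\theta)$ indeed belongs to $\text{SO}(2)$.
\end{tcolorbox}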
The subgroup $\text{SO}(3)$ is given by the matrix (\SeeChapter{see section Euclidean Geometry}):

for the rotation around the $x$-axis. In the three-dimensional space this group is not commutative (the rotation matrices in the plane being commutative, for recall!). Moreover, the quaternions that we have seen in the section Numbers, whose representative is therefore $\text{SO}(3)$, also form a non-commutative group (relatively to the multiplication law), as we have seen in the section Numbers.

By following the tip of a unit vector we can, relatively easily visually speaking, see that $\text{SO}(3)$ is a closed subgroup of $\text{GL}(3)$, that is to say, of the set of linear groups of dimension $3$.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
$\text{SO}(1)$ consists of the matrix $[1]$.
\end{tcolorbox}
\textbf{Definition (\#\mydef):} We name "\NewTerm{unitary group of order $n$}\index{unitary group of order $n$}" and we denote by $\text{U}(n)$ the matrices whose components are complex (most often, as far as this book is concerned) or real and which are unitary:

Notice also that any unitary matrix with complex components and of dimension one (thus belonging to $\text{U}(1)$...) is a complex number of unit modulus, which can always be written in the form $e^{\mathrm{i}\mathbb{R}}$.

We have already seen an example in this book during our study of spinors (see section of the same name). These are the Pauli matrices (used in the section of Relativistic Quantum Physics) given by:
\[
\sigma_x=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
\sigma_y=\begin{pmatrix}0&-\mathrm{i}\\ \mathrm{i}&0\end{pmatrix},\qquad
\sigma_z=\begin{pmatrix}1&0\\0&-1\end{pmatrix}
\]
\textbf{Definition (\#\mydef):} We name "\NewTerm{special unitary group of order $n$}\index{special unitary group of order $n$}" and we denote by $\text{SU}(n)$ the matrices whose entries are complex, which are unitary and whose determinant is unity:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
$\text{U}(1)$ is equal to $\text{SU}(1)$ and it is therefore the complex unit circle, equal to $e^{\mathrm{i}\mathbb{R}}$. Moreover, $\text{SO}(2)$ is commutative and isomorphic to $\text{U}(1)$ because it is the set of the rotations of the plane.
\end{tcolorbox}
A well-known example is again that of the Pauli matrices, but simply written in the form used in Relativistic Quantum Physics (see section of the same name):

which are part of $\text{SU}(2)$ and, as we have shown (implicitly) at the beginning of the section of Spinor Calculus, related to the quaternions of modulus $1$ and to the group $\text{SO}(3)$ through what mathematicians name, in the present situation, a "covering homomorphism"...
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The special unitary group has a particular importance in particle physics. If the unitary group $\text{U}(1)$ is the electromagnetic gauge group (think of the complex number appearing in the solutions of the wave equation!), $\text{SU}(2)$ is the group associated with the weak interaction, and $\text{SU}(3)$ the one of the strong interaction. It is for example due to the structure of representations of $\text{SU}(3)$ that Gell-Mann conjectured the existence of quarks!
\end{tcolorbox}
Let us see, with an approach different from that used in the section of Spinor Calculus, how to show that the Pauli matrices are the basis of $\text{SU}(2)$.

First, the reader has to know that we prove in the section of Spinor Calculus that any rotation in the space of three dimensions can be expressed using the relation (for small angles!):

And we saw in the section of Quantum Computing that an explicit decomposed formulation of the previous relation was:

and therefore that every $\text{SU}(2)$ element is produced from these three matrices, each of which makes the tip of a vector describe a curve on the surface of a sphere!

Now, we notice that these three matrices, when $\theta=0$, are equal to:

We then obtain the identity matrix. So if we look at the tangents at this point we can build a basis on it ($3$ orthogonal vectors).

Let's look at this:

Thus, $\text{SU}(2)$ has for basis:

and these are, in other words, the infinitesimal generators of the group $\text{SU}(2)$. $\text{SU}(2)$ has therefore a basis that is a Lie algebra, according to the vocabulary of mathematicians.
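\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A minimal sketch of this tangent computation, assuming the standard exponential parametrization $U_k(\theta)=e^{-\mathrm{i}\theta\sigma_k/2}$ (the convention used in the section of Quantum Computing may differ by signs or factors):
\[
\frac{\mathrm{d}}{\mathrm{d}\theta}\,e^{-\mathrm{i}\theta\sigma_k/2}\Big|_{\theta=0}=-\frac{\mathrm{i}}{2}\,\sigma_k,\qquad k\in\{x,y,z\}
\]
so the three matrices $-\frac{\mathrm{i}}{2}\sigma_x,\,-\frac{\mathrm{i}}{2}\sigma_y,\,-\frac{\mathrm{i}}{2}\sigma_z$ span the tangent space at the identity: this is the Lie algebra $\mathfrak{su}(2)$, of which the Pauli matrices are the basis up to the factor $-\mathrm{i}/2$.
\end{tcolorbox}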
This result is quite remarkable... Since $\text{SU}(2)$ and $\text{SO}(3)$ are so closely related, we can get the basis of the Lie algebra of $\text{SO}(3)$ using the same method!!!

Let us see this! We proved in the section of Euclidean Geometry that the rotation matrices were given by (we change the $R$ to a $U$ so as not to confuse them with the previous matrices):

We notice again that at $\theta=\gamma=\phi=0$ the curve that is generated by the extremity of a vector under the three rotation matrices passes through:

Then, in the same manner as for $\text{SU}(2)$, we calculate the derivatives at these angles to determine the generating base matrices of $\text{SO}(3)$:

The Lie algebra of $\text{SO}(3)$ has therefore for basis:

In physics, we prefer to work with complex matrices. We then introduce the matrices:

Notice then that if we define:

we trivially have, for the complex conjugate of the transposed matrix:

and by the way... we also have the following non-commutation relations (which we can develop on request):

and also the relation of commutation:

which the Pauli matrices also satisfy, and... for recall (or for information for those who have not yet read the section of Wave Quantum Physics), the $J_i$ are the operators of the total angular momentum of the spin-orbit coupling system!!!
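\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For the reader's convenience, one standard choice for the basis just mentioned (consistent with rotations about the $x$, $y$ and $z$ axes; sign conventions may differ from those of the section Euclidean Geometry):
\[
L_x=\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\quad
L_y=\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\quad
L_z=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix}
\]
with the commutation relations:
\[
[L_x,L_y]=L_z,\qquad [L_y,L_z]=L_x,\qquad [L_z,L_x]=L_y
\]
\end{tcolorbox}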
Most of the groups we have seen so far can be summarized in the following figure:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/special_linear_group.jpg}
\end{figure}
The rotations with the quaternions indicated in the figure above are studied in the section Numbers of the chapter Arithmetic.

\pagebreak
\subsubsection{Groups of Symmetries}
The symmetry group of an object denoted $X$ (image, signal, etc., in 1D, 2D, 3D or other) is the group of all isometries (an isometry is a transformation that preserves length) under which it is invariant, with composition as the operation.

Any group of symmetries whose elements have a common fixed point, which is true for all symmetry groups of bounded figures, can be represented as a subgroup of the orthogonal group $\text{O}(n)$ by choosing the origin as the fixed point. The proper symmetry group is a subgroup of the special orthogonal group $\text{SO}(n)$, and hence it is also named the "rotations group" of the figure.

In what follows, we will interpret the composition of two symmetry or rotation operations as a multiplication, as we also do for permutations.

Let us first see two fundamental definitions!
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] The "\NewTerm{group of symmetries}\index{group of symmetries}", also named "\NewTerm{group of invariants}\index{group of invariants}" of $X$, is the set of symmetries of $X$, endowed with the multiplication structure given by the composition, that leaves $X$ invariant.

\item[D2.] The "\NewTerm{order}\index{order of a group}" of a group is the total number of all its symmetries (including the identity!).
\end{enumerate}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. The heart (...):
\begin{figure}[H]
\centering
\includegraphics{img/algebra/symmetries_heart.jpg}
\end{figure}
has a group of symmetries of $2$ elements, namely the identity application $\text{id}$ and the application $r_v$ that is the reflection with respect to the vertical axis (sub-group of symmetries with $1$ item). We also observe that the symmetry satisfies the relation:

\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
E2. The letter phi (...):
\begin{center}
\[ \scalebox{8}{$\Phi$} \]
\end{center}
has a total symmetry group of $4$ elements, namely the identity application $\text{id}$, the two reflections $r_h$ and $r_v$, and the rotation of angle $\pi$ which we denote by $t_\pi$ (subgroup of rotations with $1$ item). This form thus has a group of symmetries of order $4$.\\

In this group we have:

(which is commutative!), and $t_{\pi}\circ t_{\pi}$ is the rotation of angle $2\pi$, which is the same application as the identity application, therefore $t_{\pi}\circ t_\pi=\text{id}$.\\

Thus, the symmetry group of this letter is indeed commutative and the composition law is internal. It is indeed a group.\\

E3. The regular pentagon:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/symmetries_pentagon.jpg}
\end{figure}
has a group of symmetries of $10$ elements, namely, more precisely, the $5$ rotations $\text{id},t_{2\pi/5},t_{4\pi/5},t_{6\pi/5},t_{8\pi/5}$ and also the $5$ reflections along the $5$ symmetry axes. Its subgroup of rotations, of order $5$, corresponds to the cyclic group $\mathbb{Z}/ 5\mathbb{Z}$.\\

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
More generally, the group of symmetries of a regular $n$-gon (where $n$ is odd) has exactly $2n$ elements. This group is named the "\NewTerm{dihedral group of order $n$}\index{dihedral group of order $n$}" and is most often denoted by $D_{2n}$ (be careful because some authors do not multiply by a factor of $2$, so that the index is then directly the order and not the number of elements).
\end{tcolorbox}
The pentagon therefore has $D_{10}$ for dihedral group, and $\mathbb{Z}/ 5\mathbb{Z}$ is one of its "\NewTerm{normal subgroups}\index{normal subgroup}" (we will come back to this notion of subgroup later below).
\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
E4. The dihedral group $D_6$, of order $6$, of the isometries of an equilateral triangle (regular polygon) has $6$ elements, which we will denote (so that the writing is less heavy):

where $\sigma_1,\sigma_2,\sigma_3$ are the reflections with respect to the three bisectors (respectively mediators). The composition table below of this dihedral group also shows that this group is non-commutative:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/equilateral_dihedral_group_representation.jpg}
\end{figure}

We will return to this example later, when we introduce, a little further below, the concept of a normal subgroup in our study of permutation groups.\\

E5. Let's look at one last example, applied to chemistry, by enumerating the symmetry operations that leave the ammonia molecule NH$_3$ (tetrahedron) invariant.
\begin{figure}[H]
\centering
\includegraphics{img/algebra/nh3.jpg}
\end{figure}
\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
The transformation group contains $6$ elements: the identity $\text{id}$, $C_3$ which is the rotation of $2\pi/3$, $C_3^2$ which is the rotation of $4\pi/3$ (which we will denote afterwards by $C_{-3}$), both along the $z$-axis (therefore perpendicular to the $xy$-plane...), and $3$ reflections $\sigma_1,\sigma_2,\sigma_3$, each with respect to an axis passing through a vertex of the base and the middle of the opposite edge, as shown in the figure below (pyramid viewed from above):
\begin{figure}[H]
\centering
\includegraphics{img/algebra/tetrahedral_group_representation.jpg}
\caption{Operations leaving a tetrahedron invariant}
\end{figure}
The combination of the various elements of symmetry shows that the composition table is (which proves that the law is internal and that we are effectively working in a group):

Pay attention to the order of operations in the above table: we first apply the line element and afterwards the column element!\\

We note that the group is not commutative.
\end{tcolorbox}

\pagebreak
\paragraph{Orbits and Stabilizers}\mbox{}\\\\
We will now see two definitions that we will meet again in crystallography (their name is not innocent!).

\textbf{Definition (\#\mydef):} The orbit of an element $x$ of $E$ is given by:

The orbit of $x$ is the set of all positions (in $E$) likely to be occupied by the image of $x$ under the action of $G$. The orbits obviously form a partition of $E$.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us consider a set $E$ on which a group $G$ will act, namely:

the set of all $6$ vertices of a hexagon, on which we let act the group $G=\{\text{id},t_{2\pi/3},t_{4\pi/3}\}$. We already see trivially that $G$ is a group!\\

But now let us consider an element of $E$, for example $S_0$.

Its orbit will be, by definition:
\[
\text{Orb}(S_0)=\{\text{id}(S_0),\,t_{2\pi/3}(S_0),\,t_{4\pi/3}(S_0)\}=\{S_0,S_2,S_4\}
\]
(with the vertices $S_0,\ldots,S_5$ numbered in the order of rotation).
\end{tcolorbox}
\textbf{Definition (\#\mydef):} The stabilizer of an element $x$ of $E$ is the set:

of all elements that leave $x$ invariant under their action. It is a subgroup of $G$.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
To continue with our previous example: the stabilizer of $S_0$ is reduced to:
\[
\text{Stab}(S_0)=\{\text{id}\}
\]
\end{tcolorbox}


\pagebreak
\subsubsection{Permutations Groups}
Symmetric groups have significant importance in some areas of Quantum Physics, but also in mathematics as part of Linear Algebra (for the theory of the determinant, as we will see in the corresponding section), in Galois Theory (see further below) and also in Error Correcting Codes (see the corresponding section for the example used with VISA credit cards). So they must also be paid special attention!

Let us first recall (\SeeChapter{see section Probabilities}) that in a set $\{1,...,n\}$ there are $n!$ possible permutations. Mathematicians say, rightly, that there are $n!$ bijections, and name this number the "\NewTerm{order of the permutations of the group}\index{order of the permutations of the group}".

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Given for example the set $\{1,2,3\}$.
This set has $3!$ possible permutations, which are denoted in the context of the permutation groups as follows:
\[
\text{id},\quad (1\;2),\quad (1\;3),\quad (2\;3),\quad (1\;2\;3),\quad (1\;3\;2)
\]
This must be read in order: the identity application id; $1$ goes to $2$ and $2$ to $1$ (in terms of positions!); $1$ goes to $3$ and $3$ to $1$; $2$ goes to $3$ and $3$ to $2$; $1$ goes to $2$, which goes to $3$, which goes to $1$; $1$ goes to $3$, which goes to $2$, which goes to $1$.

We can easily observe that the composition of two permutations is not commutative:

and that the composition of two permutations is an internal law:

with a neutral element that is indeed the identity id. So we do have a non-commutative group. Let us also recall to the reader that certain elements of the group, if well chosen, can form a subgroup. This is the case for example of:

which is a subgroup of $S_3$ (it should be easy for the reader to check that it has all the properties of a group).
\end{tcolorbox}

\pagebreak
\textbf{Definition (\#\mydef):} A subgroup $H$ of a group $G$ is named a "\NewTerm{normal subgroup}\index{normal subgroup}" (or "distinguished subgroup") if, for every $g$ of $G$ and every $h$ of $H$, we have that $ghg^{-1}$ is an element of $H$. Mathematicians name the map $h\mapsto ghg^{-1}$ an "\NewTerm{inner automorphism}\index{inner automorphism}"...

Let us first consider an interesting introductory geometric example, after which we will come back to this definition with $S_3$.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
We have seen earlier the elements of the dihedral symmetry group of the equilateral triangle. Geometrically they all correspond to displacements in the plane in which the triangle is located. For recall, we got the following table of compositions:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/equilateral_dihedral_group_representation.jpg}
\end{figure}

First, we easily see using this table that we have:
\begin{itemize}
\item The sub-group made of $\{\text{id}\}$ of order $1$

\item The sub-group made of $\{\text{id},t_{2\pi/3},t_{4\pi/3}\}$ of order $3$

\item The sub-group made of $\{\text{id},\sigma_1\}$ of order $2$

\item The sub-group made of $\{\text{id},\sigma_2\}$ of order $2$

\item The sub-group made of $\{\text{id},\sigma_3\}$ of order $2$
\end{itemize}
Among these five subgroups, let us see which ones are normal subgroups (this is relatively easy to see using the table of compositions above):
\begin{itemize}
\item The sub-group made of $\{\text{id}\}$

\item The sub-group made of $\{\text{id},t_{2\pi/3},t_{4\pi/3}\}$ 
\end{itemize}
\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
We will now see a remarkable thing! By numbering the vertices of the equilateral triangle with $1, 2$ and $3$ and taking the rotations clockwise, we can identify the elements of $D_6$ with the following elements of $S_3$:

and rebuild the same table of compositions (a copy of the previous one, but just with the notation change... hehe!):
\begin{figure}[H]
\centering
\includegraphics{img/algebra/equilateral_dihedral_group_representation.jpg}
\end{figure}

\end{tcolorbox}
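\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A small explicit computation of the non-commutativity observed above (our example, with the convention that the rightmost permutation is applied first):
\[
(1\;2)\circ(1\;3)=(1\;3\;2)\qquad\text{but}\qquad (1\;3)\circ(1\;2)=(1\;2\;3)
\]
Indeed, for $(1\;2)\circ(1\;3)$: $1\mapsto 3\mapsto 3$, $3\mapsto 1\mapsto 2$ and $2\mapsto 2\mapsto 1$, hence the cycle $(1\;3\;2)$; exchanging the order of the two factors gives the other $3$-cycle.
\end{tcolorbox}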
Well... this little interlude closed, let us return to the normal subgroups of $S_3$ (as this will be important for our introduction to Galois groups) and let us first recall that:

and we see that the normal subgroup consists of:

\textbf{Definition (\#\mydef):} For any subgroup $H$ stable under the inner automorphisms of a group $G$, we name the "\NewTerm{index of $H$ in $G$}" the quotient of the order of the group $G$ by the order of the subgroup $H$, and we denote it by $[G / H]$.

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The index of the subgroup $\{(1), (1\;2)\}$ in the group $S_3$ is $6/2$, that is to say $3$.
\end{tcolorbox}
This concept will be very helpful to us during our introduction to Galois Theory.

Let us now consider a particular permutation $\sigma$, to introduce the subject from a different but equivalent angle:

As we know, mathematicians are accustomed to write it, at first, in the form:

with:

Hence:

Given $\sigma$ and $\tau$, two permutations, it is natural to look at their composition $\tau\circ \sigma$ (recall that this means that we first apply $\sigma$ and afterwards $\tau$, as for the composition of functions).

Therefore, if:

Then:

and:

Now the idea is to interpret the composition as a multiplication of permutations. This multiplication is then non-commutative, as we have seen in the previous example. We have in general $\sigma \circ \tau \neq \tau \circ \sigma$.

Each bijection has an inverse (reciprocal function). In our example it is obviously:

Geometrically, to calculate the inverse $\sigma^{-1}$ of an element $\sigma$, we just need to take the reflection of the drawing of $\sigma$ in a horizontal axis, as shown in the left side of the figure below:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/permutations.jpg}
\caption{Examples of compositions and inverses of permutations}
\end{figure}
We may represent a permutation $\sigma$ as a permutation matrix, i.e., the matrix that maps:

An $n\times n$ matrix is a permutation matrix if and only if all its entries are zeros and ones, each column has exactly one $1$, and each row has exactly one $1$. In this instance, we have:
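\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Consistently with the example in $S_4$ used just below, where $\sigma=(1\;3\;4)$, i.e. $1\mapsto 3$, $2\mapsto 2$, $3\mapsto 4$, $4\mapsto 1$, the matrix would be (our reconstruction, under the convention $P_\sigma e_j=e_{\sigma(j)}$):
\[
P_\sigma=\begin{pmatrix}0&0&0&1\\0&1&0&0\\1&0&0&0\\0&0&1&0\end{pmatrix}
\]
where each column and each row indeed contains exactly one $1$.
\end{tcolorbox}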
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] The set of permutations of a set with $n$ elements, with this structure of multiplication, is named the "\NewTerm{group of permutations of order $n$}\index{group of permutations of order $n$}" or "\NewTerm{group of substitutions of order $n$}\index{group of substitutions of order $n$}", and is denoted by $S_n$ or $S(n)$.

We say that an element $\sigma$ of $S_n$ is a "\NewTerm{cycle of order $k$}\index{cycle of order $k$}", or simply a "\NewTerm{$k$-cycle}", if there exist $a_1,a_2,...,a_k\in \{1,...,n\}$ such that:
\begin{itemize}
\item $\sigma$ sends $a_1$ to $a_2$, $a_2$ to $a_3$, ..., $a_{k-1}$ to $a_k$, and $a_k$ to $a_1$.

\item $\sigma$ fixes all the other elements of $\{1,...,n\}$
\end{itemize}
and we denote the cycle as follows:

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Perhaps, for a better understanding, let us take our previous example in $S_4$:

This permutation is a $3$-cycle, denoted $\sigma=(1\; 3\; 4)$, because, in order: $1$ goes to $3$, $3$ goes to $4$ and $4$ goes to $1$ (and $2$ is not mentioned as it remains fixed). We can also write this in the following equivalent ways: $\sigma=(3\; 4\; 1)$ or also $\sigma=(4\; 1\; 3)$.
\end{tcolorbox}

\item[D2.] The order of a $k$-cycle is $k$ (hence the name!).
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Indeed, if we take again our $\sigma=(1\; 3\; 4)$, then we have:

\end{tcolorbox}

\item[D3.] We say that a permutation $\sigma$ is a "\NewTerm{cycle}\index{cycle (permutation)}" if there exists $k\in \mathbb{N}$ such that $\sigma$ is a $k$-cycle.

Warning! Any permutation can be written as a product of disjoint cycles (that is to say, a number that appears in one cycle should not appear in another cycle).

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
For example, in $S_9$, we have:

So this permutation is written as a product of a $4$-cycle and of a disjoint $3$-cycle.\\

We also let the reader see for himself that the cyclic group generated by $\sigma$ (which in this case is a subgroup of $S_9$) is of order $12$...
\end{tcolorbox}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Mathematicians can prove that if $\sigma$ is an element that has a decomposition into $c$ disjoint cycles of lengths $n_1,n_2,...,n_c$, then the order of $\sigma$ is the least common multiple of the orders of all the disjoint cycles that compose it.
\end{tcolorbox}
\end{enumerate} 
Let us also note, as intuitive common vocabulary, that a $2$-cycle in $S_n$ is named a "\NewTerm{transposition}\index{transposition}".

Let us go a little further. We propose to show by example that the set of all transpositions generates $S_n$. In other words, any permutation can be written as a product of transpositions!
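\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The general mechanism behind this fact is the standard identity (stated here in our notation, with the rightmost transposition applied first):
\[
(a_1\; a_2\; \ldots\; a_k)=(a_1\; a_k)\circ(a_1\; a_{k-1})\circ\cdots\circ(a_1\; a_2)
\]
For instance, $(1\;3\;4)=(1\;4)\circ(1\;3)$: indeed $1\mapsto 3\mapsto 3$, $3\mapsto 1\mapsto 4$ and $4\mapsto 4\mapsto 1$. Being a product of two transpositions, this is an even permutation, consistently with the example below.
\end{tcolorbox}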
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us return to our example (it is an even permutation):

\end{tcolorbox}
In general, a $k$-cycle is thus written as the product of $k-1$ transpositions.

\begin{theorem}
As the permutations of a finite set form a group, this means (among other things) that there is always an integer $k$ such that a permutation $p$ applied $k$ times is the identity transformation (that is to say, the operation that does not change anything).
\end{theorem}
\begin{dem}
If $G$ is a finite group and $g\in G$, we consider the following sequence of elements (recall that in a group there is only one operation, and therefore the square, cube, etc. mean that we compose this operation!):

For example, in the permutation groups, the operation is the composition of permutations.

Since $G$ is finite and this sequence is infinite, there are necessarily two elements that are equal in the sequence... So there are two different $n$ and $m$ such that:

Assuming that $n<m$, the previous equality is simplified and we get:

where $e$ is the neutral element of the group.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
We will see that, the permutations being bijective, we can create on finite groups compositions of permutation operations that always end up returning to the initial state (the identity application id)!!!
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. In a list of $5$ items, we exchange the first and the third, and, at the same time, we move the second to position $4$, the one that is in position $4$ to position $5$, and the one in position $5$ to position $2$. And we reiterate. This gives:

We return to the starting point after $6$ steps (indeed, this permutation is made of a $2$-cycle and a $3$-cycle, and its order is therefore $\text{lcm}(2,3)=6$).
\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
E2. Let us consider the "\NewTerm{Photomaton transformation}\index{photomaton transformation}" of the image of the Mona Lisa, of size $256$ by $256$ pixels:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/mona_lisa.jpg}
\caption{Photomaton transformation}
\end{figure}
We could think that each image was obtained from the previous one by reducing the size of the image by half, which gave four similar pieces that we have placed in a square to obtain an image having the same size as the original image. But in fact it is not so! The number of pixels has been preserved (no pixel is duplicated!!!): we actually just moved the pixels by permutation to get four images, each of which does not actually contain all the information of the original image but only a part!\\

By repeating the procedure $8$ times we always fall back on the original image, whatever the original image is. The question is to understand why.\\

Let us consider that the original image is a square with a size of $16$ pixels wide by $16$ pixels high (but you can also apply what follows to a rectangular image of any size and you will see that it works too!). Each pixel of a line (the process is exactly the same for columns!)
is identified by a coordinate along the $X$ axis going from $0$ to $15$.\\

Thus we have at the beginning a sequence of numbers where the pixel coordinates correspond to their $x$ coordinate:
\begin{gather*}
0\; 1\; 2\; 3\; 4\; 5\; 6\; 7\; 8\; 9\; 10\; 11\; 12\; 13\; 14\; 15
\end{gather*}
We then perform the permutation that consists in denoting by $k$ the position of a pixel and doing:

This then gives the first permutation:
\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\begin{gather*}
0\; 1\; 2\; 3\; 4\; 5\; 6\; 7\; 8\; 9\; 10\; 11\; 12\; 13\; 14\; 15\\
\downarrow \text{Permutation }1\\
0\; 2\; 4\; 6\; 8\; 10\; 12\; 14\; 1\; 3\; 5\; 7\; 9\; 11\; 13\; 15\\
\downarrow \text{Permutation }2\\
0\; 4\; 8\; 12\; 1\; 5\; 9\; 13\; 2\; 6\; 10\; 14\; 3\; 7\; 11\; 15\\
\downarrow \text{Permutation }3\\
0\; 8\; 1\; 9\; 2\; 10\; 3\; 11\; 4\; 12\; 5\; 13\; 6\; 14\; 7\; 15\\
\downarrow \text{Permutation }4\\
0\; 1\; 2\; 3\; 4\; 5\; 6\; 7\; 8\; 9\; 10\; 11\; 12\; 13\; 14\; 15\\
\end{gather*}
Thus, for an image of $16$ by $16$ pixels, it takes four permutations, which corresponds to $2^4=16$. So for an image of $256$ pixels, we have $256=2^8$, hence the fact that we need $8$ permutations to find back the original Mona Lisa, with:

Thus, in the general case of an image of width $L$ in number of pixels, counting from $1$, the transformation is:

where $E^+[...]$ is the upper (nearest) integer value in the case where $L$ is odd.\\

The reader may also have noticed something interesting if we return to our example with the image of $16$ pixels... Indeed, let us take the third pixel from the left, of coordinate $x$ equal to $2$. In binary, its initial position is $0010$. After the first permutation, its $x$ coordinate is equal to $1$, or in binary: $0001$. After the second permutation, the $x$ coordinate is equal to $8$, or in binary: $1000$, etc. In fact we see that every permutation can be summarized in binary by a circular shift of the bits to the right (which is why $\log_2$ of the width gives the number of steps needed to return to the start).
\end{tcolorbox}
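\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
A compact way of stating the permutation used above (our formulation, with the positions numbered from $0$ to $L-1$ and $L$ even):
\[
p(x)=\begin{cases}\dfrac{x}{2}&\text{if }x\text{ is even}\\[2mm]\dfrac{L+x-1}{2}&\text{if }x\text{ is odd}\end{cases}
\]
With $L=16$ this sends, for example, $2\mapsto 1$ and $1\mapsto 8$, in agreement with the sequences displayed above; and since this map acts as a circular shift of the bits, applying it $\log_2 L$ times indeed gives back the identity.
\end{tcolorbox}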
\textbf{Definition (\#\mydef):} Given $\sigma \in S_n$ a permutation. We say that $\sigma$ is an "\NewTerm{even permutation}\index{even permutation}" if, in a writing of $\sigma$ as a product of transpositions, there is an even number of transpositions. We say then, obviously, that $\sigma$ is an "\NewTerm{odd permutation}\index{odd permutation}" if, in a writing of $\sigma$ as a product of transpositions, there is an odd number of transpositions.

Let us end with a small complement... We know that $S_3$ is the group of permutations of $3$ elements and therefore has $3!=6$ possible permutations.

If we list the $6$ permutations we saw, we get:

Among these, only some can be written as a product of an even number of transpositions:

The even permutations form, together with the identity permutation, a subgroup that we name the "\NewTerm{alternating group of order $n$}\index{alternating group of order $n$}" and that we denote by $A_n$. It is easy to check this with the previous example.

\subsection{Galois Theory}	
In abstract algebra, Galois theory, named after Évariste Galois, provides a connection between field theory and group theory. Using Galois theory, certain problems in field theory can be reduced to group theory, which is, in some sense, simpler and better understood.

Originally, Galois used permutation groups to describe how the various roots of a given polynomial equation are related to each other. 

The birth and development of Galois theory were caused by the following question, whose answer is known as the "\NewTerm{Abel–Ruffini theorem}\index{Abel-Ruffini theorem}": Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and the application of radicals (square roots, cube roots, etc.)?

Galois theory not only provides a beautiful answer to this question, it also explains in detail why it is possible to solve equations of degree four or lower in the above manner, and why their solutions take the form that they do. Further, it gives a conceptually clear, and often practical, means of telling when some particular equation of higher degree can be solved in that manner.

Galois theory originated in the study of symmetric functions: the coefficients of a monic polynomial are (up to sign) the elementary symmetric polynomials in the roots. For instance:

where $1$, $a + b$ and $ab$ are the elementary symmetric polynomials of degree $0$, $1$ and $2$ in two variables.

We tried to make this part of the book as easy as possible. So we hope our goal is reached if you understand what follows. Let us now begin!!!

\subsubsection{Elementary symmetric and Invariant Polynomials}
\textbf{Definition (\#\mydef):} The "\NewTerm{$k$th elementary symmetric polynomial in $n$ variables}\index{elementary symmetric polynomial}", denoted $s_k(x_1,\ldots , x_n)$, is the sum of all possible degree $k$ monomials in $n$ variables, with each $x_i$ appearing NO MORE THAN ONCE IN EACH MONOMIAL. Formally, for $k\leq n$:

A polynomial is said to be "\NewTerm{invariant under $S_n$}\index{invariant polynomial}" if and only if it is a polynomial in the elementary symmetric functions $s_1,\ldots, s_n$.

Therefore:

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
For $n = 1$:

For $n = 2$:

For $n = 3$:

For $n = 4$:

Now consider the equation:

It can be rewritten as:

or, as we know the Viète relations with two roots, the latter can be rewritten:

\end{tcolorbox}

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
That is to say:

Similarly, for a third-degree polynomial:

and therefore:

\end{tcolorbox}
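\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For the reader's convenience, the first elementary symmetric polynomials written out explicitly (standard formulas, to be compared with the examples above):
\[
n=2:\quad s_1=x_1+x_2,\qquad s_2=x_1x_2
\]
\[
n=3:\quad s_1=x_1+x_2+x_3,\qquad s_2=x_1x_2+x_1x_3+x_2x_3,\qquad s_3=x_1x_2x_3
\]
so that, for instance, $(x-a)(x-b)=x^2-s_1(a,b)\,x+s_2(a,b)=x^2-(a+b)x+ab$.
\end{tcolorbox}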
The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial. We have the identity:

That is, when we substitute numerical values for the variables $r_1,r_2,\dots,r_n$, we obtain the monic univariate polynomial (with variable $x$) whose roots are the values substituted for $r_1,r_2,\dots,r_n$ and whose coefficients are, up to their sign, the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are named "\NewTerm{General Vieta's formulas}\index{General Vieta's formulas}", of which we have already seen two special cases in the section Calculus and which we will generalize further below.
\begin{theorem}
If $r_1,r_2,\ldots, r_n$ are the roots of a degree $n$ polynomial, then:

\end{theorem}
\begin{dem}
We will prove this by induction on the degree of the polynomial. If our polynomial is of degree $n = 1$ with root $r$, the left hand side is $x-r$, and the right hand side is $x-s_1(r)= x-r$, so the equation holds for $n = 1$. Suppose the equation holds for all polynomials of degree $n$. Let $P(x)$ be of degree $n+1$ with roots $r_1,\ldots,r_{n+1}$.

Then, we can write:

where we let $s_i$ denote $s_i(r_1, \ldots , r_n)$ for brevity. By multiplying out the right hand side:

Since:

and:

we get:

Now, if we can prove that:

for all the other $i$, we can conclude that the equation holds for $n+1$, hence for all $n$.

Remember that, by definition:

Then:

By separating the sum with respect to the monomials divisible by $r_{n+1}$, we see the above is equal to (most of the time the best is to check this by using one of the previous examples given earlier):

so it is clear that the relation we wanted holds.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}

\subsubsection{General Vieta's formulas}
If we write the second degree equation as follows:

We already know that:

And also, for a third degree polynomial, we saw just before in the examples that if we have:

then:

We can easily see a pattern emerging, namely, for a monic polynomial $x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ with roots $r_1,\ldots,r_n$:
\[
\sum_{1\leq i_1<\cdots<i_k\leq n} r_{i_1}\cdots r_{i_k}=(-1)^k\,a_{n-k},\qquad k=1,\ldots,n
\]
in particular $r_1+\cdots+r_n=-a_{n-1}$ and $r_1\cdots r_n=(-1)^n a_0$.
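\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
As a quick numerical check of this pattern (our example): the polynomial $x^3-6x^2+11x-6=(x-1)(x-2)(x-3)$ has roots $1,2,3$, and indeed:
\[
1+2+3=6=-a_2,\qquad 1\cdot 2+1\cdot 3+2\cdot 3=11=a_1,\qquad 1\cdot 2\cdot 3=6=-a_0
\]
\end{tcolorbox}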
But the reader can already find theorems with complex variables in the section of Complex Analysis.

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We will not demonstrate all the primitives and derivatives of all possible functions because, as there is an infinite number of possible functions, there is also an infinite number of derivatives and primitives. It is the role of teachers in educational institutions to train students to apply and understand the reasoning of derivation and integration through applications on known functions (the Internet will probably never replace school at this level).
	\end{tcolorbox}	
	
	\subsection{Differential Calculus}

Let $f$ be a real function of one real variable $x$, denoted $f(x)$ (we restrict ourselves to this case for now; we will study partial derivatives in any number of dimensions later), continuous on at least one interval of the horizontal axis containing the value $a$.

\textbf{Definitions (\#\mydef):} 

\begin{enumerate}

\item[D1.] We name "\NewTerm{average slope}\index{average slope}", or "\NewTerm{directing coefficient}\index{directing coefficient}", the ratio of the orthogonal projections onto the $y$-axis and the $x$-axis of two points $x_1 \neq x_2$ of the function $f$ (not necessarily continuous):
\[
\frac{\Delta f}{\Delta x}=\frac{f(x_2)-f(x_1)}{x_2-x_1}=\frac{f(x_1+h)-f(x_1)}{h}
\]
	This can be represented graphically as follows with a specific function:

	\begin{figure}[H]
		\centering
		\begin{tikzpicture}[>=stealth',
                    dot/.style={circle,draw,fill=white,inner sep=0pt,minimum size=4pt},scale=1.25]

	    % draw axis lines
	    \draw[->,thick] (-0.5,0) -- ++(11,0) node[below left]{$x$};
	    \draw[->,thick] (0,-0.5) -- ++(0,7) node[below right]{$y=f(x)$};
	    \coordinate (O) at (0,0);
	
	    % create path for function curve
	    \path[thick,red] (-0.3,2) to[out=-25, in=200] coordinate[pos=0.2] (p) coordinate[pos=0.6] (q) (9,5);
	    % fill area
	    \fill[blue, opacity=.1] (p) -| (q);
	    % draw the secant line with fixed length
	    \draw[shorten <=-1.5cm] (p) -- ($ (p)!7.5cm!(q) $) node[below right, pos=0.9]{Secant};
	    % draw function curve
	    \draw[thick,red] (-0.3,2) to[out=-25, in=200] (9,5);
	
	    % draw all points
	    \node[dot,label={above:$P$}] (P) at (p) {};
	    \node[dot,label={above:$Q$}] (Q) at (q) {};
	    \node[dot] (p1) at (P |- O) {};
	    \node[dot] (p2) at (Q |- O) {};
	    \node[dot] (p3) at (P -| Q) {};
	
	    % draw lines between nodes and place text
	    \draw (P) -- node[left]{$f(x_{1})$} (p1) node[dot,label={below:$x_{1}$}]{};
	    \draw (p2) node[dot,label={below:$x_2=x_{1} + h$}]{} -- (p3);
	    \path (p1) -- node[below]{$h$} (p2);
	
	    % draw blue arrows between nodes
	    \draw[<->,blue,thick] (P) -- node[below]{$\Delta x=h$} (p3);
	    \draw[<->,blue,thick] (Q) -- node[right]{$f(x_{1} + h) - f(x_{1})=\Delta y=\Delta f$} (p3);
	
	    % draw the explanation for the y-value of point Q
	    \draw[help lines] (Q) -- (Q -| {(9.5,0)}) ++(-0.5,0) coordinate (p4);
	    \draw[help lines, <->] (p4) -- node[fill=white,text=black]{$f(x_{1} + h)=f(x_2)$} (p4 |- O);
		\end{tikzpicture}
	\end{figure}
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The symbol $\Delta$, named "delta", expresses the fact that we take a difference of the same quantity.
	\end{tcolorbox}
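Here is a minimal numerical sketch (plain Python; the example function is the one studied a little further below in this section) showing the average slope approaching a fixed value as $h$ shrinks, which motivates the next definition:

\begin{verbatim}
# The average slope (f(x1+h) - f(x1)) / h stabilizes as h shrinks.
def f(x):
    return x**3 - 3*x**2 + 2   # example function used later in this section

x1 = 1.0
for h in [1.0, 0.1, 0.01, 0.001]:
    slope = (f(x1 + h) - f(x1)) / h
    print(f"h = {h:6}: average slope = {slope:.6f}")
# the values approach f'(1) = 3*1**2 - 6*1 = -3
\end{verbatim}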
We assume as obvious (without proof) that two functions whose slopes are the same on the same interval of definition are parallel (in the plane).
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We will prove in the chapter of Analytic Geometry that two functions whose slopes have a product equal to $-1$ are perpendicular.
	\end{tcolorbox}
	\item[D2.] We call "\NewTerm{derivative at $a$}\index{derivative}", or "\NewTerm{instantaneous slope}", or "\NewTerm{first derivative}\index{first derivative}", the limit when $h$ tends to $0$ (if the limit exists) of the ratio of the orthogonal projections of two points $x_1\neq x_2$ infinitely close together of a continuous function $f$ (in the sense that it does not contain "holes") onto the $y$-axis and the $x$-axis, so that:
\[
f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}=\frac{\mathrm{d}f}{\mathrm{d}x}\bigg|_{a}
\]
A graphic interpretation gives us that $f'(a)$ is the directing coefficient (the slope of the tangent at the point of abscissa $a$).
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} The letter $\mathrm{d}$ means here a "\NewTerm{differential}\index{differential}" and expresses the fact that we take an infinitesimal difference of the same quantity.\\

	\textbf{R2.} We refer the reader to the section of Functional Analysis for the definition of a continuous function.
	\end{tcolorbox}
	\item[D3.] Let $f$ be a function defined on an interval $I$ and differentiable at every point $a$ of $I$. The function that associates to any real number $a$ of $I$ the number $f'(a)$ is named the "\NewTerm{derivative function of $f$ on $I$}" and is denoted by $f'$.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
About the notations of the derivatives... physicists adopt, depending on their mood, various possible notations. Thus, considering the real function $f(x)$ of one real variable $x$, you can find in the literature and in this book the following notations for the first derivative:
\[
f'(x)=\frac{\mathrm{d}f(x)}{\mathrm{d}x}=\frac{\mathrm{d}}{\mathrm{d}x}f(x)
\]
or, implicitly assuming that $f$ is a function of $x$ (this gives the opportunity to reduce the size of developments):
\[
f'=\frac{\mathrm{d}f}{\mathrm{d}x}
\]
	\end{tcolorbox}
\end{enumerate}
We can in the same way define derivatives of order 2 (derivative of a derivative), derivatives of order 3 (derivative of a derivative of order 2) and so on. We will frequently meet such derivatives in physics or in pure mathematics in functional analysis.

Note that the derivatives of order 2 have a very important interpretation in physics and in the areas of optimization (\SeeChapter{see section Theoretical Computing}). Indeed, if the sign of the first derivative is positive and then becomes negative (going through the value of zero) when $x$ increases, then we easily guess that we travel through a local maximum of the function (the point where the derivative is zero); and if the sign of the first derivative goes from negative to positive when $x$ increases, then we travel through a local minimum of the function (the point where the derivative is zero). In other words, when the slope changes sign (becoming zero while changing sign) the function passes through an extremum (maximum or minimum) and the tangent is "horizontal" at this point: parallel to the $x$-axis. But when the derivative of order 2 is zero, that means that the curvature of the function is reversing. We then speak about an "\NewTerm{inflection point}\index{inflection point}".

So a very important thing that you should always keep in mind (!!!) when you write that the derivative of a function is null, is that we can have a derivative which vanishes at a point without that point being an extremum (remember that we call this an inflection point).
To check whether it is really an extremum, we can calculate the second derivative to eliminate the case of an inflection point (because at an inflection point the second derivative is null; so if the second derivative is not null, we really have an extremum). Otherwise you must use a table of variations to ensure that we are dealing with a maximum or a minimum, as for example with the function $x^3-3x^2+2$:

	\begin{minipage}{\linewidth}\centering
    \begin{variations}
     x      & \mI &    & 0 &    & 2 &    & \pI  \\
     \filet
     f'     & \ga +    & 0    &  -  &  0   & \dr+      \\
     \filet
     \m{f}  & ~  & \c  & \h{~} & \d & ~    &  \c       \\
     \end{variations}
	\end{minipage} 	
	
	The corresponding plot is:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=1]{img/algebra/variation_plot_example_1.jpg}
		\caption[]{Plot of the function $x^3-3x^2+2$}
	\end{figure}
	Another example with $f(x)=x^4-4x^3+11$:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/algebra/variation_plot_example_2.jpg}
		\caption[]{Plot of the function $x^4-4x^3+11$}
	\end{figure}
	With a more detailed variation table:
	
	\begin{center}
	\begin{tikzpicture}[t style/.style={solid}]
	\tkzTabInit[espcl=2]{$x$/.5,$f'(x)$/.5,$f''(x)$/.5,$f(x)$/3} {$-\infty$,$0$,$2$,$3$,$+\infty$}
	\tkzTabLine{,-,0,-,t,-,0,+, }
	\tkzTabLine{,+,0,-,0,+,t,+, }
	
	\node [below] (n1) at (N13){$+\infty$};
	\node [below=1cm](n2) at (N23){$11$};
	\node [below=2cm] (n3) at ([yshift=1em]N33){$-5$};
	\node [above] (n4) at ([yshift=1em]N44){$-16$};
	\node [below] (n5) at (N53){$+\infty$};
	
	\node[below=1ex]at(n2){$ \mathrm{\Sigma.K.} $};
	\node[below=1ex]at([xshift=.5ex]n3){$ \mathrm{\Sigma.K.} $};
	\node[below=1ex]at([xshift=1ex]n4){$ \mathrm{T.E.} $};
	
	\draw[>->] (n1) to [out=-90,in=180] (n2);
	\draw[>->] (n2) to [out=0,in=90] (n3.west);
	\draw[>->] (n3.east) to  [out=-90,in=180] (n4);
	\draw[>->] (n4) to [out=0,in=-90] (n5);
	
	\end{tikzpicture}
	\end{center}

Here is a very entertaining example of a function with its first and second derivatives with Maple 4.00:

\texttt{>plot([tanh(x),diff(tanh(x),x),diff(tanh(x),x\$2)],x=-5..5,\\color=[red,green,blue]);}

\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/algebra/derivatives.eps}
\caption{Plot of the hyperbolic tangent function, its first and second derivatives}
\end{figure}

	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Two good functions to easily remember the property of the second derivative are $f(x)=x^2$ and $f(x)=-x^2$. As you know, the first has a global minimum and the second a global maximum, and if you calculate the second derivative you get a positive constant for the first one and a negative constant for the second.
	\end{tcolorbox}

Now, following a problem of understanding of one of our readers in one of the chapters of this book, let us specify a technique often used by physicists. Consider a derivative of order $2$ such as:
\[
\frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2}
\]
If we look at $\dfrac{\mathrm{d}}{\mathrm{d}x}$ as a differential operator (which it is!) we can obviously write:
\[
\frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2}=\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\mathrm{d}}{\mathrm{d}x}f(x)\right)
\]
Finally we have:
\[
\frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2}=\left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^2 f(x)
\]
and so it comes, after simplification by $f(x)$:
\[
\frac{\mathrm{d}^2}{\mathrm{d}x^2}=\left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^2
\]
keeping in mind that this equality between operators only has meaning when the operator acts explicitly on a function in a mathematical or physical relation.

This may seem obvious to some but sometimes less so to others... and it is clearly useful to clarify this because it is often used in the sections of Special Relativity, General Relativity, Corpuscular Quantum Physics and Wave Quantum Physics.
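Returning to the variation-table example above, here is a small numerical sketch (plain Python, central finite differences; the step sizes are our own choices) that classifies the critical points of $f(x)=x^3-3x^2+2$ with the second-derivative test:

\begin{verbatim}
def f(x):
    return x**3 - 3*x**2 + 2

def d(g, x, h=1e-5):           # numerical first derivative of g at x
    return (g(x + h) - g(x - h)) / (2 * h)

for c in [0.0, 2.0]:           # critical points where f'(c) = 0
    f2 = d(lambda t: d(f, t), c)          # numerical second derivative
    if f2 < 0:
        kind = "local maximum"
    elif f2 > 0:
        kind = "local minimum"
    else:
        kind = "inconclusive (possible inflection point)"
    print(f"x = {c}: f' = {d(f, c):+.5f}, f'' = {f2:+.5f} -> {kind}")
\end{verbatim}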
Let us now state and prove two intuitively obvious properties of the derivative which will be essential for us several times in proofs in this book (for example in the section of Numerical Methods, or simply in this very section...).

\begin{theorem}
Consider first two real numbers $a<b$ and $f$ a continuous real-valued function on the closed interval $[a, b]$, differentiable on the open interval $]a, b[$, such that $f(a)=f(b)$. We want to prove that there is at least one element $c \in ]a, b[$ such that $f'(c)=0$ (this is typically the case for polynomial functions!).

This property is named "\NewTerm{Rolle's theorem}\index{Rolle's theorem}", and it explicitly shows that there is at least one element where the derivative of $f$ is zero when, browsing its path, we return to the same image value for two distinct values of the abscissa (pre-images); that is to say, there exists at least one point where the tangent is horizontal.
\end{theorem}

\begin{dem}
First, if $f$ is constant, the result is immediate... Otherwise, as $f$ is continuous on the closed interval $[a, b]$, it admits at least one minimum or maximum, considering that we rely on the assumption that $f(a)=f(b)$ and that $f$ is not constant. The extremum is reached at a point $c$ belonging to the open interval $]a, b[$ (the fact of taking an open interval allows us in some cases to avoid having an extremum again at $a$ or $b$).

Suppose as a first case that $f(c)$ is a global maximum in the interval. The difference quotient of the function $f$ between $c$ and a second point $c+h$ has a known sign:

	\begin{itemize}
		\item For $h$ strictly positive and such that $c + h$ still belongs to the interval $[a, b]$:
\[
\frac{f(c+h)-f(c)}{h}\leq 0
\]
Considering the limit when $h$ tends to $0$, the value of the derivative $f'(c)$ is thus negative or null.
	 \item For $h$ strictly negative and such that $c + h$ still belongs to the interval $[a, b]$:
\[
\frac{f(c+h)-f(c)}{h}\geq 0
\]
Considering the limit when $h$ tends to $0$, the value of the derivative $f'(c)$ is thus positive or null.
	\end{itemize}
Ultimately, the derivative of $f$ is zero at the point $c$.

The proof is analogous if $f(c)$ is a minimum in the interval, with the signs of the derivatives being opposite.
		\begin{flushright}
			$\square$  Q.E.D.
		\end{flushright}
\end{dem}
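A quick numerical illustration of Rolle's theorem (plain Python; the function and bracketing interval are our own choices): $f(x)=\sin(x)$ satisfies $f(0)=f(\pi)=0$, so some $c\in]0,\pi[$ must have $f'(c)=0$, and a bisection on the numerical derivative finds it:

\begin{verbatim}
import math

def fprime(x, h=1e-6):                     # central difference derivative
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

lo, hi = 1.0, 2.0                          # f' changes sign on [1, 2]
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(lo) * fprime(mid) <= 0:
        hi = mid
    else:
        lo = mid
print("c =", (lo + hi) / 2, "(expected pi/2 =", math.pi / 2, ")")
\end{verbatim}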
\n\n\\begin{theorem}\n\tSo we propose to show that there is at least one real number $c \\in ]a,b[ $ such that:\n\t\nThis can also be written as follows:\n\t\nwith $s\\in ]0,1[.$\n\nSince the term on the left represents a finite increase of the term right, then this result is named \"\\NewTerm{mean value theorem}\\index{mean value theorem}\" or better \"\\NewTerm{theorem of finite increments}\\index{theorem of inite increments}\".\n\nGeometrically this means that on at least one point $c$ of the graph of the function $f (x)$, there is a tangent with a director coefficient of:\n\t\nGraphically this gives:\n\\begin{figure}[H]\n\\centering\n\\includegraphics{img/algebra/mean_value_theorem.eps}\n\\caption{Graphical representation of Rolle's theorem}\n\\end{figure}\n\n\\end{theorem}\n\\begin{dem}\n\tWe first have:\n\t\n\tbecause the slope of $h(x)$ is obviously:\n\t\t\n and as we must have $f(a)$ when $x=a$ it follows the relation given previously.\n\nThen, to show that such a $c$ value exists, the idea is to bring the two points $a$ and $b$ in the same ordinate making this brings us back to Rolle's theorem and for that, we define a function $g(x)$ by:\n\t\nwhich is such that indeed $g(a)=g(b)$ ... is in this case equal to $0$ (but this value is not relevant).\n\nTherefore, the Rolle's theorem discussed above indicates that there is a point between $a$ and $b$ where the derivative of $g(x)$ is zero such that $g'(c)=0$. And by seing that:\n\t\nwe get:\n\t\n\tTherefore after simplification:\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\n\\end{dem}\nUsing this little theorem and mathematical tools introduced earlier, we can build a little theorem useful and powerful physics.\n\n\\textbf{Definition (\\#\\mydef):} We name  \"\\NewTerm{H\u00f4pital's rule}\\index{H\u00f4pital's rule}\" (also named sometimes named \"\\NewTerm{Bernoulli's rule}\\index{Bernoulli's rule}\" or \"NewTerm{Hospital rule}\\index{Hospital rule}\") the method that uses the derivative in order to determine the boundaries difficult to calculate with most quotients which often appear in physics.\n\n\t\\begin{dem}\n\t\tConsider two functions $f(x)$ and $g(x)$ and such that $f(a)=f(b)=0$ so we can write:\n\t\t\n\t\tThen according to the definition of the derivative:\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\t\t\n\t\\end{dem}\nWe can generalize this previous result initially based on a little too strong constraint:\n\t\n\t\\begin{dem}\nLet us recall that according to the mean value theorem, if $f(x)$ is differentiable on an interval $]a, b[$ and continuous on $[a, b]$ then there is a real $c$ in the interval $[a, b]$ such that:\n\t\t\n\t\tIf the theorem is true for two functions satisfying the same constraints then we have two functions such as:\n\t\t\n\t\t\t\tIf g '(c) is not zero then we have the right to write the ratio (some name this the \"\\NewTerm{generalized mean value theorem}\\index{generalized mean value theorem}\"...):\t\n\t\t\n\t\twhich without losing validity as $c$ is in the range $[a, x]$ can be written:\n\t\t\n\t\tTherefore, when $x \\rightarrow a$ that implies the range $[a, x]$ is always smaller and thus $c \\rightarrow a$ we have:\n\t\t\n\t\tSo we just proved now that in the first simplified proof of the Hospital rule of the relation:\n\t\t\n\t\twe had is true in general and that it is not necessary that $f(a)=g(a)=0$ is true for the result to be fair!\n\t\t\\begin{flushright}\n\t\t\t$\\square$  
	\pagebreak
	\subsubsection{Differentials}
	
	We noted earlier what a differential $\mathrm{d}$ is. But there are actually several different types of differentials of a function (note that in French one even distinguishes the masculine and the feminine gender of the word):
	\begin{enumerate}
		\item Differentials
		\item Partial differentials
		\item Total exact differentials
		\item Total inexact differentials
	\end{enumerate}
		Remember, as seen at the beginning of this section, that we name "\NewTerm{differential $\mathrm{d}f$}" of a univariate function the relation given by:
\[
\mathrm{d}f=f'(x)\,\mathrm{d}x
\]
	However, to express the effect of changing all the variables of a multivariate function $f$, we must use another type of differential, which we name the "\NewTerm{total differential}\index{total differential}" (divided into two subfamilies: total exact differentials and total inexact differentials).
	
	Take, for example, a function $f(x, y)$ of the two variables $x$ and $y$. The increase $\Delta f$ of the function $f$, for a finite increase of $x$ to $x+\Delta x$ and of $y$ to $y+\Delta y$, is obviously given by:
\[
\Delta f=f(x+\Delta x,y+\Delta y)-f(x,y)
\]
	we can also write:
\[
\Delta f=\big[f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)\big]+\big[f(x,y+\Delta y)-f(x,y)\big]
\]
	Or also:
\[
\Delta f=\frac{f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)}{\Delta x}\Delta x+\frac{f(x,y+\Delta y)-f(x,y)}{\Delta y}\Delta y
\]
	For infinitely small increments of $x$ and $y$:
\[
\mathrm{d}f=\lim_{\Delta x\to 0,\,\Delta y\to 0}\Delta f
\]
	Let us therefore focus on the two terms when going to the limit:
\[
\lim_{\Delta x\to 0}\frac{f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)}{\Delta x},\qquad \lim_{\Delta y\to 0}\frac{f(x,y+\Delta y)-f(x,y)}{\Delta y}
\]
	The first term on the left, as we see clearly, finally gives the variation in $x$ of the function $f(x, y)$ with $y$ held fixed during the variation. We therefore denote this by (if the fixed variables are obvious we do not write them):
\[
\left(\frac{\partial f}{\partial x}\right)_y
\]
	and even:
\[
\frac{\partial f}{\partial x}
\]
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
When a variable is fixed to study the variation of the other, some authors and teachers of older generations like to say, "all other things being equal, $f$ varies as a function of ... as ...". In short, it is an expression that can be found in other areas of mathematics (such as multivariate linear regressions) but that is disappearing...
	\end{tcolorbox}
	Both expressions:
\[
\frac{\partial f}{\partial x},\qquad \frac{\partial f}{\partial y}
\]
	are what we name "\NewTerm{partial differential}\index{partial differential}" or just "\NewTerm{partial derivative}\index{partial derivative}" (whose simplest, and probably most interesting and pedagogically relevant, practical application available today in the entire book is the supply chain Wilson's model with rupture presented in the section of Quantitative Management).
	
	We have therefore:
\[
\mathrm{d}f=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y
\]
	that is the "\NewTerm{differential of $f$}". Thermodynamicists often talk of the "\NewTerm{total exact differential of $f$}", or just the "\NewTerm{exact differential of $f$}", or even simply the "\NewTerm{total derivative}", and also the "\NewTerm{exterior derivative}\index{exterior derivative}".
	
	The previous relation is a special case of what mathematicians very generally name a "\NewTerm{differential form}\index{differential form}":
\[
\omega=\sum_{i=1}^{n}a_i(x_1,\ldots,x_n)\,\mathrm{d}x_i
\]
	we will come back to this a little further below...
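A minimal numerical sketch (plain Python; the smooth function and the increments are our own choices) showing that the total differential approximates the true increase of $f$ for small $\Delta x,\Delta y$:

\begin{verbatim}
def f(x, y):
    return x**2 * y + y**3       # an arbitrary smooth example function

x, y, dx, dy = 1.0, 2.0, 1e-4, -2e-4
h = 1e-6
df_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # partial derivative in x
df_dy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # partial derivative in y

exact_increase = f(x + dx, y + dy) - f(x, y)
total_differential = df_dx * dx + df_dy * dy
print(exact_increase, total_differential)       # both close to -2.2e-3
\end{verbatim}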
	It is customary to write:
\[
\mathrm{d}f=\left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\right)\cdot\left(\mathrm{d}x,\mathrm{d}y\right)
\]
	that is to say, in the manner of a vector field.
	
	It is important to remember the form of the total derivative because we will meet it again almost everywhere in special operators in quantum physics, in fluid mechanics, in electrodynamics, in thermodynamics, in economy, etc.
	
	Geometrically, the partial derivatives can be interpreted as follows: the function $f(x, y)$ typically defines a surface in $\mathbb{R}^3$ whose intersection with the plane $y=y_0$ is a curve given by $f(x,y_0)$.
	
	The partial derivative $\partial_x f$ is then the slope of this curve at every point $x$. We then naturally obtain the following function for the slope (the tangent) at the point $(x_0,y_0)$:
\[
t_x(x)=f(x_0,y_0)+\partial_x f(x_0,y_0)(x-x_0)
\]
	Similarly, the tangent to the curve $f(x_0,y)$ will be given by:
\[
t_y(y)=f(x_0,y_0)+\partial_y f(x_0,y_0)(y-y_0)
\]
	The plane locally tangent at the point $(x_0,y_0)$, determined by its two tangents, is then given by:
\[
z=f(x_0,y_0)+\partial_x f(x_0,y_0)(x-x_0)+\partial_y f(x_0,y_0)(y-y_0)
\]
	Reorganizing the terms as:
\[
z-f(x_0,y_0)=\partial_x f(x_0,y_0)(x-x_0)+\partial_y f(x_0,y_0)(y-y_0)
\]
	We recognize:
\[
\mathrm{d}z=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y
\]
	Thus, for example, the surface represented by the function:
	
	is shown below with the two tangents passing through the point:
	
	and whose respective equations are:
	
	and:
	
	\begin{figure}[H]
	\centering
	\includegraphics[scale=0.75]{img/algebra/total_derivative_two_tangents.jpg}
	\caption{Both partial derivative tangents of the function at the point of interest}
	\end{figure}
	The "\NewTerm{tangent plane}\index{tangent plane}" at this point is then given by:
		
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.75]{img/algebra/total_derivative_tangent_plane.jpg}
		\caption[]{The two tangents of the function at the point of interest with the tangent plane}
	\end{figure}
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Similarly, for a function of more than two variables, for example $f(x,y,z)$, the total derivative $\mathrm{d}f$ is given by:
\[
\mathrm{d}f=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y+\frac{\partial f}{\partial z}\mathrm{d}z
\]
	In the above equation, the differential $\mathrm{d}f$ was calculated from the expression of the function $f$. Since there is a function $f$ satisfying the expression $\mathrm{d}f$, the differential $\mathrm{d}f$ is then, as we know, named a "total exact differential".
	\end{tcolorbox}	
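Here is a small sketch (plain Python; the paraboloid $f(x,y)=x^2+y^2$ and the point $(1,1)$ are our own example) that builds the tangent plane from numerical partial derivatives and checks that it matches $f$ to first order near the point:

\begin{verbatim}
def f(x, y):
    return x**2 + y**2

x0, y0, h = 1.0, 1.0, 1e-6
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)   # about 2
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)   # about 2

def tangent_plane(x, y):
    return f(x0, y0) + fx * (x - x0) + fy * (y - y0)

print(f(1.01, 0.99), tangent_plane(1.01, 0.99))  # 2.0002 vs 2.0 (close)
\end{verbatim}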
If $\\mathrm{d}z$ is an exact total differential, then:\n\t\n\tThis requires verbatim that:\n\t\n\tor, by performing a second partial derivative that:\n\t\n\tso that the differential form is an exact total derivative.\n\t\n\tBefore continuing, we need a result given by the \"\\NewTerm{Schwarz theorem}\\index{Schwarz theorem}\" (but which was proved in the late 17th century by one of the Bernoulli brothers) which is the following:\n\t\n\t\\begin{theorem}\n\tGiven a function $f$, if:\n\t\n\tare continuous (we must really check that this assumption is true!) then we get a very important result in practice:\n\t\n\tfor every $(x_0,y_0)\\in U$ where $U$ is the domain of definition where $f$ is continuous (and therefore assumed to be differentiable).\n\t\\end{theorem}\n\t\\begin{dem}\n\tWe consider the expression:\n\t\n\tLet us write:\n\t\n\tThen we have:\n\t\n\tBy the mean value theorem:\n\t\n\tWith $s,t \\in ]0,1[$. By taking the definitions of $w$ and $g$ we get:\n\t\n\tby applying the again the mean value theorem to both sides in brackets we find:\n\t\n\tWith $\\tilde{s},\\tilde{t} \\in ]0,1[$. To finish we see that we have:\n\t\n\tand by continuity when $k,h\\rightarrow 0$, we have:\n\t\n\tMore simply written:\n\t\n\tSo if $f$ is expressed in an total derivative  form therefore the cross differentials are equal (the reciprocal is not necessarily true!).\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tBy induction on the number of variables we can prove in the general case (that is long but it is possible we will do it if it needs to be done and we have the time...).\n\t\n\tSo finally getting back to our original problem, we have:\n\t\n\tWhich finally gives us the \"\\NewTerm{Schwarz condition}\\index{Schwarz condition}\":\n\t\n\tThis is the condition that must be met for a differential form to be an exact total differential and the condition that it should not meet to be an non-total eact differential!!! This is a very important property for the study of Thermodynamics (see corresponding section)!\n\t\n\tIn order not to confuse the two types of differentials, we use the symbol $\\delta$ to represent a non-total exact differential:\n\t\n\tand $\\mathrm{d}$ for a total exact differential:\n\t\n\tThe distinction is extremely important because only the total exact differentials that satisfy:\n\t\n\thave an integral that depends \\underline{only} on the limits of integration (since all the variables change at the same time). Therefore non-total exact differentials depend not \\underline{only} on the limits of integration, meaning that:\n\t\n\tand therefore on a closed path we can have:\n\t\n\tWhile for total exact differentials:\n\t\n\tand therefore (see detailed proof later when we will deal with curvilinear integrals):\n\t\n\tIn other words, the variation of a function whose differential is total exact, does not depend on the path followed, but only of the initial and final states as it can be expressed as the gradient of a function (see the proof by example in the section of Electrostatics when we check that the electrostatic potential difference is independent of the path). We name such a function that satisfies an exact total differential in physics a \"\\NewTerm{state function}\\index{state function}\" and in mathematics a \"\\NewTerm{holomorphic function}\\index{holomorphic function}\" (see section Complex Analysis for details), that is, i.e. 
	This distinction is very important, especially in thermodynamics, where it should be determined whether a physical quantity is a total exact differential (a "state function"!) or not, to know how systems evolve.
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
An important example of a differential form in thermodynamics is the elementary work $\delta W$ of a force exerted on a body in motion in the plane $\text{O}xy$; we have:
\[
\delta W=F_x\,\mathrm{d}x+F_y\,\mathrm{d}y
\]
	$F_x,F_y$ are not necessarily derived from the same potential $U(x, y)$ such that:
\[
F_x=-\frac{\partial U}{\partial x},\qquad F_y=-\frac{\partial U}{\partial y}
\]
	therefore $\delta W$ is indeed, in this special case, a non-exact total differential!
	\end{tcolorbox}
	 
	 \subsubsection{Usual Derivatives}
	 
	 We will prove in detail here the most common univariate derivatives (around thirty) and some of their major properties that we will meet in theoretical physics and mathematics (in fact, we will use them all, respectively, in the chapters on Mechanics, Engineering, Atomistic, Social Mathematics, etc.). The list below is not exhaustive at this time, but the proofs being general, they can be applied to many other similar cases (that we will apply/meet throughout almost all of this book).
	 \begin{enumerate}
	 	\item Derivative of $f(x)=x^n$:
	 	
	 	Let us first start with a particular case, the derivative of $x^3$:
\[
f'(a)=\lim_{h\to 0}\frac{(a+h)^3-a^3}{h}=\lim_{h\to 0}\frac{3a^2h+3ah^2+h^3}{h}=\lim_{h\to 0}\left(3a^2+3ah+h^2\right)=3a^2
\]
		Therefore the derivative of the cubic function is $3a^2$.
		
		We can generalize this result for any positive or negative integer $n$, and we will see that the function $f$ defined on $\mathbb{R}$ by $f(x)=x^n$ is differentiable and that its derivative $f'$ is given by $f'(x)=nx^{n-1}$.
		 
		 Therefore we have (a few examples can be helpful to understand the scope of this result!):
\[
\left(x^2\right)'=2x,\qquad \left(x^{10}\right)'=10x^9,\qquad \left(x^{-1}\right)'=-x^{-2}
\]
		 So we see that, having determined the derivative of a function of the form $x^n$, we have also determined the derivative of any function that can be written in such a form:
\[
\left(\sqrt{x}\right)'=\left(x^{1/2}\right)'=\frac{1}{2}x^{-1/2}=\frac{1}{2\sqrt{x}}
\]
		 However, the following functions:
\[
\frac{1}{x},\qquad \sqrt{x}
\]
		 are not differentiable at $x=0$, because the function (or its derivative) is no longer defined at this point (division by zero).
Furthermore, relative to the root function (non-integer power), the derivative is not defined on $\mathbb{R}_{-}^{*}$.
		 
		 \item Derivative of $f(x)=c^{te}$:
		 
		 The previous result gives an interesting immediate result for constant functions such as:
\[
f(x)=c=c\cdot x^0
\]
		 it is then not difficult to determine that the derivative is simply:
\[
f'(x)=0
\]
		 So the derivative of any constant function is zero (it is important to remember this result when we study the properties of integrals)!!!
		 \item Derivative of $f(x)=\cos(x)$:
		 
		 Given any fixed real number $a$, then (be careful: it is useful to know the remarkable trigonometric relations that we proved in the section Trigonometry!):
\[
f'(a)=\lim_{h\to 0}\frac{\cos(a+h)-\cos(a)}{h}=\lim_{h\to 0}\left[-\sin\left(a+\frac{h}{2}\right)\frac{\sin(h/2)}{h/2}\right]=-\sin(a)
\]
		 Because, using the Hospital rule (or by seeing that $\sin(x)$ can be assimilated to the straight line function $f(x)=x$ near $x=0$):
\[
\lim_{h\to 0}\frac{\sin(h/2)}{h/2}=1
\]
		 So to summarize:
\[
\cos'(x)=-\sin(x)
\]
		 
		 \item Derivative of $f(x)=\sin(x)$:
		 
		 Given any fixed real number $a$, then (be careful again: it is useful to know the remarkable trigonometric relations that we proved in the section Trigonometry!):
\[
f'(a)=\lim_{h\to 0}\frac{\sin(a+h)-\sin(a)}{h}=\lim_{h\to 0}\left[\cos\left(a+\frac{h}{2}\right)\frac{\sin(h/2)}{h/2}\right]=\cos(a)
\]
		 Because, using the Hospital rule (or by seeing that $\sin(x)$ can be assimilated to the straight line function $f(x)=x$ near $x=0$):
\[
\lim_{h\to 0}\frac{\sin(h/2)}{h/2}=1
\]
		 So to summarize:
\[
\sin'(x)=\cos(x)
\]
		 \item Derivative of $f(x)=\log_b(x)$:
		 We begin by writing that:
\[
\Delta f=\log_b(x+\Delta x)-\log_b(x)
\]
		 Therefore:
\[
\Delta f=\log_b\left(\frac{x+\Delta x}{x}\right)=\log_b\left(1+\frac{\Delta x}{x}\right)
\]
		 Therefore:
\[
\frac{\Delta f}{\Delta x}=\frac{1}{\Delta x}\log_b\left(1+\frac{\Delta x}{x}\right)
\]
		 Multiply and divide by $x$ the term in the right member of the last previous equality:
\[
\frac{\Delta f}{\Delta x}=\frac{1}{x}\cdot\frac{x}{\Delta x}\log_b\left(1+\frac{\Delta x}{x}\right)
\]
		 Denote the quantity $\dfrac{\Delta x}{x}$ by $\alpha$. It is obvious that $\alpha \rightarrow 0$ when $\Delta x$ tends to zero for a given $x$. Consequently:
\[
\frac{\Delta f}{\Delta x}=\frac{1}{x}\cdot\frac{1}{\alpha}\log_b\left(1+\alpha\right)=\frac{1}{x}\log_b\left((1+\alpha)^{1/\alpha}\right)
\]
		 However, we find here again another historical origin of Euler's number (see the section Functional Analysis for the proof), that is:
\[
\lim_{\alpha\to 0}(1+\alpha)^{1/\alpha}=e
\]
		 Therefore:
\[
f'(x)=\frac{1}{x}\log_b(e)=\frac{1}{x\ln(b)}
\]
		 An important special case is the case where $b = e$. Then we have the famous result:
\[
\left(\ln(x)\right)'=\frac{1}{x}
\]
		 \item Derivative of a sum of functions:
		 
		 Let us now consider two functions $u$ and $v$. The sum function $s=u+v$ is differentiable over any interval where $u$ and $v$ are differentiable; we will denote by $s'$ the derivative of the sum. Let us now see what its expression is.
		 
		 Let $a$ be a real number and $u,v$ two functions defined and differentiable at $a$:
\[
s'(a)=\lim_{h\to 0}\frac{u(a+h)+v(a+h)-u(a)-v(a)}{h}=\lim_{h\to 0}\frac{u(a+h)-u(a)}{h}+\lim_{h\to 0}\frac{v(a+h)-v(a)}{h}=u'(a)+v'(a)
\]
		 So the derivative of a sum is the sum of the derivatives.
		 
		 This result can be generalized to a sum of any number of functions.
		 
		 \item Derivative of a product of functions:
		 
		 Let us now consider two functions $u$ and $v$. The product function $p=uv$ is differentiable over any interval where $u$ and $v$ are differentiable; we will denote by $p'$ the derivative of the product. Let us now see what its expression is.
		 
		 Let $a$ be a real number and $u,v$ two functions defined and differentiable at $a$:
\[
p'(a)=\lim_{h\to 0}\frac{u(a+h)v(a+h)-u(a)v(a)}{h}
\]
		 We add to this last relation two terms whose sum is zero, such as:
\[
p'(a)=\lim_{h\to 0}\frac{u(a+h)v(a+h)-u(a+h)v(a)+u(a+h)v(a)-u(a)v(a)}{h}=u(a)v'(a)+u'(a)v(a)
\]
		 But there is a more general formulation than the first derivative of a product:
		 \begin{theorem}
		 	Consider for this purpose, as always, our two functions $u$ and $v$, $n$ times differentiable on an interval $I$.
Then the $uv$ product is $n$ times differentiable on $I$ and:
\[
(uv)^{(n)}=\sum_{k=0}^{n}\binom{n}{k}u^{(k)}v^{(n-k)}
\]
		 	and this constitutes the "Leibniz formula" that we used in the section Calculus for the study of Legendre polynomials (which are themselves essential to our study of Quantum Chemistry).
		 	The proof of this expression is very similar to the one made for Newton's binomial theorem (\SeeChapter{see section Calculus}).
		 \end{theorem}
		 \begin{dem}
		 	Let:
\[
(uv)^{(0)}=uv=\binom{0}{0}u^{(0)}v^{(0)}
\]
		 	On the other hand:
\[
(uv)'=u'v+uv'=\sum_{k=0}^{1}\binom{1}{k}u^{(k)}v^{(1-k)}
\]
		 	The relation is thus at least well initialized.
		 	
		 	The proof is made by induction. Thus, the goal is to show that for all $n \in \mathbb{N}$, if:
\[
(uv)^{(n)}=\sum_{k=0}^{n}\binom{n}{k}u^{(k)}v^{(n-k)}
\]
		 	then:
\[
(uv)^{(n+1)}=\sum_{k=0}^{n+1}\binom{n+1}{k}u^{(k)}v^{(n+1-k)}
\]
		 	We have therefore:
\[
(uv)^{(n+1)}=\left((uv)^{(n)}\right)'=\sum_{k=0}^{n}\binom{n}{k}u^{(k+1)}v^{(n-k)}+\sum_{k=0}^{n}\binom{n}{k}u^{(k)}v^{(n-k+1)}
\]
		 	We will now do a change of variable in the first sum so as not to have the term in $k + 1$ anymore. We put for this purpose $j=k+1$:
\[
\sum_{k=0}^{n}\binom{n}{k}u^{(k+1)}v^{(n-k)}=\sum_{j=1}^{n+1}\binom{n}{j-1}u^{(j)}v^{(n+1-j)}
\]
		 	If we go back to the letter $k$, we have:
\[
\sum_{k=1}^{n+1}\binom{n}{k-1}u^{(k)}v^{(n+1-k)}
\]
		 	So we have:
\[
(uv)^{(n+1)}=\sum_{k=1}^{n+1}\binom{n}{k-1}u^{(k)}v^{(n+1-k)}+\sum_{k=0}^{n}\binom{n}{k}u^{(k)}v^{(n+1-k)}
\]
		 	We want to combine these two sums. For this, we detach the terms in excess in each:
\[
(uv)^{(n+1)}=u^{(n+1)}v^{(0)}+\sum_{k=1}^{n}\left[\binom{n}{k-1}+\binom{n}{k}\right]u^{(k)}v^{(n+1-k)}+u^{(0)}v^{(n+1)}
\]
		 	According to Pascal's formula (\SeeChapter{see section Probabilities}), we have:
\[
\binom{n}{k-1}+\binom{n}{k}=\binom{n+1}{k}
\]
		 	Therefore:
\[
(uv)^{(n+1)}=u^{(n+1)}v^{(0)}+\sum_{k=1}^{n}\binom{n+1}{k}u^{(k)}v^{(n+1-k)}+u^{(0)}v^{(n+1)}
\]
		 	But we have at the same time:
\[
\binom{n+1}{0}=\binom{n+1}{n+1}=1
\]
		 	Therefore:
\[
(uv)^{(n+1)}=\sum_{k=0}^{n+1}\binom{n+1}{k}u^{(k)}v^{(n+1-k)}
\]
		 	\begin{flushright}
				$\square$  Q.E.D.
			\end{flushright}
		 \end{dem}
		 \item Derivative of a composite univariate function:
		 
		 Let us consider the composite function $f=g \circ u=g(u(x))$ of two differentiable functions $g$ and $u$, the first at $u(a)$, the second at $a$. We therefore have:
\[
f'(a)=\lim_{h\to 0}\frac{g(u(a+h))-g(u(a))}{h}
\]
		 Let us now put $k=u(a+h)-u(a)$; then we have:
\[
u(a+h)=u(a)+k,\qquad k\to 0\ \text{when}\ h\to 0
\]
		 Let us continue our previous development:
\[
f'(a)=\lim_{h\to 0}\frac{g(u(a)+k)-g(u(a))}{k}\cdot\frac{u(a+h)-u(a)}{h}=g'(u(a))\,u'(a)
\]
		 Thus the derivative of a composite function is given by the derivative of the outer function, multiplied by the "\NewTerm{inner derivative}\index{inner derivative}". Furthermore, this type of derivation is very important because it is often used in physics under the name of "\NewTerm{(univariate) chain derivation}\index{chain derivation (univariate)}" or simply the "\NewTerm{(univariate) chain rule}\index{chain rule (univariate) }".
		 
		 Let's see what it is. The previously obtained relation can be rewritten in another, more common way:
\[
\frac{\mathrm{d}f}{\mathrm{d}x}=\frac{\mathrm{d}g}{\mathrm{d}u}\frac{\mathrm{d}u}{\mathrm{d}x}
\]
		 Or typically, when we have multiple functions composed with each other:
\[
\frac{\mathrm{d}f}{\mathrm{d}x}=\frac{\mathrm{d}f}{\mathrm{d}u}\frac{\mathrm{d}u}{\mathrm{d}v}\frac{\mathrm{d}v}{\mathrm{d}x}
\]
		 
		 \item Derivative of a composite bivariate function:
		 \begin{theorem}
		 The $t$ derivative of the composite function $z=f(x(t),y(t))$ is:
\[
\frac{\mathrm{d}z}{\mathrm{d}t}=\frac{\partial f}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}+\frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}
\]
		 We assume in this theorem and its applications that $x=x(t)$ and $y=y(t)$ have first derivatives at $t$ and that $z=f(x,y)$ has continuous first order derivatives in an open circle centered at $(x(t),y(t))$.
		 \end{theorem}
		 \begin{dem}
		 	We fix $t$ and set $(x,y)=(x(t),y(t))$. We consider a nonzero $\Delta t$ so small that $(x(t+\Delta t),y(t+\Delta t))$ is in the circle where $f$ has continuous first derivatives, and set $\Delta x=x(t+\Delta t)-x(t)$ and $\Delta y=y(t+\Delta t)-y(t)$. Then by definition of the derivative:
\[
\frac{\Delta z}{\Delta t}=\frac{\big[f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)\big]+\big[f(x,y+\Delta y)-f(x,y)\big]}{\Delta t}
\]
			We can apply the "mean value theorem", which states (see the proof further below during our study of integral calculus):
\[
f(b)-f(a)=f'(c)(b-a),\qquad c\in]a,b[
\]
			 to the expression in the first set of square brackets on the right of the last equality above, where $y$ is constant, and to the expression in the second set of square brackets, where $x$ is constant.
We conclude that there is a number $c_1$ between $x$ and $x+\Delta x$ and a number $c_2$ between $y$ and $y+\Delta y$ such that:
\[
f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)=\partial_x f(c_1,y+\Delta y)\,\Delta x,\qquad f(x,y+\Delta y)-f(x,y)=\partial_y f(x,c_2)\,\Delta y
\]
			We add both relations above and divide by $\Delta t$ to get:
\[
\frac{\Delta z}{\Delta t}=\partial_x f(c_1,y+\Delta y)\frac{\Delta x}{\Delta t}+\partial_y f(x,c_2)\frac{\Delta y}{\Delta t}
\]
			The functions $x=x(t)$ and $y=y(t)$ are continuous at $t$ because they have derivatives at that point. Consequently, as $\Delta t\rightarrow 0$, the numbers $\Delta x$ and $\Delta y$ both tend to zero and the circle including the constants $c_i$ collapses to the point $(x,y)$. Because the partial derivatives of $f$ are continuous, the term $\partial_x f(c_1,y+\Delta y)$ tends to $\partial_x f(x,y)$ and the term $\partial_y f(x,c_2)$ tends to $\partial_y f(x,y)$ as $\Delta t\rightarrow 0$. Moreover:
\[
\frac{\Delta x}{\Delta t}\rightarrow\frac{\mathrm{d}x}{\mathrm{d}t},\qquad \frac{\Delta y}{\Delta t}\rightarrow\frac{\mathrm{d}y}{\mathrm{d}t}
\]
		as $\Delta t\rightarrow 0$, so the above relation becomes:
\[
\frac{\mathrm{d}z}{\mathrm{d}t}=\frac{\partial f}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}+\frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}
\]
			 named the "\NewTerm{multivariate chain rule}\index{multivariate chain rule}" (but in reality it is only the bivariate case...), and which is very important for the study of physics.
			 
			 The latter relation is sometimes written:
\[
\dot{z}=\frac{\partial f}{\partial x}\dot{x}+\frac{\partial f}{\partial y}\dot{y}
\]
		 	\begin{flushright}
			$\square$  Q.E.D.
			\end{flushright}
		 \end{dem}
		 
		 \item Derivative of an inverse function
		 \begin{theorem}
		 	If the function $f$ is continuous and strictly monotonic over an interval $I$, and differentiable over $I$, then the reciprocal function $f^{-1}$ is differentiable on the interval $f(I)$ and admits as derivative function:
\[
\left(f^{-1}\right)'(x)=\frac{1}{f'\left(f^{-1}(x)\right)}
\]
		 \end{theorem}
		 \begin{dem}
		 	Indeed, we can write:
\[
f\left(f^{-1}(x)\right)=x
\]
		 	That is to say (identity application):
\[
f\circ f^{-1}=\mathrm{Id}
\]
		 	By application of the derivation of composite functions seen just above, we have:
\[
f'\left(f^{-1}(x)\right)\cdot\left(f^{-1}\right)'(x)=1
\]
		 	Therefore:
\[
\left(f^{-1}\right)'(x)=\frac{1}{f'\left(f^{-1}(x)\right)}
\]
		 	For a variable $x$, it is more common to write the derivative of the inverse function as follows:
\[
\frac{\mathrm{d}x}{\mathrm{d}y}=\frac{1}{\dfrac{\mathrm{d}y}{\mathrm{d}x}}
\]
		 	\begin{flushright}
			$\square$  Q.E.D.
			\end{flushright}
		 \end{dem}
		 \item Derivative of $\arccos (x)$:
		 
		 	Using the previous result on the reciprocal function and the derivative of $\cos (x)$ proved above, we can calculate the derivative of the function $\arccos (x)$:
\[
\arccos'(x)=\frac{1}{\cos'(\arccos(x))}=\frac{-1}{\sin(\arccos(x))}=\frac{-1}{\sqrt{1-\cos^2(\arccos(x))}}=-\frac{1}{\sqrt{1-x^2}}
\]
		 	\item Derivative of $\arcsin (x)$:
		 
		 	Using the previous result on the reciprocal function and the derivative of $\sin (x)$ proved above, we can calculate the derivative of the function $\arcsin (x)$:
\[
\arcsin'(x)=\frac{1}{\cos(\arcsin(x))}=\frac{1}{\sqrt{1-\sin^2(\arcsin(x))}}=\frac{1}{\sqrt{1-x^2}}
\]
		 	\item Derivative of a quotient of two functions:
		 	
		 	Consider the function:
\[
f=\frac{u}{v}
\]
		 	which is differentiable over any interval where $u$ and $v$ are differentiable functions and where the function $v$ is not null.
		 	
		 	The function $f$ can be considered as the product of two functions: the function $u$ and the function $1/v$.
A product of two functions is differentiable if each is differentiable; it is thus necessary that the function $u$ be differentiable and that the function $1/v$ also be differentiable, which is the case when $v$ is differentiable and not null. We then obtain:
\[
f'=\left(\frac{u}{v}\right)'=\left(u\cdot\frac{1}{v}\right)'=u'\,\frac{1}{v}-u\,\frac{v'}{v^2}=\frac{u'v-uv'}{v^2}
\]
		 	
		 	\item Derivative of the function $\tan(x)$:
		 	
		 	By definition (\SeeChapter{see section Trigonometry}), $\forall x \neq \dfrac{\pi}{2}+k\pi,\ k\in \mathbb{Z}$:
\[
\tan(x)=\frac{\sin(x)}{\cos(x)}
\]
		 	and then, applying the derivative rule for a quotient as proved above, we have:
\[
\tan'(x)=\frac{\cos(x)\cos(x)+\sin(x)\sin(x)}{\cos^2(x)}=\frac{1}{\cos^2(x)}
\]
		 	or:
\[
\tan'(x)=1+\tan^2(x)
\]
		 	\item Derivative of the function $\cot(x)$:
		 	
		 	By definition (\SeeChapter{see section Trigonometry}), $\forall x \neq k\pi,\ k\in \mathbb{Z}$:
\[
\cot(x)=\frac{\cos(x)}{\sin(x)}
\]
		 	and therefore (applying once again the rule of the derivative of a quotient as proved previously):
\[
\cot'(x)=\frac{-\sin^2(x)-\cos^2(x)}{\sin^2(x)}=-\frac{1}{\sin^2(x)}
\]
		 	or:
\[
\cot'(x)=-\left(1+\cot^2(x)\right)
\]
		 	\item Derivative of the function $\arctan(x)$:
		 	
		 	We use the properties of the derivative of reciprocal functions proved previously:
\[
\arctan'(x)=\frac{1}{\tan'(\arctan(x))}=\frac{1}{1+\tan^2(\arctan(x))}=\frac{1}{1+x^2}
\]
		 	\item Derivative of the function $\text{arccot}(x)$:
		 	
		 	We also use the properties of the derivative of reciprocal functions proved previously:
\[
\text{arccot}'(x)=\frac{1}{\cot'(\text{arccot}(x))}=\frac{-1}{1+\cot^2(\text{arccot}(x))}=-\frac{1}{1+x^2}
\]
		 	\item Derivative of $e^x$:
		 	
		 	We will prove in our study of Theoretical Computing (see the section of the same name) that the "Euler number" can be calculated from the series:
\[
e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots
\]
		 	which converges on $\mathbb{R}$. By differentiating term by term this convergent series, we get:
\[
\left(e^x\right)'=\sum_{n=1}^{\infty}\frac{n\,x^{n-1}}{n!}=\sum_{n=1}^{\infty}\frac{x^{n-1}}{(n-1)!}=e^x
\]
		 	Thus the exponential is its own derivative. So now we can afford to study the derivatives of some hyperbolic trigonometric functions (\SeeChapter{see section Trigonometry}) and many other specific cases (see all the other chapters of the book).
		 	\item Derivative of $\sinh(x)$:
		 	
		 	Remember (\SeeChapter{see section Trigonometry}):
\[
\sinh(x)=\frac{e^x-e^{-x}}{2}
\]
		 	So trivially:
\[
\sinh'(x)=\frac{e^x+e^{-x}}{2}=\cosh(x)
\]
		 	
		 	\item Derivative of $\cosh(x)$:
		 	
		 	Remember (\SeeChapter{see section Trigonometry}):
\[
\cosh(x)=\frac{e^x+e^{-x}}{2}
\]
		 	So trivially:
\[
\cosh'(x)=\frac{e^x-e^{-x}}{2}=\sinh(x)
\]
		 	
		 	\item Derivative of $\tanh(x)$:
		 	
		 	Remember (\SeeChapter{see section Trigonometry}):
\[
\tanh(x)=\frac{\sinh(x)}{\cosh(x)}
\]
		 	Therefore, by applying the derivative of a quotient, we obtain:
\[
\tanh'(x)=\frac{\cosh^2(x)-\sinh^2(x)}{\cosh^2(x)}=\frac{1}{\cosh^2(x)}
\]
		 	or:
\[
\tanh'(x)=1-\tanh^2(x)
\]
		 	\item Derivative of $\coth(x)$:
		 	
		 	Remember (\SeeChapter{see section Trigonometry}):
\[
\coth(x)=\frac{\cosh(x)}{\sinh(x)}
\]
		 	Therefore, by applying the derivative of a quotient, we obtain:
\[
\coth'(x)=\frac{\sinh^2(x)-\cosh^2(x)}{\sinh^2(x)}=-\frac{1}{\sinh^2(x)}
\]
		 	
		 	\item Derivative of $\text{arcsinh}(x)$:
		 	
		 	We also use the properties of the derivative of reciprocal functions proved previously:
\[
\text{arcsinh}'(x)=\frac{1}{\cosh(\text{arcsinh}(x))}
\]
		 	But (see again the section Trigonometry):
\[
\cosh^2(x)-\sinh^2(x)=1
\]
		 	and therefore:
\[
\cosh(x)=\pm\sqrt{1+\sinh^2(x)}
\]
		 	Since $\cosh(x)$ takes only positive values, we have:
\[
\cosh(\text{arcsinh}(x))=\sqrt{1+x^2}
\]
		 	Then finally:
\[
\text{arcsinh}'(x)=\frac{1}{\sqrt{1+x^2}}
\]
		 	\item Derivative of $\text{arccosh}(x)$:
		 	
		 	We also use the properties of the derivative of reciprocal functions proved previously:
\[
\text{arccosh}'(x)=\frac{1}{\sinh(\text{arccosh}(x))}
\]
		 	But (see again the section Trigonometry):
\[
\cosh^2(x)-\sinh^2(x)=1
\]
		 	and therefore:
\[
\sinh(x)=\pm\sqrt{\cosh^2(x)-1}
\]
		 	Since $\text{arccosh}(x)$ takes only positive values, so does $\sinh(\text{arccosh}(x))$; then we have:
\[
\sinh(\text{arccosh}(x))=\sqrt{x^2-1}
\]
		 	Then finally:
\[
\text{arccosh}'(x)=\frac{1}{\sqrt{x^2-1}}
\]
		 	\item Derivative of $\text{arctanh}(x)$:
		 	We also use the properties of the derivative of reciprocal functions proved previously:
\[
\text{arctanh}'(x)=\frac{1}{\tanh'(\text{arctanh}(x))}=\frac{1}{1-\tanh^2(\text{arctanh}(x))}=\frac{1}{1-x^2}
\]
		 	\item Derivative of $\text{arccoth}(x)$:
		 	We also use the properties of the derivative of reciprocal functions proved previously:
\[
\text{arccoth}'(x)=\frac{1}{\coth'(\text{arccoth}(x))}=\frac{1}{1-x^2}
\]
		 	\item Derivative of $a^x$ with $a>0$:
\[
a^x=e^{x\ln(a)}
\]
		 	So (derivative of a composite function):
\[
\left(a^x\right)'=\ln(a)\,e^{x\ln(a)}=\ln(a)\,a^x
\]
		 	A numerical spot-check of several of the formulas above is sketched just after this list.
		 \end{enumerate}
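As announced, here is a compact numerical spot-check (plain Python, central finite differences; the evaluation point $x=0.7$ is our own choice) of some of the derivative formulas proved above:

\begin{verbatim}
import math

cases = [
    (math.sin,  math.cos,                         "sin'  = cos"),
    (math.tan,  lambda x: 1 + math.tan(x)**2,     "tan'  = 1 + tan^2"),
    (math.exp,  math.exp,                         "exp'  = exp"),
    (math.log,  lambda x: 1 / x,                  "ln'   = 1/x"),
    (math.asin, lambda x: 1 / math.sqrt(1 - x*x), "asin' = 1/sqrt(1-x^2)"),
    (math.tanh, lambda x: 1 - math.tanh(x)**2,    "tanh' = 1 - tanh^2"),
]

x, h = 0.7, 1e-6
for f, fprime, label in cases:
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    print(f"{label:22} numeric = {numeric:.8f}, formula = {fprime(x):.8f}")
\end{verbatim}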
		 
		 \pagebreak
		\subsubsection{Implicit Differentiation}
	Thus far, the functions we have been concerned with have been defined explicitly. A function is defined explicitly if {\it the output is given directly in terms of the input}. For instance, in the function:
\[
f(x)=3x+2
\]
	the value of $f(x)$ is given explicitly, or directly, in terms of the input. Just by knowing the input we can immediately find the output. A second type of function that is also useful for us to consider is an "\NewTerm{implicitly defined function}\index{implicitly defined function}". A function is defined implicitly if {\it the output cannot be found directly from the input}. For instance (a deliberately simple example):
\[
\sqrt{f(x)}=x
\]
	 is an implicitly defined function, because for each positive $x$ value there is a corresponding $f$ value, but we cannot find it directly from the function. We would need to square both sides, and then we would have the explicitly defined function:
\[
f(x)=x^2
\]
	It is also possible for us to have implicitly defined functions that we cannot rewrite as explicitly defined functions!!! 
	
	For instance we might have the function:
\[
\sin(f(x))+f(x)=x
\]
	For a given $x$ value, there {\it may} be a corresponding output value $f(x)$ which makes this a true statement. In this way a function $f(x)$ would be defined for all such $x$ where there is a solution. For instance, we have $f(0) = 0$, because setting $x=f(x)=0$ in the above equation gives a true statement. Right now we don't have the proper tools to solve such an equation, but the important concept here is that we can have a function defined in such a way. 
	
	When we speak of functions, we mean that we have a rule which provides us with at most one output for a given input (there is no output for inputs at which the function is not defined). In a more general sense we might want to look at rules that provide us with multiple outputs for a given input. Such an example would be the equation:
\[
x^2+y^2=1
\]
	which is the equation of the unit circle (\SeeChapter{see section Analytical Geometry}). It turns out that this object consists of two functions, namely:
\[
y_1=\sqrt{1-x^2}\qquad\text{and}\qquad y_2=-\sqrt{1-x^2}
\]
	The equation of this circle does not define a function (because the output is multi-valued!!), but it does define some type of curve in the $x$-$y$ plane. In general, we should be able to describe an arbitrary curve as a combination of some number of functions. It is sensible (and useful) to consider the slope of certain points on the curve, which would simply correspond to the slope of the specific function that defines that part of the curve. 
	
	To solve the problem of finding the derivative of a function defined in a way such as $\sin(f(x)) + f(x) = x$, or by a curve like a circle, we employ the chain rule once again. The way in which we will employ it is named "\NewTerm{implicit differentiation}\index{implicit differentiation}". The process works as follows: we differentiate both sides of the equation with respect to $x$ (or the independent variable), and then we solve for the derivative of the dependent variable. Anywhere we find the function $f$ (or the dependent variable), we will use the chain rule to find the derivative. Let us begin with a few simple examples:
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Examples:}\\\\
	E1. We want to find $\mathrm{d}y/\mathrm{d}x$ if $y^2 = x$.\\
	
	One method of solving this problem would be rewriting it in terms of an explicit function of $y$ and differentiating. Since we have $y = \pm \sqrt{x}$, we actually have two functions, and we would find:
\[
\frac{\mathrm{d}y_1}{\mathrm{d}x}=\frac{1}{2\sqrt{x}}\qquad\text{and}\qquad\frac{\mathrm{d}y_2}{\mathrm{d}x}=-\frac{1}{2\sqrt{x}}
\]
	This works sufficiently well in this situation, but what about a function we cannot rewrite explicitly? We would need implicit differentiation. Let's apply implicit differentiation to this situation, as a means of exercise:
\[
2y\frac{\mathrm{d}y}{\mathrm{d}x}=1\qquad\Rightarrow\qquad\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{1}{2y}
\]
	Were we unable to rewrite $y$ explicitly in terms of $x$, this is as far as we could go; but we know that $y = \pm \sqrt{x}$, and plugging in this result we find that 
	$$\frac{dy_1}{dx} = \frac{1}{2\sqrt{x}} \quad \text{and} \quad \frac{dy_2}{dx} = -\frac{1}{2\sqrt{x}}$$
	once again. In this way, we were able to calculate both derivatives at once.\\
	
	E2. We want to find $\mathrm{d}f/\mathrm{d}x$ for $\sin(f) + f = x$.\\
	
	Here we have no choice but to apply implicit differentiation:
\[
\cos(f)\frac{\mathrm{d}f}{\mathrm{d}x}+\frac{\mathrm{d}f}{\mathrm{d}x}=1\qquad\Rightarrow\qquad\frac{\mathrm{d}f}{\mathrm{d}x}=\frac{1}{1+\cos(f)}
\]
	\end{tcolorbox}
	A famous case of application in pure mathematics of implicit differentiation (we will see applications to physics later) is the one at the origin of this technique: the "\NewTerm{folium of Descartes}\footnote{The name comes from the Latin word folium which means "leaf".}", defined by:
\[
x^3+y^3=9xy
\]
	The curve was first proposed by René Descartes in... 1638.
	\begin{figure}[H]
		\centering
			\includegraphics[scale=0.9]{img/algebra/folium_of_descartes.jpg}
		\caption{The folium of Descartes (green) with asymptote (blue) (source: Wikipedia)}
	\end{figure}
	Its claim to fame lies in an incident in the development of calculus. Descartes challenged Pierre de Fermat to find the tangent line to the curve at an arbitrary point, since Fermat had recently discovered a method for finding tangent lines. Fermat solved the problem easily, something Descartes was unable to do. Since the invention of calculus, the slope of the tangent line can be found easily using implicit differentiation at any point, as we will show:
\[
\frac{\mathrm{d}}{\mathrm{d}x}\left(x^3+y^3\right)=\frac{\mathrm{d}}{\mathrm{d}x}\left(9xy\right)
\]
	Consider now that we want to find the slope of the curve at the point $(2,4)$. Then we use implicit differentiation: 
\[
3x^2+3y^2\frac{\mathrm{d}y}{\mathrm{d}x}=9y+9x\frac{\mathrm{d}y}{\mathrm{d}x}\qquad\Rightarrow\qquad\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{3y-x^2}{y^2-3x}
\]
	Evaluating the derivative at the point $(2,4)$ we find:
\[
\frac{\mathrm{d}y}{\mathrm{d}x}\bigg|_{(2,4)}=\frac{3\cdot 4-2^2}{4^2-3\cdot 2}=\frac{8}{10}=\frac{4}{5}
\]
	Now that we have the slope of the tangent line at the point of interest, we use the point-slope form\index{point-slope form}:
\[
y-4=\frac{4}{5}(x-2)
\]
	Now, as far as the line normal to the curve at this point is concerned, we need to find the line perpendicular to the tangent line. This line will cross through the same point, but the slope will be the negative reciprocal of the slope of the tangent line. It follows that:
\[
y-4=-\frac{5}{4}(x-2)
\]
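The folium computation can be checked symbolically. A minimal sketch (assuming the Python library \texttt{sympy} is available; its \texttt{idiff} helper computes $\mathrm{d}y/\mathrm{d}x$ from an implicit equation $F(x,y)=0$):

\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
F = x**3 + y**3 - 9*x*y           # folium of Descartes, written as F = 0

dydx = sp.idiff(F, y, x)          # symbolic implicit derivative
print(sp.simplify(dydx))          # (3*y - x**2)/(y**2 - 3*x)
print(dydx.subs({x: 2, y: 4}))    # 4/5, the slope of the tangent at (2, 4)
\end{verbatim}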
	Ok, now let us focus on an example applied to physics (putting aside the famous case of looking for the tangent of an ellipse). We will prove in the section of Continuum Mechanics the Van der Waals equation:
\[
\left(P+\frac{a}{V^2}\right)\left(V-b\right)=RT
\]
	If $T$ remains constant, consider that we want to find the rate of change of the volume with respect to the pressure, that is to say $\mathrm{d}V/\mathrm{d}P$! It would be quite a challenge to calculate this rate by transforming this equation into an explicitly defined function... So we know we have to use implicit differentiation. First:
\[
\frac{\mathrm{d}}{\mathrm{d}P}\left[\left(P+\frac{a}{V^2}\right)\left(V-b\right)\right]=\frac{\mathrm{d}}{\mathrm{d}P}\left(RT\right)=0
\]
	This gives immediately:
\[
\left(1-\frac{2a}{V^3}\frac{\mathrm{d}V}{\mathrm{d}P}\right)\left(V-b\right)+\left(P+\frac{a}{V^2}\right)\frac{\mathrm{d}V}{\mathrm{d}P}=0
\]
	Now we distribute:
\[
\left(V-b\right)-\frac{2a}{V^3}\left(V-b\right)\frac{\mathrm{d}V}{\mathrm{d}P}+\left(P+\frac{a}{V^2}\right)\frac{\mathrm{d}V}{\mathrm{d}P}=0
\]
	After simplification and rearrangement we get:
\[
\left[P+\frac{a}{V^2}-\frac{2a}{V^3}\left(V-b\right)\right]\frac{\mathrm{d}V}{\mathrm{d}P}=-\left(V-b\right)
\]
	Finally:
\[
\frac{\mathrm{d}V}{\mathrm{d}P}=\frac{-\left(V-b\right)}{P+\dfrac{a}{V^2}-\dfrac{2a}{V^3}\left(V-b\right)}
\]
	The special case of the examples above, with the Van der Waals equation and the folium of Descartes, is in some textbooks introduced as follows, with a two-variable case:
\[
F(x,y)=0
\]
	Therefore:
\[
\mathrm{d}F=\frac{\partial F}{\partial x}\mathrm{d}x+\frac{\partial F}{\partial y}\mathrm{d}y=0
\]
	This reduces to:
\[
\frac{\partial F}{\partial y}\,\mathrm{d}y=-\frac{\partial F}{\partial x}\,\mathrm{d}x
\]
	Finally:
\[
\frac{\mathrm{d}y}{\mathrm{d}x}=-\frac{\partial F/\partial x}{\partial F/\partial y}
\]
	That's it for now. We will stop here on this topic, as we do not need more techniques or examples for our study of physics and engineering as introduced in this book.
	
		\pagebreak
		\subsubsection{Smoothness}
		 Smoothness has to do with how many derivatives of a function exist and are continuous. The term smooth function is often used technically to mean a function that has derivatives of all orders everywhere in its domain.
		 
		 \textbf{Definition  (\#\mydef):} A "\NewTerm{differentiability class}\index{differentiability class}" is a classification of functions according to the properties of their derivatives. Higher order differentiability classes correspond to the existence of more derivatives.
		 
		 Consider an open set on the real line $\mathbb{R}$ and a function $f$ defined on that set, with real or complex values. Let $k$ be a non-negative integer. The function $f$ is said to be of (differentiability) class $\mathcal{C}^k$ if the derivatives $f', f'', \ldots, f^{(k)}$ exist and are continuous (the continuity is implied by differentiability for all the derivatives except $f^{(k)}$). The function $f$ is said to be of class $\mathcal{C}^\infty$, or smooth, if it has derivatives of all orders. The function $f$ is said to be of class $\mathcal{C}^\omega$, or simply "analytic", if $f$ is smooth and if it equals its Taylor series expansion around any point in its domain (\SeeChapter{see section Sequences and Series}). $\mathcal{C}^\omega$ is thus strictly contained in $\mathcal{C}^\infty$.

		\pagebreak
		 \subsection{Integral Calculus}
		 We will discuss here the basic principles of integral calculus in $\mathbb{R}^n$. More advanced topics will come depending on the time available to the authors of this book, but the reader can already refer to the Complex Analysis section for integration techniques based on the Residue Theorem, which is particularly powerful and useful, especially for some integrals in quantum physics.
		 
		 \subsubsection{Definite Integral}
		 The origin of Integral Calculus seems to come from Archimedes, who was fascinated with calculating the areas of various shapes. He used a process that has come to be known as the \textit{method of exhaustion}, which used smaller and smaller shapes, the areas of which could be calculated exactly, to fill an irregular region and thereby obtain closer and closer approximations to the total area. In this process, an area bounded by curves is filled with rectangles, triangles, and shapes with exact area formulas. These areas are then summed to approximate the area of the curved region. This subsection introduces the definite integral.
		 		 
		 The first idea of the concept of the integral is to calculate the algebraic area (positive if above the $x$-axis, negative if below) between a curve and its support.
See the figure below with a positive area and the notations for the developments that will follow:
		 \begin{figure}[H]
			\centering
			\includegraphics{img/algebra/integral_all_positive.jpg}
			\caption[]{Area $A$ to be calculated in a bounded continuous positive function}
		\end{figure}
		Or with different algebraic areas (the difference between the blue area and the yellow area is named the "\NewTerm{net signed area}\index{net signed area}"):
		 \begin{figure}[H]
			\centering
			\includegraphics{img/algebra/integral_all_positive_and_negative.jpg}
			\caption[]{Area $A$ to be calculated in a bounded continuous positive and negative function}
		\end{figure}
		An approximate value of the area under a curve can be achieved by a division into $n$ vertical rectangular bands of the same width. In particular, we can achieve a framing of this area with the help of a sum of upper-bound areas (majorant) $A_M$ or a lower-bound sum (minorant) $A_m$ for a given cutting:
		\begin{figure}[H]
			\centering
			\begin{subfigure}{0.4\textwidth}
				\includegraphics[width=\textwidth]{img/algebra/integral_minorant_sum.jpg}
				\caption{Minorant sum of areas $A_m$}
			\end{subfigure}
			\begin{subfigure}{0.4\textwidth}
				\includegraphics[width=\textwidth]{img/algebra/integral_majorant_sum.jpg}
				\caption{Majorant sum of areas $A_M$}
			\end{subfigure}				
		\end{figure}
		Suppose now that the number $n$ of bands tends to infinity. As the bands are of the same width, the width of each band tends to $0$ (objectively it is not necessary that the width of the cutting of the subintervals be the same everywhere).
		
		If the sums $A_m$ and $A_M$ both have a limit when the number $n$ of bands tends to infinity, then the area $A$ under the curve is between these two limits. We write this:
\[
\lim_{n\to\infty}A_m\leq A\leq\lim_{n\to\infty}A_M
\]
		Obviously, if these two limits are equal, their common value is that of the area $A$ under the curve.
		
		Hence a first direct definition of the definite integral, also named the "\NewTerm{Riemann integral}\index{Riemann integral}".
		
		\textbf{Definition (\#\mydef):}
		Given an interval $[a, b]$, divided into $n$ equal parts, let $f$ be a continuous function on the interval $[a, b]$; consider $A_m$, the algebraic minorant sum of areas, and $A_M$, the algebraic majorant sum. We name "\NewTerm{definite integral}\index{definite integral}" of $f$ from $a$ to $b$, denoted by:
\[
\int\limits_a^b f(x)\,\mathrm{d}x
\]
		the number $A$ such that:
\[
A=\lim_{n\to\infty}A_m=\lim_{n\to\infty}A_M
\]
		provided that this limit exists. If this limit exists, then we say that $f$ is "integrable" on $[a, b]$ and the definite integral exists.
		
		The symbol:
\[
\int
\]
		is only the symbol of a discrete sum $\sum$, but applied to the case of infinitely small increments.
		
		The numbers $a$ and $b$ of the integral (which can sometimes also be functions!) are named the "\NewTerm{integration limits}\index{integral limits}" or "\NewTerm{integration bounds}\index{integral bounds}": $a$ is the "\NewTerm{lower bound}", $b$ is the "\NewTerm{upper bound}".
		\begin{center}
		\begin{tikzpicture}[scale=2.3]
		  \shade[top color=blue,bottom color=gray!50] 
		      (0,0) parabola (1.5,2.25) |- (0,0);
		  \draw (1.05cm,2pt) node[above] 
		      {$\displaystyle\int_0^{3/2} \!\!x^2\mathrm{d}x$};
		
		  \draw[style=help lines] (0,0) grid (3.9,3.9)
		       [step=0.25cm]      (1,2) grid +(1,1);
		
		  \draw[->] (-0.2,0) -- (4,0) node[right] {$x$};
		  \draw[->] (0,-0.2) -- (0,4) node[above] {$f(x)$};
		
		  \foreach \x/\xtext in {1/1, 1.5/1\frac{1}{2}, 2/2, 3/3}
		    \draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
		
		  \foreach \y/\ytext in {1/1, 2/2, 2.25/2\frac{1}{4}, 3/3}
		    \draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
		
		  \draw (-.5,.25) parabola bend (0,0) (2,4) node[below right] {$x^2$};
		\end{tikzpicture}
		\end{center}
		
		Intuitively, it is obvious that when $a=b$ we extend the definition as follows:
\[
\int\limits_a^a f(x)\,\mathrm{d}x=0
\]
		Finally, notice that it is quite possible for the result of the integral to be negative, or even complex, since it is an algebraic surface! That is to say, the result can in general be in $\mathbb{C}$.
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} Other letters than $x$ can be used in the evaluation of the definite integral. So if $f$ is integrable on $[a, b]$, then $\int\limits_a^bf(x)\mathrm{d}x=\int\limits_a^bf(t)\mathrm{d}t=\int\limits_a^bf(s)\mathrm{d}s$, etc. This is why the variable $x$ is sometimes named a "\NewTerm{dummy variable}\index{dummy variable}".\\

	\textbf{R2.} As we will see below, it is essential not to confuse a "\NewTerm{definite integral}\index{definite integral}" with an "\NewTerm{indefinite integral}\index{indefinite integral}". Thus, an indefinite integral, denoted $\int\limits f(x)\mathrm{d}x$, is a function, or more precisely, a family of functions also named "\NewTerm{primitives of $f$}\index{primitive of a function}" (see below), while a definite integral, denoted $\int\limits_a^b f(x)\mathrm{d}x$, is considered as a constant.
	\end{tcolorbox}
	Let us present a second approach to the definition of the integral, somewhat more rigorous than the previous one (following the request of several readers). We will use this time, by tradition, the surface $S$ instead of the area $A$.
	
	Let $f$ be a bounded function on $[a, b]$. We consider a subdivision $\sigma$ of its support $[a, b]$ that we note:
\[
\sigma:\quad a=x_0<x_1<\cdots<x_{n-1}<x_n=b
\]
	where the intervals are not necessarily of equal sizes.

	We write for $i=1,2,3,\ldots,n$:	
\[
m_i=\inf_{x\in[x_{i-1},x_i]}f(x),\qquad M_i=\sup_{x\in[x_{i-1},x_i]}f(x)
\]
	
	\textbf{Definitions (\#\mydef):}
	\begin{enumerate}
		\item[D1.] We name "\NewTerm{lower Darboux sum}\index{lower Darboux sum}", associated with $f$ and $\sigma$, the surface:
\[
s(f,\sigma)=\sum_{i=1}^{n}m_i\left(x_i-x_{i-1}\right)
\]
		\begin{figure}[H]
			\centering
			\includegraphics{img/algebra/darboux_inferior_sum_concept.jpg}
			\caption{Principle of calculating the lower Darboux sum}
		\end{figure}
		\item[D2.]
We name \"\\NewTerm{upper Darboux sum}\\index{upper Darboux sum}\" associated with $f$ and $\\sigma$ the surface:\n\t\t\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/algebra/darboux_superior_sum_concept.jpg}\n\t\t\t\\caption{Principle of calculating the upper Darboux sum}\n\t\t\\end{figure}\n\t\\end{enumerate}\n\tA function $f$ is said to be \"\\NewTerm{Riemann-integrable on $[a, b]$}\" if and only if the above two surfaces coincide when the intervals become infinitely small.\n\t\n\tAll Riemann-integrable functions on $[a, b]$ are denoted by $\\mathcal{R}_{[a,b]}$.\n\t\n\tDarboux sums are not very useful for the effective calculation of an integral, for example using a computer, because it is usually quite difficult to find the $\\inf$ and $\\sup$ on sub-intervals. Rather, we consider:\n\t\n\tThe \"\\NewTerm{Riemann sum}\\index{Riemann sum}\" is defined from the fact that we if denote a \"\\NewTerm{partition}\\index{partition}\" (or \"\\NewTerm{regular partition}\" if they all have the same width) of the interval $[a,b]$ by:\n\t\n\tand that:\n\t\n\twhere $\\xi_i \\in [x_{i-1},x_i]$, then:\n\t\n\tBut as we must choose an $\\xi_i$, we often takes either the right or the left one, thus taking randomly the \"method of left rectangles\":\n\t\n\tWhich would give us the example below:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/left_rectangle_integral.jpg}\n\t\t\\caption{Principle of calculating methods using left rectangles}\n\t\\end{figure}\n\teither:\n\t\n\tBut it is easy for a step function... but it is less so for a continuous function for which we will obtain only an approximate value of the actual surface! The idea is then to take intervals smaller and smaller:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/riemann_integral_left_rectangle.jpg}\n\t\t\\caption{Principle of the calculation of the Riemann integral with left rectangles method}\n\t\\end{figure}\n\tAnd then, at the limit, we obtain the desired quantity:\n\t\n\tThe fact to search this limit is named \"calculate the integral,\" and more specifically for the chosen method: \"calculate the Riemann integral\".\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tWe want to use the construction of the definite integral to evaluation:\n\t\n\tusing a right-endpoint approximation to generate the Riemann sum.\\\\\n\t\n\tFor this purpose we first set up the Riemann sum. Based on the limits of integration, we have $a=0$ and $b=2$. For $i=0,1,2,\\ldots,n$, let $P=\\{x_i\\}$ be a regular partition of $[0,2]$. Then:\n\t\n\tThus, the function value at the right endpoint of the interval is:\n\t\n\tThen the Riemann sum takes the form:\n\t\n\tUsing the Gauss summation relation of $\\sum_{i=1}^n i^2$ (\\SeeChapter{see section Sequences and Series}), we have:\n\t\n\tNow, to calculate the definite integral, we need to take the limit as $n\\rightarrow+\\infty$. 
\subsubsection{Indefinite Integral}\n\tWe have seen before, in our study of derivatives, the following problem: given a function denoted by $F(x)$, find its derivative $f(x)$, that is to say the function:\n\t\n\t\textbf{Definition (\#\mydef):} We say that the function $F(x)$ is a "\NewTerm{primitive}\index{primitive}" or "\NewTerm{indefinite integral}\index{indefinite integral}" of the function $f(x)$ on any segment $[a, b]$ if at any point of such a segment we have $F'(x)=f(x)$.\n\t\n\tTwo more explicit and less technical alternative definitions are:\n\t\begin{itemize}\n\t\t\item An "indefinite integral" is a FUNCTION of $x$ (or another variable), while a "definite integral" is a VALUE!\n\t\t\n\t\t\item The collection of all antiderivatives of $f(x)$ is named the "indefinite integral" of $f$ with respect to $x$.\n\t\end{itemize}\n\n\t\n\tAnother way to see the indefinite integral concept is to go through the "\NewTerm{fundamental theorem of integral and differential calculus}\index{fundamental theorem of integral and differential calculus}", also sometimes named the "\NewTerm{fundamental theorem of calculus}\index{fundamental theorem of calculus}", whose two properties are stated as follows:\n\t\begin{enumerate}\n\t\t\item If $A$ (area) is the function defined by $A(X)=\displaystyle\int\limits_a^X f(t)\mathrm{d}t$ for each $X$ in any $[a, b]$, then $A$ is the primitive of $f$ on $[a, b]$ which vanishes at $a$ (or in other words: $f(t)$ is the derivative of $A$).\n\t\t\item If $F$ is a primitive of $f$ on any $[a, b]$, then:\n\t\t\[\int\limits_a^b f(t)\mathrm{d}t=F(b)-F(a)\]\n\t\end{enumerate}\n\tLet us prove the first property of this fundamental theorem:\n\t\begin{dem}\n\tGiven the function:\n\t\[A(X)=\int\limits_a^X f(t)\mathrm{d}t\]\n\tIf $f$ is positive and $h>0$ (the proof in the case where $h<0$ is similar), and as $X>a$, we know that we can think of $A(X)$ as the area under the curve of $f$ from $t=a$ to $t=X$.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/area_primitive_representation.jpg}\n\t\t\caption{Graphical representation of the area}\n\t\end{figure}\n\tTo show that $A$ is a primitive of $f$, we will prove that $A'=f$. According to the definition of the derivative:\n\t\[A'(X)=\lim_{h \to 0}\frac{A(X+h)-A(X)}{h}\]\n\tLet us study this quotient: $A(X+h)-A(X)$ is represented by the area of the strip of width $h$, sandwiched between two rectangles of width $h$.\n\t\n\tGiven $M$ the maximum of $f$ on the interval $[X,X+h]$ and $m$ the minimum of $f$ over the same interval, the respective areas of the two rectangles are $Mh$ and $mh$.\n\t\n\tWe then have the following double inequality:\n\t\[mh \leq A(X+h)-A(X) \leq Mh\]\n\tAs $h$ is positive, we can divide by $h$ without changing the direction of the inequalities:\n\t\[m \leq \frac{A(X+h)-A(X)}{h} \leq M\]\n\tWhen $h\rightarrow 0^+$, and if $f$ is a continuous function, then $M$ and $m$ both have $f(X)$ for limit, and the ratio:\n\t\[\frac{A(X+h)-A(X)}{h}\]\n\twhich is between $m$ and $M$, effectively has $f(X)$ for limit.\n\t\n\tAs $A'(X)=f(X)$ for all $X$, this shows that the derivative of the area function is $f$, and therefore that $A$ is a primitive of $f$. As $A(a)=0$, $A$ is effectively the primitive of $f$ which vanishes at $a$.\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}
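A one-line Maple illustration of this first property (the choices $f=\cos$ and $a=0$ are our own arbitrary example): differentiating the area function recovers the integrand:\\\n\t\texttt{>A := X -> int(cos(t), t = 0..X):\\\n\t>diff(A(X), X);}\\\n\tMaple returns $\cos(X)$, as expected.\n\t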
Before starting the proof of the second property of the fundamental theorem, let us state and prove the following theorem, which will be essential to us: if $F_1(x)$ and $F_2(x)$ are two primitives of $f(x)$ on any segment $[a, b]$, their difference is a constant (this result is very important in physics in terms of the study of what we name the "initial conditions").\n\t\begin{dem}\n\tBy the definition of the concept of primitive, we have:\n\t\n\tfor all $x \in [a,b]$.\n\t\n\tLet us write:\n\t\n\tWe can write:\n\t\n\tSo it comes, from what we saw during our study of derivatives, that:\n\t\n\tThen we have:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tIt follows from this theorem that if we know any primitive $F(x)$ of the function $f(x)$, then any other primitive of this function will be of the form:\n\t\n\tSo finally, we name "\NewTerm{indefinite integral}\index{indefinite integral}" of $f(x)$, and we denote by:\n\t\n\tany expression of the form $F(x)+c^{te}$ where $F(x)$ is a primitive of $f(x)$. Thus, by writing convention:\n\t\n\tif and only if $F'(x)=f(x)$.\n\t\n\tIn this context, $f(x)$ is also named the "\NewTerm{integrand function}\index{integrand function}" and $f(x) \mathrm{d}x$ the "\NewTerm{function under the sum sign}".\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\tAn "\NewTerm{antiderivative}\index{antiderivative}" of a function $f$ is one function $F$ whose derivative is $f$. The indefinite integral of $f$ is the set of ALL antiderivatives of $f$. Therefore an indefinite integral and an antiderivative are not the same thing, as the first is the set of all the second! If $f$ and $F$ are as just described, the indefinite integral of $f$ has the form $\{F+c^{te}\,|\,c^{te}\in\mathbb{R}\}$, whereas an antiderivative is just one element of this set!! But not all teachers agree on the definition of antiderivatives...\n\t\end{tcolorbox}\n\t\n\tGeometrically, we can consider the indefinite integral as a set (family) of curves such that we move from one to another by performing a translation in the positive or negative direction of the vertical axis.\n\t\n\tLet us return to the proof of item (2) of the fundamental theorem of integral (and differential) calculus:\n\t\begin{dem}\n\tLet $F$ be a primitive of $f$. Since two primitives differ by a constant, we indeed have:\n\t\n\tthat we can also write:\n\t\n\tfor all $X$ in $[a, b]$. The particular case $X=a$ gives $\int\limits_a^a f(t)\mathrm{d}t=0$, therefore $F(a)+c^{te}=0$ and we obviously get $c^{te}=-F(a)$. Substituting, we get:\n\t\n\tAs this identity is valid for all $X$ in the interval $[a,b]$, it is true in particular for $X=b$. Therefore:\n\t\[\int\limits_a^b f(t)\mathrm{d}t=F(b)-F(a)\]\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}
This last result also shows something useful: it is not necessary, when we evaluate an integral, to take into account the constant of the general primitive, since it is canceled in the difference of the two primitives!!\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} The above fundamental theorem, which shows the link between primitive and integral, leads us to use the same symbol $\int$ to write a primitive (introduced by Leibniz in the late 17th century), which is a function, and an integral, which is a number.\\\n\t\n\t\textbf{R2.} We also proved in the section of Analytical Mechanics, using an integral, how to calculate the full length of a curve in the plane if the function $f(x)$ is explicitly known.\n\t\end{tcolorbox}\n\tHere are some elementary properties of integration that are good to remember, because they are often used elsewhere in this book (if they do not seem obvious to you, contact us and we will give the detailed proof):\n\t\begin{enumerate}\n\t\t\item[P1.] The derivative of an indefinite integral is equal to the integrand:\n\t\t\n\t\t\item[P2.] The differential of an indefinite integral is equal to the expression under the integral sign:\n\t\t\n\t\t\item[P3.] The indefinite integral of the differential of a given function is equal to the sum of this function and an arbitrary constant:\n\t\t\n\t\t\item[P4.] The indefinite integral of the sum (or difference) of two or more functions is equal to the algebraic sum of their integrals (do not forget that we work with the set of all primitives and not with one specific given primitive!):\n\t\t\n\t\t\begin{dem}\n\t\tTo prove this, following the request of a reader, we will show that differentiating the left-hand side gives the same result as differentiating the right-hand side, and vice versa (reciprocally), using the above properties.\n\t\t\n\t\tAccording to P1 we have:\n\t\t\t\t\n\t\tLet us check that it is the same with the right-hand side (we assume known the properties of derivatives that we proved earlier in this section):\n\t\t\n\t\t\begin{flushright}\n\t\t\t$\square$  Q.E.D.\n\t\t\end{flushright}\n\t\t\end{dem}\n\t\t\item [P5.] We can take a constant factor out of the integral sign, that is to say:\n\t\t\n\t\tWe justify this equality by differentiating the two members (and according to the properties of derivatives):\n\t\t\n\t\t\item[P6.] We can take out a constant factor from the argument of the integrated function (rather rarely used):\n\t\t\n\t\tIndeed, by differentiating both members of the equality we have, following the properties of derivatives:\n\t\t\n\t\t\item[P7.] The integral of a function whose argument is summed (or subtracted) with a constant is algebraically the primitive with the argument summed (respectively subtracted):\n\t\t\n\t\tThis property can be shown identically to the previous one, using the derivative properties as well.\n\t\t\item[P8.] The combination of properties P6 and P7 allows us to write:\n\t\t\n\t\t\item[P9.] Let $f$ be a continuous function on $[a, b]$; we have, for all $c$ belonging to this interval:\n\t\t\n\t\tThis theorem, sometimes named the "\NewTerm{Chasles relation}\index{Chasles relation}" (by analogy with its vector equivalent), follows directly from the definition of the integral. $F$ being a primitive of $f$ on $[a, b]$, we have:\n\t\t\n\t\t\item[P10.] This is a property often used in the section of Statistics (we did not find an easy way to express this property in everyday language, so...):\n\t\t\n\t\tLet us now see two properties that will sometimes be helpful to calculate difficult integrals:\n\t\t\item[P11.] If a function is even (\SeeChapter{see section Functional Analysis}), the integral on symmetric bounds is equivalent to:\n\t\t\n\t\t\item[P12.] If a function is odd (\SeeChapter{see section Functional Analysis}), the integral on symmetric bounds is equivalent to:\n\t\t\n\t\t\n\t\t\item[P13.] The integral of a periodic function is invariant under a shift of its integration interval. This is a property that we will use further below to finalize the proof of the integral representation of the zero order Bessel function of the first kind:\n\t\n\t\tIf $f$ is a periodic function of period $T$, we know that for any value $a$:\n\t\t\n\t\tSo now consider:\n\t\t\n\t\tWe do the change of variable $y=t-T$; then we have, for the last integral:\n\t\t\n\t\tTherefore:\n\t\t\n\t\end{enumerate}
\pagebreak\n\t\subsubsection{Double Integral}\n\tThe idea of double integrals is to measure the volume of the region bounded by the graph of a function of two variables over a domain $D$ of the plane (below, $D$ is rectangular):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/double_integral_square_domain.jpg}\n\t\t\caption{Example of a continuous function of two variables over a square domain}\n\t\end{figure}\n\tIt should be obvious to the reader that double integrals are extremely important in the field of Applied Mathematics!\n\t\n\tAgain, the idea is the same as for the definite integral. If we adopt a simplistic approach, we decompose the continuous function like a staircase, and the volume to calculate is then reduced to the sum of the volumes of parallelepipeds:\n\t\t\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/double_integral_square_domain_decomposition_into_big_parallelepipeds.jpg}\n\t\t\caption{Volume decomposition into big parallelepipeds}\n\t\end{figure}\n\tTherefore we have the double sum:\n\t\n\tFor a continuous function, we proceed by successive approximations: we calculate the Riemann sums for thinner and thinner subdivisions of the domain $D$:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/double_integral_square_domain_decomposition_into_thin_parallelepipeds.jpg}\n\t\t\caption{Decomposition of the volume into thinner and thinner parallelepipeds}\n\t\end{figure}\n\tand therefore at the limit:\n\t\n\tBut... when we want to integrate over an area that is not rectangular, things get a priori more complicated... Let's see a workaround.\n\t\n\tFor this purpose, we will build a closed bounded domain $D$ as follows:\n\t\n\twhere the reader will have noticed that the support is the variable $x$, through the two functions $u$ and $v$. This then is what we name a "\NewTerm{type I domain}\index{domain of definition!type I domain}" (and therefore, if it is $y$ that parameterizes $x$, it is a "\NewTerm{type II domain}\index{domain of definition!type II domain}").\n\n\tThis can be illustrated by the figure below:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/double_integral_type_I_domain_example.jpg}\n\t\t\caption{Example of a type I domain}\n\t\end{figure}\n\twhere we notice that this simplistic approach (there are other possible approaches, but they need the use of Measure Theory) requires that the domain be simply convex\footnote{As we have seen in the section of Geometric Shapes, a region is convex if the line segment joining any two points of the region lies entirely inside it.} (that is to say, there are no holes in the domain $D$ between $u(x)$ and $v(x)$), or decomposed into simply convex disjoint subdomains.\n\n\t\n\tTo summarize, we can integrate as follows:\n\t\n\tSo we transform the double integral into two nested simple integrals.
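Here is a minimal Maple sketch of such nested integration (the integrand $xy$ and the type I domain $0\leq y\leq x$, $0\leq x\leq 1$ are arbitrary illustrative choices of ours):\\\n\t\texttt{>int(int(x*y, y = 0..x), x = 0..1);}\\\n\tMaple evaluates the inner integral first, obtaining a function of $x$ alone, then integrates that in turn and returns $1/8$.\n\t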
\paragraph{Fubini's theorem}\mbox{}\\\\\n\tWe will now see an important theorem, used repeatedly in different sections of this book, which permits us to reverse the order of integration.\n\t\n\tRemembering that:\n\t\n\twe can also use:\n\t\n\tSo with this parametrization we can write:\n\t\n\tWe can change the order of integration; the calculation is different, but the result is the same. But that's not what really interests us here!\n\t\n\tConsider a function such that (we say that such a function is "separable in its variables"):\n\t\n\tTherefore:\n\t\n\tSuppose that the domain is a rectangle (we make this simplification because otherwise the proof becomes considerably more complicated). Meaning:\n\t\n\tTherefore, by the linearity property of the integral:\n\t\n\t\n\t\subsubsection{Integration by Substitution}\n\tWhen we cannot easily determine the primitive of a given function, we can sometimes, with a smart change of variables (sometimes a very subtle one...), bypass the difficulty. It does not work every time (because some functions are not formally integrable), but it is worth a try before taking out your computer.\n\t\n\tAgain, we give only the general form of the method. It is the role of teachers in schools to train, and train, and train students to understand and master these techniques. In addition, the sections of this book that treat the exact sciences (physics, computer science, astrophysics, chemistry, ...) are replete with examples using this technique, and thus serve implicitly as exercises.\n\t\n\tSuppose we want to calculate the integral (indefinite for the moment):\n\t\n\tAlthough we do not know directly how to calculate the primitive of this function $f(x)$ (at least we imagine being in that situation), we know (in one way or another) that it exists (we do not treat improper integrals at this level).\n\n\tThe technique then consists in performing, in this integral, the following change of variable:\n\t\n\twhere $\varphi (t)$ is a continuous function with a continuous derivative, and having an inverse function. Then $\mathrm{d}x=\varphi' (t)\mathrm{d}t$, and let us now prove that in this case the equality:\t\n\t\n\tis satisfied.\n\t\begin{dem}\n\tWe imply here that the variable $t$ will be replaced, after integration of the right member, by its expression as a function of $x$. To justify the equality in this sense, it suffices to show that the two quantities considered, each of which is defined only up to an arbitrary constant, have the same derivative with respect to $x$. The derivative of the left member is:\n\t\n\tWe now differentiate the right member with respect to $x$, considering that $t$ is a function of $x$. We know that:\n\t\n\tWe therefore get:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}
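For instance (an illustrative choice of ours), for $\int xe^{x^2}\mathrm{d}x$ the change of variable $t=x^2$, $\mathrm{d}t=2x\mathrm{d}x$ reduces the integral to $\frac{1}{2}\int e^t\mathrm{d}t=\frac{1}{2}e^t=\frac{1}{2}e^{x^2}$. Maple confirms the primitive directly (omitting, as usual, the integration constant):\\\n\t\texttt{>int(x*exp(x*x), x);}\\\n\twhich returns $e^{x^2}/2$.\n\t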
Obviously, the function $x=\varphi (t)$ must be chosen so that we know how to calculate the indefinite integral on the right side of the equality.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSometimes it is preferable to choose the change of variable in the form $t=\varphi (x)$ instead of $x= \psi(t)$, because this has a large tendency to shorten the expression instead of making it longer.\n\t\end{tcolorbox}\n\tIt is obvious that this theorem will be written more explicitly as:\n\t\n\t\n\t\pagebreak\n\t\paragraph{Jacobian}\mbox{}\\\\\n\tConsider a domain $D$ of the plane delimited by a curve $L$. Suppose that the coordinates $x, y$ are functions of the new variables $u, v$ (always in the context of a change of variables!) through the following relations:\n\t\n\twhere the functions $\varphi (u,v)$ and $\phi(u,v)$ are single-valued, continuous and have continuous derivatives in a given domain $D'$ which we will define later. Therefore, following the previous relations, to any pair of values $u, v$ corresponds only one couple of values $x, y$, and vice versa.\n\t\n\tIt follows from what was just said that to any point $P(x,y)$ of the plane $\text{O}xy$ corresponds univocally a point $P'(u, v)$ of the plane $\text{O}uv$ with coordinates $u, v$ defined by the above relations. The numbers $u$ and $v$ will be named the "\NewTerm{curvilinear coordinates}\index{curvilinear coordinates}" of $P$, and we will see concrete and schematic examples of these in the section of Vector Calculus.\n\t\n\tIf in the $\text{O}xy$ plane the set of points $P$ describes a closed curve $L$ defining a domain $D$, the corresponding set of points describes in $\text{O}uv$ a given domain $D'$. To any point of $D'$ corresponds a point of $D$. Thus, the transformation relations establish a bi-univocal correspondence between the points of the domains $D$ and $D'$.\n\t\n\tNow consider in $D'$ a straight line of equation $u=c^{te}$. In general, the transformation relations make a curved line in the plane $\text{O}xy$ correspond to it (or vice versa). Therefore, let us cut the domain $D'$ by multiple straight lines of equations $u=c^{te}$ and $v=c^{te}$ into small rectangular areas (at the limit, we will not take into account the rectangles on the boundary of $D'$). The corresponding curves of the domain $D$ then cut the latter into curvilinear quadrilaterals. Obviously, the reverse applies!\n\t\n\tConsider in the plane $\text{O}uv$ the rectangle $\Delta s'$ limited by the straight lines:\n\t\n\tand the corresponding curvilinear quadrilateral $\Delta s$ in the plane $\text{O}xy$. We will also designate the areas of these partial domains by $\Delta s'$ and $\Delta s$.
We obviously have:\n\t\n\tThe areas $\Delta s$ and $\Delta s'$ are in general different.\n\t \begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.8]{img/algebra/non_linear_map.jpg}\n\t\t\caption{A nonlinear map $f : \mathbb{R}^2\rightarrow \mathbb{R}^2$ sends a small square to a distorted parallelogram}\n\t\end{figure}\n\tSuppose given in $D$ a continuous function $z=f(x,y)$. To any value of this function in the domain $D$ corresponds the same value $z=F(u,v)$ (which is what we want to check) in $D'$, where:\n\t\n\tConsider the integral sums of the function $z$ in the domain $D$. We obviously have the following equality:\n\t\n\tLet us calculate $\Delta s$, that is to say the area of the curvilinear quadrilateral $P_1P_2P_3P_4$ in the plane $\text{O}xy$:\n\t\n\tWe determine the coordinates of its vertices:\n\t\n\t\n\tWe will assimilate, in the calculation of the area of the quadrilateral $P_1P_2P_3P_4$, the arcs $P_1P_2,P_2P_3,P_3P_4,P_4P_1$ to parallel line segments. We will also replace the increments of the functions by their differentials. This means that we neglect the infinitely small differentials of higher order than $\Delta u$ and $\Delta v$. The previous relations then become:\n\t\n\n\tUnder these assumptions, the curvilinear quadrilateral $P_1P_2P_3P_4$ can be likened to a parallelogram. Its area is therefore approximately equal to twice the area of the triangle $P_1P_2P_3$, an area that we can calculate by using the properties of the determinant (as we will prove in the section on Linear Algebra, in $\mathbb{R}^2$ the determinant represents the area of a parallelogram, while in $\mathbb{R}^3$ it represents the volume of a parallelepiped):\n\t\n\tSuch that (it is here that the best choice has to be made so that the final expression is the simplest and most aesthetic; for this purpose we proceed by trials, and finally we make the choice below):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/graphical_representaton_of_determinant.jpg}\n\t\t\caption{Graphical representation of the determinant}\n\t\end{figure}\n\n\tThus we have:\n\t\n\tTherefore the following relation (containing what is usually named the "\NewTerm{functional determinant}\index{functional determinant}"):\n\t\n\twith:\n\t\n\twhich is the "\NewTerm{Jacobian matrix}\index{Jacobian matrix}" (its determinant is simply named the "\NewTerm{Jacobian}\index{Jacobian}", for short) of the coordinate transformation $\mathbb{R}^2 \rightarrow \mathbb{R}^2$. By applying exactly the same reasoning in $\mathbb{R}^3$, the Jacobian is written (changing some notations, because otherwise it becomes unreadable):\n\t\n\tIn short, what exactly is it useful for? Well, let us come back to our relation:\n\t\n\twhich is after all only an approximation, because in the calculation of the area $\Delta s$ we neglected the infinitely small differentials of higher order. However, the smaller the dimensions of the elementary domains $\Delta s$ and $\Delta s'$, the closer we come to true equality. The equality finally takes place when we pass to the limit (yes, in math too we make approximations... eh!), the surfaces of the elementary domains tending to zero:\n\t\n\tWe now apply the equality obtained to the calculation of the double integral (we can of course do the same with the triple one). So we can finally write (this is the only way of putting it that makes sense):\n\t\n\tPassing to the limit, we obtain the strict equality:\n\t\n\tThis is the coordinate transformation relation in a double integral! It allows us to reduce the calculation of a double integral over the domain $D$ to one over the domain $D'$, which can simplify the problem.\n\tSimilarly, for a triple integral, we write:\n\t\n\t
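Before the worked Jacobians of the examples below, here is a one-line Maple check of this machinery (computing the area of a disk of radius $R$ in polar coordinates, where the extra factor $r$ in the integrand is precisely the Jacobian; the example is our own):\\\n\t\texttt{>int(int(r, r = 0..R), phi = 0..2*Pi);}\\\n\tMaple returns $\pi R^2$, as expected.\n\t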
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Examples:}\\\\\n\tLet us now determine the Jacobian for the most common coordinate systems (we refer the reader once again to the Vector Calculus section for more information about these systems):\\\n\t\n\tE1. Polar coordinates $x=r\cos(\phi),y=r\sin(\phi)$:\n\t\[J=\det\begin{pmatrix} \cos(\phi) & -r\sin(\phi)\\ \sin(\phi) & r\cos(\phi) \end{pmatrix}=r\cos^2(\phi)+r\sin^2(\phi)=r\]\n\tSince $r$ is always positive, we simply write:\n\t\[\mathrm{d}x\,\mathrm{d}y=r\,\mathrm{d}r\,\mathrm{d}\phi\]\n\t\end{tcolorbox}\n\t\pagebreak\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE2. In cylindrical coordinates $x=r\cos(\phi),y=r\sin(\phi),z=z$ (see section Linear Algebra for calculating the determinant):\n\t\[J=\det\begin{pmatrix} \cos(\phi) & -r\sin(\phi) & 0\\ \sin(\phi) & r\cos(\phi) & 0\\ 0 & 0 & 1 \end{pmatrix}=r\]\n\tSince $r$ is always positive, we simply write:\n\t\[\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=r\,\mathrm{d}r\,\mathrm{d}\phi\,\mathrm{d}z\]\n\t\\\n\t\n\tE3. In spherical coordinates $x=r\sin(\theta)\cos(\phi),y=r\sin(\theta)\sin(\phi),z=r\cos(\theta)$ (see section Linear Algebra for calculating the determinant):\n\t\[J=\det\begin{pmatrix} \sin(\theta)\cos(\phi) & r\cos(\theta)\cos(\phi) & -r\sin(\theta)\sin(\phi)\\ \sin(\theta)\sin(\phi) & r\cos(\theta)\sin(\phi) & r\sin(\theta)\cos(\phi)\\ \cos(\theta) & -r\sin(\theta) & 0 \end{pmatrix}=r^2\sin(\theta)\]\n\tSince $r$ (and $\sin(\theta)$ for $\theta\in[0,\pi]$) is always positive, we simply write:\n\t\[\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=r^2\sin(\theta)\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\phi\]\n\t\end{tcolorbox}\n\n\t\subsubsection{Integration by Parts}\n\tWhen we seek to carry out integrations, it is very common that we have to use a tool (or calculation method) named "\NewTerm{integration by parts}\index{integration by parts}". There are different degrees of use of this tool, and we'll start with the simplest one, which is the most used in all chapters and sections of this book.\n\t\n\tFirst we start from the derivative of the product of two functions, proved above:\n\t\n\t\n\tso we have:\n\t\n\tand we get:\n\t\n\tafter a final simplification we finally get the famous, very important equality:\n\t\[\int f(x)g'(x)\mathrm{d}x=f(x)g(x)-\int f'(x)g(x)\mathrm{d}x\]\n\tBut sometimes we will need the generalization of that relation. We can show that if $f$ and $g$ are two applications (functions) of class $\mathcal{C}^n$ ($n$ times differentiable) from $[a, b]$ into $\mathbb{C}$, then:\n\t\n\t\begin{dem}\n\tLet us proceed by induction on $n$ (beware, it is not necessarily easy to understand, as is often the case with proofs by induction!).\n\t\n\tKnowing the relation is true for $n = 1$, we assume it true for $n$ (as given in the above relation!) and we prove it for $n + 1$ (so we must rely on the previous relation, but with $n + 1$ instead of $n$):\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe trick (proposed by a reader) in this proof is to see that $-(-1)^n$ gives a minus sign when $n$ is even and a plus sign when $n$ is odd, and also that $+(-1)^{n+1}$ gives a minus sign when $n$ is even and a plus sign when $n$ is odd.\n\t\end{tcolorbox}\n\tFor $n = 1$ we fall back on the well-known equality, very often used in this book:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\n\t\pagebreak\n\t\subsubsection{Usual Primitives}\n\tIn math and physics there are many primitives, or functions defined by integrals, that we meet quite frequently (but not exclusively). Furthermore, all the primitives proved below will be used in the various chapters on Mechanics, Engineering, Atomistic, Social Mathematics, etc. of this book. So, as in any formula booklet, we propose the sixty or so best-known primitives, but with... 
the proofs!\n\t\n\tHowever, we will omit the primitives that can be immediately deduced from the usual derivatives we proved above. This means, for example, that we assume known two very important primitives (certainly the most used in all the pages of this book):\n\t\t\n\tOtherwise, here is the list of the most common primitives (the reader will in any case meet many others - developed in detail - while reading this book):\n\t\begin{enumerate}\n\t\t\item Primitive of $f(x)=\tan (x)$:\n\t\t\n\t\tBy definition we have:\n\t\t\n\t\tWe use the change of variable $u=\cos(x),\mathrm{d}u=-\sin(x)\mathrm{d}x$ and therefore:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\cot(x)$:\n\t\t\n\t\tBy definition we have:\n\t\t\n\t\tWe use the change of variable $u=\sin(x),\mathrm{d}u=\cos(x)\mathrm{d}x$ and therefore:\n\t\t \n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\arcsin(x)$:\n\t\t\n\t\tWe integrate by parts:\n\t\t\n\t\tIf we put $u=1-x^2$, giving us $\mathrm{d}u=-2x\mathrm{d}x$, we get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\arccos(x)$:\n\t\t\n\t\tWe integrate by parts again:\n\t\t\n\t\tIf we put $u=1-x^2$, giving us $\mathrm{d}u=-2x\mathrm{d}x$, we get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\arctan(x)$:\n\t\t\n\t\tWe integrate by parts again:\n\t\t\n\t\tIf we put $u=1+x^2$, giving us $\mathrm{d}u=2x\mathrm{d}x$, we get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\text{arccot}(x)$:\n\t\t\n\t\tOnce again... we integrate by parts:\n\t\t\n\t\tIf we put $u=1+x^2$, giving us $\mathrm{d}u=2x\mathrm{d}x$, we get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=xe^{ax}$ with $a\in \mathbb{R}\setminus\left\lbrace 0 \right\rbrace$:\n\t\t\n\t\tAn integration by parts gives:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tAnother very important primitive with the exponential in physics is the one we proved in our study of the Gauss-Laplace law (Normal law) in the section of Statistics (determination of the expected mean).\n\t\t\end{tcolorbox}\n\t\t\item Primitive of $f(x)=\ln(x)$:\n\t\t\n\t\tWe write:\n\t\t\n\t\tIntegrating by parts we find:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=x\ln(ax)$ with $a\in \mathbb{R}\setminus\left\lbrace 0 \right\rbrace$:\n\t\t\n\t\tAn integration by parts gives us:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=a^x$ for $a>0,a\neq 1$:\n\t\t\n\t\tTo begin, we write:\n\t\t\n\t\tTherefore we get:\n\t\t\n\t\tand:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\log_a(x)$:\n\t\t\n\t\tFor $a>0,a\neq 1$, knowing that (see the properties of logarithms in the section of Functional Analysis):\n\t\t\n\t\twe get, using the primitive of $\ln(x)$:\n\t\t\n\t\t\item Primitive of $f(x)=\tanh(x)$:\n\t\t\n\t\tWe have:\n\t\t\n\t\tWe use the change of variables $u=\cosh(x),\mathrm{d}u=\sinh(x)\mathrm{d}x$ and we get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\coth(x)$:\n\t\t\n\t\tWe know we have:\n\t\t\n\t\tWe use the change of variables $u=\sinh(x),\mathrm{d}u=\cosh(x)\mathrm{d}x$ and we get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\text{arcsinh}(x)$:\n\t\t\n\t\tWe integrate by parts:\n\t\t\n\t\tIf we put $u=1+x^2,\mathrm{d}u=2x\mathrm{d}x$ we get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\text{arccosh}(x)$:\n\t\t\n\t\tIf we integrate by parts as before:\n\t\t\n\t\tIf we put $u=x^2-1,\mathrm{d}u=2x\mathrm{d}x$ we 
get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\text{arctanh}(x)$:\n\t\t\n\t\tWe integrate by parts:\n\t\t\n\t\tIf we put $u=1-x^2,\mathrm{d}u=-2x\mathrm{d}x$ we get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\text{arccoth}(x)$:\n\t\t\n\t\tWe integrate by parts:\n\t\t\n\t\tIf we put $u=1-x^2,\mathrm{d}u=-2x\mathrm{d}x$ we get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\sin ^{n}(x)$ with $n \geq 2$:\n\t\t\n\t\tLet us put $I_n=\int \sin ^n(x)\mathrm{d}x$. An integration by parts gives:\n\t\t\n\t\tSubstituting $\cos ^2(x)$ by $1-\sin ^2(x)$ in the last primitive, we obtain:\n\t\t\n\t\tand therefore:\n\t\t\n\t\tAll recurrence relations of this form are named "\NewTerm{reduction formulas}\index{reduction formulas}".\n\t\t\n\t\t\item Primitive of $f(x)=\cos ^{n}(x)$ with $n \geq 2$:\n\t\t\n\t\tIn this case we have the recurrence formula:\n\t\t\n\t\twhich is proved exactly in the same way as the previous recurrence relation (the reader can request the details if required).\n\t\t\item Primitive of $f(x)=\tan ^{2}(x)$:\n\t\t\n\t\tKnowing that $\tan'(x)=1+\tan^2(x)$ we have:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\cot ^{2}(x)$:\n\t\t\n\t\tKnowing that $\cot'(x)=-1-\cot^2(x)$ we have:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\item Primitive of $f(x)=\sin ^{-2}(x)$:\n\t\t\n\t\tUsing remarkable trigonometric identities (\SeeChapter{see section Trigonometry}), we have:\n\t\t\n\t\tthanks to the primitive of $\cot^2(x)$. Therefore:\n\t\t\n\t\t\item Primitive of $f(x)=\cos ^{-2}(x)$:\n\t\tUsing once again remarkable trigonometric identities (\SeeChapter{see section Trigonometry}), we have:\n\t\t\n\t\tthanks to the primitive of $\tan^2(x)$. Therefore:\n\t\t\n\t\t\item Primitive of $f(x)=\sin ^{-1}(x)$:\n\t\t\n\t\tWe use the substitution $x=2\arctan(t),t=\tan(x/2)$. Knowing that (\SeeChapter{see section Trigonometry}):\n\t\t\n\t\twe then get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item Primitive of $f(x)=\cos ^{-1}(x)$:\n\t\t\n\t\tKnowing that $\cos(x)=\sin(x+\pi/2)$ (\SeeChapter{see section Trigonometry}), we have:\n\t\t\n\t\tWe do the change of variable $x+\pi/2=u, \mathrm{d}u=\mathrm{d}x$:\n\t\t\n\t\tthanks to the knowledge of the primitive of $\sin ^{-1}(x)$. Finally:\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{1+\cos(x)}$:\n\t\t\n\t\tWe do the substitution $x=2\arctan(t),t=\tan(x/2)$, knowing that (\SeeChapter{see section Trigonometry}):\n\t\t\n\t\twe get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item  Primitive of $f(x)=\dfrac{1}{1-\cos(x)}$:\n\t\t\n\t\tWe do again the substitution $x=2\arctan(t),t=\tan(x/2)$. 
Then we find:\n\t\t\n\t\tSo that finally:\n\t\t\n\t\t\item  Primitive of $f(x)=\dfrac{1}{1+\sin(x)}$:\n\t\t\n\t\tKnowing that:\n\t\t\n\t\twe can write:\n\t\t\n\t\tBy making the change of variables:\n\t\t\n\t\twe get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\item  Primitive of $f(x)=\dfrac{1}{1-\sin(x)}$:\n\t\t\n\t\tBy the same reasoning as above, using the cosine, we get:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\sinh^n(x)$ with $n \geq 2$:\n\t\t\n\t\tLet us put:\n\t\t\n\t\tAn integration by parts gives (we proved, during our study of the usual derivatives, that the primitive of the hyperbolic sine is the hyperbolic cosine):\n\t\t\n\t\tSubstituting $\cosh^2(x)$ by $1+\sinh^2(x)$ in the last integral, we obtain:\n\t\t\n\t\tand therefore:\n\t\t\n\t\tTherefore we easily obtain the special case:\n\t\t \n\t\t\item Primitive of $f(x)=\cosh^n(x)$ with $n \geq 2$:\n\t\t\n\t\tIn this case we also have the recurrence relation:\n\t\t\n\t\twhich is proved in the same way as above. So we also easily get the special case:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\tanh^2(x)$:\n\t\t\n\t\tKnowing that (proved during our study of the usual derivatives):\n\t\t\n\t\twe have:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\coth^2(x)$:\n\t\t\n\t\tKnowing that (proved during our study of the usual derivatives):\n\t\t\n\t\twe have:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{\sinh^2(x)}$:\n\t\t\n\t\tWe have, using the primitive of $f(x)=\coth^2(x)$:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{\cosh^2(x)}$:\n\t\t\n\t\tWe have, using the primitive of $f(x)=\tanh^2(x)$:\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{\sinh(x)}$:\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get, using the derivative of $\text{arctanh}(x)$:\n\t\t\n\t\t\n\t\tand finally:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{\cosh(x)}$:\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get, using the derivative of $\arctan(x)$:\n\t\t\n\t\tand finally:\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{1+\cosh(x)}$:\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get:\n\t\t\n\t\tFinally we get:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{1-\cosh(x)}$:\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{1+\sinh(x)}$:\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get:\n\t\t\n\t\tBut:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{1-\sinh(x)}$:\n\t\t\n\t\tWe still do the same substitution:\n\t\t\n\t\tWe get:\n\t\t\n\t\tBut:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tFinally:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=e^{ax}\sin(bx)$ with $a,b\in \mathbb{R},a^2+b^2\neq 0$:\n\t\t\n\t\tA first integration by parts gives:\n\t\t\n\t\tA second integration by parts gives:\n\t\t\n\t\tSo we have the equality:\n\t\t\n\t\tTherefore, rearranging the previous relation:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=e^{ax}\cos(bx)$ with $a,b\in \mathbb{R},a^2+b^2\neq 0$:\n\t\t\n\t\tA reasoning similar to the previous one shows that (we can detail on demand, as always!):\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=x\sin(ax)$ with $a \in \mathbb{R}^*$:\n\t\t\n\t\tAn integration by parts gives us:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=x\cos(ax)$ with $a \in \mathbb{R}^*$:\n\t\t\n\t\tAn integration by parts gives us:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{(x-a)(x-b)}$ with $a \neq 
b$:\n\t\t\n\t\tWe have the following relation (in integral calculus we name such a decomposition a "\NewTerm{partial fraction decomposition}\index{partial fraction decomposition}"):\n\t\t\n\t\tTherefore:\n\t\t\n\t\t\n\t\tFinally:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{a^2-x^2}$ with $a \neq 0$:\n\t\t\n\t\tWe have, using the previous result:\t\n\t\t\n\t\tTherefore:\n\t\t\n\n\t\t\item Primitive of $f(x)=\dfrac{1}{a^2+x^2}$ with $a \neq 0$:\n\t\t\n\t\tDoing a change of variable:\n\t\t\n\t\twe get, using the derivative of $\arctan (x)$:\n\t\t\n\t\t\n\t\t\item Given:\n\t\t\n\t\twith $n \in \mathbb{N}$. We get:\n\t\t\n\t\tBut this last primitive can be integrated by parts:\n\t\t\n\t\tTherefore:\n\t\t\n\t\twhich we find most frequently in the literature under the form:\n\t\t\n\t\tIn an identical way, we have, for the sign-changed case:\n\t\t\n\t\tthe following relation:\n\t\t\n\t\tYou can find an application of these two primitives in the Newtonian cosmological model of the Universe in the section of Astrophysics, and also in the section of General Relativity in the study of the Shapiro effect!\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{(1-x^2)^2}$:\n\t\t\n\t\tWe have, using the primitives of $\dfrac{1}{(1-x^2)^n}$ (proved before) and of $\dfrac{1}{1-x^2}$ (also proved above):\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{(1+x^2)^2}$:\n\t\t\n\t\tWe have, using the primitives of $\dfrac{1}{(1+x^2)^n}$ (proved before) and of $\dfrac{1}{1+x^2}$ (also proved above):\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\sqrt{x^2-a^2}$ with $a\in \mathbb{R}^*$ (case relative to the area under a hyperbola):\n\t\t\n\t\tWe can assume without loss of generality $a>0$. Note that the domain of definition of $f$ is $]-\infty,-a] \cup [a,+\infty[$.\n\t\t\n\t\tWe will now determine a primitive of $f$ only on the interval $[a,+\infty[$ (because that is what we will need in some sections of this book).\n\t\t\n\t\tLet us make the change of variable:\n\t\t\n\t\tSo with:\n\t\t\n\t\twhere we consider the function $\cosh: \mathbb{R}^+ \rightarrow [1,+\infty[$, whose reciprocal is the function $\text{arccosh}:[1,+\infty[ \rightarrow \mathbb{R}^+$ given by (\SeeChapter{see section Trigonometry}):\n\t\t\n\t\tWe then obtain, using the primitive of $\sinh^2(x)$:\n\t\t\n\t\tbut (\SeeChapter{see section Trigonometry}), as:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tand using another identity proved in the section Trigonometry:\n\t\t\n\t\twe therefore have:\n\t\t\n\t\tAs primitives are defined up to a constant, we can write:\n\t\t\n\t\tfor $x\geq a$. $F$ is then a primitive of $\sqrt{x^2-a^2}$ on the interval $[a,+\infty[$.\n\t\t\n\t\t\item Primitive of $f(x)=\sqrt{a^2-x^2}$ with $a\in \mathbb{R}^*$:\n\t\t\n\t\tWe can assume without loss of generality $a>0$. Note that the domain of definition of $f$ is $[-a, a]$.\n\t\t\n\t\tWe make the substitution:\n\t\t\n\t\twe get:\n\t\t\n\t\twhere we used the primitive of $\cos^n (x)$ with $n=2$ proved above. 
Now we have:\n\t\t\n\t\tThen:\n\t\t\n\t\tand:\n\t\t\n\n\t\t\item Primitive of $f(x)=\sqrt{x^2+a^2}$ with $a\in \mathbb{R}^*$:\n\t\t\n\t\tWe can assume without loss of generality $a>0$.\n\t\t\n\t\tLet us make the change of variable:\n\t\t\n\t\twith:\n\t\t\n\t\tWe get:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tBut, as we saw in the section Trigonometry:\n\t\t\n\t\tand:\n\t\t\n\t\tFinally we have:\n\t\t\n\t\twhere $\ln(a)$ has been omitted, because primitives are defined up to a constant.\n\t\t\n\t\t\item Primitive of $f(x)=\left(\sqrt{a^2-x^2}\right)^{-1}$ with $a\in \mathbb{R}^*$:\n\t\t\n\t\tWe can assume without loss of generality $a>0$.\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get:\n\t\t\n\t\t\n\t\t\item Primitive of $f(x)=\dfrac{1}{\sqrt{a^2+x^2}}$ with $a\in \mathbb{R}^*$:\n\t\t\n\t\tWe can assume without loss of generality $a>0$.\n\t\t\n\t\tWe do the substitution:\n\t\t\n\t\tWe get, in the same manner as for the previous usual integrals:\n\t\t\n\t\tand, knowing that (\SeeChapter{see section Trigonometry}):\n\t\t\n\t\twe then finally get the following important primitive:\n\t\t\n\t\tProceeding in the same way, but using the hyperbolic cosine instead of the hyperbolic sine, we obviously get (we can detail on demand, as always):\n\t\t\n\t\tWe will reuse these last two relations in important practical cases in the sections of Analytical Mechanics, Civil Engineering (where the constant $a$ is equal to $1$, so $\ln(a)$ will be equal to $0$) and General Relativity (where $a$ will be nonzero, and therefore it will not be possible to omit the constant $\ln(a)$).\n\t\t\n\t\t\item Let us consider an integral of the following form, which we can use to improve the Stirling formula (\SeeChapter{see section Theoretical Computing}) and which we also absolutely need for the study of the circular Fresnel aperture diffraction (\SeeChapter{see section Wave Optics}):\n\t\t\n\t\twhere\n\t\t\begin{itemize}\n\t\t\t\item $\lambda$ is large;\n\t\t\t\item $g(y)$ is a smooth function which has a local minimum at $y^*$ in the interior of the interval $[a, b]$;\n\t\t\t\item $h(y)$ is smooth.\n\t\t\end{itemize}\n\t\tThe integral can be the moment generating function of the distribution of $g(Y)$ when $Y$ has density $h$ (\SeeChapter{see section Statistics}), it could be a posterior expectation of $h(Y)$, or just a "simple" integral.\n\t\t\n\t\tWhen $\lambda$ is large, the contribution to this integral comes, by construction, essentially entirely from a neighborhood of $y^*$.\n\t\t\n\t\tWe formalize this by a Taylor expansion of the function $g$ around $y^*$:\n\t\t\n\t\tSince $y^*$ is a local minimum, we have:\n\t\t\n\t\t and therefore:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tThe reason the bounds above have changed from $[a,b]$ to $]-\infty,+\infty[$ is that we assume that the region of interest is around $y^*$, and that $\lambda$ is so large that, even a small distance away from $y^*$, the integrand can be considered negligible!\n\t\t\n\t\tIf we approximate $h(y)$ linearly around $y^*$, that is to say:\n\t\t\n\t\tsuch that:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tThe reader must not forget that here $\lambda g''(y^*)$ is a constant, and that if we put:\n\t\t\n\t\tthen for the first integral above we see that we fall back on the integral of something very similar to the Gauss distribution (with mean $y^*$ and variance $1/(\lambda g''(y^*))$), and then it comes immediately (\SeeChapter{see section Statistics}):\n\t\t\n\t\tFor the second integral:\n\t\t\n\t\tthe change of variable $y-y^*=x$ 
gives us:\n\t\t\n\t\tThe primitive is of the form (\SeeChapter{see section Integral and Differential Calculus}):\n\t\t\n\t\tTherefore, by symmetry, the second integral is zero! We finally have:\n\t\t\n\t\tThis calculation is named "\NewTerm{Laplace's Method of Integration}\index{Laplace's Method of Integration}" or simply "\NewTerm{Laplace Integration}".\n\t\end{enumerate}\n\t
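Before moving on, here is a quick Maple spot-check of three of the primitives listed above (Maple omits the integration constant):\\\n\t\texttt{>int(tan(x), x);\\\n\t>int(ln(x), x);\\\n\t>int(arctan(x), x);}\\\n\tMaple returns $-\ln(\cos(x))$, $x\ln(x)-x$ and $x\arctan(x)-\ln(1+x^2)/2$ respectively, in agreement with the corresponding proofs above.\n\t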
\subsubsection{Integral representation of the first kind Bessel function}\n\tA particularly useful and powerful way of treating Bessel functions employs their integral representation, as we will see in the section of Wave Optics.\n\t\n\tRemember that in the section of Sequences and Series we proved that the generating function of the Bessel functions is:\n\t\n\tThat is:\n\t\n\tNow remember that (\SeeChapter{see section Trigonometry}):\n\t\n\tSo if we return to the generating function and substitute $t=e^{\mathrm{i}\theta}$, we get:\n\t\n\tin which, to condense the result, we first used the property proved during our study of the generating function of the Bessel functions:\n\t\n\tand also:\n\t\n\tand so on...\n\tNow remember that:\n\t\n\tTherefore, identifying real and imaginary parts, we get:\n\t\n\tRemember also that we proved, during our study of Fourier series in the section of Sequences and Series, that:\n\t\n\t\begin{center}\n\t\begin{tabular}{ccc}\n\t$\text{with }n,k\in \mathbb{N}\text{ and }n\ne k$\n\t&$\qquad$&\n\t$\text{with }n,k\in \mathbb{N}\text{ and }n = k$\n\t\end{tabular}\n\t\end{center}\n\tThat is:\n\t\n\twhere $\delta_{nm}$ is the Kronecker symbol (\SeeChapter{see section Tensor Calculus}).\n\t\n\tNow let us write:\n\t\n\tLet us focus on the first integral:\n\t\n\tSo we see above that, whatever the value of $n$, except for $n=0$:\n\t\n\tTherefore there only remains, for $n>0$:\n\t\n\tand we see above that if $n>0$ is odd all the integrals vanish, but if $n>0$ is even, only the corresponding $J_n(x)$ remains!\n\t\n\tExactly the same analysis can be done for:\n\t\n\tWe have therefore, for each of the integrals above, and especially for each left term (recall that $n=0,2,4,\ldots$ is even and $n=1,3,5,\ldots$ is odd):\n\t\n\tIf these two equations are added together, we have, using trigonometric identities (\SeeChapter{see section Trigonometry}):\n\t\n\tfor $n=0,1,2,3,\ldots$\n\t\n\tIf we put $n=0$ in the above relation, we get:\n\t\n\tIf we plot $\cos(x\sin(\theta))$ we see that it repeats itself in all four quadrants (it's an even function):\\\\\n\t\texttt{>plot([cos(sin(theta)),cos(2*sin(theta)),cos(3*sin(theta)),cos(5*sin(theta))]\\\n\t,theta=-2*Pi..2*Pi);}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.55]{img/algebra/cos_sin_maple.jpg}\n\t\end{figure}\n\tSo we can write:\n\t\n\tThis is the real integral representation of the zero order Bessel function of the first kind.\n\t\n\tBut in many developments we don't use the above expression, as no phasor is visible. So the trick is to notice that $\sin(x\sin(\theta))$ reverses its sign in the third and fourth quadrants (it's an odd function):\\\\\n\t\t\texttt{>plot([sin(sin(theta)),sin(2*sin(theta)),sin(3*sin(theta)),sin(5*sin(theta))],\\\n\ttheta=-2*Pi..2*Pi);}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/sin_sin_maple.jpg}\n\t\end{figure}\n\tSo we have:\n\t\n\tAdding both relations, after multiplying the second by $\mathrm{i}$, we have:\n\t\n\tFinally we get the complex representation of the zero order Bessel function of the first kind:\n\t\n\t\n\tLet us do the change of variable $\theta=\varphi+\pi/2$; then:\n\t\n\tBut we have proved earlier that for any periodic function:\n\t\n\tTherefore:\n\t\n\tThis integral representation may be obtained in various ways, but this one seems to us the easiest. Many other integral representations exist.\n\n\t\subsubsection{Dirac Function}\n\tThe Dirac function, also named the "\NewTerm{Dirac peak}\index{Dirac peak}" or "\NewTerm{delta function}\index{delta function}", plays a very important practical role in electronics and computing, as well as in wave mechanics and quantum field theory (it allows us to discretize a continuum!) and in the field of civil engineering (see the section of the same name for some examples).\n\t\n\tBefore going further, we should notice that it is wrong to speak about a "function", because a function is a mapping from a source set (usually the set of real or complex numbers in one or more dimensions) to a target set (usually the set of real or complex numbers in one or more dimensions), while the domain of definition of the Dirac function is, strictly speaking, not a set of numbers but a set of functions!\n\t\n\tMore technically, the Dirac delta function, or $\delta$ function, is a generalized function, or distribution, on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and it physically represents the density of an idealized point mass or point charge. It was introduced by the theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a discrete domain and takes the values 0 and 1.\n\t\n\tAs always in this book, we will focus here only on the properties that we will need for the applied mathematics content of the other sections of the book.\n\t\n\tTo picture this function mentally in an easy way, first consider the function defined by:\n\t\n\tThe representation of $y=f(x)$ above is a rectangle of width $a$, of height $1/a$ and of unit surface. The Dirac function can be considered as the limit when $a\rightarrow 0$ of this $f(x)$. So we have:\t\t\n\t\n\tThat is to say:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.5]{img/algebra/common_dirac_peak_representation.jpg}\n\t\t\caption{Schematic representation of the Dirac delta function by a line surmounted by an arrow (source: Wikipedia)}\n\t\end{figure}\n\twith:\n\t\n\twhere $\varepsilon$ is a number greater than $0$ and as small as we want.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tAs the reader will probably have noticed when we introduced the initial function $f(x)$, the resulting Dirac delta function therefore has the dimension of the inverse of a length!\n\t\end{tcolorbox}\n\tFor a function $g(x)$ continuous at $x = 0$ we have:\n\t\n\tBy extension we have:\n\t\n\tand for a function $g(x)$ continuous at $x_0$:\n\t\n\tIt is then relatively easy to define the Dirac function in 3-dimensional space by:\n\t\n\tAs already mentioned, we will prove properties of the Dirac function only when we need them in other sections of this book.\n\t
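A minimal Maple sketch of this limiting picture (the rectangle function below and the test function $\cos$ are our own illustrative choices, and we assume Maple evaluates the piecewise integral symbolically):\\\n\t\texttt{>f := (x, a) -> piecewise(abs(x) < a/2, 1/a, 0):\\\n\t>int(f(x, a), x = -infinity..infinity) assuming a > 0;\\\n\t>Ia := int(f(x, a)*cos(x), x = -infinity..infinity) assuming a > 0:\\\n\t>limit(Ia, a = 0, right);}\\\n\tThe first integral returns $1$ (unit area whatever the width $a$), and the limit returns $1=\cos(0)$, which is exactly the sifting property stated above.\n\t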
\subsubsection{Gamma Euler Function}\n\tWe define the Euler Gamma function (Eulerian integral of the second kind) by the following integral:\n\t\n\twith $x$ belonging to the set of complex numbers whose real part is positive and non-zero (thus the positive real numbers are also included in the domain of definition)! Indeed, if we take complex numbers with a zero or negative real part, the integral diverges and is then undefined!\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\tWe have already met this integral and some of its properties (which will be proved here) in our study of the Beta, Gamma, Chi-square, Fisher and Student statistical distribution functions (\SeeChapter{see section Statistics}). We will also use this integral in maintenance (\SeeChapter{see section Management Techniques}), in string theory (\SeeChapter{see section String Theory}) and in other engineering fields (see the corresponding chapters), as well as in the section of Theoretical Computing for the canonical negative binomial generalized linear regression.\n\t\end{tcolorbox}\n\t\n\tHere is a graphical plot of the Euler Gamma function for $x$ running over an interval of real numbers (take care, in Maple 4.00b, to write GAMMA capitalized!!!):\n\t\n\t\texttt{>with(plots):\\}\n\t\texttt{>plot(GAMMA(x),x=-Pi..Pi,y=-5..5);}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/maple_gamma_euler_2d_plot.jpg}\n\t\t\caption{Plot of the Euler Gamma function in Maple 4.00b}\n\t\end{figure}\n\tand the same function again with Maple 4.00b, but now over the complex plane, with the modulus of the Euler Gamma function on the ordinate:\n\t\n\t\texttt{>with(plots):\\}\n\t\texttt{>plot3d(abs(GAMMA(x+y*I)),x=-Pi..Pi,y=-Pi..Pi,view=0..5, grid=[30,30],orientation=[-120,45],axes=frame,style=patchcontour);}\n\t\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/maple_gamma_euler_3d_plot.jpg}\n\t\t\caption{Plot of the Euler Gamma function in the complex plane with Maple 4.0}\n\t\end{figure}\n\tThis function is interesting if we require the variable $x$ to belong to the set of integers and write it as follows:\n\t\n\tLet us integrate the latter function by parts:\n\t\n\tSince the exponential function decreases much faster than $t^x$ grows, we have:\n\t\n\tIn the literature, we frequently find the following notations (they are confusing):\n\t\n\tWhich brings us to write the result in a more traditional form:\n\t\n\tFrom the relation $\Gamma_{0}(x)=x\Gamma_{0}(x-1)$, it comes by induction:\n\t\n\tBut:\n\t\n\tThat gives:\n\t\n\tTherefore:\n\t\n\tor, written in another way, for $x\in \mathbb{N}^*$:\n\t\n\tAnother interesting and useful result of the Euler Gamma function is obtained when we replace $t$ by $y^2$ and calculate the latter for $x=0.5$.\n\t\n\tFirst we have:\n\t\n\tand then:\n\t\n\tBut, as we proved in the section Statistics during our study of the distribution, this integral is equal to:\n\t\n\t
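Two quick checks of these results in Maple (remember to capitalize GAMMA):\\\n\t\texttt{>GAMMA(5);\\\n\t>GAMMA(1/2)*GAMMA(1/2);}\\\n\twhich return $24=4!$ and $\pi$, in agreement with $\Gamma(n)=(n-1)!$ for $n\in\mathbb{N}^*$ and with the value of $\Gamma(1/2)$ obtained just above.\n\t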
\pagebreak\n\t\paragraph{Euler-Mascheroni Constant}\mbox{}\\\\\n\tThis short text is just a curiosity regarding Euler's constant $e$ and almost every differential and integral calculus tool that we have seen until now. This is a very nice example (almost artistic) of what we can do with mathematics as soon as we have enough tools at our disposal.\n\t\n\tMoreover, this constant is useful in certain differential equations that we will see later.\n\t\n\tRemember that we saw in the section of Functional Analysis that the Euler constant $e$ is defined by the limit:\n\t\n\t\n\tIn a more general case, we can easily demonstrate in the same way that (you can ask us for the details if needed):\t\n\t\n\tThis obviously suggests:\n\t\n\tBy a change of variable $t=nu$ we write:\n\t\n\tAnd we use the definition of the Beta function:\n\t\n\tTherefore:\n\t\n\tTo transform this expression we can write:\n\t\n\tBut the quantity:\n\t\n\ttends, when $n$ tends to infinity, to the limit $\gamma\approx 0.5772$, named the "\NewTerm{Euler-Mascheroni constant}\index{Euler-Mascheroni constant}" or also the "\NewTerm{Euler Gamma constant}\index{Euler Gamma constant}".\n\t\n\tTherefore:\n\t\n\tWe divide each term of the product $(x+1)...(x+n)$ by the corresponding integer taken from $n!$, so we get:\n\t\n\t
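In Maple, the Euler-Mascheroni constant is the built-in symbol \texttt{gamma}; here is a quick numerical check of the limit defining it (the truncation at $10000$ terms is an arbitrary choice of ours):\\\n\t\texttt{>evalf(gamma);\\\n\t>evalf(add(1./k, k = 1..10000) - ln(10000.));}\\\n\twhich return $0.5772156649$ and approximately $0.5773$; the second value approaches the first as the number of terms grows.\n\t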
\pagebreak
\subsubsection{Curvilinear Integrals}
The line integrals (curvilinear integrals) are also very important in physics. The reader will thus see them again in the sections of Classical Mechanics, Electrodynamics and Magnetostatics to calculate the work of a force or the "flow field", in the section of Euclidean Geometry to calculate the center of gravity of curves (functions), in the section of Geometric Shapes to calculate the surface of some bodies of revolution, but also in Corpuscular Quantum Physics for the famous "path integral" (which is only the term used by physicists to say "line integral"), for the calculation of specific integrals using the residue theorem proved in the section of Complex Analysis, and for transformation states in the section of Thermodynamics. This is why there will be no examples of line integrals here: there are already so many applications in the other chapters of this book.

With the definition of these integrals, we can prove two very important results detailed in the section of Vector Calculus, namely Green's theorem and Stokes' theorem, as well as the residue theorem proved in the section of Complex Analysis and already mentioned in the preceding paragraph (it is important enough to mention it twice!).

More technically, a line integral is an integral where the function to be integrated is evaluated along a curve. The terms "\NewTerm{path integral}\index{path integral}", "\NewTerm{curve integral}\index{curve integral}", and "\NewTerm{curvilinear integral}\index{curvilinear integral}" are also used; "\NewTerm{contour integral}\index{contour integral}" as well, although that is typically reserved for line integrals in the complex plane.

\paragraph{Curvilinear Integral of a scalar field}\mbox{}\\\\
Consider a curve $C$ parametrized (\SeeChapter{see section Differential Geometry}) by a vector function $\vec{r}(t)$, with $t \in [a,b]$, of class $\mathcal{C}^1$ piecewise (this condition is necessary so that we can integrate along the curve without problems).

\textbf{Definitions (\#\mydef):}
 \begin{enumerate}
 	\item[D1.] The curve is said to be a "\NewTerm{closed curve}\index{closed curve}" if $\vec{r}(a)=\vec{r}(b)$
 	
 	\item[D2.] The curve is said to be a "\NewTerm{smooth curve}\index{smooth curve}" if $\forall t \in [a,b]\; \vec{r}\,'(t)\neq 0$
 \end{enumerate}
 Recall that a parametric curve can be written as follows (every vector function can be written in this form):
 
Consider a function or "\NewTerm{scalar field}\index{scalar field}" $f(x,y)$ defined in a neighborhood of $C$. We subdivide $[a,b]$ into $n$ subintervals of equal length $\Delta t$ as:

We choose on each subinterval a point $t_i^*\in [t_i,t_{i+1}]$. Given $\delta s_i$ the length of the arc of $C$ connecting the points $(x(t_i),y(t_i))$ and $(x(t_{i+1}),y(t_{i+1}))$, the integral of $f$ along $C$ is defined as the "\NewTerm{line integral}\index{line integral}" or "\NewTerm{path integral}\index{path integral}":

Which, as we know, can be written (see section of Differential Geometry, of Geometric Shapes or even of Analytical Mechanics):

and this can obviously be immediately extended to the case of 3 variables and more.

Or in vector form:

The line integral is linear, that is to say, if $C=C_1 \cup C_2$ and $C_1 \cap C_2$ is a point, then (without going into the strict definition of the union of two curves...):


\paragraph{Curvilinear Integral of a vector field}\mbox{}\\\\
Consider a vector field (e.g. a force field) as:

and an infinitesimal element of a curve (path), $\mathcal{C}^1$ piecewise, as:

The idea is then to consider that the dot product (projection of the vector field on the path element) represents the work along the differential element:

Therefore the work along the whole path will be given by (using the linearity of the curvilinear integral):

This can obviously be generalized to $n$ dimensions. Let us indicate that when the line integral (path integral) of a vector field is taken over a closed curve, we speak of the "\NewTerm{circulation of the vector field}\index{circulation of the vector field}".

As:

We can then write a fairly common notation:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
In physics, problems are often set in the plane and require the transition to polar coordinates, because many academic physics problems are centro-symmetric, which also facilitates the calculation of path integrals.
\end{tcolorbox}
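To make the notion of work along a path concrete, here is a minimal numerical sketch (Python, standard library only; the field $\vec{F}=(-y,x)$ and the unit circle are our own toy choices): it approximates $\oint \vec{F}\cdot\mathrm{d}\vec{r}$ by a Riemann sum and recovers the circulation $2\pi$.

\begin{verbatim}
import math

# Work of F(x, y) = (-y, x) along the unit circle
# r(t) = (cos t, sin t), t in [0, 2*pi], approximated by the
# Riemann sum of F(r(t)) . r'(t) dt (midpoint rule).
n = 100000
h = 2.0 * math.pi / n
work = 0.0
for i in range(n):
    t = (i + 0.5) * h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # components of r'(t)
    work += (-y * dx + x * dy) * h
print(work, 2.0 * math.pi)               # both about 6.2832
\end{verbatim}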
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us calculate the work of the force of gravity moving a mass $M$ from the point $M_1(a_1,b_1,c_1)$ to the point $M_2(a_2,b_2,c_2)$ along an arbitrary path $C$. The projections of the force of gravity on the coordinate axes are:

The work accomplished is then:

so we recover a well-known result of the section of Classical Mechanics.
\end{tcolorbox}
A line integral of a vector field $\vec{F}$ along a curve $C_1$ is independent of the path of integration if:

for any non-null curve $C_2$ having only the same points of departure and arrival. Furthermore, if the vector field satisfies (where $U$ in physics is typically a potential):

as (the reader will recognize an exact total differential form):

then the path integral on an arbitrary curve depends only on the difference of the values of the function $U$ at the two ends! This is the "\NewTerm{fundamental theorem for line integrals}\index{fundamental theorem for line integrals}" or "\NewTerm{gradient theorem for line integrals}\index{gradient theorem for line integrals}".
\begin{dem}
If the differential form of the vector field is an exact total differential, we have:

That is:

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
So the line integral of an exact total differential does not depend on the path of integration but only on the ends! We also conclude that if $\vec{F}$ is derived from a scalar potential and $A = B$, the line integral is zero.

In physics this result is interpreted by saying that the work provided by a force $\vec{F}$ derived from a scalar potential acting on an elementary particle in a finite displacement does not depend on the path followed.
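This path independence is easy to observe numerically; here is a minimal sketch (Python, standard library only; the potential $U(x,y)=xy$, hence $\vec{F}=(y,x)$, and the two paths are our own choices):

\begin{verbatim}
# Gradient field F = (y, x), deriving from the potential U(x, y) = x*y.
# We integrate F . dr along two different paths from (0,0) to (1,1);
# both results should equal U(1,1) - U(0,0) = 1.
def work(path, n=100000):
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        x0, y0 = path(t - h / 2)
        x1, y1 = path(t + h / 2)
        xm, ym = path(t)
        total += ym * (x1 - x0) + xm * (y1 - y0)  # F . dr
    return total

print(work(lambda t: (t, t)))       # straight segment: ~1.0
print(work(lambda t: (t, t ** 3)))  # bowed path:       ~1.0
\end{verbatim}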
\textbf{Definitions (\#\mydef):}
\begin{enumerate}
	\item[D1.] When the curve (path) $C$ is closed and the path integral has a result independent of the direction in which this path is traveled, we use the following notation (the letter below the symbol representing the path can of course vary...):
	
	If this closed integral is always zero, we say that the integrated vector field is a "\NewTerm{conservative vector field}\index{conservative vector field}" and "\NewTerm{derives from a scalar potential}" (and therefore satisfies the Schwarz theorem, so it can be written as an exact total differential), since this stems from the proof given just above.
	
	\item[D2.] When the value of the integral of a closed path depends on the orientation (clockwise not equal to counterclockwise), we use the following notation (the letter below the symbol representing the path can of course vary...):
	
	Thus, if the direction is direct (that is to say "counterclockwise" or "trigonometric"), as in the notation above, its sign will be positive; if on the contrary the direction is clockwise, its sign will be negative (see the proof in the section of Complex Analysis). Therefore we often speak respectively of the "negative direction" or the "positive direction".
	
	Thus, to summarize, a line integral (path integral) is fully defined by the expression under the integral symbol, the shape of the integration path and the direction of integration.
\end{enumerate}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The reader will find the proofs of some very important properties of curvilinear integrals in the section of Vector Calculus, such as the Green-Riemann theorem, or a particular application to the study of holomorphic functions in the section of Complex Analysis.
\end{tcolorbox}

\subsubsection{Integrals involving parametric equations}
Now that we have seen how to calculate the derivative of a plane curve, the next question is this: how do we find the area under a curve defined parametrically?

To derive an expression for the area under a parametric curve defined by the functions:

with $a\leq t\leq b$.

We assume that $x(t)$ is differentiable and start with an equal partition of the interval $a\leq t\leq b$. Suppose:

and consider the following figure:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/integral_parametric_curve.jpg}
\end{figure}
We use rectangles to approximate the area under the curve. The height of a typical rectangle in this parametrization is $y(x(\overline{t_i}))$ for some value $\overline{t_i}$ in the $i$-th subinterval, and the width can be calculated as $x(t_i)-x(t_{i-1})$. Thus the area of the $i$-th rectangle is given by:

Then a Riemann sum for the area is:

Multiplying and dividing each area by $t_i-t_{i-1}$ gives:

Taking the limit as $n$ approaches infinity gives:

And it is obvious that, applying Pythagoras' theorem to an infinitesimal length of the parametric curve, we have:

with $x=x(t)$, $y=y(t)$ and $t_1<t<t_2$. This gives the arc length\index{arc length} of the parametric curve between two points on the curve.

It also comes immediately:

The chain rule gives:

Therefore:

We will meet this relation again in the section of Analytical Mechanics.

In astronomy we often have to deal with closed curves and calculate the distance traveled along such a curve. But working in Cartesian coordinates is not always the best choice. This is why it is often better to change to polar coordinates.

The idea is to suppose that we are able to express our curve of interest in the following form:

where $\alpha\leq \theta \leq \beta$. In order to adapt the arc length relation to a polar curve, we use the relations:

and we replace the parameter $t$ by $\theta$. Then:

we replace $\mathrm{d}t$ by $\mathrm{d}\theta$, and the lower and upper limits of integration are $\alpha$ and $\beta$ respectively. Then the arc length formula becomes:

So finally in polar coordinates:
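As a numerical illustration of this polar arc length formula, here is a short Python sketch (standard library only; the cardioid $r(\theta)=1+\cos\theta$ is our own choice, and its exact perimeter is $8$):

\begin{verbatim}
import math

# Arc length of the cardioid r(theta) = 1 + cos(theta),
# 0 <= theta <= 2*pi, via the integral of
# sqrt(r^2 + (dr/dtheta)^2) dtheta; the exact value is 8.
n = 200000
h = 2.0 * math.pi / n
L = 0.0
for i in range(n):
    t = (i + 0.5) * h
    r = 1.0 + math.cos(t)
    dr = -math.sin(t)
    L += math.sqrt(r * r + dr * dr) * h
print(L)   # about 8.0
\end{verbatim}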
\subsubsection{Improper Integrals}
Improper integrals are definite integrals where one or both of the boundaries is at infinity, or where the integrand has a vertical asymptote in the interval of integration. As crazy as it may sound, we can actually calculate some improper integrals using some clever methods that involve limits.

By abuse of notation, improper integrals are often written symbolically just like standard definite integrals, perhaps with infinity among the limits of integration. When the definite integral exists (in the sense of either the Riemann integral or the more advanced Lebesgue integral), this ambiguity is resolved, as the proper and improper integral will coincide in value, and this is what will occur most often in all the applications of integrals in physics, chemistry and engineering throughout this book!

For the Riemann integral (or the Darboux integral, which is equivalent to it, as we saw earlier), improper integration is necessary both for unbounded intervals (since one cannot divide the interval into finitely many subintervals of finite length) and for unbounded functions with finite integral (since, supposing the function is unbounded above, the upper integral will be infinite, but the lower integral will be finite)!

An "\NewTerm{improper integral}\index{improper integral}" of a function $f(x) > 0$ is:

We say the improper integral converges if this limit exists, and diverges otherwise.

Geometrically, the improper integral then represents the total area under a curve stretching to infinity. If the integral $\int_a^\infty f(x)\mathrm{d}x$ converges, the total area under the curve is finite; otherwise it is infinite.

How can an area that extends to infinity be finite? Obviously the area between $a$ and $N$ (i.e. $\int_a^N f(x)\mathrm{d}x$) is finite. As $N$ goes to infinity, this quantity will either grow without bound or converge to some finite value.

The domains where improper integrals are most used, often without students or engineering practitioners even noticing, are respectively:
\begin{itemize}
	\item Statistics (see corresponding section), when we normalize a density function or check the condition of convergence to a cumulative probability of $1$ (most of the time in statistics one or both boundaries are equal to infinity)

	\item Wave Quantum Physics (see corresponding section), where we sometimes deal with particles propagating freely from infinity to infinity (this also happens sometimes in General Relativity)

	\item Differential equations, especially when we solve them using the Fourier transform or the Laplace transform (we have many examples using these transforms across the book) for practical applications in physics and high-level financial engineering

	\item In astrophysics or electrostatics, when dealing with any point-like potential field source of the type $1/r^2$ for which we want to calculate the work necessary to bring an object from infinity to that source (a calculation of the type $\int_{+\infty}^{r} f(r)\mathrm{d}r$)
\end{itemize}
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. We want to compute the very important integral (to introduce Laplace Transforms):

with $a\in\mathbb{R}$.\\

Following the definition above we need to first compute a definite integral and then take a limit. So from the definition:

We first compute the definite integral. We start with the case $a=0$:

therefore for $a=0$ the improper integral $I$ does not exist. When $a\neq 0$ we have:

In the case $a<0$, that is $a=-|a|$, we have that:

therefore for $a<0$ the improper integral $I$ does not exist. In the case $a>0$ we have:
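A quick numerical check of this last case (a Python sketch, under the assumption that E1 is the integral $\int_0^{+\infty}e^{-at}\mathrm{d}t$, which matches the case analysis above; for $a=2$ the limit should be $1/a=0.5$):\\

\texttt{import math}\\
\texttt{a, N, n = 2.0, 40.0, 400000}\\
\texttt{h = N / n  \# step of the midpoint Riemann sum on [0, N]}\\
\texttt{approx = sum(math.exp(-a * (i + 0.5) * h) for i in range(n)) * h}\\
\texttt{print(approx, 1.0 / a)  \# both close to 0.5}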
E2. We want to compute:

The integrand is not continuous at $x=0$ and so we'll need to split the integral up at that point:


\end{tcolorbox}

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
Now we need to look at each of these integrals and see if they are convergent.

At this point we're done. One of the integrals is divergent, which means that the integral we were asked to look at is divergent. We do not even need to bother with the second integral.
\end{tcolorbox}
On a side note, notice that the area under the curve on this infinite interval was not infinite, as the reader may have suspected it to be (perhaps). In fact, it was a surprisingly small number. Of course this will not always be the case, but it is important enough to point out that not all areas on an infinite interval will yield infinite areas.

Let us now get some definitions out of the way. We will call these integrals convergent if the associated limit exists and is a finite number (i.e. it is not plus or minus infinity) and divergent if the associated limit either does not exist or is (plus or minus) infinity.

\pagebreak
\subsection{Differential Equations}
\textbf{Definition (\#\mydef):} In mathematics, a "\NewTerm{differential equation D.E.}\index{differential equation}" is a relation between one or more unknown functions and their derivatives up to order $n$. The "\NewTerm{order}\index{order of a differential equation}" of a differential equation corresponds to the highest order of differentiation to which one of the functions is subjected.

Compared to our goal of trying to see how mathematics describes the sensible reality, differential equations are a great success but are also the source of many troubles. First there are modeling difficulties (see for example the system of differential equations of General Relativity in the corresponding section of the book...), then resolution difficulties (there is no general method, even with numerical computer methods, as you can see in the corresponding section!), then proper mathematical difficulties (that is why some D.E. carry a million-dollar prize in case of resolution), and finally difficulties related to the fact that certain differential equations are unstable by nature and give chaotic solutions (see the sections Population Dynamics or Meteorology for flagrant simple examples!).

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Differential equations are used to construct mathematical models of physical or biological phenomena, such as the study of radioactivity, celestial mechanics, electronic circuits, population development or even financial stochastic processes. Therefore, differential equations represent a vast field of study, both in pure and Applied Mathematics.
\end{tcolorbox}
The most general differential equation of order $n$ can always be written as:

We consider in this book only the cases where $x$ and $y$ have their values in $\mathbb{R}$. A solution to such a D.E. on the interval $I \subset \mathbb{R}$ is a function $y \in \mathcal{C}^n (I,\mathbb{R})$ (a function $y:I \rightarrow \mathbb{R}$ which is $n$ times continuously differentiable) such that for any $x \in I$, we have:

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
\textbf{R1.} For reasons that will be developed later, we also say "integrate a D.E." instead of saying "finding a solution to the D.E.".
The first expression is found particularly in the American literature.\\

\textbf{R2.} Since this whole book is full of examples of differential equations with initial conditions (we then speak of a "\NewTerm{Cauchy problem}\index{Cauchy problem}") and of methods of resolution, in the sections of Classical Mechanics, Atomic Physics, Cosmology, Econometrics, Sequences and Series, Industrial Engineering, Statistics, etc., we will not give any application examples here and will focus only on the minimal useful theoretical aspects.
\end{tcolorbox}

\pagebreak
\subsubsection{First order Differential Equations}
A differential equation of the first order is therefore a D.E. which involves only the first derivative $y'$.

\textbf{Definition (\#\mydef):} A first order differential equation is named a "D.E. of order 1 with separated variables" if it can be written as:

Such a differential equation can easily be integrated. Indeed, we write:

Then symbolically:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We explicitly write here the arbitrary integration constant $c^{te} \in \mathbb{R}$ (which is normally implicitly present in the indefinite integrals) so as not to forget it!
\end{tcolorbox}
So the purpose is first to find the primitives $F$ and $G$ of $f$ and $g$, and then to express the solution in terms of $x$:

The integration constant is fixed by requiring that for a given $x=x_0$ we get a particular value $y(x)=y(x_0)=y_0$. We then speak of an "\NewTerm{initial value problem}\index{initial value problem}".

\subsubsection{Linear Differential Equations}
\textbf{Definition (\#\mydef):} A differential equation of order $n$ is named a "\NewTerm{linear differential equation L.D.E.}\index{linear differential equation}" if and only if it is of the form:

with:

Let us now see a property that may seem insignificant at first glance but which will become very important later!

We will now prove that $L$ is a linear application:

and for all $\lambda \in \mathbb{R}$:

We then say that the linear D.E. represents a linear model: the multiples of a solution (or any linear combination of solutions) are also solutions. Thus, in physics, for a linear system, an amplification of the cause implies an amplification of the effect (systems are often linear in high-school problems, but in reality they are rather the exception!).

For example, the ordinary differential equation of order $2$ of the simple pendulum proved in the section of Classical Mechanics is not linear, because it contains a sine term that is not separable.

\textbf{Definition (\#\mydef):} The linear differential equation (which is the most common in physics):

is named "\NewTerm{homogeneous equation H.E.}\index{homogeneous equation}" or "\NewTerm{equation without second member E.W.S.M.}\index{equation without second member}" (and sometimes "\NewTerm{complementary equation}\index{complementary equation}") associated to:
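To make this homogeneous/particular structure concrete before the theorem below, here is a minimal sketch with the sympy library (assumed available; the toy equation $y'+y=x$ is our own choice):

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A linear D.E. with second member: y' + y = x.
print(sp.dsolve(sp.Eq(y(x).diff(x) + y(x), x), y(x)))
# -> Eq(y(x), C1*exp(-x) + x - 1)
# Structure y_h + y_p: C1*exp(-x) solves the homogeneous equation
# y' + y = 0, while x - 1 is one particular solution.
\end{verbatim}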
\begin{theorem}
We will now prove an important property of the H.E.: the set $\left\lbrace S_0 \right\rbrace$ of solutions of the H.E. is the kernel of the linear application $L$ (which means, as a refresher: $L(S_0)=0$) and the set $\left\lbrace S \right\rbrace$ of solutions to $L(y)=f(x)$ is given by:

that is to say that the solutions of the form:

where $y_p$ is a "particular/specific solution" to $L(y)=f(x)$ and $y_h$ the "\NewTerm{homogeneous solution}\index{homogeneous solution}", give all the D.E. solutions.
\end{theorem}
\begin{dem}
The first statement will be assumed obvious.

As regards the second part, any function of the form $y_p+y_h$ is a solution of $L(y)=f(x)$.

Indeed, it is immediate and follows from the definition of the kernel concept (\SeeChapter{see section Set Theory}):

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
What is also important to understand about linear D.E. with second member is that if we find solutions to $L(y)$ with one given second member and solutions to the same D.E. with a different second member, then the sum of all these solutions will be a solution of the D.E. with the sum of the second members!!!

\subsubsection{Resolution Methods of Differential Equations}
There are many ways to solve, exactly or approximately, linear or non-linear differential equations. Let us give the list of the few methods that we will analyze by example further below (methods which already appear many, many times in the chapters of Mechanics, Cosmology, Social Sciences and Quantum Physics):
\begin{itemize}
	\item The "\NewTerm{method of characteristic polynomial of D.E.}\index{Differential equations!method of characteristic polynomial}" (see below), used a bit in every section of the chapters of Mechanics/Quantum Physics/Cosmology/Chemistry and Social Sciences of this book.
	
	\item The "\NewTerm{method of integrating factor}\index{Differential equations!method of integrating factor}" (see also below), given for general knowledge but not used to this date for practical cases in this book.
	
	\item The "\NewTerm{method of variation of the constant}\index{Differential equations!method of variation of the constant}" (see below), used to this date only in the section of Industrial Engineering of this book.
	
	\item The "\NewTerm{method of perturbations of D.E.}\index{Differential equations!method of perturbations}" (see below), useful for wave quantum physics and quantum field theory.
\end{itemize}

Note also other widely used methods (classical high-school techniques) that are treated case by case in the individual sections of this book, because the solving approaches are too numerous and specific to each problem:

\begin{itemize}
	\item The "\NewTerm{separation of variables method of D.E.}\index{Differential equations!separation of variables}" (the heat equation in the section of Thermodynamics, the wave equation in the section of Marine \& Weather Engineering, the Schrödinger evolution equation in the section of Wave Quantum Physics, the vibration of a drum in the section of Wave Mechanics, etc.), of which we will see a very specific and simple case below, but for which it is best to refer to the sections mentioned for concrete examples.
	
	\item The "\NewTerm{matrix method for solving D.E.}\index{Differential equations!matrix method}" and "\NewTerm{trivial solution of D.E.}" (Lotka-Volterra model in the section of Populations Dynamics, electron or nuclear spin resonance in the section of Relativistic Quantum Physics, Lorenz model in the
section of Marine \& Weather Engineering, etc.).
	
	\item The "\NewTerm{spectral method}\index{Differential equations!spectral method}", using the spectral theorem proved in the section of Linear Algebra (see the section of Industrial Engineering, on the calculation of system reliability by Markov chains, for a concrete example).
	
	\item The "\NewTerm{method of the Fourier transform of the D.E.}\index{Differential equations!Fourier transform method}" or "\NewTerm{method of the Laplace transform of the D.E.}\index{Differential equations!Laplace transform method}" (heat equation in the section of Thermodynamics, resolution of the Black \& Scholes equation in the section of Economy, beam equation under point load in the section of Civil Engineering).
	
	\item "\NewTerm{Numerical methods for D.E.}\index{Differential equations!Numerical method}", to solve differential equations using a computer when the D.E. has no known analytic solution, or when it has one but we need a visual three-dimensional view of the solutions (heat equation in the section of Theoretical Computing).
	
	\item The "\NewTerm{Frobenius method}\index{Differential equations!Frobenius method}", named after Ferdinand Georg Frobenius, also known as "\NewTerm{power series solutions}\index{Differential equations!power series solutions}", which is a way to find an infinite series solution for a second-order ordinary differential equation of a special form. We will use this technique in the section of Sequences and Series for our study of the Bessel series, and also introduce Bessel series by solving, in the section of Mechanical Engineering, the problem of the self-buckling column with power series solutions.
\end{itemize}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The first differential equations were solved around the end of the 17th century and the beginning of the 18th century. By the middle of the 18th century, people realized that the first methods we listed above had reached a dead end. One reason was the lack of functions to write the solutions of differential equations. The elementary functions we use in calculus, such as polynomials, quotients of polynomials, trigonometric functions, exponentials, and logarithms, were simply not enough. People even started to think of differential equations as sources to find new functions. It was a matter of little time before mathematicians started to use power series expansions to find solutions of differential equations. Convergent power series define functions far more general than the elementary functions from calculus.
\end{tcolorbox}

\paragraph{Method of characteristic polynomial}\mbox{}\\\\
Solving simple differential equations (with constant coefficients and, most of the time, without second member...) uses a technique based on a characteristic polynomial of the differential equation, whose details we will see in the developments that follow on a few special cases very frequent in physics.

It is a relatively simple method to implement when we seek solutions to the homogeneous differential equation without second member (E.W.S.M.). In the contrary case, in the presence of a second member, we add the solutions of the homogeneous equation to the particular solutions.

\subparagraph{Resolution of the H.E. of the first order L.D.E. with constant coefficients}\mbox{}\\\\

Consider the following L.D.E.
with constant coefficients:

which is a simplified version of the following general L.D.E. with constant coefficients:

where:

We write its associated homogeneous equation (E.W.S.M.):

Which can be written:

Therefore:

Behind this homogeneous solution lie infinitely many solutions: to each value given to the constant $C$ corresponds a solution.

We still need to add to this homogeneous solution the particular solution $y_p$, and for that we have a collection of recipes, depending on the type of the function $f(x)$ in the second member of the differential equation. We will see each case in the various chapters of this book, as already mentioned.

\subparagraph{Resolution of the H.E. of the first order L.D.E. with non-constant coefficients}\mbox{}\\\\
The general solution of homogeneous linear differential equations (E.W.S.M.) of order $1$ with non-constant coefficients:

can always be reduced to the following form:

where:

Well, obviously there is the solution $y=0$... but let us try to do better. So we have:

It therefore comes:

where $G(x)$ is a primitive of $g(x)$. Since then:

It is also common to find these developments in another, slightly more explicit notation.

So we start again from the differential equation without second member with non-constant coefficients:

after rearrangement:

And then:

Therefore:

This result will be very useful to calculate the Fourier transform of a Gaussian function (\SeeChapter{see section Sequences And Series}), a transform which is essential to solve in a fairly general way the heat equation (\SeeChapter{see section Thermodynamics}), a resolution that will finally allow us to prove the Black \& Scholes equation (\SeeChapter{see section Economics}).
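A minimal sympy sketch of this first-order result (the equation $y'+2xy=0$ is our own example; with $g(x)=2x$ and primitive $G(x)=x^2$, the solution should be of the form $C\,e^{-x^2}$):

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# First-order homogeneous L.D.E. with a non-constant coefficient:
# y' + 2*x*y = 0, i.e. g(x) = 2*x with primitive G(x) = x**2.
print(sp.dsolve(sp.Eq(y(x).diff(x) + 2 * x * y(x), 0), y(x)))
# -> Eq(y(x), C1*exp(-x**2)), of the expected form C*exp(-G(x))
\end{verbatim}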
\subparagraph{Resolution of the H.E. of the second order L.D.E. with constant coefficients}\mbox{}\\\\
Consider the L.D.E. with constant coefficients:

which is a simplified version of the following general L.D.E. with constant coefficients:

where:

We write its associated homogeneous equation (E.W.S.M.):

wherein the function of the second member is zero. We can quite quickly consider a solution of the type (inspired by the form of the solutions of the first order differential equations):

where $\tau$ is a constant. This therefore gives us:

Which we can simplify to:

If our starting assumption is correct, we only have to solve in $K$ this "\NewTerm{characteristic equation (CHARE)}\index{characteristic equation}" or "\NewTerm{characteristic polynomial}\index{characteristic polynomial}" of the homogeneous equation to find the homogeneous solution:

whose solutions depend on the sign of the discriminant of the characteristic polynomial:

\begin{itemize}
	\item If the discriminant is strictly positive, that is to say $\Delta>0$:
	
	Then we know that the characteristic polynomial has two distinct roots, and we have:
	
	where $K_1\tau=c^{te}$ and $K_2\delta=c^{te'}$. We then say that the solution is "\NewTerm{delayed}\index{Differential equations!delayed solution}" or "\NewTerm{advanced}\index{Differential equations!advanced solution}" by the values of these constants. But the key is to note that if $y_h(x)$ is a solution, then $y_h(x\pm \Delta x)$ is still a solution!
	
	We then speak of the "\NewTerm{general solution of the homogeneous equation}\index{general solution of the homogeneous equation}". Behind this result lies an infinity of solutions: to each value given to the constants $A, B$ corresponds a solution.
	
	Physicists also sometimes write this in a particular form by putting:
	
	with then:
	
	And using the hyperbolic trigonometric functions (\SeeChapter{see section Trigonometry}):
	
	whence finally the possibility to write the homogeneous solution in the form (when we omit the advance or delay, $\delta=\tau=0$):
	
	In addition, let us show that the solutions of the E.W.S.M. form a vector space of dimension $2$ (corresponding to the order of our differential equation)!
	
	Indeed:
	\begin{itemize}
		\item The zero function $y=0$ is a solution of the E.W.S.M. (this is unnecessary to prove because it is obvious...!)
		
		\item The sum or difference of solutions remains a solution (we have already proved this before)
		
		\item The elements of the basis of a vector space (the solutions of the E.W.S.M.) are linearly independent (that is an interesting property that we will need later!)
	\end{itemize}
	Let us put:
	
	Then:
	
	These relations, injected into the E.W.S.M. in generalized form:
	
	then give:
	
	We do indeed have a vector space structure.
	
	Let us recall that, conversely, two functions are linearly dependent if:
	
	\item If the discriminant is zero, that is to say $\Delta=0$:
	
	The characteristic equation has a real double root $K$.
	
	Going a little fast, we would then say:
	
	and that it is over... but that would be forgetting that the vector basis must be formed of two independent solutions!
	
	So the second option is probably... of the form:
	
	Then:
	
	If we inject it into the E.W.S.M. in generalized form:
	
	Then:
	
	That is to say, in our case:
	
	But both real values of $K$ are precisely solutions of:
	
	The previous relation then reduces to:
	
	and as we are in the case of study where the discriminant is zero, we have:
	
	Therefore, finally, the relation reduces to:
	
	We deduce from it that:
	
	Therefore finally:
	
	Which gives for the general solution of the E.W.S.M.:
	
	
	\item If the discriminant is strictly negative, that is to say $\Delta<0$:
	
	The characteristic equation has two complex conjugate roots (\SeeChapter{see section Algebra}):
	
	Therefore:
	
	But if we look instead for real solutions, we can always choose $A$ and $B$ such that:
	
	And if we set the delay and advance respectively to zero ($\delta=\tau=0$), we find the relation available in most books without proof:
	
	where $A'$ and $B'$ are any two real constants. There is another important form of this last relation (often used in electronics, for example).
Indeed, for any $A'$ and $B' \in \mathbb{R}$, it is possible to find $C'$ and $\phi$, also in $\mathbb{R}$, such that the following equality holds:
	
	We put:
	
	Then:
	
	It is then possible to find $\phi$ such that:
	
	Therefore our initial expression (proposition) can be written as:
	
	Finally:
	
\end{itemize}
So we can make the following summary:


\paragraph{Integrating Factor Method (Euler's Method)}\mbox{}\\\\
The technique of the integrating factor is useful when it comes to solving differential equations of the form:

To this day we have no practical application of this technique in the other chapters of this book. You must therefore see this as a presentation for general culture.

The basic idea is to find a function $M(x)$, named the "\NewTerm{integration factor}\index{integration factor}", by which our differential equation can be multiplied to bring the left-hand side of the equality to a simple derivative. For example, for a linear differential equation like the one above, we often choose the following integration factor (but this is by far not the only possibility, and this choice does not solve everything!):

Therefore we have:

or, by distributing:
	
Which can therefore be seen as:

or even better (and therein lies the whole trick)...:

We can then take the primitive with respect to $x$:

and trivially (!) the primitive on the left-hand side is immediate:

Which is sometimes written as:

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Consider the following differential equation:

That we will rewrite as:

We then see that (assuming $x$ is positive):

Then we have:

Chance sometimes making things work out well (the example is purposely very simple), this equality simplifies from:

into:

Which may be condensed into:

By integrating:

It then comes immediately:

Finally:

\end{tcolorbox}

\pagebreak
\paragraph{Method of separation of variables}\mbox{}\\\\
The method of separation of variables (also known as the "\NewTerm{Fourier method}\index{Fourier method}") is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

In mathematics, a "\NewTerm{partial differential equation PDE}\index{partial differential equation}" is a differential equation that contains unknown multivariable functions and their partial derivatives (a special case are ordinary differential equations, which deal with functions of a single variable and their derivatives). PDEs are used to formulate problems involving functions of several variables, and are either solved by hand or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems.
PDEs find their generalization in stochastic partial differential equations.

A partial differential equation (PDE) for the function $U(x_{1},\cdots ,x_{n})$ is an equation of the form:

If $f$ is a linear function of $U$ and its derivatives, then the PDE is named a "\NewTerm{linear partial differential equation}\index{linear partial differential equation}". Common examples of linear PDEs include the heat equation, the wave equation, Laplace's equation, the Helmholtz equation, the Klein–Gordon equation, and Poisson's equation (see the chapters of Mechanics, Electrodynamics and Quantum Physics for the study of most of them!).

The method of separation of variables is a very common technique used in physics when we have second-order differential equations. Many useful and very detailed examples are already in the various chapters mentioned above. Here we will just present a special case, to do things properly but at the minimum subsistence level!

Consider the common case of a physical partial differential equation of the type:

The solution of this equation therefore requires finding a function $U$ which depends on $x$ and $y$ such that:

In physics, the idea is then to posit that we can always find a so-called "separable" solution of the form:

Thus, the differential equation can be written as:

Which can be rewritten as:

Or:

After rearrangement, it is usual in physics to write this last equality in the condensed form:

This equality can only take place if each term is a constant, since $X$ depends only on $x$ and $Y$ only on $y$. It then comes:

Each differential equation can then be solved independently of the other, and once the solutions are found we multiply them to determine the expression of $U$.
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Probably the most beautiful example is in the section of Quantum Chemistry.
\end{tcolorbox}

\paragraph{Method of variation of the constant}\mbox{}\\\\
The variation of constants is a general method to solve inhomogeneous linear ordinary differential equations.

For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations.

Variation of parameters also extends to linear partial differential equations, specifically to inhomogeneous problems for linear evolution equations like the heat equation, the wave equation, and the vibrating plate equation. In this setting, the method is more often known as "\NewTerm{Duhamel's principle}\index{Duhamel's principle}", named after Jean-Marie Duhamel, who first applied the method to solve the inhomogeneous heat equation.

The idea of the method of variation of the constant is as follows: if we have a particular solution involving constants, we know that, depending on the initial conditions, these constants are well determined. The idea is then to generalize by positing that these constants are functions. In some cases, obviously, the mathematical developments will show that the functions are necessarily constant.

The idea behind this method is to say that the solutions of the (linear) differential equation with second member will look like the solutions of the homogeneous equation.
As the term on the right will disrupt this solution, we vary only the constants (which will therefore no longer be constants), but we stay on the "basis" of homogeneous solutions to seek nearby solutions. Afterwards we check that this "physicist reasoning" yields all the solutions of the differential equation.

Before studying the general case, let us see a simple example by considering the following differential equation:

for which the particular solution of the homogeneous equation (E.W.S.M.) is (if you need the details, do not hesitate to ask!):

The method of variation of the constant then consists in putting:

and therefore:

But by the differential equation with the second member, we have:


and it follows that:

where we eliminated the integration constant because what we want is a particular solution! The particular general solution (pg) is then the sum of the particular solution and of the homogeneous one, the latter with the variation of the constant:

Thus, generalizing the previous example, suppose we have a differential equation of the form:

The general particular solution will be:

Then we have:

hence, injected into the original differential equation:

Therefore, after factoring similar terms:

So we have the above relation and the particular solution to the homogeneous differential equation (therefore without second member):

Therefore we find:

and it is then sufficient to integrate this equation to find $C_0(t)$. Then the particular general solution (pg) will be the sum of the particular homogeneous solution and of the one with the variation of the constant.

\pagebreak
\subsubsection{Classification of partial differential equations}
Before we begin, the reader may wonder what classifying differential equations can be used for; well, here are the two main arguments for the usefulness of a classification, in order of importance:
\begin{itemize}
	\item Many books have authors who systematically speak of a differential equation by categorizing it, so it is more pleasant to know what they are talking about.

	\item Some finite element modeling software (including MATLAB™) requires that the differential equation category be chosen before anything else can be done as a calculation.
\end{itemize}
So, this being said, formally we name a "\NewTerm{partial differential equation PDE}\index{partial differential equation}" of order less than or equal to $2$, in a domain $\Omega \subset \mathbb{R}^n$ and of unknown:

an equation of the following general type:

where $s(x)$ is often named a "\NewTerm{source term}\index{source term}", in analogy with the main physical situation concerned.

It is now important to generalize the latter equation in vector form.
Thus, we introduce the following notations:
\begin{itemize}
	\item $A=[a_{ij}]$, the $n\times n$ symmetric matrix of the coefficients in front of the terms of order $2$

	\item $B=(f_i(x))$, the vector of size $n$ of the coefficients in front of the terms of order $1$

	\item $[H\Phi(x)]$, the $n\times n$ symmetric Hessian matrix (\SeeChapter{see section Sequences and Series}) of $\Phi$:
	

	\item $\vec{\nabla}(\Phi(x))$, the gradient vector of size $n$ of $\Phi$:
	

	\item The notation (named the "\NewTerm{Frobenius (matrix) dot product}\index{Frobenius (matrix) dot product}"):
	
\end{itemize}
With this, the previous relation:

can be rewritten as:

Which is often (abusively) written in a very condensed way:

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Consider the following PDE:

First, it is easy to determine (because it is a definition) that:

It is also trivial that:

and that:

It is also trivial that:

Finally, the minor difficulties are to find:

\end{tcolorbox}
For a linear PDE of order $2$, the matrix $A$ is non-zero and symmetric. It is therefore diagonalizable with real eigenvalues (\SeeChapter{see section Linear Algebra}), and their study provides elements of classification of linear PDE systems of order $2$ under the denominations of "elliptic", "hyperbolic" or "parabolic" PDE (\SeeChapter{see section Analytical Geometry}).

It is often customary to say in physics that elliptic PDE characterize problems of equilibrium or stationarity, that hyperbolic PDE characterize problems of wave propagation, and finally that parabolic PDE characterize diffusion problems (see examples further below).

That latter terminology comes from the fact that, when the matrix $A$ is constant, the curves:

are respectively ellipsoids, hyperboloids and paraboloids (\SeeChapter{see section Analytical Geometry}).

Indeed, we proved in the section of Analytical Geometry that, according to the determinant of $A$, we had:
\begin{itemize}
	\item If $\det(A)=ac-b^2>0$, the curve $\Gamma$ is either empty, reduced to a point, or an ellipse.

	\item If $\det(A)=ac-b^2<0$, the curve $\Gamma$ is either the union of two intersecting lines, or a hyperbola.

	\item If $\det(A)=ac-b^2=0$, the curve $\Gamma$ is either empty, a line, two distinct parallel lines, or a parabola.
\end{itemize}
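Anticipating the explicit criterion spelled out just below, this discriminant test fits in a few lines of Python (a sketch; the function name and interface are our own, and we assume the cross term is written $2b\,U_{xy}$ so that the criterion is indeed $ac-b^2$):

\begin{verbatim}
def classify_pde(a, b, c):
    """Classify a*Uxx + 2*b*Uxy + c*Uyy + ... = s via det = a*c - b*b."""
    det = a * c - b * b
    if det > 0:
        return "elliptic"
    if det < 0:
        return "hyperbolic"
    return "parabolic"

print(classify_pde(1, 0, 1))    # Laplace equation: elliptic
print(classify_pde(1, 0, -1))   # wave equation: hyperbolic
print(classify_pde(1, 0, 0))    # heat equation (no Uyy term): parabolic
\end{verbatim}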
More explicitly, what we have seen above can be reformulated as follows. If we have the following PDE:

where $a$, $b$, $c$, $d$, $e$ and $f$ are real constants, it is said to be:
\begin{itemize}
	\item An "\NewTerm{elliptic partial differential equation}\index{elliptic partial differential equation}" if $ac-b^2>0$, that is to say, if the eigenvalues are all positive or all negative (\SeeChapter{see section Analytical Geometry})

	\item A "\NewTerm{hyperbolic partial differential equation}\index{hyperbolic partial differential equation}" if $ac-b^2<0$, that is to say, there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative (\SeeChapter{see section Analytical Geometry})

	\item A "\NewTerm{parabolic partial differential equation}\index{parabolic partial differential equation}" if $ac-b^2=0$, that is to say, if the eigenvalues are all positive or all negative, save one that is zero (\SeeChapter{see section Analytical Geometry})
\end{itemize}

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Laplace's equation:

is an elliptic PDE.\\

The wave equation:

is a hyperbolic PDE.\\

The heat equation:

is a parabolic PDE.
\end{tcolorbox}

\pagebreak
\subsection{Systems of Differential Equations}
Let us now study special developments that will also be useful for the study of quantum physics or for the resolution of particular systems of differential equations (see the corresponding sections in this book for the details on these examples), and especially one which is well known in chaos theory!

Let us first indicate to the reader, before going further, that the more complex inhomogeneous case (with second member) and with unknown coefficients is treated directly through an example in the section of Industrial Engineering, during the study of the reliability of a repairable system as a Markov chain, with a resolution using determinants and eigenvalues/eigenvectors.

To start this first approach, we will have to introduce the concept of the exponentiation of a matrix:

The set of $n \times n$ matrices with coefficients in $\mathbb{C}$, denoted $M_n(\mathbb{C})$, is a vector space for the addition of matrices and the multiplication by a scalar. We will as always denote by $\mathds{1}_n$ the identity matrix of dimension $n$.

We will admit that a sequence of matrices $A_n$ converges to a matrix $A$ if and only if the sequences of coefficients of the matrices $A_n$ converge towards the corresponding coefficients of $A$.

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
In $M_2(\mathbb{C})$ the sequence of matrices:

converges to:

when $n\rightarrow +\infty$.
If $x\in \mathbb{C}$, we saw in our study of complex numbers (\SeeChapter{see section on Numbers}) that the series:

converges and its limit is denoted by $e^x$.
In fact, there is no difficulty here in replacing $x$ by a matrix $A$, since we know (we proved it during our study of complex numbers) that any complex number can be written as follows (the field of complex numbers is isomorphic to the field of real square matrices of dimension $2$ having this form):

\end{tcolorbox}
\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
and that squaring a complex number is equivalent to squaring its matrix form:

Indeed:

\end{tcolorbox}
We then define the exponential of a matrix $A\in M_n(\mathbb{C})$ as the matrix limit of the sequence:

If the matrix $A$ is diagonal, its exponential is obviously easy to calculate. Indeed, if:

It follows:

However, it is clear that a non-diagonal matrix will be much more complicated to deal with! We will then use the diagonalization technique, or reduction of endomorphisms (\SeeChapter{see section Linear Algebra}).

So note that if $S\in M_n(\mathbb{C})$ is invertible and if $A\in M_n(\mathbb{C})$, then:

This follows from the fact that (think of the change of basis of a linear application, as studied in the section of Linear Algebra):

Therefore:

This development will enable us to reduce the computation of the exponential of a diagonalizable matrix to the search for its eigenvalues and its eigenvectors.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
Let us calculate $e^A$ where:

The eigenvalues of $A$ are $\lambda_1=-3,\lambda_2=7$, and the associated eigenvectors are:

Indeed:

By putting:

We get:

with:

Therefore:

\end{tcolorbox}
Now, let us recall that in the case of real numbers we know that if $x,y\in \mathbb{R}$ then:

In the case of matrices we can prove that if $A,B\in M_n(\mathbb{C})$ are two matrices that commute with one another, that is to say such that $AB=BA$, then:

The condition of commutativity comes from the fact that the addition in the exponential is itself commutative. The proof is therefore intuitive.

An important corollary of this proposition is that for any matrix $A\in M_n(\mathbb{C})$, $e^A$ is invertible. Indeed, the matrices $A$ and $-A$ commute and therefore:

We recall that a matrix $A$ with complex coefficients is unitary if:

The following theorem will serve us later:
\begin{theorem}
Let us prove that if $A$ is a Hermitian matrix (also named "self-adjoint") (\SeeChapter{see section Linear Algebra}), then for any $t\in\mathbb{R}$, $e^{\mathrm{i}tA}$ is unitary.
\end{theorem}
\begin{dem}

Therefore:

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
Remember that this condition for a self-adjoint matrix is linked to the definition of the unitary group of order $n$ (\SeeChapter{see section Set Algebra}).
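As a numerical cross-check of the diagonalization recipe above, here is a short sketch using numpy and scipy (both assumed available; the symmetric matrix below is our own choice, with eigenvalues $-3$ and $7$ like in the example):

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# A 2x2 symmetric matrix with eigenvalues 7 and -3 (our own choice).
A = np.array([[2.0, 5.0],
              [5.0, 2.0]])

E1 = expm(A)                      # direct computation of e^A

# e^A = S e^D S^(-1), with A = S D S^(-1) from the eigendecomposition
w, S = np.linalg.eig(A)
E2 = S @ np.diag(np.exp(w)) @ np.linalg.inv(S)

print(np.allclose(E1, E2))        # True
\end{verbatim}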
One of the first applications of the exponentiation of matrices is the resolution of ordinary differential equations. Indeed, from the linear differential equation below, using as initial condition $y(0)=y_0$ and where $A$ is a matrix:

the solution is given by (as seen previously):

We frequently find this kind of system of differential equations in biology (population dynamics), astrophysics (study of plasmas), fluid mechanics (chaos theory), classical mechanics (coupled systems), astronomy (coupled orbits), electrical engineering, etc.

\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Suppose we have the following homogeneous system of differential equations (without constant terms):

The associated matrix is then:

and its exponential (see the developments made above):

The general solution of the system is:

So we have:

By calculating the derivative of the previous relations and comparing to:

we easily determine the constants to get:

which finally gives us:

\end{tcolorbox}

\subsection{Regular Methods of Perturbations}
Very frequently in physics (high-level physics) or financial engineering, a mathematical problem cannot be solved exactly. Even if the solution is known, sometimes there is such a dependency on parameters that the solution is difficult to use as such.

Sometimes, however, it happens that an identified parameter of the differential equation, which by tradition we denote with the Greek letter $\varepsilon$, is such that the solution is available and reasonably simple for $\varepsilon=0$.

The problem then is to know how the solution is altered for a non-zero but still small $\varepsilon$. This study is the heart of "\NewTerm{perturbation theory}\index{perturbation theory}", which we will use, for example, in the section of General Relativity to calculate the precession of the perihelion of Mercury.

As perturbation theory in its general framework is too complex for the purpose of this book, we propose an approach by example, first with a simple algebraic equation and then with what interests us: a differential equation.

\subsubsection{Perturbation theory for algebraic equations}
Consider the following polynomial equation:

We know from our study in the section Functional Analysis that this polynomial equation has two roots, which are trivially:

For small $\varepsilon$, these roots can be approximated by the first terms of the Taylor series expansion (\SeeChapter{see section Sequences and Series}):

The question is whether we can get the two previous relations without a priori knowledge of the exact solution of the initial polynomial equation. The answer is obviously: YES, with the help of perturbation theory.
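Before detailing the steps, here is a quick numerical illustration (Python, standard library only; the model equation $x^2+\varepsilon x-1=0$ is our own choice, since the book's equation is displayed above): the exact root and its second-order perturbative approximation agree closely for small $\varepsilon$.

\begin{verbatim}
import math

# Model equation: x**2 + eps*x - 1 = 0 (our illustrative choice).
# Exact positive root: x = -eps/2 + sqrt(1 + eps**2/4).
# Perturbative expansion about eps = 0: x ~ 1 - eps/2 + eps**2/8.
eps = 0.1
exact = -eps / 2 + math.sqrt(1 + eps ** 2 / 4)
approx = 1 - eps / 2 + eps ** 2 / 8
print(exact, approx)   # very close for small eps
\end{verbatim}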
The technique is based on four steps:
\begin{enumerate}
	\item In the first step, we assume that the solution of the polynomial equation is an expression of Taylor-series type in $\varepsilon$. Then we have:
	
	where $X_1,X_2,X_3$ are obviously to be determined.
	\item In the second step, we inject this hypothetical solution into our polynomial equation:
	
	As:
	
	and:
	
	It finally follows that the polynomial equation can be written as:
	
	
	\item In the third step we successively set the terms equal to $0$, such that:
	
	\item In the fourth and last step, we successively solve the polynomial equations above to get:
	
	By injecting these results into the hypothetical solution:
	
	it is obvious to observe that we fall back on the exact solution:
	
\end{enumerate}

\pagebreak
\subsubsection{Perturbation theory of differential equations}
Perturbation theory is therefore also often used to solve numerous differential equations. This is the case, for example, in fluid mechanics, in General Relativity or in quantum physics.

Again, rather than doing a super abstract and general theory, we will see the concept with an example, as previously.

Consider the following ordinary differential equation with second member and constant coefficients:

Or, written in another way:

with the boundary conditions:

The exact resolution is relatively easy to obtain:

First we start with the homogeneous equation:

So it is a linear differential equation of order 2 with constant coefficients, an equation that is relatively easy to solve in the general case. Given the equation:

Assume that the function $y$ which satisfies this differential equation is of the form $y=e^{Kx}$, where $K$ may be a complex number. Then we have:

provided, of course, that $e^{Kx}\neq 0$. This last relation is the auxiliary quadratic equation of the differential equation (the characteristic polynomial, in other words). It has two solutions/roots (it is a simple resolution of a polynomial of the second degree) which we denote in the general case $K_1,K_2$. Which means that:

are satisfied for the two roots. If we take the sum, since both are equal to the same constant:

Thus, it is immediate that the general solution of the homogeneous equation is of the type:

where $A, B$ are obviously constants to be determined. Now we solve the characteristic polynomial:

It comes immediately that:
	
Therefore:

Now, a particular solution to:

is relatively trivially a solution of the type:

where $B$ is of course a constant to be determined, which is equal to (once injected into the differential equation):

Therefore:

Hence finally the general solution:

Then, with the initial conditions, which are, as a reminder:

it is very easy to find $A$:

We also have:

We are free to choose $c^{te}=0$, which gives us:

Then:

becomes:

Now that we have the general solution, if $\varepsilon$ is small we can take the expansion of order 4 of the exponential in Maclaurin series (\SeeChapter{see section Sequences and Series}), such that:

Injected into $y$ this gives (you will notice that we sometimes explicitly write...
Now that we have this development, what we want to show is that from a perturbative expansion we can find the same result in series, and this without any prior knowledge of the solution.

Again, the development is done in 4 steps:
\begin{enumerate}
\item In the first step, we assume that the solution of the differential equation is an expression of the type of a Taylor series in $\varepsilon$. Then we have:

where $y_0,y_1,y_2$ are obviously to be determined.

\item In the second step, we inject the hypothetical solution of our differential equation into itself with the initial conditions and we develop the whole:

then the initial conditions:

\item In the third step we successively equate the terms with $0$, such as:

\item In the fourth step we solve the differential equations listed above (if you do not see how we solve them, do not hesitate to contact us!):

By injecting these relations into the supposed solution developed in Taylor series and injected into the differential equation:

we fall back on:

\end{enumerate}

\begin{flushright}
\begin{tabular}{l c}
\circled{95} & \pbox{20cm}{\score{4}{5} \\ {\tiny 119 votes,  75.45\%}} 
\end{tabular} 
\end{flushright}

%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Sequences and Series}

\lettrine[lines=4]{\color{BrickRed}S}equences and series have a great importance in Applied Mathematics and that is why we devote a whole section to them. We will also see them often in various sections of the Mechanics chapters when we need to make some minor approximations (...) as well as in the sections of Economy and Quantitative Management Techniques. The reader should try not to confuse in what follows the concept of "sequences" with that of "series" which, while being similar in substance, are not always analyzed mathematically in the same way.

We wanted to study in this section simple things without going too far into the topological concepts of sequences and series. However, those interested in more rigorous definitions can read the sections Fractals (see chapter of Theoretical Computing) and Topology, where many concepts about series are defined (supremum, infimum, subsequence, Bolzano-Weierstrass' theorem, etc.).

\subsection{Sequences}

\textbf{Definition (\#\mydef):} A "\NewTerm{sequence}\index{sequence}" of a set is a family of elements indexed by the set of natural numbers (\SeeChapter{see section Numbers}) or by a part of it. In a vulgarized way, we say that a sequence is a list of objects put in order, each with an order number. We typically write a sequence as:

where indexing sometimes (by tradition...) starts without the 0.

For some sequences, we provide the first term $u_1$ (if indexing starts with 1 instead of 0), and a formula for any term $u_{n+1}$ from the previous term $u_n$ for any $n \geq 1$. We call such a formulation a "\NewTerm{recurring definition}\index{recurring definition}" and the sequence is defined "\NewTerm{recursively}\index{recursively defined sequence}" (and even if it is indexed from 0 instead of 1).

Before seeing some examples of families of sequences that will be used in the various sections of the book (Population Dynamics, Economy, Nuclear Physics, etc.),
let us see a small set of definitions, as is the tradition in mathematics...

\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] Numbers (as a sequence) are in "\NewTerm{arithmetic progression}\index{Sequence!arithmetic progression}" if the difference of two consecutive terms is equal to a constant $r$ named the "\NewTerm{reason}".

\item[D2.] Numbers (as a sequence) are in "\NewTerm{geometric progression}\index{Sequence!geometric progression}" if the ratio of two consecutive terms is equal to a constant $r$, also named the "\NewTerm{reason}".

\item[D3.] Numbers (as a sequence) are in "\NewTerm{harmonic progression}\index{Sequence!harmonic progression}" if the inverses of two consecutive terms are in arithmetic progression.
\end{enumerate}

Therefore, a number $b$ is respectively the arithmetic, geometric or harmonic mean of $a$ and $c$ if the numbers $a, b, c$ are respectively in an arithmetic, geometric or harmonic progression.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For the definitions of the averages listed above see the section Statistics.
\end{tcolorbox}

\textbf{Definitions (\#\mydef):}
\begin{enumerate}
\item[D1.] A "\NewTerm{majorated sequence}\index{majorated sequence}" (a sequence bounded above) is a sequence such that there is a real number $M$ such that:\\ $\forall n \in \mathbb{N}, \; u_n \leq M$

\item[D2.] A "\NewTerm{minorated sequence}\index{minorated sequence}" (a sequence bounded below) is a sequence such that there is a real number $m$ such that:\\ $\forall n \in \mathbb{N}, \; u_n \geq m$

\item[D3.] A "\NewTerm{bounded sequence}\index{bounded sequence}" is a sequence that is both majorated and minorated.

\item[D4.] A sequence $(u_n)$ is named an "\NewTerm{increasing sequence}\index{increasing sequence}" if $\forall n \in \mathbb{N}, \; u_{n+1}-u_n > 0$

\item[D5.] A sequence $(u_n)$ is named a "\NewTerm{decreasing sequence}\index{decreasing sequence}" if $\forall n \in \mathbb{N}, \; u_{n+1}-u_n < 0$
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If a sequence is increasing or decreasing, we sometimes just say it is a "\NewTerm{monotonous sequence}" (without specifying whether it is increasing or decreasing).
\end{tcolorbox}

\item[D6.] A sequence $(u_n)$ is named a "\NewTerm{constant sequence}\index{constant sequence}" if $\forall n \in \mathbb{N}, \; u_{n+1}=u_n$
\end{enumerate}

We will now see some practically important arithmetic and geometric sequences that will be used later in other sections of this book.

\subsubsection{Arithmetic Sequences}

\textbf{Definition (\#\mydef):} We say that numbers or "\NewTerm{terms}\index{term of a sequence}" are in an "\NewTerm{arithmetic sequence}" when the difference of their sequential values is equal to a constant $r$ named the "\NewTerm{reason}\index{reason of an arithmetic sequence}" of the sequence, so that:

where $r$ is the "reason" of the progression. We then obviously have, if the indexing starts from $0$:

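For the record, these two classical relations (elided above) presumably read, assuming indexing from $0$:
\begin{align*}
u_{n+1}=u_n+r, \qquad u_n=u_0+nr .
\end{align*}
For instance, with $u_0=4$ and $r=3$ the sequence reads $4, 7, 10, 13, \ldots$ and $u_{10}=4+10\cdot 3=34$.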
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. The sequence:

where $n$ is a constant and the reason is equal to $1$.\\\\
E2. The sequence:

is an arithmetic sequence of reason $x$.
\end{tcolorbox}
Thus, if we write $u_n$ for any term of the sequence $(u_n)$ of reason $r$, we have:

We have the following properties for this type of sequence:
\begin{enumerate}
\item[P1.] A term whose rank is the average of the ranks of two other terms is the arithmetic mean of these two terms.
\begin{dem}
Consider now $(u_n)$ an arithmetic sequence of reason $r$ given by the previous development:

and $a,b,k \in \mathbb{N}$ such that $a+b=2k$; then we have:

and so:

with $k=\dfrac{a+b}{2}$.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
\item[P2.] For three consecutive terms $u_n,u_{n+1},u_{n+2}$ in an arithmetic sequence of reason $r$, the second term is the arithmetic mean of the other two.
\begin{dem}
Let us write:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
\item[P3.] If $u_1,u_2,u_3,...,u_n,...$ is an arithmetic sequence of reason $r$, then the $n$-th partial sum $S_n$ (that is to say, the sum of the first $n$ terms to the power of $1$) is given by:

when indexing starts from $1$.
\begin{dem}
We can write the sequence:

Rearranging the second line, we get:

which can be simplified even more:

Considering that we will prove a little bit later that the simple following Gauss series:

is equal to:

we then have for:

the following relation:

We thus get:

We see with the latter relation that if $u_1=r=1$ we fall back on the simple Gauss series.

As:

when the indexing starts from 1, we thus get:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
We will see other types of summations a little bit further below during our study of series!
\end{enumerate}

\pagebreak
\subsubsection{Harmonic Sequences}

\textbf{Definition (\#\mydef):} We say that numbers $\dfrac{1}{a}, \dfrac{1}{b}, \dfrac{1}{c},...$ generate a "harmonic progression" when their inverses are in arithmetic progression (also with a "reason" $r$). We represent this progression by:

We then obviously have, if the indexing starts from 0:

Moreover, we assume, in what follows, that there is no zero denominator.

By splitting this type of sequence successively into groups containing $2^n$ terms, we observe that each term is bigger than the last one of its group. For example:

And we can see that the sum of the terms of each group is larger than 1/2.

We can also see that each term is the harmonic mean of the previous and the next one:

\begin{dem}

Thus:

So finally:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}

\subsubsection{Geometric Sequences}
\textbf{Definition (\#\mydef):} A "\NewTerm{geometric sequence}\index{geometric sequence}" is a sequence of numbers such that each of them is equal to the previous one multiplied by a constant number $q$ that we also name the "\NewTerm{reason}\index{reason of a geometric sequence}" of the sequence. We will denote it by:

Thus, if we denote by $u_n$ any term of the sequence $(u_n)$, we have (trivial):

Here are some properties of such a type of sequence (without proof for now... except if some readers ask for them, because most are really trivial):
\begin{enumerate}
\item[P1.]
(trivial) The quotient of two terms of the same sequence is a power of the reason $q$ whose exponent equals the difference in rank of the two chosen terms (a simple ratio of two same bases with different powers).
\item[P2.] (trivial) If we multiply or divide term by term two geometric sequences, we get a third geometric sequence whose reason equals the product (respectively the quotient) of the reasons of the two chosen sequences (a simple operation with the reasons of the two original sequences).
\item[P3.] In a geometric sequence, a term whose rank is the average of the ranks of two other terms is the geometric mean (see section Statistics) of these two terms (reread many times if needed...).
\end{enumerate}
Let us prove the property P3:
\begin{dem}
Given a geometric sequence with real positive reason $q$, we have for recall:

Let $a, b$ be the ranks of two terms of the geometric sequence; then we have:

and thus:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}

\begin{corollary}
We have as corollary that for three consecutive terms of ranks $n,n+1,n+2$ in a geometric progression, the second term is the geometric mean of the other two.
\end{corollary}
\begin{dem}
We have:

Thus:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
However, there are some special sequences with special properties that we find very frequently in mathematics or theoretical physics in this book. Without going into too much detail, here is a partial list with the important proofs that we will have to use later:

\subsubsection{Cauchy Sequence}

It is often interesting for the mathematician, as much as for the physicist, to know the properties of a sequence with a given type of progression. The most important property is the limit to which it tends.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The reader who is not comfortable with topology can skip the text that follows... and whoever wants to know more about Cauchy sequences may read the section Topology or also particularly the section on Fractals (\SeeChapter{chapter Theoretical Computing}).
\end{tcolorbox}

\textbf{Definition (\#\mydef):} Let $(X, d)$ be a metric space (\SeeChapter{see section Topology}); we say that the sequence:

converges to $x \in X$ if (by definition!):

In other words, the farther we go in the sequence, the closer the points are (in the sense of the metric $d$) to the limit $x$.

If we choose a particular metric (the Euclidean one for example) and a discrete sequence, the above definition will look like this:

where the convergence point is therefore $a$ and we have:

In the example of the figure below, where the sequence seems to converge to $1.13$, we observe that for a given non-zero positive $\varepsilon$ there is a particular $n$, which we denote $N$ ($n=17$ in the figure below), from which the sequence converges:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/analysis/cauchy_convergence.eps}
\caption{Illustration of the principle of convergence of a sequence}
\end{figure}

However, the above definition of convergence is problematic because the number $x$ must be known in advance. In most cases of interest, $x$ is unfortunately not known.
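To make the definition concrete (and to see why knowing the limit matters), here is a minimal Python sketch; the sequence $u_n=1.13+(-0.9)^n/n$ is a hypothetical stand-in for the one plotted in the figure, with the limit $x=1.13$ assumed known:

\texttt{\# Hypothetical sequence with assumed limit x = 1.13\\
x, eps = 1.13, 0.01\\
u = lambda n: x + (-0.9)**n / n\\
\# smallest N from which |u(n) - x| < eps for all larger (tested) n\\
N = min(m for m in range(1, 1000) if all(abs(u(n) - x) < eps for n in range(m, 1000)))\\
print(N)  \# gives 17 with these made-up numbers, in the spirit of the figure}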
To break this deadlock, Cauchy had the idea to propose the following definition:

\textbf{Definition (\#\mydef):} We say by definition that a sequence $(x_n)_{n \in \mathbb{N}}$ of elements of $X$ is a "\NewTerm{Cauchy sequence}\index{Cauchy sequence}" if:

The reader must notice that it is not sufficient for each term to become arbitrarily close to the preceding term. This is why requiring that $|a_{N+1} - a_{N}| < \varepsilon$ is not sufficient!

It is almost obvious then that any convergent sequence is a Cauchy sequence (well, there are some subtleties that we will not mention for now).

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
This criterion therefore facilitates some proofs because it helps to show the existence of a limit without involving its value, generally unknown.
\end{tcolorbox}

\begin{theorem}
Let us now prove the theorem that asserts that any convergent sequence is a Cauchy sequence.
\end{theorem}
\begin{dem}
Consider a sequence $u_n$ converging to the value $l$ (which is unknown to us) and $\varepsilon>0$ (randomly selected). Then there exists, according to the definition of a convergent sequence, $N \in \mathbb{N}$ such that:

The choice to write $\dfrac{\varepsilon}{2}$ is completely arbitrary, but in fact we anticipate the result of the demonstration so that it is more aesthetic...

Therefore for $p,q>N$ (in fact knowing the value of $N$ is irrelevant, since it should work for any value... well, do not forget that $N$ depends on $\dfrac{\varepsilon}{2}$) we have, using the triangle inequality (\SeeChapter{see section Vector Calculus}):

and because $d(u_n,l)\leq\dfrac{\varepsilon}{2}$ we can write:

Therefore:

That may be a bit abstract, so let us see an example with the harmonic sequence to close the proof:

First, nothing forbids us from taking $n \geq 2$ (otherwise it will be hard to make a difference between two terms...).
We therefore take the Euclidean distance:

First, the reader will note that in all cases the index $k$ lies between $n+1$ and $2n$, so that $k\leq 2n$. This brings us to write:

So from this inequality it comes automatically that each term of the sum on the left below will be greater than each term of the sum on the right:

With (just do a particular example):

Therefore:

Now the idea is to see that the sum on the left is therefore greater than or equal to $\varepsilon=\dfrac{1}{2}$, and this for any $n$. Thus the sequence is not convergent!

Thus, the idea is that we found an $\varepsilon$ for which the Cauchy criterion fails.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
So it is not because the points are always closer to each other that they converge to a given point, because this point may not exist.
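As a quick numerical companion to the harmonic computation just made, here is a minimal Python sketch showing that the block $\displaystyle\sum_{k=n+1}^{2n}\frac{1}{k}$ never drops below $1/2$, which is precisely the $\varepsilon$ that defeats the Cauchy criterion:

\texttt{\# The block sum 1/(n+1) + ... + 1/(2n) stays >= 1/2 for every n\\
for n in (2, 10, 100, 10000): print(n, sum(1/k for k in range(n + 1, 2*n + 1)))}

(The printed values actually increase towards $\ln 2\approx 0.693$.)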
The best example is probably the following (it is also a little bit of a silly example but...):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Let us take $X=\mathbb{Q}$ and the absolute difference as distance:

Given $z$ an irrational number and $q_j \in \mathbb{Q}$ with $j \in \mathbb{N}^{*}$ such that:

The idea is that the greater $j$ is, the nearer the rational number $q_j$ is to the irrational $z$, and we know we can find such a sequence.
Let us show that the $q_1,q_2,\ldots$ we are able to build in this way form a Cauchy sequence! Indeed, using the triangle inequality:

and it is therefore a Cauchy sequence, since $\vert q_m-q_n \vert\leq \varepsilon$ if:

We have thus found an $N$ (equal to $\dfrac{1}{2\varepsilon}$) which satisfies our definition of a Cauchy sequence. But this sequence does not converge in $\mathbb{Q}$, otherwise $z$ would be rational.
You can check this with $\pi$ and the sequence:

\end{tcolorbox}

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
Mathematicians use such results to define the set of irrational numbers, together with some additional topological concepts.
\end{tcolorbox}
We have just seen that a Cauchy sequence is not necessarily a convergent sequence in $X$. The converse is however true: any convergent sequence is a Cauchy sequence!!

\subsubsection{Fibonacci Sequence}

If we calculate a sequence of numbers starting with 0 and 1, such that each term is equal to the sum of the two previous ones, we can form the following sequence:

Therefore, if we designate the different terms by:

we build the following sequence law:

The Fibonacci sequence has many interesting properties, which will be developed later. However, it seems to be the first "\NewTerm{recurring sequence}\index{recurring sequence}" known in history (hence the fact that we are talking about it in this book). 

The origin of this sequence seems to come from a rabbit problem asked of Fibonacci in 1202. Starting with a couple of rabbits, how many couples of rabbits will we get after a given number of months, knowing that each couple produces a new couple every month (and no couples die...), but becomes productive only after two months. Therefore we have:
\begin{itemize}
\item Beginning: We have nothing $(0)$
\item 1st month: We buy a couple of baby rabbits $(1)$.
\item 2nd month: The couple of rabbits are now adults $(1)$.
\item 3rd month: The couple of rabbits makes a new couple of baby rabbits. We have two couples $(2)$.
\item 4th month: We have two couples of adults with a new couple of babies. We have three couples $(3)$.
\item 5th month: We have three couples of adult rabbits and two new couples of baby rabbits. We have five couples $(5)$
\item and so on...
\end{itemize}

Let us now take a "real life" example (this is typically a biased scientific example, because you will always end up finding in Nature what you are looking for to support your theories with at least one particular example...): the heart of some flowers! The scales of a pineapple or pinecone form two families of spirals wound in opposite directions. On a pine cone, you will count 5 spirals in one direction and 8 in the other; on pineapples, 8 and 13; on sunflowers, 21 and 34. Each time we get Fibonacci numbers!
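Here is a minimal Python sketch of the recurrence just stated, reproducing both the monthly counts of the rabbit problem and the spiral counts above:

\texttt{\# Fibonacci recurrence: each term is the sum of the two previous ones\\
a, b = 0, 1\\
for month in range(11): print(month, a); a, b = b, a + b}

The output $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55$ contains in particular the pairs $(5,8)$, $(8,13)$ and $(21,34)$ mentioned for the pine cone, the pineapple and the sunflower.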
A famous illustration of this is to draw the following simple figure (named the "Fibonacci spiral"), which reproduces the Fibonacci numbers on a gridded plane, with squares whose corners are connected with arcs of circles:

\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/algebra/fibonacci.jpg}
\caption{Fibonacci spiral}
\end{figure}

We also use this kind of sequence to show the usefulness of the principle of induction as presented in the section on Number Theory, and as a simple practical case of the $\mathcal{Z}$ transform (\SeeChapter{see section Functional Analysis}).

\subsubsection{Logic Sequences/Psychologist Sequences}
\textbf{Definition (\#\mydef):} Psychologists name "\NewTerm{logical sequences}\index{logical sequence}" sequences that they write with an idea in mind, and they call "logical" the people who find their idea, although there are other possibilities mathematically speaking (but psychologists do not know anything about real logic).

For example, suppose you have to find the next number $X$ of the logic sequence:

In fact, you take the difference between the last and the second-to-last number, then you multiply by $10$; therefore the next number is $X=31000$.

From a mathematical point of view, any number is suitable to replace $X$, since there exists, for each value of $X$, a polynomial in $n$ that takes the values $4, 5, 10, 50, 400, 3500$ for $n = 0, 1, 2, \ldots, 5$.

For the example above we can take for example:

which is such that $P(0)=4, P(1)=5, P(2)=10, ..., P(6)=0$ (this particular polynomial thus corresponds to the choice $X=0$).

\pagebreak
\subsection{Series}

Physicists often need, to solve problems simply and formally, to approximate some given "terms" of their equations. For this purpose, they will use the properties of some given series. Statisticians and financial analysts are also often faced with series they need to simplify.

\textbf{Definition (\#\mydef):} Let there be given an infinite number sequence:

The expression:

is named a "numeric series".

\textbf{Definition (\#\mydef):} The sum of the first $n$ terms of the series is named a "partial sum" and denoted by:

If the following limit, denoted $S$, exists and is finite:

we name it the "\NewTerm{sum of the series}\index{sum of a series}" and we say that the "\NewTerm{series converges}\index{convergent series}" (it is therefore a Cauchy series). However, if the limit does not exist, we say that the "\NewTerm{series diverges}\index{divergent series}" and has no sum (for details see further below when we deal with some empirical convergence criteria).
\begin{theorem}
Also, let us prove for fun (because it is almost trivial) that if $\displaystyle \sum_{k\geq 0} u_k $ is a convergent numerical series, then:

But the converse is not necessarily true!! In fact, remember the example from our study of Cauchy sequences above, with the harmonic series $\sum_{k=1}^n \dfrac{1}{k}$, which is not convergent even though its terms tend to zero when $k \rightarrow +\infty$.
\end{theorem}
\begin{dem}
We assume first that $\displaystyle \sum_{k\geq 0} u_k $ is a convergent series and denote its limit by $S$.
Let:

Therefore:

However, if the series is really convergent:

So finally:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
Let us see how to calculate the partial sums of some classic series that are important in physics, statistics and finance.

\subsubsection{Gauss Series}

Gauss arithmetic series are an expression, in a condensed form $S_k$, of the sum of the first $n$ nonzero integers raised to a given power $k$. The application of this condensed form of a series has an important practical use in physics, statistics and finance when we wish to simplify the expression of certain results.

It is said that Gauss found an attractive method in 1786, when he was nine years old (...), to determine the arithmetic sum of the first $n$ integers to the power of 1:

To simplify, we find easily:

for $n \geq 0$. Let us indicate that each intermediate sum of the series (1, 3, 6, 10, 15, etc.) is named a "\NewTerm{triangular number}\index{triangular number}" since it is possible to represent it in the following form:

\begin{figure}[H]
\centering
\includegraphics[scale=1]{img/algebra/triangular_number.jpg}
\caption{Triangular number}
\end{figure}

We can continue with higher powers, but we will not leave them as exercises, because these relations are very useful!

Now let us calculate the very important case that we will meet in a number of other sections (Economy, Wave Quantum Physics, etc.): the sum of the squares of the first $n$ (still non-zero!) integers.

Let us write for this:

We know from Newton's binomial theorem (\SeeChapter{see Section Calculus}):

so we can write, and add term by term, the $n$ following equalities:

And the sum can be simplified as:

After some elementary algebraic manipulations we get:

Therefore:

Finally:

We continue with the sum of the cubes of the first $n$ (non-zero) integers. The principle is the same as before; we write:

We know from Newton's binomial theorem (\SeeChapter{see Section Calculus}):

We get, by varying $k$ from $1$ to $n$, $n$ relations that we can add term by term:

And the sum can be simplified as:

Giving after development:

And after a first simplification:

And a second simplification:

The result is therefore:

or written differently:

For sure, we can continue like this for a long time, but from a certain value of the power things get a bit more complicated (furthermore, the method is a little bit boring). Thus, one of the members of the Bernoulli family (it was a family of very talented mathematicians... as you can see in the Biographies chapter) found a general relation working for any power, by defining what we name the "Bernoulli polynomial" (see further below).
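For the reader's convenience, here are the three classical closed forms just derived (with $S_k$ denoting the sum of the $k$-th powers of the first $n$ non-zero integers):
\begin{align*}
S_1=\sum_{k=1}^{n}k=\frac{n(n+1)}{2},\qquad
S_2=\sum_{k=1}^{n}k^2=\frac{n(n+1)(2n+1)}{6},\qquad
S_3=\sum_{k=1}^{n}k^3=\left[\frac{n(n+1)}{2}\right]^2 .
\end{align*}
Note in passing the small curiosity $S_3=S_1^2$, visible on the first values $1, 9, 36, 100, \ldots$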
Let us conclude with one last case we will need during our study of Fourier series. We put:

We want to express this series as a rational fraction. To do this, we multiply everything by $x^2$. So we have two expressions:

We subtract the first from the second:

Finally:

Most of the time, to indicate that this is for odd powers, we prefer to write:

Similarly, for the needs of the section of Economy, we have:

Therefore:

Finally:

Most of the time, to indicate that this is for even powers, we prefer to write:

\paragraph{Bernoulli's Numbers and Polynomials}\mbox{}\\\\
As we have seen above, it is possible to express the sum of the first $n$ nonzero integers to a given power (the first four have been proved previously) following the relations below, where we now put $n:=n+1$ since we now want $n$ to be the number of terms of the sum, including 0 (hence the negative sign in the relations below that we did not have earlier):

It is said that Jacob Bernoulli then noticed that the polynomials $S_p$ had the form:

In this expression, the numbers $(1,-1/2,1/12,0,...)$ seem not to depend on $p$. More generally, after trial and error, we see that the polynomial can be written as:

Giving by identification the "\NewTerm{Bernoulli numbers}\index{Bernoulli numbers}":

\begin{theorem}
Thereafter, it seems that mathematicians in their research stumbled by chance (???) on the fact that the Bernoulli numbers can be expressed by the series:

with $\vert z\vert<2\pi$.
\end{theorem}
\begin{dem}
We have seen during our study of complex numbers (\SeeChapter{see section Numbers}) that:

Therefore:

Let us write now:

Then we must have:

We see (by distributing) that:

For all this to be equal to unity we must have:

From the second equation we get:

and from the third equation we get:

etc. Continuing this way we show that:

It is obvious that this method allows us to calculate by hand only the first terms of this series.
Thus, based on:

we find that the first Bernoulli numbers are:

The reader will have noticed that $B_k=0$ when $k$ is odd and different from $1$.
\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
We easily see that the values of the Bernoulli numbers cannot be described in a simple way. In fact, they are essentially values of the Riemann zeta function (see below) for negative integer values of the variable, and these numbers are associated with profound theoretical properties that go beyond the study of this book.
Furthermore, the Bernoulli numbers also appear in the Taylor series expansions of the circular and hyperbolic tangent functions and in the Euler-Maclaurin formula (see below).

With a small modification it is possible to define the "\NewTerm{Bernoulli polynomials $B_k(x)$}\index{Bernoulli polynomials}" by:

with:

\begin{theorem}
Furthermore, it is normally easy to observe that:

and therefore it is normally easy to deduce that:

\end{theorem}
\begin{dem}
On one side we have:

and on the other we have:

So:

\begin{flushright}
$\square$  Q.E.D.
\end{flushright}
\end{dem}
And by identification of the coefficients we deduce:

and for $k \geq 1$:

It is then easy to deduce that the polynomials $B_k(x)$ are of degree $k$:

Here is a plot of these polynomials:
\begin{figure}[H]
\centering
\includegraphics{img/algebra/bernoulli_polynomials.jpg}
\caption{Some Bernoulli polynomials (source: Wikipedia)}
\end{figure}
What is remarkable is that, using the Bernoulli polynomials, we see that it is possible to write the $S_n$ as follows after some trials:

Some write this relation otherwise. Indeed, from the previous relation, we can write:

using:

We have:

So we have just demonstrated:

However, we can now ask ourselves what happens to the partial sums of the arithmetic and geometric sequences presented earlier in this section.
\subsubsection{Arithmetic Series}
We have shown above that the partial sum of a Gauss series (analogous to the sum of the terms of an arithmetic progression of reason $r = 1$) was given by:

If we now denote the value of the $n$-th term by $u_n$ instead of $n$, the development that we made for the Gauss series then brings us to:

and if we denote the first term $1$ of the Gauss series by $u_0$, then we have:

which gives us simply the partial sum of the $n$ terms of an arithmetic sequence of reason $r$.
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. A simple Gauss series with reason 1, starting at 4, finishing at 6:

E2. Now the partial sum of an arithmetic series of reason 2, starting at 4, finishing at 8:

\end{tcolorbox}
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
The reader will have observed that the reason $r$ does not appear in the latter relation. Indeed, by taking (always) the same development as for the Gauss series, the term $r$ simplifies and vanishes.
\end{tcolorbox}

\subsubsection{Geometric Series}
Similarly, with a geometric sum, where we have for recall:

we therefore have:

The last relation is written (after simplification):

and if $q\neq 1$ we get:

which can be written, by factoring $u_0$:

If $q$ is positive and less than $1$, as $n$ approaches infinity we have the result that will be used extensively in the section Economy:

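Assuming, as above, that indexing starts from $0$, the two relations just alluded to are the classical ones:
\begin{align*}
S_n=\sum_{k=0}^{n}u_0 q^k=u_0\,\frac{1-q^{n+1}}{1-q}\quad(q\neq 1),
\qquad
\lim_{n\rightarrow +\infty}S_n=\frac{u_0}{1-q}\quad(0<q<1).
\end{align*}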
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Consider the following geometric series of reason $q = 2$:

To calculate the sum of its first four terms $\left\lbrace 1,2,4,8 \right\rbrace$, we apply the previous relation with $n=3$ (the indexing starting from zero). We then get, as expected: $S_3=2^4-1=15$.
\end{tcolorbox}

\paragraph{Zeta function and Euler's identity}\mbox{}\\\\
The German mathematician Riemann named "zeta" a function already studied before him, but that he examined when the variable is a complex number (\SeeChapter{see section Numbers}). This function is represented as a series of inverse powers of the integers. This is the series:

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
It is traditional to denote by $s$ the variable upon which this series depends.
\end{tcolorbox}
This series has an interesting property if we remain within the framework of positive, non-zero integer powers:

When $n\longrightarrow +\infty$ we then have:

If we put $x=2^s$, we obtain the sum of the inverses of the powers of $2$, and similarly with $x=3^s$, such that:

If we take the product of these two expressions, we obtain the sum of the powers of all fractions whose denominator is a product of $2$ and $3$:

If we take all the primes left, we will get on the right all the integers, since every integer is the product of prime numbers according to the fundamental theorem of arithmetic (\SeeChapter{see section Number Theory}), and this is Euler's fundamental identity: what we now name the "\NewTerm{Riemann zeta function}\index{Riemann zeta function}" is both an infinite product and the sum of the inverse powers of all integers:

In condensed notation, "\NewTerm{Euler's identity}\index{Euler's identity}" is given by:

where the $p$ are the prime numbers.
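As a minimal numerical sketch of Euler's identity, we can compare in Python a truncated zeta sum with a truncated Euler product for $s=2$ (the primes are simply hard-coded; both numbers should approach $\zeta(2)=1.6449...$):

\texttt{\# Truncated zeta sum vs. truncated Euler product, s = 2\\
s = 2\\
zeta\_sum = sum(1/n**s for n in range(1, 100001))\\
primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47)\\
product = 1.0\\
for p in primes: product *= 1/(1 - p**(-s))\\
print(zeta\_sum, product)}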
We now recommend that most readers skip what follows on the Riemann zeta function and return here once the Fourier series presented later in this section are mastered and understood...

We assume in what follows that the Fourier series are known and mastered and that the Parseval equality has been studied (since it is also proved further below). We will seek to determine the Riemann zeta function for two values ($s$ respectively having the values $2$ and $4$) that will be useful in the evaluation of integrals in certain sections of the chapter on Mechanics.

To determine the value of $\zeta (2)$, we will express the function:

in Fourier series form (see a little further below in this section). During our study of Fourier series we will see that there are two traditional ways to define a Fourier series, and we have made here the choice of the definition most commonly used among physicists and engineers:

As we prove in our study of Fourier series, the Fourier coefficients $a_0,a_n,b_n$ are obtained by solving:

and using integration by parts (\SeeChapter{see section Differential and Integral Calculus}) we have:

It comes then:

But the Parseval theorem, which we will prove in our study of Fourier series a little bit further below, also gives us (depending on the choice of the definition of the Fourier series and associated coefficients, the Parseval theorem is expressed a little bit differently!):

Therefore we get immediately:

But we will also see during our proof of the Parseval theorem that:

Therefore it comes in our case:

Therefore:

and finally:

To determine the value of $\zeta(4)$, we will do the same, but with the function:

in the form of a Fourier series:

For this purpose, we will calculate the Fourier coefficients using integration by parts (\SeeChapter{see section Differential and Integral Calculus}).
Then we have:

Therefore we have:

But the Parseval theorem that we will prove below also gives us:

It then comes immediately:

But we will also see later, during our proof of the Parseval theorem, that:

Then it comes in our case:

Therefore:

That is to say:

Finally:

\subsubsection{Telescoping Series}
A "\NewTerm{telescoping series}\index{telescoping series}" is a series in which most of the terms cancel in each of the partial sums, leaving only some of the first terms and some of the last terms:

For example, the series:

simplifies as:

We will encounter such a series for business purposes (management) in our study of Queuing Theory in the section of Quantitative Management!!!
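Since the worked example is elided here, let us illustrate the mechanism with the canonical telescoping series $\displaystyle\sum_{k=1}^{n}\frac{1}{k(k+1)}$ (not necessarily the one of the text), whose terms $\frac{1}{k}-\frac{1}{k+1}$ cancel pairwise and leave $1-\frac{1}{n+1}$; a minimal Python check:

\texttt{\# Canonical telescoping example: 1/(k(k+1)) = 1/k - 1/(k+1)\\
n = 1000\\
print(sum(1/(k*(k + 1)) for k in range(1, n + 1)), 1 - 1/(n + 1))}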
\subsubsection{Grandi's Series}
The "\NewTerm{Grandi's series}\index{Grandi's series}" (after the Italian mathematician, philosopher, and priest Guido Grandi, who gave a memorable treatment of the series in 1703) is defined as the following series:

It is a very famous series in mathematics and physics because:
\begin{itemize}
\item It highlights in a very simple way the fact (see below) that it is dangerous to manipulate infinite series

\item Its result seems completely non-intuitive, but in fact it opens the door to a more general definition of what a "sum" is

\item It is a beautiful example of a series that seems useless and purely mathematical, but that has in fact important applications in quantum physics (the Casimir effect, as seen in the section of Corpuscular Quantum Physics) and String Theory (the number of dimensions, as seen in the section of String Theory).
\end{itemize}
and this is why we dedicate a special subsection to it in this book!

It seems quite obvious at first glance that it is a divergent series, meaning that it lacks a sum in the usual sense (the sequence of partial sums of Grandi's series clearly does not approach any number). But on the other hand, its Cesàro sum is $1/2$!!?? So what the hell is a Cesàro sum...

\textbf{Definition (\#\mydef):} In mathematical analysis a "\NewTerm{Cesàro sum}\index{Cesàro sum}" assigns values to some infinite sums that are not convergent in the usual sense. The Cesàro sum is defined as the limit of the arithmetic mean of the partial sums of the series.

Let $\{a_n\}$ be a sequence, and let:

be the $k$-th partial sum of the series:

The series $\sum _{n=1}^{\infty }a_{n}$ is said to be "Cesàro summable", with Cesàro sum $S\in\mathbb{R}$, if the average value of its partial sums $s_k$ tends to $S$:

In other words, the Cesàro sum of an infinite series is the limit of the arithmetic mean (average) of the first $n$ partial sums of the series, as $n$ goes to infinity. If a series is convergent, then it is Cesàro summable and its Cesàro sum is the usual sum. Similarly, for any convergent sequence, the sequence of arithmetic means of its first $n$ terms converges to the same limit.

One obvious method to attack Grandi's series:

is to treat it like a telescoping series and perform the subtractions in place:

On the other hand, a similar bracketing procedure leads to the apparently contradictory result:

Thus, by applying parentheses to Grandi's series in different ways, one can obtain either $0$ or $1$ as a "value". It can be shown that it is not valid to perform many seemingly innocuous operations on a series, such as reordering individual terms, unless the series is absolutely convergent. Otherwise these operations can alter the result of the summation.

Treating Grandi's series as a divergent geometric series, we may use the same algebraic methods that evaluate convergent geometric series to obtain a third value:

so:

Therefore:

Finally:

The same conclusion results from calculating $-S$, subtracting the result from $S$, and solving $2S = 1$.

The above manipulations do not consider what the sum of a series actually means. Still, to the extent that it is important to be able to bracket series at will, and that it is more important to be able to perform arithmetic with them, one can arrive at two conclusions:
\begin{itemize}
\item The series $1-1 + 1-1 + \ldots$ has no sum

\item ...but its sum should be $1/2$ (see further below)
\end{itemize}
In fact, both of these statements can be made precise and formally proven, but only using well-defined mathematical concepts that arose in the 19th century. After the late 17th-century introduction of calculus in Europe, but before the advent of modern rigor, the tension between these answers fueled what has been characterized as an "endless" and "violent" dispute between mathematicians. The funny thing is that the violent discussions still continue today... a YouTube video on this subject has more than $5,000$ comments, a blog post more than $200$ and a forum thread more than $600$... So this is quite a hot topic...

Let us also recall that at the beginning of our study of geometric series we proved that:

Therefore if $u_0=1$ this reduces to:

where, as $n$ goes to infinity, the absolute value $|q|$ must be less than one for the series to converge!

Now notice that if $q=-1$ we fall back on Grandi's series, and therefore the latter is a special case of the geometric series $1+q^1+q^2+q^3+\ldots$; we would then perhaps write a bit too quickly:

But as we have just mentioned, we are not authorized to write the latter fraction if $q=\pm 1$, otherwise the series diverges.
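Here is a minimal Python sketch of the Cesàro mean of Grandi's series: the partial sums oscillate between $1$ and $0$, but their running average tends to $1/2$:

\texttt{\# Cesaro mean of Grandi's series 1 - 1 + 1 - 1 + ...\\
N = 100001\\
s, total = 0, 0\\
for n in range(N): s += (-1)**n; total += s  \# s is the n-th partial sum\\
print(total / N)  \# close to 0.5, and closer as N grows}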
\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere have been very interesting studies about the reaction of high-school level students to Grandi's series presentation. The reactions and analysis are very interesting and I can personally only recommend every teacher to introduce this series in classes but without giving the result in a first time!\n\t\\end{tcolorbox}\n\tBut there is a \"one more thing\"... We will now calculate a sum to think it really gives infinite:\n\n\t\n\tTo do this, let's do another trick of mathematical magician:\n\t\n\tTherefore:\n\t\n\tSo we can compute:\n\t\n\tOur first concrete result, squared, can be rewritten as follows:\n\n\t\n\tOr well:\n\t\n\tExplicitly:\n   \\begin{eqnarray*}\n\t   (-1 + 1-1 + 1-1 + \\ldots)\\\\\n\t   \\underline{\\times (-1 + 1-1 + 1-1 + \\ldots)} \\\\\n\t    =1-1 + 1-1 + 1-1 + 1-1 + \\cdots \\\\ -1 + 1-1 + 1-1 + 1-1 + \\cdots\\\\ + 1-1 + 1-1 + 1-1 + \\cdots\\\\ \\cdots\n\t\\end{eqnarray*}\n\tSumming each column we see that we fall back on:\n   \n\tTherefore:\n\t\n   But as $S = 1/2$, then:\n   \n   Therefore:\n   \n   That is (to freak out a last time), we have shown that\n   \n\tAstonishing! However, all this stuff is not new. It was known for many people, and it was Srinivasa Ramanujan and later Godfrey Harold Hardy in a book titled \\textit{Divergent Series} where you can find fine theorems about this crazy subject.\n\t\n\t\\pagebreak\n\t\\subsubsection{Taylor and Maclaurin Series}\n\tTaylor and Maclaurin series provide a convenient and powerful tool to simplify theoretical models and computer calculations (fluid modeling or fields in space). They are used heavily in all fields of physics but they are also found in the industry including engineering (design of experiments, numerical methods, quality management), statistics (integral approximations), finance (stochastic processes ), complex analysis... We strongly advise the reader to read carefully the developments that follow.\n\t\n\tGiven a polynomial (with one variable/univariate):\n\t\n\tWe trivially have for this latter:\n\t\n\tGiven now the derivative of the polynomial $P (x)$:\n\t\n\tTherefore:\n\t\n\tand so on with $P''(x), P'''(x), ...$ such that:\n\t\n\tThen:\n\t\n\tTherefore:\n\t\n\trelation that we name \"\\NewTerm{limited Maclaurin series}\\index{limited Maclaurin series}\" or simply \"\\NewTerm{Maclaurin series}\\index{Maclaurin series}\" of order $k + 1$.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn practice, as we will see in many other sections of this book, we often use limited developments of order $1$ (also named \"\\NewTerm{affine approximations}\\index{affine approximation}\", or \"\\NewTerm{affine tangent approximations}\\index{affine tangent approximations}\"), which can facilitate the calculations, when we do not expect too much precision in the final result.\n\t\\end{tcolorbox}\n\tNow by applying the same reasoning but by centering the value of the polynomial on $x=x_0$, we have:\n\t\n\tand so the previous development becomes more general:\n\t\n\twhich is no other than the general expression of a polynomial expression in a form named \"\\NewTerm{limited Taylor series}\\index{limited Taylor series}\" of order $k + 1$. This function can be assimilate to a polynomial as $n$ is finite. 
But if $n$ is infinite, as we shall see later, this series converges to the function whose representation as a sum of terms we are seeking.

Thus, some functions $f (x)$ of class $\mathcal{C}^n$ that can be approximated by a polynomial $P (x)$ (a sum of powers, in other words...) centered on the value $x_0$ can be expressed as:

a relation often referred to as "\NewTerm{Taylor's theorem}\index{Taylor's theorem}".

But this last relation is not correct for the functions that cannot be expressed as a polynomial. Therefore we say that the series is not convergent for them. We will see an example later below.

The latter relation is sometimes also written... more conventionally:

In finance (and not only!), we will often use the following rearrangement:

Let us return briefly to the approximation of $f (x)$ near and centered on $x_0$:

Some people do not like using this formulation because we run the risk of forgetting that the approximation with a few terms is only good as long as $x$ is not too far from $x_0$. This is why it often happens that we write:

with $x_0$ fixed and $h$ variable but small (!), and so it then comes a current form of notation of the Taylor series:

with $x_0$ fixed and $h$ variable but small, and therefore it comes another common notation of the Taylor series (!):

Let us see an application example with the Maclaurin series (with $x_0$ being zero) of the function $\sin (x)$ and Maple 4.00b:

\texttt{>p[n](x) = sum((D@@i)(f)(a)/i!*(x-a)\string^i,i=0..n);\\
>p11:= taylor(sin(x),x=0,12);\\
>p11:= convert(p11,polynom);\\
>with(plots):\\
>tays:= plots[display](sinplot):\\
for i from 1 by 2 to 11 do\\
tpl:= convert(taylor(sin(x), x=0,i),polynom):\\
tays:= tays,plots[display]([sinplot,plot(tpl,x=-Pi..2*Pi,y=-2..2,\\
color=black,title=convert(tpl,string))]) od: \\
>plots[display]([tays],view=[-Pi..2*Pi,-2..2]);}

\begin{figure}[H]
\centering
\includegraphics{img/algebra/maclaurin_sinus_serie.jpg}
\caption{Approximation of the sine function by a Maclaurin development in Maple 4.00b}
\end{figure}
We see well in this example that the Maclaurin series only allows us to approximate a function near a point with a limited number of terms. But the more terms we take (put $100$ terms in the Maple code above), the better the validity over the whole domain of definition of the function. In fact, it is possible to prove that the function $\sin (x)$ is exactly expressible as a Maclaurin series when the number of terms is infinite. We then say that its "rest" is zero.

But this is not true for all functions! For example the function:

\texttt{>p[n](x) = sum((D@@i)(f)(a)/i!*(x-a)\string^i,i=0..n); \\
>p10:= taylor(1/(1-x\string^2),x=0,10);\\
>p10:= convert(p10,polynom);\\
>with(plots):\\
>tays:= plots[display](xplot):\\
for i from 1 by 2 to 10 do\\
tpl:= convert(taylor(1/(1-x\string^2), x=0,i),polynom):\\
tays:= tays,plots[display]([xplot,plot(tpl,x=-2..2,y=-2..2,
color=black,title=convert(tpl,string))]) od: \\
>plots[display]([tays],view=[-2..2,-2..2]);}
\begin{figure}[H]
\centering
\includegraphics{img/algebra/maclaurin_nonconvergent_serie.jpg}
\caption{Example of a non-convergent Maclaurin series in Maple 4.00b}
\end{figure}
We see above that, regardless of the number of terms that we take, the Maclaurin series converges only in the domain of definition $]-1,1[$. This interval is the interval of convergence, whose half-length defines the "\NewTerm{radius of convergence}\index{radius of convergence}", and its determination (the singularity) is crucial in many areas of engineering, physics and analysis. We will return in much more detail to this example in the section of Complex Analysis.
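A minimal Python sketch of this behavior: the Maclaurin expansion of $1/(1-x^2)$ is the geometric series $\sum_k x^{2k}$, so its partial sums settle down for $\vert x\vert<1$ and explode for $\vert x\vert>1$, whatever the number of terms:

\texttt{\# Partial sums of 1/(1 - x**2) = 1 + x**2 + x**4 + ... inside/outside ]-1,1[\\
f = lambda x, terms: sum(x**(2*k) for k in range(terms))\\
for x in (0.5, 0.9, 1.1): print(x, f(x, 50), 1/(1 - x**2))}

For $x=1.1$ the exact value is even negative while the partial sums grow without bound: there is simply no convergence outside the interval.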
But we can shift the Maclaurin series of the previous function to approximate the function with a Taylor series at another, non-singular point, such as $x_0=2$:

\texttt{>p[n](x) = sum((D@@i)(f)(a)/i!*(x-a)\string^i,i=0..n);\\
>p10:= taylor(1/(1-x\string^2),x=2,10);\\
>p10:= convert(p10,polynom);\\
>with(plots):\\
>tays:= plots[display](xplot):\\
for i from 1 by 2 to 10 do\\
tpl:= convert(taylor(1/(1-x\string^2), x=2,i),polynom):\\
tays:= tays,plots[display]([xplot,plot(tpl,x=0..5,y=-2..2,\\
color=black,title=convert(tpl,string))]) od: \\
>plots[display]([tays],view=[-0..5,-2..2]);}

\begin{figure}[H]
\centering
\includegraphics{img/algebra/maclaurin_nonconvergent_serie_shifted.jpg}
\caption{Shift possibility of the Maclaurin series in Maple 4.00b}
\end{figure}

We will study a generalization of Taylor series to the complex plane in the section of Complex Analysis, to get a veeeeery powerful result for physicists to calculate complicated curvilinear integrals.

\pagebreak
\paragraph{Usual Maclaurin developments}\mbox{}\\\\
We will prove here the most frequent Maclaurin developments (about ten), to the second order, that we can meet in theoretical and mathematical physics (in fact we have developed here only the ones used almost everywhere in the book). The list is not exhaustive for the time being, but as the proofs below are general, they can be applied to many other cases (that we will apply/meet throughout this book).

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
Taylor expansions proper (that is to say, centered elsewhere than on zero) are very rare (there are one or two in this entire book, but they are detailed in their respective sections), so we will omit them.
\end{tcolorbox}

\begin{enumerate}
\item Taylor-Maclaurin development of $f(x)=e^x$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

More generally:

And therefore we have the famous result for $x=1$:

that is sometimes named the "\NewTerm{exponential sequence}\index{exponential sequence}".

\item Taylor-Maclaurin development of $f(x)=\sin(x)$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

\item Taylor-Maclaurin development of $f(x)=\cos(x)$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

\item Taylor-Maclaurin development of $f(x)=\tan(x)$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

\item Taylor-Maclaurin development of $f(x)=\arctan(x)$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

\item Taylor-Maclaurin development of $f(x)=\displaystyle\frac{1}{1+x}$:

First remember that we have proved in the section of Differential and Integral Calculus
that:

Therefore we have:

It then follows immediately another Taylor series that we will also meet again a number of times:

\item Taylor-Maclaurin development of $f(x)=\sqrt{1+x}$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

It then also follows immediately another Taylor series that we will also meet again a number of times:

\item Taylor-Maclaurin development of $f(x)=\ln(1+x)$:

First remember that we have proved in the section of Differential and Integral Calculus that:

Therefore we have:

\item Now consider the case, important for the Langevin model of paramagnetism, of the approximate Taylor expansion of the hyperbolic cotangent function (\SeeChapter{see section Trigonometry}), which for a refresher is defined by the relation:

For this, we will use the Landau notation, with expressions like $\mathcal{O}(x^n)$, remembering that we proved a little earlier:

when $x \rightarrow 0$.
For the hyperbolic cotangent we then have:

Now we must remember, as we have just proved a little earlier, that:

for $\vert x \vert < 1$. Therefore:

and finally, replacing this in the previous expression, we find:

\item Another famous Maclaurin series, used thousands of times around the world for business applications, is the one for the computation of the numerical values of the Normal distribution:

So first, to simplify this integral, we typically let:

which we know already (\SeeChapter{see section Statistics}) as being the $z$-score of a data value. With this simplification, the integral above becomes:

The Maclaurin series for $e^{-x^2/2}$ is given by:

Therefore:

Therefore:

It is obvious that the constant will cancel out. Therefore!

and in the common case in business where $a=0$ we get (with two terms only):

\end{enumerate}
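To close this list, here is a minimal Python sanity check comparing three of the second-order developments above against the exact functions at a small $x$ (nothing is assumed beyond the classical expansions $e^x\approx 1+x+x^2/2$, $\ln(1+x)\approx x-x^2/2$ and $\sqrt{1+x}\approx 1+x/2-x^2/8$):

\texttt{from math import exp, log, sqrt\\
x = 0.05\\
print(exp(x), 1 + x + x**2/2) \# e**x\\
print(log(1 + x), x - x**2/2) \# ln(1 + x)\\
print(sqrt(1 + x), 1 + x/2 - x**2/8) \# sqrt(1 + x)}

Each pair agrees to about $x^3$, as the Lagrange remainder studied further below predicts.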
\pagebreak
\paragraph{Taylor series of bivariate functions (multivariate Taylor series)}\mbox{}\\\\
We will now see how to approximate a function $f (x, y)$ of two real variables by a sum of powers (a Taylor series). This type of approximation is widely used in many fields of engineering (see the sections of Industrial Engineering and Numerical Methods in this book).

We are looking for an approximation of $f (x, y)$ at the point $(x_0+h,y_0+k)$. For this, let us write (a priori nothing prohibits us from doing so):

Then we have:

The value of (the trick is here!):

can be approximated using a Taylor series around the value $0$, such that:

But we have:

and:

According to Schwarz's theorem (\SeeChapter{see section Differential and Integral Calculus}):

Then we have:

and we show by induction that:

Therefore we finally get:

or in another equivalent simplified form:
\begin{empheq}[box=\fbox]{align}
\begin{split}
f(x_0+h,y_0+k)&=f(x_0,y_0)+\dfrac{1}{1!}\dfrac{\partial f}{\partial x}(x_0,y_0)h+\dfrac{1}{1!}\dfrac{\partial f}{\partial y}(x_0,y_0)k\\
&+\dfrac{1}{2!}\left[\dfrac{\partial^2 f}{\partial x^2}(x_0,y_0)h^2+2\dfrac{\partial^2 f}{\partial x\partial y}(x_0,y_0)hk+\dfrac{\partial^2 f}{\partial y^2}(x_0,y_0)k^2\right]
\end{split}
\end{empheq}
Or, if we define a matrix $H$ named the "\NewTerm{Hessian matrix}\index{Hessian matrix}" given by:

we can also write:

In Maple 4.00b we use the following command to make a development of order $3$ around $0$:

\texttt{>readlib(mtaylor):\\
>mtaylor(f(x,y), [x,y], 3);}

\paragraph{Quadratic Form}\mbox{}\\\\
We will now need, for the section of Theoretical Computing, to state an important property (which would also have its place in the section of Differential and Integral Calculus):

Let $f$ be a function defined and differentiable over an interval $I$, and let $a$ be an element of $I$. If $f$ has a local extremum at $a$, then $f'(a)=0$.

\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
The reciprocal is false; the function $x^3$ is such an example. Its derivative is zero at $0$ but there is no local extremum at this point. So be careful!
\end{tcolorbox}

However, let $f$ be a function defined and differentiable over an interval $I$, and let $a$ be an element of $I$. If $f$ is such that $f'(a)=0$ and if $f'$ changes sign at $a$, then $f$ has a local extremum at $a$.

To return now to our bivariate Taylor development: we know that if $(x_0,y_0)$ is a local extremum of $f$, then we have first (\SeeChapter{see section Differential and Integral Calculus}):

However, we have seen that this condition is not sufficient to ensure that $(x_0,y_0)$ is a local extremum.

Let us reconsider the Taylor expansion of $f$ above, taking into account the above condition. The development then simplifies to:

Then we know by definition that for $(x_0,y_0)$ to be a local minimum (respectively a local maximum), it is sufficient that the expression in brackets be positive (respectively negative). Since the second derivatives of $f$ are continuous, it is sufficient that the expression:

be positive (resp. negative) regardless of $h$ and $k$, being zero only if $h=k=0$. Then we say that $q$ is a "\NewTerm{positive definite quadratic form (resp. negative definite)}\index{positive definite quadratic form}".
To simplify writing and to comply with traditions, we now put:\n\t\n\tThen we can rewrite $q$ as follows:\n\t\n\twhere $H$ remains the Hessian matrix of $f$ evaluated at $(x_0,y_0)$.\n\t\n\tSo we see that $q$ is positive definite (local minimum) if $a>0$ and $\det(H)>0$, and negative definite (local maximum) if $a<0$ and $\det(H)>0$.\n\t\n\tReturning to the partial derivatives, these conditions are written as follows:\n\t\begin{itemize}\n\t\t\item Positive definite (local minimum) if:\n\t\t\n\t\t\n\t\t\item Negative definite (local maximum) if:\n\t\t\n\t\end{itemize}\n\tFinally we see that the sign of the determinant of the Hessian matrix and that of $\dfrac{\partial^2 f}{\partial x^2}(x_0,y_0)$ give us a sufficient condition to determine whether we are in the presence of a local extremum.\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tLet us see an example with the famous six-hump camel back function:\\\n\t\n\t\texttt{>with(plots): with(plottools):\\\n\t>readlib(mtaylor):\\\n\t>fct:=x\string^2*(4-2.1*x\string^2+1/3*x\string^4)+x*y+y\string^2*(-4+4*y\string^2);\\\n\t>poly2 :=mtaylor(fct,[x=1,y=1],6);\\\n\t>\#Convert all the coefficients to floating point numbers\\\n\t>poly2n := map(evalf,poly2):\\\n\t>gr1:= plot3d(poly2n,x=-2..2,y=-1..1,color=red):\\\n\t>gr2:= plot3d(fct,x=-2..2,y=-1..1,color=blue):\\\n\t>display3d({gr1,gr2},view=-3..8,axes=framed);\n\t}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/taylor_multivariate.jpg}\n\t\t\caption{Bivariate Taylor example with Maple 4.00b}\n\t\end{figure}\n\t\end{tcolorbox}
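\n\tStaying in Maple 4.00b, the definiteness test above is easy to carry out on a concrete case. As a minimal sketch (the function $f=x^2+3y^2-2xy$ is a hypothetical example of ours, whose only critical point is $(0,0)$):\n\t\n\t\texttt{>with(linalg):\\\n\t>H:=hessian(x\string^2+3*y\string^2-2*x*y,[x,y]);\\\n\t>det(H); \#gives 8>0, and H[1,1]=2>0: positive definite, hence a local minimum}\n\t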
\n\t\paragraph{Lagrange Remainder}\mbox{}\\\\\n\tThere may be an interest, in certain numerical applications (\SeeChapter{see section Theoretical Computing}), to know the approximation error of the polynomial $P_n(x)$ relative to the function $f(x)$, $\forall x$.\n\t\n\tLet us define for this purpose a \"remainder\", such that:\n\t\n\tThe function $R_n(x)$ is named \"\NewTerm{Lagrange rest}\index{Lagrange rest}\" or \"\NewTerm{Lagrange remainder}\index{Lagrange remainder}\" or \"\NewTerm{Lagrange error}\index{Lagrange error}\".\n\t\n\t\begin{dem}\n\tGiven a function $g(t)$ defined by the difference of a function $f(x)$ assumed to be known and a Taylor approximation of the same function:\n\t\n\twith, of course:\n\t\n\tWe see that $g(t)$ vanishes, as expected, for the value $t=x$.\n\t\n\tNow let us differentiate $g(t)$ with respect to $t$; we find:\n\t\n\tAfter simplification:\n\t\n\tAccording to Rolle's theorem (\SeeChapter{see section Differential and Integral Calculus}), there exists a value $t=z$ for which the derivative $g'(t)$ is zero. So:\n\t\n\tWe can simplify the equation by $(x-z)^n$:\n\t\n\twhich can also be written as:\n\t\n\tso we find for the maximum of $R_n$:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tWe see that the higher the degree of the polynomial $P_n(x)$, the more accurately it approximates the function $f(x)$. What will happen when $n\rightarrow +\infty$?:\n\t\n\tSuppose that $f(x)$ has derivatives of all orders (functions that we denote, for reminder, by $\mathcal{C}^{\infty}$) for all values of any interval containing $x_0$, and let $R_n$ be the Lagrange remainder of $f(x)$ at $x_0$. If, for any $x$ in the range:\n\t\n\tthen $f(x)$ is exactly represented by $P(x)$ on the interval.\n\t\begin{dem}\n\tThe proof simply stems from the expression of $P_n(x)$ when $n\rightarrow +\infty$.\n\t\n\tIndeed, if we take an infinity of terms for $P_n(x)$, the correspondence with the approximated function is perfect and so the remainder is zero.\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tThe polynomial:\n\t\n\tis named \"\NewTerm{Taylor polynomial}\index{Taylor polynomial}\" or \"\NewTerm{Taylor series}\index{Taylor series}\". If $x_0=0$, it is named \"\NewTerm{Maclaurin polynomial}\index{Maclaurin polynomial}\" or \"\NewTerm{Maclaurin series}\index{Maclaurin series}\".\n\t\n\t\paragraph{Taylor Series with Integral Remainder}\mbox{}\\\\\n\tWe will now see a theorem that will be useful in the section of Statistics to link the Poisson and Chi-2 laws, and that is used in statistical software for the Poisson test of rare events (the only practical business application known to us to this day).\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIf anyone has a more educational proof whose beginning is a little less \"formula fallen from the sky\", we are takers!\n\t\end{tcolorbox}\n\t\begin{theorem}\n\tLet $f(x)$ be $n + 1$ times differentiable on the interval $[a, b]$. Then we have:\n\t\n\twhere it is important (for the good understanding of what we will do in the section of Statistics) that the reader notices in the development that when the derivative stops at the $n$-th term in the series, the integral (the remainder) has a factor of $1 / n !$, a power $n$ and a derivative of order $n + 1$. So, verbatim, as we will prove below, if we stop the development of the terms at $n-1$, the integral (the remainder) will have a factor of $1 / (n-1) !$, a power $n-1$ and a derivative of order $n$.\n\t\end{theorem}\n\t\begin{dem}\n\tThe proof is made by induction. We first consider the formula fallen from the sky:\n\t\n\tWe show that it is correct for $k = 0$, then we do an induction on $k$ for $k\in \mathbb{N}$.\n\t\n\tFor $k = 0$, we have the well-known relation (\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tSuppose the property true for $k<n$:\n\t\n\tWe integrate by parts (\SeeChapter{see section Differential and Integral Calculus}) the term:\n\t\n\tThen we have:\n\t\n\tTherefore:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\subsubsection{Fourier Series (trigonometric series)}\n\tWe name by definition \"\NewTerm{trigonometric series}\index{trigonometric series}\" a series of the form:\n\t\n\tor in a more compact form:\n\t\n\tThe constants $a_n,b_n$ with $n\in \mathbb{N}^{*}$ are the coefficients of the trigonometric series, usually named \"\NewTerm{Fourier coefficients}\index{Fourier coefficients}\".\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe have already mentioned this type of series in our study of the types of existing polynomials, since Fourier series are in fact only trigonometric polynomials (\SeeChapter{see section Calculus}). Furthermore, we saw as an example in the section of Functional Analysis, during our study of the functional scalar product, that the sine and cosine functions form the basis of a vector space!\n\t\end{tcolorbox}\n\tIf the series converges, its sum is a periodic function $f (x)$ of period $T=2\pi$, since $\sin (nx)$ and $\cos (nx)$ are periodic functions of period $2\pi$.
So that:\n\t\n\tLet us now state the following problem: we give ourselves a known periodic function $f(x)$, piece-wise continuous, of period $2\pi$. We ask ourselves whether there is a trigonometric series converging to $f (x)$, under some conditions that must be satisfied by this series.\n\t\n\tSuppose now that the function $f (x)$, periodic and of period $2\pi$, can effectively be represented by a trigonometric series converging to $f (x)$ in the interval $[0, T]$, that is to say that it is the sum of this series:\n\t\n\tSuppose that the integral of the function of the left member of this equality is equal to the sum of the integrals of all the terms of the above series. This will occur, for example, if we assume that the proposed trigonometric series converges absolutely, that is to say, that the following numerical series converges (by the property that the trigonometric functions are bounded):\n\t\n\tThe series:\n\t\n\tis then majorable and can be integrated term by term from $0$ to $T$ (where $T=2\pi$), which allows us to determine the different Fourier coefficients. But before we start, let us present the following integrals that will be very useful later:\n\t\n\t\begin{center}\n\t\begin{tabular}{ccc}\n\t$\text{with }n,k\in \mathbb{N}\text{ and }n\ne k$\n\t&$\qquad$&\n\t$\text{with }n,k\in \mathbb{N}\text{ and }n = k$\n\t\end{tabular}\n\t\end{center}\n\tBefore continuing, let us prove the values taken by these six integrals (following the request of readers). But first, remember that as $n,k \in \mathbb{N}$ then:\n\t\n\t\n\t\begin{enumerate}\n\t\t\item We proceed using the remarkable trigonometric relations (\SeeChapter{see section Trigonometry}) and the primitives of the elementary trigonometric functions (\SeeChapter{see section Differential and Integral Calculus}):\n\t\t\n\t\tbecause, as we have seen in the section Trigonometry, $\sin(k\pi)=0,k\in\mathbb{Z}$, and as $T=2\pi$ the two previous differences have all terms equal to zero, such that at the end:\n\t\t\n\t\t\n\t\t\item For the second integral, we proceed using the same techniques and the same properties of trigonometric functions:\n\t\t\n\t\t\n\t\t\item And we continue like this for the third one, according to the same properties:\n\t\t\n\t\t\n\t\t\item Once again using the same methods (this becomes routine ...), first for $k\neq 0$:\n\t\t\n\t\tand for $k=0$ it comes immediately:\n\t\t\n\t\t\n\t\t\item Again ... (soon finished...), first for $k\neq 0$:\n\t\t\n\t\tand for $k=0$ it comes immediately:\n\t\t\n\t\t\n\t\t\item And finally the last one (...):\n\t\t\n\t\end{enumerate}\n\tThis small work done, let us now come back to our topic... To determine the coefficients $a_n$, we multiply both members of the equality:\n\t\n\tby $\cos(kx)$:\n\t\n\tThe series of the second member of the equality is majorable, since its terms do not exceed in absolute value the terms of the convergent positive series. So we can integrate term by term on every bounded segment from $0$ to $T$:\n\t\n\tWe have proved above that whatever integer values $k$ or $n$ take, the second term in the parenthesis is always zero. It then remains only:\n\t\n\tBut we have proved above that the integral on the right is always zero if $n$ and $k$ are different. This leaves only the case where $n$ and $k$ are equal. Meaning:\n\t\n\tIn this situation, we first consider the special case where $k$ is zero.
In that case:\n\t\n\tTherefore:\n\t\n\tIt is obvious that the coefficient $a_0$ represents the average of the signal, i.e. its DC component, if it exists.\n\t\n\tIn the case where $k$ is not zero, we have:\n\t\n\tTherefore:\n\t\n\tTo determine the coefficients $b_n$ we proceed the same way, but this time multiplying both members of the equality by $\sin(kx)$:\n\t\n\tThe series of the second member of the equality is majorable because its terms are not higher in absolute value than the terms of the convergent positive series. So we can integrate term by term on every bounded segment from $0$ to $T$:\n\t\n\tWe have proved before that whatever integer values $k$ or $n$ take, the first term of the parenthesis is always zero. It then remains only:\n\t\n\t\n\tBut we have proved before that the integral on the right is always zero if $n$ and $k$ are different. This leaves only the case where $n$ and $k$ are equal. Meaning:\n\t\n\tIn this situation, we first have the special case where $k$ is zero. But we see now that we have a zero indeterminacy. It is better to consider the general case, from which we have:\n\t\n\tHence we easily derive that:\n\t\n\tTherefore, for the situation where $k$ is zero, the coefficient is equal to zero!\n\t\n\tSo finally the Fourier coefficients are determined by the integrals:\n\t\n\tBut as it is annoying to have three results for the coefficients, we will play a little with the definition of the Fourier series.\n\t\n\tIndeed, by summing from $1$ to $+\infty$, rather than from $0$ to $+\infty$, we have:\n\t\n\tThis then leaves us with only the following to remember ($a_0$ therefore included!):\n\t\n\tPhysicists are in the habit of writing the last two relations as follows:\n\t\n\tThe possible decomposition of any periodic piecewise continuous function approximated by an infinite sum of trigonometric functions (sine or cosine), consisting of a basic function and its harmonics, is named \"\NewTerm{Fourier theorem}\index{Fourier theorem}\" or \"\NewTerm{Fourier-Dirichlet theorem}\index{Fourier-Dirichlet theorem}\".\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_series_examples.jpg}\n\t\t\caption{Examples of some Fourier series (source: Mathworld)}\n\t\end{figure}\n\tIt can also happen that sometimes we know the Fourier series and we are looking for the original function $f(x)$. As a companion example, consider that we want to calculate:\n\t\n\tSo this is like searching for the original $f(x)$ of the above Fourier series.\n\n\tIt follows therefore that $a_0=0$ and $b_n=0$ and:\n\t\n\tBut as far as we know there is no easy way to extract $f(x)$ that seems accurate! So, using hyperbolic trigonometry (\SeeChapter{see section Trigonometry}), we write:\n\t\n\tNow these power series may be identified as Maclaurin expansions of $-\ln(1-z)$ (see proof above), with $z=e^{\mathrm{i}x}$ for the first term and $z=e^{-\mathrm{i}x}$ for the second term.\n\n\tTherefore:\n\t\n\t\n\tThe Fourier series implicitly allows us to represent all the frequencies in a periodic signal whose function is known mathematically (in closed form). We can wonder why we talk about Fourier series when, in practice, we do not really know the mathematical representation of a signal. This will bring us to a better understanding of the concept of the Fourier transform in discrete time, which we will see a little further on and which does not need a mathematical representation of a continuous and periodic signal.
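\n\tBefore that, the coefficient formulas obtained above are easy to check in Maple 4.00b. Here is a small sketch of ours for a sawtooth given by $f(x)=x$ over one period $]-\pi,\pi[$ (shifting the bounds of the integrals changes nothing for a periodic function, as we will also note in an example below):\n\t\n\t\texttt{>f:=x:\\\n\t>seq((1/Pi)*int(f*cos(n*x),x=-Pi..Pi),n=1..4); \#the a(n): 0, 0, 0, 0\\\n\t>seq((1/Pi)*int(f*sin(n*x),x=-Pi..Pi),n=1..4); \#the b(n): 2, -1, 2/3, -1/2}\n\t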
We note also that if $f (x)$, that is to say the periodic function whose expression we seek as a trigonometric Fourier series, is even, then the series will also be even and thus contain only cosine terms (the cosine function being an even function), implying that $b_n=0$; and conversely, in the case of an odd function, $a_n=0$ (the sine being, for reminder, an odd function)!\n\t\n\tIt should be noted, and this is important for what will follow, that as we have seen in the section Calculus during our study of trigonometric polynomials, Fourier series can also be written in the following complex form (by changing some notations and passing the sum to infinity):\n\t\n\tand we have seen (still in the section Calculus) that:\n\t\n\tTherefore:\n\t\n\tThis gives us:\n\t\n\tTherefore:\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tUpon decomposition of a continuous signal, we say (improperly in our view) that the coefficients $a_n,b_n$ are each (implicitly) a separate frequency associated with an amplitude, which we visualize on a graph by vertical lines. This graph shows the \"\NewTerm{frequency spectrum}\index{frequency spectrum}\" of the decomposed signal. We can also add another representation, which is named the \"\NewTerm{phase spectrum}\index{phase spectrum}\". This spectrum gives us the phase of the harmonic signal (in phase advance or delay).\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_spectrum_graph.jpg}\n\t\t\caption{Example of amplitudes and frequencies associated to the different coefficients}\n\t\end{figure}\n\tLet us see now how to decompose a known periodic signal into several signals of distinct amplitudes and frequencies.\n\tLet us take, for example, a periodic square wave signal defined over a period $T = 2$ and of amplitude $A$ such that:\n\t\n\tTo the period $T = 2$ corresponds, as we know, a pulsation:\n\t\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tLet us first calculate the coefficients $c_k$ thanks to the integral that determines them (the choice of the bounds of the integral assumes that the signal is periodic by construction!):\n\t\n\tTaking $k = 2$, we have:\n\t\n\tSimilarly for $k = 4,6,8$ and for any even number.\\\n\t\n\tAbout odd numbers, we will have:\n\t\n\tThe coefficients will then be:\n\t\n\tThere is only one problem in these relations: the coefficient $c_0$ cannot be calculated from them, because you can see that if $k = 0$ in the result above we get an infinite value, which is impossible. The coefficient may be null or not null, but never infinite (at least in physics, because this would imply an infinite energy).\\\n\t\n\tTo find the coefficient $c_0$, we must calculate the integral for $k = 0$. The coefficient $c_0$ is then determined by:\n\t\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tThe \"frequency\" spectrum (caution to the abuse of language!)
and amplitude will be of the following form for $k=-5...+5$ and $A=1$, null frequencies not being shown:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_example.jpg}\n\t\t\caption{Frequency spectrum of the example Fourier series coefficients}\n\t\end{figure}\n\t\end{tcolorbox}\n\tThe abuse of talking about frequencies for Fourier coefficients thus leads us to have negative frequencies on the $x$-axis... but it is only a question of vocabulary (there is no direct relation with the real frequencies) with which you must be familiar.\n\t\n\tThe amplitude and phase spectra are calculated according to the following relations:\n\t\n\tIt is then relatively easy to notice that if $T$ tends to a larger and larger number, the spectral peaks get closer and closer to each other. So when $T$ tends to infinity the spectrum becomes continuous!!!\n\t\n\tThe phase spectrum of the above example will give the following for the odd values:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_phase_diagram.jpg}\n\t\t\caption[]{Phase spectrum for the Fourier Series}\n\t\end{figure}\n\tIt is even possible, for example, to obtain relatively easily the frequency spectrum in a software like Microsoft Excel 11.8346 (the reader will find a much more detailed and interesting example on the companion exercise server in the section Sequences and Series)!!!\n\t\n\tIndeed, it is enough for this purpose to sample our signal, for example, $128$ times (Microsoft Excel 11.8346 needs $2^n$ samples and works only under this condition!). Then we divide the interval $-1<t<0$ into $64$ samples and ditto for the interval $0<t<1$:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/signal_sample_01.jpg}\\\n\t\t\includegraphics{img/algebra/signal_sample_02.jpg}\\\n\t\t\includegraphics{img/algebra/signal_sample_03.jpg}\n\t\t\caption[]{Signal sample}\n\t\end{figure}\n\tWhich gives in graphical form (be careful: for the discrete Fourier transform to work well in Microsoft Excel 11.8346, it is necessary that the sampling frequency - corresponding to the number of measurements in a second - be at least 100 times higher than the frequency of the original signal, otherwise the result can be absurd!):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/signal_fourier_transform_excel.jpg}\n\t\t\caption[]{Graphical representation of the data series in Microsoft Excel 11.8346}\n\t\end{figure}\n\tAfterwards, in Microsoft Excel you simply go to the menu \textbf{Tools/Utility Analysis} and choose the \textbf{Fourier Analysis} option:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/excel_data_analysis_tool.jpg}\n\t\t\caption[]{Screenshot of \textbf{Utility Analysis} dialog box of Microsoft Excel 11.8346}\n\t\end{figure}\n\tThen comes the following dialog box, which must be filled in as indicated below (we see that the $x$-axis does not matter!):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/excel_fourier_analysis_dialog_box.jpg}\n\t\t\caption[]{Parameters of the \textbf{Fourier Analysis} tool in Microsoft Excel 11.8346}\n\t\end{figure}\n\tThen comes the following generated list for the
coefficients:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_excel_list_01.jpg}\n\t\end{figure}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_excel_list_02.jpg}\n\t\t\caption[]{Corresponding Fourier coefficients to sampled signal with Microsoft Excel 11.8346}\n\t\end{figure}\n\tIt remains to calculate the modulus of the complex numbers with the native \texttt{IMABS( )} function in Microsoft Excel 11.8346 and to divide the result by $128$ for each of the coefficients $c_n$, but we already see that each even coefficient is zero, and this matches well the theoretical result obtained previously.\n\t\n\tPutting the index $n$ in front of each modulus, we then have:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_excel_list_completed_01.jpg}\n\t\end{figure}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_excel_list_completed_02.jpg}\n\t\end{figure}\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_excel_list_completed_03.jpg}\n\t\t\caption[]{Modulus of complex coefficients of the example Fourier Transform with Microsoft Excel 11.8346}\n\t\end{figure}\n\tBy plotting a customized scatter diagram (still with Microsoft Excel 11.8346) of columns D and E, we finally get (we restricted the $x$-axis to $[-5, +5]$ for easier reading):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/excel_fourier_spectrum_frequencies.jpg}\n\t\t\caption[]{Frequency spectrum of the transform with Microsoft Excel 11.8346}\n\t\end{figure}\n\tTo compare with the theoretical calculations (chart already presented previously) ...:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_coefficients_example.jpg}\n\t\t\caption{Theoretical frequency spectrum of the example Fourier series coefficients}\n\t\end{figure}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tLet us now consider another example, identical to the previous one but with a different approach.
We define a periodic function of period $T=2\pi$ as follows:\n\t\n\tLet us calculate the Fourier coefficients (we translate the bounds of the integrals: since the function is periodic this changes nothing, but it facilitates the calculations!):\n\t\n\tand:\n\t\n\tWe notice that $b_n$ is equal to $0$ for $n$ even and equal to $4/(\pi n)$ when $n$ is odd.\n\tThe Fourier series of the function under consideration is thus written:\n\t\n\tWhich in Maple 4.00b will be written:\\\n\t\n\t\texttt{>S:=(4/Pi)*Sum(sin((2*n+1)*x)/(2*n+1),n=0..N);}\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tand that we can plot using the command:\\\n\t\n\t\texttt{>plot({subs(N=4,S),subs(N=8,S),subs(N=16,S)},x=-Pi..Pi,\\\n\tcolor=[red,green,blue],numpoints=200);}\\\n\t\n\tThis gives three plots, with $4$, $8$ and $16$ terms of the series, in red, green and blue:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_series_with_various_terms.jpg}\n\t\t\caption{Example of Fourier series in Maple 4.00b with $4$, $8$ and $16$ terms}\n\t\end{figure}\n\tFor $50$ terms we get:\n\t\texttt{> plot(subs(N=50,S),x=-Pi..Pi,numpoints=800);}\\\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_series_with_fifty_terms.jpg}\n\t\t\caption{Example of Fourier series in Maple 4.00b with $50$ terms}\n\t\end{figure}\n\t\end{tcolorbox}\n\tWe see in the example above the edge effects named \"\NewTerm{Gibbs phenomenon}\index{Gibbs phenomenon}\". It is possible to prove that they occur at the abscissa corresponding to $x=\pi/2n$ and that the peak rises to about $\pm 1.179$ for all values of $n$. Let us see this!\n\t\n\tWe proved just before, for our example, that:\n\t\n\tWhich can be written:\n\t\n\tusing the proof made much earlier at the beginning of this section that:\n\t\n\tThen we have:\n\t\n\tRemember that during our study of complex numbers (\SeeChapter{see section Numbers}) we proved that:\n\t\n\tWhich brings us to:\n\t\n\tWe will now focus on small values of $x$. So we can make a Maclaurin development of first order of the denominator (but not of the numerator, because of the presence of the $n$):\n\t\n\tWe make a change of variable:\n\t\n\twhere we used the traditional notation of the \"cardinal sine\" in the last relation, as defined in the section Trigonometry (remember that this fraction is so common in physics that it has a specific notation).\n\t\n\tAs what interests us is to determine the maximum of the Gibbs phenomenon (the disturbance), we see that it takes place, in this particular case that we presented (see figure above), at each multiple of $\pi$, and as the denominator of the expression in the integral grows as the multiple gets higher, it follows that the greatest maximum is at the point where $2nx=\pi$ (the point $0$, at the opposite, cancels the integral, therefore we must exclude it from our choice). Then we have:\n\t\n\tthe evaluation of this integral can only be done numerically, as we know; therefore we get:\n\t\n\tThat is about $18\%$ above the expected threshold value.\n\t\n\t\paragraph{Power of a signal}\mbox{}\\\\\n\tA periodic signal has an infinite energy and a finite, non-null average power (\SeeChapter{see section Electrokinetics}). Its average power over a period is then defined by:\n\t\n\tIf we develop this equation, we have:\n\t\n\tThis means that the power of a continuous-time periodic signal is equal to the sum of the squared Fourier coefficients. This is what we name the \"\NewTerm{Parseval theorem}\index{Parseval theorem}\". Thus, if we have any signal that can be decomposed into a Fourier series, we can know the power of that signal using only the spectral coefficients.
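\n\tWritten out, and assuming the normalization in which $c_0$ is the average of the signal (the one used in the example above), this identity reads:\n\t\begin{gather*}\n\tP=\dfrac{1}{T}\int_{0}^{T}\vert f(t)\vert^{2}\,\mathrm{d}t=\sum_{k=-\infty}^{+\infty}\vert c_k\vert^{2}\n\t\end{gather*}\n\t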
In reality, we often cannot mathematically determine the expression of this signal; we therefore use discretization, or sampling, and then a discrete Fourier transform, and we can calculate the power of the signal using only the spectral coefficients. This gives us a characteristic of the signal.\n\t\n\tLet us also indicate the following result, which will be very useful to us in the section of Thermodynamics for the study of the black body, and which is also very closely related to important properties of the Riemann zeta function:\n\t\n\tThe following relation:\n\t\n\tis named \"\NewTerm{Parseval equality}\index{Parseval equality}\".\n\t\n\tAccording to the definition of the Fourier series and the definition of the coefficient $a_0$ which follows immediately, we also frequently have in the literature:\n\t\n\t\n\t\pagebreak\n\t\paragraph{Fourier Transform}\mbox{}\\\\\n\tFourier series are a very powerful tool for the analysis of periodic signals, for example, but the set of periodic functions is small compared to all the functions that we encounter in physical and engineering problems. So we will introduce a new, extremely powerful analytical tool that extends to a more general class of functions and that has very important applications in signal processing, image processing, sound processing, finance and advanced market statistics!!!\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tMany teachers and authors associate Fourier Series and Fourier Transform with the field of Functional Analysis. This is right in fact, but it seemed to us more appropriate to put the study of these two subjects in this section because they are closely related to Sequences and Series. However, the subjects that normally follow the study of the Fourier Transform, that is to say Laplace Transforms, the Hilbert Transform and others, will be given in the section Functional Analysis of this book. The Fast Fourier Transform study can be found in the section of Theoretical Computing.\n\t\end{tcolorbox}\n\tThe Fourier transform (FT) is then used both for periodic signals and for aperiodic signals.\n\t\n\tFor this, we start from the study of Fourier series with the complex notation of a periodic function of period $T$, by considering that the period becomes increasingly big, such that $T\rightarrow +\infty$. Therefore the spectral lines gradually get closer to each other and turn into a continuous spectrum.\n\t\n\tTherefore, let us resume the expressions proved just earlier:\n\t\n\tthat we can write equivalently in the following traditional form (wherein it is customary to put the factor $1 / T$ rather in $f (t)$):\n\t\n\tand let us write this for future needs in the following form:\n\t\t\n\tand let us put naturally that:\n\t\n\tThus, when $T\rightarrow +\infty$, the pulsation tends to zero and we have $\omega_k \rightarrow \omega$, because we move from discrete values to continuous values that browse through the set of real numbers $\mathbb{R}$ (for all $k$). Therefore:\n\t\n\twe pass to the limit, that is to say:\n\t\n\tThis implies that:\n\t\n\tTherefore we obtain for the coefficients (we change the notation because the previous one is inadequate):\n\t\n\tand for the infinite series (the sum becomes an integral):\n\t\n\tCaution!!!
To make the difference between the given function and its equivalent whose expression we seek as an infinite sum, we will note them differently from now on. Thus, we get:\n\t\n\tThus the discrete Fourier series becomes a continuous function.\n\t\n\t\textbf{Definitions (\#\mydef):}\n\t\begin{enumerate}\n\t\t\item We name \"\NewTerm{Fourier transform (FT)}\index{Fourier transform }\" of $f$ the relation:\n\t\t\n\t\tsometimes also denoted as follows:\n\t\t\n\t\tand sometimes also named \"\NewTerm{spectral density amplitude}\index{spectral density amplitude}\".\n\t\t\n\t\t\item We name \"\NewTerm{inverse Fourier transform (IFT)}\index{inverse Fourier transform}\" of $F$ the relation:\n\t\t\n\t\end{enumerate}\n\tAny such transformation technique (and there are many, as we will see in the section of Functional Analysis!) is named an \"\NewTerm{integral transformation}\index{integral transformation}\".\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere are many ways of writing the Fourier transform according to the choice of the initial value of $T$!\n\t\end{tcolorbox}\n\tSome physicists and engineers prefer to make the two previous relations symmetrical by putting the same coefficient in both directions, which will then be $1/\sqrt{2\pi}$. This gives:\t\n\t\n\tLet us also give the corresponding three-dimensional version that will serve us many times in wave mechanics, electrodynamics, wave optics or in the various sections of quantum physics of this book:\n\t\n\tTo make things perhaps clearer (at least we hope so), let us prove generally that the previous Fourier transform $\mathcal{F}$ is isometric (it preserves the \"norm\" - or \"modulus\" if you prefer...).\n\t\n\t\begin{theorem}\n\tFor any functions $f, g$ we have the functional inner product:\n\t\n\tBut since the functions are in the complex space, as we saw in the section of Vector Calculus, we must use the notation of the Hermitian product:\n\t\n\tRemember that:\n\t\n\t\end{theorem}\n\t\begin{dem}\n\tWe want then to prove the equality:\n\t\n\tExplicitly:\n\t\n\tBut the integration variable must be the same, and for $\mathcal{F}(g)$ to be implicitly dependent on $\vec{r}$ it is necessary to take the Fourier transform over $\vec{k}$, such that:\n\t\n\tTherefore:\n\t\n\tTherefore, using the Fubini theorem (\SeeChapter{see section Differential and Integral}):\n\t\n\tThanks to this result, we have also proved (this is immediate):\n\t\n\tWe have not specified the bounds: they are infinite in every definition (we include all possible $\vec{k}$ or $\vec{r}$).\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tLet us now see and prove three interesting properties of the Fourier transform:\n\t\begin{enumerate}\n\t\t\item[P1.] If the function $f$ is an even function (\SeeChapter{Functional Analysis}), a simplification of the Fourier Transform follows, such that:\n\t\t\n\t\t\item[P2.] If $f$ is odd, we proceed in the same manner as above, and we get:\n\t\t\n\t\t\item[P3.]
Very important property of the Fourier transforms, which will be useful to us in finance (\SeeChapter{see section Economy}) and also as part of the study of the heat equation (\SeeChapter{see section Thermodynamics}).\n\t\t\n\t\tFirst remember that the Fourier transform is given by:\n\t\t\n\t\tWe want to see what happens if:\n\t\t\n\t\tBy doing an integration by parts (\SeeChapter{see section Integral and Differential}):\n\t\t\n\t\twe get:\n\t\t\n\t\twhere we put ourselves in the situation where:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tMore generally:\n\t\t\n\t\end{enumerate}\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe branch of \"\NewTerm{harmonic analysis}\index{harmonic analysis}\", or \"\NewTerm{2D Fourier analysis}\index{2D Fourier analysis}\", is the branch of mathematics that studies the representation of functions or signals as a superposition of basic waves. It deepens and generalizes the notions of Fourier series and Fourier transform. The basic waves are named \"harmonics\", hence the name of the discipline. During the last two centuries it has had numerous applications in physics and economics under the name \"spectral analysis\", and it knows recent applications including signal processing, quantum mechanics, neuroscience, stratigraphy, statistics, etc.\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Examples:}\\\\\n\tE1. Let us see now a first example (among the two fundamental ones) of a Fourier transform that we will use again in the sections about quantum physics as well as in wave optics.\n\t\n\tWe will calculate the Fourier transform (spectrum) of the following function (rectangular pulse):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/fourier_transform_rectangular_pulse.jpg}\n\t\t\caption[]{Rectangular pulse example for Fourier Transform}\n\t\end{figure}\n\tWe have therefore:\n\t\n\twhere $\text{sinc}$ is the sine cardinal, as we already know (\SeeChapter{see section Trigonometry}). So we fall back on the $\text{sinc}$, and if we take the squared modulus we therefore get the decomposition of a theoretical monochromatic wave diffracted by a rectangular slit!!! Thus, it seems possible to study diffraction phenomena using the Fourier transform, and this field is named \"\NewTerm{Fourier optics}\index{Fourier optics}\". We will come back to this later in the section of Functional Analysis when we deepen the Fourier transforms.\\\n\t\n\tWe then know that the spectrum (described by the $\text{sinc}$ function) crosses zero every time that the sine function is zero, that is to say, every time the frequency is a multiple of $1 / a$.\\\n\t\n\tThe spectrum of this pulse illustrates two important points regarding time-limited signals:\n\t\begin{enumerate}\n\t\t\item[P1.] A short signal has a broadband spectrum.\n\t\t\item[P2.] To a narrow spectrum corresponds a long-duration signal.\n\t\end{enumerate}\n\t\end{tcolorbox}
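\n\tThis first example is also easy to reproduce in Maple 4.00b. Here is a minimal sketch of ours for a rectangular pulse of unit width and unit height (ignoring the overall normalization constant, which depends on the convention chosen above):\n\t\n\t\texttt{>F:=int(exp(-I*omega*t),t=-1/2..1/2):\\\n\t>simplify(evalc(F)); \#gives 2*sin(omega/2)/omega, that is sinc(omega/2)}\n\t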
\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE2. The Fourier transform of an integrable function $f$ is given, as we know now, by:\n\t\n\tConsider the integrable Gaussian function of the type:\n\t\n\twith $a>0$, defined on $\mathbb{R}$.\\\n\t\n\tWe want to compute its Fourier transform because it is a very important case, particularly useful for solving the heat equation that we will treat in the section of Thermodynamics and also for solving the differential equation of Black \& Scholes in the section Economy.\\\n\t\n\tThe brilliant trick, if we want to avoid doing complex analysis over $3$ A4 pages, is to notice that $F(\omega)$ is the solution of the following linear differential equation:\n\t\n\twhere $y$ is a function of $\omega$.\\\n\t\n\tIndeed, differentiating $F(\omega)$ we get:\n\t\n\tIntegration by parts gives us:\n\t\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tWe recognize the expression of the Fourier transform of $f$. Therefore:\n\t\n\tThis shows that $F(\omega)$ is a solution of the differential equation above.\\\n\t\n\tWe have proved in the section of Differential and Integral Calculus that the general solution of this differential equation is given by:\n\t\n\twhere $A \in \mathbb{R}$. And as in the present case:\n\t\n\tThe primitive $G (x)$ is therefore easy to calculate and we get:\n\t\n\tTherefore:\n\t\n\tTo determine the constant $A$ it suffices to notice that:\n\t\n\t\n\tand therefore:\n\t\n\tIt is then usual to say that the Fourier transform of a Gaussian is another Gaussian!!\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\subsubsection{Bessel Series}\n\tBessel functions are very useful in many advanced fields of physics involving delicate differential equations to solve. The areas in which we find them most often are calorimetry (heat conduction), nuclear physics (physics of reactors), optics and fluid mechanics.\n\t\n\tThese series are still not studied much in the graduate curriculum, and it is often the role of the student to seek the additional information he needs on this subject in the library of his school. We wanted to present here the developments that avoid this approach, while staying at home in front of our computer (furthermore, books on the subject are quite rare...).\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe usually speak, by abuse of language, of \"\NewTerm{Bessel functions}\index{Bessel functions}\" instead of \"\NewTerm{Bessel series}\index{Bessel series}\".\n\t\end{tcolorbox}\n\tThere is a significant number of Bessel functions, but we will restrict ourselves to the study of those most used in physics.\n\t\n\t\paragraph{Zero order Bessel's Functions}\mbox{}\\\\\n\tThe function known as the \"\NewTerm{Zero order Bessel's function}\index{Zero order Bessel's function}\" is defined by the power series:\n\t\n\tIt is during the study of its properties of derivation and integration that Friedrich Bessel found that this power series is a solution of a differential equation that is found frequently in physics. That is why it bears his name.\n\t\n\tIf $u_r$ represents the $r$-th term of the series, we easily see that:\n\t\n\twhich tends to zero as $r\rightarrow +\infty$, regardless of the value of $x$. This has for consequence that the series converges for all values of $x$. Since this is a power series, the function $J_0(x)$ and all its derivatives are continuous functions for all values of $x$, real or complex.
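\n\tA quick numerical sanity check in Maple 4.00b, truncating the standard series $J_0(x)=\sum_{r=0}^{\infty}(-1)^r (x/2)^{2r}/(r!)^2$ arbitrarily at $r=10$ (a sketch of ours):\n\t\n\t\texttt{>J0:=sum((-1)\string^r*(x/2)\string^(2*r)/(r!)\string^2,r=0..10):\\\n\t>evalf(subs(x=2,J0));\\\n\t>evalf(BesselJ(0,2)); \#both give about 0.22389}\n\t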
\paragraph{$n$ order Bessel's Functions}\mbox{}\\\\\n\tThe function $J_n(x)$, known as the \"\NewTerm{$n$ order Bessel's function}\index{$n$ order Bessel's function}\", is defined, when $n$ is a positive integer, by the power series:\n\t\n\twhich converges for all values of $x$, real or complex.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics{img/algebra/bessel_functions.jpg}\n\t\t\caption{Plot of few Bessel functions (source: Wikipedia)}\n\t\end{figure}\n\tIn Microsoft Excel the previous function can be found under the name \texttt{BESSELJ( )}, and in Maple 4.00b under the name \texttt{BesselJ( )}. For example, for the previous graph in Maple, we just write:\n\t\n\t\texttt{>plot([BesselJ(0,x),BesselJ(1,x),BesselJ(2,x),BesselJ(3,x)],x=0..20);}\n\t\n\tLet us see in particular that for $n=1$ we have:\n\t\n\tand when $n=2$:\n\t\n\tWe can notice that $J_n(x)$ is an even function of $x$ when $n$ is even, and odd if $n$ is odd (\SeeChapter{see section Functional Analysis}).\n\t\n\tIf we play at doing engineering maths, we notice by trial and error that:\n\t\n\tBased on this trial and error approach we have, using the above expression and factorizing the $(x/2)^{2k}$ term only:\n\t\n\tSo finally we see, after trial and error again (instead of doing 5 pages of mathematical developments as mathematicians do), that:\n\t\n\tMore generally, for $n$ that is non-integer, we use the Euler Gamma function (\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tThat is also sometimes written by pure mathematicians (...)\n\t\n\tNow, by differentiating the function $J_0(x)$ and comparing the result with the series $J_1(x)$, we see without much pain that:\n\t\n\tWe also find, without too much difficulty, the following relation:\n\t \n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn general, by recursive reasoning we get:\n\t\n\tTherefore:\n\t\n\tThis last relation will be useful to us in the section of Wave Optics for our study of the circular aperture diffraction (Airy disk).\n\t\end{tcolorbox}\n\tUsing the fact that:\n\t\n\tand including it in the previous relation, we find:\n\t\n\twritten in another way:\n\t\n\t$y=J_0(x)$ is therefore a solution of the differential equation of the second order:\n\t\n\twritten otherwise:\n\t\n\tor even:\n\t\n\tA solution to an equation of parameter $n$ which is not a multiple of $J_n(x)$ is named a \"\NewTerm{Bessel function of the second kind}\index{Bessel function of the second kind}\". Let us suppose now that $u$ is such a function and let us put $v=J_0(x)$; then according to the relation:\n\t\n\twe have:\n\t\n\tMultiplying the first relation by $v$ and the second by $u$ and subtracting, we get:\n\t\n\twe therefore also have:\n\t\n\twe can therefore write:\n\t\n\tIndeed, because if we develop, we find:\n\t\n\tFor the equality:\n\t\n\tto be satisfied, we have:\n\t\n\tDividing by $xv^2$, we have:\n\t\n\twhich is equivalent to:\n\t\n\tby integrating, it immediately comes:\n\t\n\twhere $A$ is a constant.
Consecutively, since $v=J_0(x)$, we have:\n\t\n\twhere, we recall, $A$ and $B$ are constants, and $B\neq 0$ if $u$ is not a multiple of $J_0(x)$, by definition!\n\t\n\tIf in the last relation $J_0(x)$ is replaced by its expression in terms of series, we have:\n\t\n\tFor those who want to check this last relation (I do not like this kind of algebraic calculation) with Maple 4.00b, just write:\n\t\n\t\texttt{>1/x*taylor(1/(series(BesselJ(0,x),x))\string^2,x=0,5);}\n\t\n\tTherefore:\n\t\n\tconsecutively, if we put:\n\t\n\twhere $Y_0(x)$ is a particular Bessel function of the second kind named \"\NewTerm{Bessel-Neumann function of the second kind of zero order}\index{Bessel-Neumann function of the second kind of zero order}\".\n\n\tUnlike $J_0(x)$, which tends to $1$ when $x \rightarrow 0$, the expression $Y_0(x)$, because of the $\log(x)$ term when $x$ is small, approaches $Y_0(x) \rightarrow -\infty$ when $x\rightarrow 0^{+}$.\n\t\n\tFinally, it comes from what we have seen that $J_0(x)$ and $Y_0(x)$ are independent solutions of the differential equation:\n\t\n\tThe general solution being therefore:\n\t\n\twhere $A$, $B$ are arbitrary constants and $x>0$ so that $Y_0(x)$ is real.\n\n\tIf we replace $x$ by $kx$, where $k$ is a constant, the differential equation becomes:\n\t\n\tby multiplying the whole by $k^2$ we find the general form of the differential equation:\n\t\n\twhose general solution is:\n\t\n\twhere $k>0$ such that $Y_0(kx)$ is real when $x>0$.\n\t\n\tIn fact, the Bessel functions are solutions of the differential equation previously studied, solved by the \"\NewTerm{Frobenius method}\index{Frobenius method}\". Indeed, let us write:\n\t\n\tand let us make the substitution:\n\t\n\tsubstituting in $Ly$, we get:\n\t\n\tNow let us choose the $c_i$ to satisfy the differential equation, such that:\n\t\n\tTherefore, unless $\rho$ is a negative integer, we have:\n\t\n\tBy substituting these values in the relation:\n\t\n\twe get:\n\t\n\tTherefore:\n\t\n\tIf we put $\rho=$ in the relation before last, we get:\n\t\n\t\n\t\paragraph{Bessel's Differential Equations of order $n$}\mbox{}\\\\\n\tWe have defined the Bessel series as:\n\t\n\tLet us put:\n\t\n\tand let us differentiate as follows:\n\t\n\tBut we also have:\n\t\n\tBy subtraction:\n\t\n\tWhich finally gives:\n\t\n\tThis is also written:\n\t\n\twhich is named the \"\NewTerm{Bessel differential equation of order $n$}\index{Bessel differential equation of order $n$}\" or, more simply, the \"\NewTerm{Bessel equation}\index{Bessel equation}\".
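\n\tFor the record, the normalized form of this equation most frequently quoted in the literature is:\n\t\begin{gather*}\n\tx^2\dfrac{\mathrm{d}^2y}{\mathrm{d}x^2}+x\dfrac{\mathrm{d}y}{\mathrm{d}x}+(x^2-n^2)y=0\n\t\end{gather*}\n\t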
In fact, most schools or Internet sites give this differential equation as a definition, but now it is clear that there is rigorous reasoning behind this equation.\n\t\n\tThe solution is therefore of the type:\n\t\n\twhich is still sometimes written using the Euler Gamma function (\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tIt follows that:\n\t\n\tand therefore that $y=J_n(x)$ is the solution of this differential equation.\n\t\n\tWe will fall back on such a differential equation during our study of the wave equation of a circular drum (\SeeChapter{see section Wave Mechanics}), during our study of the physics of nuclear reactors (\SeeChapter{see section Nuclear Physics}) and finally during our study of self-buckling (\SeeChapter{see section Mechanical Engineering}).\n\t\n\t\subsection{Convergence Criteria}\n\tWhen we study a series, one of the fundamental questions is that of the convergence or divergence of this series.\n\t\n\tIf a series converges, its general term approaches zero as $n$ approaches infinity:\n\t\n\tor, obviously, more generally:\n\t\n\tThis criterion is necessary but insufficient to establish the convergence of a series. On the other hand, if this criterion is not met, we are absolutely sure that the series does not converge (so it diverges!).\n\t\n\tThree methods are proposed to deepen the convergence criteria:\n\t\begin{enumerate}\n\t\t\item The integral test\n\n\t\t\item The d'Alembert rule\n\n\t\t\item The Cauchy rule\n\t\end{enumerate}\n\tIn the following paragraphs, we will assume series with positive terms. The case of alternating series will be seen later.\n\t\n\t\subsubsection{Integral Test}\n\tThe integral test for convergence is a method used to test infinite series of non-negative terms for convergence. It was developed by Colin Maclaurin and Augustin-Louis Cauchy and is sometimes known as the \"\NewTerm{Maclaurin–Cauchy test}\index{Maclaurin–Cauchy test}\".\n\t\n\tGiven a series with decreasing positive (monotone decreasing) terms:\n\t\n\tThat is to say:\n\t\n\tand given a continuous decreasing function such that:\n\t\n\tThen the infinite series:\n\t\n\tconverges to a real number if and only if the improper integral:\n\t\n\tis finite. In other words, if the integral diverges, then the series diverges as well.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn no case does the integral give the value of the sum of the series! The integral test only gives an indication of the convergence of the series. Before making the test of the integral, it is important to check that the terms of the series are strictly decreasing, to fulfill the condition $a_1\ge a_2\ge a_3\ge \cdots\ge a_n\ge \cdots$.\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tThe harmonic series:\n\t\n\tdiverges because (\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tSo this harmonic series does not converge.\n\t\end{tcolorbox}
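\n\tTo contrast with this divergent case, here is the same test applied in Maple 4.00b to the companion series of the inverse squares (a one-line sketch of ours):\n\t\n\t\texttt{>int(1/x,x=1..infinity); \#infinity: the harmonic series diverges\\\n\t>int(1/x\string^2,x=1..infinity); \#1: the improper integral is finite, so the series of the 1/n\string^2 converges}\n\t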
\subsubsection{D'Alembert Rule}\n\tThe \"\NewTerm{ratio test}\index{ratio test}\" is also a test (or \"criterion\") for the convergence of a series of the type:\n\t\n\twhere each term is a real or complex number and $a_n$ is nonzero when $n$ is large. The test was first published by Jean le Rond d'Alembert and is sometimes known as \"\NewTerm{d'Alembert's ratio test}\index{d'Alembert's ratio test}\" or as the \"\NewTerm{Cauchy ratio test}\index{Cauchy ratio test}\".\n\t\n\tThe usual form of the test makes use of the limit:\n\t\n \tThe ratio test states that:\n\t\begin{enumerate}\n\t\t\item if $L < 1$ then the series converges absolutely;\n\t\t\item if $L > 1$ then the series does not converge;\n\t\t\item if $L = 1$ or the limit fails to exist, then the test is inconclusive, because there exist both convergent and divergent series that satisfy this case\n\t\end{enumerate}\n\tand we define the radius of convergence by:\n\t\n\tFor the proof, suppose that:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn reality this rule is normally stated without the absolute value. The case with the absolute value, as above, is named the \"\NewTerm{absolute convergence test}\index{absolute convergence test}\" and applies to the more general case of an alternating series such that:\n\t\n\tIf an alternating series is absolutely convergent, then the series itself also converges.\\\n\t\n\tTherefore the absolute convergence test is a generalization of the d'Alembert rule, but most of the time we do not make any distinction between the two.\n\t\end{tcolorbox}\n\tWe can then show that the series converges absolutely by showing that its terms will eventually become less than those of a certain convergent geometric series. To do this, let:\n\t\n\tThen $r$ is strictly between $L$ and $1$, and:\n\t\n\t for sufficiently large $n$ (say, $n$ greater than $N$).\n\n\tHence:\n\t\n\tfor each $n > N$ and $i > 0$, and so:\n\t\n\tThat is, the series converges absolutely.\n\n\tOn the other hand, if $L > 1$, then:\n\t\n\tfor sufficiently large $n$, so that the limit of the general term is non-zero. Hence the series diverges.\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tGiven the following geometric series:\n\t\n\tWe get the quotient:\n\t\n\tTherefore the series converges!\n\t\end{tcolorbox}\n\tObviously some practical applications will (can) give for example:\n\t\n\tand some practitioners name this the \"\NewTerm{Cauchy convergence rule}\index{Cauchy convergence rule}\"... (do not confuse it with the Cauchy convergence test that will be studied in the section Fractals).\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tLet us now do a last important example. We studied Bessel series earlier above. But it may not be obvious that these series converge. Let us prove that this is indeed the case for $J_0$, that is to say for:\n\t\n\tTherefore:\n\t\n\t\end{tcolorbox}
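\n\tAs one more quick check of the d'Alembert rule in Maple 4.00b, here is a sketch of ours for the exponential series of general term $x^n/n!$:\n\t\n\t\texttt{>u:=n->x\string^n/n!:\\\n\t>simplify(u(n+1)/u(n)); \#reduces to x/(n+1)\\\n\t>limit(abs(x)/(n+1),n=infinity); \#0<1 for every x: the series converges everywhere}\n\t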
\subsubsection{Alternating Series Test}\n\tThe alternating series test is a method used to prove that an alternating series with terms that decrease in absolute value is a convergent series. The test was used by Gottfried Leibniz and is sometimes known as \"\NewTerm{Leibniz's test}\index{Leibniz's test}\", \"\NewTerm{Leibniz's rule}\index{Leibniz's rule}\", or the \"\NewTerm{Leibniz criterion}\index{Leibniz criterion}\".\n\t\n\tA series of the form:\n\t\n\twhere either all $a_n$ are positive or all $a_n$ are negative, is named an \"\NewTerm{alternating series}\index{alternating series}\".\n\n\tThe alternating series test then says: if $a_n$ decreases monotonically and:\n\t\n \tthen the alternating series converges.\n\t\n\tThere are a lot of other tests, such as Raabe's test, Bertrand's test, Gauss's test and Kummer's test.\n\t\n\t\subsubsection{Fixed Point Theorem}\n\tThe fixed point theorem is not really useful, explicitly, in physics and for engineers (implicitly it is essential, but physicists and engineers often use mathematical tools whose properties have already been proved in advance by mathematicians); however, we find it in chaos theory and in theoretical computing (see the section on Fractals, especially the topic on the Sierpinski triangle). We can therefore only recommend the reader to take the time to read and understand the explanations and developments that follow.\n\t\n\tLet $(X,d)$ be a complete metric space (see sections Topology or Fractals) and $T:X\mapsto X$ a strictly contracting map of constant $L$ (see the Lipschitz functions in the section Topology); then there exists a unique point $\omega\in X$ such that:\n\t\n\t$\omega$ is then named the \"\NewTerm{fixed point}\index{fixed point}\" of $T$ (think of the case $\cos(x)=x$). Furthermore, if we denote by:\n\t\n\tthe image of $x$ by the $n$-th iterate of $T$, then we have:\n\t\n\tand the convergence speed can also be estimated by:\n\t\n\tBy the fact that we restrict our study to iterating a function, we speak of the \"\NewTerm{Banach fixed-point theorem}\index{Banach fixed-point theorem}\", which gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\tYou can have fun with your pocket calculator or your operating system by choosing a random number and taking the cosine iteratively. You will find that you tend to about $0.739$, and it is therefore verbatim the solution of $\cos (x) = x$.\n\t\end{tcolorbox}
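\n\tIn Maple 4.00b this little experiment takes two lines ($50$ iterations chosen arbitrarily; a sketch of ours):\n\t\n\t\texttt{>x:=0.5:\\\n\t>for i from 1 to 50 do x:=evalf(cos(x)) od:\\\n\t>x; \#about 0.7391, the fixed point of the cosine}\n\t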
\begin{dem}\n\tGiven $x\in X$, we consider the sequence $(T^n(x))_{n\in \mathbb{N}}$ as defined above. First we will prove that this sequence is a Cauchy sequence (see above what a Cauchy sequence is).\n\n\tApplying the triangle inequality (\SeeChapter{see section Vector Calculus}) several times, we have:\n\t\n\tBut:\n\t\n\tTherefore:\n\t\n\tand as:\n\t\n\tTherefore:\n\t\n\tTo finish:\n\t\n\tthat is to say that, in a first step, $(T^n(x))_{n\in \mathbb{N}}$ converges, and we put:\n\t\n\tNow we check that $\omega$ is a fixed point of $T$. Indeed, $T$ is uniformly continuous (as Lipschitz - see section Topology), therefore a fortiori continuous:\n\t\n\tIt remains to check that $\omega$ is the only fixed point (thereby we will have proved that $\omega$ does not depend on the choice of $x$). Suppose that we also have $T(y)=y$; then:\n\t\n\tAn estimate of the speed of convergence is given by:\n\t\n\t$\mathrm{d}(\cdot,\cdot)$ is continuous with respect to each variable, so:\n\t\n\tand the limits preserve the inequalities (the non-strict ones), thus:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\subsection{Generating Functions (transformation of a sequence into a series)}\n\tFor some mathematical financial risk management models (\SeeChapter{see section Economics}) and also for integral transformations of Bessel functions (\SeeChapter{see section Differential and Integral Calculus}), we will need in this book a gentle introduction to generating functions.\n\t\n\t\subsubsection{Ordinary Generating Functions (transformation of a sequence into a series)}\n\tRemember first that we proved earlier above that the general Maclaurin expansion of a function is given by:\n\t\n\tThat is, for $x \cong 0$:\n\t\n\tIn the case of our study, let us write the latter relation as:\n\t\n\tWe say then that the above relation is the \"\NewTerm{ordinary generating function}\index{ordinary generating function}\" of the sequence of numbers $a_0,a_1,a_2,a_3,\ldots$ in the formal parameter $x$. And we do not care whether it diverges or not!\n\t\n\tThis is because the generating function is an algebraic expression that encodes the sequence and allows us to manipulate it in ways that are not possible in other forms. Many times, if the sequence you are looking at is \"interesting\" (and this word has lots of interpretations), the generating function has a short simple form.\n\n\tThe generating function allows us to derive formulas for the sequence, identities involving the sequence, estimates of the values and so much more, as we will see further below.\t\n\t\n\tOne thing to never forget: the generating function is not the sequence and the sequence is not the generating function. They are not the same thing. One is a sequence, the other is an algebraic expression!\n\n\tIf you have a sequence, you can say \"the generating function of the sequence\" to refer to the algebraic object. If you have a generating function, you might say \"the sequence of coefficients of the generating function\" in order to refer to the sequence.\n\t\n\tLet us try now a companion example, the sequence consisting of all $1$:\n\t\n\tThe generating function is therefore the geometric series that we have proved earlier above:\n\t\n\tThe next simple companion example would be the positive integers:\n\t\n\tThis has generating function:\n\t\n\tNow we observe that this generating function is the derivative of the previous generating function:\n\t\n\tIndeed:\n\t\n\tThat is, we therefore have:\n\t\n\tWe conclude that:\n\t\n\tWe cannot use exactly the same trick to figure out the generating function for the sequence:\n\t\n\tbecause if we take the derivative of:\n\t\n\tthen we do not quite have the squared integers. But the attentive reader will notice that if we first multiply the latter relation by $x$ and then take the derivative, then we have:\n\t\n\tTherefore:\n\t\n\tIt is easy, with a software like Maple, to check that the Maclaurin series of $(1+x)/(1-x)^3$ is equal to the generating function and therefore gives us the coefficients $a_i$ of the sequence.
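\n\tIndeed, in Maple 4.00b (truncation at order $6$ chosen arbitrarily):\n\t\n\t\texttt{>taylor((1+x)/(1-x)\string^3,x=0,6); \#gives 1+4x+9x\string^2+16x\string^3+25x\string^4+36x\string^5+...}\n\t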
It's the Fibonacci sequence, given by (each term is equal to the sum of the previous two):\n\t\n\tWe will give the generating function for this sequence a name, $F(x)$, so that:\n\t\n\twhere $F_0=F_1=1$ and $F_k=F_{k-1}+F_{k-2}$ for $k\\geq 2$. Then we can see that:\n\t\n\tSince we have figured out that:\n\t\n\tthen:\n\t\n\tand this can be rewritten as:\n\t\n\tand hence:\n\t\\[F(x)=\\frac{1}{1-x-x^2}\\]\n\tIt is always surprising that the generating function for the Fibonacci numbers has such a compact formula. But once again, using for example software like Maple and doing the Maclaurin expansion of the above function, you will get a series whose coefficients $a_i$ correspond to the Fibonacci sequence!\n\t\t\n\t\\paragraph{Composition of Generating functions}\\mbox{}\\\\\\\\\n\tNow if we have two generating functions:\n\t\n\tfor two sequences of integers $a_0,a_1,a_2,a_3,\\ldots$ and $b_0,b_1,b_2,b_3,\\ldots$ then there are several ways that we can combine the sequences and get generating functions for new sequences.\n\n\t\\begin{itemize}\n\t\t\\item Sum: If we add the generating functions we have that:\n\t\t\n\t\tis a generating function for the sequence:\n\t\t\n\t\t\n\t\t\\item Product: However, if we multiply the two generating functions, we have that:\n\t\t\n\t\tThis can be summarized in the expression:\n\t\t\n\t\tAnother special case of the product of generating functions is the product:\n\t\t\n\t\tIt is the product of two generating functions, the first one being the generating function:\n\t\t\n\t\tBy:\n\t\t\n\t\tthe product of these is a generating function for the sequence (as all $a_i=1$):\n\t\t\n\t\n\t\t\\item Derivative: We have already seen a couple of examples of the use of the derivative above.\n\t\\end{itemize}\n\tRemember now that we have proved that:\n\t\n\tTherefore, by the fact that:\n\t\n\thas for generating sequence:\n\t\n\twe have then that:\n\t\n\tis then a generating function for the sequence of the sums of the first $n$ positive integers:\n\t\n\tIn particular, the coefficient of $x^k$ is:\n\t\n\tWe also know that by taking the derivative of $1/(1-x)^2$ we have:\n\t\n\tTherefore, if we divide this equation by two we have:\n\t\n\tIt must be that the coefficient of $x^k$ in $1/(1-x)^3$ is equal to $(k + 1)(k + 2)/2$, and it is equal to the sum of the first $k+1$ integers, so:\n\t\n\t\n\t\\subsubsection{Multivariate Generating Functions}\n\tRemember that in the section Calculus we have proved that:\n\t\n\tcan be expressed by a condensed form that involves the binomial coefficient:\n\t\n\tThis is what we name a \"\\NewTerm{multivariate generating function}\\index{multivariate generating function}\". It works just as the other generating functions we have previously worked with, except that it has two parameters.
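\tReturning to the Fibonacci example above, the compact formula is just as easy to check computationally (again a sketch with Python's sympy, an assumption of this illustration):\n\t\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\n# With the convention F_0 = F_1 = 1 used above, F(x) = 1/(1 - x - x^2):\nprint(sp.series(1 / (1 - x - x**2), x, 0, 8))\n# 1 + x + 2*x**2 + 3*x**3 + 5*x**4 + 8*x**5 + 13*x**6 + 21*x**7 + O(x**8)\n\\end{verbatim}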
\\subsubsection{Functional Generating Functions}\n\tSo far we have seen generating functions that give scalar values. But a more general family is that of the \"\\NewTerm{functional generating functions}\\index{functional generating functions}\", which give functions instead of simple scalars.\n\t\n\tA generating function for a sequence of functions $\\{f_n(x)\\}$ is obviously a power series of the type:\n\t\n\twhose coefficients are now functions of $x$.\n\t\n\tLet us see the two examples that will be useful to us in finance and in optics (but also in differential and integral calculus!).\n\t\n\tLet us start with the most important example, involving probabilities and especially the Poisson distribution as used in the First Boston Credit Risk Metric model!\n\t\n\tLet us recall that the Poisson distribution mass, that is, the probability that $N$ is equal to $n$, is given by (\\SeeChapter{see section Statistics}):\n\t\\[P(N=n)=e^{-\\mu}\\frac{\\mu^n}{n!}\\]\n\tfor $n=0,1,2,\\ldots$.\n\n\tThe probability generating function is equal to:\n\t\\[G(t)=E(t^N)=\\sum_{n=0}^{\\infty}P(N=n)\\,t^n\\]\n\tWe also know the following Maclaurin infinite series expansion:\n\t\n\tAnd if we rewrite:\n\t\n\tin the following way:\n\t\n\tand further use the above Maclaurin expansion, we can write:\n\t\n\tWhich implies:\n\t\\[G(t)=e^{\\mu(t-1)}\\]\n\tThis is the probability generating function of a Poisson distribution, which we will use for our study of the CreditRisk model.\n\t\n\tAnd now, for our study of optics, let us find the functional generating function of the Bessel functions of the first kind!\n\t\\begin{theorem}\n\tThe generating function for the sequence of Bessel functions of the first kind, of integer order, is:\n\t\\[e^{\\frac{x}{2}\\left(t-\\frac{1}{t}\\right)}=\\sum_{n=-\\infty}^{+\\infty}J_n(x)\\,t^n\\]\n\t\\end{theorem}\n\t\\begin{dem}\n\tTo obtain an expression for $J_n(x)$, we use the Maclaurin series for $e^x$ to get:\n\t\n\tNow let us make the change of variable $n=r-s$.\n\n\tTherefore the expression in the sum becomes:\n\t\n\tThen, as $r$ and $s$ both range over $[0,+\\infty[$, the difference $n=r-s$ ranges over $]-\\infty,+\\infty[$. So the first sum is easy to determine:\n\t\n\tBut as we can see in the expression in the sum above, we don't get rid of the summation variable $s$. So there is obviously a second, missing sum on the variable $s$! The interval of summation for $s$ is not obvious at first glance...\n\n\tWhat is sure is that $s$ is still in the range $[0,+\\infty[$ (non-negative values) if we look at the term $s!$ in the denominator. But what causes a problem is the lower bound of the sum. If we look closely at the relation in the sum above, we have a term $(n+s)!$. Obviously $n+s$ must never be negative! It follows therefore that if $n<0$, we must have $n+s\\geq 0$; that is, when $n<0$, $s$ must start at $-n$. Finally:\n\t\n\tThat is:\n\t\n\twhere the $J_n$ are the Bessel functions of the first kind.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem} \n\tTherefore, for $n\\geq 0$:\n\t\n\tAnd for $n<0$:\n\t\n\tUsing an index shift, we obtain:\n\t
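\tNumerically, the theorem is easy to test (a sketch in Python; SciPy, the truncation bound and the chosen values of $x$ and $t$ are assumptions of this illustration):\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jv   # Bessel functions of the first kind J_n\n\n# Check exp(x/2 (t - 1/t)) = sum over n of J_n(x) t^n, truncated at |n| <= 30:\nx, t = 1.7, 0.6\nlhs = np.exp(x / 2 * (t - 1 / t))\nrhs = sum(jv(n, x) * t**n for n in range(-30, 31))\nprint(lhs, rhs)                # the two values agree to machine precision\n\\end{verbatim}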
%to make section start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Vector Calculus}\n\n\t\\lettrine[lines=4]{\\color{BrickRed}V}ector calculus or \"vector analysis\" is a branch of mathematics that studies sufficiently regular scalar and vector fields of Euclidean spaces (see definition further below).\\\\\\\\\n\nThe importance of vector calculus comes from its extensive use in physics and in the engineering sciences. It is from this perspective that we will present it, and this is why we limit ourselves mostly to the case of the usual three-dimensional spaces. In this context, a vector field associates to each point of the space a vector (with three real components), while a scalar field associates just a unique real number to such a point.\n\nThere is a phenomenal number of series and theories about these, but we will mention especially the Taylor series (used almost everywhere in applied science), Fourier series (signal theory, statistics, wave mechanics or quantum physics) and Bessel function series (very important in nuclear physics!), which we will study briefly here and again in the section on Functional Analysis.\n\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\nFor example, imagine the water of a lake. The temperature at each point forms a scalar field; the velocity of the water at each point, a vector field (see definition further below).\n\t\\end{tcolorbox}\n\nPhysical concepts such as force or velocity are characterized by a direction, an orientation and an intensity. This triple character is highlighted by arrows. These are the source of the concept of vector and are the most suggestive example. Although their nature is essentially geometric, it is their ability to combine with each other, that is, their algebraic behavior, which will mostly retain our attention. Split into equivalence classes, their set represents the classic model of a \"\\NewTerm{vector space}\\index{vector space}\" (\\SeeChapter{see section Set Theory}). One of our primary goals here is the detailed description of this model.\n\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} Before reading what follows, the reader is advised to have at least read diagonally the section on Set Theory in the Arithmetic chapter. We define there what a \"vector space\" is using the tools of Set Theory. Even if this concept is not absolutely essential here, it is still interesting to see how two areas of mathematics fit together, and also simply for the sake... of introducing vector matters with at least a little bit of rigour.\\\\\n\t\n\t\\textbf{R2.} Vector analysis contains many terms and definitions that must be learned by heart. This work is hard but unfortunately necessary...\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\subsection{Concept of Arrow}\t\n\n\\textbf{Definition (\\#\\mydef):} We denote by $U$ the ordinary space of elementary geometry and $P, Q,...$ its points. We will call an \"\\NewTerm{arrow}\\index{arrow}\" any directed line segment (in space). The arrow of origin $P$ (origin point) and extremity $Q$ (terminal point) will be denoted $\\overrightarrow{PQ}$ or abbreviated by a single letter (Latin or Greek) arbitrarily chosen, for example: $\\overrightarrow{F}$.\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nThe norm ISO 80000-2:2009 authorizes representing vectors by a letter in bold.\n\t\\end{tcolorbox}\t\n\nWe will consider as obvious that any arrow is characterized by its direction, its orientation (because for a given direction it can point either way), its intensity or magnitude (length) and its origin.\n\nIn vector (or multivariable) calculus, we will deal with functions of two or three variables (usually $x, y$ or $x, y, z$, respectively).
An arrow of coordinates $(x, y)$ lies in the Euclidean plane, which in the Cartesian coordinate system consists of all ordered pairs of real numbers $(a,b)$. Since Euclidean space can also be 3-dimensional, we denote the latter by $\\mathbb{R}^3$.\n\nThe graph of such an arrow consists of the points $(a, b, c)$. The 3-dimensional coordinate system of Euclidean space can be represented on a flat surface, such as this page or a blackboard, only by giving the illusion of three dimensions, in the manner shown in the figure below:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/algebra/euclidian_vector.eps}\n\\caption{Example of an arrow in $\\mathbb{R}^3$ Euclidean space}\n\\end{figure}\n\nEuclidean space has three mutually perpendicular coordinate axes ($x$, $y$ and $z$), and three\nmutually perpendicular coordinate planes: the $xy$-plane, $yz$-plane and $xz$-plane:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/algebra/euclidian_planes.eps}\n\\caption{Mutually perpendicular planes in $\\mathbb{R}^3$}\n\\end{figure}\n\nThe coordinate system shown above is known as a right-handed coordinate system, because it is possible, using the right hand, to point the index finger in the positive direction of the $x$-axis, the middle finger in the positive direction of the $y$-axis, and the thumb in the positive direction of the $z$-axis, as below:\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/algebra/right_hand.eps}\n\\caption{Right hand system}\n\\end{figure}\n\n\n\t\\subsection{Set of Vectors}\t\n\n\\textbf{Definitions (\\#\\mydef):}\n\n\t\\begin{enumerate}\n\t\t\\item[D1.] We say that two arrows are \"\\NewTerm{equivalent arrows}\" \\index{equivalent arrows}  if they have the same direction, the same orientation and the same intensity.\n\t\t\\item[D2.] We say that two arrows are \"\\NewTerm{collinear arrows}\"\\index{collinear arrows} if they have the same direction.\n\t\\end{enumerate}\nLet us now split the set of all arrows into equivalence classes: two arrows belong to the same class if and only if they are equivalent.\n\nSo:\n\n\\textbf{Definitions (\\#\\mydef):} \n\n\\begin{enumerate}\n\t\\item[D1.] Each equivalence class of arrows whose origin point and terminal point are distinct is a \"\\NewTerm{vector}\"\\index{vector}, or rather a \"\\NewTerm{free vector}\"\\index{free vector} because its origin is not taken into account (if its origin is well defined, then we have a \"\\NewTerm{bounded vector}\")\\index{bounded vector}.\n\t\\item[D2.] Degenerate arrows (that is to say, of the form $\\overrightarrow{PP}$) are named the \"\\NewTerm{zero vector}\"\\index{zero vector} and written $\\vec{0}$; they have an undefined direction and orientation and zero intensity (origin and terminal point are not distinct).\n\\end{enumerate}\t\n\nThe set of vectors will commonly be referred to by $\\mathbb{V}$. Note that the elements of $\\mathbb{V}$ are (equivalence) classes of arrows and not individual arrows. It is however clear that any arrow is sufficient to determine the equivalence class to which it belongs, and it is natural to name the corresponding class the \"\\NewTerm{representative class}\"\\index{representative class} of the vector.\n\nLet us now draw a representative of a vector $\\vec{y}$ from the end of a representative of a vector $\\vec{x}$. The arrow whose origin is that of the representative of $\\vec{x}$ and whose extremity is that of the representative of $\\vec{y}$ determines a new vector, which we write: $\\vec{x}+ \\vec{y}$.
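\n\nIn coordinates this construction is elementary; here is a minimal numerical sketch (in Python with NumPy, both assumptions of this illustration):\n\\begin{verbatim}\nimport numpy as np\n\n# Two arrows represent the same (free) vector exactly when their\n# coordinate differences Q - P agree:\nP, Q = np.array([1.0, 0.0, 2.0]), np.array([3.0, 1.0, 2.0])\nx = Q - P                    # the vector represented by the arrow PQ\n\ny = np.array([0.5, -1.0, 4.0])\nprint(x + y)                 # tip-to-tail sum of the two vectors\nprint(np.allclose(x + y, y + x))   # commutativity, coordinate-wise\n\\end{verbatim}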
The operation that combines any two vectors into their sum is named \"\\NewTerm{vector addition}\"\\index{vector addition}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{img/algebra/vector_addition.jpg}\n\\caption{Example of a sum of two vectors}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=1]{img/algebra/vector_addition_robotics.jpg}\n\\caption{Example of a sum of two vectors with robot dynamics notation}\n\\end{figure}\n\nUsing a figure, it is easy to show that the operation of vector addition is associative and commutative, i.e. that:\n\t\\[(\\vec{x}+\\vec{y})+\\vec{z}=\\vec{x}+(\\vec{y}+\\vec{z})\\]\nand:\n\t\\[\\vec{x}+\\vec{y}=\\vec{y}+\\vec{x}\\]\nIt is also evident that the zero vector $\\vec{0}$ is the neutral element of vector addition. Formally:\n\t\\[\\vec{x}+\\vec{0}=\\vec{x} \\qquad \\vec{x}+(-\\vec{x})=\\vec{0}\\]\nwhere $-\\vec{x}$ means the opposite of the vector $\\vec{x}$, that is to say the vector whose representatives have the same direction and the same intensity as those of $\\vec{x}$, but the opposite orientation. \n\n\\textbf{Definitions (\\#\\mydef):}\n\t\\begin{enumerate}\n\t\t\\item[D1.] Two vectors whose sum is zero are then named \"\\NewTerm{opposed vectors}\"\\index{opposed vectors}, since the only thing that differentiates them is their orientation...\n\t\t\\item[D2.] It follows that if two or more vectors have the same direction, the same intensity and the same orientation, then we say that they are \"\\NewTerm{equal vectors}\"\\index{equal vectors}.\n\t\\end{enumerate}\nAs we can see, the reverse operation of vector addition is vector subtraction: subtracting a vector is equivalent to adding the opposite vector.\n\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\\textbf{R1.} The addition extends, by induction, to the case of any finite family of vectors. Thanks to associativity, these successive additions can be performed in any order, which justifies the writing without brackets.\\\\\\\\\n\\textbf{R2.} Multiplication between two vectors, as such, is a concept that does not exist. But, as we shall see a little further on, we can multiply vectors by scalars, combine them through other operations involving what we call the \"norm\", and still other things...\n\t\\end{tcolorbox}\t\n\n\\subsubsection{Pseudo-Vectors}\n\nIn physics, in the statement named \"\\NewTerm{Curie's principle}\"\\index{Curie's principle} (\\SeeChapter{see section Principa}), physicists make mention of what they name \"\\NewTerm{pseudo-vectors}\"\\index{pseudo-vector}. This is simple vocabulary for something equally simple, but which basically only a few people actually use. It can still be useful to present what it is.\n\nIn fact, vectors and pseudo-vectors transform in the same way under a rotation or a translation (we will see in our study of Linear Algebra how to perform this type of transformation mathematically). It is not the same for a symmetry with respect to a plane or to a point. Under these transformations we have by definition the following properties:\n\t\\begin{enumerate}\n\t\t\\item[P1.] A vector is transformed into its symmetric image.\n\t\t\\item[P2.]
A pseudo-vector is transformed into the opposite of its symmetric image.\n\t\\end{enumerate}\nHere is a figure with typical examples (the choice of letters representing the vectors and pseudo-vectors is not due to chance; they are a wink to the properties of electric and magnetic fields as studied in the Electromagnetism chapter):\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.75]{img/algebra/pseudo_vector.eps}\n\\caption{Differences of transformations between a vector and a pseudo-vector}\n\\end{figure}\n\nA well-known practical example is the pseudo-vector, which we will study in detail much further below, resulting from the operation named \"\\NewTerm{cross product}\"\\index{cross product}:\n\t\nTo see why the result is a pseudo-vector, consider the special simple case:\n\t\nNow if we perform a symmetry with respect to the $X\\text{O}Z$ plane we get:\n\t\nSo as we can see, the vector resulting from the cross product is a pseudo-vector, because under the reflection in the plane its orientation changes!\n\nA vector whose orientation is not changed by such a mathematical symmetry operation is named a \"\\NewTerm{polar vector}\"\\index{polar vector}, but in fact almost everybody says just \"vector\".\n\nNow that we have an idea of what vectors are, we can start to perform some of the usual algebraic operations on them, and this is what we name \"\\NewTerm{vector algebra}\"\\index{vector algebra}.\n\n\\pagebreak\n\\subsubsection{Multiplication by a scalar}\n\nThe vector expression $\\alpha\\cdot \\vec{x}$, named the \"\\NewTerm{product of vector $\\vec{x}$ by scalar $\\alpha$}\"\\index{vector scalar product}, is defined as follows:\n\nTake a representative arrow of $\\vec{x}$ and construct an arrow of the same direction, with the same or the opposite orientation depending on whether the scalar $\\alpha$ is positive or negative, and of intensity $\\mid \\alpha \\mid$ times the intensity of the initial arrow. The arrow thus obtained is a representative of the vector:\n\t\nIf $\\alpha=0$ or $\\vec{x}=\\vec{0}$ we write:\n\t\\[\\alpha\\cdot\\vec{x}=\\vec{0}\\]\nThe operation consisting of taking the product of a scalar by a vector is named \"\\NewTerm{scalar multiplication}\\index{scalar multiplication}\".\n\nWe easily check that scalar multiplication is associative and distributive with respect to vector addition; formally:\n\t\n\n\tThe multiplication of a vector by a non-null scalar doesn't change its direction; if the scalar is negative the vector still has the same direction, but its orientation is reversed.\n\t\n\tFrom this definition we have that two vectors $\\vec{v}$ and $\\vec{w}$ are parallel (denoted by $\\vec{v}\\mid\\mid\\vec{w}$) if one is a scalar multiple of the other. You can think of scalar multiplication of a vector as stretching or shrinking the vector, and as flipping the vector in the opposite direction if the scalar is a negative number.
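\n\t\n\tA short numerical sketch of both facts just discussed, scalar multiplication and the pseudo-vector behaviour of the cross product (in Python with NumPy, assumptions of this illustration):\n\t\\begin{verbatim}\nimport numpy as np\n\nv = np.array([2.0, 1.0, 0.0])\nprint(-3.0 * v)     # same direction, opposite orientation, 3x the intensity\n\n# Pseudo-vector behaviour: reflect through the XOZ plane (y -> -y).\nS = np.diag([1.0, -1.0, 1.0])\na, b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])\nprint(np.cross(S @ a, S @ b))  # [ 0.  0. -1.]\nprint(S @ np.cross(a, b))      # [ 0.  0.  1.]: the cross product of the\n                               # mirrored vectors is the opposite of the\n                               # mirrored cross product\n\\end{verbatim}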
Let's see a concrete, worldwide-known example of the use of vectors with scalars (probably also the simplest example):\n\t\n\t\\paragraph{Rule of three}\\mbox{}\\\\\\\\\n\tLet us go back to the \"\\NewTerm{rule of three}\"\\index{rule of three} (sometimes also named \"\\NewTerm{rules of ratios and proportions}\\index{rules of ratios and proportions}\" or \"\\NewTerm{unit reduction method}\\index{unit reduction method}\"), often defined in small classes (middle school) intuitively but without a nice proof. This rule is probably the most widely used algorithm in the world: it identifies a fourth number when three are given and the four numbers are proportionally related.\n\nThe rule of three comes most of the time in two versions:\n\t\\begin{enumerate}\n\t\t\\item[V1.] Simple and direct if the magnitudes are directly proportional.\n\t\t\\item[V2.] Simple and inverse if the quantities are inversely proportional.\n\t\\end{enumerate}\nand when two variables $X$ and $Y$ are proportional we note, as a reminder:\n\t\n\\begin{theorem}\nSuppose now that $X$ can take the values $x_1,x_2$ and that $Y$ takes the proportionally related values $y_1,y_2$; then the following proportional relation applies:\n\t\\[\\frac{x_1}{y_1}=\\frac{x_2}{y_2}\\]\nIt is named the \"\\NewTerm{simple and direct ratio}\\index{simple and direct ratio}\".\n\\end{theorem}\n\\begin{dem}\n\tGiven two collinear vectors $\\vec{x}=(x_1,x_2),\\vec{y}=(y_1,y_2)$, therefore proportional with a given factor $\\lambda$ such that:\n\t\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\\end{dem}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nIf this ratio does not hold (thus: not proportional), then we must switch to other tools such as the simple and inverse ratio, or regression techniques and, quite simply, extrapolation.\n\t\\end{tcolorbox}\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\nIn Lausanne (Switzerland), in 2011, garbage bags are taxed and the following rates apply: a bag of $17 [L]$ costs $1.-$ and the bag of $110 [L]$ costs $3.80.-$. Reduced to $17 [L]$, the price of the garbage bag of 110 [L] is thus:\n\t\nThat is to say, approximately $60\\%$ of the price of the bag of $17 [L]$ (then go search for an explanation... ???).\n\t\\end{tcolorbox}\n\\begin{theorem}\nThe following proportional relation:\n\t\\[x_1y_1=x_2y_2\\]\nis named the \"\\NewTerm{simple and inverse ratio}\\index{simple and inverse ratio}\".\n\\end{theorem}\n\\begin{dem}\n\tGiven two collinear vectors $\\vec{x}=(x_1,x_2),\\vec{y}=(y_1,y_2)$, therefore proportional with a given factor $\\lambda$ such that:\n\t\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\\end{dem}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\\textbf{R1. }If this ratio does not hold (thus: not inversely proportional), then we must switch to other tools such as the simple and direct ratio, or regression techniques and, quite simply, extrapolation.\\\\\n\\textbf{R2.} We also name a \"\\NewTerm{simple or inverse joint rule}\\index{simple or inverse joint rule}\" a series of direct or inverse rules of three.\n\t\\end{tcolorbox}\t\n\n\tBasically, it is enough to know three of the four variables to solve this simple equation of the first degree.\n\n\tIn such calculations, the agents of the exchange market noticed that most of the time the ratio values were close to unity. They were thus naturally led to define the \"percentage\" as the proportion of a quantity or magnitude relative to another, measured in hundredths (at least most of the time...).
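\n\n\tBefore recalling the percentage notation, note that the garbage-bag computation above fits in two lines of code (a sketch in Python, an assumption of this illustration; the numbers are those of the example):\n\t\\begin{verbatim}\n# Simple and direct rule of three: y2 = x2 * y1 / x1.\nprice_110, volume_110, volume_17 = 3.80, 110.0, 17.0\nprint(price_110 * volume_17 / volume_110)   # ~0.587, about 60% of the\n                                            # price of the 17 L bag (1.-)\n\\end{verbatim}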
Remember (\\SeeChapter{see section Numbers}):\n\n\t\\begin{itemize}\n\t\t\\item Given a scalar $x \\in \\mathbb{R}$, expressed as a percentage it will be denoted by:\n\t\t\t\n\t\t\\item Given a scalar $x \\in \\mathbb{R}$, expressed in per-thousand it will be denoted by:\n\t\t\t\n\t\\end{itemize}\n\n\t\\subsection{Vector Spaces}\n\n\\textbf{Definition (\\#\\mydef):} We name a \"\\NewTerm{vector space}\\index{vector space}\" a set $E$ of elements designated by $\\vec{x},\\vec{y},...$ and named (as we know) \"vectors\", with a \"vector algebraic structure\" defined by the operations of vector addition (and thus vector subtraction) and scalar multiplication. These two operations satisfy the laws of associativity, commutativity, distributivity, neutral element and opposite element, as we have already seen in the section on Set Theory.\n\n\tFor more information about what exactly a vector space is, the reader should refer to the section on Set Theory, where this concept is defined more strictly (it would be redundant to repeat it here, and anyway it is not crucial because the properties are intuitive).\n\n\tFor every positive integer $n$, we consider the $n$-tuples of numbers $a_i$ arranged in a column vector:\n\t\n\tor as a row vector (a column vector that has been \\textbf{T}ransposed):\n\t\n\tand $\\mathbb{R}^n$ clearly carries a vector space structure. The vectors of this space will be named, as we already know, \"vectors\". They are often denoted more briefly by:\n\t\n\tor even more briefly by:\n\t\n\tThe number $a_i$ is sometimes named the \"\\NewTerm{term}\\index{vector term}\" or \"\\NewTerm{component of index $i$}\\index{vector component}\" of $(a_i)$.\n\nNow, unless stated otherwise, the vectors will always be the elements of a vector space $E$.\n\n\t\\subsubsection{Linear Combinations}\n\n\\textbf{Definition (\\#\\mydef):} We name a \"\\NewTerm{linear combination}\\index{linear combination}\" of vectors any vector relation of the form:\n\t\\[\\vec{v}=\\alpha_1\\vec{x}_1+\\alpha_2\\vec{x}_2+...+\\alpha_n\\vec{x}_n\\]\nWhen a vector can be expressed in the above way, we say that the vector is in \"\\NewTerm{component form}\\index{vector component form}\".\n\n\tThe null vector $\\vec{0}$ is a linear combination of $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_n$ with all coefficients equal to zero. We speak then of the \"\\NewTerm{trivial linear combination}\\index{vector trivial linear combination}\".\n\n\t\\textbf{Definition (\\#\\mydef):} We name a \"\\NewTerm{convex combination}\\index{convex combination}\" any linear combination whose coefficients are non-negative and sum to 1. The set of convex combinations of two points $P$ and $Q$ of a punctual space $P_0$ (with an origin) is the line segment between $P$ and $Q$. To realize this, we just write:\n\t\\[\\alpha P+(1-\\alpha)Q\\]\n\tand let $\\alpha$ vary from 0 to 1 to find that all the points of the segment are thereby obtained.\n\t\n\tIf the vector $\\vec{v}$ is a linear combination of the vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_n$ and each of these vectors $\\vec{x_i}$ is a linear combination of a set of independent vectors $\\vec{y}_1,\\vec{y}_2,...,\\vec{y}_n$, then it should be obvious that $\\vec{v}$ is also a linear combination of  $\\vec{y}_1,\\vec{y}_2,...,\\vec{y}_n$.\n\t\n\t\\textbf{Definition (\\#\\mydef):} A number $n$ of non-zero vectors are \"\\NewTerm{coplanar}\\index{coplanar}\" if one of them is a linear combination of the others.
For example, three vectors are coplanar if one of them is in the plane defined by the two others.\n\t\n\t\\subsubsection{Sub-vector spaces}\n\t\n\t\\textbf{Definition (\\#\\mydef):} We name a \"\\NewTerm{vectorial subspace $V$ of $E$}\\index{vectorial subspace}\" any subset of $E$ that is itself a vector space for the operations of addition and scalar multiplication defined in $E$.\n\t\n\tA vectorial subspace $V$, as a vectorial space, cannot be empty, as it includes at least one vector, i.e. its zero vector, this being necessarily also the zero vector of $E$. In addition, together with the vectors $\\vec{x}$ and $\\vec{y}$ (if it contains vectors other than the zero vector), it also includes all their linear combinations $\\alpha\\vec{x}+\\beta\\vec{y}$.\n\t\n\tConversely, any subset having these properties is a vectorial subspace. We have thus established the following proposition:\n\t\n\tA subset $V$ of $E$ is a subspace of $E$ if and only if $V$ is not empty and $\\alpha\\vec{x}+\\beta\\vec{y}$ belongs to $V$ for every pair $(\\vec{x},\\vec{y})$ of vectors of $V$ and every pair of scalars $\\alpha,\\beta \\in \\mathbb{R}$.\n\t\n\t\\subsubsection{Generating families}\n\t\n\tIt follows that if we have a family of vectors $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$, the set of linear combinations of $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ with $k<n$ is a subspace $S$ of $E$, more specifically the smallest subspace of $E$ containing $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$.\n\t\n\tThe vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ that satisfy the above condition are named \"\\NewTerm{generators}\\index{generators of a family of vectors}\" of $S$, and the family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ the \"\\NewTerm{generating family}\\index{generating family}\" of $S$. We also say that these vectors, or this family, \"generate $S$\".
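\n\t\n\tIn coordinates, whether a family generates a given space reduces to a rank computation (a sketch in Python with NumPy, assumptions of this illustration):\n\t\\begin{verbatim}\nimport numpy as np\n\n# A family generates R^3 iff the matrix whose columns are its\n# vectors has rank 3:\nfamily = np.column_stack([[1, 0, 0], [1, 1, 0], [2, 1, 0]])\nprint(np.linalg.matrix_rank(family))   # 2: this family only generates a plane\n\\end{verbatim}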
\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nThe subspace generated by a nonzero vector consists of all multiples of this vector. We name such a subspace a \"\\NewTerm{vector line}\\index{vector line}\". A subspace generated by two vectors that are not multiples of each other is named a \"\\NewTerm{vector map}\\index{vector map}\" or \"\\NewTerm{vector plane}\\index{vector plane}\".\n\t\\end{tcolorbox}\n\t\\subsubsection{Linear Dependence or Independence}\n\tWhat follows is very important in physics: we advise future physicists or engineers to really take the time to read the developments below.\n\t\n\tIf $(\\vec{e}_1,\\vec{e}_2,\\vec{e}_3)$ are three vectors of $E^3$ whose representatives are not parallel to a same plane (by convention a zero vector is parallel to any plane), then any vector $\\vec{x}$ of $E^3$ can be written as the linear combination:\n\t\\[\\vec{x}=\\alpha_1\\vec{e}_1+\\alpha_2\\vec{e}_2+\\alpha_3\\vec{e}_3\\]\n\twhere $\\alpha_1,\\alpha_2,\\alpha_3$ are typically in $\\mathbb{R}$.\n\tFor example, the vector $\\vec{x}$ above (which could also be obtained for different values of the $\\alpha_i$!):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.75]{img/algebra/vector_linear_combination.jpg}\n\t\t\\caption{Example of a construction of a vector in a three-dimensional space}\n\t\\end{figure}\n\tIn particular, the only possibility to get the zero vector $\\vec{0}$ as a linear combination of $(\\vec{e}_1,\\vec{e}_2,\\vec{e}_3)$ is to assign the trivial value $0$ to the coefficients $\\alpha_1,\\alpha_2,\\alpha_3$.\n\n\tConversely, if for three vectors  $\\vec{e}_1,\\vec{e}_2,\\vec{e}_3$ of $E^3$ the relation:\n\t\\[\\alpha_1\\vec{e}_1+\\alpha_2\\vec{e}_2+\\alpha_3\\vec{e}_3=\\vec{0}\\]\n\timplies $\\alpha_1=\\alpha_2=\\alpha_3=0$, then none of the vectors can be a linear combination of the other two; in other words, their representatives are not parallel to a same plane.\n\t\n\tBased on these observations, we will extend the notion of absence of parallelism to a same plane to the case of any number of vectors of a given vector space $E$.\n\t\n\tWe say that the vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ are \"\\NewTerm{linearly independent}\\index{linearly independent vectors}\" if the relation:\n\t\\[\\alpha_1\\vec{x}_1+\\alpha_2\\vec{x}_2+...+\\alpha_k\\vec{x}_k=\\vec{0}\\]\n\tnecessarily implies  $\\alpha_1=\\alpha_2=...=\\alpha_k=0$, in other words, if the trivial linear combination is the only linear combination of $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ which is zero.
Otherwise, we say that the vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ are \"\\NewTerm{linearly dependent}\\index{linearly dependent vectors}\".\n\t\n\tIf the attention is fixed on the family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ rather than on the terms of which it is made, we say that the latter is a \"\\NewTerm{free family}\\index{free family (vector calculus)}\" or a \"\\NewTerm{linked family}\\index{linked family (vector calculus)}\" according to whether the vectors are linearly independent or dependent.\n\t\n\t\\subsubsection{Basis of a vectorial space}\n\t\\textbf{Definition (\\#\\mydef):} We say that a finite family of vectors is a basis of $E$ if and only if:\n\t\\begin{enumerate}\n\t\t\\item It is free.\n\t\t\n\t\t\\item It generates $E$.\n\t\\end{enumerate}\n\tFollowing this definition, every free family $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ is a basis of the subspace it generates.\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tIf we consider $\\mathbb{C}$ as an $\\mathbb{R}$-vector space (\\SeeChapter{see section Set Theory}), then since all the elements of $\\mathbb{C}$ are written $a+\\mathrm{i}b$, the elements that generate $\\mathbb{C}$ are $1$ and $\\mathrm{i}$ (and they are free).\\\\\n\t\n\tA basis of $\\mathbb{C}$ (which is 2-dimensional) as an $\\mathbb{R}$-vector space is therefore the free finite set $\\left\\lbrace 1, i \\right\\rbrace$.\n\t\\end{tcolorbox}\n\tFor a family of vectors $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e}_n)$ to be a basis of $E$, it is necessary and sufficient that every vector $\\vec{x}$ of $E$ be expressed uniquely as a linear combination of the vectors $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e}_n)$:\n\t\\[\\vec{x}=x_1\\vec{e}_1+x_2\\vec{e}_2+...+x_n\\vec{e}_n\\]\n\tThe above relation is the decomposition of $\\vec{x}$ in the basis $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e}_n)$, where the coefficients $x_1,x_2,...,x_n$ are the components of $\\vec{x}$ in this basis. In the presence of a basis, each vector is determined entirely by its components.\n\t\n\tProposition:\n\t\n\tIf $x_1,x_2,...,x_n$ are the components of $\\vec{x}$ and $y_1,y_2,...,y_n$ those of $\\vec{y}$, then: \n\t\\[x_1+y_1,x_2+y_2,...,x_n+y_n\\]\n\tare the components of $\\vec{x}+\\vec{y}$.\n\t\n\tIn other words, adding two vectors is equivalent to adding their components, and multiplying a vector by a scalar is obviously equivalent to multiplying its components by the same scalar. The basis is an important tool because it allows you to perform operations on vectors through operations on numbers.\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tThe following column vectors of $\\mathbb{R}^n$:\n\t\n\tform a basis that we name the \"\\NewTerm{canonical basis}\\index{canonical basis}\" of $\\mathbb{R}^n$ (we will work in complex spaces in another section of this book).\n\t\\end{tcolorbox}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn three-dimensional space, bases are very often drawn as a triad (actually, if you connect the ends of the three vectors by lines you will get an imaginary triad).\n\t\\end{tcolorbox}
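\n\tConcretely, finding the components of a vector in a given basis amounts to solving a linear system (a sketch in Python with NumPy, assumptions of this illustration; the basis chosen is arbitrary):\n\t\\begin{verbatim}\nimport numpy as np\n\n# Columns of E form a (non-canonical) basis of R^3:\nE = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]])\nx = np.array([2.0, 3.0, 1.0])\ncomponents = np.linalg.solve(E, x)   # unique, because E is a basis\nprint(components)                    # [-1.  2.  1.]\nprint(E @ components)                # reconstructs x\n\\end{verbatim}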
\\pagebreak\n\t\\subsubsection{Direction Angles}\n\tIt is clear that a single standard angle cannot describe the direction of a vector in space. We then use the concept of \"\\NewTerm{direction angles}\\index{direction angles}\". The idea is to measure the angle of the vector $\\vec{U}$ with respect to each of the positive axes of the basis:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/direction_angles.jpg}\n\t\t\\caption{Representation of direction angles}\n\t\\end{figure}\n\tif:\n\t\n\tThen by definition:\n\t\n\tThe values:\n\t\n\tare named the \"\\NewTerm{direction cosines}\\index{direction cosines}\" of $\\vec{x}$.\n\t\n\tThe three angles mentioned are not completely independent. Indeed, two are enough to completely determine the direction of a vector in space; the third can be deduced from the following equality (obtained from the calculation of the sum of squares of the previous relations):\n\t\\[\\cos^2\\alpha+\\cos^2\\beta+\\cos^2\\gamma=1\\]\n\tTherefore the direction cosines are the scalar components of a unit vector  $\\vec{u}$ having the same direction as $\\vec{U}$:\n\t\n\t\n\t\\subsubsection{Dimensions of a vector space}\n\tWe say that a vector space $E$ is of \"\\NewTerm{finite dimension}\\index{finite size basis}\" if it is generated by a finite family of vectors. Otherwise, we say that $E$ is of \"\\NewTerm{infinite dimension}\\index{infinite dimension basis}\" (we'll discuss this type of space in another section). Any finite-dimensional vector space not reduced to the zero vector has a basis. In fact, from any generating family of such a vector space we can extract a basis.\n\t\n\tThe dimension of a vector space is denoted by:\n\t\\[\\dim(E)\\]\n\tAny vector space $E$ of nonzero finite dimension $n$ can be put in one-to-one correspondence (that is to say, in bijection) with $\\mathbb{R}^n$. We just need to choose a basis of $E$ and to associate to any vector $\\vec{x}$ of $E$ the column vector whose terms are the components of $\\vec{x}$ in the chosen basis (this is mathematician's boilerplate, but it will be useful when we discuss more complex spaces):\n\t\n\tThis correspondence preserves the operations of addition and multiplication by a scalar, as we have already seen; in other words, we can perform operations on vectors through operations on numbers.\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nFor \"classic\" resolution methods of such systems, we refer the reader to the section on Numerical Methods of the chapter on Computing Science.\n\t\\end{tcolorbox}\t\n\tWe then say that $E$ and $\\mathbb{R}^n$ are \"\\NewTerm{isomorphic}\\index{isomorphic basis}\" or that the correspondence is an isomorphism (\\SeeChapter{see section Set Theory}).\n\t\n\t\\subsubsection{Extension of a free family}\n\t\\begin{theorem}\n\tGiven $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ a free family and $(\\vec{v}_1,\\vec{v}_2,...,\\vec{v}_m)$ a generating family of $E$. If $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ is not a basis of $E$, we can extract a subfamily $$(\\vec{v}_{i1},\\vec{v}_{i2},...,\\vec{v}_{il})$$ of $(\\vec{v}_1,\\vec{v}_2,...,\\vec{v}_m)$ so that the family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k,\\vec{v}_{i1},\\vec{v}_{i2},...,\\vec{v}_{il})$ is a basis of $E$.\n\t\\end{theorem}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSuch a theorem is useful when passing from a mathematical space having given properties to another space with different mathematical properties.\n\t\\end{tcolorbox}\t\n\t\\begin{dem}\n\tWe may assume that at least one of the vectors $\\vec{v}_i$ is not a linear combination of the vectors $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$, otherwise $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ would generate $E$ and would therefore be a possible basis of $E$. Let us denote that vector by $\\vec{v}_{i1}$.
The family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k,\\vec{v}_{i1})$ is then a free family. Indeed, the relation:\n\t\\[\\alpha_1\\vec{x}_1+\\alpha_2\\vec{x}_2+...+\\alpha_k\\vec{x}_k+\\beta_1\\vec{v}_{i1}=\\vec{0}\\]\n\timplies first that $\\beta_1=0$, otherwise $\\vec{v}_{i1}$ would be a linear combination of the vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$, and then all $\\alpha_i=0$ since the vectors $\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k$ are linearly independent.\n\t\n\tIf the family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k,\\vec{v}_{i1})$ generates $E$, it is then a possible basis for $E$ and the theorem is proved. Otherwise, the same reasoning ensures the existence of another vector $\\vec{v}_{i2}$ .... If the new resulting family is not a basis of $E$, then the process of extracting vectors $\\vec{v}_i$ from $(\\vec{v}_1,\\vec{v}_2,...,\\vec{v}_m)$ continues. When it stops (and it must stop, since the generating family is finite), we get an \"extension\" of $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ into a free family generating $E$, that is to say a basis of $E$.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tThis yields a corollary: \n\n\tEvery finite-dimensional vector space not reduced to the zero vector has a basis! In fact, from any generating family of such a space, we can extract a basis.\n\t\n\t\\subsubsection{Rank of a finite family}\n\t\\textbf{Definition (\\#\\mydef):} We name the \"\\NewTerm{rank of a family of vectors}\\index{rank of a family of vectors}\", and denote by $\\text{rk}(S)$, the dimension of the subspace $S$ of $E$ that it generates.\n\t\n\t\\begin{theorem}\n\tThe rank of a family of vectors $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ is less than or equal to $k$, and is equal to $k$ if and only if the family is free.\n\t\\end{theorem}\n\t\\begin{dem}\n\tLet us set aside the trivial first case where the rank of the family $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ is zero. By the previous corollary, we can then extract from this family a basis of the subspace it generates. The rank is then less than or, respectively, equal to $k$ according to whether $(\\vec{x}_1,\\vec{x}_2,...,\\vec{x}_k)$ is a linked family or not.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\n\t\\pagebreak\n\t\\subsubsection{Direct Sums}\n\t\\textbf{Definition (\\#\\mydef):} We say that the sum $S + T$ of two subspaces $S$ and $T$ of $E$ is a \"\\NewTerm{direct sum}\\index{direct sum of subspaces}\" if (here stated in the special case of two subspaces!):\n\t\\[S\\cap T=\\left\\lbrace \\vec{0} \\right\\rbrace\\]\n\tIn this case, we denote it:\n\t\\[S\\oplus T\\]\n\tIn other words, the sum of two vector subspaces $S$ and $T$ of $E$ is direct if the decomposition of every element of $S + T$ into a sum of an element of $S$ and an element of $T$ is unique.\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tFor example, the $XY$-plane, a two-dimensional vector space, can be thought of as the direct sum of two one-dimensional vector spaces, namely the $X$ and $Y$ axes. In this direct sum, the $x$ and $y$ axes intersect only at the origin (the zero vector).
Addition is defined coordinate-wise, that is:\n\t \n\twhich is the same as vector addition.\n\t\\end{tcolorbox}\t\n\tThis concept of trivial decomposition will be very useful in some theorems, the most important one in this book being definitely the spectral theorem (\\SeeChapter{see section Linear Algebra}), which has important implications in statistics!!!\n\t\n\tFrom the direct sum we can introduce the concept of the \"\\NewTerm{complementary subspace}\\index{complementary subspace}\", also named \"\\NewTerm{supplementary subspace}\\index{supplementary subspace}\" (depending on the country...):\n\t\\begin{theorem}\n\tSuppose that $E$ is of finite dimension. For any subspace $S$ of $E$, there exists a subspace $T$ (not unique) of $E$ such that $E$ is the direct sum of $S$ and $T$. We say then that $T$ is a \"\\NewTerm{supplementary subspace}\\index{supplementary subspace}\" of $S$ in $E$.\n\t\\end{theorem}\n\t\\begin{dem}\n\tFirst let us set aside the trivial cases where $S=\\left\\lbrace \\vec{0} \\right\\rbrace$  or $S = E$. The subspace $S$ admits a basis $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e}_k)$, where $k$ is less than the dimension $n$ of $E$. By the theorem of extension of a free family, this basis can be extended into a basis $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e}_k,\\vec{e}_{k+1},...,\\vec{e}_n)$ of $E$. Let $T$ be the vector subspace generated by the family $(\\vec{e}_{k+1},...,\\vec{e}_n)$. If $\\vec{x}$ is any vector of $E$, then $\\vec{x}=\\vec{s}+\\vec{t}$, where $\\vec{s}$ is a vector of $S$ and $\\vec{t}$ a vector of $T$. In addition $S\\cap T=\\left\\lbrace \\vec{0} \\right\\rbrace$, because no vector, except the zero vector, may be a linear combination of the vectors $\\vec{e}_1,...,\\vec{e}_k$ and at the same time of the vectors $\\vec{e}_{k+1},...,\\vec{e}_n$. We therefore conclude that:\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\n\t\\subsubsection{Affine spaces}\n\tIn mathematics, an affine space $G=\\text{A}\\mathbb{R}^n$ is a geometric structure that is independent of the concepts of distance and angle measure, keeping only the properties related to parallelism and to ratios of lengths of parallel line segments, as there is no origin point $(0,0)$.\n\t\n\tThe space $G$ of elementary geometry is both common and the source of the concept of \"affine space\" that we will introduce, because when high-school students begin to learn geometry, they learn it without any reference point $(0,0)$.\n\n\tIn an affine space, there is therefore no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a point. In an affine space, there are instead \"\\NewTerm{displacement vectors}\\index{displacement vectors}\", also named \"\\NewTerm{translation vectors}\\index{translation vectors}\" or simply translations, between two points of the space.
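\n\t\n\tA minimal computational sketch of this point/vector distinction (in Python with NumPy, assumptions of this illustration; the relation checked in the last line is the Chasles relation stated formally in the definition below):\n\t\\begin{verbatim}\nimport numpy as np\n\n# Points of the affine plane and displacement vectors between them:\nP, Q, R = np.array([2.0, 1.0]), np.array([5.0, 3.0]), np.array([0.0, 4.0])\nPQ = Q - P                  # a difference of points is a vector\nQR = R - Q\nprint(P + PQ)               # point + vector is again a point: Q\nprint(np.allclose(PQ + QR, R - P))   # True: PQ + QR = PR\n\\end{verbatim}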
This space $G$ is associated with the \"\\NewTerm{geometric vector space}\\index{geometric vector space}\" $V$ by the correspondence between vectors and arrows studied so far! The following definition is only to highlight the main common points of this correspondence:\n\t\n\t\\textbf{Definition (\\#\\mydef):} Let $G$ be a non-empty set of elements that we name \"\\NewTerm{points}\\index{points in a vector space}\" and denote by the letters $P, Q, ...$; given also a vector space $E$. Suppose that to any two points $(P, Q)$ corresponds a vector denoted $\\overrightarrow{PQ}$ (typically the point $P$ is chosen as a fictive origin). We say then that $G$ is an \"\\NewTerm{affine space}\\index{affine space}\" of director space $E$ if the following conditions are met:\n\t\\begin{enumerate}\n\t\t\\item[C1.] For any fixed point $P$, the correspondence between pairs $(P, Q)$ and vectors $\\vec{x}$ is bijective; i.e., for every vector $\\vec{x}$ there exists a unique point $Q$ such that $\\vec{x}=\\overrightarrow{PQ}$.\n\t\t\n\t\t\\item[C2.] For each triple of points $(P, Q, R)$:\n\t\t\\[\\overrightarrow{PQ}+\\overrightarrow{QR}=\\overrightarrow{PR}\\]\n\t\tThis is the famous \"\\NewTerm{Chasles relation}\\index{chasles relation}\" (of which we will later see a pseudo-equivalent in the section on Differential and Integral Calculus).\n\t\t\n\t\t\\item[C3.] If $P$ is a point and $\\vec{x}$ a vector, to express that $Q$ is the unique point such that $\\vec{x}=\\overrightarrow{PQ}$, we write:\n\t\t\\[Q=P+\\vec{x}\\]\n\t\tAlthough a bit excessive, this writing is consistent with usage and suggests well the idea of the operation it designates.\n\t\\end{enumerate}\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\n\tBelow, an \"artistic\" example of an affine space $G=\\text{A}\\mathbb{R}^2$ where there is no origin and any extremity of a line can be considered as the origin of a vector:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/affine_space.jpg}\n\t\t\\caption{Artistic but real example of a $G=\\text{A}\\mathbb{R}^2$ affine space}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\tThe following properties follow directly from the definition of an affine space:\n\t\\begin{enumerate}\n\t\t\\item[P1.] For any point $P$, $P+(\\vec{x}+\\vec{y})=(P+\\vec{x})+\\vec{y}$\n\t\t\n\t\t\\item[P2.] For any point $P$, $\\overrightarrow{PP}=\\vec{0}$. This results from the relation $\\overrightarrow{PQ}+\\overrightarrow{QR}=\\overrightarrow{PR}$ applied in the case where we have $P=Q=R$.\n\t\t\n\t\t\\item[P3.] $\\overrightarrow{PQ}=-\\overrightarrow{QP}$. Just put $R = P$ in the Chasles relation $\\overrightarrow{PQ}+\\overrightarrow{QR}=\\overrightarrow{PR}$.\n\t\t\n\t\t\\item[P4.] Parallelogram rule:\n\t\tGiven the polygon with the vertices (clockwise) $P,P',Q,Q'$ and edges $\\overrightarrow{PP'},\\overrightarrow{P'Q'},\\overrightarrow{QQ'},\\overrightarrow{PQ}$:\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics{img/algebra/affine_parallelogram.jpg}\n\t\t\t\\caption{Vector polygon in $\\text{A}\\mathbb{R}^2$}\n\t\t\\end{figure}\n\t\tWe have:\n\t\t\n\t\tif and only if:\n\t\t\n\t\twhich would then give a parallelogram!\n\t\n\t\tIndeed, replacing $R$ with $Q'$ in the Chasles relation we have:\n\t\t\n\t\tand by doing the same but replacing $R$ with $Q$ and $Q$ by $P'$ we get:\n\t\t\n\t\tWe then have, by equating the last two relations:\n\t\t\n\t\\end{enumerate}\n\tEarlier we saw how a space $G$ could be provided with a vector space structure (we then said that it was \"vectorialized\"). In the general case of an affine space $G$, the process is the same:\n\t\n\tWe choose any point $\\text{O}$ of $G$. The correspondence between pairs $(\\text{O},P)$ and vectors of the director space $E$ being therefore biunivocal, we then define the addition of points and the multiplication of a point by a scalar by the corresponding operations on the vectors of $E$. Armed with these two operations, $G$ becomes a vector space, named the \"\\NewTerm{vectorialized space of $G$ with respect to $\\text{O}$}\\index{vectorialized space}\".
We denote this space by $V$ and name the point $\\text{O}$ the \"\\NewTerm{origin}\\index{origin of a vector space}\".\n\t\n\tGiven how the operations have been defined, it follows that $V$ is isomorphic to the vector space $E$:\n\t\n\tHowever, this isomorphism depends on the choice of the origin $\\text{O}$, and in practice this origin is selected on the basis of the data inherent to the studied problem. For example, if an affine transformation admits an invariant point (one which does not move), it is advantageous to select that point as the origin.\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} When we talk about the dimension of an affine space, we mean the dimension of its director space.\\\\\n\t\n\t\\textbf{R2.} The space $G$ of elementary geometry is an affine space of type $\\text{A}\\mathbb{R}^2$. Indeed, its director space is the geometric vector space $V$ and the conditions of the definition of an affine space are met.\\\\\n\t\n\t\\textbf{R3.} An affine space is a set of elements with a difference function. This difference is a binary function, which takes two points $p$ and $q$ (both in $G$) and yields an element (a vector) $\\vec{v}$ of a vector space $E$. We write $\\vec{v}=p-q$. Additionally, this difference function must ensure that, for any point $p$ in $G$, it holds that $p-p=\\vec{0}$, where $\\vec{0}$ is the null vector of $E$.\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\subsection{Euclidean Vector Spaces}\n\tBefore defining what a Euclidean vector space is, let us first define some mathematical tools and some concepts.\n\t\n\tWe can, by choosing a unit length, measure the intensity of each arrow, in other words, determine its length. We can also measure the angular deviation of two arrows (or vectors) with a common origin (not necessarily distinct), taking as the unit of angle measurement, for example, the radian (\\SeeChapter{see section Trigonometry}). The measurement of this deviation is then a number between $0$ and $\\pi$ named the \"\\NewTerm{angle}\\index{angle between two vector}\" of the two arrows (see the section on Euclidean Geometry for more details). If both arrows have the same direction and orientation, their angle is zero, and if they have the same direction and the opposite orientation, this same angle is $\\pi$.\n\t\n\tThe representing arrows of a same vector $\\vec{x}$ all have the same length. We denote this length by the notation:\n\t\\[||\\vec{x}||\\]\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/details_norm_calculation.jpg}\n\t\t\\caption{Details of the calculation of the norm in an orthogonal coordinate system $\\mathbb{R}^3$}\n\t\\end{figure}\n\tIf $\\vec{x}$ is a nonzero vector we can build a unit-norm vector $\\vec{u}$ of the same direction and orientation (collinear) by the following operation, which is used a lot in physics:\n\t\\[\\vec{u}=\\frac{\\vec{x}}{||\\vec{x}||}\\]\n\tWe will name the \"\\NewTerm{non-zero angle of vectors $\\vec{x}$ and $\\vec{y}$}\" the angle of two arrows of common origin, one representing $\\vec{x}$ and the other $\\vec{y}$.\n\t\n\tHowever, strictly speaking, a \"norm\" defined on a real (or complex) vector space $E$ (so that we then speak of a \"\\NewTerm{normalized vector space}\\index{normalized vector space}\") is an application:\n\t\\[||\\cdot||:E\\to\\mathbb{R}_+\\]\n\tsatisfying the following properties:\n\t\\begin{enumerate}\n\t\t\\item[P1.] Positivity:\n\t\t\n\t\t\\item[P2.] Linearity:\n\t\t\n\t\twhere we take the modulus of the constant if it is not in the set of real numbers $\\mathbb{R}$ but in the set of complex numbers $\\mathbb{C}$.\n\t\t\\item[P3.]
Nullity (often associated with the property P1):\n\t\t\n\t\t\\item[P4.] Minkowski inequality (triangle inequality):\n\t\t\n\t\twhich we will prove further below.\n\t\\end{enumerate}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\t\\textbf{R1.} These properties are mainly imposed by our intuitive approach to Euclidean space (a vector space of finite dimension over the field of real numbers $\\mathbb{R}$, with a scalar product that we will see later) and its geometric interpretation (through the fact that it is also an affine space $\\text{A}\\mathbb{R}^n$).\\\\\n\t\n\t\\textbf{R2.} We will prove a little further below the property P4 under the name \"triangle inequality\", and we will make a slightly more general study of this inequality under the name \"Minkowski inequality\" in the section Topology.\n\t\\end{tcolorbox}\t\n\t\n\t\\pagebreak\t\n\t\\subsubsection{Scalar Product (Dot Product)}\n\t\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{Euclidean vector space}\\index{Euclidean vector space}\" is a vector space (real and of finite dimension, for the purists) with a specific operation, the \"\\NewTerm{scalar product}\\index{scalar product}\", also named the \"\\NewTerm{dot product}\\index{dot product}\", which we denote (notation specific to this book) in the special case of vectors:\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} We find in some books (for information) the notation $\\left( \\vec{x}|\\vec{y}\\right)$ or $\\langle \\vec{x} | \\vec{y} \\rangle$ in the generalization of this definition, as we shall see a little further below. According to the standard ISO 80000-2:2009 we should write the dot product as $a\\cdot b$.\\\\\n\t\n\t\\textbf{R2.} The scalar product has a huge importance in the whole field of mathematics and physics; and we will see it again in all the following chapters of this book. It is therefore necessary to carefully understand what follows.\\\\\n\t\n\t\\textbf{R3.} The scalar product may be viewed as a projection of the length of a vector along the length of another one, as we will see later.\n\t\\end{tcolorbox}\t\n\tThis scalar product has the following properties (most of which stem from the definition itself) in a Euclidean space:\n\t\\begin{enumerate}\n\t\t\\item[P1.] Commutativity: $\\vec{x}\\circ\\vec{y}=\\vec{y}\\circ\\vec{x}$\n\t\t\\item[P2.] Associativity: $\\alpha(\\vec{x}\\circ\\vec{y})=(\\alpha\\vec{x})\\circ\\vec{y}=\\vec{x}\\circ(\\alpha\\vec{y}) $\n\t\t\\item[P3.] Distributivity: $\\vec{x}\\circ(\\vec{y}+\\vec{z})=\\vec{x}\\circ\\vec{y}+\\vec{x}\\circ\\vec{z}$\n\t\t\\item[P4.] Non-degenerate: if $\\vec{x}\\circ\\vec{y}=0$ for all $\\vec{x}\\neq\\vec{0}$, then $\\vec{y}=\\vec{0}$\n\t\t\\item[P5.] Squared scalar: $||\\vec{x}||^2=\\vec{x}\\circ\\vec{x}$ and $\\vec{x}\\circ\\vec{x}>0$ if $\\vec{x}\\neq \\vec{0}$\n\t\t\\item[P6.]
Bi-linearity: $(\\alpha\\vec{x}+\\beta\\vec{y})\\circ\\vec{z}=\\alpha(\\vec{x}\\circ\\vec{z})+\\beta(\\vec{y}\\circ\\vec{z})$\n\t\\end{enumerate}\n\tOnly the latter property perhaps requires a proof (and one of the results of the proof will be useful to us later to prove another very important property of the scalar product):\n\t\\begin{dem}\n\tGiven:\n\t\n\twhich is the \"\\NewTerm{orthogonal projection vector}\\index{orthogonal projection vector}\" (the $x$ in the index of $\\text{proj}$ meaning \"onto the vector $\\vec{x}$\") of the vector $\\vec{y}$ onto the unit vector in the direction of $\\vec{x}$.\n\t\n\tUsing the scalar product, the vector $\\text{proj}_x\\vec{y}$ can be expressed otherwise; we just need to take the relation that we have seen above:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/dot_product.jpg}\n\t\t\\caption{Geometrical representation of the dot product (projection)}\n\t\\end{figure}\n\t\n\tand introduce it into $\\text{proj}_x\\vec{y}$ to obtain:\n\t\n\tThe norm of $\\text{proj}_x\\vec{y}$ is written:\n\t\n\tIf $\\vec{x}$ has unit norm, the relations of the previous projections are simplified and obviously become:\n\t\n\tBy elementary geometric considerations (distributivity of the scalar product), it is easy to realize that:\n\t\n\tIf we now return to the proof of the bi-linearity property:\n\t\n\tWe have at first:\n\t\n\tand, from the definition of the property of the orthogonal projection, it comes immediately by a one-to-one correspondence:\n\t\n\thence the property $P6$, which follows by multiplying both members of the equality by $\\vec{z}\\circ\\vec{z}$ and then simplifying by $\\vec{z}$.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\\begin{enumerate}\n\t\t\\item[D1.] A vector space $E$ is said to be a \"\\NewTerm{proper Euclidean vector space}\\index{proper Euclidean vector space}\" if $\\forall \\vec{x} \\in E \\qquad ||\\vec{x}||>0$.\n\t\t\n\t\t\\item[D2.] We say that the vectors $\\vec{x}$ and $\\vec{y}$ are \"\\NewTerm{orthogonal vectors}\\index{orthogonal vectors}\" if they are non-null and their scalar product is equal to zero (their angle is equal to $\\pi/2$).\n\t\t\n\t\t\\item[D3.] A basis of vectors $(\\vec{e}_1,\\vec{e}_2,...,\\vec{e_n})$ is said to be an \"\\NewTerm{orthonormal basis}\\index{orthonormal basis}\" if all the vectors $\\vec{e}_1,\\vec{e}_2,...,\\vec{e_n}$ are pairwise orthogonal and their norm is equal to unity (thus constituting a free family).\n\t\\end{enumerate}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe will see in the section Tensor Calculus (we could have done it here too, but we don't use it for a practical case in this section, therefore...) how to build an orthogonal basis from a set of independent vectors. This is what the reader will find under the name  \"(Gram-)Schmidt orthogonalization method\".\n\t\\end{tcolorbox}\t\n\tBy a simple geometric argument, we see that every vector is the sum of its orthogonal projections on the vectors of an orthonormal basis; that is, if $(\\vec{e}_1,\\vec{e}_2,\\vec{e}_3)=(\\vec{u},\\vec{v},\\vec{w})$ is an orthonormal basis in $\\mathbb{R}^3$, for example:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/orthonormal_basis_projection.jpg}\n\t\t\\caption{Orthonormal basis projection example}\n\t\\end{figure}
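\n\tThe projection formula at work in this proof is easy to experiment with numerically (a sketch in Python with NumPy, assumptions of this illustration):\n\t\\begin{verbatim}\nimport numpy as np\n\n# Orthogonal projection of y onto x: proj_x(y) = (x . y / x . x) x\nx = np.array([3.0, 0.0, 0.0])\ny = np.array([2.0, 2.0, 1.0])\nproj = np.dot(x, y) / np.dot(x, x) * x\nprint(proj)                    # [2. 0. 0.]\nprint(np.dot(y - proj, x))     # 0.0: the residual is orthogonal to x\n\\end{verbatim}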
	Indeed, consider the components $(x_1,x_2,x_3)$ of a vector $\vec{x}$ in our orthonormal basis:
	
	since $\vec{e}_1\circ\vec{e}_1=1$ and $\vec{e}_1\circ\vec{e}_2=0$. Therefore we immediately get:
	
	hence the decomposition.
	
	Given the respective components $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ of the vectors $\vec{x}$ and $\vec{y}$ in a canonical orthonormal basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$, we now know that we can write the scalar product in the form:
	
	by the property P6 of the scalar product:
	
	using the properties P1 and P6 again:
	
	which finally gives us the very famous and important decomposition:
	
	This is one of the most important relations in the field of vector calculus, which we name the "\NewTerm{canonical scalar product}\index{canonical scalar product}" or "\NewTerm{canonical dot product}\index{canonical dot product}".
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The angle $\theta$ of the dot product is sometimes denoted by:
	
	\end{tcolorbox}
	Now let us prove, with a simple two-dimensional case, a property that physicists like a lot for characterizing an orthogonal linear application (\SeeChapter{see section Linear Algebra}): that the dot product is invariant under any orthogonal transformation (hence abusively said to be "invariant under basis change...").

	For this let us consider a vector $\vec{x}=(x_1,x_2)$ and the 2D rotation matrix (\SeeChapter{see section Numbers}):
	
	Now let us calculate:
	
	Now let us consider two vectors $\vec{a}$ and $\vec{b}$; we have proved just above that their dot product is given by:
	
	And after the chosen orthogonal transformation we get:
	
	So the dot product is indeed invariant under this rotation, which is a special 2D case of orthogonal transformation, and in fact under any other orthogonal transformation.
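This invariance is easy to illustrate numerically. Here is a small sketch of ours (not part of the original development) using Python/NumPy, with an arbitrary rotation angle:

\begin{verbatim}
import numpy as np

theta = 0.7                                        # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # 2D rotation matrix

a = np.array([1.0, 2.0])
b = np.array([-3.0, 0.5])

# The dot product is unchanged when both vectors are rotated:
assert np.isclose(np.dot(a, b), np.dot(R @ a, R @ b))
\end{verbatim}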
	\pagebreak
	\paragraph{Cauchy–Schwarz inequality}\mbox{}\\\\
	In mathematics, the Cauchy–Schwarz inequality is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability, statistics and other areas (just read this book entirely to have an idea...). It is considered to be one of the most important inequalities in all of mathematics!!!
	
	The relation
	
	\[(\vec{x}+\vec{y})\circ(\vec{x}+\vec{y})=\vec{x}\circ\vec{x}+2\,\vec{x}\circ\vec{y}+\vec{y}\circ\vec{y}\]
	
	can also be trivially written as follows if we use the concept of the norm and the definition of the scalar product:
	
	\[||\vec{x}+\vec{y}||^2=||\vec{x}||^2+2\,\vec{x}\circ\vec{y}+||\vec{y}||^2\]
	
	It is interesting to notice that if $\vec{x}$ and $\vec{y}$ are orthogonal vectors, we fall back on the result of a famous theorem: the Pythagorean theorem!

	Indeed, if the two vectors are orthogonal we have:
	
	\[\vec{x}\circ\vec{y}=0\]
	
	This gives us:
	
	\[||\vec{x}+\vec{y}||^2=||\vec{x}||^2+||\vec{y}||^2\]
	
	This relation is very important in physics and mathematics. It must be remembered!
	
	\begin{theorem}
	We name "\NewTerm{Cauchy-Schwarz inequality}\index{Cauchy-Schwarz inequality}" the inequality, valid for any choice of the vectors $\vec{x}$ and $\vec{y}$:
	
	\[(\vec{x}\circ\vec{y})^2\leq(\vec{x}\circ\vec{x})(\vec{y}\circ\vec{y})\]
	
	which can also be written as:
	
	\[|\vec{x}\circ\vec{y}|\leq||\vec{x}||\cdot||\vec{y}||\]
	
	\end{theorem}
	First we will consider as obvious that equality only occurs when the two vectors are collinear.	
	\begin{dem}
	We put ourselves in the case where $\vec{x},\vec{y}\neq\vec{0}$. Then for any $\lambda\in\mathbb{R}$ we obviously have, according to the properties of the scalar product:
	
	\[0\leq||\lambda\vec{x}+\vec{y}||^2=\lambda^2(\vec{x}\circ\vec{x})+2\lambda(\vec{x}\circ\vec{y})+\vec{y}\circ\vec{y}\]
	
	So this is a simple equation of the second degree whose variable is $\lambda$. Remembering what we saw in our study of polynomials of the second degree (see section Calculus), the previous relation (being always greater than or equal to zero) is satisfied if the discriminant $b^2-4ac$ is negative or zero. In other words, if:
	
	\[4(\vec{x}\circ\vec{y})^2-4(\vec{x}\circ\vec{x})(\vec{y}\circ\vec{y})\leq 0\]
	
	Thus after simplification:
	
	\[(\vec{x}\circ\vec{y})^2\leq(\vec{x}\circ\vec{x})(\vec{y}\circ\vec{y})\]
	
	\begin{flushright}
		$\square$  Q.E.D.
	\end{flushright}
	\end{dem}
	When $E$ is $\mathbb{R}^n$, the Cauchy-Schwarz inequality is written with the vector components:
	
	\[\left(\sum_{i=1}^{n}a_ib_i\right)^2\leq\left(\sum_{i=1}^{n}a_i^2\right)\left(\sum_{i=1}^{n}b_i^2\right)\]
	
	In the particular case where $\forall i,b_i=1$ it becomes:
	
	\[\left(\sum_{i=1}^{n}a_i\right)^2\leq n\sum_{i=1}^{n}a_i^2\]
	
	or even:
	
	\[\left(\frac{1}{n}\sum_{i=1}^{n}a_i\right)^2\leq \frac{1}{n}\sum_{i=1}^{n}a_i^2\]
	
	which shows that the square of the arithmetic mean is less than or equal to the arithmetic mean of the squares. This result is important for the study of Statistics!
	
	Furthermore, using the property of the cosine and the Cauchy-Schwarz inequality, we can immediately write:
	
	\[-1\leq\frac{\vec{x}\circ\vec{y}}{||\vec{x}||\cdot||\vec{y}||}\leq 1\]
	
	a relation that we will see again in the context of the study of Statistics (\SeeChapter{see section Statistics}).
	
	\paragraph{Triangular Inequalities}\mbox{}\\\\
	By bounding $2\vec{x}\circ\vec{y}$ above by $2||\vec{x}||\cdot||\vec{y}||$ (using the Cauchy-Schwarz inequality!) in the relation already established previously:
	
	\[||\vec{x}+\vec{y}||^2=||\vec{x}||^2+2\,\vec{x}\circ\vec{y}+||\vec{y}||^2\]
	
	we get:
	
	\[||\vec{x}+\vec{y}||^2\leq\left(||\vec{x}||+||\vec{y}||\right)^2\]
	
	which immediately gives us, by taking the square root, the "\NewTerm{triangular inequality}\index{triangular inequality}" that is very useful for the study of Sequences and Series and also in Topology:
	
	\[||\vec{x}+\vec{y}||\leq||\vec{x}||+||\vec{y}||\]
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The generalization of this inequality relatively to the choice of the norm (that is to say: the way we define a distance), as we will see in the section Topology, gives what we name the "\NewTerm{Minkowski inequality}\index{Minkowski inequality}".
	\end{tcolorbox}	
	By applying the triangular inequality one time to the vectors $\vec{x}$ and $(\vec{y}-\vec{x})$ and another time to the vectors $\vec{y}$ and $(\vec{x}-\vec{y})$, we get the variant:
	
	\[\big|\,||\vec{x}||-||\vec{y}||\,\big|\leq||\vec{x}-\vec{y}||\]
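All three inequalities established above (Cauchy-Schwarz, the triangular inequality and its variant) can be checked numerically on random vectors. A small illustrative sketch of ours in Python/NumPy:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 5))          # two random vectors of R^5
nx, ny = np.linalg.norm(x), np.linalg.norm(y)

assert abs(np.dot(x, y)) <= nx * ny              # Cauchy-Schwarz
assert np.linalg.norm(x + y) <= nx + ny          # triangular inequality
assert abs(nx - ny) <= np.linalg.norm(x - y)     # the variant above
\end{verbatim}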
	\paragraph{General Scalar/Dot Product}\mbox{}\\\\
	Let us now see another, a little more general, formal and abstract way to define the dot product, while trying to stay as simple as possible (caution! in the general case the notation of the scalar product changes!).
	
	First the reader must know that, as we will see in the section of Functional Analysis, all the concepts studied until now in this section can also be applied to a special category of functions! Yeeesss!!! Adding and subtracting functions like vectors is obvious, but you must know that some more or less complicated functions are collinear or orthogonal (think of affine functions!), and furthermore they are not limited to $\mathbb{R}$ but can easily be extended to $\mathbb{C}$, and hence so can the departure set of the scalar product. 
	
	To avoid making things too complicated, we will focus here only on a gentle generalization of the dot product for vectors (we will come back to functions in the section of Functional Analysis later).
	
	\textbf{Definition (\#\mydef):} Let $E$ be a real vector space (once again, we focus here only on simple vectors for the moment). A "\NewTerm{positive symmetric bilinear form}\index{positive symmetric bilinear form}" on $E$, also named "\NewTerm{inner product}\index{inner product}", is an application:
	
	\begin{enumerate}
		\item[P1.] Positivity: 
		
	  	
	  	\item[P2.] Nullity (defined): 
	  	 
	  	
	  	\item[P3.] Symmetry (defined): 
	  	
	  	
	  	\item[P4.] The bilinearity (bilinear form) with, in order, the "\NewTerm{linearity on the left}\index{linearity on the left}" and the "\NewTerm{linearity on the right}\index{linearity on the right}":
	  	
		
		\item[P...] And so on... we have the same six properties as for the scalar product, as both are the same if we focus only on vectors in $\mathbb{R}^n$.
	\end{enumerate}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Again, these properties are mainly imposed by our intuitive approach of the Euclidean space and its geometric interpretation.
	\end{tcolorbox}	
	
	\textbf{Definition (\#\mydef):} A space $E$ provided with a scalar product is named in general (with the departure set in $\mathbb{C}$) a "\NewTerm{pre-Hilbert space}\index{pre-Hilbert space}" or "\NewTerm{inner product space}\index{inner product space}". If $E$ is of finite dimension, then we speak of a "Euclidean space".
	
	We will see in our study of Topology (see section of the same name) that the properties of the scalar product are the building blocks used to set a norm, and therefore a distance, in a metric space. This distance will be given according to what we will see in the section Topology:
	
	
	\textbf{Definition (\#\mydef):} We say that a space $E$ having a dot product (inner product) $ \langle \cdot | \cdot \rangle$ is a "\NewTerm{Hilbert space}\index{Hilbert space}" if this space is complete for the metric defined above.
	
	In other words, having a metric space provided with a distance generated by a scalar product is one thing. Then having a measurable distance is another one!!! A Hilbert space thus has distances measurable in the topological sense, because the set we are working on is continuous and any point can be approached indefinitely (imagine having a ruler on which you cannot reach the points that define the dimensions of your object... it would be embarrassing...). So without a complete space a lot of theorems of functional analysis (which is strongly linked to vector calculus) could not be used in the study of vector spaces, and this would be very embarrassing in quantum wave physics for example...
	
	Formally, remember that a metric space is complete if all Cauchy sequences (\SeeChapter{see section Sequences and Series}) of this space converge (\SeeChapter{see section Fractals}) in that metric space (\SeeChapter{see section Topology}).
	
	\subsubsection{Cross Product}
	The cross product of two vectors is an operation specific to dimension $3$. To introduce it, it is first necessary to orient the space intended to receive it. The orientation is defined by the concept of "determinant", therefore we will begin with a brief introduction to the study of this concept.
	This study will be repeated later in more detail in the analysis of linear systems in the section of Linear Algebra.
	
	\textbf{Definition (\#\mydef):} We basically name "\NewTerm{determinant}\index{determinant}" of two column vectors of $\mathbb{R}^2$ (for the general form of the determinant see the section of Linear Algebra):
	
	and we denote it:
	
	the number:
	
	We name determinant of three column vectors of $\mathbb{R}^3$ (once again, see the section Linear Algebra for a generalization):
	
	and we denote it:
	
	the number:
	
	Thus, the function that associates a number to each pair of column vectors of $\mathbb{R}^2$ (or respectively to each triplet of column vectors of $\mathbb{R}^3$) is named "determinant of order $2$" (respectively "determinant of order $3$").
	
	As we will prove in the section of Linear Algebra, the determinant has the property of being multiplied by $-1$ if one of the column vectors is replaced by its opposite or if two of its column vectors are exchanged. In addition, the determinant is nonzero if and only if its column vectors are linearly independent (the proof - which has a great importance in Applied Mathematics - is a few lines further below, and a generalization can be found in the section of Linear Algebra).
	
	\textbf{Definition (\#\mydef):} Given $x_1,x_2,x_3$ and $y_1,y_2,y_3$ the respective components of the vectors $\vec{x}$ and  $\vec{y}$ in the orthonormal basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$. We name "\NewTerm{cross-product}\index{vector cross-product}" of $\vec{x}$ and $\vec{y}$, and we denote it in most books by:
	
	and in a minority of books:
	
	the vector:
	
	or as components:
	
	The matrix form above will be very useful to us in the section Mechanics for the construction of the Inertial Matrix.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	\textbf{R1.} The first notation is the international notation due to Gibbs (which we will use throughout this book), the second is the French notation due to Burali-Forti (quite annoying because it can be confused with the notation of the AND operator in Proof Theory or Logical Systems).\\
	
	\textbf{R2.} It is usually quite annoying to remember by heart the relations that form the cross product. But fortunately there are at least three good mnemonic techniques:
	\begin{enumerate}

		\item The first and probably the fastest method is to remember by heart only one of the expressions of the components of the cross product, and then, by decrementing the indices (starting again from $3$ when we reach $0$), to get all the other components. But we must still find a simple way to remember one of the components by heart... A good way is the following mathematical property of two collinear vectors, giving an easy way to find back the third component (the one along the $z$-axis):
		
		Given two collinear vectors in the same plane, then:
		
		We fall back on the expression of the third component of the cross product of two vectors.\\
		
		Or, if you want to remember only the first component, given in letters by $z_x = x_y y_z - x_z y_y$, the indices give "xyzzy" (like a name of a person...). The second and third equations can be obtained from the first by simply cyclically rotating the subscripts, $x\rightarrow y \rightarrow  z \rightarrow x$. 
		\item The second method, which we will see in detail during our study of the section of Tensor Calculus, is to use the Levi-Civita anti-symmetry symbol. This method is certainly the most aesthetic of all, but not necessarily the fastest to develop nor the easiest to remember. We give here just the expression without explanations at the moment, as we will study this later (but it is also useful to get the general expression of the determinant):
		
		
		\item The last method is quite simple and trivial, but it implicitly uses the first method, as you must remember how to calculate at least a $2\times 2$ determinant. The idea is the following: the $i$-th component of $\vec{x}\times\vec{y}$ is the determinant of the two column vectors from which we have removed the $i$-th term, the second determinant being, however, taken with a "$-$" sign, such that:
			  
	\end{enumerate}
	\end{tcolorbox}
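Whichever mnemonic is used, the result is easy to cross-check numerically. The following small sketch of ours writes out the components explicitly and compares them with NumPy's built-in cross product:

\begin{verbatim}
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# The three components written out explicitly (what the mnemonics encode):
manual = np.array([x[1]*y[2] - x[2]*y[1],
                   x[2]*y[0] - x[0]*y[2],
                   x[0]*y[1] - x[1]*y[0]])

assert np.allclose(manual, np.cross(x, y))           # matches the built-in
assert np.allclose(np.cross(x, y), -np.cross(y, x))  # antisymmetry
\end{verbatim}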
	\pagebreak
	It is important to remember, even if it is relatively simple, what the different cross products of orthogonal basis vectors are (and especially for the canonical basis), first:
	
	and also:
	
	The vector product also has the following properties, which we will prove just now:
	  \begin{enumerate}
	  	\item[P1.] Antisymmetry:
	  		
	  		
	  	\item[P2.] Linearity:
	  		
	  		
	  	\item[P3.] If and only if $\vec{x}$ and $\vec{y}$ are linearly independent (very important!):
	  	
	  	
	  	\item[P4.] Non-associativity:
	  	
	  	
	  	\item[P5.] Distributivity over the sum:
	  	
	  \end{enumerate}
	  The first two properties derive directly from the definition, and the property P4 is easily verified by developing the components and comparing the results.
	  
	 Now let us prove the third property, which is very important in linear algebra (next section), and the fifth one (because requested by a reader).
	\begin{theorem}
	If and only if $\vec{x}$ and $\vec{y}$ are linearly independent (very important!):
	
	\end{theorem}
	\begin{dem}
	Given two vectors $\vec{x}(x_1,x_2,x_3)$ and $\vec{y}(y_1,y_2,y_3)$. If the two vectors are linearly dependent, then there exists an $\alpha \in \mathbb{R}$ such that we can write:
	
	If we develop the cross product of two vectors that are dependent by a given factor, we get:
	
	It goes without saying that the above result is equal to the zero vector $\vec{0}$ if the two vectors are indeed linearly dependent.
	\begin{flushright}
		$\square$  Q.E.D.
	\end{flushright}
	\end{dem}
	
	\begin{theorem}
	The cross product is distributive over the sum.
	\end{theorem}
	\begin{dem}
	
	\begin{flushright}
		$\square$  Q.E.D.
	\end{flushright}
	\end{dem}
	
	If we now assume that both vectors $\vec{x}$ and $\vec{y}$ are linearly independent and non-zero, we must prove that the cross product has two properties:
	\begin{theorem}
	The cross product results in a vector orthogonal (perpendicular) to $\vec{x}$ and $\vec{y}$ if they are not null.
	\end{theorem}
	\begin{dem}
	To prove this we simply write the development using the dot product:
	
	This equation shows that the vector $\vec{x}$ is perpendicular to the vector resulting from the cross product of $\vec{x}$ and $\vec{y}$.
	\begin{flushright}
		$\square$  Q.E.D.
	\end{flushright}
	\end{dem}
	\begin{theorem}
	The cross product has for norm (module):
	
	where $\theta$ is the angle between $\vec{x}$ and $\vec{y}$.
	\end{theorem}
	\begin{dem}
		To prove this we simply write the development of the norm of the cross product:
		
		Finally:
		
	\begin{flushright}
		$\square$  Q.E.D.
	\end{flushright}
	\end{dem}
	We then notice that, in the case where $E$ is the Euclidean vector space, the norm of the vector product is the area (surface) of the parallelogram constructed on representatives of the vectors $\vec{x}$ and $\vec{y}$ of common origin:
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/cross_product_parallelogram.jpg}
		\caption{Geometrical representation of cross product}
	\end{figure}
	If $\vec{x}$ and $\vec{y}$ are linearly independent, the triplet $(\vec{x},\vec{y},\vec{x}\times \vec{y})$ and also the triplet $(\vec{y},\vec{x},\vec{y}\times \vec{x})$ are direct.
	
	Indeed, $(z_1,z_2,z_3)$ being the components of $\vec{x}\times \vec{y}$ (in the basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$), the determinant of passage from $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$ to $(\vec{x}\times \vec{y},\vec{x},\vec{y})$ (for example) will be written:
	 
	 This determinant is positive, as at least one of the $z_i$ is not zero, according to the third property of linear independence of the cross product.
	 
	 Here are a few very important properties of practical utility of the cross product (particularly in physics), which are trivial to check when the developments with explicit components are done (we can write them out on request if needed!):
	\begin{enumerate}
		\item[P1.] $\vec{x}\times(\vec{y}\times\vec{z})=(\vec{x}\circ\vec{z})\vec{y}-(\vec{x}\circ\vec{y})\vec{z}$
		\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
		The latter relation is sometimes named the "\NewTerm{Grassmann rule}\index{Grassmann rule}", or more commonly the "\NewTerm{dual vector product}\index{dual vector product}", and it is important to note that without the parentheses the result is not unique.
		\end{tcolorbox}	
		
		\item[P2.] 
$(\vec{x}\times\vec{y})\circ (\vec{z}\times\vec{v})=(\vec{x}\circ\vec{z})(\vec{y}\circ\vec{v})-(\vec{x}\circ\vec{v})(\vec{y}\circ\vec{z})$
		
		\item[P3.] $(\vec{x}\times\vec{y})\circ\vec{z}=-(\vec{x}\times\vec{z})\circ \vec{y}$
		
		\item[P4.] $(\vec{x}\times\vec{y})\circ\vec{z}=\vec{x}\circ(\vec{y}\times\vec{z})$
		
		\item[P5.] $||\vec{x}\times\vec{y}||^2=(\vec{x}\circ\vec{x})(\vec{y}\circ\vec{y})-(\vec{x}\circ\vec{y})^2$
	\end{enumerate}
	The last identity is related to the Pythagorean theorem (\SeeChapter{see section Euclidean Geometry}). Indeed, we see it better by rewriting:
	
	This can be seen from the definitions of the cross product and dot product, as
	
	
	\pagebreak
	\subsubsection{Mixed Product (triple product)}
	We can extend the definition of the vector product to another type of mathematical tool that we name the "\NewTerm{mixed product}".
	
	\textbf{Definition (\#\mydef):} We name "\NewTerm{mixed product}\index{mixed product}" of the vectors $\vec{x},\vec{y},\vec{z}$ the double product:
	 
	 often condensed under the following notation:
	  
	 From what we saw in the definitions of the dot and cross product, the mixed product can also be written:
	 
	 We note that in the case where $E$ is the Euclidean vector space $\mathbb{R}^3$, the mixed product represents the oriented volume of the parallelepiped built on the representatives $\vec{x},\vec{y},\vec{z}$ of common origin (its absolute value being the volume itself).
	 
	It is quite trivial that the mixed product is an extension of the cross product to the three-dimensional (volume) case. Indeed, in the expression of the mixed product, the vector product is the base surface of the parallelepiped, and the scalar product projects the vectors on the vector resulting from the cross product, which gives the height $h$ of the parallelepiped.
	
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/mixed_product.jpg}
		\caption{Mixed product illustration}		
	\end{figure}
			
	If we develop:
	
	the triple product can also be understood as the determinant of a $3\times 3$ matrix (and thus also of its transpose) having the three vectors either as its rows or as its columns (\SeeChapter{see section Linear Algebra}):
	
	 By the commutative properties of the scalar product, we have:
	 
	and the reader will check without any trouble (we can write the details on request) that by developing components we have:
	
	The triple product also has the following properties, which the reader should be able to check easily by just developing the components of each expression, except perhaps the third one (we can detail them, as always, on request if needed):
	\begin{enumerate}
		\item[P1.]  $[\vec{x},\vec{y},\vec{z}] =
	   [\vec{z},\vec{x},\vec{y}] =
	   [\vec{y},\vec{z},\vec{x}] =
	  -[\vec{y},\vec{x},\vec{z}] =
	  -[\vec{z},\vec{y},\vec{x}] =
	  -[\vec{x},\vec{z},\vec{y}]$
	  
	  \item[P2.] $[\alpha\vec{x}+\beta\vec{y},\vec{z},\vec{v}] =
	  \alpha[\vec{x},\vec{z},\vec{v}] +
	    \beta[\vec{y},\vec{z},\vec{v}]$
	    
	  \item[P3.] $[\vec{x},\vec{y},\vec{z}] \ne 0$ if and only if $\vec{x},\vec{y},\vec{z}$ are independent.
	  
	  \item[P4.] 
$(\vec{x}\times\vec{y})\times(\vec{z}\times\vec{v}) =
	  [\vec{x},\vec{y},\vec{v}] \cdot \vec{z} - [\vec{x},\vec{y},\vec{z}] \cdot \vec{v}$ 
	\end{enumerate}
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	We will come back to the triple product during our study of tensor calculus, as it gives the opportunity to get a very interesting result concerning a future application in General Relativity.	
	\end{tcolorbox}
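The identification of the triple product with a $3\times 3$ determinant is easy to verify numerically. A small sketch of ours with Python/NumPy:

\begin{verbatim}
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0])
z = np.array([5.0, 6.0, 0.0])

triple = np.dot(np.cross(x, y), z)           # (x x y) . z
det    = np.linalg.det(np.array([x, y, z]))  # determinant, vectors as rows

assert np.isclose(triple, det)               # the two coincide
# rows or columns give the same value (the matrix and its transpose):
assert np.isclose(det, np.linalg.det(np.array([x, y, z]).T))
\end{verbatim}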
	\subsection{Vectorial Functional Space}
	Given $\mathcal{C}_{[a,b]}^k$ the set of real functions that can be differentiated $k$ times (\SeeChapter{see section Differential and Integral Calculus}) in the closed bounded interval $[a,b]$. We will designate the elements of this set by the letters $\vec{f},\vec{g},...$.
	
	The value of $\vec{f}$ at the point $t$ will obviously be denoted by $\vec{f}(t)$. To say that $\vec{f}=\vec{g}$ is therefore equivalent to saying that:
	 
	 In a condensed way, some practitioners denote this $\vec{f}(t) \equiv \vec{g}(t)$, the symbol $\equiv$ indicating obviously that the two members are equal for any $t$ in the bounded interval $[a,b]$.
	 
	 Consider the two following operations:
	\begin{itemize}
		\item $\vec{f}(t)+\vec{g}(t)$ defined by the relation $(\vec{f}+\vec{g})(t)\equiv \vec{f}(t)+\vec{g}(t)$
		\item $\alpha\vec{f}$ defined by the relation $(\alpha \vec{f})(t)=\alpha\vec{f}(t)$
	\end{itemize}
	These two operations satisfy all the conditions on the vectors of a vector space, as we have already defined at the beginning of this section (associativity, commutativity, null vector, opposed vector, distributivity, neutral element), and therefore give us the possibility of endowing $\mathcal{C}_{[a,b]}^k$ with a vector space structure! The null vector of this space is obviously the null function (equal to zero everywhere) and the opposite of $\vec{f}$ is $-\vec{f}$.

	It is interesting to notice that $\mathcal{C}_{[a,b]}^k$ as a vector space is a generalization of $\mathbb{R}^n$ to the continuous case. Indeed, we can consider any vector $\vec{v}=(a_i)$ of $\mathbb{R}^n$ in the form of a real function defined on the set $\left\lbrace 1,2,...,n \right\rbrace$: the value of this function at the point $i$ is simply $a_i$.
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	The polynomials of degree $n$ in one unknown form an example of a functional vector space of dimension $n + 1$, such that to each coefficient of the polynomial corresponds a vector component:
		
	\end{tcolorbox}
	The preferred application field of the abstract theory of the dot product (inner product) is formed by the functional vector spaces. We therefore name "\NewTerm{canonical scalar product}\index{canonical dot product}" in $\mathcal{C}_{[a,b]}(\mathbb{R}^2)$ the operation defined by the relation:
	
	This operation indeed defines a scalar product, the properties of the latter being verified (on reader request we can add the proof if necessary), and furthermore, the integral:
	
	is positive if the continuous function $\vec{f}$ is not identically zero.
	
	Technically, the latter relation is written, when in $\mathbb{R}^2$:
	
	We will give more precision about this norm and its associated scalar product, with examples, in the section of Functional Analysis.
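To make this concrete, here is a small numerical sketch of ours, assuming the canonical scalar product of functions $\langle \vec{f}|\vec{g}\rangle=\int_a^b f(t)g(t)\,\mathrm{d}t$ and approximating the integral with the trapezoidal rule:

\begin{verbatim}
import numpy as np

# <f|g> = integral of f(t) g(t) over [a, b], by the trapezoidal rule
def inner(f, g, a=0.0, b=2 * np.pi, n=10_001):
    t = np.linspace(a, b, n)
    return np.trapz(f(t) * g(t), t)

# sin and cos are orthogonal on [0, 2*pi] ...
assert np.isclose(inner(np.sin, np.cos), 0.0, atol=1e-9)
# ... while <f|f> is strictly positive for a function not identically zero
assert inner(np.sin, np.sin) > 0
\end{verbatim}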
	\pagebreak
	\subsection{Hermitian Vector Space}
	The purpose of what will follow is, as always in this book, not to give a detailed study of vector spaces over $\mathbb{C}$, but just to give the minimal knowledge and vocabulary necessary for reading some theoretical models in physics, especially those presented in this book in the section of Wave Quantum Physics.
	
	When the scalars that appear in the definition of vector spaces are complex numbers (in $\mathbb{C}$, as seen in the section Numbers), and not only real numbers, then we obviously speak about "\NewTerm{complex vectorial spaces}\index{complex vectorial spaces}".
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	Rigorously, in common communication, people should always specify whether we speak of a real vectorial space or a complex vectorial space...
	\end{tcolorbox}
	Let us give some examples of famous complex vectorial spaces (as many people think they are useless):

	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	E1. The vector space $\mathbb{C}^n$ of column-vectors with $n$ components ($\mathbb{C}^1$ being obviously identified with $\mathbb{C}$). We will meet, among others, such a vector space in the section of Relativistic Quantum Physics.\\
	
	E2. The vectorial space of univariate polynomials with coefficients in $\mathbb{C}$. We will meet such spaces in the section of Wave Quantum Physics or even Quantum Chemistry.\\
	
	E3. The vectorial space of complex functions of one real or complex variable, continuous or not. We will meet such vector spaces frequently in the section of Wave Mechanics and especially in the section of Electrodynamics.
	\end{tcolorbox}
	The purpose here is to adapt what we have seen so far to complex vectorial spaces. The following example shows us that we cannot simply transpose the previous definitions as they are. Indeed, let us consider the vector space $\mathbb{C}^n$. As for $\mathbb{R}^n$, we might be tempted to define a dot product on $\mathbb{C}^n$ by:
	
	with $x_i,y_i\in \mathbb{C}$.
	Sadly, we see that this definition is not satisfactory, because we could then have:
	
	and this quantity is in general not a real number in a complex vector space; it violates the property of positivity of the dot product and therefore prevents us from introducing and using the concept of distance. This is obviously a big problem in our actual perception of the world.
	
	We could therefore no longer define a norm in $\mathbb{C}^n$ by writing:
	
	For $\langle \vec{x}|\vec{x}\rangle$ to be a positive real number, we see that it would be better to define the scalar product like this:
	
	In this case we therefore have:
	
	which is indeed a positive real number. From there, we can once again define a norm for the complex vector space $\mathbb{C}^n$ by putting:
	
	We will now show how to define an inner product on a complex vector space in the general case.
	
	\subsubsection{Hermitian Inner Product}
	\textbf{Definition (\#\mydef):} Let $\mathcal{H}$ be a complex vector space (!). We name "\NewTerm{scalar product}\index{scalar product}", or more accurately "\NewTerm{Hermitian inner product}\index{Hermitian inner product}", on $\mathcal{H}$ (that is to say: a dot product in a complex space...), an application:
	
	that satisfies (there are more properties, but we will focus only on what we need for practical applications in this book, especially quantum physics):
	\begin{enumerate}
		\item[P1.] Positivity:
		
	  	
		\item[P2.] Nullity:
		 
	  	
		\item[P3.] Hermitian symmetry:
		
	  	
	  	\item[P4.] Bilinearity (bilinear form) changes a little bit too... so that we now speak of "\NewTerm{sesquilinearity}\index{sesquilinearity}". We speak then, in order, of left anti-linearity and of right linearity, such that:
	  	
	\end{enumerate}
	\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
	\textbf{R1.} It seems that some mathematicians put the anti-linearity on the right. It's just a matter of convention that does not matter, and it exists because of a lack of international norms in mathematics.\\
	
	\textbf{R2.} The reader may easily notice that if the elements of the above definitions are all in the set $\mathbb{R}$, then the sesquilinearity reduces to the bilinearity and the Hermitian character to a simple symmetry. So the Hermitian inner product reduces to the scalar product.\\
	
	\textbf{R3.} We want to give for now only the minimum on the vast subject that is complex vector spaces, so that the reader can read without too much trouble the beginning of the section of Wave Quantum Physics.
	\end{tcolorbox}	
	When we equip a complex vector space with such a scalar product, then just as a real vector space becomes a Euclidean vector space or pre-Hilbertian vector space, the complex vector space becomes what we name a "\NewTerm{hermitian vector space}\index{hermitian vector space}" (a term often used in the section of Wave Quantum Physics).
	
	\textbf{Definition (\#\mydef):} Again, we say that a space $\mathcal{H}$ provided with a Hermitian product $\langle \cdot | \cdot \rangle$ is a "\NewTerm{Hilbert space}\index{Hilbert space}" if this space is complete for the metric defined above.
	
	Thus, Hilbert spaces are a generalization of the spaces equipped with dot products and Hermitian dot products, i.e. of Euclidean and pre-Hilbertian spaces.
	
	\subsubsection{Types of Vector Spaces}
	To sum it all up:
	\begin{itemize}
		\item We name "pre-Hilbert space" (real or complex) any vector space, of finite dimension or not, provided with a dot (scalar) product.
		
		\item We name "Hilbert space" (real or complex) any complete pre-Hilbertian space (as a space provided with a norm).
		
		\item We name "Euclidean space" any real vector space of finite dimension with a dot (scalar) product, denoted by $\mathcal{E}^n$.
		
		\item We name "Hermitian space" any complex vector space of finite dimension with a dot (scalar) product, denoted by $\mathcal{H}^n$.
	\end{itemize}
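To close this subsection, here is a small numerical sketch of ours illustrating the Hermitian inner product, assuming the usual form $\langle \vec{x}|\vec{y}\rangle=\sum_i \bar{x}_i y_i$ on $\mathbb{C}^n$ (NumPy's \texttt{vdot} conjugates its first argument):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

herm = np.vdot                # conjugates the first argument

assert np.isclose(herm(x, y), np.conj(herm(y, x)))  # Hermitian symmetry
assert np.isclose(herm(x, x).imag, 0.0)             # <x|x> is real...
assert herm(x, x).real > 0                          # ...and positive
\end{verbatim}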
	\pagebreak
	\subsection{System of Coordinates}
	We will address here the aspect of coordinate changes of vector components, not from one basis to another (for that you need to go see the section of Linear Algebra) but from one coordinate system to another. That means that in any case we will stay in a Euclidean space. This type of transformation has strong implications in physics (and a little less in pure mathematics) when we want to simplify the study of physical systems whose equations become easier to handle in other coordinate systems.
	
	\textbf{Definition (\#\mydef):} In mathematics, a "\NewTerm{coordinate system}\index{coordinate system}" is used to match to each point of an $n$-dimensional space an $m$-tuple of scalars.
	
	Although we are in a chapter and a section of this book that is supposed to be purely maths oriented..., we will allow ourselves in what follows to make a direct connection with physics, relative to the concepts of velocity and acceleration in different coordinate systems (sorry for the "math skills only" people...). Our teaching experience has shown that this helps the readers (most of the time students) to better understand the various abstract concepts.
	
	\subsubsection{Cartesian (rectangular) Coordinate System}
	We do not want to spend too much time on this system, as it is usually well known to everyone. However, let us recall that most of the time, in physics, the Cartesian systems in which we are working are in $\mathbb{R}^2 $ (two real spatial dimensions), or $\mathbb{R}^3$ (three real spatial dimensions) or even $\mathbb{R}^4$ or $\mathbb{C}^4$ (three spatial dimensions and one of time) when we work in relativity. The number of dimensions can be higher, as for example with the Kaluza-Klein theory (five dimensions) merging General Relativity and Electromagnetism, or much more with String Theory (above 20 dimensions!!!).
	
	In $\mathbb{R}^3$ (the most common case), there are three basis vectors traditionally denoted by:
	
	Or more explicitly:
	
	
	In this system, the position of a point $P$ (identifiable by a vector $\vec{x}$ for example) is defined by the three numbers named "\NewTerm{coordinates}\index{coordinates}" (more generally "\NewTerm{components}"), denoted (typically in Tensor Calculus):
	
	and in physics denoted more conventionally by:
	
	where usually the component $(z)$ represents the height (vertical), the component $(x)$ is the width and the component $(y)$ is the length (obviously these are completely arbitrary choices).
	
	This point $P$ can be located by a vector arbitrarily designated $\vec{r}$ in the basis $\vec{e}_i$ by the relation (using Tensor notation):
	
	and if the basis is canonical (orthonormal), such that:
	
	we write:
	
	In physics, if we work with coordinates, it is always in order to be able to determine the location of an item. 
	Or, as we shall see more rigorously in the section of Analytical Mechanics, the physicist works with the following concepts (each element being often time-dependent):
	\begin{itemize}
		\item Position: $\vec{r}=\biggl(x(t),y(t),z(t),t\biggr)$
		
		\item Velocity: $\displaystyle\frac{\mathrm{d}\vec{r}}{\mathrm{d}t}=\dot{\vec{r}}=\vec{v}
	=\biggl(\dot{x}(t),\dot{y}(t),\dot{z}(t),t\biggr)$
	
		\item Acceleration: $\displaystyle\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}=\dot{\vec{v}}=\vec{a}=
	\biggl(\ddot{x}(t),\ddot{y}(t),\ddot{z}(t),t\biggr)$
	\end{itemize}
	Now let us see how the different concepts are expressed in systems such as spherical, cylindrical and polar coordinates (remember that with all of them we remain in a flat Euclidean space!!!).
	
	\pagebreak
	\subsubsection{Spherical Coordinate System}
	The choice to start with this coordinate system is not a coincidence. It has the advantage of being a generalization of the cylindrical and polar systems that we will meet thereafter, and it will help us more easily determine the expressions of position, velocity and acceleration.
	
	We traditionally represent (in Switzerland... and in accordance with the standard ISO 31-11) a spherical coordinate system as follows:
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/coordinate_system_spherical.jpg}
		\caption{Representation of the spherical coordinate system}
	\end{figure}
	We see very clearly, if we know the basic trigonometric relations and identities (see the section of the same name in the Geometry chapter), that we have the transformations:
	
	
	where the two angles $\theta, \phi$ are respectively the colatitude (the complement of the latitude) and the longitude:
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.7]{img/algebra/latitude_longitude.jpg}
		\caption{Latitude and Longitude concepts illustrated (source: OpenStax)}
	\end{figure}
	We have inversely:
	
	Now we must find the expressions that connect the vectors of the spherical basis, which we choose to denote by $\vec{e}_r,\vec{e}_\theta,\vec{e}_\phi$, with the vectors of the Cartesian basis $\vec{e}_x,\vec{e}_y,\vec{e}_z$:
	
	Let us indicate that by dividing the second basis vector $\vec{e}_\phi$ by $\sin(\theta)$, we make sure, by the properties of the norm of the vector product, that:
	
	will be well normalized to unity as expected (as we know from the start that, since we took a direct orthogonal coordinate system, the product of the norms of the basis vectors must be equal to $1$)!
	
	
	We will also use later (for the study of vector operators further below and the geodesics of the sphere in the section of Analytical Mechanics) the variation $\mathrm{d}\vec{r}$ expressed in spherical coordinates:
	
	Or more explicitly:
	
	To express the velocity and acceleration in spherical coordinates, we will also need the derivatives with respect to time:
	
	So if we now do a little bit of physics, we have:
	
	This brings us to (we will need this relation mainly in the chapter of Astrophysics):
	
	It is interesting that we get the same result through the following method, which may be less intuitive:
	
	and substituting the derivative obtained above:
	
	Concerning the acceleration we get:
	
	But we have:
	
	Therefore it comes:
	

	Thus finally:
	
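To close this subsection, here is a quick sanity check of the Cartesian-spherical transformations above: a small sketch of ours, assuming the ISO 31-11 convention used here ($\theta$ measured from the $z$-axis, $\phi$ in the $xy$-plane):

\begin{verbatim}
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    # ISO 31-11: theta = colatitude (from the z-axis), phi = longitude
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def cartesian_to_spherical(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    return r, np.arccos(z / r), np.arctan2(y, x)

p = spherical_to_cartesian(2.0, 0.6, 1.1)
assert np.allclose(cartesian_to_spherical(*p), (2.0, 0.6, 1.1))  # round trip
\end{verbatim}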
	\pagebreak
	\subsubsection{Cylindrical Coordinate System}
	The cylindrical coordinate system (very useful in the study of helical motion systems) is quite similar to spherical coordinates, as it can be seen as a slice of the sphere. 

	Given the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/coordinate_system_cylindrical.jpg}
		\caption{Representation of the cylindrical coordinate system}
	\end{figure}
	Warning!!! Unlike in the previous spherical case, the vector $\vec{r}$ is defined only in the $XY$ plane or a plane which is parallel to it!
	
	It comes easily in cylindrical coordinates, for $r>0$:
	
	and vice versa:
	 
	Now we must find the expressions that connect the vectors of the cylindrical basis, which we choose to denote by $\vec{e}_r,\vec{e}_\phi,\vec{e}_z$ (instead of $\vec{r}, \vec{\phi},\vec{z}$ as is done traditionally), with the vectors of the Cartesian basis $\vec{e}_x,\vec{e}_y,\vec{e}_z$. We have, identically to what we did for the spherical coordinates:
	
	Or more explicitly:
	
	Let us indicate that by dividing the second basis vector $\vec{e}_\phi$ by $\sin(\phi)$, we ensure, by the properties of the norm of the cross product, that:
	
	will be well normalized to unity as expected (as we know from the start that, since we took a direct orthogonal coordinate system, the product of the norms of the basis vectors must be equal to $1$)!! In the case of cylindrical coordinates the angle is anyway a right angle, so we would not be obliged to indicate this division, but we have made this choice for consistency with the previous developments...
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	It is important to notice that the cross product of two basis vectors always gives the third perpendicular basis vector (as with the Cartesian and spherical coordinates!).
	\end{tcolorbox}
	For future needs, let us determine the partial differential of each of these coordinates:
	
	We will also use later (for the study of vector operators) the variation $\mathrm{d}\vec{r}$ expressed in cylindrical coordinates:
	
	To express the speed and acceleration in cylindrical coordinates, we will also need the derivatives with respect to time:
	
	So if we now do a little bit of physics, we get (let us recall that the $z$ component is independent of the other cylindrical components):
	
	which brings us to:
	
	For the acceleration we get (exactly the same approach as for the expression of the speed):
	
	
	\subsubsection{Polar Coordinate System}
	The polar coordinate system is very similar to the cylindrical coordinates, as it can be seen as the removal of one dimension (the height) from the cylindrical system (we will often encounter this system in the sections of Classical Mechanics, Corpuscular Quantum Physics and Astronomy).

	Given the figure:
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/coordinate_system_polar.jpg}
		\caption{Representation of the polar coordinate system}
	\end{figure}
	Thus, it comes easily in polar coordinates, for $r>0$:
	
	and vice versa:
	
	Now we must find the expressions that connect the vectors of the polar basis, which we choose to denote by $\vec{e}_r,\vec{e}_\phi$ (instead of $\vec{r}, \vec{\phi}$ as is done traditionally), with the vectors of the Cartesian basis $\vec{e}_x,\vec{e}_y$. 
	We have, identically to what we did for the spherical coordinates:
	
	Or more explicitly:
	
	Once again, dividing the second basis vector $\vec{e}_\phi$ by $\sin(\phi)$, we ensure, by the properties of the norm of the vector product, that:
	
	will be well normalized to unity (as we know from the start that, since we took a direct orthogonal coordinate system, the product of the norms of the basis vectors must be equal to $1$)! In the case of polar coordinates the angle is anyway a right angle, so we would not be obliged to indicate this division, but we have made this choice for consistency with the previous developments.
	
	For future needs, let us determine the partial differential of each of these coordinates:
	
	We will also use later (for the study of vector operators) the variation $\mathrm{d}\vec{r}$ expressed in polar coordinates:
	
	To express the speed and acceleration in polar coordinates, we will also need the derivatives with respect to time:
	
	So if we now do a little bit of physics, we have:
	
	and therefore:
	
	where the first term is the radial velocity component and the second term the tangential component of the (angular) velocity. The velocity expression in polar coordinates is very important in astronomy, as it allows us to quite easily calculate the kinetic energy:
	
	For the acceleration we get:	
	
	where the first term is the radial acceleration, the second term the centripetal acceleration, the third the Coriolis acceleration and finally the fourth one the tangential acceleration.
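The decomposition of the acceleration just described can be verified with a computer algebra system. Here is a small symbolic sketch of ours in Python/SymPy, differentiating the Cartesian position twice and projecting the result onto the polar basis vectors:

\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
r, phi = sp.Function('r')(t), sp.Function('phi')(t)

# Cartesian position written from the polar coordinates
x, y = r * sp.cos(phi), r * sp.sin(phi)

# Polar basis vectors expressed in the Cartesian basis
e_r   = sp.Matrix([sp.cos(phi),  sp.sin(phi)])
e_phi = sp.Matrix([-sp.sin(phi), sp.cos(phi)])

acc = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])

a_r   = sp.simplify(acc.dot(e_r))    # radial + centripetal terms
a_phi = sp.simplify(acc.dot(e_phi))  # tangential + Coriolis terms

rd, phid = sp.diff(r, t), sp.diff(phi, t)
assert sp.simplify(a_r - (sp.diff(r, t, 2) - r * phid**2)) == 0
assert sp.simplify(a_phi - (r * sp.diff(phi, t, 2) + 2 * rd * phid)) == 0
\end{verbatim}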
	\pagebreak
	\subsection{Differential Operators}
	\textbf{Definition (\#\mydef):} To define a scalar field, vector field or tensor field in a volume $V$ is to define an application that to any point $\vec{x}$ of this volume $V$ associates respectively a scalar, a vector or a tensor.
	
	Thus, the application $f$ that to any point $\vec{x}$ of $V$ of spatial coordinates $(x, y, z)$ associates the scalar value $f(\vec{x})=f(x,y,z)$ is a scalar field in $V$.
	
	At each point of a volume traversed by a moving fluid, the vector that coincides at every moment with the speed of the particle which passes through this point at this same time defines a 3D vector field, optionally variable in time. The fields thus defined are a basic mathematical tool in physics.
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	When we plot a scalar field, the set of continuous dots of equal value is named "\NewTerm{isolines}\index{isolines}" or more commonly "\NewTerm{contours}\index{contours}".
	\end{tcolorbox}
	Vector fields are especially an important tool for describing many physical concepts, such as gravitation and electromagnetism, which affect the behavior of objects over a large region of a plane or of space. They are also useful for dealing with large-scale behavior such as atmospheric storms or deep-sea ocean currents.

	For example, the figure below shows a gravitational field exerted by two astronomical objects, such as a star and a planet or a planet and a moon. At any point in the figure, the vector associated with a point gives the net gravitational force exerted by the two objects on an object of unit mass. The vectors of largest magnitude in the figure are the vectors closest to the larger object. The larger object has greater mass, so it exerts a gravitational force of greater magnitude than the smaller object.
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.5]{img/algebra/vector_field_gravitation.jpg}
		\caption{Gravitation vector field example (source: OpenStax)}
	\end{figure}
	Another example is the figure below, which shows the velocity of a river at points on its surface. The vector associated with a given point on the river's surface gives the velocity of the water at that point. Since the vectors to the left of the figure are small in magnitude, the water is flowing slowly on that part of the surface. As the water moves from left to right, it encounters some rapids around a rock. The speed of the water increases, and a whirlpool occurs in part of the rapids.
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.5]{img/algebra/vector_field_speed.jpg}
		\caption{Velocity vector field example (source: OpenStax)}
	\end{figure}
	The gradient, the divergence and the rotational are the three main linear differential operators of the first order that will be introduced here. This means they only involve the first partial derivatives (or simply "differentials") of the fields, unlike, for example, the Laplace operator, which involves partial derivatives of order $2$.	
	
	\pagebreak
	\subsubsection{Gradients of Scalar Field}
	The gradient is an operator that applies to a scalar field and transforms it into a vector field. Intuitively, the gradient indicates the direction of the greatest variation of the scalar field, and the intensity of this variation. For example, the gradient of the altitude is directed along the maximum slope line and its norm increases with the slope.
	
	Given a three-dimensional scalar field $f(x,y,z)$, wherein $x$, $y$ and $z$ are the Cartesian coordinates of a point $M$ in space. When $M$ moves in space according to the $\mathrm{d}\vec{r}$ of components $\mathrm{d}x, \mathrm{d}y$ and $\mathrm{d}z$, the scalar field $f$ varies according to the total differential $\mathrm{d}f$:
	
	From this relation, we can define the "\NewTerm{gradient operator}\index{gradient operator}" of a scalar field such that:
	
	where:
	
	is a vector operator named "\NewTerm{gradient of the scalar field $f$}\index{gradient of a scalar field}". To condense the writing, we sometimes use the symbol $\vec{\nabla}$ named the "\NewTerm{nabla of the scalar field $f$}\index{nabla of a scalar field }".
	
	The vector obtained by the gradient calculation has the following fundamental properties:
	\begin{enumerate}
		\item[P1.] The components of the gradient represent by construction the variation (slope) of the function $f$ in the different directions of space.
		
		\item[P2.] The gradient is perpendicular to the isolines of the function $f$.

		\item[P3.] The direction of the gradient (and therefore its norm) is that of the maximum variation of $f$.

		\item[P4.] The direction of the gradient shows the values where $f$ increases.
	\end{enumerate}
	Following the request of some readers, let us prove some of these properties.
	Given $t\mapsto C(t)$ an isoline. Then $t\mapsto f(C(t))=c^{te}$ and therefore:
	  
	which proves that the gradient is orthogonal to the tangent of the isoline (property P2).
	
	Let us come back to:
	
	which it is traditional to write as:
	
	named the "\NewTerm{directional derivative}\index{directional derivative}". 
	Its value is obviously maximum if $\theta=0$, that is to say when the gradient is collinear to the variation $\mathrm{d}\vec{r}$. Hence, the direction (and therefore the norm) of greatest increase of $f$ is the same direction as the gradient vector!! Thus we proved the property P3.
	
	Obviously, the directional derivative takes on its greatest negative value if $\theta=\pi$ (or $180$ degrees). Hence, the direction of greatest decrease of $f$ is the direction opposite to the gradient vector.

	The property P4 can be explained without formalism as follows:
	\begin{itemize}
		\item If the function is decreasing in one variable, then the partial derivative is negative, so the component of the gradient for that variable points in the negative direction - which means increasing function value.

		\item If the function is increasing in one variable, then the partial derivative is positive, so the component of the gradient for that variable points in the positive direction - which means increasing function value.
	\end{itemize}
	Thus it doesn't matter what the function profile is: the gradient, by definition, points in the increasing direction. Indeed, when 	
$f(x,y)$ is decreasing in $x$, the function decreases as you move forward in $x$. But because the partial derivative with respect to $x$ is negative, the $x$-component of the gradient points backward in $x$, that is to say in the direction which makes $f$ increase.
	
	From the definition and from the total differential, we get:
	
	This leads us to put:
	
	and so finally the operator of the "\NewTerm{gradient in Cartesian coordinates}\index{gradient in Cartesian coordinates}" is given by:
	
	Finally, we see that the gradient of a scalar field $f(x,y,z)$ is the vector field whose components at each point are the three derivatives of the scalar field $f$ with respect to the three-dimensional coordinates, denoted here by $x, y, z$, and that, by its direction and its norm, the gradient vector of a scalar field at a point therefore includes indications on how the field varies around this point.
	
	\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
	One of the necessary and sufficient conditions for a vector field to be the gradient of a scalar field $f$ is that this vector field is irrotational (see below the rotational operator of a vector field).
	\end{tcolorbox}
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	Let us find the direction for which the directional derivative of:
	
	at $(-2,3)$ is a maximum, and what is its maximum value?
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/directional_derivative_search_plot_maple.jpg}
		\caption[]{Maple plot of $f(x,y)=3x^2-4xy+2y^2$}
	\end{figure}
	The maximum value of the directional derivative occurs, as we have just proved, when $\vec{\nabla}f$ and the unit vector $\mathrm{d}\vec{r}$ point in the same direction.\\

	Therefore we start by calculating $\vec{\nabla}f(x,y)$:
	
	Next we evaluate the gradient at $(-2,3)$:
	
	\end{tcolorbox}
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	We need to find a unit vector that points in the same direction as $\vec{\nabla}f(-2,3)$, so the next step is to divide $\vec{\nabla}f(-2,3)$ by its norm (the value of that norm being also the maximum value of the directional derivative at the point $(-2,3)$), which 
gives:
	
	As the unit vector above is a vector in the plane, nothing prevents us from calculating the angle it makes with the axis. By applying elementary trigonometry, we get, if we denote this angle by $\theta$:
	
	Since the cosine is negative and the sine is positive, the angle must be in the second quadrant. Therefore:
	
	that is to say approximately $140.208$ degrees (or seen from the point of view of the first quadrant: $39.792$ degrees).
	
	With Maple 4.00b we can have a more detailed and general investigation of the calculation we just did:\\

	\texttt{>with(plots):\\
	>with(linalg):\\
	>Pa:=contourplot(3*x\string^2-4*x*y+2*y\string^2,x=-3..-1,y=2..4,filled=true):\\
	>Pb:=fieldplot(grad(3*x\string^2-4*x*y+2*y\string^2,vector([x,y])),x=-3..-1,y=2..4):\\
	>display(Pa,Pb);
	}
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.9]{img/algebra/directional_derivative_gradient_plot_maple.jpg}
	\end{figure}
	\end{tcolorbox}
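As a cross-check of the worked example above, here is a small numerical sketch of ours: it computes $\vec{\nabla}f$ by hand for $f(x,y)=3x^2-4xy+2y^2$ and compares it with a central finite difference at the point $(-2,3)$:

\begin{verbatim}
import numpy as np

f = lambda x, y: 3*x**2 - 4*x*y + 2*y**2

# Partial derivatives by hand: df/dx = 6x - 4y, df/dy = -4x + 4y
grad_f = lambda x, y: np.array([6*x - 4*y, -4*x + 4*y])

g = grad_f(-2, 3)
print(g, np.linalg.norm(g))   # direction and maximum directional derivative

h = 1e-6
numeric = np.array([(f(-2 + h, 3) - f(-2 - h, 3)) / (2*h),
                    (f(-2, 3 + h) - f(-2, 3 - h)) / (2*h)])
assert np.allclose(g, numeric)
\end{verbatim}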
	After having defined the gradient in Cartesian coordinates $x, y, z$, we have to address the expression of this operator in other coordinate systems. It is common in physics to have to use cylindrical, polar and spherical coordinates to simplify the formal study of physical systems. So, if we refer to the previous study of coordinate systems, we have (recall) first in polar coordinates:
	
	But, with the definition of the gradient in Cartesian coordinates, in polar coordinates we have the following definition:
	
	If we express the total exact differential (\SeeChapter{see section of Differential and Integral Calculus}) of $\mathrm{d}f$ we obtain the following relation:
	
	This allows us to get the relation:
	
	therefore:
	
	which brings us to:
	
	Thus the "\NewTerm{gradient in polar coordinates}\index{gradient in polar coordinates}" is expressed as:
	
	Let us now tackle the expression of the gradient in cylindrical coordinates. Let us recall that during our study of the different coordinate systems we obtained for cylindrical coordinates:
	
	So we already know that the expression of the gradient in cylindrical coordinates will be the same as in polar coordinates, with the exception of the addition of the vertical $z$ component, which is independent of the other coordinates. Thus we get the "\NewTerm{gradient in cylindrical coordinates}\index{gradient in cylindrical coordinates}":
	 
	Let us now tackle the expression of the gradient in spherical coordinates. Let us recall that during our study of the different coordinate systems we obtained for the spherical coordinates:
	
	But, with the definition of the gradient in Cartesian coordinates, we have in spherical coordinates the following definition:
	
	If we express the total differential of $\mathrm{d}f$ we get the following relations:
	
	This allows us to obtain the relation (we now use the notation that uses the operator "nabla"):
	
	The relation:
	
	requires that:
	
	Thus the "\NewTerm{gradient in spherical coordinates}\index{gradient in spherical coordinates}" is expressed as:
	 
	So we finally saw all the expressions of the gradient operator in the Cartesian, polar, cylindrical and spherical systems.
	
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	\textbf{{\Large \ding{45}}Example:}\\\\
	Let us now see a visual example of the previous developments with Maple 4.00 and the special case function $f(x,y)=\sin(x)\sin(y)$.\\
	
	\texttt{> with(linalg):\\
	> with(plots):\\
	> plot3d(sin(x)*sin(y),x=-3..3,y=-3..3,axes=framed);}\\
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/gradient_function_of_example.jpg}
		\caption[]{Plot of the function taken as example}
	\end{figure}
	And now we show the isolines:\\
	
	\texttt{>contourplot3d(sin(x)*sin(y),x=-3..3,y=-3..3,filled=true,\\
	axes=framed,coloring=[red,blue],style=patch);}
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/gradient_function_with_isolines.jpg}
		\caption[]{Function with its isolines}
	\end{figure}
	\end{tcolorbox}
	
	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]

	And now we plot the gradient vectors with a plane projection, and we see they are indeed perpendicular to the isolines:\\\\
		\texttt{>Pa:=contourplot(sin(x)*sin(y),x=-3..3,y=-3..3,contours=10,\\
		coloring=[red,blue],filled=true):
\\
	>Pb:=fieldplot(grad(sin(x)*sin(y),vector([x,y])),x=-3..3,y=-3..3,\\
	arrows=THICK):\\
	>display(Pa,Pb);}
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/gradient_function_with_isolines_and_projected_gradient.jpg}
		\caption[]{Function with its isolines and projected gradient}
	\end{figure}
	And with a 3D perspective:\\
	
	\texttt{>campo:=fieldplot3d([diff(sin(x)*sin(y),x),diff(sin(x)*sin(y),\\
	y),0],x=-3..3,y=-3..3,
z=-3..3,axes=framed,arrows=THICK);\\
>superf:=plot3d(sin(x)*sin(y),x=-3..3,y=-3..3):
\\
	> display({campo,superf});}
	\begin{figure}[H]
		\centering
		\includegraphics[scale=0.8]{img/algebra/gradient_function_with_isolines_and_gradient.jpg}
		\caption[]{Function with its isolines and gradient in 3D}
	\end{figure}
	\end{tcolorbox}

	\pagebreak
	\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	Seen from above with a small rotation:\\
	\begin{figure}[H]
		\centering
		\includegraphics{img/algebra/gradient_function_with_isolines_and_projected_gradient_rotation.jpg}
		\caption[]{Function rotation with its isolines and gradient}
	\end{figure}
	\end{tcolorbox}
	
	\subsubsection{Gradients of Vector Field}
	The gradient of a vector field $\vec{f}(x,y,z): \mathbb{R}^3\mapsto\mathbb{R}^3$ is the field named "\NewTerm{tensor field}\index{tensor field}" defined by the following nine relations in Cartesian coordinates:
	 
	 We will use such a gradient in our study, in the section of 
We will use such a gradient, in the section of Marine \\& Weather Engineering, in our study of the butterfly effect, whose origin comes from the determination of the Navier-Stokes equations in the section of Continuum Mechanics, and we will also use this type of gradient in the section of Theoretical Computing in our study of the Gauss-Newton optimization method.\n\t\n\tWe have the following $4$ components in polar coordinates:\n\t \n\tWe have the following $9$ components in cylindrical coordinates:\n\t \n\tWe have the following $9$ components in spherical coordinates:\n\t \n\tSo we finally saw all the expressions of the gradient of a vector field in the Cartesian, polar, cylindrical and spherical systems.\n\t \n\t\\subsubsection{Divergences of a Vector Field}\n\tThe divergence is applied to a vector field and turns it into a scalar field. Therefore it is an application from $\\mathbb{R}^3\\mapsto \\mathbb{R}$. Intuitively, and in the most common case, the divergence of a vector field expresses its tendency to emanate from, or converge to, certain points.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tNon-initiated people often confuse the gradient and divergence operators. To tell the difference we must remember that the divergence of a vector field is a number while the gradient is a vector! The gradient indicates the direction in which the change is the most important, and its amplitude. The divergence simply says what comes in or goes out at a given point.\n\t\\end{tcolorbox}\t\n\tHowever, we must distinguish two contributions to the divergence that we will rigorously define a little further below: one due to the variations of direction, named the \"\\NewTerm{directional divergence}\\index{directional divergence}\", and the other due to variations in modulus (norm), named the \"\\NewTerm{modular divergence}\\index{modular divergence}\". Thus, for simple fields, we can imagine cases where the divergence would only be modular and others where it would only be directional. We could also build a field where the two types of divergence coexist but have opposing effects.\n\t\n\tLet us consider for example a vector field $\\vec{f}$ in space and make it pass through some surface $S$. Physicists then assimilate the quantity of $\\vec{f}$ which moves along the normal vector to the surface to a flow of $\\vec{f}$ through $S$.\n\t\n\tTo be convinced of this analogy we can imagine a fluid flowing along a flat surface: the flow through the surface is obviously zero in this case; on the other hand, if the fluid flows vertically through a horizontal surface, the flow will be maximal. 
It is then immediate that we want to represent the flow by the scalar product of $\\vec{f}$ with the normal $\\vec{n}$ to the surface $S$.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe must always pay attention to the direction of $\\vec{n}$ because at any point of a surface $S$ we have in general two normal vectors $\\vec{n}$ that are collinear but of opposite directions.\n\t\\end{tcolorbox}\t\n\tIf the surface is planar then the normal $\\vec{n}$ is the same everywhere, but if it changes from place to place, then we will look at a small surface element $\\mathrm{d}S$.\n\t\n\tIf a small flow element is defined by:\n\t\n\tthen the total flow will be given by:\n\t\n\twhich is sometimes written (it's a little bit abusive but why not...):\n\t\n\tLet us now suppose that our vector $\\vec{f}$ moves a point $M(x,y,z)$ in space to $M'(x+\\mathrm{d}x,y+\\mathrm{d}y,z+\\mathrm{d}z)$ through a rectangular parallelepiped of sides $\\mathrm{d}x, \\mathrm{d}y$ and $\\mathrm{d}z$:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/ostrogradsky_box_vector_displacement.jpg}\n\t\t\\caption[]{Move of a vector through a parallelepiped}\n\t\\end{figure}\n\tWe can decompose the movement (flow) through each face of the parallelepiped (decompositions in the orthonormal basis). For example, if we are interested in the decomposed part of the flow through the face $(\\mathrm{d}y, \\mathrm{d}z)$ described by the vertices $BCFG$, we obviously have $\\vec{n}=(1,0,0)$.\n\t\n\tWe still need to determine how to represent the flow $\\vec{f}$ for this direction. As the flow is a function, that is to say that each of its components may depend on the three coordinates of space (if we take the case of a function in $\\mathbb{R}^3$), we have:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThose who are not convinced can go read the beginning of the section Electrodynamics where we take the electric field as an (excellent) example.\n\t\\end{tcolorbox}\t\n\tThe variation of the flow along $x$ is given by:\n\t\n\twhich gives us:\n\t\n\ttherefore, by summing:\n\t\n\tCompared to the first expression of $\\Phi$, the term $\\mathrm{d}x\\mathrm{d}y\\mathrm{d}z$ is now a volume element and no longer a surface element. We also have an interesting result:\n\t\n\twhose more explicit and rigorous writing should be (to highlight well that the considered closed surface is the boundary of the closed studied volume):\n\t\n\tor more commonly written:\n\t
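\n\tBefore naming this result, we can sanity-check it on a concrete case. The sketch below does so in Python with SymPy (our own addition; the field $\\vec{f}=(x^2,y^2,z^2)$ and the unit cube are arbitrary choices): it compares the volume integral of the divergence with the outward flux through the six faces of the cube.\\\\\n\n\t\\texttt{from sympy import symbols, integrate, diff\\\\\n\tx, y, z = symbols('x y z')\\\\\n\tf = [x**2, y**2, z**2]  \\# arbitrary test field\\\\\n\tdivf = diff(f[0], x) + diff(f[1], y) + diff(f[2], z)\\\\\n\tvol = integrate(divf, (x, 0, 1), (y, 0, 1), (z, 0, 1))\\\\\n\tfluxx = integrate(f[0].subs(x, 1) - f[0].subs(x, 0), (y, 0, 1), (z, 0, 1))\\\\\n\tfluxy = integrate(f[1].subs(y, 1) - f[1].subs(y, 0), (x, 0, 1), (z, 0, 1))\\\\\n\tfluxz = integrate(f[2].subs(z, 1) - f[2].subs(z, 0), (x, 0, 1), (y, 0, 1))\\\\\n\tprint(vol, fluxx + fluxy + fluxz)  \\# both equal 3\\\\\n\t}\n\tBoth numbers agree, which is exactly what the theorem named below asserts.\n\t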
\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSee the practical examples in the section Electrodynamics where, for example, the electric field divergence is zero for a free spherical charge, as the vectors point in different directions (directional divergence) while the norms decrease as the inverse of the square of the radius (modular divergence). Both contributions are in opposition and so the total divergence is zero.\n\t\\end{tcolorbox}\n\tThe development above is named \"\\NewTerm{Ostrogradsky theorem}\\index{Ostrogradsky theorem}\" or \"\\NewTerm{Gauss-Ostrogradsky theorem}\\index{Gauss-Ostrogradsky theorem}\" or more simply \"\\NewTerm{divergence theorem}\\index{divergence theorem}\", and actually defines the total divergence of $\\vec{f}$ in a volume as the flow of $\\vec{f}$ through the \"walls\" of the closed volume (Gauss closed surface), which expresses well the name \"divergence\".\n\t\n\tNow let us reconsider the previous relation but extracting a unit vector from the vector field $\\vec{f}$ such that:\n\t\n\twhere now $f$ is a scalar field. This can obviously be rewritten using the chain rule and the commutativity of the dot product:\n\t\n\tIn the special case of a uniform field we have:\n\t\n\tThen there remains:\n\t\n\tThe dot product being distributive over the sum of vectors, we can rewrite this as:\n\t\n\tand therefore we get the \"\\NewTerm{gradient theorem\\footnote{As it is applicable only to uniform fields it is not used a lot in practice}}\\index{gradient theorem}\":\n\t\n\t\n\tWe define the operator \"\\NewTerm{divergence}\\index{divergence operator}\" by the following relation (the tensor notation has been used to shorten the writing) in an $n$-dimensional space:\n\t\n\tThus we have for the operator \"\\NewTerm{divergence in Cartesian coordinates}\\index{divergence in Cartesian coordinates}\":\n\t\n\tIf the divergence of a vector field is identically zero at all the points of an Eulerian frame \\footnote{The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.}, the triple integral flux of this field through a volume $V$ will be:\n\t\n\tIt follows that the flow of this vector field through the edges of the volume is zero, that is to say that the incoming flow compensates the outgoing flow. We say that such a vector field, having a null divergence, has a \"\\NewTerm{conservative flow}\\index{conservative flow}\".\n\t\n\tTo determine the expression of the divergence operator in polar coordinates let us recall the relations proved earlier above:\n\t\n\tGiven now a vectorial function $\\vec{f}:\\mathbb{R}^2\\rightarrow \\mathbb{R}^2$. We have:\n\t\n\tKnowing the expressions of $\\vec{e}_r,\\vec{e}_\\phi$ in terms of $\\vec{e}_x,\\vec{e}_y$, from the expression above we deduce:\n\t\n\tThe divergence of $\\vec{f}$ being defined in the two-dimensional case by:\n\t\n\twe then have:\n\t\n\tThe first term is (an application of the gradient in polar coordinates!):\n\t\n\tand in the same way we get for the second term (we can as always give more details on request):\n\t\n\tBy adding the two terms and expressing the partial derivatives of the functions $f_x,f_y$ in terms of the partial derivatives of the functions $f_r,f_\\phi$ using the relations:\n\t\n\twe get:\n\t\n\tAfter simplification:\n\t\n\tThe expression of the operator \"\\NewTerm{divergence in polar coordinates}\\index{divergence in polar coordinates}\" is then:\n\t
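\n\tThis formula can be verified symbolically end to end. The sketch below does it in Python with SymPy for generic components $f_r, f_\\phi$ (our own verification; the polar forms of $\\partial_x,\\partial_y$ used in it are the chain-rule operators recalled above):\\\\\n\n\t\\texttt{from sympy import symbols, Function, sin, cos, diff, simplify\\\\\n\tr, phi = symbols('r phi', positive=True)\\\\\n\tfr = Function('fr')(r, phi)  \\# radial component\\\\\n\tfp = Function('fp')(r, phi)  \\# angular component\\\\\n\tfx = fr*cos(phi) - fp*sin(phi)  \\# Cartesian components of the field\\\\\n\tfy = fr*sin(phi) + fp*cos(phi)\\\\\n\tdivC = cos(phi)*diff(fx, r) - sin(phi)/r*diff(fx, phi) + sin(phi)*diff(fy, r) + cos(phi)/r*diff(fy, phi)\\\\\n\tdivP = diff(r*fr, r)/r + diff(fp, phi)/r  \\# the polar formula\\\\\n\tprint(simplify(divC - divP))  \\# prints 0\\\\\n\t}\n\t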
To determine the expression of the divergence operator in cylindrical coordinates let us recall the relations:\n\t\n\tGiven now a vector function $\\vec{f}:\\mathbb{R}^3\\rightarrow \\mathbb{R}^3$. We have:\n\t\n\tAs we know the expressions of $\\vec{e}_r,\\vec{e}_\\phi,\\vec{e}_z$ in terms of $\\vec{e}_x,\\vec{e}_y,\\vec{e}_z$, from the above expressions we deduce:\n\t\n\tThe divergence of $\\vec{f}$ being defined in the three-dimensional case by:\n\t\n\twe then have:\n\t\n\tThe first term is equal to (an application of the gradient in cylindrical coordinates):\n\t\n\tin the same way we get for the second component (we can give the details on request):\n\t\n\tand for the last one:\n\t\n\tBy adding the three terms and expressing the partial derivatives of the functions $f_x,f_y,f_z$ in terms of the partial derivatives of the functions $f_r,f_\\phi,f_z$ using the relations:\n\t\n\twe get:\n\t\n\tAfter simplification:\n\t\n\tThe expression of the operator \"\\NewTerm{divergence in cylindrical coordinates}\\index{divergence in cylindrical coordinates}\" is then:\n\t\n\tTo find the expression of the divergence in spherical coordinates, let us recall the relations:\n\t\n\tGiven now a vector function $\\vec{f}:\\mathbb{R}^3\\rightarrow \\mathbb{R}^3$. We have:\n\t\n\tKnowing the expressions of $\\vec{e}_r,\\vec{e}_\\theta,\\vec{e}_\\phi$ in terms of $\\vec{e}_x,\\vec{e}_y,\\vec{e}_z$, from the expression above we deduce:\n\t\n\tThe divergence of $\\vec{f}$ being defined in the three-dimensional case by:\n\t\n\twe then have:\n\t\n\tThe first component is equal to (an application of the gradient in spherical coordinates):\n\t\n\tin the same way we get for the second component (we can give the details on request):\n\t\n\tand finally for the third and last one:\n\tand:\n\t\n\tBy adding the three terms and expressing the partial derivatives of the functions $f_x,f_y,f_z$ in terms of the partial derivatives of the functions $f_r,f_\\theta,f_\\phi$ using the relations:\n\t\n\twe get (we can develop the intermediate details on request):\n\t\n\tWe notice that we can regroup terms depending on the same variable using the properties of the derivative, so we get for the expression of the divergence in spherical coordinates:\n\t\n\tand therefore the operator of \"\\NewTerm{divergence in spherical coordinates}\\index{divergence in spherical coordinates}\" is:\n\t\n\tSo we finally saw all the expressions of the divergence operator of a vector field in the Cartesian, polar, cylindrical and spherical systems.\n\t\n\t\\pagebreak\n\t\\subsubsection{Rotationals of a Vector Field (Curl)}\n\tThe \"\\NewTerm{curl}\\index{curl}\" or \"\\NewTerm{rotational}\\index{rotational}\" of a vector field can be seen (this is a simplification!) as the vector field whose field lines are perpendicular to those of the field whose rotational we have calculated, as shown in the special example below (we will see more academic details further below):\n\t\\begin{figure}[H]\n\t\\centering\n\t\t\\includegraphics{img/algebra/curl.jpg}\n\t\t\\caption{Example of rotational of a vector field}\n\t\\end{figure}\n\tIn a little bit more technical way, the rotational is a vector operator that describes the infinitesimal rotation of a $3$-dimensional vector field. At every point in the field, the rotational at that point is represented by a vector. The attributes of this vector (length and direction) characterize the rotation at that point. The direction of the rotational is the axis of rotation, as determined by the right-hand rule, and the magnitude of the rotational is the magnitude of rotation. \n\t\n\tThe rotational transforms a vector field into another vector field. 
For most people it is more difficult to represent accurately than the gradient and the divergence; it intuitively reflects the tendency of a field to rotate around a point (the way it is twisted).\n\t\n\tBefore tackling the mathematical machinery and the mathematical examples, let us give two everyday-life examples:\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1. In a tornado, the wind turns around the eye of the storm and the wind velocity vector field has a non-zero rotational around the eye.\\\\\n\n\tE2. The rotational of the velocity field of a disc that rotates at a constant speed is constant, directed along the axis of rotation and oriented such that the rotation takes place, in relation to it, in the direct sense.\\\\\n\t\\end{tcolorbox}\n\tA vector field is said to be \"\\NewTerm{irrotational}\\index{irrotational}\" when the rotational of this field is identically zero at all points of space. Otherwise, we say it is a \"\\NewTerm{vortex}\\index{vortex}\".\n\t\n\tIn the usual case where $\\mathrm{d}x$ is an element of length, the measurement unit of the rotational is then the unit of the considered field divided by a unit of length. For example, in fluid mechanics the unit of the rotational of a velocity field is radians per unit time, like an angular velocity, as we divide a velocity [ms$^{-1}$] by a length [m]!\n\t\n\tThe divergence gives some indication of the behavior of a vector or a vector field: how it moves in relation to the normal and how it crosses the surface, but it is insufficient. Take a field which would have the shape of a cylinder and another field which has a helicoidal form of the same diameter as the cylinder. If they move in the same direction, the divergence will be the same even though the movements are quite different. This requires that we determine how the field is bent as it passes through a surface: this will be determined by the circulation (like the work of a force, for example) of the vector along a closed curve, obtained with the sum of dot products $\\vec{f}\\cdot \\mathrm{d}\\vec{r}$ on the closed contour (\\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tIn fact it is the same to look at how twisted the vector is with respect to the normal vector of the surface, which leads us to define the \"rotational\" or \"swirl vector\" by writing:\n\t\n\twhich thus establishes a relation between the line integral and the surface integral (we then transform a line integral on a closed path into a surface integral delimited by the given path).\n\t\n\t\\begin{theorem}\n\tIn other words, the rotational is calculated by using the fact that the circulation of a vector field along a closed elementary path is equal to the flux of its rotational through the elementary surface bounded by this path.\n\n\tThis is the \"\\NewTerm{Stokes theorem}\\index{Stokes theorem}\" (which is more rigorously demonstrable with a heavy mathematical formalism), which is in fact a definition of the rotational operator, whose explicit mathematical expression we will seek right now!\n\t\\end{theorem}\n\t\\begin{dem}\n\tGiven $\\vec{f}$ a vector field defined in a given space. 
We want to calculate the circulation of $\\vec{f}$ around a closed path (contour) $C$:\n\t\n\tWe choose for contour $C$ the edges of an infinitesimal rectangle $(\\mathrm{d}x,\\mathrm{d}y)$ that lies in $\\mathbb{R}^3$ and is parallel to the $xy$-plane (note that we travel the contour so as to always have the surface to our left!):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/rotational_contour_path.jpg}\n\t\t\\caption[]{Contour (path) of integration}\n\t\\end{figure}\n\tFor the two horizontal sides (edges), the contribution to the circulation is:\n\t\n\tThis authorizes us to write:\n\t\n\tSimilarly, for the vertical sides (edges) we also have:\n\t\n\tTherefore we have the circulation along $z$:\n\t\n\tWhich can also be written in the following more general and important form:\n\t\n\tand is none other than the famous \"\\NewTerm{Green theorem}\\index{Green theorem}\", also known as the \"\\NewTerm{Green-Riemann theorem}\\index{Green-Riemann theorem}\", that we will see again in the section of Complex Analysis.\n\t\n\tAnd that we will write, in the situation that interests us:\n\t\n\tBy circular permutation we then get:\n\t\n\tOr, in condensed vector form:\n\t\n\tThis allows us to better understand the notation, or the non-intuitive definition of the rotational given in many books, which is:\n\t\n\tthat is to say the cross product of the gradient operator with the vector field!\n\t\n\tSo finally we have proved the Stokes theorem, also named in this form the \"\\NewTerm{curl theorem}\\index{curl theorem}\", which indeed gives:\n\t\n\tand at the same time the rotational in Cartesian coordinates.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\n\tTake the vector field, which depends on $x$ and $y$ linearly:\n\t\n\tIts plot looks like this in Maple 4.00b:\\\\\n\t\n\t\\texttt{\n\t>with(DEtools): with(plots):\\\\\n\t>fieldplot([y, -x], x=-5..5, y=-5..5,arrows = medium, \\\\\n\tcolor = sqrt(x \\string^2 + y\\string^2),thickness=2,labels=[`x`,`y`],\\\\\n\ttitle=`Simple vector field`);\t\n\t}\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/vector_field_01.jpg}\n\t\t\\caption[]{Vector field example with Maple 4.00b}\n\t\\end{figure}\n\tSimply by visual inspection, we can see that the field is rotating. If we place a paddle wheel anywhere, we see immediately its tendency to rotate clockwise. Using the right-hand rule, we expect the rotational to point into the page. If we are to keep a right-handed coordinate system, into the page will be in the negative $z$ direction. The lack of $x$ and $y$ components is analogous to the cross product operation.\\\\\n\n\tIf we calculate the rotational:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tWith Maple 4.00b we obtain this algebraic result with the following commands:\\\\\n\t\n\t\\texttt{>with(linalg):\\\\\n\t>f:=[y,-x,0];v:=[x,y,z];\\\\\n\t>curl(f,v);\\\\\n\t}\n\t\n\tAs we have not yet had the time to find an easy way to plot the resulting rotational vector field in release 4.00b of Maple, we will take the picture provided by Wikipedia:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.65]{img/algebra/rotational_of_vector_field_01.jpg}\n\t\t\\caption[]{Rotational of previous vector field (source: Wikipedia)}\n\t\\end{figure}\n\t\t\n\t
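\n\tFor readers without Maple, the same algebraic curl can be reproduced with, for example, Python's SymPy library (our own addition; sympy.vector is the module assumed here):\\\\\n\n\t\\texttt{from sympy.vector import CoordSys3D, curl\\\\\n\tN = CoordSys3D('N')\\\\\n\tf = N.y*N.i - N.x*N.j  \\# the field [y, -x, 0] of the example\\\\\n\tprint(curl(f))  \\# (-2)*N.k: constant and pointing into the page\\\\\n\t}\n\t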
E2. Suppose we now consider a slightly more complicated vector field:\n\t\nIts plot:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.65]{img/algebra/vector_field_02.jpg}\n\t\t\\caption[]{Second vector field example (source: Wikipedia)}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tWe might not see any rotation initially, but if we look closely at the right, we see a larger field at, say, $x=4$ than at $x=3$. Intuitively, if we placed a small paddle wheel there, the larger \"current\" on its right side would cause the paddle wheel to rotate clockwise, which corresponds to a rotational in the negative $z$ direction. By contrast, if we look at a point on the left and placed a small paddle wheel there, the larger \"current\" on its left side would cause the paddle wheel to rotate counterclockwise, which corresponds to a rotational in the positive $z$ direction. Let's check our guess by doing the math:\\\\\n\t\n\tIndeed the rotational is in the positive $z$ direction for negative $x$ and in the negative $z$ direction for positive $x$, as expected. Since this rotational is not the same at every point, its plot is a bit more interesting:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.65]{img/algebra/rotational_of_vector_field_02.jpg}\n\t\t\\caption[]{Rotational of previous vector field (source: Wikipedia)}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\\pagebreak\t\n\tLet us now determine the expression of the rotational in cylindrical coordinates (the rotational in polar coordinates is not defined in $\\mathbb{R}^2$).\n\t\n\tUsing the same technique as for the rotational in Cartesian coordinates, we write the circulation of $\\vec{f}$ along a contour corresponding to a small piece $P_1P_2P_3P_4$ of an orthogonal cylinder (oriented in the direction of the $z$-axis):\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/rotational_cylindrical.jpg}\n\t\t\\caption[]{Representation of the cylinder piece $P_1P_2P_3P_4$}\n\t\\end{figure}\n\tWe then have, by fixing $z$ (caution! the $\\mathrm{d}\\vec{r}$ has nothing to do with the cylinder radius $r$... the notation can be confusing, I'm sorry!):\n\t\n\tthe total circulation thus gives, after regrouping terms:\n\t\n\tWe cannot at this stage compare with the rotational, because it is difficult to make the surface differential appear from the differentials currently present in the circulation. The best is then to divide everything by $r\\mathrm{d}\\phi\\mathrm{d}r$:\n\t\n\tTherefore:\n\t\n\tNow we determine the rotational by fixing $\\varphi$. The problem is like having a rectangle in space along which we travel to determine the circulation. But we already know the result of the rotational for a rectangle in Cartesian coordinates along the $z$-axis:\n\t\n\texcept that in cylindrical coordinates we have to replace $z$ by $\\varphi$, $x$ by $y$, $y$ by $r$ and $f_y$ by $f_r$ and finally $f_x$ by $f_z$ (this choice is always appropriate simply because the circulation is such that the surface is always on our left). This gives us:\n\t\n\tIt therefore only remains for us to find the component of the rotational on $r$ (therefore when $r$ is fixed). The calculation is more difficult as we have to follow (always positively!) 
a curved surface through the variation of the angle $\\varphi$.\n\t\n\tWe then have, by fixing $r$:\n\t\n\tthe total circulation thus gives, after regrouping terms:\n\t\n\tWe cannot at this stage compare with the rotational, because it is difficult to make the surface differential appear from the differentials currently present in the circulation. The best is then to divide everything by $r\\mathrm{d}\\phi\\mathrm{d}z$:\n\t\n\tThen finally:\n\t\n\tAnd finally we have the \"\\NewTerm{rotational in cylindrical coordinates}\\index{rotational in cylindrical coordinates}\" given in its entirety by:\n\t\n\tThe reader can verify that this result is simply the cross product of the gradient (nabla) operator in cylindrical coordinates with the vector field $\\vec{f}$.\n\t\n\tTo be convinced, let us now derive directly the expression of the rotational in spherical coordinates through the cross product of the gradient in spherical coordinates with the vector field $\\vec{f}$.\n\t\n\tFirst let us recall that we have obtained for the gradient in spherical coordinates:\n\t\n\tTherefore we have:\n\t\n\twhich we can write using the decomposition in basis vectors:\n\t\n\tThanks to the partial derivatives that we proved earlier during our introduction to the spherical coordinates, it follows:\n\t\n\tThe cross products of collinear vectors vanish. Therefore there remains:\n\t\n\tAs the cross product of two basis vectors gives the corresponding orthogonal vector (positively or negatively), we then have:\n\t\n\tBy regrouping the terms it follows:\n\t\n\tThus by simplifying:\n\t\n\tThus finally:\n\t\n\t
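\n\tThe planar circulation computed by fixing $z$ is the workhorse of both derivations above; its result, the $z$-component $\\dfrac{1}{r}\\left(\\dfrac{\\partial (r f_\\phi)}{\\partial r}-\\dfrac{\\partial f_r}{\\partial \\phi}\\right)$ (the standard table form), can be cross-checked symbolically against the Cartesian expression $\\dfrac{\\partial f_y}{\\partial x}-\\dfrac{\\partial f_x}{\\partial y}$. A sketch in Python with SymPy, with generic components (our own verification, not part of the original Maple material):\\\\\n\n\t\\texttt{from sympy import symbols, Function, sin, cos, diff, simplify\\\\\n\tr, phi = symbols('r phi', positive=True)\\\\\n\tfr = Function('fr')(r, phi)\\\\\n\tfp = Function('fp')(r, phi)\\\\\n\tfx = fr*cos(phi) - fp*sin(phi)\\\\\n\tfy = fr*sin(phi) + fp*cos(phi)\\\\\n\tdfydx = cos(phi)*diff(fy, r) - sin(phi)/r*diff(fy, phi)\\\\\n\tdfxdy = sin(phi)*diff(fx, r) + cos(phi)/r*diff(fx, phi)\\\\\n\trotz = diff(r*fp, r)/r - diff(fr, phi)/r  \\# claimed z-component\\\\\n\tprint(simplify((dfydx - dfxdy) - rotz))  \\# prints 0\\\\\n\t}\n\t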
\\pagebreak\n\t\\subsubsection{Laplacians of Scalar Field (Laplace Operator)}\n\tThe Laplacian of a scalar field $\\phi(x_1,x_2,x_3)$ also gives a scalar field, one that measures the difference between the value of the function $\\phi$ at a point and its average around this point. In other words: the second partial derivative measures the variations of the slope at the studied point in its immediate neighborhood, following one direction at a time. If the second partial derivative is null along $x$, then the slope is constant in its immediate neighborhood along this dimension (direction); this implies that the value of the function at the studied point is the average of its neighborhood (along this dimension).\n\t\n\tThe reader will be able to see again major practical applications of this differential operator in the sections of Complex Analysis, Quantum Chemistry, Astronomy, Electrodynamics, Weather \\& Marine Engineering, Wave Mechanics, Wave Quantum Physics and Quantum Field Theory.\n\t\n\tThis operator is defined from the divergence and the gradient, and we denote it by (tensor notations):\n\t\n\tThe Laplacian is null, or quite small, where the function varies only slowly (almost linearly). The functions satisfying the \"\\NewTerm{Laplace equation}\\index{Laplace equation}\":\n\t\n\tare named \"\\NewTerm{harmonic functions}\\index{harmonic function}\".\n\t\n\tThus the \"\\NewTerm{scalar Laplacian operator in Cartesian coordinates}\\index{scalar Laplacian operator in Cartesian coordinates}\" is, by this definition, given by:\n\t\n\tThe Laplacian of a scalar field in other coordinate systems is a little bit harder to get than for the other differential operators. There is more than one possible proof, but among the existing ones we have tried to choose (as always) the one that seems to us the most interesting from the point of view of the tools used (and not of simplicity!).\n\n\tGiven the Laplacian in Cartesian coordinates in $\\mathbb{R}^2$ of a scalar field $f$:\n\t\n\tTo determine this expression in polar coordinates, we will use the total exact differential and the chain rule in polar coordinates (\\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\ttherefore for a second derivative:\n\t\n\tbut we know that we have in polar coordinates:\n\t\n\thence for the first derivative:\n\t\n\tand for the second derivative:\n\t\n\ttherefore:\n\t\n\tand given that the second partial derivatives are continuous, the cross derivatives are equal according to the Schwarz theorem (\\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tTherefore:\n\t\n\tSimilarly, we will have:\n\t\n\thence the expression of the Laplace operator in polar coordinates by adding the last two expressions:\n\t\n\tTherefore the \"\\NewTerm{scalar Laplacian in polar coordinates}\\index{scalar Laplacian in polar coordinates}\" is finally given by:\n\t
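\n\tA concrete check never hurts. The sketch below, in Python with SymPy (our own addition; the test function $f=x^3+xy^2$ is an arbitrary choice), compares the Cartesian Laplacian with the polar formula just obtained:\\\\\n\n\t\\texttt{from sympy import symbols, sin, cos, diff, simplify\\\\\n\tx, y, r, phi = symbols('x y r phi', positive=True)\\\\\n\tf = x**3 + x*y**2  \\# arbitrary test function\\\\\n\tsub = [(x, r*cos(phi)), (y, r*sin(phi))]\\\\\n\tlapC = (diff(f, x, 2) + diff(f, y, 2)).subs(sub)\\\\\n\tg = f.subs(sub)  \\# the same function in polar form\\\\\n\tlapP = diff(g, r, 2) + diff(g, r)/r + diff(g, phi, 2)/r**2\\\\\n\tprint(simplify(lapC - lapP))  \\# prints 0\\\\\n\t}\n\tHere both sides reduce to $8x=8r\\cos(\\phi)$, in agreement with the polar expression above.\n\t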
To find the expression of the Laplacian operator in spherical coordinates, we will use the intuition of the physicist and the concept of similarity.\n\t\n\tWe will first of all use the figure below to see what we mean:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics{img/algebra/coordinate_system_spherical_for_Laplacian_study.jpg}\n\t\t\\caption[]{Recall of the spherical coordinate system representation}\n\t\\end{figure}\n\tRecall that the relations between Cartesian and spherical coordinates are given by:\n\t\n\tWe will now consider the following similarities:\n\t\\begin{enumerate}\n\t\t\\item Cylindrical coordinates:\n\t\t\t\n\t\t\n\t\t\\item Spherical coordinates:\n\t\t\t\n\t\\end{enumerate}\n\tLet us build a correspondence table:\n\t\n\tThe goal is to play with this correspondence, starting with the Laplacian in cylindrical coordinates from which we have subtracted, on both sides of the equality, the term $\\dfrac{\\partial^2 f}{\\partial z^2}$. Therefore:\n\t\n\tlet us now use our small correspondence table and we get:\n\t\n\tThe second term of the equality of the latter relation is the spherical equivalent of the term \\#1 of the Laplacian in cylindrical coordinates:\n\t\n\tNow let us examine and focus on the term: $\\dfrac{1}{\\rho}\\dfrac{\\partial f}{\\partial \\rho}$\n\t\n\tExactly as when we determined the relation:\n\t\n\twe get:\n\t\n\twith:\n\t\n\tThis gives us the possibility to write:\n\t\n\tIf we play again with our small correspondence table we get:\n\t\n\tWe divide the latter relation by $\\rho$ and we get:\n\t\n\tWe therefore have above the spherical equivalent of the second term \\#2 of the Laplacian in cylindrical coordinates:\n\t\n\tThe third and last term is quite simple to determine. We just have to replace $\\rho$ by $r\\sin(\\theta)$ to get:\n\t\n\tBy bringing together all terms obtained previously, we finally get the extended form of the Laplacian in spherical coordinates used so much in physics (see the corresponding sections of this book):\n\t\n\tWe can shorten this expression by factoring the terms:\n\t\n\tIf we condense even a little bit more, we get the final expression of the \"\\NewTerm{scalar Laplacian in spherical coordinates}\\index{scalar Laplacian in spherical coordinates}\", named also \"\\NewTerm{spherical Laplacian}\":\n\t\n\t\n\t\\subsubsection{Laplacians of a Vector Field}\n\tAs with the Laplacian of a scalar field, the Laplacian of a vector field is only a very convenient notation system for condensing the writing of the components of a vector field.\n\t\n\tThe reader will also find practical applications of this operator in the sections of Electrokinetics, Electrodynamics and Continuum Mechanics.\n\t\n\tThus, the vector Laplacian is often defined by:\n\t\n\tWe also prove that in the specific case of Cartesian coordinates, the Laplacian of a vector field has as components the scalar Laplacians of each of the components.\n\t\n\tWe therefore have in Cartesian coordinates:\n\t\n\tso that, in Cartesian coordinates:\n\t\n\tand thus the \"\\NewTerm{vector Laplacian of a vector field in Cartesian coordinates}\\index{vector Laplacian of a vector field in Cartesian coordinates}\" is indeed the scalar Laplacian of each component:\n\t\n\tOr more explicitly:\n\t\n\tThe Laplacian of a vector field, frequently named \"\\NewTerm{vectorial Laplacian}\\index{vectorial Laplacian}\", in other coordinate systems is quite simple to get once we know the Laplacian of a scalar field in the same coordinates!\n\t\n\tWe have first in cylindrical coordinates:\n\t\n\tTo simplify (because of a lack of space) let us focus first on the first line:\n\t\n\tand then on the second line:\n\t\n\tand finally the third one:\n\t\n\tThis gives us the \"\\NewTerm{vector Laplacian in cylindrical coordinates}\\index{vector Laplacian in cylindrical coordinates}\" as we can find it in tables and formularies:\n\t\n\tTo finish, and with joy (...), let us make the merry and detailed calculations of the vector Laplacian operator in spherical coordinates (it's quite long but it's just to make sure that we fall back on what is in tables and formularies):\n\t\n\tLet us focus on the first line (Caution! this will be quite long...):\n\t\n\t\\pagebreak\n\t\n\tThat's it for the first line... Let us move on to the second line, always with joy...:\n\t\n\t\n\t\\pagebreak\n\tand finally a last effort for the third and last line:\n\t\n\n\t\\pagebreak\n\t\n\tThis gives the \"\\NewTerm{vector Laplacian in spherical coordinates}\\index{vector Laplacian in spherical coordinates}\" as we can find it in tables and formularies:\n\t\n\tthat's it... for the skeptics...\n\t
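\n\tSkeptics with a computer can also check the definition componentwise: in Cartesian coordinates, $\\vec{\\nabla}(\\vec{\\nabla}\\cdot\\vec{f})-\\vec{\\nabla}\\times(\\vec{\\nabla}\\times\\vec{f})$ must reproduce the scalar Laplacian of each component. A sketch in Python with SymPy's vector module (our own addition; the test field is an arbitrary choice):\\\\\n\n\t\\texttt{from sympy import diff, simplify\\\\\n\tfrom sympy.vector import CoordSys3D, gradient, divergence, curl\\\\\n\tN = CoordSys3D('N')\\\\\n\tx, y, z = N.x, N.y, N.z\\\\\n\tF = x*y**2*N.i + y*z**2*N.j + z*x**2*N.k  \\# arbitrary test field\\\\\n\tlhs = gradient(divergence(F)) - curl(curl(F))\\\\\n\tlap1 = diff(x*y**2, x, 2) + diff(x*y**2, y, 2) + diff(x*y**2, z, 2)\\\\\n\tprint(simplify(lhs.dot(N.i) - lap1))  \\# prints 0; same for N.j and N.k\\\\\n\t}\n\t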
\\subsubsection{Remarkable Identities}\n\tThe scalar and vector differential operators have some very simple remarkable identities that we will find very often in physics in this book.\n\t\n\tLet us first see the relations that make no sense (in case you should stumble upon them...):\n\t\n\tFor the relation above, the rotational (curl) of a divergence does not exist since the rotational operator applies to a vector field while the divergence is a scalar!\n\t\n\tFor the above relation, the rotational (curl) of a scalar Laplacian does not exist since the rotational operator applies to a vector field while, by construction, the Laplacian is a scalar.\n\t\n\tLet us now see some remarkable identities, without proof for the majority (when there is a proof, it is because a reader requested all the details...):\n\t\\begin{enumerate}\n\t\t\\item By construction the scalar Laplacian is the divergence of the gradient of the scalar field:\n\t\t\n\t\t\n\t\t\\item The rotational (curl) of the gradient is equal to zero:\n\t\t\n\t\tTherefore if the rotational (curl) of a vector variable (vector field) is zero, this same variable can be expressed as the gradient of a scalar potential! This is a very important property (or trick, depending on the point of view...) in Electromagnetics, Fluid Mechanics and Quantum Physics!\n\t\t\\begin{dem}\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\t\n\t\t\\item The dot product of two rotationals is equal to something boring to state just with words...:\n\t\t\n\t\t\n\t\t\\item The divergence of the rotational (curl) of a vector field is always equal to zero:\n\t\t\n\t\t\\begin{dem}\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\t\n\t\t\\item The rotational (curl) of the rotational of a vector field is equal to the gradient of the divergence of this vector field minus its vector Laplacian:\n\t\t\n\t\t\\begin{dem}\n\t\t\n\t\tIt is then easy to check that this last equality is equal to:\n\t\t\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\t\n\t\t\\item The nabla operator applied to the dot product of two vectors is equal to... (see below), which provides a very useful relation in Fluid Mechanics:\n\t\t\n\t\t\n\t\t\\item The scalar product of the rotational (curl) of a vector is the difference of the commutated operators such that (we can provide the detailed proof on request):\n\t\t\n\t\tWe will use this last relation in our study of electromagnetic radiation pressure in the section of Electrodynamics (among others...).\n\t\t\n\t\t\\item The gradient of a cross product is the difference of the commutated operators such that (we can provide the detailed proof on request):\n\t\t\n\t\tWe will use this last relation in our study of superconductors in the section of Electrokinetics.\n\t\\end{enumerate}\n\t
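\n\tIdentities 2 and 4 lend themselves to a one-screen symbolic check with completely generic (unspecified) component functions; a sketch in Python with SymPy's vector module (our own addition):\\\\\n\n\t\\texttt{from sympy import Function, simplify\\\\\n\tfrom sympy.vector import CoordSys3D, gradient, divergence, curl\\\\\n\tN = CoordSys3D('N')\\\\\n\tg = Function('g')(N.x, N.y, N.z)  \\# arbitrary scalar field\\\\\n\tprint(curl(gradient(g)))  \\# 0: the curl of a gradient vanishes\\\\\n\tu = Function('u')(N.x, N.y, N.z)\\\\\n\tv = Function('v')(N.x, N.y, N.z)\\\\\n\tw = Function('w')(N.x, N.y, N.z)\\\\\n\tprint(simplify(divergence(curl(u*N.i + v*N.j + w*N.k))))  \\# 0\\\\\n\t}\n\tBoth results rest only on the equality of mixed partial derivatives (the Schwarz theorem used earlier).\n\t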
\\pagebreak\n\t\\subsubsection{Summary}\n\tAs part of this book, we will use the different notations presented and summarized in the table below. Their usage gives us the possibility, in the context of the different theories, to avoid confusion with other mathematical objects (tools). It's annoying but we have to live with it.\n\t\n\t\n\tNow let us do a quick summary of the main differential operators:\n\t\\begin{itemize}\n\t\t\\item The gradient can be assimilated to the \"slope\" (example: the electric field is the \"slope\" of the electrostatic potential).\n\t\t\n\t\tThe various expressions of the gradient operator (placed under the form of the nabla operator) in Cartesian, polar, cylindrical, spherical coordinates are the following:\n\t\t\n\t\t\n\t\t\\item The divergence characterizes a flow of something that comes from somewhere, a source, or that goes to it. If the divergence is different from zero, it means that there is concentration around a point, so the density increases (or decreases, depending on the sign). It could be the density of electric charges or the mass density. Hence the famous theorem that says that the flow (that which passes through a surface) is equal to the integral of the divergence (what remains).\n\t\t\n\t\tThe various expressions of the divergence operator (placed under the form of the nabla operator) in Cartesian, polar, cylindrical, spherical coordinates are the following:\n\t\t\n\t\t\n\t\t\\item The rotational characterizes the existence of a vortex (widely used in fluid mechanics). If there is a whirlwind, we can follow a flow line on a closed curve (closed: from the differential point of view, not the geometrical one!) without it changing direction: the circulation will then not be equal to zero (it is equal to the integral of the rotational (curl)).\n\t\t\n\t\tThe various expressions of the rotational (curl) operator (placed under the form of the nabla operator) in Cartesian, cylindrical, spherical coordinates are the following:\n\t\t\n\t\t\n\t\t\\item The Laplacian of a scalar field gives a scalar field that measures the difference between the value of the function at a point and its average around that point. In other words, the second partial derivative measures the variations of the slope at the point examined in the immediate surroundings and in one dimension at a time. If the second partial derivative is zero in one direction, then the slope is constant in the immediate surroundings along this dimension; this means that the value of the function at the studied point is equal to the average of its neighborhood (in one dimension).\n\t\t\n\t\tThe different expressions of the scalar Laplacian operator (placed under the form of the nabla operator) in Cartesian, polar and spherical coordinates are:\n\t\t\n\t\t\n\t\t\\item As for the Laplacian of a scalar field, the Laplacian of a vector field is only a very convenient notation system for condensing the writing of the components of a vector field.\n\t\t\n\t\tThe different expressions of the vector Laplacian operator (placed under the form of the nabla operator) in Cartesian, cylindrical and spherical coordinates are:\n\t\t\n\t\\end{itemize}\n\tAnd we also have the following list of remarkable identities:\n\t\n\t\n\t\\pagebreak\n\tFinally, let us finish this summary with all the theorems that we have obtained so far in this section, which are named \"\\NewTerm{$1$st order Integral Theorems}\\index{First order Integral Theorems}\":\n\t\\begin{itemize}\n\t\t\\item Gradient theorem (only for uniform fields):\n\t\t\n\t\n\t\t\\item Ostrogradsky theorem (divergence theorem):\n\t\t\n\n\t\t\\item Green theorem (Green-Riemann theorem):\n\t\t\n\t\n\t\t\\item Stokes theorem (curl theorem):\n\t\t\n\t\\end{itemize}\n\t\n\t\\begin{flushright}\n\t\\begin{tabular}{l c}\n\t\\circled{95} & \\pbox{20cm}{\\score{4}{5} \\\\ {\\tiny 99 votes,  84.44\\%}} \n\t\\end{tabular} \n\t\\end{flushright}\n\t\n\t%to make section start on odd page\n\t\\newpage\n\t\\thispagestyle{empty}\n\t\\mbox{}\n\t\\section{Linear Algebra}\n\n\t\\lettrine[lines=4]{\\color{BrickRed}T}here are several approaches to learning Linear Algebra. First a pragmatic way (we'll start with this one because our experience has shown us that it is the one that seems to work best for students) and a more formal way that we will present afterwards. 
We should first warn the reader that linear algebra is a powerful calculation tool that we use enormously in economic and industrial practice in the following areas (see the respective chapters of the book for specific examples): Statistics, Electrotechniques, Finance markets, Numerical optimization methods, Optics, Quantum Physics, Electrodynamics, Relativity, Fluid mechanics, etc. It is then necessary to pay special attention to this subject.\n\t\n\tFirst let us answer two questions from a reader: why is Linear Algebra named like this? And is there a Non-Linear Algebra?\n\t\n\tHere are my answers:\n\n\t\\begin{enumerate}\n\t\t\\item This is named \"Linear Algebra\" because it was first necessary to choose a name... and also because it is a generalization of scalar algebra but with vectors, where the applications are no longer scalar functions but matrix applications whose effect is to act as a linear sum of basis vectors (at least it can be interpreted as such).\n\t\t\\item Officially and to my knowledge there is no non-linear algebra in the same philosophy as what we will see in this section. It seems that some mathematicians have created a \"non-linear algebra\", but it has nothing to do with matrices.\n\t\\end{enumerate}\n\nNow, remember that we studied in the section Calculus how to determine the intersection (if any exists) of two lines in $\\mathbb{R}^2$ given by their equations (we can obviously extend the problem to more than two lines), as it is equivalent to solving a polynomial equation of order $1$, given by:\n\t\nwhere $a_i,b_i \\in \\mathbb{R}$.\n\nThus seeking the value for which:\n\t\nleads us to write:\n\t\nHowever, there is another way of presenting the problem, as we have seen in the Numerical Methods section (\\SeeChapter{see chapter of Theoretical Computing}). Indeed, we can write the problem in the form of a block of equations:\n\t\t\nand as we seek $y_1=y_2=y$, we have:\n\t\t\nThis writing is named, as we have seen in the section of Theoretical Computing (\\SeeChapter{see chapter of Theoretical Computing}), a \"\\NewTerm{linear system}\\index{linear system}\" that we can solve by subtracting or adding the lines between them (the set of solutions stays the same), which gives us:\n\t\nand we see that we fall back on the solution:\n\t\nSo there are two ways to present a problem of intersection of lines:\n\t\\begin{enumerate}\n\t\t\\item In the form of an equation\n\t\t\\item In the form of a system\n\t\\end{enumerate}\nWe will focus, in a part of this section, on the second method, which will allow us, with tools seen in the section of Vector Calculus, to resolve not only the intersections of one or more straight lines but of one or more lines, planes, hyperplanes, etc. in $\\mathbb{R},\\mathbb{R^2},...,\\mathbb{R^n}$ respectively. \n\n\tObviously we will see that Linear Algebra is not only used for this purpose but can also be used to generalize some mathematical models of physics, to express geometrical transformations of vectors or figures in 2D or 3D, to express Markov Chains, special properties of Multivariate Statistics, the calculation of some differential equations (for reliability engineering for example) and much more! 
Many examples are given in this book about the application of Linear Algebra in real life.\n\nBefore attacking the theoretical part, let us present a very interesting example that requires a concept - the determinant - that we will treat rigorously and in detail much further in this section (it seemed to us more pedagogical to approach this subject now rather than making the reader wait through dozens of pages of mathematical developments before reaching the rigorous definition of the determinant).\n\nConsider the system of two linear equations with two unknowns (a system of intersection of lines):\n\t\nIf we solve this, we quickly get (a technique named the \"\\NewTerm{substitution method}\\index{substitution method}\"):\n\t\nIt then comes:\n\t\nand so at the end:\n\t\nand if we define, a little quickly, something named the \"\\NewTerm{determinant}\\index{determinant}\", which we will see rigorously further below, as follows:\n\t\nor with another much more common notation:\n\t\nwe thus have:\n\t\nAnd by proceeding in the same way we get:\n\t\nIt then comes:\n\t\nand so finally we get:\n\t\nIt then appears clear that if:\n\t\t\nthe system has infinitely many solutions. In contrast, the system has no solution if:\n\t\nAnd if the reader repeats (happily...) the procedure for a system of three equations with three unknowns of the type (intersection of hyperplanes):\n\t\nWe then get (after some basic boring algebraic operations):\n\t\nwith:\n\t\n\tIt then appears clear that if:\n\t\n\tthe system has infinitely many solutions. In contrast, the system has no solution if:\n\t\n\tand so on for $n$ equations with $n$ unknowns.\n\n\tHowever, there was a condition to satisfy: as we have seen in the previous example, we could not solve a system of equations with two unknowns if we have only one equation. That is why it is necessary for a system of equations with $n$ unknowns to have at least $n$ equations. Thus, we speak of \"\\NewTerm{systems with $n$ equations in $n$ unknowns}\\index{systems with $n$ equations in $n$ unknowns}\". We also prove that it is necessary and sufficient that the determinant is non-zero for a linear system, whose matrix equivalent is square, to have a unique solution (the concepts of \"determinant\" and \"matrix\" will be defined rigorously further below) and therefore that the matrix corresponding to the system is invertible (non-singular).
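\nThe $2\\times 2$ recipe above is easy to try out numerically. A sketch in Python with SymPy, on a small system of our own choosing ($2x+3y=5$ and $x-4y=-3$):\\\\\n\n\t\\texttt{from sympy import Matrix\\\\\n\tA = Matrix([[2, 3], [1, -4]])  \\# coefficients of the system\\\\\n\tb = Matrix([5, -3])  \\# second member\\\\\n\tD = A.det()  \\# the determinant, here -11\\\\\n\tDx = Matrix([[5, 3], [-3, -4]]).det()  \\# first column replaced by b\\\\\n\tDy = Matrix([[2, 5], [1, -3]]).det()  \\# second column replaced by b\\\\\n\tprint(Dx/D, Dy/D)  \\# x = 1, y = 1\\\\\n\tprint(A.solve(b))  \\# the same solution obtained directly\\\\\n\t}\n\tAs soon as $D=0$ the two quotients break down, which is precisely the degenerate case discussed above.\n\t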
\\pagebreak\n\\subsubsection{Linear Systems}\n\n\\textbf{Definitions (\\#\\mydef):}\n\\begin{enumerate}\n\t\\item[D1.] We name \"\\NewTerm{linear system}\\index{linear system}\", or simply \"\\NewTerm{system}\", any family of equations of the form:\n\t\nwhere each line represents the equation of a line, plane or hyperplane (\\SeeChapter{see section Analytical Geometry}), and the $a_{mn}$ are the \"\\NewTerm{system coefficients}\\index{linear system coefficients}\", the $b_m$ the \"\\NewTerm{coefficients of the second member}\\index{coefficients of the second member}\" and the $x_n$ the \"\\NewTerm{unknowns of the system}\\index{unknowns of the system}\".\n\n\t\\item[D2.] If the system has $n$ unknowns and $n$ equations and has a unique solution, we then name it a \"\\NewTerm{Cramer system}\\index{Cramer system}\" (1750).\n\n\t\\item[D3.] If the coefficients of the second member are all zero, then we say that the system is a \"\\NewTerm{homogeneous system}\\index{homogeneous system}\", so it has at least the trivial solution where all $x_n$ are equal to zero.\n\t\n\t\\item[D4.] We name \"\\NewTerm{homogeneous system associated to the system}\\index{homogeneous system associated to the system}\" the system of equations we get by substituting zeros for the coefficients of the second member ($b_m$).\n\t\\end{enumerate}\nLet us now recall the following items:\n\t\\begin{itemize}\n\t\t\\item The equation of a line (\\SeeChapter{see section Functional Analysis}) is given by:\n\t\t\t\n\t\tby defining $x=x_1$ and $y=x_2$.\n\t\t\\item The equation of a plane (\\SeeChapter{see section Functional Analysis}) is given by:\n\t\t\t\n\t\tby defining $x=x_1, y=x_2, z=x_3$.\n\t\\end{itemize}\nWe often write a linear system in the following condensed form:\n\t\nWe name \"\\NewTerm{system solution}\\index{linear system solution}\" or \"\\NewTerm{vector system solution}\\index{vector system solution}\" any $n$-tuple $(x_1^0,x_2^0,...,x_n^0)$ such that:\n\t\nSolving a system means finding all the solutions of this system (we find many such systems in economics, operational research or design of experiments). Two systems with $n$ unknowns are named \"\\NewTerm{equivalent systems}\\index{equivalent linear systems}\" if every solution of one system is a solution of the other, i.e., if they have the same set of solutions. We sometimes say that the equations of a system are \"\\NewTerm{compatible equations}\\index{compatible equations}\" or \"\\NewTerm{incompatible equations}\\index{incompatible equations}\", depending on whether the system has at least one solution or does not admit any.\n\nWe can of course also give a geometric interpretation to these systems. Suppose that the first members of the equations of the system are not zero. Then we know that each of these equations represents a hyperplane of an affine space (\\SeeChapter{see section Vector Calculus}) of dimension $n$. Therefore, the set of solutions of the system, considered as a set of $n$-tuples of coordinates, is a finite intersection of hyperplanes.\n\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\n\t\n\tnoted conventionally in high-school classes in the form:\n\t\n\tThis system has for solutions the points representing the intersection of the three planes defined by the three equations. But we can see it visually with Maple 4.00b using the following commands:\\\\\n\n\t\\texttt{>with(plots):}\\\\\n\t\\texttt{>implicitplot3d({x-3*z=-3,2*x-5*y-z=-2,x+2*y-5*z=1},x=-3..3,}\n\t\\texttt{y=-3..3,z=-3..3);}\n\t\n\t\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.35]{img/algebra/system_3_equations_3_unknowns.eps}\n\t\\caption{Graphical representation of the linear system}\n\t\\end{figure}\n\t\n\tThere are no solutions. This can be checked by hand or with Maple 4.00b by writing:\\\\\n\t\n\t\\texttt{>solve({x-3*z=-3,2*x-5*y-z=-2,x+2*y-5*z=1},{x,y,z});}\n\t\\end{tcolorbox}\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nFor \"classic\" resolution methods of such systems, we refer the reader to the section on Numerical Methods of the chapter on Computing Science.\n\t\\end{tcolorbox}\t\n\t
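\nThe same conclusion can be reached symbolically outside Maple as well; a sketch in Python with SymPy's linsolve (our own addition):\\\\\n\n\t\\texttt{from sympy import symbols, linsolve\\\\\n\tx, y, z = symbols('x y z')\\\\\n\teqs = [x - 3*z + 3, 2*x - 5*y - z + 2, x + 2*y - 5*z - 1]  \\# all = 0\\\\\n\tprint(linsolve(eqs, x, y, z))  \\# EmptySet: the planes share no point\\\\\n\t}\n\t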
Finally, note an important case in practice: \"\\NewTerm{overdetermined systems}\\index{overdetermined systems}\", where we have more equations than unknowns. The first situation of this type dates from the 18th century, through the study of lunar oscillations, but we also find this frequently in R\\&D laboratories in the context of design of experiments (\\SeeChapter{see section Industrial Engineering}) or in structural equation models (\\SeeChapter{see section Theoretical Computing}).\n\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\nConsider the following special but very telling example of three equations with two unknowns:\n\t\na system that will be written in matrix and vector form as follows:\n\t\nor, said in an \"\\NewTerm{augmented form}\\index{linear system augmented form}\", as follows:\n\t\nand with Maple 4.00b:\\\\\n\n\\texttt{>with(plots):}\\\\\n\\texttt{>implicitplot({2*x+3*y=-1,-3*x+y=-2,-x+y=1},x=-3..3,y=-3..3);}\n\t\\end{tcolorbox}\n\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.7]{img/algebra/overdetermined_system.eps}\n\\caption{Representation of a system of 3 equations with two unknowns with Maple 4.00b}\n\\end{figure}\n\nWe can see on the chart above that this particular system has no overall solution, but it does have a solution if we take the problem pair of equations by pair of equations... which does not necessarily help in practice...\n\t\\end{tcolorbox}\n\nNotice that we have just seen that the system can be written as follows:\n\t\n\n\t\nThis looks like a multiple linear regression system (see section of Theoretical Computing) whose column vector of unknowns can be viewed as the coefficient vector $\\vec{\\beta}$ of the regression line such that:\n\t\nWe then proved in detail in the section of Theoretical Computing that:\n\t\n\non the condition that the square matrix $X^TX$ is, as we will see further below, invertible (non-singular). If this is satisfied, we find a \"\\NewTerm{pseudo-solution}\\index{pseudo linear solution}\" (this is the official terminology...) by making calculations quickly by hand (or with spreadsheet software like Microsoft Excel):\n\t\nand injecting these values in the initial overdetermined system, the reader will quickly understand why we talk about a \"pseudo-solution\"...\n\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\nA reader asked us why, for the example above, we don't just use the relation $\\vec{\\beta}=X^{-1}\\vec{y}$? The answer is simple! Because $X$ is not a square matrix; in other words... in our example it is overdetermined, and this is why we can only find a \"pseudo-solution\".\n\t\\end{tcolorbox}\n
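\nFor the record, the pseudo-solution of this example takes only a few lines to compute. A sketch in Python with NumPy (our own addition; the numbers are those of the system plotted above):\\\\\n\n\t\\texttt{import numpy as np\\\\\n\tX = np.array([[2.0, 3.0], [-3.0, 1.0], [-1.0, 1.0]])\\\\\n\ty = np.array([-1.0, -2.0, 1.0])\\\\\n\tbeta = np.linalg.solve(X.T @ X, X.T @ y)  \\# normal equations\\\\\n\tprint(beta)  \\# approx. [0.273, -0.413]\\\\\n\tprint(np.linalg.lstsq(X, y, rcond=None)[0])  \\# same pseudo-solution\\\\\n\t}\n\tInjecting these values back into the three equations shows, as announced, that none of them is satisfied exactly.\n\t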
This was the pragmatic way of looking at it... now let us turn to the second, slightly more mathematical way... (but still relatively simple):\n\n\\subsection{Linear Transformations}\n\t\n\\textbf{Definition (\\#\\mydef):} A \"\\NewTerm{linear transformation}\\index{linear transformation}\" or \"\\NewTerm{linear application}\\index{linear application}\" $A$ is a mapping of a vector space $E$ to a vector space $F$ such that, with $K$ being $\\mathbb{R}$ or $\\mathbb{C}$:\n\t\nthis constitutes, as a reminder, a homomorphism (\\SeeChapter{see section Set Theory}).\n\nThe first property specifies that the transformation of a sum of vectors must be equal to the sum of the transformed vectors, so that it is linear. The second property specifies that the transformation of a vector to which we apply a scale factor (scaling) must be equal to the same factor applied to the transformation of the original vector. If either of these two properties is not met, the transformation is therefore not linear.\n\nWe now show that any linear transformation can be represented by a matrix!\n\nGiven the basis vectors $\\left(\\vec{v}_1,\\vec{v}_2,...,\\vec{v}_n \\right)$ of $E$ and $\\left(\\vec{u}_1,\\vec{u}_2,...,\\vec{u}_m \\right)$ for those of $F$, with $m \\geq n$. With these bases, we can represent any vectors $\\vec{x} \\in E, \\vec{y} \\in F$ with the following linear combinations (\\SeeChapter{see section Vector Calculus}):\n\t\n\tConsider the linear transformation $A$ which maps $E$ to $F$ ($A:E\\mapsto F$). So:\n\t\t\t\n\twhich we can rewrite as follows:\n\t\n\tBut since $A$ is a linear operator by definition, we can also write:\n\t\n\tConsidering now that the vectors $A(\\vec{v}_j)$ are elements of $F$, we can rewrite them as linear combinations of its basis vectors:\n\t\n\tTherefore, we get:\n\t\n\tBy reversing the order of summations, we can write:\n\t\n\tand rearranging the latter relation, we produce the result:\n\t\n\tFinally, remembering that the basis vectors $\\vec{u}_i$ must be independent, we can conclude that their coefficients must necessarily be zero, so:\n\t\n\tWhich corresponds to the \"\\NewTerm{matrix product}\\index{matrix product}\":\n\t\n\tThat we can write:\n\t\n\tIn other words, any linear transformation can be described by a matrix $A$ that is multiplied with the vector that we want to transform, to obtain the vector resulting from the transformation.\n\t\n\t\\pagebreak\n\t\\subsection{Matrices}\n\tSo we call a \"\\NewTerm{matrix}\\index{matrix}\" with $m$ rows and $n$ columns, or \"\\NewTerm{type $m\\times n$ matrix}\" (the first number always corresponds to the rows and the second to the columns; to remember this there is a good mnemonic trick: President Lincoln - abbreviation of Lin(e) and Col(um)n...), any array of numbers in a ring $\\mathbb{K}$ (which is most of the time $\\mathbb{R}$):\n\t\n\tWe often denote a matrix of type $m\\times n$ briefly by:\n\t\n\tor simply $(a_{ij})$. In a more formal way:\n\t\n\t\n\tThe number $a_{ij}$ is named the \"\\NewTerm{term or coefficient of index $i, j$}\\index{term of a matrix}\". The index $i$ is named the \"\\NewTerm{line index}\\index{row index}\" and the index $j$ the \"\\NewTerm{column index}\\index{column index}\".\n\t\n\tWe denote by $M_{mn}(\\mathbb{K})$ the set of all matrices $m\\times n$ whose coefficients take values in $\\mathbb{K}$ (typically $\\mathbb{R}$ or $\\mathbb{C}$ for example).\n\t\n\tWhen $m=n$, we say that $(a_{ij})$ is a \"\\NewTerm{square matrix of order $n$}\\index{square matrix}\":\n\t\t\n\tIn this case, the terms $a_{11},a_{22},...,a_{nn}$ are named \"\\NewTerm{diagonal terms}\\index{matrix diagonal terms}\" denoted by: \n\t\n\t\n\tWe will assign special symbols to matrices, i.e. uppercase Latin letters $A, B, ...$ for matrices, and for column-matrices, lowercase letters with vector arrows $\\vec{a},\\vec{b},...$.\n\t\n\tWe also name a matrix with a single row a \"\\NewTerm{line-matrix}\\index{line-matrix}\" and a matrix with a single column a \"\\NewTerm{column-matrix}\\index{column-matrix}\". It is clear that a column-matrix is nothing but a \"\\NewTerm{column vector}\\index{column-vector}\" or simply a \"\\NewTerm{vector}\\index{vector}\" (\\SeeChapter{see section Vector Calculus}). 
Thereafter, the rows of a matrix will be treated as line-matrices and the columns as column-matrices.\n\t\n\tWe name \"\\NewTerm{zero matrix}\\index{zero matrix}\", and denote by $0_{mn}$ or simply $\\mathbf{0}$, any matrix in which each term is zero:\n\t\t\n\tNull column-matrices are also designated by the vector symbol: $\\vec{0}$.\n\t\n\tWe name \"\\NewTerm{identity matrix of order $n$}\\index{identity matrix}\" or \"\\NewTerm{unit matrix of order $n$}\\index{unit matrix}\", and denote it by $I$, or simply $\\mathds{1}$, the square matrix of order $n$:\n\t\n\tIt can also be written using the Kronecker delta notation $\\delta_{ij}$ (\\SeeChapter{see section Tensors Calculus}).\n\t\n\tCaution! When we work with matrices having complex coefficients we must always use the term \"identity matrix\" rather than \"unitary matrix\", because in the field of complex numbers the unitary matrix is another mathematical object with which it should not be confused!\n\t\n\tWe will see later that the zero matrix acts as the neutral element of matrix addition and the unit matrix as the neutral element of matrix multiplication.\n\t\n\tThe purpose of the concept of matrix will appear throughout the texts that follow, but the immediate reason for this notion is simply to allow some finite families of numbers to be presented as a rectangular array and to generalize physics theorems to multidimensional spaces.\n\t\n\t\\subsubsection{Rank of a matrix}\n\tWe will now briefly review the definition of the \"\\NewTerm{rank of a finite family}\\index{rank of a finite family}\" that we saw in the section of Vector Calculus.\n\t\n\tAs a reminder, we name \"\\NewTerm{rank}\" of a free family of vectors the dimension (a positive integer number) of the vector subspace $S$ of $E$ that it generates.\n\t\n\t\\textbf{Definition (\\#\\mydef):} Given $(\\vec{a}_1,\\vec{a}_2,...,\\vec{a}_n)$ the columns of a matrix $A$, we name \"\\NewTerm{rank of $A$}\\index{rank of a matrix}\", and denote by $\\text{rk}(A)$, the rank of the family of vectors $(\\vec{a}_1,\\vec{a}_2,...,\\vec{a}_n)$. More precisely, in linear algebra, the rank of a matrix $A$ is the dimension of the vector space generated (or spanned) by its columns. This is the same as the dimension of the space spanned by its rows, as we will see later. It is a measure of the \"nondegenerateness\" of the system of linear equations and linear transformation encoded by $A$ (see earlier above!). There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics!\n\t\n\tIn a slightly more familiar language (...) the rank of a matrix is given by the number of column-matrices that can't be expressed as combinations and scalar multiples of the other column-matrices of the same matrix!\n\t\n\tGiven this definition it is almost obvious that the rank of a matrix is zero if and only if the matrix is the zero matrix $\\mathbf{0}$.\n\t\n\tBefore entering into more formal calculations, let us first see some introductory examples:\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1. The matrix:\n\t\n\thas rank $2$: the first two rows are linearly independent, so the rank is at least $2$, but all three rows are linearly dependent (the first is equal to the sum of the second and third) so the rank must be less than $3$.\\\\\n\t\n\tE2. The matrix:\n\t\n\thas rank $1$: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. 
Similarly, the transpose:

of $A$ has rank $1$. Indeed, since the column vectors of $A$ are the row vectors of the transpose of $A$, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e.:

\end{tcolorbox}
The above examples illustrate this statement: the column rank of a matrix equals its row rank, which is the same as saying that the rank of a matrix is equal to the rank of its transpose.

Before continuing, we would like to indicate to the reader that later we will prove that if the rows of a matrix are independent, its determinant is non-zero, $\det(A)\neq 0$, and therefore $\text{rk}(A)=n$; conversely $\text{rk}(A)<n$ when $\det(A)=0$.

\textbf{Definition (\#\mydef):} A matrix is said to have "\NewTerm{full rank}\index{full rank}" if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be "\NewTerm{rank deficient}\index{rank deficient}" if it does not have full rank.

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
If there is difficulty in determining the rank of a matrix, there is a technique named "\NewTerm{matrix scaling}\index{matrix scaling}" (reduction to row echelon form), which we will see just below, that can do this work very quickly.
\end{tcolorbox}
\textbf{Definition (\#\mydef):} We name "\NewTerm{system associated matrix}\index{system associated matrix}":

the mathematical object defined by:

that is to say, the matrix $A$ whose terms are the coefficients of the system. We name "\NewTerm{matrix of the second member of the linear system}\index{matrix of the second member of the linear system}", or simply "\NewTerm{second member of the system}", the column-matrix $\vec{b}=(b_i)$ whose terms are the coefficients of the second member of this system. We also name "\NewTerm{augmented matrix associated with the system}\index{augmented matrix associated of a system}" the matrix obtained from $A$ by adding $\vec{b}=(b_i)$ as the $(n + 1)$-th column.

Consider now a system of associated matrix $A$ and second member $\vec{b}$. Let us denote, as always, the columns of $A$ by $(\vec{a}_1,\vec{a}_2,...,\vec{a}_n)$. The system can then be written equivalently as a linear vector equation:

Now remember a theorem that we saw and proved in the section of Vector Calculus: for the rank of a family of vectors $(\vec{x}_1,\vec{x}_2,...,\vec{x}_n)$ to be equal to the rank of the augmented family $(\vec{x}_1,\vec{x}_2,...,\vec{x}_n,\vec{y})$, it is necessary and sufficient that the vector $\vec{y}$ be a linear combination of the vectors $(\vec{x}_1,\vec{x}_2,...,\vec{x}_n)$.

It follows that our linear system in vector form has at least one solution $(\vec{x}_1^0,\vec{x}_2^0,...,\vec{x}_n^0)$ if the rank of the family $(\vec{a}_1,\vec{a}_2,...,\vec{a}_n)$ is equal to the rank of the augmented family $(\vec{a}_1,\vec{a}_2,...,\vec{a}_n,\vec{b})$, and this solution is unique if and only if the rank of the family $(\vec{a}_1,\vec{a}_2,...,\vec{a}_n)$ is equal to $n$.

Thus, for a linear system of associated matrix $A$ and second member $\vec{b}$ to admit at least one solution, it is necessary and sufficient that the rank of $A$ be equal to the rank of the augmented matrix $(A|\vec{b})$. If this condition is met, the system admits a unique solution if and only if the rank of $A$ is equal to the number of unknowns, in other words: if the columns of $A$ are linearly independent!
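Here is a small numeric illustration of this rank criterion (the matrices are our own choice):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
Take $A=\begin{pmatrix}1 & 1\\ 2 & 2\end{pmatrix}$ and $\vec{b}=\begin{pmatrix}2\\ 4\end{pmatrix}$. Then $\text{rk}(A)=\text{rk}(A|\vec{b})=1$: the system has solutions, but since $1<n=2$ there is an infinity of them. With $\vec{b}=\begin{pmatrix}2\\ 5\end{pmatrix}$ instead, $\text{rk}(A)=1$ while $\text{rk}(A|\vec{b})=2$: the system has no solution. Finally, with $A=\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$ and $\vec{b}=\begin{pmatrix}2\\ 0\end{pmatrix}$, we have $\text{rk}(A)=\text{rk}(A|\vec{b})=2=n$: the unique solution is $x_1=x_2=1$.
\end{tcolorbox}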
We say that a matrix is "staggered" (in row echelon form) if its rows satisfy the following two conditions:
\begin{enumerate}
    \item[C1.] Any null (all-zero) row is followed only by null rows

    \item[C2.] The leading coefficient (the first nonzero number from the left, also named the "\NewTerm{pivot}") of a nonzero row is always strictly to the right of the leading coefficient of the row above it
\end{enumerate}
These two conditions imply that all entries in a column below a leading coefficient are zeros.

A non-zero row echelon matrix is therefore of the form (obtained by adding and subtracting rows between them):

where $j_1<j_2<...<j_r$ and $a_{1j_1},a_{2j_2},...,a_{rj_r}$ are nonzero terms. The terminal zero rows may be missing.

The columns of index $j_1,j_2,...,j_r$ of an echelon matrix are clearly linearly independent. Considering the other columns also as column-vectors, we deduce that they are necessarily linear combinations of those of index $j_1,j_2,...,j_r$, and therefore that the rank of the echelon matrix $M$ is $\text{rk}(M)=r$.

We will note that $r$ is also the number of nonzero rows of the echelon matrix, and thus also the rank of its rows, since the nonzero rows are clearly independent (we proved in the section Vector Calculus that the ranks of rows and of columns have the same value, with the same properties of independence).

We can therefore allow ourselves a certain number of elementary (extra) operations on the rows of matrices, which will be very useful to us, without changing the rank:
\begin{enumerate}
    \item[P1.] We can swap two rows.
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    As we know, a matrix can be seen as just an aesthetic graphic representation of a linear system. So swapping two rows does not change the system.
    \end{tcolorbox}

    \item[P2.] We can multiply a row by a nonzero scalar.
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    This obviously does not alter the linear independence of the row vectors.
    \end{tcolorbox}

    \item[P3.] We can add to a row a multiple of another row.
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    The original row disappears in favor of the new one, which is independent of all the (former) others. The system thus remains linearly independent.
    \end{tcolorbox}
\end{enumerate}
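Here is a small worked reduction using only these operations (the matrix is our own numeric illustration; the general algorithm is formalized right below):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
\[
\begin{pmatrix}0 & 2 & 4\\ 1 & 1 & 1\\ 2 & 4 & 8\end{pmatrix}
\xrightarrow{\text{P1: swap } R_1,R_2}
\begin{pmatrix}1 & 1 & 1\\ 0 & 2 & 4\\ 2 & 4 & 8\end{pmatrix}
\xrightarrow{\text{P3: } R_3-2R_1}
\begin{pmatrix}1 & 1 & 1\\ 0 & 2 & 4\\ 0 & 2 & 6\end{pmatrix}
\xrightarrow{\text{P3: } R_3-R_2}
\begin{pmatrix}1 & 1 & 1\\ 0 & 2 & 4\\ 0 & 0 & 2\end{pmatrix}
\]
The result is in row echelon form with $r=3$ nonzero rows, so the rank of the initial matrix is $3$.
\end{tcolorbox}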
Any matrix can be transformed into row echelon form by a finite sequence of the previous operations; here is how.
\begin{enumerate}
    \item Pivot the matrix:
    \begin{enumerate}
        \item Find the pivot, the first non-zero entry in the first column of the matrix.
        \item Interchange rows, moving the pivot row to the first row.
        \item Multiply each element in the pivot row by the inverse of the pivot, so that the pivot equals $1$.
        \item Add multiples of the pivot row to each of the lower rows, so that every element in the pivot column of the lower rows equals $0$.
    \end{enumerate}
    \item To get the matrix in row echelon form, repeat the pivoting:
    \begin{enumerate}
        \item Repeat the procedure from Step 1 above, ignoring previous pivot rows.
        \item Continue until there are no more pivots to be processed.
    \end{enumerate}
    \item To get the matrix in reduced row echelon form, process the non-zero entries above each pivot:
    \begin{enumerate}
        \item Identify the last row having a pivot equal to $1$, and let this be the pivot row.
        \item Add multiples of the pivot row to each of the upper rows, until every element above the pivot equals $0$.
        \item Moving up the matrix, repeat this process for each row.
    \end{enumerate}
\end{enumerate}
It is therefore obvious that the elementary operations on the rows of a matrix do not change the rank of the rows of the matrix. However, we know that the row rank of a matrix is equal to the rank of its columns, that is to say to the rank of this matrix (once again, see the section Vector Calculus for the proof). We conclude that the column rank of any matrix of type $m\times n$ is also equal to the rank of the rows of this matrix.

As a corollary of this conclusion, it appears that:

When solving linear systems of $m$ equations with $n$ unknowns it appears, as we have already noted at the beginning of this section (and with a practical example in the section of Theoretical Computing), that there must be at least as many equations as unknowns, or more rigorously: the number of unknowns must be less than or equal to the number of equations, such that:


\pagebreak
\subsubsection{Matrix Algebra}
Remember that we saw during our study of Vector Calculus that the algebraic operations of multiplication of a vector by a scalar, addition or subtraction of vectors, and the operation of scalar product formed, in the context of set theory, a "vector space" (\SeeChapter{see section Set Theory}), possessing therefore a "vector algebraic structure". This under the condition, of course, that the vectors have the same dimensions (this observation, for the record, is not valid if instead of the scalar product we take the cross product).

Just as with vectors, we can multiply a matrix by a scalar and add (subtract) matrices together (as long as they have the same dimensions...), but in addition, we can also multiply two matrices together under certain conditions which we will define below. This will also make the set of matrices, in the set-theoretic sense, a vector space over $\mathbb{K}$ (being most of the time $\mathbb{R}$), having therefore a "vector algebraic structure".

Thus, a vector may also be viewed as a particular matrix of dimension $m\times n$ and operates in the vector space of matrices.
Basically..., vector calculus is only a special case of linear algebra!!! This is why at school people learn (after Calculus) Vector Calculus first and Linear Algebra later (and some will learn Tensor Calculus afterwards).

\begin{enumerate}
    \item[D1.] Given $A,B\in M_{mn}(\mathbb{R})$. We name "sum of $A$ and $B$" the matrix $C\in M_{mn}(\mathbb{R})$ whose coefficients are:
    \[
    c_{ij}=a_{ij}+b_{ij}
    \]
    That is to say explicitly:


    \item[D2.] Given $A\in M_{mn}(\mathbb{R})$ a matrix and $\lambda\in \mathbb{R}$ a scalar (we can also take it in $\mathbb{C}$ if we want). We name "\NewTerm{product of $A$ by $\lambda$}" the matrix whose coefficients are:
    \[
    (\lambda A)_{ij}=\lambda a_{ij}
    \]
    That is to say explicitly:

    From the two previous definitions we can actually conclude that the space/set of matrices is a vector space and thus has a vector algebraic structure.

    \item[D3.] Let $E, F, G$ be three vector spaces with bases respectively $\mathcal{E},\mathcal{F},\mathcal{G}$ and two linear applications $f$ and $g$ (see the section Set Theory for a refresher).

    Let us denote by $A$ the matrix of $f$ with respect to the bases $\mathcal{E},\mathcal{F}$, and by $B$ the matrix of $g$ with respect to the bases $\mathcal{F},\mathcal{G}$. Then the matrix $C$ of $g\circ f$ (see the definition of a composite function in the section of Functional Analysis) with respect to the bases $\mathcal{E},\mathcal{G}$ is equal to the product of $B$ by $A$, denoted simply by $BA$.

    So let $B\in M_{mn}(\mathbb{R})$ and $A\in M_{np}(\mathbb{R})$; we name "\NewTerm{matrix product}\index{matrix product}" or "\NewTerm{matrix multiplication}" of $A$ and $B$, and denote by $BA$, the matrix $C\in M_{mp}(\mathbb{R})$ whose components are:
    \[
    c_{ij}=\sum_{k=1}^{n}b_{ik}a_{kj}
    \]
    It is important to notice that, as opposed to addition, $A$ and $B$ may have different dimensions. However! the number of rows of $A$ must be equal to the number of columns of $B$, as indicated by the index $n$ shared by the two matrices. So in the product $BA$, if $B$ is an $m\times n$ matrix, $A$ must be an $n\times p$ matrix, for any $p$!

    Schematically:
    {\Huge{
    \[
    \framebox[2.5cm]{\clap{\raisebox{0pt}[1.5cm][1.5cm]{$\mat C$}}\subdims{-2.5cm} n p} =
    \framebox[1.5cm]{\clap{\raisebox{0pt}[1.5cm][1.5cm]{$\mat B$}}\subdims{-2.5cm} m n} \ 
    \framebox[2.5cm]{\clap{\raisebox{5mm}[1.5cm]{$\mat A$}}     \subdims{-1cm} n p} 
    \]}}\\\\

    or even more explicitly (\NewTerm{Falk's scheme}\index{Falk's scheme}):
    \begin{figure}[H]
    \centering
        \includegraphics[scale=0.9]{img/algebra/falks_scheme.jpg}
        \caption{Matrix Product Falk's scheme (credit: Alain Matthes)}
    \end{figure}
\end{enumerate}
By noting matrices with uppercase Latin letters and scalars with lowercase Greek letters, the reader can easily verify with what we have seen \underline{until now} (we can add proofs on request) the following properties of matrix algebra (the matrices are assumed to have adequate dimensions):
\begin{enumerate}
    \item[P1.] Left distributivity: $A(B+C)=AB+AC$
    \item[P2.] Right distributivity: $(A+B)C=AC+BC$
    \item[P3.] Scalar association: $(\lambda A)B=\lambda(AB)=A(\lambda B)$
    \item[P4.] Associativity: $(AB)C=A(BC)$
    \item[P5.] Non-commutativity: in general $BA\neq AB$
    \item[P6.] Absorbing element: $A\mathbf{0}=\mathbf{0}$
    \item[P7.] Neutral element for addition: $A+\mathbf{0}=A$
\end{enumerate}
It is especially important to remember property P5, which shows that multiplication is obviously not commutative (for dimensions greater than $1$ of course!), and also property P4, namely that matrix multiplication is associative.

Concerning the general claim that commutativity fails, we must pass through a numerical example (because even the general case, without replacing the algebraic terms by numerical values, will not show you much in our point of view...).
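Here is such a counterexample (the two matrices are our own choice):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
With
\[
A=\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix},\qquad B=\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}
\]
the explicit component formula gives:
\[
AB=\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}\neq
BA=\begin{pmatrix}1 & 1\\ 1 & 2\end{pmatrix}
\]
so matrix multiplication is indeed not commutative in general.
\end{tcolorbox}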
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The set of square matrices $M_{nn}(\mathbb{R})$ of order $n$ with components in $\mathbb{R}$, provided with the sum and the usual matrix multiplication, forms a ring (\SeeChapter{see section Set Theory}). This is true more generally if the coefficients of the matrices are taken in any ring: for example, the set of matrices $M_{nn}(\mathbb{Z})$ with integer components is a ring.
\end{tcolorbox}
One reader asked us to prove the property of associativity. So let us begin!

Let $A\in M_{kl}(\mathbb{C}),B\in M_{lm}(\mathbb{C}),C\in M_{mn}(\mathbb{C})$; then for $1\leq i\leq k,1\leq j\leq n$ we have indeed (we use the explicit expression of the components of a matrix product, seen above, several times as you can see):
\[
\bigl((AB)C\bigr)_{ij}=\sum_{q=1}^{m}\Bigl(\sum_{p=1}^{l}a_{ip}b_{pq}\Bigr)c_{qj}
=\sum_{p=1}^{l}a_{ip}\Bigl(\sum_{q=1}^{m}b_{pq}c_{qj}\Bigr)=\bigl(A(BC)\bigr)_{ij}
\]

\subsubsection{Type of Matrices}
To simplify the notations and the length of calculations, we now introduce the most common types of matrices that the reader will encounter throughout his reading of this book (and not just in the chapters on pure mathematics!).

Some definitions will only be recalls!

We denote by $M_{mn}(\mathbb{K})$ the set of all $m\times n$ matrices whose coefficients take values in $\mathbb{K}$ (typically $\mathbb{R}$ or $\mathbb{C}$ for example).

\textbf{Definitions (\#\mydef):}
\begin{enumerate}
    \item[D1.] When $m=n$, we say that $(a_{ij})$ is a "\NewTerm{square matrix of order $n$}\index{square matrix}":


    \item[D2.] We name "\NewTerm{zero matrix}\index{zero matrix}", and denote by $0_{mn}$ or simply $\mathbf{0}$, any matrix in which each term is zero:


    \item[D3.] We name "\NewTerm{identity matrix of order $n$}\index{identity matrix}" or "\NewTerm{unit matrix of order $n$}\index{unit matrix}", and denote by $I$, or simply $\mathds{1}$, the square matrix of order $n$:


    It can also be written using the Kronecker delta notation $\delta_{ij}$ (\SeeChapter{see section Tensors Calculus}).

    \item[D4.] We name "\NewTerm{diagonal matrix}\index{diagonal matrix}" any square matrix $A\in M_{nn}(\mathbb{C})$ in which only the diagonal has non-null elements:

    Formally:

    The usual notation for a diagonal matrix is:


    \item[D5.] We name "\NewTerm{lower triangular matrix}\index{lower triangular matrix}" a square matrix in which all the entries above the main diagonal are zero:

    Formally:

    Similarly, a square matrix is named an "\NewTerm{upper triangular matrix}\index{upper triangular matrix}" if all the entries below the main diagonal are zero:

    Formally:

    A "\NewTerm{triangular matrix}\index{triangular matrix}" is one that is either lower triangular or upper triangular.
A matrix that is both upper and lower triangular is named a "diagonal matrix".

    If an upper triangular matrix is obtained from a "staggered" (echelon) matrix, we will write it as follows:


    \item[D6.] Given $M_{nn}$ a square matrix. The matrix $M_{nn}$ is named an "\NewTerm{invertible matrix}\index{invertible matrix}" or "\NewTerm{regular matrix}\index{regular matrix}" or "\NewTerm{non-singular matrix}\index{non-singular matrix}" if and only if there exists a matrix $M_{nn}^{-1}$ such that:

    If this is not the case, we say that $M_{nn}$ is a "\NewTerm{singular matrix}\index{singular matrix}". We will prove later that a necessary and sufficient condition for a square matrix to be invertible (non-singular) is that its determinant be non-zero.

    This definition is fundamental; it has extremely important consequences in all of linear algebra and also in physics (solving linear systems, determinants, eigenvectors and eigenvalues, etc.), statistics and finance, and it is appropriate to remember it.

    \item[D7.] Given a matrix $A_{mn}:=A$:

    We name "\NewTerm{transposed matrix}\index{transposed matrix}" of $A=A_{mn}$ the matrix, denoted by $A^T=A_{nm}$ (depending on the books and teachers, the superscript $T$ is uppercase or lowercase and placed on the left or on the right, but the standard ISO 80000-2:2009 recommends the capital superscript on the top right), for which we transpose the rows into columns and the columns into rows:

    Here are some interesting properties of the transpose of a matrix (which will be useful to us later in this section for a famous theorem, and also in the study of the multiple linear regression methods in the section of Numerical Methods!):
    \begin{enumerate}
        \item[P1.] $(A^T)^T=A$
        \item[P2.] $(\lambda A+B)^T=\lambda A^T+B^T,\lambda\in \mathbb{R}$
        \item[P3.] $(AB)^T=B^TA^T$
        \item[P4.] $(A^{-1})^T=(A^T)^{-1}$ whenever $A^{-1}$ exists
        \item[P5.] $A\vec{x}\circ\vec{y}=\vec{x}\circ A^T\vec{y}$
    \end{enumerate}
    The transposed matrix is very important in physics, statistics and finance, and obviously in mathematics, for example in the context of the theory of groups and symmetries! So it is also worth remembering its definition.

    As the third property is the most used one in the various sections of this book, let us prove it by considering $A\in M_{lm}(\mathbb{C}),B\in M_{mn}(\mathbb{C})$:
    \begin{dem}
    Remembering the explicit relation of matrix multiplication seen earlier:
    \[
    \left((AB)^T\right)_{ij}=(AB)_{ji}=\sum_{k=1}^{m}a_{jk}b_{ki}
    \]
    But in this last equality, we note that we browse $B$ by rows and $A$ by columns for fixed $i$ and $j$, and this, we know, corresponds to a matrix multiplication; therefore:
    \[
    \left((AB)^T\right)_{ij}=\sum_{k=1}^{m}(B^T)_{ik}(A^T)_{kj}=(B^TA^T)_{ij}
    \]
    Finally we have indeed:
    \[
    (AB)^T=B^TA^T
    \]
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}
    And for the same reasons let us prove the second-to-last property.
    \begin{dem}
    First, it is trivial that if $A$ is invertible:
    \[
    AA^{-1}=\mathds{1}
    \]
    and taking the transpose on both sides of the equality we find (we use the property proved just before):
    \[
    (A^{-1})^TA^T=\mathds{1}^T=\mathds{1}
    \]
    The latter equality shows obviously that $(A^{-1})^T$ is the inverse of $A^T$, that is to say:
    \[
    (A^{-1})^T=(A^T)^{-1}
    \]
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[D8.] Given:

    a matrix of $M_{mn}(\mathbb{C})$.
We name "\NewTerm{adjoint matrix}\index{adjoint matrix}" of $A$ the matrix of $M_{nm}(\mathbb{C})$ defined by:

    which is the complex conjugate of the transposed matrix or, if you prefer..., the transposed matrix of the complex conjugate (in the case of real components... we obviously don't need to take the conjugate!). To simplify the notations we simply denote this matrix $A^\dagger$ (a notation frequently used in Quantum Physics and Set Algebra).
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    A trivial relation (which is often used in Quantum Field Physics), already proved just before and obviously valid when the components are in $\mathbb{C}$:

    \end{tcolorbox}

    \item[D9.] By definition, a matrix is named a "\NewTerm{Hermitian matrix}\index{Hermitian matrix}" or "\NewTerm{self-adjoint matrix}\index{self-adjoint matrix}"... if it is equal to its own adjoint (conjugate transpose matrix), such that:


    \item[D10.] Given $A$ a square matrix of $M_{nn}(\mathbb{R})$, the "\NewTerm{trace}\index{trace of a matrix}" of $A$, denoted $\text{tr}(A)$, is defined as the sum of the terms of the diagonal (very useful in some statistical techniques):
    \[
    \text{tr}(A)=\sum_{i=1}^{n}a_{ii}
    \]
    Some useful related relations (we can add the detailed proofs on demand):

    and:


    \item[D11.] A matrix $A$ is named a "\NewTerm{nilpotent matrix}\index{nilpotent matrix}" if, by multiplying it successively by itself, it eventually gives zero. Explicitly, if there exists an integer $k$ such that:
    \[
    A^k=\mathbf{0}
    \]
    If the matrix $A$ multiplied by itself gives $A$... then we talk about an "\NewTerm{idempotent matrix}\index{idempotent matrix}".

    Such matrices are for example very common in Markov chains, where the transition matrix contains probabilities (see the sections of Probabilities and Graph Theory).
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    To remember this name, we can decompose it into "nil", which means "zero", and "potent", which means "potential". Something "nilpotent" is therefore something that is potentially zero....
    \end{tcolorbox}

    \item[D12.] A matrix $A$ is named an "\NewTerm{orthogonal matrix}\index{orthogonal matrix}" if its elements are real and if it obeys:
    \[
    A^TA=AA^T=\mathds{1}
    \]
    which can be translated into (where $\delta_{ij}$ is the Kronecker symbol):
    \[
    \sum_{k=1}^{n}a_{ki}a_{kj}=\delta_{ij}
    \]
    The column vectors of the matrix are thus orthogonal to each other, as the operation above can be seen as a row-column dot product. Therefore an orthogonal matrix also represents an orthogonal basis!

    A typical mathematical example is the matrix of the canonical orthonormal basis (\SeeChapter{see section Vector Calculus}):

    or a well-known matrix in quantum physics (\SeeChapter{see section Quantum Computing}):

    \begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
    \textbf{R1.} Therefore this is typically the case for the canonical basis matrix, or any diagonalized matrix.\\

    \textbf{R2.} If, instead of just taking a matrix with real coefficients, we take complex coefficients together with the complex transposed matrix (adjoint matrix), then we say (sadly...
because it creates confusion with the name of another matrix already defined) that $A$ is a "unitary matrix" if it satisfies the previous relation!
    \end{tcolorbox}
    We will come back later, after having introduced the concepts of eigenvectors and eigenvalues, to a particular and very important case of orthogonal matrices (named "translation matrices").

    Let us also mention another property of orthogonal matrices, important in geometry, physics and statistics.

    \begin{theorem}
    Given $f(\vec{x})=A\vec{x}+\vec{b}$, where $A$ is an orthogonal matrix and $\vec{b}\in \mathbb{R}^n$. Then $f$ (respectively $A$) is an isometry. That is to say:

    So in other words: orthogonal matrices are linear mappings which preserve the norm (the distance)!!!
    \end{theorem}
    \begin{dem}
    Using property P5 of the transpose, for any $\vec{z}\in\mathbb{R}^n$:
    \[
    \left\|A\vec{z}\right\|^2=A\vec{z}\circ A\vec{z}=\vec{z}\circ A^TA\vec{z}=\vec{z}\circ\vec{z}=\left\|\vec{z}\right\|^2
    \]
    and we have indeed:

    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[D13.] Given a square matrix $A \in M_{nn}$. The matrix $A$ is named a "\NewTerm{symmetric matrix}\index{symmetric matrix}" if and only if:
    \[
    A^T=A
    \]
    We will meet this definition again in the section of Tensor Calculus.

    \item[D14.] Given a square matrix $A \in M_{nn}$. The matrix $A$ is named an "\NewTerm{anti-symmetric matrix}\index{anti-symmetric matrix}" if and only if:
    \[
    A^T=-A
    \]
    which requires that:
    \[
    a_{ii}=0
    \]

    \item[D15.] Let $E$ be a vector space of dimension $n$ and $\mathcal{B},\mathcal{B}'$ two bases of $E$:

    We name "\NewTerm{transition matrix}\index{transition matrix}" from the basis $\mathcal{B}$ to the basis $\mathcal{B}'$, and denote by $P$, the square matrix of $M_{nn}(\mathbb{K})$ whose columns are formed of the components of the vectors of the basis $\mathcal{B}'$ in the basis $\mathcal{B}$ (see further below the detailed treatment of basis changes for more information).

    We consider a vector $\vec{x}(x_1,x_2,...,x_n)$ of $E$, which is written in the bases $\mathcal{B}(\vec{e}_1,\vec{e}_2,...,\vec{e}_n)$ and $\mathcal{B}'(\vec{e}_1^{'},\vec{e}_2^{'},...,\vec{e}_n^{'})$ following the relations:

    with:

    the vector of $\mathbb{K}^n$ formed of the components of $\vec{x}$ in the basis $\mathcal{B}$, and similarly for the vector formed of the components in the basis $\mathcal{B}^{'}$. So:

    a relation for which the detailed proof will be given later in our study of basis changes. We also obviously have:

    \begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
    \textbf{R1.} When a vector is given and its basis is not specified, remember that it is implicitly expressed in the canonical basis:

    whose matrix (the identity) leaves any vector invariant under multiplication; and when the basis used is denoted by $(\vec{e}_i)$ without being specified, it is also the canonical basis.\\

    \textbf{R2.} If a vector is given relative to the canonical basis, its components are named "\NewTerm{covariant}\index{covariant components}"; if they are expressed in another, noncanonical basis, then we say that the components are "\NewTerm{contravariant}\index{contravariant components}" (for details on the subject see the section of Tensor Calculus).
    \end{tcolorbox}

    \item[D16.]
A matrix is named a "\NewTerm{positive-definite matrix}\index{positive-definite matrix}" (which will be useful in the section of Theoretical Computing for some important engineering techniques, and in quantitative finance for the qualitative estimation of the correlation matrix) if:
    \[
    \vec{x}^TA\vec{x}>0\quad \forall \vec{x}\neq\vec{0}
    \]
    and a "\NewTerm{positive matrix}\index{positive matrix}" or "\NewTerm{semi-positive matrix}\index{semi-positive matrix}" if:
    \[
    \vec{x}^TA\vec{x}\geq 0\quad \forall \vec{x}
    \]
    We proved in our study of the covariance matrix in the section Statistics that a semi-positive matrix has eigenvalues which are all positive OR null, while if it is positive-definite its eigenvalues are all positive AND non-null.

    \item[D17.] A symmetric matrix having all its components positive and only zeros on the diagonal is named a "\NewTerm{distance matrix}\index{distance matrix}" (we will meet this type of matrix several times in Data Mining techniques during our study of the Numerical Methods section).

    \item[D18.] A matrix is named a "\NewTerm{sparse matrix}\index{sparse matrix}" if it contains a significant number of null values. In numerical methods, there are algorithms that use this specificity to optimize the storage of this type of matrix (used in OLAP cubes and financial engineering).
\end{enumerate}
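To close this catalogue, here is a quick numeric check of two of these definitions (the matrices are our own choice):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
The matrix $N=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$ is nilpotent, since $N^2=\mathbf{0}$ (here $k=2$), while the matrix $P=\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}$ is idempotent, since $P^2=P$: it projects any vector onto its first component.
\end{tcolorbox}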
\pagebreak
\subsubsection{Determinant}
We will look at determinants from the point of view of the physicist or the engineer (the mathematician's point of view is rather off-putting...). In physics (whether in classical mechanics or quantum field physics), chemistry or engineering, we frequently have to solve linear systems. But we have now seen that a linear system:

can be written as:

and we know that the only solvable linear systems (in the sense that they have a unique solution!!!) are those that have as many equations as unknowns and whose determinant is not zero! Thus, the matrix must be a square matrix $M_{mm}$.

If a solution exists, then there is a column matrix (or "vector") $X$ such that $AX=B$, which involves:
\[
X=A^{-1}B
\]
What does this relation impose? Well, this is relatively simple, but at the same time very important: for a linear system to have a unique solution, it is necessary that the matrix be invertible (non-singular)! What is the relation with the concept of "determinant" then? It's simple: mathematicians sought how to write the inverse matrices of linear systems for which they knew there was a unique solution, and they arrived, after trial and error, at a kind of formula to assess whether the matrix is invertible (non-singular) or not. Once this formula was found, they formalized (as they know so well how to do...), with great rigor, the concept surrounding this formula, which they named the "\NewTerm{determinant}\index{determinant}". They did it so well that we sometimes forget that they found it by trial and error...
\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]
If the matrix of a linear system is not invertible (i.e. is singular), this has the consequence that there is either no solution or an infinity of solutions (as usual...).
\end{tcolorbox}
Below we first focus on how to build the determinant by defining a particular type of application. Then, after seeing a simple and readable example of the calculation of a determinant, we will focus on determining its formula in the general case. Finally, once this is done, we will see what the relation is between the inverse of a matrix and the determinant!!!

In what follows, all vector spaces will be considered of finite dimension over the field of complex numbers $\mathbb{C}$ (those who prefer can take $\mathbb{R}$ as the base field; in fact we could take any field).

First of all we will do a little bit of pure math (a bit off-putting) before moving on to concrete stuff.

Given $V$ a vector space, we will write as usual $V^n$ instead of $V\times V\times... \times V$. $(\vec{e}_1,\vec{e}_2,...,\vec{e}_n)$ designates the canonical basis of $\mathbb{R}^n$. $M_n(\mathbb{R})$ is the set of square $n\times n$ matrices with coefficients in $\mathbb{R}$.

\textbf{Definitions (\#\mydef):}
\begin{enumerate}
    \item[D1.] A "\NewTerm{multilinear application}\index{multilinear application}" on a space $V$ is an application $\varphi: V^n \rightarrow \mathbb{R}$ which is linear in each of its arguments. Meaning:
    \[
    \varphi(\vec{x}_1,...,\lambda\vec{u}+\mu\vec{v},...,\vec{x}_n)=\lambda\,\varphi(\vec{x}_1,...,\vec{u},...,\vec{x}_n)+\mu\,\varphi(\vec{x}_1,...,\vec{v},...,\vec{x}_n)
    \]
    for any $\lambda,\mu\in K$ and $\vec{x}_i,\vec{u},\vec{v}\in V$.
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    A non-null multilinear application is not a linear application of the space $V^n$ into $\mathbb{R}$, except if $n=1$. Indeed, this can be verified by comparing the definition of a linear application with that of a multilinear application:

    \end{tcolorbox}
    \item[D2.] An "\NewTerm{alternated multilinear application}\index{alternated multilinear application}" on $V$ is by definition a multilinear application that satisfies:
    \[
    \varphi(\vec{x}_1,...,\vec{x}_j,\vec{x}_{j+1},...,\vec{x}_n)=0 \quad \text{whenever } \vec{x}_j=\vec{x}_{j+1}
    \]
    for any $j=1...n-1$, $\vec{x}_j\in V$. Therefore the permutation of two consecutive vectors changes the sign of $\varphi$.
    \begin{theorem}
    Indeed, if $\varphi$ is an alternated multilinear application, then for all $\vec{x}_j\in V,j=1...n$ we have:
    \[
    \varphi(\vec{x}_1,...,\vec{x}_{j+1},\vec{x}_j,...,\vec{x}_n)=-\varphi(\vec{x}_1,...,\vec{x}_j,\vec{x}_{j+1},...,\vec{x}_n)
    \]
    \end{theorem}
    \begin{dem}
    If $\varphi$ is alternated we have by definition:
    \[
    \varphi(...,\vec{x}_j+\vec{x}_{j+1},\vec{x}_j+\vec{x}_{j+1},...)=0
    \]
    And since $\varphi$ is a multilinear application we can expand, the two terms with a repeated argument vanishing:
    \[
    \varphi(...,\vec{x}_j,\vec{x}_{j+1},...)+\varphi(...,\vec{x}_{j+1},\vec{x}_j,...)=0
    \]
    Therefore, by rearranging, the sign changes under the permutation.
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}
    Now comes the interesting stuff:

    \item[D3.] A "\NewTerm{determinant}\index{determinant}" is by definition an alternated multilinear application:
    \[
    D:(\mathbb{R}^n)^n\rightarrow\mathbb{R}
    \]
    satisfying as well:
    \[
    D(\vec{e}_1,\vec{e}_2,...,\vec{e}_n)=1
    \]
    \begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
    The columns of a matrix form $n$ vectors, and we then see that a determinant $D$ on $\mathbb{R}^n$ induces an application $D$ of $M_n(\mathbb{R})\mapsto \mathbb{R}$ (where, as we know, $M_n(\mathbb{R})$ is the set of square $n\times n$ matrices with components in $\mathbb{R}$) defined by:
    \[
    D(M)=D(\vec{m}_1,\vec{m}_2,...,\vec{m}_n)
    \]
    where $\vec{m}_i$ is the $i$-th column of $M$.
    \end{tcolorbox}
\end{enumerate}
Let us study the case $n=2$.
If $D$ is a determinant, for any vectors:
\[
\vec{v}_1=a_{11}\vec{e}_1+a_{21}\vec{e}_2,\qquad \vec{v}_2=a_{12}\vec{e}_1+a_{22}\vec{e}_2
\]
we have:
\[
D(\vec{v}_1,\vec{v}_2)=D(a_{11}\vec{e}_1+a_{21}\vec{e}_2,\ a_{12}\vec{e}_1+a_{22}\vec{e}_2)
\]
As $D$ is multilinear, we have:
\[
D(\vec{v}_1,\vec{v}_2)=a_{11}a_{12}D(\vec{e}_1,\vec{e}_1)+a_{11}a_{22}D(\vec{e}_1,\vec{e}_2)+a_{21}a_{12}D(\vec{e}_2,\vec{e}_1)+a_{21}a_{22}D(\vec{e}_2,\vec{e}_2)
\]
As it is alternated, $D(\vec{e}_1,\vec{e}_1)=D(\vec{e}_2,\vec{e}_2)=0$ and $D(\vec{e}_2,\vec{e}_1)=-D(\vec{e}_1,\vec{e}_2)$, so it remains:
\[
D(\vec{v}_1,\vec{v}_2)=(a_{11}a_{22}-a_{21}a_{12})D(\vec{e}_1,\vec{e}_2)
\]
and since $D(\vec{e}_1,\vec{e}_2)=1$ we finally have:
\[
D(\vec{v}_1,\vec{v}_2)=a_{11}a_{22}-a_{21}a_{12}
\]
In fact, we have just proved that if a determinant exists, it is unique and of the form indicated above; we should also check that the application so defined satisfies the properties of a determinant, but this is immediate.

Thus, if:

is a matrix, we then have:

Let us now give a geometric interpretation of the determinant. Given $\vec{v}_1,\vec{v}_2$ two vectors of $\mathbb{R}^2$:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/determinant_parallelelogram.jpg}
	\caption{Geometric interpretation of the determinant}
\end{figure}
The vector $\vec{w}$ is obtained by projecting $\vec{v}_1$ on $\vec{v}_2$, and we therefore have:

The area of the above parallelogram is therefore:

if:

then:

and finally:

Therefore the determinant, in absolute value, represents the area of the parallelogram defined by the vectors $\vec{v}_1,\vec{v}_2$ when these vectors are linearly independent. We can generalize this result to an $n$-dimensional space; in particular, for $n=3$, the determinant of three linearly independent vectors represents the volume of the parallelepiped they define, as we already proved during our study of the mixed product in the section of Vector Calculus.

The more general case of the expression of the determinant is a little trickier to establish. It requires that we define a particular but simple bijective application that we have already met in the section Statistics.

\textbf{Definition (\#\mydef):} Given $F_n=\left\lbrace 1,2,...,n\right\rbrace,n\in \mathbb{N}^{*}$, we name "\NewTerm{permutation}\index{permutation}" of $F_n$ any bijective application of $F_n$ into $F_n$:

Given $\mathcal{S}_n$ the set of possible permutations (bijective applications) of $\left\lbrace 1,2,...,n\right\rbrace$. $\mathcal{S}_n$ obviously contains... (see our study of Combinatorics in the section of Probabilities) $n!$ elements. An element $\sigma$ of $\mathcal{S}_n$ is defined by the successive data of:
\[
\sigma(1),\sigma(2),...,\sigma(n)
\]
Given an ordered (ascending) sequence of elements $\left\lbrace 1,2,...,n\right\rbrace,n \in \mathbb{N}^{*}$, we name "inversion" any pair of elements whose order is reversed in the permutation (so the result is not ordered at all anymore...). We denote by $I(\sigma)$ the number of inversions.

We say that the permutation $\sigma$ is even (odd) if $I(\sigma)$ is even (odd). We name "\NewTerm{signature}\index{signature of a permutation}" of $\sigma$ the number $\varepsilon(\sigma)$ defined by $\varepsilon(\sigma)=(-1)^{I(\sigma)}$, that is to say:

We now have the necessary tools to set up the general relation of the determinant:
\textbf{Definition (\#\mydef):} Given:

we name "\NewTerm{determinant of a square matrix $A$}\index{determinant of a square matrix}" of dimension $n$, and denote by $\det (A)$, the scalar defined by (we'll see an example further below):
\[
\det(A)=\sum_{\sigma\in\mathcal{S}_n}\varepsilon(\sigma)\,a_{\sigma(1)1}\,a_{\sigma(2)2}\cdots a_{\sigma(n)n}
\]
sometimes named the "\NewTerm{Leibniz formula}\index{Leibniz formula}" or "\NewTerm{Laplace's formula}\index{Laplace's formula}".
This relation was obtained in the past by trial and error, and by induction for larger dimensions.

\pagebreak
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Examples:}\\\\
E1. Given $A=(a_{ij})_{1\leq i,j\leq 2}\in M_{22}(K)$, let us consider the $2!=2$ permutations of the second indices (of the integers $1,2$) taken as a whole:
\begin{gather*}
	12 \qquad 21
\end{gather*}
We calculate the signature of each $\sigma$. Here is the scheme of this rule (recall: we say that there is an "inversion" if, in a permutation, a greater integer precedes a smaller integer):

Therefore we have:

This corresponds well to what we saw initially. Remember also, along the way, that we will soon prove that the determinant of a square matrix must be non-zero for the matrix to be invertible (non-singular)!\\

E2. Given $A=(a_{ij})_{1\leq i,j\leq 3} \in M_{33}(K)$, let us consider the $3!=6$ permutations of the second indices (integers $1,2,3$) taken as a whole:
\begin{align*}
123 \quad 132 \quad 213 \quad 231 \quad 312 \quad 321
\end{align*}
We calculate the signatures of the $\sigma$. Here is a scheme of this rule (recall: we say that there is an "inversion" if, in a permutation, a greater integer precedes a smaller integer):

\end{tcolorbox}

\pagebreak
Therefore we have
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
	
\end{tcolorbox}
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
Some people learn by heart a method named the "\NewTerm{rule of Sarrus}\index{rule of Sarrus}" to calculate determinants of order three, such as the previous one, given by:
\begin{center}
\begin{tikzpicture}
    \matrix [%
      matrix of math nodes,
      column sep=1em,
      row sep=1em
    ] (sarrus) {%
      a_{11} & a_{12} & a_{13} & a_{11} & a_{12} \\
      a_{21} & a_{22} & a_{23} & a_{21} & a_{22} \\
      a_{31} & a_{32} & a_{33} & a_{31} & a_{32} \\
    }; 

    \path ($(sarrus-1-3.north east)+(0.5em,0)$) edge[dotted] ($(sarrus-3-3.south east)+(0.5em,0)$)
          (sarrus-1-1)                          edge         (sarrus-2-2)
          (sarrus-2-2)                          edge         (sarrus-3-3)
          (sarrus-1-2)                          edge         (sarrus-2-3)
          (sarrus-2-3)                          edge         (sarrus-3-4)
          (sarrus-1-3)                          edge         (sarrus-2-4)
          (sarrus-2-4)                          edge         (sarrus-3-5)
          (sarrus-3-1)                          edge[dashed] (sarrus-2-2)
          (sarrus-2-2)                          edge[dashed] (sarrus-1-3)
          (sarrus-3-2)                          edge[dashed] (sarrus-2-3)
          (sarrus-2-3)                          edge[dashed] (sarrus-1-4)
          (sarrus-3-3)                          edge[dashed] (sarrus-2-4)
          (sarrus-2-4)                          edge[dashed] (sarrus-1-5);

    \foreach \c in {1,2,3} {\node[anchor=south] at (sarrus-1-\c.north) {$+$};};
    \foreach \c in {1,2,3} {\node[anchor=north] at (sarrus-3-\c.south) {$-$};};
  \end{tikzpicture}
\end{center}
We prefer in this book the general formulation of the determinant because it is applicable to all orders.
\end{tcolorbox}
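As a quick numeric illustration (the matrix is our own choice), the rule of Sarrus and the general formula of course agree:
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
\[
\det\begin{pmatrix}1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 10\end{pmatrix}
=1\cdot 5\cdot 10+2\cdot 6\cdot 7+3\cdot 4\cdot 8-3\cdot 5\cdot 7-1\cdot 6\cdot 8-2\cdot 4\cdot 10
\]
\[
=50+84+96-105-48-80=-3
\]
\end{tcolorbox}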
Let us now see some properties and corollaries of this formulation of the determinant:
\begin{enumerate}
    \item[P1.] Given a square matrix $M_n$ of order $n$, we do not change the value of its determinant:
    \begin{enumerate}
        \item by performing an elementary operation (adding to a column a multiple of another) on the columns of $M_n$;

        \item by performing an elementary operation (adding to a row a multiple of another) on the rows of $M_n$.
    \end{enumerate}
    \begin{dem}
    If $M_n=(a_{ij})_{i,j=1,...,n}$, then $M_n$ is composed of $n$ column vectors:

    Performing such an elementary operation on the columns of $M_n$ is equivalent to adding $\lambda v_i,i \in \{1,...,n\}$ to one of the columns $v_j$ of $M_n$. Given $M_n^{'}$ the matrix obtained by adding $\lambda v_i$ to the $j$-th column of $M_n$, we get:

    By multilinearity (finally the proof is not really difficult):

    and as the determinant is alternated:

    Concerning the elementary operations on the rows, we just need to consider the transpose (it is so simple it could make one cry, but we had to think of this trick).
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P2.] Given $M_n(K)$ a square matrix of order $n$ and $\lambda \in K$:
    \[
    \det(\lambda M_n)=\lambda^n\det(M_n)
    \]
    \begin{dem}
    As before, it is enough to notice that if $v_1,...,v_n$ are the column vectors forming the matrix $M_n$, then $\lambda v_1,...,\lambda v_n$ are those that constitute $\lambda M_n$ and:

    The application being $n$-linear, we arrive at the equality:

    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P3.] Given a square matrix $M_n$ of order $n$. We change the sign of the determinant of $M_n$ if:
    \begin{itemize}
        \item we permute two of its columns

        \item we permute two of its rows
    \end{itemize}
    \begin{dem}
    $M_n$ is constituted of $n$ vectors $v_1,..,v_n$. The determinant of $M_n$ is equal to the determinant of these $n$ vectors. Permuting two columns of $M_n$ is the same as permuting the two corresponding vectors. Let us suppose that the permuted vectors are the $i$-th and the $j$-th; the determinant being an alternated application, we have:

    Concerning the rows, we just have to consider the transpose of $M_n$ to arrive at the same result!
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P4.] Given $A,B\in M_n(\mathbb{C})$, then:
    \[
    \det(AB)=\det(A)\det(B)
    \]
    As far as we know, the proof can be done in at least two ways; the first is rather indigestible and abstract... so we will leave it to the mathematicians (...), even if it has the advantage of being general; the second, easier one, is to check this assertion on various square matrices.
    \begin{dem}

    and:

    The calculations therefore produce identical results. We can check likewise for square matrices of higher dimensions.
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P5.] A square matrix $A\in M_n(\mathbb{C})$ is invertible (non-singular) if and only if $\det(A)\neq 0$ (this is the most important property of all).
    \begin{dem}
    If $A$ is invertible (non-singular), we have:
    \[
    \det(A)\det(A^{-1})=\det(AA^{-1})=\det(\mathds{1})=1
    \]
    so that necessarily $\det(A)\neq 0$.
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}
    As we already said, this is the most important property of matrices in theoretical physics, because if $A$ is the matrix of a linear system, the calculation of the determinant indicates whether it has a unique solution or not.
Otherwise, as we have already mentioned and studied, the system has either no solution or an infinity of solutions!

    We must consider an important special case! Given the following system:

    where $A\in M_n(K)$ is given and $B\in M_n(K)$ is to be determined. It is obvious that, whether $A$ is invertible (non-singular) or not, the trivial solution of $A\cdot B=0$ is $B=0$. However, let us imagine a case of theoretical physics where we have $A\cdot B=0$ but for which we know that $A\neq 0$ and for which we impose $B\neq 0$. In this case, we must eliminate the trivial solution $B=0$. Furthermore, calculating the inverse of the matrix $A$ (if it exists) will bring us to nothing concrete except $B=0$, which obviously does not satisfy us. The only solution is then to arrange the coefficients $a_{ij}$ of the matrix $A$ so that its determinant is zero and the matrix is therefore NOT invertible! The advantage? Precisely to have an infinite number of possible solutions (for $B$, then!) that satisfy $A\cdot B=0$. We will need this methodology in the section of Wave Quantum Physics, when we determine the existence of antiparticles through the linearized Dirac equation. It must therefore be remembered.

    \item[P6.] Two "\NewTerm{conjugated}\index{conjugated matrices}" matrices (be careful! this is not the "conjugate" in the complex sense) have the same determinant.
    \begin{dem}
    Using P4 and P5, for $B=P^{-1}AP$:
    \[
    \det(B)=\det(P^{-1})\det(A)\det(P)=\det(A)
    \]
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P7.] For any matrix $A\in M_n(\mathbb{C})$:
    \[
    \det\left(A^{\dagger}\right)=\overline{\det(A)}
    \]
    \begin{dem}

    But as (trivial... a simple product of all coefficients):

    As (trivially) $\varepsilon(\sigma^{-1})=\varepsilon(\sigma)$ and as for $x,y\in\mathbb{C}:\bar{x}\cdot \bar{y}=\overline{x\cdot y}$ (\SeeChapter{see section Numbers}), then we can write:

    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P8.] For any matrix $A\in M_n(\mathbb{R})$:
    \[
    \det(A^T)=\det(A)
    \]
    \begin{dem}
    Well... it is the same as the previous property but without the conjugate values... In fact, we prove in the same way the same property for $A\in M_n(\mathbb{C})$.
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}

    \item[P9.] Given a matrix $A=(a_{ij})\in M_n(\mathbb{C})$, we denote by $A_{ij}$ the matrix obtained from $A$ by removing the $i$-th row and the $j$-th column (a very important notation to remember for what will follow!!!). $A_{ij}$ therefore belongs to $M_{n-1}(\mathbb{C})$. Then for any $i=1...n$:
    \[
    \det(A)=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}\det(A_{ij})
    \]
    where the term:
    \[
    (-1)^{i+j}\det(A_{ij})
    \]
    is named the "\NewTerm{cofactor}\index{cofactor}".
    \begin{dem}
    For the proof let us define the application:

    It is almost easy to see that $\varphi$ is multilinear (you just have to consider $(-1)^{i+j}a_{ij}$ as a simple constant and then extend by the definition of the determinant... too easy...).

    Let us show however that this application is alternated (in which case it is a determinant that has all the properties of a... determinant!).

    Given $a_k,a_{k+1}$ two column vectors of $A$ that follow each other.
Let us suppose that $a_k=a_{k+1}$; we have to show in this case that $\varphi(A)=0$ (which comes from the definition of an alternated application).

    We have first (it is mandated by the definition itself), if we erase none of the columns, $j$ being neither $k$ nor $k+1$:

    and we have obviously, if we remove respectively the column $k$ or the column $k+1$:

    Therefore:

    It is therefore OK. The application $\varphi$ is alternated and multilinear; it is indeed a determinant.

    We have just proved that $\varphi$ is a determinant, and by uniqueness we have $\varphi(A)=\det(A)$ for any $A\in M_n(\mathbb{C})$.
    \begin{flushright}
        $\square$  Q.E.D.
    \end{flushright}
    \end{dem}
    \begin{tcolorbox}[colframe=black,colback=white,sharp corners]
    \textbf{{\Large \ding{45}}Example:}\\\\
    Let us see an example of this method by calculating the determinant of:

    Let us develop along the second row ($i=2$). We get:
    \end{tcolorbox}

    \pagebreak
    \begin{tcolorbox}[colframe=black,colback=white,sharp corners]

    Let us develop along the first column for verification (we never know...):

    The calculation determined above is therefore "exponential": if for example we must calculate the determinant of a square matrix of order (dimension) $n=10$, then the determinant will be developed into a sum of $10$ terms, each of which contains the determinant of a matrix of dimension $n=9$, which is a cofactor of the starting matrix. If we develop any one of these determinants, we get a sum of $9$ determinants, each of which contains the determinant of a matrix of dimension $n=8$. At this level, there are therefore $90$ determinants of matrices of dimension $8$ to calculate. The process continues until only determinants of order $2$ remain. And we can guess that the number of determinants of order $2$ is significant!
    \end{tcolorbox}
\end{enumerate}

\textbf{Definition (\#\mydef):} Given $m,n$ two positive integers and $A$ an $m\times n$ matrix with coefficients in $\mathbb{C}$. For any $k\leq \min(m,n)$, a "\NewTerm{minor of order $k$}\index{minor}" is a determinant of the type:

with $1\leq i_1< \ldots <i_k\leq m$ and $1\leq j_1 <\ldots <j_k\leq n$.

In the particular case of a matrix of order $n>1$ the definition is simpler: the minor $M_{ij}$ of the element $a_{ij}$ is the determinant of the matrix of order $n-1$ that we get by removing the row $i$ and the column $j$. Therefore, to calculate the minor of an element, we remove the row and the column to which the element belongs, and we calculate the determinant of the remaining square matrix.

\pagebreak
\paragraph{Derivative of a Determinant}\mbox{}\\\\
Let us now see a result that will be quite useful to us in the section General Relativity:

Given a square $n\times n$ matrix of functions $g_{ij}:\mathbb{R}\mapsto \mathbb{R}$ that can be differentiated at least once. Let us put $g:=\det(G)$ with $G=(g_{ij})$. We want to calculate $\mathrm{d}_t g$. Given $g_i$ the $i$-th column vector of the matrix $G$, let us use the formula:

Knowing that the derivative of $g_{\sigma(1),1}\cdot \ldots \cdot g_{\sigma(n),n}$ is (the derivative of a product of $n$ factors):

Therefore we have:

If we look closely at the first sum above, we notice that:

where $g_1^{'}$ is the derivative of the vector $g_1$. The same holds for the following sums.
Therefore:

Let us develop again. Let us consider the term $\det(g_1^{'},g_2,\ldots,g_n)$ above. If we develop it relatively to the first column, we get:

Also, by developing the $j$-th term of the above sum relatively to the $j$-th column, we get:

If we put:

we get:

which is written in tensor notation (\SeeChapter{see section Tensor Calculus}):

We also have:

where $b_{ji}$ is the coefficient at the $j$-th row, $i$-th column of the matrix $G^{-1}$. If we denote by $g^{ij}$ the coefficient $i,j$ of the matrix $(G^{-1})^T$, then:

The expression of the derivative is then finally:

which is written in tensor notation:

This result, finally quite simple, will be helpful to us in the section of Tensor Calculus to build the tools necessary for the study of General Relativity, in the context of the determination of the Einstein field equations. It is therefore appropriate to remember it!

\paragraph{Determinant, Cofactor and Matrix Inverse}\mbox{}\\\\
Let us finish our study of determinants with the "icing on the cake" by giving a very important relation, used in many fields of engineering, physics and mathematics, that connects the coefficients of the inverse of a matrix with minors of order $n$ (we will use this relation further below).

Given $A\in M_n(\mathbb{C})$ an invertible (non-singular) matrix. Let us write $A=(a_{ij})$ and $A^{-1}=(b_{ij})$. Then:
\[
b_{ij}=\frac{(-1)^{i+j}\det(A_{ji})}{\det(A)}
\]
\begin{dem}
Let us denote by $a_k$ the $k$-th column vector of the matrix $A$. Knowing that $A\cdot A^{-1}=\mathds{1}$ (under the known assumptions), we have (trivially):

Let us now calculate $\det(a_1,\ldots,a_{k-1},e_j,a_{k+1},\ldots,a_n)$. First, by developing relatively to the $k$-th column, we find (as only one of the coefficients of $e_j$ is non-null, and that unique non-null one is equal to unity):

Furthermore (properties of the determinant):

Therefore:

That is to say:

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
For a simple, detailed, and important practical application in industry (because otherwise in this entire book we will rarely invert small matrices), the reader can refer to the section of Theoretical Computing, in the part concerning multiple linear regression.

Let us also indicate the following important properties, where $A$ and $B$ are square invertible matrices of $M_{n}(\mathbb{C})$ and $\lambda\in\mathbb{C}$ (the first should be obvious, the second has already been presented earlier but unproven, and the third one is important for the proof of the variance inflation factor that we will prove in the section of Theoretical Computing):

Let us prove the last property using the property of associativity:
\begin{dem}
\[
(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=A\mathds{1}A^{-1}=I_n
\]
which proves that $B^{-1}A^{-1}$ is indeed the inverse of $AB$, where $I_n$ (also denoted $\mathds{1}$ for the record) is the identity matrix (diagonal and square) of dimension $n$.
\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
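Before moving on, here is a quick $2\times 2$ sanity check of the cofactor relation for the inverse (the matrix is our own choice):
\begin{tcolorbox}[colframe=black,colback=white,sharp corners]
\textbf{{\Large \ding{45}}Example:}\\\\
For $A=\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$ we have $\det(A)=1$, and the relation gives $b_{11}=a_{22}=1$, $b_{12}=-a_{12}=-1$, $b_{21}=-a_{21}=-1$, $b_{22}=a_{11}=2$, i.e.:
\[
A^{-1}=\begin{pmatrix}1 & -1\\ -1 & 2\end{pmatrix}
\]
and one checks immediately that $AA^{-1}=\mathds{1}$.
\end{tcolorbox}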
\pagebreak
\subsection{Change of Basis (frames)}
A basis for a vector space of dimension $n$ is a sequence of $n$ vectors $(\vec{e}_1,...,\vec{e}_n)$ with the property that every vector in the space can be expressed uniquely as a linear combination of the basis vectors (\SeeChapter{see section Vector Calculus}). The matrix representations of operators are also determined by the chosen basis! Since it is often desirable to work with more than one basis for a vector space, it is of fundamental importance in linear algebra to be able to easily transform coordinate-wise representations of vectors and operators taken with respect to one basis into their equivalent representations with respect to another basis. Such a transformation is named a "\NewTerm{change of basis}\index{change of basis}".

Let us now suppose that we move from a frame $\mathcal{E}=(\vec{e}_1,\vec{e}_2,...,\vec{e}_n)$ of a space $V^n$ to another frame $\mathcal{F}=(\vec{f}_1,\vec{f}_2,...,\vec{f}_n)$ of this same space, sharing the same origin $O$. Thus in two dimensions:
\begin{figure}[H]
	\centering
	\includegraphics[scale=0.75]{img/algebra/basis_change.jpg}
	\caption{A vector can be represented in two different bases (purple and red arrows) (source: Wikipedia)}
\end{figure}
Let us decompose the $\vec{f}_i$ in the basis $\mathcal{E}$:

\textbf{Definition (\#\mydef):} We name "\NewTerm{transition matrix}\index{transition matrix}" the matrix (linear application) that allows us to pass from $\mathcal{E}$ to $\mathcal{F}$, given by:

\begin{theorem}
Now let us consider the vector given by:

We intend to prove that the components $y_1,y_2,...,y_n$ of $\vec{v}$ in the basis $\mathcal{F}$ are given by:

Thus explicitly:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The matrix $P$ is invertible (non-singular) because its columns are linearly independent (they are the vectors $\vec{f}_i$ decomposed in the basis $\mathcal{E}$, and the $\vec{f}_i$ are linearly independent since they form a basis!).
\end{tcolorbox}
\end{theorem}
\begin{dem}
To simplify, let us take the case $n=2$ (the proof generalizes quite easily) with $\mathcal{E}=(\vec{e}_1,\vec{e}_2)$ and $\mathcal{F}=(\vec{f}_1,\vec{f}_2)$.
Then we have:

We therefore have $\vec{v}=x^i\vec{e}_i$ and we seek to express $\vec{v}$ in the basis $\mathcal{F}$ as $\vec{v}=y^i\vec{f}_i$. We will search for the linear application that links these two relations, such that:

Thus, written in an explicit way:

Therefore:

That is to say:

So $P$ (if it exists) is indeed the matrix that can express the components of a vector in one basis in terms of those in another basis, such that we can write in vector notation:

\begin{flushright}
	$\square$  Q.E.D.
\end{flushright}
\end{dem}
\begin{theorem}
Let us now consider a linear application $g:V^n\mapsto V^n$. Let $A$ be its matrix in the basis $\mathcal{E}$ and $B$ its matrix in the basis $\mathcal{F}$ (of the same dimension). Then we have:

which is equivalent to:

or even:

If there exists such a matrix $P$ satisfying these relations, we say that $A$ and $B$ are "\NewTerm{similar matrices}\index{similar matrices}".
\end{theorem}
\begin{dem}
Let us take up again the fact that we proved it was possible to build a transition matrix $P$ from the fact that:

and let us put:

We then have a function that brings us to write:

On the other hand, we have (as we proved earlier):

Therefore:

hence:

and, as we saw in our study of the determinant, the determinants of $A$ and $B$ are equal and therefore invariant.
We will come back to a similar formulation in our study of the spectral theorem below.\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tAt the vocabulary level, when we are in the presence of such a matrix relation, we say that the matrix $A$ is \"\NewTerm{conjugated}\" to the matrix $B$.\n\t\n\t\pagebreak\n\t\subsection{Eigenvalues and Eigenvectors}\n\t\textbf{Definition (\#\mydef):} An \"\NewTerm{eigenvalue}\index{eigenvalue}\" is by definition (we will find this definition again in the introduction to quantum algebra in the section of Wave Quantum Physics) a value $\lambda$ belonging to a field $K$ such that, given a square matrix $A\in M_{mm}(K)$, we have:\n\t\n\tand conversely a vector $\vec{X}\in M_{m1}(K)$ is an \"\NewTerm{eigenvector}\index{eigenvector}\" if and only if:\n\t\n\tThe major advantage of these concepts is the possibility of studying a linear application, or any other object linked to a matrix representation, in a simpler representation through a change of basis in which the restriction of $A$ is a single homothetic transformation (typically for solving simple systems of differential equations).\n\t\n\tThus, the set of all eigenvalues of a matrix $A\in M_{mm}(K)$ is named the \"\NewTerm{spectrum of $A$}\index{spectrum of a matrix}\", and each eigenvalue satisfies the homogeneous system:\n\t\n\tor (which is the same!):\n\t\n\twhere $I_n$ (also denoted $\mathds{1}$ for recall) is the identity matrix (diagonal and therefore also square) of dimension $n$. We know (as proved above) that this system has non-trivial solutions $\vec{X} \neq \vec{0}$ if and only if (we'll see many examples in the various sections related to physics in this book):\n\t\n\tthat is to say, if and only if the matrix $A-\lambda I_n$ is not invertible (singular).\n\t\n\tThe determinant $\det(A-\lambda I_n)$ is a polynomial in $\lambda$ of degree $n$ and can have at most $n$ roots/eigenvalues, as we proved in our study of polynomials (\SeeChapter{see section Calculus}); it is named the \"\NewTerm{characteristic polynomial}\index{characteristic polynomial}\" of $A$, and the equation $\det(A-\lambda I_n)=0$ is named the \"\NewTerm{characteristic equation}\index{characteristic equation}\" of $A$ or \"\NewTerm{eigenvalue equation}\index{eigenvalue equations}\".\n\t\n\tAs a small parenthesis, it is nice to notice that the trace of the matrix $\text{tr}(A)$ and the determinant $\det (A)$ always appear in the development of $\det(A-\lambda I_n)$. Let us see two examples of this:\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Examples:}\\\\\n\tE1. Let us begin with the case $n=2$:\n\t\n\tTherefore, for a square matrix of dimension $2$, the eigenvalues are (simple resolution of a polynomial of the 2nd degree):\n\t\n\tE2. For a matrix of dimension $n=3$, we have:\n\t\n\tand here... the final solutions (roots) are much less easy to obtain in the general case...\n\t\end{tcolorbox}\n\tAlong the way, let us notice (we will generalize this result during our study of the spectral theorem) that multiplying the homogeneous system:\n\t\n\tby $-1$ on both sides of the equality doesn't change anything in the problem, so we get:\n\t\n\tSo we can see that this multiplication doesn't change the final result!\n\t\n\tThus, by a term-by-term correspondence, we obtain the following very important result in Statistics (and also in Numerical Methods!)
that we will prove later in a more general way with the spectral theorem:\n\t\n\tIf we look at $(\lambda I_n-A)$ as a linear application $f$, since it is the non-trivial solutions that interest us, we can say that the eigenvalues are the elements $\lambda$ such that:\n\t\n\tand that the kernel constitutes the eigenspace of $A$ for the eigenvalue $\lambda$, whose non-zero elements are the eigenvectors!\n\t\n\tThis corresponds to the study of the principal axes, along which the application behaves like an expansion (homothetic application) multiplying the vectors by the same constant. This homothetic ratio is then the \"eigenvalue\"; the vectors to which it applies, the \"eigenvectors\", are assembled in an \"eigenspace\".\n\t\n\tAnother way of looking at it:\n\t\begin{itemize}\n\t\t\item A vector is said to be an \"eigenvector\" of a linear application if it is non-zero and if the application only changes its size (norm) without changing its direction.\n\n\t\t\item An \"eigenvalue\", associated to an \"eigenvector\", is the size modification factor (homothetic ratio), i.e. the number by which we must multiply the vector to get its image. This factor can be negative (reversed direction of the vector) or zero (vector transformed into a vector of zero length).\n\t\t\n\t\tWe can therefore say that the eigenvalue $\lambda$ \"scales\" the application $A$ for the eigenvector $\vec{X}$.\n\n\t\t\item An \"eigenspace\" associated to an \"eigenvalue\" is the set of eigenvectors that share this same eigenvalue, together with the zero vector. They all undergo multiplication by the same factor.\n\t\end{itemize}\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn mechanics, we study the eigenfrequencies and eigenmodes of oscillating systems (\SeeChapter{see section Wave Mechanics}). In Functional Analysis, an eigenfunction is an eigenvector for a linear operator, that is to say a linear application acting on a space of functions (\SeeChapter{see section Functional Analysis}). In geometry and optics, we speak of eigendirections to take into account the curvature of surfaces (\SeeChapter{see section Non-Euclidean Geometry}). In graph theory, an eigenvalue of a graph is simply an eigenvalue of the adjacency matrix of the graph (\SeeChapter{see section Graph Theory}).\n\t\end{tcolorbox}So, as $\det(A-\lambda I_n)$ is a polynomial in $\lambda$, the $\lambda_i$ are also the roots of the characteristic polynomial:\n\t\n\tTherefore:\n\t\n\tThis is a relation sometimes used in some statistical models (for example MANOVA!).
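To make all of this concrete, here is a minimal numerical sketch in Python/NumPy (the matrix is arbitrary and purely illustrative) checking that $A\vec{X}=\lambda\vec{X}$ holds for each eigenpair and that, as announced, the trace and determinant of $A$ reappear as the sum and product of the eigenvalues:

\begin{verbatim}
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, V = np.linalg.eig(A)   # roots of det(A - lambda*I) = 0, and eigenvectors
for k in range(2):
    # A X = lambda X for each eigenvalue/eigenvector pair
    print(np.allclose(A @ V[:, k], lam[k] * V[:, k]))   # True, True

print(np.isclose(lam.sum(),  np.trace(A)))        # True: tr(A) = sum of eigenvalues
print(np.isclose(lam.prod(), np.linalg.det(A)))   # True: det(A) = product of eigenvalues
\end{verbatim}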
\n\t\n\tBefore closing this short introduction to eigenvalues and eigenvectors (we will discuss them further below), let us indicate that an eigenvector must satisfy the homogeneous system:\n\t\n\t\n\tThus, in practice, if an eigenvector is given for example by:\n\t\n\tit is customary to normalize it to unit length by writing:\n\t\n\t\n\tFor the section of Wave Quantum Physics, and more especially for the study of angular momentum and spin, we need to prove that, given an operator acting on an eigenvector, squaring the operator results in squaring the eigenvalue.\n\t\begin{dem}\n\tGiven:\n\t\n\tThen:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\t\n\t\subsubsection{Rotation Matrices and Eigenvalues}\n\tNow that we have seen what eigenvalues and eigenvectors are, let us come back to a particular type of orthogonal matrices that will be particularly useful to us in our study of quaternions (\SeeChapter{see section Numbers}), of groups and symmetries (\SeeChapter{see section Set Algebra}) and of particle physics (\SeeChapter{Elementary Particle Physics}).\n\t\n\tWe denote, as seen in the section of Set Algebra, by $\text{O}(n)$ the set of $n\times n$ (square) orthogonal matrices with coefficients in $\mathbb{R}$, that is to say, satisfying:\n\t\n\twhich we will also sometimes denote, for recall, as:\n\t\n\tThe columns and rows of an orthogonal matrix form an orthonormal basis of the usual space $\mathbb{R}^n$ for the usual dot product.\n\t\n\tThe determinant of an orthogonal matrix is equal to $\pm 1$ (a rotation conserves angles and volumes); indeed, $A^T A=I$ leads to:\n\t\n\tA rotation matrix with determinant $+1$ is a \"\NewTerm{proper rotation}\index{proper rotation}\", and one with a negative determinant $-1$ is an \"\NewTerm{improper rotation}\index{improper rotation}\", that is, a reflection combined with a proper rotation.\n\t\n\t\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tLet us now calculate explicitly the determinant of a $2\times 2$ rotation matrix (\SeeChapter{see section Numbers}) and of a $3\times 3$ rotation matrix (\SeeChapter{see section Euclidean Geometry}), as asked by many students on various Internet forums.\\\n\n\tSo, first, we consider the $2\times 2$ rotation matrix, and using the relation of the determinant proved earlier, we get:\n\t\n\tAnd for one of the three $3\times 3$ rotation matrices, randomly chosen (\SeeChapter{see section Euclidean Geometry}), we get:\n\t\t\n\t\end{tcolorbox}\n\t\n\tWe denote by $\text{SO}(n)$ the set of orthogonal matrices of determinant $1$ (for more details see the section of Set Algebra). Let us show in three points that if $A\in \text{SO}(3,\mathbb{R})$ then $A$ is a rotation matrix about an axis passing through the origin.\n\t\n\t\begin{enumerate}\n\t\t\item Any eigenvalue of a rotation matrix $A$ (real or complex) is of modulus $1$. In other words, it conserves the norm:\n\n\t\tIndeed, if $\lambda$ is an eigenvalue with eigenvector $\vec{X}$, we have:\n\t\t\n\t\tor, noting the dot product with the book's usual notation:\n\t\t\n\t\t\n\t\t\item There exists a straight line in space that serves as a rotation axis, and any vector on this line is not modified by the rotation.\n\n\t\tLet us denote by $\vec{X}$ an eigenvector of eigenvalue $1$ (that is to say, such that $A\vec{X}=\vec{X}$).
As the reader may perhaps have already understood (read until the end please!), the straight line generated by $\vec{X}$, which we will denote by $\langle \vec{X} \rangle$, constitutes our rotation axis.\n\t\n\t\tIndeed, any vector of $\langle \vec{X} \rangle$ is sent onto itself by the application $A$. In this case, the orthogonal space, denoted by $\langle \vec{X} \rangle^\perp$, which is of dimension $2$, is the plane perpendicular to the rotation axis.\n\t\n\t\t\item Any vector perpendicular to the rotation axis remains, after rotation, perpendicular to this axis. In other words, $\langle \vec{X} \rangle^\perp$ is invariant under the application of $A$.\n\t\t\n\t\tIndeed, if $\vec{w}\in \langle \vec{X} \rangle$ then $\vec{w}=A^TA\vec{w}=A^T\vec{w}$, and for all $\vec{y}\in \langle \vec{X} \rangle^\perp$:\n\t\t\n\t\tthat is to say, $A \vec{y} \in \langle \vec{X} \rangle^\perp$. Therefore $\langle \vec{X} \rangle^\perp$ is invariant by $A$.\n\t\t\n\t\tFinally, the restriction of $A$ to the space $\langle \vec{X} \rangle^\perp$ is a rotation!\n\t\end{enumerate}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tGiven $e^{\mathrm{i}\alpha}$ (see the section Numbers, where rotation by a complex number is proven), an eigenvalue (whose modulus is $1$, as we proved during our study of complex numbers) of $A$ restricted to $\langle \vec{X} \rangle^\perp$.\\\n\t\n\tLet us write $\vec{w}=\vec{u}+\mathrm{i}\vec{v}$ an eigenvector with $\vec{u},\vec{v}\in \mathbb{R}^2$ such that:\n\t\n\twith (as we already proved in our study of complex numbers):\n\t\n\twhere we know, by our study of complex numbers, that the vectors $\vec{u},\vec{v}$ generate an orthogonal basis (not necessarily normalized to the unit!) of $\langle \vec{X} \rangle^\perp$.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe think that it should be easy, at this level, for the reader to check that this matrix is orthogonal (if this is not the case, contact us and this will be detailed!).\n\t\end{tcolorbox}\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\subsection{Spectral Theorem}\n\tLet us now see a very important theorem relative to eigenvalues and eigenvectors, named the \"\NewTerm{spectral theorem}\index{spectral theorem}\", which will be very useful to us in the various sections of physics of this book and also in the section of Statistics, as well as in the sections of Theoretical Computing and Industrial Engineering.\n\t\n\tTo summarize, mathematicians say in their language that the spectral theorem gives the possibility to affirm the diagonalizability of endomorphisms (of matrices) and also justifies the decomposition into eigenvalues (related to the \"\NewTerm{singular value decomposition S.V.D.}\index{singular value decomposition }\").\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe singular value decomposition theorem (S.V.D.) is however very general, in the sense that it applies to any rectangular matrix.
The eigenvalue decomposition, however, only works for certain square matrices.\n\t\end{tcolorbox}\n\tTo simplify the proof, we will deal here only with real matrices (components in $\mathbb{R}$), also avoiding the language of mathematicians as much as possible.\n\t\n\tWe will first denote by $M_n(\mathbb{R})$ the set of all $n\times n$ matrices with real coefficients.\n\t\n\tWe will identify the matrix $M\in M_n (\mathbb{R})$ with the linear application on the vector space $\mathbb{R}^n$ given by:\n\t\n\twith $\vec{v}\in \mathbb{R}^n$.\n\t\n\tReminder: We have seen above, during our study of basis changes, that if $(\vec{c}_1,\ldots,\vec{c}_n)$ is a basis of $\mathbb{R}^n$ and $M\in M_n(\mathbb{R})$, then the matrix of the linear map $M$ in the basis $(\vec{c}_1,\ldots,\vec{c}_n)$ is:\n\t\n\twhere $S$ is the matrix formed by the column vectors $\vec{c}_1,...,\vec{c}_n$.\n\t\n\tFirst, we simply check that if $A$ is a symmetric matrix then (this should be trivial, but it can be verified very quickly with an example of dimension $2$):\n\t\n\t\begin{enumerate}\n\t\t\item[P1.] All eigenvalues of $M$ are real.\n\t\t\begin{dem}\n\t\tGiven:\n\t\t\n\t\tan a priori complex eigenvector for the eigenvalue $\lambda \in \mathbb{C}$. Let us denote by:\n\t\t\t\t\n\t\tthe conjugate vector of $\vec{z}$. Then we have:\n\t\t\n\t\tOn the other hand, since $M=\overline{M}$, we have:\n\t\t\n\t\tAs $\vec{z} \neq \vec{0}$, we have $\lambda=\overline{\lambda}$ and therefore $\lambda\in \mathbb{R}$.\n\t\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\t\end{flushright}\n\t\t\end{dem}\n\t\tBefore going further, we also have to prove that if $M\in M_n(\mathbb{R})$ is a symmetric matrix and $V$ a vector subspace of $\mathbb{R}^n$ invariant relative to $M$ (that is to say, satisfying for any $\vec{v}\in V: \; M\vec{v}\in V$), then we have the following properties:\n\t\t\n\t\t\item[P3.] The orthogonal complement of $V$, obviously denoted by $V^\perp$ (obtained by applying the Gram-Schmidt method seen in the section of Vector Calculus), is also invariant under $M$.\n\t\t\n\t\t\begin{dem}\n\t\tGiven $\vec{v}\in V$ and $\vec{w}\in V^\perp$, then:\n\t\t\n\t\tthis shows well that $M\vec{w}\in V^\perp$.\n\t\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\t\end{flushright}\n\t\t\end{dem}\n\n\t\t\item[P4.] If $(\vec{w}_1,\ldots,\vec{w}_k)$ is an orthonormal basis of $V^\perp$, then the restriction matrix of $M$ to $V^\perp$ in the basis $(\vec{w}_1,\ldots,\vec{w}_k)$ is also symmetric.\n\t\t\begin{dem}\n\t\t\n\t\tLet us denote by $A=(a_{ij})_{1\leq i,j\leq k}$ the matrix of the restriction of $M$ to $V^\perp$ in the basis $(\vec{w}_1,\ldots,\vec{w}_k)$. We have by definition, for any $j=1...k$ (as the vector resulting from a linear application such as $M$ can be expressed in this basis):\n\t\t\n\t\tOr:\n\t\t\n\t\tas:\n\t\t\n\t\tif $i\neq m$ in the orthonormal basis.\n\t\t\n\t\tOn the other side:\n\t\t\n\t\tTherefore:\n\t\t\n\t\tThis shows that:\n\t\t\t\t\n\t\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\t\end{flushright}\n\t\t\end{dem}\n\t\end{enumerate}\n\t\begin{theorem}\n\t\tWe will now be able to show that any symmetric matrix $M \in M_n(\mathbb{R})$ is diagonalizable. That is to say, there is an invertible matrix $S$ such that the result of the calculation:\n\t\n\tgives a diagonal matrix!
This result is, in this text, a particular form of the more general case (thus also applicable to rectangular matrices) named the \"\NewTerm{Eckart-Young theorem}\index{Eckart-Young theorem}\".\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn fact we will see, to be more precise, that there exists an orthogonal matrix $S$ such that $S^{-1}MS$ is diagonal.\n\t\end{tcolorbox}\n\tReminder: Saying that $S$ is orthogonal means that $SS^T=I$ (where $I$ is the identity matrix), which is equivalent to saying that the columns of $S$ form an orthonormal basis of $\mathbb{R}^n$.\n\t\end{theorem}\n\t\begin{dem}\n\tWe prove the assertion by induction on $n$. If $n=1$ there is nothing to prove. Let us suppose that the assertion is satisfied for $k\leq n$ and let us prove it for $k=n+1$. So, given $M\in M_{n+1}(\mathbb{R})$ a symmetric matrix and $\lambda$ an eigenvalue of $M$.\n\t\n\tWe easily verify that the eigenspace:\n\t\n\tis invariant by $M$ (just try it on any numerical example) and, by the proof seen earlier, that $W^\perp$ is also invariant by $M$. Moreover, we know (\SeeChapter{see section Vector Calculus}) that $\mathbb{R}^{n+1}$ can be decomposed into a direct sum:\n\t\n\tIf:\n\t\n\tthen:\n\t\n\tand it is sufficient to take an orthonormal basis of $W$ to diagonalize $M$. Indeed, if $(\vec{w}_1,\ldots,\vec{w}_{n+1})$ is such a basis, the matrix $S$ formed by the column vectors $\vec{w}_j$ ($j=1\ldots n+1$) is orthogonal and satisfies:\n\t\n\tand $S^{-1}MS$ is indeed diagonal.\n\t\n\tLet us now suppose that $\dim(W^\perp)>0$, and given $(\vec{u}_1,\ldots,\vec{u}_m)$ with $m\leq n$ an orthonormal basis of $W^\perp$, let us denote by $A$ the restriction matrix of $M$ to $W^\perp$ in the basis $(\vec{u}_1,\ldots,\vec{u}_m)$. $A$ is also symmetric (as proved in one of the preceding properties).\n\t\n\tBy the induction hypothesis, there exists an orthogonal matrix $H\in M_m(\mathbb{R})$ such that $H^{-1}AH$ is diagonal.\n\t\n\tLet us denote by $(\vec{w}_1,\ldots,\vec{w}_{n+1-m})$ an orthonormal basis of $W$ and by $G$ the matrix formed by the column vectors $\vec{w}_1,\ldots,\vec{w}_{n+1-m},\vec{u}_1,\ldots,\vec{u}_m$. So we can write that:\n\t\n\tand $G$ is also orthogonal by construction.\n\t\n\tLet us consider the following block matrix (matrix of matrices):\n\t\n\tand let us put:\n\t\n\tIt is almost obvious that $S$ is orthogonal, as $G$ and $L$ are also orthogonal. Indeed, if:\n\t\n\tthen (remember that matrix multiplication is associative!):\n\t\n\tAlso, $S$ satisfies:\n\t\n\tand then:\n\t\n\tis indeed diagonal.\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tHere is finally the famous \"\NewTerm{spectral theorem}\index{spectral theorem}\" (real case):\n\t\begin{theorem}\n\tGiven $M\in M_n(\mathbb{R})$ a symmetric matrix, there exists an orthonormal basis made of eigenvectors of $M$.\n\t\end{theorem}\n\t\begin{dem}\n\tWe have seen in the preceding paragraphs that there exists an orthogonal matrix $S$ such that $S^{-1}MS$ is diagonal if $M$ is symmetric! Let us denote by $\vec{c}_1,\ldots,\vec{c}_n$ the columns of $S$. The basis $(\vec{c}_1,\ldots,\vec{c}_n)$ is an orthonormal basis of $\mathbb{R}^n$ as $S$ is orthogonal.
Denoting the $\\vec{e}_i$ the $i$-th vector of the canonical basis of $\\mathbb{R}^n$ and $\\lambda_i$ and the $i$-th diagonal coefficient of $S^1{M}S$ we have without directly supposing that $\\lambda_i$ is an eigenvalue for now:\n\t\n\tby multiplying by $S$ on both sides of the equality we have:\n\t\n\tand therefore:\n\t\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tTo finish about the spectral theorem in this book, let us so reprove a result we get earlier but that was presented in a quite ugly way and poorly  rigorous (the sum of the eigenvalues equals the trace of a matrix):\n\t\n\tRemember that spectral theorem therefore tells us that for any symmetric matrix $M$, there exists an orthogonal matrix $S$ such that:\n\t\n\tis diagonal. Nothing prevents us to choose the resulting diagonal matrix as a matrix of eigenvalues in the diagonal. What we denote usually:\n\t\n\tand as $S$ is a real orthogonal matrix and that by definition we have that a matrix is orthogonal if and only if $A^{-1}=A^T$, then we find the following relation as frequently as follows:\n\t\n\tSo obviously therefore have we will have to found $S$ if $M$ is known or vice versa. Anyway, let us come back on our topic and take track of this relation:\n\t\n\tThen by using the property of the trace $\\text{tr}$, of the associativity of the matrix multiplication, and the orthogonality of $S$ we have:\n\t\n\tThis reprove the results seen earlier above with a condition that was not trivial at this time: the matrix must be symmetrical (or symmetrizable)!\n\t\n\tWe also have by extension:\n\t\n\tand therefore by using the proven property relatively to the determinant (during our proofs of the main determinant properties) and  the conjugated matrices we get:\n\t\n\tand therefore if $M$ is symmetric we have the property:\n\t\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tWe want to show that:\n\t\n\twe assume that we know that the eigenvalue-eigenvector pairs are:\n\t\n\tWe therefore introduce $S$ and $\\Lambda$ as follows:\n\t\n\tWe must show that $M=S\\Lambda S^{-1}$. This is indeed the case, since:\n\t\n\tTherefore $M$ is indeed diagonalizable.\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\subsection{Singular Value Decomposition (SVD)}\n\tThe singular value decomposition of a matrix $M$ ($m\\times n$) is the factorization of $M$ into the product of three matrice:\n\t\n\twhere the columns of $U$ ($m\\times r$) and $V$ ($r\\times n$) are orthonormal such that:\n\t\n\tand the matrix $D$ ($n\\times n$) is diagonal with positive real entries. \n\n\tThe SVD is useful in many tasks as Data Mining, Image Processing and Advanced Numerical Methods.\n\n\tTo gain insight into the SVD, we treat the rows of an $m\\times n$ matrix $M$ as $m$ points in a $n$-dimensional space and consider the problem of finding the best $k$-dimensional subspace with respect to the set of points. Here \"best\" means minimize the sum of the squares of the perpendicular distances of the points to the subspace. \n\n\tLet us begin with a special case of the problem where the subspace is 1-dimensional: a line through the origin. We will see later that the best-fitting $k$-dimensional subspace can be found by $k$ applications of the best fitting line algorithm. Finding the best fitting line through the origin with respect to a set of points $\\{x_i|1 \\leq i \\leq m\\}$ in the plane means minimizing the sum of the squared distances of the points to the line. 
Here distance is measured perpendicular to the line. The problem is then named, as we know, the \"best least squares fit\" (\SeeChapter{see section Theoretical Computing}).\n\t\n\tConsider projecting a point $\vec{x}_i$ onto a line through the origin:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=1]{img/algebra/svd.jpg}\n\t\t\caption{The projection of the point $\vec{x}_i$ onto the line through the origin in the direction of $\vec{v}$}\n\t\end{figure}\n\tThen, using the Pythagorean theorem, we get:\n\t\n\tThat is:\n\t\n\tTherefore (see figure):\n\t\n\tTo minimize the sum of the squares of the distances to the line, one could minimize $\sum_{i=1}^m (x_{i1}^2+x_{i2}^2+\ldots+x_{in}^2)$ minus the sum of the squares of the lengths of the projections of the points onto the line. However, $\sum_{i=1}^m (x_{i1}^2+x_{i2}^2+\ldots+x_{in}^2)$ is constant (independent of the line)! So minimizing the sum of the squares of the distances is equivalent to maximizing the sum of the squares of the lengths of the projections onto the line. Similarly, for best-fit subspaces, we can maximize the sum of the squared lengths of the projections onto the subspace instead of minimizing the sum of squared distances to the subspace.\n\t\n\t\subsubsection{Singular Vectors}\n\tWe now build the \"\NewTerm{singular vectors}\index{singular vector}\" of an $m\times n$ matrix $M$. \n\n\tConsider the rows of $M$ as $m$ points in an $n$-dimensional space, and consider the best-fit line through the origin. Let $\vec{v}$ be a unit vector along this line.\n\n\tThe length of the projection of $\vec{x}_i$, the $i$-th row of $M$, onto $\vec{v}$ is (\SeeChapter{see section Vector Calculus}):\n\t\n\twhich we will denote, for what follows, as (\SeeChapter{see section Vector Calculus}):\n\t\n\tSo in our case:\n\t\n\tFrom this we denote the sum of all the squared lengths of the projections by:\n\t\n\tThe best-fit line is the one maximizing $|M\vec{v}|^2$ (i.e. $|M\vec{v}|$) and hence minimizing the sum of the squared distances of the points to the line.\n\t\n\tWith this in mind, we define the \"\NewTerm{first singular vector $\vec{v}_1$}\index{first singular vector}\" of $M$, which is a vector, as the best fit through the origin for the $m$ points in $n$-space that are the rows of $M$. Thus:\n\t\n\tThe scalar value:\n\t\n\tis named the \"\NewTerm{first singular value}\index{first singular value}\" of $M$. Notice that $\sigma_1^2$ is therefore implicitly the sum of the squares of the projections of the points onto the line determined by $\vec{v}_1$.\n\t\n\tThe greedy approach to find, this time, not the best-fit $1$-dimensional but the best-fit $2$-dimensional subspace for a matrix $M$ takes $\vec{v}_1$ as the first basis vector for the $2$-dimensional subspace and then finds the best $2$-dimensional subspace containing $\vec{v}_1$.\n\n\tThus, to find the best $2$-dimensional subspace containing $\vec{v}_1$, we look for a unit vector, denoted $\vec{v}_2$, perpendicular to $\vec{v}_1$, that maximizes $|M\vec{v}|^2$ among all such unit vectors.\n\n\tUsing the same strategy to find the best three- and higher-dimensional subspaces, we define $\vec{v}_3,\vec{v}_4,\ldots$ in a similar manner.\n\t\n\tThe \"\NewTerm{second singular vector $\vec{v}_2$}\" is thus defined by the best-fit line perpendicular to $\vec{v}_1$:\n\t\n\tThe value:\n\t\n\tis named the \"\NewTerm{second singular value}\" of $M$.
\n\n\tThe \"\\NewTerm{third singular vector $\\vec{v}_3$}\" is defined similarly by:\n\t\n\tand so on...\n\t\n\tThe process stop theoretically when we have found $\\vec{v}_1,\\vec{v}_2,\\ldots,\\vec{v}_r$ as singular vector that satisfies:\n\t\n\t\n\tAs the $\\vec{v}_i$ are perpendiculars, if we apply $M$ on all this vectors, the resulting vectors will also be perpendicular between them!\n\n\tTherefore we build the vectors:\n\t\n\tthat are all perpendiculars vectors between them as already mentioned and named \"\\NewTerm{left singular vectors}\\index{left singular vectors}\" of $M$ when the $\\vec{v}_i$ will be named \"\\NewTerm{right singular vectors}\\index{right singular vectors}\". The SVD theorem will fully explain the reason for these terms.\n\t\n\t\\begin{theorem}\n\tLet $M$ be an $m\\times n$ matrix with right singular vector $\\vec{v}_1,\\ldots,\\vec{v}_r$, left singular vectors $\\vec{u}_1,\\ldots,\\vec{u}_r$, and corresponding singular values $\\sigma_1,\\ldots,\\sigma_n$. Then the \"\\NewTerm{singular value decomposition theorem}\\index{singular value decomposition theorem}\" states that:\n\t\n\t\\end{theorem}\n\t\\begin{dem}\n\tWe start naturally from:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tDon't forget that $\\vec{x}^T\\vec{x}$ in linear algebra gives a scalar that is equivalent to the \"dot product\" (\"inner product\"), when instead $\\vec{x}\\vec{x}^T$ gives a square matrix named the \"\\NewTerm{outer product}\\index{outer product}\".\n\t\\end{tcolorbox}\n\tNow let us take the a special case with (as i don't like the general proof):\n\t\n\n\tand let us wee what gives:\n\t\n\tSo if we look closely the result is the same as if we define the matrix;\n\t\n\tTherefore we can see that:\n\t\n\tleads to the same result. Therefore we can write:\n\t\n\tBut we must not forget that if  $V$ is an orthogonal matrix, then it represents an orthonormal basis and then we have proved already earlier above that in this case:\n\t\n\tTherefore:\n\t\n\tAnd this finish the proof!\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tThe reader should also notice that:\n\t\n\tand:\n\t\n\tare equivalent notation for the same thing (just develop the last one explicitly and you will see you fall back on the same result\\footnote{On request we can write the details}! The difference is that the notation with the sum is most used in Data Mining and that with the matrices in Statistics.\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=1]{img/algebra/svd_multiplication.jpg}\n\t\t\\caption{The SVD decomposition of a $m\\times n$ matrix}\n\t\\end{figure}\n\tIt is usage to build the matrix $D$ such that the diagonal is in descending order of amplitude and to order the vectors $\\vec{v}_i$ in the corresponding order. The reason is quite easy to understand as you can see in the example below:\n\t\n\tThis is important to know that this not the only possible decomposition of a matrix. 
There are many other ones, but in this book we will focus only on the decompositions that are directly useful for the engineering topics presented here.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tSome authors prefer to work with the square roots of the singular values, that is, with $\sqrt{\sigma_i}$; the left singular vectors are then defined as:\n\t\n\tThis does not change our previous proof; the result will just be:\n\t\n\twhich in Europe is frequently written as (though it can bring confusion with the notation for eigenvalues...):\n\t\n\tor in matrix form (...):\n\t\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\tTake this example from image processing (made by Jason Liu in MATLAB™): the image below is made of $400$ unique row vectors (the reader can find the equivalent example in our R companion book):\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/svd_feynman_original.jpg}\n\t\t\caption{SVD MATLAB™ example original image}\n\t\end{figure}\n\tWhat happens if, in the sum:\n\t\n\twe take only the first (biggest) singular vector?\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/svd_feynman_r_equal_1.jpg}\n\t\t\caption[]{SVD MATLAB™ reconstruction with $r=1$}\n\t\end{figure}\n\tWhat happens if we take the first two singular vectors?\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/svd_feynman_r_equal_2.jpg}\n\t\t\caption[]{SVD MATLAB™ reconstruction with $r=2$}\n\t\end{figure}\n\t...and if we take the first ten singular vectors?\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/svd_feynman_r_equal_10.jpg}\n\t\t\caption[]{SVD MATLAB™ reconstruction with $r=10$}\n\t\end{figure}\n\t...and if we take the first fifty singular vectors?\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.6]{img/algebra/svd_feynman_r_equal_50.jpg}\n\t\t\caption[]{SVD MATLAB™ reconstruction with $r=50$}\n\t\end{figure}\n\tThere we have it! Using $50$ singular values, we get a decent representation of what the $400$ unique rows look like.\n\t\n\tSo, as we can see, the SVD is a great dimensionality-reduction technique!\n\t\n\t\begin{flushright}\n\t\begin{tabular}{l c}\n\t\circled{95} & \pbox{20cm}{\score{4}{5} \\ {\tiny 36 votes,  79.94\%}} \n\t\end{tabular} \n\t\end{flushright}\n\t\n\t%to make section start on odd page\n\t\newpage\n\t\thispagestyle{empty}\n\t\mbox{}\n\t\section{Tensor Calculus}\n\t\lettrine[lines=4]{\color{BrickRed}T}he conventional vector calculus is a simple and effective technique that adapts perfectly to the study of mechanical and physical properties of matter in a Euclidean space of three dimensions. However, in many fields of physics, experimental quantities appear that can't easily be represented by simple column vectors of Euclidean vector spaces. This is the case, for example, in continuum mechanics (fluids or solids), electromagnetism, General Relativity, etc.\n\t\n\tThus, since the late 19th century, the analysis of forces acting within a continuous medium has led to highlighting physical quantities characterized by nine numbers, representing the pressure forces or internal stresses (see section Continuum Mechanics for the details). The representation of these quantities required the introduction of a new mathematical tool, which was named \"\NewTerm{tensor}\index{tensor}\", by reference to its physical origin. Subsequently, starting in the 1900s, it was G. Ricci and T.
Levi-Civita who developed tensor calculus; the study of tensors then allowed a deepening of the theory of vector spaces and contributed to the development of differential geometry (see section of the same name).\n\t\n\tTensor calculus, also sometimes named \"\NewTerm{absolute differential geometry}\index{absolute differential geometry}\", also has the advantage of freeing itself from all coordinate systems, so the results of the mathematical developments are invariant (a huge simplification in calculations but, in compensation, a huge increase in abstraction and notational complexity). We therefore don't need to be concerned with what type of reference frame we work in, and this is very interesting in General Relativity.\n\t\n\tWe strongly advise the reader to master very well the basics of vector calculus and linear algebra as they have been presented in the previous sections (especially because linear algebra forms the skeleton of tensor calculus!). Where necessary, we have chosen, when writing this section, to come back to certain points seen in the sections of Vector Calculus and Linear Algebra (covariant components, contravariant components, etc.).\n\t\n\tFurthermore, if the reader has already covered the study of stresses in solids (\SeeChapter{see section Continuum Mechanics}), of the Faraday tensor (\SeeChapter{see section Electrodynamics}) or of the energy-momentum tensor (\SeeChapter{see section General Relativity}), this will be a practical advantage before reading what follows. Moreover, the writing of the above items (tensors) was done so that the concept of tensor is introduced, if possible (...), intuitively.\n\t\n\tWe will only do very few practical examples in this section. Indeed the examples, as you have probably already guessed..., will come when we study continuum mechanics, General Relativity, quantum field theory, electrodynamics, etc.\n\t\n\tA piece of advice maybe: you have seen many times in the section of Statistics that writing with vectors and then thinking with matrices was a powerful way to generalize some important results. For this section on tensors, remember that the idea is the same, but we think matrix and we write tensor! (You will better understand this little adage once you have finished reading this whole section.)\n\t\n\t\subsection{Tensor}\n\t\textbf{Definition (simplistic \#\mydef):} The \"\NewTerm{tensors}\index{tensor}\" are mathematical objects generalizing the concepts of vectors and matrices. They were introduced in physics to represent the state of stress and deformation of a volume subjected to forces, hence their name (tensions).\n\t\n\tThe rigorous definition requires (I personally think...) having first read this section in its entirety. But you should know that in fact a tensor is roughly like a determinant... (\SeeChapter{see section Linear Algebra}). Eh yes!
It is simply a multilinear application on a space of a given size (corresponding to the number of columns of the matrix/tensor) which finally gives a scalar (of a given field).\n\t\n\tFor example, we have proved in the section of Continuum Mechanics that the normal and tangential forces in a fluid were given by the relation:\n\t\n\twhich was noted in the traditional condensed form as follows (where we no longer distinguish what is tangential from what is normal, so there is a loss of clarity):\n\t\n\tWe thus make a mathematical quantity $\sigma_{ij}$ with $9$ components appear, while a vector in the same space $\mathbb{R}^3$ has $3$ components.\n\t\n\tThis notion is also much used in the section of General Relativity, where we have proved that the energy-momentum tensor in a particularly simple case was given by:\n\t\n\tand satisfies the no less important conservation equation:\n\t\n\tOr again, still in the section of General Relativity, we have shown that the tensor of the Schwarzschild metric was given by:\n\t\n\tand therefore gives us the equation of the metric (\SeeChapter{see section Differential and Integral Calculus}):\n\t\n\tNote also that in the section of Special Relativity we have shown that the Lorentz transformation tensor is given by:\n\t\n\twhich in condensed form gives the following component transformation:\n\t\n\tAs regards the transformation of the electromagnetic field, we have also proved that the Faraday tensor is given by:\n\t\n\tand therefore permits switching from one reference frame to another using the relation:\n\t\n\tBut these are very simple tensors that can be represented in the form of matrices. You should also remember that reading a variable with indices, suggesting that we are dealing with a tensor, does not mean that it necessarily is one. For example, the famous relation (widely used in the section of General Relativity, and which we will prove much further below):\n\t\n\tmight suggest that the first member on the far left is a tensor, but in fact it is not... it is just a symbol... hence its name: Christoffel \underline{symbol} (not: Christoffel \underline{tensor}).\n\t\n\tThe interest of tensors in physics is that their characteristics are independent of the chosen coordinates. Thus, a relation between tensors in one basis will be true regardless of the basis used thereafter. This is a fundamental and powerful characteristic of General Relativity (among others)!\n\t\n\t\pagebreak\n\t\subsection{Indicial Notation}\n\tWe will use hereafter many mathematical symbols: coordinates, components of vectors and tensors, matrix components, etc., whose number in each category is large or indeterminate. To distinguish the various symbols of a category we use indices. For example, instead of the traditional variables $x, y, z$ we will use the variables $x_1,x_2,x_3$ (as we have already done in the section of Linear Algebra). This notation becomes essential when we have an undetermined number of variables.\n\t\n\tThus, if we have $n$ variables, we denote them by $x_1,x_2,...,x_n$.\n\t\n\tWe will also use superscripts when required; e.g. $x^1,x^2,x^3$. To avoid confusion with powers, the quantity $x^i$ to the power $p$ will be written $(x^i)^p$. When the context eliminates any potential ambiguity, the use of parentheses is however not fundamentally necessary.\n\t\n\tIn tensor calculus there is a summation convention using the fact that a repeated index, below for example the index $i$, becomes itself an indication of the summation.
We then write, with this convention:\n\t\n\twhich condenses the notation rather well!\n\t\n\tThus, to represent the linear system:\n\t\n\twe will write (notice carefully how the components of the associated matrix are written!):\n\t\n\tspecifying that this is for $n=3$.\n\t\n\tWe see in this example how the summation convention allows a condensed, and thus powerful, writing.\n\t\n\tThe summation convention covers all the mathematical symbols having repeated indices. Thus the decomposition of a vector $\vec{x}$ on a basis $(\vec{e}_1,\vec{e}_2,\vec{e}_3)$ will therefore be written, for $n=3$:\n\t\n\tIn summary, any term that has a repeated index represents a sum over all possible values of the repeated index.\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe name, for obvious reasons that we will detail below, the $x^i$ the \"\NewTerm{contravariant components}\index{contravariant component}\" of the vector $\vec{x}$.\n\t\end{tcolorbox}\n\t\n\t\pagebreak\n\t\subsubsection{Summation over multiple indices}\n\tThe summation convention (due to Einstein) extends to the case where we have, in general, several repeated indices in upper and lower positions, so-named \"\NewTerm{dummy indices}\index{dummy indices}\", in the same monomial (physicists often omit the rule of setting their positions opposite, as will often be the case in this book too!). Thus, for example, the quantity $A_i^jx^ix^j$ represents the following sum for $i$ and $j$ taking the values from $1$ to $2$:\n\t\t\n\tThus we easily see that an expression with two summation indices taking the values $1,2,...,n$ will have $n^2$ terms, $n^3$ if there are three summation indices, etc.\n\t\n\tHowever, we must be careful with substitutions in this kind of notation, because if we assume that we have the relation:\n\t\n\tthen, to get the expression of $A$ only as a function of the variables $y^j$, we cannot write:\n\t\n\tbecause it does not come back to the same expression, as the dummy indices, after development, are systematically summed in an identical and rigid way (we leave it to the reader to work out a little example to see this; if needed you can contact us and we will do one). In other words, a dummy index cannot be repeated more than twice.\n\t\n\t\subsubsection{Kronecker Symbol}\n\tThis symbol, introduced by the mathematician Kronecker, is the following (often used in physics and in many other fields):\n\t\n\tThis symbol is named the \"\NewTerm{Kronecker symbol}\index{Kronecker symbol}\".
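The summation convention maps directly onto numerical code; as a minimal Python/NumPy sketch (arrays arbitrary and purely illustrative), \texttt{numpy.einsum} sums over each repeated index exactly as the notation above prescribes:

\begin{verbatim}
import numpy as np

a = np.arange(9.0).reshape(3, 3)     # coefficients a_ij
x = np.array([1.0, 2.0, 3.0])        # components x^j

b = np.einsum('ij,j->i', a, x)       # b_i = a_ij x^j, summed over the dummy j
print(np.allclose(b, a @ x))         # True: this is just the matrix product

delta = np.eye(3)                    # the Kronecker symbol as a matrix
print(np.einsum('ij,j->i', delta, x))  # delta_ij x^j = x_i, i.e. x unchanged
\end{verbatim}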
The Kronecker symbol conveniently allows us to write, for example, the dot product of two vectors $\vec{e}_1$ and $\vec{e}_2$, of unit norm and orthogonal to each other, in the form:\n\t\n\tWe will find this symbol in many examples of theoretical physics in this book (wave quantum physics, quantum field theory, general relativity, fluid mechanics, etc.).\n\t\n\tIt should be noted that there is a generalized version of the Kronecker symbol:\n\t\n\tWe also have, for example:\n\t\n\twhere $\varepsilon_{ijk}$ is the Levi-Civita symbol that we will define right now:\n\t\n\t\subsubsection{Antisymmetric Symbol (Levi-Civita symbol)}\n\tAnother useful symbol is the \"\NewTerm{symbol of anti-symmetry}\index{symbol of anti-symmetry}\", also named the \"\NewTerm{antisymmetry tensor}\index{antisymmetry tensor}\", that we will find in the sections of Electrodynamics, General Relativity and Relativistic Quantum Physics in this book.\n\t\n\tIn mathematics, particularly in linear algebra, tensor analysis, and differential geometry, the Levi-Civita symbol represents a collection of numbers, defined from the sign of a permutation of the natural numbers $1, 2, \ldots, n$, for some positive integer $n$.\n\t\n\tThe values of the Levi-Civita symbol are independent of any metric tensor and coordinate system. Also, the specific term \"symbol\" emphasizes that it is not a tensor because of how it transforms between coordinate systems; however, it can be interpreted as an antisymmetric tensor (a tensor is antisymmetric on (or with respect to) an index subset if it alternates sign ($+/-$) when any two indices of the subset are interchanged).\n\t \n\tIn the case $n=2$ the Levi-Civita symbol is defined by:\n\t \n\tThe values can be arranged into a $2\times 2$ antisymmetric matrix (we can see that we fall back on the definition of an antisymmetric tensor):\n\t\n\tUse of the 2D symbol is relatively uncommon, although in certain specialized topics like supersymmetry and twistor theory it appears in the context of 2-spinors. The 3D and higher-dimensional Levi-Civita symbols are used more commonly.\n\t\n\tIn three dimensions, the Levi-Civita symbol is defined as follows:\n\t\n\ti.e. $\varepsilon_{ijk}$ is $1$ if $(i, j, k)$ is an even permutation of $(1,2,3)$ (including the natural order $(1,2,3)$ itself), $-1$ if it is an odd permutation, and $0$ if any index is repeated. In three dimensions only, the cyclic permutations of $(1,2,3)$ are all even permutations; similarly, the anticyclic permutations are all odd permutations. This means that in 3D it is sufficient to take the cyclic or anticyclic permutations of $(1,2,3)$ to easily obtain all the even or odd permutations.\n\n\tIt can also be expressed with the Kronecker symbol:\n\t\n\t\n\tAn illustrative representation gives:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=0.75]{img/algebra/levi_civita_symbol.jpg}\n\t\t\caption{3D Levi-Civita symbol illustration (source: Wikipedia)}\n\t\end{figure}\n\tBy using this symbol, a determinant of order two (\SeeChapter{see section Linear Algebra}) is then written in the advantageous form:\n\t\n\tand the vector cross product:\n\t\n\twhere of course $j$ and $k$ are summed, and where the free index $i$ is the row number of the resulting vector (if requested, we will make the developments).
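Staying in a three-dimensional orthonormal frame, here is a minimal Python/NumPy sketch building $\varepsilon_{ijk}$ explicitly and checking the index form of the cross product against \texttt{numpy.cross} (the vectors are arbitrary and purely illustrative):

\begin{verbatim}
import numpy as np

# Levi-Civita symbol in 3D: +1 on even permutations, -1 on odd ones, 0 otherwise
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0     # cyclic (even) permutations
    eps[i, k, j] = -1.0    # anticyclic (odd) permutations

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# (x cross y)_i = eps_ijk x^j y^k, with j and k summed
cross = np.einsum('ijk,j,k->i', eps, x, y)
print(np.allclose(cross, np.cross(x, y)))   # True
\end{verbatim}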
In particular, the rotational (curl) of a vector field (\SeeChapter{see section Vector Calculus}) is then:\n\t   \n\tAs an example, let us calculate in index notation the double vector product $\vec{A}\times(\vec{B}\times\vec{C})$:\n\t \n\twhere again, the free index $i$ is the row number of the resulting vector. Let us see a detailed demonstration of these equalities (the order of the equalities below does not need to follow the sequence of equalities of the previous relation).\n\t\begin{dem}\n\tWe have proved in the section of Vector Calculus the following identity:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe latter relation is sometimes named the \"\NewTerm{Grassmann rule}\index{Grassmann rule}\" or, more commonly, the \"\NewTerm{dual vector product}\index{dual vector product}\".\n\t\end{tcolorbox}\t\n\tTo prove the relation:\n\t\n\tup to a change of indices, let us first prove that:\n\t\n\twhich gives us:\n\t\n\tWe do the development only for the first row (this is already bor... er, long enough...):\n\t\n\tThis is the first step that needed to be proven.\n\t\n\tNow let us prove that for the $n$-th row we indeed have:\n\t\n\tWith the help of a result obtained in the section of Vector Calculus (vector product of three different vectors), we have for the first term (the first row of the vector resulting from the calculation):\n\t\n\tIt is immediate that ($i$ being equal to $1$):\n\t\n\tLet us now show that for $i$ equal to $1$ we also have:\n\t\n\tIndeed:\n\t\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tAs a second example, let us prove that the divergence of a curl vanishes:\n\t\n\tBy the Schwarz theorem (\SeeChapter{see section Differential and Integral Calculus}), $\partial_i\partial_j$ is symmetric in the indices (so swapping the indices has no impact), while $\varepsilon_{ijk}$ is antisymmetric (by definition) in the same indices; the sum over $i$ and $j$ must therefore necessarily cancel. For example, the contribution to the sum of the term $i=1,j=2$ is the opposite of that with $i=2,j=1$.\n\t\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} The symbol of antisymmetry is often named the \"\NewTerm{Levi-Civita tensor}\index{Levi-Civita tensor}\" in the literature. In fact, although it is a tensor in the form of its notation, it is more of a mathematical tool than a mathematical \"being\", hence the preference of some physicists to name it \"symbol\" rather than \"tensor\". But it's up to you...\\\n\t\n\t\textbf{R2.} By abuse of notation we do not write the basis vectors; but rigorously, and to avoid forgetting it, remember that in order to balance the members of the equality, and in order to clarify that the vectors are expressed in the same basis, we should write:\n\t\n\t\end{tcolorbox}\n\tLet us now see a first simple practical application of this index notation, using the example of the basis change that we have seen in the section of Vector Calculus:\n\t\n\tGiven two bases $(\vec{e}_1,\vec{e}_2,\ldots,\vec{e}_n)$ and $(\vec{e}_1^{'},\vec{e}_2^{'},\ldots,\vec{e}_n^{'})$ of a Euclidean vector space $\mathcal{E}^n$.
Each vector of one basis can be decomposed on the other basis in the form of a linear application (basis change matrix - see section Linear Algebra):\n\t\n\twhere we obviously use the summation convention for $i,k=1,2,\ldots,n$.\n\t\n\tLet us recall that the basis change matrix (or \"\NewTerm{transformation matrix}\index{transformation matrix}\") should have as many columns as the basis vectors have rows (dimensions or components). A small example in three dimensions gives:\n\t\n\tand obviously it is much more fun to write this as:\n\t\n\twhere, on $A$, the index $k$ represents the column of the matrix and $i$ the row of the matrix.\n\n\tAny vector $\vec{x}$ of $\mathcal{E}^n$ can be decomposed (we have already proved this in the section of Vector Calculus) on each basis of $\mathcal{E}^n$ in the form:\n\t\n\tIf we look for the relations between the components $x^i$ and ${x'}^k$, it is enough to take again the relations proved in the section of Linear Algebra, and we then have:\n\t\n\tImmediately, by the uniqueness of the decomposition of a vector on a basis, we can equate the components on the basis vectors and we get (one must be careful to rearrange the order of the terms again, because matrix multiplication is in general not commutative, as we already know!):\n\t\n\tBy construction we also have the trivial relation (\SeeChapter{see section Linear Algebra}):\n\t\n\tA way to prove the previous relation in a quite general way, using tensor calculus notation, is to remember the following result proved in the section of Linear Algebra:\n\t\n\tand by using:\n\t\n\tWe therefore have:\n\t\n\tThe basis vectors being linearly independent, this last relation implies that when $i\neq j$:\n\t\n\tand when $i=j$:\n\t\n\tTherefore it comes:\n\t\n\tAs for the dot product, the results obtained with the index notation are very interesting and extremely powerful. We have already defined the scalar product in the section of Vector Calculus, but let us see how we handle it with the index notation:\n\n\tLet us consider a Euclidean vector space $\mathcal{E}^n$ referred to an arbitrary basis $(\vec{e}_i)$. We already know that vectors are written on this basis:\n\t\n\tThe scalar product, with respect to its properties and the index notation, is then written:\n\t\n\tThis is a fundamental relation for advanced physics (General Relativity and String Theory) that makes the \"\NewTerm{covariant metric tensor}\index{metric covariant tensor}\" appear (\SeeChapter{see section Non-Euclidean Geometry}):\n\t\n\tand to satisfy the commutative property of the dot product (\SeeChapter{see section Vector Calculus}) we must obviously have the equality (at least in Euclidean spaces or those approximated as such...):\n\t\n\tThe relation before last is sometimes written in the form:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWhen the basis vectors $\vec{e}_i$ form an orthogonal basis (not necessarily orthonormal), the quantities:\n\t\n\tare obviously zero when $i \neq j$.
The dot product of two vectors $\vec{x}$ and $\vec{y}$ is then reduced to:\n\t\n\tWe then have, in this particular case:\n\t\n\tand therefore, when the basis vectors form an orthonormal basis, it is clear that $g_{ij}$ reduces to the Kronecker symbol alone, such that:\n\t\n\t\end{tcolorbox}\n\t\n\t\subsection{Metric and Signature}\n\tAs we have seen in the section of Vector Calculus (and Topology), the dot product of a vector $\vec{x}$ with itself can be used to define the concept of the norm of a vector (and also the concept of distance).\n\t\n\tLet us recall that, by definition, the norm of a vector is given by (\SeeChapter{see section Vector Calculus}):\n\t\n\twhere the numbers $g_{ij}$ define somehow a \"measure\" of the vectors; we then say, in the language of tensor calculus, that they are the \"metric\" of the selected vector space.\n\t\n\tIn the space of classical geometry, the norm is a number that is always strictly positive and becomes zero only if the measured vector is also zero. By contrast, the previous expression of the norm of a vector may possibly be negative for certain numbers $g_{11},g_{12},\ldots,g_{nn}$ (complex spaces for example). So we can distinguish two kinds of pre-Euclidean vector spaces (a Euclidean space in which we define a scalar product, for recall), depending on whether the norm is positive or not. However, when in theoretical physics we want to make the analogy with a vector space structure, we need the condition:\n\t\n\tto be satisfied ($g_{ij}$ can be written as a matrix, nothing prevents us from doing so).\n\t\n\tExplanations: We know that the dot product must satisfy the commutative property, such that:\n\t\n\tOn the other hand, if for any nonzero $y^j$ we have:\n\t\n\tthis implies $x^i=0$ (that is one of the properties of the norm we saw in the section of Vector Calculus). We can then write:\n\t\n\tWe simply have here a system of $n$ equations with $n$ unknowns (which, by hypothesis, must admit only the solution $x^i=0$); for this it is necessary and sufficient that the determinant of the system, denoted $g$, be different from zero (\SeeChapter{see section Linear Algebra}). So we must have:\n\t\n\tThis is one of the conditions for an expression in tensor index notation to be comparable to a norm on the vector space of the states of the system, in the context of theoretical physics!\n\t\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} The number of $+$ and $-$ signs found in the expression of the dot product is a characteristic of a given vector space $E^n$. It is named the \"\NewTerm{signature of the vector space}\index{signature of a vector space}\".\\\n\t\n\t\textbf{R2.} A practical application of the calculation of the metric is presented in detail in the section of General Relativity.\\\n\t\end{tcolorbox}\n\tFrom the coefficients of the covariant metric tensor $g_{ij}$ defining the metric of the space $E^n$, we can introduce the coefficients of the \"\NewTerm{contravariant metric tensor $g^{ij}$}\index{contravariant metric tensor}\" defining the metric of a \"\NewTerm{dual space}\index{dual space metric}\" $E_{*}^n$ by the relation:\n\t\n\tIn other words, the twice covariant metric tensor and its twice contravariant equivalent are inverses of each other.
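Here is a minimal Python/NumPy sketch of this inverse relation, $g_{ij}g^{jk}=\delta_i^k$, for an arbitrary (purely illustrative) non-orthonormal basis of $\mathbb{R}^3$, together with the dual basis it generates:

\begin{verbatim}
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # rows are the basis vectors e_i

g_lo = E @ E.T                    # covariant metric: g_ij = e_i . e_j
g_hi = np.linalg.inv(g_lo)        # contravariant metric g^ij

# g_ij g^jk = delta_i^k:
print(np.allclose(np.einsum('ij,jk->ik', g_lo, g_hi), np.eye(3)))   # True

# Dual basis e^i = g^ij e_j, which satisfies e^i . e_j = delta^i_j:
E_dual = g_hi @ E
print(np.allclose(E_dual @ E.T, np.eye(3)))                         # True
\end{verbatim}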
We will prove it explicitly later, by showing during our study of the Gram determinant that the contravariant and covariant components of a Euclidean space are equal and that both spaces have the same number of dimensions.\n\n\tA well-known special case that satisfies the above equality is the Minkowski metric tensor (\SeeChapter{see section General Relativity}), where we have:\n\t \n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe space $E^n$ is also named the \"primal space\", and if it is of Euclidean type, let us recall that it is denoted $\mathcal{E}^n$.\n\t\end{tcolorbox}\n\tThe dual space is spanned by $n$ basis vectors $\vec{e}^i$ constructed from the vectors $\vec{e}_i$ such that:\n\t\n\tIt is therefore easy to see that the scalar product of the vectors $\vec{e}^i$ defines the metric $g^{ij}$ of the dual space:\n\t\n\twhile the vectors $\vec{e}^i$ (contravariant) and $\vec{e}_j$ (covariant) are mutually orthogonal (for $i\neq j$):\n\t\n\tWe can also express a vector in the dual basis by the following writing, noting that the position of the dummy indices is obviously reversed:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe components $x_i$ (orthogonal projections of the vector on the axes) are named, for reasons that we will see further below, the \"covariant\" components.\n\t\end{tcolorbox}\n\tSo we finally have the possibility to change the vectors of one basis into those of another:\n\t\n\twhere it is important to remember that, to turn a covariant component into a contravariant one, we raise the index:\n\t\n\tand conversely, to make it covariant, we lower it:\n\t\n\tSo, still in the case of the example of the Minkowski metric, if we consider the contravariant four-vector:\n\t\n\tThen we have:\n\t\n\t\n\t\subsection{Gram's Determinant}\n\tLet us see another approach to determining the basis vectors of the dual space, which can also allow a better understanding of the concept and will let us obtain an interesting result that we will use in certain calculations of General Relativity and String Theory (mainly in their study using the Lagrangian formalism).\n\t\n\tSo we have, for $i=j=1$:\n\t\n\tThis scalar product can be seen as a normalization condition for the two bases, and the two scalar products $\vec{e}^2\circ\vec{e}_1=0$, $\vec{e}^3\circ\vec{e}_1=0$ as orthogonalization conditions. Thus, as $\vec{e}_1$ is perpendicular to $\vec{e}^2,\vec{e}^3$, we can write:\n\t\n\twhere $c^{te}$ is a constant of proportionality. Now let us play around with the relation before last:\n\t\n\tThen we get:\n\t\n\twhere we see the mixed product appear, as we defined it in the section of Vector Calculus.\n\t\n\tThus we get very easily:\n\t\n\tand even for the contravariant vectors (without proof, as it may be too obvious?):\n\t\n\t\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\textbf{R1.} The reader will perhaps have noticed that the relations above are only valid for a three-dimensional space.\\\n\t\n\t\textbf{R2.} The notation of the two previous relations is mathematically a bit abusive, because in reality it is not an equality between two vectors but a map from one vector space into the other!\\\n\t\n\t\textbf{R3.} As in physics we very frequently consider Cartesian, cylindrical and spherical orthonormal bases, and as the denominator of the two previous relations is always equal to $1$ in these bases, the contravariant basis vectors are identified with the covariant basis vectors (and vice versa).
So the covariant coordinates are equal to the contravariant coordinates in these special cases!!\n\t\\end{tcolorbox}\t\n\tNow let us come back to something that will seem rather old to us... In the section of Vector Calculus, we defined and studied the cross product and the mixed product. We will now see another way of representing them, and see that this representation provides a result that is, to say the least, quite relevant!\n\tWe saw in the section of Vector Calculus that the cross product was given by:\n\t\n\tBut what we did not see, and will see now, is that this expression is trivially nothing but the formal vector determinant of the following matrix:\n\t\n\tYes... so the result is not a scalar! It is just a notational convention.\n\t\n\tBut as we are doing tensor calculus, we must now properly distinguish covariant and contravariant components. We will rewrite it properly with contravariant components:\n\t\n\tSimilarly, the mixed product can be written using this relation and notation:\n\t\n\tNow, looking at the expression of the determinant, we see quite easily, without having to expand anything, that:\n\t\n\tIndeed (we calculate the determinant making use of the three-component determinant expansion proved in the section of Linear Algebra):\n\t\n\tThe second-to-last relation is also frequently written:\n\t\n\twith obviously:\n\t\n\tnamed the "\\NewTerm{Euclidean volume}\\index{Euclidean volume}" (indeed, let us recall that the mixed product is a volume, as we showed in the section of Vector Calculus!)\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tLet us recall again that if the basis vectors are orthonormal, whether they are expressed in Cartesian, cylindrical or spherical coordinates, then:\n\t\n\t\\end{tcolorbox}\n\tMoreover, we also have the important relation:\n\t\n\tIndeed, using the relation seen in the section of Vector Calculus:\n\t\n\tBut we have seen previously that $\\vec{e}^i\\circ\\vec{e}_j=\\delta_j^i$:\n\t\n\tand thus finally:\n\t\n\tThis having been done, let us come back to our relation for the cross product:\n\t\n\tand let us express the components of the first line of the determinant in the dual basis (in contravariant coordinates):\n\t\n\tObviously, if the cross product is expressed in covariant components, then we have:\n\t\n\tNow let us apply the mixed product:\n\t\n\tKnowing the expression of the determinant of a square $3\\times 3$ matrix (\\SeeChapter{see section Linear Algebra}), it comes immediately (we can detail on request, as always in this book):\n\t\n\tConversely, it comes almost immediately:\n\t\n\tBut we have proved in the section of Vector Calculus that $x_i=\\vec{x}\\circ\\vec{e}_i$. It then comes:\n\t\n\tand therefore:\n\t\n\tThe latter relation is often named the "\\NewTerm{Gram determinant}\\index{Gram determinant}". A very interesting special case gives us (we use the relation between the metric components and the dot products of the basis vectors that we proved during our study of the metric just above):\n\t\n\twritten in another way:\n\t\n\tThus, the Euclidean volume is given by what we name the "\\NewTerm{functional determinant}\\index{functional determinant}" of the system (an expression that we will see again and use in the section of General Relativity to calculate the real volume, and also in the section of String Theory):\n\t\t\n\twhich is dimensionless (so you have to multiply it by an elementary volume factor to get volume units). If we denote the determinant in another way:\n\t\t\n\twe get the common relation that we can find in many books on General Relativity and String Theory, but given without proof, named the "\\NewTerm{Riemannian volume}\\index{Riemannian volume}" form or simply the "\\NewTerm{volume form}":\n\t\t\n\tor, written in the following "\\NewTerm{invariant volume element}\\index{invariant volume element}" form:\n\t\n\tThe reader should be able to verify easily enough that for the orthonormal Cartesian reference frame we fall back on the volume of a cube, and that for the spherical case we indeed fall back on the expression of the infinitesimal volume of the sphere as used in the section of Geometric Shapes (but on request we can add the details here).
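As a minimal sketch of the spherical verification just mentioned (using the spherical metric recalled in the section of Vector Calculus), the metric of the spherical coordinates $(r,\\theta,\\phi)$ is diagonal with $g_{rr}=1$, $g_{\\theta\\theta}=r^2$ and $g_{\\phi\\phi}=r^2\\sin^2\\theta$, so that:\n\t\\[g=\\det(g_{ij})=r^4\\sin^2\\theta,\\qquad \\sqrt{g}\\,\\mathrm{d}r\\,\\mathrm{d}\\theta\\,\\mathrm{d}\\phi=r^2\\sin\\theta\\,\\mathrm{d}r\\,\\mathrm{d}\\theta\\,\\mathrm{d}\\phi\\]\n\twhich is indeed the infinitesimal volume element of the sphere.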
If we use the result obtained in the section of Differential Geometry, we therefore have for any surface patch:\n\t\n\n\t\n\t\\subsection{Contravariant and Covariant Components}\n\tSo far we have written the dummy indices as superscripts or subscripts at our discretion. However, this is not always allowed, and sometimes the fact that a dummy index is a superscript or a subscript has a special significance! This is often the major difficulty in the study of some theorems, because if we do not study these indices from the beginning, we do not really know how to interpret the position of the dummy indices. The reader should therefore be extremely careful at this level.\n\n\tFor a Euclidean vector space $\\mathcal{E}^n$ referred to an arbitrary basis $(\\vec{e}_i)$, the scalar product of a vector $\\vec{x}=x^i\\vec{e}_i$ with a vector of its basis is written, as we know (remember that this is equivalent to projecting the components on the axis corresponding to $\\vec{e}_j$):\n\t\n\tTherefore:\n\t\n\tThis relation is of major importance in theoretical physics and tensor calculus. It is important to remember it when we study the contraction of indices later (you can observe in the previous relation that, on the left side, we have "lowered" the index of the component of the right member of the equality).\n\t\n\tThese scalar products, denoted $x_j$, are named the "\\NewTerm{covariant components}\\index{covariant components}", in the basis $(\\vec{e}_i)$, of the vector $\\vec{x}$. These components are therefore defined by:\n\t\n\tThey will be denoted by lower indices!! We will see later that these components are naturally introduced for some vectors of physics, for example the gradient vector. Moreover, the notion of covariant component is essential for tensors.\n\t\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} Never forget that this is therefore the projection of a vector on a vector of its own basis!!\\\\\n\t\n\t\\textbf{R2.} The basis vectors always have their indices written as subscripts because they are their own covariant components (they project on themselves by scalar product). This is the main trick used by beginners to remember when to put the lower indices (and therefore to know when to put the upper ones...)!\n\t\\end{tcolorbox}\n\tConversely, the "\\NewTerm{contravariant components}\\index{contravariant components}" (in other words: the non-projected components) can be calculated by solving, with respect to the $n$ unknowns, the system of $n$ equations:\n\t\n\tThe previous relations show that the covariant components $x_j$ are related to the conventional components $x^i$, and that the contravariant components $x^i$ are therefore numbers such that:\n\t\n\tThey will be denoted with upper indices!!
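As a minimal numerical sketch (with a basis of our own illustrative choosing), take in the Euclidean plane $\\vec{e}_1=(1,0)$ and $\\vec{e}_2=(1,1)$, and the vector $\\vec{x}$ whose orthonormal components are $(2,1)$. Solving $\\vec{x}=x^1\\vec{e}_1+x^2\\vec{e}_2$ gives the contravariant components $x^1=1$, $x^2=1$, while the projections give the covariant components $x_1=\\vec{x}\\circ\\vec{e}_1=2$ and $x_2=\\vec{x}\\circ\\vec{e}_2=3$. With $g_{11}=1$, $g_{12}=g_{21}=1$, $g_{22}=2$, one indeed checks that $x_j=g_{ij}x^i$:\n\t\\[x_1=g_{11}x^1+g_{21}x^2=1+1=2,\\qquad x_2=g_{12}x^1+g_{22}x^2=1+2=3\\]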
The study of basis changes will further justify the names of these different components.\n\t\n\tIn a canonical orthonormal basis (a very special case, corresponding to the classical Cartesian, polar, cylindrical and spherical coordinates), the covariant and contravariant components are the same, as we already know from our study of the Gram determinant. Indeed:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe see above that the incessant writing of dummy superscript or subscript indices can sometimes lead to some confusion and serious headaches...\n\t\\end{tcolorbox}\n\t\n\t\\pagebreak\n\t\\subsection{Operations in Bases}\n\tThe interest of physicists in tensor calculus lies in passing parameters from one basis to another for various reasons (often the aim is to simplify the study of problems, or simply because the studied states depend, or may depend, on the geometry of the space in question). It is therefore necessary to introduce the main tools relating thereto. We will also take this opportunity to present developments that could already have been addressed in the section of Vector Calculus.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAs I like to say... Tensor Calculus is to physics what XML is to computer science: a background-independent language!\n\t\\end{tcolorbox}\n\t\n\t\\subsubsection{Gram-Schmidt Orthogonalization Method}\n\tThe "\\NewTerm{Schmidt orthogonalization method}\\index{Schmidt orthogonalization method}" (also named the "\\NewTerm{Gram-Schmidt orthogonalization method}\\index{Gram-Schmidt orthogonalization method}") allows the actual determination of an orthogonal basis for any pre-Euclidean vector space $\\mathcal{E}^n$ (we could have introduced this method in the section of Vector Calculus, but it seemed more interesting to us to present it in a general and aesthetic framework using tensor calculus).\n\t\n\tFor this, let us consider a set of $n$ linearly independent vectors $(\\vec{x}_1,\\vec{x}_2,\\ldots,\\vec{x}_n)$ of $\\mathcal{E}^n$ and suppose that for each vector we have the dot product (square of the norm):\n\t\n\tLet us seek $n$ vectors $\\vec{e}_i$ that are mutually orthogonal. Let us start with $\\vec{e}_1=\\vec{x}_1$ and seek $\\vec{e}_2$, orthogonal to $\\vec{e}_1$, in the form (this is a choice!!):\n\t\n\tThe mental visualization of the process is not that easy, so the reader has to trust (anyway...) the mathematical results (if one day we have the time, we will draw the process for the classical three-dimensional case).\n\t\n\tThe coefficient $\\lambda_1$ is calculated by writing the orthogonality relation:\n\t\n\tWe deduce without too much trouble:\n\t\n\tThe parameter $\\lambda_1$ being determined, we get the vector $\\vec{e}_2$, which is orthogonal to $\\vec{e}_1$ and not zero, since the system $(\\vec{e}_1,\\vec{x}_2,\\ldots,\\vec{x}_n)$ is linearly independent.\n\t\n\tThus so far we have:\n\t\n\tThe vector $\\vec{e}_3$ is then sought in the form:\n\t\n\tThe two orthogonality relations $\\vec{e}_1\\circ\\vec{e}_3=0$ and $\\vec{e}_2\\circ\\vec{e}_3=0$ enable the calculation of the coefficients $\\mu_1$ and $\\mu_2$. We therefore get:\n\t\n\twhich determines the vector $\\vec{e}_3$, orthogonal to $\\vec{e}_1$ and $\\vec{e}_2$ and not zero, since the vectors $(\\vec{e}_1,\\vec{e}_2,\\vec{x}_3,\\ldots,\\vec{x}_n)$ are independent. By continuing the same type of calculation, we get step by step a system of mutually orthogonal vectors $(\\vec{e}_1,\\vec{e}_2,\\vec{e}_3,\\ldots,\\vec{e}_n)$, none of which are zero.\n\t\n\tIn case some vectors are such that $\\vec{x}_i\\circ\\vec{x}_i=0$ (their norm is zero), we then replace $\\vec{x}_i$ by $\\vec{x'}_i=\\vec{x}_i+\\lambda\\vec{x}_j$, choosing a vector $\\vec{x}_j$ such that we get $\\vec{x'}_i\\circ\\vec{x'}_i\\neq 0$.\n\t\n\tWe therefore conclude that any pre-Euclidean space admits orthogonal bases!\n\n\tThis method of constructing bases is of primary importance! It can be used to study physical systems in a pre-Euclidean reference frame whose properties change over time, which is typical in General Relativity.
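As a minimal numerical sketch of the method (with vectors of our own illustrative choosing), take in the Euclidean plane $\\vec{x}_1=(1,1)$ and $\\vec{x}_2=(0,1)$. Then $\\vec{e}_1=\\vec{x}_1$ and the orthogonality relation $\\vec{e}_1\\circ\\vec{e}_2=0$ gives:\n\t\\[\\lambda_1=-\\frac{\\vec{x}_2\\circ\\vec{e}_1}{\\vec{e}_1\\circ\\vec{e}_1}=-\\frac{1}{2},\\qquad \\vec{e}_2=\\vec{x}_2+\\lambda_1\\vec{e}_1=\\left(-\\tfrac{1}{2},\\tfrac{1}{2}\\right)\\]\n\tand one checks directly that $\\vec{e}_1\\circ\\vec{e}_2=-\\tfrac{1}{2}+\\tfrac{1}{2}=0$.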
\\subsubsection{Change of Basis}\n\tGiven two bases $(\\vec{e}_1,\\ldots,\\vec{e}_n)$ and $(\\vec{e'}_1,\\ldots,\\vec{e'}_n)$ of a vector space $\\mathcal{E}^n$, each vector of one basis can be decomposed on the other basis as follows (we have already proved it):\n\t\n\tA vector $\\vec{x}$ of $\\mathcal{E}^n$, when its contravariant components are known, can be decomposed in each basis in the form:\n\t\n\tand we have already proved that:\n\t\n\tWe notice that the transformation relations of the contravariant components of a vector are the opposite of those of the basis vectors, the quantities $A$ and $A'$ being permuted, hence the origin of the name "\\NewTerm{contra}"-"\\NewTerm{variant}" for these components!\n\n\tLet $x_i$ and ${x'}_k$ be the covariant components of a vector $\\vec{x}$, respectively in the bases $(\\vec{e}_i)$ and $(\\vec{e'}_k)$. Let us substitute the basis vectors expressed by the relations:\n\t\n\tinto the expression defining the covariant components; we then get:\n\t\n\tHence the relation between the covariant components in each basis:\n\t\n\tWe also get:\n\t\n\tWe notice that the covariant components transform as the basis vectors do, hence the name of these components!\n\t\n\tOnce again, unless the basis is orthonormal, never forget that the covariant and contravariant components are different!!\n\t\n\t\\subsubsection{Reciprocal Basis (Dual Basis)}\n\tNow let us come back to the concept of dual space, but as seen from the point of view of vector calculus. This second approach can perhaps help some readers to better understand the concepts seen previously, but in return it hides the underlying reasoning behind the origin of the names "covariant" and "contravariant". It is, however, still the most common presentation used in the literature...\n\t\n\tGiven a basis $(\\vec{e}_i)$ of a Euclidean vector space $\\mathcal{E}^n$, by definition the $n$ vectors $\\vec{e}^k$ which satisfy the following relations:\n\t\n\tare named "\\NewTerm{reciprocal vectors}\\index{reciprocal vectors}" of the vectors $\\vec{e}_i$. They will be denoted with upper indices. By definition, each reciprocal vector $\\vec{e}^k$ must therefore be orthogonal to all the vectors $\\vec{e}_i$, except for $k=i$.\n\t\n\tLet us first show that the reciprocal vectors $\\vec{e}^k$ of a given basis $(\\vec{e}_i)$ are linearly independent. For this, we must show that a linear combination $\\lambda_k\\vec{e}^k$ gives the zero vector if and only if each coefficient $\\lambda_k$ is zero.\n\t\\begin{dem}\n\tGiven $\\vec{x}=x^i\\vec{e}_i$ any vector of $\\mathcal{E}^n$.
Let us take the dot product of the previous linear combination $\\lambda_k \\vec{e}^k$ with $\\vec{x}$; we get:\n\t\n\tThe latter equality must be verified whatever the $x^i$; it is therefore necessary that each $\\lambda_i$ be zero, and thus the vectors $\\vec{e}^k$ are indeed linearly independent.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem} \n\tThe system of $n$ reciprocal vectors forms a basis named the "\\NewTerm{reciprocal basis}\\index{reciprocal basis}" (which is just the dual basis) of the vector space $\\mathcal{E}^n$.\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tGiven three vectors $\\vec{e}_1,\\vec{e}_2,\\vec{e}_3$ forming a basis (not necessarily orthonormal!) of a Euclidean space, we decide to denote by:\n\t\n\twhere, for recall, the symbol $\\times$ represents the cross product (\\SeeChapter{see section Vector Calculus}) and the whole is a mixed product, as also seen in the section of Vector Calculus, representing an oriented volume.\\\\\n\t\n\tThe following vectors:\n\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe recognize here the relations we proved just earlier above during our study of the Gram determinant!!\n\t\\end{tcolorbox}\n\t\\end{tcolorbox}\n\tNow let us consider a vector on the original basis $\\vec{e}_1,\\vec{e}_2,\\vec{e}_3$ that we will denote by (as seen above):\n\t\n\twhere, by definition, the contravariant components of the vector appear as we defined them earlier above (and whose name's origin we explained at the same time). We also saw above that each contravariant component will also (naturally and by extension) be given by:\n\t\n\tSimilarly, the covariant components appear:\n\t\n\tIn this approach, we then define the contravariant and, respectively, covariant metric tensors:\n\t\n\tIt comes therefore, for example, for the contravariant components (in the case of a three-dimensional space), knowing that the approach is the same for the covariant components:\n\t\n\tAnd so we find again the transformation relations between the contravariant and covariant components already seen above, with the difference that here they seem to come out of a hat through successive definitions, which therefore hides the origin of the names of these components (at least from our point of view).
But perhaps some readers will prefer this approach...\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tAs an example, consider the basis:\n\t\n\tNotice that it is not orthogonal, because $\\vec{e}_1\\circ \\vec{e}_2=4\\neq 0$.\n\n\tIn this case, applying the previous Gram relations, we have:\n\t\n\tAs we can see in the figure below, where $\\vec{e}_1$ and $\\vec{e}_2$ are shown in green and the reciprocal vectors $\\vec{e}^1$ and $\\vec{e}^2$ are shown in blue:\\\\\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.75]{img/algebra/reciprocal_basis_contravariant_components.jpg}\n\t\t\\caption[]{Basis vectors, reciprocal basis vectors and contravariant components}\n\t\\end{figure}\n\tNotice that, by construction, we indeed have that $\\vec{e}^1$ is orthogonal to $\\vec{e}_2$ and $\\vec{e}^2$ is orthogonal to $\\vec{e}_1$!\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tFor a given vector $\\vec{a}$, say:\n\t\n\twe can use the relation proved earlier to find its contravariant components in the basis $(\\vec{e}_1,\\vec{e}_2,\\vec{e}_3)$. We get:\n\t\n\tSo that (see figure above):\n\t\n\tNow observe that the original basis vectors are the reciprocals of the reciprocal ones. Thus we can just as well expand the same vector $\\vec{a}$ along the basis vectors:\n\t\n\twith $a_i=\\vec{a}\\circ\\vec{e}_i$.\n\n\tThe components $a_i$ are the covariant components of $\\vec{a}$. In our example, we obtain:\n\t\n\tso we have:\n\t\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=0.75]{img/algebra/reciprocal_basis_covariant_components.jpg}\n\t\t\\caption[]{Basis vectors, reciprocal basis vectors and covariant components}\n\t\\end{figure}\n\t\\end{tcolorbox}\n\t\n\t\\subsection{Euclidean Tensors (Cartesian Tensors)}\n\tThe generalization of the concept of vector has led us to the study of vector spaces of $n$ dimensions. Tensors too are, in a sense, vectors, but they possess additional properties compared to ordinary vectors.\n\n\tFor the theoretical physicist, tensor calculus is interesting primarily for how the components of a tensor transform during a change of basis of the vector spaces from which they come. We will begin by studying them with respect to the properties of basis changes (because it is the most interesting case).\n\n\tA tensor is in practice often only defined and used in the form of its components. These can be expressed in covariant or contravariant form, like those of any vector. But a new type of components will appear for tensors: the "mixed components". These three types of components are decompositions of a Euclidean tensor on different bases.\n\t\n\t\\textbf{Definition (\\#\\mydef):} A "\\NewTerm{Cartesian tensor}\\index{Cartesian tensor}" uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components.\n\n\tThe use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor (\\SeeChapter{see section Continuum Mechanics}) and the moment of inertia tensor in rigid body dynamics (\\SeeChapter{see section Classical Mechanics}).
Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in General Relativity (\\SeeChapter{see section General Relativity}).\n\t\n\t\\subsubsection{Fundamental Tensor}\n\tIn the theory seen earlier above, we used the quantities $g_{ij}$, defined from the scalar product of the basis vectors $(\\vec{e}_i)$ of an $n$-dimensional pre-Euclidean vector space $\\mathcal{E}^n$, by:\n\t\n\tThese $n^2$ quantities are the covariant components of a tensor named the "\\NewTerm{fundamental tensor}\\index{fundamental tensor}" or, as we already know, the "\\NewTerm{metric tensor}\\index{metric tensor}".\n\t\n\tLet us study how the quantities $g_{ij}$ vary when we make a basis change:\n\n\tGiven $(\\vec{e'}_k)$ another basis linked to the previous one by the known relation:\n\t\n\tSubstituting the relation $\\vec{e}_i={A'}_i^k\\vec{e'}_k$ in the expression of $g_{ij}$, it comes (we rename the indices, as must be done during a substitution):\n\t\n\tIn the new basis $(\\vec{e'}_k)$, the dot products of the basis vectors are therefore the quantities:\n\t\n\tSo we finally have, for the expression of the covariant components $g_{ij}$ under a basis change:\n\t\n\tIdentically we have:\n\t\n\tIn general, a sequence of $n^2$ quantities $t_{ij}$ that transform, during a basis change of $\\mathcal{E}^n$, according to the two previous relations, namely:\n\t\n\tare, by definition, the "\\NewTerm{covariant components of a tensor of order two}" (with two indices) on $\\mathcal{E}^n$.\n\n\tWe can therefore manipulate quantities expressing the intrinsic properties of bases as standard tensors!\n\t\n\t\\subsubsection{Tensor product (dyadic) of two vectors and matrices}\n\tLet us consider a Euclidean vector space $\\mathcal{E}^n$ with basis $(\\vec{e}_i)$ and two vectors of $\\mathcal{E}^n$:\n\t\n\tLet us form the pairwise products of the contravariant components $x^i$ and $y^j$, namely:\n\t\n\tWe thus get $n^2$ quantities (if the two vectors have the same number of components), which are the contravariant components of a tensor of order two named the "\\NewTerm{tensor product}\\index{tensor product}" of the vector $\\vec{x}$ by the vector $\\vec{y}$.\n\t\n\tFor example, for $\\vec{x}$ of dimension $2$ and $\\vec{y}$ of dimension $3$, we have:\n\t\n\tWe can also tensorially multiply two matrices $A$ and $B$. The matrix describing the tensor product $A\\otimes B$ is then the Kronecker product of the two matrices (a small numerical sketch is given a few paragraphs below).\n\t\n\tFor example, if:\n\t\n\tThen:\n\t\n\tThe most famous case is the covariant tensor of rank $2$ in a space of $4$ dimensions, as it is the most used one in tensor calculus:\n\t\n\tThe reader can also now better understand the origin of the name of the gradient of a vector field (giving a "tensor field"), as we saw it in the section of Vector Calculus, because we can now rewrite it as:\n\t\n\tThe reader will certainly have noticed from the examples above that the tensor product is non-commutative. That is:\n\t\n\tWe can obviously build tensor products of order three (thus with $n^3$ terms), such as the following tensor, three times contravariant:\n\t\n\tetc.
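As the sketch announced earlier (the vectors and matrices being our own illustrative choices), take $\\vec{x}=(1,2)$ and $\\vec{y}=(3,4)$; the arrays $x^iy^j$ and $y^ix^j$ are respectively:\n\t\\[\\begin{pmatrix}3&4\\\\6&8\\end{pmatrix}\\neq\\begin{pmatrix}3&6\\\\4&8\\end{pmatrix}\\]\n\twhich shows the non-commutativity of the tensor product. Likewise, for the Kronecker product of two matrices:\n\t\\[\\begin{pmatrix}1&2\\\\3&4\\end{pmatrix}\\otimes\\begin{pmatrix}0&1\\\\1&0\\end{pmatrix}=\\begin{pmatrix}0&1&0&2\\\\1&0&2&0\\\\0&3&0&4\\\\3&0&4&0\\end{pmatrix}\\]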
Let us now study the behavior of these components under basis changes. Let us use the basis-change relations for the contravariant components of a vector, namely:\n\t\n\tLet us replace, in the relation $u^{ij}=x^iy^j$, the components $x^i$ and $y^j$ by their basis-change expressions; we get:\n\t\n\tThe quantities ${u'}^{kl}$ are the new components:\n\t\n\tThe transformation formula of the $n^2$ quantities $u^{ij}$ under a basis change of $\\mathcal{E}^n$ is finally (very similar to that of the metric tensor):\n\t\n\tSuch a basis-change relation characterizes the contravariant components of a tensor of order two. Conversely, we get:\n\t\n\tSo the $n^2$ quantities are the "\\NewTerm{contravariant components of a tensor of order two}\\index{contravariant components of a tensor of order two}".\n\n\tWe can then build the same pairwise products for the covariant components $x_i$ and $y_j$ of the vectors $\\vec{x}$ and $\\vec{y}$, thus:\n\t\n\tThe basis-change formulas for the covariant components of the vectors are given by the following relations, which we have already proved previously:\n\t\n\tSubstituting the first relation in the product $u_{ij}=x_iy_j$, we get:\n\t\n\tThis is the basis-change relation of the covariant components of a tensor of order two. We also easily check that we have:\n\t\n\tIdentically, we have of course ${u'}_{kl}={x'}_k{y'}_l$ since $u_{ij}=x_iy_j$.\n\n\tSo the $n^2$ quantities are then the "\\NewTerm{covariant components of a tensor of order two}\\index{covariant components of a tensor of order two}".\n\n\tLet us now create $n^2$ quantities by multiplying pairwise the covariant components of a vector $\\vec{x}$ by the contravariant components of a vector $\\vec{y}$; we get:\n\t\n\tLet us perform a basis change in this last relation, taking into account the expressions $x_i={A'}_i^k{x'}_k$ and $x^i={A}_k^i {x'}^k$; we get:\n\t\n\tThis basis-change relation characterizes the "\\NewTerm{mixed components}\\index{mixed components}" of an order two tensor. Conversely, we can verify that we have:\n\t\n\tThese mixed components also constitute the components of a tensor product of $\\vec{x}$ by $\\vec{y}$, relative to a given basis.\n\t\n\tIn general, a sequence of $n^2$ quantities that transform, during a basis change of $\\mathcal{E}^n$, according to the previously established relations are therefore, by definition, "\\NewTerm{mixed components of a tensor of order two}\\index{mixed components of a tensor of order two}".\n\t\n\t\\pagebreak\n\t\\subsubsection{Tensor Spaces}\n\tIn the previous study, we used systems of $n^2$ numbers created from a vector space $\\mathcal{E}^n$. When these numbers satisfy certain basis-change relations, we name these quantities, by definition, "\\NewTerm{components of a tensor}\\index{components of a tensor}".\n\n\tWe have seen that any linear combination of such components constitutes the components of a new tensor. We can therefore add together the components of tensors and multiply them by a scalar to get the components of another new tensor. These addition and multiplication properties mean that we can use these tensor quantities as vector components.\n\t\n\tTo clarify how we define a tensor on a basis, let us study the particular case of a tensor product of two vectors formed by triplets of numbers (that is to say, typically in $\\mathbb{R}^3$). Consider therefore the Euclidean vector space $\\mathcal{E}^3$ whose vectors are triplets of numbers of the form $\\vec{x}=(x_1,x_2,x_3)$.
The canonical orthonormal basis of $\\mathcal{E}^3$ consists of three vectors that we know very well, but written in tensor calculus as:\n\t\n\twith $i=1,2,3$ (a nice way to write simple things, isn't it...).\n\tVectors of $\\mathcal{E}^3$ give us the possibility to form the nine quantities that we have named the "\\NewTerm{components of the tensor product}\\index{components of a tensor product}" of the vectors $\\vec{x}$ and $\\vec{y}$.\n\t\n\tIf we make all possible tensor products between vectors of $\\mathcal{E}^3$, we get sequences of $9$ numbers that can be used to define the following vector:\n\t\t\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tWe see immediately, with the above relation and the previous relation, that the tensor product is therefore not commutative.\n\t\\end{tcolorbox}\n\tWe are then left with the elements of a nine-dimensional vector space $\\mathcal{E}^9$, whose elements are all pairwise combinations of three numbers.\n\t\n\tWe then say that $\\mathcal{E}^9$ has a "\\NewTerm{tensor product structure}\\index{tensor product structure}", which is obviously denoted in standard notation by $\\mathcal{E}^9=\\mathcal{E}^3\\otimes \\mathcal{E}^3$ or sometimes $\\mathcal{E}_3^{(2)}$.\n\n\tThese vectors can be decomposed, for example, on an orthonormal canonical basis:\n\t\n\twith $k=1,2,\\ldots,9$.\n\n\tIf we rewrite the quantities $x^iy^j$ according to their place in the expression of $\\vec{U}$, i.e.:\n\t\n\twith $k=1,2,\\ldots,9$ and $i,j=1,2,3$, the vectors $\\vec{U}$ are then written:\n\t\n\tand this is, as we know, an example of a tensor of order $2$ (obviously we can generalize this approach).\n\tHow do these tensors differ from ordinary vectors? They are identical to some vectors of $\\mathcal{E}^9$ in our example, but they were formed from the vectors $\\vec{x}$ and $\\vec{y}$ of $\\mathcal{E}^3$. To remember this fact, we then write, as we already know:\n\t\n\tand they are named, as we already know, "tensor products of order two" of the vectors $\\vec{x}$ and $\\vec{y}$.
The symbol $\\otimes$ is defined by the way we have formed the quantities $x^iy^j=u^{ij}$ and by the order in which they were classified to form the vector $\\vec{U}$.\n\n\tTo recall the dependence between a quantity $x^iy^j=u^{ij}$ and the basis vector $\\vec{e}_i$ to which it is assigned, let us rewrite these vectors by putting, in place of the index $k$, the two indices $i$ and $j$ relative to the components, namely:\n\t\n\tThe latter can be written in the form:\n\t\n\tThe vectors $\\vec{e}_i\\otimes\\vec{e}_j$ generate a basis of $\\mathcal{E}^9$ in the case of our example, which is named the "\\NewTerm{tensor associated basis}\\index{tensor associated basis}".\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Example:}\\\\\\\\\n\tConsider:\n\t\n\tWe then have, for example:\n\t\n\tThat is to say:\n\t\n\t\\end{tcolorbox}\n\tIt follows that, as an element of a space $\\mathcal{E}^n\\otimes \\mathcal{E}^n$, a tensor $\\vec{U}$ is a vector of the general form:\n\t\n\tLet us study its properties with respect to a basis change of $\\mathcal{E}^n$ such that:\n\t\n\tDuring such a basis change, the basis $(\\vec{e}_i\\otimes\\vec{e}_j)$ associated to $\\vec{e}_i$ becomes another basis $(\\vec{e}_k^{'}\\otimes\\vec{e}_l^{'})$ associated to $\\vec{e}_k^{'}$, that is:\n\t\n\tIt follows that the tensor product $\\vec{U}$ has, for components in the new basis:\n\t\n\tThe tensor product has the following properties:\n\t\n\t\\begin{enumerate}\n\t\t\\item[P1.] Right/left distributivity with respect to the addition of vectors:\n\t\t\n\t\tThe proof of these relations is simply deduced from the definition of the tensor product. Indeed, we have for example:\n\t\t\n\t\t\n\t\t\\item[P2.] Associativity with multiplication by a scalar:\n\t\t\n\t\tIndeed, we have:\n\t\t\n\t\n\t\t\\item[P3.] When we choose a basis in each of the vector spaces, $(\\vec{e}_i)$ for $\\mathcal{E}^n$ and $(\\vec{f}_j)$ for $\\mathcal{F}^m$, the $n\\cdot m$ elements of $G_{nm}$ that we denote by $\\vec{e}_i \\otimes\\vec{f}_j$ also form a basis of $G_{nm}$.\n\t\t\\begin{dem}\n\t\t\tAlready done in the particular example we used earlier above.\n\t\t\\begin{flushright}\n\t\t\t$\\square$  Q.E.D.\n\t\t\\end{flushright}\n\t\t\\end{dem}\n\t\\end{enumerate}\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn practice, we often have to use tensors formed from vectors belonging to the same vector space $\\mathcal{E}^n$.\n\t\\end{tcolorbox}\n\tWe can of course generalize the tensor product to any number of vectors. Step by step, given the property P1, we can consider $p$ vectors $\\vec{x}_1,\\vec{x}_2,\\ldots,\\vec{x}_p$, each belonging to different vector spaces $\\mathcal{E}^{n_1},\\mathcal{E}^{n_2},\\ldots,\\mathcal{E}^{n_p}$. If we have:\n\t\n\twe can form the tensor product:\n\t\nwith $i_1\\in\\{1,\\ldots,n_1\\},i_2\\in\\{1,\\ldots,n_2\\},\\ldots,i_p\\in\\{1,\\ldots,n_p\\}$.\n\n\tWe thus build tensor products of order $p$ belonging to the vector space $\\mathcal{E}^{n_1}\\otimes\\mathcal{E}^{n_2}\\otimes\\ldots \\otimes\\mathcal{E}^{n_p}$, a space that has a tensor product structure. The elements of this space are, by definition, tensors of order $p$.\n\n\tIn order to unify the classification, the elementary vector spaces, which cannot be fitted with a tensor product structure, can be regarded as having the components of tensors of order $1$.
In general, we name these elements "\\NewTerm{vectors}\\index{vector}", reserving the term "\\NewTerm{tensor}\\index{tensor}" for elements of tensor spaces of order equal to or greater than $2$!\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIt is customary to name scalar quantities "\\NewTerm{tensors of order zero}\\index{tensor of order zero}". It is also rare to meet tensors of order greater than $2$.\n\t\\end{tcolorbox}\n\tIt is quite obvious, and we will not do the proof, that we can absolutely redefine all the concepts (basis, decomposition on a basis, reciprocal basis, dot product, tensor product) that we have seen so far by considering a tensor of order $1$ as a vector (we would therefore have to rewrite everything that was already written above... which is pointless from our point of view).\n\n\tIt is also quite possible to repeat all these definitions for higher order tensors and thus generalize the concept of tensor space to all dimensions.\n\n\tFrom these considerations, we can state the "\\NewTerm{tensoriality criterion}\\index{tensoriality criterion}":\n\t\n\tFor the elements of a sequence of $n^p$ quantities, relative to a basis of a vector space $\\mathcal{E}_{(p)}^n$, to be considered as the components of a tensor, it is necessary and sufficient that these quantities be linked together, in two different bases of $\\mathcal{E}_{(p)}^n$, by the transformation relations of the components.\n\t\\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tA vector can be represented in any basis by a sequence of $n$ components. However, we cannot conclude that any sequence of $n$ numbers is a vector. Indeed, when we move to another basis of the space, the components must also change so as to represent the same object; we then say that the vector is an intrinsic object (whose existence does not depend on the choice of the basis). It remains then to remember that a vector is a tensor of order $1$.\t\n\t\\end{tcolorbox}\n\n\t\\subsubsection{Linear combination of tensors}\n\tWe can form other tensors by combining together the components of various tensor products defined using vectors of the same vector space. For example, let us consider the contravariant components of the tensor products of the vectors $\\vec{x},\\vec{y}$ and $\\vec{w},\\vec{z}$:\n\t\n\tLet us form the following quantities:\n\t\n\tThe $n^2$ quantities $t^{ij}$ also satisfy the general basis-change formulas. Indeed, by substituting the transformation relations of the contravariant components of a tensor product into the previous expression, we have:\n\t\n\tThe $n^2$ quantities $t^{ij}$, satisfying the basis-change relations, therefore also constitute the components of a tensor of order two.\n\n\t\\subsubsection{Contraction of indices}\n\tLet us consider the mixed tensor product of two vectors $\\vec{x}$ and $\\vec{y}$, of respective contravariant components $x^i$ and covariant components $y_j$. The mixed components of the tensor product $\\vec{V}$ of these two vectors are:\n\t\n\tLet us perform the addition of the various components of the tensor $\\vec{V}$ such that $i=j$, i.e.:\n\t\n\tWe thus get the expression of the dot product of the vectors $\\vec{x}$ and $\\vec{y}$. The quantity $v$ is a scalar (a tensor of order zero). Such an addition over indices of different variance constitutes, by definition, the operation of "\\NewTerm{contraction of indices}\\index{contraction of indices}" of the tensor $\\vec{V}$. This operation has allowed us to move from a tensor of order two to a tensor of order zero.
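As a minimal numerical sketch (the components being our own illustrative choice), take $x^i=(2,3)$ and $y_j=(1,4)$ in $\\mathcal{E}^2$; the mixed components $v_j^i=x^iy_j$ form the array:\n\t\\[\\begin{pmatrix}2&8\\\\3&12\\end{pmatrix}\\]\n\tand the contraction over $i=j$ gives $v=v_1^1+v_2^2=2+12=14=x^iy_i$, which is indeed the dot product in these components.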
The tensor $\\vec{V}$ has thus been stripped of one covariance and one contravariance.\n\t\n\tLet us also take the example of a tensor $\\vec{U}$ whose mixed components $u_k^{ij}$ are once covariant and twice contravariant (caution... it is not a three-dimensional matrix, but simply an indication that the components of this tensor are labelled by three indices!!). Let us consider those of its components such that $k=j$, that is the components $u_j^{ij}$, and let us perform the addition of the latter. We then get:\n\t\n\tThese new quantities $v^i$ form the components of a tensor $\\vec{V}$ of order one (thus a vector!) and constitute what we then name the "\\NewTerm{contracted components}\\index{contracted components}" of the tensor $\\vec{U}$; they of course satisfy the basis-change relations (on request we can prove it, but you should know that the proof is similar to the one we made for vectors). So we have indeed moved from a tensor of order three to a tensor of order one!\n\t\n\tSo we can see that the underlying idea of contraction is to facilitate the resolution of a purely mathematical problem; depending on the situation, it may be good to raise or reduce the order of a tensor. This is often a choice made by trial and error based on a specific context, or one that arises naturally from a purely mathematical or mathematical-physical development (as we will see in examples further below).\n\n\t\\paragraph{Raising and lowering indices}\\mbox{}\\\\\\\\\n\tIf we start with a tensor in contravariant or covariant components, we can lower/raise one or more indices by (possibly repeated) multiplication by $g_{ij}$ or $g^{ij}$ (of canonical type: unit diagonal and positive signature) to get mixed components on which we can then perform contraction operations.\n\t\n\tLet us consider a Euclidean tensor $\\vec{U}$ of contravariant components $u^{i_1i_2i_3\\ldots i_p}$.\n\n\tIf we want to perform a contraction on this tensor, we first need to transform it into a mixed tensor. This transformation will be done using the fundamental tensor.\n\n\tLet us write $\\vec{U}$ in mixed components by lowering the index $i_1$ to the covariant position, for example (this amounts to expressing this contravariant component as a covariant one). So:\n\t\n\tWe see clearly that in this case, to lower a contravariant index of a tensor using the fundamental tensor, we must look, among the covariant components of the fundamental tensor, for the index that is itself in contravariant position in the original tensor, and replace its position (but this time in covariant position) by the other index of the fundamental tensor (the idea is the same when we wish to perform this operation on a covariant tensor).\n\t\n\tIndeed, let us recall that we have proved that:\n\t\n\tAlso remember that raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the covariant and contravariant metric tensors being inverse to each other:\n\t\n\tNow that we have mixed tensor components, we can very well contract the indices. Let us choose, for example, the index $i_2$ and perform the contraction with the index $j_1$; let us put $i_2=j_1=k$ (we are then concerned only with some specific terms) and write the whole process from the beginning:\n\t\n\tSo we get, after lowering an index and performing one contraction, a tensor of order $p-2$.\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\\textbf{{\\Large \\ding{45}}Examples:}\\\\\\\\\n\tE1.
Let us see a first example of index raising and contraction with the covariant $4$-position, a first order tensor, given by (\\SeeChapter{see section Special Relativity}):\n\t\n\tin components:\n\t\n\t(where the $x_j$ are the usual Cartesian coordinates) and the Minkowski metric tensor with signature $(-+++)$ given by (\\SeeChapter{see section Special Relativity}):\n\t\n\tin components:\n\t\n\tTo raise the index, we multiply by the metric tensor and contract:\n\t\n\tThen for $\\lambda = 0$:\n\t\n\tand for $\\lambda = j = 1, 2, 3$:\n\t\n\tSo the index-raised contravariant $4$-position is:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE2. As a second order tensor example, let us consider the contravariant electromagnetic tensor, which in the $(+---)$ signature is given by (\\SeeChapter{see section Electrodynamics}):\n\t\n\tin components:\n\t\n\tTo obtain the covariant tensor $F_{\\alpha\\beta}$, we multiply by the metric tensor and contract:\n\t\n\tand since $F^{00} = 0$ and $F^{0i}=-F^{i0}$, this reduces to:\n\t\n\tNow for $\\alpha = 0, \\beta = k = 1, 2, 3$:\n\t\n\tand by antisymmetry, for $\\alpha = k = 1, 2, 3$, $\\beta = 0$:\n\t\n\tthen finally for $\\alpha = k = 1, 2, 3$, $\\beta = \\ell = 1, 2, 3$:\n\t\n\tThe (covariant) lower indexed tensor is then:\n\t\n\t\\end{tcolorbox}\n\t\n\t\\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE3. We will also see, further below, an example where we contract a tensor of order $1$ (one of the contravariant components of the vectors of the spherical basis) that already has a lowered index:\n\t\n\t\\end{tcolorbox}\n\t\\begin{tcolorbox}[title=Remarks,colframe=black,arc=10pt]\n\t\\textbf{R1.} In the relation:\n\t\n\tthe equality is an abusive notation that we can find in some books (because, strictly speaking, we should do the calculation in two steps).\\\\\n\t\n\t\\textbf{R2.} As a result of the symmetry of the quantities $g_{ij}$ (the dot product is commutative), this latter tensor is identical to the one we would get by lowering the index $i_2$ to the covariant position and then contracting the index $i_1$ with the index $j_2$.\\\\\n\n\tLet us see this:\\\\\n\n\tThe symmetry $g_{ij}=g_{ji}$ takes the form (this may seem confusing, but let us remember that the number of the component $i$ indicates the place of this component):\n\t\n\tTherefore it comes:\n\t\n\tand putting $i_1=j_2=k$:\n\t\n\t\\end{tcolorbox}\n\tIn general, the contraction of a tensor allows us to form a tensor of order $p-2$ from a tensor of order $p$. We can of course repeat the operation of contraction. Thus, a tensor of even order $2p$ will become a scalar after $p$ contractions, and a tensor of odd order $2p+1$ will become a vector.
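As a minimal sketch of such repeated contractions (using only the relation $g^{ij}g_{jk}=\\delta_k^i$ recalled above), the fully contracted product of the fundamental tensor with itself gives:\n\t\\[g^{ij}g_{ij}=\\delta_i^i=n\\]\n\tthat is, the dimension of the space; for the Minkowski metric tensor of Special Relativity, for example, $\\eta^{\\mu\\nu}\\eta_{\\mu\\nu}=4$.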
Having now defined the contraction of indices, we can extend the tensoriality criterion. So far we have seen two ways to recognize the tensor character of a sequence of quantities; the contraction criterion provides a third:\n\t\\begin{itemize}\n\t\t\\item The first is to show that these quantities are formed by the tensor product of the components of vectors, or by a sum of such tensor products;\n\n\t\t\\item The second is to study how these quantities transform during a basis change and to check that the transformation relations are satisfied;\n\n\t\t\\item The third, new one states that, for a set of $n^{p+q}$ quantities having $p$ upper and $q$ lower indices to be a tensor, it is necessary and sufficient that their product, fully contracted with the contravariant components of any $p$ vectors and the covariant components of any $q$ vectors, be a quantity (the norm, in fact...) that remains invariant under basis change.\n\t\\end{itemize}\n\t\n\t\\subsection{Special Tensors}\n\tIn theoretical physics and engineering, we may be faced with tensors that have interesting properties. To avoid redundant work in each case, we will list and prove here the various such properties used in other sections of this book, and briefly discuss their possible implications (the detailed analysis being reserved for their application in those same sections).\n\t\n\t\\subsubsection{Symmetric Tensor}\n\tConsider a tensor $\\vec{T}$ of order two with contravariant components $T^{ij}$. Let us suppose that, relative to a basis $(\\vec{e}_i)$, all these components satisfy the relations:\n\t\n\tIn another basis $(\\vec{e}_j^{'})$, related to the previous one by the known transformation relations, the new components ${T'}^{lm}$ satisfy the relation:\n\t\n\tWe see that the property $T^{ij}=T^{ji}$ is therefore an intrinsic characteristic of the tensor $\\vec{T}$, independent of the basis! We then say that the tensor is a "\\NewTerm{symmetric tensor}\\index{symmetric tensor}" (we will come back to this concept a little further below), also named a "\\NewTerm{totally invariant tensor}\\index{totally invariant tensor}" (implicitly meaning: under basis change).\n\n\tThe symmetry property is also true for the covariant components of a symmetric tensor, since we have:\n\t\n\tConversely, the symmetry of the covariant components implies that of the contravariant components.\n\n\tFor higher order tensors, the symmetry may be partial, being valid only for two covariant indices or two contravariant indices. Thus, a fourth order tensor of mixed components $T_l^{ijk}$ may be symmetric in $i$ and $j$ only, for example, giving:\n\t\n\tWe check, as above, that such a property is intrinsic.\n\t\n\tA tensor is said to be a "\\NewTerm{completely symmetric tensor}\\index{completely symmetrical tensor}" if any transposition of two indices with the same variance changes the corresponding component into itself. For example, for a completely symmetric tensor of order three $T^{ijk}$, the following components are equal:\t\n\t\n\tThere are many examples of symmetric tensors.
Some include:\n\t\\begin{itemize}\n\t\t\\item the metric tensor $g_{\\mu \\nu }$ (\\SeeChapter{see section General Relativity});\n\t\t\\item the Einstein tensor $G_{\\mu \\nu }$ (see further below);\n\t\t\\item the Ricci tensor $R_{\\mu \\nu }$ (see also further below);\n\t\t\\item the stress and strain tensors for fluids or solids $\\sigma_{ij}$ (\\SeeChapter{see section Continuum Mechanics});\n\t\t\\item the Lorentz boost tensor $\\Gamma_\\nu^\\mu$ (\\SeeChapter{see section Special Relativity});\n\t\t\\item ...\n\t\\end{itemize}\n\tWe can also (a very interesting curiosity) obtain a geometric representation of the values of the components of a symmetric tensor of order two!!\n\t\n\tFor this, let us consider, in the ordinary geometric space with coordinates $x^i$, the following equation:\n\t\n\twhere, for recall, $x^ix^j$ can be seen as a tensor with $i,j=1,2,3$, and where the $a_{ij}$ are given real coefficients. Let us suppose that the coefficients are such that:\n\t\n\tThe above equation is then:\n\t\n\tHere we fall back on the equation of a surface of the second degree, a quadric, similar to the conics of the plane that we saw in the section of Analytical Geometry. We know, by extension to three dimensions, that these surfaces are ellipsoids or hyperboloids, depending on the values of the quantities $a_{ij}$.\n\t\n\tLet us study how the quantities $a_{ij}$ transform when we make a change of coordinates such as:\n\t\n\tThe equation of the quadric is written in this new coordinate system:\n\t\n\tHence the expression of the coefficients in the new system of axes:\n\t\n\tThe coefficients $a_{ij}$ therefore transform as the covariant components of a tensor of order two. Conversely, if the quantities $a_{ij}$ are the components of a symmetric tensor, these components define the coefficients of a quadric!! There is therefore a certain equivalence between a symmetric tensor and the coefficients of a quadric...!! We then say that the equation of the quadric is the "\\NewTerm{quadric representation of a symmetric tensor}\\index{quadric representation of a symmetric tensor}" or "\\NewTerm{representation surface}".\n\t\n\tSo the representation surface (or representation quadric) is a geometrical representation of a second rank symmetric tensor; it is useful for giving us a visual image of the tensor, as well as, for example, for calculating magnitudes of material properties described by second rank symmetric tensors!!\n\t\n\tWe know from our study of quadrics in the section of Analytical Geometry (by extending it to the three-dimensional case) that we can always find a coordinate system relative to which the equation of a quadric takes a simpler form:\n\t\n\tIn this case, the basis vectors are supported by the principal axes of the quadric. In this coordinate system, the components of the tensor equation reduce to:\n\t\n\tand $a_{ij}=0$ for the other components.
The quantities $b_i$ are named the "\\NewTerm{principal values}\\index{principal values}" of the tensor $a_{ij}$.\n\n\tIf the quantities $b_1,b_2,b_3$ are positive, the surface is an ellipsoid; if two quantities are strictly positive and the third strictly negative, we have a one-sheet hyperboloid; if two quantities are strictly negative and the third positive, we have a two-sheet hyperboloid (\\SeeChapter{see section Analytical Geometry}).\n\t\n\tComparing the expression of the quadric obtained previously with the classic equation:\n\t\n\twhere $a$, $b$, $c$ are the semi-axes of an ellipsoid, shows that we have:\n\t\n\tBelow we can see a screenshot of an interactive tool from Cambridge University to play with the (ellipsoidal only!) representation quadric of a rank two symmetric tensor:\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\\includegraphics[scale=1]{img/algebra/symmetric_tensor_represntation_quadric_cambridge.jpg}\n\t\t\\caption[]{Visual link between a tensor and its representative surface \\\\(source: \\href{http://www.doitpoms.ac.uk/tlplib/tensors/representation.php}{http://www.doitpoms.ac.uk})}\n\t\\end{figure}\n\t\n\t\\subsubsection{Antisymmetric Tensor}\n\tWhen the contravariant or covariant components of a tensor of order two satisfy the property:\n\t\n\twe then say that the tensor is an "\\NewTerm{anti-symmetric tensor}\\index{anti-symmetric tensor}". In other words, a tensor is antisymmetric on (or with respect to) an index subset if its components alternate sign ($+/-$) when any two indices of the subset are interchanged.\n\t\n\tIt follows from this definition that an antisymmetric tensor must obviously have all its diagonal components equal to zero, as for example for a rank $2$ tensor:\n\t\n\t\n\tA well-known antisymmetric tensor is the electromagnetic tensor $F_{\\mu \\nu }$ (\\SeeChapter{see section Electrodynamics}).\n\t\n\tFor example, a covariant tensor of order three $T_{ijk}$ will be said to be antisymmetric in $i$ and $k$ if, for all values the indices can take, we have:\n\t\n\tOr a fourth order covariant tensor $T_{ijkl}$ will be said to be antisymmetric in $i$ and $l$ if, for all values the indices can take, we have:\n\t \n\t\n\tA tensor will be "\\NewTerm{totally anti-symmetric}\\index{totally anti-symmetric tensor}" if any transposition of indices of the same variance (covariant/contravariant) changes the corresponding component into its opposite.\n\n\tA tensor $T_{ij}$ can be put in the form of a sum of a symmetric tensor and an antisymmetric tensor. Indeed, we have:\n\t\n\tThe first term of the sum above is a symmetric tensor and the second an antisymmetric tensor.\n\t\\begin{dem}\n\tConsider first that $T_{ij}$ is symmetric; then we have:\n\t\n\tThis proves that the first term is indeed symmetric for this special case.\n\t\n\tConsider secondly that $T_{ij}$ is antisymmetric; then we have:\n\t\n\tThis proves that the second term is indeed antisymmetric for this special case.\n\n\tNow, if $T_{ij}$ is neither symmetric nor antisymmetric, we have:\n\t\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tA tensor $T_l^{ijk}$ will be "\\NewTerm{partially anti-symmetric}\\index{partially anti-symmetric tensor}" if, for example, we have:\n\t\n\tThat is to say, anti-symmetric only for a subset of indices.
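As a minimal numerical sketch of the decomposition just proved (with an array of our own illustrative choosing):\n\t\\[T_{ij}=\\begin{pmatrix}1&4\\\\2&3\\end{pmatrix}=\\underbrace{\\begin{pmatrix}1&3\\\\3&3\\end{pmatrix}}_{\\frac{1}{2}(T_{ij}+T_{ji})}+\\underbrace{\\begin{pmatrix}0&1\\\\-1&0\\end{pmatrix}}_{\\frac{1}{2}(T_{ij}-T_{ji})}\\]\n\twhere the first term is symmetric, the second antisymmetric, and their sum restores $T_{ij}$.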
Let us now consider two vectors $\\vec{x}=x^i\\vec{e}_i$ and $\\vec{y}=y^j\\vec{e}_j$ of a vector space $\\mathcal{E}^n$, and let us form the following antisymmetric quantities (we can see in them two tensor products):\n\t\n\twhere we see immediately that the components $T^{ij}$ are, by construction, those of an antisymmetric tensor $\\vec{T}$, since:\n\t\n\tThe decomposition of the vector $\\vec{T}$ in the basis $\\vec{e}_i\\otimes\\vec{e}_j$ is:\n\t\n\tThe tensor $\\vec{x}\\wedge\\vec{y}$ (written like this in analogy with the cross product of vectors for $n=3$) is named the "\\NewTerm{outer product}\\index{outer product}" of the vectors $\\vec{x}$ and $\\vec{y}$. We say that this tensor is a "\\NewTerm{bivector}\\index{bivector}".\n\t\n\tThe outer product is an antisymmetric tensor which satisfies the following properties:\n\t\\begin{enumerate}\n\t\t\\item[P1.] Anticommutativity:\n\t\t\n\t\twhich results in:\n\t\t\n\n\t\t\\item[P2.] Left and right distributivity over vector addition:\n\t\t\n\n\t\t\\item[P3.] Associativity with multiplication by a scalar:\n\t\t\n\n\t\t\\item[P4.] The outer products:\n\t\t\n\t\tconstitute a basis for all bivectors.\n\t\\end{enumerate}\n\t\\begin{dem}\n\tAn antisymmetric tensor $\\vec{T}$ of order two, an element of $\\mathcal{E}_n^{(2)}$, can, as we proved earlier, be written as:\n\t\n\tExchanging, in the last sum of the above relation, the names of the indices, and considering that $T^{ij}=-T^{ji}$, we get:\n\t\n\tThe elements:\n\t\n\tare linearly independent vectors, since the vectors $\\vec{e}_i\\otimes\\vec{e}_j$ are too. These elements constitute a basis on which any antisymmetric tensor can be decomposed.\n\t\\begin{flushright}\n\t\t$\\square$  Q.E.D.\n\t\\end{flushright}\n\t\\end{dem}\n\tThe number of distinguishable vectors $\\vec{e}_i\\otimes\\vec{e}_j-\\vec{e}_j\\otimes\\vec{e}_i$ is equal to the number of distinguishable combinations of vectors taken two by two among $n$, that is (\\SeeChapter{see section Probabilities}):\n\t\n\tIndeed, among the $n^2$ components, $n$ components are equal to zero and the $n(n-1)$ other components have pairwise opposite values. So we can consider that half of the latter are sufficient to characterize the tensor.\n\n\tIn the context of the outer tensor product, where we have:\n\t\n\tthe number of distinguishable components is also equal to $n(n-1)/2$, and they are named "\\NewTerm{strict components}\\index{strict components}".\n\t\n\tWe notice that for $n=3$, the number of strict components of the outer product of two vectors is also equal to three. This allows us to form, with the components of the bivector, the components of a cross product vector $\\vec{z}$.\n\n\tThus, a cross product exists only for a subspace of bivectors whose number of dimensions is equal to $3$, its pre-images being antisymmetric tensors.\n\n\tIf all these conditions are satisfied, we say that the vector $\\vec{z}$ is the "\\NewTerm{adjoint tensor}\\index{adjoint tensor}" of the tensor $\\vec{T}$.
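As a minimal sketch in an orthonormal basis of $\\mathcal{E}^3$ (our own illustrative choice of vectors), take $\\vec{x}=(1,0,0)$ and $\\vec{y}=(0,1,0)$. The bivector components $T^{ij}=x^iy^j-x^jy^i$ have the strict components $T^{23}=0$, $T^{31}=0$ and $T^{12}=1$, while the cross product gives $\\vec{z}=\\vec{x}\\times\\vec{y}=(0,0,1)$; the identification $z^1=T^{23}$, $z^2=T^{31}$, $z^3=T^{12}$ (the standard one in an orthonormal basis) is thus verified on this example.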
\\subsubsection{Fundamental Tensor}\n\tWe saw at the beginning of our study of Tensor Calculus the definition of the components of the fundamental covariant tensor $g_{ij}$, that is, for recall:\n\t\n\tThese quantities are involved, as we know (see the previous topics), in the expression of the dot product of two vectors $\\vec{x}$ and $\\vec{y}$ of contravariant components $x^i$ and $y^j$, given by the relation (for recall):\n\t\n\tLet us use the general tensoriality test to highlight the tensor character of the $g_{ij}$. The previous expression is a fully contracted product of the quantities $g_{ij}$ with the contravariant components $x^iy^j$ of an arbitrary tensor. As the dot product is an invariant quantity (in this case a scalar) with respect to basis changes, it follows that the $n^2$ quantities $g_{ij}$ are the covariant components of a tensor.\n\n\tThis tensor is also symmetric, as a result of the symmetry property of the dot product of the basis vectors, such that:\n\t\n\tWe have the same for the contravariant components of the fundamental tensor:\n\t\n\tIf we denote by $g_j^i$ the mixed components of the fundamental tensor:\n\t\n\tthen obviously, in the canonical basis:\t\n\t\n\t\n\t\\subsection{Curvilinear Coordinates}\n\tThe conventional concept of a coordinate system can be generalized, as we know, to any $n$-dimensional space (\\SeeChapter{see section Principia}). We name "\\NewTerm{coordinate system}\\index{coordinate system}" in $\\mathcal{E}^n$ any mode of definition of a point $M$ in the considered space.\n\t\n\t\\textbf{Definitions (\\#\\mydef):} \n\t\\begin{enumerate}\n\t\t\\item[D1.] For a given coordinate system (Cartesian, spherical, cylindrical, polar...), we name "\\NewTerm{coordinate line}\\index{coordinate line}" the locus of the points $M$ when only a single coordinate of $M$ varies, the others being kept constant.\n\n\t\t\\item[D2.] "\\NewTerm{Curvilinear coordinates}\\index{curvilinear coordinates}" are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible (a one-to-one map) at each point (\\SeeChapter{see section Vector Calculus}).\n\t\\end{enumerate}\n\t This means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The purpose, as we already know, is that, depending on the application, a curvilinear coordinate system may be simpler to use than the Cartesian coordinate system. For instance, a physical problem with spherical symmetry (for example, the motion of particles under the influence of central forces) is usually easier to solve in spherical polar coordinates than in Cartesian coordinates.\n\t \n\t Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space are, as we know, the Cartesian, cylindrical and spherical polar coordinates.\n\t \n\t Let us first study the generalization of a coordinate system relative to a fixed reference frame (we urge the reader to first read the subsection about Coordinate Systems in the section of Vector Calculus and the subsection on Analytical Mechanics in the section Principia).\n\t \n\t Let us consider a punctual space $\\mathcal{E}^n$ and $(\\vec{e}_i)$ a reference frame of that space. Given $x_i$ the rectilinear coordinates of a point $M$ of $\\mathcal{E}^n$ relative to this reference frame.
Any curvilinear coordinate system $u^k$ (with $k=1,2,\ldots,n$) is obtained by giving $n$ arbitrary functions $f^i$ of the parameters $u^k$, such that:\n\t \n\t\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\begin{tikzpicture}[x=(10:4cm),y=(90:4cm),z=(225:4cm),>=Triangle,scale=1.5]\n\t\t\coordinate (O) at (0,0,0); \n\t\t\draw [->] (O) -- (1,0,0) node [at end, right] {$x^2$ axis};\n\t\t\draw [->] (O) -- (0,1,0) node [at end, above] {$x^3$ axis};\n\t\t\draw [->] (O) -- (0,0,1) node [at end, left]  {$x^1$ axis};\n\t\t\n\t\t\draw [draw=blue, -Circle] (O) to [bend left=8] \n\t\t  coordinate [pos=7/8] (q2n) \n\t\t  (1,-1/4,0) coordinate (q2) node [right] {$u^2$};\n\t\t\draw [draw=blue, -Circle] (O) to [bend right=8] \n\t\t  coordinate [pos=7/8] (q3n) \n\t\t  (0,1,1/2) coordinate (q3) node [left] {$u^3$};\n\t\t\draw [draw=blue, -Circle] (O) to [bend right=8] \n\t\t  coordinate [pos=7/8] (q1n) \n\t\t  (1/4,0,1) coordinate (q1) node [right] {$u^1$};\n\t\t\n\t\t\begin{pgfonlayer}{background}\n\t\t\begin{scope}\n\t\t\clip (O) to [bend left=8] (q2) -- (1,1,0) -- (q3n) to [bend right=8] (O);\n\t\t\shade [left color=green, right color=green!15!white, shading angle=135]\n\t\t  (O) to [bend left] (q3n) to [bend left=16] (3/4,1/2,0) to [bend left=16] (q2n) -- cycle;\n\t\t\end{scope}\n\t\t\n\t\t\begin{scope}\n\t\t\clip (O) to [bend left=8] (q2) -- (1,0,1) -- (q1) to [bend left=8] (O);\n\t\t\shade [left color=red, right color=red!15!white, shading angle=45]\n\t\t  (O) to [bend right] (q1n) to [bend left=16] (1,0,1) to [bend left=16] \n\t\t  (q2n) to [bend right] (O);\n\t\t\end{scope}\n\t\t\n\t\t\begin{scope}\n\t\t\clip (O) to [bend right=8] (q1) -- (0,1,1) -- (q3) to [bend left=8] (O);\n\t\t\shade [left color=cyan, right color=cyan!15!white, shading angle=225] \n\t\t  (O) -- (q1n) to [bend right=16] (0,1,1) to [bend left=16] (q3n) \n\t\tto [bend left] (O);\n\t\t\end{scope}\n\t\t\end{pgfonlayer}\n\t\t\n\t\t\node at (1/3,1/3,0) {$u^1=\mbox{const}$};\n\t\t\node at (0,1/2,1/2) {$u^2=\mbox{const}$};\n\t\t\node at (1/2,0,1/3) {$u^3=\mbox{const}$};\n\t\t\end{tikzpicture}\n\t\t\caption{Coordinate surfaces, coordinate lines, and coordinate axes of general curvilinear coordinates}\n\t\end{figure}\n\tWe will assume thereafter that these $n$ functions satisfy the following four properties:\n\t\begin{itemize}\n\t\t\item[P1.] They must be of class at least $\mathcal{C}^2$ (differentiable at least twice, for the needs of physics: velocity and acceleration). This assumption implies that, at any point where it is satisfied, we may permute the order of derivations (with respect to the two variables in the second partial derivatives, as seen in the section of Differential and Integral Calculus):\n\t\t\n\n\t\t\item[P2.] These functions are such that we can solve the system of $n$ coordinate-change equations for the variables $u^k$ and express them as functions of the parameters $x^i$, thus:\n\t\t\n\t\tstill with $k=1,2, \ldots,n$.\n\n\t\t\item[P3.] When the variables $u^k$ vary in a domain $\Delta$, the variables $x^i$ vary in a domain $\Delta'$ (think of the Cartesian $\leftrightarrow$ spherical correspondence, where the Cartesian variables have an infinite range while the spherical angular variables are limited to a typical range of width $2\pi$). \n\n\t\t\item[P4.] 
The Jacobian of the functions $x^i=f^i(u^1,u^2,\ldots,u^n)$, defined by (\SeeChapter{see section Differential and Integral Calculus}):\n\t\t\n\t\twill be assumed different from zero in the domain $\Delta$, and likewise for the Jacobian $D(\partial_i u^k)$ of the functions $u^k=g^k(x^1,x^2,\ldots,x^n)$, which is its inverse. If the Jacobians exist, they are not zero, primarily as a result of the second property above and implicitly of the first.\n\t\end{itemize}\n\tAs we have already mentioned, if we fix $(n-1)$ parameters $u^k$ and vary only one parameter, $u^i$ for example, we get the coordinates $x_{(1)}^i$ of a set of points $M$ that constitute a \"coordinate line\". In general, the coordinate lines are not straight but curved, as we know. These coordinates $u^k$ are named for this reason the \"curvilinear coordinates\". Moreover, $n$ coordinate lines intersect at each point $M$ of $\mathcal{E}^n$ (see figure above).\n\t\n\tWe prove in the section of Analytical Mechanics, during our study of punctual spaces, that the partial derivatives of a vector $\overrightarrow{\text{O}M}$ are independent of the point O (origin) of a given reference frame. If $\mathcal{E}^n$ is assimilated to a system of curvilinear coordinates, we write:\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tA famous example of curvilinear coordinates, where each $u^k$ is a uniform function of the rectilinear coordinates $x^k$, the $u^k$ being moreover continuous functions at the current point $M$, is that of the spherical coordinates, where we have (\SeeChapter{see section Vector Calculus}):\n\t\n\tand where, for recall:\n\t\n\tLet us also recall that during our study of the system of spherical coordinates in the section of Vector Calculus we obtained, after normalization of the basis vectors:\n\t\n\tTherefore:\n\t\n\twith:\n\t\n\t\end{tcolorbox}\n\tIn a non-Euclidean space, we cannot define a unique basis valid over the whole space. Thus, we construct a basis at each point separately, and for this purpose we use the curvilinear coordinates such that at each point $M$ the basis vectors $\vec{e}_k$ are tangent to the corresponding coordinate line, via the relation given above:\n\t\n\tLet now $u^1, u^2, \ldots, u^n$ be the curvilinear coordinates of the point $M$ with respect to a Cartesian basis $(\vec{e}_i^0)$. In this reference frame, we obviously have:\n\t\n\twhere the Cartesian coordinates are functions $x^i=x^i(u^1,u^2,\ldots,u^n)$.\n\t\n\tThe vector $\vec{e}_k$ can therefore also be expressed by:\n\t\n\tThe reader can check with the example of spherical coordinates (by looking at the explicit version of the corresponding vectors in the section of Vector Calculus) that this relation is correct, but on the condition that we work with the non-normalized version of the orthogonal vectors $\vec{e}_r$, $\vec{e}_\theta$, $\vec{e}_\phi$!\n\t\n\tFrom the components $\partial_k x^i$ of the vector $\vec{e}_k$, we can form a determinant $D(\partial_kx^i)$, which is precisely the Jacobian of the functions $x^i$ we have defined previously. Since this determinant is different from zero (at least imposed as such), it follows that the $n$ vectors $\vec{e}_k$ (as functions) are linearly independent (we have proved in the section of Differential and Integral Calculus that this Jacobian is in absolute value equal to $r^2\sin(\theta)$ for the spherical coordinates).
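As a minimal concrete instance of properties P1 to P4, consider plane polar coordinates, writing here (an illustrative choice of names) $u^1=r$ and $u^2=\phi$:\n\t\[\n\tx^1=u^1\cos u^2,\qquad x^2=u^1\sin u^2,\qquad D(\partial_k x^i)=\begin{vmatrix}\cos u^2 & -u^1\sin u^2\\ \sin u^2 & u^1\cos u^2\end{vmatrix}=u^1=r,\n\t\]\n\tso the Jacobian is nonzero everywhere except at the origin $r=0$, where the transformation fails to be invertible.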
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tLet us recall that in the section of Differential and Integral Calculus we have proved that the Jacobian determinant appears when calculating the surface of a parallelogram in a non-Euclidean space. Obviously, if the determinant is zero, then the surface is zero, as it means that the basis vectors of the parallelogram are all collinear (linearly dependent). Hence the fact that if the determinant is not null, then some of the vectors (or all of them) are independent!\n\t\end{tcolorbox}\n\tThese $n$ vectors, defined by the relation:\n\t\n\tare named the \"\NewTerm{natural basis}\index{natural basis}\" at the point $M$ of the vector space $\mathcal{E}^n$. They are collinear with the tangents of the $n$ coordinate lines which intersect at the point $M$ where they are defined.\n\t\n\tWe will not insist on the quite obvious fact that to each system of curvilinear coordinates there is an associated natural basis that is expressed in these same coordinates (\SeeChapter{see section Vector Calculus}).\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tIn spherical coordinates, the vectors of the natural basis are those that we obtained in our study of the spherical coordinate system in the section of Vector Calculus and that are orthogonal but not orthonormal.\n\t\end{tcolorbox}\n\t\n\tLet us associate with the point $M$ of $\mathcal{E}^n$ a reference frame formed by the point $M$ and by the vectors of the natural basis\footnote{For recall, the natural basis in a Euclidean space is the set of unit vectors pointing in the directions of the axes of a Cartesian coordinate system}. This reference frame is named the \"\NewTerm{natural reference frame}\index{natural reference frame}\" at $M$ of the coordinate system $u^k$. It will be denoted by:\n\t\n\tThe differential of the vector $\overrightarrow{OM}$ is then expressed as:\n\t\n\tThe quantities $\mathrm{d}u^k$ are (obviously) the contravariant components of the vector $\mathrm{d}\vec{M}$ in the natural reference frame $(M,\vec{e}_k)$ of the coordinate system $u^k$.\n\t\n\tLet us now consider any two curvilinear coordinate systems $u^i$ and ${u'}^k$ (think for example of the spherical and cylindrical coordinate systems), linked to each other by the relations:\n\t\n\twhere the functions $u^i=u^i({u'}^1,{u'}^2,\ldots,{u'}^n)$ are assumed, as we already know, to be several times continuously differentiable with respect to the ${u'}^k$, and the same for the functions ${u'}^k={u'}^k(u^1,u^2,\ldots,u^n)$ with respect to the coordinates $u^i$. When we move from one coordinate system to another, we say that we make a \"\NewTerm{change of curvilinear coordinates}\"\index{change of curvilinear coordinates}.\n\t\n\tWe saw in the sections of Differential Geometry and General Relativity that the squared distance $\mathrm{d}s^2$ between two infinitely close points $M$ and $M'$ was given by the relation:\n\t\n\twhere the $\mathrm{d}x^i$ are the components of the vector $\mathrm{d}\vec{M}=\overrightarrow{MM'}$, assimilated to a fixed reference frame of a punctual space $\mathcal{E}^n$.
When this space is assimilated to a system of curvilinear coordinates $u^i$, we have seen that the relation:\n\t\n\tshows that the vector $\mathrm{d}\vec{M}$ has for curvilinear contravariant components the quantities $\mathrm{d}u^i$ with respect to the natural basis $(M,\vec{e}_i)$. The square of the distance $\mathrm{d}s^2$ is then written in the natural reference frame:\n\t\n\twhere the quantities $g_{ij}=\vec{e}_i\circ\vec{e}_j$ are the components of the \"fundamental tensor\" or \"metric tensor\" defined using a natural basis. The previous expression is named the \"\NewTerm{linear element of the point space}\" $\mathcal{E}^n$ or sometimes the \"\NewTerm{metric}\" of this space.\n\t\n\tThe vectors $\vec{e}_i$ of the natural reference frame generally vary from one point to another. This is the case, for example, for the spherical coordinates, whose quantities $g_{ij}$ (we show it afterwards with a detailed example) are variable, since they depend on the parameters $r$, $\theta$ or $\phi$!\n\t\n\tA curve $\Gamma$ of $\mathcal{E}^n$ can be defined by giving the curvilinear coordinates $u^i(\alpha)$ of the locus of the points $M(\alpha)$ as a function of a parameter $\alpha$ (\SeeChapter{see section Differential Geometry}). The elementary distance $\mathrm{d}s$ on this curve $\Gamma$ is then written:\n\t\n\t\n\t\subsubsection{Natural basis in spherical coordinates (curvilinear basis in spherical coordinates)}\n\tLet us determine the natural basis of the vector space $E^3$ associated with the point space $\mathcal{E}^3$ of ordinary geometry, in spherical coordinates. Let us write the expression of the vectors $\overrightarrow{\text{O}M}$ in a fixed Cartesian basis $(\vec{e}_i^{\,0})$, which is by definition (see the section Vector Calculus for more details):\n\t\n\tThe vectors of the natural basis are given by:\n\t\n\tTherefore we have:\n\t\n\tThe derivative of $\overrightarrow{\text{O}M}$ with respect to $\theta$ gives the vector $\vec{e}_2$:\n\t\n\tThe derivative with respect to $\varphi$ gives the vector $\vec{e}_3$:\n\t\n\tThese three vectors are orthogonal to each other, as we can easily verify by performing the dot products $\vec{e}_i\circ\vec{e}_j$. When this is the case, we say that the coordinates are \"\NewTerm{orthogonal curvilinear coordinates}\index{orthogonal curvilinear coordinates}\" (\SeeChapter{see section Differential Geometry}).\n\n\tWe thus find the same result as in the section of Vector Calculus! These vectors, however, are not all normalized (their norms are not equal to $1$), since we have:\n\t\n\tThe natural basis in spherical coordinates is thus formed by vectors that are variable in direction and in modulus at each point $M$. The quantities $g_{ij}$ constitute an example of a metric tensor attached to each of the points $M$ of the space $\mathcal{E}^3$.\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=1]{img/algebra/spherical_natural_basis.jpg}\n\t\t\caption{Coordinate surfaces, coordinate lines, and coordinate axes of spherical coordinates (source: Wikipedia)}\n\t\end{figure}\n\tThe linear element of the space is then given by (the details of the calculations can be found in the section of General Relativity):\n\t\n\t
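For recall, and to fix ideas, the standard results of this computation (with the non-normalized natural basis; the norms are the $\sqrt{g_{ii}}$ of the diagonal metric) are:\n\t\[\n\t\|\vec{e}_1\|=1,\qquad \|\vec{e}_2\|=r,\qquad \|\vec{e}_3\|=r\sin\theta,\n\t\]\n\tso that:\n\t\[\n\t\mathrm{d}s^2=\mathrm{d}r^2+r^2\,\mathrm{d}\theta^2+r^2\sin^2\theta\,\mathrm{d}\varphi^2 .\n\t\]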
\subsubsection{Natural basis in polar coordinates (curvilinear basis in polar coordinates)}\n\tLet us determine the natural basis of the vector space $E^2$ associated with the point space $\mathcal{E}^2$ of ordinary geometry, in polar coordinates. Let us write the expression of the vectors $\overrightarrow{\text{O}M}$ in a fixed Cartesian basis $(\vec{e}_i^{\,0})$, which is by definition (see the section Vector Calculus for more details):\n\t\n\tThe vectors of the natural basis are given by:\n\t\n\tWe have:\n\t\n\tThe derivative of $\overrightarrow{\text{O}M}$ with respect to $\phi$ gives the vector $\vec{e}_2$:\n\t\n\tThese two vectors are orthogonal to each other, as we can easily verify by performing the dot product $\vec{e}_1\circ\vec{e}_2$. We thus find the same result as in the section of Vector Calculus.\n\t\n\tWe then have:\n\t\n\tThe linear element of the plane is then given by (the details of the calculations can be found in the section of General Relativity):\n\t\n\t\n\t\subsubsection{Natural basis in cylindrical coordinates (curvilinear basis in cylindrical coordinates)}\n\tLet us determine the natural basis of the vector space $E^3$ associated with the point space $\mathcal{E}^3$ of ordinary geometry, in cylindrical coordinates. Let us write the expression of the vectors $\overrightarrow{\text{O}M}$ in a fixed Cartesian basis $(\vec{e}_i^{\,0})$, which is by definition (see the section Vector Calculus for more details):\n\t\n\tThe vectors of the natural basis are given by:\n\t\n\tWe have:\n\t\n\tThe derivative of $\overrightarrow{\text{O}M}$ with respect to $\varphi$ gives the vector $\vec{e}_2$:\n\t\n\tand finally:\n\t\n\tThese three vectors are orthogonal to each other, as we can easily verify by performing the dot products $\vec{e}_i\circ\vec{e}_j$. We thus find the same result as in the section of Vector Calculus.\n\t\n\tThe linear element of the space is then given by (the details of the calculations can be found in the section of General Relativity):\n\t\n\t
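For recall, the corresponding standard linear elements, consistent with the components computed above (we write $\rho$ for the cylindrical radial coordinate, a notational choice), are:\n\t\[\n\t\mathrm{d}s^2=\mathrm{d}r^2+r^2\,\mathrm{d}\phi^2\quad\text{(polar)},\qquad \mathrm{d}s^2=\mathrm{d}\rho^2+\rho^2\,\mathrm{d}\varphi^2+\mathrm{d}z^2\quad\text{(cylindrical)}.\n\t\]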
\pagebreak\n\t\subsection{Christoffel symbols}\n\tThe study of tensor fields constitutes, for the physicist, the essential element of tensor analysis. The generic tensor $\vec{U}$ of this field is a function of the point $M$ and we will denote it simply by:\n\t\nIf the tensor $\vec{U}$ is a function only of $M$, the field considered is named a \"\NewTerm{fixed field}\". If $\vec{U}$ is moreover a function of one or more parameters other than the coordinates of $M$, then we say that this is a \"\NewTerm{variable field}\" and we note it:\n\t\n\tThe different algebraic operations on the tensors $\vec{U}(M)$ associated with a same point $M$ do not generate any particular difficulty. The derivative of $\vec{U}(M)$ with respect to a parameter $\alpha$ leads to the use of the classical results relating to the derivation of vectors.\n\t\n\tHowever, a difficulty appears when we try to calculate the derivative of a tensor $\vec{U}(M)$ with respect to the curvilinear coordinates. Indeed, the components of the tensor are defined at each point $M$ with respect to a natural coordinate system which varies from one point to another.\n\n\tConsequently, the calculation of the elementary variation, named \"\NewTerm{elementary transport}\":\n\t\n\twhen we pass from a point $M$ to an infinitely neighboring point $M'$, can be done in physics only if we use the same basis. In order to compare the tensors $\vec{U}(M')$ and $\vec{U}(M)$ with each other, we are led to study how the natural reference frame of a given coordinate system varies when we pass from a point $M$ to an infinitely close point $M'$.\n\t\n\tFor a given system of curvilinear coordinates $u^i$ of a punctual space $\mathcal{E}^n$, a fundamental problem of tensor analysis consists in determining, with respect to the natural reference frame $(M,\vec{e}_k)$ at the point $M$, the natural reference frame $(M',\vec{e}_k^{'})$ at the infinitely close point $M'$. We then say that we are looking for an \"\NewTerm{affine connection}\index{affine connection}\".\n\t\n\tOn the one hand, the point $M'$ will be perfectly defined with respect to $M$ if we determine the vector $\mathrm{d}\vec{M}$ such that $\overrightarrow{MM'}=\mathrm{d}\vec{M}$. For curvilinear coordinates $u^k$, the decomposition of an elementary vector $\mathrm{d}\vec{M}$ is given by the relation that we have previously proved:\n\t\n\tthe quantities $\mathrm{d}u^k$ being, for recall, the contravariant components of the vector $\mathrm{d}\vec{M}$ on the natural basis $(\vec{e}_k)$.\n\t\n\tAnd now, to make things physically comparable, we must guarantee that the vectors of both bases are also parallel! So the idea is that the derivative we are looking for allows one to transport vectors of a manifold (surface) along curves so that they stay parallel with respect to the connection (derivative). Such an idea is named in physics \"\NewTerm{parallel transport}\index{parallel transport}\".\n\n\tTherefore, the idea is to determine the vector $\vec{e}_k^{'}$ from the elementary variations $\mathrm{d}\vec{e}_k$ of the vector $\vec{e}_k$ relative to the natural reference frame $(M,\vec{e}_k)$, when we go from $M$ to $M'$. We then have:\n\t\n\tThe computation of the vectors $\mathrm{d}\vec{e}_k$ then remains the main problem to solve. We will first consider an example of this type of calculation in spherical coordinates, for pedagogical purposes, as it can help to understand what will follow.\n\nFor this, let us return to the expression of the vectors $\vec{e}_k$ of the natural basis in spherical coordinates, that is:\n\t\n\tThe basis vectors $\vec{i}$,$\vec{j}$,$\vec{k}$ of the fixed Cartesian reference frame being constant in modulus and in direction, the differential of the vector $\vec{e}_1$ is written:\n\t\n\tWe notice that the terms in parentheses represent respectively the vectors $\vec{e}_2/r$ and $\vec{e}_3/r$, hence:\n\t\n\tWe also compute, by differentiating the vector $\vec{e}_2$:\n\t\n\twith:\n\t\n\twe have:\n\t\n\tSo finally:\n\t\n\tAnd:\n\t\n\tAfter a few elementary and very tricky algebraic operations (...), we get:\n\t\n\tThe differentials $\mathrm{d}\vec{e}_k$ are thus decomposed on the natural basis $(\vec{e}_k)$. If we denote by $\omega_1^k$ the contravariant components of the vector $\mathrm{d}\vec{e}_1$, the latter is written:\n\t\n\tThe components $\omega_i^k$ of the vector $\mathrm{d}\vec{e}_i$ are differential forms (linear combinations of differentials). We have, for example:\n\t\n\tIf we denote in a general way by $u^i$ the spherical parameters, we have:\n\t\n\tThe coordinate differentials are then denoted:\n\t\n\tand the components $\omega_i^j$ are then written in a general way:\n\t\n\twhere the quantities $\Gamma_{ki}^j$ are functions of $r$,$\theta$,$\phi$ that will be explicitly obtained by identifying each component $\omega_i^j$. 
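Gathering the two preceding statements, the general decomposition that the identification below relies on is:\n\t\[\n\t\mathrm{d}\vec{e}_i=\omega_i^j\,\vec{e}_j,\qquad \omega_i^j=\Gamma_{ki}^j\,\mathrm{d}u^k .\n\t\]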
For example, the component $\omega_3^3$ is written with the notation of the previous relation:\n\t\n\tIdentifying the coefficients of the differentials, we get:\n\t\n\tBy doing the same with the $9$ components $\omega_i^j$, we get the $27$ terms (...), for which the detailed calculations ($3\cdot 9=27$) are given much further below in the text. For any curvilinear coordinate system, these quantities $\Gamma_{ki}^j$ are named \"\NewTerm{Christoffel symbols of the second kind}\index{Christoffel symbols of the second kind}\" or \"\NewTerm{Euclidean functions of affine connection}\index{Euclidean functions of affine connection}\".\n\t\n\tThus, for a punctual space $\mathcal{E}^3$ and any system of curvilinear coordinates $u^j$, the differential $\mathrm{d}\vec{e}_i=\omega_i^k\vec{e}_k$ of the vectors $\vec{e}_i$ of the natural basis is written on this basis.\n\t\n\tWe have just seen, on the example of the spherical coordinates, that a direct calculation makes it possible, by identification, to obtain explicitly the quantities $\Gamma_{ki}^j$. We shall see that we can also obtain the expression of these quantities as functions of the components $g_{ij}$.\n\t\n\tThe calculation of the quantities $\Gamma_{ki}^j$ as functions of the $g_{ij}$ will lead us to introduce other Christoffel symbols. For this, let us write the covariant components, denoted $\omega_{ji}$, of the differentials $\mathrm{d}\vec{e}_i$, thus (it is a kind of definition):\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tWith our example above:\n\tand:\n\t\n\tWe get (most dot products involve orthogonal vectors, therefore they vanish):\n\t\n\t\end{tcolorbox}\n\tThe covariant components are also linear combinations of the differentials $\mathrm{d}u^k$ that we can write in the form:\n\t\n\tThe quantities $\Gamma_{kji}$ are named the \"\NewTerm{Christoffel symbols of the first kind}\".\n\n\tWe see quite clearly, by going through the definitions and examples of the Christoffel symbols above again, that:\n\begin{enumerate}\n\t\item The Christoffel symbols of the second kind are symmetric with respect to their lower indices, so that, if the metric is symmetric, we have:\n\t\n\t\n\t\item The Christoffel symbols of the first kind are symmetric with respect to their extreme indices, so that, if the metric is symmetric, we have:\n\t\n\t\end{enumerate}\n\tIndeed (following the request of a reader), since we have:\n\t\n\tit then follows:\n\t\n\tand, by swapping the indices $i$ and $j$:\n\t\n\tTerm-by-term identification of the expansions of the last two relations (on a concrete case) gives (necessarily) the equality:\n\t\n\tthat we wanted to prove.\n\n\tSince the covariant components are related to the contravariant components by the relations (contraction of indices):\n\t\n\twe get the expression linking the Christoffel symbols of each kind:\n\t\n\tConversely:\n\t\n\t
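Explicitly, these index-lowering and index-raising relations read (the first one is quoted again further below):\n\t\[\n\t\Gamma_{kji}=g_{jl}\,\Gamma_{ki}^{l},\qquad \Gamma_{ki}^{j}=g^{jl}\,\Gamma_{kli}.\n\t\]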
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tVarious notations are used to represent the Christoffel symbols. The most common are:\n\t\begin{itemize}\n\t\t\item Christoffel symbol of the first kind:\n\t\t\n\n\t\t\item Christoffel symbol of the second kind:\n\t\t\n\t\end{itemize}\n\t\end{tcolorbox}\n\tLet us now consider a punctual space $\mathcal{E}^n$ with a given linear element $\mathrm{d}s^2$ of this space:\n\t\n\tStarting from:\n\t\n\twe get by differentiation:\n\t\n\tBy injecting into it the expression of the differential $\mathrm{d}\vec{e}_j=\omega_j^l\vec{e}_l$, this gives us:\n\t\n\twhere the term $\omega_j^l$ represents the mixed components of the vector $\mathrm{d}\vec{e}_j$. We can make this component covariant by multiplying it by the metric tensor $g_{il}$ so as to form a quantity $\omega_{ij}$ which can in turn be expressed by means of the Christoffel symbols, as we already know:\n\t\n\tSubstituting the relation $\Gamma_{kji}=g_{jl}\Gamma_{ki}^l$ in the preceding expression (the indices used in this relation are not those of the expression in question, but mutatis mutandis it is equivalent), we then get:\n\t\n\tThe differential $\mathrm{d}g_{ij}$ is then written:\n\t\n\tOn the other hand, the differential of the function $g_{ij}$ is also written:\n\t\n\thence, by identifying the coefficients of the differentials $\mathrm{d}u^k$ in these two last expressions (much further below in this section there is a detailed example, with several coordinate systems, of all the relations which will follow):\n\t\n\ta relation that the reader can (in case of doubt about its veracity) check with the detailed practical examples which are given much further below.\n\n\tAs we have (in the case of a symmetric metric):\n\t\n\twhere it is strongly recommended that the reader remember that the permutation of indices respecting this last relation generally works only on the extreme indices!\n\n\tWe can therefore write the next-to-last relation as:\n\t\n\tand by performing a circular permutation on the indices (hence it is not a permutation of the extreme indices!), we get:\n\t\n\tBy taking the sum:\n\t\n\tand by subtracting:\n\t\n\tSimplifying, we get:\n\t\n\ttherefore:\n\t\n\tThis is the expression of the Christoffel symbols of the first kind as a function of the partial derivatives of the components $g_{ij}$ of the fundamental tensor, named the \"\NewTerm{first Christoffel identity}\index{first Christoffel identity}\". We thus understand why, in a locally inertial frame (of the Minkowski type), the Christoffel symbols are all zero (given that the metric is constant).\n\t\n\tWe get those of the second kind from the following relation (by definition), named the \"\NewTerm{fundamental theorem of Riemannian geometry}\index{fundamental theorem of Riemannian geometry}\" or \"\NewTerm{Levi-Civita connection}\index{Levi-Civita connection}\" or \"\NewTerm{second Christoffel identity}\index{second Christoffel identity}\":\n\t\n\tThe last two expressions listed above allow the actual calculation of the Christoffel symbols for a given metric (hence an enormous gain in calculations). When the quantities $g_{ij}$ are given a priori, we can study the properties of the punctual space defined by the data of this metric, which is the case for the Riemann spaces we will see later.\n\t\n\t
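For the reader's convenience, with the index conventions of this section (middle index lowered: $\Gamma_{kji}=g_{jl}\Gamma_{ki}^l$), these two identities take the standard explicit form:\n\t\[\n\t\Gamma_{kji}=\frac{1}{2}\left(\partial_k g_{ij}+\partial_i g_{jk}-\partial_j g_{ki}\right),\qquad \Gamma_{ki}^{j}=\frac{1}{2}\,g^{jl}\left(\partial_k g_{il}+\partial_i g_{lk}-\partial_l g_{ki}\right).\n\t\]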
\pagebreak\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tWe propose to calculate the Christoffel symbols of the second kind $\Gamma_{kj}^i$ corresponding to the polar coordinate system in the plane (it will be sufficiently long...) that we will write this time (in contrast to the section of Vector Calculus) in index notation:\n\t\n\tWe will calculate the Christoffel symbols of the second kind from our last relation:\n\t\n\tLet us determine the components of the metric. By the way, they are the same as those we had calculated for the cylindrical coordinates above, with the obvious difference that $g_{33}$ does not exist. Therefore, we have:\n\t\n\tLet us then calculate the $g^{ij}$. In this example it is rather trivial: it is enough to apply the relation demonstrated at the beginning of this section:\n\t\n\tor by treating them as standard matrices (\SeeChapter{see section Linear Algebra}):\n\t\n\tWe then have immediately:\n\t\n\tNow let us write out the Christoffel symbols for these coordinates:\n\t\end{tcolorbox}\n\t\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tHence, due to the symmetry properties:\n\t\n\tIn the same way:\n\t\n\t\end{tcolorbox}\n\t\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tTo sum up:\n\t\n\t\end{tcolorbox}\n\t\n\t\subsection{Ricci Theorem}\n\t\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]\n\t\bcbombe Before reading what follows ... We want to remind the reader that the writing of this section is not finished (as for most chapters in the book)! Thus, we still have to illustrate the abstract notions that will follow with concrete practical examples beyond the special case of General Relativity!\n\t\end{tcolorbox}\n\tThis being said, we have seen in the section of General Relativity that geodesics are the shortest distances between two points in any type of space. What will interest us now is to study the variations of a vector during such a displacement. Let us first recall that the geodesic equation\index{geodesic equation} for any curvilinear coordinate system $y^i$ of the punctual space $\mathcal{E}^n$ (\SeeChapter{see section Principia}) is given by (\SeeChapter{see section General Relativity}):\n\t\n\tLet us now consider a vector $\vec{v}$ of $\mathcal{E}^n$ of covariant components $v_i$ and let us form the dot product of the vector $\vec{v}$ and $\vec{n}=\mathrm{d}y/\mathrm{d}s$ (the latter vector, denoted here with a slight abuse of index notation, has components tangent to the geodesic along which the first vector is carried); we then have the following quantity:\n\t\n\tWhen moving along the geodesic from a point $M$ to an infinitely near point $M'$, the scalar undergoes the variation:\n\t\n\tand as:\n\t\n\tHence:\n\t\n\tLet us replace in this last expression, on the one hand, the differential $\mathrm{d}v_k$ by its exact total differential, which we rewrite slightly:\n\t\n\tand, on the other hand, the second derivative $\mathrm{d}^2y^i/\mathrm{d}s^2$ by its expression taken from the equation of geodesics. We get:\n\t\n\twhich can also be written:\n\t\n\twhere we have put:\n\t\n\twhich are by definition the absolute differentials of the covariant components of the vector $\vec{v}$. 
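For recall, written with the conventions of this section (the index ordering of $\Gamma$ is the one introduced above), the geodesic equation and the absolute differential take the standard forms:\n\t\[\n\t\frac{\mathrm{d}^2y^i}{\mathrm{d}s^2}+\Gamma_{jk}^{i}\,\frac{\mathrm{d}y^j}{\mathrm{d}s}\,\frac{\mathrm{d}y^k}{\mathrm{d}s}=0,\qquad \mathrm{D}v_i=\mathrm{d}v_i-\Gamma_{ji}^{k}\,v_k\,\mathrm{d}y^j .\n\t\]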
We also define the \"\NewTerm{covariant derivative}\index{covariant derivative}\" (also named \"\NewTerm{connection}\index{connection}\" or \"\NewTerm{affine connection}\index{affine connection}\") by the relation:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tIn older or American textbooks this is often written in the form (which we will not use in this book):\n\t\n\tmaking use of the \"$;$\" to denote the covariant derivative and the \"$,$\" for the partial derivative.\n\t\end{tcolorbox}\n\tSince the derivative of a product of two functions follows the product rule (sum of the partially differentiated terms), we also have:\n\t\n\tIf we put $t_br_c\equiv \nabla_j v_i$, then we have (a result which we will use, after having proved the Ricci theorem, to determine the Einstein tensor needed in the section of General Relativity):\n\t\n\tIn curvilinear coordinates, in order for the differential of a vector to be a vector, the two vectors of which we take the difference must be at the same point of space. In other words, one of the two infinitely close vectors must be transported, in one way or another, to the point where the second is located, and only then can we take the difference between the two vectors, which are now at one and only one point of the same punctual space. The \"\NewTerm{parallel transport}\index{parallel transport}\" operation must be defined in such a way that in Cartesian coordinates (for the small example...) the difference of the components coincides with the ordinary difference $\mathrm{d}v_k$. \n\t\n\tThus, we have indeed in Cartesian coordinates:\n\t\n\tsince in this system: $\Gamma_{kj}^i=0$.\n\t\n\tThus, in curvilinear coordinates, the difference of the components of the two vectors, after the transport of one of them to the point where the other is located, is denoted $\delta v_k$, such that we have:\n\t\n\tThis brings us by identification to write:\n\t\n\tIt also allows us to write the principle of least action (variational principle) in the tensorial form:\n\t\n\tLet us now consider a tensor of order two, the product of two tensors of order one, such that (as we have seen in our study of tensor compositions):\n\t\n\tTherefore:\n\t\n\thence (we write out the last two equalities just for aesthetics!):\n\t\n\tA parallel transport is therefore an operation that takes a tangent vector and moves it along a path in space without turning it (relative to the space) or changing its length. In flat space we can say that the transported vector is parallel to the original vector at every point along the path. In curved space we cannot say such a thing. Let us use the spherical surface of the Earth to show this! We start at the equator at longitude $0^\circ$ holding an arrow pointing north:\n\t\begin{figure}[H]\n\t\t\centering\n\t\t\includegraphics[scale=1]{img/algebra/parallel_curvature.jpg}\n\t\t\caption{Parallel transport illustration on a sphere}\n\t\end{figure}\n\tWe go along longitude $0^\circ$ up to the North Pole, keeping our arrow parallel to the ground and pointing forward all the time. When we get to the North Pole our arrow is pointing in the direction of longitude $180^\circ$. Now we go south along longitude $90^\circ$ east, keeping our arrow perpendicular to our path as it was at the North Pole. When we get to the equator our arrow is pointing to the east. Finally, we go west along the equator until we get back to our starting point. We keep the arrow pointing backwards all the way. 
Though we never turned the arrow along the way, we are now at the starting point with our arrow rotated by $90^\circ$ relative to its original position. We surely cannot say that it is now parallel to the original position. This means that the term \"direction\" cannot be defined globally in a curved space. We can only compare the directions of two vectors if they are at the same point.\n\n\tThe fact that parallel transport along a closed loop changes the direction of a vector in a curved space but not in a flat space may lead to the idea of using it as a way to measure curvature. It turns out that if we choose a loop that is small enough around a point in a curved space, the amount of change in the direction of a vector that is parallel transported along it is proportional to the area enclosed by the loop. So, the ratio between the area of the loop and the amount of change in the direction of the vector (whatever way we chose to measure it) can be used as a measure of the curvature of the surface that includes the loop. Actually, we define curvature by the value of this ratio.\n\t\n\tThe previous relation allows us to write the metric in its variational form, named the \"\NewTerm{Ricci identity}\index{Ricci identity}\":\n\t\n\tBut we also have, since $g_{ij}=\vec{e}_i\circ\vec{e}_j$:\n\t\n\thence the identity:\n\t\n\tWith both relations:\n\t\n\tand the absolute differential (which is simply generalized for a tensor of order two):\n\t\n\twe get:\n\t\n\tNow, let us recall that we have by definition:\n\t\n\tHence finally:\n\t\n\tThe absolute differential on a geodesic, in the approximation of an infinitesimal transport of the fundamental tensor, is therefore (as we could expect) zero. This is the \"\NewTerm{Ricci theorem}\index{Ricci theorem}\". Some theoretical physicists then say that \textit{the covariant derivative kills the metric}, in the sense that the metric does not change on a space differential.\n\n\tFinally, we also see that for a tensor of order two (the metric in particular) we have:\n\t\n\tWe can therefore write the absolute differential, which in this particular case is zero:\n\t\n\tAnd therefore the \"\NewTerm{covariant derivative}\index{covariant derivative}\" of the metric is null:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe reader will have to remember, for the definition of the Einstein tensor, that:\n\t\n\tand that this is another way of expressing that an infinitesimal variation on a geodesic, according to the principle of least action, kills the metric. We will therefore work from now on (as before) with nonlinear differential equations that must be integrated to find the behavior of matter in a given space.\n\t\end{tcolorbox}\n\tLet us now determine an expression which will be very useful in General Relativity when we determine the Einstein field equations (another way of expressing that the covariant derivative of the metric is null):\n\n\tLet us perform the contracted multiplication of:\n\t\n\tby $g^{ij}$; it then follows, by using the relation $g^{ij}g_{jl}=\delta_l^i$ (which we had proved much earlier above), that:\n\t\n\thence the relation:\n\t\n\tThe quantities $\Gamma_{jh}^j$ and $\Gamma_{ih}^i$ representing the same sums, we then have:\n\t\n\t
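Summing up this part in explicit form (with the conventions of this section), the statement that the covariant derivative kills the metric reads:\n\t\[\n\t\nabla_k g_{ij}=\partial_k g_{ij}-\Gamma_{ki}^{l}\,g_{lj}-\Gamma_{kj}^{l}\,g_{il}=0 .\n\t\]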
\begin{theorem}\n\tLet us now consider the determinant $g$ of the quantities $g_{ij}$. Differentiating the determinant gives us:\n\t\n\t\end{theorem}\n\t\begin{dem}\n\tConsider any variable, chosen here to be the time $t$ only to simplify the notation in the calculations that follow. When the main part of the development is completed, the result can be adapted to any other variable! For what follows, we will write $g_j$ for the elements of the $j$-th column of $g_{ij}$.\n\n\tFor the following developments, we define the notations:\n\t\n\tThe rule for differentiating a functional determinant is given, for recall (\SeeChapter{see section Linear Algebra}), by:\n\t\n\tConsidering the first determinant and using the minors (\SeeChapter{see section Linear Algebra}) to expand its first column:\n\t\n\tFor the $j$-th determinant, we get:\n\t\n\tThus:\n\t\n\tNow, we have demonstrated much earlier that the matrix of the contravariant components of the metric tensor is the inverse of that of the covariant ones. Therefore:\n\t\n\tThis allows us to write:\n\t\n\tand therefore:\n\t\n\twhich is also written (following the conventions defined at the beginning of the proof):\n\t\n\twhere the reader must therefore be careful not to misread, typically by thinking that the derivative $\mathrm{d}_t$ in the right-hand term differentiates the whole product ($g_{ij}g^{ij}$)... while it differentiates only $g_{ij}$.\n\t\n\tHowever, we can adopt another variable. Let $h$ be this other variable:\n\t\n\tOr, by rearranging:\n\t\n\tThis is what we wanted to prove.\n\t\begin{flushright}\n\t\t$\square$  Q.E.D.\n\t\end{flushright}\n\t\end{dem}\n\tNow, by combining:\n\t\n\tdemonstrated earlier above with the result we have just proved:\n\t\n\twe get:\n\t\n\tTherefore we have:\n\t\n\tLet us prove that it is possible to derive this last relation from the following important equality:\n\t\n\tIndeed:\n\t\n\tThis relationship will not mean much until we make more explicit use of it in our study of General Relativity (\SeeChapter{see section General Relativity}).\n\n\tLet us now determine the second covariant derivative of the metric tensor. Let us remember before we go further (because it is important) that we had obtained:\n\t\n\t
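In compact form, the identities just obtained can be recalled as (with $g=\det(g_{ij})$, assuming $g>0$; otherwise replace $g$ by $|g|$ under the square root):\n\t\[\n\t\frac{\partial g}{\partial u^h}=g\,g^{ij}\,\frac{\partial g_{ij}}{\partial u^h},\qquad \Gamma_{ih}^{i}=\frac{1}{2g}\,\frac{\partial g}{\partial u^h}=\frac{\partial}{\partial u^h}\ln\sqrt{g} .\n\t\]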
\subsection{Riemann-Christoffel symbols}\begin{tcolorbox}[colback=red!5,borderline={1mm}{2mm}{red!5},arc=0mm,boxrule=0pt]\n\t\bcbombe Before reading what follows ... We want to remind the reader \underline{again} that the writing of this section is not finished (as for most chapters in the book)! Thus, we still have to illustrate the abstract notions that will follow with concrete practical examples beyond the special case of General Relativity!\n\t\end{tcolorbox}\n\t\n\tLet us recall that we have demonstrated earlier above that:\n\t\n\tThis relation concerns, barring an interpretation error by the writer of these lines (interpreting Tensor Calculus results is sometimes painful...), the covariant derivative of a tensor of order two - such as the metric - along a geodesic path in two perpendicular directions (the second covariant derivative giving access to the \"perpendicular geodesic\" between the two geodesics infinitely close to the first covariant derivative). We already know that such a shift is a \"parallel transport\".\n\t\n\tBy substituting in it:\n\t\n\twe then have:\n\t\n\tLet us now switch the indices $j$ and $k$ in the previous expression to have a differential with respect to another path:\n\t\n\tAssuming that the components satisfy the classical properties $\partial_{kj}v_i=\partial_{jk}v_i$, we get by subtraction of the two previous expressions:\n\t\n\tAnd since we have proved that in the case of a symmetric metric we have:\n\t\n\twe therefore have:\n\t\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThe fact that, in the case of a symmetric metric, $\Gamma_{jk}^r=\Gamma_{kj}^r$, so that this difference vanishes in the preceding relation, leads many practitioners to define what we name the \"\NewTerm{torsion tensor}\index{torsion tensor}\" or \"\NewTerm{torsor}\" (but in reality it is a particular case of a more general relation which belongs to the domain of differential geometry). Thus, we define the torsion tensor as:\n\t\n\tand in the case of a symmetric metric (Euclidean space), the torsion is zero by extension, as we have already seen! In fact, the Einstein field equations, which we shall prove later, implicitly imply a symmetric metric with zero torsion. However, it is possible to prove that a non-symmetric tensor can always be decomposed into the sum of a symmetric and an antisymmetric tensor (this is trivial: it is like decomposing a square matrix into the sum of a symmetric matrix and an antisymmetric one).\n\t\end{tcolorbox}\n\tIt then remains:\n\t\n\tSince parallel transport takes place on infinitely close geodesic paths, we take the limit:\n\t\n\twhich mainly reflects the fact that the velocity field is almost equal at two infinitely close parallel points.\n\n\tIt then remains:\n\t\n\tThis relation expresses the fact that, like gravity, the curvature of space-time causes a mutual acceleration between the geodesics! Moreover, it is easy to see that the mutual acceleration between the geodesics is zero if the Riemann-Christoffel tensors are null (typically in Cartesian coordinates, that is, for a flat space-time). This is exactly what we expect of gravity: if we do not observe any acceleration, the curvature (we shall now define what it is) is zero, and if the curvature is zero, we observe no acceleration. Moral of the story: gravity is curvature and curvature is gravity!\n\n\tWe see that the quantity in parentheses is a tensor of order four that we will write in this book as follows (because there are several traditions in the way of writing it...):\n\t\n\tand which by itself summarizes parallel transport and the fact that gravity and the geometry of space are linked together. Obviously, if the metric is of Minkowski type (or tends to a metric of Minkowski under certain conditions), then $R_{i,jk}^l$ is zero! A very few authors write this last equality in the (unfortunate....) form:\n\t\n\tThe tensor $R_{i,jk}^l$ is named the \"\NewTerm{Riemann-Christoffel tensor}\index{Riemann-Christoffel tensor}\" or \"\NewTerm{Riemannian space tensor}\index{Riemannian space tensor}\". 
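For recall, and keeping in mind the remark further below that sign conventions differ between authors, the components of this tensor have the standard explicit form (up to the overall sign, which depends on the convention adopted):\n\t\[\n\tR_{i,jk}^{l}=\partial_j\Gamma_{ki}^{l}-\partial_k\Gamma_{ji}^{l}+\Gamma_{jr}^{l}\,\Gamma_{ki}^{r}-\Gamma_{kr}^{l}\,\Gamma_{ji}^{r} .\n\t\]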
The curvature of a Riemannian space can also be characterized using this tensor.\n\n\tIf we multiply the tensor $R_{i,rs}^k$ by $g_{jk}$, then we have the covariant components of this tensor, such that:\n\t\n\tand given the following relations, which we proved earlier above:\n\t\n\twe therefore get:\n\t\n\tand let us replace the quantities $g_{jk}\partial_r\Gamma_{is}^k$ by $\partial_r\left(g_{jk}\Gamma_{is}^k\right)-\Gamma_{is}^k\partial_r g_{jk}$. We then get:\n\t\n\tWe also proved earlier that:\n\t\n\tHence:\n\t\n\tand as:\n\t\n\twe get:\n\t\n\tand we also proved that:\n\t\n\tHence:\n\t\n\tAnd by inserting these into the next-to-last relation, we get:\n\t\n\tFinally, we get for the covariant expression of the Riemann-Christoffel tensor:\n\t\n\tIt should be noticed that the Riemann-Christoffel tensor is therefore antisymmetric:\n\t\n\tand that inside the parentheses of the next-to-last relation we have only double partial derivatives, while outside the parentheses the Christoffel symbols contain only simple partial derivatives!\n\n\tFinally, the permutation of the indices $ij$ and $rs$ as a block gives us, as a consequence of the symmetry of the $g_{ij}$ and by inverting their derivation order:\n\t\n\tLet us now perform a circular permutation on the indices $j$, $r$, $s$ in the expression (obtained just above):\n\t\n\tthen we get:\n\t\n\tand (it is very simple to check by summing the three lines above) we get:\n\t\n\tThe previous identity is named the \"\NewTerm{first Bianchi identity}\index{first Bianchi identity}\" or also the \"\NewTerm{Bianchi algebraic identity}\index{Bianchi algebraic identity}\", and it highlights the cyclicity property of the Riemann-Christoffel tensor. In reality, we should not use the word \"identity\", since it is verified only (at least to my knowledge) in the case of a symmetric metric tensor (otherwise, the torsion is not zero, for recall!).\n\n\tThe reader will observe that it is immediate that this last relation is satisfied in the case of the Minkowski metric, since if at all points the partial derivative of the metric is zero we have:\n\t\n\tAnd we will see in the section of General Relativity that this first identity will serve as a basis for the construction of the Schwarzschild metric.\n\n\tIf the metric is of the Minkowski type (we change the notation of the indices to be more in conformity with the usual writings in general relativity), then it is immediate that we also have:\n\t\n\tBut in the case where the metric is not of the Minkowski type, this latter relation can be satisfied, and is of interest, only if the chosen metric can be expanded in a Taylor series whose first partial derivatives vanish at $0$ (see the section of Differential Geometry for such Taylor series!).\n\n\tThis relation is valid only in the case of a \"\NewTerm{locally inertial frame (LIF)}\index{locally inertial frame}\", in which all the Christoffel symbols vanish but not their derivatives.\n\n\tBy extension:\n\t\n\tLet us recall that implicitly this relation, named the \"\NewTerm{second Bianchi identity}\index{second Bianchi identity}\" or \"\NewTerm{Bianchi differential identity}\index{Bianchi differential identity}\", always expresses simply (if one may say...) the fact that gravity and the geometry of space are linked together.
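Written out in the covariant notation used above (again up to the sign conventions, and with the index cycling performed on $j$, $r$, $s$ and on the derivative index respectively), the two identities read:\n\t\[\n\tR_{ij,rs}+R_{ir,sj}+R_{is,jr}=0,\qquad \nabla_m R_{ij,rs}+\nabla_r R_{ij,sm}+\nabla_s R_{ij,mr}=0 .\n\t\]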
Following a reader's request, let us explain in much more detail how to derive this identity!\n\n\tThe identity is easiest to derive at the origin of a locally inertial frame (LIF), as already mentioned, where the first derivatives of the metric tensor, and thus the Christoffel symbols, are all zero. At this point, we have\n\t\n\tIf the Christoffel symbols are all zero, then the covariant derivative becomes the ordinary derivative\n\t\n\tTherefore we get, at the origin of a LIF:\n\t\n\tBy cyclically permuting the index of the derivative with the last two indices of the tensor, we get\n\t\n\tBy adding up all three cyclic permutations with the covariant derivative and using the commutativity of partial derivatives, we see that the terms cancel in pairs, so we get\n\t\n\tAs usual, we can use the argument that since we can set up a LIF with its origin at any non-singular point in spacetime, this equation is true everywhere; and since the covariant derivative of a tensor is a tensor, this is a tensor equation and is thus valid in all coordinate systems.\n\t\n\t\n\t\subsection{Ricci curvature (Ricci tensor)}\n\tBefore we can see the consequences of the second Bianchi identity, we need to define the \"\NewTerm{Ricci tensor}\index{Ricci tensor}\":\n\t\n\twhich is therefore simply the contraction of the first and third indices of the Riemann-Christoffel tensor which we have given above:\n\t\n\tin other words, it is just a more condensed notation ... and the letters for the upper or lower indices, as well as the presence of the comma, are left to the free choice of the writer (according to mood, and especially when the context makes it possible to avoid any confusion).\n\n\tFor example, with the Riemann-Christoffel tensor we have just given, the Ricci tensor could be written in the following two ways (we keep Latin letters for the indices):\n\t\n\tThe Ricci tensor can be taken as the trace of the Riemann tensor; hence it is of lower rank and has fewer components. If you have a small geodesic ball in free fall, then (ignoring shear and vorticity) the Ricci tensor tells you the rate at which the volume of that ball begins to change, whereas the Riemann tensor contains information not only about its volume, but also about its shape. \n\t\n\tOther contractions on other indices are also possible, but because $R_{\alpha\beta,\mu\nu}$ is antisymmetric in $\alpha,\beta$ and in $\mu,\nu$, the contractions on these indices are equivalent to $\pm R_{\alpha\beta}$.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tThere is no widely accepted convention for the sign of the Riemann curvature tensor, or the Ricci tensor, so check the sign conventions of whatever book you are reading!\n\t\end{tcolorbox}\n\tSimilarly, we define the \"\NewTerm{Ricci scalar}\index{Ricci scalar}\" (also sometimes named the \"\NewTerm{Riemann scalar}\index{Riemann scalar}\") by the relation:\n\t\n\twhich has the following properties:\n\t\begin{itemize}\n\t\t\item If the space is flat, the Ricci scalar is zero\n\t\n\t\t\item If space is curved like a sphere, the Ricci scalar is positive\n\t\n\t\t\item If the space is curved like a horse saddle, the Ricci scalar is negative\n\t\end{itemize}\n\tOr explicitly (changing the notation of the indices to insist on the fact that this has no impact!):\n\t\n\tThe Ricci scalar is the trace of the Ricci tensor, and it is a measure of scalar curvature. It can be taken as a way to quantify how the volume of a small geodesic ball (or alternatively its surface area) differs from that of a reference ball in flat space.
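In the index notation of this section, the two definitions just given read:\n\t\[\n\tR_{ij}=R_{i,kj}^{k},\qquad R=g^{ij}R_{ij} .\n\t\]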
We will have concrete practical examples in the section of General Relativity, but let us look here at simplified examples for the first two cases (we will not, however, prove the converse).\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tE1. Let us start with the metric of flat space (without the temporal component). We have (\SeeChapter{see section General Relativity}):\n\t\n\tBy taking the definition of the Ricci scalar in explicit form:\n\t\n\tit is immediate that $R$ is zero, since the partial derivatives will all be zero. So a flat space has a null Ricci scalar.\\\n\t\n\tE2. Let us now look at the metric of flat space expressed in spherical coordinates (without the temporal component). We have (\SeeChapter{see section General Relativity}):\n\t\n\tand:\n\t\n\twith:\n\t\n\tWe know that to compute the Ricci scalar (or Ricci curvature), we must therefore compute the contraction of the Riemann-Christoffel tensor (that is, the Ricci tensor), which itself depends on the Christoffel symbols of the second kind, which themselves depend on the Christoffel symbols of the first kind (argh!).\n\n\tWe shall therefore begin with the lowest level, that is to say, by determining all the Christoffel symbols of the first kind, given for recall by:\n\t\n\t\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tSo we have $3^3$, that is, $27$ possible Christoffel symbols of the first kind! Even if some symbols are equal (we have proved it!), we will still calculate them all.\\\n\n\tLet's start with joy and good humor ...:\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tLet us now calculate all the Christoffel symbols of the second kind in detail:\n\t\n\tAgain, as the metric tensor is diagonal, this will simplify the calculations!\\\n\n\tWe then have:\n\t\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tLet us now calculate the $9$ components of the Ricci tensor in detail according to:\n\t\n\tWe then have (we calculate them all, even if we know that subsequently those which do not have $\alpha=\beta$ will be useless, given that the metric is diagonal):\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tLet us now calculate the Ricci scalar:\n\t\n\twe then have:\n\t\n\tThe Ricci scalar is therefore also zero. This result may be surprising, but in reality it is logical, since we have only calculated the scalar curvature of a flat space expressed in spherical coordinates.\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\tE3. Let us now impose the diagonal metric of the 2-sphere surface $\mathcal{S}^2$ (without the temporal component). 
We then have, in accordance with what we have seen in the section of Differential Geometry:\n\t\n\tand:\n\t\n\tTherefore (\SeeChapter{see section Differential Geometry}):\n\t\n\toften written as:\n\t\n\twhere $r$ is a constant!\n\n\tWe shall therefore begin with the lowest level, that is, by determining all the Christoffel symbols of the first kind, given for recall by:\n\t\n\tWe therefore have $2^3$, that is, $8$ possible Christoffel symbols of the first kind! Even if some symbols are equal (we have demonstrated it!), we will still calculate them all:\n\t\n\t\end{tcolorbox}\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tLet us now calculate all the Christoffel symbols of the second kind in detail:\n\t\n\tAgain, as the metric tensor is diagonal, this will simplify the calculations!\\\n\n\tWe then have:\n\t\n\tLet us now calculate the $4$ components of the Ricci tensor in detail according to:\n\t\n\tWe then have (we calculate them all, even if we know that subsequently those which do not have $\alpha=\beta$ will be useless, given that the metric is diagonal):\n\t\n\t\end{tcolorbox}\n\t\n\t\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\n\tLet us now calculate the Ricci scalar:\n\t\n\tWe then have:\n\t\n\tWe notice that:\n\t\begin{enumerate}\n\t\t\item The Ricci scalar is a constant. This means that the hypersurface has a constant curvature at all points of the surface (we know that the sphere, by symmetry, has a constant curvature at all points). It thus possesses a form of symmetry with respect to its curvature. We are then dealing with a \"\NewTerm{maximally symmetric manifold}\".\n\n\t\t\item This scalar is positive, which describes a domed space (ball, sphere)\n\t\end{enumerate}\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tBe careful not to confuse the value of the Ricci curvature and that of the Gauss curvature.\n\t\end{tcolorbox}\n\t\n\t
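For recall, the metric used in this example, and the classical value of its scalar curvature for a sphere of radius $r$ (consistent with the constancy and positivity just noted), are:\n\t\[\n\t\mathrm{d}s^2=r^2\,\mathrm{d}\theta^2+r^2\sin^2\theta\,\mathrm{d}\phi^2,\qquad R=\frac{2}{r^2} .\n\t\]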
\subsection{Einstein Tensor}\n\tLet us apply a contraction to the second Bianchi identity (valid, for recall, with the \"$+$\" sign only if the metric is positive):\n\t\n\tLet us recall that $\nabla_\lambda g_{\alpha\mu}=0$ and similarly, by extension, that $\nabla_\lambda g^{\alpha\mu}=0$. So finally this leads us, by the property of derivatives (product rule), to write:\n\t\n\tand therefore to get:\n\t\n\tUsing the antisymmetry property of the Riemann-Christoffel tensor, we write:\n\t\n\twhich ultimately brings us, from the definition of the Ricci tensor, to write:\n\t\n\tThis last relation is named the \"\NewTerm{contracted Bianchi identity}\".\n\n\tLet us contract this relation once more:\n\t \n\twhich is identical to writing, using the properties of the Einstein summation convention (which allows us to freely rename the indices):\n\t\n\tThis is equivalent to:\n\t\n\tAs $\nabla_\lambda R=g_{\lambda}^\mu \nabla_\mu R$, we have:\n\t\n\tBy raising the index $\lambda$ by multiplication with $g^{\nu\lambda}$, we get \"\NewTerm{Einstein's identity}\index{Einstein's identity}\":\n\t\n\tThe \"\NewTerm{Einstein tensor}\index{Einstein tensor}\" (a tensor of order two, contravariant in the present case), which is therefore divergence-free in a given Riemannian space, is defined by:\n\t\n\tand expresses, in the shortest possible way, parallel transport under all the assumptions seen so far.\n\n\tIdentically, we can obtain the covariant form:\n\t\n\tThe tensor is therefore constructed for a Riemannian metric only (which nevertheless leaves a lot of possible spaces ...), and is automatically divergence-free:\n\t\n\tIt must be remembered, however, that a large part of the latest developments consider a symmetric metric. This is why some speak of a \"\NewTerm{symmetric gravitational theory}\" when dealing with General Relativity.\n\n\tWe shall find this tensor naturally in the section of General Relativity when, by making use of the variational principle, we decompose the action into two terms:\n\t\begin{itemize}\n\t\t\item the action of mass in the gravitational field\n\n\t\t\item the action of the gravitational field in the absence of mass\n\t\end{itemize}\n\tBy expressing the whole in a Riemannian space, we will then get the no less famous Einstein field equations (without further explanations in this section):\n\t\n\tthe details being given in the section of General Relativity.\n\t\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]\n\tAs we see, we can very well add a constant term to the expression of the Einstein tensor without changing the nullity of its divergence. This fact, used in Astrophysics, makes it possible to construct models of particular Universes that we will deal with in the section of Cosmology.\n\t\end{tcolorbox}\n\t\begin{tcolorbox}[colframe=black,colback=white,sharp corners]\n\t\textbf{{\Large \ding{45}}Example:}\\\\\n\tLet us calculate the order-$2$ covariant Einstein tensor:\n\t\n\tbased on the diagonal metric of the 2-sphere surface $\mathcal{S}^2$ (without the temporal component):\n\t\n\tSince the metric is diagonal, we have proved earlier above, and in detail by example, that:\n\t\n\tAnd as, in the present case, we also have:\n\t\n\tit follows that:\n\t\n\tSo we have to focus only on two components:\n\t\n\twhich confirms what we have said earlier above.\n\t\end{tcolorbox}\n\tThe complexity of the Einstein tensor expression can be shown using the formula for the Ricci tensor in terms of Christoffel symbols:\n\t\n\twhere ${\displaystyle \delta _{\beta }^{\alpha }}$ is the Kronecker tensor and the Christoffel symbol $\Gamma ^{\alpha }{}_{\beta \gamma }$ is given for recall by:\n\t\n\tBefore cancellations, this formula results in $2\times (6+6+9+9)=60$ individual terms. Cancellations bring this number down somewhat.
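For recall, in the two forms quoted above, together with the identity it satisfies:\n\t\[\n\tG^{\mu\nu}=R^{\mu\nu}-\frac{1}{2}\,g^{\mu\nu}R,\qquad G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}R,\qquad \nabla_\mu G^{\mu\nu}=0 .\n\t\]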
%to make section start on odd page
\newpage
\thispagestyle{empty}
\mbox{}
\section{Spinor Calculus}
\lettrine[lines=4]{\color{BrickRed}A}s we will see first in relativistic quantum physics, spinors play a major role in quantum theory, and therefore in all of contemporary physics (quantum field theory, standard model, string theory, etc.). The actual purpose of this section on spinors is only to give the reader the tools necessary for a deep understanding of what will be done in the chapter on Atomistic.

It was starting in 1927 that the physicists Pauli, and after him Dirac, introduced spinors for the representation of wave functions (see section Relativistic Quantum Physics). However, in their mathematical form, spinors were discovered by Elie Cartan in 1913 during his research on the representations of groups, following the general theory of Clifford algebras (introduced by the mathematician W.K. Clifford in 1876). He showed, as we will see, that spinors in fact provide a linear representation of the group of rotations of a space with any number of dimensions. Thus, spinors are closely related to geometry, but they are often introduced in an abstract way without intuitive geometric meaning. We will therefore try (as always in this book) to introduce this tool in the simplest and most intuitive way possible, with a maximum of details.

The spinor formalism is not only of interest for quantum physics and its related developments: among others, Roger Penrose showed that spinor theory is an extremely fruitful approach to the theory of General Relativity. Even if the most commonly used tool for the treatment of General Relativity is tensor calculus, Penrose seems to have shown that, in the specific case of four-dimensional space with a Lorentz metric, the formalism of two-component spinors is more appropriate.

The theory of spinors, named "\NewTerm{spinor calculus}\index{spinor calculus}" or sometimes "\NewTerm{spin geometry}\index{spin geometry}", is extremely broad, but as we know this book aims to address physicists and engineers; we will therefore limit ourselves to the spinor properties useful in quantum physics (at least for now).

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We strongly recommend that the reader first read the subsubsection on quaternions (\SeeChapter{see section Numbers}), the subsubsection on rotations in space (\SeeChapter{see section Euclidean Geometry}) and finally, if possible, for a practical physical example, the section on Relativistic Quantum Physics.
\end{tcolorbox}

\pagebreak
\subsection{Unit Spinor}

We will give here a first simplified and special definition (or example) of spinors. Thus, we will show that it is possible, with such a tool, to represent a vector of the three-dimensional space $\mathcal{E}^3$ using a two-component spinor.
The method is extremely simple, and the reader who has already read the part of the section on Quantum Wave Physics dealing with the Dirac equation, as well as the section on Quantum Computing, will see a rather beautiful analogy.

Consider, to start, the sphere of unit radius given by the following equation (\SeeChapter{see section Analytic Geometry}):

And consider the following figure:
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{img/algebra/spinor_unit_sphere.jpg}
\caption[]{Unit sphere}
\end{figure}
Let us place on this sphere, of center O and unit radius, a point $P$ of coordinates $(x,y,z)$, and denote by $N$ (north) and $S$ (south) the points where the sphere intersects the $Z$ axis.

By convention, the point $S$ will have the coordinates:

We obtain the so-named "\NewTerm{stereographic projection}\index{stereographic projection}" $P'$ of the point $P$ by tracing the straight line $SP$, which crosses the complex equatorial plane $x\text{O}y$ (yes! we chose $\mathbb{C}$ as the plane!) at the point $P'$ of coordinates $(x', y', z')$.

The similar triangles $SP'\text{O}$ and $SPQ$ (with $Q$ being the orthogonal projection of the point $P$ onto the $Z$-axis) give us the following relations, by simply applying Thales' theorem (\SeeChapter{see section Euclidean Geometry}):

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
The last two relations (ratios) are obtained by simply applying Thales' theorem (see section Euclidean Geometry) in the complex equatorial plane.
\end{tcolorbox}

To simplify the notation, let us now write:

From the previous relations we have:

Taking the squared modulus (see the study of complex numbers in the section on Numbers):

and from the equation of the sphere it follows:

we finally get:

Let us now write the complex number $\xi$ in the form:

where $\phi,\psi$ are two complex numbers on which we can always impose the following unitarity condition (nothing prohibits us from doing so, and for theoretical physics purposes this choice suits us well...):
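namely
\[
\psi\bar{\psi}+\phi\bar{\phi}=1.
\]
To make what follows concrete, here is one standard set of conventions (our assumption; the choices of pole and of the ratio defining $\xi$ may differ here, which would flip some signs). Projecting from the south pole $S=(0,0,-1)$ onto the equatorial plane gives
\[
\xi=x'+iy'=\frac{x+iy}{1+z},\qquad |\xi|^2=\frac{1-z}{1+z},
\]
and writing $\xi=\phi/\psi$ together with the unitarity condition recovers the coordinates of $P$ as
\[
x=\phi\bar{\psi}+\bar{\phi}\psi,\qquad
y=-i\left(\phi\bar{\psi}-\bar{\phi}\psi\right),\qquad
z=\psi\bar{\psi}-\phi\bar{\phi},
\]
which indeed satisfy $x^2+y^2+z^2=(\psi\bar{\psi}+\phi\bar{\phi})^2=1$. With $\psi=\cos\alpha\,e^{i\beta}$ and $\phi=\sin\alpha\,e^{i\gamma}$ this gives $\overrightarrow{OP}=(\sin 2\alpha\cos(\gamma-\beta),\ \sin 2\alpha\sin(\gamma-\beta),\ \cos 2\alpha)$, consistent with the geometric interpretation of $2\alpha$ and $\gamma-\beta$ given further below.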
\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
For example, the following complex numbers satisfy (by chance!!??) the above condition:

\end{tcolorbox}
Before continuing, remember that we proved in our study of complex numbers (\SeeChapter{see section Numbers}) that:

Therefore, by injecting the last two relations into the equation given above, we get:

So we get:

Rearranged:

Therefore:

Finally, we have a simple expression for the $z$ coordinate of the point $P$ because, in the last equality above, you must remember that $\psi\bar{\psi}+\phi\bar{\phi}=1$; then:

As we have:

then, by summing and respectively subtracting the two above relations and using previous results, we get:

To summarize, for the point $P$ we have, with our choices:

Thus, to any point $P$ on the sphere of unit radius we can associate a pair of complex numbers satisfying the imposed unitarity identity!

\pagebreak
Therefore, in complete and explicit form, we finally have, using what we know about complex numbers (\SeeChapter{see section Numbers}) and remarkable trigonometric identities (\SeeChapter{see section Trigonometry}):

We easily notice that the norm of this vector is equal to $1$.

This last relation also indicates that $2\alpha$ is the angle between $\text{O}z$ and $\overrightarrow{OP}$ (since the vector has unit norm) and therefore, by deduction, $\gamma-\beta$ represents the angle between $\text{O}x$ and the plane $(\text{O}z, \overrightarrow{OP})$:

\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/spinor_simple_rotation.jpg}
	\caption[]{Representation of the rotation}
\end{figure}

The pair of complex numbers:

is by definition a "\NewTerm{unitary spinor}\index{unitary spinor}" (it also contains all the information about $z$). Thus, as we have seen, a unitary spinor can also be expressed in the form:

Likewise, any spinor can be written in the more general form:

The spin is essentially measured from the oriented $z$-axis, as we have already seen with the previous figure.

The stereographic projection has led us to represent certain vectors of the Euclidean space $\mathcal{E}^3$ by the elements of a two-dimensional complex vector space, the "\NewTerm{space of spinors}\index{space of spinors}".

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
This representation is not unique, because the arguments of the complex numbers are (in trigonometric form) determined only up to an additive constant!
\end{tcolorbox}

The reader who has already read a little of the section on Wave Quantum Physics will probably have noticed the strange (not innocent) similarity of the following identity and relations:

compared to the de Broglie normalization condition (the integral over all space of the product of the complex wave function and its conjugate is equal to unity) and to the developments determining the continuity equation, also in the section on Wave Quantum Physics.

Let us now see that, for future needs, we can find two new vectors $\vec{v}_1,\vec{v}_2$ of the Euclidean space $\mathcal{E}^3$ associated with a unitary spinor $(\psi,\phi)$ determined on the unit sphere.
These vectors must be orthogonal to each other, of unit norm, and each orthogonal to the vector $\overrightarrow{OP}$.

To simplify the notations, let us write $\vec{v}_3=\overrightarrow{OP}$ and $\vec{v}_i=(x_i,y_i,z_i),i\in\left\lbrace 1, 2, 3\right\rbrace$.

The vectors $\vec{v}_1,\vec{v}_2,\vec{v}_3$ are of course related by the cross product (\SeeChapter{see section Vector Calculus}):

hence, taking into account the expression of the components of $\overrightarrow{OP}$ in terms of its associated spinor, and the fact that $\psi\bar{\psi}+\phi\bar{\phi}=1$, we obtain:

Writing out the orthogonality of the vectors obviously gives us six additional equations. However, the orientation of the vectors $\vec{v}_1,\vec{v}_2$ not being fixed, there is some freedom in the values of their components. Let us select values such that:

Taking the complex conjugates of the previous relation and summing, in order to keep only real parts, we have to write:

We can check that the norm is equal to unity; just verify it with the squared norm:

In the same way we get:

We can easily check that these values indeed restore the cross-product relations for $\vec{v}_2$. To any unitary spinor $(\psi,\phi)$ we can therefore associate three vectors $\vec{v}_1,\vec{v}_2,\vec{v}_3$. We can directly check that the vectors thus calculated are mutually orthogonal and of unit norm.

A reader has asked us to show this affirmation in as much detail as possible for $x_1$. So let's go:

\subsection{Geometric Properties}
We will study the transformations of the vectors associated with a spinor in order to derive the corresponding spinor transformation properties. As we know, rotations in space can always be expressed as the product of two plane symmetries, so we begin by studying the latter.

\subsubsection{Plane Symmetries}
Let us first consider the plane symmetry of a vector:

During a symmetry relative to a plane $\mathcal{P}$, any vector $\overrightarrow{OM}$ is transformed into a vector $\overrightarrow{OM'}$. Let us determine a matrix $S$ representing this symmetry with respect to this plane!

Given $\overrightarrow{\text{O}A}$ a unit vector normal to the plane $\mathcal{P}$, and $H$ the foot of the perpendicular from any given point $M$ of space onto the plane $\mathcal{P}$:

\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/plane_symmetry.jpg}
	\caption[]{Plane symmetry of a vector relative to a plane}
\end{figure}
Let $M'$ be the point symmetric to $M$ with respect to the plane $\mathcal{P}$; we have:

Given $a_1,a_2,a_3$ the Cartesian components of $\overrightarrow{\text{O}A}$ and $(x,y,z),(x',y',z')$ the respective components of the vectors $\overrightarrow{OM},\overrightarrow{OM'}$, the above equation gives us the linear relations:

The matrix $S$ that takes us from the vector $\vec{X}(x,y,z)$ to the vector $\vec{X}'(x',y',z')$ therefore has the following expression:
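A hedged reconstruction of these two steps (the standard reflection formulas, assuming the $a_i$ are the components of the unit normal $\overrightarrow{\text{O}A}$):
\[
\overrightarrow{OM'}=\overrightarrow{OM}-2\left(\overrightarrow{OM}\cdot\overrightarrow{\text{O}A}\right)\overrightarrow{\text{O}A},
\qquad
S=I_3-2\begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}\begin{pmatrix}a_1&a_2&a_3\end{pmatrix}
=\begin{pmatrix}
1-2a_1^2 & -2a_1a_2 & -2a_1a_3\\
-2a_1a_2 & 1-2a_2^2 & -2a_2a_3\\
-2a_1a_3 & -2a_2a_3 & 1-2a_3^2
\end{pmatrix}.
\]
One checks directly, using $a_1^2+a_2^2+a_3^2=1$, that $S^2=I_3$: applying the symmetry twice restores the original vector.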
Let us keep this result in mind, and now consider two vectors $(\vec{X}_1,\vec{X}_2)$, orthogonal to each other and unitary, defining, as we have seen above, a unitary spinor $(\psi,\phi)$ (we used the notation $\vec{v}_1,\vec{v}_2$ before). A symmetry with respect to a plane $\mathcal{P}$ transforms the vectors $\vec{X}_1,\vec{X}_2$ into vectors $\vec{X'}_1,\vec{X'}_2$ with which is associated the spinor $(\psi',\phi')$.

\begin{theorem}
We will now show that the transformation of the spinor $(\psi,\phi)$ into the spinor $(\psi',\phi')$ associated with $(x'_3,y'_3,z'_3)$ is:

\end{theorem}
and that it transforms the vectors $\vec{X}_1,\vec{X}_2$ into the vectors $\vec{X}_1^{'},\vec{X}_2^{'}$, these vectors being deduced - as we will now show - from each other by a single plane symmetry, so that the matrix $\mathcal{A}$ indeed represents the sought transformation.

The previous relation therefore gives us:

In all, we have so far the previous set of relations and:

Therefore we can deduce:

Then, using the fact that $\|\vec{A}\|^2=a_1^2+a_2^2+a_3^2=1$, we get:

So we indeed fall back on the symmetry matrix:

Thus, the matrix that we will see again in the section on Relativistic Quantum Physics is:

$\mathcal{A}$ therefore generates the transformation of a spinor $(\psi,\phi)$ into a spinor $(\psi',\phi')$ such that the associated vectors $(\vec{X}_1,\vec{X}_2)$ can be deduced respectively from $(\vec{X}_1^{'},\vec{X}_2^{'})$ by a planar symmetry.

\subsubsection{Rotations}
As we saw in the section of Euclidean Geometry, it is possible to rotate a vector in the plane or in space using matrices. Similarly, by extension, it is clear that the product of two rotations is a rotation (that is elementary linear algebra - at least we take it as such).

Consider therefore two planes $P, Q$ whose intersection generates a line $L$, and let us denote by $\vec{A}(a_1,a_2,a_3)$ and $\vec{B}(b_1,b_2,b_3)$ the unit vectors carried by the respective normals (\SeeChapter{see section Analytical Geometry}) to these two planes intersecting in $L$:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/spinor_intersecting_planes.jpg}
	\caption[]{Illustrated intersection of two planes}
\end{figure}
Let us denote by $\theta/2$ the angle between the vectors $\vec{A},\vec{B}$ (the reason for this notation comes from our study of quaternions (\SeeChapter{see section Numbers})). Let $\vec{L}$ be the unit direction vector carried by the line $L$ resulting from the intersection of the planes $P, Q$, and such that:

Explanation: $\vec{A},\vec{B}$ are unitary but not necessarily perpendicular, and we still need to ensure that $\vec{L}$ is a unit vector (norm equal to unity!). The above relation therefore ensures that:

The previous cross product gives us, for the components of $\vec{L}$:

On the other hand, the dot product can be written:

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
We will use these two planes as symmetry planes for our rotations.
\end{tcolorbox}
As we noticed previously, a rotation in $\mathcal{E}^3$ can always be obtained with two plane symmetries.
Thus, a rotation can be written as the application (multiplication) of two symmetry matrices, according to the results obtained previously:

Developing the product of these two matrices and taking into account the relations arising from the cross and dot products, we get:

Thus, we can write the transformation of a spinor $(\psi,\phi)$ into a spinor $(\psi',\phi')$ with a matrix of the form:

whose parameters are named "\NewTerm{Cayley-Klein parameters}\index{Cayley-Klein parameters}".

The matrix $R\left(\vec{L},\frac{\theta}{2}\right)$ can be written in another form if we perform a series expansion for infinitesimally small rotations $\theta/2=\varepsilon/2$ (that's where the physics comes back...):

Keeping only the first-order terms, the rotation matrix is finally written:

This matrix is the expansion of the rotation matrix in the neighborhood of the identity matrix, the latter obviously corresponding to the zero rotation. We also write it in the form:

where the matrix $\sigma_0$ is the identity matrix of order two and $\chi(\vec{L})$ is named the "\NewTerm{infinitesimal rotation matrix}\index{infinitesimal rotation matrix}". Now, if we put $L_1=1,L_2=L_3=0$ in $\chi(\vec{L})$ we get:

How should we interpret this result? Well, it's quite simple: choosing $L_1=1,L_2=L_3=0$ gives us a vector $\vec{L}$ collinear with the axis $\text{O}x$. Therefore, we can very well imagine the planes generating the axis $\text{O}x$ that carries $\vec{L}$. As $\varepsilon/2$ (that is, $\theta/2$) is generated by the vectors $\vec{A},\vec{B}$ perpendicular to $\vec{L}$ and thus to $\text{O}x$, the angle $\varepsilon/2$ (or its variation) represents a variation of the direction of the planes normal to $\vec{A},\vec{B}$ which, by symmetry, are used to construct the rotation (recall that $\vec{A},\vec{B}$ are not necessarily mutually orthogonal). So, by extension, having $L_1=1,L_2=L_3=0$ only allows us to make rotations (symmetries) around the $x$-axis.

Similarly, a rotation about the $y$-axis corresponds to $L_2=1,L_1=L_3=0$, which gives:

and in the same way, with $L_3=1,L_1=L_2=0$, we finally have:

The three matrices:

are rotation matrices in the space of "\NewTerm{two-dimensional spinors}\index{two-dimensional spinors}".
Physicists and mathematicians say that these matrices form an irreducible representation of dimension two of the group "$\NewTerm{\text{SU}(2)}$", also named the "\NewTerm{special group of spatial rotations $\text{SU}(2)$}\index{special group of spatial rotations}" (\SeeChapter{see section of Set Algebra}).

The previous infinitesimal matrices therefore bring out, in a skillful way, the following matrices:

These matrices are named "\NewTerm{Pauli matrices}\index{Pauli matrices}" and we will find them again in the section of Relativistic Quantum Physics as part of the study of the Dirac equation and the determination of its explicit solutions (using spinors).

Using the Pauli matrices, the infinitesimal rotation matrix can finally be written:

Let us define a vector $\vec{\sigma}$, named the "\NewTerm{Pauli vector}\index{Pauli vector}", whose components are the Pauli matrices:

The expression $L_1\sigma_1+L_2\sigma_2+L_3\sigma_3$ can be written as a sort of dot product which represents a sum of matrices (the arrow above the sigma is sometimes omitted if no confusion is possible):

The series expansion is then written:

The rotation matrix:

can, using the Pauli matrices, be written in the remarkable form (under the assumption of small angles):

An expression that we will use a lot in the section on Quantum Computing to express the $R$ matrices explicitly, and also in the section on Set Algebra.

This is sometimes written:

which can also be written in expanded form:

and which has the form of a quaternion of angle $\theta$ (don't forget that this is only for small angles!) and of axis $\vec{L}$. Hence the reason we have, from the beginning, chosen the notation $\varepsilon/2$.

It is clear, and this makes the analogy with quaternions even stronger, that the $2\times 2$ Pauli matrices (together with the identity $\sigma_0$) are a set of four linearly independent matrices! Just like the canonical basis of the quaternions!

If we denote $\vec{L}(L_1,L_2,L_3)=\vec{X}(x,y,z)$, then the "\NewTerm{spinor product}\index{spinor product}" is finally defined by:

This matrix constitutes, as we have already mentioned, the expansion of the rotation matrix in the neighborhood of the identity matrix, the components of $\vec{X}$ being associated with a spinor whose rotation is carried out through the double symmetry defined by two planes whose intersection is defined by the vector $\vec{X}$.

We can also notice the interesting consequence that a rotation of $2\pi$ ($360^\circ$) does not restore the object to its original position!!!

Indeed:

Therefore we need a rotation of $4\pi$ ($720^\circ$) to make a full turn! This corresponds to the spin of ${}^1{\mskip -5mu/\mskip -3mu}_2$. It takes two turns for the object to reappear identically (this is counter-intuitive, and can make one think that we are dealing with objects having higher dimensions than what we expect!). We then say that the representation of rotations is "double-valued".

Schematically this can be represented as:
\begin{figure}[H]
	\centering
	\includegraphics{img/algebra/full_rotation.jpg}
	\caption{Spinor full rotation (special example)}
\end{figure}
Or you can consider the following analogy: hold a tea cup on the palm of your hand and turn your hand, keeping it flat, until it (the hand, not the cup!) regains its original position. You will see that your hand has to make two full turns!
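To make the above fully explicit, here is a hedged summary in the conventions most common in quantum mechanics (the overall signs and the numbering of the matrices are our assumption and may differ from the displays above):
\[
\sigma_1=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
\sigma_2=\begin{pmatrix}0&-i\\i&0\end{pmatrix},\qquad
\sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix},
\]
and, beyond the small-angle expansion, the finite rotation of angle $\theta$ about the unit axis $\vec{L}$ exponentiates to
\[
R\left(\vec{L},\theta\right)=e^{-i\frac{\theta}{2}\,\vec{L}\cdot\vec{\sigma}}
=\cos\frac{\theta}{2}\,\sigma_0-i\sin\frac{\theta}{2}\left(\vec{L}\cdot\vec{\sigma}\right),
\]
since $(\vec{L}\cdot\vec{\sigma})^2=\sigma_0$. Setting $\theta=2\pi$ gives $R=-\sigma_0$, and only $\theta=4\pi$ gives $R=+\sigma_0$: this is exactly the double-valuedness (the two full turns of the tea cup) described above.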
\pagebreak
\subsubsection{Properties of Pauli Matrices}
The reader can easily check (if this is not the case, he can always contact us and we will write out the details) the following main properties of the Pauli matrices, some of which will be used in the section of Relativistic Quantum Physics:
\begin{enumerate}
	\item[P1.] Unitarity:

	\item[P2.] Anticommutativity:

	for $i\neq j$ and $i,j=1,2,3$.

	The last two properties give us:

	with $i,j=1,2,3$.

	\item[P3.] Cyclicity:

	\item[P4.] Commutativity:

	\item[P5.] Vector product:

	Given the squares of the different $\sigma$, denoting abusively by "$1$" the unit matrix (we change the indices to get you used to other common notations):

	This leads us to write (squared norm of the Pauli vector):

	Let us now consider the following products:

	and also the following products:

	All these relations can be summarized into a single one (!):

	where, for recall (\SeeChapter{see section Tensor Calculus}), the Kronecker symbol is defined by:

	and the antisymmetric (Levi-Civita) symbol, in three dimensions, is defined as follows:

	i.e. $\varepsilon_{ijk}$ is $1$ if $(i, j, k)$ is an even permutation of $(1,2,3)$ (including the natural order $(1,2,3)$ itself), $-1$ if it is an odd permutation, and $0$ if any index is repeated. In three dimensions only, the cyclic permutations of $(1,2,3)$ are all even permutations, and similarly the anticyclic permutations are all odd permutations. This means that in 3D it is sufficient to take the cyclic or anticyclic permutations of $(1,2,3)$ to easily obtain all the even or odd permutations.

	We also have:

	Here we fall back on the components of the cross product:

	Now let us develop an important spinor identity which will be useful to us in the section of Relativistic Quantum Physics:

	But we also have:

	So finally:

	\item[P6.] We note that these matrices are also Hermitian (recall that a Hermitian matrix is equal to its own transposed complex conjugate, according to what we saw in the section of Linear Algebra), such that:

	They are therefore, in the language of quantum physics, Hermitian operators!!!
\end{enumerate}
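As a hedged consolidation of properties P1 to P5 (standard results; the index conventions are ours), the single summarizing relation and the spinor identity mentioned above usually read
\[
\sigma_i\sigma_j=\delta_{ij}\,1+i\sum_k\varepsilon_{ijk}\,\sigma_k,
\qquad
(\vec{\sigma}\cdot\vec{a})(\vec{\sigma}\cdot\vec{b})=(\vec{a}\cdot\vec{b})\,1+i\,\vec{\sigma}\cdot(\vec{a}\times\vec{b}),
\]
for ordinary vectors $\vec{a},\vec{b}$ with commuting components. For example, $\sigma_1\sigma_2=i\sigma_3$ and $\sigma_1\sigma_1=1$ are both contained in the first relation.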
Let us now see what the eigenvectors and eigenvalues of the Pauli matrices are, because this result is very useful for the sections of Relativistic Quantum Physics and of Quantum Computing!

Let us recall that when a transformation (the application of a matrix) acts on a vector, it generally changes the direction of the vector, except for certain specific vectors: the eigenvectors. In that case the direction is conserved, but not necessarily the length. This property is exploited in quantum mechanics (and not only there, as we will see in many other sections of this book).

Let us first determine the associated eigenvectors and eigenvalues (\SeeChapter{see section Linear Algebra}) using the most common method:

The eigenvalue equation (\SeeChapter{see section Linear Algebra}) is thus written:

which gives us the characteristic equation:

hence the eigenvalues $\lambda=\pm 1$. This gives us the possibility to determine the eigenvectors as follows:

Therefore, for $\lambda=1$:

This imposes that $y=x$. The eigenvector is therefore:

whatever the value of $x$.

Conclusion: the direction of the eigenvector is conserved, but not its length (norm), because it depends on the value of $x$.

For $\lambda=-1$:

This imposes that $y=-x$, and therefore the eigenvector is equal to:

The previous eigenvectors, written with the Dirac formalism (\SeeChapter{see section of Relativistic Quantum Physics}), give for $\lambda=1$:

with a norm (equal to $1$ since we normalize to unity):

\begin{tcolorbox}[title=Remark,colframe=black,arc=10pt]
In the Dirac formalism, $\langle v |$ is named a "Bra" and $|v \rangle$ a "Ket".
\end{tcolorbox}
This is only valid for components that are real numbers. The normalized eigenvector therefore has the expression:

For $\lambda=-1$, we have:

and:

and the normalized eigenvector thus has the expression:

Let us now determine the eigenvectors and eigenvalues associated with $\sigma_y$ by following the same procedure:

So we have, for the eigenvalues:

The eigenvectors are determined as follows:

and therefore, for $\lambda=1$:

The eigenvector is therefore:

The associated norm:

The normalized vector is therefore expressed as:

Let us now determine the eigenvectors and eigenvalues associated with $\sigma_z$ by doing the same again.

We therefore have:

The eigenvectors are then, for $\lambda=1$:

which tells us nothing about $x$ ... the only possibility is to impose $y=0$ and therefore:

and the associated norm:

The normalized vector then has the expression:

and for $\lambda=-1$ we have the same kind of choice to make, choosing this time $x=0$, and therefore:

hence the associated norm:

The normalized eigenvector then has the expression:

Therefore the normalized eigenvectors of $\sigma_z$ lie along the directions of the Cartesian coordinate axes.
It is for this particular reason that the eigenvectors of $\sigma_z$ are given a dedicated notation in quantum computing:
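namely, in the standard quantum-computing notation (our conventions):
\[
|0\rangle=\begin{pmatrix}1\\0\end{pmatrix},\qquad
|1\rangle=\begin{pmatrix}0\\1\end{pmatrix},
\qquad\text{with}\qquad
\sigma_z|0\rangle=+|0\rangle,\quad \sigma_z|1\rangle=-|1\rangle,
\]
while the normalized eigenvectors of $\sigma_x$ and $\sigma_y$ found above are, respectively, $\tfrac{1}{\sqrt{2}}\begin{pmatrix}1\\\pm 1\end{pmatrix}$ and $\tfrac{1}{\sqrt{2}}\begin{pmatrix}1\\\pm i\end{pmatrix}$, each with eigenvalues $\pm 1$; the reader should also know that one encounters the alternative notations $|{\uparrow}\rangle,|{\downarrow}\rangle$ for the two eigenvectors of $\sigma_z$.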
% This is part of the TFTB Reference Manual.
% Copyright (C) 1996 CNRS (France) and Rice University (US).
% See the file refguide.tex for copying conditions.


\markright{tfrmmce}
\section*{\hspace*{-1.6cm} tfrmmce}

\vspace*{-.4cm}
\hspace*{-1.6cm}\rule[0in]{16.5cm}{.02cm}
\vspace*{.2cm}

{\bf \large \sf Purpose}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
Minimum mean cross-entropy combination of spectrograms.
\end{minipage}
\vspace*{.5cm}

{\bf \large \sf Synopsis}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
\begin{verbatim}
[tfr,t,f] = tfrmmce(x)
[tfr,t,f] = tfrmmce(x,h)
[tfr,t,f] = tfrmmce(x,h,t)
[tfr,t,f] = tfrmmce(x,h,t,N)
[tfr,t,f] = tfrmmce(x,h,t,N,trace)
\end{verbatim}
\end{minipage}
\vspace*{.5cm}

{\bf \large \sf Description}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
        {\ty tfrmmce} computes the minimum mean cross-entropy combination
        of spectrograms using as windows the columns of the matrix {\ty
        h}. The expression of this distribution is

\[\Pi_x(t,\nu)=\dfrac{E}{\|\prod_{k=1}^N |F_x(t,\nu;h_k)|^{2/N}\|_1}\
{\displaystyle \prod_{k=1}^N} |F_x(t,\nu;h_k)|^{2/N}, \]

where $\|\ \|_1$ denotes the $L_1$ norm, $E$ the energy of the signal:
\[E=\int_{-\infty}^{+\infty} |x(t)|^2\ dt=\iint_{-\infty}^{+\infty}
\Pi_x(t,\nu)\ dt\ d\nu=\|\Pi_x(t,\nu)\|_1,\] and $F_x(t,\nu;h_k)$ the
short-time Fourier transform of $x$, with analysis window $h_k(t)$.\\

\hspace*{-.5cm}\begin{tabular*}{14cm}{p{1.5cm} p{8cm} c}
Name & Description & Default value\\
\hline
        {\ty x}     & signal ({\ty Nx=length(x)})\\
        {\ty h}     & frequency smoothing windows, the {\ty h(:,i)} being normalized
                so as to be of unit energy\\
        {\ty t}     & time instant(s)          & {\ty (1:Nx)}\\
        {\ty  N}    & number of frequency bins & {\ty Nx}\\
        {\ty trace} & if nonzero, the progression of the algorithm is shown
                                         & {\ty 0}\\
     \hline {\ty tfr}   & time-frequency representation \\
        {\ty f}     & vector of normalized frequencies\\

\hline
\end{tabular*}
\vspace*{.2cm}

When called without output arguments, {\ty tfrmmce} runs {\ty tfrqview}.
\end{minipage}

\newpage

{\bf \large \sf Example}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
Here is a combination of three spectrograms with Gaussian analysis windows
of different lengths:
\begin{verbatim}
         sig=fmlin(128,0.1,0.4); h=zeros(19,3);
         h(10+(-5:5),1)=window(11); 
         h(10+(-7:7),2)=window(15);  
         h(10+(-9:9),3)=window(19); 
         tfrmmce(sig,h);
\end{verbatim}
\end{minipage}
\vspace*{.5cm}

{\bf \large \sf See Also}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
all the {\ty tfr*} functions.
\end{minipage}
\vspace*{.2cm}


{\bf \large \sf Reference}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
[1] P. Loughlin, J. Pitton, B. Hannaford, ``Approximating Time-Frequency
Density Functions via Optimal Combinations of Spectrograms'', IEEE Signal
Processing Letters, Vol. 1, No. 12, Dec. 1994.
\end{minipage}
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amsthm,amssymb,amsfonts}
\usepackage{listings}

\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}

\newenvironment{problem}[2][Problem]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
%If you want to title your bold things something different just make another thing exactly like this but replace "problem" with the name of the thing you want, like theorem or lemma or whatever

\begin{document}

%\renewcommand{\qedsymbol}{\filledbox}
%Good resources for looking up how to do stuff:
%Binary operators: http://www.access2science.com/latex/Binary.html
%General help: http://en.wikibooks.org/wiki/LaTeX/Mathematics
%Or just google stuff

\title{Assignment 2\\ MEEN 357}
\author{Jacob Hartzer}
\maketitle

\section*{Task 5}
\begin{problem}{i}
What is the range of a 9-bit unsigned integer?\\

The range is from 0 to $2^9-1$, or \bf{0 to 511}.
\end{problem}

\begin{problem}{ii}
What is the range of a 9-bit signed integer? \\

The range is from $-2^8$ to $2^8-1$, or \bf{-256 to 255}.
\end{problem}

\begin{problem}{iii}
What is the binary representation of decimal 125 as a 9-bit unsigned integer? \\

I derived this by finding the largest power of 2 not exceeding the remaining number, then subtracting and repeating. The process looked like this: $125 - 2^6 = 61$, then $61 - 2^5 = 29$, and so on. Then I filled in each digit appropriately: if I used that power, the digit is a 1. \bf{001111101}
\end{problem}

\begin{problem}{iv}
What is the binary representation of decimal 125 as a 9-bit signed integer?\\

For a nine-bit signed integer, since the number is positive and less than $2^8$, the answer remains the same: \bf{001111101}
\end{problem}

\begin{problem}{v}
What is the binary representation of decimal -125 as a 9-bit signed integer?\\

To obtain the negative counterpart, I found the 1's complement and then added one. This gave an answer of: \bf{110000011}
\end{problem}
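As a quick sanity check, reading $110000011$ back as a 9-bit two's complement number recovers the intended value:
\[
1\cdot(-2^8)+1\cdot 2^7+1\cdot 2^1+1\cdot 2^0=-256+128+2+1=-125.
\]
\end{document}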
\section{The Rarita-Schwinger Field}
\begin{frame}
	\frametitle{The Spinor of the Rarita-Schwinger Field}
	Idea: Combine the spin 1 $A^\mu$ and the spin $\sfrac{1}{2}$ Dirac spinor $\Psi$ into $\Psi^\mu$.
	It transforms according to the representation
	\begin{align*}
		(\sfrac{1}{2}, \sfrac{1}{2})\otimes\l[(\sfrac{1}{2}, 0)\oplus(0, \sfrac{1}{2})\r] =
		(\sfrac{1}{2}, 1)\oplus(\sfrac{1}{2}, 0)\oplus(1, \sfrac{1}{2})\oplus(0, \sfrac{1}{2})
	\end{align*}
	\pause
	To eliminate the Dirac components, we require
	\begin{align*}
		\gamma_\mu \Psi^\mu = 0
	\end{align*}
	which transforms like a Dirac spinor. (Remember: $\gamma_\mu$ does not transform.)
	\pause
	Rarita-Schwinger equation
	\footnote{Solutions in the rest frame are in the appendix}:
	\begin{align*}
		\l(i\slashed{\partial} - m\r)\Psi^\mu = 0
	\end{align*}
\end{frame}
\begin{frame}
	\frametitle{Properties of the Rarita-Schwinger equation}
	Using $\{\gamma^\mu, \gamma^\nu\}_+=2g^{\mu\nu}\symbb{1}_4$:
	\begin{equation*}
		\l .
		\begin{aligned}
			\gamma_\mu \Psi^\mu                       & = 0 \\
			\l(i\gamma^\nu\partial_\nu - m\r)\Psi^\mu & = 0
		\end{aligned}
		\r \rbrace
		\l(-i\gamma^\nu\partial_\nu\gamma_\mu+2i \partial_\nu g_{\ \mu}^\nu - m\gamma_\mu\r)\Psi^\mu
		= 2i \partial_\mu \Psi^\mu
		= 0
		\Rightarrow
		\quad\partial_\mu \Psi^\mu = 0
	\end{equation*}
	\pause
	These equations each impose 4 conditions on the RS-spinor, giving
	\begin{equation*}
		4\times4-2\times4 = 16-8 = 8 \text{ degrees of freedom }=
		\l \lbrace
		\begin{aligned}
			\text{Particle: } s     & =-\sfrac{3}{2},-\sfrac{1}{2},\sfrac{1}{2},\sfrac{3}{2} \\
			\text{Antiparticle: } s & =-\sfrac{3}{2},-\sfrac{1}{2},\sfrac{1}{2},\sfrac{3}{2}
		\end{aligned}\r .
	\end{equation*}
\end{frame}
\begin{frame}
	\frametitle{Lagrangian}
	\begin{equation*}
		\symcal{L} = \frac{1}{2}\overline\Psi_\nu\l(-\epsilon^{\nu\mu\kappa\lambda}\gamma_5\gamma_\mu\partial_\kappa+\frac{1}{2}m[\gamma^\nu, \gamma^\lambda]\r)\Psi_\lambda
		\text{\footnotemark[1]}
	\end{equation*}
	\footnotetext[1]{Full derivation:\fullcite[334]{weinberg2005quantum}}
	\pause
	Getting back the field equations:
	\begin{align*}
		\frac{\delta \symcal{L}}{\delta \overline\Psi_\nu}=0=
		\l(-\epsilon^{\nu\mu\kappa\lambda}\gamma_5\gamma_\mu\partial_\kappa+\frac{1}{2}m[\gamma^\nu, \gamma^\lambda]\r)\Psi_\lambda
		= T^\nu                                                       
	\end{align*}
	\pause
	\begin{align*}
		\overset{\partial_\nu T^\nu}{\Rightarrow}
		[\slashed{\partial},\gamma^\lambda]\Psi_\lambda=0 \qquad
		\overset{\gamma_\nu T^\nu}{\Rightarrow}
		3m\gamma^\lambda \Psi_\lambda = \epsilon^{\nu\mu\kappa\lambda}\gamma_5\gamma_\nu\gamma_\mu\partial_\kappa\Psi_\lambda = i[\slashed \partial, \gamma^\lambda]\Psi_\lambda = 0 \\
		\Rightarrow \partial_{\lambda}\Psi^\lambda = \frac{1}{2}\l\{\slashed \partial, \gamma_\lambda \r\} \Psi^\lambda = \frac{1}{2}[\slashed \partial, \gamma_\lambda]\Psi^\lambda = 0
	\end{align*}
	Using
	\begin{equation*}
		\gamma^\mu\gamma^\nu\gamma^\rho = \eta^{\mu\nu}\gamma^\rho + \eta^{\nu\rho}\gamma^\mu - \eta^{\mu\rho}\gamma^\nu - 
i\epsilon^{\sigma\mu\nu\rho}\gamma_\sigma\gamma^5
	\end{equation*}
\end{frame}
\begin{frame}
	\frametitle{Lagrangian}
	\begin{align*}
	    0 =	\l(-\epsilon^{\nu\mu\kappa\lambda}\gamma_5\gamma_\mu\partial_\lambda+\frac{1}{2}m[\gamma^\nu, \gamma^\lambda]\r)\Psi_\lambda \\
		\gamma_\lambda\Psi^\lambda = 0 \qquad \partial_\lambda\Psi^\lambda = 0\\
	\end{align*}
	\pause 
	\begin{align*}
		-\frac{1}{2}\gamma^\lambda\gamma^\nu \Psi_\lambda 
		= -m\Psi^\nu 
		= - \epsilon^{\mu\nu\kappa\lambda}\gamma_5\gamma_\mu\partial_\kappa \Psi_\lambda
		= -i\l(\gamma^\nu\gamma^\kappa\gamma^\lambda-g^{\nu\kappa}\gamma^\lambda-g^{\kappa\lambda}\gamma^\nu+g^{\nu\lambda}\gamma^\kappa\r)\partial_\kappa\Psi_{\lambda}
		= -i\slashed \partial \Psi^\nu\\
	\end{align*}
	\pause
	\centering
	\alert{
	With this Lagrangian we arrive at the correct field equations:
	\begin{align*}
		\Rightarrow (i\slashed \partial-m)\Psi^\nu = 0
	\end{align*}}
\end{frame}
\begin{frame}
	\frametitle{But wait! Why don't we use $(\sfrac{3}{2}, 0)\oplus(0, \sfrac{3}{2})$?}
	\begin{itemize}
	\item Weinberg did the calculations in 1964\footnote{\fullcite{weinberg1964feynman}} to get an 8-spinor
	\item \enquote{Intricate} calculations\footnotemark[2]
		\item Requires a generalised set of $\gamma$-matrices
	\end{itemize}
	\footnotetext[2]{Detailed discussion of ATS: \fullcite{pure}}
	\pause 
	The antisymmetric tensor-spinor (ATS) is sometimes used
	\begin{equation*}
		\l[\l(1, 0\r)\oplus\l(0,1\r)\r]\otimes\l[\l(\sfrac{1}{2}, 0\r)\oplus\l(0,\sfrac{1}{2}\r)\r]
		=\l[\l(\sfrac{1}{2}, 0\r)\oplus\l(0,\sfrac{1}{2}\r)\r]\oplus\l[\l(\sfrac{3}{2}, 0\r)\oplus\l(0,\sfrac{3}{2}\r)\r]
		\oplus\l[\l(\sfrac{1}{2}, 1\r)\oplus\l(1,\sfrac{1}{2}\r)\r]
	\end{equation*}
	$\Rightarrow\Psi^{\mu\nu}=-\Psi^{\nu\mu}$, and the unphysical components are removed with $\gamma_\mu\Psi^{\mu\nu}=0$.
	\begin{itemize}
		\item Degrees of freedom $=6\times 4-4\times 4=8$
		\item The propagator also has problems (more on that later) in describing $\Delta$ resonances 
		\item Detailed discussion in footnote 2
	\end{itemize}
	\vspace{1em}	
\end{frame}
\section{Laboratory Methods}
This section is the most vague of all the PGRE topics.
I put information down that helped me, but please contact me if you notice a glaring omission or know more about this than I do (very likely).

\subsection{Dimensional Analysis}
Know how to deduce if a solution has the correct units.\\*
Also understand if a solution is reasonable (i.e. make sure the velocity you calculated is less than or equal to the speed of light).

\subsection{Poisson Distribution}
Also called the law of small numbers.\\*
\(\displaystyle p(k)=\frac{\lambda^k}{k!e^{\lambda}}\)\\*
\(\lambda\) is the rate at which the (rare) event occurs\\*
The mean and variance of the distribution are the same.
\begin{itemize}
\item The Poisson distribution describes mutually independent events, occurring at a known and constant rate (\(\lambda\)) per unit (time or space), and observed through a certain window: a unit of time or space
\item The probability of \(k\) occurrences in that unit can be calculated from \(p(k)\)
\item The rate is also the expected or most likely outcome (for whole number \(\lambda\) greater than 1, the outcome corresponding to \(\lambda-1\) is equally likely)
\end{itemize}
(This information was taken from the University of Massachusetts Amherst website on statistics.)

\subsection{Oscilloscopes}
Know how to read and interpret output from an oscilloscope including Lissajous curves.
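A short worked example for the Poisson subsection above (a standard illustration with made-up numbers): if a detector registers background events at a constant rate of \(\lambda=3\) per minute, the probability of seeing exactly \(k=2\) events in a one-minute window is
\[
p(2)=\frac{3^2}{2!\,e^{3}}=\frac{9}{2e^{3}}\approx 0.224,
\]
and both the mean and the variance of the count equal \(\lambda=3\).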
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Written By Michael Brodskiy
% Class: Analytic Geometry & Calculus III (Math-292)
% Professor: V. Cherkassky
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[12pt]{article}
\usepackage{alphalph}
\usepackage[utf8]{inputenc}
\usepackage[russian,english]{babel}
\usepackage{titling}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{amssymb}
\usepackage[super]{nth}
\usepackage{everysel}
\usepackage{ragged2e}
\usepackage{geometry}
\usepackage{fancyhdr}
\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}
\newcommand{\subtitle}[1]{%
  \posttitle{%
    \par\end{center}
    \begin{center}\large#1\end{center}
    \vskip0.5em}%

}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=blue,
citecolor=blue,
}

\urlstyle{same}


\title{Lecture XVIII Notes}
\date{\today}
\author{Michael Brodskiy\\ \small Professor: V. Cherkassky}

% Mathematical Operations:

% Sum: $$\sum_{n=a}^{b} f(x) $$
% Integral: $$\int_{lower}^{upper} f(x) dx$$
% Limit: $$\lim_{x\to\infty} f(x)$$

\begin{document}

\maketitle

\section{Triple Integrals $-$ 15.6}

Given a region $B$ where:

$$B=\{(x,y,z)|a\leq x\leq b, c\leq y\leq d, r\leq z\leq s\}$$

If this region is filled with many smaller boxes, or sub-boxes, the triple Riemann sum looks as follows:

$$\sum_{i=1}^l\sum_{j=1}^m\sum_{k=1}^n f(x_{ijk}^*,y_{ijk}^*,z_{ijk}^*)\,\Delta V$$

To form an integral, the number of sub-boxes in this triple Riemann sum must approach infinity:

$$\iiint_B f(x,y,z)\,dV=\lim_{(l,m,n)\to\infty}\sum_{i=1}^l\sum_{j=1}^m\sum_{k=1}^n f(x_{ijk}^*,y_{ijk}^*,z_{ijk}^*)\,\Delta V$$

Furthermore, Fubini's Theorem, which was applied to double integrals, may be applied to triple integrals: if $f$ is continuous on $B=[a,b]\times[c,d]\times[r,s]$, then

$$\iiint_B f(x,y,z)\,dV=\int_a^b\int_r^s\int_c^df(x,y,z)\,dy\,dz\,dx$$

\flushleft{
\textit{Example:}
Evaluate the following triple integral over $B=\{(x,y,z)|0\leq x\leq1, -1\leq y\leq2,0\leq z\leq3\}$:\\
}
$$\iiint_B xyz^2\,dV$$
$$\int_0^1\int_{-1}^2\int_0^3 xyz^2\,dz\,dy\,dx$$
$$\int_0^1\int_{-1}^2 9xy\,dy\,dx$$
$$\int_0^1\frac{27x}{2}\,dx$$
$$\frac{27x^2}{4}\Big|_0^1=\frac{27}{4}$$

\subsection{Triple Integrals over General Regions}

A Type I general region is defined by $E$, where:

$$E=\{(x,y,z)|(x,y)\in D, u_1(x,y)\leq z\leq u_2(x,y)\}$$

These boundaries result in the following integral:

$$\iiint_E f(x,y,z)\,dV$$

This may be simplified, using Fubini's Theorem, to form the following:

$$\iiint_E f(x,y,z)\,dV= \iint_D \left[\int_{u_1(x,y)}^{u_2(x,y)}f(x,y,z)\,dz\right]\,dA$$

A more complicated region is formed when $D$ and $E$ are both Type I regions, which means $E=\{(x,y,z)|a\leq x\leq b, g_1(x)\leq y\leq g_2(x), u_1(x,y)\leq z\leq u_2(x,y)\}$.

This yields the triple integral:

$$\iiint_Ef(x,y,z)\,dV=\int_a^b\int_{g_1(x)}^{g_2(x)}\int_{u_1(x,y)}^{u_2(x,y)}f(x,y,z)\,dz\,dy\,dx$$

If $D$ is a Type II region, while $E$ is Type I, this results in:
$$\iiint_Ef(x,y,z)\,dV=\int_c^d\int_{h_1(y)}^{h_2(y)}\int_{u_1(x,y)}^{u_2(x,y)}f(x,y,z)\,dz\,dx\,dy$$

Region $E$ is a Type II region when $E=\{(x,y,z)|(y,z)\in D,u_1(y,z)\leq x\leq u_2(y,z)\}$:

$$\iiint_E f(x,y,z)\,dV= \iint_D \left[\int_{u_1(y,z)}^{u_2(y,z)}f(x,y,z)\,dx\right]\,dA$$

Finally, region $E$ is Type III if $E=\{(x,y,z)|(x,z)\in D,u_1(x,z)\leq y\leq u_2(x,z)\}$.

This forms the integral:

$$\iiint_E f(x,y,z)\,dV= \iint_D \left[\int_{u_1(x,z)}^{u_2(x,z)}f(x,y,z)\,dy\right]\,dA$$

Much like with double integrals, a solid's mass, moments, and center of mass may be determined through triple integration. The formulas are as follows.
For mass:
$$m=\iiint_E \rho(x,y,z)\,dV$$
For the moments about the three coordinate planes:
$$ M_{yz} = \iiint_E x\rho(x,y,z)\,dV\,\,\,\,\,M_{xz}=\iiint_E y\rho(x,y,z)\,dV$$
$$M_{xy}=\iiint_E z\rho(x,y,z)\,dV$$
The center of mass is at $(\overline{x},\overline{y},\overline{z})$:
$$\overline{x}=\frac{M_{yz}}{m}\,\,\,\overline{y}=\frac{M_{xz}}{m}\,\,\,\overline{z}=\frac{M_{xy}}{m}$$
Finally, the moment of inertia formulas are:
$$I_x=\iiint_E (y^2+z^2)\rho(x,y,z)\,dV$$
$$I_y=\iiint_E (x^2+z^2)\rho(x,y,z)\,dV$$
$$I_z=\iiint_E (x^2+y^2)\rho(x,y,z)\,dV$$

\subsection{Change of Variables in Triple Integrals}

In three dimensions, a change of variables revolves around the Jacobian. The Jacobian of a transformation where $x=g(u,v)$ and $y=h(u,v)$ looks as follows:

$$\frac{\partial(x,y)}{\partial(u,v)}=\Large{\begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}\\ \end{vmatrix}}=\frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial y}{\partial u}$$

In a double integral, this change of variables results in the integrand being multiplied by the absolute value of the Jacobian:

$$\iint_R f(x,y)\,dA = \iint_S f(x(u,v),y(u,v)) \Big{|}\frac{\partial(x,y)}{\partial(u,v)}\Big{|}\,du\,dv$$
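As a quick check of this formula (a standard example, not part of the original lecture), take the polar transformation $x=r\cos\theta$, $y=r\sin\theta$ with $(u,v)=(r,\theta)$:

$$\frac{\partial(x,y)}{\partial(r,\theta)}=\begin{vmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{vmatrix}=r\cos^2\theta+r\sin^2\theta=r$$

which recovers the familiar polar area element $dA = r\,dr\,d\theta$.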
\end{document}
%!TEX root = main.tex
\section{Results}
\subsection{With and without Olympic Games}
Choosing an infection rate $\beta=2$, a removal rate $\gamma=0.5$ and a transfer probability $p_{transfer} = 0.005$, we can run a simulation with and without the Olympic Games occurring. We start the simulation at the first day of the games in Rio De Janeiro and set the initial infected population in Rio to 1000. In the following we will look at the SIR curves of 6 major cities.

With the Olympics occurring we get the following curves:
\begin{figure}[H]
	\centering
	\includegraphics[width=1.0 \linewidth]{plots/rio-0-18-380000.pdf}
	\caption{10 simulations with Olympic Games. Time in days on the x-axis and number of people on the y-axis. Shown is susceptible (S), infected (I), removed (R) and total population (T).}
	\label{fig:rio-0-18-380000}
\end{figure}

From figure \ref{fig:rio-0-18-380000} it is seen that, except for Rio where the infection started and New York which seems to be infected earliest, the other major cities all seem to reach their infection peaks at approximately the same time.

\begin{figure}[H]
	\centering
	\includegraphics[width=1.0 \linewidth]{plots/no_rio.pdf}
	\caption{10 simulations without Olympic Games. Time in days on the x-axis and number of people on the y-axis. Shown is susceptible (S), infected (I), removed (R) and total population (T).}
	\label{fig:no_rio}
\end{figure}

When there are no Olympic Games, as seen in figure \ref{fig:no_rio}, the infection peaks occur later, especially for Moscow, Beijing and Sydney. This is emphasized in the summary table \ref{table:olympic-summary}.

\begin{table}[H]
	\centering
	\input{tables/result}
	\caption{Results of 10 simulations with and without Olympic Games. The table contains the peak times and amounts for the number of infected individuals. The standard $95\%$-confidence interval is marked with $\pm$ and the confidence interval using control variates is shown in the parenthesis.}
	\label{table:olympic-summary}
\end{table}

As mentioned, in table \ref{table:olympic-summary} one can see that when the Olympic Games occur the peak time is much less spread out and generally happens earlier. This is particularly apparent when comparing with the global peak time, which, in the case without Olympic Games, happens statistically significantly later than in the shown cities. However, in the Olympic Games case, the city peak times are not statistically different from the global peak time (except for New York and Rio).

The fact that Rio doesn't show a significant difference in peak time makes sense, because this is where the infection starts; adding 380000 people (out of 12 million people) doesn't make much of a difference. The New York case is also interesting, as there isn't a significant change there either. This is likely because New York, in North America, is much more closely connected to Rio, in South America, through airlines, compared to cities across the Pacific or Atlantic Ocean.

Using control variates generally appears to improve the variance, up to a factor of two in some cases, which seems reasonable. In some cases the control-variate confidence interval is larger than the crude confidence interval. This is because there are fewer degrees of freedom when estimating the variance, which is a result of the extra parameter estimation of $c$. Thus, if the control variates aren't sufficiently correlated, the variance of the estimate won't improve.
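For reference, a minimal sketch of the control-variates estimator alluded to here (standard textbook form; the notation is ours, not the report's): given simulation outputs $X_i$ and a correlated control $Y_i$ with known mean $\mu_Y$, one averages
\[
Z_i = X_i + c\,(Y_i-\mu_Y), \qquad c^* = -\frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)},
\]
and at the optimal $c^*$ the variance becomes $\operatorname{Var}(\bar{Z})=\operatorname{Var}(\bar{X})(1-\rho_{XY}^2)$, so the gain vanishes when the correlation $\rho_{XY}$ between output and control is small, consistent with the observation above.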
Thus if the control variates aren't sufficiently correlated, the variance of the estimate won't improve.\n\nFinally, table \\ref{table:olympic-summary} also shows that the number of people infected globally is almost twice as large when the Olympic Games are included as when they are not.\n\n\\subsection{Visualization of results}\nWhen studying a virus outbreak it is interesting to see exactly how the virus spreads. We have thus created an animation on a world map where each region is represented as a dot. Each dot is placed at the region center (airport), the dot size is scaled according to the population size, and the color represents the fraction infected.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.0 \\linewidth]{plots/gifs/frames/rio-46}\n\t\\caption{Timestep 46, with Olympic Games. The full animation can be viewed at\n\t\t\\url{https://andreasmadsen.github.io/course-02443-stochastic-virus-outbreaks/}}\n\t\\label{fig:rio-46}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=1.0 \\linewidth]{plots/gifs/frames/noRio-67}\n\t\\caption{Timestep 67, without Olympic Games. The full animation can be viewed at\n\t\t\\url{https://andreasmadsen.github.io/course-02443-stochastic-virus-outbreaks/}}\n\t\\label{fig:noRio-67}\n\\end{figure}\n\nAnimations based on simulations with the Olympic Games show high activation across the globe at the same time. In simulations without the games it is much more apparent how the virus first spreads to regions in the following order:\n\\begin{enumerate}\n\t\\item South and Latin America\n\t\\item East coast of the USA and then Europe\n\t\\item Africa\n\t\\item Asia\n\t\\item Australia\n\\end{enumerate}\n\nThis ordering is sensible because some regions are more connected than others. E.g., when we start the virus outbreak in Rio de Janeiro, but don't hold the Olympics, the virus naturally first spreads to nearby regions and to regions with which a lot of plane traffic is shared.\n", "meta": {"hexsha": "3871a7129a56a1f7987d2951146a8737ecc126e3", "size": 5275, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/results.tex", "max_stars_repo_name": "FrederikWR/course-02443-stochastic-virus-outbreak", "max_stars_repo_head_hexsha": "4f1d7f1fa4aa197b31ed86c4daf420d5a637974e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/results.tex", "max_issues_repo_name": "FrederikWR/course-02443-stochastic-virus-outbreak", "max_issues_repo_head_hexsha": "4f1d7f1fa4aa197b31ed86c4daf420d5a637974e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/results.tex", "max_forks_repo_name": "FrederikWR/course-02443-stochastic-virus-outbreak", "max_forks_repo_head_hexsha": "4f1d7f1fa4aa197b31ed86c4daf420d5a637974e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.4492753623, "max_line_length": 486, "alphanum_fraction": 0.7865402844, "num_tokens": 1244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.562327292774728}}
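The control-variate estimator discussed above has the generic form \\(\\bar{X} - c(\\bar{Y} - \\mu_Y)\\); a small self-contained sketch (illustrative numbers only, not the simulation data from the report) shows both the variance reduction and why the mean is unchanged.
\\begin{lstlisting}[language=Python]
# Sketch: crude vs. control-variate estimate of E[X].
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(0.0, 1.0, 10_000)             # control with known mean mu_Y = 0
X = 3.0 + Y + rng.normal(0.0, 1.0, 10_000)   # quantity of interest, correlated with Y
mu_Y = 0.0

c = np.cov(X, Y)[0, 1] / Y.var(ddof=1)       # estimated optimal coefficient
Z = X - c * (Y - mu_Y)                       # control-variate adjusted samples

print(X.mean(), X.std(ddof=1))               # crude estimate and spread
print(Z.mean(), Z.std(ddof=1))               # same mean, visibly smaller spread
\\end{lstlisting}
When the correlation between \\(X\\) and the control \\(Y\\) is weak, the estimated \\(c\\) buys little, which matches the observation that the control-variate interval can occasionally come out wider.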
{"text": "%!TEX root = da2020-06.tex\n\n\\Chapter{6}{Randomized Algorithms}\n\n\\noindent\nAll models of computing that we have studied so far were based on the formalism that we introduced in Chapter~\\chapterref{3}: a distributed algorithm $A$ is a state machine whose state transitions are determined by functions $\\Init_{A,d}$, $\\Send_{A,d}$, and $\\Receive_{A,d}$. Everything has been fully deterministic: for a given network and a given input, the algorithm will always produce the same output. In this chapter, we will extend the model so that we can study randomized distributed algorithms.\n\n\n\\section{Definitions}\\label{sec:randomized}\n\nLet us first define a \\emph{randomized distributed algorithms in the $\\PN$ model} or, in brief, a \\emph{randomized $\\PN$ algorithm}. We extend the definitions of Section~\\longref{3.3}{sec:distr-alg} so that the state transitions are chosen randomly according to some probability distribution that may depend on the current state and incoming messages.\n\nMore formally, the values of the functions $\\Init$ and $\\Receive$ are discrete probability distributions over $\\States_A$. The initial state of a node $u$ is a random variable $x_0(u)$ chosen from a discrete probability distribution \\[\\Init_{A,d}(f(u))\\] that may depend on the local input $f(u)$. The state at time $t$ is a random variable $x_t(u)$ chosen from a discrete probability distribution \\[\\Receive_{A,d}\\bigl(x_{t-1}(u), m_t(u) \\bigr)\\] that may depend on the previous state $x_{t-1}(u)$ and on the incoming messages $m_t(u)$. All other parts of the model are as before. In particular, function $\\Send_{A,d}$ is deterministic.\n\nAbove we have defined randomized $\\PN$ algorithms. We can now extend the definitions in a natural manner to define randomized algorithms in the $\\LOCAL$ model (add unique identifiers) and randomized algorithms in the $\\CONGEST$ model (add unique identifiers and limit the size of the messages).\n\n\n\\section{Probabilistic Analysis}\n\nIn randomized algorithms, performance guarantees are typically probabilistic. For example, we may claim that algorithm $A$ \\emph{stops in time $T$ with probability $p$}.\n\nNote that all probabilities here are over the random choices in the state transitions. We do not assume that our network or the local inputs are chosen randomly; we still require that the algorithm performs well with worst-case inputs. For example, if we claim that algorithm $A$ solves problem $\\Pi$ on graph family $\\calF$ in time $T(n)$ with probability $p$, then we can take \\emph{any} graph $G \\in \\calF$ and \\emph{any} port-numbered network $N$ with $G$ as its underlying graph, and we guarantee that with probability at least $p$ the execution of $A$ in $N$ stops in time $T(n)$ and produces a correct output $g \\in \\Pi(G)$; as usual, $n$ is the number of nodes in the network.\n\nWe may occasionally want to emphasize the distinction between ``Monte Carlo'' and ``Las Vegas'' type algorithms:\n\\begin{itemize}\n    \\item Monte Carlo: Algorithm $A$ always stops in time $T(n)$; the output is a correct solution to problem $\\Pi$ with probability $p$.\n    \\item Las Vegas: Algorithm $A$ stops in time $T(n)$ with probability $p$; when it stops, the output is always a correct solution to problem $\\Pi$.\n\\end{itemize}\nHowever, Monte Carlo algorithms are not as useful in the field of distributed computing as they are in the context of classical centralized algorithms. 
In centralized algorithms, we can usually take a Monte Carlo algorithm and just run it repeatedly until it produces a feasible solution; hence we can turn a Monte Carlo algorithm into a Las Vegas algorithm. This is not necessarily the case with distributed algorithms: verifying the output of an algorithm may require global information on the entire output, and gathering such information may take a long time. In this chapter, we will mainly focus on Las Vegas algorithms, i.e., algorithms that are always correct but may occasionally be slow, but in the exercises we will also encounter Monte Carlo algorithms.\n\n\n\\section{With High Probability}\n\nWe will use the word \\emph{failure} to refer to the event that the algorithm did not meet its guarantees\\mydash in the case of a Las Vegas algorithm, it did not stop in time $T(n)$, and in the case of Monte Carlo algorithms, it did not produce a correct output. The word \\emph{success} refers to the opposite case.\n\nUsually we want to show that the probability of a failure is negligible. In computer science, we are usually interested in asymptotic analysis, and hence in the context of randomized algorithms, it is convenient if we can show that the success probability approaches $1$ when $n$ increases. Even better, we would like to let the user of the algorithm choose how quickly the success probability approaches~$1$.\n\nThis idea is captured in the phrase ``\\emph{with high probability}'' (commonly abbreviated \\emph{w.h.p.}). Please note that this phrase is not a vague subjective statement but it carries a precise mathematical meaning: it refers to the success probability of $1 - 1/n^c$, where we can choose any constant $c > 0$. (Unfortunately, different sources use slightly different definitions; for example, it may also refer to the success probability of $1 - O(1)/n^c$ for any constant $c > 0$.)\n\nIn our context, we say that algorithm $A$ solves problem $\\Pi$ on graph family $\\calF$ in time $O(T(n))$ \\emph{with high probability} if the following holds:\n\\begin{itemize}\n    \\item I can choose any constant $c > 0$. Algorithm $A$ may depend on this constant.\n    \\item Then if I run $A$ in any network $N$ that has its underlying graph in $\\calF$, the algorithm will stop in time $O(T(n))$ with probability at least $1 - 1/n^c$, and the output is a feasible solution to problem $\\Pi$.\n\\end{itemize}\nNote that the $O(\\cdot)$ notation in the running time is used to hide the dependence on $c$. This is a crucial point. For example, it would not make much sense to say that the running time is at most $\\log n$ with probability $1 - 1/n^c$ for any constant $c > 0$. However, it is perfectly reasonable to say that the running time is, e.g., at most $c \\log n$ or $2^c \\log n$ or simply $O(\\log n)$ with probability $1 - 1/n^c$ for any constant $c > 0$.\n\n\n\\section{Randomized Coloring in Bounded-Degree Graphs}\\label{sec:bdrand}\n\nIn Chapter~\\chapterref{4} we presented a \\emph{deterministic} algorithm that finds a ${(\\Delta+1)}$-coloring in a graph of maximum degree $\\Delta$. In this section, we will design a \\emph{randomized} algorithm that solves the same problem. 
The running times are different:\n\\begin{itemize}[noitemsep]\n    \\item the deterministic algorithm runs in $O(\\Delta + \\log^* n)$ rounds.\n    \\item the randomized algorithm runs in $O(\\log n)$ rounds with high probability.\n\\end{itemize}\nHence for large values of $\\Delta$, the randomized algorithm can be much faster.\n\n\\subsection{Algorithm Idea}\n\nA running time of $O(\\log n)$ is very typical for a randomized distributed algorithm. Often randomized algorithms follow the strategy that in each step each node picks a value randomly from some probability distribution. If the value conflicts with the values of the neighbors, the node will try again next time; otherwise the node outputs the current value and stops. If we can prove that each node stops in each round with a constant probability, we can prove that after $\\Theta(\\log n)$ rounds all nodes have stopped w.h.p. This is precisely what we saw in the analysis of the randomized path-coloring algorithm in Section~\\longref{1.5}{sec:algo-p3crand}.\n\nHowever, adapting the same strategy to graphs of maximum degree $\\Delta$ requires some thought. If each node just repeatedly tries to pick a random color from $\\{1,2,\\dotsc,\\Delta+1\\}$, the success probability may be fairly low for large values of $\\Delta$.\n\nTherefore we will adopt a strategy in which nodes are slightly less aggressive. Nodes will first randomly choose whether they are \\emph{active} or \\emph{passive} in this round; each node is passive with probability $1/2$. Only active nodes will try to pick a random color among those colors that are not yet used by their neighbors.\n\nInformally, the reason why this works well is the following. Assume that we have a node $v$ with $d$ neighbors that have not yet stopped. Then there are at least $d+1$ colors among which $v$ can choose whenever it is active. If all of the $d$ neighbors were also active and if they happened to pick distinct colors, we would have only a \\[\\frac{1}{d+1}\\] chance of picking a color that is not used by any of the neighbors. However, in our algorithm on average only $d/2$ neighbors are active. If we have at most $d/2$ active neighbors, we will succeed in picking a free color with probability at least \\[\\frac{d+1 - d/2}{d+1} > \\frac{1}{2},\\] regardless of what the active neighbors do.\n\n\n\\subsection{Algorithm}\n\nLet us now formalize the algorithm. For each node $u$, let\n\\[\n    C(u) = \\{1,2,\\dotsc,\\deg_G(u)+1\\}\n\\]\nbe the \\emph{color palette} of the node; node $u$ will output one of the colors of $C(u)$. \n\nIn the algorithm, node $u$ maintains the following variables:\n\\begin{itemize}[noitemsep]\n    \\item State $s(u) \\in \\{0,1\\}$\n    \\item Color $c(u) \\in \\{\\bot\\} \\cup C(u)$.\n\\end{itemize}\nInitially, $s(u) \\gets 1$ and $c(u) \\gets \\bot$. When $s(u) = 1$ and $c(u) \\ne \\bot$, node $u$ stops and outputs color $c(u)$.\n\nIn each round, node $u$ always sends $c(u)$ to each port. 
The incoming messages are processed as follows, depending on the current state of the node:\n\\begin{itemize}\n    \\item $s(u) = 1$ and $c(u) \\ne \\bot$:\n    \\begin{itemize}\n        \\item This is a stopping state; ignore incoming messages.\n    \\end{itemize}\n    \\item $s(u) = 1$ and $c(u) = \\bot$:\n    \\begin{itemize}\n        \\item Let $M(u)$ be the set of messages received.\n        \\item Let $F(u) = C(u) \\setminus M(u)$ be the set of \\emph{free colors}.\n        \\item With probability $1/2$, set $c(u) \\gets \\bot$; otherwise choose a $c(u) \\in F(u)$ uniformly at random.\n        \\item Set $s(u) \\gets 0$.\n    \\end{itemize}\n    \\item $s(u) = 0$:\n    \\begin{itemize}\n        \\item Let $M(u)$ be the set of messages received.\n        \\item If $c(u) \\in M(u)$, set $c(u) \\gets \\bot$.\n        \\item Set $s(u) \\gets 1$.\n    \\end{itemize}\n\\end{itemize}\n\nInformally, the algorithm proceeds as follows. For each node $u$, its state $s(u)$ alternates between $1$ and $0$:\n\\begin{itemize}\n    \\item When $s(u) = 1$, the node either decides to be \\emph{passive} and sets $c(u) = \\bot$, or it decides to be \\emph{active} and picks a random color $c(u) \\in F(u)$. Here $F(u)$ is the set of colors that are not yet used by any of the neighbors that are stopped.\n    \\item When $s(u) = 0$, the node \\emph{verifies} its choice. If the current color $c(u)$ conflicts with one of the neighbors, we go back to the initial state $s(u) \\gets 1$ and $c(u) \\gets \\bot$. However, if we were lucky and managed to pick a color that does not conflict with any of our neighbors, we keep the current value of $c(u)$ and switch to the stopping state.\n\\end{itemize}\n\n\n\\subsection{Analysis}\n\nIt is easy to see that if the algorithm stops, then the output is a proper ${(\\Delta+1)}$-coloring of the underlying graph. Let us now analyze how long it takes for the nodes to stop.\n\nIn the analysis, we will write $s_t(u)$ and $c_t(u)$ for values of variables $s(u)$ and $c(u)$ after round $t = 0,1,\\dotsc$, and $M_t(u)$ and $F_t(u)$ for the values of $M(u)$ and $F(u)$ during round $t=1,2,\\dotsc$. We also write\n\\[\n    K_t(u) = \\bigl\\{ v \\in V : \\{u,v\\} \\in E,\\ s_{t-1}(v) = 1, c_{t-1}(v) = \\bot \\bigr\\}\n\\]\nfor the set of \\emph{competitors} of node $u$ during round $t = 1,3,5,\\dotsc$; these are the neighbors of $u$ that have not yet stopped.\n\nFirst, let us prove that with probability at least $1/4$, a running node succeeds in picking a color that does not conflict with any of its neighbors.\n\\begin{lemma}\\label{lem:bdrand-onestep}\n    Fix a node $u \\in V$ and time $t = 1,3,5,\\dotsc$. Assume that $s_{t-1}(u) = 1$ and $c_{t-1}(u) = \\bot$, i.e., $u$ has not stopped before round $t$. Then with probability at least $1/4$, we have $s_{t+1}(u) = 1$ and $c_{t+1}(u) \\ne \\bot$, i.e., $u$ will stop after round $t+1$.\n\\end{lemma}\n\\begin{proof}\n    Let $f = |F_t(u)|$ be the number of free colors during round $t$, and let $k = |K_t(u)|$ be the number of competitors during round $t$. Note that $f \\ge k + 1$, as the size of the palette is one larger than the number of neighbors.\n\n    Let us first study the case that $u$ is active. 
As we have got $f$ free colors, for any given color $x \\in \\NN$ we have\n    \\[\n        \\Pr\\bigl[ c_t(u) = x \\bigm| c_t(u) \\ne \\bot \\bigr] \\le 1/f.\n    \\]\n    In particular, this holds for any color $x = c_t(v)$ chosen by any active competitor $v \\in K_t(u)$:\n    \\[\n        \\Pr\\bigl[ c_t(u) = c_t(v) \\bigm| c_t(u) \\ne \\bot,\\  c_t(v) \\ne \\bot \\bigr] \\le 1/f.\n    \\]\n    That is, we conflict with an active competitor with probability at most $1/f$. Naturally, we cannot conflict with a passive competitor:\n    \\[\n        \\Pr\\bigl[ c_t(u) = c_t(v) \\bigm| c_t(u) \\ne \\bot,\\  c_t(v) = \\bot \\bigr] = 0.\n    \\]\n    As a competitor is active with probability\n    \\[\n        \\Pr\\bigl[ c_t(v) \\ne \\bot \\bigr] = 1/2,\n    \\]\n    and the random variables $c_t(u)$ and $c_t(v)$ are independent, the probability that we conflict with a given competitor $v \\in K_t(u)$ is\n    \\[\n        \\Pr\\bigl[ c_t(u) = c_t(v) \\bigm| c_t(u) \\ne \\bot \\bigr] \\le \\frac{1}{2f}.\n    \\]\n    By the union bound, the probability that we conflict with some competitor is\n    \\[\n        \\Pr\\bigl[ c_t(u) = c_t(v) \\text{ for some $v \\in K_t(u)$} \\bigm| c_t(u) \\ne \\bot \\bigr] \\le \\frac{k}{2f},\n    \\]\n    which is less than $1/2$ for all $k \\ge 0$ and all $f \\ge k+1$. Put otherwise, node $u$ will avoid conflicts with probability\n    \\[\n        \\Pr\\bigl[ c_t(u) \\ne c_t(v) \\text{ for all $v \\in K_t(u)$} \\bigm| c_t(u) \\ne \\bot \\bigr] > \\frac{1}{2}.\n    \\]\n\n    So far we have studied the conditional probabilities assuming that $u$ is active. This happens with probability\n    \\[\n        \\Pr\\bigl[ c_t(u) \\ne \\bot \\bigr] = 1/2.\n    \\]\n    Therefore node $u$ will stop after round $t+1$ with probability\n    \\[\n    \\begin{split}\n        &\\Pr\\bigl[ c_{t+1}(u) \\ne \\bot] = \\\\\n        &\\Pr\\bigl[ c_t(u) \\ne \\bot \\text { and } c_t(u) \\ne c_t(v) \\text{ for all $v \\in K_t(u)$} ] > 1/4.\n        \\qedhere\n    \\end{split}\n    \\]\n\\end{proof}\n\nNow we can continue with the same argument as what we used in Section~\\longref{1.5}{sec:algo-p3crand} to analyze the running time. Fix a constant $c > 0$. Define\n\\[\n    T(n) = 2(c+1) \\log_{4/3} n.\n\\]\nWe will prove that the algorithm stops in $T(n)$ rounds. First, let us consider an individual node. Note the exponent $c+1$ instead of $c$ in the statement of the lemma; this will be helpful later.\n\n\\begin{lemma}\\label{lem:bdrand-onenode}\n    Fix a node $u \\in V$. The probability that $u$ has not stopped after $T(n)$ rounds is at most $1/n^{c+1}$.\n\\end{lemma}\n\\begin{proof}\n    By Lemma~\\ref{lem:bdrand-onestep}, if node $u$ has not stopped after round $2i$, it will stop after round $2i+2$ with probability at least $1/4$. Hence the probability that it has not stopped after $T(n)$ rounds is at most\n    \\[\n        (3/4)^{T(n)/2} = \\frac{1}{(4/3)^{(c+1) \\log_{4/3} n}} = \\frac{1}{n^{c+1}}. \\qedhere\n    \\]\n\\end{proof}\n\nNow we are ready to analyze the time until all nodes stop.\n\n\\begin{theorem}\\label{thm:bdrand}\n    The probability that all nodes have stopped after $T(n)$ rounds is at least $1 - 1/n^c$.\n\\end{theorem}\n\\begin{proof}\n    Follows from Lemma~\\ref{lem:bdrand-onenode} by the union bound.\n\\end{proof}\n\nNote that $T(n) = O(\\log n)$ for any constant $c$. 
Hence we conclude that the algorithm stops in $O(\\log n)$ rounds with high probability, and when it stops, it outputs a vertex coloring with $\\Delta+1$ colors.\n\n\n\\section{Quiz}\n\nConsider a cycle with $10$ nodes, and label the nodes with a random permutation of the numbers $1, 2, \\dotsc, 10$ (uniformly at random). A node is a \\emph{local maximum} if its label is larger than the labels of its two neighbors. Let $X$ be the number of local maxima. What is the expected value of $X$?\n\n\\section{Exercises}\n\n\\begin{ex}[larger palette]\\label{ex:bdrand2delta}\n    Assume that we have a graph without any isolated nodes. We will design a graph-coloring algorithm $A$ that is a bit easier to understand and analyze than the algorithm of Section~\\ref{sec:bdrand}. In algorithm $A$, each node $u$ proceeds as follows until it stops:\n    \\begin{itemize}\n        \\item Node $u$ picks a color $c(u)$ from $\\{1,2,\\dotsc,2d\\}$ uniformly at random; here $d$ is the degree of node~$u$.\n        \\item Node $u$ compares its value $c(u)$ with the values of all neighbors. If $c(u)$ is different from the values of its neighbors, $u$ outputs $c(u)$ and stops.\n    \\end{itemize}\n    Present this algorithm in a formally precise manner, using the state-machine formalism. Analyze the algorithm, and prove that it finds a $2\\Delta$-coloring in time $O(\\log n)$ with high probability.\n\\end{ex}\n\n\\begin{ex}[unique identifiers]\\label{ex:randomness-to-unique-ids}\n    Design a randomized $\\PN$ algorithm $A$ that solves the following problem in $O(1)$ rounds:\n    \\begin{itemize}[noitemsep]\n        \\item As input, all nodes get value $|V|$.\n        \\item The algorithm outputs a labeling $f\\colon V \\to \\{1,2,\\dotsc,\\chi\\}$ for some $\\chi = |V|^{O(1)}$.\n        \\item With high probability, $f(u) \\ne f(v)$ for all nodes $u \\ne v$.\n    \\end{itemize}\n    Analyze your algorithm and prove that it indeed solves the problem correctly.\n\n    In essence, algorithm $A$ demonstrates that we can use randomness to construct unique identifiers, assuming that we have some information on the size of the network. Hence we can take any algorithm $B$ designed for the $\\LOCAL$ model, and combine it with algorithm $A$ to obtain a $\\PN$ algorithm $B'$ that solves the same problem as $B$ (with high probability).\n    \\hint{Pick the labels randomly from a sufficiently large set; this takes $0$ communication rounds.}\n\\end{ex}\n\n\\begin{ex}[large independent sets]\n    Design a randomized $\\PN$ algorithm $A$ with the following guarantee: in any graph $G = (V,E)$ of maximum degree $\\Delta$, algorithm $A$ outputs an independent set $I$ such that the \\emph{expected} size of $I$ is $|V|/O(\\Delta)$. The running time of the algorithm should be $O(1)$. You can assume that all nodes know $\\Delta$.\n    \\hint{Each node $u$ picks a random number $f(u)$. Nodes that are local maxima with respect to the labeling $f$ will join $I$.}\n\\end{ex}\n\n\\begin{ex}[max cut problem]\n    Let $G = (V,E)$ be a graph. A \\emph{cut} is a function $f\\colon V \\to \\{0,1\\}$. An edge $\\{u,v\\} \\in E$ is a \\emph{cut edge} in $f$ if $f(u) \\ne f(v)$. 
The \\emph{size} of cut $f$ is the number of cut edges, and a \\emph{maximum cut} is a cut of the largest possible size.\n    \\begin{subex}\n        \\item Prove: If $G = (V,E)$ is a bipartite graph, then a maximum cut has $|E|$ edges.\n        \\item Prove: If $G = (V,E)$ has a cut with $|E|$ edges, then $G$ is bipartite.\n        \\item Prove: For any $\\alpha > 1/2$, there exists a graph $G = (V,E)$ in which the maximum cut has fewer than $\\alpha |E|$ edges.\n    \\end{subex}\n    \\hint{For the last part, consider a complete graph with a sufficiently large number of nodes.}\n\\end{ex}\n\n\\begin{ex}[max cut algorithm]\n    Design a randomized $\\PN$ algorithm $A$ with the following guarantee: in any graph $G = (V,E)$, algorithm $A$ outputs a cut $f$ such that the \\emph{expected} size of cut $f$ is at least $|E|/2$. The running time of the algorithm should be $O(1)$.\n\n    Note that the analysis of algorithm $A$ also implies that for any graph there \\emph{exists} a cut with at least $|E|/2$ edges.\n\n    \\hint{Each node chooses an output $0$ or $1$ uniformly at random and stops; this takes $0$ communication rounds. To analyze the algorithm, prove that each edge is a cut edge with probability $1/2$.}\n\\end{ex}\n\n\\begin{ex}[maximal independent sets]\n    Design a randomized $\\PN$ algorithm that finds a maximal independent set in time $O(\\Delta + \\log n)$ with high probability.\n\n    \\hint{Use the randomized coloring algorithm.}\n\\end{ex}\n\n\\begin{exs}[maximal independent sets quickly]\n    Design a randomized distributed algorithm that finds a maximal independent set in time $O(\\log n)$ with high probability.\n\n    \\hint{Look up ``Luby's algorithm''.}\n\\end{exs}\n\n\n\n\\section{Bibliographic Notes}\n\nThe algorithm of Section~\\ref{sec:bdrand} and the algorithm of Exercise~\\ref{ex:bdrand2delta} are from Barenboim and Elkin's book \\cite[Section 10.1]{barenboim13distributed}.\n", "meta": {"hexsha": "ca9416a99abdced137ba132c56dc443f854098c0", "size": 20591, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/ch06.tex", "max_stars_repo_name": "suomela/da2020", "max_stars_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2020-12-11T00:47:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T15:46:43.000Z", "max_issues_repo_path": "book/ch06.tex", "max_issues_repo_name": "suomela/da2020", "max_issues_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-17T18:31:27.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-17T18:42:16.000Z", "max_forks_repo_path": "book/ch06.tex", "max_forks_repo_name": "suomela/da2020", "max_forks_repo_head_hexsha": "874238b4e1d395769fc89d0d3a9453366056ad1d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-06-22T03:53:31.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-11T12:33:40.000Z", "avg_line_length": 77.1198501873, "max_line_length": 765, "alphanum_fraction": 0.7013743869, "num_tokens": 5895, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059462938815, "lm_q2_score": 0.7090191337850933, "lm_q1q2_score": 0.5623272910410946}}
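The active/passive coloring rounds of the randomized \\((\\Delta+1)\\)-coloring algorithm above are straightforward to simulate centrally; the following is a minimal sketch (the function name and the 5-cycle test graph are illustrative, not from the book): passive with probability \\(1/2\\), otherwise pick a uniformly random free color, then keep it only if no competitor picked the same one.
\\begin{lstlisting}[language=Python]
# Sketch: centralized simulation of the active/passive coloring rounds.
import random

def randomized_coloring(adj, rounds=200, seed=0):
    random.seed(seed)
    color = {u: None for u in adj}   # None plays the role of "bot"
    stopped = set()
    for _ in range(rounds):
        tentative = {}
        for u in adj:
            if u in stopped:
                continue
            # palette {0, ..., deg} minus colors held by stopped neighbours
            used = {color[v] for v in adj[u] if v in stopped}
            free = [c for c in range(len(adj[u]) + 1) if c not in used]
            # passive with probability 1/2, otherwise pick a random free color
            tentative[u] = random.choice(free) if random.random() < 0.5 else None
        for u, c in tentative.items():
            # verification step: keep the color only if no competitor chose it
            if c is not None and all(tentative.get(v) != c for v in adj[u]):
                color[u] = c
                stopped.add(u)
        if len(stopped) == len(adj):
            break
    return color

adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # a 5-cycle
print(randomized_coloring(adj))
\\end{lstlisting}
The two passes over \\texttt{tentative} play the roles of the states \\(s(u)=1\\) and \\(s(u)=0\\) in the state machine above.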
{"text": "\\documentclass[]{extarticle}\n\\usepackage[utf8]{inputenc}\n\\usepackage{bm}\n%\\usepackage[margin=1in]{geometry}\n\\usepackage[a4paper]{geometry}\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\theoremstyle{Simple}\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{cor}{Corollary}[thm]\n\n\\theoremstyle{definition}\n\\newtheorem{defn}{Definition}[section]\n\n\\theoremstyle{remark}\n\\newtheorem*{rem}{Remark}\n\\newtheorem*{note}{Note}\n\n\\theoremstyle{example}\n\\newtheorem{eg}{Example}[section]\n\n\\title{Linear Algebra - MTH100\\\\ Lecture Notes}\n\\author{Jai Luthra}\n\\date{November, 2015}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Eigenvalues and Eigenvectors}\n\n\\begin{defn}\n\nAn \\textbf{eigenvector} of an $n \\times n$ matrix is $A$ is a \\underline{non-zero} vector $\\bm{x}$ such that $A\\bm{x} = \\lambda\\bm{x}$; such a vector $\\bm{x}$ is called an \\textbf{eigenvector corresoponding} to $\\lambda$.\nEigenvalues are sometimes also called \\textit{characteristic values} or \\textit{latent roots}. Eigenvectors are sometimes also called \\textit{characteristic vectors}.\n\n\\end{defn}\n\n\\begin{rem}\nThe zero vector is \\textbf{not considered as an eigenvector} since $A\\bm{0} = \\lambda \\bm{0}$ for all matrices $A$ and all scalars $\\lambda$\n\\end{rem}\n\n\\begin{rem}\nHowever, $0$ is allowed to be an eigenvalue for a matrix $A$, in that case the equation $A\\bm{x} = 0\\bm{x}$ has a non-trivial solution.\nBut $A\\bm{x} = \\bm{0}$ has a non-trivial solution $\\bm{\\iff}$ A is not invertible. \\textit{Therefore, an $n \\times n$ matrix $A$ is invertible $\\iff$ $0$ is not an eigenvalue of $A$}\n\\end{rem}\n\n\\begin{rem}\nAn eigenvector is not unique, since all scalar\nmultiples of an eigenvector are also eigenvectors.\nActually, the set of all eigenvectors corresponding to a\nparticular eigenvalue together with the zero vector forms a\nsubspace of $V = F^n$ for some n, More formally:\n\nPut $X$ = \\{$ \\bm{v} \\in V \\mid \\bm{v} $ is an eigenvector for $\\lambda\\} \\cup \\{\\bm{0}$\\} = \\{$\\bm{v} \\in V \\mid A \\bm{v} = \\lambda \\bm{v}$\\}\n\nThen $X$ is a subspace of $V$, called the \\textbf{eigenspace} of $A$\ncorresponding to $\\lambda$. This can be proved using the subspace test, but follows easily from the fact that the eigenspace corresponding to $\\lambda$ is nothing but the null space of the matrix $(A \\mathbin{-} \\lambda I)$.\n\\end{rem}\n\n\\begin{prop} \\label{prop:50}\nIf $\\bm{v_{1}, v_{2}, ... v_{p}}$ are eigenvectors corresponding to \\underline{distinct} eigenvalues ${\\lambda_{1}, \\lambda_{2}, ... \\lambda_{p}}$ of the matrix $A$, then the set $\\{\\bm{v_{1}, v_{2}, ... v_{p}}\\}$ is linearly independent.\n\\end{prop}\n\n\\begin{cor}\nAn $n \\times n$ matrix A can have at most $n$ distinct eigenvalues.\n\\end{cor}\n\n\\begin{proof}\nSuppose, for the sake of contradiction, that $\\bm{v_{1}, v_{2}, ... v_{p}}$ are linearly dependent. Let $m$ be the smallest number such that $\\bm{v_{1}, v_{2}, ... v_{m}}$ are lin. ind. \\underline{and} $\\bm{v_{m+1}}$ is a linear combination of preceding vectors. Then:\n\n\\begin{equation} \\label{eq:1}\nc_{1}\\bm{v_{1}} + c_{2}\\bm{v_{2}} + ... + c_{m}\\bm{v_{m}} = \\bm{v_{m+1}}\n\\end{equation}\n\nLeft multiplying by $A$, \n\n$$\nc_{1}A\\bm{v_{1}} + c_{2}A\\bm{v_{2}} + ... 
+ c_{m}A\\bm{v_{m}} = A\\bm{v_{m+1}}\n$$\n\nSince the $\\bm{v_{i}}$ are eigenvectors: \n$$\nc_{1}\\lambda_{1}\\bm{v_{1}} + c_{2}\\lambda_{2}\\bm{v_{2}} + ... + c_{m}\\lambda_{m}\\bm{v_{m}} = \\lambda_{m+1}\\bm{v_{m+1}}\n$$\n\nMultiplying \\eqref{eq:1} by $\\lambda_{m+1}$ and subtracting we get:\n\n\\begin{equation} \\label{eq:2}\nc_{1}(\\lambda_{1} - \\lambda_{m+1})\\bm{v_{1}} + ... + c_{m}(\\lambda_{m} - \\lambda_{m+1})\\bm{v_{m}} = \\bm{0}\n\\end{equation}\n\nHowever, $\\bm{v_{1}, v_{2}, ... v_{m}}$ are lin. indep. so all of the coefficients in \\eqref{eq:2} have to be zero: $c_{1}(\\lambda_{1} - \\lambda_{m+1}) = 0 \\implies c_{1} = 0$ since the $\\lambda$'s are given to be distinct. Similarly, $c_{2} = c_{3} = ... = c_{m} = 0$.\n\nBut then from \\eqref{eq:1} we get that $\\bm{v_{m+1}} = \\bm{0}$.\n\nHowever, this is not possible, since all the $\\bm{v}$'s are eigenvectors.\nSince there is a contradiction, the original hypothesis must be wrong.\nTherefore, $\\bm{v_{1}, v_{2}, ... v_{p}}$ are lin. independent.\n\n\\end{proof}\n\n\\begin{rem}\nIt is easy to verify whether a particular vector is an eigenvector of a given matrix $A$ or not. Similarly, given some number, we can verify whether it is an eigenvalue or not.\nHowever, in order to systematically find eigenvalues, we need to use the following result.\n\\end{rem}\n\n\\begin{prop}\nA scalar $\\lambda$ is an eigenvalue of an $n \\times n$ matrix $A$ $\\iff$ $\\lambda$ satisfies the \\textbf{characteristic equation} $det(A - \\lambda I) = 0$.\n\\end{prop}\n\n\\begin{note}\n$det(A - \\lambda I)$ is a polynomial of degree $n$ called the \\textbf{characteristic polynomial} of $A$. It has at most $n$ roots, counting multiplicities. Hence an $n \\times n$ matrix can have at most $n$ eigenvalues (counting multiplicities). \\textbf{\\textit{It is possible for a matrix with real entries to have no real eigenvalues.}}\n\\end{note}\n\n\\begin{note}\nIf complex roots are allowed, an $n \\times n$ matrix has exactly $n$ eigenvalues (counting multiplicities). Therefore, we must clearly specify which field is being considered when we talk about the eigenvalues of a matrix. For the time being, however, we will only allow real eigenvalues.\n\\end{note}\n\n\\subsection{Eigenvalues of similar matrices}\n\nRecall that an $n \\times n$ matrix $B$ is said to be \\textbf{similar} to an $n \\times n$ matrix $A$ if $\\exists$ an invertible matrix $P$ such that:\n$B = PAP^{-1}$ (or $A = P^{-1}BP$).\nSimilarity of matrices is an equivalence relation on the set of $n \\times n$ matrices.\n\n\\begin{rem}\nUsing the multiplicative property of\ndeterminants, it is easy to see that similar matrices have\nthe same determinant. Using essentially the same idea, we\ncan derive the following result:\n\\end{rem}\n\n\\begin{prop}\nIf the $n \\times n$ matrices $A$ and $B$ are similar, then they have the same characteristic polynomial, and hence the same eigenvalues with the same multiplicities.\n\\end{prop}\n\n\\section{Diagonalization of Matrices}\n\nIf $A$ is a diagonal matrix, then its diagonal elements are its\neigenvalues, and the standard basis vectors are its\neigenvectors. 
This is the motivation for the following:\n\n\\begin{defn}\nAn $n \\times n$ matrix $A$ is said to be \\textbf{diagonalizable}\nif $A$ is similar to a diagonal matrix $D$, in other words if\nthere is an invertible matrix $P$ and a diagonal matrix $D$\nsuch that $A = PDP^{-1}$.\n\\end{defn}\n\n\\begin{rem}\nIf $A$ is diagonalizable, then its powers are easy\nto compute.\n\\end{rem}\n\n\\begin{rem}\nIf $A$ is diagonalizable, then its eigenvalues can\nbe found by inspection of $D$. However, in practice, we\nhave to do things the other way round. First, we find the\neigenvalues from the characteristic equation, then we find\n$P$, then we get the diagonal matrix $D$.\n\\end{rem}\n\n\\begin{thm}[Diagonalization Theorem - VIT]\n\\label{thm:dt-vit}\n\\hfill\n\n\\begin{enumerate}\n\\item An $n \\times n$ matrix $A$ is diagonalizable $\\iff$ $A$ has $n$ linearly independent eigenvectors.\n\\item In this case, $A = PDP^{-1}$, where the columns of $P$ are $n$ l.i. eigenvectors of $A$, and the diagonal entries of $D$ are eigenvalues corresponding to these eigenvectors.\n\\end{enumerate}\n\n\\end{thm}\n\n\\begin{rem}\nAnother way to express the above theorem is\nthat an $n \\times n$ matrix $A$ is diagonalizable $\\iff$ it has\nenough (linearly independent) eigenvectors to form a\nbasis of $\\mathbb{R}^{n}$. Such a basis is called an \\textbf{eigenvector basis}.\n\\end{rem}\n\nIn practice, we can distinguish three cases:\n\n\\subsection{Case 1}\nAn $n \\times n$ matrix $A$ has $n$ distinct (real) eigenvalues. Then we get the following results:\n\n\\begin{prop}\nAn $n \\times n$ matrix $A$ with $n$ distinct eigenvalues is diagonalizable.\n\\end{prop}\n\n\\begin{proof}\nBy an earlier result (Proposition~\\ref{prop:50}), eigenvectors corresponding to distinct eigenvalues are linearly independent. Therefore in this case $A$ has $n$ linearly independent eigenvectors. Hence by Theorem~\\ref{thm:dt-vit} (DT-VIT), $A$ is diagonalizable.\n\\end{proof}\n\nGiven an eigenvalue $\\lambda_{1}$ for a matrix $A$, we define:\n\n\\begin{defn}\nThe \\textbf{algebraic multiplicity} of $\\lambda_{1}$ is the power of the factor $(\\lambda - \\lambda_{1})$ in the characteristic polynomial of $A$.\n\\end{defn}\n\n\\begin{defn}\nThe \\textbf{geometric multiplicity} of $\\lambda_{1}$ is the dimension of the eigenspace corresponding to $\\lambda_{1}$.\n\\end{defn}\n\n\\begin{rem}\nThe first definition is the one we have used for\nmultiplicity up to now, as it applies to polynomials in\ngeneral (not only the characteristic polynomial). The\nsecond definition applies specifically to the characteristic\npolynomial, since its roots are eigenvalues, which have\ncorresponding eigenspaces.\n\\end{rem}\n\n\\subsection{Case 2}\nAn $n \\times n$ matrix $A$ has $p < n$ distinct eigenvalues, but counting the (algebraic) multiplicities, there are $n$ \\underline{real} eigenvalues (\\textit{\\underline{not distinct}}). We now come to the weaker result for this case:\n\n\\begin{prop}\nLet $A$ be an $n \\times n$ matrix with $n$ (real) eigenvalues (counting algebraic multiplicities) of which only $\\lambda_{1}, \\lambda_{2}, \\dots, \\lambda_{p}$ are distinct ($p < n$). 
Then the following hold:\n\n\\begin{enumerate}\n\\item For $1 \\leq k \\leq p$, the geometric multiplicity is less than or equal to the algebraic multiplicity of $\\lambda_{k}$.\n\\item $A$ is diagonalizable $\\iff$ the sum of the dimensions of the distinct eigenspaces is $n$ $\\iff$ the geometric multiplicity for each $\\lambda_{k}$ equals its algebraic multiplicity.\n\\item If $A$ is diagonalizable, and $B_{k}$ is a basis for the eigenspace corresponding to $\\lambda_{k}$ for each $k$, then the total collection of vectors in $B_{1}, \\dots, B_{p}$ forms an eigenvector basis for $\\mathbb{R}^{n}$.\n\\end{enumerate}\n\n\\end{prop}\n\n\\subsection{Case 3}\n\nAn $n \\times n$ matrix $A$ has $p < n$ distinct eigenvalues, but even after counting the algebraic multiplicities, there are $< n$ (\\underline{real}) eigenvalues ($p$ could even be $0$). Then $A$ is not diagonalizable over the real field. If we want to diagonalize, we have to admit complex eigenvalues and eigenvectors.\n\n\\begin{rem}\nEven if we admit complex eigenvalues and\neigenvectors, a real matrix does not have to be\ndiagonalizable. The case is quite complicated, and we will\nnot go into the details. However, we will consider the case\nof a $2 \\times 2$ real matrix with a complex eigenvalue, and\ndescribe the nature of such a matrix and its corresponding\ntransformation (i.e. a linear operator on $\\mathbb{R}^{2}$).\n\\end{rem}\n\n\\section{Complex Eigenvalues}\n\nIn case $A$ has complex eigenvalues, we regard $A$ as a linear transformation on the vector space $\\mathbb{C}^{n}$, where $\\mathbb{C}$ is the field of complex numbers. Then:\n\nA complex scalar $\\lambda$ satisfies $det(A - \\lambda I) = 0 \\iff$ there is a nonzero vector $\\bm{x}$ in $\\mathbb{C}^{n}$ such that $A\\bm{x} = \\lambda \\bm{x}$. $\\lambda$ is called a complex eigenvalue and $\\bm{x}$ is called a complex eigenvector.\n\nIf $A\\bm{x} = \\lambda \\bm{x}$, then taking complex conjugates, $\\overline{A\\bm{x}} = \\overline{\\lambda \\bm{x}}$, or $\\overline{A}\\overline{\\bm{x}} = \\overline{\\lambda} \\overline{\\bm{x}}$. However since $A$ is real, $\\overline{A} = A$. So $A\\overline{\\bm{x}} = \\overline{\\lambda} \\overline{\\bm{x}}$, or $\\overline{\\lambda}$ is also an eigenvalue with eigenvector $\\overline{\\bm{x}}$. \\textit{In other words, if $A$ is real, then its complex eigenvalues occur in conjugate pairs.}\n\n\\begin{prop}[Basic result for Complex Eigenvalues]\nSuppose $A$ is a real $2 \\times 2$ matrix with a complex eigenvalue $\\lambda = a - bi, b \\neq 0$, and associated eigenvector $\\bm{v}$ in $\\mathbb{C}^{2}$. Then $A = PBP^{-1}$ where\n\n$$\nP =\n\\begin{bmatrix}\n \\operatorname{Re}(\\bm{v}) & \\operatorname{Im}(\\bm{v})\n\\end{bmatrix}, \nB =\n\\begin{bmatrix}\na & -b \\\\\nb & a\n\\end{bmatrix}\n$$\n\n\\begin{itemize}\n\\item Furthermore the transformation (left multiplication by $B$) corresponds to a rotation followed by scaling.\n\\item The rotation is through the angle $\\phi$ between the positive x-axis and the ray from the origin to $(a,b)$. The angle $\\phi$ is called the \\textit{argument} of $\\lambda$.\n\\item The scaling is by a factor $r = |\\lambda| = \\sqrt{a^{2} + b^{2}}$. The quantity $r = |\\lambda|$ is known as the \\textit{modulus} of $\\lambda$.\n\\end{itemize}\n\n\\end{prop}\n\n\\begin{eg}\nSuppose $A = \\begin{bmatrix} 0 & 1 \\\\ -8 & 4 \\end{bmatrix}$. 
Find its complex eigenvalues and eigenvectors, and briefly comment on what the matrix is doing.\n\\end{eg}\n\n\\begin{proof}[Solution]\nFrom its characteristic polynomial $det(A - \\lambda I) = 8 - 4\\lambda + \\lambda^{2}$ we get eigenvalues $2 \\pm 2i$. Take $\\lambda = 2 + 2i$ (in other words, $a = 2$ and $b = -2$). Then the matrix\n\n$$\nA - \\lambda I = \\begin{bmatrix} -2 - 2i & 1 \\\\ -8 & 2 - 2i \\end{bmatrix}\n$$\n\nThe corresponding eigenvector $\\bm{v} = (x,y)$ (considered as a column vector with $x$ and $y$ as complex numbers) is obtained by solving $(A - \\lambda I)\\bm{v} = \\bm{0}$.\n\nThis leads to two equations:\n\n$$\n(-2 - 2i)x + y = 0\n$$\n$$\n(-8)x + (2 - 2i)y = 0\n$$\n\nSince the system $(A - \\lambda I)\\bm{v} = \\bm{0}$ has a non-trivial solution, the two\nrows of the matrix are linearly dependent. In other words, the two\nequations represent the same relationship between $x$ and $y$.\nWe may give any value to one of them arbitrarily, and\nobtain the second from either of the two equations.\n\nTaking the first equation $(-2 -2i)x + y = 0$ and putting $x = 1$, we get:\n\n$$\n\\begin{bmatrix}\nx \\\\\ny\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 \\\\\n2 + 2i\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 \\\\\n2\n\\end{bmatrix}\n+ i\n\\begin{bmatrix}\n0 \\\\\n2\n\\end{bmatrix}\n$$\n\nThus the matrix $P = \\begin{bmatrix}1 & 0\\\\ 2 & 2\\end{bmatrix}$\nand the matrix $B = \\begin{bmatrix}2 & 2\\\\-2 & 2\\end{bmatrix}$.\n\nWe can verify that $PBP^{-1} = A$; left multiplication by $B$ rotates through the argument of $\\lambda$ and scales by $|\\lambda| = \\sqrt{2^2 + 2^2} = 2\\sqrt{2}$.\n\\end{proof}\n\n\\end{document}\n", "meta": {"hexsha": "9cebe3853a968d07c9cecf24f0262833d71b7d1a", "size": 13929, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/linalg.tex", "max_stars_repo_name": "shauryachawla/misc", "max_stars_repo_head_hexsha": "e50eb7c8979f9b3f7ecc43464cf7ccc91bf4d0bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/linalg.tex", "max_issues_repo_name": "shauryachawla/misc", "max_issues_repo_head_hexsha": "e50eb7c8979f9b3f7ecc43464cf7ccc91bf4d0bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/linalg.tex", "max_forks_repo_name": "shauryachawla/misc", "max_forks_repo_head_hexsha": "e50eb7c8979f9b3f7ecc43464cf7ccc91bf4d0bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-10-07T08:22:09.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-07T08:22:09.000Z", "avg_line_length": 41.5791044776, "max_line_length": 478, "alphanum_fraction": 0.7055782899, "num_tokens": 4414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7931059462938815, "lm_q1q2_score": 0.5623272910410945}}
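A quick numerical cross-check of the worked example above; this is a minimal numpy sketch, not part of the original notes.
\\begin{lstlisting}[language=Python]
# Sketch: check the P, B factorization from the worked example.
import numpy as np

A = np.array([[0.0, 1.0], [-8.0, 4.0]])
P = np.array([[1.0, 0.0], [2.0, 2.0]])    # columns are Re(v) and Im(v)
B = np.array([[2.0, 2.0], [-2.0, 2.0]])   # rotation-scaling block for lambda = 2 + 2i

print(np.linalg.eigvals(A))                      # [2.+2.j  2.-2.j]
print(np.allclose(P @ B @ np.linalg.inv(P), A))  # True
\\end{lstlisting}
Both printed values agree with the hand computation: the eigenvalues are \\(2 \\pm 2i\\) and \\(PBP^{-1}\\) reproduces \\(A\\).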
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{Quadrupole approximation}\n\nThe procedure to find the quadrupole approximation \nfor the GW emitted by a Newtonian system is in the form: \n\\begin{enumerate}\n    \\item determine the density \\(\\rho (\\vec{x}, t)\\);\n    \\item calculate the trace-free inertia tensor \\(Q^{ij}(t)\\);\n    \\item calculate the gravitational wave strain \\(h_{ij}\\). \n\\end{enumerate}\n\n\\subsection{Two point particles}\n\nIn order to model the point-like nature of the particles we can write an expression for the density as a sum of two delta-functions,\nwhose positions oscillate on the \\(x\\) axis with an amplitude \\(R\\):\n%\n\\begin{align}\n\\rho (\\vec{x}, t) = m\\delta (\\vec{x} - \\vec{x}_1(t)) + m\\delta (\\vec{x} - \\vec{x}_2 (t))                                                           \\,,\n\\end{align}\n%\nwhere the positions of the particles, \\(\\vec{x}_1 (t)\\) and \\(\\vec{x}_2 (t)\\), will be the solutions to the differential equation \\(\\vec{F}_{12} = - k (\\vec{x}_1 - \\vec{x}_2)\\). \nIf they start out along the \\(x\\) axis they will remain along it, so the equation will read \\(F_{12} = - k (x_1 - x_2 )\\), so \\(\\ddot{x}_1 = - \\omega_0 ^2 (x_1 - x_2 )\\) and \\(\\ddot{x}_2 = + \\omega_0 ^2 (x_1 - x_2 )\\), where \\(\\omega^2 = k / m\\). \n\nThis can be rewritten, by taking the difference of the two, as \\(\\ddot{\\Delta x} = - 2 \\omega_0 ^2 \\Delta x \\), with \\(\\Delta x = x_1 - x_2 \\). \nTherefore, the pulsation of the system is \\(\\omega = \\sqrt{2} \\omega_0  \\) --- this could also have been calculated as \\(\\sqrt{k / m_r}\\), where \\(m_r = m^2/ (m+m) = m/ 2\\) is the reduced mass.\n\nSo, the position vectors will look like\n%\n\\begin{align}\n\\vec{x}_1 (t) = R \\left[\\begin{array}{c}\n\\cos(\\omega t) \\\\ \n0 \\\\ \n0\n\\end{array}\\right]\n\\qquad \\text{and} \\qquad\n\\vec{x}_2 (t) = - \\vec{x}_1 (t)\n\\,\n\\end{align}\n%\nup to a phase. \n\n% For simplicity, we will also assume that the radius \\(r\\) is constant --- this cannot be precisely the case, since there will be some amount of power lost because of the GW emission, but it is a reasonable approximation.\n\nSo, the inertia tensor will be given by \n%\n\\begin{align}\nI^{ij} (t) &= \\int \\rho (\\vec{x}, t) x^{i} x^{j} \\dd[3]{x}  \\\\\n&= m \\sum _{k=1, 2} x_k^{i} x_k^{j}  \\\\\n&= - m R^2 \\left[\\begin{array}{ccc}\n\\cos^2 (\\omega t) & 0  & 0\\\\ \n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{array}\\right] \n\\,.\n\\end{align}\n%\n% where we omit the argument of the sines and cosines (which is always \\( \\)). 
\nIts trace is equal to its only nonzero component, \\(I^{xx}\\); therefore, its traceless version will look like \n%\n\\begin{align}\nQ^{ij} (t) &= \\frac{2 m R^2}{3} \\left[\\begin{array}{ccc}\n2 \\cos^2\\omega t & 0 & 0 \\\\ \n0 & - \\cos^2 \\omega t & 0 \\\\ \n0 & 0 & - \\cos^2 \\omega t\n\\end{array}\\right]\n\\,.\n\\end{align}\n%\n\nThe second derivative of this tensor will be given by \n%\n\\begin{align}\n\\ddot{Q}^{ij}(t) = - \\frac{4 mR^2}{3} \\omega^2 \n\\left[\\begin{array}{ccc}\n2\\cos(2 \\omega t) & 0  & 0\\\\ \n0 & - \\cos(2 \\omega t)  & 0 \\\\\n0 & 0 & - \\cos(2 \\omega t)\n\\end{array}\\right]\n\\,.\n\\end{align}\n\n\nNow, in order to compute the gravitational wave strain we need the projection tensor \\(\\Lambda_{ij, kl}\\).\nIf the propagation direction we are interested in is \\(\\vec{k}\\), then the tensor \n%\n\\begin{align}\nP_{ij} = \\delta_{ij} - \\frac{k_i k_j}{\\abs{k}^2} = \\delta_{ij} - n_i n_j\n\\,\n\\end{align}\n%\nwill project a vector onto the subspace orthogonal to \\(\\vec{k}\\); and the tensor \n%\n\\begin{align}\n\\Lambda_{ij, kl} &= P_{ik} P_{jl} - \\frac{1}{2} P_{ij} P_{kl}  \\\\\n&= \\delta_{ik} \\delta_{jl} - n_i n_k \\delta_{jl} - \\delta_{ik} n_j n_l + \\frac{1}{2} n_{i} n_k n_j n_l - \\frac{1}{2} \\delta_{ij} \\delta_{kl} + \\frac{1}{2} \\delta_{ij} n_k n_l + \\frac{1}{2} n_i n_j \\delta_{kl} \n\\,,\n\\end{align}\n%\nwill project a rank-2 tensor onto the corresponding subspace. \n\nThen, the gravitational wave emission will look like: \n%\n\\begin{align}\nh_{ij} (t) &= \\frac{2}{r} \\frac{G}{c^{4}} \\Lambda_{ij, kl} \\ddot{Q}_{kl} (t - r/ c)  \\\\\n&= - \\frac{2}{r} \\frac{G}{c^{4}} \\frac{4 m R^2 \\omega^2}{3} \n\\cos(2 \\omega (t - r/c)) F_{i j} (\\theta , \\varphi )\n\\,,\n\\end{align}\n%\nwhere \\(F_{i j}(\\theta , \\varphi )\\) is a tensor depending on the two angles which define the observation-direction unit vector:\n%\n\\begin{align}\n\\vec{n} = \\left[\\begin{array}{c}\n\\sin \\theta \\cos \\varphi  \\\\ \n\\sin \\theta \\sin \\varphi  \\\\ \n\\cos \\theta \n\\end{array}\\right]\n\\,.\n\\end{align}\n\nThe explicit shape of \\(F_{i j}\\) depends on the shape of \\(\\ddot{Q}_{i j}\\); the one for this specific case can be calculated analytically with a computer algebra system --- see \\href{https://jacopok.github.io/tt_gauge_gw.html}{here}.\n\nHowever, that specific analytic form is not really interesting --- we want to extract the two actual degrees of freedom of the gravitational wave. \nIf we define two vectors \\(\\vec{m}\\) and \\(\\vec{l}\\) which are orthogonal to \\(\\vec{n}\\) and to each other, we will be able to extract the degrees of freedom as \\(h_+ (t) = h_{ij} (t) m^i m^j\\) and \\(h_\\times (t) = h_{ij} m^i l^{j}\\). \nWe can select, for example, the \\(\\hat{\\theta} \\) and \\(\\hat{\\varphi}\\) unit vectors in a spherical coordinate system in which \\(\\vec{n} = \\hat{r}\\) is the radial vector for this purpose. 
\n\nThe computation yields: \n%\n\\begin{align}\nh_+ (t) &= \\frac{2}{r} \\frac{G}{c^{4}} m R^2 \\omega^2  \\qty(\\cos^2 \\theta \\cos^2 \\varphi + \\cos^2 \\varphi  - 1) \\cos(2 \\omega t) \\\\\nh_\\times (t) &= - \\frac{2}{r} \\frac{G}{c^{4}}  m R^2 \\omega^2  \\cos \\theta \\sin( 2 \\varphi ) \\cos(2 \\omega t)\n\\,.\n\\end{align}\n\n% The matrix \\(h_{ij}\\) is then \n% %\n% \\begin{align}\n% h_{ij} &= C \\left[\\begin{array}{ccc}\n% (1 - n_x^2)^2/2  & \n% (n_x^2 - 1) n_x n_y / 2 & \n% (n_x^2 - 1) n_x n_z / 2 \\\\ \n%  & \n%  \\frac{1}{2} n_{y} n_x n_y n_x - \\frac{1}{2}  + \\frac{1}{2} n_x n_x + \\frac{1}{2} n_y n_y \n%   & (n_x^2 + 1) n_y n_z / 2  \\\\ \n%  &  & \n%  \\frac{1}{2} n_{z} n_x n_z n_x - \\frac{1}{2} + \\frac{1}{2}  n_x n_x + \\frac{1}{2} n_z n_z \n\n% \\end{array}\\right]\n% \\,,\n% \\end{align}\n% %\n% where \n% %\n% \\begin{align}\n% C = - \\frac{2}{r} \\frac{G}{c^{4}} \\cos( 2 \\omega \\qty(t - \\frac{r}{c}) + \\phi )\n% \\,,\n% \\end{align}\n%\n\n% \\subsection{A free-falling particle}\n\n% In this case the motion will be determined by the equation \n% %\n% \\begin{align}\n% F = - \\frac{GMm}{r^2} \\implies \\ddot{r} = - \\frac{GM}{r^2}\n% \\,,\n% \\end{align}\n% %\n% for which we need to specify two initial conditions. \n\n% \\subsection{Ellipsoid rotating along its axis}\n\n\n% The nonzero components of the inertia tensor of an ellipsoid with mass \\(m\\) and axes \\(a\\), \\(b\\), \\(c\\) are:\n% %\n% \\begin{align}\n% I_{11} = \\frac{m}{5} (b^2 + c^2) \\\\\n% I_{22} = \\frac{m}{5} (a^2 + c^2) \\\\\n% I_{33} = \\frac{m}{5} (a^2 + b^2) \n% \\,.\n% \\end{align}\n\n% We give a sketch of the computation, starting from the assumption that \\((2/5) m r^2\\) is the moment of inertia of a sphere along any axis which passes along the center.\n% This means that if we select a Cartesian coordinate system the inertia tensor of the sphere is diagonal, and its components read \n% %\n% \\begin{align}\n% \\frac{2}{5} m r^2 = I_{xx } = I_{yy} = I_{zz} = \\int_{x^2 + y^2 + z^2 \\leq r^2} \\rho_s (y^2 + z^2) \\dd[3]{x}\n% \\,,\n% \\end{align}\n% %\n% where the (assumed constant) density of the sphere is \\(\\rho_s = m / V = 3m / 4 \\pi r^3\\). \n% The integral could be equivalently that of \\(x^2 + y^2\\) or \\(x^2 + z^2\\): therefore, each integral in the sphere's volume \\(V\\) in the form \n% %\n% \\begin{align}\n% \\int_{V} \\rho x_i^2 \\dd[3]{x} = \\frac{1}{5} m r^2\n% \\,\n% \\end{align}\n% %\n% must contribute for half of the total.\n\n% With this knowledge, we can compute the moment of inertia along a certain axis, say \\(z\\), of an ellipsoid with axes \\(a\\), \\(b\\) and \\(c\\): \n% %\n% \\begin{align}\n% I_{zz} &= \\int_{(x/a)^2 + (y/b)^2 + (z/c)^2 \\leq 1} \\rho_e (x^2 + y^2) \\dd[3]{x} \n% \\,,\n% \\end{align}\n% %\n% where now the density of the ellipsoid reads \\(\\rho _e = m / V = 3m / 4 \\pi abc\\). \n\n% We start out by only computing the integral of \\(\\rho x^2\\). We can make a change of coordinates: \\(X = x/ a\\), \\(Y = y / b\\) and \\(Z = z/c\\). 
The volume element changes as \\(\\dd[3]{x} = abc \\dd[3]{X}\\).\n \n% In these variables, the integral can be written in terms of the previously found result for a sphere of radius 1, with density \\(\\rho _1 = 3 m / 4 \\pi \\):\n% %\n% \\begin{align}\n% I_{zz}^{(x^2 \\text{ term})} &= \\rho_e \\int_{X^2 +Y^2+ Z^2 \\leq 1} X^2 a^2 abc \\dd[3]{X}  \\\\\n% &= \\frac{\\rho_e}{\\rho _1} a^3 bc \\int_{X^2 +Y^2+ Z^2 \\leq 1} \\rho_1 X^2 \\dd[3]{X}  \\\\\n% &= \\frac{\\rho_e}{\\rho _1} a^3 bc \\frac{m}{5} = \\frac{m}{5} \\frac{a^3 bc}{abc} = a^2 \\frac{m}{5}\n% \\,,\n% \\end{align}\n% %\n% so the full component reads \\(I_{zz} = (a^2 + b^2) m/ 5\\). \n% The off-diagonal components, instead, vanish by symmetry: integrals in the form \\(\\int \\rho x^{i} x^{j} \\dd[3]{x}\\) with \\(i \\neq j\\) \n\n% \\subsection{Two particles in circular Newtonian orbit}\n\n% %\n% \\begin{align}\n% \\rho (\\vec{x}, t) = m_1 \\delta (\\vec{x} - \\vec{x}_1(t)) + m_2 \\delta (\\vec{x} - \\vec{x}_2 (t))                                                                        \n% \\,,\n% \\end{align}\n% %\n% where \n% %\n% \\begin{align}in\n% \\vec{x}_1 (t) = r \\left[\\begin{array}{c}\n% \\cos(\\omega t + \\phi ) \\\\ \n% - \\sin(\\omega t + \\phi ) \\\\ \n% 0\n% \\end{array}\\right]\n% \\qquad \\text{and} \\qquad\n% \\vec{x}_2 (t) = - \\vec{x}_1 (t)\n% \\,.\n% \\end{align}\n\n% For simplicity, we will also assume that the radius \\(r\\) is constant --- this cannot be precisely the case, since there will be some amount of power lost because of the GW emission, but it is a reasonable approximation.\n\n% So, the trace-free inertia tensor will be given by \n% %\n% \\begin{align}\n% Q^{ij} (t) &= \\int \\rho (\\vec{x}, t) \\qty( x^{i} x^{j} - \\frac{1}{3} \\delta^{ij} r^2) \\dd[3]{x}  \\\\\n% &= \\sum _{k=1, 2} m_k \\qty(x_k^{i} x_k^{j} - \\frac{1}{3} \\delta^{ij} r^2)  \\\\\n% &= 2 m r^2 \\left[\\begin{array}{cc}\n% \\cos^2 - 1/3 & - \\cos \\sin \\\\ \n% - \\cos \\sin & \\sin^2 - 1/3  \n% \\end{array}\\right] \n% \\,,\n% \\end{align}\n% %\n% where we write only the upper-left 2x2 submatrix in \\(Q^{ij}\\), since the other entries are constant, and we omit the argument of the sines and cosines (which is always \\(\\omega t + \\phi \\)). \n\n% The second derivative of this tensor will be given by \n% %\n% \\begin{align}\n% \\ddot{Q}^{ij}(t) = 4 m r^2 \\omega^2 \\left[\\begin{array}{cc}\n% \\sin^2 - \\cos^2 & 2 \\sin \\cos \\\\ \n% 2 \\sin \\cos & \\cos^2 - \\sin^2\n% \\end{array}\\right]\n% \\,.\n% \\end{align}\n\n\\section{Estimate of GW magnitude}\n\nWe want a way to estimate the GW strain, \\(h \\sim \\delta L / L\\), by simple considerations about the parameters of the system which is generating the waves. \nIn the quadrupole approximation we have \\(h \\sim (G / c^{4} r) \\ddot{I}\\), where \\(\\ddot{I}\\) is the typical magnitude of the second derivative of the inertia tensor, while \\(r\\) is the distance from the system to Earth. \n\nWe know that \\(I \\sim M R^2\\), where \\(M\\) is the mass of the system while \\(R\\) is its characteristic radius (for example, a solid sphere has \\(I = (2/5)  M R^2 \\) --- the \\(2/5\\) factor is relevant but we will neglect it in this order-of-magnitude estimate).\nIf \\(T\\) is the characteristic timescale in which the components of the inertia tensor vary, we can estimate \\(\\ddot{I}\\) as \\(\\ddot{I} \\sim M R^2 / T^2 = M v^2\\), where \\(v\\) is the typical velocity of the various parts of the system. 
\n\nWith this estimate, we have: \n%\n\\begin{align}\nh \\sim \\frac{G}{c^{4}r} M v^2 = \\frac{GM}{c^2 r} \\frac{v^2}{c^2} \\sim \\frac{R_s}{r} \\frac{v^2}{c^2}\n\\,.\n\\end{align}\n\nLet us compute this for a few simple examples: \n\\begin{enumerate}\n    \\item a car crash, with \\(M \\sim \\SI{e3}{kg}\\), \\(v \\sim \\SI{100}{km/h}\\), \\(r \\sim \\SI{10}{m}\\);\n    \\item a supernova explosion, with \\(M \\sim M_{\\odot}\\), \\(v \\sim \\num{.2}c\\), \\(r \\sim \\SI{2}{kpc}\\);\n    \\item a binary black hole system, with \\(M \\sim 50M_{\\odot}\\), \\(v \\sim \\num{.1}c\\), \\(r \\sim \\SI{400}{Mpc}\\).\n\\end{enumerate}\n\nWe shall use the \\texttt{units} system\\footnote{\\url{https://docs.astropy.org/en/stable/units/index.html}} provided by the python library \\texttt{astropy} in order to aid us in this computation: \nwe start out with the imports \n\\begin{lstlisting}[language=Python]\nimport astropy.units as u\nfrom astropy.constants import codata2018 as ac\n\\end{lstlisting}\n\nNow we may define a function yielding the desired estimate: \n\\begin{lstlisting}[language=Python]\n@u.quantity_input(M='mass', D='length', v='speed')\ndef estimate_h(M, D, v) -> u.dimensionless_unscaled:\n    h = ac.G * M / D / ac.c**2 * (v / ac.c)**2\n    # the units system takes care of all the unit conversion for us\n    # since we specified that we want the result to be a pure number\n    return(h)\n\nprint(f'Car crash: {estimate_h(1e3*u.kg, 10*u.m, 100* u.km/u.hr):.0e}')\nprint(f'Supernova: {estimate_h(1*u.Msun, 2*u.kpc, .2 * ac.c):.0e}')\nprint(f'Binary BH: {estimate_h(50*u.Msun, 400*u.Mpc, .1 * ac.c):.0e}')\n\\end{lstlisting}\n%\nwhich yields as output\n\\begin{lstlisting}[language=Python]\nCar crash: 6e-40\nSupernova: 1e-18\nBinary BH: 6e-23\n\\end{lstlisting}\n\nThe \\texttt{u.quantity\\_input} decorator is not strictly necessary, but this way the function will raise a very clear \\texttt{UnitsError} if we try to give it something with the wrong dimensionality as an input.\n\n% The supernova order of magnitude might be severely overestimated with this method: \n\n\\end{document}\n", "meta": {"hexsha": "82d6e83f68e19a510eeab8c64668b61a5bf19a5c", "size": 13189, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phd_courses/gravitational_waves_exercises/sheet1.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "phd_courses/gravitational_waves_exercises/sheet1.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phd_courses/gravitational_waves_exercises/sheet1.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 40.2103658537, "max_line_length": 260, "alphanum_fraction": 0.6153612859, "num_tokens": 4816, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7931059414036511, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5623272826973784}}
{"text": "%\\nomenclature[]{GPS}{Global Positioning System}\n%\\nomenclature[]{WGS84}{World Geodetic System 1984}\n\n\\section{Generation of Waypoints}\\label{sec:GenerationOfWaypoints}\nWe first address the issue of finding a discrete set of waypoints, referenced above in point 1 as $R$. We chose to formulate the problem using a discrete set of waypoints, rather than deal with the case of treating the region as continuous space, since it is much easier to accurately record sensor data when the RAV has settled at a given point.\n%Discretising the space also allows the problem to be formulated in a manner that is easier to represent using software, since describing an arbitrary continuous region may take up a prohibitively large amount of memory.\n%\\begin{itemize}\n    %\\item It is more straightforward to describe the problem using a discrete set of points rather than a continuous one.\n%    \\item It is much easier to process sensor information if it is known exactly where this data has been captured. The RAVs may stop at each discrete waypoint to visit these locations. *Focus on this point, expand if possible*\n%    \\item The case of path-finding using discrete waypoints has a strong body of literature behind it\n%    \\item The RAVs run autopilot software which can guide them using the on-board GPS sensor\n%\\end{itemize}\n\n%which are the center points of cells that partition the region of interest.\n%\\note{set of waypoints create a voronoi partition, might be worth talking about}\n\n%\\note{Above needs revision, will come back once rest of chapter has been fleshed out a bit more}\nWe began by assuming that the region which is to be surveyed, $R$, can be described by a polygon on the Cartesian plane, since it allows for a discrete representation by describing the coordinates of the corners of the polygon. \n%As mentioned above, we favour statically recording images and sensor readings at fixed discrete locations will ensure the data can be accurately recorded and spatially referenced. \nGiven this polygon, the goal is to generate a set of points, $R$, which are a uniform distance from each other in the x and y directions and lie inside the polygonal grid. The set $R$ forms a regular tessellation of the region of interest. In order to do this, we employed the following methodology:\n\\begin{enumerate}\n    \\item Find the circum-rectangle that tightly bounds the polygon, which is oriented with the x-y plane. This is shown in \\ref{fig:PolygonWithBoundingRect}\n    \\item Generate a set of grid points in the bounding rectangle. This is shown in \\ref{fig:PolygonWithBoundingRectAndGridPoints}\n    \\item Prune the points that lie inside the rectangle but outside the polygon. 
This is shown in Figure~\\ref{fig:PolygonWithBoundingRectAndGridPointsPruned}.\n\\end{enumerate}\n\n\\vspace*{45mm}\n\\begin{figure}[H]\n\\centering\n\\subfloat[Arbitrary Polygonal Region]{\n  \\includegraphics[width=73mm]{Chapters/MultiAgentCoverage/Figs/Polygon.PNG}\\label{fig:PolygonOnly}\n}\n\\subfloat[Bounding Rectangle Found]{\n  \\includegraphics[width=73mm]{Chapters/MultiAgentCoverage/Figs/PolygonBoundingRect.PNG}\\label{fig:PolygonWithBoundingRect}\n}\n\\hspace{0mm}\n\\subfloat[Candidate Grid Points Generated in Bounding Rectangle]{\n  \\includegraphics[width=73mm]{Chapters/MultiAgentCoverage/Figs/PolygonBoundingRectGridPoints.PNG}\\label{fig:PolygonWithBoundingRectAndGridPoints}\n}\n\\subfloat[Grid Points Outside of Polygon Pruned]{\n  \\includegraphics[width=73mm]{Chapters/MultiAgentCoverage/Figs/PolygonBoundingRectGridPointsPruned.PNG}\\label{fig:PolygonWithBoundingRectAndGridPointsPruned}\n}\n\\caption{A visualisation of the steps involved in Algorithm \\ref{alg:GridGeneration}}\n\\label{fig:PointInPolygon}\n\\end{figure}\n\n\n\n\\begin{algorithm}[H]{}\n\\caption{Algorithm to Generate a Uniformly Spaced Grid of Points in an Arbitrary Polygon}\n\\label{alg:GridGeneration}\n\\begin{algorithmic}[1]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n%Input\n\\REQUIRE $ \\newline R: \\quad\\text{ The set of (x, y) points defining the polygon which covers the region of interest}\n\\newline x\\_spacing: \\quad\\text{ The desired spacing between points in the x direction}\n\\newline y\\_spacing: \\quad\\text{ The desired spacing between points in the y direction}\n$\n%Output\n\\ENSURE $\\newline grid\\_points: \\quad \\text{ A set of uniformly spaced (x, y) points, which define a regular } \\newline \\text{ tessellation of the region of interest}$\n\n\\hfill\\pagebreak\n\\STATE grid\\_points $\\leftarrow$ empty array\n\n\\STATE max\\_x $\\leftarrow$ maximum x value of all points in R\n\\STATE min\\_x $\\leftarrow$ minimum x value of all points in R\n\\STATE max\\_y $\\leftarrow$ maximum y value of all points in R\n\\STATE min\\_y $\\leftarrow$ minimum y value of all points in R\n\n\\STATE bounding\\_rect $\\leftarrow$ The tightest bounding rectangle which contains the polygon R defined by the points (min\\_x, min\\_y), (min\\_x, max\\_y), (max\\_x, max\\_y),(max\\_x, min\\_y).\n\n\\STATE no\\_y\\_points $\\leftarrow \\left \\lfloor{\\frac{max\\_y - min\\_y}{y\\_spacing}}\\right \\rfloor$\n\\STATE no\\_x\\_points $\\leftarrow \\left \\lfloor{\\frac{max\\_x - min\\_x}{x\\_spacing}}\\right \\rfloor$\n\n\\FOR{y\\_spacing\\_index = 0 to no\\_y\\_points}\n\\FOR{x\\_spacing\\_index = 0 to no\\_x\\_points}\n\\STATE p$\\leftarrow$(min\\_x+x\\_spacing $\\times$ x\\_spacing\\_index, min\\_y+y\\_spacing $\\times$ y\\_spacing\\_index)\n\\IF{Point-in-Polygon(R, p)}\n\\STATE Add p to grid\\_points\n\\ENDIF\n\\ENDFOR\n\\ENDFOR\n\\RETURN grid\\_points\n\\end{algorithmic} \n\\end{algorithm}\n\n\n%\\par This procedure was devised with the aim of being straightforward to implement and modify. The algorithm used to carry out these steps is outlined in Algorithm \\ref{alg:GridGeneration}, which we describe here.\nWe describe the grid point generation algorithm (Algorithm \\ref{alg:GridGeneration}) here. First, the bounding circum-rectangle of the polygonal region of interest is constructed from the largest and smallest x and y coordinates of the points of the polygon, outlined in lines 2-6 and shown in Figure \\ref{fig:PolygonWithBoundingRect}. Then candidate uniformly spaced grid points in the bounding rectangle are generated in lines 7-11, shown in Figure \\ref{fig:PolygonWithBoundingRectAndGridPoints}. Finally, in lines 12-13, each candidate grid point is added to the set of grid points contained in the polygon if it passes the Point-in-Polygon check, shown in Figure \\ref{fig:PolygonWithBoundingRectAndGridPointsPruned}. This check is carried out by the Point-in-Polygon routine, which is provided separately in Algorithm \\ref{alg:PointInPolygon} and discussed below.
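\n\nAs a concrete illustration, the following minimal Python sketch implements both algorithms for the planar Cartesian case (the names are our own and this is an illustration rather than the thesis implementation; it omits the bounding-box early exit of lines 2-5 of Algorithm \\ref{alg:PointInPolygon}):\n\\begin{verbatim}\ndef point_in_polygon(polygon, p):\n    # Crossing test: count how many polygon edges a ray extended\n    # from p in the +x direction crosses; odd means p is inside.\n    px, py = p\n    inside = False\n    n = len(polygon)\n    for i in range(n):\n        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]\n        if (y1 > py) != (y2 > py):  # edge straddles the ray\n            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)\n            if px < x_cross:\n                inside = not inside\n    return inside\n\ndef generate_grid(polygon, x_spacing, y_spacing):\n    # bounding rectangle from the extreme coordinates (lines 2-6)\n    xs = [x for x, _ in polygon]\n    ys = [y for _, y in polygon]\n    min_x, max_x = min(xs), max(xs)\n    min_y, max_y = min(ys), max(ys)\n    # candidate points in the rectangle, pruned by the\n    # point-in-polygon test (lines 7-13)\n    grid_points = []\n    for j in range(int((max_y - min_y) // y_spacing) + 1):\n        for i in range(int((max_x - min_x) // x_spacing) + 1):\n            p = (min_x + i * x_spacing, min_y + j * y_spacing)\n            if point_in_polygon(polygon, p):\n                grid_points.append(p)\n    return grid_points\n\\end{verbatim}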
\n\n\\begin{algorithm}[H]{}\n%\\caption{Crossing Test for Point in Polygon based on \\cite{Shimrat1962Algorithms}}\n\\caption{Point-in-Polygon}\n\\label{alg:PointInPolygon}\n\\begin{algorithmic}[1]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n%Input\n\\REQUIRE $ \\newline R: \\quad \\text{ The polygon which covers the region of interest}\n%\\newline R \\quad \\text{ The set of (x, y) points defining the polygon which covers the region of interest}\n\\newline P: \\quad \\text{ A point which will be tested for containment in the polygon }\n$\n%Output\n\\ENSURE $\\newline \\text{True if P is contained in R else False }$\n\n\\hfill\\pagebreak\n\\STATE polygon\\_points $\\leftarrow$ The set of points defining the vertices of R\n\\IF{$P_X$ is greater than the largest value or less than the smallest value of the x coordinates of the points in $R$}\n\\RETURN False\n\\ELSIF{$P_Y$ is greater than the largest value or less than the smallest value of the y coordinates of the points in $R$}\n\\RETURN False\n\n\n%\\ELSIF{x coordinate of P is less than the smallest value of the x coordinate of all the points in R}\n%\\RETURN false\n%\\ELSIF{y coordinate of P is greater than the largest value of the y coordinate of all the points in R}\n%\\RETURN false\n%\\ELSIF{y coordinate of P is less than the smallest value of the y coordinate of all the points in R}\n%\\RETURN false\n\n\\ELSE\n\\STATE point\\_in\\_polygon $\\leftarrow$ False\n\\STATE edges $\\leftarrow$ the set of edges defining R\n\\FOR{each edge in edges}\n\\STATE P1 $\\leftarrow$ the first point of edge\n\\STATE P2 $\\leftarrow$ the second point of edge\n\n\\IF{($P_Y$ lies between $P1_y$ and  $P2_y$) \\&\n     $P_X$ is less than the x coordinate of the point of intersection between edge and the ray extended to +$\\infty$ from $P$ in the +x direction}\n\\STATE point\\_in\\_polygon $\\leftarrow \\neg$ point\\_in\\_polygon\n\\ENDIF\n\n\\ENDFOR\n\\ENDIF\n\\RETURN point\\_in\\_polygon\n\\end{algorithmic} \n\\end{algorithm}\n\n%\\pagebreak\n%\\begin{figure}{r}{0.5\\textwidth}\n%\\includegraphics[width=0.5\\textwidth]{Chapters/MultiAgentCoverage/Figs/PointOutsidePolygon.PNG}\\caption{Ray extended from point outside polygon intersects an even number of times}\\label{fig:PointOutsidePolygon}\n%\\includegraphics[width=0.5\\textwidth]{Chapters/MultiAgentCoverage/Figs/PointInsidePolygon.PNG}\\caption{Ray extended from point inside polygon intersects an odd number of times}\\label{fig:PointInsidePolygon}\n%\\end{figure}\n\n\\begin{figure}[h]%\n    \\centering\n    \\subfloat[Odd number of crossings\\label{fig:oddCrossing}]{\\includegraphics[width=9cm,height=6cm]{Chapters/MultiAgentCoverage/Figs/CrossingTestPos.PNG}}%\n    %\\vspace{2mm}\n    \\\\\n    \\subfloat[Even number of crossings\\label{fig:evenCrossing}]{\\includegraphics[width=9cm,height=6cm]{Chapters/MultiAgentCoverage/Figs/CrossingTestNeg.PNG}}%\n    \\caption{Examples of positive and 
negative results when applying the Crossing Test}%\n    \\label{fig:CrossingTest}%\n\\end{figure}\n\nAlgorithm \\ref{alg:PointInPolygon} tests whether a point lies within a polygon, described by a set of edges. Its implementation uses the \\textit{crossing test} \\cite{Shimrat1962Algorithms}, which extends a ray from the point to be tested in the positive x-direction. If it crosses the boundary of the polygon an odd number of times, then it must be contained within the polygon; otherwise it must lie outside. This is illustrated in Figure \\ref{fig:CrossingTest}.% The running time of this algorithm is dictated by the number of edges the polygon has and the size of the polygon relative to the desired tessellation spacing in the x and y directions. We did not make any optimizations other than the one outlined in lines 2-5 of Algorithm \\ref{alg:PointInPolygon}, which checks whether the test point lies outside the largest and smallest x and y coordinates, but there is room to improve the running time significantly. This would be important if one is dealing with particularly large regions of interest or a relatively small spacing between grid points.\n\n\n%\\begin{figure}{}\n%\\subfloat[Ray extended from point outside polygon intersects an even number of times]{\\includegraphics[width=60mm]{Chapters/MultiAgentCoverage/Figs/PointOutsidePolygon.PNG}\\label{fig:PointOutsidePolygon}}\n%\\end{wrapfigure}\n%\\hspace{1em}\n%\\subfloat[Ray extended from point inside polygon intersects an odd number of times]{\\includegraphics[width=60mm]{Chapters/MultiAgentCoverage/Figs/PointInsidePolygon.PNG}\\label{fig:PointInsidePolygon}}\n%\\end{figure}\n\n\n%From a practical perspective, the region specified is defined over the earth's surface, which is not planar. For small regions, it can be assumed that this region is planar, since at \n\n%The generation of grid points takes this into account. The earth can be well described mathematically as an ellipsoid talk about WGS84\n\n\n\n\\subsection{Generating Waypoints as GPS Locations}\nFor the sake of real-world usage, it is necessary to refer to these points using some kind of reference coordinate system that RAVs can use. We provide a brief overview of how we accomplished this, without delving too far into Geographic Coordinate Systems (GCS). Since the \\textit{Global Positioning System} (GPS) is by far the most common system that RAVs use for navigation, we chose to design an implementation based on the \\textit{World Geodetic System 1984} (WGS84) coordinate system, which is the coordinate system that GPS relies upon. The WGS84 system was created by the US Department of Defense and is documented in the official government report entitled \"\\textit{Department of Defense World Geodetic System 1984 Its Definition and Relationships With Local Geodetic Systems}\" \\cite{ReportpreparedbytheDMAWGS84DevelopmentCommittee1991Department1984}. It assumes that the datum surface is an oblate spheroid.\n%, and the coordinate system uses Greenwich as the starting point. \nCoordinates are referenced using latitude, longitude and altitude, where longitude measurements range between [-180$^{\\circ}$, 180$^{\\circ}$], latitude measurements range between [-90$^{\\circ}$, 90$^{\\circ}$] and altitude measurements are given in meters above sea level. 
Greenwich is used as the \\textit{prime meridian} for longitude, meaning Greenwich defines 0$^{\\circ}$ longitude. The equator is used to define 0$^{\\circ}$ latitude. In order to generate a grid of points in a bounding polygon as coordinates that can be referenced by the RAVs' GPS sensors, we had to make some modifications to the grid point generation algorithm outlined above. The main issue that arose was that the grid point generation algorithm assumes that the Cartesian coordinate system is used, which is not valid when assuming coordinates lie on a non-planar surface (the earth). As an example, we present results quoted from Robert G. Chamberlain, reviewed on the comp.infosystems.gis newsgroup in October 1996 \\cite{GISL_LISTSERVER}:\n\n\"\\textit{If the distance is less than about 20 km (12 mi) and the locations of the\n two points in Cartesian coordinates are X1,Y1 and X2,Y2 then using Pythagoras' theorem with Cartesian coordinates can generate errors of \n\\begin{enumerate}\n    \\item less than 30 meters (100 ft) for latitudes less than 70 degrees\n    \\item less than 20 meters ( 66 ft) for latitudes less than 50 degrees\n    \\item less than  9 meters ( 30 ft) for latitudes less than 30 degrees\n\\end{enumerate}}\"\n\nThis is due to the non-uniform curvature of the earth and shows that the accuracy of using Cartesian coordinates with Pythagoras' theorem depends on the latitude at which it is applied. \n%We could have treated the WGS84 coordinates that make the bounding polygon defining the region of interest as a plane and continue using the unmodified Algorithm \\ref{alg:GridGeneration}, but \nFor the sake of guaranteed accuracy we decided to use the methods which were first described by \\citeauthor{Vincenty1975DirectEquations} \\cite{Vincenty1975DirectEquations}, which gave a far superior accuracy:\n\\\\\n\"\\textit{The  latitudes of standpoints were from 0\u00b0 to 80\u00b0 in increments of 10' and the distances were in multiples of 2000 km up to 18000 km, which gave 81 test  lines. The maximum  disagreement  was  0\u00b701 mm.}\". \n\\\\\nBased on this, we made the following minor changes to the algorithm for use with WGS84 coordinates:\n\\begin{enumerate}\n    \\item When calculating distances, the \\textit{inverse solution} \\cite{Vincenty1975DirectEquations} to finding geodesics (shortest paths along the earth's surface) was used. We also used the \\textit{direct solution} \\cite{Vincenty1975DirectEquations} to find a destination given a start point, bearing and distance. These algorithms take into account the non-planar nature of the WGS84 coordinate system. The inverse solution can be used in place of subtraction of Cartesian coordinates. It would replace the subtraction in lines 7 and 8 of Algorithm \\ref{alg:GridGeneration} and line 12 of Algorithm \\ref{alg:PointInPolygon}. The direct solution can be used in place of addition of distance to Cartesian coordinates and is used in line 11 of Algorithm \\ref{alg:GridGeneration}. A sketch of both operations is given after this list.\n    \n    \\item We created a specific WGS84 coordinate class, which can be used in place of regular Cartesian coordinates in Algorithms \\ref{alg:GridGeneration} and \\ref{alg:PointInPolygon}. We uploaded this to a publicly available  \\href{https://github.com/DavidLSmyth/WGS84Coordinate}{Github repository}\\footnote{\\href {https://github.com/DavidLSmyth/WGS84Coordinate}{https://github.com/DavidLSmyth/WGS84Coordinate}} with an MIT licence. If the user is not interested in generating grids which have very accurately spaced points, they can use the Cartesian solution, which approximates the region of interest as a plane; otherwise, the solution using Vincenty's equations from the point above can be applied.\n\\end{enumerate}
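\n\nAs a concrete illustration of the two operations, the Python library \\texttt{geopy} exposes an inverse and a direct geodesic solver (in current versions based on Karney's refinement of such geodesic methods); the following is a minimal sketch with arbitrary example coordinates, not the implementation used in this work:\n\\begin{verbatim}\nfrom geopy.distance import geodesic\n\n# two example WGS84 points as (latitude, longitude)\np1 = (53.2707, -9.0568)\np2 = (53.3498, -6.2603)\n\n# inverse problem: geodesic distance between two points,\n# replacing Cartesian subtraction\nprint(geodesic(p1, p2).kilometers)\n\n# direct problem: destination given start, bearing and distance,\n# replacing Cartesian addition (here 30 m due east)\nprint(geodesic(meters=30).destination(point=p1, bearing=90))\n\\end{verbatim}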
\n\n%The code which allows the user to generate grids is hosted on github at <> \\note{might be worth referencing how to pull it down using maven}\n\n\\subsection{User Interface and Results of Grid Generation Algorithm}\\label{subsec:SceneSurveyingUI}\n%to Facilitate Grid Generation\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{Chapters/MultiAgentCoverage/Figs/ShinyUI/UIWithButtons.PNG}\n\\caption{The UI developed to quickly generate uniformly spaced grid points in an arbitrary polygonal region to survey}\n\\label{fig:SceneSurveyingUI}\n\\end{figure}\nWe developed a User Interface (UI) in order to be able to quickly generate grids for use in simulations, as outlined in Section \\ref{sec:SceneSurveying}. We also used the UI to visually validate that the grid generation code was performing as expected. We chose to use the  \\href{https://shiny.rstudio.com/}{Shiny framework}\\footnote{\\href {https://shiny.rstudio.com/}{https://shiny.rstudio.com/}}, which is a web app development framework written in the \\href{https://www.r-project.org/}{R language}\\footnote{\\href {https://www.r-project.org/}{https://www.r-project.org/}}.\n%The Shiny framework is highly suited to prototyping web apps and abstracts away much of the more nuanced aspects of web development, at the cost of flexibility.\nWe used the \\href{https://rstudio.github.io/leaflet/}{Leaflet Package}\\footnote{\\href {https://rstudio.github.io/leaflet/}{https://rstudio.github.io/leaflet/}}, which provides a high-level interface to the \\href{https://wiki.openstreetmap.org/wiki/Develop}{Open Street Map}\\footnote{\\href {https://wiki.openstreetmap.org/wiki/Develop}{https://wiki.openstreetmap.org/wiki/Develop}} API, to provide the interface to a map with the WGS84 coordinate reference system. We added an \\textit{observeEvent} function to record when the user clicks on points on the map, adding an edge from the previously clicked point to the current clicked point. The process of sequentially selecting the points on the map defining a polygonal region can be seen in Figure \\ref{fig:AddClicks}. \n\n\\begin{figure}[h]\n\\centering\n\\subfloat{\\includegraphics[width=72mm]{Chapters/MultiAgentCoverage/Figs/AdditionalClicks.PNG}}\n%\\end{wrapfigure}\n\\hspace{0.2em}\n\\subfloat{\\includegraphics[width=72mm]{Chapters/MultiAgentCoverage/Figs/AdditionalClicksOne.PNG}}\n\\hspace{0.2em}\n\\subfloat{\\includegraphics[width=72mm]{Chapters/MultiAgentCoverage/Figs/AdditionalClicksTwo.PNG}}\n\\caption{Creating the bounding polygon using the UI}\n\\label{fig:AddClicks}\n\\end{figure}\n\nIn order to actually generate and draw the grid points on the user interface, we added a \"Show planned waypoints\" button, which can be seen in Figure \\ref{fig:SceneSurveyingUI}. When this button is clicked, the polygon defining the region of interest that the user has selected is sent to the grid generation code, as well as the desired latitude and longitude spacing between the generated grid points. The grid points are generated and written to a file. The user interface then reads the points from the file and overlays them on the map. Examples of visualisations of polygonal regions with grid points are shown in Figure \\ref{fig:GridPointsOnUI}. 
We open-sourced this code with an MIT licence on Github \\cite{SceneSurveyingUI}.\n\n\n\\begin{figure}[h]\n\\centering\n%\\subfloat[Grid points with a latitude spacing of 40m and a longitude spacing of 30m.]\n\\subfloat{\\includegraphics[width=72mm, height=65mm]{Chapters/MultiAgentCoverage/Figs/RegionOne.PNG}}\n%\\end{wrapfigure}\n\\hspace{0.2em}\n%\\subfloat[Grid points with a latitude spacing of 30m and a longitude spacing of 25m.]\n\\subfloat{\\includegraphics[width=72mm, height=65mm]{Chapters/MultiAgentCoverage/Figs/RegionTwo.PNG}}\n\\caption{Examples of generated grid points overlaid onto the UI}\n\\label{fig:GridPointsOnUI}\n\\end{figure}\n\n\n%\\note{Make sure to refer to target localisation}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "3d8f45bc90c16edb95954e7009dc3683a941d632", "size": 20337, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/MultiAgentCoverage/BoundingPolygon/GeneratingWaypointsInPolygon.tex", "max_stars_repo_name": "DavidLSmyth/ResearchMScThesis", "max_stars_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/MultiAgentCoverage/BoundingPolygon/GeneratingWaypointsInPolygon.tex", "max_issues_repo_name": "DavidLSmyth/ResearchMScThesis", "max_issues_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-18T11:59:42.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-18T11:59:42.000Z", "max_forks_repo_path": "Chapters/MultiAgentCoverage/BoundingPolygon/GeneratingWaypointsInPolygon.tex", "max_forks_repo_name": "DavidLSmyth/ResearchMScThesis", "max_forks_repo_head_hexsha": "754d975535e0da9a8e99cf31b651021698155c5b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.3269961977, "max_line_length": 1165, "alphanum_fraction": 0.7899886906, "num_tokens": 5020, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5622427127040006}}
{"text": "\\chapter{Heap Algorithms}\n\\Label{cha:heap}\n\nThe heap algorithms of the \\cxx Standard \nLibrary \\cite[28.7.7]{cxx-17-draft}\nwere already part of \\emph{\\acsl by Example} from 2010--2012.\nIn this chapter we re-introduce them and discuss---based on the\nbachelor thesis of one of the authors---the verification efforts in some \ndetail \\cite{Lapawczyk_2016_bachelor}.\n\n\nThe \\cxx standard\\footnote{\n  See \\url{http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3242.pdf}\n} introduces the concept of a \\emph{heap} as follows:\n\n\\begin{small}\n\\begin{quote}\n\\begin{enumerate}\n\\item A \\emph{heap} is a particular organization of elements in a range between two\nrandom access iterators \\inl{[a,b)}. Its two key properties are:\n\\begin{enumerate}\n\\item There is no element greater than \\inl{*a} in the range and\n\\item \\inl{*a} may be removed by \\inl{pop_heap()}, or a new element added by \\inl{push_heap()}, in\n       $O(\\log(N))$ time.\n\\end{enumerate}\n\\item These properties make heaps useful as priority queues.\n\\item \\inl{make_heap()} converts a range into a heap and \\inl{sort_heap()}\n      turns a heap into an increasing sequence.\n\\end{enumerate}\n\\end{quote}\n\\end{small}\n\n\nFigure~\\ref{fig:heap-overview} gives an overview on the five heap algorithms\nby means of an example.\nAlgorithms, which in a typical implementation are in a caller-callee relation, have the same color.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figures/heap-overview.pdf}\n\\caption{\\Label{fig:heap-overview}Overview on heap algorithms}\n\\end{figure}\n\n\\clearpage\n\nRoughly speaking, the algorithms from Figure~\\ref{fig:heap-overview} have\nthe following behavior.\n\n\\begin{itemize}\n\\item In \\S\\ref{sec:heap-concepts} we briefly recapitulate basic\n      heap concepts.\n\n\\item In \\S\\ref{sec:heap-acsl} we show how these heap concepts\n      can be described in \\acsl.\n\n\\item In \\S\\ref{sec:auxiliary-heap-functions} we verify two\n      auxiliary heap functions.\n\n\\item The algorithms \\isheapuntil and \\isheap from\n      \\S\\ref{sec:isheapuntil} and~\\S\\ref{sec:isheap}\n      allow to test at run time whether a given array is arranged as a heap\n\n\\item The algorithm \\pushheap from \\S\\ref{sec:pushheap} \\emph{adds} an\n        element to a given heap in such a way\n        that resulting array is again a heap\n\n\\item The algorithm \\popheap from \\S\\ref{sec:popheap}, on the other hand,\n        \\emph{removes} an element from a given heap in\n        such a way that the resulting array is again a heap\n\n\\item The algorithm \\makeheap from \\S\\ref{sec:makeheap} rearranges a given array\n        into a heap.\n\n\\item Finally, the algorithm \\sortheap from \\S\\ref{sec:sortheap} sorts a heap\n        into an increasing range.\n\\end{itemize}\n\n\nIn \\S\\ref{sec:heap-concepts} we present in more detail how heaps are defined.\nThe \\acsl logic functions and predicate that formalize the basic heap\nproperties of heaps are introduced in \\S\\ref{sec:heap-acsl}.\n\n\n\\clearpage\n\n\\input{heap/heap_concepts}\n\\input{heap/heap_acsl}\n\\input{heap/heap_parent_child}\n\\input{heap/is_heap_until}\n\\input{heap/is_heap}\n\\input{heap/fluctuations}\n\\input{heap/push_heap}\n\\input{heap/pop_heap}\n\\input{heap/make_heap}\n\\input{heap/sort_heap}\n\n", "meta": {"hexsha": "7fe98ccec54bf89404e88ee412ab9c937e9731bb", "size": 3184, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/heap/heap-algorithms.tex", 
"max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/heap/heap-algorithms.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/heap/heap-algorithms.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 32.824742268, "max_line_length": 99, "alphanum_fraction": 0.7434045226, "num_tokens": 919, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.76908023177796, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.5622427101088693}}
{"text": "\\section{Exogenous Model Specifications}\n\\label{appx: c}  \n\n\\begin{table}[ht!]\n    \\caption{Exogenous Model Specifications}\n    \\label{table:1}\n    \\centering\n    \\begin{tabular}{|| c | c c c c c ||}\n        \\hline\n        Model Reference & $\\lambda_m:\\lambda_w$  & $B_m,B_w$ & $F_m,F_w$ & $\\delta$ & $u(\\theta)$  \\\\ [0.5ex] \n        \\hline\\hline\n        \\autoref{fig:swiping-rule}          & --       & 20         & Uniform(0,1) & 0.95       & Linear       \\\\ \n        \\autoref{fig:discount-cs}           & --       & 20         & Uniform(0,1) & (0.87, 1)  & Linear       \\\\\n        \\autoref{fig:risk-cs}               & --       & 20         & Uniform(0,1) & 0.95       & CARA, with $r\\in [-15,15]$         \\\\\n        \\autoref{fig:mkt-cs}                & 6:1      & 10         & Beta(2,2)    & 0.97       & Logarithmic  \\\\\n        \\autoref{fig:mkt-cs-bdiff}          & 6:1      & 10,40      & Beta(2,2)    & 0.97       & Logarithmic  \\\\\n        \\autoref{fig:abm-conv}              & 2:1      & 10         & Beta(2,2)    & 0.97       & Logarithmic  \\\\\n        \\autoref{fig:abm-conv-ssize}        & 1:1      & 10         & Beta(2,2)    & 0.97       & Logarithmic  \\\\\n        \\autoref{fig:abm-br}                & 2:1, 1:1 & 10         & Beta(2,2)    & 0.97       & Logarithmic  \\\\ [1ex] \n        \\hline\n    \\end{tabular}\n\\end{table}", "meta": {"hexsha": "42be9d732bcdd0a2dff34d2860849c973e1a3d9d", "size": 1333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dissertation/appendices/ap-c.tex", "max_stars_repo_name": "patohdzs/project-tinder", "max_stars_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dissertation/appendices/ap-c.tex", "max_issues_repo_name": "patohdzs/project-tinder", "max_issues_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dissertation/appendices/ap-c.tex", "max_forks_repo_name": "patohdzs/project-tinder", "max_forks_repo_head_hexsha": "4a8c138a63e31fa36981a421863a1af5162519c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.5909090909, "max_line_length": 135, "alphanum_fraction": 0.4261065266, "num_tokens": 491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5622427049653538}}
{"text": "\\section{Introduction and motivation}\n\nThe models that we have studied until now are \\textbf{deterministic} : given an input a specific output is provided. Now, we want to change the point of view and study \\textbf{statistical models}, where we can deal with \\emph{probabilities} of each output. \n\nWe will now introduce Hidden Markv Models (HMMs) from a intuitive point of view.\\\\\n\nHMMs are probabilistic models that assume that the underlying system is a Markov process \\(X\\) with unobservable (or \\emph{hidden}) states. We assume that there is another \\textbf{observable} process \\(Y\\) whose outcomes we assume are influenced by the process \\(X\\). Our \\textbf{goal} is to determine the process \\(X\\) by observing \\(Y\\). In \\textbf{real world]} signals, we have observed outputs which we will want to model, so we can apply this paradigmn to our problem.\n\n\\begin{ndef}\nA \\textbf{latent variable} is a variable that is not directly observed but is rather infered from observed variables. If a model aims to explain observed variables in terms of latent variables is a \\textbf{latent variable models}.\n\\end{ndef}\n\nHMMs are a example of latent variable models.\n\n\\begin{example}\nLet us set in the speech recognition problem. In this topic we have the following features:\n\n\\begin{itemize}\n  \\item The \\emph{latent states} are the different symbols existing in the language.\n  \\item The observations are the features extracted from the voice (parameterization). Example: Linear Prediction coefficients of a short term fragment of speech (LPCC).\n\\end{itemize}\nIn this case, the goal would be to obtain the latent language symbols using the speech segment.\n\n\\begin{note}\n  The parameterization is also called feature extraction. In the speech recognition setting, it is usual to transform each audio window in a \\(\\mathbb R^n\\) vector, using the same \\(n \\in \\mathbb N\\) for each window. The coefficients of an LPC filter or a MFCC filter are examples of extracted features.\n\\end{note}\n\\end{example}\n\n\\subsection{HMMs for Speech recognition}\n\nOne particular way of modeling speech is by using HMMs. In this case, we assume that the speech sequence is generated by a Markov model as the one shown in Figure \\ref{fig:hmm:speech},\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.3]{Figures/ExampleSpeechHmm}\n\\caption{HMM Speech model.}\n\\label{fig:hmm:speech}\n\\end{figure}\n\nThis finite state machine changes state every time unit and each time \\(t\\) that we jump from state \\(i\\) to state \\(j\\), a speech vector \\(\\mathbf{o}_t\\) is generated from the p.d.f. \\(b_j(\\mathbf{o}_t)\\). Furthermore, the transition from state \\(i\\) to state \\(j\\) is also probabilistic and it is modeled by the discrete probability \\(a_{ij}\\).\n\n\\begin{ndef}\nA HMM is \\textbf{ergodic} if \\(a_{ij} > 0\\) for all \\(i,j\\).\n\\end{ndef}\n\n\\begin{example}\nAn example of \\textbf{left to right} HMM for speech modeling is called \\emph{Bakis}. The example of HMM represented in Figure \\ref{fig:hmm:speech} is a left-to-right HMM. In these models, we can not go backwards (typical in temporal information processing) and a phone is typically represented using between \\(3\\) and \\(5\\) states.\n\\end{example}\n\n\\subsection{Gaussian Mixture Models as HMMs}\n\n\\textbf{Gaussian Mixture Models (GMMs)} tries to fit a multivariate normal vector to a set of data. 
\n\nIf we have a dataset and we assume that it is Gaussian, it is easy to learn its mean and variance. However, learning the whole mixture is not so simple, since \\textbf{we do not know} to which mixture component each point belongs. \\\\\n\nWith this in mind, our problem is \\textbf{the training}. To train, we need:\n\n\\begin{enumerate}\n  \\item To associate observations with HMM states (\\textbf{state alignment}).\n  \\item A \\emph{score} function that evaluates the fit of a model.\n\\end{enumerate}\n\nAfter fitting the model, we will be able to generate observations from the model or recognise a non-labeled speech sentence.\n\n\\begin{example}\nThis example is again related to the speech recognition topic. Consider that we want to \\textbf{recognize isolated digits} that have been spoken out (\"one\", \"two\",...).\n\\begin{enumerate}\n\\item To \\textbf{train}, we would need labeled observations, and we would use them to train a GMM-HMM for each digit, using all digit observations.\n\\item To \\textbf{recognize} a digit, having a sequence of observations that are not labeled, we would compare this sequence with the trained HMMs and choose the model that best fits the pronounced digit.\n\\end{enumerate}\n\\end{example}\n\n\\section{HMMs: Formal definition}\n\nLet us make use of the previous intuitions in order to give a formal definition of HMMs. The elements of an HMM are:\n\n\\begin{itemize}\n\\item We will consider that we have \\(N\\) \\textbf{states}, \\(\\{S_1,\\dots,S_N\\}\\). At time \\(t\\), we will be at state \\(q_t\\), which is a priori unknown. If we know that at time \\(t\\) the state is at state \\(S_i\\), we will write \\(q_t = S_i\\).\n\n\\item We will have \\(M\\) possible \\textbf{observable symbols}, \\(\\{v_1,\\dots,v_M\\}\\). We denote the observation at time \\(t\\) as \\(O_t\\), which is also a priori unknown. If we know that at time \\(t\\) the observation is \\(v_j\\), we write \\(O_t = v_j\\).\n\n\\item A \\textbf{transition matrix}, \\(A = \\{a_{ij}\\}\\), which contains the probability of transitioning from state \\(i\\) to state \\(j\\), that is:\n\\[\na_{ij} = P[q_{t+1} = S_j \\ | \\ q_t = S_i], \\quad 1 \\leq i,j \\leq N.  \n\\]\n\n\\item We have a \\textbf{likelihood} \\(B = \\{b_j(k)\\}\\) of an observation \\(v_k\\) happening in each state \\(S_j\\), that is:\n\\[\nb_j(k) = P\\left[ v_k \\ \\text{at} \\ t \\ | \\ q_t = S_j\\right], \\quad 1 \\leq j \\leq N, 1 \\leq k \\leq M.\n\\]\n\n\\item An \\textbf{a priori} distribution of each state \\(\\pi = \\{\\pi_i\\}\\), where\n\\[\n\\pi_i = P \\left[q_1 = S_i\\right], \\quad  1 \\leq i \\leq N.  \n\\]\n\\end{itemize}\n\n\\begin{ndef}\nA \\textbf{Hidden Markov Model} is defined by its parameters \\(\\lambda = (A,B,\\pi)\\).\n\\end{ndef}\n\nHMMs can be seen as \\textbf{generative models}. Consider the following \\textbf{algorithm}:\n\n\\begin{enumerate}\n\\item Choose an initial state \\(q_1 = S_i\\), following the a priori state distribution \\(\\pi\\).\n\\item Set the time to \\(t = 1\\).\n\\item Choose an observation \\(O_t = v_k\\) using the likelihood distribution \\(b_i\\) of the observations at the state \\(S_i\\).\n\\item Jump to a new state \\(q_{t+1}\\) using the probability of jumping from state \\(i\\) to state \\(j\\), \\(a_{ij}\\).\n\\item Increment the time \\(t = t+1\\) and return to step 3 if more observations are wanted.\n\\end{enumerate}
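\n\nA direct Python transcription of this sampling scheme (a sketch with names of our own choosing; states and symbols are indexed from 0) could be:\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_hmm(pi, A, B, T, seed=0):\n    # pi: a priori state distribution, A: transition matrix,\n    # B[i, k]: probability of emitting symbol k from state i\n    rng = np.random.default_rng(seed)\n    states, observations = [], []\n    q = rng.choice(len(pi), p=pi)           # step 1: initial state\n    for _ in range(T):                      # steps 2 and 5: time loop\n        o = rng.choice(B.shape[1], p=B[q])  # step 3: emit a symbol\n        states.append(q)\n        observations.append(o)\n        q = rng.choice(len(pi), p=A[q])     # step 4: jump to a new state\n    return states, observations\n\nA = np.array([[0.9, 0.1], [0.2, 0.8]])\nB = np.array([[0.7, 0.3], [0.1, 0.9]])\npi = np.array([0.5, 0.5])\nprint(sample_hmm(pi, A, B, T=5))\n\\end{verbatim}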
\n\nWe want to solve a few problems using HMMs. We will study them one by one:\n\n\\subsection{Scoring problem}\n\nThe problem can be stated as:\n\\begin{quote}\nGiven a sequence of observations \\(O = O_1,\\dots,O_T\\) and an HMM model \\(\\lambda = (A,B,\\pi)\\), how do we compute the likelihood of observing that sequence, \\(P(O| \\lambda)\\)?\n\\end{quote}\n\n\\subsubsection{Analytical solution}\n\nConsidering the given HMM \\(\\lambda = (A,B,\\pi)\\), we want to compute \\(P(O|\\lambda)\\) where \\(O = O_1\\dots O_T\\). We consider that \\textbf{a sequence of states has occurred}. Let the sequence of states be \n\\[\nQ = q_1 \\dots q_T,  \n\\]\nwhich is unknown. Assuming \\textbf{independent observations}, we have that\n\\[\nP(O|Q,\\lambda) = \\prod_{t=1}^T P(O_t  \\  |  \\  q_t, \\lambda) = \\prod_{t=1}^T b_{q_t}(O_t).  \n\\]\nAlso, using the conditional probability we have that:\n\\[\nP(O  \\  |  \\  Q,\\lambda) = \\frac{P(O,Q|\\lambda)}{P(Q \\ | \\ \\lambda)} \\implies P(O,Q | \\lambda) = P(O \\ | \\ Q,\\lambda)P(Q \\ | \\ \\lambda).\n\\]\nNow, since we have a Markov model, the probability of jumping to a state only depends on the actual state, so\n\\[\nP(Q \\ | \\ \\lambda) = \\pi_{q_1} a_{q_1 q_2}\\dots a_{q_{T-1}q_T}.  \n\\]\nLastly, since the probability \\(P(O|\\lambda)\\) can be obtained by summing the last probability over all possible state sequences, we have that\n\\begin{align*}\nP(O|\\lambda) & = \\sum_{\\text{all } Q} P(O|Q,\\lambda) P(Q|\\lambda)\\\\\n& = \\sum_{q_1,\\dots,q_T} \\pi_{q_1}b_{q_1}(O_1) \\  a_{q_1 q_2}b_{q_2}(O_2)\\dots a_{q_{T-1}q_T}b_{q_T}(O_T)\n\\end{align*}\n\nThis is the analytical solution of the problem. However, this solution has a clear \\textbf{problem}: the sum ranges over all \\(N^T\\) possible state sequences, so it is very computationally expensive and does not scale.\n\n\\subsubsection{Forward/Backward Algorithm}\n\nWe will now define a more computationally efficient algorithm to solve the scoring problem.\n\nConsider the probability of observing the sequence \\(O = O_1\\dots O_t\\) until instant \\(t\\) and of being at state \\(S_i\\) at instant \\(t\\). This is called the \\textbf{forward variable} and it can be written as:\n\\[\n\\alpha_t(i) = P(O_1\\dots O_t, q_t = S_i| \\lambda). \n\\]\n\nThis forward variable can be computed using the following \\textbf{algorithm}:\n\n\\begin{enumerate}\n\\item Initialization:\n\\[\n\\alpha_1(i) = \\pi_i b_i(O_1), \\quad 1 \\leq i \\leq N.  \n\\]\n\\item Induction:\n\\[\n\\alpha_{t+1}(j) = \\left[\\sum_{i = 1}^N \\alpha_t(i)a_{ij}\\right]b_j(O_{t+1}), \\quad 1 \\leq t \\leq T-1, \\ 1 \\leq j \\leq N.  \n\\]\n\\item Termination:\n\\[\nP(O|\\lambda) = \\sum_{i = 1}^N \\alpha_T(i).  \n\\]\n\\end{enumerate}
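\n\nThe three steps translate directly into code; here is a minimal \\texttt{numpy} sketch of the forward pass (our own naming, reusing the matrices \\(A\\), \\(B\\) and the vector \\(\\pi\\) from the definition above):\n\\begin{verbatim}\nimport numpy as np\n\ndef forward_likelihood(pi, A, B, O):\n    # alpha[i] = P(O_1 ... O_t, q_t = S_i | lambda)\n    alpha = pi * B[:, O[0]]            # initialization\n    for o in O[1:]:\n        alpha = (alpha @ A) * B[:, o]  # induction\n    return alpha.sum()                 # termination: P(O | lambda)\n\nA = np.array([[0.9, 0.1], [0.2, 0.8]])\nB = np.array([[0.7, 0.3], [0.1, 0.9]])\npi = np.array([0.5, 0.5])\nprint(forward_likelihood(pi, A, B, O=[0, 1, 1, 0]))\n\\end{verbatim}\nIn practice, the \\(\\alpha_t(i)\\) are scaled (or log-probabilities are used) to avoid numerical underflow for long observation sequences.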
\n\\]\n\\end{enumerate}\n\nAs we can see, we are computing the forward variable \\(\\alpha_{t+1}\\) using the previous forward variables \\(\\alpha_t\\) and the probabilities of observation and transition.\\\\\n\n\nRemember that the state \\(S_j\\) can be reached at time \\(t+1\\) from \\(N\\) different states \\(S_i\\), \\(1 \\leq i \\leq N\\) at time \\(t\\). Now, since \\(\\alpha_t(i)\\) is the probability of the join of the observations \\(O_1\\dots O_t\\) and the state at time \\(t\\) is \\(S_i\\),\n\\textbf{the product} \\(\\alpha_t(i)a_{ij}\\) is the probability of the join event that \\(O_1\\dots O_t\\) are observed, the state \\(S_j\\) is reached at time \\(t+1\\) is reached from the state \\(S_i\\) at time \\(t\\). \\\\\n\nWhen summing this product over all possible states \\(S_i\\), \\(1\\leq i \\leq N\\), results in the probability of \\(S_j\\) at time \\(t+1\\) with all the previous partial observation. This way we obtain \\(S_j\\).\\\\\n\nNow that we know \\(S_j\\), we obtain \\(\\alpha_{t+1}(j)\\) by multiplying the summed quantity by the probability \\(b_j(O_{t+1})\\), and this is done \\textbf{for all states j}, \\(1 \\leq j \\leq N\\) for a given \\(t\\).\\\\\n\nWith this algorithm, we can only compute \\textbf{the probability of an observation given the model}. We can alternatively use the \\textbf{backward variable}, that represent the probability of observing the partial sequence \\(O_{t+1}\\dots O_T\\) from instant \\(t+1\\) and being in the state \\(S_i\\) at instant \\(t\\)\n\\[\n\\beta_t(i) = P(O_{t+1}\\dots O_T|q_t = S_i , \\lambda). \n\\] \n\nWe can compute \\(\\beta_T(i)\\) inductively as follows:\n\\begin{enumerate}\n\\item Initialization:\n\\[\n\\beta_T(i) = 1, \\quad 1\\leq i \\leq N.  \n\\]\nThis initialization is arbitrary.\n\n\\item Induction:\n\\[\n\\beta_t(i) =  \\sum_{j=1}^N a_{ij}b_j(O_{t+1})\\beta_{t+1}(j),   \n\\]\nwith \\(t = T-1,T_2, \\dots, 1\\), \\(1 \\leq i \\leq N\\).\n\\end{enumerate}\n\n\n\\subsection{State recognition problem}\n\nThe problem that we will now treat is the following one:\n\n\\begin{quote}\nGiven a sequence of observations \\(O = O_1\\dots O_T\\) and a HMM model \\(\\lambda\\), how do we choose a state sequence \\(Q = q_1\\dots q_T\\) which is optimal in some meaninful sense? (that is, best explains the observations).\n\\end{quote}\n\nAs we have mentioned, we have to way of measuring how optimal is a state sequence. We have a few possibilities, let us present them.\n\n\\begin{itemize}\n\\item Choosing the states \\(q_t\\) which are \\textbf{individually most likely}. This criterion maximizes the \\textbf{expected number} of correct individual states. To obtain this, we define \\(\\gamma_t(i)\\) as the probability of being in state \\(S_i\\) at time \\(t\\), given the observation sequence \\(O\\) and the model \\(\\lambda\\). That is:\n\\[\n\\gamma_t(i) = P(q_t = S_i \\ | \\ O,\\lambda).  
\n\\]\n\\end{itemize}", "meta": {"hexsha": "971f7443d90fb8b1506e3744657291e2974596dd", "size": 11904, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/2-hmms.tex", "max_stars_repo_name": "fjsaezm/pit-notes", "max_stars_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/2-hmms.tex", "max_issues_repo_name": "fjsaezm/pit-notes", "max_issues_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/2-hmms.tex", "max_forks_repo_name": "fjsaezm/pit-notes", "max_forks_repo_head_hexsha": "b8e86e0fe811cbf7553f216c416125b665c53851", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.3811659193, "max_line_length": 473, "alphanum_fraction": 0.7094254032, "num_tokens": 3522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5622427049653537}}
{"text": "\\newcommand*{\\yts}{y_t^\\ast}\n\\newcommand*{\\ets}{\\varepsilon_t^\\ast}\n\n\\section{Model}\n\nThe Stochastic Volatility model was introduced in the seminal work of~\\citet{Taylor1982}.\nBy choosing SV, one aims at capturing time varying and seasonal volatility using an AR(1) process.\nThe model used in this thesis is the Stochastic Volatility with leverage, which, additional to the AR(1) process, also models the leverage effect by letting the stock return and the increment of the log variance have a constant correlation.\n\n\\subsection{Formulation}\n\nThe SV model with leverage is, as formulated in~\\citet{Omori2007},\n\\begin{equation}\n\\begin{alignedat}{2}\\label{form:orig_model}\ny_t & = \\varepsilon_t\\exp\\left(h_t/2\\right), && \\quad t=1,\\dots,T, \\\\\nh_{t+1} & = \\mu+\\phi(h_t-\\mu)+\\eta_t, && \\quad t=1,\\dots,T-1, \\\\\n\\begin{pmatrix}\n\\varepsilon_t \\\\\n\\eta_t\n\\end{pmatrix}\n\\bigg\\vert\\left(\\rho,\\sigma\\right) & \\sim\\text{ i.i.d. }\\mathcal{N}_2\\left(\\bm{0},\\bm{\\Sigma}\\right), && \\quad t=1,\\dots,T-1, \\\\\n\\varepsilon_T &\\sim\\mathcal{N}(0,1), \\\\\n\\bm{\\Sigma} & =\n\\begin{pmatrix}\n1 & \\rho\\sigma \\\\\n\\rho\\sigma & \\sigma^2\n\\end{pmatrix},\n\\end{alignedat}\n\\end{equation}\nwhere $T$ is the number of time points, the only observed variables are the demeaned log returns, $y_t$.\nThe return at time $t$ is thus conditionally normally distributed, given $h_t$.\nThe log variance, $\\bm{h}$, is a latent vector, and it constitutes an AR(1) process with mean $\\mu$, persistence $\\phi$ and variance $\\sigma^2$.\nLeverage is the fourth parameter, $\\rho$, which is the correlation between $\\varepsilon_t$ and $\\eta_t$, i.e.\\ the increment of the stock price and the increment of the log variance.\n\nThe first equation in~\\eqref{form:orig_model} is not linear in $h_t$, which makes the model difficult to estimate. For the ease of notation, let\n\\begin{align*}\n\\yts &=\\log(y^2_t), \\\\\nd_t &=I(y_t\\ge0)-I(y_t<0), \\qquad\\text{$y_t$'s sign,} \\\\\n\\ets &=\\log(\\varepsilon^2_t),\n\\end{align*}\nthus knowing $y_t$ is equivalent to knowing the pair $(\\yts, d_t)$\\footnote{Except for the case $\\{y_t=0\\}$, which is a null set in the model, and it causes identifiability issues for $h_t$. In practice, $\\yts =\\log(y^2_t+\\epsilon)$ is used with some small $\\epsilon$ for robustness.}. By storing $d_t$ and applying $x\\mapsto\\log(x^2)$ to the first equation of~\\eqref{form:orig_model} we get the linearised form,\n\\begin{align}\n\\begin{split}\\label{form:lin_model}\n\\yts & = h_t+\\ets, \\\\\nh_{t+1} & = \\mu+\\phi(h_t-\\mu)+\\eta_t,\n\\end{split}\n\\end{align}\nwhere the error term of the first equation has a $\\log(\\chi_1^2)$ distribution. 
\n\n\\subsubsection*{Other forms}\n\nThe SV model with leverage was formulated differently in~\\citet{Jacquier2004}, where $\\varepsilon_t$ and $\\eta_{t-1}$ are correlated.\nA comparison provided in~\\citet{yu2005leverage} revealed that model~\\eqref{form:orig_model} is more attractive as it is an Euler approximation to the log-normal Ornstein--Uhlenbeck process.\nHence, the method that fits~\\eqref{form:orig_model} also fits the corresponding continuous time process with discretely sampled data.\nMoreover, in the alternative specification, $y_t$ is not a martingale difference sequence, and $\\rho$ has two roles: leverage and the skewness of $y_t\\mid y_1,\\dots,y_{t-1}$, which makes it more difficult to interpret its value.\nFinally, an empirical comparison also showed the model by~\\citet{Jacquier2004} to be inferior to~\\eqref{form:orig_model}.\nFor these reasons, we use~\\eqref{form:orig_model} in the paper at hand.\n\n\\subsection{Estimation without leverage}\n\nSV models are an attractive alternative to GARCH-type models, the main difference\\footnote{For a more in-depth comparison see~\\citet{Harvey1994}.} being that while the volatility of GARCH at $t+1$ is conditionally deterministic, given the information known at $t$, it is random in SV.\nOn the one hand, this lets SV fit the data better in some cases~\\citep{Kim1998,kastner2016dealing,Chan2016}; on the other hand, it makes its estimation more difficult. In the following parts, the fitting methods for SV without leverage considered in the literature are briefly summarised.\n\n\\subsubsection{Maximum likelihood estimation}\n\nLet $\\bm{y}=(y_1,\\dots,y_n)$.\nIn order to obtain an ML estimate for $(\\phi,\\sigma^2,\\rho,\\mu)$, we need to evaluate the likelihood function $\\ell(\\phi,\\sigma^2,\\rho,\\mu\\mid\\bm{y})$, for which we need to integrate over the space of the vector $\\bm{h}$.\nThis is unfortunately difficult due to the non-linear dependence between $y_t$ and $h_t$, or, in the linear form, due to the non-Gaussian error term $\\ets$.\n\nThe issue of non-normality was resolved in~\\citet{Harvey1994} using a Gaussian approximation to $\\ets$, i.e.\\ by matching the first two moments of the $\\log(\\chi_1^2)$ distribution.\nIn the resulting approximate model, a quasi-maximum likelihood estimate can be obtained by maximising a so-called quasi-likelihood function.\nThat function is the result of the application of the Kalman filter that integrates over the vector $\\bm{h}$.\nThis estimator is consistent and asymptotically normally distributed, but it has bad performance in small samples because $\\ets$ is poorly approximated by the normal distribution~\\citep{Kim1998}.\n\n\\subsubsection{Bayesian approach}\n\nThe lack of a closed-form likelihood function also means that there are no closed-form posteriors for the model.\nThis suggests the usage of Markov chain Monte Carlo (MCMC) algorithms, which, with the help of Markov chains and Bayes' theorem, make it possible to draw samples from the posterior distribution of the latent variables and the parameters.\nWith enough such samples we get a picture of these distributions.\nFor an introduction, see, e.g.,~\\citet{Geyer2011}.\n\nIn~\\citet{Kim1998}, two different Bayesian ideas were compared for SV without leverage, based on how the latent variables are sampled.\nA single move (one-at-a-time volatility update) sampler was introduced first. 
It draws $h_t$ from $h_t\\mid\\bm{h}_{-t},\\bm{y},\\phi,\\sigma^2,\\mu$ one by one, where $\\bm{h}_{-t}$ is $\\bm{h}$ excluding $h_t$.\nDue to the high intercorrelation in $\\bm{h}$, slow convergence and poor mixing characterise this approach even though the algorithm used by~\\citet{Kim1998} performs better than the other ones in the literature~\\citep{shephard1993fitting,jacquier2002bayesian,shephard1994comment,shephard1997likelihood,geweke1994bayesian}.\n\nTo avoid the issues with high intercorrelation in $\\bm{h}$, a multi-move sampler was used that draws $\\bm{h}$ from $\\bm{h}\\mid\\bm{y},\\phi,\\sigma^2,\\mu$ at once.\nBy approximating the marginal distribution of $\\ets$ with a $K=7$ component mixture of normals,~\\citet{Kim1998} managed to reduce the task to the known framework of conditionally Gaussian state spaces\\footnote{For an introduction see~\\citet{Kitagawa1996}.}.\nSince the marginal of $\\ets$ does not include any model-dependent values, this mixture of normals can be specified before fitting the model.\nThe approximation errors to the original SV model can be corrected for by a reweighting scheme.\nHowever, this correction is known to have minor influence due to the good choice of the mixture approximation~\\citep{Kim1998}.\n\n\\subsection{Approximate model}\n\nBoth the single move and the multi-move samplers were generalised to SV with leverage in~\\citet{Omori2007}, and a particle filtering\\footnote{For an introduction see~\\citet{Johannes2009}.} method was also derived.\nIn this work, we favor MCMC methods over sequential Monte Carlo (particle filtering) due to the availability of computers with a multitude of strong processors.\nFinally, due to its better sampling efficiency, the multi-move sampler was chosen over the single move sampler for fitting the model.\n\n\\subsubsection{Bivariate normal approximation}\n\n\\begin{table}[t!]\n\t\\centering\n\t\\begin{tabular}{cccccc}\n\t\t$j$                       & $p_j$    & $m_j$      & $v_j^2$ & $a_j$    & $b_j$    \\\\ \\hline\n\t\t\\multicolumn{1}{l|}{1}  & 0.00609 & 1.92677   & 0.11265                & 1.01418 & 0.50710 \\\\\n\t\t\\multicolumn{1}{l|}{2}  & 0.04775 & 1.34744   & 0.17788                & 1.02248 & 0.51124 \\\\\n\t\t\\multicolumn{1}{l|}{3}  & 0.13057 & 0.73504   & 0.26768                & 1.03403 & 0.51701 \\\\\n\t\t\\multicolumn{1}{l|}{4}  & 0.20674 & 0.02266   & 0.40611                & 1.05207 & 0.52604 \\\\\n\t\t\\multicolumn{1}{l|}{5}  & 0.22715 & -0.85173  & 0.62699                & 1.08153 & 0.54076 \\\\\n\t\t\\multicolumn{1}{l|}{6}  & 0.18842 & -1.97278  & 0.98583                & 1.13114 & 0.56557 \\\\\n\t\t\\multicolumn{1}{l|}{7}  & 0.12047 & -3.46788  & 1.57469                & 1.21754 & 0.60877 \\\\\n\t\t\\multicolumn{1}{l|}{8}  & 0.05591 & -5.55246  & 2.54498                & 1.37454 & 0.68728 \\\\\n\t\t\\multicolumn{1}{l|}{9}  & 0.01575 & -8.68384  & 4.16591                & 1.68327 & 0.84163 \\\\\n\t\t\\multicolumn{1}{l|}{10} & 0.00115 & -14.65000 & 7.33342                & 2.50097 & 1.25049\n\t\\end{tabular}\n\t\\caption{Constants of the bivariate approximation~\\citep{Omori2007}.}\n\t\\label{tab:constants}\n\\end{table}\n\nDue to the correlation between $\\varepsilon_t$ and $\\eta_t$, approximating $\\ets$ affects $\\eta_t$ as well.\nThus,~\\citet{Omori2007} used a mixture of bivariate normals as an approximation to the conditional distribution of the pair $(\\ets, \\eta_t)$.\nLet $(\\xi_t,\\nu_t\\mid d_t,\\rho,\\sigma)$ denote the approximate random variable to $(\\ets,\\eta_t\\mid 
d_t,\\rho,\\sigma)$, and let $\\pi(X)$ denote the density of the random variable $X$ and $\\pi(X=x)$ denote its value at $x$.\n\nIn order to derive the approximation, the bivariate conditional density is decomposed first, and then the parts are separately approximated,\n\\begin{align}\n\\pi(\\ets,\\eta_t\\mid d_t,\\rho,\\sigma) &= \\pi(\\ets\\mid d_t,\\rho,\\sigma) \\pi(\\eta_t\\mid\\ets,d_t,\\rho,\\sigma)\\nonumber \\\\\n&= \\pi(\\ets) \\pi(\\eta_t\\mid\\ets,d_t,\\rho,\\sigma)\\label{eq:decomp},\n\\end{align}\nbecause the marginal of $\\ets$ is independent of $d_t$, $\\rho$, and $\\sigma$.\n\\citet{Omori2007} now used an improved normal mixture approximation to the marginal $\\pi(\\ets)$ with $K=10$,\n\\begin{equation}\\label{eq:ets}\n\\pi(\\xi_t)\\triangleq\\sum_{j=1}^{10}p_j\\pi\\left(\\mathcal{N}\\left(m_j,v_j^2\\right)\\right),\n\\end{equation}\nwhere $\\pi\\left(\\mathcal{N}\\left(m_j,v_j^2\\right)\\right)$ denotes the normal density with mean $m_j$ and variance $v_j^2$.\nThe constants $m_j$, $p_j$, and $v_j$ were found by matching the first four moments of $\\ets$ and $\\exp(\\ets)$ with the moments of $\\xi_t$ and $\\exp(\\xi_t)$, and then applying a non-linear optimiser to minimise the distance of the densities~\\citep{Kim1998}.\nThe constants are specified in Table~\\ref{tab:constants}.\n\nThe conditional distribution of $\\eta_t$ is\n\\begin{equation*}\n\\eta_t\\mid\\ets,d_t,\\rho,\\sigma\\sim\\mathcal{N}\\left(d_t\\rho\\sigma\\exp(\\ets/2),\\sigma^2\\left(1-\\rho^2\\right)\\right),\n\\end{equation*}\nthus, we could use\n\\begin{equation}\\label{eq:eta}\n\\nu_t\\mid\\xi_t,d_t,\\rho,\\sigma\\sim\\mathcal{N}\\left(d_t\\rho\\sigma\\exp(\\xi_t/2),\\sigma^2\\left(1-\\rho^2\\right)\\right),\n\\end{equation}\nbut the term $\\exp(\\xi_t/2)$ introduces difficulties.\nThese are mitigated by a linear approximation\n\\begin{equation}\\label{eq:etslinear}\n\\exp(\\xi_t/2)\\approx\\exp(m_j/2)(a_j+b_j(\\xi_t-m_j))\n\\end{equation}\nwhen the $j$th mixture component is used for $\\xi_t$, i.e.\\ $\\mathcal{N}(m_j,v_j^2)$.\nThe constants $a_j$ and $b_j$ are the results of the mean square norm minimisation,\n\\begin{equation*}\n\\E{\\left[\\exp(\\xi_t/2)-\\exp(m_j/2)(a+b(\\xi_t-m_j))\\right]^2}\n\\end{equation*}\nw.r.t.\\ $a$ and $b$, separately for each $j$, and they are listed in Table~\\ref{tab:constants}.\n\nBy combining~\\eqref{eq:decomp},~\\eqref{eq:ets},~\\eqref{eq:eta}, and~\\eqref{eq:etslinear}, the final approximation is\n\\begin{align*}\n\\pi(\\ets,\\eta_t\\mid d_t,\\rho,\\sigma) &\\approx \\pi(\\xi_t,\\nu_t\\mid d_t,\\rho,\\sigma) \\\\\n&= \\pi(\\xi_t)\\pi(\\nu_t\\mid\\xi_t,d_t,\\rho,\\sigma) \\\\\n&= \\sum_{j=1}^{10}p_j\\pi\\left(\\mathcal{N}\\left(m_j,v_j^2\\right)\\right)\\times \\\\\n&\\times\\pi\\left(\\mathcal{N}\\left(d_t\\rho\\sigma\\exp(m_j/2)(a_j+b_j(\\xi_t-m_j)),\\sigma^2\\left(1-\\rho^2\\right)\\right)\\right).\n\\end{align*}\n\n\\subsubsection[State space form]{Conditional Gaussian state space form}\n\nDue to the normal mixture approximation, a new variable $\\bm s=(s_1,\\dots,s_T)$ is included in the model, the vector of mixture components. There is one component $s_t\\in\\{1,\\dots,K\\}$ for each time point $t$. 
\n\n\\subsubsection[State space form]{Conditional Gaussian state space form}\n\nDue to the normal mixture approximation, a new variable $\\bm s=(s_1,\\dots,s_T)$ is included in the model, the vector of mixture components. There is one component $s_t\\in\\{1,\\dots,K\\}$ for each time point $t$. Given $s_t=j$, a linear model with normal errors is obtained,\n\\begin{equation}\n\\begin{alignedat}{2}\\label{form:appr_model}\n\\yts & = h_t+\\xi_t, && \\quad t=1,\\dots,T, \\\\\nh_{t+1} & = \\mu+\\phi(h_t-\\mu)+\\nu_t, && \\quad t=1,\\dots,T-1, \\\\\n\\begin{pmatrix}\n\\xi_t \\\\\n\\nu_t\n\\end{pmatrix} &\\overset{d}{=}\n\\begin{pmatrix}\nm_j \\\\\na_j\\gamma_t^j\n\\end{pmatrix} +\n\\begin{pmatrix}\nv_j & 0 \\\\\nb_jv_j\\gamma_t^j & \\sigma\\sqrt{1-\\rho^2}\n\\end{pmatrix}\n\\begin{pmatrix}\nz_t^1 \\\\\nz_t^2\n\\end{pmatrix}, && \\quad t=1,\\dots,T-1, \\\\\n\\xi_T &\\overset{d}{=} m_j+v_jz_T^1, \\\\\n\\begin{pmatrix}\nz_t^1 \\\\\nz_t^2\n\\end{pmatrix}\n&\\sim\\text{ i.i.d. }\\mathcal{N}_2\\left(\\bm{0},\\bm{I_2}\\right), && \\quad t=1,\\dots,T,\n\\end{alignedat}\n\\end{equation}\nwhere $\\gamma_t^j\\triangleq d_t\\rho\\sigma\\exp(m_j/2)$ and ``$\\overset{d}{=}$'' means equivalence in distribution. Note that $(\\xi_t,\\nu_t)$ depends on $d_t,\\rho,\\sigma$, and $s_t$.\n\nIn order to reduce the estimation of model~\\eqref{form:appr_model} to the estimation of a well-known framework, the first two equations of~\\eqref{form:appr_model} are reformulated equivalently as\n\\begin{equation}\n\\begin{alignedat}{2}\\label{form:gauss_model}\n\\begin{pmatrix}\n\\yts \\\\\nh_{t+1} \\\\\n\\tilde{\\mu}_{t+1}\n\\end{pmatrix} &=\n\\begin{pmatrix}\nh_t \\\\\n\\tilde{\\mu}_t+\\phi(h_t-\\tilde{\\mu}_t) \\\\\n\\tilde \\mu_t\n\\end{pmatrix} +\n\\begin{pmatrix}\n\\xi_t \\\\\n\\nu_t \\\\\n0\n\\end{pmatrix}, \\quad t=1,\\dots,T-1, \\\\\ny_T^\\ast &= h_T+\\xi_T.\n\\end{alignedat}\n\\end{equation}\nThe error $(\\xi_t,\\nu_t,0)$ is a (degenerate) normal white-noise series. Hence, by assuming a Gaussian prior for $\\left(h_1,\\tilde\\mu_1\\right)$, model~\\eqref{form:gauss_model} becomes a linear Gaussian state space (GSS) with hidden state $(h_t,\\tilde{\\mu}_t)$.\nWe copy the priors of the initial latent state used by~\\citet{Omori2007}, for arbitrary constants $\\mu_0$ and $\\sigma_0$,\n\\begin{equation*}\n\\begin{pmatrix}\nh_1 \\\\\n\\tilde\\mu_1\n\\end{pmatrix} \\sim\n\\mathcal{N}\\left(\n\\begin{pmatrix}\n\\mu_0 \\\\\n\\mu_0\n\\end{pmatrix},\n\\begin{pmatrix}\n\\sigma^2/(1-\\phi^2)+\\sigma_0^2 & \\sigma_0^2 \\\\\n\\sigma_0^2 & \\sigma_0^2\n\\end{pmatrix}\n\\right).\n\\end{equation*}\nNote that $\\tilde{\\mu}_t$ is constant through $t$ in~\\eqref{form:gauss_model}, and $\\mu\\equiv\\tilde{\\mu}_t$ is used to obtain $\\mu$ from~\\eqref{form:gauss_model}.\nThis ``trick'', the inclusion of $\\mu$ in the hidden state, was the key step in~\\citet{Omori2007} for showing that~\\eqref{form:gauss_model} is a special case of the model specified by~\\citet{de1995simulation}.\nThe algorithm used by~\\citet{Omori2007} and by the paper at hand is also heavily based on the algorithm defined by~\\citet{de1995simulation}.\n\n\\subsubsection{Correcting for misspecification}\\label{sec:reweight}\n\nLet $\\theta$ denote $(\\sigma,\\rho,\\phi)$.\nBy using an approximate distribution for the true $\\pi(\\ets,\\eta_t\\mid d_t,\\rho,\\sigma)$, the model is misspecified, and the draws of $\\bm{h}$, $\\theta$, and $\\mu$ are from an approximate posterior density as well.\nThis can be corrected, and draws can be produced from the true posterior $\\pi(\\bm{h},\\theta,\\mu\\mid\\bm{y})$.\nFirst, the weights $w_k$ need to be calculated for all the draws $k=1,\\dots,N$, and then the original sample needs to be resampled using $w_k$ as probabilities.\nAfter obtaining the error terms\n\\begin{align*}\n\\xi_t^k &= y_t^\\ast-h_t^k, \\\\\n\\nu_t^k &=
(h_{t+1}^k-\\mu^k)-\\phi^k(h_t^k-\\mu^k),\n\\end{align*}\nthe non-normalised weights are computed,\n\\begin{equation*}\nw_k^\\ast=\\prod_{t=1}^{T}\\frac{\\pi\\left(\\ets=\\xi_t^k,\\eta_t=\\nu_t^k\\mid d_t,\\mu=\\mu^k,\\theta=\\theta^k\\right)}{\\pi\\left(\\xi_t=\\xi_t^k,\\nu_t=\\nu_t^k\\mid d_t,\\mu=\\mu^k,\\theta=\\theta^k\\right)}.\n\\end{equation*}\nFinally, we normalise the weights\n\\begin{equation*}\nw_k=\\frac{w_k^\\ast}{\\sum_{l=1}^{N}w_l^\\ast}\n\\end{equation*}\nto get the probabilities that we use for resampling the existing draws.\n\\citet{Omori2007} found that the weights have quite small variance, which makes the effect of this correction procedure modest.\nThat is in line with the demonstrated high precision of the approximate distribution~\\citep{Omori2007}.
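\n\nTo make the reweighting concrete, the following sketch (ours, assuming NumPy and SciPy; it reuses the arrays \\texttt{p}, \\texttt{m}, and \\texttt{v2} from the earlier sketch and adds the analogous constants $a_j$ and $b_j$) computes the non-normalised log weight of one draw from the pairs $(\\xi_t^k,\\nu_t^k)$ for $t=1,\\dots,T-1$; the last time point has no $\\nu$ term and would contribute only the marginal ratio for $\\xi_T$, which is omitted here for brevity:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import logsumexp\n\na = np.array([1.01418, 1.02248, 1.03403, 1.05207, 1.08153,\n              1.13114, 1.21754, 1.37454, 1.68327, 2.50097])\nb = np.array([0.50710, 0.51124, 0.51701, 0.52604, 0.54076,\n              0.56557, 0.60877, 0.68728, 0.84163, 1.25049])\n\ndef log_normal(x, mean, var):\n    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mean) ** 2 / var\n\ndef log_weight(xi, nu, d, rho, sigma):\n    # xi, nu, d: arrays holding xi_t^k, nu_t^k and the signs d_t\n    evar = sigma ** 2 * (1 - rho ** 2)\n    # exact joint density of (eps*_t, eta_t) given d_t, rho, sigma\n    log_true = (0.5 * xi - 0.5 * np.exp(xi) - 0.5 * np.log(2 * np.pi)\n                + log_normal(nu, d * rho * sigma * np.exp(0.5 * xi), evar))\n    # ten-component mixture approximation of the same density\n    mean_nu = (d[:, None] * rho * sigma * np.exp(0.5 * m)\n               * (a + b * (xi[:, None] - m)))\n    comp = (np.log(p) + log_normal(xi[:, None], m, v2)\n            + log_normal(nu[:, None], mean_nu, evar))\n    return np.sum(log_true - logsumexp(comp, axis=1))\n\\end{verbatim}\nExponentiating the $N$ log weights after subtracting their maximum, normalising, and resampling with \\texttt{np.random.choice} completes the correction.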
\n\n\\subsection{Estimation with leverage}\\label{sec:estimlev}\n\n\\comment{\n\\subsubsection[MCMC algorithms]{Markov Chain Monte Carlo algorithms}\n\nThe term Monte Carlo is used for the simulation of random processes, while the term Markov chain denotes sequences of random variables $X_1,X_2,\\dots$ for which, for each $n\\in\\mathbb{N}$, the conditional distribution of $X_{n+1}$ given $X_1,\\dots,X_n$ only depends on $X_n$ (it is ``memoryless'').\nThis way, the value of $X_n$ can be thought of as the state of the chain at time $n$, and then $f_{X_{n+1}\\mid X_n}(v\\mid u)$ is the probability or density of transition from state $u$ to state $v$ at time $n$.\nUnder sufficient conditions, the states of a Markov chain converge in distribution to $\\pi$, called the equilibrium distribution, from every initial state having positive density~\\citep{grinstead2012introduction}.\n\nIn practice, we cannot prove the conditions for the existence of $\\pi$.\nInstead we only have the output of the algorithm, a sequence of realisations, from which we can try to infer that we have a converged chain using, e.g., visualisations and autocorrelation functions.\n\\citet{Geyer2011} mentions the issues that pop up here and discusses some remedies.\n\n\tThese transition probabilities are called stationary if they don't depend on $n$.\n\t\n\tFor the majority of the Markov chain Monte Carlo (MCMC) algorithms stationary transition probability distributions are needed.\n\tThese are easier to handle, e.g. the joint distribution of such a Markov chain is characterised by the initial distribution of $X_1$ and the transition distribution from $X_n$ to $X_{n+1}$.\n\t\n\tThe sequence $X_1,X_2,\\dots$ itself is called stationary if for each $k\\in\\mathbb{N}$ and $n\\in\\mathbb{N}$ the joint distribution of the tuple $(X_n,\\dots,X_{n+k})$ is independent of $n$.}\n\n\\subsubsection{Steps overview}\n\nAfter initialising all the latent variables and parameters, there are three main steps in the Bayesian estimation of a conditionally linear GSS.\n\\begin{enumerate}[start=0]\n\t\\item Initialise $\\bm h,\\bm s,\\theta,\\mu$,\n\t\\item Draw $\\bm{s}\\mid\\bm{y}^\\ast,\\bm{d},\\bm{h},\\mu,\\theta$\\label{enum:draw-s},\n\t\\item Draw $\\theta,\\bm h,\\mu\\mid\\bm y^\\ast,\\bm d,\\bm s$\\label{enum:draw-other},\n\t\\begin{enumerate}\n\t\t\\item Draw $\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s$\\label{enum:draw-theta},\n\t\t\\item Draw $\\bm{h},\\mu\\mid\\bm{y}^\\ast,\\bm{d},\\theta,\\bm s$\\label{enum:draw-latent},\n\t\\end{enumerate}\n\t\\item Go to Step~\\ref{enum:draw-s}.\n\\end{enumerate}\nSteps~\\ref{enum:draw-s} and~\\ref{enum:draw-other} are interchangeable in the algorithm, but steps~\\ref{enum:draw-theta} and~\\ref{enum:draw-latent} are not because the former's by-products are needed for the latter.\nThe steps are detailed below based on~\\citet{Omori2007}.\n\nIf the Markov chain specified by the algorithm above has an equilibrium distribution, then it is an approximation to the posterior $\\bm h,\\bm s,\\theta,\\mu\\mid\\bm y$, and it is the true one after the reweighting scheme (Section~\\ref{sec:reweight}).\nSo by repeating the steps sufficiently many times, after ``reaching convergence'', draws from the posterior are obtained.\nWith such a sample, any point estimate, credible interval, or transformation of the distribution can be estimated.\n\n\\subsubsection{Drawing the mixture states}\n\nA closed form can be derived for $\\pi(s_t=j\\mid\\bm{y},\\bm{h},\\mu,\\sigma,\\rho,\\phi)$, $j=1,\\dots,K$.\nLet\n\\begin{align*}\n\\hat\\xi_t &\\triangleq y_t^\\ast-h_t, \\\\\n\\hat{\\nu}_t &\\triangleq h_{t+1}-\\mu-\\phi(h_t-\\mu), \\\\\n\\gamma_t^j &\\triangleq d_t\\rho\\sigma\\exp(m_j/2).\n\\end{align*}\nThen, based on~\\citet{Omori2007} with $K=10$, for $t=1,\\dots,T-1$ and $j=1,\\dots,10$,\n\\begin{equation}\n\\begin{aligned}[1]\n\\pi\\left(s_t=j\\mid\\bm{y},\\bm{h},\\mu,\\theta\\right) &\\propto P\\left(s_t=j\\right)\\pi\\left(\\xi_t=\\hat\\xi_t\\mid s_t=j\\right)\\times \\\\\n&\\qquad\\pi\\left(\\nu_t=\\hat\\nu_t\\mid\\xi_t=\\hat\\xi_t,s_t=j,h_{t+1},h_t,\\mu,\\theta\\right), \\\\\n\\pi\\left(s_T=j\\mid\\bm{y},\\bm{h},\\mu,\\theta\\right)\n&\\propto P\\left(s_T=j\\right)\\pi\\left(\\xi_T=\\hat\\xi_T\\mid s_T=j\\right)\\times 1,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{align*}\nP\\left(s_t=j\\right) &= p_j, \\\\\n\\pi\\left(\\xi_t=\\hat\\xi_t\\mid s_t=j\\right) &\\propto \\frac{1}{v_j}\\exp\\left(-\\frac{\\left(\\hat\\xi_t-m_j\\right)^2}{2v_j^2}\\right),\n\\end{align*}\nand\n\\begin{multline*}\n\\pi\\left(\\nu_t=\\hat\\nu_t\\mid\\xi_t=\\hat\\xi_t,s_t=j,h_{t+1},h_t,\\mu,\\theta\\right) \\propto \\\\\n\\exp\\left(-\\frac{\\left(\\hat{\\nu}_t-\\gamma_t^j\\left[a_j+b_j\\left(\\hat\\xi_t-m_j\\right)\\right]\\right)^2}{2\\sigma^2\\left(1-\\rho^2\\right)}\\right).\n\\end{multline*}\nThis way, values proportional to the posterior probabilities are obtained, so they need to be normalised.\nWe can use the inverse sampling method to draw $\\bm s\\mid\\bm{y},\\bm{h},\\mu,\\sigma,\\rho,\\phi$~\\citep{grinstead2012introduction}.
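\n\nIn code, the normalisation and the draw take only a few lines. The sketch below (ours, reusing the constant arrays from the previous sketches; \\texttt{rng} is a NumPy random generator) vectorises over $t=1,\\dots,T-1$ and implements the inverse sampling step via the cumulative sum; $s_T$ would be drawn analogously from the first two factors only:\n\\begin{verbatim}\nimport numpy as np\n\ndef draw_states(xi_hat, nu_hat, d, rho, sigma, rng):\n    evar = sigma ** 2 * (1 - rho ** 2)\n    gamma = d[:, None] * rho * sigma * np.exp(0.5 * m)    # gamma_t^j\n    logp = (np.log(p) - 0.5 * np.log(v2)\n            - 0.5 * (xi_hat[:, None] - m) ** 2 / v2\n            - 0.5 * (nu_hat[:, None]\n                     - gamma * (a + b * (xi_hat[:, None] - m))) ** 2 / evar)\n    logp -= logp.max(axis=1, keepdims=True)               # stabilise\n    prob = np.exp(logp)\n    prob /= prob.sum(axis=1, keepdims=True)               # normalise\n    u = rng.random((len(xi_hat), 1))\n    return 1 + (np.cumsum(prob, axis=1) < u).sum(axis=1)  # s_t in {1,...,10}\n\\end{verbatim}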
\n\n\\subsubsection{Drawing $\\rho,\\sigma,\\phi$}\n\nThe remaining two steps are more involved and based on~\\citet{de1995simulation}.\nThey are detailed and derived for SV with leverage in~\\citet{Nakajima2009}, where SV with leverage is extended to jumps and $t$-distributed residuals.\n\nIn this step, the Metropolis-Hastings (MH) algorithm is used to obtain a draw from $\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s$.\nMH is useful when the posterior density can be evaluated up to a constant factor.\nThat can be done using Bayes' theorem, by factoring the posterior into the likelihood, the prior, and a constant factor: $\\pi(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)\\propto\\pi(\\bm{y}^\\ast\\mid\\theta,\\bm{d},\\bm s)\\pi(\\theta)$.\nThe prior is usually chosen to be from a well-known family, and, based on~\\cite{Nakajima2009}, the likelihood can be computed using the Kalman filter.\n\nA proposal density $g(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)$ is needed for MH as well, whose support is a superset of the true posterior's support.\nIt is also important that the evaluation of and sampling from the proposal density are efficient.\n\nGiven everything above, an MH step in the $n$th loop first produces a candidate $\\theta_\\ast$ from $g(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)$.\nThen, the new sample $\\theta_{n+1}$ is chosen randomly between the current value $\\theta_n$ and the candidate $\\theta_\\ast$.\nThe so-called acceptance ratio drives the decision,\n\\begin{equation*}\n\\text{AR}=\\frac{\\pi(\\bm{y}^\\ast\\mid\\theta_\\ast,\\bm{d},\\bm s)\\pi(\\theta_\\ast)}{\\pi(\\bm{y}^\\ast\\mid\\theta_n,\\bm{d},\\bm s)\\pi(\\theta_n)}\\frac{g(\\theta_n\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)}{g(\\theta_\\ast\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)}.\n\\end{equation*}\nThe candidate $\\theta_\\ast$ is accepted with probability $\\min\\left\\{\\text{AR},1\\right\\}$, in which case $\\theta_{n+1}=\\theta_\\ast$; otherwise the chain stays at $\\theta_{n+1}=\\theta_n$.\n\n\\citet{Nakajima2009} chose the proposal to be $\\mathcal{N}(c, C)$ truncated to the region $R=\\left\\{(\\sigma,\\rho,\\phi)\\mid\\sigma>0,-1<\\rho<1,-1<\\phi<1\\right\\}$.\nThe mean vector $c$ and the covariance matrix $C$ are calculated in each loop such that the proposal's log density is the second-order Taylor expansion of the true log density around the true log density's mode.\\footnote{\n\tEven though the true posterior density can only be evaluated up to a (positive) constant factor, the Taylor approximation can still be calculated.\n\tThe reason is that the mode is invariant under positive scaling, and differentiation is only applied to the log density, i.e.\\ constant multipliers vanish.}\nMore precisely,\n\\begin{align*}\nc &= \\hat{\\theta}+Cv, \\\\\nC^{-1} &= -\\frac{\\partial^2\\log\\pi(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)}{\\partial\\theta\\partial\\theta^\\prime},\n\\end{align*}\nwhere\n\\begin{align*}\nv &= \\frac{\\partial\\log\\pi(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s)}{\\partial\\theta}, \\\\\n\\hat{\\theta} &= \\arg\\max_\\theta\\pi(\\theta\\mid\\bm{y}^\\ast,\\bm{d},\\bm s).\n\\end{align*}
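\n\nSchematically, one MH update looks as follows (our sketch; \\texttt{log\\_post} stands for the log of likelihood times prior, with the likelihood computed by the Kalman filter, while \\texttt{propose} and \\texttt{log\\_prop\\_dens} sample from and evaluate the truncated normal proposal; the construction of $c$ and $C$ is omitted):\n\\begin{verbatim}\nimport numpy as np\n\ndef mh_step(theta_n, log_post, propose, log_prop_dens, rng):\n    # one Metropolis-Hastings update for theta = (sigma, rho, phi)\n    cand = propose()\n    log_ar = (log_post(cand) - log_post(theta_n)\n              + log_prop_dens(theta_n) - log_prop_dens(cand))\n    # accept with probability min{AR, 1}\n    if np.log(rng.random()) < min(log_ar, 0.0):\n        return cand\n    return theta_n\n\\end{verbatim}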
\n\n\\subsubsection{Drawing $\\mu$ and the log variance}\n\nBoth $\\mu$ and $\\bm h$ are sampled using by-products of the Kalman filter used when calculating $\\pi(\\bm{y}^\\ast\\mid\\theta_{n+1},\\bm{d},\\bm s)$ for the AR.\nThe new value of $\\mu$ is obtained by simply drawing from $\\mathcal{N}(q_{T+1},Q_{T+1})$, where $q_{T+1}$ and $Q_{T+1}$ are readily available at this point.\nIn order to get the latent vector $\\bm h$, we first sample $\\bm{\\nu}\\mid\\bm{y}^\\ast,\\bm{d},\\theta,\\bm s$ using a Gaussian simulation smoother~\\citep{fruhwirth1994data,carter1994gibbs} and then reconstruct $\\bm h$ from the formulas in~\\citet{de1995simulation}.\n", "meta": {"hexsha": "b740f0cc54185830734a27d93f2dec7bb5dce8c1", "size": 25359, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/sections/model.tex", "max_stars_repo_name": "hdarjus/master-thesis", "max_stars_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-23T12:51:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-23T12:51:22.000Z", "max_issues_repo_path": "thesis/sections/model.tex", "max_issues_repo_name": "hdarjus/master-thesis-WU", "max_issues_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/sections/model.tex", "max_forks_repo_name": "hdarjus/master-thesis-WU", "max_forks_repo_head_hexsha": "1b0f4699dc49cb7bc5442214cf7901333afcd38a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-12T00:39:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-12T00:39:19.000Z", "avg_line_length": 64.5267175573, "max_line_length": 412, "alphanum_fraction": 0.7154856264, "num_tokens": 8284, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7310585727705127, "lm_q1q2_score": 0.5622427004589341}}
{"text": "\\section{An Example Session}\n\\label{exampleSession}\n\nTo provide a first impression of  the capabilities of the CEC-System, we\ndescribe the development of a quicksort algorithm on lists of\nnatural numbers.\n\n\\subsection{The Specification of Quicksort}\n\nSpecifications can be written and completed in a {\\em modular}\nway. CEC can combine completed specifications without \nrepeating previous work. The hierarchy of modules for the\nspecification of quicksort on lists of natural numbers is given\nin the following diagram:\n\n\\begin{picture}(450,100)\n\\put(50,0){\\makebox(400,100){}}\n\\put(205,85){\\makebox(0,0){{\\tt qsortnat}}}\n\\put(200,80){\\vector(-2,-1){45}}\n\\put(200,80){\\vector(2,-1){45}}\n\n\\put(150,50){\\makebox(0,0){{\\tt nat}}}\n\n\\put(257,50){\\makebox(0,0){{\\tt qsort}}}\n\\put(255,45){\\vector(-3,-2){30}}\n\\put(255,45){\\vector(3,-2){30}}\n\n\\put(215,20){\\makebox(0,0){{\\tt totalOrder}}}\n\n\\put(300,20){\\makebox(0,0){{\\tt lists}}}\n\\end{picture}\n\n\\noindent\nThe module \\cec{qsort} --- saved in a file named \\cec{qsort.eqn} ---\ndescribes the quicksort algorithm:\n\\begin{spec}\nmodule qsort using lists + totalOrder.\n\nop sort  : (list -> list).\nop split : (elem * list -> pair).\ncons (',') : (list * list -> pair).\n\nsort([]) = [].\nsplit(x, l) = (l1 , l2) => sort([x|l]) = append(sort(l1), [x|sort(l2)]).\n\nsplit(x, []) = ([] , []).\n(y =< x) = true and split(x, l) = (l1,l2) => split(x, [y|l]) = ([y|l1],l2).\n(y =< x) = false and split(x, l) = (l1,l2) => split(x, [y|l]) = (l1,[y|l2]).\n\\end{spec}\n\nComplex specifications can be constructed from modules by the \nelementary operations {\\em combination}, {\\em enrichment} and {\\em renaming}\n(\\refArrow chapter \\ref{OperationsOnSpecifications}).\nThe combine operator (\\cec{+}) forms the union of two specifications and\nthe rename operator (\\cec{<-}) allows to renaming sorts and operators.\nAny specification in CEC is the enrichment of a possibly empty base\nspecification (``{\\tt using} $<$base$>$'') by new vocabulary and axioms.\nThe module \\cec{qsort} is based on \\cec{lists} \nand \\cec{totalOrder}. Here \\cec{lists} is the imported module\n\\begin{spec}\nmodule lists.\n\ncons []   : list .\ncons '.'  : (elem * list -> list).\nop append : (list * list -> list).\n\nappend([], l) = l.\nappend([e | l1], l2) = [e | append(l1, l2)].\n\\end{spec}\n\nWe use the constructor \\cec{[]} to denote the empty list and \nthe constructor `\\cec{.}' for adding an element to a list\n(This is exactly the way how lists are constructed in Prolog and we can\nuse the usual Prolog notation for lists in which \\cec{[e|l]} is\na synonym for \\cec{(e '.' l)}). 
\nAdditionally, we have defined an\noperator \\cec{append} for the concatenation of two lists and described\nits behaviour through two equations.\n\n\\cec{totalOrder} plays the r\\^{o}le of a formal parameter\nof \\cec{qsort}:\n\\begin{spec}\nmodule totalOrder.\n\nop (=<) : (elem * elem -> bool) .\n\n(x =< x) = true .\n(x =< y) = false => (y =< x) = true .\n(x =< y) = true and (y =< z) = true => (x =< z) = true .\n(x =< y) = true and (y =< x) = true => x = y .\n\\end{spec}\n\nIt describes a usable approximation of the usual first-order axioms\nfor total orders in a Horn clause setting.\nAn actual parameter candidate for \\cec{totalOrder} is\nthe following specification of natural numbers:\n\\begin{spec}\nmodule nat using totalOrder(elem <- nat).\n\ncons 0 : nat.\ncons(s, 100, fy) :  (nat -> nat).\n\n(0 =< n ) = true.\n(s n =< 0) = false .\n(s n =< s m) = (n =< m).\n\\end{spec}\n\nComputing with the above specification, one has to write the number\n$n$ as \\cec{s}$^n$\\cec{0}, which is tedious. In CEC it is possible to provide different external\nrepresentations for the terms of a specification by specifying\nthe translations from the external to the internal\nand from the internal to the external representation \n(\\refArrow chapter \\ref{ParseAndPretty}).\n\nThe instantiation of the formal parameter of \\cec{qsort} with\n\\cec{nat} yields \\cec{qsortnat}:\n\\begin{spec}\nmodule qsortnat using qsort(elem <- nat) + nat.\n\\end{spec}\n\nThe modules \\cec{qsort} and \\cec{nat} are combined after\nrenaming the sort \\cec{elem} of \\cec{qsort} into \\cec{nat}.\nThe completion of \\cec{qsortnat} will check the consistency of the axioms \nfor \\cec{=<} in the actual parameter \\cec{nat} and in the formal\nparameter \\cec{totalOrder}. The latter proves the correctness\nof the actual parameter in cases where the actual parameter is\nconstructor-complete. \\cec{nat} is constructor-complete.\n\n\\subsection{Order Specifications}\n\nIn CEC the following termination orderings are available:\n\\begin{itemize}\n\\item\nTwo {\\em precedence orderings} \\kw{kns} and \\kw{neqkns}, according to \nKapur et al.\\ \\cite{KNS85} (\\refArrow chapter \\ref{kns}).\n%They are based on the recursive comparison of paths occurring\n%in the terms to be compared. This order is induced by a partial\n%order on operators called the {\\it precedence}. The precedence ordering\n%\\kw{neqkns} forbids that two different operators have the same precedence.\n%All constructors are given a precedence less than any nonconstructor operator.\n\\item\nThe method of {\\it polynomial interpretations},\nwhere \\[t_1 > t_2 :\\iff I(t_1) > I(t_2),
\\]\nwith $I(t)$ denoting the polynomial or tuple of polynomials associated with\nthe term $t$.\nThe concrete version of this technique as it is used in CEC is due to \n\\cite{CL86} (\\refArrow chapter \\ref{poly}).\n\\kw{poly}\\nt{N} stands for polynomial interpretations with \ntuple length $N$.\n\\end{itemize}\nThe precedence declarations or polynomial interpretations\nfor the operators of a specification are given interactively\nduring the completion process or in an {\\em order specification}\nassociated with the specification\n(\\refArrow chapter \\ref{OrderSpecification}).\n%The order specification determines the termination ordering to be used for \n%\\nt{specificationName}, the order names for its direct imports\n%and gives precedence declarations for the operators of the\n%specification (if the termination ordering is \\kw{kns} or \\kw{neqkns}) or\n%polynomial interpretations (if the termination ordering is \\kw{poly}\\nt{N}).\n\nFor example, the order specification for the module \\cec{lists}\nmay have the following form:\n\n\\begin{spec}\norder poly3 for lists.\n\nsetInterpretation(['[]'        : [2, 2, 2],\n                   '.'(x,y)    : [3 * x + y + 1, x + y, x + y],\n                   append(x,y) : [x + y, x + y, 2 * x + y]]).\n\\end{spec}\n\nIn this case, polynomial interpretations with tuple length 3\nhave been chosen. If this information is stored in the file \\cec{lists.pol.ord},\n\\cec{pol} is called the {\\em order name} of this order specification.
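\n\nTo see how such an interpretation orients a rule, consider the two equations of \\cec{lists} (this check is ours and not part of the CEC session; we assume, as usual for polynomial orderings, that variables range over positive values and that tuples are compared lexicographically). Writing $I_i$ for the $i$th component, both sides of the second equation receive $I_1 = 3e+l_1+l_2+1$ and $I_2 = e+l_1+l_2$, while the third components are $2(e+l_1)+l_2$ on the left and $e+2l_1+l_2$ on the right; the difference $e > 0$ decides the comparison, so the rule is oriented from left to right. The first equation is already decided by the first component, since \\cec{append([], l)} receives $I_1 = 2+l > l$.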
\n\n\\subsection{Reading and Displaying}\n\nWe want to demonstrate how CEC handles this specification. \nFirst, we read in the specification of \\cec{lists} using\nthe \\comRef{in}-command (\\refArrow chapter \\ref{InCommand})\nwith two arguments, the name of the module and the order name\n(\\refArrow chapter \\ref{orderBase}):\n\n\\begin{screen}\n| ?- in(lists,pol).\n[collecting garbage...]\n[evaluating base of lists.eqn with lists.pol.ord...]\n [thawing standard.poly3.q2.0 into user...]\n [storing to standard.poly3]\n[reading body of lists.eqn and lists.pol.ord...]\n[analyzing axioms...]\n[collecting garbage...]\n\nTime used: 3.634 sec.\n\\end{screen}\n\n\\noindent\nWe can use the \\comRef{sig}-command to display the signature:\n\n\\begin{screen}\n| ?- sig.\n\nSignature :\n\ncons true       : bool.\ncons false      : bool.\ncons []\t: list.\ncons .  : (elem * list -> list).\nop append       : (list * list -> list).\n\\end{screen}\n\nThe boolean constants \\cec{true} and \\cec{false} are imported\nfrom the \\cec{standard}-module, which is automatically imported into \nevery specification. The fact that \\cec{true} and \\cec{false}\nare constructors implies that any equation, explicitly given in\na specification or derived through completion, equating these\nconstants is forbidden.\nAs expected, we also see our two constructors \\cec{[]} and\n`\\cec{.}' and the operator \\cec{append}.\n\nThe \\comRef{show}-command can be used to display the \nset of equations and rules (\\refArrow chapter \\ref{Displaying}):\n\n\\begin{screen}\n| ?- show.\n\nCurrent equations\n\n  1    append([],l) = l\n  2    append([e|l1],l2) = [e|append(l1,l2)]\n\nCurrent rules\n\nCurrent nonoperational equations\n\nAll axioms reduced.\nAll superpositions computed.\nThe set of equations is not empty.\n\\end{screen}\n\n\\noindent\nThe information about the polynomial interpretations can be made\nvisible using the \\comRef{interpretation}-command. If \\cec{kns}\nor \\cec{neqkns} is used, this can be achieved by the\n\\comRef{operators}-command (\\refArrow chapter \\ref{InspecTermination}).\n\nEvery consulted specification is saved in a special {\\em specification\nvariable}. The user can restore an older state of the system\nusing the \\comRef{load}-command (\\refArrow chapter \\ref{LoadCommand})\nor can update the content of the variable using the \\comRef{store}-command\n(\\refArrow chapter \\ref{StoreCommand}). \nWhenever a specification is to be imported, the CEC system uses the content\nof the corresponding specification variable, if this specification variable \nexists. Completion is therefore faster if the completion of a hierarchical\nspecification follows the tree structure from the leaves to the root, \nstoring every participating specification after successful completion.\nWe follow this advice in our example.\nInformation about the current specification variables\ncan be retrieved using the \\comRef{specifications}-command.\n\nIt is also possible to save a system externally using the \n\\comRef{freeze}-command (\\refArrow chapter \\ref{FreezeCommand}).\nTo restore an externally saved system use the \\comRef{thaw}-command\n(\\refArrow chapter \\ref{ThawCommand}).\n\n\\subsection{Completion}\nWe now apply the completion procedure (\\refArrow chapter\n\\ref{Completion}) to our example.\n\n\\subsubsection{Completing {\\tt lists}}\n\nThe completion process --- started with the \\comRef{c}-command ---\nis able to orient the two equations of \\cec{lists}\nwithout requesting any additional information:\n\n\\begin{screen}\n| ?- c.\n\nnew rule   1    append([],l) = l .\n\nnew rule   2    append([e|l1],l2) = [e|append(l1,l2)] .\n\n[4 superpositions yet to be considered.]\n\n0 superpositions have been computed.\nTime used: 0.883011 sec.\n\\end{screen}\n\n\\noindent\nThe system is complete, and has the following rules, displayed \nby the \\comRef{show}-command:\n\n\\begin{screen}\n| ?- show.\n\nCurrent equations\n\nCurrent rules\n\n  1    append([],l) = l\n  2    append([e|l1],l2) = [e|append(l1,l2)]\n\nCurrent nonoperational equations\n\nAll axioms reduced.\nAll superpositions computed.\nNo more equations, the system is complete.\n\\end{screen}\n\n\n\\subsubsection{Completing {\\tt totalOrder}}\n\nAn important feature is that CEC does not require transforming all\nequations into rewrite rules. This can be demonstrated during the\ncompletion of \\cec{totalOrder}.\n\nAfter orienting the first equation into a rewrite rule, \nthe system discovers that it is unable to orient the second \nequation:\n\n\\begin{screen}\nChecking reductivity constraints for rule\n        x=<y = false => y=<x = true:\nThe current ordering fails to prove\n[y=<x]  >  [x=<y,false].\nAt this point you may take any of the following actions:\na. for assume to be proved\nc. for checking quasi-reductivity of the equation\np. for postpone\nn. for considering the equation as nonoperational\n   Please answer with a. or c. or p. or n. (Type A. to abort) > \\user{n.}\n\\end{screen}\n\n\\noindent\nWe want to consider this equation as nonoperational\n(\\refArrow chapter \\ref{Completion}), so we answer with \\user{n.}.\n\nNonoperational equations become useless for the equational theory, and hence nonoperational in fact, once every rewrite rule has been superposed on at least one of their conditions. This yields new conditional equations. 
\nAs far as the equational theory is concerned, the new equations together have \nthe same power as the original equation. \nHowever, they may have better operational properties than the\nequation from which they were generated.\nEquations that cannot be oriented into a reductive rule must\neither be eliminated eventually or considered nonoperational.\n\nChecking the convergence of conditional equations requires comparing different\napplications of equations and rewrite rules. The comparison of equation applications is performed\nby comparing the literals of the equation instance. The {\\em status} of an equation \ndetermines the order in which the literals of the equation should be\ninspected. The status $ms$ means that the literals are compared as a multiset.\nInstead of $ms$ the user can choose an arbitrary sequence of the literals\nby entering a permutation of [0 .. n] where n is the number of conditions\n(\\refArrow \\cite{Gan88a}). \n\nWe want to use $ms$ here:\n\n\\begin{screen}\nIn which order should the literals of the equation be\ninspected when comparing proofs that use this equation?\nPlease enter ms (for multiset ordering) or a permutation of [0 .. 1]\n(0 stands for the consequent, i>0 for the ith condition). > \\user{ms.}\n\\end{screen}\n\n\\noindent\nThe other two nonreductive equations are handled in the same way, and we get the \nfollowing complete system:\n\n\\begin{screen}\n| ?- \\user{show.}\n\nCurrent equations\n\nCurrent rules\n\n  1    x=<x = true\n\nCurrent nonoperational equations\n\n  1    x=<y = false => y=<x = true\n  2    x=<y = true and y=<x = true => x = y\n  3    x=<y = true and y=<z = true => x=<z = true\n\nAll axioms reduced.\nAll superpositions computed.\nNo more equations, the system is complete.\n\\end{screen}\n\nIn this case, superposing the only rule \\cec{x =< x = true} on the first\ncondition of the nonoperational equations does not generate any nontrivial\n(nonconvergent) consequences.\n\n\\subsubsection{Completing {\\tt qsort}}\n\nThe next step is the completion of the \\cec{qsort} specification.\nThe completion procedure is able to orient the first and third \nequations of our specification, but fails to orient the second one.\n\n\\begin{screen}\n| ?- c.\n\nnew rule   9    sort([]) = [] .\n\nnew rule  10    split(x,[]) = [],[] .\nThe equation\n        split(x,l) = l1,l2 => sort([x|l]) = append(sort(l1),[x|sort(l2)])\nis not reductive.\n\\end{screen}\n\n\\noindent\nThis is due to the presence of the {\\em extra variables} \\cec{l1} and \\cec{l2}\nin the condition of the equation.\n\nEquations with extra variables in the condition or in the right side of\nthe consequence are usually not admitted as rewrite rules. \nFortunately, some of these equations belong to the class of what we call\n{\\em quasi-reductive rules} (\\refArrow chapter \\ref{Completion}). \nCEC is able to prove the remaining equations 2, 4\nand 5 of the module \\cec{qsort} to be quasi-reductive.\nQuasi-reductive rules are a generalization\nof reductive conditional rewrite rules, and the associated rewrite\nprocess is similarly efficient.\nFor the second equation, e.g., the rule specifies the replacement\nof \\cec{sort([x|l])} by the term \\cec{append(sort(l1), [x|sort(l2)])} \nif the normal form of \\cec{split(x, l)} matches \\cec{(l1,l2)}.\nAfter this match, the extra variables \\cec{l1} and \\cec{l2} are instantiated to\nterms which include only variables of the left-hand side.\nIn quasi-reductive rules, conditions are oriented, too. 
To solve an instance\nof a condition equation means to rewrite its left side into an instance of the\nright side.\n\n\\begin{screen}\nDo you want a check for quasi-reductivity?\nc. for check\nn. for considering the equation as nonoperational\np. for postpone\n   Please answer with c. or n. or p. (Type A. to abort) >\\user{c.}\n\\end{screen}\n\n\\noindent\nTo orient an equation into a quasi-reductive rule, we must first indicate the \ndesired orientation of the equations in the condition and the conclusion. \nIn this example, the equation in the condition\nand the conclusion should be oriented from left to right (literal annotation\n``\\user{l}'').\n\n\\begin{screen}\nEnter annotations of literals in\n\tsplit(x,l) = l1,l2 => sort([x|l]) = append(sort(l1),[x|sort(l2)])\n\t: \\user{[l,l].}\n\\end{screen}\n\n\\noindent\nNow CEC attempts to prove the quasi-reductivity of the equation according to the\ndefinition in chapter \\ref{Completion}. This involves giving polynomial\ninterpretations to some auxiliary operators.\nHere we must give an appropriate interpretation for \\cec{$h5_0}:\n\n\\begin{screen}\nChecking reductivity constraint:\nConsider the terms\n        $h5_0((l1,l2),x)\nand\n        append(sort(l1),[x|sort(l2)])\n\nThere is no interpretation of operator '$h5_0' with arity 2\nThe default interpretation is :\n   [ 2 * x * y ,\n     2 * x * y ,\n     2 * x * y ]\nDo you want to change it ? (if so type 'y') \\user{y}\nDo you want to change it component for component ? (if so type 'y') \\user{n}\nType in the new interpretation tuple\n[ $h5_0 ] (x,y) = \\user{[2*x+3*y+1,2*x+2*y,2*x+2*y].}\nResulting interpretation for Operator '$h5_0' with arity 2 :\n   [ 2 * x + 3 * y + 1 ,\n     2 * x + 2 * y ,\n     2 * x + 2 * y ]\nDo you accept it ? (if not, type 'n') \\user{y}\n\\end{screen}\n\n\\noindent\nNow the proof of the quasi-reductivity of the equation is completed:\n\n\\begin{screen}\nnew rule  11    split(x,l) = l1,l2 => sort([x|l]) = \n                                      append(sort(l1),[x|sort(l2)]) .\n\\end{screen}\n\n\\noindent\nIn the same way the equations\n\n\\cec{y=<x = true and split(x,l) = (l1,l2) => split(x,[y|l]) = ([y|l1],l2)}\n\n\\noindent\nand \n\n\\cec{y=<x = false and split(x,l) = (l1,l2) => split(x,[y|l]) = (l1,[y|l2])}\n\n\\noindent\ncan be oriented into quasi-reductive rules.\n\nFor the quicksort specification three nontrivial superposition instances are\ncomputed. For any nontrivial equation with at least one condition that is\ngenerated during completion, CEC will ask the user what to do with it. In the\nexample, we decide to declare these consequences as ``nonoperational''.\n\n\\begin{screen}\ninstance   6    l1,l2 = l4,l3 and split(x1,l5) = l1,l2 => \n                                     append(sort(l1),[x1|sort(l2)]) = \n                                     append(sort(l4),[x1|sort(l3)])\nof        6    split(x,l) = l1,l2 => sort([x|l]) = \n                                     append(sort(l1),[x|sort(l2)])\nby superposing\n          6    split(x,l) = l1,l2 => sort([x|l]) = \n                                     append(sort(l1),[x|sort(l2)]) \non the left side.\n\nConsider the equation\n        l1,l2 = l4,l3 and split(x1,l5) = l1,l2 => \n                                     append(sort(l1),[x1|sort(l2)]) = \n                                     append(sort(l4),[x1|sort(l3)]).\nThe following actions may be taken:\no. for attempting to orient into a (quasi-)reductive rule\np. for postpone\nn. for considering equation as nonoperational\n   Please answer with o. or p. 
or n. (Type A. to abort) >\\user{n.}\n\\end{screen}\n\nAs mentioned before, it is sufficient to superpose all rewrite rules\non just one condition of a nonoperational conditional equation. So\nCEC asks the user on which condition superposition should be\napplied. \nWe will choose the first equation of the condition, since we know it\nwill generate no nontrivial superpositions:\n\n\\begin{screen}\nWhich of the condition equations in\n        l1,l2 = l4,l3 and split(x1,l5) = l1,l2 \nshould be selected for superposition?\nPlease enter index from 1 to 2. > \\user{1.}\n\nThe equation l1,l2 = l4,l3 and split(x1,l5) = l1,l2 => \n                                          append(sort(l1),[x1|sort(l2)]) = \n                                          append(sort(l4),[x1|sort(l3)]) \nwill be considered as nonoperational.\n\\end{screen}\n\n\\subsubsection{Completing {\\tt nat}}\n\nThe completion of the \\cec{nat} specification is straightforward using\nthe following order specification:\n\n\\begin{spec}\norder poly1 for nat. \n\nsetInterpretation([0 : 2,\n                   s(x) : 8 * x]).\n\\end{spec}\n\n\\subsubsection{Combining {\\tt qsort} and {\\tt nat}}\n\nWe now want to combine the \\cec{nat} specification with the \n\\cec{qsort} specification.\n\n\\begin{screen}\n| ?- store.\nyes\n| ?- load(qsort,poly3).\n\\end{screen}\n\n\\noindent\nThe \\comRef{store}-command saves the completed {\\tt nat} specification\ninto its {\\em specification variable}.\nThe \\comRef{load}-command loads the completed {\\tt qsort} specification\nfrom its specification variable.\n\n\\begin{screen}\n| ?- renameSpec(elem <- nat).\nyes\n| ?- sig.\n\nSignature :\n\ncons [] : list.\ncons .  : (nat * list -> list).\nop append       : (list * list -> list).\ncons true       : bool.\ncons false      : bool.\nop =<   : (nat * nat -> bool).\nop sort\t: (list -> list).\nop split        : (nat * list -> pair).\ncons ,    : (list * list -> pair).\nyes\n| ?- store(qsortnatSpec).\nyes\n\\end{screen}\n\n\\noindent\nNow we combine the two specifications and make the result \nthe current specification:\n\n\\begin{screen}\n| ?- combineSpecs(qsortnatSpec,'nat.poly1_qsort',user).\nyes\n\\end{screen}\n\n\\noindent\nBecause \\cec{totalOrder} is a formal parameter for \\cec{qsort},\ncompletion now checks the consistency of the axioms for\n\\cec{=<} in the actual parameter \\cec{nat} and in the formal\nparameter \\cec{totalOrder}.\nThe completion process will not do unnecessary work again. \nFor the above example the completion process\nwill only compute overlaps between axioms\nof the renamed module \\cec{qsort} and axioms of the module \\cec{nat},\nbut it will not recompute overlaps among the axioms of either module alone.\n\nIf two constructor terms are shown to be equal then the specification\nis inconsistent. In our case, the system is completed without any user \ninteraction, and no inconsistency shows up.\n\n\\subsection{Computing in Completed Specifications}\n\nComputation in specifications is realized by term reduction with the\nrules of the completed specification. The results of such computations are unique\nnormal forms (\\refArrow chapter \\ref{NormCommand}):\n\n\\begin{screen}\n| ?- norm(sort([5,3,6,1])).\nThe normalform of sort([5,3,6,1]) is [1,3,5,6] .\n\\end{screen}\n\n\\noindent\nIf confluence can be achieved and all rules are reductive, equational theorems \nbecome decidable, e.g. 
it is decidable whether two terms are equivalent with respect to the\nequations in the specification (\\refArrow chapter \\ref{ProveCommand}):\n\n\\begin{screen}\n| ?- prove(sort([5,3]) = [3,5]).\nNormal forms are: [3,5] and [3,5]\nyes\n\\end{screen}\n\n\\noindent \nConditional narrowing (\\refArrow chapter \\ref{NarrowCommand}) can then be used to solve \nequations.\n\n\\begin{screen}\n| ?- solve(sort([1,x]) = [x,1],U).\n\nTime used: 7.43298 sec.\n\nU = {x-nat/0} ;\n\nTime used: 7.91595 sec.\n\nU = {x-nat/1} ;\n\nno\n\\end{screen}\n\n\\noindent\nHere we proved that the only substitutions for \\cec{x} such that\n\\cec{sort([1,x])} is equal to \\cec{[x,1]} are \\cec{0} and \\cec{1}.\n\n\\subsection{Saving the CEC System}\n\nThe whole state of the CEC system can be saved using the\n\\comRef{saveCEC}-command. \n\n\\begin{screen}\n| ?- saveCEC('qsortCEC').\n[ Prolog state saved into /home/helga/cec/cec/qsortCEC ]\n\\end{screen}\n\nUsing \\cec{qsortCEC} instead of CEC makes the complete hierarchy of our\n\\cec{qsort} example available without spending time reconsulting the\nfrozen states of all the specifications used in this example.\n", "meta": {"hexsha": "59b9bc51ec745d0252f2df02eb250df65b129acd", "size": 22833, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pack/logicmoo_packages/prolog/cec/doc/cec/example.tex", "max_stars_repo_name": "logicmoo/old_logicmoo_workspace", "max_stars_repo_head_hexsha": "44025b6e389e2f2f7d86b46c1301cab0604bba26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-04T14:44:49.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-04T14:44:49.000Z", "max_issues_repo_path": "pack/logicmoo_packages/prolog/cec/doc/cec/example.tex", "max_issues_repo_name": "logicmoo/old_logicmoo_workspace", "max_issues_repo_head_hexsha": "44025b6e389e2f2f7d86b46c1301cab0604bba26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pack/logicmoo_packages/prolog/cec/doc/cec/example.tex", "max_forks_repo_name": "logicmoo/old_logicmoo_workspace", "max_forks_repo_head_hexsha": "44025b6e389e2f2f7d86b46c1301cab0604bba26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3815789474, "max_line_length": 102, "alphanum_fraction": 0.711908203, "num_tokens": 6265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585669110203, "lm_q2_score": 0.76908023177796, "lm_q1q2_score": 0.5622426920831908}}
{"text": "\\section{Policy Learning Details}\n\\label{sec:supp-lqr}\n\nGiven a TVLG dynamics model and quadratic cost approximation, we can approximate our Q and value functions to second order with the following dynamic programming updates, which proceed from the last time step $t = T$ to the first step $t = 1$:\n\\begin{align*}\n    Q_{\\mathbf{s},t}=c_{\\mathbf{s},t}&+\\dynmat_{\\mathbf{s},t}^\\top{V}_{\\mathbf{s},t+1}\\,,~~Q_{\\mathbf{ss},t}=c_{\\mathbf{ss},t}+\\dynmat_{\\mathbf{s},t}^\\top{V}_{\\mathbf{ss},t+1}\\dynmat_{\\mathbf{s},t}\\,,\\\\\n    Q_{\\mathbf{a},t}=c_{\\mathbf{a},t}&+\\dynmat_{\\mathbf{a},t}^\\top{V}_{\\mathbf{s},t+1}\\,,~~Q_{\\mathbf{aa},t}=c_{\\mathbf{aa},t}+\\dynmat_{\\mathbf{a},t}^\\top{V}_{\\mathbf{ss},t+1}\\dynmat_{\\mathbf{a},t}\\,,\\\\\n    &Q_{\\mathbf{sa},t}=c_{\\mathbf{sa},t}+\\dynmat_{\\mathbf{s},t}^\\top{V}_{\\mathbf{ss},t+1}\\dynmat_{\\mathbf{a},t}\\,,\\\\\n    &V_{\\mathbf{s},t}=Q_{\\mathbf{s},t}-Q_{\\mathbf{sa},t}Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{a},t}\\,,\\\\\n    &V_{\\mathbf{ss},t}=Q_{\\mathbf{ss},t}-Q_{\\mathbf{sa},t}Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{as},t}\\,.\n\\end{align*}\nIt can be shown (e.g., by \\citet{synthesis}) that the action $\\mathbf{a}_t$ that minimizes the second-order approximation of the Q-function at every time step $t$ is given by\n\\[\n\\mathbf{a}_t=-Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{as},t}\\mathbf{s}_t-Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{a},t}\\,.\n\\]\nThis action is a linear function of the state $\\mathbf{s}_t$, thus we can construct an optimal linear policy by setting $\\mathbf{K}_t=-Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{as},t}$ and $\\mathbf{k}_t=-Q_{\\mathbf{aa},t}^{-1}Q_{\\mathbf{a},t}$. We can also show that the maximum-entropy policy that minimizes the approximate Q-function is given by\n\\[\n\\pi^\\star(\\mathbf{a}_t|\\mathbf{s}_t)=\\N(\\mathbf{K}_t\\mathbf{s}_t+\\mathbf{k}_t,Q_{\\mathbf{aa},t}).\n\\]\nFurthermore, as in \\citet{mfcgps}, we can impose a constraint on the total KL-divergence between the old and new trajectory distributions induced by the policies through an augmented cost function $\\tilde{\\cost}(\\state_t,\\action_t)=\\frac{1}{\\lambda}\\costmodel(\\state_t,\\action_t)-\\log\\bar{\\policy}(\\action_t|\\state_t)$, where solving for $\\lambda$ via dual gradient descent can yield an exact solution to a KL-constrained LQR problem.\n\n\n\\section{Parameterizing the Cost Model}\n\\label{sec:supp-cost}\n\nThe simplest choice that we consider for parameterizing the cost model is as a full quadratic function of the state and action, i.e.,     $\\costmodel(\\state_t,\\action_t)=\\frac{1}{2}\\state_t^\\top\\costmat\\state_t+\\costvec^\\top\\state_t+\\alpha\\|\\action_t\\|_2^2+b$ where we assume that the action-dependent part of the cost -- i.e., $\\alpha$ -- is known, and we impose no restrictions on the learned parameters $\\costmat$ and $\\costvec$. This is our default option due to its simplicity and the added benefit that fitting this model locally can be done in closed form through least-squares quadratic regression on the observed states. However, another option we consider is to choose $\\costmodel(\\state_t,\\action_t)=\\frac{1}{2}\\state_t^\\top\\mathbf{L}\\mathbf{L}^\\top\\state_t+\\costvec^\\top\\state_t+\\alpha\\|\\action_t\\|_2^2+b$. 
\n\n\n\\section{Parameterizing the Cost Model}\n\\label{sec:supp-cost}\n\nThe simplest choice that we consider for parameterizing the cost model is as a full quadratic function of the state and action, i.e., $\\costmodel(\\state_t,\\action_t)=\\frac{1}{2}\\state_t^\\top\\costmat\\state_t+\\costvec^\\top\\state_t+\\alpha\\|\\action_t\\|_2^2+b$, where we assume that the action-dependent part of the cost -- i.e., $\\alpha$ -- is known, and we impose no restrictions on the learned parameters $\\costmat$ and $\\costvec$. This is our default option due to its simplicity and the added benefit that fitting this model locally can be done in closed form through least-squares quadratic regression on the observed states. However, another option we consider is to choose $\\costmodel(\\state_t,\\action_t)=\\frac{1}{2}\\state_t^\\top\\mathbf{L}\\mathbf{L}^\\top\\state_t+\\costvec^\\top\\state_t+\\alpha\\|\\action_t\\|_2^2+b$, where $\\mathbf{L}$ is a lower-triangular matrix with non-negative diagonal entries; by constructing our cost matrix as $\\costmat=\\mathbf{L}\\mathbf{L}^\\top$, we guarantee that the learned cost matrix is positive semidefinite, which can improve the behavior of the policy update.\n\nIn this work, we consider quadratic parameterizations of the cost model since we wish to build an LQS model. However, in general it may be possible to use non-quadratic but twice-differentiable cost models, such as a neural network model, and compute local quadratic cost models using a second-order Taylor approximation as in \\citet{mfcgps}. We also do not assume access to a goal observation, though if provided with such information we can construct a quadratic cost function that penalizes distance to this goal in the learned latent space, as in \\citet{spatial-ae} and \\citet{e2c}.
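\n\nAs an illustration of the closed-form fit mentioned above (a sketch under our own naming conventions, not the project's code), the quadratic state cost can be recovered by ordinary least squares on quadratic features of the states; here \\texttt{S} holds one state per row and \\texttt{c} the observed state costs, with the known action term already subtracted:\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_quadratic_cost(S, c):\n    # fit c(s) ~ 0.5 s^T H s + h^T s + b by least squares\n    N, n = S.shape\n    iu = np.triu_indices(n)\n    feats = np.stack([S[:, i] * S[:, j] * (0.5 if i == j else 1.0)\n                      for i, j in zip(*iu)], axis=1)\n    X = np.hstack([feats, S, np.ones((N, 1))])\n    w, *_ = np.linalg.lstsq(X, c, rcond=None)\n    nq = len(iu[0])\n    H = np.zeros((n, n))\n    H[iu] = w[:nq]\n    H = H + H.T - np.diag(np.diag(H))   # symmetrise\n    return H, w[nq:nq + n], w[-1]       # H, h, b\n\\end{verbatim}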
\n\n\n\\section{The SVAE Algorithm}\n\\label{sec:supp-svae}\n\n\\citet{Johnson2016} builds off of \\citet{Hoffman2013} and \\citet{Winn2005}, who show that, for conjugate exponential models, the variational model parameters can be updated using natural gradients of the form\n\\begin{equation}\n    \\tilde{\\nabla}_{\\omega}\\L[\\q]=\\omega^0+B\\mathbb{E}_\\q\\left[t_{\\dynmat,\\dyncovar}(\\dynmat,\\dyncovar)\\right]-\\omega\\,,\n\\end{equation}\nwhere $\\omega$ denotes the MNIW parameters of the variational factors on $\\dynmat,\\dyncovar$, $B$ is the number of minibatches in the dataset, $\\omega^0$ is the parameter for the prior distribution $p(\\dynmat,\\dyncovar)$, and $t_{\\dynmat,\\dyncovar}(\\dynmat,\\dyncovar)$ is the sufficient statistic function for $p(\\dynmat,\\dyncovar)$. Thus, we can use this equation to compute the natural gradient update for $\\omega$, whereas for $\\gamma$, $\\phi$, and the parameters of the cost model, we use stochastic gradient updates on Monte Carlo estimates of the ELBO, specifically using the Adam optimizer \\citep{Kingma2014}. This leads to two simultaneous optimizations, and their learning rates are treated as separate hyperparameters. We have found $10^{-4}$ and $10^{-3}$ to be good default settings for the natural gradient step size and stochastic gradient step size, respectively.\n\n\n\\section{Fitting the Local Dynamics Model}\n\\label{sec:supp-fit}\n\nIn the pretraining phase described in \\autoref{sec:solar-modeling}, we are learning the following sets of parameters from observed trajectories:\n\\vspace{-.5em}\n\\begin{enumerate}\n    \\itemsep0em\n    \\item The parameters of the variational posterior over global dynamics $q_\\textrm{global}(\\dynmat, \\dyncovar)$;\n    \\item The weights of the encoder and decoder networks $\\decoder_\\gamma(\\state)$ and $\\encoder_\\phi(\\observation)$;\n    \\item The parameters of the cost function $\\costmodel(\\state, \\action)$.\n\\end{enumerate}\n\\vspace{-.5em}\nIn the RL phase described in \\autoref{sec:inference}, after learning the representation and global models, we fit local, linear-Gaussian dynamics models to additional trajectories. The conjugacy of the Bayesian LQS model enables a computationally efficient expectation-maximization procedure to learn the local dynamics. We assume the same graphical model as in \\autoref{eq:gm-start} to \\autoref{eq:gm-end} except that we modify \\autoref{eq:dyn} and \\autoref{eq:dyn2} to be\n\\begin{align*}\n    \\dynmat_t, \\dyncovar_t &\\sim p(\\dynmat_t, \\dyncovar_t) \\triangleq q_{\\textrm{global}}(\\dynmat, \\dyncovar)\\,,\\\\\n    \\state_{t + 1} | \\state_t, \\action_t, \\dynmat_t, \\dyncovar_t &\\sim \\N(\\dynmat_t\\colvec{\\state_t\\\\\\action_t}, \\dyncovar_t)\\,.\n\\end{align*}\nThe model assumes that the TVLG dynamics are independent samples from our global dynamics, followed by a deep Bayesian LDS to generate trajectories. This is similar to the globally trained model, with the exception that we explicitly assume time-varying dynamics.\n\nNow suppose we have collected a set of trajectories of the form $\\trajectory$ and aim to fit a local dynamics model. We use variational inference to approximate the posterior distributions by setting up the variational factors\n\\vspace{-.5em}\n\\begin{enumerate}\n    \\itemsep0em\n    \\item $q(\\state_{1:\\horizon} | \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon}; \\observation_{1:\\horizon}, \\action_{1:\\horizon})$, which approximates the posterior distribution $p(\\state_{1:\\horizon} | \\observation_{1:\\horizon}, \\action_{1:\\horizon}, \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon})$;\n    \\item $q(\\dynmat_t, \\dyncovar_t)$, which approximates the posterior distribution $p(\\dynmat_t, \\dyncovar_t | \\state_{1:\\horizon}, \\action_{1:\\horizon})$.\n\\end{enumerate}\n\\vspace{-.5em}\nThe ELBO under these variational factors is:\n\\begin{align*}\n    \\L[\\q] &= \\mathbb{E}_\\q \\big[\\sum_t^\\horizon \\log p(\\observation_t | \\state_t)\\\\ &- \\mathrm{KL}\\left(q(\\state_{1:\\horizon})\\|p(\\state_{1:\\horizon} | \\action_{1:\\horizon}, \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon})\\right)\\\\\n    &- \\sum_t^{\\horizon - 1} \\mathrm{KL}\\left(q(\\dynmat_t, \\dyncovar_t)\\|p(\\dynmat_t, \\dyncovar_t)\\right)\n    \\big]\n\\end{align*}\n\nWe use variational EM to alternately optimize $q(\\state_{1:\\horizon} | \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon}; \\observation_{1:\\horizon}, \\action_{1:\\horizon})$ and $q(\\dynmat_t, \\dyncovar_t)$. Using evidence potentials $\\psi(\\state_t;\\observation_t,\\phi)$ output by the recognition network $\\encoder_\\phi(\\observation_t)$, both of these optimizations can be done in closed form. Specifically, the optimal $q(\\state_{1:\\horizon} | \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon}; \\observation_{1:\\horizon}, \\action_{1:\\horizon})$ is computed via Kalman smoothing using evidence potentials from the recognition network, and the optimal $q(\\dynmat_t, \\dyncovar_t)$ can be computed via Bayesian linear regression using expected sufficient statistics from $q(\\state_{1:\\horizon} | \\dynmat_{1:\\horizon}, \\dyncovar_{1:\\horizon}; \\observation_{1:\\horizon}, \\action_{1:\\horizon})$.\n\n\n\\section{Experiment Setup}\n\\label{sec:supp-set}\n\n{\\bf 2D navigation.} Our recognition model architecture for the 2D navigation domain consists of two convolution layers with \\mbox{2-by-2} filters and 32 channels each, with no pooling layers and ReLU non-linearities, followed by another convolution with \\mbox{2-by-2} filters and 2 channels. The output of the last convolution layer is fed into a fully-connected layer which then outputs a Gaussian distribution with diagonal covariance. 
Our observation model consists of FC hidden layers with 256 ReLU activations, and the last layer outputs a categorical distribution over pixels. We initially collect 100 episodes which we use to train our model, and for every subsequent RL iteration we collect 10 episodes. The cost function we use is the sum of the $L^2$-norm squared of the distance to the target and the commanded action, with weights of 1 and 0.001, respectively.\n\nAs discussed in \\autoref{sec:solar-experiments}, we modify the 2D navigation task from \\citet{e2c} and \\citet{rce} to randomize the location of the target every episode, and we set this location uniformly at random between $-2.8$ and $2.8$ for both the x and y coordinates, as coordinates outside of $[-3,3]$ are not visible in the image. We similarly randomize the initial position of the agent. In this setup, we use two \\mbox{32-by-32} images as the observation, one with the location of the agent and the other with the location of the target, and in the fixed-target version of the task we only use one \\mbox{32-by-32} image.\n\n{\\bf Nonholonomic car.} The nonholonomic car domain consists of \\mbox{64-by-64} image observations. Our recognition model is a convolutional neural network with four convolutional layers with \\mbox{4-by-4} filters with 4 channels each, and the first two convolution layers are followed by a ReLU non-linearity. The output of the last convolutional layer is fed into three FC ReLU layers of width 2048, 512, and 128, respectively. Our final layer outputs a Gaussian distribution with dimension 8. Our observation model consists of four FC ReLU layers of width 256, 512, 1024, and 2048, respectively, followed by a Bernoulli distribution layer that models the image. For this domain, we collect 100 episodes initially to train our model, and then for RL we collect 100 episodes per iteration. The cost function we use is the sum of the $L^2$-norm squared of the distance from the center of the car to the target and the commanded action, with weights of 1 and 0.001, respectively.\n\n{\\bf Reacher.} The reacher domain consists of \\mbox{64-by-64-by-3} image observations. Our recognition model consists of three convolutional layers with \\mbox{7-by-7}, \\mbox{5-by-5}, and \\mbox{3-by-3} filters with 64, 32 and 8 channels respectively. The first convolutional layer is followed by a ReLU non-linearity. The output of the last convolutional layer is fed into an FC ReLU layer of width 256, which outputs a Gaussian distribution with dimension 10. Our observation model consists of one FC ReLU layers of width 512, followed by three deconvolutional layers with the reverse order of filters and channels as the recognition model. This is followed by a Bernoulli distribution layer that models each image. We collect 200 episodes initially to train our model, and then for RL we collect 100 episodes per iteration. The cost function we use is the sum of the $L^2$-norm of the distance from the fingertip to the target and the $L^2$-norm squared of the commanded action, which is the negative of the reward function as defined in Gym.\n\n{\\bf Sawyer Lego block stacking.} The image-based Sawyer block-stacking domain consists of \\mbox{64-by-64-by-3} image observations. The policy outputs velocities on the end effector in order to control the robot. 
Our recognition model is a convolutional neural network with the following architecture: a \\mbox{5-by-5} filter convolutional layer with 16 channels followed by two convolutional layers using \\mbox{5-by-5} filters with 32 channels each. The convolutional layers are followed by ReLU activations leading to a 12-dimensional Gaussian distribution layer. Our observation model consists of an FC ReLU layer of width 128 feeding into three deconvolutional layers, the first with \\mbox{5-by-5} filters with 16 channels and the last two of \\mbox{6-by-6} filters with 8 channels each. These are followed by a final Bernoulli distribution layer.\n\nFor this domain, we collect 400 episodes initially to train our model and 10 per iteration thereafter. Note that this pretraining data is collected only once across solving all of the tasks that we test on. The cost function is the cube root of the $L^2$-norm of the displacement vector between the end-effector and the target in 3D-space.\n\n{\\bf Sawyer pushing.} The image-based Sawyer pushing domain also operates on \\mbox{64-by-64-by-3} image observations. Our recognition and observation models are the same as those used in the block-stacking domain. The dynamics model is learned by a network with two FC ReLU layers of width 128 followed by a 12-dimensional Gaussian distribution layer. The cost model is learned jointly with the representation and dynamics by optimizing the ELBO, which, with regard to the cost, corresponds to logistic regression on the observed sparse reward using a sampled latent state as the input. We collect 200 episodes to train our model and 20 per iteration for RL.\n\nDuring the RL phase, the human supervisor uses keyboard input to provide the sparse reward signal to the learning algorithm, indicating whether or not the mug was successfully pushed onto the coaster. In practice, for simplicity, we label the last five images of the trajectory as either $0$ or $1$ depending on whether or not the keyboard was pressed at any time during the trajectory, as for this task a successful push is typically reflected in the end state. In order to overcome the exploration problem and provide a diverse dataset for pretraining the cost model, we manually collect $180$ ``goal images'' where the mug is on the coaster and the robot arm is in various locations.\n\n\n\\section{Implementation of Comparisons}\n\\label{sec:comp}\n\n{\\bf PPO.} We use the open-source implementation of PPO (named ``PPO2'') from the OpenAI Baselines project: \\mbox{\\footnotesize{\\url{https://github.com/openai/baselines}}}. We write OpenAI gym wrappers for our simulated environments in order to test PPO on our simulated tasks.\n\n{\\bf LQR-FLM.} We implement LQR-FLM based on the open-source implementation from the Guided Policy Search project: \\mbox{\\footnotesize{\\url{https://github.com/cbfinn/gps}}}. The only modification to the LQR-FLM algorithm that we make is to handle unknown cost functions by fitting a quadratic cost model to data from the current policy.\n\n{\\bf DVF.} We train a video prediction model using the open-source Stochastic Adversarial Video Prediction project: \\mbox{\\footnotesize{\\url{https://github.com/alexlee-gk/video_prediction}}}. To define the task, we specify the location of a pixel whose movement to a specified goal location indicates success. The cost function is then the predicted probability of successfully moving the selected pixel to the goal. 
We then use MPC, specifically the cross-entropy method (CEM), for planning: we sample sequences of actions from a Gaussian, predict the corresponding sequence of images using the video prediction model, evaluate the cost of the imagined trajectory with the cost model, and refit the parameters of the Gaussian to the best predicted action sequences. This iterative process eventually outputs an action sequence to perform in the real world in order to try to solve the task.

{\bf RCE.} We use model learning code directly from the authors of RCE \citep{rce}, though this code is not publicly available and to our knowledge there are no open source implementations of RCE or E2C \citep{e2c} that are able to reproduce the results from the respective papers. In addition to LQR-based control, we also experiment with MPC with neural network dynamics and cost models in the learned latent representation. In our experiments, we report the best results using either of these control methods.

{\bf VAE ablation.} In the VAE ablation, we replace our representation and global models with a standard VAE \citep{Kingma2014, Rezende2014}, which imposes a unit Gaussian prior on the latent representation. Because we cannot infer local dynamics as described in \autoref{sec:inference}, we instead use a GMM dynamics prior that is trained on all data as described by \citet{gps}. After fitting a local quadratic cost model, we again have a local LQS model that we can use in conjunction with an LQR-FLM policy update.

{\bf MPC baseline.} MPC involves planning $H$ time steps ahead using a dynamics and cost model, executing an action based on this plan, and then re-planning after receiving the next observation \citep{mpc}. Recently, MPC has proven to be a successful control method when combined with neural network dynamics models, where many trajectories are sampled using the model and then the first action corresponding to the best imagined trajectory is executed \citep{nn-dyn,pets}. Similar to LQR-FLM, we can extend MPC to handle image-based domains by learning dynamics and cost models within a learned latent representation. As MPC does not require an LQS model, we can instead utilize neural network dynamics and cost models, which are more expressive.
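For concreteness, the following Python sketch shows the generic shape of this sampling-based planning loop. It is a minimal illustration rather than our actual implementation: \texttt{dynamics} and \texttt{cost} are hypothetical stand-ins for the learned latent dynamics and cost models, and all hyperparameter names and values are placeholders.

\begin{verbatim}
import numpy as np

def cem_plan(dynamics, cost, state, horizon=10, act_dim=2,
             n_samples=128, n_elite=16, n_iters=5):
    """Cross-entropy method over open-loop action sequences."""
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences from the current Gaussian.
        acts = mean + std * np.random.randn(n_samples, horizon, act_dim)
        costs = np.zeros(n_samples)
        for i in range(n_samples):
            s = state
            for t in range(horizon):
                costs[i] += cost(s, acts[i, t])  # imagined trajectory cost
                s = dynamics(s, acts[i, t])      # imagined next state
        # Refit the Gaussian to the lowest-cost (elite) sequences.
        elite = acts[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # an MPC controller executes mean[0], then re-plans
\end{verbatim}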
\section{Additional Experiments}
\label{sec:supp-exp}

\subsection{RCE on Fixed-Target 2D Navigation}

\begin{figure}
    \centering
    \includegraphics[width=0.8\linewidth]{img/solar/2dnav-rce.png}
    \caption{On 2D navigation with the goal fixed to the bottom right, RCE is able to successfully learn a policy for navigating to the goal.}
    \label{fig:pm-e2c}
\end{figure}

As mentioned in \autoref{sec:solar-experiments}, RCE was unable to make progress on the 2D navigation task, though we obtained more successful results by fixing the position of the goal to the bottom right, as is done in the image-based 2D navigation task considered in E2C \citep{e2c} and RCE \citep{rce}. \autoref{fig:pm-e2c} details this experiment, which we ran for three random seeds; we report the mean and standard deviation of the average final distance to the goal as a function of the number of training episodes. This indicates that RCE can indeed solve some tasks from image observations, though we were unable to use RCE successfully on any of the tasks we consider.

\subsection{Full Learning Progress of PPO}

In \autoref{fig:log-plots} we include the plots for the simulated tasks comparing \metabbr\ and PPO. Note that the x-axis is on a log scale, i.e., though our method is sometimes worse in final policy performance, we use one to three orders of magnitude fewer samples. This demonstrates our method's sample efficiency compared to model-free methods, while it is still able to solve complex image-based domains that are difficult for model-based methods.

\begin{figure}
    \centering
    \begin{subfigure}{0.32\linewidth}
        \includegraphics[width=0.9\linewidth]{img/solar/2dnav-log.png}
        \caption{}
    \end{subfigure}
    \begin{subfigure}{0.31\linewidth}
        \includegraphics[width=0.9\linewidth]{img/solar/car-log.png}
        \caption{}
    \end{subfigure}
    \begin{subfigure}{0.33\linewidth}
        \includegraphics[width=0.9\linewidth]{img/solar/reacher-log.png}
        \caption{}
    \end{subfigure}
    \caption[Comparison of \metabbr\ to PPO]{(a)~Comparison of our method to PPO on the 2D navigation task presented in the paper. Our method uses roughly three orders of magnitude fewer samples to solve the task compared to PPO. (b)~On the car from images task, our method achieves slightly worse performance than PPO though with about 25 times fewer samples. (c)~Comparison of our method to PPO for the reacher task. Our method achieves worse final performance but uses about 40 times fewer samples than PPO.}
    \label{fig:log-plots}
\end{figure}
{"text": "\\chapter{Lecture 3 May 07th 2018}\n  \\label{chapter:lecture_3_may_07th_2018}\n\n\\section{Groups} % (fold)\n\\label{sec:groups}\n\n\\subsection{Groups} % (fold)\n\\label{sub:groups}\n\n\\begin{defn}[Groups]\\label{defn:groups}\n\\index{Groups}\n  Let $G$ be a set and $*$ an operation on $G \\times G$. We say that $G = (G, *)$ is a \\hlnoteb{group} if it satisfies\\sidenote{If you wonder why the uniqueness is not specified for \\hlnoteb{Identity} and \\hlnoteb{Inverse}, see \\cref{propo:uniqueness_of_group_identity_and_group_element_inverse}.}\n  \\begin{enumerate}\n    \\item \\hlnoteb{Closure}: $\\forall a, b \\in G \\quad a * b \\in G$\n    \\item \\hlnoteb{Associativity}: $\\forall a, b, c \\in G \\quad a * (b * c) = (a * b) * c$\n    \\item \\hlnoteb{Identity}: $\\exists e \\in G \\enspace \\forall a \\in G \\quad a * e = a = e * a$\n    \\item \\hlnoteb{Inverse}: $\\forall a \\in G \\enspace \\exists b \\in G \\quad a * b = e = b * a$\n  \\end{enumerate}\n\\end{defn}\n\n\\begin{defn}[Abelian Group]\\label{defn:abelian_group}\n\\index{Abelian Group}\n  A group $G$ is said to be abelian if $\\forall a, b \\in G$, we have $a * b = b * a$.\n\\end{defn}\n\n\\begin{propo}[Group Identity and Group Element Inverse]\\label{propo:uniqueness_of_group_identity_and_group_element_inverse}\n  Let $G$ be a group and $a \\in G$.\n  \\begin{enumerate}\n    \\item The identity of $G$ is unique.\n    \\item The inverse of $a$ is unique.\n  \\end{enumerate}\n\\end{propo}\n\n\\begin{proof}\n  \\begin{enumerate}\n    \\item If $e_1, e_2 \\in G$ are both identities of $G$, then we have\n      \\begin{equation*}\n        e_1 \\overset{(1)}{=} e_1 * e_2 \\overset{(2)}{=} e_2\n      \\end{equation*}\n      where $(1)$ is because $e_2$ is an identity and $(2)$ is because $e_1$ is an identity.\n\n    \\item Let $a \\in G$. If $b_1, b_2 \\in G$ are both the inverses of $a$, then we have\n      \\begin{equation*}\n        b_1 = b_1 * e = b_1 * (a * b_2) \\overset{(1)}{=} e * b_2 = b_2\n      \\end{equation*}\n      where $(1)$ is by associativity.\n  \\end{enumerate}\n\\end{proof}\n\n\\begin{eg}\n  The sets $(\\mathbb{Z}, +), \\, (\\mathbb{Q}, +), \\, (\\mathbb{R}, +)$, and $(\\mathbb{C}, +)$ are all abelian, where the additive identity is $0$, and the additive inverse of an element $r$ is $(-r)$.\n\\end{eg}\n\n\\begin{note}\n  $(\\mathbb{N}, +)$ is not a group for neither does it have an identity nor an inverse for any of its elements.\n\\end{note}\n\n\\begin{eg}\n  The sets $(\\mathbb{Q}, \\cdot), \\, (\\mathbb{R}, \\cdot)$ and $(\\mathbb{C}, \\cdot)$ are \\hlwarn{not} groups, since $0$ has no multiplicative inverse in $\\mathbb{Q}, \\mathbb{R}$ or $\\mathbb{C}$.\n\\end{eg}\n\nWe may define that for a set $S$, let $S^* \\subseteq S$ contain all the elements of $S$ that has a multiplicative inverse. For example, $\\mathbb{Q}^* = \\mathbb{Q} \\setminus \\{0\\}$. 
Then, $(\mathbb{Q}^*, \cdot), (\mathbb{R}^*, \cdot)$ and $(\mathbb{C}^*, \cdot)$ are groups and are in fact abelian, where the multiplicative identity is $1$ and the multiplicative inverse of an element $r$ is $\frac{1}{r}$.

\begin{eg}
  The set $\big( M_n(\mathbb{R}), + \big)$ is an abelian group, where the additive identity is the zero matrix, $0 \in M_n(\mathbb{R})$, and the additive inverse of an element $M = [a_{ij}] \in M_n(\mathbb{R})$ is $-M = [-a_{ij}] \in M_n(\mathbb{R})$.
\end{eg}

\newthought{Consider} the set $M_n(\mathbb{R})$ under the matrix multiplication operation that we introduced in \nameref{chapter:lecture_1_may_02nd_2018}. We found that the identity matrix is
\begin{equation*}
  I = \begin{bmatrix}
    1 & 0 & \hdots & 0 \\
    0 & 1 & \hdots & 0 \\
    \vdots & \vdots & & \vdots \\
    0 & 0 & \hdots & 1
  \end{bmatrix} \in M_n(\mathbb{R}).
\end{equation*}
But since not all elements of $M_n(\mathbb{R})$ have a multiplicative inverse\sidenote{The multiplicative inverse of a matrix does not exist if its determinant is $0$.}, $(M_n(\mathbb{R}), \cdot)$ is not a group.

\newthought{We can try} to do something similar to what we did before: excluding the elements that do not have an inverse. In this case, we exclude elements whose determinant is $0$. We define the following set.

\begin{defn}[General Linear Group]\index{General Linear Group}
\label{defn:general_linear_group}
  The \hlnoteb{general linear group of degree $n$ over $\mathbb{R}$} is defined as
  \begin{equation*}
    GL_n(\mathbb{R}) := \{ M \in M_n(\mathbb{R}) \, : \, \det M \neq 0 \}
  \end{equation*}
\end{defn}

Note that $\because \det I = 1 \neq 0$, we have that $I \in GL_n(\mathbb{R})$. \\
Also, $\forall A, B \in GL_n(\mathbb{R} )$, we have that $\because \det A \neq 0 \, \land \, \det B \neq 0$,
\begin{equation*}
  \det AB = \det A \det B \neq 0,
\end{equation*}
and therefore $AB \in GL_n(\mathbb{R} )$. Finally, $\forall M \in GL_n(\mathbb{R})$, $\exists M^{-1} \in GL_n(\mathbb{R})$ such that
\begin{equation*}
  MM^{-1} = I = M^{-1} M
\end{equation*}
since $\det M \neq 0$. $\therefore (GL_n(\mathbb{R}), \cdot)$ is a group.

\newthought{Since} we introduced permutations in \nameref{chapter:lecture_2_may_04th_2018}, we shall formalize the purpose of their introduction below.

\begin{eg}
  Consider $S_n$, the set of all permutations on $\{1, 2, ..., n\}$. By \cref{propo:properties_of_Sn}, we know that $S_n$ is a group. We call $S_n$ the \hldefn{symmetric group} \hlnoteb{of degree $n$}. For $n \geq 3$, the group $S_n$ is not abelian\sidenote{Let us make this an exercise.
  \begin{ex}
    For $n \geq 3$, prove that the group $S_n$ is not abelian.
  \end{ex}}.
\end{eg}

\newthought{Now that} we have a fairly good idea of the basic concept of a group, we proceed to look into handling multiple groups. One such construction is known as the \hldefn{direct product}.

\begin{eg}
  \label{eg:direct_product}
  Let $G$ and $H$ be groups. Their direct product is the set $G \times H$ with the component-wise operation defined by
  \begin{equation*}
    (g_1, h_1) * (g_2, h_2) = (g_1 *_G g_2, h_1 *_H h_2)
  \end{equation*}
  where $g_1, g_2 \in G$, $h_1, h_2 \in H$, $*_G$ is the operation on $G$, and $*_H$ is the operation on $H$.

  The \hlnoteb{closure} and \hlnoteb{associativity} properties follow immediately from the definition of the operation. The identity is $(1_G, \, 1_H)$ where $1_G$ is the identity of $G$ and $1_H$ is the identity of $H$. The inverse of an element $(g_1, \, h_1) \in G \times H$ is $(g_1^{-1}, \, h_1^{-1})$.
\end{eg}
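As a quick concrete check of this construction, take $G = (\mathbb{Z}, +)$ and $H = (\mathbb{Q}^*, \cdot)$. In $G \times H$ we compute component-wise:
\begin{equation*}
  \left(2, \tfrac{3}{2}\right) * (5, 4) = \left(2 + 5, \tfrac{3}{2} \cdot 4\right) = (7, 6),
\end{equation*}
the identity is $(0, 1)$, and the inverse of $\left(2, \tfrac{3}{2}\right)$ is $\left(-2, \tfrac{2}{3}\right)$, since $\left(2, \tfrac{3}{2}\right) * \left(-2, \tfrac{2}{3}\right) = (0, 1)$.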
By induction, we can show that if $G_1, G_2, ..., G_n$ are groups, then so is $G_1 \times G_2 \times \hdots \times G_n$.

To facilitate our writing, we shall use the following notations:

\begin{notation}
  Given a group $G$ and $g_1, g_2 \in G$, we often denote its identity by $1$, and write $g_1 * g_2 = g_1 g_2$. Also, we denote the unique inverse of an element $g \in G$ as $g^{-1}$.

  We will write $g^0 = 1$. Also, for $n \in \mathbb{N}$, we define
  \begin{equation*}
     g^n = \underbrace{g * g * \hdots * g}_{n \text{ times}}
  \end{equation*}
  and
  \begin{equation*}
    g^{-n} = (g^{-1})^n
  \end{equation*}
\end{notation}

With the above notations,

\begin{propo}\label{propo:group_notations}
  Let $G$ be a group and $g, h \in G$. We have \marginnote{
    \begin{ex}
      Prove \cref{propo:group_notations} as an exercise.
    \end{ex}
  }
  \begin{enumerate}
    \item $(g^{-1})^{-1} = g$
    \item $(gh)^{-1} = h^{-1} g^{-1}$
    \item $g^n g^m = g^{n + m}$ for all $n, m \in \mathbb{Z}$
    \item $(g^n)^m = g^{nm}$ for all $n, m \in \mathbb{Z}$
  \end{enumerate}
\end{propo}

\begin{warning}
  In general, it is not true that if $g, h \in G$, then $(gh)^n = g^n h^n$.
For example,
  \begin{equation*}
    (gh)^2 = ghgh \quad \text{but} \quad g^2 h^2 = gghh.
  \end{equation*}
  The two are equal if and only if $g$ and $h$ commute; they agree for all $g, h \in G$ exactly when $G$ is abelian.
\end{warning}

% subsection groups (end)

% section groups (end)

% chapter lecture_3_may_07th_2018 (end)
{"text": "\\chapter{Discrete Fourier Transform (DFT) Project}\n\\glsresetall\n\\label{chapter:DFT_project}\n\n\\section{Introduction}\nThe goal of this project is design architectures that implement the Discrete Fourier Transform (DFT).  The DFT is a common operation in signal processing which generates a frequency domain representation of the discrete input signal. We start with the direct implementation of the DFT which is a matrix-vector multiplication. The input signal is a vector of samples and the matrix is composed of a set of basis functions corresponding to discrete cosine and sine waveforms of different frequencies. The multiplication of the input signal with these basis functions describes how well the input signal correlates with those waveforms, which is the value of the Fourier series at that frequency.\n\n\\section{Materials}\nYou can find the following files in \\texttt{project2-dft.zip}. These are divided into four folders: \\texttt{dft\\_8\\_precomputed}, \\texttt{dft\\_32\\_precomputed}, \\texttt{dft\\_256\\_precomputed}, and \\texttt{dft\\_1024\\_precomputed}. Each of these folders has its own testbench, \\texttt{out.gold.dat} file, and \\texttt{coefficients.h} file. \n\nEach folder contains following files:\n\\begin{itemize}\n\\item \\texttt{dft.cpp}: The baseline implementation for the dft function.\n\\item \\texttt{dft.h}: Header file holding important constants and other variables.\n\\item \\texttt{dft\\_test.cpp}: Testbench\n\\item \\texttt{coefficientsX.h}: A file containing the values of corresponding to one sine/cosine period sampled based upon the DFT points. For example, an 8 point DFT has 8 samples across both the sine and cosine function evenly spaced across one period. This is equivalent to dividing one rotation in the complex plane equally by the number of points in the DFT. \n\\item \\texttt{out.gold.dat}: ``Golden'' output. The testbench (dft\\_test.cpp) generates a sample input and calls the function \\texttt{dft} in \\texttt{dft.cpp} with that sample input. This output of the function is compared to the expected output. This will indicate PASS or FAIL. If it fails, then the code in \\texttt{dft} is incorrect. There are four different versions of depending on the DFT size and way in which the DFT coefficients were generated.\n\\item \\texttt{script.tcl} and \\texttt{directives.tcl}: Used to create the project. \n\\end{itemize}\n\n\\section{Project Goal}\nYou should modify the code to create a number of different architectures that perform tradeoffs between performance and area.  For \\texttt{dft\\_256\\_precomputed} and \\texttt{dft\\_1024\\_precomputed} designs, you need to use precomputed values from \\texttt{coefficients256.h} and \\texttt{coefficients1024.h}. \n\nFor 256-point and 1024-point DFTs, you will create a report describing how you generated these different architectures (code restructuring, pragmas utilized, etc.). For each architecture you should provide its results including the resource utilization (BRAMs, DSP48, LUT, FF), and performance in terms of throughput (number of FFT operations/second), latency, clock cycles, clock frequency (which is fixed to 10 ns). You can do most of the design space exploration on the 256 point DFT. You should pick your ``best'' 256 architecture and synthesize that as a 1024 DFT.\n\nThe 8 and 32 point folders are provided for your convenience. If you would like, you can do some of your initial design space optimization on these smaller architectures. 
The key in this project is to understand the tradeoffs between loop optimizations (unrolling and pipelining) and data partitioning. Therefore you should focus on these optimizations.

\section{Optimization Hints and Guidelines}
\begin{itemize}
\item You should use a clock period of 10 ns.
\item The output of your architecture must closely match the golden output. Be sure to generate a working function before performing any optimizations. If the result does not match exactly, but is close, please explain why in the report.
\item You should use float for all data types. You do not need to perform bitwidth optimization for this project.
\item The current design is set up to do the DFT in-place, i.e., you put the output results back into the same arrays that hold the input values. You can change this if you think it will give you better results.
\item There are many different ways to generate the DFT coefficients. These are constants when the DFT size is fixed. We have given you the coefficients for both 256 (in \texttt{coefficients256.h}) and 1024 (in \texttt{coefficients1024.h}). They each have two constant arrays, \texttt{sin\_table} and \texttt{cos\_table}. You can use these coefficient arrays directly as memories in your architectures. You are also free to create your own arrays using a different structure (e.g., 2D array, reordering of elements in the given 1D arrays, etc.). Or you could dynamically generate the coefficients. 
\item There is a significant amount of parallelism that can be exploited by (partially) unrolling the for loops. Pipelining these (partially) unrolled for loops should lead to higher throughputs.
\item There are more efficient methods for performing the DFT that exploit the symmetries of the Fourier constants, e.g., the fast Fourier transform (FFT). Do not use these symmetries. In other words, treat this like a matrix-vector multiply with unknown matrix values. Do not worry, we will implement FFT architectures in a following project that will fully take advantage of these symmetries.
\item You do not need to report your optimizations for your 8 point and 32 point DFT; these folders are provided for your convenience. Since these will very likely synthesize much faster than larger point DFT functions, it may be useful to use these to debug your code or in your initial design space exploration.
\item Your report must explicitly state how you calculated the throughput results.
Note that this is often not simply a function of the latency and the clock period; it typically involves the initiation interval, as in the worked example following this list.
\end{itemize}
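As one common way of doing this calculation (with illustrative numbers, not measured results): if a pipelined design accepts a new DFT every $II$ clock cycles at steady state, with clock period $T_{clk}$, then
\begin{equation*}
  \text{throughput} = \frac{1}{II \cdot T_{clk}} \quad \text{DFT operations per second}.
\end{equation*}
For example, a hypothetical 256-point design with $II = 65536$ cycles and $T_{clk} = 10$ ns would sustain $1/(65536 \times 10\,\text{ns}) \approx 1526$ DFTs per second, even though its latency is typically somewhat larger than $II$ cycles.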
{"text": "% 202010??\n\\documentclass[env.tex]{subfiles}\n\\begin{document}\n\n\\subsection{The model}\nThe system is a set of ordinary differential equations (ODE).  It is a temperature-static system with two biological components (\\phy\\ and \\bac) and three density state variables (organic carbon $C$, \\phy\\ biomass $P$ and \\bac l biomass $B$; Fig.\\ref{f:model}). \\Phy\\ and \\bac\\ receive carbon from different sources (\\phy\\ photosynthesize carbon dioxide from an external unlimited source; \\bac\\ consume organic carbon from the internal finite $C$ pool) but allocate carbon in the same way (respiration, leakage and biomass incorporation).  Some biomass from \\phy/\\bac\\ die and become part of the $C$ pool.  Organic carbon in the environment can either be consumed by \\bac, harvested out of the system or remained in the $C$ pool only.  There are four major assumptions:  \\Rn{1}) homogeneous environmental conditions and light intensity (i.e. a system shaped as a thin panel); \\Rn{2}) unlimited growth nutrient for \\phy\\ and \\bac; \\Rn{3}) constant system temperature; \\Rn{4}) \\bac\\ have no carbon type preferences on consumption.  In short, living space for \\phy\\ is the only limiting factor.\n\nThe rate of change of state variables (carbon density per day, \\dxdt) are the currencies in our equations.  The equations are composed of three state variables of carbon densities, four life history traits of \\phy, four life history traits of \\bac\\ and one harvest rate parameter (Eq.\\ref{eq:PBH}).\n\n\\begin{equation}\\left\\{\\begin{array}{rl}\n    C'(t) &= \\ePR(1-\\eP)\\cdot\\gP\\cdot P +\\aP\\cdot P^2 +(\\eBR(1-\\eB)-1)\\cdot\\gB\\cdot C\\cdot B +\\mB\\cdot B -xC\\\\\n    P'(t) &= \\ePR\\cdot\\eP\\cdot\\gP\\cdot P -\\aP\\cdot P^2\\\\\n    B'(t) &= \\eBR\\cdot\\eB\\cdot\\gB\\cdot C\\cdot B -\\mB\\cdot B\n\\end{array}\\right.\\label{eq:PBH}\\end{equation}\n\nIn Eq.\\ref{eq:PBH}, $C'(t)$, $P'(t)$ and $B'(t)$ are the rates of density change (\\dxdt).  $\\ePR$ is the fraction of non-respired carbon for $P$, $\\eP$ is the fraction of carbon incorporated as $P$ biomass, $\\eBR$ is the fraction of non-respired carbon for $B$ and $\\eB$ is the fraction of carbon incorporated as $B$ biomass.  These four are ``fraction parameters\u201d.  $\\gP$ is the growth rate of $P$ (\\dayU), $\\aP$ is the intraspecific interference of $P$ (\\denI), $\\gB$ is the resource clearance rate of $B$  (\\denI) and $\\mB$ is the death rate of $B$ (\\dayU).  These four are ``rate parameters\u201d.  $x$ is the harvest rate (\\dayU), which is the speed of carbon harvest from the $C$ pool in a continuous harvest system.  $x$ is set to zero in destructive harvest systems and the harvest interval $T$ (day) is the alternative parameter for the yield measurement.  The interval is defined as\n\n\\begin{equation*}\n    T = x-1\n    \\label{eq:TvsX}\n\\end{equation*}\n\nValues of equivalent $T$ and $x$ differ by 1 because the day of system establishment is day 0.  Continuous harvest starts on day 0 but destructive harvest starts on day 1.  
Carbon yield from continuous harvest on the first day is equivalent to that from destructive harvest on the second day.

Daily yield in continuous harvest systems is defined as

\begin{equation}
    \text{daily yield} = x\cdot C
    \label{eq:yield}
\end{equation}

and the equivalent for destructive harvest is ``average yield'', which is defined as

\begin{equation*}
    \text{average yield} = \dfrac{[\text{total carbon}]|_{T}-[\text{total carbon}]|_{0}}{T}
    \label{eq:avgYd}
\end{equation*}

\begin{figure}[H]
    \centering
    \includegraphics[width=.8\linewidth]{sec/model.png}
    \caption[Model visualization]{The theoretical coexistence open system of \phy\ and \bac\ allows carbon exchange with an external carbon dioxide source. The dashed box is the defined boundary allowing matter exchange. Blue arrows are the directions of carbon flow. Pink boxes are the carbon pools (i.e. state variables). White boxes are biochemical processes in the indicated organisms. These processes divert carbon from source to different carbon pools.}
    \label{f:model}
\end{figure}

Eq.\ref{eq:PBH} is a \pbs\ of $P$ and $B$ under continuous harvest (\PBH). Three systems were branched from it: $P$-only continuous harvest (\PoH), $P$ and $B$ destructive harvest (\PBN) and $P$-only destructive harvest (\PoN).

Continuous harvest \phy-only system (\PoH);
\begin{equation*}\left\{\begin{array}{rl}
    C'(t) &= \ePR(1-\eP)\cdot\gP\cdot P +\aP\cdot P^2 -xC\\
    P'(t) &= \ePR\cdot\eP\cdot\gP\cdot P -\aP\cdot P^2
\end{array}\right.\label{eq:PoH}\end{equation*}

Destructive harvest \pbs\ of $P$ and $B$ (\PBN); and
\begin{equation*}\left\{\begin{array}{rl}
    C'(t) &= \ePR(1-\eP)\cdot\gP\cdot P +\aP\cdot P^2 +(\eBR(1-\eB)-1)\cdot\gB\cdot C\cdot B +\mB\cdot B\\
    P'(t) &= \ePR\cdot\eP\cdot\gP\cdot P -\aP\cdot P^2\\
    B'(t) &= \eBR\cdot\eB\cdot\gB\cdot C\cdot B -\mB\cdot B
\end{array}\right.\label{eq:PBN}\end{equation*}

Destructive harvest \phy-only system (\PoN).
\begin{equation*}\left\{\begin{array}{rl}
    C'(t) &= \ePR(1-\eP)\cdot\gP\cdot P +\aP\cdot P^2\\
    P'(t) &= \ePR\cdot\eP\cdot\gP\cdot P -\aP\cdot P^2
\end{array}\right.\label{eq:PoN}\end{equation*}

Three biologically meaningful stable states for continuous harvest systems were deduced using SymPy (v1.5.1) in python3 (v3.7.3) (Table \ref{t:eqm}). Equilibria 2 \& 3 correspond to \PoH\ \& \PBH\ respectively. Equilibrium 1 is a default alternative stable state with no carbon in the system. Destructive systems can only be investigated through integration because the analytical solutions are invalid; those solutions consist of state variables only, with no dependence on the biological parameters.

\begin{table}[H]
    \centering
    \caption[Model equilibria]{Equilibria from the four variations of the proposed model (Eq.\ref{eq:PBH})}
    \begin{tabular}{cl|ccc}\hline
        equilibrium & scenario & $C$ & $P$ & $B$ (\PBH\ only) \\\hline
        1 & \PBH, \PoH & 0 & 0 & 0 \\
        2 & \PBH, \PoH & $\dfrac{\eP(\ePR\gP)^2}{\aP x}$ & $\dfrac{\ePR\eP\gP}{\aP}$ & 0 \\
        3 & \PBH & $\dfrac{\mB}{\eBR\eB\gB}$ & $\dfrac{\ePR\eP\gP}{\aP}$ & $\dfrac{(\ePR\gP)^2\eBR\eB\gB-\aP\mB x}{(1-\eBR)\aP\gB\mB}$ \\\hline
    \end{tabular}
    \label{t:eqm}
\end{table}

\subsection{Parameter space}
Data for the defined parameters in Eq.
\ref{eq:PBH} was collected from the literature (Supplementary sections 3-9). Data was temperature-standardized to \temp\ to align with most of the collected data (Supplementary section 2). Parameters with fewer than five data points were called ``data deficient''; half of the parameters in Table \ref{t:ranges} ($\aP$, $\eBR$, $\eB$, $\mB$) were data deficient. The widest percentage range in the respective parameter group (fraction or rate parameters) was inferred as the logical range for these data-deficient parameters. The selection was based on the Metabolic Theory of Ecology \autocite{brown2004toward}, which suggests that organisms of similar body size have similar metabolic rates. \Phy\ and \bac\ have similar cell sizes as unicellular organisms, so corresponding life history traits of \phy\ and \bac\ were assumed to share parameter ranges. The range for each parameter was rooted around the mean of the collected data and bounded by the percentage range. In each parameter range, 11 evenly-spaced sample values were selected. The values formed a defined space of parameter combinations for model simulations.

\begin{table}[H]
    \centering
    \caption[Algebra variables]{Biological variables and corresponding ranges framing the parameter space}
    \begin{tabular}{cclll}\hline
        variable & unit & description & min & max \\\hline
        $N'(t)$ & \dxdt & rate of change of respective carbon pool {\tiny($N=C,P,B$)} & - & - \\
        $N$ & \den & carbon density for respective pool {\tiny($N=C,P,B$)} & - & - \\
        $\ePR$ & - & non-respired carbon fraction for $P$ & 0.08 & 0.87 \\
        $\eP$ & - & assimilated carbon fraction for $P$ & 0.40 & 1.00 \\
        $\gP$ & \dayU & growth rate of $P$ & 0.03 & 3.17 \\
        $\aP$ & \denI & intraspecific interference of $P$ & 0.02 & 1.52 \\
        $\eBR$ & - & non-respired carbon fraction for $B$ & 0.13 & 1.00 \\
        $\eB$ & - & assimilated carbon fraction for $B$ & 0.07 & 0.82 \\
        $\gB$ & \denI & clearance rate of $B$ & 0.10 & 3.50 \\
        $\mB$ & \dayU & death rate of $B$ & 0.01 & 0.63 \\
    \hline\end{tabular}
    \label{t:ranges}
\end{table}

\subsection{Scenario sampling}
A set of 5500 (11 sample values $\times$ 500 expected sample frequency) unique biological parameter combinations was randomly selected via a uniform prior with seed number ``20192020'' in R (v4.0.2). This Latin Hypercube Sampling (LHS) method ensured the widest coverage of the parameter space with a limited sample size. 200 evenly-spaced harvest rates ($x$) ranging over 1-19901 \dayU\ were applied to each set of biological parameter combinations in continuous harvest systems. Equivalent harvest interval samples ($T$ = 0-19900 days, 200 evenly-spaced samples) were applied to each set of biological parameter combinations in destructive harvest systems.

\subsection{Model experiment}
Carbon densities of continuous harvest systems were analytically calculated in C (gcc v7.5.0) using the equilibrium positions from Table \ref{t:eqm}. Yield in \PoH\ was independent of the harvest rate as defined in Eq.\ref{eq:yield}. Carbon densities of destructive harvest systems were numerically integrated in python3 (v3.7.3) using the SciPy (v1.2.3) ``odeint'' function and Eq.\ref{eq:PBH}. Initial densities for \PBN\ and \PoN\ were $C=P=B=1$ and $C=P=1$, $B=0$ \den\ respectively. I also integrated destructive systems on a log-spaced time grid (range of $T$ = 0-100 days) to observe the system behaviour before reaching stability.
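To make the integration step concrete, the following Python sketch integrates the destructive-harvest system \PBN\ with SciPy's ``odeint''. It is an illustration rather than the exact analysis script: the plain variable names stand in for the model parameters above, and the parameter values shown are arbitrary mid-range picks, not those sampled in the study.

\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

def pbn(y, t, epr, ep, gp, ap, ebr, eb, gb, mb):
    """Right-hand side of the destructive-harvest system (x = 0)."""
    C, P, B = y
    dC = epr*(1 - ep)*gp*P + ap*P**2 + (ebr*(1 - eb) - 1)*gb*C*B + mb*B
    dP = epr*ep*gp*P - ap*P**2
    dB = ebr*eb*gb*C*B - mb*B
    return [dC, dP, dB]

params = (0.5, 0.7, 1.5, 0.8, 0.5, 0.4, 1.8, 0.3)   # arbitrary mid-range values
t = np.linspace(0.0, 100.0, 1001)                   # days
sol = odeint(pbn, [1.0, 1.0, 1.0], t, args=params)  # columns: C, P, B
\end{verbatim}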
\subsection{Carbon yield analysis}
Feasible solutions were defined as non-negative abundances in all state variables for a scenario. Infeasible solutions were set as invalid data (NA) for observing yield distributions within feasibility limits. All four systems had right-skewed carbon yield distributions with extremely high-valued outliers. Pairwise Wilcoxon tests in R (v4.0.2) with Bonferroni adjustment and without pairing were therefore chosen for median comparisons between systems. Destructive harvest system development was observed on a log-spaced time grid; \bac l effects on each harvest mode were examined through selected harvest parameters; and life history influences were deduced by observing the log carbon yield distribution across the parameter range of each trait.

\end{document}
{"text": "\\chapter{Theoretical Background}\n\\section{Basics in Modelling Light in Computer Graphics}\n\\subsection{Radiometry}\nOne purpose of Computer Graphics is to simulate the interaction of light with a surface and how a real-world observer, such as a human eye, will perceive this. These visual sensations of an eye are modelled relying on a virtual camera which captures the emitted light from the surface. The physical basis to measure such reflected light is studied under radiometry which deals with measures on the electromagnetic radiation transferred from a source to a receiver. \\\\\n\nFundamentally, light is a form of energy propagation, consisting of a large collection of photons, whereat each photon can be considered as a quantum of light that has a position, a direction of propagation and a wavelength $\\lambda$. A photon travels at a certain speed $v = c/n$, that depends only the speed of light $c$ and the refractive index $n$ through which it propagates. Its frequency is defined by $f = v/\\lambda$ and its carried amount of energy $q$, measured in the SI unit Joule, is given by $q = hf= hv/\\lambda n$ where $h$ is the Plank's constant. The total energy of a large collection of photons is hence $Q = \\sum_i q_i$.\n\n\\subsection{Spectral Energy}\n\nIt is important to understand that the human eye is not equally sensitive to all wavelength of the spectrum of light and therefore responds differently to specific wavelengths. Remember that our goal is to model the human visual perception. This is why we consider the energy distribution of a light spectrum rather than considering the total energy of a photon collection since then we could weight the distribution according to the human visual system. So the question we want to answer is: How is the energy distributed across wavelengths of light? \\\\\n\nOne idea is to make an energy histogram from a given photon collection. For this we have to order all photons by their associated wavelength, discretize the wavelength spectrum, count all photons which then will fall in same wavelength-interval, and then, finally, normalize each interval by the total energy $Q$. This will give us a histogram which tells us the spectral energy $Q_{\\lambda}$ for a given discrete $\\lambda$ interval and thus models the so called spectral energy distribution $\\footnote{Intensive quantities can be thought of as density functions that tell the density of an extensive quantity at an infinitesimal by a small interval or a point.}$.\n\n\\subsection{Spectral Power}\nRendering an image in Computer Graphics corresponds to capturing the color sensation of an illuminated, target scene at a certain point in time. Each color is associated with either a particular wavelength or is composed of a wavelength spectrum$\\footnote{A wavelength spectrum is a collection of certain wavelengths. For example brown color is a composition of many wavelengths in the region of yellow, orange or read color in combination with low luminance.}$. Thus a color is directly related to a certain amount of energy. In order to determine the color of a to-be-rendered pixel of an image, we have to first get a sense of how much light (in terms of energy) passes through the area which the pixel corresponds to. We begin by considering the flow of energy $\\Phi = \\frac{\\Delta Q}{\\Delta t}$ transferred through this area over a unit period of time. This allows us to measure the energy flow through a pixel during a certain amount of time. 
In general, power is the estimated rate of energy production for light sources and corresponds to the flux. It is measured in the unit Watts and denoted by $\Phi$. Since power is a rate over time, it is well defined even when energy production varies over time. As with spectral energy, for rendering we are really interested in the spectral power $\Phi_\lambda = \frac{\Delta \Phi}{\Delta \lambda}$, measured in Watts per nanometer.

\subsection{Spectral Irradiance}
Before we can tell how much light is reflected from a given point on a surface towards the viewing direction of an observer, we first have to know how much light arrives at this point. Since in general a point has no length, area or volume associated with it, let us instead consider an infinitesimal area $\Delta A$ around such a point. Then, we can ask ourselves how much light falls in such a small area. Observing this process over a short period of time, the measured quantity gives us the spectral irradiance $E$ as illustrated in figure $\ref{fig:irradiance}$. Summarized, this quantity tells us how much spectral power is incident on a surface per unit area and is mathematically given by:

\begin{equation}
 E_{\lambda} = \frac{\Phi_{\lambda}}{\Delta A}
\end{equation} 

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.5]{background/irradiance.png}
  \caption[Irradiance]{Irradiance is the radiance summed up over all directions. The black border represents a surface element.}
  \label{fig:irradiance}
\end{figure}

\subsection{Spectral Radiance}
When rendering an image we have to determine the color of each pixel of the image. Although irradiance tells us how much light is arriving at a point and gets reflected, it tells us nothing about the power distribution across different directions. This directional distribution is important because the human eye may perceive the brightness of an illuminated object differently when looking at it from different directions. 

\begin{figure}[H]
  \centering
  \subfigure[Radiance is the density of photons per area per solid angle]{
    \includegraphics[scale=0.4]{background/radiancehemisphere.png}
    \label{fig:radiance}
  }
~
  \subfigure[Solid angle is the area of a surface patch on a sphere with radius R which is spanned by a set of directions]{  
    \includegraphics[scale=0.6]{background/solidangle.png}
    \label{fig:solidangle}
  }
\caption[Concept of Radiance]{Illustration of the concepts of radiance and of solid angle and how they are related.}  
\label{fig:radianceBasics}
\end{figure}
\noindent
This concept is described by the radiometric quantity radiance. Basically, radiance is the measure of light energy passing through or emitted from a small area around a point on a surface towards a given direction during a short period in time. More formally, this is the spectral power emerging from an arbitrary point (an infinitesimal area around this point) that falls within a given solid angle (see figure$\footnote{Modified from a figure in Computer Graphics class 2012 in chapter \emph{Colors}}$ $\ref{fig:solidangle}$) in a specific direction (usually towards the observer) as shown in figure $\ref{fig:radiance}$.
Formally, this leads to the following definition:

\begin{equation}
 L_{\lambda}(\omega) = \frac{d^2 \Phi_{\lambda}}{dA \, \cos\theta \, d\Omega} \approx \frac{\Phi_{\lambda}}{\Omega A \cos\theta}
\end{equation}

where $L$ is the observed spectral radiance in $W m^{-2} sr^{-1}$ in direction $\omega$$\footnote{The direction $\omega$ is determined by two angles, $\phi$ and $\theta$, as illustrated in figure $\ref{fig:radianceBasics}$}$, $\Phi_{\lambda}$ is the total spectral power emitted, $\theta$ is the angle between the surface normal and the specified direction, $A$ is the area of the surface and $\Omega$ is the solid angle subtended by the observation or measurement. \\

It is useful to distinguish between radiance incident at a point on a surface and radiance exitant from that point. Terms for these concepts sometimes used in the graphics literature are surface radiance $L_r$ for the radiance \textit{reflected} from a surface and field radiance $L_i$ for the radiance \textit{incident} at a surface.  

\subsection{BRDF}
In order to render the colorization of an observed object, a natural question in Computer Graphics is what portion of the incident light a viewer will receive after reflection when looking at an illuminated object. For any given surface which is illuminated from a certain direction $\omega_i$, we can ask ourselves how much light is reflected from any point on this surface towards a viewing direction $\omega_r$. This is where the Bidirectional Reflectance Distribution Function (BRDF) comes into play, which is a radiometric quantity telling us how much light is reflected at an opaque surface. Mathematically speaking, the BRDF is the ratio of the reflected radiance pointing in the direction $\omega_r$ to the incident irradiance coming from the inverse direction of $\omega_i$, as illustrated in figure $\ref{fig:brdfillustration}$. Hence the BRDF is a four dimensional function defined by the four angles $\theta_i$, $\phi_i$, $\theta_r$ and $\phi_r$.

\begin{figure}[ht]
  \centering
  \includegraphics[scale=0.5]{background/mybrdfmodel.png}
  \caption[BRDF Model]{Illustration of the BRDF model, where $\omega_i$ is pointing to the light source and the viewing direction is denoted by $\omega_r$. Both unit direction vectors are defined w.r.t.\ a surface normal $\mathbf{n}$ for every point on the surface.}
  \label{fig:brdfillustration}  
\end{figure}

Formally, a BRDF for any given wavelength $\lambda$ is defined as:

\begin{align}
  BRDF_{\lambda}(\omega_i, \omega_r)
  & = \frac{dL_r(\omega_r)}{dE_i(\omega_i)} \nonumber \\
  & = \frac{dL_r(\omega_r)}{L_i(\omega_i)cos(\theta_i)d\omega_i}
  \label{eq:defbrdf}
\end{align}

where $L_{r}$ is the reflected spectral radiance, $E_i$ is the incident spectral irradiance and $\theta_{\text{i}}$ is the angle between $\omega_{\text{i}}$ and the surface normal $\mathbf n$. Also, $L_i$ is the incident spectral radiance.
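As a standard sanity-check example (not specific to this thesis): a perfectly diffuse (Lambertian) surface reflects incident light equally in all directions, so its BRDF is the constant
\begin{equation*}
  BRDF_{\lambda}(\omega_i, \omega_r) = \frac{\rho_{\lambda}}{\pi},
\end{equation*}
where $\rho_{\lambda} \in [0,1]$ is the surface albedo at wavelength $\lambda$; the factor $\pi$ results from integrating $\cos\theta_r$ over the hemisphere and ensures energy conservation.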
\subsection{Wavespectrum and Colors}
In order to see how crucial a role human vision plays, let us consider the following definition of color by \textit{Wyszecki and Stiles}, mentioned in the Fundamentals of Computer Graphics book$\cite{fundcg}$: \textit{``Color is the aspect of visual perception by which an observer may distinguish differences between two structure-free fields of view of the same size and shape such as may be caused by differences in the spectral composition of the radiant energy concerned in the observation''}. Therefore, similarly to the human senses of smell and taste, color vision is just another individual sense of perception, giving us the ability to distinguish between different frequency distributions of light, which we experience as different colors.

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.35]{background/lightspec.png}
  \caption[Visible Lightspectrum]{Frequency (top) and wavelength (bottom) of colors of the visible light spectrum$\footnotemark$.}
  \label{fig:colorspectrum}
\end{figure}
\footnotetext{Similar figure like used in computer graphics class 2012 in chapter colors}

In general, an eye consists of photoreceptor cells which are responsible for providing the ability of color perception. A schematic of an eye is illustrated in figure $\ref{fig:humaneye}$. Basically, there are two specialized types of photoreceptor cells: cone cells, which are responsible for color vision, and rod cells, which allow an eye to perceive different brightness levels.

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.35]{background/humaneye.png}
  \caption[humanayeschematic]{Schematic$\footnotemark$ of photoreceptor cells, cones and rods, in a human eye }
  \label{fig:humaneye}
\end{figure}
\footnotetext{image of illustration has been taken from \\ \texttt{http://en.wikipedia.org/wiki/Bidirectional\textunderscore reflectance\textunderscore distribution\textunderscore function}}

A human eye is made of three different types of cone cells, having their peak sensitivities in different wavelength ranges. More precisely, there are cone cells most sensitive to short wavelengths between $420 nm$ and $440 nm$, those which are most sensitive in the middle range between $530 nm$ and $550 nm$ and those which have their peak in the long range, from $560 nm$ to $580 nm$. Any color sensation in human color perception can therefore be described by just three parameters, corresponding to the levels of stimulus of these three types of cone cells.  

\subsection{Colorspace}
\label{sec:colorspace}
In order to render accurate images of how a human observer sees the world, a mathematical model of human color perception is required. Remember that color sensation is due to a visual stimulus processed by cone cells in an eye, and a human eye contains three different types of cone cells. Therefore, one possible approach is to describe each kind of these cone cells with a function mapping each wavelength to a certain sensitivity. In the 1920s, from a series of experiments, the so called CIE RGB color space was derived (see figure $\ref{fig:ciergb}$). This space describes the response of the cone cells of an average human individual, the so called standard observer. Basically, a statistically sufficiently large number of test subjects were exposed to different target light colors expressed by their wavelength. The task of each subject was to reproduce these target colors by mixing three given primary colors: red, green and blue light. The strength of each primary color could be manually adjusted by setting its relative intensity. Those adjustment weights have been measured, aggregated and averaged across all subjects for each primary color. This model describes each color as a triple of three real valued numbers$\footnote{Note that negative color weights are possible in the CIE RGB color space. This is because some humanly perceived color sensations could not be reconstructed using just an additive color model (adding three positively weighted primary values). Therefore, a subject was also allowed to move one of the primary colors to the target color and instead was supposed to reproduce this new color mix using the two remaining primaries (subtractive model). The value of the selected, moved primary was then interpreted as being negatively weighted in an additive color model.}$, the so called tristimulus values. Summarized, these experiments provided certain weights of primary colors in order to match a color at a certain wavelength according to the average human color perception. However, some of these weights could have a negative value. \\
\begin{figure}[H]
  \centering
  \includegraphics[scale=0.3]{introduction/ciergb.png}
  \caption[CIE RGB Color Matching Functions]{Plots$\footnotemark$ of the CIE 1931 RGB color matching functions showing the amounts of the primaries needed to match a certain wavelength.}
\label{fig:ciergb}
\end{figure}
\footnotetext{These plots have been taken from \texttt{http://en.wikipedia.org/wiki/CIE\textunderscore 1931\textunderscore color\textunderscore space}}

The disadvantage of the CIE RGB colorspace is that some of its color weights are negative. Thus, scientists derived the CIE XYZ colorspace, which has no negative color matching functions but is still additive$\footnote{Remember, the property of an additive colorspace is that any color can be represented as a weighted sum of matching functions of that color space.}$. Figure $\ref{fig:matchingfunction}$ visualizes the matching functions of the CIE XYZ space. Another property of the CIE XYZ space is that its Y component represents the luminance of the corresponding color. Usually, the CIE XYZ space is used as a reference colorspace to define colorspace transformations. \\

Pragmatically speaking, color spaces describe the range of colors a camera can see, a printer can print or a monitor can display. Thus, formally we can define a color space as a mapping from a range of physically produced colors from mixed light to a standard objective description of color sensations registered in the eye of an observer, in terms of tristimulus values. 

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.7]{background/somatchingfunctions.png}
  \caption[Color Matching Functions]{Plots of the CIE XYZ color matching functions we used for rendering}
  \label{fig:matchingfunction}
\end{figure}

Interpolating all measured tristimulus values gives us three basis functions, the CIE color matching functions $\overline{x}(\lambda)$, $\overline{y}(\lambda)$, $\overline{z}(\lambda)$, plotted in figure $\ref{fig:matchingfunction}$; they are the numerical description of the chromatic response of the observer. They can be thought of as the spectral sensitivity curves of three linear light detectors yielding the CIE tristimulus values X, Y and Z.
\\

The tristimulus values for a color with a spectral power distribution $I(\lambda)$ are given in terms of the standard observer by:

\begin{align}
    X= \int_{\Lambda} I(\lambda)\,\overline{x}(\lambda)\,d\lambda \nonumber \\
    Y= \int_{\Lambda} I(\lambda)\,\overline{y}(\lambda)\,d\lambda \nonumber \\
    Z= \int_{\Lambda} I(\lambda)\,\overline{z}(\lambda)\,d\lambda
\label{eq:tristimulusvalues}
\end{align}
\noindent
where the integrals are taken over the visible light spectrum $\Lambda = [380nm, 780nm]$. Note that it is not possible to build a display that corresponds directly to the CIE XYZ colorspace. For this reason it is necessary to design other color spaces which are physically realizable, efficiently encoded, perceptually uniform and have an intuitive color specification. There are simple conversions from the CIE XYZ color space to other color spaces, such as the RGB colorspace, described by linear transformations.

\subsection{Spectral Rendering}
When rendering an image, most of the time we are using colors described in a certain RGB color space. However, an RGB colorspace results from a colorspace transformation of the tristimulus values, which themselves are inherent to the human visual system. Therefore, many physical phenomena are poorly modelled when they rely on RGB colors for rendering. Using only RGB colors for rendering is like assuming that a given light source emits light of only three particular wavelengths, but in reality this is rarely the case. Spectral rendering refers to using a whole wavelength spectrum, e.g. the human visible light spectrum, instead of simply using only the range of RGB values in order to render an illuminated scene. This captures the physical reality of specific light sources much more accurately. Keep in mind that even when we make use of a spectral rendering approach, we have to convert the final spectra to RGB color values when we want to display an image on an actual display.
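To make this final conversion step concrete, the following Python/NumPy sketch numerically evaluates equation $\ref{eq:tristimulusvalues}$ for a discretely sampled spectrum. It is an illustration only: the arrays \texttt{xbar}, \texttt{ybar} and \texttt{zbar} are placeholder stand-ins for tabulated CIE matching functions (real code would load the actual CIE data), and the subsequent XYZ-to-RGB step would be the linear transformation of the target RGB space.

\begin{verbatim}
import numpy as np

# Wavelength grid over the visible spectrum, in nm.
lam = np.linspace(380.0, 780.0, 81)

# Placeholder matching functions; real code would load tabulated CIE data.
xbar, ybar, zbar = (np.exp(-0.5 * ((lam - mu) / 50.0) ** 2)
                    for mu in (600.0, 550.0, 450.0))

def spectrum_to_xyz(I):
    """Trapezoidal approximation of the tristimulus integrals."""
    X = np.trapz(I * xbar, lam)
    Y = np.trapz(I * ybar, lam)
    Z = np.trapz(I * zbar, lam)
    return np.array([X, Y, Z])
\end{verbatim}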
\subsection{Rendering Equation}
\label{sec:renderingequation}
In Computer Graphics we are interested in realistically rendering images of a given scene. One common choice is to use the rendering equation when the BRDF of the involved scene materials is known. This equation models the amount of radiance emitted from a point on a surface material along a particular viewing direction. Now, let us assume we are given a directional incoming light source at a solid angle $\omega_i$, that $\theta_i$ is its angle of incidence, and that $\omega_r$ is the solid angle of the viewing direction. Further let $\lambda$ denote the wavelength$\footnote{Notice that, to keep our terms simple, we have dropped all $\lambda$ subscripts for spectral radiance quantities.}$ and $\Omega$ the hemisphere of integration for the incoming light. Then, we are able to rearrange the $BRDF_\lambda$ by using its definition $\ref{eq:defbrdf}$:

\begin{alignat}{4}
& f_r(\omega_i, \omega_r) &&= \frac{dL_r(\omega_r)}{L_i(\omega_i)cos(\theta_i)d\omega_i} \nonumber \\
\Rightarrow{} & f_r(\omega_i, \omega_r) L_i(\omega_i)cos(\theta_i)d\omega_i &&= dL_r(\omega_r) \nonumber \\
\Rightarrow{} & \int_{\Omega}f_r(\omega_i, \omega_r) L_i(\omega_i)cos(\theta_i)d\omega_i &&= \int_{\Omega}dL_r(\omega_r) \nonumber\\
\Rightarrow{} & L_r(\omega_r) &&= \int_{\Omega}f_r(\omega_i, \omega_r) L_i(\omega_i)cos(\theta_i)d\omega_i
\label{eq:initialbrdf}
\end{alignat}

The last line is the so called rendering equation.\label{sec:dirlighsourceassumption} We assume that our incident light is a directional, unpolarized light source like sunlight, and therefore its radiance is given as:

\begin{equation}
 L_{\lambda}(\omega)=I(\lambda)\delta(\omega-\omega_i)
\label{eq:radiancedirlightsource}
\end{equation}

where $I(\lambda)$ is the intensity of the relative spectral power at the wavelength $\lambda$. By plugging the identity in equation $\ref{eq:radiancedirlightsource}$ into our current rendering equation $\ref{eq:initialbrdf}$, we get:

\begin{align}
L_{\lambda}(\omega_r) 
& = \int_{\Omega} BRDF_{\lambda}(\omega_i, \omega_r) L_{\lambda}(\omega_i) cos(\theta_i) d\omega_i \nonumber \\
& = BRDF_{\lambda}(\omega_i, \omega_r) I(\lambda) cos(\theta_i)
\label{eq:deribrdfwithdirsource}
\end{align}

where $L_{\lambda}(\omega_i)$ is the incident radiance and $L_{\lambda}(\omega_r)$ is the radiance reflected by the given surface. Note that the integral in equation $\ref{eq:deribrdfwithdirsource}$ vanishes since $\delta(\omega-\omega_i)$ is nonzero only when $\omega = \omega_i$. 

\section{Wave Theory for Light and Diffraction}
\subsection{Basics in Wave Theory}
In order to prepare the reader for the relevant concepts in physics which are used later for derivations and reasoning within this thesis, I am going to provide a quick introduction to the basics of wave theory and related concepts. In physics, a wave describes a disturbance that travels from one location to another through a certain medium. The disturbance temporarily displaces the particles of the medium from their rest position, which results in an energy transport along the medium during wave propagation. Usually, when talking about waves we are actually referring to a complex valued function which is a solution to the so called \emph{wave equation}, which models how the wave disturbance propagates through space over time. \\

There are two types of waves: (a) mechanical waves, which deform their medium during propagation, like sound waves, and (b) electromagnetic waves, consisting of periodic oscillations of an electromagnetic field, such as light. As illustrated in figure $\ref{fig:wavebasics}$, there are several properties one can use in order to compare and distinguish different waves:

\begin{figure}[H]
  \centering
  \includegraphics[scale=0.65]{background/waveschematicimpr.png}
  \caption[Sinewave]{Simplified, one-dimensional real valued wave function$\footnotemark$, giving an idea of some important wave properties.
We denote the crest of a wave as the highest point relative to the equilibrium line (zero height along the time axis) and similarly the trough as the lowest point.}\n  \\label{fig:wavebasics}\n\\end{figure}\n\\footnotetext{Image source: http://neutrino.ethz.ch/Vorlesung/FS2013/index.php/vorlesungsskript}\n\n\\begin{description}\n  \\item[Wavelength:] It is usually denoted by $\\lambda$ and is a measure of the spatial distance from one point to another until the shape of the wave repeats.\n  \\item[Amplitude:] It is denoted by $A$ and there are two possible interpretations. Firstly, it is a measure of the height from the equilibrium point to the highest point of a crest on the wave or the lowest point of a trough. This means that the amplitude can be positive or negative. However, usually one is just interested in the absolute value of an amplitude, i.e. the magnitude of a wave. For light waves it is a relative measure of intensity or brightness compared to other light waves of the same wavelength. Secondly, it can be interpreted as a measure of how much energy a wave carries: the greater the absolute amplitude value, the bigger the amount of energy being carried.\n  \\item[Frequency:] It is a measure of the number of waves which are passing through a particular point in the propagation medium during one unit of time and is denoted by $f$.\n  \\item[Phase:] It is denoted by $\\varphi$. It describes either the offset of the initial position of a wave or the relative displacement between or among waves having the same frequency. Two waves with the same frequency are said to be \\emph{in phase} if they have the same phase. This means that they line up everywhere. On the other hand, two waves are said to be \\emph{out of phase} if they have the same frequency but different phases. As a remark, we denote by $\\omega$ the angular frequency, which is equal to $2\\pi f$. \n\\end{description}\n\nA geometrical property of waves is their wavefront. This is either a surface or a line along the path of wave propagation on which the disturbance at every point has the same phase. Basically, a wavefront can have any kind of shape, although three prominent types of wavefronts are: spherical, cylindrical and plane wavefronts. If a point in an isotropic medium is sending out waves in three dimensions, then the corresponding wavefronts are spheres, centered on the source point. Hence a spherical wavefront is the result of a spherical wave, also denoted as a wavelet. Note that for electromagnetic waves, the phase is a position of a point in time on a wavefront cycle (motion of the wave over a whole wavelength), where a complete cycle is defined as being equal to $360\\degree$.\n\n\\subsection{Wave Interference}\nNext, after having seen that a wave is simply a traveling disturbance along a medium with some special properties, one could ask what happens when there are several waves traveling on the same medium. In particular, we are interested in how these waves will interact with each other. In physics, the term interference denotes the interaction of waves when they encounter each other at a point along their propagation medium. At each point where two waves superpose, their total displacement is the sum of the displacements of each individual wave at that point. The resulting wave then has a greater or lower amplitude than each separate wave, and we can interpret interference as an addition operation on waves. 
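\n\nTo make the superposition principle concrete, the following minimal Python sketch adds two waves sample-wise; the amplitude $A$, frequency $f$ and phase offset are arbitrary illustrative parameters:\n\n\\begin{verbatim}\n
import math\n
\n
# Superposing two equal-amplitude sine waves with a phase offset phi.\n
def superpose(t, A, f, phi):\n
    w = 2 * math.pi * f             # angular frequency\n
    y1 = A * math.sin(w * t)        # first wave\n
    y2 = A * math.sin(w * t + phi)  # phase-shifted second wave\n
    return y1 + y2                  # total displacement at time t\n
\n
# phi = 0       -> constructive case: resulting amplitude is 2A\n
# phi = math.pi -> destructive case: the waves cancel everywhere\n
\\end{verbatim}\n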
\nTwo extreme scenarios are illustrated in figure $\\ref{fig:interferenceconcept}$ for waves with the same frequency and equal amplitude. There are basically three variants of interference which can occur, depending on how crests and troughs of the waves are matched up:\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[scale=0.65]{background/interferenceconcept.png}\n  \\caption[interference]{Interference scenarios\\footnotemark{} when two waves meet: constructive interference is illustrated on the left-hand side and destructive interference on the right-hand side.}\n  \\label{fig:interferenceconcept}\n\\end{figure}\n\\footnotetext{Image source: \\texttt{http://en.wikipedia.org/wiki/Interference\\textunderscore(wave\\textunderscore propagation)} } \n\n\\begin{itemize}\n  \\item A crest of a wave meets a crest of another wave and similarly a trough meets a trough of another wave. This scenario is denoted as constructive interference and occurs at any location along the medium where the two interfering waves have a displacement in the same direction. This is like saying that the phase difference between the waves is a multiple of $2\\pi$. The resulting amplitude at that point is then much larger than the amplitude of an individual wave. For two waves with an equal amplitude interfering constructively, the resulting amplitude is twice as large as the amplitude of an individual wave.\n  \\item A crest of a wave meets a trough of another wave and vice versa. This scenario is denoted as destructive interference and occurs at any location along the medium where the two interfering waves have a displacement in the opposite direction. This is like saying that the phase difference between the waves is an odd multiple of $\\pi$. The waves then completely cancel each other out at any point where they superimpose.\n  \\item If the phase difference between two waves is intermediate between the first two scenarios, then the magnitude of the displacement lies between the minimum from destructive interference and the maximum from constructive interference.\n\\end{itemize}\n\nKeep in mind that when two or more waves interfere with each other, the resulting wave may have a different frequency content. This means that interfering waves coming from two light sources with a certain color may produce light of another color than either source has.\n\n\\subsection{Wave Coherence}\n\\label{sec:wavecoherence}\nWhen considering waves which are traveling on a shared medium along the same direction, we can examine how their phase difference changes over time. Formulating the change in their relative phase as a function of time provides us with a quantitative measure of the synchronization between those two waves, the so called wave coherence. In order to better understand this concept, let us consider a perfect mathematical sine wave and a second wave which is a phase-shifted replica of the first one. A property of these mathematical waves is that they keep their shape over an infinite amount of time (i.e. propagated wavelengths). 
In our scenario, both waves are traveling along the same direction on the same medium, as illustrated in figure $\\ref{fig:coherencesinsignal}$.\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[scale=0.32]{background/coherencesinsignal.png}\n  \\caption[Wave Coherence]{Two mathematical sine waves which are perfectly coherent, meaning that their phase difference is constant for every point in time.}\n  \\label{fig:coherencesinsignal}\n\\end{figure}\n\\noindent\nTaking the phase difference between these two sine waves always yields a constant number. Therefore, those two waves are said to be coherent and hence perfectly synchronous over time. Notice that this scenario is completely artificial, since in nature there are no mathematical sine waves. In general, the phase difference is instead a function of time $p(t)$. The more coherent two waves are, the slower this function changes over time. \nIn fact, two waves are said to be coherent if they have the same frequency and maintain a constant phase difference over time; this is in particular the case for waves generated at the same time with the same frequency, amplitude, and phase. Conversely, waves are considered incoherent, or asynchronous, if they have no stable phase difference, which means that $p(t)$ varies heavily over time. Coherence describes whether waves will tend to interfere with each other constructively or destructively at a certain point in time and space. Thus it is the property of waves that enables stationary interference. The more correlated two waves are, the higher their degree of coherence is. In physics, coherence between waves is quantified by the cross-correlation function, which basically predicts the value of a second wave using the value of the first one. There are two basic coherence classifications:\n\n\\begin{itemize}\n  \\item Spatial coherence deals with the question of over what range of distance between two points across the span of a wave a significant effect of stationary interference occurs when averaged over time. This is formally answered by considering the correlation between waves at different points in space. The range of distance with significant coherence is also denoted as the coherence area.\n  \\item Temporal coherence examines how well two waves which are observed at two different moments in time correlate with each other. Thus it may be used for predicting how well a wave interferes temporally with itself. Mathematically, this kind of coherence is computed by measuring the correlation between the value of the wave and a delayed version of itself. The coherence time denotes the maximum time delay for which the waves are coherent. The distance a wave has traveled during the coherence time is denoted as its temporal coherence length.\n\\end{itemize}\n\n\\subsection{Huygens' Principle}\n\\label{sec:huygensprincipledef}\nBesides the phases and the amplitudes of waves, their propagation directly affects the interaction between different waves and how they can interfere with each other. This is why it makes sense to formulate a model which allows us to predict the position of a moving wavefront and how it moves in space. This is where \\emph{Huygens' principle} comes into play. It states that any point of a wavefront may be regarded as a point source that emits spherical wavelets in every direction. Within the same propagation medium, these wavelets travel at the same speed as their source wavefront. 
The position of the new wavefront results from superimposing all of these emitted wavelets. Geometrically, the surface that is tangent to the secondary wavelets can be used to determine the future position of the wavefront. Therefore, the new wavefront encloses all emitted wavelets. Figure $\\ref{fig:huygensprinciple}$ visualizes Huygens' principle for a wavefront reflected off a plane surface.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[scale=0.6]{background/huygensprinciple.png}\n  \\caption[Huygens' Principle]{A moving wavefront (blue) encounters an obstacle (a surface shown in brown colors) and produces a new wavefront (green) as a result of the superposition of all secondary wavelets.}\n  \\label{fig:huygensprinciple}\n\\end{figure}\n\n\\subsection{Wave Diffraction}\nRevisiting Huygens' principle, we know that each point on a wavefront can be considered as a source of a spherical wavelet which propagates in every direction. But what exactly happens when a wave's propagation is only partially occluded by an object? What will be the outcome of applying Huygens' principle to this case? An example scenario for this case is shown in figure $\\ref{fig:wavediffraction}$. \n\n\\begin{figure}[H]\n  \\centering\n  \\subfigure[transmissive grating]{\n    \\includegraphics[scale=0.45]{background/diffractiontransmissive2.png}\n    \\label{fig:wavediffractiontransm}\n  }\n~\n  \\subfigure[reflective grating]{\n    \\includegraphics[scale=0.6]{background/reflectivegrating.png}\n    \\label{fig:wavediffractionrefl}\n  }\n  \n\\caption[Diffracted Wave]{Illustration\\footnotemark{} of a diffraction scenario in which a plane wavefront passes through an opening of a certain width in a surface and is bent, also showing the intensity of the resulting wave along a straight line in its path.}\n\\label{fig:wavediffraction}\n\\end{figure}\n\\footnotetext{Image source:\\texttt{http://cronodon.com/images/Single\\textunderscore slit\\textunderscore diffraction\\textunderscore 2b.jpg} } \n\nWhenever a propagating wavefront is partially occluded by an obstacle, the wave not only continues along its propagation direction, but is also bent around the edges of the obstacle. In physics, this phenomenon is called diffraction. Waves are diffracted due to the interference which occurs among all wavelets when Huygens' principle is applied to the case of a wavefront hitting an obstacle. Generally, the effect of diffraction is most pronounced for waves whose wavelength is roughly similar in size to the dimension of the occluding object. Conversely, if the wavelength is much smaller, then there is almost no wave diffraction perceivable at a far-off distance. This relationship between the strength of wave diffraction and the wavelength is conceptually illustrated in figure $\\ref{fig:diffractionrelationshipdimension}$ for a wave transmitted through an opening in a surface. A reflective example of diffraction is provided in figure $\\ref{fig:huygensprinciple}$.
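\n\nAlthough the full derivation is beyond the scope of this chapter, the standard single-slit far-field (Fraunhofer) intensity pattern makes the relationship between slit width $W$ and wavelength $\\lambda$ quantitative; the following minimal Python sketch evaluates it (the formula is textbook physics and not taken from this thesis):\n\n\\begin{verbatim}\n
import math\n
\n
# Standard single-slit far-field (Fraunhofer) intensity pattern;\n
# the narrower the slit W relative to the wavelength lam, the\n
# broader the central diffraction lobe becomes.\n
def slit_intensity(theta, W, lam, I0=1.0):\n
    x = math.pi * W * math.sin(theta) / lam\n
    if x == 0.0:\n
        return I0        # central maximum\n
    return I0 * (math.sin(x) / x) ** 2\n
\\end{verbatim}\n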
\n\n\\begin{figure}[H]\n  \\centering\n  \\subfigure[W $\\ll$ $\\lambda$]{\n  \\includegraphics[scale=1.0]{background/Aa2l.png}\n    \\label{fig:a1}\n  }\n~\n  \\subfigure[W $\\approx$ $2 \\lambda$]{\n    \\includegraphics[scale=1.0]{background/Aastl.png}\n    \\label{fig:a2}\n  }\n~\n  \\subfigure[W $\\approx$ $6 \\lambda$]{\n    \\includegraphics[scale=1.0]{background/Aa6l.png}\n    \\label{fig:a3}\n  }\n  \\caption[Diffraction for different $\\texttt{Wavelength/Slit-Width}$ ratio]{Illustration\\footnotemark{} of how diffraction changes when a wave with wavelength $\\lambda$ propagates through a slit of width $W$.}\n  \\label{fig:diffractionrelationshipdimension}\n\\end{figure}\n\\footnotetext{Image taken from:\\texttt{http://neutrino.ethz.ch/Vorlesung/FS2013/index.php/vorlesungsskript}, chapter 9, figure 9.14 } \n\nIn everyday life, we can see the direct outcome of wave diffraction in the form of structural colors. There are examples from nature, such as the iridescent colors on various snake skins, as well as artificial examples, such as the colorful patterns visible when looking closely at an illuminated compact disc. All these examples comprise a surface made of highly regular nanostructures which diffract incident light significantly. Such a nanostructure, which exhibits a certain degree of regularity, is also denoted as a diffraction grating. Further information about diffraction gratings can be found in section $\\ref{sec:diffractiongrating}$.\n\n\\section{Stam's BRDF formulation}\n\\label{sec:sumstam}\nThe theoretical foundation of this thesis is based on the pioneering work of J. Stam$\\cite{diffstam}$. In this work, Stam derived a BRDF formulation to model the effect of far-field diffraction for various analytical anisotropic surfaces. His model relies on the so called scalar wave theory of diffraction, for which a wave\\footnote{In general, a wave is a complex valued vector field satisfying Maxwell's equations. A scalar valued wave satisfies only the Helmholtz equation. For further information please visit \\texttt{http://en.wikipedia.org/wiki/Maxwells\\textunderscore equations} and \\texttt{http://en.wikipedia.org/wiki/Helmholtz\\textunderscore equation}.} is assumed to be a complex valued scalar. Thus, Stam's BRDF formulation does not take into account the polarization of the incident light. Fortunately, light sources like sunlight and light bulbs are unpolarized. The principle behind J. Stam's approach is illustrated in figure $\\ref{fig:meaningofstamsapproach}$. \n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[scale=0.6]{background/stamsapproachsmeaning2.png}\n  \\caption[Idea behind Stam's approach]{Illustration of secondary wavelets reflected off a surface. An integration over all secondary sources resulting from an incident wave according to Huygens' principle will give us an identity for the total contribution at a certain point in space.}\n  \\label{fig:meaningofstamsapproach}  \n\\end{figure}\n\nAn incident wave $p_i$ from a light source encounters a surface representing a diffraction grating. According to Huygens' principle, every point $i$ on the grating at which the incident wave meets the grating emits a secondary, spherical wavelet $p_{r,i}$. 
A viewer, indicated by a gray circle in the figure, will perceive the superimposed contribution of all wavelets along the surface $S$ (indicated in the figure by an integration symbol), which directly follows the laws of wave interference. Therefore, the resulting color which an observer sees corresponds to the radiance produced by the stationary interference of all emitted secondary wavelets, in accordance with Huygens' principle. \\\\\n\nA further assumption in Stam's paper is that the waves emanating from the source are stationary, which implies that the wave is a superposition of independent monochromatic waves. This further implies that each wave is associated with a definite wavelength $\\lambda$. Directional light sources such as sunlight fulfill this assumption, and since we are using these kinds of light sources for our simulations, Stam's model can be used for our modelling purposes. \\\\\n\nThe main idea of his model is to formulate a BRDF as a function of the Fourier transform of a certain correlation function of the given height field. His model assumes homogeneity of the height field structure and yields an approximation of far-field diffraction effects. The geometrical setup of his model is illustrated in figure $\\ref{fig:geometricsetup}$. The classes of surfaces his model is able to support either exhibit a very regular structure or may be considered as a superposition of bumps forming a periodic-like structure. Therefore, the surfaces he is dealing with can either be modelled by probabilistic distributions or have a direct analytical representation. Both cases allow him to derive an analytical solution for his BRDF model.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[scale=0.8]{background/stamsinputp.png}\n  \\caption[Stam's geometrical setup]{Illustration\\footnotemark{} of the geometrical setup of Stam's approach, where $\\omega_i$ is a direction pointing towards the light source, $\\omega_r$ points towards the viewer, $n$ is the surface normal and $(u,v,w)$ are the components of the vector $-\\omega_i - \\omega_r$.}\n  \\label{fig:geometricsetup}  \n\\end{figure}\n\\footnotetext{Modified image which originally has been taken from the poster by D.S. Dhillon et al.$\\cite{diffourp}$.} \n\nFigure $\\ref{fig:geometricsetup}$ schematically illustrates the geometrical setup used for Stam's BRDF formulation. Incident light with direction $\\omega_i$ hits the surface of a given height field at the position $p$. The direction vector $\\omega_r$ points towards the viewer. After the incident light has hit the surface, a spherical wavelet is reflected off from this hit position. The vector $(u,v,w)$ is computed from the incident and viewing directions as shown in equation $\\ref{eq:uvw}$:\n\n\\begin{equation}\n  (u,v,w) = -\\omega_i - \\omega_r \n\\label{eq:uvw}\n\\end{equation}\n\nThese coordinates will later be used in order to compute the total contribution of all secondary sources used in Stam's BRDF in equation $\\ref{eq:mainstam}$. 
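\n\nA minimal Python sketch of equation $\\ref{eq:uvw}$, assuming both directions are given as normalized 3-vectors pointing away from the surface point:\n\n\\begin{verbatim}\n
# Equation (eq:uvw): omega_i and omega_r are normalized 3-tuples\n
# pointing away from the surface point p.\n
def uvw(omega_i, omega_r):\n
    return tuple(-a - b for a, b in zip(omega_i, omega_r))\n
\n
# For a specular pair, e.g. omega_i = (0.5, 0.0, 0.866) and\n
# omega_r = (-0.5, 0.0, 0.866), this yields u = v = 0, which is the\n
# direction in which |P(ku, kv)| peaks for a perfectly flat height\n
# field, i.e. the mirror reflection.\n
\\end{verbatim}\n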
\nIn Stam's derivation, the phase difference between the incident and emitted wave at the given height field is denoted by the auxiliary function $\\Phi$, defined as in equation $\\ref{eq:heightfieldphase}$.\n\n\\begin{equation}\n  \\Phi(x,y) = \\frac{2 \\pi}{\\lambda} w h(x,y) \n\\label{eq:heightfieldphase}\n\\end{equation}\n\nThen, any secondary wavelet $p$ emitted from the given surface is equal to:\n\\begin{equation}\n  p(x,y) = e^{i\\Phi(x,y)} \n\\label{eq:px}\n\\end{equation}\n\nUsing the idea presented for figure $\\ref{fig:meaningofstamsapproach}$ and performing the further mathematical steps described in Stam's paper leads us to the final BRDF representation. This BRDF models the total contribution of all secondary sources reflected off the provided surface $h$ in the direction $\\omega_r$:\n\n\\begin{equation} \n  BRDF_{\\lambda}(\\omega_i, \\omega_r) = \\frac{k^2 F^2 G}{4\\pi^2 A w^2} \\langle \\left|P(ku, kv)\\right|^2\\rangle\n\\label{eq:mainstam}\n\\end{equation}\n\nwhere $F$ denotes the Fresnel coefficient and $G$ is the so called geometry term\\footnote{The geometry term expresses the correction factor for performing an integration over an area instead of over a surface. For further information, please have a look at \\texttt{http://en.wikipedia.org/wiki/Surface\\textunderscore integral}, and read the definition of the \\emph{surface element}.}, which is equal to: \n\\begin{equation}\n  G =\\frac{(1 + \\omega_i \\cdot \\omega_r)^2}{cos(\\theta_i)cos(\\theta_r)}\n\\label{eq:geometricterm}\n\\end{equation}\n\n\\myparagraph{Fourier Transformation Sign Convention}\n\\label{sec:electricalengeneeringftconvention}\n\\noindent\nOne last word about the Fourier transform terms that Stam uses in his derivation. The transform we are dealing with here is, by the conventional definitions, commonly denoted as the inverse Fourier transform. However, especially in electrical engineering (EE), it is common to define the inverse Fourier transform as what physicists call the Fourier transform, and vice versa. To be more precise, there are two definitions of the Fourier transform: the one commonly used by physicists, where the exponential is taken with respect to $-i$, and the one used in EE, where the integration is over an exponential with respect to $i$. Further information about the sign convention in Fourier transforms can be found in the book Quantum Mechanics for Electrical Engineers$\\cite{signconvention}$. Note that substituting $-w$ for $w$ in the physicists' definition of the Fourier transform gives us the definition used in EE, as shown in equation $\\ref{eq:signchangementconvention}$:\n\n\\begin{align}\n\\mathcal{F}_{FT}\\{f\\}(w) \n& = \\int_{\\mathds{R}^n} f(t)e^{-iwt} dt \\nonumber\\\\\n& = \\int_{\\mathds{R}^n} f(t)e^{i\\hat{w}t} dt \\nonumber\\\\\n& = \\mathcal{F}^{-1}_{FT}\\{f\\}(\\hat{w})\n\\label{eq:signchangementconvention}\n\\end{align} \nwhere $\\hat{w}$ is equal to $-w$. \\\\\n
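\nPutting equations $\\ref{eq:heightfieldphase}$, $\\ref{eq:px}$ and $\\ref{eq:mainstam}$ together, the following Python sketch evaluates the BRDF numerically for a single tabulated height field. It only illustrates the structure of the formula: the Fresnel coefficient is treated as a constant, the ensemble average $\\langle|P|^2\\rangle$ is replaced by a single realization, and $(ku, kv)$ is snapped to the nearest FFT bin; none of these simplifications are part of Stam's full derivation.\n\n\\begin{verbatim}\n
import numpy as np\n
\n
def stam_brdf(h, d, lam, u, v, w, F=1.0, G=1.0):\n
    # h: 2D height-field samples with spacing d; lam: wavelength\n
    k = 2.0 * np.pi / lam                  # wave number\n
    phi = k * w * h                        # eq:heightfieldphase\n
    p = np.exp(1j * phi)                   # eq:px\n
    P = np.fft.fft2(p) * d * d             # discretized Fourier transform\n
    A = h.size * d * d                     # sampled surface area\n
    # snap (k*u, k*v) to the nearest discrete frequency bin\n
    fx = 2.0 * np.pi * np.fft.fftfreq(h.shape[0], d)\n
    fy = 2.0 * np.pi * np.fft.fftfreq(h.shape[1], d)\n
    ix = int(np.argmin(np.abs(fx - k * u)))\n
    iy = int(np.argmin(np.abs(fy - k * v)))\n
    P2 = np.abs(P[ix, iy]) ** 2            # |P(ku, kv)|^2, one realization\n
    return (k ** 2 * F ** 2 * G) / (4.0 * np.pi ** 2 * A * w ** 2) * P2\n
\\end{verbatim}\n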
\nThe height fields we are dealing with in this work are, however, natural gratings containing complex shaped nano-structures, and hence far from being regularly aligned. The reason why Stam's approach in its current form is not suitable for our purpose is twofold: first, his approach does not capture the complexity of natural gratings accurately enough when relying on his statistical descriptions, and second, it is far too slow to be usable for interactive rendering, since his BRDF requires the evaluation of a Fourier transform for every change of direction. \\\\\n\nIn the next chapter we are going to adapt Stam's BRDF model such that it is able to handle the kind of surfaces we are dealing with, with a runtime complexity that permits interactive rendering.
{"text": "\\documentclass[11pt]{article}\n\\usepackage{enumitem}\n\\usepackage{amsmath}\n\\usepackage{booktabs}\n\n\\begin{document}\n\\title{MAT1830 --- Assignment 10}\n\\author{Dylan Pinn --- 24160547}\n\\maketitle\n\n\\section*{Question 1}\nRewrite the following expressions without using $\\Sigma$ or $\\Pi$.\n\n\\begin{enumerate}[label= (\\alph*)]\n  \\item $\\sum_{i=2}^{6} \\frac{2}{6i - 7}$\n\n  \\begin{align*}\n    &= \\frac{2}{6 \\times 2 - 7} + \\frac{2}{6 \\times 3 - 7} + \\frac{2}{6 \\times\n    4 - 7} \\\\\n    &+ \\frac{2}{6 \\times 5 - 7} + \\frac{2}{6 \\times 6 - 7} + \\frac{2}{6 \\times\n    7 - 7} \\\\\n    &= \\frac{2}{5} + \\frac{2}{11} + \\frac{2}{17} + \\frac{2}{23} + \\frac{2}{29} +\n    \\frac{2}{35}\n  \\end{align*}\n\n  \\item $\\prod_{i=5}^{8} ({(z + 2i)}^{i} - i - 1)$\n\n    \\begin{align*}\n      &= ({(z + 2 \\times 5)}^{5} - 5 - 1) \\times ({(z + 2 \\times 6)}^{6} - 6 -\n      1) \\\\\n      &\\times ({(z + 2 \\times 7)}^{7} - 7 - 1) \\times ({(z + 2 \\times 8)}^{8} - 8\n      - 1) \\\\\n      &\\times ({(z + 2 \\times 9)}^{9} - 9 - 1) \\times ({(z + 2 \\times 10)}^{10}\n      - 10 - 1) \\\\\n      &\\times ({(z + 2 \\times 11)}^{11} - 11 - 1) \\times ({(z + 2 \\times\n      12)}^{12} - 12 - 1) \\\\\n      &= ({(z + 10)}^{5} - 6) \\times ({(z + 12)}^{6} - 7) \\times ({(z + 14)}^{7}\n      - 8) \\\\\n      &\\times ({(z + 16)}^{8} - 9) \\times ({(z + 18)}^{9} - 10) \\times ({(z +\n      20)}^{10} - 11) \\\\\n      &\\times ({(z + 22)}^{11} - 12) \\times ({(z + 24)}^{12} - 13) \\\\\n    \\end{align*}\n\n\\end{enumerate}\n\n\\section*{Question 2}\nRewrite the following expressions using $\\Sigma$ or $\\Pi$ notation.\n\n\\begin{enumerate}[label= (\\alph*)]\n  \\item $x(x+1)(x+4)(x+9)(x+16) \\dots (x+400)$\n\n    \\[ \\prod_{i=0}^{21} (x + i^2) \\]\n\n  \\item $\\frac{1}{6^4} + \\frac{1}{9^5} + \\frac{1}{12^6} + \\frac{1}{15^7} + \\dots\n    + \\frac{1}{33^{13}}$\n\n    \\[ \\sum_{i=4}^{10} \\frac{1}{{(3i - 6)}^{i}} \\]\n\n\\end{enumerate}\n\n\\section*{Question 3}\nCall a string of letters ``legal'' if it can be produced by concatenating\n(running together) copies of the strings `a', `bb' and `cc'. For example,\n`abba' is legal because it can be produced by concatenating `a', `bb' and `a',\nbut `ccca' is not legal.\n\nFor each integer $n \\geq 1$, let $t_n$ be the number of legal strings with $n$\nletters. For example, $t_1 = 1$ (`$a$' is the only legal string) and $t_2 = 3$\n(`aa', `bb' and `cc' are the legal strings).\n\n\\begin{enumerate}[label= (\\alph*)]\n  \\item Write down $t_3$ and a list of all the legal strings of length 3.\n\n    \\begin{itemize}\n      \\item $aaa$\n      \\item $abb$\n      \\item $acc$\n      \\item $bba$\n      \\item $cca$\n    \\end{itemize}\n\n    \\[ t_3 = 5 \\]\n\n  \\item Write down $t_4$ and a list of all the legal strings of length 4.\n\n    \\begin{itemize}\n      \\item $aaaa$\n      \\item $abba$\n      \\item $aabb$\n      \\item $aacc$\n      \\item $acca$\n      \\item $bbaa$\n      \\item $bbbb$\n      \\item $bbcc$\n      \\item $ccaa$\n      \\item $cccc$\n      \\item $ccbb$\n    \\end{itemize}\n\n    \\[ t_4 = 11 \\]\n\n  \\item Find a recurrence for $t_n$ that holds for all $n \\geq 3$. 
Explain why\n    your recurrence gives $t_n$.\n\n    \\[ t_n = t_{n-1} + 2t_{n-2} \\]\n\n    A legal string of length $n$ either begins with `a', followed by a legal\n    string of length $n-1$, or begins with `bb' or `cc', followed by a legal\n    string of length $n-2$. These cases are disjoint and cover every legal\n    string, so $t_n = t_{n-1} + 2t_{n-2}$. As a check,\n    $t_3 = t_2 + 2t_1 = 3 + 2 = 5$ and $t_4 = t_3 + 2t_2 = 5 + 6 = 11$,\n    matching parts (a) and (b).\n\n\\end{enumerate}\n\n\\section*{Question 4}\nDraw simple graphs with the following properties or explain why they do not\nexist.\n\n\\begin{enumerate}[label= (\\alph*)]\n  \\item The list of vertices is: $P$, $Q$, $R$, $S$, $T$ and the list of edges\n    is $PQ$, $PS$, $QR$, $RS$, $RT$.\n\n  SEE ATTACHED\n\n  \\item The graph has 10 vertices and 47 edges.\n\n  Does not exist. The maximum number of edges possible for a simple graph on $n$\n    vertices is $\\binom{n}{2}$.\n\n  \\[ \\binom{10}{2} = 45 \\]\n\n  So a simple graph with 10 vertices and 47 edges is not possible.\n\n  \\item The graph has 7 vertices and 6 edges and is connected.\n\n    Exists. Any tree on 7 vertices (for example, a path) is connected and has\n    exactly $7 - 1 = 6$ edges.\n\n  \\item The graph has 7 vertices and 11 edges and its vertices can be divided\n  into two sets in such a way that no edge joins two vertices in the same set.\n\n    Exists. The complete bipartite graph $K_{3,4}$ divides the 7 vertices into\n    sets of sizes 3 and 4 and has $3 \\times 4 = 12$ edges; deleting any one\n    edge leaves 11 edges, and no edge joins two vertices in the same set.\n\n\\end{enumerate}\n\\end{document}\n
{"text": "\\chapter{Implementation}\n\\label{chp:implementation}\n\n\n% %\n% HEADER\n% %\nThe smart elasticity is the ability of a system to dynamically auto-scale resources according to scaling policies determined with machine learning techniques.\n%\nThe smart elasticity service hence provides a resource manager, e.g. a container orchestrator, with the ability to learn the best replication degree for its resources, e.g. deployed containers, with respect to the current cluster state.\n%\nIn this work we propose to implement smart elasticity for Kubernetes leveraging reinforcement learning, that is the most famous technique of unsupervised machine learning.\n%\nIn particular, we focus on $\\mathcal{Q}$-Learning, which is the most widely studied reinforcement learning algorithm.\n%\nIn this Chapter we show the adopted reinforcement learning model and how the smart elasticity service has been implemented in the Kubernetes environment.\n\n\n% %\n% Q-LEARNING MODEL\n% %\n\\section{$\\mathcal{Q}$-Learning model}\n\\label{sec:implementation-q-learning-model}\n\nA reinforcement learning algorithm can be used to learn the best scaling policy to adopt in response to cluster state changes without any past knowledge.\n%\nIn this work, we propose an approach that makes use of the $\\mathcal{Q}$-Learning algorithm, described in Chapter \\ref{chp:reinforcement-learning}.\n%\nA general $\\mathcal{Q}$-Learning algorithm relies on a reinforcement learning model that requires the definition of the parameters:\n\n\\begin{itemize}\n\t\n\t\\item \\textbf{State Space} $\\mathcal{X}$ where each state $x\\in\\mathcal{X}$ is a cluster state.\n\t\n\t\\item \\textbf{Action Space} $\\mathcal{A}$ where each action $a\\in\\mathcal{A}$ is a scaling action.\n\t\n\t\\item \\textbf{Quality Function} $\\mathcal{Q}:\\mathcal{X}\\times\\mathcal{A}\\rightarrow\\Re$ where the value $q_{x_{n},a_{n}}=\\mathcal{Q}(x_{n},a_{n})$ is the quality of the choice of performing the action $a_{n}$ in the state $x_{n}$ at epoch $n$.\n\t%\n\tNotice that, since $\\mathcal{Q}$-Learning is an iterative algorithm that updates $\\mathcal{Q}$, this function requires an initialization setting $\\mathcal{Q}_{0}$.\n\t\n\t\\item \\textbf{Rewarding Function} $\\mathcal{R}:\\mathcal{X}\\rightarrow\\Re$ where the value $r_{n}=\\mathcal{R}(x_{n})$ is the reward achieved for the state $x_{n}$ at epoch $n$.\n\t\n\t\\item \\textbf{Learning Rate} $\\alpha\\in (0,1]$ that trades off the importance of sooner versus later quality values. The closer $\\alpha$ is to $1$ the more important are future quality evaluations with respect to past ones.\n\t\n\t\\item \\textbf{Discount Factor} $\\gamma\\in [0,1]$ that trades off the importance of sooner versus later rewards. The closer $\\gamma$ is to $1$ the more important are future rewards with respect to past ones. 
\n\t\n\\end{itemize}\n\nThese parameters encapsulate the peculiarities of the domain to which the algorithm is applied and are used by the reinforcement learning agent to determine the optimality of a specific action in a given state, according to the following update:\n\n\\begin{equation}\n\\label{eqn:implementation-q-learning-update}\n\\mathcal{Q}_{n}(x,a) = \n\\begin{cases} \n(1-\\alpha_{n})\\mathcal{Q}_{n-1}(x,a) + \\alpha_{n}(r_{n} + \\gamma\\max_{b} \\mathcal{Q}_{n-1}(x_{n+1},b)) & \\text{if } x=x_{n},a=a_{n} \\\\\n\\mathcal{Q}_{n-1}(x,a)       & \\text{otherwise}           \\\\\n\\end{cases}\n\\end{equation}\n\nIn the following, we show and motivate our choice for the aforementioned parameters in the context of smart elasticity.\n\n\n\\subsection{State Space}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-state-space}\nINSERT HERE DESCRIPTION OF THE STATE SPACE (ITERATE WITH PROF).\n\n\n\\subsection{Action Space}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-action-space}\nINSERT HERE THE DESCRIPTION AND MOTIVATIONS OF THE CHOSEN ACTION SPACE (ITERATE WITH PROF).\n\n\n\\subsection{Quality Function}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-quality-function}\nINSERT HERE THE DESCRIPTION AND MOTIVATIONS OF THE CHOSEN QUALITY FUNCTION (ITERATE WITH PROF).\n\n\n\\subsection{Rewarding Function}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-rewarding-function}\nINSERT HERE THE DESCRIPTION AND MOTIVATIONS OF THE CHOSEN REWARDING FUNCTION (ITERATE WITH PROF).\n\n\n\\subsection{Learning Rate}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-learning-rate}\nINSERT HERE THE DESCRIPTION AND MOTIVATIONS OF THE CHOSEN LEARNING RATE (ITERATE WITH PROF).\n\n\n\\subsection{Discount Factor}\n\\label{sec:smart-elasticity-elasticity-leveraging-q-learning-discount-factor}\nINSERT HERE THE DESCRIPTION AND MOTIVATIONS OF THE CHOSEN DISCOUNT FACTOR (ITERATE WITH PROF).\n\n\n% %\n% ARCHITECTURE\n% %\n\\section{Architecture}\n\\label{sec:implementation-architecture}\n\nThe smart elasticity service is realized by the Smart Scaler Engine, which is a Kubernetes custom controller implemented as a UAS following the micro-services design pattern.\n%\nIn Figure \\ref{fig:implementation-architecture} we show the high-level architecture for the smart elasticity service, represented as a UML component diagram.\n\n\\begin{itemize}\n\t\n\t\\item \\texttt{Heapster}: collects cluster performance metrics focused on CPU, memory and network utilization on a per-Node and per-Pod basis. \n\t%\n\tMetrics can be queried via the \\texttt{K8S REST} interface, which proxies the \\texttt{Metric Server}, which in turn is responsible for gathering metrics from \\texttt{Heapster Master}.\n\t\n\t\\item \\texttt{Kubernetes}: responsible for container orchestration. \n\t%\n\tIn particular, it deploys containers in a per-Pod fashion and dynamically scales them in/out according to the scaling actions submitted by the \\texttt{Smart Scale Engine}.\n\t%\n\tIt exposes its functionalities with a REST API served by \\texttt{Kubernetes Master}.\n\t%\n\tThe most important functionalities are \n\t(i) listing of Smart Scalers,\n\t(ii) listing of Deployments,\n\t(iii) watching for Smart Scaler changes,\n\t(iv) watching for Deployment changes,\n\t(v) horizontal scaling of Deployments, and\n\t(vi) querying of cluster metrics.\n\t\n\t\\item \\texttt{Smart Scale Engine}: realizes the smart elasticity service. 
It follows a micro-services architecture, where the services are:\n\t\n\t\\begin{itemize}\n\t\t\n\t\t\\item \\texttt{Agents Manager}: responsible for the allocation/deallocation of \\texttt{RL Agents} according to the \\texttt{Smart Scalers} instantiated within \\texttt{Kubernetes}.\n\t\t\n\t\t\\item \\texttt{RL Agent}: this is a reinforcement learning agent, thus it executes the $\\mathcal{Q}$-learning loop to determine the scaling actions. Each agent is uniquely associated with a Deployment. In particular, it determines the best scaling action with respect to the current cluster state according to the adopted reinforcement learning technique. It submits scaling actions to \\texttt{Kubernetes} via its REST interface. When this component is asked to produce a new scaling action, the \\texttt{StateManager} collects metrics from the \\texttt{ClusterMonitor} and aggregates them into a new \\texttt{ClusterState}, representing the current cluster state. The latter is then used by the \\texttt{RLAgent} to execute the reinforcement learning algorithm and produce the new \\texttt{ScalingAction}. The \\texttt{RLAgent} reads/writes all data required by the RL algorithm from/to the \\texttt{RL Repo}.\n\t\t\n\t\t\\item \\texttt{RL Repo}: a data store responsible for storing the data required by \\texttt{RLAgent} for the execution of the reinforcement learning algorithm, e.g. parameters and reinforcement learning matrices.\n\t\t\n\t\t\\item \\texttt{API Gateway}: a REST API server exposing to the outside the services provided by the overall \\texttt{Smart Scaler Engine} infrastructure.\n\t\t\n\t\\end{itemize}\n\t\n\\end{itemize}\n\nFrom a technological point of view, the \\texttt{ScalerAI} has been implemented as a Spring-based REST service, which is a de facto standard solution for Java enterprise web applications, while the \\texttt{RLRepository} has been implemented as a MongoDB datastore, which is a de facto standard technology for NoSQL document-based datastores. \n\nLet us now focus on the main interaction flows that realize smart elasticity.\n%\nFirst, we have the loop executed by the \\texttt{Agents Manager}, which is made of the following steps:\n%\n\\begin{enumerate}\n\t\\item \n\\end{enumerate}\n%\nIn Figure \\ref{fig:implementation-agents-manager-loop-sequence-diagram} we show the flow of interactions during the loop executed by the \\texttt{Agents Manager}, represented as a UML sequence diagram.\n\nThen, we have the loop executed by each \\texttt{RL Agent}, which is made of the following steps:\n%\n\\begin{enumerate}\n\t\n\t\\item the \\texttt{Orchestrator} asks \\texttt{ScalerAI} to compute a new scaling action. The request is submitted to the \\texttt{ScalerAIController}, which is the \\texttt{ScalerAI} entry-point.\n\t\n\t\\item the \\texttt{ScalerAIController} retrieves the current cluster state from the \\texttt{StateManager}. 
The cluster state encapsulates the cluster performance metrics of interest, gathered from the \\texttt{ClusterMonitor}.\n\t\n\t\\item the \\texttt{ScalerAIController} asks the \\texttt{RLAgent} to compute a new scaling action with respect to the current cluster state.\n\t\n\t\\item the \\texttt{RLAgent} (i) computes the current state $x\\in\\mathcal{X}$ and, concurrently, (ii) retrieves reinforcement learning data from the \\texttt{RLDataManager}, which collects it from the \\texttt{RLRepository}.\n\t\n\t\\item the \\texttt{RLAgent} uses the gathered state and data to compute sequentially (i) the rewarding function, (ii) the quality function and (iii) the action.\n\t\n\t\\item the \\texttt{RLAgent} concurrently (i) saves the new function values to the \\texttt{RLRepository} via the \\texttt{RLRepositoryManager} and (ii) encapsulates the computed action into a new \\texttt{ScalingAction} returned to the \\texttt{ScalerAIController}.\n\t\n\t\\item the \\texttt{ScalerAIController} returns the new scaling action to the \\texttt{Orchestrator}.\n\t\n\t\\item the \\texttt{Orchestrator} applies the scaling action.\n\t\n\\end{enumerate}\n%\nIn Figure \\ref{fig:implementation-rl-agent-loop-sequence-diagram} we show the flow of interactions during the loop executed by each \\texttt{RL Agent}, represented as a UML sequence diagram.\n\n\\clearpage\n\\vfill\n\\begin{landscape}\n\t\\begin{figure}\t\n\t\t\\centering\n\t\t\\includegraphics[height=.6\\columnwidth, width=.95\\columnwidth]{design/implementation-architecture}\n\t\t\\caption{The architecture of Smart Scale Engine.}\n\t\t\\label{fig:implementation-architecture}\n\t\\end{figure}\n\\end{landscape}\n\\vfill\n\\clearpage\n\\vfill\n\\begin{landscape}\n\t\\begin{figure}\t\n\t\t\\centering\n\t\t\\includegraphics[height=.6\\columnwidth, width=.95\\columnwidth]{design/implementation-agents-manager-loop-sequence-diagram}\n\t\t\\caption{The sequence diagram representing the loop executed by the \\texttt{AgentsManager}.}\n\t\t\\label{fig:implementation-agents-manager-loop-sequence-diagram}\n\t\\end{figure}\n\\end{landscape}\n\\vfill\n\\clearpage\n\\vfill\n\\begin{landscape}\n\t\\begin{figure}\t\n\t\t\\centering\n\t\t\\includegraphics[height=.6\\columnwidth, width=.95\\columnwidth]{design/implementation-rl-agent-loop-sequence-diagram}\n\t\t\\caption{The sequence diagram representing the loop executed by a single \\texttt{RLAgent}.}\n\t\t\\label{fig:implementation-rl-agent-loop-sequence-diagram}\n\t\\end{figure}\n\\end{landscape}\n\\vfill\n\\clearpage\n\n\n% %\n% SMART SCALER\n% %\n\\section{Smart Scaler}\n\\label{sec:implementation-smart-scaler}\n%\nA Smart Scaler is a Kubernetes Custom Resource that is associated with an existing Deployment to provide it with smart elasticity.\n%\nLike any other Kubernetes object, a Smart Scaler instance specifies a desired system state.\n%\nIn particular, a Smart Scaler instance specifies that a given Deployment should be provided with smart elasticity.\n%\nSmart Scalers are in a $0...1:1$ relation with Deployments; that is, each existing Deployment may have zero or one Smart Scaler associated with it.\n%\nIf a Deployment has no Smart Scaler, the smart elasticity service is not provided to it.\n%\nIf a Deployment has a Smart Scaler, the smart elasticity service is provided to it.\n%\nIn particular, since the learning process involved in each Smart Scaler is relative to a given Deployment, each Deployment should have at most one unique Smart Scaler.\n\nIf the Administrator wants to provide an existing Deployment with smart 
elasticity, it is enough to create a Smart Scaler \\textit{MyAppSmartScaler} associated with it.\n%\nThis is the same way the built-in auto-scaling mechanism works, as described in Section \\ref{sec:kubernetes-elasticity}.\n\nAs Smart Scalers are not built-in objects, a Custom Resource Definition (CRD) should be submitted to Kubernetes to define them as custom objects.\n%\n%Figure \\ref{fig:implementation-smart-scaler-crd} shows the CRD used to define Smart Scalers.\n%\nThe CRD is largely self-explanatory.\n%\nIt states that a Smart Scaler is an object provided by the API \\textit{gmarciani.com/v1}, scoped in namespaces and following the common plural, singular and short naming convention.\n%\nEach Smart Scaler has the following attributes:\n%\n\\begin{itemize}\n\t\n\t\\item \\textbf{deployment:} the name of an existing Deployment to provide with smart elasticity.\n\t%\n\tThis is a required field.\n\t\n\t\\item \\textbf{algorithm:} the name of the reinforcement learning algorithm to power smart elasticity.\n\t%\n\tThis field has the default value \\textit{qlearning}, meaning that $\\mathcal{Q}$-learning is the default reinforcement learning algorithm.\n\t\n\t\\item \\textbf{parameters:} a collection of parameters specific to each algorithm. For example, $\\mathcal{Q}$-learning requires specifying the parameters $rewardFunction$, $\\alpha$ and $\\gamma$ to customize its behavior.\n\t\n\t\\item \\textbf{replicationMin:} the minimum number of replicas.\n\t%\n\tThis is an optional field with default value $1$, meaning that at least one replica should be present.\n\t\n\t\\item \\textbf{replicationMax:} the maximum number of replicas.\n\t%\n\tThis is an optional field; if omitted, the replication degree is unbounded.\n\t \n\\end{itemize}\n%\nOnce the CRD is submitted, Smart Scalers can be managed as native objects.\n\nFor example, to provide an existing Deployment named \\textit{MyApp} with a Smart Scaler it is enough to run the command\n%\n\\begin{verbatim}\nkubectl create -f MyAppSmartScaler.yaml\n\\end{verbatim}\n%\nwhere \\textit{MyAppSmartScaler.yaml} is an instance of a Smart Scaler defined following the schema specified in the Smart Scaler CRD.\n%\n%\\begin{figure}\n%\t\\label{fig:implementation-smart-scaler-crd}\n%\t\\lstinputlisting{code/implementation-smart-scaler-crd.yaml}\n%\t\\caption{Custom Resource Definition for the smart scaler object.}\n%\\end{figure}\n\nFor example, Figure \\ref{fig:implementation-smart-scaler-cr} shows a Smart Scaler that realizes smart elasticity for the Deployment named \\textit{MyApp} using the $\\mathcal{Q}$-learning algorithm with reward function \\textit{rfunc1}, $\\alpha$ equal to 0.7, $\\gamma$ equal to 0.8 and a replication degree between $1$ and $100$. 
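\n\nAs a complement to the listing, the following minimal Python sketch builds an equivalent manifest as a plain dictionary; the \\texttt{kind} name and the field casing are assumptions mirroring the attribute list above, not values taken from the CRD file:\n\n\\begin{verbatim}\n
# Hypothetical manifest mirroring the attribute list above; the kind\n
# name and field casing are assumptions, not taken from the CRD file.\n
manifest = {\n
    'apiVersion': 'gmarciani.com/v1',\n
    'kind': 'SmartScaler',\n
    'metadata': {'name': 'myapp-smart-scaler'},\n
    'spec': {\n
        'deployment': 'MyApp',\n
        'algorithm': 'qlearning',\n
        'parameters': {'rewardFunction': 'rfunc1',\n
                       'alpha': 0.7,\n
                       'gamma': 0.8},\n
        'replicationMin': 1,\n
        'replicationMax': 100,\n
    },\n
}\n
\\end{verbatim}\n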
\n%\n\\begin{figure}\n\t\\lstinputlisting{code/implementation-smart-scaler-cr.yaml}\n\t\\caption{A Smart Scaler instance.}\n\t\\label{fig:implementation-smart-scaler-cr}\n\\end{figure}\n\n\n\n% %\n% REST INTERFACES\n% %\n\\subsection{REST Interfaces}\n\\label{sec:smart-elasticity-implementation-rest-interfaces}\n\nIn this section we detail the REST API exposed by the API Gateway provided by Smart Scaler Engine to manage the learning processes.\n%\nFurthermore, we detail the REST calls executed by Smart Scaler Engine against the Kubernetes Web Server to gather deployment information and performance metrics, and to submit scaling actions.\n\n\\begin{table}\n\t\\label{tbl:implementation-rest-smart-scaler-engine}\n\t\\centering\n\t\\begin{tabular}{| m{1.5cm} | m{5cm} | m{7.5cm} | }\\hline\n\t\t\n\t\t\\textbf{Met.} & \\textbf{Resource} & \\textbf{Description} \\\\\\hline\n\n\t\t\\texttt{GET}\t& /status         & Retrieves the status of the overall Smart Scale Engine service \\\\\\hline\n\t\t\n\t\\end{tabular}\n\t\\caption{The REST interface exposed by Smart Scaler Engine's Web Server. Each resource path shown here is relative to the server root.}\n\\end{table}\n\n\\begin{table}\n\t\\label{tbl:implementation-rest-kubernetes}\n\t\\centering\n\t\\begin{tabular}{| m{1.5cm} | m{5cm} | m{7.5cm} | }\\hline\n\t\t\n\t\t\\textbf{Met.} & \\textbf{Resource} & \\textbf{Description} \\\\\\hline\n\t\t\n\t\t\\multicolumn{3}{| c |}{/api/gmarciani.com/v1/namespaces/\\{namespace\\}/} \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /smart-scalers & List all deployed smart scalers \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /smart-scalers/\\{scaler\\} & Describe the specified smart scaler \\\\\\hline\n\t\t\n\t\t\\texttt{PATCH} & /smart-scalers/\\{scaler\\} & Update the specified smart scaler \\\\\\hline\n\t\t\n\t\t\\texttt{DELETE}  & /smart-scalers/\\{scaler\\} & Delete the specified smart scaler \\\\\\hline\n\t\t\n\t\t\\multicolumn{3}{| c |}{/apis/metrics/v1alpha1/} \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /nodes            & List all nodes metrics \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /nodes/\\{node\\}   & List metrics for the specified node \\\\\\hline\n\t\t\n\t\t\\multicolumn{3}{| c |}{/apis/metrics/v1alpha1/namespaces/\\{namespace\\}} \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /pods & List metrics for all pods in the specified namespace \\\\\\hline\n\t\t\n\t\t\\texttt{GET}  & /pods/\\{pod\\} & List metrics for the specified pod \\\\\\hline\n\t\t\n\t\\end{tabular}\n\t\\caption{The REST interface of interest exposed by Kubernetes's Web Server to manage Smart Scalers and gather metrics.}\n\\end{table}
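\n\nTo close the chapter, the following minimal Python sketch makes the update rule of equation \\ref{eqn:implementation-q-learning-update} concrete; the state and action encodings, the reward values and the epsilon-greedy exploration scheme are hypothetical placeholders, since the corresponding design choices are detailed in the dedicated subsections:\n\n\\begin{verbatim}\n
import random\n
\n
# Tabular update of equation (eqn:implementation-q-learning-update);\n
# Q is a dict keyed by (state, action) pairs.\n
def q_update(Q, x, a, r, x_next, actions, alpha, gamma):\n
    best_next = max(Q.get((x_next, b), 0.0) for b in actions)\n
    old = Q.get((x, a), 0.0)\n
    Q[(x, a)] = (1 - alpha) * old + alpha * (r + gamma * best_next)\n
\n
# A common epsilon-greedy action choice, e.g. over scale-in / keep /\n
# scale-out actions encoded here as replica deltas {-1, 0, +1}:\n
def choose_action(Q, x, actions, eps=0.1):\n
    if random.random() < eps:\n
        return random.choice(list(actions))\n
    return max(actions, key=lambda a: Q.get((x, a), 0.0))\n
\\end{verbatim}\n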
"thesis-msc/old/implementation.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 49.3667621777, "max_line_length": 902, "alphanum_fraction": 0.7693423878, "num_tokens": 4408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.88720460564669, "lm_q2_score": 0.6334102636778403, "lm_q1q2_score": 0.5619645031988642}}
{"text": "\\chapter{Similarity Distance Algorithms}\n\\label{ch:3}\n\nIn this chapter, we present seven different similarity distance measures. \nFirst, we describe four that are parameter-free, then we describe three algorithms that require a parameter. \nBy chance, all of the parametric methods are based on Edit Distance, so we briefly explain how its construction as well. \n\n\nThe methods have largely been chosen due to their prominence in the existing literature\\cite{58-UCRTime,70-SimilarityDistances,8-EffectivenessStudy,53-TimeSeries,52-OnlineEfficient}. \nHowever, we present two newer algorithms, one parameter-free and one that requires a parameter.\nThey are presented at the end of their respective groupings. \n \nAs was the case in the previous chapter, large portions here are based on the report created for \\textit{TDT4501 Computer Science, Specialization Project}\n\n\n\\section{Parameter-Free Measures}\n% The advantage of these parameter free methods is, as their names tell us, that they do not have parameters and they are \u201cout of the box\u201d ready. \n\n\\subsection{Euclidean Distance (Ed)}\nAs noted in \\Cref{ch:2}, the Euclidean distance (Ed) is a way to quantify the distance between two points in space. \nThere is a naive extension of that principle that lays the foundation for a Euclidean Distance measure for trajectories\\cite{44-FastSubsequence, 8-EffectivenessStudy, 24-ReviewTrajectory,86-ComputingMinimum}. This method would be to compute the mean point-point distances in a lock-step manner as seen in \\Cref{eq:ed}.\n\n\\begin{equation}\\label{eq:ed}\nEd(A, B) = \\frac{1}{n}\\cdot \\sum_{i=1}^{n}d(a_i, b_i)\n\\end{equation}\nwhere the distance between corresponding points is the $L_2$-norm as defined in \\Cref{eq:l2norm}\n\nIts simplicity comes with a couple of disadvantages, one notable one is that it requires the trajectories to be of equal length. \nIn the cases where the number of points does not match one would have to adapt the data, potentially altering the original observations. \nFurthermore, computing distance in a lock-step manner means that it is sensitive to local time shifts and noise\\cite{12-RobustFast}. \n\nEven with these limitations, this measure is included for its simplicity.  Previous work has shown that it holds remarkably well up to more advanced methods\\cite{26-QueryingMining}, further encouraging us to examine this metric. \n\n\n\\subsection{Dynamic Time Warping (DTW)}\n\nAs stated in \\Cref{sec:elasticity} an elastic measure is needed in order to handle local time shifts and Dynamic Time Warping (DTW) is precisely that. \nDTW was originally designed for speech recognition which means that it handles relative lag seamlessly\\cite{45-CrosswordsReference, 27-DynamicTime}.\nThis makes it a prevalent choice for examining similarity under temporal drift.\nThe idea behind this algorithm is to stretch and contract time such that a favorable alignment of the input trajectories can be found. \n\n\n\\Cref{fig:dtw_cost_path} illustrates the implementation of DTW. \nThe computation begins by taking input trajectories $A$ and $B$ and constructing a full cost matrix $C$. This matrix keeps all pairwise distances  between the trajectories' elements. \nWe use the $L_2$ norm for point-to-point distance computation as is common for DTW\\cite{5-ComputingVisualizing}. \nAfter the matrix has been constructed, the next step is to iteratively search through  $C$ and find the optimal warping path, $p$. 
\n\n\\subsection{Dynamic Time Warping (DTW)}\n\nAs stated in \\Cref{sec:elasticity}, an elastic measure is needed in order to handle local time shifts, and Dynamic Time Warping (DTW) is precisely that. \nDTW was originally designed for speech recognition, which means that it handles relative lag seamlessly\\cite{45-CrosswordsReference, 27-DynamicTime}.\nThis makes it a prevalent choice for examining similarity under temporal drift.\nThe idea behind this algorithm is to stretch and contract time such that a favorable alignment of the input trajectories can be found. \n\n\n\\Cref{fig:dtw_cost_path} illustrates the implementation of DTW. \nThe computation begins by taking input trajectories $A$ and $B$ and constructing a full cost matrix $C$. This matrix keeps all pairwise distances between the trajectories' elements. \nWe use the $L_2$ norm for point-to-point distance computation, as is common for DTW\\cite{5-ComputingVisualizing}. \nAfter the matrix has been constructed, the next step is to iteratively search through $C$ and find the optimal warping path, $p$. \nThe optimal warping path is the set of point-to-point pairings, entries of $C$, that minimizes the accumulated cost. \nThe accumulated cost along the warping path is the DTW distance between the trajectories. \n\\Cref{fig:ed_dtw_diff} exemplifies how the Euclidean distance and DTW differ with respect to time-shift. \n\n% how this approach stands out from \n% for an illustration of how the points are aliened under a lock-step measure like the Euclidean Distance and DTW.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=.7\\textwidth]{figs/algos/dtw_C_p.png}\n\\caption{Cost matrix $C$ (left) and accumulated cost matrix $C'$ with the optimal warping path $p$ (right). Figure copied from \\cite{27-DynamicTime}}\n\\label{fig:dtw_cost_path}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.7\\textwidth]{figs/algos/ed_dtw_diff.png}\n\\caption{An illustration of how the Euclidean distance forces a lock-step computation whereas DTW realigns the trajectories. Figure copied from \\cite{58-UCRTime}}\n\\label{fig:ed_dtw_diff}\n\\end{figure}\n\n\nThe main drawback of DTW is that it weights all points of both trajectories equally and thus it is not robust to noise. \nA naive implementation of DTW forcefully aligns the start- and endpoints of the two trajectories, which can disproportionately punish trajectories that are overall similar\\cite{4-ElasticPartial}. \nGenerating the cost matrix quickly becomes expensive due to the pairwise comparison of all points of the input trajectories, which makes it less suited for large data sets \\cite{27-DynamicTime}. \n\nIts shortcomings have inspired iterations of DTW that seek to address them. Some remark that constructing a full cost matrix may not be needed, and some seek to speed up the iteration through $C$\\cite{5-ComputingVisualizing}. We note that a popular choice is \\textbf{FastDTW}, which is a parameterized version of DTW. \nAs the name indicates, it is much faster than the naive implementation yet gives comparably accurate results\\cite{9-FastDTWAccurate}. \nNevertheless, this thesis shall only focus on the parameter-free version.
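\n\nA minimal Python sketch of the parameter-free version, computing the accumulated cost matrix by dynamic programming (quadratic time and space, as discussed above):\n\n\\begin{verbatim}\n
import math\n
\n
# DTW over two 2D trajectories given as lists of (x, y) tuples.\n
def dtw(A, B):\n
    n, m = len(A), len(B)\n
    INF = float('inf')\n
    D = [[INF] * (m + 1) for _ in range(n + 1)]\n
    D[0][0] = 0.0\n
    for i in range(1, n + 1):\n
        for j in range(1, m + 1):\n
            (ax, ay), (bx, by) = A[i - 1], B[j - 1]\n
            cost = math.hypot(ax - bx, ay - by)   # L2 point distance\n
            D[i][j] = cost + min(D[i - 1][j],     # stretch B\n
                                 D[i][j - 1],     # stretch A\n
                                 D[i - 1][j - 1]) # advance both\n
    return D[n][m]  # accumulated cost of the optimal warping path\n
\\end{verbatim}\n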
\n\n\\subsection{Hausdorff Distance (Hd)}\nThis technique is often discussed in relation to polygons and sets\\cite{49-HausdorffDistance}; however, it is applicable to time-series data as well. \nA consequence of its geometric origin is that the trajectory direction no longer affects the final result; the notion of a first or last trajectory element is not kept\\cite{86-ComputingMinimum}. \nThe directed Hausdorff distance from $A$ to $B$ is the maximum, over the points of $A$, of the minimum distance to the points of $B$, see \\Cref{eq:Hd-directed}\\cite{48-ComputationalGeometrya} \n\n\\begin{equation}\n\\label{eq:Hd-directed}\n\\widetilde{Hd}(A, B) = max_{a\\in A}\\big\\{ \\, min_{b\\in B } \\enspace d(a, b)\\,\\big\\}\n\\end{equation}\n\nwhere $d(a, b)$ is any metric distance function; here the $L_2$-norm is used. \nThe distance function described in \\Cref{eq:Hd-directed} is not symmetric and thereby not a metric\\cite{14-EfficientAlgorithm}. \nTo make the Hausdorff distance a metric, it is defined as the maximum of the two directed results. \nThis makes the function symmetric, so that it fulfills the requirements for a metric, see \\Cref{eq:Hd-metric}.\n\n\\begin{equation}\n\\label{eq:Hd-metric}\nHd(A, B) = max\\big\\{ \\,\\widetilde{Hd}(A, B), \\;\\widetilde{Hd}(B, A)\\big\\}\n\\end{equation}\n\nSince we can compare any point in $A$ to any point in $B$, this measure is elastic. \nAll points of the trajectories are weighted equally, which makes it sensitive to noisy data\\cite{51-HausdorffDistance}. \nAdditionally, the effect of reducing the similarity score to the distance between two points from each trajectory is that information about the trajectories' overall shape is lost. \nLastly, we need to work out both of the directed distances to get the similarity score. This means more work per trajectory pair than naturally symmetric measures such as Ed and DTW.  \n
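\nA direct Python transcription of \\Cref{eq:Hd-directed} and \\Cref{eq:Hd-metric} might look like the following sketch (our illustration; quadratic in the number of points):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef hd_directed(A, B):\n    # max over a in A of the min distance from a to any point of B\n    A, B = np.asarray(A, float), np.asarray(B, float)\n    return max(np.linalg.norm(B - a, axis=1).min() for a in A)\n\ndef hd(A, B):\n    # symmetrize by taking the larger of the two directed results\n    return max(hd_directed(A, B), hd_directed(B, A))\n\\end{verbatim}\n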
\n\n\\subsection{Symmetrized Segment-Path Distance (SSPD)}\nLike the Hausdorff distance, the Symmetrized Segment-Path Distance (SSPD) is a spatial-only algorithm that largely disregards the direction of the trajectories.\nIt was developed from the same principles as the Hausdorff distance, but with the addition of being able to account for the trajectories as a whole\\cite{43-TrajectoryDistance, 50-ReviewPerspective}. \nThe algorithms explained so far have used the trajectory elements directly as the basis for the computations. For SSPD, this is no longer the case, and thus we define notation and terms to describe this algorithm.  \n\nIn line with \\Cref{tb:notation}, $a, b$ are elements of trajectories $A, B$ respectively. \nA \\textit{segment} is the line between two consecutive points, and we use $\\breve{a}$ to denote a segment of $A$. Using indexes, $\\breve{a}_i$ is the line between $a_i$ and $a_{i+1}$. \n\nNext we define the \\textit{point-to-segment distance} as shown in \\Cref{eq:sspd-ps}. This distance function requires us to find the point\u2019s orthogonal projection onto the segment. \nIf the projection lies within the segment, the distance is the $L_2$-norm between the original point $a_i$ and its projection $\\dot{a}_i$.\nOtherwise, the $L_2$-norm to each of the segment's endpoints is computed, and the smaller of those distances becomes the point-to-segment distance.  \n\n\\begin{equation}\n\\label{eq:sspd-ps}\nd_{ps}(a_i, \\breve{b}_j) =  \\begin{cases}\nd(a_i, \\dot{a}_i)  &\\text{if} \\enspace \\dot{a}_i \\in \\breve{b}_j\\\\\nmin\\{ d(a_i, b_j), d(a_i, b_{j+1})\\}  &\\text{otherwise}\n\\end{cases}\n\\end{equation}\nwhere $\\dot{a}_i$ is the orthogonal projection of point $a_i$ onto segment $\\breve{b}_j$ and $d$ is the $L_2$-norm. \n\nFrom the point-to-segment distance, the \\textit{point-to-trajectory distance} is defined as shown in \\Cref{eq:sspd-pt}. \nThe point-to-segment distance is computed for all the segments of the trajectory, and the lowest one becomes the point-to-trajectory distance.\n\n\\begin{equation}\n\\label{eq:sspd-pt}\nD_{pT}(a_i, B) = min_{\\breve{b} \\in B} \\big\\{ \\, d_{ps}(a_i, \\breve{b})\\, \\big\\}\n\\end{equation}\nThe \\textit{Segment-Path Distance} (SPD) is directed.\nIt is defined to be the mean of all point-to-trajectory distances from the points of the first trajectory to the second trajectory, \\Cref{eq:sspd-spd}.\n\n\\begin{equation}\n\\label{eq:sspd-spd}\nSPD(A, B) = \\frac{1}{n_A} \\sum_{i=1}^{n_A} D_{pT} (a_i, B)\n\\end{equation}\nAs with the Hd, this distance measure is made symmetric by computing both directed versions first. However, unlike Hd, the final result is the mean of the directed results. \nThe name comes from this last step, and the final formula is shown in \\Cref{eq:sspd-main}.\n \n \\begin{equation}\n\\label{eq:sspd-main}\nSSPD(A, B) = \\frac{\\big( \\, SPD(A, B) + SPD(B, A)\\, \\big)}{2}\n\\end{equation}\n\n \nThe creators of SSPD note that if one were to use the max function rather than the mean upon symmetrization, this algorithm would be identical to the Hausdorff distance\\cite{50-ReviewPerspective}. \nThey note that with their method, this distance function becomes more robust to noise when calculating the mean, as opposed to maximizing or minimizing.  \nMoreover, they commend SSPD for being parameter-free as well as for not relying on interpolation between two observed points, preferring to strengthen the importance of observed data. \nAs a closing remark, we note that SSPD does not fulfill the requirements of a metric. \n\n\\section{Parameterized Measures}\n\n\nIn this section, we describe measures that require a parameter. \nAll of them are based on String Edit Distance (SED), which is a similarity measure designed for strings.  \nThe idea is to count the number of \\textit{edits} needed to convert one string into the other, where the edit operations are insert, delete and replace.  \nDue to the strings' natural discretization, it is trivial to determine whether or not two symbols are matching. \nImplementations vary, but a common choice is to let the cost of an \\textit{edit} be 1, creating a formula as seen in \\Cref{eq:ed_string}:  \n\n\\begin{equation}\\label{eq:ed_string}\n    \\text{SED}(S1, S2) = \\begin{dcases}\n        \\quad |S1| & \\text{if }  |S2| = 0\\\\\n        \\quad |S2| & \\text{if }  |S1| = 0\\\\\n        \\quad \\text{SED}(Rest(S1),\\; Rest(S2)) & \\text{if }  Head(S1) = Head(S2) \\\\\n        \\begin{aligned}\n        1 + min\\big\\{   & \\text{SED}(Rest(S1), \\; Rest(S2)) \\\\\n              \t     & \\text{SED}(Rest(S1), \\; S2) \\\\\n           \t\t    & \\text{SED}(S1, \\; Rest(S2)) \\\\\n        \\end{aligned} & \\text{otherwise}\n  \\end{dcases}\n\\end{equation}\nwhere $Head(S)$ denotes the first symbol of $S$ and $Rest(S)$ denotes $S$ without its first symbol.\n\nReal data does not let itself be discretized as fortuitously as strings, thereby leading to the creation of the algorithms below. \nThe measures differ in how they have adapted SED for time series data, and the role of the parameter value changes as well. \n
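\nAs a point of reference before those adaptations, a naive recursive transcription of \\Cref{eq:ed_string} in Python could read as follows (our sketch; exponential as written, so practical implementations memoize or use dynamic programming):\n\n\\begin{verbatim}\ndef sed(s1, s2):\n    if len(s2) == 0:\n        return len(s1)\n    if len(s1) == 0:\n        return len(s2)\n    if s1[0] == s2[0]:                    # matching heads cost nothing\n        return sed(s1[1:], s2[1:])\n    return 1 + min(sed(s1[1:], s2[1:]),   # replace\n                   sed(s1[1:], s2),       # delete\n                   sed(s1, s2[1:]))       # insert\n\\end{verbatim}\n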
\n\\subsection{Edit Distance on Real Sequence (EDR)}\n\nEdit Distance on Real Sequence (EDR)\\cite{12-RobustFast} introduces a threshold parameter $\\epsilon$ that determines if two trajectory elements are \u201cmatching\u201d. \nCrucially, there can be no partial matches, so the point-to-point distance is either 1 or 0, as shown in \\Cref{eq:edr_subcost}.\n\n\\begin{equation}\\label{eq:edr_subcost}\nd_{edr} (a, b) = \\begin{cases}\n0 \\quad &\\text{if } \\quad | a_{LNG} -b_{LNG}|\\leqslant  \\epsilon  \\text{ and } | a_{LAT} -b_{LAT}|\\leqslant  \\epsilon    \\\\\n1 \\quad &\\text{otherwise}\\\\\n\\end{cases}\n\\end{equation}\nwhere $\\epsilon$ is the threshold parameter. After determining whether or not two points match with this subcost function, we get the full EDR algorithm as seen in \\Cref{eq:edr_main}\n\n\n\\begin{equation}\\label{eq:edr_main}\n    EDR(A, B) = \\begin{cases}\n        n_A \\qquad &\\text{if } n_B = 0\\\\\n        n_B \\qquad &\\text{if } n_A = 0\\\\\n        \\begin{aligned}\n        min\\big\\{ & EDR(\\text{ Rest}(A), \\text{ Rest}(B)) + d_{edr}(a, b),\\\\\n              & EDR(\\text{ Rest}(A), B) + 1,\\\\\n              & EDR(A, \\text{ Rest}(B)) + 1 \\big\\}\n        \\end{aligned} & \\text{otherwise}\n  \\end{cases}\n\\end{equation}where $a, b$ are the leading elements of $A, B$.  \n\nThe main advantages of EDR are its resistance to noise and its ability to handle local time shifts. \nIt is resistant to noise because it maps the distance between elements to a binary 1 or 0. \nToohey and Duckham asserted that EDR performed well in spite of variance in the sampling rate\\cite{17-TrajectorySimilarity}.\n\nEDR is not a metric, as it does not fulfill the triangle inequality, but it meets the other requirements of metrics. \nFurthermore, it evaluates similarity at the level of trajectory elements, not taking into account the trajectories' overall shape. \nWe have chosen to include EDR because it is well studied; its prevalence in the literature has led it to be studied alongside DTW and Euclidean Distance. \n
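\nA short memoized Python sketch of \\Cref{eq:edr_subcost} and \\Cref{eq:edr_main} (our illustration; it assumes points given as (longitude, latitude) pairs):\n\n\\begin{verbatim}\nfrom functools import lru_cache\n\ndef edr(A, B, eps):\n    A, B = tuple(map(tuple, A)), tuple(map(tuple, B))\n\n    @lru_cache(maxsize=None)\n    def rec(i, j):\n        if j == len(B):\n            return len(A) - i\n        if i == len(A):\n            return len(B) - j\n        # binary subcost: 0 iff both coordinates match within eps\n        match = 0 if (abs(A[i][0] - B[j][0]) <= eps and\n                      abs(A[i][1] - B[j][1]) <= eps) else 1\n        return min(rec(i + 1, j + 1) + match,\n                   rec(i + 1, j) + 1,\n                   rec(i, j + 1) + 1)\n\n    return rec(0, 0)\n\\end{verbatim}\n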
\n\n\\subsection{Edit Distance with Real Penalty (ERP)}\nEdit Distance with Real Penalty (ERP) was designed to bridge the gap between metric distance functions and those tolerant to local time shifting\\cite{13-MarriageLpnorms}. The design of ERP began with the $L_1$-norm, \\Cref{eq:manhatten}\n\n\\begin{equation}\\label{eq:manhatten}\nDist_{L1} (A, B) = \\sum_{i=1}^{n} d(a_i, b_i) \\qquad \\text{where} \\qquad d_{L_1}(a_i, b_i) = \\sum_{j=1}^{p}|a_{i,j}- b_{i,j}|\n\\end{equation}\n\nFrom SED, the notion of a \\textit{gap element} is introduced. \nA gap element is a symbol that could have been deleted from string $S1$ but instead is inserted into string $S2$.\nThe point-to-point distance function would then become what is shown in \\Cref{eq:erp_ed}. \n\n\\begin{equation}\\label{eq:erp_ed}\n    d_{ed}(a_i, b_i) = \\begin{cases}\n        0 &\\qquad \\text{if } a_i = b_i \\\\\n        1 &\\qquad \\text{if } a_i \\text{ or } b_i \\text{ is a gap } \\\\\n        1 &\\qquad \\text{otherwise } \\\\\n    \\end{cases}\n\\end{equation}\n\nRather than using a constant value to penalize all edit operations uniformly like EDR, ERP differentiates between gap elements and non-gap elements. \nThis distinction is important so that it will be tolerant to local time shifting. \nGap elements are penalized with reference to a constant value, while non-gap elements have a real-valued cost based on their values. \nThe parameter of ERP, $g$, is the reference value for this cost computation, and the point-to-point distance function is seen in \\Cref{eq:erp_sub}\n\n\\begin{equation}\\label{eq:erp_sub}\n     d_{erp}(a_i, b_i) = \\begin{cases}\n        |a_i-b_i| & \\qquad \\text{if  neither is a gap} \\\\\n        |a_i-g|   & \\qquad \\text{if } b_i \\text{ is a gap} \\\\\n        |b_i-g|   & \\qquad \\text{if } a_i \\text{ is a gap} \\\\\n     \\end{cases}\n \\end{equation}\n \nAgain, the trajectory distance function is an adaption of SED, and the full algorithm for ERP is shown in \\Cref{eq:erp_main}\n\\begin{equation}\\label{eq:erp_main}\n    ERP(A, B) = \\begin{dcases}\n        \\quad \\sum_{i=1}^{n_A}|a_i - g| & \\text{if }  n_B = 0\\\\\n        \\quad \\sum_{i=1}^{n_B}|b_i - g| & \\text{if }  n_A = 0\\\\\n        \\begin{aligned}\n        min\\big\\{   & {ERP}(Rest(A), \\; Rest(B)) + d_{erp}(a_1,\\;  b_1),\\\\\n              \t    & {ERP}(Rest(A), \\; B) + d_{erp}(a_1, \\; g),\\\\\n           \t\t    & {ERP}(A, \\; Rest(B)) + d_{erp}( b_1,\\; g) \\big\\}\n        \\end{aligned} & \\text{otherwise}\n  \\end{dcases}\n\\end{equation}\n\nThe main drawback of this method stems from the very characteristic that makes it a metric: using differences of real values as costs.\nThis method is a metric as long as $g$ is held constant, which makes the cost of an edit vary with the value of the trajectory element. \nHowever, this makes ERP sensitive to noise\\cite{12-RobustFast}. \n
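\nFor completeness, \\Cref{eq:erp_sub} and \\Cref{eq:erp_main} can be sketched in Python as follows (our illustration for one-dimensional sequences, matching the scalar notation above):\n\n\\begin{verbatim}\nfrom functools import lru_cache\n\ndef erp(A, B, g):\n    A, B = tuple(A), tuple(B)\n\n    @lru_cache(maxsize=None)\n    def rec(i, j):\n        if j == len(B):   # remaining a's are matched against gaps\n            return sum(abs(a - g) for a in A[i:])\n        if i == len(A):   # remaining b's are matched against gaps\n            return sum(abs(b - g) for b in B[j:])\n        return min(rec(i + 1, j + 1) + abs(A[i] - B[j]),\n                   rec(i + 1, j) + abs(A[i] - g),\n                   rec(i, j + 1) + abs(B[j] - g))\n\n    return rec(0, 0)\n\\end{verbatim}\n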
\n\n\\clearpage\n\\subsection{Move-Split-Merge (MSM)}\nMove-Split-Merge (MSM) \\cite{40-MoveSplitMergeMetric} is similar to the other SED-based approaches in that it calculates the similarity score from how many operations are needed to transform one trajectory into the other.\nThis algorithm is tolerant to temporal misalignments and translation invariant\\cite{53-TimeSeries}, as are EDR and ERP.  \nWhat distinguishes this algorithm from them is how insertions and deletions are handled.\nThe MSM cost model uses both the value of the element being modified as well as the adjacent one, whereas ERP only uses the element being modified and EDR uses a constant cost for all operations. \n\n\nAs the name of this algorithm indicates, the possible operations are Move, Split, and Merge, which are then used to emulate the established Edit Distance operations Insert, Delete, and Substitute.\nThe Move operation does the same as Substitute.\nThe Split operation inserts a copy of a given value directly after itself, and the Merge operation deletes a value if it is immediately followed by a matching value\\cite{53-TimeSeries}. \nIn other words, Split and Merge are each other's inverse.\nThe Insert operation becomes a Split followed by a Move, while Delete becomes a Move followed by a Merge. \nThe MSM parameter, $c$, is a non-negative value that determines the cost of every Split and Merge operation. See \\Crefrange{eq:msm-start}{eq:msm-end} for details. \n\n\\begin{align}\n A  &= [a_1, \\cdots , a_{i-1},\\, a_i,\\, a_{i+1}, \\cdots  , a_{n_A}] \\nonumber \\\\\n \\nonumber\\\\\n\\text{Move}_{a_i \\rightarrow b_j}(A)  &= [a_1, \\cdots , a_{i-1},\\, b_j,\\, a_{i+1}, \\cdots  , a_{n_A}] \\label{eq:msm-start} \\\\\n\\text{Cost} \\big\\{ \\text{Move}_{a_i \\rightarrow b_j}(A) \\big\\} &= d(a_i, b_j) \\\\\n\\nonumber \\\\\n\\text{Split}_{a_i}(A) &= [a_1, \\cdots , a_{i-1},\\,a_i,\\, a_i,\\, a_{i+1}, \\cdots  , a_{n_A}]\\\\\n\\text{Cost} \\big\\{ \\text{Split}_{a_i}(A) \\big\\} &= c \\\\\n\\nonumber \\\\\n\\text{Merge}_{a_i}(A) &= [a_1, \\cdots , a_{i-1},\\, a_{i+1}, \\cdots , a_{n_A}] \\\\\n\\text{Cost} \\big\\{ \\text{Merge}_{a_i}(A) \\big\\} &= c \\label{eq:msm-end}\n\\end{align}\n\nRecall that the $\\text{Merge}_{a_i}(A)$ operation is only permitted if $a_i = a_{i+1} $. \nThis algorithm fulfills the requirements of a metric, and thus it can be used for indexing and clustering techniques that are designed to function in metric spaces. \nIn terms of computational cost and accuracy, we refer to work done by Bagnall et al., which alludes to MSM being comparably efficient to ERP and more tolerant to noise than DTW\\cite{56-GreatTime}.\n\n \n\n \n \\clearpage\n\\section{Summary of Algorithms}\n\n\\Cref{tab:meas_comp} summarizes some features of the trajectory similarity measures. \nRemark that no two measures have matching rows, further indicating that the meaning of trajectory similarity varies with the technique's implementation. \n\n\nWe reiterate that large portions of this chapter stem from the report that was mentioned at the beginning. \nStill, there is not a complete overlap between the measures discussed in that report and those in this thesis. 
\n\n\n% https://www.tablesgenerator.com/latex_tables\n\n\\begin{table}[t]\n\\centering\n\\caption{Quick view of some aspects of the similarity distance measures}\n\\label{tab:meas_comp}\n\\resizebox{\\textwidth}{!}{%\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}[h]{*{6}{|p{1.8cm}}|}\n \\hline\n\\textbf{Name}   & \\textbf{Metric } & \\textbf{Single \\newline Element } & \\textbf{Time Shifts} & \\textbf{Parameter} & \\textbf{Noise \\newline Tolerant}      \\\\ \n\\hline\n\\textbf{ED} \t\t & Yes    & Yes     & No  & No    & No      \\\\ \n\\hline\n\\textbf{DTW}      & No    &Yes    &Yes    & No   & No    \\\\ \n\\hline\n\\textbf{Hd}       & Yes    & Yes        &Yes     & No     & No      \\\\\n\\hline\n\\textbf{SSPD}      & No    & No          &Yes   & No   & Yes  \\\\ \n\\hline\n\\textbf{EDR}       & No   & Yes    & Yes       & Yes     & Yes   \\\\\n\\hline\n\\textbf{ERP}       & Yes   & Yes    & Yes      & Yes    & No    \\\\\n\\hline\n\\textbf{MSM}      & Yes   & No    & Yes   & Yes   & Yes \\\\ \n\\hline\n\\end{tabular}\n}\n\\end{table}\n", "meta": {"hexsha": "6f0f3b0aa5aa101649d065e73ee55bfc2f9336c8", "size": 23179, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/3-sim-algo.tex", "max_stars_repo_name": "katrilh/thesis-NTNU", "max_stars_repo_head_hexsha": "9030f0a82524a6f863d8954656193acd9ab89f5f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/3-sim-algo.tex", "max_issues_repo_name": "katrilh/thesis-NTNU", "max_issues_repo_head_hexsha": "9030f0a82524a6f863d8954656193acd9ab89f5f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/3-sim-algo.tex", "max_forks_repo_name": "katrilh/thesis-NTNU", "max_forks_repo_head_hexsha": "9030f0a82524a6f863d8954656193acd9ab89f5f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.2077562327, "max_line_length": 528, "alphanum_fraction": 0.737521032, "num_tokens": 6298, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6893056295505783, "lm_q2_score": 0.8152324983301568, "lm_q1q2_score": 0.5619443504915594}}
{"text": "\\subsection{Batcher's Bitonic Parallel Sorter}\r\nBatcher's bitonic merger and sorter is a parallel sorting algorithm which has a good implementation in hardware. We have produced an implementation of this algorithm in Haskell originally for circuit generation for FPGAs. However, this executable model also represents an interesting software implicit parallelization exercise because the entire parallel structure of the algorithm is expressed in terms of just one combinator called \\codef{par2}:\r\n\r\n\\begin{lstlisting}\r\npar2 :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)\r\npar2 circuit1 circuit2 (input1, input2)\r\n  = (output1, output2)\r\n    where\r\n    output1 = circuit1 input1\r\n    output2 = circuit2 input2\r\n\\end{lstlisting}\r\n\r\nThis combinator captures the idea of two circuits which are independent and execute in parallel. This combinator is used to define other combinators which express different ways of performing parallel divide and conquer operations:\r\n\r\n\\begin{lstlisting}\r\ntwo :: ([a] -> [b]) -> [a] -> [b]\r\ntwo r = halve >-> par2 r r >-> unhalve\r\n\r\nilv :: ([a] -> [b]) -> [a] -> [b]\r\nilv r = unriffle >-> two r >-> riffle\r\n\\end{lstlisting}\r\n\r\nThe \\codef{halve} combinator breaks a list into two sub-lists of even length and the \\codef{unhalve} operate performs the inverse operation. The \\codef{riffile} combinator permutes its inputs by breaking a list into two halves and then interleaving the resulting lists. \\codef{unriffle} performs the inverse permutation.\r\n\r\nThese combinators are in turn used to define a butterfly parallel processing network which describes a merger:\r\n\r\n\\begin{lstlisting}\r\nbutterfly circuit [x,y] = circuit [x,y]\r\nbutterfly circuit input\r\n  = (ilv (butterfly circuit) >-> evens circuit) input\r\n\\end{lstlisting}\r\n\r\nThe \\codef{evens} combinator breaks an input list into adjacent groups of two elements and applies the \\codef{circuit} argument to each group.  A column of par-wise processing elements is used to combine the results of two sub-merges:\r\n\r\n\\begin{lstlisting}\r\nevens :: ([a] -> [b]) -> [a] -> [b]\r\nevens f = chop 2 >-> map f >-> concat\r\n\\end{lstlisting}\r\n\r\nThe \\codef{chop 2} combinator breaks a list into sub-lists of length 2. This parallel Batcher's bitonic merger plus the \\codef{evens} function can be used to build a parallel Batcher's bitonic sorter:\r\n\r\n\\begin{lstlisting}\r\nsortB cmp [x, y] = cmp [x, y]\r\nsortB cmp input\r\n  = (two (sortB cmp) >-> sndList reverse >-> butterfly cmp) input\r\n\\end{lstlisting}\r\n\r\nThe \\codef{sndList} combinator breaks a list into two halves and applies its argument circuit to the top halve and the identity function to the bottom halve and then concatenates the sub-results into a single list. \r\n\r\nA straightforward way to perform a semi-explicit parallelization of the \\codef{par2} combinator is use \\codef{par} to spark off the evaluation of one of the sub-circuits.\r\n\r\n\\begin{lstlisting}\r\npar2 :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)\r\npar2 circuit1 circuit2 (input1, input2)\r\n  = output1 `par` (output2 `pseq` (output1, output2))\r\n    where\r\n    output1 = circuit1 input1\r\n    output2 = circuit2 input2\r\n\\end{lstlisting}\r\n\r\nThis relatively simple change results in a definite performance gain due to parallelism. 
Here is the log output produced by running a test-bench program with just one Haskell execution context:\r\n\r\n\\begin{verbatim}\r\n.\\bsortpar.exe +RTS -N1 -l -qg0 -qb -sbsortpar-N1.log\r\n  SPARKS: 106496 (0 converted, 106496 pruned)\r\n\r\n  INIT  time    0.00s  (  0.00s elapsed)\r\n  MUT   time    5.32s  (  5.37s elapsed)\r\n  GC    time    0.72s  (  0.74s elapsed)\r\n  EXIT  time    0.00s  (  0.00s elapsed)\r\n  Total time    6.04s  (  6.12s elapsed)\r\n\\end{verbatim}\r\n\r\nAlthough many sparks are created, none are taken up because there is only one worker thread. The execution trace for this invocation is shown in Figure~\\ref{f:bsortpar-n1}.\r\n\r\n\\begin{figure*}\r\n\\begin{center}\r\n\\includegraphics[width=17cm]{bsortpar-n1.png}\r\n\\end{center}\r\n\\caption{A sequential execution of bsort}\r\n\\label{f:bsortpar-n1}\r\n\\end{figure*}\r\n\r\n\\begin{figure*}\r\n\\begin{center}\r\n\\includegraphics[width=17cm]{bsortpar-n2.png}\r\n\\end{center}\r\n\\caption{A parallel execution of bsort}\r\n\\label{f:bsortpar-n2}\r\n\\end{figure*}\r\n\r\n Running with two threads shows a very good performance improvement:\r\n\r\n\\begin{verbatim}\r\n.\\bsortpar.exe +RTS -N2 -l -qg0 -qb -sbsortpar-N2.log\r\n  SPARKS: 106859 (49 converted, 106537 pruned)\r\n\r\n  INIT  time    0.00s  (  0.00s elapsed)\r\n  MUT   time    4.73s  (  3.03s elapsed)\r\n  GC    time    1.64s  (  0.72s elapsed)\r\n  EXIT  time    0.00s  (  0.00s elapsed)\r\n  Total time    6.36s  (  3.75s elapsed)\r\n\\end{verbatim}\r\n\r\nThis example produces very many sparks, most of which fizzle, but enough sparks are turned into productive work, i.e.\\ 6.36 seconds' worth of work done in 3.75 seconds of time. The execution trace for this invocation is shown in Figure~\\ref{f:bsortpar-n2}. \r\nThere is an obvious sequential block of execution between 2.1 seconds and 2.9 seconds, and this is due to a sequential component of the algorithm which combines the results of parallel sub-computations, i.e.\\ the \\codef{evens} function. 
We can use the parallel strategies library to change the sequential application in the definition of \\codef{evens} to a parallel map operation:\r\n\r\n\\begin{lstlisting}\r\nevens :: ([a] -> [b]) -> [a] -> [b]\r\nevens f = chop 2 >-> parMap rwhnf f >-> concat\r\n\\end{lstlisting}\r\n\r\nThis results in many more sparks being converted:\r\n\r\n\\begin{verbatim}\r\n.\\bsortpar2.exe +RTS -N2 -l -qg0 -qb -sbsortpar2-N2.log\r\n  SPARKS: 852737 (91128 converted, 10175 pruned)\r\n\r\n  INIT  time    0.00s  (  0.04s elapsed)\r\n  MUT   time    4.95s  (  3.86s elapsed)\r\n  GC    time    1.29s  (  0.65s elapsed)\r\n  EXIT  time    0.00s  (  0.00s elapsed)\r\n  Total time    6.24s  (  4.55s elapsed)\r\n\\end{verbatim}\r\n\r\n", "meta": {"hexsha": "aaa74456b71ef033c09a53b9e5fc6f1390cb4e0c", "size": 5781, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/haskell_symposium_2009/bsort.tex", "max_stars_repo_name": "jrp2014/ThreadScope", "max_stars_repo_head_hexsha": "afd131fc0d3f53cec98d74dbcfe9f024710e6b1c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 137, "max_stars_repo_stars_event_min_datetime": "2015-01-04T10:24:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T20:06:38.000Z", "max_issues_repo_path": "papers/haskell_symposium_2009/bsort.tex", "max_issues_repo_name": "jrp2014/ThreadScope", "max_issues_repo_head_hexsha": "afd131fc0d3f53cec98d74dbcfe9f024710e6b1c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 74, "max_issues_repo_issues_event_min_datetime": "2015-01-28T15:19:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-24T03:39:18.000Z", "max_forks_repo_path": "papers/haskell_symposium_2009/bsort.tex", "max_forks_repo_name": "jrp2014/ThreadScope", "max_forks_repo_head_hexsha": "afd131fc0d3f53cec98d74dbcfe9f024710e6b1c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 31, "max_forks_repo_forks_event_min_datetime": "2015-08-17T17:32:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-13T14:01:26.000Z", "avg_line_length": 45.880952381, "max_line_length": 448, "alphanum_fraction": 0.711814565, "num_tokens": 1712, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8152324915965392, "lm_q1q2_score": 0.5619443354434884}}
{"text": "\\section{Methods}\n\nWe consider a generative synthetic network for our study similar to \\cite{karimi2018homophily}, where the growth is based on the preferential-attachment model and homophily. The network has nodes from 2 groups, marked using the red (minority) and blue (majority) colors. An instance of this network graph $G$ can be considered as a tuple $(N,m,f,h)$ where $N$ is the total number of nodes in the network, $m$ is the number of edges each incoming node makes with existing nodes in the network, $f$ is the fraction of minority nodes in the network and $h$ is the symmetric homophily parameter. By tuning these values, we can generate and study different networks.\n\nFor RQ1, we intend to train our RL Learn-to-rank model with a specific clicking (or in this case, choosing) behavior and wish to see how this model finally ranks network nodes. At each iteration of the recommender model training phase, a list of nodes are provided as node connection options, and the model observes the choices made by the clicking model. \n\n\\bigskip\n\n{\\setlength{\\parindent}{0cm}\nOur clicking model $C$ works according to the following rules - \n}\n\\begin{enumerate}\n\t\n\t\\item The model is provided with an observer node $v \\in V$ and a list of options $R_{v}^{t}=(u_{1}^{t},...,u_{k}^{t} | u_{i}^{t} \\in V - \\{v\\}, i \\in \\{1,...,k\\})$ at iteration $t$, where $u_{1}^{t}$ is the 1-st ranked option and $u_{k}^{t}$ is the k-th ranked option at time $t$, $V$ being the set of all nodes in the network. On every training iteration this $k$ sized list is provided to the clicking model for each node $v$.\n\t\n\t\\item The clicking model chooses a maximum of $m$ nodes from the list according to probability $\\alpha$ as given in equation-\\ref{eq_prob}\n\t\n\t\\begin{equation}\n\t\\alpha_{v}(u_{i}^{t}) = \\frac{\\delta(u_{i}^{t}) \\times h(u_{i}^{t},v) \\times e^{-(i-1)}}{\\sum\\limits_{j = 1}^{k} (\\delta(u_{j}^{t}) \\times h(u_{j}^{t},v) \\times e^{-(j-1)})} \\label{eq_prob}\n\t\\end{equation}\n\t\n\twhere $h(x,y) \\in [0,1]$ denotes the homophily between the nodes $x$ and $y$, and $\\delta(x)$ denotes the degree of the node $x$.\n\\end{enumerate}\n\n{\\setlength{\\parindent}{0cm}\nAfter running this for multiple iterations we see how the nodes rank against each other. In the next section we see some results from this.\n}\n\n\\bigskip\n\nFor RQ2, we take the network data from the given sources and find out the homophily parameter. We use a similar clicking model and try to find minority ranking at different positions of the recommended nodes. \n\n\\bigskip\n\nFor RQ3, we grow a model with a combination of organic and algorithmic as in \\cite{stoica2018algorithmic}. Our approach is as following - \n\n\\begin{enumerate}\n\t\\item We set the network parameters for $G(N,m,f,h)$ as has been defined previously. We want our network to finally grow to resemble $G$ as per the given parameters. 
\n\n{\\setlength{\\parindent}{0cm}\nAfter running this for multiple iterations, we see how the nodes rank against each other. In the next section we show some results from this.\n}\n\n\\bigskip\n\nFor RQ2, we take the network data from the given sources and find the homophily parameter. We use a similar clicking model and try to find minority rankings at different positions of the recommended nodes. \n\n\\bigskip\n\nFor RQ3, we grow a model with a combination of organic and algorithmic growth as in \\cite{stoica2018algorithmic}. Our approach is as follows: \n\n\\begin{enumerate}\n\t\\item We set the network parameters for $G(N,m,f,h)$ as defined previously. We want our network to finally grow to resemble $G$ as per the given parameters. \n\t\n\t\\item We start building our network with $m$ initial nodes, and at each iteration $t$ we choose the add phase with probability $\\beta$, and the growth phase with probability $1-\\beta$.\n\t\\begin{enumerate}\n\t\t\\item Add phase : In this phase we add a new node $v$ to the network, which chooses a maximum of $m$ nodes to connect to from $u^{t} \\in V^{t}$, $V^{t}$ being the set of nodes existing in the network $G^{t}$ at iteration $t$, according to the probability $\\alpha$ in equation \\ref{pref-hom}.\n\t\t\n\t\t\\begin{equation}\n\t\t\\alpha(u^{t}) = \\frac{\\delta(u^{t}) \\times h(u^{t},v)}{\\sum\\limits_{w \\in V^{t}} (\\delta(w^{t}) \\times h(w^{t},v))}\n\t\t\\label{pref-hom}\n\t\t\\end{equation}\n\t\t\n\t\t\\item Growth phase : In this phase, we choose to grow the network by connecting existing nodes to each other. A fraction $\\gamma$ of the existing nodes $V^{t}$ is selected as growing nodes. For each of the growing nodes, we choose either organic growth with probability $\\eta$ or algorithmic growth with probability $1-\\eta$. The algorithmic growth is aided by the recommender agent and node choices happen according to equation \\ref{eq_prob}, while organic growth happens according to equation \\ref{pref-hom}.\n\t\\end{enumerate}\n\n\t\\item The reinforcement learning agent needs to be re-trained to accommodate new nodes at an interval of $r$ iterations.\n\t\n\\end{enumerate}\n\n\\bigskip\n\nFor RQ4, existing literature suggests some methods for mitigating biases in link formation for better minority visibility, such as tweaking the ranking method for differing thresholds as suggested in \\cite{karimi2018homophily}, or introducing choice probability according to node parameters in random walks as suggested in \\cite{stoica2018algorithmic}. In the reinforcement learning model, we could introduce varying reward mechanisms for choosing minority or majority group nodes, but this needs further thought and exploration.", "meta": {"hexsha": "afceb169e5fe2131d3d56f8627b04b8f395fe185", "size": 4776, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/proposal/methods.tex", "max_stars_repo_name": "dvaruas/minority_recommendations", "max_stars_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/proposal/methods.tex", "max_issues_repo_name": "dvaruas/minority_recommendations", "max_issues_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/proposal/methods.tex", "max_forks_repo_name": "dvaruas/minority_recommendations", "max_forks_repo_head_hexsha": "8adcbf5af5c322e4b20d4336b12ecda62a5c4d5f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.3448275862, "max_line_length": 661, "alphanum_fraction": 0.7407872697, "num_tokens": 1265, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5619443234897641}}
{"text": "\\section{Limits \\& Continuity in 3D}\r\n\\noindent\r\nLimits in single-variable calculus are relatively simple because thera are only two ways to approach a point on a curve: left and right. When dealing with a surface, there are infinitely many ways to approach a point. So, we need a general and more formal idea of limits that works in higher dimensions.\r\n\r\n\\input{./differentialMultivariableCalculus/openDeltaNeighborhoods}\r\n\\input{./differentialMultivariableCalculus/boundaryPointsOpenClosedSets}\r\n\\input{./differentialMultivariableCalculus/limitContinuityDefinitions}", "meta": {"hexsha": "f374b54e086533fba94b5f6118fccc2131d5f88c", "size": 568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "multiCalc/differentialMultivariableCalculus/limitsContinuity3D.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "multiCalc/differentialMultivariableCalculus/limitsContinuity3D.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "multiCalc/differentialMultivariableCalculus/limitsContinuity3D.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 81.1428571429, "max_line_length": 304, "alphanum_fraction": 0.8257042254, "num_tokens": 130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5619443225043456}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Example: Cu metal at 3 temperature}\n\\begin{frame}[fragile] \\frametitle{Example: Cu metal at 3 temperature}\n\n\\begin{cenpage}{130mm}\n    A very simple example of a Multi-Data-Set Fit:\n\n    Cu metal, at 3 different  temperatures: 10K, 50K 150K.\n\n   \\begin{columns}\n     \\begin{column}{55mm}\n        \\vmm  Path Parameters:\n       \\begin{itemize}\n       \\item $E_0$:  Same for all $T$\n       \\item $S_0^2$  Same for all $T$\n       \\item $R$:  expands linearly with $T$ (slope + offset).\n       \\item $\\sigma^2$:  goes as Einstein temperature (as before).\n       \\end{itemize}\n\n       {\\RedEmph{12 parameters become 5.}}\n\n       \\vmm Fit range: \\vmm\n\n       \\hspace{2mm} $R = [1.60, 2.75] \\rm\\, \\AA$\n\n       \\vmm\n       \\hspace{2mm}  $k = [1.50, 18.50] \\rm\\, \\AA^{-1}$\n     \\end{column}\n     \\begin{column}{65mm}\n\n  \\begin{CodeBlock}{60mm}{Cu at three temperatures}\n\n# define fitting parameter group\npars = group(amp      = param(1, vary=True),\n             del_e0   = guess(2.0),\n             theta    = param(250, min=10, vary=True),\n             dr_off   = guess(0),\n             dr_slope = guess(0) )\n\n# define 3 Feff Path, give expressions for Path Parameters\npath1_10  = feffpath('feff0001.dat',\n                     s02='amp', e0='del_e0',\n                     deltar='dr_off + 10*dr_slope',\n                     sigma2='sigma2_eins(10, theta)')\n\npath1_50  = feffpath('feff0001.dat',\n                     s02='amp', e0='del_e0',\n                     deltar='dr_off + 50*dr_slope',\n                     sigma2='sigma2_eins(50, theta)')\n\npath1_150 = feffpath('feff0001.dat',\n                     s02='amp', e0='del_e0',\n                     deltar='dr_off + 150*dr_slope',\n                     sigma2='sigma2_eins(150, theta)')\n\n   \\end{CodeBlock}\n \\end{column}\n\\end{columns}\n\\end{cenpage}\n\\end{frame}\n\n\n%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Example: Cu metal Results}\n\\begin{frame}[fragile] \\frametitle{Example: Cu metal Results}\n\n  \\begin{cenpage}{135mm}\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{60mm}\n      \\begin{tabular}{lll}\n        {\\ } &{\\tt{amp}}  &   $0.91(0.08)$ \\\\\n        &{\\tt{theta}}  &   $233.5(19.6) \\rm\\, K $ \\\\\n        &{\\tt{del\\_e0}}  &   $0.4(1.3) \\rm \\, eV$ \\\\\n        & {\\tt{dr\\_off}} &   $0.002(0.003) \\rm \\, {\\AA}/K $ \\\\\n        &{\\tt{dr\\_slope}}   &    $0.5(1.8)\\times 10^{-5} \\rm \\, {\\AA}$ \\\\\n      \\end{tabular}\n  \\vmm\n\\end{minipage} &\n\\begin{minipage}{60mm}\n  \\includegraphics[width=56mm]{figs/Cu3temp/cu3temp_mag10}\n\\end{minipage} \\\\\n\\begin{minipage}{60mm}\n  \\includegraphics[width=56mm]{figs/Cu3temp/cu3temp_re50}\n\\end{minipage} &\n\\begin{minipage}{60mm}\n  \\includegraphics[width=56mm]{figs/Cu3temp/cu3temp_re150}\n\\end{minipage}\\\\\n  \\end{tabular}\n\\end{cenpage}\n\n\\end{frame}\n\n%%%%%%%%%%%%%%%%%%%%%%\n% \\begin{slide}{Room Temperature Cu Fit }\n\n%   Simple fit to first shell of Cu foil (300K): $k = [2,16] \\rm\\,\n%     \\AA^{-1}$, $R = [1.7,2.6] \\rm\\, \\AA$, $k$-weight=2, $N_{\\rm idp} = 8.4\n%     $.  
Fit results and statistics:\n\n\n%     {\n%       \\hspace{0.1mm}\\begin{tabular}{lll}\n%         $R = 2.548(0.007) \\, \\rm\\AA$\n%         &\n%         $\\Delta E_0 = 4.5(0.6)$\n%         &\n%         $C_3      = 9(9) \\times10^{-5} \\rm\\, \\AA^3$\n%         \\\\\n%         $\\epsilon_k = 1.6 \\times 10^{-4}$\n%         &\n%         $S_0^2 = 0.96(0.04)$\n%         &\n%         $\\sigma^2 = 8.5(0.3) \\times10^{-3} \\rm\\, \\AA^2$\n%         \\\\\n%         $\\chi^2 = 678$ &\n%         $\\chi^2_\\nu = 196.7$   & ${\\cal{R}} = 0.00107 $\\\\\n%       \\end{tabular}\n%     }\n\n%     \\vmm\n%       \\begin{tabular}{lcl}\n%         \\wgraph{49mm}{errors/cufit02} & \\hspace{2mm} &\n%         \\wgraph{49mm}{errors/cufit01} \\\\\n%       \\end{tabular}\n\n%       \\begin{itemize}\n%       \\item ${\\cal{R}} = 0.1\\% $ -- a good fit!  But like $\\chi^2_\\nu$,\n%         ${\\cal{R}}$ is larger than the $\\epsilon_k$ suggests.\n%       \\item These error bars account for correlations.  They increase\n%         $\\chi^2$ by $\\chi^2_\\nu$ (not 1), which scales them by\n%         $\\sqrt{\\chi^2_\\nu}\\approx 14$ over ``increase $\\chi^2$ by 1''.\n%       \\end{itemize}\n\n%       \\vmm\n% \\vfill\n% \\end{slide}\n", "meta": {"hexsha": "0df47d57feb09d322a52595f0dc1dd36022626d3", "size": 4086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/model_cu_3temp.tex", "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/model_cu_3temp.tex", "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/model_cu_3temp.tex", "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.6086956522, "max_line_length": 76, "alphanum_fraction": 0.5137053353, "num_tokens": 1500, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324713956856, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5619443163156513}}
{"text": "\\subsection{Mulliken populations}\r\n\\index{Mulliken populations}\r\nBy default, the density matrix printed is the Coulson matrix, which\r\nassumes that the atomic orbitals are orthogonalized.\r\n\r\nIf the assumption of orthogonality is not made, then the Mulliken density\r\nmatrix can be constructed. To construct the Mulliken density matrix (also known\r\nas the Mulliken population analysis), the M.O.s must first be re-normalized,\r\nusing the overlap matrix, $S$:\r\n$$\r\n\\psi_i^{'} = \\psi_i\\times S^{-\\frac{1}{2}}. \r\n$$\r\nFrom these M.O.s, a Coulson population is carried out. The off diagonal terms\r\nare simply the Coulson terms multiplied by the overlap:\r\n$$\r\nP_{\\lambda\\sigma\\neq\\lambda}'=S_{\\lambda\\sigma}2\\sum_{i=1}^{occ}c_{\\lambda i}\r\nc_{\\sigma i},\r\n$$\r\nwhile the on-diagonal terms are given by the Coulson terms, plus half the sum\r\nof the off-diagonal elements:\r\n$$\r\nP_{\\lambda \\lambda}' =S_{\\lambda\\sigma}2\\sum_{i=1}^{occ}c_{\\lambda i}c_{\\lambda i}\r\n + \\frac{1}{2}\\sum_{\\sigma\\neq\\lambda}P_{\\lambda \\sigma}'.\r\n$$\r\nA check of the correctness of the Mulliken populations is to add the diagonal\r\nterms: these should equal the number of electrons in the system.\r\n\r\n\\subsubsection*{Theory of Mulliken Populations}\r\nThe NDDO methods (MNDO, AM1, PM3, and MNDO-$d$) all use Slater orbitals,\r\nbut an implication of one of the approximations made, that $\\sum(F_{\\mu \\nu}-E_i\\delta_{\\mu\\nu})\r\nC_{\\nu i} =0$, is that the conventional molecular orbitals are normalized\r\nto unity:\r\n$$\r\n\\psi_i=\\sum_{\\lambda}c_{\\lambda i}\\phi_{\\lambda}\r\n$$\r\nwith\r\n$$\r\n<\\psi_i^2> = 1 = \\sum_{\\lambda}c_{\\lambda i}^2\r\n$$\r\nFor example, for H$_2$, the occupied M.O.\\ is:\r\n$$\r\n\\psi_1 = \\sqrt{\\frac{1}{2}}(\\phi_{H_1}+\\phi_{H_2}),\r\n$$\r\nand the unoccupied M.O.\\ is:\r\n$$ \r\n\\psi_2 = \\sqrt{\\frac{1}{2}}(\\phi_{H_1}-\\phi_{H_2}). \r\n$$ \r\nThe diagonal of the density matrix is then constructed using the Coulson\r\nformula:\r\n$$\r\nP_{1,1}=P_{2,2}=2.0\\times\\left(\\sqrt{\\frac{1}{2}}\\right)^2 =1.0.\r\n$$\r\nThe off-diagonal terms are constructed in the same way:\r\n$$\r\nP_{1,2}=P_{1,2}=2.0\\times\\left(\\sqrt{\\frac{1}{2}}\\right)^2 =1.0.\r\n$$\r\n\r\nIf, instead of using $\\sum(F_{\\mu \\nu}-E_i\\delta_{\\mu\\nu}) C_{\\nu i} =0$,\r\n$\\sum(F_{\\mu \\nu}-E_i) C_{\\nu i} =0$ is used, then the occupied and unoccupied\r\nM.O.s become:\r\n$$ \r\n\\psi_1 = \\sqrt{\\frac{1}{2(1+S)}}(\\phi_{H_1}+\\phi_{H_2}),\r\n$$\r\nand the unoccupied M.O.\\ is:\r\n$$\r\n\\psi_1 = \\sqrt{\\frac{1}{2(1-S)}}(\\phi_{H_1}-\\phi_{H_2}).\r\n$$       \r\nwhere $S$ is the overlap integral: $\\int\\phi_{H_1}\\phi_{H_2}{\\rm d}v$.\r\n\r\nIn this case, the Coulson population would give \r\n$$\r\n\\begin{array}{cc|cc|}\r\n   &   & \\frac{1}{1+S} & \\frac{1}{1+S} \\\\\r\nP  & = &               &               \\\\\r\n   &   & \\frac{1}{1+S} & \\frac{1}{1+S} \\\\\r\n\\end{array}\r\n$$\r\nFrom this we see that the Coulson representation is unsuitable for  two\r\nreasons: first, the number of electrons in the system, represented by the\r\ndiagonal terms, does not add to 2.0. Second, the off-diagonal terms, which\r\nshould represent the  number of electrons resulting from the overlap of the two\r\natomic orbitals, becomes unity as the overlap {\\em decreases}.  \r\n\r\nTo correct for this, it is physically meaningful to multiply the matrix \r\nelements by the overlap.  
\r\nAlthough this representation is correct, it is potentially misleading, in that\r\nthe diagonal terms do not add to the number of electrons.  Mulliken reasoned\r\nthat the electron density resulting from the overlaps should be divided into\r\ntwo equal parts and added to the diagonal terms.  When that is done, we get:\r\n$$\r\n\\begin{array}{cc|ll|}\r\n   &   & \\frac{1}{1+S}+\\frac{S}{1+S} & \\frac{S}{1+S} \\\\ \r\nP  & = &               &               \\\\ \r\n   &   & \\frac{S}{1+S} & \\frac{1}{1+S} +\\frac{S}{1+S}\\\\\r\n\\end{array} \r\n$$\r\nor\r\n$$\r\n\\begin{array}{cc|ll|}\r\n   &   & 1.0  & \\frac{S}{1+S} \\\\ \r\nP  & = &               &               \\\\ \r\n   &   &  \\frac{S}{1+S} & 1.0\\\\ \r\n\\end{array} \r\n$$\r\nThis simple example can be extended to systems involving heteroatoms and to\r\npolyatomics, and is fully general.\r\n\r\nThe Mulliken analysis can be applied to semiempirical methods.  To do this, it\r\nis necessary to first convert the M.O.s from solutions of  $\\sum(F_{\\mu\r\n\\nu}-E_i\\delta_{\\mu\\nu}) C_{\\nu i} =0$ to solutions of $\\sum(F_{\\mu \\nu}-E_iS_{\\mu\\nu})\r\nC_{\\nu i} =0$.  The simplest way to do this is to take the conventional M.O.s\r\nand multiply them by $S^{-\\frac{1}{2}}$.  In the case of H$_2$, the resulting\r\nM.O.s are exactly correct; in general, a small error is introduced.  This error\r\narises from the incomplete annihilation of the secular matrix elements, and is\r\nquite unimportant.\r\n\r\n", "meta": {"hexsha": "b838f9373aaceaf9002639a3a3e11463019d0351", "size": 5064, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex", "max_stars_repo_name": "openmopac/MOPAC-archive", "max_stars_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-12-16T20:53:27.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T20:54:11.000Z", "max_issues_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex", "max_issues_repo_name": "openmopac/MOPAC-archive", "max_issues_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuals/MOPAC2000_manual/t_mullik.tex", "max_forks_repo_name": "openmopac/MOPAC-archive", "max_forks_repo_head_hexsha": "01510e44246de34a991529297a10bcf831336038", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2558139535, "max_line_length": 97, "alphanum_fraction": 0.6469194313, "num_tokens": 1641, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7279754607093178, "lm_q1q2_score": 0.5618831259348918}}
{"text": "%*****************************************\n\\chapter{Formulas}\\label{ch03:formulas}\n%*****************************************\n\nExcel workbooks are designed to create useful and complex calculations. In addition to doing arithmetic, Excel can look up data and display results based on logical conditions. Finally, Excel can highlight specific results to enhance analysis. These skills will be demonstrated in the context of a typical grade book spreadsheet that contains the results for an imaginary Excel class.\n\n\\begin{center}\n\t\\begin{objbox}{Learning Objectives}\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Use the \\textit{Quick Analysis Tool} to find the Total Points and Points Possible for all students.\n\t\t\t\\item Write a division formula to find the Percentage for each student, using an absolute reference to the \\textit{Total Points Possible}.\n\t\t\t\\item Write an \\textit{IF} Function to determine Pass/Fail where passing is $ 70\\% $ or higher.\n\t\t\t\\item Write a \\textit{VLOOKUP} to determine the Letter Grade using a Letter Grades scale.\n\t\t\t\\item Use the \\textit{TODAY} function to insert the current date.\n\t\t\t\\item Review common Error Messages using \\textit{Smart Lookup} to get definitions of some of the terms in the spreadsheet.\n\t\t\t\\item Apply \\textit{Data Bars} to the Total Points values.\n\t\t\t\\item Apply \\textit{Conditional Formatting} to the Percentage, Pass/Fail, and Letter Grade columns.\n\t\t\t\\item Printing Review \u2013 Change to Landscape, Scale to Fit Columns on One Page and Set Print Area.\n\t\t\t\n\t\t\\end{itemize}\n\t\\end{objbox}\n\\end{center}\n\nFigure \\ref{03:fig01} shows the completed workbook that will be demonstrated in this chapter. Notice the techniques used in \\textit{Column O} and \\textit{Column R} that highlight the results of the calculations. Notice, also that there are more numbers on this version of the file than in the original data file. These are all completed using Excel calculations.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig01}\n\t\\caption{Completed Grade Book Worksheet}\n\t\\label{03:fig01}\n\\end{figure}\n\n\\section{More on Formulas and Functions}\n\n\\begin{center}\n\t\\begin{objbox}{Learning Objectives}\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\t\t\t\n\t\t\t\\item Review the use of the \\textit{=MAX} function.\n\t\t\t\\item Examine the \\textit{Quick Analysis Tool} to create standard calculations, formatting, and charts very quickly.\n\t\t\t\\item Create Percentage calculation.\n\n\t\t\t\\begin{itemize}\n\t\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\t\\item Use the \\textit{Smart Lookup} tool to acquire additional information about percentage calculations.\n\t\t\t\t\\item Review the use of Absolute cell reference in a division formula.\n\t\t\t\\end{itemize}\n\n\t\t\\end{itemize}\n\t\\end{objbox}\n\\end{center}\n\n\\subsection{Another Use for \\fmtTyping{=MAX}}\n\nBefore moving on to the more interesting calculations discussed in this chapter, it is necessary to determine how many points it is possible for each student to earn for each of the assignments. This information will go into \\textit{Row} $ 25 $. 
The \\textit{=MAX} function is the tool of choice for this task.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Open the data file \\fmtWorksheet{CH3-Data} and save the file as \\fmtWorksheet{CH3-Grade Book}.\n\t\t\\item Click \\fmtLoc{B25}.\n\t\t\\item Start typing \\fmtTyping{=MAX} (see Figure \\ref{03:fig02}). Note the explanation that pops up on the offered list of functions. Either keep typing \\fmtTyping{(} (an open parenthesis) or double click \\fmtButton{MAX} from the popup list.\n\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig02}\n\t\t\t\\caption{Entering a function}\n\t\t\t\\label{03:fig02}\n\t\t\\end{figure}\n\t\n\t\t\\item Select the range \\fmtLoc{B5:B24}. The calculation will update to: \\fmtTyping{=MAX(B5:B24)}. Press \\fmtKeystroke{Enter} to complete the formula.\n\t\t\\item Now, use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{B25} to \\fmtLoc{C25:N25}. Note that as the calculation is copied from one column to the next, the cell references change. The calculation in \\fmtLoc{B25} reads: \\fmtTyping{=MAX(B5:B24)} and the one in \\fmtLoc{N25} reads \\fmtTyping{=MAX(N5:N24)}. These cell references are called \\textit{relative} references.\n\t\\end{enumerate}\n\\end{enumbox}\n\nBy default, the calculations that Excel copies change their cell references relative to the row or column they are copied to. That makes sense. \\textit{Column N} should not display an answer that uses the values from \\textit{Column K}.\n\nTo see all the calculations that were just created, press \\fmtKeystroke{Ctrl} + \\fmtKeystroke{$ ` $} (that is the backtick near the \\fmtKeystroke{one} key on the keyboard, see Figure \\ref{03:fig03}). \\fmtKeystroke{Ctrl} + \\fmtKeystroke{$ ` $} displays the calculations (formulas) in each cell. Pressing \\fmtKeystroke{Ctrl} + \\fmtKeystroke{$ ` $} a second time will display calculations as values, which is the default view for cells.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig03}\n\t\\caption{Relative References \u2013 Displayed as calculations.}\n\t\\label{03:fig03}\n\\end{figure}\n\n\\subsection{Quick Analysis Tool}\n\nThe \\textit{Quick Analysis Tool} creates standard calculations, formatting, and charts very quickly. In this exercise, it is used to insert the \\textit{Total Points} for each student in \\textit{Column O}.\n\nIf necessary, press \\fmtKeystroke{Ctrl} + \\fmtKeystroke{$ ` $} to return the spreadsheet to the normal view (the results of the formulas in \\textit{Row }$ 25 $ should be displayed, not the formulas themselves).\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select \\fmtLoc{B5:N25}.\n\t\t\\item In the lower right corner of the selection, notice the \\fmtButton{Quick Analysis Tool} (see Figure \\ref{03:fig04}).\n\t\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig04}\n\t\t\t\\caption{Quick Analysis Tool}\n\t\t\t\\label{03:fig04}\n\t\t\\end{figure}\n\t\n\t\t\\item In the \\fmtButton{Quick Analysis Tool}, select \\fmtButton{Totals $ \\Rightarrow $ COLUMN SUM} (see Figure \\ref{03:fig05}). (Note: there are two SUM buttons; the second will sum columns, as indicated by the tan-colored column indicator on the button.) 
Selecting the \\fmtButton{COLUMN SUM} button places a \\fmtTyping{=SUM()} calculation in all cells in \\fmtLoc{Column O}.\n\t\\end{enumerate}\n\\end{enumbox}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig05}\n\t\\caption{Quick Analysis Tool \u2013 Totals, Sum Column}\n\t\\label{03:fig05}\n\\end{figure}\n\n\\subsection{Percentage Calculation}\n\n\\textit{Column P} requires a Percentage calculation. Before creating a calculation for this, it might be handy to know precisely what is needed. The \\textit{Smart Lookup} tool is a handy way to get more information if the computer is connected to the Internet.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select cell \\fmtLoc{P4}.\n\t\t\\item Click \\fmtButton{Review $ \\Rightarrow $ Insights $ \\Rightarrow $ Smart Lookup} (see Figure \\ref{03:fig06}). This will find more about Percentage calculations. If this is the first time that the \\textit{Smart Lookup} tool has been used, a privacy statement may pop up. Press the \\fmtButton{Got it} button.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\nExcel searches the web for articles relevant to \\textit{percentage} and lists links to those articles. Figure \\ref{03:fig06} illustrates the result of the search done when this lesson was written, but that result will change depending on what information is available on the web.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig06}\n\t\\caption{Smart Lookup tool}\n\t\\label{03:fig06}\n\\end{figure}\n\nNow that the data needed for the \\textit{Percentage} calculation is known, a formula can be written so Excel will complete the calculation. The \\textit{Total Points} for each student must be divided by the \\textit{Total Points Possible}. Notice that there is a different number of points earned for each student, but there is only one \\textit{Total Points Possible} --- the value in cell $ O25 $.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click in \\fmtLoc{P5}.\n\t\t\\item Press \\fmtTyping{=} then click \\fmtLoc{O5}. \n\t\t\\item Enter \\fmtTyping{/}.\n\t\t\\item Click \\fmtLoc{O25}. The calculation should look like \\fmtTyping{=O5/O25}.\n\t\t\\item Press \\fmtKeystroke{Enter}. The result of the formula should be $ 0.95641026 $. So far, so good. DeShea Andrews is doing well in this class, with a percentage grade of almost 96\\%---definitely an ``A''!\n\t\t\\item Next use the \\fmtButton{Auto Fill Handle} to copy the calculation from \\fmtLoc{P5} to \\fmtLoc{P6:P24} to calculate the other students' grades. Unfortunately, this error message is displayed: \\textit{\\#DIV/$ 0 $}. This message means that an attempt has been made to divide a number by $ 0 $ (zero), which is impossible. The formula in \\fmtLoc{P9} reads $ =O9/O29 $. The first cell reference is correct --- it points to Moesha Gashi's total points for the class. But the second reference is wrong. It points to an empty cell, \\fmtLoc{O29}.\n\t\\end{enumerate}\n\\end{enumbox}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.50\\linewidth}]{gfx/ch03_fig07a}\n\t\\caption{Illegal Cell Reference}\n\t\\label{03:fig07a}\n\\end{figure}\n\nBefore copying the calculation, the second reference ($ O25 $) must be changed to an absolute cell reference. That way, when it is copied down the column, the cell reference for $ O25 $ will be locked and will not change.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click in \\fmtLoc{P5}. 
\n\t\t\\item In the \\textit{Formula Bar} click on \\fmtLoc{O25} (see Figure \\ref{03:fig07}).\n\t\t\\item Press \\fmtKeystroke{F4} to make the \\fmtLoc{O25} reference absolute. That way, it will not change when the cell is copied (see Figure \\ref{03:fig08}). It is also easy enough to simply type a \\$ before the \\fmtLoc{O} and another one before the \\fmtLoc{25} for devices like tablets and laptops that may not have function keys.\n\t\t\\item Press \\fmtKeystroke{Enter}.\n\t\t\\item The formula in \\fmtLoc{P5} now looks like \\fmtTyping{=O5/\\$O\\$25}.\n\t\t\\item Click in \\fmtLoc{P5} and use the \\fmtButton{Auto Fill Handle} to copy that cell to \\fmtLoc{P6:P24}. Now the formula has the correct values for all students.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.40\\linewidth}]{gfx/ch03_fig07}\n\t\\caption{Editing a Formula}\n\t\\label{03:fig07}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.40\\linewidth}]{gfx/ch03_fig08}\n\t\\caption{Absolute Cell reference \u2013 press F4}\n\t\\label{03:fig08}\n\\end{figure}\n\nThose long decimals in the percent column are nonstandard so they should be changed to a percent by applying cell formatting.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select the range \\fmtLoc{P5:P24}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ \\% (Percent)}.\n\t\\end{enumerate}\n\\end{enumbox}\n\n\\begin{center}\n\t\\begin{sklbox}{Skill Refresher}\n\t\t\\textbf{Absolute References}\n\t\t\\\\\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Click in front of the column letter of a cell reference in a formula or function that should not be altered when the formula or function is pasted into a new cell location.\n\t\t\t\\item Press the \\fmtKeystroke{F4} key or type a dollar sign (\\$) in front of the column letter and row number of the cell reference.\n\t\t\t\t\t\t\n\t\t\\end{itemize}\n\t\\end{sklbox}\n\\end{center}\n\n\\begin{center}\n\t\\begin{tkwbox}{Key Take-Aways}\n\t\t\\textbf{Functions}\n\t\t\\\\\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Functions can be created using cell ranges or selected cell locations separated by commas. Make sure to use a cell range (two cell locations separated by a colon) when applying a statistical function to a contiguous range of cells.\n\t\t\t\\item To prevent Excel from changing the cell references in a formula or function when they are pasted to a new cell location, use an absolute reference. 
Do this by placing a dollar sign (\$) in front of the column letter and row number of a cell reference or by using the \fmtKeystroke{F4} function key.
			\item The \textit{\#DIV/0!} error appears if a formula attempts to divide a constant or the value in a cell reference by zero, typically because it refers to an empty cell.

		\end{itemize}
	\end{tkwbox}
\end{center}

\section{Logical and Lookup Functions}

\begin{center}
	\begin{objbox}{Learning Objectives}
		\begin{itemize}
			\setlength{\itemsep}{0pt}
			\setlength{\parskip}{0pt}
			\setlength{\parsep}{0pt}

			\item Use an \textit{IF} Function to make logical comparisons between a value and what is expected.
			\item Create a \textit{VLOOKUP} calculation to look up information in a table.
			\item Understand error messages.
			\item Understand how to enter and format Date/Time Functions.

		\end{itemize}
	\end{objbox}
\end{center}

In addition to doing arithmetic, Excel can evaluate other kinds of functions based on the data in the spreadsheet. In this section, an \textit{=IF} function will be used to determine whether a student is passing or failing the class. Then, a \textit{=VLOOKUP} function will be used to determine what grade each student has earned.

\subsection{If Function}

The \textit{IF} function is one of the most popular functions in Excel. It makes logical comparisons between a value and what was expected and then fills a cell based on that comparison. In its simplest form, the \textit{IF} function says something like, \textit{If the value in a cell is as expected (true) then do this; otherwise do that}.

The \textit{IF} function has three arguments.

\begin{itemize}
	\item \textbf{Logical test}. This is the test to see if the value in a selected cell is as expected. A test can be something like ``$ B7=14 $'' or ``$ B7>12 $'' or ``$ B7<6 $.''
	\item \textbf{Value\_if\_true}. What to do if the requirements in the logical test are met; for example, if $ B7 $ is equal to $ 14 $ then fill the cell with text like ``True,'' or ``On budget!'' Alternatively, this argument can contain a calculation, like $ B7*2 $. That is, if $ B7 $ equals $ 14 $ then multiply it by $ 2 $. Finally, if Excel should put nothing at all in the cell then type ``'' (two empty quotes).
	\item \textbf{Value\_if\_false}. What to do if the requirements in the logical test are \textit{NOT} met; for example, if \textit{B7} does \textit{NOT} equal $ 14 $. To have Excel do nothing, type empty double quotes. Of course, Excel can also enter whatever text or calculation is desired.
\end{itemize}
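Putting the three arguments together, here is a small sketch (cell values assumed for illustration): \fmtTyping{=IF(B7>12,''Over budget'',''OK'')} displays \textit{Over budget} when the value in \fmtLoc{B7} is greater than $ 12 $ and \textit{OK} otherwise, while \fmtTyping{=IF(B7=14,B7*2,``'')} displays double the value when the test is met and leaves the cell blank (the two empty quotes) when it is not.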
For the grade book, \textit{Column Q} should indicate whether a student is passing or failing the class. Students who score $ 70\% $ or better are considered passing while scores less than $ 70\% $ are failing.

\begin{enumbox}
	\begin{enumerate}
		\item Click in \fmtLoc{Q5}.
		\item Click \fmtButton{Formulas $ \Rightarrow $ Function Library $ \Rightarrow $ Logical $ \Rightarrow $ IF} (see Figure \ref{03:fig09}).
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig09}
	\caption{IF Function}
	\label{03:fig09}
\end{figure}

The \textit{IF Function} dialog box pops up with a place to enter each of the three arguments.

\begin{enumbox}
	\begin{enumerate}
		\item Click in the box for \fmtButton{Logical Test}. To test whether a student's score is less than $ 0.7 $, enter \fmtTyping{$ P5<0.7 $}.
		\item Click in the box for \fmtButton{Value\_if\_true}. If the student's score is less than $ 0.7 $, then they are failing the class. In this box, type \fmtTyping{Fail}. Note: Excel automatically encloses the word \textit{Fail} in quote marks.
		\item Click in the box for \fmtButton{Value\_if\_false}. If the student's score is \textit{NOT} less than $ 0.7 $, then they are passing the class. In this box, type \fmtTyping{Pass}. Note: Excel automatically encloses the word \textit{Pass} in quote marks.
		\item Make sure that the dialog box matches Figure \ref{03:fig10}.
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig10}
	\caption{IF Function Dialog Box}
	\label{03:fig10}
\end{figure}

Notice that as each box is filled, Excel offers a brief explanation of the contents (below the boxes). In the lower left corner is the result of the calculation, so it can be checked before the function is inserted in the cell. In this case, DeShea is passing the class. Below that is a link to Help on this function. Selecting this link will open Excel help for this function, along with detailed information on how it works.

\begin{enumbox}
	\begin{enumerate}
		\item Once the required arguments are entered and checked, press \fmtButton{OK}.
		\item The text \textit{Pass} should be displayed in \fmtLoc{Q5} because DeShea is passing the class.
		\item Click on \fmtLoc{Q5}. The formula bar should display the \fmtButton{IF} calculation: \fmtTyping{=IF(P5<0.7,''Fail'',''Pass'')} (see Figure \ref{03:fig11}).
		\item Use the \fmtButton{Auto Fill Handle} to copy \fmtLoc{Q5} to \fmtLoc{Q6:Q24}.
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig11}
	\caption{IF Function Results}
	\label{03:fig11}
\end{figure}

\subsection{Vlookup Function}

A \textit{VLOOKUP} function is used to look up information in a table. Sometimes that table is on a different sheet in the workbook, though it can be in another file entirely. In this case, all students' grades are determined by their percentage score. For this grade book, the table that relates a score to a grade is in $ A28 $:$ B32 $.

There are four pieces of information that are needed to build the \textit{VLOOKUP} function.

\begin{itemize}
	\item The \textbf{Lookup\_value} is the value to look up. For this exercise, the lookup value will be the student's percentage score in \textit{Column P}.
	\item The \textbf{Table\_array} is the range (or table) where the lookup values are located. In this example, the table of percentages and corresponding letter grades is in the range $ A28 $:$ B32 $. The lookup value, or percentage grade in this case, should always be in the first column in the table array for \textit{VLOOKUP} to work correctly.
	\item The \textbf{Col\_index\_num} is the column number in the range that contains the value to return. In this example, $ A28 $:$ B32 $ is the \textit{Table\_array} range, so \textit{Column A} is the first column (1) and \textit{Column B} is the second column (2).
To return the grade in \\textit{Column B}, the number $ 2 $ would be entered as the \\textit{Col\\_index\\_num}.\n\t\\item The \\textbf{Range\\_lookup} is TRUE for an \\textit{approximate} match or FALSE for an exact match. If this is left blank the default value is TRUE, or an approximate match.\n\\end{itemize}\n\nFollow these steps to create the \\textit{VLOOKUP} to display the correct \\textit{Letter Grade} in \\textit{Column R}.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click in \\fmtLoc{R5}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ Lookup \\& Reference $ \\Rightarrow $ VLOOKUP}. (See Figure \\ref{03:fig12}).\n\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig12}\n\t\t\t\\caption{VLOOKUP Function}\n\t\t\t\\label{03:fig12}\n\t\t\\end{figure}\n\t\t\n\t\t\\item Fill in the dialog box so that it looks like Figure \\ref{03:fig13}.\n\t\t\n\t\t\\begin{itemize}\n\t\t\t\\item \\textbf{Lookup\\_value}. Specify the percentage score, which is in cell \\fmtLoc{P5} for the first student.\n\t\t\t\\item \\textbf{Table\\_array}. This is the range that contains the value to be returned by the function. In this case, that range is $ A28 $:$ B32 $. Note that this range does \\textit{NOT} include the label in \\textit{Row} $ 27 $, just the actual data. It is important that the cell references for the Table\\_array need to be absolute: \\fmtLoc{\\$A\\$28:\\$B\\$32}. When this function is copied to other cells the cell references should not change.\n\t\t\t\\item \\textbf{Col\\_index\\_number}. This is the column in the table array range that includes the information that should be returned. In this case, the grades are in the 2nd column of the range so the column index will be $ 2 $.\n\t\t\t\\item \\textbf{Range\\_lookup}. Since an approximate match is appropriate for this application the default value of TRUE is appropriate. Since that is the default, nothing needs to be entered for this value. \n\t\t\\end{itemize}\n\n\t\t\\item Be sure to observe the helpful definitions that Excel offers while filling in the \\fmtButton{VLOOKUP} dialog box.\n\t\t\\item When the dialog box is complete, press \\fmtButton{OK}.\n\t\t\\item The calculation in the formula bar is: \\fmtTyping{=VLOOKUP(P5,\\$A\\$28:\\$B\\$32,2)}\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{R5} to \\fmtLoc{R6:R24}.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig13}\n\t\\caption{VLOOKUP completed dialog box}\n\t\\label{03:fig13}\n\\end{figure}\n\nFigure \\ref{03:fig14} illustrates the grade book when the \\textit{VLOOUP} function is applied.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig14}\n\t\\caption{VLOOKUP Complete}\n\t\\label{03:fig14}\n\\end{figure}\n\nWhat if the \\textit{VLOOKUP} function does not work as expected? In this case, a mistake was made in either the calculation of the \\% scores in \\textit{Column P} or there is an error in the \\textit{VLOOKUP} function. To make repairs to the function, make sure that $ R5 $ is the active cell. On the Formula bar, press the \\textit{Insert Function} button (see Figure \\ref{03:fig15}). That will reopen the dialog box to make repairs. A common error is to forget to make the cell references for the \\textit{Table\\_array} absolute. 
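To check the logic of the function, here is a worked sketch (the exact grade boundaries in \fmtLoc{A28:B32} are assumed for illustration): if \fmtLoc{A28:A32} held the values $ 0 $, $ 0.6 $, $ 0.7 $, $ 0.8 $, and $ 0.9 $ and \fmtLoc{B28:B32} held the grades F, D, C, B, and A, then \fmtTyping{=VLOOKUP(0.87,\$A\$28:\$B\$32,2)} would return \textit{B}. With an approximate match, Excel scans the first column for the largest value that is less than or equal to the lookup value ($ 0.8 $ here) and returns the entry from column $ 2 $ of that row. This is also why the first column of the table must be sorted in ascending order.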
Press \\textit{OK} when the correction is completed and then recopy the corrected function.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig15}\n\t\\caption{Insert Function}\n\t\\label{03:fig15}\n\\end{figure}\n\n\\subsection{Error Messages}\n\nSometimes Excel notices errors in the calculations and may post a slightly mysterious error message. Table \\ref{03:tab01} lists the common error messages that Excel displays along with their meanings\\footnote{Table \\ref{03:tab01} was adapted from  \\url{https://www.dummies.com/software/microsoft-office/excel/understanding-excel-2010s-formula-error-values/}}.\n\n\\begin{table}[H]\n\t\\rowcolors{1}{}{tablerow} % zebra striping background\n\t{\\small\n\t\t%\\fontsize{8}{10} \\selectfont %Replace small for special font size\n\t\t\\begin{longtable}{L{0.60in}L{3.65in}} %Left-aligned, Max width: 4.25in\n\t\t\t\\textbf{Message} & \\textbf{What Went Wrong} \\endhead\n\t\t\t\\hline\n\t\t\t\\#DIV/$ 0 $ & The division operation refers to a cell that contains the value $ 0 $ or is blank.\\\\\n\t\t\t\\#N/A & Technically, this is not an error value but a special value \t\tthat can be manually entered into a cell to indicate that there is no value yet. This is a placeholder used while a spreadsheet is being developed.\\\\\n\t\t\t\\#NAME? & This error value appears when a range name is incorrectly entered, or the name is deleted. Also, this commonly means that the formula is missing quotation marks around a text string.\\\\\n\t\t\t\\#NULL & This error occurs if a space is used instead of a comma between ranges in function arguments. The formula needs to be carefully checked and corrected.\\\\\n\t\t\t\\#NUM & This error is caused by an invalid argument in an Excel\tfunction or by a formula that produces a number too large or too small for the worksheet.\\\\\n\t\t\t\\#REF & This error occurs when a cell referred to in a formula has been deleted.\\\\\n\t\t\t\\#VALUE & This error is most often the result of specifying a mathematical operation that refers to one or more cells that contain text.\\\\\n\t\t\t\\rowcolor{captionwhite}\n\t\t\t\\caption{Common Error Messages}\n\t\t\t\\label{03:tab01}\n\t\t\\end{longtable}\n\t} % End small\n\\end{table}\n\n\\subsection{Date Functions}\n\nVery often dates and times are an important part of Excel data. Numbers that are correct today may not be accurate tomorrow, so it is frequently useful to include dates and times on the spreadsheets. Dates and times fall into two general categories.\n\n\\begin{itemize}\n\t\\item \\textbf{Remain the same.} For instance, if a spreadsheet includes data for May $ 15 $th then that date should not change each time the spreadsheet is accessed.\n\t\\item \\textbf{Change to reflect the current date/time.} When it is important to have the current date or time on a spreadsheet then Excel should update the information regularly.\n\\end{itemize}\n\nLook at the functions at \\textit{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ Date \\& Time} (see Figure \\ref{03:fig16}).\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig16}\n\t\\caption{Date \\& Time Functions}\n\t\\label{03:fig16}\n\\end{figure}\n\nFor the grade book, the date and time should be displayed in $ A2 $, and it needs to be updated whenever the workbook file is opened.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click in \\fmtLoc{A2}. 
Notice that \\fmtLoc{A2} extends all the way from \\fmtLoc{Column A} to \\fmtLoc{Column R}. Previously, the \\fmtButton{Merge \\& Center} tool was used on this cell to make it match the width of the title in \\fmtLoc{Row 1}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ Date \\& Time $ \\Rightarrow $ NOW}. \n\t\t\\item Click \\fmtButton{OK}.\n\t\t\\item The result in the formula bar is: \\fmtTyping{=NOW()} and the result in \\fmtLoc{A2} depends on the current date and time. The \\fmtButton{NOW} function is a very handy function; it takes no arguments and is volatile! That is not as alarming as it may seem. This just means that it does not need any information to do its job and the results will change frequently. \n\t\t\\item Wait at least one minute and then click in \\fmtLoc{A2} and press \\fmtKeystroke{F9} to update the time.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\nExcel will update this field automatically whenever the file is saved, reopened, or printed. It may also happen more frequently than that, depending on how Excel is set up.\n\nAnother variation of the current date is the \\textit{TODAY} function.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click in \\fmtLoc{A2}. Press \\fmtKeystroke{Delete} to remove the \\fmtButton{NOW} function.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ Date \\& Time $ \\Rightarrow $ TODAY}. \n\t\t\\item Click \\fmtButton{OK}.\n\t\t\\item The result in the formula bar is \\fmtTyping{=TODAY()} and the result in \\fmtLoc{A2} is the current date. Since the time was not requested it is likely $ 12\\!:\\!00 $\\textit{ AM}. That is not helpful, so the date format needs to be adjusted.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Number Format Launcher} (see Figure \\ref{03:fig17}).\n\t\t\\item In the \\textit{Format Cells} dialog box, click the \\textit{Number} tab. Choose the \\fmtButton{Date} category and select the second option, \\fmtButton{Wednesday, March 14, 2012} (this format is called \\textit{Long Date}).\n\t\t\\item Click \\fmtButton{OK}.\n\t\t\\item The current day and date should now be displayed in \\fmtLoc{A2}.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig17}\n\t\\caption{Number Format Launcher}\n\t\\label{03:fig17}\n\\end{figure}\n\n\\begin{center}\n\t\\begin{shtcutbox}{Keyboard Shortcuts}\n\t\t\\textbf{Static Date and Time}\n\t\t\\\\\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item \\fmtKeystroke{CTRL} + \\fmtKeystroke{;} (semicolon) inserts the current date\n\t\t\t\\item \\fmtKeystroke{CTRL} + \\fmtKeystroke{:} (colon) inserts the current time.\n\t\t\t\n\t\t\\end{itemize}\n\t\\end{shtcutbox}\n\\end{center}\n\n\n\\begin{center}\n\t\\begin{tkwbox}{Key Take-Aways}\n\t\t\\textbf{Date Functions}\n\t\t\\\\\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Functions do not always have to be about arithmetic. 
\begin{center}
	\begin{tkwbox}{Key Take-Aways}
		\textbf{Date Functions}
		\\
		\begin{itemize}
			\setlength{\itemsep}{0pt}
			\setlength{\parskip}{0pt}
			\setlength{\parsep}{0pt}

			\item Functions do not always have to be about arithmetic. Excel provides functions that help perform logical evaluations, look things up, and work with dates and times.
			\item Excel displays error messages when formulas and functions are not constructed properly.

		\end{itemize}
	\end{tkwbox}
\end{center}

\section{Conditional Formatting}

\begin{center}
	\begin{objbox}{Learning Objectives}
		\begin{itemize}
			\setlength{\itemsep}{0pt}
			\setlength{\parskip}{0pt}
			\setlength{\parsep}{0pt}

			\item Use Conditional Formatting techniques to provide flexible highlighting or to apply specified formatting only when certain conditions are met. Techniques include:

			\begin{itemize}
				\item \textbf{Data bars}. Makes it easy to visualize values in a range of cells.
				\item \textbf{Cell Rules}. Highlights values that match specified requirements.
			\end{itemize}

		\end{itemize}
	\end{objbox}
\end{center}

\subsection{Initiating Conditional Formatting}

All necessary calculations are now in the \textit{CAS 170 Grades} spreadsheet. However, the grade book contains a lot of data. To make it easier to quickly find the most important pieces of data, Excel provides \textit{Conditional Formatting}.

\begin{enumbox}
	\begin{enumerate}
		\item Select the range \fmtLoc{O5:O24}.
		\item At the bottom of the selection, click on the \fmtButton{Quick Analysis Tool}. This is a popup tool with an icon that looks like a spreadsheet with colored lines.
		\item Click \fmtButton{Formatting $ \Rightarrow $ Data Bars} (see Figure \ref{03:fig18}).
	\end{enumerate}
\end{enumbox}

Excel places blue bars on top of the values; long blue bars for larger numbers, shorter ones for smaller numbers. This makes it easy to see at a glance how well each student did in the class without having to look at the specific numbers.

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig18}
	\caption{Data Bars on the Quick Analysis tool}
	\label{03:fig18}
\end{figure}

Following is another way to apply Data Bars.

\begin{enumbox}
	\begin{enumerate}
		\item Select the range \fmtLoc{O5:O24}.
		\item Click \fmtButton{Home $ \Rightarrow $ Styles $ \Rightarrow $ Conditional Formatting $ \Rightarrow $ Data Bars}.
		\item From there, data bars of different colors and opacities can be selected (see Figure \ref{03:fig19}).
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig19}
	\caption{Data Bars on the Conditional Formatting tool}
	\label{03:fig19}
\end{figure}

It is even more important to highlight the students who are failing the class, so that will be done in two places: the \textit{Percentages} and \textit{Letter Grade} columns.
To start, any \\textit{F} letter grades should be formatted with a light red fill color and dark red text.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select the range \\fmtLoc{R5:R24}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Styles $ \\Rightarrow $ Conditional Formatting $ \\Rightarrow $ Highlight Cells Rules} (see Figure \\ref{03:fig20}).\n\t\t\\item Select \\fmtButton{Equal To}\n\t\t\\item Enter \\fmtTyping{F} in the \\textit{Format cells that are EQUAL TO} text box, so those cells are highlighted with \\fmtButton{Light Red Fill with Dark Red Text} (see Figure \\ref{03:fig21}).\n\t\t\\item Click \\fmtButton{OK}.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig20}\n\t\\caption{Conditional Formatting Equal To}\n\t\\label{03:fig20}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig21}\n\t\\caption{Conditional Formatting Equal To Dialog Box}\n\t\\label{03:fig21}\n\\end{figure}\n\nNow, highlight students who are passing the class. This time use the Pass/Fail text in the Pass/Fail column. If the text for a student is \\textit{Pass} the cell should be formatted with a yellow fill and dark yellow text.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select the range \\fmtLoc{Q5:Q24}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Styles $ \\Rightarrow $ Conditional Formatting $ \\Rightarrow $ Highlight Cells Rules} (see Figure \\ref{03:fig20}).\n\t\t\\item Select \\fmtButton{Equal To}\n\t\t\\item Enter \\fmtTyping{Pass} in the \\textit{Format cells that are EQUAL TO} text box, so those cells are highlighted with \\fmtButton{Yellow Fill with Dark Yellow Text}.\n\t\t\\item Click \\fmtButton{OK}.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\nThe default styles are a simple way to make specified data stand out, but any cell formatting can be set. When using custom cell formatting, it is probably a good idea to include other styling in addition to color. Remember that spreadsheets are often printed in black and white so conditional formatting that relies only on color would be lost. Next, use conditional formatting to display any \\textit{Percentages} that are less than $ 60\\% $ with red text formatted in bold and italic.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select the range \\fmtLoc{P5:P24}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Styles $ \\Rightarrow $ Conditional Formatting $ \\Rightarrow $ Highlight Cells Rules} (see Figure \\ref{03:fig20}).\n\t\t\\item Select \\fmtButton{Less Than}\n\t\t\\item Fill out the \\textit{Less Than} dialog box so that cells that are less than $ 0.6 $ (that is 60\\%) will have conditional formatting. Instead of using the default red text on a light red fill, press the down arrow at the end of the \\textit{with} box and select \\fmtButton{Custom Format}.\n\t\t\\item On the \\textit{Font} tab of the \\textit{Format Cells} dialog box, in the \\textit{Font style} box, select \\fmtButton{Bold Italic}. In the \\textit{Color} box, select \\fmtButton{Red} (see Figure \\ref{03:fig22}).\n\t\t\\item Press \\fmtButton{OK}. 
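Conceptually, each \textit{Highlight Cells} rule behaves like an \fmtButton{IF} test applied to every cell in the selection: the rule just created formats a cell in \fmtLoc{P5:P24} exactly when a comparison like \fmtTyping{P5<0.6} would be TRUE for that cell's value. (This is a sketch of the logic only; nothing needs to be typed into the worksheet.)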
Then press \\fmtButton{OK} again.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig22}\n\t\\caption{Conditional Formatting Custom Format Cells Dialog box}\n\t\\label{03:fig22}\n\\end{figure}\n\nConditional Formatting is valuable since it reflects the current data and it changes whenever the data changes. To test this, delete DeShea's final exam score. (Click $ N5 $ then press \\fmtKeystroke{Delete} on the keyboard.) Suddenly, DeShae is failing the course and the Conditional Formatting reflects that. Press \\fmtKeystroke{CTRL} + \\fmtKeystroke{Z} (Undo). The test score reappears, and the Conditional formatting reflects that as well.\n\n\\subsection{Modifying Conditional Formatting}\n\nWhat if there is a mistake with the Conditional Formatting or it needs to be deleted altogether? Use the \\textit{Conditional Formatting Manage Rules} tool. The following steps remove the conditional formatting rule that formats the \\textit{Pass} text with yellow and modify the minimum passing percentage.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Styles $ \\Rightarrow $ Conditional Formatting $ \\Rightarrow $ Manage Rules}. \n\t\t\\item Select \\fmtButton{This Worksheet} for \\textit{Show formatting rules for} (see Figure \\ref{03:fig23}).\n\t\t\\item Since there is no need to highlight the students who are passing the class, click the second rule in the \\textit{Rules Manager} (``Cell Value = 'Pass''') and press the \\fmtButton{Delete Rule} button.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig23}\n\t\\caption{Conditional Formatting Manage Rules}\n\t\\label{03:fig23}\n\\end{figure}\n\nIn a previous exercise (the \\textit{IF} function), it was decided that students were failing if they got a percentage score of less than $ 70\\% $, so the \\textit{Conditional Formatting} rule in the \\textit{Percentage} column needs to be changed to match that value.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click the first rule, it reads \\textit{Cell Value $ <0.6 $}.\n\t\t\\item Click the \\fmtButton{Edit Rule} button and change the $ 0.6 $ to $ 0.7 $ (see Figure \\ref{03:fig24}).\n\t\t\\item Click \\fmtButton{OK} (or \\fmtButton{Apply}) two times.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig24}\n\t\\caption{Conditional Formatting Edit Formatting Rule Dialog box}\n\t\\label{03:fig24}\n\\end{figure}\n\nDouble check that the completed workbook matches Figure \\ref{03:fig25}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig25}\n\t\\caption{Completed Ch3-grade book}\n\t\\label{03:fig25}\n\\end{figure}\n\n\\subsection{Setting the Print Area}\n\nBefore this workbook is finished, it needs to be prepared for printing. The first thing to do is set the \\textit{Print Area} so that the table of \\textit{Letter Grades} in $ A27 $:$ B32 $ does not print.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select \\fmtLoc{A1:R25}. 
This is the only part of the worksheet to be printed.\n\t\t\\item Click \\fmtButton{Page Layout $ \\Rightarrow $ Page Setup $ \\Rightarrow $ Print Area $ \\Rightarrow $ Set Print Area}.\n\t\\end{enumerate}\n\\end{enumbox}\n\nNext, preview the worksheet in \\textit{Print Preview} to check that the print area setting worked and to make sure it is printing on one page.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click \\fmtButton{File $ \\Rightarrow $ Print}.\n\t\t\\item Set the orientation to \\fmtButton{Landscape}.\n\t\t\\item Change the scaling so that the entire worksheet prints on one page.\n\t\t\\item Close the print preview by clicking the arrow at the top left corner of the preview screen.\n\t\t\\item Save the \\fmtWorksheet{CH3-Grade Book} workbook.\n\t\t\\item Compare the worksheet with the self-check answer key (\\fmtWorksheet{CH3-Grade Book Solution}) and then close and submit the \\fmtWorksheet{CH3-Grade Book} workbook as directed by the instructor.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\section{Preparing to Print}\n\n\\begin{center}\n\t\\begin{objbox}{Learning Objectives}\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Locate and fix formatting consistency errors.\n\t\t\t\\item Apply new formatting techniques.\n\t\t\t\\item Use Print Titles to repeat rows and columns on each page of a multiple page worksheet.\n\t\t\t\\item Control where page breaks occur in a multiple page worksheet.\n\t\t\t\n\t\t\\end{itemize}\n\t\\end{objbox}\n\\end{center}\n\nIn this section, a worksheet will be reviewed for formatting consistency and two new formatting techniques are presented. The worksheet used in this section currently prints on four pages, so new page setup options are used to control how these pages print. \n\n\\subsection{Reviewing Formatting for Consistency}\n\nThe workbook used for this exercise contains data about the national parks in the western United States. The workbook has been formatted but needs to be reviewed for consistency and prepared for printing. Figure \\ref{03:fig26} shows how the second page of the finished worksheet will appear in \\textit{Print Preview}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig26}\n\t\\caption{Completed National Parks worksheet}\n\t\\label{03:fig26}\n\\end{figure}\n\n\\subsection{Reviewing Formatting for Inconsistencies}\n\nThe first thing to do is review the worksheet for formatting inconsistencies.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Open the data file named \\fmtWorksheet{CH3-PTP Data} and use the \\textit{File/Save As} command to save it as \\fmtWorksheet{CH3-National Parks}.\n\t\t\\item Scroll through the worksheet and locate the following formatting errors.\n\t\n\t\t\\begin{itemize}\n\t\t\t\\item The formatting of the \\textit{Utah} label does not match the other states.\n\t\t\t\\item The \\textit{Year Established} values for \\textit{Hawaii} are not center aligned like the other years.\n\t\t\t\\item The cells for the \\textit{Nevada} data should have the same green fill color as the other alternating states.\n\t\t\t\\item The number of digits after the decimal place for the \\textit{Size} values is inconsistent. Also, these values should be formatted with \\textit{Comma} style to make them easier to read.\n\t\t\\end{itemize}\n\t\n\t\t\\item Complete the following steps to fix these errors.\n\t\n\t\t\\begin{itemize}\n\t\t\t\\item Select \\fmtLoc{A34:A38}. 
\n\t\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Alignment $ \\Rightarrow $ Merge \\& Center}.\n\t\t\t\\item Change the font size to $ 16 $ and apply Bold format.\n\t\t\t\\item Select \\fmtLoc{C28:C29}.\n\t\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Alignment $ \\Rightarrow $ Center}.\n\t\t\t\\item Select \\fmtLoc{A31:E31}.\n\t\t\t\\item Apply a fill of \\textit{Green, Accent $ 6 $, Lighter $ 60\\% $}.\n\t\t\t\\item Select \\fmtLoc{E4:E43}. \n\t\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Comma}.\n\t\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until one digit appears after the decimal place for all values.\n\t\t\\end{itemize}\n\t\t\n\t\t\\item While these formatting errors are being corrected all typos should also be corrected.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\subsection{Fine-Tuning Formatting}\n\nNow that the formatting inconsistencies have been corrected, apply additional formatting techniques to make the worksheet look better. Start by vertically aligning the names of the states within the cells.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select \\fmtLoc{A4:A43} (the cells with the state labels).\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Alignment $ \\Rightarrow $ Middle Align} (see Figure \\ref{03:fig27}). Notice that the names of the states are now centered between the top and bottom borders of the cells.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig27}\n\t\\caption{Alignment Group}\n\t\\label{03:fig27}\n\\end{figure}\n\nThe next formatting correction is to change the label in $ E3 $ from \\textit{Size (km2)} to \\textit{Size (km\\textsuperscript{2})} with the $ 2 $ after \\textit{km} formatted as a superscript.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Double-click on cell \\fmtLoc{E3} to enter \\textit{Edit} mode\n\t\t\\item Select just the $ 2 $ (be careful not to select anything else).\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Dialog Box Launcher}. \t\n\t\t\\item In the Effects section of the \\textit{Format Cells} dialog box, check the box for \\fmtButton{Superscript} (see Figure \\ref{03:fig28}). \n\t\t\\item Click \\fmtButton{OK}.\n\t\t\\item Save the \\fmtWorksheet{CH3-National Parks} file.\n\t\\end{enumerate}\n\\end{enumbox}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig28}\n\t\\caption{Font Tab in Format Cells Dialog Box}\n\t\\label{03:fig28}\n\\end{figure}\n\n\\subsection{Repeating Column (And Row) Labels}\n\nNow that the cell and text formatting are corrected, review the worksheet in \\textit{Print Preview}. Notice that the worksheet is printing on multiple pages and it is not possible to know what each column of data represents on some of the pages.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item With the \\fmtWorksheet{CH3-National Parks} file still open, click \\fmtButton{File $ \\Rightarrow $ Print}.\n\t\t\\item Click through each of the pages using the page number identifier at the bottom of the preview. The worksheet is currently printing on four pages, with the \\textit{City} and \\textit{Sizes} columns printing on separate pages from the rest of the data.\n\t\t\\item Using the \\fmtButton{Orientation} button on the left side of the preview, change the orientation to \\textit{Landscape} to fit all the columns on one page. 
Unfortunately, the second and third pages have no column labels to identify the information in each column.
		\item Exit \textit{Print Preview} by clicking the arrow button at the top left corner of the preview.
		\item Click \fmtButton{Page Layout $ \Rightarrow $ Page Setup $ \Rightarrow $ Print Titles}. The dialog box shown in Figure \ref{03:fig29} should appear.

		\begin{figure}[H]
			\centering
			\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig29}
			\caption{Print Titles}
			\label{03:fig29}
		\end{figure}

		\item Click the \fmtButton{Collapse Dialog} button next to the \textbf{Rows to repeat at top:} text box in the \textit{Page Setup} dialog box.
		\item In the worksheet, select \fmtLoc{Row $ 1 $} through \fmtLoc{Row $ 3 $}.
		\item Press the \fmtKeystroke{Enter} key. This returns the \textit{Page Setup} dialog box to its expanded form. Notice that the \textit{Rows to repeat at top} argument is now defined as $ \$1 $:$ \$3 $.
		\item Click \fmtButton{OK}.
	\end{enumerate}
\end{enumbox}

The worksheet does not change in Normal view, so return to \textit{Print Preview}. While in \textit{Print Preview}, notice that the pages are breaking in the middle of the information for a single park, but that will be corrected next.

\begin{enumbox}
	\begin{enumerate}
		\item Click \fmtButton{File $ \Rightarrow $ Print}. Notice that the first three rows are now repeated at the top of each page.
		\item Exit \textit{Print Preview} by clicking the arrow button at the top left corner of the preview.
	\end{enumerate}
\end{enumbox}

\begin{center}
	\begin{sklbox}{Skill Refresher}
		\textbf{Creating Print Titles}
		\\
		\begin{itemize}
			\setlength{\itemsep}{0pt}
			\setlength{\parskip}{0pt}
			\setlength{\parsep}{0pt}

			\item Open the \textit{Page Setup} dialog box and click the \textit{Sheet} tab.
			\item Click in the \textit{Rows to repeat at top:} box or the \textit{Columns to repeat at left:} box.
			\item Click in the worksheet and select the row(s) or column(s) to be repeated on each page.

		\end{itemize}
	\end{sklbox}
\end{center}

\subsection{Inserting Page Breaks}

Notice that the data for California is split between the first and second pages. Since it is desirable to keep all the data for each state on the same page, the page breaks need to be adjusted. Start by inserting a page break before the California data to force it to start on the second page, then move the page break for the third page if needed. To make these changes, work in \textit{Page Break Preview}.

\begin{enumbox}
	\begin{enumerate}
		\item Click \fmtButton{View $ \Rightarrow $ Workbook Views $ \Rightarrow $ Page Break Preview}. The screen should be like Figure \ref{03:fig30}.
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig30}
	\caption{Page Break Preview}
	\label{03:fig30}
\end{figure}

In \textit{Page Break Preview}, automatic page breaks are displayed as dotted blue lines. Notice the dotted blue lines after \textit{Row} $ 20 $ and \textit{Row} $ 37 $, which indicate where Excel will start a new page.
For this worksheet, the first page should break at the \\textit{California} data, so insert a manual page break there.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Select cell \\fmtLoc{A15}. When inserting a page break, select the cell \\textit{below} where the page break should appear.\n\t\t\\item Click \\fmtButton{Page Layout $ \\Rightarrow $ Page Setup $ \\Rightarrow $ Breaks $ \\Rightarrow $ Insert Page Break} (see Figure \\ref{03:fig31}).\n\t\t\\item There is now a solid blue line after \\textit{Row} $ 14 $, which indicates a manual page break was inserted.\n\t\t\\item Click \\fmtButton{File $ \\Rightarrow $ Print}. Notice that the California data now starts on the second page.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=\\maxwidth{.95\\linewidth}]{gfx/ch03_fig31}\n\t\\caption{Breaks Button on Page Layout tab}\n\t\\label{03:fig31}\n\\end{figure}\n\nAfter looking at each page in \\textit{Print Preview} it was decided that the third page should start with \\textit{Montana}. To make this change, move the automatic page break that appears after \\textit{Nevada}.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Exit \\textit{Print Preview} by clicking the arrow button at the top left corner of the preview.\n\t\t\\item Switch back to \\textit{Page Break Preview} if needed.\n\t\t\\item Locate the dotted blue line (automatic page break) after \\fmtLoc{Row 31}.\n\t\t\\item Put the pointer over the dotted blue line, and it will switch to a vertical double-headed arrow. Click on the dotted blue line and drag it above \\fmtLoc{Row 30} (Montana).\n\t\t\\item The line will now be a solid blue line, indicating a manual page break.\n\t\t\\item Click \\fmtButton{File $ \\Rightarrow $ Print}. The Montana row now appears at the top of the third page.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\nWhile evaluating the pages in \\textit{Print Preview} it appears that there is too much white space at the bottom of the pages. To fix this, center the contents vertically on the pages.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click the \\fmtButton{Page Setup} link at the bottom of the \\textit{Settings} section of \\textit{Backstage View} to open the \\textit{Page Setup} dialog box.\n\t\t\\item Click on the \\fmtButton{Margins} tab.\n\t\t\\item In the \\textit{Center on page} section, check the box for \\fmtButton{Vertically} then click \\fmtButton{OK}.\n\t\t\\item Review each page in \\textit{Print Preview} to see the changes. \n\t\t\\item Exit \\textit{Print Preview} by clicking the arrow button at the top left corner of the preview.\n\t\\end{enumerate}\n\\end{enumbox}\n\t\n\\subsection{Creating a Header and Footer Using Page Layout View}\n\nNow that the worksheet is printing on three pages, with page breaks in appropriate places, it is time to add a header with the current date and filename. A footer will also be added with the page number and the total number of pages that will appear as \\textit{Page 1 of 3}. \n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Click \\fmtButton{View $ \\Rightarrow $ Workbook Views $ \\Rightarrow $ Page Layout}. \n\t\t\\item The white space at the top of the worksheet should say \\textit{Add header}. \n\t\t\\item Place the mouse pointer over the left section of the \\textit{Header} and click to activate that section.\n\t\t\\item Click \\fmtButton{Header \\& Footer Tools Design $ \\Rightarrow $ Header \\& Footer Elements $ \\Rightarrow $ Current Date} (see Figure \\ref{03:fig32}). 
Inserting the date this way will insert a field that will update every time the workbook is opened. (\fmtNewExcel{Excel 365} This tab is called \fmtButton{Header \& Footer}.)
		\item Place the mouse pointer over the right section of the \textit{Header} and click to activate that section.
		\item Click \fmtButton{Header \& Footer Tools Design $ \Rightarrow $ Header \& Footer Elements $ \Rightarrow $ Filename} (see Figure \ref{03:fig32}). Inserting the filename this way will insert a field that will update if the filename is changed.
		\item Click \fmtButton{Header \& Footer Tools Design $ \Rightarrow $ Navigation $ \Rightarrow $ Go to Footer}.
		\item In the center section of the footer, type the word \fmtTyping{Page} with a space after it.
		\item Click \fmtButton{Header \& Footer Tools Design $ \Rightarrow $ Header \& Footer Elements $ \Rightarrow $ Page Number} (see Figure \ref{03:fig32}), then type a space after the \fmtTyping{\&[Page]} code that appears.
		\item Type the word \fmtTyping{of} with a space after it.
		\item Click \fmtButton{Header \& Footer Tools Design $ \Rightarrow $ Header \& Footer Elements $ \Rightarrow $ Number of Pages} (see Figure \ref{03:fig32}). The footer should match Figure \ref{03:fig33}.
		\item Click anywhere on the worksheet to close the \textit{Footer} editing.
		\item Click \fmtButton{File $ \Rightarrow $ Print}. Check that the date and file name are in the header and the page numbers in the footer are correct.
		\item Exit \textit{Print Preview} by clicking the arrow button at the top left corner of the preview.
		\item Save the \fmtWorksheet{CH3-National Parks} workbook.
		\item Compare the worksheet with the self-check answer key (\fmtWorksheet{CH3-National Parks Solution}) and then close and submit the \fmtWorksheet{CH3-National Parks} workbook as directed by the instructor.
	\end{enumerate}
\end{enumbox}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig32}
	\caption{Header \& Footer Elements buttons}
	\label{03:fig32}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[width=\maxwidth{.95\linewidth}]{gfx/ch03_fig33}
	\caption{Completed Footer}
	\label{03:fig33}
\end{figure}
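Assembled, the center footer section now holds the text \fmtTyping{Page \&[Page] of \&[Pages]} (a sketch of the stored codes; the exact spacing depends on what was typed). Excel substitutes the two codes when each page is rendered, so on the first page of a three-page worksheet the footer prints as \textit{Page 1 of 3}.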
This will create \\textit{Page $ 1 $ of $ 4 $}.\n\t\t\t\n\t\t\\end{itemize}\n\t\\end{sklbox}\n\\end{center}\n\n\\begin{center}\n\t\\begin{tkwbox}{Key Take-Aways}\n\t\t\\textbf{Preparing to Print}\n\t\t\\\\\n\t\t\\begin{itemize}\n\t\t\t\\setlength{\\itemsep}{0pt}\n\t\t\t\\setlength{\\parskip}{0pt}\n\t\t\t\\setlength{\\parsep}{0pt}\n\n\t\t\t\\item Always check the formatting of worksheets for consistency.\n\t\t\t\\item If a worksheet is printing on multiple pages, use \\textit{Print Titles} to repeat rows at the top and/or columns at the left of every page to make it easier to interpret the data.\n\t\t\t\\item Insert manual page breaks as needed in \\textit{Page Break Preview} to control where a new page begins.\n\t\t\t\\item Multiple page worksheets should include the page number in either the header or footer. Be sure to insert the Page Number element so that the correct page number will display on each page of the worksheet.\n\t\t\t\n\t\t\\end{itemize}\n\t\\end{tkwbox}\n\\end{center}\n\n\\section{Chapter Practice}\n\n\\subsection{Household Budget}\n\nElijah and Kelly Williams are a recently married couple living in Portland, Oregon. Elijah works part time and attends the local community college. Kelly works as a marketing manager at a clothing company in North Portland. They are trying to decide if they can afford to move to a better apartment, one that is closer to work and school. They want to use Excel to examine their household budget. They have started their budget spreadsheet, but they need help with it.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t%fileopen PR3-Data\n\t%filesave PR3-Williams\n\t\t\\item Open the file named \\fmtWorksheet{PR3-Data} and then save it as \\fmtWorksheet{PR3-Williams}.\n\t\t\\item Insert two new rows at the top of the worksheet.\n\t\t\\item Enter the following text in the indicated cells.\n\t\n\t\\begin{itemize}\n\t\t\\item \\fmtLoc{A2}: \\fmtTyping{Category}\n\t\t\\item \\fmtLoc{B2}: \\fmtTyping{Item}\n\t\t\\item \\fmtLoc{C2}: \\fmtTyping{January}\n\t\t\\item \\fmtLoc{O2}: \\fmtTyping{Yearly Total} (adjust column width as needed to fit this text)\n\t\\end{itemize}\n\t\n\t\t\\item Starting in \\fmtLoc{C2}, use the \\fmtButton{Auto Fill Handle} to fill in the months \\textit{February} through \\textit{December} in cells \\fmtLoc{D2:N2}. Adjust the widths of \\fmtLoc{Column C} to \\fmtLoc{Column N} to $ 10 $.\n\t\t\\item Select \\fmtLoc{A2:O2}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Bold}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Alignment $ \\Rightarrow $ Center}. \n\t\t\\item Click \\fmtLoc{A1} and enter \\fmtTyping{Williams Family Budget}. \n\t\t\\item Select \\fmtLoc{A1:O1}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Alignment $ \\Rightarrow $ Merge \\& Center}.\n\t\t\\item Select \\fmtLoc{A1:O1}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Bold}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ 22 point}. 
\n\t\n\t\t\\item Fill the cells for fixed expenses.\n\t\t\\begin{enumerate}\n\t\t\t\\item Copy \\fmtLoc{C3:C4} and paste to \\fmtLoc{D3:N4}.\n\t\t\t\\item Copy \\fmtLoc{C8:C9} and paste to \\fmtLoc{D8:N9}.\n\t\t\t\\item Copy \\fmtLoc{C25} and paste to \\fmtLoc{D25:N25}.\n\t\t\t\\item Copy \\fmtLoc{C27} and paste to \\fmtLoc{D27:N27}.\n\t\t\t\\item Copy \\fmtLoc{C41} and paste to \\fmtLoc{D41:N41}.\n\t\t\\end{enumerate}\t\n\t\t\n\t\t\\item Select \\fmtLoc{O3:O44}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. \n\t\t\\item Delete the formulas from \\fmtLoc{O7}, \\fmtLoc{O17}, \\fmtLoc{O24}, \\fmtLoc{O32}, and \\fmtLoc{O38}.\n\t\n\t\t% Calculated totals for each section\n\t\t\\item Select \\fmtLoc{C6}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C3:C5} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. This will calculate the \\textit{Total Income} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C6} to \\fmtLoc{D6:O6}.\n\t\n\t\t\\item Select \\fmtLoc{C16}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C8:C15} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. This will calculate the \\textit{Total Home Expenses} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C16} to \\fmtLoc{D16:O16}.\n\t\t\n\t\t\\item Select \\fmtLoc{C23}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C18:C22} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. This will calculate the \\textit{Total Daily Living Expenses} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C23} to \\fmtLoc{D23:O23}.\n\t\t\n\t\t\\item Select \\fmtLoc{C31}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C25:C30} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. This will calculate the \\textit{Total Transportation Expenses} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C31} to \\fmtLoc{D31:O31}.\n\t\n\t\t\\item Select \\fmtLoc{C37}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C33:C36} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. This will calculate the \\textit{Total Entertainment Expenses} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C37} to \\fmtLoc{D37:O37}.\n\t\n\t\t\\item Select \\fmtLoc{C45}.\n\t\t\\item Click \\fmtButton{Formulas $ \\Rightarrow $ Function Library $ \\Rightarrow $ AutoSum}. Make certain that Excel selects \\fmtLoc{C39:C44} for the \\textit{AutoSum} function then press \\fmtKeystroke{Enter}. 
This will calculate the \\textit{Total Personal Expenses} for January.\n\t\t\\item Use the \\fmtButton{Auto Fill Handle} to copy \\fmtLoc{C45} to \\fmtLoc{D45:O45}.\n\t\t\n\t\t% Format for Total rows\n\t\t\\item Select \\fmtLoc{C3:O3}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\t\n\t\t\\item Select \\fmtLoc{C16:O16}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\n\t\t\\item Select \\fmtLoc{C23:O23}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\n\t\t\\item Select \\fmtLoc{C31:O31}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\n\t\t\\item Select \\fmtLoc{C37:O37}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\n\t\t\\item Select \\fmtLoc{C45:O45}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Accounting}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Font $ \\Rightarrow $ Top Border} \n\t\n\t\t% The comma format for number cells\n\t\t\\item Select \\fmtLoc{C4:O5}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Comma}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\n\t\t\\item Select \\fmtLoc{C8:O15}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Comma}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\n\t\t\\item Select \\fmtLoc{C18:O22}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Comma}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\n\t\t\\item Select \\fmtLoc{C25:O30}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Comma}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ Decrease Decimal} until there are no decimal places displayed.\n\t\n\t\t\\item Select \\fmtLoc{C33:O36}.\n\t\t\\item Click \\fmtButton{Home $ \\Rightarrow $ Number $ \\Rightarrow $ 
Comma}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Decrease Decimal} until there are no decimal places displayed.

		\item Select \fmtLoc{C39:O44}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Comma}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Decrease Decimal} until there are no decimal places displayed.

		\item Click \fmtLoc{A47} and enter \fmtTyping{Total Expenses}.
		\item Click \fmtLoc{C47} and enter \fmtTyping{=SUM(C16,C23,C31,C37,C45)}.
		\item Copy \fmtLoc{C47} to \fmtLoc{D47:O47}.

		\item Click \fmtLoc{A49}.
		\item Enter \fmtTyping{NET INCOME}.
		\item Click \fmtButton{Home $ \Rightarrow $ Font $ \Rightarrow $ Bold}.
		\item Click \fmtButton{Home $ \Rightarrow $ Alignment $ \Rightarrow $ Increase Indent}.

		\item Click \fmtLoc{C49}.
		\item Enter \fmtTyping{=C6-C47}.
		\item Copy \fmtLoc{C49} to \fmtLoc{D49:O49}.

		\item Select \fmtLoc{C47:O47}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Accounting}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Decrease Decimal} until there are no decimal places displayed.
		\item Click \fmtButton{Home $ \Rightarrow $ Font $ \Rightarrow $ Bold}.

		\item Select \fmtLoc{C49:O49}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Accounting}.
		\item Click \fmtButton{Home $ \Rightarrow $ Number $ \Rightarrow $ Decrease Decimal} until there are no decimal places displayed.
		\item Click \fmtButton{Home $ \Rightarrow $ Font $ \Rightarrow $ Bold}.
		\item Click \fmtButton{Home $ \Rightarrow $ Font $ \Rightarrow $ Top and Bottom Border}.

		\item Select \fmtLoc{C49:N49}.
		\item Click \fmtButton{Home $ \Rightarrow $ Styles $ \Rightarrow $ Conditional Formatting $ \Rightarrow $ Data Bars $ \Rightarrow $ Gradient Fill Blue}.

		\item Click \fmtLoc{B50}.
		\item Enter \fmtTyping{New Apartment?}.
		\item Click \fmtLoc{C50}.
		\item Enter an \fmtButton{IF} statement that displays the word \textit{No} if the amount in \fmtLoc{C49} is less than or equal to zero and \textit{Maybe} if the amount is greater than zero. Hint: remember that the words ``No'' and ``Maybe'' must be enclosed in quotes.
		\item Copy \fmtLoc{C50} to \fmtLoc{D50:N50}.

		\item Check to see if the \fmtButton{IF} statement worked correctly in \fmtLoc{Row 50}. If the cells say \textit{No} when the data bar in the cell above them is red and \textit{Maybe} when the data bar in the cell above them is blue, then the \fmtButton{IF} statement is correct.

		\item Review the worksheet in \textit{Print Preview}. Make any changes needed to make the worksheet print with landscape orientation on one page.
	%filesave PR3-Williams
		\item Save the \fmtWorksheet{PR3-Williams} workbook.
	%fileclose PR3-Williams
	%filesolution PR3-Williams Solution
		\item Compare the result with the self-check answer key (\fmtWorksheet{PR3-Williams Solution}) and then close and submit the \fmtWorksheet{PR3-Williams} workbook as directed by the instructor.
	\end{enumerate}
\end{enumbox}

\section{Scored Assessment}

\subsection{AstroCoffee Company}

Cynthia McHenry owns a coffee supply company named \textit{AstroCoffee}. She needs some help writing the formulas for the order form she uses to invoice customers.
Formulas are needed for all of the calculations on the form. Some of the more complex parts are determining if the customer will get a discount (based on the customer status) as well as the shipping charge (orders over \\$200 get free shipping). Use \\fmtButton{IF} functions for both of those calculations.\n\n\\begin{enumbox}\n\t\\begin{enumerate}\n\t\t\\item Open the \\fmtWorksheet{SC3-Data} workbook and save it as \\fmtWorksheet{SC3-AstroCoffee}.\n\t\t\\item Enter the following order information.\n\t\n\t\t\\begin{itemize}\n\t\t\t\\item Order \\#: 45676\n\t\t\t\\item Order Date: use a function that displays the current date\n\t\t\\end{itemize}\n\t\n\t\t\\item Enter the following Billing Information.\n\t\n\t\t\\begin{tabular}{l}\n\t\t\t\\hline\n\t\t\tEdwina Copeland\\\\\n\t\t\t4270 Heron Way Portland, OR 97225\\\\\n\t\t\t503-779-1873\\\\\n\t\t\tedwina.copeland@hmail.com\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\n\t\t\\item For the \\textit{Shipping Information}, create formulas using cell references to display the corresponding information from the \\textit{Billing Information} section. For example, the \\textit{Customer} cell will display the name of the customer found in \\fmtLoc{C11}.\n\t\t\\item In the range \\fmtLoc{B19:E22}, enter the following item orders:\n\t\t\n\t\t{\\small\n\t\t\t\\begin{longtable}{p{0.4in}p{2.10in}p{0.25in}p{0.5in}} %Max width: 4.25in\n\t\t\t\t\\textbf{Item \\#} & \\textbf{Description} & \\textbf{Qty} & \\textbf{Unit Price}\\endhead\n\t\t\t\t\\hline \\\\\n\t\t\t\tK56 & Dark Mocha K-Cups (12 pack) & 1 & 10.99\\\\\n\t\t\t\tG03 & Decaf Dark Roast \u2013 Ground (1 lb.) & 3 & 12.99\\\\\n\t\t\t\tB07 & Dark Roast \u2013 Whole Bean (1 lb.) & 2 & 13.99\\\\\n\t\t\t\tK52 & Chai Latte K-Cups (12 pack) & 3 & 12.99\\\\\n\t\t\t\t\\caption{AstroCoffee Orders}\n\t\t\t\t\\label{03:tab02}\n\t\t\t\\end{longtable}\n\t\t}\n\t\t\n\t\t\\item In cell \\fmtLoc{F19}, enter an \\fmtButton{IF} function that tests whether the order quantity in cell \\fmtLoc{D19} is greater than $ 0 $ (zero). If it is, return the value of the Qty (in \\fmtLoc{D19}) multiplied by the Unit Price (in \\fmtLoc{E19}); otherwise, return no text by entering ``''.\n\t\t\\item Copy/fill the formula in \\fmtLoc{F19} to \\fmtLoc{F20:F25}. \\textit{Hint:} be sure to copy the formula to all of the \\textit{Item Total} cells, even if they are blank, so that the worksheet is prepared for orders with more items in the future.\n\t\t\\item In cell \\fmtLoc{F26}, calculate the sum of all of the Item Total cells.\n\t\t\\item In cell \\fmtLoc{F27}, use an \\fmtButton{IF} function to calculate the discount amount for this order based on the customer's status (which is found in \\fmtLoc{F16}). If the customer's status is \\textit{Preferred}, the discount amount will be the \\textit{Order Subtotal} times the discount percentage found in cell \\fmtLoc{B29}; otherwise the discount amount will be $ 0 $ (zero). \\textit{Hint:} a formula is needed for the \\textit{Value if True} argument.\n\t\t\\item Calculate the Discounted Total for this order in cell \\fmtLoc{F28}. \\textit{Hint:} Use a simple subtraction formula.\n\t\t\\item In cell \\fmtLoc{F29}, use an \\fmtButton{IF} function to display the correct Shipping Charge, based on the amount of the \\textit{Discounted Total}. If the \\textit{Discounted Total} is greater than or equal to the \\textit{Free Shipping Minimum} found in cell \\fmtLoc{B28}, the Shipping Charge is $ 0 $ (zero); otherwise, the Shipping Charge is $ 5\\% $ of the Discounted Total. 
\\textit{Hint:} a formula is needed for the \\textit{Value if False} argument to calculate what $ 5\\% $ of the Discounted Total will be.\n\t\t\\item Calculate the \\textit{Invoice Total} in cell \\fmtLoc{F31}. \\textit{Hint:} This will be the total of the \\textit{Discounted Total} and the \\textit{Shipping Charge}.\n\t\t\\item Review the worksheet in \\textit{Print Preview}. Make any changes needed to make the worksheet print on one page.\n\t\t\\item Save and close the \\fmtWorksheet{SC3-AstroCoffee} workbook.\n\t\t\\item Submit the \\fmtWorksheet{SC3-AstroCoffee} workbook as directed by the instructor.\n\t\\end{enumerate}\n\\end{enumbox}\n", "meta": {"hexsha": "706d879547f265256f620092e6ec4b54067e0f45", "size": 69263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/03Formulas.tex", "max_stars_repo_name": "grself/cll_excel", "max_stars_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/03Formulas.tex", "max_issues_repo_name": "grself/cll_excel", "max_issues_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/03Formulas.tex", "max_forks_repo_name": "grself/cll_excel", "max_forks_repo_head_hexsha": "5ee0b8a3fa4c14a9c255343dcdc1109c1184e551", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.6231281198, "max_line_length": 618, "alphanum_fraction": 0.7420412053, "num_tokens": 19659, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.561883121379726}}
{"text": "The characteristic analysis carried out above is now applied to specific solid mechanics problems.\nBoth linear and non-linear problems, whose solutions involve several types of waves, are considered.\nAs we shall see, the different characteristic structures involved within linear elastic, elastoplastic and hyperelastic solids require the use of different techniques in order to develop exact solutions. First a particular type of IVP, of particular interest in this manuscript, is introduced.\n%% Then by specializing the results to problems involving a linear elastic bar and a hyperelastic Saint-Venant-Kirchhoff medium, we shall see that different types of waves may propagate within solids.\n\\subsection{The Riemann problem}\nA Riemann problem is a Cauchy problem with piecewise constant initial data. In particular, the Riemann problem based on the conservative form \\eqref{eq:general_conservative_HE} for hyperelastic solids, in the arbitrary direction $\\vect{N}=N_\\alpha \\vect{E}_\\alpha$, takes the form:\n\\begin{equation}\n  \\label{eq:Riemann_problem_HE}\n  \\begin{aligned}\n    &\\Ucb_t + \\drond{\\Fcb\\cdot \\vect{N}}{X_N} = \\Scb, \\\\\n    &\\left\\lbrace \n      \\begin{aligned}\n        & \\Ucb(X_N,t=0) = \\Ucb^L \\quad \\text{if } X_N< 0\\\\\n        & \\Ucb(X_N,t=0) = \\Ucb^R \\quad \\text{if } X_N> 0\n      \\end{aligned}\n    \\right.\n  \\end{aligned}\n\\end{equation}\nAnalogously, for small strains one writes the Riemann problem corresponding to conservative forms \\eqref{eq:general_conservative} or \\eqref{eq:general_conservative_EP} in the direction $\\vect{n}=n_i \\vect{e}_i$:\n\\begin{equation}\n  \\label{eq:Riemann_problem_HPP}\n  \\begin{aligned}\n    &\\Qcb_t + \\drond{\\Fcb\\cdot \\vect{n}}{x_n} = \\Scb, \\\\\n    &\\left\\lbrace \n      \\begin{aligned}\n        & \\Qcb(x_n,t=0) = \\Qcb^L \\quad \\text{if } x_n< 0\\\\\n        & \\Qcb(x_n,t=0) = \\Qcb^R \\quad \\text{if } x_n> 0\n      \\end{aligned}\n    \\right.\n  \\end{aligned}\n\\end{equation}\nwhere $x_n=\\vect{x}\\cdot\\vect{n}$.\nProblems of the form \\eqref{eq:Riemann_problem_HE} or \\eqref{eq:Riemann_problem_HPP} are considered in the next section, in which exact solutions are recalled or derived.\n\n\\subsection{Linear elastodynamics problems}\n\\label{subsec:charac_Linear_problems}\nA homogeneous hyperbolic system of dimension $m$ is considered in a linear elastic solid so that a Riemann problem of the form \\eqref{eq:Riemann_problem_HPP} is written.% in the arbitrary direction $\\vect{n}$ reads:\n% \\begin{equation}\n%   \\label{eq:Linear_Riemann_problem}\n%   \\begin{aligned}\n%   &\\Qcb_t + \\drond{\\Fcb\\cdot \\vect{n}}{x_n} = \\vect{0}, \\\\\n%   &\\left\\lbrace \n%     \\begin{aligned}\n%       & \\Qcb(x_n,t=0) = \\Qcb^L \\quad \\text{if } x_n< 0\\\\\n%       & \\Qcb(x_n,t=0) = \\Qcb^R \\quad \\text{if } x_n> 0\n%     \\end{aligned}\n%     \\right.\n%   \\end{aligned}\n% \\end{equation}\n\n\\subsubsection*{Characteristic variables -- Waves solution}\nBy introducing a set of \\textit{characteristic variables} $\\Pcb=\\Rbsf^{-1}\\Qcb$ ($\\Rsf_{ij}=\\Rc^j_i$), the quasi-linear form of system \\eqref{eq:Riemann_problem_HPP} reads:\n\\begin{equation}\n  \\label{eq:RP_characteristic_variables}\n  \\begin{aligned}\n    &\\drond{\\Pc_i}{t} + c_i\\drond{\\Pc_i}{x} = 0 \\\\\n    &\\left\\lbrace \n      \\begin{aligned}\n        & \\Pc_i(x,t=0) = \\Pc_i^L \\quad \\text{if } x< 0\\\\\n        & \\Pc_i(x,t=0) = \\Pc_i^R \\quad \\text{if } x> 0\n      \\end{aligned}\n    \\right.\n  
\\end{aligned}\n\\end{equation}\nwith $\\Csf_{ij}=c_i\\delta_{ij}$ the matrix of eigenvalues, so that $\\Jsf_{ij} \\Rc^K_j = c_K\\Rc^K_i$. The solution of this problem is straightforward since it corresponds to a superposition of scalar linear advection equations, namely, the initial profile $\\Pc_i(x,t=0)$ simply propagates with speed $c_i$ as depicted in figure \\ref{fig:advection}.  \n\\begin{figure}[h!]\n  \\centering\n  \\subcaptionbox*{}[0.45\\linewidth]{\\input{chapter2/pgfFigures/advection_solution}}\n  \\subcaptionbox*{}[0.45\\linewidth]{\\input{chapter2/pgfFigures/advection_P}}\n  \\caption{Solution to the linear advection equation of the quantity $\\Pc_i$ with characteristic speed $c_i$.}\n  \\label{fig:advection}\n\\end{figure}\nThus, the solution $\\Pc_i(x,t)$ at a given point is given by tracing backward the characteristic of slope $c_i$ passing through this point to the $x$-axis, that is: $\\Pc_i(x,t)=\\Pc_i(x-c_it,0)$  \\cite[p.52]{Toro}. The vector $\\Qcb$ is then determined by inverting the relation:\n\\begin{equation}\n  \\label{eq:Q_expansion}\n  \\Qcb(x,t) = \\sum_{i=1}^m \\Rcb^i \\Pc_i(x-c_it,0) \\quad \\Rightarrow\n  \\left\\lbrace\n    \\begin{aligned}\n      & \\Qcb(x<0,0)=\\Qcb^L=\\sum_{i=1}^{m}\\Rcb^i \\Pc_i^L\\\\\n      & \\Qcb(x>0,0)=\\Qcb^R= \\sum_{i=1}^{m}\\Rcb^i \\Pc_i^R\n    \\end{aligned}\n    \\right.\n\\end{equation}\nEquation \\eqref{eq:Q_expansion} is an eigenvector expansion with coefficients $\\Pc_i^{R,L}$ from which we see that $\\Qcb$ is a linear superposition of $m$ waves, each having the shape $\\Rcb^i \\Pc_i(x,0)$.\nNoticing that for given values of $x$ and $t$, there exists one characteristic $I$ such that $x-c_i t >0$ for all $i\\leq I$, and $x-c_{i} t <0$ for all $i \\geq I+1$, equation \\eqref{eq:Q_expansion} can be rewritten \\cite[p.56]{Toro}:\n\\begin{equation}\n  \\label{eq:Q_expansion_sides}\n  \\Qcb = \\sum_{i=1}^I \\Rcb^i \\Pc_i^R + \\sum_{i=I+1}^m \\Rcb^i \\Pc_i^L\n\\end{equation}\nor, by introducing the expansions \\eqref{eq:Q_expansion} of the initial data into \\eqref{eq:Q_expansion_sides}:\n\\begin{align}\n  &\\Qcb = \\sum_{i=1}^m \\Rcb^i \\Pc_i^R - \\sum_{i=I+1}^m \\Rcb^i \\(\\Pc_i^R - \\Pc_i^L\\)= \\Qcb^R - \\sum_{i=I+1}^m \\Rcb^i \\(\\Pc_i^R - \\Pc_i^L\\) \\\\\n  &\\Qcb= \\sum_{i=1}^{m}\\Rcb^i \\Pc_i^L + \\sum_{i=1}^I \\Rcb^i \\(\\Pc_i^R - \\Pc_i^L\\)= \\Qcb^L + \\sum_{i=1}^I \\Rcb^i \\(\\Pc_i^R - \\Pc_i^L\\) \n\\end{align}\nThese equations are equivalent to jump conditions across multiple discontinuous waves:\n\\begin{align}\n  \\label{eq:jump_star_R}\n  &  \\Qcb-\\Qcb^R = -\\sum_{i=I+1}^{m} \\Rcb^i\\delta^i \\\\\n  \\label{eq:jump_star_L}\n  &  \\Qcb-\\Qcb^L = \\sum_{i=1}^{I} \\Rcb^i\\delta^i \n\\end{align}\nwhere $\\Qcb(x,t)$ is the state lying in the region of the ($x,t$) plane delimited by the characteristics $I$ and $I+1$, and $\\Rcb^i\\delta^i$ the jump carried by the $i$th wave.\nThe coefficients $\\delta^i=\\Pc_i^R - \\Pc_i^L$ are weighting coefficients involved in the \\textit{wave strengths} $\\Wcb^i=\\Rcb^i \\delta^i$, which can be computed from the expansions of the initial conditions by solving:\n\\begin{equation}\n  \\label{eq:delta_system}\n  \\Qcb^R-\\Qcb^L=\\sum_{i=1}^{m}\\Rcb^i \\delta^i=\\Rbsf \\vect{\\delta}\n\\end{equation}\n\nWe see that the solution of the Riemann problem \\eqref{eq:Riemann_problem_HPP} consists of discontinuous waves emanating from the origin of the $(x,t)$ plane.\nAcross such discontinuous waves, the following condition is satisfied \\cite{Toro}:\n\\begin{definition}\n  The \\textbf{Rankine-Hugoniot condition} is satisfied 
across a discontinuous wave of speed $s_i$, associated with the $i$th characteristic field, arising in a solution of the hyperbolic system $\\Qcb_t + \\Fcb(\\Qcb)_x=\\vect{0}$:\n\\begin{equation}\n  \\label{eq:rankine-hugoniot}\n  \\saut{ \\Fcb} = s_i \\saut{ \\Qcb}\n\\end{equation}\nwhere $\\saut{\\bullet}$ denotes the jump operator across the discontinuity.\nIn particular, the shock waves that will be encountered for non-linear problems in section \\ref{sec:SVK_solution} also satisfy the Rankine-Hugoniot condition.\n\\end{definition}\n\n\\subsubsection*{Solution of the elastic bar problem}\nThe above discussion is now specified to a one-dimensional elastic medium, $x\\in\\[-l,l\\]$, of density $\\rho$ undergoing one-dimensional stress and strain states within the infinitesimal framework: $\\tens{\\eps}=\\eps\\: \\vect{e}_1\\otimes \\vect{e}_1$ ; $\\tens{\\sigma}=\\sigma \\:\\vect{e}_1\\otimes \\vect{e}_1$. As a consequence, the bar hypothesis holds with $\\vect{v}=v \\vect{e}_1$. Neglecting body forces and introducing \\textit{Young's modulus} $E$ such that $\\sigma = E\\eps$, the Riemann problem takes the form \\eqref{eq:Riemann_problem_HPP} with conserved quantities and flux vector:\n\\begin{equation*}\n  \\Qcb = \\matrice{v \\\\ \\sigma} \\quad ; \\quad \\Fcb = \\matrice{-\\frac{1}{\\rho}\\sigma \\\\ -Ev}\n\\end{equation*}\nalong with Riemann-type data on the horizontal velocity (\\textit{i.e. }$v(x<0,0)=v_L\\:;\\:v(x>0,0)=v_R$) as initial conditions. In addition, the solid is assumed to be initially unstressed. The eigenvalues and the left and right eigenvectors of the corresponding Jacobian matrix are:\n\\begin{equation*}\n  c=\\pm\\sqrt{\\frac{E}{\\rho} }  \\quad ; \\quad \\Lcb^p=\\[\\rho c_p \\:,\\: -1\\] \\quad ; \\quad \\Rcb^p=\\matrice{1\\\\- \\rho c_p } \n\\end{equation*}\nThe characteristic structure of the solution, consisting of two elastic discontinuities emanating from the origin of the ($x,t$) plane, is depicted in figure \\ref{fig:elasticity_example}.\n\\begin{figure}[h]\n  \\centering\n  \\input{chapter2/pgfFigures/example_elasticity}\n  \\caption{Solution to Riemann problem  \\eqref{eq:Riemann_problem_HPP} for an elastic bar.}\n  \\label{fig:elasticity_example}\n\\end{figure}\nThe solution $\\Qcb^*$ lying in the region bounded by the two elastic waves is computed by means of equation \\eqref{eq:jump_star_L} after solving \\eqref{eq:delta_system} for the wave strength coefficients. 
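Equivalently, equation \\eqref{eq:jump_star_R} could be used; by construction, both expressions lead to the same intermediate state. 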
For a system of dimension $2$, writing $\\Qcb^R-\\Qcb^L=\\Delta \\Qcb$, these wave strengths read:\n%For a system of dimension $2$, the solution in terms of wave strengths coefficients of system \\eqref{eq:delta_system} is, by writing $\\Qcb^R-\\Qcb^L=\\Delta \\Qcb$:\n%Writing $\\Qcb^R-\\Qcb^L=\\Delta \\Qcb$, linear systems of dimension $2$ lead to wave strengths coefficients:\n\\begin{equation}\n  \\label{eq:wave_strengths}\n  \\vect{\\delta} = \\frac{1}{\\Rc^1_1\\Rc^2_2 -\\Rc_1^2\\Rc^1_2}\\matrice{\\Rc^2_2\\Delta \\Qc_1 - \\Rc^2_1\\Delta \\Qc_2 \\\\ \\Rc^1_1\\Delta \\Qc_2 - \\Rc_2^1\\Delta \\Qc_1}\n\\end{equation}\nand more specifically for a bar:\n\\begin{equation}\n  \\label{eq:wave_strengths_bar}\n  \\vect{\\delta} = \\frac{1}{2\\rho c}\\matrice{\\rho c \\Delta v + \\Delta \\sigma \\\\  \\rho c \\Delta v -\\Delta \\sigma }\n\\end{equation}\nHence, equation \\eqref{eq:jump_star_L} yields the solution:\n\\begin{equation}\n  \\label{eq:solution_charac_variables}\n  \\Qcb^* = \\Qcb^L +\\Rcb^1 \\delta^1 = \\matrice{\\frac{\\sigma_R - \\sigma_L}{2\\rho c} + \\frac{v_R+v_L}{2} \\\\ \\rho c\\frac{v_R - v_L}{2} + \\frac{\\sigma_R+\\sigma_L}{2}} \n\\end{equation}\n\n\n\\subsection{Elastic-plastic media in the geometrical linearized limit}\n\\label{subsec:elasto-plastic_problem}\nThe previous problem is now extended to elastoplastic media by considering the same bar made of a linear hardening material of tensile yield stress $\\sigma^y$. For such a solid domain, a Riemann problem of the form \\eqref{eq:Riemann_problem_HPP} can be written by means of the following conserved quantities and flux vectors \\cite{Thomas_EP}:\n\\begin{equation*}\n  \\Qcb = \\matrice{v \\\\ \\sigma} \\quad ; \\quad \\Fcb = \\matrice{-\\frac{1}{\\rho}\\sigma \\\\ -Hv}\n\\end{equation*}\nwhere $H=E$ for elastic loadings while $H=d\\sigma/d\\eps$ is the tangent modulus for elastic-plastic evolutions. In addition, Riemann-type data on the horizontal velocity (\\textit{i.e. }$v(x<0,0)=v_L\\:;\\:v(x>0,0)=v_R$) are used as initial conditions, so that plastic flow may occur.\n\n%We now generalize the previous discussion to elastoplastic solids within the small strains framework. A plane wave problem in an infinite elastoplastic medium with linear hardening is then considered. Such a plane wave can be due for instance to Riemann-type data on the horizontal velocity (\\textit{i.e. }$v(x<0,0)=v_L\\:;\\:v(x>0,0)=v_R$) so that plastic flow may occur.\n% The exact solution of such a linear problem being known \\cite{Wang} an exact Riemann solver, which however requires a particular procedure, may be used.\nThe discontinuity of $H$ across the plastic threshold prevents the direct derivation of the solution with the approach followed for linear elasticity. Indeed, two sets of characteristic speeds and associated eigenvectors must be considered, that is \\cite{Thomas_EP,Wang}:\n\\begin{equation*}\n  c=\\pm\\sqrt{\\frac{H}{\\rho} } \\quad ; \\quad \\Lcb^p=\\[\\rho c_p \\:,\\: -1\\] \\quad ; \\quad \\Rcb^p=\\matrice{1\\\\- \\rho c_p } \n\\end{equation*}\nso that waves are referred to as elastic or plastic waves. 
Whether a plastic wave appears hence depends on the tangent modulus and, subsequently, on the yield function.\nA predictor-corrector procedure must thus be followed by first solving an elastic Riemann problem whose resulting (trial) solution $\\Qcb$ is tested against the yield criterion on both sides $x<0$ and $x>0$.\nIn general, possibly different yield stresses, plastic strains and hardening parameters in the left and right regions lead to a yield criterion that may be violated or not, and hence to one, two or no plastic waves that must be added as a correction to the original problem.\nIf neither of the yield functions $f_L$ and $f_R$ indicates a violation of the criterion, the problem is elastic and the trial state is the solution.\nOtherwise, the plastic correction is performed by computing the stress in the regions of the ($x,t$) plane bounded by elastic and plastic waves ($\\tilde{\\Qcb}^{L,R}$ in figure \\ref{fig:EP_bar_solution}\\subref{subfig:ep_bar_charac_4waves}) so that the yield function satisfies $f_{L,R}=0$.\n\\begin{figure}[h!]\n  \\centering\n  \\subcaptionbox{Characteristic structure\\label{subfig:ep_bar_charac_4waves}}{\\input{chapter2/pgfFigures/elastoplastic_bar}}\n  \\subcaptionbox{Stress profile in the bar\\label{subfig:ep_bar_stress_4waves}}{\\input{chapter2/pgfFigures/elastoplastic_stress}}\n  % \\\\\\subcaptionbox{3-waves solution ($\\sigma_L^y>\\sigma_R^y$)\\label{subfig:ep_bar_charac_3waves}}{\\input{chapter2/pgfFigures/elastoplastic_bar_1plast}}\n  % \\subcaptionbox{Stress for 3-waves($\\sigma_L^y>\\sigma_R^y$)\\label{subfig:ep_bar_stress_3waves}}{\\input{chapter2/pgfFigures/elastoplastic_stress_1plast}}\n  \\caption{Example of a solution of a Riemann problem in a homogeneous elastoplastic bar with linear hardening and initial plastic strain $\\eps^p(x,0)= 0$.}\n  \\label{fig:EP_bar_solution}\n\\end{figure}\nThen, the velocity and elastic wave strengths in the yielding regions are given by solving:\n\\begin{align}\n  & \\tilde{\\Qcb}^R = \\Qcb^R - \\delta^1_E \\Rcb^1_E\\\\\n  & \\tilde{\\Qcb}^L = \\Qcb^L + \\delta^2_E \\Rcb^2_E\n\\end{align}\nFinally, a plastic Riemann solver is used to compute the solution of the problem $\\Qcb^*$ by successively solving the system \\eqref{eq:delta_system} for the plastic wave strengths, and either system \\eqref{eq:jump_star_R} or \\eqref{eq:jump_star_L}, that is:\n\\begin{equation}\n  \\label{eq:plastic_approx_RS}\n  \\vect{\\delta}_P = \\Rbsf^{-1}\\(\\tilde{\\Qcb}^R- \\tilde{\\Qcb}^L\\) \\quad \\Rightarrow \\quad\n  \\Qcb^* = \\left\\lbrace\n  \\begin{aligned}\n      &  \\tilde{\\Qcb}^R - \\delta_P^2 \\Rcb^2_P\\\\\n      &  \\tilde{\\Qcb}^L + \\delta_P^1 \\Rcb^1_P\n  \\end{aligned}\n  \\right.\n\\end{equation}\n\n Figure \\ref{fig:EP_bar_solution}\\subref{subfig:ep_bar_charac_4waves} shows the characteristic structure of the solution of the Riemann problem in the homogeneous medium considered here, involving two plastic waves, and figure \\ref{fig:EP_bar_solution}\\subref{subfig:ep_bar_stress_4waves} the corresponding stress field in the bar.\n\n\\begin{remark}\n  Note that the above solver also applies to plane wave problems, characterized by one-dimensional strain and multi-dimensional stress states, by considering different wave speeds.\n\\end{remark}\n\n\n\\subsection{Hyperelastic media: A Saint-Venant-Kirchhoff solution}\n\\label{sec:SVK_solution}\nThe approaches followed above are no longer possible for problems involving a non-linear Jacobian matrix. 
Indeed, writing the Riemann problem in terms of characteristic variables as in \\eqref{eq:RP_characteristic_variables} is only valid when the right eigenvectors are constant. \n%since right eigenvectors cannot be taken out of the derivatives.\nMoreover, as we shall see with an example, the characteristic structure of such problems can be more complex and depend on the initial data. \n%%% Picard problem\nConsider a hyperelastic medium made of a Saint-Venant-Kirchhoff material, infinite in directions $\\vect{E}_2$ and $\\vect{E}_3$, and semi-infinite in direction $\\vect{E}_1$ (\\textit{i.e. $X_1 \\in [0,+\\infty[$}) in the reference configuration. This medium suddenly undergoes a load at $(X_1=X=0,t=0)$ in direction $\\vect{E}_1$ so that the deformation gradient and the PK1 tensors are respectively of the form:\n\\begin{align}\n  \\label{eq:SVK_plane_wave}\n  &\\tens{F}=F\\vect{e}_1\\otimes\\vect{E}_1 + \\vect{e}_2\\otimes\\vect{E}_2 + \\vect{e}_3\\otimes\\vect{E}_3 \\\\\n  & \\tens{\\Pi}=\\Pi_{11}\\vect{e}_1\\otimes\\vect{E}_1 + \\Pi_{22}\\(\\vect{e}_2\\otimes\\vect{E}_2 + \\vect{e}_3\\otimes\\vect{E}_3 \\)\n\\end{align}\nwhich corresponds to a plane wave solution. We assume that $F(0,t)=\\bar{F}$ is given, leading to a \\textit{Picard problem} \\cite[p.20]{Wang} involving both initial and boundary conditions with neglected body forces:\n\\begin{equation}\n  \\label{eq:Picard_problem}\n  \\begin{aligned}\n  &\\Qcb_t + \\drond{\\Fcb\\cdot \\vect{N}}{X_N} = \\vect{0}, \\\\\n  &\\left\\lbrace \n    \\begin{aligned}\n      & \\Qcb(X_N,t=0) = \\Qcb^R \\quad \\text{if } X_N> 0 \\\\\n      & F(0,t) = \\bar{F} \n    \\end{aligned}\n    \\right.\n  \\end{aligned}\n\\end{equation}\nwith $\\vect{N}=\\vect{E}_1$ and:\n\\begin{equation*}\n \\Qcb = \\matrice{v \\\\ F} \\quad ; \\quad \\Fcb = \\matrice{-\\frac{1}{\\rho_0}\\Pi \\\\ -v}\n\\end{equation*}\nwhere $\\Pi=\\Pi_{11}$.\n%%%\n%Since the tangent modulus and the acoustic tensor of Saint-Venant-Kirchhoff model \\eqref{eq:SVK_tangent},\\eqref{eq:SVK_acoustic} depend on the deformation gradient, the quasi-linear form: $\\Qcb_t + \\drond{\\Fcb}{\\Qcb}\\drond{\\Qcb}{X}=\\vect{0}$ is more convenient. The Jacobian matrix is then:\nThe quasi-linear form is written by using the chain rule, $\\Qcb_t + \\drond{\\Fcb}{\\Qcb}\\drond{\\Qcb}{X}=\\vect{0}$, so that the Jacobian matrix reads:\n\\begin{equation}\n  \\label{eq:quasi_SVK}\n  \\Jbsf=\\drond{\\Fcb}{\\Qcb}=-\\matrice{0 & \\frac{H_{1111}}{\\rho_0} \\\\ 1 & 0}\n\\end{equation}\nThe tangent modulus of the SVK model \\eqref{eq:SVK_tangent} yields the following characteristic fields:\n\\begin{equation}\n  \\label{eq:SVK_charac_fields}\n  \\left\\lbrace\n    \\begin{aligned}\n      & c_1=- \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}\\(3F^2-1\\) }\\\\\n      &c_2= \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}\\(3F^2-1\\) }\n    \\end{aligned}\\right.\n \\quad ; \\quad \\Lcb^p=\\[1\\:,\\:- c_p \\] \\quad ; \\quad \\Rcb^p=\\matrice{- c_p \\\\1} \n\\end{equation}\n\n\\begin{remark}\n  \\label{rq:hyperbolicity_limit_SVK}\n  The non-linear flux function of the SVK model yields characteristic fields depending on the strain state and possibly complex celerities, leading to a loss of hyperbolicity of the problem for $F<\\sqrt{\\frac{1}{3}}$.\n\\end{remark}\n\nSuppose now that initial data are given so that $\\bar{F} > F_R$. 
The resulting characteristic speeds then satisfy $c_2(\\bar{F})>c_2(F_R)$ and the two families of characteristics collide in the right region of the ($x,t$) plane (figure \\ref{fig:Picard_problem}\\subref{subfig:2S}). On the other hand, $\\bar{F} < F_R$ yields characteristics moving away from each other in the right region according to $c_2(\\bar{F})<c_2(F_R)$ (figure \\ref{fig:Picard_problem}\\subref{subfig:2R}). These two situations respectively correspond to a shock and a simple wave. Note that this characteristic structure is similar to that resulting from the \\textit{dam-break problem} with \\textit{shallow water} equations; the following developments are hence very close to those of \\cite[Ch.13]{Leveque}.\n\\begin{figure}[h!]\n  \\centering\n  \\subcaptionbox{Right-going shock wave\\label{subfig:2S}}{\\input{chapter2/pgfFigures/picard_2_shock}}\n  \\subcaptionbox{Right-going simple wave\\label{subfig:2R}}{\\input{chapter2/pgfFigures/picard_2_rarefaction}}\n \\caption{Solutions of the Picard problem \\eqref{eq:Picard_problem} depending on initial and boundary data.}\n  \\label{fig:Picard_problem}\n\\end{figure}\n\n\\subsubsection*{Shock waves}\n% Applying the method of characteristics between the $x$-axis and an intersection point of two characteristic straight lines, one shows that a shock wave carry a jump discontinuity.\nA shock wave is a discontinuous wave satisfying the Rankine-Hugoniot condition \\eqref{eq:rankine-hugoniot}:\n\\begin{align}\n  \\label{eq:RH_velocity}\n  & -\\frac{1}{\\rho_0}\\(\\bar{\\Pi} - \\Pi_i \\) = s \\( \\bar{v} - v_i \\)\\\\\n  \\label{eq:RH_F}\n  & - \\( \\bar{v}-v_i\\)=s\\( \\bar{F} - F_i\\)\n\\end{align}\nwhere the shock speed $s$ is yet to be determined and $i\\in\\{L,R\\}$.\nIn what follows, $\\bar{F}$ is considered as an unknown so that a relation connecting $\\Qcb^i$ to a set of solutions $\\Qcb$ through a shock wave can be developed.\nSubstituting $s$ from equation \\eqref{eq:RH_F} into equation \\eqref{eq:RH_velocity}, where $\\Pi=\\frac{\\lambda+2\\mu}{2}\\(F^3-F\\)$, yields:\n\\begin{align}\n  \\label{eq:shock_speed}\n  & s=-\\frac{\\bar{v}-v_i}{\\bar{F} - F_i}\\\\\n  \\label{eq:v_jump}\n  & \\bar{v}-v_i= \\pm \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}(\\bar{F}-F_i)\\[ \\bar{F}^3-\\bar{F} - (F_i^3-F_i)\\]}\n\\end{align}\nIn addition to the Rankine-Hugoniot condition, one has to consider the \\textit{Lax entropy conditions}, stating that characteristic curves collide in a shock wave \\cite[p.268]{Leveque}:\n\\begin{equation}\n  \\label{eq:Lax_entropy}\n  c(\\bar{F})<s<c(F_i)\n\\end{equation}\nThe Lax entropy condition implies that the square root in equation \\eqref{eq:v_jump} is real, leading to two families of curves in the phase plane ($F,v$). When considering an infinitesimal jump (\\textit{i.e. $\\bar{F}=F_i\\pm\\epsilon$ with $\\epsilon \\rightarrow 0$}), each of these curves is expected to identify with one of the jump conditions derived for the linear case \\eqref{eq:jump_star_R} or \\eqref{eq:jump_star_L}.\n% One of those curve is expected to identify with the jump conditions derived for the linear case \\eqref{eq:jump_star_R} when considering an infinitesimal jump (\\textit{i.e. 
$\\bar{F}=F_R+\\epsilon$ with $\\epsilon \\rightarrow 0$}).\nThus, equation \\eqref{eq:v_jump} reads:\n\\begin{equation}\n  \\label{eq:linearization}\n  \\bar{v}-v_i= \\pm \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}\\epsilon\\[ (F_i+\\epsilon)^3-(F_i+\\epsilon) - (F_i^3-F_i)\\]}\n\\end{equation}\nwhere, with $\\epsilon \\rightarrow 0$:\n\\begin{equation*}\n  (F_i+\\epsilon)^3\\approx F_i^3(1+\\frac{3\\epsilon}{F_i})\n\\end{equation*}\nso that:\n\\begin{equation*}\n  \\matrice{\\bar{v} \\\\ \\bar{F}}=\\matrice{v_i \\\\ F_i} + \\epsilon \\matrice{\\pm \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}\\[ 3F_i^2-1\\]}\\\\1}\n\\end{equation*}\nThe minus sign yields equation \\eqref{eq:jump_star_R} associated with the right-going wave and therefore corresponds to a right-going shock wave.\nOn the other hand, the plus sign stands for equation \\eqref{eq:jump_star_L} and left-going shocks.\nFinally, the Rankine-Hugoniot condition across a left-going shock and across a right-going shock respectively leads to:\n\\begin{align}\n  \\label{eq:left-going_shock}\n  &\\bar{v}-v_L= \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}(\\bar{F}-F_L)\\[ \\bar{F}^3-\\bar{F} - (F_L^3-F_L)\\]} \\\\\n  \\label{eq:right-going_shock}\n  &\\bar{v}-v_R= -\\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}(\\bar{F}-F_R)\\[ \\bar{F}^3-\\bar{F} - (F_R^3-F_R)\\]}\n\\end{align}\n\n\\subsubsection*{Simple waves}\nIn order to study the evolution of fields within the region bounded by characteristics that move away from each other in figure \\ref{fig:Picard_problem}\\subref{subfig:2R}, let us write the left-going characteristic equation through it, with $\\Lcb^1=[1,-c_1]$:\n\\begin{equation}\n  \\label{eq:SVK_rarefaction}\n  dv -c_1(F)  dF = 0 \n\\end{equation}\nThe complete set of states $\\Qcb$ connected to $\\Qcb^R$ through a simple wave is obtained by integration of equation \\eqref{eq:SVK_rarefaction}. Note that this integration results in a smooth evolution of fields inside a simple wave even for discontinuous initial conditions, unlike shocks. Moreover, the vanishing right-hand side of the conservative form of the Picard problem \\eqref{eq:Picard_problem} yields a similarity solution. The particular case of a simple wave constant along each ray $\\xi=x/t$ corresponds to a \\textit{rarefaction wave} \\cite{Leveque}. \n\nIntegration of equation \\eqref{eq:SVK_rarefaction} is performed by using the change of variables $F \\mapsto ch(x)/\\sqrt{3}$, so that one gets:\n%The following change of variable is then introduced: $F \\mapsto ch(x)/\\sqrt{3}$, so that  becomes:\n\\begin{equation}\n  \\label{eq:charac_equation_sh}\n  dv=-\\sqrt{\\frac{\\lambda + 2\\mu}{6\\rho_0}}sh(x)^2 dx\n\\end{equation}\nwhere the hyperbolic cosine $ch(x)$ and sine $sh(x)$ satisfy $ch(x)^2-sh(x)^2=1$. 
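Indeed, with $F=ch(x)/\\sqrt{3}$ one has $dF=sh(x)\\,dx/\\sqrt{3}$ and $3F^2-1=sh(x)^2$, hence $c_1(F)=-\\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}}\\,sh(x)$, so that the characteristic equation \\eqref{eq:SVK_rarefaction} directly reduces to \\eqref{eq:charac_equation_sh}. 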
Equation \\eqref{eq:charac_equation_sh} can easily be integrated by using the exponential form of the hyperbolic sine, $sh(x)=\\frac{e^x - e^{-x}}{2}$, thus yielding:\n\\begin{equation*}\n  v-v_R=-\\frac{1}{4}\\sqrt{\\frac{\\lambda + 2\\mu}{6\\rho_0}}\\[sh(2x)-2x -(sh(2x_R)-2x_R)\\]\n\\end{equation*}\nFinally, the inverse change of variables leads to the relation:\n\\begin{equation}\n  \\label{eq:integral_curve_right}\n  v-v_R=-\\sqrt{\\frac{\\lambda + 2\\mu}{24\\rho_0}}\\[\\sqrt{3}\\(F\\sqrt{3F^2-1} -F_R\\sqrt{3F_R^2-1}\\)-\\ln\\(\\frac{\\sqrt{3}F + \\sqrt{3F^2-1}}{\\sqrt{3}F_R + \\sqrt{3F_R^2-1}}\\) \\]\n\\end{equation}\nIn a similar manner, $dv -c_2(F)  dF = 0$ must hold through a left-going rarefaction wave so that:\n\\begin{equation}\n  \\label{eq:integral_curve_left}\n  v-v_L=\\sqrt{\\frac{\\lambda + 2\\mu}{24\\rho_0}}\\[\\sqrt{3}\\(F\\sqrt{3F^2-1} -F_L\\sqrt{3F_L^2-1}\\)-\\ln\\(\\frac{\\sqrt{3}F + \\sqrt{3F^2-1}}{\\sqrt{3}F_L + \\sqrt{3F_L^2-1}}\\) \\]\n\\end{equation}\n\nEquations \\eqref{eq:integral_curve_right} and \\eqref{eq:integral_curve_left} correspond to \\textit{integral curves} that connect initial conditions to a set of solutions through a right-going or a left-going rarefaction respectively.\n\n\\subsubsection*{Solution of the Riemann problem}\nThe above developments are now generalized by considering the Riemann problem in an infinite medium:\n\\begin{equation}\n  \\label{eq:HE_Riemann_problem}\n  \\begin{aligned}\n    &\\Qcb_t + \\drond{\\Fcb\\cdot \\vect{N}}{X_N} = \\vect{0}, \\\\\n    &\\left\\lbrace \n      \\begin{aligned}\n        & \\Qcb(X_N,t=0) = \\Qcb^L = \\matrice{v=0 \\\\ F_L} \\quad \\text{if } X_N< 0\\\\\n        & \\Qcb(X_N,t=0) = \\Qcb^R = \\matrice{v=0 \\\\ F_R}\\quad \\text{if } X_N> 0\n      \\end{aligned}\n    \\right.\n  \\end{aligned}\n\\end{equation}\nsuch that the plane wave state \\eqref{eq:SVK_plane_wave} holds.\nAs for the Picard problem, initial conditions influence the characteristic structure of the solution. Indeed, if initial conditions are given such that $F_L<F_R$, left-going characteristics will collide while right-going ones will move away from one another (see figure \\ref{fig:RP_solution}\\subref{subfig:1S2R}). In that case, the first and second characteristic fields are respectively referred to as a \\textit{1-shock} and a \\textit{2-rarefaction}. Conversely, if $F_L>F_R$, the solution corresponds to a \\textit{1-rarefaction} and a \\textit{2-shock} (figure \\ref{fig:RP_solution}\\subref{subfig:1R2S}). \n\n\\begin{figure}[h]\n  \\centering\n  \\subcaptionbox{$F_L < F_R$\\label{subfig:1S2R}}{\\input{chapter2/pgfFigures/1S2R}}\n  \\subcaptionbox{$F_L > F_R$\\label{subfig:1R2S}}{\\input{chapter2/pgfFigures/1R2S}}\n  \\caption{General wave patterns arising in the solution of the Riemann problem \\eqref{eq:HE_Riemann_problem} depending on initial data. (a): 1-shock--2-rarefaction. (b): 1-rarefaction--2-shock.}\n  \\label{fig:RP_solution}\n\\end{figure}\n\nFor the 1-shock--2-rarefaction solution, one then seeks a state $\\Qcb$ that is connected to $\\Qcb^L$ and $\\Qcb^R$ through a shock wave and a rarefaction wave respectively. 
Hence, $\\Qcb$ must satisfy equations \\eqref{eq:left-going_shock} and \\eqref{eq:integral_curve_right}, that is:\n\\begin{equation}\n  \\label{eq:1S2R_solution}\n  \\left\\lbrace\n  \\begin{aligned}\n    &v -v_L= \\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}(F -F_L)\\[ F^3-F  - (F_L^3-F_L)\\]} \\\\\n    &v -v_R=-\\sqrt{\\frac{\\lambda + 2\\mu}{24\\rho_0}}\\[\\sqrt{3}\\(F \\sqrt{3F^2-1} -F_R\\sqrt{3F_R^2-1}\\)-\\ln\\(\\frac{\\sqrt{3}F  + \\sqrt{3F^2-1}}{\\sqrt{3}F_R + \\sqrt{3F_R^2-1}}\\) \\]\n  \\end{aligned}\n  \\right.\n\\end{equation}\nAnalogously, the 1-rarefaction--2-shock solution is obtained by solving equations \\eqref{eq:integral_curve_left} and \\eqref{eq:right-going_shock}:\n\\begin{equation}\n  \\label{eq:1R2S_solution}\n  \\left\\lbrace\n  \\begin{aligned}\n    &v -v_L=\\sqrt{\\frac{\\lambda + 2\\mu}{24\\rho_0}}\\[\\sqrt{3}\\(F \\sqrt{3F^2-1} -F_L\\sqrt{3F_L^2-1}\\)-\\ln\\(\\frac{\\sqrt{3}F  + \\sqrt{3F^2-1}}{\\sqrt{3}F_L + \\sqrt{3F_L^2-1}}\\) \\]\\\\\n    &v -v_R= -\\sqrt{\\frac{\\lambda+2\\mu}{2\\rho_0}(F -F_R)\\[ F^3-F  - (F_R^3-F_R)\\]}\n  \\end{aligned}\n  \\right.\n\\end{equation}\n\\begin{figure}[h!]\n  \\centering\n  {\\input{chapter2/pgfFigures/1S2R_solution}\\phantomsubcaption \\label{subfig:1S2R_curves}}\n  {\\input{chapter2/pgfFigures/1R2S_solution}\\phantomsubcaption  \\label{subfig:1R2S_curves}}\n  \\caption{Sets of states $\\Qcb$ connected to the initial data through shock and rarefaction waves, with $v_L=v_R=0$ in both cases: (a) 1-shock--2-rarefaction solution; (b) 1-rarefaction--2-shock solution.}\n  \\label{fig:solutions_RP}\n\\end{figure} \nOnce one of these systems is solved, the solution $\\Qcb$ is known everywhere except inside the rarefaction fan. Nevertheless, in this region $\\Qcb$ only varies with the ray $\\xi=c_i(F)$ and hence the solution inside an $i$-rarefaction wave satisfies:\n\\begin{equation}\n  \\label{eq:rarefaction_fan}\n  \\xi = \\pm \\sqrt{\\frac{\\lambda + 2\\mu}{2\\rho_0}\\(3F^2-1 \\)} \\quad \\Rightarrow \\quad F(\\xi)= \\sqrt{\\frac{1}{3}\\(\\frac{2\\rho_0}{\\lambda + 2\\mu}\\xi^2+1\\)}\n\\end{equation}\n\nThe curves corresponding to equations \\eqref{eq:1S2R_solution} and \\eqref{eq:1R2S_solution} are depicted in figures \\ref{fig:solutions_RP}\\subref{subfig:1S2R_curves} and \\ref{fig:solutions_RP}\\subref{subfig:1R2S_curves} for parameter values such that $\\frac{\\lambda+2\\mu}{\\rho_0}=1$. In both cases, the solution $\\Qcb(x,t)$ in the region bounded by the shock and the rarefaction fan is given by the intersection of curves in the phase plane. \n\n\\begin{remark}\n  \\label{rq:charach_neoHook}\n  Note that the above developments are based on a constitutive model that leads to a concave flux function $\\Fcb^T=-\\[\\frac{1}{\\rho_0}\\Pi \\: ;\\: v\\]$ (\\textit{i.e. } $\\ddrond{\\Pi}{F}{F}>0$). As a consequence, the characteristic speeds are monotonically increasing functions of the deformation gradient (in absolute value). 
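For instance, for $\\frac{\\lambda+2\\mu}{\\rho_0}=1$, equation \\eqref{eq:SVK_charac_fields} gives $\\abs{c}=\\sqrt{\\(3F^2-1\\)/2}$, which vanishes at $F=1/\\sqrt{3}$ and then grows monotonically with $F$. 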
Since the dependence of characteristic speeds on the deformation gradient governs the wave pattern (\\textit{i.e.: either a rarefaction or a shock wave}), the structure of the solution differs from that obtained with constitutive laws having a convex flux function, such as the neo-Hookean model.\n  Comparisons of ($F,\\Pi_{11}$) and ($F,\\abs{c}$) are shown in figures \\ref{fig:SVK-NH}\\subref{subfig:SVK_NH_Pi} and \\ref{fig:SVK-NH}\\subref{subfig:SVK_NH_speeds} as an illustration of the previous remarks.\n  \\begin{figure}[h!]\n    \\centering\n    {\\input{chapter2/pgfFigures/SVK_NH_Pi_F} \\phantomsubcaption \\label{subfig:SVK_NH_Pi}}\n    {\\input{chapter2/pgfFigures/SVK_NH_speeds} \\phantomsubcaption \\label{subfig:SVK_NH_speeds}}\n    \\caption{Comparison of neo-Hookean and Saint-Venant-Kirchhoff hyperelastic models.}\n    \\label{fig:SVK-NH}\n  \\end{figure}\n  Finally, figure \\ref{fig:SVK-NH}\\subref{subfig:SVK_NH_Pi} shows the non-physical behavior of the Saint-Venant-Kirchhoff model for high-compression loads, for which the stress tensor tends to zero.\n\\end{remark}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% ispell-local-dictionary: \"american\"\n%%% TeX-master: \"../mainManuscript\"\n%%% End:", "meta": {"hexsha": "36394fa5cf982aba2f2db84804b20cb091b007f6", "size": 30934, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/chapter2/exact_solutions.tex", "max_stars_repo_name": "adRenaud/research", "max_stars_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-18T14:52:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T14:52:03.000Z", "max_issues_repo_path": "manuscript/chapter2/exact_solutions.tex", "max_issues_repo_name": "adRenaud/research", "max_issues_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-07T13:11:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-07T13:11:11.000Z", "max_forks_repo_path": "manuscript/chapter2/exact_solutions.tex", "max_forks_repo_name": "adRenaud/research", "max_forks_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.2757009346, "max_line_length": 777, "alphanum_fraction": 0.722053404, "num_tokens": 10281, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390746, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5618831168245602}}
{"text": "%\tThis is written by Zhiyang Ong as a template for typesetting mathematics in LaTeX.\n\n%\tThe MIT License (MIT)\n\n%\tCopyright (c) <2014> <Zhiyang Ong>\n\n%\tPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n%\tThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\n%\tTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n%\tEmail address: echo \"cukj -wb- 23wU4X5M589 TROJANS cqkH wiuz2y 0f Mw Stanford\" | awk '{ sub(\"23wU4X5M589\",\"F.d_c_b. \") sub(\"Stanford\",\"d0mA1n\"); print $5, $2, $8; for (i=1; i<=1; i++) print \"6\\b\"; print $9, $7, $6 }' | sed y/kqcbuHwM62z/gnotrzadqmC/ | tr 'q' ' ' | tr -d [:cntrl:] | tr -d 'ir' | tr y \"\\n\"\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Mathematics}\n\\label{chp:Mathematics}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\section{Mathematics}\n%\\label{chp:Mathematics}\n\n\nMath symbols that I use frequently: \\vspace{-0.3cm}\n\\begin{enumerate} \\itemsep -4pt\n\\item $\\mathbb{N}$\n\\item $\\displaystyle\\sum^{i = 1}_{n}$\n\\item $f(x) = \\displaystyle\\lim_{n \\rightarrow \\infty} \\frac{f(x)}{g(x)}$\n\\item $\\varnothing$\n\\item $q$\n\\end{enumerate}\n\nA $3 \\times 3$ matrix:\n$\\left(\n\\begin{array}{ccc}\n\t11 & 12 & 13 \\\\\n\t21 & 22 & 23 \\\\\n\t31 & 32 & 33\n\\end{array}\n\\right)$\n\\ \\\\\n\\ \\\\\n\nHere is an equation:\n\\begin{equation}\n\\label{eqn:myeqnexample}\n\\iint_{\\Sigma} \\nabla \\times \\mathbf{F} \\cdot \\mathrm{d}\\mathbf{\\Sigma} = \\oint_{\\partial\\Sigma} \\mathbf{F} \\cdot \\mathrm{d} \\mathbf{r}.\n\\end{equation}\n\\ \\\\\n\\ \\\\\n\nHere is an equation that is not numbered.\n\\begin{equation*}\n\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}} {\\partial t}\n\\end{equation*}\n\n\n\nHere is the set of Maxwell's equations that is numbered.\n\\begin{gather}\n\t\\nabla \\cdot \\mathbf{E} = \\frac {\\rho} {\\varepsilon_0} \\\\\n\t\\nabla \\cdot \\mathbf{B} = 0 \\\\\n\t\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}} {\\partial t} \\\\\n\t\\nabla \\times \\mathbf{B} = \\mu_0\\left(\\mathbf{J} + \\varepsilon_0 \\frac{\\partial \\mathbf{E}} {\\partial t} \\right)\n\\end{gather}\n\n\n\\begin{gather*}\n\t{\\rm minimize \\displaystyle\\sum^{c}_{i = 1} c_{i} \\cdot x_{i}} \\\\\t%\tobjective function defined mathematically\t\\\\\n\t\\underline{x} \\in S \\\\\n\t{\\rm subject\\ to:} \\\\\n\t%\tconstraints\t\\\\\n\tx_{1} + x_{4} = 0 \\\\\n\tx_{3} + 7 \\cdot x_{4} + 2\\cdot x_{9} = 0\n\\end{gather*}\n\n\n\\begin{equation}\n\\label{eqn:caseenv}\nf(n) = \n\t\\begin{cases}\n\tcase-1 &: \\mathrm{n\\ is\\ odd} \\\\\n\tcase-2 &: \\mathrm{n\\ is\\ even} 
\\\\\n\t\\end{cases}\n\\end{equation}\n\n\\begin{proof}\nThis is a proof for BLAH \\dots\n\\end{proof}\n\n\n\n\n\\begin{theorem}{TITLE of theorem.}\nMy theorem is\\dots\n\\end{theorem}\n\n\n\n\\begin{axiom}{TITLE of axiom.}\nBlah\\dots\n\\end{axiom}\n\n\n\nCases of putting a bracket/parenthesis on the right side of the equation.\n\\begin{gather*}\n\t\\left.\\begin{aligned}\n\tB'&=-\\partial \\times E,\\\\\n\tE'&=\\partial \\times B - 4\\pi j,\n\t\\end{aligned}\n\t\\right\\}\n\t\\quad\\text{Maxwell's equations}\n\\end{gather*}\n\n\nCases of putting a bracket/parenthesis on the right side of the equation.\\\\\n$\\begin{rcases*}\n\tE = m c^2 & foo \\\\\n\t\\int x-3\\, dx & barbaz\n\\end{rcases*} y=f(x)$\n\\ \\\\\n\\ \\\\\n\nLabeling an arrow: $\\xrightarrow{ewq}$\n", "meta": {"hexsha": "cc05e97c1149dda8d6af385f3585a7f9c6c03f58", "size": 3930, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/math/mathematics.tex", "max_stars_repo_name": "eda-ricercatore/SienaLaTeX", "max_stars_repo_head_hexsha": "e28cb49843420f4292071fb1fbdc5a7af0ff20aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-04-29T20:15:12.000Z", "max_stars_repo_stars_event_max_datetime": "2018-04-29T20:15:12.000Z", "max_issues_repo_path": "reports/math/mathematics.tex", "max_issues_repo_name": "eda-ricercatore/SienaLaTeX", "max_issues_repo_head_hexsha": "e28cb49843420f4292071fb1fbdc5a7af0ff20aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-10-19T20:55:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-25T15:16:14.000Z", "max_forks_repo_path": "reports/math/mathematics.tex", "max_forks_repo_name": "eda-ricercatore/SienaLaTeX", "max_forks_repo_head_hexsha": "e28cb49843420f4292071fb1fbdc5a7af0ff20aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-04-16T04:32:01.000Z", "max_forks_repo_forks_event_max_datetime": "2016-04-16T04:32:01.000Z", "avg_line_length": 30.0, "max_line_length": 462, "alphanum_fraction": 0.6595419847, "num_tokens": 1290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7718435030872968, "lm_q1q2_score": 0.5618831115348052}}
{"text": "%!TEX root = da2020-04.tex\n\n\\Chapter{4}{\\tLOCAL{}~Model: Unique~Identifiers}\n\n\\noindent\nIn the previous chapter, we studied deterministic distributed algorithms in port-numbered networks. In this chapter we will study a stronger model: \\emph{networks with unique identifiers}\\mydash see Figure~\\ref{fig:unique-ids}. Following the standard terminology of the field, we will use the term ``$\\LOCAL$ model'' to refer to networks with unique identifiers.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[page=\\PUniqueIds]{figs.pdf}\n    \\caption{A network with unique identifiers.}\\label{fig:unique-ids}\n\\end{figure}\n\n\n\\section{Definitions}\\label{sec:unique-id}\n\nThroughout this chapter, fix a constant $c > 1$. An assignment of \\emph{unique identifiers} for a port-numbered network $N = (V,P,p)$ is an injection\n\\[\n    \\Id \\colon V \\to \\{1,2, \\dotsc, |V|^c\\}.\n\\]\nThat is, each node $v \\in V$ is labeled with a unique integer, and the labels are assumed to be relatively small.\n\nFormally, unique identifiers can be interpreted as a graph problem $\\Pi'$, where each solution $\\Id \\in \\Pi'(N)$ is an assignment of unique identifiers for network $N$. If a distributed algorithm $A$ solves a problem $\\Pi$ on a family $\\calF$ given $\\Pi'$, we say that \\emph{$A$ solves $\\Pi$ on $\\calF$ given unique identifiers}, or equivalently, \\emph{$A$ solves $\\Pi$ on $\\calF$ in the $\\LOCAL$ model}.\n\nFor the sake of convenience, when we discuss networks with unique identifiers, we will identify a node with its unique identifier, i.e., $v = \\Id(v)$ for all $v \\in V$.\n\n\n\\section{Gathering Everything}\\label{sec:gather}\n\nIn the $\\LOCAL$ model, if the underlying graph $G = (V,E)$ is connected, all nodes can learn everything about $G$ in time $O(\\diam(G))$. In this section, we will present a gathering algorithm that accomplishes this.\n\nIn the gathering algorithm, each node $v \\in V$ will construct sets $V(v,r)$ and $E(v,r)$, where $r = 1, 2, \\dotsc$. For all $v \\in V$ and $r \\ge 1$, these sets will satisfy\n\\begin{align}\n    V(v,r) &= \\ball_G(v,r), \\label{eq:gather1} \\\\\n    E(v,r) &= \\bigl\\{ \\{s,t\\} : s \\in \\ball_G(v,r),\\, t\\in \\ball_G(v,r{-}1) \\bigr\\}. \\label{eq:gather2}\n\\intertext{Now define the graph}\n    G(v,r) &= (V(v,r), E(v,r)).  \\label{eq:gather3}\n\\end{align}\nSee Figure~\\ref{fig:gather} for an illustration.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[page=\\PGather]{figs.pdf}\n    \\caption{Subgraph $G(v,r)$ defined in \\eqref{eq:gather3}, for $v = 14$ and $r = 2$.}\\label{fig:gather}\n\\end{figure}\n\nThe following properties are straightforward corollaries of \\eqref{eq:gather1}--\\eqref{eq:gather3}.\n\\begin{enumerate}\n    \\item Graph $G(v,r)$ is a subgraph of $G(v,r+1)$, which is a subgraph of~$G$.\n    \\item If $G$ is a connected graph, and $r \\ge \\diam(G) + 1$, we have $G(v,r) = G$.\n    \\item If $G_v$ is the connected component of $G$ that contains $v$, and $r \\ge \\diam(G_v) + 1$, we have $G(v,r) = G_v$.\n    \\item For a sufficiently large $r$, we have $G(v,r) = G(v,r+1)$.\n    \\item If $G(v,r) = G(v,r+1)$, we will also have $G(v,r+1) = G(v,r+2)$.\n    \\item Graph $G(v,r)$ for $r > 1$ can be constructed recursively as follows:\n    \\begin{align}\n        V(v,r) &= \\bigcup_{u \\in V(v,1)} V(u,r-1), \\label{eq:Vvr} \\\\\n        E(v,r) &= \\bigcup_{u \\in V(v,1)} E(u,r-1). 
\\label{eq:Evr}\n    \\end{align}\n\\end{enumerate} \n\nThe gathering algorithm maintains the following invariant: after round $r \\ge 1$, each node $v \\in V$ has constructed graph $G(v,r)$. The execution of the algorithm proceeds as follows:\n\\begin{enumerate}\n    \\item In round $1$, each node $u \\in V$ sends its identity $u$ to each of its ports. Hence after round $1$, each node $v \\in V$ knows its own identity and the identities of its neighbors. Put otherwise, $v$ knows precisely $G(v,1)$.\n    \\item In round $r > 1$, each node $u \\in V$ sends $G(u,r-1)$ to each of its ports. Hence after round $r$, each node $v \\in V$ knows $G(u,r-1)$ for all $u \\in V(v,1)$. Now $v$ can reconstruct $G(v,r)$ using \\eqref{eq:Vvr} and \\eqref{eq:Evr}.\n    \\item A node $v \\in V$ can stop once it detects that the graph $G(v,r)$ no longer changes.\n\\end{enumerate}\n\nIt is easy to extend the gathering algorithm so that we can discover not only the underlying graph $G = (V,E)$ but also the original port-numbered network $N = (V,P,p)$.\n\n\n\\section{Solving Everything}\n\nLet $\\calF$ be a family of connected graphs, and let $\\Pi$ be a distributed graph problem. Assume that there is a deterministic \\emph{centralized} (non-distributed) algorithm $A'$ that solves $\\Pi$ on $\\calF$. For example, $A'$ can be a simple brute-force algorithm\\mydash we are not interested in the running time of algorithm~$A'$.\n\nNow there is a simple distributed algorithm $A$ that solves $\\Pi$ on $\\calF$ in the $\\LOCAL$ model. Let $N = (V,P,p)$ be a port-numbered network with the underlying graph $G \\in \\calF$. Algorithm $A$ proceeds as follows.\n\\begin{enumerate}\n    \\item All nodes discover $N$ using the gathering algorithm from Section~\\ref{sec:gather}.\n    \\item All nodes use the centralized algorithm $A'$ to find a solution $f \\in \\Pi(N)$. From the perspective of algorithm $A$, this is merely a state transition; it is a local step that requires no communication at all, and hence takes $0$ communication rounds.\n    \\item Finally, each node $v \\in V$ switches to state $f(v)$ and stops.\n\\end{enumerate}\nClearly, the running time of the algorithm is $O(\\diam(G))$.\n\nIt is essential that all nodes have the same canonical representation of network $N$ (for example, $V$, $P$, and $p$ are represented as lists that are ordered lexicographically by node identifiers and port numbers), and that all nodes use the same deterministic algorithm $A'$ to solve $\\Pi$. This way we are guaranteed that all nodes have locally computed the \\emph{same} solution $f$, and hence the outputs $f(v)$ are globally consistent.\n\n\n\\section{Focus on Computational Complexity}\n\nSo far we have learned the key difference between $\\PN$ and $\\LOCAL$ models: while there are plenty of graph problems that cannot be solved at all in the $\\PN$ model, we know that all computable graph problems can be easily solved in the $\\LOCAL$ model.\n\nHence our focus shifts from computability to computational complexity. While it is trivial to determine if a problem can be solved in the $\\LOCAL$ model, we would like to know which problems can be solved quickly. In particular, we would like to learn which problems can be solved in time that is much smaller than $\\diam(G)$. It turns out that graph coloring is an example of such a problem.\n\nIn the rest of this chapter, we will design an efficient distributed algorithm that finds a graph coloring in the $\\LOCAL$ model. 
The algorithm will find a proper vertex coloring with $\\Delta+1$ colors in $O(\\Delta + \\log^* n)$ communication rounds, for any graph with $n = |V|$ nodes and maximum degree $\\Delta$. We will start with a simple greedy algorithm that we will later use as a subroutine.\n\n\n\\section{Greedy Color Reduction} \\label{sec:bdgreedy}\n\nLet $x \\in \\NN$. We present a greedy color reduction algorithm that reduces the number of colors from $x$ to\n\\[\n    y = \\max \\{ x-1, \\Delta+1 \\},\n\\]\nwhere $\\Delta$ is the maximum degree of the graph. That is, given a proper vertex coloring with $x$ colors, the algorithm outputs a proper vertex coloring with $y$ colors. The running time of the algorithm is one communication round.\n\n\\subsection{Algorithm}\n\nThe algorithm proceeds as follows; here $f$ is the $x$-coloring that we are given as input and $g$ is the $y$-coloring that we produce as output. See Figure~\\ref{fig:greedy} for an illustration.\n\\begin{enumerate}\n    \\item In the first communication round, each node $v \\in V$ sends its color $f(v)$ to each of its neighbors.\n    \\item Now each node $v \\in V$ knows the set\n    \\[\n        C(v) = \\{ i : \\text{there is a neighbor $u$ of $v$ with $f(u) = i$} \\}.\n    \\]\n    We say that a node is \\emph{active} if $f(v) > \\max C(v)$; otherwise it is \\emph{passive}. That is, the colors of the active nodes are local maxima. Let\n    \\[\n        \\bar{C}(v) = \\{1,2,\\dotsc\\} \\setminus C(v)\n    \\]\n    be the set of \\emph{free colors} in the neighborhood of $v$.\n    \\item A node $v \\in V$ outputs\n    \\[\n        g(v) = \\begin{cases}\n            f(v) & \\text{if $v$ is passive}, \\\\\n            \\min \\bar{C}(v) & \\text{if $v$ is active}.\n        \\end{cases}\n    \\]\n\\end{enumerate}\nInformally, a node whose color is a local maximum re-colors itself with the first available free color.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[page=\\PGreedy]{figs.pdf}\n    \\caption{Greedy color reduction. The active nodes have been highlighted. Note that in the original coloring $f$, the largest color was $99$, while in the new coloring, the largest color is strictly smaller than $99$\\mydash we have successfully reduced the number of colors in the graph.}\\label{fig:greedy}\n\\end{figure}\n\n\\subsection{Analysis}\n\n\\begin{lemma}\n    The greedy color reduction algorithm reduces the number of colors from $x$ to\n    \\[\n        y = \\max \\{ x-1, \\Delta+1 \\},\n    \\]\n    where $\\Delta$ is the maximum degree of the graph.\n\\end{lemma}\n\\begin{proof}\n    Let us first prove that $g(v) \\in \\{1,2,\\dotsc,y\\}$ for all $v \\in V$. As $f$ is a proper coloring, we cannot have $f(v) = \\max C(v)$. Hence there are only two possibilities.\n    \\begin{enumerate}\n        \\item $f(v) < \\max C(v)$. Now $v$ is passive, and it is adjacent to a node $u$ such that $f(v) < f(u)$. We have\n        \\[\n            g(v) = f(v) \\le f(u) - 1 \\le x - 1 \\le y.\n        \\]\n        \\item $f(v) > \\max C(v)$. Now $v$ is active, and we have\n        \\[\n            g(v) = \\min \\bar{C}(v).\n        \\]\n        There is at least one value $i \\in \\{1,2,\\dotsc,|C(v)|+1\\}$ with $i \\notin C(v)$; hence\n        \\[\n            \\min \\bar{C}(v) \\le |C(v)| + 1 \\le \\deg_G(v) + 1 \\le \\Delta + 1 \\le y.\n        \\]\n    \\end{enumerate}\n    \n    Next we will show that $g$ is a proper vertex coloring of $G$. Let $\\{u,v\\} \\in E$. 
If both $u$ and $v$ are passive, we have\n    \\[\n        g(u) = f(u) \\ne f(v) = g(v).\n    \\]\n    Otherwise, w.l.o.g., assume that $u$ is active. Then we must have $f(u) > f(v)$. It follows that $f(u) \\in C(v)$ and $f(v) \\le \\max C(v)$; therefore $v$ is passive. Now\n    $g(u) \\notin C(u)$ while\n    $g(v) = f(v) \\in C(u)$; we have $g(u) \\ne g(v)$.\n\\end{proof}\n\nThe key observation is that the set of active nodes forms an independent set. Therefore all active nodes can pick their new colors simultaneously in parallel, without any risk of choosing colors that might conflict with each other.\n\n\\subsection{Remarks}\n\nThe greedy color reduction algorithm does not need to know the number of colors $x$ or the maximum degree $\\Delta$; we only used them in the analysis. We can take any graph, blindly apply greedy color reduction, and we are guaranteed to reduce the number of colors by one\\mydash provided that the number of colors was larger than $\\Delta + 1$. In particular, we can apply the greedy color reduction repeatedly until we get stuck, at which point we have a \\Dpocol{} of~$G$\\mydash we will formalize and generalize this idea in Exercise~\\ref{ex:greedy-iterate}.\n\n\\section[{Efficient \\texorpdfstring{$(\\Delta+1)$}{(\u0394+1)}-coloring}]{\\boldmath Efficient $(\\Delta+1)$-coloring}\n\nIn the remaining sections we will describe two coloring algorithms that, together with the greedy algorithm from the previous section, can be used to $(\\Delta+1)$-color graphs of maximum degree $\\Delta$.\n\nOn a high level, the $(\\Delta+1)$-coloring algorithm is composed of the following subroutines:\n\\begin{enumerate}\n  \\item Algorithm from Section~\\ref{sec:delta2-coloring}: Using unique identifiers as input, compute an $O(\\Delta^2)$-coloring $x$ in $O(\\log^* n)$ rounds.\n  \\item Algorithm from Section~\\ref{sec:additive-group-col}: Given $x$ as input, compute an $O(\\Delta)$-coloring $y$ in $O(\\Delta)$ rounds.\n  \\item Algorithm from Section~\\ref{sec:bdgreedy}: Given $y$ as input, compute a $(\\Delta+1)$-coloring $z$ in $O(\\Delta)$ rounds.\n\\end{enumerate}\nWe have already seen the greedy algorithm that we will use in the final step; we will proceed in the reverse order and present next the algorithm that turns an $O(\\Delta^2)$-coloring into an $O(\\Delta)$-coloring. In what follows, we will assume that the nodes are given the values of $\\Delta$ and $n$ as input; these assumptions will simplify the algorithms significantly.\n\n\\section{Additive-Group Coloring} \\label{sec:additive-group-col}\n\nConsider two clocks with $q$ steps, for any prime $q$; see Figure~\\ref{fig:clocks}. The first clock moves its hand $a$ steps in each time unit, and the second clock moves its hand $b \\neq a$ steps in each time unit. Starting from the same position, when are the two hands in the same position again?\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[scale=0.25,page=\\PClocks]{figs.pdf}\n    \\caption{Two clocks for $q=7$. The blue hand moves 2 steps per time unit, and the orange hand 3 steps. Hands moving at different speeds meet again after $q$ moves, but not before.}\\label{fig:clocks}\n\\end{figure}\n\nIt is a fundamental property of \\emph{finite fields} that they are in the same position again after exactly $q$ steps. We recap definitions and facts about finite fields in Section~\\ref{app:finite-fields}.\n\nBuilding on this observation, we construct an algorithm where each node behaves like a clock with one hand, turning its hand with some constant speed. 
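For example, with $q = 7$, a hand that starts at position $3$ and moves $2$ steps per time unit visits the positions $3, 5, 0, 2, 4, 6, 1$, and it returns to position $3$ only after exactly $q = 7$ moves. 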
We will use the input coloring to ensure that \\emph{clocks with the same starting position turn their hands at different speeds}. Then we will simply wait until a clock is in a position not shared by any of the neighbors, and this position becomes the final color of the node. If a node does not have too many neighbors, it will eventually find such a position, leading to a proper coloring.\n\n
\\subsection{Algorithm}\\label{sec:additive-group-col-alg}\n\nLet $q$ be a prime number with $q > 2\\Delta$. We assume that we are given a coloring with $q^2$ colors, and we will show how to construct a coloring with $q$ colors in $O(q)$ rounds. Put otherwise, we can turn a coloring with $O(\\Delta^2)$ colors into a coloring with $O(\\Delta)$ colors in $O(\\Delta)$ rounds, as long as we choose our prime number $q$ in a suitable manner.\n\nIf we have an input coloring with $q^2$ colors, we can represent the color of node $v$ as a pair $f(v) = \\langle f_1(v), f_2(v) \\rangle$ where $f_1(v)$ and $f_2(v)$ are integers between $0$ and $q-1$.\n\nUsing the clock analogy, $v$ can be seen as a clock with the hand at position $f_2(v)$, turning at speed $f_1(v)$. In the algorithm we will stop clocks by setting $f_1(v) = 0$ whenever this is possible in a conflict-free manner. When all clocks have stopped, all nodes have colors of the form $\\langle 0, f_2(v) \\rangle$ where $f_2(v)$ is between $0$ and $q-1$, and hence we have got a proper coloring with $q$ colors.\n\nWe say that two colors $\\langle a_1, a_2 \\rangle$ and $\\langle b_1, b_2 \\rangle$ are in \\emph{conflict} if $a_2 = b_2$. The algorithm repeatedly performs the following steps:\n\\begin{itemize}[noitemsep]\n  \\item Each node sends its current color to each neighbor.\n  \\item For each node $v$, if $f(v)$ is in conflict with any neighbor, set\n  \\begin{align*}\n    f(v) &\\gets \\bigl\\langle f_1(v),\\, (f_1(v) + f_2(v)) \\bmod q \\bigr\\rangle.\n  \\intertext{Otherwise, set}\n    f(v) &\\gets \\bigl\\langle 0,\\, f_2(v) \\bigr\\rangle.\n  \\end{align*}\n\\end{itemize}\nIn essence, we stop non-conflicting clocks and keep moving all other clocks at a constant rate. We say that a node $v$ is \\emph{stopped} when $f_1(v) = 0$; otherwise it is \\emph{running}; note that a stopped node will not change its color any more.\n\nWe show that after $O(q)$ iterations of this loop, all nodes will be stopped, and they form a proper coloring\\mydash assuming we started with a proper coloring.\n\n
\\subsection{Correctness}\n\nFirst, we show that in each iteration a proper coloring remains proper. In what follows, we use $f$ to denote the coloring before one iteration and $g$ to denote the coloring after the iteration. Consider a fixed node $v$ and an arbitrary neighbor $u$. We show by a case analysis that $f(v) \\neq f(u)$ implies $g(v) \\neq g(u)$.\n\\begin{enumerate}[label=(\\arabic*)]\n  \\item Assume that $v$ is stopped after this round; then $g(v) = \\langle 0, f_2(v) \\rangle$.\n  \\begin{enumerate}\n    \\item If $f_1(u) = 0$, then $u$ has stopped and $g(u) = f(u)$. By assumption $f(v) \\neq f(u)$ and therefore $g(v) \\neq g(u)$.\n    \\item If $f_1(u) \\neq 0$ and $f(u)$ is not in conflict with its neighbors, then $g(u) = \\langle 0, f_2(u) \\rangle$. As there are no conflicts with $v$, we must have $f_2(v) \\neq f_2(u)$, and therefore $g(v) \\neq g(u)$.\n    \\item Otherwise $f_1(u) \\neq 0$ and $f(u)$ is in conflict with a neighboring color. Then $g_1(u) = f_1(u) \\neq 0 = g_1(v)$, and therefore $g(v) \\neq g(u)$.\n  \\end{enumerate}\n  \\item Otherwise we have $g(v) = \\langle f_1(v), (f_1(v) + f_2(v)) \\bmod q \\rangle$, where $f_1(v) \\ne 0$.\n  \\begin{enumerate}\n    \\item If $u$ has stopped, then $g_1(u) = 0$, and therefore $g(v) \\neq g(u)$.\n    \\item Otherwise $u$ is running. Then \\[g(u) = \\langle f_1(u), (f_1(u)+f_2(u)) \\bmod q \\rangle.\\] If $f_1(v) \\ne f_1(u)$, we will have $g_1(v) \\ne g_1(u)$ and therefore $g(v) \\ne g(u)$. Otherwise $f_1(v) = f_1(u)$ but then by assumption we must have $f_2(v) \\ne f_2(u)$, which implies $g_2(v) \\ne g_2(u)$ and therefore $g(v) \\ne g(u)$.\n  \\end{enumerate}\n\\end{enumerate}\n\n
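Before analyzing the running time, it is worth noting how little code one iteration takes. Here is a minimal Python sketch of one synchronous iteration (a centralized simulation; \\texttt{adj}, \\texttt{f} and the function name are illustrative, with $f(v)$ stored as a pair \\texttt{(f1, f2)}):\n
\\begin{verbatim}\ndef additive_group_step(adj, f, q):\n    g = {}\n    for v in adj:\n        f1, f2 = f[v]\n        if any(f[u][1] == f2 for u in adj[v]):   # conflict: turn the hand\n            g[v] = (f1, (f1 + f2) % q)\n        else:                                    # no conflict: stop the clock\n            g[v] = (0, f2)\n    return g\n\\end{verbatim}\n
Note that a stopped node ($f_1(v) = 0$) keeps its color in both branches, exactly as in the case analysis above; iterating this step until all nodes are stopped yields the final $q$-coloring.\n\n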
\\subsection{Running Time}\n\nNext we analyze the running time. Assume that we start with a proper coloring $f$. We want to show that after a sufficient number of iterations of the additive-group algorithm, each node must have had an iteration in which its color did not conflict with the colors of its neighbors, and hence got an opportunity to stop.\n\nLet $f^0$ denote the initial coloring before the first iteration and let $f^i$ denote the coloring after iteration $i = 1, 2, \\dotsc$. The following lemma shows that two running nodes do not conflict too often during the execution.\n\n
\\begin{lemma} \\label{lem:agc-running-conflict}\n  Consider $t$ consecutive iterations of the additive-group coloring algorithm, for $t \\leq q$. Let $u$ and $v$ be adjacent nodes such that both of them are still running before iteration $t$. Then there is at most one iteration $i = 0, 1, \\dotsc, t-1$ with a conflict $f_2^i(u) = f_2^i(v)$.\n\\end{lemma}\n\n\\begin{proof}\n  Assume that for some $i$ we have $f^i(u) = \\langle a, b \\rangle$ and $f^i(v) = \\langle a', b \\rangle$ with $a \\ne a'$. In the subsequent iterations $j=i+1,i+2,\\dots$, we have\n  \\[\n    f^j_2(u) - f^j_2(v) \\equiv (a-a')(j-i) \\mod q.\n  \\]\n  Assume that for some $j$ we have another conflict $f^j_2(u) = f^j_2(v)$, implying that $(a-a')(j-i) \\equiv 0 \\mod q$. If a prime divides a product $xy$ of two integers $x$ and $y$, then it also divides $x$ or $y$ (Euclid's Lemma). But $a-a'$ cannot be a multiple of $q$, since $a \\ne a'$ and $0 \\le a, a' < q$, and $j-i$ cannot be a multiple of $q$, either, since $0 \\le i < j < q$. Hence $q$ cannot divide $(a-a')(j-i)$, a contradiction.\n\\end{proof}\n\n
We also need to show that a node is not in conflict with a stopped node too often.\n\n\\begin{lemma} \\label{lem:agc-finished-conflict}\n  Consider $t$ consecutive iterations of the additive-group coloring algorithm, for $t \\leq q$. Let $u$ and $v$ be adjacent nodes such that $u$ is still running before iteration $t$ but $v$ was stopped before iteration $1$. Then there is at most one iteration $i = 0, 1, \\dotsc, t-1$ with a conflict $f_2^i(u) = f_2^i(v)$.\n\\end{lemma}\n\n\\begin{proof}\n  The same argument as in the proof of Lemma~\\ref{lem:agc-running-conflict} works, this time with $a' = 0$.\n\\end{proof}\n\n
It remains to show that, based on Lemmas~\\ref{lem:agc-running-conflict} and \\ref{lem:agc-finished-conflict}, the algorithm finishes fast.\n\nConsider a sequence of $q > 2\\Delta$ consecutive iterations of the additive-group coloring algorithm starting with any initial coloring $f$. Consider an arbitrary node $u$ that does not stop during any of these rounds. Let $v$ be a neighbor of $u$. No matter if and when $v$ stops, the color of $v$ will conflict with the color of $u$ at most twice during the $q$ rounds:\n\\begin{itemize}\n    \\item Consider the rounds (if any) in which $v$ is running. 
There are at most $q$ such rounds. By Lemma~\\ref{lem:agc-running-conflict}, $u$ conflicts with $v$ at most once during these rounds.\n    \\item Consider the remaining rounds (if any) in which $v$ is stopped. There are at most $q$ such rounds. By Lemma~\\ref{lem:agc-finished-conflict}, $u$ conflicts with $v$ at most once during these rounds.\n\\end{itemize}\nSo for each neighbor $v$ of $u$, there are at most $2$ rounds among the $q$ rounds such that the color of $v$ conflicts with the color of $u$. As there are at most $\\Delta$ neighbors, there are at most $2\\Delta$ rounds among the $q$ rounds such that the color of some neighbor of $u$ conflicts with the current color of $u$. But $q > 2\\Delta$, so there has to be at least one round after which none of the neighbors are in conflict with $u$---and hence there will be an opportunity for $u$ to stop.\n\n
\\section[Fast \\texorpdfstring{$O(\\Delta^2)$}{O(\u0394\u00b2)}-coloring]{\\boldmath Fast $O(\\Delta^2)$-coloring} \\label{sec:delta2-coloring}\n\nThe additive-group coloring algorithm assumes that we start with an $O(\\Delta^2)$-coloring of the network. In this section we present an algorithm that computes an $O(\\Delta^2)$-coloring in $O(\\log^* n)$ communication rounds.\n\nThe algorithm proceeds in two phases. In the first phase, the coloring given by the unique identifiers is iteratively reduced to an $O(\\Delta^2 \\log^2 \\Delta)$-coloring. In the second phase, a final color reduction step yields an $O(\\Delta^2)$-coloring.\n\nBoth phases are based on the same combinatorial construction, called a cover-free set family. We begin by describing the construction for the first phase.\n\n
\\subsection{Cover-Free Set Families}\n\nThe coloring algorithm is based on the existence of non-covering families of sets. Intuitively, these are families of sets in which no two sets have a large overlap: then no small collection of sets contains all elements of another set. Therefore, if each node is assigned such a set, it can find an element that is not in the sets of its neighbors, and pick that element as its new color.\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[scale=0.4,page=\\PCoverFreeSet]{figs.pdf}\n  \\caption{A 2-cover-free set family $J$ of 5 subsets of a base set $X$ on 7 elements. No two sets cover a third distinct set.}\\label{fig:cover-free-sets}\n\\end{figure}\n\nA family $J$ of $n$ subsets of $\\{1, \\dots, m \\}$ is \\emph{$k$-cover-free} if for every $S \\in J$ and every collection of $k$ sets $S_1, \\dots, S_k \\in J$ distinct from $S$ we have that\n\\[\n  S \\nsubseteq \\bigcup_{i=1}^{k} S_i.\n\\]\nSee Figure~\\ref{fig:cover-free-sets} for an example.\n\n
\\subsection{Constructing Cover-Free Set Families}\n\nCover-free set families can be constructed using \\emph{polynomials over finite fields}. The example of finite fields we are interested in is $\\GF(q)$ for a prime $q$, which is simply modular arithmetic of integers modulo $q$. We consider polynomials over such a field. A brief recap is given in Section~\\ref{app:finite-fields}.\n\nA basic result about polynomials states that two distinct polynomials evaluate to the same value only at a bounded number of points.\n\n\\begin{lemma} \\label{lem:poly-roots}\n  Let $f,g$ be two distinct polynomials of degree $d$ over a finite field $\\GF(q)$, for some prime $q$. Then $f(x) = g(x)$ holds for at most $d$ elements $x \\in \\GF(q)$.\n\\end{lemma}\n\\begin{proof}\nSee Section~\\ref{app:finite-fields}.\n\\end{proof}\n\n
Now fix a prime $q$. Our base set will be $X = \\GF(q) \\times \\GF(q)$. Thus we have that $|X| = m = q^2$.\n\nFor a positive natural number $d$, consider $\\Poly(d,q)$, the set of polynomials of degree $d$ over $\\GF(q)$. For each polynomial $g \\in \\Poly(d,q)$, fix the set \\[S_g = \\bigl\\{ (a, g(a)) \\bigm| a \\in \\GF(q) \\bigr\\}\\] that is associated with this polynomial. Note that each $S_g$ contains exactly $q$ elements: one for each element of $\\GF(q)$. Then we can define the family \\[J = J_{d,q} = \\bigl\\{ S_g \\bigm| g \\in \\Poly(d,q) \\bigr\\}.\\]\n\nConsider any two distinct polynomials $f$ and $g$ in $\\Poly(d,q)$: by Lemma~\\ref{lem:poly-roots} there are at most $d$ elements $a$ such that $f(a) = g(a)$. Therefore $|S_f \\cap S_g| \\leq d$, and $J$ is a $\\lfloor q/d \\rfloor$-cover-free set family, as each set distinct from $S_g$ can cover at most $d$ of the $q$ elements of $S_g$.\n\nAny polynomial is uniquely defined by its coefficients. Therefore the set $J_{d,q}$ has size $q^{d+1}$, as it consists of a set of pairs for each polynomial of degree $d$.\n\n
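This construction is concrete enough to write down directly. The following Python sketch (illustrative, not part of the original text) enumerates the family $J_{d,q}$ by iterating over all coefficient vectors:\n
\\begin{verbatim}\nfrom itertools import product\n\ndef cover_free_family(d, q):\n    # One set S_g = { (a, g(a)) : a in GF(q) } per polynomial g of\n    # degree at most d over GF(q); there are q**(d+1) such polynomials.\n    family = []\n    for coeffs in product(range(q), repeat=d + 1):\n        g = lambda a: sum(c * a**k for k, c in enumerate(coeffs)) % q\n        family.append(frozenset((a, g(a)) for a in range(q)))\n    return family   # pairwise intersections have size at most d\n\\end{verbatim}\n
For example, \\texttt{cover\\_free\\_family(2, 5)} returns $125$ sets of $5$ pairs each, over a base set of $25$ elements.\n\n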
By choosing the parameters $q$ and $d$ we can construct a $\\Delta$-cover-free family that can be used to color efficiently.\n\n
\\begin{lemma} \\label{lem:delta-cover-free}\n  For all integers $x$, $\\Delta$ such that $x > \\Delta \\geq 2$, there exists a $\\Delta$-cover-free family $J$ of $x$ subsets from a base set of $m \\leq 4(\\Delta+1)^2 \\log^2 x$ elements.\n\\end{lemma}\n\n\\begin{proof}\n  We begin by choosing a prime $q$ such that\n  \\[\n    \\bigl\\lfloor (\\Delta+1)\\log x \\bigr\\rfloor \\leq q \\leq 2 \\cdot \\bigl\\lfloor (\\Delta+1)\\log x \\bigr\\rfloor.\n  \\]\n  By the Bertrand--Chebyshev theorem such a prime must always exist. Set $d = \\lfloor \\log x \\rfloor$. By the previous observation, the family $J_{d,q}$, for the above parameter settings, is a $\\lfloor q / d \\rfloor$-cover-free family, where \n  \\[\n    \\lfloor q / d \\rfloor \\geq \\biggl\\lfloor \\frac{\\lfloor (\\Delta+1)\\log x \\rfloor}{\\lfloor \\log x \\rfloor} \\biggr\\rfloor \\geq \\biggl\\lfloor \\frac{(\\Delta+1)\\log x - 1}{\\log x} \\biggr\\rfloor \\geq \\Delta.\n  \\]\n  There are at least \n  \\[\n  q^{d+1} \\geq (\\Delta \\log x)^{\\log x } > x\n  \\]\n  sets in $J_{d,q}$, so we can choose $x$ of them. The base set has \\[q^2 \\leq 4(\\Delta+1)^2 \\log^2 x\\] elements.\n\\end{proof}\n\n
\\subsection{Efficient Color Reduction} \\label{ssec:efficient-cr}\n\nUsing $\\Delta$-cover-free set families we can construct an algorithm that reduces the number of colors from $x$ to $y \\leq 4(\\Delta+1)^2 \\log^2 x$ in one communication round, as long as $x > \\Delta$.\n\nLet $f$ denote the input $x$-coloring and $g$ the output $y$-coloring. Assume that $J$ is a $\\Delta$-cover-free family of $x$ sets on a base set of $y$ elements, as in Lemma~\\ref{lem:delta-cover-free}, that is ordered as $S_1, S_2, \\dots, S_x$. The algorithm functions as follows.\n\\begin{enumerate}\n  \\item Each node $v \\in V$ sends its current color $f(v)$ to each of its neighbors.\n  \\item Each node receives the colors $f(u)$ of its neighbors $u \\in N(v)$. Then it constructs the set $S_{f(v)}$, and the sets $S_{f(u)}$ for all $u \\in N(v)$. Since $f(v) \\neq f(u)$ for all $u\\in N(v)$, and $J$ is a $\\Delta$-cover-free family, we have that\n  \\[\n    S_{f(v)} \\nsubseteq \\bigcup_{u \\in N(v)} S_{f(u)}.\n  \\]\n  In particular, there exists at least one $c \\in S_{f(v)} \\setminus \\cup_{u \\in N(v)} S_{f(u)}$. Node $v$ sets $g(v) = c$ for the smallest such $c$.\n\\end{enumerate}\n\n
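As a sanity check, the one-round rule is again easy to simulate centrally. The sketch below (illustrative names; colors are $1$-indexed, and the elements of the base set are assumed to be ordered) can reuse a cover-free family such as the one constructed above:\n
\\begin{verbatim}\ndef cover_free_color_reduction(adj, f, family):\n    # family[i] is the set S_{i+1} associated with color i+1\n    g = {}\n    for v in adj:\n        free = set(family[f[v] - 1])\n        for u in adj[v]:             # discard elements covered by neighbors\n            free -= family[f[u] - 1]\n        g[v] = min(free)             # nonempty: family is Delta-cover-free\n    return g\n\\end{verbatim}\n
The new color $g(v)$ is an element of the base set; renumbering the base set as $\\{1,2,\\dotsc,y\\}$ gives a $y$-coloring as in the text.\n\n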
Now assume that $f$ is a proper coloring, that is, $f(v) \\neq f(u)$ for all neighbors $v$ and $u$. This implies that for each node $v$, each of its neighbors $u$ selects a set that is different from $S_{f(v)}$; overall, the neighbors will select at most $\\Delta$ distinct sets. Since $J$ is a $\\Delta$-cover-free family, each node $v$ can find an element $c \\in S_{f(v)}$ that is not in the sets of its neighbors. Therefore setting $g(v) = c$ forms a proper coloring. Finally, since the sets $S \\in J$ are subsets of $\\{1,\\dots,y\\}$, for $y \\leq 4(\\Delta+1)^2 \\log^2 x$, we have that $g$ is a $y$-coloring.\n\n
\\subsection{Iterated Color Reduction}\n\nBy a repeated application of the color reduction algorithm it is possible to reduce the number of colors down to $O(\\Delta^2 \\log^2 \\Delta)$. Assuming we start with an input $x$-coloring, this will take $O(\\log^* x)$ rounds.\n\nWe will now show that $O(\\log^* x)$ iterations of the color reduction algorithm will reduce the number of colors from $x$ to $O(\\Delta^2 \\log^2 \\Delta)$. We assume that in the beginning, both $x$ and $\\Delta$ are known. Therefore after each iteration, all nodes know the total number of colors.\n\nAssume that $x > 4(\\Delta+1)^2 \\log^2 \\Delta$. Writing $x_0 = x$, repeated iterations of the color reduction algorithm reduce the number of colors as follows:\n\\begin{align*}\n  x_0 \\mapsto x_1 &\\leq 4(\\Delta+1)^2 \\log^2 x, \\\\\n  x_1 \\mapsto x_2 &\\leq 4(\\Delta+1)^2 \\log^2 (4(\\Delta+1)^2 \\log^2 x) \\\\\n  & {} = 4(\\Delta+1)^2 \\bigl(\\log 4 + 2\\log (\\Delta+1) + 2\\log \\log x\\bigr)^2.\n\\intertext{If $\\log \\log x \\geq \\log 4 + 2\\log (\\Delta+1)$, we have that}\n  x_2 & \\leq 4(\\Delta+1)^2(3\\log \\log x)^2 \\\\\n  &= \\bigl(6 (\\Delta+1) \\log \\log x\\bigr)^2.\n\\intertext{In the next step, we reduce colors as follows:}\n  x_2 \\mapsto  x_3 &\\leq 4(\\Delta+1)^2 \\log^2 \\bigl(36(\\Delta+1)^2 (\\log \\log x)^2\\bigr) \\\\\n  & = 4(\\Delta+1)^2 \\bigl(\\log 36 + 2\\log (\\Delta+1) + 2\\log \\log \\log x\\bigr)^2.\n\\intertext{If $\\log \\log \\log x \\geq \\log 36 + 2\\log (\\Delta+1)$, we have that}\n  x_3 &\\leq 4(\\Delta+1)^2(3\\log \\log \\log x)^2 \\\\\n  &= \\bigl(6(\\Delta+1) \\log \\log \\log x\\bigr)^2.\n\\end{align*}\nNow we can see the pattern: as long as \n\\[\n\\log^{(i)} x \\geq \\log 36 + 2\\log (\\Delta+1),\n\\] \nwhere $\\log^{(i)} x$ is the $i$ times iterated logarithm of $x$, we reduce colors from $(6(\\Delta+1)\\log^{(i-1)} x)^2$ to $(6(\\Delta+1)\\log^{(i)} x)^2$ in the $i$th step.\n\n
Once $\\log^{(i)} x \\geq \\log 36 + 2\\log (\\Delta+1)$ no longer holds, we have a coloring with at most \n\\[\nc_{\\Delta} = 4(\\Delta+1)^2 \\bigl(3(\\log 36 + 2\\log (\\Delta+1))\\bigr)^2\n\\] \ncolors. We can numerically verify that for all $\\Delta \\geq 2$, we have that\n\\[\n  4(\\Delta+1)^2 \\bigl(3(\\log 36 + 2\\log (\\Delta+1))\\bigr)^2 \\leq (11(\\Delta+1))^3.\n\\]\nWe will use this observation in the next step.\n\nIt remains to calculate how many color reduction steps are required. By definition, after $T = \\log^* x$ iterations we have that $\\log^{(T)} x \\leq 1$. Thus, after at most $\\log^* x$ iterations of the color reduction algorithm we have a coloring with at most $c_{\\Delta}$ colors.\n\n
\\subsection{Final Color Reduction Step} \\label{ssec:final-step}\n\nIn the last step, we will reduce the coloring to an $O(\\Delta^2)$-coloring. We will use another construction of $\\Delta$-cover-free families based on polynomials.\n\n
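Before stating the construction, note that the numeric claim of the previous subsection is easy to check directly; a short sketch (assuming, as usual in this context, that logarithms are to base $2$):\n
\\begin{verbatim}\nfrom math import log2\n\n# c_Delta <= (11 (Delta+1))^3 for all Delta >= 2; the ratio of the two\n# sides tends to 0, so checking an initial range suffices in practice.\nfor D in range(2, 10**6):\n    c = 4 * (D + 1)**2 * (3 * (log2(36) + 2 * log2(D + 1)))**2\n    assert c <= (11 * (D + 1))**3\n\\end{verbatim}\n\n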
\\begin{lemma} \\label{lem:delta3-to-delta2}\n  For all $\\Delta$, there exists a $\\Delta$-cover-free family $J$ of $x$ subsets from a base set of $m \\le (22(\\Delta+1))^2$ elements for $x \\leq (11(\\Delta+1))^3$.\n\\end{lemma}\n\nThis immediately gives us the following color reduction algorithm.\n\n\\begin{corollary}\n  There is a distributed algorithm that, given a $(11(\\Delta+1))^3$-coloring as an input, in one round computes a $(22(\\Delta+1))^2$-coloring.\n\\end{corollary}\n\n
\\begin{proof}[Proof of Lemma~\\ref{lem:delta3-to-delta2}]\n  Our base set will be $X$ with $|X| = m = q^2$, for a prime $q$. Again it is useful to see $X = \\GF(q) \\times \\GF(q)$ as pairs of elements from the finite field over $q$ elements.\n  \n  Now consider the polynomials $\\Poly(2,q)$ of degree 2 over $\\GF(q)$. For each such polynomial $g \\in \\Poly(2,q)$, let \n  \\[\n    S_g = \\bigl\\{ (a, g(a)) \\bigm| a \\in \\GF(q) \\bigr\\}\n  \\]\n  be the pairs defined by the evaluations of the polynomial $g$ at each element of $\\GF(q)$. We have that $|S_g| = q$ for all $g$.\n  \n  Now we can construct the family \\[J = J_{2,q} = \\bigl\\{ S_g \\bigm| g \\in \\Poly(2,q) \\bigr\\}\\] as the collection of point sets defined by all polynomials of degree 2. We have that $|J| = q^3$ since a polynomial is uniquely determined by its coefficients.\n  \n  By Lemma~\\ref{lem:poly-roots}, we have that $|S_f \\cap S_g| \\leq 2$ for any distinct polynomials $f,g \\in \\Poly(2,q)$. Therefore covering any set $S_g$ requires at least $\\lceil q/2 \\rceil$ other sets (distinct from $S_g$) from $J$.\n  \n  We are now ready to prove that $J$ is a $\\Delta$-cover-free family for suitable parameter settings. Since each set $S_g$ contains $q$ elements, and the intersection between the sets of distinct polynomials is at most $2$, we want to find $q$ such that $2\\Delta \\leq q - 1$ and $q^3$ is large enough. Using the Bertrand--Chebyshev theorem we know that there exists a prime $q$ such that\n  \\[\n    11(\\Delta + 1) \\leq q \\leq 22(\\Delta + 1).\n  \\]\n  Any value $q$ from this range is large enough. The base set $X$ has size \n  \\[\n  m = q^2 \\leq (22(\\Delta + 1))^2.\n  \\]\n  The family $J$ has size\n  \\[\n    |J| = q^3 \\geq (11(\\Delta + 1))^3.\n  \\]\n  Finally, since we choose $q \\geq 11(\\Delta+1) \\geq 2\\Delta + 1$, any collection of $\\Delta$ sets $\\mathcal{S} = \\{ S_1, S_2, \\dots, S_{\\Delta} \\} \\subseteq J$ covers at most $2\\Delta < q$ elements of a set $S \\notin \\mathcal{S}$, and hence cannot cover it.\n\\end{proof}\n\n
\\section{Putting Things Together}\n\nIt remains to show how to use the three algorithms we have seen so far together.\n\n\\begin{theorem}\n  Assume that we know parameters $\\Delta$ and $n$, and some polynomial bound $n^c$ on the size of the unique identifiers. Graphs on $n$ vertices with maximum degree $\\Delta$ can be $(\\Delta+1)$-colored in $O(\\Delta + \\log^* n)$ rounds in the $\\LOCAL$ model.\n\\end{theorem}\n\n\\begin{proof}\n  We begin with the unique identifiers, and treat them as an initial coloring with $n^c$ colors. \n  \\begin{enumerate}\n    \\item In the first phase we run the efficient color reduction algorithm from Section~\\ref{ssec:efficient-cr} for $T_1 = \\log^* (n^c) = O(\\log^* n)$ rounds to produce a coloring $y_1$ with at most $(11(\\Delta+1))^3$ colors.\n    \\item In the second phase, after $T_1$ rounds have passed, each vertex can apply the final color reduction step from Section~\\ref{ssec:final-step} to compute a coloring $y_2$. 
This reduces colors from $(11(\\Delta+1))^3$ to $(22(\\Delta+1))^2$.\n    \\item After $T_1 + 1$ rounds, we have computed an $O(\\Delta^2)$-coloring $y_2$. Now each vertex runs the additive-group coloring algorithm from Section~\\ref{sec:additive-group-col}, applying it with $y_2$ as input. For a prime parameter $q$ with $22(\\Delta+1) \\leq q \\leq 2 \\cdot 22(\\Delta+1) = 44\\Delta+44$ (such a prime exists by the Bertrand--Chebyshev theorem, and it satisfies $q^2 \\geq (22(\\Delta+1))^2$ and $q > 2\\Delta$), this algorithm runs for $T_2 = q$ steps and computes a $q$-coloring $y_3$.\n    \\item In the last phase, after $T_1 + 1 + T_2$ rounds, we apply the greedy color reduction algorithm from Section~\\ref{sec:bdgreedy} iteratively $T_3 = 43\\Delta+43$ times. Each iteration requires one round and reduces the number of colors by one, until $\\Delta+1$ colors remain.\n  \\end{enumerate}\n  After a total of\n  \\begin{align*}\n      T_1 + 1 + T_2 + T_3 &\\leq \\log^* (n^c) + 87\\Delta + 88 \\\\\n      &= O(\\Delta + \\log^* n)\n  \\end{align*}\n  rounds, we have computed a $(\\Delta+1)$-coloring.\n\\end{proof}\n\n
\\section{Quiz}\n\nConsider the algorithm from Section~\\ref{sec:additive-group-col-alg} in the following setting:\n\\begin{itemize}[noitemsep]\n    \\item The network is a complete graph with $n = 4$ nodes; hence the maximum degree is $\\Delta = 3$, and we can choose $q = 7 > 2\\Delta$.\n    \\item We are given a coloring with $q^2 = 49$ colors; we can represent the possible input colors as pairs $(0,0),\\allowbreak (0,1),\\allowbreak \\dotsc,\\allowbreak (6,6)$.\n\\end{itemize}\nGive an example of an input coloring such that we need to do exactly $6$ iterations of the algorithm until all nodes have reached their final colors, i.e., colors of the form $(0,x)$.\n\nPlease give the answer by listing the four original color pairs of the nodes in any order; for example, if we asked for a coloring in which you need exactly 3 iterations, this would be a correct answer: $(2, 3)$, $(3, 2)$, $(3, 6)$, $(4, 6)$.\n\n
\\section{Exercises}\n\n\\begin{ex}[applications]\n    Let $\\Delta$ be a known constant, and let $\\calF$ be the family of graphs of maximum degree at most $\\Delta$. Design fast distributed algorithms that solve the following problems on $\\calF$ in the $\\LOCAL$ model.\n    \\begin{subex}\n        \\item Maximal independent set.\n        \\item Maximal matching.\n        \\item Edge coloring with $O(\\Delta)$ colors.\n    \\end{subex}\n    You can assume that all nodes get the value of $n$ (the number of nodes) as input; also the parameter $c$ in the identifier space is a known constant, so all nodes know the range of unique identifiers.\n\\end{ex}\n\n
\\begin{ex}[vertex cover]\n    Let $\\calF$ consist of cycle graphs. Design a fast distributed algorithm that finds a \\Apx{1.1} of a minimum vertex cover on $\\calF$ in the $\\LOCAL$ model.\n    \n    \\hint{Solve small problem instances by brute force and focus on the case of long cycles. In a long cycle, use a graph coloring algorithm to find a $3$-coloring, and then use the $3$-coloring to construct a maximal independent set. Observe that a maximal independent set partitions the cycle into short fragments (with 2--3 nodes in each fragment).\n\n    Apply the same approach recursively: interpret each fragment as a ``supernode'' and partition the cycle that is formed by the supernodes into short fragments, etc. Eventually, you have partitioned the original cycle into \\emph{long} fragments, with dozens of nodes in each fragment.\n    \n    Find an optimal vertex cover within each fragment. 
Make sure that the solution is feasible near the boundaries, and prove that you are able to achieve the required approximation ratio.}\n\\end{ex}\n\n\\begin{ex}[iterated greedy]\\label{ex:greedy-iterate}\n    Design a color reduction algorithm $A$ with the following properties:\n    given any graph $G = (V,E)$ and any proper vertex coloring~$f$,\n    algorithm $A$ outputs a proper vertex coloring~$g$ such that\n    for each node $v \\in V$ we have $g(v) \\le \\deg_G(v) + 1$.\n    \n    Let $\\Delta$ be the maximum degree of $G$, let $n = |V|$ be the number of nodes in $G$, and let $x$ be the number of colors in coloring $f$. The running time of $A$ should be at most\n    \\[\n        \\min \\{ n, x \\} + O(1).\n    \\]\n    Note that the algorithm does not know $n$, $x$, or $\\Delta$. Also note that we may have either $x \\le n$ or $x \\ge n$.\n    \n    \\hint{Adapt the basic idea of the greedy color reduction algorithm\\mydash find local maxima and choose appropriate colors for them\\mydash but pay attention to the stopping conditions and low-degree nodes. One possible strategy is this: a node becomes active if its current color is a local maximum among those neighbors that have not yet stopped; once a node becomes active, it selects an appropriate color and stops.}\n\\end{ex}\n\n\\begin{ex}[distance-$2$ coloring]\\label{ex:distance2col}\n    Let $G = (V,E)$ be a graph. A \\emph{distance-$2$ coloring with $k$ colors} is a function $f \\colon V \\to \\{1,2,\\dotsc,k\\}$ with the following property:\n    \\[\n        \\dist_G(u,v) \\le 2 \\text{ implies } f(u) \\ne f(v) \\text{ for all nodes } u \\ne v.\n    \\]\n\n    Let $\\Delta$ be a known constant, and let $\\calF$ be the family of graphs of maximum degree at most $\\Delta$. Design a fast distributed algorithm that finds a distance-$2$ coloring with $O(\\Delta^2)$ colors for any graph $G \\in \\calF$ in the $\\LOCAL$ model.\n\n    You can assume that all nodes get the value of $n$ (the number of nodes) as input; also the parameter $c$ in the identifier space is a known constant, so all nodes know the range of unique identifiers.\n\n    \\hint{Given a graph $G \\in \\calF$, construct a virtual graph $G^2 = (V, E')$ as follows: $\\{u,v\\} \\in E'$ if $u \\ne v$ and $\\dist_G(u,v) \\le 2$. Prove that the maximum degree of $G^2$ is $O(\\Delta^2)$. Simulate a fast graph coloring algorithm on $G^2$.}\n\\end{ex}\n\n\\begin{exs}[numeral systems]\\label{ex:dpbit-base}\n    The fast color reduction algorithm from Section~\\longref{1.4}{sec:algo-p3cbit} is based on the idea of identifying a digit that differs in the \\emph{binary} encodings of the colors. Generalize the idea: design an analogous algorithm that finds a digit that differs in the base-$k$ encodings of the colors, for an arbitrary $k$, and analyze the running time of the algorithm (cf.\\ Exercise~\\longref{1.6}{ex:logstar-tight}). Is the special case of $k = 2$ the best possible choice?\n\\end{exs}\n\n\\begin{exs}[from bits to sets]\\label{ex:dpset}\n    The fast color reduction algorithm from Section~\\longref{1.4}{sec:algo-p3cbit} can reduce the number of colors from $2^x$ to $2x$ in one round in any directed pseudoforest, for any positive integer $x$. 
For example, we can reduce the number of colors as follows:\n    \\[\n        2^{128} \\to 256 \\to 16 \\to 8 \\to 6.\n    \\]\n    One of the problems is that an iterated application of the algorithm slows down and eventually ``gets stuck'' at $x = 3$, i.e., at six colors.\n    \n    In this exercise we will design a faster color reduction algorithm that reduces the number of colors from\n    \\[\n        h(x) = \\binom{2x}{x}\n    \\]\n    to $2x$ in one round, for any positive integer $x$. For example, we can reduce the number of colors as follows:\n    \\[\n        184756 \\to 20 \\to 6 \\to 4.\n    \\]\n    Here\n    \\begin{align*}\n        184756 &= h(10), \\\\\n        2 \\cdot 10 = 20 &= h(3), \\\\\n        2 \\cdot 3 = 6 &= h(2).\n    \\end{align*}\n    In particular, the algorithm does not get stuck at six colors; we can use the same algorithm to reduce the number of colors to four. Moreover, at least in this case the algorithm seems to be much more efficient\\mydash it can reduce the number of colors from $184756$ to $6$ in two rounds, while the prior algorithm requires three rounds to achieve the same reduction.\n    \n    The basic structure of the new algorithm follows the fast color reduction algorithm\\mydash in particular, we use one communication round to compute the values $s(v)$ for all nodes $v \\in V$. However, the technique for choosing the new color is different: as the name suggests, we will not interpret colors as bit strings but as \\emph{sets}.\n    \n    To this end, let $H(x)$ consist of all subsets\n    \\[\n        X \\subseteq \\{1,2,\\dotsc,2x\\}\n    \\]\n    with $|X| = x$. There are precisely $h(x)$ such subsets, and hence we can find a bijection\n    \\[\n        L\\colon \\{1,2,\\dotsc,h(x)\\} \\to H(x).\n    \\]\n    \n    We have $f(v) \\ne s(v)$. Hence $L(f(v)) \\ne L(s(v))$. As both $L(f(v))$ and $L(s(v))$ are subsets of size $x$, it follows that\n    \\[\n        L(f(v)) \\setminus L(s(v)) \\ne \\emptyset.\n    \\]\n    We choose the new color $g(v)$ of a node $v \\in V$ as follows:\n    \\[\n        g(v) = \\min \\bigl( L(f(v)) \\setminus L(s(v)) \\bigr).\n    \\]\n\n    Prove that this algorithm works correctly. In particular, show that $g\\colon V \\to \\{1,2,\\dotsc,2x\\}$ is a proper graph coloring of the directed pseudoforest~$G$.\n    \n    Analyze the running time of the new algorithm and compare it with the old algorithm. Is the new algorithm always faster? Can you prove a general result analogous to the claim of Exercise~\\longref{1.6}{ex:logstar-tight}?\n\\end{exs}\n\n\n
\\begin{exs}[dominating set approximation]\\label{ex:greedy-domset}\n    Let $\\Delta$ be a known constant, and let $\\calF$ be the family of graphs of maximum degree at most $\\Delta$. Design an algorithm that finds an \\Apx{O(\\log \\Delta)} of a minimum dominating set on $\\calF$ in the $\\LOCAL$ model.\n    \n    \\hint{First, design (or look up) a greedy \\emph{centralized} algorithm that achieves an approximation ratio of $O(\\log \\Delta)$ on $\\calF$. The following idea will work: repeatedly pick a node that \\emph{dominates} as many new nodes as possible\\mydash here a node $v \\in V$ is said to dominate all nodes in $\\ball_G(v,1)$. For more details, see a textbook on approximation algorithms, e.g., Vazirani~\\cite{vazirani01approximation}.\n    \n    Second, show that you can \\emph{simulate} the centralized greedy algorithm in a distributed setting. Use the algorithm of Exercise~\\ref{ex:distance2col} to construct a distance-$2$ coloring. 
Prove that the following strategy is a faithful simulation of the centralized greedy algorithm: \n    \\begin{itemize}[label=--,noitemsep]\n        \\item For each possible value $i = \\Delta+1, \\Delta, \\dotsc, 2, 1$:\n        \\begin{itemize}[label=--,nolistsep,topsep=1ex]\n            \\item For each color $j = 1, 2, \\dotsc, O(\\Delta^2)$:\n            \\begin{itemize}[label=--,nolistsep,topsep=1ex]\n                \\item Pick all nodes $v \\in V$ that are of color $j$ and that dominate $i$ new nodes.\n            \\end{itemize}\n        \\end{itemize}\n    \\end{itemize}\n    The key observation is that if $u,v \\in V$ are two distinct nodes of the same color, then the set of nodes dominated by $u$ and the set of nodes dominated by $v$ are disjoint. Hence it does not matter whether the greedy algorithm picks $u$ before $v$ or $v$ before $u$, provided that both of them are equally good from the perspective of the number of new nodes that they dominate. Indeed, we can equally well pick both $u$ and $v$ simultaneously in parallel.}\n\\end{exs}\n\n\n
\\section{Bibliographic Notes}\n\nThe model of computing is from Linial's~\\cite{linial92locality} seminal paper, and the name $\\LOCAL$ is from Peleg's~\\cite{peleg00distributed} book. The additive-group coloring algorithm is due to Barenboim et al.\\ \\cite{barenboim18iterative}. The efficient color reduction algorithm is from Linial~\\cite{linial92locality}, and the construction of cover-free families from Barenboim and Elkin~\\cite{barenboim13distributed}.\nThe algorithm of Exercise~\\ref{ex:greedy-domset} is from Friedman and Kogan~\\cite{friedman11deterministic}. The Bertrand--Chebyshev theorem was first proven by Chebyshev~\\cite{chebyshev1852primes}. The proof of Lemma~\\ref{lem:poly-roots} follows the proofs of Abraham~\\cite{abraham2020polynomials}.\n\n
\\documentclass[a4paper,10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\\usepackage{dsfont}\n\\usepackage{pdfpages}\n\\usepackage{bm}\n\\usepackage[ruled,vlined]{algorithm2e}\n\\theoremstyle{definition}\n\\newtheorem{exmp}{Example}[section]\n\\newtheorem{remark}{Remark}\n\n\\usepackage{float}\n\\restylefloat{table}\n\n\\usepackage[square,numbers]{natbib}\n\\bibliographystyle{unsrtnat}\n\n\\title{Generalized Convolutional Neural Network}\n\\author{Mikael Hedberg}\n\n\\begin{document}\n\n\\maketitle\n\n
\\section{Introduction}\nThe purpose of this paper is to derive the equations for a generalized convolutional neural network in such depth that one may easily implement it in C++ or CUDA / OpenCL. View this document as the theory behind, and the reference for, the implementation. The big picture is to implement and combine several state-of-the-art machine learning algorithms and see what kind of cool applications one could write. This is a small step in that direction.\n\n
\\section{Generalized CNN}\n\nBefore jumping into the in-depth calculations, let us start with a picture describing what the different layers of the network could look like.\\\\\n\n\\begin{figure}[h!]\n  \\centering\n    \\includegraphics[scale=0.3]{convolutional-neural-network}\n      \\caption{An example of a convolutional neural network \\cite{convNetPicture}. L stands for layer, the boxes correspond to two dimensional images (maps) and the circles are perceptrons.}\n      \\label{fig:convNet}\n\\end{figure}\n\nWith reference to Figure \\ref{fig:convNet} let us define some useful notation:\\\\\n\n
\\section{Fundamental Equations}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|p{4cm}| l| l|}\n\t\t\\hline\n\t\tLayer index& $l$& $0\\leq l \\leq L, \\quad l,L \\in \\mathbb{N}$ \\\\\n\t\t\\hline\n\t\tThe number of images in layer $l$ & $N^{l}$ & $N^{l} \\in \\mathbb{N}$\\\\\n\t\t\\hline\n\t\tThe $i$:th image in layer $l$ & $\\pmb{Y}_i^{(l)}$  & $ 1 \\leq i \\leq N^{l}$ \\\\\n\t\t\\hline\n\t\tActivation function of layer $l$ & $\\phi^{(l)}$ & $\\phi^{(l)} : \\mathbb{R} \\rightarrow \\mathbb{R}$\\\\\t\t\n\t\t\\hline\n\t\tSet of parameters in layer $l$ & $\\Omega^{(l)}$  & $\\{\\omega_i \\in \\mathbb{R}\\}$\\\\\n\t\t\\hline\n\t\tCombinator function of image $i$ at coordinate $(x,y)$ in layer $l$ & $z^{(l)}_{i,xy}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})$& $z^{(l)}_{i,xy}: (\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)}) \\rightarrow \\mathbb{R}$\\\\\n\t\t\\hline\n\t\tImage coordinates of image $i$ in layer $l$ & $S^l_i$ & $S^l_i \\subset \\mathbb{N}^2$\\\\\n\t\t\\hline\n\t\tConnected images in layer $l+1$ to map $i$ in layer $l$& $C^l_i$& $C^l_i \\subset \\mathbb{N}$ \\\\\n\t\t\\hline\n\t\t Connected images in layer $l$-$1$ to map $i$ in layer $l$& $D_i^l$& $D^l_i \\subset \\mathbb{N}$\\\\\n\t\t \\hline\n\t\\end{tabular}\n\t\\caption{Definitions and notations}\n\t\\label{tab:notations}\n\\end{table}\n
The network that we will work with in this paper is given by:\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = \\phi^{(l)}([\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy})\n\\label{eq:abstractNetwork}\n\\end{equation}\n\nEq. \\ref{eq:abstractNetwork} is very abstract: it supports different image sizes, arbitrary connections, shared parameters between units and even images that have random indexing. By deriving the gradient of this equation using back-propagation, we get a very general expression for all kinds of feed-forward networks. Observe that it can be easily expanded to indexing in any dimension, just by replacing $(x,y)$.\\\\\n\n
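Although the target implementation language is C++ / CUDA, the abstraction itself is worth pinning down in executable form. Here is a minimal Python/NumPy sketch of Eq. \\ref{eq:abstractNetwork} (all names are illustrative and not part of the derivation):\n
\\begin{verbatim}\nimport numpy as np\n\nclass Layer:\n    # One layer l: maps the images Y^(l-1) to Y^(l) via the combinator\n    # functions Z_i and the element-wise activation phi.\n    def __init__(self, combinator, phi, params):\n        self.Z = combinator   # Z(Ys_prev, params) -> list of arrays Z_i\n        self.phi = phi        # scalar activation, applied element-wise\n        self.params = params  # the parameter set Omega^(l)\n\n    def forward(self, Ys_prev):\n        return [self.phi(Z_i) for Z_i in self.Z(Ys_prev, self.params)]\n\\end{verbatim}\n
Every layer type discussed below (MLP, radial basis, convolution) is obtained by plugging in a specific combinator and parameter set.\n\n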
Let us now consider a set of training data $T = \\{(\\pmb{Y}_{1:N^{0}}^{(0)}, \\pmb{T}_{1:N^L})_t,  1 \\leq t \\leq N^T \\}$. By defining an appropriate error function $r_t : (\\pmb{Y}_{1:N^L}^{(L)}, \\pmb{T}_{1:N^L})_t \\rightarrow \\mathbb{R}$ we may now try to minimize the total error by adjusting the parameters of the layers $\\Omega^{(l)}$:\n\\begin{equation}\nE = \\sum_{t = 1}^{N^T}r_t(\\pmb{Y}_{1:N^L}^{(L)}, \\pmb{T}_{1:N^L})_t\n\\end{equation}\n\nFrom here on, we will omit the subscript $t$ since it will only complicate the notation. For batches one may just add $t$ onto all of the below variables and sum over it. Observe that the notation $\\Omega^{(k)}_m$ means the variable $m$ in the set of parameters $\\Omega^{(k)}$.\n\n
\\begin{gather}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^L}\\sum_{(x,y) \\in S^L_i} \\frac{\\partial r}{\\partial [\\pmb{Y}^{(L)}_i]_{xy}} \\frac{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}{\\partial \\Omega^{(k)}_m} \\\\\n\\frac{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}{\\partial \\Omega^{(k)}_m} = \\phi^{(L)'}([\\pmb{Z}^{(L)}_{i}]_{xy}) \\sum_{j = 1}^{N^{L-1}}\\sum_{(x_1,y_1) \\in S^{L-1}_j}\\frac{\\partial [\\pmb{Z}^{(L)}_{i}]_{xy}}{\\partial [\\pmb{Y}^{(L-1)}_j]_{x_1y_1}}\\frac{\\partial [\\pmb{Y}^{(L-1)}_j]_{x_1y_1}}{\\partial \\Omega^{(k)}_m}\n\\end{gather}\n\n
In general, we have the following relation for any layer $l \\neq L$:\n\\begin{equation}\n\\frac{\\partial [\\pmb{Y}^{(l)}_i]_{xy}}{\\partial \\Omega^{(k)}_m} = \\phi^{(l)'}([\\pmb{Z}^{(l)}_{i}]_{xy}) \\sum_{j = 1}^{N^{l-1}}\\sum_{(x_1,y_1) \\in S^{l-1}_j}\\frac{\\partial [\\pmb{Z}^{(l)}_{i}]_{xy}}{\\partial [\\pmb{Y}^{(l-1)}_j]_{x_1y_1}}\\frac{\\partial [\\pmb{Y}^{(l-1)}_j]_{x_1y_1}}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\nFrom this we can define deltas that will simplify the gradient calculation.\n\\begin{equation}\n[\\pmb{\\delta}_i^{(L)}]_{xy} = \\frac{\\partial r}{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}\\phi^{(L)'}([\\pmb{Z}^{(L)}_{i}]_{xy})\n\\label{eq:fundamentalDeltaL}\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\label{eq:fundamentalDelta}\n\\end{equation}\n\nFinally, the gradient is given by:\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\sum_{(x,y) \\in S^{k}_i} [\\pmb{\\delta}_i^{(k)}]_{xy} \\frac{\\partial [\\pmb{Z}^{(k)}_i]_{xy}}{\\partial \\Omega^{(k)}_m}\n\\label{eq:fundamentalWeight}\n\\end{equation}\n\nI call Eq. 
\\ref{eq:abstractNetwork}, \\ref{eq:fundamentalDeltaL}, \\ref{eq:fundamentalDelta}, \\ref{eq:fundamentalWeight} the fundamental feed-forward network equations, since they define the gradient and feed-forward relations of any feed-forward network known to the author.\n\n
\\subsection{MLP}\n\nLet us recap the fundamental equations:\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = \\phi^{(l)}([\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_i^{(L)}]_{xy} = \\frac{\\partial r}{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}\\phi^{(L)'}([\\pmb{Z}^{(L)}_{i}]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\sum_{(x,y) \\in S^{k}_i} [\\pmb{\\delta}_i^{(k)}]_{xy} \\frac{\\partial [\\pmb{Z}^{(k)}_i]_{xy}}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
For a fully connected layer such as an MLP layer, we always have the possibility to reshape the images $\\pmb{Y}_i^{(l)}, \\pmb{Y}_j^{(l-1)}$ so that they are equivalent to vectors $\\pmb{y}^{(l)}, \\pmb{y}^{(l-1)}$. Hence, each coordinate set $S_i^{l}, S_i^{l-1}$ contains only one coordinate and we may skip the $(x,y)$ notation. We also have $\\phi^{(l)}$ acting component-wise on the input. We may therefore simplify the fundamental equations to:\n\n\\begin{equation}\ny_i^{(l)} = \\phi^{(l)}(z^{(l)}_{i}(y^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)}))\n\\end{equation}\n\\begin{equation}\n\\delta_i^{(L)} = \\frac{\\partial r}{\\partial y^{(L)}_i}\\phi^{(L)'}(z^{(L)}_{i})\n\\end{equation}\n\\begin{equation}\n\\delta_j^{(l)} = \\sum_{i = 1}^{N^{l+1}} \\delta^{(l+1)}_i \\frac{\\partial z^{(l + 1)}_{i}}{\\partial y^{(l)}_j} \\phi^{(l)'}(z^{(l)}_{j})\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\delta_i^{(k)} \\frac{\\partial z^{(k)}_i}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
Let us define $z^{(l)}_i$ by its vector form:\n\\begin{equation}\n\\pmb{z}^{(l)} = \\pmb{W}^{(l)}\\pmb{y}^{(l-1)} + \\pmb{b}^{(l)}\n\\label{eq:mlp}\n\\end{equation}\nwhere $\\pmb{Y}^{(l-1)}_{1:N^{l-1}} = \\pmb{y}^{(l-1)}$ and $\\Omega^{(l)} = \\{\\pmb{W}^{(l)}, \\pmb{b}^{(l)} \\}$. By looking at Eq. \\ref{eq:mlp} component-wise we obtain the familiar form:\n\\begin{equation}\ny_i^{(l)} = \\phi^{(l)}(\\sum_{j = 1}^{N^{l-1}}\\omega_{ij}^{(l)} y_j^{(l-1)} + b^{(l)}_i)\n\\end{equation}\nwhich does indeed look very similar to the component-wise Eq. \\ref{eq:abstractNetwork}.\\\\\nIn vector form we have \n\\begin{equation}\n\\pmb{y}^{(l)} = \\phi^{(l)}(\\pmb{W}^{(l)}\\pmb{y}^{(l-1)} + \\pmb{b}^{(l)})\n\\end{equation}\n\nLet us now look at the remaining fundamental equations. We will look at two different output activations and derive the corresponding fundamental equations for each case. 
First, consider MSE with a linear output, which is useful when training a network for regression.\n\\begin{gather}\n\\phi^{(L)}(x) = x\\\\\nr(\\pmb{t},\\pmb{y}^{(L)}) =\\frac{1}{2} \\| \\pmb{t} - \\pmb{y}^{(L)} \\|^2\n\\label{eq:MLP_SSE_LINEAR}\n\\end{gather}\nThe fundamental equation for the output now transforms into\n\\begin{equation}\n\\delta_i^{(L)} =\\frac{1}{2} \\frac{\\partial \\| \\pmb{t} - \\pmb{y}^{(L)} \\|^2}{\\partial y_i^{(L)}} = -(t_i - y_i^{(L)})\n\\end{equation}\nIn vector notation\n\\begin{equation}\n\\pmb{\\delta}^{(L)} = - (\\pmb{t} - \\pmb{y}^{(L)})\n\\end{equation}\n\n
We will now consider a cross-entropy error function with softmax activation. The problem in this case is that the activation function $\\phi^{(L)}$ acts on a single input, whereas the softmax activation needs to depend on all inputs of the layer. We will therefore set $\\phi^{(L)}$ to the identity transform and write the output in terms of $\\pmb{Z}_i^{(L)}(\\pmb{Y}^{(L-1)}_{1:N^{L-1}}, \\Omega^{(L)})$, which can take any shape needed.\n\\begin{gather}\n\\phi^{(L)}(x) = x\\\\\nz^{(L)}_i = \\frac{\\exp\\{ \\sum_{j = 1}^{N^{L-1}}\\omega_{ij}^{(L)} y_j^{(L-1)} + b^{(L)}_i\\}}{\\sum_{k = 1}^{N^L} \\exp\\{\\sum_{j = 1}^{N^{L-1}}\\omega_{kj}^{(L)} y_j^{(L-1)} + b^{(L)}_k\\}}\n\\end{gather}\nFor a single binary classification task, the cross-entropy function takes the form\n\\begin{equation}\nr(\\pmb{t}, \\pmb{y}^{(L)}) = -(t \\log(y^{(L)}) + (1 - t)\\log(1 - y^{(L)})),\n\\end{equation} \nwhich also implies that we need to use an ordinary sigmoid activation instead of the softmax activation function. For multiple classes the CE function is given by\n\\begin{equation}\nr(\\pmb{t}, \\pmb{y}^{(L)}) = - \\sum_{i = 1}^{N^L} t_i \\log{y^{(L)}_i}\n\\end{equation}\nLet us now calculate the delta at the output layer for this case:\n\\begin{equation}\n\\delta_i^{(L)} =-\\frac{\\partial\\sum_{j = 1}^{N^L} t_j \\log{y^{(L)}_j}}{\\partial y_i^{(L)}} = -\\frac{t_i}{y_i^{(L)}}\n\\end{equation} \nHowever, with this change in $z_i^{(L)}$ extra care has to be taken with the derivative $\\frac{\\partial z_i^{(L)}}{\\partial y_j^{(L-1)}}$. It can be shown that\n\\begin{equation}\n\\sum_{i = 1}^{N^L}\\delta_i^{(L)}\\frac{\\partial z_i^{(L)}}{\\partial y_j^{(L-1)}} = -\\sum_{i = 1}^{N^L}(t_i - y_i^{(L)})\\omega_{ij}^{(L)}\n\\end{equation}\n\nIn any of these cases (one-dimensional sigmoid with CE, softmax with CE, or linear output with MSE) we obtain exactly the same update equations.\n\n
\\begin{remark}\n\tNote that in the case of a one-dimensional output with sigmoid $\\phi^{(L)}$ and binary CE, we obtain the same equation as in the regression case, i.e.\\ 
$\\delta^{(L)} = -(t - y^{(L)})$.\n\\end{remark}\n\nA case that will not give the same update equations is if we use the MSE error function together with a non-trivial activation function at layer $L$:\n\\begin{equation}\n\\delta_i^{(L)} =\\frac{1}{2} \\frac{\\partial \\| \\pmb{t} - \\pmb{y}^{(L)} \\|^2}{\\partial y_i^{(L)}} = -(t_i - y_i^{(L)}) \\phi^{'(L)}(z_i^{(L)})\n\\end{equation}\nThis may also be extended to the CE function if you carefully choose the output activation function $\\phi^{(L)}(z_i^{(L)})$ so that its values stay strictly between $0$ and $1$:\n\\begin{equation}\n\\delta_i^{(L)} = -\\frac{t_i - y_i^{(L)}}{y^{(L)}_i(1 - y^{(L)}_i)} \\phi^{'(L)}(z_i^{(L)})\n\\end{equation}\n\n\n
For a layer other than the output layer, we have the following expressions (where the activation function is normally a tanh or a sigmoid):\n\\begin{gather}\nz_i^{(l)} = \\sum_{j = 1}^{N^{l-1}}\\omega_{ij}^{(l)} y_j^{(l-1)} + b^{(l)}_i \\\\\n\\delta^{(l)}_i = \\sum_{j = 1}^{N^{l + 1}}\\omega_{ji}^{(l+1)} \\delta_j^{(l+1)}\\phi^{'(l)}(z_i^{(l)})\n\\end{gather}\nOr in vector notation\n\\begin{equation}\n\\pmb{\\delta}^{(l)} = \\pmb{W}^{(l+1)T} \\pmb{\\delta}^{(l+1)} \\circ \\phi^{'(l)}(\\pmb{z}^{(l)})\n\\end{equation}\n\n
Let us finish this section by calculating the gradient of such a layer.\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(l)}_m} = \\sum_{i = 1}^{N^{l}}\\delta_i^{(l)} \\frac{\\partial z^{(l)}_i}{\\partial \\Omega^{(l)}_m}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\omega^{(l)}_{kj}} = \\sum_{i = 1}^{N^{l}}\\delta_i^{(l)} \\frac{\\partial \\sum_{m = 1}^{N^{l-1}}\\omega_{im}^{(l)} y_m^{(l-1)} + b^{(l)}_i}{\\partial \\omega^{(l)}_{kj}}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\omega^{(l)}_{kj}} = \\delta_k^{(l)}y_j^{(l-1)}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial b^{(l)}_{k}} = \\delta_k^{(l)}\n\\end{equation}\n\nIn vector form\n\\begin{equation}\n\\Delta\\pmb{W}^{(l)} = \\pmb{\\delta}^{(l)} \\pmb{y}^{(l-1)T}\n\\end{equation}\n\\begin{equation}\n\\Delta \\pmb{b}^{(l)} = \\pmb{\\delta}^{(l)}\n\\end{equation}\n\n
The important equations are summarized in the tables below:\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|p{4cm}| l|}\n\t\t\\hline\n\t\tError function & $r(\\pmb{t}, \\pmb{y}^{(L)})$ \\\\\n\t\t\\hline\n\t\tCE, binary& $-(t \\log(y^{(L)}) + (1 - t)\\log(1 - y^{(L)}))$\\\\\n\t\t\\hline\n\t\tCE, multiple classes& $- \\sum_{i = 1}^{N^L} t_i \\log{y^{(L)}_i}$\\\\\n\t\t\\hline\n\t\tMSE & $\\frac{1}{2} \\| \\pmb{t} - \\pmb{y}^{(L)} \\|^2$\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Common error functions.}\n\\end{table}\n\n
\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|p{4cm}| l|}\n\t\t\\hline\n\t\t(Error, Activation) & $\\pmb{\\delta}^{(L)}$ \\\\\n\t\t\\hline\n\t\t(CE, Softmax) & $- (\\pmb{t} - \\pmb{y}^{(L)})$ \\\\\n\t\t\\hline\n\t\t(CE - binary, Sigmoid) & $- (t - y^{(L)})$ \\\\\n\t\t\\hline\n\t\t(MSE, Linear) & $- (\\pmb{t} - \\pmb{y}^{(L)})$ \\\\\n\t\t\\hline\n\t\t(MSE, Any) & $ -(\\pmb{t} - \\pmb{y}^{(L)}) \\circ \\phi^{'(L)}(\\pmb{z}^{(L)})$ \\\\\n\t\t\\hline\n\t\t(CE, Any) & $- \\pmb{t} \\circ \\frac{1}{\\pmb{y}^{(L)}}\\circ \\phi^{'(L)}(\\pmb{z}^{(L)})$ \\\\\n\t\t\\hline\n\t\t(CE - binary, Any) & $-\\frac{t - y^{(L)}}{y^{(L)}(1 - y^{(L)})} \\phi^{'(L)}(z^{(L)})$ \\\\\n\t\t\\hline\t\t\t\n\t\\end{tabular}\n\t\\caption{Output deltas for different activations and error functions.}\n\t\\label{tab:outputDeltaMLP}\n\\end{table}\n\n
\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|p{4cm}| l|}\n\t\t\\hline\n\t\tFeed forward  & $\\pmb{z}^{(l)} = \\pmb{W}^{(l)}\\pmb{y}^{(l-1)} + \\pmb{b}^{(l)}$ \\\\\n\t\t\\hline\n\t\tFeed forward & $\\pmb{y}^{(l)} = \\phi^{(l)}(\\pmb{z}^{(l)})$\\\\\n\t\t\\hline\n\t\tBack propagation & $\\pmb{\\delta}^{(L)}$ by Table \\ref{tab:outputDeltaMLP}\\\\\n\t\t\\hline\n\t\tBack propagation & $\\pmb{\\delta}^{(l)} = \\pmb{W}^{(l+1)T} \\pmb{\\delta}^{(l+1)} \\circ \\phi^{'(l)}(\\pmb{z}^{(l)})$\\\\\n\t\t\\hline\n\t\tWeight update & $\\Delta\\pmb{W}^{(l)} = \\pmb{\\delta}^{(l)} \\pmb{y}^{(l-1)T}$\\\\\n\t\t\\hline\n\t\tBias update & $\\Delta \\pmb{b}^{(l)} = \\pmb{\\delta}^{(l)}$\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Fundamental MLP equations.}\n\\end{table}\n\n
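The table above translates almost line by line into code. Here is a minimal NumPy sketch for the (MSE, Linear) case (function and variable names are illustrative, not part of the derivation):\n
\\begin{verbatim}\nimport numpy as np\n\ndef mlp_backprop(Ws, bs, phis, dphis, x, t):\n    ys, zs = [x], []                      # forward pass\n    for W, b, phi in zip(Ws, bs, phis):\n        zs.append(W @ ys[-1] + b)\n        ys.append(phi(zs[-1]))\n    delta = -(t - ys[-1])                 # (MSE, Linear) output delta\n    grads = []\n    for l in reversed(range(len(Ws))):\n        grads.append((np.outer(delta, ys[l]), delta))  # dW^(l), db^(l)\n        if l > 0:                         # back-propagate the delta\n            delta = (Ws[l].T @ delta) * dphis[l - 1](zs[l - 1])\n    return grads[::-1]\n\\end{verbatim}\n
Here \\texttt{phis[-1]} is assumed to be the identity, matching the linear output activation; for the other rows of the output-delta table only the initial \\texttt{delta} changes.\n\n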
\\subsection{Radial basis layer}\nLet us once again recall the fundamental equations.\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = \\phi^{(l)}([\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_i^{(L)}]_{xy} = \\frac{\\partial r}{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}\\phi^{(L)'}([\\pmb{Z}^{(L)}_{i}]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\sum_{(x,y) \\in S^{k}_i} [\\pmb{\\delta}_i^{(k)}]_{xy} \\frac{\\partial [\\pmb{Z}^{(k)}_i]_{xy}}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
Once again, as for the MLP, if we have image inputs we may always reshape the maps into vectors. We may therefore omit the subscript $(x,y)$, yielding the fundamental equations below:\n\\begin{equation}\ny_i^{(l)} = \\phi^{(l)}(z^{(l)}_{i}(y^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)}))\n\\end{equation}\n\\begin{equation}\n\\delta_i^{(L)} = \\frac{\\partial r}{\\partial y^{(L)}_i}\\phi^{(L)'}(z^{(L)}_{i})\n\\end{equation}\n\\begin{equation}\n\\delta_j^{(l)} = \\sum_{i = 1}^{N^{l+1}} \\delta^{(l+1)}_i \\frac{\\partial z^{(l + 1)}_{i}}{\\partial y^{(l)}_j} \\phi^{(l)'}(z^{(l)}_{j})\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\delta_i^{(k)} \\frac{\\partial z^{(k)}_i}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
For a radial basis layer, the separation into an activation and a combinator function does not really make sense. We will thus always take $\\phi^{(l)}(x) = x$, the identity map. We may therefore simplify the fundamental equations further.\n\\begin{equation}\ny_i^{(l)} = z^{(l)}_{i}(y^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})\n\\end{equation}\n\\begin{equation}\n\\delta_i^{(L)} = \\frac{\\partial r}{\\partial y^{(L)}_i}\n\\end{equation}\n\\begin{equation}\n\\delta_j^{(l)} = \\sum_{i = 1}^{N^{l+1}} \\delta^{(l+1)}_i \\frac{\\partial z^{(l + 1)}_{i}}{\\partial y^{(l)}_j}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\delta_i^{(k)} \\frac{\\partial z^{(k)}_i}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
For a general radial basis layer, we usually set\\\\ $z^{(l)}_{i}(y^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)}) = \\rho(\\|y^{(l-1)}_{1:N^{l-1}} - \\pmb{c}_i^{(l)}\\|)$, which in practice normally looks like: \n\\begin{equation}\nz^{(l)}_{i}(y^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)}) = \\exp\\{ -\\beta_i\\|y^{(l-1)}_{1:N^{l-1}} - \\pmb{c}_i^{(l)}\\| \\}\n\\end{equation}\n\n
The parameter set thus consists of $\\Omega^{(l)} = \\{(\\pmb{c}_i^{(l)}, \\beta_i) \\}$. If one is interested in using RBFs as outputs, a common approach is to linearly combine the RBFs at the output layer. This in turn may be viewed as\n\\begin{equation}\nz^{(L)}_{i}(y^{(L-1)}_{1:N^{L-1}}, \\Omega^{(L)}) = \\sum_{j \\in D^{L}_i} a_{ij}\\, y_j^{(L-1)}, \\qquad y_j^{(L-1)} = \\rho(\\|y^{(L-2)}_{1:N^{L-2}} - \\pmb{c}_j^{(L-1)}\\|)\n\\end{equation}\nwhere $D^{L}_i$ is, as indicated in the notation table, the set of images connected to node $i$; normally the output layer is fully connected. The parameter set of the last layer is thus $\\Omega^{(L)} = \\{a_{ij} \\}$.\n\n
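As with the MLP, the forward computation is only a few lines of NumPy; a sketch with illustrative names (\\texttt{centers} and \\texttt{betas} store the $\\pmb{c}_i$ and $\\beta_i$, \\texttt{A} the output weights $a_{ij}$):\n
\\begin{verbatim}\nimport numpy as np\n\ndef rbf_forward(y_prev, centers, betas):\n    # y_i = exp(-beta_i * ||y_prev - c_i||), one unit per center\n    return np.array([np.exp(-b * np.linalg.norm(y_prev - c))\n                     for c, b in zip(centers, betas)])\n\n# Fully connected linear output layer on top of the RBF units:\n# z_out = A @ rbf_forward(y_prev, centers, betas)\n\\end{verbatim}\n\n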
\\subsection{Convolution Layer}\n\nAs usual, we start with the fundamental equations:\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = \\phi^{(l)}([\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_i^{(L)}]_{xy} = \\frac{\\partial r}{\\partial [\\pmb{Y}^{(L)}_i]_{xy}}\\phi^{(L)'}([\\pmb{Z}^{(L)}_{i}]_{xy})\n\\end{equation}\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial r}{\\partial \\Omega^{(k)}_m} = \\sum_{i = 1}^{N^{k}}\\sum_{(x,y) \\in S^{k}_i} [\\pmb{\\delta}_i^{(k)}]_{xy} \\frac{\\partial [\\pmb{Z}^{(k)}_i]_{xy}}{\\partial \\Omega^{(k)}_m}\n\\end{equation}\n\n
This is the first time that the coordinates will come in handy. Observe, however, that we will omit the output-layer delta equation (i.e.\\ $[\\pmb{\\delta}_i^{(L)}]_{xy}$) for a convolutional layer, since it is not needed immediately; it is straightforward to derive for the interested reader.\\\\\n\nLet us start off by defining\n\\begin{equation}\n[\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy} = \\sum_{j \\in D_i^{l}} (\\pmb{Y}_j^{(l-1)} \\star \\pmb{\\omega}_{ij}^{(l)})(x,y) + b^{(l)}_i\n\\end{equation}\n\\begin{equation}\n(\\pmb{Y}_j^{(l-1)} \\star \\pmb{\\omega}_{ij}^{(l)})(x,y) = \\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} [\\pmb{Y}_j^{(l-1)}]_{x + u, y + v} \\cdot [\\pmb{\\omega}_{ij}^{(l)}]_{u,v}\n\\end{equation}\nwhere $M_u, M_v$ are the width and height of the kernel $\\pmb{\\omega}_{ij}^{(l)}$. Furthermore, we have the parameters $\\Omega^{(l)} = \\{\\pmb{\\omega}_{ij}^{(l)}, \\pmb{b}^{(l)} \\}$.\n
\\begin{remark}\n\tObserve that in most cases $\\pmb{\\omega}_{ij}^{(l)} = \\pmb{\\omega}_{i}^{(l)}$\n\\end{remark}\n\\begin{remark}\n\tObserve that there is nothing limiting us at the moment from using different sizes of the kernels in the same layer. However, in practice this is rarely done.\n\\end{remark}\n\\begin{remark}\n\tIn practical applications where sub-sampling is used, one may in some cases simply sub-sample by taking every $k$:th coordinate in the x,y directions. Instead of convolving the entire image and then sub-sampling every $k$:th pixel, we may put the sub-sampling operation directly into the convolution operation, yielding:\n\t\\begin{equation}\n\t(\\pmb{Y}_j^{(l-1)} \\star \\pmb{\\omega}_{ij}^{(l)})(x,y) = \\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} [\\pmb{Y}_j^{(l-1)}]_{kx + u, ky + v} \\cdot [\\pmb{\\omega}_{ij}^{(l)}]_{u,v}\n\t\\end{equation}\n\t\\label{remark:convolutionSubSamping}\n\\end{remark}\n\n
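Before deriving the deltas, here is what the feed-forward pass of a single output map looks like in a NumPy sketch (illustrative names; \\texttt{kernels[j]} plays the role of $\\pmb{\\omega}_{ij}$ for the fixed map $i$, and \\texttt{k} is the sub-sampling step of the last remark):\n
\\begin{verbatim}\nimport numpy as np\n\ndef conv_forward_map(Ys_prev, kernels, b_i, D_i, phi, k=1):\n    # [Z_i]_{xy} = sum_{j in D_i} sum_{u,v} Y_j[kx+u, ky+v] w_ij[u,v] + b_i\n    Mu, Mv = kernels[D_i[0]].shape\n    H, W = Ys_prev[D_i[0]].shape\n    Z = np.full(((H - Mu) // k + 1, (W - Mv) // k + 1), float(b_i))\n    for j in D_i:\n        for x in range(Z.shape[0]):\n            for y in range(Z.shape[1]):\n                patch = Ys_prev[j][k*x:k*x + Mu, k*y:k*y + Mv]\n                Z[x, y] += np.sum(patch * kernels[j])\n    return phi(Z)\n\\end{verbatim}\n
The triple loop is written for clarity; a real C++/CUDA implementation would of course block and vectorize it.\n\n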
We will now turn to the interesting part of the convolutional layer: how do we calculate the deltas?\n\n\\begin{gather}\n\\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}(\\pmb{Y}^{(l)}_{1:N^{l}}, \\Omega^{(l + 1)})]_{xy}}{\\partial [\\pmb{Y}^{(l)}_j]_{x_1y_1}} = \\frac{\\partial \\sum_{k \\in D_i^{l+1}} (\\pmb{Y}_k^{(l)} \\star \\pmb{\\omega}_{ik}^{(l + 1)})(x,y) + b^{(l+1)}_i}{\\partial [\\pmb{Y}^{(l)}_j]_{x_1y_1}} =\\\\\n\\frac{\\partial \\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} [\\pmb{Y}_j^{(l)}]_{x + u, y + v} \\cdot [\\pmb{\\omega}_{ij}^{(l + 1)}]_{u,v}}{\\partial [\\pmb{Y}^{(l)}_j]_{x_1y_1}} \\mathds{1}_{\\{j \\in D^{l+1}_i\\}} = \\\\\n\\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} \\mathds{1}_{\\{x_1 = x + u, y_1 = y + v\\}} \\cdot [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v}\\mathds{1}_{\\{j \\in D^{l+1}_i\\}}\n\\end{gather}\nThis last expression is hard to build an intuition for on its own, so let us put it into the right context.\n\n
\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\end{equation}\n\\begin{gather}\n[\\pmb{\\delta}_j^{(l)}]_{xy} = \\\\\n\\sum_{i = 1}^{N^{l+1}}\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} \\mathds{1}_{\\{x = x_1 + u, y = y_1 + v\\}} \\cdot [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v}\\mathds{1}_{\\{j \\in D^{l+1}_i\\}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\label{eq:horrible}\n\\end{gather}\nThe above expression looks horrible, so let us take some time to analyze it.\n\\begin{equation}\n\\sum_{(x_1,y_1) \\in S^{l+1}_i} [\\pmb{\\delta}^{(l+1)}_i]_{x_1y_1} \\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v} \\mathds{1}_{\\{x = x_1 + u, y = y_1 + v\\}} \\cdot [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v}\n\\label{eq:intermediateConvolution}\n\\end{equation}\nWhat does this equation really mean? Imagine that we have performed the summation over all $(x_1, y_1)$. The only terms that survive are the ones given by $(x = x_1 + u, y = y_1 + v)$. This is true, but one should be careful about performing the variable substitution directly: observe that $(x,y)$ is defined on $S^l_j$ whereas $(x_1, y_1)$ is defined on $S_i^{l+1}$. By performing the summation and substituting $(x_1 = x-u, y_1 = y-v)$, we may have for some combination of $(x,y,u,v)$ that $(x_1 = x-u, y_1 = y-v) \\notin S_i^{l+1}$. Therefore, when performing the summation we have to take this into account. Eq. \\ref{eq:intermediateConvolution} is therefore transformed into:\n\\begin{equation}\n\\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v}  [\\pmb{\\delta}^{(l+1)}_i]_{x - u,y - v}  [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v} \\mathds{1}_{\\{(x - u, y - v) \\in S^{l+1}_i\\}}\n\\label{eq:simplifiedIntermediateConvolution}\n\\end{equation}\nA mathematician would probably already be screaming about the omitted details: the expression above is not quite well defined, in the sense that $[\\pmb{\\delta}^{(l+1)}_i]_{x - u,y - v}$ may not even exist for some index combinations; the $\\mathds{1}$ factor is what indicates this. So imagine that the size of $\\pmb{\\delta}^{(l+1)}_i$ is expanded if you want to be rigorous.\\\\\n\n
Let us now turn to the implementor's perspective. A very simple method of not hassling with the boundaries is to expand $\\pmb{\\delta}^{(l+1)}_i$ so that we have zeros located on the coordinates where $\\mathds{1}_{\\{(x - u, y - v) \\in S^{l+1}_i\\}}$ does not hold. Let us denote this padded delta by $\\tilde{\\pmb{\\delta}}^{(l+1)}_i$. We obtain\n\\begin{equation}\n\\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v}  [\\tilde{\\pmb{\\delta}}^{(l+1)}_i]_{x - u,y - v}  [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v}\n\\label{eq:negativeConvolution}\n\\end{equation}    \n\nFurthermore, we observe that the convolution defined by Eq. \\ref{eq:negativeConvolution} is equivalent to the convolution defined previously if we simply rotate the kernel $\\pmb{\\omega}_{ij}^{(l+1)}$ by 180 degrees. Let us denote this rotated kernel by $\\tilde{\\pmb{\\omega}}_{ij}^{(l+1)}$.\n\\begin{equation}\n\\sum_{u = 0}^{M_u} \\sum_{v = 0}^{M_v}  [\\tilde{\\pmb{\\delta}}^{(l+1)}_i]_{x - u,y - v}  [\\pmb{\\omega}_{ij}^{(l+1)}]_{u,v} = (\\tilde{\\pmb{\\delta}}^{(l+1)}_i \\star \\tilde{\\pmb{\\omega}}_{ij}^{(l + 1)})(x,y)\n\\end{equation}\nEq. \\ref{eq:horrible} is now simplified to:\n\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} =\\sum_{i = 1}^{N^{l+1}}(\\tilde{\\pmb{\\delta}}^{(l+1)}_i \\star \\tilde{\\pmb{\\omega}}_{ij}^{(l + 1)})(x,y)\\mathds{1}_{\\{j \\in D^{l+1}_i\\}} \\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\label{eq:convolutionDeltaNeedsSimplification}\n\\end{equation}\nNoting that summing over $i$ with the factor $\\mathds{1}_{\\{j \\in D^{l+1}_i\\}}$ is equivalent to summing over $C^l_j$, the images in layer $l+1$ connected to $j$, Eq. \\ref{eq:convolutionDeltaNeedsSimplification} is transformed into:\n\\begin{equation}\n[\\pmb{\\delta}_j^{(l)}]_{xy} =\\sum_{i \\in C^l_j}(\\tilde{\\pmb{\\delta}}^{(l+1)}_i \\star \\tilde{\\pmb{\\omega}}_{ij}^{(l + 1)})(x,y)\\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\label{eq:convolutionDeltaSimplified}\n\\end{equation}\n\n
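In code, Eq. \\ref{eq:convolutionDeltaSimplified} is exactly a zero-padded correlation with a flipped kernel. A NumPy sketch (illustrative names; \\texttt{kernels[i]} is $\\pmb{\\omega}_{ij}$ for the fixed map $j$, and \\texttt{dphi\\_Zj} holds $\\phi^{(l)'}([\\pmb{Z}^{(l)}_j]_{xy})$):\n
\\begin{verbatim}\nimport numpy as np\n\ndef conv_backprop_delta(deltas_next, kernels, C_j, dphi_Zj):\n    out = np.zeros_like(dphi_Zj)\n    for i in C_j:\n        w = kernels[i]\n        Mu, Mv = w.shape\n        pad = np.pad(deltas_next[i], ((Mu - 1, Mu - 1), (Mv - 1, Mv - 1)))\n        rot = w[::-1, ::-1]          # the kernel rotated by 180 degrees\n        for x in range(out.shape[0]):\n            for y in range(out.shape[1]):\n                out[x, y] += np.sum(pad[x:x + Mu, y:y + Mv] * rot)\n    return out * dphi_Zj\n\\end{verbatim}\n
Expanding $\\tilde{\\pmb{\\delta}}$ by zeros, as suggested above, is precisely the \\texttt{np.pad} call.\n\n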
\nLet us now turn to the implementor's perspective. A very simple way of avoiding hassle with the boundaries is to allow expansion of $\pmb{\delta}^{(l+1)}_i$ so that we have zeros located on the coordinates where $\mathds{1}_{\{(x - u, y - v) \in S^{l+1}_i\}}$ doesn't hold. Let us denote this delta by $\tilde{\pmb{\delta}}^{(l+1)}_i$. We then obtain\n\begin{equation}\n\sum_{u = 0}^{M_u} \sum_{v = 0}^{M_v}  [\tilde{\pmb{\delta}}^{(l+1)}_i]_{x - u,y - v}  [\pmb{\omega}_{ij}^{(l+1)}]_{u,v}\n\label{eq:negativeConvolution}\n\end{equation}\n\nFurthermore, we observe that the convolution defined by Eq. \ref{eq:negativeConvolution} is equivalent to the previously defined convolution with the kernel $\pmb{\omega}_{ij}^{(l+1)}$ rotated by 180 degrees. Let us denote this rotated kernel by $\tilde{\pmb{\omega}}_{ij}^{(l+1)}$.\n\begin{equation}\n\sum_{u = 0}^{M_u} \sum_{v = 0}^{M_v}  [\tilde{\pmb{\delta}}^{(l+1)}_i]_{x - u,y - v}  [\pmb{\omega}_{ij}^{(l+1)}]_{u,v} = (\tilde{\pmb{\delta}}^{(l+1)}_i \star \tilde{\pmb{\omega}}_{ij}^{(l + 1)})(x,y)\n\end{equation}\nEq. \ref{eq:horrible} is now simplified to:\n\n\begin{equation}\n[\pmb{\delta}_j^{(l)}]_{xy} =\sum_{i = 1}^{N^{l+1}}(\tilde{\pmb{\delta}}^{(l+1)}_i \star \tilde{\pmb{\omega}}_{ij}^{(l + 1)})(x,y)\mathds{1}_{\{j \in D^{l+1}_i\}} \phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy})\n\label{eq:convolutionDeltaNeedsSimplification}\n\end{equation}\nNoting that $\sum_{i = 1}^{N^{l+1}}\mathds{1}_{\{j \in D^{l+1}_i\}}$ is equivalent to summing over all maps in layer $l+1$ that are connected to $j$, i.e. over $C^l_j$, Eq. \ref{eq:convolutionDeltaNeedsSimplification} is transformed into:\n\begin{equation}\n[\pmb{\delta}_j^{(l)}]_{xy} =\sum_{i \in C^l_j}(\tilde{\pmb{\delta}}^{(l+1)}_i \star \tilde{\pmb{\omega}}_{ij}^{(l + 1)})(x,y)\phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy})\n\label{eq:convolutionDeltaSimplified}\n\end{equation}\n\n\begin{remark}\n\tLet us continue on Remark \ref{remark:convolutionSubSamping}, which optimizes a combined convolution and sub-sampling operation by doing both simultaneously. In this case Eq. \ref{eq:intermediateConvolution} takes the form\n\t\begin{equation}\n\t\sum_{(x_1,y_1) \in S^{l+1}_i} [\pmb{\delta}^{(l+1)}_i]_{x_1y_1} \sum_{u = 0}^{M_u} \sum_{v = 0}^{M_v} \mathds{1}_{\{x = kx_1 + u, y = ky_1 + v\}} \cdot [\pmb{\omega}_{ij}^{(l+1)}]_{u,v}\n\t\label{eq:intermediateConvolutionSubSampling}\n\t\end{equation}\n\tThe coordinate change is not as straightforward in this case, since we are dealing with integer equations that may not be directly divided. However, this case can be handled rather neatly just by using vanilla sub-sampling and discarding the unnecessary coordinates.\n\end{remark}\n
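\nEq. \ref{eq:convolutionDeltaSimplified} translates rather directly into code: pad the deltas of layer $l+1$ with zeros (the tilde expansion), rotate each kernel by 180 degrees, and convolve. Below is a minimal NumPy sketch, under the assumption of valid-convolution shapes; the function name and dictionary-style kernel layout are our own illustration.\n\begin{verbatim}\nimport numpy as np\n\ndef conv_backprop_delta(deltas_next, kernels, C, dphi_Z):\n    # deltas_next[i]: delta map of layer l+1\n    # kernels[i]: kernel omega_ij for fixed map j of layer l\n    # C: maps in layer l+1 connected to j; dphi_Z: phi'(Z_j^(l))\n    H, W = dphi_Z.shape\n    acc = np.zeros((H, W))\n    for i in C:\n        Mu, Mv = kernels[i].shape\n        # tilde-delta: zeros outside S^{l+1}_i\n        pad = np.pad(deltas_next[i],\n                     ((Mu - 1, Mu - 1), (Mv - 1, Mv - 1)))\n        rot = kernels[i][::-1, ::-1]   # 180-degree rotation\n        for x in range(H):\n            for y in range(W):\n                acc[x, y] += np.sum(pad[x:x + Mu, y:y + Mv] * rot)\n    return acc * dphi_Z\n\end{verbatim}\nThe padded-and-rotated loop is exactly the ``full'' convolution of the zero-expanded delta with $\tilde{\pmb{\omega}}_{ij}^{(l+1)}$.\n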
\nLastly, let us now derive the equations for the weight updates.\n\n\begin{equation}\n\frac{\partial r}{\partial [\pmb{\omega}_{pq}^{(l)}]_{u,v}} = \sum_{i = 1}^{N^{l}}\sum_{(x,y) \in S^{l}_i} [\pmb{\delta}_i^{(l)}]_{xy} \frac{\partial [\pmb{Z}^{(l)}_i]_{xy}}{\partial [\pmb{\omega}_{pq}^{(l)}]_{u,v}}\n\label{eq:weightConv}\n\end{equation}\n\begin{equation}\n[\pmb{Z}^{(l)}_i]_{xy} = \sum_{j \in D_i^{l}} \sum_{m = 0}^{M_u} \sum_{n = 0}^{M_v} [\pmb{Y}_j^{(l-1)}]_{x + m, y + n} \cdot [\pmb{\omega}_{ij}^{(l)}]_{m,n} + b^{(l)}_i\n\label{eq:zConv}\n\end{equation}\nBy inserting Eq. \ref{eq:zConv} into Eq. \ref{eq:weightConv} we obtain\n\begin{equation}\n\frac{\partial r}{\partial [\pmb{\omega}_{pq}^{(l)}]_{u,v}} = \sum_{i = 1}^{N^{l}}\sum_{(x,y) \in S^{l}_i} [\pmb{\delta}_i^{(l)}]_{xy} \sum_{j \in D_i^{l}} \sum_{m = 0}^{M_u} \sum_{n = 0}^{M_v} [\pmb{Y}_j^{(l-1)}]_{x + m, y + n} \cdot \frac{\partial[\pmb{\omega}_{ij}^{(l)}]_{m,n}}{\partial[\pmb{\omega}_{pq}^{(l)}]_{u,v}}\n\end{equation}\nThe derivative $\frac{\partial[\pmb{\omega}_{ij}^{(l)}]_{m,n}}{\partial[\pmb{\omega}_{pq}^{(l)}]_{u,v}}$ equals $\mathds{1}_{\{(i,j,m,n) = (p,q,u,v)\}}$, so only the term with $i = p$, $j = q$, $m = u$ and $n = v$ survives.\n\n\begin{equation}\n\frac{\partial r}{\partial [\pmb{\omega}_{pq}^{(l)}]_{u,v}} = \sum_{(x,y) \in S^{l}_p}[\pmb{\delta}_p^{(l)}]_{xy} [\pmb{Y}_q^{(l-1)}]_{x + u, y + v}\n\end{equation}\nAnd for the more common case where we don't distinguish the kernel between the connections, we obtain\n\begin{equation}\n\frac{\partial r}{\partial [\pmb{\omega}_{i}^{(l)}]_{u,v}} = \sum_{(x,y) \in S^{l}_i} \sum_{j \in D^{l}_i}[\pmb{\delta}_i^{(l)}]_{xy} [\pmb{Y}_j^{(l-1)}]_{x + u, y + v}\n\end{equation}\nFor the bias we obtain\n\begin{equation}\n\frac{\partial r}{\partial b^{(l)}_i} =  \sum_{(x,y) \in S^{l}_i}  [\pmb{\delta}_i^{(l)}]_{xy}\n\end{equation}\n\nLet us now recap all the important equations.\n\begin{table}[H]\n\t\centering\n\t\begin{tabular}{|p{4cm}| p{8cm}|}\n\t\t\hline\n\t\tConvolution definition & $(\pmb{Y}_j^{(l-1)} \star \pmb{\omega}_{ij}^{(l)})(x,y) = \sum_{u = 0}^{M_u} \sum_{v = 0}^{M_v} [\pmb{Y}_j^{(l-1)}]_{x + u, y + v} \cdot [\pmb{\omega}_{ij}^{(l)}]_{u,v}$ \\\n\t\t\hline\n\t\tFeed forward  & $[\pmb{Y}^{(l)}_i]_{xy} =  \phi^{(l)} (\sum_{j \in D_i^{l}} (\pmb{Y}_j^{(l-1)} \star \pmb{\omega}_{ij}^{(l)})(x,y) + b^{(l)}_i)$ \\\n\t\t\hline\n\t\tDelta padding & $\tilde{\pmb{\delta}}^{(l+1)}_i$ by padding according to $\mathds{1}_{\{(x - u, y - v) \in S^{l+1}_i\}}$, which corresponds to a padding of size $M_u, M_v$ around $\pmb{\delta}_i^{(l+1)}$.\\\n\t\t\hline\n\t\tKernel rotation & $\tilde{\pmb{\omega}}_{ij}^{(l+1)} = rot180(\pmb{\omega}_{ij}^{(l+1)})$\\\n\t\t\hline \n\t\tBack propagation & $[\pmb{\delta}_j^{(l)}]_{xy} =\sum_{i \in C^l_j}(\tilde{\pmb{\delta}}^{(l+1)}_i \star \tilde{\pmb{\omega}}_{ij}^{(l + 1)})(x,y)\phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy})$\\\n\t\t\hline\n\t\tKernel update with multiple kernels per map& $\frac{\partial r}{\partial [\pmb{\omega}_{pq}^{(l)}]_{u,v}} = \sum_{(x,y) \in S^{l}_p}[\pmb{\delta}_p^{(l)}]_{xy} [\pmb{Y}_q^{(l-1)}]_{x + u, y + v}$\\\n\t\t\hline\n\t\tKernel update with one kernel per map & $\frac{\partial r}{\partial [\pmb{\omega}_{i}^{(l)}]_{u,v}} = \sum_{(x,y) \in S^{l}_i} \sum_{j \in D^{l}_i}[\pmb{\delta}_i^{(l)}]_{xy} [\pmb{Y}_j^{(l-1)}]_{x + u, y + v}$\\\n\t\t\hline\n\t\tBias update & $\frac{\partial r}{\partial b^{(l)}_i} =  \sum_{(x,y) \in S^{l}_i}  [\pmb{\delta}_i^{(l)}]_{xy}$\\\n\t\t\hline\n\t\end{tabular}\n\t\caption{Fundamental convolution equations.}\n\end{table}\n
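\nThe kernel update from the recap table is just a correlation of the delta map with the input maps. A minimal NumPy sketch for the one-kernel-per-map case (our own function name; shapes assume the valid convolution used above):\n\begin{verbatim}\nimport numpy as np\n\ndef conv_kernel_grad(delta, Y_prev, D, Mu, Mv):\n    # delta: delta map of layer l (H x W)\n    # Y_prev[j]: input maps of layer l-1, D: connected inputs\n    grad = np.zeros((Mu, Mv))\n    H, W = delta.shape\n    for j in D:\n        for u in range(Mu):\n            for v in range(Mv):\n                grad[u, v] += np.sum(delta * Y_prev[j][u:u + H, v:v + W])\n    bias_grad = np.sum(delta)   # dr / db_i, the bias update row\n    return grad, bias_grad\n\end{verbatim}\n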
For simplicity, we assume the same sizes in the layers, because otherwise a lot more subscripts have to be added to the notation. Instead of trying a variable substitution as in the convolution case, we notice here that the summation\\\n$\sum_{(x_1,y_1) \in S^{l+1}_i}\sum_{u = 0}^{K_u - 1}\sum_{v = 0}^{K_v - 1}\mathds{1}_{\{x = x_1K_u + u, y = y_1K_v + v\}}$ will cover the entire image of the previous layer, since we don't allow overlapping. Concretely this means that the value $[\pmb{\delta}^{(l+1)}_i]_{x_1y_1}$ appears on the sub-rectangles $\{(x_1K_u + u, y_1K_v + v) : 0 \leq u < K_u,\ 0 \leq v < K_v\}$. This important insight simplifies Eq. \ref{eq:subSampleDeltaComplex} into\n\begin{equation}\n[\pmb{\delta}_j^{(l)}]_{xy} = \sum_{i \in C_j^{l}} \alpha_{ij}[\pmb{\delta}^{(l+1)}_i]_{\lfloor \frac{x}{K_u}\rfloor, \lfloor \frac{y}{K_v} \rfloor} \phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy})\n\end{equation}\nAnd of course, for the practical situations the delta transforms into\n\begin{equation}\n[\pmb{\delta}_j^{(l)}]_{xy} = \alpha_{j}[\pmb{\delta}^{(l+1)}_j]_{\lfloor \frac{x}{K_u}\rfloor, \lfloor \frac{y}{K_v} \rfloor} \phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy})\n\end{equation}\nLet us now derive the gradients for the weights.\n\begin{equation}\n\frac{\partial r}{\partial \alpha_{mn}} = \sum_{i = 1}^{N^{l}}\sum_{(x,y) \in S^{l}_i} [\pmb{\delta}_i^{(l)}]_{xy} \frac{\partial \sum_{j \in D^l_i} \alpha_{ij} \sum_{u = 0}^{K_u - 1}\sum_{v = 0}^{K_v - 1} [\pmb{Y}^{(l-1)}_j]_{xK_u + u, yK_v + v} + b_i^{(l)}}{\partial \alpha_{mn}}\n\end{equation}\nWe see directly that the sum is zero for all $(i,j) \neq (m,n)$. From this, we conclude that\n\begin{equation}\n\frac{\partial r}{\partial \alpha_{mn}} = \sum_{(x,y) \in S^{l}_m} \sum_{u = 0}^{K_u - 1}\sum_{v = 0}^{K_v - 1} [\pmb{Y}^{(l-1)}_n]_{xK_u + u, yK_v + v} [\pmb{\delta}_m^{(l)}]_{xy}\n\end{equation}\nFor most practical cases we end up with\n\begin{equation}\n\frac{\partial r}{\partial \alpha_{m}} = \sum_{(x,y) \in S^{l}_m} \sum_{n \in D^{l}_m} \sum_{u = 0}^{K_u - 1}\sum_{v = 0}^{K_v - 1} [\pmb{Y}^{(l-1)}_n]_{xK_u + u, yK_v + v} [\pmb{\delta}_m^{(l)}]_{xy}\n\end{equation}\nThe bias is simply given by\n\begin{equation}\n\frac{\partial r}{\partial b_{i}^{l}} = \sum_{(x,y) \in S^{l}_i} [\pmb{\delta}_i^{(l)}]_{xy}\n\end{equation}\nThe equations to remember are summarized in the table below.\n
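\nThe forward and backward passes for the practical ($j = i$) case fit in a few lines. A minimal NumPy sketch, assuming the map dimensions divide evenly by $(K_u, K_v)$ (function names are our own illustration):\n\begin{verbatim}\nimport numpy as np\n\ndef subsample_forward(Y_prev, alpha, b, Ku, Kv, phi):\n    # sum each Ku x Kv block, scale by alpha, add the bias\n    H, W = Y_prev.shape           # assumed divisible by Ku, Kv\n    blocks = Y_prev.reshape(H // Ku, Ku, W // Kv, Kv)\n    Z = alpha * blocks.sum(axis=(1, 3)) + b\n    return phi(Z), Z\n\ndef subsample_backprop_delta(delta_next, alpha, dphi_Z, Ku, Kv):\n    # the floor-index delta equation amounts to repeating each\n    # upper-layer delta over its Ku x Kv block\n    up = np.repeat(np.repeat(delta_next, Ku, axis=0), Kv, axis=1)\n    return alpha * up * dphi_Z\n\end{verbatim}\n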
single weights & $\\frac{\\partial r}{\\partial \\alpha_{m}} = \\sum_{(x,y) \\in S^{l}_m} \\sum_{n \\in D^{l}_m} \\sum_{u = 0}^{K_u - 1}\\sum_{v = 0}^{K_v - 1} [\\pmb{Y}^{(l-1)}_n]_{xK_u + u, yK_v + v} [\\pmb{\\delta}_m^{(l)}]_{xy}$\\\\\n\t\t\\hline\n\t\tBias update & $\\frac{\\partial r}{\\partial b_{i}^{l}} = \\sum_{(x,y) \\in S^{l}_i} [\\pmb{\\delta}_i^{(l)}]_{xy}$\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Fundamental weighed sub-sampling equations.}\n\\end{table}\n\\subsubsection{Max pooling}\nFor a max-pooling layer we set the activation function to the identity transform. Furthermore, max pooling from several simultaneous layers are omitted. Thus yielding:\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = [\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy} = \\max\\{[\\pmb{Y}^{(l-1)}_i]_{K_ux + u, K_vy + v}, 0 \\leq u \\leq K_u-1, 0 \\leq v \\leq K_v-1 \\}\n\\label{eq:maxPooling}\n\\end{equation}\nThe derivative of Eq. \\ref{eq:maxPooling} is actually much simpler than it looks.\n\\begin{gather}\n\\frac{\\partial [\\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}} = \\frac{\\max\\{[\\pmb{Y}^{(l)}_i]_{K_ux_1 + u, K_vy_1 + v}, 0 \\leq u \\leq K_u-1, 0 \\leq v \\leq K_v-1 \\}}{\\partial [\\pmb{Y}^{(l)}_j]_{xy}}\n\\end{gather} \nWe can quickly deduce that the above derivative is zero for $i \\neq j$. Furthermore we can see that the only value that will survive is if $(x,y) = (x_1K_u + u_{max}, y_1K_v + v_{max})$ inside the pooling region and this value is equal to 1. These two conditions transforms the fundamental equation over the deltas to:\n\\begin{equation}\n\t[\\pmb{\\delta}_j^{(l)}]_{xy} = \\mathds{1}_{\\{[\\pmb{Y}_j^{(l)}]_{xy} = \\max\\{[\\pmb{Y}^{(l)}_j]_{K_u\\lfloor\\frac{x}{K_u} \\rfloor + u, K_v\\lfloor\\frac{y}{K_v} \\rfloor + v} \\}\\}}[\\pmb{\\delta}_j^{(l+1)}]_{\\lfloor\\frac{x}{K_u} \\rfloor, \\lfloor\\frac{y}{K_v} \\rfloor}\\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})\n\\end{equation}\nIn this case, the table becomes quite small\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|p{2 cm}| l |}\n\t\t\\hline\n\t\tFeed forward& $[\\pmb{Y}^{(l)}_i]_{xy} = \\max\\{[\\pmb{Y}^{(l-1)}_i]_{K_ux + u, K_vy + v}, 0 \\leq u \\leq K_u-1, 0 \\leq v \\leq K_v-1 \\}$\\\\\n\t\t\\hline\n\t\tBack propagation & $[\\pmb{\\delta}_j^{(l)}]_{xy} = \\mathds{1}_{\\{[\\pmb{Y}_j^{(l)}]_{xy} = \\max\\{[\\pmb{Y}^{(l)}_j]_{K_u\\lfloor\\frac{x}{K_u} \\rfloor + u, K_v\\lfloor\\frac{y}{K_v} \\rfloor + v} \\}\\}}[\\pmb{\\delta}_j^{(l+1)}]_{\\lfloor\\frac{x}{K_u} \\rfloor, \\lfloor\\frac{y}{K_v} \\rfloor}\\phi^{(l)'}([\\pmb{Z}^{(l)}_{j}]_{xy})$\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Fundamental max-pooling equations.}\n\\end{table}\n\\subsubsection{Vanilla sub sampling}\nThis section aims at describing a very trivial sub sampling operation which simply takes every $k$:th coordinate to the second layer. As in the previous sub section we will not allow connections between several maps, since it doesn't make sense.\n\\begin{equation}\n[\\pmb{Y}_i^{(l)}]_{xy} = [\\pmb{Z}^{(l)}_{i}(\\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \\Omega^{(l)})]_{xy} = [\\pmb{Y}_i^{(l-1)}]_{xK_u, yK_v} \n\\label{eq:vanillaPooling}\n\\end{equation}\nThe derivative of Eq. 
\subsubsection{Vanilla sub-sampling}\nThis section describes a very trivial sub-sampling operation which simply takes every $k$-th coordinate to the next layer. As in the previous subsection we will not allow connections between several maps, since it doesn't make sense.\n\begin{equation}\n[\pmb{Y}_i^{(l)}]_{xy} = [\pmb{Z}^{(l)}_{i}(\pmb{Y}^{(l-1)}_{1:N^{l-1}}, \Omega^{(l)})]_{xy} = [\pmb{Y}_i^{(l-1)}]_{xK_u, yK_v} \n\label{eq:vanillaPooling}\n\end{equation}\nThe derivative of Eq. \ref{eq:vanillaPooling} is very simple\n\begin{equation}\n\frac{\partial [\pmb{Z}^{(l + 1)}_{i}]_{x_1y_1}}{\partial [\pmb{Y}^{(l)}_j]_{xy}} = \frac{\partial [\pmb{Y}_i^{(l)}]_{x_1K_u, y_1K_v}}{\partial [\pmb{Y}^{(l)}_j]_{xy}} \n\end{equation}\nAs in the max-pooling case we note that the above expression is zero if $i \neq j$. Secondly, we note that $(x,y) = (x_1K_u, y_1K_v)$ are the only values for which the derivative is non-zero. The fundamental equation takes the following form\n\begin{equation}\n[\pmb{\delta}_j^{(l)}]_{xy} =[\pmb{\delta}_j^{(l+1)}]_{\lfloor\frac{x}{K_u} \rfloor, \lfloor\frac{y}{K_v} \rfloor}\phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy}) \mathds{1}_{\{0 \equiv x \pmod{K_u}, 0\equiv y\pmod{K_v}\}}\n\end{equation} \n\nAs in the max-pooling case, the table becomes very simple\n\begin{table}[H]\n\t\centering\n\t\begin{tabular}{|p{2cm}| l |}\n\t\t\hline\n\t\tFeed forward& $[\pmb{Y}_i^{(l)}]_{xy}= [\pmb{Y}_i^{(l-1)}]_{xK_u, yK_v} $\\\n\t\t\hline\n\t\tBack propagation & $[\pmb{\delta}_j^{(l)}]_{xy} =[\pmb{\delta}_j^{(l+1)}]_{\lfloor\frac{x}{K_u} \rfloor, \lfloor\frac{y}{K_v} \rfloor}\phi^{(l)'}([\pmb{Z}^{(l)}_{j}]_{xy}) \mathds{1}_{\{0 \equiv x \pmod{K_u}, 0\equiv y\pmod{K_v}\}}$\\\n\t\t\hline\n\t\end{tabular}\n\t\caption{Fundamental vanilla pooling equations.}\n\end{table}\n\n\section{LM Optimization}\n\subsection{Multilayer Perceptron}\nFor smaller networks it's known that Levenberg--Marquardt optimization outperforms gradient descent both in training time and generalization. It's thus interesting to derive the equations for an LM-trained network.\\\n\nLet us concern ourselves with the following network:\n\begin{equation}\n\ty^{(l)}_i= \phi^{(l)}(\sum_{j=1}^{N^{(l-1)}} y^{(l-1)}_j\times \omega^{(l)}_{ji} + b^{(l)}_i)\n\end{equation}\n\nThe error function is the standard SSE function:\n\begin{equation}\n\tE = \sum_{i=1}^{N^t}\parallel \pmb{t}_i - \pmb{y}^{(L)}_{i}\parallel^2\n\end{equation}\n\nLM optimization applies to problems of the form\n\begin{equation}\n\tE(\pmb{\omega}) = \frac{1}{2}\sum_{i=1}^{N^t}r_i(\pmb{\omega})^2\n\end{equation}\n\nWhere $r_i(\pmb{\omega}) = \parallel \pmb{t}_i - \pmb{y}^{(L)}_{i}\parallel$ (the extra factor $\frac{1}{2}$ is a harmless rescaling of $E$). We note here that the gradient of the square sum is obtained by:\n\begin{equation}\n\t\frac{\partial E(\pmb{\omega})}{\partial \omega_j} = \sum_{i=1}^{N^t}r_i(\pmb{\omega})\frac{\partial r_i(\pmb{\omega})}{\partial \omega_j}\n\end{equation} \nBy using Levenberg--Marquardt the solution is given iteratively by:\n\begin{equation}\n\t(\pmb{J}^T\pmb{J} + \lambda\pmb{diag}(\pmb{J}^T\pmb{J}))\pmb{\Delta} = -\pmb{J}^T\pmb{r}(\pmb{\omega})\n\end{equation}\n\nWhere $(\pmb{J})_{ij} = \frac{\partial r_i(\pmb{\omega})}{\partial \omega_j}$. By using the previous definitions, the gradient of $E(\pmb{\omega})$ is obtained simply from the Jacobian and the residual vector. \n\begin{equation}\n\t\nabla E(\pmb{\omega}) = \pmb{J}^T \pmb{r}\n\end{equation}\nA remark on the notation is that:\\\n\begin{equation*}\n\t\pmb{\omega} = (\omega^{(1)}_{11}, \omega^{(1)}_{21} \hdots, \omega^{(1)}_{N^01}, \omega^{(1)}_{12} \hdots, \omega^{(1)}_{N^02}, \hdots, \omega^{(1)}_{N^0N^1},b^{(1)}_1, b^{(1)}_2\hdots, b^{(1)}_{N^1}, \omega^{(2)}_{11}, \hdots, b^{(L)}_{N^L})\n\end{equation*}\n\nThe goal is now to make use of the standard back-propagation equations and put them into LM form. 
Let us begin by calculating the derivative $\frac{\partial r_i(\pmb{\omega})}{\partial \omega_j}$. For readability, we hereon drop the subscript $i$ over the training set.\n\n\begin{equation}\n\t\frac{\partial r(\pmb{\omega})}{\partial \omega_i} = \frac{\partial}{\partial \omega_i}\parallel \pmb{t} - \pmb{y}^{(L)}\parallel = -\frac{1}{\parallel \pmb{t} - \pmb{y}^{(L)}\parallel} \sum_{k = 1}^{N^L}(t_k - y^{(L)}_k)\frac{\partial y^{(L)}_k}{\partial \omega_i}\n\end{equation}\n\n\begin{gather*}\n\t\frac{\partial y^{(L)}_k}{\partial \omega_i} = \phi^{(L)'}(z^{(L)}_k) \sum_{j=1}^{N^{(L-1)}} \frac{\partial y^{(L-1)}_j}{\partial \omega_i} \times \omega^{(L)}_{jk} \\\n\t\frac{\partial y^{(L-1)}_j}{\partial \omega_i} = \phi^{(L-1)'}(z^{(L-1)}_j) \sum_{m=1}^{N^{(L-2)}} \frac{\partial y^{(L-2)}_m}{\partial \omega_i} \times \omega^{(L-1)}_{mj}\\\n\t\vdots \\\n\t\frac{\partial y^{(l)}_n}{\partial \omega_i} = \phi^{(l)'}(z^{(l)}_n) y^{(l-1)}_p\n\end{gather*}\n\nThe last line holds true if $\omega_i = \omega^{(l)}_{pn}$. \\\n\nThe next step is now to derive the delta notation for the LM case. In order to do this, we put the equations together and identify the deltas.\n\n\begin{gather*}\n\t\frac{\partial r(\pmb{\omega})}{\partial \omega_i} = \frac{\partial}{\partial \omega_i}\parallel \pmb{t} - \pmb{y}^{(L)}\parallel = -\frac{1}{\parallel \pmb{t} - \pmb{y}^{(L)}\parallel} \sum_{k = 1}^{N^L}(t_k - y^{(L)}_k)\\\n\t\phi^{(L)'}(z^{(L)}_k) \sum_{j=1}^{N^{(L-1)}} \omega^{(L)}_{jk} \times \phi^{(L-1)'}(z^{(L-1)}_j) \sum_{m=1}^{N^{(L-2)}} \omega^{(L-1)}_{mj} \times\\\n\t\vdots\\\n\t\phi^{(l)'}(z^{(l)}_n) y^{(l-1)}_p\n\end{gather*}\n\nBy defining the deltas accordingly, we may simplify the above form.\n\n\begin{equation}\n\t\delta^{(L)}_k = (t_k - y^{(L)}_k)\n\end{equation}\n\begin{equation}\n\t\delta^{(l)}_j = \sum_{k=1}^{N^{l+1}} \delta^{(l+1)}_k \times \phi^{(l+1)'}(z^{(l+1)}_k) \times \omega^{(l+1)}_{jk}\n\end{equation}\n\nThis yields the following gradient equation:\n\begin{equation}\n\t\frac{\partial r(\pmb{\omega})}{\partial \omega^{(l)}_{pn}} = - \frac{1}{\parallel \pmb{t} - \pmb{y}^{(L)}\parallel} \phi^{(l)'}(z^{(l)}_n) \times y^{(l-1)}_p \times \delta^{(l)}_n\n\end{equation}\n\nWe now have everything needed to construct the LM equations.\n\n\begin{equation}\n\t\pmb{r}(\pmb{\omega}) = (r_1(\pmb{\omega}), r_2(\pmb{\omega}) ..., r_{N^t}(\pmb{\omega}))\n\end{equation}\n\n\begin{equation}\n\t[\pmb{J}]_{i,(l,p,n)} = \frac{\partial r_i(\pmb{\omega})}{\partial \omega^{(l)}_{pn}} = - \frac{1}{\parallel \pmb{t}_i - \pmb{y}^{(L)}_i\parallel} \phi^{(l)'}(z^{(l)}_{n,i}) \times y^{(l-1)}_{p,i} \times \delta^{(l)}_{n,i}\n\end{equation}\nWhere the extra subscript $i$ indicates evaluation on training sample $i$.\n
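\nGiven $\pmb{J}$ and $\pmb{r}$, one LM iteration is a single linear solve. A minimal NumPy sketch (our own function name; the damping schedule in the comment is the usual heuristic, not prescribed by this text):\n\begin{verbatim}\nimport numpy as np\n\ndef lm_step(J, r, lam):\n    # solve (J^T J + lam * diag(J^T J)) Delta = -J^T r\n    JTJ = J.T @ J\n    A = JTJ + lam * np.diag(np.diag(JTJ))\n    return np.linalg.solve(A, -J.T @ r)\n\n# Typical damping schedule: decrease lam after a successful step\n# (error went down), increase it otherwise.\n\end{verbatim}\n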
\n\subsection{Implementation Considerations}\nA difficult problem when implementing this approach for big neural networks is, apart from the memory requirements, the fact that the matrix\\ $(\pmb{J}^T\pmb{J} + \lambda\pmb{diag}(\pmb{J}^T\pmb{J}))$ is very often badly conditioned. The problem typically arises when the training data doesn't distinguish the parameters very well. One could try to multiply the left and right sides of the equation in order to improve the condition of the linear system. However, this is not guaranteed to work in every situation. A solution to this problem is to implement an SVD decomposition and solve the linear system through the pseudo-inverse.\n\nThe SVD decomposition of a matrix is given by:\n\begin{equation}\n\t\pmb{A} = \pmb{U}\pmb{W}\pmb{V}^T\n\end{equation}\nWhere $\pmb{A}$ is any matrix of dimension $M\times N$. $\pmb{U}$ is a column-orthogonal matrix of size $M\times N$, $\pmb{W}$ is a diagonal matrix with positive or zero elements, and $\pmb{V}$ is an $N\times N$ orthogonal matrix.\n\nThere are some important properties of the SVD decomposition. First of all, the columns of $\pmb{U}$ whose same-numbered elements $w_j$ in the diagonal matrix $\pmb{W}$ are non-zero form an orthonormal basis that spans the range of $\pmb{A}$. Secondly, the columns of $\pmb{V}$ whose elements $w_j$ are zero form an orthonormal basis for the nullspace of $\pmb{A}$. In order to understand why we may use this method to solve a linear equation system, let us consider the inverse of $\pmb{A}$ using its SVD form.\n\n\begin{equation}\n\t\pmb{A}^{-1} = \pmb{V}[diag(1 / w_i)] \pmb{U}^T\n\end{equation}\n\nFrom the above equation we see directly that we may encounter problems when some singular values are close to zero. Formally, we say that the matrix $\pmb{A}$ is ill-conditioned based on the condition number, which is simply the ratio between the greatest and the smallest singular value. In this way we can say whether or not the inverse will be susceptible to round-off errors during computation. For single precision, we'll have problems when the inverse condition number approaches the machine's floating point precision, which is around $10^{-7}$. By using double precision, we may boost this number to $10^{-15}$. The big question is now how we fix this problem so that a stable solution may be obtained. The fix is very simple and clever: we set $1/w_i$ to zero for small $w_i$. What counts as small is relative, but if $w_i$ is a factor $10^{7}$ smaller than the greatest singular value in single precision ($10^{15}$ in double precision), then we set the inverse to zero.\\\n\nAnother technique that can increase the stability of the method is to use pre-conditioning on  $(\pmb{J}^T\pmb{J} + \lambda\pmb{diag}(\pmb{J}^T\pmb{J}))$. An effective technique here is to multiply the matrix by the inverse of its diagonal. However, it must be checked that none of the diagonal elements are zero.\n
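\nThe truncated pseudo-inverse solve described above is a few lines of NumPy. A minimal sketch (our own function name; the rcond threshold corresponds to the relative cut-off discussed above):\n\begin{verbatim}\nimport numpy as np\n\ndef svd_solve(A, b, rcond=1e-7):\n    # solve A x = b via the SVD pseudo-inverse; singular values\n    # below rcond * w_max are treated as zero (use ~1e-15 for\n    # double precision)\n    U, w, Vt = np.linalg.svd(A, full_matrices=False)\n    w_inv = np.where(w > rcond * w.max(), 1.0 / w, 0.0)\n    return Vt.T @ (w_inv * (U.T @ b))\n\end{verbatim}\n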
\n\section{Gradient Descent}\nGiven a lower bounded continuously differentiable function $F(\pmb{x})$, we know that $F(\pmb{x})$ decreases fastest in the negative gradient direction $-\nabla F(\pmb{x})$. Thus we can construct the below iteration scheme, where $\mu_i \in \mathbb{R}$.\n\n\begin{equation}\n\t\pmb{x}_{i+1} = \pmb{x}_i - \mu_i \nabla F(\pmb{x}_i)\n\t\label{eq:descentUpdate}\n\end{equation}\n\nIf the $\mu_i$ are chosen small enough so that $F(\pmb{x}_0) \geq F(\pmb{x}_1) \geq F(\pmb{x}_2) \geq \hdots$ then convergence to a local minimum is guaranteed. There is an alternative to the gradient descent algorithm called Stochastic Gradient Descent, which is often applied in training neural networks. This method implies that the data that we are fitting on is changed for every iteration. The convergence guarantee becomes more complicated, but it can be proved that the method converges almost surely to a local minimum in this case (see the Robbins--Siegmund theorem).\n\n\subsection{Choosing $\mu_i$}\n\nThe ideal $\mu_i$ for every iteration would be the one that minimizes $F(\pmb{x}_i -\mu \nabla F(\pmb{x}_i))$. In order to find this optimal $\mu_i$ we need to solve the equation:\n\begin{equation}\n\t\frac{d}{d\mu_i}F(\pmb{x}_i -\mu_i \nabla F(\pmb{x}_i)) = 0\n\t\label{eq:lineSearch}\n\end{equation}\nObserve that for a neural net with many nested layers, it will be extremely difficult to extract the exact analytical expression for $\mu_i$ (someone courageous could check this out to see if it's true). However, we'll get back to Eq. \ref{eq:lineSearch} in the Conjugate Gradient chapter, which aims at actually solving the equation.\n\nVery often when looking at implementations of different neural networks, the step size is taken in a heuristic way, for example decreasing as a function of the training epoch. This has to be defined and tested from case to case to make sure its performance is satisfactory.\\\n\nIn the remaining part of this section, we'll look at other methods of finding $\mu_i$. The methods that we'll consider are Golden Section Search and Brent's method. However, there are other inexact methods based on the Armijo--Goldstein condition as well as the Wolfe conditions.\\\n\nIn order to use Golden Section Search or Brent's method, we need to bracket the solution. This is done by searching for a bracket in the gradient direction from $\pmb{x}_i$ until the bracket has been obtained. Afterwards, either Brent's method or Golden Section Search may be applied. Note however that this obtains the exact value of $\mu_i$ and is rather costly.\n\n\subsection{Momentum}\nLet us recapitulate the gradient descent method.\n\begin{equation}\n\t\pmb{x}_{i+1} = \pmb{x}_i - \mu_i \nabla F(\pmb{x}_i)\n\end{equation}\n\nIt is well known that the above learning method may be very slow. In much of the literature an additional term known as momentum has been proposed.\n\n\begin{equation}\n\t\pmb{x}_{i+1} = \pmb{x}_i - \mu_i \nabla F(\pmb{x}_i) + p\Delta \pmb{x}_{i-1}\n\end{equation}\n\nWhere $p$ is known as the momentum parameter. Intuitively, the momentum term averages out the learning path. This may be particularly useful in long narrow valleys, where the vanilla descent method usually oscillates.\n
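\nThe momentum update is a two-line change to vanilla gradient descent. A minimal NumPy sketch (function name and the fixed step size are our own illustration; $\mu$ and $p$ values are placeholders, not recommendations from this text):\n\begin{verbatim}\nimport numpy as np\n\ndef gradient_descent(grad_F, x0, mu=0.01, p=0.9, iters=1000):\n    # x_{i+1} = x_i - mu * grad F(x_i) + p * dx_{i-1}\n    x = np.asarray(x0, dtype=float)\n    dx_prev = np.zeros_like(x)\n    for _ in range(iters):\n        dx = -mu * grad_F(x) + p * dx_prev\n        x = x + dx\n        dx_prev = dx\n    return x\n\end{verbatim}\n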
\n\section{Conjugate Gradient}\n\n\subsection{Conjugate Directions}\n\nAs we've seen previously in the gradient descent method, if we find the optimal $\mu_i$, the new gradient is always orthogonal to the previous search direction. This renders the method ineffective in narrow valleys, since it will oscillate between the edges of the valley (how severely depends on the starting position). A cure to this problem would be to find a set of orthogonal directions $\pmb{d}_0, \pmb{d}_1 \dots \pmb{d}_{n-1}$ so that taking a step in each direction removes the corresponding component of the error $\pmb{e} = \pmb{x} - \tilde{\pmb{x}}$, where $\tilde{\pmb{x}}$ denotes the exact solution. This would imply that after $n$ steps, we are done. Mathematically, this means that we will update $\pmb{x}_i$ by\n\begin{equation}\n\t\pmb{x}_{i+1} = \pmb{x}_i + \alpha_i\pmb{d}_i\n\t\label{eq:diretionxUpdate}\n\end{equation}\nSince we require that the error $\pmb{e}_i = \pmb{x}_i - \tilde{\pmb{x}}$ has its $\pmb{d}_i, \pmb{d}_{i-1} \dots \pmb{d}_0$ components removed after every iteration, we require that $\pmb{e}_{i+1}$ is orthogonal to $\pmb{d}_i$. By using this condition together with Eq. \ref{eq:diretionxUpdate} we obtain the following value of $\alpha_i$, which is in fact useless since we don't know $\pmb{e}_i$.\n\n\begin{equation}\n\t\alpha_i  = - \frac{\pmb{d}^T_i\pmb{e}_i}{\pmb{d}^T_i\pmb{d}_i}\n\t\label{eq:errorUpdateAlpha}\n\end{equation}\n\nLet us expose a potential solution on a quadratic form.\n\begin{equation}\n\tF(\pmb{x}) = \frac{1}{2}\pmb{x}^T\pmb{A}\pmb{x} - \pmb{b}^T\pmb{x} + c\n\end{equation}\n\begin{equation}\n\t\nabla F(\pmb{x}) = \pmb{A}\pmb{x} - \pmb{b}\n\end{equation}\n\n\cite{conjugatedGradients} now suggests that we require the errors and the directions to be $\pmb{A}$-orthogonal instead of simply orthogonal.\n\begin{gather}\n\t\pmb{d}_i^T\pmb{A}\pmb{d}_j = 0, i\neq j \\\n\t\pmb{e}_{i+1}^T\pmb{A}\pmb{d}_i = 0\n\end{gather}\n\nNot surprisingly, this new condition is equivalent to finding the minimum of $F(\pmb{x})$ along the search direction $\pmb{d}_i$.\n\n\begin{gather}\n\t\frac{dF(\pmb{x}_{i+1})}{d\alpha_i} = 0\\\n\t\nabla F(\pmb{x}_{i+1})^T \frac{d\pmb{x}_{i+1}}{d\alpha_i} = 0\\\n\t-\pmb{r}^T_{i+1}\pmb{d}_i = 0\\\n\t\pmb{d}^T_i \pmb{A} \pmb{e}_{i+1} = 0\n\end{gather}\nWhere we have defined the residual $\pmb{r}_i = -\nabla F(\pmb{x}_i)$. From $\nabla F(\pmb{x}) = \pmb{A}\pmb{x} - \pmb{b}$ and the definition of the error, we obtain the following expression for the residual:\n\begin{equation}\n\t\pmb{r}_{i+1} = - \pmb{A} \pmb{e}_{i+1}\n\t\label{eq:residual}\n\end{equation}\n\nLet us now transform Eq. \ref{eq:diretionxUpdate} by using the residual instead.\n\begin{gather}\n\t\pmb{b} - A\pmb{x}_{i+1} = \pmb{b} - A\pmb{x}_{i} - \alpha_i\pmb{A}\pmb{d}_i\\\n\t\pmb{r}_{i+1} = \pmb{r}_i - \alpha_i \pmb A \pmb{d}_i\n\t\label{eq:residualUpdate}\n\end{gather}\n\nBy using Eq. \ref{eq:residualUpdate} together with Eq. \ref{eq:residual} we can obtain an update equation for the error as well.\n\n\begin{equation}\n\t\pmb{e}_{i+1} = \pmb{e}_{i} + \alpha_i\pmb{d}_i\n\t\label{eq:errorUpdate}\n\end{equation}\n\nSo, instead of the useless Eq. \ref{eq:errorUpdateAlpha} we can now calculate $\alpha_i$ by expressing it in terms of directions and residuals. This is a direct consequence of the requirement that the directions are $\pmb{A}$-orthogonal to the subsequent error.\n\begin{gather}\n\t\pmb{d}_i^T\pmb{A}\pmb{e}_{i+1} = 0 \\\n\t\pmb{d}_i^T\pmb{A} (\pmb{e}_{i} + \alpha_i \pmb{d}_i) = 0 \\\n\t\alpha_i = -\frac{\pmb{d}_i^T\pmb{A} \pmb{e}_{i}}{\pmb{d}_i^T\pmb{A} \pmb{d}_i} \\\n\t\alpha_i = \frac{\pmb{d}_i^T\pmb{r}_{i}}{\pmb{d}_i^T\pmb{A} \pmb{d}_i}\n\t\label{eq:alpha}\n\end{gather} \n\nUp until now we have just required that the $\pmb{e}_{i}$ and the $\pmb{d}_j$ be $\pmb{A}$-orthogonal. We now need to verify that, by using this condition, we can converge to the correct solution in $n$ iterations ($n$ is the dimension of $\pmb{x}$).\\\n\nFirstly, we need to prove that the error $\pmb{e}_0$ can be expressed as a linear combination of $\{\pmb{d}_i\}$. Assume that $\pmb{A}$ is a positive definite matrix and assume that $\{\pmb{d}_i\}$ doesn't span $\mathbb{R}^n$. Then some direction is a linear combination of the others, which implies:\n\begin{gather}\n\t\pmb{d}_i = \sum_{j\neq i} \beta_j \pmb{d}_j \\\n\t\pmb{d}_i^T \pmb{A} \pmb{d}_i = \sum_{j\neq i} \beta_j \pmb{d}_i^T \pmb{A} \pmb{d}_j = 0\n\end{gather}\n\nThis is a contradiction, since $\pmb{d}_i^T \pmb{A} \pmb{d}_j = 0$ for $i\neq j$ by definition while positive definiteness demands $\pmb{d}_i^T \pmb{A} \pmb{d}_i > 0$. We must therefore have that $\{\pmb{d}_i\}$ span $\mathbb{R}^n$. 
This in turn implies that the error $\pmb{e}_0$ can be written as a linear combination of $\{\pmb{d}_i\}$.\n\n\begin{equation}\n\t\pmb{e}_0 = \sum_{j=0}^{n-1}\gamma_j\pmb{d}_j\n\end{equation}\n\nWe can now easily see the connection between $\gamma_i$ and $\alpha_i$ (the terms added in the third line vanish by $\pmb{A}$-orthogonality):\n\begin{gather}\n\t\pmb{d}_i^T \pmb{A} \pmb{e}_0 = \sum_{j=0}^{n-1}\gamma_j\pmb{d}_i^T \pmb{A} \pmb{d}_j \\\n\t\gamma_i = \frac{\pmb{d}_i^T\pmb{A} \pmb{e}_{0}}{\pmb{d}_i^T\pmb{A} \pmb{d}_i}\\\n\t\gamma_i = \frac{\pmb{d}_i^T\pmb{A} (\pmb{e}_{0} + \sum_{j=0}^{i-1}\alpha_j \pmb{d}_j)}{\pmb{d}_i^T\pmb{A} \pmb{d}_i} \\\n\t\gamma_i =\frac{\pmb{d}_i^T\pmb{A} \pmb{e}_i}{\pmb{d}_i^T\pmb{A} \pmb{d}_i} \\\n\t\gamma_i = -\alpha_i\n\t\label{eq:gammaAlpha}\n\end{gather}\n\nWhat Eq. \ref{eq:gammaAlpha} means is that when we perform the update equation \ref{eq:errorUpdate} we remove one component of $\pmb{e}_i$ per iteration, until the error is finally zero. This also proves that the algorithm converges in $n$ steps.\\\n\nIn order to derive an algorithm using Conjugate Directions, we need a way of choosing $\{\pmb{d}_i\}$. This can effectively be done by conjugated Gram--Schmidt. In short the process consists of first choosing a basis $\{\pmb{u_i}\}$ for $\mathbb{R}^n$ and expressing a conjugated direction as \n\begin{equation}\n\t\pmb{d}_i = \pmb{u}_i + \sum_{j=0}^{i-1}\beta_{ij}\pmb{d}_j\n\t\label{eq:directionConstruction}\n\end{equation}\n\begin{equation}\n\t\beta_{ij} = -\frac{\pmb{u}_i^T \pmb{A} \pmb{d}_j}{\pmb{d}_j^T \pmb{A} \pmb{d}_j}\n\t\label{eq:betaEquation}\n\end{equation}\n\nThe reason for exposing this is not to actually implement it as an algorithm, since that would be equivalent to a Gaussian elimination, but to understand the Conjugate Gradient method. Another method that calculates conjugated directions is Powell's method, which is vastly more effective.\n\n\subsection{Conjugated Gradient}\n\nBy understanding conjugated directions, it becomes much easier to understand conjugated gradients, since this is just the special case with $\pmb{u}_i = \pmb{r}_i$. There are many reasons why this choice is interesting. Firstly, the residuals have the nice property that they are orthogonal to the previous search directions (which will become clear soon). The most important reason is actually that the gradients reduce the complexity of $\beta_{ij}$ so that we don't need to store old search vectors. Because of this, the algorithm will have a nice $\mathcal{O}(n)$ complexity per iteration instead of $\mathcal{O}(n^2)$.\\\n\nSome results of this choice are that\n\begin{equation}\n\t\mathcal{D}_i = span\{\pmb{d}_0, \pmb{A} \pmb{d}_0 \dots \pmb{A}^{i-1}\pmb{d}_0\} = span\{\pmb{r}_0, \pmb{A} \pmb{r}_0 \dots \pmb{A}^{i-1}\pmb{r}_0\}\n\t\label{eq:krylovSubspace}\n\end{equation}\n\begin{equation}\n\t\pmb{r}_i^T\pmb{r}_j = 0, \qquad i \neq j\n\end{equation}\n\nThis can easily be seen from Eq. \ref{eq:directionConstruction} and Eq. \ref{eq:residualUpdate}. Eq. \ref{eq:krylovSubspace} is called a Krylov subspace and is constructed by consecutive multiplications of a given matrix. By using this fact we can deduce that $\pmb{r}_{i+1} \perp \mathcal{D}_{i+1} $. Since $\pmb{A}\mathcal{D}_i \subset \mathcal{D}_{i+1}$, we conclude that $\pmb{r}_{i+1}$ is $\pmb{A}$-orthogonal to $\mathcal{D}_i$. What this means is that $\pmb{r}_{i+1}$ is already $\pmb{A}$-orthogonal to all of the previous directions in Eq. \ref{eq:directionConstruction}, which makes the Gram--Schmidt construction easy. \\
\n\nBy using Eq. \ref{eq:residualUpdate} we know the two following facts:\n\begin{equation}\n\t\pmb{r}_i^T \pmb{r}_{j+1} = \pmb{r}_{i}^T\pmb{r}_{j} - \alpha_j\pmb{r}_{i}^T\pmb{A}\pmb{d}_{j}\n\end{equation}\n\begin{equation}\n\t\alpha_j\pmb{r}_{i}^T\pmb{A}\pmb{d}_{j} = \pmb{r}_{i}^T\pmb{r}_{j} - \pmb{r}_i^T \pmb{r}_{j+1}\n\end{equation}\n\nBy using Eq. \ref{eq:betaEquation} we therefore obtain the following form of $\beta_{ij}$:\n\n\begin{equation}\n\t\beta_{ij} = \frac{1}{\alpha_{i-1}}\frac{\pmb{r}_i^T \pmb{r}_{i}}{\pmb{d}_{i-1}^T\pmb{A}\pmb{d}_{i-1}}, \qquad i = j + 1\n\t\label{eq:simplifiedBeta}\n\end{equation}\nfor $i = j + 1$, and $\beta_{ij} = 0$ otherwise. By plugging in Eq. \ref{eq:alpha} and the fact that \\ $\pmb{d}^T_i \pmb{r}_i = \pmb{u}_i^T\pmb{r}_i = \pmb{r}_i^T\pmb{r}_i$, we obtain the below equation:\n\begin{equation}\n\t\beta_i = \frac{\pmb{r}_i^T\pmb{r}_i}{\pmb{r}_{i-1}^T\pmb{r}_{i-1}}\n\end{equation}\n\nBy putting it all together we obtain the Conjugate Gradient method for a quadratic form.\n\begin{equation}\n\t\pmb{d}_0 = \pmb{r}_0 = \pmb{b} - \pmb{A}\pmb{x}_0\n\t\label{eq:gradientStart}\n\end{equation}\n\begin{equation}\n\t\alpha_i = \frac{\pmb{r}_i^T \pmb{r}_i}{\pmb{d}^T_i\pmb{A}\pmb{d}_i}\n\t\label{eq:gradientAlpha}\n\end{equation}\n\begin{equation}\n\t\pmb{x}_{i+1} = \pmb{x}_i + \alpha_i \pmb{d}_i\n\t\label{eq:gradientX}\n\end{equation}\n\begin{equation}\n\t\pmb{r}_{i+1} = \pmb{r}_i - \alpha_i\pmb{A}\pmb{d}_i \n\t\label{eq:gradientR}\n\end{equation}\n\begin{equation}\n\t\beta_{i+1} = \frac{\pmb{r}^T_{i+1}\pmb{r}_{i+1}}{\pmb{r}^T_i\pmb{r}_i}\n\t\label{eq:gradientBeta}\n\end{equation}\n\begin{equation}\n\t\pmb{d}_{i+1} = \pmb{r}_{i+1} + \beta_{i+1}\pmb{d}_i\n\t\label{eq:gradientD}\n\end{equation}\n
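\nThe six equations above are the whole algorithm. A minimal NumPy sketch (our own function name; $\pmb{A}$ is assumed symmetric positive definite):\n\begin{verbatim}\nimport numpy as np\n\ndef conjugate_gradient(A, b, x0, tol=1e-10):\n    x = np.asarray(x0, dtype=float)\n    r = b - A @ x                    # Eq. (gradientStart)\n    d = r.copy()\n    rr = r @ r\n    for _ in range(len(b)):          # at most n steps\n        Ad = A @ d\n        alpha = rr / (d @ Ad)        # Eq. (gradientAlpha)\n        x = x + alpha * d            # Eq. (gradientX)\n        r = r - alpha * Ad           # Eq. (gradientR)\n        rr_new = r @ r\n        if rr_new < tol:\n            break\n        d = r + (rr_new / rr) * d    # Eqs. (gradientBeta), (gradientD)\n        rr = rr_new\n    return x\n\end{verbatim}\n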
\nUp until now, we've only considered the very special case of a quadratic function. Except for understanding the Conjugate Gradient method, it's pretty useless. There are also alternatives for solving linear systems, such as the Biconjugate Gradient method, which doesn't assume anything about the linear system. However, now we are going to expose the Conjugate Gradient method for a general continuously differentiable function $f(\pmb{x})$.\\\n\nRecall that Eq. \ref{eq:gradientD} and Eq. \ref{eq:gradientR} define vectors that satisfy the orthogonality conditions:\n\begin{equation}\n\t\pmb{r}_i^T \pmb{r}_j = 0 \qquad \pmb{d}_i^T\pmb{A}\pmb{d}_j = 0 \qquad \pmb{r}_i^T \pmb{d}_j = 0 \qquad j < i \n\end{equation}\nThe problem now is that we don't have access to the Hessian matrix $\pmb{A}$. However, in the proximity of a point $\pmb{x_i}$ we can use the Taylor expansion\n\begin{equation}\n\tf(\pmb{x}) \approx \frac{1}{2}\pmb{x}^T\pmb{A}\pmb{x} - \pmb{b}^T\pmb{x} + c\n\t\label{eq:taylorApproximation}\n\end{equation}\nWe know that this is true for some values of $\pmb{A}$ and $\pmb{b}$, but we still don't know them. What we do is the following: take a point $\pmb{x_i}$ and construct $\pmb{r}_i = -\nabla f(\pmb{x}_i)$ where $f(\pmb{x}_i)$ is of the form of Eq. \ref{eq:taylorApproximation}. Suppose we now define the next point $\pmb{x}_{i+1} = \pmb{x}_{i} + \alpha_i \pmb{d}_{i} $ where $\alpha_i = \min_{\alpha}[f(\pmb{x}_{i} + \alpha \pmb{d}_{i})]$; then $\pmb{r}_{i+1} = -\nabla f(\pmb{x}_{i+1})$ is the same vector as would have been constructed by Eq. \ref{eq:gradientR}. \\\n\nNow that we have a basic understanding of the algorithm, let us simply write out the general non-linear algorithm.\n\begin{gather}\n\t\pmb{d}_0 = \pmb{r}_0 = - \nabla f(\pmb{x}_0) \\\n\t\alpha_i = \min_{\alpha}[f(\pmb{x}_{i} + \alpha \pmb{d}_{i})] \\\n\t\pmb{x}_{i+1} = \pmb{x}_i + \alpha_i \pmb{d}_i \\\n\t\pmb{r}_{i+1} = -\nabla f(\pmb{x}_{i+1}) \\\n\t\beta_{i+1} = \max \Big\{\frac{\pmb{r}^T_{i+1}(\pmb{r}_{i+1} - \pmb{r}_{i})}{\pmb{r}^T_{i}\pmb{r}_{i}}, 0\Big\} \\\n\t\pmb{d}_{i+1} = \pmb{r}_{i+1} + \beta_{i+1}\pmb{d}_i\n\end{gather}\n\n\subsection{Implementation Considerations}\nThis section mainly deals with the non-linear CG method. Remarks like pre-conditioning still hold for the standard CG method where we have a matrix $\pmb{A}$; however, the formulas may change a bit when using it.\\\n\nGiven that we re-evaluate the derivative at every iteration in the non-linear CG, we shouldn't expect any particular round-off errors, and the method is thus very stable. Of course, care has to be taken when dividing by $\pmb{r}^T_{i}\pmb{r}_{i}$. However, if this value is very close to zero, then we usually stop the algorithm before that point. \\\n\nFor a neural network, the most expensive part to evaluate is $\alpha_i = \min_{\alpha}[f(\pmb{x}_{i} + \alpha \pmb{d}_{i})]$, and different methods should be tried in order to find $\alpha_i$: for example, Brent's method, Golden Section Search, or Brent's method incorporating derivatives. One could possibly also explore approximate methods for $\alpha_i$ such as the Wolfe conditions.\n\n\section{BFGS}\nBFGS is known as a quasi-Newton method in the sense that it uses an approximate Hessian matrix to minimize the objective function.\n\begin{equation}\n\tf(\pmb{x}) \approx f(\pmb{p}) + \nabla f(\pmb{p})^T (\pmb{x} - \pmb{p}) + \frac{1}{2}(\pmb{x} - \pmb{p})^T \pmb{B}_k (\pmb{x} - \pmb{p})\n\t\label{eq:taylorExpansion}\n\end{equation}\n\nInstead of having $\pmb{B}_k = \pmb{H}$, we will set $\pmb{B}_k$ as an approximation of $\pmb{H}$ that will converge towards the Hessian $\pmb{H}$ with increasing $k$. Let us rewrite Eq. \ref{eq:taylorExpansion} into a more convenient form, where $f_k$ indicates the evaluation of the function $f$ at $\pmb{x}_k$. Furthermore, we remove the difference by a simple coordinate transform.\n\begin{equation}\n\t\tilde{f}(\pmb{x})_{k} \approx f_k + \nabla f_k^T \pmb{x} + \frac{1}{2} \pmb{x}^T \pmb{B}_k \pmb{x}\n\t\label{eq:taylorSimplified}\n\end{equation}\n\nBy searching for a minimum of Eq. \ref{eq:taylorSimplified}, we obtain the following solution, which is simply the Newton update equation:\n\begin{equation}\n\t\pmb{d}_{k} = -\pmb{B}_k^{-1}\nabla f_k\n\t\label{eq:bfgsDirectionUpdate}\n\end{equation}\nRemember that for a perfect quadratic function, $\pmb{d}_k$ will take you directly to the minimum. However, in the real world, we only use $\pmb{d}_k$ as an approximation and we are not sure that it will yield the best solution. That's why we set the update criterion as \n\begin{equation}\n\t\pmb{x}_{k+1} = \pmb{x}_k + \alpha_k \pmb{d}_k\n\t\label{eq:updateBFGS}\n\end{equation}\nWhere $\alpha_k$ is given by a line search along the direction $\pmb{d}_k$ (starting at $\alpha_k = 1$). The problem that BFGS now tries to solve is how to approximate the Hessian used in Eq. \ref{eq:bfgsDirectionUpdate}. 
A very reasonable way to tackle this problem is to require that $\nabla \tilde{f}(\pmb{x}_{k+1})_{k + 1} = \nabla f_{k+1}$ and $\nabla \tilde{f}(\pmb{x}_{k})_{k + 1} = \nabla f_{k}$. The first condition is trivial since it's given by the definition of the Taylor expansion (\ref{eq:taylorExpansion}). The second condition implies\n\begin{equation}\n\t\alpha_{k}\pmb{B}_{k+1}\pmb{d}_k = \nabla f_{k+1} - \nabla f_k\n\end{equation}\nFrom here on, let us denote\n\begin{equation}\n\t\pmb{s}_k = \pmb{x}_{k+1} - \pmb{x}_k, \qquad \pmb{y}_k = \nabla f_{k+1} - \nabla f_{k}\n\end{equation}\n\begin{equation}\n\t\pmb{B}_{k+1} \pmb{s}_k = \pmb{y}_k\n\t\label{eq:secantEquation}\n\end{equation}\nEq. \ref{eq:secantEquation} is referred to as the secant equation. Since we will require that $\pmb{B}_{k+1}$ is symmetric positive definite, we also obtain the so-called curvature condition\n\begin{equation}\n\t\pmb{s}_k^T\pmb{y}_k > 0\n\t\label{eq:curvatureCondition}\n\end{equation}\nNote that if we impose the Wolfe condition or strong Wolfe condition on the line search, Eq. \ref{eq:curvatureCondition} is guaranteed to hold. Furthermore, if the curvature condition Eq. \ref{eq:curvatureCondition} holds, then we are guaranteed the existence of a symmetric positive definite matrix $\pmb{B}_{k+1}$ satisfying Eq. \ref{eq:secantEquation}. However, in this case we may choose between an infinite number of solutions, so we therefore need to impose an additional condition guaranteeing that $\pmb{B}_{k+1}$ is the closest matrix to its predecessor. The norm that defines closeness in this sense is usually the Frobenius norm with a wisely chosen weight matrix. Let us define the inverse\n\begin{equation}\n\t\pmb{H}_{k} = \pmb{B}^{-1}_k\n\end{equation}\n\nInstead of imposing conditions on the Hessian approximation, we can impose conditions on its inverse $\pmb{H}_k$, and by doing this we obtain the BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm.\n\begin{gather}\n\t\pmb{s}_k = \pmb{x}_{k+1} - \pmb{x}_{k}\\\n\t\pmb{y}_k = \nabla f_{k+1} - \nabla f_k\\\n\t\pmb{u}_k = \frac{\pmb{s}_k}{\pmb{s}_k^T \pmb{y}_k} - \frac{\pmb{H}_k \pmb{y}_k}{\pmb{y}_k^T \pmb{H}_k \pmb{y}_k}\\\n\t\pmb{H}_{k+1} = \pmb{H}_k + \frac{\pmb{s}_k \otimes \pmb{s}_k}{\pmb{s}_k^T \pmb{y}_k} - \frac{\pmb{H}_k\pmb{y}_k \otimes \pmb{H}_k\pmb{y}_k}{\pmb{y}_k^T \pmb{H}_k\pmb{y}_k} + [\pmb{y}_k^T \pmb{H}_k\pmb{y}_k] \pmb{u}_k \otimes \pmb{u}_k\n\end{gather}\nWhere we additionally define $\rho_k = \frac{1}{\pmb{y}_k^T\pmb{s}_k}$ for later use. One usually initializes $\pmb{H}_0$ either by setting it to the identity or by calculating a finite difference of $f(\pmb{x}_0)$.\n\nHowever, a big problem with using this method is the same as for Levenberg--Marquardt: memory consumption for big problems! Especially for a CNN it can be too much to store all $n^2$ variables. That's why the following subsection deals with a limited-memory version of the BFGS algorithm.\n
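\nWritten out, one update of the inverse Hessian approximation is just a couple of outer products. A minimal NumPy sketch ($\otimes$ taken as \texttt{np.outer}; the function name is our own):\n\begin{verbatim}\nimport numpy as np\n\ndef bfgs_update_H(H, s, y):\n    sy = s @ y                  # curvature s_k^T y_k, must be > 0\n    Hy = H @ y\n    yHy = y @ Hy\n    u = s / sy - Hy / yHy\n    return (H + np.outer(s, s) / sy\n              - np.outer(Hy, Hy) / yHy\n              + yHy * np.outer(u, u))\n\end{verbatim}\n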
\\;\n\t$\\pmb{x}_1 = \\pmb{x}_0 - \\alpha_0 \\nabla f_0$\\;\n\t$\\pmb{s}_0 = \\pmb{x}_1 - \\pmb{x}_0$\\;\n\t$\\pmb{y}_0 = \\nabla f_1 - \\nabla f_0$\\;\n\t$\\pmb{H}_0 = \\frac{\\pmb{y}_0^T \\pmb{s}_0}{\\pmb{y}_0^T\\pmb{y}_0} \\pmb{I}$\\;\n\t\\For{$k = 0$ to N}\n\t{\n\t\tCalculate $\\alpha_k$ : $f(\\pmb{x}_k - \\alpha_k \\pmb{H}_k \\nabla f_k)$ satisfies the Wolfe condition. \\;\n\t\t$\\pmb{x}_{k+1} = \\pmb{x}_k - \\alpha_k \\pmb{H}_k \\nabla f_k$\\;\n\t\t$\\pmb{s}_k = \\pmb{x}_{k+1} - \\pmb{x}_k$\\;\n\t\t$\\pmb{y}_k = \\nabla f_{k+1} - \\nabla f_k$\\;\n\t\t$\\pmb{u}_k = \\frac{\\pmb{s}_k}{\\pmb{s}_k^T \\pmb{y}_k} - \\frac{\\pmb{H}_k \\pmb{y}_k}{\\pmb{y}_k^T \\pmb{H}_k \\pmb{y}_k}$\\;\n\t\t$\\pmb{H}_{k+1} = \\pmb{H}_k + \\frac{\\pmb{s}_k \\otimes \\pmb{s}_k}{\\pmb{s}_k^T \\pmb{y}_k} - \\frac{\\pmb{H}_k\\pmb{y}_k \\otimes \\pmb{H}_k\\pmb{y}_k}{\\pmb{y}_k^T \\pmb{H}_k\\pmb{y}_k} + [\\pmb{y}_k^T \\pmb{H}_k\\pmb{y}_k] \\pmb{u}_k \\otimes \\pmb{u}_k$ \\;\n\t}\n\t\\;\n\t$\\pmb{x}_{sol\t} = \\pmb{x}_{N}$\\;\n\\end{algorithm}\n\\subsection{L-BFGS}\nQuasi newton methods, such as LM and BFGS are not very suited for large scale problems since they usually require too much memory to operate. For big data problems with 100 million parameters, they would require 400 MB of memory if we store them with floating point precision and 800 MB with double precision. Understandably, we cannot store the square of this number, not even in a distributed environment. This is the reason why Low Memory BFGS is interesting. \\\\\n\nL-BFGS stores a set of of $m$ number of vectors of size $n$ instead of a matrix of size $n^2$. These vectors are the curvature information from previous iterations that get discarded as we move along. The idea is that information from previous iterations are less likely to be relevant for the behavior of the current Hessian. Let us recall the update equations of the BFGS algorithm.\n\\begin{gather}\n\t\\pmb{s}_k = \\pmb{x}_{k+1} - \\pmb{x}_{k}\\\\\n\t\\pmb{y}_k = \\nabla f_{k+1} - \\nabla f_k\\\\\n\t\\pmb{u}_k = \\frac{\\pmb{s}_k}{\\pmb{s}_k^T \\pmb{y}_k} - \\frac{\\pmb{H}_k \\pmb{y}_k}{\\pmb{y}_k^T \\pmb{H}_k \\pmb{y}_k}\\\\\n\t\\pmb{H}_{k+1} = \\pmb{H}_k + \\frac{\\pmb{s}_k \\otimes \\pmb{s}_k}{\\pmb{s}_k^T \\pmb{y}_k} - \\frac{\\pmb{H}_k\\pmb{y}_k \\otimes \\pmb{H}_k\\pmb{y}_k}{\\pmb{y}_k^T \\pmb{H}_k\\pmb{y}_k} + [\\pmb{y}_k^T \\pmb{H}_k\\pmb{y}_k] \\pmb{u}_k \\otimes \\pmb{u}_k\n\t\\label{eq:hessianUpdate}\n\\end{gather}\n\nSo instead of storing the entire matrix $\\pmb{H}_k$, we now store $m$ number of vector pairs $\\{\\pmb{s}_k, \\pmb{y}_k\\}$. By performing the Eq. 
\n\begin{algorithm}[H]\n\t\DontPrintSemicolon\n\t\caption{L-BFGS (Two-loop)}\n\t\n\t$\pmb{g} = \nabla f_k$\;\n\t\For{$i = k-1$ to $k-m$}\n\t{\n\t\t$\rho_i = \frac{1}{\pmb{y}_i^T \pmb{s}_i}$\;\n\t\t$\gamma_i = \rho_i \pmb{s}_i^T\pmb{g}$\;\n\t\t$\pmb{g} = \pmb{g} - \gamma_i \pmb{y}_i$\;\n\t}\n\t$\pmb{H}_k^0 = \frac{\pmb{y}_{k-1}^T\pmb{s}_{k-1}}{\pmb{y}_{k-1}^T\pmb{y}_{k-1}} \pmb{I}$\;\n\t$\pmb{r} = \pmb{H}_k^0\pmb{g}$\;\n\t\For{$i = k - m$ to $k-1$}\n\t{\n\t\t$\lambda = \rho_i \pmb{y}_i^T \pmb{r}$\;\n\t\t$\pmb{r} = \pmb{r} + \pmb{s}_i(\gamma_i - \lambda)$\;\n\t}\n\t$\pmb{H}_k\nabla f_k = \pmb{r}$\;\n\t\label{alg:LBFGSTwoLoop}\n\end{algorithm}\n\begin{algorithm}[H]\n\t\DontPrintSemicolon\n\t\caption{L-BFGS}\n\tInitialize $\pmb{x}_0$, $m$\;\n\tCalculate $\alpha_0$ : $f(\pmb{x}_0 - \alpha_0 \nabla f_0)$ satisfies the Wolfe condition. \;\n\t$\pmb{x}_1 = \pmb{x}_0 - \alpha_0 \nabla f_0$\;\n\t$\pmb{s}_0 = \pmb{x}_1 - \pmb{x}_0$\;\n\t$\pmb{y}_0 = \nabla f_1 - \nabla f_0$\;\n\t\For{$k = 1$ to $N$}\n\t{\n\t\t$\pmb{p}_k = - \pmb{H}_k \nabla f_k$ by Alg. \ref{alg:LBFGSTwoLoop}\;\n\t\tCalculate $\alpha_k$ : $f(\pmb{x}_k + \alpha_k \pmb{p}_k)$ satisfies the Wolfe condition. \;\n\t\t$\pmb{x}_{k+1} = \pmb{x}_k + \alpha_k \pmb{p}_k$\;\n\t\t\If{$k > m$}\n\t\t{\n\t\t\tDiscard $(\pmb{s}_{k-m}, \pmb{y}_{k-m})$ from storage.\;\n\t\t}\n\t\t$\pmb{s}_k = \pmb{x}_{k+1} - \pmb{x}_k$\;\n\t\t$\pmb{y}_k = \nabla f_{k+1} - \nabla f_k$\n\t}\n\t\;\n\end{algorithm}\n\subsection{Implementation Considerations}\nIt is known that the BFGS approach has very powerful self-correcting properties for its Hessian approximation, but these properties only hold when an adequate line search is performed, i.e. a line search conforming to the Wolfe conditions. Furthermore, the line search should always try $\alpha_k = 1$ before proceeding to other values. The Wolfe conditions are given below.\n\begin{gather}\n\tf(\pmb{x}_k + \alpha_k \pmb{p}_k) \leq f_k + c_1 \alpha_k \nabla f_k^T \pmb{p}_k\\\n\t\nabla f(\pmb{x}_k + \alpha_k \pmb{p}_k)^T \pmb{p}_k \geq c_2 \nabla f_k^T\pmb{p}_k\n\end{gather}\nThe strong Wolfe conditions are given by\n\begin{gather}\n\tf(\pmb{x}_k + \alpha_k \pmb{p}_k) \leq f_k + c_1 \alpha_k \nabla f_k^T \pmb{p}_k\\\n\t\vert \nabla f(\pmb{x}_k + \alpha_k \pmb{p}_k)^T \pmb{p}_k \vert \leq \vert c_2 \nabla f_k^T\pmb{p}_k \vert\n\end{gather}\nTypical values for $(c_1, c_2)$ are $(10^{-4},0.9)$. \\\n\nAnother problem that we have both for L-BFGS and BFGS is the initial value of $\pmb{H}_0$. A heuristic that has proven to be very powerful here is to first take a steepest-descent step and, with the new values, set\n\begin{equation}\n\t\pmb{H}_0 = \frac{\pmb{y}_k^T\pmb{s}_k}{\pmb{y}^T_k\pmb{y}_k} \pmb{I}\n\end{equation} \nbefore the BFGS update step. This method aims at estimating a matrix with eigenvalues similar to those of the true Hessian. Observe that this method still works for L-BFGS, with the exception that we've previously stored the vectors $(\pmb{y}_k, \pmb{s}_k)$.\\\n
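\nTo see how the two-loop recursion maps to code, here is a minimal NumPy sketch (our own function name and list layout; the pairs are stored oldest first):\n\begin{verbatim}\nimport numpy as np\n\ndef lbfgs_direction(grad, s_list, y_list):\n    # two-loop recursion computing H_k grad\n    g = grad.copy()\n    rhos, gammas = [], []\n    for s, y in reversed(list(zip(s_list, y_list))):  # i = k-1 .. k-m\n        rho = 1.0 / (y @ s)\n        gamma = rho * (s @ g)\n        g = g - gamma * y\n        rhos.append(rho)\n        gammas.append(gamma)\n    s, y = s_list[-1], y_list[-1]\n    r = ((y @ s) / (y @ y)) * g                       # H_k^0 g\n    for (s, y), rho, gamma in zip(zip(s_list, y_list),\n                                  reversed(rhos), reversed(gammas)):\n        lam = rho * (y @ r)\n        r = r + s * (gamma - lam)\n    return r                                          # = H_k grad\n\end{verbatim}\n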
\nOne practical problem is that the value $\pmb{y}_k^T\pmb{s}_k$ (the curvature condition) may be close to zero and therefore introduce singularities. For example, if we are in an area with very small variations (i.e. a long, shallow valley) we may have $\pmb{y}_k^T\pmb{s}_k$ dangerously close to zero.\n\n\section{Ideas}\nOne idea is to evaluate a very accurate line search of the steepest descent periodically, so that we can re-define the step size. Another idea is to somehow evaluate descent directions as a floating mean so that we could get through valleys more quickly.\\\n\nAnother idea would be to switch between different optimization methods where one is weak and the other is strong. A bit like LM, but with smaller memory requirements.\\\n\nSomething that should be analyzed is the error curve along the descent direction for a neural net where convergence is slow.\\\n\nFor a neural network, we could analyze the change of the weights and check whether only a subset of them is important for a particular optimization task. If this is the case, could we reduce the dimension of the problem and apply more expensive optimization methods to that subset?\\\n\n\section{Software Implementation}\nWe will concentrate our efforts on implementing the neural network using OpenCL. The reason is that we would like to have state-of-the-art performance on heterogeneous platforms and truly utilize all the available resources to speed up the learning / classification / regression process. To truly squeeze out everything possible from the target device, one would probably need to dive down into the device's native programming language such as CUDA, assembler or C. However, the effort in doing this is probably not worth it (consider buying all the target hardware and manually tuning the algorithms for each one). Therefore, we will use an empirical optimization method to create dynamic kernels that are the best for the given test parameters. In order to do this efficiently, we need to understand the differences between the GPU and the CPU.\\\n\nBy glancing at the multiplication tests as well as the convolution tests, we can see dramatic differences in performance between the different implementations. Regarding local memory on an Intel CPU, for example, there is no local configuration that will speed up the convolution kernel; local memory can thus be discarded when constructing a kernel for the CPU. Furthermore, the Intel OpenCL 1.2 platform compiling for an Intel Core i7 4700MQ supports implicit vectorization as long as SoA (structures of arrays) are used instead of AoS (arrays of structures). This means that explicit vectorization is often not needed.\n\n\begin{itemize}\n\t\item  The CPU is relatively insensitive to local work group sizes. One could let the runtime determine the local work size for the CPU. Having the global size divisible by 2 will let the runtime make better choices. However, work group sizes of 1 with very small workloads in every iteration can in some cases be extremely bad for performance. \n\t\item Do not use explicit local memory. The CPU is very good at putting relevant memory into the L1 cache implicitly. It's a waste of resources to try to explicitly allocate local memory, as can be seen in the convolution test.\n\t\item Vectorization is often implicit, but there's usually no performance penalty in using OpenCL vectors if it makes the code more readable. One could potentially try with / without the vectorization attribute in this case to check whether or not the implicit vectorization performs better. This is mostly to safeguard against compilers without implicit vectorization.\n\t\item Do not bother unrolling loops. Loops can even be helpful with the Intel compiler, since they can make implicit vectorization easier.\n\end{itemize}\n
\nThe GPU results look very similar between the Nvidia GPU and the integrated Intel HD GPU, with the difference being how effective the local memory optimizations are. Normally, the GPU is divided into stream processors, each of which contains one instruction stream over multiple ALUs together with a memory shared by all the ALUs. Assuming this standard architecture, the local work groups should be mapped to the SMs with the local work size spread over the ALUs in the SM. If we map a single work item onto every SM, performance should be poor since we are only using a fraction of the capacity of the SM. Evidence of this can be found in the test data when a local work group of size 1 is used. However, this seems to be contradicted when doing multiple calculations inside the work-item with the same work size, because in that case we do not see the same performance bottleneck and the algorithm performs only slightly worse than the best result. It could be that the compiler / hardware in some cases has a hard time redistributing the work load onto the other ALUs / SMs. Furthermore, trying to put large chunks of work into every work-item always yields worse performance in the vector multiplication example, whereas for the CPU, the best result was obtained when accumulating a work size of 1000 inside a single work-item. One significant performance improvement can be obtained by explicitly using local memory and reducing the amount of global memory fetches. What is interesting to note here is that using local memory does not automatically make your algorithm faster for a specific GPU. For convolution, the improvements start to apply for filter sizes greater than 2 on the Nvidia GPU. However, for the Intel HD GPU, we do not see any significant improvements before the filter size is 20 or greater. Before wrapping up with the summary, let us look at how the performance is affected by multiple kernel launches compared to a single launch for a convolutional layer with 4 inputs.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{IntelKernelLaunch}\n\t\\caption{Showing the execution time of a convolutional layer using a single kernel and multiple kernel launches. Full means a fully connected convolutional layer with 4 inputs and 8 outputs. Single means single connections between 4 inputs and 4 outputs.}\n\t\\label{fig:intelGPUKernelLaunch}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{NvidiaGPUKernelLaunch}\n\t\\caption{Showing the execution time of a convolutional layer using a single kernel and multiple kernel launches. Full means a fully connected convolutional layer with 4 inputs and 8 outputs. Single means single connections between 4 inputs and 4 outputs.}\n\t\\label{fig:nvidiaGPUKernelLaunch}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics{IntelCPUKernelLaunch}\n\t\\caption{Showing the execution time of a convolutional layer using a single kernel and multiple kernel launches. Full means a fully connected convolutional layer with 4 inputs and 8 outputs. Single means single connections between 4 inputs and 4 outputs.}\n\t\\label{fig:cpuKernelLaunch}\n\\end{figure}\n\nFigures \\ref{fig:intelGPUKernelLaunch} and \\ref{fig:nvidiaGPUKernelLaunch} look natural since our intuition says that a kernel launch is expensive. It is important to note here that we do not sum the outputs from the multiple launches, which is done in the single full kernel launch. So the single full kernel is actually more expensive than the multiple kernel launches. 
However, we can see something interesting when looking at Figure \\ref{fig:cpuKernelLaunch}. Here multiple kernel launches actually seem more efficient than performing a single one. It could be argued that the kernel launch overhead is almost insignificant when using the CPU, so the reduced amount of work becomes significant. Since the same kernel with the same arguments and memory is launched several times, it could also be that the OpenCL runtime notices this and can save significant initialization costs.\n\n  \\begin{itemize}\n  \t\\item  The GPU does not have fancy branch prediction and smart caches, so this must be handled manually.\n  \t\\item Small loops can degrade performance. Unrolling them can in some cases improve performance.\n\t\\item Local memory does not automatically improve the performance of the GPU. The overhead of putting global memory into local memory has to be small in comparison with the performance gains.\n\t\\item The local work size can have a significant impact, especially in the case where local memory is used, since the amount of local memory is bound to the work size.\n\t\\item The GPU is more sensitive to kernel launches than the CPU.\n  \\end{itemize}\n  \nThere are also other pitfalls one has to take into account when writing fast OpenCL code. For example, if you run OpenCL on the host (the CPU), then use calls to clEnqueueMapBuffer and clEnqueueUnmapMemObject instead of calls to clEnqueueReadBuffer or clEnqueueWriteBuffer. This will avoid unnecessary copying within the framework.\n  \n\\section{Automatically tuned OpenCL kernels}\n\nFor a CPU, we do not care about local memory. For a GPU, we do not try to batch computations in the work items since the scheduling is done efficiently. The general approach to all tuning is to define a default kernel for the CPU and GPU that will represent the benchmark for the problem. If automatic tuning is chosen, the algorithm will try to choose the fastest kernel among the modifiable parameters for the problem. This typically means testing the native / preferred vectorization width of the device and modifying the local work size together with the L1 cache size. Let us start with some notation for this chapter.\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|l| l|}\n\t\t\\hline\n\t\t$U$ & Number of parallel processing units.\\\\\n\t\t\\hline\n\t\t$S_x, S_y, S_z$ & The problem size dimensions \\\\\n\t\t\\hline\n\t\t$T_{max}$ & Maximum amount of threads, $T_{max} \\geq U$ \\\\\n\t\t\\hline\n\t\t$S_{l,x}, S_{l,y}, S_{l,z}$ & The local work group dimensions \\\\\n\t\t\\hline\n\t\t$H_{image}, W_{image}, N_{image}$ & Image height, width and number of images\\\\\n\t\t\\hline\n\t\t$F_{x}, F_{y}, N_{filter}$ & Filter width and height and the number of filters\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Notation used in this chapter.}\n\\end{table}\n\n\\subsection{Convolutional layer}\n\\subsubsection{CPU}\n\n\\subsubsection{GPU}\nThere is a big difference between a benchmark CPU kernel and a GPU kernel. In this case, we do not want to have thread affinity for the individual processing units, which in turn contain a considerable number of ALUs sharing the same decoder. If we were to have thread affinity for the GPU, we would only use a fraction of the power of the GPU (actually 1 divided by the number of ALUs in every stream processor, assuming a naive architecture and compiler of course). Instead, since we do not have any scheduling overhead (as far as I have detected so far), we simply have one work item for every pixel in the input, as in the sketch below. 
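\nA minimal sketch of such a naive benchmark kernel in OpenCL C follows (one work-item per output pixel, with the filter in constant memory as discussed later; all identifiers are illustrative):\n\\begin{verbatim}\n/* Naive 2D convolution: one work-item per output pixel. Every\n * work-item reads its FW x FH input window directly from global\n * memory; no local memory is used. */\n__kernel void conv_naive(__global const float *input,  /* W x H   */\n                         __constant float *filter,     /* FW x FH */\n                         __global float *output,\n                         int W, int FW, int FH)\n{\n    const int ox = get_global_id(0);   /* output column */\n    const int oy = get_global_id(1);   /* output row    */\n    const int outW = W - FW + 1;\n\n    float acc = 0.0f;\n    for (int fy = 0; fy < FH; ++fy)\n        for (int fx = 0; fx < FW; ++fx)\n            acc += input[(oy + fy) * W + (ox + fx)]\n                 * filter[fy * FW + fx];\n    output[oy * outW + ox] = acc;\n}\n\\end{verbatim}\n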
However, the focus here will lie in using local memory to speed up memory access. What we know at the moment is:\n\\begin{itemize}\n\t\\item Every pixel from the input must be read at least once in order for the convolution to work.\n\t\\item However, local memory is private to a single local work group. The amount of memory that we can store in this cache is thus dependent on the local work group size.\n\t\\item We have boundaries that need to be treated separately when performing convolution.\n\t\\item The number of global memory accesses to the output map cannot be smaller than the number of pixels in the output.\n\t\\item The filters can be loaded into every cache on every streaming processor. However, this will reduce the amount of local memory available for the actual convolution and should be taken into account.\n\\end{itemize}\nThe formula for the number of global memory accesses needed for a naive convolution is given by:\n\\begin{gather*}\nS_z(S_x - R_x)(S_y - R_y)F_xF_y\\\\\nR_x = F_x - 1\\\\\nR_y = F_y - 1\\\\\n\\end{gather*}\nIf we instead load the work group's tile together with its boundaries into local memory, we have:\n\\begin{equation*}\nS_z(S_x - R_x)(S_y - R_y) + S_{l,z}(S_x - R_x)(S_y - R_y) ( \\frac{R_x}{S_{l,x}} + \\frac{R_y}{S_{l,y}} + \\frac{R_xR_y}{S_{l,x}S_{l,y}} + 1)\n\\end{equation*}\nWe can directly see that the number of global memory accesses decreases dramatically for a big local size in the x and y directions. However, we need to be careful with the conclusions here, since we have additional complexity when choosing the local work group size.\n\\begin{itemize}\n\t\\item The local work group executes in parallel on a single stream processor. This means that if our local work group is as big as the problem itself, we cannot benefit from the parallel execution of several streaming processors.\n\t\\item The filters will incur as many memory accesses as in the naive equation. A very simple way to make sure we do not access global memory for the filters is to put them into constant memory by using the constant qualifier.\n\t\\item The local work size should therefore be considered as the maximum local work size that can be distributed over all the streaming processors (as long as the filter size is not equal to one). The second constraint is that the amount of local memory cannot be exceeded, and this must be queried on the kernel. \n\\end{itemize}\nWe have additional constraints when defining the local memory size and using the constant memory.\n\\begin{itemize}\n\t\\item The local cache size cannot exceed the value returned by  CL\\_DEVICE\\_LOCAL\\_MEM\\_SIZE - CL\\_KERNEL\\_LOCAL\\_MEM\\_SIZE before setting the cache size.\n\t\\item The local work size cannot exceed CL\\_KERNEL\\_WORK\\_GROUP\\_SIZE.\n\t\\item The filter parameters used for the convolution cannot exceed \\\\ CL\\_DEVICE\\_MAX\\_CONSTANT\\_BUFFER\\_SIZE if we are to use the constant memory space for the parameters. The minimum value is 64KB, which corresponds to roughly 16 000 single-precision parameters. In turn, 16 000 parameters would correspond to about 36 filters of size 21x21 in a single layer. This would only be a problem for extremely large problems. In that case, it would be acceptable to split the problem into two (or more) kernel calls.\n\\end{itemize}\nSince a value of $S_{l,z}$ greater than 1 will always increase the number of global memory accesses, we will always assume that $S_{l,z} = 1$. Furthermore, we will always assume that the filter is in constant memory space. 
This leads to the following discrete constrained optimization problem for reducing the number of global memory accesses:\n\\begin{gather*}\n(S_{l,x}, S_{l,y})_{min} = \\min_{(S_{l,x}, S_{l,y})} S_z(S_x - R_x)(S_y - R_y) + \\\\\n \\frac{(S_x - R_x)(S_y - R_y)}{S_{l,x}S_{l,y}} ( R_xS_{l,y} + R_yS_{l,x} + R_xR_y + S_{l,x}S_{l,y}) \\\\\n(S_x - R_x) \\mod{S_{l,x}} \\equiv 0 \\\\\n(S_y - R_y) \\mod{S_{l,y}} \\equiv 0 \\\\\nS_{l,x} \\leq S_{l,x, max} \\\\\nS_{l,y} \\leq S_{l,y, max} \\\\\n \\frac{(S_x - R_x)(S_y - R_y)}{S_{l,x}S_{l,y}} \\geq U \\\\\nS_{l,y}S_{l,x} \\leq CL\\_KERNEL\\_WORK\\_GROUP\\_SIZE \\\\\nR_x S_{l,y} + R_y S_{l,x} + R_xR_y + S_{l,y}S_{l,x} \\leq M \\\\\nM = CL\\_DEVICE\\_LOCAL\\_MEM\\_SIZE - CL\\_KERNEL\\_LOCAL\\_MEM\\_SIZE\n\\end{gather*}\nHowever, as you can see, it is possible that the condition that the global work size must be divisible by the local work size will not hold. In this case we need to extend the above optimization problem with padding to make sure that it can be solved.\n\\begin{gather*}\n(S_{l,x}, S_{l,y}, P_x, P_y)_{min} = \\min_{(S_{l,x}, S_{l,y}, P_x, P_y)} S_z(S_x + P_x - R_x)(S_y + P_y - R_y) + \\\\\n\\frac{(S_x + P_x - R_x)(S_y + P_y - R_y)}{S_{l,x}S_{l,y}} ( R_xS_{l,y} + R_yS_{l,x} + R_xR_y + S_{l,x}S_{l,y}) \\\\\n(S_x + P_x - R_x) \\mod{S_{l,x}} \\equiv 0 \\\\\n(S_y  + P_y - R_y) \\mod{S_{l,y}} \\equiv 0 \\\\\nS_{l,x} \\leq S_{l,x, max} \\\\\nS_{l,y} \\leq S_{l,y, max} \\\\\n\\frac{(S_x + P_x - R_x)(S_y + P_y - R_y)}{S_{l,x}S_{l,y}} \\geq U \\\\\nS_{l,y}S_{l,x} \\leq CL\\_KERNEL\\_WORK\\_GROUP\\_SIZE \\\\\nR_x S_{l,y} + R_y S_{l,x} + R_xR_y + S_{l,y}S_{l,x} \\leq M \\\\\nM = CL\\_DEVICE\\_LOCAL\\_MEM\\_SIZE - CL\\_KERNEL\\_LOCAL\\_MEM\\_SIZE\n\\end{gather*}\nA solution to the above optimization problem is given by the following exhaustive search: \\\\\n\\begin{algorithm}[H]\n\t\\DontPrintSemicolon\n\tOptimalLocalSize\\;\n\t\\KwData{Kernel, Device}\n\t\\KwResult{$(S_{l,x}, S_{l,y}, P_x, P_y)_{min}$}\n\tHypotheses \\;\n\t\\For {$P_x$ : from 0 to $S_{l,x, max}$}\n\t{\n\t\t\\For {$P_y$ : from 0 to $S_{l,y, max}$}\n\t\t{\n\t\t\t\t\\For {$S_{l,x}$ : from 1 to $S_{l,x, max}$}\n\t\t\t\t{\n\t\t\t\t\t\\For {$S_{l,y}$ : from 1 to $S_{l,y, max}$}\n\t\t\t\t\t{\n\t\t\t\t\t\\If{$(S_x + P_x - R_x) \\mod{S_{l,x}} \\not\\equiv 0$ or \\; $(S_y  + P_y - R_y) \\mod{S_{l,y}} \\not\\equiv 0$ or $S_{l,y}S_{l,x} > CL\\_KERNEL\\_WORK\\_GROUP\\_SIZE$ or\n\t\t\t\t\t\t$\\frac{(S_x + P_x - R_x)(S_y + P_y - R_y)}{S_{l,x}S_{l,y}} < U$ or\n\t\t\t\t\t\t$R_x S_{l,y} + R_y S_{l,x} + R_xR_y + S_{l,y}S_{l,x} > M $\\;}\n\t\t\t\t\t{\n\t\t\t\t\t\tcontinue\\;\n\t\t\t\t\t}\n\t\t\t\t\t\\Else\n\t\t\t\t\t{\n\t\t\t\t\t\tHypotheses.add($(S_{l,x}, S_{l,y}, P_x, P_y)$)\\;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t$min = \\infty$ \\;\n\t\\For{Hypothesis in Hypotheses}\n\t{\n\t\t$accesses = S_z(S_x + P_x - R_x)(S_y + P_y - R_y) +\n\t\t\\frac{(S_x + P_x - R_x)(S_y + P_y - R_y)}{S_{l,x}S_{l,y}} ( R_xS_{l,y} + R_yS_{l,x} + R_xR_y + S_{l,x}S_{l,y})$ \\;\n\t\t\\If{$accesses < min$}\n\t\t{\n\t\t$(S_{l,x}, S_{l,y}, P_x, P_y)_{min}$ = $(S_{l,x}, S_{l,y}, P_x, P_y)$ \\;\n\t\t\t$min = accesses$\n\t\t}\n\t}\n\\caption{Calculates the optimal work size and padding with minimal global memory accesses.}\n\\end{algorithm}\n
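A direct C sketch of this search is given below. It assumes that the device limits $U$ and $M$ (with $M$ expressed in elements rather than bytes) and the kernel work group limit have already been queried from the OpenCL runtime; all identifiers are illustrative:\n\\begin{verbatim}\n/* Exhaustive search over local sizes and paddings minimizing the\n * global memory access count derived above. Device limits U, M and\n * max_wg are assumed to be queried beforehand. */\ntypedef struct { int slx, sly, px, py; } hypothesis;\n\nhypothesis optimal_local_size(int Sx, int Sy, int Sz, int Rx, int Ry,\n                              int slx_max, int sly_max,\n                              int px_max, int py_max,\n                              long U, long M, long max_wg)\n{\n    hypothesis best = { -1, -1, -1, -1 };\n    double best_accesses = 1e300;\n\n    for (int px = 0; px <= px_max; ++px)\n    for (int py = 0; py <= py_max; ++py)\n    for (int slx = 1; slx <= slx_max; ++slx)\n    for (int sly = 1; sly <= sly_max; ++sly) {\n        long gx = Sx + px - Rx, gy = Sy + py - Ry;\n        if (gx % slx != 0 || gy % sly != 0) continue; /* divisibility */\n        if ((long)slx * sly > max_wg) continue;       /* work group   */\n        if ((gx / slx) * (gy / sly) < U) continue;    /* occupancy    */\n        long tile = (long)Rx * sly + (long)Ry * slx\n                  + (long)Rx * Ry + (long)slx * sly;\n        if (tile > M) continue;                       /* local memory */\n        double accesses = (double)Sz * gx * gy\n                        + (double)gx * gy / (slx * sly) * tile;\n        if (accesses < best_accesses) {\n            best_accesses = accesses;\n            best = (hypothesis){ slx, sly, px, py };\n        }\n    }\n    return best;\n}\n\\end{verbatim}\n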
This exhaustive search is very naive, and one should probably limit the padding to a fixed value of, say, 10 in order not to explode the computations: a common maximum value for a work dimension is around 1000, and with this naive approach it could take far too long to compute the benchmark kernel.\n\\begingroup\n\\huge ADD TUNING\n\\endgroup \n\\subsection{Pooling layer}\n\\subsubsection{CPU}\n\\subsubsection{GPU}\n\\subsection{Perceptron layer} \n\\subsubsection{CPU}\n\\subsubsection{GPU}\n\\bibliography{sample}\n\n\\appendix\n\n\\end{document}\n
{"text": "\\section{The Lebesgue Integral}\n\\setcounter{subsection}{1}\n\n\\subsection{The Lebesgue Integral of a Bounded Function}\n  \\paragraph{2.}\n  \\begin{proof}\n    $\\,$\\par\n    (a) By Problem 2.51, $h$ is upper semicontinuous as $f$ is bounded and by \n    Problem 2.50, ${x:\\, h(x)<\\lambda}$ is open and hence measurable for every\n    $\\lambda\\in\\mathbb{R}$. Thus, $h$ is measurable.\\par\n    Let $\\varphi(x)\\ge f(x)$ be a step function and $x_0$ any point other \n    than the endpoints of the intervals occurring in $\\varphi$. Then there \n    exists some $\\delta>0$ such that for all $x\\in(x_0-\\delta,x_0+\\delta)$, \n    $\\varphi(x_0) = \\varphi(x) \\ge f(x)$. Hence,\n    \\[\n      h(x_0) = \\inf_{\\delta<0}\\sup_{|x-x_0|<\\delta}f(x) \\le \\varphi(x_0).\n    \\]\n    Namely, $\\varphi\\ge h$ except at a finite number of points. Hence, $\\int_a^b\n    \\varphi \\ge \\int_a^b h$ and therefore\n    \\[\n      R\\upint_a^b f = \\inf_{\\varphi\\ge f}\\int_a^b\\varphi(x)\\rd x \\ge \\int_a^b h.\n    \\]\\par\n    We can also derive from the previous discussion that there is a sequence of \n    $\\langle\\varphi_n\\rangle$ of step functions satisfying $\\varphi \\downarrow h$. By \n    Proposition 6,\n    \\[\n      \\int_a^b h = \\lim\\int_a^b\\varphi_n \\ge R\\upint_a^b f.\n    \\]\n    Hence, $R\\upint_a^b f = \\int_a^b h$.\\par\n    (b) First suppose that $f$ is Riemann integrable and let $h$ and $g$ be the \n    upper and lower envelope of $f$ respectively. By part (a), $f$ is Riemann \n    integrable implies $\\int_a^b(h-g) = 0$. Together with the fact that $h\\ge \n    g$, we conclude that $h=g$ a.e.. Therefore, by Problem 2.50, $f$ is \n    continuous except on a set of measure zero.\\par\n    Note that the argument remains true if we reverse the order, verifying the \n    converse part. Hence, the proposition holds.\n  \\end{proof}\n% end\n\n\\subsection{The Integral of a Nonnegative Function}\n  \\paragraph{3.}\n  \\begin{proof}\n    Suppose that $E_n=\\{x:\\, f(x)>1/n\\}$. Then, \n    \\[\n      0=\\int f \\ge \\int_{E_n} f \\ge \\frac{mE_n}{n}\n    \\]\n    implies $mE_n=0$. Hence, $m\\{x:\\, f(x)>0\\}=m(\\bigcup E_n)\\le\\sum mE_n=0$. \n    Namely, $f=0$ a.e.\n  \\end{proof}\n\n  \\paragraph{5.}\n  \\begin{proof}\n    For any fixed $x_0\\in\\mathbb{R}$, let $f_n(x) = f\\cdot\\chi_{(-\\infty,x_0-\n    1/n]}$, which is a increasing sequence of nonnegative measurable function\n    whose limit is $f\\cdot\\chi_{-\\infty,x_0}$. Then by Theorem 10, \n    \\[\n      F(x_0)=\\int_{-\\infty}^{x_0} f = \\int f\\cdot\\chi_{-\\infty,x_0}\n      = \\lim\\int f\\cdot\\chi_{(-\\infty,x_0-1/n]} = \\lim F(x_0-1/n).\n    \\]\n    Meanwhile, since \n    \\[\n      |F(x_0)-F(x_0+1/n)| = \\left|\\int_{x_0}^{x_0+1/n}f(x)\\rd x\\right|=\n      \\left|\\int_{-1/n}^0 g(x)\\rd x \\right|,\n    \\]\n    where $g(x)=f(x_0-x)$, arguing on $g$ in a similar manner yields $F(x_0)=\n    \\lim F(x_0+1/n)$. Thus, $F$ is continuous.\n  \\end{proof}\n\n  \\paragraph{6.}\n  \\begin{proof}\n    By Theorem 9, $\\int f \\le \\lowlim\\int f_n$. Meanwhile, $f_n\\le f$ implies\n    $\\int f_n\\le \\int f$ and therefore $\\uplim \\int f_n \\le \\int f$. Hence, \n    $\\int f =\\lim\\int f_n$.\n  \\end{proof}\n\n  \\paragraph{7.}\n  \\begin{solution}\n    $\\,$\\par\n    (a) Let $f_n(x)=n\\cdot\\chi_{[0,1/n]}$. $f_n$ converges to $f=0$ except on \n    $x=0$. For each $n$, $\\int f_n = 1$ but $\\int f=0$. Hence, the inequality\n    could be strict.\\par\n    (b) Let $f_n(x)=\\chi_{[n,\\infty)}$. 
Then $\\langle f_n\\rangle$ is a \n    decreasing sequence which converges to $f=0$, the integral of which is $0$.\n    However, for every $n$, $\\int f_n = \\infty$.\n  \\end{solution}\n\n  \\paragraph{8.}\n  \\begin{proof}\n    Let $g_n = \\inf\\{f_n,f_{n+1},\\dots\\}$. It is clear that \n    \\begin{equation}\n      \\label{eq:4.3.8}\n      \\int g_n \\le \\int f_n.\n    \\end{equation}\n    Meanwhile, $\\langle g_n\\rangle$ is an increasing sequence converging to \n    $\\lowlim f_n$. Hence, by the Monotone Convergence Theorem and \n    \\eqref{eq:4.3.8},\n    \\[\n      \\int \\lowlim f_n = \\int \\lim_{n\\to\\infty}g_n = \n      \\lim_{n\\to\\infty}\\int g_n \\le \\lowlim\\int f_n.\n    \\]\n  \\end{proof}\n\n  \\paragraph{9.}\n  \\begin{proof}\n    By Fatou's Lemma,\n    \\begin{equation}\n      \\label{eq:4.3.9}\n      \\int_E f \\le \\lowlim\\int_E f_n.\n    \\end{equation}\n    Similarly, $\\int_{\\bar{E}}f \\le \\lowlim\\int_{\\bar{E}} f_n$ and therefore\n    \\[\n      \\int_E f_n = \\int f_n-\\int_{\\bar{E}} f_n\\quad\\Rightarrow\\quad\n      \\uplim\\int_E f_n \\le \\int f- \\int_{\\bar{E}} f = \\int_E f.\n    \\]\n    \\eqref{eq:4.3.9} and the inequality above together imply $\\int_E f_n\\to\n    \\int_E f$.\n  \\end{proof}\n% end\n\n\\subsection{The General Lebesgue Integral}\n  \\paragraph{12.}\n  \\begin{proof}\n    Note that $\\langle g+f_n\\rangle$ is a sequence of nonnegative measurable\n    functions. Hence by Problem 8,\n    \\[\n      \\int_E\\lowlim(g+f_n) \\le \\lowlim\\int_E(g+f_n) \\quad\\Rightarrow\\quad\n      \\int_E\\lowlim f_n \\le \\lowlim\\int_E f_n.\n    \\]\n    The second inequality follows immediately from the definition of lower and\n    upper limits. Replacing $g+f_n$ with $g-f_n$ and arguing in a similar manner\n    gives the last inequality.\n  \\end{proof}\n\n  \\paragraph{13.}\n  \\begin{proof}\n    $f_n\\ge -h$ implies $f_n+h\\ge 0$. Hence, $\\int(f_n+h)$ always has a meaning.\n    And since $h$ is integrable, $\\int f_n = \\int(f_n+h)-\\int h$ also has a\n    meaning. Similarly, $\\int f$ has a meaning. Meanwhile,\n    \\[\n      \\int f = \\int(f+h) - \\int h \\le \\lowlim\\int(f_n+h) - \\int h \n      = \\lowlim\\int f_n.\n    \\]\n  \\end{proof}\n\n\n  \\paragraph{15.}\n  \\begin{proof}\n    $\\,$\\par\n    (a) By Problem 4, for every $\\vep>0$, there exist simple functions \n    $\\varphi_1\\le f^+$ and $\\varphi_2\\le f^-$ such that \n    \\[\n      \\int_E f^+ - \\int_E\\varphi_1 < \\vep \\quad\\text{and}\\quad\n      \\int_E f^- - \\int_E\\varphi_2 < \\vep.\n    \\]\n    Let $\\varphi=\\varphi_1-\\varphi_2$, which is also a simple function. \n    Meanwhile,\n    \\[\n      \\int_E|f-\\varphi| \\le \\int_E(f^+-\\varphi_1) + \\int_E(f^--\\varphi_2) < 2\\vep.\n    \\]\n  \\end{proof}\n  \n  \\paragraph{16.}\n  \\begin{proof}\n    For every integrable $f$, by Problem 15, there exists some step function $\\psi\n    =\\sum_{k=1}^N c_k\\chi_{E_k}$ such that $\\int|f-\\psi|<\\vep$. Note that\n    \\begin{equation}\n      \\label{eq:4.4.16}\n      \\lim_{n\\to\\infty}\\int_{-\\infty}^\\infty \\psi(x)\\cos nx\\rd x =\n      \\lim_{n\\to\\infty}\\sum_{k=1}^N c_k\\int_{E_k}\\cos nx\\rd x = 0.      
\n    \\end{equation}\n    Hence,\n    \\begin{align*}\n      \\left|\\int_{-\\infty}^\\infty f(x)\\cos nx\\rd x \\right|\n      &= \\left|\\int_{-\\infty}^\\infty (f(x)-\\psi(x)+\\psi(x))\\cos nx\\rd x\\right|\\\\\n      &\\le \\int_{-\\infty}^\\infty |f(x)-\\psi(x)||\\cos nx|\\rd x + \n           \\left|\\int_{-\\infty}^\\infty \\psi(x)\\cos nx\\rd x\\right| \\\\\n      &\\le \\vep + \\left|\\int_{-\\infty}^\\infty \\psi(x)\\cos nx\\rd x\\right| \\\\\n      &\\to \\vep \\text{ as }n\\to\\infty\n    \\end{align*}\n    by \\eqref{eq:4.4.16}. Since $\\vep$ is arbitrary, the limit is $0$.\n  \\end{proof}\n\n  \\paragraph{18.}\n  \\begin{proof}\n    Let $\\langle t_n\\rangle$ be any sequence with $t_n\\ne 0$ and tending to $0$.\n    Then $\\langle f(x,t_n)\\rangle$ is a sequence of functions satisfying the \n    hypotheses of the Lebesgue Convergence Theorem. Meanwhile, $f(x,t_n)\\to f(x)$ as \n    $n\\to\\infty$. Hence,\n    \\[\n      \\lim_{n\\to\\infty}\\int f(x,t_n)\\rd x = \\int f(x)\\rd x.\n    \\]\n    Since the choice of $\\langle t_n\\rangle$ is arbitrary, by Problem 2.49f, \n    \\[\n      \\lim_{t\\to 0}\\int f(x,t)\\rd x = \\int f(x)\\rd x.\n    \\]\n    If $f$ is continuous in $t$ for each $x$, then $\\lim_{\\Delta t\\to 0}f(x,t+\n    \\Delta t)=f(x,t)$ holds for every $t$. Therefore, replacing $t$ with \n    $\\Delta t$ in the previous result yields\n    \\[\n      \\lim_{\\Delta t\\to 0}\\int f(x,t+\\Delta t)\\rd x = \\int f(x,t)\\rd x.\n    \\]\n    Namely, $h(t)$ is continuous.\n  \\end{proof}\n\n% end\n
{"text": "\\chapter{Annontated Examples}\n\\section{Preliminaries}\n\nThis chapter demonstrates the core functionality of the code. In Section \\ref{sec:mansol}, we present steps necessary for running a manufactured solution problem.\n\nIn this section, we will go through several example problems for running \\pkg{dgswem-v2}. In order to provide succint instructions to the reader, we introduce the following environment variables. We will assume that the following environment variables have been defined in the users environment:\n\\begin{lstlisting}{language=bash}\nexport DGSWEMV2_REPO=<path/to/dgswemv2_repo>\nexport DGSWEMV2_BUILD=<path/to/dgswemv2_build>\nexport DGSWEMV2_EXAMPLES=${DGSWEMV_REPO}/examples\n\\end{lstlisting}\n\nAdditionally, to keep the compile time of the application short, the cmake configuration does not automatically build the examples. To build the following examples, you will need to rerun your \\pkg{cmake} build command with the additional option \\lstinline{-DBUILD_EXAMPLES=On}.\n\n\n\n\\section{Manufactured Solution}\n\\label{sec:mansol}\nThe method of manufactured solutions presents an easy means of determining whether the solution is converging at the rates stipulated by the theoretical error estimates. In addition, since we obtain exact error estimates, the method of manufactured solutions may also be used to determine, whether or not errors have been introduced between parallel and serial implementations.\n\nTo avoid the stability-related difficulties associated with greatly varying mesh refinements, we evaluate the manufactured solution from~\\cite{Wiraset2014}. For the manufactured solution, we set the solution to be equal to\n\\begin{align}\n\\zeta(x,y,t) &= 2 \\zeta_0 \\frac{ \\cos \\left( \\omega(x - x_1) \\right) \\cos \\left( \\omega(y - y_1)\\right) \\cos(\\omega t)}{\n\\cos \\left( \\omega(x_2 - x_1)\\left) \\cos \\right( \\omega(y_2 - y_1)\\right)} \\nonumber \\\\\nq_x(x,y,t) &= \\zeta_0 \\frac{ \\sin \\left(\\omega(x - x_1)\\right) \\cos\\left(\\omega(y - y_1)\\right) \\sin(\\omega t)}{\\cos\\left( \\omega ( x_2 - x_1)\\right) \\cos\\left(\\omega ( y_2 - y_1)\\right)} \\nonumber \\\\\nq_y(x,y,t) &= \\zeta_0 \\frac{ \\cos\\left(\\omega (x - x_1)\\right) \\sin\\left( \\omega(y -y_1)\\right) \\sin( \\omega t)}{\\cos \\left( \\omega ( x_ 2 - x_1)\\right) \\cos\\left( \\omega(y_2 - y_1)\\right)} \\nonumber\n\\end{align}\nwhere $x_1 = 40,000\\,\\mathrm{m},\\, x_2= 83,200\\,\\mathrm{m},\\, y_1 = 10,000\\,\\mathrm{m},\\,y_2 = 53,200\\,\\mathrm{m},\\,\\zeta_0 = 0.25\\,\\mathrm{m},\\, \\omega = 2 \\pi/43,200\\, \\mathrm{rad}/\\mathrm{s}$. Additionally, $g=9.81\\,\\mathrm{m}/\\mathrm{s}^2$, and the bathymetry is constant with depth $H_0=2\\,\\mathrm{m}$. 
\nThe manufactured solutions can be run via the following three targets, which can be made as follows:\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_BUILD}\nmake MANUFACTURED_SOLUTION_SERIAL\nmake MANUFACTURED_SOLUTION_HPX\nmake MANUFACTURED_SOLUTION_OMPI\n\\end{lstlisting}\nNote that for the HPX and the OMPI manufactured targets, the project must have the \\lstinline{-DUSE_HPX} and \\lstinline{-DUSE_OMPI} options set to \\lstinline{On}, respectively.\n\nThe workflow for running the manufactured solution consists of four parts: (1) generating a mesh, (2) if necessary, partitioning the mesh, (3) running the simulation, and (4) interpreting the output.\n\n\\subsection{Generating the mesh}\nFor the generation of meshes, we use a \\lstinline{yaml}-formatted input file. We have included an input file in\n\\begin{lstlisting}\n${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files/mesh_generator_input.yml\n\\end{lstlisting}\nWe have provided comments for the individual variables in the yaml files. One key aspect for improving the accuracy of the solution is the mesh resolution. To adjust it, \\lstinline{num_x_subdivisions} and \\lstinline{num_y_subdivisions} allow one to refine or coarsen the mesh appropriately. However, for now we leave them at their defaults. To generate the mesh, run the following commands\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files/\n${DGSWEMV2_BUILD}/mesh_generators/quad_mesh_generator \\\n    mesh_generator_input.yml\n\\end{lstlisting}\nThis will generate a \\lstinline{quad_mesh.14} file in the current directory. This is an ADCIRC-formatted mesh file.\n\n\\subsection{Partitioning the mesh (optional)}\nNote that this section is only necessary if you are attempting to run the simulation in a distributed manner, i.e. using the OMPI or HPX executable. If you are only trying to run the simulation in serial, skip to the next subsection.\n\nIn order to run the simulation in parallel, the mesh must be broken into smaller pieces, which can then be assigned to individual processors. Since there exist stark contrasts in the send latencies between two processors on the same node versus processors that might be connected via an interconnect, we require the user to specify the number of localities (i.e. private memory address spaces) in addition to the number of partitions, to allow the mesh partitioner to optimize for these interconnect effects.\n\nThe partitioner executable can be found in\n\\begin{lstlisting}[language=bash]\n${DGSWEMV2_BUILD}/partitioner/partitioner\n\\end{lstlisting}\nand running the executable without any arguments will provide usage information. In particular, the variables mean the following:\n\\begin{itemize}\n\\item \\lstinline{<input filename>}: The name of the input file used to run the execution.\n\\item \\lstinline{<number of partitions>}: The number of partitions the mesh is to be partitioned into.\n\\item \\lstinline{<number of nodes>}: The number of hardware localities the simulation is to be run on.\n\\item \\lstinline{<ranks per locality>}: The number of ranks per locality. \n\\item \\lstinline{<rank balanced>}: Whether or not the constraints should be balanced across the individual submeshes. Note that the entry options are \\lstinline{true} or \\lstinline{false}. 
Setting it to \\lstinline{true} is recommended for OpenMP/MPI.\n\\end{itemize}\nOur two parallelization strategies rely on different parallel execution models, so the inputs for either version need to be slightly modified. In the following two subsections we outline the differences in general, but also provide concrete numbers to allow the user to proceed.\n\\subsubsection{Partitioning for HPX}\nThe HPX execution model varies from that of traditional parallelization strategies in that the number of partitions does not correspond to any hardware concept. For example, traditional MPI implementations assign one rank per core, so the number of partitions equals the number of cores. For HPX, the number of partitions is roughly proportional to the number of tasks executed on that locality. \\emph{Oversubscribing} meshes --- assigning more meshes to the locality than there are cores --- can lead to desirable behavior in the form of hiding send latencies. However, the user needs to be careful to avoid exposing too fine-grained parallelism in the form of too many meshes, because task overhead will then dominate the execution time and lead to performance degradation.\n\nTo continue the example for an HPX parallelization, we recommend running:\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files\n${DGSWEMV2_BUILD}/partitioner/partitioner\\\n    dgswemv2_input.15 4 1\n\\end{lstlisting}\nThis should generate 4 \\lstinline{meta}-formatted mesh files (the \\lstinline{meta} format isn't an official mesh format, but rather a simple method of representing the mesh internally for the \\pkg{dgswem-v2} application) and 4 \\lstinline{dbmd}-formatted files, which encapsulate the distributed metadata required to ensure that submeshes communicate appropriately with one another. In addition, we will have generated an updated input file specifically for running parallel meshes. For this example, it is \n\\lstinline{dgswemv2_input_parallelized.15}.\n\n\\subsubsection{Partitioning for OpenMP/MPI}\nPartitioning for the MPI+OpenMP implementation is slightly different. The ratio of the number of partitions to the number of localities should correspond to the number of threads available on each node. However, it is also possible to run a flat MPI implementation by modifying the \\lstinline{<ranks per locality>} option. This option corresponds to the number of MPI ranks on each node.\n\nAdditionally, since submeshes are mapped statically to threads, we recommend setting the \\lstinline{<rank balanced>} option to \\lstinline{true}. For applications with varying load across the elements, e.g. in the case of wetting and drying, we would like to balance load and memory constraints across the submeshes. Ultimately, the performance will be constrained along the critical path; for statically mapped parallelizations, optimal performance is achieved when each submesh has roughly the same amount of work. Note that this is not necessary for the HPX parallelization, which implements aggressive on-node work stealing.\n\nTo generate a mesh partitioning for this parallelization, we run\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files\n${DGSWEMV2_BUILD}/partitioner/partitioner\\\n   dgswemv2_input.15 2 1 2 true\n\\end{lstlisting}\nThis configuration will result in a flat MPI run with 2 MPI ranks. 
Note that, similar to the HPX run, we generate both \\lstinline{meta} mesh files and \\lstinline{dbmd} connectivity information, as well as an updated \\lstinline{dgswemv2_input_parallelized.15} input file.\n\\subsection{Running the simulation}\nFor each of the three execution modes --- serial, with HPX, and with MPI+OpenMP --- we have a separate executable.\nTo execute the serial implementation, run\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files\nmkdir -p output\n${DGSWEMV2_BUILD}/examples/MANUFACTURED_SOLUTION_SERIAL\\\n    dgswemv2_input.15\n\\end{lstlisting}\nFor the MPI and HPX versions, we assume that the parallel launcher is accessible through some \\lstinline{<mpirun>} command; e.g. on a Slurm-based system this might be \\lstinline{srun}. The MPI version can then be executed via\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files\nmkdir -p output\n<mpirun> ${DGSWEMV2_BUILD}/examples/MANUFACTURED_SOLUTION_OMPI\\\n    dgswemv2_input.15\n\\end{lstlisting}\nFor the HPX version, the execution command depends on the set-up of the system. If there is no distributed aspect to your computing environment, you do not need a distributed launcher like \\lstinline{srun}. Thus, for the reader running the simulation on their laptop, i.e. with one locality, the HPX example can be run as\n\\begin{lstlisting}[language=bash]\ncd ${DGSWEMV2_EXAMPLES}/manufactured_solution/input_files\nmkdir -p output\n${DGSWEMV2_BUILD}/examples/MANUFACTURED_SOLUTION_HPX\\\n    dgswemv2_input.15\n\\end{lstlisting}\n\n\\subsection{Interpreting the Output}\n\\pkg{dgswem-v2} writes output in several forms. Firstly, in the \\lstinline{output} folder that was created in the \\texttt{\\$\\{DGSWEMV2\\_EXAMPLES\\}/manufactured\\_solution} directory, you will find a log file named \\lstinline{log} and \\texttt{vtk}-formatted output. The output can be visualized using a package like \\pkg{paraview}. 
A sample image of what the output should look like is shown in Figure \\ref{fig:mansol}.\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.8 \\textwidth]{{images/man_sol}.png}\n\\caption{Sample output of the manufactured solution run.}\n\\label{fig:mansol}\n\\end{figure}\n\\section{1D Inlet}\n\\label{sec:inlet}
{"text": "\\subsection{First-order models}\\label{subsec:first_order_models}\n\nMuch of \\fullref{subsec:first_order_logic} is dedicated to semantic equivalences between logical formulas, which are formulated and proved using \\hyperref[def:first_order_structure]{structures}. This section is dedicated to the study of structures themselves and relations between them. While model theory is a wide topic, for the purposes of this document we are only interested in the following questions:\n\n\\begin{itemize}\n  \\item Which subsets of a structure form a \\hyperref[def:first_order_substructure]{substructure}?\n\n  This is answered by \\fullref{def:first_order_substructure} and by \\fullref{def:first_order_generated_substructure}. Vacuously, if the language contains no functional symbols, by \\fullref{thm:def:first_order_substructure/no_functions} every subset of a structure is a substructure. Such is the case with \\hyperref[def:set]{sets} themselves, with \\hyperref[def:partially_ordered_set]{partially ordered sets} or with \\hyperref[def:metric_space]{metric} and \\hyperref[def:topological_space]{topological spaces}.\n\n  \\Fullref{thm:substructures_form_complete_lattice} shows that the set of all substructures of a structure is worth studying in itself.\n\n  \\item Given a model of some set \\( \\Gamma \\) of formulas, which substructures and \\hyperref[def:first_order_homomorphism]{homomorphic} images of the model are again models of \\( \\Gamma \\)?\n\n  This is answered by \\fullref{thm:positive_formulas_preserved_under_homomorphism}, \\fullref{thm:arbitrary_formulas_preserved_under_isomorphisms} and \\fullref{thm:functions_over_model_form_model}.\n\\end{itemize}\n\n\\begin{definition}\\label{def:first_order_substructure}\n  Let \\( \\mscrX = (X, I) \\) be a structure for the language \\( \\mscrL \\) and let \\( Y \\subseteq X \\). 
We say that \\( \\mscrY = (Y, I) \\) is a \\term{substructure} of \\( \\mscrX \\) if it satisfies any of the following equivalent conditions:\n\n  \\begin{thmenum}\n    \\thmitem{def:first_order_substructure/deductive} The set \\( Y \\) is closed under function application, that is, for any functional symbol \\( f \\) in \\( \\mscrL \\) with arity \\( n \\), we have \\( I(f)(Y^n) \\subseteq Y \\).\n\n    \\thmitem{def:first_order_substructure/inductive} The universe \\( Y \\) is a \\hyperref[def:fixed_point]{fixed point} of the operator\n    \\begin{equation}\\label{eq:def:first_order_substructure/inductive/operator}\n      \\begin{aligned}\n        &T: \\pow(X) \\to \\pow(X) \\\\\n        &T(A) \\coloneqq A \\cup \\set*{ x \\in X \\given \\qexists{f \\in \\boldop{Fun}} \\qexists{x_1, \\ldots, x_{\\#f} \\in A} f\\Bracks{x_1, \\ldots, x_{\\#f}} = x },\n      \\end{aligned}\n    \\end{equation}\n    which enlarges \\( A \\) with the union of all images of \\( A \\) under the functions of the language \\( \\mscrL \\).\n\n    Note that the formula inside \\eqref{eq:def:first_order_substructure/inductive/operator} is in the metalanguage despite using syntax similar to first-order logic formulas.\n  \\end{thmenum}\n\\end{definition}\n\\begin{proof}\n  By definition of \\( T \\), \\( Y \\) is a fixed point if and only if\n  \\begin{equation*}\n    \\set*{ x \\in X \\given \\qexists{f \\in \\boldop{Fun}} \\qexists{x_1, \\ldots, x_{\\#f} \\in Y} f\\Bracks{x_1, \\ldots, x_{\\#f}} = x } \\subseteq Y.\n  \\end{equation*}\n\n  This condition is clearly satisfied if \\( Y \\) satisfies \\fullref{def:first_order_substructure/deductive}.\n\n  If, instead, \\( Y \\) is a fixed point of \\( T \\), then for any \\( n \\)-ary functional symbol \\( f \\in \\boldop{Fun} \\) and any tuple \\( x_1, \\ldots, x_n \\in Y \\), the value \\( I(f)(x_1, \\ldots, x_n) \\) belongs to \\( Y \\). Therefore, \\fullref{def:first_order_substructure/deductive} is satisfied.\n\\end{proof}\n\n\\begin{remark}\\label{rem:topological_first_order_structures}\\mimprovised\n  Let \\( \\mscrX = (X, I) \\) be a structure over some language \\( \\mscrL \\) without predicate symbols.\n\n  If, for every functional symbol \\( f \\), the interpretation \\( I(f) \\) is a \\hyperref[def:global_continuity]{continuous function}, we call \\( \\mscrX \\) a \\term{topological structure}.\n\n  For every algebraic structure defined in \\fullref{sec:group_theory} and \\fullref{sec:ring_theory}, there exists a topological equivalent. We discuss \\hyperref[def:topological_group]{topological groups} and \\hyperref[def:topological_vector_space]{topological vector spaces} throughout the document, especially in \\fullref{sec:functional_analysis}.\n\n  Naturally, every substructure of a topological structure is again a topological structure.\n\\end{remark}\n\n\\begin{example}\\label{ex:def:first_order_substructure/vector_space}\n  The classic definition for a subset \\( U \\) of a \\hyperref[def:vector_space]{vector space} \\( \\mscrV \\) being a vector subspace is that \\( U \\) is closed under \\hyperref[rem:linear_combinations]{linear combinations}. Linear combinations are simply finite \\hyperref[def:multi_valued_function/superposition]{superpositions} of addition and scalar multiplication in \\( \\mscrV \\). 
So this condition ensures that \\( U \\) is closed under application of the functional symbols corresponding to addition and scalar multiplication.\n\n  See \\fullref{thm:span_via_linear_combinations} for a further discussion.\n\\end{example}\n\n\\begin{proposition}\\label{thm:def:first_order_substructure}\n  Fix a language \\( \\mscrL \\). The \\hyperref[def:first_order_substructure]{first order substructures} of a structure \\( \\mscrX = (X, I) \\) have the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:first_order_substructure/no_functions} If \\( \\mscrL \\) has no functional symbols, then \\( (Y, I) \\) is a substructure, where \\( Y \\) is any subset of \\( X \\).\n\n    \\thmitem{thm:def:first_order_substructure/intersection} Let \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) be a family of substructures of \\( \\mscrX \\). Then their \\term{intersection structure} \\( \\parens*{\\bigcap_{k \\in \\mscrK} Y_k, I} \\) is again a substructure of \\( \\mscrX \\).\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:def:first_order_substructure/no_functions} Both conditions in \\fullref{def:first_order_substructure} are vacuously satisfied if there are no functional symbols in \\( \\mscrL \\).\n\n  \\SubProofOf{thm:def:first_order_substructure/intersection} For any functional symbol \\( f \\) in \\( \\mscrL \\) with arity \\( n \\), we have\n  \\begin{equation*}\n    I(f)\\parens*{\\parens*{\\bigcap_{\\smash{k \\in \\mscrK}} Y_k}^n}\n    \\reloset {\\ref{thm:def:function_image/intersection}} \\subseteq\n    \\bigcap_{k \\in \\mscrK} I(f)(Y_k^n) \\subseteq \\bigcap_{k \\in \\mscrK} Y_k.\n  \\end{equation*}\n\n  Therefore, \\( \\parens*{\\bigcap_{k \\in \\mscrK} Y_k, I} \\) is indeed a substructure of \\( \\mscrX \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:first_order_generated_substructure}\n  Let \\( \\mscrX = (X, I) \\) be a structure over \\( \\mscrL \\) and let \\( A \\subseteq X \\) be any set. The set \\( A \\) is said to \\term[bg=поражда,ru=порождает]{generate} the substructure \\( \\mscrY = (Y, I) \\) if it satisfies any of the equivalent statements:\n  \\begin{thmenum}\n    \\thmitem{def:first_order_generated_substructure/smallest} Out of all substructures of \\( \\mscrX \\) whose domain contains \\( A \\), the domain \\( Y \\) is the smallest with respect to \\hyperref[def:subset]{set inclusion}.\n\n    \\thmitem{def:first_order_generated_substructure/intersection} \\( \\mscrY \\) is the intersection structure of all substructures of \\( \\mscrX \\) whose domains contain \\( A \\).\n  \\end{thmenum}\n\\end{definition}\n\\begin{proof}\n  Let \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) be the family of all substructures of \\( \\mscrX \\) whose domains contain \\( A \\). Fix one of these substructures, say \\( (Y_{k_0}, I) \\).\n\n  We have the obvious inclusion\n  \\begin{equation*}\n    \\bigcap_{k \\in \\mscrK} Y_k \\subseteq Y_{k_0}.\n  \\end{equation*}\n\n  The reverse inclusion holds if and only if \\( Y_{k_0} \\) is contained in each one of the domains \\( Y_k \\) for \\( k \\in \\mscrK \\). 
In other words, \\( Y_{k_0} \\) is the smallest of the domains \\( \\seq{ Y_k }_{k \\in \\mscrK} \\) with respect to set inclusion if and only if \\( Y_{k_0} \\) equals their intersection.\n\\end{proof}\n\n\\begin{example}\\label{ex:def:first_order_generated_substructure}\n  Common examples of generated substructures are the \\hyperref[def:semimodule/submodel]{linear span} discussed in \\fullref{thm:span_via_linear_combinations} and the \\hyperref[def:generated_ring_ideal]{generated ring ideals}.\n\\end{example}\n\n\\begin{proposition}\\label{thm:first_order_generated_substructures_exist}\n  Let \\( \\mscrX = (X, I) \\) be a structure for the language \\( \\mscrL \\).\n\n  Every subset of \\( X \\) has a \\hyperref[def:first_order_generated_substructure]{generated substructure}.\n\\end{proposition}\n\\begin{proof}\n  Given a set \\( Y \\subseteq X \\), we apply \\fullref{thm:knaster_tarski_theorem} to the \\hyperref[thm:boolean_algebra_of_subsets]{Boolean algebra of all subsets} \\( \\pow(X) \\) with the operator\n  \\begin{equation*}\n    \\begin{aligned}\n      &R: \\pow(X) \\to \\pow(X) \\\\\n      &R(A) \\coloneqq Y \\cup T(A),\n    \\end{aligned}\n  \\end{equation*}\n  where \\( T \\) is defined in \\eqref{eq:def:first_order_substructure/inductive/operator}.\n\n  We thus obtain the smallest fixed point \\( Z \\) of \\( R \\), which contains \\( Y \\) and satisfies \\fullref{def:first_order_substructure/inductive}. The structure \\( \\mscrZ = (Z, I) \\) is thus a substructure of \\( \\mscrX \\).\n\n  Furthermore, \\( \\mscrZ \\) is unique because its domain is the smallest set invariant under \\( R \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:substructures_form_complete_lattice}\n  Fix a structure \\( \\mscrX = (X, I) \\) for the language \\( \\mscrL \\).\n\n  The set of all substructures of the structure \\( \\mscrX \\) forms a complete lattice with respect to set inclusion of domains. It is isomorphic to a complete \\hyperref[def:semilattice/submodel]{sublattice} of the Boolean algebra \\( \\pow(X) \\) described in \\fullref{thm:boolean_algebra_of_subsets}.\n\n  Explicitly:\n  \\begin{thmenum}\n    \\thmitem{thm:substructures_form_complete_lattice/join} The \\hyperref[def:semilattice/join]{join} of the family of substructures \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) is the \\hyperref[def:first_order_generated_substructure]{generated substructure} of the set \\( \\bigcup_{k \\in \\mscrK} Y_k \\).\n\n    \\thmitem{thm:substructures_form_complete_lattice/top} The \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{top element} is the substructure \\( \\mscrX \\) itself. Any substructures that are different from \\( \\mscrX \\) are called \\term{proper}.\n\n    \\thmitem{thm:substructures_form_complete_lattice/meet} The \\hyperref[def:semilattice/meet]{meet} of the family of substructures \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) is simply the \\hyperref[thm:def:first_order_substructure/intersection]{intersection structure} of the family.\n\n    \\thmitem{thm:substructures_form_complete_lattice/bottom} The \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{bottom element} of this lattice is the intersection of all substructures. This is called the \\term{trivial substructure}. 
As a matter of fact, because the trivial substructures of any two structures are isomorphic, we refer to them collectively as the \\term{trivial structure}.\n\n    As discussed in \\fullref{rem:empty_models}, the empty set is not allowed to be the domain of a structure by definition. Nevertheless, for the sake of having a bottom element we allow structures with empty domains in this lattice.\n\n    By \\fullref{thm:order_category_isomorphism_properties/universal}, the trivial structures are the \\hyperref[def:universal_objects/initial]{initial objects} of all \\hyperref[def:category_of_small_first_order_models]{categories of models}.\n\n    The trivial substructure usually consists only of the constants of \\( \\mscrL \\) --- for example, the \\hyperref[def:group/trivial]{trivial group} \\( \\set{ e } \\) or the \\hyperref[def:semilattice/trivial]{trivial bounded lattice} \\( \\set{ \\top, \\bot } \\).\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:substructures_form_complete_lattice/join} Let \\( (Y, I) \\) be the generated substructure of the set \\( A \\coloneqq \\bigcup_{k \\in \\mscrK} Y_k \\). From \\fullref{def:first_order_generated_substructure/smallest} it follows that, out of the domains of all substructures of \\( \\mscrX \\), \\( Y \\) is the smallest that contains \\( A \\) and hence the smallest that contains \\( Y_k \\) for all \\( k \\in \\mscrK \\). Therefore, it is indeed the supremum of the family \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) with respect to set inclusion of domains.\n\n  \\SubProofOf{thm:substructures_form_complete_lattice/top} Since \\( \\mscrX \\) is a substructure of itself, it is not only the supremum of the entire lattice, but actually the maximum.\n\n  \\SubProofOf{thm:substructures_form_complete_lattice/meet} The intersection structure of the family \\( \\seq{ (Y_k, I) }_{k \\in \\mscrK} \\) of substructures of \\( \\mscrX \\) is obviously the infimum of the family, since its domain is the infimum of the domains in the \\hyperref[thm:boolean_algebra_of_subsets]{Boolean algebra of subsets} of \\( X \\).\n\n  \\SubProofOf{thm:substructures_form_complete_lattice/bottom} It trivially follows from \\fullref{thm:substructures_form_complete_lattice/meet} that the bottom element is the intersection of all substructures of \\( \\mscrX \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:first_order_homomorphism}\n  Let \\( \\mscrX = (X, I_X) \\) and \\( \\mscrY = (Y, I_Y) \\) be structures over a common language. We say that the \\hyperref[def:function]{function} \\( h: X \\to Y \\) is a \\term{homomorphism} between \\( \\mscrX \\) and \\( \\mscrY \\) if it preserves all functions and relations. 
Explicitly:\n  \\begin{thmenum}\n    \\thmitem{def:first_order_homomorphism/functions} For any functional symbol \\( f \\in \\boldop{Fun} \\) of arity \\( n \\) and any tuple \\( x_1, \\ldots, x_n \\in X \\) we have\n    \\begin{equation*}\n      h\\parens[\\Big]{ I_X(f)(x_1, \\ldots, x_n) } = I_Y(f) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) }\n    \\end{equation*}\n\n    \\thmitem{def:first_order_homomorphism/predicates} For any predicate symbol \\( p \\in \\boldop{Pred} \\) of arity \\( n \\) and any \\( x_1, \\ldots, x_n \\in X \\),\n    \\begin{equation*}\n      I_X(p) (x_1, \\ldots, x_n) = T \\T{implies} I_Y(p) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) } = T.\n    \\end{equation*}\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{remark}\\label{rem:first_order_strong_homomorphism}\n  Homomorphisms as they are defined in \\fullref{def:first_order_homomorphism} are sometimes called \\term{weak homomorphisms}. Under weak homomorphisms, it is possible that \\( I_X(p) (x_1, \\ldots, x_n) = F \\) and yet \\( I_Y(p) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) } = T \\).\n\n  Logicians sometimes define \\term{strong homomorphisms} where they replace \\fullref{def:first_order_homomorphism/predicates} with the stronger condition\n  \\begin{equation*}\n    I_X(p) (x_1, \\ldots, x_n) = I_Y(p) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) }.\n  \\end{equation*}\n\n  This condition seems much more natural at first, but it is less useful in practice. For example, \\hyperref[def:partially_ordered_set/homomorphism]{monotone maps} and \\hyperref[eq:def:category_of_small_hypergraphs/homomorphism]{graph homomorphisms} are both weak homomorphisms and these are the most used definitions of homomorphisms in languages with predicate symbols. For this reason, we mostly avoid studying strong homomorphisms.\n\n  They are useful for certain propositions like \\fullref{thm:totally_ordered_strong_homomorphism}, however.\n\\end{remark}\n\n\\begin{proposition}\\label{thm:def:first_order_homomorphism}\n  \\hyperref[def:first_order_homomorphism]{First-order structure homomorphisms} have the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:first_order_homomorphism/submodel} If \\( \\mscrX = (X, I) \\) is a structure and \\( \\mscrY = (Y, I) \\) is a \\hyperref[def:first_order_substructure]{substructure} of \\( \\mscrX \\), then the \\term{canonical embedding} function\n    \\begin{equation}\\label{thm:def:first_order_homomorphism/submodel/canonical_embedding}\n      \\begin{aligned}\n        &\\iota: Y \\to X \\\\\n        &\\iota(y) \\coloneqq y\n      \\end{aligned}\n    \\end{equation}\n    is indeed a \\hyperref[def:first_order_homomorphism_invertibility/surjective]{homomorphism} (and thus an embedding in the sense of \\fullref{def:first_order_homomorphism_invertibility}).\n\n    \\thmitem{thm:def:first_order_homomorphism/preserves_substructures} If \\( \\mscrX = (X, I_X) \\) and \\( \\mscrY = (Y, I_Y) \\) are structures and \\( h: X \\to Y \\) is a (weak) homomorphism, then the \\hyperref[def:multi_valued_function/image]{image} \\( h(\\mscrX) \\coloneqq (h(X), I_Y) \\) is a substructure of \\( \\mscrY \\).\n\n    \\thmitem{thm:def:first_order_homomorphism/composition} The \\hyperref[def:multi_valued_function/composition]{composition} of two homomorphisms is again a homomorphism.\n\n    \\thmitem{thm:def:first_order_homomorphism/term_valuation} Fix a homomorphism \\( h: X \\to Y \\) and a term \\( \\tau \\). 
For any variable assignments \\( v_X \\) and \\( v_Y \\) such that \\( v_Y(\\xi) = h(v_X(\\xi)) \\) for all \\( \\xi \\in \\boldop{Var}(\\tau) \\), we have\n    \\begin{equation*}\n      h(\\tau\\Bracks{v_X}) = \\tau\\Bracks{v_Y}.\n    \\end{equation*}\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:def:first_order_homomorphism/submodel} The interpretation in the substructure \\( \\mscrY \\) is the \\hyperref[def:multi_valued_function/restriction]{restriction} \\( I\\restr_Y \\) of \\( I \\) to \\( Y \\), which simply restricts the domain of every predicate and function, and it is indeed an interpretation in \\( \\mscrY \\). Thus, \\( (Y, I\\restr_Y) \\) is a structure.\n\n  Conditions \\fullref{def:first_order_homomorphism/functions} and \\fullref{def:first_order_homomorphism/predicates} are both satisfied since the interpretation of any function and predicate is restricted to \\( \\mscrY \\). Thus, \\( \\iota \\) is a homomorphism.\n\n  \\SubProofOf{thm:def:first_order_homomorphism/preserves_substructures} To prove that \\( (h(X), I_Y) \\) is a substructure of \\( \\mscrY \\), we will show that \\fullref{def:first_order_substructure/deductive} holds.\n\n  Indeed, due to \\fullref{def:first_order_homomorphism/functions}, for any \\( n \\)-ary functional symbol and any tuple \\( {x_1, \\ldots, x_n \\in X} \\), we have that\n  \\begin{equation*}\n    I_Y(f) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) }\n    \\reloset {\\ref{def:first_order_homomorphism/functions}} =\n    h\\parens[\\Big]{ I_X(f)(x_1, \\ldots, x_n) }\n    \\reloset {\\ref{def:first_order_substructure/deductive}} \\in\n    h(X).\n  \\end{equation*}\n\n  \\SubProofOf{thm:def:first_order_homomorphism/composition} Let \\( h: \\mscrX \\to \\mscrY \\) and \\( l: \\mscrY \\to \\mscrZ \\) both be homomorphisms.\n\n  \\begin{itemize}\n    \\item \\Fullref{def:first_order_homomorphism/functions} is satisfied because for any \\( n \\)-ary functional symbol \\( f \\) and any tuple \\( x_1, \\ldots, x_n \\in X \\),\n    \\begin{align*}\n      &\\phantom{{}={}}\n      (l \\bincirc h) \\parens[\\Big]{ I_X(f)(x_1, \\ldots, x_n) }\n      \\reloset {\\ref{def:first_order_homomorphism/functions}} = \\\\ &=\n      l\\parens[\\Big]{ I_Y(f) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) } }\n      \\reloset {\\ref{def:first_order_homomorphism/functions}} = \\\\ &=\n      I_{\\mscrZ}(f) \\parens[\\Big]{ (l \\bincirc h)(x_1), \\ldots, (l \\bincirc h)(x_n) }.\n    \\end{align*}\n\n    \\item \\Fullref{def:first_order_homomorphism/predicates} is satisfied because for any \\( n \\)-ary predicate symbol \\( p \\) and any tuple \\( x_1, \\ldots, x_n \\in X \\), applying the condition first for \\( h \\) and then for \\( l \\) gives\n    \\begin{equation*}\n      I_X(p) (x_1, \\ldots, x_n) = T \\T{implies} I_Y(p) \\parens[\\Big]{ h(x_1), \\ldots, h(x_n) } = T \\T{implies} I_{\\mscrZ}(p) \\parens[\\Big]{ (l \\bincirc h)(x_1), \\ldots, (l \\bincirc h)(x_n) } = T.\n    \\end{equation*}\n  \\end{itemize}\n\n  \\SubProofOf{thm:def:first_order_homomorphism/term_valuation} We use induction on the structure of \\( \\tau \\). If \\( \\tau \\) is a variable, the statement is obvious from the compatibility condition for \\( v_X \\) and \\( v_Y \\).
If \\( \\tau = f(\\kappa_1, \\ldots, \\kappa_m) \\), then, using the inductive hypothesis for the subterms and then \\fullref{def:first_order_homomorphism/functions},\n  \\begin{align*}\n    \\tau\\Bracks{v_Y}\n    &=\n    I_Y(f) \\parens[\\Big]{ \\kappa_1\\Bracks{v_Y}, \\ldots, \\kappa_m\\Bracks{v_Y} }\n    = \\\\ &=\n    I_Y(f) \\parens[\\Big]{ h(\\kappa_1\\Bracks{v_X}), \\ldots, h(\\kappa_m\\Bracks{v_X}) }\n    \\reloset {\\ref{def:first_order_homomorphism/functions}} = \\\\ &=\n    h\\parens[\\Big]{ I_X(f) \\parens[\\Big]{ \\kappa_1\\Bracks{v_X}, \\ldots, \\kappa_m\\Bracks{v_X} } }\n    = \\\\ &=\n    h(\\tau\\Bracks{v_X}).\n  \\end{align*}\n\\end{proof}\n\n\\begin{definition}\\label{def:first_order_homomorphism_invertibility}\n  In connection with \\fullref{def:function_invertibility} and \\fullref{def:morphism_invertibility}, and more importantly \\fullref{thm:first_order_categorical_invertibility}, we introduce the following terminology for homomorphisms:\n  \\begin{thmenum}\n    \\thmitem{def:first_order_homomorphism_invertibility/embedding} An \\term{embedding} is an \\hyperref[def:function_invertibility/injective]{injective} homomorphism.\n\n    \\thmitem{def:first_order_homomorphism_invertibility/surjective} The dual \\hyperref[def:function_invertibility/surjective]{surjective} homomorphisms do not have an established name. The term \\term{projection} is often used, but projections are conventionally idempotent.\n\n    \\thmitem{def:first_order_homomorphism_invertibility/isomorphism} An \\term{isomorphism} is a \\hyperref[def:function_invertibility/bijective]{bijective} homomorphism whose inverse is also a homomorphism. Equivalently, it is a bijective strong homomorphism.\n\n    The peculiarity here is that the inverse of a bijective homomorphism, in the sense of \\fullref{def:morphism_invertibility/isomorphism}, may not itself be a homomorphism in general. See \\fullref{ex:bijective_order_homomorphism_not_isomorphism}. For examples where this condition can be relaxed, see \\fullref{thm:automorphism_without_predicate_symbols} and \\fullref{thm:totally_ordered_strict_isomorphisms}.\n\n    \\thmitem{def:first_order_homomorphism_invertibility/endomorphism} An \\term{endomorphism} is a homomorphism that is also an \\hyperref[def:multi_valued_function/endofunction]{endofunction}.\n\n    \\thmitem{def:first_order_homomorphism_invertibility/automorphism} A homomorphism that is both an endomorphism and an isomorphism is called an \\term{automorphism}.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{example}\\label{ex:bijective_order_homomorphism_not_isomorphism}\n  Consider the \\hyperref[def:set_of_integers]{set of integers \\( \\BbbZ \\)} endowed with two different \\hyperref[def:partially_ordered_set]{partial orders}:\n  \\begin{itemize}\n    \\item The standard total order \\( \\leq \\), where \\( n \\leq m \\) if there exists a nonnegative integer \\( k \\) such that \\( n + k = m \\).\n    \\item The equality relation \\( = \\), which is simply the \\hyperref[def:binary_relation/diagonal]{diagonal relation} \\( \\increment \\).\n  \\end{itemize}\n\n  The identity \\( \\id(x) = x \\) is an \\hyperref[def:partially_ordered_set/homomorphism]{order homomorphism} from \\( (\\BbbZ, =) \\) to \\( (\\BbbZ, \\leq) \\). Indeed, for any integers \\( n \\) and \\( m \\), \\( n = m \\) implies \\( n \\leq m \\).\n\n  Furthermore, the identity function is bijective.
The inverse of \\( \\id \\), which is again \\( \\id \\), is, however, not a homomorphism from \\( (\\BbbZ, \\leq) \\) to \\( (\\BbbZ, =) \\) since, for example, \\( 1 \\leq 2 \\), but \\( 1 \\neq 2 \\).\n\n  Hence, \\( \\id: (\\BbbZ, =) \\to (\\BbbZ, \\leq) \\) is a bijective homomorphism, but not an isomorphism.\n\\end{example}\n\n\\begin{proposition}\\label{thm:automorphism_without_predicate_symbols}\n  Let \\( \\mscrL \\) be a language without predicate symbols. Then any bijective homomorphism between structures for \\( \\mscrL \\) is an \\hyperref[def:first_order_homomorphism_invertibility/isomorphism]{isomorphism}.\n\n  This applies to arbitrary languages if we restrict ourselves to \\hyperref[rem:first_order_strong_homomorphism]{strong homomorphisms}.\n\\end{proposition}\n\\begin{proof}\n  If \\( \\mscrL \\) has no predicate symbols, then \\fullref{def:first_order_homomorphism/predicates} is vacuously satisfied for the inverse of any bijective homomorphism. If we instead restrict ourselves to strong homomorphisms, the condition for the inverse follows directly from the two-sided equality in \\fullref{rem:first_order_strong_homomorphism}.\n\\end{proof}\n\n\\begin{definition}\\label{def:positive_formula}\n  We say that a \\hyperref[def:propositional_syntax/formula]{propositional formula} \\( \\varphi \\) is \\term{positive} if it contains only \\hyperref[def:cnf_and_dnf]{positive literals} connected using \\hyperref[def:propositional_language/connectives/conjunction]{conjunction \\( \\wedge \\)} and \\hyperref[def:propositional_language/connectives/disjunction]{disjunction \\( \\vee \\)}. We could also add propositional constants; however, that would be redundant due to \\fullref{thm:binary_lattice_operations/identity}.\n\n  The point of positive formulas is to avoid \\hyperref[def:propositional_language/negation]{negation \\( \\neg \\)}. This definition is not equivalent to that of \\hyperref[def:positive_implicational_deductive_system]{positive implicational formulas}, where \\( \\rightarrow \\) is the only connective. We avoid adding \\( \\rightarrow \\) because that would allow us, assuming classical logic, to derive negation using \\fullref{thm:boolean_equivalences/negation_bottom}.\n\n  Positive formulas are used in \\fullref{thm:positive_formulas_preserved_under_homomorphism}, which fails to hold for some non-positive formulas --- see \\fullref{ex:monoid_cancellation_not_preserved_by_homomorphism}.\n\n  When dealing with first-order logic, we simply use \\hyperref[thm:first_order_substitution_equivalence/propositional]{substitution} to replace propositional variables with atomic formulas. This way we obtain positive first-order formulas with \\hyperref[thm:implicit_universal_quantification]{implicit universal quantification}. Of course, we can always add explicit universal quantifiers, but we avoid existential quantifiers because of \\fullref{thm:first_order_quantifiers_are_dual}.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:positive_formulas_preserved_under_homomorphism}\n  Let \\( \\mscrX = (X, I_X) \\) and \\( \\mscrY = (Y, I_Y) \\) be structures over a common language \\( \\mscrL \\) and let \\( h: X \\to Y \\) be a \\hyperref[def:first_order_homomorphism]{homomorphism} between them. Take \\( \\Gamma \\) to be a nonempty set of \\hyperref[def:positive_formula]{positive formulas}.\n\n  Then \\( h \\) preserves models of \\( \\Gamma \\). That is, if \\( \\mscrX \\vDash \\Gamma \\), then \\( (h(X), I_Y) \\vDash \\Gamma \\).\n\\end{proposition}\n\\begin{proof}\n  Let \\( v_Y \\) be a variable assignment in the image structure \\( (h(X), I_Y) \\).
Let \\( v_X: \\boldop{Var} \\to X \\) be an assignment such that for any variable \\( \\xi \\) we have\n  \\begin{equation*}\n    v_X(\\xi) \\in h^{-1}(v_Y(\\xi)).\n  \\end{equation*}\n\n  At least one such assignment exists due to \\fullref{def:zfc/choice}. If \\( h \\) is injective, this assignment is unique.\n\n  We will show that\n  \\begin{equation}\\label{thm:positive_formulas_preserved_under_homomorphism/ind_hyp_x}\n    \\mscrX \\vDash_{v_X} \\varphi\n  \\end{equation}\n  implies\n  \\begin{equation}\\label{thm:positive_formulas_preserved_under_homomorphism/ind_hyp_y}\n    (h(X), I_Y) \\vDash_{v_Y} \\varphi.\n  \\end{equation}\n\n  We assume \\eqref{thm:positive_formulas_preserved_under_homomorphism/ind_hyp_x} for \\( \\varphi \\) and we use induction on the structure of \\( \\varphi \\) to prove \\eqref{thm:positive_formulas_preserved_under_homomorphism/ind_hyp_y}, starting with the different \\hyperref[def:first_order_syntax/atomic_formula]{atomic formulas}:\n  \\begin{itemize}\n    \\item The constant \\( \\top \\) is vacuously preserved by homomorphisms because it does not depend on the interpretation or variable assignment.\n\n    \\item Suppose that \\( \\varphi = (\\tau_1 \\doteq \\tau_2) \\). We have \\( \\tau_1\\Bracks{v_X} = \\tau_2\\Bracks{v_X} \\) and hence \\( h(\\tau_1\\Bracks{v_X}) = h(\\tau_2\\Bracks{v_X}) \\) and\n    \\begin{equation*}\n      \\tau_1\\Bracks{v_Y}\n      \\reloset {\\ref{thm:def:first_order_homomorphism/term_valuation}} =\n      h(\\tau_1\\Bracks{v_X})\n      =\n      h(\\tau_2\\Bracks{v_X})\n      \\reloset {\\ref{thm:def:first_order_homomorphism/term_valuation}} =\n      \\tau_2\\Bracks{v_Y}.\n    \\end{equation*}\n\n    \\item Suppose that \\( \\varphi \\) is the predicate formula \\( p(\\tau_1, \\ldots, \\tau_n) \\). By assumption \\eqref{thm:positive_formulas_preserved_under_homomorphism/ind_hyp_x}, we have\n    \\begin{equation*}\n      \\mscrX \\vDash_{v_X} p(\\tau_1, \\ldots, \\tau_n),\n    \\end{equation*}\n    that is,\n    \\begin{equation}\\label{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/x}\n      I_X(p) \\parens[\\Big]{ \\tau_1\\Bracks{v_X}, \\ldots, \\tau_n\\Bracks{v_X} } = T.\n    \\end{equation}\n\n    By definition of homomorphism, we have\n    \\begin{equation}\\label{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/y}\n      I_Y(p) \\parens[\\Big]{ h(\\tau_1\\Bracks{v_X}), \\ldots, h(\\tau_n\\Bracks{v_X}) } = T.\n    \\end{equation}\n\n    Now\n    \\begin{equation*}\n      (h(X), I_Y) \\vDash_{v_Y} p(\\tau_1, \\ldots, \\tau_n)\n    \\end{equation*}\n    follows from \\fullref{thm:def:first_order_homomorphism/term_valuation}.\n\n    If \\( h \\) is a \\hyperref[rem:first_order_strong_homomorphism]{strong homomorphism}, then the converse also holds, i.e. \\eqref{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/x} follows from \\eqref{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/y}.
See \\fullref{thm:arbitrary_formulas_preserved_under_isomorphisms} for an application of this converse.\n\n    \\item Suppose that \\( \\varphi = \\psi_1 \\wedge \\psi_2 \\) and that the inductive hypothesis holds for \\( \\psi_1 \\) and \\( \\psi_2 \\).\n\n    Since \\( \\varphi\\Bracks{v_X} = T \\) by assumption, by definition of the valuation of a conjunction we have\n    \\begin{equation*}\n      \\psi_1\\Bracks{v_X}\n      =\n      \\psi_2\\Bracks{v_X}\n      =\n      T.\n    \\end{equation*}\n\n    This allows us to apply the inductive hypothesis to obtain\n    \\begin{equation*}\n      \\psi_1\\Bracks{v_Y}\n      =\n      \\psi_2\\Bracks{v_Y}\n      =\n      T\n    \\end{equation*}\n    and conclude that\n    \\begin{equation*}\n      \\varphi\\Bracks{v_Y}\n      =\n      \\psi_1\\Bracks{v_Y} \\wedge \\psi_2\\Bracks{v_Y}\n      =\n      T \\wedge T\n      =\n      T.\n    \\end{equation*}\n\n    \\item Suppose that \\( \\varphi = \\psi_1 \\vee \\psi_2 \\) and that the inductive hypothesis holds for \\( \\psi_1 \\) and \\( \\psi_2 \\).\n\n    Since the formula \\( \\varphi \\) is valid in \\( \\mscrX \\) under \\( v_X \\), at least one of \\( \\psi_1 \\) or \\( \\psi_2 \\) is valid under \\( v_X \\). For different \\( v_X \\) the valuation pair \\( (\\psi_1\\Bracks{v_X}, \\psi_2\\Bracks{v_X}) \\) may be different, but it will always have at least one \\( T \\) value.\n\n    The inductive hypothesis holds for both \\( \\psi_1 \\) and \\( \\psi_2 \\), and therefore \\( (\\psi_1\\Bracks{v_Y}, \\psi_2\\Bracks{v_Y}) \\) also contains at least one \\( T \\) value.\n\n    This allows us to conclude that\n    \\begin{equation*}\n      \\varphi\\Bracks{v_Y}\n      =\n      \\psi_1\\Bracks{v_Y} \\vee \\psi_2\\Bracks{v_Y}\n      =\n      T.\n    \\end{equation*}\n\n    \\item To see how this proof fails for conditionals, consider \\( \\varphi = (\\psi_1 \\rightarrow \\psi_2) \\). Then \\( \\varphi\\Bracks{v_X} = T \\) implies either \\( \\psi_1\\Bracks{v_X} = F \\) or \\( \\psi_1\\Bracks{v_X} = \\psi_2\\Bracks{v_X} = T \\).\n\n    If \\( \\psi_1\\Bracks{v_X} = \\psi_2\\Bracks{v_X} = F \\), we have \\( \\varphi\\Bracks{v_X} = T \\), but we cannot conclude that \\( \\varphi\\Bracks{v_Y} = T \\) because that would require the \\hyperref[def:material_implication/inverse]{inverse} of the inductive hypothesis to hold for \\( \\psi_1 \\) and \\( \\psi_2 \\).\n\n    See \\fullref{ex:monoid_cancellation_not_preserved_by_homomorphism} for an example where a conditional is not preserved by a homomorphism.\n  \\end{itemize}\n\n  Since \\( v_Y \\) was chosen arbitrarily, we conclude that\n  \\begin{equation*}\n    (h(X), I_Y) \\vDash \\varphi,\n  \\end{equation*}\n  and since \\( \\varphi \\in \\Gamma \\) was also arbitrary, \\( (h(X), I_Y) \\vDash \\Gamma \\).\n\\end{proof}\n\n\\begin{corollary}\\label{thm:substructure_is_model}\n  If \\( \\Gamma \\) is a set of positive formulas, any \\hyperref[def:first_order_substructure]{substructure} of a model of \\( \\Gamma \\) is again a model of \\( \\Gamma \\).\n\\end{corollary}\n\\begin{proof}\n  Follows from \\fullref{thm:def:first_order_homomorphism/submodel} and \\fullref{thm:positive_formulas_preserved_under_homomorphism}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:arbitrary_formulas_preserved_under_isomorphisms}\\mcite[thm.
25.9]{OpenLogicFull}\n  If \\( \\mscrX = (X, I_X) \\) is a model of \\( \\Gamma \\) and if \\( h: X \\to Y \\) is a \\hyperref[rem:first_order_strong_homomorphism]{strong} embedding from \\( \\mscrX \\) to \\( \\mscrY = (Y, I_Y) \\), then \\( (h(X), I_Y) \\) is also a model of \\( \\Gamma \\).\n\n  We say that strong embeddings preserve arbitrary formulas.\n\n  Conversely, if \\( (h(X), I_Y) \\) is a model of \\( \\Gamma \\), then \\( \\mscrX \\) is also a model of \\( \\Gamma \\).\n\n  We say that strong embeddings reflect arbitrary formulas.\n\\end{proposition}\n\\begin{proof}\n  The proof simply extends the induction in the proof of \\fullref{thm:positive_formulas_preserved_under_homomorphism} to the stronger statement\n  \\begin{equation*}\n    \\varphi\\Bracks{v_X} = \\varphi\\Bracks{v_Y},\n  \\end{equation*}\n  which allows us to use the usual induction on negation and all connectives and quantifiers.\n\n  The result regarding reflection is shown in the note about \\eqref{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/x} following from \\eqref{eq:thm:positive_formulas_preserved_under_homomorphism/predicates/y} under strong homomorphisms.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:functions_over_model_form_model}\n  Let \\( \\Gamma \\) be a nonempty set of \\hyperref[def:positive_formula]{positive formulas}. Let \\( \\mscrX \\) be a model of \\( \\Gamma \\) and let \\( A \\) be any nonempty set. Consider the set \\( Y \\coloneqq \\fun(A, X) \\) of \\hyperref[def:function]{all set functions} from \\( A \\) to \\( X \\).\n\n  Define the function \\( \\iota: X \\to Y \\) by sending each \\( x \\in X \\) to the corresponding constant function in \\( Y \\).\n\n  Define the interpretation \\( I_Y \\) as follows:\n  \\begin{itemize}\n    \\item For each \\( n \\)-ary functional symbol \\( f \\) in \\( \\mscrL \\), define the interpretation componentwise for functions \\( k_1, \\ldots, k_n \\in Y \\) as\n    \\begin{equation*}\n      \\begin{aligned}\n        &I_Y(f): Y^n \\to Y \\\\\n        &I_Y(f) \\parens[\\Big]{ k_1, \\ldots, k_n } \\coloneqq \\parens[\\Big]{ s \\mapsto I(f) \\parens[\\Big]{ k_1(s), \\ldots, k_n(s) } }.\n      \\end{aligned}\n    \\end{equation*}\n\n    \\item For each \\( n \\)-ary predicate symbol \\( p \\) in \\( \\mscrL \\), define \\( I_Y(p) \\) via\n    \\begin{equation*}\n      \\begin{aligned}\n        &I_Y(p): Y^n \\to \\set{ T, F } \\\\\n        &I_Y(p) \\parens[\\Big]{ k_1, \\ldots, k_n } \\coloneqq \\bigwedge \\set[\\Big]{ I(p) \\parens[\\Big]{ k_1(s), \\ldots, k_n(s) } \\given* s \\in A }.\n      \\end{aligned}\n    \\end{equation*}\n\n    That is, \\( I_Y(p) (k_1, \\ldots, k_n) = T \\) if and only if \\( I(p) (k_1(s), \\ldots, k_n(s)) = T \\) simultaneously for all \\( s \\in A \\).\n  \\end{itemize}\n\n  Then the structure \\( \\mscrY = (Y, I_Y) \\) is also a model of \\( \\Gamma \\) and \\( \\iota: \\mscrX \\to \\mscrY \\) is an embedding.\n\\end{proposition}\n\\begin{proof}\n  It is obvious that \\( \\mscrY \\) is a structure and that \\( \\iota \\) is an embedding. We will prove using induction on the structure of a formula \\( \\varphi \\) that \\( \\mscrX \\vDash \\varphi \\) implies \\( \\mscrY \\vDash \\varphi \\).\n\n  Let \\( v_Y \\) be a variable assignment in \\( \\mscrY \\).\n\n  Suppose that \\( \\mscrX \\vDash \\varphi \\).
We use induction on the structure of \\( \\varphi \\) to show that \\( \\varphi\\Bracks{v_Y} = T \\).\n  \\begin{itemize}\n    \\item If \\( \\varphi \\) is a propositional constant, its value does not depend on \\( v_Y \\) and thus \\( \\varphi\\Bracks{v_Y} = \\varphi\\Bracks{v_X} \\).\n\n    \\item If \\( \\varphi = (\\tau_1 \\doteq \\tau_2) \\), then \\( \\tau_1\\Bracks{v_X} = \\tau_2\\Bracks{v_X} \\) for all assignments \\( v_X \\) in \\( \\mscrX \\), hence for any \\( s \\in A \\) we have \\( \\parens[\\Big]{\\tau_1\\Bracks{v_Y}}(s) = \\parens[\\Big]{\\tau_2\\Bracks{v_Y}}(s) \\) since both sides of the equality here are elements of \\( X \\).\n\n    \\item Analogously, if \\( \\varphi = p(\\tau_1, \\ldots, \\tau_n) \\), then\n    \\begin{equation*}\n      I_Y(p) \\parens[\\Big]{ k_1, \\ldots, k_n }\n      =\n      \\bigwedge \\set[\\Big]{ I(p) \\parens[\\Big]{ k_1(s), \\ldots, k_n(s) } \\given* s \\in A }\n      =\n      \\bigwedge \\set{ T \\given s \\in A }\n      =\n      T.\n    \\end{equation*}\n\n    \\item Analogously to the proof of \\fullref{thm:positive_formulas_preserved_under_homomorphism}, conjunction and disjunction formulas that are valid in \\( \\mscrX \\) are valid in \\( \\mscrY \\), while conditional formulas may fail to be valid. See \\fullref{ex:thm:functions_over_model_of_positive_formulas_form_model} for examples where this proposition fails.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{example}\\label{ex:thm:functions_over_model_of_positive_formulas_form_model}\n  While the statement of \\fullref{thm:functions_over_model_form_model} may be a little cryptic, a few examples show that it is actually quite natural.\n  \\begin{itemize}\n    \\item \\hyperref[def:boolean_function]{Boolean functions} have their values in the Boolean algebra \\( \\set{ T, F } \\). Let \\( S \\) be the set of all tuples of values in \\( \\set{ T, F }^n \\) for arbitrary \\( n \\). That is,\n    \\begin{equation*}\n      S \\coloneqq \\bigcup_{n \\geq 1} \\set{ T, F }^n.\n    \\end{equation*}\n\n    Then from \\fullref{thm:functions_over_model_form_model} it follows that the set \\( B = \\fun(S, \\set{ T, F }) \\) of all Boolean functions of arbitrary arities is again a Boolean algebra. See \\fullref{thm:lindenmaum_tarski_algebra_of_full_propositional_logic/bijection} for further discussion.\n\n    \\item If \\( R \\) is a \\hyperref[def:ring]{ring} and \\( A \\) is any set, then \\( \\fun(A, R) \\) is again a ring with componentwise operations --- see \\fullref{thm:functions_over_algebra}.\n\n    This is useful in functional analysis, where we study real-valued and complex-valued functions over arbitrary sets.\n\n    \\item If \\( \\BbbK \\) is a field, then in general \\( \\fun(A, \\BbbK) \\) is not a field. The simplest example is given by the real-valued functions of a real variable --- \\( \\sin(x) \\) has no multiplicative inverse since \\( \\sfrac 1 {\\sin(x)} \\) is not defined for \\( x = k\\pi \\), \\( k \\in \\BbbZ \\).
We can form a \\hyperref[def:ring_localization]{field of fractions}, but in general fields of fractions over function rings no longer correspond to functions --- they are purely algebraic constructions, just like \\hyperref[def:formal_power_series]{formal power series}.\n\n    This happens because the definition of a field has a non-positive axiom --- it requires every nonzero element to have a multiplicative inverse, which can be described formally as\n    \\begin{equation*}\n      \\qforall \\xi \\parens[\\Big]{ (\\xi \\doteq 0) \\vee \\qexists \\eta (\\xi \\cdot \\eta \\doteq 1) }.\n    \\end{equation*}\n\n    As discussed in \\fullref{def:positive_formula}, a formula with an existential quantifier may fail to be positive.\n  \\end{itemize}\n\\end{example}\n\n\\begin{definition}\\label{def:first_order_definability}\n  Fix a \\hyperref[def:first_order_syntax]{first-order language} \\( \\mscrL \\) and a \\hyperref[def:first_order_structure]{structure} \\( \\mscrX = (X, I) \\) for \\( \\mscrL \\).\n\n  To every \\hyperref[def:first_order_syntax]{formula} \\( \\varphi[\\xi_1, \\ldots, \\xi_n] \\) there corresponds a set \\( A \\subseteq X^n \\) such that\n  \\begin{equation*}\n    (x_1, \\ldots, x_n) \\in A \\T{if and only if} \\varphi\\Bracks{x_1, \\ldots, x_n} = T.\n  \\end{equation*}\n\n  We say that \\( \\varphi \\) \\term{defines} \\( A \\). An arbitrary set \\( A \\subseteq X^n \\) is \\term{definable} if there exists a formula \\( \\varphi \\) that defines \\( A \\).\n\n  See \\fullref{thm:cumulative_hierarchy_model_of_zfc} for how this concept deeply relates to set theory.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:automorphism_preserves_validity}\n  Let \\( \\mscrX = (X, I) \\) be a structure over some language \\( \\mscrL \\).
Then for every formula \\( \\varphi[\\xi_1, \\ldots, \\xi_n] \\) in \\( \\mscrL \\) and any \\hyperref[def:first_order_homomorphism_invertibility/automorphism]{automorphism} \\( h: X \\to X \\) we have\n  \\begin{equation*}\n    \\varphi\\Bracks{x_1, \\ldots, x_n} = \\varphi\\Bracks{h(x_1), \\ldots, h(x_n)}.\n  \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n  Let \\( x_1, \\ldots, x_n \\) be points in \\( X \\) such that\n  \\begin{equation*}\n    \\varphi\\Bracks{x_1, \\ldots, x_n} = T.\n  \\end{equation*}\n\n  Since \\( h \\) is an automorphism and hence a strong homomorphism, it follows from \\fullref{thm:arbitrary_formulas_preserved_under_isomorphisms} that\n  \\begin{equation*}\n    \\varphi\\Bracks{h(x_1), \\ldots, h(x_n)} = \\varphi\\Bracks{x_1, \\ldots, x_n} = T.\n  \\end{equation*}\n\n  Conversely, suppose that \\( x_1, \\ldots, x_n \\) are points in \\( X \\) such that\n  \\begin{equation*}\n    \\varphi\\Bracks{h(x_1), \\ldots, h(x_n)} = T.\n  \\end{equation*}\n\n  Then, since \\( h^{-1} \\) is also an automorphism, we have\n  \\begin{equation*}\n    \\varphi\\Bracks{h^{-1}(h(x_1)), \\ldots, h^{-1}(h(x_n))}\n    =\n    \\varphi\\Bracks{x_1, \\ldots, x_n}\n    =\n    T.\n  \\end{equation*}\n\n  Therefore,\n  \\begin{equation*}\n    \\varphi\\Bracks{h(x_1), \\ldots, h(x_n)} = T \\T{if and only if} \\varphi\\Bracks{x_1, \\ldots, x_n} = T,\n  \\end{equation*}\n  which is equivalent to the statement of the proposition.\n\\end{proof}\n\n\\begin{corollary}\\label{thm:automorphism_of_definable_set}\n  Let \\( \\mscrX = (X, I) \\) be a structure over some language \\( \\mscrL \\).\n\n  \\begin{thmenum}\n    \\thmitem{thm:automorphism_of_definable_set/direct} If the set \\( A \\subseteq X \\) is \\hyperref[def:first_order_definability]{definable} and if \\( h: X \\to X \\) is an automorphism, then \\( h(A) = A \\).\n    \\thmitem{thm:automorphism_of_definable_set/contrapositive} If for some automorphism \\( h: X \\to X \\) we have \\( h(A) \\neq A \\), then the set \\( A \\subseteq X \\) is not definable.\n  \\end{thmenum}\n\\end{corollary}\n\\begin{proof}\n  \\SubProofOf{thm:automorphism_of_definable_set/direct} If \\( A \\) is definable via \\( \\varphi[\\xi] \\), then by \\fullref{thm:automorphism_preserves_validity}, for any automorphism \\( h: X \\to X \\) we have \\( \\varphi\\Bracks{x} = \\varphi\\Bracks{h(x)} \\).
Thus, \\( \\varphi \\) defines \\( h(A) \\) just as well, which implies \\( h(A) = A \\).\n\n  \\SubProofOf{thm:automorphism_of_definable_set/contrapositive} This is the contrapositive of \\fullref{thm:automorphism_of_definable_set/direct}.\n\\end{proof}\n
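The contrapositive \\fullref{thm:automorphism_of_definable_set/contrapositive} is the standard tool for proving that a set is not definable; the following short example sketches a typical application.\n\n\\begin{example}\n  Consider the structure \\( (\\BbbZ, \\leq) \\) with its standard total order and the shift map \\( h(x) \\coloneqq x + 1 \\). It is bijective, and both \\( h \\) and \\( h^{-1} \\) are \\hyperref[def:partially_ordered_set/homomorphism]{monotone maps}, hence \\( h \\) is an automorphism of \\( (\\BbbZ, \\leq) \\). For the set \\( A \\coloneqq \\set{ 0, 1, 2, \\ldots } \\) of nonnegative integers we have \\( h(A) = \\set{ 1, 2, 3, \\ldots } \\neq A \\), so by \\fullref{thm:automorphism_of_definable_set/contrapositive} the set \\( A \\) is not definable in \\( (\\BbbZ, \\leq) \\).\n\\end{example}\n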
\\section{{\\tt DChvMtx}: Double Precision Chevron Matrix Object}\n\\par\nLet $A$ be an $n \\times n$ matrix to be factored.\nWe are going to allow both row and column permutations as the\nfactorization proceeds, and our pivot blocks may be larger\nthan $1 \\times 1$.\nAfter the first step we have\n$$\nA = A_0 \n=\nP_0 \n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n{\\hat L}_0 & I\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n0   & {\\hat A}_1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{cc}\n{\\hat D}_0 & {\\hat U}_0 \\\\\n 0  & I\n\\end{array} \\right \\rbrack\nQ_0\n= P_0 \\ L_0 \\ A_1 \\ U_0 \\ Q_0 \n$$\nwhere $P_0$ is the row permutation matrix,\n${\\hat D}_0$ is the pivot block,\n$\\displaystyle L_0 =\n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n{\\hat L}_0 & I\n\\end{array} \\right \\rbrack\n$,\n$\\displaystyle A_1 =\n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n0   & {\\hat A}_1\n\\end{array} \\right \\rbrack\n$,\n$\\displaystyle U_0 =\n\\left \\lbrack \\begin{array}{cc}\n{\\hat D}_0 & {\\hat U}_0 \\\\\n 0  & I\n\\end{array} \\right \\rbrack\n$ and\n$Q_0$ is the column permutation matrix.\nNow $A_1$ can be factored in the same manner,\n$$\nA_1 \n=\nP_1 \n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n{\\hat L}_1 & I\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{cc}\nI   & 0 \\\\\n0   & {\\hat A}_2\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{cc}\n{\\hat D}_1 & {\\hat U}_1 \\\\\n 0  & I\n\\end{array} \\right \\rbrack\nQ_1\n= P_1 \\ L_1 \\ A_2 \\ U_1 \\ Q_1\n$$\nso we can expand $A_0$ to find\n$A = P_0 \\ L_0 \\ P_1 \\ L_1 \\ A_2 \\ U_1 \\ Q_1 \\ U_0 \\ Q_0$.\nEventually the factorization proceeds to completion and we have\n$$\nA = (P_0 \\ L_0 \\ P_1 \\ L_1 \\ \\cdots \\ P_k \\ L_k)\n    (U_k \\ Q_k \\ \\cdots \\ U_1 \\ Q_1 \\ U_0\\ Q_0)\n$$\nNote that $L_0, L_1, \\ldots, L_k$ are all lower triangular;\nit is the row permutation matrices that are responsible for this.\nNote that ${\\tilde L}_i = P_i L_i$ is not a lower triangular\nmatrix, but one can solve a linear system involving ${\\tilde L}_i$\nwithout any difficulty.\nA similar statement holds for ${\\tilde U}_i = U_i Q_i$,\nexcept we have the presence of a block pivot ${\\hat D}_i$.\n\\par\nThe key to understanding how we factor and solve linear systems\nusing the {\\tt DChvMtx} object is that the permutation matrices\n$P_i$ and $Q_i$ are held implicitly in \n${\\tilde L}_i$ \nand \n${\\tilde U}_i$, \nand we can solve any linear system with\n${\\tilde L}_i$ \nand \n${\\tilde U}_i$ correctly,\neven though they are not triangular.\nWe can write $A$ as\n$$\nA =\n{\\tilde L}_0 \\ \n{\\tilde L}_1 \\ \n\\cdots \\ \n{\\tilde L}_k \\ \n{\\tilde U}_k \\ \n\\cdots \\ \n{\\tilde U}_1 \\ \n{\\tilde U}_0\n$$\nand thus we can solve $A y = b$ by solving a sequence of linear systems\n${\\tilde L}_0 \\ x^0 = b$,\n${\\tilde L}_1 \\ x^1 = x^0$,\n$\\cdots$,\n${\\tilde L}_k \\ x^k = x^{k-1}$,\n${\\tilde U}_k \\ y^k = x^k$,\n${\\tilde U}_{k-1} \\ y^{k-1} = y^k$,\n$\\cdots$,\n${\\tilde U}_1 \\ y^1 = y^2$,\n${\\tilde U}_0 \\ y = y^1$.\n% Here is a crucial point:\n% solving a linear system with ${\\tilde L}_i$ requires \n% information about the row permutation at the $i$-th step,\n% but does not require any knowledge about the column permutation\n% at that or any other step.\n% A similar point holds for systems with ${\\tilde U}_i$.\n\\par\nLet us clarify the process with an example.\nAt the first step we permute a $3 \\times 3$ matrix so that 
the\n$a_{1,2}$ element is the pivot. \n\\begin{eqnarray*}\nA & = & \n\\left \\lbrack \\begin{array}{ccc}\na_{0,0} & a_{0,1} & a_{0,2} \\\\\na_{1,0} & a_{1,1} & a_{1,2} \\\\\na_{2,0} & a_{2,1} & a_{2,2}\n\\end{array} \\right \\rbrack\n =\n\\left \\lbrack \\begin{array}{ccc}\n0 & 1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\na_{1,2} & a_{1,0} & a_{1,1} \\\\\na_{0,2} & a_{0,0} & a_{0,1} \\\\\na_{2,2} & a_{2,0} & a_{2,1}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n0 & 0 & 1 \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack \\\\\n& = &\n\\left \\lbrack \\begin{array}{ccc}\n0 & 1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\nl_{0,2} & 1 & 0 \\\\\nl_{2,2} & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & {\\hat a}_{0,0} & {\\hat a}_{0,1} \\\\\n0 & {\\hat a}_{2,0} & {\\hat a}_{2,1}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\nd_{1,2} & u_{1,0} & u_{1,1} \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n0 & 0 & 1 \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack \\\\\n& = &\n{\\tilde L}_0 \\ A_1 \\ {\\tilde U}_0\n=\n\\left \\lbrack \\begin{array}{ccc}\nl_{0,2} & 1 & 0 \\\\\n1 & 0 & 0 \\\\\nl_{2,2} & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & {\\hat a}_{0,0} & {\\hat a}_{0,1} \\\\\n0 & {\\hat a}_{2,0} & {\\hat a}_{2,1}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\nu_{1,0} & u_{1,1} & d_{1,2} \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\end{eqnarray*}\nWe then do the same for $A_1$, using ${\\hat a}_{2,1}$ as the pivot.\n\\begin{eqnarray*}\nA_1 & = & \n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & {\\hat a}_{0,0} & {\\hat a}_{0,1} \\\\\n0 & {\\hat a}_{2,0} & {\\hat a}_{2,1}\n\\end{array} \\right \\rbrack\n =\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & {\\hat a}_{2,1} & {\\hat a}_{2,0} \\\\\n0 & {\\hat a}_{0,1} & {\\hat a}_{0,0}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack \\\\\n& = &\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & l_{0,1} & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & {\\hat a}_{2,1} & {\\hat a}_{2,0} \\\\\n0 & {\\hat a}_{0,1} & {\\hat a}_{0,0}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & d_{2,1} & u_{2,0} \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack \\\\\n& = &\n{\\tilde L}_1 \\ A_2 \\ {\\tilde U}_1\n=\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & l_{0,1} & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & {\\tilde a}_{0,0}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & u_{2,0} & d_{2,1} \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\end{eqnarray*}\nFinally, we have the factorization of 
$A_2$.\n$$\nA_2 \n= \nP_2 \\ L_2 \\ U_2 \\ Q_2\n=\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & d_{0,0}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{array} \\right \\rbrack\n$$\nWritten on one line we have the factorization of $A$ given below\nwhere ${\\tilde L}_2 = I$ is omitted.\n\\begin{eqnarray*}\nA \n& = & \n{\\tilde L}_0 \\ \n{\\tilde L}_1 \\ \n{\\tilde U}_2 \\ \n{\\tilde U}_1 \\ \n{\\tilde U}_0 \\\\\n& = &\n\\left \\lbrack \\begin{array}{ccc}\nl_{0,2} & 1 & 0 \\\\\n1 & 0 & 0 \\\\\nl_{2,2} & 0 & 1\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & l_{0,1} & 1 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & d_{0,0}\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & u_{2,0} & d_{2,1} \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\left \\lbrack \\begin{array}{ccc}\nu_{1,0} & u_{1,1} & d_{1,2} \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{array} \\right \\rbrack\n\\end{eqnarray*}\nAt best this equation looks awkward, but take heart: it does not\ndescribe how we store the entries or use them to solve a\nlinear system.\nThe entries can be logically grouped in terms of chevrons.\n$$\nA^0 = \n\\left \\lbrack \\begin{array}{ccc}\nd_{1,2} & u_{1,0} & u_{1,1} \\\\\nl_{0,2} & 0 & 0 \\\\\nl_{2,2} & 0 & 0\n\\end{array} \\right \\rbrack,\n\\qquad\nA^1 = \n\\left \\lbrack \\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & d_{2,1} & u_{2,0} \\\\\n0 & l_{0,1} & 0\n\\end{array} \\right \\rbrack,\n\\qquad\nA^2 = \n\\left \\lbrack \\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & d_{0,0}\n\\end{array} \\right \\rbrack\n$$\nFor each chevron we store \nthe indices of the rows and columns that are nonzero,\nand the entries in the diagonal, lower triangular and\nupper triangular block.\nNote that the row indices and the column indices need not be the same.\n\\par\nLet us show how we solve a linear system.\nTo solve $A x = b$ we first solve $L y = b$ and then solve $U x = y$\nwhere $L = {\\tilde L}_0 {\\tilde L}_1 {\\tilde L}_2$\nand $U = {\\tilde U}_2 {\\tilde U}_1 {\\tilde U}_0$.\nNote that there are no permutation matrices anywhere.\n\\begin{itemize}\n\\item \nTo solve with ${\\tilde L}_0$ we set $y_1 = b_1$, \n$b_0 := b_0 - l_{0,2} y_1$ and\n$b_2 := b_2 - l_{2,2} y_1$.\n\\item \nTo solve with ${\\tilde L}_1$ we set $y_2 = b_2$ and\n$b_0 := b_0 - l_{0,1} y_2$.\n\\item \nTo solve with ${\\tilde L}_2$ we set $y_0 = b_0$.\n\\item \nTo solve with ${\\tilde U}_2$ we set $x_0 = y_0/d_{0,0}$.\n\\item \nTo solve with ${\\tilde U}_1$ we set $x_1 = (y_2 - u_{2,0}x_0)/d_{2,1}$.\n\\item \nTo solve with ${\\tilde U}_0$ \nwe set $x_2 = (y_1 - u_{1,0}x_0 - u_{1,1} x_1)/d_{1,2}$.\n\\end{itemize}\n\nIn general, let us write the $k$-th chevron as\n$$\nA^k = \n\\left \\lbrack \\begin{array}{cc}\nD_{\\alpha_k,\\beta_k} & U_{\\alpha_k,\\delta_k} \\\\\nL_{\\gamma_k,\\beta_k} & 0\n\\end{array} \\right \\rbrack\n$$\nwhere $\\alpha_k$ and $\\gamma_k$ form the row index set\nand $\\beta_k$ and $\\delta_k$ form the column index set.\nSince the pivots partition the rows and columns, we have\n$$\n\\gamma_k \\cap \\bigcup_{i < k} \\alpha_i = 
\\emptyset\n\\quad \\mbox{and} \\quad\n\\delta_k \\cap \\bigcup_{i < k} \\beta_i = \\emptyset.\n$$\nThe forward solve and backward solve are executed by\n\\begin{center}\n\\begin{minipage}{3 in}\n\\begin{tabbing}\nXXX\\=XXX\\=XXX\\=\\kill\nfor $k = 1, \\ldots, \\mbox{\\# of pivots}$ \\\\\n\\> $y_{\\alpha_k} = b_{\\alpha_k}$ \\\\\n\\> $b_{\\gamma_k} := b_{\\gamma_k} \n                 - L_{\\gamma_k, \\beta_k} y_{\\alpha_k}$ \\\\\nend for \\\\\nfor $k = \\mbox{\\# of pivots}, \\ldots, 1$ \\\\\n\\> $x_{\\beta_k} := D_{\\alpha_k, \\beta_k}^{-1}\n(y_{\\alpha_k} - U_{\\alpha_k, \\delta_k} x_{\\delta_k})$ \\\\\nend for\n\\end{tabbing}\n\\end{minipage}\n\\end{center}\n
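\par
To make the index-set notation concrete, here is a small Python sketch of the forward and backward solves (an illustration only --- the function name, the tuple layout of a chevron and the dense {\\tt numpy} blocks are all hypothetical, and the real {\\tt DChvMtx} object stores its entries in a compressed format):
\\begin{verbatim}
import numpy as np

def chevron_solve(chevrons, b):
    # chevrons: one tuple (alpha, beta, gamma, delta, D, L, U) per pivot,
    # where alpha/gamma are the row index sets, beta/delta the column
    # index sets, D the pivot block, L the lower block, U the upper block
    b = np.asarray(b, dtype=float).copy()
    y = np.zeros_like(b)
    for alpha, beta, gamma, delta, D, L, U in chevrons:       # forward solve
        y[alpha] = b[alpha]
        if len(gamma) > 0:
            b[gamma] -= L @ y[alpha]
    x = np.zeros_like(b)
    for alpha, beta, gamma, delta, D, L, U in reversed(chevrons):  # backward
        rhs = y[alpha]
        if len(delta) > 0:
            rhs = rhs - U @ x[delta]
        x[beta] = np.linalg.solve(D, rhs)
    return x

# 2 x 2 demo, A = [[2, 1], [4, 4]] with pivot a_{0,0}:
# A = [[1,0],[2,1]] [[1,0],[0,2]] [[2,1],[0,1]], so l = 2, d = 2, u = 1.
demo = [([0], [0], [1], [1],
         np.array([[2.0]]), np.array([[2.0]]), np.array([[1.0]])),
        ([1], [1], [], [],
         np.array([[2.0]]), np.zeros((0, 1)), np.zeros((1, 0)))]
print(chevron_solve(demo, [3.0, 8.0]))  # prints [1. 1.] since A [1,1]^T = [3,8]^T
\\end{verbatim}
For the $3 \\times 3$ example above, the chevron data would be
$\\alpha_0 = \\{1\\}$, $\\beta_0 = \\{2\\}$, $\\gamma_0 = \\{0,2\\}$ and
$\\delta_0 = \\{0,1\\}$, and so on; no permutation matrices appear anywhere,
exactly as in the hand-worked solve.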
\\subsubsection{Small Open Economy Solution Details}\n\nConsider the household's normalized problem in the SOE model, given in \\eqref{eq:scaledv}.  Substituting the latter two constraints into the maximand, this problem has one first order condition (with respect to $\\cLev_{t,i}$), which is sufficient to characterize the solution:\n\\begin{equation}\\label{eq:SOEFOC}\n\\cLev_{t,i}^{-\\CRRA} - \\underbrace{\\Rfree \\PLives \\beta \\Ex_t \\big[  (\\PtyGro_{t+1} \\pmb{\\psi}_{t+1,i})^{-\\CRRA}\\vFunc^\\mLev\\big(\\Rfree/(\\PtyGro_{t+1} \\pmb{\\psi}_{t+1,i}) \\aLev_{t,i} + \\Wage \\pmb{\\theta}_{t+1,i},\\PtyGro_{t+1}\\big) \\big]}_{\\equiv\\,\\mathfrak{v}^\\aLev(\\aLev_{t,i},\\PtyGro_t)} = 0\n\\end{equation}\n\\begin{equation*}\n\\Longrightarrow \\cLev_{t,i} = \\mathfrak{v}^\\aLev(\\aLev_{t,i},\\PtyGro_t)^{-1/\\CRRA}.\n\\end{equation*}\n\nWe use the endogenous grid method to solve the model by iterating\non the first order condition.  Eliding some uninteresting complications, our procedure is straightforward:\n\\begin{enumerate}\n\\item Construct discrete approximations to the lognormal distributions of $\\theta$, $\\Theta$, $\\psi$, and $\\Psi$,\nadjusting for the point mass at 0 for $\\theta$ with probability $\\wp$.  We use equiprobable $N_\\psi = N_\\theta = 7$-point\napproximations for the (lognormal portion of the) idiosyncratic shocks and $N_\\Psi = N_\\Theta = 5$-point\napproximations for the aggregate shocks.\n\n\\item Choose an exogenous grid of end-of-period normalized assets-above-natural-borrowing-constraint $\\mathbb{A} = \\{\\blacktriangle \\aLev_j\\}_{j=1}^{N_a}$, spanning\nthe range of values that an agent might reasonably encounter in a simulated lifetime.  We use a triple-exponential grid spanning $\\blacktriangle \\aLev \\in [10^{-5},40]$\nwith $N_a = 48$ gridpoints.  The natural borrowing constraint is zero because of the possibility of $\\theta=0$, so\n assets-above-natural-borrowing-constraint is simply assets $\\aLev$.\n\n\\item Initialize the guess of the consumption function to $\\cFunc(\\mLev,\\cdot) = \\mLev$, the solution for an agent who has no future.\n\n\\item Define the marginal value function $\\vFunc^\\mLev(\\cdot)$ as $\\uFunc'(\\cFunc(\\cdot))$, as determined by the\nstandard envelope condition.\n\n\\item Use the discrete approximations to the shock processes and the Markov transition matrix $\\Xi$ to\ncompute $\\mathfrak{v}^\\aLev(\\aLev_j,\\PtyGro_k)$ for all $(\\aLev_j,\\PtyGro_k) \\in \\mathbb{A} \\times \\{\\PtyGro\\}$.\n\n\\item Use \\eqref{eq:SOEFOC} to find the level of consumption that would make ending the period\nwith $\\aLev_j$ in assets optimal (when aggregate growth is $\\PtyGro_k$): $\\cLev_{j,k} = \\mathfrak{v}^\\aLev(\\aLev_j,\\PtyGro_k)^{-1/\\CRRA}$.\n\n\\item Calculate beginning of period market resources $\\mLev_{j,k} = \\aLev_j + \\cLev_{j,k}$ for all $j,k$.\n\n\\item For each $k$, construct $\\cFunc(\\mLev,\\PtyGro_k)$ by linearly interpolating $\\cLev_{j,k}$ over $\\mLev_{j,k}$, with\nan additional point at $(\\mLev=0,\\cLev=0)$.\n\n\\item Calculate the supnorm distance between the newly constructed $\\cFunc$ and the previous guess,\nevaluated at the $N_\\aLev \\times ||\\{\\PtyGro\\} ||$ gridpoints.  
If the distance is less than $\\epsilon = 10^{-6}$, STOP; else go to step 4.\n\\end{enumerate}\n\nThe numerically computed consumption function can then be used to simulate a population of households,\nas described in Appendix~\\ref{appendix:Simulation}; a schematic sketch of the iteration itself appears at the end of this section.\n\n\n\\subsubsection{Dynamic Stochastic General Equilibrium Solution Details} \\label{app:Sol_DSGE}\n\nConsider the household's normalized problem in the HA-DSGE model,\ngiven in \\eqref{eq:DSGEproblemNorm}. Recalling that we are taking the\naggregate saving rule $\\aleph$ as given, optimal consumption is\ncharacterized by the solution to the first-order condition:\n\\begin{equation}\\label{eq:DSGEFOC}\n\\cLev_{t,i}^{-\\CRRA} - \\underbrace{\\beta \\Ex \\big[ \\Rprod_{t+1} (\\PtyGro_{t+1} \\pmb{\\psi}_{t+1,i})^{-\\CRRA}\\vFunc\\big(\\Rprod_t \\aLev_{t,i}/(\\PLives \\PtyGro_{t+1}\\pmb{\\psi}_{t+1,i}) + \\pmb{\\theta}_{t+1,i} \\Wage_{t+1},\\MLev_{t+1},\\PtyGro_{t+1}\\big) \\big]}_{\\equiv\\, \\mathfrak{v}^\\aLev(\\aLev_{t,i},\\MLev_t,\\PtyGro_t)} = 0\n\\end{equation}\n\\begin{equation*}\n\\Longrightarrow \\cLev_{t,i} = \\mathfrak{v}^\\aLev(\\aLev_{t,i},\\MLev_t,\\PtyGro_t)^{-1/\\CRRA}.\n\\end{equation*}\n\nSolving the HA-DSGE model requires a nested loop procedure in the style of \\cite{ksHetero},\nas the equilibrium of the model is a fixed point in the space of household beliefs about the\naggregate saving rule. For the outer loop, searching for the equilibrium $\\aleph$, we use the following procedure:\n\\begin{enumerate}\n\\item Construct a grid of (normalized) aggregate market resources $\\mathbb{M} = \\{\\MLev_j \\}_{j=1}^{N_\\MLev}$.\nWe use an $N_M = 19$-point grid based on the steady state of the perfect foresight DSGE model, spanning the range of 10 percent to 500 percent of this value.\n\n\\item For each $\\PtyGro_k \\in \\{\\PtyGro\\}$, initialize the aggregate saving rule to arbitrary values.\nWe use $\\aggrSavingRuleCoeff_{k,0} = 0$ and $\\aggrSavingRuleCoeff_{k,1} = 1$; there exist more efficient initial guesses.\n\n\\item In the inner loop, solve the household's optimization problem for the current guess of $\\aleph$,\nusing the procedure described below.\n\n\\item Simulate many households for many periods, using the procedure described in\nAppendix~\\ref{appendix:Simulation}, yielding a long \\textit{history} of aggregate\nmarket resources, productivity growth, and assets $\\mathfrak{H} = \\left\\{(\\MLev_t,\\PtyGro_t,\\ALev_t)\\right\\}_{t=0}^T$.\n\n\\item For each $k$, define $\\mathfrak{H}_k \\equiv \\left\\{ \\mathfrak{H} | \\PtyGro_t = \\PtyGro_k \\right\\}$.\nRegress $\\ALev_t$ on $\\MLev_t$ over the set $\\mathfrak{H}_k$, yielding coefficients\nthat provide updated values of $\\aggrSavingRuleCoeff_{k,0}$ and $\\aggrSavingRuleCoeff_{k,1}$ for $\\aleph$.\n\n\\item Calculate the supnorm distance between the new and previous values of aggregate\nsaving rule coefficients $\\aggrSavingRuleCoeff$.  
If it is less than $\\grave{\\epsilon} = 10^{-4}$, STOP;\nelse go to step 3.\n\\end{enumerate}\n\nThe inner solution loop (step 3) proceeds very similarly to the SOE solution method above,\nwith differences in the following steps:\n\\begin{enumerate}\n\\setcounter{enumi}{1}\n\n\\item The set $\\mathbb{A}$ spans $[10^{-5},120]$ because of the higher $\\beta$ in the HA-DSGE model.\n\n\\setcounter{enumi}{4}\n\n\\item End-of-period marginal value of assets is calculated as $\\mathfrak{v}^\\aLev(\\aLev_j,\\MLev_k,\\PtyGro_\\ell)$\nfor all $(\\aLev_j,\\MLev_k,\\PtyGro_\\ell) \\in \\mathbb{A} \\times \\mathbb{M} \\times \\{\\PtyGro\\}$.\n\n\\item Use \\eqref{eq:DSGEFOC} to calculate $\\cLev_{j,k,\\ell} = \\mathfrak{v}^\\aLev(\\aLev_j,\\MLev_k,\\PtyGro_\\ell)^{-1/\\CRRA}$.\n\n\\setcounter{enumi}{7}\n\n\\item For each $\\ell$, construct $\\cFunc(\\mLev,\\MLev,\\PtyGro_\\ell)$ by linearly interpolating $\\cLev_{j,k,\\ell}$ over $\\mLev_{j,k,\\ell}$\nfor each $k$, then interpolating the linear interpolations over $\\mathbb{M}$.\n\\end{enumerate}\n
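\par
For concreteness, the following Python sketch implements the household iteration for a stripped-down version of the problem --- no survival probability, no aggregate states, and placeholder parameter values --- so it illustrates the endogenous grid method of the steps above rather than reproducing the paper's actual code:
\\begin{verbatim}
import numpy as np

def solve_egm(R=1.03, beta=0.96, rho=2.0, w=1.0,
              theta=np.array([0.7, 1.0, 1.3]),      # equiprobable shock values
              a_grid=np.geomspace(1e-5, 40.0, 48),  # exogenous asset grid
              tol=1e-6, max_iter=10_000):
    prob = np.full(theta.size, 1.0 / theta.size)
    # initialize c(m) = m, the solution for an agent who has no future,
    # with an extra interpolation point at (m, c) = (0, 0)
    m_pts = np.concatenate(([0.0], a_grid))
    c_pts = m_pts.copy()
    for _ in range(max_iter):
        # end-of-period marginal value of assets on the exogenous grid
        m_next = R * a_grid[:, None] + w * theta[None, :]
        c_next = np.interp(m_next.ravel(), m_pts, c_pts).reshape(m_next.shape)
        v_a = beta * R * (c_next ** -rho) @ prob
        # invert the first-order condition: consumption that makes ending
        # the period with a_j in assets optimal
        c_new = v_a ** (-1.0 / rho)
        # endogenous beginning-of-period market resources
        m_new = a_grid + c_new
        # supnorm distance between the new and previous consumption functions
        dist = np.max(np.abs(np.interp(m_new, m_pts, c_pts) - c_new))
        m_pts = np.concatenate(([0.0], m_new))
        c_pts = np.concatenate(([0.0], c_new))
        if dist < tol:
            break
    return m_pts, c_pts
\\end{verbatim}
The HA-DSGE inner loop has the same shape, with the marginal value and the
interpolation additionally indexed by the aggregate state.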
\\documentclass[]{article}\n\\usepackage{graphicx}\n\\newtheorem{Def}{Definition}\n\n%opening\n\\title{MTH 343 Numerical Analysis Lecture 3: Errors, Solutions of Equations in one Variable}\n\\author{Sheikh Abdul Raheem Ali}\n\\date{January 31, 2019}\n\n\\begin{document}\n\n\\maketitle\n\nThe floating point form is obtained by terminating the mantissa in one of the following two ways:\n\n\\begin{enumerate}\n\t\\item Chopping\n\t\\item Rounding\n\\end{enumerate}\n\n\\begin{Def}[Absolute error]\n$\t|true \\ value - approximate \\ value| = |p - p^*| $\n\\end{Def}\n\n\\begin{Def}[Relative error]\n$ |\\frac{true \\ value \\ - \\ approximate \\ value}{true \\ value}| =  |\\frac{p \\ - \\ p^*}{p}| $\n\\end{Def}\n\n\\[ (true \\ value) \\ p_1 = 1, \\ (approximate \\ value) \\ p_1^* = 1.2 \\]\n\\[  p_2 = 1000, \\ p_2^* = 1000.2 \\]\n\n\\begin{eqnarray}\nabsolute \\ error &=&  |1 - 1.2| = 0.2 \\nonumber \\\\\n&=& |1000 - 1000.2| = 0.2 \\nonumber\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\trel. \\ error &=& |\\frac{1 - 1.2}{1}| = 0.2 \\nonumber \\\\\n\t&=&  |\\frac{1000 - 1000.2}{1000}| = 0.0002 \\nonumber \n\\end{eqnarray}\n\n\\[ find \\ f(x) = x^3 - 6.1x^2 + 3.2x + 1.5 \\ at \\ x = 4.71 \\]\n\n\\begin{tabular}{l c c c c c}\n\t& $ x $ & $ x^2 $ & $ x^3 $ & $ 6.1x^2 $ & $ 3.2x $ \\\\ \\hline\n\tExact &4.71&22.1841&104.487111&135.32301&15.072\\\\\n\t3-digit chopping &4.71&22.1&104&134&15.0 \\\\\n\t3-digit rounding &4.71&22.2&105&135&15.1 \\\\\n\\end{tabular}\n\nExact: $-14.263899$, Chopping: $-13.5$, Rounding: $-13.4$ \\\\\nRel. error: Chopping: $\\approx 0.054$, Rounding: $\\approx 0.061$ (verified by the snippet at the end of these notes)\n\n\\section*{Solutions of Equations of one variable}\n\n\\begin{eqnarray}\n\t0 &=& x^3 - 16x \\nonumber \\\\\n\t&=& x(x^2 - 16) \\nonumber\\\\\n\t&=& x(x-4)(x+4) \\nonumber\\\\\n\t x = 0,4,-4 \\nonumber\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\t0 &=&  x^2 - 4x + 3x - 12 \\nonumber \\\\ \n\t&=& x(x-4) + 3(x-4) \\nonumber \\\\\n\t&=& (x-4)(x+3) \\nonumber \\\\\n\tx = 4,-3 \\nonumber\n\\end{eqnarray}\n\n\\section*{Next time: Bisection Method (f(x) = 0)}\n\nFind the values of $ x $ for which $ f(x) = 0 $, that is, find the points of intersection of $ f(x) $ with the $x$-axis. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{graph.jpeg}\n\t\\caption{Visual demonstration of the bisection algorithm. Credits to Luna Hatahet. 
}\n\t\\label{fig:graph}\n\\end{figure}\n
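The worked example above can be checked mechanically. The following Python snippet (a verification aid, not part of the lecture itself) evaluates $f(x) = x^3 - 6.1x^2 + 3.2x + 1.5$ at $x = 4.71$ term by term in simulated 3-digit arithmetic and prints the relative errors:
\\begin{verbatim}
import math

def round_sig(v, n=3):
    # round v to n significant digits
    if v == 0:
        return 0.0
    return round(v, n - 1 - int(math.floor(math.log10(abs(v)))))

def chop_sig(v, n=3):
    # chop (truncate) v to n significant digits
    if v == 0:
        return 0.0
    shift = 10 ** (n - 1 - int(math.floor(math.log10(abs(v)))))
    return math.trunc(v * shift) / shift

def f_3digit(x, fl):
    # evaluate f stepwise, applying fl() after every operation
    x2 = fl(x * x)
    x3 = fl(x2 * x)
    return fl(fl(fl(x3 - fl(6.1 * x2)) + fl(3.2 * x)) + 1.5)

exact = 4.71**3 - 6.1 * 4.71**2 + 3.2 * 4.71 + 1.5   # -14.263899
for name, fl in (("chopping", chop_sig), ("rounding", round_sig)):
    approx = f_3digit(4.71, fl)   # -13.5 and -13.4, as in the table
    print(name, approx, abs((exact - approx) / exact))
\\end{verbatim}

\\end{document}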
\\section{Starting is Slow}\n\nAs with any language, starting to learn a programming language is slow and, in most cases, troublesome. A programming language is a bit easier to learn than a natural one, but harder to master. The reason for this is that one must first know simple English, basic maths and logical thinking.\n\nTo wrap it up quickly, the English language is good to know as most syntax relies on it. Basic maths is good to know for simple algebraic solutions. One could of course need more advanced maths, but this course does not aim for such a thing.\n\nAs for logic, one needs it to be able to tell the computer exactly what it should do. It is extremely rare that a computer is disobedient. If one always assumes it will do as told, a program can easily be structured in a logical way to give advanced instructions to the computer.\n% Disobedient: Read: Not working correctly\n\n\\subsection{Variables}\n\nFirst thing to bring up is variables. Think of these as labelled containers that will hold any datum one wants to store within them. Most commonly this includes, but is not restricted to: \\firstfound{numbers}, \\firstfound{strings} and \\firstfound{objects}.\n% Datum: Singular of data\n\n\\important{Numbers} can be either integer or decimal numbers. These are usually represented with the ordinary base 10 system. One could of course make use of hexadecimal, octal and even binary to represent a number. These are left as an exercise to the reader.\n\n\\important{Strings} are used to store text. A string can contain anything that can be stored as a character. This includes newline, space, even backspace.\n\n\\important{Objects} will be covered further down.\n\nTo store a value in a variable, one could write like so.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $age \\gets 18$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will store the number 18 in the variable \\code{age}. This makes it possible to temporarily store data within the program, so the user experience can differ depending on the variables' contents.\n\nThis can also be used with strings.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $name \\gets \"Jones\"$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will store the text \\code{Jones} within the variable \\code{name}. Take note of the quotation marks, as those are used to define a string. Anything within those quotation marks will be stored within the variable.\n\n\\subsection{Operators}\n\nNaturally, one could not make use of the variables unless one had some sort of modifier. In come the operators. Most of them are maths operators, but there are some that are specific to the computer. This section will only go through the maths ones, which are also called the \\important{arithmetic} operators.\n\nFirst is the \\firstfound{assignment} (\\code{=}) operator. It was used briefly in the previous section to assign a value to a variable. It is visualized with an equal sign. 
As long as a variable is on the left side, any value on the right will be assigned to it.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 4$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nNext come \\important{addition} (\\code{+}), \\important{subtraction} (\\code{-}), \\important{multiplication} (\\code{*}) and \\important{division} (\\code{/}).\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 4 + 2$\n\t\t\\State $y \\gets 4 - 2$\n\t\t\\State $z \\gets 4 * 2$\n\t\t\\State $w \\gets 4 / 2$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will result in the variables \\code{x}, \\code{y}, \\code{z} and \\code{w} containing 6, 2, 8 and 2 respectively. There is also an operator called \\important{modulus} (\\code{\\%}) that is used to get the remainder of a division. How it is used is left as an exercise to the reader.\n\nAlong with the arithmetic operators, there are also the \\important{relational} operators. These are used to compare values. These are \\important{equal} (\\code{==}), \\important{less than} (\\code{<}), \\important{greater than} (\\code{>}), \\important{less equal} (\\code{<=}), \\important{greater equal} (\\code{>=}) and \\important{not equal} (\\code{!=}). Even if they sound somewhat the same, they are actually used in different manners, depending on what one is aiming for.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $1 = 1$\n\t\t\\State $1 < 1$\n\t\t\\State $1 > 1$\n\t\t\\State $1 \\leq 1$\n\t\t\\State $1 \\geq 1$\n\t\t\\State $1 \\neq 1$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nEach one of these will give a \\firstfound{boolean} value: \\code{true} or \\code{false}. From the top: \\code{true}, \\code{false}, \\code{false}, \\code{true}, \\code{true}, \\code{false}. To explain this: the first one is 1 equals 1, which is true since they are the same. The next two are false, because 1 is neither bigger nor smaller than 1. The two following are true: even though they check for bigger or smaller values, they also check whether the values are equal, which they are. The last one checks whether the values are not equal, but since they are equal, it gives false.\nOf course, one could put variables in there, to relate their values to other values.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 1$\n\t\t\\State $y \\gets 2$\n\t\t\\State $y \\neq x$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will make the third statement true, as the value of \\code{y} (2) is not equal to the value of \\code{x} (1).\n\n\\subsection{Conditions}\n\nWhile one can set variables and do operations on them, one cannot do much unless there is some way to change the program flow. In come conditions.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 1$\n\t\t\\If{$x > 0$}\n\t\t\t\\State $x \\gets 0$\n\t\t\\EndIf\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will first prepare the variable \\code{x}. The \\important{\\code{if}}-condition will then check whether its value is bigger than 0, and since that is \\code{true} here, it will execute the next statements. The \\important{\\code{then}} is only used to mark the start of a code block. The \\important{\\code{end}} closes the previously mentioned code block. A code block is a bunch of statements grouped together.\n\nIt is worth noting that the third statement is \\important{indented}. This is only to visually tell everyone who reads the code that it belongs to the statement preceding it. 
That is, it makes it a lot easier to understand the code flow. In some languages, indentation is a requirement.\n\nAlong with the if-statement, there are two more common conditions that one should use. They are \\important{\\code{else}} and \\important{\\code{else if}}. The \\code{else} is only executed if the preceding conditions were \\code{false}. Of course, the \\code{else if} looks like it is \\code{else} and \\code{if} concatenated together, but in most languages they are actually split apart.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 3$\n\t\t\\State $y \\gets 4$\n\t\t\\State $z \\gets 3$\n\t\t\\If{$x = y$}\n\t\t\t\\State $y \\gets z$\n\t\t\\ElsIf{$x = z$}\n\t\t\t\\State $z \\gets y$\n\t\t\\Else\n\t\t\t\\State $x \\gets y$\n\t\t\\EndIf\n\t\\end{algorithmic}\n\\end{algorithm}\n\nWe have three variables with values. First, \\code{x} and \\code{y} are checked for equality. If they were equal, the first code block would run. In this case they are not, so that code block is skipped and the next condition is tested. Here \\code{x} does equal \\code{z}, so the second code block is executed and the \\code{else} block is skipped.\n\n\\subsection{Loops}\n\nAfter getting to know how to change the flow of the program by jumping to statements that should be executed, one could easily make a program like this.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $n \\gets 1$\n\t\t\\State $x \\gets 1$\n\t\t\\State \\textcolor{highlightred}{$x \\gets x * n$}\n\t\t\\State \\textcolor{highlightred}{$n \\gets n + 1$}\n\t\t\\State $x \\gets x * n$\n\t\t\\State $n \\gets n + 1$\n\t\t\\State $x \\gets x * n$\n\t\t\\State $n \\gets n + 1$\n\t\t\\State $x \\gets x * n$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nHowever, this is duplicated code, highlighted in red. There is a way to make this much shorter and easier to manage. In come \\important{loops}.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $n \\gets 1$\n\t\t\\State $x \\gets 1$\n\t\t\\While{$n \\leq 4$}\n\t\t\t\\State $x \\gets x * n$\n\t\t\t\\State $n \\gets n + 1$\n\t\t\\EndWhile\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThe \\important{\\code{while}}-statement will repeat its code block until its condition is false. Its condition is checked first, and then again each time the code block repeats. This generates the same result as the previous example; a step-by-step trace is given below.
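\n\nTo see the loop in action, here is a trace of the example above (a walkthrough, not additional code). Before the loop, $n$ is 1 and $x$ is 1. After the first repetition $x$ is 1 and $n$ is 2; after the second, $x$ is 2 and $n$ is 3; after the third, $x$ is 6 and $n$ is 4; after the fourth, $x$ is 24 and $n$ is 5. At that point $n \\leq 4$ is \\code{false}, so the loop stops with \\code{x} containing $24 = 1 * 2 * 3 * 4$.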
\n\nThere is also a loop that makes the logic of the example even easier to understand. The \\important{\\code{for}}-loop is one of the most used loops, as it will easily handle your \\important{iterator}. An iterator is exactly what the variable \\code{n} is doing in the examples. It will increase by a certain amount each step and can be used to run a code block a certain number of times or to get data from a list or variables.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 1$\n\t\t\\For{$n \\gets 1$ to $5$}\n\t\t\t\\State $x \\gets x * n$\n\t\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\nFor the variable \\code{n}, set the value 1 and increase it one step at a time until its value is equal to or higher than 5; that is, the code block runs for $n = 1, 2, 3, 4$, which gives the same result as before. There are several variants of this, but this document will only use this version, as it is the most common.\n\n\\subsection{Functions}\n% Functions: Write more about the difference of parameters and arguments\n\nFunctions are code blocks that one can send certain values into and execute, after which control returns to where the function was called from. Their biggest advantages are that algorithms can be moved out into them, reducing duplicate code and allowing reuse.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\Function{add}{$x$, $y$}\n\t\t\t\\State $z \\gets x + y$\n\t\t\t\\State \\Return $z$\n\t\t\\EndFunction\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis function can then be called upon.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets 5$\n\t\t\\State $z \\gets 4$\n\t\t\\State $y \\gets$ \\Call{add}{$x$, $z$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nBasically, \\code{x} and \\code{z} are each set to a value. Their values are then sent into the function \\code{add}, and the program flow moves into it. There it creates two new variables, does the addition and stores the result in a third variable. That value is then returned from the function. The program flow then moves back to where the function was called, and the returned value is assigned to \\code{y}.\n\nKeep in mind here that the variables within \\code{add} do not affect the variables outside of it. These are called \\important{local variables}. The names of the parameters could be anything, as they are created like normal variables and assigned the values that are sent into the function, in order. (The names in the function definition are called \\important{parameters}; the values sent in at the call site are called \\important{arguments}.)\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\Function{sum}{$a$}\n\t\t\t\\If{$a < 1$}\n\t\t\t\t\\State \\Return $a$\n\t\t\t\\EndIf\n\t\t\t\\State \\Return \\Call{sum}{$a - 1$}$ + a$\n\t\t\\EndFunction\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis function will be called with a number contained within \\code{a}. It will then check if it is less than 1. If not, it will continue and call itself. This continues until \\code{a} is less than one, at which point it returns normally. The contributions of \\code{a} from each call to \\code{sum} are then added together, resulting in the sum of 0 up to the value of \\code{a} in the first call to \\code{sum}. A trace of a small call is given below.
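\n\nAs a walkthrough (not additional code): calling the function with \\code{a} set to 3 evaluates to \\code{sum(2) + 3}, which becomes \\code{(sum(1) + 2) + 3} and then \\code{((sum(0) + 1) + 2) + 3}. Since 0 is less than 1, the innermost call returns 0, and the calls unwind to $0 + 1 + 2 + 3 = 6$.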
\n\n\\code{sum} is called a \\important{recursive} function. This applies whenever a function calls itself, directly or indirectly. Indirect recursion happens when a function calls another function that in turn calls the first one.\n\nAs previously mentioned, each call to the function will create a new set of local variables that do not affect the previous call. This means that the variable \\code{a} will be created anew, with a new value, for each function call.\n\nTo make this a bit simpler to understand, the example from the previous section will be rewritten with recursion instead of a \\code{for}-loop.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\Function{factorial}{$n$}\n\t\t\t\\If{$n \\leq 1$}\n\t\t\t\t\\State \\Return $n$\n\t\t\t\\EndIf\n\t\t\t\\State \\Return \\Call{factorial}{$n - 1$}$ * n$\n\t\t\\EndFunction\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThis will return the same result as the previous example.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $x \\gets$ \\Call{factorial}{$4$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nWorth noting is that even though recursive functions are really helpful, it is easy to make them run indefinitely if one is not careful. Most languages do not like too many nested function calls, and will crash if the recursion goes too deep.\n\n\\subsection{Objects}\n\nSometimes it is a mess keeping track of variables. It gets even worse when one needs to, for instance, handle several persons with their respective age and name. Here is where \\important{objects} are helpful.\nObjects are, like variables, containers. But instead of a single value, they contain variables. One can then easily move around groups of data about certain types.\n\n\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\t\\State $person \\gets$ new Person\n\t\t\\State $person.age \\gets 45$\n\t\t\\State $person.name \\gets \"James\"$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThe first statement will create a \\important{\\code{new}} object of the type Person and store it within the variable \\code{person}. Then the \\important{dot}(\\code{.})-operator is used to access two of its variables, \\code{age} and \\code{name}, and assign them values.\n\nWorth noting is that the \\code{new}-operator and the dot-operator are not used in some languages, but they are used here to explain more easily how objects are used. For example, some languages use functions to both create objects and access their variables.\n", "meta": {"hexsha": "542edfec861dab6dc69392b88d2269d860769be7", "size": 13583, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/lesson1/section2.tex", "max_stars_repo_name": "McTwist/Blockland-TorqueScript-Lessons", "max_stars_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/lesson1/section2.tex", "max_issues_repo_name": "McTwist/Blockland-TorqueScript-Lessons", "max_issues_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-02-27T08:36:01.000Z", "max_issues_repo_issues_event_max_datetime": "2018-04-26T01:02:39.000Z", "max_forks_repo_path": "src/lesson1/section2.tex", "max_forks_repo_name": "McTwist/Blockland-TorqueScript-Lessons", "max_forks_repo_head_hexsha": "4d5c0d29440de2920bb98c570fbab87ab6668473", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6472868217, "max_line_length": 524, "alphanum_fraction": 0.7383494073, "num_tokens": 3669, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8670357494949105, "lm_q2_score": 0.6477982247516797, "lm_q1q2_score": 0.5616642193190451}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% PROBLEM 3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section*{Problem 3}\n\nConsider a 1000 MWE reactor with a 33\\% efficiency conversion from MWT to MWE. \nWhat is the minimum volume of UO$_2$, enriched to 3 (atom) \\% $^{235}$U that could theoretically supply the yearly energy production of this reactor. \nTreat energy contributions as coming only from the fission of $^{235}$U. \nThese fission events release about 200 MeV with 95\\% of that energy staying in the reactor.\n\n", "meta": {"hexsha": "53672c6461dd1efe16bf9da7bb41676b58e31761", "size": 500, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/drafts/disc02/disc02_exercise03.tex", "max_stars_repo_name": "mitchnegus/NE150-discussion", "max_stars_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/drafts/disc02/disc02_exercise03.tex", "max_issues_repo_name": "mitchnegus/NE150-discussion", "max_issues_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/drafts/disc02/disc02_exercise03.tex", "max_forks_repo_name": "mitchnegus/NE150-discussion", "max_forks_repo_head_hexsha": "1d2afe0fc4830c3d13d491b9d6ccb7819083c5ad", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.5555555556, "max_line_length": 150, "alphanum_fraction": 0.654, "num_tokens": 120, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8311430645886584, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5616570614315032}}
{"text": "\n\n    \\filetitle{invgamma}{Create function proportional to log of inv-gamma distribution}{logdist/invgamma}\n\n\t\\paragraph{Syntax}\\label{syntax}\n\n\\begin{verbatim}\nF = logdist.invgamma(MEAN,STD)\n\\end{verbatim}\n\n\\paragraph{Input arguments}\\label{input-arguments}\n\n\\begin{itemize}\n\\item\n  \\texttt{MEAN} {[} numeric {]} - Mean of the inv-gamma distribution.\n\\item\n  \\texttt{STD} {[} numeric {]} - Std dev of the inv-gamma distribution.\n\\end{itemize}\n\n\\paragraph{Output arguments}\\label{output-arguments}\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  \\texttt{F} {[} function\\_handle {]} - Function handle returning a\n  value proportional to the log of the inv-gamma density.\n\\end{itemize}\n\n\\paragraph{Description}\\label{description}\n\nSee \\href{logdist/Contents}{help on the logdisk package} for details on\nusing the function handle \\texttt{F}.\n\n\\paragraph{Example}\\label{example}\n\n\n", "meta": {"hexsha": "cfd3de5a4bc08cfc14c9cbb65a0b937a4fd5e2c3", "size": 887, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "-help/logdist/invgamma.tex", "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_issues_repo_path": "-help/logdist/invgamma.tex", "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_forks_repo_path": "-help/logdist/invgamma.tex", "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "avg_line_length": 23.972972973, "max_line_length": 105, "alphanum_fraction": 0.7474633596, "num_tokens": 259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.6757646140788307, "lm_q1q2_score": 0.5616570609802786}}
{"text": "\n\\subsection{The Riemann sphere (elliptic)}\n\n", "meta": {"hexsha": "6b5c16db4e4840c1ff438733a148d05319ab6861", "size": 45, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/analysis/riemann/01-01-The_Riemann_sphere_(elliptic).tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/analysis/riemann/01-01-The_Riemann_sphere_(elliptic).tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/analysis/riemann/01-01-The_Riemann_sphere_(elliptic).tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 11.25, "max_line_length": 42, "alphanum_fraction": 0.7555555556, "num_tokens": 14, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8311430562234877, "lm_q2_score": 0.6757645944891558, "lm_q1q2_score": 0.5616570503513428}}
{"text": "\\chapter{Probability Problem set Solutions}\n\\begin{enumerate}\n\t\\item An unbiased dice is thrown three times successively. The probability that the numbers of dots on the uppermost surface add up to 16 is\n\t{\\exyear{NET/JRF(DEC-2011)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $\\frac{1}{16}$\n\t\t\\task[\\textbf{B.}] $\\frac{1}{36}$\n\t\t\\task[\\textbf{C.}] $\\frac{1}{108}$\n\t\t\\task[\\textbf{D.}] $\\frac{1}{216}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{ We can get sum of dice as 16 in total six ways i.e. three ways $(6,5,5)$ and three ways $(6,6,4)$}\n\t\t\\text{Total number of ways for 3 dice having six faces }&=6 \\times 6 \\times 6\\\\\n\t\t&=\\frac{6}{6 \\times 6 \\times 6}=\\frac{1}{36}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item A ball is picked at random from one of two boxes that contain 2 black and 3 white and 3 black and 4 white balls respectively. What is the probability that it is white?\n\t{\\exyear{NET/JRF(JUNE-2012)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $34 / 70$\n\t\t\\task[\\textbf{B.}] $41 / 70$\n\t\t\\task[\\textbf{C.}] $36 / 70$\n\t\t\\task[\\textbf{D.}] $29 / 70$\n\t\\end{tasks}\n\t\\begin{answer}$\\left. \\right. $\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=1.6cm,width=4.55cm]{NET-19-2012}\n\t\t\\end{figure}\n\t\t\\begin{align*}\n\t\t\\intertext{Probability of picking white ball}\n\t\t\\text{From box }I=\\frac{3}{5} \\text{ and from box }I I&=\\frac{4}{7}\n\t\t\\intertext{Probability of picking a white ball from either of the two boxes is $=\\frac{1}{2}\\left[\\frac{3}{5}+\\frac{4}{7}\\right]=\\frac{41}{70}$}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item  A bag contains many balls, each with a number painted on it. There are exactly $n$ balls which have the number $n$ (namely one ball with 1 , two balls with 2, and so on until $N$ on them). An experiment consists of choosing a ball at random, noting the number on it and returning it to the bag. If the experiment is repeated a large number of times, the average value the number will tend to\n\t{\\exyear{NET/JRF(JUNE-2012)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $\\frac{2 N+1}{3}$\n\t\t\\task[\\textbf{B.}] $\\frac{N}{2}$\n\t\t\\task[\\textbf{C.}] $\\frac{N+1}{2}$\n\t\t\\task[\\textbf{D.}] $\\frac{N(N+1)}{2}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Total number of balls }1+2+3+4+\\ldots . .+N&=\\frac{N(N+1)}{2}\\\\\n\t\t\\text{The probability for choosing a $k^{\\text {th }}$ ball at random } &=\\frac{k}{\\frac{N(N+1)}{2}}\\\\\n\t\t\\text{Average of it is given by }\\langle k\\rangle&=\\Sigma k \\cdot P=\\frac{2 \\Sigma k^{2}}{N(N+1)}\\\\&=\\frac{2}{N(N+1)} \\cdot \\frac{N(N+1)(2 N+1)}{6}\\\\\n\t\t&=\\frac{2 N+1}{3} \\quad\\text{ where }\\Sigma k^{2}=\\frac{N(N+1)(2 N+1)}{6}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\t\\item  In a series of five Cricket matches, one of the captains calls \"Heads\" every time when the toss is taken. 
The probability that he will win 3 times and lose 2 times is\n\t{\\exyear{NET/JRF(DEC-2012)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $1 / 8$\n\t\t\\task[\\textbf{B.}]  $5 / 8$\n\t\t\\task[\\textbf{C.}] $3 / 16$\n\t\t\\task[\\textbf{D.}] $5 / 16$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tP&=\\left(\\frac{1}{2}\\right)^{3}\\left(1-\\frac{1}{2}\\right)^{5-3} \\frac{5 !}{3 !(5-3) !}\\\\&=\\frac{1}{8} \\times\\left(\\frac{1}{2}\\right)^{2} \\cdot \\frac{5 !}{3 !(5-3) !}\\\\\n\t\t&=\\frac{1}{32} \\cdot \\frac{5 \\times 4 \\times 3 !}{3 ! \\times 2 !}=\\frac{20}{32 \\times 2}\\\\&=\\frac{5}{8 \\times 2}=\\frac{5}{16}\n\t\t\\intertext{The probability of getting exactly $k$ successes in $n$ trials is given by the probability mass function $\\frac{n !}{k !(n-k) !} p^{k} \\cdot(1-p)^{n-k}$, where $k=$ successes and $n=$ trials.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  Two independent random variables $m$ and $n$, which can take the integer values $0,1,2, \\ldots, \\infty$, follow the Poisson distribution, with distinct mean values $\\mu$ and $v$ respectively. Then\n\t{\\exyear{NET/JRF(DEC-2014)}}\n\t\\begin{tasks}(1)\n\t\t\\task[\\textbf{A.}]  The probability distribution of the random variable $l=m+n$ is a binomial distribution.\n\t\t\\task[\\textbf{B.}] The probability distribution of the random variable $r=m-n$ is also a Poisson distribution.\n\t\t\\task[\\textbf{C.}] The variance of the random variable $l=m+n$ is equal to $\\mu+v$\n\t\t\\task[\\textbf{D.}] The mean value of the random variable $r=m-n$ is equal to 0 \n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{For independent Poisson variables the variances (equal to the means) add, so}\n\t\t\\sigma_{l}^{2}&=\\sigma_{m}^{2}+\\sigma_{n}^{2}=\\mu+v\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item  Consider a random walker on a square lattice. At each step the walker moves to a nearest neighbour site with equal probability for each of the four sites. The walker starts at the origin and takes 3 steps. The probability that during this walk no site is visited more than once is\n\t{\\exyear{NET/JRF(DEC-2015)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $12 / 27$\n\t\t\\task[\\textbf{B.}] $27 / 64$\n\t\t\\task[\\textbf{C.}] $3 / 8$\n\t\t\\task[\\textbf{D.}] $9 / 16$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Total number of ways }&=4 \\times 4 \\times 4\\\\\n\t\t\\text{Number of favourable outcomes }&=4 \\times 3 \\times 3\n\t\t\\intertext{($\\because$ any of the four options in step 1, but only 3 options in steps 2 \\& 3, because the walker cannot step back onto the previous site)}\\\\\n\t\t\\text{Probability }&=\\frac{4 \\times 3 \\times 3}{4 \\times 4 \\times 4}=\\frac{9}{16}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  Let $X$ and $Y$ be two independent random variables, each of which follows a normal distribution with the same standard deviation $\\sigma$, but with means $+\\mu$ and $-\\mu$, respectively. 
Then the sum $X+Y$ follows a\n\t{\\exyear{NET/JRF(JUNE-2016)}}\n\t\\begin{tasks}(1)\n\t\t\\task[\\textbf{A.}] Distribution with two peaks at $\\pm \\mu$ and mean 0 and standard deviation $\\sigma \\sqrt{2}$\n\t\t\\task[\\textbf{B.}]  Normal distribution with mean 0 and standard deviation $2 \\sigma$\n\t\t\\task[\\textbf{C.}] Distribution with two peaks at $\\pm \\mu$ and mean 0 and standard deviation $2 \\sigma$\n\t\t\\task[\\textbf{D.}] Normal distribution with mean 0 and standard deviation $\\sigma \\sqrt{2}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\mu^{\\prime}&=\\mu_{x}+\\mu_{y}=\\mu-\\mu=0\\\\\n\t\t\\sigma^{\\prime 2}&=\\sigma_{x}^{2}+\\sigma_{y}^{2}=\\sigma^{2}+\\sigma^{2}\\\\\n\t\t\\sigma^{\\prime}&=\\sqrt{2} \\sigma\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  A random variable $n$ obeys Poisson statistics. The probability of finding $n=0$ is $10^{-6}$. The expectation value of $n$ is nearest to\n\t{\\exyear{NET/JRF(JUNE-2017)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] 14\n\t\t\\task[\\textbf{B.}] $10^{6}$\n\t\t\\task[\\textbf{C.}] $e$\n\t\t\\task[\\textbf{D.}] $10^{2}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{In Poisson statistics the probability of finding the value $n$ is given by $P(n)=\\frac{\\mu^{n}}{n !} e^{-\\mu}$}\n\t\t\\intertext{The mean of the Poisson distribution is $\\mu$. From the question,}\n\t\tP(0)&=10^{-6} \\Rightarrow 10^{-6}=\\frac{\\mu^{0}}{0 !} e^{-\\mu} \\Rightarrow e^{-\\mu}=10^{-6}\\\\\n\t\t\\text{Taking the log of both sides, }-\\mu&=-6 \\ln 10 \\Rightarrow \\mu=6 \\ln 10\\\\\n\t\t\\text{Hence the expectation value of $n$ is }\\mu&=6 \\times 2.30=13.8 \\approx 14\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\\item At each time step, a random walker in one dimension either remains at the same point with probability $\\frac{1}{4}$, or moves by a distance $\\Delta$ to the right or left with probabilities $\\frac{3}{8}$ each. After $N$ time steps, its root mean squared displacement is\n{\\exyear{NET/JRF(JUNE-2019)}}\n\\begin{tasks}(4)\n\t\\task[\\textbf{A.}] $\\Delta \\sqrt{N}$\n\t\\task[\\textbf{B.}] $\\Delta \\sqrt{\\frac{9 N}{16}}$\n\t\\task[\\textbf{C.}] $\\Delta \\sqrt{\\frac{3 N}{4}}$\n\t\\task[\\textbf{D.}] $\\Delta \\sqrt{\\frac{3 N}{8}}$\n\\end{tasks}\n\\begin{answer}\n\t\\begin{align*}\n\t\\intertext{Since the steps are independent and of zero mean, the RMS displacement scales as}\n\tx_{\\mathrm{rms}}&=k \\sqrt{N}\\\\\n\t\\text{Take }N&=1 \\text{ as a special case}\n\t\\intertext{Outcomes $1,0,-1$\n\t\t(times $\\Delta$ ) with probability $\\frac{3}{8}, \\frac{1}{4}, \\frac{3}{8}$}\n\t\\left\\langle x^{2}\\right\\rangle&=\\sum P_{i} X_{i}^{2}=\\frac{3}{8} \\cdot 1+\\frac{1}{4} \\cdot 0+\\frac{3}{8} \\cdot 1\\\\&=\\frac{3}{4} \\Rightarrow x_{\\text {rms }}=\\sqrt{\\frac{3}{4} \\cdot 1}\\\\\n\t\\text{So, option }&\\Delta \\sqrt{\\frac{3 N}{4}}\\text{ is correct.}\n\t\\end{align*}\n\tSo the correct answer is \\textbf{Option (C)}\n\\end{answer}\n\t\\item A box contains 5 white and 4 black balls. Two balls are picked together at random from the box. 
What is the probability that these two balls are of different colours?\n\t{\\exyear{NET/JRF(DEC-2019)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $\\frac{1}{2}$ \n\t\t\\task[\\textbf{B.}] $\\frac{5}{18}$\n\t\t\\task[\\textbf{C.}] $\\frac{1}{3}$\n\t\t\\task[\\textbf{D.}] $\\frac{5}{9}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{Probability that the two balls are of different colours, with $5 W$ and $4 B$ in the box,}\n\t\t&=\\frac{5 C_{1} \\times 4 C_{1}}{9 C_{2}}=\\frac{\\frac{5 !}{4 ! \\times 1 !} \\times \\frac{4 !}{3 ! \\times 1 !}}{\\frac{9 !}{7 ! \\times 2 !}}=\\frac{5 \\times 4}{\\frac{9 \\times 8}{2}}=\\frac{5}{9}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  A particle hops randomly from a site to its nearest neighbour in each step on a square lattice of unit lattice constant. The probability of hopping to the positive $x$-direction is $0.3$, to the negative $x$-direction is $0.2$, to the positive $y$-direction is $0.2$ and to the negative $y$-direction is $0.3 .$ If a particle starts from the origin, its mean position after $N$ steps is\n\t{\\exyear{NET/JRF(DEC-2019)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $\\frac{1}{10} N(-\\hat{i}+\\hat{j})$\n\t\t\\task[\\textbf{B.}] $\\frac{1}{10} N(\\hat{i}-\\hat{j})$\n\t\t\\task[\\textbf{C.}] $N(0.3 \\hat{i}-0.2 \\hat{j})$\n\t\t\\task[\\textbf{D.}] $N(0.2 \\hat{i}-0.3 \\hat{j})$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\left\\langle \\vec{r}\\right\\rangle &=\\sum_{i} p_{i} \\vec{r}_{i}\\\\\n\t\t&=0.3 \\hat{i}-0.2 \\hat{i}+0.2 \\hat{j}-0.3 \\hat{j}=0.1 \\hat{i}-0.1 \\hat{j}\\\\\n\t\t\\text{For $N$ steps, }&=\\frac{N}{10}[\\hat{i}-\\hat{j}]\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item A basket consists of an infinite number of red and black balls in the proportion $p:(1-p)$. Three balls are drawn at random without replacement. The probability of their being two red and one black is a maximum for\n\t{\\exyear{NET/JRF(JUNE-2020)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}]  $p=\\frac{3}{4}$\n\t\t\\task[\\textbf{B.}] $p=\\frac{3}{5}$\n\t\t\\task[\\textbf{C.}] $p=\\frac{1}{2}$\n\t\t\\task[\\textbf{D.}] $p=\\frac{2}{3}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{Up to the constant factor ${ }^{3} C_{2}=3$, which does not affect the maximization,}\n\t\tP&=p^{2}(1-p) \\quad \\\\&\\Rightarrow \\frac{d P}{d p}=\\frac{d}{d p} p^{2}(1-p)=0 \\\\&\\Rightarrow p^{2}(-1)+(1-p) 2 p=0\\\\\n\t\t&\\Rightarrow-p^{2}+2 p-2 p^{2}=0 \\Rightarrow 3 p^{2}=2 p \\Rightarrow p=2 / 3\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\n\t\\item An unbiased die is cast twice. The probability that the positive difference (bigger $-$ smaller) between the two numbers is 2 is\n\t{\\exyear{ JEST 2012}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}]$\\frac{1}{9}$\n\t\t\\task[\\textbf{b.}]$\\frac{2}{9}$\n\t\t\\task[\\textbf{c.}] $\\frac{1}{6}$\n\t\t\\task[\\textbf{d.}] $\\frac{1}{3}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tp(2)&=\\frac{n(E)}{n(S)}\n\t\t\\intertext{The outcomes giving a positive difference of 2 are}\n\t\t&[(3,1),(4,2),(5,3),(6,4),(1,3),(2,4),(3,5),(4,6)]\\\\\n\t\tp(2)&=\\frac{8}{36}=\\frac{2}{9}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (b)}\n\t\\end{answer}\n\t\\item A box contains 100 coins out of which 99 are fair coins and 1 is a double-headed coin. Suppose you choose a coin at random and toss it 3 times. It turns out that the results of all 3 tosses are heads. 
What is the probability that the coin you have drawn is the double-headed one?\n\t{\\exyear{ JEST 2013}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}] $0.99$\n\t\t\\task[\\textbf{b.}]$0.925$\n\t\t\\task[\\textbf{c.}] $0.75$\n\t\t\\task[\\textbf{d.}] $0.01$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\tSo the correct answer is \\textbf{Option (c)}\n\t\\end{answer}\n\t\\item There are on average 20 buses per hour at a point, but at random times. The probability that there are no buses in five minutes is closest to\n\t{\\exyear{ JEST 2013}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}]$0.07$\n\t\t\\task[\\textbf{b.}] $0.60$\n\t\t\\task[\\textbf{c.}]$0.36$\n\t\t\\task[\\textbf{d.}] $0.19$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{From the Poisson distribution,}\n\t\tP(n)&=\\frac{e^{-\\lambda} \\lambda^{n}}{n !}\\\\\n\t\t\\text{here, }\\lambda&=20\\text{ buses per hour }\\\\\n\t\t\\Rightarrow \\lambda&=\\frac{5}{3}\\text{ buses in five minutes}\n\t\t\\intertext{Therefore, the probability that there are no buses in five minutes,}\n\t\tP(n=0)&=\\frac{e^{-\\frac{5}{3}}\\left(\\frac{5}{3}\\right)^{0}}{0 !}=e^{-5 / 3}=0.1886 \\approx 0.19\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (d)}\n\t\\end{answer}\n\t\\item Two drunks start out together at the origin, each having equal probability of making a step simultaneously to the left or right along the $x$ axis. The probability that they meet after $n$ steps is\n\t{\\exyear{ JEST 2013}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}]$\\frac{1}{4^{n}} \\frac{2 n !}{n !^{2}}$\n\t\t\\task[\\textbf{b.}] $\\frac{1}{2^{n}} \\frac{2 n !}{n !^{2}}$\n\t\t\\task[\\textbf{c.}] $\\frac{1}{2^{n}} 2 n !$\n\t\t\\task[\\textbf{d.}]  $\\frac{1}{4^{n}} n !$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{The probability of taking $r$}&\\text{ steps to the right out of $N$ steps }={ }^{N} C_{r}\\left(\\frac{1}{2}\\right)^{r}\\left(\\frac{1}{2}\\right)^{N-r}\\\\\n\t\t\\text{Total steps }&=N=n+n=2 n\n\t\\intertext{The walkers meet if and only if they take the same number of right steps; summing over the possibilities is equivalent to asking for exactly $n$ right steps out of the combined $N=2n$ steps, so}\n\tP={ }^{N} C_{n}\\left(\\frac{1}{2}\\right)^{n}\\left(\\frac{1}{2}\\right)^{N-n}&=\\frac{N !}{(N-n) ! n !}\\left(\\frac{1}{2}\\right)^{n}\\left(\\frac{1}{2}\\right)^{N-n}=\\frac{2 n !}{n ! n !}\\left(\\frac{1}{2}\\right)^{2 n}=\\frac{2 n !}{(n !)^{2} 4^{n}}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (a)}\n\t\\end{answer}\n\t\\item If two ideal dice are rolled once, what is the probability of getting at least one '6'?\n\t{\\exyear{ JEST 2015}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}]$\\frac{11}{36}$\n\t\t\\task[\\textbf{b.}]$\\frac{1}{36}$\n\t\t\\task[\\textbf{c.}]$\\frac{10}{36}$\n\t\t\\task[\\textbf{d.}]  $\\frac{5}{36}$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Number of favourable}&\\text{ outcomes } n(E)=11\\\\\n\t\t[(1,6),(2,6),(3,6),&(4,6),(5,6),(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)]\\\\\n\t\\text{Number of points}&\\text{ in the sample space }n(S)=6^{2}=36\n\t\t\\intertext{Probability of getting at least one '6'} &=\\frac{n(E)}{n(S)}=\\frac{11}{36}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (a)}\n\t\\end{answer}\n\t\\item The mean value of random variable $x$ with probability density $p(x)=\\frac{1}{\\sigma \\sqrt{2 \\pi}} \\cdot 
\\exp \\left[-\\frac{\\left(x^{2}+\\mu x\\right)}{\\left(2 \\sigma^{2}\\right)}\\right]$ is:\n\t{\\exyear{ JEST 2016}}\n\t \\begin{tasks}(2)\n\t\t\\task[\\textbf{a.}]0\n\t\t\\task[\\textbf{b.}]$\\frac{\\mu}{2}$\n\t\t\\task[\\textbf{c.}] $\\frac{-\\mu}{2}$\n\t\t\\task[\\textbf{d.}]  $\\sigma$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{Completing the square in the exponent,}\n\t\t-\\frac{x^{2}+\\mu x}{2 \\sigma^{2}}&=-\\frac{\\left(x+\\frac{\\mu}{2}\\right)^{2}}{2 \\sigma^{2}}+\\frac{\\mu^{2}}{8 \\sigma^{2}}\n\t\t\\intertext{so $p(x)$ is proportional to a Gaussian centred at $-\\mu / 2$, and hence}\n\t\t\\langle x\\rangle&=-\\frac{\\mu}{2}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (c)}\n\t\\end{answer}\n\\item Suppose that we toss two fair coins a hundred times each. The probability that the same number of heads occurs for both coins at the end of the experiment is\n{\\exyear{ JEST 2017}}\n \\begin{tasks}(2)\n\t\\task[\\textbf{a.}]$\\left(\\frac{1}{4}\\right)^{100} \\sum_{n=0}^{100}\\left(\\begin{array}{c}100 \\\\ n\\end{array}\\right)$\n\t\\task[\\textbf{b.}] $2\\left(\\frac{1}{4}\\right)^{100} \\sum_{n=0}^{100}\\left(\\begin{array}{c}100 \\\\ n\\end{array}\\right)^{2}$\n\t\\task[\\textbf{c.}]$\\frac{1}{2}\\left(\\frac{1}{4}\\right)^{100} \\sum_{n=0}^{100}\\left(\\begin{array}{c}100 \\\\ n\\end{array}\\right)^{2}$\n\t\\task[\\textbf{d.}] $\\left(\\frac{1}{4}\\right)^{100} \\sum_{n=0}^{100}\\left(\\begin{array}{c}100 \\\\ n\\end{array}\\right)^{2}$\n\\end{tasks}\n\\begin{answer}\n\t\\begin{align*}\n\t\\intertext{If we toss one fair coin a hundred times, the probability that $n$ heads occur at the end of the 100 tosses is}\n\t&{ }^{100} C_{n}\\left(\\frac{1}{2}\\right)^{n}\\left(\\frac{1}{2}\\right)^{100-n}\n\\intertext{Hence, the probability that the same number of heads occurs for both coins at the end of the experiment is}\n&\\sum_{n=0}^{100}\\left({ }^{100} C_{n}\\left(\\frac{1}{2}\\right)^{100}\\right) \\cdot\\left({ }^{100} C_{n}\\left(\\frac{1}{2}\\right)^{100}\\right)=\\sum_{n=0}^{100}\\left({ }^{100} C_{n}\\right)^{2}\\left(\\frac{1}{2}\\right)^{200}=\\left(\\frac{1}{4}\\right)^{100} \\sum_{n=0}^{100}\\left({ }^{100} C_{n}\\right)^{2}\n\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (d)}\n\\end{answer}\n\\item An electronic circuit with 10000 components performs its intended function successfully with a probability $0.99$ if there are no faulty components in the circuit. The probability that there are faulty components is $0.05$. If there are faulty components, the circuit performs successfully with a probability $0.3$. The probability that the circuit performs successfully is $\\frac{x}{10000}$. What is $x$?\n{\\exyear{ JEST 2018}}\n\\begin{answer}\n\\begin{align*}\nP(\\text{success})&=0.95 \\times 0.99+0.05 \\times 0.3=0.9405+0.015=0.9555=\\frac{9555}{10000}\n\\end{align*}\nSo the correct answer is  \\textbf{9555}\n\\end{answer}\n\\item A person plans to go from town $A$ to town $B$ by taking either the route $(R 1+R 2)$ with probability $\\frac{1}{2}$ or the route $(R 1+R 3)$ with probability $\\frac{1}{2}$ (see figure). Further, there is a probability $\\frac{1}{3}$ that $R 1$ is blocked, a probability $\\frac{1}{3}$ that $R 2$ is blocked, and a probability $\\frac{1}{3}$ that $R 3$ is blocked. 
What is the probability that he/she would reach town $B$?\n{\\exyear{ JEST 2019}}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[height=3cm,width=5.5cm]{JEST-58-2019}\n\\end{figure}\n \\begin{tasks}(2)\n\t\\task[\\textbf{a.}]$\\frac{8}{9}$\n\t\\task[\\textbf{b.}]$\\frac{1}{3}$\n\t\\task[\\textbf{c.}]$\\frac{4}{9}$\n\t\\task[\\textbf{d.}]$\\frac{2}{3}$\n\\end{tasks}\n\\begin{answer}\n\t\\begin{align*}\n\t\\text{Probability of $R 1$ blocked}&=\\frac{1}{3} \\Rightarrow \\text{probability of $R 1$ not blocked}=1-\\frac{1}{3}=\\frac{2}{3}\\\\\n\t\\text{Each of the two routes}&\\text{ is chosen with probability }\\frac{1}{2}\\\\\n\\text{Total probability }(A \\rightarrow B)&=\\underbrace{\\frac{2}{3}}_{R 1 \\text{ open}}\\left[\\frac{1}{2} \\times \\underbrace{\\frac{2}{3}}_{R 2 \\text{ open}}+\\frac{1}{2} \\times \\underbrace{\\frac{2}{3}}_{R 3 \\text{ open}}\\right]=\\frac{4}{9}\n\t\\end{align*}\n\tSo the correct answer is \\textbf{Option (c)}\n\\end{answer}\n\n\\end{enumerate}", "meta": {"hexsha": "af705621356b7c2b2b7fa09b807ff110b6100ea6", "size": 18608, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSIR- Mathematical Physics/chapter/Probability Problem set Solutions.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSIR- Mathematical Physics/chapter/Probability Problem set Solutions.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSIR- Mathematical Physics/chapter/Probability Problem set Solutions.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.8328690808, "max_line_length": 426, "alphanum_fraction": 0.6377364574, "num_tokens": 7234, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8311430478583168, "lm_q1q2_score": 0.5616570501257306}}
{"text": "\\section{Background on DSP-PBE}\n\nWe give a short introduction on Digital Signal Processing Programming-By-Example (DSP-PBE).\nThe idea of DSP-PBE was introduced in~\\cite{SantolucitoFARM}.\n\n\\subsection{Aural Distance as a Metric Space}\n\nOne of the key components in machine learning systems is having a metric that to quantifies how close the candidate solution is to the desired solution.\nIn the case of programming-by-example, the desired solution is defined by the user-provided input-output example pairs.\nAt a high-level, our goal is to minimize the distance, $dist$, between the output of the candidate function, $F$, on the provided input, $I$, and the provided output $O$.\nPut mathematically, we have a minimization problem: $min (dist ( F(I)), O)$.\n\nPrevious work proposed using a distance metric between two audio files based on constellation plots of multiple FFT plots over the samples~\\cite{SantolucitoFARM}.\nWe adopt the same distance metric for this work.\n\n\\url{https://en.wikipedia.org/wiki/Short-time_Fourier_transform#Sliding_DFT}\n\n\\subsection{Gradient Descent}\n\nWe use a modified version of gradient descent to find good parameters for the filter.\nHave to decide how much to go into this - probably too technical for ICMC, but there are few key points worth highlight (e.g. distance metric needs to be convex-ish).\nShould we include a similar convexity graph as in FARM? \n\n\n\\section{DSP-PBE for analog effects}\n\nIn order to use DSP-PBE there are a number of new challenges that we must overcome.\nFirst, we need a more efficient way of choosing an initial point for gradient descent. \nSecond, we must have a way to automatically explore different structures of DSP programs. \nFinally, a priority for this system is usability by novice DSP programmers, so we needed our tool to have the ability to generate program code that could be immediately run.\n\n\\markk{system image here specialized on analog effects}\n\n\\subsection{Choosing an initial point for Gradient Descent}\n\nAs a very rough estimate, if the max freq peak of the output is less than the max freq peak of input\n  we need a lpf, and it should have a value a bit less than the max peak of output\n\n\\subsection{Searching for DSP structure}\n\n\\markk{need an image of a DSP program structures (demonstrating both sequential and parallel composition) along with the program code to describe this structure. Ideally made in tikz, but if we run out of time, a screenshot of Max/PD/google slides will work too}\n\nJust brute force for now - maybe guided by user? \nSomething smarter can come later.\n\n\\subsection{Generating Program Code}\n\n\\markk{how to we turn the solution into a PD/MaxMSP/SuperCollider program (see TODO on github, still need to actually code this). 
\n\n\\subsection{Generating Program Code}\n\n\\markk{how do we turn the solution into a PD/MaxMSP/SuperCollider program (see TODO on github, still need to actually code this). Need to be sure to mention that this was not possible in the previous FARM paper.}\n\n\\samm{added some words on the conversion to supercollider code}\n\nIn order to make the filter generated by the DSP-PBE program usable by computer musicians, we have implemented a translation scheme to convert the in-program representation of the filter to raw SuperCollider code.", "meta": {"hexsha": "7ede2cfbd98404a2d81ff2d9afd86e354c41158c", "size": 3086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/ICMC2019/secs/system.tex", "max_stars_repo_name": "Yale-OMI/DSP-PBE", "max_stars_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-03T02:36:39.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-03T02:36:39.000Z", "max_issues_repo_path": "papers/ICMC2019/secs/system.tex", "max_issues_repo_name": "Yale-OMI/DSP-PBE", "max_issues_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2018-11-16T21:50:44.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-16T18:57:19.000Z", "max_forks_repo_path": "papers/ICMC2019/secs/system.tex", "max_forks_repo_name": "Yale-OMI/DSP-PBE", "max_forks_repo_head_hexsha": "073f366e8096004adeec5d2cde1cf3546c4690f5", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.3461538462, "max_line_length": 262, "alphanum_fraction": 0.7922877511, "num_tokens": 684, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430394931456, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.561657044472844}}
{"text": "\\documentclass{article}\n\n\\usepackage{fullpage}\n\\usepackage{textcomp}\n\\usepackage{graphicx}\n\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\renewcommand{\\headrulewidth}{0pt}\n\\cfoot{\\sc Page \\thepage\\ of \\pageref{end}}\n\n\\begin{document}\n\n{\\large \\noindent{}University of Toronto at Scarborough\\\\\n\\textbf{CSC A67/MAT A67 - Discrete Mathematics, Fall 2015}}\n\n\\section*{\\huge Exercise \\#5: Logic}\n\n{\\large Due: November 6, 2015 at 11:59 p.m.\\\\\nThis exercise is worth 3\\% of your final grade.}\\\\[1em]\n\\textbf{Warning:} Your electronic submission on MarkUs affirms that this exercise is your own work and no\none else's, and is in accordance with the University of Toronto Code of Behaviour on Academic Matters,\nthe Code of Student Conduct, and the guidelines for avoiding plagiarism in CSC A67/MAT A67.\\\\[1ex]\nThis exercise is due by 11:59 p.m. November 6. Late exercises will not be accepted.\\\\[1ex]\n\\renewcommand{\\labelenumi}{\\arabic{enumi}.}\n\\renewcommand{\\labelenumii}{(\\alph{enumii})}\n\\begin{enumerate}\n\\item If\\marginpar{[3]} $k$: ``A bell has been rung\", $m$: ``There is meat in the room\", and $n$: ``The dogs are salivating\",\\\\\nwhich of the following statements are equivalent to $\\neg n\\to\\neg(m\\vee k)$?\n\t\\renewcommand{\\labelenumii}{\\roman{enumii})}\\begin{enumerate}\n\t\\item $\\neg n\\to\\neg m\\vee k$\n\t\\item ``If the dogs are not salivating, then there is no meat in the room, or a bell has not been rung.\"\n\t\\item ``If the dogs are not salivating, then there is no meat in the room and a bell has not been rung.\"\n\t\\item $n\\to (m\\vee k)$\n\t\\item $\\neg n\\to \\neg m\\vee\\neg k$\n\t\\item $\\neg n\\to \\neg m\\wedge\\neg k$\n\t\\item ``If the dogs are not salivating, then a bell has not been rung, nor is there meat in the room.\"\n\t\\item $n\\vee\\neg m\\wedge\\neg k$\n\t\\item $\\neg(m\\vee k)\\vee n$\n\t\\item ``There is no meat in the room, nor has a bell been rung, if the dogs are not salivating.\"\n\t\\item ``A bell has not been rung, or there is no meat in the room, if the dogs are not salivating.\"\n\t\\item ``If a bell has not been rung and there is no meat in the room, then the dogs are not salivating.\"\n\t\\item ``For the dogs to be salivating, it is sufficent and necessary that a bell has been rung and that there is meat in the room.\"\n\t\\item $\\neg (m\\vee k)\\to\\neg n$\n\t\\item ``For the dogs to be salivating, it is necessary that a bell has been rung and that there is meat in the room.\"\n\t\\item ``If the dogs are salivating, then a bell has been rung or there is meat in the room.\"\n\t\\item $n\\to m\\wedge k$\n\t\\item $\\neg n\\to m\\wedge k$\n\t\\item $n\\vee\\neg(m\\vee k)$\n\t\\item $\\neg(m\\vee k)\\to\\neg(\\neg n)$\n\t\\end{enumerate}\\pagebreak\n\\item Construct\\marginpar{[8]} truth tables for the following statements:\n\t\\renewcommand{\\labelenumii}{(\\alph{enumii})}\\begin{enumerate}\n\t\\item $a\\vee b\\to\\neg b$\n\t\\item $a\\wedge b\\wedge c$\n\t\\item $\\neg\\neg a\\vee a$\n\t\\item $a\\vee b\\to a\\vee b$\n\t\\end{enumerate}\n\\item Shade\\marginpar{[8]} the regions of a Venn diagram where each of the following statements is true:\n\t\\begin{enumerate}\n\t\\item $\\neg p\\to q$\n\t\\item $p\\leftrightarrow q\\vee\\neg q$\n\t\\item $\\neg p\\vee \\neg q\\wedge p$\n\t\\item $(p\\wedge q)\\vee(r\\wedge q)$\n\t\\end{enumerate}\n\\item \\begin{enumerate}\n\t\\item Negate\\marginpar{[4]} the statement ``Sarah has a spaceship and has three fingers and is from Venus\" and 
convert it to formal logic.\n\t\\item Simplify the statement from \\textbf{(a)} as much as possible using equivalence rules.\n\t\\end{enumerate}\n\\item Using\\marginpar{[3]} the statements\\\\\n\t\\begin{tabular}{p{0.47\\textwidth}p{0.5\\textwidth}}\n\t\\begin{description}\n\t\\item[A:] ``Your vehicle has a District No. 64 permit\"\n\t\\item[B:] ``It is between 8am and 9am\"\n\t\\item[C:] ``It is a school day (Monday to Friday)\"\n\t\\item[D:] ``You have been parked for less than 2 hours during legal hours\"\n\t\\item[E:] ``It is between 8am and 6pm\"\n\t\\item[F:] ``You have been parked for less than 5 minutes during legal hours\"\n\t\\item[G:] ``It is Monday\"\n\t\\item[H:] ``You are left of the sign\"\n\t\\item[I:] ``It is between 1:30pm and 4pm\"\n\t\\item[J:] ``It is between 9am and 10am\"\n\t\\item[K:] ``It is after 4pm\"\n\t\\item[L:] ``It is after 1pm\"\n\t\\end{description}\n\tcreate a formal logic statement that is true any time you are allowed to park near the following parking signs, and false any time it is illegal to park there:&\n\t\\begin{center}\n\t\\includegraphics[width=0.4\\textwidth,clip,trim=5cm 5cm 5cm 6cm]{Formal-Logic-Exercise-parkingSigns.JPG}\n\t\\end{center}\n\t\\end{tabular}\n\\end{enumerate}\n\\hrulefill\\\\\n\\noindent[Total: 26 marks]\\label{end}\n\n\\end{document}", "meta": {"hexsha": "8a389b75746aaece3f49c80e331249100703d3a8", "size": 4489, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "teaching/resources/Formal-Logic-Exercise.tex", "max_stars_repo_name": "ozhanghe/ozhanghe.github.io", "max_stars_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-23T17:23:00.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-23T17:23:00.000Z", "max_issues_repo_path": "teaching/resources/Formal-Logic-Exercise.tex", "max_issues_repo_name": "ozhanghe/ozhanghe.github.io", "max_issues_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2017-06-05T03:48:15.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-18T03:30:18.000Z", "max_forks_repo_path": "teaching/resources/Formal-Logic-Exercise.tex", "max_forks_repo_name": "ozhanghe/ozhanghe.github.io", "max_forks_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-02-11T13:35:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T05:34:01.000Z", "avg_line_length": 47.2526315789, "max_line_length": 161, "alphanum_fraction": 0.7121853419, "num_tokens": 1480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.561622839695873}}
{"text": "The quasi-linear forms written above enable the particularization of the theory of first order quasi-linear systems developed in section \\ref{sec:PDEs} to solid mechanics problems. %Once the characteristic analysis of systems \\eqref{eq:HE_quasilinear}, \\eqref{eq:HPP_quasi-linear} and \\eqref{eq:EP_quasilinear} will be carried out, a particular type of IVP which is of particular interest in this manuscript will be introduced.\n\n%The theory of first order quasi-linear system is now applied to solid dynamics through the characteristic analysis of systems \\eqref{eq:HE_quasilinear}, \\eqref{eq:HPP_quasi-linear} and \\eqref{eq:EP_quasilinear} is performed in this section. Second, a particular type of IVP that will be considered in the rest of the manuscript is introduced. \n%\\subsection{Characteristic structure of solutions}\nFor the sake of simplicity, studies of finite deformations and linearized geometrical frameworks will be condensed in this part by using the generic notation of stress $\\tens{S}$ and vectors written in the reference configuration.\nFurthermore, instead of considering multi-dimensional systems of conservation laws, we will focus without loss of generality on the quasi-linear form \\eqref{eq:HE_quasilinear} projected on an arbitrary direction $\\vect{N}=\\[\\vect{E}_1,\\vect{E}_2,\\vect{E}_3\\]$ \\cite[p.425-426]{Leveque}. In this direction, one has:\n\\begin{equation}\n  \\label{eq:normal_quasi}\n  \\Qcb_t + \\Jbsf \\drond{\\Qcb}{X_N} = \\Scb\n\\end{equation}\nin which $X_N=\\vect{X}\\cdot\\vect{N}$ and the \\textit{Jacobian matrix} $\\Jbsf = \\Absf^\\alpha N_\\alpha$ of dimension $m$ arise.\nIn three dimensions, the non-symmetrical PK1 tensor in quasi-linear form \\eqref{eq:HE_quasilinear} yields a Jacobian matrix of dimension $m=3+9$ while systems \\eqref{eq:HPP_quasi-linear} and \\eqref{eq:EP_quasilinear} involving the Cauchy tensor, lead to $m=3+6$.\nThe characteristic analysis of system \\eqref{eq:normal_quasi} is therefore equivalent to that of linear combinations of matrices $\\Absf^\\alpha$. With the previous developments, the Jacobian matrix reads:\n\\begin{equation}\n  \\label{eq:jacobian_generic}\n  \\Jbsf=-\\matrice{\\tens{0}^2 & \\frac{1}{\\rho_0}\\tens{I}\\otimes \\vect{N} \\\\  \\tilde{\\Hbb}\\cdot\\vect{N} & \\tens{0}^4 }\n\\end{equation}\nin which $\\tilde{\\Hbb}$ is either the hyperelastic or elastoplastic tangent modulus, or the elastic stiffness tensor depending on the case considered. The characteristic structure of the problem is given by the $m$ eigenvalues $c_K$ and associated left eigenvectors $\\Lcb^K= \\[ \\vect{v}^K \\: , \\: \\tens{S}^K \\]$ of the Jacobian matrix satisfying:\n\\begin{equation}\n  \\label{eq:eigen_system}\n  \\vect{\\Lc}^K \\(\\Jbsf - c_K \\Ibsf\\) = \\vect{0}\n\\end{equation}\nwhere $\\Ibsf$ is the $m\\times m$ identity matrix. 
Thus, for non-zero eigenvalues one gets:\n\\begin{subequations}\n  \\begin{alignat}{1}\n    \\label{eq:eigen_left_stress}\n    & -\\tens{S}^K:\\(\\tilde{\\Hbb}\\cdot  \\vect{N}\\) - c_K  \\vect{v}^K =\\vect{0} \\\\\n    \\label{eq:eigen_left_velo}\n    & -\\frac{1}{\\rho_0}\\vect{v}^K\\otimes\\vect{N} - c_K \\tens{S}^K = \\tens{0}\n  \\end{alignat}\n\\end{subequations}\nSubstitution of $\\tens{S}^K$ obtained from \\eqref{eq:eigen_left_velo} in \\eqref{eq:eigen_left_stress} leads to:\n\\begin{equation}\n  \\label{eq:acoustic_eigen}\n (\\vect{v}^K\\otimes\\vect{N}):\\(\\tilde{\\Hbb}\\cdot  \\vect{N}\\) - \\rho_0c_K^2 \\vect{v}^K = \\tens{0}\n\\end{equation}\nwhich is the left eigensystem of the acoustic tensor $A_{ij}=N_\\alpha \\tilde{H}_{i\\alpha j \\beta}  N_\\beta$. Due to the symmetry of $\\tens{A}$, system \\eqref{eq:acoustic_eigen} is equivalent to the right eigensystem:\n\\begin{equation}\n  \\label{eq:acoustic_eigen_system_lambda}\n  \\(  N_\\alpha \\tilde{H}_{i\\alpha j \\beta}  N_\\beta - \\rho_0 c_K^2 \\delta_{ij} \\) v_j^K =0\n\\end{equation}\nor alternatively, with the eigenvalues $\\omega_p$ and associated eigenvectors of the acoustic tensor $\\vect{l}^p\\: \\: (p=1,2,3)$:\n\\begin{equation}\n  \\label{eq:acoustic_eigen_system}\n   \\( \\tens{A} - \\omega_p \\tens{I} \\) \\cdot \\vect{l}^p  = \\vect{0}\n\\end{equation}\nThe condition for system \\eqref{eq:normal_quasi} to be hyperbolic (real eigenvalues and independent eigenvectors) is thus ensured by the positive definiteness of the acoustic tensor, also known as the \\textit{strong ellipticity} condition \\cite{Foundation_of_elasticity}:\n\\begin{equation}\n  \\label{eq:strong_ellipticity}\n  (\\vect{n}\\otimes \\vect{N}): \\tilde{\\Hbb}: (\\vect{n}\\otimes \\vect{N}) > 0 \\quad \\forall \\vect{N},\\vect{n} \\in \\Rbb^3 \\: ; \\: \\vect{N},\\vect{n} \\ne \\vect{0}\n\\end{equation}\nIf the condition holds, the acoustic tensor admits $3$ couples of eigenvalue--eigenvector $\\{\\omega_p,\\vect{l}^p\\}$ leading to $6$ couples $\\{c_K,\\Lcb^K\\}$ for the Jacobian matrix, the $6$ other eigenvalues being null \\cite{Kluth}.\nFor instance, for isotropic linear elasticity the acoustic tensor reduces to $\\tens{A}=(\\lambda+\\mu)\\vect{N}\\otimes\\vect{N}+\\mu\\tens{I}$, whose eigenvalues $\\lambda+2\\mu$ (eigenvector $\\vect{N}$) and $\\mu$ (double) yield the classical longitudinal and transverse wave speeds $\\sqrt{(\\lambda+2\\mu)/\\rho_0}$ and $\\sqrt{\\mu/\\rho_0}$.\nThe couples $\\{c_K,\\Lcb^K\\}$ are referred to as \\textit{left characteristic fields}.\nThe left eigenvectors associated with non-zero eigenvalues of the Jacobian matrix are obtained by using equation \\eqref{eq:eigen_left_velo} so that the following $6$ eigenfields of the quasi-linear form \\eqref{eq:normal_quasi} can be defined:\n\\begin{equation}\n  \\label{eq:left_eigenfields}\n    \\left\\lbrace \\pm \\sqrt{\\frac{\\omega_p}{\\rho_0}} ; \\quad \\[\\: \\pm \\rho_0\\sqrt{\\frac{\\omega_p}{\\rho_0}} \\vect{l}^p , -\\vect{l}^p\\otimes \\vect{N} \\:\\]  \\right\\rbrace ,\\quad p=1,2,3\n\\end{equation}\nFinally, one has to find six independent left eigenvectors associated with the null eigenvalue of multiplicity $6$ by solving equation \\eqref{eq:eigen_left_stress} for the null eigenvalue:\n\\begin{equation}\n  \\label{eq:left_null_eigenvectors}\n  \\tens{S}^K:\\(\\tilde{\\Hbb}\\cdot  \\vect{N}\\) =\\vect{0},\\quad K=1,...,6\n\\end{equation}\nFollowing the same procedure for right eigenvectors $\\Rcb^K=\\matrice{\\vect{v}^K \\\\ \\tens{S}^K}$, the Jacobian matrix right eigensystem reads:\n\\begin{subequations}\n  \\begin{alignat}{1}\n    \\label{eq:eigen_right_stress}\n    & -\\frac{1}{\\rho_0}\\tens{S}^K\\cdot  \\vect{N} - c_K  \\vect{v}^K =\\vect{0} \\\\\n    \\label{eq:eigen_right_velo}\n    & -\\tilde{\\Hbb}:\\(\\vect{v}^K\\otimes\\vect{N}\\) - c_K \\tens{S}^K = \\tens{0}\n  
\\end{alignat}\n\\end{subequations}\nwhich leads to the \\textit{right characteristic fields} associated with the non-null eigenvalues:\n\\begin{equation}\n  \\label{eq:right_eigenfields}\n  \\left\\lbrace \\pm \\sqrt{\\frac{\\omega_p}{\\rho_0}} ; \\quad \\[\\: \\pm \\sqrt{\\frac{\\omega_p}{\\rho_0}} \\vect{l}^p , -\\tilde{\\Hbb}:\\( \\vect{l}^p\\otimes \\vect{N}\\) \\:\\]  \\right\\rbrace ,\\quad p=1,2,3\n\\end{equation}\nIn equation \\eqref{eq:right_eigenfields}, $\\{\\omega_p,\\vect{l}^p\\}$ still denotes the eigenfields of the acoustic tensor. Moreover, the $6$ independent right eigenvectors associated with the zero eigenvalue required to complete the set of right characteristic fields must satisfy:\n\\begin{equation}\n  \\label{eq:right_null_eigenvectors}\n  \\tens{S}^K \\cdot  \\vect{N} =\\vect{0},\\quad K=1,...,6\n\\end{equation}\n\n\\begin{remark}\n  Since the right-hand side of equation \\eqref{eq:normal_quasi} is not involved in the characteristic analysis, linear elasticity and elasto-viscoplasticity in small strains share the same characteristic structure. \n\\end{remark}\n\n\\begin{remark}\n  \\label{rq:similarity_solution}\n  In the case of a vanishing source term $\\Scb$, the specialization of characteristic equations \\eqref{eq:PDEs_ODEs} to system \\eqref{eq:normal_quasi} leads to:\n\\begin{equation}\n  \\label{eq:characteristic_equations_homogeneous}\n  \\Lcb^K \\cdot d\\Qcb = \\vect{0},\\quad K=1,...,6\n\\end{equation}\nmeaning that the solution is constant along each characteristic straight line with slope $\\xi = c_K$. Such solutions $\\Qcb(\\xi)$ that only depend on the ray $\\xi$ are called \\textit{self-similar solutions}.\n\\end{remark}\n\n\n%%% Local Variables:\n%%% mode: latex\n%%% ispell-local-dictionary: \"american\"\n%%% TeX-master: \"../mainManuscript\"\n%%% End:\n\n\n\n\n\n", "meta": {"hexsha": "28ac1e048d01c60c6f387c9ceede5016ed99d16a", "size": 7762, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/chapter2/characteristicAnalysis.tex", "max_stars_repo_name": "adRenaud/research", "max_stars_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-18T14:52:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-18T14:52:03.000Z", "max_issues_repo_path": "manuscript/chapter2/characteristicAnalysis.tex", "max_issues_repo_name": "adRenaud/research", "max_issues_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-07T13:11:11.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-07T13:11:11.000Z", "max_forks_repo_path": "manuscript/chapter2/characteristicAnalysis.tex", "max_forks_repo_name": "adRenaud/research", "max_forks_repo_head_hexsha": "2f0062a1800d7a17577bbfc2393b084253d567f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.2110091743, "max_line_length": 427, "alphanum_fraction": 0.730224169, "num_tokens": 2512, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5616228345699321}}
{"text": "\\chapter{Algorithms}\n\\label{chap:Algorithm}\n\nThis chapter will cover the various algorithms, including interpolation, that are required for either preparing data or in the evaluation of the equations outlined in \\Cref{chap:2ndOrdCalc}.\n\n\nThe reason for the interpolation is that the number of frequencies given in the WAMIT output files (on the order of tens of frequencies) is not likely going to correspond to the number of wave frequencies actually used by HydroDyn (on the order of hundreds to thousands), nor does the WAMIT output necessarily have to be equally discretized or even complete.  The WAMIT output files may be very sparcely populated.  So, it is necessary to interpolate in order to find the missing ones.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{FFT and IFFT}\n\\label{sec:Algorithm:FFT}\n\nThe FFT (or discrete Fourier transform -- DFT) and inverse FFT used in \\HD are found in the FFTPACK version 4.1 from UCAR/NCAR.  For a given discretized function, for example the complex wave form $Z[k]$ in the frequency domain, the inverse fourier transform to the time domain can be written as:\n\\begin{equation}\n   z(t_n) = \\frac{1}{N} \\sum\\limits_{k=-\\frac{N}{2}+1}^{\\frac{N}{2}} Z[k] e^{i\\omega_k t_n} = \\Re\\left\\{\\sum\\limits_{k=1}^{N'}a_k e^{iw_k t_n}\\right\\},\n\\label{eq:IFFTofZ}\n\\end{equation}\nwhere $N$ is given in \\Cref{eq:N} as\n\\begin{equation}\n   N=\\frac{2 \\pi}{\\Delta t \\Delta \\omega} = \\frac{t_\\text{max}}{\\Delta t} = \\frac{2 \\omega_\\text{max}}{\\Delta \\omega} = 2(N'+1).\n\\label{eq:IFFT_N}\n\\end{equation}\nIn \\Cref{eq:IFFTofZ}, the expression for the first summation is what is used within \\HD and the second summation expression is used in some of Tiago's writings.  The relationship between $Z[k]$ and $a_k$ can be written as\n\\begin{equation}\n   Z[k] =\n         \\begin{dcases*}\n            \\frac{N a_k}{2}            & $k=1 ~\\ldots~ N/2-1$\\\\\n            0                          & $k=0$ and $k=N/2$\\\\\n            \\frac{N a^*_{|k|}}{2}      & $k=-N/2+1 ~\\ldots~ -1$\\\\\n         \\end{dcases*}\n\\label{eq:IFFT_Zk_ak}\n\\end{equation}\nwhere $a^*$ is the complex conjugate of $a$. \n\n\n\\subsection{Numerical Evaluation of IFFT}\nIn the evaulation of \\Cref{eq:IFFTofZ} in \\HD to yield the wave height as a function of time, the IFFT is evaluated as\n\\begin{equation}\n   z(t_n) = \\frac{1}{N} \\sum\\limits_{k=-\\frac{N}{2}+1}^{\\frac{N}{2}} Z[k] e^{i\\omega_k t_n} = \\operatorname{IFFT}\\left(Z[k]\\right) \n\\label{eq:IFFTofZ:eval}\n\\end{equation}\nwhere the IFFT is only evaluated over $k = 0 \\ldots N/2$ (the negative frequencies are evaluated internally following the relationships in \\Cref{eq:IFFT_Zk_ak}).  The normalization constant of $1/N$ is also handled by the IFFT subroutines and is set by the initialization of the IFFT.\n\nThere are some constraints imposed on what $N$ can be because of the IFFT solver used.  $N$ must be even, and preferably a product of small prime numbers for speed.  
Additionally, $Z[k=0] = 0$ and $Z[k=N/2] = 0$ must be specified.\n\n\n\\subsection{$Z[k]$ in \\HD}\nIn \\HD the complex wave form in frequency space, $Z[k]$, is given as\n\\begin{equation}\n   Z[k] = W[k]\\sqrt{\\frac{2\\pi}{\\Delta t} S^\\text{2-sided}_\\zeta (\\omega_k)} = W[k]\\sqrt{N \\Delta \\omega S^\\text{2-sided}_\\zeta (\\omega_k)}\n\\label{eq:Zk}\n\\end{equation}\nwhere \n\\begin{equation}\n   W[k]  = \\sqrt{\\frac{N}{2}}\\sqrt{-2 \\ln \\left( U_1[k]\\right)}e^{i 2\\pi U_2[k]},\n\\label{eq:Wk}\n\\end{equation}\nthe DFT of Gaussian white noise using the Box-Muller method, and $S^\\text{2-sided}_\\zeta(\\omega_k)$ is the two-sided power spectral density (PSD) of the wave elevation per unit time.\\footnote{Note that Tiago uses $a_k = \\sqrt{2\\Delta \\omega S^\\text{1-sided}_\\zeta(\\omega_k)}e^{i 2\\pi U_k}$ which is a simplification where $U_k = U_2[k]$ and $\\sqrt{-2 \\ln\\left(U_1[k]\\right)} = \\sqrt{2}$.}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{Interpolation}\n\\label{sec:Algorithm:Interp}\nFour interpolation algorithms are required for the \\modname{WAMIT2} module: two three-dimensional and two four-dimensional interpolations.\nFor each set of 3D and 4D interpolation algorithms, a linear interpolation for full arrays and a linear interpolation for sparse arrays are needed.  Due to the complexity of implementing an interpolation over sparse arrays and time constraints in the development schedule, a placeholder will be created for it with an error message stating that an interpolation scheme for sparse arrays is not implemented at this time.  The WAMIT output file reading algorithm is developed in such a way that an unordered sparse array can be read in and stored (see \\Cref{sec:WamitOuput:Read}).\n\nThe first order wave forces calculated within the \\fname{WAMIT} module are interpolated with a linear method.  In light of this and considering time constraints on the development, we will use linear interpolation algorithms for now.  If time permits, we might investigate other interpolation algorithms, such as cubic interpolation, that produce smooth surfaces (with smoothly continuous derivatives) or that better allow for sparse data.  \n%The issue with this interpolation method is that it tends to produce overshoot which may be undesirable.  Algorithms which could minimize the effects of overshoot such as cosine fitting, radial basis function weighting, and hermite polynomial among others.  In the ideal world, one would choose an algorithm that will best fit the data.  In this case, this is not obvious considering that the general shape of the data will depend on what floating platform is used.\n\n\\subsection{3D Interpolation}\n\\label{sec:interp:3d}\n\\subsubsection{Full array interpolation}\nA three-dimensional linear interpolation routine is available in the \\modname{InflowWind} module.  This routine was written specifically for the full field wind files, so it will require some modification to generalize it.\n\n\\subsubsection{Sparse array interpolation}\n\\label{sec:interp:3d:sparse}\nDue to time constraints in the development schedule, a placeholder subroutine will be created that tells the user that this is a currently unavailable feature.  The user can then use an external data manipulation program to do the interpolation on their WAMIT output to create a full array (that can be unordered) that can be read in.  
The WAMIT output file reading algorithm will accommodate reading either a full array (both the upper and lower triangle of the QTF) or a partial array (upper half only, or a mix of upper and lower).  This routine will expect a mask array (boolean?) of identical size to the sparse array that indicates which elements of the data array are missing.\n\nBy creating this placeholder subroutine, we give ourselves the option of creating this interpolation scheme as time permits with the ability to handle a limited sparseness of the QTF array (\\emph{i.e.} no more than a two step gap in any dimension).\n\n\\subsection{4D Interpolation}\n\\label{sec:interp:4d}\n\\subsubsection{Full array interpolation}\nAt present we do not have any four-dimensional interpolation routines.  A four-dimensional linear interpolation should be fairly simple to extend from the three-dimensional one.\n\n\\subsubsection{Sparse array interpolation}\n\\label{sec:interp:4d:sparse}\nSee \\Cref{sec:interp:3d:sparse}.\n", "meta": {"hexsha": "19c56148055a5b265199d186bf8b880a726dad99", "size": 7197, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_stars_repo_name": "NWTC/HydroDyn", "max_stars_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-28T21:32:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-28T21:32:25.000Z", "max_issues_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_issues_repo_name": "NWTC/HydroDyn", "max_issues_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-09-18T11:49:28.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-22T08:33:26.000Z", "max_forks_repo_path": "Documentation/2nd_order_implementation/chaps/Chap.Algorithms.tex", "max_forks_repo_name": "NWTC/HydroDyn", "max_forks_repo_head_hexsha": "816705503bc3c9d31988f424caf551ded7b5c5eb", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-03-20T02:57:07.000Z", "max_forks_repo_forks_event_max_datetime": "2019-03-20T02:57:07.000Z", "avg_line_length": 77.3870967742, "max_line_length": 684, "alphanum_fraction": 0.7430873975, "num_tokens": 1945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173791645582, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.561622829720478}}
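As a concrete illustration of \Cref{eq:Zk,eq:Wk} and the IFFT evaluation in \Cref{eq:IFFTofZ:eval}, the short Python sketch below synthesizes a wave-elevation time series.  It is a minimal sketch only: \texttt{numpy.fft.irfft} stands in for the FFTPACK routines used in \HD{} (it likewise handles the negative frequencies and the $1/N$ normalization internally), and the two-sided PSD is a hypothetical placeholder, not a spectrum from \HD.
\begin{verbatim}
import numpy as np

N, dt = 1024, 0.25                   # N even, a product of small primes
domega = 2.0 * np.pi / (N * dt)
omega = domega * np.arange(N // 2 + 1)

def psd_two_sided(w):
    # Hypothetical two-sided wave-elevation PSD (placeholder only).
    return np.exp(-((w - 0.8) / 0.3) ** 2)

rng = np.random.default_rng(0)
u1 = 1.0 - rng.random(N // 2 + 1)    # U_1 in (0,1], avoids log(0)
u2 = rng.random(N // 2 + 1)
# W[k]: DFT of Gaussian white noise via the Box-Muller method.
W = np.sqrt(N / 2.0) * np.sqrt(-2.0 * np.log(u1)) * np.exp(1j * 2.0 * np.pi * u2)

Z = W * np.sqrt(N * domega * psd_two_sided(omega))
Z[0] = Z[N // 2] = 0.0               # constraints required by the IFFT solver

z = np.fft.irfft(Z, n=N)             # wave elevation z(t_n), length N
t = dt * np.arange(N)
\end{verbatim}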
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{bm} %for bold lower greek symbols\n\n\\title{Feed Forward Neural Network}\n\\author{Masahiro Ogawa}\n\n\\begin{document}\n\\maketitle\n\n\\section{Notation}\n\\begin{itemize}\n\\item $\\bm{x}$ : vector\n\\item $\\bm{A}$ : matrix\n\\item ${}^t\\bm{A}$ : transpose of $\\bm{A}$\n\\item $f.(\\bm{A}) = \n \\begin{pmatrix}\n  f(a_{1,1}) & f(a_{1,2}) & \\cdots & f(a_{1,n}) \\\\\n  f(a_{2,1}) & f(a_{2,2}) & \\cdots & f(a_{2,n}) \\\\\n  \\vdots  & \\vdots  & \\ddots & \\vdots  \\\\\n\n  f(a_{m,1}) & f(a_{m,2}) & \\cdots & f(a_{m,n})\n \\end{pmatrix}$ : for $m \\times n$ matrix $\\bm{A}$\n\\item $(\\bm{A}.*\\bm{B})_{i,j} = a_{i,j}b_{i,j}$\n\\item $\\bm{1}_{m\\times n} = \n  \\begin{pmatrix}\n    1 & \\cdots & 1 \\\\\n    \\vdots & \\ddots & \\vdots \\\\\n    1 & \\cdots & 1 \n  \\end{pmatrix}$ : $m \\times n$ dim\n\\item $\\bm{\\tilde{y}} = \n  \\begin{pmatrix}\n    1 \\\\\n    \\bm{y}\n  \\end{pmatrix}$ : homogeneous vector\n\n\\subsection{variables}\n\\item $n$ : data number\n\\item $N$ : total number of data\n\\item $l$ : layer number\n\\item $L$ : output layer number. thus, the total number of layer is $L+1$\n\\item $k$ : learning iteration number\n\\item $K$ : final number of the learning iteration \n\\item $J$ : cost\n\\item $f(x)$ : activation function\n\\item $\\bm{X}_0 = (\\bm{x}_{0,1},\\dots , \\bm{x}_{0,N}) = \n  \\begin{pmatrix}\n    x_{0,1,1} & \\cdots & x_{0,1,N} \\\\\n    \\vdots & \\ddots & \\vdots\\\\\n    x_{0,d_0,1} & \\cdots & x_{0,d_0,N}\n  \\end{pmatrix}$\n  : $d_0 \\times N$ dim : input vectors\n\\item $\\bm{B} = (\\bm{b}_1,\\dots , \\bm{b}_N) =\n  \\begin{pmatrix}\n    b_{1,1} & \\cdots & b_{1,N}\\\\\n    \\vdots & \\ddots & \\vdots\\\\\n    b_{d_L,1} & \\cdots & b_{d_L,N}\n  \\end{pmatrix}\n  $ : $d_L \\times N$ dim : instruction signals\n\\item $\\bm{\\tilde{W}}_l(k) =\n  (\\bm{\\tilde{w}}_{l,1}(k),\\dots , \\bm{\\tilde{w}}_{l,d_{l+1}}(k)) = \n  \\begin{pmatrix}\n    \\tilde{w}_{l,0,1}(k) & \\cdots & \\tilde{w}_{l,0,d_{l+1}}(k)\\\\\n    \\vdots & \\ddots & \\vdots\\\\\n    \\tilde{w}_{l,d_l,1}(k) & \\cdots & \\tilde{w}_{l,d_l,d_{l+1}}(k)\n  \\end{pmatrix}\n  $ :\n  weights : $(d_l+1) \\times d_{l+1}$ dim \n\\end{itemize}\n\n\n\\section{Input}\nparameters\n\\begin{itemize}\n\\item $L$ : output layer number.\n\\item $\\{ d_l \\}_{l=0}^L$ : neuron numbers in each layer $l$\n\\item $\\{\\bm{\\tilde{W}}_l(0)\\}_{l=0}^L \\mid \\bm{\\tilde{W}}_l(0) =\n  (\\bm{\\tilde{w}}_{l,1}(0),\\dots , \\bm{\\tilde{w}}_{l,d_{l+1}}(0))$ :\n  $(d_l+1) \\times d_{l+1}$ dim : initial weights\n\\item $\\rho$ : learning rate parameter\n\\item $T_e$ : error threshold\n\\item $T_K$ : threshold of the maximum number of iteration\n\\end{itemize}\n\ndata \n\\begin{itemize}\n\\item $\\bm{X}_0 = (\\bm{x}_{0,1},\\dots , \\bm{x}_{0,N})$ : $d_0 \\times N$ dim : input vectors\n\\item $\\bm{B} = (\\bm{b}_1,\\dots , \\bm{b}_N)$ : $d_L \\times N$ dim : instruction signals\n\\end{itemize}\n\n\n\\section{Output}\n\\begin{itemize}\n\\item $\\{\\bm{\\tilde{W}}_l(K)\\}_{l=0}^L \\mid \\bm{\\tilde{W}}_l(K) = (\\bm{\\tilde{w}}_{l,1}(K),\\dots ,\n  \\bm{\\tilde{w}}_{l,d_{l+1}}(K))$ : $(d_l+1) \\times d_{l+1}$ dim :\n  final (after $K$th iteration) learned weights\n\\item $\\bm{Y}_L = (\\bm{y}_{L,1},\\dots , \\bm{y}_{L,N})$ :\n  $dL \\times N$ dim : output vectors\n\\item $J = \\sum_{n=1}^N J_n$\n  : cost, final error\n\\end{itemize}\n\n\\section{Algorithm(Iterative version, Stochastic Gradient descent)}\n\\begin{figure}\n  
\\centering\n  \\includegraphics[width=1.0\\textwidth]{neuralnet_architecture}\n  \\caption{Neural network architecture}\n  \\label{fig:nn}\n\\end{figure}\n\\subsection{Iteration}\nIterate the feed forward and back propagation algorithms for each input\ndatum until the error becomes smaller than the threshold or the\niteration number exceeds the\nmaximum number of iterations.\nThat is, iterate until\n\\begin{equation}\n  J < T_e \\mbox{, or } k > T_K.\n\\end{equation}\n\nEach iteration updates the weights as\n\\begin{eqnarray}\n  \\tilde{w}_{lij}(k+1) \n  = \\tilde{w}_{lij}(k) - \\rho \\frac{\\partial J_n}{\\partial\n    \\tilde{w}_{lij}(k)} \\\\\n \\iff \\bm{\\tilde{W}}_l(k+1) \n  = \\bm{\\tilde{W}}_{l}(k) - \\rho \\frac{\\partial J_n}{\\partial\n    \\bm{\\tilde{W}}_{l}(k)} \\label{eq:weight_update}\n\\end{eqnarray}\nFrom this equation, once we can compute $\\frac{\\partial J_n}{\\partial\n  \\bm{\\tilde{W}}_{l}(k)}$, we can update the weights.\nTherefore we concentrate on computing $\\frac{\\partial J_n}{\\partial\n  \\bm{\\tilde{W}}_{l}(k)}$ below.\n\n\\subsection{Feed Forward}\nThe relations below hold for all data and all iterations,\nso we omit the data number $n$ and the iteration number $k$, writing\ne.g.\\ $\\bm{x}_{l,n} = \\bm{x}_l, \\bm{\\tilde{W}}_l(k) = \\bm{\\tilde{W}}_l$.\n\\begin{eqnarray}\n  \\bm{x}_l = {}^t\\bm{\\tilde{W}}_{l-1} \\bm{\\tilde{y}}_{l-1} \\\\\n  \\bm{y}_l = f.(\\bm{x}_l)\n\\end{eqnarray}\n\n\\subsubsection{activation function for hidden layers}\nWe can choose either of the two functions below as the activation function.\nSince $\\tanh(x) = 2(1+e^{-2x})^{-1} - 1$, the two choices differ only by an affine transformation and behave essentially the same.\n\\begin{eqnarray}\n  f(x) = \n  \\begin{cases}\n    (1+e^{-x})^{-1} \\mbox{ : logistic function}\\\\\n    \\tanh(x) \n  \\end{cases} \\label{eq:activation hidden}\n\\end{eqnarray}\n\n\\subsubsection{activation function for an output layer}\nTo fit the output values as desired, we have to choose an appropriate activation\nfunction for the output layer.\n\\begin{eqnarray}\n  f(x) = \n  \\begin{cases}\n    x &\\mbox{: identity function for regression}\\\\\n    (1+e^{-x})^{-1} &\\mbox{: logistic function for [0,1] output binary\n    classification}\\\\\n  \\frac{\\exp(x)}{\\sum_j \\exp(x_{Lj})} &\\mbox{: soft max function for multiclass\n  classification}\\\\\n    \\tanh(x) &\\mbox{: for [-1,1] output}\n  \\end{cases} \\label{eq:activation out}\n\\end{eqnarray}\nNote that we separate the binary and multiclass cases because, with a single output unit, the softmax function would always yield $f(x)=1$.\n\n\\subsection{Back Propagation}\nIn this section, we compute $\\frac{\\partial J_n}{\\partial\n  \\bm{\\tilde{W}}_{l-1}(k)}$ instead of $\\frac{\\partial J_n}{\\partial\n  \\bm{\\tilde{W}}_{l}(k)}$, purely for index convenience.\nWe also omit $n$, writing $J_n = J$, in this section.\n\n\\begin{eqnarray}\n  \\frac{\\partial J}{\\partial \\tilde{w}_{l-1,ij}}\n  &=& \\frac{\\partial J}{\\partial x_{lj}} \\frac{\\partial x_{lj}}{\\partial\n    \\tilde{w}_{l-1,ij}} \\\\\n  &=& \\epsilon_{lj} \\frac{\\partial (\\sum_m{\\tilde{w}_{l-1,mj}\n      \\tilde{y}_{l-1,m}})}{\\partial \\tilde{w}_{l-1,ij}}\n  \\hspace{5mm} \\mid \\epsilon_{lj} :=  \\frac{\\partial J}{\\partial x_{lj}}\\\\\n  &=& \\epsilon_{lj} \\tilde{y}_{l-1,i}\\\\\n  \\iff \\frac{\\partial J}{\\partial \\bm{\\tilde{W}}_{l-1}}\n  &=& \\bm{\\tilde{y}}_{l-1} {}^t\\bm{\\epsilon_{l}}\\\\\n  \\Rightarrow \\bm{\\tilde{W}}_{l-1}(k+1) &=& \\bm{\\tilde{W}}_{l-1}(k) - \\rho \\bm{\\tilde{y}}_{l-1}{}^t\\bm{\\epsilon}_l\\\\\n  \\epsilon_{lj} &=&  \\frac{\\partial J}{\\partial x_{lj}}\\\\\n  &=& 
\\frac{\\partial J}{\\partial y_{lj}} \\frac{\\partial\n    y_{lj}}{\\partial x_{lj}}\\\\\n  &=& \\frac{\\partial J}{\\partial y_{lj}} \\frac{\\partial\n    f(x_{lj})}{\\partial x_{lj}}\n\\end{eqnarray}\n\n\\subsubsection{back propagation for hidden layers}\n\\begin{eqnarray}\n  \\epsilon_{lj} \n  &=& \\frac{\\partial J}{\\partial y_{lj}} \\frac{\\partial\n    f(x_{lj})}{\\partial x_{lj}}\\\\\n  &=& \\sum_h \\frac{\\partial J}{\\partial x_{l+1,h}} \\frac{\\partial\n      x_{l+1,h}}{\\partial y_{lj}} f'({x}_{lj})\\\\\n  &=& \\sum_h \\epsilon_{l+1,h} \n    \\frac{\\partial(\\sum_m w_{lmh}y_{lm} + w_{l0h})}{\\partial y_{lj}}  f'({x}_{lj})\\\\\n  &=& \\sum_h \\epsilon_{l+1,h} w_{ljh}  f'({x}_{lj})\\\\\n  \\Rightarrow \n  \\bm{\\epsilon}_l &=& (\\bm{W}_l\\bm{\\epsilon}_{l+1}) .* f'.(\\bm{x}_l)\\\\\n  &=&\n  \\begin{cases}\n    (\\bm{W}_l \\bm{\\epsilon}_{l+1}).*(\\bm{y}_l-\\bm{y}_l.*\\bm{y}_l)\n    &\\mbox{if } f(x)=(1+e^{-x})^{-1}\\\\\n    (\\bm{W}_l \\bm{\\epsilon}_{l+1}).*(\\bm{1}_{d_l}-\\bm{y}_l.*\\bm{y}_l)\n    &\\mbox{if } f(x)=\\tanh(x)\n  \\end{cases}\n\\end{eqnarray}\n\n\\subsubsection{back propagation for an output layer}\nTo compute the back propagation for an output layer, we need a concrete cost function form.\nBecause of this, we have to choose an appropriate cost function form for each\nproblem, as below.\n\\begin{eqnarray}\n  J = \n  \\begin{cases}\n    \\frac{1}{2} \\lVert \\bm{y}_L-\\bm{b} \\rVert ^2  &\\mbox{regression }\\\\ \n    -(b \\ln y_L + (1-b)\\ln(1-y_L)) &\\mbox{binary classification} \\\\\n    -{}^t\\bm{b} \\ln \\bm{y}_L &\\mbox{multiclass classification} \n  \\end{cases} \\label{eq:J}\n\\end{eqnarray}\n\nUsing the cost (\\ref{eq:J}) and the activation function (\\ref{eq:activation out}),\n$\\epsilon_L$ can be computed as below.\nNote that $f(x_{Lh})$ is related to $x_{Lj}$, so we have to think about the term\n$\\frac{\\partial f(x_{Lh})}{\\partial x_{Lj}}$.\n\\begin{eqnarray}\n  \\epsilon_{Lj} \n  &=& \\sum_h \\frac{\\partial J}{\\partial y_{Lh}} \\frac{\\partial\n    f(x_{Lh})}{\\partial x_{Lj}}\\\\\n  &=&\n  \\begin{cases}\n    \\sum_h (\\frac{\\partial (1/2 \\lVert \\bm{y}_L-\\bm{b} \\rVert ^2)}{\\partial\n      y_{Lh}}) \\frac{\\partial x_{Lh}}{\\partial x_{Lj}}\n    &\\mbox{regression }\\\\\n    -(\\frac{b}{y_L} - \\frac{1-b}{1-y_L}) \\frac{\\partial}{\\partial\n      x_{L}}(1+e^{-x_L})^{-1}   &\\mbox{binary classification}\\\\\n    -\\sum_h \\frac{b_h}{y_{L,h}} \\frac{\\partial}{\\partial\n      x_{Lj}}(\\frac{\\exp(x_{Lh})}{\\sum_i \\exp(x_{Li})})\n    &\\mbox{multiclass classification}\n  \\end{cases} \\\\\n  &=&\n  \\begin{cases}\n    \\sum_h (y_{Lh}-b_h) \\delta_{h,j}\\\\\n    -(\\frac{b}{y_L} - \\frac{1-b}{1-y_L})(y_L(1-y_L))\\\\\n    -\\sum_h \\frac{b_h}{y_{L,h}} (\\delta_{h,j}y_{Lj} - y_{Lh}y_{Lj})\n  \\end{cases}\\\\\n  &=&  y_{Lj}-b_j\n\\end{eqnarray}\nWe use the property $\\sum_h b_h = 1$ of the instruction signal in the last equation of the multiclass\nclassification case.\n\nIn all cases the result becomes $y-b$, so there is no gradient\ndiminishing problem. That is why we use these combinations of cost and\noutput-layer activation function. 
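The identity $\epsilon_{Lj} = y_{Lj}-b_j$ can be sanity-checked numerically. The short Python sketch below (an illustration with arbitrary sizes, not part of the algorithm itself) compares the analytic gradient for the softmax/cross-entropy pairing against centered finite differences:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)              # pre-activations x_L
b = np.zeros(5); b[2] = 1.0         # one-hot instruction signal

def J(x):
    y = np.exp(x) / np.exp(x).sum() # softmax output
    return -np.dot(b, np.log(y))    # multiclass cost

y = np.exp(x) / np.exp(x).sum()
analytic = y - b                    # eps_L from the derivation above

h, numeric = 1e-6, np.zeros(5)
for j in range(5):                  # centered finite differences
    e = np.zeros(5); e[j] = h
    numeric[j] = (J(x + e) - J(x - e)) / (2.0 * h)

print(np.max(np.abs(analytic - numeric)))   # on the order of 1e-10
\end{verbatim}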
\n\n\\subsubsection{final result}\nFinally, we obtain the results below.\n\\begin{eqnarray}\n  \\bm{\\epsilon}_l &=&\n  \\begin{cases}\n    \\bm{y}_L - \\bm{b}  &\\mbox{if } l=L \\\\\n    (\\bm{W}_l\\bm{\\epsilon}_{l+1}) .* f'.(\\bm{x}_l) &\\mbox{otherwise}\n  \\end{cases} \\\\\n  &=& \n  \\begin{cases}\n    \\bm{y}_L - \\bm{b} &\\mbox{if } l=L\\\\\n    (\\bm{W}_l \\bm{\\epsilon}_{l+1}).*(\\bm{1}_{d_l}-\\bm{y}_l.*\\bm{y}_l)  &\\mbox{if } f(x)=\\tanh(x)\n  \\end{cases} \\\\\n  \\bm{\\tilde{W}}_{l-1}(k+1) &=& \\bm{\\tilde{W}}_{l-1}(k) - \\rho\\ \\bm{\\tilde{y}}_{l-1}{}^t\\bm{\\epsilon}_l\n\\end{eqnarray}\n\n\n\\section{Algorithm(Matrix version)}\n\\subsection{Iteration}\nIterate the feed forward algorithm for all input\ndata and back propagation for all data until the error becomes smaller than the threshold or the\niteration number exceeds the\nmaximum number of iterations.\nThat is, iterate until\n\\begin{equation}\n  J < T_e \\mbox{, or } k > T_K.\n\\end{equation}\n\n\\subsection{Feed Forward}\nThe relations below hold for all iterations,\nso we omit the iteration number, writing\ne.g.\\ $\\bm{\\tilde{W}}_l(k) = \\bm{\\tilde{W}}_l$.\n\\begin{eqnarray}\n  \\bm{X}_l = {}^t\\bm{\\tilde{W}}_{l-1} \\bm{\\tilde{Y}}_{l-1} \\\\\n  \\bm{Y}_l = f.(\\bm{X}_l) \n\\end{eqnarray}\n$f(x)$ is defined by (\\ref{eq:activation hidden}) and (\\ref{eq:activation\nout}).\n\n\\subsection{Back Propagation}\n\\begin{eqnarray}\nJ &=&   \n\\begin{cases}\n  \\frac{1}{2} \\mathrm{Tr} ({}^t(\\bm{Y}_L - \\bm{B})(\\bm{Y}_L - \\bm{B}))\n  &\\mbox{regression} \\\\\n  -\\mathrm{Tr} \\{\n  \\begin{pmatrix}\n    {}^t\\bm{B} \\\\\n    \\bm{1}_{1 \\times N} - {}^t\\bm{B}\n  \\end{pmatrix}\n  \\ln. ( \\bm{Y}_L \\hspace{3mm}  \\bm{1}_{N \\times 1} - \\bm{Y}_L) \\} &\\mbox{binary classification}\\\\ \n  -\\mathrm{Tr} ({}^t\\bm{B} \\ln.(\\bm{Y}_L)) &\\mbox{multiclass classification}\n  \\end{cases}\\\\\n  \\bm{E}_l = (\\bm{\\epsilon}_{l,1}, \\cdots, \\bm{\\epsilon}_{l,N}) &=&\n  \\begin{cases}\n    \\bm{Y}_L - \\bm{B}  &\\mbox{if } l=L \\\\\n    (\\bm{W}_l\\bm{E}_{l+1}) .* f'.(\\bm{X}_l) &\\mbox{otherwise}\n  \\end{cases} \\\\\n  &=&\n  \\begin{cases}\n    \\bm{Y}_L - \\bm{B} &\\mbox{if } l=L\\\\\n    (\\bm{W}_l\\bm{E}_{l+1}).*(\\bm{1}_{d_l\\times N}-\\bm{Y}_l.*\\bm{Y}_l) &\\mbox{if } f(x)=\\tanh(x)\n  \\end{cases} \\\\\n  \\bm{\\tilde{W}}_{l-1}(k+1) &=& \\bm{\\tilde{W}}_{l-1}(k) - \\rho\n  \\sum_{n=1}^N \\bm{\\tilde{y}}_{l-1,n} {}^t\\bm{\\epsilon}_{l,n}\\\\\n  &=& \\bm{\\tilde{W}}_{l-1}(k) - \\rho\\ \\bm{\\tilde{Y}}_{l-1} {}^t\\bm{E}_l\n\\end{eqnarray}\n\n\\end{document}\n", "meta": {"hexsha": "0986450f01ae3fcc01658fd531efc0aa724847eb", "size": 11677, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "machine_learning/neuralnet/doc/tex/neuralnet.tex", "max_stars_repo_name": "MasahiroOgawa/mopf", "max_stars_repo_head_hexsha": "3547a481fabbff5ac6cea529531af7c6bdee3005", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "machine_learning/neuralnet/doc/tex/neuralnet.tex", "max_issues_repo_name": "MasahiroOgawa/mopf", "max_issues_repo_head_hexsha": "3547a481fabbff5ac6cea529531af7c6bdee3005", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "machine_learning/neuralnet/doc/tex/neuralnet.tex", "max_forks_repo_name": "MasahiroOgawa/mopf", "max_forks_repo_head_hexsha": "3547a481fabbff5ac6cea529531af7c6bdee3005", "max_forks_repo_licenses": 
["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.0660660661, "max_line_length": 116, "alphanum_fraction": 0.6057206474, "num_tokens": 4730, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267118026095991, "lm_q2_score": 0.679178699175393, "lm_q1q2_score": 0.5614850466893317}}
{"text": "\\chapter{\\texorpdfstring{Relation of $T/{\\cal E}$}{T/E} to \\texorpdfstring{$D(z)$}{D(z)}} %  From Self-consistent Theory of Anderson Localization\r\n\\label{sec:appendix_TE_Dz_relation}\r\n\r\n% see also lab notebook SVN 20091230_ben_transmission_energy_derivations\r\n% Note: a significant amount of content was barrowed from /svn/research/oned/TE\\_paper\r\n\r\nThis is an expansion of Appendix section~\\ref{app:Dz_derivation}. As in that section we assume a slab geometry. The $z$ coordinate normal to the slab is separated from the perpendicular component ${\\bf \\rho}$ as ${\\bf r}=({\\bf \\rho},z)$. Again assuming no dependence on ${\\bf \\rho}$ allows us to give the ensemble-averaged diffusive flux $\\langle\\vec{J}(\\vec{r},t)\\rangle$ and the energy density $\\langle {\\cal W}(\\vec{r},t)\\rangle$ are related via \\cite{1953_Morse}\r\n\\begin{equation}\r\n\\langle\\vec{J}(\\vec{r},t)\\rangle=-D(\\vec{r})\\vec{\\nabla}\\langle {\\cal W}(\\vec{r},t)\\rangle\r\n\\label{eq:Jflux_relation}\r\n\\end{equation}\r\nThe diffusion approximation amounts to $D(\\vec{r})\\equiv D_0=c\\ell_{tmfp}/3$, where $c$ is the speed of light and $\\ell_{tmfp}$ is the transport mean free path.\r\n\r\nWe consider a 3D random medium in a shape of a slab of thickness $L$, where we explicitly  separate the coordinate $z$ normal to the slab from the perpendicular component ${\\bf \\rho}$ as ${\\bf r}=({\\bf \\rho},z)$. Under a CW plane-wave illumination at normal incidence, the dependence on ${\\bf \\rho}$ and $t$ can be neglected. \r\n\\begin{equation}\r\n\\langle\\vec{J}_z(z)\\rangle=-D(z)\\frac{d}{dz}\\langle {\\cal W}(z)\\rangle\r\n\\end{equation}\r\n\r\nIntegration over $z$ gives\r\n\\begin{equation}\r\n\\int_z^L\\frac{\\langle J_z(z^\\prime)\\rangle dz^\\prime}{D(z^\\prime)}=-\\langle {\\cal W}(L)\\rangle + \\langle {\\cal W}(z)\\rangle\r\n\\label{eq:E1_relation}\r\n\\end{equation}\r\nwhere the energy stored inside the random medium ${\\cal E}$ is formally defined as\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle =\\int_0^L\\langle {\\cal W}(z)\\rangle dz.\r\n\\label{eq:Energy_definition_relation}\r\n\\end{equation}\r\nthus\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle = \\int_0^L \\left( \\langle {\\cal W}(L)\\rangle + \\int_z^L\\frac{\\langle J_z(z^\\prime)\\rangle }{D(z^\\prime)}dz^\\prime\\right) dz\r\n\\end{equation}\r\nThe remaining work is to factor out transmission $T$ in order to find the relation between $T/{\\cal E}$ and $D(z)$. The energy density $\\langle {\\cal W}(L)\\rangle$ at the right boundary can be expressed in terms of right- and left-propagating fluxes. From the definition of diffusive flux \\cite{1953_Morse}\r\n\\begin{equation}\r\n\\langle J_{\\pm}(z)\\rangle = \\frac{c}{4} \\langle {\\cal W}(z)\\rangle \\mp \\frac{D_0}{2} \\frac{d\\langle {\\cal W}(z)\\rangle }{dz}\r\n\\label{eq:diffusive_flux_relation}\r\n\\end{equation}\r\nwhere $ \\langle J_{-}\\rangle$ and $ \\langle J_{+}\\rangle $ are the fluxes propagating along negative and positive $z$-directions respectively. 
Since $\\langle J_+(L)\\rangle=J_0T$ and $\\langle J_-(L)\\rangle=0$, using Eqs.~\\ref{eq:diffusive_flux_relation} to eliminate $D_0$ yields\r\n\\begin{equation}\r\n\\langle J_+(L)\\rangle + \\langle J_-(L)\\rangle = 2 \\frac{c}{4}\\langle {\\cal W}(L)\\rangle\r\n\\end{equation}\r\nTherefore $\\langle {\\cal W}(L)\\rangle=2J_0T/c$ and the energy can be re-written as\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle = \\int_0^L \\left( 2J_0T/c + \\int_z^L\\frac{\\langle J_z(z^\\prime)\\rangle}{D(z^\\prime)}dz^\\prime\\right) dz\r\n\\end{equation}\r\nNext, we reduce $\\langle J_z(z^\\prime)\\rangle$ to find an approximately equivalent transmission.\r\n\r\n%the boundary conditions are from Eq.~\\ref{eq:Jflux_conserv}\r\n%\\begin{equation}\r\n%J_z(z=0)=-J_0R,\\ \\, J_z(z=L)=J_0T\r\n%\\label{eq:Jflux_bc}\r\n%\\end{equation}\r\n\r\nIn the CW regime when the energy density ${\\cal W}(z)$ is stationary, $\\partial \\langle {\\cal W}(z)\\rangle/\\partial t=0$, it follows from the energy conservation condition for the flux $\\vec{J}$ and energy ${\\cal W}$\r\n\\begin{equation}\r\n\\frac{\\partial \\langle {\\cal W}(\\vec{r},t)\\rangle }{\\partial t}+\\vec{\\nabla} \\cdot \\langle\\vec{J}(\\vec{r},t)\\rangle=\r\n\\frac{c}{\\ell_g}\\langle {\\cal W}(\\vec{r},t)\\rangle+J_0 \\delta(z-z_p)\r\n\\label{eq:Jflux_conserv_relation}\r\n\\end{equation}\r\n that the $z$ component of the flux is constant for $z>z_p\\sim\\ell$. The value of the constant can be obtained from the boundary condition at $z=L$ as\r\n\\begin{equation}\r\n\\langle J_z(z)\\rangle=\\left\\{\r\n\\begin{array}{l l}\r\n\\langle J_z(L)\\rangle\\equiv J_0 \\langle T \\rangle ,&\\quad z_p<z<L\\\\\r\n\\langle J_z(0)\\rangle\\equiv -J_0 \\langle R \\rangle,&\\quad 0<z<z_p\\\\\r\n\\end{array} \\right.\r\n\\label{eq:Jfluxz_const_relation}\r\n\\end{equation}\r\nwhere $T$ ($R$) is the transmission (reflection) coefficient. As a check, by integrating Eq.~(\\ref{eq:Jflux_conserv_relation}) over the entire system we obtain the standard (passive) flux conservation $\\langle J_z(L)\\rangle -\\langle J_z(0)\\rangle =J_0 \\langle T \\rangle-(-J_0 \\langle R \\rangle)=J_0(\\langle T \\rangle+\\langle R \\rangle)=J_0$. To take advantage of the fact that $\\langle J_z(z)\\rangle$ is piecewise constant, cf. Eq.~(\\ref{eq:Jfluxz_const_relation}), we have to neglect the $0<z<z_p$ contribution. Then a constant can be substituted for $J_z(z')$,\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle = \\int_0^L \\left( 2J_0\\langle T \\rangle/c + \\int_z^L\\frac{J_0 \\langle T \\rangle }{D(z^\\prime)}dz^\\prime\\right) dz\r\n\\end{equation}\r\nThis introduces an error $\\propto z_p/L\\sim\\ell/L\\ll 1$. Factoring $T$ from the integrands,\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle =J_0\\langle T \\rangle\\int_0^L \\left( \\int_z^L\\frac{dz^\\prime}{D(z^\\prime)}+2/c\\right) dz\r\n\\label{eq:E1a_relation}\r\n\\end{equation}\r\nNote that the second term is of the same order $\\sim \\ell/L$ as the term omitted in arriving at the above expression. 
Hence, $2/c$ contribution has to be dropped as well.\r\n\\begin{equation}\r\n\\langle {\\cal E} \\rangle =J_0T\\int_0^L \\int_z^L \\frac{1}{D(z^\\prime)}dz^\\prime dz\r\n\\end{equation}\r\n\r\nTaking advantage of the system symmetry, $D(z)=D(L-z)$, the double integration can be further simplified as\r\n\\begin{eqnarray}\r\n\\displaystyle\\int_{0}^{L}\\int_{z}^{L}\\displaystyle\\frac{1}{D(z^\\prime)}dz^\\prime dz &=&\\frac{1}{2}\\displaystyle\\int_{0}^{L}\\int_{0}^{L}\\displaystyle\\frac{1}{D(z^\\prime)}dz^\\prime dz \\nonumber\\\\\r\n&=&\\frac{L}{2}\\int_{0}^{L}\\displaystyle\\frac{1}{D(z)}dz.\r\n\\label{eq:E3_relation}\r\n\\end{eqnarray}\r\nAfter normalizing the integral so that it yields unity in the case when the wave interference effects are neglected, $D(z)=D_0\\equiv c\\ell/3$, for passive media\r\n\\begin{equation}\r\n\\frac{\\langle T \\rangle}{\\langle {\\cal E} \\rangle}\\simeq\r\n\\frac{1}{J_0}\r\n\\frac{2D_0}{L^2}\r\n\\left(\r\n\\displaystyle\\frac{1}{L}\\displaystyle\\int_{0}^{L}\\displaystyle\\frac{D_0}{D(z)}dz\r\n\\right)^{-1},\r\n\\label{eq:TE_vs_D_relation}\r\n\\end{equation}\r\nWe note that in process of deriving Eq.~(\\ref{eq:TE_vs_D_relation}), we dropped the terms on the order of $\\sim\\ell/L\\ll 1$.\r\n\r\n\r\nDropping the localization corrections leaves\r\n\\begin{equation}\r\n\\frac{\\langle T \\rangle}{\\langle {\\cal E} \\rangle}\\simeq \\frac{1}{J_0} \\frac{2D_0}{L^2}\r\n\\label{eq:diffusion_te_only}\r\n\\end{equation}\r\nAny deviation from Eq.~\\ref{eq:diffusion_te_only} in passive diffusive media can be attributed to localization corrections.\r\n", "meta": {"hexsha": "8140cf045f5a3acb9cdda66e4f97192c12e0b6b1", "size": 7092, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/appendix_te_dz_relation.tex", "max_stars_repo_name": "bhpayne/physics_phd_dissertation", "max_stars_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/appendix_te_dz_relation.tex", "max_issues_repo_name": "bhpayne/physics_phd_dissertation", "max_issues_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/appendix_te_dz_relation.tex", "max_forks_repo_name": "bhpayne/physics_phd_dissertation", "max_forks_repo_head_hexsha": "646123088fdd226e8677e6f3edb8d109be96994e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6666666667, "max_line_length": 563, "alphanum_fraction": 0.7062887761, "num_tokens": 2344, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5614724808091998}}
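As a numerical illustration, the short Python sketch below evaluates the ratio of Eq.~(\ref{eq:TE_vs_D_relation}) for a model $D(z)$ profile; the profile and all parameter values are illustrative assumptions, not results from the text. With $D(z)=D_0$ it returns the diffusive value $2D_0/L^2$ (for $J_0=1$), while a symmetrically suppressed $D(z)$ lowers $T/{\cal E}$, mimicking a localization correction.
\begin{verbatim}
import numpy as np

D0, L, J0 = 1.0, 50.0, 1.0          # illustrative parameters
z = np.linspace(0.0, L, 2001)

def te_ratio(D):
    # <T>/<E> = (1/J0)*(2 D0/L^2) * [ (1/L) int_0^L D0/D(z) dz ]^(-1)
    norm = np.trapz(D0 / D, z) / L
    return (2.0 * D0 / L**2) / (J0 * norm)

print(te_ratio(np.full_like(z, D0)))    # diffusive limit: 2*D0/L^2
# Model suppression of D(z), symmetric about z = L/2 as D(z) = D(L-z):
Dz = D0 / (1.0 + 5.0 * np.exp(-((z - L / 2) / (L / 4)) ** 2))
print(te_ratio(Dz))                     # smaller than the diffusive value
\end{verbatim}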
{"text": "\r\n%          -------------------------------------------------------------------------------------------------------------\r\n%          ------------ Two General Purpose Algorithms for Counting Permutations with Consecutive Sequences ------------\r\n%          -------------------------------------------------------------------------------------------------------------\r\n%          ---------------------------------------------------- by -----------------------------------------------------\r\n%          -------------------------------------------------------------------------------------------------------------\r\n%          ----------------------------------------------- Thomas Becker -----------------------------------------------\r\n%          -------------------------------------------------------------------------------------------------------------\r\n%          ------------------------------------------------- APR 2017 --------------------------------------------------\r\n\r\n\\documentclass{article}\r\n\\usepackage{amsfonts}\r\n\\usepackage{amstext}\r\n\\usepackage{amsmath}\r\n\\usepackage{amsfonts}\r\n\\usepackage{multirow}\r\n\\usepackage{hyperref}\r\n\\usepackage{xcolor}\r\n\\hypersetup{\r\n    colorlinks,\r\n    linkcolor={red!50!black},\r\n    citecolor={blue!50!black},\r\n    urlcolor={blue!80!black}\r\n}\r\n\\usepackage{theorem}\r\n\\usepackage{enumerate}\r\n\r\n% ---------------------------------------------------------------------\r\n%\r\n% Amstex math italic bold\r\n%\r\n% ---------------------------------------------------------------------\r\n%\r\n\\font\\tencmmib=cmmib10\r\n\\font\\ninecmmib=cmmib9\r\n\\font\\sevencmmib=cmmib7\r\n\\font\\fivecmmib=cmmib5\r\n%\r\n\\newfam\\cmmibfam\r\n\\def\\mib{\\fam\\cmmibfam\\tencmmib}\r\n%\r\n% NOTE: If these fonts are not available, use the following alternate\r\n% definition of \\mib:\r\n%\\def\\mib{\\mathbf}\r\n%\r\n\\textfont\\cmmibfam=\\tencmmib\r\n\\scriptfont\\cmmibfam=\\sevencmmib\r\n\\scriptscriptfont\\cmmibfam=\\fivecmmib\r\n\r\n{\\theoremstyle{plain}\r\n  \\theorembodyfont{\\rm}\r\n  \\newtheorem{theorem}{Theorem}[section]\r\n  \\newtheorem{proposition}[theorem]{Proposition}\r\n  \\newtheorem{lemma}[theorem]{Lemma}\r\n  \\newtheorem{corollary}[theorem]{Corollary}\r\n  \\newtheorem{definition}[theorem]{Definition}\r\n  \\newtheorem{example}[theorem]{Example}\r\n  \\newtheorem{notation}[theorem]{Notation}\r\n  \\newtheorem{purpose}{Purpose of this Section}\r\n}\r\n\\renewcommand{\\thepurpose}{\\kern -5pt}\r\n\\renewcommand\\refname{References and Notes}\r\n\r\n\\def\\proof{\\par \\noindent              %--- PROOF ----------------------\r\n\\mbox{\\bf Proof}\\ \\,}                  %\r\n\\def\\endproof{\\mbox{$\\Box$} \\par }     %--- END OF PROOF ---------------\r\n\r\n% Use vertical space instead of indent to start a paragraph\r\n%\r\n\\setlength{\\parindent}{0pt}\r\n\\setlength{\\parskip}{1ex plus 0.5ex minus 0.2ex}\r\n\r\n\\begin{document}\r\n\\author{Thomas Becker\\\\ {\\small \\it thomas@greaterthanzero.com}}\r\n\\title{Two General Purpose Algorithms for Counting Permutations with Consecutive Sequences}\r\n\\date{\\small April 22, 2017}\r\n\\maketitle\r\n\r\n\\begin{abstract}\r\n  We state, prove the correctness of, and discuss the complexity of two general\r\n  purpose algorithms, one building upon the other, for counting permutations with\r\n  specified configurations of consecutive sequences. 
The algorithms are based on\r\n  a theorem that describes how counting permutations with consecutive sequences can\r\n  be reduced to counting permutations with no consecutive sequences at all.\r\n\\end{abstract}\r\n\r\n\\section{Preface}\r\nCombinatorial mathematics is not my specialty as a mathematician. However, I recently wrote\r\na rather lightweight blog post \\cite{BlogPost} on the subject of random shuffle mode on today's\r\nmusic players. In the process, I needed to know the number of permutations with certain\r\nconfigurations of consecutive sequences. The answers to many of my questions were readily\r\navailable on the Web (see e.g. \\cite{Oeis}),\r\nbut there were also some questions to which I did not find the answer. Therefore, I\r\nwrote some general-purpose algorithms---based on one core mathematical theorem---to conveniently\r\ncalculate the numbers that I needed. The algorithms are available on github \\cite{Algos}.\r\nI checked the results against brute force calculations, and, to the extent possible, against\r\nthe Online Encyclopedia of Integer Sequences \\cite{Oeis} for small numbers of elements.\r\nBut since the correctness of the algorithms\r\nis far from obvious, I also felt that formal mathematical correctness proofs should be available.\r\nThat is the reason why I wrote this paper. It is quite possible that there is nothing here that\r\ncombinatorial mathematicians don't already know, but again, I was not able to find everything I\r\nneeded online. Any feedback, particularly regarding references to\r\nthe literature, would be much appreciated.\r\n\r\n\\section{Introduction}\r\n\r\n\\begin{notation}\r\n  If $n$ is an integer with $n \\geq 1$, we write ${\\mathcal P}_n$ for the set of all permutations of the integers\r\n  $1, 2, \\ldots, n$.\r\n\\end{notation}\r\n\r\n\\begin{definition}\r\n  Let $P\\in {\\mathcal P}_n$, and let $k \\geq 2$.\r\n  A {\\bf consecutive sequence} of length $k$ in $P$ is a contiguous subsequence of $k$ consecutive\r\n  integers in $P$, that is, a contiguous subsequence of the form $(i, i+1, \\ldots, i+k-1)$.\r\n  A consecutive sequence is called {\\bf maximal} if it is not a subsequence of a consecutive sequence\r\n  of greater length.\r\n\\end{definition}\r\n\r\nThere is a vast body of work regarding the count of permutations that have a specified configuration of\r\nconsecutive sequences, such as permutations having a certain number of consecutive pairs or triples\r\n\\cite{OeisRefs}. In this paper, we state, prove the correctness of, and discuss the complexity of two general\r\npurpose algorithms, one based upon the other, for\r\ncounting permutations whose maximal consecutive sequences are described in certain ways. The need\r\nfor these algorithms, which are available under the MIT license \\cite{Algos}, arose from the author's curiosity\r\nabout the behavior of random shuffle mode on today's music players \\cite{BlogPost}. The algorithms do not\r\nuse the (more generally applicable) inclusion-exclusion principle that is often employed for counting\r\npermutations with certain properties. Instead, they rely on a technique of reducing the problem of counting\r\npermutations with consecutive sequences to the problem of counting permutations with no consecutive sequences\r\nat all. 
This technique is a generalized form of an argument that the author encountered in a Quora post by Jed Yang\r\n\\cite{JedYang}.\r\n\r\nOur core algorithms deal with the number of permutations having certain configurations of\r\n{\\it maximal} consecutive sequences. However, as we will show, they can be employed to answer questions\r\nregarding just consecutive sequences as well.\r\n\r\nThe organization of the paper and the key results are as follows. In Theorem \\ref{theorem_count_by_length_and_initial_element},\r\nwe give an auxiliary algorithm on which the two main algorithms are based. It calculates the number of\r\npermutations that meet a specification of maximal consecutive sequences by initial elements and lengths.\r\nThe theorem reduces that number of permutations to the number of permutations of fewer elements that\r\nhave no consecutive sequences at all.\r\n\r\nBuilding on Theorem \\ref{theorem_count_by_length_and_initial_element}, Theorem \\ref{theorem_count_by_length_and_count}\r\nprovides an algorithm to count the permutations whose maximal consecutive sequences have been specified by\r\nstating how many exactly of each length there should be. Using that algorithm, one may, for example,\r\ncalculate the number of permutations that have exactly three consecutive pairs, none of which are linked\r\nto form a consecutive triple (specify ``three maximal consecutive sequences of length two, zero maximal\r\nconsecutive sequences of any other length''). The complexity of the algorithm is $O(n\\cdot m)$, where\r\n$m$ is the number of lengths for which the number of maximal consecutive sequences has been specified\r\nas greater than zero.\r\n\r\nBuilding on top of that, Theorem \\ref{theorem_iterating_over_specs_by_count_by_length_and_count}\r\ndescribes a generic, customizable algorithm\r\nthat iterates over specifications of maximal consecutive sequences by lengths and counts.\r\nA user-supplied function decides if permutations with a given specification should be included in the count or not.\r\nFor example, the user-supplied function could accept only configurations that specify a non-zero count for maximal\r\nsequences of length two and a zero count for all other lengths.\r\nThe result would be the number of permutations that have any number of consecutive pairs, but no linked pairs\r\nthat form larger consecutive sequences. This algorithm is of course no more than a ``glorified brute force\r\nalgorithm'': instead of iterating over permutations, we iterate over specifications of maximal consecutive sequences\r\nby lengths and counts. While this is a dramatic improvement (see Section \\ref{sec_iterating} for details), it\r\nis a brute force approach nonetheless. However, our implementation lets the user exploit the fact that oftentimes,\r\nnot all specifications of maximal consecutive sequences by lengths and counts need to be looked at.\r\nThis can lead to vastly improved complexity in many cases. In the example that we just mentioned, the iteration\r\nbecomes linear in $n$.\r\n\r\nFinally, the last section of this paper discusses how to use our core algorithms, which deal with\r\nmaximal consecutive sequences, to answer questions regarding just plain consecutive sequences.\r\n\r\n\\section{Specifying Maximal Consecutive Sequences by Lengths And Initial Elements}\r\n\r\n\\begin{definition} \\label{def_mcs_spec_by_length_and_initial_element}\r\n  Let $n$ be an integer with $n\\geq 1$. 
An {\\bf MCS-specification by lengths and initial elements} for $n$ is a\r\n  set of pairs of integers $$\\{\\,(a_1, k_1), (a_2, k_2), \\ldots,(a_m, k_m)\\,\\}$$ with the following properties:\r\n  \\begin{enumerate}[(i)]\r\n  \\item\r\n    $a_i \\geq 1$ and $k_i \\geq 2$ for $1\\leq i \\leq m$,\r\n  \\item\r\n    $a_i + k_i \\leq a_{i+1}$ for $1\\leq i \\leq m-1$,\r\n  \\item\r\n    $a_m + (k_m-1) \\leq n$.\r\n  \\end{enumerate}\r\n\\end{definition}\r\n\r\n\\begin{notation}\\label{notation_mcs_spec_by_length_and_initial_element}\r\n  If S is an MCS-specification by lenghts and initial elements for $n$ as in\r\n  Definition \\ref{def_mcs_spec_by_length_and_initial_element}\r\n  above, we write ${\\mathcal Q}_{(n,S)}$ for the set of all permutations\r\n  $P \\in {\\mathcal P}_n$ with the following property: for each $i$ with $1\\leq i \\leq m$, $P$ has a maximal\r\n  consecutive sequence of length $k_i$ that starts with the integer $a_i$, and $P$ has no other maximal\r\n  consecutive sequences.\r\n\\end{notation}\r\n\r\n\\begin{purpose}\r\nPresent an auxiliary algorithm, to be used in later sections,\r\nfor calculating $|{\\mathcal Q}_{(n,S)}|$ from $n$ and $S$.\r\n\\end{purpose}\r\n\r\nThe following technical lemma will be needed when we use induction on $m$ in connection with\r\nMCS-specifications by lengths and initial elements.\r\n\r\n\\begin{lemma}\\label{lemma_ind_on_m} \r\n  Let $S$ be an MCS-specification by lengths and initial elements for $n$ as in\r\n  Definition \\ref{def_mcs_spec_by_length_and_initial_element}\r\n  above, and assume that $m\\geq 1$. Then $n-(k_m-1) \\geq 1$, and\r\n  $$T = \\{\\,(a_1, k_1), (a_2, k_2), \\ldots,(a_{m-1}, k_{m-1})\\,\\}$$\r\n  is an MCS-specification by lengths and initial elements for $n-(k_m-1)$.\r\n\\end{lemma}\r\n\r\n\\proof\r\nFrom (i) and (iii) of Definition \\ref{def_mcs_spec_by_length_and_initial_element}, we may conclude that\r\n$$1 \\leq a_m \\leq n - (k_m - 1),$$ which proves the first claim of the lemma.\r\nIf $m=1$, the second claim is trivial since the empty set is an MCS-specifications by lengths\r\nand initial elements for any positive integer. Now let $m > 1$. It is clear that $T$ has properties\r\n(i) and (ii) of Definition \\ref{def_mcs_spec_by_length_and_initial_element}. Moreover, we have \r\n\\begin{eqnarray*}\r\n  a_{m-1} + (k_{m-1} - 1) & \\leq & a_m - 1 \\\\\r\n                          & \\leq & n - k_m \\\\\r\n                          & \\leq & n - (k_m-1),\r\n\\end{eqnarray*} \r\nand thus $T$ satisfies (iii) of Definition \\ref{def_mcs_spec_by_length_and_initial_element} as well.\r\n\\endproof\r\n\r\n\\begin{notation}\r\n  We let ${\\mathcal U}_n$ denote the subset of ${\\mathcal P}_n$ that consists of all permutations with no consecutive\r\n  sequences.\r\n\\end{notation}\r\n\r\nIt is well-known (see e.g. 
\\cite{JedYang}) that the cardinality of ${\\mathcal U}_n$ satisfies the recurrence relation\r\n$$\r\n|{\\mathcal U}_n| = (n-1) \\cdot |{\\mathcal U}_{n-1}| + (n-2) \\cdot |{\\mathcal U}_{n-2}|.\r\n$$\r\nThe theorem below provides the desired algorithm for calculating $|{\\mathcal Q}_{(n,S)}|$ by reducing the\r\nproblem to the calculation of $|{\\mathcal U}_r|$ for a certain $r$.\r\n \r\n\\begin{theorem}\\label{theorem_count_by_length_and_initial_element}\r\n  Let $n \\geq 1$, let $S = \\{\\,(a_1, k_1), (a_2, k_2), \\ldots,(a_m, k_m)\\,\\}$ be an\r\n  MCS-specification by lengths and initial elements for $n$, and let $k = \\sum_{i=1}^m k_i$.\r\n  Then $|{\\mathcal Q}_{(n,S)}| = |{\\mathcal U}_{n - (k - m)}|$.\r\n\\end{theorem}\r\n\r\n\\proof\r\nWe will prove the theorem by showing that there is a bijection between ${\\mathcal Q}_{(n,S)}$\r\nand ${\\mathcal U}_{n - (k - m)}$. For this, it suffices to show that there are maps\r\n$$\r\nf: {\\mathcal Q}_{(n,S)}\\rightarrow {\\mathcal U}_{n - (k - m)}\r\n\\quad\\text{and}\\quad \r\ng: {\\mathcal U}_{n - (k - m)} \\rightarrow {\\mathcal Q}_{(n,S)}\r\n$$\r\nsuch that $g\\circ f$ is the identity on ${\\mathcal Q}_{(n,S)}$ and $f\\circ g$ is the identity on\r\n${\\mathcal U}_{n - (k - m)}$. Intuitively speaking, $f$ is obtained by throwing out\r\nall elements of maximal consecutive sequences except for the initial ones, then adjusting greater\r\nelements of the permutation downward to close the gaps. The map $g$ is the reverse operation of that.\r\nFor a formal proof of the existence of these maps, we proceed by induction on $m$. For $m=0$,\r\nthe claim is trivial as $${\\mathcal U}_{n - (k - m)} = {\\mathcal U}_n = {\\mathcal Q}_{(n,S)}$$\r\nin that case. Now let $m>0$, and let\r\n$$\r\nT = \\{\\,(a_1, k_1), (a_2, k_2), \\ldots,(a_{m-1}, k_{m-1})\\,\\}.\r\n$$\r\nBy Lemma \\ref{lemma_ind_on_m}, $T$ is an MCS-specification by lengths and initial elements\r\nfor $n - (k_m - 1)$. 
This together with the induction hypothesis implies\r\nthat there is a bijection between\r\n$$\r\n{\\mathcal Q}_{(n-(k_m-1),T)}\r\n\\quad\\text{and}\\quad\r\n{\\mathcal U}_{(n-(k_m-1)) - ((k-k_m) - (m-1))} = {\\mathcal U}_{n - (k - m)}.\r\n$$\r\nTherefore, it suffices to construct maps\r\n$$\r\nf: {\\mathcal Q}_{(n,S)}\\rightarrow {\\mathcal Q}_{(n-(k_m-1),T)}\r\n\\quad\\text{and}\\quad \r\ng: {\\mathcal Q}_{(n-(k_m-1),T)} \\rightarrow {\\mathcal Q}_{(n,S)}\r\n$$\r\nsuch that $g\\circ f$ is the identity on ${\\mathcal Q}_{(n,S)}$ and $f\\circ g$ is the identity on\r\n${\\mathcal Q}_{(n-(k_m-1),T)}$.\r\nFor $P\\in {\\mathcal Q}_{(n,S)}$, let $f(P)$ be the integer sequence that is obtained from $P$ as follows:\r\n\\begin{enumerate}\r\n\\item\r\n  Strike the elements $a_m + 1, a_m+2, \\ldots, a_m+(k_m-1)$ from $P$.\r\n\\item\r\n  In the remaining sequence, replace every element $a$ that is greater than $a_m$ with $a-(k_m-1)$.\r\n\\end{enumerate}\r\nFor $Q\\in {\\mathcal Q}_{(n-(k_m-1),T)}$, first note that the integer $a_m$ occurs in the sequence $Q$\r\nbecause $a_m \\leq n-(k_m-1)$ by Definition \\ref{def_mcs_spec_by_length_and_initial_element} (iii).\r\nNow let $g(Q)$ be the integer sequence that\r\nis obtained from $Q$ by reversing the procedure that defines $f$:\r\n\\begin{enumerate}\r\n\\item\r\n Replace every element $a$ in $Q$ that is greater than $a_m$ with $a+(k_m-1)$.\r\n\\item\r\n  Augment the resulting sequence by inserting the sequence $(a_m + 1, a_m+2, \\ldots, a_m+(k_m-1))$\r\n  following the element $a_m$.\r\n\\end{enumerate}\r\nIt is easy to see that $f(P)$ contains exactly the integers $1, 2, \\ldots, n-(k_m-1)$, and $g(Q)$ contains\r\nexactly the integers $1, 2, \\ldots, n$, and therefore,\r\n$$\r\nf(P)\\in {\\mathcal P}_{n-(k_m-1)}\r\n\\quad\\text{and}\\quad\r\ng(Q)\\in {\\mathcal P}_n.\r\n$$\r\nAlso, it is immediate from the definition of $f$ and $g$ that $g\\circ f$ is the\r\nidentity on ${\\mathcal Q}_{(n,S)}$ and $f\\circ g$ is the identity on ${\\mathcal Q}_{(n-(k_m-1),T)}$.\r\nIt remains to show that\r\n$$\r\nf({\\mathcal Q}_{(n,S)}) \\subseteq {\\mathcal Q}_{(n-(k_m-1),T)}\r\n\\quad \\text{and}\\quad\r\ng({\\mathcal Q}_{(n-(k_m-1),T)}) \\subseteq {\\mathcal Q}_{(n,S)}.\r\n$$\r\nSo let $P\\in {\\mathcal Q}_{(n,S)}$. To show that $f(P)\\in {\\mathcal Q}_{(n-(k_m-1),T)}$, we must prove\r\nthat $f(P)$ has precisely the maximal consecutive sequences that $T$ specifies. Before delving into that argument,\r\nit may be helpful to visualize how $f(P)$ is obtained from $P$. Under the action of $f$, an element of the\r\nsequence $P$ may be removed, change its position, change its value, change both position and value,\r\nor change neither position nor value. The elements $a_m + 1, a_m + 2, \\ldots, a_m + (k_m -1)$, which we know\r\nare positioned consecutively, get removed. The elements that are positioned to the right of that subsequence,\r\nall the way to the end of $P$, move $k_m - 1$ positions to the left. Finally, those elements that are greater than\r\n$a_m$---and the only ones that are left are actually greater than $a_m + (k_m -1)$---are decremented by $k_m-1$.\r\nYou may also want to remind yourself that the subscript $m$ on $a_m$ is not indicative of position in\r\n$P$ or $f(P)$. It stems from the MCS-specification $S$.\r\n\r\nNow imagine the sequence $f(P)$ being split in two, with the cut being after the element\r\n$a_m$. Let's call these two pieces $P_1$ and $P_2$. 
All the integers that are members of the $m-1$ maximal\r\nconsecutive sequences in $P$ starting with $a_1, a_2, \\ldots, a_{m-1}$ are less than $a_m$. Therefore,\r\ntheir values are not changed under the action of $f$, and neither are their relative positions.\r\nTherefore, each of these sequences is present as a consecutive sequence in either $P_1$ or $P_2$. \r\nAs for the elements in between and around those sequences, in $P_1$ or $P_2$,\r\nthey are either less than or equal to $a_m$,\r\nin which case their value is unchanged under $f$, or they are greater than $a_m$, in which case they are\r\nthe result of decrementing in lockstep, by the same amount, namely, $k_m -1$. Moreover, no relative positions\r\nhave changed among any of these under the action of $f$. It follows that none of these elements\r\nhave joined any of the maximal consecutive sequences of $P$, and the only new consecutive pair that\r\ncould have formed among them would be $(a_m, a_m+1)$, but that's impossible since $a_m$ sits at the\r\nend of $P_1$. We see that the maximal consecutive sequences that we find in $P_1$ and $P_2$ are precisely those\r\nthat are specified by $T$.\r\n\r\nIt remains to show that no consecutive pair forms between the last element of $P_1$ and the first element of\r\n$P_2$ as we join the two to form $f(P)$. The last element of $P_1$ is $a_m$. The first element of $P_2$ is\r\nthe result of the effect that $f$ had on the first element following the maximal consecutive sequence\r\n$a_m, a_m + 1, \\ldots, a_m + (k_m -1)$ in $P$. That element was either less than $a_m$, in which\r\ncase its value is unchanged, or it was greater than $a_m + (k_m-1)$ and unequal to $a_m + k_m$, in which case its value\r\nwas changed to something not equal to $a_m + 1$. In either case, no consecutive pair forms at the juncture\r\nof $P_1$ and $P_2$. This concludes the proof that $f(P)\\in {\\mathcal Q}_{(n-(k_m-1),T)}$ and thus\r\n$f({\\mathcal Q}_{(n,S)}) \\subseteq {\\mathcal Q}_{(n-(k_m-1),T)}$. We leave the proof of\r\n$g({\\mathcal Q}_{(n-(k_m-1),T)}) \\subseteq {\\mathcal Q}_{(n,S)}$ to the reader, as it is little more than\r\nthe argument that we just made in reverse.\r\n\\endproof\r\n\r\nSince we know how to calculate the cardinality of ${\\mathcal U}_n$ for any $n$, the theorem above gives\r\nus an algorithm to calculate the number of permutations of $n$ integers that have maximal consecutive\r\nsequences of specified lengths with specified initial elements. However, judging from experience, that algorithm\r\nisn't very interesting. The description of consecutive sequences is just too specific. What one wants\r\nis to be able to count the permutations with consecutive sequences or maximal consecutive sequences that\r\nare specified by lengths and counts, as in, ``exactly $x$ number of consecutive triples,'' or, ``exactly\r\n$x$ number of consecutive triples and no longer consecutive sequences,'' or some such thing. This will\r\nbe achieved in the next two sections.\r\n\r\nAs for the complexity of the algorithm of Theorem \\ref{theorem_count_by_length_and_initial_element},\r\nit is clear that the classical recurrence relation for ${\\mathcal U}_n$ that we stated preceding the\r\ntheorem can be rewritten as a bottom-up computation that calculates ${\\mathcal U}_n$ in constant\r\nspace and linear time. 
Therefore, the complexity of the algorithm of\r\nTheorem \\ref{theorem_count_by_length_and_initial_element} is $O(n)$.\r\n\r\nAs an aside, let us mention that Theorem \\ref{theorem_count_by_length_and_initial_element} continues\r\nto hold if instead of specifying maximal consecutive sequences by initial element and count, we specify\r\nthem by initial position and count. This follows from the fact that for $n\\geq 1$,\r\nthe map that exchanges value and position is a bijection on ${\\mathcal P}_n$. Here, the permutation\r\n$(a_1, a_2, \\ldots, a_n)$ maps to the permutation where $i$ is the element at position $a_i$ for $1 \\leq i\\leq n$. \r\nUnder this map, maximal consecutive sequences of length $k$ with initial element $a$ map to maximal\r\nconsecutive sequences of length $k$ that start at position $a$ and vice versa.\r\n\r\n\\section{Specifying Maximal Consecutive Sequences by Lengths And Counts}\r\n\r\n\\begin{definition} \\label{def_mcs_spec_by_length_and_count}\r\n  Let $n$ be an integer with $n\\geq 1$. An {\\bf MCS-specification by lengths and counts} for $n$ is a\r\n  set of pairs of integers $$\\{\\,(k_1, c_1), (k_2, c_2), \\ldots,(k_m, c_m)\\,\\}$$ with the following properties:\r\n  \\begin{enumerate}[(i)]\r\n  \\item\r\n    $k_i \\geq 2$ and $c_i \\geq 1$ for $1\\leq i \\leq m$,\r\n  \\item\r\n    $\\sum_{i=1}^m c_i \\cdot k_i \\leq n$, and\r\n  \\item\r\n    $k_i \\neq k_j$ for $1\\leq i < j \\leq m$.\r\n  \\end{enumerate}\r\n\\end{definition}\r\n\r\n\\begin{notation}\\label{notation_mcs_spec_by_length_and_count}\r\n  If $T$ is an MCS-specification by lengths and counts for $n$ as in\r\n  Definition \\ref{def_mcs_spec_by_length_and_count}\r\n  above, we write ${\\mathcal R}_{(n,T)}$ for the set of all permutations\r\n  $P \\in {\\mathcal P}_n$ with the following property: for each $i$ with $1\\leq i \\leq m$, $P$ has exactly $c_i$\r\n  maximal consecutive sequences of length $k_i$, and $P$ has no other maximal\r\n  consecutive sequences.\r\n\\end{notation}\r\n\r\n\\begin{purpose}\r\nPresent an algorithm for calculating $|{\\mathcal R}_{(n,T)}|$\r\nfrom $n$ and $T$.\r\n\\end{purpose}\r\n\r\nIt is clear from Definitions \\ref{def_mcs_spec_by_length_and_initial_element}\r\nand \\ref{def_mcs_spec_by_length_and_count} and the corresponding Notations\r\n\\ref{notation_mcs_spec_by_length_and_initial_element} and \\ref{notation_mcs_spec_by_length_and_count} that\r\n${\\mathcal R}_{(n,T)}$ is the disjoint union of certain ${\\mathcal Q}_{(n,S)}$, namely, those\r\nwhere $S$ ranges over all those\r\nMCS-specifications by lengths and initial elements that are of the form\r\n$$S = \\{\\,(a_1, l_1), (a_2, l_2), \\ldots,(a_p, l_p)\\,\\}$$\r\nwith the properties\r\n\\begin{enumerate}[(i)]\r\n\\item\r\n  $p = \\sum_{i=1}^m c_i$, and \r\n\\item\r\n  for $1\\leq i \\leq m$, there are exactly $c_i$ many $j$ with $1\\leq j \\leq p$ and $l_j = k_i$.\r\n\\end{enumerate}\r\n\r\nSo if we denote the set of all MCS-specifications by lengths and initial elements that satisfy (i) and (ii)\r\nabove by ${\\mathcal S}_T$, then we have, as a first step towards our algorithm for calculating\r\n$|{\\mathcal R}_{(n,T)}|$,\r\n$$\r\n|{\\mathcal R}_{(n,T)}| = \\sum_{S\\in {\\mathcal S}_T} |{\\mathcal Q}_{(n,S)}|.\\eqno(1)\r\n$$\r\n\r\nTheorem \\ref{theorem_count_by_length_and_initial_element} tells us how to calculate $|{\\mathcal Q}_{(n,S)}|$,\r\nand moreover, the algorithm for doing so uses only $n$, $p$, and $\\sum_{j=1}^p l_j$.\r\nIt is immediate from properties (i) and (ii) above that\r\n$$\r\np = 
\\sum_{i=1}^m c_i\r\n\\quad \\text{and}\\quad \r\n\\sum_{j=1}^p l_j = \\sum_{i=1}^m c_i \\cdot k_i.\r\n$$\r\nSo if we let $c = \\sum_{i=1}^m c_i$ and $k = \\sum_{i=1}^m c_i \\cdot k_i$, we can extend equation (1) above\r\nto the following second step towards our algorithm for calculating $|{\\mathcal R}_{(n,T)}|$:\r\n$$\r\n|{\\mathcal R}_{(n,T)}| = |{\\mathcal S}_T| \\cdot |{\\mathcal U}_{n - (k - c)}|.\\eqno(2)\r\n$$\r\n\r\nTherefore, all that remains to do is to figure out what $|{\\mathcal S}_T|$ is:\r\nhow many MCS-specifications by lengths and initial elements\r\nare there that satisfy (i) and (ii) above? That number is fairly easy to describe: it is the number of ways\r\nin which one can choose subsets\r\n$A_1, A_2, \\ldots, A_{m}\\subset \\{\\,1, 2, \\ldots, n\\,\\}$ such that  \r\n\r\n\\begin{enumerate}[(a)]\r\n\\item\r\n  $|A_i| = c_i$ for $1\\leq i\\leq m$, and\r\n\\item\r\n  the elements of the $A_i$ are far enough apart so that each $a \\in A_i$ can\r\n  be the initial value of a maximal consecutive sequence of length $k_i$.\r\n\\end{enumerate}\r\n\r\nAt first glance, it may seem difficult to figure out the number of ways in which the $A_i$ can be chosen.\r\nThe key to making it easy lies in going back to the proof of the equality\r\n$|{\\mathcal Q}_{(n,S)}| = |{\\mathcal U}_{n - (k - c)}|$ of\r\nTheorem \\ref{theorem_count_by_length_and_initial_element}, which we just used to pass from equation (1) to\r\nequation (2). This equality was proved\r\nby exhibiting a bijection between ${\\mathcal Q}_{(n,S)}$ and ${\\mathcal U}_{n - (k - c)}$.\r\nWe mapped permutations with maximal consecutive sequences to shorter permutations without any consecutive\r\nsequences by striking from all maximal consecutive sequences all elements except for the\r\nfirst one, then renumbering the remaining elements to close the resulting gaps. The inverse operation consisted of\r\nstarting with a permutation with no consecutive sequences, then\r\nblowing up the specified initial elements to consecutive sequences by inserting and renumbering elements. At the risk\r\nof being accused of a hand-waving argument, we'll say that it is now clear that picking the subsets\r\n$A_1, A_2, \\ldots, A_{m}\\subset \\{\\,1, 2, \\ldots, n\\,\\}$ with properties (a) and (b) above is equivalent to picking\r\npairwise disjoint subsets $B_1, B_2, \\ldots, B_{m}\\subset \\{\\,1, 2, \\ldots, n - (k - c)\\,\\}$ with just property (a).\r\nThe formal proof by induction parallels the proof of Theorem \\ref{theorem_count_by_length_and_initial_element} and\r\nis simpler than the latter.\r\nCounting the ways in which the $B_i$ can be selected is elementary. The answer is\r\n$$\r\n\\prod_{i=1}^m\\binom{n-(k-c) - \\sum_{j=1}^{i-1}c_j}{c_i}\\,,\r\n$$\r\nor, equivalently,\r\n$$\r\n\\frac{(n-(k-c))!}{c_1! \\cdot c_2! \\cdot \\ldots \\cdot c_m! \\cdot (n-k)!}\\,,\r\n$$\r\nor, equivalently,\r\n$$\r\n\\frac{(n-(k-c)) \\cdot (n-(k-c) - 1)\\cdot \\ldots \\cdot (n-(k-c) - c + 1)}{c_1! \\cdot c_2! 
\\cdot \\ldots \\cdot c_m!}\\,.\r\n$$\r\n\r\nWe have thus proved the following theorem, which provides the desired algorithm for calculating\r\n$|{\\mathcal R}_{(n,T)}|$ from $n$ and $T$.\r\n\r\n\\begin{theorem}\\label{theorem_count_by_length_and_count}\r\n  Let $n \\geq 1$, let $T = \\{\\,(k_1, c_1), (k_2, c_2), \\ldots,(k_m, c_m)\\,\\}$ be an\r\n  MCS-specification by lengths and counts for $n$, let $k = \\sum_{i=1}^m c_i \\cdot k_i$,\r\n  and let $c = \\sum_{i=1}^m c_i$.\r\n  Then\r\n  $$\r\n  |{\\mathcal R}_{(n,T)}| = |{\\mathcal U}_{n - (k - c)}| \\cdot\r\n  \\prod_{i=1}^m\\binom{n-(k-c) - \\sum_{j=1}^{i-1}c_j}{c_i},\r\n  $$\r\n  or, equivalently,\r\n  $$\r\n  |{\\mathcal R}_{(n,T)}| = |{\\mathcal U}_{n - (k - c)}| \\cdot\r\n  \\frac{(n-(k-c))!}{c_1! \\cdot c_2! \\cdot \\ldots \\cdot c_m! \\cdot (n-k)!}\\,,\r\n  $$\r\n  or, equivalently,\r\n  \\begin{eqnarray*}\r\n    |{\\mathcal R}_{(n,T)}| & = & |{\\mathcal U}_{n - (k - c)}| \\cdot \\\\\r\n                           &   &\r\n    \\frac{(n-(k-c)) \\cdot (n-(k-c) - 1)\\cdot \\ldots \\cdot (n-(k-c) - c + 1)}{c_1! \\cdot c_2! \\cdot \\ldots \\cdot c_m!}\\,.\r\n    \\quad \\Box\r\n  \\end{eqnarray*}\r\n \r\n\\end{theorem}\r\n\r\nIt is clear that the complexity of the algorithm of \\ref{theorem_count_by_length_and_count}\r\nis $O(m\\cdot n)$, which, depending on how the $k_i$ are defined, can be anything between\r\n$O(n)$ and $O(n^2)$.\r\n\r\n\\section{Iterating over Specifications by Lengths And Counts}\r\n\\label{sec_iterating}\r\n\\begin{purpose}\r\n  Present a generic algorithm for counting the permutations that meet certain specifications\r\n  by lengths and counts, where a client-supplied function performs the selection of\r\n  specifications to be included in the count.\r\n\\end{purpose}\r\n\r\nNow that we know how to calculate $|{\\mathcal R}_{(n,T)}|$, that is, the number of permutations that\r\nmeet a given specification by lengths and counts, it is an obvious and rather trivial thing to write\r\nan algorithm that performs an in-place creation of every specification by lengths and counts for a\r\ngiven $n$ and lets a user-provided function decide which ones should be included in the count.\r\nTherefore, the following theorem requires no further proof.\r\n\r\n\\begin{theorem}\\label{theorem_iterating_over_specs_by_count_by_length_and_count}\r\n  Let $n \\geq 1$, let ${\\mathcal T}$ be the set of all MCS-specifications by\r\n  lengths and counts for $n$, and let $f$ be a function from ${\\mathcal T}$ to the set $\\{\\, 0, 1\\,\\}$.\r\n  Then the expression\r\n  $$\r\n  \\sum_{\\{\\,T \\in{\\mathcal T}\\,\\mid\\, f(T) = 1\\,\\}}|{\\mathcal R}_{(n,T)}|\\eqno(3)\r\n  $$\r\n  amounts to an algorithm for calculating the number of permutations that meet exactly\r\n  those MCS-specifications by lengths and counts for $n$ on which the function $f$ returns $1$.\r\n  \\endproof\r\n\\end{theorem}\r\n\r\nThe problem with this algorithm is that the number of MCS-specifications by lengths and counts\r\nfor $n$ grows faster with $n$ than one would wish. 
\r\n\r\n\\section{Iterating over Specifications by Lengths And Counts}\r\n\\label{sec_iterating}\r\n\\begin{purpose}\r\n  Present a generic algorithm for counting the permutations that meet certain specifications\r\n  by lengths and counts, where a client-supplied function performs the selection of\r\n  specifications to be included in the count.\r\n\\end{purpose}\r\n\r\nNow that we know how to calculate $|{\\mathcal R}_{(n,T)}|$, that is, the number of permutations that\r\nmeet a given specification by lengths and counts, it is an obvious and rather trivial thing to write\r\nan algorithm that performs an in-place creation of every specification by lengths and counts for a\r\ngiven $n$ and lets a user-provided function decide which ones should be included in the count.\r\nTherefore, the following theorem requires no further proof.\r\n\r\n\\begin{theorem}\\label{theorem_iterating_over_specs_by_count_by_length_and_count}\r\n  Let $n \\geq 1$, let ${\\mathcal T}$ be the set of all MCS-specifications by\r\n  lengths and counts for $n$, and let $f$ be a function from ${\\mathcal T}$ to the set $\\{\\, 0, 1\\,\\}$.\r\n  Then the expression\r\n  $$\r\n  \\sum_{\\{\\,T \\in{\\mathcal T}\\,\\mid\\, f(T) = 1\\,\\}}|{\\mathcal R}_{(n,T)}|\\eqno(3)\r\n  $$\r\n  amounts to an algorithm for calculating the number of permutations that meet exactly\r\n  those MCS-specifications by lengths and counts for $n$ on which the function $f$ returns $1$.\r\n  \\endproof\r\n\\end{theorem}\r\n\r\nThe problem with this algorithm is that the number of MCS-specifications by lengths and counts\r\nfor $n$ grows faster with $n$ than one would wish. By definition, the number of these specifications is\r\n\\begin{eqnarray*}\r\n  \\big|\\{\\,(k_1, c_1), (k_2, c_2), \\ldots,(k_m, c_m) & \\mid & k_i \\geq 2,\\ c_i \\geq 1 \\text{ for } 1\\leq i \\leq m,\\\\\r\n                                                 &      & \\sum_{i=1}^m c_i \\cdot k_i \\leq n,\\\r\n                                                          k_i\\neq k_j\\text{ for } 1\\leq i<j \\leq m \\,\\}\\big|\\,.\r\n\\end{eqnarray*}\r\nDetermining what this is looks like a non-trivial combinatorial problem unto itself.\r\nAt this point, the best we know how to do is to look at some numbers. The brute force\r\napproach to counting permutations begins to encounter performance issues at $n=12$, as\r\n$12!=4.79001600\\times 10^8$, and performance degrades quickly after that.\r\nA comparable number of MCS-specifications by lengths and counts,\r\nnamely, $4.83502844\\times 10^8$, is reached for $n=108$. Indeed, the algorithm of Theorem\r\n\\ref{theorem_iterating_over_specs_by_count_by_length_and_count} starts to noticeably slow\r\ndown for $n$ around $100$ \\cite{Performance}. Considering that $60!$ is roughly equal to the number of atoms\r\nin the known, observable universe, being able to count permutations for $n=100$ must be\r\nconsidered an achievement. On the other hand, $100$ is not exactly, shall we say, a large number.\r\n\r\nLuckily, many common questions regarding the number of permutations with certain consecutive\r\nsequences allow an optimization that can cut down dramatically on the number of MCS-specifications\r\nby lengths and counts that need to be considered in the sum in (3) above. Typically, when counting\r\npermutations with certain configurations of consecutive pairs, triples, quadruples, etc.\r\n(maximal or not necessarily maximal), one knows in advance that the selection function $f$ of\r\nTheorem \\ref{theorem_iterating_over_specs_by_count_by_length_and_count} will reject every\r\nMCS-specification by lengths and counts that specifies a non-zero count for maximal consecutive\r\nsequences of length greater than some bound and/or less than some bound. To exploit that,\r\nour implementation of the algorithm \\cite{Algos} lets the user specify a\r\nlower and/or upper bound for non-zero lengths\r\nof maximal consecutive sequences. If $l$ and $u$ are the specified bounds, then the algorithm\r\nwill include only those MCS-specifications by lengths and counts in the sum in (3) above that\r\nspecify 0 for any length less than $l$ or greater than $u$.\r\n\r\nFor example, when\r\ncalculating the number of permutations that have maximal consecutive pairs but no other maximal\r\nconsecutive sequences, that is, permutations that have any number of consecutive pairs none of\r\nwhich are linked to form longer consecutive sequences, one would tell the algorithm\r\nto only generate those MCS-specifications by lengths and counts that specify a count greater than zero\r\nfor length 2 and a zero count for all lengths greater than 2. This cuts the length of the sum in (3)\r\nabove down to $\\lfloor\\frac{n}{2}\\rfloor$.
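\r\n\r\nIn code, and with \\texttt{count\\_r} from the sketch above, this particular count is a one-liner (again an illustration, not the interface of the package \\cite{Algos}):\r\n\\begin{verbatim}\r\n# Permutations of n whose maximal consecutive sequences are pairs\r\n# only: sum |R_{(n,{(2,c)})}| over all feasible pair counts c.\r\ndef count_pairs_only(n):\r\n    return sum(count_r(n, [(2, c)]) for c in range(1, n // 2 + 1))\r\n\\end{verbatim}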
\r\n\r\nSometimes, it takes a bit of creativity to avail oneself of the lower bound/upper bound optimization. Suppose,\r\nfor example, that you wish to calculate the number of permutations that have at least one maximal consecutive\r\nsequence of length greater than or equal to $k$ for some $k$.\r\nAs it stands, this condition does not allow you to use the lower bound/upper bound optimization.\r\nBut you could also calculate the number of permutations that have no maximal consecutive sequences\r\nof length greater than or equal to $k$ and then subtract that from $n!$. Now the optimization is applicable.\r\n\r\n\\section{Permutations with Consecutive Sequences}\r\nThe algorithms we have discussed so far deal with the number of permutations having certain configurations\r\nof maximal consecutive sequences. Oftentimes, one is interested in the number of permutations having certain\r\nkinds of---not necessarily maximal---consecutive sequences. Adapting our algorithms for that purpose is\r\nrather straightforward, and in some cases trivial. For a trivial case, consider the question, ``How many\r\npermutations are there in ${\\mathcal P}_n$ that have consecutive sequences of length $k$?'' Having a\r\nconsecutive sequence of length $k$ is obviously equivalent to having a maximal consecutive sequence of\r\nlength greater than or equal to $k$. Therefore, this is the application of Theorem\r\n\\ref{theorem_iterating_over_specs_by_count_by_length_and_count} that we mentioned at the end of the\r\nprevious section.\r\n\r\nPerhaps the most commonly asked question about consecutive sequences is, ``How many\r\npermutations are there in ${\\mathcal P}_n$ that have $c$ many consecutive sequences of length $k$?''\r\nTo use our algorithms for answering this question, we need the following lemma, whose proof is trivial.\r\n\r\n\\begin{lemma}\r\n  Let $n \\geq 1$, let $T = \\{\\,(k_1, c_1), (k_2, c_2), \\ldots,(k_m, c_m)\\,\\}$ be an\r\n  MCS-specification by lengths and counts for $n$. Furthermore, let $P\\in {\\mathcal R}_{(n,T)}$,\r\n  that is, $P$ is a permutation that meets the specification $T$, and let $k \\geq 2$.\r\n  Then the number of consecutive\r\n  sequences of length $k$ in $P$ equals\r\n  $$\r\n  \\sum_{\\shortstack{$\\scriptstyle i=1$\\\\$\\scriptstyle k_i\\geq k$}}^{m}c_i\\cdot (k_i - k + 1)\\,.\r\n  $$\r\n\\end{lemma}\r\n\r\nIt is now a straightforward task to count the permutations that have exactly $c$ consecutive sequences\r\nof length $k$: apply Theorem \\ref{theorem_iterating_over_specs_by_count_by_length_and_count} with a\r\nselection function that employs the lemma above to accept exactly those MCS-specifications by lengths\r\nand counts that result in $c$ consecutive sequences. Note also that the upper bound optimization that\r\nwe mentioned following Theorem \\ref{theorem_iterating_over_specs_by_count_by_length_and_count} is\r\napplicable here. That's because we know that any MCS-specification by lengths and counts that specifies\r\na non-zero count for a length $l$ with $l > k + c - 1$ will be rejected. Therefore, we only need to\r\nlook at MCS-specifications by lengths and counts that specify a zero count for lengths greater than\r\n$k + c - 1$.
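\r\n\r\nHere, too, a short Python sketch may help. It enumerates, recursively and with the upper bound optimization just described, all MCS-specifications by lengths and counts whose non-zero counts are confined to lengths $2, \\ldots, k + c - 1$, and it applies the selection function derived from the lemma above. It reuses \\texttt{count\\_r} from before and is, once more, an illustration only.\r\n\\begin{verbatim}\r\n# Permutations of n with exactly c consecutive sequences of length k.\r\ndef count_exact(n, k, c):\r\n    total = 0\r\n    def rec(length, spec, used):\r\n        nonlocal total\r\n        if length > k + c - 1:  # all longer lengths get count zero\r\n            if sum(ci * (ki - k + 1)\r\n                   for ki, ci in spec if ki >= k) == c:\r\n                total += count_r(n, spec)\r\n            return\r\n        ci = 0  # try 0, 1, 2, ... maximal sequences of this length\r\n        while used + ci * length <= n:\r\n            rec(length + 1,\r\n                spec + ([(length, ci)] if ci else []),\r\n                used + ci * length)\r\n            ci += 1\r\n    rec(2, [], 0)\r\n    return total\r\n\\end{verbatim}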
\r\nOur algorithm package on GitHub \\cite{Algos} has a ready-to-use implementation.\r\n\r\n\\bibliographystyle{alpha}\r\n\\begin{thebibliography}{0}\r\n\r\n\\bibitem{Oeis}\r\n  \\href{http://oeis.org/}{Online Encyclopedia of Integer Sequences}\r\n\r\n\\bibitem{OeisRefs}\r\n  Even if the author's mathematical specialty were combinatorics, which it is not,\r\n  it would be foolish to attempt an overview or an even remotely complete set of references\r\n  in a short article like this. A good place to learn about existing results and start finding\r\n  references is the \\href{http://oeis.org/}{Online Encyclopedia of Integer Sequences}, specifically\r\n  the entries \\href{http://oeis.org/A010027}{A010027}, \\href{http://oeis.org/A002628}{A002628}, and\r\n  \\href{http://oeis.org/A000255}{A000255}.\r\n\r\n\\bibitem{Algos}\r\n    \\href{https://github.com/walkswiththebear/ConsecutiveSequences}{ConsecutiveSequences} on GitHub.\r\n\r\n\\bibitem{BlogPost}\r\n    \\href{http://blog.greaterthanzero.com/post/159874910652/some-mathematics-algorithms-and-probabilities}{Some Mathematics, Algorithms, and Probabilities Concerning Your Music Player\u2019s Random Shuffle Mode} at the GreaterThanZero company blog.\r\n\r\n\\bibitem{JedYang}\r\n  This \\href{https://www.quora.com/What-is-the-probability-that-a-shuffled-music-album-will-have-at-least-two-songs-in-their-original-relative-consecutive-order}{proof by Jed Yang}\r\n  on Quora is short, elegant, and self-contained.\r\n\r\n\\bibitem{Performance}\r\n  This is assuming that only a small number of MCS-specifications are accepted for the count. If many are accepted,\r\n  the slowdown can begin as early as $n=75$.\r\n\r\n\\end{thebibliography}\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "2201af9c0f04b15a577e0c9ed2dbbd69edef573d", "size": 37424, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Paper/CountingPermutationsWithConsecutiveSequences.tex", "max_stars_repo_name": "walkswiththebear/ConsecutiveSequences", "max_stars_repo_head_hexsha": "e106dd7c9bee8667f6af9ddbfa9a7efec6afc59f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-07-07T19:38:00.000Z", "max_stars_repo_stars_event_max_datetime": "2017-07-07T19:38:00.000Z", "max_issues_repo_path": "Paper/CountingPermutationsWithConsecutiveSequences.tex", "max_issues_repo_name": "walkswiththebear/ConsecutiveSequences", "max_issues_repo_head_hexsha": "e106dd7c9bee8667f6af9ddbfa9a7efec6afc59f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Paper/CountingPermutationsWithConsecutiveSequences.tex", "max_forks_repo_name": "walkswiththebear/ConsecutiveSequences", "max_forks_repo_head_hexsha": "e106dd7c9bee8667f6af9ddbfa9a7efec6afc59f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.4869431644, "max_line_length": 245, "alphanum_fraction": 0.7037195383, "num_tokens": 10220, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.561472475373545}}
{"text": "% !TEX root = ../../main.tex\n\\chapter{Reconstructing Neutrino $p_z$}\n\\label{ch:neutrinopz}\n\nAlong the beam axis, $\\hat{z}$, an unknown fraction of energy from the event escapes. This restricts the ability to constrain missing energy to the transverse $\\hat{x}-\\hat{y}$ plane, as described in \\Ch{\\ref{ch:objreco}}. \nFor the event topology of this analysis, $WV\\to l\\nu J$, the neutrino escapes the detector leaving virtually no energy deposit; thus, the missing energy in the transverse plane, $E_{\\mathrm T}^{\\rm miss}$, is attributed to the transverse momentum of the neutrino, $p_{\\mathrm{T}, \\nu}$. The invariant mass of the $WV$ system is given below in \\Eqn{\\ref{eq:wv_m}}.\n\\begin{equation}\nm_{WV}^2 = \\left(E_{J} + E_{l} + E_{\\nu}\\right)^2 - \\left(\\vec{p}_{J} + \\vec{p}_{l} + \\vec{p}_{\\nu}\\right)^2\n\\label{eq:wv_m}\n\\end{equation}\n\nSince only the transverse momentum of the neutrino is known, however, \\Eqn{\\ref{eq:wv_mt}} for the transverse mass, $m_{\\mathrm{T}}$, must be used.\n\\begin{equation}\nm_{\\mathrm{T}, WV}^2 = \\left(E_{\\mathrm{T}, J} + E_{\\mathrm{T}, l} + E_{\\mathrm{T}, \\nu}\\right)^2 - \\left(\\vec{p}_{\\mathrm{T}, J} + \\vec{p}_{\\mathrm{T}, l} + \\vec{p}_{\\mathrm{T}, \\nu}\\right)^2\n\\label{eq:wv_mt}\n\\end{equation}\n\nThe transverse mass, $m_{\\mathrm{T}, WV}$, will have a dependence on the angle between the daughter particles, and will have worse resolution than the invariant mass. It would be more desirable to be able to use \\Eqn{\\ref{eq:wv_m}} to provide a better signal to background ratio.\n\n\nIt is therefore advantageous to reconstruct the $\\hat{z}$ component of the neutrino momentum vector, $p_{z,\\nu}$, with the constraint of the $W$ boson mass, $m_W$ (80.385\\,\\GeV). Using energy-momentum 4-vectors, $p^{\\mu}=(E, \\vec{p})$, for the $W$ boson, lepton, and neutrino, it is possible to derive a quadratic expression for $p_{z,\\nu}$ in terms of $m_W$, $\\vec{p}_l$, and $p_{\\mathrm{T}, \\nu}$. \n\\begin{eqnarray}\n    m_W^2 &=&p_W^{\\mu}p_{W,\\mu} = (p^{\\mu}_l+p^{\\mu}_{\\nu})(p_{l,\\mu}+p_{\\nu, \\mu}) \\nonumber \\\\\n    &=& m_l^2 + m_{\\nu}^2 + 2(\\Cline[red]{E_l}E_{\\nu}-\\vec{p}_l\\cdot\\vec{p}_{\\nu}) \\label{eq:n_first} \\\\\n    &=& 2\\Cline[red]{\\sqrt{p_{\\mathrm{T},l}^2+p_{z,l}^2}}\\sqrt{p_{\\mathrm{T},\\nu}^2+p_{z, \\nu}^2}\\nonumber\\\\\n    &&-2\\vec{p}_{\\mathrm{T},l}\\cdot \\vec{p}_{\\mathrm{T},\\nu}-2p_{z,l}p_{z,\\nu} \\label{eq:n_el}\\\\\n\\left(\\Cline[green]{m_W^2+2\\vec{p}_{\\mathrm{T},l}\\cdot \\vec{p}_{\\mathrm{T},\\nu}}+2p_{z,l}p_{z,\\nu}\\right)^2    &=&4(p_{\\mathrm{T},l}^2+p_{z,l}^2)(p_{\\mathrm{T},\\nu}^2+p_{z, \\nu}^2) \\label{eq:n_c} \\\\\n \\Cline[green]{C}^2 + 4\\Cline[green]{C}p_{z,l}p_{z,\\nu}+\\Cline[blue]{4p_{z,l}^2p_{z,\\nu}^2}&=&   4\\left[p_{\\mathrm{T},l}^2p_{z,\\nu}^2+p_{\\mathrm{T},\\nu}^2(p_{\\mathrm{T},l}^2+p_{z,l}^2)+\\Cline[blue]{p_{z,l}^2p_{z,\\nu}^2}\\right] \\label{eq:n_cancel}\\\\\n   0&=& \\left[4p_{\\mathrm{T},l}^2\\right]p_{z,\\nu}^2 - \\left[4Cp_{z,l}\\right]p_{z,\\nu}+ \\nonumber \\\\ \n   &&\\left[4p_{\\mathrm{T},\\nu}^2(p_{\\mathrm{T},l}^2+p_{z,l}^2)-C^2\\right] \\label{eq:n_final}\n\\end{eqnarray}\n\nIn \\Eqn{\\ref{eq:n_el}}, the limit where the neutrino mass is negligible is taken. It is also prudent to take the limit where the lepton momentum is much larger than the mass, $m_l$, which drops out as well in \\Eqn{\\ref{eq:n_el}}. 
\nThe lepton energy, $E_l$, underlined in red in \\Eqn{\\ref{eq:n_first}} is then approximated to be equal to the magnitude of the lepton momentum, $\\vec{p_l}$, underlined in red in \\Eqn{\\ref{eq:n_el}}. The radical in \\Eqn{\\ref{eq:n_el}} is removed in \\Eqn{\\ref{eq:n_c}} by isolating it and squaring both sides. The green underlined quantity in \\Eqn{\\ref{eq:n_c}} is redefined as a constant, $C$, in \\Eqn{\\ref{eq:n_cancel}}. In \\Eqn{\\ref{eq:n_cancel}}, the blue underlined quantities cancel out, leading to the result in \\Eqn{\\ref{eq:n_final}}, which is a quadratic equation for $p_{z,\\nu}$.\n\nThere can be up to two solutions in this case for $p_{z,\\nu}$, which are handled as follows:\n\\begin{enumerate}\n\t\\item If a solution is complex, remove the imaginary part\n\t\\item If two unique solutions exist, compare the magnitudes and keep the smallest one\n\\end{enumerate}\n
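\nA minimal Python sketch of this procedure, using the quadratic coefficients from \\Eqn{\\ref{eq:n_final}}, is given below; the variable names are ours, the units are \\GeV, and this is an illustration of the selection logic rather than the analysis code.\n\\begin{verbatim}\nimport math\n\nM_W = 80.385  # W boson mass constraint, GeV\n\ndef neutrino_pz(pl_x, pl_y, pz_l, met_x, met_y):\n    # C = m_W^2 + 2 * pT_l . pT_nu, as defined above\n    C = M_W**2 + 2.0 * (pl_x * met_x + pl_y * met_y)\n    pt_l2 = pl_x**2 + pl_y**2\n    pt_nu2 = met_x**2 + met_y**2\n    a = 4.0 * pt_l2\n    b = -4.0 * C * pz_l\n    c = 4.0 * pt_nu2 * (pt_l2 + pz_l**2) - C**2\n    disc = b * b - 4.0 * a * c\n    if disc < 0.0:\n        return -b / (2.0 * a)  # complex: keep the real part\n    r = math.sqrt(disc)\n    s1 = (-b + r) / (2.0 * a)\n    s2 = (-b - r) / (2.0 * a)\n    return s1 if abs(s1) < abs(s2) else s2  # smaller magnitude\n\\end{verbatim}\n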
\nThe effect of these choices is minimal, as presented in \\Fig{\\ref{fig:neutrinoPz}}. In \\Fig{\\ref{fig:mass_vs_mt}}, the reconstructed invariant mass (using the $p_{z,\\nu}$ defined above) is compared to the transverse mass. It is clear that the invariant mass reproduces the signal mass with a sharper peak, smaller width, and more accurate mean value. \n\n\\begin{figure}[hp]\n \\begin{center}\n\\subfloat[]{\\includegraphics[width=0.49\\linewidth]{figures/Appendix/h_pz_low_high}\\label{fig:nz_lowhigh}}\n\\subfloat[]{\\includegraphics[width=0.49\\linewidth]{figures/Appendix/h_real_complex}\\label{fig:nz_realcomplex}}\n\\end{center}\n  \\caption[Neutrino $p_{z}$ solution comparisons]{The invariant mass is constructed for choice of $p_{z, \\nu}$ \\protect\\subref{fig:nz_lowhigh} lowest (blue) vs highest (red) solution and \\protect\\subref{fig:nz_realcomplex} real (blue) vs complex (red) solution. The ratio (blue/red) is shown for each choice. Truth invariant mass (green) is shown as a reference.  HVT $Z'$ benchmark signal at $m=2.0\\,\\TeV$ is used.}\n  \\label{fig:neutrinoPz}\n\\end{figure}\n\n\\begin{figure}[hp]\n  \\begin{center}\n\\includegraphics[width=0.6\\linewidth]{figures/Appendix/h_m_mt}\n     \\end{center}\n  \\caption[Comparison between reconstructed invariant mass and reconstructed transverse mass]{The reconstructed invariant mass, $m_{WV}$ (blue), and the reconstructed transverse mass, $m_{\\mathrm{T}, WV}$ (red) are shown for the HVT $Z'$ benchmark signal at $m=2.0\\,\\TeV$. Truth invariant mass (green) is shown as a reference. There is a significant improvement in the reconstructed mass: the peak is more centered and sharper, while the width is smaller.}\n  \\label{fig:mass_vs_mt}\n\\end{figure}\n\n\n\n", "meta": {"hexsha": "041e513cbe10ae06ed95659ec21345d5db00421e", "size": 5780, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/appendix/neutrinoPz.tex", "max_stars_repo_name": "rynecarbone/Thesis_lvJ", "max_stars_repo_head_hexsha": "46ba8b142e945d5f3103a364630f661da3405479", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-09-29T22:35:55.000Z", "max_stars_repo_stars_event_max_datetime": "2017-09-30T22:25:37.000Z", "max_issues_repo_path": "tex/appendix/neutrinoPz.tex", "max_issues_repo_name": "rynecarbone/Thesis_lvJ", "max_issues_repo_head_hexsha": "46ba8b142e945d5f3103a364630f661da3405479", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/appendix/neutrinoPz.tex", "max_forks_repo_name": "rynecarbone/Thesis_lvJ", "max_forks_repo_head_hexsha": "46ba8b142e945d5f3103a364630f661da3405479", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.746031746, "max_line_length": 587, "alphanum_fraction": 0.694982699, "num_tokens": 1999, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7745833841649232, "lm_q1q2_score": 0.5614724716016074}}
{"text": "\\mychapter{1}{Lesson 1} %180926\n\n\\section{Secret communication}\n\nThe typical setting for the problem of secret communication is depicted in figure \\ref{fig:secrecy}. The parties Alice and Bob want to share data in a private fashion, thus preventing a third party Eve from \\emph{eavesdropping}.\\footnote{That is, getting a hold of the information shared between the parties} The objects of interest here are:\n\\begin{itemize}\n    \\item The data to be shared, or \\emph{message} $m$;\n    \\item Some secret information, shared between and known only to Alice and Bob, that is used to \\emph{encrypt} the message: the \\emph{encryption key} or just \\emph{key} $k$;\n    \\item The result of encrypting a message $m$ using the key $k$: the ciphertext $c$.\n\\end{itemize}\n\n\\begin{figure}[ht]\n    \\centering\n    \\begin{tikzpicture}\n        \\draw\n            (0, 0) node (a) [box, fill = white] {Alice} \n            (5, 0) node (b) [box, fill = white] {Bob}\n            (2.5, -1) node (e) [box, fill = white] {Eve}\n        ;\n\n        \\draw[-Stealth] (a) -- node [midway, above] {$c$} (b);\n        \\draw[-Stealth] (0, -1) node [below] {$k$} -- (a);\n        \\draw[-Stealth] (5, -1) node [below] {$k$} -- (b);\n        \\draw[-Stealth] (-1, 0) node [left] {$m$} -- (a);\n        \\draw[-Stealth] (b) -- (6, 0) node [right] {$m$};\n        \\draw[-Stealth] (2, -0.1) -- (3, -0.1) (2.5, -0.1) -- (e);\n\n    \\end{tikzpicture}\n\n    \\caption{A depiction of the problem of secret communication}\n    \\label{fig:secrecy}\n\\end{figure}\n\nTo complete the picture, the two parties Alice and Bob employ a \\emph{cryptographic secrecy scheme}, or \\emph{encryption scheme} to convert the message in an encrypted form, and vice versa. It has the form $\\Xi = (\\Enc, \\Dec)$, where:\n\\begin{itemize}\n    \\item $\\Enc \\in \\K \\times \\M \\to \\C$ is a machine that, given a message $m$ in $\\M$ and a key $k$ in $\\K$, returns a ciphertext $c$ in $\\C$, which is an \\emph{encrypted} form of the meesage;\n    \\item $\\Dec \\in \\K \\times \\C \\to \\M$ is a machine that restores the message $m$ encrypted in the given ciphertext $c$ by using the key $k$, effectively \\emph{decrypting} the message.\n\\end{itemize}\n\nFor the time being, assume that both \\Enc{} and \\Dec{} work as normal functions. A fundamental requirement of encryption schemes is that they always completely preserve the message after a whole round of encryption and decryption:\n\\[\n    \\forall m \\in \\M \\; \\forall k \\in \\K \\quad \\Dec(k, \\Enc(k, m)) = m\n\\]\n\nA first definition that formalizes secrecy of an encryption scheme comes from Shannon:\n\n\\begin{definition}[Perfect secrecy]\n    Given an encryption scheme $\\Xi: (\\Enc, \\Dec)$, let $M$ be a generic random variable over the message space $\\M$, and $K$ be a uniform random variable over the key space $\\K$:\n    \\begin{align*}\n        M \\in&\\ \\mathcal{R}and(\\M) \\\\\n        K \\in&\\ \\unifdist(\\K)\n    \\end{align*}\n    Then let $\\Enc(K, M)$ be the resulting ciphertext from encrypting $M$ using $K$. 
The scheme $\\Xi$ is deemed \\emph{perfectly secret} iff such ciphertext is effectively useless in retrieving any information about the original message:\n    \\[\n        \\forall m \\in \\M \\; \\forall c \\in \\C \\quad \\Pr[M = m] = \\Pr[M = m \\knowing \\Enc(K, M) = c]\n    \\]\n\\end{definition}\n\nThis definition can be rephrased in different ways, bringing more details to light:\n    \n\\begin{proposition}\n    The following statements are equivalent:\n    \\begin{enumerate}\n        \\item \\label{def:ps1} $\\Pr[M = m] = \\Pr[M = m \\knowing \\Enc(K, M) = c]$\n        \\item \\label{def:ps2} $M \\indep \\Enc(K, M)$\n        \\item \\label{def:ps3} $\\forall m_1, m_2 \\in \\M \\, \\forall c \\in \\C \\quad \\Pr[\\Enc(K, m_1) = c] = \\Pr[\\Enc(K, m_2) = c]$\n    \\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n    The proof is structured as a cyclic implication between the three proposed definitions:\n    \n    \\begin{itemize}\n        \\item $(\\ref{def:ps1}) \\implies (\\ref{def:ps2})$: Start from one side of the independency definition, and work through the other:\n        \\begin{align*}\n             & \\Pr[M = m \\wedge \\Enc(K, M) = c]                         & \\\\\n            =& \\Pr[M = m \\knowing \\Enc(K, M) = c] \\Pr[\\Enc(K, M) = c]   & \\text{(Conditional prob.)} \\\\\n            =& \\Pr[M = m] \\Pr[\\Enc(K, M) = c]                           & \\text{(Using \\ref{def:ps1})} \\\\\n        \\end{align*}\n        This proves that $M$ and $\\Enc(K, M)$ are indeed independent random variables.\n        \n        \\item $(\\ref{def:ps2}) \\implies (\\ref{def:ps3})$: Let $M$ be a generic random variable over $\\M$. Then:\n        \\begin{align*}\n             & \\Pr[\\Enc(K, m_1) = c]                                & \\\\\n            =& \\Pr[\\Enc(K, M) = c \\knowing M = m_1]                 & \\text{(Conditioning $m$)} \\\\\n            =& \\Pr[\\Enc(K, M) = c \\wedge M = m_1] \\Pr[M = m_1]^{-1} & \\text{(Conditional prob.)} \\\\\n            =& \\Pr[\\Enc(K, M) = c]                                  & \\text{(Using \\ref{def:ps2})} \\\\\n            =& \\Pr[\\Enc(K, m_2) = c]                                & \\mathllap{\\text{(Same steps reversed, where $m_1 \\mapsto m_2$)}} \\\\\n        \\end{align*}\n\n        \\item $(\\ref{def:ps3}) \\implies (\\ref{def:ps1})$:\n        \\begin{align*}\n             & \\Pr[\\Enc(K, M) = c]                                          & \\\\\n            =& \\sum_{m \\in \\M}\\Pr[\\Enc(K, M) = c \\wedge M = m]              & \\text{(Total prob.)} \\\\\n            =& \\sum_{m \\in \\M}\\Pr[\\Enc(K, M) = c \\knowing M = m] \\Pr[M = m] & \\text{(Conditional prob.)} \\\\\n            =& \\sum_{m \\in \\M}\\Pr[\\Enc(K, m) = c] \\Pr[M = m]                & \\text{(Condition collapse)} \\\\\n            =& \\Pr[\\Enc(K, m_0) = c] \\sum_{m \\in \\M} \\Pr[M = m]             & \\mathllap{\\text{(Using \\ref{def:ps3} with arbitrary $m_0$})} \\\\\n            =& \\Pr[\\Enc(K, m_0) = c]                                        & \\text{(Total prob.)} \\\\\n            =& \\Pr[\\Enc(K, M) = c \\knowing M = m_0]                         & \\text{(Conditioning $m_0$)}\n        \\end{align*}\n\n        By applying Bayes' theorem, the above result can be turned into the first definition of perfect secrecy:\n        \\begin{align*}\n                    & \\Pr[C = c] = \\Pr[C = c \\knowing M = m]                                        & \\\\\n            \\implies& \\Pr[C = c] = \\Pr[M = m \\knowing C = c] \\cdot \\frac{\\Pr[C = c]}{\\Pr[M = m]}    & 
\\text{(Bayes' theorem)} \\\\\n            \\implies& \\Pr[M = m] = \\Pr[M = m \\knowing C = c]                                        &\n        \\end{align*}\n        where $C = \\Enc(K, M)$.\n    \\end{itemize}\n\\end{proof}\n\nAbout definition \\ref{def:ps3}, it could be insightful to remark that, for any message $m$ and ciphertext $c$:\n\\begin{align*}\n        & \\Pr[\\Enc(K, m) = c] \\\\\n       =& \\Pr[\\Enc(K, M) = c \\knowing M = m] \\\\\n    \\neq& \\Pr[\\Enc(K, M) = c]\n\\end{align*}\n\nwhich is the difference between \\emph{choosing} a specific message $m$, and picking it at random, according to $M$'s distribution.\n\n\\subsection{One-time Pad}\n\nThe One-time Pad, or \\otp{} in short, is a simple encryption scheme that leverages the involutory property of the \\textsc{xor} operation. Let all the spaces $\\K = \\M = \\C = \\binary^l$ be binary strings of some length $l$, and define this scheme $\\otp = (\\Enc, \\Dec)$ as such:\n\\begin{itemize}\n    \\item $\\Enc(k, m) = k \\oplus m$\n    \\item $\\Dec(k, c) = k \\oplus c$\n    \\item[--] Correctness: $\\Dec(k, \\Enc(k, m)) = \\Dec(k, k \\oplus m) = k \\oplus k \\oplus m = m$\n\\end{itemize}\n\n\\begin{theorem}\n    \\otp{} is perfectly secret.\n\\end{theorem}\n\\begin{proof}\n    The proof makes use of definition \\ref{def:ps3} of perfect secrecy. Let $K$ be a uniformly random key; then for any binary strings $m_1$, $m_2$ and $c$:\n    \\begin{align*}\n        & \\Pr[\\Enc(K, m_1) = c]     & \\\\\n        =& \\Pr[K \\oplus m_1 = c]    & \\\\\n        =& \\Pr[K = c \\oplus m_1]    & \\\\\n        =& |\\K|^{-1}                & \\text{($K$ is uniform)} \\\\\n        =& \\Pr[\\Enc(K, m_2) = c]    & \\text{(Same steps reversed, where $m_1 \\mapsto m_2$)}\n    \\end{align*}\n\\end{proof}\n
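\nAs a toy illustration (ours, not part of the original lesson), \\otp{} over byte strings can be exercised in a few lines of Python, with \\texttt{os.urandom} standing in for the uniform key source:\n\\begin{verbatim}\nimport os\n\ndef otp_enc(key, msg):\n    assert len(key) == len(msg)  # |k| = |m| is mandatory\n    return bytes(k ^ m for k, m in zip(key, msg))\n\notp_dec = otp_enc  # decryption is the same XOR\n\nmsg = b'attack at dawn'\nkey = os.urandom(len(msg))  # fresh uniform key, used once\nc = otp_enc(key, msg)\nassert otp_dec(key, c) == msg  # correctness round-trip\n\n# Reusing the key leaks information: c1 xor c2 == m1 xor m2\nmsg2 = b'attack at dusk'\nc2 = otp_enc(key, msg2)\nleak = bytes(a ^ b for a, b in zip(c, c2))\nassert leak == bytes(a ^ b for a, b in zip(msg, msg2))\n\\end{verbatim}\n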
\n%TODO OLD NOTE: Observation: k is truly random, but fixed, compare with Pi_\\oplus's encryption routine: from a security standpoint, nothing changes! actually, \\Pi_\\oplus may be weaker brute-force wise than OTP!\n\nAt this point, some important observations are in order:\n\\begin{enumerate}\n    \\item As the scheme's name suggests, keys are useful just for one encryption. In fact, if the same key is used for two different encryptions, an attacker may exploit the \\textsc{xor}'s involutory property to extract valuable information from both ciphertexts:\n    \\[\n        c_1 = k \\oplus m_1 \\wedge c_2 = k \\oplus m_2 \\implies c_1 \\oplus c_2 = m_1 \\oplus m_2\n    \\]\n    \\item The key and the message's lengths must always match ($|k| = |m|$);\n\n    % AP210226: Do not introduce malleability at this level; rather, cite OTP instead when malleability is actually introduced\n    %\\footnote{This vulnerability of applying a function on a ciphertext, and expecting as a result the image of the original message by the same function, is called \\emph{malleability}, and is explored further in the notes.}\n    %\\footnote{An encryption algorithm is \"malleable\" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption $c$ of a plaintext $m$, it is possible to generate another valid ciphertext $c'$, for a known $Enc$, $Dec$, without necessarily knowing or learning $m$.}\n\n\\end{enumerate}\n\nCombined with the fact that keys must be preemptively shared in a secret fashion, these caveats make for an impractical encryption scheme. The last point can actually be generalized for any scheme that is perfectly secret:\n\n\\begin{theorem}\n    For an encryption scheme to be perfectly secret, there must be at least as many distinct keys as distinct messages:\n    \\[\n        \\Xi \\textrm{ is perfectly secret} \\implies |\\K| \\geq |\\M|\n    \\]\n\\end{theorem}\n\\begin{proof}\n\n    Let $M$ be a generic random variable over the messages, then fix a ciphertext $c$ that has some positive probability to be the result of $M$'s encryption. This means that there are some key-message pairs that result in such ciphertext by means of encryption:\n    \\[\n        \\exists (k, m) \\in \\K \\times \\M \\quad \\Enc(k, m) = c\n    \\]\n    Let $S$ be the set collecting all possible message ``sources'' that do encrypt into $c$:\n    \\[\n        S = \\{m \\in \\M : \\exists k \\in \\K \\quad \\Enc(k, m) = c\\}\n    \\]\n\n    By using $\\Xi$'s correctness, this definition can be twisted into using \\Dec:\n    \\[\n        S = \\{\\Dec(k, c) \\in \\M : k \\in \\K\\}\n    \\]\n\n    This reveals in a clearer way that there cannot be more sources than keys,\\footnote{Remember that \\Enc{} and \\Dec{} are still considered as mathematical functions, despite them actually being machines} therefore $|S| \\leq |\\K|$. By also assuming that $|\\K| < |\\M|$, it follows that $|S| < |\\M|$ too; as a consequence, there are some messages that cannot be possibly encrypted into $c$:\n    \\[\n        x \\in \\M \\setminus S \\implies \\forall k \\in \\K \\quad \\Pr[\\Enc(k, x) = c] = 0\n    \\]\n\n    Let $x$ be a message in $\\M \\setminus S$. Since $M$ is nonexclusive:\n    \\[\n        \\Pr[M = x] > 0\n    \\]\n    However, $x$ cannot be encrypted into $c$ by definition, which also means:\n    \\[\n        \\Pr[M = x \\knowing \\Enc(K, M) = c] = 0\n    \\]\n    Therefore:\n    \\[\n        0 = \\Pr[M = x \\knowing \\Enc(K, M) = c] \\neq \\Pr[M = x] > 0\n    \\]\n    which violates perfect secrecy.\n\\end{proof}\n", "meta": {"hexsha": "b95e20ab60dae897b952789c56346b2a644fb96d", "size": 11381, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lessons/lesson_1.tex", "max_stars_repo_name": "Project2100/Cryptography-2018_19", "max_stars_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-15T09:22:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-15T09:22:45.000Z", "max_issues_repo_path": "lessons/lesson_1.tex", "max_issues_repo_name": "Project2100/cryptography_1819", "max_issues_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-18T15:45:10.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-27T20:36:12.000Z", "max_forks_repo_path": "lessons/lesson_1.tex", "max_forks_repo_name": "Project2100/cryptography_1819", "max_forks_repo_head_hexsha": "da5dcf51b0396bd26d7fd0445feceef365950757", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-07-17T14:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-03T15:23:22.000Z", "avg_line_length": 56.0640394089, "max_line_length": 394, "alphanum_fraction": 0.5896669888, "num_tokens": 3476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7745833789613197, "lm_q1q2_score": 0.5614724678296699}}
{"text": "\\documentclass[a4paper,10pt,oneside]{article}\n% \\setcounter{secnumdepth}{-1} \n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{courier}\n\\usepackage{graphicx}\n\\usepackage{xcolor}\n\\usepackage{color}\n\\usepackage{pdflscape}\n\\usepackage[utf8]{inputenc}\n\\usepackage{listings}\n\\usepackage[inline]{enumitem}\n\\usepackage{verbatim}\n\\usepackage{pxfonts}\n\\usepackage{algorithm2e}\n\\usepackage{algpseudocode}\n\n\\usepackage[top=2cm, bottom=1.5cm, left=1cm, right=1cm]{geometry}\n\\usepackage{multicol}\n\\usepackage{fancyhdr}\n\n\\pagestyle{fancy}\n\n\\renewcommand{\\sectionmark}[1]{\\markboth{#1}{}}\n\\renewcommand{\\subsectionmark}[1]{\\markright{#1}}\n\n\\fancyhf{}\n\\rhead{\\leftmark, \\thepage}\n\n\\lhead{University of Brasilia}\n\\cfoot{\\thepage}\n\n\\usepackage{titlesec}\n\\titlespacing*{\\section}\n{0pt}{2ex}{1ex}\n\n%\\definecolor{dkgray}{rgb}{0.4,0.4,0.4}\n\\definecolor{gray}{rgb}{0.6,0.6,0.6}\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n%\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\n\\lstset{\n\tlanguage=c++,\n\ttabsize=4,\n\taboveskip=.1em,\n\tbelowskip=.1em,\n\tshowstringspaces=false,\n\tbasicstyle={\\small\\ttfamily},\n\tcolumns=fullflexible,\n\tnumberstyle=\\tiny\\color{gray},\n\t% keywordstyle=\\color{blue},\n\tcommentstyle=\\color{dkgreen},\n\tstringstyle=\\color{mauve},\n\tnumbers=none,\n\tbreaklines=true,\n\tbreakindent=1.1em,\n\tbreakatwhitespace=false,\n\tcommentstyle=\\color{gray},\n}\n\n\\newcommand\\includes[2]{\n   \\subsection{#1}\n   \\lstinputlisting{#2}\n}\n\\newcommand\\includess[2]{\n   \\subsubsection{#1}\n   \\lstinputlisting{#2}\n}\n\n\\setlength{\\columnseprule}{1pt}\n\n\\date{}\n\\title{Rock Lee do Pagode Namora D+}\n%\\title{ICPC Team Reference}\n\\author{University of Brasilia}\n\n\\begin{document}\n\\maketitle\n\\begin{multicols}{2}\n\\tableofcontents\n\\newpage\n\\thispagestyle{fancy}\n\n\\lstinputlisting[language=bash]{vimrc}\n\n\\lstinputlisting[language=bash]{bashrc}\n\n\\section{Data Structures}\n\\includes{Merge Sort Tree}{code/ed/merge_sort_tree.cpp}\n\\includes{Wavelet Tree}{code/ed/wavelet_tree.cpp}\n\\includes{Order Set}{code/ed/order_set.cpp}\n\\includes{Hash table}{code/ed/hash_table.cpp}\n\\includes{Convex Hull Trick Simple}{code/ed/cht_simple.cpp}\n% \\includes{Convex Hull Trick}{code/ed/cht.cpp}\n\\includes{Convex Hull Trick}{code/ed/LineContainer.cpp}\n\\includes{Min queue}{code/ed/minq.cpp}\n\\includes{Sparse Table}{code/ed/sparse_table.cpp}\n\\includes{Treap}{code/ed/treap.cpp}\n\\includes{ColorUpdate}{code/misc/ColorUpdate.cpp}\n\\includes{Heavy Light Decomposition}{code/ed/hld.cpp}\n\\includes{Iterative Segtree}{code/ed/segtree.cpp}\n\\includes{Recursive Segtree + lazy}{code/ed/segtree_rec.cpp}\n\\includes{LiChao's Segtree}{code/ed/lichao.cpp}\n\\includes{Palindromic tree}{code/ed/eertree.cpp}\n\n\\section{Math}\n\\includes{Extended Euclidean Algorithm}{code/math/euclides.cpp}\n\\includes{Chinese Remainder Theorem}{code/math/crt.cpp}\n\\includes{Preffix inverse}{code/math/inv.cpp}\n\\includes{Pollard Rho}{code/math/pollard_rho.cpp}\n\\includes{Miller Rabin}{code/math/miller_rabin.cpp}\n\\includes{Totiente}{code/math/tot.cpp}\n\\includes{Primitive root}{code/math/primitive_root.cpp}\n\\includes{Mobius Function}{code/math/mobius.cpp}\n\\includes{Mulmod TOP}{code/math/mod.cpp}\n\\includes{Matrix Determinant}{code/math/det.cpp}\n\\includes{Simplex Method}{code/math/simplex.cpp}\n\\includes{FFT}{code/fft.cpp}\n\\includes{FFT 
Tourist}{code/fft_tourist.cpp}\n\\includes{NTT}{code/ntt.cpp}\n\\includes{Gauss}{code/gauss.cpp}\n\\includes{Gauss Xor}{code/gauss_xor.cpp}\n\\includes{Simpson}{code/math/simpson.cpp}\n\\includes{Modular Arithmetic}{code/math/mod_arithmetic.cpp}\n\\includes{Matrix}{code/math/matrix.cpp}\n\n\\section{Graphs}\n\\includes{Dinic}{code/graph/dinic.cpp}\n\\includes{Push relabel}{code/graph/pushrelabel.cpp}\n\\includes{Min Cost Max Flow}{code/graph/mcmf.cpp}\n\\includes{Blossom Algorithm for General Matching}{code/graph/blossom.cpp}\n% \\includes{Blossom Algorithm for Weighted General Matching}{code/graph/weight_blossom.cpp}\n\\includes{Small to Large}{code/graph/stl.cpp}\n\\includes{Centroid Decomposition}{code/graph/centroid_decomp.cpp}\n\\includes{Kosaraju}{code/graph/kosaraju.cpp}\n\\includes{Tarjan}{code/graph/tarjan.cpp}\n\\includes{Max Clique}{code/graph/maxcliq.cpp}\n\\includes{Dominator Tree}{code/graph/dominator_tree.cpp}\n\\includes{Min Cost Matching}{code/graph/hungarian_mcm.cpp}\n\n\\section{Strings}\n\\includes{Aho Corasick}{code/string/aho_corasick.cpp}\n\\includes{Suffix Array}{code/string/suffix_array.cpp}\n\\includes{Adamant Suffix Tree}{code/string/adamant_suffix_tree.cpp}\n\\includes{Z Algorithm}{code/string/z_algo.cpp}\n\\includes{Prefix function/KMP}{code/string/pf.cpp}\n\\includes{Min rotation}{code/string/min_rot.cpp}\n\\includes{Manacher}{code/string/all_palindrome.cpp}\n\\includes{Suffix Automaton}{code/string/suffix_automaton.cpp}\n\\includes{Suffix Tree}{code/ed/suffix_tree.cpp}\n\n\\section{Geometry}\n\\includes{2D basics}{code/geometry/2D.cpp}\n\\includes{Circle line intersection}{code/geometry/circle_line_intersection.cpp}\n\\includes{Half plane intersection}{code/geometry/halfplane.cpp}\n\\includes{Detect empty Half plane intersection}{code/geometry/detect_halfplane.cpp}\n\n\\subsection{Circle Circle intersection}\nAssume that the first circle is centered at the origin and second at $(x2, y2)$. Find circle line intersection of first circle and line $Ax + By + C = 0$, where $A = -2x_2$, $B = -2y_2$, $C = x_2^2 + y_2^2 + r_1^2 - r_2^2$.\n\nBe aware of corner case with two circles centered at the same point.\n\\includes{Tangents of two circles}{code/geometry/tangents.cpp}\n\\includes{Convex Hull}{code/geometry/convexhull.cpp}\n\\includes{Check point inside polygon}{code/geometry/in_poly.cpp}\n\\includes{Check point inside polygon without lower/upper hull}{code/geometry/in_poly2.cpp}\n\\includes{Minkowski sum}{code/geometry/mink.cpp}\n\\subsection{Geo Notes}\n\\subsubsection{Center of mass}\n\\textbf{System of points(2D/3D):} Mass weighted average of points. \\\\\n\\textbf{Frame(2D/3D):} Get middle point of each segment solve as previously. \\\\\n\\textbf{Triangle:} Average of vertices. \\\\\n\\textbf{2D Polygon:} Compute \\textbf{signed} area and center of mass of triangle $((0, 0), p_i, p_{i+1})$. Then solve as system of points.\\\\\n\\textbf{Polyhedron surface:} Solve each face as a 2D polygon(be aware of (0, 0)) then replace each face with its center of mass and solve as system of points. \\\\\n\\textbf{Tetrahedron(Triangular pyramid):} As triangles, its the average of points. \\\\\n\\textbf{Polyhedron:} Can be done as 2D polygon, but with tetrahedralization intead of triangulation.\n\n\\subsubsection{Pick's Theorem}\nGiven a polygon without self-intersections and all its vertices on integer coordinates in some 2D grid. 
Let $A$ be its area, $I$ the number of points with integer coordinates strictly inside the polygon and $B$ the number of points with integer coordinates on the border of the polygon. The following formula holds: $A = I + \\frac{B}{2} - 1$.\n\n\\section{Miscellaneous}\n\\includes{LIS}{code/misc/lis.cpp}\n\\includes{DSU rollback}{code/misc/bipar.cpp}\n\\includes{Buildings}{code/misc/burn.cpp}\n\\includes{Rand}{code/misc/rand.cpp}\n\\includes{Klondike}{code/misc/klondike.cpp}\n\\includes{Hilbert Order}{code/misc/hilbert_order.cpp}\n\\includes{Modular Factorial}{code/misc/factmod.cpp}\n\\includes{Enumeration all submasks of a bitmask}{code/misc/submasks.cpp}\n\\includes{Slope Trick}{code/misc/slope.cpp}\n% \\includes{Fast IO}{code/misc/fastio.cpp}\n% \\includes{Big int}{code/misc/bigint.cpp}\n\\includes{Knapsack Bounded with Cost}{code/misc/knapsack_bounded_cost.cpp}\n\\includes{LCA \\textless O(nlgn), O(1)\\textgreater}{code/misc/lca.cpp}\n\\includes{Buffered reader}{code/misc/buffered_reader.cpp}\n\\includes{Modular summation}{code/misc/sum_mod.cpp}\n\\includes{Edge coloring CPP}{code/misc/edge_coloring.cpp}\n\n\\subsection{Burnside's Lemma}\nLet $(G, \\oplus)$ be a finite group that acts on a set $X$. It should hold that $e_G*x=x$ (with $e_G$ the identity of $G$) and $g_1 *(g_2 * x) = (g_1 \\oplus g_2) * x$, $\\forall x \\in X, g_1, g_2 \\in G$. For each $g \\in G$ let $X^g = \\{x \\in X \\mid g*x = x \\}$. The number of orbits is given by:\n\n$\\mid X / G\\mid~= \\frac{1}{|G|} \\sum_{g \\in G}{|X^g|}$\n\n\\subsection{Wilson's Theorem}\nFor $n > 1$: $(n-1)! \\equiv -1 \\pmod n \\iff n\\text{ is prime}$\n\n\\subsection{Fibonacci}\n\\begin{itemize}\n\\item $F_{n-1}F_{n+1} - F_n^2 = (-1)^n$\n\\item $F_{n+k} = F_kF_{n+1} + F_{k-1}F_n$\n\\item $GCD(F_n, F_m) = F_{GCD(n, m)}$\n\\item $F_n = \\frac{(\\frac{1+\\sqrt{5}}{2})^n - (\\frac{1-\\sqrt{5}}{2})^n}{\\sqrt{5}}$\n\\end{itemize}\n\n\\subsection{Lucas's Theorem}\nFor non-negative integers $m$ and $n$ and a prime $p$, the following congruence holds:\n\n$\\displaystyle \\binom{m}{n} \\equiv \\prod_{i = 0}^{k} \\binom{m_i}{n_i} \\pmod p$\n\nwhere $m_i$ is the i-th digit of $m$ in base $p$. ${\\displaystyle {\\tbinom {a}{b}}=0}$ if $a < b$.\n\n\\subsection{Kirchhoff's Theorem}\nThe Laplacian matrix is $L = D - A$, where $D$ is a diagonal matrix with vertex degrees on the diagonal and $A$ is the adjacency matrix.\n\nThe number of spanning trees is any cofactor of L. The $i$-th cofactor is the determinant of the matrix gotten by removing the $i$-th row and column of L.\n\n\\subsubsection{Multigraphs}\nIn $D[i][i]$ all loops are excluded. $A[i][j]$ = number of edges from $i$ to $j$.\n\n\\subsubsection{Directed multigraphs}\n$D[i][i]$ = indegree of i minus the number of loops at i. $A[i][j]$ = number of edges from $i$ to $j$.\n\nThe number of oriented spanning trees rooted at a vertex i is the determinant of the matrix gotten by removing the ith row and column of L.\n\n\\subsection{Matroid}\nLet $X$ be a set of objects and $I \\subseteq 2^X$ a set of independent sets such that:\n\\begin{enumerate}\n\\item $\\emptyset \\in I$\n\\item $A \\in I, B \\subseteq A \\implies B \\in I$\n\\item Exchange axiom, $A \\in I, B \\in I, |B| > |A| \\implies \\exists x \\in B \\setminus A : A \\cup \\{x\\} \\in I$\n\\item If $A \\subseteq X$ and $I$ and $I'$ are maximal independent subsets of $A$, then $|I| = |I'|$\n\\end{enumerate}\nThen $(X, I)$ is a matroid. 
The combinatorial optimization problem associated with it is: Given a weight $w(e) \\geq 0 ~\\forall e \\in X$, find an independent subset that has the largest possible total weight.\n\n\\includess{Matroid intersection}{code/matroid.cpp}\n\nWhere path(e) = [e] if label[e] = MARK2, path(label[e]) + [e] otherwise.\n\n\\subsubsection{Matroid Union}\nGiven $k$ matroids over the same set of objects $(X, I_1)$, $(X, I_2)$, \\dots, $(X, I_k)$ find $A_1 \\in I_1$, $A_2 \\in I_2$, \\dots, $A_k \\in I_k$ such that $i \\not= j \\implies A_i \\cap A_j = \\emptyset$ and $|\\bigcup\\limits_{i=1}^{k} A_i|$ is maximum. Matroid union can be reduced to matroid intersection as follows.\n\nLet $X' = X \\times \\{1, 2, \\dots, k\\}$, i.e., $k$ copies of each element of $X$ with different colors. $M1 = (X', Q)$ where $B \\in Q \\iff \\forall ~1 \\le i \\le k, ~\\{x\\mid (x, i) \\in B\\} \\in I_i$, i.e., for each color, $B$ is independent. $M2 = (X', W)$ where $B \\in W \\iff i \\not= j \\implies \\lnot((x, i) \\in B \\land (x, j) \\in B)$, i.e., each element is picked by at most one color.\n\nIntersection of $M1$ and $M2$ is the answer for the combinatorial problem of matroid union.\n\n\\subsection{Notes}\nIf we repeat something that succeeds with probability $p$ each time, then the expected number of tries until we succeed is $\\frac{1}{p}$.\n\n\\textbf{Small to large}\n\n\\textbf{Trick in statement} If $k$ sets are given, note that the amount of different set sizes is $O(\\sqrt{s})$ where $s$ is the total size of those sets. And no more than $\\sqrt{s}$ sets have size greater than $\\sqrt{s}$. For example, a path to the root in Aho-Corasick through suffix links will have at most $O(\\sqrt{s})$ vertices.\n\n\\textbf{gcd on subsegment}: we have at most $\\log(a_i)$ different values in $\\{\\gcd(a_j, a_{j+1}, ..., a_i)$ for $j < i\\}$.\n\n\\textbf{From static set to expandable}. To insert, create a new set with the new element. While there are two sets with the same size, merge them. There will be at most $\\log(n)$ disjoint sets.\n\n\\textbf{Matrix exponentiation optimization}. Save binary powers of $A_{n\\times n}$ and answer $q$ queries $b = A^mx$ in $O((n^3 + qn^2)\\log(m))$.\n\n\\textbf{Ternary search on integers into binary search}, comparing f(mid) and f(mid+1), binary search on the derivative.\n\n\\textbf{Dynamic offline set} For each element we find the segment of time $[a, b]$ such that the element is present in the set during this whole segment. Now we can come up with a recursive procedure which handles the $[l, r]$ time segment considering that all elements such that $[l, r] \\subset [a, b]$ are already included into the set. Now, keeping this invariant, we recursively go into the $[l, m]$ and $[m+1, r]$ subsegments. Finally, when we reach a segment of length 1, we process the queries at that single time point.\n\n$a > b \\implies a \\mod b < \\frac{a}{2}$\n\n\\textbf{Convex Hull}. The expected number of points in the convex hull of a random set of points is $O(\\log(n))$. The number of points in a convex hull with point coordinates bounded by $L$ is $O(L^{2/3})$.\n\n\\textbf{Tree path query}. Sometimes the linear query is fast enough. 
Just do adamant's hld sorting subtrees by their size and remap vertices indexes.\n\n\n\\end{multicols}\n\n\n\\end{document}\n", "meta": {"hexsha": "8488d1f46851a6dc18e5dc293b8a80a59c61e676", "size": 12855, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notebook.tex", "max_stars_repo_name": "UnBalloon/icpc-notebook", "max_stars_repo_head_hexsha": "5e8d165090420bc4a7496e9378aca22c79d86065", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-12-09T21:01:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-30T16:07:29.000Z", "max_issues_repo_path": "notebook.tex", "max_issues_repo_name": "UnBalloon/icpc-notebook", "max_issues_repo_head_hexsha": "5e8d165090420bc4a7496e9378aca22c79d86065", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook.tex", "max_forks_repo_name": "UnBalloon/icpc-notebook", "max_forks_repo_head_hexsha": "5e8d165090420bc4a7496e9378aca22c79d86065", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1052631579, "max_line_length": 464, "alphanum_fraction": 0.7350447297, "num_tokens": 4056, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8104789040926008, "lm_q1q2_score": 0.561371725703877}}
{"text": "\\section{Optimal sorting}\n\\label{sec:opt_sort}\n\\index{sorting}\n\nSorting is the process of rearranging a series of objects, called\n\\emph{keys}\\index{sorting!key}, to conform with a predefined\norder. According to \\cite{Knuth_1998}, the first sorting algorithms\nwere invented and automated as tabulating machines in the late\nnineteenth century, in order to support the establishment of the\ncensus of the United States of America.\n\n\\paragraph{Permutations}\n\\label{par:permutations}\n\\index{permutation}\n\nWe saw \\vpageref{par:mean_sort} that the average cost of a\ncomparison\\hyp{}based sorting algorithm is defined as the arithmetic\nmean of the costs of sorting all permutations of a given length. A\npermutation of \\((1,2,\\dots,n)\\) is another tuple\n\\((a_1,a_2,\\dots,a_n)\\) such that \\(a_i \\in \\{1,\\dots,n\\}\\) and \\(a_i\n\\neq a_j\\) for all~\\(i \\neq j\\). For example, all the permutations\nof~\\((1,2,3)\\) are\n\\begin{equation*}\n(1,2,3) \\quad (1,3,2) \\quad (2,1,3) \\quad (2,3,1) \\quad (3,1,2) \\quad\n(3,2,1).\n\\end{equation*}\nGiven all the permutations of~\\((1,2,\\dots,n-1)\\), let us build\ninductively all the permutations of\n\\((1,2,\\dots,n)\\). If~\\((a_1,a_2,\\dots,a_{n-1})\\) is a permutation of\n\\((1,2,\\dots,n-1)\\), then we can construct \\(n\\)~permutations\nof~\\((1,2,\\dots,n)\\) by inserting~\\(n\\) at all possible places\nin~\\((a_1,a_2,\\dots,a_{n-1})\\):\n\\begin{equation*}\n(\\boldsymbol{n},a_1,a_2,\\dots,a_{n-1})\\quad\n(a_1,\\boldsymbol{n},a_2,\\dots,a_{n-1})\\quad \\ldots \\quad\n(a_1,a_2,\\dots,a_{n-1},\\boldsymbol{n}).\n\\end{equation*}\nFor example, it is obvious that all the permutations of \\((1,2)\\) are\n\\((1,2)\\) and \\((2,1)\\). The method leads from \\((1,2)\\) to\n\\((\\boldsymbol{3},1,2)\\), \\((1,\\boldsymbol{3},2)\\) and\n\\((1,2,\\boldsymbol{3})\\); and from \\((2,1)\\) to\n\\((\\boldsymbol{3},2,1)\\), \\((2,\\boldsymbol{3},1)\\) and\n\\((2,1,\\boldsymbol{3})\\). If we name~\\(p_n\\) the number of\npermutations on \\(n\\)~elements, we draw from this the recurrence \\(p_n\n= n \\cdot p_{n-1}\\), which, with the additional obvious \\(p_1 = 1\\),\nleads to \\({p_n = n!}\\), for all \\({n > 0}\\), exactly as expected. If\nthe \\(n\\)~objects to permute are not \\((1,2,\\dots,n)\\) but, for\nexample, \\((\\textsf{b},\\textsf{d},\\textsf{a},\\textsf{c})\\), simply\nassociate each of them to their index in the tuple, for example,\n\\textsf{b}~is represented by~\\(1\\), \\textsf{d}~by~\\(2\\),\n\\textsf{a}~by~\\(3\\) and \\textsf{c}~by~\\(4\\), so the tuple is then\nassociated to \\((1,2,3,4)\\) and, for instance, the permutation\n\\((4,1,2,3)\\) means~\\((\\textsf{c},\\textsf{b},\\textsf{d},\\textsf{a})\\).\n\n\\paragraph{Factorial}\\index{factorial}\n\nWe encountered the factorial function in the introduction and here\nagain. There is a simple derivation enabling the characterisation of\nits asymptotic growth, proposed by\n\\cite{GrahamKnuthPatashnik_1994}. We start by squaring the factorial\nand regrouping the factors as follows:\n\\begin{equation*}\nn!^2 = (1 \\cdot 2 \\cdot \\ldots \\cdot n) (n \\cdot \\ldots \\cdot 2 \\cdot\n1) = \\prod_{k=1}^{n}{k(n+1-k)}.\n\\end{equation*}\nThe parabola \\(P(k) := k(n+1-k) = -k^2 + (n+1)k\\) reaches its maximum\nwhere its derivative is zero: \\(P'(k_{\\max}) = 0 \\Leftrightarrow\nk_{\\max}=(n+1)/2\\). The corresponding ordinate is \\(P(k_{\\max}) =\n((n+1)/2)^2 = k_{\\max}^2\\). 
When \\(k\\)~ranges from~\\(1\\) to~\\(n\\), the\nminimal ordinate, \\(n\\), is reached at absissas \\(1\\)~and~\\(n\\), as\nshown in \\fig~\\vref{fig:parabola}.\n\\begin{figure}\n\\centering\n\\includegraphics[bb=55 563 205 730]{parabola}\n\\caption{Parabola \\(P(k) := k(n+1-k)\\)\\label{fig:parabola}}\n\\end{figure}\nHence, \\(1 \\leqslant k \\leqslant k_{\\max}\\) implies\n\\begin{equation*}\nP(1) \\leqslant P(k) \\leqslant P(k_{\\max}),\\quad \\text{that is,} \\quad\nn \\leqslant k(n+1-k) \\leqslant \\left(\\frac{n+1}{2}\\right)^2.\n\\end{equation*}\nMultiplying the sides by varying~\\(k\\) over the discrete interval\n\\([1..n]\\) yields\n\\begin{gather*}\nn^n = \\prod_{k=1}^{n}{n} \\leqslant n!^2\n\\leqslant\n\\prod_{k=1}^{n}{\\left(\\!\\frac{n+1}{2}\\!\\right)^2} \\!\\!=\n\\left(\\!\\frac{n+1}{2}\\!\\right)^{2n}\n\\!\\!\\!\\Rightarrow\\!\nn^{n/2} \\leqslant n! \\leqslant \\left(\\!\\frac{n+1}{2}\\!\\right)^n.\n\\end{gather*}\nIt is clear now that \\(n!\\)~is \\emph{exponential}, so it\nasymptotically outgrows any polynomial. Concretely, a function whose\ncost is proportional to a factorial is useless even for small\ninputs. For the cases where an equivalence is preferred, Stirling's\nformula\\index{Stirling's formula} states that\n\\begin{equation}\nn! \\sim n^n e^{-n} \\sqrt{2\\pi n}.\\label{eq:Stirling}\n\\end{equation}\n\n\\paragraph{Enumerating all permutations}\n\nLet us write a program computing all the permutations of a given\nstack. We define the function~\\fun{perm/1}\\index{perm@\\fun{perm/1}}\nsuch that \\(\\fun{perm}(s)\\) is the stack of all permutations of the\nitems in stack~\\(s\\). We implement the inductive method presented\nabove, which worked by inserting a new object into all possible places\nof a shorter permutation.\n\\begin{equation*}\n\\fun{perm}(\\el)         \\xrightarrow{\\smash{\\alpha}} \\el;\\quad\n\\fun{perm}([x])         \\xrightarrow{\\smash{\\beta}} [[x]];\\quad\n\\fun{perm}(\\cons{x}{s}) \\xrightarrow{\\gamma}\n                          \\fun{dist}(x,\\fun{perm}(s)).\n\\end{equation*}\nThe function~\\fun{dist/2}\\index{dist@\\fun{dist/2}} (\\emph{distribute})\nis such that \\(\\fun{dist}(x,s)\\) is the stack of all stacks obtained\nby inserting the item~\\(x\\) at all different places in\nstack~\\(s\\). Because such an insertion into a permutation of\nlength~\\(n\\) yields a permutation of length~\\(n+1\\), we must join the\nnew permutations to the previously found others of same length:\n\\begin{equation*}\n\\fun{dist}(x,\\el)         \\xrightarrow{\\smash{\\delta}} \\el;\\quad\n\\fun{dist}(x,\\cons{p}{t}) \\xrightarrow{\\smash{\\epsilon}}\n                            \\fun{cat}(\\fun{ins}(x,p),\\fun{dist}(x,t)).\n\\end{equation*}\nThe call~\\(\\fun{ins}(x,p)\\)\\index{ins@\\fun{ins/2}} computes the stack\nof permutations resulting from inserting~\\(x\\) at all places in the\npermutation~\\(p\\). We thus derive\n\\begin{equation*}\n\\fun{ins}(x,\\el) \\xrightarrow{\\smash{\\zeta}} [[x]];\\quad\n\\fun{ins}(x,\\cons{j}{s}) \\xrightarrow{\\smash{\\eta}}\n \\cons{\\cons{x,j}{s}}{\\fun{push}(j,\\fun{ins}(x,s))}.\n\\end{equation*}\nwhere the function~\\fun{push/2}\\index{push@\\fun{push/2}} (not to be\nconfused with the function of same name and arity in\nsection~\\ref{sec:persistence}) is such that any\ncall~\\(\\fun{push}(x,t)\\) pushes item~\\(x\\) on all the permutations of\nthe stack of permutations~\\(t\\). 
The order is left unchanged:\n\\begin{equation*}\n\\fun{push}(x,\\el) \\xrightarrow{\\smash{\\theta}} \\el;\\quad\n\\fun{push}(x,\\cons{p}{t}) \\xrightarrow{\\smash{\\iota}}\n \\cons{\\cons{x}{p}}{\\fun{push}(x,t)}.\n\\end{equation*}\nNow we can compute all the permutations of \\((4,1,2,3)\\) or\n\\((\\fun{c}, \\fun{b}, \\fun{d}, \\fun{a})\\) by calling\n\\(\\fun{perm}([4,1,2,3])\\) or \\(\\fun{perm}([\\fun{c}, \\fun{b}, \\fun{d},\n  \\fun{a}])\\). Note that, after computing the permutations of\nlength~\\(n+1\\), the permutations of length~\\(n\\) are not needed\nanymore, which would allow an implementation to reclaim the\ncorresponding memory for further uses (a process called \\emph{garbage\n  collection}\\index{garbage collection}). As far as the costs are\nconcerned, the definition of \\fun{push/2} yields\n\\begin{equation*}\n\\C{\\fun{push}}{0} \\eqn{\\theta} 1;\\qquad\n\\C{\\fun{push}}{n+1} \\eqn{\\iota} 1 + \\C{\\fun{push}}{n},\\quad\n\\text{with \\(n \\geqslant 0\\)}.\n\\end{equation*}\nWe easily deduce \\(\\C{\\fun{push}}{n} = n +\n1\\).\\index{push@$\\C{\\fun{push}}{n}$} We know that the result of\n\\(\\fun{ins}(x,p)\\) is a stack of length~\\(n+1\\) if~\\(p\\) is a\npermutation of \\(n\\)~objects into which we insert one more\nobject~\\(x\\). Hence, the definition of \\fun{ins/2} leads to\n\\begin{equation*}\n\\C{\\fun{ins}}{0}   \\eqn{\\zeta} 1;\\quad\n\\C{\\fun{ins}}{k+1} \\eqn{\\eta} 1 + \\C{\\fun{push}}{k+1} +\n\\C{\\fun{ins}}{k} = 3 + k + \\C{\\fun{ins}}{k}, \\quad \\text{with \\(k\n                   \\geqslant 0\\)},\n\\end{equation*}\nwhere~\\(\\C{\\fun{ins}}{k}\\) is the cost of \\(\\fun{ins}(x,p)\\)\nwith~\\(p\\) of length~\\(k\\). By summing for all~\\(k\\) from\n\\(0\\)~to~\\(n-1\\), for~\\(n>0\\), on both sides we draw\n\\begin{equation*}\n\\sum_{k=0}^{n-1}{\\C{\\fun{ins}}{k+1}}\n  = \\sum_{k=0}^{n-1}{3} + \\sum_{k=0}^{n-1}{k}\n     + \\sum_{k=0}^{n-1}{\\C{\\fun{ins}}{k}}.\n\\end{equation*}\nBy cancelling identical terms in the sums (telescoping)\n\\(\\sum_{k=0}^{n-1}{\\C{\\fun{ins}}{k+1}}\\) and\n\\(\\sum_{k=0}^{n-1}{\\C{\\fun{ins}}{k}}\\), we draw\\index{ins@$\\C{\\fun{ins}}{n}$}\n\\begin{equation*}\n\\C{\\fun{ins}}{n}\n  = 3n + \\tfrac{1}{2}n(n-1) + \\C{\\fun{ins}}{0}\n  = \\tfrac{1}{2}{n^2} + \\tfrac{5}{2}{n} + 1.\n\\end{equation*}\nThis last equation is actually valid even if \\(n =\n0\\). Let~\\(\\C{\\fun{dist}}{n!}\\) be the cost for distributing an item\namong \\(n!\\)~permutations of length~\\(n\\). The definition of\n\\fun{dist/2}\\index{dist@\\fun{dist/2}} shows that it repeats calls to\n\\fun{cat/2}\\index{cat@\\fun{cat/2}} and\n\\fun{ins/2}\\index{ins@\\fun{ins/2}} whose arguments are always of\nlength \\(n+1\\)~and~\\(n\\), respectively, because all processed\npermutations here have the same length. We deduce, for~\\(k \\geqslant\n0\\), that\n\\begin{equation*}\n\\C{\\fun{dist}}{0} \\eqn{\\delta} 1;\\quad\n\\C{\\fun{dist}}{k+1}\n  \\eqn{\\epsilon} 1 + \\C{\\fun{cat}}{n+1} + \\C{\\fun{ins}}{n}\n                    + \\C{\\fun{dist}}{k}\n  = \\tfrac{1}{2}{n^2} + \\tfrac{7}{2}{n} + 4 + \\C{\\fun{dist}}{k},\n\\end{equation*}\nsince we already know that \\(\\C{\\fun{cat}}{n} = n +\n1\\)\\index{cat@$\\C{\\fun{cat}}{n}$} and the value\nof~\\(\\C{\\fun{ins}}{n}\\). 
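Before summing, it may be reassuring to check rules \\clause{\\alpha} to~\\clause{\\iota} themselves on a small input. The following is a minimal sketch in Python, which is \\emph{not} the language of this book: stacks become lists, \\fun{cat/2} becomes list concatenation and each function is a direct transcription of the corresponding rules.\n\\begin{verbatim}\ndef perm(s):                      # rules (alpha), (beta) and (gamma)\n    if len(s) <= 1:\n        return [s] if s else []\n    return dist(s[0], perm(s[1:]))\n\ndef dist(x, t):                   # rules (delta) and (epsilon)\n    return [] if not t else ins(x, t[0]) + dist(x, t[1:])\n\ndef ins(x, p):                    # rules (zeta) and (eta)\n    if not p:\n        return [[x]]\n    return [[x] + p] + push(p[0], ins(x, p[1:]))\n\ndef push(x, t):                   # rules (theta) and (iota)\n    return [[x] + p for p in t]\n\nprint(perm([1, 2, 3]))            # the six permutations of (1,2,3)\n\\end{verbatim}\nLet us now resume the cost analysis of \\fun{dist/2}. 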
By summing both sides of the last equation\nfor all~\\(k\\) from \\(0\\)~to~\\(n!-1\\), we can eliminate most of the\nterms and find a non\\hyp{}recursive definition of\n\\(\\C{\\fun{dist}}{n!}\\)\\index{dist@$\\C{\\fun{dist}}{n}$}:\n\\begin{align*}\n\\sum_{k=0}^{n!-1}{\\C{\\fun{dist}}{k+1}}\n   &= \\sum_{k=0}^{n!-1}{\\left(\\frac{1}{2}{n^2}\n      + \\frac{7}{2}{n} + 4\\right)}\n      + \\sum_{k=0}^{n!-1}{\\C{\\fun{dist}}{k}},\\\\\n\\C{\\fun{dist}}{n!}\n  &= \\left(\\frac{1}{2}{n^2}+\\frac{7}{2}{n}+4\\right)n! +\n\\C{\\fun{dist}}{0} = \\frac{1}{2}(n^2 + 7n + 8)n! + 1.\n\\end{align*}\nLet us finally compute the cost of \\(\\fun{perm}(s)\\), noted\n\\(\\C{\\fun{perm}}{k}\\)\\index{perm@$\\C{\\fun{perm}}{n}$|(}, where \\(k\\)~is\nthe length of the stack~\\(s\\). From rules \\clause{\\alpha}\nto~\\clause{\\gamma}, we deduce the following equations, where \\(k >\n0\\):\n\\begin{align*}\n\\C{\\fun{perm}}{0}   &\\eqn{\\alpha} 1;\\qquad\n\\C{\\fun{perm}}{1}   \\eqn{\\beta} 1;\\\\\n\\C{\\fun{perm}}{k+1}\n  &\\eqn{\\gamma} 1 + \\C{\\fun{perm}}{k} + \\C{\\fun{dist}}{k!}\n   = \\tfrac{1}{2}(k^2 + 7k + 8)k! + 2 + \\C{\\fun{perm}}{k}.\n\\intertext{Again, summing both sides, most of the terms cancel out:}\n\\sum_{k=1}^{n-1}{\\C{\\fun{perm}}{k+1}}\n  &= \\frac{1}{2}\\sum_{k=1}^{n-1}{(k^2+7k+8)k!} + \\sum_{k=1}^{n-1}{2}\n     + \\sum_{k=1}^{n-1}{\\C{\\fun{perm}}{k}},\\\\\n\\C{\\fun{perm}}{n}\n  &= \\frac{1}{2}\\sum_{k=1}^{n-1}{(k^2 + 7k + 8)k!}\n     + 2(n-1) + \\C{\\fun{perm}}{1}\\\\\n  &= \\frac{1}{2}\\sum_{k=1}^{n-1}{((k+2)(k+1)+6+4k)k!} + 2n - 1\\\\\n  &= \\frac{1}{2}\\sum_{k=1}^{n-1}{(k+2)(k+1)k!}\n     + 3 \\sum_{k=1}^{n-1}{k!} + 2 \\sum_{k=1}^{n-1}{kk!} + 2n - 1\\\\\n  &= \\frac{1}{2}\\sum_{k=1}^{n-1}{(k+2)!}\n     + 3 \\sum_{k=1}^{n-1}{k!} + 2 \\sum_{k=1}^{n-1}{kk!} + 2n - 1\\\\\n  &= \\frac{1}{2}\\sum_{k=3}^{n+1}{k!}\n     + 3 \\sum_{k=1}^{n-1}{k!} + 2 \\sum_{k=1}^{n-1}{kk!} + 2n - 1\\\\\n%  &= \\left(\\frac{1}{2}((n+1)! + n! - 2! - 1!)\n%     + \\frac{1}{2}\\sum_{k=1}^{n-1}{k!}\\right)\n%     + 3 \\sum_{k=1}^{n-1}{k!}\\\\\n%  &\\phantom{= x} + 2 \\sum_{k=1}^{n-1}{kk!} + 2n - 1.\\\\\n  &= \\frac{1}{2}{(n+2)n!} + \\frac{7}{2}\\sum_{k=1}^{n-1}{k!}\n     + 2 \\sum_{k=1}^{n-1}{kk!} + 2n - \\frac{5}{2}.\n\\end{align*}\nThis last equation is actually valid even if~\\(n = 1\\). One sum has a\nsimple closed form:\n\\begin{equation*}\n\\sum_{k=1}^{n-1}{kk!} = \\sum_{k=1}^{n-1}((k+1)! - k!) =\n\\sum_{k=2}^{n}{k!} - \\sum_{k=1}^{n-1}{k!} = n! - 1.\n\\end{equation*}\nResuming our previous derivation,\n\\begin{align*}\n\\C{\\fun{perm}}{n}\n  &= \\frac{1}{2}{nn!} + n! + \\frac{7}{2}\\sum_{k=1}^{n-1}{k!}\n     + 2(n! - 1) + 2n - \\frac{5}{2}\\\\\n  &= \\frac{1}{2}{nn!} + 3n!\n     + 2n - \\frac{9}{2} + \\frac{7}{2}\\sum_{k=1}^{n-1}{k!},\\quad\n     \\text{with \\(n > 0\\)}.\n\\end{align*}\nThe remaining sum is called the \\emph{left factorial}\n\\citep{Kurepa_1971}\\index{factorial!left $\\sim$} and is usually\ndefined as\n\\begin{equation*}\n!n := \\sum_{k=1}^{n-1}{k!},\\quad \\text{with \\(n > 0\\)}.\n\\end{equation*}\nUnfortunately, no closed expression of the left factorial is\nknown. This is actually a common situation when determining the cost\nof relatively complex functions. The best course of action is then to\nstudy the asymptotic approximation of the cost. Obviously, \\(n!\n\\leqslant !(n+1)\\). Also,\n\\begin{equation*}\n!(n+1) - n! \\leqslant (n-2) \\cdot (n-2)! + (n-1)! \\leqslant\n(n-1) \\cdot (n-2)! + (n-1)! 
= 2 (n-1)!\n\\end{equation*}\nTherefore,\n\\begin{equation*}\n1 \\leqslant \\frac{!(n+1)}{n!} \\leqslant \\frac{n! + 2(n-1)!}{n!} = 1 +\n\\frac{2}{n} \\;\\Rightarrow\\; !n \\sim (n-1)!\n\\end{equation*}\nAlso, \\((n+1)! = (n+1)n!\\), so \\((n+1)!/(nn!) = 1 + 1/n\\), hence \\(nn!\n\\sim (n+1)!\\). Consequently,\n\\begin{equation*}\n\\abovedisplayshortskip=0pt\n\\abovedisplayskip=0pt\n%\\belowdisplayskip=0pt\n\\C{\\fun{perm}}{n} \\sim \\frac{1}{2}(n+1)! + 3n! + \\frac{7}{2}(n-1)!\n+ 2n - \\frac{9}{2} \\sim \\frac{1}{2}(n+1)!\n\\end{equation*}\nThis is an unbearably slow program, as expected. We should not hope to\ncompute \\(\\C{\\fun{perm}}{11}\\) \\index{perm@$\\C{\\fun{perm}}{n}$|)}\neasily and there is no way to improve significantly the cost because\nthe number of permutations it computes is inherently exponential, so\nit would even suffice to spend only one function call per permutation\nto obtain an exponential cost. In other words, the memory necessary to\nhold the result has a size which is exponential in the size of the\ninput, therefore, the cost is at least exponential, because at least\none function call is necessary to allocate some memory. For a deep\nstudy on the enumeration of all permutations, refer to the survey of\n\\cite{Knuth_2011}.\n\n\\paragraph{Permutations and sorting}\n\nPermutations\\index{permutation|(} are worth studying in detail because\nof their intimate relationship with sorting. A permutation can be\nthought of as scrambling originally ordered keys and a sorting\npermutation puts them back to their place. A slightly different\nnotation for permutations is helpful here, one which shows the indexes\ntogether with the keys. For example, instead of writing \\(\\pi_1 =\n(2,4,1,5,3)\\), we write\n\\begin{equation*}\n\\pi_1 =\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n2 & 4 & 1 & 5 & 3\n\\end{pmatrix}.\n\\end{equation*}\nThe first line is made of the ordered indexes and the second line\ncontains the keys. In general, a permutation \\(\\pi =\n(a_1,a_2,\\dots,a_n)\\) is equivalent to\n\\begin{equation*}\n\\pi =\n\\begin{pmatrix}\n     1 &      2 & \\dots &     n\\\\\n\\pi(1) & \\pi(2) & \\dots & \\pi(n)\n\\end{pmatrix},\n\\end{equation*}\nwhere~\\(a_i = \\pi(i)\\), for all~\\(i\\) from \\(1\\)~to~\\(n\\). The\nfollowing permutation~\\(\\pi_s\\) sorts the keys of~\\(\\pi_1\\):\n\\begin{equation*}\n\\pi_s =\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n3 & 1 & 5 & 2 & 4\n\\end{pmatrix}.\n\\end{equation*}\nTo see why, we define the composition\\index{permutation!composition}\nof two permutations\\index{permutation} \\(\\pi_a\\)~and~\\(\\pi_b\\):\n\\begin{equation*}\n\\pi_b \\circ \\pi_a :=\n\\begin{pmatrix}\n              1 &               2 & \\dots & n\\\\\n\\pi_b(\\pi_a(1)) & \\pi_b(\\pi_a(2)) & \\dots & \\pi_b(\\pi_a(n))\n\\end{pmatrix}.\n\\end{equation*}\nThen \\(\\pi_s~\\circ~\\pi_1 = \\mathcal{I}\\), where the \\emph{identity\n  permutation}~\\(\\mathcal{I}\\)\\index{permutation!identity} is such\nthat \\(\\mathcal{I}(k) = k\\), for all indexes~\\(k\\). 
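This equality is easy to check mechanically; in Python again (a throwaway sketch of ours, with permutations as tuples and indexes starting at~\\(1\\)):\n\\begin{verbatim}\ndef compose(pb, pa):              # (pb o pa)(i) = pb(pa(i));\n    return tuple(pb[pa[i - 1] - 1]     # tuples are 0-indexed,\n                 for i in range(1, len(pa) + 1))  # hence the -1\n\np1 = (2, 4, 1, 5, 3)\nps = (3, 1, 5, 2, 4)\nprint(compose(ps, p1))            # (1, 2, 3, 4, 5), the identity\n\\end{verbatim}\n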
In other words,\n\\(\\pi_s = \\pi_1^{-1}\\), that is, \\emph{sorting a permutation consists\n  in building its inverse}\\index{permutation!inverse}:\n\\begin{equation*}\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n3 & 1 & 5 & 2 & 4\n\\end{pmatrix}\n\\circ\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n2 & 4 & 1 & 5 & 3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n1 & 2 & 3 & 4 & 5\n\\end{pmatrix}.\n\\end{equation*}\n\nAn alternative representation of permutations and their\ncomposition\\index{permutation!composition} is based on considering\nthem as bijections from an interval onto itself, denoted by\n\\emph{bipartite graphs}\\index{graph!bipartite\n  $\\sim$}\\index{permutation!bigraph}, also called\n\\emph{bigraphs}\\index{bigraph|see{graph, bipartite}}. Such\ngraphs are made of two disjoint, ordered sets of vertices of the same\ncardinality, the indexes and the keys, and the edges always go from an\nindex to a key, without sharing the vertices with other edges. For\nexample, permutation~\\(\\pi_1\\) is shown in \\fig~\\vref{fig:perm1}\n\\begin{figure}\n\\centering\n\\subfloat[Bigraph of \\(\\pi_1 = (2,4,1,5,3)\\)\\label{fig:perm1}]{%\n\\includegraphics[bb=79 662 206 721]{perm1}\n}\n\\qquad\\qquad\n\\subfloat[Bigraph of \\(\\pi_1^{-1} = (3,1,5,2,4)\\)\\label{fig:perm3}]{%\n\\includegraphics[bb=77 662 208 721]{perm3}\n}\n\\caption{Permutation \\(\\pi_1\\) and its inverse \\(\\pi_1^{-1}\\)\n\\label{fig:pi_1}\\index{permutation!inverse}}\n\\end{figure}\nand its inverse \\(\\pi_1^{-1}\\) is displayed in\n\\fig~\\vref{fig:perm3}. The composition\\index{permutation!composition}\nof~\\(\\pi_1^{-1}\\) and~\\(\\pi_1\\) is then obtained by identifying the\nkey vertices of~\\(\\pi_1\\) with the index vertices of~\\(\\pi_1^{-1}\\),\nas shown in \\fig~\\vref{fig:perm4}.\n\\begin{figure}[t]\n\\centering\n\\subfloat[\\(\\pi_1^{-1} \\circ \\pi_1\\)\\label{fig:perm4}]{%\n\\includegraphics[bb=81 634 204 721]{perm4}\n}%\n\\qquad\\qquad\n\\subfloat[\\(\\mathcal{I} = \\pi_1^{-1} \\circ \\pi_1\\)\\label{fig:perm2}]{%\n\\includegraphics[bb=81 634 204 721]{perm2}\n}\n\\caption{Applying \\(\\pi_1\\) to \\(\\pi_1^{-1}\\).}\n\\end{figure}\nThe identity permutation\\index{permutation!identity} is obtained by\nreplacing two adjacent edges by their transitive\nclosure\\index{transitive closure} and erasing the intermediate\nvertices, as shown in \\fig~\\ref{fig:perm2}\npage~\\pageref{fig:perm2}. Note that a permutation may equal its\ninverse, like\n\\begin{equation*}\n\\pi_3 =\n\\begin{pmatrix}\n1 & 2 & 3 & 4 & 5\\\\\n3 & 4 & 1 & 2 & 5\n\\end{pmatrix}.\n\\end{equation*}\n\\Fig~\\ref{fig:involution}\n\\begin{figure}\n\\centering\n\\subfloat[\\(\\pi_3 \\circ \\pi_3\\)\\label{fig:inv1}]{%\n\\includegraphics[bb=81 634 204 721]{inv1}\n}\n\\qquad\\qquad\n\\subfloat[\\(\\mathcal{I} = \\pi_3 \\circ \\pi_3\\)]{%\n\\includegraphics[bb=81 634 204 721]{inv2}\n}\n\\caption{Involution \\(\\pi_3\\) sorts itself\\label{fig:involution}}\n\\end{figure}\nshows that \\(\\pi_3~\\circ~\\pi_3 = \\mathcal{I}\\), so \\(\\pi_3\\) is an\n\\emph{involution}\\index{permutation!involution}\\index{permutation!inverse}.\n\nStudying permutations and their basic properties helps in understanding\nsorting algorithms, particularly their average cost. They also provide\na way to quantify disorder. Given \\((1,3,5,2,4)\\), we can see that\nonly the pairs of keys \\((3,2)\\), \\((5,2)\\) and~\\((5,4)\\) are out of\norder. 
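A quick mechanical check of that observation, in the same throwaway Python style as before:\n\\begin{verbatim}\na = (1, 3, 5, 2, 4)\nprint([(a[i], a[j])        # the out-of-order pairs of keys\n       for i in range(len(a))\n       for j in range(i + 1, len(a)) if a[i] > a[j]])\n# prints [(3, 2), (5, 2), (5, 4)]\n\\end{verbatim}\n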
In general, given \\((a_1, a_2, \\dots, a_n)\\), the pairs\n\\((a_i,a_j)\\) such that~\\(i < j\\) and~\\(a_i > a_j\\) are called\n\\emph{inversions}\\index{permutation!inversion}. The more inversions,\nthe greater the disorder. As expected, the identity\npermutation\\index{permutation!identity} has no inversions and the\npreviously studied permutation \\(\\pi_1 = (2,4,1,5,3)\\) has~\\(4\\). When\nconsidering permutations as represented by\nbigraphs\\index{graph!bipartite $\\sim$}\\index{permutation!bigraph}, an\ninversion\\index{permutation!inversion} corresponds to an intersection\nof two edges, more precisely, it is the pair made of the keys pointed\nat by two arrows. Therefore, the number of\ninversions\\index{permutation!inversion} is the number of edge\ncrossings, so, for instance, \\(\\pi_1^{-1}\\)~has \\(4\\)~inversions. In\nfact, \\emph{the inverse\\index{permutation!inverse} of a permutation\n  has the same number of inversions\\index{permutation!inversion} as\n  the permutation\\index{permutation} itself}. This can be clearly seen\nwhen comparing the bigraphs\\index{permutation!bigraph} of~\\(\\pi_1\\)\nand~\\(\\pi_1^{-1}\\) in \\fig~\\vref{fig:pi_1}: in order to deduce the\nbigraph of~\\(\\pi_1^{-1}\\) from the one corresponding to~\\(\\pi_1\\), let\nus reverse each edge, that is, the direction in which the arrows are\npointing, then swap the indexes and the keys, that is, exchange the\ntwo lines of vertices. Alternatively, we can imagine that we fold down\nthe paper along the key line, then look through and reverse the\narrows. Anyhow, the crossings are invariant. The horizontal symmetry\nis obvious in \\figs~\\ref{fig:perm4} and~\\ref{fig:inv1}.\n\n\\paragraph{Minimax}\n\\label{par:minimax}\n\nAfter analysing the cost of a sorting algorithm based on comparisons,\nwe will need to know how close it is to an optimal sorting\nalgorithm\\index{sorting!optimality|(}. The first theoretical problem\nwe examine is that of the best worst case, so called the\n\\emph{minimax}\\index{cost!minimax}\\index{sorting!minimax|(}. \\Fig~\\vref{fig:cmp_tree}\nfeatures the tree of all possible comparisons \\index{tree!comparison\n  $\\sim$|see{sorting}} for sorting three numbers.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[bb=71 636 327 723]{cmp_tree}\n\\caption{A comparison tree for sorting three keys\n\\label{fig:cmp_tree}}\n\\end{figure}\nThe \\emph{external nodes}\\index{binary tree!external\n  node}\\label{def:external_node} are all the\npermutations\\index{permutation|)} of \\((a_1,a_2,a_3)\\). The\n\\emph{internal nodes}\\index{binary tree!internal node} are comparisons\nbetween two keys \\(a_i\\)~and~\\(a_j\\), noted \\(a_i?a_j\\). Note that\nleaves, in this setting, are internal nodes with two external nodes as\nchildren.\n\\begin{wrapfigure}[8]{r}[0pt]{0pt}\n% [8] vertical lines\n% {r} mandatory right placement\n% [0pt] of margin overhang\n\\centering\n\\includegraphics[bb=71 638 217 723]{red_cmp}\n\\caption{Useless \\(a_1 > a_3\\)\\label{fig:red_cmp}}\n\\end{wrapfigure}\nIf \\(a_i < a_j\\), then this property holds everywhere in the left\nsubtree, otherwise \\(a_i > a_j\\) holds in the right subtree. This tree\nis one possible amongst many: it corresponds to an algorithm which\nstarts by comparing \\(a_1\\)~and~\\(a_2\\) and there are, of course, many\nother strategies. But it does not contain redundant comparisons: if a\npath from the root to a leaf includes \\(a_i < a_j\\) and \\(a_j < a_k\\),\nwe do not expect the useless \\(a_i < a_k\\). 
\\Fig~\\vref{fig:red_cmp}\nshows an excerpt of a comparison tree with such a useless\ncomparison. The special external node~\\(\\bot\\) corresponds to no\npermutation because the comparison \\(a_1 < a_3\\) cannot fail as it is\nimplied by transitivity of the previous comparisons on the path from\nthe root.\n\\begin{center}\n  \\emph{A comparison tree for \\(n\\)~keys without redundancy has\n    \\(n!\\)~external nodes.}\n\\end{center}\nBecause we are investigating minimum\\hyp{}comparison sorting, we shall\nhenceforth consider optimal comparison trees with \\(n!\\)~external\nnodes. Furthermore, amongst them, we want to determine the trees such\nthat the maximum number of comparisons is minimum.\n\nAn \\emph{external path}\\index{binary tree!external path} is a path\nfrom the root to an external node. Let the \\emph{height}\\index{binary\n  tree!height} of a tree be the length (that is, the number of edges)\nof its longest external path. In \\fig~\\ref{fig:cmp_tree}, the height\nis~\\(3\\) and there are~\\(4\\) external paths of length~\\(3\\), like\n\\((a_1 < a_2) \\rightarrow (a_2 > a_3) \\rightarrow (a_1 > a_3)\n\\rightarrow (a_3,a_1,a_2)\\).\n\nThe maximality constraint means that we must consider the height of\nthe comparison tree because the number of internal nodes (comparisons)\nalong the maximum external paths is an upper bound for the number of\ncomparisons needed for sorting \\emph{all} the\npermutations\\index{permutation}\\index{sorting}.\n\nThe minimality constraint in the problem statement above then\nsignifies that \\emph{we want a lower bound on the height of a\n  comparison tree with \\(n!\\)~external nodes.}\n\n% Wrapping figure better declared before a paragraph\n%\n\\setlength{\\intextsep}{0pt}\n\\begin{wrapfigure}[]{r}[0pt]{0pt}\n% {r} mandatory right placement (better because of a list)\n\\centering\n\\includegraphics{comp_tree}\n\\caption{Perfect binary tree of height~\\(3\\)\\label{fig:comp_tree}}\n\\end{wrapfigure}\nA \\emph{perfect binary tree}\\index{binary tree!perfect $\\sim$} is a\nbinary tree whose internal nodes have children which are either two\ninternal nodes or two external nodes. If such a tree has height~\\(h\\),\nthen it has \\(2^h\\)~external nodes. For instance,\n\\fig~\\vref{fig:comp_tree} shows the case where the height~\\(h\\)\nis~\\(3\\) and there are indeed \\(2^h=8\\) external nodes, figured as\nsquares. Since, by definition, minimum\\hyp{}comparison trees have\n\\(n!\\)~external nodes and height~\\(S(n)\\), they cannot contain more\nexternal nodes than a perfect binary tree of the same height, that\nis, \\(n! \\leqslant 2^{S(n)}\\), therefore\n\\begin{equation*}\nS(n) \\geqslant \\ceiling{\\lg n!},\n\\end{equation*}\nwhere \\(\\ceiling{x}\\) (\\textsl{ceiling of~\\(x\\)})\n\\index{ceiling@$\\ceiling{x}$|see{ceiling function}} \\index{ceiling\n  function} is the least integer greater than or equal to~\\(x\\). To\ndeduce a good lower bound on~\\(S(n)\\), we need the following theorem.\n\\setlength{\\intextsep}{12pt}\n\\begin{thm}[Sum and integral]\n\\label{thm:integral_bounds}\nLet \\(f\\colon [a,b] \\rightarrow \\mathbb{R}\\) be an integrable,\nmonotonically increasing function. Then\n\\begin{equation}\n\\abovedisplayskip=0pt\n\\sum_{k=a}^{b-1}f(k) \\leqslant \\int_{a}^{b}{\\!\\!f(x)} \\,dx\n                   \\leqslant \\sum_{k=a+1}^{b}{\\!\\!\\!f(k)}.\n\\tag*{\\(\\blacksquare\\)}\n\\end{equation}\n\\end{thm}\n\\noindent Let us take \\(a:=1\\), \\(b:=n\\) and \\(f(x) := \\lg x\\). 
The\ntheorem implies\n\\begin{equation*}\nn\\lg n - \\frac{n}{\\ln 2} + \\frac{1}{\\ln 2}\n= \\int_{1}^{n}{\\!\\!\\lg x} \\,dx \\leqslant \\sum_{k=2}^{n}{\\lg k}\n= \\lg n! \\leqslant S(n).\n\\end{equation*}\nA more powerful but more complex approach in real analysis, known as\nEuler--Maclaurin summation, yields Stirling's formula\n\\citep[chap.~4]{SedgewickFlajolet_1996}\\index{Stirling's formula},\nwhich is a very precise lower bound for \\(\\lg n!\\):\n\\begin{equation}\n\\Big(n + \\frac{1}{2}\\Big)\\lg n - \\frac{n}{\\ln 2} + \\lg\\sqrt{2\\pi}\n< \\lg n! \\leqslant S(n).\n\\label{ineq:S_lower}\n\\end{equation}\n\n\\paragraph{Minimean}\n\\label{par:opt_sort:minimean}\n\\index{cost!minimean|(}\n\\index{sorting!minimean|(}\n\\index{binary tree!external path length|(}\n\nWe investigate here the best mean case, or \\emph{minimean}. Let us\ncall the sum of the lengths of all the external paths the\n\\emph{external path length of the\n  tree}\\label{sorting__external_path_length}\\index{binary\n  tree!external path length}. Then, the average number of comparisons\nis the mean external path length. In \\fig~\\vref{fig:cmp_tree}, this is\n\\((2+3+3+3+3+2)/3!=8/3\\). Our problem here therefore consists in\ndetermining the shape of the optimal comparison trees of minimum\nexternal path length.\n\nThese trees have their external nodes on one or two successive levels\nand are thus \\emph{almost perfect}.\\index{binary tree!almost perfect\n  $\\sim$} Let us consider a binary tree where this is not the case, so\nthe topmost external nodes are located at level~\\(l\\) and the\nbottommost at level~\\(L\\), with \\(L \\geqslant l + 2\\). If we exchange\na leaf at level~\\(L-1\\) with an external node at level~\\(l\\), the\nexternal path length is decreased by \\((l+2L) - (2(l+1) + (L-1)) = L -\nl - 1 \\geqslant 1\\). Repeating these exchanges yields the expected\nshape.\n\nThe external paths are made up of \\(p\\)~paths ending at the\npenultimate level~\\(h-1\\) and \\(q\\)~paths ending at the bottommost\nlevel~\\(h\\). (The root has level~\\(0\\).) Let us find two equations\nwhose solutions are \\(p\\)~and~\\(q\\).\n\\begin{itemize}\n\n  \\item From the minimax problem\\index{sorting!minimax|)}, we know\n    that an optimal comparison tree for \\(n\\)~keys has \\(n!\\)~external\n    nodes: \\(p+q=n!\\).\n\n  \\item If we replace the external nodes at level~\\(h-1\\) by leaves,\n    the level~\\(h\\) becomes full with~\\(2^h\\) external nodes:\n    \\(2p+q=2^h\\).\n\n\\end{itemize}\nWe now have two linear equations satisfied by~\\(p\\) and~\\(q\\), whose\nsolutions are \\(p=2^h-n!\\) and \\(q=2n!-2^h\\). Furthermore, we can now\nexpress the minimal external path length as follows: \\((h-1)p + hq =\n(h+1)n! - 2^h\\). Finally, we need to determine the height~\\(h\\) of the\ntree in terms of~\\(n!\\). This can be done by remarking that, by\nconstruction, the last level may be incomplete, so \\(0 < q \\leqslant\n2^h\\), that is, \\(n! \\leqslant 2^h < 2n!\\) or, equivalently, \\(h =\n\\ceiling{\\lg n!}\\). We conclude that\nthe minimum external path length is\n\\begin{equation*}\n(\\ceiling{\\lg n!} + 1) n! - 2^{\\ceiling{\\lg n!}}.\n\\end{equation*}\nLet \\(M(n)\\) \\label{def:Mn} be the minimum average number of\ncomparisons of an optimal sorting algorithm. We have\n\\begin{equation*}\nM(n) = \\ceiling{\\lg n!} + 1 - \\frac{1}{n!}2^{\\ceiling{\\lg n!}}.\n\\end{equation*}\nWe have \\(\\ceiling{\\lg n!} = \\lg n! 
+ x\\), with \\(0 \\leqslant x < 1\\),\ntherefore, if we set the function \\(\\theta(x) := 1 + x - 2^x\\), we\ndraw\n\\begin{equation*}\nM(n) = \\lg n! + \\theta(x).\n\\end{equation*}\nWe have \\(\\max_{0\\leqslant x < 1}\\theta(x) = 1 - (1 + \\ln\\ln 2)/\\!\\ln 2\n\\simeq 0.08607\\), therefore,\n\\begin{equation}\n\\lg n! \\leqslant M(n) < \\lg n! + 0.09.\n\\label{ineq:Mn}\n\\end{equation}\n\\index{sorting!optimality|)}\n\\index{cost!minimean|)}\n\\index{sorting!minimean|)}\n\\index{binary tree!external path length|)}\n", "meta": {"hexsha": "6bad1deda820f516d7ea5ec6eac4e84dd113b5ad", "size": 28857, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sorting.tex", "max_stars_repo_name": "rinderknecht/Book", "max_stars_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sorting.tex", "max_issues_repo_name": "rinderknecht/Book", "max_issues_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sorting.tex", "max_forks_repo_name": "rinderknecht/Book", "max_forks_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.4593373494, "max_line_length": 85, "alphanum_fraction": 0.6685379631, "num_tokens": 10630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8104789063814616, "lm_q1q2_score": 0.5613717170049277}}
{"text": "\\documentclass{beamer}\n\n\\usepackage{verbatim}\n\\usepackage{fancyvrb}\n\\usepackage{amsmath}\n\\usepackage{mathtools}\n\\usepackage{booktabs}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{calc}\n\\usepackage{color}\n\\usepackage{multicol}\n\\usepackage{wrapfig}\n\\usepackage{natbib}\n\\usepackage[ruled,vlined]{algorithm2e}\n\\usepackage{animate}\n\\usepackage{mathtools}\n\\usepackage{listings}\n\n\\usepackage{cmbright}\n\\fontencoding{OT1}\\fontfamily{cmbr}\\selectfont %to load ot1cmbr.fd\n\\DeclareFontShape{OT1}{cmbr}{bx}{n}{% change bx definition\n<->cmbrbx10%\n}{}\n\\normalfont % back to normalfont\n\n% two col: two columns\n\\newenvironment{twocol}[4]{\n\\begin{columns}[c]\n\\column{#1\\textwidth}\n#3\n\\column{#2\\textwidth}\n#4\n\\end{columns}\n}\n\n\\makeatletter\n\\setbeamertemplate{theorem begin}\n{%\n\\begin{\\inserttheoremblockenv}\n  {}{\\usebeamerfont*{block title}\\usebeamercolor[fg]{block title}%\n  \\inserttheoremname\n  %\\inserttheoremnumber\n  \\ifx \\inserttheoremaddition \\empty \\else\\ (\\inserttheoremaddition)\\fi\n  \\inserttheorempunctuation}\n  \\normalfont\n  }\n  \\setbeamertemplate{theorem end}{\\end{\\inserttheoremblockenv}}\n\\makeatother\n\n\\newcommand{\\E}{\\mathrm{E}}\n\\newcommand{\\Var}{\\mathrm{Var}}\n\\newcommand{\\Cov}{\\mathrm{Cov}}\n\\newcommand{\\sd}{\\mathrm{sd}}\n\\newcommand{\\s}{\\mathrm{s}}\n\\newcommand{\\Corr}{\\mathrm{Corr}}\n\\newcommand{\\rank}{\\mathrm{rank}}\n\\newcommand{\\trace}{\\mathrm{trace}}\n\\newcommand{\\nullspace}{\\mathrm{null}}\n\\newcommand{\\myspan}{\\mathrm{span}}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\DeclareMathOperator*{\\softmax}{softmax}\n\n\\definecolor{darkgreen}{rgb}{0,0.5,0}\n\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{exe}{Exercise}\n\\newtheorem{notation}{Notation}\n\\newtheorem{remark}{Remark}\n\n\\definecolor{darkgreen}{rgb}{0,0.5,0}\n\n\\title{Model Selection and Validation}\n\\author{Zhenisbek Assylbekov}\n\\institute{Department of Mathematics}\n\\date{Regression Analysis}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}<beamer>\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}\n  \\titlepage\n\\end{frame}\n\n\\section{Manual model selection}\n\n\\begin{frame}[fragile]{Salary example}{\\url{https://github.com/zh3nis/MATH440/tree/main/chp09/salary.R}}\nModel annual salary (in \\$1000) as function of \\begin{itemize}\n    \\item\\pause \\texttt{age} (in years),\n    \\item\\pause \\texttt{educ}ation (years of post-high-school education), and \n    \\item\\pause \\texttt{pol}itical affiliation (\\texttt{pol} = D for\nDemocrat, \\texttt{pol} = R for Republican, and \\texttt{pol} = O for other).\n\\end{itemize}  \n\n\\begin{footnotesize}\n\\pause\\begin{verbatim}\n> salary_data = read.table(\"path/to/salary.txt\", header=FALSE)\n> colnames(salary_data) = c('salary', 'age', 'educ', 'pol')\n> head(salary_data)\n  salary age educ pol\n1     38  25    4   D\n2     45  27    4   R\n3     28  26    4   O\n4     55  39    4   D\n5     74  42    4   R\n6     43  41    4   O\n\\end{verbatim}\n\\end{footnotesize}\n\\end{frame}\n\n\\begin{frame}[fragile]{Salary example}{\\url{https://github.com/zh3nis/MATH440/tree/main/chp09/salary.R}}\n\\begin{footnotesize}\n\\begin{verbatim}\nCoefficients:\n            Estimate Std. 
Error t value Pr(>|t|)    \n(Intercept)  17.0313     7.3459   2.318  0.03735 *  \nage           0.8983     0.1968   4.565  0.00053 ***\neduc          1.5039     1.1841   1.270  0.22632    \npolO        -16.5404     4.8807  -3.389  0.00484 ** \npolR          9.1587     4.8482   1.889  0.08139 .  \n---\n\nResidual standard error: 8.209 on 13 degrees of freedom\nMultiple R-squared:  0.8374,\tAdjusted R-squared:  0.7873     \n\\end{verbatim}\n\\end{footnotesize}\n\n\\begin{itemize}\n\\item We can also test quadratic effects and interactions.\n\\item From the initial fit, \\texttt{educ} is not needed with \\texttt{age} and \\texttt{pol} in\nthe model. Let's refit:\n\\end{itemize}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Drop \\texttt{educ}?}\n\\begin{footnotesize}\n\\begin{verbatim}\n> m2 = update(m1, . ~ . - educ)\n> anova(m1, m2)\nAnalysis of Variance Table\n\nModel 1: salary ~ age + educ + pol\nModel 2: salary ~ age + pol\n  Res.Df    RSS Df Sum of Sq      F Pr(>F)\n1     13 876.03                           \n2     14 984.72 -1    -108.7 1.6131 0.2263\n\n> summary(m2)\nlm(formula = salary ~ age + pol, data = salary_data)\n\nCoefficients:\n            Estimate Std. Error t value Pr(>|t|)    \n(Intercept)  21.5172     6.5806   3.270  0.00559 ** \nage           1.0345     0.1686   6.136 2.58e-05 ***\npolO        -16.7414     4.9838  -3.359  0.00468 ** \npolR          8.6379     4.9354   1.750  0.10196    \nResidual standard error: 8.387 on 14 degrees of freedom\nMultiple R-squared:  0.8172,\tAdjusted R-squared:  0.778 \n\\end{verbatim}\n\\end{footnotesize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Add 2nd order terms?}\n\\begin{footnotesize}\n\\begin{verbatim}\n> salary_data$age2 = salary_data$age^2\n> m3 = update(m2, . ~ . + age2 + age*pol)\n> summary(m3)\n\nCall:\nlm(formula = salary ~ age + pol + age2 + age:pol, data = salary_data)\n\nCoefficients:\n             Estimate Std. Error t value Pr(>|t|)   \n(Intercept) -20.94355   20.34047  -1.030  0.32528   \nage           3.17751    0.93793   3.388  0.00606 **\npolO        -16.90846   22.83050  -0.741  0.47444   \npolR         -1.18699   21.44129  -0.055  0.95684   \nage2         -0.02514    0.01255  -2.004  0.07037 . \nage:polO      0.05101    0.65536   0.078  0.93936   \nage:polR      0.28944    0.61956   0.467  0.64950   \n---\nSignif. 
codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 7.434 on 11 degrees of freedom\nMultiple R-squared:  0.8871,\tAdjusted R-squared:  0.8256 \n\\end{verbatim}\n\\end{footnotesize}\n\\end{frame}\n\n\n\\begin{frame}[fragile]{Drop 2nd order terms?}\n\\begin{footnotesize}\n\\begin{verbatim}\n> anova(m3, m2)\nAnalysis of Variance Table\n\nModel 1: salary ~ age + pol + age2 + age:pol\nModel 2: salary ~ age + pol\n  Res.Df    RSS Df Sum of Sq      F Pr(>F)\n1     11 607.88                           \n2     14 984.72 -3   -376.84 2.2731 0.1369    \n\\end{verbatim}\n\\end{footnotesize}\n\\pause We don't need \\textit{all} the 2nd order terms ($p=0.137$), although there\u2019s some indication in the table of regression effects that age$^2$ might be needed.\n\\pause\\begin{footnotesize}\n\\begin{verbatim}\n> anova(m4, m2)\nAnalysis of Variance Table\n\nModel 1: salary ~ age + pol + age2\nModel 2: salary ~ age + pol\n  Res.Df    RSS Df Sum of Sq      F  Pr(>F)  \n1     13 645.09                              \n2     14 984.72 -1   -339.64 6.8444 0.02134 *\n\\end{verbatim}\n\\end{footnotesize}\n\\end{frame}\n\n\\begin{frame}[fragile]{Final model}\nOur final model is \n$$\n\\text{salary}_i = \\beta_0+\\beta_1\\cdot\\text{age}+\\beta_{2}\\cdot\\mathbb{I}[\\text{pol}=\\text{O}]+\\beta_{3}\\cdot\\mathbb{I}[\\text{pol}=\\text{R}]+\\beta_{4}\\cdot\\text{age}^2+\\epsilon_i\n$$\n\\begin{footnotesize}\n\\begin{verbatim}\n> summary(m4)\n\nCall:\nlm(formula = salary ~ age + pol + age2, data = salary_data)\n\nCoefficients:\n              Estimate Std. Error t value Pr(>|t|)   \n(Intercept) -24.751745  18.529243  -1.336  0.20452   \nage           3.272388   0.867049   3.774  0.00232 **\npolO        -15.891696   4.198662  -3.785  0.00227 **\npolR          9.260234   4.152253   2.230  0.04399 * \nage2         -0.024576   0.009394  -2.616  0.02134 * \n---\n\nResidual standard error: 7.044 on 13 degrees of freedom\nMultiple R-squared:  0.8802,\tAdjusted R-squared:  0.8434 \n\\end{verbatim}\n\\end{footnotesize}\n\\end{frame}\n\n\\section{Caution regarding scatterplots}\n\n\\begin{frame}{Scatterplots}\n\\begin{itemize}\n\\item Scatterplots show the \\textit{marginal} relationship between $Y$ and\neach of the $x_1, \\ldots, x_k$. \\pause They \\textit{cannot} show you anything about the joint relationship among the $Y, x_1, \\ldots, x_k$.\n\\item \\pause Nonlinear relationship between $Y$ and $x_j$ ($j = 1, \\ldots, k$) \\textit{marginally} may or may not be present in the joint relationship.\n\\item \\pause Actually, any strong relationship between $Y$ and $x_j$ marginally\ndoesn't mean that $x_j$ will be needed in the presence of other\nvariables.\n\\item \\pause Seeing no marginal relationship between Y and $x_j$ does not\nmean that $x_j$ is not needed in a model including other\npredictors.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{No relationship?}\nHere $Y$ vs. $x_1$ and $Y$ vs. $x_2$ shows nothing. There seems to be some multicollinearity though.\n\\vspace{10pt}\n\n\\centerline{\\includegraphics[scale=0.3]{plots/scat}}\n\\end{frame}\n\n\\begin{frame}{\\texttt{proc reg} output}\n\\centerline{$x_1$ important marginally? $Y_i=\\beta_0+\\beta_1 x_{i1}+\\epsilon_i$}\n\n\\includegraphics[scale=0.35]{plots/x1}\n\n\\centerline{$x_2$ important marginally? $Y_i=\\beta_0+\\beta_2 x_{i2}+\\epsilon_i$}\n\n\\includegraphics[scale=0.35]{plots/x2}\n\n\\centerline{$x_1$, $x_2$ important jointly? 
$Y_i=\\beta_0+\\beta_1 x_{i1}+\\beta_2 x_{i2}+\\epsilon_i$}\n\n\\includegraphics[scale=0.35]{plots/x12}\n\\end{frame}\n\n\\begin{frame}{Nonlinear relationship?}\nMarginally, $x_1$ and $x_2$ have highly nonlinear relationships with $Y$. Should we transform?\n\n\\centerline{\\includegraphics[scale=0.3]{plots/nonlinear}}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Regression output}\nLet's try fitting a simple main effects model without any transformation.\n$$\nY_i=\\beta_0 + \\beta_1 x_{i1} + \\beta_2 x_{i2} + \\epsilon_i\n$$\n\n\\begin{footnotesize}\n\\begin{verbatim}\n                   Parameter     Standard\nVariable   DF       Estimate        Error      t Value     Pr > |t|\nIntercept   1    -0.00036626      0.00130        -0.28       0.7791\nx1          1        1.00022   0.00059936      1668.80       <.0001\nx2          1        1.00009   0.00060998      1639.54       <.0001\n\\end{verbatim}\n\\end{footnotesize}\n\nboth $x_1$ and $x_2$ are important, but does the model fit okay?\n\\end{frame}\n\n\n\\begin{frame}{Model fit is okay}\n\\centerline{\\includegraphics[scale=0.25]{plots/diag}}\n\\vspace{10pt}\n\nLook at $Y_i$ vs. $\\hat{Y}_i$ and $R^2$!\n\\end{frame}\n\n\\begin{frame}{No pattern here, either}\n\\centerline{\\includegraphics[scale=0.25]{plots/yvsy}}\n\\end{frame}\n\n\\section{Model Building and Types of Studies}\n\n\\begin{frame}{9.1 Model building overview (pp.~343--349)}\n\nBook outlines four steps in data analysis:\n\\begin{enumerate}\n\\item Data collection and preparation  \n\\item \\pause Reduction of explanatory variables. Mass screening for ``good'' predictors.\n\\item \\pause Model refinement and selection.\n\\item \\pause Model validation.\n\\end{enumerate}\n\\vspace{20pt}\n\n\\pause The way the data was collected may affect steps 2 \\& 3.\\\\~\\\\\n\\pause Model validation should not be confused with model diagnostics (residual analysis).\n\\end{frame}\n\n\\begin{frame}{Data collection strategies}\n\\begin{itemize}\n    \\item \\textbf{Controlled experiments}: subjects (experimental units) assigned to $x$-levels by experimenter\n    \\begin{itemize}\n        \\item<2-> \\textbf{Purely controlled experiments}: researcher  uses only predictors that were assigned to units\n        \\item<3-> \\textbf{Controlled experiments with covariates} (uncontrolled variables): researcher has additional predictors associated with units\n    \\end{itemize}\n    \\item<4-> \\textbf{Observational studies}: subjects have $x$-levels associated with them (not assigned by researcher)\n    \\begin{itemize}\n        \\item<5-> \\textbf{Confirmatory studies}: it is \\textit{hypothesized} that new (primary) predictors are associated with $Y$; \\pause while there are predictors, \\textit{known} to be associated with $Y$ (called risk factors).\n        \\item<6-> \\textbf{Exploratory studies}: it is hypothesized that some or all of potential predictors are associated with $Y$.\n    \\end{itemize}\n\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}{Reduction of Explanatory Variables}\n\\begin{itemize}\n    \\item Controlled experiments\n    \\begin{itemize}\n        \\item\\pause Purely controlled experiments: rarely any need or desire to reduce number of predictors\n        \\item\\pause Controlled experiments with covariates: remove any covariates that do not reduce the error variance\n    \\end{itemize}\n    \\item\\pause Observational Studies\n    \\begin{itemize}\n        \\item\\pause Confirmatory Studies: Must keep in all risk factors to compare with previous research, should 
keep all primary variables as well\n        \\item\\pause Exploratory Studies: Often have many potential predictors (and polynomials and interactions). \\pause Want to fit parsimonious model that explains much of the variation in $Y$, while keeping model as basic as possible. \\pause Caution: one shall not make decisions based on single variable $t$-tests.\n    \\end{itemize}\n\\end{itemize}\n\n    \n\\end{frame}\n\n\n\\begin{frame}{9.2 Surgical unit example}\n\\begin{itemize}\n\\item First steps often involve plots:\n\\begin{itemize}\n\\item Plots to indicate correct functional form of predictors and/or response.\n\\item Plots to indicate possible interaction.\n\\item Exploration of correlation among predictors (maybe).\n\\item Often a first-order model is a good starting point.\n\\end{itemize}\n\\item Once a reasonable set of potential predictors is identified, formal model selection begins.\n\\item If the number of predictors is large, say $k\\ge10$, we can use (automated) stepwise procedures to reduce the number of variables (and models) under consideration.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Surgical unit example: predictors}\nA hospital surgical unit was interested in predicting survival in patients undergoing a particular type of liver operation. A random selection of 108 patients was available for analysis.\n\\begin{align*}\nX_1\\qquad&\\text{blood clotting score}\\\\\nX_2\\qquad&\\text{prognostic index}\\\\\nX_3\\qquad&\\text{enzyme function test score}\\\\\nX_4\\qquad&\\text{liver function test score}\\\\\n\\ldots\n\\end{align*}\n\\centerline{\\includegraphics[scale=0.25]{plots/dataset}}\n\\end{frame}\n\n\\begin{frame}{Surgical unit example: residual plots}\n\\centerline{\\includegraphics[scale=0.27]{plots/res-y}}\n\\end{frame}\n\n\\begin{frame}{Surgical unit example: scatterplot matrix}\n\\centerline{\\includegraphics[scale=0.27]{plots/scat-mat}}\n\\end{frame}\n\n\\section{Model Selection}\n\n\\begin{frame}{9.3: Model selection}\nOnce we reduce the set of potential predictors to a reasonable\nnumber, we can examine all possible models and choose the ``best''\naccording to some criterion.\n\\vspace{10pt}\n\n\\pause Say we have $k$ predictors $x_1$, \\ldots, $x_k$ and we want to find a good subset of predictors that predict the data well. There are several\nuseful criteria to help choose a subset of predictors.\n\\end{frame}\n\n\\begin{frame}{Adjusted-$R^2$, $R_a^2$}\n\\begin{small}\n%``Regular'' $R^2$ measures how well the model predicts the data that built it. It is possible to have a model with $R^2=1$ (predicts the data that built it perfectly), but has \\textit{lousy out-of-sample prediction}.\n%\\vspace{10pt}\n\nThe \\textit{adjusted} $R^2$, denoted $R_a^2$ ``fixes'' $R^2$ to provide a measure of how good the  model will predict data not used to build the model. For a candidate model with $p-1$ predictors\n$$\nR^2_a=1-\\frac{\\text{SSE}/(n-p)}{\\text{SSTO}/(n-1)}\\pause\\quad\\left(=1-\\frac{\\text{MSE}}{S^2_Y}\\right).\n$$\n\\begin{itemize}\n\\item<2-> Equivalent to choosing the model with the \\textit{smallest} $\\text{MSE}$.\n\\item<3-> If irrelevant variables are added, $R_a^2$ may decrease unlike ``regular'' $R^2$ ($R^2_a$ can be negative!).\n\\item<4-> $R_a^2$ penalizes model for being too complex.\n\\item<5-> Problem: $R_a^2$ is greater for a ``bigger'' model whenever the\nF-statistic for comparing bigger to smaller is greater than 1 (\\textbf{show this}). 
\\onslide<6->{We usually want F-statistic to be a \\textit{lot} bigger than 1 before\nadding in new predictors}\\onslide<7->{\\quad $\\Rightarrow$\\quad \\textit{too liberal}.}\n\\end{itemize}\n\\end{small}\n\\end{frame}\n\n\\begin{frame}{Akaike Information Criterion}\nIn general, \\textbf{Akaike Information Criterion (AIC)} is\n$$\n\\text{AIC}=-2\\ln L(\\hat{\\boldsymbol\\theta})+2p\n$$\nfor a model with parameters $\\boldsymbol\\theta\\in\\mathbb{R}^p$ and likelihood function $L$.\n\n\\pause\\begin{exe}\nShow that for the MLR with normal errors,\n$$\n\\text{AIC} = n\\ln(\\text{SSE})+2p + C\n$$\nwhere $C$ is a constant that does not depend on SSE and $p$.\n\\end{exe} \n\\begin{itemize}\n\\item\\pause $2p$ is ``penalty'' term for adding predictors.\n\\item\\pause Like $R_a^2$, AIC favors models with small SSE, but penalizes models with too many variables $p$.\n\\item\\pause $\\Rightarrow$ Between two models, we prefer the one with lower AIC.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Bayesian\nInformation Criterion}\nIn general, \\textbf{Bayesian\nInformation Criterion (BIC)} is\n$$\n\\text{BIC}=-2\\ln L(\\hat{\\boldsymbol\\theta})+p\\ln(n)\n$$\n\\pause\\begin{exe}\nShow that for the MLR with normal errors,\n$$\n\\text{BIC} = n\\ln(\\text{SSE}) + p\\ln(n) + C\n$$\n\\end{exe}\n\\pause\\begin{itemize}\n\\item BIC is similar to AIC, but for $n\\ge8$, the BIC ``penalty term'' is\nmore severe.\n\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}{Mallow's $C_p$}\nFull model with $k$ predictors and Reduced model with $p - 1$ predictors. \\pause \\textbf{Mallow's $C_p$} is\n$$\nC_p=\\frac{\\text{SSE}(\\text{Reduced})}{\\text{MSE}(\\text{Full})}-n+2p\n$$\n\\begin{itemize}\n\\item\\pause Estimates $\\frac{\\sum_{i=1}^n\\E[\\hat{Y}_i-\\E[Y_i]]^2}{\\sigma^2}$, where $\\hat{Y}_i$ is from the Reduced model (pp. 357--359).\n\\item\\pause Recall that $\\E[\\text{MSE}(\\text{Full})]=\\sigma^2$\n\\item\\pause If the Reduced model is unbiased, i.e. $\\E[\\text{MSE}(\\text{Reduced})]=\\sigma^2$, \\pause then $\\E[\\text{SSE}(\\text{Reduced})]=(n-p)\\sigma^2$, \\pause and\n$$\nC_p\\approx\\frac{(n-p)\\sigma^2}{\\sigma^2}-n+2p=p\n$$\n\\item\\pause The Full model always has $C_{k+1} = k + 1$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Mallow's $C_p$}\nIf $C_p \\approx p$ then the reduced model predicts as well as the full model. If $C_p < p$ then the reduced model is estimated to\nbe less \\textit{biased} than the full model.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[height=.4\\textheight]{plots/mallows_cp.png}\n    \\caption{A $C_p$ plot}\n\\end{figure}\n\n\\pause In practice, just choose model with smallest $C_p$.\n\\end{frame}\n\n\\begin{frame}{Which criteria to use?}\n$R_a^2$, AIC, BIC, and $C_p$ may give different ``best'' models, or they\nmay agree. Ultimate goal is to find model that balances:\n\\begin{itemize}\n\\item\\pause A good fit to the data.\n\\item\\pause Low bias.\n\\item\\pause Parsimony.\n\\end{itemize}\nAll else being equal, the simpler model is often easier to interpret\nand work with. 
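Putting side by side the two formulas seen above for MLR with normal errors,\n$$\n\\text{AIC} = n\\ln(\\text{SSE}) + 2p + C,\\qquad C_p=\\frac{\\text{SSE}}{\\text{MSE}(\\text{Full})}-n+2p,\n$$\nboth criteria reward a small SSE and charge the same $+2p$ penalty for complexity. 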
Christensen (1996) recommends $C_p$ and notes the\nsimilarity between $C_p$ and AIC.\n\\end{frame}\n\n\\begin{frame}{Methods for ``automatically'' picking variables}\n\\begin{itemize}\n\\item Any regression textbook will caution against not thinking\nabout the data at all and simply using automated procedures.\n\\item\\pause Automated procedures \\textit{cannot} assess a good functional form for a predictor, \\textit{cannot} think about which interactions might be important, etc.\n\\item\\pause Anyway, automated procedures are widely used and \\textit{can} produce good models. \\pause But they can also produce models that are \\textit{substantially inferior} to other models built from the same predictors using scientific input and common sense.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]{Example: cruise ships}{\\url{https://github.com/zh3nis/MATH440/blob/main/chp09/cruise.R}}\nA cruise ship company wishes to model the \\texttt{crew size} needed for a ship using predictors such as: \\texttt{age}, \\texttt{tonnage}, \\texttt{passengers}, \\texttt{length}, \\texttt{cabins} and passenger density (\\texttt{passdens}). \n\n\\pause\n\\begin{tiny}\n\\begin{verbatim}\n> cruise <- read.fwf(\"http://www.stat.ufl.edu/~winner/data/cruise_ship.dat\", ...)\n> head(cruise)\n         ship      cline age tonnage passengers length cabins passdens  crew\n1 Journey        Azamara   6  30.277       6.94   5.94   3.55    42.64  3.55\n2 Quest          Azamara   6  30.277       6.94   5.94   3.55    42.64  3.55\n3 Celebration   Carnival  26  47.262      14.86   7.22   7.43    31.80  6.70\n4 Conquest      Carnival  11 110.000      29.74   9.53  14.88    36.99 19.10\n5 Destiny       Carnival  17 101.353      26.42   8.92  13.21    38.36 10.00\n6 Ecstasy       Carnival  22  70.367      20.52   8.55  10.20    34.29  9.20\n\\end{verbatim}\n\\end{tiny}\n\n\\pause Without concerning ourselves with potential interactions we will look at simple additive models.\n\\end{frame}\n\n\\begin{frame}[fragile]{Cruise ships --- Full model}\n\\begin{tiny}\n\\begin{verbatim}\n> fit0 = lm(crew ~ age + tonnage + passengers + length + cabins + passdens, data=cruise)\n> summary(fit0)\n\nCall:\nlm(formula = crew ~ age + tonnage + passengers + length + cabins + \n    passdens, data = cruise)\n\nResiduals:\n    Min      1Q  Median      3Q     Max \n-1.7700 -0.4881 -0.0938  0.4454  7.0077 \n\nCoefficients:\n              Estimate Std. Error t value Pr(>|t|)    \n(Intercept) -0.5213400  1.0570350  -0.493  0.62258    \nage         -0.0125449  0.0141975  -0.884  0.37832    \ntonnage      0.0132410  0.0118928   1.113  0.26732    \npassengers  -0.1497640  0.0475886  -3.147  0.00199 ** \nlength       0.4034785  0.1144548   3.525  0.00056 ***\ncabins       0.8016337  0.0892227   8.985 9.84e-16 ***\npassdens    -0.0006577  0.0158098  -0.042  0.96687    \n---\nSignif. 
codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 0.9819 on 151 degrees of freedom\nMultiple R-squared:  0.9245,\tAdjusted R-squared:  0.9215 \nF-statistic:   308 on 6 and 151 DF,  p-value: < 2.2e-16        \n\\end{verbatim}\n\\end{tiny}\n\\end{frame}\n\n\n\\begin{frame}[fragile]\n\\frametitle{Best subsets}\n\\begin{tiny}\n\\begin{verbatim}\n> library(leaps)\n> allcruise <- regsubsets(crew ~ age + tonnage + passengers + length + cabins + passdens, \n+                         nbest=4, data=cruise)\n> all_output <- summary(allcruise)\n> with(all_output, round(cbind(which, rsq, adjr2, cp, bic), 3))\n  (Intercept) age tonnage passengers length cabins passdens   rsq adjr2      cp      bic\n1           1   0       0          0      0      1        0 0.904 0.903  37.772 -360.238\n1           1   0       1          0      0      0        0 0.860 0.859 125.086 -300.954\n1           1   0       0          1      0      0        0 0.838 0.837 170.523 -277.122\n1           1   0       0          0      1      0        0 0.803 0.801 240.675 -246.201\n2           1   0       0          0      1      1        0 0.916 0.915  15.952 -376.131\n2           1   0       0          0      0      1        1 0.912 0.911  24.261 -368.502\n2           1   0       1          0      0      1        0 0.911 0.909  26.792 -366.249\n2           1   0       0          1      0      1        0 0.908 0.907  32.443 -361.332\n3           1   0       0          1      1      1        0 0.922 0.921   5.857 -382.878\n3           1   0       0          0      1      1        1 0.919 0.918  11.341 -377.413\n3           1   0       1          1      0      1        0 0.918 0.916  14.023 -374.808\n3           1   1       0          0      1      1        0 0.917 0.915  15.909 -373.002\n4           1   0       1          1      1      1        0 0.924 0.922   3.847 -381.933\n4           1   1       0          1      1      1        0 0.923 0.921   5.084 -380.652\n4           1   0       0          1      1      1        1 0.923 0.921   5.197 -380.534\n4           1   0       1          0      1      1        1 0.919 0.917  13.056 -372.631\n5           1   1       1          1      1      1        0 0.924 0.922   5.002 -377.752\n5           1   0       1          1      1      1        1 0.924 0.922   5.781 -376.939\n5           1   1       0          1      1      1        1 0.924 0.921   6.240 -376.462\n5           1   1       1          0      1      1        1 0.920 0.917  14.904 -367.717\n6           1   1       1          1      1      1        1 0.924 0.921   7.000 -372.692\n\\end{verbatim}\n\\end{tiny}\n\\pause A good model choice might be the model with 4 predictors: tonnage, passengers, length, and cabins, whose $R_a^2 = 0.922$, $C_p = 3.847$, and\n$\\text{BIC}= \u2212381.933$. %Also we note that this model's AIC is lower than that of the full model.\n\\end{frame}\n\n\\begin{frame}[fragile]{AIC full vs reduced}\n\\begin{verbatim}\n> fit3 <- update(fit0, . ~ . - age - passdens)\n> AIC(fit3)\n[1] 448.3229\n> AIC(fit0)\n[1] 451.4394    \n\\end{verbatim}\n\\end{frame}\n\n\\begin{frame}{9.4 Automated variable search}\nAs discussed, it is possible to have a large set of predictor variables (including\ninteractions). 
The goal is to fit a ``parsimonious'' model that explains as\nmuch variation in the response as possible with a relatively small set of\npredictors.\\\\~\\\\\n\n\\pause There are 3 automated procedures:\n\\begin{itemize}\n    \\item\\pause Backward Elimination (Top down approach)\n    \\item\\pause Forward Selection (Bottom up approach)\n    \\item\\pause Stepwise Regression (Combines Forward/Backward)\n\\end{itemize}\n\n\\pause We will explore these procedures using two different elimination/selection\ncriteria: \\pause one that uses the t-test and p-value and another that uses the AIC\nvalue.\n\\end{frame}\n\n\\begin{frame}{Backward elimination}\n\\begin{enumerate}\n\\item Select a significance level to \\textit{stay} in the model (e.g. $\\alpha_s = 0.20$, generally\n.05 is too low, causing too many variables to be removed).\n\\item\\pause Fit the full model with all possible predictors.\n\\item\\pause Consider the predictor with lowest t-statistic (highest p-value).\n\\begin{itemize}\n    \\item\\pause If p-value $> \\alpha_s$, remove the predictor and fit model without this\nvariable (must re-fit model here because partial regression coefficients change). \n    \\item\\pause If p-value $\\le \\alpha_s$, stop and keep current model.\n\\end{itemize}\n\\item\\pause Continue until all predictors have p-values $\\le\\alpha_s$.    \n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}{Forward selection}\n\\begin{enumerate}\n    \\item Select a significance level to \\textit{enter} the model (e.g. $\\alpha_e = 0.20$, generally .05 is too low, causing too few variables to be entered).\n\\item\\pause Fit all simple regression models.\n\\item\\pause Consider the predictor with the highest t-statistic (lowest p-value).\n\\begin{itemize}\n    \\item\\pause If p-value $\\le\\alpha_e$, keep this variable and fit all two variable models\nthat include this predictor.\n    \\item\\pause If p-value $>\\alpha_e$, stop and keep previous model.\n\\end{itemize}\n\\item\\pause Continue until no new predictors have p-values $\\le\\alpha_e$.    \n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}{Stepwise regression}\n\\begin{enumerate}\n\\item Select $\\alpha_s$ and $\\alpha_e$, ($\\alpha_e < \\alpha_s$).\n\\item\\pause  Start like Forward Selection (bottom up process) where new variables\nmust have p-value $\\le\\alpha_e$ to enter.\n\\item\\pause Re-test all ``old variables'' that have already been entered, must have\np-value $\\le\\alpha_s$ to stay in model.\n\\item\\pause Continue until no new variables can be entered and no old variables\nneed to be removed.\n\\end{enumerate}\n\\end{frame}\n\n\n\\begin{frame}[fragile]{Backward and Forward for cruise ships}\n\\begin{scriptsize}\n\\begin{verbatim}\n> library(olsrr)\n> ols_step_backward_p(fit0)\n\n                          Elimination Summary                            \n------------------------------------------------------------------------\n        Variable                  Adj.                                      
\nStep    Removed     R-Square    R-Square     C(p)       AIC        RMSE     \n------------------------------------------------------------------------\n   1    passdens      0.9245       0.922    5.0017    449.4412    0.9786    \n   2    age            0.924      0.9221    3.8468    448.3229    0.9782    \n------------------------------------------------------------------------\n\n> ols_step_forward_p(fit0)\n\n                             Selection Summary                              \n---------------------------------------------------------------------------\n        Variable                    Adj.                                       \nStep     Entered      R-Square    R-Square     C(p)        AIC        RMSE     \n---------------------------------------------------------------------------\n   1    cabins          0.9041      0.9034    37.7724    479.2060    1.0886    \n   2    length          0.9160      0.9149    15.9524    460.2507    1.0221    \n   3    passengers      0.9220      0.9205     5.8566    450.4411    0.9878    \n   4    tonnage         0.9240      0.9221     3.8468    448.3229    0.9782    \n---------------------------------------------------------------------------\n\\end{verbatim}\n\\end{scriptsize}    \n\\end{frame}\n\n\\begin{frame}[fragile]{Stepwise procedure for cruise ships}\n\\begin{tiny}\n\\begin{verbatim}\n> ols_step_both_p(fit0)\n\n                              Stepwise Selection Summary                                \n---------------------------------------------------------------------------------------\n                       Added/                   Adj.                                       \nStep     Variable     Removed     R-Square    R-Square     C(p)        AIC        RMSE     \n---------------------------------------------------------------------------------------\n   1      cabins      addition       0.904       0.903    37.7720    479.2060    1.0886    \n   2      length      addition       0.916       0.915    15.9520    460.2507    1.0221    \n   3    passengers    addition       0.922       0.921     5.8570    450.4411    0.9878    \n   4     tonnage      addition       0.924       0.922     3.8470    448.3229    0.9782    \n---------------------------------------------------------------------------------------    \n\\end{verbatim}\n\\end{tiny}\n\\end{frame}\n\n\\section{Model Validation (briefly)}\n\n\\begin{frame}{PRESS$_p$ criterion}\n$$\n\\text{PRESS}_p = \\sum_{i=1}^n(Y_i-\\hat{Y}_{i(i)})^2\\quad \\left(=\\sum_{i=1}^n\\left[\\frac{e_i}{1-h_{ii}}\\right]^2\\right),\n$$\nwhere $\\hat{Y}_{i(i)}$ is the fitted value at $\\mathbf{x}_i$ with the $(\\mathbf{x}_i, Y_i)$ omitted.\n\\begin{itemize}\n\\item\\pause This is leave-one-out prediction error. The smaller, the better.\n\\item\\pause Having $\\text{PRESS}_p \\approx \\text{SSE}_p$ supports the \\textit{validity} of the model\nwith $p$ predictors (p.~374). \n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}{Caveats for automated procedures}\n\\begin{itemize}\n\\item There is no ``best'' way to search for good models.\n\\item\\pause There may be \\textit{several} ``good'' models.\n\\item\\pause If you use the same data to \\textit{estimate} the model and \\textit{choose}\nthe model, the regression effects are \\textit{biased}!\nThis leads to the idea of data splitting; one portion of the data\nis the \\textit{training data} and the other portion is the \\textit{validation set}\n(Section 9.6, p.~372). 
$\\text{PRESS}_p$ can also be used.\n\\end{itemize}\n\\end{frame}\n\n\\end{document}", "meta": {"hexsha": "db9205749108ef30a5db768f71ad3b1efb8e12ff", "size": 30422, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Slides/09 Model Selection and Validation/main.tex", "max_stars_repo_name": "zh3nis/MATH440", "max_stars_repo_head_hexsha": "66e547d4ce4016e39d317b6ef043223eb0e15ed0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Slides/09 Model Selection and Validation/main.tex", "max_issues_repo_name": "zh3nis/MATH440", "max_issues_repo_head_hexsha": "66e547d4ce4016e39d317b6ef043223eb0e15ed0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Slides/09 Model Selection and Validation/main.tex", "max_forks_repo_name": "zh3nis/MATH440", "max_forks_repo_head_hexsha": "66e547d4ce4016e39d317b6ef043223eb0e15ed0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.0526315789, "max_line_length": 318, "alphanum_fraction": 0.6439747551, "num_tokens": 9841, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.8104789155369047, "lm_q1q2_score": 0.5613717130620614}}
{"text": "\\documentclass[UTF8]{article}\n\n\\usepackage{amssymb}\n\\usepackage{float}\n\\usepackage[T1]{fontenc}\n\\usepackage[left=1.5in, right=1.5in, top=0.8in, bottom=1in]{geometry}\n\\usepackage[nice]{nicefrac}\n\\usepackage{listings}\n\\usepackage{mathtools}\n\\usepackage{xcolor}\n\n\\setlength\\parindent{0pt}\n\\setlength{\\parskip}{\\baselineskip}\n\\widowpenalties 1 10000\n\n\\definecolor{blue}{rgb}{0,0,0.5}\n\\definecolor{red}{rgb}{0.6,0,0}\n\n\\lstset{\n    language=Python,\n    basicstyle=\\ttfamily\\small,\n    columns=fullflexible,\n    frame=L,\n    numbers=left,\n    emph={exhaustive, babygiant, _map, pollard, pohlighellman},\n    otherkeywords={True,False},\n    emphstyle=\\color{red},\n    keywordstyle=\\color{blue},\n    showstringspaces=false\n}\n\n\\renewcommand{\\figurename}{Python Implementation}\n\n\\begin{document}\n\n\\title{\n    Discrete Logarithm Algorithms \\\\\n    \\large Description and Implementation\n}\n\\author{Davide Bergamaschi}\n\\date{2019}\n\\maketitle\n\n\\begin{abstract}\n\\noindent In this document we present the cryptographic problem of discrete logarithm and a number of algorithms to solve it, along with our implementation of said algorithms in the Python programming language. In particular, the following algorithms will be examined: exhaustive search, Baby-Step Giant-Step, Pollard's Rho, Pohlig-Hellman.\n\\end{abstract}\n\n\\section{The Discrete Logarithm Problem}\n\nLet $G$ be a cyclic group of order $n$, let $\\alpha$ be a generator of G and let $\\beta$ belong to G. The discrete logarithm of $\\beta$ to the base $\\alpha$ ($\\log_{\\alpha}\\beta$) is the integer value $x$ such that $0 \\leq x \\leq n - 1$ and $\\alpha^x=\\beta$.\n\n\\section{Solving Discrete Logarithms}\n\nCalculating discrete logarithms is in general a computationally hard problem. In the following section we present a selection algorithms that perform this task with different levels of sophistication.\n\nEach algorithm is implemented by a Python function which takes $\\alpha$, $\\beta$ and $n$ as its first arguments, and returns either $\\log_{\\alpha}\\beta$ or $None$ if the logarithm cannot be found due to input data not respecting preconditions (e.g. if $\\alpha$ is not a generator) or in case of failure of a randomized algorithm.\n\n\\newpage\n\n\\subsection{Exhaustive Search}\n\nThis naive method consists in computing $\\alpha^0, \\alpha^1, \\ldots, \\alpha^{n - 1}$ until $\\beta$ is eventually obtained.\n\n\\begin{figure}[H]\n    \\centering\n    \\caption{Exhaustive Search}\n    \\begin{tabular}{c}\n        \\lstinputlisting{lst1-exhaustive.py}\n    \\end{tabular}\n\\end{figure}\n\nThe time complexity is trivially given by $O(n)$ multiplications.\n\n\\subsection{Baby-Step Giant-Step}\n\nThis method relies on the fact that the logarithm $x$ can be written as $x = i m + j$, where $m$ can be conveniently chosen as $\\lceil \\sqrt{n} \\rceil$. By doing so, one obtains $\\beta = \\alpha^{x} = \\alpha^{i m + j} = \\alpha^{i m} \\alpha^{j}$, which is true if and only if $\\beta (\\alpha^{-m})^i = \\alpha^{j}$.\n\nThe algorithm hence proceeds in the following way. For $0 \\leq j < m$, entries $(a^j, j)$ are computed and stored in a hash table (hashed on the first component).\n\nThen, for $0 \\leq i < \\lceil n / m \\rceil$, $\\beta (\\alpha^{-m})^i$ is computed. At each iteration the algorithm checks if the obtained value is present in the hash table. 
Upon finding a match with a value $j$, the logarithm $x$ is obtained as $x = i m + j$.\n\n\\begin{figure}[H]\n    \\centering\n    \\caption{Baby-Step Giant-Step}\n    \\begin{tabular}{c}\n        \\lstinputlisting{lst2-babygiant.py}\n    \\end{tabular}\n\\end{figure}\n\nIn the first part, if \\verb|m| has not been specified, it is calculated as $\\lceil \\sqrt{n} \\rceil$. The lookup table is then constructed accordingly.\n\nIn each iteration of the last part, $\\beta (\\alpha^{-m})^i$ is computed by multiplying the \\verb|candidate| variable, initially set to $\\beta$, by the precomputed value $\\alpha^{-m}$.\n\nThe algorithm performs $O(m)$ group multiplications while constructing the table, and $O(n/m)$ multiplications and lookups in the last part. If $m$ is set to $\\sqrt{n}$, the time complexity becomes $O(\\sqrt{n})$. The space complexity is instead given by the $m$ group elements stored by the algorithm.\n\n\\subsection{Pollard's Rho Algorithm for Logarithms}\n\nThis randomized algorithm explores a sequence of group elements $\\gamma_i = \\alpha^{a_i} \\beta^{b_i}$ looking for a cycle using Floyd's algorithm.\n\nDetecting a cycle means discovering two elements $\\gamma_c = \\alpha^{a}\\beta^{b}$ and $\\gamma_{2c} = \\alpha^{A} \\beta^{B}$ in the sequence, such that $\\gamma_c = \\gamma_{2c}$.\n\nIt follows that $\\alpha^{a} \\beta^{b} = \\alpha^{A} \\beta^{B}$, and so $\\beta^{b - B} = \\alpha^{a - A}$. By taking the logarithm to the base $\\alpha$ of both sides one obtains the following equation:\n$$(b - B) \\log_\\alpha{\\beta} \\equiv (a - A) \\pmod{n},$$\nwhich can be solved to find $x=\\log_\\alpha{\\beta}$, but only if $b \\not\\equiv B \\pmod{n}$.\n\nLet us discuss how the sequence of group elements is generated. To increase the likelihood of obtaining a cycle in a small number of samples, $G$ is partitioned into three disjoint subsets $S_0$, $S_1$ and $S_2$ of approximately equal size, chosen in a sufficiently ``random'' manner.\n\nThe map for group elements $f: G \\rightarrow G$ is then defined as:\n$$\nf(\\gamma) =\n\\begin{cases}\n    \\beta \\gamma  & \\enspace \\text{if } \\gamma \\in S_0 \\\\\n    \\gamma^2      & \\enspace \\text{if } \\gamma \\in S_1 \\\\\n    \\alpha \\gamma & \\enspace \\text{if } \\gamma \\in S_2\n\\end{cases},\n$$\n\nwhile the map for the $a_i$ coefficients $g: G \\times \\mathbb{N} \\rightarrow \\mathbb{N}$ and the map for the $b_i$ coefficients $h: G \\times \\mathbb{N} \\rightarrow \\mathbb{N}$ are consequently defined as:\n$$\ng(\\gamma, a) =\n\\begin{cases}\n    a                & \\enspace \\text{if } \\gamma \\in S_0 \\\\\n    (2a)    \\bmod{n} & \\enspace \\text{if } \\gamma \\in S_1 \\\\\n    (a + 1) \\bmod{n} & \\enspace \\text{if } \\gamma \\in S_2\n\\end{cases},\n\\qquad\nh(\\gamma, b) =\n\\begin{cases}\n    (b + 1) \\bmod{n} & \\enspace \\text{if } \\gamma \\in S_0 \\\\\n    (2b)    \\bmod{n} & \\enspace \\text{if } \\gamma \\in S_1 \\\\\n    b                & \\enspace \\text{if } \\gamma \\in S_2\n\\end{cases}.\n$$\n\nThe algorithm begins by iterating over the sequence induced by the above maps, starting from two random coefficients $a_0$ and $b_0$ (and with $\\gamma_0 = \\alpha^{a_0} \\beta^{b_0}$); in particular, at each step $i$ it computes:\n\\begin{align*}\n    \\gamma_{i}  & = f(\\gamma_{i-1})  & a_{i}  & = g(\\gamma_{i-1}, a_{i-1})   & b_{i}  & = h(\\gamma_{i-1}, b_{i-1})   \\\\\n    \\gamma_{2i} & = f(\\gamma_{2i-2}) & a_{2i} & = g(\\gamma_{2i-2}, a_{2i-2}) & b_{2i} & = h(\\gamma_{2i-2}, b_{2i-2}),\n\\end{align*}\nand 
compares the obtained $\\gamma_{i}$ and $\\gamma_{2i}$ values.\n\nWhen the algorithm finds $\\gamma_{i} = \\gamma_{2i}$, that is when a cycle is detected, if $(b_{i} - b_{2i}) \\bmod{n} \\not= 0$ the algorithm returns the logarithm:\n$$x = (b_{i} - b_{2i})^{-1} (a_{2i} - a_{i}) \\bmod{n},$$\notherwise it terminates with failure.\n\n\\begin{figure}[H]\n    \\centering\n    \\caption{Pollard's Rho Algorithm for Logarithms}\n    \\begin{tabular}{c}\n        \\lstinputlisting{lst3-pollard.py}\n    \\end{tabular}\n\\end{figure}\n\nThe \\verb|s_map| argument is a function which takes a group element as input and returns the index of the partition to which the element belongs.\n\nAs for the asymptotic complexity, if it can be assumed that the map $f$ behaves like a random function, the expected running time will be $O(\\sqrt{n})$ group operations. On the other hand, the required storage is negligible.\n\n\\subsection{Pohlig-Hellman Algorithm}\n\nThis method leverages the prime factorization of the group order: $n = p_1^{e_1} \\ldots p_r^{e_r}$.\n\nFor $1 \\leq i \\leq r$, the algorithm calculates $x_i = x \\bmod{p_i^{e_i}}$, obtaining a set of congruences that can ultimately be solved with the Chinese Remainder Theorem to find $x$.\n\nIn particular, each $x_i$ can be seen as a truncated $p_i$-ary representation of $x$:\n$$x_i = x \\bmod{p_i^{e_i}} = l_0 + l_1 p_i + l_2 p_i^2 + \\ldots + l_{e_i - 1} p_i^{e_i - 1},$$\nwhich allows for a further reduction of the magnitude of the calculations.\n\nLet us examine how the digits are calculated.\n\nFrom the definition of the modulus, there exists some integer $h$ for which $x=x_i+hp_i^{e_i}$. Since \\mbox{$\\beta=\\alpha^x$}, one can derive:\n\\begin{align*}\n    \\beta^{n/p_i} & = (\\alpha^x)^{n/p_i} \\\\\n                  & = (\\alpha^{n/p_i})^x \\\\\n                  & = (\\alpha^{n/p_i})^{x_i+hp_i^{e_i}} \\\\\n                  & = (\\alpha^{n/p_i})^{l_0 + l_1 p_i + l_2 p_i^2 + \\ldots + l_{e_i - 1} p_i^{e_i - 1}+hp_i^{e_i}} \\\\\n                  & = (\\alpha^{n/p_i})^{l_0 + K p_i} \\quad \\text{\\small{(for some integer $K$)}}  \\\\\n                  & = (\\alpha^{n/p_i})^{l_0} (\\alpha^{n/p_i})^{K p_i} \\\\\n                  & = (\\alpha^{n/p_i})^{l_0} \\quad \\text{\\small{(since $\\alpha^{n/p_i}$ is of order $p_i$)}},\n\\end{align*}\nfrom which $l_0$ can be found using any other discrete logarithm technique.\n\nThen, starting from $\\beta\\alpha^{-l_0} = \\alpha^{l_1 p_i + l_2 p_i^2 + \\ldots + l_{e_i - 1} p_i^{e_i - 1}+hp_i^{e_i}}$ and by raising both sides to the power of $\\nicefrac{n}{p_i^2}$, one can obtain $(\\alpha^{n/p_i})^{l_1}=(\\beta\\alpha^{-l_0})^{n/p_i^2}$ and ultimately $l_1$ in a similar way.\n\nThe remaining digits can be computed analogously.\n\nThe calculations performed for each $p_i^{e_i}$ in the first part of the algorithm are, in conclusion, the following:\n\\begin{align*}\n    l_0         &              = \\log_{\\alpha^{n / p_i}}(\\beta)^{n / p_i} \\\\\n    l_1         &              = \\log_{\\alpha^{n / p_i}}(\\beta\\alpha^{-l_0})^{n / p_i^2} \\\\\n    l_2         &              = \\log_{\\alpha^{n / p_i}}(\\beta\\alpha^{-l_0 - l_1 \\, p_i})^{n / p_i^3} \\\\\n                & \\vdotswithin{=} \\\\\n    l_{e_i - 1} &              = \\log_{\\alpha^{n / p_i}}(\\beta\\alpha^{-l_0 - l_1 \\, p_i - \\, \\ldots \\, - l_{e_i - 2} \\ p_i^{e_i - 2}})^{n / p_i^{e_i}}.\n\\end{align*}\n\nDetermining all the $x_i$ values yields a set of congruences of the following type:\n\\begin{align*}\n    x & \\equiv x_1 
\\pmod{p_1^{e_1}} \\\\\n      & \\vdotswithin{\\equiv}        \\\\\n    x & \\equiv x_r \\pmod{p_r^{e_r}}.\n\\end{align*}\n\nSince the moduli are pairwise coprime by construction, the Chinese Remainder Theorem guarantees the existence of a solution $x$, which can be found with any suitable computational method (e.g. Gauss's algorithm).\n\n\\begin{figure}[H]\n    \\centering\n    \\caption{Pohlig-Hellman Algorithm}\n    \\begin{tabular}{c}\n        \\lstinputlisting{lst4-pohlighellman.py}\n    \\end{tabular}\n\\end{figure}\n\nThe prime factorization of $n$ is passed as the fourth argument in the form of a Python dictionary, with the keys being the prime factors and the values their respective exponents.\n\nNotice that during the computation of an $x_i$ value, at each iteration $j$ in the internal loop the implementation keeps track of the necessary powers of $p_i$ ($p_i^{j-1}$, $p_i^j$ and $p_i^{j+1}$), so that only one multiplication is enough to calculate them at each step.\n\nThe running time is given by $O(\\sum_{i=1}^{r}e_i(\\lg n + \\sqrt{p_i}))$ group multiplications.\n\n\\end{document}\n", "meta": {"hexsha": "1ef99a37791969edbd02e55413ba59a3b2953469", "size": 11057, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/paper.tex", "max_stars_repo_name": "daberg/dislog", "max_stars_repo_head_hexsha": "29c9c980ebf7c1f40e79de946ec1a0ab6ec10130", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/paper.tex", "max_issues_repo_name": "daberg/dislog", "max_issues_repo_head_hexsha": "29c9c980ebf7c1f40e79de946ec1a0ab6ec10130", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/paper.tex", "max_forks_repo_name": "daberg/dislog", "max_forks_repo_head_hexsha": "29c9c980ebf7c1f40e79de946ec1a0ab6ec10130", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.4885844749, "max_line_length": 340, "alphanum_fraction": 0.6689879714, "num_tokens": 3492, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455588, "lm_q2_score": 0.8104789086703225, "lm_q1q2_score": 0.5613717083059782}}
{"text": "\\section{Monte Carlo Variance Reduction}\n\\label{sec:MCvar}\n\nMonte Carlo methods for radiation transport are used in the nuclear\nengineering community for a wide spectrum of application problems.\nMonte Carlo methods aim to emulate\nthe transport of a particle from birth, through physical interaction, to death\nby randomly sampling the probabilities of physics that the particle could\nencounter, e.g.\\ particle production, elastic and inelastic\nscattering, absorption, and so forth.\nThis process of transporting a single particle\nis repeated many times, to simulate the transport of many particles\nthroughout the problem. When the user achieves a\nsufficient\nnumber of samples--or particles--to reach the desired statistical precision for\nthe region of interest, the\nsimulation is complete. However, this naive approach to simulating each\nparticle--disregarding whether it is likely to\ncontribute to the tallied result--can be extraordinarily\ncomputationally inefficient depending on the problem. A\ncode could waste time simulating millions of ``unusable'' particles and still not\nreach the desired statistical precision for the tally. Variance\nreduction techniques were developed to address this issue. In general, these\ntechniques bias the Monte Carlo transport to more effectively\ncontribute to a particular result, while not fundamentally changing the nature\nof the problem being solved.\n\n\\subsection{Statistical Background}\n\\label{subsec:StatBkgnd}\n\nVariance reduction techniques are rooted in statistics, so we begin our\ndiscussion of variance reduction techniques with a brief primer on the\nstatistical background relevant to Monte Carlo radiation transport. Sections\n\\ref{subsubsec:PopStat} through \\ref{subsubsec:FOM} are summarized from\n\\cite{lewis_computational_1984} and \\cite{mcnp_manual_v1}.\nMonte Carlo\nmethods transport many randomly sampled\nparticles, and when those particles reach a region of interest, they are scored\nin a tally. The statistical precision of the tally\nwill reflect the total number of particles that were sampled in a chosen region\nor at a chosen surface.\nThe reliability of the answer obtained in this region is then dependent\non the quantity and the history of the particles sampled.\n\n\\subsubsection{Population Statistics}\n\\label{subsubsec:PopStat}\n\nIn radiation transport, one desires to estimate some response in phase-space.\nThis response is the average behavior of the physical interactions in some\ndifferential phase-space in energy, space, and time. If the probability density\nfunction, $f(x)$, for the response is known exactly,\nthen the response in $dx$ can be calculated exactly by the true\nmean, or\n\\begin{equation}\n  \\bar{x} = \\int_{-\\infty}^{\\infty} xf(x) dx .\n\\end{equation}\nRarely is $f(x)$ known\nexactly, so instead it is sampled.\nUsing $N$ randomly sampled particles, the estimate of the true mean value is given as\n\\begin{equation}\n  \\hat{x} = \\frac{\\sum_{i=1}^{N}{x_i}}{N} ,\n\\end{equation}\nwhere $x_i$ is the i$^{th}$ event. $\\hat{x}$ is the sample mean, or the\nestimated value of $\\bar{x}$\nbased on the $N$ number of samples that were used to calculate $\\hat{x}$. As $N\n\\rightarrow \\infty$, $\\hat{x}$ will $\\rightarrow \\bar{x}$, which is given by the\nStrong Law of Large Numbers \\cite{mcnp_manual_v1}.\n$\\hat{x}$ in itself is a useful measure, but determining the spread of values\nabout $\\hat{x}$ is also an important measure. This is called the variance. 
The\ntrue variance of the distribution is\n\\begin{equation}\n  \\sigma^{2}\\big( x \\big) = \\bar{x^2} - \\bar{x}^2 ,\n\\end{equation}\nand the standard deviation is the square root of the variance\n\\begin{equation}\n  \\sigma\\big(x \\big) = \\big( \\bar{x^2} - \\bar{x}^2 \\big)^{1/2}.\n\\end{equation}\nThe variance of the sampled distribution differs, as a finite number of samples\nare used to calculate $\\bar{x}$ and $\\sigma$. The sample variance is defined by:\n\\begin{equation}\nS^{ 2 }=\\sum _{ i=1 }^{ N }{ \\frac { (x_{ i }-\\hat { x } )^{ 2 } }{ N-1 }  }\n             \\cong \\widehat{x^2}-\\hat{x}^2 ,\n\\label{eq:Var}\n\\end{equation}\nwhere\n\\begin{equation}\n  \\widehat{x^2} = \\frac{1}{N}\\sum_{i=1}^{N} x_i^2 ,\n\\end{equation}\nand the sample standard deviation is given by\n\\begin{equation}\n  S = \\big( \\widehat{x^2}-\\hat{x}^2 \\big)^{(1/2)} .\n\\end{equation}\n\nFor \\eqref{eq:Var} to hold true, the number of samples, $N$, must be large.\n$S^2$ is the sample estimate of the true variance, $\\sigma^2$.\nThe variance of the estimate of the mean value about $\\bar{x}$ is:\n\\begin{equation}\nS^{ 2 }_{ \\hat { x }  }=\\frac{S^2}{N}.\n\\label{eq:VarMean}\n\\end{equation}\nFrom \\eqref{eq:VarMean}, one can see that the relationship between the sample standard\ndeviation and the standard error of $\\hat{x}$ about $\\bar{x}$ is\n\\begin{equation}\nS_{ \\hat { x }  }=\\sqrt { \\frac { S^{ 2 } }{ N }  } =\\frac { S }{ \\sqrt { N }}.\n\\label{eq:VarN}\n\\end{equation}\n$S_{\\hat{x}}$ is the standard error of the estimate of the sample mean.\nThe relative error normalizes the standard error by the estimate of the mean:\n\\begin{equation}\nR = \\frac{S_{ \\hat { x }  }}{\\hat{x}} .\n\\label{eq:RelativeErr}\n\\end{equation}\nAs a result, $S$, $R$, and $N$ follow the relationship\n\\begin{equation}\nS^2\\:\\propto\\: R^2\\:\\propto\\:\\frac{1}{N} .\n\\label{eq:S to R}\n\\end{equation}\n\n\\subsubsection{The Central Limit Theorem}\n\\label{subsubsec:CLT}\n\nSuppose $\\hat{x}$ is calculated from several independent random particles\nto estimate $\\bar{x}$. At what point does one conclude that $\\hat{x}$ sufficiently\nreflects $\\bar{x}$?\nThe central limit theorem (CLT) \\cite{lewis_computational_1984, mcnp_manual_v1}\nis a very powerful supplement to the quantities\ndescribed in Section \\ref{subsubsec:PopStat}. The CLT states that for large $N$,\n$\\hat{x}$ will have a limiting distribution $f_N(\\hat{x})$, and that distribution will be a\nnormal distribution\n\\begin{equation}\n  f_N\\big(\\hat{x}\\big) \\approx \\frac{1}{\\sqrt{2\\pi} \\sigma(\\hat{x})}\\\n           \\exp\\Bigg[ \\frac{-\\big( \\hat{x}- \\bar{x}\\big)^2}{2\\sigma^2(\\hat{x})} \\Bigg],\\\n           \\quad N \\rightarrow \\infty.\n  \\label{eq:CLT1}\n\\end{equation}\nThe standard deviation of $\\hat{x}$ can be related to the standard deviation of\nthe samples by\n\\begin{equation}\n  \\sigma(\\hat{x}) = \\frac{\\sigma(x)}{\\sqrt{N}}.\n  \\label{eq:hatx}\n\\end{equation}\nUsing the definition from Eq. \\eqref{eq:hatx} in Eq. \\eqref{eq:CLT1} results in\n\\begin{equation}\n  f_N\\big(\\hat{x}\\big) \\approx \\sqrt{\\frac{N}{2\\pi}} \\frac{1}{\\sigma(x)}\\\n           \\exp\\Bigg[ \\frac{-N\\big( \\hat{x}- \\bar{x}\\big)^2}{2\\sigma^2(x)} \\Bigg],\\\n           \\quad N \\rightarrow \\infty .\n  \\label{eq:CLT2}\n\\end{equation}\nThis allows us to use known values for $\\hat{x}$ and an approximation of\n$\\sigma(x)$--using $S$--to determine the probability density function of the\nsample means\n$f_N(\\hat{x})$. 
Because $f_N(\\hat{x})$ is normally distributed, we can find the\nprobability that $\\hat{x}$ lies in $\\bar{x} \\pm \\epsilon$ with\n\\begin{equation}\n  P\\big\\{\\bar{x} - \\varepsilon < \\hat{x} \\leq \\bar{x} + \\varepsilon\\big\\} = \\\n   \\int_{\\bar{x}-\\varepsilon}^{\\bar{x}+\\varepsilon}f_N\\big( \\hat{x} \\big) d\\hat{x}.\n   \\label{eq:probmean}\n\\end{equation}\nPlacing our definition for the distribution of $\\hat{x}$, which is $f_N(\\hat{x})$, into Eq.\n\\eqref{eq:probmean}, changing the limits of integration, and changing the\nvariables such that\n\\begin{equation*}\n  t = \\sqrt{N/2}\\big[(\\hat{x}-\\bar{x})/\\sigma(x)\\big],\n\\end{equation*}\nthis becomes\n\\begin{equation}\n  P\\big\\{\\bar{x} - \\varepsilon < \\hat{x} \\leq \\bar{x} + \\varepsilon\\big\\} = \\\n  \\frac{2}{\\sqrt{\\pi}} \\int_0^{(\\sqrt{N/2})(\\varepsilon/\\sigma(x))} e^{-t^2} dt\n  \\:.\n  \\label{eq:bigP}\n\\end{equation}\nRecalling the definition of the error function, Eq. \\eqref{eq:bigP} becomes\n\\begin{equation}\n  P\\big\\{\\bar{x} - \\varepsilon < \\hat{x} \\leq \\bar{x} + \\varepsilon\\big\\} = \\\n    \\erf\\Big[\\sqrt{\\frac{N}{2}} \\frac{\\varepsilon}{\\sigma(x)}\\Big].\n\\end{equation}\nThen, using the calculated estimation for $\\sigma(x)$ ($S$), and also recalling\nthat $S_{\\hat{x}}= S/\\sqrt{N}$ (Eq. \\eqref{eq:VarN}), the error function reduces to\na function of $\\varepsilon$ and $S_{\\hat{x}}$, or:\n\\begin{equation}\n    \\erf\\Big[\\sqrt{\\frac{N}{2}} \\frac{\\varepsilon}{\\sigma(x)}\\Big] = \\\n    \\erf\\Big[\\sqrt{\\frac{1}{2}} \\frac{\\varepsilon}{S_{\\hat{x}}}\\Big] .\n\\end{equation}\nShould $\\varepsilon$ be chosen to be a function of $S_{\\hat{x}}$, the error\nfunction reduces further and becomes merely an evaluation as multiples ($M$) of\n$S_{\\hat{x}}$ and $\\sqrt{1/2}$. For the first few multiples of the standard\nerror, this is evaluated as\n\\begin{equation}\n  P\\big\\{\\bar{x} - M S_{\\hat{x}} < \\hat{x} \\leq \\bar{x} + M S_{\\hat{x}} \\big\\} =\n  \\begin{cases}\n    .683, & M = 1, \\\\\n    .954, & M = 2, \\\\\n    .997, & M = 3\n  \\end{cases}  .\n\\end{equation}\n\nThe central limit theorem tells us that the sample mean follows a normal\ndistribution, regardless of the distribution of the underlying sample, as the\nnumber of samples approaches infinity. This\nmeans that no matter what distribution is being sampled, the sampled mean will\nhave this expected behavior. As a result, given a calculated value for\n$\\hat{x}$ and $S$, the probability that $\\hat{x}$ is near $\\bar{x}$ is known\nand calculable.\nFurther, the central limit theorem shows that this distribution is approached\nvery quickly as N increases, with most problems only requiring $N > 30$\n\\cite{lewis_computational_1984}. Note\nthat N is not the total number of samples, but the number of samples required to\ncalculate each mean.\n\nHowever, for the central limit theorem to hold a number of\nrequirements must be satisfied. All of the quantities in Section\n\\ref{subsubsec:PopStat} have the underlying assumption that\neach $x_i$ is assumed to be randomly sampled and\nindependent of other $x_i$. If some region of phase space is omitted\naccidentally, these values will not be reflective of the true $f(x)$, and so\n$\\hat{x}$ will not approximate $\\bar{x}$. Further, for $S$ to be a\ngood approximation of $\\sigma(x)$, a large number of N samples must contribute\nto the calculation of $\\hat{x}$. 
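\nThe coverage fractions quoted above are straightforward to verify numerically. The following sketch (illustrative only; it assumes NumPy is available) draws sample means from a decidedly non-normal exponential distribution and checks how often $\\hat{x}$ falls within $M S_{\\hat{x}}$ of $\\bar{x}$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(42)\nN, trials = 100, 20000\nsamples = rng.exponential(1.0, size=(trials, N))  # true mean = 1.0\nx_hat = samples.mean(axis=1)\ns_xhat = samples.std(axis=1, ddof=1) / np.sqrt(N)  # standard error\nfor M in (1, 2, 3):\n    frac = (np.abs(x_hat - 1.0) <= M * s_xhat).mean()\n    print(M, frac)  # approximately 0.68, 0.95, 0.997\n\\end{verbatim}\n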
The central limit theorem also assumes that\n$f(x)$ is a probability density function that can be sampled and whose variance\nexists. As a result, one must be reasonably sure that all of these\nrequirements are satisfied if using Monte Carlo sampling methods.\n\n\\subsubsection{The Figure of Merit}\n\\label{subsubsec:FOM}\n\nThe equations in the preceding sections describe how to estimate the statistics\nof a population given a finite number of samples. In radiation transport, a\nuser seeks to estimate some response, the relative error associated with that\nresponse solution, and the time it takes to obtain those values. Equation\n\\eqref{eq:S to R} described the relationship between the sample variance, the\nrelative error, and the number of particles as\n\\begin{equation*}\nS^2\\:\\propto\\: R^2\\:\\propto\\:\\frac{1}{N} .\n\\end{equation*}\nThe relationship between the relative error, $R$, and the number of particles,\n$N$ (recall that $R^2\\:\\propto\\:\\frac{1}{N}$),\nwill therefore be some constant value $C_1$:\n\\begin{equation*}\n C_1 = R^2N .\n\\end{equation*}\nAs a problem is simulated, the number of particles run, $N$, will increase\nproportionally to the computational transport time, $T$. Therefore, the\nrelationship between $R$ and $T$ should also be a constant, $C_2$:\n\\begin{equation*}\n  C_2 = R^2T .\n\\end{equation*}\nThe figure of merit (FOM) shown in Eq. \\eqref{eq:FOM}\nis the most commonly reported metric built on this relationship.\nIt is widely used in quantifying the effects of variance reduction methods.\nBecause it is the inverse of the squared relative error multiplied by the time, a\n``good'' result--a low relative error obtained in a short amount of\ntime--yields a FOM with a high numerical value.\n\\begin{equation}\nFOM=\\frac { 1 }{ R^{ 2 }T }\n\\label{eq:FOM}\n\\end{equation}\nFurther, a user may desire to determine how long a problem must be run to obtain\na desired relative error. In that case, Eq. \\eqref{eq:FOM} can simply be\nrearranged to\n\\begin{equation*}\n  R = \\frac{1}{(FOM \\cdot T)^{1/2}} .\n\\end{equation*}\n\nThe figure of merit is a very useful tool, but it is limited by the statistical\nprecision in calculating $R$.\nIt is worth noting that early on in a transport simulation,\nwhen too few particles have been simulated to\neffectively capture $S$ or $\\hat{x}$, the FOM will oscillate.\nEventually, the FOM will converge to a relatively constant value. This behavior can\nalso be used to determine whether one has sufficiently sampled the\nregion in which the response is being quantified.\n\n\\subsection{Variance Reduction Methods for Monte Carlo Radiation Transport}\n\\label{subsec:MCVR}\n\nNow that the different parameters that affect the variance in a problem have\nbeen introduced, let us transition to the different variance reduction techniques\nthat are available in Monte Carlo radiation transport packages.\nVariance reduction\ntechniques in radiation transport methods fall\ninto four general categories: truncation methods, population control methods, modified\nsampling methods, and partially-deterministic methods. Of importance for this\nproject are\npopulation control methods and modified sampling methods, which are discussed in\na number\nof the papers referenced herein. 
Truncation methods and partially-deterministic\nmethods do not contribute to and are not the focus of this work,\nso will only be touched upon briefly.\n\nA later section (\\ref{sec:software}) of this dissertation will discuss the\nchoice of software packages used\nfor this project. In particular, our hybrid methods software package is designed\nto accelerate the Monte Carlo radiation transport package MCNP\n\\cite{hendricks_mcnp_1985, brown_mcnp_2002, mcnp_manual_v1}. Variance reduction\nmethods are available in a number of other Monte Carlo radiation transport\npackages, and are by no\nmeans limited to a particular code. However, the implementation of methods\ndiffers between software\nand the specifics may differ slightly. The discussion\nfor the remainder of this subsection will be centered around the specifics of\nthe code used for this project.\n\n\\subsubsection*{Population Control Methods}\n\nPopulation control methods adjust the particle population in the problem to\nobtain better sampling in regions of interest by preferentially increasing or\ndecreasing the particle population.\nThe first two types of population control methods that will be discussed\nare called splitting and rouletting.\nSplitting is a method by which the particle population can be increased by\nsplitting a single higher-weight particle into several lower-weight particles.\nRouletting, conversely, reduces the particle population by stochastically\nkilling particles. Particles that survive a rouletting routine have their weight\nadjusted higher, thereby conserving weight in the routine.\nBoth splitting and roulette maintain a\nfair game by adjusting the particle weights as each routine is\nperformed; statistically, the sum of the child particle weights\nis the same as the parent\nweight as it entered the routine.\n\nTo use population control methods effectively as a variance reduction technique,\nsplitting is performed in high-importance regions to increase the particle\ncount, and thus the sampling, in important regions. Conversely, rouletting is\nperformed in\n low-importance\n  regions to reduce the particle population in regions that are unimportant to\n  the tally result.\nSplitting and rouletting can be applied to include geometry, energy and time.\n\nThe weight window combines splitting and rouletting to keep particles within a\ndesired weight range. Figure \\ref{fig:ww-mcnp}\nillustrates the different processes a particle may go through when passing\nthrough a\nweight window.\n%\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=0.5\\textwidth]{./chapters/lit_review/figures/ww-mcnp.png}\n  \\caption[Weight window illustration]{Cartoon illustration of a weight window,\n    adapted from \\cite{brown_mcnp_2002, mcnp_manual_v2}}\n  \\label{fig:ww-mcnp}\n\\end{figure}\n%\nThe top  particle entering the weight window is a single, high-weight\nparticle. The weight of this particle is above the weight window bounds, so as\nit enters the weight window it is split into multiple particles whose weight is\nwithin the window bounds. The second particle entering the window is within the\nweight window bounds, so it retains its weight and is not split or rouletted.\nThe last two particles entering the window have weights lower than the bound.\nThey undergo a rouletting routine and one particle is killed and the surviving\nparticle is increased in weight. As these particles leave the window, all of\nthem have weights within the range of the window. 
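\nIn code form, the routine just illustrated might be sketched as follows (an illustrative sketch only, not MCNP's implementation; the cap on the number of splits and the survivor weight are arbitrary choices here):\n\\begin{verbatim}\nimport random\n\ndef weight_window(w, w_low, w_high):\n    # Return the list of post-window particle weights.\n    if w > w_high:  # split into several in-window copies\n        n = min(int(w / w_high) + 1, 10)\n        return [w / n] * n\n    if w < w_low:   # roulette with survivor weight w_s\n        w_s = 0.5 * (w_low + w_high)\n        return [w_s] if random.random() < w / w_s else []\n    return [w]      # already within the window\n\\end{verbatim}\nThe expected total weight is preserved in both branches, which is the fair-game requirement discussed below.\n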
This will reduce the variance\nof the particles contributing to a tally in that region.\n\nWhile the use of weight windows in a problem helps to keep a more ideal\ndistribution of particle weights, the user is\nfaced with calculating a significant number of parameters to\ndetermine weight windows for the entire problem. In the best case, with an\nexperienced user, this may just take time. With an inexperienced user or a\ncomplex problem this can\nbe insurmountable, and may be too difficult to do without some automated\nassistance.\n\nIt should be noted that\nwhile splitting and rouletting can be performed on a single variable--angle,\nenergy, space, or time--the weight windows generally used are either energy-space\ndependent or space-time dependent. Further, the weight window will split or\nroulette depending on the particle weight entering the window. Splitting and\nrouletting on their own either increase or decrease the particle weight\nin proportion to the ratio of cell importances, $I'/I$, no matter what\nthe entering particle weight is. As a\nresult, poorly chosen splitting or rouletting parameters can still\nleave significant tally variance, because particle weights may still span a wide\nrange.\n\n\\subsubsection*{Modified Sampling Methods}\n\nModified sampling methods adjust transport by sampling from a different probability\ndistribution function than the actual distribution for the problem. This is\npossible if, as with population control methods, the particle weights are adjusted\n accordingly.\n The new probability distribution function should bias particles toward regions of high\n importance to the problem tallies. In MCNP, a number of modified sampling\n methods exist.\n These include the exponential transform, implicit capture, forced collisions, source\n biasing, and neutron-induced photon production biasing.\n\nThe exponential transform modifies particle transport from the analog problem by\nartificially modifying the macroscopic cross section, and thus the\ndistance-to-collision, to move particles in important directions. In directions\nof higher importance, the cross section is reduced, and particles can flow more\nfreely. In directions of lower importance, the cross section is increased, and\nparticles more frequently interact, thereby increasing their probability of\ndirectional change or absorption. The transformed cross section used by the\nexponential transform is defined by\n%\n\\begin{equation}\n  \\Sigma_t^* = \\Sigma_t(1-p\\mu) ,\n  \\label{eq:ExTrns}\n\\end{equation}\n%\nwhere $\\Sigma_t^*$ is the transformed total cross section, $\\Sigma_t$ is the\ntrue total cross section, $p$ is the transform parameter, and $\\mu$ is the\ncosine of the angle between the preferred direction and the particle's transport\ndirection \\cite{mcnp_manual_v1, mcnp_manual_v2, hendricks_mcnp_1985}.\n\nBecause the particle's transport is adjusted in the exponential transform, the\nparticle weight must be adjusted accordingly. This is given by\n%\n\\begin{equation*}\n  \\begin{split}\n  w^* &= \\frac{\\Sigma_t e^{-\\Sigma_t s}}{\\Sigma_t^* e^{-\\Sigma_t^* s}} \\\\\n      &= \\frac{e^{-p \\Sigma_t \\mu s}}{1-p\\mu}, \\\\\n  \\end{split}\n  \\label{eq:ExTrnsWt}\n\\end{equation*}\n%\nwhere $s$ is the distance traveled by the particle to the sampled collision site. This weight adjustment\nensures that the particle weight is conserved throughout transport, even as\nthe cross section is altered. 
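\nA minimal sketch of how a single flight would be sampled under this transform (illustrative only, with the transform parameter written as $p$ throughout):\n\\begin{verbatim}\nimport math, random\n\ndef transformed_flight(sigma_t, p, mu):\n    sigma_star = sigma_t * (1.0 - p * mu)        # transformed cross section\n    s = -math.log(random.random()) / sigma_star  # distance to collision\n    w_mult = math.exp(-p * sigma_t * mu * s) / (1.0 - p * mu)\n    return s, w_mult  # weight factor restores the analog expectation\n\\end{verbatim}\n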
Because the cross section in the problem is both\nenergy and material dependent (and the material depends on the geometry), the exponential\ntransform will be dependent on space and energy, and particles will be biased in\nboth. While a powerful method, the exponential transform is quite difficult to\nuse, and if $p$ is ill-chosen this method can perform quite poorly. Further,\nthe user has to know quite a bit about the problem physics and materials to choose an\noptimal quantity for $p$.\n\nSource biasing, rather than preferentially adjusting particles' directionality\nby way of adjusting the cross sections, biases particles from their origin.\nSource biasing has the option to bias particles in energy, direction, and space\n(if the source is volumetric). This allows the user to choose importance\nseparately for each variable. First, the source variable (let us consider\nenergy for the moment) is defined as a series of bins or a function. Second, the bins are\nassigned probabilities of occurrence according to their importance. An energy bin\nwith a high importance will be assigned a high probability of occurrence, and a\nbin with low importance will be assigned a low probability of occurrence. As\nparticles are born in the bins with higher importances, they have their\nweights adjusted by the ratio of the analog to the biased probability, or $w^* =\np/p^*$. Here $p$ refers to the probability density function for the source\nparticles; it bears no relation to the exponential transform parameter.\n\nSource biasing is a very simple method that can reduce the solution variance\nsignificantly. However, if a user chooses bin sizes or a function that does not\nproperly reflect the particles' importances in the problem, the source will be\npoorly sampled. As a result, sampling may be very inefficient and the figure of\nmerit will decrease. In MCNP, if poor parameters are chosen for this method, the\nuser is given a warning.\n\n\\subsubsection*{Truncation Methods}\n\nTruncation methods stop tracking particles in a region of phase-space that is of\nlow importance to the tally. These methods can be used in space (a vacuum boundary\ncondition), energy (eliminate particles above or below a specified energy), or\ntime (stop\ntracking after a given time). To effectively use these methods, the user must be\naware of\nparticles' importance to a tally result. If particles that are important to a\nresult are\n eliminated with a truncation method, the tally will lack the contribution from\n that particle's phase-space, and will be underestimated as a result. Further,\n as discussed in Section \\ref{subsubsec:CLT}, the central limit theorem only\n holds assuming that the histories tracked are independent and drawn from\n identical distributions.\n Truncating particles of high importance removes the independence from the\n sampling and modifies the underlying PDF being sampled,\n so the estimate of the response will be wrong.\n\nIt is important in using any variance reduction technique to ensure that a\nfair game\nis being played. The user must ensure that the fundamental nature of the problem\nis not\nbeing changed by using a variance reduction technique, or the answer will not be\nrepresentative of the original problem. 
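\nAs a concrete example of a fair game, the source-biasing weight correction $w^* = p/p^*$ described above can be sketched as follows (illustrative only; \\verb|p_true| and \\verb|p_biased| stand for the analog and biased bin probabilities):\n\\begin{verbatim}\nimport random\n\ndef sample_biased_source(p_true, p_biased):\n    # Pick a bin from the biased pdf and correct the birth weight by p/p*.\n    u, cum = random.random(), 0.0\n    for i, q in enumerate(p_biased):\n        cum += q\n        if u <= cum:\n            return i, p_true[i] / q\n    return len(p_true) - 1, p_true[-1] / p_biased[-1]\n\\end{verbatim}\nAveraged over many births, each bin then contributes its analog weight, so the expected source is unchanged even though the sampling is biased.\n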
Automated variance reduction techniques aim to\neliminate this uncertainty for the user by estimating the importance of\nparticles in some\n way and then setting up variance reduction parameters automatically.\nThe remainder of this chapter review will focus on efforts to\nautomate population\ncontrol methods and modified sampling methods for variance reduction.\n\n\\subsection{Automated Variance Reduction Methods for Monte Carlo Radiation\nTransport}\n\\label{subsec:AutomatedMCVR}\n\nSection \\ref{subsec:MCVR} described some methods that one may use to reduce the\nvariance in Monte Carlo radiation transport tallies. These methods, if used\ncorrectly, can significantly increase the transport efficiency in a Monte Carlo\nsimulation. However, correct use of these methods often requires intelligent\nselection of variance reduction (VR) parameters, which is a non-trivial task. Users\nhave found themselves often performing several trial runs before choosing final\nquantities for the VR\nparameters in their problems, which was computationally\ninefficient and required significant knowledge of Monte Carlo and variance\nreduction to execute well \\cite{booth_automatic_1982}.\n\nThis has been addressed by using Monte Carlo to sample the problem in an initial\ncalculation to determine more favorable variance reduction parameters automatically.\nBooth and Hendricks,\nrecognizing that choosing optimal weight window values for energy- and space-\ndependent weight windows was difficult even for experienced users, proposed two\ntools for Monte Carlo variance reduction in parallel. The first was a\nMonte Carlo importance generator \\cite{booth_automatic_1982} that could be used\nto make informed decisions on cell importances throughout the problem. The\nsecond method, a Monte Carlo generated weight window generator,\ncalculates the weight window values automatically for a given problem\n\\cite{hendricks_code-generated_1982}.\nThe importance generator estimates a cell's importance\nby tracking the weights of the particles in the cell, or\n\\begin{equation}\n  Importance  = \\frac{\\text{score of particles leaving the cell}}\n                     {\\text{weight leaving the cell}}.\n\\label{eq:BoothImp}\n\\end{equation}\nThe weight window generator calculates weight window values with\n\\begin{subequations}\n\\begin{equation}\n  W_{i,low} = \\frac{1}{kN}\\big(\\Sigma W_{i,in} + \\Sigma W_{i,out} \\big)\n\\end{equation}\n\\begin{equation}\n  W_{i,high} =\n  \\begin{cases}\n    k*W_{i,low} & \\text{if } W_{i,low} \\neq 0 \\\\\n    \\infty & \\text{if } W_{i,low} = 0\n  \\end{cases}  \\:,\n  \\label{eq:hendricksWW}\n\\end{equation}\n\\end{subequations}\nwhere $W_{i,low}$ and $W_{i,high}$ are the weight window lower and upper weight\nbounds respectively, $W_{i,in}$ and $W_{i,out}$ are the total weight entering\nand leaving the cell, $N$ is the number of source particles, and $k$ is some weight\nwindow width (a constant that Hendricks set to 5).\n\nIn his\npaper, Booth notes that the weight window target value derived from the\nimportance generator was chosen so that the\ntrack weight times the expected score in the tally region (for unit track\nweight) was approximately constant. Booth's importance generator saw\nimprovements in the FOM between 1.5-8x when compared to the analog run for the\ntest problem presented.\n\nBooth and Hendricks combined these two methods to\nautomate weight window generation based on phase-space importance\n\\cite{booth_deep_1982, booth_importance_1984}. 
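\nIn code form, the generator's bound equations above amount to only a few lines (an illustrative sketch using Hendricks' choice of $k=5$; the variable names are ours):\n\\begin{verbatim}\nimport math\n\ndef window_bounds(w_in_total, w_out_total, n_source, k=5.0):\n    # Lower and upper weight-window bounds from the tracked cell weights.\n    w_low = (w_in_total + w_out_total) / (k * n_source)\n    w_high = k * w_low if w_low > 0.0 else math.inf\n    return w_low, w_high\n\\end{verbatim}\n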
They showed that the combination\nof the importance estimator and the weight window generator was a successful way\nto perform variance reduction in deep-penetration problems. However, their\nmethod depended on several iterations of importance-determining runs to obtain a\nsatisfactory estimation of the importance. For a 300 cm slab problem, the FOM was\nincreased from 1.9 to 75, but it took 10 subsequent runs to obtain the FOM of 75,\nand these runs ranged from 1.2 minutes (for the analog problem) to 42 minutes (for\nthe 9th run) \\cite{booth_importance_1984}.\n\nIt should be noted that both the importance generator and the weight window\ngenerator use a lower-fidelity\nMonte Carlo run to gain initial estimates of cell importances and generate\nvariance reduction parameters from them to bias a more computationally-intensive\nand higher-fidelity\nrun. Naturally, the variance reduction parameters generated by using these\ntechniques are limited by the statistical precision in the regions used to\ngenerate them. Hendricks also pointed out that the weight window generator\ntended to populate all regions of phase space equally, which he conceded was\nnot ideal for all problems \\cite{hendricks_code-generated_1982}.\nFurthermore, for deep-penetration particle transport, the\nvariance reduction parameters for low-flux regions are exceedingly difficult to\ngenerate, resulting in unfavorable VR parameters.\n\nThe MCNP \\cite{mcnp_manual_v1, brown_mcnp_2002} weight window generator has been\nextended beyond the initial space- and energy-dependent implementation described in\nBooth's paper. It now has the ability to automatically generate space-, energy-,\nand angle-dependent weight windows. The importance generator in MCNP has also been\nextended to time importance, the values of which can be used as splitting or\nrouletting parameters \\cite{brown_mcnp_2002}, and can be optimized on a grid\nindependent of the MCNP geometry \\cite{evans_enhanced_1998}.\n\nAs with Booth and Hendricks'\noriginal implementations, this updated weight window generator still relies on\nadequate sampling to obtain sufficient weight window parameters. When additional\ndegrees of freedom, like angle-dependence, are added, the time\nto converge on those parameters grows even longer. The weight window\ngenerator also only allows a single tally to be optimized at a time, so\nmultiple tallies cannot be optimized simultaneously. Finally, the weight window\ngenerator still requires user input and updating to seed the weight\nwindow solution. The user must choose the meshing of the problem and have some\nintuition as to how the problem should be subdivided. 
Van Riper\net al.\\ found that, depending on user\nexperience, the weight window generator produced FOMs that differed by factors of 2\nto 10 \\cite{van_riper_generation_1995} for the problems that they\ninvestigated.\n\n", "meta": {"hexsha": "d321fe4ef37667cb466e5f7f02866471ce8588c1", "size": 29320, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/lit_review/var_reduct.tex", "max_stars_repo_name": "rachelslaybaugh/munk-disseration", "max_stars_repo_head_hexsha": "e6dc6d6a8d5613cb30bca7dc4a2d419ad1b36e65", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/lit_review/var_reduct.tex", "max_issues_repo_name": "rachelslaybaugh/munk-disseration", "max_issues_repo_head_hexsha": "e6dc6d6a8d5613cb30bca7dc4a2d419ad1b36e65", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/lit_review/var_reduct.tex", "max_forks_repo_name": "rachelslaybaugh/munk-disseration", "max_forks_repo_head_hexsha": "e6dc6d6a8d5613cb30bca7dc4a2d419ad1b36e65", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.779286927, "max_line_length": 91, "alphanum_fraction": 0.7747271487, "num_tokens": 7323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891261650248, "lm_q2_score": 0.6825737408694988, "lm_q1q2_score": 0.5612047075486853}}
{"text": "\\chapter{Physically Based Matrix Distribution}\n\\label{chapter:pbmd}\n\n\n{\\bf This chapter will be updated.}\nWe discuss how \\plapack locally stores the data associated with the \ndifferent linear algebra objects.\nLet us consider a column vector $ x $ distributed to $p$ nodes in a \n$ r \\times c $ mesh.  \nFor simplicity, we will assume that the column vector is\naligned to the first entry of the template column vector, which is\npartitioned using a distribution blocking size $ n_b $ and that\nthe template column vector is induced in row-major order.\nThus\n\\[ x = \\left( \\begin{array}{c}\nx_0 \\\\ \\hline\nx_1 \\\\ \\hline\n\\vdots \\\\ \\hline\nx_{N-1}\n\\end{array}\n\\right)\n\\]\nwhere $ x_i $ is of length $ n_b $ (except perhaps the last sub-vector).\nNode $ (i,j) $ will own the sub-vectors\n\\[\n\\left( \\begin{array}{c}\nx_{i * r + j} \\\\ \\hline\nx_{i * r + j + p} \\\\ \\hline\nx_{i * r + j + 2p} \\\\ \\hline\n\\vdots\n\\end{array}\n\\right)\n\\]\nThis vector is locally stored in memory so that the order of\nthe elements is preserved (i.e., elements of $ x_{i * r+j+kp} $\nwill precede those of $ x_{i* r+j+(k+1)p} $).\nThe stride in memory between elements will be internally determined and\ncan be inquired by one of the subsequently described calls.\n\nNext, let us consider a matrix $ A $ distributed to the same mesh.\nFor simplicity, we assume that $ A $ is aligned with the top-left\nelement of the template matrix induced by row and column template vectors\nboth induced in column-major order.\nThen,\n\\[\nA = \\left( \\begin{array}{c | c | c | c }\nA_{0,0} & A_{0,1} & \\cdots & A_{0,N-1} \\\\ \\hline\nA_{1,0} & A_{1,1} & \\cdots & A_{1,N-1} \\\\ \\hline\n\\vdots & \\vdots &        & \\vdots \\\\ \\hline\nA_{M-1,0} & A_{M-1,1} & \\cdots & A_{M-1,N-1}\n\\end{array}\n\\right)\n\\]\nand the following sub-matrix is assigned to node $ (i,j) $:\n\\[\n\\tilde{A}_{ij} = \\left( \\begin{array}{c | c | c || c | c | c || c }\nA_{i,j * r} & \\cdots & A_{i,(j+1)*r-1} &\nA_{i,j * r+p} & \\cdots & A_{i,(j+1)*r-1+p} & \\cdots\n\\\\ \\hline \\hline\nA_{i+r,j * r} & \\cdots & A_{i+r,(j+1)*r-1} &\nA_{i+r,j * r+p} & \\cdots & A_{i+r,(j+1)*r-1+p} & \\cdots\n\\\\ \\hline \\hline\nA_{i+2r,j * r} & \\cdots & A_{i+2r,(j+1)*r-1} &\nA_{i+2r,j * r+p} & \\cdots & A_{i+2r,(j+1)*r-1+p} & \\cdots\n\\\\ \\hline \\hline\n\\vdots &  & \\vdots & \\vdots &  & \\vdots &\n\\end{array} \\right)\n\\]\nIt is this matrix $ \\tilde{A}_{ij} $ that is stored in a local\ntwo-dimensional array, using FORTRAN style column-major\norder storage, with a leading dimension determined internally.\n\nFinally, let us discuss the storage of a projected\nmultivector.  
Notice that if the column vector $ x $ discussed above\nis spread within a column of nodes, or gathered\nto the $ i $th node in each column,\nthe following sub-vectors will be collected on\nnode $ (i,j) $:\n\\[\n\\tilde{x}_{ij} = \\left( \\begin{array}{c}\nx_{j * r} \\\\ \\hline\n\\vdots \\\\ \\hline\nx_{(j+1)*r-1} \\\\ \\hline \\hline\nx_{j * r+p} \\\\ \\hline\n\\vdots \\\\ \\hline\nx_{(j+1)*r-1+p} \\\\ \\hline \\hline\n\\vdots\n\\end{array} \\right)\n\\]\nNotice that the sub-vectors from within a column of nodes\nare interleaved so that the indices of sub-vectors are\nstrictly increasing (the order in the global vector is maintained).\nIt is this column vector that is then stored in a local linear array,\nwith a stride determined internally.", "meta": {"hexsha": "691a0a788cbfa36af94c54344afa14657a100cc1", "size": 3214, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/libflame/old/60-pbmd.tex", "max_stars_repo_name": "haampie/libflame", "max_stars_repo_head_hexsha": "a6b27af9b7ef91ec2724b52c7c09b681379a3470", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 199, "max_stars_repo_stars_event_min_datetime": "2015-02-06T06:05:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T05:20:33.000Z", "max_issues_repo_path": "docs/libflame/old/60-pbmd.tex", "max_issues_repo_name": "haampie/libflame", "max_issues_repo_head_hexsha": "a6b27af9b7ef91ec2724b52c7c09b681379a3470", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 44, "max_issues_repo_issues_event_min_datetime": "2015-05-10T18:14:52.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-22T08:22:10.000Z", "max_forks_repo_path": "docs/libflame/old/60-pbmd.tex", "max_forks_repo_name": "haampie/libflame", "max_forks_repo_head_hexsha": "a6b27af9b7ef91ec2724b52c7c09b681379a3470", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 70, "max_forks_repo_forks_event_min_datetime": "2015-02-07T04:53:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T05:20:36.000Z", "avg_line_length": 34.1914893617, "max_line_length": 73, "alphanum_fraction": 0.6624144368, "num_tokens": 1125, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.6825737279551494, "lm_q1q2_score": 0.5612046939567246}}
{"text": "\n\\chapter{\\label{cha:Analytical-Solns}Analytical Solutions}\n\n\n\\section{\\label{sec:TractionProblems}Traction Problems}\n\nComputation of analytical solutions for elastostatic problems over\nregular domains is a relatively straightforward procedure. These problems\nare typically formulated in terms of a combination of displacement\nand traction boundary conditions, and such problems provide a good\ntest of the code accuracy, as well as specifically testing the implementation\nof traction boundary conditions. We present here two simple problems\nfor this purpose.\n\n\n\\subsection{Solutions Using Polynomial Stress Functions}\n\nOur derivation follows the procedures outlined in Timoshenko and Goodier\n\\cite{Timoshenko:Goodier:1987}, and we restrict ourselves to two-dimensional\nproblems. Any problem in elastostatics must satisfy the equilibrium\nequations\n\\begin{gather}\n\\frac{\\partial\\sigma_{xx}}{\\partial x}+\\frac{\\partial\\sigma_{xy}}{\\partial y}+X=0\\label{eq:traction:1}\\\\\n\\frac{\\partial\\sigma_{yy}}{\\partial y}+\\frac{\\partial\\sigma_{xy}}{\\partial x}+Y=0,\\nonumber \n\\end{gather}\nwhere \\textit{X} and \\textit{Y} are the body force components in the\n\\textit{x} and \\textit{y} directions, respectively, and the stress\ncomponents are given by $\\sigma$. In the problems considered here,\nwe neglect body forces, so \\textit{X} and \\textit{Y} disappear from\nthe equilibrium equations. The solution must also satisfy the boundary\nconditions, given as surface tractions over the surface of the body.\nFinally, the solution must satisfy the conditions of compatibility,\nwhich may be expressed as:\n\\begin{equation}\n\\left(\\frac{\\partial^{2}}{\\partial x^{2}}+\\frac{\\partial^{2}}{\\partial y^{2}}\\right)\\left(\\sigma_{xx}+\\sigma_{yy}\\right)=0.\\label{eq:traction:2}\n\\end{equation}\nTo compute the displacement field, it is also necessary to specify\ndisplacement boundary conditions.\n\nEquations \\vref{eq:traction:1} may be satisfied by taking any function $\\phi$\nof \\textit{x} and \\textit{y}, and letting the stress components be\ngiven by the following expressions:\n\\begin{equation}\n\\sigma_{xx}=\\frac{\\partial^{2}\\phi}{\\partial y^{2}},\\;\\sigma_{yy}=\\frac{\\partial^{2}\\phi}{\\partial x^{2}},\\;\\sigma_{xy}=-\\frac{\\partial^{2}\\phi}{\\partial x\\partial y}.\\label{eq:traction:3}\n\\end{equation}\nThe solution must also satisfy the compatibility equations. Substituting\nEquations \\vref{eq:traction:3} into Equation \\vref{eq:traction:2}, we find that the\nstress function $\\phi$ must satisfy\n\\begin{equation}\n\\frac{\\partial^{4}\\phi}{\\partial x^{4}}+2\\frac{\\partial^{4}\\phi}{\\partial x^{2}\\partial y^{2}}+\\frac{\\partial^{4}\\phi}{\\partial y^{4}}=0.\\label{eq:traction:4}\n\\end{equation}\nA relatively easy way to solve a number of problems is to select expressions\nfor $\\phi$ consisting of polynomial functions of \\textit{x} and \\textit{y}\nof second degree or higher. Any polynomial of second or third degree\nwill satisfy Equation \\vref{eq:traction:4}. 
For polynomials of higher degree,\nthey must be substituted into Equation \\vref{eq:traction:4} to determine what\nrestrictions must be placed on the solution.\n\n\n\\subsection{\\label{sub:Analytical-Constant-Traction}Constant Traction Applied\nto a Rectangular Region}\n\nFor this problem, a constant normal traction, $\\overrightarrow{N}$,\nis applied along the positive \\textit{x}-edge of the rectangular region\n($x=x_{0}$), and roller boundary conditions are applied along the\nleft and bottom boundaries (\\vref{fig:Const-tractions}). Since the\ntractions are constant, we assume a second degree polynomial for the\nstress function:\n\\begin{equation}\n\\phi_{2}=\\frac{a_{2}}{2}x^{2}+b_{2}xy+\\frac{c_{2}}{2}y^{2}.\\label{eq:traction:5}\n\\end{equation}\nThis yields the following expressions for the stresses:\n\\begin{equation}\n\\sigma_{xx}=\\frac{\\partial^{2}\\phi_{2}}{\\partial y^{2}}=c_{2}=N,\\;\\sigma_{yy}=a_{2}=0,\\;\\sigma_{xy}=-b_{2}=0.\\label{eq:traction:6}\n\\end{equation}\nFrom Hooke's law, we have:\n\\begin{equation}\n\\epsilon_{xx}=\\frac{\\partial u}{\\partial x}=\\frac{\\left(1-\\nu^{2}\\right)N}{E},\\;\\epsilon_{yy}=\\frac{\\partial v}{\\partial y}=\\frac{-\\nu\\left(1+\\nu\\right)N}{E},\\;\\epsilon_{xy}=\\frac{1}{2}\\left(\\frac{\\partial v}{\\partial x}+\\frac{\\partial u}{\\partial y}\\right)=0.\\label{eq:traction:7}\n\\end{equation}\nThe strain components are thus easily obtained from the stress components.\n\nTo obtain the displacements, we must integrate Equation \\vref{eq:traction:7}\nand make use of the displacement boundary conditions. Integrating\nthe first two of these, we obtain\n\\begin{equation}\nu=\\frac{\\left(1-\\nu^{2}\\right)Nx}{E}+f\\left(y\\right),\\; v=\\frac{-\\nu\\left(1+\\nu\\right)Ny}{E}+f\\left(x\\right).\\label{eq:traction:8}\n\\end{equation}\nSubstituting these into the third expression, we obtain\n\\begin{equation}\n\\frac{\\partial f\\left(x\\right)}{\\partial x}=-\\frac{\\partial f\\left(y\\right)}{\\partial y},\\label{eq:traction:9}\n\\end{equation}\nwhich means that both $f\\left(x\\right)$ and $f\\left(y\\right)$ must\nbe constant. We solve for these constants using the displacement boundary\nconditions along $x=x_{0}$ and $y=y_{0}$. 
Doing this, we obtain\nthe following expressions for the displacement components:\n\\begin{equation}\nu=\\frac{\\left(1-\\nu^{2}\\right)N}{E}\\left(x-x_{0}\\right),\\; v=\\frac{\\nu\\left(1+\\nu\\right)N}{E}\\left(y_{0}-y\\right).\\label{eq:traction:10}\n\\end{equation}\n\n\n\\noindent \\begin{center}\n\\begin{figure}[H]\n\n\n\\caption{\\label{fig:Const-tractions}Problem with constant traction boundary\nconditions applied along right edge.}\n\n\n\\noindent \\centering{}\\includegraphics{analyticalsolns/figs/consttract}\n\\end{figure}\n\n\\par\\end{center}\n", "meta": {"hexsha": "0e8a4e31c7c18db06e83bcc49418a2a34c98c138", "size": 5518, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/userguide/analyticalsolns/analyticalsolns.tex", "max_stars_repo_name": "Grant-Block/pylith", "max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z", "max_issues_repo_path": "doc/userguide/analyticalsolns/analyticalsolns.tex", "max_issues_repo_name": "Grant-Block/pylith", "max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 277, "max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z", "max_forks_repo_path": "doc/userguide/analyticalsolns/analyticalsolns.tex", "max_forks_repo_name": "Grant-Block/pylith", "max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 71, "max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z", "avg_line_length": 48.8318584071, "max_line_length": 281, "alphanum_fraction": 0.753352664, "num_tokens": 1601, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.7879312031126512, "lm_q1q2_score": 0.5611899714486425}}
{"text": "\\documentclass{article}\n\\usepackage{graphicx}\n\\usepackage{pdfpages}\n\n\\begin{document}\n\n\\title{Extensions of L1 Trend Filtering: Seasonality, Outlier Modeling\nand Other Effects with $l_1$ and $l_2$ Regularization}\n\\author{David Johnston}\n\n\\maketitle\n\n\\begin{abstract}\nThis paper extends and further elucidates ideas from Kim, Koh \\& Boyd for L1 regularized\ntrend filtering and gives explicit formulas for reducing these problems down to\nquadratic and linear programming problems that can be solved with various widely available\nconvex optimization libraries. We include seasonality effects with both\n$l_1$ and $l_2$ regularization, outlier modeling, sparse level changes and\nunequally spaced points. This paper also acts a documentation for the formulas used\nin our open source python library \\emph{myl1tf}\n\\footnote{https://github.com/dave31415/myl1tf}.\n\\end{abstract}\n\n\\section{The primary and dual quadratic programming problems}\n\nWe will start off at Section 5.2 of KKB where they state the quadratic programming problem for L1TF\nand also state the dual problem.\n\n\\begin{eqnarray}\n\\mbox{minimize} & \\frac{1}{2} ~ || y - x ||_2^2  + \\lambda ||z||_1 \\\\\n\\mbox{subject to} & z = D x\n\\end{eqnarray}\nThe first is an $l_2$ norm and the second an $l_1$ norm. The Lagrangian, with a dual variable $\\nu \\in \\mathbf{R}^{n-2}$, is\n\\[\nL(x,z,\\nu) =  || y - x ||_2^2  + \\lambda ||z||_1 + \\nu^T (D x -z)\n\\]\n\n\nThe dual function is\n\\[\n\\mbox{inf}_{x,z} ~ L(x,z,\\nu) =\n    \\left\\{\n    \\begin{array}{ll}\n    - \\frac{1}{2} ~ \\nu^T D D^T \\nu + y^T D^T \\nu &  - \\lambda \\mathbf{1} < \\nu < \\lambda \\mathbf{1} \\\\\n  -\\infty  & \\mbox{otherwise.} \\\\\n  \\end{array}\n  \\right.\n\\]\n\nsince\n\n\\begin{equation}\n\\sup_\\nu \\inf_z \\left(-\\nu^T z + \\lambda ||z||_1 \\right) =\n\\left\\{\n    \\begin{array}{ll}\n   0 ~\\mbox{if}~ |\\nu| < \\lambda \\\\\n  -\\infty  & \\mbox{otherwise.} \\\\\n  \\end{array}\n  \\right. \\label{eq:supinf}\n\\end{equation}\n(see appendix for proof) and so the dual problem is\n\n\\begin{eqnarray}\n\\mbox{maximize} & -\\frac{1}{2} ~ \\nu^T D D^T \\nu + y^T D^T \\nu \\\\\n\\mbox{subject to} & - \\lambda \\mathbf{1} < \\nu < \\lambda \\mathbf{1}\n\\end{eqnarray}\n\nFrom the solution $\\nu^*$ of the dual problem, we can compute the L1TF solution,\n\\[\nx^* = y - D \\nu^*\n\\]\n\n\\section{Seasonality}\n\nAs suggested by KKB we can adapt this to add a seasonal component.\n\\begin{eqnarray}\n\\mbox{minimize} & \\frac{1}{2} ~ || y - x -s||_2^2  + \\lambda ||z||_1 \\\\\n\\mbox{subject to} & z = D x \\\\\nand & \\sum_i^P s_i = 0\\\\\nand & s_{i+P} = s_i\n\\end{eqnarray}\n\nKKB does not go into detail on how to proceed with this and so we will begin here by deriving\nthe dual problem. We will do this by putting the constraints on $s$ directly into the\nequation to be minimized.\n\nTo do this, we define $p$ to be the vector of independent variables defining the periodic components.\nThis vector has dimension $(P-1)$ as the $P$-th, dependent value is $-\\sum_{i=1}^{P-1} p_i$ which enforces the constraint that\nthey sum to zero. 
\n\section{Seasonality}\n\nAs suggested by KKB we can adapt this to add a seasonal component.\n\begin{eqnarray}\n\mbox{minimize} & \frac{1}{2} ~ || y - x -s||_2^2  + \lambda ||z||_1 \\\n\mbox{subject to} & z = D x \\\n\mbox{and} & \sum_{i=1}^P s_i = 0\\\n\mbox{and} & s_{i+P} = s_i\n\end{eqnarray}\n\nKKB does not go into detail on how to proceed with this and so we will begin here by deriving\nthe dual problem. We will do this by putting the constraints on $s$ directly into the\nequation to be minimized.\n\nTo do this, we define $p$ to be the vector of independent variables defining the periodic components.\nThis vector has dimension $(P-1)$ as the $P$-th, dependent value is $-\sum_{i=1}^{P-1} p_i$, which enforces the constraint that\nthey sum to zero. This constraint is required if there is to exist a unique solution, as otherwise one could add\nany constant to $x$ and subtract it from $s$ without changing the model.\n\nWe can define $\tilde{p} \in \mathbf{R}^{P}$ as $\tilde{p} = \left( p,-\sum_{i=1}^{P-1} p_i \right) = T p$.\n$T$ will be a $P \times (P-1)$ matrix with a $(P-1)$ identity matrix at the top and an extra row consisting of all -1.\nThe vector $s$ is now just a periodic re-cycling of $\tilde{p}$, which we can represent as a matrix $B$ formed by\nrow-wise stacking $\lceil N/P \rceil$ copies of the $P \times P$ identity matrix and truncating to $N$ rows. So finally\nwe can write $s = B T p \equiv Q p$. By doing this we can rewrite the optimization problem as\n\n\begin{eqnarray}\n\mbox{minimize} & \frac{1}{2} ~ || y - x - Q p||_2^2  + \lambda ||z||_1 \\\n\mbox{subject to} & z = D x\n\end{eqnarray}\nwith the $p \in \mathbf{R}^{(P-1)}$ now unconstrained.
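To make the bookkeeping concrete, the matrices $T$, $B$ and $Q=BT$ can be built in a few lines of numpy (again a sketch of ours with hypothetical names, rather than the myl1tf code):
\begin{verbatim}
# Sketch: seasonal design matrices with s = B T p = Q p.
import numpy as np

def seasonal_matrices(N, P):
    # T: P x (P-1); identity on top, final row of -1's (zero-sum constraint)
    T = np.vstack([np.eye(P - 1), -np.ones(P - 1)])
    # B: N x P; ceil(N/P) stacked P x P identities, truncated to N rows
    B = np.tile(np.eye(P), (-(-N // P), 1))[:N, :]
    return T, B, B @ T                     # Q = B T

T, B, Q = seasonal_matrices(N=10, P=4)
s = Q @ np.array([1.0, -2.0, 0.5])         # periodic seasonal component
assert abs(s[:4].sum()) < 1e-12            # one full period sums to zero
\end{verbatim}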
\n\n\subsection{Seasonality with $l_2$ regularization}\n\nTo improve stability and allow more control over $p$, we will add\nan $l_2$ regularization term and write our problem as\n\begin{eqnarray}\n\mbox{minimize} & \frac{1}{2} ~ || y - x - Q p||_2^2  + \lambda ||z||_1 + \eta ~ \frac{1}{2} \tilde{p}^T \tilde{p}\\\n\mbox{subject to} & z = D x\n\end{eqnarray}\n\nWe will now proceed as before and derive the dual problem. To do this we first\nwrite down the Lagrangian\n\[\nL(x,z,p,\nu,\eta,\lambda) =  R  + \lambda ||z||_1 + \nu^T (D x -z) + \eta ~ \frac{1}{2} p^T H ~p\n\]\nwith  $H = T^T T$ and\n\begin{eqnarray}\nR & \equiv & \frac{1}{2} ~ || y - x - Q p||_2^2 \\\n& = & \frac{1}{2} y^T y + \frac{1}{2} x^T x + \frac{1}{2} p^T Q^T Q p \\\n&  & - y^T x - y^T Q p + x^T Q p\n\end{eqnarray}\n\nWe can calculate $\inf_{x,z,p} L(x,z,p,\nu,\eta,\lambda)$ by setting gradients w.r.t. $x$ and $p$ to zero. The\ngradients of $R$ are\n\n\begin{eqnarray}\n\nabla_x R &  =  &  x^T - y^T + p^T Q^T\\\n\nabla_p R &  =  &  \left( x^T - y^T + p^T Q^T \right) Q \\\n& = & \left(\nabla_x R \right) Q \\\n\end{eqnarray}\nand the gradients of $L$ are\n\begin{eqnarray}\n\nabla_x L &  =  &  x^T - y^T + p^T Q^T + \nu^T D\\\n\nabla_p L &  =  &  \left( x^T - y^T + p^T Q^T \right) Q + \eta ~ p^T H \\\n\end{eqnarray}\n\n\nSetting $\nabla_x L = 0$ yields the equation\n\begin{equation}\ny - x - Qp = D^T \nu \label{eq:residual}\n\end{equation}\nor\n\[\nx = y - Qp - D^T \nu\n\]\nEquation \ref{eq:residual} shows that $D^T \nu$ is the residual.\nSetting $\nabla_p L = 0$ yields\n\[\np = \eta^{-1} H^{-1} Q^T D^T \nu\n\]\nand we can use this last equation for $p$ to solve for $x$, and we can write these solutions as\n\begin{eqnarray}\np^* & = &  \eta^{-1} H^{-1} Q^T D^T \nu \\\n& and & \nonumber \\\nx^* & = & y - D^T \nu - \eta^{-1} Q H^{-1} Q^T D^T \nu\n\end{eqnarray}\nand so once we have a solution for $\nu$ we can use these to obtain separate\nsolutions for $x$ and $p$.\n\nThe more subtle minimization w.r.t. $z$ will result in the same constraint in the dual problem as before. The reader\nshould ensure that they understand how the terms $\lambda ||z||_1 - \nu^T z$ result in the constraint\n$- \lambda \mathbf{1} < \nu < \lambda \mathbf{1}$. One can show that outside of this range, the $\inf_z$ of this term\nis $-\infty$ (at either $z = \pm \infty$) and within it is 0 (at $z=0$) (see appendix), and so any supremum over $\nu$\nmust lie within $|\nu| \le \lambda$.\n\nWe then construct the dual problem by plugging these solutions into the Lagrangian and we arrive at\n\begin{eqnarray}\n\mbox{maximize} & -\frac{1}{2} ~ \nu^T A \nu + y^T D^T \nu \\\n\mbox{subject to} & - \lambda \mathbf{1} < \nu < \lambda \mathbf{1}\n\end{eqnarray}\nwith\n\[\nA = D D^T + \eta^{-1} D Q H^{-1} Q^T D^T\n\]\nWe solve this quadratic programming problem as before for $\nu$ and then use the equations above\nto calculate $x$ and $p$ from $\nu$. It is apparent that as $\eta \rightarrow \infty$ (seasonality is suppressed),\n$p \rightarrow 0$ and we recover the same solution as before for $x$. We cannot use these equations directly in\nthe other limit $\eta = 0$, though we will address that in a later section. However there is no problem setting\n$\eta$ to a negligibly small number and applying these formulas.\n\n\subsection{Seasonality using $l_1$ regularization}\nWe can also use an $l_1$ regularization term on the seasonality terms $p$, which will result in\nsparse solutions for $p$. That is, with a well chosen regularization parameter,\nno seasonality will be used when it isn't really required. This might be more useful than the $l_2$\nregularization described above, though the solution is a little more complicated.\n\nThe situation for $l_1$ regularization is similar to the above for the $x$ coordinate and leads to the\nsame Equation \ref{eq:residual} for the residual. Substituting this solution for $x$ in terms of $\nu$ and $p$\nleads to an optimization problem for $\nu$ and $p$ as follows\n\n\begin{eqnarray}\n\sup_{\nu} ~ \inf_p & -\frac{1}{2} ~ \nu^T D D^T \nu + y^T D^T \nu -\nu^T D Q p + \eta ||T p||_1 \\\n\mbox{subject to} & - \lambda \mathbf{1} < \nu < \lambda \mathbf{1}\n\end{eqnarray}\n\nWe can write $||T p||_1 = ||p||_1 + |u^T p| $ where $u$ is the unity vector $u_i=1$. There are no constraints on\n$p$, so this term $|u^T p|$ can be any non-negative number for some choice of $p$, and so\n$\inf_{p} ~  -\nu^T D Q p + \eta ||T p||_1 = \inf_{p} ~  -\nu^T D Q p + \eta ||p||_1 $ and\n\[\n\inf_{p} ~  -\nu^T D Q p + \eta ||p||_1 =\n    \left\{\n    \begin{array}{ll}\n    0 &  - \eta \mathbf{1} <  Q^T D^T \nu < \eta \mathbf{1} \\\n  -\infty  & \mbox{otherwise.} \\\n  \end{array}\n  \right.\n\]\n\nThis implies that the Lagrangian for $\nu$ to be optimized is the same as the non-seasonal\nversion but with an additional constraint.\n\n\begin{eqnarray}\n\mbox{maximize} & - \frac{1}{2} ~ \nu^T D D^T \nu + y^T D^T \nu \\\n\mbox{subject to} & - \lambda \mathbf{1} < \nu < \lambda \mathbf{1} \\\n& - \eta \mathbf{1} < Q^T D^T \nu < \eta \mathbf{1} \\\n\end{eqnarray}\n\nNotice that the $l_2$ solution leads to a modification of the quadratic form whereas the\n$l_1$ problem instead leads to an additional constraint. Both of these are standard quadratic\nprogramming problems that can be solved with any convex optimization library (such as cvxopt for python).\nA difference however is that the $l_2$ solution included a simple way of calculating $x$ and $p$ once\n$\nu$ has been solved.
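In code, the $l_1$-seasonal dual is the same QP as before with extra rows stacked onto the constraint matrix (a sketch reusing the hypothetical second_difference and seasonal_matrices helpers from the earlier sketches):
\begin{verbatim}
# Sketch: dual QP with the additional constraint |Q^T D^T nu| <= eta.
import numpy as np
from cvxopt import matrix, solvers

def l1tf_seasonal_l1(y, lam, eta, period):
    n = len(y)
    D = second_difference(n)               # from the first sketch
    _, _, Q = seasonal_matrices(n, period) # from the previous sketch
    m = n - 2
    A = Q.T @ D.T                          # (period-1) x m constraint rows
    G = np.vstack([np.eye(m), -np.eye(m), A, -A])
    h = np.concatenate([lam * np.ones(2 * m), eta * np.ones(2 * len(A))])
    P = matrix(D @ D.T)
    q = matrix(-D @ y)
    nu = np.array(solvers.qp(P, q, matrix(G), matrix(h))['x']).ravel()
    return nu                              # x and p follow from the LAD step
\end{verbatim}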
\n\nWith the $l_1$ solution here, we have to solve a second optimization problem\nin order to separate out the individual components. To do this, note that after $\nu$ has been solved for,\nthe first $\chi^2$ term is now constant,\nand so we need to optimize the sum of regularization terms by themselves\n\n\begin{eqnarray}\n\mbox{minimize} & ||D x||_1 + (\eta / \lambda) ||T p||_1 \\\n\mbox{subject to} & y - x - Qp = D^T \nu \\\n\end{eqnarray}\n\nwhich we can write solely in terms of $p$ as\n\n\begin{eqnarray}\n\mbox{minimize over} ~p : & ||D (y - D^T \nu - Q p)||_1 + (\eta/\lambda) ||T p||_1 \\\n\end{eqnarray}\n\nThe trick to solving this is to combine these into a larger vector space with\n\[\nw = [D (y - D^T \nu), 0]^T\n\]\nand the $F$ matrix defined by two matrix blocks\n\begin{equation}\nF=\left[\n\begin{array}{c}\nD Q  \\\n\hline\n(\eta/\lambda) T \\\n\end{array}\right]\n\end{equation}\n\nand write this as\n\begin{eqnarray}\n\mbox{minimize over}~ p : & ||w-F p||_1 \\\n\end{eqnarray}\n\nThis is now in the form of a well known problem, the Least Absolute Deviation (LAD) problem, and can be written as\na linear program. The cvxopt library contains a program for solving this problem. Thus,\nwe can solve for $\nu$ by solving the quadratic programming problem, then solve this equation for $p$, and then\nuse Equation \ref{eq:residual} to solve for $x$.
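For reference, one standard LP formulation of the LAD problem (a generic sketch, not the myl1tf routine) introduces auxiliary variables $t \ge |w - Fp|$ elementwise and minimizes $\mathbf{1}^T t$:
\begin{verbatim}
# Sketch: min_p ||w - F p||_1 as a linear program.
#   minimize 1^T t  subject to  -t <= w - F p <= t
import numpy as np
from cvxopt import matrix, solvers

def lad(F, w):
    m, k = F.shape                       # variables: [p (k), t (m)]
    c = np.concatenate([np.zeros(k), np.ones(m)])
    G = np.block([[F, -np.eye(m)],       #  F p - t <=  w
                  [-F, -np.eye(m)]])     # -F p - t <= -w
    h = np.concatenate([w, -w])
    sol = solvers.lp(matrix(c), matrix(G), matrix(h))
    return np.array(sol['x']).ravel()[:k]
\end{verbatim}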
\n\n\section{Modeling outliers and step functions for more robust fits}\nKKB also suggest including outliers by adding terms $u$ to the model with a strong $l_1$ regularization. This will\nresult in a sparse solution for the $u$ values, which can model things like spikes without throwing off the other terms.\nThese can be treated just like the $l_1$ regularized seasonality and will result in the same quadratic programming\nproblem as before with an additional constraint $-\delta < D^T \nu < \delta$, where $\delta$ is the $l_1$\nregularization parameter for the $u$ vector.\n\nAs before, we end up with a solution for $\nu$ and must still solve the LAD problem to get separate solutions for\n$x, p$ and $u$.\n\begin{eqnarray}\n\mbox{minimize over} ~p,u : & ||D (y - D^T \nu - Q p - u)||_1 + (\eta / \lambda) ||T p||_1 + (\delta/\lambda) ||u||_1 \\\n\end{eqnarray}\n\nWe can employ the same trick as before and write this as a single LAD problem on a larger vector space.\n\begin{eqnarray}\n\mbox{minimize over}~ p,u : & ||w-F z||_1 \\\n\end{eqnarray}\nwith\n\begin{eqnarray}\nz & = & [p , u]^T\\\nw & = & [D (y - D^T \nu), 0, 0]^T \\\n\end{eqnarray}\nand $F$ specified by matrix blocks\n\begin{equation}\nF=\left[\n\begin{array}{c|c}\nD Q  & D   \\\n\\\n\hline\n& \\\n(\eta/\lambda) T & 0_{P \times n} \\\n\\\n\hline\n& \\\n0_{n \times (P-1)} & (\delta/\lambda) I_n \\\n\end{array}\right]\n\end{equation}\n\n\subsection{Step functions}\n\nRather than spikes ($\delta$-functions) we can also consider step functions (or Heaviside functions).\nThe solution is nearly identical to the above except that we replace matrix $D$ in the upper right\nhand corner with $D H$, where $H$ is the matrix with 1 in its upper triangle and diagonal and zero\nin its lower triangle (not to be confused with $H = T^T T$ from the $l_2$ section). This is because step functions are simply cumulative sums of delta functions.\n\nWith both spikes $u$ and step function transitions $h$ (regularized by $\gamma$), we can write the\nequations\n\n\begin{eqnarray}\nz & = & [p , u, h]^T\\\nw & = & [D (y - D^T \nu), 0, 0, 0]^T \\\n\end{eqnarray}\nand $F$ specified by matrix blocks\n\begin{equation}\nF=\left[\n\begin{array}{c|c|c}\nD Q  & D  & D H \\\n&\\\n\hline\n&\\\n(\eta/\lambda) T & 0_{P \times n} & 0_{P \times n}\\\n&\\\n\hline\n&\\\n0_{n \times (P-1)} & (\delta/\lambda) I_n & 0_{n}\\\n&\\\n\hline\n&\\\n0_{n \times (P-1)} & 0_n & (\gamma/\lambda) I_n \\\n\end{array}\right]\n\end{equation}\n\nBy now, one should notice the pattern that adding any new $l_1$ regularization\nterms simply requires a modification of the above equations, which just requires\nthat one express the intentions in matrix form.\n\n\section{L1 norms for everything}\n\nRather than using an $l_2$ norm on the residual we can use an $l_1$ term for this and so use\n$l_1$ norms for everything. An $l_1$ norm on the residual is another way of giving less\nweight to outliers. It also simplifies the solution, as there are no quadratic programming\nproblems to solve. We can use the same trick of writing the whole optimization as a single\nLAD problem in a higher dimensional space.\n\nMinimize the following for $x$, $p$ and $h$:\n\begin{equation}\n||y - x - Q p -H h||_1 + \lambda_1 ||D_1 x - g||_1 + \lambda_2 ||D_2 x||_1\n+ \eta ||T p||_1 + \delta ||h||_1\n\end{equation}\n\nHere we are considering first and second derivatives: matrices $D_1$ and $D_2$ plus the seasonality\nterm and level changes. In addition we allow for $g$, a non-zero preferred first derivative. As before\nwe can write this as a single LAD problem:\n\[\n\mbox{minimize over}~ z :  ||w-F z||_1\n\]\nwhere\n\n\begin{eqnarray}\nz & = & [x, p , h]^T\\\nw & = & [y, -\lambda_1 g, 0, 0, 0]^T \\\n\end{eqnarray}\nand $F$ specified by matrix blocks\n\begin{equation}\nF=\left[\n\begin{array}{c|c|c}\nI_n  & Q  & H \\\n&\\\n\hline\n&\\\n-\lambda_1 D_1 & 0_{m \times (P-1)} & 0_{m \times n}\\\n&\\\n\hline\n&\\\n-\lambda_2 D_2 & 0_{m \times (P-1)} & 0_{m \times n}\\\n&\\\n\hline\n&\\\n0_{P \times n} & -\eta T & 0_{P \times n} \\\n&\\\n\hline\n&\\\n0_{n} & 0_{n \times (P-1)} & -\delta I_n\n\end{array}\right]\n\end{equation}\n\n\n\section{Implementation}\nThe Github repository https://github.com/dave31415/myl1tf contains an implementation of this\nL1TF modeling with seasonality. This was a fork of the repository\nhttps://github.com/elsonidoq/py-l1tf by Pablo Zivic which implements the\nsimpler version without seasonality and other effects. Both versions are in python and\nuse the python cvxopt library to solve the convex optimization problems. Our\nversion contains some test programs as well. For example, the following command,\n\begin{verbatim}\ntest_myl1tf.test_l1tf_on_mock_with_period(period=6, eta=1.0,alpha=0.5)\n\end{verbatim}\ncreates a mock data-set, fits the model and displays the following plot.\n\begin{figure}\n\centering\n\includegraphics[width=500pt]{example.png}\n\end{figure}\n(Note \emph{eta} = $\eta$ and \emph{alpha} = $\lambda$, as \emph{lambda} is a reserved word in python).\n\n\section{Appendix}\n\subsection{Calculating the derivative matrices for unequally spaced points}\n\nThe first and second derivatives of unequally spaced data points can be calculated, at those data points,\nas follows. 
We can use Lagrange's interpolation formula to fit the unique quadratic function through any\nthree points $(x_1,y_1), (x_2,y_2), (x_3, y_3)$.\n\n\begin{eqnarray}\nP(x) & = & \sum_{i=1}^{3} y_i P_i(x) \\\nP_i(x) & = & \prod_{j=1, j \ne i}^{3} \frac{\left( x-x_j \right)}{ \left( x_i - x_j \right)}\n\end{eqnarray}\n\nWritten out in full this is\n\begin{eqnarray}\nP(x) & = & y_1 \left(\frac{x-x_2}{x_1-x_2}\right) \left( \frac{x-x_3}{x_1-x_3}\right) \nonumber \\\n     & + & y_2 \left(\frac{x-x_1}{x_2-x_1}\right) \left( \frac{x-x_3}{x_2-x_3}\right) \nonumber \\\n     & + & y_3 \left(\frac{x-x_1}{x_3-x_1}\right) \left( \frac{x-x_2}{x_3-x_2}\right) \nonumber\n\end{eqnarray}\nThe first derivative is\n\begin{eqnarray}\n\frac{dP}{dx} & = & y_1 \frac{\left(2x-x_2-x_3 \right)}{\left(x_1-x_2\right)\left(x_1-x_3\right)} \nonumber \\\n     & + & y_2 \frac{\left(2x-x_1-x_3 \right)}{\left(x_2-x_1\right)\left(x_2-x_3\right)} \nonumber \\\n     & + & y_3 \frac{\left(2x-x_1-x_2 \right)}{\left(x_3-x_1\right)\left(x_3-x_2\right)} \nonumber\n\end{eqnarray}\nand the second derivative is\n\n\begin{eqnarray}\n\frac{d^2P}{dx^2} & = & y_1 \frac{2}{\left(x_1-x_2\right)\left(x_1-x_3\right)} \nonumber \\\n     & + & y_2 \frac{2}{\left(x_2-x_1\right)\left(x_2-x_3\right)} \nonumber \\\n     & + & y_3 \frac{2}{\left(x_3-x_1\right)\left(x_3-x_2\right)} \nonumber\n\end{eqnarray}\n\nWe are only interested in evaluating these at the middle point $x=x_2$. The second derivative is\nindependent of $x$ anyway, but the first derivative evaluated at $x=x_2$ is\n\n\begin{eqnarray}\n\frac{dP}{dx}(x=x_2) & = & y_1 \frac{\left(x_2-x_3 \right)}{\left(x_1-x_2\right)\left(x_1-x_3\right)} \nonumber \\\n     & + & y_2 \frac{\left(2x_2-x_1-x_3 \right)}{\left(x_2-x_1\right)\left(x_2-x_3\right)} \nonumber \\\n     & + & y_3 \frac{\left(x_2-x_1 \right)}{\left(x_3-x_1\right)\left(x_3-x_2\right)} \nonumber\n\end{eqnarray}\n\nIf we assume that the $x_i$ are sorted to be increasing and there are no duplicates, this can be\nwritten in matrix form, with first and second derivative matrices $F$ and $D$ being\nblock diagonal with block length 3 and specified by the last two equations.\n\nFor equally spaced points of unit separation, i.e. $x_2 = x_1 +1$ and $x_3 = x_1 + 2$, they reduce to the\nusual formulas for finite difference derivatives,\n$F = BlockDiag((-0.5,0,0.5))$ and $D = BlockDiag((1,-2,1))$. These results for unequally spaced\npoints should give results exactly equal to analytical calculations when evaluated on quadratic, linear\nor constant functions. These two facts provide a useful test for any implementation.
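The suggested test is easy to script; here is a small numpy sketch of ours (not from myl1tf) that evaluates the three-point weights at $x=x_2$ and checks them on an exact quadratic:
\begin{verbatim}
# Sketch: 3-point derivative weights at the middle point x2.
import numpy as np

def stencil(x1, x2, x3):
    w1 = np.array([(x2 - x3) / ((x1 - x2) * (x1 - x3)),
                   (2*x2 - x1 - x3) / ((x2 - x1) * (x2 - x3)),
                   (x2 - x1) / ((x3 - x1) * (x3 - x2))])
    w2 = np.array([2.0 / ((x1 - x2) * (x1 - x3)),
                   2.0 / ((x2 - x1) * (x2 - x3)),
                   2.0 / ((x3 - x1) * (x3 - x2))])
    return w1, w2

xs = np.array([0.0, 0.7, 1.9])
ys = 3*xs**2 - 2*xs + 1                    # y = 3x^2 - 2x + 1
w1, w2 = stencil(*xs)
assert np.isclose(w1 @ ys, 6*xs[1] - 2)    # exact first derivative at x2
assert np.isclose(w2 @ ys, 6.0)            # exact second derivative
assert np.allclose(stencil(0, 1, 2)[0], [-0.5, 0.0, 0.5])  # unit spacing
\end{verbatim}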
\n\n\subsection{Proof of Equation \ref{eq:supinf}}\n\nWe wish to prove that\n\begin{equation}\n\inf_z \left(-\nu^T z + \lambda ||z||_1 \right) =\n\left\{\n    \begin{array}{ll}\n   0 & |\nu| \le \lambda \\\n  -\infty  & \mbox{otherwise.} \\\n  \end{array}\n  \right.\n\end{equation}\n\nNote that this equation is proven if we prove it for each component that is summed, so\nwe can drop vector notation and treat them as scalars.\n\nFirst define $f(\nu) = \inf_z \left(-\nu z + \lambda |z| \right)$. Notice the important fact\nthat $f(\nu) = f(-\nu)$, because if the $\inf$ is achieved at $z$ for some $\nu$ then the same\n$\inf$ can be achieved at $-z$ for $-\nu$. Thus it is symmetric in $\nu$, $f(\nu) = f(|\nu|)$.\nNote also that $f(0) = 0.$\n\nNow let's consider the case $|\nu| \le \lambda$. Clearly $\nu z \le |\nu z|$, so\n$f(\nu) = \inf_z \left(-\nu z + \lambda |z| \right) \ge  \inf_z \left(|z| (-|\nu| + \lambda) \right)\n\ge 0$. Since the objective equals $0$ at $z=0$, in fact $f(\nu) = 0$.\n\nNow consider the other case $|\nu| > \lambda$; by symmetry we may assume $\nu > \lambda$. Then for any positive $z$,\n$f(\nu) \le \left(-\nu + \lambda\right) z$, which tends to $-\infty$ as $z \to \infty$, so $f(\nu) = -\infty$.\n\n\n\section{References}\nS.-J. Kim, K. Koh, S. Boyd, and D. Gorinevsky,\n\emph{$l_1$ Trend Filtering},\nSIAM Review, problems and techniques section, 51(2):339--360, May 2009.\\\nhttp://stanford.edu/\~boyd/papers/pdf/l1\_trend\_filter.pdf\n\n\end{document}\n\n", "meta": {"hexsha": "7b47f86f3deb10d95cd34d416ce760bfb6ad6c81", "size": 19439, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "myl1tf.tex", "max_stars_repo_name": "dave31415/myl1tf", "max_stars_repo_head_hexsha": "5c78ea2ca23a73cd5c7f0ab1870748a7ed980783", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-07-29T09:09:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-20T18:03:03.000Z", "max_issues_repo_path": "myl1tf.tex", "max_issues_repo_name": "dave31415/myl1tf", "max_issues_repo_head_hexsha": "5c78ea2ca23a73cd5c7f0ab1870748a7ed980783", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "myl1tf.tex", "max_forks_repo_name": "dave31415/myl1tf", "max_forks_repo_head_hexsha": "5c78ea2ca23a73cd5c7f0ab1870748a7ed980783", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-05-14T08:36:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-20T18:03:22.000Z", "avg_line_length": 39.0341365462, "max_line_length": 126, "alphanum_fraction": 0.6762179124, "num_tokens": 6734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7879311956428947, "lm_q1q2_score": 0.5611899565027438}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{setspace}\n\n\\usepackage{amsmath, graphicx, color, fancyhdr, tikz-cd, mdframed, enumitem, framed, adjustbox, bbm, upgreek, xcolor, hyperref, manfnt}\n\\usepackage[framed,thmmarks]{ntheorem}\n\\usepackage[style=alphabetic, bibencoding=utf8]{biblatex}\n%Set the bibliography file\n\\bibliography{sources}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[urw-garamond]{mathdesign}\n\\usepackage{garamondx}\n\n%Replacement for the old geometry package\n\\usepackage{fullpage}\n\n%Input my definitions\n\\input{../mydefs.tex}\n\n%Shade definitions\n\\theoremindent0cm\n\\theoremheaderfont{\\normalfont\\bfseries} \n\\def\\theoremframecommand{\\colorbox[rgb]{0.9,1,.8}}\n\\newshadedtheorem{defn}[thm]{Definition}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%% Customize Below %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%header stuff\n\\setlength{\\headsep}{24pt}  % space between header and text\n\\pagestyle{fancy}     % set pagestyle for document\n\\lhead{Algebraic Groups} % put text in header (left side)\n\\rhead{Notes by Nico Courts} % put text in header (right side)\n\\cfoot{\\itshape p. \\thepage}\n\\setlength{\\headheight}{15pt}\n%\\allowdisplaybreaks\n\n% Document-Specific Macros\n\\DeclareMathOperator{\\SL}{SL}\n\\DeclareMathOperator{\\Dist}{Dist}\n\n\n\\begin{document}\n%make the title page\n\\title{Algebraic Groups\\vspace{-1ex}}\n\\author{A course by Jarod Alper and Julia Pevtsova\\\\\nNotes by Nico Courts}\n\\date{Autumn 2019/ Winter 2020}\n\\maketitle\n\n\\begin{abstract}\n\tThe topic of algebraic groups is a rich subject combining both group-theoretic and algebro-geometric-theoretic techniques. Examples include the general linear group $GL_n$, \n\tthe special orthogonal group $SO_n$ or the symplectic group $Sp_n$. Algebraic groups play an important role in algebraic geometry, representation theory and number theory.\n\n\tIn this course, we will take the functorial approach to the study of linear algebraic groups (more generally, affine group schemes) equivalent to the study of Hopf algebras. \n\tThe classical view of an algebraic group as a variety will come up as a special case of a smooth algebraic group scheme. Our algebraic approach will be independent (even complementary) to the analytic approach taken in the course on Lie groups.\n\\end{abstract}\n\n\\part{Quarter 1: Structure Theory}\n\\section{September 25, 2019}\n\\subsection{Group objects}\nLet $\\calC$ be a category with a final object and finite products.\n\\begin{defn}\n\tA \\textbf{group object $G$ in $\\calC$} is an object in $\\calC$ along with multiplication, identity, and inverse morphisms\n\tsatisfying the usual axioms.\n\\end{defn}\n\nOne thing is that we are using that there is a final object $\\ast$ along with our identity morphism $e:\\ast\\to G$.\nHere Jarrod explictly used the fact that there is a unique map to $\\ast$.\n\n\\begin{ex}\n\tIf $\\calC$ is $\\operatorname{Set}$, then $G$ is a group. If $\\calC=\\operatorname{Top}$, then $G$ is a topological group, smooth manifolds give Lie groups, and finally (interesting to us):\n\\end{ex}\n\\begin{defn}\n\tLet $S$ be a scheme and let $\\calC$ be the category of schemes over $S$. Then a group object $G$ in $\\calC$ is \n\ta \\textbf{group scheme over $S$.}\n\\end{defn}\n\nWHen $k$ is a field and $\\calC$ is schemes of finite type over $k$, we get a group scheme of finite type over $k$. 
There is not a great consensus on what makes an \textbf{algebraic group}, \nbut this is what we will use.\n\nWhen we instead restrict to \textit{affine schemes} we get an affine group scheme of finite type over $k$, or a \textbf{linear algebraic group.}\n\n\subsection{Examples}\n$\bbG_m=\operatorname{Spec} k[t]_t$ is one. \n\nIf we consider the map $f:\bbG_m\to \bbG_m$ which on the level of elements sends $t\mapsto t^p$, the kernel is \n\[\mu_p=\ker(f)=\operatorname{Spec}k[t]/(t^p-1)\]\nand that's great, but when $\ch k=p$, this causes the group scheme to be \textbf{unreduced}. This is (apparently) a case when you need to use schemes.\n\n\subsection{The Functorial Approach}\nLet $\calC$ be a category with object $X$. Define the functor $h_X:\calC^{op}\to \mathbf{Set}$ where \n\[h_X(Y)=\Hom_\calC(Y,X).\]\n\nThen we have \n\begin{lem}[Yoneda]\n\tLet $G:\calC^{op}\to\mathbf{Set}$ be a functor. There is a natural bijection\n\t\[G(X)\simeq \operatorname{Nat}(h_X,G).\]\n\end{lem}\n\begin{prop}\n\tA group object $G$ in $\calC$ is the same as an object $X\in\calC$ together with a choice of factorization of \n\t$h_X:\calC^{op}\to\mathbf{Set}$ through $\mathbf{Grp}$.\n\end{prop}\n\n\subsection{Exercises}\n\begin{enumerate}\n\t\item Spell out all the details of the proof of the above proposition.\n\t\item Given a group object $G$, define in two ways what it means for it to act on another object. (In coordinates and functorially).\n\end{enumerate}\n\n\subsection{Some Interesting Facts}\nIf we had to write down five results that we'd like to get out of this class:\n\begin{prop}\n\tEvery affine group scheme of finite type over a field embeds into $GL_n$ as a closed subgroup.\n\end{prop}\n\begin{thm}[Chevalley's Theorem]\n\tLet $G$ be a finite type group scheme over a field. Then it factors as \n\t\[1\to H\to G\to A\to 1\]\n\twhere $A$ is an abelian variety and $H$ is affine (linear algebraic).\n\end{thm}\n\begin{prop}\n\tIf $G$ is an affine group scheme of finite type over $k$, then we have a factorization\n\t\[1\to U\to G\to R\to 1\]\n\twhere $U$ is unipotent and $R$ is reductive.\n\end{prop}\n\begin{prop}\n\tLet $H\subseteq G$ be a subgroup scheme. Then $G/H$ exists as a (quasi-projective) scheme.\n\end{prop}\nFinally we want to talk about Tannaka duality and how the representations of $G$ define $G$ itself.\n\n\section{September 27th, 2019}\nLast time we defined a group scheme (a group object in the category of schemes over a base scheme). We also mentioned that \nyou could define it as a map $h_G:\mathbf{Sch}/S\to \mathbf{Set}$ along with a factorization through $\mathbf{Grp}$.\n\nWe defined an \textbf{algebraic group} over $k$ as a group scheme over $\operatorname{Spec} k$ of finite type and a \textbf{linear algebraic group}\nto be an \textit{affine} group scheme over $k$ of finite type.\n\n\subsection{Hopf Algebras}\nLet $G=\Spec A$ be a linear algebraic group over $k$. I have seen most of these before (see Waterhouse or my Hopf algebra notes).\n\begin{rmk}\n\tOne thing I haven't seen explicitly before: Notice that the augmentation ideal $\ker \varepsilon$, where $\varepsilon$ is the counit,\n\tis the (maximal!) ideal corresponding in the algebro-geometric sense to the identity element in $G$.\n\end{rmk}\n\begin{defn}\n\tA \textbf{Hopf algebra} is ...\n\end{defn}\n\begin{defn}\n\tLet $G$ be an algebraic group over $k$. 
Then if $h_G$ factors through $\Ab$, $G$ is called \textbf{commutative.}\n\end{defn}\n\n\subsection{Some Examples}\n\begin{rmk}\n\tNote that to define a functor from schemes over $k$, it suffices to define it on affine schemes, thereby defining \n\tthe (Zariski) local behavior of any such map. Thus we really only need to consider maps in $\Alg$.\n\end{rmk}\n\begin{itemize}\n\t\item $\bbG_a$. Here we can define it as a functor that sends $S\mapsto\Gamma(S,\calO_S)$. Geometrically, $\bbG_a=\bbA^1$ where the multiplication is addition, inverses send $x\mapsto -x$ and the unit is the zero map.\n\tThe Hopf algebraic picture is the usual dual thing.\n\t\item $\bbG_m$ as a functor is the map $S\mapsto \Gamma(S,\calO_S)^\ast$. In the geometric picture, $\bbA^1\setminus\{0\}$ and the algebra structure comes from multiplication. Hopf is pretty easy.\n\t\item $\GL_n$ is the functor that sends\n\t\[S\mapsto \left\{A=(a_{ij}): a_{ij}\in\Gamma(S,\calO_S), \det(A)\in\Gamma(S,\calO_S)^\ast\right\}\]\n\tthe algebra is $\bbA^{n\times n}\setminus \{\det = 0\}$ with the usual multiplication. The coalgebra structure can be seen in the book.\n\end{itemize}\nThis one requires some more explanation so I am setting it apart.\n\begin{ex}\n\tLet $V$ be a finite dimensional vector space over $k$. Then we can define the algebraic group $V_a$ which sends \n\t\[S\mapsto \Gamma(S,\calO_S)\otimes_k V.\]\n\tGeometrically we are looking at $\bbA(V)=\Spec\Sym ^\ast V^\vee\simeq \Spec k[x_1,\dots,x_n]$ where $n=\dim V$.\n\end{ex}\n\nWhat about finite groups? As a scheme, we want $G=\sqcup_{g\in G}\Spec k$. The functor sends $S\mapsto \operatorname{Mor}_{\operatorname{Set}}(\pi_0(S),G)$,\nor maps from the connected components into $G$.\n\n\begin{ex}\n\tNow consider the $n^{th}$ roots of unity: as a scheme, $\mu_n=\Spec k[t]/(t^n-1)\subseteq \bbG_m$.\n\tIf both $k=\bar k$ and $\ch k\nmid n$, then $\mu_n\cong\bbZ/n\bbZ$.\n\n\tBut if (e.g.) $k=\bbQ$, then $\mu_3=\Spec\bbQ[t]/(t^3-1)=\Spec\bbQ\sqcup \Spec\bbQ(\xi)$ where $\xi$ is a primitive third root of unity.\n\n\tIf, on the other hand, $k=\bar \bbF_3$ and we consider $\mu_3$, we get a single point with residue field $\bar\bbF_3$.\n\end{ex}\n\n\begin{ex}\n\tIf we are in the case of positive characteristic, then we get an algebraic group $\alpha_p$. Here the scheme is $\Spec k[x]/x^p$ and functorially it \n\tis the map $S\mapsto \{f\in\Gamma(S,\calO_S)|f^p=0\}$.\n\end{ex}\n\n\subsection{Matrix Groups}\nWe already defined $\GL_n$, but we can also define \n\[\operatorname{SL}_n:S\mapsto\{A=(a_{ij})|\det A=1\}\]\nwith scheme $\Spec k[x_{ij}]/(\det-1)$.\n\nWe also have the (upper) triangular matrices $T_n$, the unitriangular (unipotent upper triangular) group $U_n$ and the diagonal group $D_n$.\n\n\begin{defn}\n\tLet $G$ be a linear algebraic group. Then \n\t\begin{itemize}\n\t\t\item $G$ is a \textbf{vector group} if $G\cong V_a$ for some finite dimensional $V$.\n\t\t\item $G$ is a \textbf{split torus} if $G\cong \bbG_m^n$.\n\t\t\item $G$ is a \textbf{torus} if there is a field extension $k\to k'$ such that \n\t\t\[G\times_{\Spec k}\Spec k'\cong \bbG^n_{m,k'}\]\n\t\end{itemize}\n\end{defn}\n\n\section{September 30th, 2019}\nAnother example to consider:\n\begin{ex}\n\tLet $G=\operatorname{PGL}_n$, the projective linear group. Recall we want to define this as $\GL_n/k^\ast$ (from group theory). 
To do this for algebraic groups,\n\twe define \n\t\[\operatorname{PGL}_n=\operatorname{Proj}k[x_{ij}]_{det}:= \Spec (k[x_{ij}]_{det})_0\]\n\n\tThe geometric picture is difficult since we haven't yet defined quotients, but as a functor we say $\operatorname{PGL}_n$ \n\tis $\Aut(\bbP^n)$, the functor that sends $S\mapsto \Aut(\bbP_S^n)$ where $\bbP^n_S=\bbP_k^n\times_{\Spec k} S$.\n\end{ex}\n\n\subsection{Non-affine group schemes}\n\begin{ex}\n\tLet $\lambda\ne 0,1$ be an element in $k$. Then we can define the elliptic curve \n\t\[E_\lambda=V(y^2z-x(x-z)(x-\lambda z))\subset \bbP^2\]\n\tThis gives us a double cover over $(0,1)$ and $(\lambda,\infty)$ with singleton fiber (ramified) over $0,1,$ and $\lambda$.\n\n\tThen for any $\lambda\ne 0,1$, $E_\lambda$ is a \textbf{projective} group scheme.\n\end{ex}\n\begin{rmk}\n\tIf you look at the $\bbC$-points, you get $E_\lambda(\bbC)=\bbC/\Lambda_\lambda$, giving you a torus. Recall (from e.g. complex analysis) that the \n\tmoduli space of all elliptic curves here is the upper half plane modulo $\operatorname{SL}_2(\bbZ)$.\n\end{rmk}\n\n\subsection{Abelian Varieties}\n\begin{defn}\n\tAn \textbf{abelian variety over $k$} is a smooth, geometrically connected ($A\times_{\Spec k}\Spec\bar k$ is connected), proper group scheme $A$ over $k$.\n\end{defn}\n\begin{ex}\n\tOver $\bbC$, $\bbC^g/\Lambda$ where $\Lambda\cong \bbZ^{2g}\subseteq \bbC^g$ gives us a genus $g$ example.\n\end{ex}\n\begin{thm}\n\tAny abelian variety over $k$ is commutative and projective.\n\end{thm}\n\begin{thm}[Chevalley]\n\tIf $G$ is any group scheme, then the sequence \n\t\[1\to H\to G\to A\to 1\]\n\tis exact, where $H$ is a linear algebraic group (affine!) and $A$ is an abelian variety.\n\end{thm}\n\begin{ex}\n\tLet $X\to \Spec k$ be a geometrically integral projective scheme (proper may suffice). The idea here is that over $\bbC$\n\tthe rings over every open set are integral domains.\n\n\tNow consider the \textbf{Picard functor} $\operatorname{Pic}_X:\mathbf{Sch/k}\to \Grp$ sending \n\t\[S\mapsto \operatorname{Pic}(X_S)/p_S^\ast\operatorname{Pic}(S)\]\n\twhere $X_S=X\times_k S$ and $p_S:X_S\to S$ is the projection.\n\end{ex}\n\begin{thm}\n\t$\operatorname{Pic}_X$ is represented by a scheme locally of finite type, thus $\operatorname{Pic}_X^0$, the connected \n\tcomponent of the identity $[\calO_X]\in\operatorname{Pic}_X$, is an abelian variety.\n\end{thm}\n\subsection{Relative Group Schemes}\n\begin{ex}\n\tConsider $\bbG_{m,\bbZ}=\Spec \bbZ[t]_t$. Then $\bbG_{m,S}=\bbG_{m,\bbZ}\times_{\Spec \bbZ}S$. \n\tIn the case that $S=\Spec R$, $\bbG_{m,S}=\Spec R[t]_t$.\n\end{ex}\n\n\begin{ex}\n\tLet $\bbA^1=\Spec k[x]$ and define $G=\Spec k[x,y]_{xy+1}\subseteq\bbA^2$. Notice this is the plane minus a hyperbola.\n\n\tDefine $\cdot:G\times_{\bbA^1}G\to G$ to be given by \n\t\[(x,y)\cdot(x,y')=(x,xyy'+y+y')\]\n\n\tThen the thing here is the fiber (think vertical line in the plane!) over 0 is $\bbG_a$ and is isomorphic to $\bbG_m$ otherwise.\n\end{ex}\n\n\begin{ex}\n\tLet $\calE_\lambda=V(y^2z-x(x-z)(x-\lambda z))$ over $\Spec k[\lambda]$. Then when $\lambda=0$, we get the nodal cubic given by \n\t$y^2z-x^2(x-z)$ (node at the origin). 
\n\n\tNow if you look at the connected component around 0 of $\Aut(\calE_\lambda)/\bbA_\lambda$, you actually find (when $\lambda=0$) \n\tthat $\bbG_m\cong\Aut(\calE_0)^0$.\n\end{ex}\n\n\subsection{Some definitions}\n\begin{defn}\n\tA \textbf{homomorphism} $\phi:H\to G$ of group schemes over $S$ is a map $\phi:H\to G$ of schemes such that \n\t\begin{center}\n\t\t\begin{tikzcd}\n\t\t\tH\times_S H\ar[r,\"m_H\"]\ar[d,\"\phi\times\phi\"] & H\ar[d,\"\phi\"]\\\n\t\t\tG\times_S G\ar[r,\"m_G\"] & G\n\t\t\end{tikzcd}\n\t\end{center}\n\end{defn}\n\begin{prob}\n\tShow that this automatically implies that the identity and inversion maps are respected as well.\n\end{prob}\n\n\begin{defn}\n\tA \textbf{subgroup of $G\to S$} is a subscheme $H\subseteq G$ such that $H(T)\le G(T)$ for all $T$ over $S$.\n\end{defn}\n\begin{prob}\n\tShow that $\ker(\phi)\subseteq H$ is a subgroup.\n\end{prob}\n\begin{rmk}\n\tThis gives you a nice way to construct new group schemes. For example, the following are exact:\n\t\[1\to \operatorname{SL}_n\to GL_n\xrightarrow{\det} \bbG_m\to 1\]\n\tand \n\t\[1\to \bbG_m\to \GL_n\to \operatorname{PGL}_n\to 1\]\n\end{rmk}\n\n\begin{prop}\n\tLet $G\to S$ be a group scheme. Then $G\to S$ is separated if and only if $e:S\to G$ is a closed immersion.\n\end{prop}\n\begin{prf}\n\tThe idea here is to use that $e:S\to G$ is a closed immersion. Then we consider the map $m\circ(i\times\id):G\times_SG\to G$\n\tand consider this along with the diagonal map $\Delta:G\to G\times_S G$, and this is a pullback square!\n\end{prf}\n\begin{cor}\n\tAny group scheme over $k$ is separated.\n\end{cor}\nThe idea is going to be that if $X$ is any scheme over $k$, then any point $x\in X(k)$ is closed.\n\n\section{October 2, 2019}\nNotice that a \textbf{relative group scheme} (referred to in last lecture) refers to a group scheme over an arbitrary \nbase scheme $S$.\n\n\subsection{Properties of schemes}\nToday we are going to be talking about reducedness, connectedness, irreducibility, regularity, and smoothness.\n\nRecall that a scheme $X$ is \textbf{reduced} if and only if $\forall x\in X$, $\calO_{X,x}$ is reduced. 
An example of a non-reduced \nscheme is $\Spec k[x]/(x^2)$.\n\begin{defn}\n\tWe say a scheme $X$ over $k$ is \textbf{geometrically reduced} if for all field extensions $k'/k$,\n\t\[X_{k'}=X\times_{\Spec k}\Spec k'\]\n\tis reduced.\n\end{defn}\n\begin{rmk}\n\tApparently it is equivalent to require only that $X_{\bar k}$ be reduced, the point being that reducedness can only be destroyed along purely inseparable extensions $k'/k$ (I think).\n\end{rmk}\n\begin{rmk}\n\tIf $k$ is perfect, then $X$ is reduced if and only if $X$ is geometrically reduced.\n\end{rmk}\n\begin{defn}\n\tA local ring $(A,\frakm)$ is \textbf{regular} if $\dim _{\text{Krull}}A=\dim_{A/\frakm} \frakm/\frakm^2$.\n\end{defn}\n\begin{defn}\n\tA scheme $X$ is regular if for all $x\in X$, $\calO_{X,x}$ is regular.\n\end{defn}\n\begin{rmk}\n\tIf $X\to \Spec k$ and $x\in X(k)$, the tangent space at $x$ is \n\t\[T_{X,x}=(\frakm/\frakm^2)^\vee=\{f:\Spec k[\varepsilon]/\varepsilon^2\to X|0\mapsto x\}\]\n\end{rmk}\n\begin{rmk}\n\tNotice that if $X\to \Spec k$ is regular and $k'/k$ is a field extension, then $X_{k'}$ is not necessarily regular.\n\end{rmk}\n\n\begin{defn} \n\tA scheme $X\to \Spec k$ of finite type is \textbf{smooth} if $X_{\bar k}$ is regular.\n\end{defn}\n\n\subsection{Facts about algebraic groups}\nThen we can return to the proposition we want to prove:\n\begin{prop}\n\tLet $G\to \Spec k$ be an algebraic group. Then $G$ is geometrically reduced if and only if $G$ is smooth over $\Spec k$.\n\end{prop}\n\begin{prf}\n\tSmoothness over $k$ implies reducedness. Now since we are only interested in the algebraic closure of $k$, we can say $k=\bar k$. Because $G$ \n\tis reduced, there exists a nonempty open $U\subseteq G$ that is smooth. Then since $G(k)\subseteq |G|$ is dense in $G$ (as a topological space), \n\twe get $G=\cup_{g\in G(k)}m_g(U)$ for our smooth $U$, and this gives us a smooth cover of $G$.\n\end{prf}\n\nWe will see next time that all linear algebraic groups over $k$ where $\ch k=0$ are \ngeometrically reduced (and thus smooth).\n\n\subsection{Connectedness}\nLet $G$ be an algebraic group over $k$. Then we have our map $e:\Spec k\to G$, so consider it as $e\in G(k)$.\nLet $G^0\subseteq G$ be the connected component of $e$. It is both open and closed.\n\begin{rmk}\n\tIf $X\to \Spec k$ is of finite type and $x\in X(k)$, then $X$ being connected implies that $X$ is geometrically connected.\n\end{rmk}\nThis establishes that $G^0$ is actually geometrically connected! We actually will see\n\begin{prop}\n\t$G^0\subseteq G$ is an (open and closed) algebraic subgroup.\n\end{prop}\nThe idea here is that $G^0\times G^0$ is connected, so the image of the multiplication map on this set lands in a connected component (since it is connected).\nSince $e\in G^0$, and $m(e,e)=e\in G^0$, this shows that the multiplication map restricts to a well-defined map $G^0\times G^0\to G^0$. A similar argument \ngoes through for the inverse map, etc.\n\nThe upshot here is that if $G$ is an algebraic group, then there exists a factorization \n\[1\to G^0\to G\to \pi_0(G)\to 1\]\nwhere $\pi_0(G)$ is given the structure of a discrete group.\n\begin{rmk}\n\tNow we also have that $(G^0)_{k'}=(G_{k'})^0$ for all $k'/k$. The idea is to get a map of one into the other and then use clopenness and connectedness \n\tto show they are equal.\n\end{rmk}\n\n\begin{prop}\n\tA connected algebraic group over $k$ is irreducible.\n\end{prop}\n\begin{prf}\n\tWe can assume $k=\bar k$. 
Suppose $G= X\\cup Y$, where both are closed, $X$ is irreducible, \n\tand $X\\cap Y\\ne\\varnothing$. Thus there exists an element $g\\in X\\setminus Y$. That is, $g$ lies in a single irreducible component.\n\n\tBut then using the multiplication by $h$ map on $G$, we get to every other point in $G$, so every point \n\tis in a single irreducible component. But the intersection was nontrivial! Or something.\n\\end{prf}\n\n\\begin{prop}\n\tIf $G_{\\text{red}}$ is geometrically connected, then $G_{\\text{red}}\\subseteq G$ is a subgroup. In particular, \n\tif $k$ is perfect, then $G_{\\text{red}}$ is a subgroup of $G$.\n\\end{prop}\n\\begin{rmk}\n\t$X$ is geometrically reduced implies that $X\\times X$ is geometrically reduced.\n\\end{rmk}\n\n\\section{October 4, 2019}\nSome review. Let $G$ be an algebraic group and denote $e:\\Spec k\\to G$ be the identity. We saw a lot of propositions last time.\n\nNow let $k$ be a nonperfect field and take $t\\in k\\setminus k^p$. Then define \n\\[G\\eqdef V(x^{p^2}-tx^p)\\subseteq \\bbG_a\\]\nwhich Milne claims is not reduced. We can see why it's not geometrically reduced, but we're missing the details here.\n\n\\subsection{Another special case}\n\\begin{thm}\n\tWhen $k=\\bar k$, $G$ is smooth if and only if \n\t\\[\\dim T_e \\red G=\\dim T_e G.\\]\n\\end{thm}\n\\begin{rmk}\n\tWhen $G$ is smooth, it is reduced, so the equality is clear. For the other direction, we get that $k$ is pefect, so $\\red G$\n\twhich is geoemetrically reduced if and only if $G$ is smooth. But \n\t\\[\\dim G\\le \\dim T_e G=\\dim T_e\\red G=\\dim \\red G=\\dim G\\]\n\\end{rmk}\n\\begin{thm}\n\tIf $G$ is a linear algebraic group over $k$ and $\\ch k=0$, $G$ is smooth,\n\\end{thm}\n\\begin{prf}\n\tWe can assume $k=\\bar k$. Then set $G=\\Spec A$ where $A$ is a Hopf algebra. THen we get Hopf algebra maps $m^\\ast$ and $e^\\ast$.\n\tNotice that the augmentation ideal $\\frakm=\\ker(e^\\ast)$ is a maximal ideal.\n\n\tThen we want to prove \n\t\\begin{enumerate}\n\t\t\\item $A\\cong\\frakm\\oplus k$ as a $k$-vector space (obvious).\n\t\t\\item $\\forall a\\in\\frakm$, $m^\\ast(a)-a\\otimes 1-1\\otimes a\\in \\frakm\\otimes \\frakm.$\n\t\\end{enumerate}\n\tTo see the second, notice that $m^\\ast(a)-a\\otimes 1-1\\otimes a$ is in the kernel of \n\t\\[e^\\ast\\otimes \\id:A\\otimes A\\to k\\otimes A.\\]\n\tThis is clear from the commutative diagram\n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\tk\\otimes A & A\\otimes A\\ar[l,\"e^\\ast\\otimes\\id\"] \\\\\n\t\t\t& A\\ar[ul,\"\\sim\"]\\ar[u,\"m^\\ast\"]\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tThen we conclude $f\\in \\ker(e^\\ast\\otimes\\id)\\cap\\ker(\\id\\otimes e^\\ast)=A\\otimes\\frakm\\cap\\frakm\\otimes A$ by a symmetric argument.\n\tFinally we notice that $A\\otimes\\frakm\\cap\\frakm\\otimes A=\\frakm\\otimes\\frakm$, and so $f$ lies in this ideal.\n\n\tNow we want to show that $\\dim T_e G=\\dim_k\\frakm/\\frakm^2=\\dim_k\\frakm/(\\sqrt{0}+\\frakm^2)=\\dim T_e\\red G$. It suffices to show that \n\tfor all $a\\in\\sqrt{0}$, $a\\in \\frakm^2$. Suppose the opposite--so let $a\\in\\sqrt{0}\\setminus \\frakm^2$. Consider the diagram \n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\tA\\ar[r]\\ar[d] &A_\\frakm\\ar[d]\\\\\n\t\t\tA/\\frakm^2\\ar[r,\"\\sim\"] & A_\\frakm/(\\frakm A_\\frakm)^2\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tNow the image of $a$ in $A_\\frakm$ is nonzero, so there exists $n>0$ such that $a^n\\in A_\\frakm$ but $a^{n-1}\\ne 0$ in $A_\\frakm$.\n\tThus there exists $f\\notin \\frakm$ such that $a^nf=0\\in A$. 
Substitute $af$ for $a$, and thus there is an $a\in\sqrt{0}$ such that $a^n=0$ in $A$ but $a^{n-1}\ne 0$ in $A_\frakm$.\n\n\tThen by fact 2,\n\t\[m^\ast(a)=1\otimes a+a\otimes 1+r,\quad r\in\frakm\otimes\frakm\]\n\tand since $m^\ast$ is a ring homomorphism, \n\t\[0=m^\ast(a^n)=(m^\ast(a))^n=(a\otimes 1+(1\otimes a+r))^n=a^n\otimes 1+n(a^{n-1}\otimes a+(a^{n-1}\otimes 1)r)+X\]\n\twhere $X\in A\otimes\frakm^2$. But since $a^n=0$, we get \n\t\[n(a^{n-1}\otimes a+(a^{n-1}\otimes 1)r)\in A\otimes \frakm^2\]\n\n\tNow since $(a^{n-1}\otimes 1)r\in (a^{n-1})\frakm\otimes A$, we have \n\t\[n(a^{n-1}\otimes a)\in (a^{n-1})\frakm\otimes A+A\otimes\frakm^2\]\n\tand since $\ch k=0$, we get that $n$ is a unit, so \n\t\[a^{n-1}\otimes a\in (a^{n-1})\frakm\otimes A+A\otimes\frakm^2\]\n\n\tNow since this lives in $A\otimes A$, consider the image of the quotient map $A\otimes A\to A\otimes A/\frakm^2$. Then \n\t\[a^{n-1}\otimes\bar a\in (a^{n-1})\frakm\otimes A/\frakm^2\subseteq A\otimes A/\frakm^2\]\n\tAnd note that $a^{n-1}\notin a^{n-1}\frakm$ because otherwise $a^{n-1}=a^{n-1}q$ for $q\in \frakm$.\n\tThen $a^{n-1}(1-q)=0\in A_\frakm$, which implies that $a^{n-1}=0\in A_\frakm$ (since $1-q$ is a unit here).\n\n\tThen somehow we get that $\bar a=0\in A/\frakm^2$, so $a\in \frakm^2$.\n\end{prf}\n\n\section{October 7, 2019}\nToday we will be primarily concerned with group actions.\n\subsection{Group actions}\nLet $G$ be an algebraic group over $k$. \n\begin{defn}\n\tA \textbf{group action} of $G$ on a scheme $X$ over $k$ is the data of a morphism\n\t\[\mu:G\times X\to X\]\n\tsuch that the usual axioms hold. That is,\n\t\begin{center}\n\t\t\begin{tikzcd}\n\t\t\tG\times G\times X\ar[r,\"m\times\id\"]\ar[d,\"\id\times \mu\"] & G\times X\ar[d,\"\mu\"]\\\n\t\t\tG\times X\ar[r,\"\mu\"] & X\n\t\t\end{tikzcd}\n\t\t\begin{tikzcd}\n\t\t\t\Spec k\times X\ar[r,\"e\times\id\"]\ar[rd,\"\sim\"] & G\times X\ar[d,\"\mu\"]\\\n\t\t\t& X\n\t\t\end{tikzcd}\n\t\end{center}\n\end{defn}\n\begin{rmk}\n\tApparently it was an exercise already to show that this is equivalent to an action of $h_G$ on $h_X$.\n\end{rmk}\n\begin{rmk}\n\tThe map $(g,x)\mapsto(g,gx)$ is an automorphism of $G\times X$, so if $p_2:G\times X\to X$ is projection,\n\t\begin{center}\n\t\t\begin{tikzcd}\n\t\t\tG\times X\ar[rr,\"\sim\"] \ar[dr,\"\mu\"] & & G\times X\ar[dl,\"p_2\"]\\\n\t\t\t& X &\n\t\t\end{tikzcd}\n\t\end{center}\n\tcommutes.\n\end{rmk}\n\begin{defn}\n\tLet $X$ and $Y$ be schemes over $k$ with a $G$ action. Then a \textbf{$G$-equivariant morphism} $f:X\to Y$ \n\tis one such that the following diagram commutes:\n\t\begin{center}\n\t\t\begin{tikzcd}\n\t\t\tG\times X\ar[r,\"\id\times f\"]\ar[d,\"\mu_X\"] &G\times Y \ar[d,\"\mu_Y\"]\\\n\t\t\tX\ar[r,\"f\"] & Y\n\t\t\end{tikzcd}\n\t\end{center}\n\end{defn}\n\subsubsection{Some examples}\n\begin{itemize}\n\t\item $G$ acts on itself by multiplication or conjugation.\n\t\item $\Gm$ acts on $\A1$. 
Geometrically, we are just looking at $k^\ast$ acting on $k$ by scaling.\n\tAlgebraically, we want a map $\mu:\Gm\times\A1\to\A1$ given by the map of algebras:\n\t\[k[x]\xrightarrow{\mu^\ast}k[t]_t\otimes k[x]\quad\text{via}\quad x\mapsto tx\]\n\tFunctorially, if $S$ is a scheme over $k$, then $\Gm(S)=\Gamma(S,\O_S)^\ast$ which acts on $\A1(S)=\Gamma(S,\O_S)$,\n\tagain by scaling.\n\t\item You can consider the $\GL_n$ action on $\A n$ by multiplication or on $\A{n\times n}$ via multiplication or conjugation.\n\end{itemize}\n\subsubsection{Orbits and Stabilizers}\nLet $G$ be an algebraic group over $k$ acting on a scheme $X$ over $k$. Let $x\in X(k)$. Then we have a map\n\[\mu_x:G\times \Spec k\xrightarrow{\id\times x}G\times X\xrightarrow{\mu}X\]\nwhere \n\[g\mapsto (g,x)\mapsto gx.\]\n\begin{defn}\n\tThe \textbf{orbit of $x$} is $Gx=\mu_x(G)\subseteq |X|$ set-theoretically. The \textbf{stabilizer of $x$ in $G$} \n\tis $G_x=\mu_x^{-1}(x)\subseteq G$. \n\end{defn}\n\begin{rmk}\n\t$G_x$ is always a closed algebraic subgroup of $G$.\n\end{rmk}\n\begin{prop}\n\t$\mu_x(G)$ is open in its closure in $|X|$.\n\end{prop}\nRecall first the following:\n\begin{thm}[Chevalley's Theorem (different?)]\n\tIf $f:X\to Y$ is a map of schemes of finite type over $k$, then $f(X)\subseteq Y$ is constructible (i.e. \n\tis a finite disjoint union of locally closed subsets).\n\n\t\noindent\textit{Recall that locally closed means closed in an open subspace.}\n\end{thm}\n\begin{cor}\n\t\textit{Maybe a definition:} The orbit $Gx$ gets a scheme structure as the image $\operatorname{im}(\mu_x)\subseteq X$. If \n\t$G$ is reduced, then $Gx$ is reduced.\n\end{cor}\n\n\subsubsection{Applications}\nSay that $\ch k=p$. Then $\mu_p$ acts on $\Gm$ by multiplication.\n\n\brk \n\n$\Gm$ acts on $\A1$ with two orbits: $\Gm\cdot 1=\{x\ne 0\}$ and $\Gm\cdot 0=\{0\}$. The stabilizers are \n$G_1=1$ and $G_0\cong\Gm$.\n\n\brk\n\nConsider $\Gm$ acting on $\A2$ via $t(x,y)=(tx,t^{-1}y)$. Then the orbits are hyperbolas! There is also a notion of closed \norbits that I didn't quite catch. Also apparently the orbit-stabilizer statement is easy to see in geometry by viewing \n$G$ as a fiber bundle over $Gx$ whose fiber over $x$ is $G_x$.\n\n\begin{prop}\n\tIf $\phi:G\to H$ is a homomorphism of algebraic groups, then $\phi(G)\subseteq |H|$ is closed.\n\end{prop}\n\begin{rmk}\n\tThe proof included reducing first to $k=\bar k$. The trick here is to consider the group action induced by $\phi$\n\tand then consider the map $\mu_{e_H}$ of this action. Then $\mu_{e_H}(G)=\phi(G)$ and one can prove that this is closed.\n\end{rmk}\n\nIn particular, we have that \textbf{subgroups of an algebraic group are always closed.} Note that this stands in stark contrast \nto Lie theory where you get non-closed subgroups.\n\n\section{October 9, 2019}\nRecall that last time we were considering actions of algebraic groups on schemes of finite type over $k$. We discussed the orbit and stabilizer \nof an element $x\in X$ and showed that $G\cdot x$ is open in its closure. We also saw that $G_x$ is a closed subgroup.\n\nWe also saw that $\phi(G)$ (as a set) is always closed! 
None of these facts are true for Lie groups or relative group schemes \n(the base scheme is not $\Spec k$ for $k$ a field).\n\n\subsection{Cartier Duality}\nLet $G\to\Spec k$ be a \textbf{finite} group scheme (so $G=\Spec A$ and $A$ is a finite dimensional Hopf algebra).\nSome examples of finite group schemes are:\n\begin{itemize}\n\t\item $G$ a finite group. Then $G=\sqcup_{g\in G}\Spec k=\Spec\left(\prod_{g\in G}k\right)$\n\t\item $\mu_n=\Spec k[t]/(t^n-1)$\n\t\item $\ch k=p$ and $\alpha_p=\Spec k[t]/t^p$\n\end{itemize}\n\begin{rmk}\n\tRecall all the maps and diagrams that $A$ has as a Hopf algebra.\n\end{rmk}\nA question one may ask: what if we apply the idea of dualizing $(-)^\vee=\Hom_k(-,k)$ to $A$?\nDo we get another Hopf algebra?\n\nThe short and sweet of it is yes! But notice that we are coming from the commutative world, so we expect \n$A$ to be commutative. But in general, $A$ is not cocommutative (in fact, it is if and only if $G$ itself was commutative as a group).\n\nThus $A^\vee$ is indeed a (cocommutative) Hopf algebra, and when $G$ is commutative, $A^\vee$ is as well. So\n\begin{prop}\n\tIf $G=\Spec A$ is a commutative group scheme, then the Cartier dual $G^D=\Spec A^\vee$ is a commutative group scheme as well.\n\end{prop}\n\begin{rmk}\n\tThe above observations give us an anti-autoequivalence of the category of finite commutative group schemes. Furthermore $(G^D)^D=G$.\n\end{rmk}\n\begin{ex}\n\tConsider $\mu_n=\Spec A$, $A=\oplus_{i=0}^{n-1}k\cdot t^i$. So then if we let $\{e_i\}$ be the basis for $A^\vee$ dual \n\tto $\{t^i\}$, we can compute comultiplication \n\t\[e_i\mapsto \sum_{j=0}^{n-1}e_j\otimes e_{i-j}\]\n\t(indices mod $n$) and multiplication\n\t\[e_i\otimes e_j\mapsto \delta_{ij}e_i\]\n\n\tThen it can be shown that $G^D\cong\bbZ/n\bbZ$.\n\end{ex}\n\nNow given $G$, an algebraic group over $k$, define \n\[\underline\Hom(G,\Gm):\mathbf{Sch}/k\to\Set\]\nwhich takes \n\[T\mapsto \Hom_{\mathbf{AlgGrp}}(G_T,{\Gm}_T).\]\n\begin{thm}\label{thm:cartier}\n\tIf $G$ is a commutative finite group scheme over $k$, then \n\t\[G^D\cong\uHom(G,\Gm)\]\n\end{thm}\nLet $H=\Spec B\to \Spec R$ be a group scheme. Then \n\[\Hom_{\mathbf{GrpSch}/R}(H,{\Gm}_R)\subseteq\operatorname{Mor}_{\mathbf{Sch}/R}(H,{\Gm}_R)\]\nBut the left hand side is equivalent to the grouplike elements of $B$ and the right hand side is equivalent\nto $\Hom_{\Alg_R}(R[t]_t,B)$.\n\nThis leads to a proof of Theorem~\ref{thm:cartier}:\n\begin{prf}\n\tLet $G=\Spec A$ and $G^D=\Spec A^\vee$. First look at the $k$-points:\n\t\begin{align*}\n\t\tG^D(k)&= \Mor_{\Sch/k}(\Spec k, G^D)\\\n\t\t&=\Hom_{\Alg_k}(A^\vee,k)=\{f\in A|m^\ast(f)=f\otimes f\}\hookrightarrow \Hom_{k}(A^\vee,k)\\\n\t\t&=\Hom(G,\Gm)\\\n\t\t&=\uHom(G,\Gm)(k)\n\t\end{align*}\n\n\tIf we then look at $R$ points for a general $R$, most things just change over, but we see \n\t\[\{f\in A\otimes R|m_R^\ast(f)=f\otimes f\}=\Hom_{\Alg_k}(A^\vee,R)=\Hom(G_R,\Gm)\]\n\tand the rest follows.\n\end{prf}\nA question one may ask: what is $\Hom_\AlgGrp(\Gm,\Gm)$? It turns out to be $\bbZ$. You can send $t\mapsto t^n$ for all $n\in\bbZ$.\nBut then $\uHom(\Gm,\Gm)$ is $\bbZ$ as a (constant) group scheme over $k$, which is not quasicompact. There was more but I am le tired.
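A concrete sanity check (my own addition, not from lecture): take $n=2$, so $A=k[t]/(t^2-1)$ with dual basis $e_0,e_1$. The multiplication $e_ie_j=\delta_{ij}e_i$ gives $A^\vee\cong k\times k$ as an algebra, so $\Spec A^\vee$ is two reduced points, and the comultiplication
\[
e_0\mapsto e_0\otimes e_0+e_1\otimes e_1,\qquad e_1\mapsto e_0\otimes e_1+e_1\otimes e_0
\]
makes those two points a group under addition mod 2, so $\mu_2^D\cong\bbZ/2\bbZ$. This matches Theorem~\ref{thm:cartier}: the grouplike elements of $A$ are exactly $\{1,t\}$, so $\uHom(\mu_2,\Gm)(k)\cong\bbZ/2\bbZ$ as well.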
\n\n\section{October 11, 2019}\nToday we're doing problems and stuff. Forgot about that.\n\n\subsection{Casey's Presentation}\nLet $G$ and $H$ be objects in a category $\calC$ with finite products. Let $h:H\to G$ be a group homomorphism; that is, $h$ respects the multiplication maps.\n\nThen we get similar diagrams for the identity and inverse maps (they are respected by $h$ automatically). \nThen there is a bunch of diagram work. It's too hard to do a diagram without knowing the shape ahead of time.\n\nOh hey I used Adam's site! \n\begin{center}\n\t\begin{tikzcd}\n\t\t\ast \arrow[rd, \"{(e,e,e)}\"] & & & & \\\n\t\t& H\times H\times H \arrow[r, \"(i_G\circ h)\times h\times h\"] \arrow[d, \"1\times m_H\"'] & G\times G\times G \arrow[d, \"1\times m_G\"] & & \\\n\t\t& H\times H \arrow[r, \"(i_G\circ h)\times h\"] & G\times G \arrow[rd, \"m_G\"] & & \\\n\t\t& & & G & \\\n\t\end{tikzcd}\n\end{center}\n\begin{center}\n\t\begin{tikzcd}\n\t\t& \ast \arrow[r, \"e\"] \arrow[rd, \"{(e,e,e)}\"] \arrow[rdddd, \"he\"', bend right=49] & H \arrow[rd, \"{(h,h)}\"] & & \\\n\t\t& & H\times H\times H \arrow[d, \"(i_G\circ h)\times h\times h\"'] & G\times G \arrow[rd, \"\ast\times 1\"] \arrow[ld, \"{(\ast,1)\times 1}\"',swap] & \\\n\t\t& & G\times G\times G \arrow[d, \"1\times m\"'] \arrow[rd, \"m\times 1\"] & & \ast\times G \arrow[ld, \"e\times 1\"'] \arrow[lldd, \"\pi_1\", bend left=49]
\begin{center}
	\begin{tikzcd}
		& \ast \arrow[rd, "he"', dashed] \arrow[r, "\ast", bend left] \arrow[d, "{(e,e,1)}"'] \arrow[ld, "{(e,e,e)}"'] \arrow[dd, "{(e,e)}", bend left=60] & \ast \arrow[r, "e"] & G & \\
		H\times H\times H \arrow[rd, "1\times m"'] & H\times H\times\ast \arrow[d, "1\times \pi_2"'] \arrow[l, "1\times 1\times e"'] & G \arrow[u, "\ast"'] \arrow[d, "{(1,1)}"'] \arrow[rd, "{(i,1)}", bend left] & & \\
		& H\times H \arrow[r, "h\times h"] \arrow[rr, "(ih)\times h", bend right] & G\times G \arrow[r, "i\times 1"] & G\times G \arrow[uu, "m"] &
	\end{tikzcd}
\end{center}
\begin{rmk}
	The idea above is that we want to show the first diagram commutes. That is captured in the paths of the second diagram, which commutes by
	the axioms of a group object. The third diagram shows a similar commutativity for the unit $e$.
\end{rmk}

\section{October 14th, 2019}
Let $G$ be a finite group. Recall the definition of a \textbf{representation} (a linear action of $G$ on a vector space $V/k$).
This is the same data as a group homomorphism $G\to\GL(V)$.

\subsection{Representations of Algebraic Groups}

Now what if $G$ is an algebraic group over $k$? Now we have some extra structure on $G$ as a variety.
\begin{defn}
	A (finite dimensional) \textbf{representation} of an algebraic group $G/k$ is a homomorphism $\rho:G\to \GL(V)$ of algebraic groups.
\end{defn}
\begin{rmk}
	Notice that when $V$ is infinite-dimensional, $\GL(V)$ is no longer of finite type, so we have to let $\rho$ be a
	morphism of group schemes.
\end{rmk}
We have the standard representation of $\GL_n$ acting on $k^n$ in the natural way. We also have the regular representation: $G$ acting on $\Gamma(G,\O_G)$.
When $G=\\Gm$, we get over $\\bbC$ an action of $\\Gm$ on $\\GL(V)$ in the usual way (scaling by $\\bbC^\\ast$).\n\nObserve that \n\\[\\rho:G\\to \\GL(V)=\\Spec(\\operatorname{Sym}^\\ast(V\\otimes V^\\vee))_{\\det}\\]\ncorresponds to a ring morphism \n\\[\\Sym^\\ast(V\\otimes V^\\vee)_{\\det}\\to \\Gamma(G,\\O_G)\\eqdef \\Gamma(G)\\]\nwhich corresponds to a map \n\\[V\\otimes V^\\vee\\to \\Gamma(G)\\]\nand then tensoring with $V$, this gives us a map\n\\[V\\xrightarrow{\\sigma}\\Gamma(G)\\otimes V\\]\n\nSo any group action gives us a \\textbf{coaction} of $\\Gamma(G)$ on $V$.\n\\begin{defn}\n\tA representation of $G$ is a $k$-vector space $V$ along with a coaction \n\t\\[\\sigma:V\\to \\Gamma(G)\\otimes V\\]\n\tsatisfying the usual dual diagrams to actions.\n\\end{defn}\n\n\\begin{rmk}\n\tAs a matter of notation, recall that if $G=\\Spec A$, then $A$ is a Hopf algebra. So we call $V$ an $A$-comodule.\n\\end{rmk}\n\n\\subsection{Reps of diagonalizable group schemes}\nLet $k$ be a field (or even a ring!) and let $A$ be a finitely-generated abelian group. Define $D(A)$ to be \n\\[D(A)=\\oplus_{a\\in A}k\\cdot t^a\\eqdef \\Spec R\\]\nThen we get a multiplication \n\\[R\\otimes R\\to R\\qquad t^a\\otimes t^b\\mapsto t^{a+b}\\]\nand comultiplication\n\\[R\\to R\\otimes R\\qquad t^a\\to t^a\\otimes t^a\\]\nand counit $\\varepsilon$ sending $t^a\\to 1$ (all $t^a$ are primitive).\n\\begin{prop}\n\t$R$ is a Hopf algebra. In particular, $D(A)\\to \\Spec k$ is a linear algebraic group.\n\\end{prop}\n\nAs an example, consider $A=\\bbZ$. Then $R\\cong k[t]_t$. Thus $D(A)=\\Gm$. \n\nIf instead $A=\\bbZ/n$, then $R\\cong l[t]/(t^n-1)$, so $D(A)\\cong\\mu_n$.\n\nFinally when $A\\cong \\bbZ^r\\oplus\\bbZ/n_1\\oplus\\cdots\\oplus \\bbZ/n_k$, then \n\\[D(A)=\\Gm^r\\times\\mu_{n_1}\\times\\cdots\\times \\mu_{n_k}\\]\n\n\\begin{defn}\n\tAn algebraic group over $k$ is \\textbf{diagonalizable} if $G\\cong D(A)$ for some $A$.\n\\end{defn}\nRecall the definiton of irreduciblility.\n\\begin{thm}\n\tLet $A$ be af initely generated abelian group and $G=D(A)$. Then \n\t\\begin{itemize}\n\t\t\\item Every irreducible representation of $G$ is one-dimensional and isomorphic to $I_a$, \n\t\tcorresponding to $k\\to \\Gamma(G)\\otimes k$ where $1\\mapsto t^a\\otimes 1$ for some $a\\in A$.\n\t\t\\item Every representation decomposes as a direct sum of irreducibles.\n\t\\end{itemize}\n\\end{thm}\n\\begin{prf}\n\tLet $\\sigma:V\\to \\Gamma(G)\\otimes V$ be a representation of a diagonalizable group. For $a\\in A$, define \n\t\\[V_a\\eqdef\\{v\\in V|\\sigma(v)=t^a\\otimes v\\}\\subseteq V\\]\n\tNow the claim is that $V_a\\cap V_b=0$ if $a\\ne b$ and furthermore $\\sum V_a=V$. The first isn't too hard to see.\n\n\tThe second follows by considering $v\\in V$ and looking at the image of it under $\\sigma$. That is, \n\t\\[\\sigma(v)=\\sum_1^Nt^{\\alpha_i}\\otimes v_i\\]\n\twhere $\\alpha_i\\in A$ and $v_i\\in V$. Then a very simple argument shows $v=\\sum v_i$ (using linearity).\n\tThen it remains to show that $v_i\\in V_{a_i}$, but this will make things work. 
\begin{rmk}
	When $A=\bbZ$, $G=D(\bbZ)=\Gm$, which tells us that representations of $\Gm$ are in bijection with $\bbZ$-gradings
	$V\cong\oplus_{n\in\bbZ}V_n$!
\end{rmk}
\begin{defn}
	A linear algebraic group $G\to\Spec k$ is called \textbf{linearly reductive} if every representation decomposes as a direct sum of irreducibles.
\end{defn}
\begin{prob}
	Show the above is equivalent to each of the statements
	\begin{itemize}
		\item for each subrepresentation $W\subseteq V$ of a $G$-representation $V$, there exists a subrepresentation $W'\subseteq V$ such that $V\cong W\oplus W'$.
		\item every short exact sequence of $G$-representations $0\to W\to V\to W'\to 0$ splits.
	\end{itemize}
\end{prob}
\begin{rmk}
	Notice that this says that the $D(A)$ are linearly reductive. In particular, $\Gm$ and $\mu_n$ are linearly reductive in \textbf{any characteristic}. This runs counter to the situation for finite groups, where Maschke's theorem needs $|G|$ invertible in $k$.
\end{rmk}

Consider $\bbZ/p$ in $\ch k=p$. We get an action of $\bbZ/p$ on $k^2$ via
\[1\cdot(x,y)=(x+y,y).\]
But notice that $k\stackrel{y=0}{\hookrightarrow}k^2$ is a subrepresentation with no complement! Thus this group is not linearly reductive!

\brk

As another example, consider
\[\Ga\cong\left\{\begin{pmatrix}
	1 & \alpha \\ 0 & 1
\end{pmatrix}\right\}
\subset\GL_2(k)\]
where $\Ga$ acts on $k^2$ by $\alpha\cdot(x,y)=(x+\alpha y,y)$. Then, just as above, the subrepresentation $y=0$ has no complement, so $\Ga$ is not linearly reductive either.

\section{October 16th, 2019}
Last time we talked about representations! Woot.

Notice that if $G$ is linear (i.e. affine), then the multiplication map induces a comultiplication
\[\Gamma(G)\to \Gamma(G)\otimes\Gamma(G)\]
so $\Gamma(G)$ is the \textbf{regular} representation, with coaction given by comultiplication.

We also saw some equivalent conditions, similar to Maschke, for linearly reductive groups. Finally we saw some examples, namely diagonalizable groups.

\subsection{New Stuff}
Given a $G$-representation $V$, let $V^G$ be
\[\{v\in V\mid\sigma(v)=1\otimes v\}=\operatorname{Eq}\Big(V\overset{\sigma}{\underset{1\otimes(-)}{\rightrightarrows}}\Gamma(G)\otimes V\Big)=\Hom^G(k,V)\subseteq V.\]
\begin{ex}
	For the $\Gm$-representation $V=\oplus V_d$, we get $V^G=V_0$.
\end{ex}
\begin{prob}
	If $G(k)$ is dense in $G$, then $V^G=V^{G(k)}$.
\end{prob}
\begin{prop}
	A linear algebraic group $G$ over $k$ is linearly reductive if and only if the functor from $G$-representations to
	$k$-vector spaces given by $V\mapsto V^G$ is exact.
\end{prop}
\begin{prf}
	For one direction: given a surjection of $G$-representations $V\twoheadrightarrow W$, complete reducibility lets us write $V\cong W\oplus W'$, and then
	\[W^G\oplus(W')^G=V^G\twoheadrightarrow W^G\]
	is also surjective.

	Conversely, suppose that we have a short exact sequence
	\[0\to W'\to V\to W\to 0\]
	and that this functor is exact. Then we want to show we get a section $\sigma:W\to V$. To do this, consider the functor
	$\Hom^G(W,-)=\Hom^G(k,W^\vee\otimes -)=(W^\vee\otimes -)^G$; by the assumption this is exact, so we can lift the identity on $W$
	to a map in $\Hom^G(W,V)$, giving us our section.
\end{prf}
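To see the $\bbZ/p$ example from October 14 completely concretely, here is a brute-force check over $\bbF_p$ (my own sketch, with $p=5$): the fixed space $V^G$ is exactly the line $y=0$, and it is also the \emph{only} $G$-stable line, so the subrepresentation has no invariant complement.
\begin{verbatim}
# Sketch: the Z/p example over F_p (p = 5); the generator acts by
# (x, y) |-> (x + y, y).
p = 5
def act(v):
    x, y = v
    return ((x + y) % p, y % p)

# V^G: vectors fixed by the generator; should be the line y = 0
fixed = [(x, y) for x in range(p) for y in range(p) if act((x, y)) == (x, y)]
assert fixed == [(x, 0) for x in range(p)]

# All p + 1 lines through the origin, as point sets
lines = [{((s * a) % p, (s * b) % p) for s in range(p)}
         for (a, b) in [(1, 0)] + [(x, 1) for x in range(p)]]
stable = [L for L in lines if all(act(v) in L for v in L)]
print(len(stable), "stable line(s)")   # only span{(1,0)} = V^G survives,
# so the subrepresentation y = 0 admits no G-stable complement.
\end{verbatim}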
\begin{prop}
	Let $G$ be a linear algebraic group over $k$ and $V$ a $G$-representation. Let $W\subseteq V$ be a finite dimensional $k$-subspace (not
	necessarily $G$-invariant). Then there exists $W\subseteq W'\subseteq V$ such that $W'$ is a finite dimensional subrepresentation of $G$.
\end{prop}
\begin{prf}
	We can assume that $W=\langle w\rangle$ for $w\in V$. Apply $\sigma:V\to \Gamma(G)\otimes V$. Then if $\{t_i\}$ is a
	basis for $\Gamma(G)$, we get
	\[w\mapsto\sum t_i\otimes w_i.\]
	Then we claim that $w\in\langle w_i\rangle$ and $\langle w_i\rangle\subseteq V$ is a subrepresentation.

	For the first, consider the diagrams:
	\begin{center}
		\begin{tikzcd}
			k\otimes V & \Gamma(G)\otimes V\ar[l,swap,"e^\ast\otimes\id"]\\
			& V\ar[ul,"\sim"]\ar[u,"\sigma",swap]
		\end{tikzcd}
		\begin{tikzcd}
			\sum e^\ast(t_i) w_i & \sum t_i\otimes w_i\ar[l]\\
			& w\ar[u]
		\end{tikzcd}
	\end{center}
	so $w=\sum e^\ast(t_i)w_i$ is in the span of the $w_i$.

	For the second claim, we need to show that
	\[\sigma(w_i)\in\Gamma(G)\otimes\langle w_i\rangle.\]
	To see this consider the diagram
	\begin{center}
		\begin{tikzcd}
			\Gamma(G)\otimes\Gamma(G)\otimes V & \Gamma(G)\otimes V\ar[l,"m^\ast\otimes\id"]\\
			\Gamma(G)\otimes V\ar[u,"\id\otimes\sigma"] & V\ar[l,"\sigma"]\ar[u,"\sigma"]
		\end{tikzcd}
	\end{center}
	Tracing through $w\in V$, we get that
	\[\sum_i t_i\otimes\sigma(w_i)=\sum_{i,j,k}\alpha_{i,j,k}\,t_i\otimes t_j'\otimes w_k\]
	and so, comparing the coefficients of each $t_i$ in the first tensor factor, we see that
	\[\sigma(w_i)=\sum_{j,k}\alpha_{i,j,k}\,t_j'\otimes w_k\in\Gamma(G)\otimes\langle w_i\rangle.\]
\end{prf}
\begin{cor}
	If $V$ is a $G$-representation, then
	\[V=\bigcup_{W\subseteq V}W\]
	where the union is over all finite dimensional subrepresentations.
\end{cor}
\begin{cor}
		If $G$ is a linear algebraic group (affine finite type over $k$), then for some $n$, $G\subseteq \GL_n$ is a closed subgroup.
		In other words, there exists a faithful representation $V$ of $G$.
\end{cor}

Now consider the regular representation $\Gamma(G)\to \Gamma(G)\otimes \Gamma(G)$. Notice that $\Gamma(G)$ is
a $k$-algebra of finite type. Choose generators $g_1,\dots,g_n$ for $\Gamma(G)$. Take a finite dimensional subrepresentation $W=\langle h_1,\dots,h_N\rangle$ of $\Gamma(G)$
containing the span of the $g_i$.

We have a map $G\to \GL(W)$ and we ask whether the induced map
\[\Sym^\ast(W\otimes W^\vee)_{\det}\to\Gamma(G)\]
is surjective.

Let's say that $h_i\mapsto \sum_j\gamma_{i,j}\otimes h_j$ under $\sigma$. Then using this map and the natural pairing between $W$ and $W^\vee$, we get a map
\[\Sym^\ast(W\otimes W^\vee)\supset W\otimes W^\vee\to\Gamma(G)\otimes W\otimes W^\vee\to \Gamma(G)\]
where we send
\[h_i\otimes h_j^\ast\mapsto \gamma_{i,j}\]
So using the counit identity we can write
\[h_i=\sum_j e^\ast(\gamma_{i,j})h_j\]
but we really want to write $h_i$ as a linear combination of the $\gamma_{i,j}$ (since we have shown those all lie in the image of this map).
We don't know how to finish up.

\section{October 18th, 2019}
Last time we saw that any linear algebraic group $G$ embeds into $\GL_n$. The argument was basically that you look at the global functions
$\Gamma(G)$ and do cool stuff. Right at the end Taffy and Tuomas figured out that we just needed to use the other ``side'' of the counit diagram.
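As a sanity check on this machinery (worked out here, not in lecture), run the construction for $G=\Ga=\Spec k[x]$. The coaction on the regular representation is $m^\ast$, with
\[m^\ast(x)=x\otimes 1+1\otimes x,\]
so starting from the generator $x$ the proposition produces the finite dimensional subrepresentation $W=\langle 1,x\rangle$. The resulting map $G\to\GL(W)\cong\GL_2$ sends (on points)
\[\alpha\mapsto\begin{pmatrix}1&\alpha\\0&1\end{pmatrix},\]
the familiar realization of $\Ga$ as strictly upper triangular matrices, and one can check that it is a closed embedding.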
\subsection{An example}
Consider
\[\operatorname{PGL}_2=(\operatorname{Proj} k[a,b,c,d])_{ad-bc}=\Spec\left((k[a,b,c,d]_{\det})_0\right)\]
Then consider the representation spanned by
\[\left\langle\frac{a^2}{\det},\frac{ab}{\det},\dots,\frac{d^2}{\det}\right\rangle\]
which has dimension 10 in $\Gamma(\operatorname{PGL}_2)$.

Thus we have a representation $\operatorname{PGL}_2\to\GL_{10}$. We can compute the $10\times 10$ matrix representing a given matrix (whose determinant
can be assumed to be 1 since we are modding out by scalars).
\begin{prob}
	Do this! In Sage or something.
\end{prob}
\subsection{Special Linear Groups}
Let's discuss $\SL_2$.
\begin{thm}
	If $\ch k=0$, then
	\begin{itemize}
		\item $\SL_2$ is linearly reductive.
		\item Every irreducible representation of $\SL_2$ is isomorphic to $\Sym^d k^2$ for some $d$.
	\end{itemize}
	where $k^2$ is the standard representation of $\SL_2$.
\end{thm}
\begin{prf}
	\textit{Sketch:} Recall that in the proof of Maschke one takes a surjection $V\twoheadrightarrow W$
	of $G$-representations and we want to show it has a section. We pick a section $s$ of vector spaces and then ``average''
	it:
	\[\tilde s:W\to V\qquad w\mapsto\frac{1}{|G|}\sum_{g\in G}g\cdot s(g^{-1}w)\]
	which is our section.

	Over $\bbC$, using the Haar measure on the compact subgroup $\operatorname{SU}_2\subseteq\SL_2(\bbC)$, we can construct the section via
	\[w\mapsto \int_Gg\cdot s(g^{-1}w)\,\mathrm{d}g\]
	and we get $T_e\SL_2=T_e\operatorname{SU}_2\otimes_\bbR\bbC$; then there is a bit more Lie theory needed to show this makes full sense over $\bbC$.
\end{prf}

Now consider $\Sym^d(k^2)^\vee$, the degree $d$ polynomials in two variables. Then we get an action of $\SL_2$ via
\[g\cdot f(x,y)=f\left(g^{-1}\binom{x}{y}\right)=f(dx-by,-cx+ay)\]
Then, as in many arguments about linear algebraic groups, we can reduce to a so-called \textit{maximal torus}
of matrices $(\begin{smallmatrix}a & 0\\0 & a^{-1}\end{smallmatrix})\cong\Gm$. Then we can use techniques on the Lie algebra
$T_e\SL_2=\mathfrak{sl}_2$.

Whoa coool. The short exact sequence
\[1\to \SL_2\to \GL_2\xrightarrow{\det}\Gm\to 1\]
gives us that any representation of $\SL_2$ lifts via $-\otimes \det^i$ to a representation of $\GL_2$.
Sean asked whether this actually gets all the representations or just the polynomial ones. I feel like I should know the
answer to this. Jarod seems to think this is all of them. I think \href{https://math.stackexchange.com/questions/694001/non-polynomial-representations-of-gl-n}{this stack post}
says something about that.

\textit{I got caught up in thinking and googling and missed an example.}

Let $\ch k=p$. Then $f\mapsto f^p$ (note that $\alpha f\mapsto \alpha^pf^p$ for $\alpha\in k^\times$) gives us a map
\[\Sym^d k^n\to \Sym^{dp}k^n\]
that is additive and takes $p^{\text{th}}$ powers of scalars.

So then $\Sym^N k^n$ is \textbf{not simple} if $p\mid N$: the span of the image of this map is a proper subrepresentation.
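To make the maximal torus step concrete, here is a quick sympy check (my sketch): the torus $\diag(a,a^{-1})$ acts on the degree-$2$ polynomials $\Sym^2(k^2)^\vee$ with weights $-2,0,2$.
\begin{verbatim}
# Sketch: weights of the torus diag(a, 1/a) < SL_2 acting on degree-2
# polynomials in x, y via (g.f)(x, y) = f(g^{-1}(x, y)).
from sympy import symbols, expand

a, x, y = symbols('a x y')

# g = diag(a, 1/a), so g^{-1}(x, y) = (x/a, a*y)
def act(f):
    return expand(f.subs([(x, x/a), (y, a*y)], simultaneous=True))

for f in (x**2, x*y, y**2):
    print(f, '|->', act(f))
# x**2 |-> x**2/a**2,  x*y |-> x*y,  y**2 |-> a**2*y**2: weights -2, 0, 2.
\end{verbatim}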
\section{October 21st, 2019}
Recall that we had that any linear algebraic group over a field $k$ injects into $\GL_n$ as a closed subgroup for some $n$.

An open question is as follows: if $G\to\Spec k[\varepsilon]/\varepsilon^2$ is a flat affine group scheme of finite type,
is $G\subseteq\GL_{n,k[\varepsilon]}$ for some $n$? This question was asked (as far as Jarod knows) by Brian Conrad on Stack Overflow
and is still open.

Our goal is to answer the following: if $\varphi:H\to G$ is a homomorphism, then $K=\ker\varphi=H\times_G\Spec k\subseteq H$ is a closed subgroup. What about its image $H/K$?

\subsection{Torsors}
Today we are going to be talking about $G$-torsors.
\begin{defn}
	If $G$ is a group, a \textbf{torsor under $G$} is a set $P$ with a free and transitive group action.
\end{defn}
\begin{rmk}
	So then by fixing a point $p\in P$, we get $G\xrightarrow{\sim}P$ by sending $g\mapsto pg$. In this way, it is like thinking about
	a group without the identity.
\end{rmk}
An example: take a Galois extension $K(\alpha)/K$ with $f$ the minimal polynomial of $\alpha$. Then $G=\operatorname{Gal}(K(\alpha)/K)$ acts on
$\{x\mid f(x)=0\}$, which is a $G$-torsor.
\begin{defn}
	A \textbf{$G$-torsor over a set $S$} is a set $P$ with a free right $G$-action such that $P\to S$ is $G$-invariant and $S\cong P/G$.
\end{defn}
\begin{rmk}
	Notice that a torsor under $G$ is a specialization of this definition by requiring that $S=\{\ast\}$, the singleton set.
\end{rmk}
\begin{ex}
	Let $H\subseteq G$. Then $H$ acting on $G$ by left multiplication, with quotient $G\to H\backslash G$ (the cosets $Hg$), is an $H$-torsor.
\end{ex}
\subsection{Flatness}
\begin{defn}
	A map of rings $A\to B$ is \textbf{flat} if $-\otimes_A B$ is exact.
\end{defn}
\begin{rmk}
	Equivalently: for all $p\in \Spec A$, $A_p\to B_p$ is flat. Also: for all $q\in\Spec B$, $A_{\phi^{-1}(q)}\to B_q$ is flat.
\end{rmk}
\begin{defn}
	$A\to B$ is \textbf{faithfully flat} if and only if $-\otimes_AB$ is \textbf{faithfully exact}: a sequence is exact if and only if it is exact after applying $-\otimes_AB$.
\end{defn}
\begin{rmk}
	Other equivalences of faithful flatness: $A\to B$ is flat and $\Spec B\twoheadrightarrow\Spec A$; or $A\to B$
	is flat and, for any $A$-module $M$, $M=0\Leftrightarrow M\otimes_AB=0$.
\end{rmk}
\begin{rmk}
	If $\Spec B\twoheadrightarrow\Spec A$ is faithfully flat and of finite presentation, then $\Spec A$ has the quotient topology.
\end{rmk}

\begin{prop}
	Let $S=\Spec A$ be Noetherian. Let $G\to S$ be an affine group scheme, flat and of finite type over $S$. Let $P\to S$
	be a scheme over $S$ with a right $G$-action $P\times_SG\xrightarrow{\sigma}P$. Then the following are equivalent:
	\begin{enumerate}
		\item $P\to S$ is affine, (faithfully?)\footnote{We tried to prove this in class and it seemed not to be true if we don't say this}
		flat, finite type and $(\sigma,\pi_P):(p,g)\mapsto (pg,p)$ is an isomorphism $P\times_SG\to P\times_SP$.
		\item There exists a faithfully flat $S'\to S$ such that
		\begin{center}
			\begin{tikzcd}
				P_{S'}\ar[r]\ar[d] & P\ar[d]\\
				S'\ar[r] &S
			\end{tikzcd}
		\end{center}
		is Cartesian and $P_{S'}\cong G_{S'}$ equivariantly for the $G_{S'}$-actions.
	\end{enumerate}
\end{prop}
\begin{rmk}
	Note that the above says exactly that $P$ is a $G$-torsor. Another name that has been mentioned, and which Jarod seems to
	like, is \textbf{principal $G$-bundle}.
\end{rmk}
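Condition (2) is visible in the Galois example: for $L=\bbQ(\sqrt2)$ over $\bbQ$, the $\bbZ/2$-torsor $\Spec L\to\Spec\bbQ$ becomes trivial after the faithfully flat base change $S'=\Spec L$, since $L\otimes_\bbQ L\cong L[x]/(x^2-2)\cong L\times L$. A one-line sympy check of the key factorization (my sketch; compare the October 23 discussion below):
\begin{verbatim}
# L (x)_Q L = L[x]/(x^2 - 2) splits into two factors once sqrt(2) lies in L.
from sympy import Symbol, sqrt, factor

x = Symbol('x')
print(factor(x**2 - 2, extension=sqrt(2)))
# (x - sqrt(2))*(x + sqrt(2))  =>  L (x) L = L x L by CRT, so the torsor
# Spec L -> Spec Q is trivialized over Spec L itself.
\end{verbatim}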
\subsection{Descent}
Along the way of proving the above proposition, we use the idea of descent.
\begin{lem}
	Consider $X'=\Spec B'\stackrel{\text{f.flat, f.type}}{\twoheadrightarrow}X=\Spec B\to Y=\Spec A$
	where $X',X,$ and $Y$ are Noetherian. Then
	\begin{enumerate}
		\item $X'\to Y$ is flat implies that $X\to Y$ is flat.
		\item $X'\to Y$ is faithfully flat implies that $X\to Y$ is faithfully flat.
		\item $X'\to Y$ is finite type implies that $X\to Y$ is.
	\end{enumerate}
\end{lem}
\begin{rmk}
	The idea for the first two is just looking at the functors, using that $B\to B'$ is faithfully flat. For the third, write $B=\cup_\lambda B_\lambda$ as the union of its finitely generated $A$-subalgebras $B_\lambda$.
	Tensoring over $B$ with $B'$ gets us $B'=\cup_\lambda B_\lambda\otimes_B B'$.

	But since $B'$ is finitely generated over $A$, eventually $B_\lambda\otimes_BB'=B'$. Then
	consider
	\[0\to B_\lambda\to B\to B/B_\lambda\to 0.\]
	After tensoring with (faithfully flat!) $B'$ over $B$, for some $\lambda$
	\[0\to B_\lambda\otimes_B B'\xrightarrow{\sim}B'\to B/B_\lambda\otimes_BB'\to 0\]
	is exact, forcing the rightmost term to be zero. But by faithfulness this implies $B/B_\lambda=0$ and we are done.
\end{rmk}
\begin{prop}
	Consider
	\begin{center}
		\begin{tikzcd}
			X'\ar[r]\ar[d] & X\ar[d]\\
			S'=\Spec A'\ar[r,"\text{f.type, f.flat}",two heads,swap] & S=\Spec A
		\end{tikzcd}
	\end{center}
	a Cartesian square. Then
	\begin{enumerate}
		\item $X'\to S'$ is an isomorphism iff $X\to S$ is.
		\item $X'\to S'$ is affine iff $X\to S$ is.
	\end{enumerate}
\end{prop}

\section{October 23rd, 2019}
Recall the following definition/proposition:
\begin{defn}
	Let $S=\Spec R$ be Noetherian. Let $G\to S$ be an affine group scheme that is flat and of finite type. Let $P$ be a scheme over $S$ with a right $G$-action.

	Then the following are equivalent:
	\begin{itemize}
		\item $P\to S$ is a \textbf{$G$-torsor}.
		\item $P\to S$ is faithfully flat and of finite type and $P\times_SG\xrightarrow{(\sigma,\pi_1)}P\times_S P$ is an isomorphism.
		\item There exists $S'\twoheadrightarrow S$ faithfully flat such that
		\begin{center}
			\begin{tikzcd}
				G\times_S S'\cong P\times_S S'\ar[r]\ar[d] & P\ar[d]\\
				S'\ar[r] & S
			\end{tikzcd}
		\end{center}
		commutes (where the isomorphism shown is $G\times_SS'$-equivariant.)
	\end{itemize}
\end{defn}

\subsection{Some Examples}
We have the trivial torsor $P=G\to S$. It is a proposition that $P\to S$ is trivial iff there exists a section $s:S\to P$.

Let $L/K$ be a finite Galois extension. Then $G=\operatorname{Gal}(L/K)$ acts on $P=\Spec L$, and $\Spec L\to \Spec K$ is a $G$-torsor.
In the diagram in the definition above, $\Spec L$ itself plays the part of $S'$: we get $L\otimes L\cong L[x]/f\cong \prod_{g\in G}L$ and
\[G\times_{\Spec K}\Spec L\cong \Spec(L\otimes L)\cong \sqcup_{g\in G}\Spec L.\]

\brk

Now let $X$ be a scheme and let $\Gm$ act on a line bundle $L\to X$ with zero section $o:X\to L$.
Then $(L\\setminus o(X))\\to X$ is a $\\Gm$ torsor.\nIt is a result, although we don't have the machinery yet, that \n\\begin{prop}\n\tThere is a bijection between line bundles on $X$ and $\\Gm$-torsors.\n\\end{prop}\n\\begin{rmk}\n\tWe can do womething similar with any vector bundle over $X$: if $V\\to X$ is one, then $V_x$ over $x\\in X$ is a vector space. \n\tWe just send $V$ to $\\operatorname{Frame}(V)$, which over any $x\\in X$ we have the set of orderd bases of $V_x$. This gives us a $\\GL_n$-torsor.\n\\end{rmk}\n\n\\brk\n\nIf we have a subgroup (say of an algebraic group) $H\\subseteq G$, recall that we wanted to show the existence of an $H$-torsor $G\\to G/H$.\n\nWe begin by talking about \\textit{abstract groups}. Assume we have an exact sequence \n\\[1\\to K\\to G\\xrightarrow{\\pi}Q.\\]\nThen $G\\times K\\xrightarrow{\\sim}G\\times_QG$ via the map $(g,k)\\mapsto(g, gk)$. The proof isn't too bad.\n\nSo now consider a geometric group. If we have the same exact sequence of \\textit{algebraic groups over $k$}, we get $K=G\\times_Q\\Spec k$(??)\nand then evaluating at any scheme $S$, we get \n\\[1\\to K(S)\\to G(S)\\to Q(S)\\]\nand consider the map $G(S)\\times K(S)\\to G(S)\\times_{Q(S)}G(S)$ as above. Then by Yoneda we get an isomorphism $G\\times K\\cong G\\times_Q G$ of schemes.\n\n\\begin{cor}\n\tIf $\\pi:G\\to Q$ is a faithfully flat map of linear algebraic groups over $k$, then $G\\to Q$ is a torsor under $K=\\ker\\pi$\n\\end{cor}\nLet's do even better!\n\\begin{cor}\n\tLet $\\pi:G\\to Q$ be dominant (i.e. the image is dense in $Q$) and furthermore that $Q$ is reduced. Then $G\\to Q$ is faithfully flat and in particular $G\\to Q$ is a $K$-torsor.\n\\end{cor}\n\n\\begin{prf}\n\t(Of the second corollary): We use the idea of ``generic flatness''. That is there exists a $U\\subseteq Q$ such that $\\pi^{-1}(U)\\to U$ is flat. Then we can translate this by \n\tthe $G$-action (after passing to the algebraic closure of $k$ so that the points are dense) and then flat descent gives us the result we want. :)\n\\end{prf}\n\n\\begin{thm}\n\tLet $G$ be alinearly algebraic group over $k$. Let $X$ be a scheme over $k$ of finite type.\n\tThen\n\t\\[\\{G\\text{-bundles on $X$}\\}\\cong H_{fl}^1(X,G)\\]\n\twhere $H_{fl}$ is flat cohomology.\n\\end{thm}\nThe idea here is clear when $G=\\Gm$: you get connections between line bundles on $X$ (i.e. the Picard group of $X$) and $\\Gm$-torsors and similarly between the line bundles and $H_{Zar}^1(X,\\O_X^\\times)$, \nusing the (usual) Zariski sheaf cohomology.\n\n\\section{October 25th, 2019}\nWe are going to deviate slightly from the result promised last time, because we want to talk first a bit about \n\\subsection{Descent}\nRecall that we had two results before about descent involving faithfully flat morphisms of finite type (where the schemes are Noetherian).\n\n\\begin{ex}Let $X=\\Spec B$ and let $U_i$ be a Zariski-open cover. Then the map \n\\[\\sqcup U_i\\to X\\]\nis faithfully flat and of finite type.\n\\end{ex}\nSo then one idea is that if we have a nice Zariski cover of $X$, flatness, faithful flatness, and finite type are all ``local on the source''\nin that you just have to check the property locally on $X$.\n\n\\brk\n\n\\begin{prop}\n\tLet $A\\to B$ be a faithfully flat ring homomorphism. 
There is some geometry that lost me a bit. Sorry. :(
\begin{prop}
	If $A\to B$ is faithfully flat, then the map $\rmod A\to\{(N,\alpha)\mid N\in\lmod B, \alpha:p_1^\ast N\cong p_2^\ast N, P(\alpha)\}$
	sending
	\[M\mapsto(M_B=M\otimes_AB,\alpha_{can})\]
	is
	an equivalence of categories, where $P(\alpha)$ means that $\alpha$ satisfies a cocycle condition.
\end{prop}
\begin{rmk}
	The canonical isomorphism above is
	\[\alpha_{can}:M_B\otimes_{B,p_1}(B\otimes_AB)\to M_B\otimes_{B,p_2}(B\otimes_A B)\]
\end{rmk}

\section{November 4th, 2019}
Today we are going to do a bit of review as well as do a little more work with $G$-torsors.

\subsection{Review}
Recall the following setup: $S=\Spec A$ is Noetherian. Then let $G\to S$ be an affine group scheme of finite type over $S$. Let $P\to S$
be a scheme with a right $G$-action:
\[P\times_S G\xrightarrow{\sigma}P.\]
\begin{defn}\label{defn:torsor}
	Then $P\to S$ is a \textbf{$G$-torsor} if either of the following hold:
	\begin{itemize}
		\item $P\to S$ is affine, faithfully flat, of finite type, and furthermore transitive (that is, $P\times_S G\xrightarrow{(\sigma,\pi_1)}P\times_SP$ is an isomorphism).
		\item There exists a faithfully flat $S'\to S$ with $P\times_S S'\cong G\times_S S'$, where this isomorphism is $G\times_S S'$-equivariant.
	\end{itemize}
\end{defn}
\begin{rmk}
	If $G$ is a linear algebraic group over $k$ and $S$ is defined over $k$, then we say $P\to S$ is a $G$-torsor if it is a torsor under $G\times_k S$.
\end{rmk}

The \textbf{trivial $G$-torsor} over $S$ is $S\times G\to S$; a torsor $P\to S$ is trivial if and only if it has a section.

Some examples:
\begin{itemize}
	\item The $\Gm$-torsors over $S$ are in bijection with the line bundles over $S$.
	\item The $\GL_n$-torsors over $S$ are in bijection with the vector bundles over $S$.
\end{itemize}
Since line and vector bundles are trivialized in the Zariski topology, for these groups the second condition in definition~\ref{defn:torsor} can be replaced with:
there exists an open cover $\{S_i\}$ such that $P|_{S_i}\cong S_i\times G$.

\brk

We proved some stuff with descent. Look at it.
\begin{cor}
There exists an equivalence between $\Alg_A$ and the category of $B$-algebras $C$ together with isomorphisms $\alpha:p_1^\ast C\to p_2^\ast C$ satisfying the cocycle condition.
\end{cor}
\begin{rmk}
	To prove this, we descend such a $(C,\alpha)$ to an $A$-module and show that the multiplication on $C$ descends nicely as well.
\end{rmk>
\begin{cor}
	We can also descend $G$-torsors. That is, we have an equivalence between $G$-torsors $P\to S$ and $G$-torsors $P'\to S'$ equipped with the analogous isomorphisms and cocycle condition.
\end{cor}
This last result takes quite a bit of doing, although we have all the machinery we need. This is the
one we're going to be using.

\subsection{New Stuff}
Let $X$ be a scheme.
Then $\\operatorname{Pic}(X)$ is the group of line bundles, or equivalently $\\Gm$-torsors on $X$.\n\\begin{thm}\n\t$\\operatorname{Pic}(X)=H^1(X,\\O_X^\\times)=H^1(X,\\Gm)$.\n\\end{thm}\nNotice that $\\O_X^\\times$ is a sheaf assigning to each $U$ $\\O_X(U)^\\times$. On Zariski opens, this is the same as $\\Gm(U)$.\n\nTo prove the above result, we need to discuss \n\\subsection{\\texorpdfstring{\\v C}{C}ech Cohomology}\nAssume $X$ is separated. Let $\\calU=\\{U_i\\}$ be an affine covering where $U_i\\cap U_j$ is also affine. Then considering the complex \n\\[\\sqcup U_i\\cap U_j\\cap U_k\\to \\sqcup U_i\\cap U_j\\to \\sqcup U_i\\to X\\]\nwhere we have several a parallel maps for each $n-1$-tuple of intersections in the previous term. Then we can applie $\\O_X^\\times$:\n\\[\\prod \\O_X(U_i)^\\times\\to \\prod\\O_X(U_i\\cap U_j)^\\times\\to\\cdots\\]\nwhere, for instance, the first map above sends a product of maps $(s_i)$ to $(s_i|_{U_i\\cap U_j}-s_j|_{U_i\\cap U_j})$.\n\n\\begin{defn}\n\t$\\hat H_\\calU^i(X,\\O_X^\\times)$ is the $i^{th}$ cohomoogy of the above complex.\n\\end{defn}\nThen it can be shown that $\\hat H_\\calU^i(X,\\O_X^\\times)$ is independent of cover if the $U_i$ are affine.\n\nSo what is $H^1(X,\\O_X^\\times)$? It is the $s_{i,j}in\\O_X^\\times(U_i\\cap U_j)$ modulo the cocycle condition.\n\n\\subsection{A new result}\n\\begin{thm}\n\t$H^1(X,G)$ is identified with $G$-torsors.\n\\end{thm}\nWhat is $H^1(X,G)$? We are going to try to recover \\v Cech cohomology. Let $X$ be quasi-compact and let $\\cup_1^n U_i\\to X$ be \nfaithfully flat. The seperatedness gets us the intersections are affine.\n\nThen we can play the same game, but this time applying $G$: (notice that $U_i\\cap U_j=U_i\\times_X U_j$)\n\\[\\prod G(U_i)\\to \\prod G(U_i\\times_X U_j)\\to\\cdots\\]\nwhere (notice now $G$ may be nonabelian!) we map \n\\[(s_i)\\mapsto (s_i|_{U_i\\times_X U_j}\\cdot s_j|_{U_i\\times_X U_j}^{-1})_{i,j}\\]\nand \n\\[(s_{i,j})\\mapsto (s_{i,j}\\cdot s_{j,k}\\cdot s_{i,k}^{-1})\\]\nwhere each $s$ is restricted to $U_i\\times_XU_j\\times_X U_k$.\n\nThen we can define $\\hat H_\\calU^1(X,G)$ to be the first cohomology of the above chain. Then \n\\begin{defn}\n\tThe \\textbf{flat cohomology} is \n\t\\[H^1_\\text{flat}(X,G)=\\operatorname{colim}_\\calU H^1_\\calU(X,G).\\]\n\\end{defn}\n\nThe overall result here is \n\\begin{thm}\n\tThe $G$-torsors on $X$ in bijection with the elements of $H^1_\\text{flat}(X,G)$.\n\\end{thm}\n\n\\section{November 6th, 2019}\nToday we are going to prove some more things about representation theory. Down the pipe somewhere we will hope to talk about Tanakka duality, \nwhich gives us that Morita equivalence (is this still what it's called?) 
\section{November 6th, 2019}
Today we are going to prove some more things about representation theory. Down the pipe somewhere we will hope to talk about Tannaka duality,
which gives us that Morita equivalence (is this still what it's called?) of two algebraic groups forces the groups to be isomorphic.

Given an algebraic group $G/k$, we could consider representations, which were either a group scheme morphism $G\to \GL(V)$ or else a coaction $V\xrightarrow{\sigma} \Gamma(G)\otimes_k V$.
Then we said that $G$ was linearly reductive if and only if every representation is completely reducible.

\begin{ex}
	If $G=\Gm$, then every irreducible representation is one-dimensional: $\Gm\to\Gm$ sends $t\mapsto t^a$ for some $a\in\bbZ$.
\end{ex}
A natural question one may ask: what can we say in general?
\subsection{The regular representation}
The regular representation is $\Gamma(G)$ with coaction given by comultiplication.
\begin{prop}
	Any finite dimensional representation $V$ of $G$ embeds as $V\subseteq \Gamma(G)^{\oplus\dim V}$.
\end{prop}
\begin{prf}
	The coaction gives us a map
	\[V\xrightarrow{\sigma}\Gamma(G)\otimes V=\Gamma(G)\otimes_k V_{\text{vs}}\cong \Gamma(G)^{\oplus\dim V}\]
	where $V_{\text{vs}}$ is the underlying vector space of $V$. Then the claim is that the overall map is injective and that it is a map of $G$-reps.

	The latter property is almost tautological by the properties of a coaction. Injectivity follows from the fact that
	\[(e^\ast\otimes\id)\circ\sigma\]
	is an isomorphism $V\to k\otimes V$ (again by one of the coaction axioms), so $\sigma$ is injective.
\end{prf}
\begin{prop}
	If $V$ is a finite dimensional faithful representation, then every other finite-dimensional representation can be obtained from $V$ by direct sums, tensors, duals, subrepresentations, and quotients.
\end{prop}
\begin{rmk}
	More precisely, $W\subseteq (V\oplus V^\vee)^{\otimes n}$.
\end{rmk}
\begin{prf}
	$G\subseteq\GL(V)$ is a closed subgroup. Notice that since we have an injective map into $\GL(V)$, $G$ is already a linear algebraic group.
\end{prf}
\begin{thm}[Peter-Weyl]
	If $G$ is linearly reductive, then
	\[\Gamma(G)=\oplus_{\text{irr $V$}}(V\otimes V^\vee)\]
\end{thm}
\begin{rmk}
	Notice that this is as two-sided representations, but if we only want to look at (say) left representations, we give $V^\vee$ the trivial
	representation structure and we get the more familiar
	\[\Gamma(G)=\oplus_{\text{irr $V$}} V^{\oplus\dim V}\]
	and then the formula from finite groups:
	\[|G|=\sum_{\text{irr $V$}} (\dim V)^2\]
\end{rmk}
As an example, consider $\Gm=\Spec k[t]_t$ and $k[t]_t=\oplus_{n\in\bbZ}k\langle t^n\rangle$. As an exercise, think about what happens for $\SL_2$.

\subsection{Stabilizers of subspaces}
If $G$ is an algebraic group over $k$ and $V$ is a $G$-representation, let $W\subseteq V$ be a subspace. We want a subgroup $G_W$ which plays a similar role to the stabilizer in group theory.

Consider $G$ as a functor from $\Alg_k\subseteq \Sch/k\to\Grp$ taking $T\mapsto\Hom(T,G)$. Then for a $k$-algebra $R,$ $G(R)$
acts on $V_R=V\otimes R$.
\begin{prop}
	The functor $\Alg_k\to\Grp$ sending
	\[R\mapsto \{g\in G(R)\mid gW_R=W_R\}\]
	is representable by a subgroup of $G$, which we will denote $G_W$.
\end{prop}
\begin{rmk}
	The idea here is to fix a basis $\{e_i\}_{i\in I}$ of $W$ and complete it to a basis of $V$ by appending vectors $\{f_j\}_{j\in J}$. Then take the coaction $V\to\Gamma(G)\otimes V$ and consider the images of the $e_k$:
	\[e_k\mapsto \sum_{i\in I}a_{ki}\otimes e_i+\sum_{j\in J}b_{kj}\otimes f_j.\]
	Suppose we have $g\in G(R)=\Hom(\Spec R, G)$. This gives us a map in $\Hom(\Gamma(G),R)$. Then using the action on $V\otimes R$:
	\[g\cdot(e_k\otimes 1)=\sum g(a_{ki})\otimes e_i+\sum g(b_{kj})\otimes f_j\in V_R\]
	and we see that this is actually in $W_R$ exactly when $g(b_{kj})=0$ for all $j$.

	But then this means that $g$ lands in $V(b_{kj})$, the vanishing locus of the functions $b_{kj}\in\Gamma(G)$, so we get that
	the closed subscheme $V(b_{kj})$ represents the functor $G_W$.
\end{rmk}
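A tiny worked instance of this recipe (mine, not from lecture): take $G=\GL_2$ acting on $V=k^2$ with $W=\langle e_1\rangle$. The coaction is
\[e_1\mapsto a\otimes e_1+c\otimes e_2,\qquad e_2\mapsto b\otimes e_1+d\otimes e_2,\]
so the only ``off-$W$'' coefficient attached to $e_1$ is $b_{11}=c$, and
\[G_W=V(c)=\left\{\begin{pmatrix}\ast&\ast\\0&\ast\end{pmatrix}\right\},\]
the Borel subgroup of upper triangular matrices. This matches the $\SL_2$ computation just below.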
For the converse, let $G$ be an algebraic group over $k$ and let $H\subseteq G$ be a subgroup. Then there exists a finite dimensional representation $V$ of $G$ and
a one-dimensional subspace $L\subseteq V$ such that $H=G_L$.

As an example, look at $G=\SL_2$ acting on $k^2$. Let $L=\langle (1,0)\rangle$. We can see easily that $L$ is preserved by $g\in G$ if and only if the lower-left coordinate of $g$ is zero.
Now if we take $\Gm=\operatorname{diag}(t,t^{-1})\subseteq\SL_2$, consider the representation $\langle xy\rangle\subseteq \operatorname{Sym}^2k^2$. This isn't quite it but maybe you can work it out!

\section{November 8th, 2019}
We started by talking about elections. Go Andrew Lewis. You can do it.

Recall that last time we had a result that said that we had an analog of the stabilizer: $G_W\subseteq G$
representing the functor
\[R\mapsto \{g\in G(R)\mid gW_R=W_R\}.\]
The idea was to take a basis for $W$, complete it to one of $V$, and then find the equations cutting out the elements fixing $W$. We found
that $G_W=V(\{b_{kj}\})$.
\begin{thm}[Chevalley]
	If $H\subseteq G$ is a subgroup of an algebraic group over $k$, then there exists a $G$-representation $V$ and a line $L\subseteq V$ such that $H=G_L$.
\end{thm}
\begin{prf}
	Let $\pi:\Gamma(G)\twoheadrightarrow\Gamma(H)$ be the map induced by $H\hookrightarrow G$. Let $q_1,\dots,q_n$ be
	generators of $\ker\pi=I\subseteq\Gamma(G)$, the regular representation. Take a finite dimensional subrepresentation $V\subseteq\Gamma(G)$ containing the $q_i$,
	and pick a basis $e_1,\dots,e_s$ of $W=V\cap I$. Extend this basis to one of $V$ with the additional vectors denoted $f_1,\dots,f_t$.

	Now the image of the coaction gives us
	\[e_i\mapsto\sum_k a_{ik}\otimes e_k+\sum_k b_{ik}\otimes f_k\]
	and we let $I'=(b_{ik})$. We claim that $I=I'$. If this is true, then $H=G_W$ with $W\subseteq V$, so we set
	\[L=\bigwedge^{\dim W}W\subseteq \bigwedge^{\dim W}V\]
	and now $H= G_L$.

	To see the claim, consider that since $e_i\in I$, we get $\sum_ka_{ik}\otimes e_k\in \Gamma(G)\otimes I$.
	But the comultiplication on $\Gamma(G)$ gives us that
	\[m^\ast(I)\subseteq I\otimes \Gamma(G)+\Gamma(G)\otimes I\subseteq \Gamma(G)\otimes\Gamma(G)\]
	so we get that
	\[\sum b_{ik}\otimes f_k\in I\otimes \Gamma(G)+\Gamma(G)\otimes I\]
	but since the $f_k\notin I$, this forces $b_{ik}\in I$. Thus $I'\subseteq I$.

	For the other containment I got behind.
\end{prf}
\subsection{Quotients}
Let $H\subseteq G$ be normal. Now the goal is to construct $G/H$ as an algebraic group. As a preview, we are going to use the last theorem
to get a representation $V$ of $G$ with $H=G_L$, such that $G\to \GL(V)$ descends to a representation $G\to \GL(V^H)$. Then $G/H$ will be the image of this map in $\GL(V^H)$.
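A standard example to keep in mind for this construction (not spelled out in lecture; take $\ch k\ne 2$, say): $G=\SL_2$ with $H=\mu_2=\{\pm I\}$ its center. The quotient is
\[1\to\mu_2\to\SL_2\to\operatorname{PGL}_2\to 1,\]
and, matching the October 18 description $\operatorname{PGL}_2=\Spec\left((k[a,b,c,d]_{\det})_0\right)$, the coordinate ring of the quotient is the invariant subring $\Gamma(\SL_2)^{\mu_2}$, spanned by the even-degree monomials in $a,b,c,d$.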
\begin{defn}
	A subgroup $H\subseteq G$ of an algebraic group over $k$ is \textbf{normal} if for all $k$-schemes $S$, $H(S)\le G(S)$ is a normal subgroup.
\end{defn}
\begin{rmk}
	As an exercise one can show that if $G(k)$ is dense in $G$, then $H\subseteq G$ is normal if and only if for all $g\in G(k)$, $gHg^{-1}\subseteq H$.
\end{rmk}
\begin{lem}
	If $\pi:H\to G$ is a homomorphism of algebraic groups, then $\ker\pi\subseteq H$ is normal.
\end{lem}
To see the above, consider that the kernel is the pullback:
\begin{center}
	\begin{tikzcd}
		S\ar[dr,dashed]\ar[drr,bend left]\ar[ddr,bend right] & &\\
		& \ker\pi\ar[r]\ar[d] & \Spec k\ar[d,"e_G"]\\
		& H\ar[r,"\pi"] & G
	\end{tikzcd}
\end{center}
And then the kernel satisfies $(\ker \pi)(S)=K(S)$, which is normal in $H(S)$. Think about this one some more.
\begin{defn}
	A homomorphism of algebraic groups $G\to Q$ is a \textbf{quotient map} if $G\to Q$ is faithfully flat.
\end{defn}
\begin{prop}
	A quotient map of linear algebraic groups $\pi:G\to Q$ satisfies the universal property: every $f:G\to H$ killing $K=\ker\pi$ factors uniquely:
	\begin{center}
		\begin{tikzcd}
			K=\ker\pi\ar[r]\ar[rd,"0"] & G\ar[r,"\pi"]\ar[d,"f"] & Q\ar[dl,dashed,"!"]\\
			& H &
		\end{tikzcd}
	\end{center}
\end{prop}

\section{November 13, 2019}
The big theorem here:
\begin{thm}
	Let $G$ be an algebraic group over $k$. Let $H\subseteq G$ be an algebraic subgroup (over $k$). Then the
	quotient $G/H$ exists as a quasi-projective scheme over $k$.

	Moreover if $H\subseteq G$ is normal, then $G/H$ is also an algebraic group.
\end{thm}

Today we are going to prove/see the case when $G$ is a linear algebraic group and smooth (reduced), $H\subseteq G$ is normal, and $k=\bar k$.
The algebraic closedness of $k$ and the smoothness can be dropped at the expense of a couple extra lectures, probably.

\begin{defn}
	A map $G\xrightarrow{\pi} Q$ of algebraic groups over $k$ is a \textbf{quotient} if $\pi$ is faithfully flat.
\end{defn}

Using descent, we showed
\begin{prop}
	If $G\xrightarrow{\pi} Q$ is a quotient and we have another map $G\xrightarrow{\phi} P$ with $\ker\pi\subseteq \ker\phi$,
	then we get a (unique) factorization $Q\to P$.
\end{prop}

We also have
\begin{prop}
	Any map of linear algebraic groups $G\xrightarrow{\phi} H$ factors as $G\to Q\hookrightarrow H$ where $G\to Q$ is a quotient map.
\end{prop}
\begin{prf}
	In the case that $G$ is reduced, $\phi^\ast$ gives a surjection of $\Gamma(H)$ onto $\Im\phi^\ast$, and we take $Q=\Spec\Im(\phi^\ast)$.
	Then it just takes showing that the surjection $\Gamma(H)\twoheadrightarrow \Im(\phi^\ast)$ induces a closed immersion of $Q$ in $H$.

	We also need a lemma that I missed, but it can be found both in Milne and Waterhouse. This is where the heavy lifting is done.
\end{prf}
\section{November 15th, 2019}
Today we are going to be talking about properties of algebraic groups as well as properties of elements $g\in G$. Today we will be focusing on
$g\in \GL(V)$, so essentially we are talking linear algebra.

\begin{defn}
	Let $V$ be a finite dimensional vector space over any field $k$. Let $g\in \GL(V)(k)$.
Then $g$ is
	\begin{itemize}
		\item \textbf{diagonalizable} if there exists a basis of eigenvectors.
		\item \textbf{semisimple} if there exists an extension $k'/k$ such that $g\otimes\id\in\GL(V\otimes_kk')$ is diagonalizable.
		\item \textbf{unipotent} if $g-\id$ is nilpotent.
		\item \textbf{triangularizable} if there exists a basis in which $g$ is upper triangular.
	\end{itemize}
\end{defn}

Let $P_g\in k[T]$ be the characteristic polynomial of $g$. We say the \textit{eigenvalues of $g$ are in $k$} if $P_g$ splits over $k$. If $g$ is diagonalizable,
then the eigenvalues are in $k$; and having eigenvalues in $k$ is exactly equivalent to $g$ being triangularizable. Furthermore, $g$ is unipotent if and only if
all of its eigenvalues are $1$ (equivalently, all eigenvalues of $g-\id$ are $0$).
\begin{prop}
	If the eigenvalues $\lambda_i$ are in $k$, then
	\[V\cong \bigoplus_i\ker\left((g-\lambda_i\id)^{a_i}\right)\]
	which gives us our decomposition in terms of generalized eigenspaces.
\end{prop}

\subsection{Jordan Decomposition}
\begin{thm}
	Let $k$ be perfect and let $g\in\GL(V)$ with $V$ finite dimensional over $k$. Then there exist unique $g_s$ and $g_u$ in $\GL(V)$ such that
	\begin{itemize}
		\item $g=g_sg_u=g_ug_s$
		\item $g_s$ is semisimple and $g_u$ is unipotent.
	\end{itemize}
	Moreover, $g_s$ and $g_u$ are polynomials in $g$.
\end{thm}
\begin{prf}
	Assume first that $P_g(T)$ splits over $k$. Thus we can decompose
	\[V\cong \bigoplus_i V_{\lambda_i}\]
	where $V_{\lambda_i}=\ker((g-\lambda_i\id)^{a_i})$.
	In a suitable basis this puts $g$ in upper triangular form with the eigenvalues on the diagonal. Take $g_s$ to be $\diag(\lambda_1,\dots,\lambda_1,\dots,\lambda_s,\dots,\lambda_s)$,
	the diagonal of the matrix in this form (each $\lambda_i$ repeated $a_i$ times). This is invertible (since $g\in\GL(V)$) so we set $g_u=g_s^{-1}g$. It is clear from this that these matrices commute and that they are
	semisimple and unipotent, respectively.

	To get that these are polynomials in $g$, you go through an argument I missed (presumably: by CRT choose $P$ with $P\equiv\lambda_i\pmod{(T-\lambda_i)^{a_i}}$ for each $i$, so that $g_s=P(g)$). The uniqueness comes from looking at
	\[g=g_sg_u=h_sh_u\]
	and taking
	\[h_s^{-1}g_s=h_ug_u^{-1}\]
	and since everything commutes (check this), this common element is both semisimple and unipotent. But then it is diagonalizable with all
	eigenvalues 1, so it is the identity and we are done.

	For the more general case, choose a finite extension $k'/k$ such that $P_g$ splits. Since $k$ is perfect, this is separable, so we can assume it is Galois and
	set $G=\Gal(k'/k)$. So $g\otimes \id\in\GL(V\otimes_k k')$ factors uniquely as $g\otimes \id=g_sg_u$ with $g_s,g_u\in\GL(V)(k')$.

	But then using the action of $G$ on $\GL(V)(k')$, by uniqueness of this decomposition $\sigma g_s=g_s$ and $\sigma g_u=g_u$ for all $\sigma\in G$, so
	the elements are in fact defined over the field fixed by $G$ (i.e. $k$). Similarly, $G$ acts on the polynomial $Q(T)\in k'[T]$ with $Q(g)=g_s$ and must fix it, so $g_s$ and $g_u$ are polynomials in $g$ over $k$ as well.
\end{prf}

As an example, consider a field $k$ of characteristic 2 and pick $\alpha\in k\setminus k^2$ (so $k$ is not perfect). Then the matrix $g=(\begin{smallmatrix}
	0& 1\\\alpha&0
\end{smallmatrix})$ has characteristic polynomial $T^2-\alpha=(T-\sqrt\alpha)^2$, a single repeated eigenvalue $\sqrt\alpha\notin k$.
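A concrete instance of the theorem, checked with sympy (my sketch; the matrix is an arbitrary choice):
\begin{verbatim}
# Sketch: Jordan decomposition g = g_s g_u for a concrete matrix over Q.
from sympy import Matrix

g = Matrix([[2, 1], [0, 2]])
P, J = g.jordan_form()                      # g = P * J * P^{-1}
g_s = P * Matrix.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()
g_u = g_s.inv() * g

assert g_s * g_u == g_u * g_s == g                       # commuting factors
assert (g_u - Matrix.eye(2))**2 == Matrix.zeros(2, 2)    # g_u unipotent
print("g_s =", g_s.tolist(), " g_u =", g_u.tolist())
# g_s = [[2, 0], [0, 2]],  g_u = [[1, 1/2], [0, 1]]
\end{verbatim}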
\subsection{A broader context}
So then if $\iota:G\hookrightarrow\GL(V)$ is an inclusion (a faithful representation), we can decompose $\iota(g)=g_sg_u$ using the above results. Some properties of this phenomenon:
\begin{prop}
	If $L:V\to W$ is a linear transformation and $g\in\GL(V)$ and $h\in\GL(W)$ are such that $L\circ g=h\circ L$,
	then $L\circ g_\ast=h_\ast\circ L$ for $\ast=u,s$.
\end{prop}
\begin{prop}
	\begin{align*}
		(g\oplus h)_\ast&=g_\ast\oplus h_\ast\\
		(g\otimes h)_\ast&=g_\ast\otimes h_\ast
	\end{align*}
	for $\ast=s,u$.
\end{prop}
\begin{prop}
	If $\rho:\GL(V)\to \GL(W)$ is a representation, then $\rho(g)_\ast=\rho(g_\ast)$.
\end{prop}

\begin{defn}
	For $g\in G(k)$, $g$ is \textbf{semisimple (resp. unipotent)} if for all representations $\rho:G\to \GL(V)$,
	$\rho(g)$ is semisimple (resp. unipotent).
\end{defn}

\section{November 18th, 2019}
\subsection{Generalizing the Jordan decomposition}
The further generalization of the Jordan decomposition we saw last time is
\begin{thm}
	If $k$ is perfect and $G$ is a linear algebraic group over $k$, then for any $g\in G(k)$, there exist unique $g_s,g_u\in G(k)$ such that
	\begin{itemize}
		\item $g=g_sg_u=g_ug_s$
		\item $g_s$ is semisimple and $g_u$ is unipotent.
		\item $g_s$ and $g_u$ are polynomials in $g$.
	\end{itemize}
\end{thm}
Last time we proved this for $G=\GL(V)$. Then we saw a collection of propositions that gave us that semisimplicity and unipotence behave nicely
with respect to direct sums, tensors, and representations. This allowed us to define what semisimple and unipotent elements are in $G(k)$. This gave us
a result which I think I botched last time:
\begin{prop}
	If $G$ is a linear algebraic group and $V$ is a faithful finite dimensional representation ($G\hookrightarrow \GL(V)$ is a closed embedding), then every representation
	of $G$ is obtained from $V$ using sums, tensors, subrepresentations, and quotients.
\end{prop}
\begin{rmk}
	An aside: what does it mean for a subspace $W\subset V$ to be fixed by $g\in G$ ($V\in \lmod G$)? If $V$ is finite-dimensional, then one can just take the image of $G$ in $\GL(V)$
	and ask whether $gW\subseteq W$.

	More generally, since $g$ is a $k$-point, there is a maximal ideal $\frakm_g\lhd \Gamma(G)$ such that
	\[\Gamma(G)/\frakm_g\cong k\]
	with quotient map $q$. Then this gives us a map $\Gamma(G)\otimes V\xrightarrow{q\otimes\id} k\otimes V\cong V$ which, when composed with the coaction $V\to\Gamma(G)\otimes V$,
	gives us a map $m_g:V\to V$. Then we can say that $W$ is fixed by $g$ if $m_g(W)\subseteq W$.
\end{rmk}
\begin{prf}[in general]
	Since $G$ is linear algebraic, we can embed $G\hookrightarrow\GL_n$ for some $n$. This gives us a way to decompose $g=g_sg_u$, but the question is
	whether $g_s$ and $g_u$ actually lie in (the image of) $G$. For every representation $\rho:G\to \GL(W)$, we get a pair of elements $\rho(g)_s$ and $\rho(g)_u$, and due to the properties we saw before,
	these behave well.

	The idea here is that one representation we have at our disposal is the regular representation. In general this is large, but it recovers our group $G$ in some way and this will be key to our discussion.
	Let $V=\Gamma(\GL_n)$, the regular representation of $\GL_n$. Then we get maps
	\[G\hookrightarrow\GL_n\to\GL(V)\]
	and we have the ideal $I\subseteq \Gamma(\GL_n)$ defining $G$. Then the claim is that $g\in G(k)$ stabilizes $I$. This is in Waterhouse chapter 9.
\end{prf}
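A spot check of the proposition $\rho(g)_\ast=\rho(g_\ast)$ with sympy (my sketch; the transpose convention in \texttt{sym2} is chosen so that it really is a homomorphism):
\begin{verbatim}
# Sketch: rho(g)_s == rho(g_s) for rho = Sym^2 of the standard rep of GL_2.
from sympy import Matrix, symbols, expand, Poly

x, y = symbols('x y')
basis = [x**2, x*y, y**2]

def sym2(g):
    # matrix of f(x, y) |-> f(g^T (x, y)) on the basis x^2, xy, y^2
    xx = g[0, 0]*x + g[1, 0]*y
    yy = g[0, 1]*x + g[1, 1]*y
    cols = [Poly(expand(b.subs([(x, xx), (y, yy)], simultaneous=True)), x, y)
            for b in basis]
    return Matrix([[c.coeff_monomial(m) for c in cols] for m in basis])

def semisimple_part(g):
    P, J = g.jordan_form()
    return P * Matrix.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()

g = Matrix([[2, 1], [0, 2]])
assert semisimple_part(sym2(g)) == sym2(semisimple_part(g))
print("rho(g)_s == rho(g_s) for rho = Sym^2: verified")
\end{verbatim}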
\subsection{Group properties}
This enables us to make more classifications of groups:
\begin{defn}
	We say that $G$ is \textbf{diagonalizable} if there is a faithful representation $G\hookrightarrow\GL_n$ that factors through the
	diagonal matrices in $\GL_n$.
\end{defn}
\begin{rmk}
	This is equivalent to the earlier definition ($G\cong D(A)$), and to there being a closed embedding $G\hookrightarrow\Gm^n$.
\end{rmk}
\begin{defn}
	We say that $G$ is of \textbf{multiplicative type} if $G\times_k\bar k$ is diagonalizable.
\end{defn}

\begin{prop}
	If $G$ is a commutative linear algebraic group over $k$ and all elements $g\in G(k')$, for every field extension $k\to k'$, are semisimple, then $G$ is of multiplicative type.
\end{prop}
\begin{prf}
	We can assume that $k=\bar k$, and we know that all $g\in G(k)$ are diagonalizable. Then we use the fact that commuting diagonalizable
	matrices are simultaneously diagonalizable. The result follows.
\end{prf}
\begin{rmk}
	The proof of the linear algebra fact above comes from taking your commuting diagonalizable elements $g$ and $h$, showing that each
	$\lambda$-eigenspace of $g$ is preserved by $h$, and then diagonalizing $h$ on each of these eigenspaces.
\end{rmk}

\section{November 22, 2019}
From last time (which I missed): we had
\begin{prop}
	Let $G$ be a linear algebraic group over $k$. Then the following are equivalent:
	\begin{itemize}
		\item $G$ is diagonalizable (i.e. $G=D(A)$ for some finitely generated abelian group $A$)
		\item There exists a faithful representation $G\to \GL(V)\cong \GL_n$ which maps into the diagonal matrices.
		\item $\Gamma(G)$ is spanned by grouplike elements.
		\item (Discussed below) All $G$-representations are direct sums of one-dimensional ones.
	\end{itemize}
\end{prop}
Earlier this quarter we saw:
\begin{prop}
	If $G=D(A)$, then any $G$-representation $V$ can be decomposed as
	\[V=\oplus_{a\in A} V_a\]
	where
	\[V_a=\{v\in V\mid\sigma(v)=t^a\otimes v\}\]
\end{prop}
\begin{rmk}
	Here we used the fact (shown yesterday?) that the characters of $D(A)$ are in bijection with the elements of $A$.
\end{rmk}
Note that $k_a$ is the one-dimensional representation $k_a\to\Gamma(G)\otimes k_a$ via $1\mapsto t^a\otimes 1$. One can
quickly show that each $V_a$ is a direct sum of copies of $k_a$, so we get that every representation decomposes as a direct sum of one-dimensional subrepresentations. In particular,
the groups $G=D(A)$ are \textit{linearly reductive}.

\subsection{Other properties of diagonalizable groups}
First of all, these groups are commutative. Moreover, $\Hom_\AlgGrp(G,\Gm)\cong A$.

Next, we claim that $\Hom_\AlgGrp(G,\Ga)=0$. To see this, assume that $f:G\to \Ga$ is such a map and consider the
induced map $f^\ast:k[x]\to k[A]$. Writing $f^\ast(x)=\sum_a c_at^a$ and following through the commutative diagram for comultiplication, we get
\[\sum_a c_a\,t^a\otimes t^a=f^\ast(x)\otimes 1+1\otimes f^\ast(x)\]
because the $t^a$ are grouplike while $x$ is \textit{primitive} (that's the term from Hopf algebras); comparing coefficients forces all $c_a=0$.

\begin{defn}
	A linear algebraic group $G$ over $k$ is \textbf{of multiplicative type} if there exists a finite extension $k'/k$ such that $G_{k'}=G\times_{\Spec k}\Spec k'$ is diagonalizable.
\end{defn}
\begin{rmk}
	This is equivalent to $G_{\bar k}$ being diagonalizable. One direction is obvious.
For the other direction, suppose we have a faithful
	representation $\bar V$ of $\bar G=G_{\bar k}$; one then descends it to a finite extension using some descent ideas.
\end{rmk}

\begin{thm}
	Let $G$ be an algebraic group over $k$. Then the following are equivalent:
	\begin{itemize}
		\item $G$ is of multiplicative type.
		\item $G$ is commutative and $\Hom(G,\Ga)=0$.
		\item $G$ is commutative and $\Gamma(G)$ is co\'etale.
		\item $G_{k^s}$ is diagonalizable. That is, there exists $k'/k$ finite and separable such that $G_{k'}$ is diagonalizable.
	\end{itemize}
\end{thm}
What does that mean?
\begin{defn}
	Let $(C,\Delta,\varepsilon)$ be a coalgebra. Let $A=C^\vee$ be the $k$-linear dual. This gives us (using that $C^\vee\otimes C^\vee\hookrightarrow (C\otimes C)^\vee$)
	an algebra. Then
	\begin{itemize}
		\item If $C$ is finite dimensional, we say that $C$ is \textbf{co\'etale} if $A$ is commutative and $A=C^\vee$ is \'etale, that is,
		$A=\prod_{i=1}^n k_i'$ where the $k_i'/k$ are all finite separable extensions.
		\item If $C$ is infinite dimensional, then $C$ is \textbf{co\'etale} if it is a union of finite dimensional co\'etale subcoalgebras.
	\end{itemize}
\end{defn}
\begin{ex}
	$G=\Gm$, $\Gamma(G)=k[t]_t$. Let $C=k\langle t^d\rangle\subseteq \Gamma(G)$. Then $A=C^\vee$ is just $k$, so $C$ is co\'etale.
	Furthermore $C=k\langle t^{-d},\dots,t,\dots,t^d\rangle$ has $C^\vee\cong \prod k$, so it is co\'etale as well.

	Finally this gives us that $\Gamma(\Gm)$ is co\'etale, since it is a union of these things.
\end{ex}
As an aside, if $V\to \Gamma(G)\otimes V$ is a representation, then $\Im(V\otimes V^\vee\to \Gamma(G))$ is a coalgebra.

\section{November 25th, 2019}
Today we finish our discussion of group schemes of multiplicative type. There was one implication we skipped in Friday's discussion:
that if $G$ is commutative, then $\Hom(G,\Ga)=0$ implies that $\Gamma(G)$ is co\'etale.
\subsection{Continuing from last time}
\begin{prf}[cont.]
	Assume that $k=\bar k$ and let $C\subseteq\Gamma(G)$ be a subcoalgebra of finite dimension. Let $A=C^\vee$ and $\langle\cdot,\cdot\rangle$ be
	the natural pairing $C\times A\to k$. If $A$ is not \'etale, then since $A$ is finite dimensional,
	\[A=A_1\times\cdots\times A_n\]
	where the $A_i$ are local Artinian $k$-algebras.

	So there exists a surjection $\pi:A\to k[\varepsilon]/\varepsilon^2\cong k\oplus k\varepsilon$ that sends
	\[x\mapsto \langle x,c\rangle+\varepsilon\langle x,d\rangle\]
	for some $c,d\in C$.
	Then a subclaim is to prove that
	\begin{itemize}
		\item $\Delta(c)=c\otimes c$
		\item $\Delta(d)=c\otimes d+d\otimes c$
		\item $\varepsilon(c)=1$
	\end{itemize}
	Then by also showing that $c$ is invertible, one can compute
	\[\Delta(dc^{-1})=1\otimes c^{-1}d+c^{-1}d\otimes 1\]
	which then gives us a (nontrivial!) map $G\to \Ga$, contradicting the assumption.
\end{prf}
\begin{ex}[Non-co\'etale example]
	Consider $\Ga=\Spec k[x]$ with $\Delta(x)=x\otimes 1+1\otimes x$. Then consider the subcoalgebra $C$ spanned by $\langle 1,\dots,x^{n-1}\rangle$
	and the dual algebra $A=C^\vee$ spanned by the dual basis $e_i$. One can compute that
	\[e_i\cdot e_j=\binom{i+j}{i}e_{i+j}\]
	if $i+j\le n-1$ and it is zero otherwise. But then in particular $e_{n-1}^2=0$, so $A$ isn't even reduced!

	One could ask whether this still holds for any choice of subcoalgebra, but we aren't doing that.
\end{ex}
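From the same structure constants $e_ie_j=\binom{i+j}{i}e_{i+j}$ one gets $e_1^m=m!\,e_m$, so in characteristic $p$ we have $e_1^p=p!\,e_p=0$. A quick Python check of both claims (my sketch; the truncation size and the prime are arbitrary):
\begin{verbatim}
# Sketch: dual of <1, x, ..., x^{n-1}> in k[x], with e_i e_j = C(i+j, i) e_{i+j}.
from math import comb, factorial

n, p = 6, 5

def mult(u, v):          # u, v: coefficient lists of length n in the e_i basis
    w = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if ui and vj and i + j < n:
                w[i + j] += ui * vj * comb(i + j, i)
    return w

e = lambda i: [1 if k == i else 0 for k in range(n)]
assert mult(e(n - 1), e(n - 1)) == [0] * n    # e_{n-1}^2 = 0: A not reduced

power = e(1)
for _ in range(p - 1):
    power = mult(power, e(1))                 # e_1^p = p! e_p
assert power[p] == factorial(p) and factorial(p) % p == 0   # = 0 in char p
\end{verbatim}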
\n\t\n\tOne could ask whether this still holds for any choice of subcoalgebra, but we aren't doing that.\n\\end{ex}\n\\begin{cor}\n\tLet $G$ be a linear algebraic group over $k$. Assume that $G$ is commutative (and maybe smooth). Then $G$ is of multiplicative type \n\tif and only if $G(\\bar k)$ consists purely of semisimple elements.\n\\end{cor}\n\\begin{cor}\n\tLet \n\t\\[1\\to G'\\to G\\to G''\\to 1\\]\n\tbe a short exact sequence. If $G$ is commutative, then $G$ is of multiplciative type if and only if both $G'$ and $G''$ are.\n\\end{cor}\n\\begin{rmk}\n\tThat subgroups and quotients of multiplicative type group schemes are multiplicative type is not hard to see. To see this, apply $\\Hom(-,\\Ga)$ to the sequence and notice that quotients and subgroups \n\tof commutative groups are commutative.\n\t\n\tThe nontrivial direction is that the collection of multiplicative type groups is closed under extension. \n\\end{rmk}\n\n\\section{December 2nd, 2019}\nWe are going to continue our discussion on Jordan decomposition by visiting the idea of \n\\subsection{Unipotent group schemes}\nRecall that if $G$ is a linear algebraic group over $k$ and $g\\in G(k)$, then \n\\begin{itemize}\n\t\\item $g$ is semisimple if $\\rho(g)$ is for all representations $\\rho:G\\to \\GL(V)$ ($\\rho(g)\\otimes \\bar k$ is diagonalizable)\n\t\\item $g$ is unipotent if $\\rho(g)$ is for all representations $\\rho$ ($\\rho(g)-1$ is nilpotent).\n\\end{itemize}\n\nWe also said that, if $k$ is perfect, $g=g_sg_u=g_ug_s$ and this is a unique decomposition (the Jordan decomposition).\n\nWhen we are talking about an arbitrary group scheme, we no longer use ``semisimple'' but rather ``multiplicative type'': \nIf $G\\subseteq \\Gm^n\\subseteq\\GL_n$, then we say that $G$ is \\textbf{diagonaliable}. If $G_{\\bar k}$ is diagonalizable, then \nwe say $G$ is \\textbf{of multiplicative type.}\n\nRecall that when $G$ is smooth, we saw that $G$ was of multiplicative type if and only if $G$ is commutative and $G(\\bar k)$ consists entirely of semisimple objects.\n\nSome questions we want to answer: Is there a characterization of groups containing unipotent elements? Perhaps more interestingly:\nDo we have structure results for factoring algebraic groups analogous to the Jordan decomposition?\n\n\\begin{thm}[Kolchin fixed point theorem]\n\tLet $V$ be a finite dimensional vector space over $k$ and let $G\\subseteq \\GL(V)$ be an abstract subgroup containing unipotent elements.\n\tThen there exists a nonzero $v\\in V$ fixed by all $g\\in G$.\n\\end{thm}\n\\begin{prf}\n\tBegin by setting \n\t\\[V^G=\\{v\\in V|\\forall g\\in G, gv=v\\}.\\]\n\tIt is easy to see that if $k'/k$ is a field extension, then \n\t\\[(V\\otimes_k k')^G=V^G\\otimes_k k'\\]\n\tthus we can assume that $k=\\bar k$ (since we want to show that $V^G$ is nonempty).\n\n\tWe can also assume that $V$ is a simple $G$-module, since it suffices to find a fixed point in a submodule.\n\tThis reduces the proof to showing that $V^G=V$.\n\n\tThere are two different proofs of this: first, we want to show that for all $g\\in G$, that $g-I_V=0$. Suppose there were \n\tan element $g'$ not equal to the identity. We know that for all $g\\in G$, (since $g$ is unipotent):\n\t\\[\\tr (g)=\\dim V\\quad\\Rightarrow\\quad \\tr(gg')=\\dim V\\quad\\Rightarrow\\quad \\tr(g(g'-I_V))=0\\]\n\t\n\tLet $E=\\{f:V\\to V|\\tr(gf)=0,\\forall g\\in G\\}$, which contains $g'-I_V\\ne 0$. 
$E$ is $G$-invariant under composition.\n\tThen let $g'-I_V\\in X\\subseteq E$ where $X$ is a simple $G$-module containing this element. THen for any $v\\in V$, define \n\t\\[\\varphi_v:X\\to V\\quad\\text{via}\\quad f\\mapsto f(v)\\]\n\tbecause $g'\\ne I_V$, there exists a $v\\in V$ such that $g'v\\ne v$. Thus for this specific $v$, $\\varphi_v$ is an isomorphism!\n\n\tIn particular, this means there exists an $f\\in X$ such that $f(v)=v$. THen take \n\t\\[A=\\{L:V\\to V| L\\text{ linear, commuting with $G$}\\}\\]\n\twhich is a division ring over $k=\\bar k$, which forces $A=k$.\n\n\tBut then for all $w\\in V$, we get the composition \n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\tV\\ar[r,\"\\phi_v^{-1}\"]\\ar[rr,bend right] & X\\ar[r,\"\\phi_w\"] & V\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tand then we are supposed to show that our choice of $f$ and $v$ above send all other $w\\in V$ to the span of $v$,\n\tso $f$ has trace one. Something is fishy in this proof, but it's in Waterhouse.\n\\end{prf}\n\\begin{prf}[Less tricky, more fancy]\n\tBy Wedderburn, if $A$ is a ring and $M$ is a faithful left $A$-module, and if $B=\\End_A(M)$, then if $M$ is simple and finitely generated over $B$, \n\tthen \n\t\\[\\End_B(M)=A.\\]\n\tThe more traditional setting here is when $A=k$, $M=k^n$, and $B=M_n(k)$. Then $\\End_{M_n(k)}(k^n)=k.$\n\n\tNow let $A\\subseteq \\End_k(V)$ generated by $G$. Let $M=V$ (which is assumed to be simple as above). Then by letting $B=\\End_A(V)=k$, so by the above \n\tresult, $\\End_k(V)=A$ so $A$ is simple! Then let $I=\\langle g-I_V:g\\in G\\rangle$, a two-sided ideal of $A$, so is either zero (in which case we are done) or is all of $A$.\n\\end{prf}\n\n\\section{December 4th, 2019}\nRecall the Kolchin FPT from last time. \n\\subsection{More unipotent stuff}\n\\begin{cor}\n\tIf $V$ is a finite dimensional vector space over $k$ and $G\\to\\GL(V)$ is an abstract group homomorphism whose image consists of unipotent elements, then \n\tthere exists a basis of $V$ such that $G\\subseteq U_n\\subseteq\\GL_n$ where $U_n$ is the group of upper-triangular matrices with ones on the diagonal.\n\\end{cor}\n\\begin{cor}\n\tIf $G$ is asmooth linear algebraic group over $k=\\bar k$ such that $G(k)$ consists of unipotents, then $G\\subseteq U_n$ as a closed subgroup.\n\\end{cor}\nExamples include $\\Ga$ (included as $(\\begin{smallmatrix}\n\t1&a\\\\0&1\n\\end{smallmatrix})$) which is unipotent. In characteristic $p$, $\\bbZ/p$ is unipotent. There is also some weird $\\alpha_p$ stuff.\n\\begin{defn}\n\tA linear algebraic group $G$ over $k$ is unipotent if for all nonzero $G$-representations, $V^G\\ne 0$.\n\\end{defn}\n\\begin{rmk}\n\tThis is equivalent if we require $V$ to be finite dimensional over $k$.\n\\end{rmk}\n\\begin{prop}\n\t$G$ is unipotent if and only if for all finite dimensional representations $\\rho:G\\to \\GL(V)$, there exists a basis \n\tsuch that $\\rho(G)\\subseteq U_n$.\n\\end{prop}\n\\begin{prf}\n\tChoose a filtration of $V$:\n\t\\[0=V_0\\subset V_1\\subset\\cdots\\subset V_n=V\\]\n\twith all quotients $W_i\\eqdef V_i/V_{i-1}$ simple. But then since $W_i^G\\ne 0$ by the definition of unipotence, $W_i^G=W_i$.\n\tSo then $W_i$ is the trivial representation and apparently that finishes the forward direction.\n\n\tThe reverse direction is easy (pick a basis and $v_1$ is fixed by $\\rho(G)$).\n\\end{prf}\n\\begin{thm}\n\tLet $G$ be a linear algebraic group over $k$. 
Then $G$ is unipotent if and only if $G$ is closed in $U_n$ for some $n$.\n\\end{thm}\n\\begin{lem}\n\tIf $G$ is unipotent, so is any subgroup, quotient, or extension.\n\\end{lem}\n\\subsection{Induced representation}\nLet $H\\to G$ be a linear algebraic group homomorphism. Then we have a functor \n\\[\\operatorname{Ind}_H^G:\\Rep(H)\\to\\Rep(G)\\]\nthat is defined via \n\\[\\operatorname{Ind}_H^G(V)=\\Mor_{\\Sch/k}^H(G,\\bbA(V))=\\{G\\to \\bbA(V)|\\text{$H$ invariant}\\}=(\\Gamma(G)\\otimes_k V)^H\\]\nNotice that the inclusion functor $f:\\mathbf{B}H\\to\\mathbf{B}G$ gives us two maps, $f_\\ast$ (induction) and $f^\\ast$ (forgetful).\n\n\\part{Quarter 2: Representation Theory}\n\\section{January 17th, 2020}\n\\subsection{Recap}\nLet $T\\subseteq B\\subseteq G$ be a maximal torus, Borel, and semisimple algebraic group. Then we talked about:\n\\begin{itemize}\n\t\\item The infinitesimal theory by studying $\\Lie G$\n\t\\item Borel subgroups, parabolic subgroups, and the flag variety $G/B$.\n\t\\item We also saw the categorization of split connected semisimple algebraic groups according to their root data:\n\tThis is either given by $(X,R,X^\\vee,R^\\vee)$ of a group $X$ and a root system $R$ and their duals or equivalently a \n\tweight lattice $\\Lambda$ and root latice $\\Lambda_r=\\bbZ R$. Then we can define \n\t\\[Z/\\bbZ R\\cong \\Lambda/\\Lambda_r\\]\n\tto be the \\textbf{fundamental group.}\n\t\\item Representation theory: $\\Rep G$ is what is called a \\textbf{highest weight category.} Let $\\Lambda=X$. Then \n\twe have the \\textbf{dominant roots}\n\t\\[\\Lambda_+=\\{\\lambda\\in\\Lambda|\\langle\\lambda,\\alpha^\\vee\\rangle\\ge 0,\\alpha\\in R^+\\}\\]\n\tand for every dominant root $\\lambda$, we have the \\textbf{(co)standard} modules $\\Delta(\\lambda)$ and $\\nabla(\\lambda)$.\n\\end{itemize}\n\\begin{defn}\n\tLet $\\lambda\\in\\Lambda=\\Hom(T,\\Gm)$. Define the $T$-module $k\t_\\lambda$ by \n\t\\[t\\cdot 1\\eqdef \\lambda(t)1.\\]\n\tThen $T(k)=(k^\\times)^\\Gamma$ gives us $\\lambda(t)\\in k^\\times$.\n\\end{defn}\n\n\\begin{ex}\n\tLet $G=\\GL_n$ and $B$ the upper triangular matrices and $T$ the diagonal ones. Then the unipotent matrices $\\calU$ of upper triangulars with ones on the diagonal \n\tinclude into $B$ and we have a section $T\\to B$ giving us $B=T\\rtimes U$.\n\n\tThen we consider $k_\\lambda$ as a representation of $B$ by having $\\calU$ act trivially. Then we define \n\t\\[\\nabla(\\lambda)=\\Ind_B^G k_\\lambda\\]\n\twhere $\\Ind_B^G$ is the right adjoint to the restriction functor $\\Res^G_B$.\n\\end{ex}\n\\begin{rmk}\n\tThe induction functor always exists, but the \\textit{left} adjoint to $\\Res$ doesn't. When it does, it is called \\textbf{coinduction.}\n\\end{rmk}\n\n\\subsection{The Associated Sheaf}\nGiven $H\\subseteq G$, there is a functor\n\\[\\calF:\\Rep H\\to \\Sch G/H.\\]\nthis leads to a fact:\n\\begin{prop}\n\t\\[\\Ind_B^G k_\\lambda=\\nabla(\\lambda)\\cong H^0(G/B,\\calF_{G/B}(k_\\lambda))\\]\n\\end{prop}\n\nThe standard module $\\Delta(\\lambda)$ is essentially the dual (after applying an element of the Weyl group of longest length).\n\\begin{thm}\n\tLet $\\lambda\\in\\Lambda_+$. Then $\\nabla(\\lambda)$ has a simple socle, denoted $L(\\lambda)$. Furthermore, $\\{L(\\lambda)|\\lambda\\in\\Lambda_+\\}$ is a complete set of $G$-modules.\n\\end{thm}\n\n\\subsection{Other things to focus on:}\nLook at Humphreys' \\textit{Linear Algebraic Groups}. 
There is an appendix on root systems that is \n\n\\subsection{Examples}\nLet $k=\\bar k$ and $G=G(\\bar k)$. For type $A_{n+1}$, we have $\\SL_n$ as well as $\\operatorname{PGL}_n$. In the former case, $\\Lambda/\\Lambda_r=\\bbZ/n\\bbZ$ and in the latter this quotient is trivial.\n\nFor type $C_n$, we get $\\operatorname{Sp}_{2n}$. Let $\\Sigma$ be the matrix given by $-1$s and 1s on the antidiagonal and zeros elsewhere (negative in the ``third quadrant.'')\nThen the matrices in this are those preserving the associated form: that is \n\\[A^T\\Sigma A=\\Sigma\\]\n\nFor type $B_n$, we look at the $2n+1$ square matrix $\\Sigma$ that is block diagonal with blocks $(1)$ and the previous matrix. Then $\\operatorname{SO}(2n+1,k)$ are the matrices preserving $\\Sigma$.\nThe type $D_n$ matrices are the ones preserving the matrix given by (positive) ones on the antidiagonal.\n\n\\subsection{More Generally...}\nLet $A$ be a separable associative unital algebra over $k$. (That is, $A_{\\bar k}$ is a product of simple algebras). Fix an involution $\\sigma:A\\to A$ \n($\\sigma^2=\\id$). There is always an associated bilinear form for these involutions, which can be symmetric or skew-symmetric. It either preserves the base field or else \nit descends to an involution of this field. \n\n\\begin{defn}\n\t$\\GL_{1,A}$ is a group scheme whose $R$-points ($R$ a ring) are given by \n\t\\[\\GL_{1,A}(R)=\\left((A\\otimes_k R)^\\times\\right)^k\\]\n\twhich contains a subgroup scheme \n\t\\[\\operatorname{Iso}(A,\\sigma)(R)\\eqdef \\{a\\in(A\\otimes_k R)^\\times|a\\cdot\\sigma_R(a)=1\\}\\]\n\n\tWe also have $\\Aut(A,\\sigma)\\subset \\Aut(A)$ defined as \n\t\\[\\Aut(A,\\sigma)(R)\\eqdef\\{\\alpha\\in\\Aut_R(A_R)|\\alpha\\circ\\sigma_R=\\sigma_R\\circ\\alpha\\}\\]\n\n\tAnd finally,\n\t\\[\\operatorname{Sim}(A,\\sigma)\\eqdef \\{a\\in A_R^\\times|a\\cdot\\sigma_R(a)\\in k_R^\\times\\}\\]\n\\end{defn}\n\nNow if $A$ is a central simple algebra and $\\sigma$ is a symplectic form, then \n\\begin{itemize}\n\t\\item $\\operatorname{Sp}(A,\\sigma)=\\operatorname{Iso}(A,\\sigma)$\n\t\\item $\\operatorname{GSp}(A,\\sigma)=\\operatorname{Sim}(A,\\sigma)$\n\t\\item $\\operatorname{PGSp}(A,\\sigma)=\\Aut(A,\\sigma)$\n\\end{itemize}\n\nWant to know more? Look at \\textit{The Book of Involutions.}\n\n\\section{January 22nd, 2020}\nNow we begin the first part of this class: Infinitesimal theory, derivationts, differentails, and Lie algebras.\n\n\\subsection{Tangent Spaces}\nIf we have a variety $V$, we can define the dual space of $V$ at $x$ by considering \n\\[(m_{\\O_x}/m^2_{\\O_x})^\\ast\\]\nthen we can talk about the cotangent space $m/m^2$.\n\nMore formally, set $k$ to be a field (note in Jantzen he uses a commutative ring). Let $A$ be a finitely generated \\textbf{commutative} $k$-algebra,\nso there exists a surjection $k[x_1,\\dots,x_n]\\to A$. \n\\begin{defn}\n\tLet $M\\in\\lmod A$. Then a map $D:A\\to M$ is a \\textbf{derivation} if \n\t\\[D(ab)=aD(b)=bD(a).\\]\n\tWe say that it is a \\textbf{$k$-derivation} or $k$-linear if $D(k)=0$. \n\\end{defn}\n\nAs an example, let $B=k[x_1,\\dots,x_n]$ and let $\\Omega_B$ be the free $B$ module of rank $n$, generated by $dx_1,\\dots,dx_n$.\nThen the map $d_B:B\\to\\Omega_B$ sending $x_i\\mapsto dx_i$ and $1\\mapsto 0$ is a derivation.\n\\begin{lem}\n\tLet $M$ be a $B$ module. 
Then \n\t\\[\\operatorname{Der}_k(B,M)\\stackrel{\\sim}{\\to}\\Hom_B(\\Omega_B,M).\\]\n\\end{lem}\n\\begin{prf}\n\tGiven a derivation $D:B\\to M$, we can define a map $\\varphi:\\Omega_B\\to M$ via \n\t\\[\\varphi(dx_i)=D(x_i)\\]\n\tand extending $B$-linearly. One has to prove that this defines an algebra morphism. \n\\end{prf}\n\\begin{rmk}\n\tAnother way to phrase this is $\\Omega_B$ is a universal object in $\\lmod B$ for derivations.\n\\end{rmk}\n\\begin{thm}\n\tLet $A$ be a finitely generated commutative $k$-algebra. Then there exists a (universal) $A$-module $\\Omega_A$\n\tand a (universal) derivation $d_A:A\\to \\Omega_A$ such that for any $A$-module $M$,\n\t\\[\\operatorname{Der}_k(A,M)\\stackrel{\\sim}{\\to}\\Hom_A(\\Omega_A,M).\\]\n\\end{thm}\n\\begin{rmk}\n\tThe universality of the pair $(\\Omega_A,d_A)$ is saying that it is unique up to (unique) isomorphism.\n\\end{rmk}\n\\begin{prf}\n\tTo see this, we use the last result as well as the fact that \n\t\\[\\Omega_A=\\frac{\\Omega_B}{I\\Omega_B+B(d_B(I))}.\\]\n\n\tIf $I=(f_1,\\dots,f_m)$ then $\\Omega_A$ is an $A$-module on generators, then $\\Omega_A$ is an $A$-module on generators \n\t$dx_1,\\dots,dx_n$ subject to \n\t\\[\\sum_{i=1}^n\\frac{\\partial f_j}{\\partial x_i}dx_i=0\\]\n\tfor all $1\\le i\\le m$.\n\n\tThe next part of the proof consists of showing that given a derivation $\\tilde D:B\\twoheadrightarrow A\\xrightarrow{D}M$ and the associated map $\\tilde\\varphi:\\Omega_B\\to M$, \n\tthe latter factors through $\\Omega_A$. SHow it vanishes on both part of the sum above.\n\\end{prf}\n\\begin{rmk}\n\tNote: we just proved that the functor $\\Der_k:\\lmod A\\to\\Vect_k$ is representable! The uniqueness of the representing object is then a consequence of Yoneda lemma!\n\\end{rmk}\n\n\\begin{ex}\n\tLet $A=k[x,y]/(x^2+y^2-1)$. Then $\\Omega_A$ is generated by $d_x,d_y$ with $2xd_x+2yd_y=0$. When $\\ch k=2$,\n\twe get that $\\Omega_A$ is free of rank 2.\n\n\tThis defines a group scheme (think matrices in $\\SL_2\\subseteq\\GL_2$) and this gives us a surjection of $k[\\SL_2]\\to A$.\n\n\tWhen $\\ch k\\ne 2$, we get that $\\Omega_A$ is a free $A$-module of rank 1, generated by $dt=xdy-ydx$. I have no idea what this was,\n\tbut she wrote $dy=-xdt.$\n\\end{ex}\n\\begin{rmk}\n\tNotice that the reason we lost smoothness in characteristic 2 was that we lost reducedness: now $(x+y-1)^2=0$. There \n\tis a general theorem (see deep in Waterhouse) that an affine group scheme is smooth only if its Hopf algebra is reduced.\n\\end{rmk}\n\\subsection{Derivations on Hopf algebras.}\nRecall that if $A$ is a Hopf algebra, then $I=\\ker\\varepsilon$ is the \\textbf{augmentation ideal.}\n\\begin{thm}\n\tLet $A$ be a finitely generated commutative Hopf algebra over $k$. Then $\\Omega_A$ is a free $A$-module.\n\n\tIn fact, \\[\\Omega_A\\cong A\\otimes_k I/I^2\\]. The map $d_A:A\\to A\\otimes I/I^2$ is given by \n\t\\[a\\mapsto a'\\otimes \\pi(a'')\\]\n\twhere the primes indicate Sweedler notation and $\\pi:A\\to I/I^2$ is a linear map (not a homomorphism!) that takes a (non-unique) \n\tsplitting $A=I\\oplus k\\cdot 1$ and then project on the first coordinate and take a quotient by $I^2$. 
It remains to prove that, once we pass to the \n\tquotient, the choice of direct sum complement is irrelevant.\n\\end{thm}\n\n\\begin{lem}\n\tWe say that $D:A\\to M$ is an $\\varepsilon$-derivation (for $\\varepsilon:A\\to k$) if \n\t\\[D(ab)=\\varepsilon(a)D(b)+\\varepsilon(b)D(a).\\]\n\tThe universal module of $\\varepsilon$-derivations is \n\t\\[\\Omega_A\\otimes_\\varepsilon k=\\Omega_A/I\\Omega_A=I/I^2\\]\n\\end{lem}\n\n\\section{January 24th, 2020}\nLet $A$ be a finitely generated commutative Hopf algebra over $k$. Often we can drop finite generation \nbecause any Hopf algebra is a limit of finitely generated ones (because of finite global dimension), so \nthese arguments still hold if they can pass to the limit.\n\nLet $\\varepsilon:A\\to k$ be the counit and $I=\\ker\\varepsilon$ be the augmentation ideal.\n\\begin{rmk}\n\tWe have this duality between algebra and geometry, so the counit can be seen as an element of $G_A(k)=\\Hom(A,k)$,\n\tspecifically the unit element of the group. :)\n\\end{rmk}\n\n\\begin{lem}\n\tThere is a series of isomorphisms\n\t\\[\\Der_\\varepsilon(A,M)\\simeq \\Hom_k(I/I^2,M)\\simeq\\Hom_A(A\\otimes_kI/I^2,M)\\]\n\\end{lem}\nThis can be proved relatively simply. It is a good exercise.\n\\begin{thm}\n\t\\begin{itemize}\n\t\t\\item $\\Omega_A\\simeq A\\otimes_k I/I^2$\n\t\t\\item $d_A:A\\to \\Omega_A=A\\otimes I/I^2$\n\t\\end{itemize}\n\twhere the map $d_A$ is the one sending $a\\mapsto\\sum a'\\otimes \\pi(a'')$\n\\end{thm}\n\\begin{prf}\n\tRecall that $\\pi:A\\to I/I^2$ is the map that sends $A\\cong k\\cdot 1\\oplus I$ to its projection to $I$ modulo $I^2$.\n\n\tLet $M$ be an $A$ module. Give $A\\oplus M$ an $A$-algebra structure via \n\t\\[(a,m)\\cdot(a',m')=(aa',am'+a'm).\\]\n\tThen let $G_A(-)=\\Hom(A,0)$ and notice \n\t\\[G_A(A\\oplus M)=\\Hom_\\Alg(A,A\\oplus M)\\]\n\n\tThen we claim that \n\t\\[\\Hom(A,A\\oplus M)=\\{(\\varphi,D)|\\varphi\\in\\Hom(A,A),D\\in \\Der_\\varphi(A,M)\\}\\]\n\tthen notice that \n\t\\begin{align*}\n\t\t(\\varphi,D)(ab) &= (\\varphi(a)\\varphi(b),\\varphi(a)D(b)+D(a)\\varphi(b))\\\\\n\t\t&=(\\varphi(a),D(a))\\cdot(\\varphi(b),D(b)).\n\t\\end{align*}\n\tso this is indeed an algebra morphism. The other direction is clear, so the bijetion holds. \n\n\tNow consider the map $p:A\\oplus M\\to A$ and the induced map\n\t\\[\\ker p_\\ast\\hookrightarrow G_A(A\\oplus M)\\xrightarrow{G_A(A)}\\]\n\tbut the projection map admits a section, so $G_A(A\\oplus M)$ is a semidirect product. What is $\\ker p_\\ast$, though?\n\n\t\\[\\ker p_\\ast=\\{(\\varphi,D)|p_\\ast(\\varphi,D)=\\varphi=\\id\\in G_A(A)\\}\\]\n\tso in particular $\\varphi=\\varepsilon$. Thus $\\ker p_\\ast\\cong \\Der_\\varepsilon(A,M)$,\n\thence $G_A(A\\oplus M)\\simeq \\Der_\\varepsilon(A,M)\\rtimes G_A(A)$.\n\n\tFor all $D'\\in \\Der_\\varepsilon(A,M)$, the element $(D',\\id_A)\\in\\Der_\\varepsilon(A,M)\\rtimes G_A(A)=G_A(A\\oplus M)$\n\tand this gives us (why?) 
a ``hidden'' correspondence between the $\\varepsilon$ derivations and $k$-linear derivations.\n\n\tTo find this, take $D'\\in\\Der_\\varepsilon(A,M)$ and consider the element $(\\varepsilon, D')\\in\\ker p_\\ast$.\n\tMultiplying, we get (recalling that multiplication of functions is given by \n\t\\[(\\varphi\\cdot\\varphi')(x)=m\\circ (\\varphi\\otimes \\varphi')\\circ \\Delta(x)\\]\n\tin a Hopf algebra)\n\t\\[(\\varepsilon, D')\\cdot(\\id_A,0)=(\\varepsilon\\cdot\\id_A,a\\mapsto \\sum a'D'(a''))\\]\n\n\tNow use the map $d_A^\\varepsilon:A\\to A\\otimes I/I^2$ via $a\\mapsto 1\\otimes \\pi(a)$ and compute $\\id_A\\cdot d_A^\\varepsilon$, which gives us our map \n\t$a\\mapsto \\sum a'\\otimes \\pi(a'')$ which proves things apparently but some logic is missing.\n\\end{prf}\n\n\\subsection{Our next goal}\nWe want to show that, in characteristic zero, any algebraic group scheme is reduced (and thus smooth).\n\\begin{lem}\n\tLet $X-1,\\dots,x_r$ be a basis for $I/I^2$. Then $\\{x_i^{m_i}\\cdots x_r^{m_r}|\\sum m_i=n\\}$\n\tis a basis for $I^n/I^{n+1}$.\n\\end{lem}\n\\begin{rmk}\n\tNotice that this definitely doesn't work if our algebra is not a Hopf algebra! \n\\end{rmk}\n\\begin{prf}\n\tEssentially this boils down to the existsnce of derivations with special properties: there is a $D_i:A\\to A$ for all $1\\le i\\le r$\n\tsuch that \n\t\\[D_i(x_j)=\\delta_{ij}\\pmod{I},\\]\n\tessentially giving us differential operators that distinguish variables.\n\tTo see why the result follows, notice that if this claim is true then the Leibniz rule and induction gets us \n\t\\[D_r^{m_r}D_{r-1}^{m_{r-1}}\\cdots D_1^1(x_1^{m_1}\\cdots x_r^{m_r})=m_1!\\cdots m_r!\\pmod{I}\\]\n\twhich implies the statement.\n\n\tTo generate these operators, we will use the last result. Recall that $\\Der_\\varepsilon(A,A)=\\Hom_k(I/I^2,A)$. Then consider\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\begin{tikzcd}\n\t\t\tA\\ar[rrd]\\ar[d,\"\\pi\"] &  &\\\\\n\t\t\tI/I^2\\ar[r,\"d_i\"] & k \\ar[r,hook] & A\n\t\t\\end{tikzcd}\n\t\\end{figure}\n\t\n\t\\noindent where $d_i(x_j)=\\delta_{ij}$. But notice that with such a $d_i$ we get a $k$-derivation $D_i:A\\to A$\n\tdefined by \n\t\\[D_i(a)=\\sum a' d_i(\\pi(a''))\\]\n\tand \n\t\\[\\varepsilon D_i(a)=\\sum \\varepsilon(a') d_i(\\pi(a''))=d_i(\\pi(\\sum\\varepsilon(a')a''))=d_i(\\pi(a))=d_i(\\pi(x_j))=\\delta_{ij}\\]\n\\end{prf}\n\n\\section{January 27. 2020}\nWe'll pick back up with proving that a finitely generated Hopf algebra in characterisitc zero is reduced.\n\n\\subsection{The result}\n\\begin{thm}\n\tLet $A$ be a finitely generated Hopf algebra over $k$ with $\\ch k=0$. THen $A$ is reduced.\n\\end{thm}\n\\begin{prf}\n\tLet $Y\\in I$ be a nilpotent with $y^m=0$. We want to show that $y=0$. It sufficies to show this for $m=2$.\n\tWe claim that $y\\in \\cap_{n\\ge 1}I^n$. To see this, suppose that $y\\in I^n\\setminus I^{n+1}$.\n\tthen we can write \n\t\\[y=\\sum_{\\sum m_i=n}a_{\\underline m}x^{\\underline m}+I^{n+1}\\]\n\tby the previous lemma. Then consider \n\t\\[y^2=p(x_1,\\dots,x_\\Gamma)+I^{2n+1}\\]\n\twhere $p$ is some monomial of degree $2n$ with some nonzero coefficient. Hence $y^2$ is nonzero mod $I^{2n+1}$, a contradiction!\n\tHence $y\\in\\cap I^n$.\n\n\tWe actually need that $k=\\bar k$. Let $g:A\\to k$ be a point in $G_A$. 
Then define the translation map\n\t\\[T_g:A\\xrightarrow{\\Delta}A\\otimes A\\xrightarrow{g\\otimes \\id}k\\otimes A\\simeq A\\]\n\twhich is a map of algebras and is, in fact, invertible.\n\n\tDefine $\\frakm_g=\\ker g$ (think this is the maximal corresponding to the point $g$ in the spectrum). Then we claim that $T_g(I)=\\frakm_g$.\n\tIf $y\\in I$ then $y\\in \\frakm_G$ (since it's nilpotent it lies in every maximal). Thus \n\tby the previous claim $y\\in\\cap_{n\\ge 1} \\frakm_g^n$ and by the Krull intersection theorem, $y=0$ in $A_{\\frakm_g}$ for all $\\frakm_g$,\n\twhence $y=0.$\n\\end{prf}\n\n\\subsection{Properties of \\texorpdfstring{$\\Omega_A$}{Omega A}}\n\\begin{itemize}\n\t\\item $\\Omega_{A\\times B}\\cong\\Omega_A\\times\\Omega_B$\n\t\\item Let $S$ be a multiplicative system in $A$. Then  \n\t\\[\\Omega_{S^{-1}A}\\cong \\Omega_A\\otimes_A S^{-1}A\\]\n\tand in particular if $A$ is an integral domain, let $K=\\operatorname{Frac}(A).$ Then \n\t\\[\\Omega_{K/k}\\cong \\Omega_A\\otimes_A K.\\]\n\t\\item Let $L/k$ be a finitely generated seperable field extension. That is we have \n\t\\[L\\supset E=k(x_1,\\dots,x_\\Gamma)\\supset k\\]\n\twhere $L/E$ is finite. Then $\\dim_L\\Omega_{L/k}=\\operatorname{trdeg}_k L$ and $dx_1,\\dots,dx_\\Gamma$ forma a basis for $\\Omega_{L/k}$.\n\t\\item Let $G$ be an algebraic group scheme. Maybe assume $k=\\bar k$. Assume that $k[G]$ is an integral domain. Let $K=\\operatorname{Frac}(k[G])$. Then \n\t\\[\\rank\\Omega_{k[G]}=\\operatorname{trdeg}_k K\\]\n\\end{itemize}\nThe last item merits some discussion:\n\\begin{prf}\n\tBy the third item above, we get \n\t\\[\\Omega_{K/k}=\\Omega_{k[G]}\\otimes_{k[G]}K\\]\n\tso \n\t\\[\\operatorname{trdeg}_k K=\\dim_K\\Omega_{K/k}=\\rank \\Omega_{k[G]}\\]\n\\end{prf}\n\\begin{defn}\n\tLet $G$ be an algebraic group scheme over $k$. Then $G$ is smooth if \n\t\\[\\dim G=\\Omega_{k[G]}\\]\n\twhere $\\dim G=\\dim_{\\text{Krull}} k[G]$.\n\\end{defn}\n\\begin{thm}\\label{thm:smoothgrps}\n\tLet $G$ be an algebraic group scheme over $k$. Then if\n\t\\[\\bar k[G]=k[G]\\otimes_k \\bar k\\]\n\tis reduced, $G$ is smooth.\n\\end{thm}\n\\begin{cor}\n\tIn characteristic zero, all algebraic group schemes are smooth.\n\\end{cor}\n\\begin{rmk}\n\tSmoothness is independent of extending scalars (as the rank and dimension in the definition are), but \n\tyou do need to extend scalars enough so that the nilpotents aren't ``hiding''. Thus it suffices to only extend to \n\tthe seperable closure of $k$ or any perfect field $L/k$.\n\\end{rmk}\nThe following discussion and proof can be found in section 14 (and partially 6.6) of Waterhouse.\n\\begin{prf}[of thm.~\\ref{thm:smoothgrps}]\n\tFirst extend scalars up to $\\bar k$. Second recall (from section 6.6) that for $A=k[G]$, the following are equivalent:\n\t\\begin{itemize}\n\t\t\\item $\\pi_0(A)$ is trivial\n\t\t\\item $\\Spec A$ is connected\n\t\t\\item $\\Spec A$ is irreducible\n\t\t\\item $A/\\operatorname{Nil}(A)$ is an integral domain.\n\t\\end{itemize}\n\tThe second two are actually a statement about algebraic geometry (recall the geometry doesn't see the nilradical).\n\n\tAssume that $k[G]$ is reduced. THen we can assume that $G=G^0$ is connected since \n\t\\[k[G]\\simeq k[G^0]\\times\\cdots\\times k[G^0]\\]\n\twhere we have $|\\pi_0(k[G])|$ copies in the product. 
Then we use the first property in the last section.\n\n\tFor $G^0$ connected, the general properties in the last section imply that $k[G^0]$ is irreducible implies that $k[G^0]/\\operatorname{Nil}(k[G^0])$ is \n\tan integral gdomain. By assumption, the nilradical is trivial, so $k[G^0]$ itself is an integral domain.\n\n\tLet $K=\\operatorname{Frac}(k[G^0])$. Then\n\t\\[\\dim G^0=\\dim k[G^0]\\stackrel{\\text{AG}}{=}\\operatorname{trdeg}_kK=\\rank \\Omega_{k[G^0]}\\]\n\twhere the $AG$ referenced above is essentially the Noether normalization lemma.\n\\end{prf}\n\n\\section{January 29th, 2020}\nAs a reminder: we are not having classes February 10-14. The following Monday is president's day. So keep that in mind. :)\n\n\\subsection{Lie algebras for \\texorpdfstring{$G$}{G}}\n\\begin{defn}\n\tIf $G$ is an algebraic group scheme over $k$, then $\\Lie G$ is the space of left-invariant derivations $D:k[G]\\to k[G]$.\n\\end{defn}\nWhat is a left-invariant derivation? Let $A=k[G]=k^G$. The Yoneda lemma allows us to identify this with $\\Hom(G,\\A1)$.\nBut given $f:G\\to k$, we get an action \n\\[g\\cdot f(-)=(T_g f)(-)=f(g-)\\]\nNote that this actually defines a \\textit{right action}, but the left and right invariant derivations are canonically isomorphic and this \nwill help us with the Lie algebra case.\n\nNow let $x\\in G$ and by abuse of notation let $x=\\operatorname{ev}_x:A\\to k$ be the evaluation map. This gives us a diagram:\n\\begin{figure}[h]\n\t\\begin{tikzcd}\n\t\tA\\ar[r,\"\\Delta\"]\\ar[d,\"T_g\"] & A\\otimes A\\ar[r,\"(g;\\operatorname{id})\"] & A\\ar[d,\"x\"]\\\\\n\t\tA\\ar[rr,\"x\"] & & k\n\t\\end{tikzcd}\n\\end{figure}\n\nLet $T:A\\to A$ be a $k$ map. THen $T$ is left invariant if $T_g\\circ T=T\\circ T_g$ for all $G$. \nI wrote some notes in my digital pad. We did some computation. The result is that it is equivalent to having $\\Delta\\circ T=(\\id\\otimes T\\circ \\Delta)$.\n\nThus we can restate the definition of $\\Lie G$ as \n\\[\\Lie G=\\{D\\in\\Der_k(A,A)|\\Delta\\circ D=(\\id\\otimes D)\\circ \\Delta\\}.\\]\n\\begin{lem}\n\t$\\Lie G$ has the structure of a Lie algebra with bracket \n\t\\[[D_1,D_2]=D_1\\circ D_2-D_2\\circ D_1\\]\n\\end{lem}\nTo prove that this we need to show the three things for the Lie bracket hold! The $k$ bilinearity and $[D,D]=0$ (careful! We want to consider characteristic 2!)\nare easy to show but the Jacobi identity requires some work.\n\nNotice that it is equivalent to saying that the operator $\\ad D$ is a derivation:\n\\[\\ad D([D_1,D_2])=[\\ad D(D_1),D_2]+[D_1,\\ad D(D_2)]\\]\nand one can check this. It has nothing to do with left invariance!\n\n\\subsection{Another way to think of this algebra}\n\\begin{lem}\n\tThere are natural bijections between \n\t\\begin{enumerate}\n\t\t\\item $\\Lie G$ (the left invariant derivations on $k[G]$)\n\t\t\\item $\\Der_\\varepsilon(k[G],k)$\n\t\t\\item $\\ker (G(k[\\tau]/(\\tau^2))\\to G(k))$ where the map is induced from the projection onto the constants in $k[\\tau]/(\\tau^2)$.\n\t\\end{enumerate}\n\\end{lem}\n\\begin{prf}\n\tWe can use that $G(k[\\tau]/(\\tau^2))=\\Hom(A,k\\oplus k\\tau$ and then study the kernel.\n\\end{prf}\n\n\\section{January 31, 2020}\nRecall that we had $\\Lie G$ as well as bijections between a couple different things.\nToday we are going to talk more about that. 
We scribbled out some easy computations for why (b) and (c) are equivalent.\nThe idea here is that any map $A\\to k\\oplus k\\tau$ can be written as $(\\varphi,d)$ where $d$ is a $\\varphi$ derivation.\n\nTo show the equivalence of $(a)$ and $(b)$, you use the maps $D\\mapsto \\varepsilon D$ and the inverse $d\\mapsto (\\id\\otimes d)\\circ\\Delta$. One needs to check that you get derivations and that the other thing \nis left-invariant. To show that they are inverses, consider \n\\[\\varepsilon(\\id\\bar\\otimes d)\\circ \\Delta(x)=\\sum\\varepsilon(a')d(a'')=d(\\sum \\varepsilon(a')a'')=d(a).\\]\nLeave the other direction as an exercise. \n\nLet's check left invariance: we need to check (by the discussion the other day) that $\\Delta\\circ D=(d\\otimes D)\\circ\\Delta.$ Checking this si doing a little diagram chase.\nPart of the diagram is associativity.\n\n\\subsection{Lie structures}\nA natural question to ask: if we have this isomorphism in this last lemma, what are the brackets on the other two objects?\n\nFor the first, let $d_1,d_2\\in\\Der_\\varepsilon(A,k)$. Let $D_i=(\\id\\bar\\otimes d_i)\\circ\\Delta$. Do some work and chase the diagrams for $[D_1,D_2]$!\nThen you find that \n\\[[d_1,d_2]=(d_1\\bar\\otimes d_2-d_2\\bar\\otimes d_1)\\circ\\Delta\\]\nwhich really underlines that we need the group/Hopf structure on these things to really be able to do this! This is what makes these equivalences work!\n\nFor the kernel $\\ker p_\\ast$, take two elements $g_1=(\\varepsilon, d_1)\\in k[\\tau_1]$ and $g_2=(\\varepsilon, d_2)\\in k[\\tau_2]$ where we \nare using different nilpotents so that we have enough degrees of freedom. Consider $R=k[\\tau_1,\\tau_2]$ and consider the inclusion of $g_1$ and $g_2$ in $G(R)=\\Hom(A,R)$.\n\nSo we think of $g_1=\\varepsilon+d_1\\tau_1$ and $g_2=\\varepsilon+d_2\\tau_2$ and we compute the multiplication as:\n\\[g_1g_2=\\varepsilon+d_1\\tau_1+d_2\\tau_2+((d_1\\bar\\otimes d_2)\\circ\\Delta)\\tau_1\\tau_2\\]\nand coompute $g_2g_1$ as well. Then we really don't have additive inverses in $G$, but we can compute \n\\[g_1g_2=(\\varepsilon+[d_1,d_2]\\tau_1\\tau_2)g_2g_1.\\]\n\nHow do we use this in practice? Consider $\\GL_n(S)$. The Lie algebra consists of the matrices $M\\in\\GL_n(k[\\tau])$.\nEach such matrix can be written as $M_1+M_2\\tau$. But we want the condition that the projection onto $\\GL_n(k)$ is the identity,\nso really we have \n\\[\\Lie\\GL_n = \\{I+M\\tau\\}\\subset\\GL_n(k[\\tau])\\]\nbut any such matrix is invertible if $M$ is, so it is no further condition. This gets you your isomorphism with all of $\\gl_n$.\n\n\\section{February 3rd, 2020}\nWorking from last time, we have some corollaries:\n\\begin{cor}\n\tLet $f:H\\to G$ be a map. Then \n\t\\begin{itemize}\n\t\t\\item $f$ induces a map $\\Lie H\\to \\Lie G$ ($\\Lie$ is functorial).\n\t\t\\item If $f$ is inclusion, then $\\Lie H\\hookrightarrow\\Lie G$\n\t\\end{itemize}\n\\end{cor}\n\\begin{rmk}\n\tThis gives us the full equivalences \n\t\\[\\Lie G\\simeq\\Der_\\varepsilon(k[G],k)\\simeq (I/I^2)^\\ast\\simeq T_e G.\\]\n\\end{rmk}\nand from this we get \n\\begin{cor}\n\t$G$ is smooth (or reduced) if and only if $\\dim G=\\dim_k \\Lie G$.\n\\end{cor}\nwhere the above is true since $\\dim_k\\Lie G=\\rank \\Omega_{k[G]}$.\n\n\\subsection{Examples}\nWe talked about $\\GL_n$ last time. Also if we have $I+\\tau_1M_1,I+\\tau_2 M_2\\in\\Lie \\GL_n=\\gl_n$, the bracked can quickly be seen to be \n\\[I+\\tau_1\\tau_2[M_1,M_2]\\]\nas one would hope.\n\nNow consider $\\SL_n$. 
Then $\\Lie\\SL_n\\subset\\gl_n$. To analyze this, consider $\\SL_n(k[\\tau])$ \nand we are looking for the ones with determinant one. But \n\\[\\det(I+\\tau M)=p(\\tau)\\]\nwhere $p$ is the characteristic polynomial for $I+\\tau M$ over $k$. But the terms above the linear term die and the linear term is the trace (of $M$):\n\\[\\det(I+\\tau M)=1+(\\tr M)\\tau=1\\]\nso the condition is $\\tr M=0$.\n\n\\subsection{Positive characteristic}\nRecall that $\\Ga$ is the group scheme represented by $k[t]$ where $t$ is primitive. $\\Gm$ is \nrepresented by $k[t,t^{-1}]$ where $t$ is grouplike.\n\nThese are both one-dimensional smooth group schmemes, so the Lie algebras $\\frakg_a=\\Lie \\Ga$ and $\\frakg_m=\\Lie\\Gm$\nare both one-dimensional. If $\\ch k=0$, then, there is only one Lie structure! But what if not?\nAssume for now $\\ch k\\ne 2$.\n\nDo the following problem! It's not hard.\n\\begin{prob}\n\tLet $\\ch k=p$. Take a derivation $D\\in \\Der_k(A,A)$ and show that the $p^{th}$ power of $D$ is again a derivation.\n\\end{prob}\n\nThis give rise to a ``$p^{th}$ power map'' which gives rise to a \\textbf{restricted Lie algebra}. See Weibel for a good \nintroduction (he had to really learn it well to write about it so it is very clear).\n\nGo back to the case of $\\Ga$ and $\\Gm$. $\\frakg_a=\\Der_\\varepsilon(k[t],k)=\\langle d_t=\\frac{d}{dt}\\rangle$\nsuch that $d_tt=1$.\n\nThen $D_t=(\\id\\bar\\otimes d_t)\\circ\\Delta$ as the corresponding derivation on $k[G]$, and we can compute \n\\begin{align*}\n\tD_t(t)&= (\\id\\bar\\otimes d_t)\\circ \\Delta(t)\\\\\n\t&= (\\id\\bar\\otimes d_t)(1\\otimes t+t\\otimes 1)=1\n\\end{align*}\nand so $D_t^2(t)=0$. Thus $D_t^p(t)=0$, so $\\frakg_a$ is a trivial $p$ Lie algebra: $\\frakg_a=\\langle d_t\\rangle$\nwhere $d_t^p=0$. \n\nIn the case of $\\Gm$, we can compute \n\\[D_t(t)=t\\]\nso then $D_t^p=D_t$, so we get $\\frakg_m=\\langle d_t\\rangle$ with $d_t^p=d_t$. The punchline here is \nthat the two infinitessimal theories, at first glance, appear to coincide although they actually vary \nin the right light. $\\frakg_a\\cong\\frakg_m$ as Lie algebras, but not as \\textit{restricted} Lie algebras.\n\n\\subsection{Frobenius Kernels}\nFor reference, this can be found in Jantzen part I section 7. Some of the proof will be omitted, but can be found there.\n\n\\brk\n\nLet $\\ch k=p>0$ and let $G$ be an algebraic group scheme over $k$. We assume here that $k=\\bar k$, but it suffices that \n$k$ is perfect. This fixes some problems that arise. Recall the Frobenius map $F:\\A n\\to \\A n$\nthat sends $(a_1,\\dots,a_n)\\mapsto(a_1^p,\\dots,a_n^p)$. The \\textbf{Frobenius twist} of $G$ is the image \n$G^{(1)}$ of $G$ under the Frobenius twist.\n\nThen we can define the \\textbf{Frobenius kernel}\n\\[G_{(1)}\\ker F\\hookrightarrow G\\xrightarrow{F} G^{(1)}\\]\nwhich one can show is a connected ($k[G_{(1)}]$ is local) and finite ($k[G_{(1)}]$ is finite dimensional) and a group scheme.\n\nIs the Frobenius kernel smooth? Usually no! The finite dimensionality of $k[G_{(1)}]$ means the dimension of $G$ is zero,\nso $\\Lie G$ is trivial. But this happens only if $G$ is trivial.\n\n\\brk\n\nAn example: $\\Ga$. Consider the Frobenius map $F:\\Ga\\to \\Ga$ via $r\\mapsto r^p$. Then the kernel \nis the collection of $p$-nilpotents, \n\\[{\\Ga}_{(1)}(R)=\\{r\\in R|r^p=0\\}\\]\nand one can compute that \n\\[k[{\\Ga}_{(1)}]\\cong k[t]/t^p\\]\nwith $t$ being a primitive element. 
We call ${\\Ga}_{(1)}=\\alpha_p$.\n\nWhen we switch the picture to $\\Gm$, we are now looking at $p^{th}$ roots of unity! The algebra is \n$k[t,t^{-1}]/(t^p-1)=k[t]/(t^p-1)\\simeq k[t-1]/(t-1)^p\\simeq k[u]/u^p$. So the coordinate algebra is isomorphic \nto that in the last case, but with a different coproduct.\n\n\\section{February 5th, 2020}\nToday's plan: we're going to go back and talk a bit more about the Frobenius twist to understand it better.\nThen we'll talk about restricted enveloping algebras and then do some synthesis.\n\n\\subsection{Frobenius twist}\nLet's saw we have a commutative ring with $R$ modules $A$ and $B$. Then we can take the pushout to get $A\\otimes_R B$. Instead, let's\ntake a $k$-algebra and an automorphism $\\varphi:k\\to k$ we take the pushout \n\\begin{figure}[h]\n\t\\centering\n\t\\begin{tikzcd}\n\t\tk\\ar[r]\\ar[d,\"\\varphi\"] & A\\ar[d]\\\\\n\t\tk\\ar[r] & A\\otimes_\\varphi k\n\t\\end{tikzcd}\n\\end{figure}\nwhich is the twist of $A$ by the automorphism $\\varphi$. Here the scalars are controlled by $\\varphi$ since \n\\begin{align*}\n\t\\lambda(a\\otimes \\mu)&= a\\otimes \\lambda\\mu\\\\\n\t&=\\varphi^{-1}(\\lambda)a\\otimes \\mu\n\\end{align*}\nwhenever this all makes sense.\n\nNow letting $\\varphi(x)=x^p$ we get the Frobenius twist $A\\otimes_\\varphi k\\eqdef A^{(1)}$. Whenever $k$ is perfect, this means that if $A$ is an \nabelian group, so is $A^{(1)}$.\n\nWe always have the map $a^{(1)}\\eqdef a\\otimes 1\\to a^p$ end extending linearly to the map \n\\[a\\otimes\\lambda=\\lambda^{1/p}a\\otimes 1\\mapsto \\lambda a^p.\\]\n\\begin{rmk}\n\tWhile this gives us a map $A^{(1)}\\to A$, we don't ge a map in the other direction or $M^{(1)}\\to M$ for a vector space $M$.\n\tBut we do get a map $M^{(1)}\\to S^p(M)$.\n\\end{rmk}\n\\begin{rmk}\n\tLet $X/k$ be a scheme. Then we can define $X^{(1)}/k$. If $X=G$ is a group scheme, then if $A=k[G]$,\n\twe can define $G^{(1)}$ to be the group scheme such that \n\t\\[k[G^{(1)}]=k[G]^{(1)}\\]\n\\end{rmk}\n\nYou can read more about these in Jantzen, but the map $A^{(1)}\\to A$ gives us a map $G\\to G^{(1)}$\nand the kernel of this map is the Frobenius kernel.\n\n\\subsection{Representation theory}\n\\begin{rmk}\n\tIf $M\\in\\Rep G$, then this induces a representation of $\\Lie G$. Why is this true? Well we can use the functoriality \n\tof $\\Lie$! In particular, a representation $M$ of $G$ is equipped with a morphism $G\\to \\GL(M)$, yielding a lie algebra morphism \n\t\\[\\Lie G\\to \\Lie GL(M)=\\gl(M)\\]\n\\end{rmk}\n\nThe Frobenius twist also gives us maps on the level of representations. That is, we get a functor \n\\[(-)^{(1)}:\\Rep G\\to \\Rep G\\]\nMore clearly, if $M\\in\\Rep G$ we have this vector space as well as a coaction \n\\[k[G]\\to M\\otimes k[G].\\]\nThis gives us a map \n\\[M^{(1)}\\to M^{(1)}\\otimes k[G]^{(1)}\\to M^{(1)}\\otimes k[G]\\]\nso $M\\in \\operatorname{comod}k[G]\\simeq \\Rep G$.\n\nIf we have two representations $M,N\\in\\Rep G$, then the map \n\\[\\Hom(M,N)\\to \\Hom(M^{(1)},N^{(1)})\\]\nConsidering $\\End(M)\\to \\End(M^{(1)})$, this gives us a map $\\GL(M)\\to\\GL(M^{(1)})$.\n\n\\begin{rmk}\n\tThe functor $(-)^{(1)}$ is what is called a ``strict polynomial functor''. :)\n\\end{rmk}\n\n\\begin{ex}\n\t${\\Ga}_{(1)}\\simeq \\alpha_p$ is the group represented by $k[t]/t^p$ with $t$ primitive. ${\\Gm}_{(1)}$\n\tis the same algebra with $t$ grouplike.\n\\end{ex}\n\\begin{ex}\n\tWe have already seen the coordinate algebra of $\\GL_n$. What is the Frobenius twist? 
It is taking a matrix \n\twhere we take all entries to the $p^{th}$ power (not the whole matrix!). So then the kernel is \n\t\\[{\\GL_n}_{(1)}=\\{(a_{ij})|a_{ij}^p=\\delta_{ij}\\}\\]\n\n\tIf you try to evaluate at a field, you won't get much: only one point. Thus this is a highly-singluar connected, finite \n\tgroup scheme with coordinate algebra \n\t\\[k[x_{ij}]/(x_{ij}^p-\\delta_{ij})\\]\n\n\tThis gives you the ``infinitesimal neighborhood of 1''.\n\\end{ex}\n\n\\subsection{Restricted Lie algebra}\n\\begin{rmk}\n\tWe never mentioned what happens for $\\gl_n$! Here we can honestly multiply matrices, so we can define $A^{[p]}=A^p$. This tells you that for \n\tmatrix groups we can realize their power operation in this way, but of course this is undesirable as it is dependent on embedding.\n\\end{rmk}\n\nRecall that the universal enveloping algebra of a Lie algebra $\\frakg$ is \n\\[\\calU(\\frakg)=T(\\frakg)/\\langle x\\otimes y-y\\otimes x-[x,y]| x,y\\in\\frakg)\\]\n\\begin{defn}\n\tIf $\\frakg$ is a restricted Lie algebra (say $\\Lie G$), then the \\textbf{restricted Lie algebra} is \n\t\\[u(\\frakg)=\\calU(\\frakg)/\\langle x^p-x^{[p]}|x\\in\\frakg)\\]\n\\end{defn}\nWe have PBW bases for both $\\calU(\\frakg)$ and $u(\\frakg)$. The latter is better because if $x_1,\\dots,x_n$ are a basis for $\\frakg$,\nthe basis is \n\\[\\{x_1^{a_1}\\cdots x_n^{a_n}|a_i\\le p-1\\}.\\]\n\nOne can show that the ideal generated by $x^p-x^{[p]}$ is a Hopf ideal, so one gets that $u(\\frakg)$ is again a cocommutative Hopf algebra, \nbut this time it is finite dimensional! Then one can consider just the $p$-restricted representations of $\\frakg$ and one \nfinds that this category is equivalent to $\\lmod {u(\\frakg)}$.\n\\begin{ex}\n\tIf $\\frakg_m=\\Lie\\Gm$, we can compute \n\t\\[u(\\frakg_m)=k[t]/(t^p-1)\\]\n\tbut now $t$ is primitive! In fact, $u(\\frakg_m)=k[\\mu_p]^\\ast=k[\\bbZ/p]$.\n\\end{ex}\nThe above example highlights an important result: if $\\frakg=\\Lie G$,\n\\[u(\\frakg)\\xrightarrow{\\sim}k[G_{(1)}]^\\ast\\]\nand so we get $\\Rep\\frakg\\simeq \\Rep G_{(1)}$. Recall in manifolds that the theories are almost of Lie groups and algebras \nare very tightly connected (for connected compact manifolds). Here we don't get that! However we get a shadow of it:\n\nNote that we actually get a nested sequence of Frobenius kernels:\n\\[G\\supset G_{(n)}\\supset\\cdots\\supset G_{(1)}\\]\nmeaning that in this way we get better and better approximations of $G$.\n\n\\section{February 7th, 2020}\nToday we are going to talk about induction and restriction.\n\\subsection{The fixed points functor}\nRecall the \\textit{fixed point functor}: let $G$ be an algebraic group scheme and $M\\in\\Rep G$ (which we identify with $\\lmod G$).\nGiven an action of $G$ on $M$, we get \n\\[M^G=\\{m\\in M|\\forall R, g\\in G(R), g\\cdot(m\\otimes 1_R)=m\\otimes 1_R\\}\\]\nThis definition becomes difficult because it requires an extension of scalars and when you have a lot of nilpotents it messes with things.\n\nAn alternative method: define \n\\[M^G=\\{m\\in M|\\Delta_M(m)=m\\otimes 1\\}\\]\nwhere $\\Delta_M:M\\to M\\otimes k[G]$ is the coaction map. We can also write \n\\[M^G=\\ker(\\Delta_M-\\id_M\\otimes 1).\\]\n\nThis does indeed create a functor $(-)^G:\\Rep G\\to \\Vectk$.\n\\begin{prop}\n\t$(-)^G$ is left exact.\n\\end{prop}\n\\begin{rmk}\n\t$\\Rep G$ has an internal Hom. That is, given $M,M'\\in\\Rep G$, we have an action of $G$ on $\\Hom_k(M,M')$ (note: note $G$-equivariant maps!) 
via the diagonal (in the usual sense of \n\tHopf algebras). All of this, together with tensor-(internal) hom adjunction\n\t\\[\\Hom_A(M\\otimes_k N, M')\\cong\\Hom_G(M,,\\Hom_k(N,M')),\\]\n\tgives the category of Hopf modules the structure of a rigid monoidal category.\n\n\tBut now (as vector spaces), we have the isomorphism \n\t\\[\\Hom_k(M,M')^A\\cong \\Hom_A(M,M').\\]\n\\end{rmk}\n\nLet's take a moment to consider $k[G]$ as a $G$-module. The \\textbf{right regular representation} on $k[G]$\nis as follows: given $f\\in k[G]$, we have \n\\[\\rho_r(g)(f)(x)=(g\\cdot f)(x)=f(x\\cdot g)\\]\nand \n\\[\\rho_l(g)(f)(x)=(g\\ast f)(x)=f(g^{-1}\\cdot x)\\]\nand by composing the two, we get $\\rho_r\\circ\\rho_l=\\rho_l\\circ\\rho_g$ to get the adjoint representation on $k[G]$.\n\nNow as comodules, we have the action \n\\[\\Delta:k[G]\\to k[G]\\otimes k[G]\\]\nwhich is right regular representation. \\textbf{Why?} We can define the left regular representation by \n\\[\\tau\\circ(S\\otimes \\id)\\circ\\Delta.\\]\nAs an exercise, determine whether this is the same element as $(\\id\\otimes S)\\circ \\Delta$.\n\n\\subsection{Induction and restriction}\nGiven $H\\hookrightarrow G$, this gives us an obvious functor \n\\[\\Res_H^G:\\Rep G\\to \\Rep H\\]\nThen we claim (but don't prove here) that $\\Res$ has a right adjoint. In some cases, there is a left adjoint as well although \nwe don't have that in general.\n\nIn this case, with groups, we can define \n\\[\\Ind_H^G M\\eqdef (M\\otimes_kk[G])^H\\]\nwhere each are $H$-modules ($k[g]$ with the right regular action) and the action on the tensor is via the left regular action.\n\nAnother point of view: remember we can define the affine group scheme $M_a$ where $M_a(X)=M\\otimes X$. Then \n\\[M\\otimes k[G]=M_a(k[G])\\]\nand then we can look at the morphisms $G\\to M_a$ and consider \n$(M\\otimes k[G])^H$ to be the \\textit{$H$-invariant functions $f:G\\to M$} where $H$ actions on $\\Mor(G,M_a)$ via \n\\[(h\\cdot f)(g)=hf(gh)\\]\nand so \n\\[\\Mor(G,M_a)^H=\\{f:G\\to M|f(gh)=h^{-1}f(g),\\forall R, g\\in G(R)\\}.\\]\n\n\\begin{rmk}\n\tNotice that \n\t\\[\\Ind_H^G=(-)^H\\circ(-\\otimes k[G])\\]\n\tand so (since tensor is exact and restriction is left exact), $\\Ind_H^G$ is left exact.\n\\end{rmk}\n\n\\subsection{Frobenius reciprocity}\nDefine the evaluation map $\\varepsilon_M:\\Ind_H^G M\\to M$ via \n\\[m\\otimes f\\mapsto \\varepsilon(f)m\\]\nUsing the other form, if we have $f\\in\\Mor(G,M)^H$, we get $f\\mapsto f(1)\\in M$.\n\n\\begin{prop}\\label{prop:frobenius}\n\t\\begin{enumerate}\n\t\t\\item $\\varepsilon_M:\\Ind_H^G M\\to M$ is an $H$-module map.\n\t\t\\item (\\textit{Frobenius reciprocity}) $\\varepsilon_M$ induces an isomorphism \n\t\t\\[\\Hom_G(N,\\Ind_H^G M)\\xrightarrow{\\sim}\\Hom_H(\\Res_H^G N, M).\\]\n\t\\end{enumerate}\n\\end{prop}\n\\begin{cor}\n\t\\begin{enumerate}\n\t\t\\item $\\Ind$ is transitive: $\\Ind_H^G=\\Ind_{H'}^G\\circ\\Ind_H^{H'}$.\n\t\t\\item $\\Ind$ commutes with field extensions.\n\t\t\\item $\\Ind$ preserves injectives.\n\t\\end{enumerate}\n\\end{cor}\n\\begin{rmk}\n\tRecall the proof that $\\lmod R$ has enough projectives: the idea was to construct projective in $\\Ab$ and then \n\tuse a kind of induction to bring them up to the module category. In general, property (c) above empowers us to show \n\tthat categories have enough injectives!\n\\end{rmk}\nWe have a week off! So here's some homework:\n\\begin{prob}\n\tProve proposition \\ref{prop:frobenius}. 
Notice if we have \n\t\\[N\\xrightarrow{f}\\Ind_H^G\\xrightarrow{\\varepsilon_M} M\\]\n\tand define $\\tilde\\Psi^H:N\\to \\Ind_H^G M$ via the $H$-map $\\psi:N\\to M$ where \n\t\\[\\tilde\\Psi:N\\to \\Mor(G,M)\\]\n\tvia $n\\mapsto \\tilde\\Psi_n:G\\to M$ where $\\tilde\\Psi_n(g)=\\Psi(g^{-1}n)$. Then show $\\tilde\\Psi_n$ is $H$-invariant, a $G$-map, and prove there are mutual inverses.\n\\end{prob}\n\n\\section{February 19th, 2020}\nWelcome back! Today we're taking about the proof for that last problem (Frob. Reciprocity)\n\\begin{prf}\n\tRecall we have the map $\\varepsilon_M:\\Ind_H^G M\\to M$, where we think of \n\t\\[\\Ind_H^G M=(M\\otimes k[G])^H=\\{f:G\\to M|h^{-1}f(-)=f(-\\cdot h), gf(-)=f(g^{-1}\\cdot -)\\}\\]\n\tand $\\varepsilon_M(f)=f(1)\\in M$. Then the map giving us our Frobenius reciprocity is just composing with $\\varepsilon_M$.\n\n\tWe will prove some of this. Let $\\tilde\\psi:N\\to \\Ind_H^G M$. Thus it corresponds, for each $n\\in N$, to a map \n\t\\[\\tilde\\psi_n:G\\to M\\]\n\tand we need to define this map, check it is $H$-invariant, check that the resulting $\\tilde\\psi$ is $G$-invariant, and then check that this and $(\\varepsilon_M\\circ -)$ and this map are mutual inverses.\n\n\tThen we busted out paper and wrote stuff down. I checked $G$ invariance.\n\\end{prf}\n\\begin{cor}\n\t\\begin{enumerate}\n\t\t\\item $\\Ind$ is transitive.\n\t\t\\item $\\Ind$ takes injectives to injectives.\n\t\\end{enumerate}\n\\end{cor}\nThe way to prove this is to notice that it is a right adjoint to an exact functor.\n\n\\subsection{Projection formula}\n\\begin{thm}[Tensor identity/Projection formula]\n\tIf $M\\in\\lmod H$ and $N\\in\\lmod G$,\n\t\\[\\Ind_H^G(M\\otimes N)\\simeq (\\Ind_H^GM)\\otimes N\\]\n\\end{thm}\n\\begin{rmk}\n\tIf $k$ is a ring instead of a field, we need to make an additional assumption: that $N$ is flat over $k$.\n\\end{rmk}\n\\begin{prf}\n\tWe'll write down the maps at least. Since \n\t\\[L\\eqdef \\Ind_H^G(M\\otimes N)=(M\\otimes N\\otimes k[G])^H\\simeq\\{f:G\\to M\\otimes N|f(gh)=(h^{-1}\\otimes h^{-1})f(g)\\}\\]\n\ton the right, we have \n\t\\[R\\eqdef\\Ind_H^GM\\otimes N=(M\\otimes k[G])^H\\otimes N\\simeq\\{f:G\\to M\\otimes N|f(xh)=(h^{-1}\\otimes 1)f(x)\\}\\]\n\n\tNow we can write down two maps \n\t\\[\\alpha,\\beta:M\\otimes N\\otimes k[G]\\to M\\otimes N\\otimes k[G]\\]\n\t(thinking of these as maps $G\\to M\\otimes N$) in the following way:\n\t\\[\\alpha(f)(g)=(1\\otimes g)f(g)\\quad\\text{and}\\quad \\beta(f)(g)(1\\otimes g^{-1}f(g))\\]\n\tand we claim:\n\t\\begin{enumerate}\n\t\t\\item $\\alpha$ and $\\beta$ are mutual inverses;\n\t\t\\item $\\alpha(L)=R$ and $\\beta(R)=L$;\n\t\t\\item $\\alpha,\\beta$ are $G$-invariant.\n\t\\end{enumerate}\n\\end{prf}\n\\begin{rmk}\n\tThis is a bit tedious, but this whole induction business is spelled out in Jantzen.\n\\end{rmk}\n\n\\subsection{What do we gain from all this?}\nWell first, notice that $\\Ind_e^G k=k[G]$, which implies taht $k[G]$ is injective. Then we have \n\\begin{cor}\n\tLet $M\\in \\lmod k$ and notice \n\t\\[\\Ind_e^G M\\simeq\\Ind_e^G(k\\otimes k M)\\simeq k[G]\\otimes M_\\text{triv}\\]\n\twhich is just a sum of copies of $k[G]$, which is again projective.\n\\end{cor}\n\\begin{cor}\n\tLet $M\\in \\Rep G$. 
Then \n\t\\[M\\otimes k[G]\\simeq M_\\text{triv}\\otimes k[G]\\]\n\\end{cor}\n\\begin{prf}\n\t$M_\\text{triv}\\times k[G]\\simeq\\Ind_e^G M=\\Ind_e^G(k\\otimes M)\\simeq \\Ind_e^Gk\\otimes M\\simeq k[G]\\otimes M.$\n\\end{prf}\n\\begin{rmk}\n\tThe map between these things is nontrivial (we used some twisting in the tensor identity, for example). The map is \n\t\\[m\\otimes f\\mapsto (1\\otimes f)(1\\otimes S)\\Delta_M(m)\\]\n\twhere $S$ is the coinverse.\n\\end{rmk}\n\n\\begin{thm}[``$\\Rep G$ has enough injectives'']\n\t\\begin{enumerate}\n\t\t\\item $k[G]$ is injective.\n\t\t\\item $\\forall M\\in\\Rep G$, there is an injective $I$ with $M\\hookrightarrow I$.\n\t\t\\item Any injetive $I$ is a direct summand of $\\bigoplus k[G]$.\n\t\\end{enumerate}\n\\end{thm}\nCorollary 2 gets us our injectives containing $M$. This sets us up to do homological algebra, which we will do next time.\n\n\\section{February 21st, 2020}\nWe did a lot of work last time to show that $k[G]$ was injective and furthermore that every module injects into \nan injective module. In general, finding projectives is much harder.\n\n\\subsection{Projectives}\n\\begin{lem}\n\tLet $E$ be a simple $G$-module. Then there exists an injective hull $Q_E$ such that \n\t\\[E\\hookrightarrow Q_E\\]\n\twith $\\operatorname{soc}Q_E=E$ and $Q_E$ is indecomposable.\n\\end{lem}\n\\begin{rmk}\n\tAny injective in $\\Rep G$ is a direct sum of indecomposables.\n\\end{rmk}\n\\begin{lem}\n\tLet $P$ be a projective $G$-module. Then $P$ is injective.\n\\end{lem}\n\\begin{prf}\n\tLet $P$ be a projective module. We claim that for any $G$-module $M$, $P\\otimes_k M$ is again projective. This follows from tensor-hom adjointness.\n\tSo $\\Hom(-,P)$ is exact. Let \n\t\\[0\\to M_1\\to M\\to M_2\\to 0\\]\n\tbe a short exact sequence of finite dimensional modules. Dualizing is exact, as is $-\\otimes P$, so \n\t\\[0\\leftarrow M_1^\\ast\\otimes P\\leftarrow M^\\ast\\otimes P\\leftarrow M_2\\otimes P\\leftarrow 0\\]\n\tis exact. Thus $M\\otimes P$ splits as $(M_1\\otimes P)\\oplus M'$.\n\n\tBut then notice that \n\t\\[\\Hom_G(M_1,P)\\simeq(\\Hom_k(M,P))^G\\simeq(M_1^\\ast\\otimes P)^G\\]\n\tbut then $(M^\\ast\\otimes P)^G$ splits as \n\t\\[(M_1^\\ast\\otimes P)^G\\oplus M'^G\\]\n\tand this proves surjectivity.\n\\end{prf}\n\\begin{rmk}\n\tNotice that this alst proof could possibly be better. The finite dimensionality part could be able to be weakened/dropped by\n\tavoiding the tensor product representation.\n\\end{rmk}\n\n\\subsection{Frobenius categories}\nThe upshot to all of this is that \n\\begin{prop}\n\tIf there exists a projective module $P$ in $\\Rep G$, then the collections of injectives and projectives are the same.\n\\end{prop}\n\\begin{prf}\n\tLet $P$ be projective. Let $P_1\\subseteq P$ be a finite-dimensional submodule. Then \n\t\\[M\\hookrightarrow M\\otimes \\End_k(P_1)\\simeq M\\otimes P_1^\\ast\\otimes P_1\\hookrightarrow M\\otimes P_1^\\ast\\otimes P\\]\n\twhich we have seen is both injective and projective.\n\n\tSo take a simple $M$ with injective hull $Q_M$. Then we get maps $Q_M\\to M\\otimes P_1^\\ast\\otimes P$ and $M\\otimes P_1^\\ast\\otimes P\\to Q_M$ \n\tby leveraging injectivity and projectivity. Thus $Q_M$ is a direct summand of $M\\otimes P_1^\\ast\\otimes P$, which is projective. 
\n\tA little clean up gets us the result.\n\\end{prf}\n\n\\begin{rmk}\n\tWhen $P$ is injective iff projective, the category $\\Rep G$ is a Frobenius category.\n\\end{rmk}\n\n\\begin{rmk}\n\tFOr smooth algebraic groups $\\Gm$, $\\Ga$, and $\\GL_n$, there are no projective modules.\n\tIf $G$ is finite (finite groups, Frob kernels ${\\Ga}_{(r)}$, $\\mu_p$, ${\\GL_n}_{(r)}$) we have that the collections of projective and injectives are the same.\n\\end{rmk}\n\\begin{rmk}\nDomkin put together a complete list of affine group schemes with projective modules.\n\\end{rmk}\n\n\\subsection{Cohomology}\nWe know now that $\\Rep G$ has enough injectives, so if $\\calF:\\Rep G\\to \\Vectk$ is left exact we can take an \ninjective resolution $M\\to I^\\bullet$ and then \n\\[(\\calR^n\\calF)(M)\\eqdef H^n(\\calF(I^\\bullet)).\\]\n\nRecall the Grothendieck spectral sequence:  Let $\\calF:\\calC\\to \\calD$ and $\\calF:\\calD\\to \\calE$. Then say \nwe want to compute the derived functors of $\\calF'\\circ\\calF$. Assume that $\\calF'$ is left exact and that \n$\\calF$ takes injectives to $\\calF'$ acyclic objects. Then we have a spectral sequence whose second page is \n\\[E_2^{m,n}=\\calR^m\\calF'\\circ\\calR^n\\calF\\Rightarrow \\calF^{m+n}(\\calF'\\circ\\calF)\\]\n\n\\begin{rmk}\n\tNotice that in the Grothendieck spectral sequence if $\\calF'$ is exact (so $\\calR^n\\calF=0$ when $n>0$),\n\tyou get \n\t\\[\\calF'\\circ\\calR^n\\calF\\xrightarrow{\\sim}\\calR^n(\\calF'\\circ\\calF)\\]\n\n\tYou can do the same thing with exactness of $\\calF$.\n\\end{rmk}\n\n\\subsubsection{Notation}\nWe will use the following notation:\n\\begin{align*}\n\t\\Ext_G^n(M,-)=\\calR^n(\\Hom_G(M,-))\\\\\n\tH^n(G,M)=\\Ext_G^n(k,M)\\\\\n\tH^n(G,k)=\\Ext^n_G(k,k)\\\\\n\tH^\\ast(G,k)=\\oplus_0^\\infty\\Ext_G^n(k,k)\n\\end{align*}\nand this last object is called the cohomology ring of $G$.\n\n\\subsubsection{Applications}\n\\begin{prop}\n\tWe have the derived adjointness \n\t\\[\\Ext_G^n(M\\otimes V,N)\\simeq \\Ext_G^n(M,\\Hom(V,N))\\]\n\\end{prop}\n\n\\section{February 24th, 2020}\nToday we are going to talk more about the Grothendieck spectral sequence.\n\\subsection{Examples}\nLet's begin by proving 35.5.2\n\\begin{prf}\n\tLet $\\calF(-)=M\\otimes -$ and $\\calF'(-)=\\Hom_G(-,N)$. Then \n\t\\[\\calF'\\circ\\calF(V)=\\Hom(M\\otimes V,N)\\]\n\tand so $\\scrR^n(\\calF')\\circ \\calF(V)=\\Ext^n(M\\otimes V,N)$\n\tbut we know that this is isomorphic to $\\scrR^n(\\calF'\\circ\\calF)(V)$,\n\t\n\tI need to work this out once to understand the details. It involved using the regular tensor-hom identity and \n\tthe convergence of the Grothendieck spectral sequence again.\n\\end{prf}\n\n\\begin{prop}\n\t\\[\\Ext_G^n(M,\\scrR^m\\Ind_H^G N)\\Rightarrow \\Ext_H^{n+m}(m,N)\\]\n\\end{prop}\n\\begin{prf}\n\tLeverage Frobenius reciprocity and use the fact $\\scrF=\\Ind_H^G$ is left exact and takes injectives to injectives. Then if we let $\\scrF'=\\Hom_G(M,-)$,\n\twe get $\\scrF'\\circ\\scrF(N)=\\Hom_H(M,N)$ and it pops out.\n\\end{prf}\n\n\\begin{defn}\n\tWe say a subgroup scheme $H\\le G$ is \\textbf{exact in $G$} if $\\Ind_H^G$ is exact. Equivalently if \n\t$\\scrR^{n>0}\\Ind_H^G=0$.\n\\end{defn}\n\\begin{cor}\n\tIf $H$ is exact, we have an isomorphism \n\t\\[\\Ext_G^n(M,\\Ind_H^G N)\\xrightarrow{\\sim}\\Ext_H^n(M,N)\\]\n\\end{cor}\nThis is, of course, a much easier thing to work with. But an interesting question is when $H$ is exact.\n\\begin{prop}\n\tLet $k$ be a field. 
\n\n\\begin{prop}\n\tThere is a spectral sequence\n\t\\[E_2^{n,m}=\\Ext_G^n(M,\\scrR^m\\Ind_H^G N)\\Rightarrow \\Ext_H^{n+m}(M,N)\\]\n\\end{prop}\n\\begin{prf}\n\tLeverage Frobenius reciprocity and use the fact that $\\scrF=\\Ind_H^G$ is left exact and takes injectives to injectives. Then if we let $\\scrF'=\\Hom_G(M,-)$,\n\twe get $\\scrF'\\circ\\scrF(N)=\\Hom_H(M,N)$ and it pops out.\n\\end{prf}\n\n\\begin{defn}\n\tWe say a subgroup scheme $H\\le G$ is \\textbf{exact in $G$} if $\\Ind_H^G$ is exact. Equivalently if \n\t$\\scrR^{n>0}\\Ind_H^G=0$.\n\\end{defn}\n\\begin{cor}\n\tIf $H$ is exact, we have an isomorphism \n\t\\[\\Ext_G^n(M,\\Ind_H^G N)\\xrightarrow{\\sim}\\Ext_H^n(M,N)\\]\n\\end{cor}\nThis is, of course, a much easier thing to work with. But an interesting question is when $H$ is exact.\n\\begin{prop}\n\tLet $k$ be a field. Then $H$ is exact if and only if $k[G]$ is an injective $H$-module.\n\\end{prop}\n\\begin{rmk}\n\tThe direction ``$\\Leftarrow$'' works when $k$ is a commutative ring.\n\\end{rmk}\n\\begin{rmk}\n\t$k[G]=\\Ind_1^G k=\\Ind_H^G(\\Ind_1^H k)=\\Ind_H^G(k[H])$\n\\end{rmk}\n\\begin{prf}\n\tTake a short exact sequence $0\\to M_1\\to M\\to M_2\\to 0$ of $H$-modules. Then we do the old argument:\n\ttake a splitting of $M\\otimes k[G]$ and we get that $M_1\\otimes k[G]$ is a summand of $M\\otimes k[G]$.\n\tThen applying invariants preserves this decomposition.\n\n\tThe rest of this (and all of these properties) lives in Jantzen, chapter I, section 4.\n\\end{prf}\n\n\\begin{rmk}\n\tIf $H$ is a finite group scheme, then $H$ is exact (in anything)! The motivation for this: say we have $H\\le G$, \n\tany two affine group schemes.\n\n\tIn analogy, there is a statement about Hopf algebras (see my notes for that) for a map $A\\hookrightarrow B$ of two Hopf algebras (perhaps requiring $B$ to be cosemisimple?).\n\tThis idea ports over here somehow. It can be proved via sheaf cohomology.\n\\end{rmk}\n\nNext we're going to try to write down the ``generalized version'' of the tensor identity using the Grothendieck spectral sequence.\n\\[(\\scrR^n\\Ind_H^G)\\circ(-\\otimes \\Res^G_H N)\\cong (\\scrR^n\\Ind_H^G(-))\\otimes N\\]\n\n\\section{February 26th, 2020}\nThis is all from Jantzen chapter I section 5. Apparently it is a bit hand-wavy.\n\\subsection{Associated sheaves}\nOur goal is to describe $\\scrR^n\\Ind_H^G(M)$ as $H^n(G/H,\\calL(M))$ where $\\calL(M)$ is a sheaf on $G/H$.\nWe are going to think of $\\scrL:\\Rep H\\to \\Sh G/H$ as a functor.\n\n\\subsubsection{Quotients}\nThe history of these quotients goes back to Demazure and Gabriel. The idea is that we don't generally have \nquotients in $\\Sch$, so we extend to the \\textbf{sheaves in the fppf topology}. This is a Grothendieck topology whose name stands for ``\nfinitely presented faithfully flat''. It's French, don't question it.\n\nWhen we have an algebraic group $G$ acting on a scheme $X$, we want to know when the quotient $X/G$ in this broader context \nis actually a scheme. \n\nLet us assume that $G$ acts freely (pointwise). We assume that $k$ is a field. We can drop this, but then we have \nto carry around the assumption of flatness everywhere. Let $X$ be a scheme of finite type (algebraic) over $k$ and $G$ an \naffine group scheme (also of finite type). Then we have \n\\begin{prop}\n\tIf $G$ acts freely and $X/G$ is a scheme, then \n\t\\[X(R)/G(R)\\hookrightarrow (X/G)(R)\\]\n\\end{prop}\n\\begin{prop}\n\t($G$ always acting freely) If $X/G$ is a scheme, there exists an isomorphism \n\t\\[X\\times G\\xrightarrow{\\sim}X\\times_{X/G}X\\quad\\text{via}\\quad (x,g)\\mapsto (x,xg)\\]\n\twhere on the level of points, this is $\\{(x,x')|x'=xg\\}$ (we are using right actions obviously).\n\\end{prop}\nFrom here on out, we are going to assume that $X/G$ is a scheme unless we say something to the contrary.\n\\begin{ex}\n\t$\\P n(R)$ is, as a set, the set of free rank one direct summands of $R^{n+1}$. Work out how (and if!) this is true.\n\tThen we can define \n\t\\[\\A{n+1}_\\ast(R)=\\{(a_0,\\dots,a_n)\\in R^{n+1}|\\sum Ra_i=R\\}\\]\n\twhich is a subscheme of $\\A{n+1}$. Now we claim that \n\t\\[\\A{n+1}_\\ast/\\Gm\\xrightarrow{\\sim}\\P{n}\\]\n\tAnd we had some trouble making this precise. This is in Jantzen, however.\n\\end{ex}
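\nOne way to make part of this precise (a sketch): a point $(a_0,\\dots,a_n)\\in\\A{n+1}_\\ast(R)$ spans a submodule $R\\cdot a\\subseteq R^{n+1}$ which is a free rank one direct summand. Indeed, choosing $r_i$ with $\\sum r_ia_i=1$, the map $\\varphi:R^{n+1}\\to R$, $x\\mapsto\\sum r_ix_i$, satisfies $\\varphi(a)=1$, so \n\\[R^{n+1}=Ra\\oplus\\ker\\varphi,\\]\nand $ta=0$ forces $t=t\\sum r_ia_i=0$, so $Ra$ is free. Two tuples span the same summand exactly when they differ by a unit, i.e. by the $\\Gm(R)$ action. The subtlety is surjectivity: an arbitrary rank one summand of $R^{n+1}$ need not be free, so $\\A{n+1}_\\ast(R)/\\Gm(R)\\to\\P n(R)$ is only a bijection after passing to the fppf-sheaf quotient (or for, say, local $R$).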
\nYou can do similar things with the Grassmann schemes $\\operatorname{Gr}(k,n)$.\n\\begin{thm}\n\tLet $G$ be a finite group scheme and $X$ an affine scheme with a free $G$ action. Then $X/G$ is an affine scheme with \n\tcoordinate ring $k[X]^G$.\n\\end{thm}\n\\begin{thm}\n\tLet $H\\le G$ be algebraic group schemes (one sometimes requires that $H$ is closed, but this is automatic when we look at it from the functorial \n\tperspective and require that $H$ is an algebraic subgroup scheme). Then (with $H$ acting on the right of $G$) $G/H$ (or in the other case $H\\setminus G$) is a scheme.\n\tIn particular, $G\\times H\\simeq G\\times_{G/H}G$.\n\\end{thm}\n\n\\section{February 28th, 2020}\n\\subsection{Associated Sheaves}\nThe set up is the following: we have a scheme $X/k$ and an algebraic group scheme $G/k$ acting freely on $X$ such that $X/G$ is a scheme.\nWe are constructing a functor \n\\[\\calL:\\Rep G\\to \\Sh X/G\\]\nand actually to $\\O_{X/G}$-modules. Let $\\pi:X\\to X/G$ be affine, flat and algebraic.\n\\begin{thm}\n\t\\begin{enumerate}\n\t\t\\item If $Y\\subset X/G$ is affine (open), then $\\pi^{-1}Y\\subset X$ is affine (open).\n\t\t\\item If $Y\\subset X/G$ is affine open, then the map $\\pi:\\pi^{-1}Y\\to Y$ induces a $k[Y]$-structure on $k[\\pi^{-1}Y]$ making it a faithfully flat, finitely presented $k[Y]$-module.\n\t\\end{enumerate}\n\\end{thm}\n\nNow we construct things in the following way. Let $M$ be a $G$-module. Then for any open $U\\subset X/G$, define \n\\[\\calL(M)(U)=\\left(M\\otimes_k k[\\pi^{-1}U]\\right)^G\\]\nwhere we put the diagonal action on the tensor product. Notice $\\pi^{-1}(U)\\subset X$.\n\nWhy is this the right thing to do? Notice that \n\\[M\\otimes k[\\pi^{-1}U]\\simeq\\operatorname{Mor}(\\pi^{-1}U,M_a),\\qquad \\left(M\\otimes k[\\pi^{-1}U]\\right)^G\\simeq\\{f:\\pi^{-1}U\\to M_a|f(xg)=g^{-1}f(x)\\}.\\]\nWe claim that $\\calL(M)$ is a sheaf, and specifically a sheaf of $\\O_{X/G}$-modules. That is, the map $\\pi:X\\to X/G$ gives us \na $\\O_{X/G}$-module structure on $\\O_X$ via $\\pi^{\\ast}:\\pi^{-1}\\O_{X/G}\\to \\O_X$. On any $U$ we have, for \n$f_1\\in \\O_{X/G}(U)$ and $f\\in\\calL(M)(U)$, an element \n\\[f_1\\cdot f\\in\\calL(M)(U).\\]\n\n\\begin{prop}\n\t\\begin{enumerate}\n\t\t\\item $\\calL(M)$ is a sheaf of $\\O_{X/G}$-modules.\n\t\t\\item $\\calL$ is exact.\n\t\t\\item $\\calL(M)$ is quasi-coherent.\n\t\t\\item If $M$ is a finite dimensional $G$-module, $\\calL(M)$ is coherent.\n\t\\end{enumerate}\n\\end{prop}\nWe may talk about a proof of this next time.\n\\begin{ex}\n\tLet $M=k$. Then \n\t\\[\\calL(k)(U)=k[\\pi^{-1}U]^G=\\{f:\\pi^{-1}U\\to k_a=\\A1|f(xg)=f(x)\\}\\]\n\tso we are just looking for invariant functions. The claim is that this is actually \n\t\\[\\calL(k)(U)=\\{f:\\pi^{-1}U/G\\to \\A1\\}=\\{f:U\\to \\A1\\}=k[U]=\\O_{X/G}(U)\\]\n\\end{ex}\nWe want to know what happens to injective modules. All of our injectives are going to look like $M\\otimes k[G]$ or direct summands of this or will be \notherwise comparable to this. 
So if we have $\\pi:X\\to X/G$,\n\\[\\calL_{X/G}(M\\otimes k[G])\\simeq \\pi_\\ast\\calL_{X/e}(M)\\]\nThis requires some thought and computation.\n\n\\begin{prop}\n\t\\begin{enumerate}\n\t\t\\item The functor sending \n\t\t\\[M\\mapsto H^0(X/G,\\calL(M))=\\Gamma(\\calL(M))\\]\n\t\tis left exact (but not right exact!)\n\t\t\\item If $X$ is affine, then the derived functors are \n\t\t\\[M\\mapsto H^n(X/G,\\calL(M))\\]\n\t\\end{enumerate}\n\\end{prop}\n\\begin{prf}\n\tThe second part looks almost obvious from the Grothendieck spectral sequence. We need, however, \n\tthat $\\calL$ takes injective $G$-modules to acyclic sheaves. Why is that true? Take $M\\otimes k[G]$ an \n\tinjective $G$-module, then we saw that \n\t\\[\\calL(M\\otimes k[G])\\simeq \\pi_\\ast\\calL_{X/e}(M).\\]\n\tWe need to check that \n\t\\[H^n(X/G,\\pi_\\ast\\calL_{X/e}(M))\\]\n\tvanishes for all $n\\ge 1$. There is a problem in Hartshorne that gives this to us for an affine morphism of noetherian separated schemes.\n\\end{prf}\n\\begin{prop}\n\tLet $H\\le G$. Then \n\t\\begin{enumerate}\n\t\t\\item $\\scrR^n\\Ind_H^G(M)\\simeq H^n(G/H,\\calL_{G/H}(M))$.\n\t\t\\item If $G/H$ is Noetherian, then $\\scrR^n\\Ind_H^G(M)=0$ for $n>\\dim G/H$.\n\t\t\\item If $G/H$ is projective, then $\\scrR^n\\Ind_H^G(M)$ is finitely generated whenever $M$ is finite dimensional.\n\t\t\\item If $G/H$ is affine, $\\Ind_H^G$ is exact.\n\t\\end{enumerate}\n\\end{prop}\n\\begin{prf}\n\tFor the first statement, we are just taking higher derived functors of \n\t\\[H^0(G/H,\\calL(M))=\\calL(M)(G/H)=(M\\otimes k[G])^H=\\Ind_H^G(M)\\]\n\twhere the isomorphisms here are natural. So the derived functors are the same!\n\\end{prf}\n\\begin{cor}\n\tIf $H$ is a finite group scheme, then $\\Ind_H^G$ is exact.\n\\end{cor}\n\\begin{prf}\n\tWe know that $G/H$ is affine and in fact is $\\Spec k[G]^H$.\n\\end{prf}\n\\section{March 2nd, 2020}\nSome reminders: we reinterpreted our induction functor in terms of sheaf cohomology. In our particular case, \nwe want to restrict to \n\\[\\calL:\\Rep H\\to \\O_{G/H}\\text{-}\\mathbf{mod}\\]\nwhere $H\\le G$ and we are sending $M\\mapsto \\calL(M)$ where now \n\\[\\calR^n\\Ind_H^G M\\simeq H^n(G/H,\\calL(M)).\\]\n\n\\subsection{A Preview}\nWhen we have a Borel subgroup $B\\subseteq G$ (e.g. the upper triangulars in $\\GL_n$), \nthen we get the functor \n\\[\\Rep B\\to \\O_{G/B}\\text{-}\\mathbf{mod}\\]\nwhere we send a character $\\lambda$ (that is, the one-dimensional representation $k_\\lambda$) to $H^0(G/B,\\calL(k_\\lambda))$.
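\nA standard example to keep in mind here: for $G=\\operatorname{SL}_2$ with $B$ the upper triangular Borel, $G/B\\simeq\\P1$, and (with suitable sign conventions on the character $\\lambda$) $\\calL(k_\\lambda)\\simeq\\O_{\\P1}(\\lambda)$. Then \n\\[H^0(G/B,\\calL(k_\\lambda))\\simeq H^0(\\P1,\\O(\\lambda)),\\]\nthe space of degree $\\lambda$ homogeneous forms in two variables, which has dimension $\\lambda+1$ when $\\lambda\\ge0$ and vanishes otherwise, while $H^n(\\P1,\\O(\\lambda))=0$ for all $n\\ge1$ and $\\lambda\\ge0$.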
\n\n\\subsection{Leftovers}\n\\begin{prop}\n\t$\\calL$ is exact.\n\\end{prop}\n\\begin{prf}\n\tIt suffices to show that the association of $M$ to $\\calL(M)(U)$, for $U$ an affine open, is exact. To \n\tprove this, consider $\\tilde U=\\pi^{-1}U\\subset X\\to U\\subset X/G$, which is faithfully flat---that is, the ring map \n\t$\\pi^\\ast:k[U]\\to k[\\tilde U]$ is. So it suffices to show that the map \n\t\\[M\\mapsto k[\\tilde U]\\otimes_{k[U]}\\calL(M)(U)=k[\\tilde U]\\otimes_{k[U]}(k[\\tilde U]\\otimes M)^G\\]\n\tis exact. Recall that we have $X\\times G\\simeq X\\times_{X/G} X$ via the map $(x,g)\\mapsto (x,xg).$\n\tSo via the same map, we get $\\tilde U\\times_U\\tilde U\\simeq \\tilde U\\times G$ where $G$ acts on the right factor.\n\n\tThis gives us an iso \n\t\\[k[\\tilde U]\\otimes_{k[U]}k[\\tilde U]\\simeq k[\\tilde U]\\otimes k[G]\\]\n\tand so \n\t\\[k[\\tilde U]\\otimes_{k[U]}(k[\\tilde U]\\otimes M)^G=(k[\\tilde U]\\otimes_{k[U]}k[\\tilde U]\\otimes M)^G\\simeq (k[\\tilde U]\\otimes k[G]\\otimes M)^G\\]\n\twhere in both of these expressions $G$ acts diagonally on the last two tensor factors.\n\n\tThen applying the tensor identity, we get that the above is\n\t\\[(k[\\tilde U]\\otimes k[G]\\otimes M_{tr})^G\\cong k[\\tilde U]\\otimes M\\otimes (k[G])^G\\simeq k[\\tilde U]\\otimes M\\]\n\twhich is exact in $M$. \n\\end{prf}\nThe next proposition helps us to show that $\\calL$ takes injectives to acyclics so we can apply the Grothendieck spectral sequence.\n\\begin{prop}\n\t$\\calL_{X/G}(M\\otimes k[G])\\simeq \\pi_\\ast \\calL_{X/e}(M)$.\n\\end{prop}\n\\begin{prf}\n\tIt is enough to show that for all $U\\subseteq X/G$ open and affine, the two sheaves have the same values. Recall that \n\t\\[\\calL_{X/G}(M\\otimes k[G])(U)\\cong\\Mor(\\pi^{-1}(U),M\\otimes k[G])^G\\cong\\Mor(\\pi^{-1}U\\times G,M)^G\\]\n\twhere the last iso is induced from a kind of $\\times$-hom adjunction.\n\n\tThen one can define a map on the level of varieties $(x,g)\\mapsto (xg,g)$ from $\\pi^{-1}U\\times G$ to itself where \n\tnow $\\pi^{-1}U$ has the trivial action.  Thus we get \n\t\\[\\Mor(\\pi^{-1}U_{tr},M)^G\\simeq\\Mor(\\pi^{-1}U,M)=\\left(\\pi_\\ast\\calL_{X/e}(M)\\right)(U)\\]\n\\end{prf}\n\\subsection{Special Case}\nSay we have $\\pi:X\\to X/G$ which is locally trivial: there are opens $U_i\\subset X/G$ and local sections $\\sigma_i:U_i\\to X$ with \nthe properties:\n\\begin{itemize}\n\t\\item $\\cup U_i=X/G$\n\t\\item $\\pi\\circ\\sigma_i=\\id_{U_i}$\n\t\\item $\\pi^{-1}(U_i)=U_i\\times G$\n\\end{itemize}\nA thing we should discuss is the Grassmannian. Let's work over a field $k$. Recall that $\\operatorname{Gr}(r,n)$ (or similar notation) means the \n$r$-dimensional subspaces of $\\A n$. We have a natural action of $\\GL_n$ on $\\A n$ that induces an action on $\\operatorname{Gr}(r,n)$. Then one can show that \n$\\GL_n/\\operatorname{stab}_{\\GL_n}(v)=\\Im\\pi\\simeq\\operatorname{Gr}(r,n)$, where $v$ is a fixed $r$-dimensional subspace. The stabilizer is a parabolic subgroup. We hope to see more of this \non Wednesday.\n\n\\subsection{More stuff}\n\\begin{thm}\n\tSuppose $G$ acts on $X$ (same assumptions as everywhere) and $\\pi:X\\to X/G$ is locally trivial. Then for $M\\in \\Rep G$,\n\tthe map $X\\times^G M= (X\\times M)/G\\to X/G$ is locally trivial.\n\\end{thm}\n\\begin{rmk}\n\tIn particular, if $H\\le G$ and if $M\\in \\Rep H$, the map $G\\times^H M\\to G/H$ is a vector bundle.\n\\end{rmk}\n\\begin{thm}\n\tLet $H\\le G$ and $M\\in \\Rep H$. Then $G\\times^H M\\to G/H$ has the property that $\\calL(M)$ is the sheaf of local sections of this vector bundle. 
That \n\tis, if $U\\subset G/H$,\n\t\\[\\Gamma(U,G\\times^H M)\\simeq \\calL(M)(U)\\]\n\\end{thm}\n\n\\section{March 4, 2020}\nToday we are talking about \n\\subsection{Distribution algebras for an algebraic group}\nThe idea here is that we know something about representations of algebras and we want to relate\nthe representations of an algebraic group to the reps of algebras.\n\nOne idea is to notice that $\\Rep G\\simeq \\lcomod{k[G]}$ and then dualize to get modules over the dual of $k[G]$.\nThis can blow up, of course, so we have to refine this idea.\n\nLet $X$ be an affine scheme and let $k[X]$ be its ring of functions. Let $x\\in X(k)$ and let \n\\[I_x=\\{f\\in k[X]|f(x)=0\\}\\]\nThen we define \n\\begin{defn}\n\tA linear map $\\mu:k[X]\\to k$ is a distribution of order $\\le n$ supported at $x$ if $\\mu(I_x^{n+1})=0$.\n\tThen $\\operatorname{Dist}_n(X,x)\\cong \\left(k[X]/I_x^{n+1}\\right)^\\ast$ is the collection of all such $\\mu$.\n\n\tThe subset of $\\mu$ satisfying $\\mu(1)=0$ is $\\operatorname{Dist}_n^+(X,x)$ and we write $\\operatorname{Dist}(X,x)$ and $\\operatorname{Dist}^+(X,x)$\n\tfor the unions over all $n$.\n\\end{defn}\nNotice that $\\Dist^+_n(X,x)\\cong \\left(I_x/I_x^{n+1}\\right)^\\ast$, so \n\\[\\Dist_1^+(X,x)\\simeq (I_x/I_x^2)^\\ast\\simeq T_xX.\\]\n\nNow $\\Dist(X,x)$ is a $k[X]$-module via ($f,f_1\\in k[X],\\mu:k[X]\\to k$)\n\\[(f\\mu)(f_1)=\\mu(ff_1)\\]\n\\begin{rmk}\n\tIf $X$ is a (non-affine) scheme over $k$,\n\t\\[\\Dist_n(X,x)\\simeq \\left(\\O_{X,x}/\\frakm_{X,x}^{n+1}\\right)^\\ast\\]\n\\end{rmk}\nLet's go back to the affine case. This is functorial: if we have a map $\\varphi:X\\to Y$, the algebra map \n$\\varphi^\\ast:k[Y]\\to k[X]$ satisfies $(\\varphi^\\ast)^{-1}(I_x)=I_{\\varphi(x)}$. Thus we get a map \n\\[d\\varphi_x:\\Dist_n(X,x)\\to \\Dist_n(Y,\\varphi(x))\\]\n\nAlso $\\Dist(X,x)$ is a cocommutative coalgebra, where the diagonal map $\\Delta:X\\to X\\times X$ gives us a map \n\\[\\Dist(X,x)\\to \\Dist(X,x)\\otimes \\Dist(X,x).\\]\n\n\\begin{ex}\n\tLet $X=\\A1$ and let $x=0$. Recall $k[\\A1]\\simeq k[T]$. Then $I=I_0=(T)$.\n\tTherefore \n\t\\[\\Dist_n(\\A1,0)\\simeq (k[T]/T^{n+1})^\\ast\\]\n\tand \n\t\\[\\Dist(\\A1,0)\\simeq \\oplus_{r\\ge 0}k\\gamma_r\\]\n\twhere we let $\\gamma_r:k[T]\\to k$ via $\\gamma_r(T^n)=\\delta_{n,r}$.\n\n\tTherefore $\\gamma_1=\\frac{\\partial}{\\partial T}$ (evaluated at $0$) and, if the characteristic is zero, $\\gamma_r=\\frac{1}{r!}\\frac{\\partial^r}{\\partial T^r}$.\n\tRemember that we have a natural algebra structure from $\\A1$ as a group scheme. The coalgebra structure comes from $\\gamma_1$ \n\tbeing primitive and extending up to the rest.\n\\end{ex}\n\nNow let $X=G$ be a (connected!) group scheme and $x=1$. Then $\\Dist(G)\\eqdef\\Dist(G,1)$. \nThen we can get an algebra structure in the following way: if $\\mu,\\nu:k[G]\\to k$,\n\\[\\mu\\nu=(\\mu\\bar\\otimes\\nu)\\circ\\Delta\\]\nas we usually do with these things. \n\nNow $\\Dist G$ is a filtered algebra so that \n\\[\\Dist_n(G)\\Dist_m(G)\\subseteq\\Dist_{n+m}(G).\\]\n\\begin{rmk}\n\tIn $k[G]$ we have a comultiplication with a special form. This can be read in Jantzen section I.2.4. 
If $I$ is the \n\taugmentation ideal of $k[G]$, and $f\\in I$,\n\t\\[\\Delta(f)\\in f\\otimes 1+1\\otimes f+I\\otimes I\\]\n\n\tThen multiplicativity of $\\Delta$ gives us \n\t\\[\\Delta(f_1\\cdots f_n)\\in\\prod_{i=1}^n(f_i\\otimes 1+1\\otimes f_i)+\\sum_{r=1}^{n-1}I^r\\otimes I^{n-r}\\]\n\\end{rmk}\n\nIf we have $\\mu\\in\\Dist_n(G)$ and $\\nu\\in\\Dist_m(G)$, then $[\\mu,\\nu]=\\mu\\nu-\\nu\\mu\\in\\Dist_{n+m-1}(G)$.\nThus we have a filtration $\\Dist_n\\subseteq \\Dist_{n+1}$ and the associated graded ring \n$\\oplus \\Dist_i/\\Dist_{i-1}$ is commutative. This is reminiscent of the universal enveloping algebra of a Lie algebra!\n\\begin{ex}\n\tLet $G=\\Ga$ again. Then $k[T]=k[\\Ga]$ and think of $\\gamma_r=\\frac{1}{r!}\\frac{\\partial^r}{\\partial T^r}$.\n\tWhat is the multiplication? It ends up being the divided power algebra structure:\n\t\\[\\gamma_n\\gamma_m=\\binom{n+m}{n}\\gamma_{n+m}.\\]\n\tHow do we see this? Consider \n\t\\[\\Delta(T)=1\\otimes T+T\\otimes 1\\quad\\Rightarrow\\quad \\Delta(T^l)=\\sum_{i=0}^l\\binom{l}{i}T^i\\otimes T^{l-i}\\]\n\tso then \n\t\\[\\gamma_n\\gamma_m(T^l)=\\sum_i\\binom{l}{i}\\gamma_n(T^i)\\gamma_m(T^{l-i})\\]\n\twhere the summands are all zero except when $l=n+m$ and $i=n$.\n\n\tIn characteristic zero, $\\gamma_r$ is honestly $\\frac{1}{r!}\\gamma_1^r$ so it ends up that this is just isomorphic\n\tto a polynomial algebra $k[\\gamma_1]$. This is not true in positive characteristic, because $\\gamma_1^p=p!\\,\\gamma_p=0$, so we get a new \n\tgenerator $\\gamma_p$ and so on.\n\\end{ex}\nNow let $\\frakg=\\Lie G$. Then since $\\frakg\\simeq\\Dist^+_1(G)$ which maps into $\\Dist(G)$, the universal property \nof the universal enveloping algebra gets us the map \n\\[\\calU(\\frakg)\\to \\Dist(G)\\]\n\\begin{prop}\n\tIn characteristic zero, this map is an isomorphism.\n\\end{prop}\nIn positive characteristic, this fails. However we still have an injective map \n\\[u(\\frakg)\\hookrightarrow\\Dist(G)\\]\nwhere $u(\\frakg)$ is the usual restricted enveloping algebra.
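\nWe can see all of this in the running example $G=\\Ga$ in characteristic $p$ (a quick check using the divided power relations above): here $\\frakg=k\\gamma_1$ with trivial bracket and trivial $p$-th power operation, so $u(\\frakg)\\simeq k[\\gamma_1]/(\\gamma_1^p)$, and since $\\gamma_1^r=r!\\,\\gamma_r$ its image in $\\Dist(\\Ga)$ is \n\\[\\bigoplus_{0\\le r<p}k\\gamma_r=\\Dist({\\Ga}_{(1)}),\\]\nthe distributions on the first Frobenius kernel. The higher divided powers $\\gamma_p,\\gamma_{p^2},\\dots$ lie outside the image of $\\calU(\\frakg)=k[\\gamma_1]$, which is why the characteristic zero proposition fails here.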
\n\n\\section{March 6th, 2020}\nLet $M\\in\\Rep G$ and let $\\mu\\in \\Dist G$. Then define an action of $\\Dist G$ on $M$ by \n\\[\\mu m=(\\id\\bar\\otimes \\mu)(\\Delta_M(m))\\]\nand we get a functor from $\\Rep G$ to $\\Dist G$-modules. When $G$ is an algebraic group scheme over $k$,\nthe map on homs is an isomorphism. Notice: $\\Rep G$ lands in locally finite $\\Dist G$-modules, \nbut not all locally finite $\\Dist G$-modules come from $G$-mod.\n\nWe're going fast today because I asked about $\\operatorname{PGL}_n$. Whoops. Gonna just listen.\n\nCheck out Jantzen for the development of roots and weights and all. But here are some theorems!\n\\begin{thm}\n\tLet $\\lambda$ be a dominant weight. Then $H^0(\\lambda)\\eqdef\\Ind_B^G k_\\lambda$ is finite dimensional and nonzero, and $H^i(\\lambda)\\eqdef\\scrR^i\\Ind_B^G k_\\lambda=0$ for all $i>0$ (Kempf vanishing).\n\\end{thm}\n\\begin{thm}\n\t$H^0(\\lambda)$ has a simple socle called $L(\\lambda)$. It's obviously also finite dimensional.\n\\end{thm}\n\\begin{thm}\n\t$\\{L(\\lambda)|\\lambda\\in X^+\\}$ is a complete and irredundant set of simple representations of $G$.\n\\end{thm}\n\n\n\n\\end{document}", "meta": {"hexsha": "26ca7414f88594710ecab800c8f1070140600490", "size": 158069, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Algebraic Groups/Algebraic-Groups.tex", "max_stars_repo_name": "NicoCourts/Algebra", "max_stars_repo_head_hexsha": "2c63123ce11bf8a75bff5530c1048669f29e87f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-09-27T17:11:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-29T01:14:02.000Z", "max_issues_repo_path": "Algebraic Groups/Algebraic-Groups.tex", "max_issues_repo_name": "NicoCourts/Algebra", "max_issues_repo_head_hexsha": "2c63123ce11bf8a75bff5530c1048669f29e87f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Algebraic Groups/Algebraic-Groups.tex", "max_forks_repo_name": "NicoCourts/Algebra", "max_forks_repo_head_hexsha": "2c63123ce11bf8a75bff5530c1048669f29e87f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-10T00:24:49.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-10T00:24:49.000Z", "avg_line_length": 50.7607578677, "max_line_length": 432, "alphanum_fraction": 0.677602819, "num_tokens": 53718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.7879312056025699, "lm_q1q2_score": 0.5611899539706473}}
{"text": "\n\\chapter{The Response of Multiple Traits to Selection.}\nThe fitness of an organism depends on the outcome of many different\norganismal processes and phenotypes. Thus natural selection is often acting on many\nphenotypes in concert. In some cases, the various directions in which selection\ntries to pull the population's phenotypes cannot all be satisfied at once. Such fitness tradeoffs occur when selection acts on genetically correlated phenotypes in \ncontradictory ways. \n\nTo understand the short-term consequences of selection on multiple\nphenotypes we can generalize the Breeder's equation  to multiple traits\\cite{lande:79}. Considering two traits we can write our responses in both traits as\n\\begin{eqnarray}\nR_1 & = V_{A,1} \\beta_1 + V_{A,1,2} \\beta_2 \\nonumber \\\\\nR_2 & = V_{A,2} \\beta_2 + V_{A,1,2} \\beta_1  \\nonumber \n\\end{eqnarray}\nwhere the $1$ and $2$ index our two different traits. Here $V_{A,1} $\nand $ V_{A,2}$ are the additive genetic variances for traits $1$ and $2$\nrespectively, while $V_{A,1,2}$ is the additive genetic covariance between the\ntraits. Our selection gradient for trait 1, $\\beta_1$, represents the\nchange in fitness as you change trait 1 alone, holding the other traits\nconstant. These $\\beta$s can be estimated by multivariate\nregression; see below. \nThe multivariate breeder's equation is a statement that our response in\nany one phenotype is modified by selection on other traits that\ngenetically covary with that trait. \n\nWe can also write this equivalently in matrix form, for an arbitrary\nnumber of traits. We write our change in the mean of our multiple phenotypes within a generation as the vector $\\bf{S}$ and our response in the next generation as\nthe vector $\\bf{R}$. These two quantities are related by \n\\begin{equation}\n\\bf{R} = \\bf{G} \\bf{V}^{-1} \\bf{S} = \\bf{G} \\boldsymbol{ \\beta}\n\\end{equation}\n where $\\bf{V}$ and $\\bf{G}$ are our matrices of the\n variance-covariance of phenotypes and additive genetic values\n (eqns. \\eqref{G_matrix} and \\eqref{P_matrix}) and\n $\\boldsymbol{\\beta}$ is a vector of selection gradients (i.e. the\n change within a generation as a fraction of the total phenotypic\n variance). Note that $\\boldsymbol{\\beta} = \\bf{V}^{-1} \\bf{S} $, such\n that each $\\beta$ represents the selection gradient on a trait\n accounting for its phenotypic covariances with other traits. \n\n%\\gc{Need to add example of using this.}\n\n% Finches https://www.jstor.org/stable/pdf/2410334.pdf?refreqid=excelsior%3A17ac61589f5347fdae6d84ea6bff7288
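\n\nTo illustrate with made-up numbers (these are not estimates from any real population): suppose two traits have additive genetic variances $V_{A,1}=V_{A,2}=0.5$ and additive genetic covariance $V_{A,1,2}=0.25$, and that we have estimated selection gradients $\\beta_1=0.2$ and $\\beta_2=-0.1$. Then \n\\begin{eqnarray}\nR_1 & = 0.5\\times 0.2 + 0.25\\times(-0.1) = 0.075 \\nonumber \\\\\nR_2 & = 0.5\\times(-0.1) + 0.25\\times 0.2 = 0 \\nonumber \n\\end{eqnarray}\nso trait 1 increases, while trait 2 shows no response at all: the direct selection pushing trait 2 down ($\\beta_2<0$) is exactly cancelled by the indirect pull of its positive genetic covariance with trait 1.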
\n\n As an example of the outcome of selection on multiple phenotypes,\n consider the bout of selection measured by\n \\citet{grant1995predicting}  in the medium ground finch ({\\it Geospiza fortis}), one of Darwin's finches.  They measured 634 birds in '76, of\n which only $15\\%$ survived to 1977. The birds who survived were\n heavier and had longer, deeper bills than average.\\\\\n \\begin{table}\n \\begin{tabular}{lcccc}\\\\\n {\\small Trait} & {\\small Mean before Selection (1976)} & $S$ & $\\beta$\n   & {\\small Mean next gen. (1978)}\\\\\n   \\hline\n Weight & 16.06 & 0.74 & 0.477 & 17.13 \\\\\nBill Length & 10.63 &  0.54  & -0.144  & 10.95 \\\\\n  Bill Depth & 9.21 & 0.36 & 0.528  &  9.70 \n \\end{tabular}\n \\caption[][2cm]{Trait means and selection differentials and gradients from an\n   episode of selection in {\\it Geospiza fortis}. Numbers from tables\n   2 \\& 3 of \\citet{grant1995predicting}.}\n \\end{table}\n Accounting for the phenotypic covariances among the traits ($\\bf{V}^{-1}$), they found that both\n weight and bill depth showed direct directional selection towards larger\n values (positive $\\beta$s). However, bill length showed weak direct\n selection towards shorter beaks (negative $\\beta$); its positive selection differential reflects its positive phenotypic correlation with\n bill depth and weight, with most of the direct selection being on weight\n and bill depth and dragging bill length along. Looking at the next\n generation, all three traits significantly increased. Thus,\n despite selection possibly favouring shorter bill lengths, and\n certainly not favouring long bills, bill length increased in the next\n generation due to its positive genetic covariance with two traits\n that selection was acting to increase. \n\n\n %% quotes about tradeoffs\n\n\\begin{question}\nYou collect observations of red deer within a generation, recording an\nindividual's number of offspring and phenotypes for a number of traits which are known to\nhave additive genetic variation. Using your data, you construct the plots shown in\nFigure \\ref{fig:red_deer_Q} (standardizing the phenotypes). Answer the following\nquestions by choosing one of the bold options. Briefly justify each of your answers with reference to the breeder's\nequation and multi-trait breeder's equation. \\\\\n{\\bf A)}\tLooking just at figure \\ref{fig:red_deer_Q} A, in what direction do you expect male antler size to evolve? \\\\\n{\\bf Insufficient information, increase, decrease.}\\\\\n\n{\\bf B)}\tLooking just at figures \\ref{fig:red_deer_Q} B and C, in what direction do you expect male antler size to evolve? \\\\\n{\\bf Insufficient information, increase, decrease.}\\\\\n\n{\\bf C)}\tLooking at figures \\ref{fig:red_deer_Q} A, B, and C, in what direction do you expect male antler size to evolve? \\\\\n{\\bf Insufficient information, increase, decrease.}\\\\\n\\end{question}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Red_deer_selection.pdf}\n\\end{center}\n\\caption{ Observations of red deer within a generation, recording an\nindividual's number of offspring and phenotypes, which are known to\nhave additive genetic variation. The panels left to right are\nA--C. (Data are simulated. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Red_deer_MV_selection.R}) } \\label{fig:red_deer_Q}\n\\end{figure}\n\n\nAs an example of correlated responses to selection, consider the  \\citet{wilkinson:93} selection experiment on stalk-eyed\n flies ({\\it Cyrtodiopsis  dalmanni}). Stalk-eyed flies have evolved amazingly long eye-stalks. In the lab, \\citeauthor{wilkinson:93} established six populations of\n wild-caught flies and selected up and down on the male eye-stalk to body-length ratio for 10 generations (left plot in Figure\n \\ref{fig:Stalk_eyed_response}). Despite the fact that he did not\n select on females, he saw a correlated response in the females from\n each of the lines (right plot), because of the genetic correlation\n between male and female body proportions. 
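\n\nIn terms of our two-trait equations above, the female ratio is a trait under no direct selection ($\\beta_2=0$) that nonetheless responds because of its genetic covariance with the selected male trait:\n\\[R_2 = V_{A,1,2}\\,\\beta_1.\\]\nThis correlated response is the logic behind part {\\bf B} of the question below.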
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width= \\textwidth]{Journal_figs/Quant_gen/stalk_eyed_flies/stalk_eyed_flies_response.pdf}\n\\end{center}\n\\caption[][-2cm]{ \\citeauthor{wilkinson:93} selected two populations of flies for\n   an increased eye-stalk to body-length ratio in males (means shown as\n up triangles), and two for a\n decreased ratio (down triangles), by taking the 10 males with the highest (lowest)\n ratio out of 50 measured. He also established two control populations\n (circles). He constructed each generation of females by sampling 10\n at random from each population.  Data from \\citet{wilkinson:93}. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Journal_figs/Quant_gen/stalk_eyed_flies/Wilkinson_93_response_to_sel.R} } \\label{fig:Stalk_eyed_response}   %\\cite{potti:11} \n\\end{figure}\n\n\\begin{marginfigure}[1cm]\n\\begin{center}\n\\includegraphics[width=0.9 \\textwidth]{illustration_images/Quant_gen/Stalk_eyed_flies/WulpPlateVIIIjpg.jpg}\n\\end{center}\n\\caption{Stalk-eyed Flies ({\\it Diopsidae}).  \\BHLNC{Diptera. van der Wulp. 1898.}{https://www.biodiversitylibrary.org/item/39414\\#page/631/mode/1up}{Smithsonian Libraries}} \\label{fig:Stalk_eyed_flies}  \n\\end{marginfigure}\n\\begin{question}\n\nAt the end of ten generations in \\citeauthor{wilkinson:93}'s experiment (Figure\n\\ref{fig:Stalk_eyed_response}), the males from the up- and down-selected\nlines had mean eye-stalk to body ratios of $1.29$ and $1.14$\nrespectively, while the females from the up- and down-selected lines\nhad means of $0.9$ and $0.82$. \\\\\n{\\bf A)} \\citeauthor{wilkinson:93} estimated that by selecting the top/bottom 10 males, he had on average shifted the mean body ratio by 0.024 within\neach generation. What is the male heritability of the eye-stalk to body-length ratio?\n\n{\\bf B)} Assume that the additive genetic variances of male and female phenotypes are\nequal and that there is no direct\nselection on female body-proportion in this experiment, i.e. that all of\nthe response in females is due to correlated selection. Can you\nestimate the male-female genetic correlation of the eye-stalk ratio? \n\\end{question}\n\n\\begin{figure*}\n\\begin{center} \n\\includegraphics[width= \\textwidth]{Journal_figs/Quant_gen/Garter_snakes_Brodie/Garter_snakes_Brodie.pdf}\n\\end{center}\n\\caption{ {\\bf Left)} The garter snake fitness surface estimated by \\citet{brodie1992correlational};\n lighter colours indicate higher\n relative fitness. {\\bf Middle)} The phenotypes of all of the snakes released\n   by Brodie; each dot is an individual. {\\bf Right)} The phenotypes\n   of surviving snakes. Note how snakes in the top-left and bottom-right\n   corners are over-represented among the survivors. Data from  \\citet{brodie1992correlational}.\n  \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Journal_figs/Quant_gen/Garter_snakes_Brodie/Garter_snakes_Brodie.R}. } \\label{fig:Garter_snakes_Brodie}\n\\end{figure*} \n\n%s, where the snakes with thehighest probability of survival perform uninterrupted flight if striped but flee evasivelyif spotted or unstriped.  
\n\n% Eutaenia ordinoides\n% northwestern garter snake (Thamnophis ordinoides)\n%https://commons.wikimedia.org/wiki/File:The_natural_history_of_Washington_territory,_with_much_relating_to_Minnesota,_Nebraska,_Kansas,_Oregon,_and_California,_between_the_thirty-sixth_and_forty-ninth_parallels_of_latitude,_being_those_(14574590600).jpg\n\n\\paragraph{Estimating multivariate selection gradients}\nWe can estimate multivariate directional ($\\beta$) and quadratic ($\\gamma$) selection\ngradients just as we did for a single trait, using linear\nand quadratic models (eqns \\eqref{fitness_regression} and \\eqref{fitness_regression_stab}). For example, for two traits $x_1$ and $x_2$ we can write\n\\begin{equation}\nw_i \\sim \\beta_1 x_{1,i} + \\nicefrac{1}{2} \\gamma_1 x_{1,i}^2 + \\beta_2 x_{2,i} + \\nicefrac{1}{2} \n\\gamma_2 x_{2,i}^2  + \\gamma_{1,2} x_{1,i} x_{2,i}  + \\wbar \\label{fitness_regression_MV}\n \\end{equation}\nwhere $\\beta_1$ and $\\gamma_1$ are the directional and quadratic\nselection gradients for trait one, and similarly for trait two \\citep{lande1983measurement}. The\ncorrelational (covariance) selection gradient between the traits is given by\n$\\gamma_{1,2}$. This technique for measuring multivariate selection is\nsometimes called `Lande-Arnold regression'.\n\n\\citet{brodie1992correlational}'s work provides a nice example of\nselection on multiple predation-avoidance traits in northwestern garter snakes\n({\\it Thamnophis ordinoides}). \\citeauthor{brodie1992correlational} released hundreds of snakes born in\nthe lab into the wild, and then performed mark-recapture observations\nto monitor their fate.\n\\begin{marginfigure}\n\\begin{center} \n\\includegraphics[width= 0.75\\textwidth]{illustration_images/Quant_gen/Garter_snake/Eutaenia_cooperi.jpg}\n\\end{center}\n\\caption{Northwestern garter snake ({\\it Eutaenia cooperi}, now {\\it\n    Thamnophis ordinoides})\n\\BHLNC{The natural history of Washington territory, with much relating to\n  Minnesota, Nebraska, Kansas, Oregon, and California (1859). Cooper\n  J.G. and Suckley,\n  G. }{https://commons.wikimedia.org/wiki/File:The_natural_history_of_Washington_territory,_with_much_relating_to_Minnesota,_Nebraska,_Kansas,_Oregon,_and_California,_between_the_thirty-sixth_and_forty-ninth_parallels_of_latitude,_being_those_(14574590600).jpg}{Smithsonian\n  Libraries} } \\label{fig:Garter_snake}\n%, between the thirty-sixth and forty-ninth parallels of latitude, being those parts of the final reports on the survey of the Northern Pacific railroad route, containing the climate and physical geography, with full catalogues and descriptions of the plants and animals collected from 185 to 1857\n\\end{marginfigure}  Before releasing them, he measured how stripy\nthey were, and their behavioural tendency to reverse direction\nduring simulated flight from a predator. His quadratic fitness surface is shown in Figure\n\\ref{fig:Garter_snakes_Brodie}, based on fitting the\nregression given by eqn \\eqref{fitness_regression_MV} to juvenile\nsurvival. He found that neither the single-trait directional nor the quadratic\ngradients were significant, i.e. there was no apparent selection on one \ntrait ignoring the other. However, there was a significant negative\ncorrelational gradient ($\\gamma_{1,2}<0$). The individuals with the highest chance of survival are\n{\\it either} highly striped and perform few reversals (top left\ncorner), {\\it or} have little striping but reverse course frequently\n(bottom right corner). 
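\n\nWe can see why a negative $\\gamma_{1,2}$ gives this saddle-shaped surface directly from eqn \\eqref{fitness_regression_MV}. Keeping only the significant term (a deliberately simplified, hypothetical surface for standardized traits),\n\\[w(x_1,x_2) \\approx \\wbar + \\gamma_{1,2}\\, x_1 x_2,\\qquad \\gamma_{1,2}<0,\\]\nwhich is largest when $x_1$ and $x_2$ take opposite signs: fitness rises towards the top-left and bottom-right corners (striped with few reversals, or unstriped with many reversals) and falls along the $x_1=x_2$ diagonal, matching Figure \\ref{fig:Garter_snakes_Brodie}.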
\n\n%\\gc{Add section of multivariate fitness landscapes}\n\n%Drosophila life history chapter\n%https://www.genetics.org/content/214/1/3\n\n\\section{Some applications of the multivariate trait breeder's equation}\n\nThe multivariate breeder's equation has a lot of different uses in\nunderstanding the response of multiple traits to selection. It also\noffers strong insights into the mechanistic underpinnings of kin selection and sexual selection. We'll discuss these next.\n\n\\subsection{Hamilton's Rule and the evolution of altruistic and\n  selfish behaviours}\n\\begin{quotation}\n``~`The only reason for making a buzzing-noise that I know of is\nbecause you're a bee.' Then [Pooh] thought another long time, and\nsaid: `The only reason for being a bee that I know of is to make\nhoney...And the only reason for making honey is so as I can eat it.'~''\n--Winnie-the-Pooh, \\citet{milne}.  \n\\end{quotation}\n\n%https://www.flickr.com/photos/bibliodyssey/albums/72157610318114895\n\n%\u201cIf it could be proved that any part of the structure of any one species had been formed for the exclusive good of another species, it would annihilate my theory, for such could not have been produced through natural selection.\u201d\n\n%% https://commons.wikimedia.org/wiki/File:1911_Britannica_-_Bee_-_Apis_mellifica.png\n%%bear with bee hive https://www.flickr.com/photos/internetarchivebookimages/20323812950/\n\nOne of the seismic shifts caused by Darwin's work was the realisation that organisms don't exist for the benefit of other\nindividuals or other species. \nBees didn't evolve to pollinate flowers, any more than they \nevolved to make honey for bears. If we can say that there\nis a `reason' why an organism exists, it is only to leave offspring to the\nnext generation.  Pooh can be forgiven for straying from Darwinian thought, as he exists\nfor the benefit of Christopher Robin and other children's bedtime\nstories. \n\nHowever, there's a wrinkle to this Darwinian view. Worker bees don't make honey\nto benefit their own offspring: they are sterile and are working\nfor the benefit of the queen bee and her offspring. \n\\marginnote{\\citet{smith1964kin} coined the name kin selection to describe Hamilton's approach to this problem. It's also sometimes called the inclusive fitness approach, as we need to include not just one individual's fitness but the weighted sum of the fitnesses of all their relatives. }\nIndividuals frequently behave in ways that sacrifice their own fitness for the\nbenefit of others. That selection favours such apparent acts of altruism is puzzling at first sight. \\citet{hamilton1964genetical,hamilton1964genetical2} supplied the first general evolutionary explanation of such altruism. \nHis intuition was that while an individual loses out on some reproductive output, the alleles underlying an altruistic behaviour can still spread in the population if this cost is outweighed by the benefits gained \nthrough the transmission of these alleles by related individuals. Note that this means that the\nallele is not acting in a self-sacrificing manner, even though the individuals carrying it may as a result. \n\nAltruism reflects social interactions. So as a simple model let's imagine that individuals interact in pairs, with our focal\nindividual $i$ being paired with an individual $j$.  
\nImagine that individuals have two possible phenotypes $X=1$ or $0$,\ncorresponding to providing or withholding some small act of `altruism'\n(we could just as easily flip these labels and call them an unselfish\nact and a selfish act respectively). \nOur interacting pairs of individuals could, for example, be siblings sharing a\nnest. The altruistic trait could be as simple as growing at a slightly slower rate so as to reduce sibling competition for food from\nparents, or as complicated as offspring foregoing their own reproduction to help their parents raise their siblings.\n\nProviding the altruistic act has a cost $C$ to the fitness of our individual and failing to provide this act has no cost. Receiving this altruistic act confers a fitness benefit $B$ over individuals who did not receive this act. \\citeauthor{hamilton1964genetical}'s rule states that such a trait will spread through the population if \n\\begin{equation}\n 2F B > C \n\\end{equation}\nwhere $F$ is the average kinship coefficient between the interacting\nindividuals ($i$ and $j$). In the usual formulation of Hamilton's Rule our $2F$ is replaced by the `coefficient of relationship', which is the\n  proportion of alleles shared between the individuals. Here we use\n  two times the kinship coefficient to keep things in line with our \n  notation for these chapters. Note that if our individuals are themselves inbred we need\nto be a little more careful to reconcile these two measures.\nSo the altruistic behaviour will spread, even if it is costly to the individual, if its cost is paid off by the benefit to sufficiently related individuals. \n\nAs one example of kin-selection consider \\citet{krakauer2005kin}'s work on\nco-operative courtship in wild turkeys ({\\it Meleagris\n  gallopavo}). Male turkeys often form display\npartnerships, with a subordinate male helping a dominant male with\ndisplaying to females and defending the females from other groups of males. \\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= \\textwidth]{illustration_images/Quant_gen/Turkey/turkey.jpg}\n\\end{center}\n\\caption{Turkey  ({\\it Meleagris\n  gallopavo}). \\BHLNC{Bilder-atlas zur Wissenschaftlich-popul\u00e4ren\nNaturgeschichte der V\u00f6gel in ihren s\u00e4mmtlichen Hauptformen (1864). Wien,K.K. Hof}{https://www.biodiversitylibrary.org/page/33050564\\#page/199/mode/1up}{Smithsonian Libraries} } \\label{fig:turkey}\n\\end{marginfigure}  These pairs are often full brothers ($F=0.25$),\nwith the subordinate male often being the younger of the two. The\nsubordinate male often loses out on mating opportunities over his\nentire lifetime by acting as a wingman for his older \nbrother. \\citet{krakauer2005kin} estimated that dominant males gained\nan extra $6.1$ offspring when displaying with a partner, compared to males\nwho display alone. The subordinate males, meanwhile, lose out on fathering $0.9$\noffspring compared to solitary males. 
Thus the cost of helping borne by\nsubordinate males is more than compensated for by the fitness gains of\ntheir brothers ($(2 \\times 0.25)  \\times 6.1 > 0.9$), and so the\nevolution of this  altruistic  helping in co-operative courtship is potentially well explained by kin-selection \\citep[see ][for more analysis]{akccay2016}.\n\\begin{question}\nHow would this answer be changed if the male turkey partnerships were only\n$\\nicefrac{1}{2}$ sibs, or first cousins?\n\\end{question}\n\n\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= \\textwidth]{figures/Response_to_sel/Hamiltons_rule_B_C.pdf}\n\\end{center}\n\\caption{{\\bf Top)} The fitness of individual $i$ as a function of\n  their behavioural phenotype, where altruistic/non-altruistic\n  behavioural phenotypes are encoded as $1$ and $0$ respectively. The\n  direct fitness cost of behaving altruistically is $C$. {\\bf Bottom)}\n  The fitness of our focal individual $i$ as a function of the\n  behavioural phenotype of their interacting partner ($j$). Our focal\n  individual gets an increase $B$ in fitness if their partner behaves\n  altruistically. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Quant_gen/Hamilton_rule_B_C.R}} \\label{fig:Hamilton_B_C}\n\\end{marginfigure}\n\n\nWhere does this result come from? Well, we can use our quantitative genetics framework to gain some \nintuition, deriving a simple version of Hamilton's Rule by thinking\nabout the phenotypes of an individual's kin as genetically correlated\nphenotypes. To sketch a proof of this result, let's assume that our focal individual $i$'s fitness can be written as \n\\begin{equation}\nW(i,j)= W_0 + W_i +W_j\n\\end{equation}\nwhere $W_i$ is the contribution to the fitness of individual $i$ due\nto their own phenotype, and $W_j$ is the contribution to our\nindividual $i$'s fitness due to the interacting individual $j$'s behaviour (i.e. $j$'s phenotype).\nWith the benefit $B$ and cost $C$, our $W(i,j)$ are depicted in Figure \\ref{fig:Hamilton_B_C}. \n\nFollowing our multivariate breeder's equation, we can write the expected change of our behavioural phenotype as \n\\begin{equation}\nR = \\beta_i V_A + \\beta_j V_{A,i,j}.\n\\end{equation}\nOur altruistic phenotype is increasing in the population if $R>0$, i.e. if \n\\begin{equation}\n  \\beta_i V_A + \\beta_j V_{A,i,j}  > 0 \\end{equation}\n\\marginnote{ Here we're following a simplified version of\n  \\citet{queller1992quantitative}'s treatment, to re-derive Hamilton's rule in a quantitative genetics framework (Hamilton's original papers did this in a population genetics framework).} The slope $\\beta_i$ of the regression of fitness on our focal individual's\nbehavioural phenotype is proportional to $-C$. The slope\n$\\beta_j$ of the regression of our focal individual's fitness on\nour interacting partner's phenotype is proportional to $B$ (with the same\nconstant of proportionality). Therefore, our altruistic phenotype is increasing in the population if\n\\begin{eqnarray}\n  \\beta_i V_A + \\beta_j V_{A,i,j} & > 0  \\nonumber  \\\\\n B \\frac{V_{A,i,j}}{V_A} & >  C  \\label{eqn:Covar_Hamilton}\n\\end{eqnarray}\nSo what's the average genetic covariance between\nindividual $i$ and $j$'s altruistic phenotype? Well, it's the same\nbehavioural phenotype in both individuals, so the phenotypes are\ngenetically correlated if our individuals are related to each\nother. 
The covariance of the same phenotype between two individuals is\njust $2 F_{i,j} V_A$ (see \\eqref{additive_covar_general_rellys}). So\nour altruistic phenotype is increasing in the population if\n\\begin{eqnarray}\n   B\\frac{2 F_{i,j} V_A}{V_A} & > C \\nonumber  \\\\\n  2 F_{i,j} B & > C \n\\end{eqnarray}\nSeen from this perspective, \\citeauthor{hamilton1964genetical}'s rule\nis simply a statement that altruistic behaviours can spread via\nkin-selection if the fitness cost of altruism to an allele is paid back, on average, through the benefits of interacting with altruistic relatives (kin).\n\n%  1791 book on Hymenoptera https://twitter.com/BioDivLibrary/status/1174773646113562624\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width= 0.75 \\textwidth]{illustration_images/Quant_gen/Hymenoptera/Hymenoptera.jpg}\n\\end{center}\n\\caption{A selection of the huge diversity of  Hymenoptera. \\BHLNC{Naturgeschichte, Klassification und Nomenclatur der\n    Insekten vom Bienen, Wespen und Ameisen. Christ, JL}{https://www.biodiversitylibrary.org/item/165219?\\#page/6/mode/1up}{University of Illinois Urbana Champaign} } \\label{Hymenoptera}\n\\end{figure}\n\n \\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= 0.7 \\textwidth]{illustration_images/Quant_gen/honey_ant/honey_ant.png}\n\\end{center}\n\\caption{Australian Honey-pot Ant ({\\it Camponotus inflatus}). Honey ants are gorged with\n  honeydew collected by their nest mates, till they swell to the size\n  of grapes, and are used as a food storage\n  device.  \\BHLNC{Ants,\n  bees, and wasps; a record of observations on the habits of the\n  social Hymenoptera (1897) Lubbock, J.}{https://www.biodiversitylibrary.org/page/9657360\\#page/485/mode/1up}{Smithsonian Libraries}  } \\label{fig:honey_ant}\n\\end{marginfigure} \nUnder kin selection, relatedness and the breeding structure of populations are\nhypothesized to be key factors in determining the evolution of altruistic behaviours. \nOne of the most impressive examples of the evolution of altruism is the\nrepeated evolution of eusociality, where sterile castes have evolved\nto help rear their siblings rather than produce their own offspring. Eusociality has evolved at least eight independent times in\n Hymenoptera (bees, wasps, and ants). There's huge variation in\n mating systems in Hymenoptera, from high levels of multiple mating to monandry. \\citet{hughes2008ancestral} conducted a\ncomparative phylogenetic analysis of mating system across hundreds of Hymenoptera species. They found that each of the eight\neusocial clades had monandry, females mating with a single male, as an ancestral\nstate. Thus, eusociality initially evolved in populations where\nrelatedness was maximized among siblings.\n\n%Once eusociality evolved,\n%the females in some eusocial species have evolved to mate multiple\n%times but this only occurs in species where non-reproductive females\n%can not regain fertility, i.e. \n
\n\n% Special eg on kinship, good for other egs https://www.cell.com/current-biology/issue?pii=S0960-9822(18)X0012-8\n\n%% reciprocal altuisim Quller\n%% https://www.pnas.org/content/pnas/108/Supplement_2/10792.full.pdf\n% http://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=1000&context=sysc_fac\n\n% https://sci-hub.se/https://www.nature.com/articles/318366a0\n%% https://archive.org/stream/allaboutanimalsf00newy/allaboutanimalsf00newy#page/53/mode/1up\n\n% http://darwin-online.org.uk/content/frameset?pageseq=23&itemID=F8.2&viewtype=side\n\n\n%%\n\n\\input{Chapters/Notes_on_new_sections/alt_alturism}\n\n\\subsection{Sexual selection and the evolution of mate preference by indirect benefits}\n\n\nOrganisms often put an enormous effort into finding and attracting mates, sometimes at\na considerable cost to their chances of survival. Why are individuals so choosy about who they mate with, particularly when their choice seems to be based on elaborate characters and arbitrary displays\nthat surely lower the viability of their mates?  \n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= \\textwidth]{illustration_images/Quant_gen/glow_worm/18011950889_b2a1a1323e_z.jpg}\n\\end{center}\n\\caption{Male (left) and female (right) common glow worm  ({\\it Lampyris\n    noctiluca}). \\BHLNC{The animal kingdom : arranged after its organization;\n  forming a natural history of animals, and an introduction to\n  comparative anatomy. (1863) Cuvier, G.}{https://www.flickr.com/photos/internetarchivebookimages/18011950889/in/photolist-Uqr16w-UqqZZu-4VxYcU-ZiyCTu-8DEFyz-6pxZpd-ZiyCXC-yFyBF-GEoD6s-u3DKnd-6pxZrd-4q7euj-5KsWen-yqphkW-xvGrfw-ytA3rP-ybfeBA-yqi7oo-tFL3h3-xvJnrz-BPm4Jd-wLVg9F-trDXtp-wVbxw2-y6MpEq-BWCQVA-vb793E-tHGekh-u4av2t-oewQcC-ow671H-t6ZJg7-u4Ap1m-xntXw9-ovPEMu-u3DFws-vbeMw2-tHMJiY-u1FJpo-tLqTCw-fFFAEG-y1Pfmz-u1FHaE}{University of Toronto - Gerstein Science Information Centre} } \\label{fig:glow_worms}\n\\end{marginfigure}\n% Slug mating https://commons.wikimedia.org/wiki/File:Limax_maximus_mating.jpg\n% https://www.flickr.com/photos/internetarchivebookimages/20406571412/\n% https://www.lastwordonnothing.com/2012/08/10/tgipf-slug-sex-redux/\n\n\n\n\nOne major reason why individuals evolve to be choosy about who they mate with is that it can directly impact their fitness. By choosing a mate with\nparticular characteristics, individuals can gain more parental care for\ntheir offspring, avoid parasites, or secure a mate with higher fertility. For example,  female glow-worms flash at night to attract males flying by. Females with larger, brighter lanterns have higher fecundity, so\nmales with a preference for brighter flashes will gain a direct benefit to their own fitness. (Note that males will benefit even if these differences in female fecundity are entirely driven by differences in environment, and thus non-heritable.) Indeed male glow worms have evolved to be attracted to brighter\nflashing lures.  \n\\begin{marginfigure}[-0.5cm]\n\\begin{center}\n\\includegraphics[width= \\textwidth]{Journal_figs/Quant_gen/glow_worm_flashes/glow_worm_flashes.pdf}\n\\end{center}\n\\caption{Female glow worms with the largest, and therefore\n  brightest, lanterns have the highest fecundity. Data from\n  \\citet{hopkins2015m}. 
\\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Journal_figs/Quant_gen/glow_worm_flashes/glow_worm_flashes.R} } \\label{fig:glow_worms_lantern}\n\\end{marginfigure}\nHowever, even in the absence of direct benefits of choice, selection can still indirectly favour the evolution of choosiness. These\nindirect benefits occur because individuals can have higher fitness\noffspring by choosing a mate whose phenotype indicates high viability\n(the so-called `good genes' hypothesis), or by choosing a mate whose\nphenotype is simply attractive, and likely to produce similarly\nattractive offspring (the `runaway' or `sexy sons' hypothesis).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Response_to_sel/Genetic_corr_assort_mating.pdf}\n\\end{center}\n\\caption{{\\bf Left)} Assortative mating between males and\n  females. Males vary in a display trait (e.g. tail length), females\n  vary in their preference for this trait. We see evidence of\n  assortative mating as females with a preference for a particular\n  value of the male trait tend to mate with those males. {\\bf Right)} As both\n  male trait and female preference are genetic, this establishes a\n  genetic correlation in the next generation. (Data are simulated. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Rcode/Quant_gen/QT_cross_assortative_mating_2_kids.R})} \\label{fig:assort_mating_2_trait}\n\\end{figure}\n\n\nWe'll denote a display trait, e.g. tail length, in males by $\\mars$ and a preference\ntrait in females by $\\venus$. Our display trait is under direct selection in males, such that its response to selection can be written as\n\\begin{equation}\nR_{\\mars} = \\beta_{\\mars} V_{A, \\mars}\n\\end{equation}\nLet's assume that the female preference trait, the degree to which\nfemales are attracted to long tails, is not under direct\nselection ($\\beta_{\\venus}=0$). Then the response to selection of the\npreference trait can be written as\n\\begin{eqnarray}\nR_{\\venus} &=& \\beta_{\\venus}V_{A,\\venus}  + \\beta_{\\mars} V_{A, \\venus\n  \\mars} \\nonumber \\\\\n&=& \\beta_{\\mars} V_{A, \\venus  \\mars}\n\\end{eqnarray}\nSo the female preference will respond to selection if it is\ngenetically correlated with the male trait, i.e. if $V_{A, \\venus\n  \\mars}$ is not zero. There's a number of different ways this genetic correlation could arise; the\nsimplest is that the loci underlying the male trait may have a\npleiotropic effect on female preference. However, female preference\nmay often have quite a distinct genetic basis from male display traits.\n\nA more general way in which trait-preference genetic correlations may arise is through assortative mating. As females vary in their\ntail-length preference, the ones with a preference for longer\ntails will mate with long-tailed males, and the opposite for females\nwith a preference for shorter tails. Therefore, a\ngenetic correlation between display and preference traits will\nbecome established (see Figure \\ref{fig:assort_mating_2_trait}). \n\nThe males with the longer tails will also carry the alleles\nassociated with the preference for longer tails, as their long-tailed\ndads tended to mate with females with a genetic preference for long\ntails. Similarly, the males with shorter tails will carry alleles associated with the preference for\nshorter tails. Thus if there is direct selection for males with longer tails, then\nthe female preference for longer tails will increase too, as it is\ngenetically correlated via assortative mating. 
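\n\nPutting made-up numbers to this (purely for illustration): suppose $\\beta_{\\mars}=0.2$, $V_{A,\\mars}=0.5$, and assortative mating has built up a genetic covariance $V_{A,\\venus\\mars}=0.1$. Then, with $\\beta_{\\venus}=0$,\n\\[R_{\\mars} = 0.2\\times 0.5=0.1, \\qquad R_{\\venus}=0.2\\times 0.1=0.02,\\]\nso the preference evolves alongside the trait, at a rate set entirely by the genetic covariance: $R_{\\venus}/R_{\\mars}=V_{A,\\venus\\mars}/V_{A,\\mars}$.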
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Journal_figs/Quant_gen/guppies_female_choice/guppies_female_choice.pdf}\n\\end{center} \n\\caption{Mean phenotypes for the two up- and two down-selected\n  populations of guppies. Left panel: A response to selection was seen\n  due to the direct selection on male colouration. Right panel: An\n  indirect, correlated response was also seen in female\n  preference. Data from\n  \\citet{houde:94}. \\gitcode{https://github.com/cooplab/popgen-notes/blob/master/Journal_figs/Quant_gen/guppies_female_choice/guppies_female_choice.R}} \\label{fig:assort_mating_guppies}\n\\end{figure}\n\nAs an example of how direct selection on display traits can drive the\nevolution of preference traits, let's consider some data from\nguppies. Guppies ({\\it Poecilia reticulata}) are a classic system for\nstudying the interplay of natural and sexual selection. In some populations of\nguppies, females show a preference for males with more orange colouration.\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{illustration_images/Quant_gen/Guppies/1439_fish_40.png}  %https://commons.wikimedia.org/wiki/File:1439_fish_40.png\n\\end{center} \n\\caption{Guppy ({\\it Poecilia reticulata}). \\newline \\noindent \\tiny{From a set of 1962 stamps\n  of Hungary. Contributed to \\href{https://commons.wikimedia.org/wiki/File:1439_fish_40.png}{wikimedia} by Darjac, not covered by copyright}} \\label{fig:guppies}\n\\end{marginfigure} \n \\citet{houde:94} established four replicate\npopulation pairs of guppies and selected one of each pair for an increased or a decreased orange coloration in males, selecting the top/bottom $20$ out of $50$\nmales. She randomly chose females from each population to form the next generation, and so did not\nexert direct selection on females. She measured the response to \nselection on male colouration and on female preference for orange (left\nand right panels of Figure \\ref{fig:assort_mating_guppies}\nrespectively). In the lines that were selected for more orange males,\nfemales showed an increased preference for orange, while in those\nlines selected for less orange in male displays, females\nshowed a decreased preference for orange. This is consistent with indirect selection on female orange preference as a response to\nselection on male colouration, due to a genetic correlation between\nfemale preference and male trait. It is {\\it a priori} unlikely\nthat pleiotropy is the source of the genetic correlation between these\ntraits; rather, it is likely caused by females assortatively mating with\nmales that match their colour preference. \n\n\nReturning to our bird tail example, what could drive the direct\nselection on male tail length? The selection for longer tails in males could come about because\nlonger tails are genetically correlated with higher male viability; for\nexample, perhaps only males who gather an excess of food have the\nresources to invest in growing a long tail, i.e. a long tail is an\nhonest signal of fitness. This would correspond to a `good genes' explanation of female mate\nchoice evolution.  
\n\n\\marginnote{\n\\begin{quote}\n``The case of the male Argus Pheasant is eminently interesting,\nbecause it affords good evidence that the most refined beauty may\nserve as a sexual charm, and for no other purpose.'' -- \\citet{darwin1888descent}\n\\end{quote} }\nThere's another, subtler way that selection could favour our male\ntrait. Imagine that the variation in the female preference trait arises\nbecause some females have no strong preference for male tail\nlength, while some females have a strong preference for males with\nlonger tails.\n\\begin{marginfigure}\n\\begin{center}\n\\includegraphics[width= \\textwidth]{illustration_images/Quant_gen/Argus_pheasant/Argus_pheasant_small.jpg}\n\\end{center}\n\\caption{Argus Pheasant.  \\BHLCC{A monograph of the pheasants. (1918). Beebe, W}{https://www.flickr.com/photos/biodivlibrary/10053909294/}{Smithsonian Institution Libraries}{2.0}} \\label{fig:argus}\n\\end{marginfigure}\nMales with longer tails would then have higher fecundity than\nthe short-tailed males, as there's a subset of females who are strongly\nattracted to long tails, and these males also get to mate with the\nother females. Thus selection favours long-tailed males, and so indirectly favours\nfemale preference for longer tails; females with a preference\nfor longer tails have sons who in turn are more attractive. This\nmodel is sometimes called the sexy-son model. It is also called\nthe Fisherian runaway model \\citep{fisher1915evolution}, as female\npreference and male trait can coevolve in an escalating fashion,\ndriving more and more extreme preferences for arbitrary traits. Thus\nmany extravagant display traits in males and females may exist purely\nbecause individuals find them beautiful and are attracted to them. \n\n\n%\\erin{I don't take away from this description that you can get runaway sexual selection without external directional selection for longer tails ... can you make that more clear? I think the problem is making the female trait a preference for short vs. long tails when maybe it should be presented as a preference for long tails vs. no preference so the males with longer tails get a boost from positive sexual selection. 
Also, why do you plot mean daughter's trait against mean son's trait and not, say, mean father's trait against mean daughter's trait or increase in population prevalence or correlation between male-female traits over time?}\n\n\n\n% potential guppy image https://www.google.com/imgres?imgurl=https%3A%2F%2Fc1.staticflickr.com%2F1%2F457%2F20360503816_a88cdcd96d_b.jpg&imgrefurl=https%3A%2F%2Fwww.flickr.com%2Fphotos%2Finternetarchivebookimages%2F20360503816&docid=jtYRcc7UmAvIeM&tbnid=DBAauK0xAgK4mM%3A&vet=10ahUKEwirkrKWy8rcAhWTAHwKHfVdCmUQMwg2KAEwAQ..i&w=1024&h=840&itg=1&client=firefox-b-1-ab&bih=681&biw=1280&q=Lebistes%20reticulatus&ved=0ahUKEwirkrKWy8rcAhWTAHwKHfVdCmUQMwg2KAEwAQ&iact=mrc&uact=8\n", "meta": {"hexsha": "bc9e45b27cf006a4696959e2b30db4bd852178fa", "size": 38193, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/Multi_trait_selection.tex", "max_stars_repo_name": "rsbrennan/popgen-notes", "max_stars_repo_head_hexsha": "08295efedcd99066bf96c5e31f643ca5cd247100", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/Multi_trait_selection.tex", "max_issues_repo_name": "rsbrennan/popgen-notes", "max_issues_repo_head_hexsha": "08295efedcd99066bf96c5e31f643ca5cd247100", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/Multi_trait_selection.tex", "max_forks_repo_name": "rsbrennan/popgen-notes", "max_forks_repo_head_hexsha": "08295efedcd99066bf96c5e31f643ca5cd247100", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.6114754098, "max_line_length": 645, "alphanum_fraction": 0.7933129107, "num_tokens": 9960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7879311906630568, "lm_q1q2_score": 0.5611899529559431}}
{"text": "\\section{Error Calculation}\n\\label{sec:Error_Calculation}\nThis section contains the error calculation of the measured values. The systematic and the total error is calculated in this chapter. The error calculation is done for the fitted values longtidudinal sound velocity $c_L$, transversal sound velocity $c_T$ and the attenuation coefficient $\\mu$ (see sections \\ref{subsec:wave_length_calculation} to \\ref{subsec:Longitudinal_Ultrasound_Velocities_in_Liquids}).\n\n% ----------------------------------------------------------------------------------\n\\subsection{Uncertainties}\n\\label{subsec:Uncertainties}\nAll conducted measurements in this experiment have uncertainties. Although it was possible to use the oscilloscope to measure the values very precisely, there is still an uncertainty. This is due to the difficulty to move the cursors exactly to a peak. The systematic uncertainties are shown in table \\ref{tab:equipment}.\n\n% ----------------------------------------------------------------------------------\n\\subsection{Uncertainty of the sound velocity $c_L$ and $c_T$}\n\\label{subsec:Uncertainty_Sound_Velocity}\nThe total uncertainty $s_{c,\\ \\text{tot}}$ of the sound velocity for a specific medium $c_m$ is calculated with the following equation \\ref{eq:total_uncertainty_sound_velocity}:\n\n\\begin{equation}\ns_{c,\\ \\text{tot}}=\\sqrt{s_{c,\\ \\text{syst}}^2+s_{c,\\ \\text{stat}}^2}\n\\label{eq:total_uncertainty_sound_velocity}\n\\end{equation}\n\nwith:\n\n\\begin{equation}\ns_{c,\\ \\text{syst}}=\\sqrt{\\left(\\frac{\\partial c}{\\partial x}\\Biggr|_{c}\\cdot s_{x}\\right)^2 + \\left(\\frac{\\partial c}{\\partial \\Delta t}\\Biggr|_{c}\\cdot s_{\\Delta t}\\right)^2}\n\\label{eq:syst_uncertainty_sound_velocity}\n\\end{equation}\n\nand:\n\n\\[\n\\frac{\\partial c}{\\partial x}\\Biggr|_{c}=\\dfrac{2}{\\Delta t} \\qquad , \\qquad \\frac{\\partial c}{\\partial \\Delta t}\\Biggr|_{c}=-\\dfrac{2x}{\\Delta t^2}\n\\]\n\nwhere:\n\\begin{multicols}{2}\n\\begin{conditions}\n\ts_{c,\\ \\text{tot}} & total uncertainty of $c_m$ \\\\\n\ts_{c,\\ \\text{stat}} & statistical uncertainty of $c_m$ \\\\\n\ts_{\\Delta t} & uncertainty of $\\Delta t$ \\\\\n\t\\Delta t & time of flight (see figure \\ref{fig:measuring_procedure})\n\\end{conditions}\n\\begin{conditions}\n\ts_{c,\\ \\text{syst}} & systematic uncertainty of $c_m$ \\\\\n\ts_x & uncertainty of x \\\\\n\tx & sound path distance \\\\\n\tc & sound velocity $c_m$\n\\end{conditions}\n\\end{multicols}\n\n\\hspace{0.5cm}\n\nTable \\ref{tab:Uncertainty_Sound_Velocity} shows the statistical, the systematic and the total uncertainty. The systematic uncertainty was calculated by using equation \\ref{eq:syst_uncertainty_sound_velocity}. The statistical uncertainty was obtained from QtiPlot. The total uncertainty was calculated by using equation \\ref{eq:total_uncertainty_sound_velocity}. 
All calculations were done in MATLAB (see appendix \\ref{sec:MATLAB_Error_Calculation}).\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.1}\n\t\\begin{tabular}{|l|c|c|c||c|c|c|}\n\t\t\\cline{2-7}\n\t\t\\multicolumn{1}{c|}{} & \\multicolumn{3}{c||}{\\textbf{1 MHz}} & \\multicolumn{3}{c|}{\\textbf{5 MHz}} \\\\\n\t\t\\cline{2-7}\n\t\t\\multicolumn{1}{c|}{} & \\textbf{Statistical} & \\textbf{Systematic} & \\textbf{Total} & \\textbf{Statistical} & \\textbf{Systematic} & \\textbf{Total} \\\\\n\t\t\\multicolumn{1}{c|}{} & in m/s & in m/s & in m/s & in m/s & in m/s & in m/s \\\\\n\t\t\\hline\n\t\t\\textbf{PMMA} & 4.8 & 18.5 & 19.1 & 9.8 & 18.4 & 20.9 \\\\\n\t\t\\hline\n\t\t\\textbf{Aluminium} & - & - & - & 30.2 & 50.6 & 58.9 \\\\\n\t\t\\hline\n\t\t\\textbf{Copper} & - & - & - & 16.9 & 50.6 & 53.4 \\\\\n\t\t\\hline\n\t\t\\textbf{Brass} & - & - & - & 45.7 & 24.6 & 51.9 \\\\\n\t\t\\hline\n\t\t\\textbf{Water 21 \\textdegree C} & - & - & - & 2.8 & 6.3 & 7.0 \\\\\n\t\t\\hline\n\t\t\\textbf{Water 49 \\textdegree C} & - & - & - & 4.7 & 6.9 & 8.4 \\\\\n\t\t\\hline\n\t\t\\textbf{Saltwater 40 \\textdegree C} & - & - & - & 5.9 & 7.8 & 9.8 \\\\\n\t\t\\hline\n\t\t\\textbf{PMMA trans.} & 14.3 & 5.1 & 15.1 & - & - & - \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{The statistical uncertainty is from QtiPlot. Error propagation is used to calculate the systematic and the total uncertainty.}\n\t\\label{tab:Uncertainty_Sound_Velocity}\n\\end{table}\n\n\\newpage\n% ----------------------------------------------------------------------------------\n\\subsection{Uncertainty of the attenuation coefficient $\\mu$}\n\\label{subsec:Uncertainty_Attenuation_Coefficient}\nThe total uncertainty $s_{\\mu,\\ \\text{tot}}$ of the attenuation coefficient $\\mu$ for a specific medium is calculated with equation \\ref{eq:total_uncertainty_attenuation_coefficient}:\n\n\\begin{equation}\ns_{\\mu,\\ \\text{tot}}=\\sqrt{s_{\\mu,\\ \\text{syst}}^2+s_{\\mu,\\ \\text{stat}}^2}\n\\label{eq:total_uncertainty_attenuation_coefficient}\n\\end{equation}\n\nwith:\n\n\\begin{equation}\ns_{\\mu,\\ \\text{syst}}=\\sqrt{\\left(\\frac{\\partial \\mu}{\\partial x}\\Biggr|_{\\mu}\\cdot s_{x}\\right)^2 + \\left(\\frac{\\partial \\mu}{\\partial \\hat{p}}\\Biggr|_{\\mu}\\cdot s_{\\hat{p}}\\right)^2 + \\left(\\frac{\\partial \\mu}{\\partial \\hat{p}_0}\\Biggr|_{\\mu}\\cdot s_{\\hat{p}_0}\\right)^2}\n\\label{eq:syst_uncertainty_attenuation_coefficient}\n\\end{equation}\n\nand:\n\n\\[\n\\frac{\\partial \\mu}{\\partial x}\\Biggr|_{\\mu}=\\dfrac{\\ln{\\Bigr(\\frac{\\hat{p}}{\\hat{p}_0}\\Bigr)}}{x^2} \\qquad , \\qquad \\frac{\\partial \\mu}{\\partial \\hat{p}}\\Biggr|_{\\mu}=- \\dfrac{1}{x\\hat{p}} \\qquad , \\qquad \\frac{\\partial \\mu}{\\partial \\hat{p}_0}\\Biggr|_{\\mu}=\\dfrac{1}{x\\hat{p}_0}\n\\]\n\nwhere:\n\\begin{multicols}{2}\n\t\\begin{conditions}\n\t\ts_{\\mu,\\ \\text{tot}} & total uncertainty of $\\mu$ \\\\\n\t\ts_{\\mu,\\ \\text{stat}} & statistical uncertainty of $\\mu$ \\\\\n\t\ts_{\\hat{p}} & uncertainty of $\\hat{p}$ \\\\\n\t\tx & sound path distance \\\\\n\t\t\\hat{p}_0 & amplitude (see figure \\ref{fig:measuring_procedure})\n\t\\end{conditions}\n\t\\begin{conditions}\n\t\ts_{\\mu,\\ \\text{syst}} & systematic uncertainty of $\\mu$ \\\\\n\t\ts_x & uncertainty of $x$ \\\\\n\t\ts_{\\hat{p}_0} & uncertainty of $\\hat{p}_0$ \\\\\n\t\t\\hat{p} & amplitude (see figure \\ref{fig:measuring_procedure}) \\\\\n\t\t\\mu & attenuation coefficient\n\t\\end{conditions}\n\\end{multicols}\n\n\\hspace{0.5cm}\n\nTable 
\\ref{tab:Uncertainty_Attenuation_Coefficient} shows the statistical, the systematic and the total uncertainty. The systematic uncertainty was calculated by using equation \\ref{eq:syst_uncertainty_attenuation_coefficient}. The statistical uncertainty was obtained from QtiPlot. The total uncertainty was calculated by using equation \\ref{eq:total_uncertainty_attenuation_coefficient}. All calculations were done in MATLAB (see appendix \\ref{sec:MATLAB_Error_Calculation}).\n\n\\hspace{0.5cm}\n\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.1}\n\t\\begin{tabular}{|l|c|c|c||c|c|c|}\n\t\t\\cline{2-7}\n\t\t\\multicolumn{1}{c|}{} & \\multicolumn{3}{c||}{\\textbf{1 MHz}} & \\multicolumn{3}{c|}{\\textbf{5 MHz}} \\\\\n\t\t\\cline{2-7}\n\t\t\\multicolumn{1}{c|}{} & \\textbf{Statistical} & \\textbf{Systematic} & \\textbf{Total} & \\textbf{Statistical} & \\textbf{Systematic} & \\textbf{Total} \\\\\n\t\t\\multicolumn{1}{c|}{} & in 1/m & in 1/m & in 1/m & in 1/m & in 1/m & in 1/m \\\\\n\t\t\\hline\n\t\t\\textbf{PMMA} & 0.7 & 2.6 & 2.6 & 2.5 & 5.8 & 6.3 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{The statistical uncertainty is from QtiPlot. Error propagation is used to calculate the systematic and the total uncertainty.}\n\t\\label{tab:Uncertainty_Attenuation_Coefficient}\n\\end{table}\n", "meta": {"hexsha": "57b8b054c31a5a38dc46472790a5def36e861c17", "size": 7149, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "glaL4_W_12_Ultrasound/sections/error_calculation.tex", "max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "glaL4_W_12_Ultrasound/sections/error_calculation.tex", "max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glaL4_W_12_Ultrasound/sections/error_calculation.tex", "max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.3040540541, "max_line_length": 477, "alphanum_fraction": 0.6725416142, "num_tokens": 2410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.787931190663057, "lm_q1q2_score": 0.5611899529559431}}
{"text": "\\documentclass[a4paper]{scrartcl}\n\\usepackage[margin=1.5cm]{geometry}\n\n\\usepackage{amsmath}\n\\usepackage{cleveref}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amsthm}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{booktabs}\n\\usepackage{csquotes}\n\\usepackage{algpseudocode}\n\\usepackage{pdflscape}\n\n\\renewcommand{\\vec}{\\mathbf}\n\\renewcommand{\\Re}{\\operatorname{Re}}\n\\renewcommand{\\Im}{\\operatorname{Im}}\n\\newcommand{\\dd}[1]{\\,\\mathrm{d}#1}\n\\newcommand{\\ca}[1]{\\accentset{\\circ}{#1}}\n\\newcommand{\\vp}[2]{\\left<#1,#2\\right>}\n\\newcommand{\\abs}[1]{\\left\\lvert#1\\right\\rvert}\n\\newcommand{\\diag}[1]{\\mathrm{diag}\\left(#1\\right)}\n\\newcommand{\\R}{\\mathbb{R}}\n\n\\renewcommand{\\epsilon}{\\varepsilon}\n\\renewcommand{\\theta}{\\vartheta}\n\n\\begin{document}\n\\textbf{\\Huge Warning: These are my notes. Don't expect this document to be self-contained and correct.}\n\n\\section{ADER-DG for linear acoustics}\nThe linearized acoustic equations are given as (see Finite Volume book by LeVeque)\n\\begin{equation}\\label{eq:pde}\n \\frac{\\partial Q_p}{\\partial t} + A_{pq}\\frac{\\partial Q_q}{\\partial x} + B_{pq}\\frac{\\partial Q_q}{\\partial y} = 0,\n\\end{equation}\nwhere\n\\begin{equation}\n q = \\begin{pmatrix}p \\\\ u \\\\ v\\end{pmatrix}, \\quad\n A = \\begin{pmatrix}0 & K_0 & 0 \\\\ 1/\\rho_0 & 0 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix}, \\quad\n B = \\begin{pmatrix}0 & 0 & K_0 \\\\ 0 & 0 & 0 \\\\ 1/\\rho_0 & 0 & 0 \\end{pmatrix}.\n\\end{equation}\n\nThe corresponding weak form is\n\\begin{equation}\n \\int_{\\Omega}\\phi_k\\frac{\\partial Q_p}{\\partial t}\\dd{V} +\n \\int_{\\partial\\Omega}\\phi_k \\left(n_xA_{pq} + n_yB_{pq}\\right)Q_q\\dd{S} -\n \\int_{\\Omega}\\left(\\frac{\\partial \\phi_k}{\\partial x}A_{pq}Q_q + \\frac{\\partial \\phi_k}{\\partial y}B_{pq}Q_q\\right)\\dd{V} = 0,\n\\end{equation}\nwhere $n=(n_x,n_y)$ is the outward unit surface normal.\n\nWe discretise the weak form with finite elements, which are triangles $\\mathcal R^{(m)}$, and obtain\n\\begin{equation}\n \\int_{\\mathcal R^{(m)}}\\phi_k\\frac{\\partial Q_p}{\\partial t}\\dd{V} +\n \\int_{\\partial\\mathcal R^{(m)}}\\phi_k \\left(\\left(n_xA_{pq} + n_yB_{pq}\\right)Q_q\\right)^*\\dd{S} -\n \\int_{\\mathcal R^{(m)}}\\left(\\frac{\\partial \\phi_k}{\\partial x}A_{pq} + \\frac{\\partial \\phi_k}{\\partial y}B_{pq}\\right)Q_q\\dd{V} = 0,\n\\end{equation}\nwhere we a numerical flux (indicated with *).\n\nSuppose we are given an unstructured triangular grid where each cell is given by the 3 points\n$P_0,\\dots,P_2$ in counter-clockwise order. 
The coordinates of each point are $P_i = (x_i,y_i)$.\n\nWe approximate $Q$ with a modal basis, i.e.\n\\begin{equation}\n Q_p^h(x,y,t) = \\hat{Q}_{lp}\\left(t\\right)\\phi_l\\left(\\xi^{(m)}(x,y), \\eta^{(m)}(x,y)\\right),\n\\end{equation}\nwhere\n\\begin{align}\n x(\\xi,\\eta) = x_0 + (x_1-x_0)\\xi + (x_2-x_0)\\eta, \\\\\n y(\\xi,\\eta) = y_0 + (y_1-y_0)\\xi + (y_2-y_0)\\eta.\n\\end{align}\n\nThen we obtain, using the substitution rule (the reference element is the unit triangle),\n\\begin{multline}\n |J|\\frac{\\partial \\hat{Q}_{lp}}{\\partial t}(t)\\int_{0}^{1}\\int_{0}^{1-\\xi}\\phi_k(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi}\n + \\sum_{i=1}^3 |S_i|\\int_0^1\\phi_k(r_i(\\chi))\\left(\\left(n_xA_{pq}+n_yB_{pq}\\right)Q_q\\right)^*\\dd{\\chi} \\\\\n - |J|\\hat{Q}_{lp}(t)\\int_{0}^{1}\\int_{0}^{1-\\xi}\\left(\\frac{\\partial \\phi_k}{\\partial x}(\\xi,\\eta)A_{pq}\n + \\frac{\\partial \\phi_k}{\\partial y}(\\xi,\\eta)B_{pq}\\right)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi} = 0,\n\\end{multline}\nwhere $J=(x_1-x_0)(y_2-y_0)-(x_2-x_0)(y_1-y_0)$, and $r_i$ and $|S_i|$ are given by\n\\begin{align}\n r_1(\\chi) &= \\begin{pmatrix}\\chi \\\\ 0\\end{pmatrix}, & |S_1| = \\sqrt{(x_1-x_0)^2 + (y_1-y_0)^2}, \\\\\n r_2(\\chi) &= \\begin{pmatrix}1-\\chi \\\\ \\chi\\end{pmatrix}, & |S_2| = \\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}, \\\\\n r_3(\\chi) &= \\begin{pmatrix}0 \\\\ 1-\\chi\\end{pmatrix}, & |S_3| = \\sqrt{(x_2-x_0)^2 + (y_2-y_0)^2}.\n\\end{align}\nWe also need to transform the gradients, that is,\n\\begin{equation}\n \\begin{pmatrix}\n  \\dfrac{\\partial\\phi_k}{\\partial x} & \\dfrac{\\partial\\phi_k}{\\partial y}\n \\end{pmatrix} =\n \\begin{pmatrix}\n  \\dfrac{\\partial\\phi_k}{\\partial\\xi} & \\dfrac{\\partial\\phi_k}{\\partial\\eta}\n \\end{pmatrix}\n \\underbrace{\n \\begin{pmatrix}\n  \\dfrac{\\partial\\xi}{\\partial x} & \\dfrac{\\partial\\xi}{\\partial y} \\\\\n  \\dfrac{\\partial\\eta}{\\partial x} & \\dfrac{\\partial\\eta}{\\partial y}\n \\end{pmatrix}}_{=:D\\boldsymbol{\\xi}(x,y)}\n\\end{equation}\nUsing the inverse function theorem we have\n\\begin{equation}\n D\\boldsymbol{\\xi}(x,y) = [D\\boldsymbol{x}(\\xi,\\eta)]^{-1} =\n \\begin{pmatrix}\n  x_1-x_0 & x_2-x_0 \\\\\n  y_1-y_0 & y_2-y_0\n \\end{pmatrix}^{-1} =\n \\dfrac{1}{J}\n \\begin{pmatrix}\n  y_2-y_0 & x_0-x_2 \\\\\n  y_0-y_1 & x_1-x_0\n \\end{pmatrix}.\n\\end{equation}\nFor later use we note\n\\begin{align}\n A^* = A\\dfrac{\\partial\\xi}{\\partial x} + B\\dfrac{\\partial\\xi}{\\partial y} =\n \\dfrac{y_2-y_0}{J}A + \\dfrac{x_0-x_2}{J}B, \\\\\n B^* = A\\dfrac{\\partial\\eta}{\\partial x} + B\\dfrac{\\partial\\eta}{\\partial y} =\n \\dfrac{y_0-y_1}{J}A + \\dfrac{x_1-x_0}{J}B.\n\\end{align}\n\n\n\n\nWe turn now to the flux term, which shall have the following form:\n\\begin{equation}\n \\left(\\left(n_xA_{pq}+n_yB_{pq}\\right)Q_q\\right)^* = A^+Q_q^{(m)} + A^-Q_q^{(m_i)},\n\\end{equation}\nwhere $m_i$ denotes the neighbour of element $m$.\nFirst, note that we may use rotational invariance:\n\\begin{equation}\n n_xA + n_yB = TAT^{-1},\n\\end{equation}\nwhere\n\\begin{equation}\nT(n_x,n_y)=\\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & n_x & -n_y \\\\\n 0 & n_y & n_x\n\\end{pmatrix}, \\quad\nT^{-1}(n_x,n_y)=\\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & n_x & n_y \\\\\n 0 & -n_y & n_x\n\\end{pmatrix},\n\\end{equation}\ni.e. we only need to solve the Riemann problem in x-direction. 
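\n\nAs a quick numerical sanity check of the rotational-invariance identity, and of the eigenvalue-based flux splitting used below, here is a small Python sketch; it is an illustration only, and the material parameters $K_0$, $\\rho_0$ and the normal angle are arbitrary placeholders, not values from these notes.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Placeholder material parameters\nK0, rho0 = 2.0, 1.0\nc = np.sqrt(K0 / rho0)\n\nA = np.array([[0.0, K0, 0.0], [1.0 / rho0, 0.0, 0.0], [0.0, 0.0, 0.0]])\nB = np.array([[0.0, 0.0, K0], [0.0, 0.0, 0.0], [1.0 / rho0, 0.0, 0.0]])\n\ndef T(nx, ny):\n    # Rotation acting on (p, u, v): the pressure component is invariant.\n    return np.array([[1.0, 0.0, 0.0], [0.0, nx, -ny], [0.0, ny, nx]])\n\n# Check n_x A + n_y B = T A T^{-1} for an arbitrary unit normal.\nphi = 0.7\nnx, ny = np.cos(phi), np.sin(phi)\nlhs = nx * A + ny * B\nrhs = T(nx, ny) @ A @ np.linalg.inv(T(nx, ny))\nassert np.allclose(lhs, rhs)\n\n# Flux splitting A = A^+ + A^- from the positive/negative parts of the\n# spectrum of A; this reproduces the homogeneous-case matrices below.\nlam, R = np.linalg.eig(A)\nAplus = R @ np.diag(np.maximum(lam, 0.0)) @ np.linalg.inv(R)\nAminus = R @ np.diag(np.minimum(lam, 0.0)) @ np.linalg.inv(R)\nassert np.allclose(Aplus + Aminus, A)\nassert np.allclose(Aplus, 0.5 * np.array([[c, K0, 0.0],\n                                          [1.0 / rho0, c, 0.0],\n                                          [0.0, 0.0, 0.0]]))\n\\end{verbatim}\n\n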
In the homogeneous case we have\n\\begin{equation}\n A^{+} = \\frac{1}{2}\\begin{pmatrix}\n c & K_0 & 0 \\\\\n 1/\\rho_0 & c & 0 \\\\\n 0 & 0 & 0\n\\end{pmatrix}, \\quad\n A^{-} = \\frac{1}{2}\\begin{pmatrix}\n -c & K_0 & 0 \\\\\n 1/\\rho_0 & -c & 0 \\\\\n 0 & 0 & 0\n\\end{pmatrix},\n\\end{equation}\nwhere $c=\\sqrt{K_0/\\rho_0}$.\n\nIn the inhomogeneous case we have\n\\begin{equation}\n A^{+} = \\begin{pmatrix}\n \\frac{K_0^+c^-c^+}{K_0^-c^++K_0^+c^-} & \\frac{K_0^-K_0^+c^+}{K_0^-c^++K_0^+c^-} & 0 \\\\\n \\frac{K_0^+c^-}{\\rho_0^+\\left(K_0^-c^++K_0^+c^-\\right)} & \\frac{K_0^-K_0^+}{\\rho_0^+\\left(K_0^-c^++K_0^+c^-\\right)} & 0 \\\\\n 0 & 0 & 0\n\\end{pmatrix}, \\quad\n A^{-} = \\begin{pmatrix}\n -\\frac{K_0^-c^-c^+}{K_0^-c^++K_0^+c^-} & \\frac{K_0^-K_0^+c^-}{K_0^-c^++K_0^+c^-} & 0 \\\\\n \\frac{K_0^-c^+}{\\rho_0^-\\left(K_0^-c^++K_0^+c^-\\right)} & -\\frac{K_0^-K_0^+}{\\rho_0^-\\left(K_0^-c^++K_0^+c^-\\right)} & 0 \\\\\n 0 & 0 & 0\n\\end{pmatrix}.\n\\end{equation}\nHence, the flux is given as\n\\begin{equation}\n TA^-T^{-1}Q^- + TA^+T^{-1}Q^+.\n\\end{equation}\nFor ease of notation, we define\n\\begin{equation}\n \\mathcal{A}^{x,y,\\pm} = T(x,y)A^\\pm T(x,y)^{-1}.\n\\end{equation}\n\nNote that we need the transposed version, as we multiply the matrix from the right. Hence,\nwith abuse of notation, we define\n\n\\begin{equation}\n \\mathcal{A}^{x,y,\\pm} = T^{-T}\\left(A^\\pm\\right)^T T^{T} = T\\left(A^\\pm\\right)^T T^{-1},\n\\end{equation}\nwhere we used that $T^{-1}=T^T$.\n\nFinally, in order to correctly integrate over the neighbours we need the following integrals:\n$$\n\\int_0^1\\phi_k(r_i(\\chi))\\phi_l(r_j(1-\\chi))\\dd{\\chi}\n$$\nHere $j$ is the number of the edge of the neighbour of the local edge $i$.\n\nWe are now able to obtain the complete semi-discrete scheme:\n\\begin{multline}\n|J|\\frac{\\partial \\hat{Q}_{lp}}{\\partial t}(t)\\int_{0}^{1}\\int_{0}^{1-\\xi}\\phi_k(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi} \\\\\n + \\sum_{i=1}^3 |S_i|\\mathcal{A}_{pq}^{n_{x,i},n_{y,i},+}\\hat{Q}_{lp}(t)\\int_0^1\\phi_k(r_i(\\chi))\\phi_l(r_i(\\chi))\\dd{\\chi} \\\\\n + \\sum_{i=1}^3 |S_i|\\mathcal{A}_{pq}^{n_{x,i},n_{y,i},-}\\hat{Q}_{lp}^{(m_i)}(t)\\int_0^1\\phi_k(r_i(\\chi))\\phi_l(r_j(1-\\chi))\\dd{\\chi}\\\\\n - |J|\\hat{Q}_{lp}(t)A^*_{pq}\\int_{0}^{1}\\int_{0}^{1-\\xi}\\frac{\\partial \\phi_k}{\\partial \\xi}(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi}\n - |J|\\hat{Q}_{lp}(t)B^*_{pq}\\int_{0}^{1}\\int_{0}^{1-\\xi}\\frac{\\partial \\phi_k}{\\partial \\eta}(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi} = 0,\n\\end{multline}\n\nTo be precomputed:\n\\begin{align*}\n M_{kl} &= \\int_{0}^{1}\\int_{0}^{1-\\xi}\\phi_k(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi} \\\\\n F_{kl}^{i,0} &= \\int_0^1\\phi_k(r_i(\\chi))\\phi_l(r_i(\\chi))\\dd{\\chi} \\\\\n F_{kl}^{i,j} &= \\int_0^1\\phi_k(r_i(\\chi))\\phi_l(r_j(1-\\chi))\\dd{\\chi} \\\\\n K_{kl}^\\xi &= \\int_{0}^{1}\\int_{0}^{1-\\xi}\\frac{\\partial\\phi_k}{\\partial\\xi}(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi} \\\\\n K_{kl}^\\eta &= \\int_{0}^{1}\\int_{0}^{1-\\xi}\\frac{\\partial\\phi_k}{\\partial\\eta}(\\xi,\\eta)\\phi_l(\\xi,\\eta)\\dd{\\eta}\\dd{\\xi}\n\\end{align*}\n\n\\subsection{Boundary conditions}\nPeriodic boundary conditions are difficult in the unstructured case.\nWe introduce two alternatives:\n\\subsubsection{Absorbing boundary conditions}\nSet $\\hat{Q}^+=0$.\n\\subsubsection{Wall boundary conditions}\nSet $\\hat{Q}^+=T\\Gamma T^{-1}\\hat{Q}^-$, where $\\Gamma=\\diag{1,-1,1}$.\n\n\\subsection{Source terms}\nAssume we have a source term of the 
form\n\\begin{equation}\n S_p(t)\\delta(x_s,y_s)\n\\end{equation}\nthen we need to add\n\\begin{equation}\n \\frac{1}{|J|}M_{kq}^{-1}\\phi_q\\left(\\xi(x_s,y_s),\\eta(x_s,y_s) \\right)\\int_{t_n}^{t_n+\\Delta t} S_p(t)\\dd{t}\n\\end{equation}\nto the cell that contains the source term.\n\n\n\\end{document}", "meta": {"hexsha": "425e3efc79368010f9f46b3205156d82170714a7", "size": 8966, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/acoustics_unstructured.tex", "max_stars_repo_name": "uphoffc/LinA", "max_stars_repo_head_hexsha": "077bc0acbd440d2af63e7ec5a02cdc3c0b24a0bd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/acoustics_unstructured.tex", "max_issues_repo_name": "uphoffc/LinA", "max_issues_repo_head_hexsha": "077bc0acbd440d2af63e7ec5a02cdc3c0b24a0bd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/acoustics_unstructured.tex", "max_forks_repo_name": "uphoffc/LinA", "max_forks_repo_head_hexsha": "077bc0acbd440d2af63e7ec5a02cdc3c0b24a0bd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8312236287, "max_line_length": 135, "alphanum_fraction": 0.6378541155, "num_tokens": 3782, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428946, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5611899516898949}}
{"text": "%% LyX 2.2.3 created this file.  For more info, see http://www.lyx.org/.\n%% Do not edit unless you really know what you are doing.\n\\documentclass[english]{article}\n\\usepackage[T1]{fontenc}\n\\usepackage[latin9]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{babel}\n\\begin{document}\n\n\\section{Quantification of code accuracy}\n\nThe accuracy of numerical predictions is quantified with a Least Squares\nnorm regularized for the extent of data coverage. Here is an example\nfor lift polars, where $C_{l}^{exp}$ denotes the measured lift coefficient\nand $C_{l}^{sim}$ denotes the simulated value:\n\\[\n\\epsilon_{i}^{C_{l}}=\\left(\\alpha_{max}-\\alpha_{min}\\right)^{-1}\\left(\\int_{\\alpha_{min}}^{\\alpha_{max}}\\left(C_{l}^{sim}-C_{l}^{exp}\\right)^{2}d\\alpha\\right)^{\\frac{1}{2}}\n\\]\nThe same approach is adopted for drag and moment coefficient polars,\nwhen available. A similar norm is adopted for boundary layer runs,\nbut the integration proceeds along the streamwise coordinate until\nthe end of the dataset at $x=L$:\n\\[\n\\epsilon_{i}^{\\theta}=\\frac{1}{L}\\left(\\int_{0}^{L}\\left(\\theta_{\\left(x\\right)}^{sim}-\\theta_{\\left(x\\right)}^{exp}\\right)^{2}dx\\right)^{\\frac{1}{2}}\n\\]\nError measures for each quantity are then defined \n\n\\section{Parametrization of the Skin Friction Relation}\n\nThe skin friction closure relation is a function of the form:\n\\[\nC_{f}=f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}\n\\]\nThis function has three key features:\n\\begin{enumerate}\n\\item It tends to infinity as H approaches one:\n\\[\n\\lim_{H\\rightarrow1}C_{f}=+\\infty\n\\]\n\\item It is positive for share factors below separation $H<H_{sep}$ and\nnegative afterwards:\n\\[\n\\exists H_{sep}=f_{\\left(Re_{\\theta}\\right)}\\::\\:\\begin{cases}\nC_{f}>0 & H<H_{sep}\\\\\nC_{f}<0 & H>H_{sep}\n\\end{cases}\n\\]\n\\item And it tends to a nearly constant value in deep separation:\n\\[\n\\exists C_{f}^{dsep}=f_{\\left(Re_{\\theta}\\right)}\\::\\:\\lim_{H\\rightarrow\\infty}C_{f\\left(H,Re_{\\theta}\\right)}=C_{f}^{dsep}\n\\]\n\\end{enumerate}\nValues of the shape factor at which separation occurs $\\left(H_{sep}\\right)$\nand the skin friction in deeply separated $C_{f}^{dsep}$ flows are\nnot precisely known. A good parametrization must let them be varied\nby the minimization algorithm, and that is indirectly achieved by\nthe following  expression:\n\\[\nC_{f\\left(H,Re_{\\theta},A_{i}\\right)}^{mod}=S_{\\left(H,Re_{\\theta},A_{i}\\right)}^{dim}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}+\\delta^{C_{f}}\\right)-\\delta^{C_{f}}\\text{\\qquad,\\quad\\ensuremath{\\delta^{C_{f}}\\in\\mathbb{R}}}\n\\]\nWhere $\\delta^{C_{f}}$ is a constant property of the parametrization\nthat enables variations in the shape factor at which the onset of\nseparation occurs, whereas the general shape of the closure relation\nis taylored by the $S_{\\left(\\right)}^{dim}$. The dimensional shape\nfunction maps the region for which the closure relation is being taylored\ninto a unit square, over which shape of the curve is taylored with\na classic Bernstein polynomial approach. 
\n\\[\nS_{\\left(H,Re_{\\theta},A_{i}\\right)}^{dim}=B_{\\left(\\eta_{\\left(H\\right)}^{H},\\eta_{\\left(Re_{\\theta}\\right)}^{RT},A_{i}\\right)}^{dxx}\n\\]\n\\[\n\\qquad with\\quad\\eta^{H}=\\begin{cases}\n0 & H<H_{lb}\\\\\n\\frac{H-H_{lb}}{H_{ub}-H_{lb}} & otherwise\\\\\n1 & H>H_{ub}\n\\end{cases}\\qquad and\\quad\\eta^{Re_{\\theta}}=\\begin{cases}\n0 & Re_{\\theta}<Re_{min}\\\\\n\\ln\\left(\\frac{Re_{\\theta}}{Re_{min}}\\right)/\\ln\\left(\\frac{Re_{max}}{Re_{min}}\\right) & otherwise\\\\\n1 & Re_{\\theta}>Re_{max}\n\\end{cases}\n\\]\n\n\\section{Bernstein polynomials}\n\nIn the future, we will use Bernstein polynomials of arbitrary order.\nHowever, for this early phase of the work we will restrict our study\nto Bernstein polynomials of order 6 and degree 5. The basis of Bernstein\npolynomials of order 6 and degree 5 is written as:\n\\[\n\\begin{array}{rl}\nB_{\\left(x\\right)}^{d5r0} & =x^{5}\\\\\nB_{\\left(x\\right)}^{d5r1} & =5x^{4}\\left(1-x\\right)\\\\\nB_{\\left(x\\right)}^{d5r2} & =10x^{3}\\left(1-x\\right)^{2}\\\\\nB_{\\left(x\\right)}^{d5r3} & =10x^{2}\\left(1-x\\right)^{3}\\\\\nB_{\\left(x\\right)}^{d5r4} & =5x\\left(1-x\\right)^{4}\\\\\nB_{\\left(x\\right)}^{d5r5} & =\\left(1-x\\right)^{5}\n\\end{array}\\qquad\\qquad\\begin{array}{rl}\n\\frac{\\partial}{\\partial x}\\left(B^{d5r0}\\right) & =5x^{4}\\\\\n\\frac{\\partial}{\\partial x}\\left(B^{d5r1}\\right) & =5\\left(4x^{3}\\left(1-x\\right)-x^{4}\\right)\\\\\n\\frac{\\partial}{\\partial x}\\left(B^{d5r2}\\right) & =10\\left(3x^{2}\\left(1-x\\right)^{2}-2x^{3}\\left(1-x\\right)\\right)\\\\\n\\frac{\\partial}{\\partial x}\\left(B^{d5r3}\\right) & =10\\left(2x\\left(1-x\\right)^{3}-3x^{2}\\left(1-x\\right)^{2}\\right)\\\\\n\\frac{\\partial}{\\partial x}\\left(B^{d5r4}\\right) & =5\\left(\\left(1-x\\right)^{4}-4x\\left(1-x\\right)^{3}\\right)\\\\\n\\frac{\\partial}{\\partial x}\\left(B^{d5r5}\\right) & =-5\\left(1-x\\right)^{4}\n\\end{array}\n\\]\nLet us now define the 6 Bernstein polynomial (shape) coefficients:\n\\[\nA_{1},A_{2},A_{3},A_{4},A_{5},A_{6},\n\\]\nused to construct arbitrary polynomials by linear combination of the\nBernstein basis:\n\\[\nB_{\\left(x,A_{i}\\right)}^{d5}=A_{1}B_{\\left(x\\right)}^{d5r0}+A_{2}B_{\\left(x\\right)}^{d5r1}+A_{3}B_{\\left(x\\right)}^{d5r2}+A_{4}B_{\\left(x\\right)}^{d5r3}+A_{5}B_{\\left(x\\right)}^{d5r4}+A_{6}B_{\\left(x\\right)}^{d5r5}\n\\]\nThe derivative of $B^{d5}$ with respect to the $x$-coordinate is written as:\n\\[\n\\frac{\\partial}{\\partial x}\\left(B_{\\left(x,A_{i}\\right)}^{d5}\\right)=A_{1}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r0}\\right)+A_{2}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r1}\\right)+A_{3}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r2}\\right)+A_{4}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r3}\\right)+A_{5}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r4}\\right)+A_{6}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r5}\\right)\n\\]\n\n\\section{Shape Function}\n\nFor now, we will only apply changes as a function of the shape factor\n$H$. 
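\n\nBefore writing the shape function down, here is a small numerical sketch of this Bernstein machinery in Python (an editorial illustration; the coefficient values are placeholders). It also checks the partition-of-unity property that later guarantees that setting all shape coefficients to unity recovers the original closure relation.\n\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb\n\ndef b_d5r(x, j):\n    # B^{d5rj}(x) = C(5, j) * x**(5 - j) * (1 - x)**j, matching the basis above\n    return comb(5, j) * x**(5 - j) * (1.0 - x)**j\n\ndef shape_d5(x, A):\n    # B^{d5}(x, A) = A_1 B^{d5r0} + ... + A_6 B^{d5r5}\n    return sum(a * b_d5r(x, j) for j, a in enumerate(A))\n\nx = np.linspace(0.0, 1.0, 101)\n\n# Partition of unity: with all coefficients equal to one the combination\n# is identically 1, so the original closure relation is recovered.\nassert np.allclose(shape_d5(x, [1.0] * 6), 1.0)\n\n# An arbitrary (placeholder) coefficient set deforms the curve smoothly.\nS = shape_d5(x, [1.0, 0.8, 1.2, 0.9, 1.1, 1.0])\n\\end{verbatim}\n\n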
Let us then write:\n\\[\nS_{\\left(H,Re_{\\theta},A_{i}\\right)}^{dim}=B_{\\left(\\eta_{\\left(H\\right)}^{H},A_{i}\\right)}^{d5}\\qquad with\\quad\\eta^{H}=\\begin{cases}\n0 & H<H_{lb}\\\\\n\\frac{H-H_{lb}}{H_{ub}-H_{lb}} & otherwise\\\\\n1 & H>H_{ub}\n\\end{cases}\n\\]\nSo that we can write:\n\\[\n\\frac{\\partial S}{\\partial H}=\\frac{\\partial}{\\partial H}\\left(B_{\\left(\\eta_{\\left(H\\right)}^{H},A_{i}\\right)}^{d5}\\right)=\\frac{\\partial}{\\partial\\eta}\\left(B_{\\left(\\eta_{\\left(H\\right)}^{H},A_{i}\\right)}^{d5}\\right)\\frac{\\partial\\eta}{\\partial H}\\qquad with\\qquad x=\\eta^{H}\n\\]\nSo:\n\\[\n\\frac{\\partial}{\\partial\\eta}\\left(B_{\\left(\\eta_{\\left(H\\right)}^{H},A_{i}\\right)}^{d5}\\right)=\\frac{\\partial}{\\partial x}\\left(B_{\\left(x,A_{i}\\right)}^{d5}\\right)\n\\]\nAnd:\n\\[\n\\frac{\\partial\\eta}{\\partial H}=\\begin{cases}\n\\frac{\\partial}{\\partial H}\\left(0\\right) & H<H_{lb}\\\\\n\\frac{\\partial}{\\partial H}\\left(\\frac{H-H_{lb}}{H_{ub}-H_{lb}}\\right) & otherwise\\\\\n\\frac{\\partial}{\\partial H}\\left(1\\right) & H>H_{ub}\n\\end{cases}=\\begin{cases}\n0 & H<H_{lb}\\\\\n\\frac{1}{H_{ub}-H_{lb}} & otherwise\\\\\n0 & H>H_{ub}\n\\end{cases}\n\\]\nWhereby:\n\\[\n\\frac{\\partial S}{\\partial H}=\\frac{\\partial}{\\partial x}\\left(B_{\\left(x,A_{i}\\right)}^{d5}\\right)\\frac{\\partial\\eta}{\\partial H}\n\\]\n\n\\section{Cf Parametrization Summary}\n\nThe Bernstein polynomials are given as:\n\\[\nB_{\\left(x,A_{i}\\right)}^{d5}=A_{1}B_{\\left(x\\right)}^{d5r0}+A_{2}B_{\\left(x\\right)}^{d5r1}+A_{3}B_{\\left(x\\right)}^{d5r2}+A_{4}B_{\\left(x\\right)}^{d5r3}+A_{5}B_{\\left(x\\right)}^{d5r4}+A_{6}B_{\\left(x\\right)}^{d5r5}\n\\]\nTheir derivatives come as:\n\\[\n\\frac{\\partial}{\\partial x}\\left(B_{\\left(x,A_{i}\\right)}^{d5}\\right)=A_{1}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r0}\\right)+A_{2}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r1}\\right)+A_{3}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r2}\\right)+A_{4}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r3}\\right)+A_{5}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r4}\\right)+A_{6}\\frac{\\partial}{\\partial x}\\left(B_{\\left(x\\right)}^{d5r5}\\right)\n\\]\nThe map from $H$ to $x$ is given as:\n\\[\nx=\\begin{cases}\n0 & H<H_{lb}\\\\\n\\frac{H-H_{lb}}{H_{ub}-H_{lb}} & otherwise\\\\\n1 & H>H_{ub}\n\\end{cases}\n\\]\nThe Jacobian of the transformation is given as:\n\\[\n\\frac{\\partial x}{\\partial H}=\\begin{cases}\n0 & H<H_{lb}\\\\\n\\frac{1}{H_{ub}-H_{lb}} & otherwise\\\\\n0 & H>H_{ub}\n\\end{cases}\n\\]\nThe dimensional shape function is given as:\n\\[\nS_{\\left(H,A_{i}\\right)}^{d5}=B_{\\left(\\eta_{\\left(H\\right)}^{H},A_{i}\\right)}^{d5}\n\\]\nand its derivative is given as:\n\\[\n\\frac{\\partial S^{d5}}{\\partial H}=\\frac{\\partial}{\\partial x}\\left(B_{\\left(x,A_{i}\\right)}^{d5}\\right)\\frac{\\partial x}{\\partial H}\n\\]\nThe new skin friction correlation is given as:\n\\[\nC_{f\\left(H,Re_{\\theta},A_{i}\\right)}^{mod}=S_{\\left(H,A_{i}\\right)}^{d5}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}+\\delta^{C_{f}}\\right)-\\delta^{C_{f}}\\text{\\qquad,\\quad\\ensuremath{\\delta^{C_{f}}\\in\\mathbb{R}}}\n\\]\nIts derivative with respect to $H$ is given as:\n\\[\n\\frac{\\partial}{\\partial H}\\left(C_{f\\left(H,Re_{\\theta},A_{i}\\right)}^{mod}\\right)=\\frac{\\partial S^{d5}}{\\partial 
H}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}+\\delta^{C_{f}}\\right)+S_{\\left(H,A_{i}\\right)}^{d5}\\frac{\\partial}{\\partial H}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}\\right)\n\\]\nDerivatives with respect to other variables are simpler. Here is the example for\n$Re_{\\theta}$:\n\\[\n\\frac{\\partial}{\\partial Re_{\\theta}}\\left(C_{f\\left(H,Re_{\\theta},A_{i}\\right)}^{mod}\\right)=S_{\\left(H,A_{i}\\right)}^{d5}\\frac{\\partial}{\\partial Re_{\\theta}}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}\\right)\n\\]\nDone!\n\n\\subsection{For paper}\n\nThe skin friction correlation is parametrized by combining the original\ncorrelation $\\left(C_{f\\left(H,Re_{\\theta}\\right)}^{0}\\right)$ with\na shape function $\\left(S_{\\left(H,A_{i}\\right)}^{dM}\\right)$ whose\nshape depends on a set of coefficients $\\left(A_{i}\\right)$ that\nare varied by the optimization algorithm:\n\\[\nC_{f\\left(H,Re_{\\theta},A_{i}\\right)}^{mod}=S_{\\left(H,A_{i}\\right)}^{dM}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{C_{f}^{0}}+\\delta^{C_{f}}\\right)-\\delta^{C_{f}}\\text{\\qquad,\\quad\\ensuremath{\\delta^{C_{f}}\\in\\mathbb{R}}}\n\\]\nThe shape function consists of a linear combination of 5th degree\nBernstein polynomials and $\\left(\\delta^{C_{f}},\\,H_{lb},\\,H_{ub}\\right)$\nare constants that affect the parametrization scope:\n\\[\nB_{\\left(x,A_{i}\\right)}^{dM}=\\sum_{i=0}^{i=M}A_{i+1}B_{\\left(x\\right)}^{Mi}\\qquad with\\qquad\\left\\{ \\begin{array}{l}\nB_{\\left(x\\right)}^{Mi}=\\left(\\begin{array}{c}\nM\\\\\ni\n\\end{array}\\right)x^{i}\\left(1-x\\right)^{M-i}\\\\\nx=\\frac{H-H_{lb}}{H_{ub}-H_{lb}}\n\\end{array}\\right.\n\\]\nSetting all shape coefficients to unity yields the original closure\nrelation. The $H_{\\left(H,Re_{\\theta}\\right)}^{*}$ correlation\nwas parametrized with a similar procedure.\n\n\\section{Hstar Parametrization}\n\nThe key feature here is to preserve the limit value for a collapsing\nboundary layer:\n\\[\n\\lim_{H\\rightarrow1}H^{*}=2\n\\]\nThis is easily achieved by setting the lower limit of the $H$ intervention\nregion at $H_{min}=1$ (which we would do anyway), fixing the first\nBernstein coefficient to $A_{1}^{H}=1$ (it is simply not provided\nas a free parameter to the optimizer) and making a classic CST approach\n(no offset needed here). 
So we write:\n\\[\nH_{\\left(H,Re_{\\theta},A_{i}\\right)}^{*mod}=S_{\\left(H,A_{i}\\right)}^{d5}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)\n\\]\nThe derivative with respect to $H$ then comes from the chain rule:\n\\[\n\\frac{\\partial}{\\partial H}\\left(H_{\\left(H,Re_{\\theta},A_{i}\\right)}^{*mod}\\right)=\\frac{\\partial}{\\partial H}\\left(S_{\\left(H,A_{i}\\right)}^{d5}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)\\right)\n\\]\n\\[\n=\\frac{\\partial}{\\partial H}\\left(S_{\\left(H,A_{i}\\right)}^{d5}\\right)\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)+S_{\\left(H,A_{i}\\right)}^{d5}\\frac{\\partial}{\\partial H}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)\n\\]\nThe other derivatives come more simply; here is the example for $Re_{\\theta}$:\n\\[\n\\frac{\\partial}{\\partial Re_{\\theta}}\\left(H_{\\left(H,Re_{\\theta},A_{i}\\right)}^{*mod}\\right)=\\frac{\\partial}{\\partial Re_{\\theta}}\\left(S_{\\left(H,A_{i}\\right)}^{d5}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)\\right)=S_{\\left(H,A_{i}\\right)}^{d5}\\frac{\\partial}{\\partial Re_{\\theta}}\\left(f_{\\left(H,Re_{\\theta}\\right)}^{H^{*0}}\\right)\n\\]\n\n\\subsection{For paper}\n\\end{document}\n", "meta": {"hexsha": "cdcaefa435094b5a85831f30aa16b2ba9a2c3d05", "size": 12178, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sci/06_flap_parametrization/ml_doc/ml_parametrization_alfa4.tex", "max_stars_repo_name": "gaeldeoliveira/optiflow_open", "max_stars_repo_head_hexsha": "64a2f26647aaa9a5ab8bdf58ba14dced7814dc8c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-30T13:06:07.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-30T13:06:07.000Z", "max_issues_repo_path": "sci/06_flap_parametrization/ml_doc/ml_parametrization_alfa4.tex", "max_issues_repo_name": "wuyou33/optiflow_open", "max_issues_repo_head_hexsha": "9c76f28c6514058d9a303f9166163e3cb50d4cb2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sci/06_flap_parametrization/ml_doc/ml_parametrization_alfa4.tex", "max_forks_repo_name": "wuyou33/optiflow_open", "max_forks_repo_head_hexsha": "9c76f28c6514058d9a303f9166163e3cb50d4cb2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-04-30T13:06:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T19:13:37.000Z", "avg_line_length": 46.6590038314, "max_line_length": 495, "alphanum_fraction": 0.6680078831, "num_tokens": 4587, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8615382165412808, "lm_q2_score": 0.6513548714339145, "lm_q1q2_score": 0.5611671142706499}}
{"text": "\\documentclass[11pt,a4paper]{report}\n\\usepackage{amsmath,amsfonts,amssymb,amsthm,epsfig,epstopdf,titling,url,array}\n\\usepackage{changepage}\n\\usepackage{graphicx}\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem*{cor}{Corollary}\n\\theoremstyle{definition}\n\\newtheorem{defn}{Definition}[section]\n\\newtheorem{conj}{Conjecture}[section]\n\\newtheorem{exmp}{Example}[section]\n\\newtheorem{exercise}{Exercise}[section]\n\\theoremstyle{remark}\n\\newtheorem*{rem}{Remark}\n\\newtheorem*{note}{Note}\n\\begin{document}\n\n\n\\section*{Problem}\nTwo quarters are placed next to each other on a table (side-by-side, just touching each other).  One is held fixed and the other is revolved (rolled) around it.  How many rotations does the revolving quarter make?  \n\\\\\n\\\\\n\\textit{Source:} A learned of this problem from a coworker. I do not know its original source.\n\n\\section*{Bonus Problem}\n Suppose the that the two coins have different sizes, say the fixed one has radius $r_1$ and the moving coin has radius $r_2$.  Find a function of $r_1$ and $r_2$ that determines the number of rotations that the revolving coin makes.\n\n\\section*{Solution}\nIt makes two full rotations. \n\n\\section*{Bonus Solution}\n We will start by giving an intuitively appealing, but incorrect solution and then correct it. The circumference of the stationary coin is $2{\\pi}r_1$. That is how far the rotating coin has to travel.  Suppose that the rotating coin were just rolling along a flat surface.  How many revolutions would it take for it to move that far?  In each revolution, it goes $2{\\pi}r_2$ units of distance.  Therefore, it should take $2{\\pi}r_1 / 2{\\pi}r_2 = r_1/r_2$ revolutions to do it. Unfortunately, this answer does not agree with what you can see when you do the $r_1 = r_2$ case with your hands and eyes with two quarters. There you get 2. To understand where the problem is, imagine now that $r_2$ is very small in relation to $r_1$.  A great example to think about is that the stationary circle is the earth and the moving one is a tire on a car driving around the equator, with the simplifying assumption that the earth is perfectly round and the road it is driving on is a perfect circle around the earth.  The number of turns of the tire that it takes to cover any distance along the great circle route is correctly given by $d/2{\\pi}r_2$ where $r_2$ is the radius of the tire measured in whatever units of distance we are using and $d$ is the distance.  Now suppose that the tire starts with the little ``Goodyear'' logo facing up toward the top of the car.  Suppose further that when the car has gone exactly 1/2 way around the world, it has done exactly 3.5 million full rotations.  The logo will be facing ``up'' again, but relative to its initial position, it will have made an additional 1/2 rotation.  What is going on here is that the ``linear'' displacement model that measures progress along the great circle is not accounting for the fact that the path is a circle.  Each rotation of the wheel results in a small (tiny in this case) additional change in the orientation of the top of the wheel relative to the center of the earth. The small additional changes associated with each rotation add up to one full rotation.  
The correct answer is therefore $1 + r_1/r_2$.\n\n\\newpage \nAnother way to get to the same result is to consider Figure \\ref{fig:quarters} below.\n\n\\begin{figure}[h!]\n  \\includegraphics[width=2in]{quarters.png}\n  \\caption{}\n  \\label{fig:quarters}\n\\end{figure}\n\nThe center of the moving circle travels along the dotted circle. That circle has circumference $2\\pi(r_1+r_2)$. Computing linear displacement along this longer path effectively cancels the ``rotational advantage'' in the model above, so you can compute the number of rotations directly: $2\\pi(r_1+r_2) / 2{\\pi}r_2 = (r_1 + r_2) / r_2 = 1 + r_1/r_2$.\n\nThe degenerate cases are interesting and good to check.  If $r_1$ is zero, you get $1$, which makes sense, since you are just spinning the moving circle around a point.  If $r_2$ is zero, things are hopeless - you get nowhere spinning a zero-radius wheel. If, as in the round-the-world example, $r_2$ is much smaller than $r_1$, the ``$+1$'' is negligible.\n\n\n\n\n\\end{document}", "meta": {"hexsha": "188960a36df6b70d9eb7414eaceae227aad5a4e4", "size": 4269, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quarters/quarters.tex", "max_stars_repo_name": "psteitz/problems", "max_stars_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "quarters/quarters.tex", "max_issues_repo_name": "psteitz/problems", "max_issues_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-03T21:08:11.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-03T21:08:11.000Z", "max_forks_repo_path": "quarters/quarters.tex", "max_forks_repo_name": "psteitz/problems", "max_forks_repo_head_hexsha": "c231561593ef7de6264c21d2c78d736866c1b341", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.0961538462, "max_line_length": 2077, "alphanum_fraction": 0.7648161162, "num_tokens": 1134, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548511303336, "lm_q2_score": 0.8615382129861583, "lm_q1q2_score": 0.5611670944626929}}
{"text": "\\chapter{Assigning and Testing Algebraic Properties}\n\nSometimes algebraic expressions can be further simplified if\nthere is additional information about the value ranges\nof its components. The following section describes \nhow to inform {\\REDUCE} of such assumptions.\n\n\n\\section{REALVALUED Declaration and Check}\n\n\nThe declaration {\\tt REALVALUED} \\ttindex{REALVALUED} may be used \nto restrict variables to the real numbers. The syntax is:\n\\begin{verbatim}\n        realvalued v1,...vn;\n\\end{verbatim}\nFor such variables the operator {\\tt IMPART} \\ttindex{IMPART} gives \nthe result zero. Thus, with\n\\begin{verbatim}\n        realvalued x,y;\n\\end{verbatim}\nthe expression \\verb;impart(x+sin(y)); is evaluated as zero.\nYou may also declare an operator as real valued\nwith the meaning, that this operator maps real arguments always to\nreal values. Example:\n\\begin{verbatim}\n        operator h; realvalued h,x;\n        impart h(x);\n   \n           0\n  \n        impart h(w);\n\n           impart(h(w))\n\\end{verbatim}\nSuch declarations are not needed for the standard elementary functions.\n        \nTo remove the propery from a variable or an operator use the declaration\n{\\tt NOREALVALUED} \\ttindex{NOREALVALUED} with the syntax:\n\\begin{verbatim}\n        norealvalued v1,...vn;\n\\end{verbatim}\n\nThe boolean operator {\\tt REALVALUEDP} \\ttindex{REALVALUEDP}\nallows you to check if a variable, an operator, or\nan operator expression is known as real valued.\nThus, \n\\begin{verbatim}\n        realvalued x;\n        write if realvaluedp(sin x) then \"yes\" else \"no\";\n        write if realvaluedp(sin z) then \"yes\" else \"no\";\n\\end{verbatim}\nwould print first \\verb+yes+ and then \\verb+no+. For general\nexpressions test the impart for checking the value range:\n\\begin{verbatim}\n        realvalued x,y; w:=(x+i*y); w1:=conj w;\n        impart(w*w1);\n\n           0\n\n        impart(w*w);\n\n           2*x*y\n\\end{verbatim}\n\n\n\\section{Declaring Expressions Positive or Negative}\n\nDetailed knowlege about the sign of expressions allows {\\REDUCE}\nto simplify expressions involving exponentials or {\\tt ABS}\\ttindex{ABS}.\nYou can express assumptions about the \n{\\tt positivity}\\ttindex{positivity} or {\\tt netativity}\\ttindex{negativity}\nof expressions by rules for the operator {\\tt SIGN}\\ttindex{SIGN}.\nExamples:\n\\begin{verbatim}\n         abs(a*b*c);\n      \n            abs(a*b*c);\n\n         let sign(a)=>1,sign(b)=>1; abs(a*b*c);\n\n            abs(c)*a*b\n\n         on precise; sqrt(x^2-2x+1);\n\n            abs(x - 1)\n\n         ws where sign(x-1)=>1;\n\n            x - 1\n\\end{verbatim}\nHere factors with known sign are factored out of an {\\tt ABS} expression.\n\\begin{verbatim}\n         on precise; on factor; \n\n         (q*x-2q)^w;\n\n                      w\n           ((x - 2)*q)\n\n         ws where sign(x-2)=>1;\n\n            w        w\n           q *(x - 2)\n\n\\end{verbatim}\n       \nIn this case the factor $(x-2)^w$ may be extracted from the base\nof the exponential because it is known to be positive.\n\nNote that {\\REDUCE} knows a lot about sign propagation.\nFor example, with $x$ and $y$ also $x+y$, $x+y+\\pi$ and $(x+e)/y^2$\nare known as positive.\nNevertheless, it is often necessary to declare additionally the sign of a \ncombined expression. 
E.g.\\ at present a positivity declaration of $x-2$ does not \nautomatically lead to sign evaluation for $x-1$ or for $x$.\n\n", "meta": {"hexsha": "a1e8d4f38e71232bb7b1ddc5e40f2b586181c8f8", "size": 3337, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "atomic_Decomp/Redlog/reduce.doc/aprop.tex", "max_stars_repo_name": "Korosensei42/AtomicDecomposition", "max_stars_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "atomic_Decomp/Redlog/reduce.doc/aprop.tex", "max_issues_repo_name": "Korosensei42/AtomicDecomposition", "max_issues_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "atomic_Decomp/Redlog/reduce.doc/aprop.tex", "max_forks_repo_name": "Korosensei42/AtomicDecomposition", "max_forks_repo_head_hexsha": "ca10f97c2cef1a258a4e9fade0a3133d1389d08e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.2796610169, "max_line_length": 81, "alphanum_fraction": 0.6667665568, "num_tokens": 868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7490872131147276, "lm_q1q2_score": 0.5611316528519892}}
{"text": "%\n% 462\n%\n\\chapter{The Theta Functions}\n\n\\Section{21}{1}{The definition of a Theta-function.}\n\nWhen it is desired to obtain definite numerical results in problems\ninvolving Elliptic functions, the calculations are most simply\nperformed with the aid of certain auxiliary functions known as\nThetafunctions. These functions are of considerable intrinsic\ninterest, apart from their connexion with Elliptic functions, and we\nshall now give an account of their funda- mental properties.\n\nThe Theta-functions were first systematically studied by Jacobi*, who\nobtained their properties by purely algebraical methods; and his\nanalysis was so complete that practically all the results contained in\nthis chapter (with the exception of the discussion of the problem of\ninversion in \u00a7\u00a7 21-7 et seq.) are to be found in his works. In\naccordance with the general scheme of this book, we shall not employ\nthe methods of Jacobi, but the more powerful methods based on the use\nof Cauchy's theorem. These methods Avere first employed in the theory\nof Elliptic and allied functions by Liouville in his lectures and have\nsince been given in several treatises on Elliptic functions, the\nearliest of these works being that by Briot and Bouquet.\n\n[Note. The first function of the Theta-function type to appear in\nAnalysis was the\n\nPartition function U (l-.*'\" )\" of Euler, Introductio in Anaiysin\nInfinitorum, i.\n\n(Lausanne, 1748), \u00a7 304; by means of the results given in 5 21-3, it\nis easy to express Theta-functions in terms of Partition functions.\nEuler also obtained properties of products of the type\n\nn (i\u00b1A-\"), n (i\u00b1a;2 ), n (i\u00b1 -2 -i).\n\nn=l n = l n=\\\n\nThe associated series 2 ?n \"('*+ \\ 2 m-\" \"\"* and 2 m ' had previously\noccurred in the\n\n>i=0 n=0 M=0\n\nposthumous work of Jakob Bei'nouUi, Ars Conjectandi (1713), p. 55.\n\n* Fundamenta Nova Theoriae Fimctionum Ellipticarum (Konigsberg, 1829),\nand Ges. Werke, I. pp. 497-538.\n\nt The Partition function and as-sociated functions have been studied\nby Gauss, Comm. Soc. reg. sci. Gottinnensis rec. i. (1811), pp. 7-l'2\n[Werke, ii. pp. 16-21] and Werke, iii. pp. 433-480 and Cauchy,\nCoviptes liendus, x. (1840), pp. 178-181. For a discussion of\nproperties of various functions involving what are known as Basic\nlumbers (which are closely connected with Partition functions) see\nJackson, Froc. Roijal Soc. Lxxiv. (1905), pp. 64-72, Froc. London\nMath. Soc. (1) xxviii. (1897), pp. 475-486 and (2) i. (1904), pp.\n63-88, ii. (1904), pp. 192-220; and Watson, Camb. Fhil. Trails. XXI.\n(1912), pp. 281-299. A fundamental formula in the tlieory of Basic\nnumbers was given by Heine, Kugelfunktionen (Berlin, 1878), i. p. 107.\n\n%\n% 463\n%\n\nTheta-functions also occur in Fourier's La Theorie Analytique de la\nChaleur (Paris, 1822), cf. p. 265 of Freeman's translation (Cambridge,\n1878).\n\nThe theory of Theta-functions was developed from the theory of\nelliptic functions by Jacobi in his Fundameata Nova Theoriae\nFunctionum Ellipticarum (1829), reprinted in his Ges. Werke, i. pp.\n49-239; the notation there employed is explained in \\hardsubsectionref{21}{6}{2}. In his\nsubsequent lectures, he introduced the functions discussed in this\nchapter; an account of these lectures (1838) is given by Borchardt in\nJacobi's Ges. Werl-e, i. pp. 497-538. The most important results\ncontained in them seem to have been discovered in 1835, cf. Kronecker,\nSitzungsherichte der ALad. zu Berlin (1891), pp. 
653-659.]\n\nLet $\\tau$ be a (constant) complex number whose imaginary part is\npositive; and write $q = e^{\\pi i \\tau}$, so that $|q| < 1$.\n\nConsider the function\n\\[ \\vartheta(z, q) = \\sum_{n=-\\infty}^{\\infty} (-1)^{n} q^{n^{2}} e^{2niz}, \\]\ndefined by this series qua function of the variable $z$.\n\nIf $A$ be any positive constant, then, when $|z| \\leq A$, we have\n\\[ |q^{n^{2}} e^{2niz}| \\leq |q|^{n^{2}} e^{2nA}, \\]\n$n$ being a positive integer.\n\nNow d'Alembert's ratio (\\hardsubsectionref{2}{3}{6}) for the series $\\sum_{n=-\\infty}^{\\infty} |q|^{n^{2}} e^{2nA}$ is\n$|q|^{2n+1} e^{2A}$, which tends to zero as $n \\rightarrow \\infty$. The series for $\\vartheta(z, q)$ is therefore a\nseries of analytic functions, uniformly convergent (\u00a7 3.34) in any\nbounded domain of values of $z$, and so it is an integral function (\u00a7\u00a7 5.3, 5.64).\n\nIt is evident that\n\\[ \\vartheta(z, q) = 1 + 2 \\sum_{n=1}^{\\infty} (-1)^{n} q^{n^{2}} \\cos 2nz, \\]\nand that\n\\[ \\vartheta(z + \\pi, q) = \\vartheta(z, q); \\]\nfurther\n\\[ \\vartheta(z + \\pi\\tau, q) = \\sum_{n=-\\infty}^{\\infty} (-1)^{n} q^{n^{2}+2n} e^{2niz} = -q^{-1} e^{-2iz} \\sum_{n=-\\infty}^{\\infty} (-1)^{n+1} q^{(n+1)^{2}} e^{2(n+1)iz}, \\]\nand so\n\\[ \\vartheta(z + \\pi\\tau, q) = -q^{-1} e^{-2iz} \\vartheta(z, q). \\]\n\nIn consequence of these results, $\\vartheta(z, q)$ is called a quasi\ndoubly-periodic function of $z$. The effect of increasing $z$ by $\\pi$ or $\\pi\\tau$ is\nthe same as the effect of multiplying $\\vartheta(z, q)$ by $1$ or $-q^{-1}e^{-2iz}$, and\naccordingly $1$ and $-q^{-1}e^{-2iz}$ are called the multipliers or periodicity\nfactors associated with the periods $\\pi$ and $\\pi\\tau$ respectively.\n\n\\Subsection{21}{1}{1}{The four types of Theta-functions.}\n\nIt is customary to write $\\vartheta_{4}(z, q)$ in place of $\\vartheta(z, q)$; the other three\ntypes of Theta-functions are then defined as follows:\n\n%\n% 464\n%\n\nThe function $\\vartheta_{3}(z, q)$ is defined by the equation\n\\[ \\vartheta_{3}(z, q) = \\vartheta_{4}\\left(z + \\tfrac{1}{2}\\pi, q\\right) = 1 + 2 \\sum_{n=1}^{\\infty} q^{n^{2}} \\cos 2nz. \\]\nNext, $\\vartheta_{1}(z, q)$ is defined in terms of $\\vartheta_{4}(z, q)$ by the equation\n\\[ \\vartheta_{1}(z, q) = -i e^{iz + \\frac{1}{4}\\pi i \\tau} \\vartheta_{4}\\left(z + \\tfrac{1}{2}\\pi\\tau, q\\right), \\]\nand hence*\n\\[ \\vartheta_{1}(z, q) = 2 \\sum_{n=0}^{\\infty} (-1)^{n} q^{(n+\\frac{1}{2})^{2}} \\sin (2n+1)z. \\]\nLastly, $\\vartheta_{2}(z, q)$ is defined by the equation\n\\[ \\vartheta_{2}(z, q) = \\vartheta_{1}\\left(z + \\tfrac{1}{2}\\pi, q\\right) = 2 \\sum_{n=0}^{\\infty} q^{(n+\\frac{1}{2})^{2}} \\cos (2n+1)z. \\]\nWriting down the series at length, we have\n\\begin{align*}\n\\vartheta_{1}(z, q) &= 2q^{\\frac{1}{4}} \\sin z - 2q^{\\frac{9}{4}} \\sin 3z + 2q^{\\frac{25}{4}} \\sin 5z - \\dots, \\\\\n\\vartheta_{2}(z, q) &= 2q^{\\frac{1}{4}} \\cos z + 2q^{\\frac{9}{4}} \\cos 3z + 2q^{\\frac{25}{4}} \\cos 5z + \\dots, \\\\\n\\vartheta_{3}(z, q) &= 1 + 2q \\cos 2z + 2q^{4} \\cos 4z + 2q^{9} \\cos 6z + \\dots, \\\\\n\\vartheta_{4}(z, q) &= 1 - 2q \\cos 2z + 2q^{4} \\cos 4z - 2q^{9} \\cos 6z + \\dots\n\\end{align*}\nIt is obvious that $\\vartheta_{1}(z, q)$ is an odd function of $z$ and that the other\nTheta-functions are even functions of $z$.\n\nThe notation which has now been introduced is a modified form of that\nemployed in the treatise of Tannery and Molk; the only difference\nbetween it and Jacobi's notation is that $\\vartheta_{4}(z, q)$ is written where\nJacobi would have written $\\vartheta(z, q)$. There are, unfortunately, several\nnotations in use; a scheme, giving the connexions between them, will\nbe found in \\hardsectionref{2}{9}.TODO:verifyref\n\nFor brevity, the parameter $q$ will usually not be specified, so that $\\vartheta_{1}(z)$, ...\nwill be written for $\\vartheta_{1}(z, q)$, .... When it is desired to\nexhibit the dependence of a Theta-function on the parameter $\\tau$, it will\nbe written $\\vartheta(z \\mid \\tau)$. Also $\\vartheta_{2}(0)$, $\\vartheta_{3}(0)$, $\\vartheta_{4}(0)$ will be replaced by $\\vartheta_{2}$, $\\vartheta_{3}$,\n$\\vartheta_{4}$ respectively; and $\\vartheta_{1}'$ will denote the result of making $z$ equal to\nzero in the derivate of $\\vartheta_{1}(z)$.\n\nExample 1. Shew that\n\\[ \\vartheta_{3}(z, q) = \\vartheta_{3}(2z, q^{4}) + \\vartheta_{2}(2z, q^{4}). \\]\nExample 2. Obtain the results\n\\begin{align*}\n\\vartheta_{1}(z) &= -\\vartheta_{2}\\left(z + \\tfrac{1}{2}\\pi\\right) = -iM\\vartheta_{4}\\left(z + \\tfrac{1}{2}\\pi\\tau\\right) = -iM\\vartheta_{3}\\left(z + \\tfrac{1}{2}\\pi + \\tfrac{1}{2}\\pi\\tau\\right), \\\\\n\\vartheta_{2}(z) &= \\vartheta_{1}\\left(z + \\tfrac{1}{2}\\pi\\right) = M\\vartheta_{3}\\left(z + \\tfrac{1}{2}\\pi\\tau\\right) = M\\vartheta_{4}\\left(z + \\tfrac{1}{2}\\pi + \\tfrac{1}{2}\\pi\\tau\\right), \\\\\n\\vartheta_{3}(z) &= \\vartheta_{4}\\left(z + \\tfrac{1}{2}\\pi\\right) = M\\vartheta_{2}\\left(z + \\tfrac{1}{2}\\pi\\tau\\right) = M\\vartheta_{1}\\left(z + \\tfrac{1}{2}\\pi + \\tfrac{1}{2}\\pi\\tau\\right), \\\\\n\\vartheta_{4}(z) &= \\vartheta_{3}\\left(z + \\tfrac{1}{2}\\pi\\right) = -iM\\vartheta_{1}\\left(z + \\tfrac{1}{2}\\pi\\tau\\right) = iM\\vartheta_{2}\\left(z + \\tfrac{1}{2}\\pi + \\tfrac{1}{2}\\pi\\tau\\right),\n\\end{align*}\nwhere $M = q^{\\frac{1}{4}} e^{iz}$.\n\n* Throughout the chapter, the many-valued function $q^{\\frac{1}{4}}$ is to be\ninterpreted to mean $\\exp\\left(\\tfrac{1}{4}\\pi i \\tau\\right)$.\n\n%\n% 465\n%\n\nExample 3. 
Shew that the multipliers of the Theta-functions associated\nwith the periods $\\pi$, $\\pi\\tau$ are given by the scheme\n\\[\n\\begin{array}{c|cccc}\n & \\vartheta_{1}(z) & \\vartheta_{2}(z) & \\vartheta_{3}(z) & \\vartheta_{4}(z) \\\\\n\\hline\n\\pi & -1 & -1 & 1 & 1 \\\\\n\\pi\\tau & -N & N & N & -N\n\\end{array}\n\\]\nwhere $N = q^{-1} e^{-2iz}$.\n\nExample 4. If $\\vartheta(z)$ be any one of the four Theta-functions and $\\vartheta'(z)$\nits derivate with respect to $z$, shew that\n\\[ \\frac{\\vartheta'(z + \\pi\\tau)}{\\vartheta(z + \\pi\\tau)} = \\frac{\\vartheta'(z)}{\\vartheta(z)} - 2i. \\]\n\n\\Subsection{21}{1}{2}{The zeros of the Theta-functions.}\n\nFrom the quasi-periodic properties of the Theta-functions it is\nobvious that if $z_{0}$ be any zero of one of them, $\\vartheta(z)$ say, then\n\\[ z_{0} + m\\pi + n\\pi\\tau \\]\nis also a zero of $\\vartheta(z)$, for all integral values of $m$ and $n$.\n\nIt will now be shewn that if $C$ be a cell with corners $t$, $t + \\pi$, $t + \\pi + \\pi\\tau$,\n$t + \\pi\\tau$, then $\\vartheta(z)$ has one and only one zero inside $C$.\n\nSince $\\vartheta(z)$ is analytic throughout the finite part of the $z$-plane, it\nfollows, from \\hardsubsectionref{6}{3}{1}, that the number of its zeros inside $C$ is\n\\[ \\frac{1}{2\\pi i} \\int_{C} \\frac{\\vartheta'(z)}{\\vartheta(z)} \\, dz. \\]\nTreating the contour after the manner of \u00a7 20.12, we see that\n\\[ \\frac{1}{2\\pi i} \\int_{C} \\frac{\\vartheta'(z)}{\\vartheta(z)} \\, dz = \\frac{1}{2\\pi i} \\int_{t}^{t+\\pi} \\left\\{ \\frac{\\vartheta'(z)}{\\vartheta(z)} - \\frac{\\vartheta'(z + \\pi\\tau)}{\\vartheta(z + \\pi\\tau)} \\right\\} dz = \\frac{1}{2\\pi i} \\int_{t}^{t+\\pi} 2i \\, dz, \\]\nby \\hardsubsectionref{21}{1}{1}, example 4. Therefore\n\\[ \\frac{1}{2\\pi i} \\int_{C} \\frac{\\vartheta'(z)}{\\vartheta(z)} \\, dz = 1, \\]\nthat is to say, $\\vartheta(z)$ has one simple zero only inside $C$; this is the\ntheorem stated.\n\n%\n% 466\n%\n\nSince one zero of $\\vartheta_{1}(z)$ is obviously $z = 0$, it follows that the zeros\nof $\\vartheta_{1}(z)$, $\\vartheta_{2}(z)$, $\\vartheta_{3}(z)$, $\\vartheta_{4}(z)$ are the points congruent respectively to $0$, $\\tfrac{1}{2}\\pi$,\n$\\tfrac{1}{2}\\pi + \\tfrac{1}{2}\\pi\\tau$, $\\tfrac{1}{2}\\pi\\tau$. The reader will observe that these four points form\nthe corners of a parallelogram described counter-clockwise.\n\n\\Section{21}{2}{The relations between the squares of the Theta-functions.}\n\nIt is evident that, if the Theta-functions be regarded as functions\nof a single variable $z$, this variable can be eliminated from the\nequations defining any pair of Theta-functions, the result being a\nrelation* between the functions which might be expected, on general\ngrounds, to be non-algebraic; there are, however, extremely simple\nrelations connecting any three of the Theta-functions; these\nrelations will now be obtained.\n\nEach of the four functions $\\vartheta_{1}^{2}(z)$, $\\vartheta_{2}^{2}(z)$, $\\vartheta_{3}^{2}(z)$, $\\vartheta_{4}^{2}(z)$ is analytic for\nall values of $z$ and has periodicity factors $1$, $q^{-2} e^{-4iz}$ associated with\nthe periods $\\pi$, $\\pi\\tau$; and each has a double zero (and no other zeros)\nin any cell.\n\nFrom these considerations it is obvious that, if $a$, $b$, $a'$ and $b'$ are\nsuitably chosen constants, each of the functions\n\\[ \\frac{a\\vartheta_{1}^{2}(z) + b\\vartheta_{2}^{2}(z)}{\\vartheta_{4}^{2}(z)}, \\qquad \\frac{a'\\vartheta_{1}^{2}(z) + b'\\vartheta_{3}^{2}(z)}{\\vartheta_{4}^{2}(z)} \\]\nis a doubly-periodic function (with periods $\\pi$, $\\pi\\tau$) having at most\nonly a simple pole in each cell. By \\hardsubsectionref{20}{1}{3}, such a function is merely\na constant; and obviously we can adjust $a$, $b$, $a'$, $b'$ so as to make the\nconstants, in each of the cases under consideration, equal to unity.\n\nThere exist, therefore, relations of the form\n\\[ \\vartheta_{4}^{2}(z) = a\\vartheta_{1}^{2}(z) + b\\vartheta_{2}^{2}(z), \\qquad \\vartheta_{4}^{2}(z) = a'\\vartheta_{1}^{2}(z) + b'\\vartheta_{3}^{2}(z). \\]\nTo determine $a$, $b$, $a'$ and $b'$, give $z$ the special values $\\tfrac{1}{2}\\pi\\tau$ and $0$; we find\n$a = \\vartheta_{3}^{2}/\\vartheta_{2}^{2}$, $b = \\vartheta_{4}^{2}/\\vartheta_{2}^{2}$, and $a' = \\vartheta_{2}^{2}/\\vartheta_{3}^{2}$, $b' = \\vartheta_{4}^{2}/\\vartheta_{3}^{2}$.\n\nConsequently, we have obtained the relations\n\\[ \\vartheta_{2}^{2}(z)\\vartheta_{4}^{2} = \\vartheta_{4}^{2}(z)\\vartheta_{2}^{2} - \\vartheta_{1}^{2}(z)\\vartheta_{3}^{2}, \\qquad \\vartheta_{3}^{2}(z)\\vartheta_{4}^{2} = \\vartheta_{4}^{2}(z)\\vartheta_{3}^{2} - \\vartheta_{1}^{2}(z)\\vartheta_{2}^{2}. \\]\nIf we write $z + \\tfrac{1}{2}\\pi$ for $z$, we get the additional relations\n\\[ \\vartheta_{1}^{2}(z)\\vartheta_{4}^{2} = \\vartheta_{3}^{2}(z)\\vartheta_{2}^{2} - \\vartheta_{2}^{2}(z)\\vartheta_{3}^{2}, \\qquad \\vartheta_{4}^{2}(z)\\vartheta_{4}^{2} = \\vartheta_{3}^{2}(z)\\vartheta_{3}^{2} - \\vartheta_{2}^{2}(z)\\vartheta_{2}^{2}. \\]\n
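\nThese relations are easy to check numerically. The following small Python sketch (an editorial illustration, not part of the original text) verifies the first of the additional relations, and the corollary stated below, with mpmath's built-in Theta-functions at an arbitrarily chosen point and nome:\n\\begin{verbatim}\nfrom mpmath import jtheta, mpc, mpf\n\nq = mpf('0.3')            # arbitrary nome with |q| < 1\nz = mpc('0.7', '0.2')     # arbitrary complex argument\n\nt = lambda n, x: jtheta(n, x, q)   # jtheta(n, z, q) = theta_n(z, q)\n\n# theta_1(z)^2 theta_4^2 = theta_3(z)^2 theta_2^2 - theta_2(z)^2 theta_3^2\nlhs = t(1, z)**2 * t(4, 0)**2\nrhs = t(3, z)**2 * t(2, 0)**2 - t(2, z)**2 * t(3, 0)**2\nassert abs(lhs - rhs) < mpf('1e-10')\n\n# At z = 0 the last relation reduces to theta_2^4 + theta_4^4 = theta_3^4.\nassert abs(t(2, 0)**4 + t(4, 0)**4 - t(3, 0)**4) < mpf('1e-10')\n\\end{verbatim}\n\n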
By\nmeans of these results it is possible to express any Theta-function in\nterms of any other pair of Theta-functions.\n\n* The analoo;ou.s relation for the fuuctions sinz and cos 2 is, of\ncourse, (sin2)\"''+(cos,:)2= 1.\n\n%\n% 467\n%\n\nCorollary. Writing z=0 in the last relation, we have that is to say\n\n\\Subsection{21}{2}{1}{The addition-formulae for the Theta functions.}\n\nThe results just obtained are particular cases of formulae containing\ntwo variables; these formulae are not addition-theorems in the strict\nsense, as they do not express Theta-functions of 2 + y algebraically\nin terms of Theta- functions of z and y, but all involve\nTheta-functions of z - y as well as of z - -y, z and y.\n\nTo obtain one of these formulae, consider 3 z -h y) ' 3 z - y) qua\nfunction of z. The periodicity factors of this function associated\nwith the periods tt and TTT are 1 and (f e-2''2+!/) . q- Q-iiKz-y) =\nq-2 -Hz\n\nBut the function a j- z) + 6 1- z) has the same periodicity factors,\nand we can obviously choose the ratio a:b so that the doubly -periodic\nfunction\n\na%' z) + fh Hz) % z + y)% z-y) has no poles at the zeros of 3 z - y);\nit then has, at most, a single simple pole in any cell, namely the\nzero of 3(2:4- y) in that cell, and consequently \\hardsubsectionref{20}{1}{3}) it is a\nconstant, i.e. independent of z; and, as only the ratio a : 6 is so\nfar fixed, we may choose a and b so that the constant is unity. We\nthen have to determine a and b from the identity in z,\n\na%' (z) + b -' (z) =%(z-h y) X (z - y). To do this, put z in turn\nequal to and - ir + - ttt, and we get\n\naX' = Hy\\ h '( 7r + rTT = % ir + \\ 7rr + y)% 'rr + rrT-y\n\nand so a = 3- y)l ., b = \" i y)l 3\\\n\nWe have therefore obtained an addition-formula, namely\n\n 3 ( + y) % z - y) / = V y) V z) + i iy) X' z).\n\nThe set of formulae, of which this is typical, will be found in\nexamples 1 and 2 at the end of this chapter.\n\n21 '22. Jacobi's fundameMal formulae *.\n\nThe addition-formulae just obtained are particular cases of a set of\nidentities first given by Jacobi, who obtained them by purely\nalgebraical methods; each identity involves as many as four\nindependent variables, w, x, y, z.\n\nLet iv', x\\ y\\ z' be defined in terms of lu, x, y, z by the set of\nequations\n\n2w' = -w- x- y- z 2x' = iv - x+y + z, 2?/' = w+x - y + z, 22' = tv- x-\ny - z.\n\n* Ges. Werke, i. p. 505.\n\n30-2\n\n%\n% 468\n%\n\nThe re<\\ der will easily verify that the connexion between w, x, y, z\nand iv\\ x\\ y\\ z' is a reciprocal one*.\n\nFor brevity t, write [? ] for 5, (w) 5 (.r) 5 y) 3 z) and [r]' for S,\n(to') \\$, (x') \\$, (if) S, (z').\n\nConsider [3], [1]', [2]', [3]', [4J qua functions of z. The effect of\nincreasing by tt or nr is to transform the functions in the first row\nof the following table into those in the second or third row\nrespectively.\n\n[3]\n\n[1]'\n\n[2]'\n\n[3]'\n\n[4]'\n\nin)\n\n[3]\n\n-[ J\n\n-[I]'\n\n[4]'\n\n[3]'\n\n(rrr)\n\n [3J\n\n-iy[4]'\n\niV[3]'\n\niV[2]'\n\n-.V[l]'\n\nFor brevity, iV has been written in place of q~ e~- .\n\nHence both -[l]' + [2]' + [3]'-|-[4]' and [3] have periodicity factors\n1 and N, and so their quotient is a doubly-periodic function with, at\nmost, a single simple pole in any cell, namely the zero of 3 (z) in\nthat cell.\n\nBy \\hardsubsectionref{20}{1}{3}, this quotient is merely a constant, i.e. 
independent of 2;\nand considerations of symmetry shew that it is also independent of w,\nx and y.\n\nWe have thus obtained the result\n\nJ[3]=-[l]' + [2]' + [3]' + [4]',\n\nwhere A is independent of w, x, y, z; to determine A put w=x=y=z= and\nwe get\n\nJ g - .-' + V + i and so, by \\hardsectionref{21}{2} corollary, we see that 4 = 2.\n\nTherefore 2 [3]= -[l]' + [2]' + [3]' + [4j' (i).\n\nThis is one of Jacobi's formulae; to obtain another, increase if, .r,\ny, z (and therefore also tp', x\\ y', z) by w; and we get\n\n2[4] = [lJ-[2]' + :3]' + [4]' (ii).\n\nIncreasing all the variables in (i) and (ii) by htrr, we obtain the\nfurther results\n\n2[2] = [l]' + [2]' + [3]'-[4]' (iii),\n\n2[l] = [l]' + [2]'-[3]' + [4j (iv).\n\n[Note. There are 256 expressions of the form dp (ic) 3g (x) S (y) 3\n(2) which can be obtained from 3 (w) 3 (x) 3 (y) 3 (2) by incre;ising\nw, x, y, z by suitable half-period.s, but only those in which the\nsuffixes p, q, r, s are either equal in pairs or all different give\nrise to formulae not containing quarter-periods on the right-hand\nside.]\n\nExample 1. Shew that\n\n[1] + [2] = [!]' + [2]', [2]-h[3] = [2]' + [.3]', [l]-h[4]=.[l]'-f\n[4]', [3] -h [4] = [3]' -h [4]',\n\n[l] + [3] = [2]' + [4]', [2]-f[4] = [l]'-h[3]'.\n\nIn Jacobi's work the signs of u-, .r', y', z' are changed throughout\nso that the complete symmetry of the relations is destroyed; tlie\nsymmetrical forms just given are due to H. J. S. Smith, Proc. London\nMath. Soc. 1. (May 21, 1860, pjx 1-12).\n\nt The idea of this abridged notation is to be traced in H. J. S.\nSmith's memoir. It seems, however, not to have been used before\nKronecker, Journal j'ilr Math. cii. (1887), pp. 260-272.\n\n%\n% 469\n%\n\nExample 2. By writing tv + n, x + \\ tv for iv, x (and consequently /-\n\\ i, s' + tt for y\\ /), shew that\n\n[3344] + [2211] = [4433]' + [1122j,\n\nwhere [3344] means 3 w) S3 (x) S y) 4 (z), etc. Example 3. Shew that\n\n2[1234] = [3412]'+[2143]'-[1234]' + [4321]'. Example 4. Shew that\n\n\\Section{21}{3}{Jacobi's expressions for the Theta-functions as infinite products*TODO.}\nWe shall now establish the result\n\nM = l\n\n(where G is independent of z), and three similar formulae. Let fi2)= n\n(1 - n-i e-- '>) n l-q->'- e--'');\n\neach of the two products converges absolutely and uniformly in any\nbounded\n\ndomain of values of z, by \\hardsubsubsectionref{3}{3}{4}{1}, on account of the absolute\nconvergence of 00 V 2n-i. hence / (2:) is analytic throughout the\nfinite part of the 2 -plane,\n\nand so it is an integral function.\n\nThe zeros of/(2') are simple zeros at the points where\n\ng2iz = g(2H+l) T (w=..., -2, - 1,0, 1,2, ...)\n\ni.e. where 2iz = (2?i + 1) ttit + 2ni7ri; so that f(z) and 4 (z) have\nthe same zeros; consequently the quotient ' i z)/f z) has neither\nzeros nor poles in the finite part of the plane.\n\nNow, obviously / (2 + tt) =f z);\n\nGO QO\n\nand f z+'77r)= IT (1 - 5- +ie-'' ) IT (1 - 52 -3 g-2fe)\n\n  = 1 n = \\\n\n=f z) l-q-U- )! qe )\n\n= -q- e'- f z). That is to say f(z) arid i z) have the same\nperiodicity factors (\u00a7 2 I'll example 3). Therefore i z)/f(z) is a\ndoubly-periodic function with no zeros or poles, and so (| 20*12) it\nis a constant G, say; consequently\n\n 4 (z)=G U 1- 2q\"'\"-' cos 22 + 5 \"--). 11=1\n\n00 [It will appear in \u00a7 2142 that G= U (1 - q-'').]\n\nn = l\n\nWrite z + TT for 2 in this result, and we get\n\n%,(z)=G n (1 + 25 ' -! cos 2z + q' '-').\n\nn=l * Cf. Fundamenta Nova, p. 
145.\n\n%\n% 470\n%\n\nAlso 1 (z) = - iq e\", + ttt)\n\n00 3C\n\n= \\ igi e'z G IJ (1 - 9=\" e-'') TI (1 - q'''~\"- e -'''')\n\nn=l = 1\n\n= 26 5* sin 11 (1 - f\"e-\") U I - e'-'O,\n\nw = 1 = 1\n\nand so, (z) = 2Gq sin ft ( I - 25-\" cos 2z + q' )\n\nn = \\\n\nwhile ', z)=' Jz + l7r]\n\n= 2Gq cosz U 1 + 2q-'' cos 2z + \").\n\nn = l Example. Shew that*\n\n( cc 18 (-00 ISfoc 18\n\nJ n (i-?2 -i)i +16?- n (i+?2 )l = ' n (i+92 -i) .\n\nbi = l J 'n=l J ln = l J\n\n\\addexamplecitation{Jacobi.}\n\n\\Section{21}{4}{The differential equation satisfied by the Theta-functions.}\n\nWe may regard s(z t) as a function of two independent variables z and\nt; and it is permissible to differentiate the series for 3(2 |t) any\nnumber of times with regard to z or r, on account of the uniformity of\nconvergence of the resulting series \\hardsectionref{4}{7} corollary); in particular\n\n- ' = - 4 2 n- exip n-7nT + 2mz)\n\nOZ ft = - 00\n\nConsequently, the function ' 3 (2 \\ t) satisfies the partial\ndifferential equation\n\n1 .d y dy\n\nThe reader will readily prove that the other three Theta-functions\nalso satisfy this equation.\n\n\\Subsection{21}{4}{1}{A relation between Theta-functions of zero argument.}\n\nThe remarkable result that\n\nV(0) = 2 (0) 3 (0) 4(0)\n\nwill now be established f. It is first necessary to obtain some\nformulae for differential coefficients of all the Theta-functions.\n\n* Jacobi describes this result (Fund. Nova, p. 90) as 'aequatio\nidentica satis abstrusa.'\n\nt Several proofs of this important proposition have been given, but\nnone are simple.\n\nJacobi's original proof (Gfis. Werke, i. pp. 515-517), though somewhat\nmore difficult than the\n\nproof given here, is well worth study.\n\n%\n% 471\n%\n\nSince the resulting series converge uniformly, except near the zeros\nof the respective Theta-functions, we may differentiate the formulae\nfor the logarithms of Theta-functions, obtainable from \\hardsectionref{21}{3}, as many\ntimes as we please.\n\nDenoting differentiations with regard to z by primes, we thus get\n\n%' z) = X z)\n\n1-1 a-'iiZ\n\nL =i (1 + q e ) n=\\\n\nMaking 2r 0, we get\n\na% (>2n- 1 g-2i3\n\n=1 (1 + g \"-i e-- )-\n\nV (0) = 0, 3\" (0) = - 8 3 (0) J (3 - .\n\nIn like manner.\n\nV(0)-0, V(0) = 8 4(0) 2\n\n 2' (0) = 0, 2\" (0) = 2 (0)\n\n..=i(] -cf- r\n\n-1-8 S\n\n =i(l + <? '?J\n\nand, if we write 1 z) = sin z . (z), we get\n\n</)'(0) = 0, f (0) = 8(/>(0) i '\"\n\nIf, however, we differentiate the equation 1 (z) = sin z . cj) (z)\nthree times, we get\n\n / (0) = (/> (0), /\" (0) = S<p\" (0) - </> (0).\n\nTherefore\n\nV\"(0) V(0)\n\n= 24 2\n\n=1 (1 - q' y\n\n-1;\n\nand\n\nV(0) v:iO), V:(0)\n\n\",(0) Sf3(0) 4(0)\n\n 2?l 00,2/1-1 CO 2W- 1\n\n8-2 2 + 2\n\nL =i (1 + q' 'f .=1 (1 + q' '-'r .=1 (1 - ?''\"-'/\n\n= 8\n\n\\ V\n\n+ 2\n\n- 2\n\n.=1 (1 + q 'T- .=1 (1 - q y n=i (1 - q y on combining the first two\nseries and writing the third as the difference of two series. If we\nadd corresponding terms of the first two series in the last line, we\nget at once\n\nV(0)\n\n V(0) V10)\\ Vi0) 2\n\n= 1 +\n\n,(0) %(0) %iO) n=i l~q'''y V(0)\n\n%\n% 472\n%\n\nUtilising the differential equations of \\hardsectionref{21}{4}, this may be written\n\n1 d%' (0 I t) V(0|t) dr\n\n\\ 1 d%(0\\ T) 1 d% 0\\ r) 1 d%(0\\ r)\n\n~ 2(0|t) dr \" 3(0|t) dr %(0\\ t) dr\n\nIntegrating with regard to r, we get\n\nV (0, q) = C% (0, q) % (0, q) % (0, q),\n\nwhere C is a constant (independent of q). 
To determine C, make q- 0;\nsince\n\n\\ imq- X = % \\ imq-- % = 2 lim 3 = l, lim 4 = l,\n\nq O q O q -O (/ -O\n\nwe see that = 1; and so\n\nwhich is the result stated.\n\n\\Subsection{21}{4}{2}{The value of the constant G.}\n\nFrom the result just obtained, we can at once deduce the value of the\nconstant G which was introduced in \\hardsectionref{21}{3}. For, by the formulae of\nthat section,\n\n / = 0(0)= 2q GU 1- q' % % = 2qiGU l + f )\n\nM=l n=l\n\n 3 = G n (1 + r''-')\\ x = GU i- q ' -'f,\n\nand so, by | 21-41, we have\n\n00 O) OO CO\n\nn (1 - (f y = G n (1 + q;\"')' n (i + q '- y n (i - q -y.\n\nNow all the products converge absolutely, since \\ q\\ < l, and so the\nfollowing rearrangements are permissible :\n\nI n (1 - g \"-') n (1 - ? 'ol  I fi (1 + (t-') n (1 + 'ol = n (i-9'O n\n(i + g* )\n\nM=l W=l\n\n= n (1 - g * ),\n\nM = l\n\nthe first step following from the consideration that all positive\nintegers are comprised under the forms 2n - 1 and 2n. Hence the\nequation determining G is\n\nn (1 - (f'J = G\\\n\nn=\\\n\nandso G=+ n (1 - r/\n\n%\n% 473\n%\n\nTo determine the ambiguity in sign, we observe that G is an analytic\nfunction of q (and consequently one-valued) throughout the domain \\ q\\\n< \\; and from the product for 3(2 ), we see that G- 1 as q->0. Hence\nthe plus sign must always be taken; and so we have established the\nresult\n\nG=Yl (l- 'O-\n\nExample 1. Shew that 1=22*0 .\n\nExample 2. Shew that\n\nExample 3. Shew that\n\n1 + 2 i y -= n (i-(72 )(i+g2'i-i)2).\n\n)! = 1 n = l\n\n21 S. Connexion of the Sigma-f unction with the Theta- functions.\n\nIt has been seen 20-421 example 3) that the function a- [z \\ wx, M2),\nformed with the periods 2a)i, 2co2, is expressible in the form\n\nwhere (/=exp (ttiojo/wi).\n\nIf we compare this result with the product of \u00a7 21 '4 for 1 [z \\ t),\nwe see at once that\n\n.(.) = exp( Y,-in(l-./ )-3i( |- ). TT \\ 2coi/ 2 =i \\ 2coi I COj/\n\nTo express j i in terms of Theta-functious, take logarithms and\ndifferentiate twice, so that\n\n < )=:i-(0- =(e,)-( .T\n\n.4>'\n\nwhere v = 7rzja) and the function cf) is that defined in \\hardsubsectionref{21}{4}{1}.\n\nExpanding in ascending powers of z and equating the terms independent\nof z in this result, we get\n\na>i 3 \\ 2(oiJ \\ 2(Ui/ (f) (0)\n\nand SO,;=\\ -- -- .\n\nLzooi i\n\nConsequently a- z \\ wi, wo) can be expressed in terms of\nTheta-functions by the formula\n\n'031 / I'-. l \\ o / I \"2\n\nmi\n\na( lo, co,)=-,exp( --g J5,( v\n\nwhere v hivzlwi.\n\nExample. Prove that\n\n/n'-a-iBi\" 7n'\\\n\n\\Section{21}{5}{The expression of elliptic functions by means of Theta-functions.}\n\nIt has just been seen that Theta-functions are substantially\nequivalent to Sigma-functions, and so, corresponding to the formulae\nof \u00a7\u00a7 20'5-20\"58, there will exist expressions for elliptic functions\nin terms of Theta-functions.\n\n%\n% 474\n%\n\nFrom the theoretical point of view, the formulae of \u00a7\u00a7 20\"5-20\"53 are\nthe more important on account of their S3'mmetry in the periods, but\nin practice the Theta-function formulae have two advantages, (i) that\nTheta-functions are more readily computed than Sigma-functions, (ii)\nthat the Theta- functions have a specially simple behaviour with\nrespect to the real period, which is generally the significant period\nin applications of elliptic functions in Applied Mathematics.\n\nLet $f(z)$ be an elliptic function with periods 2wi, 2\\&J2; let a\nfundamental set of zeros (aj, Oa, ... 
an) and poles (/3j, /3..., ...\n/3 ) be chosen, so that\n\nas in \\hardsectionref{20}{3}TODO:verifyref.\n\nThen, by the methods of \\hardsubsectionref{20}{5}{3}, the reader will at once verify that\n\nTTZ - TTCUr 1 W.jX TTZ - TTySr t Wg\n\nr=\\ (.\n\nwhere A is a constant; and if\n\nf z) = A,\\ \\ \\ X[- - -'~' \\ \\ % .\n\n' V Iw, (jdJ \\ 2\\&), CO\n\n1)1).\n\nm = l\n\nbe the principal part oi f(z) at its pole, then, by the methods of\n\u00a720-')2,\n\nr=i (, =.! (m-1)! dz\"\" \" V 2\\&)i \\&)i/J where J. 2 is a constant.\n\nThis formula is important in connexion with the integration of\nelliptic functions. An example of an application of the formula to a\ndynamical problem will be found in \u00a7 22741.\n\nExample. Shew that\n\n  3 (2)\\ \\ \\ \\ - l' ( ), - 3 - 3\"\n\n5i2 (2) ' dz .9i (2) i'3 ' and deduce that\n\n\\Subsection{21}{5}{1}{Jacobi's imaginary transformation.}\n\nIf an elliptic function be constructed with periods 2\\&)i, 2w2, such\nthat\n\n/ (( 2/a)i) > 0, it might be convenient to regard the periods as being\n'Iw, - 2\\&)i : for these numbers are periods and, if I (co-i/coi) >0,\nthen also /(- oji/wo)> 0. In the case of the elliptic functions which\nhave been considered up to this point, the periods have appeared in a\nsymmetrical manner and nothing is gained by this point of view. But in\nthe case of the Theta-functions, which are only quasi-periodic, the\nbehaviour of the function with respect to the real period tt is quite\ndifferent from its behaviour with respect to the complex period ttt.\nConsequently, in view of the result of \\hardsubsectionref{21}{4}{3}, we may expect to\n\n%\n% 475\n%\n\nobtain transformations of Theta-functions in which the period-ratios\nof the two Theta-functions involved are respectively t and - l/r.\n\nThe transformations of the four Theta-functions were first obtained by\nJacobi*, who obtained them from the theory of elliptic functions; but\nPoissonf had previously obtained a formida identical with one of the\ntransformations and the other three transformations can be obtained\nfrom this one by ele- mentary algebra. A direct proof of the\ntransformations is due to Landsberg, who used the methods of contour\nintegration ij:. The investigation of Jacobi's formulae, which we\nshall now give, is based on Liouville's theorem; the precise formula\nwhich we shall establish is\n\nwhere (- ir)' is to be interpreted by the convention arg(- iV) < '\n\nFor brevity, we shall write - 1 /t = r', q - exp ttW).\n\nThe only zeros of ' 3 z \\ t) and ' 3 t z j t) are simple zeros at the\npoints at which\n\n1 1 / /,,,1,1/\n\nz = mir - UTTT + 2 '\"' + o '\"\"''' TZ = mir + tiTTT 4- vr 4- .3 ttt\n\nrespectively, where m, n, m, n take all integer values; taking m =- )i\n- \\, n = m, we see that the quotient\n\nis an integral function with no zeros.\n\nA 1 . / X . X (IZTTT + 7r-T-\\ \\ \\ .,\n\nAlso z + ttt) ylf z) = exp ( -. j q e -' = 1,\n\nwhile yjr (z - 'tt) - yjr ( z) = exp (; j x q~ e~-' '' = 1.\n\nConsequently -v/r (z) is a doubly-periodic function with no zeros or\npoles; and so \\hardsubsectionref{20}{1}{2}) i/ ( ) must be a constant, A (independent of\n2).\n\nThus 3 (z ! t) = exp (tW/7r) 3 ( r \\ r');\n\nand writing z + -tt, + ttt, -t- - tt -i- ttt in turn for z, we easily\nget, (z : t) = exp irz-'/Tr) % (zr I r),\n\nA% (z\\ t)= exp (itZ /tt) % (ZT I t),\n\nJ.' i (z\\ t)= - i exp Wz-j-n) % (zt \\ t').\n\n* Journal fur Math. in. (1828), pp. 403-404 [Ges. Werke, i. (1881),\npp. 264-265].\n\nt Mem. dc VAcad. des Sci. vi. (1827), p. 
592; the special case of the\nformula in which z-0 had been given earlier by Poisson, Journal de\nVEcole polyteclmique, xii. (cahier xix), (1823), p. 420.\n\n+ This method is indicated in example 17 of Chapter vi, p. 124. See\nLandsberg, Journal fiir Math. CXI. (1893), pp. 234-253.\n\n%\n% 476\n%\n\nWe still have to prove that A = - it)-; to do so, differentiate the\nlast equation and then put 2 = 0; we get\n\n /(OJT) = -iVV(0 t'). But %' (0 t) =, (0 j t) 3 (0 i t) 4 (0 1 t)\n\nand / (0 : t') =, (0 : r') 3 (0 \\ r), (0 t');\n\non dividing these results and substituting, we at once get A~'- = - W,\nand so\n\nA = \u00b1 -iT) -. To determine the ambiguity in sign, we observe that 3(0\nt)= 3(0|t'),\n\nboth the Theta-functions being analytic functions of t when 7 (t) >;\nthus A is analytic and one-valued in the upper half r-plane. Since the\nTheta-functions are both positive when t is a pure imaginary, the plus\nsign must then be taken. Hence, by the theory of analytic\ncontinuation, we always have\n\nA = + - ir);\n\nthis gives the transformation stated. It has thus been shewn that\n\nX 1 \u00b0\n\n5\" on-Trir+tniz \\ 'V As -mr]- 1 1:17)\n\n7j= -oe\n\nExample 1. Shew that\n\nwhen TT = -.\n\nExample 2. Shew that\n\nB, Q\\ t) \\ %AO\\ t') 53(0|r) 53(0|r')\n\nMOPr + 1) i 2iOJjr) 53(0 I r + 1) 54(0|r)'\n\nExample 3. Shew that\n\nand shew that the plus sign should be taken.\n\n\\Subsection{21}{5}{2}{Landen's type of transformation.}\n\nA transformation of elliptic integrals (\u00a7 227), which is of historical\ninterest, is due to Landen \\hardsubsectionref{22}{4}{2}); this transformation follows at\nonce from a transformation connecting Theta-functions with parameters\nt and 2t,\n\nnamely\n\nX z\\ t)X z I t) 3(0 It) 4(0 JT) 4(22|2t) 4(0|2t)\n\nwhich we shall now prove.\n\nThe zeros of z r) i z\\ r) are simple zeros at the points where\n\nz = [m + -\\ IT - \\ n + -A TTT and where z = imr +in + -Airr, where m\nand n\n\n%\n% 477\n%\n\ntake all integral values; these are the points where 1z = mir + Ui Ar\n- ir .'Ir, which are the zeros of 4 (2 j 2t). Hence the quotient\n\n 4 (2 I 2t) has no zeros or poles. Moreover, associated with the\nperiods it and ttt, it has multipliers 1 and cf- e\"-'' ) - q- e'' ) -\n- q~~\"' e- \" ) = \\ \\ it is therefore a doubly-periodic function, and\nis consequen-tly \\hardsubsectionref{20}{1}{2}) a constant. The value of this constant may\nbe obtained by putting z - and we then have the result stated.\n\nIf we write \\ \\ +;- TTT for z we get a corresponding result for the\nother\n\nTheta-functions, namely\n\n,( |t) i( |t) 3 (0 1 )>4 (OK)\n\n i(2 |2r) '\"\" 4(0i2T)\n\n\\Section{21}{6}{The differential equations satisfied by quotients of Theta-functions.}\nFrom \\hardsubsectionref{21}{1}{1} example 3, it is obvious that the\nfunction\n\nhas periodicity factors - 1, + 1 associated with the periods tt, ttt\nrespectively; and consequently its derivative\n\n[X z) 4 z) - %' ( ) 1 ( )l - V ( ) has the same periodicity factors.\n\nBut it is easy to verify that .(z),, z)/ J (z) has periodicity\nf;xctors - 1, + 1; and consequently, if < (z) be defined as the\nquotient\n\n X (z) % (z) - X ( ) 1 i )] - [% (z) % (z), then (f) (z) is\ndoubly-periodic with periods tt and ttt; and the only possible poles\nof (f) (z) are simple poles at points congruent to tt and vr -h . 
ttt.\n\nNow consider cf) iz + I ttt]; from the relations of \\hardsubsectionref{21}{1}{1}, namely\n\n% Z + l7rT =iq-h-''X z\\ ',( z + l7rT'j = iq-ie-''%(z),\n\n%\\ z+l7rT =q-ie'''X(z), x z- l'rrT =q~ e-''% z), we easily see that\n\n( ( -i- ttt) = - X (z) 1 (z) + X ( ) 4 ( )l - [% (z) % (z) . Hence (z)\nis doubly-periodic with periods tt and - ttt; and, relative to\n\nthese periods, the only possible poles of z) are simple poles at\npoints 1 2\n\ncongruent to vr\n\n%\n% 478\n%\n\nTherefore ( 20-12), </)( ) is a constant; and making z- 0, we see that\nthe value of this constant is [ i' ' 4 - 1 2' a = Z-\n\nWe have therefore established the important result that\n\nwriting = * i ( )/ 4 2) and making use of the results of \\hardsectionref{21}{2}, we\nsee that\n\n(\u00a7)' = - ~ ' '' ' \" ' '' '\n\nThis differential equation possesses the solution i( )/ 4(2). It is\nnot difficult to see that the general solution is \u00b1% z + a)/%(z + a)\nwhere a is the constant of integration; since this quotient changes\nsign when a is increased by tt, the negative sign may be suppressed\nwithout affecting the generality of the solution.\n\nExample 1. Shew that\n\ndz [Si z;j 2 S4 (2) 4 (z)\n\nExample 2. Shew thcat\n\nlPiii)l=\\ q2 ii?) Mi)\n\n\\Subsection{21}{6}{1}{The genesis of the Jacohian Ellip.tic function* snu. }\nThe differential equation\n\n(S)' '' \" ' ' '' \" ' '' ' which was obtained in \\hardsectionref{21}{6}, may be brought\nto a canonical form by a slight change of variable.\n\nWritet %/% = y, V =;\n\nthen, if A:- be written in place of 2/ 3, the equation determining y\nin terms of M is\n\n(|y = a-/)(i-A.y).\n\nThis differential equation has the particular solution\n\nThe function of u on the right has multipliers -1,4-1 associated with\nthe periods 7r%\", irT -,;-; it is therefore a doubly-periodic function\nwith periods 27r 3 ttt -. In any cell, it has two simple poles at the\npoints congruent to hirr a- and tt -J + iTTT j-; and, on account of\nthe nature of the quasi-periodicity of y, the residues at these points\nare equal and opj3osite in sign; the zeros of the function are the\npoints congruent to and tt sI\n\n* Jacobi and other early writers used the notation sin am in phice of\nsn.\n\nt Notice, from the formulae of \\hardsectionref{21}{3}, that 2 + 0, 3 + when \\ q\\ < l,\nexcept when q = 0, in which case the Theta-functions degenerate; the\nsubstitutions are therefore legitimate.\n\n%\n% 479\n%\n\nIt is customary to regard y as depending on k rather than on q; and\nto exhibit y as a function of u and k, we write\n\n2/ = sn u, k), or simply y = sn u.\n\nIt is now evident that sn (u, k) is an elliptic function of the second\nof the types described in \\hardsubsectionref{20}{1}{3}; when g- >0 (so that '- >0), it is\neasy to see that sn(w, A\")- >sin v.\n\nThe constant k is called the modulus; if '' = 4/ 3, so that k + k''\n=l, k' is called the complementary modulus. The quasi-periods ir, ttt\n. are usually written 2K, 2iK', so that sn (u, k) has periods 4iK, 2\nK'.\n\nFrom \\hardsubsectionref{21}{5}{1}, we see that 2K' = 7r' . (0 \\ r'), so that K' is the same\nfunction of t as K is of t, when tt' = - 1.\n\nExample 1. Shew that\n\ndzS, z) -\" S,(z) S, z)' and deduce that, if ?y = - \" ~., and u = z\\$\n-, theu\n\n J - 4 ( )\n\nExample 2. Shew that\n\nrf2 3.\n\n4(2) ''' S, z)3, z)'\n\nd 3 ' (z) and deduce that, if j/ = -r 6\"7 \\ > ' u zS, then\n\n 3 4 [z)\n\nExample 3. 
Obtain the following results :\n\n[These results are convenient for calculating /, k\\ A\", A'' when q is\ngiven.]\n\n\\Subsection{21}{6}{2}{Jacohi's earlier notation*. The Theta-f unction \u00a9(?<) and the Eta-fanction H ( ).}\n\nThe presence of the factors 3\"- in the expression for sn u, k) renders\nit sometimes desirable to use the notation which Jacobi employed in\nthe Fvndamenta Nova, and subsequently discarded. The function which is\nof primary importance with this notation is \u00a9 (u), defined by the\nequation\n\nCh) (u) = 4 u%-' i t), so that the periods associated with \u00a9 (u) are\n2K and 2iK'.\n\n* This is the notation employed throughout the Fundamenta Nova.\n\n%\n% 480\n%\n\nThe function \\& u + K) then replaces 3 (' ); and in place of i(-?) we\nhave the function H (u) defined by the equation\n\nH ( ) = - iq - ie'\"\" \" <-' ' e (u + i7v\") = u%-'-, t), and . z) is\nreplaced by H (?/ + A\").\n\nThe reader will have no difficulty in translating the analysis of this\nchapter into Jacobi's earlier notation.\n\nExample 1. If e' u)=-,, shew that the singularities of -t-t-t are\nsimple poles\n\nat the points congruent to I'K' (mod 2A', -liK'); and the residue at\neach singularity is 1.\n\nExample 2. Shew that\n\nH' (0) = W A'-i H ( A') e (0) e (A').\n\n\\Section{21}{7}{The problem of Inversion.}\n\nUp to the present, the Jacobian elliptic function sn (i k) has been\nimplicitly regarded as depending on the parameter q rather than on the\nmodulus k; and it has been shewn that it satisfies the differential\nequation\n\nI - - - I = (1 - sn- u) (1 - k- sn- u),\n\nwhere A: = V (0, ?)/ V (0, 5).\n\nBut, in those problems of Applied Mathematics in which elliptic\nfunctions occur, we have to deal with the solution of the differential\nequation\n\n I)-('- '> '-'y->\n\nin which the modulus k is given, and we have no a priori knowledge of\nthe value of q\\ and, to prove the existence of an analytic function\nsn(M, k) which satisfies this equation, we have to shew that a number\nt exists* such that\n\nWhen this number t has been shewn to exist, the function sn(w, k) can\nbe constructed as a quotient of Theta-functions, satisfying the\ndifferential equation and possessing the properties of being\ndoubly-periodic and analytic except at simple poles; and also\n\nlim sn u, k)/i( = 1.\n\nThat is to say, we can invert the integral\n\n\\ [y dt\n\n ~Jo l-(') l-k'i ) ' so as to obtain the equation y= sn (u, k).\n\n* The existence of a number r, for which / (t) > 0, involves t)ie\nexistence of a number q such that I g I < 1. An alternative procedure\nwould be to discuss tlie differential equation directly, after the\nmanner of Chapter x.\n\n%\n% 481\n%\n\nThe difficulty, of course, arises in shewing that the equation\n\nc=V(0|tW(0|t),\n\n(where c has been written for L-). has a solution.\n\nWhen* < c < 1, it is easy to shew that a solution exists. From the\nidentity given in \\hardsectionref{21}{2} corollary, it is evident that it is sufficient\nto prove the existence of a solution of the equation\n\n1-c = V(0|t)/V(0!t),\n\nCO / \\ Q2n-1\\ 8\n\nwhich may be written 1 - c = 11 - ) .\n\nNow, as q increases from to 1, the product on the right is continuous\nand steadily decreases from 1 to; and so \\hardsubsectionref{3}{6}{3}) it passes through\nthe value 1 - c once and only once. Consequently a solution of the\nequation in T exists and the problem of inversion may be regarded as\nsolved.\n\n21 \"71. The problem of inversion for complex values of c. 
The modular\nfunctions\n\nf r),g T),h T).\n\nThe i roblem of inversion may be regarded as a problem of Integral\nCalcuhis, and it may be j roved, by somewhat lengthy algebraical\ninvestigations involving a discussion of\n\nthe behaviour of I (I -;!-) ~ 2 (1 - k fi) ~ 2 dt, when y lies on a\n'Eiemann surface,' that the\n\nJ problem of inversion possesses a solution. For an exhaustive\ndiscussion of this aspect of the problem, the reader is referred to\nHancock, Elliptic Functions, i. (New York, 1910).\n\nIt is, however, more in accordance with the sjjirit of this work to\nprove by Cauchy's method (\u00a7 6-.31) that the equation = 2* ( I ' V- s*\n( 1 '') one root lying in a certain domain of the T-j)lane and that\n(subject to certain limitations) this root is an analytic function of\nc, when c is regarded as variable. It has been seen that the existence\nof this root yields the solution of the inversion problem, so that the\nexistence of the Jacobian elliptic function with given modulus k will\nhave been demonstrated.\n\nThe method just indicated has the advantage of exhibiting the\npotentialities of what are known as modular /mictions. The general\ntheory of these functions (which are of great importance in connexion\nwith the Theories of Transformation of Elliptic Functions) has been\nconsidered in a treatise by Klein and Fricket.\n\n:Mnir,s S./ 0\\ t)\n\nLet / (r) = We ir n \\ -, - r \\\n\nIgV\"\" '\" ' J 3 (0\n\n\" =53-'(0\n\n.. \\ g(2n-l) T 8 54*(0ir)\n\nh r)=-f r)lg r). Then, if tt = - 1, the functions just introduced\npossess the following properties : /(r + 2)=/(r), 5r(r + 2)=5r(r), f\nr)+g r) = \\,\n\nf r + ) h (r), / (r') =g (r), g (r') =/(r),\n\nby \u00a7\u00a7 21 '2 corollary, 2r51 example 1.\n\n* This is the case which is of practical importance.\n\nt F. Klein, Vorlesungen uber die Theorie der clliptischen\nModnlfunktionen (ausgearbeitet und vervoUstandigt von E. Fricke).\n(Leipzig, 1890.)\n\nW. M. A. 31\n\n%\n% 482\n%\n\nIt is easy|\\ to see that as /(r)- - + QO, the functions iV<*~\"\"/('\")\n= /i W and g (t) tend to unity, uniformly with resi)ect to R (r), when\n- 1 (r) 1; and the derivates of these two functions (with regard to\nr) tend uniformly to zero* in the same circumstances.\n\n21 'Til. The principal solution off (t) - r = 0.\n\nIt has been seen in \\hardsubsectionref{6}{3}{1} that, if /(t) is analytic inside and on any\ncontour, iiri times the number of roots of the equation /(t) - c =\ninside the contour is equal to\n\n/ 1 df r) Ifir c ir '\n\ntaken ]round the contour in question.\n\nTake the contour ABCDEFE' D'C B' A shewn in the figure, it being\nsupposed temporarily! that /(t) - c has no zero actually on the\ncontour.\n\nE' . 
F E\n\n-1 1\n\nThe contour is constructed in the following manner :\n\nFE is drawn parallel to the real axis, at a large distance from it.\n\nAB la the inverse of FE with respect to the circle | t | = 1.\n\nBC is the inverse of ED with respect to [ r | = 1, Z) being chosen so\nthat D\\=AO.\n\nBy elementary geometry, it follows that, since C and D are inverse\npoints and 1 is its own inverse, the circle on D\\ as diameter passes\nthrough C; and so the arc CD of this circle is the reflexion of the\narc AB in the line R (r) = .\n\nThe left-hand half of the figure is the reflexion of the right-hand\nhalf in the line R t) = 0.\n\n* This follows from the expressions for the Tlieta-functions as power\nseries in q, it being observed that [ 9 [ - as I (t) - -|- oo .\n\nt The values of/ T) at points on the contour are discussed in \u00a7\n21'712.\n\n%\n% 483\n%\n\nIt will now be shewn that, unless* c 1 or c O, the equation /(r) - c=0\nhas one, and only one, root inside the contour, provided that FE is\nsufficiently distant from the real axis. This root will be called the\nprincipal root of the equation.\n\nTo establish the existence of this root, consider / -rr\\ - --y dr\ntaken along; the various portions of the contour.\n\nSince/(r + 2)=/(r), we have,\n\nI j BE J ED- ) f (t) -C dr\n\nAlso, as T describes BC and B'C\", r'(= - l/r) describes E'D' and ED\nrespectively; and so\n\n   BC j C-B'] f T)-C dr \\ j BC Jc-B']g T)-C dr\n\n ] ED' J DE) g r)-C dr = 0, because g (r' + 2)= (r'), and\nconsequently corresponding elements of the integrals cancel. Since /\n(r \u00b1 1 ) = A (r), we have\n\n[j D'C jCD]f r)-C dr jB:ABh T)-c dr\n\nbut, as T describes B'AB, r describes EE\\ and so the integral round\nthe complete contoui* reduces to\n\n/\" f\\ ] df r) 1 dh r') 1 dfiyiXdr\n\njEE'\\ f -r)-c dr h(T') - c dr f 'r') - c dr ]\n\n i dfjr) 1 dh r) 1 dlMidr\n\n]EE'\\ f T)-C dr h r)\\ \\ -c.h r)] dr ' g r) - C dr j '\n\nNow as EE' moves off to infinityt, /(t) - c-*- -c=t=0, (t)-c- -1 -\nc4=0, and so the limit of the integral is\n\n- lim f - - log h (r) dr\n\nJ EE' l-C.A(r) dr \u00b0 '\n\n X m \\ 1 r . logACr) \\ rflog (r) ]\n\n.' EE C.h r)\\ dr dr j '\n\nBut 1- c.A(t) 1, fi(r)~ \\,g, r)- l, - 0, - 0, and so the limit of the\n\ndr clr\n\nintegral is\n\nI nidr = 2-i\n\nJ E'E\n\nNow, if we choose EE' to be initially so far from the real axis that /\n(r) - c, - c.h (r), g r) - e have no zeros when r is above EE', then\nthe contour will pass over no zeros of /(r) - c as EE' moves off to\ninfinity and the radii of the arcs CD, D'C, B'AB diminish to zero;\nand then the integral will not change as the contour is modified, and\nso the original contour integral will be Sttz, and the number of zeros\noif r) - c inside the original contour will be precisely one.\n\n* It is shewn in \\hardsubsectionref{21}{7}{1}'2 that, if c l or c O, then/(T) -c has a zero\non the contour. t It has been supposed temporarily that c=|=0 and 4=1.\n\n31-2\n\n%\n% 484\n%\n\n21 '712. The values of the modular function f(t) on the contoitr\nconsidered.\n\nWe now have to discuss the point mentioned at the beginning of 5\n21-711, concerning the zeros of f(r) - c on the lines* joining +1 to\n\u00b1l + x t and on the semicircles of 05C1, - ) C'B'0.\n\nAs T goes from 1 to 1 + x t or from - 1 to - 1 + x i, /(t) goes from -\nx to through real negative values. 
So, if c is negative, we make an\nindentation in DE and a corre- sponding indentation in D' E'; and the\nintegrals along the indentations cancel in virtue of the relation /(t\n+ 2) +/(r).\n\nAs r describes the semicircle 0 C1,t' goes from - 1 + x I'to - 1,\nand/(r) = 5r(T') = l - /(r), and goes from 1 to +x through real values\n; it would be possible to make indentations in BC and B'C to avoid\nthis difficulty, but we do not do so for the following reason : the\nettect of changing the sign of the imaginary part of the number ' is\nto change the sign of the real part of r. Now, if < 7 (c) < 1 and I\n(c) be small, this merely make-s t cross OF by a short i>ath; if R\n(c) < 0, t goes from DE to D' E' (or vice versa) and the value of q\nalters only slightly; but if R c) > 1, r goes from BC to B'C, and so\nq is not a one-valued function of c so far as circuits round c = +1\nare concerned; to make q a one-valued function of c, we cut the\nc-plane from -l-l to -fx; and then for values of c in the cut plane,\nq is determined as a one-valued analytic function of c, say q (c), by\nthe formula q (c) = e' '\"' '''\n\nwhere\n\n., 1 / r df r), 2771 j/(r)-C dr\n\nas may be seen from \\hardsectionref{6}{3}, by using the method of \\hardsubsectionref{5}{2}{2}.\n\nIf c describes a circuit not siu-rounding the point c=l, q c) is\none-valued, but t c) is one-valued only if, in addition, the circuit\ndoes not surround the point c = 0.\n\n21 '72. The periods, regarded as functions of the modidus.\n\nSince K=\\ 7rB: (0, q) we see from 5 21-712 that K is a one-valued\nanalytic function of c =k ) when a cut from 1 to -l-x is made in the\nc-plane; but since K'= -irK, we see that K' is not a one-valued\nfunction of c unless an additional cut is made from to - x; it will\nappear later \\hardsubsectionref{22}{3}{2}) that the cut fi'om 1 to -fx which was necessary\nso far as K is concerned is not necessary c\\ s regards K'.\n\n2173. The inversion-problem associated iv it h Weierstrassiari\nelliptic functions.\n\nIt will now be shewn that, when invariants g. and g are given, such\nthat g %lg, it is possible to construct the Weierstrassian elliptic\nfunction with these invariants; that is to say, we shall shew that it\nis possible to construct \\ periods 2(i)i, 2u>.2 such that the function\nfp z ( oi, do) has invariants g and g .\n\nThe problem is solved if we can obtain a solution of the differential\nequation\n\nm\n\n   if -9- -93\n\nof the form = (2 i wi, <t>2)-\n\n\"We proceed to effect the solution of the equation with the aid of\nTheta-functions. Let v = Az, where .4 is a constant to be determined\npresently.\n\n* We have seen that EE ' can be so chosen that f (t) -c lias uo zeros\neither on EE ' or on the small circular arcs.\n\nt On the actual calculation of the periods, see E. T. A. Innes, Proc.\nEdinburgh Royal Sue. XXVII. (1907), pp. 357-368.\n\n%\n% 485\n%\n\nBy the methods of \u00a7 21 -e, it is easily seeu that\n\nand hence, using the results of \u00a7 21 \"2, we have\n\nNow let ei, e., e be the roots of the equation ' y' -g-iy-gz = j\nchosen in such an order that ( 1 - e'2)l ei - 63) is not* a real\nnumber greater than unity or negative.\n\nIn these circumstances the equation\n\n61-63 3'(U|r)\n\npossesses a solution (\u00a7 21 \"712) such that /(r)>0; this eqvxation\ndetermines the parameter T of the Theta-functions, which has, up till\nnow, been at our disposal.\n\nChoosing t in this manner, let A be next chosen so thatt\n\nThen the function satisfies the equation\n\n.V = ' |! 
j 53MO I r) V (0 I r) + ei ( ) = 4 (y - ei) y - 62) y - 63).\n\nThe periods of y, qua function of z, are ttJ, tttJA; calling these\n2(bi, 2W2 we have\n\n/(co2/<ai)>0. The function z | wi, oa. may be constructed with these\nperiods, and it is easily\n\nseen that (3)- %\\ f-f-J,2(0 | t) V(0 | r)-ei is an elliptic function\nwith no pole at the origin |; it is therefore a constant, C, say.\n\nIf 6*2, 6*3 be the invariants of (2 | wi, a>., we have\n\nAip ( z)-G. z)-G, = f(z) = ip z)-C-e,] z)-C-e. l iJ z)-C-e,, and so,\ncomparing coefiicients of powers of < (z), we have\n\n= 12C, G2=g-2-l2C% G3=g3-g,C+iC Hence (7=0, G.2=g2, Gz g;\n\nand so the function z \\ co, wo) with the required invariants has been\nconstructed.\n\n\\Section{21}{8}{The numerical Computation of elliptic functions.}\n\nThe series proceeding in ascending powers of q are convenient for\ncalculating Theta-functions generally, even when | | is as large as\n0\"9. But it usually happens in practice that the modulus k is given\nand the calculation\n\n* If \\ iZ: >i thenO< <:l; andif <0, then 1- >1, and\n\nei-e ei-i'j e -e Ci-e\n\n j -ej ri\\ lZ 1\" <l.\n\nThe values 0, 1, qo of (ej - <'2)/( i - 3) a-i'e excluded since (12 4=\n27(/3 .\n\nt The sign attached to is a matter of indifference, since we deal\nexclusively with even functions of v and -.\n\nt The terms in z'\" cancel, and there is no term in z~i because the\nfunction is even.\n\n%\n% 486\n%\n\nof K, K' and q is necessary. It v ll be seen later (\u00a7\u00a7 22'801, 22'32)\nthat K, K' are expressible in terms of hypergeometric functions, by\nthe equations\n\nbut these series converge slowly except when | k \\ and j k' \\\nrespectively are quite small; so that the series are never\nsimultaneously suitable for numerical calculations.\n\nTo obtain more convenient series for numerical work, we first\ncalculate q as a root of the equation k = o\" (0, q)l 'i' (0, q), and\nthen obtain K from the\n\nformula K= r ir' i (0, q) and K' from the formula\n\nK'=7r- K\\ og, l!q).\n\nThe equation k = V (0, ?)/ V (0, q)\n\nis equivalent to* Jk' = 4 (0, 5)/ 3 (0, q).\n\n-i 111 Writing 2e = -j,, (so that < e <;j when 0<k< 1), we get\n\n\\ %i O,q)-' AO, q) %(0,q') ' % 0,q) + % 0,q) % 0,q )-\n\nWe have seen (\u00a7\u00a7 21*71-21-712) that this equation in g possesses a\n\nsolution which is an analytic function of e* when I e < -; and so q\nwill be\n\nexpansible in a Maclaurin series in powers of e in this domainf.\n\nIt remains to determine the coefficients in this expansion from the\nequation\n\ng + (/ + r + ... ~ 1 + 2 * + 25\" + . . . ' which may be written\n\nq=6 + 2qU-q''+2q' \u20ac-q-'+ ...;\n\nthe reader will easily verify by continually substituting e + 2q*e - q\n+ ... for q w herever q occurs on the right that the first two termsj\nare given by\n\nq = e + 2e' + 15e + loOe' + U'').\n\nIt has just been seen that this series converges when j e, < .\n\n[Note. The first two terms of this expansion usually suffice; thus,\neven if k be as large as (0-8704) = 0-933..., e = |, 2\u20ac- = 0-0000609,\n15f = 0-0000002.]\n\nExample. Given k = k' = \\ IJ2, calculate q, K, A\" by means of the\nexpansion just obtained, and also by observing that t=i, so that q =\ne~ .\n\n[y = 0-04.32 139, = A\" = 1-854075.]\n\n* In numerical work < A; < 1, and so q is positive and < k' < 1.\n\nt The Theta-functions do not vanish when |5|<1 except at ' = 0, so\nthis gives the only possible branch point.\n\nX This expansion was given by Weierstrass, Werke, ii. (1895), p. '276.\n\n%\n% 487\n%\n\n21 'Q. 
The notations employed for the Theta-f unctions.\n\nThe following scheme indicates the principal systems of notation which\nhave been employed by various writers; the symbols in any one column\nall denote the same function.\n\n i(tj)\n\n9,(nz)\n\n9s (nz)\n\nS (nz)\n\nJacobi\n\n l( )\n\n 2(2)\n\nh )\n\nSi z)\n\nTannery and Molk\n\n(9i (0)2)\n\n62 (os)\n\n03 (CZ)\n\n6 m)\n\nBriot and Bouquet\n\ne, z)\n\n02 )\n\ne iz)\n\n 0(2)\n\nWeierstrass, Halphen, Hancock\n\n6 z)\n\n6i z)\n\nOsiz)\n\nBoM\n\nJordan, Harkness and Morley\n\nThe notation employed by Hermite, H. J. S. Smith and some other\nmathematicians is expressed by the equation\n\nv=0, 1)\n\n6,, x)= 2 ( - T\" q '+i )- e'\"\" \" > '; (/x = 0, 1 tt = -\n\nwith this notation the results of \\hardsubsectionref{21}{1}{1} example 3 take the very\nconcise form e,,(a;+a) = -y 6,, (.r),,, (. + ar) = (-)\"?- e\" '\n*\"* ' 6,, (.r).\n\nCayley employs Jacobi's earlier notation (\u00a7 21 62). The advantage of\nthe Weierstrassian notation is that unity (instead of tt) is the real\nperiod of 63 z) and 0 (2).\n\nJordan's notation exhibits the analogy between the Theta-functions and\nthe three Sigraa-functions defined in \\hardsubsubsectionref{20}{4}{2}{1}. The reader will easily\nobtain relations, similar to that of \u00a7 21 43, connecting 6 ( ) with\n0-,. iwxz) when r= 1, 2, 3.\n\nREFERENCES. L. EuLER, Opera Omnia, (1), xx. (Leipzig, 1912). C G. J.\nJacobi, Fundamenta Nova* (Konigsberg, 1829); Ges. Math. Werke, i.\n\npp. 497-538. C. Hermite, Oeuvres Mathematiqiies. (Paris, 1905-1917.)\nF. Klein, Vorlesungen iiher die Theorie der elliptischen\nModulfunktionen (Ausgear-\n\nbeitet und vervoUstandigt von R. Fricke). (Leipzig, 1890.) H. Weber,\nEUiptische Funktionen und algebraische Zahlen. (Brunswick, 1891.) J.\nTannery et J. Molk, Fonctions Elliptiques. (Paris, 1893-1902.)\n\nMiscellaneous Examples.\n\n1. Obtain the addition-formulae\n\n9, y+z)9, y-z)9, =hHy)9o z)-9. 2/)h z) = S, y)9 z)-9 y)9, z\\\n\n 2 (y + 2) 2 y - ) V = 9 iy) s,' (z) - s,- (y) V (2) = S-2' (.y) 4 (2)\n- 3 (y) 1' (2),\n\n 3 (y + ) 3 (y - 2) V = 4 (y ) 3 (2) - 5 2 (y) V (,) 532 (3/) 5 2 (,)\n\\ 2 (3,) 2 (,)\n\n 4 (y + -') 4 iy - z) 9C = 3 (i/) 3 z) - 5/ (y ) 2 z) = V [y) 9i' (z)\n- 9, (y) 5i2 (3).\n\n\\addexamplecitation{Jacobi.} * Reprinted in his Ges. Math. Werke, i. (1881), pp. 49-239.\n\n%\n% 488\n%\n\n2. Obtain the addition-formulae\n\n94(y+2)54Cy- )V=V(y) 2M2)+V(y)V(2)=V(3/)V(2)+V(y) 3'( ).\n\nand, by increasing / by half periods, obtain the corresponding\nformulae for\n\n5r (i/ + z) Sr Ly - --) 2 and Br y- -z)Br 2f-z) V. where r=l, 2, 3.\n\\addexamplecitation{Jacobi.}\n\n3. Obtain the formulae\n\n5l 0/ \u00b1 --) 2 (y + ) h -94 = -9, Q/) \\$2 (i/) 3 (2) 4 (2) \u00b1 3 (y) 4 C\n) 1 C ) 2 z), Sl(]/\u00b1z)Ss(l/ + z) 2 4 =  ! Cy) 3 (y) 2 (2) 4 C ) \u00b1 2 i\n) 4 (y) 1 (z) 3 i l h l/\u00b1z) -34 (y + ') 5.2-93 = 5, Cy) 4 (y) 2 (2) 3\nC ) \u00b1 o (y) 3 (y) Si z) Si (z),\n\nSo i/\u00b1z)SsQ/ + z) 3. S = So (y) S3 (v/) S., (2) 3 (z) + S y) S 1/) S\n(z) 3 (z), S, y\u00b1z) Si i/ + z) S.,Si=S, (j/) Si (y) 2 z) Si z) + Si\n(j/) 3 (y) 1 (2) 3 (2), S3 i/\u00b1z)S, y + z)SsSi = S, y)Si 9/)S3 z)Siiz)\n+ S,iy)S. i/)S, z)S2 z).\n\n\\addexamplecitation{Jacobi.}\n\n4. Obtain the duplication-fomiulae\n\n5, 2y) S,Si = 5,2 (y) V (3/) - 3i2 (3,) S 2 (\\ y)\n\n53 (2y) 53 V = 3 (y) 54 0/) - - i' (y) -92' (. 
), Si 2y) Si' = S3*\n(1/) - S,* (y ) = Si* y)- S,* (y).\n\nObtain the dupHcation-formula\n\nS, (2y) 52 3 4 = 2 1 (y) -9, (y) 3 (y) 4 (y).\n\nObtain dupUcation-fomuilae from the results indicated in example 2.\n\nShew that, with the notation of \u00a7 21 '22,\n\n[l]-[2] = [4]'-[3]', [l]-[3] = [l]'-[3]', [l]-[4] = [2]'-[3]', [2]-[3]\n= [l]'-[4]', [2]-[4] = [2]-[4]', [3]-[4] = [2]'-[l]'.\n\nShew that\n\n2 [11 22] = [11 22]' + [2211]' -[4433]' + [3344]',\n\n2[1133] = [1133]' + [3311]'-[4422]' + [2244]', 2[1144] =\n[1144]'+[4411]'-[3322]' + [2233]', 2 [2233] = [2233]' + [3322]' - [441\n1]' + [1 1 44]', 2[2244] = [2244]' + [4422]'-[3311]' + [1133]', 2\n[3344] = [3344]' + [4433]' - [221 1 ]' + [1 1 22]'.\n\nObtain the formulae\n\n2n- fa =-2qi U (1 -?2\")2 (1 -y2n-l)-2| .\n\n\\addexamplecitation{Jacobi.} \\addexamplecitation{Jacobi.}\n\n\\addexamplecitation{Jacobi.}\n\nk k'\n\ni = 2fyi n (l+?2H)2(l-f/ -i)-\n\n10. Deduce the following i-esults from example 9 : n (1 - j2n- 1)6 = 2\ni > '/(->,\n\nn l-f' f =27v-'q- M'K\\\n\nn (l+?2 -i) =22 (M')-i, 1=1\n\nn (i+92 ) =lq -kk'-,\n\nn (i-j )\n\nn=l\n\n= An- q- k k'-'K\\\n\nn (1+9\"\n\nn = l\n\nIq- k k'-K\n\n\\addexamplecitation{Jacobi.}\n\n%\n% 489\n%\n\n11. By considering I .* /* e \"\" dz taken along the contour formed by\nthe parallelogram\n\nJ \u00b0i (2)\n\nwhose corners are - tt, Itt, Tr + Trr, - tt + ttt, shew that and\ndeduce that, when | I z) \\ < I ttt),\n\n12. Obtain the following expansions :\n\nS3' z)\\ * ( - )' sin 2 3\n\neach expansion being valid in the strip of the s-plane in which the\nseries involved is\n\nabsolutely convergent.\n\n\\addexamplecitation{Jacobi.}\n\n13. Shew that, if \\ I y)\\ < l (ttt-) and 1 / (2) | < 7 (ttt), then\n\n  \u00b1 = cot y + cot 2 + 4 i 2 q'\"' sin 2my + 2nz). 1 (y) 1 (2) m=l =i\n\n\\addexamplecitation{Math. Trip. 1908.}\n\n14. Shew that, if | /(s) j < \\ I (ttt), then\n\nTT,(S)\"2'\n\nwhere a = 2 2 j(' + )(2' +' + ).\n\n\\addexamplecitation{Math. Trip. 1903.}\n\n[Obtain a reduction formula for by considering Si(z) -U-'' 'dz taken\nround the contour of example 11.]\n\n15. Shew that\n\ncos 22 + g*\" J\n\n --rT=i o+ 2 a cos2?i2,\n\n 1(2) L =il-2j2ncos:\n\nis a doublv-periodic function of z with no singularities, and deduce\nthat it is zero.\n\nProve similarly that\n\n 2( )\\ . A% g-\" sin 22\n\n.92 (z) !=i 1 + 2? \" cos 22 + 2 \"\n\n 3 (2) \" ~ lil + 2g''' -icos22 + j4 -2'\n\n 4'.(g) . I g2n-l sii, 2g\n\n  4(2) =i l-2g ''-icos22 + \\$ \"-2' 16. Obtain the values of k, k', K,\nK' correct to six places of decimals when q= . [/ .= 0-895769, F =\n0-444518, =2-262700, Zr' = 1-658414.]\n\n%\n% 490\n%\n\n17. Shew that, if w+x + i/ + 2=0, then, with the notation of \\hardsubsectionref{21}{2}{2},\n\n[3] + [l] = [2] + [4], [1234] + [3412] + [2143] + [4321] =0.\n\n18. Shew that\n\n 4(y) 4(2) hilZ + z) ' ' i(y)Si )h(2/ + y\n\n19. By putting x=y = z, w=3a; in Jacobi's fundamental formulae, obtain\nthe following\n\nresults :\n\nSi3 ( ) j (3 .) . 3 ( .) (ar) = 3 (2.1,.) 5,\n\n 33 x) S3 (ar) - 3i (x) \\$i (ar) = B 2x) 5-2, So3 (.f) 3. (Sx) + 3 3 (\n) 5 (3.1;) = 33 (2 ) 3 .\n\n20. 
Deduce from example 19 that\n\n Si (x) Si 3x) Si + Si x) Si Sx) Si + Ss (x) S3 3x) S - V ( ) 4 (3 ) 2\n\n= 52=* ('*-) 2 (3.f) S3- + 4 (x) Si 3x) .932  \\addexamplecitation{Trinity, 1882.}\n", "meta": {"hexsha": "e0c04d5634a53329b6c2f46ff0fa45f8a3403db2", "size": 55173, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/tex/wandw-ch21.tex", "max_stars_repo_name": "CdLbB/Whittaker-and-Watson", "max_stars_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/tex/wandw-ch21.tex", "max_issues_repo_name": "CdLbB/Whittaker-and-Watson", "max_issues_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/tex/wandw-ch21.tex", "max_forks_repo_name": "CdLbB/Whittaker-and-Watson", "max_forks_repo_head_hexsha": "5fefdd2c36b4e48cc078a88afc77df289e680a73", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.3811414392, "max_line_length": 104, "alphanum_fraction": 0.6544505465, "num_tokens": 19324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7490872131147275, "lm_q1q2_score": 0.5611316528519892}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CS630: Database Management Systems\n% Copyright 2014 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/beacon\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section*{Question 2}\n\nConsider a database schema with three relations:\\\\\n\n\\begin{terminal}\nMovies (@*\\underline{movie\\_id}*@:integer, title:string, year:integer, studio:string)\nActors (@*\\underline{actor\\_id}*@:integer, name:string, nationality:string)\nStarsIn(@*\\underline{actor\\_id}*@:integer, @*\\underline{movie\\_id}*@:integer, character:string)\n\\end{terminal}\n\nThe keys are underlined in each relation.\nRelation \\texttt{Movies} stores information such as unique movie identifier, title, year and  producing studio.\n\\texttt{Actors} contains unique actor identifier, actor name and nationality.\nRelation \\texttt{StarsIn} tracks which actor starred in which movie, and the name of the character interpreted in that movie.\nAssume that one actor plays at most one character in the same movie.\n\nWrite relational algebra expressions for the following queries:\n\n\\begin{enumerate}\n\\item Find the titles of movies produced by \\textit{Universal} studio.\n\n\\textbf{Solution:}\n$$\\displaystyle \\pi_{title}(\\sigma_{studio=\"Universal\"}Movies) $$\n\n\\item Find the names of actors that played a character named \\textit{Forrest Gump} in some movie.\n\n\\textbf{Solution:}\n$$\\displaystyle \\pi_{name}(\\pi_{actor\\_id}(\\sigma_{character=\"Forest\\ Gump\"}StarsIn)\\Join Actors)$$\n\n\\item Find the names of actors of nationality \\textit{German}.\n\n\\textbf{Solution:}\n$$\\displaystyle \\pi_{name}(\\sigma_{nationality=\"German\"}Actors)$$\n\n\\item Find the nationality of actors who played a character named \\textit{Forrest Gump} or who starred in a movie in year 1980.\n\n\\textbf{Solution:}\n\\begin{enumerate}\n\\item A1 includes IDs of actors who played a character named \\textit{Forest Gump}.\n$$ \\rho(A1,\\pi_{actor\\_id}(\\sigma_{character=\"Forest\\ Gump\"}StarsIn))$$\n\\item A2 includes IDs of actors who starred in a movie in year 1980.\n$$ \\rho(A2,\\pi_{actor\\_id}(\\pi_{movie\\_id}(\\sigma_{year=1980}Movies)\\Join StarsIn))$$\n\\item Desired list includes nationality of actors whose ID is either in A1 or A2.\n$$ \\pi_{nationality}((A1\\cup A2)\\Join Actors)$$\n\\end{enumerate}\n\n\\item Find the names of actors that star in exactly one movie.\n\n\\textbf{Solution:}\n\\begin{enumerate}\n\\item A1 includes IDs of actors never starred in any movie.\n$$ \\rho(A1,\\pi_{actor\\_id}Actors-\\pi_{actor\\_id}StarsIn) $$\n\\item A2 includes IDs of actors starred in more than one movie.\n$$ \\rho(S1(1\\rightarrow actor\\_id1,2\\rightarrow movie\\_id1,3\\rightarrow character1),StarsIn) $$\n$$ \\rho(S2(1\\rightarrow actor\\_id2,2\\rightarrow movie\\_id2,3\\rightarrow character2),StarsIn) $$\n$$ \\rho(A2(1\\rightarrow actor\\_id),\\pi_{actor\\_id1} (S1 \\Join_{(actor\\_id1=actor\\_id2) \\wedge (movie\\_id1\\neq movie\\_id2)} S2)) $$\n\\item Desired list includes names of actors who are neither in A1 nor in A2.\n$$ \\pi_{name}((\\pi_{actor\\_id}Actors - A1 - A2) \\Join Actors)  $$\n\\end{enumerate}\n\n\\item Find the names of actors who starred in some movie in or after year 1980, but who did not star in any role (ever) for a movie produced by \\textit{Universal} 
studio.\n\n\\textbf{Solution:}\n\\begin{enumerate}\n\\item A1 includes IDs of actors who starred in some movie in or after year 1980.\n$$ \\rho(A1,\\pi_{actor\\_id}(\\pi_{movie\\_id}(\\sigma_{year>1979}Movies) \\Join StarsIn)) $$\n\\item A2 includes IDs of actors who starred in some role for a movie produced by \\textit{Universal} studio.\n$$ \\rho(A2,\\pi_{actor\\_id}(\\pi_{movie\\_id}(\\sigma_{studio=\"Universal\"}Movies) \\Join StarsIn)) $$\n\\item Desired list includes names of actors whose ID is in A1 and not in A2.\n$$ \\pi_{name}((A1-A2)\\Join Actors) $$\n\\end{enumerate}\n\n\\end{enumerate}\n", "meta": {"hexsha": "dd35d4259f76eb400d571d4fa04e9410dfe01176", "size": 3886, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "umb-cs630-2014f/src/tex/hw01/hw01q02.tex", "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_issues_repo_path": "umb-cs630-2014f/src/tex/hw01/hw01q02.tex", "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "umb-cs630-2014f/src/tex/hw01/hw01q02.tex", "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "avg_line_length": 47.975308642, "max_line_length": 170, "alphanum_fraction": 0.7228512609, "num_tokens": 1103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.752012562644147, "lm_q1q2_score": 0.5611059047280372}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{wrapfig}\n\\usepackage{float}\n\\usepackage{graphicx}\n\\usepackage[margin=1in]{geometry} \n\\usepackage{comment}\n\\usepackage{amsmath,amsthm,amssymb}\n\\usepackage{hyperref}\n% \\usepackage[vlined,linesnumbered,ruled,resetcount]{algorithm2e}\n\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\DeclareMathOperator*{\\val}{val}\n\\DeclareMathOperator*{\\MOSM}{MOSM}\n\\DeclareMathOperator*{\\WOSM}{WOSM}\n\\DeclareMathOperator*{\\best}{best}\n\\DeclareMathOperator*{\\worst}{worst}\n\n\\usepackage{algorithm}\n% \\usepackage{algorithmicx}\n\\usepackage[noend]{algpseudocode}\n\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\Rgz}{\\mathbb{R}_{\\ge 0}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n\\newcommand{\\Es}[2]{\\mathbb{E}_{#1}\\left[{#2}\\right]}\n\\newcommand{\\E}[1]{\\mathbb{E}\\left[{#1}\\right]}\n\\newcommand{\\ip}[2]{\\left\\langle{#1} , {#2}\\right\\rangle}\n\n\\newcommand{\\M}{\\mathcal{M}}\n\\newcommand{\\W}{\\mathcal{W}}\n\n\\newtheorem{definition}{Definition}\n\\newtheorem{lemma}[definition]{Lemma}\n\\newtheorem{corollary}[definition]{Corollary}\n\\newtheorem{theorem}[definition]{Theorem}\n\\newtheorem{claim}[definition]{Claim}\n\\newtheorem{proposition}[definition]{Proposition}\n\n\\usepackage{filecontents}\n\\begin{filecontents*}{\\jobname.bib}\n\n@article{ThurberConcerningMaxStab02,\n  author = {Thurber, Edward G.},\n  title = {Concerning the maximum number of stable matchings in the stable\n    marriage problem},\n  journal = {Discrete Mathematics},\n  year = {2002},\n  volume = { 248 },\n  pages = {195-219},\n}\n@article{IrvingCountingStab86,\n  author    = {Robert W. Irving and\n               Paul Leather},\n  title     = {The Complexity of Counting Stable Marriages},\n  journal   = {{SIAM} J. Comput.},\n  volume    = {15},\n  number    = {3},\n  pages     = {655--667},\n  year      = {1986},\n  url       = {https://doi.org/10.1137/0215048},\n  doi       = {10.1137/0215048},\n  timestamp = {Sat, 27 May 2017 14:22:59 +0200},\n  biburl    = {https://dblp.org/rec/bib/journals/siamcomp/IrvingL86},\n  bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n@article{PittelAverageStable89,\n  author = {Pittel, B.},\n  title = {The Average Number of Stable Matchings},\n  journal = {SIAM J. Discret. Math.},\n  issue_date = {Nov. 
1989},\n  volume = {2},\n  number = {4},\n  month = nov,\n  year = {1989},\n  issn = {0895-4801},\n  pages = {530--549},\n  numpages = {20},\n  url = {http://dx.doi.org/10.1137/0402048},\n  doi = {10.1137/0402048},\n  acmid = {75544},\n  publisher = {Society for Industrial and Applied Mathematics},\n  address = {Philadelphia, PA, USA},\n}\n@misc {ShorAsymptoticStab19,\n  TITLE = {The asymptotic behavior of a recurrence related to stable matchings},\n  AUTHOR = {Peter Shor },\n  HOWPUBLISHED = {Theoretical Computer Science Stack Exchange},\n  NOTE = {URL: \\url{https://cstheory.stackexchange.com/q/44371} (version: 2019-08-08)},\n  EPRINT = {https://cstheory.stackexchange.com/q/44371},\n  URL = {https://cstheory.stackexchange.com/q/44371}\n}\n\\end{filecontents*}\n\n\\begin{document}\n\n% \\renewcommand{\\qedsymbol}{\\filledbox}\n \n\\title{\n  A recurrence giving a lower bound for stable matchings\n}\n\\author{Clay Thomas\\\\\nclaytont@princeton.edu}\n\\maketitle\n\nIt's an interesting open question to find good upper and lower bounds on the\nmaximum number of distinct stable matches that an $n\\times n$ instance of the\nstable marriages problem can have.\nThe best known lower bound (for $n$ a power of $2$) was originally constructed\nin \\cite{IrvingCountingStab86},\nand a recurrence relation for the lower bound was found.\nSpecifically, define\n  \\[ b_0 = 1, \\qquad b_1 = 2 \\]\n  \\[ b_n = 3 b_{n-1}^2 - 2 b_{n-2}^4, \\qquad n > 1 \\]\nThen $b_n$ gives a lower bound on the maximum number of stable matches in\na $2^n \\times 2^n$ instance.\nFor a justification of this recurrence, see \\cite{ThurberConcerningMaxStab02},\nappendix A.\nKnuth analyzed $b_n$ and found it was $\\Omega(2.28^{2^n})$\n(mentioned, e.g. in \\cite{PittelAverageStable89}).\nHowever, the details of this analysis were, to the best of our knowledge,\nnever published.\nIn this note, we establish the growth behavior of the sequence,\nfollowing the proof given by Peter Shor \\cite{ShorAsymptoticStab19}.\n\n\\begin{proposition}\n  There exist constants $s, r$ such that $b_n \\sim r s^{2^n}$,\n  i.e. $b_n = rs^{2^n}(1 + o(1))$.\n  Moreover, $s\\approx 2.280142$ and $r = \\frac 1 2 (\\sqrt 3 - 1) \n  \\approx 0.366025$\n\\end{proposition}\n\nFirst, note that this is plausible because, if $c_n = 3 c_{n-1}^2$ (i.e. if we\neliminated the lower order terms of the recurrence)\nwe would have $c_n = (1/3) s^{2^n}$ where $s=3c_0$.\nNow, if we had $b_n = r s^{2^n}$ exactly, we could recover $r$ and $s$ as\nfollows: $r = b_{n-1}^2 / b_n$ and\n$s = \\left(\\frac{b_n}{r}\\right)^{1/2^n}$ for any value of $n$.\nIn fact, we'll show that the above quantities \\emph{quickly converge} to some\nvalues $r$ and $s$ which produce the desired result.\n\n\\begin{proof}\nFirst we handle $r$. Define\n\\[ r_n = \\frac{b_{n-1}^2}{b_n} \\]\n\n\\begin{claim}\n  $r_n \\to r$ as $n\\to \\infty$ for some value $r$.\n  Moreover, $r_n$ monotonically decreases to $r$.\n\\end{claim}\n\\begin{proof}\n  Because $r_n \\ge 0$, it suffices to show that $r_n$ is decreasing.\n  We have\n  \\[ r_n = \\frac{b_{n-1}^2}{b_n}\n  = \\frac{b_{n-1}^2}{3b_{n-1}^2 - 2b_{n-2}^4}\n  = \\frac{1}{3 - 2(b_{n-2}^2/b_{n-1})^2}\n  = \\frac{1}{ 3 - 2r_{n-1}^2} \\]\n  Now, $r_1 = 1/2$, and we can easily see from a graph that the function\n  $f(x) = 1/(3-2x^2)$ that $f(x) \\le x$ for $r < x < 1$, where\n  $r \\approx 0.366$. 
(In particular, $r = (\\sqrt 3 - 1)/2$ is the smaller\n  positive root of $x(3-2x^2)=1$).\n  Moreover, for $r < x < 1$ we have $f(x) > r$ as well.\n  Thus, $r_n < r_{n-1}$ for all $n > 1$.\n\n\\end{proof}\n\n\\includegraphics{RatioConvergencePlot}\n\nNote that certainly $r > 0$.\nNow we can turn to $s$. Define\n\\[ s_n = \\left(\\frac{b_n}{r}\\right)^{1/2^n} \\]\n\n\\begin{claim}\n  $s_n \\to s$ as $n\\to \\infty$ for some value $s$.\n  Moreover, $s_n$ monotonically decreases to $s$.\n\\end{claim}\n\\begin{proof}\n  Because $s_n \\ge 0$, it suffices to show that\n  $s_n$ is decreasing.\n  We have\n  \\[ \\frac{s_{n}}{s_{n-1}}\n    = \\left(\\frac{b_n}{r}\\right)^{1/2^n} \\left(\\frac{r}{b_{n-1}}\\right)^{2/2^n}\n    = \\left(\\frac{rb_n}{b_{n-1}^2}\\right)^{1/2^n}\n    = \\left(\\frac{r}{r_n}\\right)^{1/2^n} < 1\n  \\]\n  where the inequality follows because $r_n > r$.\n  Thus $s_n < s_{n-1}$, as desired.\n\\end{proof}\n\nSo far, we have $b_n = r s_n^{2^n} \\ge r s^{2^n}$.\nAll that remains is to show that, in fact,\n$b_n \\le rs^{2^n}(1 + o(1))$ as well.\n\n\\begin{claim}\n  $s^{2^n} / s_n^{2^n} \\to 1$ as $n\\to \\infty$.\n\\end{claim}\n\\begin{proof}\n  As $s_n \\ge s$, we know $(s/s_n)^{2^n} \\le 1$.\n  We have\n  \\[ \\left(\\frac{s}{s_n}\\right)^{2^n}\n  = \\left(\\prod_{k=n}^\\infty \\frac{s_{k+1}}{s_k}\\right)^{2^n}\n  = \\prod_{k=n}^\\infty \\left(\\frac{r}{r_{k+1}}\\right)^{\\frac{1}{2^{k+1}}\\cdot 2^n}\n  \\]\n  \\[ \\ge \\prod_{k=n}^\\infty \\left(\\frac{r}{r_{n+1}}\\right)^{\\frac{1}{2^{k+1}}\\cdot 2^n}\n  = \\left(\\frac{r}{r_{n+1}}\\right)^{2^n\\cdot\\sum_{k=n}^\\infty \\frac{1}{2^{k+1}}}\n  = \\frac{r}{r_{n+1}}\n  \\]\n  The inequality follows from the fact that $r_n$ is decreasing.\n  The right hand side approaches $1$ by the definition of $r$.\n\\end{proof}\n\nThus, we can conclude that \n\\[ \\frac{b_n}{rs^{2^n}} = \\frac{s_n^{2^n}}{s^{2^n}} \\to 1 \\]\nas $n\\to \\infty$, as desired.\n\\end{proof}\n\n\\paragraph{Remark:}\nWe saw that $r = (\\sqrt 3 - 1)/2$ can be calculated exactly.\nCombining a few facts above we can also get highly accurate estimates for $s$.\nSpecifically, in claims 2, 3, and 4 respectively we showed\n\\[ r_n = \\frac 1 {3 - 2r_{n-1}^2} \\]\n\\[ s_n = s_{n-1} \\left(\\frac r {r_n}\\right)^{1/2^n} \\]\n\\[ s_n \\ge s \\ge s_n \\left(\\frac r {r_n}\\right)^{1/2^n} \\]\nThe first two lines give us fast ways to compute the estimates of $s_n$, and\nthe last line shows that this estimate converges extremely quickly\n(indeed, all we need to do is observe that $r/r_n$ is bounded below by a constant $c > 0$\nand that $c^{1/2^n} = 1 - O(1/2^n)$ to get exponential convergence).\n\n\\bibliography{\\jobname}{}\n\\bibliographystyle{alpha}\n\n\\end{document}\n\n", "meta": {"hexsha": "864f91d3522dbc4116f1adcc75fe33158bb9829d", "size": 8054, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/IrvGusRecurrence.tex", "max_stars_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_stars_repo_head_hexsha": "8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tex/IrvGusRecurrence.tex", "max_issues_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_issues_repo_head_hexsha": "8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/IrvGusRecurrence.tex", "max_forks_repo_name": "ClathomasPrime/CompetitiveStableMatching", "max_forks_repo_head_hexsha": "8a0b1edadbffb8ea7dde8fac8d7696f91747668f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.1271186441, "max_line_length": 87, "alphanum_fraction": 0.6596722126, "num_tokens": 2976, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5611059045315985}}
{"text": "\\documentclass[12 pt,a4paper]{report}\n\\setlength{\\topmargin}{-2cm}\n\\setlength{\\oddsidemargin}{0cm}\n\\setlength{\\textheight}{24cm}\n\\setlength{\\textwidth}{16cm}\n\n\\usepackage{graphicx}\n\n\n\n%Use sffamily for all titles\n\n\n\\begin{document}\n\n\\title{\\sffamily\\huge\\textbf{Report on Sorting Algorithms}}\n\\author{Kavi Zahan Sultana}\n\\date{01/12/2014}\n\\maketitle\n\n\n\\newpage\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Introduction}\n\nThis report provides the introduction, algorithm,running time and graphs of sorting algorithms.\n\n\\section{Insertion Sort}\n\nInsertion sort is an efficient algorithm for sorting a small number of elements. We present the pseudocode called INSERTION-SORT.\nIt takes as a parameter an array /textit{A[1..n]}  containing a sequence of length \\textit{n} that is to be sorted.\n\n\\subsection{Algorithm}\n\n$INSERTION-SORT(A)$\n\n1   \\textbf{for}  $j=2   \\textbf{   to  }   A.length$\n\n2    \\hspace{1cm}$ key=A[j] $\n\n3     \\hspace{1cm}$i=j-1$\n\n4     \\hspace{1cm}$\\textbf{while  }i>0\\hspace{0.3cm}and \\hspace{0.3cm} A[i]>key$\n\n5     \\hspace{1.5cm}$A[i+1]=A[i]$\n\n6     \\hspace{1.5cm}$i=i-1$\n\n7     \\hspace{1cm}$A[i+1]=key$\n\n\n\\subsection{ Time Complexity  of  Insertion  Sort}\n\n$INSERTION-SORT(A)  \\hspace{3cm}cost \\hspace{1cm}times$\n\\vspace{0.4cm}\n\n1\\hspace{0.75cm}$\\textbf{for}\\hspace{0.3cm}j=2 \\textbf{ to } A.length\\hspace{2.5cm}c_1\\hspace{1cm} n  $\n\n2\\hspace{1.5cm}$key=A[j]\\hspace{3.94cm}c_2\\hspace{1cm}n-1$\n\n3\\hspace{1.5cm}$i=j-1\\hspace{4.2cm}c_3\\hspace{1cm}n-1$\n\n4\\hspace{1.5cm}$\\textbf{while }i>0 and A[i]>key\\hspace{1.1cm}c_4\\hspace{1cm}\\sum_{j=2}^nt_j$\n\n5\\hspace{2cm}$A[i+1]=A[i]\\hspace{2.7cm}c_5\\hspace{1cm}\\sum_{j=2}^n(t_j-1)$\n\n6\\hspace{2cm}$i=i-1\\hspace{3.8cm}c_6\\hspace{1cm}\\sum_{j=2}^n(t_j-1)$\n\n7\\hspace{1.5cm}$A[i+1]=key\\hspace{3.3cm}c_7\\hspace{1cm}n-1$\n\n\n\\vspace{1.5cm}\n\n\n\nThe running time of the algorithm is the sum of running times for each statement executed.To compute $T(n)$, the  running time of $INSERTION-SORT$ on an input of \\textit{n} values, we sum the products of the \\textit{cost} and \\textit{times} columns,obtaining\\\\\n\n$T(n)=c_1n+c_2(n-1)+c_3(n-1)+c_4\\sum_{j=2}^nt_j + c_5\\sum_{j=2}^n(t_j-1)+ c_6\\sum_{j=2}^n(t_j-1) + c_7(n-1)$\\\\\n\n\\textbf{1.Best case:}\n\nIn $INSERTION-SORT$, the best case occurs if the array is already sorted. For each$ j=2,3,...,n$, we then find that $A[i]<=key$ in line 5 when \\textit{i} has its initial value of $j-1$. Thus $t_j=1$ for $ j=2,3,...,n$, and the best-case running time is\\\\\n\n$T(n)=c_1n+c_2(n-1)+c_3(n-1)+c_4(n-1) + c_7(n-1)$\\\\\n\n\\hspace{1cm}$=(c_1+c_2+c_3+c_4+c_7)n-(c_2+c_3+c_4+c_7).$\\\\\n\n\\hspace{1cm}$=an+b$\\\\\n\nwhere, $a=(c_1+c_2+c_3+c_4+c_7)$ and $b=-(c_2+c_3+c_4+c_7)$, which depend on the statement costs $c_i$; it is thus a \\textbf{\\textit{linear function}} of \\textit{n}.\\\\\n\n\\newpage\n\n\\textbf{2.Worst case:}\n\nIf the array is in reverse sorted order-that is, in decreasing order- the worst case results.We must compare each element $A[j]$ with each element in the entire sorted subarray$A[1..j-1]$, and so $t_j=j$ for $j=2,3,...,n$. 
Noting that \\\\\n\n$\\sum_{j=2}^nj=n(n+1)/2-1$\\\\\n\nand\\\\\n\n$\\sum_{j=2}^n(j-1)=n(n-1)/2$\\\\\n\nTherefore, in the worst case, the running time of $INSERTION-SORT$ is\\\\\n$$\\hspace{0.5cm}T(n)=c_1n+c_2(n-1)+c_3(n-1)+c_4(n(n+1)/2-1)+c_5(n(n-1)/2)+ c_6(n(n-1)/2)+c_7(n-1)$$\n$$\\hspace{1.5cm}=(c_4/2+c_5/2+c_6/2)n^2+(c_1+c_2+c_3+c_4/2-c_5/2-c_6/2+c_7)n-(c_2+c_3+c_4+c_7)$$\n$\\hspace{1.5cm}=an^2+bn+c$\\\\\n\nwhich we can express as $an^2+bn+c$ for constants $a$, $b$ and $c$ that again depend on the statement costs $c_i$; it is thus a \\textbf{\\textit{quadratic function}} of \\textit{n}.\\\\\n\\newpage\n\n\n\\textbf{3. Average case:}\\\\\n\nThe \\textbf{average case} is often roughly as bad as the worst case. On average half the elements in $A[1..j-1]$ are less than $A[j]$, and half the elements are greater. On average, therefore, we check half of the subarray $A[1..j-1]$, and so $t_j$ is about $j/2$. The resulting average case running time turns out to be a quadratic function of the input size, just like the worst-case running time.\n\nHere is the chart of time complexity of insertion sort.\n\n\\vspace{1cm}\n\n\\includegraphics{insertdata.png}\n\n\\hspace{5cm}fig: Chart of Insertion Sort.\n\n\\subsection{Graph of Time Complexity of Insertion Sort}\n\nHere is the graph comparing the running times of the average, best and worst cases.\\\\\n\n\n\\includegraphics{img-1.png}\n\n\\hspace{4cm}fig: time vs n graph for insertion sort.\n\n\\newpage\n\n\\section{Merge Sort}\n\nThe \\textbf{\\textit{Merge Sort}} algorithm closely follows the divide and conquer paradigm. Intuitively, it operates as follows.\n\\vspace{0.5cm}\n\n\\textbf{Divide:}\n\nDivide the \\textit{n}-element sequence to be sorted into two subsequences of $n/2$ elements each.\n\\vspace{0.5cm}\n\n\\textbf{Conquer:}\n\nSort the two subsequences recursively using merge sort.\n\\vspace{0.5cm}\n\n\\textbf{Combine:}\n\nMerge the two sorted subsequences to produce the sorted answer.\n\\vspace{0.5cm}\n\nThe key operation of the merge sort algorithm is the merging of two sorted sequences in the \\textbf{combine} step. We merge by calling an auxiliary procedure $MERGE(A,p,q,r)$, where \\textit{A} is an array and \\textit{p,q} and \\textit{r} are indices into the array such that $p\\leq q<r$. The procedure assumes that the subarrays $A[p..q]$ and $A[q+1..r]$ are in sorted order. It \\textbf{\\textit{merges}} them to form a single sorted subarray that replaces the current subarray $A[p..r]$.\n\\vspace{0.5cm}\n\nOur $MERGE$ procedure takes time $\\Theta(n)$, where $n=r-p+1$ is the total number of elements being merged.\n\n\\newpage\n\n\n\\subsection{Algorithm}\n\nThe following pseudocode implements the $MERGE(A,p,q,r)$ procedure, which merges the two subsequences. 
A second procedure, $MERGE-SORT(A,p,r)$, sorts the elements in the subarray $A[p..r]$.\n\\vspace{1cm}\n\n$MERGE(A,p,q,r)$\n\n\n1\\hspace{0.5cm} $n_1=q-p+1$\n\n\n2\\hspace{0.6cm}$n_2=r-q$\n\n\n3\\hspace{0.6cm}//create arrays $L[1...n_1+1]$ and $R[1...n_2+1]$.\n\n\n4\\hspace{0.6cm}$\\textbf{for } i=1\\hspace{0.2cm}to \\hspace{0.2cm}n_1$\n\n\n5\\hspace{1cm}$do\\hspace{0.2cm}L[i]=A[p+i-1]$\n\n6\\hspace{0.6cm}$\\textbf{for}\\hspace{0.2cm} j=1\\hspace{0.2cm}to\\hspace{0.2cm}n_2$\n\n7\\hspace{1cm}$do\\hspace{0.2cm} R[j]=A[q+j]$\n\n8\\hspace{0.6cm}$L[n_1+1]=\\infty$\n\n9\\hspace{0.6cm}$R[n_2+1]=\\infty$\n\n10\\hspace{0.5cm}$i=1$\n\n11\\hspace{0.5cm}$j=1$\n\n12\\hspace{0.5cm}$\\textbf{for}\\hspace{0.2cm}k=p\\hspace{0.2cm}to\\hspace{0.2cm}r$\n\n13\\hspace{1cm}$do\\hspace{0.2cm}if\\hspace{0.2cm}L[i]\\leq R[j]$\n\n14\\hspace{1.5cm}$then\\hspace{0.2cm}A[k]=L[i]$\n\n15\\hspace{1.5cm}$i=i+1$\n\n16\\hspace{1cm}$else\\hspace{0.3cm}A[k]=R[j]$\n\n17\\hspace{1.5cm}$j=j+1$\n\n\\vspace{1cm}\n\n$MERGE-SORT(A,p,r)$\n\n1\\hspace{0.5cm}$\\textbf{if}\\hspace{0.2cm}p<r$\n\n2\\hspace{1cm}$q=\\lfloor(p+r)/2\\rfloor$\n\n3\\hspace{1cm}$MERGE-SORT(A,p,q)$\n\n4\\hspace{1cm}$MERGE-SORT(A,q+1,r)$\n\n5\\hspace{1cm}$MERGE(A,p,q,r)$\n\n\\subsection{Time Complexity of MERGE-SORT}\n\n\\vspace{1cm}\n\nThere is no difference among the best, worst and average cases of the time complexity of merge sort: the running time satisfies the recurrence $T(n)=2T(n/2)+\\Theta(n)$ regardless of the input order, and its solution is $T(n)=\\Theta(n\\log n)$.\n\n\\textbf{Best case:} In the best case, the running time of merge sort is $T(n)=O(n\\log n)$.\n\\vspace{0.5cm}\n\n\\textbf{Average case:} In the average case, the running time of merge sort is $T(n)=O(n\\log n)$.\n\\vspace{0.3cm}\n\n\\textbf{Worst case:} In the worst case, the running time of merge sort is $T(n)=O(n\\log n)$.\n\nHere is the chart of time complexity of merge sort:\n\n\\vspace{1cm}\n\n\\includegraphics{mergedata.png}\n\n\\hspace{5cm}fig: Time Complexity chart.\n\n\\subsection{Graph of Time Complexity of MERGE-SORT}\n\nWe can represent the time complexity of $MERGE-SORT$ by the following graph.\n\n\n\\includegraphics{mergesort.png}\n\n\\hspace{5cm}fig: Time complexity graph of merge sort.\n\n\\section{Bubble Sort}\n\n\n\n\\textbf{Bubble Sort} is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.\n\n\\subsection{Algorithm of BUBBLE SORT}\n\n\\vspace{1cm}\n\n$BUBBLE-SORT(A)$\n\n1\\hspace{0.5cm}$\\textbf{for}\\hspace{0.2cm}i=1\\hspace{0.2cm}to\\hspace{0.2cm}length[A]$\n\n2\\hspace{1cm}$do\\hspace{0.2cm}\\textbf{for}\\hspace{0.2cm}j=length[A]\\hspace{0.2cm}downto\\hspace{0.2cm}i+1$\n\n3\\hspace{1.5cm}$do\\hspace{0.2cm}\\textbf{if}\\hspace{0.2cm}A[j]<A[j-1]$\n\n4\\hspace{2cm}$then\\hspace{0.2cm}exchange\\hspace{0.2cm}A[j]<->A[j-1]$\n\n\\subsection{Time Complexity of BUBBLE SORT}\n\nIn the average and worst cases, the running time of BUBBLE-SORT is the same, $T(n)=O(n^2)$. A best case of $O(n)$ holds only for the variant that stops as soon as a whole pass performs no exchange; the version above always executes all of its $\\Theta(n^2)$ comparisons.\n\n\\vspace{1cm}\n\nHere is the table of time complexity:\n\n\\vspace{1cm}\n\n\\includegraphics{bubbledata.png}\n\n\\hspace{5cm}fig: Table of Time Complexity of Bubble Sort.\n\n\\subsection{Time Complexity Graph of BUBBLE-SORT}\n\nThe following graph represents the time complexity of BUBBLE-SORT.\n\n\\vspace{1cm}\n\n\\includegraphics{bubble sort.png}\n\n\\hspace{5cm}fig: Time complexity of BUBBLE-SORT.\n\n\\section{Quick Sort}\n\nQuick Sort, like Merge Sort, applies the divide and conquer paradigm. 
Here is the three-step divide and conquer process for sorting a typical subarray $A[p..r]$.\n\n\\textbf{Divide:} Partition the array $A[p..r]$ into two (possibly empty) subarrays $A[p..q-1]$ and $A[q+1..r]$ such that each element of $A[p..q-1]$ is less than or equal to $A[q]$, which is, in turn, less than or equal to each element of $A[q+1..r]$. Compute the index \\textit{q} as part of this partitioning procedure.\n\n\\textbf{Conquer:} Sort the two subarrays $A[p..q-1]$ and $A[q+1..r]$ by recursive calls to quick sort.\n\n\\textbf{Combine:} Because the subarrays are already sorted, no work is needed to combine them: the entire array $A[p..r]$ is now sorted.\n\n\\subsection{Algorithm of Quick Sort}\n\nThe following procedure implements quicksort:\n\n\\vspace{1cm}\n\n$QUICKSORT(A,p,r)$\n\n1\\hspace{0.5cm}$\\textbf{if}\\hspace{0.2cm}p<r$\n\n2\\hspace{1cm}$q=PARTITION(A,p,r)$\n\n3\\hspace{1cm}$QUICKSORT(A,p,q-1)$\n\n4\\hspace{1cm}$QUICKSORT(A,q+1,r)$\n\n\\vspace{0.5cm}\n\n\\textbf{Partitioning the array}\n\n\\vspace{0.5cm}\n\nThe key to the algorithm is the $PARTITION$ procedure, which rearranges the subarray $A[p..r]$ in place.\n\n\\vspace{0.5cm}\n\n$PARTITION(A,p,r)$\n\n1\\hspace{0.5cm}$x=A[r]$\n\n2\\hspace{0.5cm}$i=p-1$\n\n3\\hspace{0.5cm}\\textbf{for} $j=p$  \\textbf{to} $r-1$\n\n4\\hspace{1cm} \\textbf{if} $A[j] \\leq x$\n\n5\\hspace{1.5cm} $i=i+1$\n\n6\\hspace{1.5cm} exchange $A[i]$ with $A[j]$\n\n7\\hspace{0.5cm}exchange $A[i+1]$ with $A[r]$\n\n8\\hspace{0.5cm}\\textbf{return} $i+1$\n\n\\subsection{Time Complexity of Quick Sort}\n\nThe running time of quick sort is the same in the best and average cases, but larger in the worst case. The worst case occurs when every partition is maximally unbalanced (for example, on an already sorted array), which gives the recurrence $T(n)=T(n-1)+\\Theta(n)$ with solution $\\Theta(n^2)$.\n\n\\vspace{0.5cm}\n\n\\textbf{Average case:} The running time is $O(n\\lg n)$.\n\n\\textbf{Best case:} The running time is $O(n\\lg n)$.\n\n\\textbf{Worst case:} The running time is $O(n^2)$.\n\n\n\\vspace{0.5cm}\n\nHere is the Time Complexity Chart of Quick Sort:\n\n\\includegraphics{quickdata.png}\n\n\\hspace{5cm}fig: Time Complexity chart of Quick Sort.\n\n\\subsection{Time Complexity Graph of Quick Sort}\n\nFrom the chart, we draw a graph that represents the efficiency of the Quick Sort algorithm.\n\n\\vspace{1cm}\n\n\\includegraphics{quicksort.png}\n\n\\hspace{5cm}fig: Time Complexity graph of quick sort.\n\n\\section{Heap Sort}\n\nLike merge sort, but unlike insertion sort, heapsort's running time is $O(n\\lg n)$. Like insertion sort, but unlike merge sort, heapsort sorts in place: only a constant number of array elements are stored outside the input array at any time. \n\nHeapsort also introduces another algorithm design technique: using a data structure, in this case one we call a \\textbf{heap}, to manage information. 
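\n\nThe pseudocode below leaves $LEFT$ and $RIGHT$ undefined; the usual array convention, assumed here, stores the heap in an array $A$ with the root at $A[1]$, $LEFT(i)=2i$, $RIGHT(i)=2i+1$ and the parent of node \\textit{i} at $\\lfloor i/2 \\rfloor$, and the \\textbf{max-heap} property requires $A[\\lfloor i/2 \\rfloor] \\geq A[i]$ for every $i>1$. As a small worked example, the array $A=(16,14,10,8,7,9,3)$ is a max-heap: the children of $A[1]=16$ are $A[2]=14$ and $A[3]=10$, and every parent is at least as large as its children.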
\n\n\\subsection{Algorithm of Heap Sort}\n\n\n$MAX-HEAPIFY(A,i)$\n\n1\\hspace{0.5cm}$l=LEFT(i)$\n\n2\\hspace{0.5cm}$r=RIGHT(i)$\n\n3\\hspace{0.5cm}\\textbf{if} $l \\leq A.heap-size$ and $A[l]>A[i]$\n\n4\\hspace{1cm}$largest=l$\n\n5\\hspace{0.5cm}\\textbf{else} $largest=i$\n\n6\\hspace{0.5cm}\\textbf{if} $r \\leq A.heap-size$ and $A[r]>A[largest]$\n\n7\\hspace{1cm}$largest=r$\n\n8\\hspace{0.5cm}\\textbf{if} $largest \\neq i$\n\n9\\hspace{1cm}exchange $A[i]$ with $A[largest]$\n\n10\\hspace{1cm}$MAX-HEAPIFY(A,largest)$\n\n\\vspace{1cm}\n\n$BUILD-MAX-HEAP(A)$\n\n1\\hspace{0.5cm}\\textit{A.heap-size}=\\textit{A.length}\n\n2\\hspace{0.5cm}\\textbf{for} $i=\\lfloor A.length/2 \\rfloor$ \\textbf{downto} 1\n\n3\\hspace{1cm}$MAX-HEAPIFY(A,i)$\n\n\\vspace{1cm}\n\n$HEAPSORT(A)$\n\n1\\hspace{0.5cm}$BUILD-MAX-HEAP(A)$\n\n2\\hspace{0.5cm}\\textbf{for} $i=A.length$ \\textbf{downto} 2\n\n3\\hspace{1cm}exchange $A[1]$ with $A[i]$\n\n4\\hspace{1cm}\\textit{A.heap-size}=\\textit{A.heap-size}-1\n\n5\\hspace{1cm}$MAX-HEAPIFY(A,1)$\n\n\\subsection{Time Complexity of Heap Sort}\n\nThe time complexity of heap sort is simple to state: the running time of heap sort is $O(n\\lg n)$, and this is the same for the best, average and worst cases.\n\n\\textbf{Chart for Time Complexity}\n\nHere is the table that contains the data of running time and number of elements.\n\n\\vspace{1cm}\n\n\n\\includegraphics{heapdata.png}\n\n\\hspace{5cm}fig: Time Complexity chart.\n\n\\subsection{Time Complexity Graph}\n\nHere is the graph that represents the time complexity of Heap-Sort.\n\n\\vspace{1cm}\n\n\\includegraphics{heapsort.png}\n\n\\hspace{5cm}fig: Time Complexity graph of Heap-Sort.\n\n\n\n\n\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "4148b260e142fd55504b0e1e441db78d154242f5", "size": 12932, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "graph.tex", "max_stars_repo_name": "kavizahan12/ReportOnSortingAlgorithmUsingLaTeX", "max_stars_repo_head_hexsha": "cb05f0edc315a9f2a37baf9166134f9080626125", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "graph.tex", "max_issues_repo_name": "kavizahan12/ReportOnSortingAlgorithmUsingLaTeX", "max_issues_repo_head_hexsha": "cb05f0edc315a9f2a37baf9166134f9080626125", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "graph.tex", "max_forks_repo_name": "kavizahan12/ReportOnSortingAlgorithmUsingLaTeX", "max_forks_repo_head_hexsha": "cb05f0edc315a9f2a37baf9166134f9080626125", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.1111111111, "max_line_length": 485, "alphanum_fraction": 0.7060779462, "num_tokens": 4836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.640635854839898, "lm_q2_score": 0.8757869867849166, "lm_q1q2_score": 0.5610605449366135}}
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage{graphicx}\n\\usepackage{courier}\n\\usepackage{underscore}\n\\setcounter{secnumdepth}{4}\n\n\\title{Laconic to TMD Compilation}\n\\author{Adam Yedidia}\n\n\\begin{document}\n    \n\\maketitle\n\nThis document is an explanation of the Laconic-to-TMD compilation process. It won't be useful to users who only want to know how to use Laconic files, or how to write Laconic files of their own. Rather, it is intended for an audience who is simply curious about how the algorithm works, or wants to write their own language which will compile down to TMD. \\\\\n\nEach part of the logic in a Laconic program transforms into an equivalent section of TMD code. In the below sections, it is explained how this happens for each Laconic command. \\\\\n\nEvery Laconic compiler presented in this repository makes use of the algorithm described herein. So, for example, the compiler that transforms Laconic programs into 2-symbol Turing machines just calls this compiler, and then calls the compiler that transforms TMD programs into 2-symbol Turing machines. The same is true for compilation to 4-symbol Turing machines.\n\n\\section{Variable Values} \\label{sec:varvalues}\n\nIn Laconic, variables can have any value corresponding to an integer, a list of integers, or a list of lists of integers. In TMD, ``variables'' and ``tapes'' refer to the same thing, and correspondingly the ``value'' of a ``variable'' is simply the symbols currently on the tape. \\\\\n\nRecall that in TMD, the tape alphabet is (\\texttt{_}, \\texttt{1}, \\texttt{E}), and that tapes must take the form \\texttt{_}$(\\texttt{1}|\\texttt{E})^*\\texttt{_}^\\infty$. We call the second symbol on the tape the ``home position'' of that tape.\n\n\\subsection{\\texttt{int} Values}\n\n\\subsubsection{Non-Negative Integers}\n\nTo represent the value of an \\texttt{int} \\texttt{x} with value $x \\ge 0$, the tape representation is: \\\\ \\\\\n\\texttt{_}$\\texttt{1}^x\\texttt{E}\\texttt{_}^\\infty$ \\\\\n\nSo, for example, the value 5 would be represented as: \\\\ \\\\\n\\texttt{_11111E___}\\dots\n\n\\subsubsection{Negative Integers}\n\nTo represent the value of an \\texttt{int} \\texttt{x} with value $x < 0$, the tape representation is: \\\\ \\\\\n\\texttt{_}$\\texttt{E1}^{-x}\\texttt{E_}^\\infty$ \\\\ \n\nSo, for example, the value -3 would be represented as: \\\\ \\\\\n\\texttt{_E111E___}\\dots \n%In the subsections following this one, I call the part of the tape containing only \\texttt{1}'s and \\texttt{E}'s.\n\n\\subsection{\\texttt{list} Values}\n\nTo represent the value of a \\texttt{list} that contains integer values $e_1$, $e_2$, $e_3$, and so on, the tape representation is: \\\\ \\\\\n$\\texttt{_E}((e_1>0)?\\texttt{1}:\\texttt{E})\\texttt{1}^{|e_1|}\\texttt{E}((e_2>0)?\\texttt{1}:\\texttt{E})\\texttt{1}^{|e_2|}\\texttt{E}((e_3>0)?\\texttt{1}:\\texttt{E})\\texttt{1}^{|e_3|}\\texttt{E}\\dots\\texttt{_}^\\infty$ \\\\\n\nIn other words, the tape representation of the \\texttt{list} begins with an \\texttt{E} in the home position, and then for each element of the \\texttt{list}, there would be a \\texttt{1} if the element positive and an \\texttt{E} otherwise, followed by a number of \\texttt{1}'s equal to the magnitude of the number, followed by an \\texttt{E}. 
So, for example, the value $[5, -2, 0]$ would be represented as: \\\\ \\\\\n\\texttt{_E111111EE11EEE___}\\dots\n\n\\subsection{\\texttt{list2} Values}\n\nTo represent the value of a \\texttt{list2} that contains \\texttt{list} values $l_1$, $l_2$, and so on, each of which contains \\texttt{int} values $e_{11}$, $e_{12}$, and so on, and $e_{21}$, $e_{22}$, and so on, and so on, the tape representation is: \\\\ \\\\\n$\\texttt{_EE1}((e_{11}>0)?\\texttt{1}:\\texttt{E1})\\texttt{1}^{|e_{11}|}\\texttt{E1}\n((e_{12}>0)?\\texttt{1}:\\texttt{E1})\\texttt{1}^{|e_{12}|}\\texttt{E1}\\dots\\texttt{EE1}\n((e_{21}>0)?\\texttt{1}:\\texttt{E1})\\texttt{1}^{|e_{21}|}\\texttt{E1}\n((e_{22}>0)?\\texttt{1}:\\texttt{E1})\\texttt{1}^{|e_{22}|}\\texttt{E1}\\dots\\dots\\texttt{E_}^\\infty$ \\\\\n\nIn other words, the tape representation of each list in the \\texttt{list2} begins with \\texttt{EE1}, and then for each \\texttt{list} in the \\texttt{list2}, the \\texttt{int}s in those \\texttt{list}s are preceded by a \\texttt{1} if the number is positive and a \\texttt{E1} otherwise, followed by a number of \\texttt{1}'s equal to the magnitude of the number, followed by a \\texttt{E1}. Finally, to signal the end of the entire \\texttt{list2}, the symbol \\texttt{E} is used, followed by infinite underscores. So, for example, the value $[[3,-1], [], [0,4]]$ would be represented as: \\\\ \\\\\n\\texttt{_EE11111E1E11E1EE1EE1E1E111111E1E___}\\dots\n\n\\section{Laconic Operations}\n\nEach operation in Laconic has a corresponding TMD function which performs the operation. As an example, the Laconic operation \\texttt{a*b} will compile down to a call to the TMD function \\texttt{BUILTIN_multiply}. \\\\\n\nThe following is the full mapping between Laconic operations and TMD functions: \\\\ \\\\\n\n\\begin{center}\n    \\begin{centering}\n    \\begin{tabular}{||c c c||}\n    \\hline\n    Operation & Laconic symbol & TMD function \\\\ [0.5ex]\n    \\hline\n    Addition & \\texttt{+} & \\texttt{BUILTIN_add} \\\\\n    \\hline\n    Subtraction & \\texttt{-} & \\texttt{BUILTIN_subtract} \\\\\n    \\hline\n    Multiplication & \\texttt{*} & \\texttt{BUILTIN_multiply}\\\\\n    \\hline\n    Integer Division & \\texttt{/} & \\texttt{BUILTIN_divide}\\\\\n    \\hline\n    Negation & \\texttt{~} & \\texttt{BUILTIN_intneg}\\\\\n    \\hline\n    Equality & \\texttt{==} & \\texttt{BUILTIN_equal}\\\\\n    \\hline\n    Inequality & \\texttt{!=} & \\texttt{BUILTIN_notEqual}\\\\\n    \\hline\n    Greater Than & \\texttt{>} & \\texttt{BUILTIN_greaterThan}\\\\\n    \\hline\n    Less Than & \\texttt{<} & \\texttt{BUILTIN_greaterThan} (arguments reversed)\\\\\n    \\hline\n    Greater or Equal & \\texttt{>=} & \\texttt{BUILTIN_greaterOrEqual}\\\\\n    \\hline\n    Less Than or Equal & \\texttt{<=} & \\texttt{BUILTIN_greaterOrEqual} (arguments reversed)\\\\\n    \\hline\n    And & \\texttt{\\&} & \\texttt{BUILTIN_and}\\\\\n    \\hline\n    Or & \\texttt{|} & \\texttt{BUILTIN_or}\\\\\n    \\hline\n    Not & \\texttt{!} & \\texttt{BUILTIN_assignNot}\\\\\n    \\hline\n    List Index & \\texttt{@} & \\texttt{BUILTIN_listindex}\\\\\n    \\hline\n    List2 Index & \\texttt{@*} & \\texttt{BUILTIN_list2index}\\\\\n    \\hline\n    List Append & \\texttt{\\^} & \\texttt{BUILTIN_append}\\\\\n    \\hline\n    List2 Append & \\texttt{\\^{}*} & \\texttt{BUILTIN_list2append}\\\\\n    \\hline\n    List Length & \\texttt{\\#} & \\texttt{BUILTIN_len}\\\\\n    \\hline\n    List2 Length & \\texttt{\\#*} & \\texttt{BUILTIN_len2}\\\\\n    \\hline\n    List 
Concatenation & \\texttt{||} & \\texttt{BUILTIN_concatenate}\\\\\n    \\hline\n    Explicit Integer & --- & \\texttt{BUILTIN_assign*}\\\\\n    \\hline\n    Explicit List Description & \\texttt{[]} & \\texttt{BUILTIN_listAssemble*}\\\\\n    \\hline \n    Explicit List2 Description & \\texttt{::} & \\texttt{BUILTIN_list2Assemble*}\\\\\n    \\end{tabular}\n    \\end{centering}\n\\end{center}\n\nThe above TMD functions are all available for perusal at: \\\\ \\\\\n\\texttt{parsimony/src/tmd/laconic_std_library/} \\\\\n\nNote that for explicit descriptions of \\texttt{int}s, \\texttt{list}s, or \\texttt{list2}s, there are separate functions callable based on the size of the relevant value. This means that it is important to avoid putting integers larger than 20 or so, or lists of similar length, into the program explicitly. (Building them up implicitly, of course, is fine.) I would recommend against doing this, because probably at this point it is more parsimonious to describe the number or list implicitly, but if necessary one can use the programs \\texttt{assemblexgen.py} and \\texttt{assignxgen.py}, which can be found in the \\texttt{tmd_meta} directory, to generate larger explicit-description functions. \\\\\n\nThe Laconic Standard Library has more than just the functions shown in the table above. It also contains many helper functions which are called by the functions above. \\\\\n\nThe Laconic Standard Library's functions all begin with the string ``\\texttt{BUILTIN_}''. It is therefore a very bad idea to begin your function names in TMD or Laconic with that string. \\\\\n\nEvery function in the Laconic Standard Library was written by me in order to compute the desired operation. Each one is composed entirely of explicit tape commands, return statements, and calls to other built-in functions.\n\n\\section{Complex Expressions}\n\nComplex expressions are expressions that are made up of multiple operations, such as (a+b)*c, for example. There is no order of operations to worry about, since full parenthesization is required. \\\\\n\nCompiling a complex expression is somewhat more tricky than compiling a single operation, since the built-in functions can only handle one operation at a time. The solution is to hold temporary results in \\emph{holder variables}, which are created automatically by the compiler. They are created as necessary and reused when their contents become irrelevant. They are recognizable by the fact that their names all begin with the \\texttt{!} symbol. They exist only in the compiled TMD, of course. \\\\\n\nWhen a variable is assigned to the value of a (potentially complex) expression, a series of function calls to built-in functions (potentially involving holder variables) is the result in the compiled TMD. Then, the built-in function \\texttt{BUILTIN_assign} is called to transfer the held value of the result of the complex expression to the variable. 
\\\\\n\n\\section{If Statements}\n\nIn Laconic, an \\texttt{if} statement has the following form: \\\\ \\\\\n\\texttt{if (}[\\emph{expression}]\\texttt{)\\{}[\\emph{function body}]\\texttt{\\}} \\\\\n\nThis is compiled down to a section of TMD code that looks like the following: \\\\ \\\\\n{}[Sequence of commands designed to load the value of \\emph{expression} into the holder variable $h$] \\\\\n\\texttt{[}$h$\\texttt{] E (IF_STATE_}$i$\\texttt{_FALSE); 1 ()} \\\\\n{}[Compiled version of \\emph{function body}] \\\\\n\\texttt{IF_STATE_}$i$\\texttt{_FALSE:} [Continuation of the program] \\\\\n\nIn a Laconic program, the function body is evaluated if and only if the expression evaluates to a positive integer (and the expression is forbidden from being a \\texttt{list} or \\texttt{list2}). As can be seen above, in the compiled TMD, the function body is evaluated if and only if the home-position tape symbol in the result of the expression is not an \\texttt{E}. As can be seen from Section~\\ref{sec:varvalues}, this corresponds precisely to the scenarios when the value of the expression is equal to a positive integer. \n\n\\section{While Statements}\n\nIn Laconic, a \\texttt{while} statement has the following form: \\\\ \\\\ \n\\texttt{while (}[\\emph{expression}]\\texttt{)\\{}[\\emph{function body}]\\texttt{\\}} \\\\\n\nThis is compiled down to a section of TMD code that looks like the following: \\\\ \\\\\n\\texttt{WHILE_TEST_}$i$\\texttt{:} [Sequence of commands designed to load the value of \\emph{expression} into the holder variable $h$] \\\\\n\\texttt{[}$h$\\texttt{] E (WHILE_STATE_}$i$\\texttt{_FALSE); 1 ()} \\\\ \n{}[Compiled version of \\emph{function body}] \\\\\n\\texttt{[}$h$\\texttt{] E (WHILE_TEST_}$i$\\texttt{); 1 (WHILE_TEST_}$i$\\texttt{)} \\\\\n\\texttt{WHILE_STATE_}$i$\\texttt{_FALSE:} [Continuation of the program] \n\nIn a Laconic program, the function body is evaluated repeatedly until the expression evaluates to a non-positive integer (and the expression is forbidden from being a \\texttt{list} or \\texttt{list2}). As can be seen above, in the compiled TMD, the function body is evaluated repeatedly until the home-position tape symbol in the result of the expression is an \\texttt{E}. As can be seen from Section~\\ref{sec:varvalues}, this corresponds precisely to the scenarios when the value of the expression is equal to a non-positive integer.\n\n\\section{Function and Variable Compilation} \n\nEach Laconic function defined within a Laconic program is given its own corresponding TMD function file. There is additionally a TMD function created named \\texttt{main}, which is given its own function file and contains all of the logic that exists in the Laconic program outside of function definitions. The function \\texttt{main} is called first in the TMD program, and is accordingly put at the top of the \\texttt{functions.tff} file. \\\\\n\nThe input line of \\texttt{main.tmd} contains every variable declared in the Laconic file, along with every holder variable needed in \\texttt{main} or in any of the other functions defined in the Laconic program. \\\\\n\nOnce \\texttt{main} is fully converted to TMD as described in the sections above, we find all of \\texttt{main}'s ``dependencies'' on other functions, by going through each function called by \\texttt{main}. For each of those functions, we include them in \\texttt{functions.tff}, and we recursively transform them to TMD as above if necessary, and import their dependencies as well. 
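In other words, the set of functions included in \\texttt{functions.tff} is exactly the transitive closure of \\texttt{main} under the calls relation. 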
Some of these functions may come directly from the Laconic Standard Library, in which case they are copied to the target directory; others will be functions defined within the Laconic program, in which case they need to be generated (using the conversion process as described in the sections above) and moved into the target directory.\n\nThe functions in \\texttt{functions.tff} are ordered from top to bottom in decreasing order of how many times each is called, with more frequently-called functions towards the top of the file. (Of course, \\texttt{main} is the top function in the list.)\n\n\\end{document}\n", "meta": {"hexsha": "0f11851e14dce81da785dfeedd5dadff0569e807", "size": 13280, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/docs/laconic_to_tmd.tex", "max_stars_repo_name": "ricsonc/parsimony", "max_stars_repo_head_hexsha": "37cbead5421f546b2f687c1a916fc50ad21f417d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/docs/laconic_to_tmd.tex", "max_issues_repo_name": "ricsonc/parsimony", "max_issues_repo_head_hexsha": "37cbead5421f546b2f687c1a916fc50ad21f417d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/docs/laconic_to_tmd.tex", "max_forks_repo_name": "ricsonc/parsimony", "max_forks_repo_head_hexsha": "37cbead5421f546b2f687c1a916fc50ad21f417d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.7837837838, "max_line_length": 715, "alphanum_fraction": 0.7336596386, "num_tokens": 3820, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5610286717563525}}
{"text": "\\chapter{\\padic power series}\n\t\\section{Elementary functions}\n\t\tIn many of the following reasonings we'll use the next proposition, which will give us the possibility to manipulate formal power series, knowing their behaviour in some neighbourhood of $0$.\n\t\t\\begin{prop}\n\t\t\t\\label{prop:formal-series}\n\t\t\tLet $f(X_1, \\dots, X_n) \\in \\C\\llbracket X_1, \\dots, X_n \\rrbracket$ be a power series and  let $\\epsilon > 0$ such that $f$ is absolutely convergent on $[-\\epsilon, \\epsilon]^n$ and $f(x_1, \\dots, x_n) = 0$ for every $x_i \\in [-\\epsilon, \\epsilon]$. Then $f \\equiv 0$, i.e. all terms of $f$ vanishes.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tWe prove the proposition by induction on $n$.\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item $n=1$: let \n\t\t\t\t\\[\n\t\t\t\t\tf(X) = \\sum_{i=0}^{+\\infty} a_iX^i.\n\t\t\t\t\\]\n\t\t\t\tObviously $f(0) = a_0 = 0$ so we can write $f(X) = X \\cdot(a_1 + a_2X + \\dots) =: X \\cdot f_1(X)$. Now $f_1(X) \\in \\C\\llbracket X \\rrbracket$ vanishes for every $x \\in [-\\epsilon, \\epsilon] \\setminus \\{0\\}$. It is well known from complex analysis that a formal power series in $\\C$ defines a holomorphic function where it converges (so, in particular, it's continuous). We then obtain that $f_1$ is continuous so $f_1(0) = 0$, i.e. $a_1 = 0$. Then we can write $f(X) = X^2 \\cdot (a_2 + a_3X + \\dots) =: X^2 \\cdot f_2(X)$, where $f_2(x) = 0$ for every $x \\in [-\\epsilon, \\epsilon] \\setminus \\{0\\}$. Iterating this process we obtain $a_n = 0$ for each $n \\in \\N$ so $f \\equiv 0$.\n\t\t\t\t\\item $n>1$: let's assume that the thesis holds for every $i < n$ and let's prove it for $n$. For brevity, let $Y = (X_1, \\dots, X_{n-1})$. Since $f$ is absolutely convergent we can write \n\t\t\t\t\\[\n\t\t\t\t\tf(Y, X_n) = \\sum_{i=0}^{+\\infty} g_i(Y)X_n^i, \\qquad g_i(Y) \\in \\C\\llbracket Y \\rrbracket.\n\t\t\t\t\\]\n\t\t\t\tFor $x_n = 0$ we have $f(y, 0) = g_0(y) = 0$ for $y \\in [-\\epsilon, \\epsilon]^{n-1}$. Then, by induction, $g_0 \\equiv 0$ so\n\t\t\t\t\\[\n\t\t\t\t\tf(Y, X_n) = X_n \\cdot (g_1(Y) + g_2(Y)X_n + \\dots) =: X_n \\cdot f_1(Y, X_n)\n\t\t\t\t\\]\n\t\t\t\tand by hypothesis $f_1(y, x_n) = 0$ for every $y \\in [-\\epsilon, \\epsilon]^{n-1}, x_n \\in [-\\epsilon, \\epsilon] \\setminus \\{0\\}$. Clearly, fixed $y \\in [-\\epsilon, \\epsilon]^{n-1}$, the function $X \\mapsto f_1(y, X)$ is a continuous function so $f_1(y, 0) = \\lim_{x \\to 0} f_1(y, x) = 0$. We have obtained $0 \\equiv f_1(Y, 0) = g_1(Y)$ so, by inductive hypothesis, $g_1(Y) \\equiv 0$ and we can write\n\t\t\t\t\\[\n\t\t\t\t\tf(Y, X_n) = X_n^2 \\cdot (g_2(Y) + g_3(Y)X_n + \\dots) =: X_n^2 \\cdot f_2(Y, X_n)\n\t\t\t\t\\]\n\t\t\t\twhere $f_2(y, x_n) = 0$ if $y \\in [-\\epsilon, \\epsilon]^{n-1}, x_n \\in [-\\epsilon, \\epsilon] \\setminus \\{0\\}$. Iterating this process we obtain $g_n(Y) \\equiv 0$ for every $n \\in \\N$, i.e. $f \\equiv 0$.\n\t\t\t\\end{itemize}\n\t\t\\end{proof}\n\t\tAnother important lemma about \\padic power series is the Dwork's lemma. It expresses an important phenomenon in \\padic analysis: if we know $F(X^p)/(F(X)^p)$ then we also know something about $F$. This ratio represents how far off $F$ is from commuting with the $p$-power map, which is a very important map also in different contexts (e.g. Frobenius morphism for characteristic $p$ fields).\n\t\t\\begin{lemma}[Dwork's lemma]\n\t\t\t\\label{lemma:dwork}\n\t\t\tLet $F(X) \\in 1 + X\\Qp\\ser{X}$. 
Then $F(X) \\in 1 + X\\Zp\\ser{X}$ if and only if $\\tfrac{F(X^p)}{F(X)^p} \\in 1 + pX\\Zp\\ser{X}$.\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\t\tIf $F(X) \\in 1 + X\\Zp\\ser{X}$ then, since $(a + b)^p \\equiv a^p + b^p \\mod p$ and $a^p \\equiv a \\mod p$ if $a \\in \\Zp$, we have\n\t\t\t\\[\n\t\t\tF(X)^p = F(X^p) + pG(X) \\qquad \\text{for some } G(X) \\in X\\Zp\\ser{X}.\n\t\t\t\\]\n\t\t\tThen\n\t\t\t\\[\n\t\t\t\\frac{F(X^p)}{F(X)^p} = 1 - p\\cdot\\frac{G(X)}{F(X)^p} \\in 1 + pX\\Zp\\ser{X},\n\t\t\t\\]\n\t\t\tbecause $F(X)^p \\in 1 + X\\Zp\\ser{X}$ so it can be inverted.\\newline\n\t\t\tFor the other implication let\n\t\t\t$F(X) = \\sum a_iX^i$; by hypothesis we know that there exists $G(X) = \\sum b_iX^i$ such that $G(X) \\in 1 + pX\\Zp\\ser{X}$ and \n\t\t\t\\[\n\t\t\tF(X^p) = F(X)^p\\cdot G(X).\n\t\t\t\\]\n\t\t\tWe'll prove by induction that $a_i \\in \\Zp$. By assumption $F(X) \\in 1 + X\\Qp\\ser{X}$ so $a_0 = 1$. Let's now suppose that $a_i \\in \\Zp$ for every $i < n$. Looking at the coefficients of $X^n$ on both sides of the above equation we obtain\n\t\t\t\\begin{gather*}\n\t\t\t\t\\text{coefficient of $X^n$ in } \\left(\\sum_{i=0}^n a_iX^i\\right)^p\\cdot\\left(1 + \\sum_{i=1}^n b_iX^i\\right) = \n\t\t\t\t\\begin{cases}\n\t\t\t\t\ta_{n/p}, & \\text{if $p$ divides $n$;}\\\\\n\t\t\t\t\t0, & \\text{otherwise.}\n\t\t\t\t\\end{cases}\n\t\t\t\\end{gather*}\n\t\t\tExpanding the expression for the coefficient of $X^n$ on the left and subtracting $a_{n/p}$ (recall that $a_{n/p}^p \\equiv a_{n/p} \\mod p$) we notice that the resulting expression consists of $pa_n$ added to some terms in $p\\Zp$ so we can conclude that $pa_n \\in p\\Zp$, i.e. $a_n \\in \\Zp$. (To see why this is true it can be convenient to recall the formula $(x_1 + \\dots + x_n)^m = \\sum_{i_1 + \\dots + i_n = m} \\binom{m}{i_1, \\dots, i_n} x_1^{i_1}\\dots x_n^{i_n}$).\n\t\t\\end{proof}\n\t\n\t\tWe'll also prove here a technical lemma, which we will use to study the \\padic logarithm.\n\t\t\\begin{lemma}\n\t\t\t\\label{exercise:7-p.74}\n\t\t\tLet $a$ be a primitive $m$-th root of $1$ in $\\Qpa$, with $m > 1$. Then\n\t\t\t\\begin{enumerate}[label=(\\roman*)]\n\t\t\t\t\\item if $m = p^n$ for some $n \\geq 1$ then $\\pabs{a-1} = p^{-1/\\phi(p^n)}$;\n\t\t\t\t\\item otherwise, $\\pabs{a-1} = 1$.\n\t\t\t\\end{enumerate}\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\t\tLet $\\Phi_n(X) \\in \\Zp[X]$ be the $n$-th cyclotomic polynomial. \\newline\n\t\t\t\\textit{(i)} To prove this case we'll do an induction on $n \\geq 1$ (for $m = p^0 = 1$ we would have $a = 1$ and $\\pabs{a-1} = 0$, which is why that case is excluded). If $n=1$ then $a$ is a primitive $p$-th root of $1$, i.e. $a^p = 1, a \\neq 1$, so $\\Phi_p(a) = 0$. By the Eisenstein criterion (\\cref{prop:eisenstein}) it is easy to prove that $\\Phi_p(X)$ is irreducible over $\\Qp$. We then consider \n\t\t\t\\[\n\t\t\t\tf(X) := \\Phi_p(X + 1) = \\frac{(X + 1)^p - 1}{X} =  X^{p-1} + \\binom{p}{p-1} X^{p-2} + \\dots + \\binom{p}{2}X + p.\n\t\t\t\\]\n\t\t\tClearly $f(X) \\in \\Qp[X]$ is irreducible and $f(a - 1) = 0$ so, recalling how we extended $\\pabs{\\ }$ to $\\Qpa$, we have\n\t\t\t\\[\n\t\t\t\t\\pabs{a - 1} = \\pabs{p}^{1/(p-1)} = p^{-1/(p-1)}\n\t\t\t\\]\n\t\t\twhich concludes the proof of the case $n=1$. Now, suppose we know the statement holds for $m=p^i$, $1 \\leq i < n$, and let's prove it also holds for $m=p^n$. 
If we consider the extension $K = \\Qp(a)$ with the usual notation ($A$ is the maximal subring and $M$ is its maximal ideal), it is easy to note that $\\pabs{a} = 1$ and\n\t\t\t\\[\n\t\t\t\ta^{p^n} = 1 \\implies a^{p^n} \\equiv 1 \\mod M\n\t\t\t\\]\n\t\t\tand since $A/M$ is a finite field of characteristic $p$ we obtain\n\t\t\t\\[\n\t\t\t\ta \\equiv 1 \\mod M\n\t\t\t\\]\n\t\t\twhich means exactly $a = 1 + b$ for some $b \\in K, \\pabs{b} < 1$. We recall the easy facts\n\t\t\t\\[\n\t\t\t\t\\deg \\Phi_{p^n}(X) = \\phi(p^n) = p^n - p^{n-1}, \\qquad \\Phi_{p^n}(X) = \\Phi_p \\left(X^{p^{n-1}}\\right)\n\t\t\t\\]\n\t\t\twhich imply $\\Phi_{p^n}(1) = \\Phi_p(1) = p$. Since $a$ is a primitive $p^n$-th root of $1$, every primitive $p^n$-th root of $1$ is of the form $a^j$ with $1 \\leq j < p^n$ and $p \\nmid j$, so\n\t\t\t\\[\n\t\t\t\t\\Phi_{p^n}(X) = \\prod_{1 \\leq j < p^n, p \\nmid j} (X - a^j).\n\t\t\t\\]\n\t\t\tEvaluating at $X = 1$ we obtain\n\t\t\t\\[\n\t\t\t\t\\pabs{p} = \\prod_{1 \\leq j < p^n, p \\nmid j} \\pabs{1 - a^j}.\n\t\t\t\\]\n\t\t\tUsing the fact $a \\equiv 1 \\mod M$ we can see that\n\t\t\t\\[\n\t\t\t\t\\frac{1 - a^j}{1 - a} = 1 + a + \\dots + a^{j-1} \\equiv j \\mod M\n\t\t\t\\]\n\t\t\tand if $p \\nmid j$ we obtain\n\t\t\t\\[\n\t\t\t\t\\pabs{\\frac{1 - a^j}{1 - a}} = 1\n\t\t\t\\]\n\t\t\tso $\\pabs{1 - a^j} = \\pabs{1 - a}$, which implies $\\pabs{1 - a} = \\pabs{p}^{1/\\phi(p^n)}$.\\newline\n\t\t\t\\textit{(ii)} First of all let's consider the basic case $p \\nmid m$. Then, $a -1$ is a root of the polynomial\n\t\t\t\\[\n\t\t\t\tf(X) = \\Phi_m(X+1) = X^{\\phi(m)} + \\dots + \\Phi_m(1) = g_1(X) \\dotsm g_r(X)\n\t\t\t\\]\n\t\t\twhere $g_i(X) \\in \\Qp[X]$ is an irreducible factor of $f$, and we can assume that every $g_i(X)$ is monic and has coefficients in $\\Zp$. The constant term of $f$ is $\\Phi_m(1)$, which equals a prime $q$ when $m$ is a power of $q$ and equals $1$ otherwise; since $p \\nmid m$, in both cases $\\pabs{\\Phi_m(1)} = 1$. Hence, if $b_i$ is the constant term of $g_i(X)$, we have\n\t\t\t\\[\n\t\t\t\t\\pabs{b_1b_2\\dotsm b_r} = \\pabs{\\Phi_m(1)} = 1, b_i \\in \\Zp  \\implies \\pabs{b_i} = 1 \\quad \\forall \\, i=1,\\dots,r.\n\t\t\t\\]\n\t\t\tSince $f(a-1)=0$ there is at least one $g_i(X)$ such that $g_i(a - 1)=0$ so $\\lambda_{\\Qp}(a - 1) = g_i(X)$. Then\n\t\t\t\\[\n\t\t\t\t\\pabs{a-1}^{\\deg g_i(X)} = \\pabs{b_i} = 1 \\implies \\pabs{a-1} = 1.\n\t\t\t\\]\n\t\t\tNow let $m = p^nq$ with $n \\geq 1$ and $p \\nmid q$ (clearly $q \\in \\N_{>1}$) and suppose the statement holds for every $m' \\in \\N$ such that $m'$ is not a power of $p$ and $p^n \\nmid m'$. Then, if $a$ is a primitive $m$-th root of $1$, $a^p$ is a primitive $(p^{n-1}q)$-th root of $1$ and, by inductive hypothesis, we know\n\t\t\t\\[\n\t\t\t\t\\pabs{a - 1}\\cdot\\pabs{a^{p-1} + a^{p-2} + \\dots + a + 1} = \\pabs{a^p - 1} = 1.\n\t\t\t\\]\n\t\t\tSince $\\pabs{a} = 1$ we have\n\t\t\t\\begin{gather*}\n\t\t\t\t\\pabs{a-1} \\leq 1, \\qquad \\pabs{a^{p-1} + \\dots + 1} \\leq 1 \\\\\n\t\t\t\t\\implies \\pabs{a-1} = 1\n\t\t\t\\end{gather*}\n\t\t\twhich proves the statement.\n\t\t\\end{proof}\n\t\tWe recall that in an ultrametric space, like $(\\Cp, \\pabs{\\ })$, a sequence is Cauchy if and only if the difference between adjacent terms tends to $0$, and if the space is also complete, an infinite series converges if and only if its general term tends to $0$ (see \\cref{lemma:cauchy-sequence-ultrametric} and \\cref{prop:summable_families}). 
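For instance, in $\\Qp$ the geometric series $\\sum_{n=0}^{+\\infty} p^n$ converges, since $\\pabs{p^n} = p^{-n} \\to 0$, and its sum is $1/(1-p)$; on the other hand the harmonic series $\\sum_{n \\geq 1} 1/n$ diverges in $\\Qp$, because $\\pabs{1/n} = 1$ whenever $p \\nmid n$, so its general term does not tend to $0$. 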
Now we are ready to define analytic functions on $\\Cp$ and prove some of their basic properties.\n\t\t\\begin{defn}\n\t\t\tA function $f$ is an \\emph{analytic function} if\n\t\t\t\\[\n\t\t\t\tf(X) = \\sum_{n=0}^{+\\infty} a_nX^n, \\qquad a_n \\in \\Cp.\n\t\t\t\\]\n\t\t\tWe can define $f(x)$ for every $x \\in \\Cp$ such that the series converges, i.e. $\\pabs{a_nx^n} \\to 0$ as $n \\to +\\infty$. \n\t\t\\end{defn}\n\t\tLike in complex analysis, given an analytic function $f$, we can define its \\emph{radius of convergence}. Surprisingly we have the exact same formula as in classic analysis.\n\t\t\\begin{prop}\n\t\t\tLet $f(X) = \\sum_{n=0}^{+\\infty} a_nX^n$ be an analytic function. We can define its radius of convergence as\n\t\t\t\\[\n\t\t\t\tr := \\frac{1}{\\limsup \\pabs{a_n}^{1/n}},\n\t\t\t\\]\n\t\t\twith the usual meaning: $f$ converges if $\\pabs{x} < r$ and diverges if $\\pabs{x} > r$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tWe recall the definition of $\\limsup$: $1/r$ is the least real number such that for any $C > 1/r$ there are only finitely many $\\pabs{a_n}^{1/n} > C$. \\newline\n\t\t\tLet's first consider the case $\\pabs{x} < r$: we can write $\\pabs{x} = (1 - \\epsilon)r$ for some $\\epsilon > 0$. We have\n\t\t\t\\[\n\t\t\t\t\\pabs{a_nx^n} = \\left(r\\pabs{a_n}^{1/n}\\right)^n\\cdot (1 - \\epsilon)^n\n\t\t\t\\]\n\t\t\tand, if $n$ is big enough, by definition of $r$ we have\n\t\t\t\\[\n\t\t\t\t\\pabs{a_n}^{1/n} \\leq \\frac{1}{r - \\frac{1}{2}\\epsilon r}.\n\t\t\t\\] \n\t\t\tThen\n\t\t\t\\[\n\t\t\t\t\\lim_{n \\to +\\infty} \\pabs{a_nx^n} \\leq \\lim_{n \\to +\\infty} \\left( \\frac{(1 - \\epsilon)r}{(1 - \\frac{1}{2}\\epsilon)r} \\right)^n = \\lim_{n \\to +\\infty} \\left( \\frac{1 - \\epsilon}{1 - \\frac{1}{2}\\epsilon} \\right)^n = 0,\n\t\t\t\\]\n\t\t\twhich gives us the desired convergence. \\newline\n\t\t\tLet's now prove that if $\\pabs{x} > r$ (and $r < +\\infty$) the series diverges. Let's choose an element $\\pabs{x} > r$ and an $\\epsilon > 0$ such that $\\pabs{x} \\geq (1 + \\epsilon)r$. By definition of $\\limsup$ we can find a subsequence $(a_{n_k})_k$ such that $\\pabs{a_{n_k}}^{1/n_k} \\geq 1/(r + \\tfrac{1}{2}\\epsilon r)$. Then\n\t\t\t\\[\n\t\t\t\t\\lim_{k \\to +\\infty} \\pabs{a_{n_k}x^{n_k}} \\geq \\lim_{k \\to +\\infty} \\left( \\frac{1 + \\epsilon}{1 +\\frac{1}{2} \\epsilon} \\right) ^ {n_k} = +\\infty,\n\t\t\t\\]\n\t\t\twhich implies that $f$ cannot converge. \\newline\n\t\t\tFinally, if $r = +\\infty$, i.e. $\\lim_{n \\to +\\infty} \\pabs{a_n}^{1/n} = 0$, chosen an element $x \\in \\Cp^\\times$ ($x = 0$ is trivial) we have that, if $n$ is big enough, $\\pabs{a_n} \\leq 1/(2^n\\pabs{x}^n)$ so\n\t\t\t\\[\n\t\t\t\t\\lim_{n \\to +\\infty} \\pabs{a_nx^n} \\leq \\lim_{n \\to +\\infty} 2^{-n} = 0\n\t\t\t\\]\n\t\t\tand $f$ converges everywhere.\n\t\t\\end{proof}\n\t\tThis proposition tells us nothing about the case $\\pabs{x} = r$. In classical analysis there isn't a simple answer: for example the well known function \n\t\t\\[\n\t\t\t\\log(1 + X) = \\sum_{n=1}^{+\\infty} (-1)^{n+1} \\frac{X^n}{n}\n\t\t\\]\n\t\thas radius of convergence $r = 1$ on $\\C$. When $\\abs{x} = 1$ this series can diverge (for example if $x = -1$ we obtain the divergent series $- \\sum 1/n$) or converge (if $x = 1$ we obtain $\\sum (-1)^{n+1}/n$, which converges by Leibniz criterion). This happens because, over $\\R$, there are conditionally convergent series that aren't absolutely convergent. 
In \\padic analysis this cannot happen because convergence only depends on $\\pabs{x}$: a given analytic function behaves exactly in the same way for every $\\pabs{x} = r$. We will study more deeply this formal series in $\\Cp$ when we'll talk about \\padic logarithm.\\newline\n\t\tLet's prove two basic facts about analytic functions. For brevity we'll adopt the notation $D_a(r) := B_{\\leq r}(a)$ and $D_a(r^-) := B_{<r}(a)$, where we consider the balls in $\\Cp$. We'll also omit the subscript $a$ if $a=0$, for example $D(r) = D_0(r) = B_{\\leq r}(0)$.\n\t\t\\begin{prop}\n\t\t\t\\label{prop:convergence-power-series-Zp}\n\t\t\tEvery $f(X) \\in \\Zp \\llbracket X \\rrbracket$ converges in $D(1^-)$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tLet $f(X) = \\sum_{n=0}^{+\\infty} a_nX^n \\in \\Zp \\llbracket X \\rrbracket$ and let $x \\in D(1^-)$. Then \n\t\t\t\\begin{gather*}\n\t\t\t\t\\pabs{x} < 1, \\pabs{a_n} \\leq 1 \\,\\forall\\, n\\in\\N \\\\\n\t\t\t\t\\implies \\lim_{n \\to +\\infty} \\pabs{a_nx^n} \\leq \\lim_{n \\to +\\infty} \\pabs{x}^n = 0. \\qedhere\n\t\t\t\\end{gather*}\n\t\t\\end{proof}\n\t\t\\begin{prop}\n\t\t\t\\label{prop:continuity-analitic-function}\n\t\t\tEvery $f(X) = \\sum_{n=0}^{+\\infty} a_nX^n \\in \\Cp \\llbracket X \\rrbracket$ which converges in a disc $D = D(r)$, or $D(r^-)$, is continuous on $D$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tLet's first prove continuity in $0$. Let $x \\in D$ such that $\\pabs{x} < \\delta < r$ ($\\delta > 0$ will be chosen later); then, by continuity of absolute value, we have\n\t\t\t\\[\n\t\t\t\t\\pabs{f(x) - f(0)} = \\pabs{\\sum_{n=1}^{+\\infty} a_nx^n} \\leq \\max_{n \\in \\N^{\\times}} \\,\\pabs{a_nx^n} \\leq \\max_{ n \\in \\N^{\\times}} \\, \\left( \\pabs{a_n}\\cdot\\delta^n\\right).\n\t\t\t\\]\n\t\t\tClearly, since $f$ converges on $D$, we must have $1/r' > \\limsup \\pabs{a_n}^{1/n}$ where $\\delta < r' < r$ so, for a large enough $N$, $\\pabs{a_n} < r'^{-n}$ if $n > N$. Let's introduce\n\t\t\t\\[\n\t\t\t\tC(\\delta) := \\max_{1 \\leq n \\leq N}\\,\\left(\\pabs{a_n} \\cdot \\delta^n \\right);\n\t\t\t\\]\n\t\t\tit's obvious that $C(\\delta) \\to 0^+$ as $\\delta \\to 0^+$. Instead, if $n > N$, we have\n\t\t\t\\[\n\t\t\t\t\\pabs{a_n}\\cdot\\delta^n \\leq \\left( \\frac{\\delta}{r'}\\right)^n \\leq \\left( \\frac{\\delta}{r'}\\right)^N,\n\t\t\t\\]\n\t\t\tsince $\\delta/r' < 1$.\n\t\t  \tThen\n\t\t\t\\[\n\t\t\t\t\\pabs{f(x) - f(0)} \\leq \\max\\left\\{C(\\delta), \\left(\\frac{\\delta}{r'}\\right)^N\\right\\}\n\t\t\t\\]\n\t\t\tand we can make the right member as small as we want by choosing smaller $\\delta$. This proves continuity in $0$. \\newline\n\t\t\tLet's now prove continuity in $0 \\neq x \\in D$ and consider $y \\in D$ such that $\\pabs{x - y} < \\delta$, where $\\delta < \\pabs{x}$ will be chosen later, as before. Then, by the isosceles triangle principle, $\\pabs{x} = \\pabs{y}$. 
We have\n\t\t\t\\begin{gather*}\n\t\t\t\t\\pabs{f(x) - f(y)} = \\pabs{\\sum_{n=1}^{+\\infty} (a_nx^n - a_ny^n)} \\leq \\max_{n \\in \\N^{\\times}}\\,\\left(\\pabs{a_n}\\cdot\\pabs{x^n - y^n}\\right)  \\leq \\\\\n\t\t\t\t\\leq \\max_{n \\in \\N^{\\times}}\\,\\left(\\pabs{a_n}\\cdot\\pabs{(x - y)(x^{n-1} + x^{n-2}y + \\dots + xy^{n-2} + y^{n-1})} \\right)\n\t\t\t\\end{gather*}\n\t\t\tbut $\\pabs{x^{n-1} + x^{n-2}y + \\dots + xy^{n-2} + y^{n-1}} \\leq \\max_{1 \\leq i \\leq n}\\, \\pabs{x^{n-i}y^{i-1}} = \\pabs{x}^{n-1}$ hence\n\t\t\t\\[\n\t\t\t\t\\pabs{f(x) - f(y)} \\leq \\max_{n \\in \\N^{\\times}}\\,\\left(\\pabs{x - y}\\cdot\\pabs{a_n}\\pabs{x}^{n-1} \\right) < \\frac{\\delta}{\\pabs{x}} \\cdot \\max_{n \\in \\N^{\\times}}\\,\\left(\\pabs{a_n}\\cdot\\pabs{x}^n \\right).\n\t\t\t\\]\n\t\t\tWe know that $\\lim_{n \\to +\\infty} \\pabs{a_n}\\pabs{x}^n = 0$ so as $\\delta \\to 0^+$ we have $\\pabs{f(x) - f(y)} \\to 0$, which proves the statement.\n\t\t\\end{proof}\n\t\t\\begin{defn}\n\t\t\tThe (partial) function $\\log_p(1 + X)\\colon \\Cp \\to \\Cp$ defined by\n\t\t\t\\[\n\t\t\t\t\\log_p(1 + x) := \\sum_{n=1}^{+\\infty} (-1)^{n+1} \\frac{x^n}{n}\n\t\t\t\\]\n\t\t\tis the \\emph{\\padic logarithm}.\n\t\t\\end{defn}\n\t\t\\begin{prop}\n\t\t\tThe function $\\log_p(1 + X)$ converges on $D(1^-)$ and diverges elsewhere.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tIt's immediate to verify that the series converges if $\\pabs{x} < 1$ and diverges if $\\pabs{x} \\geq 1$. In-fact $\\pabs{a_n} = p^{\\ord n}$ so $\\lim_{n \\to +\\infty} \\pabs{a_n}^{1/n} = \\lim_{n \\to +\\infty} p^{(\\ord n)/n} = 1$ and we obtain the desired radius of convergence. Lastly, if $\\pabs{x} = 1$, we have $\\pabs{a_nx^n} = p^{\\ord n} \\geq 1$ so the series diverges.\n\t\t\\end{proof}\n\t\tFrom now on, unless otherwise specified, we'll use $\\log_p$ meaning the \\padic logarithm we have just defined. Let's now prove the basic property of logarithms, which also holds in \\padic environment.\n\t\t\\begin{prop}\n\t\t\t\\label{prop:padic-logarithm-product-sum}\n\t\t\tThe logarithm of a product is the sum of the logarithms. More precisely, if $x, y \\in D(1^-)$ then $\\log_p \\left[ (1 + x)(1+y)\\right] = \\log_p(1 + x) + \\log_p(1 + y)$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tFirst of all let's observe that $x, y \\in D(1^-) \\implies x + y + xy \\in D(1^-)$, so we can compute the logarithms. By definition\n\t\t\t\\[\n\t\t\t\t\\log_p\\left[(1 + x)(1 + y)\\right] = \\sum_{n=1}^{+\\infty} (-1)^{n+1} \\frac{(x + y + xy)^n}{n}.\n\t\t\t\\]\n\t\t\tIf we work in $\\R$ with the usual metric we already know that $\\log\\left[ (1+x)(1+y)\\right] = \\log(1 + x) + \\log(1+y)$ and, using the Taylor expansion of $\\log$, we have\n\t\t\t\\[\n\t\t\t\t\\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{x^n}{n} + \\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{y^n}{n} = \\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{(x + y + xy)^n}{n}\n\t\t\t\\]\n\t\t\tfor every $x, y \\in \\left[-\\tfrac{1}{2}, \\tfrac{1}{2}\\right]$. Thanks to \\cref{prop:formal-series} we infer that this relation also holds in the ring of formal power series in two variables $\\Q\\llbracket X,Y \\rrbracket$. 
Then, using the fact that if a series converges in $\\Cp$ its terms can be rearranged in any order without changing the sum, we can write\n\t\t\t\\begin{gather*}\n\t\t\t\t\\log_p\\left[(1 + x)(1 + y)\\right] = \\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{(x + y + xy)^n}{n} =\\\\\n\t\t\t\t= \\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{x^n}{n} + \\sum_{n = 1}^{+\\infty} (-1)^{n+1} \\frac{y^n}{n} = \\log_p(1 + x) + \\log_p(1 + y)\n\t\t\t\\end{gather*}\n\t\t\twhich concludes the proof.\n\t\t\\end{proof}\n\t\t\\begin{corollary}\n\t\t\t\\label{corollary:log-root-of-1}\n\t\t\tIf $1 + x \\in \\Cp$ is a root of $1$ and $\\pabs{x} < 1$, then $\\log_p(1 + x) = 0$. In particular if $1+x$ is a $p^m$-th root of $1$ then $\\log_p(1 + x) = 0$.\n\t\t\\end{corollary}\n\t\t\\begin{proof}\n\t\t\tLet's first observe that we can actually compute the logarithm of $1 + x$ since by hypothesis $\\pabs{x} < 1$ (if $1+x$ is a $p^m$-th root of $1$ then automatically $\\pabs{x} < 1$ by \\cref{exercise:7-p.74}). Now, if $k \\geq 1$ is such that $(1 + x)^k = 1$, we have\n\t\t\t\\[\n\t\t\t\tk\\cdot\\log_p(1 + x) = \\log_p\\left[(1 + x)^k\\right] = \\log_p(1) = 0,\n\t\t\t\\]\n\t\t\tand dividing by $k$ gives $\\log_p(1 + x) = 0$, which concludes the proof.\n\t\t\\end{proof}\n\t\tWe have obtained a function, defined on a particular disc of $\\Cp$, using the Taylor expansion of the classical $\\log(1 + X)$. Now we would like to define the exponential function, beginning from the classical $\\exp(x) = \\sum_{n=0}^{+\\infty} x^n/n!$, and study its relation with the logarithm.\n\t\t\\begin{defn}\n\t\t\tThe (partial) function $\\exp_p(X)\\colon \\Cp \\to \\Cp$ defined by\n\t\t\t\\[\n\t\t\t\t\\exp_p(x) := \\sum_{n=0}^{+\\infty} \\frac{x^n}{n!}\n\t\t\t\\]\n\t\t\tis the \\emph{\\padic exponential}.\n\t\t\\end{defn}\n \t\tLooking at this series we immediately see that, unlike in the classical case where the $n!$ in the denominator makes sure the series converges for every $x \\in \\C$, there can be some problems. In fact if $n!$ is divisible by a high power of $p$, its reciprocal will have a big absolute value. More precisely, we can compute exactly $\\pabs{1/n!} = p^{\\ord(n!)}$.\n\t\t\\begin{lemma}\n\t\t\t\\label{exercise:14-p.7}\n\t\t\tGiven $n \\in \\N$,\n\t\t\t\\[\n\t\t\t\t\\ord(n!) = \\frac{n-S_n}{p-1}\n\t\t\t\\] \n\t\t\twhere $S_n$ is the sum of the digits of $n$ in base $p$.\n\t\t\\end{lemma}\n\t\t\\begin{proof}\n\t\t\tLet's write $n$ in base $p$:\n\t\t\t\\[\n\t\t\t\tn = a_0 + a_1p + \\dots + a_rp^r, \\qquad a_i \\in \\{0, \\dots, p-1\\},\\, a_r \\neq 0.\n\t\t\t\\]\n\t\t\tThen $S_n = a_0 + a_1 + \\dots + a_r$. By definition, $\\ord(n!)$ is the maximum $t$ such that $p^t \\mid (n!)$. We can use this little formula (due to Legendre) to compute it:\n\t\t\t\\[\n\t\t\t\t\\ord(n!) = \\sum_{k=1}^{+\\infty} \\left[\\frac{n}{p^k} \\right] = \\sum_{k=1}^r \\left[\\frac{n}{p^k} \\right]\n\t\t\t\\]\n\t\t\twhere $[x]$ is the integer part of $x \\in \\R$, i.e. the only integer such that $[x] \\leq x < [x]+1$. 
Using the representation of $n$ in base $p$ we have that $\\left[n / p^k \\right] = 0$ if $k > r$ and, otherwise,\n\t\t\t\\[\n\t\t\t\t\\left[\\frac{n}{p^k} \\right] = \\frac{n - a_0 - \\dots - a_{k-1}p^{k-1}}{p^k}\n\t\t\t\\]\n\t\t\tso if we add them together we obtain\n\t\t\t\\[\n\t\t\t\t\\sum_{k=1}^r \\left[\\frac{n}{p^k} \\right] = \\sum_{k=1}^r \\frac{n - \\sum_{j=0}^{k-1} a_jp^j}{p^k}.\n\t\t\t\\] \n\t\t\tWith a little bit of computation, recalling that $1 + p + \\dots + p^{k-1} = \\frac{p^k - 1}{p-1}$, we obtain the desired formula.\n\t\t\\end{proof}\n\t\t\\begin{prop}\n\t\t\tThe function $\\exp_p(X)$ converges on $D(r_p^-)$ and diverges elsewhere, where $r_p := p^{-1/(p-1)}$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tUsing \\cref{exercise:14-p.7} we obtain\n\t\t\t\\[\n\t\t\t\\pabs{1/n!} = p^{\\frac{n - S_n}{p - 1} }\n\t\t\t\\]\n\t\t\tand, recalling the formula for the radius of convergence $r = 1/(\\limsup \\pabs{a_n}^{1/n})$, we can write\n\t\t\t\\[\n\t\t\t\\ord r = -\\ord \\left( \\limsup p^{-(\\ord a_n)/n} \\right) = -\\ord \\left( p^{-\\liminf (\\ord a_n)/n} \\right) = \\liminf \\left( \\frac{\\ord a_n}{n} \\right)\n\t\t\t\\]\n\t\t\tso, in our case where $a_n = 1/n!$, we obtain \n\t\t\t\\[\n\t\t\t\\ord r = \\liminf \\left(-\\frac{n - S_n}{n(p-1)} \\right).\n\t\t\t\\]\n\t\t\tWe can use the easy upper bound $S_n \\leq (p-1) \\cdot \\ord n$ to prove \n\t\t\t\\[\n\t\t\t\t\\lim_{n \\to +\\infty} \\frac{S_n - n}{n(p-1)} = -\\frac{1}{p-1}\n\t\t\t\\]\n\t\t\tso the exponential series $\\sum_{n=0}^{+\\infty} x^n/n$ converges if $\\pabs{x} < p^{-1/(p-1)} = r_p$ and diverges if $\\pabs{x} > p^{-1/(p-1)} = r_p$. If $\\pabs{x} = r_p$, i.e. $\\ord x = 1/(p-1)$, we have\n\t\t\t\\[\n\t\t\t\t\\ord (a_nx^n) = -\\frac{n-S_n}{p-1} + \\frac{n}{p-1} = \\frac{S_n}{p-1}\n\t\t\t\\]\n\t\t\tand, choosing $n = p^m$ so $S_n = 1$, we have $\\pabs{a_{p^m}x^{p^m}} = p^{-1/(p-1)} > 0$; we have found a subsequence of $(\\pabs{a_nx^n})_n$ which does not converge to zero so we conclude that if $\\pabs{x} =r_p$ the exponential series diverges.\n\t\t\\end{proof}\n\t\tWe immediately note that $D(r_p^-) \\subsetneq D(1^-)$, i.e. $\\exp_p$ converges in a smaller disc than $\\log_p$. We now prove that, like in the classical case, the \\padic exponential transforms sums into products.\n\t\t\\begin{prop}\n\t\t\tIf $x, y \\in D(r_p^-)$ then $\\exp_p(x + y) = \\exp_p(x) \\cdot \\exp_p(y)$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tLet's first observe that $x, y \\in D(r_p^-) \\implies x + y \\in D(r_p^-)$ so we can compute $\\exp_p(x + y)$. The rest of the proof is completely analogue to the proof of \\cref{prop:padic-logarithm-product-sum}, using the fact that $\\exp(x + y) = \\exp(x) \\cdot \\exp(y)$ if $x, y \\in \\R$ (which then can be translated to a relation between power series by \\cref{prop:formal-series}).\n\t\t\\end{proof}\n\t\tFinally we have all the tools we need to prove the relation between \\padic exponential and logarithm. \n\t\t\\begin{prop}\n\t\t\t\\label{prop:exp-and-log-inverse}\n\t\t\tThe functions $\\log_p$, defined by $x \\mapsto \\log_p(1 + (x-1))$, and $\\exp_p$ give mutually inverse isomorphisms between the multiplicative group $(D_1(r_p^-), \\cdot)$ and the additive group $(D(r_p^-), +)$. \n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tFirst of all let's observe that $\\exp_p\\colon D(r_p^-) \\to D_1(r_p^-)$ and $\\log_p\\colon D_1(r_p^-) \\to D(r_p^-)$ so that the proposition actually makes sense. 
To prove that $\\exp_p(x) \\in 1 + D(r_p^-) \\subset 1 + D(1^-)$ let's note that\n\t\t\t\\[\n\t\t\t\tx \\in D(r_p^-) \\implies \\ord\\left(\\frac{x^n}{n!}\\right) = n\\cdot\\ord(x) - \\ord(n!) > \\frac{n}{p-1} - \\frac{n - S_n}{p-1} = \\frac{S_n}{p-1} \\geq \\frac{1}{p-1}\n\t\t\t\\]\n\t\t\tso we have\n\t\t\t\\begin{gather*}\n\t\t\t\t\\ord(\\exp_p(x) - 1) = \\ord\\left( \\sum_{n=1}^{+\\infty} \\frac{x^n}{n!} \\right) \\geq \\min_{n \\geq 1} \\left\\{\\ord\\left(\\frac{x^n}{n!}\\right) \\right\\} > \\frac{1}{p-1} \\\\\n\t\t\t\t\\implies \\exp_p(x) \\in 1 + D(r_p^-).\n\t\t\t\\end{gather*}\n\t\t\tInstead, to prove that $\\log_p(1 + x) \\in D(r_p^-)$ if $x \\in D(r_p^-)$ let's observe that\n\t\t\t\\[\n\t\t\t\t\\ord\\left(\\frac{x^n}{n}\\right) - \\frac{1}{p-1} > \\frac{n}{p-1} - \\ord(n) - \\frac{1}{p-1} = \\frac{n-1}{p-1} - \\ord(n) =: f(n).\n\t\t\t\\]\n\t\t\tWe claim that $f$ has its minima at $n=1$ and $n=p$, where it's zero. To see why this is true let's first observe that we can just consider the case where $n = p^k$ for $k \\in \\N$ since if $n' = p^km$ with $p \\nmid m$ we have $f(n') \\geq f(n)$. It is then an easy calculation to verify that $f(p^{k+1}) \\geq f(p^k)$ for $k \\in \\N$. Thus we have\n\t\t\t\\[\n\t\t\t\t\\ord(\\log_p(1 + x)) = \\ord\\left(\\sum_{n=1}^{+\\infty} (-1)^{n+1}\\frac{x^n}{n}\\right) \\geq \\min_{n > 0} \\left\\{ \\ord\\left(\\frac{x^n}{n}\\right) \\right\\} > \\frac{1}{p-1}\n\t\t\t\\]\n\t\t\twhich means precisely $\\log_p(1 + x) \\in D(r_p^-)$. \\newline\n\t\t\tWe have already proved in some previous propositions that $\\exp_p\\colon (D(r_p^-),+) \\to (D_1(r_p^-), \\cdot)$ and $\\log_p\\colon (D_1(r_p^-), \\cdot) \\to (D(r_p^-), +)$ are group morphisms so now we have only to prove that they are mutually inverse.\\newline\n\t\t\tTo see that $\\log_p \\circ \\exp_p\\colon D(r_p^-) \\to D(r_p^-)$ is the identity function we compute\n\t\t\t\\[\n\t\t\t\t\\log_p(\\exp_p(x)) = \\sum_{n=1}^{+\\infty} (-1)^{n+1}\\frac{(\\exp_p(x) - 1)^n}{n} = \\sum_{n=1}^{+\\infty} (-1)^{n+1}\\frac{\\left(\\sum_{m=1}^{+\\infty} \\frac{x^m}{m!}\\right)^n}{n}.\n\t\t\t\\]\n\t\t\tSince if $x \\in \\R$ we have $\\log(\\exp(x)) = x$ we infer, by \\cref{prop:formal-series}, that the following formal identity  holds in $\\Q\\llbracket X\\rrbracket$:\n\t\t\t\\[\n\t\t\t\t\\sum_{n=1}^{+\\infty} (-1)^{n+1}\\frac{\\left(\\sum_{m=1}^{+\\infty} \\frac{X^m}{m!}\\right)^n}{n} = X\n\t\t\t\\]\n\t\t\twhich implies $\\log_p(\\exp_p(x)) = x$ for $x \\in D(r_p^-)$. The same exact reasoning can be also used to prove $\\exp_p(\\log_p(1 + x)) = 1 + x$ for $x \\in D(r_p^-)$.\n\t\t\\end{proof}\n\t\tThis proposition implies, in particular, that $\\log_p$ is injective on $D_1(r_p^-)$. 
It is easy to see that this is the biggest disc where this is true: in-fact if $\\zeta \\in \\Cp$ is a primitive $p$-th root of $1$ then, by \\cref{exercise:7-p.74}, $\\pabs{\\zeta - 1} = p^{-1/(p-1)} = r_p$ and $\\log_p(\\zeta) = 0 = \\log_p(1)$.\n\t\t\\begin{defn}\n\t\t\tThe (partial) functions $\\sin_p\\colon \\Cp \\to \\Cp$ and $\\cos_p\\colon \\Cp \\to \\Cp$ defined by \n\t\t\t\\begin{gather*}\n\t\t\t\t\\sin_p(x) := \\sum_{n=0}^{+\\infty} (-1)^n \\frac{x^{2n+1}}{(2n+1)!}\\\\\n\t\t\t\t\\cos_p(x) := \\sum_{n=0}^{+\\infty} (-1)^n \\frac{x^{2n}}{(2n)!}\n\t\t\t\\end{gather*}\n\t\t\tare the \\padic sine and the \\padic cosine.\n\t\t\\end{defn}\n\t\tIt's easy to prove that $\\sin_p$ and $\\cos_p$ are defined on $D(r_p^-)$.\n\t\t\n\t\tAnother important function in classical analysis is the binomial expansion \n\t\t\\[\n\t\t\tB_a(x) = \\sum_{n=0}^{+\\infty} \\binom{a}{k} x^n\n\t\t\\]\n\t\twhere $x, a \\in \\C$ and we used the generalized binomial coefficient defined by:\n\t\t\\begin{gather*}\n\t\t\t\\binom{a}{k} :=\n\t\t\t\\begin{cases*}\n\t\t\t\t1, & \\text{if $k = 0$} \\\\\n\t\t\t\t\\frac{a(a-1)\\dots (a - k + 1)}{k!}, & \\text{otherwise} \\\\\n\t\t\t\\end{cases*}.\n\t\t\\end{gather*}\n\t \tThis is exactly the MacLaurin series of $f(x) = (1 + x)^a$. Using ratio test it can be proved that for any $a \\in \\C$ this series converges if $\\abs{x} < 1$ and, unless $a \\in \\N$, diverges if $\\abs{x} > 1$. Its behaviour when $\\abs{x} = 1$ is a little more complicated and depends on the value of $a$. We'll now try to define an analogue function in the \\padic environment.\n\t\t\\begin{defn}\n\t\t\tFixed $a \\in \\Cp$, the (partial) function $B_{a, p}(X)\\colon \\Cp \\to \\Cp$ defined by\n\t\t\t\\[\n\t\t\t\tB_{a,p}(X) := \\sum_{n=0}^{+\\infty} \\binom{a}{n}X^n = 1 + \\sum_{n=1}^{+\\infty} \\frac{a(a-1)\\dots (a - n + 1)}{n!}X^n\n\t\t\t\\]\n\t\t\tis the \\padic binomial expansion \n\t\t\\end{defn}\n\t\tWe'll now try to study where it converges (it will be more complicated than the previous functions, since this is actually the first series with coefficient in $\\Cp$, and not in $\\Q$).\n\t\t\\begin{prop}\n\t\t\t\\label{prop:convergence-binomial}\n\t\t\tIf $\\pabs{a} > 1$ then the region of convergence of $B_{a, p}(X)$ is $D((r_p/\\pabs{a})^-)$.  Instead, if $\\pabs{a} \\leq 1$ the binomial expansion surely converges on $D(r_p^-)$ (although the region of convergence can be bigger). Finally, if $a \\in \\Zp$ then $B_{a, p}(X) \\in \\Zp\\llbracket X \\rrbracket$ so it surely converges on $D(1^-)$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tLet's suppose $\\pabs{a} > 1$. Then, by the isosceles triangle principle, if $i \\in \\Z$ then $\\pabs{a - i} = \\pabs{a}$ and we obtain that the $n$-th term of the series has norm $\\pabs{ax}^n/\\pabs{n!}$. Thus, with a little computation, we obtain that the radius of convergence is $r = p^{-1/(p-1)}/\\pabs{a} = r_p/\\pabs{a}$. Similarly to the exponential case it's easy to prove that the region of convergence if $D((r_p/\\pabs{a})^-)$.\\newline\n\t\t\tIf $\\pabs{a} \\leq 1$ it is more difficult to find the exact region of convergence; anyway if $i \\in \\Z$ we have $\\pabs{a - i} \\leq \\max\\{\\pabs{a}, \\pabs{i}\\} \\leq 1$ so $\\pabs{\\binom{a}{n}x^n} \\leq \\pabs{x^n/n!}$. Then $B_{a,p}(X)$ surely converges on $D(r_p^-)$.\\newline\n\t\t\tTo prove that if $a \\in \\Zp$ then $B_{a,p}(X) \\in \\Zp\\llbracket X \\rrbracket$ we just need to show that $\\binom{a}{n} \\in \\Zp$ for every $n \\in \\N$ (we already know $\\binom{a}{n} \\in \\Qp$). 
Let's fix $n$ and choose $a_0 \\in \\Z$ such that $a_0 > n$ and $\\ord(a - a_0) > N$, where $N$ will be chosen later (to choose $a_0$ we can just truncate the \\padic expansion of $a \\in \\Zp$ at some index greater than $N$). Now $\\binom{a_0}{n} \\in \\Z \\subset \\Zp$ and it suffices to show that $\\pabs{\\binom{a_0}{n} - \\binom{a}{n}} \\leq 1$ for a suitable $N$ (then we can conclude using the ultrametric inequality). This easily follows from the continuity of the polynomial $X(X-1)\\dots(X - n + 1)$ (special case of \\cref{prop:continuity-analitic-function}). Then $B_{a, p}(X) \\in \\Zp\\llbracket X \\rrbracket$ if $a \\in \\Zp$ so, by \\cref{prop:convergence-power-series-Zp}, it converges in $D(1^-)$.\n\t\t\\end{proof}\n\t\tWe can now prove the main property of the binomial expansion, and justify the shorthand $B_{a,p}(X) = (1 + X)^a$, at least for $a \\in \\Q$.\n\t\t\\begin{prop}\n\t\t\tIf $a \\in \\Q^\\times$ and $x \\in \\Cp$ is in the region of convergence of $B_{a,p}(X)$, then $\\left[B_{a, p}(x)\\right]^{1/a} = 1 + x$.\n\t\t\\end{prop}\n\t\t\\begin{proof}\n\t\t\tLet's first consider $a = 1/m$ with $m \\in \\Z^\\times$. The idea behind the proof is the usual one: if $x \\in \\R$ and $\\abs{x} < 1$ we have $B_{1/m}(x) = (1 + x)^{1/m}$ so $B_{1/m}(x)^m = 1 + x$ which, by \\cref{prop:formal-series}, gives us the formal identity between the two power series in $\\Q\\llbracket X \\rrbracket$ (observe that $m < 0$ doesn't create problems since $B_{1/m}(X)$ is an invertible element of $\\Q\\llbracket X \\rrbracket$) that we then translate into an equality between \\padic analytic functions (observe the trivial fact $a \\in \\Q \\implies B_{a, p}(X) \\in \\Q\\llbracket X \\rrbracket$). We must pay attention only to the last step, i.e. we can substitute only $x$ in the region of convergence of $B_{1/m, p}(X)$ so, for example, if $p \\mid m$ we can use $x \\in D((r_p\\pabs{m})^-)$ and if $p \\nmid m$ we can choose $x \\in D(r_p^-)$. We have proved that $B_{1/m, p}(x)^m = 1 + x$ for every $x$ where $B_{1/m, p}(X)$ converges. \\newline\n\t\t\tNow let $a = n/m$ with $n, m \\in \\Z^\\times$. It is easy to prove, using the same technique as before, that $B_{n/m, p}(X) = B_{1/m, p}(X) ^ n$. Then we can write\n\t\t\t\\[\n\t\t\t\tB_{n/m, p}(X)^{m/n} = B_{1/m, p}(X)^m = 1 + X,\n\t\t\t\\]\n\t\t\twhich proves the thesis.\n\t\t\\end{proof}\n\t\tWe can use the \\padic binomial expansion to study an interesting example of how the same convergent series in $(\\Q, \\abs{\\ })$ and in $(\\Qp, \\pabs{\\ })$ can have different sums.\n\t\t\\begin{example}\n\t\t\tLet's consider the following power series:\n\t\t\t\\begin{gather*}\n\t\t\t\tB_{1/2}\\left(\\frac{7}{9}\\right) = \\sum_{n=0}^{+\\infty} \\binom{1/2}{n} \\left(\\frac{7}{9}\\right)^n \\qquad \\in \\Q\\llbracket X \\rrbracket, \\\\\n\t\t\t\tB_{1/2, 7}\\left(\\frac{7}{9}\\right) = \\sum_{n=0}^{+\\infty} \\binom{1/2}{n} \\left(\\frac{7}{9}\\right)^n \\qquad \\in \\Q_7 \\llbracket X \\rrbracket.\n\t\t\t\\end{gather*} \n\t\t\tThey are exactly the same power series but they converge to different numbers, both of which are of course square roots of $\\tfrac{16}{9}$ (clearly its square roots are the same both in $\\Q$ and in $\\Q_7$). 
In the first case, working in $(\\Q, \t\\abs{\\ })$, we have \n\t\t\t\\[\n\t\t\t\tB_{1/2}\\left(\\frac{7}{9}\\right) = \\left(1 + \\frac{7}{9}\\right)^{1/2} = \\frac{4}{3} > 0.\n\t\t\t\\]\n\t\t\tInstead, in the second case, we have \n\t\t\t\\[\n\t\t\t\tB_{1/2, 7}\\left(\\frac{7}{9}\\right) = \\left(1 + \\frac{7}{9}\\right)^{1/2} = -\\frac{4}{3} < 0.\n\t\t\t\\]\n\t\t\tIn-fact, $\\textrm{ord}_7\\left(\\tfrac{7}{9}\\right) = 1$ so for $n \\geq 1$ we have\n\t\t\t\\[\n\t\t\t\t\\abs{\\frac{1/2(1/2 - 1)\\dots(1/2 - n + 1)}{n!}\\cdot \\left(\\frac{7}{9}\\right)^n}_7 \\leq \\frac{7^{-n}}{\\abs{n!}_7} = 7^{\\frac{-5n - S_n}{6}} < 1\n\t\t\t\\]\n\t\t\tso it must be $B_{1/2, 7}\\left(\\tfrac{7}{9}\\right) \\equiv 1 \\mod 7$. Now it's easy to see that $-\\tfrac{4}{3} \\equiv 1 \\mod 7$ and $\\tfrac{4}{3} \\equiv -1 \\mod 7$. We conclude that necessarily $B_{1/2, 7}\\left(\\tfrac{7}{9}\\right) = -\\tfrac{4}{3}$.\n\t\t\\end{example}\n\t \tThis example also warns us about the danger of using the notation $B_{a, p}(X) = (1 + X)^a$, which comes certainly handy sometimes but we have to remember that it can yield different results than the ones we would expect on $\\R$.\n\t \\section{The Iwasawa logarithm and Artin-Hasse exponential}\n\t \t\\begin{defn}\n\t \t\tLet $X \\subseteq \\Cp$ be a set with no isolated points. A function $f\\colon X \\to \\Cp$ is \\emph{differentiable} at $a \\in X$ if\n\t \t\t\\[\n\t \t\t\t\\exists \\lim_{X \\ni x \\to a} \\frac{f(x) - f(a)}{x - a} =: f'(a) \\in \\Cp.\n\t \t\t\\]\n\t \t\tEquivalently, $f$ is differentiable at $a \\in X$ if \n\t \t\t\\[\n\t \t\t\tf(x) = f(a) + (x - a)f'(a) + (x - a)\\phi(x), \\qquad \\lim_{X \\ni x \\to a} \\phi(x) = 0.\n\t \t\t\\]\n\t \t\\end{defn}\n \t\tWe also introduce a stronger notion of differentiability for \\padic functions, which will give us some analogue theorems to the classical case.\n \t\t\\begin{defn}\n \t\t\tLet $X \\subseteq \\Cp$ be a set with no isolated points. A function $f\\colon X \\to \\Cp$ is \\emph{strictly differentiable} at $a \\in X$ (and we write $f \\in S^1(a)$) if the difference quotients\n \t\t\t\\[\n \t\t\t\t\\Phi f(x, y) := \\frac{f(x) - f(y)}{x - y}\n \t\t\t\\]\n \t\t\ttends to $\\Cp \\ni \\ell = f'(a)$ as $X \\times X \\setminus \\Delta_X \\ni (x, y) \\to (a, a)$. \n \t\t\tHere we used the notation $\\Delta_X = \\{(x, x) : x \\in X \\} \\subset X \\times X$.\n \t\t\tWe say $f \\in S^1(X)$ if $f \\in S^1(a)$ for every $a \\in X$.\n \t\t\\end{defn}\n \t\tIn the classical case this definition is not very useful: in-fact if $I \\subset \\R$ is an open interval and $f \\in \\mathcal{C}^1(I, \\R)$ then $f$ is strictly differentiable at every point of $I$. In the next example we'll see this is not the case in \\padic analysis.\n \t\t\\begin{example}\n \t\t\tLet's consider the sequence of disjoint open balls $(B_n)_{n \\geq 1}$ defined by\n \t\t\t\\[\n \t\t\t\tB_n := \\{x \\in \\Zp: \\pabs{x - p^n} < \\pabs{p^{2n}} \\} \\subseteq \\{x \\in \\Zp: \\pabs{x} = \\pabs{p^n}\\}\n \t\t\t\\]\n \t\t\tand let $f\\colon \\Zp \\to \\Cp$ defined by\n \t\t\t\\begin{gather*}\n \t\t\t\tf(x) :=\n \t\t\t\t\\begin{cases}\n\t \t\t\t\tp^{2n}, & \\text{if $x \\in B_n$;} \\\\\n\t \t\t\t\t0, & \\text{otherwise};\\\\\n \t\t\t\t\\end{cases}.\n \t\t\t\\end{gather*}\n \t\t\tThe function $f$ is constant on each open ball $B_n$, hence $f$ is locally constant outside the origin. Then $f$ is differentiable at every $\\Zp \\ni x \\neq 0$ and $f'(x) = 0$. 
At the origin\n \t\t\t\\[\n \t\t\t\t\\lim_{\\Zp \\ni x \\to 0} \\frac{f(x) - f(0)}{x} = \\lim_{\\Zp \\ni x \\to 0} \\frac{f(x)}{x} = 0\n \t\t\t\\]\n \t\t\tso $f'(0) = 0$ (to see why it is true, let $x = up^n, u \\in \\Zp^\\times$; then $f(x) = p^{2n}$ and so $\\tfrac{f(x)}{x} = u^{-1}p^n$). Then $f'\\colon \\Zp \\to \\Cp$ is identically $0$ so it is obviously continuous (i.e. $f \\in \\mathcal{C}^1$) but $f$ is not strictly differentiable at $0$. In-fact, let's consider $\\Phi f(x, y)$ where $x = x_n = p^n$ and $y = y_n = p^n - p^{2n}$:\n \t\t\t\\[\n \t\t\t\t\\Phi f(x_n, y_n) = \\frac{f(x_n) - f(y_n)}{x_n - y_n} = \\frac{p^{2n} - 0}{p^{2n}} = 1 \n \t\t\t\\]\n \t\t\tso, if we consider this particular path $(x_n, y_n) \\to (0,0)$ as $n \\to +\\infty$ we obtain \n \t\t\t\\[\n \t\t\t\t0 = f'(0) \\neq \\lim_{n \\to +\\infty} \\Phi f(x_n, y_n) = 1,\n \t\t\t\\]\n \t\t\twhich implies $f$ is not strictly differentiable at $0$ (we have used that $\\pabs{y_n} = \\pabs{p^n}$ and $y_n \\notin B_n$).\n \t\t\\end{example}\n \t\tWe'll now prove a proposition which we're very familiar with in classical analysis.\n \t\t\\begin{prop}\n \t\t\tIf $f\\colon X \\to \\Cp$ is strictly differentiable at $a \\in X$ and $f'(a) \\neq 0$, then there is a neighbourhood $V$ of $a \\in X$ in which $f$ is injective.\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tSince $f \\in S^1(a)$ and $\\pabs{f'(a)} > 0$ we can find a neighbourhood $V$ of $a$ such that\n \t\t\t\\[\n \t\t\t\t\\pabs{\\Phi f(x, y) - f'(a)} < \\pabs{f'(a)} \\qquad \\text{for $(x, y) \\in V \\times V \\setminus \\Delta_V$}.\n \t\t\t\\]\n \t\t\tThen, by the isosceles triangle principle, we must have $\\pabs{\\Phi f(x, y)} = \\pabs{f'(a)}$ which means exactly\n \t\t\t\\[\n \t\t\t\t\\pabs{f(x) - f(y)} = \\pabs{f'(a)}\\pabs{x - y} \\qquad \\text{for $(x, y) \\in V \\times V$}. \\qedhere\n \t\t\t\\]\n \t\t\\end{proof}\n \t\tLet's now focus on analytic functions; they are, like in the classical case, everywhere strictly differentiable any number of times (i.e. they're in $\\bigcap_{k > 0} S^k$). We'll only prove it for $k=1$.\n \t\t\\begin{thm}\n \t\t\t\\label{thm:derivative-power-series}\n \t\t\tLet $f(X) = \\sum_{n \\geq 0} a_nX^n$ be an analytic function which converges on $D = D(r^-)$. Then $f \\in S^1(D)$ and $f'$ is given by\n \t\t\t\\[\n \t\t\t\tf'(X) = \\sum_{n=1}^{+\\infty} na_nX^{n-1}. \t\t\t\n \t\t\t\\]\n \t\t\\end{thm}\n \t\t\\begin{proof}\n \t\t\tIt's immediate to note that the radius of convergence of $f'$ is greater or equal to $r$, the radius of convergence of $f$, since $\\pabs{n} \\leq 1$ for every $n \\in \\N$. First of all let's fix $x \\in D$ and prove that \n \t\t\t\\[\n \t\t\t\t\\lim_{h \\to 0}\\, \\pabs{\\frac{f(x+h) - f(x)}{h} - f'(x)} = 0.\n \t\t\t\\]\n \t\t\tWe can re-write this limit as \n \t\t\t\\[\n \t\t\t\t\\lim_{h \\to 0}\\,\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot \\left( \\frac{(x+h)^n - x^n}{h} - nx^{n-1} \\right) } = 0;\n \t\t\t\\]\n \t\t\tand using the binomial theorem on $(x + h)^n$ we can then write\n \t\t\t\\[\n \t\t\t\t\\lim_{h \\to 0}\\,\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot \\left( \\sum_{i=0}^{n-2} \\binom{n}{i} x^ih^{n-1-i} \\right) } = 0.\n \t\t\t\\]\n \t\t\tLet's now distinguish two cases: $x = 0$ and $x \\neq 0$. 
\\newline\n \t\t\tIf $x = 0$ then we must prove $\\lim_{h \\to 0}\\,\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot h^{n-1} } = 0$,\twhich easily follows from \n \t\t\t\\[\n \t\t\t\t\\lim_{h \\to 0}\\,\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot h^{n-1} }  \\leq \\lim_{h \\to 0}\\,\\left( \\pabs{h} \\cdot \\max_{n \\geq 2} \\left\\{\\pabs{a_nh^{n-2}} \\right\\} \\right)= 0,\n \t\t\t\\]\n \t\t\twhere we considered $0 < \\pabs{h} < r$ and exploited the fact that $\\lim_{n \\to +\\infty} \\pabs{a_nh^{n-2}} = 0$ so the maximum in the limit above is bounded.\\newline\n \t\t\tNow, assuming $x \\neq 0$ and $0 < \\pabs{h} < \\pabs{x}$, it's easy to see that\n \t\t\t\\[\n \t\t\t\t\\pabs{\\sum_{i=0}^{n-2} \\binom{n}{i} x^ih^{n-1-i}} \\leq \\pabs{h}^{n-1} \\cdot \\max_{0 \\leq i \\leq n-2}\\left\\{\\pabs{x^ih^{-i}}\\right\\} \\leq \\pabs{h}^{n-1}\\cdot \\left(\\frac{\\pabs{x}}{\\pabs{h}}\\right)^{n-2} = \\pabs{h}\\cdot \\pabs{x}^{n-2}.\n \t\t\t\\]\n \t\t\tThen we have\n \t\t\t\\[\n \t\t\t\t\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot \\left( \\sum_{i=0}^{n-2} \\binom{n}{i} x^ih^{n-1-i} \\right) } \\leq \\pabs{h}\\cdot\\max_{n \\geq 2}\\left\\{\\pabs{a_nx^{n-2}}\\right\\},\n \t\t\t\\]\n \t\t\tand since $\\lim_{n\\to +\\infty} \\pabs{a_nx^{n-2}} = 0$ the maximum above is bounded so\n \t\t\t\\[\n \t\t\t\t\\lim_{h \\to 0}\\,\\pabs{\\sum_{n = 2}^{+\\infty} a_n \\cdot \\left( \\sum_{i=0}^{n-2} \\binom{n}{i} x^ih^{n-1-i} \\right) } \\leq \\lim_{h \\to 0} \\left( \\pabs{h}\\cdot\\max_{n \\geq 2}\\left\\{\\pabs{a_nx^{n-2}}\\right\\}\\right) = 0.\n \t\t\t\\]\n \t\t\tWe have proved that $f$ is differentiable everywhere and $f'$ is its derivative. \\newline\n \t\t\tWe won't prove here that $f$ is actually strictly differentiable: a proof of this statement for a particular case (where $r \\geq 1$, i.e. $\\lim_{n\\to+\\infty}\\pabs{a_n}=0$) can be found at \\cite[239]{robert:padic-analysis}.\n \t\t\\end{proof}\n \t\t\\begin{example}\n \t\t\tWe can now prove one well known result of classical analysis: the derivative of $e^x$ is $e^x$. More precisely, if $x \\in D(r_p^-)$ then $\\frac{\\mathrm{d}}{\\mathrm{d}x}\\exp_p(x) = \\exp_p(x)$. It easily follows applying \\cref{thm:derivative-power-series}:\n \t\t\t\\[\n \t\t\t\t\\frac{\\mathrm{d}}{\\mathrm{d}x}\\exp_p(x) = \\frac{\\mathrm{d}}{\\mathrm{d}x}\\left( \\sum_{n=0}^{+\\infty} \\frac{x^n}{n!} \\right) = \\sum_{n=1}^{+\\infty} \\frac{x^{n-1}}{(n-1)!} = \\exp_p(x).\n \t\t\t\\]\n \t\t\\end{example}\n \t\t\\begin{defn}\n \t\t\tLet $f\\colon \\Cp \\to \\Cp$ be a (partial) function. 
If for every $x$ in its domain there exists a neighbourhood where $f$ is a power series, we say that $f$ is \\emph{locally analytic}.\n \t\t\\end{defn}\n \t\tWe present now two \\padic locally analytic functions: the \\emph{Iwasawa logarithm} and the \\emph{Artin-Hasse} exponential.\n \t\t\\begin{prop}\n \t\t\tThere exists a unique function $\\Log_p\\colon \\Cp^\\times \\to \\Cp$ such that:\n \t\t\t\\begin{enumerate}[label=(\\arabic*)]\n \t\t\t\t\\item $\\Log_p$ agrees with $\\log_p$ in $D_1(1^-)$, i.e.,\n \t\t\t\t\\[\n \t\t\t\t\t\\Log_p(x) = \\sum_{n=1}^{+\\infty} (-1)^{n+1}\\frac{(x-1)^n}{n} \\qquad \\text{for $\\pabs{x-1} < 1$};\n \t\t\t\t\\]\n \t\t\t\t\\item $\\Log_p(xy) = \\Log_p(x) + \\Log_p(y)$ for all $x, y \\in \\Cp^\\times$;\n \t\t\t\t\\item $\\Log_p(p) = 0$.\n \t\t\t\\end{enumerate}\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tWe recall from \\cref{prop:structure-Cp} that any non-zero $x \\in \\Cp$ can be written as $x = p^r\\omega(x_1)\\langle x_1\\rangle$, where $p^r$ is a root of the equation $X^b - p^a = 0$ where $r=\\tfrac{a}{b}=\\ord(x)$, $\\omega(x_1)$ is a root of $1$ and $\\pabs{\\langle x_1 \\rangle - 1} < 1$. If such an extension of the logarithm exists, then, by \\textit{(2)} and \\textit{(3)}, it must be\n \t\t\t\\[\n \t\t\t\t\\Log_p(x) = \\Log_p(p^r) + \\Log_p(\\omega(x_1)) + \\Log_p(\\langle x_1 \\rangle) = 0 + 0 + \\Log_p(\\langle x_1 \\rangle) = \\log_p(\\langle x_1 \\rangle),\n \t\t\t\\]\n \t\t\tsince $\\langle x_1 \\rangle \\in D_1(1^-)$. Then there is at most one extension of the logarithm and it is the one defined by\n \t\t\t\\[\n \t\t\t\t\\Log_p(x) := \\log_p(\\langle x_1 \\rangle).\n \t\t\t\\] \n \t\t\tFirst of all we have to show that this is well defined: in-fact we could have chosen another root of $X^b - p^a= 0$ and we would have obtained a different factorization of the same element. Let's suppose that \n \t\t\t\\[\n \t\t\t\tx = p^r\\cdot\\omega(x_1)\\cdot\\langle x_1\\rangle = \\frac{p^r}{\\zeta}\\cdot\\omega\\left(x_1\\overline{\\zeta}\\right)\\cdot\\left\\langle x_1\\overline{\\zeta}\\right\\rangle,\n \t\t\t\\]\n \t\t\twhere $\\zeta \\in \\Cp$ is a $b$-th root of unity and $\\overline{\\zeta} = \\zeta + M \\in A/M$ (we recall that $A = D(1), M = D(1^-)$ in $\\Cp$). We have to prove then that $\\log_p(\\langle x_1\\rangle) = \\log_p(\\left\\langle x_1\\overline{\\zeta}\\right\\rangle)$. Let's first recall how the Teichm{\\\"u}ller representatives are defined: if $\\overline{\\Fp}$ is the algebraic closure of $\\Fp$ then $\\omega\\colon \\overline{\\Fp} \\to \\Zpu$ is a section of the projection $\\pi\\colon \\Zpu \\twoheadrightarrow \\Zpu/p\\Zpu = \\overline{\\Fp}$ such that $\\omega(x)^{p^f - 1} = 1$ if $x \\in \\F_{p^f}^\\times$ and $\\omega(0)=0$ (it is immediate that since $\\omega$ can be defined on every finite field of characteristic $p$, see the proof of \\cref{prop:structure-finite-extension}, it can be extended to $\\overline{\\Fp}$). It's easy to see that $\\omega\\colon \\overline{\\Fp}^\\times \\to \\left(\\Zpu\\right)^\\times$ is a group morphism, i.e. 
$\\omega(xy) = \\omega(x) \\cdot \\omega(y)$: in-fact if $x, y \\in \\F_{p^f}$ then $\\omega(xy)$ is defined as the only element of $\\Zpu$ such that $\\omega(xy)^{p^f} = \\omega(xy)$ and $\\pi(\\omega(xy))=xy$ and it's clear that $\\omega(x)\\cdot\\omega(y)$ satisfies both these conditions.\n \t\t\tWe also recall from \\cref{prop:structure-Cp} that we can find a big enough $f \\in \\N$ such that \n \t\t\t\\begin{gather*}\n\t \t\t\t\\F_{p^f} \\ni x_1 = \\frac{x}{p^r} + M,\\\\\n\t \t\t\t\\F_{p^f} \\ni x_1\\overline{\\zeta} = \\frac{x\\zeta}{p^r} + M = \\left(\\frac{x}{p^r} + M\\right) \\cdot (\\zeta + M). \n \t\t\t\\end{gather*}\n \t\t\tExplained all the notations, we can finally write\n \t\t\t\\[\n \t\t\t\t\\omega\\left(x_1\\overline{\\zeta}\\right) = \\omega(x_1)\\cdot\\omega\\left(\\overline{\\zeta}\\right) \\implies \\left\\langle x_1\\overline{\\zeta}\\right\\rangle = \\frac{x\\zeta}{p^r\\cdot\\omega\\left(x_1\\overline{\\zeta}\\right)} = \\langle x_1\\rangle \\cdot \\frac{\\zeta}{\\omega\\left(\\overline{\\zeta}\\right)}.\n \t\t\t\\]\n \t\t\tNow, since $\\zeta^b = 1$, we have $\\overline{\\zeta}^b = 1$ so $\\omega\\left(\\overline{\\zeta}\\right)^b = 1$, since $\\omega$ is a group morphism and $\\omega(1) = 1$. Finally,\n \t\t\t\\[\n \t\t\t\t\\left(\\frac{\\zeta}{\\omega\\left(\\overline{\\zeta}\\right)}\\right)^b = \\frac{\\zeta^b}{\\omega\\left(\\overline{\\zeta}\\right)^b} = 1\n \t\t\t\\]\n \t\t\tso $\\xi := \\tfrac{\\zeta}{\\omega\\left( \\overline{\\zeta}\\right)}$ is a root of $1$. Let's prove that $\\pabs{\\xi - 1} < 1$; let's suppose by contradiction that $\\xi = 1 + \\Delta$ with $\\pabs{\\Delta} \\geq 1$ and let's write $\\langle x_1 \\rangle = 1 + \\delta$ with $\\pabs{\\delta} < 1$. We know by hypothesis that $\\langle x_1\\overline{\\zeta} \\rangle \\in D_1(1^-)$ so we must have\n \t\t\t\\[\n \t\t\t\t\\pabs{(1 + \\delta) \\cdot (1 + \\Delta) - 1} = \\pabs{\\delta + \\Delta \\cdot(1 + \\delta)} < 1\n \t\t\t\\]\n \t\t\tbut since $\\pabs{\\delta} < 1$ and $\\pabs{1 + \\delta} = 1$, we have $\\pabs{\\Delta \\cdot (1 + \\delta)} = \\pabs{\\Delta} \\geq 1$ so\n \t\t\t\\[\n \t\t\t\t\\pabs{\\delta + \\Delta\\cdot(1 + \\delta)} = \\max\\left\\{\\pabs{\\delta}, \\pabs{\\Delta \\cdot (1 + \\delta)}\\right\\} = \\pabs{\\Delta} \\geq 1,\n \t\t\t\\]\n \t\t\twhich is absurd (here we have used several times the isosceles triangle principle). We have proved that $\\xi$ is a root of $1$ with $\\pabs{\\xi - 1} < 1$ so we can compute $\\log_p(\\xi) = 0$. Then\n \t\t\t\\[\n \t\t\t\t\\log_p\\left(\\left\\langle x_1\\overline{\\zeta}\\right\\rangle\\right) = \\log_p(\\langle x_1 \\rangle) + \\log_p(\\xi) = \\log_p(\\langle x_1 \\rangle)\n \t\t\t\\]\n \t\t\tand the function $\\Log_p$ is well defined. \\newline\n \t\t\tProperties \\textit{(1)} and \\textit{(3)} are now obvious from the definition: if $x \\in D_1(1^-)$ then we can choose $x = \\langle x \\rangle$  so $\\Log_p(x) = \\log_p(x)$ and $p = p^1 \\cdot 1 \\cdot 1$ so $\\Log_p(p) = \\log_p(1) = 0$. To prove \\textit{(2)} let $x = p^r\\omega(x_1)\\langle x_1 \\rangle$, $y = p^s \\omega(y_1) \\langle y_1 \\rangle$ and $z = xy = p^{r+s}\\omega(z_1)\\langle z_1 \\rangle$. Now $p^{r+s}$ isn't necessarily the same fractional power as $p^rp^s$ (it can differ by a root of unit), but we can choose to use exactly $p^rp^s$, since the value of $\\Log_p$ doesn't depend on the choice of the fractional power. 
In this case we'll have $z_1 = \\tfrac{z}{p^rp^s} + M = x_1y_1$ so $\\omega(z_1) = \\omega(x_1)\\cdot\\omega(y_1)$ and $\\langle z_1 \\rangle = \\langle x_1 \\rangle \\cdot \\langle y_1 \\rangle$. Then $\\Log_p(xy) = \\Log_p(x) + \\Log_p(y)$.\n \t\t\\end{proof}\n \t\t\\begin{prop}\n \t\t\t$\\Log_p$ is locally analytic on $\\Cp^\\times$ with derivative $\\Cp^\\times \\ni x \\mapsto \\tfrac{1}{x}$.\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tLet's fix a point $x_0 \\in \\Cp^\\times$ and let $r := \\pabs{x_0}$. For every $x \\in D_{x_0}(r^-)$ (the largest disc about $x_0$ which doesn't contain $0$) we have $\\pabs{\\tfrac{x}{x_0} - 1} < 1$ and so\n \t\t\t\\[\n \t\t\t\t\\Log_p(x) = \\Log_p\\left(x_0 \\cdot \\left(1 + \\frac{x}{x_0} - 1\\right)\\right) = \\Log_p(x_0) + \\sum_{n=1}^{+\\infty} (-1)^{n+1}\\cdot\\frac{(x - x_0)^n}{n\\cdot x_0^n}.\n \t\t\t\\]\n \t\t\tWe have just proved that, in a neighbourhood of $x_0$, $\\Log_p$ can be represented by a convergent power series in $x - x_0$. Since this reasoning can be done for any $x_0 \\in \\Cp^\\times$ we can conclude that $\\Log_p$ is locally analytic.\\newline\n \t\t\tLet's consider $x \\in D_{x_0}(r^-)$ as above: using the locally analyticity of $\\Log_p$ and \\cref{thm:derivative-power-series} we obtain:\n \t\t\t\\begin{gather*}\n\t\t\t\t\\frac{\\mathrm{d}}{\\mathrm{d}x}\\Log_p(x) = \\sum_{n=1}^{+\\infty} (-1)^{n+1}\\cdot\\frac{(x-x_0)^{n-1}}{x_0^n} = x_0^{-1} \\cdot \\sum_{n=0}^{+\\infty} \\left(1 - \\frac{x}{x_0}\\right)^n = \\frac{x_0^{-1}}{(x/x_0)} = \\frac{1}{x}. \\qedhere\n \t\t\t\\end{gather*}\n \t\t\\end{proof}\n \t\tWe have found a locally analytic function defined on $\\Cp^\\times$ which extends $\\log_p$ and has the same basic properties. \n \t\t\n \t\tIt is now natural to try to build a homomorphism $f\\colon \\Cp \\to \\Cp^\\times$ extending the exponential, which is only defined in $D(r_p^-)$. If there exists such an extension then, fixed $x \\in \\Cp^\\times$ and $n \\in \\N$ such that $p^nx \\in D(r_p^-)$, then\n \t\t\\[\t\n \t\t\tf(x)^{p^n} = f(p^nx) = \\exp_p(p^nx)\n \t\t\\]\n \t\tso $f(x)$ must be a $p^n$-th root of $\\exp_p(p^nx)$. As stated in the next proposition, this extension can actually be done in a coherent way.\n \t\t\\begin{prop}\n \t\t\tThere exists a continuous homomorphism $\\mathrm{Exp}\\colon \\Cp \\to D_1(1^-)$ extending $\\exp_p$.\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tThe idea behind the proof exploits the fact that, since $(D_1(1^-), \\cdot)$ is a divisible group, there is an extension property for homomorphisms defined over subgroups. For the whole proof see \\cite[259]{robert:padic-analysis}.\n \t\t\\end{proof}\n \t\tUnlike the Iwasawa logarithm, the extensions $\\mathrm{Exp}$ of the exponential are not defined in a canonic way so they're not very useful. Anyway it is easy to prove that, chosen such an extension $\\mathrm{Exp}$, $\\log_p \\circ\\, \\mathrm{Exp} = \\mathrm{id}_{\\Cp}$. In-fact:\n \t\t\\[\n \t\t\tp^n\\cdot \\left(\\log_p\\circ\\,\\mathrm{Exp}(x)\\right) = \\log_p\\left(\\mathrm{Exp}(x)^{p^n}\\right) = \\log_p\\left(\\mathrm{Exp}\\left(p^nx\\right)\\right) = \\log_p\\left(\\exp_p\\left(p^nx\\right)\\right) = p^nx.\n \t\t\\]\n \t\t\n \t\tWe'll now describe a slightly different exponential function which converges in $D(1^-)$: the Artin-Hasse exponential. 
Before defining it we'll need to study some basic properties of the well known M{\\\"o}bius function.\n \t\t\\begin{defn}\n \t\t\tLet $\\mu\\colon \\N^\\times \\to \\N$ be defined by\n \t\t\t\\[\n \t\t\t\t\\mu(n) := \n \t\t\t\t\\begin{cases}\n\t \t\t\t\t0, & \\text{if $n$ is divisible by a perfect square greater than 1;}\\\\\n\t \t\t\t\t(-1)^k, & \\text{if $n$ is a product of $k$ distinct prime factors;}\n \t\t\t\t\\end{cases}.\n \t\t\t\\]\n \t\t\tThis is the \\emph{M{\\\"o}bius function}.\n \t\t\\end{defn}\n \t\t\\begin{prop}\n\t \t\t\\label{prop:mobius-function}\n\t \t\tLet $n \\in \\N^\\times$, then\n\t \t\t\\[\n\t \t\t\t\\sum_{d \\mid n} \\mu(d) =\n\t \t\t\t\\begin{cases}\n\t\t \t\t\t1, & \\text{if $n=1$;} \\\\\n\t\t \t\t\t0, & \\text{otherwise;}\n\t \t\t\t\\end{cases}.\n\t \t\t\\]\n\t \t\tIn particular, if $p$ is a prime,\n\t \t\t\\[\n\t \t\t\t\\sum_{\\mathclap{d \\mid n,\\,p \\nmid d}} \\mu(d) = \n\t \t\t\t\\begin{cases}\n\t\t \t\t\t1, & \\text{if $n$ is a power of $p$;} \\\\\n\t\t \t\t\t0, & \\text{otherwise;}\n\t \t\t\t\\end{cases}.\n\t \t\t\\]\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tThe case $n=1$ is trivial ($\\mu(1) = 1$). Let $n = p_1^{a_1}\\dots \\,p_s^{a_s}$ with $s \\geq 1$ and $p_i$ prime for every $i=1,\\dots,s$. Then, by an easy combinatoric argument, we have\n \t\t\t\\[\n \t\t\t\t\\sum_{d \\mid n} \\mu(d) = \\sum_{\\epsilon_i = 0 \\lor 1} \\mu(p_1^{\\epsilon_1}\\dots\\,p_s^{\\epsilon_s}) = \\sum_{\\epsilon_i = 0 \\lor 1} (-1)^{\\sum \\epsilon_i} = (1 - 1)^s = 0.\n \t\t\t\\]\n \t\t\tThe second statement is just a particular case of the first one applied to $n \\cdot p^{-\\ord(n)}$ in place of $n$.\n \t\t\\end{proof}\n \t\t\\begin{prop}\n \t\t\t\\label{prop:formal-identity-exp}\n \t\t\tIn $\\Q\\llbracket X \\rrbracket$ the following holds:\n \t\t\t\\[\n \t\t\t\t\\exp(X) = \\prod_{n=1}^{+\\infty} B_{-\\mu(n)/n}(-X^n).\n \t\t\t\\]\n \t\t\\end{prop}\n \t\t\\begin{proof}\n \t\t\tFirst of all let's observe that the infinite product of series actually makes sense: in-fact $B_{-\\mu(n)/n,\\,p}(-X^n) = 1 + \\tfrac{\\mu(n)}{n}X^n + o(X^n)$ so the $n$-th factor has no power of $X$ less than the $n$-th, so only a finite number of series is involved to determine the coefficient of any power of $X$. \n \t\t\tTo prove that the identity holds we'll use \\cref{prop:formal-series}; let $x \\in \\R$ with $\\abs{x} < 1$, then we know that\n \t\t\t\\[\n \t\t\t\tB_{-\\mu(n)/n}(-x^n) = (1 - x^n)^{-\\frac{\\mu(n)}{n}}.\n \t\t\t\\]\n \t\t\tTaking the (classical) $\\log$ of the right side we obtain\n \t\t\t\\[\n \t\t\t\t\\log\\left(\\prod_{n=1}^{+\\infty} (1 - x^n)^{-\\frac{\\mu(n)}{n}}\\right) = -\\sum_{n=1}^{+\\infty}\\frac{\\mu(n)}{n} \\cdot \\log(1 - x^n) = \\sum_{n=1}^{+\\infty} \\frac{\\mu(n)}{n}\\cdot\\sum_{m=1}^{+\\infty} \\frac{x^{nm}}{m} = \\sum_{j=1}^{+\\infty}\\left(\\frac{x^j}{j}\\cdot \\sum_{n \\mid j} \\mu(n)\\right)\n \t\t\t\\]\n \t\t\twhere in the last step we set $j = nm$ and we rearranged the terms of the series since it is absolutely convergent. 
To see why this is true, let's consider\n \t\t\t\\[\n \t\t\t\t\\sum_{n=1}^{+\\infty} \\abs{\\frac{\\mu(n)}{n}}\\cdot\\abs{\\log(1 - x^n)} \\leq \\sum_{n=1}^{+\\infty} \\frac{\\abs{\\log(1 - x^n)}}{n}.\n \t\t\t\\]\n \t\t\tSince $\\abs{\\log(1 - x^n)} \\sim -\\abs{x}^n$ as $n \\to +\\infty$, we can just study the convergence of the series\n \t\t\t\\[\n \t\t\t\t\\sum_{n=1}^{+\\infty} \\frac{\\abs{x}^n}{n},\n \t\t\t\\]\n \t\t\twhich converges since it is dominated by the convergent geometric series $\\sum_{n=1}^{+\\infty} \\abs{x}^n$ (we're using $\\abs{x} < 1$). Now that we have justified why we can rearrange terms, using \\cref{prop:mobius-function}, we obtain\n \t\t\t\\begin{gather*}\n \t\t\t\t\\log\\left(\\prod_{n=1}^{+\\infty} (1 - x^n)^{-\\frac{\\mu(n)}{n}}\\right) = \\sum_{j=1}^{+\\infty}\\left(\\frac{x^j}{j}\\cdot \\sum_{n \\mid j} \\mu(n)\\right) = x = \\log(\\exp(x)) \\\\\n \t\t\t\t\\implies \\exp(x) = \\prod_{n=1}^{+\\infty} (1 - x^n)^{-\\frac{\\mu(n)}{n}}\n \t\t\t\\end{gather*}\n \t\t\twhich, translated back to formal power series, concludes the proof.\n \t\t\\end{proof}\n \t\tWe have just proved that\n \t\t\\[\n \t\t\t\\exp_p(X) = \\prod_{n=1}^{+\\infty} B_{-\\mu(n)/n,\\,p}(-X^n)\n \t\t\\]\n \t\t(recall that $B_{-\\mu(n)/n,\\,p}(X) = B_{-\\mu(n)/n}(X)$ and $\\exp_p(X) = \\exp(X)$, as elements of $\\Q \\llbracket X \\rrbracket$).\n \t\tWith this new expression of $\\exp_p(X)$ we can understand where convergence ``problems'' arise. In-fact if $p \\mid n$ and $n$ is square-free (so $\\mu(n) \\neq 0$) then $\\pabs{\\mu(n)/n} = \\pabs{n}^{-1} \\geq p$ so $B_{-\\mu(n)/n,\\,p}(-X^n)$ converges only if $\\pabs{x}^n \\in D((r_p\\pabs{n})^-)$ (see \\cref{prop:convergence-binomial}). If $n=p$ we have convergence precisely when\n \t\t\\[\n \t\t\t\\pabs{x} < \\left(p^{-1/(p-1)}\\cdot p^{-1}\\right)^{\\frac{1}{p}} = p^{-1/(p-1)} = r_p.\n \t\t\\]\n \t\tInstead, if $p \\nmid n$, we have no problems, since $-\\tfrac{\\mu(n)}{n} \\in \\Zp$ and, by \\cref{prop:convergence-binomial}, we have $B_{-\\mu(n)/n,\\,p}(-X^n) \\in \\Zp\\llbracket X \\rrbracket$ so $x \\in D(1^-)$ guarantees convergence.\tThis motivates the following definition.\n \t\t\\begin{defn}\n \t\t\t\\label{defn:artin-hasse}\n \t\t\tThe (partial) function $\\E_p(X)\\colon \\Cp \\to \\Cp$  defined by\n \t\t\t\\[\n \t\t\t\t\\E_p(X) := \\prod_{\\substack{n=1 \\\\ p\\nmid n}}^{+\\infty} B_{-\\mu(n)/n,\\,p}(-X^n) = \\prod_{\\substack{n=1 \\\\ p\\nmid n}}^{+\\infty} (1 - X^n)^{-\\frac{\\mu(n)}{n}}\n \t\t\t\\]\n \t\t\tis called the \\emph{Artin-Hasse exponential}.\n \t\t\\end{defn}\n \t \tWe observe again that  $B_{-\\mu(n)/n,\\,p}(-X^n) \\in 1 + X^n\\Q\\llbracket X \\rrbracket$ so the infinite product makes sense.\n \t \t\\begin{prop}\n \t \t\t\\label{prop:artin-hasse-formula}\n \t \t\tIn $\\Q\\llbracket X \\rrbracket$ the following holds:\n \t \t\t\\[\n \t \t\t\t\\E_p(X) = \\exp_p\\left(\\sum_{i=0\n \t \t\t\t}^{+\\infty} \\frac{X^{p^i}}{p^i}\\right).\n \t \t\t\\]\n \t \t\\end{prop}\n  \t\t\\begin{proof}\n  \t\t\tAs usual let's consider $x \\in \\R$ with $\\abs{x} < 1$: then \n  \t\t\t\\[\n  \t\t\t\t\\E_p(x) := \\prod_{\\substack{n=1 \\\\ p\\nmid n}}^{+\\infty} (1 - x^n)^{-\\frac{\\mu(n)}{n}}\n  \t\t\t\\]\n  \t\t\tand taking logarithm we obtain\n  \t\t\t\\[\n  \t\t\t\t\\log\\left(\\E_p(x)\\right) = -\\sum_{\\substack{n=1 \\\\ p \\nmid n}} \\frac{\\mu(n)}{n} \\left( \\sum_{m=1}^{+\\infty} \\frac{x^{mn}}{m}\\right) = \\sum_{j=1}^{+\\infty} \\left(\\frac{x^j}{j} \\cdot \\sum_{\\mathclap{n \\mid j,\\,p \\nmid n}} \\mu(n) \\right),\n  \t\t\t\\] 
\n  \t\t\twhere in the last step we set $j=nm$ and we rearranged the terms of the series (we can do it, the proof is analogue to the one given in \\cref{prop:formal-identity-exp}). Using \\cref{prop:mobius-function} we obtain the following relation (in $\\R$):\n  \t\t\t\\[\n  \t\t\t\t\\log\\left(\\E_p(x)\\right) = \\sum_{m=0}^{+\\infty} \\frac{x^{p^m}}{p^m} \\implies \\E_p(x) = \\exp\\left(\\sum_{m=0}^{+\\infty} \\frac{x^{p^m}}{p^m}\\right).\n  \t\t\t\\]\n  \t\t\tWe can conclude immediately applying \\cref{prop:formal-series} (recall that  $\\exp_p$ and $\\exp$ are exactly the same formal series in $\\Q\\llbracket X\\rrbracket$).\n  \t\t\\end{proof}\n  \t\tAt this point it is very easy to prove that $\\E_p(X)$ converges on $D(1^-)$ (much better than the smaller disc of convergence of $\\exp_p(X)$).\n  \t\t\\begin{prop}\n  \t\t\t\\label{prop:artin-hasse-convergence}\n  \t\t\tThe Artin-Hasse exponential $\\E_p(X)$ converges on $D(1^-)$.\n  \t\t\\end{prop} \n  \t\t\\begin{proof}\n  \t\t\tWe recall the definition of $\\E_p$:\n  \t\t\t\\[\n  \t\t\t\t\\E_p(X) := \\prod_{\\substack{n=1 \\\\ p \\nmid n}}^{+\\infty} B_{-\\mu(n)/n,\\,p}(-X^n).\n  \t\t\t\\]\n  \t\t\tNow if $p \\nmid n$ we have already proved that $B_{-\\mu(n)/n,\\,p}(-X^n) \\in 1 + X^n(\\Zp\\cap\\Q)\\llbracket X\\rrbracket$ so the whole series $\\E_p(X)$ has coefficients in $\\Zp\\cap\\Q$. Then we conclude using \\cref{prop:convergence-power-series-Zp}.\n  \t\t\\end{proof}\n  \t\tWe could have proved directly that $\\exp_p\\left(\\sum_{n=0}^{+\\infty} \\frac{x^{p^n}}{p^n}\\right) \\in \\Zp\\llbracket X\\rrbracket$ using the Dwork's lemma. In-fact\n  \t\twe already know that $\\E_p(X) \\in 1 + X\\Qp\\ser{X}$ and we can compute\n  \t\t\\begin{gather*}\n  \t\t\t\\E_p(X^p) = \\exp_p\\left(\\sum_{n=0}^{+\\infty} \\frac{X^{p^{n+1}}}{p^n}\\right), \\\\\n  \t\t\t\\E_p(X)^p = \\exp_p\\left(\\sum_{n=0}^{+\\infty} \\frac{X^{p^n}}{p^{n-1}}\\right) = \\exp_p\\left(pX + \\sum_{n=0}^{+\\infty} \\frac{X^{p^{n+1}}}{p^n}\\right),\n  \t\t\\end{gather*}\n  \t\twhere we used $\\exp_p(Y)^p = \\exp_p(pY)$ (this formal identity can be easily verified using \\cref{prop:formal-series}). 
Since $\\tfrac{\\exp_p(X)}{\\exp_p(Y)} = \\exp_p(X - Y)$ (also easy to prove) we have\n  \t\t\\[\n  \t\t\t\\frac{\\E_p(X^p)}{\\E_p(X)^p} = \\frac{\\exp_p\\left(\\sum_{n=0}^{+\\infty} \\frac{X^{p^{n+1}}}{p^n}\\right)}{\\exp_p\\left(pX + \\sum_{n=0}^{+\\infty} \\frac{X^{p^{n+1}}}{p^n}\\right)} = \\exp_p(-pX) \\in 1 + pX\\Zp\\ser{X}\n  \t\t\\]\n  \t\tand we can conclude that $\\E_p(X) \\in 1 + X\\Zp\\ser{X}$ thanks to \\cref{lemma:dwork}.", "meta": {"hexsha": "956d8e2d0310fe908069e823089e90242119d5ff", "size": 57562, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mainmatter/chapter4.tex", "max_stars_repo_name": "carlo300/BachelorThesis", "max_stars_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-21T10:59:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-29T10:11:24.000Z", "max_issues_repo_path": "Mainmatter/chapter4.tex", "max_issues_repo_name": "carlo300/BachelorThesis", "max_issues_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mainmatter/chapter4.tex", "max_forks_repo_name": "carlo300/BachelorThesis", "max_forks_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.4209183673, "max_line_length": 1190, "alphanum_fraction": 0.5988499357, "num_tokens": 23293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5610286715599435}}
{"text": "%\n% 211\n%\n\\chapter{Integral Equations}\n\n\\Section{11}{1}{Definition of an integral equation.}\n\nAn integral equation is one which involves an unknown function under\nthe sign of integration; and the process of determining the unknown\nfunction is called solving the equation*.\n\nThe introduction of integral equations into analysis is due to Laplace\n(1782) who considered the equations\n\nf x) = [e f (f) (t) dt, g x) = [ - 4> (t) dt\n\n(where in each case (f) represents the unknown function), in connexion\nwith the solution of differential equations. The first integral\nequation of which a solution was obtained, was Fourier's equation\n\nf(.v) = l cos (xt) (f> (t) dt,\n\nof which, in certain circumstances, a solution isf\n\n2 f\n\n(f)( x) = - I cos (ux)f(u) du,\n\nTTJo\n\nf x) being an even function of a;, since cos xt) is an even function.\n\nLater, Abel+ was led to an integral equation in connexion with a\nmechanical problem and obtained two solutions of it; after this,\nLiouville investigated an integral equation which arose in the course\nof his researches on differential equations and discovered an\nimportant method for solving integral equations \u00a7, which will be\ndiscussed in \\hardsectionref{11}{4}.\n\nIn recent years, the subject of integral equations has become of some\nimportance in various branches of Mathematics; such equations (in\nphysical problems) frequently involve repeated integrals and the\ninvestigation of them naturally presents greater difficulties than do\nthose elementary equations which will be treated in this chapter.\n\nTo render the analysis as easy as possible, we shall suppose\nthroughout that the constants a, h and the variables x, y, are real\nand further that\n\n* Except in the case of Fourier's integral \\hardsectionref{9}{7}) we practically\nalways need continuom solutions of integral equations.\n\nt If this value of (p be substituted in the equation we obtain a\nresult which is, effectively, that of \\hardsectionref{9}{7}.\n\n+ Solution de quelques problemes a Vaide d'integrales definies (1823).\nSee Oeuvres, i. pp. 11, 97.\n\n\u00a7 The numerical computation of solutions of integral equations has\nbeen investigated recently by Whittaker, Proc. Roijal Soc. xciv. (A),\n(1918), pp. 367-383.\n\nU-2\n\n%\n% 212\n%\n\nO', y, i>' also that the given function* K (x, y), which occurs under\nthe integral sign in the majority of equations considered, is a real\nfunction of a; and y and either (i) it is a continuous function of\nboth variables in the range (a x b, a i/ b), or (ii) it is a\ncontinuous function of both variables in the range a y x b and K (x,\ny) = when y > x; in the latter case K x, y) has its discontinuities\nregularly distributed, and in either case it is\n\neasily proved that, iff(y) is continuous when a y b, f(y) K x, y) dy\nis a\n\nJ a\n\ncontinuous function of x when a x b.\n\n11 'll. An algebraical lemma.\n\nThe algebraical result which will now be obtained is of great\nimportance in Fredholm's theory of integral equations.\n\nLet (a'i, 3/1, Zy), .V2, i/2, 22)) ( '35 3) 3) be three points at\nunit distance from the origin. The greatest (numerical) value of the\nvolume of the parallelepiped, of which the lines joining the origin to\nthese points are conterminous edges, is +1, the edges then being\nperpendicular. Therefore, if Xr + 7/ + Zr = l r = l, 2, 3), the upper\nand lower bounds of the determinant\n\n 2 2/2 22 I\n\n 3 3 23 I\n\nare \u00b11.\n\nA lemma due to Hadamardt generalises this result.\n\nLet\n\n 21? 0 22, ... 
a n\n\nA\n\n nl) )!2'  < n\n\nn\n\nwhere a,. is real and 2 a\" r = l ( i = l, 2, ... n); let A j. be the\ncofactor of a r in D and let A be the determinant whose elements are A\nj., so that, by a well-known theorem |,\n\nSince Z) is a continuous function of its elements, and is obviously\nbounded, the ordinary theory of maxima and minima is applicable, and\nif we consider variations in\n\n\" dD\n\nair ( '=!) 2, ... n) only, D is stationary for such variations if 2 -\n8aij. = 0, where Saj,...\n\nr=l VOlir\n\nn\n\nare variations subject to the sole condition 2 a.ir8air=0; therefore\n\u00a7\n\nr = l\n\nn\n\nbut 2 airAir=I), and so X'2a\\ r = D; therefore Air = Da.ir-\n\nr=l\n\n* Bocher in his important work on integral equations (Camb. Math.\nTracts, No. 10), always considers the more general case in which A'\n(x, y) has discontinuities regularly distributed, i.e. the\ndiscontinuities are of the nature described in Chapter iv, example 10.\nThe reader will see from that example that the results of this chapter\ncan almost all be generalised in this way. To make this chapter more\nsimple we shall not consider such generalisations.\n\n+ Bulletin des Sci. Math. (2), xvii. (1893), p. 240.\n\nJ Burnside and Panton, Theory of Equations, ii. p. 40.\n\n\u00a7 By the ordinary theory of undetermined multipliers.\n\n%\n% 213\n%\n\nConsidering variations in the other elements of D, we see that D is\nstationary for variations in all elements when Amr=Damj. (m=l, 2, ...\nn; r=l, 2, ... n). Consequently A = D\". D, and so I)' ' - D ~ . Hence\nthe maximum and minimum values of I) are +1.\n\nCorollary. If a r be real and subject only to the condition | a I < j\nsince\n\n2 a,,/( ii/) 2 1,\n\nr=l\n\nwe easily see that the maximum value of | i9 | is (/i J/)\" = ?i \"i/\".\n\n\\Section{11}{2}{Fredholms* equation and its tentative solution.}\nAn important\nintegral equation of a general type is\n\nJ a\n\nwhere f(x) is a given continuous function, X is a parameter (in\ngeneral complex) and K (oo, ) is subject to the conditions f laid down\nin \\hardsectionref{11}{1}. K (x, ) is called the nucleusX of the equation.\n\nThis integral equation is known as Fredholms equation or the integral\nequation of the second kind (see \\hardsectionref{11}{3}). It was observed by Volterra\nthat an equation of this type could be regarded as a limiting form of\na system of linear equations. Fredholm's investigation involved the\ntentative carrying out of a similar limiting process, and justifying\nit by the reasoning given below in \\hardsubsectionref{11}{2}{1}. Hilbert (Gottinger Nach.\n1904, pp. 49-91) justified the limiting process directly.\n\nWe now proceed to write down the system of linear equations in\nquestion, and shall then investigate Fredholm's method of justifying\nthe passage to the limit.\n\nThe integral equation is the limiting form (when 5-*-0) of the\nequation\n\n<i> x) =f x) + \\ i K X, X,) (t> (.r,) 8, where x - Xq\\ i = 8, XQ = a,\n.r = b.\n\nSince this equation is to be true when a x b, it is true when x takes\nthe values Xi, Xi, ... Xni and so\n\nn\n\n-\\ 8 2 K x.p, x ) (.rg) + (f) (Xj,) =/ (Xj,) ip = l,2,... n).\n\nq=l\n\n* Fredholm's first paper On the subject appeared in the Ofversigt af\nK. Vetenskaps-Akad. Forhandlingar (Stockholm), lvii. (1900), pp.\n39-46. His researches are also given in Acta Blath. xxvii. 
(1903), pp.\n365-390.\n\nt The reader will observe that if K x, |) = |>x), the equation may be\nwritten\n\n<p (x) =f (x) +\\ Tk (x, I) ( ) d|.\n\nThis is called an equation with variable upper limit.\n\n1;. Another term is kernel; French noyau, German Kern.\n\n%\n% 214\n%\n\nThis system of equations for < (.rp), p=l, 2, ... 7i) has a unique\nsolution if the determinant formed by the coefficients of x ) does not\nvanish. This determinant is\n\nZ> (X) = 1 - X8 K (. 1,xi) -X8K .vi, x. ) ... -UK x, .v ) -\\ 8K x.,\n, A'l) 1 - XS K x.,, .r.,) ... - X S A' (.rg, . )\n\n-\\ 8K (.r, xi) - \\ 8 K x, x.,) ... 1-X8K (x, x ) = 1 - X 2 8A' x,\n.Vp) +- 2 S2 1 ''''' '  '\" \" '\n\np = l 2 !p, q=l I K X, Xp) A [Xq, Xg)\n\nX3 )i\n\n';>> <I, r=l\n\n+ ...\n\n \"(\" 'p' ' P' V\" p \" '9/ ' (.\" p> \" r)\n\nK Xq,Xp) K Xg,.Vq) K Xg,Xr)\n\nlK x,.,Xp) K Xr,Xq) K Xr, Xr)\n\non expanding* in powers of X.\n\nMaking S-3-O, ti-s-qo, and writing the summations a-s integrations, we\nare thus led to consider the series\n\n/6 x2 [ [\n\ni)(X) = l-X| A' (li, 0 1 + 2] / j\n\n ( 2, 1) />:( 2, I2)\n\nd idi;-...\n\nFurther, if Z) (.r, x,) is the cofactor of the term in I)n ) which\ninvolves K x\\, x J, the solution of the system of linear equations is\n\n,, /( i)i) (.r, xi)+f x.2) Z) (av, .v.i) + ...+f xJD x, Xn)\n\nNow it is easily seen that the appropriate limiting form to be\nconsidered in association with Da Xf, x ) is D ); also that, if /n +\ni',\n\nDn (AV, -iV) = U\\ K x, x ) - XS 2\n\nK(x,x ) K x,Xp)\n\nA Xpi Xi,) A (A'p, Xp)\n\n+,\\ \\ 8 2\n\n ( A'- V) Xf,.Vp) K x,Xg)\n\nK Xp, .r ) K (Xp, .?7p) (Xp,; g)\n\nA' (.Pg, X ) K (Xg, Xp) K Xg, Xg)\n\nSo that the limiting form for 8~ I) x ., x ) to be considered t is\n\nD (.r,Xt,; X) = X A\" x\n\n>, Av)-X2 j\n\n2! ja Ja A:( i,\n\n'' I K x,x ) A'(.r, 1) I\n\nK 2,X;) K -2, l) ( 2,6)\n\n Il' l2-----\n\nConsequently we are led to consider the possibility of the equation\n<P(x)=f x) + J \\ [ d .x,; ) fii)d giving the solution of the integral\nequation.\n\n* The factorials appear because each determinant of s rows and columns\noccurs s ! times as p, q, ... take all the values 1, 2, ... n, whereas\nit appears only once in the original determinant for D (X).\n\nt The law of formation of successive terms is obvious from those\nwritten down.\n\n] 1 -21] INTEGRAL EQUATIONS\n\nExample 1. Shew that, in the case of the equation\n\n  x) = x->r\\ \\ xii(l) y)dy, J\n\nD ) = 1\\, D (.r, y; X) = \\ xy Zx\n\n%\n% 215\n%\n\nwe have\n\nand a solution is\n\n0( ) =\n\n3-X'\n\nwe have\n\nExample 2. Shew that, in the case of the equation\n\n<j> x) = x + \\ I (xy+y )<f>(i/)dy,\n\nJ\n\nD x, y; X) = X xy + ?/2) +X xy -,xy - If + y\\ and obtain a sohition\nof the equation.\n\n\\Subsection{11}{2}{1}{Investigation of Fredholm's solution.}\n\nSo far the construction of the solution has been purely tentative; we\nnow start ah initio and verify that we actually do get a solution of\nthe equation; to do this we consider the two functions D ( ), D x,\ny; ) arrived at in \\hardsectionref{11}{2}.\n\nWe write the series, by which D ( -) was defined in \\hardsectionref{11}{2}, in the\nform 1 + 2 so that\n\n =i n\\\n\nrb fb rb J a -1 a J a\n\nd,d 2  d n;\n\nsince K x, y) is continuous and therefore bounded, we have \\ K x,y)\\ <\nM, where M is independent of x and y; since K x, y) is real, we may\nemploy Hadamard's lemma \\hardsubsectionref{11}{1}{1}) and we see at once that\n\nWrite n 'M'' (b - a)\" = n ! 
6; then\n\nlim(6.WM=lim f-\" - f|(l+jYf\n\nu oo M oo (71+1)5 (\\ nj )\n\nsince ( 1 +\n\nThe series 2 bnX\"- is therefore absolutely convergent for all values\nof X,;\n\nw = l\n\nand so \\hardsubsectionref{2}{3}{4}) the series 1+2 - ~ converges for all values of X and\nthere- fore \\hardsubsectionref{5}{6}{4}) represents an integral function of X.\n\nNow Tfrite the series for D (x, y;X) in the form 2 - - '- - .\n\n%\n% 216\n%\n\nThen, by Hadamard's lemma (\\hardsubsectionref{11}{1}{1}),\n\nand hence ' < Cn, where c is independent of a; and y and 2 Cn> ' is n\\\n=o\n\nabsohitely convergent.\n\nTherefore D x, y; ) is an integral function of \\ and the series for D\nx, y\\ \\ )- \\ K (x, y) is a uniformly convergent \\hardsubsectionref{3}{3}{4}) series of\ncontinuous* functionsj|of x and y when a x b, a y b.\n\nNow pick out the coefficient of A\" (a;, y) in D(x, y;X); and we get\n\nD x,y;X) \\ D ) K x, y) + 2 (-) X +', where\n\nExpanding in minors of the first column, we g t Q x, y) equal to the\nintegral of the sum of n determinants; writing | i, o, ... m-i, |,\n,n,   n-\\ in place of fi, o, ... in the ??ith of them, we see that\nthe integrals of all the determinants + are equal and so\n\nQn x,y) = -n\\ K x, y) Pnd d i    d n-i,\n\nJ a J a J a\n\nwhere\n\nPn=\\ K X, I), K X,,), ... K(x, | \\ 0\n\ni di.a K A ... A-(inf -o\n\nIt follows at once that\n\nD(x,y\\ X) = \\ D(X)K(x,y)+\\ ( D (x, \\ X)K ly)dl\n\n. a\n\nNow take the equation\n\n<i> )=fi ) + \\' K ly)<f> y)dy,\n\nJ a\n\nmultiply by D (x,; X) and integrate, and we get\n\nl'f(BD(,; )d\n\nJ a\n\n= I % (I) 0 -. ?; ) ? - r [' (' '; ) < ' ) > y* '\n\n.'a . a J i(\n\nthe integrations in the repeated integral being in either order.\n\n* It is easy to verify that every term (except possibly the first) of\nthe series for D (x, y; ) is a continuous function under either\nhypothesis (i) or hypothesis (ii) of \\hardsectionref{11}{1}.\n\nh\n\nt The order of integration is immaterial \\hardsectionref{4}{3}).\n\nt\n\n%\n% 217\n%\n\nThat is to say\n\nJ a\n\n= r<j>( )D(w,; ) d -r D w,y; ) -\\ D( ) K x,y)]<j>(y)dy\n\nJ a J a\n\n= \\ D ) \\' K x,y)<i> y)dy\n\nin virtue of the given equation.\n\nTherefore if D X) 0 and if Fredholm's equation has a solution it can\nbe none other than\n\n< X) =f x) +jy( ) d;\n\nand, by actual substitution of this value of <f> x in the integral\nequation, we see that it actually is a solution.\n\nThis is, therefore, the unique continuous solution of the equation if\ni)(X)4=0.\n\nCorollary. If we put /( ) = 0, the ' homogeneous ' equation\n\nJ a\n\nhas no continuous solution except (j) (.i')=0 unless D X) = 0.\n\nExample 1. By expanding the determinant involved in \\$ (.r, y) in\nminors of its first row, shew that\n\nD x,y; ) = \\ D K)K x,y) + \\ \\ ' K x, )D,y; ) d .\n\nJ a\n\nExample 2. By using the formulae\n\n2)(X) = 1+ i \"\"fi, D x,y; ) = \\ D X) K(x, y)+ i ( - )\" \" ' ' \"/'' \\\n/i=i 'I 11=1 \"\n\nshew that f \" i> (,; X) f l = - X .\n\nExample 3. If K x, y) = 1 (y 0) i \\ y) = (y > ' )-> shew that D ) = b-\na) X.\n\nExample 4. Shew that, if K (.r, y)=fi (x) .f [y), and if\n\n-6\n\nfi )f2 x)dx = A,\n\nthen\n\nD ) = A\\ D x,y; ) = \\ h x)f., (y),\n\nand the solution of the corresponding integral equation is\n\n%\n% 218\n%\n\nExample 5. 
Shew that, if\n\nK x, y) =/i x) gi y) +f x) 2 y), then D (X) and D x, y; X) are\nquadratic in X; and, more generally, if\n\nII K x,y)= 2 f x)g,n ),\n\nm = l\n\nthen i)(X) and D x, y, X) are polynomials of degree n in X.\n\n\\Subsection{11}{2}{2}{Volterra's reciprocal functions.}\n\nTwo functions K x, y), k x, y; X) are said to be reciprocal if they\nare bounded in the ranges a- x, y - 1), any discontinuities they may\nhave are regularly distributed (\\hardsectionref{11}{1}, footnote), and if\n\nK x,y)+k x,y] ) = \\ \\ k x, \\ \\ ) K, y)d .\n\nJ a\n\nWe observe that, since the right-hand side is continuous*, the surn of\ntwo reciprocal functions is continuous.\n\nAlso, a function K x, y) can only have one reciprocal if Z) (X) 4=;\nfor if there were two, their difference k x, y) would be a continuous\nsolution of the homogeneous equation\n\nh x,y; X) = X f >(-,(.r, )K k,y)di,\n\n(where x is to be regarded as a parameter), and by \\hardsubsectionref{11}{2}{1} corollary,\nthe only continuous solution of this equation is zero.\n\nBy the use of reciprocal functions, Volterra has obtained an elegant\nreciprocal relation between pairs of equations of Fredholm's type.\n\nWe first observe, from the relation\n\nB x,y; ) = \\ D X)K x,y) + \\ \\ D x, -X) K H y) d\n\nproved in \\hardsubsectionref{11}{2}{1}, that the value of A;(, y; X) is\n\n-D x,y;X)l[\\ D ) ], and from \\hardsubsectionref{11}{2}{1} example 1, the equation\n\nk x, y; ) + K x, y) = X f K x, ) k, y;X)d\n\nJ a M\n\nis evidently true.\n\nThen, if we take the integral equation\n\n< > x)=f(x) + xl'K x, )<f>( )d,\n\nJ a\n\nwhen a'> x h, we have, on multiplying the equation\n\nJ a * By example 10 at the end of Chapter iv.\n\n%\n% 219\n%\n\nby k (x,; ) and integrating,\n\nJ a\n\n= r k(x, : ) f( )d + xf' I' k(x, : ) K 1,)<f>(,)d,dl\n\nJ a J II J a\n\nReversing the order of integration* in the repeated integral and\nmaking use of the relation defining reciprocal functions, we get\n\n\\ \\ x, : ) 4>( )di\n\nJ a\n\n= !'k(w, : ) f( )d +r K(x,,) + k w,,;X) < (|0 i,\n\nJ a J a\n\nand so X f '(, X)/( )fZ = -X l K (x, ) <f> (,) d,\n\nJ a n\n\n= -<P(x)+f x). Hence f(x) = ( ) + \\ f ' (a;, f; ) /( ) d;\n\n. a\n\nsimilarly, from this equation we can derive the equation\n\n4> x)=f(x) + \\ f'K(x, )4>( )d,\n\nJ II\n\nso that either of these equations with reciprocal nuclei may be\nregarded as the solution of the other.\n\n\\Subsection{11}{2}{3}{Homogeneous integral equations.}\n\nrb The equation < x) = X ) K x, ) </> ( ) d is called a homogeneous\nintegral\n\n- n\n\nequation. We have seen (\\hardsubsectionref{11}{2}{1} corollary TODO) that the only continuous\nsolution of the homogeneous equation, when D (X) 4= 0, is ( x) = 0.\n\nThe roots of the equation D (X) = are therefore of considerable\nimportance in the theory of the integral equation. They are called the\ncharacteristic numbers of the nucleus.\n\nIt will now be shewn that, when D (X) = 0, a solution which is not\nidentically zero can be obtained.\n\nLet+ X = Xo be a root m times repeated of the equation D (X) = 0.\n\nSince D (X) is an integral function, we may expand it into the\nconvergent series\n\nD (X) = Cm (X - Xo)\" + c,n, (X - Xor +1 + . . . 
\Subsection{11}{2}{3}{Homogeneous integral equations.}

The equation
\[
\phi(x)=\lambda\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi
\]
is called a \textit{homogeneous} integral equation. We have seen (\hardsubsectionref{11}{2}{1} corollary) that the only continuous solution of the homogeneous equation, when $D(\lambda)\neq 0$, is $\phi(x)=0$.

The roots of the equation $D(\lambda)=0$ are therefore of considerable importance in the theory of the integral equation. They are called the \textit{characteristic numbers}* of the nucleus.

* French \textit{valeurs caract\'eristiques}, German \textit{Eigenwerthe}.

It will now be shewn that, when $D(\lambda)=0$, a solution which is not identically zero can be obtained.

Let† $\lambda=\lambda_0$ be a root $m$ times repeated of the equation $D(\lambda)=0$. Since $D(\lambda)$ is an integral function, we may expand it into the convergent series
\[
D(\lambda)=c_m(\lambda-\lambda_0)^m+c_{m+1}(\lambda-\lambda_0)^{m+1}+\cdots\qquad(m>0,\ c_m\neq 0).
\]

† It will be proved in \hardsubsectionref{11}{5}{1} that, if $K(x,y)=K(y,x)$, the equation $D(\lambda)=0$ has at least one root.

Similarly, since $D(x,y;\lambda)$ is an integral function of $\lambda$, there exists a Taylor series of the form
\[
D(x,y;\lambda)=g_l(x,y)(\lambda-\lambda_0)^l+g_{l+1}(x,y)(\lambda-\lambda_0)^{l+1}+\cdots;
\]
by \hardsubsectionref{3}{3}{4} it is easily verified that the series defining $g_n(x,y)$ $(n=l,\,l+1,\ldots)$ converge absolutely and uniformly when $a\leq x\leq b$, $a\leq y\leq b$, and thence that the series for $D(x,y;\lambda)$ converges absolutely and uniformly in the same domain of values of $x$ and $y$.

But, by \hardsubsectionref{11}{2}{1} example 2,
\[
\int_a^b D(\xi,\xi;\lambda)\,d\xi=-\lambda\,\frac{dD(\lambda)}{d\lambda};
\]
now the right-hand side has a zero of order $m-1$ at $\lambda_0$, while the left-hand side has a zero of order at least $l$, and so we have $m-1\geq l$.

Substituting the series just given for $D(\lambda)$ and $D(x,y;\lambda)$ in the result of \hardsubsectionref{11}{2}{1} example 1, viz.
\[
D(x,y;\lambda)=\lambda D(\lambda)K(x,y)+\lambda\int_a^b K(x,\xi)\,D(\xi,y;\lambda)\,d\xi,
\]
dividing by $(\lambda-\lambda_0)^l$ and making $\lambda\to\lambda_0$ (the term in $D(\lambda)$ vanishes in the limit, since $l\leq m-1$), we get
\[
g_l(x,y)=\lambda_0\int_a^b K(x,\xi)\,g_l(\xi,y)\,d\xi.
\]

Hence if $y$ have any constant value, $g_l(x,y)$ satisfies the homogeneous integral equation, and any linear combination of such solutions, obtained by giving $y$ various values, is a solution.

\textit{Corollary.} The equation
\[
\phi(x)=f(x)+\lambda_0\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi
\]
has no solution or an infinite number. For, if $\phi(x)$ is a solution, so is $\phi(x)+c_y\,g_l(x,y)$, where $c_y$ may be any function of $y$.

\textit{Example} 1. Shew that solutions of
\[
\phi(x)=\lambda\int_0^{2\pi}\cos^{n}(x-\xi)\,\phi(\xi)\,d\xi
\]
are $\phi(x)=\cos(n-2r)x$ and $\phi(x)=\sin(n-2r)x$, where $r$ assumes all positive integral values (zero included) not exceeding $\frac{1}{2}n$.

\textit{Example} 2. Shew that
\[
\phi(x)=\lambda\int_0^{2\pi}\cos^{n}(x+\xi)\,\phi(\xi)\,d\xi
\]
has the same solutions as those given in example 1, and shew that the corresponding values of $\lambda$ give all the roots of $D(\lambda)=0$.
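Characteristic numbers can be approximated by discretising the nucleus: the homogeneous equation then becomes an ordinary eigenvalue problem, the characteristic numbers being the reciprocals of the nonzero eigenvalues of the discretised operator. A sketch (my illustration, assuming NumPy; it uses the nucleus of example 1 with $n=1$, whose characteristic number is $1/\pi$ with solutions $\cos x$ and $\sin x$):

\begin{verbatim}
# Sketch (illustrative assumptions: uniform quadrature, the nucleus
# cos(x - xi) of example 1 with n = 1): characteristic numbers are the
# reciprocals of the nonzero eigenvalues of the discretised operator.
import numpy as np

m = 800
x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
w = 2.0 * np.pi / m                      # uniform weights on (0, 2*pi)

K = np.cos(x[:, None] - x[None, :])
mu = np.linalg.eigvals(w * K)            # eigenvalues of the operator
mu = mu[np.abs(mu) > 1e-8].real          # discard the numerically zero ones

print(np.sort(1.0 / mu))                 # both close to 1/pi: a double root
\end{verbatim}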
\Section{11}{3}{Integral equations of the first and second kinds.}

Fredholm's equation is sometimes called an integral equation of the \textit{second} kind; while the equation
\[
f(x)=\lambda\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi
\]
is called the integral equation of the \textit{first} kind.

In the case when $K(x,\xi)=0$ if $\xi>x$, we may write the equations of the first and second kinds in the respective forms
\[
f(x)=\lambda\int_a^x K(x,\xi)\,\phi(\xi)\,d\xi,\qquad
\phi(x)=f(x)+\lambda\int_a^x K(x,\xi)\,\phi(\xi)\,d\xi.
\]
These are described as equations with \textit{variable upper limits}.

\Subsection{11}{3}{1}{Volterra's equation.}

The equation of the first kind with variable upper limit is frequently known as \textit{Volterra's equation}. The problem of solving it has been reduced by that writer to the solution of Fredholm's equation.

Assuming that $K(x,\xi)$ is a continuous function of both variables when $\xi\leq x$, we have
\[
f(x)=\lambda\int_a^x K(x,\xi)\,\phi(\xi)\,d\xi.
\]
The right-hand side has a differential coefficient (\hardsectionref{4}{2} example 1) if $\dfrac{\partial K}{\partial x}$ exists and is continuous, and so
\[
f'(x)=\lambda K(x,x)\,\phi(x)+\lambda\int_a^x\frac{\partial K}{\partial x}\,\phi(\xi)\,d\xi.
\]
This is an equation of Fredholm's type. If we denote its solution by $\phi(x)$, we get, on integrating from $a$ to $x$,
\[
f(x)-f(a)=\lambda\int_a^x K(x,\xi)\,\phi(\xi)\,d\xi,
\]
and so the solution of the Fredholm equation gives a solution of Volterra's equation if $f(a)=0$.

The solution of the equation of the first kind with constant upper limit can frequently be obtained in the form of a series*.

* See example 7, p.~231; a solution valid under fewer restrictions is given by B\^ocher.

\Section{11}{4}{The Liouville--Neumann method of successive substitutions†.}

† \textit{Journal de Math.} ii (1837), iii (1838). K. Neumann's investigations were later (1870); see his \textit{Untersuchungen \"uber das logarithmische und Newton'sche Potential}.

A method of solving the equation
\[
\phi(x)=f(x)+\lambda\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi,
\]
which is of historical importance, is due to Liouville.

It consists in continually substituting the value of $\phi(x)$ given by the right-hand side in the expression $\phi(\xi)$ which occurs on the right-hand side.

This procedure gives the series
\[
S(x)=f(x)+\lambda\int_a^b K(x,\xi)\,f(\xi)\,d\xi
+\sum_{m=2}^{\infty}\lambda^m\int_a^b\!\!\int_a^b\cdots\int_a^b
K(x,\xi_1)\,K(\xi_1,\xi_2)\cdots K(\xi_{m-1},\xi_m)\,f(\xi_m)\,d\xi_m\cdots d\xi_1.
\]
Since $|K(x,y)|$ and $|f(x)|$ are bounded, let their upper bounds be $M$, $M'$. Then the modulus of the general term of the series does not exceed
\[
|\lambda|^m M^m M'(b-a)^m.
\]
The series for $S(x)$ therefore converges uniformly when $|\lambda|<M^{-1}(b-a)^{-1}$; and, by actual substitution, it satisfies the integral equation.

If $K(x,y)=0$ when $y>x$, we find by induction that the modulus of the general term in the series for $S(x)$ does not exceed
\[
|\lambda|^m M^m M'(x-a)^m/m!\leq|\lambda|^m M^m M'(b-a)^m/m!,
\]
and so the series converges uniformly for all values of $\lambda$; and we infer that in this case Fredholm's solution is an integral function of $\lambda$.

It is obvious from the form of the solution that, when $|\lambda|<M^{-1}(b-a)^{-1}$, the reciprocal function $k(x,\xi;\lambda)$ may be written in the form
\[
k(x,\xi;\lambda)=-K(x,\xi)-\sum_{m=2}^{\infty}\lambda^{m-1}\int_a^b\cdots\int_a^b
K(x,\xi_1)\,K(\xi_1,\xi_2)\cdots K(\xi_{m-1},\xi)\,d\xi_1\cdots d\xi_{m-1},
\]
for with this definition of $k(x,\xi;\lambda)$ we see that
\[
S(x)=f(x)-\lambda\int_a^b k(x,\xi;\lambda)\,f(\xi)\,d\xi,
\]
so that $k(x,\xi;\lambda)$ is a reciprocal function, and by \hardsubsectionref{11}{2}{2} there is only one reciprocal function.

Write
\[
K(x,\xi)=K_1(x,\xi),\qquad \int_a^b K(x,\eta)\,K_n(\eta,\xi)\,d\eta=K_{n+1}(x,\xi),
\]
and then we have
\[
-k(x,\xi;\lambda)=\sum_{m=1}^{\infty}\lambda^{m-1}K_m(x,\xi),
\]
while
\[
\int_a^b K_m(x,\eta)\,K_n(\eta,\xi)\,d\eta=K_{m+n}(x,\xi),
\]
as may be seen at once on writing each side as an $(m+n-1)$-tuple integral. The functions $K_m(x,\xi)$ are called \textit{iterated functions}.
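The method of successive substitutions translates directly into an iteration on a grid. The following sketch rests on illustrative assumptions (NumPy, a rectangle rule, a nucleus with $M=e$ on the unit square, and a $\lambda$ chosen inside the disc $|\lambda|<M^{-1}(b-a)^{-1}$) and shows the iteration converging to the solution of the discretised equation.

\begin{verbatim}
# A direct sketch of the Liouville-Neumann process (my own illustration):
# iterate phi_{r+1}(x) = f(x) + lam * int_a^b K(x,xi) phi_r(xi) dxi, which
# converges when |lam| < 1 / (M (b - a)), M = sup |K|.
import numpy as np

n, a, b = 400, 0.0, 1.0
x = np.linspace(a, b, n)
w = (b - a) / n

K = np.exp(x[:, None] * x[None, :])      # sup |K| = e on the square
f = np.sin(np.pi * x)
lam = 0.3 / (np.e * (b - a))             # safely inside the disc

phi = f.copy()
for _ in range(60):                      # successive substitution
    phi = f + lam * w * K @ phi

# Compare with the direct solution of the discretised linear system:
direct = np.linalg.solve(np.eye(n) - lam * w * K, f)
print(np.max(np.abs(phi - direct)))      # ~ machine precision
\end{verbatim}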
\Section{11}{5}{Symmetric nuclei.}

If $K(x,y)=K(y,x)$, the nucleus $K(x,y)$ is said to be \textit{symmetric}. The iterated functions of such a nucleus are also symmetric, i.e.\ $K_n(x,y)=K_n(y,x)$ for all values of $n$; for, if $K_n(x,y)$ is symmetric, then
\[
K_{n+1}(x,y)=\int_a^b K(x,\xi)\,K_n(\xi,y)\,d\xi
=\int_a^b K_n(y,\xi)\,K(\xi,x)\,d\xi=K_{n+1}(y,x),
\]
and the required result follows by induction.

Also, none of the iterated functions are identically zero; for, if possible, let $K_p(x,y)\equiv 0$; let $n$ be chosen so that $2^{n-1}<p\leq 2^n$; then, since $K_p(x,y)\equiv 0$, it follows from the recurrence formula that $K_{2^n}(x,y)\equiv 0$.

But then
\[
0=K_{2^n}(x,x)=\int_a^b K_{2^{n-1}}(x,\xi)\,K_{2^{n-1}}(\xi,x)\,d\xi
=\int_a^b\{K_{2^{n-1}}(x,\xi)\}^2\,d\xi,
\]
and so $K_{2^{n-1}}(x,\xi)\equiv 0$; continuing this argument, we find ultimately that $K_1(x,y)\equiv 0$, and the integral equation is trivial.

\Subsection{11}{5}{1}{Schmidt's* theorem that, if the nucleus is symmetric, the equation $D(\lambda)=0$ has at least one root.}

* The proof given is due to Kneser, \textit{Palermo Rendiconti}, xxii (1906), p. 236.

To prove this theorem, let
\[
U_n=\int_a^b K_n(x,x)\,dx,
\]
so that, when $|\lambda|<M^{-1}(b-a)^{-1}$, we have, by \hardsubsectionref{11}{2}{1} example 2 and \hardsectionref{11}{4},
\[
\frac{1}{D(\lambda)}\,\frac{dD(\lambda)}{d\lambda}=-\sum_{n=1}^{\infty}U_n\lambda^{n-1}.
\]

Now since
\[
\int_a^b\!\!\int_a^b\{\mu K_{n+1}(x,\xi)+K_{n-1}(x,\xi)\}^2\,d\xi\,dx\geq 0
\]
for all real values of $\mu$, we have
\[
\mu^2 U_{2n+2}+2\mu U_{2n}+U_{2n-2}\geq 0,
\]
and so $U_{2n}^2\leq U_{2n+2}\,U_{2n-2}$.

Therefore $U_2, U_4,\ldots$ are all positive, and if $U_4/U_2=\nu^2$, it follows, by induction from the inequality $U_{2n}^2\leq U_{2n+2}U_{2n-2}$, that $U_{2n+2}/U_{2n}\geq\nu^2$.

Therefore, when $|\lambda|\geq\nu^{-1}$, the terms of $\sum U_{2n}\lambda^{2n-1}$ do not tend to zero; and so, by \hardsectionref{5}{4}, the function $\dfrac{1}{D(\lambda)}\dfrac{dD(\lambda)}{d\lambda}$ has a singularity inside or on the circle $|\lambda|=\nu^{-1}$; but since $D(\lambda)$ is an integral function, the only possible singularities of $\dfrac{1}{D(\lambda)}\dfrac{dD(\lambda)}{d\lambda}$ are at zeros of $D(\lambda)$; therefore $D(\lambda)$ has a zero inside or on the circle $|\lambda|=\nu^{-1}$.

[\textsc{Note.} By \hardsubsectionref{11}{2}{1}, $D(\lambda)$ is either an integral function or else a mere polynomial; in the latter case, it has a zero by \hardsubsectionref{6}{3}{1} example 1; the point of the theorem is that in the former case $D(\lambda)$ cannot be such a function as $e^{\lambda}$, which has no zeros.]
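The quantities $U_n$ in Kneser's proof are easy to compute for a concrete symmetric nucleus. In the sketch below (my illustration; NumPy, a rectangle rule, and the nucleus $\min(x,y)$ on $(0,1)$, whose smallest characteristic number is $\frac{1}{4}\pi^2\approx 2.467$) the radius $\nu^{-1}=(U_2/U_4)^{1/2}$ comes out slightly above that number, so the circle $|\lambda|=\nu^{-1}$ does contain a zero of $D(\lambda)$, as the theorem asserts.

\begin{verbatim}
# Sketch of the quantities in Schmidt's proof (illustrative, not W&W's
# text): U_n = int K_n(x,x) dx for the symmetric nucleus min(x,y).
import numpy as np

n, a, b = 600, 0.0, 1.0
x = np.linspace(a, b, n)
w = (b - a) / n

K = np.minimum(x[:, None], x[None, :])   # symmetric nucleus min(x, y)
Kn = K.copy()                            # iterated functions K_n
U = []
for _ in range(6):
    U.append(w * np.trace(Kn))           # U_n for n = 1, 2, 3, ...
    Kn = Kn @ K * w                      # K_{n+1} = int K(x,t) K_n(t,y) dt

# The circle |lam| = (U_2/U_4)^(1/2) contains a zero of D(lam):
print((U[1] / U[3]) ** 0.5)              # ~ 2.49, just above pi^2/4 = 2.467
\end{verbatim}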
\Section{11}{6}{Orthogonal functions.}

The real continuous functions $\phi_1(x),\phi_2(x),\ldots$ are said to be \textit{orthogonal} and \textit{normal}* for the range $(a,b)$ if
\[
\int_a^b\phi_m(x)\,\phi_n(x)\,dx=0\quad(m\neq n),\qquad
\int_a^b\{\phi_n(x)\}^2\,dx=1.
\]

* They are said to be orthogonal if the first equation only is satisfied; the systematic study of such functions is due to Murphy, \textit{Camb. Phil. Trans.} iv (1833), pp. 353--408, and v (1835), pp. 113--148, 315--394.

If we are given $n$ real continuous linearly independent functions $u_1(x), u_2(x),\ldots,u_n(x)$, we can form $n$ linear combinations of them which are orthogonal.

For suppose we can construct $m-1$ orthogonal functions $\phi_1,\ldots,\phi_{m-1}$ such that $\phi_p$ is a linear combination of $u_1, u_2,\ldots,u_p$ (where $p=1,2,\ldots,m-1$); we shall now shew how to construct the function $\phi_m$ such that $\phi_1,\phi_2,\ldots,\phi_m$ are all normal and orthogonal.

Let
\[
\psi_m(x)=c_{1,m}\,\phi_1(x)+c_{2,m}\,\phi_2(x)+\cdots+c_{m-1,m}\,\phi_{m-1}(x)+u_m(x),
\]
so that $\psi_m$ is a function of $u_1, u_2,\ldots,u_m$. Then, multiplying by $\phi_p$ and integrating,
\[
\int_a^b\psi_m(x)\,\phi_p(x)\,dx=c_{p,m}+\int_a^b u_m(x)\,\phi_p(x)\,dx\qquad(p<m).
\]
Hence $\displaystyle\int_a^b\psi_m(x)\,\phi_p(x)\,dx=0$ if
\[
c_{p,m}=-\int_a^b u_m(x)\,\phi_p(x)\,dx;
\]
a function $\psi_m(x)$, orthogonal to $\phi_1(x),\phi_2(x),\ldots,\phi_{m-1}(x)$, is therefore constructed.

Now choose $\alpha$ so that $\alpha^2\displaystyle\int_a^b\{\psi_m(x)\}^2\,dx=1$, and take
\[
\phi_m(x)=\alpha\,\psi_m(x).
\]
Then
\[
\int_a^b\phi_m(x)\,\phi_p(x)\,dx=0\quad(p<m),\qquad\int_a^b\{\phi_m(x)\}^2\,dx=1.
\]
We can thus obtain the functions $\phi_1,\phi_2,\ldots$ in order.

The members of a finite set of orthogonal functions are linearly independent. For, if
\[
a_1\phi_1(x)+a_2\phi_2(x)+\cdots+a_n\phi_n(x)=0,
\]
we should get, on multiplying by $\phi_p(x)$ and integrating, $a_p=0$; therefore all the coefficients vanish and the relation is nugatory.

It is obvious that $\pi^{-\frac{1}{2}}\cos mx$, $\pi^{-\frac{1}{2}}\sin mx$ form a set of normal orthogonal functions for the range $(-\pi,\pi)$.

\textit{Example} 1. From the functions $1, x, x^2,\ldots$ construct the following set of functions which are orthogonal (but not normal) for the range $(-1,1)$:
\[
1,\quad x,\quad x^2-\tfrac{1}{3},\quad x^3-\tfrac{3}{5}x,\quad x^4-\tfrac{6}{7}x^2+\tfrac{3}{35},\ \ldots.
\]

\textit{Example} 2. From the functions $1, x, x^2,\ldots$ construct a set of functions which are orthogonal (but not normal) for the range $(a,b)$. [A similar investigation is given in \hardsubsectionref{15}{1}{4}.]
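The orthogonalising process just described is mechanical enough to automate. The following sketch (an illustration, not part of the original text; it assumes NumPy, with a Gauss--Legendre rule playing the role of the integrals) applies it to $1, x, x^2,\ldots$ on $(-1,1)$ and recovers, after rescaling to leading coefficient $1$, the polynomials of Example 1.

\begin{verbatim}
# Sketch of the orthogonalising process of 11.6 applied to 1, x, x^2, ...
# on (-1, 1); quadrature stands in for the integrals.
import numpy as np

nodes, wts = np.polynomial.legendre.leggauss(50)   # exact for these degrees

def inner(u, v):
    return np.sum(wts * u(nodes) * v(nodes))

basis = []                                # normal orthogonal functions phi_p
for m in range(5):
    u = np.polynomial.Polynomial.basis(m)          # u_m(x) = x^m
    psi = u - sum(inner(u, p) * p for p in basis)  # subtract projections
    phi = psi / np.sqrt(inner(psi, psi))           # normalise
    basis.append(phi)

# Rescale to leading coefficient 1 to compare with Example 1:
for phi in basis:
    print((phi / phi.coef[-1]).coef.round(6))
\end{verbatim}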
\Subsection{11}{6}{1}{The connexion of orthogonal functions with homogeneous integral equations.}

Consider the homogeneous equation
\[
\phi(x)=\lambda_0\int_a^b\phi(\xi)\,K(x,\xi)\,d\xi,
\]
where $\lambda_0$ is a real* characteristic number for $K(x,\xi)$; we have already seen how to construct solutions of the equation; let $n$ linearly independent solutions be taken and construct from them $n$ orthogonal functions $\phi_1,\phi_2,\ldots,\phi_n$.

* It will be seen immediately that the characteristic numbers of a symmetric nucleus are all real.

Then, since the functions $\phi_m$ are orthogonal and normal,
\[
\int_a^b\Bigl\{\sum_{m=1}^{n}\phi_m(y)\int_a^b K(x,\xi)\,\phi_m(\xi)\,d\xi\Bigr\}^2 dy
=\sum_{m=1}^{n}\Bigl\{\int_a^b K(x,\xi)\,\phi_m(\xi)\,d\xi\Bigr\}^2.
\]
Therefore, if we write $K$ for $K(x,y)$ and $A$ for
\[
\sum_{m=1}^{n}\phi_m(y)\int_a^b K(x,\xi)\,\phi_m(\xi)\,d\xi,
\]
we have
\[
\int_a^b A^2\,dy=\int_a^b KA\,dy,
\]
and so
\[
\int_a^b A^2\,dy=\int_a^b K^2\,dy-\int_a^b(K-A)^2\,dy\leq\int_a^b K^2\,dy.
\]
Since $\displaystyle\int_a^b K(x,\xi)\,\phi_m(\xi)\,d\xi=\lambda_0^{-1}\phi_m(x)$, it follows that
\[
\sum_{m=1}^{n}\lambda_0^{-2}\{\phi_m(x)\}^2\leq\int_a^b\{K(x,y)\}^2\,dy.
\]
Integrating, we get
\[
n\leq\lambda_0^{2}\int_a^b\!\!\int_a^b\{K(x,y)\}^2\,dy\,dx.
\]
This formula gives an upper limit to the number, $n$, of orthogonal functions corresponding to any characteristic number $\lambda_0$.

These $n$ orthogonal functions are called \textit{characteristic functions} (or \textit{autofunctions}) corresponding to $\lambda_0$.

Now let $\phi^{(0)}(x)$, $\phi^{(1)}(x)$ be characteristic functions corresponding to different characteristic numbers $\lambda_0$, $\lambda_1$. Then
\[
\phi^{(0)}(x)\,\phi^{(1)}(x)=\lambda_1\int_a^b K(x,\xi)\,\phi^{(1)}(\xi)\,\phi^{(0)}(x)\,d\xi,
\]
and so
\[
\int_a^b\phi^{(0)}(x)\,\phi^{(1)}(x)\,dx
=\lambda_1\int_a^b\!\!\int_a^b K(x,\xi)\,\phi^{(1)}(\xi)\,\phi^{(0)}(x)\,d\xi\,dx\qquad\ldots(1),
\]
and similarly
\[
\int_a^b\phi^{(0)}(x)\,\phi^{(1)}(x)\,dx
=\lambda_0\int_a^b\!\!\int_a^b K(\xi,x)\,\phi^{(0)}(x)\,\phi^{(1)}(\xi)\,dx\,d\xi\qquad\ldots(2),
\]
on interchanging $x$ and $\xi$.

We infer from (1) and (2) that if $\lambda_0\neq\lambda_1$ and if $K(x,\xi)=K(\xi,x)$,
\[
\int_a^b\phi^{(0)}(x)\,\phi^{(1)}(x)\,dx=0,
\]
and so the functions $\phi^{(0)}(x)$, $\phi^{(1)}(x)$ are mutually orthogonal.

If therefore the nucleus be symmetric and if, corresponding to each characteristic number, we construct the complete system of orthogonal functions, all the functions so obtained will be orthogonal.

Further, if the nucleus be symmetric all the characteristic numbers are real; for if $\lambda_0$, $\lambda_1$ be conjugate complex roots and if* $u_0(x)=v(x)+i\,w(x)$ be a solution for the characteristic number $\lambda_0$, then $u_1(x)=v(x)-i\,w(x)$ is a solution for the characteristic number $\lambda_1$; replacing $\phi^{(0)}(x)$, $\phi^{(1)}(x)$ in the equation
\[
\int_a^b\phi^{(0)}(x)\,\phi^{(1)}(x)\,dx=0
\]
by $v(x)+i\,w(x)$, $v(x)-i\,w(x)$ (which is obviously permissible), we get
\[
\int_a^b\bigl[\{v(x)\}^2+\{w(x)\}^2\bigr]\,dx=0,
\]
which implies $v(x)=w(x)=0$, so that the integral equation has no solution except zero corresponding to the characteristic numbers $\lambda_0$, $\lambda_1$; this is contrary to \hardsubsectionref{11}{2}{3}; hence, if the nucleus be symmetric, the characteristic numbers are real.

* $v(x)$ and $w(x)$ being real.
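Both conclusions of this subsection have transparent finite-dimensional counterparts. A sketch under illustrative assumptions (NumPy, equal quadrature weights, and the symmetric nucleus $\min(x,y)$ on $(0,1)$, whose characteristic numbers are $(m-\frac{1}{2})^2\pi^2$):

\begin{verbatim}
# Quick numerical sketch (not in the original) of the results of 11.61:
# for a symmetric nucleus the characteristic numbers are real, and
# characteristic functions of distinct numbers are orthogonal.
import numpy as np

n, a, b = 300, 0.0, 1.0
x = np.linspace(a, b, n)
w = (b - a) / n

K = np.minimum(x[:, None], x[None, :])   # symmetric: K(x,y) = K(y,x)

# With equal weights the discrete operator w*K is a symmetric matrix,
# so its eigenvalues are real and its eigenvectors orthogonal.
mu, v = np.linalg.eigh(w * K)
print(np.sort(1.0 / mu[-3:]))            # ~ 2.47, 22.2, 61.7, i.e.
                                         # (pi/2)^2, (3pi/2)^2, (5pi/2)^2
print(abs(np.sum(v[:, -1] * v[:, -2])) < 1e-10)   # orthogonality: True
\end{verbatim}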
\Section{11}{7}{The development* of a symmetric nucleus.}

* This investigation is due to Schmidt, the result to Hilbert.

Let $\phi_1(x),\phi_2(x),\phi_3(x),\ldots$ be a complete set of orthogonal functions satisfying the homogeneous integral equation with symmetric nucleus
\[
\phi(x)=\lambda\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi,
\]
the corresponding characteristic numbers being† $\lambda_1,\lambda_2,\lambda_3,\ldots$.

† These numbers are not all different if there is more than one orthogonal function to each characteristic number.

Now suppose‡ that the series $\displaystyle\sum_{n=1}^{\infty}\frac{\phi_n(x)\,\phi_n(y)}{\lambda_n}$ is uniformly convergent when $a\leq x\leq b$, $a\leq y\leq b$. Then it will be shewn that
\[
K(x,y)=\sum_{n=1}^{\infty}\frac{\phi_n(x)\,\phi_n(y)}{\lambda_n}.
\]

‡ The supposition is, of course, a matter for verification with any particular equation.

For consider the symmetric nucleus
\[
H(x,y)=K(x,y)-\sum_{n=1}^{\infty}\frac{\phi_n(x)\,\phi_n(y)}{\lambda_n}.
\]
If this nucleus is not identically zero, it will possess (\hardsubsectionref{11}{5}{1}) at least one characteristic number $\mu$.

Let $\psi(x)$ be any solution of the equation
\[
\psi(x)=\mu\int_a^b H(x,\xi)\,\psi(\xi)\,d\xi
\]
which does not vanish identically.

Multiply by $\phi_n(x)$ and integrate, and we get
\[
\int_a^b\psi(x)\,\phi_n(x)\,dx
=\mu\int_a^b\!\!\int_a^b\Bigl\{K(x,\xi)-\sum_{m=1}^{\infty}\frac{\phi_m(x)\,\phi_m(\xi)}{\lambda_m}\Bigr\}\,\psi(\xi)\,\phi_n(x)\,dx\,d\xi;
\]
since the series converges uniformly, we may integrate term by term and get
\[
\int_a^b\psi(x)\,\phi_n(x)\,dx
=\frac{\mu}{\lambda_n}\int_a^b\psi(\xi)\,\phi_n(\xi)\,d\xi
-\frac{\mu}{\lambda_n}\int_a^b\psi(\xi)\,\phi_n(\xi)\,d\xi=0.
\]
Therefore $\psi(x)$ is orthogonal to $\phi_1(x),\phi_2(x),\ldots$; and so, taking the equation
\[
\psi(x)=\mu\int_a^b\Bigl\{K(x,\xi)-\sum_{n=1}^{\infty}\frac{\phi_n(x)\,\phi_n(\xi)}{\lambda_n}\Bigr\}\,\psi(\xi)\,d\xi,
\]
we have
\[
\psi(x)=\mu\int_a^b K(x,\xi)\,\psi(\xi)\,d\xi.
\]
Therefore $\mu$ is a characteristic number of $K(x,y)$, and so $\psi(x)$ must be a linear combination of the (finite number of) functions $\phi_n(x)$ corresponding to this number; let
\[
\psi(x)=\sum_m a_m\,\phi_m(x).
\]
Multiply by $\phi_m(x)$ and integrate; then, since $\psi(x)$ is orthogonal to all the functions $\phi_n(x)$, we see that $a_m=0$; so, contrary to hypothesis, $\psi(x)\equiv 0$.

The contradiction implies that the nucleus $H(x,y)$ must be identically zero; that is to say, $K(x,y)$ can be expanded in the given series, if it is uniformly convergent.

\textit{Example.} Shew that, if $\lambda_0$ be a characteristic number, the equation
\[
\phi(x)=f(x)+\lambda_0\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi
\]
certainly has no solution unless $f(x)$ is orthogonal to all the characteristic functions corresponding to $\lambda_0$.

\Subsection{11}{7}{1}{The solution of Fredholm's equation by a series.}

Retaining the notation of \hardsectionref{11}{7}, consider the integral equation
\[
\phi(x)=f(x)+\lambda\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi,
\]
where $K(x,\xi)$ is symmetric.

If we assume that $\phi(\xi)$ can be expanded into a uniformly convergent series $\sum_{n=1}^{\infty}a_n\phi_n(\xi)$, we have
\[
\sum_{n=1}^{\infty}a_n\,\phi_n(x)=f(x)+\sum_{n=1}^{\infty}\frac{\lambda a_n}{\lambda_n}\,\phi_n(x),
\]
so that $f(x)$ can be expanded in the series
\[
\sum_{n=1}^{\infty}a_n\,\frac{\lambda_n-\lambda}{\lambda_n}\,\phi_n(x).
\]
Hence if the function $f(x)$ can be expanded into the convergent series $\sum_{n=1}^{\infty}b_n\phi_n(x)$, then the series
\[
\sum_{n=1}^{\infty}\frac{b_n\lambda_n}{\lambda_n-\lambda}\,\phi_n(x),
\]
if it converges uniformly in the range $(a,b)$, is the solution of Fredholm's equation.

To determine the coefficients $b_n$, we observe that $\sum_{n=1}^{\infty}b_n\phi_n(\xi)$ converges uniformly by \hardsubsectionref{3}{3}{5}*; then, multiplying by $\phi_n(x)$ and integrating, we get
\[
b_n=\int_a^b f(x)\,\phi_n(x)\,dx.
\]

* Since the numbers $\lambda_n$ are all real we may arrange them in two sets, one negative the other positive, the members in each set being in order of magnitude; then, when $|\lambda_n|>|\lambda|$, it is evident that $\lambda_n/(\lambda_n-\lambda)$ is a monotonic sequence in the case of either set.
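The series solution of this subsection can be imitated numerically: the discretised symmetric operator supplies the $\lambda_n$ and $\phi_n$, the coefficients $b_n$ come from inner products with $f$, and the series $\sum b_n\lambda_n(\lambda_n-\lambda)^{-1}\phi_n$ reproduces the direct solution. A sketch (my illustration; NumPy, equal weights, the nucleus $\min(x,y)$, and a $\lambda$ that is not a characteristic number):

\begin{verbatim}
# Sketch of the series solution of 11.71 (illustrative assumptions:
# symmetric nucleus min(x,y), quadrature in place of the integrals).
import numpy as np

n, a, b_ = 400, 0.0, 1.0
x = np.linspace(a, b_, n)
w = (b_ - a) / n

K = np.minimum(x[:, None], x[None, :])
mu, v = np.linalg.eigh(w * K)            # mu = 1/lam_n; columns of v: phi_n
lam = 1.0                                # not a characteristic number here

f = x * (1.0 - x) ** 2                   # any smooth free term
b_n = v.T @ f                            # coefficients of f in the phi basis

lam_n = 1.0 / mu
phi = v @ (b_n * lam_n / (lam_n - lam))  # the series of 11.71

direct = np.linalg.solve(np.eye(n) - lam * w * K, f)
print(np.max(np.abs(phi - direct)))      # small
\end{verbatim}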
\Section{11}{8}{Solution of Abel's integral equation.}

This equation is of the form
\[
f(x)=\int_a^x\frac{u(\xi)}{(x-\xi)^{\mu}}\,d\xi\qquad(0<\mu<1,\ a\leq x\leq b),
\]
where $f'(x)$ is continuous and $f(a)=0$; we proceed to find a continuous solution $u(x)$.

Let $\phi(x)=\displaystyle\int_a^x u(\xi)\,d\xi$, and take the formula†
\[
\frac{\pi}{\sin\mu\pi}=\int_{\xi}^{z}\frac{dx}{(z-x)^{1-\mu}(x-\xi)^{\mu}};
\]
multiply by $u(\xi)$ and integrate, and we get, on using Dirichlet's formula (\hardsubsectionref{4}{5}{1} corollary),
\[
\frac{\pi\,\phi(z)}{\sin\mu\pi}
=\int_a^z\frac{dx}{(z-x)^{1-\mu}}\int_a^x\frac{u(\xi)\,d\xi}{(x-\xi)^{\mu}}
=\int_a^z\frac{f(x)\,dx}{(z-x)^{1-\mu}}.
\]

† This follows from \hardsubsectionref{6}{2}{4} example 1, by writing $(z-x)/(x-\xi)$ in place of $x$.

Since the original expression has a continuous derivate, so has the final one; therefore the continuous solution, if it exist, can be none other than
\[
u(z)=\frac{\sin\mu\pi}{\pi}\,\frac{d}{dz}\int_a^z\frac{f(x)\,dx}{(z-x)^{1-\mu}},
\]
and it can be verified by substitution‡ that this function actually is a solution.

‡ For the details we refer to B\^ocher's tract.

\Subsection{11}{8}{1}{Schl\"omilch's§ integral equation.}

§ \textit{Zeitschrift f\"ur Math. und Phys.} ii (1857). The reader will easily see that this is reducible to a case of Volterra's equation with a discontinuous nucleus.

Let $f(x)$ have a continuous differential coefficient when $-\pi\leq x\leq\pi$. Then the equation
\[
f(x)=\frac{2}{\pi}\int_0^{\frac{1}{2}\pi}\phi(x\sin\theta)\,d\theta
\]
has one solution with a continuous differential coefficient when $-\pi\leq x\leq\pi$, namely
\[
\phi(x)=f(0)+x\int_0^{\frac{1}{2}\pi}f'(x\sin\theta)\,d\theta.
\]
From \hardsectionref{4}{2} it follows that
\[
f'(x)=\frac{2}{\pi}\int_0^{\frac{1}{2}\pi}\sin\theta\,\phi'(x\sin\theta)\,d\theta
\]
(so that we have $\phi(0)=f(0)$, $\phi'(0)=\tfrac{1}{2}\pi f'(0)$).

Write $x\sin\psi$ for $x$, and we have, on multiplying by $x$ and integrating,
\[
x\int_0^{\frac{1}{2}\pi}f'(x\sin\psi)\,d\psi
=\frac{2x}{\pi}\int_0^{\frac{1}{2}\pi}\!\!\int_0^{\frac{1}{2}\pi}\sin\theta\,\phi'(x\sin\theta\sin\psi)\,d\theta\,d\psi.
\]
Change the order of integration in the repeated integral (\hardsectionref{4}{3}) and take a new variable $\chi$ in place of $\theta$, defined by the equation $\sin\chi=\sin\theta\sin\psi$. Then
\[
x\int_0^{\frac{1}{2}\pi}f'(x\sin\psi)\,d\psi
=\frac{2x}{\pi}\int_0^{\frac{1}{2}\pi}\Bigl\{\int_0^{\psi}\frac{\phi'(x\sin\chi)\,\sin\chi\cos\chi\,d\chi}{\sin\psi\,\sqrt{\sin^2\psi-\sin^2\chi}}\Bigr\}\,d\psi.
\]
Changing the order of integration again (\hardsubsectionref{4}{5}{1}),
\[
x\int_0^{\frac{1}{2}\pi}f'(x\sin\psi)\,d\psi
=\frac{2x}{\pi}\int_0^{\frac{1}{2}\pi}\phi'(x\sin\chi)\,\sin\chi\cos\chi
\Bigl\{\int_{\chi}^{\frac{1}{2}\pi}\frac{d\psi}{\sin\psi\,\sqrt{\sin^2\psi-\sin^2\chi}}\Bigr\}\,d\chi,
\]
and, the inner integral having the value $\tfrac{1}{2}\pi/\sin\chi$,
\[
x\int_0^{\frac{1}{2}\pi}f'(x\sin\psi)\,d\psi
=x\int_0^{\frac{1}{2}\pi}\phi'(x\sin\chi)\cos\chi\,d\chi
=\phi(x)-\phi(0).
\]
Since $\phi(0)=f(0)$, we must have
\[
\phi(x)=f(0)+x\int_0^{\frac{1}{2}\pi}f'(x\sin\psi)\,d\psi,
\]
and it can be verified by substitution that this function actually is a solution.
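Abel's inversion formula is simple to test numerically once the substitution $x=z\sin^2 t$ has removed the singularity of the integrand. The sketch below rests on stated assumptions (my example: $\mu=\frac{1}{2}$, $a=0$, $u(\xi)=1$, so that $f(x)=2\sqrt{x}$; NumPy and a midpoint rule) and recovers $u\equiv 1$ from $f$.

\begin{verbatim}
# Numerical sketch of the Abel inversion of 11.8 for mu = 1/2:
# u(xi) = 1, a = 0 gives f(x) = 2*sqrt(x); the substitution x = z*sin(t)^2
# removes the integrable singularity before quadrature.
import numpy as np

f = lambda x: 2.0 * np.sqrt(x)

def phi(z, m=2000):   # phi(z) = (1/pi) int_0^z f(x) (z - x)^(-1/2) dx
    t = (np.arange(m) + 0.5) * (np.pi / 2) / m      # midpoint nodes
    vals = 2.0 * np.sqrt(z) * f(z * np.sin(t) ** 2) * np.sin(t)
    return (1.0 / np.pi) * np.sum(vals) * (np.pi / 2) / m

# u(z) = d(phi)/dz, recovered by a central difference:
z, h = 0.5, 1e-4
print((phi(z + h) - phi(z - h)) / (2 * h))          # ~ 1.0, the original u
\end{verbatim}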
REFERENCES.

H. Bateman, \textit{Report to the British Association}*, 1910.

M. B\^ocher, \textit{An Introduction to the Study of Integral Equations} (Cambridge Math. Tracts, no. 10, 1909).

H. B. Heywood et M. Fr\'echet, \textit{L'\'equation de Fredholm} (Paris, 1912).

V. Volterra, \textit{Le\c cons sur les \'equations int\'egrales et les \'equations int\'egro-diff\'erentielles} (Paris, 1913).

T. Lalesco, \textit{Introduction \`a la th\'eorie des \'equations int\'egrales} (Paris, 1912).

I. Fredholm, \textit{Acta Mathematica}, xxvii (1903), pp. 365--390.

D. Hilbert, \textit{Grundz\"uge einer allgemeinen Theorie der linearen Integralgleichungen} (Leipzig, 1912).

E. Schmidt, \textit{Math. Ann.} lxiii (1907), pp. 433--476.

E. Goursat, \textit{Cours d'Analyse}, iii (Paris, 1915), Chs. xxx--xxxiii.

* The reader will find a more complete bibliography in this Report than it is possible to give here.

MISCELLANEOUS EXAMPLES.

1. Shew that if the time of descent of a particle down a smooth curve to its lowest point is independent of the starting point (the particle starting from rest) the curve is a cycloid. \addexamplecitation{Abel.}

2. Shew that, if $f(x)$ is continuous, the solution of
\[
\phi(x)=f(x)+\lambda\int_0^{\infty}\cos(2xs)\,\phi(s)\,ds
\]
is
\[
\phi(x)=\frac{f(x)+\lambda\displaystyle\int_0^{\infty}f(s)\cos(2xs)\,ds}{1-\tfrac{1}{4}\pi\lambda^2},
\]
assuming the legitimacy of a certain change of order of integration.

3. Shew that the Weber--Hermite functions satisfy
\[
\phi(x)=\lambda\int_{-\infty}^{\infty}e^{\frac{1}{2}ixs}\,\phi(s)\,ds
\]
for the characteristic values of $\lambda$. \addexamplecitation{A. Milne.}

4. Shew that even periodic solutions (with period $2\pi$) of the differential equation
\[
\frac{d^2\phi(x)}{dx^2}+(a+k^2\cos^2 x)\,\phi(x)=0
\]
satisfy the integral equation
\[
\phi(x)=\lambda\int_{-\pi}^{\pi}e^{k\cos x\cos s}\,\phi(s)\,ds.
\]
\addexamplecitation{Whittaker; see \hardsubsectionref{19}{2}{1}.}

5. Shew that the characteristic functions of the equation
\[
\phi(x)=\lambda\int\cdots
\]
are $\phi(x)=\cos mx$, $\sin mx$, where $\lambda=m^2$ and $m$ is any integer.

6. Shew that
\[
\phi(x)=\int\cdots\,d\xi
\]
has the discontinuous solution $\phi(x)=\cdots$. \addexamplecitation{B\^ocher.}

7. Shew that a solution of the integral equation with a symmetric nucleus
\[
f(x)=\int_a^b K(x,\xi)\,\phi(\xi)\,d\xi
\]
is
\[
\phi(x)=\sum_{n=1}^{\infty}a_n\lambda_n\,\phi_n(x),
\]
provided that this series converges uniformly, where $\lambda_n$, $\phi_n(x)$ are the characteristic numbers and functions of $K(x,\xi)$ and $\sum_{n=1}^{\infty}a_n\phi_n(x)$ is the expansion of $f(x)$.

8. Shew that, if $|h|<1$, the characteristic functions of the equation
\[
\phi(x)=\frac{\lambda}{2\pi}\int_{-\pi}^{\pi}\frac{1-h^2}{1-2h\cos(x-\xi)+h^2}\,\phi(\xi)\,d\xi
\]
are $1$, $\cos mx$, $\sin mx$, the corresponding characteristic numbers being $1$, $1/h^m$, $1/h^m$, where $m$ takes all positive integral values.
{"text": "\\documentclass{article}\n\n\\usepackage{amsmath}\n\\usepackage{booktabs}\n\\usepackage{fullpage}\n\\usepackage{parskip}\n\\usepackage{tikz}\n\\usetikzlibrary{calc, shapes, patterns}\n\n\\begin{document}\n\n\\section*{\\centering{Moran process}}\n\\subsection*{\\centering{Fitness}}\n\n\n\\[\nN=3\n\\text{ and }\nA =\n\\begin{pmatrix}\n    0 & 3 \\\\ \n    1 & 2\n\\end{pmatrix}\n\\]\n\n\\vspace{1cm}\n\n\\begin{center}\n    \\begin{tabular}{r|c|c}\n        \\toprule\n                         & \\(f(\\text{Hawk})\\)           & \\(f(\\text{Dove})\\) \\\\\n        \\midrule\n                         &                              & \\\\\n        1 Hawk, 2 Doves  & \\(0\\times 0  + 3\\times 2\\)=6 & \\phantom{\\(0\\times 0 + 3\\times 2\\)=6} \\\\\n                         &                              & \\\\\n        \\midrule\n                         &                              & \\\\\n        2 Hawks, 1 Dove  &                              & \\\\\n                         &                              & \\\\\n        \\bottomrule\n    \\end{tabular}\n\\end{center}\n\n\\subsection*{\\centering{Probabilities}}\n\n\\begin{center}\n    \\begin{tabular}{r|r|c|c}\n        \\toprule\n        & Select       & Selection: Birth   & Selection: Death   \\\\\n        \\midrule\n        &             &                                                                          & \\\\\n        &Hawk         & \\(\\frac{f(\\text{Hawk})}{f(\\text{Hawk}) + 2f(\\text{Dove})}=\\frac{6}{12}\\) & \\(\\frac{1}{3}\\)\\\\\n        &             &                                                                          & \\\\\n1 Hawk, 2 Doves        &             &                                                                          & \\\\\n        &             &                                                                          & \\\\\n        &Dove         &                                                                          & \\\\\n        &             &                                                                          & \\\\\n        \\midrule\n        &             &                                                                          & \\\\\n        &Hawk         &                                                                          & \\\\\n        &             &                                                                          & \\\\\n2 Hawks, 1 Dove        &             &                                                                          & \\\\\n        &             &                                                                          & \\\\\n        &Dove         &                                                                          & \\\\\n        &             &                                                                          & \\\\\n        \\bottomrule\n    \\end{tabular}\n\\end{center}\n\n\\newpage\n\\section*{\\centering{Simulation}}\n\nUse the appropriate dice to simulate 1 Hawk taking over a population of Doves.\n\nDecide what dice you will use to sample birth/death selection at all possible\nstates:\n\n\\begin{center}\n    \\begin{tabular}{r|c|c|c|c}\n        \\toprule\n        State & Birth: dice used & Select Hawk values   & Death: dice used &\n        Select Hawk values \\\\\n        \\midrule\n                        &   &                 &   & \\\\\n        1 Hawk          & 6 & \\(\\{1, 2, 3\\}\\) & 6 & \\(\\{1, 2\\}\\) \\\\\n                        &   &                 &   & \\\\\n        \\midrule\n                   
     &   &                 &   & \\\\\n        2 Hawks         &   &                 &   & \\\\\n                        &   &                 &   & \\\\\n        \\bottomrule\n    \\end{tabular}\n\\end{center}\n\n\n\\subsection*{\\centering{Example}}\n\n\\begin{center}\n    \\begin{tabular}{r|c|c|c|c|c}\n        \\toprule\n        State & Birth: dice used & Birth: value rolled & Death: dice used & Death: value rolled & Next state   \\\\\n        \\midrule\n                      &                  &              &                  &              &              \\\\\n        1 Hawk        & 6                &     2        & 6                &             1& 1 Hawk       \\\\\n                      &                  & (Select Hawk)&                  & (Select Hawk)&              \\\\\n\n                      &                  &              &                  &              &              \\\\\n        1 Hawk        & 6                &     3        & 6                &             5& 2 Hawks      \\\\\n                      &                  & (Select Hawk)&                  & (Select Dove)&              \\\\\n                      &                  &              &                  &              &              \\\\\n        2 Hawks       & 4                &     4        & 6                &             2& 1 Hawk       \\\\\n                      &                  & (Select Dove)&                  & (Select Hawk)&              \\\\\n                      &                  &              &                  &              &              \\\\\n        1 Hawk        & 6                &     4        & 6                & 1& \\framebox{0 Hawks}      \\\\\n                      &                  & (Select Dove)&                  & (Select Hawk)&              \\\\\n        \\bottomrule\n    \\end{tabular}\n\\end{center}\n\n\\subsection*{\\centering{Activity}}\n\nEvery time you arrive at 0 \\textbf{or} 3 Hawks:\n\n\\begin{enumerate}\n    \\item Stop;\n    \\item Circle your final state\n    \\item Draw a line in the table (next page);\n    \\item Start again.\n\\end{enumerate}\n\n\\newpage\n\n\\begin{center}\n    \\begin{tabular}{r|c|c|c|c|c}\n        \\toprule\n        Current state & Birth: dice used & Birth: value rolled & Death: dice used & Death: value rolled & Next state   \\\\\n        \\midrule\n        1 Hawk        & 6                &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &          
    \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &             
 \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n                      &                  &              &                  &              &              \\\\\n    \\end{tabular}\n\\end{center}\n\n\n\\section*{\\centering{Computation}}\n\n\\begin{center}\n    \\begin{tikzpicture}[\n        dove/.style={circle, pattern=north west lines, pattern color=blue!70, draw=blue},\n        hawk/.style={circle, pattern=north east lines, pattern color=red!70, draw=red},\n        ]\n\n\t\\node (N1) at (1, 1) [dove] {D};\n\t\\node (N2) at (0, 0) [dove] {D};\n\t\\node (N3) at (2, 0) [dove] {D};\n\n    \\draw [thick, <-] ($(N1)!0.5!(N2) + (2.5, -.25)$) -- node [below] {\\(p_{10}\\)} ++(1, 0);\n\n    \\node (N1) at ($(N1) + (5, 0)$) [dove] {D};\n    \\node (N2) at ($(N2) + (5, 0)$) [dove] {D};\n    \\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};\n\n    \\draw [thick, ->] ($(N1)!0.5!(N2) + (2.5, -.25)$) -- node [below] {\\(p_{12}\\)} ++(1, 0);\n    \\draw [thick, <-] ($(N1)!0.5!(N2) + (2.5, .25)$) -- node [above] {\\(p_{21}\\)} ++(1, 0);\n\n    \\node (N1) at ($(N1) + (5, 0)$) [dove] {D};\n    \\node (N2) at ($(N2) + (5, 0)$) [hawk] {H};\n    \\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};\n\n    \\draw [thick, ->] ($(N1)!0.5!(N2) + (2.5, .25)$) -- node [above] {\\(p_{23}\\)} ++(1, 0);\n\n    \\node (N1) at ($(N1) + (5, 0)$) [hawk] {H};\n    \\node (N2) at ($(N2) + (5, 0)$) [hawk] {H};\n    \\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};\n    \\end{tikzpicture}\n\\end{center}\n\n\\vspace{1cm}\n\nWhich gives:\n\n\\[\n    p_{10}=\\frac{6}{12}\\frac{1}{3}=\\frac{1}{6}\\qquad\n    p_{12}=\\phantom{\\frac{6}{12}\\frac{2}{3}=\\frac{1}{3}}\\qquad\n    p_{21}=\\phantom{\\frac{2}{8}\\frac{2}{3}=\\frac{1}{6}}\\qquad\n    p_{23}=\\phantom{\\frac{6}{8}\\frac{1}{3}=\\frac{1}{4}}\n\\]\n\n\n\\end{document}\n\n", "meta": {"hexsha": "6e6ab9fd1cc6036e886bf47a5135c68058dfc6a6", "size": 12166, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/assets/activities/moran_process/main.tex", "max_stars_repo_name": "prokolyvakis/gt", "max_stars_repo_head_hexsha": "e679e5d54d9a98583ad4981411ce505cea31f028", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2017-05-25T08:10:40.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-07T21:01:51.000Z", "max_issues_repo_path": "docs/assets/activities/moran_process/main.tex", "max_issues_repo_name": "prokolyvakis/gt", "max_issues_repo_head_hexsha": "e679e5d54d9a98583ad4981411ce505cea31f028", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 65, "max_issues_repo_issues_event_min_datetime": "2017-05-23T16:12:03.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T13:42:25.000Z", "max_forks_repo_path": "docs/assets/activities/moran_process/main.tex", "max_forks_repo_name": "prokolyvakis/gt", "max_forks_repo_head_hexsha": "e679e5d54d9a98583ad4981411ce505cea31f028", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-06-19T11:04:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-30T11:28:00.000Z", "avg_line_length": 51.5508474576, "max_line_length": 121, "alphanum_fraction": 0.1872431366, "num_tokens": 2494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7549149923816048, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5610286672657236}}
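The dice activity in the worksheet above can also be replayed in code. The following sketch is my own illustration, not part of the worksheet; Python's random module replaces the dice. It simulates the $N=3$ Hawk--Dove process until fixation; with the transition probabilities computed in the Computation section ($p_{10}=\frac{1}{6}$, $p_{12}=\frac{1}{3}$, $p_{21}=\frac{1}{6}$, $p_{23}=\frac{1}{4}$), the fixation probability of a single Hawk works out to $6/11\approx 0.545$, which the estimate approaches.

\begin{verbatim}
# A sketch simulating the worksheet's process (N = 3 Hawk-Dove game with
# payoff matrix A = [[0, 3], [1, 2]]); random draws replace the dice.
import random

N = 3
A = [[0, 3], [1, 2]]       # A[i][j]: payoff to strategy i against j

def fitness(h):            # h = number of Hawks; returns (f_Hawk, f_Dove)
    d = N - h
    f_hawk = A[0][0] * (h - 1) + A[0][1] * d   # opponents: h-1 hawks, d doves
    f_dove = A[1][0] * h + A[1][1] * (d - 1)
    return f_hawk, f_dove

def moran(h=1):
    """Run until fixation; return the absorbing number of Hawks (0 or N)."""
    while 0 < h < N:
        f_h, f_d = fitness(h)
        total = h * f_h + (N - h) * f_d
        birth_is_hawk = random.random() < h * f_h / total  # fitness-biased
        death_is_hawk = random.random() < h / N            # uniform death
        h += birth_is_hawk - death_is_hawk
    return h

runs = 100_000
print(sum(moran() == N for _ in range(runs)) / runs)  # ~ 6/11 = 0.545...
\end{verbatim}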
\section{On voting}
We want to tap into the rich pool of established knowledge on voting systems, to learn what our options are and to guide our decisions in designing suitable algorithms.

\subsection{On existing voting systems}
%TODO: specify, for which elections (2 candidates etc) the following holds
In this section we recall the basic established definitions and then reflect upon our needs. For the dApp voting system, the term \textit{voters} refers to the users of the dApp store and \textit{candidates} refers to the dApps available there. \\

\subsubsection{Commonly demanded properties}
Commonly, the three properties that a voting system should fulfill in order to be fair are anonymity, neutrality and monotonicity. For the election of one out of two possible candidates, these properties are formalized as follows:
\begin{itemize}
\item {\textbf{Anonymity:}\footnote{This notion of anonymity is unrelated to the anonymity of a user using an app.}} A voting system is said to be anonymous if it treats all voters equally. In other words, if any two voters trade ballots, this should not change the election's outcome. \\
Concerning a dApp voting system, it is up for discussion in what way this property is wanted and how it should be implemented. For example, it could be sensible to have not total anonymity, but to give more voting power to users with a high reputation, which indicates their knowledge, or to users who hold a large stake and are therefore likely to want the best for the platform. Certainly it would establish an unwanted degree of inequality between users if the relation between voting power and reputation or stake were a linear one. We discuss this topic further in \textcolor{red}{TODO}. %TODO \ref{label}
\item {\textbf{Neutrality:}} A voting system is said to be neutral if it treats all candidates equally. This means that if every voter switched their vote from one candidate to another, the outcome should change accordingly. \\
This point is not about the voters. For dApp rating, the neutrality property is about whether the voting-evaluation algorithm treats all the dApps that are eligible to take part in the voting round equally.

\item {\textbf{Monotonicity:}} A voting system is said to be monotone if it is impossible for a candidate to change from winning to losing by gaining additional votes, or to change from losing to winning by losing votes without gaining others.
\end{itemize}
(see \cite{voting}, p. 3-4)

\subsubsection{Classical voting systems and their properties}
\begin{itemize}[leftmargin = 0pt]
\item {\textbf{Majority rule:}} This voting system elects the candidate who receives more than half of the votes, if such a candidate exists. If there is no such candidate, then majority rule results in a tie, with no winner elected.
\end{itemize}

\noindent $\bm{\ast}$ The so-called \textbf{May's Theorem} states that majority rule %TODO: see \ref{}
is the only voting system for two-candidate elections that is anonymous, neutral, and monotone, and that avoids the possibility of ties. For more than two candidates this is not true, since it is not guaranteed that any candidate is voted for by more than half of all voters. \\
Because there will be more than two candidates on the dApp platform, and not only a winner but a whole ranking is needed, majority rule is not the voting system to go for.

%\cite{Handsonapproach} The Mathematics of Voting and Elections: A Hands-On Approach. Chapter One; maybe TODO: quota systems
\begin{itemize}[leftmargin = 0pt, nosep]
\item {\textbf{Plurality rule:}} One class of methods, of which majority rule can be viewed as a special case, is plurality rule. It is the voting system that elects the candidate who receives the highest number of votes, even if that number is less than half of the total number of votes cast. Here, plurality rule only results in a tie when two or more candidates receive the same number of votes (and more than the number of votes received by any other candidate).
\end{itemize}
(see \cite{voting}, p. 5-20)\\ \\

\noindent Regarding a voting system for the dApp ranking, the possibility of ties is no downside, as there is no need to determine a single winner. \\
Rather, what is needed is a system that leads to some sort of preference order of all dApps. \\
%\noindent $\bm{\ast}$ Such a preference order produced by the voting system is called the {\textbf{societal preference order}}, since it can be thought of as the ranking of the candidates that, according to the voting system being used, best reflects the voters' will. \\
% protip: avoid " in TeX
There are various systems that can be used to determine the societal preference order. One property of such systems that might sound sensible at first is the following: \\
%\noindent $\bm{\ast}$ A voting system is said to satisfy the {\textit{majority criterion}} if, whenever a candidate is ranked first by a majority of the voters, that candidate will also be ranked first in the corresponding societal preference order.
\\In the following, we give an example of why this would be no legitimate property for the dApp voting system. \textcolor{red}{TODO?}
\begin{itemize}[leftmargin = 0pt, nosep]
\item \textbf{Borda count:} A voting system that does not fulfill the majority criterion is the so-called Borda count, which uses a non-trivial {\textsf{point system}} to determine the overall rankings and is often used in collegiate sports polls, for example. In an election with $n$ candidates it works as follows:
first, each voter submits a ballot that contains his or her individual preference order of all the candidates.
Then points are awarded to each candidate for each ballot cast, according to the following rule:
an $m$-th-place rank is worth $n-m$ points (where $1\leq m \leq n$). In other words, a first-place rank is worth $n-1$ points, a second-place rank is worth $n-2$ points, and so on.
\item
Finally, the candidate whose total number of points from all of the ballots is the largest is declared the winner, and the corresponding societal preference order is determined by the number of points each candidate has received, from largest to smallest. If there is more than one candidate with the largest number of points, a tie occurs. Some sort of tie also occurs in the societal preference order whenever candidates receive the same number of total points. These candidates then occupy indistinguishable positions in the preference order and are listed consecutively.
\end{itemize}
%(see \cite{voting}, p. 20-26) \newline

Intuitively, the Borda count appears to be quite fair. However, it violates the majority criterion.
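For concreteness, here is a small sketch of the Borda count just described (my own illustration; the ballots and dApp names are hypothetical):

\begin{verbatim}
# A short sketch of the Borda count described above (hypothetical ballots;
# each ballot ranks all n candidates, first place earning n - 1 points).
from collections import defaultdict

def borda(ballots):
    """ballots: list of preference orders, best first. Returns the societal
    preference order as (candidate, points), largest total first."""
    n = len(ballots[0])
    points = defaultdict(int)
    for ballot in ballots:
        for place, candidate in enumerate(ballot):  # place m: n - 1 - m pts
            points[candidate] += n - 1 - place
    return sorted(points.items(), key=lambda kv: -kv[1])

ballots = ([["dapp_a", "dapp_b", "dapp_c"]] * 3
           + [["dapp_b", "dapp_c", "dapp_a"]] * 2)
print(borda(ballots))   # dapp_b: 7, dapp_a: 6, dapp_c: 2
\end{verbatim}

Note that dapp\_a is ranked first by three of the five voters, yet dapp\_b wins with 7 points to 6: exactly the violation of the majority criterion discussed above.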
\\
But for the purpose of the dApp voting system, using some variation of the Borda count is more appropriate than demanding that the majority criterion be fulfilled:
\\For example, consider an extreme situation in which a small majority of users ranks a specific dApp at the top position while all of the other voters rank it last on their personal preference orders; it would obviously make no sense to automatically rank this dApp first in the resulting societal preference order. Determining its rank according to the Borda count clearly seems much more legitimate. \\
Thus, a voting system for ranking dApps does not have to fulfill the majority criterion and will not use the plurality method, but will rather use a variant of the Borda count to guarantee a sensible societal preference order. \\

Taking account of personal and societal preference orders, the three properties of fair voting systems have to be redefined for elections with more than two candidates:
\begin{itemize}
\item \textbf{Anonymity:} This property stays the same.
\item \textbf{Neutrality:} A voting system is said to be neutral if it treats all candidates equally. This means that if every voter switched the positions of two specific candidates on their personal preference orders, the positions of these two candidates in the resulting societal preference order would be switched accordingly.
\item\textbf{Monotonicity:} A voting system is said to be monotone if it is impossible for a candidate to go from winning to losing, or to experience a decrease in rank in the resulting societal preference order, whenever changes in favor of that candidate, but no changes to the disadvantage of that candidate, occur on individual preference ballots. \\
%(see \cite{voting}, p. 26-27)
\end{itemize}
In this definition, the second and third properties are no longer about winning or losing, but about the ranks of the candidates in the societal preference order resulting from the personal preference orders. \\
\\Two additional properties that it seems sensible to demand of a voting system are characterized by the following definitions, for which the terms {\textbf{Condorcet winner}} and {\textbf{Condorcet loser}} are needed: \\
A candidate who would defeat every other candidate in a head-to-head election (under majority rule) is called a Condorcet winner. If a candidate would lose to every other candidate in a head-to-head contest (under majority rule), he is called a Condorcet loser.
\begin{itemize}
\item \textbf{CWC:} A voting system is said to satisfy the {\textbf{Condorcet winner criterion}} (CWC) if it always elects the Condorcet winner whenever one exists.
\item \textbf{CLC:} A voting system is said to satisfy the {\textbf{Condorcet loser criterion}} (CLC) if it never elects a Condorcet loser.
\end{itemize}
These definitions involve head-to-head (two-candidate) contests. It can be challenging to design a voting system for more than two candidates in a way that fulfills at least one of the Condorcet criteria. The Borda count, for example, does not satisfy the CWC (a candidate ranked first by a majority of voters is in particular a Condorcet winner, and the Borda count violates the majority criterion). \\
A voting system we propose below, a variation of this system, turns out to do so, too.\\\\ %sp\u00e4ter zeigen, dass es das Kriterium erf\u00fcllt\n%Sequential pairwise voting: \\\\\n%To determine the winner and the societal preference order according to sequential pairwise voting, at first, the voters have to choose between two candidates. Majority rule is used to decide the winner. Next, the voters have to choose between the winner of the preceding contest and the next candidate. Then they have to choose between the winner of the latter contest and the next candidate and so on, whereby the winner always is decided according to majority rule. Finally, the winner from the last two-candidate-contest is declared the winner of the whole election. \\\\\n% Though sequential pairwise voting has the benefit of always electing a Condorcet winner if one exists, it comes with great downsides. First of all, it would produce a societal preference order that doesn't have the mathematical property of transitivity and therefore would not be sensible. Furthermore, in sequential pairwise voting, an order in which new candidates are to be introduced into the head-to-head comparisons has to be specified beforehand. In elections where no Condorcet winner exists, exactly this order, called agenda, has an enormous influence on who wins the election. In other words, if there is no Condorcet winner, this system is highly manipulable and violates the fundamental property of neutrality, which is why sequential pairwise voting clearly can't be said to be a fair method. \\\\\nAnother criterion that can be considered related to the assessment of the utility and sensibility of a voting system is the so-called \"Independence of Irrelevant Alternatives\" - criterion, defined as follows: \\\\\nA voting system is said to satisfy the irrelevant alternatives criterion (IIA) if the societal preference between any two candidates is only dependent on the individual voters' preferences between those two candidates (and not affected by the candidacy of any other candidate). \\\\\nHowever, none of the voting systems discussed so far satisfies this criterion. \\\\\nIn 1951, Kenneth arrow in the context of his Ph.D.-thesis published the commonly called \"Arrow's impossibility theorem\" which states that no voting system thathas the five desired qualities mentioned above does exist. In 1972, this discovery even earned him the Nobel Prize in economic science. Various versions of the theorem exist, but what it states, more precisely, is the following: \\\\\nYou can consider a voting system to be a rule that assigns to each possible collection of preference orders for a poll some kind of societal preference order, or more concretely, \\\\\nwe define it as a function that takes as input a collection of transitive preference orders of all the voters and produces as output a transitive societal preference order that represents the will of the electorate. \\\\\nThis definition doesn't rule out the possibility of ties, neither ties in individual preference ballots nor ties in the resulting societal preference orders. So if one defines the societal preference order of pairwise sequential voting as the winner followed by a tie of all the other candidates, even pairwise sequential voting fits in with that defintion of a voting system. 
\\
The mathematical conditions Arrow had in mind can be interpreted as follows:
\begin{enumerate} % from the voting book, page 68; other criteria on Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Arrow, https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem, https://plato.stanford.edu/entries/arrows-theorem/,
%TODO: Arrow's theorem http://pi.math.cornell.edu/~mec/Summer2008/anema/maystheorem.html
\item Universality: Voting systems should place no restrictions apart from transitivity on how voters can rank the candidates in an election. There should be no rule prescribing that only specific preference orders are acceptable. Every possible collection of transitive preference ballots must yield a transitive societal preference order.
\item Positive Association of Social and Individual Values: Voting systems should be monotone.
\item Independence of Irrelevant Alternatives: Voting systems should satisfy the IIA criterion.
\item Citizen Sovereignty: Voting systems should not be imposed in any way. I.e., there should never be a pair of candidates in an election such that one of these candidates is preferred over or tied with the other in the resulting societal preference order regardless of how the voters vote.
\item Nondictatorship: Voting systems should not be dictatorial. I.e., there should never be a voter such that, for any pair of candidates, if that voter prefers one of the candidates over the other, then society will also have the same preference regarding these two candidates.
\end{enumerate}
The second of these conditions is what has so far been called monotonicity. The third is the criterion we introduced before. The fourth condition is close to neutrality, and the fifth is similar to anonymity.
The first of these conditions is one that we have not stated before, but which we have been assuming all along. According to this condition a voting system must always output a transitive societal preference order, but some potential voting systems (e.g.\ sequential pairwise voting) yield cyclic societal preference orders when receiving certain inputs. So one could postulate that, by definition, only systems which output transitive societal preference orders are called voting systems, or one could handle such systems by excluding, through restrictions, the inputs that yield preference orders which are not transitive. The latter option would then in some way violate universality. \\
What Kenneth Arrow discovered is the following: for an election with three or more candidates, it is impossible for a voting system to satisfy all five of the conditions above. \\
In other words, there is no ``perfect'' voting system for polls with more than two candidates; at least one sensible condition will always be violated. But there is a great variety of interpretations of Arrow's Theorem, of what it really means and how it should be viewed in light of the search for a voting system that is truly fair.
One related condition is Pareto's unanimity condition, which replaces the conditions of Positive Association of Social and Individual Values and Citizen Sovereignty. It postulates that if there is a pair of candidates in an election such that every voter prefers the same one over the other, then that one should be ranked higher than the other in the resulting societal preference order.
\nSince the holding true of Unanimity implies that Positive Association of Social and Individual Values and Citizen Sovereignty also are fulfilled, one can formulate a stronger form aof Arrow's Theorem: \\\\\nFor an election with more than two candidates, a voting system can't satisfy Universality, Independence of Irrelevant Alternatives, Nondictatorship and Unanimity all at once. \\\\\n\nThis stronger version is easier to prove. In order to do so, the following Lemma is needed: \\\\\n\nAssume having an electin with three or more candidates and a voting system that satisfies universality, IIA and unanimity. Suppose that A is a candidate in the election and that every voter ranks A either first or last on their individual preference order (whereby some can rank him first and some last, not all have to rank him the same; furthermore this assumption excludes ties between A and any other candidate). Then the societal preference order the voting system yields, must also rank A either first or last. \\\\\n\nProof: % TODO: rethink it\n\\begin{enumerate} \n\\item If A was neither ranked alone first nor last in the societal preference order, there would have to be some other candidates B and C, so that B was ranked higher than or tied with A, and C was ranked lower than or tied with A in the societal preference order. \nDue to universality, transitivity holds and so B would be ranked higher than C or would be tied with him. \\\\\n\n\\item Furthermore, if the assumption of the Lemma holds and every voter changed the position of C above B in their personal preference order, the societal preference between A and B as well as A and C wouldn't be affected since A was ranked highest or lowest and thus the relation between A and B as well as A and C wouldn't change. Due to unanimity, we know that C would be ranked higher than B in the societal preference order then, too. \n\n\\item So we get the contradiction that (1) says that B was ranked higher than or tied with C and (2) says that C would be ranked higher than B. Clearly, both relations can't be true at once, so it's not possible that the voting system yields a societal preference order in which A is ranked neither first nor last. So the Lemma is proven. \n\\end{enumerate}\n\nWe will prove the strong form of Arrow's theorem by assuming that universality, IIA and unanimity hold and showing that the system can't be non-dictatorial under this assumption by finding a dictator. To do so, we assume without loss of generality that A is ranked last by all of the voters (and therefore is ranked last in the societal preference order, too). The voters are labeled $v_1, \\cdots, v_n$. \\\\\nIf some of the voters changed A from last place to first place on their individual preference orders, according to the Lemma above A then is either still ranked last or ranked first. But if all of the voters changed A from last place to first place, A would definitely be ranked first in the corresponding societal preference order. \\\\\nTherefore we know that if the voters changed their ranking of A from last to first one by one, there has to be one voter, $v_k, 1\\leq k \\leq n$, whose change in the individual preference order causes A to switch from last place to first place in the societal preference order. Thus, this voter is a potential dictator. \\\\\nWe show know $v_k$ controls the societal preference between any pair of candidates not including B. \nLet B and C be any two candidates other than A and assume that $v_k$ prefers B over C. 
\nIf $v_k$ moved A between B and C on his individual preference order, this would, by IIA, change nothing in the societal preference between B and C, since no voter's relative ranking of B and C is affected. Nor would it be affected by $v_1, \\cdots, v_{k-1}$ moving A to the top of their individual preference orders and $v_{k+1}, \\cdots, v_n$ moving A to the bottom of theirs. \\\\\nAssume that all three of these changes occur. Then B would be ranked lower than A by $v_1, \\cdots, v_{k-1}$ and higher than A by $v_k, \\cdots, v_n$. Since we already know that $v_k$ would still have to change his ranking of A from bottom to top in order to yield a societal preference order in which A is ranked first instead of last, B would clearly be preferred over A in the societal preference order resulting from the three changes. A similar argument shows that A would be preferred over C in that societal preference order. Due to transitivity, B would be preferred over C in the societal preference order. \\\\\nSince we observed before that the three changes yield no change in the corresponding societal preference order, we now also know that B would be preferred over C in the societal preference order corresponding to the individual rankings before the three changes. \\\\\n % p. 86, Q 5.14\n\n\n% maybe add instant runoff (p 48-50) though it violates monotonicity and CWC\n% TODO: add the majority criterion\n\n%https://en.wikipedia.org/wiki/Social_Choice_and_Individual_Values\n\n%TODO: May's theorem\n\n\n%%@Alex\n%%\n%% plz don't use German in this text (also comments). \n%%It is harder to read for outsiders and also a problem when you want to do grammar correction.\n%%\n%% And I propose to do line breaks after sentences to make the diff easier to read.\n%% (At the very latest after the dot, possibly every 80 chars, even.)\n%% If you do a line break without \\\\, it will be ignored in the formatted text anyway.\n\n\n\n\n", "meta": {"hexsha": "36fefa1b1647a80a9d1c58bb001f80028a447619", "size": 22134, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "voting-paper/section_texts/s75_voting_properties.tex", "max_stars_repo_name": "Nikolaj-K/NOS-voting-system-research", "max_stars_repo_head_hexsha": "fec7c18ce41075f87b5025580ae58eb9cb3c53af", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-06-05T06:21:50.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-13T21:57:01.000Z", "max_issues_repo_path": "voting-paper/section_texts/s75_voting_properties.tex", "max_issues_repo_name": "Nikolaj-K/NOS-voting-system-research", "max_issues_repo_head_hexsha": "fec7c18ce41075f87b5025580ae58eb9cb3c53af", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "voting-paper/section_texts/s75_voting_properties.tex", "max_forks_repo_name": "Nikolaj-K/NOS-voting-system-research", "max_forks_repo_head_hexsha": "fec7c18ce41075f87b5025580ae58eb9cb3c53af", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-08-31T17:09:39.000Z", "max_forks_repo_forks_event_max_datetime": "2018-08-31T17:09:39.000Z", "avg_line_length": 151.602739726, "max_line_length": 811, "alphanum_fraction": 0.7941176471, "num_tokens": 4783, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149758396752, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5610286549722908}}
{"text": "\\section{Constraint Satisfaction Problems}\n\nOutline:\n\\begin{itemize}\n    \\item A special subset of search problems\n    \\item State is defined by variables Xi with values from a domain D (sometimes D depends on i)\n    \\item Goal test is a set of constraints specifying allowable combinations of values for subsets of variables\n\\end{itemize}\n\n\\paragraph{Use cases}\n\\begin{itemize}\n    \\item Assignment\n    \\item Timetabling\n    \\item Hardware configuration\n    \\item Transportation scheduling\n    \\item Factory scheduling\n    \\item Circuit layout\n    \\item Fault diagnosis\n\\end{itemize}\n\n\\subsection{Solving constraint satisfaction problems}\n\n\\paragraph{Backtracking}\n\n\\begin{itemize}\n    \\item Fix an ordering for variables, and select values for variables in this order. Because assignments are commutative (e.g. assigning WA = Red, NT = Green is identical to NT = Green, WA = Red), this is valid.\n    \\item When selecting values for a variable, only select values that don\u2019t conflict with any previously assigned values. If no such values exist, backtrack and return to the previous variable, changing its value.\n\\end{itemize}\n\n\\subsection{Filtering}\n\nAn arc $X \\rightarrow Y$ if consistent if and only if for every value $x$ in X there is some $y$ in $Y$  which could be assigned without violating a constraint.\n\n\\subsubsection{Arc consistency}\n\\begin{itemize}\n    \\item Begin by storing all arcs in the constraint graph for the CSP in a queue Q. For a binary constraint $(X,Y)$, there are two arcs to add to the queue - $X \\rightarrow Y$ and $Y \\rightarrow X$\n    \\item Check the arcs in the queue for consistency:\n    \\begin{itemize}\n        \\item If one arc $X \\rightarrow Y$ is not consistent for a given value $x$, remove $x$ from the domain of $X$\n        \\item If at least one value is removed for a variable $X_i$, add arcs of the form $X_k \\rightarrow X_i$ to the queue, for all unassigned variables $X_k$ (skip duplicate arcs already in the queue.\n        \\item Continue until Q is empty, or the domain of some variable is empty and triggers a backtrack.\n    \\end{itemize}\n\\end{itemize}\n\n\\subsubsection{Forward checking}: whenever a value is assigned to a variable $X$, prune the domains of unassigned variables that share a constraint with $X$ that would violate the constraint\nif assigned.\n\nForward checking is a special type of enforcing arc consistency, in which we only enforce the arcs pointing into the newly assigned variable.\n\n\\subsection{Ordering}\n\n\\paragraph{Minimum remaining values (MRV)} Choose the variable with the fewest legal options left in its domain\n\n\\paragraph{Iterative improvement} \n\\begin{itemize}\n    \\item Take an assignment with unsatisfied constraints\n    \\item While a goal has not been reached:\n    \\begin{itemize}\n        \\item Reassign one variable to another value. 
\n\\subsubsection{Forward checking}: whenever a value is assigned to a variable $X$, prune from the domains of unassigned variables that share a constraint with $X$ any value that would violate the constraint if assigned.\n\nForward checking is a special case of enforcing arc consistency, in which we only enforce the arcs pointing into the newly assigned variable.\n\n\\subsection{Ordering}\n\n\\paragraph{Minimum remaining values (MRV)} Choose the variable with the fewest legal options left in its domain.\n\n\\paragraph{Iterative improvement} \n\\begin{itemize}\n    \\item Take an assignment with unsatisfied constraints\n    \\item While a goal has not been reached:\n    \\begin{itemize}\n        \\item Reassign one variable to another value, using the min-conflicts heuristic: choose a value that violates the fewest constraints\n    \\end{itemize}\n\\end{itemize}\n\n\\subsection{K-Consistency}\n\n\\textbf{K-Consistency}: For each set of $k$ nodes, any consistent assignment to $k-1$ of them can be extended to the $k^{th}$ node.\n\n\\textbf{Strong k-consistency}: also $k-1$, $k-2$, $\\cdots$, $1$ consistent\n\nStrong n-consistency means we can solve the CSP without backtracking!\n\n\\subsection{Tree-Structured CSPs}\n\nTheorem: if the constraint graph has no loops, the CSP can be solved in $O(n d^2)$ time, with $n$ the number of nodes and $d$ the size of the largest domain.\n\n\\begin{itemize}\n    \\item Pick an arbitrary node in the constraint graph for the CSP to serve as the root of the tree (it doesn\u2019t matter which one because basic graph theory tells us any node of a tree can serve as a root).\n    \\item Convert all undirected edges in the tree to directed edges that point away from the root. Then linearize (or topologically sort) the resulting directed acyclic graph.\n    \\item Perform a backwards pass of arc consistency. Iterating from $i = n$ down to $i = 2$, enforce arc consistency for all arcs $Parent(X_i) \\rightarrow X_i$. This domain pruning may eliminate some values.\n    \\item Perform a forward assignment. Starting from $X_1$ and going to $X_n$, assign each $X_i$ a value consistent with that of its parent. Because we\u2019ve enforced arc consistency on all of these arcs, no matter what value we select for any node, we know that its children will each all have at least one consistent value. Hence, this iterative assignment guarantees a correct solution, a fact which can be proven inductively without difficulty.\n\\end{itemize}\n\nThe tree-structured algorithm can be extended to CSPs that are reasonably close to being tree-structured\nwith cutset conditioning. Cutset conditioning involves first finding the smallest subset of variables in a constraint graph such that their removal results in a tree (such a subset is known as a cutset for the graph).\n", "meta": {"hexsha": "6703f9bdcd8b5c83e5708e7e051aaea3e3fa2f23", "size": 4829, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/csp.tex", "max_stars_repo_name": "Calcifer777/columbia-ai", "max_stars_repo_head_hexsha": "aaa7173bca6f2bc9edfe6fe55b5a1a37ab310066", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/csp.tex", "max_issues_repo_name": "Calcifer777/columbia-ai", "max_issues_repo_head_hexsha": "aaa7173bca6f2bc9edfe6fe55b5a1a37ab310066", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/csp.tex", "max_forks_repo_name": "Calcifer777/columbia-ai", "max_forks_repo_head_hexsha": "aaa7173bca6f2bc9edfe6fe55b5a1a37ab310066", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.8117647059, "max_line_length": 437, "alphanum_fraction": 0.7601987989, "num_tokens": 1130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.7772998560157665, "lm_q1q2_score": 0.5610108996400689}}
{"text": "% !TEX TS-program = pdflatexmk\n\\documentclass{article} % For LaTeX2e\n\\usepackage{nips15submit_e,times}\n\\usepackage{hyperref}\n\\usepackage{url}\n%\\documentstyle[nips14submit_09,times,art10]{article} % For LaTeX 2.09\n\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathtools}\n\n\\usepackage{algpseudocode,algorithm,algorithmicx}  \n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{example}{Example}[theorem]\n\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}\n\n\\newcommand\\myeq{\\stackrel{\\mathclap{\\tiny\\mbox{d}}}{=}}\n\n\\newcommand\\mc{\\mathcal} %calligraphic\n\\newcommand\\ts{\\mathcal} %tensor\n\\newcommand\\mt{} %matrix\n\\newcommand\\vt{\\mathbf} %vector\n\\newcommand\\fn{} %function\n\\newcommand\\triple[3]{(#1 \\stackrel{#2}\\rightarrow #3)}\n%\\newcommand\\triple[3]{(#1, #2, #3)}\n\n\\title{Multiple Matrices Factorisation\\\\ via Matching Gradients}\n\n\\author{\nDongwoo Kim\n}\n\n% The \\author macro works with any number of authors. There are two commands\n% used to separate the names and addresses of multiple authors: \\And and \\AND.\n%\n% Using \\And between authors leaves it to \\LaTeX{} to determine where to break\n% the lines. Using \\AND forces a linebreak at that point. So, if \\LaTeX{}\n% puts 3 of 4 authors names on the first line, and the last on the second\n% line, try using \\AND instead of \\And before the third author name.\n\n\\newcommand{\\fix}{\\marginpar{FIX}}\n\\newcommand{\\new}{\\marginpar{NEW}}\n\n\\nipsfinalcopy % Uncomment for camera-ready version\n\n\\begin{document}\n\n\\maketitle\n\n%\\begin{abstract}\n%\\end{abstract}\n\n\\section{Joint Matrix Factorisation}\nLet $X \\in \\mathbb{R}^{n_1 \\times n_2}$ and $Y \\in \\mathbb{R}^{n_1 \\times n_3}$ be two matrices where the first dimension represents the same object, i.e. user-rating matrix $X$ and user-attribute matrix $Y$. A typical collaborative filtering approach to predict the unknown entry of the first matrix is to factorise the matrix into a low-rank representation $X \\approx f(UV^\\top)$ where $U\\in\\mathbb{R}^{n_1\\times k}$, $V\\in \\mathbb{R}^{n_2 \\times k}$ and $f:\\mathbb{R}^{n_1 \\times n_2} \\rightarrow \\mathbb{R} ^{n_1 \\times n_2}$ is a link function. Joint (or co-) factorisation approach tries to factorise two related matrices into some shared low-rank representations in hope that there is a common latent representation shared across different domains. For example, $X$ and $Y$ can be factorised into $X \\approx UV^\\top$ and $Y \\approx UW^\\top$ where $W \\in \\mathbb{R}^{n_3 \\times k}$ with the identity link function. Therefore, both matrices share the same latent representation $U$ on the first dimension.\nMany of the joint factorisation approaches attempts to minimise the following objective:\n\\begin{equation}\n\\label{eqn:joint_obj}\nL_{XY}(X,Y,U,V,W) = L_X(X, UV^\\top) + \\lambda L_Y(Y, UW^\\top),\n\\end{equation}\nwhere $L_X$ and $L_Y$ are appropriate loss functions for matrix $X$ and $Y$, and $\\lambda$ controls the relative importance of $Y$ with respect to $X$. Various gradient based algorithms are developed to minimise the objective function.\n\n\\subsection{Gradient Matching}\nWhat I and Lexing initially thought about gradient matching is that we match the direction of gradient w.r.t $U$ on both side of losses $L_X$ and $L_Y$, so that the both side of losses are reduced together in the same direction. 
In the worst case without matching gradients, the two gradients could cancel each other out, which makes convergence slow. Formally, we want to ensure that the two gradients are similar during the optimisation\n\\begin{align}\n\\frac{\\partial L_X}{\\partial U_i} \\stackrel{\\angle}{\\approx} \\frac{\\partial L_Y}{\\partial U_i},\n\\end{align}\nwhere $\\stackrel{\\angle}{\\approx}$ represents the similarity between the angles of two vectors. Given $X$, $Y$, $V$ and $W$, however, there is no way to match these two gradients because the gradients are determined once the other quantities and losses are fixed (btw, this is why I said it's impossible). Having said that, it might be possible to adjust other quantities such as $V$ and/or $W$ in order to match these gradients. If we think about the SVD, there is no unique solution for $U$ and $V$ apart from the singular values, so it might be possible to find a new $W$ that reduces the divergence in gradients while keeping the approximation $Y \\approx UW^\\top$ about the same or improving it. Let's assume that we adjust $W$ to match the gradients; then the optimal(?)  $W^*$ would be the solution of the following objective for some positive $\\epsilon$:\n\\begin{align}\n\\label{eqn:matching_obj}\n\\arg\\min_{W^*}L_{\\angle}(\\frac{\\partial L_X}{\\partial U_i}, \\frac{\\partial L_Y(W^*)}{\\partial U_i}) \\leq \\epsilon \\quad \\text{subject to} \\quad L_Y(Y, UW^{*\\top}) \\leq L_Y(Y, UW_{\\text{old}}^\\top)\n\\end{align}\nwhere $L_{\\angle}$ measures the difference between two vectors, i.e. ($1 - \\frac{1}{2}(\\text{cosine similarity}+1)$), and $W_{\\text{old}}$ is the previously estimated matrix. Or we could relax the constraint and minimise the following objective to find $W^*$:\n\\begin{align}\nL_{\\angle Y}(W^*) = \\lambda_\\angle L_{\\angle}(\\frac{\\partial L_X}{\\partial U_i}, \\frac{\\partial L_Y(W^*)}{\\partial U_i}) + |L_Y(Y, UW^{*\\top}) - L_Y(Y, UW_{\\text{old}}^\\top)|_{+},\n\\end{align}\nwhere $\\lambda_\\angle$ represents the relative importance of the angle difference, and $|x|_+ = \\max(x, 0)$ in order to ensure that the reconstruction loss will not increase. Algorithm \\ref{alg:jmfgm} summarises the gradient matching approach. We first compute the gradients of $U$ and then find an optimal $W^*$ that matches the gradients. We then update $U$ and repeat this process until convergence.\n\n\\subsection{Discussion}\n\n\\begin{itemize}\n\\item Can we prove that the matching gradients approach leads to a better solution, or that it converges?\n\\item How can we find $W^*$ in Equation \\ref{eqn:matching_obj}?\n\\item What if we cannot find $W^*$ in Equation \\ref{eqn:matching_obj}? 
Do we stop the optimisation?\n\\item Would it be better to jointly optimise $W^*$ and $V^*$ together while matching gradients?\n\\item How can we generalise this framework to incorporate a tensor?\n\\end{itemize}\n\n\\begin{algorithm}  \n  \\caption{Joint Matrix Factorisation via Matching Gradients \\label{alg:jmfgm}}\n  \\begin{algorithmic}\n  \t\\Require{$X \\in \\mathbb{R}^{n \\times n_2}$, $Y \\in \\mathbb{R}^{n \\times n_3}$, $L_X$, $L_Y$, $L_\\angle$}\n    \\Statex\n    \\Function{MatchingGradient}{$U, V, W$}\n    \\State Randomly initialise $U, V, W$\n    \\For{$t \\gets 1 \\textrm{ to } T$} \n      \\For{$i \\gets 1 \\textrm{ to } n$} \n      \t\\State Compute gradient of ${\\partial L_X}/{\\partial U_i}$ and ${\\partial L_Y}/{\\partial U_i}$\n        \\While{$L_{\\angle}({\\partial L_X}/{\\partial U_i}, {\\partial L_Y}/{\\partial U_i}) > \\epsilon$}\n        \t\\State update $W$ from Equation \\ref{eqn:matching_obj}\n        \\EndWhile\n        \\State update $U_i = U_i - \\gamma_t \\nabla_{U_i} L_{XY}$\n        \\State update $V_i = V_i - \\gamma_t \\nabla_{V_i} L_{XY}$        \n      \\EndFor  \n    \\EndFor\n      \\State \\Return{$U$, $V$, $W$}\n    \\EndFunction\n  \\end{algorithmic}  \n\\end{algorithm}\n\n\\section{Concrete Example}\nLet us examine the framework with a more concrete example. For every matrix reconstruction loss we use the squared Frobenius norm, and for the angle loss we use $1 - \\frac{1}{2}(\\text{cosine similarity}+1)$ between two vectors.\n\n\\begin{align}\nL_{XY}(X,Y,U,V,W) =& L_X(X, UV^\\top) + \\lambda L_Y(Y, UW^\\top)\\\\\n=& \\sum_{ij} \\frac{1}{2} (X_{ij} - U_i^\\top V_j)^2 + \\lambda \\sum_{ij} \\frac{1}{2} (Y_{ij} - U_i^\\top W_j)^2\n\\end{align}\nTaking the gradient with respect to $U_i$:\n\\begin{align}\n\\frac{\\partial L_X(X, UV^\\top)}{\\partial U_i} &= -\\sum_j^{n_2} V_j(X_{ij} - U_i^\\top V_j)\\\\\n&= - V^\\top(X_i - VU_i)\\\\\n\\frac{\\partial L_Y(Y, UW^\\top)}{\\partial U_i} &= -\\sum_j^{n_3} W_j(Y_{ij} - U_i^\\top W_j)\\\\\n&= - W^\\top(Y_i - WU_i)\n\\end{align}\nThe cosine similarity between the two gradient vectors can then be computed as\n\\begin{align}\n\\frac{(X_i - VU_i)^\\top V W^\\top(Y_i - WU_i)}{||V^\\top(X_i - VU_i)||\\cdot||W^\\top(Y_i - WU_i)||}\n\\end{align}
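\n\nAs a quick numerical check of these formulas, a small NumPy sketch (illustrative only, not part of the original note; matrix shapes as defined above):\n\\begin{verbatim}\nimport numpy as np\n\ndef grads_and_angle_loss(X, Y, U, V, W, i):\n    # gradients of the squared-error losses w.r.t. the row U_i\n    gx = -V.T @ (X[i] - V @ U[i])   # dL_X / dU_i\n    gy = -W.T @ (Y[i] - W @ U[i])   # dL_Y / dU_i\n    cos = gx @ gy / (np.linalg.norm(gx) * np.linalg.norm(gy))\n    return gx, gy, 1 - 0.5 * (cos + 1)   # angle loss L_angle\n\\end{verbatim}\n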
null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.3732394366, "max_line_length": 1010, "alphanum_fraction": 0.7176874923, "num_tokens": 2479, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.5610108773074729}}
{"text": "\\section{Preliminary Results}\n\\label{sec:preliminary}\n\\subsection{Test statistic and quantifying sensitivity}\n\\label{sec:sensitivity}\nThe CP mixing angle, \\PhiTau, acting as the parameter of interest, is measured in a binned maximum likelihood (ML) fit for each CP hypothesis, using the likelihood function [arxiv:1503.07622]: \n\\begin{equation}\n\t\\label{eq:binnedprofileLH}\n\t\\mathcal{L}(\\operatorname{\\textbf{x}};\\phi_{\\tau},\\boldsymbol{\\theta}) = \n\t\\prod_{i\\in \\operatorname{bins}} \\mathcal{P}(\\operatorname{\\textbf{x}_i}|\\mu\\cdot S_i (\\phi_{\\tau},\\boldsymbol{\\theta})+B_i (\\boldsymbol{\\theta}))\\times\\prod_{j\\in syst} \\mathcal{G}(\\theta_j^0)|\\theta_j,\\Delta\\theta_j)\n\\end{equation}\n$\\textbf{x}$ is a special, artificial data set called the \u201cAsimov data set\u201d explained below[arxiv:1007.1727]. $\\mu$ is the normalization factor of Higgs signal (NF H), $S_i$ and $B_i$ indicate the expectation value of signal and background events in bin $i$ respectively, therefore $\\mu\\cdot S_i (\\phi_{\\tau},\\boldsymbol{\\theta})+B_i (\\boldsymbol{\\theta})$ is the signal + background prediction. Moreover, $\\boldsymbol{\\theta}$ is a set of the nuisance parameter (NP). NPs following Gaussian distribution for each systematic $j$ with a standard derivation $\\pm1\\sigma$ and expectation value of $\\theta_j^0$ are obtained from Combined Performance working groups or data-driven methods. \n\nTo quantify the disagreement between our tested CP hypothesis and the data, we consider the profile likelihood ratio: \n\\begin{equation}\n\t\\label{}\n\t\\lambda(\\phi_{\\tau}) = \\frac{\\mathcal{L}(\\phi_{\\tau},\\hat{\\hat{\\boldsymbol{\\theta}}})}{\\mathcal{L}(\\hat{\\phi_{\\tau}},\\hat{\\boldsymbol{\\theta}})}\n\\end{equation}\nwhere the denominator indicates the best estimator of $\\phi_{\\tau}$ and $\\boldsymbol{\\theta}$ for which the $\\mathcal{L}$ is maximized (unconditional maximum-likelihood estimator), while $\\hat{\\hat{\\boldsymbol{\\theta}}}$ in the numerator denotes the value of $\\theta$ that maximizes $\\mathcal{L}$ given the specified \\PhiTau (conditional maximum-likelihood estimator). One can see $0 < \\lambda < 1$ with $\\lambda \\rightarrow 1$ implying good agreement between data and CP hypothesis. Conveniently, the negative log-likelihood (NLL) is defined as the test statistic $q$: \n\\begin{equation}\n\t\\label{}\n\tq = -2 \\ln \\lambda(\\phi_{\\tau}) =-2 \\ln \\frac{\\mathcal{L}(\\phi_{\\tau},\\hat{\\hat{\\boldsymbol{\\theta}}})}{\\mathcal{L}(\\hat{\\phi_{\\tau}},\\hat{\\boldsymbol{\\theta}})} =(\\frac{\\phi_{\\tau}-\\hat{\\phi}_{\\tau}}{\\sigma_{\\hat{\\phi}_{\\tau}}})^2 \n\\end{equation}\nThe NLL adopted in the analysis is equivalent to $q/2$. The frequentist central confidence interval (C.I.) 
$\\phi_{\\tau}\\in[\\hat{\\phi}_{\\tau} -N\\sigma_{\\hat{\\phi}_{\\tau}},\\hat{\\phi}_{\\tau} +N\\sigma_{\\hat{\\phi}_{\\tau}}]$ can be directly read off the NLL function using Neyman constructions [10.2307/91337]:\n\\begin{equation}\n\t\\label{eq:DNLL}\n\t\\Delta \\operatorname{NLL} = \\operatorname{NLL} - \\operatorname{NLL_{min}} = \\ln \\mathcal{L}(\\hat{\\phi}_{\\tau}) -\\ln \\mathcal{L}(\\hat{\\phi}_{\\tau}\\pm N\\sigma_{\\hat{\\phi}_{\\tau}}) = \\frac{1}{2}(\\frac{\\pm N\\sigma_{\\hat{\\phi}_{\\tau}}}{\\sigma_{\\hat{\\phi}_{\\tau}}})^2 = \\frac{N^2}{2}\n\\end{equation}\nThe 68\\% central confidence interval corresponds to one Gaussian standard deviation $\\sigma_{\\hat{\\phi}_{\\tau}}$, meaning that intervals constructed this way contain the true value of \\PhiTau in 68\\% of repeated experiments. To determine this C.I., i.e. to estimate the standard deviation $\\sigma_{\\hat{\\phi}_{\\tau}}$, the Asimov data set is used: an artificial data set in which the signal and background event counts are set equal to their expectation values. In this analysis, a Standard Model Asimov data set is used, which contains the pure \\CP-even signal plus the expected background, so the NLL curve attains its minimum at the best estimator $\\hat{\\phi}_\\tau= 0^\\circ$ (the SM hypothesis). Within the fitting region \\PhiTau $\\in [-90^\\circ, 90^\\circ]$, a set of 19 signal templates, spaced $10^\\circ$ apart in \\PhiTau, is created by reweighting the \\CP-flat VBFH, ggH, VH, and ttH signal samples produced through ATLAS simulations with TAUSPINNER [Przedzinski:2014pla].  \n\n%\\begin{figure}[h!]\n%\t\\begin{center}\n%\t\t\\includegraphics[width=0.7\\linewidth]{figures/results/Fit_model.png}\n%\t\\end{center}\n%\t\\caption{Schematic summary of the fit models used in the analysis. They are grouped by topologies (Boosted (green) and VBF (red)) and by decay channels (HH for \\tauhh channel and LH for \\taulh channel). }\n%\t\\label{fig:fit_model}\n%\\end{figure}\n\nIn this analysis, 16 signal regions (SRs) and 2 control regions (CRs) are taken as the fit model (8 SRs and 1 CR in \\taulh, and the same in \\tauhh). The normalization factors (NFs) are not constrained by the Poisson or Gaussian terms in Eq.~\\ref{eq:binnedprofileLH}; NF H is the NF of the \\htt signal samples, while NF Ztt is the NF of the \\ztt background samples. \n\n\\subsection{Expected Asimov fit results}\n\\label{sec:Asimovfit}\nThe $\\Delta$NLL curves and the expected \\CP-odd exclusion limits for the \\tauhh + \\taulh combined fit with only statistical uncertainties of the Asimov data set (stat-only) and with all statistical and systematic uncertainties (stat+syst) are shown in Fig.~\\ref{fig:expected_sensitivity} and Table~\\ref{tab:expected_sigma}. \n\nAs a result, according to the \\CP-odd exclusion limits with statistical and systematic uncertainties, one can reject the \\CP-odd hypothesis with a confidence of 1.90 $\\sigma$ on the SM Asimov data set. \n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.75\\linewidth]{figures/results/NLL_comparison.pdf}\n\t\\caption{Expected $\\Delta$NLL curves for the \\tauhh + \\taulh combined fit with stat-only (black dashed curve) and stat+syst (red dotted curve). 
\n%\t\tThe expected value for \\PhiTau is $0\\pm35^{\\circ}$ ($0\\pm32^{\\circ}$ for stat-only) at one standard deviation.\n\t}\n\t\\label{fig:expected_sensitivity}\n\\end{figure}\n\n\\begin{table}[h!]\n\t\\centering\n\t\\caption{Expected CP-odd exclusion limits for the \\tauhh + \\taulh combined fit with stat-only and stat+syst}\n\t\\begin{tabular}{lc}\n\t\t\\hline\n\t\t\\hline\n\t\tFit settings & \\CP-odd exclusion limits\\\\\n\t\t\\hline\n\t\tstat-only & 2.07$\\sigma$ \\\\\n\t\tstat+syst & 1.90$\\sigma$ \\\\\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab:expected_sigma}\n\\end{table}\n\n\\clearpage\n", "meta": {"hexsha": "72fd3304ffec1983ae4ac142e5e3bad83299f86e", "size": 6197, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "preliminaryresults.tex", "max_stars_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_stars_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "preliminaryresults.tex", "max_issues_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_issues_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "preliminaryresults.tex", "max_forks_repo_name": "beatrice-pan/ResearchProposal_TongPan", "max_forks_repo_head_hexsha": "29b94ac443c04e26301dd351c857ce1abe9780fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.2816901408, "max_line_length": 936, "alphanum_fraction": 0.7342262385, "num_tokens": 1862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8856314677809303, "lm_q2_score": 0.6334102775181399, "lm_q1q2_score": 0.5609680737859166}}
{"text": "%\n%\nLet the assumption with inputs $I_i$ and challenge $C$ be\n  defined as in Chapter~\\ref{assumption_def}.\nSince we require $\\lambda = k$ for decisional problems, the adversary\n  cannot win unless he computes $C$ from $I_1,\\ldots,I_n$ for both kinds of problems.\nIt is not hard to see that addition is useless to compute a monomial from monomials\n  and we only have to consider multiplications.\n\nMore precisely, the adversary can win iff there are $\\delta_i \\in \\mathbb{N}$ for $i \\in [n]$\n  such that holds that\\footnote{\n    Defining $S_1 * S_2 := \\{ s_1 * s_2 \\mid s_1 \\in S_1 \\land s_2 \\in S_2\\}$\n    and $S^\\delta = \\{ \\Pi_{i=1}^{\\delta} s_i \\mid s_1 \\in S \\land \\ldots \\land s_{\\delta} \\in S\\}$\n    as usual.\n  }\n$\n  C \\in (I_1^{\\delta_1} * \\cdots * I_n^{\\delta_n})\n$\n  and the group setting allows for the computation of the product on the right-hand-side, i.e., \n$\n  \\Sigma_{i=1}^n\\, \\delta_i * \\lambda_i = \\lambda \\text{.}\n$\n%\nIt is therefore sufficient to perform the following tasks to analyze such assumptions:\n\\begin{enumerate}\n\\item Compute a range expression $J$ that characterizes the set $I_1 * I_2$.\n\\item Check if $C \\in I$ for a a monomial $C$ and a range expression $I$.\n\\item Compute a range expression $J$ that characterizes the set $I^\\delta$ for a variable $\\delta$.\n\\end{enumerate}\nWe will now describe our approach to handle these tasks.\n\n\\paragraph{1.}%\nWe can rename all range indices apart and then perform the following computation\n$$\n  (\\forall \\vec{r} \\in \\vec{R}:\\, \\vec{X}^{\\vec{f}})\n  *  (\\forall \\vec{r'} \\in \\vec{R'}:\\, \\vec{X}^{\\vec{h}})\n=\n    (\\forall \\vec{r} \\in \\vec{R}, \\vec{r'} \\in \\vec{R'}:\\,\n         \\vec{X}^{\\vec{f} + \\vec{h}}) \\text{.}\n$$\n\n\\paragraph{2.}%\nLet $C := \\vec{X}^{\\vec{g}}$ and\n$I := \\forall \\range{1}, \\ldots, \\range{w}:\\, \\vec{X}^{\\vec{f}}$.\nWe can then define a translation into the following system of polynomial constraints:\n\\begin{align}\n \\alpha_i \\leq \\beta_i \\\\\n 0 \\leq{}& \\delta_i \\\\\n \\Sigma_{i=1}^n\\, \\delta_i * \\lambda_i ={}& \\lambda \\\\\n \\alpha_j \\leq r_j \\leq{}& \\beta_j \\\\\n k >{}& i \\quad \\text{for all levels $k - i$} \\\\\n f_i ={}& g_i\n\\end{align}\nThe constraint system is over the integer variables $k$, $\\vec{l}$,\n  $r_{j}$, and $\\delta_i$.\n%Using Z3 we can prove unsatisfiability or find models.\n%Note that there are classes of inputs for which\n%  the satisfiability problem is decidable.\n%\\begin{framed}\n%  \\noindent {\\bf FIXME:} We probably have to allow for user-defined constraints\n%  (such as $k > 2$) in the problem definition. 
%Using Z3 we can prove unsatisfiability or find models.\n%Note that there are classes of inputs for which\n%  the satisfiability problem is decidable.\n%\\begin{framed}\n%  \\noindent {\\bf FIXME:} We probably have to allow for user-defined constraints\n%  (such as $k > 2$) in the problem definition. We might also have to check\n%  well-formedness of the problem, e.g., exponents of $X_i$ always positive.\n%\\end{framed}\n\n\\paragraph{3.}%\nLet $I$ be defined as above.\nIf $f$ is linear in $\\vec{r}$ (considering $k$ and $\\vec{l}$\n  as constants), then we can write $\\vec{f}(k,\\vec{l},\\vec{r})$\n  as $r_1*\\phi_1(k,\\vec{l}) + \\ldots + r_w*\\phi_w(k,\\vec{l}) + \\psi(k,\\vec{l})$.\nIn this case, $I^{\\delta}$ is characterized by the following range expression:\n$$\n    \\forall r_1 \\in \\brack{\\delta\\alpha_1, \\delta\\beta_1},\n            \\ldots,\n            r_w \\in \\brack{\\delta\\alpha_w, \\delta\\beta_w}:\n            \\vec{X}^{r_1*\\vec{\\phi}_1(k,\\vec{l}) + \\ldots + r_w*\\vec{\\phi}_w(k,\\vec{l})\n                     + \\delta*\\vec{\\psi}(k,\\vec{l})} \\text{.}\n$$\n\n\\input{range_expr_pow_proof}\n", "meta": {"hexsha": "709d565efaa1ba1158f902154ae4269701fb9820", "size": 3285, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/constraint_generation.tex", "max_stars_repo_name": "generic-group-analyzer/gga", "max_stars_repo_head_hexsha": "75d362fb3db4cc34b8e3fc7e6d76d8d31068457f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-08-17T11:00:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-14T14:00:14.000Z", "max_issues_repo_path": "doc/constraint_generation.tex", "max_issues_repo_name": "generic-group-analyzer/gga", "max_issues_repo_head_hexsha": "75d362fb3db4cc34b8e3fc7e6d76d8d31068457f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/constraint_generation.tex", "max_forks_repo_name": "generic-group-analyzer/gga", "max_forks_repo_head_hexsha": "75d362fb3db4cc34b8e3fc7e6d76d8d31068457f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.0625, "max_line_length": 99, "alphanum_fraction": 0.6447488584, "num_tokens": 1126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8519528019683106, "lm_q2_score": 0.658417487156366, "lm_q1q2_score": 0.5609406230478002}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\\markright{fmt}\n\\section*{\\hspace*{-1.6cm} fmt}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nFast Mellin Transform.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[mellin,beta] = fmt(x)\n[mellin,beta] = fmt(x,fmin,fmax)\n[mellin,beta] = fmt(x,fmin,fmax,N)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty fmt} computes the Fast Mellin Transform of signal {\\ty x}.\\\\\n \n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty x} & signal in time\\\\\n        {\\ty fmin, fmax} & respectively lower and upper frequency bounds of \n         the analyzed signal. These parameters fix the equivalent \n         frequency bandwidth (expressed in Hz). When unspecified, you\n         have to enter them at the command line from the plot of the\n         spectrum. {\\ty fmin} and {\\ty fmax} must be between 0 and 0.5\\\\     \n        {\\ty N} & number of analyzed voices. {\\ty N} must be even &\n\tauto\\footnote{This value, determined from {\\ty fmin} and {\\ty fmax}, is the \n\tnext-power-of-two of the minimum value checking the non-overlapping\n\tcondition in the fast Mellin transform.}\\\\\n\\hline  {\\ty mellin} & the {\\ty N}-points Mellin transform of signal {\\ty x}\\\\\n        {\\ty beta} & the {\\ty N}-points Mellin variable\\\\ \n\\hline\n\\end{tabular*}\n\\vspace*{.5cm}\n\nThe Mellin transform is invariant in modulus to dilations, and decomposes\nthe signal on a basis of hyperbolic signals. This transform can be defined\nas\\,:\n\\[M_x(\\beta)=\\int_0^{+\\infty} x(\\nu)\\ \\nu^{j2\\pi \\beta-1}\\ d\\nu\\]\nwhere $x(\\nu)$ is the Fourier transform of the analytic signal\ncorresponding to $x(t)$. The $\\beta$-parameter can be interpreted as a {\\it\nhyperbolic modulation rate}, and has no dimension\\,; it is called the {\\it\nMellin's scale}. \n\nIn the discrete case, the Mellin transform can be calculated rapidly using\na fast Fourier transform ({\\ty fft}). The fast Mellin transform is  used,\nfor example, in the computation of the affine time-frequency distributions.\n\\end{minipage}\n\n%\\newpage\n\n{\\bf \\large \\sf Example}\n\\begin{verbatim}\n         sig=altes(128,0.05,0.45); \n         [mellin,beta]=fmt(sig,0.05,0.5,128);\n         plot(beta,real(mellin));\n\\end{verbatim}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nifmt, fft, ifft.\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf References}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n[1] J. Bertrand, P. Bertrand, J-P. Ovarlez ``Discrete Mellin Transform for\nSignal Analysis'' Proc IEEE-ICASSP, Albuquerque, NM USA, 1990.\\\\\n\n[2] J-P. Ovarlez, J. Bertrand, P. 
Bertrand ``Computation of Affine\nTime-Frequency Representations Using the Fast Mellin Transform'' Proc\nIEEE-ICASSP, San Francisco, CA USA, 1992.\n\\end{minipage}\n\n\n", "meta": {"hexsha": "cff43915724816ad4617eca261bdd2133c48f6a0", "size": 3183, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/fmt.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/fmt.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/fmt.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 30.0283018868, "max_line_length": 78, "alphanum_fraction": 0.6798617656, "num_tokens": 1057, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943822145998, "lm_q2_score": 0.7401743735019594, "lm_q1q2_score": 0.5608999820989957}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{fmodany}\n\\section*{\\hspace*{-1.6cm} fmodany}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nSignal with arbitrary frequency modulation.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[y,iflaw] = fmodany(iflaw)\n[y,iflaw] = fmodany(iflaw,t0)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty fmodany} generates a frequency modulated signal whose\n        instantaneous frequency law is approximately given by the vector\n        {\\ty iflaw} (the integral is approximated by {\\ty cumsum}).  The\n        phase of this modulation is such that {\\ty y(t0)=1}.\\\\\n  \n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n        {\\ty iflaw} & vector of the instantaneous frequency law samples\\\\\n        {\\ty t0}    & time reference          & {\\ty 1}\\\\\n  \\hline {\\ty y}     & output signal\\\\\n\n\\hline\n\\end{tabular*}\n\n\\end{minipage}\n\\vspace*{1cm}\n\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n         [y1,ifl1]=fmlin(100); [y2,ifl2]=fmsin(100);\n         iflaw=[ifl1;ifl2]; sig=fmodany(iflaw); \n         subplot(211); plot(real(sig))\n         subplot(212); plot(iflaw); \n\\end{verbatim}\nThis example shows a signal composed of two successive frequency\nmodulations\\,: a linear FM followed by a sinusoidal FM.\\\\\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf See Also}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\nfmconst, fmlin, fmsin, fmpar, fmhyp, fmpower.\n\\end{verbatim}\n\\end{minipage}\n\n\n\n", "meta": {"hexsha": "469e0dbfc11d415517b78319aed0533e70f2f136", "size": 1921, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tftb/refguide/fmodany.tex", "max_stars_repo_name": "sangyoonHan/extern", "max_stars_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2018-03-28T01:50:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T07:24:14.000Z", "max_issues_repo_path": "tftb/refguide/fmodany.tex", "max_issues_repo_name": "sangyoonHan/extern", "max_issues_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tftb/refguide/fmodany.tex", "max_forks_repo_name": "sangyoonHan/extern", "max_forks_repo_head_hexsha": "a3c874538a7262b895b60d3c4d493e5b34cf81f8", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2018-03-28T01:50:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T09:09:40.000Z", "avg_line_length": 23.4268292683, "max_line_length": 73, "alphanum_fraction": 0.6548672566, "num_tokens": 685, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5608999609716875}}
{"text": "\\documentclass[10pt, reqno]{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n%american mathematical society - AMS \n\n\\usepackage{hyperref}\n\\usepackage{float}\n\\usepackage{enumerate}\n\\usepackage{graphicx}\n\\usepackage{wrapfig}\n\\usepackage{subfig}\n\n\\oddsidemargin = 0in\n\\textwidth=6.5in\n\n\\numberwithin{equation}{section} \n\n\\numberwithin{figure}{section}\n\n\\title{Calculating Volume from an Implicit Function }\n\\author{Johnny Minor}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nIn everyday life we may never notice the intricacies of something so simple as a volume. When at the grocery store we can pick up a can of soup and be greeted immediately with the volume of the can in our hand. However, a clever mathematician or engineer had to calculate the volume to make our life easier. Otherwise one might find many seasoned professors looking contemplative in isle seven at Safeway as their mind churns the numbers to find the volume of the can of soup. In this particular paper we will concern ourselves with describing the method of finding the volume of an egg shaped wine fermentation tank. \n\n\n\n\\section{Background}\n\\label{sec:background}\n\nThe particular tank that we are interested in calculating the volume of is a tank manufactured by the Sonoma Stone company. \nAccording to their website the tank has a volume of 476 gallons. \\cite{ref:sonoma}\n\nThis value is very helpful to guide us as we calculate the volume. It's generally good practice to have a reference value or expression when doing computations. So you're not just blindly clutching at straws. Really it's nice because often times when programming numerical methods you're first attempt probably isn't correct. Without a value to compare to it's very difficult to decipher if you're program is working correctly. \n\nThe equation which describes a cross section of the tank is given to be \n\\begin{equation}\n0.0002017404542x^2 + \\frac{0.0001303290910y}{20.9520444748+\\alpha y} = 1.\n\\end{equation}\n\n\\noindent Where $x$ and $y$ are their usual $x$ and $y$ coordinates(measure in centimeters) and $\\alpha$ is a constant that changes the overall roundness of the cross section. A lower value of $\\alpha$ corresponds to a rounder cross section. A high value corresponds to a less circular cross section, or a more \"smushed\" shape. For our example the $\\alpha$ is equal to $0.005$. \n\nA natural progression would be to wonder what this unfamiliar equation looks like when plotted. It can be seen that it looks very similar to an egg. %TODO: include cross section of function HERE!!! \n\n\\begin{figure}[H] % 'h' means put it here and 'ht' means that put it at top the and here. 'ht!' put it here now! \n\\centering\n\\includegraphics[width=0.5\\textwidth]{cross_section.png}\n\\caption{What a cross section of our function looks like.}\n%\\label{fig:cross-seciton}\n\\end{figure}\n\n\\noindent And this egg shaped equation spun around the $y$ axis what we hope to solve for. \n\n\n\\section{Mathematical Method}\n\nIt seems like an insurmountable task to a compute a volume integral at first glance of this function. And it would be quite a formidable task without a little help from our friends in form of numpy and scipy. Well, as Professor Cooper says frequently: \"We're mathematicians, we know how to do this(paraphrased).\" Let's get on it with it then. \n\nAs we learned in our calculus courses. 
we can use the disc method to integrate our function. The idea of the disc method is to slice the solid into discs: each slice is approximately a circle, and since we know the area of a circle to be $\\pi R^2$, where $R$ is the radius of the circle, summing an infinite number of such discs becomes an integral. This integral will take the form \n\\begin{equation}\n\\label{eq:formula}\n\\mathrm{Volume}=\\pi\\int_a^b [R(y)]^2\\ \\mathrm{d}y, \n\\end{equation}\n\n\\noindent where $R(y)$ is our function and $a$ and $b$ are our limits of integration. However, we quickly run into a problem: the cross-section equation from Section~\\ref{sec:background} is implicit, so equation \\ref{eq:formula} cannot be applied directly. We first need to solve for $x$ as a function of $y$. This can be done easily with Sympy, but it is an important step in the process nevertheless. \n\nIt is often nice to have a visualization of what's going on. A visual representation of the disc method looks like this: \n\n\\begin{figure}[H] % 'h' means put it here and 'ht' means that put it at top the and here. 'ht!' put it here now! \n\\centering\n\\includegraphics[width=0.4\\textwidth]{Disc_integration.png}\n\\caption{What a cross section integration looks like.\\cite{ref:cross_section}}\n\\label{fig:cross-seciton}\n\\end{figure}\n\nIn figure \\ref*{fig:cross-seciton} the function is being rotated around the $y$ axis. \nOnce the function is solved for $x$ as a function of $y$, we can use Sympy to carry out the integration for us. This is an easy task for the computer and yields the volume that we're interested in. \n
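\nAs a sketch of how this computation might look (illustrative only; the integration limits $a$ and $b$ below are hypothetical placeholders that would be read off the plot of the cross section, not values from this paper):\n\\begin{verbatim}\nimport sympy as sp\n\ny = sp.symbols('y', real=True)\nalpha = 0.005\n\n# Rearranging the implicit cross-section equation for x**2 = R(y)**2:\nR2 = (1 - 0.0001303290910*y / (20.9520444748 + alpha*y)) / 0.0002017404542\n\na, b = -60, 80  # hypothetical limits (cm): bottom and top of the egg\nvolume_cm3 = sp.pi * sp.integrate(R2, (y, a, b))  # Volume = pi * int R^2 dy\nprint(float(volume_cm3) / 3785.411784, 'gallons')  # 1 US gal = 3785.41 cm^3\n\\end{verbatim}\n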
\n\\section{Conclusion}\nUsing Sympy and Numpy we were easily able to calculate the volume of the egg-shaped tank to be 475.82 gallons, which is slightly less than the advertised 476 gallons. In this volume we would expect 2401.59 bottles of 750 mL each. This isn't a perfectly even number of bottles of wine, but I would speculate there is some quality control testing that takes place before the tank is bottled for consumption. \n\nFurther in our analysis, we varied the value of $\\alpha$ and then calculated the corresponding volume of the tank from the given cross-section equation. \n\n\\begin{figure}[H] % 'h' means put it here and 'ht' means that put it at top the and here. 'ht!' put it here now! \n\\centering\n\\includegraphics[width=0.8\\textwidth]{alpha_volume_plot.png}\n\\caption{Varying $\\alpha$ and calculating the volume}\n\\label{fig:alpha_volume}\n\\end{figure}\n\nThis is an interesting plot, and it behooves us to do a simple analysis. Figure \\ref{fig:alpha_volume} shows that there are actually values of $\\alpha$ that would yield a greater volume--interesting. The engineering intentions come to the surface and we might speculate that the egg shape was chosen to minimize floor space while maximizing volume. Perhaps this shape was chosen to help aid in the fermentation process as well. \n\nSpeculation aside, it is an extraordinary feat of mathematics to be able to carry out a calculation via a computer that even the most clever mathematician could only hope to be able to do a mere 400 years ago. \n\n\n\n\\begin{thebibliography}{XX}\n\n\\bibitem{ref:cross_section} %when we refer to it by name we use this name. \nPetteri Aimonen (\\url{https://en.wikipedia.org/wiki/File:Disc_integration.svg}) [Public Domain], via Wikimedia Commons\n\n\\bibitem{ref:sonoma} %when we refer to it by name we use this name. \nSonoma Cast Stone (\\url{http://www.concretewinetanks.com/concrete-egg-tank.html}) \\today\n\n\\end{thebibliography} \n\n\n\n\n\\end{document}", "meta": {"hexsha": "ebf5d2eb9d3e4de6e717ada163ad55bfe64818b3", "size": 6856, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math300/homeworkb.tex", "max_stars_repo_name": "johnnydevriese/wsu_courses", "max_stars_repo_head_hexsha": "b55efd501c2d8f0651891f422a486e32533f5aa0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "math300/homeworkb.tex", "max_issues_repo_name": "johnnydevriese/wsu_courses", "max_issues_repo_head_hexsha": "b55efd501c2d8f0651891f422a486e32533f5aa0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math300/homeworkb.tex", "max_forks_repo_name": "johnnydevriese/wsu_courses", "max_forks_repo_head_hexsha": "b55efd501c2d8f0651891f422a486e32533f5aa0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1016949153, "max_line_length": 618, "alphanum_fraction": 0.7750875146, "num_tokens": 1720, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5608999609716875}}
{"text": "\\documentclass[main.tex]{subfiles}\n\n\\begin{document}\n\n\\subsection{Training}\n\nWe trained classifiers to solve the problem with machine learning algorithms: Logistic Regression, Support Vector Machine, Fisher Discriminant Analysis, and Neural Networks.\nFor each algorithm, we check the speed of training, inference speed, and translation invariance under the same condition.\nIn the following sub-sections, we will describe how we used each algorithm.\n\n\\subsubsection{Logistic Regression}\nLogistic Regression is designed for binary classification tasks. Given an input vector, it outputs yes or no.\nTherefore, we need to adopt a heuristic, one-vs-rest, because our task is multi-class classification.\nIt divides multi-class dataset into several binary classification problems.\nA logistic regression model is then optimized on each binary classification problem.\nFinally, we use the Limited Memory BFGS (LBFGS) algorithm to optimize the model by minimizing the objective function following.\n\n\\begin{equation}\n\tL(\\mathbf{w}, b) = \\sum_{i=1}^{N} \\log \\left(\\exp\\left(-y_{i} \\left(\\mathbf{x}_i^{\\top}\\mathbf{w} + b\\right)\\right) + 1\\right) + \\frac{1}{2} \\mathbf{w}^{\\top}\\mathbf{w}\n\\end{equation}\n\n\n\\subsubsection{Support Vector Machine}\nSupport Vector Machine is also a binary classifier.\nHence, we used one-vs-rest heuristic for the same reason as in logistic regression.\nAnd we experimented with a few different kernel functions: Linear kernel, Gaussian kernel, Sigmoid kernel, and Polynomial kernel with degree 2 and 3.\n\n\n\\subsubsection{Fisher Discriminant Analysis}\nFisher Discriminant Analysis, unlike the previous two algorithms, allows for multi-class classification.\nAnd it has closed-form solution rather than using iterative method.\nWe used least square solver to fit the model.\n\n\n\\subsubsection{Multi Layer Perceptron}\nWe design a network with the architecture following:\n\\begin{enumerate}[(1)]\n\t\\item A fully connected layer to a hidden layer of size 64\n\t\\item A rectifier non-linearity\n\t\\item A fully connected layer to a hidden layer of size 32\n\t\\item A rectifier non-linearity\n\t\\item A fully connected layer that outputs a vector of size 10, corresponding to logit probabilities for each class\n\\end{enumerate}\n\nThe model parameters $\\theta$ are updated by gradient descent using Adam optimizer on the loss function following:\n\n\\begin{equation} \\label{crossentropy}\n\tL(\\theta) = - \\mathbf{t}^{\\top} \\log \\mathbf{p} + 10^{-4} \\times \\|\\theta\\| ^2\n\\end{equation}\n\n\n\\subsubsection{Convolutional Neural Network}\nUnlike other algorithms above, convolutional neural network gets the grayscale of $8 \\times 8 \\times 1$ itself as input without flattening.\nThe input data are processed by the feature extractor.\n\nThe feature extractor applies the following modules:\n\\begin{enumerate}[(1)]\n\t\\item A convolutional of $16$ filters of kernel size $3 \\times 3$ with stride 1 and zero-padding.\n\t\\item A rectifier non-linearity\n\t\\item A max-pooling of kernel size $2 \\times 2$ with stride 2\n\t\\item A convolutional of $32$ filters of kernel size $3 \\times 3$ with stride 1 and zero-padding.\n\t\\item A rectifier non-linearity\n\t\\item A max-pooling of kernel size $2 \\times 2$ with stride 2\n\\end{enumerate}\n\nThe output of the feature extractor is passed into a fully connected layer that outputs a vector of size 10, corresponding to logit probabilities for each class.\nWe optimized the model in the same way as in 
\n\\subsubsection{Support Vector Machine}\nThe Support Vector Machine is also a binary classifier.\nHence, we used the one-vs-rest heuristic for the same reason as in logistic regression.\nWe experimented with a few different kernel functions: Linear kernel, Gaussian kernel, Sigmoid kernel, and Polynomial kernel with degree 2 and 3.\n\n\n\\subsubsection{Fisher Discriminant Analysis}\nFisher Discriminant Analysis, unlike the previous two algorithms, allows for multi-class classification directly.\nIt also has a closed-form solution rather than requiring an iterative method.\nWe used a least-squares solver to fit the model.\n\n\n\\subsubsection{Multi Layer Perceptron}\nWe design a network with the following architecture:\n\\begin{enumerate}[(1)]\n\t\\item A fully connected layer to a hidden layer of size 64\n\t\\item A rectifier non-linearity\n\t\\item A fully connected layer to a hidden layer of size 32\n\t\\item A rectifier non-linearity\n\t\\item A fully connected layer that outputs a vector of size 10, corresponding to logit probabilities for each class\n\\end{enumerate}\n\nThe model parameters $\\theta$ are updated by gradient descent using the Adam optimizer on the following loss function:\n\n\\begin{equation} \\label{crossentropy}\n\tL(\\theta) = - \\mathbf{t}^{\\top} \\log \\mathbf{p} + 10^{-4} \\times \\|\\theta\\| ^2\n\\end{equation}\n\n\n\\subsubsection{Convolutional Neural Network}\nUnlike the algorithms above, the convolutional neural network takes the $8 \\times 8 \\times 1$ grayscale image itself as input, without flattening.\nThe input data are processed by a feature extractor.\n\nThe feature extractor applies the following modules:\n\\begin{enumerate}[(1)]\n\t\\item A convolutional layer of $16$ filters of kernel size $3 \\times 3$ with stride 1 and zero-padding\n\t\\item A rectifier non-linearity\n\t\\item A max-pooling of kernel size $2 \\times 2$ with stride 2\n\t\\item A convolutional layer of $32$ filters of kernel size $3 \\times 3$ with stride 1 and zero-padding\n\t\\item A rectifier non-linearity\n\t\\item A max-pooling of kernel size $2 \\times 2$ with stride 2\n\\end{enumerate}\n\nThe output of the feature extractor is passed into a fully connected layer that outputs a vector of size 10, corresponding to logit probabilities for each class.\nWe optimized the model in the same way as in the case of the multi-layer perceptron.\n\n\\subsection{Validation}\nTo measure the accuracy of each model and check whether the model is overfitting, we first split the MNIST dataset into two subsets.\nOne is the training set. It is used to update the parameters of the models.\nThe other is the validation set, which consists of 200 data pairs (20 samples per class, about 11\\% of the total data). It is not used to train the models directly.\nThe two sets do not share any samples.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.5]{img/experiment/translated_img.png}\n\t\n\t\\caption{Mean image of images in the translated validation set. Compared to Fig. \\ref*{edafig}-(c), they are skewed to the left or right.}\n\t\\label{translation}\n\\end{figure}\n\nTo test translation robustness, we also created a new validation set, the translated validation set.\nIt consists of the images of the validation set translated left or right: 400 images in total, 200 shifted left and 200 shifted right.\nWe do not include this kind of data in the training set; in other words, the models see only centered images during the training phase.\n\n\\end{document}\n", "meta": {"hexsha": "10627a5c2b7184b20bb9e5b3fbfffc09ee161d2b", "size": 4599, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/experiments.tex", "max_stars_repo_name": "frechele/CSE4007", "max_stars_repo_head_hexsha": "a691df9ec03a200d8a40017083abb38b55216c6c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/experiments.tex", "max_issues_repo_name": "frechele/CSE4007", "max_issues_repo_head_hexsha": "a691df9ec03a200d8a40017083abb38b55216c6c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/experiments.tex", "max_forks_repo_name": "frechele/CSE4007", "max_forks_repo_head_hexsha": "a691df9ec03a200d8a40017083abb38b55216c6c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.1, "max_line_length": 173, "alphanum_fraction": 0.7819091107, "num_tokens": 1112, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760728, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.5608999606771492}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{graphicx}\n\\usepackage{enumerate}\n\\usepackage{amsmath}\n\n\\setlength{\\oddsidemargin}{.1in}\n\\setlength{\\textwidth}{6.3in}\n\\setlength{\\textheight}{8.9in}\n\\setlength{\\topmargin}{-.5in}\n\n\\title{Simulating the dynamics of the pendulum and cart}\n\\date{}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Kinematics}\n\nConsider a cart of mass $M$, motorized and free to move along the x-axis upon a frictionless track. A pendulum bob of mass $m$ is attached to the cart by a rigid, massless, freely swinging rod of length $l$. Let's start by writing down the kinematics---that is, the position, velocity, and acceleration---for each of the cart and bob.\n\nFor the cart,\n\n\\begin{align}\n\\mbox{position:} &\\quad x \\mathbf{\\hat{x}} \\nonumber\\\\[5pt]\n\\mbox{velocity:} &\\quad x' \\mathbf{\\hat{x}} \\nonumber\\\\[5pt]\n\\mbox{acceleration:} &\\quad x'' \\mathbf{\\hat{x}}. \\nonumber\n\\end{align}\n\nAnd for the bob,\n\n\\begin{align}\n\\mbox{position:} &\\quad (x + l \\sin{\\theta}) \\mathbf{\\hat{x}} + l \\cos{\\theta} \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{velocity:} &\\quad (x' + l \\theta' \\cos{\\theta}) \\mathbf{\\hat{x}} - l \\theta' \\sin{\\theta} \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{acceleration:} &\\quad (x'' + l \\theta'' \\cos{\\theta} - l \\theta'^2 \\sin{\\theta}) \\mathbf{\\hat{x}} - (l \\theta'' \\sin{\\theta} + l \\theta'^2 \\cos{\\theta}) \\mathbf{\\hat{y}}. \\nonumber\n\\end{align}\n\n\\section{Forces}\n\nWe can list the forces acting on each of the cart and bob as well (see Fig. \\ref{fig:free_body}).\n\n\\begin{figure}\n\\center\n\\includegraphics[width=0.9\\linewidth]{free_body.png}\n\\caption{Free body diagram for our setting of the cart and pendulum.}\n\\label{fig:free_body}\n\\end{figure}\n\nThe cart experiences\n\n\\begin{align}\n\\mbox{force due to gravity:} &\\quad {-M} g \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{rod tension:} &\\quad T \\sin{\\theta} \\mathbf{\\hat{x}} + T \\cos{\\theta} \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{normal force on track:} &\\quad N \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{motor force:} &\\quad f \\mathbf{\\hat{x}}. \\nonumber\n\\end{align}\n\nAnd the bob experiences\n\n\\begin{align}\n\\mbox{force due to gravity:} &\\quad -m g \\mathbf{\\hat{y}} \\nonumber\\\\[5pt]\n\\mbox{rod tension:} &\\quad {-T} \\sin{\\theta} \\mathbf{\\hat{x}} - T \\cos{\\theta} \\mathbf{\\hat{y}}. 
\\nonumber\n\\end{align}\n\n\\section{Equations of motion}\n\nUsing Newton's law $\\mathbf{F_{net}} = m \\mathbf{x''}$ to relate forces with kinematics, we get the two expressions which follow.\n\nFor the cart,\n\n\\begin{equation}\nT \\sin{\\theta} \\mathbf{\\hat{x}} + f \\mathbf{\\hat{x}} - M g \\mathbf{\\hat{y}} + T \\cos{\\theta} \\mathbf{\\hat{y}} + N \\mathbf{\\hat{y}} = M x'' \\mathbf{\\hat{x}}.\n\\end{equation}\n\nAnd for the bob,\n\n\\begin{equation}\n-T \\sin{\\theta} \\mathbf{\\hat{x}} - m g \\mathbf{\\hat{y}} - T \\cos{\\theta} \\mathbf{\\hat{y}} = m x'' \\mathbf{\\hat{x}} + m l \\theta'' \\cos{\\theta} \\mathbf{\\hat{x}} - m l \\theta'^2 \\sin{\\theta} \\mathbf{\\hat{x}} - m l \\theta'' \\sin{\\theta} \\mathbf{\\hat{y}} - m l \\theta'^2 \\cos{\\theta} \\mathbf{\\hat{y}}.\n\\end{equation}\n\nSeparating these two equations into vector components gives four equations.\n\n\\begin{align}\n\\label{eq:a} \\mbox{cart x:} &\\quad T \\sin{\\theta} + f = M x'' \\\\[5pt]\n\\mbox{cart y:} &\\quad {-M} g + T \\cos{\\theta} + N = 0 \\\\[5pt]\n\\label{eq:b} \\mbox{bob x:} &\\quad {-T} \\sin{\\theta} = m x'' + m l \\theta'' \\cos{\\theta} - m l \\theta'^2 \\sin{\\theta} \\\\[5pt]\n\\label{eq:c} \\mbox{bob y:} &\\quad {-m} g - T \\cos{\\theta} = -m l \\theta'' \\sin{\\theta} - m l \\theta'^2 \\cos{\\theta}.\n\\end{align}\n\nAdding together equations (\\ref{eq:a}) and (\\ref{eq:b}) to eliminate $T$ gives\n\n\\begin{equation}\n\\label{eq:d} f = (M + m) x'' + m l \\theta'' \\cos{\\theta} - m l \\theta'^2 \\sin{\\theta}.\n\\end{equation}\n\nMultiplying (\\ref{eq:b}) through by $\\cos{\\theta}$ and multiplying (\\ref{eq:c}) through by $-\\sin{\\theta}$ and then adding these together gives\n\n\\begin{equation}\n\\label{eq:e} m g \\sin{\\theta} = m x'' \\cos{\\theta} + m l \\theta''.\n\\end{equation}\n\nOur two equations of motion are (\\ref{eq:d}) and (\\ref{eq:e}). However, these must be rewritten to isolate $x''$ and $\\theta''$ in each equation. This can be done algebraically or using \\textit{Mathematica}, yielding\n\n\\begin{align}\nx'' &= \\frac{f + m l \\theta'^2 \\sin{\\theta} - m g \\sin{\\theta} \\cos{\\theta}}{M + m - m \\cos^2{\\theta}} \\\\[5pt]\n\\theta'' &= \\frac{-f \\cos{\\theta} + (M + m) g \\sin{\\theta} - m l \\theta'^2 \\sin{\\theta} \\cos{\\theta}}{l (M + m - m \\cos^2{\\theta})}.\n\\end{align}\n\nWe can turn these two second-order ODEs into four first-order ODEs, yielding a form which will be suitable for numerical simulation via the Runge-Kutta method:\n\n\\begin{align}\nx' &= v \\\\[5pt]\n\\theta' &= \\omega \\\\[5pt]\nv' &= \\frac{f + m l \\omega^2 \\sin{\\theta} - m g \\sin{\\theta} \\cos{\\theta}}{M + m - m \\cos^2{\\theta}} \\\\[5pt]\n\\omega' &= \\frac{-f \\cos{\\theta} + (M + m) g \\sin{\\theta} - m l \\omega^2 \\sin{\\theta} \\cos{\\theta}}{l (M + m - m \\cos^2{\\theta})}.\n\\end{align}\n\nThese are the equations of motion we will use for numerical simulation.\n\n\\section{Runge-Kutta update rule}\nNow we will work out the update rules for simulating via the Runge-Kutta method. 
We can represent the system with a 5-dimensional state vector\n\n\\begin{align}\n\\mathbf{x} = (t, x, \\theta, v, \\omega) \\in {\\rm I\\!R}^5, \\quad \\mbox{where, equivalently,} \\\\[5pt]\nx_1 = t \\nonumber\\\\[5pt]\nx_2 = x \\nonumber\\\\[5pt]\nx_3 = \\theta \\nonumber\\\\[5pt]\nx_4 = v \\nonumber\\\\[5pt]\nx_5 = \\omega. \\nonumber\n\\end{align}\n\nSo, our equations of motion become\n\n\\begin{align}\n\\frac{d}{dt} x_1 &= f_1(\\mathbf{x}) = 1 \\tag*{(passage of time; $\\frac{dt}{dt} = 1$)} \\nonumber\\\\[5pt]\n\\frac{d}{dt} x_2 &= f_2(\\mathbf{x}) = x_4 \\tag*{(change in cart position; $\\frac{dx}{dt} = v$)} \\\\[5pt]\n\\frac{d}{dt} x_3 &= f_3(\\mathbf{x}) = x_5 \\tag*{(change in pendulum angle; $\\frac{d\\theta}{dt} = \\omega$)} \\\\[5pt]\n\\frac{d}{dt} x_4 &= f_4(\\mathbf{x}) = \\frac{f + m l {x_5}^2 \\sin{x_3} - m g \\sin{x_3} \\cos{x_3}}{M + m - m \\cos^2{x_3}} \\tag*{(change in cart velocity; $\\frac{dv}{dt} = a$)} \\\\[5pt]\n\\frac{d}{dt} x_5 &= f_5(\\mathbf{x}) = \\frac{-f \\cos{x_3} + (M + m) g \\sin{x_3} - m l {x_5}^2 \\sin{x_3} \\cos{x_3}}{l (M + m - m \\cos^2{x_3})} \\tag*{(change in pendulum angular velocity; $\\frac{d\\omega}{dt} = \\alpha$).}\n\\end{align}\n\nThis can be expressed even more generally as\n\n\\begin{equation}\n\\frac{d}{dt} \\mathbf{x} = \\mathbf{f}(\\mathbf{x})\n\\end{equation}\n\nwhere $\\mathbf{f}$ is a loosely defined ``vector of functions'' $\\mathbf{f} = (f_1, f_2, f_3, f_4, f_5)$, each accepting the system state $\\mathbf{x}$ as an argument and returning the rate of change of a particular state variable.\n\nThe Runge-Kutta method, using a step of size $h$, numerically approximates four intermediate values of the state as it advances from $\\mathbf{x}_i$ to $\\mathbf{x}_{i+1}$\n\n\\begin{align}\n\\mathbf{k}_1 &= \\mathbf{f}(\\mathbf{x}_i) \\\\[5pt]\n\\mathbf{k}_2 &= \\mathbf{f}(\\mathbf{x}_i + \\frac{h}{2} \\mathbf{k}_1) \\\\[5pt]\n\\mathbf{k}_3 &= \\mathbf{f}(\\mathbf{x}_i + \\frac{h}{2} \\mathbf{k}_2) \\\\[5pt]\n\\mathbf{k}_4 &= \\mathbf{f}(\\mathbf{x}_i + h \\mathbf{k}_3),\n\\end{align}\n\nand then returns a weighted average as the result:\n\n\\begin{equation}\n\\mathbf{x}_{i+1} = \\mathbf{x}_i + \\frac{h}{6}(\\mathbf{k}_1 + 2 \\mathbf{k}_2 + 2 \\mathbf{k}_3 + \\mathbf{k}_4).\n\\end{equation}\n\nThis is the update rule for advancing the simulation by a step $h$.\n\n\\end{document}", "meta": {"hexsha": "ebd45f9cd1482236d52ea965b946c43d3c9312d3", "size": 7242, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "0-sim/writeup/0-sim.tex", "max_stars_repo_name": "lukearend/swirl", "max_stars_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "0-sim/writeup/0-sim.tex", "max_issues_repo_name": "lukearend/swirl", "max_issues_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0-sim/writeup/0-sim.tex", "max_forks_repo_name": "lukearend/swirl", "max_forks_repo_head_hexsha": "83e105f3f6b656a02ecb94bf100393ff355e291f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.3652694611, "max_line_length": 334, "alphanum_fraction": 0.6425020713, "num_tokens": 2726, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.560899957217487}}
{"text": "\\documentclass{amsart}\n\n\\usepackage{amssymb, amsfonts, hyperref, graphicx, bbm}\n\\usepackage{enumerate}\n\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{cor}[thm]{Corollary}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{lem}[thm]{Lemma}\n\n\\theoremstyle{definition}\n\\newtheorem{defn}[thm]{Definition}\n\\newtheorem{defns}[thm]{Definitions}\n\\newtheorem{con}[thm]{Construction}\n\\newtheorem{exmp}[thm]{Example}\n\\newtheorem{exmps}[thm]{Examples}\n\\newtheorem{notn}[thm]{Notation}\n\\newtheorem{notns}[thm]{Notations}\n\\newtheorem{addm}[thm]{Addendum}\n\\newtheorem{exer}[thm]{Exercise}\n\n\\theoremstyle{remark}\n\\newtheorem{rem}[thm]{Remark}\n\\newtheorem{rems}[thm]{Remarks}\n\\newtheorem{warn}[thm]{Warning}\n\\newtheorem{sch}[thm]{Scholium}\n\n\\bibliographystyle{plain}\n\n\\graphicspath{ {../figures/} }\n\n%--------Meta Data: Fill in your info------\n\\title{Optimizing Multistage Portfolio Returns with Risk Constraints}\n\n\\author{Kurt Ehlert}\n\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\tableofcontents\n\n\\section{Modeling Stocks with Geometric Brownian Motion}\nStochastic differential equations (SDEs) are commonly used to model stock prices over time. In particular,\ngeometric Brownian motion (GBM) is often used.\\cite{hull} In this paper, we will use GBM with constant coefficients to model stock prices over time. The SDE for GBM is\n\\begin{equation}\\label{eq:GBM_SDE}\ndX_t = \\mu X_t dt + \\sigma X_t dB_t\n\\end{equation}\nwhere $\\mu$ and $\\sigma$ are respectively the drift and volatility parameters, and $B_t$ is Brownian motion at time $t$. We call $X_t$ GBM, and see figure \\ref{fig:mesh1} for some example GBM paths.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{geometric_bm.png}\n\\caption{Paths of geometric Brownian motion generated by the Euler-Maruyama method. Code is attached (``geometric\\_bm.py''). $X_0 = 50, \\mu=0.1, \\sigma=0.2$.}\n\\label{fig:mesh1}\n\\end{figure}\n\n\\subsection{1-D Geometric Brownian Motion}\nWe can solve \\eqref{eq:GBM_SDE} by using an integrating factor $Z_t$ \\cite[p. 235]{timo}:\n\\begin{equation*}\nZ_t = \\exp\\left(-\\mu t -\\sigma B_t + \\sigma^2 t/2 \\right)\n\\end{equation*}\nFor most of the following calculations, we will drop the subscript $t$. By It\\^o's lemma we know that\n\\begin{equation}\\label{eq:GBM_SDE_PROD}\nd(XZ) = X dZ + Z dX + d[X,Z]\n\\end{equation}\nwhere $[\\cdot,\\cdot]$ denotes quadratic covariation. The goal is to substitute in expressions for each of the variables on the right-hand side of the above expression. 
It\\^o's lemma gives us $dZ$\n\\begin{equation*}\ndZ = Z \\left[\\left(-u + \\sigma^2\\right) dt - \\sigma dB\\right]\n\\end{equation*}\nNow we need to calculate the quadratic covariation of $X$ and $Z$,\n\\begin{align*}\n\\begin{split}\nd[X,Z]_t &= d\\left[\\int_0^t \\mu X_s ds + \\int_0^t \\sigma X_s dB_s,  \\int_0^t (-\\mu +\\sigma^2) Z_s ds - \\int_0^t \\sigma Z_s dB_s \\right]_t\\\\\n&= d\\left[\\int_0^t \\sigma X_s dB_s,  - \\int_0^t \\sigma Z_s dB_s \\right]_t\\\\\n&= -\\sigma ^2 \\frac{d}{dt}\\int_0^t X_s Z_s ds\\\\\n&= -\\sigma^2 X_t Z_t dt\n\\end{split}\n\\end{align*}\nThus \\eqref{eq:GBM_SDE_PROD} becomes\n\\begin{align*}\nd(XZ) &= Z(\\mu X dt + \\sigma X dB) + XZ\\left[\\left(-\\mu + \\sigma^2 \\right)dt -\\sigma dB \\right] - \\sigma^2 X Z dt\n&= 0\n\\end{align*}\nTherefore $X_t Z_t = X_0 Z_0 = X_0$, which implies that\n\\begin{equation}\\label{eq:GBM_soln}\nX_t =  X_0 Z_t^{-1} = X_0 \\exp\\left[(\\mu - \\sigma^2/2) t + \\sigma B_t\\right]\n\\end{equation}\nThen use It\\^o's lemma to check that \\eqref{eq:GBM_soln} solves \\eqref{eq:GBM_SDE}. It is clear that $\\log(X)$ is normally distributed, and\n\\begin{align*}\n\\mathbb{E}[\\log X ] &= (\\mu - \\sigma^2/2)t\\\\\n\\text{Var}(\\log X ) &= \\sigma^2 t\n\\end{align*} \nThus $X_t$ has a log-normal distribution, and\n\\begin{align*}\n\\mathbb{E} X&= X_0 e^\\mu\\\\\n\\text{Var}X &= X_0^2 e^{2\\mu t}(e^{\\sigma^2 t} - 1)\n\\end{align*}\n\n\\subsection{Correlated d-Dimensional Geometric Brownian Motion}\\label{corr_gmb}\nIf the Brownian motions driving each SDE are all independent, then the solution is just the d-dimensional version of \\eqref{eq:GBM_soln}. The much more interesting case is when there is a correlation between the different dimensions. In other words, we want to find and solve the SDE where $X(t)$ is d-dimensional GBM and $\\text{Corr}(\\log X_i(t), \\log X_j(t)) = \\rho_{ij}$. This would allow us to model correlated stock returns. We are interested in the log of the prices, because we want the returns to be correlated. We are not interested in whether or not the stock prices are correlated. In fact, the correlation between the prices goes to zero as times goes to infinity, even if the returns have a positive correlation that's less than one.\n\nThe return of stock $i$, $R_i(t)$, is equal to\n\\begin{equation*}\nR_i(t) = \\log \\frac{X_i(t)}{X_i(0)} = (\\mu_i-\\sigma_i^2/2)t +\\sigma_i dB_i(t)\n\\end{equation*}\nIf Corr$(B_i(t),B_j(t)) = \\rho_{ij}$, then\n\\begin{equation*}\n\\text{Corr}(R_i(t),R_j(t)) = \\rho_{ij}\n\\end{equation*}\nTherefore, if we can construct $d$-dimensional Brownian motion that has correlation matrix $\\Sigma$, where $(\\Sigma)_{ij} = \\rho_{ij}$, then we are done. Suppose we can choose $\\alpha_{ij}$ such that\n\\begin{align*}\n\\sum_{k=1}^i \\alpha_{ik}^2 &= 1, \\text{ where } 1\\le i\\le d\\\\\n\\sum_{k=1}^i \\alpha_{ik}\\alpha_{jk}&= \\rho_{ij}, \\forall j < i \\le d\n\\end{align*}\nIn other words, we need to find a $d\\times d$ lower-triangular matrix $L$, where\n\\begin{equation*}\nL= \\begin{bmatrix}\n    \\alpha_{11} & 0 & 0 & \\dots  & 0 \\\\\n    \\alpha_{21} & \\alpha_{22} & 0 & \\dots  & 0 \\\\\n    \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n    \\alpha_{d1} & \\alpha_{d2} & \\alpha_{d3}& \\dots  & \\alpha_{dd}\n\\end{bmatrix}\n\\end{equation*}\nsuch that\n\\begin{equation*}\nL L^T = \\Sigma\n\\end{equation*}\nThis is equivalent to finding the Cholesky decomposition of $\\Sigma$. 
Thus $L$ exists as long as $\\Sigma$ is positive definite.\n\nSay we have $d$-dimensional independent Brownian motion, $B(t)$, then\n\\begin{equation*}\n\\tilde{B}(t) = LB(t)\n\\end{equation*}\nis also Brownian motion: it is a martingale and each component has quadratic variation $t$, so the L\\'evy characterization of Brownian motion applies\\cite{timo}. Furthermore, based on our choice of $L$, we can check that $\\tilde{B}(t)$ has the correlation matrix $\\Sigma$. Thus to simulate correlated returns, we just simulate regular Brownian motion, and then apply the matrix $L$ to it. I wrote some python code (``test\\_correlation.py'') to test this method of constructing correlated Brownian motion.\n\\section{Portfolio Optimization}\nAssume that there are $n$ stocks we can invest in, and also one risk-free asset that gives a constant rate of return $r$. We will model the path of all the stock prices as $n$ (possibly correlated) geometric Brownian motions. The average rate of return (also known as drift) and the volatility of each stock can vary.\n\nIf we simply want to maximize the expected return, then we should put all of our money into the asset with the highest expected return. However, that is not a very sensible approach, because it does not take into account our risk tolerance. In this project, we will use two different methods for controlling risk. The first approach is the one used by Markowitz, where we will maximize the return subject to a constraint on the variance of the return. The second method is to maximize the return subject to a constraint on the conditional value-at-risk (CV@R), also called average value-at-risk (AV@R) below. Finally, we will consider a multistage model, where we can reallocate our capital after some time.\n\n\\section{Markowitz Portfolio Optimization}\n\\subsection{Problem Description}\nSuppose that we have $C$ dollars to invest, and we want to maximize the expected return after time $t$. However, we also want to keep the variance of the return below a specified level. There are $n$ stocks to invest in. Let $X_i(t)$ be the price of stock $i$ at time $t$. As mentioned earlier, we model the stock prices as (possibly correlated) paths of geometric Brownian motion. Therefore $X_i(t)$ is log-normally distributed, and\n\\begin{align*}\n\\mathbb{E} X_i(t) &= X_i(0)e^{\\mu_i t}\\\\\n\\text{Var} X_i(t) &= X_i(0)^2e^{2\\mu_i t}\\left(e^{\\sigma_i^2 t} -1\\right)\n\\end{align*}\nLet $R_i$ be the random return of stock $i$.\n\\begin{equation*}\nR_i = \\frac{X_i(t)}{X_i(0)} \\implies \\log R_i = (\\mu_i - \\sigma_i^2 / 2) t + \\sigma_i B_i(t)\n\\end{equation*}\nTherefore the return $R_i$ is also log-normally distributed, with\n\\begin{align*}\n\\mathbb{E}R_i &= e^{\\mu_i t}\\\\\n\\text{Var}R_i &= e^{2\\mu_i t}\\left(e^{\\sigma_i^2 t} -1\\right)\n\\end{align*}\n\nThere is also a risk-free asset with a constant rate of return $r$. Let $x_i$ be the amount of capital allocated to stock $i$, and $x_r$ be the amount of capital invested in the risk-free asset. Let $C(x, R_i)$ be the amount of capital we have at time $t$. Then\n\\begin{equation*}\nC(x, R_i) = e^{rt} x_r + \\sum_{i=1}^n R_i x_i \n\\end{equation*}\nand let $C$ be the capital we have at $t=0$.\nLet $\\Sigma$ be the $n\\times n$ covariance matrix of the stocks. 
Then\n\\begin{align*}\n\\text{Var}[C(x,R_i)] = x^T \\Sigma x\n\\end{align*}\nThus the problem we want to solve is\n\\begin{equation}\\label{eq:markowitz}\n\\max_{x,x_r\\ge 0} \\mathbb{E}C(x, R_i) = e^{rt} x_r + \\sum_{i=1}^n x_i e^{\\mu_i t} \\quad \\text{s.t.} \\quad x^T \\Sigma x \\le v, \\; x^T \\mathbbm{1} + x_r \\le C\n\\end{equation}\nwhere $\\mathbbm{1}$ is the column vector of all ones. We can actually solve the above problem with Lagrange multipliers, but it's easier to just use Gurobi to find a solution.\n\nI wrote some Python code that uses Gurobi to find the solution to \\eqref{eq:markowitz} when there are two stocks and one risk-free asset (``plot\\_markowitz\\_efficient\\_frontier.py'', ``markowitz.py''). To be more specific, the code plots the efficient frontier (see figure \\ref{fig:markowitz}), which means that it plots the given standard deviation on the x-axis and the maximum return on the y-axis. Since the model also handles correlated stocks, the code plots efficient frontiers for various correlation coefficients. We can see that the return is highest for a given standard deviation when the correlation is negative, and the return decreases as the correlation increases. This makes sense, because if the stocks are negatively correlated, we can diversify away some of the risk by investing in both of the stocks.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{markowitz.png}\n\\caption{Efficient frontiers based on the Markowitz model. The different curves correspond to different correlations between the stock returns.  Code is attached. The two files are ``plot\\_markowitz\\_efficient\\_frontier.py'' and ``markowitz.py''. $\\mu = (0.03, 0.035), \\sigma=(0.1, 0.15), r = 0.02$.}\n\\label{fig:markowitz}\n\\end{figure}\n\n\\section{Average Value-at-Risk}\nThe Markowitz model involves a linear objective and a quadratic constraint. The objective is the expected return, and the constraint controls the variance of the return. The model can be switched around such that you specify the expected return you want, which leads to a linear constraint, and then you minimize the variance, which means you have a quadratic objective. Either way, there is a quadratic term.\n\nAdditionally, the Markowitz model discourages strategies that have high variance. This is a problem, because it penalizes low and high returns alike. Ideally, we would like to only penalize low returns.\n\nValue-at-risk (V@R) controls the size of our losses. The $\\text{V@R}_\\alpha$ of a random variable $X$ is\n\\begin{equation*}\n\\text{V@R}_\\alpha(X) = \\inf\\{t : P(X \\le t) \\ge \\alpha\\}\n\\end{equation*}\nRoughly speaking, average value-at-risk (AV@R) is a way to quantify the expected losses of a strategy. If $X$ is a continuous random variable, then the AV@R$_\\alpha$ of a random variable $X$ is defined as\n\\begin{equation*}\n\\text{AV@R}_\\alpha (X) = \\mathbb{E}[X \\mid X \\ge \\text{V@R}_\\alpha(X)]\n\\end{equation*}\nWe can incorporate $AV@R$ into a stochastic program by using a set of linear constraints. Thus by using AV@R as our risk-measure, we can avoid quadratic terms in our portfolio optimization problem, plus it only penalizes losses.\n\nLet's say there are two stocks and one risk-free asset. The risk-free asset's rate of return is 2\\%. Also assume that the stock prices follow geometric Brownian motion and that the returns are not correlated. Thus\n\\begin{equation*}\ndX_i(t) = \\mu_i X_i(t) dt + \\sigma_i X_i(t) dB_i(t)\n\\end{equation*}\nwhere $i=1,2$. 
For stock $i$, $\\mu_i$ is the average rate of return per year, and $\\sigma_i^2$ the rate of return's variance per year.\nSay we have \\$10,000 to invest, and we want to maximum our returns after 3 years. Also suppose that\n\\begin{align*}\n\\mu &= (0.3, 0.035)\\\\\n\\sigma &= (0.1, 0.15)\\\\\n\\end{align*}\nLet's say we could tolerate losing \\$1,000. Let $x$ be vector representing the amount of money invested in each of the assets. $x_1$ and $x_2$ are for stocks 1 and 2, and $x_3$ is for the risk-free asset. Then we would like to solve\n\\begin{align*}\n&\\max_x \\sum_{i=1}^2\\mathbb{E}[e^{3\\mu_i}]x_i + e^{0.02(3)}x_3 \\text{ s.t.}\\\\\n&\\sum_{i=1}^3 x_i = 10,000\\\\\n&\\text{AV@R}_{0.05}\\left(10,000 - \\sum_{i=1}^2 X_i(3) X(0)^{-1} x_i + e^{0.02(3)} x_3\\right) \\le 1,000\\\\\n&x \\ge 0\n\\end{align*}\nThe term in the parentheses of $\\text{AV@R}_{0.05}$ represents the random loss. If we discretize $X_i$ by choosing a bunch of scenarios, then we can turn the last constraint into a set of linear constraints. There will be an extra constraint and variable for each scenario. Discretizing $X_i(3)$ is fairly straightforward, because we know that it is log-normally distributed. We can also handle correlated returns. Section \\ref{corr_gmb} shows how to correlate the stock returns. \n\nIf we discretize $X_i(3)$ by drawing $N$ random variables for each stock, then the problem becomes\n\\begin{align*}\n&\\max_{x,w,\\gamma} \\sum_{i=1}^2 \\bar{\\mu}_i x_i +  e^{0.02(3)} x_3 \\text{ s.t.}\\\\\n&\\sum_{i=1}^3 x_i = 10,000,  \\forall k = 1,\\ldots, N\\\\\n& w_k \\ge 10,000 - \\sum_{i=1}^2 X^k_i(3) X(0)^{-1} x_i - e^{0.02(3)} x_3 - \\gamma, \\forall k = 1,\\ldots, N\\\\\n&x \\ge 0, w \\ge 0, \\gamma \\in \\mathbb{R}\n\\end{align*}\nwhere $\\bar{\\mu}_i = \\frac{1}{N} \\sum_{k=1}^N X_i^k(3)$.\nIt might not necessary to discretize the above problem, but it allows for more flexibility. We can make the model more complex, and then it's straightforward to add random events that affect the prices. Note that we need to decide how to invest our money at year 0, and then we sell it all at year 3. I wrote Python code that implements a Gurobi model for the above problem (``single\\_stage.py''). It plots the efficient frontier (figure \\ref{fig:single_stage}), which means that it specifies a maximum AV@R, and then maximizes the return.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{single_stage.png}\n\\caption{Efficient frontier based on the single-stage AV@R model.  Code is attached (``single\\_stage.py'').}\n\\label{fig:single_stage}\n\\end{figure}\n\n\\section{Multistage Portfolio Optimization Without Transaction Costs}\nWe might want to extend the model, and allow for asset reallocation at specified times. We will not include transaction costs, but we will include the possibility of correlated stock returns. Instead of just having $N$ scenarios, the plan is to use a scenario tree. Let's say we are allowed to reallocate our capital after the first year, and then we sell everything after year 3. We also have the same AV@R constraint as before, which, roughly speaking, limits our worst losses to be less than \\$1,000.  The optimization problem is basically the same as the earlier one, but now we need to add another constraint. This constraint makes sure that we only allocate capital that we already have at each stage. 
Let $x^j$ be the allocation as of year $j$, then the problem becomes\n\\begin{align*}\n&\\max_{x,w,\\gamma} \\frac{1}{N}\\sum_{k=1}^N\\left[\\sum_{i=1}^2 \\bar{\\mu}^k_i x^{2,k}_i +  e^{0.02(3)} x_3^{2,k}\\right] \\text{ s.t.}\\\\\n&\\sum_{i=1}^3 x_i^{j,k} = 10,000, j=1,2, \\forall k=1,\\ldots,N\\\\\n&\\gamma + \\frac{1}{0.05 N} \\sum_{k=1}^N w_k \\le 1,000\\\\\n& w_k \\ge 10,000 - \\sum_{i=1}^2 X_i^k(3) X_i(0)^{-1} x^{2,k}_i - e^{0.02(3)} x_3^{2,k} - \\gamma, \\forall k = 1,\\ldots, N\\\\\n&\\sum_{i=1}^2 X_i^k(j) (x_i^{j+1,k}-x_i^{a(j+1,k)}) + e^{0.02j}(x_3^{j+1,k}-x_3^{a(j+1,k)}) = 0\\\\\n&x \\ge 0, w \\ge 0, \\gamma \\in \\mathbb{R}\n\\end{align*}\n\n$a(j+1,k)$ means the ancestor of the node $(j+1,k)$. This notation is terribly confusing, but all I'm trying to denote with the capital-balance line is that the amount of capital we have at each stage is equal to the amount we started with, plus the amount we made up to that point. We do not allow for extra capital to be injected at any stage.\n\nNote that $x^{j+1}$ can depend on $X(j)$. In other words, we are allowed to adjust our reallocation based on the returns in year 1. I wrote Python code that creates a Gurobi model for this problem (``multi\\_stage.py'') and plots the efficient frontiers for different correlations between the stock returns (figure \\ref{fig:multi_stage_no_transaction_costs}, ``plot\\_efficient\\_frontier\\_for\\_correlated\\_returns.py''). Since the size of the scenario tree is the number of branches cubed (3 stages), the number of branches at each node can't be too large. Thus I computed many efficient frontiers for each correlation, and averaged them. This overestimates the expected return, but it seems to give a better picture than just maximizing the size of the tree. I decided to use 60 branches at each node, and calculated each efficient frontier 150 times.\nJust like with the Markowitz model, the efficient frontier decreases as the correlation increases. Once again this makes sense, because we can diversify more of the risk away when the stock returns are negatively correlated.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{multi_stage_no_transaction_costs.png}\n\\caption{Efficient frontier based on the multi-stage AV@R model. Does not include transaction costs.  Code is attached. There are two files: ``plot(...)returns.py'' and ``multi\\_stage.py''.}\n\\label{fig:multi_stage_no_transaction_costs}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{multi_stage_with_transaction_costs.png}\n\\caption{Efficient frontier based on the multi-stage AV@R model. Does include transaction costs.  Code is attached.}\n\\label{fig:multi_stage_with_transaction_costs}\n\\end{figure}\n\n\\section{Multistage Portfolio Optimization With Transaction Costs}\nExtending the above model to include transaction costs is relatively straightforward, because $x$ is denominated in numbers of securities purchased. Thus we just need to keep track of the difference in the number of each type of stock and risk-free asset we own at each stage.\n\nThe model is the same as in the previous section. But now for all $i$ ($i$ denotes the security), we let\n\\begin{align*}\ny_i \\ge x_i^{2} - x_i^1\\\\\ny_i \\ge x_i^{1} - x_i^{2}\\\\\ny_i \\ge 0\n\\end{align*}\nThere is a $y$ vector of variables at each node in the scenario tree at stage $j$. Let $c$ be the cost to trade one security. 
Then the objective is\n\\begin{equation*}\n\\max_{x,w,\\gamma,y} \\frac{1}{N}\\sum_{k=1}^N\\left[\\sum_{i=1}^2 \\bar{\\mu}^k_i x^{2,k}_i +  e^{0.02(3)} x_3^{2,k}\\right] - \\frac{c}{\\text{number of nodes} - 1}\\sum_{\\text{all } y} y\n\\end{equation*}\nIn words, the objective now also subtracts the cost of all the transactions. Together, the constraints and the objective force $y$ to equal the number of shares bought or sold. The reason we divide $c$ by $(\\text{number of nodes}- 1)$ is that there is a vector of $y$ variables at every node in the tree except for at the root node. We just assume that we can buy the securities at the beginning without paying any transaction costs. As we would expect, the efficient frontier decreases as the costs go up (figure \\ref{fig:multi_stage_with_transaction_costs}). As the transaction costs grow large, the efficient frontier should approach the single-stage frontier: the increased costs discourage any trading after the beginning.\n\n\\bibliographystyle{ieeetr}\n\\bibliography{719_project}\n\n\\end{document}\n\n", "meta": {"hexsha": "a27440354a43a67878a7d0ac831dfffd43611a61", "size": 19821, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/719_project.tex", "max_stars_repo_name": "kehlert/multistage_portfolio", "max_stars_repo_head_hexsha": "d5c9c040e095330fd68efda050803572a41093ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/719_project.tex", "max_issues_repo_name": "kehlert/multistage_portfolio", "max_issues_repo_head_hexsha": "d5c9c040e095330fd68efda050803572a41093ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/719_project.tex", "max_forks_repo_name": "kehlert/multistage_portfolio", "max_forks_repo_head_hexsha": "d5c9c040e095330fd68efda050803572a41093ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.7892857143, "max_line_length": 855, "alphanum_fraction": 0.7305887695, "num_tokens": 6092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494678483918, "lm_q2_score": 0.817574478416099, "lm_q1q2_score": 0.5608147783959496}}
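The construction from Section \ref{corr_gmb} of the paper above (correlated Brownian motion via a Cholesky factor) is easy to test numerically. The sketch below is in the spirit of the attached `test_correlation.py` but is not that script; all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_correlated_returns(mu, sigma, corr, t, n_scenarios):
    """Draw log-returns R_i(t) = (mu_i - sigma_i^2/2) t + sigma_i B_i(t)
    for d stocks whose driving Brownian motions have correlation `corr`."""
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    L = np.linalg.cholesky(np.asarray(corr))   # L L^T = corr
    z = rng.standard_normal((n_scenarios, len(mu)))
    b_t = np.sqrt(t) * z @ L.T                 # samples of correlated B(t)
    return (mu - 0.5 * sigma**2) * t + sigma * b_t

# Example: the two stocks from the Markowitz example, correlation 0.5.
R = simulate_correlated_returns([0.03, 0.035], [0.1, 0.15],
                                [[1.0, 0.5], [0.5, 1.0]],
                                t=3.0, n_scenarios=100_000)
print(np.corrcoef(R.T))  # should be close to the target correlation matrix
```

Exponentiating these log-returns and multiplying by `X_i(0)` gives the discretized scenarios `X_i^k(3)` used in the AV@R models.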
{"text": "\\section{Capital Budgeting, Part II}\n\n\\subsection{Real Options}\n\n{\\bf Growth Options}  An investment includes a growth option if it allows follow-on investments,\nand the decision whether to undertake the follow-on investments will be\nmade later on the basis of new information, akin to {\\bf call} options. \\\\\n{\\bf Abandonment options} An investment includes an abandonment option if, under\ncertain circumstances, it can be shut down if so chosen, akin to {\\bf put} options.\n{\\bf Scale options } example: Ability to slow the rate of mineral extraction from a mine.\n{\\bf Timing options} Flexibility about the timing of an investment can\nbe very valuable, akin to {\\bf American call} option.\n\n\n\\subsection*{R16Q1}\nTo find sector 1 an2 $\\beta$'s: $\\beta_{s_1}$ and $\\beta_{s_2}$ for firms $A$ and $B$:\n\n$\\beta_{A} = w_{A,s_1} \\beta_{s_1} +  w_{A,s_2}  \\beta_{s_2}  $ \\\\\n$\\beta_{B} = w_{B,s_1} \\beta_{s_1} +  w_{B,s_2}  \\beta_{s_2}  $ \\\\\n\nsolve system for  $\\beta_{s_1}$ and $\\beta_{s_2}$.\n\n\\subsection*{R16Q2}\n\ninputs: firm's $FCF_1$,  $\\beta_1$, $\\beta_2$, table below:\n\\begin{center}\n\t\\begin{tabular}{ |c|c|c|c| } \n\t\t\\hline\n\t\tPortfolio & $E[r_i]$ & $\\beta_{i,1}$ & $\\beta_{i,2}$ \\\\ \n\t\t\\hline\n\t\tA & $r_A$ & $\\beta_{A,1}$ &  $\\beta_{A,2}$ \\\\ \n\t\tB & $r_B$ & $\\beta_{B,1}$ &  $\\beta_{B,2}$ \\\\ \n\t\tC & $r_C$ & $\\beta_{C,1}$ &  $\\beta_{C,2}$ \\\\ \n\t\t\\hline\n\t\\end{tabular}\n\\end{center}\n\nfind firm value.\n\nRecall APT: $E[r_p] = r_f + \\lambda_1 \\beta_{P,1} + \\lambda_2 \\beta_{P,2}$ :\n\n\n$\n\\begin{pmatrix}\n\t1 & \\beta_{A,1} &  \\beta_{A,2} \\\\\n\t1 & \\beta_{B,1} &  \\beta_{B,2} \\\\\n\t1 & \\beta_{C,1} &  \\beta_{C,2} \\\\\t\t\t\t\t\t\t\n\\end{pmatrix} \\cdot \n\\begin{pmatrix}\n\tr_f \\\\\n\t\\lambda_1 \\\\\n\t\\lambda_2 \\\\\t\t\t\t\t\t\t\t\n\\end{pmatrix}\t\n=\n\\begin{pmatrix}\n\tr_A \\\\\n\tr_B \\\\\n\tr_C \\\\\t\t\t\t\t\t\t\t\n\\end{pmatrix}\n$\n \nsolve system to find $r_f$, $\\lambda_1$, $\\lambda_2$ then apply APT with firm loadings ($\\beta$'s) to find $E[r]=r_f+\\beta_1 \\lambda_1 + \\beta_2 \\lambda_2 $ finally $V_0 = \\frac{FCF_1}{E[r]-g}$", "meta": {"hexsha": "ffda22712f432d1a91ca8c67b15f397326755711", "size": 1959, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "15.415.2x/assets/week_16.tex", "max_stars_repo_name": "j053g/cheatsheets", "max_stars_repo_head_hexsha": "22f7a84879c04d44de40467ddcc0f6e551b812c7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-12-14T08:49:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-07T17:26:15.000Z", "max_issues_repo_path": "15.415.2x/assets/week_16.tex", "max_issues_repo_name": "j053g/cheatsheets", "max_issues_repo_head_hexsha": "22f7a84879c04d44de40467ddcc0f6e551b812c7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "15.415.2x/assets/week_16.tex", "max_forks_repo_name": "j053g/cheatsheets", "max_forks_repo_head_hexsha": "22f7a84879c04d44de40467ddcc0f6e551b812c7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5967741935, "max_line_length": 193, "alphanum_fraction": 0.628892292, "num_tokens": 742, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8175744739711883, "lm_q1q2_score": 0.5608147753469656}}
{"text": "\\newpage{}\n\n\\section{A004 - reducing dispersion by assigning a concrete value per\nword and learning from\nit}\n\nA001 collects as many different values for each word as it can get. \nWhen estimating, it uses any of those values more or less randomly (of cause\nsmoothend by the fact that we take 100 random values and then only use a\nvalue from a certain position representing the percentage of certainty\nwe want to have). \nThat hinders our ability to ``learn''.\n\nThe idea behind A004 is to assign just one value to each word.\nThis way, when we see our error margin we might design a little learning by adding or reducing the values a bit and see what happens.\n\n\\subsection{Learning - the Initialisation}\n\nAt the beginning we learn just the same way like A001 did.\nThe only difference is, that instead of saving n values per word we just save one, the average of all beformentioned values. \n\n\\subsection{Learning - Adapting}\n\n\\begin{enumerate}\n        \\tightlist\n        \\item starting from the base model we create 10 mutations by randomly adding and substracting 1/10th of the value for each word.\n        \\item we calculate the mean squared error of all mutations\n        \\item the mutation with the smallest mean squared error becomes the new base model\n        \\item repeat n times from the beginning\n\\end{enumerate}\n\n\\subsection{Estimating}\n\n\\begin{enumerate}\n        \\tightlist\n        \\item split the text we want to estimate into words\n        \\item for every word use the saved value as duration\n        \\item add all values gained this way, the sum is our estimate\n\\end{enumerate}\n\n\n", "meta": {"hexsha": "06a7cd8fcce48619247b328251d14a266fcd64ec", "size": 1589, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documentation/10000-_Algorithms/A004/index.tex", "max_stars_repo_name": "stho32/Automatically-Estimating-Task-Durations", "max_stars_repo_head_hexsha": "4f63d75dd56f56c05d9a046b98f21cff04971a08", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-12T17:24:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T06:43:27.000Z", "max_issues_repo_path": "Documentation/10000-_Algorithms/A004/index.tex", "max_issues_repo_name": "stho32/Automatically-Estimating-Task-Durations", "max_issues_repo_head_hexsha": "4f63d75dd56f56c05d9a046b98f21cff04971a08", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 52, "max_issues_repo_issues_event_min_datetime": "2021-08-13T00:24:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-26T10:01:19.000Z", "max_forks_repo_path": "Documentation/10000-_Algorithms/A007/index.tex", "max_forks_repo_name": "stho32/Automatically-Estimating-Task-Durations", "max_forks_repo_head_hexsha": "4f63d75dd56f56c05d9a046b98f21cff04971a08", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8333333333, "max_line_length": 136, "alphanum_fraction": 0.7514159849, "num_tokens": 357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744761936437, "lm_q2_score": 0.6859494485880928, "lm_q1q2_score": 0.5608147611247287}}
{"text": "\n\\section{Well-posedness of the continuous problem}\n\n\\begin{intro}\n  We begin our investigation into the Stokes problem by investigating\n  the well-posedness of the continuous problem. This is particularly\n  simple, since we have\n  \\begin{gather*}\n    a(u,v) = \\form(\\strain u,\\strain v)\n  \\end{gather*}\n  for the original Stokes problem in \\slideref{Definition}{stokes-eq1}\n  and\n  \\begin{gather*}\n    a(u,v) = \\form(\\nabla u,\\nabla v)\n  \\end{gather*}\n  for the simplified \\putindex{Stokes equations} in\n  \\slideref{Definition}{stokes-eq2}. From the standard theory for the\n  Laplacian, we know that the second one is $V$-elliptic on\n  $V=H^1_0(\\domain;\\R^d)$. For the first one, we conclude this by using a\n  \\putindex{Korn inequality}. Therefore, we can already conclude a\n  first result:\n\\end{intro}\n\n\\begin{Lemma}{stokes-a-elliptic}\n  Let $V=H^1_0(\\domain, \\R^d)$ and $V_h\\subset V$ a finite dimensional\n  subspace. Then, $a(.,.)$ is elliptic on $\\ker B$ and on $\\ker{B_h}$\n  independent of the choices of $Q$ and $Q_h$.\n\\end{Lemma}\n\n\\begin{remark}\n  We focus here on no-slip boundary condition on the whole boundary as\n  the exemplary case. Other boundary conditions are possible, but as\n  soon as the Dirichlet boundary for one velocity component becomes\n  too small, the ellipticity of $a(.,.)$ on $V$ must be established by\n  new arguments known for instance for Robin boundary conditions. In\n  the extreme case of natural boundary conditions all around, $V$ is\n  the subspace of $H^1(\\domain, \\R^d)$ obtained by dividing by the\n  space of all translations for the simplified form and by the space\n  of all rigid body movements.\n\n  Note that we have established already in\n  \\slideref{Lemma}{divergence-compatibility} that the condition\n  $V=H^1_0(\\domain, \\R^d)$ implies the reduction of the pressure to\n  the space $Q = L^2_0(\\domain)$ from\n  \\slideref{Notation}{pressure-constant}.\n\\end{remark}\n\n\\begin{intro}\n  The previous lemma guarantees well-posedness of the\n  \\putindex{reduced problem} in all possible cases. Therefore, the\n  remainder of this section is only concerned with the inf-sup\n  condition for the divergence operator. We\n  follow~\\cite{GiraultRaviart86} in this presentation.\n\\end{intro}\n\n\\begin{Lemma}{stokes-helmholtz}\n  Let $V=H^1_0(\\domain,\\R^d)$. Then, the divergence operator\n  $\\div\\colon V \\to L^2(\\domain)$ is continuous and the subspace\n  \\begin{gather*}\n    V^0 = \\ker \\div\n    = \\bigl\\{v\\in V \\big|\n    \\div v = 0 \\text{ a.e.} \\bigr\\}\n  \\end{gather*}\n  is closed in $V$ and $V$ admits the orthogonal decomposition\n  \\begin{gather*}\n    V = V^0\\oplus V^\\perp.\n  \\end{gather*}\n\\end{Lemma}\n\n\\begin{proof}\n  We have that\n  \\begin{gather*}\n    \\norm{\\div v}_{L^2(\\domain)}^2\n    = \\int_\\domain \\left(\\sum \\d_iv_i\\right)^2\\dx\n    \\le d \\int_\\domain \\sum \\abs{\\d_iv_i}^2\\dx\n    \\le d \\norm{v}_{H^1(\\domain;\\R^d)}^2.\n  \\end{gather*}\n  Thus, the divergence operator is a continuous mapping from $V$ to\n  $L^2(\\domain)$. The definition of $V^0$ is equivalent to the\n  definition of zero in $L^2(\\domain)$. Finally, since the kernel is\n  the pre-image of a closed set under a continuous map, it is\n  closed. 
The existence of the decomposition follows from\n  \\slideref{Theorem}{orthogonal-complement}.\n\\end{proof}\n\n\\begin{Lemma}{stokes-grad}\n  If $f\\in V^* = H^{-1}(\\domain;\\R^d)$ satisfies\n  \\begin{gather*}\n    f(v) = 0 \\quad\\forall v\\in V^0,\n  \\end{gather*}\n  then, there exists $p\\in L^2(\\domain)$ such that\n  \\begin{gather*}\n    f = \\nabla p.\n  \\end{gather*}\n  If $\\domain$ is connected, then $p$ is unique up to an additive\n  constant.\n\\end{Lemma}\n\n\\begin{proof}\n  First, we identify $L^2(\\domain)$ with its dual. Then, by\n  \\begin{gather*}\n    \\scal(-\\nabla p, v)_{V^*\\times V}\n    = \\scal(p, \\div v)_{L^2(\\domain)},\n    \\qquad\\forall v\\in V,\n  \\end{gather*}\n  we see that $-\\nabla\\colon L^2(\\domain)\\to V^*$ is the dual to the\n  divergence operator. Using the Cauchy-sequence argument, we see that\n  $\\range{\\div}$ is closed in $L^2(\\domain)$ and the closed range\n  theorem applies. Thus, $\\range{-\\nabla}$ is closed in $V^*$ and\n  \\begin{gather*}\n    \\range{\\nabla} = \\polar{(V^0)} \\cong V^\\perp\n  \\end{gather*}\n  is the polar set of the kernel $V^0$. This implies the statement\n  that there is a $p$ for every $f$. Uniqueness follows by the fact\n  that the only differentiable functions on a connected domain with\n  $\\nabla p=0$ are the constant functions, and by density of such\n  functions in $L^2(\\domain)$.\n\\end{proof}\n\n\\begin{Corollary}{stokes-iso}\n  Let $\\domain$ be connected. Then,\n  \\begin{enumerate}\n  \\item $\\nabla\\colon L^2_0(\\domain) \\to V^0$ is an isomorphism\n  \\item $\\div\\colon V^\\perp \\to L^2_0(\\domain)$ is an isomorphism\n  \\end{enumerate}\n\\end{Corollary}\n\n\\begin{Theorem}{stokes-infsup}\n  Let $\\domain\\subset \\R^d$ be a Lipschitz-domain,\n  $V=H^1_0(\\domain,\\R^d)$ and $Q=L^2_0(\\domain)$. Then, there is a\n  constant $\\beta>0$ depending only on the geometry of $\\domain$ such\n  that\n  \\begin{gather}\n    \\label{eq:stokes:1}\n    \\inf_{q\\in Q}\\sup_{v\\in V}\\frac{\\form(\\div\n      v,q)}{\\norm{v}_V\\norm{q}_Q} \\ge \\beta.\n  \\end{gather}\n  Furthermore, the problem finding $(u,p)\\in V\\times Q$ such that\n  \\begin{gather}\n    \\label{eq:stokes:3}\n    a(u,v)+\\form(\\div v,p)+\\form(\\div u,q) = f(v)+g(q)\n    \\quad\\forall v\\in V, q\\in Q,\n  \\end{gather}\n  has a unique solution for any right hand side $f\\in V^*$ and $g\\in\n  \\range{\\div}$.\n\\end{Theorem}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Stable discretizations}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  We begin by application of the generic theory of the previous\n  chapter to the Stokes problem in order to obtain a generic error\n  estimate based on the concrete choice of norms and a single\n  assumption. Guided by this theorem, we spend the remaining part of\n  this section exploring different options for the discrete spaces.\n\\end{intro}\n\n\\begin{Theorem}{stokes-convergence}\n  Let $V=H^1_0(\\domain;\\R^d)$ and $Q=L^2_0(\\domain)$. 
Let furthermore\n  $V_h\\subset V$ and $Q_h\\subset Q$ be discrete subspaces such that\n  there exists $\\beta>0$ independent of $h$ such that\n  \\begin{gather}\n    \\label{eq:stokes:2}\n    \\inf_{q_h\\in Q_h}\\sup_{v_h\\in V_h}\\frac{\\form(\\div\n      v_h,q_h)}{\\norm{v_h}_V\\norm{q_h}_Q} \\ge \\beta.\n  \\end{gather}\n  Then, the Galerkin approximation of~\\eqref{eq:stokes:3} admits a\n  unique solution $(u_h, p_h)\\in V_h\\times Q_h$ with the\n  quasi-best-approximation property\n  \\begin{gather}\n    \\label{eq:stokes:4}\n    \\begin{split}\n      \\norm{u-u_h}_1\n      &\\le c_1 \\inf_{v_h\\in V_h}\\norm{u-v_h}_1\n      + c_2 \\inf_{q_h\\in Q_h}\\norm{p-q_h}_0\n      \\\\\n      \\norm{p-p_h}_0\n      &\\le c_3 \\inf_{v_h\\in V_h}\\norm{u-v_h}_1\n      + c_4 \\inf_{q_h\\in Q_h}\\norm{p-q_h}_0.\n    \\end{split}\n  \\end{gather}\n\\end{Theorem}\n\n\\begin{Corollary}{stokes-convergence2}\n  Under the assumptions of \\slideref{Theorem}{stokes-convergence},\n  let there be in addition interpolation operators $I_{V_h}$ and\n  $I_{Q_h}$ such that\n  \\begin{gather}\n    \\label{eq:stokes:5}\n    \\begin{split}\n      \\norm{u-I_{V_h} u}_1 &\\le c h^k \\snorm{u}_{k+1} \\\\\n      \\norm{p-I_{Q_h} p}_0 &\\le c h^k \\snorm{p}_{k}.\n    \\end{split}\n  \\end{gather}\n  Then, there is a constant $c$ independent of the approximation\n  spaces such that\n  \\begin{gather}\n    \\label{eq:stokes:6}\n    \\begin{split}\n      \\norm{u-u_h}_1 &\\le c h^k \\bigl(\\snorm{u}_{k+1} +\n      \\snorm{p}_{k}\\bigr)\n      \\\\\n      \\norm{p-p_h}_0 &\\le c h^k \\bigl(\\snorm{u}_{k+1} +\n      \\snorm{p}_{k}\\bigr).\n    \\end{split}\n  \\end{gather}\n\\end{Corollary}\n\\begin{intro}\n  We continue by showing that the most natural discretizations\n  in two dimensions are not inf-sup stable. This holds for the\n  discretization using continuous linear or bilinear elements for both\n  velocity components and the pressure as well as for continuous\n  linear or bilinear velocity functions combined with piecewise\n  constant pressure functions.\n\\end{intro}\n\n\\begin{example}\n  \\label{ex:p1-p1}\n  We begin with a one-dimensional example: piecewise linear velocity\n  and piecewise linear pressure ($P_1-P_1$), both continuous. Then, $\\div v_h$ is\n  piecewise constant. Consequently, a pressure function which has zero\n  mean value on each cell is in the kernel of $B_h^T$.\n  \\begin{figure}[tp]\n    \\centering\n    \\includegraphics[width=.6\\textwidth]{./fig/p1-p1-1d.tikz}\n    \\caption[Example for the $P_1-P_1$ pair in one\n    dimension]{Piecewise linear pressure (\\tikz\\draw[color=cyan] (0,0)\n      -- (1em,0);) and divergence (\\tikz\\draw[color=red] (0,0)\n      -- (1em,0);) of\n      piecewise linear velocity.}\n    \\label{fig:stokes:p1p1-1d}\n  \\end{figure}\n\\end{example}\n\n\\begin{example}\n  Take a patch of four quadrilaterals or triangles meeting in a common\n  vertex. Let $\\domain$ be the union of these grid cells. Choose\n  bilinear and linear shape functions for $V_h$, respectively. Then, $\\dim\n  V_h = 2$, since we have one interior vertex with one basis function for\n  each velocity component. Choose piecewise constant pressure\n  functions. Dividing out the global constant, we conclude that $\\dim\n  Q_h = 3$. Thus, the statement\n  \\begin{gather*}\n    \\forall q_h\\in Q_h \\;\\exists v_h\\in V_h:\n    \\quad \\norm{v_h}_1 = \\norm{q_h}_0\n    \\;\\wedge\\; b(v_h, q_h) \\ge \\beta \\norm{q_h}^2\n  \\end{gather*}\n  cannot hold true. Therefore, the inf-sup condition does not hold. 
In\n  fact, $\\ker{B_h} = \\{0\\}$.\n  \\begin{figure}[tp]\n    \\begin{center}\n    \\hfill\n    \\includegraphics[width=.3\\textwidth]{./fig/patch1.tikz}\n    \\hfill\n    \\includegraphics[width=.3\\textwidth]{./fig/patch2.tikz}\n    \\hfill\\mbox{}\n    \\end{center}\n    \\caption[Very coarse meshes with Dirichlet boundary.]{Very coarse meshes with Dirichlet boundary. Degrees of freedom for pressure (\\tikz\\node[pressure] (0,0) {};) and for both velocity components(\\tikz\\node[veloxy] (0,0) {};).}\n    \\label{fig:stokes:example1}\n  \\end{figure}\n\n  Thus, we conclude that for this combination of shape function\n  spaces, there is a mesh such that they are not suited for the\n  approximation of the Stokes problem. But, this may be a problem of a\n  mesh with too few cells. In fact, asymptotically, a triangular mesh\n  contains twice as many vertices as cells, a quadrilateral mesh as\n  many. Therefore, $\\dim V_h > \\dim Q_h$ as soon as the mesh is\n  sufficiently fine. Will this be sufficient?\n\\end{example}\n\n\\begin{Problem}{p1-p0-unstable}\nThe domain $\\Omega=[0,1]^2$ is decomposed into $N \\times N$ congruent squares where each\nof them is again divided into two triangles. The decomposition $\\mesh_h$\nis given by these triangles.\n\nWe again choose piecewise linear ansatz functions for the velocity for $V_h$\n(vanishing on $\\partial \\Omega$) and piecewise constant ansatz functions\nfor $Q_h$.\n\nIs there a $N$ and an orientation of the triangles such that $V_h\\times Q_h$ is\ninf-sup stable?\n\\begin{solution}\n  The number of degrees of freedom for the velocity is $2(N-1)^2$ and for the\n  pressure $2N^2-1$. Hence, there is always a pressure ansatz functions such that\n  \\begin{align*}\n    (\\nabla \\cdot \\boldsymbol{v}_h, q_h)=0 \\quad \\forall \\boldsymbol{v}_h\\in V_h.\n  \\end{align*}\n\\end{solution}\n\\end{Problem}\n\n\\begin{Problem}{checker-board}\n  Let $\\domain = (0,1)^2$ be the unit square and let the mesh consist\n  of Cartesian squares of side length $1/n$. Choose $V_h \\subset V$\n  based on bilinear shape functions. Show that the piecewise constant\n  pressure function $p_c=\\pm 1$ in a checkerboard fashion is in the\n  kernel of $B_h^T$, that is\n  \\begin{gather*}\n    b(v_h, p_c) = 0 \\quad\\forall v_h\\in V_h.\n  \\end{gather*}\n\\begin{solution}\nThis time there are $2N^2$ for the velocity and $2N^2-1$ ansatz functions for\nthe pressure. Therefore, we have to look a big deeper.\n\nDenote by $p_{i+\\frac12, j+\\frac12}$ the pressure constant on cell $K_{i,j}$\n($0\\leq i,j \\leq N-1$). 
The values of the velocity at the four vertices are\ndenoted by $u^1_{i,j}$, $u^1_{i,j+1}$, $u^1_{i+1,j}$, $u^1_{i+1,j+1}$ for the\nfirst component and similarly for the second component.\n\nUsing this notation we can write the divergence constraint in terms of nodal and cell values:\n\\begin{align*}\n &(\\nabla \\cdot \\boldsymbol{v}_h, q_h)\\\\\n   &= \\sum_{i,j} q_{i+\\frac12,j+\\frac12} \\int_{K_{i,j}} \\nabla \\cdot \\boldsymbol{v}_h \\,\\mathrm{d}x\\\\\n   &= \\sum_{i,j} q_{i+\\frac12,j+\\frac12} \\int_{\\partial K_{i,j}} \\boldsymbol{n} \\cdot \\boldsymbol{v}_h \\,\\mathrm{d}x\\\\\n   &= \\sum_{i,j} q_{i+\\frac12,j+\\frac12} \\frac{h}{2}(u^1_{i+1,j}-u^1_{i,j}+u^1_{i+1,j+1}-u^1_{i,j+1})\\\\\n      &\\quad +\\sum_{i,j} q_{i+\\frac12,j+\\frac12} \\frac{h}{2}(u^2_{i,j+1}-u^2_{i,j}+u^2_{i+1,j+1}-u^2_{i+1,j})\\\\\n   &= \\frac{h}{2}\\sum_{i,j} u^1_{i,j}(q_{i-\\frac12,j+\\frac12}+q_{i-\\frac12,j-\\frac12}-q_{i+\\frac12,j+\\frac12}-q_{i+\\frac12,j-\\frac12})\\\\\n      &\\quad +\\frac{h}{2}\\sum_{i,j} u^2_{i,j}(q_{i+\\frac12,j-\\frac12}+q_{i-\\frac12,j-\\frac12}-q_{i+\\frac12,j+\\frac12}-q_{i-\\frac12,j+\\frac12})\n\\end{align*}\nThus,\n\\begin{align*}\n(\\nabla \\cdot \\boldsymbol{v}_h, q_h)=0 \\quad \\forall \\boldsymbol{v}_h\\in V_h\n\\end{align*}\nimplies\n\\begin{align*}\nq_{i-\\frac12,j+\\frac12}+q_{i-\\frac12,j-\\frac12}-q_{i+\\frac12,j+\\frac12}-q_{i+\\frac12,j-\\frac12} &=0 \\\\\nq_{i+\\frac12,j-\\frac12}+q_{i-\\frac12,j-\\frac12}-q_{i+\\frac12,j+\\frac12}-q_{i-\\frac12,j+\\frac12} &=0\n\\end{align*}\nfor all interior vertices, that is, for $1\\leq i,j \\leq N-1$.\nAdding and subtracting these two constraints shows that they are equivalent to\n\\begin{align*}\nq_{i-\\frac12,j+\\frac12} &= q_{i+\\frac12,j-\\frac12} \\\\\nq_{i-\\frac12,j-\\frac12} &= q_{i+\\frac12,j+\\frac12}\n\\end{align*}\nfor all $1\\leq i,j \\leq N-1$. Since the mean value of the pressure must be zero,\nthe set of nonzero pressure functions that fulfill\n\\begin{align*}\n(\\nabla \\cdot \\boldsymbol{v}_h, q_h)=0 \\quad \\forall \\boldsymbol{v}_h\\in V_h\n\\end{align*}\ncan be described by\n\\begin{align*}\n p_{i+\\frac12,j+\\frac12} = c \\quad \\text{if } (i+j) \\operatorname{mod} 2 = 0\\\\\n p_{i+\\frac12,j+\\frac12} = -c \\quad \\text{if } (i+j) \\operatorname{mod} 2 = 1\n\\end{align*}\nwhere $c\\not=0$.\n\\end{solution}\n\\end{Problem}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Bubble stabilization and the MINI element}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{Definition}{barycentric-coordinates}\n  A simplex $T\\subset \\R^d$ with vertices $x_0,\\dots,x_d$ is described by\n  a set of $d+1$ \\define{barycentric coordinates}\n  $\\lambda_0,\\dots,\\lambda_d$ such that\n  \\begin{xalignat}{2}\n    0\\le\\lambda_i(x) &\\le 1& i&=0,\\dots,d;\\quad x\\in T\\\\\n    \\lambda_i(x_j) &= \\delta_{ij}& i,j&=0,\\dots,d\\\\\n    \\sum \\lambda_i(x) &= 1.\n  \\end{xalignat}\n\\end{Definition}\n\n\\begin{remark}\n  The functions $\\lambda_i(x)$ are the shape functions of the linear\n  $P_1$ element on $T$. 
They allow us to define basis functions on the\n  cell $T$ without use of a reference element $\\widehat T$.\n\n  Note that $\\lambda_i\\equiv 0$ on the face opposite to the\n  vertex $x_i$.\n\\end{remark}\n\n\\begin{example}\n  We can use barycentric coordinates to define shape functions on\n  simplicial meshes easily, as in\n  Table~\\ref{tab:barycentric-shapes}.\n  \\begin{table}[tp]\n    \\centering\n    \\begin{tabular}{|c|l|}\n      \\hline Degrees of freedom\n      & Shape functions \\\\\\hline\n      \\adjustbox{valign=center,margin=3pt}{\\includegraphics[width=2cm]{./fig/p1-p.tikz}}\n      &\n        {\\begin{minipage}[b]{6cm}\n          \\begin{gather*}\n            \\phi_i = \\lambda_i,\n            \\quad i=0,1,2\n          \\end{gather*}\n        \\end{minipage}}\n      \\\\\\hline\n      \\adjustbox{valign=center,margin=3pt}{\\includegraphics[width=2cm]{./fig/p2-p.tikz}}\n      &\n        {\\begin{minipage}[b]{6cm}\n          \\begin{xalignat*}2\n            \\phi_{ii} &= 2\\lambda_i^2 - \\lambda_i,\n            &i&=0,1,2\\\\\n            \\phi_{ij} &= 4\\lambda_i\\lambda_j\n            &j&\\neq i\n          \\end{xalignat*}\n        \\end{minipage}}\n        \\\\\\hline\n      \\adjustbox{valign=center,margin=3pt}{\\includegraphics[width=2cm]{./fig/p3-p.tikz}}\n      &\n        {\\begin{minipage}[b]{6cm}\n          \\begin{xalignat*}2\n          \\phi_{iii} &= \\tfrac12 \\lambda_i(3\\lambda_i-1)(3\\lambda_i-2)\n          &i&=0,1,2\\\\\n          \\phi_{ij} &= \\tfrac92\\lambda_i\\lambda_j(3\\lambda_j-1)\n          &j&\\neq i\\\\\n          \\phi_0 &= 27\\lambda_0\\lambda_1\\lambda_2\n        \\end{xalignat*}\n        \\end{minipage}}\n        \\\\\\hline\n    \\end{tabular}\n    \\caption{Degrees of freedom and shape functions of simplicial elements\n      in terms of barycentric coordinates}\n    \\label{tab:barycentric-shapes}\n  \\end{table}\n\\end{example}\n\n\\begin{Notation}{piecewise-polynomial-spaces}\n  We denote by\n  \\begin{gather}\n    \\label{eq:stokes:8}\n    H^k_h(\\mathcal P) =\n    \\bigl\\{ v\\in H^k(\\Omega) \\big\\vert\n    v_{|\\cell} \\in \\mathcal P \\;\\forall \\cell\\in\\mesh_h\\bigr\\}\n  \\end{gather}\n  the finite element space which is based on the shape function space\n  $\\mathcal P$, the mesh $\\mesh_h$ and is a subspace of\n  $H^k(\\domain)$. Examples are the continuous spaces of piecewise\n  polynomials or tensor product polynomials of degree $k$\n  \\begin{gather*}\n    H^1_h(\\P_k) \\qquad H^1_h(\\Q_k),\n  \\end{gather*}\n  and the discontinuous spaces\n  \\begin{gather*}\n    H^0_h(\\P_k) \\qquad H^0_h(\\Q_k).\n  \\end{gather*}\n\\end{Notation}\n\n\\begin{Definition}{h1-bubble-space}\n  An $H^1$-\\define{bubble function} on a mesh cell $\\cell$ is a\n  function $b\\in H^1_0(\\cell)$. A \\define{bubble space} $b_\\cell$ on\n  $\\cell$ is a discrete vector space of such bubble functions.  
We\n  denote the space of bubble functions on the mesh $\\mesh_h$ by\n  \\begin{gather*}\n    B_h(b_\\cell) = \\bigl\\{ v\\in H^1(\\domain) \\big\\vert\n    v_{|_\\cell} \\in b_\\cell \\;\\forall \\cell\\in\\mesh_h\n    \\bigr\\}.\n  \\end{gather*}\n  If there is no confusion about the local bubble space $b_T$, we also\n  write just $B_h$.\n\\end{Definition}\n\n\\begin{example}\n  A bubble function on a triangle $\\cell$ is easily defined by\n  \\begin{gather}\n    \\label{eq:stokes:7}\n    b_3 = b_{3,\\cell} = \\lambda_0\\lambda_1\\lambda_2.\n  \\end{gather}\n\\end{example}\n\n\\begin{Definition}{mini-element-p}\n  The \\define{MINI element} consists of the spaces\n  \\begin{gather}\n    V_h = \\bigl(H^1_h(\\P_1) \\oplus B_h(b_3)\\bigr)^2 \\cap V,\n    \\qquad\n    Q_h = H^1_h(\\P_1) \\cap Q.\n  \\end{gather}\n  Its degrees of freedom are:\n  \\begin{center}\n    \\includegraphics[width=.2\\textwidth]{./fig/p-mini-v.tikz}\n    \\hspace{1cm}\n    \\includegraphics[width=.2\\textwidth]{./fig/p1-p.tikz}\n  \\end{center}\n\\end{Definition}\n\n\\begin{intro}\n  We will now show that the MINI element is indeed inf-sup stable. To\n  this end, we construct the \\putindex{Fortin projection} according to\n  \\slideref{Lemma}{fortin}. Since the construction of such a\n  projection operator turns out to be a bit complicated, we first introduce\n  a construction principle, which will help us in our further\n  analysis. The idea of this principle is to separate the interpolation\n  into $V_h$ from the preservation of the divergence.\n\\end{intro}\n\n\\begin{Lemma}{fortin-construction-1}\n  Let there be operators $\\Pi_1,\\Pi_2\\colon V \\to V_h$ such that\n  \\begin{xalignat}{2}\n    \\label{eq:stokes:10}\n    \\norm{\\Pi_1 v}_V &\\le c_1 \\norm{v}_V\n    &\\forall v&\\in V,\\\\\n    \\label{eq:stokes:11}\n    \\norm{\\Pi_2(\\identity-\\Pi_1)v}_V &\\le c_2 \\norm{v}_V\n    &\\forall v&\\in V,\\\\\n    \\label{eq:stokes:12}\n    b(v-\\Pi_2v,q_h) &= 0\n    &\\forall v&\\in V, \\;q_h\\in Q_h,\n  \\end{xalignat}\n  with constants $c_1$ and $c_2$ independent of the discretization\n  parameter $h$. 
Then, the operator\n  \\begin{gather}\n    \\label{eq:stokes:9}\n    \\Pi_h = \\Pi_1 + \\Pi_2 - \\Pi_2\\Pi_1\n  \\end{gather}\n  is a \\putindex{Fortin projection}, that is, it is bounded on $V$ and\n  \\begin{gather*}\n    b(v-\\Pi_h v, q_h) =0 \\qquad\\forall q_h\\in Q_h.\n  \\end{gather*}\n\\end{Lemma}\n\n\\begin{proof}\n  Boundedness of $\\Pi_h$ is obvious, so we focus only on the\n  preservation of the kernel of $B$:\n  \\begin{multline*}\n    b(v-\\Pi_h v,q_h) = b(v-\\Pi_1 v - \\Pi_2 v + \\Pi_2\\Pi_1 v, q_h)\n    \\\\\n    = b(v-\\Pi_2 v, q_h) - b(\\Pi_1 v - \\Pi_2\\Pi_1 v,q_h) = 0-0 = 0.\n  \\end{multline*}\n\\end{proof}\n\n\\begin{Assumption}{h1-stable-interpolation}\n  There exists an $H^1$-stable interpolation operator $I_h:V\\to V_h$\n  such that for each cell $\\cell \\in \\mesh_h$ there holds for $m=0,1$\n  \\begin{gather}\n    \\label{eq:stokes:13}\n    \\snorm{v-I_h v}_{m,\\cell} \\le c \\sum_{\\cell'\\cap\\cell\n      \\neq\\emptyset} h_{\\cell'}^{1-m}\\snorm{v}_{1,\\cell'},\n  \\end{gather}\n  with a constant $c$ independent of the mesh parameter $h$.\n\\end{Assumption}\n\n\\begin{remark}\n  The interpolation operators of Cl\u00e9ment, Scott and Zhang, Sch\u00f6berl or\n  Ern and Guermond fulfil these assumptions.\n\\end{remark}\n\n\\begin{Definition}{locally-quasi-uniform}\n  A family of meshes $\\{\\mesh_h\\}$ is called \\define{locally\n    quasi-uniform}, if there is a constant $c$ such that\n  \\begin{gather}\n    \\label{eq:stokes:14}\n    \\forall h\n    \\;\n    \\forall \\cell,\\cell'\\in \\mesh_h\n    \\quad\n    \\cell\\cap\\cell'\\neq \\emptyset\n    \\Rightarrow\n    h_\\cell \\le c h_{\\cell'}.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Assumption}{locally-quasi-uniform}\n  We assume of all families of meshes that they are shape regular and\n  locally quasi-uniform, such that with\n  \\slideref{Assumption}{h1-stable-interpolation} there holds\n  for $m=0,1$\n  \\begin{gather}\n    \\label{eq:stokes:15}\n    \\snorm{v-I_h v}_{m,\\cell} \\le c h_\\cell^{1-m} \\snorm{v}_{1,\\domain_\\cell},\n  \\end{gather}\n  where $\\domain_\\cell$ is the union of all cells with nonempty\n  intersection with $\\cell$.\n\\end{Assumption}\n\n\\begin{Theorem}{mini-stability}\n  Under \\slideref{Assumption}{locally-quasi-uniform},\n  the MINI element is inf-sup stable.\n\\end{Theorem}\n\n\\begin{proof}\n  We construct a \\putindex{Fortin projection} by choosing\n  $\\Pi_1 = I_h$, where $I_h:V\\to \\bigl(H^1_h(\\P_1)\\bigr)^2$ is an\n  $H^1$-stable interpolation operator into the standard linear finite\n  element space. Now, we construct $\\Pi_2: V \\to \\bigl(B_h\\bigr)^2$\n  such that for all $q_h\\in Q_h$\n  \\begin{gather*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx\n    = \\int_\\domain (v-\\Pi_2v)\\cdot\\nabla q_h\\dx\n    = 0.\n  \\end{gather*}\n  Indeed, $\\Pi_2 v$ can be chosen on each cell. Since $\\nabla q_h$ is\n  constant on a cell $\\cell$, we choose\n  \\begin{gather*}\n    \\int_\\cell \\Pi_2 v_i \\dx\n    = \\alpha_{\\cell,i} \\int_\\cell b_{3,\\cell}\\dx\n    = \\int_\\cell v_i\\dx,\n  \\end{gather*}\n  where $i=1,2$ enumerates the velocity components. This is possible,\n  since the mean value of $b_3$ is strictly positive. 
Assuming shape\n  regularity, we can use the inverse estimate for $b_3$ to obtain\n  \\begin{gather*}\n    \\norm{\\Pi_2 v}_{1,\\cell}\n    \\le c h_\\cell^{-1} \\norm{\\Pi_2 v}_{0,\\cell}\n    \\le c h_\\cell^{-1} \\norm{v}_{0,\\cell}.\n  \\end{gather*}\n  Finally, we use the estimates for $I_h$ to obtain\n  \\begin{gather*}\n    \\norm{\\Pi_2 (\\identity-\\Pi_1) v}_{1,\\cell}\n    \\le c h_\\cell^{-1} \\norm{v-I_h v}_{0,\\cell}\n    \\le c \\snorm{v}_{1,\\domain_\\cell}.\n  \\end{gather*}\n  Since the number of intersecting of cells of shape regular meshes is\n  bounded, the final term is bounded by $\\norm v_{1,\\domain}$.\n\\end{proof}\n\n\\begin{Notation}{broken-bilinear-form}\n  We use the abbreviation\n  \\begin{gather}\n    \\label{eq:stokes:18}\n    \\form(f,g)_{\\mesh_h} = \\sum_{\\cell\\in\\mesh_h} \\form(f,g)_\\cell,\n  \\end{gather}\n  for so called \\define{broken bilinear form}s, where instead of\n  integrating over the union of subsets, we sume the integrals.\n\\end{Notation}\n\n\\begin{Lemma}{mini-stabilized}\n  The discretization of the Stokes problem~\\eqref{eq:stokes:3} with\n  the MINI element is equivalent to solving\n  \\begin{multline}\n    \\label{eq:stokes:16}\n    \\form(\\nabla u,\\nabla v) + \\form(\\div v,p) + \\form(\\div u,q)\n    - \\form(c_T \\nabla p, \\nabla q)_{\\mesh_h}\n    \\\\\n    = f(v) + g(q) + \\form(c_T f_T,\\nabla q)_{\\T_h}\n  \\end{multline}\n  with standard, continuous linear finite elements for velocity and\n  pressure. Here,\n  \\begin{gather*}\n    f_T = \\int_\\cell f\\dx,\n    \\qquad\n    c_T = \\frac{\\form(b_3,1)_\\cell}{\\norm{\\nabla\n      b_3}_\\cell^2}.\n  \\end{gather*}\n\\end{Lemma}\n\n\\begin{proof}\n  Let $V_h^1 = H^1_h(\\P_1)^2$ be the linear, vector-valued velocity\n  space and $V_h^b = B_h(b_3)^2$ the bubble function space, such that\n  the MINI element space is\n  \\begin{gather*}\n    V_h = V_h^1 \\oplus V_h^b.\n  \\end{gather*}\n  Accordingly, we split the solution with the MINI element into\n  $u_h = u_h^1 + u_h^b$.  
By integration by parts, we obtain for the\n  cubic bubble $b_3$\n  \\begin{gather*}\n    \\form(\\nabla v,\\nabla b_{3,\\cell})_\\cell = \\form(-\\Delta v,b_{3,\\cell})_\\cell = 0\n    \\qquad\\forall v\\in \\P_1,\n  \\end{gather*}\n  such that\n  \\begin{gather*}\n    \\form(\\nabla v, \\nabla b) = \\form(\\nabla b, \\nabla v) = 0\n    \\qquad\n    \\forall v\\in V_h^1,b\\in V_h^b.\n  \\end{gather*}\n\n  Thus, testing~\\eqref{eq:stokes:3} with $v_h\\in V_h^1$ yields\n  \\begin{gather}\n    \\label{eq:stokes:17}\n    \\form(\\nabla u_h^1,\\nabla v_h)\n    + \\form(\\div v_h, p_h) = f(v_h)\n    \\qquad\\forall v_h\\in V_h^1.\n  \\end{gather}\n  Testing the same equation with $v\\in V_h^b$, we obtain\n  \\begin{gather}\n    \\form(\\nabla u_h^b,\\nabla v_h)\n    = f(v_h) - \\form(\\div v_h, p_h)\n    = f(v_h) + \\form(v_h,\\nabla p_h)_{\\mesh_h}.\n  \\end{gather}\n  Choosing more specifically $v_h$ as the bubble function $b_{3,\\cell}$ of the\n  cell $\\cell$ for each vector component yields\n  \\begin{gather}\n    \\label{eq:stokes:20}\n    \\mu_\\cell^{(i)}  = \\frac{1}{\\norm{\\nabla b_3}_\\cell^2}\\form(f+\\d_i p_h,b_3)_\\cell,\n  \\end{gather}\n  where $\\mu_\\cell^{(i)}$ is the coefficient in front of the basis\n  function $b_{3,\\cell}$ on cell $\\cell$ in the basis representation of\n  $u_h^{(i)}$, such that\n  \\begin{gather}\n    \\label{eq:stokes:19}\n    u_h^b = \\sum_{\\cell\\in\\mesh_h}\n    \\begin{pmatrix}\n      \\mu_\\cell^{(1)} b_{3,\\cell}\\\\\n      \\mu_\\cell^{(2)} b_{3,\\cell}\n    \\end{pmatrix}\n  \\end{gather}\n  Testing the Stokes equations with $q_h\\in Q_h$, we obtain the\n  divergence equation\n  \\begin{gather}\n    \\form(\\div u_h^1 + \\div u_h^b,q_h)\n    = \\form(\\div u_h^1, q_h)\n    - \\form(u_h^b, \\nabla q_h)_{\\mesh_h} = g(q_h).\n  \\end{gather}\n  Using~\\eqref{eq:stokes:20}, \\eqref{eq:stokes:19} and using\n  $f_T^{(i)} = \\form(f^{(i)},1)_\\cell$ yields\n  \\begin{align*}\n    \\form(u_h^b, \\nabla q_h)_{\\mesh_h}\n    &=\n    \\sum_{\\cell\\in\\mesh_h} \\frac{1}{\\norm{\\nabla b_3}_\\cell^2}\n    \\sum_{i=1,2}\\form(f_\\cell^{(i)}+\\d_i\n      p_h,{b_3(T)}\\d_i q_h)_\\cell\\\\\n    &= \\sum_{\\cell\\in\\mesh_h} \\frac{\\form(b_3,1)_\\cell}{\\norm{\\nabla\n      b_3}_\\cell^2}\n      \\form(f_T+\\nabla p_h,\\nabla q_h)_\\cell\n  \\end{align*}\n\\end{proof}\n\n\\begin{remark}\n  The constant $c_T$ in the previous lemma was computed by the formula\n  \\begin{gather*}\n    c_T = \\frac{\\form(b_3,1)_\\cell}{\\norm{\\nabla b_3}_\\cell^2}.\n  \\end{gather*}\n  This formula is complicated and we would rather like to avoid\n  computing $c_T$ for every mesh cell, since we have to evaluate\n  integrals of cubic functions. On the other hand, the same constant\n  $c_T$ appears on the left and on the right of the modified\n  equation~\\eqref{eq:stokes:16}. Therefore, we can replace both by a\n  constant of similar size without affecting consistency or the\n  characteristic properties of the equation. 
Therefore, we estimate\n  \\begin{gather}\n    c_T = \\frac{\\form(b_3,1)_\\cell}{\\norm{\\nabla b_3}_\\cell^2}\n    \\simeq \\frac{\\norm{b_3}_\\cell^2}{\\norm{\\nabla b_3}_\\cell^2}\n    \\simeq h_T^2,\n  \\end{gather}\n  where ``$\\simeq$'' indicates equality up to a constant independent\n  of $h$, but depending on the constant in \\putindex{shape regularity}.\n\\end{remark}\n\n\\begin{remark}\n  The method introduced in \\slideref{Lemma}{mini-stabilized} is an\n  example for a \\define{stabilized method}, here in particular\n  \\define{pressure stabilization}. Such methods were particularly\n  popular in the early decades of finite element computation, since\n  they only involve simple shape function spaces. They are still\n  widely used due to their simplicity. The method constructed this way\n  is consistent, i.~e.~, the continuous solution $(u,p)$ solves the\n  discrete problem.\n\\end{remark}\n\n\n\\begin{Problem}{quadrilateral-mini}\n  Show that the MINI element can be generalized to quadrilateral\n  meshes. Design a bubble space $b_Q$ of minimal tensor degree such that\n  \\begin{gather*}\n    V_h = \\bigl(H^1_h(\\Q_1) \\oplus B_h(b_Q) \\cap V\\bigr)^2,\n    \\qquad\n    Q_h = H^1_h(\\Q_1) \\cap Q.\n  \\end{gather*}\n\n  Discuss extensions to tetrahedra and hexahedra in three dimensions.\n\\begin{solution}\n  For simplices in any dimension consider the bubble function\n  \\begin{align*}\n    b_T=\\prod_{i=1}^d\\lambda_i\\in \\mathbb{P}_d(\\mathcal{T}_h)\n  \\end{align*}\n  and we can procede exactly as in the 2D case, i.e. the bubble\n  space is spanned by\n  \\begin{align*}\n   \\left\\{b_T e_1,\\dotsc, b_T e_d\\right\\}.\n  \\end{align*}\n\n  For quadrilaterals and higher-dimensional generalizations consider\n  on the reference cell $[-1,1]^d$\n  \\begin{align*}\n    \\hat{b}_T=\\prod_{i=1}^d(1-x_i^2)\\in \\mathbb{Q}_{2d}(\\hat{\\mathcal{T}}_h).\n  \\end{align*}\n  In particular, we have to satisfy\n  \\begin{gather*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx\n    = \\int_\\domain (v-\\Pi_2v)\\cdot\\nabla q_h\\dx\n    = 0 \\quad \\forall q_h\\in Q_h.\n  \\end{gather*}\n  Since $\\nabla Q_h$ is spanned by\n  \\begin{align*}\n    \\left\\{\n      \\begin{pmatrix} 1 \\\\0 \\end{pmatrix},\n      \\begin{pmatrix} 0 \\\\1 \\end{pmatrix},\n      \\begin{pmatrix} y \\\\x \\end{pmatrix}\n    \\right\\}\n  \\end{align*}\n  in 3D, we need three additional bubble functions we choose as\n  \\begin{align*}\n    b_{T,1}= \\begin{pmatrix} \\hat{b}_T   \\\\ 0         \\end{pmatrix},\n    b_{T,2}= \\begin{pmatrix}         0   \\\\ \\hat{b}_T \\end{pmatrix},\n    b_{T,3}= \\begin{pmatrix} \\hat{b}_T y \\\\ \\hat{b}_T x \\end{pmatrix}.\n  \\end{align*}\n  With respect to the reference element $[-1,1]^2$, these functions are orthogonal\n  to the basis of $\\nabla Q_h$ and non-negative. 
Hence, there exists a function\n  fulfilling the three requirements and we define $\\Pi_2$ to be the projection to the\n  space spanned by $b_{T,1}, b_{T,2}$ and $b_{T,3}$.\n\\end{solution}\n\\end{Problem}\n\n\\begin{Problem}{mini-3d}\n  By introducing barycentric coordinates $\\lambda_0,\\dots,\\lambda_3$\n  for a tetrahedron $\\cell\\subset\\R^3$ and the quartic bubble\n  \\begin{gather}\n    b_{4,\\cell} = \\lambda_0\\lambda_1\\lambda_2\\lambda_3,\n  \\end{gather}\n  show that the MINI element has a natural generalization to three\n  dimensional problems.\n\\begin{solution}\n  We procede just as in the two-dimensional case and observe for any\n  $\\Pi_2: V \\to \\bigl(B_h\\bigr)^3$\n  \\begin{align*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx\n    = \\int_\\domain (v-\\Pi_2v)\\cdot\\nabla q_h\\dx = \\sum_\\cell\\int_\\cell (v-\\Pi_2v)\\dx\\cdot\\nabla q_\\cell\\dx\n  \\end{align*}\n  due to the continuity of the ansatz spaces.\n  If we choose $\\Pi_2: V \\to \\bigl(B_h\\bigr)^3$ such that for all $q_h\\in Q_h$,\n  \\begin{align*}\n   \\int_\\cell \\Pi_2 v_i \\dx\n    = \\alpha_{\\cell,i} \\int_\\cell b_{4,\\cell}\\dx\n    = \\int_\\cell v_i\\dx,\n  \\end{align*}\n  we indeed get the desired property\n  \\begin{align*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx = 0.\n  \\end{align*}\n  This is possible as $b_{4,\\cell}$ is nonnegative. Stability follows as in the two-dimensional case.\n  \\begin{gather*}\n    \\norm{\\Pi_2 v}_{1,\\cell}\n    \\le c h_\\cell^{-1} \\norm{\\Pi_2 v}_{0,\\cell}\n    \\le c h_\\cell^{-1} \\norm{v}_{0,\\cell}.\n  \\end{gather*}\n  Finally, we use the estimates for $I_h$ to obtain\n  \\begin{gather*}\n    \\norm{\\Pi_2 (\\identity-\\Pi_1) v}_{1,\\cell}\n    \\le c h_\\cell^{-1} \\norm{v-I_h v}_{0,\\cell}\n    \\le c \\snorm{v}_{1,\\domain_\\cell}.\n  \\end{gather*}\n  Since the number of intersecting of cells of shape regular meshes is\n  bounded, the final term is bounded by $\\norm v_{1,\\domain}$.\n\\end{solution}\n\n\\end{Problem}\n\n\\begin{intro}\n  The reasoning behind the MINI element can be applied easily to\n  pressure spaces of higher order. Take for instance the pair\n  $P_k-P_k$, generalized from Example~\\ref{ex:p1-p1}.\n  There holds $\\div v_h \\in \\P_{k-1}$ on each cell,\n  and the term\n  \\begin{gather*}\n    \\int_\\cell \\div v_h q_h\\dx\n  \\end{gather*}\n  does not control the function in $\\hat p_\\cell \\in \\P_k$ which is\n  orthogonal to $\\P_{k-1}$. The only function $p_h\\in Q_h$ such that\n  $p_{h|\\cell} = \\hat p_\\cell$ for each cell $\\cell\\in\\mesh_h$ may be\n  zero or not, depending on the mesh geometry. Thus, the element is\n  not stable on arbitrary shape regular meshes. 
But, as we prove\n  below, the same enrichment process by bubble functions can be\n  employed for its stabilization.\n\\end{intro}\n\n\\begin{Definition}{higher-order-bubble}\n  With any pressure space $Q_h$ we associate the \\define{bubble space}\n  \\begin{gather}\n    \\label{eq:stokes:24}\n    B_h(b_\\cell \\nabla Q_h)\n    = \\bigl\\{ v\\in V \\big\\vert\n    \\; \\exists q_h\\in Q_h \\colon v_{|\\cell} = b_\\cell \\nabla q_h \\bigr\\}.\n  \\end{gather}\n  Here, $b_\\cell$ is a bubble function on $\\cell$ like the cubic buble\n  $b_{3,\\cell}$ of a triangle, the quartic bubble $b_{4,\\cell}$, the\n  biquadratic bubble $b_{2^2,\\cell}$ of a quadrilateral or the\n  triquadratic bubble $b_{2^3,\\cell}$ of a hexahedron.\n\n  We also define the cell bubble space\n  \\begin{gather}\n    \\label{eq:stokes:25}\n    B_\\cell(\\nabla Q_h) = \\bigl\\{ v\\in L^2(\\cell) \\big\\vert\n    \\;\\exists q_h\\in Q_h \\colon v = b_T \\nabla q_{h|\\cell}\\bigr\\}.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Theorem}{higher-order-bubble}\n  Assume that the pair $V_h\\times Q_h$ is chosen such that there is\n  an $H^1$-stable interpolation operator according to\n  \\slideref{Assumption}{h1-stable-interpolation}, such that $Q_h\\subset\n  C^0(\\domain)$, piecewise differentiable, and such that\n  \\begin{gather}\n    B_h(b_\\cell \\nabla Q_h) \\subset V_h.\n  \\end{gather}\n  Then, the pair $V_h\\times Q_h$ is inf-sup stable.\n\\end{Theorem}\n\n\\begin{proof}\n  We construct the \\putindex{Fortin projection} by\n  \\slideref{Lemma}{fortin-construction-1} choosing $\\Pi_1$ as the\n  $H^1$-stable interpolation operator. The operator $\\Pi_2$ is\n  constructed cell-wise such that $\\Pi_2\\colon H^1(\\cell) \\to\n  B_\\cell(\\nabla Q_h)$ fulfills\n  \\begin{gather}\n    \\label{eq:stokes:26}\n    \\int_\\cell (\\Pi_2 u - u) \\cdot\\nabla q = 0,\n    \\qquad\n    \\forall q\\in Q_{h|\\cell}.\n  \\end{gather}\n  Clearly, the dimension of $B_\\cell(\\nabla Q_h)$ equals the dimension\n  of $Q_{h|\\cell}$. Then, since the bubble functions are strictly\n  positive inside $\\cell$, equation~\\eqref{eq:stokes:26} defines\n  $\\Pi_2 u$ uniquely. It remains to show the $H^1$-stability of\n  $\\Pi_2(\\identity-\\Pi_1)$, which is done by the standard scaling\n  argument\n  \\begin{gather*}\n    \\snorm{\\Pi_2 v}_{1,\\cell}\n    =\\snorm{\\widehat{\\Pi_2 v}}_{1,\\widehat\\cell}\n    \\le c \\norm{\\widehat v}_{1,\\widehat\\cell}\n    \\le c \\bigl(h_\\cell^{-1} \\norm{v}_{0,\\cell} + \\snorm{v}_{1,\\cell}\\bigr).\n  \\end{gather*}\n\\end{proof}\n\n\\begin{Corollary}{pk-bubble}\n  Let $Q_h\\subset Q$ be continuous and cell-wise differentiable. If\n  \\begin{gather*}\n    H^1_h(\\P_1)^d\\oplus B_h(b_\\cell\\nabla Q_h) \\subset V_h \\subset V,\n  \\end{gather*}\n  then the pair $V_h\\times Q_h$ is inf-sup stable. 
The same holds on\n  quadrilateral and hexahedral meshes replacing $\\P_1$ by $\\Q_1$.\n\\end{Corollary}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Elements with discontinuous pressure}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{intro}\n  In this section, we consider a second stable element, which like\n  the MINI element is not so much of practical use, but exhibits\n  typical properties of the analysis of finite element spaces for the\n  Stokes problem.\n\\end{intro}\n\n\\begin{Definition}{p2-p0-element}\n  The \\textbf{$\\mathbf{P_2-P_0}$ element} on triangles consists of the finite\n  element spaces\n  \\begin{gather}\n    V_h = H^1_h(\\P_2)^2 \\cap V,\n    \\qquad\n    Q_h = H^0_h(\\P_0) \\cap Q.\n  \\end{gather}\n  Its degrees of freedom are:\n    \\begin{center}\n    \\includegraphics[width=.2\\textwidth]{./fig/p2-v.tikz}\n    \\hspace{1cm}\n    \\includegraphics[width=.2\\textwidth]{./fig/p0-p.tikz}\n  \\end{center}\n\\end{Definition}\n\n\\begin{Lemma}{p2-p0-stability}\n  The $P_2-P_0$ element is inf-sup stable.\n\\end{Lemma}\n\n\\begin{proof}\n  We again prove stability by constructing a \\putindex{Fortin\n    projection} using the two step algorithm of\n  \\slideref{Lemma}{fortin-construction-1}. Again, we choose for\n  $\\Pi_1$ an $H^1$-stable interpolation according to\n  \\slideref{Assumption}{h1-stable-interpolation}. It remains therefore\n  to construct $\\Pi_2$. First, since $q_{h|\\cell}$ is constant on\n  each cell $\\cell\\in\\mesh_h$, we can apply the Gauss theorem to the\n  divergence condition to obtain\n  \\begin{gather}\n    \\label{eq:stokes:23}\n    \\int_\\cell \\div (u-\\Pi_2u)\\dx = \\int_{\\d\\cell} (u-\\Pi_2u)\\cdot\\n\\ds.\n  \\end{gather}\n  Hence, the following interpolation conditions on each cell $\\cell$\n  define a divergence preserving operator $\\Pi_2$:\n  \\begin{xalignat}2\n    \\label{eq:stokes:21}\n    \\Pi_2 u(x) &= 0\n    & \\forall x &\\text{ is vertex of } \\cell\\\\\n    \\label{eq:stokes:22}\n    \\int_E \\Pi_2 u\\ds &= \\int_E u\\ds\n    & \\forall E &\\text{ is edge of } \\cell\n  \\end{xalignat}\n  This is true, since~\\eqref{eq:stokes:22} implies the right hand side\n  of~\\eqref{eq:stokes:23}. It remains to show the $H^1$-stability of\n  $\\Pi_2(\\identity-\\Pi_1)$. Let us first observe that the\n  interpolation operator only involves edge integrals of $u$, which\n  are well-defined on $H^1$. Thus, we have by the standard scaling\n  argument\n  \\begin{gather*}\n    \\snorm{\\Pi_2 v}_{1,\\cell}\n    =\\snorm{\\widehat{\\Pi_2 v}}_{1,\\widehat\\cell}\n    \\le c \\norm{\\widehat v}_{1,\\widehat\\cell}\n    \\le c \\bigl(h_\\cell^{-1} \\norm{v}_{0,\\cell} + \\snorm{v}_{1,\\cell}\\bigr).\n  \\end{gather*}\n  Entering $v=u-\\Pi_1 u$ and the estimates~\\eqref{eq:stokes:15} of\n  \\slideref{Assumption}{locally-quasi-uniform}, we obtain\n  \\begin{gather*}\n    \\norm{\\Pi_2 (\\identity-\\Pi_1)u}_1^2\n    = \\sum_{\\cell\\in\\mesh_h}\\norm{\\Pi_2\n      (\\identity-\\Pi_1)u}_{1,\\cell}^2\n    \\le c \\norm{u}_1^2\n  \\end{gather*}\n\\end{proof}\n\n\\begin{remark}\n  The proof shows, that from a mathematical point of view degrees of\n  freedom on edges are more reasonably defined by integrals along the\n  edge than by values in the mid points. This is something, we will\n  encounter again and again. 
Nevertheless, we will not change the\n  cartoons for the degrees of freedom and just note that a degree of\n  freedom on an edge, while drawn as a point, may be an integral value.\n\\end{remark}\n\n\\begin{Theorem}{p2-p0-convergence}\n  Let $(u,p)\\in V\\times Q$ be a solution to the Stokes problem and let\n  the pair $(u_h,p_h) \\in V_h\\times Q_h$ be the approximation on a\n  mesh $\\mesh_h$ of mesh size $h$ with the $P_2-P_0$ element of\n  \\slideref{Definition}{p2-p0-element}. Then, we have the error\n  estimate\n  \\begin{gather*}\n    \\norm{u-u_h}_1 + \\norm{p-p_h}_0\n    \\le c \\bigl(h^2\\snorm{u}_3 + h \\snorm{p}_1\\bigr).\n  \\end{gather*}\n\\end{Theorem}\n\n\\begin{remark}\n  While this theorem is optimal with respect to our analysis, it is\n  not optimal with respect to the approximation properties of $V_h$.\n\\end{remark}\n\n\\begin{remark}\n  Let us review the construction principles behind the \\putindex{MINI\n    element} and the $P_2-P_0$ element. The uncontrolled pressure\n  modes in $\\ker B_h^T$ of the $P_1-P_1$ element in\n  Example~\\ref{ex:p1-p1} were those pressures\n  with alternating signs at neighboring vertices, such that the mean\n  value of $p_h$ is zero on each cell. Therefore, $p_h$ is orthogonal\n  to the constant derivatives of the linear velocity space. Thus, we\n  add a local function on each cell with nonconstant gradient, and the\n  mean value of the pressure can be controlled.\n\n  The kernel of $B_h^T$ for the element $H^1(\\P_1)^2 - H^0(\\P_0)$ on\n  the other hand contains functions that are constant on each cell,\n  but jump over cell boundaries. By integration by parts, we have\n  \\begin{gather*}\n    \\int_\\cell \\div b_\\cell q_h \\dx\n    = -\\int_\\cell b_\\cell \\cdot\\nabla q_h\n    + \\int_{\\d\\cell} b_\\cell q_h \\ds\n    = 0.\n  \\end{gather*}\n  Hence, no kind of bubble function helps controlling the jump of\n  $p_h$ over an edge. Instead, we introduce a degree of freedom on the\n  edge. Integrating by parts on two neighboring cells $\\cell_1$ and\n  $\\cell_2$, we obtain on the common edge $E_{12}$ a term of the form\n  \\begin{gather*}\n    \\int_{E_{12}} \\left[u\\cdot\\n_1 q_1 + u\\cdot\\n_2 q_2\\right],\n  \\end{gather*}\n  which by the continuity of $u\\cdot\\n$ translates to\n  \\begin{gather}\n    \\int_{E_{12}} u\\cdot\\n_1 (q_1- q_2).\n  \\end{gather}\n  Thus, we can use the interpolation operator $\\Pi_2$ to obtain a\n  function $u$ such that\n  \\begin{gather*}\n    \\int_{E_{12}} u\\cdot\\n_1 \\ds = (q_1-q_2),\n  \\end{gather*}\n  such that\n  \\begin{gather*}\n    \\form(\\div u,q_h) = \\int_{E_{12}} \\abs{q_1-q_2}^2\\ds + \\text{\n      other terms}.\n  \\end{gather*}\n\\end{remark}\n\n\\begin{Problem}{q2-q0}\n  Show that the quadrilateral element\n  \\begin{gather}\n    V_h = H^1_h(\\Q_2)^2\\cap V,\n    \\qquad Q_h = H^0_h(\\P_0) \\cap Q,\n  \\end{gather}\n  with degrees of freedom\n  \\begin{center}\n    \\includegraphics[width=.2\\textwidth]{./fig/q2-v.tikz}\n    \\hspace{1cm}\n    \\includegraphics[width=.2\\textwidth]{./fig/q0-p.tikz}\n  \\end{center}\n  is inf-sup stable. 
Does the proof translate to the $P_2-P_0$ element\n  on tetrahedra or the $Q_2-P_0$ element on hexahedra?\n\\begin{solution}\n  Let's first consider the element pair $\\mathbb{Q}_2/\\mathbb{P}\\_0^-$ in 2D.\n  Since any function in the pressure ansatz space is piecewise constant, we observe\n  for any $\\Pi_2: V \\to \\bigl(V_h\\bigr)^2$\n  \\begin{align*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx\n    = \\sum_\\cell\\int_{\\partial \\cell} (v-\\Pi_2v)q_h \\cdot n \\ds \\,q_\\cell.\n  \\end{align*}\n  If we choose $\\Pi_2: V \\to \\bigl(V_h\\bigr)^2$ such that for all $q_h\\in Q_h$,\n  \\begin{align*}\n    \\Pi_2 u(x) &= 0\n    & \\forall x &\\text{ is vertex or midpoint of } \\cell\\\\\n    \\int_E \\Pi_2 u\\ds &= \\int_E u\\ds\n    & \\forall E &\\text{ is edge of } \\cell\n  \\end{align*}\n  we clearly get the desired property\n  \\begin{align*}\n    \\int_\\domain \\div(\\Pi_2 v-v) q_h \\dx = 0.\n  \\end{align*}\n  In fact, the first condition determines 5 of the 9 degrees of freedom\n  and the second condition the remaining 4. This means that the interpolation\n  again is given by face integrals alone and we achieve stability via\n  \\begin{align*}\n    \\snorm{\\Pi_2 v}_{1,\\cell}\n    =\\snorm{\\widehat{\\Pi_2 v}}_{1,\\widehat\\cell}\n    \\le c \\norm{\\widehat v}_{1,\\widehat\\cell}\n    \\le c \\bigl(h_\\cell^{-1} \\norm{v}_{0,\\cell} + \\snorm{v}_{1,\\cell}\\bigr).\n  \\end{align*}\n  Entering $v=u-\\Pi_1 u$ and the estimates~\\eqref{eq:stokes:15} of\n  \\slideref{Assumption}{locally-quasi-uniform}, we obtain\n  \\begin{align*}\n    \\norm{\\Pi_2 (\\identity-\\Pi_1)u}_1^2\n    = \\sum_{\\cell\\in\\mesh_h}\\norm{\\Pi_2\n      (\\identity-\\Pi_1)u}_{1,\\cell}^2\n    \\le c \\norm{u}_1^2.\n  \\end{align*}\n\n  For the pair $\\mathbb{P}_2/\\mathbb{P}_0^-$ in 3D on tetrahedra we have\n  4 degrees of freedom at the vertices and 6 degrees of freedom on the midpoints of the edges.\n  Again, the condition we need to satisfy is\n  \\begin{align*}\n    \\int_{\\partial \\cell} (v-\\Pi_2v)q_h \\cdot n \\ds \\,q_\\cell = 0.\n  \\end{align*}\n  Hence, one way to choose the interpolation operator is by requiring\n  \\begin{align*}\n    \\int_E \\Pi_2 u\\ds &= 0           & \\forall E &\\text{ is edge of } \\cell\\\\\n    \\int_F \\Pi_2 u\\ds &= \\int_F u\\ds & \\forall F &\\text{ is face of } \\cell.\n  \\end{align*}\n  The first and second condition determine 6 respectively 4 degrees of freedom\n  and stability follows as usual by inverse estimates.\n\n  For the pair $\\mathbb{Q}_2/\\mathbb{P}_0^-$ in 3D on hexahedra we have the\n  following number of degree of freedoms:\n  \\begin{itemize}\n  \\item 1 in the interior\n  \\item 6 on the 6 faces\n  \\item 12 on the 12 edges\n  \\item 8 at the 8 vertices\n  \\end{itemize}\n  Again, the condition we need to satisfy is\n  \\begin{align*}\n    \\int_{\\partial \\cell} (v-\\Pi_2v)q_h \\cdot n \\ds \\,q_\\cell = 0.\n  \\end{align*}\n  Hence, one way to choose the interpolation operator is by requiring\n  \\begin{align*}\n    \\Pi_2 u(x)        &= 0           & \\forall x &\\text{ is vertex, edge or midpoint of } \\cell\\\\\n    \\int_F \\Pi_2 u\\ds &= \\int_F u\\ds & \\forall F &\\text{ is face of } \\cell.\n  \\end{align*}\n  The first and second condition determine 6 respectively 4 degrees of freedom\n  and stability follows as usual by inverse estimates. 
In fact, the interpolator again\n  just relies on face integrals.\n\\end{solution}\n\\end{Problem}\n\n\\begin{intro}\n  In spite of our remarks above, there is a generalization of the\n  $P_2-P_0$ element involving bubble functions. We will discuss it in\n  an abstract theorem first and then derive a family of inf-sup stable\n  pairs.\n\\end{intro}\n\n\\begin{Lemma}{bubble-discontinuous}\n  Given a space $Q_h\\subset Q$ possibly discontinuous, choose\n  $V_h\\subset V$ such that\n  \\begin{gather*}\n    B_h(b_\\cell \\nabla Q_h)^d \\subset V_h.\n  \\end{gather*}\n  If there is an operator $\\Pi_1$ such that\n  \\begin{xalignat*}2\n    \\norm{\\Pi_1 v}_V & \\le \\norm{v}_V\n    &\\forall v&\\in V,\\\\\n    \\int_\\cell \\div (v-\\Pi_1v) \\dx &= 0\n    &\\forall v&\\in V, \\cell\\in\\mesh_h,\n  \\end{xalignat*}\n  then the pair $V_h\\times Q_h$ is inf-sup stable.\n\\end{Lemma}\n\n\\begin{proof}\n  We construct the \\putindex{Fortin projection} using $\\Pi_1$ and\n  define $\\Pi_2$ only on $V^0 = \\ker{\\div}$. This is sufficient, since\n  for any $v\\in V$ there holds $v-\\Pi_1 v \\in V^0$. Therefore, define\n  cell-wise $\\Pi_2\\colon V^0_{|\\cell} \\to B_\\cell(b_\\cell \\nabla Q_h)$\n  by the conditions\n  \\begin{gather}\n    \\int_\\cell \\div (\\Pi_2 v - v ) q_h \\dx = 0\n    \\qquad\\forall q_h\\in Q_{h|\\cell}.\n  \\end{gather}\n  By this condition, $\\Pi_2 v$ is divergence free itself.\n  Note that by the Gauss theorem, the divergence of a bubble function\n  has always zero mean. Therefore, we have unique solvability and\n  $\\Pi_h v$ is well defined. It remains to apply the standard scaling\n  argument to prove\n  \\begin{gather*}\n    \\norm{\\Pi_2 v}_1 \\le c \\norm{v}_1.\n  \\end{gather*}\n\\end{proof}\n\n\\begin{remark}\n  The divergence condition in the previous lemma is different from the\n  condition on Fortin projections, since it only involves piecewise\n  constant pressure. Therefore, the lemma in effect splits the\n  pressure space into a piecewise constant part and its\n  complement. Then, the pressure in the complement is controlled by\n  the bubble functions. It still remains to guarantee the existence\n  of the operator $\\Pi_1$. In one case, we have verified the\n  existence of such an operator: the Fortin operator for the $P_2-P_0$\n  element. Therefore, we have\n\\end{remark}\n\n\\begin{Corollary}{bubble-p2}\n  Let $Q_h\\subset Q$ be a space of piecewise differentiable\n  functions. If for $V_h\\subset V$ holds\n  \\begin{gather*}\n    H^1_h(\\P_2)^2 \\oplus B_h(b_\\cell \\nabla Q_h)^2 \\subset V_h,\n  \\end{gather*}\n  then the pair $V_h\\times Q_h$ is inf-sup stable.\n\\end{Corollary}\n\n\\begin{Corollary}{pk-pk2}\n  Let the space dimension be $d=2$ and $k\\ge 2$. Then, the spaces\n  \\begin{gather}\n    V_h = H^1_h(\\P_k)^2 \\cap V,\n    \\qquad\n    Q_k = H^0_h(\\P_{k-2}) \\cap Q,\n  \\end{gather}\n  form an inf-sup stable pair.\n\\end{Corollary}\n\n\\begin{proof}\n  For $k=2$, this is the $P_2-P_0$ element. For $k>2$, we have for all\n  $q_h\\in Q_h$ on every cell $\\nabla q_h\\in \\P_{k-3}$. Therefore,\n  $(b_{3,\\cell} q_h)_{|\\cell} \\in \\P_k$.\n\\end{proof}\n\n\\begin{intro}\n  Studying the proof of \\slideref{Lemma}{p2-p0-stability} in more\n  detail, we can find a much more general result with much weaker\n  assumptions. Indeed, we only need to be able to have a single degree\n  of freedom on each edge or face which allows to adjust the average\n  normal velocity over this edge of face. 
We summarize this in:\n\\end{intro}\n\n\\begin{Theorem}{discontinuous-pressure-normal-velocity}\n  Let $Q_h\\subset Q$ and let $V_h \\subset V$ be such that there is an\n  $H^1$-stable interpolation according to\n  \\slideref{Assumption}{h1-stable-interpolation}. Furthermore, let\n  there be a degree of freedom on each edge (face in 3D) controlling\n  the average normal derivative of $u\\in V_h$ on this face. Finally,\n  let $V_h$ contain the bubble space for $\\nabla Q_h$,\n  \\begin{gather*}\n    B_h(b_\\cell \\nabla Q_h)^d \\subset V_h.\n  \\end{gather*}\n  Then, the pair $V_h\\times Q_h$ is inf-sup stable.\n\\end{Theorem}\n\n\\begin{proof}\n  With the assumptions made, it is sufficient to construct the\n  operator $\\Pi_1$ in \\slideref{Lemma}{bubble-discontinuous}. Then, we\n  can apply this lemma and the result is proven. Going back\n  to~\\eqref{eq:stokes:23}, we see that the interpolation condition\n  ~\\eqref{eq:stokes:22} is more than needed.\n\n  Now, let $\\{\\nodal_{\\cell,i}\\}$ be the $n_\\cell$ node values for the\n  discrete velocity space $V_\\cell = V_{h|\\cell}$ on the cell $\\cell$. For\n  convenience, let them be ordered in a way, that the first ones\n  control the normal derivatives of $u_h$ on the faces of the cell,\n  that is,\n  \\begin{gather}\n    \\nodal_{\\cell,i} = \\int_{F_i} u\\cdot\\n \\ds\n    \\qquad i=1,\\dots,n_F,\n  \\end{gather}\n  where $n_F$ is the number of faces per cell. Given the $H^1$-stable\n  interpolation operator $I_h$, define $\\Pi_2$ cell-wise such that\n  \\begin{xalignat}2\n    \\nodal_{\\cell,i}(\\Pi_2 u) &= \\nodal_{\\cell,i}(u)\n    &i&=1,\\dots,n_F\\\\\n    \\nodal_{\\cell,i}(\\Pi_2 u) &= 0\n    &i&=n_F+1,\\dots,n_\\cell.\n  \\end{xalignat}\n  Choosing the basis on $V_T$ which is dual to $\\{\\nodal_{\\cell,i}\\}$,\n  we see that this indeed implies\n  \\begin{gather*}\n    0=\\int_{\\d\\cell} (\\Pi_2 u - u) \\ds = \\int_\\cell \\div (\\Pi_2 u-u)\\dx.\n  \\end{gather*}\n  Therefore, $\\Pi_1 = I_h + \\Pi_2(\\identity-I_h)$ is divergence\n  preserving. Boundedness follows by the standard scaling argument.\n\\end{proof}\n\n\\begin{Corollary}{qk-pk1}\n  The finite-element pair $V_h\\times Q_h$ with\n  \\begin{gather*}\n    V_h = H^1_h(\\Q_k)^2 \\cap V,\n    \\qquad\n    Q_h = H^0_h(\\P_{k-1}) \\cap Q,\n  \\end{gather*}\n  called the $Q_k-P_{k-1}$ element is inf-sup stable for any $k\\ge 2$.\n\\end{Corollary}\n\n\\begin{proof}\n  First, we prove that $B_h(b_\\cell \\nabla Q_h)^d \\subset V_h$. To\n  this end, we note that on each cell, we have that the gradient of\n  a discrete pressure $q_h$ restricted to this cell is in\n  $\\P_{k-2}\\subset \\Q_{k-2}$. The bubble function $b_T$ is in $\\Q_2$,\n  such that $b_T\\nabla q_T \\in \\Q_k$, which was to be proven.\n\n  For the assumption on the degrees of freedom, we refer to the\n  following definition. Once the degrees of freedom for each velocity\n  component are determined by this definition, we can simply select\n  the normal component on Cartesian meshes. On meshes with straight\n  interfaces, it is clear that we can choose a linear combination of\n  the components of $u$ splitting into normal and tangential and thus\n  get the desired result. 
In general, we refer to the \\putindex{Piola\n  transformation} in the next chapter.\n\\end{proof}\n\n\\begin{Definition}{moment-basis-1d}\n  The shape function space $\\P_k$ on the reference element $\\widehat\n  T = [-1,1]$ in one dimension can be split into\n  \\begin{gather}\n    \\label{eq:stokes:35}\n    \\P_k^0 \\oplus \\overline{\\P_k},\n  \\end{gather}\n  where $\\P_k^0 = \\P_k \\cap H^1_0(\\widehat T)$. We choose an\n  orthogonal basis for $\\P_k^0$ with respect to the $H^1_0$-inner\n  product $\\scal(p,q) = \\int p'q'\\dx$ by\n  \\begin{gather}\n    \\label{eq:Lobatto-polynomials}\n    \\phi_i(x) = \\int_{-1}^x L_{i}(t)\\dt\\quad i=1,\\dots,k-1,\n  \\end{gather}\n  where $L_i$ is the Legendre polynomial of degree $i$. The two\n  basis functions for $\\overline{\\P_k}$ are chosen such that\n  \\begin{xalignat*}2\n    \\phi_0(-1) &= 1, & \\scal(\\phi_0,\\phi_i) &= 0 \\quad i=1,\\dots,k-1,\\\\\n    \\phi_k(1) &= 1, & \\scal(\\phi_k,\\phi_i) &= 0 \\quad i=1,\\dots,k-1.\n  \\end{xalignat*}\n\\end{Definition}\n\n\\begin{Lemma}{moment-dofs-1d}\n  The degrees of freedom\n  \\begin{gather}\n    \\label{eq:stokes:30}\n    \\begin{aligned}\n    \\nodal_0(\\phi) &= \\phi(-1),&\n    \\nodal_k(\\phi) &= \\phi(1),\\\\\n    \\nodal_i(\\phi) &= \\frac1{\\int {\\phi_i'}^2} \\int_{-1}^{1}\n    \\phi'\\phi_i'\\dx\n    &i&=1,\\dots,k-1,\n    \\end{aligned}\n  \\end{gather}\n  are the dual basis for the basis described in\n  \\slideref{Definition}{moment-basis-1d}.\n\\end{Lemma}\n\n\\begin{proof}\n  The proof is left to the reader.\n\\end{proof}\n\n\\begin{Definition}{qk0}\n  Let $\\widehat T = [-1,1]^d$ be the reference square in $\\R^d$. We\n  define the space\n  \\begin{gather}\n    \\label{eq:stokes:36}\n    \\Q_k^0 = \\Q_k \\cap H^1_0(\\widehat T).\n  \\end{gather}\n  A basis for $\\Q_k^0$ consists of the functions\n  \\begin{gather}\n    \\label{eq:stokes:37}\n    \\phi_{i_1\\cdots i_d}(x) = \\phi_{i_1}(x_1) \\cdots \\phi_{i_d}(x_d),\n  \\end{gather}\n  where $i_j=1,\\dots,k-1$. 
For a tensor product mesh cell $T$, the\n  space $\\Q_k^0(T) = \\Q_k(T) \\cap H^1_0(T)$ is defined through\n  $\\Q_h^0$ by mapping.\n\\end{Definition}\n\n\\begin{Definition}{moment-dofs-qk-2d}\n  The moment degrees of freedom of the $\\Q_k$ element are defined on\n  the reference cell $\\widehat T$ in two space dimensions as\n  \\begin{xalignat*}3\n    \\nodal_{0,i}(u) &= u(x_i)\n    &&&x_i&\\text{ is vertex of } \\widehat T\\\\\n    \\nodal_{1,i,j}(u) &= \\int_{E_i} u \\phi_j\\ds\n    &\\phi_j&\\in\\Q_{k}^0(E_i)&E_i&\\text{ is edge of } \\widehat T\\\\\n    \\nodal_{2,j}(u) &=\\int_{\\widehat T} u \\phi_j \\dx\n    &\\phi_j&\\in\\Q_{k}^0(\\widehat T).\n  \\end{xalignat*}\n\\end{Definition}\n\n\\begin{Definition}{moment-dofs-qk-3d}\n  The moment degrees of freedom of the $\\Q_k$ element are defined on\n  the reference cell $\\widehat T$ in three space dimensions as\n  \\begin{xalignat*}3\n    \\nodal_{0,i}(u) &= u(x_i)\n    &&&x_i&\\text{ is vertex of } \\widehat T\\\\\n    \\nodal_{1,i,j}(u) &= \\int_{E_i} u \\phi_j\\ds\n    &\\phi_j&\\in\\Q_{k}^0(E_i)&E_i&\\text{ is edge of } \\widehat T\\\\\n    \\nodal_{2,i,j}(u) &= \\int_{F_i} u \\phi_j\\ds\n    &\\phi_j&\\in\\Q_{k}^0(F_i)&F_i&\\text{ is face of } \\widehat T\\\\\n    \\nodal_{3,j}(u) &=\\int_{\\widehat T} u \\phi_j \\dx\n    &\\phi_j&\\in\\Q_{k}^0(\\widehat T).\n  \\end{xalignat*}\n\\end{Definition}\n\n\\begin{Example}{qk-pk1}\n  The first two members of the $Q_k-P_{k-1}$ family have the nodal\n  representations\n  \\begin{center}\n    \\begin{tabular}{c@{\\hspace{.2\\textwidth}}c}\n      \\includegraphics[width=.2\\textwidth]{./fig/q2-v.tikz}\n      &\n      \\includegraphics[width=.2\\textwidth]{./fig/q-p1-p.tikz}\n      \\\\[5mm]\n      \\includegraphics[width=.2\\textwidth]{./fig/q3-v.tikz}\n      &\n      \\includegraphics[width=.2\\textwidth]{./fig/q-p2-p.tikz}\n    \\end{tabular}\n  \\end{center}\n\\end{Example}\n\n\\begin{remark}\n  When we map the reference square to a quadrilateral mesh cell, this\n  mapping may be affine for parallelograms or bilinear for general\n  quadrilaterals. At some point, we have proven that the mapped $\\Q_k$\n  space has optimal approximation properties, that is, approximation\n  of order $k$ in $H^1$ and of order $k+1$ in $L^2$. Such a thing has\n  not been proven for a bilinearly mapped $\\P_k$ element. And,\n  unfortunately it is not true. We therefore have to distinguish\n  between a mapped and an unmapped pressure\n  space. 
In~\\cite{ArnoldBoffiFalk02}, it is proven that the mapped\n  polynomial space has worse approximation, in the worst case one\n  order less than the unmapped.\n\\end{remark}\n\n\\subsection{The family of Hood-Taylor elements}\n\n\\begin{Definition}{hood-taylor}\n  The family of Hood-Taylor elements on simplices in dimension $d=2,3$\n  for polynomial degrees $k\\ge 1$ consists of the pairs\n  \\begin{gather}\n    \\label{eq:stokes:38}\n    V_h = H^1_h(\\P_k)^d \\cap V,\n    \\qquad\n    Q_h = H^1_h(\\P_{k-1}) \\cap Q.\n  \\end{gather}\n  On quadrilaterals and hexahedra, it consists of the pairs\n  \\begin{gather}\n    \\label{eq:stokes:39}\n    V_h = H^1_h(\\Q_k)^d \\cap V,\n    \\qquad\n    Q_h = H^1_h(\\Q_{k-1}) \\cap Q.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Example}{hood-taylor-triangle}\n  The first two members of the Hood-Taylor family on triangles have\n  the nodal representations\n  \\begin{center}\n    \\begin{tabular}{cc@{\\hspace{.2\\textwidth}}c}\n      $\\P_2-\\P_1$\n      &\\includegraphics[width=.2\\textwidth]{./fig/p2-v.tikz}\n      &\\includegraphics[width=.2\\textwidth]{./fig/p1-p.tikz}\n      \\\\[5mm]\n      $\\P_3-\\P_2$\n      &\\includegraphics[width=.2\\textwidth]{./fig/p3-v.tikz}\n      &\\includegraphics[width=.2\\textwidth]{./fig/p2-p.tikz}\n    \\end{tabular}    \n  \\end{center}\n\\end{Example}\n\n\\begin{Example}{hood-taylor-quad}\n  The first two members of the Hood-Taylor family on quadrilaterals\n  have the nodal representations\n  \\begin{center}\n    \\begin{tabular}{cc@{\\hspace{.2\\textwidth}}c}\n      $\\Q_2-\\Q_1$\n      &\\includegraphics[width=.2\\textwidth]{./fig/q2-v.tikz}\n      &\\includegraphics[width=.2\\textwidth]{./fig/q1-p.tikz}\n      \\\\[5mm]\n      $\\Q_3-\\Q_2$\n      &\\includegraphics[width=.2\\textwidth]{./fig/q3-v.tikz}\n      &\\includegraphics[width=.2\\textwidth]{./fig/q2-p.tikz}\n    \\end{tabular}    \n  \\end{center}\n\\end{Example}\n\n\\begin{intro}\n  The stable elements of the previous section featured discontinuous\n  pressure spaces. Therefore, it was natural to split the analysis\n  into cell-wise constant pressure and higher order. This is the basic\n  technique behind \\slideref{Lemma}{bubble-discontinuous} and\n  \\slideref{Theorem}{discontinuous-pressure-normal-velocity}. Here,\n  the pressure is continuous, such that a cell-wise analysis is not\n  feasible anymore. The solution is looking at patches, so called\n  macro elements. The analysis is due to~\\cite{Stenberg90} and\n  consists of three parts: first, the covering of the domain with a\n  macro element partitioning, then the local stability on each macro\n  element with respect to an auxiliary norm on the pressure space, and\n  finally the application of an abstract argument known as Verf\u00fcrth's\n  trick~\\cite{Verfuerth84}.\n\\end{intro}\n\n\\begin{Definition}{macro-equivalence}\n  A \\define{macro element} $M\\subset \\T_h$ is a union of cells\n  $\\cell_i\\in\\T_h$ connected by their boundary faces. Given the\n  mappings $\\Phi_\\cell\\colon \\widehat\\cell \\to \\cell$, there is a\n  reference macro element $\\widehat M$ and a mapping\n  $\\Phi_M\\colon \\widehat M\\to M$ such that $\\Phi_M(\\widehat M) =\n  M$. 
We say that $M$ is equivalent to $\\widehat M$.\n\n  We will use the symbol $M$ for the set of cells constituting a macro\n  element as well as for the subset of $\\domain$ covered by their union.\n\\end{Definition}\n\n\\begin{Problem}{reference-macros}\n  Suggest reference macro elements $\\widehat M$ for the following\n  situations:\n  \\begin{enumerate}\n  \\item Two triangles sharing an edge\n  \\item Two quadrilaterals sharing an edge\n  \\item Two hexahedra sharing a face\n  \\item Three quadrialterals at the edge between a coarse cell and a\n    refined cell\n  \\end{enumerate}\n  Based on the mappings $\\Phi_\\cell$ from the reference cell to the\n  actual mesh cells, define a continuously invertible mapping from\n  $\\Psi_M:\\widehat M \\to M$ (it is sufficent to describe the\n  construction without writing a closed formula). Argue, that under the\n  assumption of shape regularity, all macro elements $M$ with the same\n  connectivity between their cells are equivalent to the same\n  reference macro element $\\widehat M$.\n\\end{Problem}\n\n\n\\begin{Definition}{macro-spaces}\n  For a \\putindex{macro element} $M$, we introduce the spaces\n  \\begin{align}\n    \\label{eq:stokes:27}\n    V_M &= \\bigl\\{ u\\in H^1_0(M;\\R^d) \\big\\vert\n            \\;\\exists v_h\\in V_h\\colon u=v_{h|M}\\bigr\\},\n    \\\\\n    \\label{eq:stokes:28}\n    Q_M &= \\bigl\\{ p\\in L^2(M) \\big\\vert\n            \\;\\exists q_h\\in Q_h\\colon p=q_{h|M}\\bigr\\},\n  \\end{align}\n  the kernel of the discrete, local gradient operator\n  \\begin{gather}\n    \\label{eq:stokes:29}\n    \\ker{B^T_M} = \\bigl\\{q\\in Q_M \\big\\vert\n    \\;\\forall v\\in V_M\\colon \\form(\\div v,q) = 0\\bigr\\}.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{Definition}{verfuerth-norm}\n  Let $\\faces_h^i$ be the set of interior faces of the mesh $\\mesh_h$.\n  We introduce the alternative norm on $Q_h$ defined by\n  \\begin{gather}\n    \\label{eq:stokes:40}\n    \\norm{p}_{h}^2 =\n    \\sum_{\\cell\\in\\mesh_h} h^2_\\cell \\norm{\\nabla p}_\\cell^2\n    +\n    \\sum_{\\face\\in\\faces_h^i} \\norm{\\jmp{p}}_\\face^2,\n  \\end{gather}\n  where for a face $\\face$ between two cells $\\cell_1$ and $\\cell_2$\n  we define the \\define{jump operator}\n  \\begin{gather}\n    \\jmp{p} = p_1-p_2.\n  \\end{gather}\n\\end{Definition}\n\n\\begin{remark}\n  For continuous pressure spaces, the norm $\\norm{p}_h$ is simply the\n  norm of the gradient taken in the interior of all cells.\n\\end{remark}\n\n\\begin{Definition}{macro-seminorm}\n  On each macro element $M$, let $\\faces_M^i$ be the set of interior\n  faces of $M$. We define the seminorm\n  \\begin{gather}\n    \\label{eq:stokes:41}\n    \\snorm{p}_M\n    = \\sum_{\\cell\\in M} h^2_\\cell \\norm{\\nabla p}_\\cell^2\n    + \\sum_{\\face\\in\\faces_M^i} \\norm{\\jmp{p}}_\\face^2.\n  \\end{gather}\n  It is not a norm because $Q_M$ contains constant functions.\n\\end{Definition}\n\n\\begin{Lemma}{macro1}\n  Assume that there is a covering of $\\domain$ by macro elements such\n  that every interior face $\\face \\in \\faces_h^i$ is an interior face\n  of one macro element and each cell $\\cell\\in\\mesh_h$ is an element\n  of not more than $n_O$ macro elements. 
Then, the local stability\n  estimate\n  \\begin{gather}\n    \\label{eq:stokes:42}\n    \\sup_{v\\in V_M} \\frac{\\form(\\div v,q)_M}{\\snorm{v}_{1,M}}\n    \\ge \\widehat \\beta \\snorm{q}_M\n    \\qquad\\forall q\\in Q_M,\n  \\end{gather}\n  implies the stability estimate\n  \\begin{gather}\n    \\label{eq:stokes:43}\n    \\sup_{v\\in V_h} \\frac{\\form(\\div v,q)}{\\norm{v}_{1}}\n    \\ge \\beta \\norm{q}_h\n    \\qquad\\forall q\\in Q_h,\n  \\end{gather}\n  with a constant $\\beta$ independent of the mesh size $h$.\n\\end{Lemma}\n\n\\begin{proof}\n  For arbitrarily chosen $q\\in Q_h$, choose for each $M$ according to\n  assumption~\\eqref{eq:stokes:42} functions $v_M\\in V_M$ with\n  $\\norm{v_M}_1 \\le \\snorm{q}_M$\n  such that\n  \\begin{gather*}\n    \\form(\\div v_M,q)\n    = \\form(\\div v_M,q)_M\n    \\ge \\widehat \\beta \\snorm{q}_M^2.\n  \\end{gather*}\n  Define $v = \\sum v_M$. Since every face is an interior face of a\n  macro element, every cell is element of at least one macro\n  element. Hence,\n  \\begin{gather*}\n    \\form(\\div v, q) = \\sum_M \\form(\\div v_M,q)\n    \\ge \\widehat \\beta \\sum_M \\snorm{q}_M^2\n    \\ge \\widehat \\beta \\norm{q}_h^2.\n  \\end{gather*}\n  Furthermore, there holds by Poincar\u00e9 inequality\n  \\begin{gather*}\n    c_S \\norm{v}_1 \\le \\snorm{v}_1\n    \\le \\sum_M \\snorm{v_M}_{1} \\le \\sum_M \\snorm{q}_M\n    \\le n_O \\norm{q}_h.\n  \\end{gather*}\n  Thus, the estimate holds with\n  \\begin{gather*}\n    \\beta = \\frac{c_S\\widehat\\beta}{n_O}.\n  \\end{gather*}\n\\end{proof}\n\n\\begin{Lemma}{macro-local}\n  Let $\\{M\\}$ with $M\\subset \\mesh_h$ be a set of macro elements\n  equivalent to the same reference macro element $\\widehat M$. Let the\n  family $\\{\\mesh_h\\}$ be shape regular and assume that on each macro\n  $M$ the set $\\ker{B^T_{M}}$ only contains the constant functions. Then,\n  there is a constant $\\beta_M>0$ independent of $h$ such that for all\n  $M$ there holds\n  \\begin{gather}\n    \\label{eq:stokes:31}\n    \\inf_{p\\in Q_M} \\sup_{v\\in V_M}\n    \\frac{\\form(\\div v,q)_M}{\\snorm{v}_{1,M}\\snorm{q}_{M}}\n    \\ge \\beta_{\\widehat M}.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{Problem}{macro-local}\n  Prove \\slideref{Lemma}{macro-local}. Furthermore, prove that under\n  the assumption that there is a finite set of reference macro\n  elements $\\widehat M_i$, such that all macro elements in a family\n  are equivalent to one of them, the estimate holds with a uniform\n  constant $\\beta>0$.\n\\begin{solution}\n  For each $M$, the existence of such a constant $\\beta_{M}$ is\n  equivalent to the fact that the smallest nonzero singular value is\n  strictly greater than zero in finite dimensional\n  spaces. Furthermore, the kernel of the seminorm and of $B_M^T$\n  coincide.\n\n  Next, we observe that $\\nabla \\Phi_{M}$ is bounded by the\n  $\\nabla\\Phi_{\\cell_i}$, which in turn is uniformly bounded by the\n  shape regularity. 
Therefore, by a cell-wise scaling argument and the\n  fact that $\\widehat M$ has only finitely many cells we have\n  \\begin{gather*}\n    \\beta_{\\widehat M} = \\min_M \\beta_M > 0.\n  \\end{gather*}\n\\end{solution}\n\\end{Problem}\n\n\\begin{remark}\n  Depending on the technique of proof being used, we also may decide\n  to impose~\\eqref{eq:stokes:31} directly for each macro element.\n\\end{remark}\n\n\\begin{remark}\n  So far, we have proven that under the assumption that the kernel of\n  the macro problems only contains the constant functions, we have an\n  inf-sup condition with the pressure norm $\\norm{.}_h$. It remains to\n  use Verf\u00fcrth's trick to prove the condition for $\\norm{.}_0$.\n\\end{remark}\n\n\\begin{Lemma}{verfuerth1}\n  Assume that there is an $H^1$-stable interpolation operator $I_h :\n  V\\to V_h$\n  according to \\slideref{Assumption}{locally-quasi-uniform}. Then,\n  there are positive constants $c_1$ and $c_2$ independent of $h$ such\n  that for any $q_h\\in Q_h$ there holds\n  \\begin{gather}\n    \\label{eq:stokes:44}\n    \\sup_{v_h\\in V_h} \\frac{\\form(\\div v_h, q_h)}{\\norm{v_h}_V}\n    \\ge c_1 \\norm{q_h}_Q - c_2 \\norm{q_h}_h.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  We begin by using the continuous inf-sup condition to deduce that\n  for arbitrary $q_h\\in Q_h$ there is $v\\in V$ with\n  $\\norm{v}_V \\le \\norm{q_h}_Q$ such that\n  \\begin{gather*}\n    \\form(\\div v,q_h) \\ge c_1 \\norm{q_h}_Q^2.\n  \\end{gather*}\n  Now, let $v_h = I_h v$. Hence,\n  \\begin{align*}\n    \\form(\\div v_h, q_h)\n    &= \\form(\\div v_h - \\div v,q_h) + \\form(\\div v,q_h)\\\\\n    &\\ge \\sum_{\\cell\\in\\mesh_h} \\form(v-v_h,\\nabla q_h)_{\\cell}\n      + \\sum_{\\face\\in\\faces_h^i}\n      \\form([v_h-v]\\cdot\\n_1,\\jmp{q_h})_\\face\n      + c_1 \\norm{q_h}_Q\\\\\n    &\\ge -\\left[\\sum_{\\cell\\in\\mesh_h} h_\\cell^{-2} \\norm{v-v_h}_\\cell^2\n      +\\sum_{\\face\\in\\faces_h^i} h_\\face^{-1}\n      \\norm{v-v_h}_\\face\\right]\n      \\norm{q_h}_h + c_1 \\norm{q_h}_Q\\\\\n    &\\ge -c \\snorm{v}_1\\norm{q_h}_h+ c_1 \\norm{q_h}_Q\\\\\n    &\\ge \\left[c_1 \\norm{q_h}_Q - c_2 \\norm{q_h}_h\\right]\n      \\norm{q_h}_Q.\n  \\end{align*}\n  Furthermore, we have by the interpolation estimate\n  \\begin{gather*}\n    \\snorm{v_h} \\le c \\snorm{v} \\le c \\norm{q_h}_Q,\n  \\end{gather*}\n  which proves the result by dividing $\\form(\\div v_h,q_h)$ by the\n  norm of $v_h$.\n\\end{proof}\n\n\\begin{Lemma}{verfuerth2}\n  Let the assumptions of \\slideref{Lemma}{verfuerth1} hold, and assume\n  that there is a constant $\\tilde \\beta$ such that\n  \\begin{gather*}\n    \\sup_{v\\in V_h} \\frac{\\form(\\div v,q)}{\\norm{v}_{1}}\n    \\ge \\tilde \\beta \\norm{q}_h\n    \\qquad\\forall q\\in Q_h.\n  \\end{gather*}\n  Then, the inf-sup condition\n  \\begin{gather*}\n    \\sup_{v\\in V_h} \\frac{\\form(\\div v,q)}{\\norm{v}_{1}}\n    \\ge \\beta \\norm{q}_Q\n    \\qquad\\forall q\\in Q_h,\n  \\end{gather*}\n  holds with $\\beta$ determined by $c_1$, $c_2$ and $\\tilde \\beta$.\n\\end{Lemma}\n\n\\begin{proof}\n  For any $q_h\\in Q_h$ and $\\theta\\in[0,1]$ we have\n  \\begin{align*}\n    \\sup_{v_h\\in V_h} \\frac{\\form(\\div v_h,q_h)}{\\norm{v_h}_V}\n    &=\n      \\theta \\sup_{v_h\\in V_h} \\frac{\\form(\\div v_h,q_h)}{\\norm{v_h}_V}\n    + (1-\\theta)\\sup_{v_h\\in V_h} \\frac{\\form(\\div\n      v_h,q_h)}{\\norm{v_h}_V}\n    \\\\\n    &\\ge \\theta c_1 \\norm{q_h}_Q - c_2 \\theta \\norm{q_h}_h\n      + (1-\\theta) \\tilde\\beta \\norm{q_h}_h\n    
\\\\\n    & \\ge \\frac{c_1 \\tilde\\beta}{c_2+\\tilde\\beta} \\norm{q_h}_Q,\n  \\end{align*}\n  by choosing $\\theta = \\tilde\\beta/(c_2+\\tilde\\beta)$.\n\\end{proof}\n\n\\begin{Theorem}{hood-taylor-stability}\n  The Hood-Taylor families are inf-sup stable. Thus, for solutions\n  $u\\in H^{k+1}(\\domain;\\R^d)\\cap V$ and $q\\in H^k(\\domain)\\cap Q$,\n  there holds\n  \\begin{gather}\n    \\norm{u-u_h}_1 + \\norm{p-p_h}_0\n    \\le c h^k\\bigl(\\snorm{u}_{k+1} + \\snorm{p}_k\\bigr).\n  \\end{gather}\n\\end{Theorem}\n\n\\begin{proof}\n  Summarizing all results of this section, the only thing that is left\n  is defining a covering of $\\mesh_h$ with macro elements, such that\n  $\\ker{B_M^T}$ contains only the constant functions. We do this\n  at the example of the lowest order elements on quadrilaterals and\n  triangles in the lemmas below. Both kinds of patches can be used to\n  cover the whole mesh, such that we can use \\slideref{Lemma}{macro1}\n  and \\slideref{Lemma}{verfuerth2} to prove the inf-sup condition.\n\n  A general proof for higher order elements can be found\n  in~\\cite{StenbergSuri96}.\n\\end{proof}\n\n\\begin{Lemma}{patch-test-triangle}\n  For the $P_2-P_1$ element choose the patch $M$ as in\n  \\begin{center}\n    \\includegraphics[width=.8\\textwidth]{./fig/triangle-patch.tikz}\n  \\end{center}\n  Then,\n  \\begin{gather}\n    \\label{eq:stokes:32}\n    \\ker{B_M^T} = \\bigl\\{ q\\in Q_M \\big\\vert\n    \\;\\forall v\\in V_M\\colon b(v,q) = 0 \\bigr\\}\n    = \\P_0.\n  \\end{gather}\n\\end{Lemma}\n\n\\begin{proof}\n  First, we observe that $\\nabla q_h$ is constant on each cell and\n  that the tangential derivatives $t_i\\cdot\\nabla q_h$ coincide for\n  both adjacent cells due to the continuity of $q_h$. Now, we will\n  derive conditions for $\\ker{B_M^T}$ by choosing special test\n  functions in $V_M$ defined through interpolation in the points $x_1$\n  and $x_2$.\n\n  Furthermore, note that the shape function $\\phi$ in $\\P_2$\n  associated with the center of an edge is of the form\n  $\\lambda_1\\lambda_2$ using the barycentric coordinates associated to\n  the vertices at the ends of this edge. This function is positive\n  everywhere inside the triangle $\\cell_i$. Hence, there are positive\n  numbers\n  \\begin{gather*}\n    w_1 = \\int_{\\cell_1} \\phi\\dx,\n    \\quad\n    w_1 = \\int_{\\cell_2} \\phi\\dx,\n  \\end{gather*}\n  Now, let $u(x_1)\\cdot t_1 = 1$, $u(x_1)\\cdot\\n_1 = 0$, and $u(x_2)\n  = 0$. Then,\n  \\begin{gather*}\n    \\form(\\div u,q_h)_M = -\\form(u,\\nabla q_h)_M\n    = -(w_1+w_2) \\nabla q_h\\cdot t_1.\n  \\end{gather*}\n  Hence, $q_h\\in \\ker{B_M^T}$ implies $\\nabla q_h\\cdot t_1 = 0$ in\n  $\\cell_1$ and $\\cell_2$.\n\n  Exchanging $x_2$ for $x_1$, there holds  $\\nabla q_h\\cdot t_2 = 0$ in\n  $\\cell_2$ and $\\cell_3$. Since $t_1$ and $t_2$ are not collinear, we\n  obtain\n  \\begin{gather}\n    \\label{eq:stokes:34}\n    \\nabla q_h|_{\\cell_1} = 0.\n  \\end{gather}\n\n  Now we choose the test function $u(x_1)\\cdot \\n_1 = 1$,\n  $u(x_1)\\cdot t_1 = 0$, and $u(x_2) = 0$. 
We get\n  \\begin{gather*}\n    0 = \\form(\\div u,q_h) = -w_1 \\nabla q_h|_{\\cell_1}\\cdot\\n_1\n    - w_2 \\nabla q_h|_{\\cell_2}\\cdot\\n_1.\n  \\end{gather*}\n  Due to~\\eqref{eq:stokes:34}, the first term vanishes and together\n  with the tangential condition before, we obtain\n  \\begin{gather*}\n    \\nabla q_h|_{\\cell_2} = 0.\n  \\end{gather*}\n  Exchanging again $x_2$ for $x_1$, we have the same for $\\cell_3$,\n  which proves the result.\n\\end{proof}\n\n\\begin{Lemma}{patch-test-quad}\n  For the $Q_2-Q_1$ element choose the patch $\\widehat M$ as in\n  \\begin{center}\n    \\includegraphics[width=.5\\textwidth]{./fig/rectangle-patch.tikz}\n  \\end{center}\n  Then,\n  \\begin{gather*}\n    \\ker{B_M^T} = \\bigl\\{ q\\in Q_M \\big\\vert\n    \\;\\forall v\\in V_M\\colon b(v,q) = 0 \\bigr\\}\n    = \\P_0.\n  \\end{gather*}\n\\end{Lemma}\n\n\\begin{proof}\n  Choose macro elements consisting of two quadrilateral charing an\n  edge. Then, the reference macro element $\\widehat M$ consists of the\n  cells $[-1,0]\\times [0,1]$ and $[0,1]^2$. We note that the velocity\n  degrees of freedom are in $x_1$, $x_2$, and $x_3$, while those for\n  the pressure are in $a$ to $f$.\n\n  We do the analysis on the reference patch first. There, we have\n  $u\\in \\Q_2^2$ and $\\nabla q\\in \\Q_1^2$. Therefore, $u\\cdot\\nabla q\n  \\in \\Q_3$ and the Simpson rule is exact on each cell. Hence\n  \\begin{gather*}\n    -\\form(\\div u,q)_{\\widehat M} = \\form(u,\\nabla q)_{\\widehat M}\n    = \\frac49 u(x_1) \\nabla q(x_1)\n    + \\frac49 u(x_2) \\nabla q(x_2)\n    + \\frac49 u(x_3) \\nabla q(x_3).\n  \\end{gather*}\n  We first test with velocites such that $u(x_3)=0$ and one of\n  $u_{1/2}(x_{1/2})$ is equal to one. Take for instance $u_1(x_1) =\n  1$. Then, the equation above implies for $q\\in \\ker{B^T_M}$ that\n  $\\d_1 q(x_1) = 0$. Traversing through all four coimbinations, we\n  obtain\n  \\begin{gather*}\n    \\nabla q(x_1) = \\nabla q(x_2) = 0.\n  \\end{gather*}\n  On the cell $\\cell_2$, we have by bilinear interpolation\n  \\begin{gather*}\n    q(x,y) = q(b)(1-x)(1-y) + q(x)x(1-y) + q(e)(1-x)y + q(f)xy,\n  \\end{gather*}\n  and a similar representation on $\\cell_1$. Thus,\n  the conditions above translates to the system\n  \\begin{align*}\n    q(c)+q(f) &= q(b)+q(e) \\\\\n    q(b)+q(c) &= q(e)+q(f) \\\\\n    q(b)+q(e) &= q(a)+q(d) \\\\\n    q(d)+q(e) &= q(a)+q(b),\n  \\end{align*}\n  which in tern has solutions given for any $\\alpha,\\beta\\in\\R$ by\n  \\begin{gather*}\n    \\arraycolsep2pt\n    \\begin{matrix}\n      q(a) &=& q(c) &=& q(e) &=& \\alpha\\\\\n      q(b) &=& q(d) &=& q(f) &=& \\beta\n    \\end{matrix}\n  \\end{gather*}\n  The kernel of $B^T_M$ is a subspace of the space generated by\n  $\\alpha$ and $\\beta$. Now we choose $u_2(x_3)=1$ and all other\n  degrees of freedom zero. 
Again by simpson rule, we have\n  \\begin{gather*}\n    0 = -\\form(\\div u,q) = \\frac49 u(x_3)\\cdot\\nabla q(x_3)\n    = \\frac29 \\bigl(q(e)-q(b)\\bigr).\n  \\end{gather*}\n  Hence, $\\alpha = q(e) = q(b) = \\beta$ and\n  \\begin{gather*}\n    q\\in \\ker{B^T_M} \\quad\\Rightarrow\\quad\n    q\\in \\P_0.\n  \\end{gather*}\n  For a patch $M$ equivalent to $\\widehat M$, we observe that\n  \\begin{gather*}\n    -\\form(\\div u,q)_M = \\form(u,\\nabla q)_M\n    = \\sum_{i=1,2}\n    \\int_{\\widehat{\\cell_i}} \\hat u^T (\\nabla\\Phi)^{-T} \\nabla\n    \\widehat q \\;\\det(\\nabla\\Phi) \\,d\\widehat x.\n  \\end{gather*}\n  Cramer's rule implies\n  \\begin{gather*}\n    (\\nabla\\Phi_i)^{-T} \\det(\\nabla\\Phi) =\n    \\begin{pmatrix}\n      \\d_2\\Phi_2 & -\\d_1\\Phi_2\\\\\n      -\\d_2\\Phi_1 & \\d_1\\Phi_1,\n    \\end{pmatrix}\n  \\end{gather*}\n  where the mapping $\\Phi$ is bilinear on each cell. Hence, on\n  $\\widehat T$\n  \\begin{gather*}\n    (\\nabla\\Phi_i)^{-T} \\det(\\nabla\\Phi) \\nabla \\widehat q \\in Q_1^2,\n  \\end{gather*}\n  such that the integrand above is bicubic and the Simpson rule\n  argument still applies.\n\\end{proof}\n\n\\subsection{Almost incompressible elasticity}\n\n\\begin{intro}\n  If we discretize the mixed formulation of almost incompressible\n  elasticity with any of the stable Stokes pairs of the preceding\n  sections, we can apply\n  \\slideref{Corollary}{stabilized-mixed-convergence} to obtain optimal\n  error estimates. Nevertheless, our problem with almost\n  incompressible elasticity was not the approximation of the pressure\n  (which was introduced artificially anyway), but locking. So, how\n  does the choice of an inf-sup stable pair avoid locking?\n  \n  Locking, in the terminology developed in the previous chapter, can\n  be described as the fact that the kernel of the discrete divergence\n  operator is too small, in the example presented even\n  \\begin{gather*}\n    \\ker B_h = \\{0\\}.\n  \\end{gather*}\n  Note though, that there might be more subtle locking effects, where\n  the approximation is reduced but not destroyed.\n\n  In view of \\slideref{Theorem}{galerkin-mixed-u-kerbh}, locking means\n  that $V_h^g$ is too small or even the zero space, and therefore the\n  quasi best-approximation result of this theorem is useless, since\n  \\begin{gather*}\n    \\inf_{w_h\\in V_h^g} \\norm{u-w_h}_V \\not\\to 0\n    \\quad\\text{as } h \\to 0.\n  \\end{gather*}\n  \n  The additional assumption of the inf-sup condition\n  in\\slideref{Theorem}{galerkin-mixed-p} on the other hand guarantees\n  that an approximation of the kernel is possible, and thus, locking\n  becomes impossible.\n\\end{intro}\n\n\\begin{Lemma}{reduced integration}\n  Let $V_h\\times Q_h$ be a stable pair for the Stokes problem\n  admitting a Korn inequality. Let furthermore $\\Pi_Q$ be the\n  $L^2$-projection onto $Q_h$. 
Then, the solution $u_h\\in V_h$ to the\n  weak formulation\n  \\begin{gather}\n    \\label{eq:stokes:33}\n    2\\mu \\form(\\strain{u_h},\\strain v) + \\lambda (\\Pi_Q \\div\n    u_h,\\Pi_q\\div v) = (f,v)\n    \\quad\\forall v\\in V_h,\n  \\end{gather}\n  admits the quasi-optimality estimate\n  \\begin{gather}\n    \\norm{u-u_h}_1 \\le c \\sup_{v_h\\in V_h}\\norm{u-v_h}_1,\n  \\end{gather}\n  with a constant $c$ independent of the quotient $\\lambda/\\mu$.\n\\end{Lemma}\n\n\\begin{proof}\n  We introduce the auxiliary variable $p_h\\in Q_h$ by the condition\n  \\begin{gather*}\n    \\int_\\domain \\div u_h q_h \\dx = \\int_\\domain p_h q_h\\dx\n    \\quad\\forall q_h\\in Q_h.\n  \\end{gather*}\n  By definition of the $L^2$-projection, we have\n  \\begin{gather*}\n    \\form(\\div u_h, q_h) = \\form(\\Pi_Q\\div u_h,q_h) = \\form(p_h,q_h).\n  \\end{gather*}\n  Since $\\Pi_Q\\div u_h$ and $p_h$ are in the same space, this implies\n  that $p_h = \\Pi_Q\\div u_h$ pointwise. In addition, we observe that\n  \\begin{gather*}\n    \\form (p_h, \\Pi_Q\\div v) = \\form (p_h, \\div v).\n  \\end{gather*}\n  Hence, the formulation~\\eqref{eq:stokes:33} is equivalent to\n  \\begin{gather*}\n    \\arraycolsep2pt\n    \\begin{matrix}\n      2\\mu\\form(\\strain{u_h},\\strain v) &+& \\form(p_h,\\div v)\n      &=& \\form(f,v) &\\qquad&\\forall v\\in V_h\\\\\n      \\form(\\div u_h, q) &-&\\tfrac1\\lambda\\form(p_h,q)&=&0&&\\forall q\\in Q_h.\n    \\end{matrix}\n  \\end{gather*}\n  This is the Stokes problem augmented by a positive definite bilinear\n  form $c(.,.)$, such that\n  \\slideref{Theorem}{mixed-stabilized-well-posed} and\n  \\slideref{Corollary}{stabilized-mixed-convergence} apply.\n\\end{proof}\n\n\\begin{remark}\n  The technique in the previous lemma is often called \\define{reduced\n    integration}. This refers to the fact, that we can replace the\n  explicit projection $\\Pi_Q$ by using a quadrature formula which is\n  exact on $Q_h$, but zero for all higher order polynomials occuring\n  in $\\div u_h$.\n\\end{remark}\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End:\n", "meta": {"hexsha": "14d3e5bbbf3103cf86ca1fe5c1dd951895bac1df", "size": 77193, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "mixed/stokes.tex", "max_stars_repo_name": "ahumanita/notes", "max_stars_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mixed/stokes.tex", "max_issues_repo_name": "ahumanita/notes", "max_issues_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mixed/stokes.tex", "max_forks_repo_name": "ahumanita/notes", "max_forks_repo_head_hexsha": "73f23770e2b02f1b4a67987744ceffbd9ce797d7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.6000974184, "max_line_length": 231, "alphanum_fraction": 0.6638684855, "num_tokens": 26937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494421679929, "lm_q2_score": 0.8175744761936437, "lm_q1q2_score": 0.5608147558758189}}
{"text": "\\documentclass[main.tex]{subfiles}\n\n\\begin{document}\nIn this chapter we will derive the \\emph{Linear Power Flow} (LPF) transformation: a linear map from the vector of power injections at the buses to the vector of currents flowing through the lines of the network.\n\nIt is a \\emph{right inverse} of $\\FRT$, which means that applying $\\LPF$ to a power injection gives a flow that would induce that injection. There are many possible candidates for a right inverse. For example, one way to construct one is to fix a minimum spanning tree; when we set line currents outside of the tree to zero, all line currents inside the tree are uniquely determined.\n\nYet, only one right inverse is \\emph{physically correct}: the $\\LPF$. This function can be derived explicitly by constructing an electric circuit that represents the whole transmission network and everything connected to it, and applying Ohm's Law and Kirchoff's Laws to relate line currents to power injections. We linearise the resulting \\emph{node flow equations} to find the LPF matrix (page \\pageref{eq:LPF}).\n\nUsing this linear map, we can transform the normal distribution of stochastic injections to a Gaussian distribution of line flows. Using the results of Chapter \\ref{chap:probtheory}, we can estimate the overload probability of each line, resulting in a ranking of most vulnerable lines. Additionally, for each such line, we compute the most probable injection to cause the failure, and simulate the subsequent \\emph{cascading failures}.\n\n\\section{The model}\nThe $\\LPF$ is essentially the \\emph{high-level model} used in the final chapters of this thesis (\\ie the transmission network is modelled as a linear transformation). In this chapter, we derive a closed-form expression for the $\\LPF$ from a lower-level, \\emph{electrical} model. This derivation is based on the basic principles of \\emph{AC circuit theory}, which extends the more familiar concepts of DC circuits (consisting of constant voltage and current sources and resistors) to circuits with time-varying (often sinusoidal) voltage and current sources. (This is where \\emph{inductors} and \\emph{capacitors} come into play.)\n\n\\emph{Electric Power Systems} by \\cite{VonMeier2006} provides a very readable introduction into these subjects. Chapter 1 is an introduction to the physics of \\emph{electricity}; Chapter 2 introduces \\emph{DC circuit theory}; Chapter 3 concerns \\emph{AC circuit theory}, specifically in the context of AC power transmission. \n\nFor a fascinating, rigorous mathematical introduction into the subject, see \\emph{Mathematical Foundations of Network Analysis} by \\cite{Slepian1968} or \\emph{Circuits, Matrices and Linear Vector Spaces} by \\cite{huelsman1963circuits}.\n\n\\subsection{Electrical model}\nWe make a distinction between the \\emph{structure} and the \\emph{state} of the network. The structure is a directed graph, where each line is given an \\emph{admittance}. The state collects the (real and reactive) power injected at each bus, the (complex) voltage of each bus, and the (complex) current flowing through each line.\n\nThe use of complex-valued voltage and current (and therefore power) is essential when analysing AC circuits, even though we are only interested in \\emph{real} power. 
For example, we will find that the amount of real power transmitted over a line is approximately inversely proportional to the inductance of the line (the \\emph{imaginary} equivalent of resistance), and approximately proportional to the difference in phase angles at its ends (the \\emph{complex argument} of voltage).\n% Feynman: What I cannot create, I do not understand.\n%This approach attempts to combine mathematical and physical theory, while maintaining a clear distinction between the two.\n\n\\subsection{Time invariance}\nThe grid structure remains unchanged during normal operation, while the grid state is continually changing over time. For example, an important aspect of grid operation is \\define{load profiling}: examining and predicting the total load connected to a node, as a function of time. As a result of changing loads, the flow of power in a grid is constantly changing.\n\nOne example of a change in grid structure is a \\emph{line failure}, which can be modelled as the removal of an edge from the graph. In some cases, the removal of an edge from the graph results in an unconnected graph (\\ie there exist two nodes with no sequence of lines connecting them). This scenario is called \\emph{power islanding}\\index{power island}.\nMost transmission networks are designed in such a way that no single (or double) line failure can cause power islanding or a blackout, by increasing the \\emph{edge-connectivity} of the graph.\n\nIn the case of a \\emph{line overload}, however, a line failure is caused by an exceptionally high power flow, as a result of high supply or demand.\\footnote{These high power injections are not necessarily located at the two endpoints of an overloaded line; it could also be a grid-wide pattern of power injections, all adding up to a high power flow on that line. We will study the \\emph{most likely power injection} in Section \\ref{sec:mostlikelypowerinjection}.}\nThe failure of an overloaded line will cause a redistribution of power flow, since the power flowing through the failed line now needs to 'find another path' between the nodes. In a highly stressed network, this redistribution can cause a second failure, which can then cause a third failure, and so on, which can eventually cause a blackout. We will study these \\define{cascading failures} in Section \\ref{sec:flowredistribution}.\n\n\\section{Grid structure}\nA transmission grid is modelled as a directed graph $(\\mathcal{N},\\mathcal{L})$.%\\todo{Require the graph to be connected? How about power islands?}\nAs vertices we take the \\emph{nodes} of the network, which are those points where transmission lines connect to a generator, load, or to each other. Nodes are electrically distinct, in the sense that there is some non-zero impedance between them, allowing them to sustain a potential difference. In a network of $n$ nodes, they are represented by the natural numbers $1,\\dots,n$, \\ie $\\mathcal{N}=\\range{n}$.\n\nA pair of two distinct nodes $(i,j)$ is contained in our set of lines $\\mathcal{L}$ if there is a transmission line connecting the nodes. The choice of line orientation can be arbitrary, as long as we have $(i,j) \\in \\mathcal{L} \\Rightarrow (j,i) \\notin \\mathcal{L}$ for all lines $(i,j) \\in \\mathcal{L}$. This transmission line has non-zero impedance. (Otherwise $i$ and $j$ would be the same node.) In a network of $m$ lines, lines are labelled $\\mathcal{L}_1, \\dots, \\mathcal{L}_m$. 
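As a concrete illustration of this bookkeeping, the following minimal Python sketch (illustrative only, not part of the thesis; all names and values are hypothetical) builds the incidence matrix $\mat{K}$ and admittance vector $\mat{\eta}$ of Definition \ref{def:gridstructure} below for a small example network:
\begin{verbatim}
import numpy as np

# Toy (4,4)-grid structure: nodes 1..4 and directed lines (tail, head).
# The orientation of each line is an arbitrary bookkeeping choice.
lines = [(1, 2), (2, 3), (3, 4), (4, 1)]
n, m = 4, len(lines)

# Vertex-edge incidence matrix K: column k has +1 at the tail of line k
# and -1 at its head; all other entries are 0.
K = np.zeros((n, m))
for k, (i, j) in enumerate(lines):
    K[i - 1, k] = 1.0
    K[j - 1, k] = -1.0

# Line admittances; the purely imaginary values anticipate the DC
# approximation of a later section (eta = -1j * b with b > 0).
eta = -1j * np.array([5.0, 4.0, 5.0, 3.0])
\end{verbatim}
Later sketches in this chapter reuse \texttt{K}, \texttt{eta} and \texttt{lines}.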

In the literature on the subject, buses\footnote{`bus' is short for `busbar': the Latin `omnibus' in conjunction with `bar': some high-voltage cables are connected by welding them to a heavy metal bar.} are also called \emph{vertices} or \emph{nodes}, and lines can be called \emph{edges}, \emph{wires}, \emph{cables} or \emph{circuits}.

We model transmission lines as time-invariant impedances, which are assumed to be constant under any electric potential and current, allowing us to apply Ohm's Law. Instead of the impedance $\phym{Z}$ of a line, we use its \define{admittance} $\phym{Y=1/Z} \in \mathbb{C}$,\footnote{Written in Cartesian form, $\phym{Y=G}+i\phym{B}$, where $\phym{G}$ is the conductance, and $\phym{B}$ the susceptance of a line. Both have unit siemens (S), or mho (the reverse of `ohm'), defined as $1 \,\si{\siemens}=1\,\si{\ohm}^{-1}$. }
 and Ohm's Law becomes:
\begin{align*}
    \mathsf{I=VY}.
\end{align*}
This allows us to define the admittance between two unconnected nodes as $0$. (\ie no current is induced by a potential difference.) We define $\mat{\eta} \in \mathbb{C}^m$ as the \emph{admittance vector}\index{admittance!vector}, where $\eta_l$ is the admittance of $\mathcal{L}_l$, for each $l \in \range{m}$. 

\begin{definition}\label{def:gridstructure}
To summarise, an \emph{$(n,m)$-grid structure}\index{grid!structure} is defined as a tuple $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$, where:
\begin{itemize}
    \item $n=\# \mathcal{N}$ is the number of nodes;
    \item $m=\# \mathcal{L}$ is the number of lines;
    \item $(\mathcal{N},\mathcal{L})$ is a connected directed graph with $\mathcal{N}=\range{n}$;
    \item $\mat{K} \in \mathbb{R}^{n \times m}$ is the vertex-edge incidence matrix of $(\mathcal{N},\mathcal{L})$;
    \item $\mat{\eta} \in \mathbb{C}^m$ is the line admittance vector.
\end{itemize}
\end{definition}

\section{Power grid state}
The \emph{state} of the network describes how the transmission network is being used (the (real and reactive) power injected at each node, and the voltage magnitudes) and how the electric circuit responds (voltage angles and (complex) line currents). More precisely, we use three physical quantities from AC circuit analysis to describe the grid state:
\begin{definition}\label{def:gridstate}
A \emph{grid state}\index{grid!state} of an $(n,m)$-grid structure is defined as a tuple $(\mat{S}, \mat{V}, \mat{I})$, where
\begin{itemize}
    \item $\mat{S} \in \mathbb{C}^n$ is the \emph{complex power injection vector};
    \item $\mat{V} \in \mathbb{C}^n$ is the \emph{bus voltage vector};
    \item $\mat{I} \in \mathbb{C}^m$ is the \emph{line current vector}. (Currents are directed along digraph edges.)
\end{itemize}
\end{definition}

\section{State validity}
Given an $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$, only some states are physically possible. A state that satisfies Kirchhoff's Laws and Ohm's Law will be called \emph{valid}. Of course, since we are studying a real-world system, we are mainly interested in states that are valid, or at least close to being valid (in the sense of \emph{DC-valid}, see Section \ref{DCapproximation}).

\subsection{Circuit representation}
An $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ represents an electrical circuit, consisting of impedances and AC sources.
A \\emph{valid} grid state $(\\mat{S}, \\mat{V}, \\mat{I})$ corresponds to a physical state of the circuit.\nFrom the mathematical structure, we will construct the corresponding electric circuit in two layers, as follows:\n\nIn the first layer, each node $i$ in $\\mathcal{N}$ becomes a node in the circuit. The value of $\\mel{V}_i$ is the electric potential of that node, relative to ground.\n\nA line $\\mathcal{L}_k=(i,j)$ is modelled as an impedance element\\footnote{complex-valued resistor} (\\inlineres) with admittance $\\mel{\\eta}_k$ between the two nodes $i$ and $j$. The value of $\\mel{I}_k$ is the current flowing through the impedance from $i$ to $j$.\n\nSee Figure \\ref{fig:KVLcircuit} for the first constructed layer. This is not the final circuit: we have not yet added generators and loads to the circuit! Also, the transmission lines have no return wire. This means that there is no closed loop between two nodes, and therefore no energy can be transmitted.\n\nTo construct the second layer, we add a new component to each node $i \\in \\mathcal{N}$, which can be seen as the collection of AC sources (generators) and impedances (loads), connected in parallel between the node and ground. The exact way that loads and generators are connected to the node is not important,\\footnote{This would be a circuit comprising the medium and low-voltage networks connected to the node, including every generator, fridge and phone charger that it serves. This is a common (and necessary) abstraction in power grid analysis.} so we will simply state that this component:\n\\begin{itemize}\n    \\item sustains a potential difference (which is $\\mel{V}_i$);\\footnote{In physical systems, the \\emph{operating voltage} $|\\mel{V}_i|$ (remember that $\\mat{V}$ is complex) can be controlled by power plant operators by adjusting excitation current of a generator. The \\emph{phase angle} $\\theta_j=\\Arg(V_j)$ can be controlled by adjusting the amount of energy (steam) supplied to a generator.\n\n    Both methods are an indirect form of control,\n    %and the effects are very complex in nature, since the operating voltage and phase angle of one node are inherently linked to that of \n\twhich also affects    \n    all other nodes in the network. In fact, \\emph{maintaining} a constant operating voltage and phase angle is a complicated task, requiring continuous adjustments to generator operation. Section 4.3 of \\cite{VonMeier2006} covers this topic in more detail.}\n    \\item either supplies or draws a current, such that the amount of power generated or consumed by the component equals $\\mel{S}_i$ (when $\\mel{S}_i$ is positive or negative, respectively).\n\\end{itemize}\nWe will call this aggregation of generators and loads a \\define{power injector} (\\inlineac).\n\nEach power injector is connected to a node on one end, and to a ground terminal on the other. In electric circuit theory, a ground terminal represents a direct connection to a universal ground: the `zero' reference of electric potential. One could say that between two different ground terminals, there exists a zero-impedance link connecting the two.\nWe now have a simple closed circuit between any two nodes connected by a transmission line, consisting of a power injector for the first node, an impedance between the two nodes, a power injector for the second node and a ground link back to the first node.\n\nThis zero-impedance `ground link' does not physically exist. Rather, it represents the ground wire of a three-phase transmission line. 
Because there is no return current and no electric potential across the return wire (see Section \ref{sec:threephase}), we can set its impedance to zero. Another interpretation is that the single high-voltage line \emph{represents the whole physical circuit} of three high-voltage wires and one return wire.

The complete, two-layer model is shown in Figure~\ref{fig:KVLcircuitside}.

\begin{figure}
    \centering
    \begin{adjustbox}{width=.6\textwidth}
    \input{"fig/KVLcircuit.tex"}
    \end{adjustbox}
    \caption{A section of a transmission network, showing the line $(i,j)$. Two more nodes, both connected to $i$, are also shown. Note that this is only one layer of the electric circuit used in the model. In the full model (Figure \ref{fig:KVLcircuitside}), each transmission line forms a closed circuit, allowing current to flow.}
    \label{fig:KVLcircuit}
\end{figure}

%(\protect{\makebox[\width]{\raisebox{-1mm}{\begin{circuitikz}\ctikzset{bipoles/length=7mm}\draw[] (0,0) to[sV] (1,0);\end{circuitikz}}}})
\begin{figure}
    \centering
    \begin{adjustbox}{width=.9\textwidth}
    \input{"fig/KVLcircuitside.tex"}
    \end{adjustbox}
    \caption{
    The electric model of the transmission network, showing the line $(i,j)$ and two other nodes. A node is represented by a single component (\inlineac), which is the aggregation of all generators and loads connected to that node. A transmission line is modelled as an impedance (\inlineres), a complex-valued resistor. The ground `wires' have zero resistance. \protect\newline
    \textbf{KVL} is applied to each \textbf{line}, by traversing the loop drawn in blue.\protect\newline
    \textbf{KCL} is applied to each \textbf{node}, by summing all currents entering and leaving the orange area.}
    \label{fig:KVLcircuitside}
\end{figure}

\subsection{Kirchhoff's Voltage Law (KVL) \& Ohm's Law}
For each line $\mathcal{L}_k=(i,j)$, we can apply Kirchhoff's Voltage Law to the loop ``ground $\rightarrow$ $i$ $\rightarrow$ $j$ $\rightarrow$ ground'', as shown in Figure \ref{fig:KVLcircuitside}. This gives us:
$$\mel{V}_i + \Delta \mel{V}_k - \mel{V}_j = 0,$$
where $\Delta \mel{V}_k$ is the electric potential across the line impedance. This potential relates to the line current according to Ohm's Law:
$$\mel{I}_k = \Delta \mel{V}_k \cdot \mel{\eta}_k = (\mel{V}_j - \mel{V}_i) \cdot \mel{\eta}_k.$$
Since the $k$th column of $\mat{K}$ (which corresponds to the line $\mathcal{L}_k=(i,j)$) has exactly two non-zero entries, $\mel{K}_{i,k}=1$ and $\mel{K}_{j,k}=-1$, we can write:
$$\mel{I}_k = (\mel{V}_j - \mel{V}_i) \cdot \mel{\eta}_k = -(\mat{K}^*\mat{V})_{k}\,\mel{\eta}_k.$$
This holds for every $k \in \range{m}$, and this system of equations can be written compactly as:
$$\mat{I} = i\diag(i\mat{\eta})\mat{K}^*\mat{V}.$$
Written in this form, we see that when $i\mat{\eta}$, $\mat{K}$ and $\mat{V}$ are purely real, then $\mat{I}$ is purely imaginary. Physically, this means that line current is always $90\si{\degree}$ out of phase with voltage differences. When phase angles are small, and voltage magnitude is constant, then the voltages `point roughly to the right' in the complex plane. Therefore, their \emph{differences} are almost purely imaginary.
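As a quick numerical check (a sketch only, reusing the hypothetical \texttt{K}, \texttt{eta} and \texttt{lines} from the sketch in the Grid structure section), the compact matrix form agrees with the line-by-line Ohm's Law:
\begin{verbatim}
import numpy as np

# Complex bus voltages with unit magnitude and small phase angles.
V = np.exp(1j * np.array([0.00, -0.02, -0.05, -0.03]))

# KVL-Ohm in matrix form: I = i diag(i eta) K^* V.  Since K is real,
# the conjugate transpose K^* is simply K.T.
I = 1j * np.diag(1j * eta) @ K.T @ V

# Line-by-line check: I_k = (V_j - V_i) * eta_k for each line (i, j).
for k, (i, j) in enumerate(lines):
    assert np.isclose(I[k], (V[j - 1] - V[i - 1]) * eta[k])
\end{verbatim}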
\subsection{Kirchhoff's Current Law (KCL)}
Using KVL and Ohm's Law, we found a relation between line current and node voltages. We can use KCL to relate the power injection to currents leaving and entering a bus.

We apply KCL to every high-voltage node in the electric circuit, as shown in Figure \ref{fig:KVLcircuitside}. For a bus $i \in \range{n}$, Kirchhoff's Current Law states:
\[
\left[ \text{\textit{sum of currents leaving the node}}\right] \, - \, \left[\text{\textit{sum of currents entering the node}}\right] = 0.
\]
These currents are the currents of lines incident at the bus, together with the current `generated or consumed' by the power injector. Complex power is given by:
\[
\phym{S} = \conj{\,\phym{I}\,}\phym{V}.
\]
In our case, $\phym{I}$ is the current that we are looking for, flowing from the ground node to the high-voltage node at $i$, and $\phym{V}$ is the potential difference between the high-voltage node and ground, which is $\mel{V}_i$. The amount of power generated or consumed is given by the grid state: $\phym{S}=\mel{S}_i$. The current through the power injector is now given by $\conj{\mel{S}_i \mel{V}_i^{-1}}$.

The $i^{\text{th}}$ row of $\mat{K}$ corresponds to the bus $i$, and its entries are $1$ for lines leaving $i$, and $-1$ for lines entering $i$. This allows us to write the sum of currents in a compact way:
\[
\conj{\mel{S}_i \mel{V}_i^{-1}} + (\mel{K}\mel{I})_i = 0.
\]
Taking the complex conjugate and multiplying both sides by $\mel{V}_i$ gives $\mel{S}_i + \conj{(\mel{K}\mel{I})}_i \mel{V}_i = 0$ for each $i \in \range{n}$, or in matrix form:
$$\mat{S}+\conj{(\mat{K} \mat{I})}\pointwise \mat{V} = \mat{0}.$$
(The symbol $\pointwise$ denotes \emph{point-wise} multiplication.)
\subsection{Validity conditions}
Instead of \emph{requiring}\footnote{\cite{Slepian1968} gives an \emph{axiomatic} formulation of circuit theory.} the circuit laws to hold, we define them as an \emph{optional property} of the grid state.
\begin{definition}\label{def:statevalidity}
Given an $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$, a grid state $(\mat{S}, \mat{V}, \mat{I})$ is \define{valid} if it satisfies the \define{KVL-Ohm equality}:
\begin{align}\label{eq:KVLOhmeq}
    \mat{I} = i\diag(i\mat{\eta})\mat{K}^*\mat{V} \tag{KVL-Ohm}
\end{align}
and the \define{S-KCL equality}:
\begin{align}\label{eq:KCLeq}
    \mat{S}+\conj{(\mat{K} \mat{I})}\pointwise \mat{V} = \mat{0}. \tag{S-KCL}
\end{align}
\end{definition}
\begin{remark}
In physical terms, the \emph{KVL-Ohm equality} states:
\begin{gather*}
    \textit{Each line $\mathcal{L}_k=(i,j)$ satisfies Ohm's Law ($\phym{I=VY}$), where:} \\
    \phym{I}=\mel{I}_k, \qquad \phym{V}=\mel{V}_{j} - \mel{V}_i, \qquad \phym{Y}=\mel{\eta}_k \nonumber
\end{gather*}
and the \emph{S-KCL equality} states:
\begin{gather*}
    \textit{At each node $i$, the sum of power injected at the node, $\mel{S}_i$}, \\
    \textit{and power injected from the grid, must equal $0$.} \nonumber
\end{gather*}
\end{remark}

\begin{proposition}
Given a grid structure and state as in Definition \ref{def:statevalidity}, the following are equivalent:
\begin{enumerate}[label=\roman*.]
    \item The grid state is valid.
    \item The grid state satisfies (\ref{eq:KVLOhmeq}) and (\ref{eq:KCLeq}).
    \item The grid state satisfies (\ref{eq:KVLOhmeq}), and each node $i$ satisfies the \define{node flow equation}, also called the \define{power mismatch equation}:
    \begin{empheq}[box=\fbox]{gather}
        \mel{S}_i = i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| e^{i(\theta_i - \theta_j)}\quad\quad\text{for each $i \in \range{n}$,}\label{eq:nodefloweq}\\
        \text{where }\mat{L}=\mat{K} \diag(i\mat{\eta}) \mat{K}^*
    \end{empheq}
    and $\mat{\theta} \in \mathbb{R}^n$ is the vector of \emph{voltage angles}, \ie $\theta_j=\Arg(V_j)$ (the principal argument of $\mel{V}_j$). We call $\mat{L}$ the \define{nodal susceptance matrix} \citep{Ronellenfitsch2017}.
\end{enumerate}

\end{proposition}
\begin{remark}
Note that (\ref{eq:nodefloweq}) does not depend on $\mat{I}$. This means that a valid state is uniquely determined by $\mat{S}$ and $\mat{V}$, since $\mat{I}$ can be computed from $\mat{V}$ using (\ref{eq:KVLOhmeq}).
\end{remark}


\begin{remark}
Writing in real and imaginary components, we get the node flow equations for real and reactive power for each node $i \in \range{n}$:
\begin{align}
    %\mel{S}_i &= \sum_{j=1}^{n} (\Re(\conj{\mel{M}}_{i,j})+i\Im(\conj{\mel{M}}_{i,j})) |\mel{V}_i| |\mel{V}_{j}| (\cos(\theta_i - \theta_j)+i\sin(\theta_i - \theta_j)) \\
    %\phym{P}=\Re (\mel{S}_i) &= \sum_{j=1}^{n} |\mel{V}_i| |\mel{V}_{j}| \Big[\Re(\mel{M}_{i,j})\cos(\theta_i - \theta_j)+\Im(\mel{M}_{i,j})\sin(\theta_i - \theta_j)\Big]
    \phym{P}=\Re (\mel{S}_i) &= \sum_{j=1}^{n} |\mel{V}_i| |\mel{V}_{j}| \Big[\Im(\mel{L}_{i,j})\cos(\theta_i - \theta_j)-\Re(\mel{L}_{i,j})\sin(\theta_i - \theta_j)\Big]
    \label{eq:realreactivenodefloweqreal}
    \\
    %\phym{Q}=\Im (\mel{S}_i) &= \sum_{j=1}^{n} |\mel{V}_i| |\mel{V}_{j}|\Big[\Re(\mel{M}_{i,j})\sin(\theta_i - \theta_j)-\Im(\mel{M}_{i,j})\cos(\theta_i - \theta_j)\Big]
    \phym{Q}=\Im (\mel{S}_i) &= \sum_{j=1}^{n} |\mel{V}_i| |\mel{V}_{j}|\Big[\Im(\mel{L}_{i,j})\sin(\theta_i - \theta_j)+\Re(\mel{L}_{i,j})\cos(\theta_i - \theta_j)\Big]
    \label{eq:realreactivenodefloweqreactive}
\end{align}
In the literature on the subject, the node flow equation is often given in this form.
The summands in the expression above are essentially the two-dimensional rotation matrix of angle $\theta_i - \theta_j$, applied to the vector $(\Re(i \conj{\mel{L}}_{i,j}), \Im(i \conj{\mel{L}}_{i,j}))^*$.

We will later study so-called \emph{DC-valid} grid structures, where the values of $\mat{L}$ are real. If so, (\ref{eq:realreactivenodefloweqreal}) simplifies to:
\begin{align*}
    \phym{P}=\Re (\mel{S}_i) &= \sum_{j=1}^{n} \Re(\mel{L}_{i,j}) |\mel{V}_i| |\mel{V}_{j}| \sin(\theta_j - \theta_i).
\end{align*}
(Notice that we flipped $\theta_i$ and $\theta_j$.)
This tells us that in a network of just two nodes and one line with purely imaginary admittance, the amount of real power transmitted is proportional to $\sin (\theta_2-\theta_1)$.\footnote{The quantity $\theta_2 - \theta_1$ is called the \define{power angle} of the transmission line, a common measure of the amount of power being transmitted. A power angle greater than $45\si{\degree}$ will cause the nodes to lose synchronicity, making power transmission between the two nodes impossible. \citep{VonMeier2006} For long lines (over $100\si{\kilo\meter}$), this \define{stability limit} places an upper limit on the amount of power that a line can transmit. For shorter lines, the \emph{thermal limit} dominates.}


\end{remark}

\begin{proof}
(i) $\iff$ (ii) is true by definition.

Suppose the grid state satisfies (\ref{eq:KVLOhmeq}). We have
\begin{align*}
    \mat{S}+\conj{(\mat{K} \mat{I})}\pointwise \mat{V}
    &= \mat{S}+\conj{(i\mat{K} \diag(i\mat{\eta}) \mat{K}^* \mat{V})} \pointwise \mat{V}\\
    &= \mat{S}+\conj{(i\mat{L} \mat{V})} \pointwise \mat{V},
\end{align*}
and this vanishes if and only if for each $i\in\range{n}$
\begin{align*}
    \mel{S}_i &= -\conj{\left(\sum_{j=1}^{n} i\mel{L}_{i,j} \mel{V}_{j}\right)} \mel{V}_i \\
    &= -\left(\sum_{j=1}^{n} -i\conj{\mel{L}}_{i,j} \conj{\mel{V}}_{j}\right) \mel{V}_i \\
    &= i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} \mel{V}_i \conj{\mel{V}}_{j} \\
    &= i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| e^{i\theta_i} |\mel{V}_{j}| e^{-i\theta_j} \\
    &= i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| e^{i(\theta_i - \theta_j)},
\end{align*}
proving (ii) $\iff$ (iii).
\end{proof}

\section{Power Flow}
In the previous section, we derived a fundamental result: the \emph{node flow equation} (\ref{eq:nodefloweq}).
Recall from Section \ref{sec:powerflow} that the \emph{Power Flow problem} entails the following:
\begin{empheq}{gather*}
    \text{\emph{Given the production or consumption at each node,}}\\
    \text{\emph{\textbf{find the current flowing through each line.}}}
\end{empheq}
In the context of our electric model, this translates to:
\begin{empheq}{gather*}
    \text{\emph{Given an $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$, and a power injection $\mat{S}$, }}\\
    \text{\emph{\textbf{find $\mat{V}$ and $\mat{I}$ such that $(\mat{S}, \mat{V}, \mat{I})$ is a valid state.}}}
\end{empheq}

It turns out that the easiest way to solve this problem is to solve the node flow equation, obtaining $\mat{V}$.
Once $\mat{V}$ is known, one can easily compute $\mat{I}$ using (\ref{eq:KVLOhmeq}), giving a state $(\mat{S}, \mat{V}, \mat{I})$ that is valid by construction.

The node flow equation is a system of non-linear equations, and no closed-form solution is known to exist in general. Fortunately, solving the node flow equation is essentially a root-finding problem. This means that well-established techniques, such as the Newton-Raphson algorithm, can be used to find a numerical solution.

The only difficulty lies in the \emph{number of unknowns} ($2n$ real numbers\footnote{Actually, the voltage angle $\theta$ of one bus (the \emph{slack bus}) is usually fixed to $0$, since (\ref{eq:nodefloweq}) only depends on \emph{differences} in voltage angles. We then have $2n-1$ unknowns.}) and finding an \emph{initial value} that converges to the solution. When studying the evolution of the grid state over time, we can use the solution of a previous iteration as initial value. In general, however, we need to come up with an initial guess.

The classical approach is to use the \define{flat start} as initial value: all voltage angles are set to $0$, and magnitudes are all set to the same value (say, $380 \, \si{\kilo\volt}$). For small networks, the Newton-Raphson algorithm then converges to a valid state, with small power angles between lines, and voltage magnitudes close to the initial value. For larger networks, however, this initial value rarely converges, and a valid state can be `maddeningly difficult to obtain' \citep{Overbye2004}.
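The following sketch illustrates this numerical approach (schematic only; it reuses the toy \texttt{K} and \texttt{eta} from the earlier sketches, a generic least-squares solver stands in for a hand-rolled Newton iteration, and convergence is not guaranteed for stressed networks):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

# Nodal susceptance matrix L = K diag(i eta) K^* and a zero-sum,
# real-valued power injection S.
L = K @ np.diag(1j * eta) @ K.T
S = np.array([1.0, -0.4, -0.35, -0.25])
n = len(S)

def mismatch(x):
    # Unknowns: n magnitudes and n-1 angles; the slack angle is fixed to 0.
    mag, ang = x[:n], np.r_[0.0, x[n:]]
    V = mag * np.exp(1j * ang)
    # Node flow equation: S_i = i * sum_j conj(L_ij) V_i conj(V_j).
    rhs = 1j * V * (np.conj(L) @ np.conj(V))
    res = S - rhs
    return np.r_[res.real, res.imag]

# Flat start: all angles 0, all magnitudes set to the same unit value.
x0 = np.r_[np.ones(n), np.zeros(n - 1)]
sol = least_squares(mismatch, x0)
\end{verbatim}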
Instead, a common approach is to approximate the grid structure and to linearise the laws of physics. The more approximations we make, the easier it is to find a solution. A particular combination of approximations is known as the \emph{DC approximation}, in which case a \emph{closed-form} solution always exists, known as the \emph{Linear Power Flow}. For accurate power flow analysis, this solution is then used as initial value for the original system of equations. In this thesis, however, we will only use the solution of the Linear Power Flow, as it is easier to compute and analyse. This is common practice when studying \emph{cascading failures} \citep{Nesti2018emergentfailures, Ronellenfitsch2017, Purchala}.
\section{DC approximation}\label{DCapproximation}
`DC approximation' is a name given to a collection of assumptions/approximations, described below. The name `DC' refers to the approximation that the network is \emph{decoupled}. Despite what the abbreviation might suggest (DC usually stands for `Direct Current'), the power grid is still modelled as an AC (Alternating Current) network. The first version of this technique was published by \cite{Scott1974}, allowing the node flow equations to be solved efficiently using the computational power available at that time.\footnote{Their method does solve the actual node flow equation, but they optimised the iterative root-finding process by \emph{approximating the Jacobian}.}

Compare the following with Definitions \ref{def:gridstructure} and \ref{def:gridstate}.
\begin{definition}\label{def:DCaproximated}
Suppose $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ is an $(n,m)$-grid structure.
\begin{itemize}
    \item If $i\mat{\eta} \in \mathbb{R}^m \subseteq \mathbb{C}^m$  (\ie $\mat{\eta}$ has purely imaginary values) then the grid structure \emph{satisfies the DC approximation}\index{DC approximation!structure}.
    \footnote{\cite{Nesti2018emergentfailures} write $\beta=i\mat{\eta}$. Transmission line impedance ($\phym{Z=R+iX}$) is dominated by inductance, which is positive reactance  ($\phym{X}$). Resistance ($\phym{R}$) is always positive for passive components. Therefore, $\phym{Z}$ lies in the top-right quadrant of $\mathbb{C}$. Then the admittance, $\phym{Y=1/Z=G+iB}$, lies in the bottom-right quadrant of $\mathbb{C}$. In the DC approximation, line conductance ($\phym{G}$) is neglected, so $\phym{Y=iB}$ with $\phym{B}<0$. Therefore, $\beta_k=i\eta_k=i\phym{Y}=-\phym{B}>0$ is the \emph{susceptance of line $k$, \textbf{with reversed sign.}}}
\end{itemize}
For a grid state $(\mat{S}, \mat{V}, \mat{I})$ on the structure:
\begin{itemize}
    \item If $\mat{S} \in \mathbb{R}^n \subseteq \mathbb{C}^n$ then the grid state \emph{satisfies the DC approximation}\index{DC approximation!state}. (Note that $\mat{V}$ and $\mat{I}$ need not be real-valued! We are still studying an AC circuit.)
    \item If $\mat{V} \in \mathbb{T}^n \cdot V_{op} \subseteq \mathbb{C}^n$ (\ie $|V_i|=V_{op}$ for each node $i$) for some \define{operating voltage} $V_{op} \in \mathbb{R}_{\geq 0}$, then the grid state admits a \define{flat profile}.

    In the special case $V_{op}=1$, the grid state admits a \define{normalised profile}.
\end{itemize}
\end{definition}
We note that $V_{op}=1$ does not necessarily mean that the transmission network is operating at $1 \, \si{\volt}$; it simply means that the network is operating at exactly \emph{one unit of electric potential} (which could be set to $1 \, \si{\volt}$, but also $380 \, \si{\kilo\volt}$, for example). This unit is known as a \define{per unit} (p.u.).

Compare the following with (\ref{eq:KVLOhmeq}) and (\ref{eq:KCLeq}).
\begin{definition}\label{def:approximatestatevalidity}
Given an $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ and a grid state $(\mat{S}, \mat{V}, \mat{I})$ that both satisfy the DC approximation, the grid state is \emph{approximately valid} if it satisfies the \emph{approximate KVL-Ohm equality}:
\begin{align}\label{eq:approxKVLOhmeq}
    \mat{I} = i\diag(i\mat{\eta})i\mat{K}^*\mat{\theta}V_{op} \tag{approx. KVL-Ohm}
\end{align}
and the \emph{approximate S-KCL equality}:
\begin{align}\label{eq:approxKCLeq}
    \mat{S}+\conj{(\mat{K} \mat{I})} V_{op} = \mat{0}. \tag{approx. S-KCL}
\end{align}
\end{definition}
The approximate S-KCL equality can be obtained by replacing the $\exp$ function in (\ref{eq:nodefloweq}) by $z \mapsto 1+z$ (the first two terms of the Maclaurin series of $\exp$):
\begin{proposition}\label{prop:approxnodeflow}
Given a grid structure and state as in the previous definition, the state is approximately valid if and only if it satisfies the approximate KVL-Ohm equality and each node $i$ satisfies the \emph{approximate node flow equation}\index{node flow equation!approximate}:
\begin{empheq}[box=\fbox]{align}
    \mel{S}_i &= i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| (1+i(\theta_i - \theta_j))\quad\quad\text{for each $i \in \range{n}$.}\label{eq:approxnodefloweq}
\end{empheq}
\end{proposition}
\begin{proof}
Suppose the approximate KVL-Ohm equality holds. Note that the structure satisfies the DC approximation, so $\mat{L}$ is real-valued and $\conj{\mat{L}}=\mat{L}$. The approximate node flow equality holds if and only if for each node $i$, we have:
\begin{align*}
\mel{S}_i &= i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| (1+i(\theta_i - \theta_j)) \\
    &= i\sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_i||\mel{V}_{j}| -  \sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_i||\mel{V}_{j}|(\theta_i - \theta_j)\\
    \intertext{The first sum is purely imaginary and indeed vanishes: with a flat profile, the rows of $\mat{L}$ sum to zero (see below). This is consistent with our DC state assumption ($\mel{S}_i \in \mathbb{R}$):}
    &= -\sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_i||\mel{V}_{j}|(\theta_i - \theta_j)\\
    &= -\sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_i||\mel{V}_{j}|\theta_i + \sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_i||\mel{V}_{j}|\theta_j\\
    &= -\theta_i |\mel{V}_i| \sum_{j=1}^{n} \mel{L}_{i,j}|\mel{V}_{j}| + |\mel{V}_i| \sum_{j=1}^{n} \mel{L}_{i,j} |\mel{V}_{j}|\theta_j\\
    \intertext{We assumed a flat profile ($|\mel{V}_i|=|\mel{V}_j|=V_{op}$ for every $i,j \in \range{n}$):}
    &= -\theta_i V_{op}^2 \sum_{j=1}^{n} \mel{L}_{i,j} + V_{op}^2 \sum_{j=1}^{n} \mel{L}_{i,j} \theta_j\\
    \intertext{The rows of $\mat{L}$ add up to $0$:}
	&= 0 + V_{op}^2 \sum_{j=1}^{n} \mel{L}_{i,j} \theta_j\\
	&= V_{op}^2 \sum_{j=1}^{n} \mel{L}_{i,j} \theta_j\\
	&= (\conj{\mel{L}}\mel{\theta})_i V^2_{op} \\
	&= \conj{(\mel{K} \diag(i\mel{\eta})\mel{K}^*\mel{\theta})_i} V^2_{op} \\
	&= -\conj{(\mel{K} i\diag(i\mel{\eta})i\mel{K}^*\mel{\theta}V_{op})_i} V_{op} \\
	&= -\conj{(\mel{K} \mel{I})_i} V_{op},
\end{align*}
which is equal to $\mel{S}_i$ if and only if the approximate S-KCL equality holds.
\end{proof}

\begin{theorem}\label{thm:approxnodeflowlineq}
Suppose that a grid structure and state satisfy the DC approximation and that the state admits a flat, normalised profile.
Then the approximated node flow equation is linear and real:
\begin{empheq}[box=\fbox]{align}
    \mat{S} &= \mat{L}\mat{\theta}.\label{eq:approxnodeflowlineq}
\end{empheq}
\end{theorem}

\begin{proof}
In the proof of Proposition \ref{prop:approxnodeflow}, we derived that the approximate node flow equation holds if and only if for each node $i$, we have:
\begin{align*}
    \mel{S}_i = (\conj{\mel{L}}\mel{\theta})_i V^2_{op}.
\end{align*}
Because $\mat{L}$ is real-valued and $V_{op}=1$, we find the result.
\end{proof}
%\towrite{Appendix: give a direct formula for $\mathbf{L}$}
%\towrite{compute $\mathbf{L}$ for the example networks}
%\towrite{give an alternative interpretation of $\mathbf{L}$ or $\mathbf{L}$: discrete laplacian}
%\subsection{Usefulness}
%Real transmission networks do not satisfy the DC approximation and, in general, a DC-valid state is not (physically) valid. Yet, using these two approximations has one important property: the approximated node flow equation becomes \emph{linear}. This has a number of advantages:
%\begin{itemize}
%    \item Solving the Power Flow problem (\ie computing line flows induced by a power injection vector) becomes trivial. As we will see in Section \ref{sec:LPFequations}, \towrite{ugh}
%    \item A linear power flow is crucial for the study of stochastic power injections, since it can be used to transform a probability distribution (like a multivariate normal distribution) without losing its structure.\todo{be specific}
%\end{itemize}
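Under these assumptions, solving the power flow thus reduces to a single linear solve. A minimal sketch (illustrative only; it reuses the toy \texttt{K}, \texttt{eta} and the zero-sum injection \texttt{S} from the earlier sketches):
\begin{verbatim}
import numpy as np

# Nodal susceptance matrix; real-valued under the DC approximation.
L = (K @ np.diag(1j * eta) @ K.T).real

# L is singular (its rows sum to zero), so solve in the least-squares
# sense; any two solutions differ by a constant shift of all angles.
theta, *_ = np.linalg.lstsq(L, S, rcond=None)

# The residual vanishes (numerically) because S has zero sum.
assert np.allclose(L @ theta, S)
\end{verbatim}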
\subsection{Accuracy}
The DC approximation is a useful tool for understanding the complex nature of transmission networks. It is therefore crucial to verify that the DC approximation is, in fact, a good approximation when real-world networks are studied. More precisely, one should ask:
\begin{enumerate}
    \item How close is a DC-valid state to being valid? More precisely, when the approximate node flow equation (\ref{eq:approxnodefloweq}) holds, what is the mismatch between the left hand and right hand side of the node flow equation (\ref{eq:nodefloweq})?
    \item How close are real-world grid structures to satisfying the DC approximation?
    %\item How does modifying a grid structure to satisfy the DC approximation affect the resulting power flow solution?
\end{enumerate}

The first question is answered in Proposition \ref{prop:powermismatchupperbound}, where an upper bound is derived for the power mismatch. It shows that the mismatch is bounded by the \emph{squares} of local differences in phase angles (i.e. $\theta_i - \theta_j$ for line $(i,j)$). \citet{Purchala} have shown that in the Belgian transmission network, all phase differences are below $7 \si{\degree}$, and $94\si{\percent}$ of lines have a phase difference below $2\si{\degree}$.

The second question is answered by \cite{Nesti2018emergentfailures}, who confirm that the SciGRID network (which we will study in Part II) satisfies the criteria found by \cite{Purchala}.

\begin{lemma}\label{lem:expaprrox}%\todo{Move to appendix?}
There exists $K \in \mathbb{R}_{\geq 0}$ such that
\begin{align*}
    |\exp ix - (1 + ix)| \leq K |x|^2 \qquad \text{for every $-2\pi \leq x \leq 2\pi$}.
\end{align*}
\end{lemma}%\todo{Find an upper bound for $K$, $K=0.5$ works}

\begin{proof}
We provide a proof using \emph{complex analysis}.\footnote{For an introduction, see \cite{GarlingVolIII}.} The complex (entire) function $z \mapsto \exp z$ is defined by the power series
\begin{align*}
    z\mapsto \sum_{k=0}^{\infty} \frac{z^k}{k!} = 1 + z + z^2\left(\frac{z^0}{2!}+\frac{z^1}{3!}+\dots\right),
\end{align*}%\todo{$0^0$}
which has infinite radius of convergence, and is continuous on $\mathbb{C}$.

The functions $z \mapsto \frac{1}{z^2}$, $z \mapsto \exp z$ and $z \mapsto 1+z$ are all holomorphic on $\mathbb{C}^*=\mathbb{C} \setminus \{0\}$, and so the function
\begin{align*}
    g: \mathbb{C}^* \rightarrow \mathbb{C} \qquad \qquad
    g: z \mapsto \frac{1}{z^2}(\exp z - (1+z))
\end{align*}
is holomorphic on $\mathbb{C}^*$, with a removable singularity at $0$. The (unique) extension of $g$ to $\mathbb{C}$ is an entire function, and by construction:
\begin{align*}
    \exp z = 1 + z + z^2g(z)\qquad \text{for each $z \in \mathbb{C}$}.
\end{align*}
Since $g$ is entire, it is continuous on $\mathbb{C}$.
$I=[-i2\pi, i2\pi]$ is a compact subset of $\mathbb{C}$, so $g$ is bounded on $I$, proving the result.
\end{proof}

\begin{proposition}\label{prop:powermismatchupperbound}
Suppose $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ is an $(n,m)$-grid structure, with a state $(\mat{S}, \mat{V}, \mat{I})$ that admits a flat profile with operating voltage $V_{op}$.

If the state is DC-valid, then there exists $K \in \mathbb{R}_{\geq 0}$ such that for each node $i$:
\begin{align*}
    \abs{\mel{S}_i - i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| e^{i(\theta_i - \theta_j)}}
    &\leq
    K V_{op}^2 \sum_{j=1}^{n} |\mel{L}_{i,j}| (\theta_i - \theta_j)^2 \\
    &\leq
    KV_{op}^2
    \norm{\mat{\eta}}_{\infty}  \norm{\mat{K}^*\mat{\theta}}_2^2.
\end{align*}
%\todo{A more important result would be an upper limit for the error in \textit{line flow}, instead of the error in \textit{node flow}}
\end{proposition}
\begin{remark}
This result states that if the grid state is \emph{DC-valid}, and the power angles (i.e.
$\theta_i - \theta_j$ for line $(i,j)$) are low, then the state is \emph{close} to also being \emph{valid}.

Quantitatively, it tells us that for a given node $i$, the `error' resulting from using the approximate node flow equation (\ref{eq:approxnodefloweq}) is bounded by a constant times the squares of the phase angles of lines that connect to $i$.%\todo{this follows from the proof, not the statement itself}
\end{remark}

\begin{proof}
Suppose $i \in \range{n}$.

Choose a $K \in \mathbb{R}_{\geq 0}$ for which Lemma \ref{lem:expaprrox} holds.
The state is DC-valid, so we can substitute (\ref{eq:approxnodefloweq}) for $\mel{S}_i$:
\begin{align*}
    &\abs{\mel{S}_i - i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| e^{i(\theta_i - \theta_j)}} \\
    &=
    \abs{
    i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| (1+i(\theta_i - \theta_j)) - i\sum_{j=1}^{n} \conj{\mel{L}}_{i,j} |\mel{V}_i| |\mel{V}_{j}| e^{i(\theta_i - \theta_j)}
    } \\
    &=
    V_{op}^2\abs{\sum_{j=1}^{n} \conj{\mel{L}}_{i,j}  \left(e^{i(\theta_i - \theta_j)} - (1+i(\theta_i - \theta_j))\right)} \\
    &\leq
    V_{op}^2\sum_{j=1}^{n} \abs{\mel{L}_{i,j}} \abs{e^{i(\theta_i - \theta_j)} - (1+i(\theta_i - \theta_j))} \qquad \text{(triangle inequality)}\\
    &\leq
    KV_{op}^2\sum_{j=1}^{n} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2  \qquad \text{(Lemma \ref{lem:expaprrox})}\\
    &=
    KV_{op}^2
    \left[
        \sum_{j=i} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2 +
        \sum_{\substack{j\neq i\\i,j \text{ connected}}} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2 +
        \sum_{\substack{j\neq i\\i,j \text{ not connected}}} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2
    \right]\\
    &=
    KV_{op}^2
    \left[
        \abs{\mel{L}_{i,i}} (\theta_i - \theta_i)^2 +
        \sum_{\substack{j\neq i\\i,j \text{ connected}}} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2 +
        \sum_{\substack{j\neq i\\i,j \text{ not connected}}} 0 \cdot (\theta_i - \theta_j)^2
    \right]\\
    &=
    KV_{op}^2
    \sum_{\substack{j\neq i\\i,j \text{ connected}}} \abs{\mel{L}_{i,j}} (\theta_i - \theta_j)^2
    \\
    &\leq
    KV_{op}^2
    \sum_{\mathcal{L}_k=(a,b) \in \mathcal{L}} \abs{\mel{L}_{a,b}} (\theta_a - \theta_b)^2
    \\
    &=
    KV_{op}^2
    \sum_{\mathcal{L}_k=(a,b) \in \mathcal{L}} \abs{\mel{\eta}_k} (\mel{K}^*\mel{\theta})_k^2
    \\
    &=
    KV_{op}^2
    \norm{\mat{\eta}  \pointwise \mat{K}^*\mat{\theta} \pointwise \mat{K}^*\mat{\theta}}_{1} \qquad \text{(in $\ell^1$)}
    \\
    &\leq
    KV_{op}^2
    \norm{\mat{\eta}}_{\infty}  \norm{\mat{K}^*\mat{\theta} \pointwise \mat{K}^*\mat{\theta}}_{1} \qquad \text{(Hölder's inequality)}
    \\
    &=
    KV_{op}^2
    \norm{\mat{\eta}}_{\infty}  \norm{\mat{K}^*\mat{\theta}}_2^2.
\end{align*}
\end{proof}
%\towrite{Discuss the PDEs used for computing the PF?}
%\todo[inline]{Compare three methods for computing the PF in the two-node network:
%
%- original node flow equations
%
%- original node flow equations, but using the DC approximated Jacobian for root-finding
%
%- LPF
%
%they should be incrementally faster; the first two should find the same solution}
%
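The quadratic behaviour can also be observed numerically. The sketch below is illustrative only: it reuses the hypothetical \texttt{K}, \texttt{eta}, \texttt{L}, \texttt{S} and \texttt{theta} from the earlier sketches, and takes $K=0.5$ from the todo note after the lemma (an unverified assumption here):
\begin{verbatim}
import numpy as np

# Flat, normalised profile reconstructed from the DC angles theta.
V = np.exp(1j * theta)

# Exact node-flow right-hand side and the resulting power mismatch.
exact = 1j * V * (np.conj(L) @ np.conj(V))
power_mismatch = np.abs(S - exact)

# Upper bound K * ||eta||_inf * ||K^* theta||_2^2 with K = 0.5, V_op = 1.
# (Here the Python variable K is the incidence matrix, not the constant.)
bound = 0.5 * np.max(np.abs(eta)) * np.sum((K.T @ theta) ** 2)
print(power_mismatch.max(), "<=", bound)
\end{verbatim}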
\section{Nodal susceptance matrix}
Theorem \ref{thm:approxnodeflowlineq} shows that the \emph{nodal susceptance matrix}, given by
\[
\mat{L} = \mat{K}\diag(i\mat{\eta})\mat{K}^*,
\]
is not just useful for compact notation: it relates phase angles to power injection, taking the grid structure into account ($\mat{S} = \mat{L}\mat{\theta}$).

However, for our purposes, we are more interested in the \emph{inverse} of this function, which maps a power injection to the vector of phase angles that it induces at the buses. Unfortunately, the inverse does not exist: $\mat{L}$ is an $n \times n$ matrix, but it does not have full rank. This can be seen from the fact that $\mat{K}$ has rank $n-1$, so the rank of $\mat{L}$ is at most $n-1$. We can be more precise:

\begin{theorem}
Suppose that the entries of $i\mat{\eta}$ are positive, as in the DC sign convention $\beta_k = i\eta_k > 0$ above.\footnote{In particular, an impedance element never has \emph{zero admittance}: this would be equivalent to having no element at all.} Then the nodal susceptance matrix, $\mat{L}$, has rank $n-1$, and its kernel has a one-element basis:
\[
\ker \mat{L} = \linspan \left\lbrace 
(1,1,\dots,1)^*
\right\rbrace.
\]
\end{theorem}
\begin{proof}
We have $\rank \mat{K} = n-1$ (Theorem \ref{thm:imageLPF}), so $\rank \mat{K}^* = \rank \mat{K} = n-1$. We will prove $\rank \mat{L} = n-1$ by showing that $\ker \mat{L} = \ker \mat{K}^*$. This tells us that they have the same \emph{nullity}, and by the Rank-Nullity Theorem, the result follows.

Suppose $\mat{\theta} \in \ker \mat{K}^*$. Then 
$$
\mat{L}\mat{\theta} = \mat{K}\diag(i\mat{\eta})\mat{K}^*\mat{\theta} = \mat{0},
$$
\ie $\mat{\theta} \in \ker \mat{L}$.

Conversely, suppose $\mat{\theta} \in \ker \mat{L}$. Then
\begin{align*}
\mat{L}\mat{\theta} &= \mat{0} \\
\Rightarrow \quad \mat{K}\diag(i\mat{\eta})\mat{K}^*\mat{\theta} &= \mat{0} \\
\Rightarrow \quad \mat{\theta}^* \mat{K}\diag(i\mat{\eta})\mat{K}^*\mat{\theta} &= 0 \\
\Rightarrow \quad (\mat{\theta}^* \mat{K}\diag(i\mat{\eta})\mat{K}^*\mat{\theta})^* &= 0 \\
\Rightarrow \quad (\diag(i\mat{\eta})^{\frac{1}{2}} \mat{K}^*\mat{\theta})^* (\diag(i\mat{\eta})^{\frac{1}{2}} \mat{K}^*\mat{\theta}) &= 0 \qquad \text{($\diag(i\mat{\eta})$ is self-adjoint)}\\
\Rightarrow \quad \norm{\diag(i\mat{\eta})^{\frac{1}{2}} \mat{K}^*\mat{\theta}} &= 0.
\end{align*}
All entries of $i\mat{\eta}$ are positive, so $\mat{K}^*\mat{\theta}$ must be zero. It follows that $\mat{\theta} \in \ker \mat{K}^*$.

The given basis has the right number of elements, and we can see from (\ref{eq:approxnodeflowlineq}) that $(1,1,\dots,1)^*$ is indeed an element of the kernel.
\end{proof}

\begin{corollary}\label{cor:imageL}
If the entries of $i\mat{\eta}$ are positive, then the \emph{image} of $\mat{L}$ is the set of all power injections with zero sum.
\end{corollary}
\begin{proof}
We have $\mat{L} = \mat{K}\diag(i\mat{\eta})\mat{K}^*$, so $\Ima \mat{L}$ is a linear subspace of $\Ima \mat{K}$. By the previous theorem, $\Ima \mat{L}$ has dimension $n-1$. Applying Theorem~\ref{thm:imageLPF}, we find that $\Ima \mat{K}$ is the set of zero-sum injections, and that $\Ima \mat{K}$ has dimension $n-1$. The only linear subspace of $\Ima \mat{K}$ with dimension $n-1$ is $\Ima \mat{K}$ itself, so $\Ima \mat{L}=\Ima \mat{K}$, proving the result.
\end{proof}

In Theorem \ref{thm:approxnodeflowlineq}, we found the identity $\mat{S} = \mat{L}\mat{\theta}$, relating the power injection to the vector of voltage angles at the nodes.
We now know that if $\mat{S}$ has zero sum, there is a set of phase angles that are mapped to $\mat{S}$ by $\mat{L}$. This set of solutions is an affine subspace of $\mathbb{R}^n$ of dimension $1$ (a translate of $\ker \mat{L}$), and any two solutions differ by a constant angle.\footnote{This makes sense physically: increasing the phase angle at each node by the same amount makes no difference to the grid state, since power transmission is related to \emph{relative angles}: see Equation (\ref{eq:approxnodefloweq}).}

There are two paths to take after this point. One option is to fix the phase angle of the slack bus to $0$, which uniquely defines all other phase angles. This corresponds to taking the $(n-1) \times (n-1)$ submatrix of $\mat{L}$, by removing the row and column of the slack bus. This submatrix is then inverted, and a row and column of all zeroes is added (corresponding to the slack bus). This method is used by \cite{PyPSA}.

A second method is to leave $\mat{L}$ as-is, and to consider the \emph{Moore-Penrose pseudoinverse} of $\mat{L}$. The resulting matrix, $\mat{L}^+$, has the property that \emph{if any solution to $\mat{S}=\mat{L}\mat{\theta}$ exists, one will be given by $\mat{L}^+\mat{S}$}.\footnote{In fact, the Moore-Penrose pseudoinverse solves the \emph{linear least squares} problem.} Corollary~\ref{cor:imageL} tells us that a solution exists if and only if $\mat{S}$ has zero sum (which is a realistic assumption). This is the approach of \cite{Nesti2018emergentfailures}, which they interpret as `distributive slack': not fixing a single slack bus (we will discuss this concept in Section \ref{sec:nonzeroinjection}).

\section{Linear Power Flow}\label{sec:LPFequations}
Physically, there is a \emph{current} flowing along a line in the network, which has the function of transporting \emph{energy}. Although the current is alternating, the average effect is a \emph{flow of energy} from one bus to another. We define the \emph{power flow} along a line as the rate of flow of energy, which is constant when the power injection is constant.

In the context of line overloads, we are only interested in line \emph{current}, which can be seen as proportional to the amount of power being transmitted by the line. (Again, power is not a physical quantity that can be transmitted.) In a flat, normalised profile, power flow and line current are identical in magnitude. To follow general convention, we will talk about power flow instead of line current. For example, line ratings are often given in Watts, not Amperes.

Given an $(n,m)$-grid structure $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ and grid state $(\mat{S}, \mat{V}, \mat{I})$ that both satisfy the DC approximation, such that the state admits a flat profile, we define the \emph{power flowing through line $\mathcal{L}_k$} as:
\begin{align}
    \mel{f}_k = V_{op} (-i \mel{I}_k).\label{eq:powerflowdef}
\end{align}
This equation resembles\footnote{We multiply the current with $-i$ to account for the $90 \, \si{\degree}$ phase shift between current and voltage in a pure inductor.} the equation for electrical power ($\phym{P=VI}$), but it is not a quantity of power being generated or consumed in a component!
Rather, it should be seen as a quantity symbolising the amount of power that a line transmits.\footnote{Alternatively, this definition can be seen as a \emph{change of unit} for electrical current.} Equation (\ref{eq:powerflowdef}) can be written in matrix form, giving the \emph{vector of line flows}:
\begin{align}
    \mat{f} = V_{op} (-i \mat{I}).
\end{align}

We assume a normalised profile (\ie $V_{op}=1$), and we substitute (\ref{eq:approxKVLOhmeq}) for $\mat{I}$: 
\begin{align*}
    \mat{f} = -iV_{op}\mat{I}=-iV_{op}^2i\diag(i\mat{\eta})\mat{K}^*\mat{\theta}
    = \diag(i\mat{\eta})\mat{K}^*\mat{L}^+\mat{p}.
\end{align*}
(We write $\mat{p}$ instead of $\mat{S}$ when $\mat{S}$ is real-valued.)

We find the \emph{Linear Power Flow equation}\index{Linear Power Flow}:
\begin{empheq}[box=\fbox]{gather}\label{eq:LPF}
    \mat{f}=\mat{F}\mat{p} \tag{LPF}\\[3mm]
    \text{where }\quad\mat{F}=\diag(i\mat{\eta})\mat{K}^*\mat{L}^+ \notag
\end{empheq}

\emph{This is a linear transformation from a power injection vector $\mat{p}$ to the line flow $\mat{f}$ that induces it.}

If the line thresholds\index{line threshold} are $\mat{W}=(W_1, \dots, W_m)^* \in \mathbb{R}^m$, then we can define the \emph{normalised line flow}\index{line flow!normalised} as:
\begin{align*}
\hat{\mat{f}} = \diag(\mat{W})^{-1}\mat{f} = \hat{\mat{F}}\mat{p},
\end{align*}
where $\hat{\mat{F}} = \diag(\mat{W})^{-1}\mat{F}$ is the \emph{normalised LPF}.
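As a sketch (illustrative only; \texttt{K} and \texttt{eta} are the toy structure from the earlier sketches, and the thresholds are made up), the LPF and normalised LPF are each a single matrix product:
\begin{verbatim}
import numpy as np

# LPF matrix F = diag(i eta) K^* L^+, via the Moore-Penrose pseudoinverse.
B = np.diag((1j * eta).real)          # diag(i eta), real under DC
L = (K @ B @ K.T).real                # nodal susceptance matrix
F = B @ K.T @ np.linalg.pinv(L)

# Normalised LPF for hypothetical line thresholds W, and the normalised
# flows induced by a zero-sum injection p.
W = np.array([0.5, 0.5, 0.6, 0.4])
p = np.array([1.0, -0.4, -0.35, -0.25])
F_hat = np.diag(1.0 / W) @ F
f_hat = F_hat @ p
print(np.abs(f_hat) > 1.0)            # which lines exceed their threshold?
\end{verbatim}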
\section{Stochastic power injections}\label{stochasticpowerinjections}
Using the linear transformation $\mat{F}$ that we just derived, we can compute the normalised line flow that a \emph{zero-sum} power injection $\mat{p}$ induces. This is a useful result: for example, one could check whether a generator configuration is \emph{admissible} by writing down the injection $\mat{p}$ associated to this configuration, and checking that no value of $\hat{\mat{F}}\mat{p}$ is greater in absolute value than $1$. (Since we \emph{normalised} the line flows, each line has unit threshold.) In a $(4,5)$-grid structure, this process might look something like:\footnote{Here, we are actually showing the \emph{absolute values} of $\hat{\mat{f}}$.}

\[
\begin{pmatrix}
\mel{p}_1\\
\mel{p}_2\\
\mel{p}_3\\
\mel{p}_4
\end{pmatrix}
\quad
\tikz{\draw[|->,thick] (0,0) -- (10mm,0) node[above,midway] {\begin{tabular}{c} \hphantom{.}norm. \\[-1mm] LPF \end{tabular}};}
\quad
\begin{pmatrix}
\hat{\mel{f}}_1 = \raisebox{-1mm}{\inlinebar{.5}{green}} \\
\hat{\mel{f}}_2 = \raisebox{-1mm}{\inlinebar{.2}{green}} \\
\hat{\mel{f}}_3 = \raisebox{-1mm}{\inlinebar{.8}{green}} \\
\hat{\mel{f}}_4 = \raisebox{-1mm}{\inlinebar{1.15}{red}} \\
\hat{\mel{f}}_5 = \raisebox{-1mm}{\inlinebar{.3}{green}}
\end{pmatrix}
\quad
\tikz{\draw[|->,thick] (0,0) -- (20mm,0) node[above,midway] {\begin{tabular}{c} below \\[-1mm] threshold? \end{tabular}};}
\quad
\begin{pmatrix}
\Checkmark\\
\Checkmark\\
\Checkmark\\
\TikzCross\\
\Checkmark
\end{pmatrix}
\]

Things get even more interesting when $\mat{p}$ is a \emph{stochastic variable}. In this case, $\hat{\mat{f}}$ is also a stochastic variable! In fact, given a probability distribution function for $\mat{p}$, we can use $\diag(\mat{W})^{-1}\mat{F}$ to compute the \emph{probability distribution function of $\hat{\mat{f}}$}. Using this probability distribution, we can now answer questions such as: ``What is the probability that line $i$ overloads?'' ($\PROB\left[|\hat{\mel{f}}_i| \geq 1 \right]$) Or: ``What is the probability that line $i$ overloads, given that line $j$ is operating at 95\% capacity?'' ($\PROB \left[ |\hat{\mel{f}}_i| \geq 1 \,\mid\, \hat{\mel{f}}_j = 0.95 \right]$). 

\[
\hspace{-7mm}
\begin{pmatrix}
\mel{p}_1\\
\mel{p}_2\\
\mel{p}_3\\
\mel{p}_4
\end{pmatrix}
%
\tikz{\draw[|->,thick] (0,0) -- (10mm,0) node[above,midway] {\begin{tabular}{c} \hphantom{.}norm. \\[-1mm] LPF \end{tabular}};}
%
\begin{pmatrix}
\hat{\mel{f}}_1 \sim \raisebox{-1mm}{\inlineprob{0.5}{10.0}{orange}} \\
\hat{\mel{f}}_2 \sim \raisebox{-1mm}{\inlineprob{0.25}{20.0}{orange}} \\
\hat{\mel{f}}_3 \sim \raisebox{-1mm}{\inlineprob{0.9}{100.0}{orange}} \\
\hat{\mel{f}}_4 \sim \raisebox{-1mm}{\inlineprob{1.1}{10.0}{orange}} \\
\hat{\mel{f}}_5 \sim \raisebox{-1mm}{\inlineprob{0.7}{5.0}{orange}}
\end{pmatrix}
\tikz{\draw[|->,thick] (0,0) -- (15mm,0) node[above,midway] {$\PROB \left[ \,\cdot\, \geq 1 \right]$};}
\begin{pmatrix}
\phantom{0}0.001\%\phantom{0}\\
\phantom{0}0.0001\%\\
\phantom{0}0.1\%\phantom{000}\\
60.0\%\phantom{000}\\
10.0\%\phantom{000}
\end{pmatrix}
\]

We will discuss the implications of having a stochastic power injection and the meaning of these absolute overload probabilities in Section \ref{sec:generationforecast}. For now, assume that we are studying the injection and flow during a time window long enough for line failures to occur,\footnote{\ie long enough for line protection mechanisms to have an effect. The classical assumption is that lines switch off within one AC cycle (\eg $20\,\si{\milli\second}$ for $50\,\si{\hertz}$) when overloaded.} but also short enough for fluctuations to be significant (and not `averaged out'). These assumptions should be seen as \emph{embodied in the bus covariance matrix}, but for the remainder of this chapter, we make no formal assumptions about the covariance matrix.

\subsection{Normally distributed power injection}
Following \cite{Nesti2018emergentfailures}, we model $\mat{p}$ as \emph{multivariate normally distributed}:
\[
\mat{p} \, \sim \, \gaussdistr(\mat{\mu}_{\mat{p}}, \mat{\Sigma}_{\mat{p}})
\]
where $\mat{\mu}_{\mat{p}}$ is the mean power injection, which is the sum of deterministic generation and expected stochastic generation, minus the load. $\mat{\Sigma}_{\mat{p}}$ is the \emph{power injection covariance matrix}, which can be estimated from historical generation series. When power injection at different nodes is correlated (because of correlated weather), this matrix is non-diagonal.

Because $\hat{\mat{F}}$ is a linear transformation, the vector of normalised line flows $\hat{\mat{f}}$ is (multivariate) Gaussian distributed (Theorem \ref{thm:linearmapofgaussian}), and its distribution is given by

\begin{equation}
\hat{\mat{f}} \sim \gaussdistr(\mat{\mu}_{\mat{f}}, \mat{\Sigma}_{\mat{f}}), \quad \text{ where } \quad
\mat{\mu}_{\mat{f}} = \hat{\mat{F}}\mat{\mu}_{\mat{p}} \quad \text{and} \quad
\mat{\Sigma}_{\mat{f}} = \hat{\mat{F}}\mat{\Sigma}_{\mat{p}}\hat{\mat{F}}^*.
\end{equation}

For realistic networks we have $m > n$, which means that $\hat{\mat{F}}$ is not surjective.
In that case, $\mat{\Sigma}_{\mat{f}}$ is not injective, meaning that $\hat{\mat{f}}$ is Gaussian, but not normally distributed (Theorem \ref{thm:normaliffinvertible}).

\subsection{Overload probabilities}
For a line $l$, the probability distribution of the current through that line is simply the marginal distribution of $\hat{\mel{f}}_l$. Therefore, the probability of an emergent failure of line $l$ is given by:\footnote{Remember that the \emph{sign} of $\hat{\mel{f}}_l$ corresponds to the \emph{direction} of current through the line. The line orientations were chosen arbitrarily, and only have meaning in our bookkeeping.}
\[
\PROB\left[|\hat{\mel{f}}_l| \geq 1 \right] = \PROB\left[\hat{\mel{f}}_l \leq -1 \right] + \PROB\left[\hat{\mel{f}}_l \geq 1 \right]
\]
From Theorem \ref{prop:gaussianmarginaldistr} it follows that $\hat{\mel{f}}_l \, \sim \, \gaussdistr(\mel{\mu}_{\mat{f}\,l}, \mel{\Sigma}_{\mat{f}\,ll})$, and $\PROB\left[|\hat{\mel{f}}_l| \geq 1 \right]$ can now be computed using standard techniques.

The probability of \emph{any} emergent failure is $\PROB\left[\exists_{l \in \range{m}} |\hat{\mel{f}}_l| \geq 1 \right]$, which can be bracketed by a lower and an upper bound:
\[
\max_{l \in \range{m}} \PROB\left[|\hat{\mel{f}}_l| \geq 1 \right]
\quad\leq\quad
\PROB\left[\exists_{l \in \range{m}} |\hat{\mel{f}}_l| \geq 1 \right]
\quad\leq\quad
\sum_{l \in \range{m}} \PROB\left[|\hat{\mel{f}}_l| \geq 1 \right].
\]
Trivially, the most likely line flow, given that any emergent failure occurred, coincides with the most likely line flow, given that the most vulnerable line failed.

\subsection{Most likely power injection}\label{sec:mostlikelypowerinjection}
Now that we have identified the most vulnerable lines in the network, we naturally want to simulate the effect that the failure of one of these lines will have. When we remove the line from our model, we get a new LPF matrix, which can be used to check the currents through all other lines after the initial failure. But what power injection should be used? Because we are studying the \emph{hypothetical} failure of a line, we do not yet know the exact power injection that caused it.

The classical approach is to use the nominal power injection, $\mat{\mu}_{\mat{p}}$. This is exactly what we would do when studying regular line failures (caused by a fallen tree, for example). In our case, however, we assumed that the line failed because of an \emph{overload}, which tells us that the power injection must have deviated from its nominal value.

Because we have estimated a probability distribution for $\mat{p}$, we can find \emph{the most likely power injection, given that line $l$ overloaded}. We can compute this injection explicitly, leveraging the fact that the LPF map is linear.

\begin{theorem}\label{thm:mostlikelyinjection}
Suppose a grid with LPF $\hat{\mat{F}}$ has a $\gaussdistr(\mat{\mu}_p, \mat{\Sigma}_{\mat{p}})$-distributed power injection $\mat{p}$. The most likely power injection $\tilde{\mat{p}}^{(l)}$, given the emergent failure of a line $l$, is uniquely given by
\begin{empheq}[box=\fbox]{gather}\label{eq:mostlikelyinjection}
    \tilde{\mat{p}}^{(l)} = \mat{\mu}_{\mat{p}}  + \frac{\sign(\mel{\mu}_{\mat{f}\, l} ) - \mel{\mu}_{\mat{f}\, l} }{ \mel{\Sigma}_{\mat{f}\, ll}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l
\end{empheq}
when $\mel{\mu}_{\mat{f}\, l} \neq 0$.
Otherwise, there are two injections that maximise the conditional probability of $\mat{p}$, which are given by
\begin{align*}
    \tilde{\mat{p}}^{(l, +)} = \mat{\mu}_{\mat{p}}  + \frac{1}{ \mel{\Sigma}_{\mat{f}\, ll}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \quad \text{ and }\quad
    \tilde{\mat{p}}^{(l, -)} = \mat{\mu}_{\mat{p}}  + \frac{-1}{ \mel{\Sigma}_{\mat{f}\, ll}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l.
\end{align*}
\end{theorem}
\begin{proof}

The set of power injections associated with the failure event of line $l$ is a union of two parallel \emph{planes} in $\mathbb{R}^n$. Indeed, the condition $\hat{\mel{f}}_l = 1$ can be written as:
\[
\hat{\mel{f}}_l = 1 \iff (\hat{\mat{F}}\mat{p})_l = 1 \iff \mat{e}_l^*\hat{\mat{F}}\mat{p} = 1 \iff \left\langle \hat{\mat{F}}^*\mat{e}_l, \mat{p} \right\rangle = 1
\]
which is the equation defining the plane with pillar $\hat{\mat{F}}^*\mat{e}_l$. Similarly, the condition $\hat{\mel{f}}_l = -1$ is satisfied if and only if $\mat{p}$ is contained in the plane with pillar $-\hat{\mat{F}}^*\mat{e}_l$.

We can now apply Theorem~\ref{thm:modeofaplaneconditional} to each pillar to find the mode of $\mat{p}$, given $\hat{\mel{f}}_l = 1$ or $\hat{\mel{f}}_l = -1$, respectively:
\begin{align}
\tilde{\mat{p}}^{(l, +)}
&=
\mat{\mu}_{\mat{p}}  + \frac{1 - \left\langle \mat{\mu}_{\mat{p}}, \hat{\mat{F}}^*\mat{e}_l \right\rangle}{\left\langle  \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l, \hat{\mat{F}}^*\mat{e}_l \right\rangle} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \notag \\
&=
\mat{\mu}_{\mat{p}}  + \frac{1 - \left\langle \hat{\mat{F}} \mat{\mu}_{\mat{p}}, \mat{e}_l \right\rangle}{\left\langle \hat{\mat{F}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l, \mat{e}_l \right\rangle} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \qquad \text{($\hat{\mat{F}}^*$ is the \emph{adjoint} of $\hat{\mat{F}}$)} \notag \\
&=
\mat{\mu}_{\mat{p}}  + \frac{1 - \left\langle \mat{\mu}_{\mat{f}}, \mat{e}_l \right\rangle}{\left\langle \mat{\Sigma}_{\mat{f}} \mat{e}_l, \mat{e}_l \right\rangle} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \notag \\
&=
\mat{\mu}_{\mat{p}}  + \frac{1 - \mel{\mu}_{\mat{f}\, l} }{ \mel{\Sigma}_{\mat{f}\, ll}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \label{eq:modeinjectionpos}\\
\notag \\
\tilde{\mat{p}}^{(l, -)}
&=
\mat{\mu}_{\mat{p}} - \frac{1 - \left\langle \mat{\mu}_{\mat{p}}, -\hat{\mat{F}}^*\mat{e}_l \right\rangle}{\left\langle - \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l, - \hat{\mat{F}}^*\mat{e}_l \right\rangle} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \notag \\
&=
\mat{\mu}_{\mat{p}}  + \frac{-1 - \left\langle \mat{\mu}_{\mat{p}}, \hat{\mat{F}}^*\mat{e}_l \right\rangle}{\left\langle \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l, \hat{\mat{F}}^*\mat{e}_l \right\rangle} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l \notag \\
& \;\, \vdots \notag \\
&=
\mat{\mu}_{\mat{p}}  + \frac{-1 - \mel{\mu}_{\mat{f}\, l} }{ \mel{\Sigma}_{\mat{f}\, ll}} \mat{\Sigma}_{\mat{p}} \hat{\mat{F}}^*\mat{e}_l.
\label{eq:modeinjectionneg}
\end{align}
By symmetry of the marginal distribution of $\hat{\mel{f}}_l$, it follows that in the unlikely case where $\mel{\mu}_{\mat{f}\, l}$ is zero, the line current is equally likely to deviate in either direction, and each case comes with a different power injection. When $\mel{\mu}_{\mat{f}\, l}$ is non-zero, one of the two cases is more likely.
\begin{align*}
\mel{\mu}_{\mat{f}\, l} > 0 \, &\iff \, \tilde{\mat{p}}^{(l, +)}\text{ is the most probable injection,} \\
\mel{\mu}_{\mat{f}\, l} = 0 \, &\iff \, \tilde{\mat{p}}^{(l, +)} \text{ and }\tilde{\mat{p}}^{(l, -)}\text{ are the two most probable injections,} \\
\mel{\mu}_{\mat{f}\, l} < 0 \, &\iff \, \tilde{\mat{p}}^{(l, -)}\text{ is the most probable injection.}
\end{align*}
When $\mel{\mu}_{\mat{f}\, l} \neq 0$, we can use the $\sign$ function to combine Equations (\ref{eq:modeinjectionpos}) and (\ref{eq:modeinjectionneg}) into one, which gives the desired expression.
\end{proof}
%\towrite{Large deviations: in the limit $\epsilon \rightarrow 0$, the expected value of $\mathbf{p}$, given emergent failure $l$, coincides with the mode given above.}
%
%
\section{Redistribution of flow}\label{sec:flowredistribution}
In the previous section, we studied normally distributed power injections, and we can now discover which lines are likely to fail, and what power injection was the most likely cause. The next step is to study the \emph{redistribution of flow}: when overloaded lines are switched off (which happens almost instantly), the remaining lines in the network will have to take over their function, because \emph{the power injection remains unchanged after a line outage}.
In the simplest case of two parallel lines that connect two otherwise unconnected grids (say, a geographical island connected to the mainland via two cables), the failure of one line will force the other line to carry its original current, plus the current that would normally flow through the failed line.

Except in special cases like these, the \emph{redistributed flow} is, in general, hard to compute without the tools developed in this section. Moreover, the general case consists not only of all possible single line failures: we also want to study all possible \emph{combinations} of line failures.

We will discuss two methods to solve this problem: the Direct and the Optimised methods. The first method simply considers the graph obtained by removing the failed lines from the network, and then recalculates (\ref{eq:LPF}) for the network. Calculating the Moore-Penrose inverse of $\mat{L}$ is computationally expensive\footnote{Scientific computing libraries generally calculate $\mat{L}^+$ using the Singular Value Decomposition of $\mat{L}$.}, and every combination of line failures requires this calculation.\footnote{Calculating the LPF of the SciGrid network ($n=489$, $m=895$) takes approximately $700 \, \si{\milli\second}$, excluding the additional overhead of copying the unperturbed network.} A second method, first introduced by \cite{Guler2007}, utilises the LPF of the original network to derive the redistribution of flow. This Optimised method is computationally less expensive, and provides additional insight into the effect of line outages that would be lost if the original network were simply discarded.
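Before detailing the two methods, we note in passing that Theorem~\ref{thm:mostlikelyinjection} is equally straightforward to evaluate numerically. A minimal sketch, continuing the toy example from Section~\ref{stochasticpowerinjections} (the function is our own illustration, and it simply picks the `$+$' branch in the tied case $\mel{\mu}_{\mat{f}\, l} = 0$):

\begin{verbatim}
import numpy as np

def most_likely_injection(mu_p, Sigma_p, F_hat, l):
    """Mode of the injection p, given the emergent failure of line l
    (the boxed expression of the theorem); '+' branch at a tie."""
    mu_f = F_hat @ mu_p
    Sigma_f_ll = F_hat[l] @ Sigma_p @ F_hat[l]   # marginal flow variance of line l
    s = np.sign(mu_f[l]) if mu_f[l] != 0 else 1.0
    return mu_p + (s - mu_f[l]) / Sigma_f_ll * (Sigma_p @ F_hat[l])
\end{verbatim}

The resulting injection can then be fed into either method below to compute the flows right after the hypothesised failure.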
\subsection{Direct method}
For the Direct method, we simply recompute the LPF for the perturbed network. To avoid reducing the dimension $m$ and recalculating $\mat{K}$, we set the admittance of each failed line to zero.

More formally, suppose $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ is an $(n,m)$-grid structure with $\mathcal{Z} \subseteq \range{m}$ a collection of $v \in \range{m}$ lines that fail. For an injection $\mat{p} \in \mathbb{R}^n$, the line flows \emph{before} the failures of $\mathcal{Z}$ are given by:
\begin{align}
\mat{f} &= \diag(i\mat{\eta})\mat{K}^*\left(\mat{K}\diag(i\mat{\eta})  \mat{K}^*\right)^+\mat{p}
\intertext{(by definition of $\mat{F}$) and the line flows \emph{after} the failures are given by:}
\mat{f}^{\mathcal{Z}} &= \diag(i\mat{\eta})\mat{K}^*\left(\mat{K}\diag(i\mat{\eta})\mat{I}_{\range{m} \setminus \mathcal{Z}}\mat{K}^*\right)^+\mat{p},
\end{align}
where $\mat{I}_{\range{m} \setminus \mathcal{Z}}$ is the identity matrix, with diagonal entries set to zero for line numbers contained in $\mathcal{Z}$. By taking the product $\diag(i\mat{\eta})\mat{I}_{\range{m} \setminus \mathcal{Z}}$, we are essentially setting the admittance of failed lines to zero, which corresponds physically to a circuit break.
\subsection{Optimised method}\label{sec:optimisedmethod}
\begin{theorem}\label{thm:lineflowsafterfailures}
Suppose $((\mathcal{N},\mathcal{L}),\mat{K},\mat{\eta})$ is an $(n,m)$-grid structure, with LPF $\mat{F}$. Suppose that $\mathcal{Z} \subseteq \range{m}$ is a collection of $v \in \range{m}$ lines that fail. For a given injection $\mat{p} \in \mathbb{R}^n$, the line flows \emph{before} the failures of $\mathcal{Z}$ are given by:
\begin{align}
\mat{f} &= \mat{F} \mat{p}
\end{align}
and the line flows \emph{after} the failures are given by:
\begin{empheq}[box=\fbox]{gather}
\mat{f}^{\mathcal{Z}} = \mat{f} - \mat{M}\mat{N}\left(\mat{N}^*\mat{M}\mat{N}\right)^+\mat{N}^*\mat{f}\label{eq:lineflowsafterfailures}
\end{empheq}
where $\mat{M} = \mat{F}\mat{K} - \mat{I} \in \mathbb{R}^{m \times m}$ and $\mat{N}=\left(\mat{e}_{\mathcal{Z}_1} \cdots \mat{e}_{\mathcal{Z}_v}\right) \in \mathbb{R}^{m \times v}$, the matrix that is zero everywhere, except for the entries $\left(\mathcal{Z}_i, i\right)$ (for $i \in \range{v}$), where it has value $1$.
\end{theorem}
\begin{remark}
This expression for $\mat{f}^{\mathcal{Z}}$ \emph{also} contains a pseudo-inverse, which might not look computationally advantageous compared to the Direct method. Note, however, that right multiplying by $\mat{N}$ corresponds to taking the \emph{submatrix of column numbers} $\mathcal{Z}_1$ through $\mathcal{Z}_v$, and left multiplying by $\mat{N}^*$ gives the submatrix of \emph{row} numbers in $\mathcal{Z}$. This means that $\mat{M}$ only needs to be computed once, after which most matrix multiplications in (\ref{eq:lineflowsafterfailures}) can be done by \emph{indexing appropriately}, for any combination of line failures.
Additionally, if the number of failed lines is small, the product $\mat{N}^*\mat{M}\mat{N}$ will be a small matrix, drastically improving performance.
Lastly, this expression only requires (pseudo-)solving the system of equations $\left(\mat{N}^*\mat{M}\mat{N}\right)\mat{\alpha}=\mat{N}^*\mat{f}$, which can be done more efficiently than computing the full pseudo-inverse.
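To make the comparison concrete, here is a hypothetical \texttt{numpy} sketch of both methods, reusing the toy grid and injection from before (the function and variable names are ours, not from any particular library):

\begin{verbatim}
import numpy as np

def lpf_matrix(K, b):
    """LPF matrix F = diag(b) K^T L^+, with L = K diag(b) K^T."""
    return np.diag(b) @ K.T @ np.linalg.pinv(K @ np.diag(b) @ K.T)

def flows_direct(K, b, p, Z):
    """Direct method: zero the admittance of failed lines, recompute the LPF."""
    mask = np.ones(len(b))
    mask[list(Z)] = 0.0                     # one pseudo-inverse per failure set
    return lpf_matrix(K, b * mask) @ p

def flows_optimised(F, K, f, Z):
    """Optimised method: f^Z = f - M N (N^* M N)^+ N^* f, via indexing."""
    Z = list(Z)
    M = F @ K - np.eye(K.shape[1])          # in practice: compute once, reuse
    MN = M[:, Z]                            # right-multiplication by N
    alpha = -np.linalg.lstsq(MN[Z, :], f[Z], rcond=None)[0]
    return f + MN @ alpha

F = lpf_matrix(K, b)
f = F @ p
Z = {3}                                     # hypothetical failure of line 4
assert np.allclose(flows_direct(K, b, p, Z), flows_optimised(F, K, f, Z))
\end{verbatim}

On the toy grid both methods agree to machine precision; the payoff of the Optimised method appears once many failure combinations $\mathcal{Z}$ must be scanned.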
(The same optimisation can also be applied to the Direct method.)
\end{remark}

Several proofs of this theorem exist. The first proof, by \cite{Guler2007}, iterates over the failed lines, proving the result by natural induction. \cite{Guo2009} provide two additional proofs, which follow from a careful analysis of the Direct method, where they \emph{remove the columns of $\mat{K}$} that correspond to failed lines. An entirely different, fourth proof, due to \cite{Ronellenfitsch2017}, uses the formalism of \emph{graph cycles}, which form a basis for $\ker \mat{K}$. Their article is truly fascinating, as it explores how flow redistributions are composed of loop flows. Additionally, they derive a more general result, where the grid perturbation is expressed as a \emph{change in line admittances $\mat{\eta}$}.\footnote{Some transmission networks deploy adjustable inductors that clamp onto transmission lines, to \emph{steer} the flow of current, making this generalisation especially relevant.} Taking the limit $\mel{\eta}_l \rightarrow 0$ then corresponds to the removal of the line.

Before turning to this fourth proof, we will try to deduce the result from intuitive reasoning, to better understand the resulting expression.

\begin{intuition}
Both $\mat{f}$ and $\mat{f}^{\mathcal{Z}}$ are induced by the same power injection, \ie $\mat{K} \mat{f} = \mat{K} \mat{f}^{\mathcal{Z}} = \mat{p}$. This means that the difference between the two, which is the change in flow right after the line failures, is an \emph{element of the kernel of $\mat{K}$}. We denote this difference by
$\Delta\mat{f} \defeq \mat{f}^{\mathcal{Z}} - \mat{f}$.

For each $l \in \mathcal{Z}$, the current through line $l$ must become zero. This imposes the condition
\begin{align}
\Delta\mel{f}_l = \mel{f}^{\mathcal{Z}}_l - \mel{f}_l = -\mel{f}_l,\label{eq:diffcondition}
\intertext{or more compactly,}
\mat{N}^*\Delta\mat{f} = -\mat{N}^*\mat{f}.\label{eq:diffconditionmat}
\end{align}
Let us first consider the case of a single failure, $\mathcal{Z}=\left\{l\right\}$, that does not disconnect the network. We are looking for a $\Delta\mat{f} \in \ker \mat{K}$, under the condition that $\Delta\mel{f}_l = -\mel{f}_l$. The first choice that might come to mind is to fix a basis of $\ker \mat{K}$ consisting of \emph{unit loop flows} in the graph. We can pick a unit loop flow that is non-zero at $l$, and scale by $\pm \mel{f}_l$ to find the desired flow difference.

Although this does satisfy the condition at $l$, it is in general not the flow that will be \emph{induced} in the perturbed network, which is uniquely determined by power flow physics. There are many linear combinations of unit loop flows that satisfy the condition at $l$, one of which is the correct one.

Suppose that the network is unused ($\mat{p}=\mat{0}$). We now \emph{force} a current of $1$ through line $l$ and we fix all other line currents to $0$. This line flow vector is given by $\mat{e}_l$. The result of this flow will be a power injection which remains zero everywhere, except at the two nodes that $\mathcal{L}_l = (i,j)$ connects. This power injection is given by $\mat{K}\mat{e}_l$.

On the other hand, if we were to apply the injection $\mat{K}\mat{e}_l$ to the network, allowing current to flow naturally, we would find the vector of line currents $\mat{F}\mat{K}\mat{e}_l$, which in general does not equal $\mat{e}_l$!
Of course, the natural line flow in $l$ will still be relatively large, but some power will be transmitted along different routes. For example, in a circular network of four buses and four lines of equal admittance, with $l=1$, we find\footnote{The series combination of lines 2, 3 and 4 has a third of the admittance of line 1, so their current must equal a third of the current through line 1.}
\begin{align*}
\mat{F}\mat{K}\mat{e}_1 = \begin{pmatrix}
\hphantom{-}0.75 \\
-0.25 \\
-0.25 \\
-0.25
\end{pmatrix},
\text{ with difference }
\mat{F}\mat{K}\mat{e}_1 - \mat{e}_1 = \begin{pmatrix}
-0.25 \\
-0.25 \\
-0.25 \\
-0.25
\end{pmatrix}.
\end{align*}
Because $\mat{F}$ is the right-inverse of $\mat{K}$, we find that $\mat{K} \left(\mat{F}\mat{K}\mat{e}_l\right) = \mat{K}\mat{e}_l$, which means that the \emph{difference between a natural flow and a forced flow}, $\mat{F}\mat{K}\mat{e}_1 - \mat{e}_1$, is an element of the kernel of $\mat{K}$.

We have not yet satisfied the condition (\ref{eq:diffconditionmat}). Given a power injection $\mat{p}$ and unperturbed flow $\mat{f}=\mat{F}\mat{p}$, we can \emph{scale} the above difference with some $\alpha \in \mathbb{R}$:
\begin{align*}
\Delta\mat{f} = \left(\mat{F}\mat{K}\mat{e}_l - \mat{e}_l\right)\alpha
\intertext{with $\alpha$ such that}
\Delta\mel{f}_l = -\mel{f}_l
\end{align*}
is satisfied. If the network remains connected, $\alpha$ is given by $-\mel{f}_l/\left(\mat{F}\mat{K}\mat{e}_l - \mat{e}_l\right)_l = \mel{f}_l/\left(1 - (\mat{F}\mat{K})_{ll}\right)$.
%\todo{The example of a circular network is too simple}
We state, \emph{without proof}, that this is indeed the flow difference dictated by power flow physics.\footnote{If we assume the Theorem to be true, one could work backwards from (\ref{eq:lineflowsafterfailures}) to find this result.}

In our above example, we find $\alpha = 3$, which gives
\begin{align*}
\Delta\mat{f} = \begin{pmatrix}
-0.75 \\
-0.75 \\
-0.75 \\
-0.75
\end{pmatrix},
\text{ and the \emph{redistributed flow}: }
\mat{f}^{\mathcal{Z}} = \mat{f} + \Delta\mat{f} = \begin{pmatrix}
\hphantom{-}0.00 \\
-1.00 \\
-1.00 \\
-1.00
\end{pmatrix}.
\end{align*}
Of course, we could have found this result quite easily: after the first line fails, the unit of power simply traverses the loop in the other direction (hence the flipped sign in $\mat{f}^{\mathcal{Z}}$).

In the general case of multiple line failures, we essentially apply the above procedure to each failed line, and add the resulting differences. If we were to use this method \emph{iteratively}, considering the failed lines in some chosen order, we run into the following issue when more than one line is removed: when removing the second line from the network, we will find a difference flow that sets the second line current to zero. Unfortunately, this difference flow will also change the current of the first line, which is then no longer zero. If we then consider the first line again, we will also affect the second line, et cetera.

To avoid this cat-and-mouse game, we need to find all scaling factors $\alpha_1, \dots, \alpha_v$ \emph{simultaneously}.
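First, though, the single-failure computation above is easy to check numerically. A minimal \texttt{numpy} sketch of the four-bus ring with unit admittances (our own toy construction, matching the worked example):

\begin{verbatim}
import numpy as np

# Ring of 4 buses; lines (1,2), (2,3), (3,4), (4,1), unit admittance.
K4 = np.array([[ 1,  0,  0, -1],
               [-1,  1,  0,  0],
               [ 0, -1,  1,  0],
               [ 0,  0, -1,  1]])
F4 = K4.T @ np.linalg.pinv(K4 @ K4.T)      # LPF with unit susceptances
e1 = np.array([1.0, 0.0, 0.0, 0.0])

f = F4 @ (K4 @ e1)                         # natural flow: [0.75 -0.25 -0.25 -0.25]
alpha = f[0] / (1.0 - (F4 @ K4)[0, 0])     # scaling factor: 3.0
df = alpha * ((F4 @ K4 - np.eye(4)) @ e1)  # flow difference: -0.75 on every line
print(f + df)                              # redistributed flow: [0. -1. -1. -1.]
\end{verbatim}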
Returning to the general, simultaneous case: by virtue of linearity, the $v$ conditions imposed by (\ref{eq:diffconditionmat}) form a set of \emph{linear conditions} on $\mat{\alpha} = (\alpha_1, \dots, \alpha_v)^*$.

For each line $l \in \mathcal{Z}$, the natural-forced difference is $\Delta\mat{f}^l \defeq \mat{F}\mat{K}\mat{e}_l - \mat{e}_l$. When combining all differences for $l \in \range{m}$ as columns of a matrix, we find:
\begin{align*}
\begin{pmatrix}
\mid & & \mid \\
\Delta\mat{f}^1 & \cdots & \Delta\mat{f}^m\\
\mid & & \mid
\end{pmatrix} = \mat{F}\mat{K} - \mat{I} = \mat{M}.
\end{align*}
The sub-matrix of differences for $l \in \mathcal{Z} \subseteq \range{m}$ is given by:
\begin{align*}
\begin{pmatrix}
\mid & & \mid \\
\Delta\mat{f}^{\mathcal{Z}_1} & \cdots & \Delta\mat{f}^{\mathcal{Z}_v}\\
\mid & & \mid
\end{pmatrix}
= \mat{M}\mat{N}.
\end{align*}
Note that $\mathcal{Z}_1, \dots, \mathcal{Z}_v$ do not need to be ordered.

A linear combination of $\Delta\mat{f}^{\mathcal{Z}_1}, \dots, \Delta\mat{f}^{\mathcal{Z}_v}$ with scale factors $\alpha_1, \dots, \alpha_v$ is given by
\begin{align}
\Delta\mat{f}(\mat{\alpha}) = \mat{M}\mat{N}\mat{\alpha}.\label{eq:alphalincombination}
\end{align}
Condition (\ref{eq:diffconditionmat}) then becomes:
\begin{align}
\mat{N}^*\mat{M}\mat{N}\mat{\alpha} &= -\mat{N}^*\mat{f}\\
\intertext{with pseudo-solution}
\mat{\alpha} &= -\left(\mat{N}^*\mat{M}\mat{N}\right)^+\mat{N}^*\mat{f}.\label{eq:alphasolution}
\end{align}
Finally, combining (\ref{eq:alphalincombination}) and (\ref{eq:alphasolution}) gives:
\begin{align*}
\Delta\mat{f} = -\mat{M}\mat{N} \left(\mat{N}^*\mat{M}\mat{N}\right)^+\mat{N}^*\mat{f},
\end{align*}
in agreement with the result.\hfill$\neg$\leafNE
\end{intuition}

A rigorous proof of Theorem \ref{thm:lineflowsafterfailures} is given in \cite{Ronellenfitsch2017}; it only needs to be adapted to our notation.

In (\ref{eq:alphasolution}) we take the \emph{pseudo-inverse}, instead of the ordinary inverse, to account for a set of failures that disconnects the network.
We discuss the implications of this modification in Section \ref{sec:discussionpowerislands}.
\end{document}
{"text": "% !TEX root = index.tex\n\n\\setcounter{section}{-1}\n\\section{Introduction}\n\\epigraph{There is nothing more deceptive than an obvious fact.}{Sherlock Holmes}\n\nWe have an intuitive understanding of what it means for something to be curved. A circle has some curvature where as a line has no curvature. The \\emph{larger} the radius of the circle the \\emph{smaller} the curvature (after all a line is a circle with infinite radius). Similarly for surfaces, a sphere is curved but a plane is not. The larger the radius of the sphere the smaller the curvature (a plane is a sphere with infinite radius).\n\n\\begin{center}\n\\begin{tabular}{|l|l|}\n  \\hline Line & No curvature \\\\\n   Circle & Curvature $\\sim?$ 1/radius \\\\\n   Sine Curve & Varying Curvature \\\\\n   Plane & No curvature \\\\\n   Sphere & Curvature $\\sim?$ 1/radius \\\\\n   Cylinder & ??\\\\\n  \\hline\n\\end{tabular}\n\\end{center}\nBut there is a subtle difference between curves and surfaces. It is possible to take a string and form a circle without stretching or compressing it, but it is not possible to take a flat piece of paper and mold it into a sphere without stretching or compressing.\\footnote{Neglect the thickness.}\n\nOr is it?\n\nHmmm.\\\\\n\nThe curvature of the sphere is in some sense \\emph{intrinsic} to the sphere whereas the curvature of a circle isn't. This difference between curves and surfaces was first quantified by Gauss using what's now called Gaussian Curvature and later generalized by Riemann leading to the creation of the field of Riemannian Geometry.\\\\\n\nIn this class, we'll learn how linear algebra and calculus naturally help us figure out the \\emph{correct} definition of Curvature for surfaces - Riemann's generalization of this to higher dimensions lies at the heart of Riemannian Geometry. We'll try to understand the geometric significance of the various kinds of curvature - Gaussian, Mean, and Principal, the statement of Gauss' Theorema Egregium, and what it means for the Gaussian Curvature to be \\emph{intrinsic} to the surface.\n", "meta": {"hexsha": "d9502ba08ab3b6da79647d67be5155fa0042c98e", "size": 2006, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "00.tex", "max_stars_repo_name": "apurvnakade/mc2018-how-curved-is-a-potato", "max_stars_repo_head_hexsha": "d76acd32d3f030b4fbf3ed0cad3876639bdf4e8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "00.tex", "max_issues_repo_name": "apurvnakade/mc2018-how-curved-is-a-potato", "max_issues_repo_head_hexsha": "d76acd32d3f030b4fbf3ed0cad3876639bdf4e8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "00.tex", "max_forks_repo_name": "apurvnakade/mc2018-how-curved-is-a-potato", "max_forks_repo_head_hexsha": "d76acd32d3f030b4fbf3ed0cad3876639bdf4e8f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.1724137931, "max_line_length": 486, "alphanum_fraction": 0.7696909272, "num_tokens": 480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.7981867825403176, "lm_q1q2_score": 0.5607502061501436}}
{"text": "\\documentclass[12pt]{article}\n\\author{David Alves}\n\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{dirtytalk}\n\\usepackage[a4paper, total={6.5in, 8.75in}]{geometry}\n\\usepackage{forest}\n\\usepackage{skak}\n\\usepackage{tikz}\n\\usepackage{titling}\n\\usepackage{wrapfig}\n\\usepackage{xcolor}\n\n\\def\\multichoose#1#2{\\ensuremath{\\left(\\kern-.3em\\left(\\genfrac{}{}{0pt}{}{#1}{#2}\\right)\\kern-.3em\\right)}}\n\n\\title{Math 142 Problem Set 1}\n\\author{David Alves}\n\\date{2016-08-30}\n\n\\begin{document}\n\\pagenumbering{gobble}\n\n\\begin{center}\n\\large \\thetitle \\\\\n\\theauthor \\\\\n\\thedate\n\\end{center}\n\n\\subsection*{Sources}\n\n    \\begin{itemize}\n    \\item http://tex.stackexchange.com and https://www.sharelatex.com for help with \\LaTeX\n    \\end{itemize}\n\n\\section*{Subset Counting}\n\n\\subsection*{Problem Statement}\nProve that there are $2^{17}$ subsets of $[17]$\n\\subsection*{Solution}\nLet $f(s)$ denote the set of all subsets of a set $s$. We can show that $|f([17])| = 2^{17}$ using induction by demonstrating two things:\n\n\\begin{enumerate}\n\\item $|f([0])| = 2^0$ (Base case)\n\\item If $|f([k])| = 2^k$, then $|f([k+1])| = 2^{k+1}$\n\\end{enumerate}\n\n\\subsubsection*{Base Case}\n$[0]$ is the empty set, which contains no elements. The empty set is a subset of itself, but has no other subsets. Therefore $|f([0])| = 2^0$.\n\\subsubsection*{Inductive Step}\nFor each subset $s$ of $[k]$ there exist exactly two subsets of $[k+1]$: the original subset, and a subset consisting of the original subset plus the element not in $[k]$. Therefore $[k+1]$ contains exactly twice as many subsets as $[k]$, so $|f([k+1])| = 2|f([k])|$. Thus if $|f([k])| = 2^k$, then $|f([k+1])| = 2^{k+1}$ \n\n\\end{document}\n", "meta": {"hexsha": "675bb1e9ef1a410d0d5690c2daa93bdae7e1e66a", "size": 1694, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math142_ps1.tex", "max_stars_repo_name": "dalves/combinatorics", "max_stars_repo_head_hexsha": "059a05b548401df59099a6ba93109f736e0b9ed7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-10-20T14:26:36.000Z", "max_stars_repo_stars_event_max_datetime": "2016-10-20T14:26:36.000Z", "max_issues_repo_path": "math142_ps1.tex", "max_issues_repo_name": "dalves/combinatorics", "max_issues_repo_head_hexsha": "059a05b548401df59099a6ba93109f736e0b9ed7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math142_ps1.tex", "max_forks_repo_name": "dalves/combinatorics", "max_forks_repo_head_hexsha": "059a05b548401df59099a6ba93109f736e0b9ed7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.8, "max_line_length": 322, "alphanum_fraction": 0.6829988194, "num_tokens": 598, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289836, "lm_q2_score": 0.7371581626286833, "lm_q1q2_score": 0.5607198455293143}}
{"text": "\\chapter{Conclusion and Future Work}\n\\label{ch:concl}\n\n\\section{Conclusion}\nAfter having looked at DHAM with respect to different classes of graphs, we did not observe a exponential rate of growth for any of these models (upto the sizes of n, that are practicall to simulate). Our results suggest that DHAM successfully finds the cycle in most Hamiltonian graphs, though the probability of finding such a cycle starts to drop with the minimum degree in the graph dropping below 7.\n\nWe also verified the original claims made by the algorithm and noted that slightly modifying the algorithm to consider a constant multiple more edges could reduce the number of attemps required to find a Hamilton Cycle without any significant impact on the runtime.\n\n\\section{Future Research Directions}\n\nWith encouraging results like described above, one future direction of research is to theoretically bound the runtime of this algorithm on the models of graphs we looked at. \n\nAnother interesting direction to look at is to see if the $\\bigO(n\\log n)$ bipartite matching algorithm given by Karp, Rinnooy-Kan, and Vohra\\cite{karp:bip} can be used to improve the runtime bound on phase 1 of DHAM. Though the runtime does look better at first sight, we are not sure if the runtime/correctness analysis will work out cleanly.\n", "meta": {"hexsha": "f94662f93070ca5f3c5e0ea1a80a39d30ab607cf", "size": 1308, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/chapters/ch-concl.tex", "max_stars_repo_name": "LaughingBudda/hachikuji", "max_stars_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/chapters/ch-concl.tex", "max_issues_repo_name": "LaughingBudda/hachikuji", "max_issues_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/chapters/ch-concl.tex", "max_forks_repo_name": "LaughingBudda/hachikuji", "max_forks_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.4285714286, "max_line_length": 404, "alphanum_fraction": 0.8073394495, "num_tokens": 275, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5607198335301906}}
{"text": "\\section{Adversary Argument}\n\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Horse Racing}\n  \\begin{example}[Horse Racing]\n\t\\begin{itemize}\n\t  \\item $25$ horses\n\t  \\item Round: $\\le 5$ horses race\n\t  \\item Goal: Find $\\#1, \\#2, \\#3$ fastest.\n\t\\end{itemize}\n  \\end{example}\n\n  \\pause\n  \\[ 8 \\pause \\implies 7 \\pause \\quad (a_2, a_3, b_1, b_2, c_1) \\]\n\n  \\pause\n  \\[ (< 5) \\pause \\implies (= 5) \\pause \\quad (\\#1) \\pause \\implies (= 6) \\pause \\quad (\\#2)\\]\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n\\begin{frame}{Finding patterns in bit strings}\n  \\begin{example}[Finding patterns in bit strings]\n\t\\begin{itemize}\n\t  \\item Bit string $A[1 \\ldots n]$\n\t  \\item Bit pattern $01$\n\t  \\item Question: checking every bit?\n\t\\end{itemize}\n  \\end{example}\n\n  \\pause\n  \\vspace{0.60cm}\n  \\centerline{$n$ is odd: checking $A[2, 4,\\ldots, n-1]$}\n\n  \\pause\n  \\vspace{0.50cm}\n  \\centerline{$n$ is even: adversary argument}\n\\end{frame}\n%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "82373c238b46d8e9cc1b06eb782f273907be4853", "size": 917, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2017/algorithm-tutorial-algorithm-analysis-20170412/sections/adversary-argument.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2017/algorithm-tutorial-algorithm-analysis-20170412/sections/adversary-argument.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2017/algorithm-tutorial-algorithm-analysis-20170412/sections/adversary-argument.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 24.1315789474, "max_line_length": 94, "alphanum_fraction": 0.5888767721, "num_tokens": 339, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5607198335301906}}
{"text": "\\section{Length and Energy}\n", "meta": {"hexsha": "d38ebd2452a1ea9787beafa590855791d3033bb3", "size": 28, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/LengthEnergy.tex", "max_stars_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_stars_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-12-28T05:53:38.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T05:56:59.000Z", "max_issues_repo_path": "src/LengthEnergy.tex", "max_issues_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_issues_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/LengthEnergy.tex", "max_forks_repo_name": "siddhartha-gadgil/MetricGeometryCourse", "max_forks_repo_head_hexsha": "92ec7727f358107a8ad61a7229bc94e2aa9bbafc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.0, "max_line_length": 27, "alphanum_fraction": 0.7857142857, "num_tokens": 7, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.760650658103136, "lm_q1q2_score": 0.5607198327450361}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{geometry}\n\\usepackage{enumerate}\n\\usepackage{natbib}\n\\usepackage{float}%\u7a33\u5b9a\u56fe\u7247\u4f4d\u7f6e\n\\usepackage{graphicx}%\u753b\u56fe\n\\usepackage[english]{babel}\n\\usepackage{a4wide}\n\\usepackage{indentfirst}%\u7f29\u8fdb\n\\usepackage{enumerate}%\u52a0\u5e8f\u53f7\n\\usepackage{multirow}%\u5408\u5e76\u884c\n\\title{\\large UM-SJTU JOINT INSTITUTE\\\\DISCRETE MATHEMATICS\\\\(VE203)\\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\\nASSIGNMENT 1\\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ \\\\\\ }\n\\author{Name: Pan Chongdan\\\\ID: 516370910121}\n\\date{Date: \\today}\n\n\n\\begin{document}\n\\maketitle\n\\newpage\n\\section{Q1}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\nA & B &$\\neg$(A$\\wedge$B)  &($\\neg$A$\\vee\\neg$B)  &$\\neg$(A$\\wedge$B)$\\Leftrightarrow$($\\neg$A$\\vee\\neg$B)  &$\\neg$(A$\\vee$B)  &($\\neg$A$\\wedge\\neg$B)  &$\\neg$(A$\\vee$B)$\\Leftrightarrow$($\\neg$A$\\wedge\\neg$B)  \\\\ \\hline\nT & T &F  &F  &T  &F  &F  &T  \\\\ \\hline\nT & F &T &T  &T  &F  &F  &T  \\\\ \\hline\nF & T &T  &T  &T  &F  &F &T  \\\\ \\hline\nF & F &T  &T  &T  &T  &T  &T  \\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\section{Q2}\n\\begin{enumerate}\n\\item\n$$A\\cap B\\Leftrightarrow\\{x|x\\in A\\wedge x\\in B\\}$$\n$$M\\setminus(A\\cap B)\\Leftrightarrow\\{ x|x\\in M\\wedge (x\\notin (A\\cap B)\\} $$\\\\\n$$M\\setminus A\\Leftrightarrow\\{x|x\\in M\\wedge x\\notin A \\}$$\n$$M\\setminus B\\Leftrightarrow\\{x|x\\in M\\wedge x\\notin B \\}$$\n$$(M\\setminus A)\\cup(M\\setminus B)\\Leftrightarrow\\{x|(x\\in M\\wedge x\\notin A)\\vee (x\\in M\\wedge x\\notin B)\\}$$\n$$\\Leftrightarrow\\{x|x\\in M\\wedge (x\\notin A\\vee x\\notin B)\\}\\Leftrightarrow\\{x|x\\in M\\wedge (x\\notin (A\\cap B)\\}$$\n$$\\therefore M\\setminus(A\\cap B)=(M\\setminus A)\\cup(M\\setminus B)$$\n\\item\n$$A\\cup B\\Leftrightarrow\\{x|x\\in A\\vee x\\in B\\}$$\n$$M\\setminus(A\\cup B)\\Leftrightarrow\\{ x|x\\in M\\wedge x\\notin A\\wedge x\\notin B\\}$$\\\\\n$$M\\setminus A\\Leftrightarrow\\{x|x\\in M\\wedge x\\notin A \\}$$\n$$M\\setminus B\\Leftrightarrow\\{x|x\\in M\\wedge x\\notin B \\}$$\n$$(M\\setminus A)\\cap(M\\setminus B)\\Leftrightarrow\\{ x|(x\\in M\\wedge x\\notin A)\\wedge(x\\in M\\wedge x\\notin B\\} $$\n$$\\Leftrightarrow\\{ x|x\\in M\\wedge x\\notin A\\wedge x\\notin B\\}$$\n$$\\therefore M\\setminus(A\\cup B)=(M\\setminus A)\\cap(M\\setminus B)$$\n\\end{enumerate}\n\n\\section{Q3}\n\\begin{enumerate}[(i)]\n\\item\n(A$\\Rightarrow$(B$\\Rightarrow$C))$\\Rightarrow$(B$\\Rightarrow$(A$\\Rightarrow$C)) is a tautology.\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nA & B &C  &A$\\Rightarrow$(B$\\Rightarrow$C) &B$\\Rightarrow$(A$\\Rightarrow$C)&(A$\\Rightarrow$(B$\\Rightarrow$C))$\\Rightarrow$(B$\\Rightarrow$(A$\\Rightarrow$C)) \\\\ \\hline\nT & T &T  &T  &T  &T\\\\ \\hline\nT & T &F  &F  &F  &T \\\\ \\hline\nT & F &T  &T  &T  &T\\\\ \\hline\nT & F &F  &T  &T  &T\\\\ \\hline\nF & T &T  &T  &T  &T\\\\ \\hline\nF & T &F  &T  &T  &T \\\\ \\hline\nF & F &T  &T  &T  &T\\\\ \\hline\nF & F &F  &T  &T  &T\\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\item\n((A$\\vee$B)$\\wedge$(A$\\vee$C))$\\Rightarrow$(B$\\vee$C) is not a tautology.\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nA & B &C  
&(A$\\vee$B)$\\wedge$(A$\\vee$C)&B$\\vee$C&((A$\\vee$B)$\\wedge$(A$\\vee$C))$\\Rightarrow$(B$\\vee$C) \\\\ \\hline\nT & T &T  &T  &T  &T\\\\ \\hline\nT & T &F  &T  &T  &T \\\\ \\hline\nT & F &T  &T  &T  &T\\\\ \\hline\nT & F &F  &T  &F  &F\\\\ \\hline\nF & T &T  &T  &T  &T\\\\ \\hline\nF & T &F  &F  &T  &T \\\\ \\hline\nF & F &T  &F  &T  &T\\\\ \\hline\nF & F &F  &F  &F  &T\\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\item\nA$\\Rightarrow$($\\neg$B)$\\Rightarrow$B$\\Rightarrow$($\\neg$A) is a tautology.\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\nA & B &A$\\Rightarrow$($\\neg$B)&B$\\Rightarrow$($\\neg$A)&A$\\Rightarrow$($\\neg$B)$\\Rightarrow$B$\\Rightarrow$($\\neg$A) \\\\ \\hline\nT & T &F  &F  &T  \\\\ \\hline\nT & F &T  &T  &T  \\\\ \\hline\nF & T &T  &T  &T  \\\\ \\hline\nF & F &T  &T  &T  \\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\end{enumerate}\n\\section{Q4}\nSince we need to take the disjunction of conjunctions of the variables or their negations, there must at least two variables. If the disjunctive normal form of the first two variables is true, then we don't need to consider the remaining part. The proposition can be \n$$(A_1\\wedge A_2)\\vee (\\neg A_1\\wedge\\neg A_2)\\vee (\\neg A_1\\wedge A_2)\\vee (A_1\\wedge\\neg A_2)\\vee \\cdots$$\nIt is always true no matter what the remaining part is.\n\\section{Q5}\n\\begin{enumerate}\n\\item\n$$A\\wedge B\\Leftrightarrow\\neg((\\neg A)\\vee(\\neg B))$$\n\\item\n$$A\\Rightarrow B\\Leftrightarrow(\\neg A)\\vee B$$\n\\item\n$$A\\Leftrightarrow B\\Leftrightarrow(\\neg((\\neg A)\\vee(\\neg B)))\\vee(\\neg(A\\vee B))$$\n\\end{enumerate}\n\\section{Q6}\n\\begin{enumerate}[(i)]\n\\item\n$$X\\bigtriangleup Y=(X\\cup Y)\\setminus (X\\cap Y)$$\n$$(X\\cup Y)\\setminus (X\\cap Y)=(X\\setminus  (X\\cap Y ))\\cup (Y\\setminus (X\\cap Y))$$\n$$X\\setminus  (X\\cap Y )\\Leftrightarrow\\{x|x\\in X\\wedge\\neg(x \\in X\\wedge x \\in Y)\\}\\Leftrightarrow\\{x|x\\in X\\wedge (x \\notin X\\vee x \\notin Y)\\}$$\n$$\\Leftrightarrow\\{x|(x\\in X\\wedge x\\notin X)\\vee (x\\in X\\wedge x\\notin Y)\\}\\Leftrightarrow\\{x|(x\\in X\\wedge x\\notin Y)\\}\\Leftrightarrow X\\setminus Y$$\nSimilarly,$$Y\\setminus (X\\cap Y)=Y\\setminus X$$\n$$(X\\setminus  (X\\cap Y ))\\cup (Y\\setminus (X\\cap Y))\\Leftrightarrow(X\\setminus Y)\\cup (Y\\setminus X)$$\n$$\\therefore X\\bigtriangleup Y=(X\\setminus Y)\\cup (Y\\setminus X) $$\n\\item\n$$(M\\setminus X)\\bigtriangleup(M\\setminus Y)=((M\\setminus X)\\setminus (M\\setminus Y))\\cup ((M\\setminus Y)\\setminus (M\\setminus X))$$\n$$(M\\setminus X)\\setminus (M\\setminus Y)\\Leftrightarrow\\{x|(x\\in M\\wedge \\neg x\\in X)\\wedge\\neg(x\\in M\\wedge \\neg x\\in Y)\\}$$\n$$\\Leftrightarrow\\{x|(x\\in M\\wedge \\neg x\\in X)\\wedge(\\neg x\\in M\\vee x\\in Y)\\}\\Leftrightarrow\\{x|x\\in M\\wedge x\\in Y\\wedge \\neg x\\in X\\}\\Leftrightarrow Y\\setminus X$$\nSimilarly,$$(M\\setminus Y)\\setminus (M\\setminus X)=X\\setminus Y$$\n$$\\therefore (M\\setminus X)\\bigtriangleup(M\\setminus Y)=(Y\\setminus X)\\cup (X\\setminus Y)=X\\bigtriangleup Y$$\n\\item \n$$X\\setminus Y\\Leftrightarrow\\{x|x\\in X\\wedge\\neg x\\in Y\\}\\Leftrightarrow X\\cap Y^c$$\nSimilarly $$Y\\setminus X\\Leftrightarrow Y\\cap X^c$$\n$$ X\\bigtriangleup Y=(X\\setminus Y)\\cup (Y\\setminus X)=(X\\cap Y^c)\\cup(Y\\cap X^c)=(X\\cup Y)\\cap(X^c\\cup Y^c) $$\n$$(X\\bigtriangleup Y)\\bigtriangleup Z=((X\\bigtriangleup Y)\\cup Z)\\cap((X\\bigtriangleup Y)^c\\cup Z^c) 
$$\n$$=(((X\\cup Y)\\cap(X^c\\cup Y^c))\\cup Z)\\cap(((X^c\\cup Y)\\cap(X\\cup Y^c))\\cup Z^c)$$\n$$=(X\\cup Y \\cup Z)\\cap(X^c\\cup Y \\cup Z)\\cap(X\\cup Y^c \\cup Z)\\cap(X\\cup Y \\cup Z^c)$$\nSince its symmetric about $X,Y,Z,$,similarly,\n$$X\\bigtriangleup(Y\\bigtriangleup Z)=(X\\cup Y \\cup Z)\\cap(X^c\\cup Y \\cup Z)\\cap(X\\cup Y^c \\cup Z)\\cap(X\\cup Y \\cup Z^c)$$\n$$\\therefore (X\\bigtriangleup Y)\\bigtriangleup Z=X\\bigtriangleup(Y\\bigtriangleup Z)$$ \n\\item\n$$X\\cap(Y\\bigtriangleup Z)=X\\cap((Y\\setminus Z)\\cup(Z\\setminus Y))=(X\\cap(Y\\setminus Z))\\cup(X\\cap(Z\\setminus Y))$$\n$$=((X\\cap Y)\\setminus(X\\cap Z))\\cup((X\\cap Z)\\setminus(X\\cap Y))$$\n$$(X\\cap Y)\\bigtriangleup(X\\cap Z)=((X\\cap Y)\\setminus(X\\cap Z))\\cup((X\\cap Z)\\setminus(X\\cap Y))$$\n$$\\therefore X\\cap(Y\\bigtriangleup Z)=(X\\cap Y)\\bigtriangleup(X\\cap Z)$$\n\\end{enumerate}\n\\section{Q7}\n\\begin{enumerate}[(i)]\n\\item\n$$x\\in X\\bigtriangleup Y=\\{x|x\\in (A\\setminus B)\\cup (B\\setminus A)\\}\\Leftrightarrow\\{x|(x\\in A \\wedge\\neg x\\in B)\\vee(x\\in B \\wedge\\neg x\\in A)\\}$$\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n$A(x)$ & $B(x)$ &$A(x)\\wedge\\neg B(x)$  &$B(x)\\wedge\\neg A(x)$ &$A(x)\\oplus B(x)$ &$x\\in X\\bigtriangleup Y$ \\\\ \\hline\nT & T &F  &F  &F  &F\\\\ \\hline\nT & F &T  &F  &T  &T \\\\ \\hline\nF & F &F  &F  &F  &F\\\\ \\hline\nF & T &F  &T  &T  &T\\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n$$\\therefore x\\in X\\bigtriangleup Y\\Leftrightarrow A(x)\\oplus B(x) $$\n\\item\n$$\\neg((A(x)\\cap B(x))\\cup(\\neg A(x)\\cap\\neg B(x)))\\Leftrightarrow A(x)\\oplus B(x)$$\n\\item \nIt's a valid argument because $(A(x)\\oplus B(x))\\wedge(B(x)\\oplus C(x))\\Rightarrow \\neg(A(x)\\oplus C(x))$ is always true according the following truth table.\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n$A$ & $B$ &$C$  &$A(x)\\oplus B(x)$&$B(x)\\oplus C(x)$&$(A(x)\\oplus B(x))\\wedge(B(x)\\oplus C(x))$&$\\neg(A(x)\\oplus C(x))$ \\\\ \\hline\nT & T &T  &F  &F&F  &T\\\\ \\hline\nT & T &F  &F  &T&F  &F \\\\ \\hline\nT & F &T  &T  &T&T  &T\\\\ \\hline\nT & F &F  &T  &F&F  &F\\\\ \\hline\nF & T &T  &T  &F&F  &F\\\\ \\hline\nF & T &F  &T  &T&T  &T \\\\ \\hline\nF & F &T  &F  &T&F  &F\\\\ \\hline\nF & F &F  &F  &F&F  &T\\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\end{enumerate}\n\\section{Q8}\n$$\\exists x(P(x)\\Rightarrow Q(x))\\Leftrightarrow\\exists x(\\neg P(x)\\vee Q(x))\\Leftrightarrow\\exists x\\neg P(x)\\vee \\exists xQ(x)$$\n$$\\Leftrightarrow(\\neg\\forall xP(x))\\vee(\\exists xQ(x))\\Leftrightarrow(\\forall xP(x))\\Rightarrow(\\exists xQ(x))$$\n$$\\therefore \\exists x(P(x)\\Rightarrow Q(x))\\Leftrightarrow(\\forall xP(x))\\Rightarrow(\\exists xQ(x))$$\n\n\\section{Q9}\n$$T=(A\\cap B)\\cup((M\\setminus A)\\cap(M\\setminus B))=(A\\cap B)\\cup(M\\setminus(A\\cup B))$$\n$$M\\setminus T=(M\\setminus(A\\cap B))\\cap(A\\cup B)\\supseteq(M\\setminus(A\\cap B))\\cap B$$\n$$= ((M\\setminus A)\\cup (M\\setminus B))\\cap B\\supseteq (M\\setminus A)\\cap B$$\n$$\\therefore (M\\setminus A)\\cap B\\subseteq M\\setminus T$$\n\\section{Q10}\n\\begin{enumerate}[(i)]\n\\item\nThe truth tables:\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n$A$ & $B$ &$A|B$ &$A\\downarrow B$\\\\ \\hline\nT & T &F  &F  \\\\ \\hline\nT & F &T  &F  \\\\ \\hline\nF & T &T  &F  \\\\ \\hline\nF & F &T  &T  \\\\ \\hline\n\\end{tabular}\n\\caption{Truth 
Table}\n\\end{table}\n\\item\n$$A\\wedge B\\equiv (A|B)|(A|B)$$\n$$A\\vee B\\equiv(A|A)|(B|B)$$\n$$A\\Rightarrow B\\equiv A|(A|B)$$\n$$A\\Leftrightarrow B\\equiv[A|(A|B)]|[B|(B|A)]|[A|(A|B)]|[B|(B|A)]$$\n\\item\n$$A\\wedge B\\equiv(A\\downarrow A)\\downarrow(B\\downarrow B)$$\n$$A\\vee B\\equiv(A\\downarrow B)\\downarrow(A\\downarrow B)$$\n$$A\\Rightarrow B\\equiv [(A\\downarrow B)\\downarrow B]\\downarrow[(A\\downarrow B)\\downarrow B]$$\n$$A\\Leftrightarrow B\\equiv[(A\\downarrow B)\\downarrow B]\\downarrow[(B\\downarrow A)\\downarrow A]$$\n\\item No\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n$A$ & $B$ &$C$ &$A\\downarrow (B\\downarrow C)$&$(A\\downarrow B)\\downarrow C$\\\\ \\hline\nT & T &T  &F &F  \\\\ \\hline\nT & T &F  &F &T \\\\ \\hline\nT & F &T  &F &F \\\\ \\hline\nT & F &F  &F &T \\\\ \\hline\nF & T &T  &T &F \\\\ \\hline\nF & T &F  &T &T \\\\ \\hline\nF & F &T  &T &F \\\\ \\hline\nF & F &F  &F &F \\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n\\item\nI can prove it through truth tables\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n$A$ & $B$ &$C$ &$(\\neg A)\\downarrow B$ &$A|C$&$(\\neg A)\\downarrow B\\wedge A|C$ &$B\\downarrow C$\\\\ \\hline\nT & T &T  &F &F &F&F  \\\\ \\hline\nT & T &F  &F &T &F&F\\\\ \\hline\nT & F &T  &T &F &F&F\\\\ \\hline\nT & F &F  &T &T &T&T\\\\ \\hline\nF & T &T  &F &T &F&F\\\\ \\hline\nF & T &F  &F &T &F&F\\\\ \\hline\nF & F &T  &F &T &F&F\\\\ \\hline\nF & F &F  &F &T &F&T\\\\ \\hline\n\\end{tabular}\n\\caption{Truth Table}\n\\end{table}\n$((\\neg A)\\downarrow B\\wedge A|C)\\Rightarrow B\\downarrow C$ is true, so,it's valid\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "afed3fe2d141f7064d7f374ccef5b31d06823e11", "size": 10931, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "VE203DiscreteMaths/Assignment/Assignment 1/1.tex", "max_stars_repo_name": "PANDApcd/Algorithm", "max_stars_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "VE203DiscreteMaths/Assignment/Assignment 1/1.tex", "max_issues_repo_name": "PANDApcd/Algorithm", "max_issues_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "VE203DiscreteMaths/Assignment/Assignment 1/1.tex", "max_forks_repo_name": "PANDApcd/Algorithm", "max_forks_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.2490566038, "max_line_length": 267, "alphanum_fraction": 0.6159546245, "num_tokens": 5051, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8056321866478979, "lm_q1q2_score": 0.5606864170515834}}
{"text": "\\documentclass[11pt,a4paper]{article}\n\n% These are extra packages that you might need for writing the equations:\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{booktabs}\n\\usepackage{hyperref}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\\usepackage{outlines}\n\\usepackage{mathtools}\n\n\n\n\\lstset {language=C++,\n\t\t basicstyle=\\ttfamily,\n         keywordstyle=\\color{blue}\\ttfamily,\n         stringstyle=\\color{red}\\ttfamily,\n         commentstyle=\\color{purple}\\ttfamily,\n         morecomment=[l][\\color{magenta}]{\\#},\n       \t basicstyle=\\tiny}\n\n% You need the following package in order to include figures in your report:\n\\usepackage{graphicx}\n\n% With this package you can set the size of the margins manually:\n\\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}\n\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\n\\begin{document}\n\n% Enter the exercise number, your name and date here:\n\\noindent\\parbox{\\linewidth}{\n \\parbox{.25\\linewidth}{ \\large ICP, Exercise 12 }\\hfill\n \\parbox{.5\\linewidth}{\\begin{center} \\large Beat Hubmann \\end{center}}\\hfill\n \\parbox{.2\\linewidth}{\\begin{flushright} \\large Dec 07, 2018 \\end{flushright}}\n}\n\\noindent\\rule{\\linewidth}{2pt}\n\n\n\\section{Introduction}\n\nAs a continuation of last week's work on exercise 11, the conjugate gradient method was \nimplemented to also solve\nthe discretized Poisson equation in two dimensions for point charges in a grounded box.\nThis time around, the time to solution was compared to a high performance library Cholesky solver as well \nas to the library Conjugate Gradient solver.\n\n\\section{Algorithm Description}\n\nRepeating last week's description:\\\\\nFor all cases, the two-dimensional Poisson equation(equation~\\ref{eqn:1}) on $\\Omega$ is \ndiscretized using second-order central finite differences in both the x- and the y-direction (equation~\\ref{eqn:2}).\nBoth axes share a common grid spacing of $\\Delta x= \\frac{1}{N+1}$ where $N$ is the number of interior points\nper axis direction on the grid.\nFollowing the established finite difference method procedure to employ natural ordering, the left-hand side of equation~\\ref{eqn:2} then can be written\nin form of an $N*N \\times N*N$ matrix $\\vec{A}$ while the values of $\\phi$ on the grid get unrolled into a vector $\\vec{b}$ of size $N*N$ on the right-hand side (equation~\\ref{eqn:3}).\\\\\nThe resulting matrix $A$ is both sparse and block tridiagonal.\n\n\n\n\\begin{equation}\n\\Delta \\Phi = -\\phi \\quad \\text{on}\\ \\Omega = (0, 1) \\times (0,1)\n\\label{eqn:1}\n\\end{equation}\n\n\n\\begin{equation}\n4x_{i,j} - x_{i-1, j} - x_{x+1, j} - x_{i, j-1} - x_{i, j+1} = -(\\Delta x)^2 \\cdot \\rho(x_{i,j})\n\\label{eqn:2}\n\\end{equation}\n\n\n\\begin{equation}\nAx = b\n\\label{eqn:3}\n\\end{equation}\n\n\n\\subsection{Conjugate gradient method}\nThe conjugate gradient method is the most well known member of the family of Krylov subspace methods~\\cite{elman}.\nGiven the system of equations $\\vec{Ax}=\\vec{b}$ with $\\vec{A}$ a symmetric positive definite (SPD) matrix, the method iteratively constructs a solution $\\vec{x}^{(t)} \\in \\mathcal{K}_t(\\vec{A}, \\vec{r}^{(0)})$ using a starting solution $\\vec{x}^{(0)} = (0,0,\\ldots, 0)^T$,\nthe residual $\\vec{r}^{(0)}=\\vec{b} - \\vec{Ax}^{(0)}$ and the associated Krylov subspace $\\mathcal{K}_t(\\vec{A}, \\vec{r}^{(0)})=\\text{span}\\{\\vec{x}, \\vec{Ax}, 
\\vec{A}^2\\vec{x}, \\ldots, \\vec{A}^{t-1}\\vec{x}\\}$.\nIn this implementation, we use the simple Richardson iteration with $\\vec{x}^{(t)}=\\vec{x}^{(t-1)} + \\alpha_{t-1}\\vec{x}^{(t-1)}$ where $\\alpha$ is a scalar factor\ncalculated as shown in the outline below. $\\vec{M}^{-1}\\ =\\ \\frac{\\delta_{ij}}{\\vec{A}_{ii}}$ is the Jacobi preconditioner matrix. Setting $\\vec{M} = \\vec{M}^{-1} = \\vec{\\mathcal{I}}$ instead trivially falls back to \nusing no preconditioner:\n\\begin{outline}\n\\1 $\\vec{r}^{(0)} \\leftarrow \\vec{b} - \\vec{Ax}^{(0)}$\n\\1 $\\vec{z}^{(0)} \\leftarrow \\vec{M}^{-1}\\vec{r}^{(0)}$\n\\1 $\\vec{p} \\leftarrow \\vec{z}^{(0)}$ \n\\1 Iterate until convergence and/or iteration limit:\n    \\2 $\\alpha \\leftarrow \\frac{(\\vec{r}^{(t)})^T \\vec{z}^{(t)}}{(\\vec{p})^T \\vec{Ap}}$\n    \\2 $\\vec{x} \\leftarrow \\vec{x} + \\alpha\\vec{p}$\n    \\2 $\\vec{r}^{(t)} \\leftarrow \\vec{r}^{(t-1)} - \\alpha\\vec{Ap}$\n    \\2 check preconditioned residual $\\norm{\\vec{r}^{(t)}}_2$ for convergence; stop if reached\n    \\2 $\\vec{z}^{(t)} \\leftarrow \\vec{M}^{-1} \\vec{r}^{(t-1)}$\n    \\2 $\\beta \\leftarrow \\frac{(\\vec{z}^{(t)})^T \\vec{r}^{(t)}}{(\\vec{z}^{(t-1)})^T \\vec{r}^{(t-1)}}$\n    \\2 $\\vec{p} \\leftarrow \\vec{z}^{(t)} = \\beta \\vec{p} $\n    \\2 $\\vec{r}^{(t-1)} \\leftarrow \\vec{r}^{(t)}$\n    \\2 $\\vec{z}^{(t-1)} \\leftarrow \\vec{z}^{(t)}$ \n\\1 return $\\vec{x}$\n\\end{outline}\n\n\n\\section{Results}\n\nThe program was implemented as described above and submitted with this report. \\\\\nThe conjugate gradient method was iterated until the residual's norm went below the set treshold: $\\norm{\\vec{r}}_{2} \\le 10^{-4}$.\nThe conjugate gradient method took only $t=82$ iterations and $\\sim 1 \\text{ms}$ which compares very favourably with the Jacobi relaxation method ($t=3478$ iterations in $\\sim 45 \\text{ms}$)\nand the Gauss-Seidel method ($t=1922$ iterations in $\\sim 6400 \\text{ms}$ examined in exercise 11. For comparison, Eigen's optimized library Cholesky method\nsolver obtained the reference solution in $\\sim 10 \\text{ms}$ while Eigen's own conjugate gradient method set to use a complete matrix took $\\sim 3 \\text{ms}$.\\\\\nThe conjugate gradient method with the set residual treshold reached a very minor deviation from the reference Cholesky solution:\\\\ \n$\\norm{\\vec{x}^*_{\\text{Conjugate gradient}} - \\vec{x}^*_{\\text{Cholesky}}}_{2} \\approxeq 0.0002$. \\\\\nThe heat map for the conjugate gradient solver is shown in figure~\\ref{fig:1}.\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[scale=1.2]{figure_1.png} \n\\end{center}\n\\caption{Conjugate gradient solver reference solution for Poisson equation with point charges at $(0.25, 0.75)$, $(0.75, 0.25)$; grid parameter $N=50$.}\n\\label{fig:1}\n\\end{figure}\n\n\\section{Discussion}\nThe results are mostly are as expected, the exception being that the Jacobi preconditioner didn't make a difference for the system under consideration.\nThen again, neither is the Jacobi preconditioner sophisticated, nor does the given system setup (2D finite differences on a small system with $N=50$) actually\nmandate using a preconditioner.\n\n\\begin{thebibliography}{99}\n\n\n\\bibitem{elman}\nElman, H., Silvester, D., Wathen, A. 
\\\\\n\\emph{Finite Elements and Fast Iterative Solvers},\\\\\nOxford University Press,\\\\\n2014.\n\n\n\n% \\bibitem{metropolis}\n% Metropolis, N.,\n% Rosenbluth, A.W.,\n% Rosenbluth, M.N.,\n% Teller, A.H.,\n% Teller, E.\\\\\n% \\emph{Equations of State Calculations by Fast Computing Machines},\\\\\n% Journal of Chemical Physics. 21 (6): 1087,\\\\\n% 1953.\n\n\n% \\bibitem{herrmann}\n% \tHerrmann, H. J.,\n% \tSinger, H. M.,\n% \tMueller L.,\n% \tBuchmann, M.-A.,\\\\\n% \t\\emph{Introduction to Computational Physics - Lecture Notes},\\\\\n% \tETH Zurich,\\\\\n% \t2017.\n\n% \\bibitem{Gottschling}\n% Gottschling, Peter\\\\\n% \\emph{Discovering Modern C++},\\\\\n% Addison-Wesley,\\\\\n% 2016.\n\n\n\n\n\\end{thebibliography}\n\n\\end{document}", "meta": {"hexsha": "de215f2e3283fde21db9f659b8e1b7e21d793263", "size": 7132, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ex12/ex12_report.tex", "max_stars_repo_name": "BeatHubmann/18H-ICP", "max_stars_repo_head_hexsha": "2ad1bcef73f3f43d832031cf45c4909341176ebd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ex12/ex12_report.tex", "max_issues_repo_name": "BeatHubmann/18H-ICP", "max_issues_repo_head_hexsha": "2ad1bcef73f3f43d832031cf45c4909341176ebd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex12/ex12_report.tex", "max_forks_repo_name": "BeatHubmann/18H-ICP", "max_forks_repo_head_hexsha": "2ad1bcef73f3f43d832031cf45c4909341176ebd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7542857143, "max_line_length": 273, "alphanum_fraction": 0.6909702748, "num_tokens": 2346, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583124210896, "lm_q2_score": 0.8056321843145405, "lm_q1q2_score": 0.5606864154276638}}
{"text": "\\section{Air Velocity} \\label{sec:velocity}\nDid you ever feel the wind blow? Most probably. That's what we will be calculating here. How hard the wind will blow. This is noted as velocity, how fast something moves. \n\n\\subsection{Equation of State and the Incompressible Atmosphere}\nThe equation of state relates one or more variables in a dynamical system (like the atmosphere) to another. The most common equation of state in the atmosphere is the ideal gas equation as \ndescribed by \\autoref{eq:ideal gas} \\cite{idealGas}. The symbols in that equation represent:\n\n\\begin{itemize}\n    \\item $p$: The gas pressure (\\si{Pa}).\n    \\item $V$: The volume of the gas (\\si{m^3}).\n    \\item $n$: The amount of moles in the gas (\\si{mol}).\n    \\item $R$: The Gas constant as defined in \\autoref{sec:gas constant} (\\si{JK^{-1}mol^{-1}}) \\cite{idealGas}.\n    \\item $T$: The temperature opf the gas ($K$).\n\\end{itemize}\n\nIf we divide everything in \\autoref{eq:ideal gas} by $V$ and set it to be unit (in this case, set it to be exactly $1$ \\si{m^3}) we can add in the molar mass in both the top and bottom parts of \nthe division as show in \\autoref{eq:gas unit}. We can then replace $\\frac{nm}{V}$ by $\\rho$ the density of the gas (\\si{kgm^{-3}}) and $\\frac{R}{m}$ by $R_s$ the specific gas constant (gas \nconstant that varies per gas in \\si{JK^{-1}mol^{-1}}) as shown in \\autoref{eq:state gas}. The resulting equation is the equation of state that you get that most atmospheric physicists use when \ntalking about the atmosphere \\cite{simon}.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:ideal gas}\n        pV = nRT\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:gas unit}\n        p = \\frac{nR}{V}T = \\frac{nmR}{Vm}T\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:state gas}\n        p = \\rho R_sT\n    \\end{equation}\n\\end{subequations}\n\nThe pressure is quite important, as air moves from a high pressure point to a low pressure point. So if we know the density and the temperature, then we know the pressure and we can work out \nwhere the air will be moving to (i.e. how the wind will blow). In our current model, we know the atmospheric temperature but we do not know the density. For simplicities sake, we will now assume\nthat the atmosphere is Incompressible, meaning that we have a constant density. Obviously we know that air can be compressed and hence our atmosphere can be compressed too but that is not \nimportant enough to account for yet, especially considering the current complexity of our model.\n\nThe code that corresponds to this is quite simple, the only change that we need to make in \\autoref{eq:state gas} is that we need to replace $T$ by $T_a$, the temperature of the atmosphere. As\n$T_a$ is a matrix (known to programmers as a double array), $p$ will be a matrix as well. Now we only need to fill in some values. $\\rho = 1.2$\\cite{densityAir}, $R_s = 287$\\cite{specificGasConstantAir}.\n\n\\subsection{The Momentum Equations} \\label{sec:momentum}\nThe momentum equations are a set of equations that describe the flow of a fluid on the surface of a rotating body. For our model we will use the f-plane approximation. The equations corresponding\nto the f-plane approximation are given in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} \\cite{momentumeqs}. Note that we are ignoring vertical movement, as this does not have a significant\neffect on the whole flow. 
\n\\subsection{The Momentum Equations} \\label{sec:momentum}\nThe momentum equations are a set of equations that describe the flow of a fluid on the surface of a rotating body. For our model we will use the f-plane approximation. The equations corresponding\nto the f-plane approximation are given in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} \\cite{momentumeqs}. Note that we are ignoring vertical movement, as this does not have a significant\neffect on the whole flow. All the symbols in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} mean:\n\n\\begin{itemize}\n    \\item $u$: The east to west velocity (\\si{ms^{-1}}).\n    \\item $t$: The time (\\si{s}).\n    \\item $f$: The Coriolis parameter as in \\autoref{eq:coriolis}.\n    \\item $v$: The north to south velocity (\\si{ms^{-1}}).\n    \\item $\\rho$: The density of the atmosphere (\\si{kgm^{-3}}).\n    \\item $p$: The atmospheric pressure (\\si{Pa}).\n    \\item $x$: The local longitude coordinate (\\si{m}).\n    \\item $y$: The local latitude coordinate (\\si{m}).\n\\end{itemize}\n\nIf we then define a vector $\\bar{u}$ as $(u, v, 0)$, we can rewrite \\autoref{eq:x momentum} as \\autoref{eq:x momentum laplace}. Here $\\nabla u$ is the gradient of $u$ in both the $x$ and $y$ \ndirections. Then, if we write out $\\nabla u$, we get \\autoref{eq:x momentum final}. Similarly, if we want to get $\\delta v$ instead of $\\delta u$, we rewrite \\autoref{eq:y momentum} to get \n\\autoref{eq:y momentum laplace} and \\autoref{eq:y momentum final}.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:x momentum}\n        \\frac{Du}{Dt} - fv = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta x}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:y momentum}\n        \\frac{Dv}{Dt} + fu = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta y}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:x momentum laplace}\n        \\frac{\\delta u}{\\delta t} + \\bar{u} \\cdot \\nabla u - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:y momentum laplace}\n        \\frac{\\delta v}{\\delta t} + \\bar{u} \\cdot \\nabla v + fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:x momentum final}\n        \\frac{\\delta u}{\\delta t} + u\\frac{\\delta u}{\\delta x} + v\\frac{\\delta u}{\\delta y} - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:y momentum final}\n        \\frac{\\delta v}{\\delta t} + u\\frac{\\delta v}{\\delta x} + v\\frac{\\delta v}{\\delta y} + fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n    \\end{equation}\n\\end{subequations}\n
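\nThe spatial derivatives $\\frac{\\delta u}{\\delta x}$, $\\frac{\\delta p}{\\delta x}$ and so on are computed numerically. As a rough sketch of the idea only (the actual definitions live in \\autoref{alg:gradient x} and \\autoref{alg:gradient y}; the signature here is illustrative), a central difference along the longitude axis looks like this:\n\n\\begin{verbatim}\ndef gradient_x(field, lat, lon, dx):\n    # central difference, wrapping around in longitude so the\n    # grid stays periodic around the planet\n    nlon = len(field[lat])\n    return (field[lat][(lon + 1) % nlon]\n            - field[lat][lon - 1]) / (2.0 * dx)\n\\end{verbatim}\n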
\nWith the gradient functions defined in \\autoref{alg:gradient x} and \\autoref{alg:gradient y}, we can move on to the main code for the momentum equations. The main loop is shown in \n\\autoref{alg:stream3}. Do note that this loop replaces the one in \\autoref{alg:stream2v2}, as the two calculate the same thing, but the new algorithm does it better.\n\n\\begin{algorithm}\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:stream3}\n    $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon)$ \\;\n    $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon)$ \\;\n    $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon)$ \\;\n    $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon)$ \\;\n    $S_{px} \\leftarrow \\texttt{gradient\\_x}(p, lat, lon)$ \\;\n    $S_{py} \\leftarrow \\texttt{gradient\\_y}(p, lat, lon)$ \\;\n    \\For{$lat \\leftarrow 1$ \\KwTo $nlat - 1$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $u[lat, lon] \\leftarrow u[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xu} - v[lat, lon]S_{yu} + f[lat]v[lat, lon] - \\frac{S_{px}}{\\rho}\\right)$ \\;\n            $v[lat, lon] \\leftarrow v[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xv} - v[lat, lon]S_{yv} - f[lat]u[lat, lon] - \\frac{S_{py}}{\\rho}\\right)$ \\;\n        }\n    }\n\\end{algorithm}\n\n\\subsection{Improving the Coriolis Parameter}\nAnother change introduced is in the Coriolis parameter. Up until now it has been a constant; however, we know that it varies with latitude. So let's make it vary over the latitude. Recall \n\\autoref{eq:coriolis}, where $\\Theta$ is the latitude. The Coriolis parameter ($f$) is currently defined in \\autoref{alg:gradient}, so let's replace it with \\autoref{alg:coriolis}.\n\n\\begin{algorithm}\n    \\caption{Calculating the Coriolis force}\n    \\label{alg:coriolis}\n    \\SetAlgoLined\n    $\\Omega \\leftarrow 7.2921 \\cdot 10^{-5}$ \\;\n\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        $f[lat] \\leftarrow 2\\Omega \\sin(lat \\frac{\\pi}{180})$ \\;\n    }\n\\end{algorithm}\n\n\\subsection{Adding Friction}\nIn order to simulate friction, we multiply the speeds $u$ and $v$ by $0.99$. Of course there are equations for friction, but those get complicated very fast, so instead we just assume that we\nhave a constant friction factor. This multiplication is done directly after \\autoref{alg:stream3} in \\autoref{alg:stream4v1}.\n\n\\subsection{Adding in Layers}\nWhen adding in atmospheric layers we need to add vertical winds, or in other words the $w$ component of the velocity vectors. We do that by editing \\autoref{alg:stream3}. We change it to \n\\autoref{alg:velocity}. Here we use gravity ($g$) instead of the Coriolis force ($f$) and calculate the change in pressure. Therefore we need to store a copy of the pressure before we do any \ncalculations. This needs to be a copy due to aliasing\\footnote{Aliasing is assigning a different name to a variable, while it remains the same variable. Suppose for instance that we declare a \nvariable $x$ and set it to $4$. Then we say $y \\leftarrow x$, which you might think is the same as saying $y \\leftarrow 4$, but behind the scenes $y$ is pointing to $x$. So if $x$ changes, \nthen so does $y$.}. Since we use pressure as the vertical coordinate, we must be able to convert that into meters (why we opted for pressure is explained in \\autoref{sec:rad layers}) in order to \nbe able to say something sensible about it. To do that we need the concept of geopotential height.\n\n\\subsubsection{Dimensionless Pressure}\nGeopotential height is similar to geometric height, except that it also accounts for the variation in gravity over the planet \\cite{geopot}. One could say that geopotential height is the \n\"gravity adjusted\" height. 
That means that it is similar to the height, but not exactly the same. Height is a linear function, whereas the geopotential height is not, though it looks very similar \nto a linear function if you plot it. Now, to convert easily between potential temperature and temperature, we need another function, known as the Exner function. The Exner \nfunction is a dimensionless\\footnote{Being dimensionless means that there is no dimension (unit) attached to the number. This is useful for many applications and is even used in daily life. For \ninstance when comparing price rises of different products, it is way more useful to talk about percentages (which are unitless) instead of how much you physically pay more (with your favourite \ncurrency as the unit).} pressure. The Exner function is shown in \\autoref{eq:exner} \\cite{verticalcoords}. The symbols in the equation are: \n\n\\begin{itemize}\n    \\item $c_p$: The specific heat capacity of the atmosphere (\\si{Jkg^{-1}K^{-1}}).\n    \\item $p$: Pressure (\\si{Pa}).\n    \\item $p_0$: Reference pressure to define the potential temperature (\\si{Pa}).\n    \\item $R$: The gas constant $8.3144621$ (\\si{Jmol^{-1}K^{-1}}).\n    \\item $T$: The absolute temperature (\\si{K}).\n    \\item $\\theta$: The potential temperature (\\si{K}).\n\\end{itemize}\n\nSince the right hand side contains what we want to convert to and from, we can do some basic rewriting, which tells us what we need to code to convert potential temperature into absolute \ntemperature and vice versa. This is shown in \\autoref{eq:temp exner} and \\autoref{eq:potential temp exner} respectively.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:exner}\n        \\Pi = c_p(\\frac{p}{p_0})^{\\frac{R}{c_p}} = \\frac{T}{\\theta}\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:temp exner}\n        T = \\Pi\\theta\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:potential temp exner}\n        \\theta = \\frac{T}{\\Pi}\n    \\end{equation}\n\\end{subequations}\n\nNow onto some code. Let us initialise $\\Pi$ before we do any other calculations. This code is already present in the control panel section (\\autoref{sec:cp}) as that is where it belongs, so for \nfurther details please have a look at the code there. Now onto the geopotential height. \n
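\nTo make the rewriting concrete, here is a tiny sketch of the two conversions from \\autoref{eq:temp exner} and \\autoref{eq:potential temp exner} (illustrative only; the real initialisation of $\\Pi$ lives in the control panel section, and the names here are placeholders):\n\n\\begin{verbatim}\ndef to_temperature(theta, Pi):\n    # T = Pi * theta (eq:temp exner), element-wise per layer\n    return Pi * theta\n\ndef to_potential_temperature(T, Pi):\n    # theta = T / Pi (eq:potential temp exner)\n    return T / Pi\n\\end{verbatim}\n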
\n\\subsubsection{Geopotential Height}\nAs stated before, geopotential height is similar to geometric height, except that it also accounts for the variation in gravity over the planet; one could say that it is the \n\"gravity adjusted\" height. Now one could ask why we would discuss dimensionless pressure before geopotential height. The answer is quite simple: we need the Exner function in order to define \ngeopotential height. Or rather, we need that function to convert potential temperature into geopotential height. How those three are related is shown in \n\\autoref{eq:geopot}. Then, with a little transformation, we can define what the geopotential height will look like, as shown in \\autoref{eq:geopot final}. The symbols in both equations are:\n\n\\begin{itemize}\n    \\item $\\Pi$: The Exner function.\n    \\item $\\theta$: Potential temperature (\\si{K}).\n\\end{itemize}\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:geopot}\n        \\theta + \\frac{\\delta\\Phi}{\\delta\\Pi} = 0\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:geopot final}\n        \\delta\\Phi = -\\theta\\delta\\Pi\n    \\end{equation}\n\\end{subequations}\n\nNow to turn this into code we need to be careful about a few things. First, we are talking about a change in geopotential height here, so each level of geopotential height depends on the level \nbelow it. Second, this calculation needs the potential temperature, and therefore it should be passed along to the velocity calculations function. With those two things out \nof the way, we get the code as shown in \\autoref{alg:geopot}. Note that \\texttt{Smooth3D} refers to \\autoref{alg:smooth}.\n\n\\begin{algorithm}\n    \\caption{Calculating the geopotential height}\n    \\label{alg:geopot}\n    \\For{$level \\leftarrow 1$ \\KwTo $nlevels$}{\n        $\\Phi[:, :, level] \\leftarrow \\Phi[:, :, level - 1] - T_{pot}[:, :, level](\\Pi[level] - \\Pi[level - 1])$ \\;\n    }\n    $\\Phi \\leftarrow \\texttt{Smooth3D}(\\Phi, smooth_t)$ \\;\n\\end{algorithm}\n
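\nIn code form, \\autoref{alg:geopot} is essentially a single loop over the levels (a sketch; the smoothing step is omitted and the array setup is assumed to exist elsewhere):\n\n\\begin{verbatim}\ndef geopotential(Phi, T_pot, Pi):\n    # Phi, T_pot: (nlat, nlon, nlevels) arrays; Pi: (nlevels,)\n    # each level builds on the geopotential of the level below it\n    nlevels = Phi.shape[2]\n    for level in range(1, nlevels):\n        Phi[:, :, level] = (Phi[:, :, level - 1]\n            - T_pot[:, :, level] * (Pi[level] - Pi[level - 1]))\n    return Phi\n\\end{verbatim}\n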
\n\\subsubsection{Finally Adding in the Layers}\nNow with the geopotential height and dimensionless pressure out of the way, we need to use those two concepts to add layers to the velocity calculations. Before we dive into the code, however, \nthere are slight changes that we need to discuss. The equation shown in \\autoref{eq:velocity} is the primitive equation (as discussed in \\autoref{sec:primitive}). The momentum equations are \ngeostrophic momentum equations, which are a special form of the primitive equation. Since this whole system must remain in equilibrium, we need to set the right hand side to $0$ as shown in \n\\autoref{eq:vel eq}. Now let us rewrite \\autoref{eq:velocity} into \\autoref{eq:velocity int}. We replace $z$ with pressure as that is our vertical coordinate. $\\omega$ is the velocity of the \npressure field, as defined in \\autoref{eq:vert vel}. Note that $p_k$ is the pressure for layer $k$ and $p_0$ is the pressure at the planet surface. Now we need to turn the velocity of the \npressure field into the velocity of a packet of air (see it as a box of air being moved), which is done in \\autoref{eq:vertical velocity}. Here $\\rho$ is the density and $g$ is gravity \n(\\si{ms^{-2}}).\n\n\\begin{subequations}\n    \\begin{equation}\n        \\label{eq:velocity}\n        \\frac{\\delta T}{\\delta x} + \\frac{\\delta T}{ \\delta y} + \\frac{\\delta T}{\\delta z} = \\nabla T\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:vel eq}\n        \\nabla T = 0\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:velocity int}\n        \\frac{\\delta u}{\\delta x} + \\frac{\\delta v}{\\delta y} + \\frac{\\delta\\omega}{\\delta p} = 0\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:vert vel}\n        \\omega_k = -\\int^{p_k}_{p_0}\\left(\\frac{\\delta u}{\\delta x} + \\frac{\\delta v}{\\delta y}\\right) dp\n    \\end{equation}\n    \\begin{equation}\n        \\label{eq:vertical velocity}\n        w = \\omega \\rho g\n    \\end{equation}\n\\end{subequations}\n\nBut I hear you say, what is the density? Since we have moved to pressure coordinates we can actually calculate the density rather than store it. This is done in \\autoref{eq:density}, where each \nsymbol means:\n\n\\begin{itemize}\n    \\item $\\rho$: The density of the atmosphere (\\si{kgm^{-3}}).\n    \\item $p$: The pressure of the atmosphere (\\si{Pa}).\n    \\item $c$: Specific heat capacity of the atmosphere (\\si{Jkg^{-1}K^{-1}}).\n    \\item $T$: Temperature of the atmosphere (\\si{K}).\n\\end{itemize}\n\n\\begin{equation}\n    \\label{eq:density}\n    \\rho = \\frac{p}{cT}\n\\end{equation}\n\nFinally, let us convert \\autoref{eq:vertical velocity} to code in \\autoref{alg:velocity}. Here $T_{trans}$ is a call to the algorithm as described in \\autoref{alg:temp to pot}. \n\\texttt{gradient\\_x}, \\texttt{gradient\\_y} and \\texttt{gradient\\_z} are calls to \\autoref{alg:gradient x}, \\autoref{alg:gradient y} and \\autoref{alg:gradient z} respectively.\n\n\\begin{algorithm}\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:velocity}\n    //\\texttt{The following variables are function calls to algorithms and their shorthand notations are used in the loops}\\\\\n    $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon, layer)$ \\;\n    $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon, layer)$ \\;\n    $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon, layer)$ \\;\n    $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon, layer)$ \\;\n    $S_{zu} \\leftarrow \\texttt{gradient\\_z}(u[lat, lon], p_z, layer)$ \\;\n    $S_{zv} \\leftarrow \\texttt{gradient\\_z}(v[lat, lon], p_z, layer)$ \\;\n    $S_{px} \\leftarrow \\texttt{gradient\\_x}(geopot, lat, lon, layer)$ \\;\n    $S_{py} \\leftarrow \\texttt{gradient\\_y}(geopot, lat, lon, layer)$ \\;\n\n    //\\texttt{The following variables are real variables}\\\\\n    $nlat \\leftarrow \\Phi.length$ \\;\n    $nlon \\leftarrow \\Phi[0].length$ \\;\n    $u_t \\leftarrow $ array like $u$ \\;\n    $v_t \\leftarrow $ array like $v$ \\;\n    $w_t \\leftarrow $ array like $w$ \\;\n    \\For{$lat \\leftarrow 1$ \\KwTo $nlat - 1$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            \\For{$layer \\leftarrow 0$ \\KwTo $nlevels$}{\n                $u_t[lat, lon, layer] \\leftarrow u[lat, lon, layer] + \\delta t \\frac{-u[lat, lon, layer]S_{xu} - v[lat, lon, layer]S_{yu} + f[lat]v[lat, lon, layer] - S_{px}}\n                    {10^5u[lat, lon, layer]}$ \\;\n                $v_t[lat, lon, layer] \\leftarrow v[lat, lon, layer] + \\delta t \\frac{-u[lat, lon, layer]S_{xv} - v[lat, lon, layer]S_{yv} - f[lat]u[lat, lon, layer] - S_{py}}\n                    {10^5v[lat, lon, layer]}$ \\;\n            }\n        }\n    }\n\n    $T_a \\leftarrow T_{trans}(T_{pot}, p_z, \\texttt{False})$ \\;\n\n    \\For{$lat \\leftarrow 2$ \\KwTo $nlat - 2$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            \\For{$level \\leftarrow 1$ \\KwTo $nlevels$}{\n                $w_t[lat, lon, level] \\leftarrow  w_t[lat, lon, level - 1] - \\frac{(p_z[level] - p_z[level - 1])p_z[level]g(S_{xu} + S_{yv})}{C_pT_a[lat, lon, level]}$ \\;\n            }\n        }\n    }\n\n    $u \\leftarrow u + u_t$ \\;\n    $v \\leftarrow v + v_t$ \\;\n    $w \\leftarrow w + w_t$ \\;\n    $p_0 \\leftarrow copy(p)$ \\;\n\\end{algorithm}", "meta": {"hexsha": "cdb7bad732c47c1297ffaa3f0d7906ca88b5dfcd", "size": 18695, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex-docs/topics/velocity.tex", "max_stars_repo_name": "davleop/claude", "max_stars_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 175, 
"max_stars_repo_stars_event_min_datetime": "2020-06-15T16:29:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T21:53:34.000Z", "max_issues_repo_path": "tex-docs/topics/velocity.tex", "max_issues_repo_name": "davleop/claude", "max_issues_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 14, "max_issues_repo_issues_event_min_datetime": "2020-06-26T06:47:47.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-08T09:17:45.000Z", "max_forks_repo_path": "tex-docs/topics/velocity.tex", "max_forks_repo_name": "davleop/claude", "max_forks_repo_head_hexsha": "09ee880d502dcad8cc1a8d2fd681978b812d32dd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2020-06-24T10:39:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-12T08:07:56.000Z", "avg_line_length": 61.0947712418, "max_line_length": 203, "alphanum_fraction": 0.6861192832, "num_tokens": 5588, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478256, "lm_q2_score": 0.6959583124210896, "lm_q1q2_score": 0.5606864121798248}}
{"text": "\\section{Basics of Reinforcement Learning}\n\\label{sec:reinforcement_learning}\n\n\\begin{frame}{Markov Decision Processes}\n\t\\begin{block}{Reinforcement Learning}\n\t\tGeneral class of algorithms that allow an agent to learn how to behave\n\t\tin a stochastic and possibly unknown environment by trial-and-error.\n\t\\end{block}\n\t\n\t\\begin{block}{Markov Decision Process (MDP)}\n\t\tstochastic dynamical system specified by $<\\S, \\A, \\calP, \\calR, \\gamma>$\n\t\t\\begin{enumerate}\n\t\t\t\\item $(\\S, \\calS)$ is a measurable state space\n\t\t\t\\item $(\\A, \\calA)$ is a measurable action space\n\t\t\t\\item $\\calP: \\S \\times \\A \\times \\calS \\to \\R$ is a Markov transition kernel\n\t\t\t\\item $\\calR: \\S \\times \\A \\to \\R$ is a reward function\n\t\t\t\\item $0 < \\gamma < 1$ is the discount factor.\n\t\t\\end{enumerate}\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{Interaction Between Agent and Environment}\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{tikzpicture}[node distance = 6em, auto, thick]\n\t\t\\node [block] (Agent) {Agent\\\\$\\pi$};\n\t\t\\node [block, below of=Agent] (Environment) {Environment\\\\$\\calP, \\calR$};\t\t    \n\t\t\\path [line] (Agent.0) --++ (4em,0em) |- node [near start]{Action $a_t$} (Environment.0);\n\t\t\\path [line] (Environment.190) --++ (-6em,0em) |- node [near start]{State  $s_{t}$} (Agent.170);\n\t\t\\path [line] (Environment.170) --++ (-4.25em,0em) |- node [near start, right] {Reward $r_{t+1}$} (Agent.190);\n\t\\end{tikzpicture}\n\\end{figure}\n\\end{frame}\n\n\\begin{frame}{Policy and Value Function}\n\t\\begin{block}{Policy}\n\t\t A policy is a function $\\pi: \\S \\times \\A \\to \\R$ such that, $\\forall s \\in \\S$, $C \\mapsto \\pi(s,C)$ is a probability distribution over $(\\A, \\calA)$\n\t\\end{block}\n\t\\begin{block}{Return}\n\t\t\\begin{equation*}\n\t\t\tG_t = \\sum^{\\infty}_{t=0} \\gamma^t R_{t+k+1} \n\t\t\\end{equation*}\n\t\\end{block}\n\t\\begin{block}{Value Function}\n\t\t\\begin{equation*}\n\t\t\tV_\\pi(s) = \\E[\\pi]{G_t|S_t = s}\n\t\t\\end{equation*}\n\t\\end{block}\n\t\\begin{block}{Agent's goal}\n\t\tSelect a policy $\\pi^*$ that maximizes his expected return in all possible states. This policy is called \\emph{optimal}.\n\t\\end{block}\n\\end{frame}\n\n\\begin{frame}{Policy Gradient Methods}\n\t\\begin{block}{Key idea}\n\t$\\pi_*$ is approximated using a parametrized policy $\\pi_{\\theta^*}(s,a)$, where\n\t\\begin{equation*}\n\t\t\\theta^* = \\argmax_{\\theta \\in \\Theta} J(\\theta) = V_{\\pi_\\theta}(s_0)\n\t\\end{equation*}\n\tUsing gradient descent, we have the following update scheme\n\t\\begin{equation*}\n\t\t\\theta_{k+1} = \\theta_k + \\alpha_k \\nabla_\\theta J\\left(\\theta_k\\right)\n\t\\end{equation*}\n\t\\end{block}\n\n\t\\begin{block}{Policy Gradient Theorem}\n\t\t\tLet $\\pi_\\theta$ be a differentiable policy. 
\n\\begin{frame}{Monte-Carlo Policy Gradient Method}\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=1.0\\framewidth]{Images/gpomdp}\n\t\\end{figure}\n\\end{frame}\n", "meta": {"hexsha": "193f8190fe6b314dc2bcb2f826b046f9a4846439", "size": 3114, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pacs/Presentation/Sections/2_reinforcement_learning.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Pacs/Presentation/Sections/2_reinforcement_learning.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pacs/Presentation/Sections/2_reinforcement_learning.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 37.5180722892, "max_line_length": 153, "alphanum_fraction": 0.6759794477, "num_tokens": 1099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7090191460821871, "lm_q1q2_score": 0.560501131495252}}
{"text": "\\chapter{Hybrid Neural Network for State Inference} \\label{sec:approach}\nIn this section, I describe my proposed deep learning approach for the black-box state inference task, in details. \n\n\\section{The Model Architecture}\n\\begin{figure*}\n    \\centering\n    \\includegraphics[width=\\textwidth]{ASE_files/GeneralConvolutionalNet.pdf}\n    \\caption{The input and output signals of the black-box system are captured as a multivariate time series; they are processed in a deep neural network that consists of 3 sections: convolutional, recurrent, and dense (fully connected) to predict the system's internal state and its changes over time.}\n    \\label{fig:general_net}\n\\end{figure*}\n\nThe goal of this study is to infer the states of a running software system, over time. Given that my assumption is we don't have access to the source code (or part of it), I only leverage the values of inputs and outputs of the system, over time. \nAs can be seen in figure~\\ref{fig:general_net}, I capture all the inputs and outputs of the system as a time series and then process it in a DNN. \nThe architecture of my proposed model is a hybrid DNN which is inspired by models proposed in the field of Human Activity Recognition (HAR). This task is quite similar to the subject of this thesis in the sense that they both take in a multivariate time series-data (from sensor readings) and output the state of the system that generated those readings (see section~\\ref{sec:related_work_har} for more details on HAR papers). \nThis DNN is made of three parts in sequence: 1) Convolutional, 2) Recurrent, and 3) Fully connected layers. This architecture addresses the aforementioned traditional methods' challenges; each part serves a different purpose in this process, as follows.\n\n\nConvolutions, being more generalized than simple sliding windows, can discover patterns and features in the signals, both in temporal and in spatial (how signals affect each other) dimensions \\cite{wang2017time}. \nThe convolutional layers' flexibility allows them to learn some typical preprocessing operations. For example a moving average or a discrete derivative can be learned as simple convolutional filters. They also help the model to be more resilient to varying time delays between noticing a deviation in input signals and the reaction that will appear in the output signals. Applying convolutional layers in sequence has been shown to result in each layer learning more complex features than the previous layers \\cite{zeiler2014visualizing}.\nThe number of layers, filters, and the kernel size are hyper-parameters that should be selected based on the size of data and the complexity of the system being modeled.\nUsing a sequence of convolutions with a) increasing number of filters and the same kernel size, b) same number of filters and increasing kernel size, and c) decreasing filters with increasing kernel sizes are all different approaches that have been used in the literature by well-known architectures such as VGG and U-net \\cite{simonyan2014very, ronneberger2015u}. \nI will discuss more details of CNN layers in my approach in Section~\\ref{sec:architecture_detail}.\n\nConvolutions are quite powerful in discovering local features. To capture long-term features, recurrent layers which learn sequences of data are leveraged. For example, in my case, they can learn that ``accelerate'' and ``take off'' states only happen in the start of the states sequence, and each ``take off'' state is usually followed by a ``climb'' state. 
The type of recurrent cell to use (LSTM, GRU, etc.), how many cells to unravel in the layer, and the number of layers are also hyper-parameters that need to be tuned depending on the size and complexity of the system under study.\n\nFinally, one or more dense (fully connected) layers at the end are a common way of reducing the dimensions to match expected output dimensions. \nIf there are only two states, the last layer can have a sigmoid activation function and be of shape $L$ (the length of the input); otherwise, to match the one-hot encoding of labels, an output of shape $L\\times N_s$ with softmax activation along the second axis ($N_s$) is required ($N_s$ being the number of possible states).\n\nIn terms of loss function to optimize in the training process, a good choice is a Dice overlap loss function, which is also used in image semantic segmentation tasks. An important property of this loss function is that it is not negatively affected by class imbalance \\cite{milletari2016v, sudre2017generalised}.\n\n% To make it more clear, compare it to applying a convolution (or a discrete derivative) to each signal independently. It can only learn temporal features, nothing about the interactions between the signals.\n% Using convolutional layers instead of predefined kernels allows the model to learn which subset of possible connections and interactions (which can involve two or more variables) can be used as features. This flexibility has another advantage as well. It helps the model to be more resilient to time lag between noticing a deviation in input signals and the reaction that will appear in the output signals. This time lag can be caused by a number of reasons including: physical limitations, sampling resolution, tuning of the controller, or design choices in the system.\n%\\subsubsection{Data Collection}\n%The object system is treated as a black-box where we can only read its input and output values. \n%Capturing the input and output data ($T_k$s) is straightforward. In the first step the system is executed and multiple logs of the input/output values are recorded. After that, state change points ($CP_k$s) data is required to be collected. A typical way to obtain them is with help of a domain expert. They label the logs indicating the time stamps where the system's internal state has changed as well as the new state that it went in. Domain experts are ideal for labeling since they know what the system is and what it is supposed to do, so they can (probably with some inherent human error) detect the system state. \n%They can do it based on a variety of information sources such as examining the system's I/O (which we already have captured), logs, or a high level program or set of instructions preloaded into the system.\n\n\n%We aimed to infer high level states of a black-box system using a machine learning model that observes the system's behavior. \n\\section{Data Encoding}\n% To collect data we run the system $N$ times using $N$ existing test cases and record the input/output values. The recorded data from $k$-th run, $T_k$, forms a discrete multivariate time series data.\nThe input/output values of the black-box system create a multivariate time-series ($T_k$), which can be defined as a set of $n$ univariate time series ($V_i$) of the same length $l_k$. 
Each $V_i$ corresponds to the recorded values for one of the inputs or outputs of the system:\n\\begin{equation} \\label{eq:T_k}\n    T_k = \\{{V_1}_k, {V_2}_k, \\ldots, {V_n}_k\\}\n\\end{equation}\n\\begin{equation} \\label{eq:l_k}\n    |{V_1}_k|=|{V_2}_k|=\\ldots=|{V_n}_k|=l_k \n\\end{equation}\n\nNote that, as figure~\\ref{fig:general_net} shows, we take both inputs and outputs as part of the time-series data to be fed as input into my deep learning models. \nThis is to make sure that we can model state-based behavior of the system, where the current state depends not only on the inputs, but also on the last state(s) (captured as previous outputs) of the system. As an example, from the case study, if the outputs are not taken into account, \n% the ``acceleration'' state before ``takeoff'' and the breaking state after landing\na mid-flight ``descend'' state and the ``approach'' state right before landing\nare indistinguishable using the sensor readings (inputs) alone. \n\nHaving such a time series, the only remaining piece of a training set is the labels. Unlike the input/output values (the features in the dataset), the labels are not usually given. My method to infer the labels is a supervised approach. Thus, we need the domain expert to manually label each individual time stamp with a state name/ID. In practice, what they would do is identify the approximate time at which a state change happens and either assign the new state one of the previously used labels or define a new label for this new state.\nThus, I encode the state information over time as a set of tuples in the form of $(t_s, s)$ where $t_s$ denotes the timestamp where the system entered state $s$. We denote the set of all possible states by $S$ ($s \\in S$) and define $N_s$ as the cardinality of this set. \n\\begin{equation}\\label{eq:change_point}\\begin{split}\n    CP_k {}&{}= \\big\\{ (t_{s_1}, s_1), (t_{s_2}, s_2), \\ldots, (t_{s_l}, s_l) \\big\\}\\:,\\; s_i \\in S \\\\\n    N_s  {}&{}= |S|\n\\end{split}\n\\end{equation}\nSo in summary, the dataset consists of $N$ pairs of the I/O values as features and their state information as labels $\\big\\{(X=T_k,\\;y=CP_k)|1\\leq~k\\leq~N\\big\\}$. \n% Each example in dataset contains one whole test case. The output labels were one-hot encoded to match the model output shape.\n%It is worth mentioning that in section~\\ref{mp_data_collection} we explain what we did to keep the variety of the collected data (and hence its quality) high. \n\n\n\\subsection{Data Preprocessing}\n%\\subsubsection{Inputs and Labels} %\\label{data_set_properties}\nBefore being fed into the model $\\mathcal{F}$ (as defined below), the inputs and labels need some preprocessing. \n\\begin{equation}\\label{eq:model_F}\n    \\mathcal{F}(\\delta(T), m) \\colon \\mathbb{R}^{L\\times n}\\times\\mathbb{R}^L\\to S^{L}.\n\\end{equation}\nTo run more efficiently, TensorFlow expects all the inputs to have the same length. To achieve that, the shorter $T_k$s are zero-padded to length $L = \\max\\{l_k\\}$; the padding function $\\delta$ does that.\nTherefore, eventually, the input to the model will be $T_k$s that are rearranged to form a tensor of shape $n \\times L$ along with a padding mask (denoted by $m$). \nThe mask tells the model where the tail starts so the model can ignore all the zeros from there on. 
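\nAs a small illustration (a sketch only; the actual preprocessing code may differ, and the names here are placeholders), the padding function $\\delta$ and the mask $m$ can be built as follows:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef pad_with_mask(T, L):\n    # T has shape (n, l): n signals recorded for l time stamps\n    n, l = T.shape\n    padded = np.zeros((n, L), dtype=T.dtype)\n    padded[:, :l] = T            # delta(T): zero-pad the tail\n    mask = np.zeros(L, dtype=np.float32)\n    mask[:l] = 1.0               # ones up to l, zeros afterwards\n    return padded, mask\n\\end{verbatim}\n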
% It prevents the added zero values to have a negative effect on the model's performance.\n%The inputs are the sampled in/out values of the system given in form of $n$ time series of length $L$.\n\\begin{equation}\\label{eq:model_as_function}\n\\begin{split}\n    \\hat{O} {}&{}= \\langle \\hat{o}_i \\in S \\rangle^L_{i=1} = \\mathcal{F}\\left(\\left[ {\\delta(V_1)}^\\intercal \\: {\\delta(V_2)}^\\intercal \\; \\ldots \\; {\\delta(V_n)}^\\intercal \\right],m\\right) \\\\\n    % \\delta(V_i) {}&{}= \\left[V_i \\quad \\vec{0}_{L-l}\\right] \\\\\n    m {}&{}= \\delta(\\vec{\\mathds{1}}_l) \\quad\\text{i.e.}\\quad \\langle {m}_j \\rangle^l_{j=1} = 1\\:, \\: \\langle {m}_j \\rangle^L_{j=l+1} = 0 \\\\\n\\end{split}\n\\end{equation}\nHere $l$ denotes the length of the input before padding. It is equal to $l_k$ for the $k$th training example ($T_k$).\n\n\nAs defined in \\eqref{eq:change_point}, $CP_k$s are tuples of $(t, s)$ which indicate that the system has gone into state $s$ at time $t$. To train the model, $CP_k$ needs to be expanded into a vector of length $L$ denoted by $O$ where each element $o_t$ holds the state at time $t$. To define it formally, the elements can be derived from $CP_k$ using the following formula:\n\\begin{equation} \\label{eq:output}\n\\begin{split}\nO = \\langle  \\forall t \\in \\mathbb{N}_L : s_i \\:|\\:\n           (& t_{s_i}, s_i) \\in CP_k \\: \\land  \\\\\n            & t_{s_i} = \\max\\{t_{s_j} \\:|\\: (t_{s_j}, s_j) \\in CP_k \\land t_{s_j}\\leq t\\} \\rangle\n\\end{split}\n\\end{equation}\nFor example: Suppose $L=10$ and $CP = \\{(0, a),\\:(3, b),\\:(5, c),\\:(8, a)\\}$ \n% which means starting with state $a$ then going to state $b$ at time $t=3$,\\ldots and it goes on for 10 steps; \nthen $O = \\langle a\\:a\\:a\\:b\\:b\\:c\\:c\\:c\\:a\\:a \\rangle$.\nIf there are more than two possible states ($N_s > 2$), $O$ needs to be one-hot encoded at this stage.\n\n\n\n\\section{The Model Implementation} \\label{sec:architecture_detail}\nThe first few layers of the model are convolutional layers. I used 5 convolutional layers with 64 filters each and a growing kernel size.\nThe intuition behind this design is that starting with a small kernel guides the training in a way that the first layers learn simpler, more local features that fit in their window (kernel size).\nKernel sizes started at 3, since it is a common choice in the literature, and then went through multiples of 5 from 5 to 20. \nThe rationale behind choosing 5 is that the sampling frequency is 5, so each layer with a kernel size of $5n$ processes a whole $n$ seconds' worth of simulation data in each step.\nStopping at a kernel size of 20 was a compromise between generalizability and model size. Generally, a larger model has more learning capacity, but it is also more prone to over-fitting. The current models are the smallest I could make them (to avoid over-fitting) without compromising performance.\n\nThe same compromise was made in the second section of the model (recurrent layers); the sweet spot for the hyper-parameters here was two GRU layers with 128 cells each. Their output was fed into a fully connected layer with 128 neurons with a leaky ReLU ($\\alpha=0.3$) activation function \\cite{maas2013rectifier} and finally into a dense layer with $N_s=25$ units with softmax activation.\nI used the Adam optimizer \\cite{kingma2014adam}, which converged in 60--80 epochs, i.e., the validation accuracy plateaued. The full architecture can be seen in figure~\\ref{fig:model_arch}.\n
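\nAs a compact illustration of this architecture (a hedged sketch, not the exact training code: the input width \\texttt{n} is a placeholder, masking is omitted, and categorical cross-entropy stands in for the dice loss discussed above):\n\n\\begin{verbatim}\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\nL, n, N_s = 18000, 30, 25   # n = number of I/O signals (placeholder)\n\nmodel = tf.keras.Sequential()\n# five convolutional layers, 64 filters each, growing kernels\nmodel.add(layers.Conv1D(64, 3, padding=\"same\", activation=\"relu\",\n                        input_shape=(L, n)))\nfor k in (5, 10, 15, 20):\n    model.add(layers.Conv1D(64, k, padding=\"same\", activation=\"relu\"))\n# two sequence-to-sequence recurrent layers\nmodel.add(layers.GRU(128, return_sequences=True))\nmodel.add(layers.GRU(128, return_sequences=True))\n# dense head: 128 units with leaky ReLU, then softmax over the states\nmodel.add(layers.Dense(128))\nmodel.add(layers.LeakyReLU(alpha=0.3))\nmodel.add(layers.Dense(N_s, activation=\"softmax\"))\nmodel.compile(optimizer=\"adam\", loss=\"categorical_crossentropy\")\n\\end{verbatim}\n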
\\begin{landscape}\n\\begin{figure*}\n    \\centering\n    \\includegraphics[width=\\columnwidth]{ASE_files/Convolutional_Net.pdf}\n    \\caption{Model architecture in a nutshell. Tandem convolutional layers with increasing kernel size fed into two sequence-to-sequence recurrent layers with 128 GRU cells each, which is then fed into dense layers to output the predicted system state, as a list of one-hot encoded states. $\\hat{O}$ will be the result of applying the argmax operation on the last layer's output. $L=18000,\\: N_s=25$}\n    \\label{fig:model_arch}\n\\end{figure*}\n\\end{landscape}\n", "meta": {"hexsha": "7d0f0639ca739e83f174c85758ae7e4e102a33cc", "size": 14033, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6_Approach.tex", "max_stars_repo_name": "MJafarMashhadi/University-of-Calgary-Graduate-Thesis", "max_stars_repo_head_hexsha": "b3745c9d254c818e8df359ad448148a17edc04bf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-09T07:18:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-09T07:18:10.000Z", "max_issues_repo_path": "6_Approach.tex", "max_issues_repo_name": "MJafarMashhadi/University-of-Calgary-Graduate-Thesis", "max_issues_repo_head_hexsha": "b3745c9d254c818e8df359ad448148a17edc04bf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6_Approach.tex", "max_forks_repo_name": "MJafarMashhadi/University-of-Calgary-Graduate-Thesis", "max_forks_repo_head_hexsha": "b3745c9d254c818e8df359ad448148a17edc04bf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-14T16:04:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-14T16:04:29.000Z", "avg_line_length": 117.9243697479, "max_line_length": 622, "alphanum_fraction": 0.7617758142, "num_tokens": 3519, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.7090191337850932, "lm_q1q2_score": 0.5605011217740264}}
{"text": "\\newcommand{\\globalassum}{Assumptions \\ref{assu:paper_smoothness}---\\ref{assu:paper_bounded} }\n\n\\section{Detailed assumptions, lemmas, and proofs\\label{sec:appendix_proofs}}\n\n\n\n\n\n\\subsection{Tools}\n\nWe begin by stating two general propositions that will be useful.\nFirst, we show that a version of Cauchy-Schwartz can be applied to\nweighted sums of tensors.\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{prop}\n\\label{propref:tensor_cauchy_schwartz}Tensor array version of H{\\\"o}lder's\ninequality. Let $w$ be an array of scalars and let $a=\\left(a_{1},...,a_{N}\\right)$\nbe an array of tensors, were each $a_{n}$ is indexed by $i=1,\\ldots,D_{A}$\n($i$ may be a multi-index---e.g., if $A$ is a $D\\times D$ matrix,\nthen $i=\\left(j,k\\right)$, for $j,k\\in\\left[D\\right]$ and $D_{A}=D^{2}$).\nLet $p,q\\in\\left[1,\\infty\\right]$ be two numbers such that $p^{-1}+q^{-1}=1$.\nThen\n\\begin{align*}\n\\norm{\\frac{1}{N}\\sum_{n=1}^{N}w_{n}a_{n}}_{1} & \\le\\frac{D_{A}^{\\frac{1}{p}}}{N}\\norm w_{p}\\norm a_{q}.\n\\end{align*}\nIn particular, with $p=q=2$,\n\\begin{align*}\n\\norm{\\frac{1}{N}\\sum_{n=1}^{N}w_{n}a_{n}}_{1} & \\le\\sqrt{D_{A}}\\frac{\\norm w_{2}}{\\sqrt{N}}\\frac{\\norm a_{2}}{\\sqrt{N}}.\n\\end{align*}\n\\end{prop}\n%\n\\begin{proof}\nThe conclusion follows from the ordinary H{\\\"o}lder's inequality\napplied term-wise to $n$ and Jensen's inequality applied to the indices\n$i$.\n\n\\begin{align*}\n\\norm{\\frac{1}{N}\\sum_{n=1}^{N}w_{n}a_{n}}_{1} & =\\sum_{i=1}^{D_{A}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}w_{n}\\left(a_{n}\\right)_{i}\\right|\\\\\n & \\le\\frac{1}{N}\\sum_{i=1}^{D_{A}}\\left|\\left(\\sum_{n=1}^{N}\\left|w_{n}\\right|^{p}\\right)^{\\frac{1}{p}}\\left(\\sum_{n=1}^{N}\\left|\\left(a_{n}\\right)_{i}\\right|^{q}\\right)^{\\frac{1}{q}}\\right|\\text{(H{\\\"o}lder)}\\\\\n & =\\frac{1}{N}\\norm w_{p}\\frac{D_{A}}{D_{A}}\\sum_{i=1}^{D_{A}}\\left(\\sum_{n=1}^{N}\\left|\\left(a_{n}\\right)_{i}\\right|^{q}\\right)^{\\frac{1}{q}}\\\\\n & \\le\\frac{1}{N}\\norm w_{p}D_{A}\\left(\\frac{1}{D_{A}}\\sum_{i=1}^{D_{A}}\\sum_{n=1}^{N}\\left|\\left(a_{n}\\right)_{i}\\right|^{q}\\right)^{\\frac{1}{q}}\\textrm{ (Jensen applied to }i\\textrm{)}\\\\\n & =\\frac{1}{N}\\norm w_{p}D_{A}\\left(\\frac{1}{D_{A}}\\sum_{n=1}^{N}\\norm{a_{n}}_{q}^{q}\\right)^{\\frac{1}{q}}\\\\\n & =\\frac{1}{N}\\norm w_{p}D_{A}^{1-\\frac{1}{q}}\\norm a_{q}\\\\\n & =\\frac{D_{A}^{\\frac{1}{p}}}{N}\\norm w_{p}\\norm a_{q}.\n\\end{align*}\n\\end{proof}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%\nNext, we prove a relationship between the term-wise difference between\nmatrices and the difference between their operator norms. It is well-known\nthat the minimum eigenvalue of a non-singular matrix is continuous\nin the entries of the matrix. 
In the next proposition, we quantify\nthis continuity for the $L_{1}$ norm.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{prop}\n\\label{propref:operator_norm_continuity}\n%\nLet $A$ and $B$ be two square matrices of the same size.\nLet $\\norm{A^{-1}}_{op}\\le \\constop$ for some finite $\\constop$. Then\n%\n\\begin{align*}\n\\norm{A-B}_{1}\\le \\frac{1}{2} (\\constop)^{-1} &\n    \\quad\\Rightarrow\\quad\\norm{B^{-1}}_{op} \\le 2 \\constop.\n\\end{align*}\n%\n\\begin{proof}\n%\nWe will use the results stated in Theorem 4.29 of \\citet{schott:2016:matrix} and\nthe associated discussion in Example 4.14, which establish the following result.\nLet $A$ be a square, nonsingular matrix, and let $I$ be the identity matrix of\nthe same size.  Let $\\norm{\\cdot}$ denote any matrix norm satisfying $\\norm{I} =\n1$.  Let $D$ be a matrix of the same size as $A$ satisfying\n%\n\\begin{align}\\eqlabel{ab_matrix_condition}\n\\norm{A^{-1}} \\norm{D} < 1.\n\\end{align}\n%\nThen\n%\n\\begin{align}\\label{eq:matrix_norm_continuity}\n    \\norm{A^{-1} - (A + D)^{-1}} \\le\n    \\frac{\\norm{A^{-1}}\\norm{D}}{1 - \\norm{A^{-1}}\\norm{D}} \\norm{A^{-1}}.\n\\end{align}\n%\nWe will apply \\eqref{matrix_norm_continuity} using the operator norm\n$\\norm{\\cdot}_{op}$, for which $\\norm I_{op}=1$ and with $D := B - A$.\nBecause $\\norm{A^{-1}}_{op}\\le \\constop$, $A$ is invertible.\n\nAssume that $\\norm{A-B}_{1}\\le \\frac{1}{2} (\\constop)^{-1}$.  First, note that\n%\n\\begin{align}\\label{eq:ab_matrix_condition_fulfilled}\n\\norm{A^{-1}}_{op} \\norm{D}_{op} &=\n    \\norm{A^{-1}}_{op}\\norm{B - A}_{op} \\nonumber \\\\\n&\\le\\norm{A^{-1}}_{op}\\norm{B - A}_{1}\n    & \\textrm{(ordering of matrix norms)}\\nonumber \\\\\n & \\le \\constop \\frac{1}{2} (\\constop)^{-1}\n    & \\textrm{(by assumption)} \\nonumber \\\\\n&= \\frac{1}{2}  < 1,\n\\end{align}\n%\nso \\eqref{ab_matrix_condition} is satisfied and we can apply\n\\eqref{matrix_norm_continuity}. Then\n%\n\\begin{align*}\n\\norm{B^{-1}}_{op}\n & \\le \\norm{B^{-1}-A^{-1}}_{op} + \\norm{A^{-1}}_{op}\n    & \\textrm{ (triangle inequality)}\\\\\n & \\le \\frac{\\norm{A^{-1}}_{op}\\norm{B - A}_{op}}\n            {1 - \\norm{A^{-1}}_{op}\\norm{B - A}_{op}}\n        \\norm{A^{-1}}_{op} + \\norm{A^{-1}}_{op}\n     & \\textrm{(\\eqref{matrix_norm_continuity})}\\\\\n & \\le \\frac{\\frac{1}{2}}{1-\\frac{1}{2}}\\norm{A^{-1}}_{op} +\n    \\norm{A^{-1}}_{op}\n    &\\textrm{(\\eqref{ab_matrix_condition_fulfilled})} \\\\\n & \\le 2 \\constop.&\\textrm{(by assumption)}\n\\end{align*}\n\\end{proof}\n%\n\\end{prop}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nNext, we define the quantities needed to make use of the integral form of\nthe Taylor series remainder.\\footnote{We are indebted to Pang Wei Koh for\npointing out the need to use the integral form of the remainder for\nTaylor series expansions of vector-valued functions.}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{prop}\n\\label{propref:taylor_series_remainder}\n%\nFor any $\\theta \\in \\Omega_{\\theta}$ and any $\\tilde{w} \\in W$,\n%\n\\begin{align*}\nG(\\theta, \\tilde{w}) - G(\\thetaone, \\tilde{w}) =&\n    \\left(\\int_0^1 H(\\thetaone + t (\\theta - \\thetaone), \\tilde{w}) dt\\right)\n    \\left(\\theta - \\thetaone\\right)\n\\end{align*}\n%\n\\end{prop}\n%\n\\begin{proof}\n%\nLet $G_d(\\theta, \\tilde{w})$ denote the $d$-th component of the vector\n$G(\\theta, \\tilde{w})$, and define the function $f_d(t) := G_d(\\thetaone + t\n(\\theta - \\thetaone), \\tilde{w})$. The proposition follows by taking the\nintegral remainder form of the zero-th order Taylor series expansion of $f_d(t)$\naround $t=0$ \\citep[Appendix B.2]{dudley:2018:analysis}, and stacking the result\ninto a vector.\n%\n\\end{proof}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nThe Taylor series residual of \\proprefref{taylor_series_remainder} will show up\nrepeatedly, so we will give it a concise name in the following definition.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{defn}\n\\label{defref:hess_integral}\n%\nFor a fixed weight $w$ and a fixed parameter $\\theta$, define the Hessian\nintegral\n%\n\\begin{align*}\n\\hint(\\theta, w) :=&\n    \\int_0^1 H(\\thetaone + t (\\theta - \\thetaone), w) dt.\n\\end{align*}\n\\end{defn}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\\subsection{Lemmas}\n\nWe now prove some useful consequences of our assumptions. The proof\nroughly proceeds for all $w\\in W_{\\delta}$ by the following steps:\n%\n\\begin{enumerate}\n\\item When $\\delta$ is small we can make $\\norm{{{\\thetaw}}-\\thetaone}_{2}$\nsmall. (\\lemref{theta_difference_bound} below.)\n\\item When $\\norm{\\theta-\\thetaone}_{2}$ is small, then the derivatives\n$H\\left(\\theta,w\\right)$ are close to their optimal value, $H\\left(\\thetaone,\\onevec\\right)$.\n(\\lemref{bounded_continuous} and \\lemref{gh_difference_from_one}\nbelow.)\n\\item When the derivatives are close to their optimal values, then $H\\left(\\theta,w\\right)$\nis uniformly non-singular. (\\lemref{continuous_invertibility} below.)\n\\item When the derivatives are close to their optimal values and $H\\left(\\theta,w\\right)$\nis uniformly non-singular we can control the error in $\\thetaij-\\thetaw$\nin terms of $\\delta$. 
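\nAs a concrete sanity check (an added illustration, not part of the original argument): take $A = I$, so that $\\norm{A^{-1}}_{op} = 1$ and we may take $\\constop = 1$. Any $B$ with $\\norm{I - B}_{1} \\le \\frac{1}{2}$ then satisfies $\\norm{B^{-1}}_{op} \\le 2$. The constant is tight here: $B = \\mathrm{diag}\\left(\\frac{1}{2}, 1, \\ldots, 1\\right)$ has $\\norm{I - B}_{1} = \\frac{1}{2}$ and $\\norm{B^{-1}}_{op} = 2$.\n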
(\\thmrefref{taylor_error_first} below.)\n\\end{enumerate}\n%\nWe begin by showing that the difference between $\\thetaw$ and $\\thetaone$\nfor $w\\in W_{\\delta}$ can be made small by making $\\delta$ from\n\\condref{paper_uniform_bound} small.  First, however, we need to prove that\noperator norm bounds on $H(\\theta, w)$ also apply to the integrated Hessian\n$\\hint(\\theta, w)$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{lem}\\label{lem:hess_integral_invertible}\nInvertibility of the integrated Hessian.\n%\nIf, for some domain $\\Omega$ and some constant $C$, $\\sup_{\\theta \\in \\Omega}\n\\norm{H(\\theta, w)^{-1}}_{op} \\le C$, then\n$\\sup_{\\theta \\in \\Omega} \\norm{\\hint(\\theta, w)^{-1}}_{op} \\le C$.\n%\n\\end{lem}\n\n\\begin{proof}\n%\nBy definition of the operator norm (using that the Hessian, and hence\n$\\hint$, is symmetric),\n%\n\\begin{align*}\n\\norm{\\hint(\\theta, w)^{-1}}_{op}^{-1} =&\n    \\min_{v \\in \\mathbb{R}^D: \\norm{v}_2 = 1} v^T \\hint(\\theta, w) v \\\\\n=& \\min_{v \\in \\mathbb{R}^D: \\norm{v}_2 = 1}\n    \\int_0^1 v^T H(\\thetaone + t (\\theta - \\thetaone), w) v dt \\\\\n\\ge& \\int_0^1 \\min_{v \\in \\mathbb{R}^D: \\norm{v}_2 = 1}\n    v^T H(\\thetaone + t (\\theta - \\thetaone), w) v dt \\\\\n\\ge& \\inf_{\\theta \\in \\Omega} \\min_{v \\in \\mathbb{R}^D: \\norm{v}_2 = 1}\n        v^T H(\\theta, w) v \\\\\n\\ge& C^{-1}.\n\\end{align*}\n%\nThe result follows by inverting both sides of the inequality.\n%\n\\end{proof}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{lem}\n\\label{lem:theta_difference_bound}Small parameter changes. Under\n\\globalassum and \\condref{paper_uniform_bound},\n\\begin{align*}\n\\textrm{for all }w\\in W_{\\delta},\\quad\\norm{{{\\thetaw}}-\\thetaone}_{2} &\n    \\le\\constop\\delta.\n\\end{align*}\n\\end{lem}\n%\n\\begin{proof}\n%\nApplying \\proprefref{taylor_series_remainder} with $\\theta = \\thetaw$\nand $\\tilde{w} = \\onevec$ gives\n%\n%\n\\begin{align*}\nG\\left(\\thetaw,\\onevec\\right) &\n    =G\\left(\\thetaone,\\onevec\\right) +\n    \\hint\\left(\\thetaw, \\onevec\\right)\n    \\left(\\thetaw-\\thetaone\\right).\n\\end{align*}\n%\nBy \\assuref{paper_hessian} and\n% $\\sup_{\\theta \\in \\Omega_\\theta}\n% \\norm{H(\\theta,\\onevec)^{-1}} \\le \\constop$, so by\n\\lemref{hess_integral_invertible},\n$\\sup_{\\theta \\in \\Omega_\\theta}\n\\norm{\\hint(\\theta,\\onevec)^{-1}}_{op} \\le \\constop$.\n%\nIn particular, $\\hint(\\theta,\\onevec)$ is non-singular.\nA little manipulation, together with the fact that\n$G\\left(\\thetaw,w\\right)=G\\left(\\thetaone,\\onevec\\right)=0$, gives\n%\n\\begin{align*}\n\\thetaw-\\thetaone & =\n    \\hint\\left(\\thetaw,\\onevec\\right)^{-1}\n    \\left(G\\left(\\thetaw, \\onevec\\right) - G\\left(\\thetaw,w\\right)\\right).\n\\end{align*}\n%\nTaking the norm of both sides gives\n%\n\\begin{align*}\n\\norm{{{\\thetaw}}-\\thetaone}_{2} & =\n    \\norm{\\hint\\left(\\thetaw,\\onevec\\right)^{-1}\n        \\left(G\\left({{\\thetaw}},\\onevec\\right) -\n             G\\left({{\\thetaw}},w\\right)\\right)}_{2}\\\\\n& \\le\\norm{\\hint\\left(\\thetaw,\\onevec\\right)^{-1}}_{op}\n    \\norm{\\left(G\\left({{\\thetaw}},\\onevec\\right) -\n                G\\left({{\\thetaw}},w\\right)\\right)}_{2}\\\\\n& \\le\\constop\\norm{G\\left({{\\thetaw}},\\onevec\\right) -\n                   G\\left({{\\thetaw}},w\\right)}_{2}\n    \\textrm{ (Lemma \\ref{lem:hess_integral_invertible})}\\\\\n& \\le\\constop\\norm{\n    G\\left({{\\thetaw}},\\onevec\\right) -\n    
G\\left({{\\thetaw}},w\\right)}_{1}\\textrm{ (relation between norms)}\\\\\n& \\le\\constop\\sup_{\\theta\\in\\Omega_{\\theta}}\n    \\norm{G\\left(\\theta,\\onevec\\right)-G\\left(\\theta,w\\right)}_{1}\\\\\n& \\le\\constop\\delta.\\textrm{ (Condition \\ref{cond:paper_uniform_bound}).}\n%\n\\end{align*}\n%\n\\end{proof}\n%%%%%%%%%%%%%%%%%%%%%%%\n%\nBecause we will refer to it repeatedly, we give the set of $\\theta$\ndefined in \\lemref{theta_difference_bound} a name.\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{defn}\nFor a given $\\delta$, define the region around $\\thetaone$ given\nby \\lemref{theta_difference_bound} as\n\n\\begin{align*}\n\\thetadeltaball & :=\\left\\{ \\theta:\\norm{\\theta-\\thetaone}_{2}\\le\\constop\\delta\\right\\} \\bigcap\\Omega_{\\theta}.\n\\end{align*}\n\\end{defn}\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\nIn other words, \\lemref{theta_difference_bound} states that \\condref{paper_uniform_bound}\nimplies $\\thetaw\\in\\thetadeltaball$ when $w\\in W_{\\delta}$.\n\nNext, we show that closeness in $\\theta$ will mean closeness in $H\\left(\\theta,w\\right)$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{lem}\n\\label{lem:bounded_continuous} Boundedness and continuity. Under\n\\paperallcoreassum and \\condref{paper_uniform_bound},\n%\n\\begin{align*}\n\\textrm{for all }\\theta\\in\\thetaball,\\quad\\sup_{w\\in W}\n    \\norm{H\\left(\\theta,w\\right) - H\\left(\\thetaone,w\\right)}_{1}\n    &\\le D\\constw\\liph\\norm{\\theta-\\thetaone}_{2}.\n\\end{align*}\n\\end{lem}\n\\begin{proof}\n%\nFor $\\theta\\in\\thetaball$,\n%\n\\begin{align*}\n\\sup_{w\\in W}\\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,w\\right)}_{1} &=\n    \\sup_{w\\in W}\\norm{\\frac{1}{N}\n    \\sum_{n=1}^{N}w_{n}\\left(h_{n}\\left(\\theta\\right) -h_{n}\\left(\\thetaone\\right)\\right)}_{1}\\textrm{ (by definition)}\\\\\n & \\le D\\sup_{w\\in W}\\frac{\\norm w_{2}}{\\sqrt{N}}\n    \\frac{\\norm{h\\left(\\theta\\right) -\n                h\\left(\\thetaone\\right)}_{2}}{\\sqrt{N}}\n        \\textrm{ (Proposition \\ref{propref:tensor_cauchy_schwartz})}\\\\\n & \\le D\\constw\\frac{\\norm{h\\left(\\theta\\right) -\n                           h\\left(\\thetaone\\right)}_{2}}{\\sqrt{N}}\n       \\textrm{ (Assumption \\ref{assu:paper_weight_bounded})}\\\\\n & \\le D\\constw\\liph\\norm{\\theta-\\thetaone}_{2}\n    \\textrm{ (Assumption \\ref{assu:paper_lipschitz} and }\n    \\theta\\in\\thetaball\\textrm{)}.\n\\end{align*}\n%\n\\end{proof}\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\nWe now combine \\lemref{theta_difference_bound} and \\lemref{bounded_continuous}\nto show that $H\\left(\\theta,w\\right)$ is close to its value at the\nsolution $H\\left(\\thetaone,\\onevec\\right)$ for sufficiently small\n$\\delta$ and for all $\\theta\\in\\thetadeltaball$.\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{lem}\n\\label{lem:gh_difference_from_one}Bounds for difference in parameters.\nUnder \\paperallcoreassum and \\condref{paper_uniform_bound}, if $\\delta\\le\\thetasize\\constop[-1]$,\nthen\n\\begin{align*}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}} &\n    \\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\n    \\le\\left(1 + D\\constw\\liph\\constop\\right)\\delta.\n\\end{align*}\n\\end{lem}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{proof}\nBy \\lemref{theta_difference_bound}, $\\delta\\le\\thetasize\\constop[-1]$\nimplies that 
$\\constop\\delta\\le\\thetasize$ and so $\\thetadeltaball\\subseteq\\thetaball$.\nConsequently, we can apply \\lemref{bounded_continuous}:\n\\begin{align*}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,w\\right)}_{1} & \\le D\\constw\\liph\\sup_{\\theta\\in\\thetadeltaball}\\norm{\\theta-\\thetaone}_{2}\\textrm{ (Lemma \\ref{lem:bounded_continuous}, since }\\thetadeltaball\\subseteq\\thetaball\\textrm{)}\\\\\n & \\le D\\constw\\liph\\constop\\delta\\quad\\textrm{ (by the definition of }\\thetadeltaball\\textrm{)}.\n\\end{align*}\n%\nNext, we can use this to write\n\\begin{align*}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}} & \\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\\\\\n & =\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\theta,\\onevec\\right)+H\\left(\\theta,\\onevec\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\\\\\n & \\le\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\theta,\\onevec\\right)}_{1}+\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,\\onevec\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\\\\\n & \\le\\sup_{\\theta\\in\\Omega_{\\theta}}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\theta,\\onevec\\right)}_{1}+\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,\\onevec\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\\\\\n & \\le\\delta+\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,\\onevec\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1}\\textrm{ (Condition \\ref{cond:paper_uniform_bound})}\\\\\n & \\le\\delta+D\\constw\\liph\\constop\\delta.\n\\end{align*}\n\\end{proof}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%\nThe constant that appears multiplying $\\delta$ at the end of the proof of\n\\lemref{gh_difference_from_one} will appear often in what follows, so we give it\nthe special name $\\constij$ in \\defrefref{constants_definition}.\n\nNote that \\lemref{gh_difference_from_one} places a condition on how small\n$\\delta$ must be in order for our regularity conditions to apply.\n\\lemref{theta_difference_bound} guarantees that $\\thetaw\\in\\thetadeltaball$,\nbut if we cannot make $\\delta$ arbitrarily small in\n\\condref{paper_uniform_bound}, then we cannot guarantee that\n$\\thetadeltaball\\subseteq\\thetaball$, Lipschitz continuity is no longer\navailable to us, and none of our results apply.\n\nNext, using \\lemref{gh_difference_from_one}, we can extend the operator bound on\n$\\hone^{-1}$ from \\assuref{paper_hessian} to $H\\left(\\theta,w\\right)^{-1}$ for\nall $w\\in W_{\\delta}$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{lem}\n\\label{lem:continuous_invertibility}Uniform invertibility of the\nHessian. 
Under \\paperallcoreassum and \\condref{paper_uniform_bound}, if $\\delta\\le\\min\\left\\{ \\thetasize\\constop[-1],\\frac{1}{2}\\constij^{-1}\\constop[-1]\\right\\} $,\nthen\n\\begin{align*}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)^{-1}}_{op} & \\le2\\constop.\n\\end{align*}\n\\end{lem}\n\\begin{proof}\nBy \\assuref{paper_hessian}, $\\norm{H\\left(\\thetaone,\\onevec\\right)^{-1}}_{op}\\le\\constop$.\nSo by \\proprefref{operator_norm_continuity}, it will suffice to select\n$\\delta$ so that\n\\begin{align}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1} & \\le\\frac{1}{2}\\constop[-1].\\label{eq:h_bound}\n\\end{align}\nWhen we can apply \\lemref{gh_difference_from_one}, we have\n\\begin{align*}\n\\sup_{\\theta\\in\\thetadeltaball}\\sup_{w\\in W_{\\delta}}\\norm{H\\left(\\theta,w\\right)-H\\left(\\thetaone,\\onevec\\right)}_{1} & \\le\\constij\\delta.\n\\end{align*}\nSo $H\\left(\\theta,w\\right)$ will satisfy \\eqref{h_bound} if we can\napply \\lemref{gh_difference_from_one} and if\n\\begin{align*}\n\\delta\\le & \\frac{1}{2}\\constop[-1]\\constij^{-1}.\n\\end{align*}\nTo apply \\lemref{gh_difference_from_one} we additionally require\nthat $\\delta\\le\\thetasize\\constop[-1]$. By taking $\\delta\\le\\min\\left\\{ \\thetasize\\constop[-1],\\frac{1}{2}\\constop[-1]\\constij^{-1}\\right\\} $,\nwe satisfy \\eqref{h_bound} and the result follows.\n\\end{proof}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nNext, we show that a version of \\lemref{gh_difference_from_one} also applies\nto the integrated Hessian $\\hint(\\theta, w)$ when $\\theta \\in \\thetadeltaball$.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{lem}\n\\label{lem:int_h_difference_from_one}\nBounds for difference of the integrated Hessian.\n%\nUnder \\paperallcoreassum and \\condref{paper_uniform_bound}, if $\\delta\\le\\thetasize\\constop[-1]$ and $\\theta \\in \\thetadeltaball$, then\n%\n\\begin{align*}\n\\sup_{w\\in W_{\\delta}}\\norm{\n    \\hint\\left(\\theta, w\\right) -\n    H(\\thetaone, \\onevec)}_{1}\n\\le& \\left(1 + D\\constw\\liph\\constop\\right)\\delta.\n\\end{align*}\n%\n\\end{lem}\n%\n\\begin{proof}\n\\begin{align*}\n\\MoveEqLeft\n\\sup_{w\\in W_{\\delta}}\\norm{\n    \\hint\\left(\\theta, w\\right) -\n    H(\\thetaone, \\onevec)}_{1} &\\\\\n=& \\sup_{w\\in W_{\\delta}}\\norm{\n    \\int_0^1 \\left(\n        H(\\thetaone + t (\\theta - \\thetaone), w) -\n        H(\\thetaone, \\onevec)\\right) dt}_{1}\n        &\\textrm{ (Definition \\ref{defref:hess_integral})} \\\\\n\\le&\n    \\sup_{w\\in W_{\\delta}}\n    \\int_0^1 \\norm{\n        H(\\thetaone + t (\\theta - \\thetaone), w) -\n        H(\\thetaone, \\onevec)}_{1} dt\n        &\\textrm{ (Jensen's inequality)} \\\\\n\\le&\n    \\sup_{\\theta\\in\\thetadeltaball} \\sup_{w\\in W_{\\delta}}\n    \\norm{H(\\theta, w) - H(\\thetaone, \\onevec)}_{1} &\\\\\n\\le&\n    \\left(1 + D\\constw\\liph\\constop\\right)\\delta.\n    &\\textrm{ (Lemma \\ref{lem:gh_difference_from_one})}\n\\end{align*}\n\\end{proof}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%\n\nWith these results in hand, the upper bound on $\\delta$ will at last be\nsufficient to control the error terms in our approximation. 
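Concretely, collecting the two requirements that appear in \\lemref{gh_difference_from_one} and \\lemref{continuous_invertibility}, the bound in question is\n%\n\\begin{align*}\n\\delta & \\le\\min\\left\\{ \\thetasize\\constop[-1],\\frac{1}{2}\\constij^{-1}\\constop[-1]\\right\\}.\n\\end{align*}\n%\n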
For compactness, we\ngive the upper bound on $\\delta$ the name $\\deltasize$ in\n\\defrefref{constants_definition}.\n\nFinally, we state a result that will allow us to define derivatives\nof $\\thetaw$ with respect to $w$.\n\\begin{lem}\n\\label{lem:implicit_function_theorem}Implicit function theorem. Under\n\\paperallcoreassum and \\condref{paper_uniform_bound}, and for $\\delta\\le\\deltasize$,\nthere exists a continuously differentiable function of $w$, $\\thetahat\\left(w\\right)$,\nsuch that, for all $w\\in W$, $G\\left(\\thetahat\\left(w\\right),w\\right)=0$.\n\\end{lem}\n\\begin{proof}\nThis follows from \\lemref{continuous_invertibility} and the implicit\nfunction theorem.\n\\end{proof}\nBy definition, $\\thetahat\\left(\\onevec\\right)=\\thetaone$.\n\n\n\n\n\\subsection{Bounding the errors in a Taylor expansion}\n\nWe are now in a position to use \\paperallcoreassum and \\condref{paper_uniform_bound}\nto bound the error terms in a first-order Taylor expansion of $\\thetaw$.\nWe begin by simply calculating the derivative $d\\thetahat\\left(w\\right)/dw$.\n\\begin{prop}\n\\label{propref:theta_w_first_derivative}For any $w\\in W$ for which\n$H\\left(\\thetaw,w\\right)$ is invertible, and for any vector $a\\in\\mathbb{R}^{N}$,\n\\begin{align*}\n\\frac{d\\thetaw}{dw^{T}}\\at{w}a & =-H\\left(\\thetaw,w\\right)^{-1}G\\left(\\thetaw,a\\right).\n\\end{align*}\n\\end{prop}\n\\begin{proof}\nBecause $G\\left(\\thetaw,w\\right)=0$ for all $w\\in W$, by direct\ncalculation,\n\\begin{align*}\n0 & =\\frac{d}{dw^{T}}G\\left(\\thetaw,w\\right)\\at{w}a\\\\\n & =\\left(\\frac{\\partial G}{\\partial\\theta^{T}}\\frac{d\\hat{\\theta}}{dw^{T}}+\\frac{\\partial G}{\\partial w^{T}}\\right)\\at{_{w}}a\\\\\n & =H\\left(\\thetaw,w\\right)\\frac{d\\hat{\\theta}}{dw^{T}}\\at{_{w}}a+\\left(\\frac{\\partial}{\\partial w^{T}}\\frac{1}{N}\\sum_{n=1}^{N}w_{n}g_{n}\\left(\\theta\\right)\\right)\\at{_{w}}a\\\\\n & =H\\left(\\thetaw,w\\right)\\frac{d\\hat{\\theta}}{dw^{T}}\\at{_{w}}a+\\frac{1}{N}\\sum_{n=1}^{N}g_{n}\\left(\\thetaw\\right)a_{n}\\\\\n & =H\\left(\\thetaw,w\\right)\\frac{d\\hat{\\theta}}{dw^{T}}\\at{_{w}}a+G\\left(\\thetaw,a\\right).\n\\end{align*}\nBecause $H\\left(\\thetaw,w\\right)$ is invertible by assumption, the\nresult follows.\n\\end{proof}\n\\begin{defn}\n\\label{defref:theta_infinitesimal_jackknife}Define\n\\begin{align*}\n\\thetaij\\left(w\\right) & :=\\thetaone+\\frac{d\\thetaw}{dw^{T}}\\at{\\onevec}\\left(w-\\onevec\\right)\\\\\n & =\\thetaone-\\hone^{-1}G\\left(\\thetaone,w\\right).\\textrm{ (because }G\\left(\\thetaone,\\onevec\\right)=0\\textrm{)}\n\\end{align*}\n\\end{defn}\n%\n$\\thetaij\\left(w\\right)$ in \\defrefref{theta_infinitesimal_jackknife}\nis the first term in a Taylor series expansion of $\\thetaw$ as a\nfunction of $w$. 
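As a concrete illustration (not needed for the results below): for leave-one-out weights $w=\\onevec-e_{n}$, where $e_{n}$ is the $n^{th}$ standard basis vector, $G\\left(\\thetaone,w\\right)=G\\left(\\thetaone,\\onevec\\right)-\\frac{1}{N}g_{n}\\left(\\thetaone\\right)=-\\frac{1}{N}g_{n}\\left(\\thetaone\\right)$, so \\defrefref{theta_infinitesimal_jackknife} reduces to the familiar infinitesimal jackknife form\n%\n\\begin{align*}\n\\thetaij\\left(\\onevec-e_{n}\\right) & =\\thetaone+\\frac{1}{N}\\hone^{-1}g_{n}\\left(\\thetaone\\right).\n\\end{align*}\n%\n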
We want to bound the error, $\\thetaij\\left(w\\right)-\\thetaw$.\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{thm}\n\\label{thmref:taylor_error_first}\n\nUnder \\paperallcoreassum and \\condref{paper_uniform_bound},\nwhen $\\delta\\le\\deltasize$,\n%\n\\begin{align*}\n\\sup_{w\\in W_{\\delta}}\\norm{\\thetaij\\left(w\\right)-\\thetahat\\left(w\\right)}_{2} & \\le2\\constop[2]\\constij\\delta^{2}.\n\\end{align*}\n%\n\\end{thm}\n%\n\\begin{proof}\n%\nApplying \\proprefref{taylor_series_remainder} with $\\theta = \\thetaw$ and\n$\\tilde{w} = w$, we have\n%\n\\begin{align*}\n0=G\\left(\\thetaw,w\\right) & =\n    G\\left(\\thetaone, w\\right)\n    +\\hint\\left(\\thetaw, w\\right) \\left(\\thetaw[w] - \\thetaone\\right).\n\\end{align*}\n%\nBecause $w\\in W_{\\delta}$, \\lemref{theta_difference_bound} implies that\n$\\thetaw\\in\\thetadeltaball$, so \\lemref{continuous_invertibility}\nand \\lemref{hess_integral_invertible} imply that\n$\\hint\\left(\\thetaw,w\\right)$ is invertible and we can solve for\n$\\thetaw - \\thetaone$:\n%\n\\begin{align*}\n\\thetaw[w]-\\thetaone & =\n    -\\hint\\left(\\thetaw, w\\right)^{-1}G\\left(\\thetaone,w\\right)\\\\\n & =\\left(-\\hint\\left(\\thetaw,w\\right)^{-1} +\n          H\\left(\\thetaone,\\onevec\\right)^{-1} -\n          H\\left(\\thetaone,\\onevec\\right)^{-1}\\right)\n            G\\left(\\thetaone,w\\right)\\\\\n & =\\left(H\\left(\\thetaone,\\onevec\\right)^{-1} -\n          \\hint\\left(\\thetaw,w\\right)^{-1}\\right) G\\left(\\thetaone,w\\right)+\n    \\thetaij\\left(w\\right)-\\thetaone.\n\\end{align*}\n%\nEliminating $\\thetaone$ and taking the supremum of both sides gives\n%\n\\begin{align*}\n\\sup_{w\\in W_{\\delta}} &\n    \\norm{\\thetaij\\left(w\\right)-{{\\thetaw[w]}}}_{2}\\\\\n & =\\sup_{w\\in W_{\\delta}}\\norm{\n    \\left(H\\left(\\thetaone,\\onevec\\right)^{-1} -\n          \\hint\\left(\\thetaw, w\\right)^{-1}\\right)\n            G\\left(\\thetaone,w\\right)}_{2}\\\\\n & =\\sup_{w\\in W_{\\delta}}\\norm{\n    \\hint\\left(\\thetaw, w\\right)^{-1}\\left(\n        \\hint\\left(\\thetaw,w\\right) -\n        H\\left(\\thetaone,\\onevec\\right)\\right)\n            H\\left(\\thetaone,\\onevec\\right)^{-1}\n                G\\left(\\thetaone,w\\right)}_{2}\\\\\n & \\le2\\constop\\sup_{w\\in W_{\\delta}}\\norm{\n    \\left(\\hint\\left(\\thetaw,w\\right) -\n          H\\left(\\thetaone,\\onevec\\right)\\right)\n            H\\left(\\thetaone,\\onevec\\right)^{-1}\n                G\\left(\\thetaone,w\\right)}_{2}\\textrm{ (Lemmas\n                    \\ref{lem:continuous_invertibility} and \\ref{lem:hess_integral_invertible})}\\\\\n & \\le2\\constop\\sup_{w\\in W_{\\delta}}\\norm{\n    \\hint\\left(\\thetaw,w\\right) -\n    H\\left(\\thetaone,\\onevec\\right)}_{op}\n    \\norm{H\\left(\\thetaone,\\onevec\\right)^{-1}G\\left(\\thetaone,w\\right)}_{2}\\\\\n & \\le2\\constop\\sup_{w\\in W_{\\delta}}\\norm{\n    \\hint\\left(\\thetaw,w\\right) -\n        H\\left(\\thetaone,\\onevec\\right)}_{1}\n    \\norm{H\\left(\\thetaone,\\onevec\\right)^{-1}G\\left(\\thetaone,w\\right)}_{2}\n    \\textrm{ (ordering of matrix norms)} \\\\\n & \\le 2\\constop\\constij\\delta\n    \\sup_{w\\in W_{\\delta}}\n        \\norm{H\\left(\\thetaone,\\onevec\\right)^{-1}\n              G\\left(\\thetaone,w\\right)}_{2}\n    \\textrm{ (Lemma \\ref{lem:int_h_difference_from_one})}\\\\\n & \\le2\\constop[2]\\constij\\delta\n    \\sup_{w\\in W_{\\delta}}\\norm{G\\left(\\thetaone,w\\right)}_{2}\n    \\textrm{ (Assumption \\ref{assu:paper_hessian})}\\\\\n & 
=2\\constop[2]\\constij\\delta\n    \\sup_{w\\in W_{\\delta}}\n        \\norm{G\\left(\\thetaone,w\\right) -\n              G\\left(\\thetaone,\\onevec\\right)}_{2}\n    \\textrm{ (because }G\\left(\\thetaone,\\onevec\\right)=0\\textrm{)}\\\\\n & \\le2\\constop[2]\\constij\\delta^{2}\n    \\textrm{ (Condition \\ref{cond:paper_uniform_bound})}.\n\\end{align*}\n\n\\end{proof}\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\n\n\\subsection{Use cases\\label{sec:use_cases}}\n\nFirst, let us state a simple condition under which \\coreassum\nhold. It will help to have a lemma for the Lipschitz continuity.\n\\begin{lem}\nDerivative Cauchy-Schwartz. Let $a\\left(\\theta\\right)=\\left(a_{1}\\left(\\theta\\right),...,a_{N}\\left(\\theta\\right)\\right)$\nbe an array of tensors with multi-index $i\\in\\left[D_{A}\\right]$,\nand let $\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}=\\left(\\frac{\\partial}{\\partial\\theta}a_{1}\\left(\\theta\\right),...,\\frac{\\partial}{\\partial\\theta}a_{N}\\left(\\theta\\right)\\right)$\nbe an array of tensors of size $D\\times D_{A}$. Then\n\\begin{align*}\n\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}}_{2} & \\le D_{A}\\norm{\\frac{\\partial a}{\\partial\\theta}}_{2}.\n\\end{align*}\n\\end{lem}\n\\begin{proof}\nBy direct calculation,\n\\begin{align*}\n\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}^{2}}_{2}^{2} & =\\sum_{r=1}^{D}\\left(\\frac{\\partial}{\\partial\\theta_{r}}\\sum_{n=1}^{N}\\sum_{i=1}^{D_{A}}a_{n,i}\\left(\\theta\\right)^{2}\\right)^{2}\\\\\n & =\\sum_{r=1}^{D}\\left(\\sum_{n=1}^{N}\\sum_{i=1}^{D_{A}}2a_{n,i}\\left(\\theta\\right)\\frac{\\partial a_{n,i}\\left(\\theta\\right)}{\\partial\\theta_{r}}\\right)^{2}\\\\\n & \\le\\sum_{r=1}^{D}\\left(2\\sum_{i=1}^{D_{A}}\\left(\\sum_{n=1}^{N}a_{n,i}\\left(\\theta\\right)^{2}\\right)^{\\frac{1}{2}}\\left(\\sum_{n=1}^{N}\\left(\\frac{\\partial a_{n,i}\\left(\\theta\\right)}{\\partial\\theta_{r}}\\right)^{2}\\right)^{\\frac{1}{2}}\\right)^{2}\\\\\n & \\le\\sum_{r=1}^{D}\\left(2D_{A}^{2}\\left(\\frac{1}{D_{A}}\\sum_{i=1}^{D_{A}}\\sum_{n=1}^{N}a_{n,i}\\left(\\theta\\right)^{2}\\right)^{\\frac{1}{2}}\\left(\\frac{1}{D_{A}}\\sum_{i=1}^{D_{A}}\\sum_{n=1}^{N}\\left(\\frac{\\partial a_{n,i}\\left(\\theta\\right)}{\\partial\\theta_{r}}\\right)^{2}\\right)^{\\frac{1}{2}}\\right)^{2}\\\\\n & =4D_{A}^{2}\\norm a_{2}^{2}\\sum_{r=1}^{D}\\norm{\\frac{\\partial a}{\\partial\\theta_{r}}}_{2}^{2}\\\\\n & =4D_{A}^{2}\\norm a_{2}^{2}\\norm{\\frac{\\partial a}{\\partial\\theta}}_{2}^{2}.\n\\end{align*}\nBy the chain rule,\n\\begin{align*}\n\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}}_{2}^{2} & =\\frac{1}{4\\norm{a\\left(\\theta\\right)}_{2}^{2}}\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}^{2}}_{2}^{2}\\le D_{A}^{2}\\norm{\\frac{\\partial a}{\\partial\\theta}}_{2}^{2}.\n\\end{align*}\n\\end{proof}\n%\n\\begin{lem}\n\\label{lem:lipschitz_helper}Let $a\\left(\\theta\\right)\\in\\mathbb{R}^{D\\times D}$\nbe a continuously differentiable random matrix with a $D\\times D\\times D$\nderivative tensor. (Note that the function, not $\\theta$, is random.\nFor example, $\\mbe\\left[a\\left(\\theta\\right)\\right]$ is still a function\nof $\\theta$.) Suppose that $\\mbe\\left[\\norm{a\\left(\\theta\\right)}_{2}\\right]$\nis finite for all $\\theta\\in\\Omega_{\\theta}$. 
Then, for all $\\theta_{1},\\theta_{2}\\in\\Omega_{\\theta}$,\n\\begin{align*}\n\\left|\\mbe\\left[\\norm{a\\left(\\theta_{1}\\right)}_{2}\\right]-\\mbe\\left[\\norm{a\\left(\\theta_{2}\\right)}_{2}\\right]\\right| & \\le\\sqrt{\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}}_{2}^{2}\\right]}\\norm{\\theta_{1}-\\theta_{2}}_{2}.\n\\end{align*}\n\\end{lem}\n\\begin{proof}\nFor any tensor $a$ with multi-index $i$,\n\\begin{align*}\n\\norm{\\frac{\\partial}{\\partial\\theta}\\norm a_{2}^{2}}_{2}^{2} & =\\sum_{r=1}^{D}\\left(\\frac{\\partial}{\\partial\\theta_{r}}\\norm a_{2}^{2}\\right)^{2}\\\\\n & =\\sum_{r=1}^{D}\\left(\\frac{\\partial}{\\partial\\theta_{r}}\\sum_{i=1}^{D_{A}}a_{i}^{2}\\right)^{2}\\\\\n & =\\sum_{r=1}^{D}\\left(2\\sum_{i=1}^{D_{A}}a_{i}\\frac{\\partial a_{i}}{\\partial\\theta_{r}}\\right)^{2}\\\\\n & \\le4\\sum_{r=1}^{D}\\sum_{i=1}^{D_{A}}a_{i}^{2}\\sum_{i=1}^{D_{A}}\\left(\\frac{\\partial a_{i}}{\\partial\\theta_{r}}\\right)^{2}\\textrm{ (Cauchy-Schwartz)}\\\\\n & =4\\sum_{i=1}^{D_{A}}a_{i}^{2}\\sum_{r=1}^{D}\\sum_{i=1}^{D_{A}}\\left(\\frac{\\partial a_{i}}{\\partial\\theta_{r}}\\right)^{2}\\\\\n & =4\\norm a_{2}^{2}\\norm{\\frac{\\partial a}{\\partial\\theta}}_{2}^{2}.\n\\end{align*}\n\nConsequently,\n\\begin{align*}\n\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}}_{2}^{2} & =\\norm{\\frac{1}{2\\norm{a\\left(\\theta\\right)}_{2}}\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}^{2}}_{2}^{2}\\\\\n & =\\frac{1}{4\\norm{a\\left(\\theta\\right)}_{2}^{2}}\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}^{2}}_{2}^{2}\\\\\n & \\le\\frac{4\\norm{a\\left(\\theta\\right)}_{2}^{2}}{4\\norm{a\\left(\\theta\\right)}_{2}^{2}}\\norm{\\frac{\\partial}{\\partial\\theta}a\\left(\\theta\\right)}_{2}^{2}\\\\\n & =\\norm{\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}}_{2}^{2}.\n\\end{align*}\nSo for any $\\theta_{1},\\theta_{2}\\in\\Omega_{\\theta}$,\n\\begin{align*}\n\\left|\\mbe\\left[\\norm{a\\left(\\theta_{1}\\right)}_{2}\\right]-\\mbe\\left[\\norm{a\\left(\\theta_{2}\\right)}_{2}\\right]\\right| & \\le\\mbe\\left[\\left|\\norm{a\\left(\\theta_{1}\\right)}_{2}-\\norm{a\\left(\\theta_{2}\\right)}_{2}\\right|\\right]\\\\\n & \\le\\mbe\\left[\\left(\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial}{\\partial\\theta}\\norm{a\\left(\\theta\\right)}_{2}}_{2}\\right)\\right]\\norm{\\theta_{1}-\\theta_{2}}_{2}\\textrm{ (}\\theta\\textrm{ is not random)}\\\\\n & \\le\\mbe\\left[\\left(\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}}_{2}\\right)\\right]\\norm{\\theta_{1}-\\theta_{2}}_{2}\\\\\n & \\le\\sqrt{\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}}_{2}^{2}\\right]}\\norm{\\theta_{1}-\\theta_{2}}_{2}.\n\\end{align*}\nThe result follows. Note that the bound still holds (though vacuously)\nif $\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial a\\left(\\theta\\right)}{\\partial\\theta}}_{2}^{2}\\right]$\nis infinite.\n\\end{proof}\n\\begin{prop}\n\\label{prop:assumptions_hold}Let $\\Omega_{\\theta}$ be a compact\nset. Let $g_{n}\\left(\\theta\\right)$ be twice continuously differentiable\nIID random functions. 
Define\n\\begin{align*}\nh_{n}\\left(\\theta\\right) & :=\\frac{\\partial g{}_{n}\\left(\\theta\\right)}{\\partial\\theta}\\\\\nr_{n}\\left(\\theta\\right) & :=\\frac{\\partial^{2}g{}_{n}\\left(\\theta\\right)}{\\partial\\theta\\partial\\theta},\n\\end{align*}\nwhere $r_{n}\\left(\\theta\\right)$ is a $D\\times D\\times D$ tensor.\nAssume that\n\n1a) $\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]<\\infty$;\n\n1b) $\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{h_{n}\\left(\\theta\\right)}_{2}^{2}\\right]<\\infty$;\n\n1c) $\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]<\\infty$;\n\n2) $\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]$ is non-singular for\nall $\\theta\\in\\Omega_{\\theta}$;\n\n3) We can exchange expectation and differentiation.\n\nThen $\\lim_{N\\rightarrow\\infty}P\\left(\\textrm{\\coreassum\\ hold}\\right)=1.$\n\\end{prop}\n%\n\\begin{proof}\n    %\nThe proof follows from Theorems 9.1 and\n9.2 of \\citet{keener:2011:theoretical}. We will first show that the expected values of\nthe needed functions satisfy \\coreassum, and then that the sample versions\nconverge uniformly.\n\nBy Jensen's inequality,\n\\begin{align*}\n\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{2}\\right] & =\\mbe\\left[\\sqrt{\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}}\\right]\\le\\sqrt{\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]}.\n\\end{align*}\nAlso, for the $i^{th}$ component of $g_{n}\\left(\\theta\\right)$,\n\\begin{align*}\n\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|g_{n,i}\\left(\\theta\\right)\\right|\\right] & \\le\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{\\infty}\\right]\\le\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{g_{n}\\left(\\theta\\right)}_{2}\\right].\n\\end{align*}\nBy Theorem 9.1 of \\citet{keener:2011:theoretical}, $\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]$, $\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}\\right]$, and $\\mbe\\left[g_{n}\\left(\\theta\\right)\\right]$\nare continuous functions of $\\theta$, and because $\\Omega_{\\theta}$\nis compact, they are each bounded. Similar reasoning applies to $h_{n}\\left(\\theta\\right)$\nand $r_{n}\\left(\\theta\\right)$. Consequently, we can define\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right] & =:Q_{g}^{2}<\\infty\\\\\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\mbe\\left[\\norm{h_{n}\\left(\\theta\\right)}_{2}^{2}\\right] & =:Q_{h}^{2}<\\infty.\n\\end{align*}\nBelow, these constants will be used to satisfy \\assuref{paper_smoothness}\nand \\assuref{paper_bounded} with high probability.\n\nBecause $\\Omega_{\\theta}$ is compact, $\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]$\nis continuous, $\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]$ is non-singular,\nand the operator norm is a continuous function of $\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]$,\nwe can also define\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]^{-1}}_{op} & =:Q_{op}<\\infty.\n\\end{align*}\nBelow, this constant will be used to satisfy \\assuref{paper_hessian}\nwith high probability.\n\nFinally, we turn to the Lipschitz condition. 
\\lemref{lipschitz_helper}\nimplies that\n\\begin{align*}\n\\left|\\mbe\\left[\\norm{h_{n}\\left(\\theta_{1}\\right)}_{2}\\right]-\\mbe\\left[\\norm{h_{n}\\left(\\theta_{2}\\right)}_{2}\\right]\\right| & \\le\\sqrt{\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]}\\norm{\\theta_{1}-\\theta_{2}}_{2}.\n\\end{align*}\nDefine\n\\begin{align*}\n\\Lambda_{h} & =\\sqrt{\\mbe\\left[\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]},\n\\end{align*}\nso that we have shown that $\\mbe\\left[\\norm{h_{n}\\left(\\theta\\right)}_{2}\\right]$\nis Lipschitz in $\\Omega_{\\theta}$ with constant $\\Lambda_{h}$, which\nis finite by assumption.\n\nWe have now shown, essentially, that the expected versions of the\nquantities we wish to control satisfy \\coreassum with $N=1$.\nWe now need to show that the sample versions satisfy \\coreassum\nwith high probability, which will follow from the fact that the sample\nversions converge uniformly to their expectations by\nTheorem 9.2 of \\citet{keener:2011:theoretical}.\n\nFirst, observe that \\assuref{paper_smoothness} holds with probability\none by assumption. For the remaining assumptions, choose an $\\epsilon>0$\nand define\n\\begin{align*}\n\\constg & :=\\sqrt{Q_{g}^{2}+\\epsilon}\\\\\n\\consth & :=\\sqrt{Q_{h}^{2}+\\epsilon}\\\\\n\\constop & :=2Q_{op}\\\\\n\\liph & :=D^{2}\\sqrt{\\Lambda_{h}^{2}+\\epsilon}.\n\\end{align*}\n\nBy \\citet{keener:2011:theoretical} Theorem 9.2,\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}-\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\right| & \\xrightarrow[N\\rightarrow\\infty]{p}0.\n\\end{align*}\nBecause\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right| & \\ge Q_{g}^{2}+\\epsilon\\ge\\sup_{\\theta\\in\\Omega_{\\theta}}\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]+\\epsilon\\Rightarrow\\\\\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}-\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\right| & \\ge\\epsilon,\n\\end{align*}\n\nwe have\n\\begin{align*}\n & P\\left(\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right|\\ge Q_{g}^{2}+\\epsilon\\right)\\le\\\\\n & \\quad P\\left(\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}-\\mbe\\left[\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\right|\\ge\\epsilon\\right),\n\\end{align*}\nso\n\\begin{align*}\nP\\left(\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\sum_{n=1}^{N}\\norm{g_{n}\\left(\\theta\\right)}_{2}^{2}\\right|\\ge\\constg^{2}\\right) & \\xrightarrow[N\\rightarrow\\infty]{}0.\n\\end{align*}\nAn analogous argument holds for $\\frac{1}{N}\\sum_{n=1}^{N}\\norm{h_{n}\\left(\\theta\\right)}_{2}^{2}$.\nConsequently, $P\\left(\\textrm{Assumption \\ref{assu:paper_bounded} holds}\\right)\\xrightarrow[N\\rightarrow\\infty]{}1$.\n\nWe now consider \\assuref{paper_hessian}. 
Again, by \\citet{keener:2011:theoretical} Theorem 9.2\napplied to each element of the matrix $h_{n}\\left(\\theta\\right)$,\nusing a union bound over each of the $D^{2}$ entries,\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{1}{N}\\sum_{n=1}^{N}h_{n}\\left(\\theta\\right)-\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]}_{1} & \\xrightarrow[N\\rightarrow\\infty]{p}0.\n\\end{align*}\nBy the contrapositive of \\proprefref{operator_norm_continuity}, because\n$\\norm{\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]^{-1}}_{op}\\le Q_{op}$,\n\\begin{align*}\n\\norm{\\left(\\frac{1}{N}\\sum_{n=1}^{N}h_{n}\\left(\\theta\\right)\\right)^{-1}}_{op} & >2Q_{op}=\\constop\\Rightarrow\\\\\n\\norm{\\frac{1}{N}\\sum_{n=1}^{N}h_{n}\\left(\\theta\\right)-\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]}_{1} & >\\frac{1}{2}Q_{op}^{-1}.\n\\end{align*}\nConsequently,\n\\begin{align*}\n & P\\left(\\norm{\\left(\\frac{1}{N}\\sum_{n=1}^{N}h_{n}\\left(\\theta\\right)\\right)^{-1}}_{op}\\ge\\constop\\right)\\le\\\\\n & \\quad P\\left(\\norm{\\frac{1}{N}\\sum_{n=1}^{N}h_{n}\\left(\\theta\\right)-\\mbe\\left[h_{n}\\left(\\theta\\right)\\right]}_{1}\\ge\\frac{1}{2}Q_{op}^{-1}\\right)\\xrightarrow[N\\rightarrow\\infty]{}0,\n\\end{align*}\nand $P\\left(\\textrm{Assumption \\ref{assu:paper_hessian} holds}\\right)\\xrightarrow[N\\rightarrow\\infty]{}1.$\n\nFinally, applying \\lemref{lipschitz_helper} to $\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta\\right)}_{2}$,\n\\begin{align*}\n\\left|\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta_{1}\\right)}_{2}-\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta_{2}\\right)}_{2}\\right| & \\le\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{\\frac{\\partial}{\\partial\\theta}\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta\\right)}_{2}}_{2}\\norm{\\theta_{1}-\\theta_{2}}_{2}\\\\\n & \\le\\frac{D^{2}}{\\sqrt{N}}\\sup_{\\theta\\in\\Omega_{\\theta}}\\norm{r\\left(\\theta\\right)}_{2}\\norm{\\theta_{1}-\\theta_{2}}_{2}\\\\\n & =D^{2}\\sqrt{\\sup_{\\theta\\in\\Omega_{\\theta}}\\frac{1}{N}\\norm{r\\left(\\theta\\right)}_{2}^{2}}\\norm{\\theta_{1}-\\theta_{2}}_{2}.\n\\end{align*}\nConsequently,\n\\begin{align*}\n\\left|\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta_{1}\\right)}_{2}-\\frac{1}{\\sqrt{N}}\\norm{h\\left(\\theta_{2}\\right)}_{2}\\right| & \\ge\\liph\\norm{\\theta_{1}-\\theta_{2}}_{2}\\Rightarrow\\\\\nD^{2}\\sqrt{\\sup_{\\theta\\in\\Omega_{\\theta}}\\frac{1}{N}\\norm{r\\left(\\theta\\right)}_{2}^{2}} & \\ge\\liph\\Rightarrow\\\\\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\frac{1}{N}\\norm{r\\left(\\theta\\right)}_{2}^{2}-\\sup_{\\theta\\in\\Omega_{\\theta}}\\mbe\\left[\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right] & \\ge\\frac{\\liph^{2}}{D^{4}}-\\sup_{\\theta\\in\\Omega_{\\theta}}\\mbe\\left[\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\Rightarrow\\\\\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\norm{r\\left(\\theta\\right)}_{2}^{2}-\\mbe\\left[\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\right| & \\ge\\frac{\\liph^{2}}{D^{4}}-\\Lambda_{h}^{2}=\\epsilon.\n\\end{align*}\nHowever, again by \\citet{keener:2011:theoretical} Theorem 9.2,\n\\begin{align*}\n\\sup_{\\theta\\in\\Omega_{\\theta}}\\left|\\frac{1}{N}\\norm{r\\left(\\theta\\right)}_{2}^{2}-\\mbe\\left[\\norm{r_{n}\\left(\\theta\\right)}_{2}^{2}\\right]\\right| & \\xrightarrow[N\\rightarrow\\infty]{p}0,\n\\end{align*}\nso $P\\left(\\textrm{Assumption \\ref{assu:paper_lipschitz} holds}\\right)\\xrightarrow[N\\rightarrow\\infty]{}1$.\n\\end{proof}\n", "meta": {"hexsha": "2f398bcd2178571b4658b6ed997f67bb537bb175", "size": 40992, "ext": 
"tex", "lang": "TeX", "max_stars_repo_path": "writing/arxiv/app_theory.tex", "max_stars_repo_name": "rgiordan/AISTATS2019SwissArmyIJ", "max_stars_repo_head_hexsha": "1a74a8890f1e1889faa966f24f041f6ffddf3437", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-06-13T01:21:15.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-03T20:20:47.000Z", "max_issues_repo_path": "writing/arxiv/app_theory.tex", "max_issues_repo_name": "rgiordan/AISTATS2019SwissArmyIJ", "max_issues_repo_head_hexsha": "1a74a8890f1e1889faa966f24f041f6ffddf3437", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writing/arxiv/app_theory.tex", "max_forks_repo_name": "rgiordan/AISTATS2019SwissArmyIJ", "max_forks_repo_head_hexsha": "1a74a8890f1e1889faa966f24f041f6ffddf3437", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.9036954087, "max_line_length": 295, "alphanum_fraction": 0.6450526932, "num_tokens": 15244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5604985374440008}}
{"text": "\\section{Conclusion}\n\\label{sec:Conclusion}\n\nIn this paper, we propose a two-stages row-by-row parallel VGLCS\nalgorithm and optimization for both stages.  The proposed two-stage\nalgorithm is much more efficient than the traditional wavefront method\nsince it is much regular and much easy to parallelize.\n\n%% it shrinks the length of the\n%% critical path of intuitive parallel algorithm.\n\nWe use {\\em sparse table} to solve the variable gapped longest common\nsubsequence problem.  The sparse table implementation runs efficiently\nin parallel because it improves the thread synchronize and workload\nimbalance than the disjoint set.  In particular, we present a\nrightmost-pops encoding that is both easy-to-implement and efficient\nin a parallel environment.  Our VGLCS algorithm using right-most-pops\nencoding sparse table runs in $O(n^2 s / p + n \\; \\max(\\log n, s))$,\nwhere $n$ is the number of data, $p$ is the number of processors, and\n$s$ is the block size.\n\nWe also present a tree labeling technique that order trees\nlexicographically so that we can encode every binary search tree into\na Catalan index.  Then, we apply this technique in building Cartesian\ntree building so that the insertion takes amortized $O(1)$ time.  As a\nresult the VGLCS problem can be solved in $O(n^2 / p + n \\log n)$\ntime with this labeling technique.\n\nFinally, we present a dynamic Catalan index computation algorithm for\nsparse table.  We can answer the incremental ranged maximum query run in\namortized $O(1)$ by our sparse table, and it can be applied to the\nincremental suffix maximum query as well.  The time complexity of our\nVGLCS algorithm is $O(n^2 / p + n \\log n)$.  Note that this dynamic\nCatalan index computation technique is very general and can be applied\nto other circumstances where data is constantly inserted into a binary\ntree.\n\nWe observed two interesting results from our experiments.  First\nblocked sparse table outperforms unblocked sparse tables in a\nparallel environment.  When the block size is properly chosen, the\nperformance of blocked version runs more efficiently than the unblocked\nversion.  Second, a asymptotically better algorithm may not perform\nbetter in practice.  For example, from our experiments we conclude\nthat blocked sparse table with rightmost-pops, which has a query time\n$O(s)$, actually performs better than a sparse table with $O(1)$\namortized query time.  
We believe that an easy and straightforward\nimplementation is the key to good performance, especially in a\nparallel environment.\n", "meta": {"hexsha": "86cfecacc1898b5da79cc6c31dab4e09c1ca730d", "size": 2518, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/IEEE/partial/conclusion-en.tex", "max_stars_repo_name": "morris821028/parallel-VGLCS", "max_stars_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-02-11T08:45:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T07:30:24.000Z", "max_issues_repo_path": "doc/IEEE/partial/conclusion-en.tex", "max_issues_repo_name": "morris821028/parallel-VGLCS", "max_issues_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2017-02-21T02:01:16.000Z", "max_issues_repo_issues_event_max_datetime": "2017-02-24T00:13:34.000Z", "max_forks_repo_path": "doc/IEEE/partial/conclusion-en.tex", "max_forks_repo_name": "morris821028/parallel-VGLCS", "max_forks_repo_head_hexsha": "87fe1c71e14cf7ed6092f728b085b735cf683a4b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.387755102, "max_line_length": 72, "alphanum_fraction": 0.7922954726, "num_tokens": 576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5604985364183904}}
{"text": "\\chapter{\\uppercase{Geometry of Output Quantities and Consistent Solutions} \\label{chapter:geometry}}\n\nHere, we describe how the relationship of a geometric quantity called \\emph{skewness} in QoI maps impacts the accuracy of consistent solutions to SIPs approximated with random sampling.\nWhile prior work has addressed skewness in the context of solving the SIP, its impact on parameter estimation problems formulated within the Data-Consistent framework has not been previously studied.\nWe demonstrate that the skewness of a map impacts the \\emph{precision} of a parameter estimate.\nThis is subsequently utilized in Chapter~\\ref{chapter:vector-valued} to aggregate data into different components of a QoI map to more precisely estimate parameter values.\n\nIn this chapter, we begin with a definition of skewness and overview of a set-based approach for constructing consistent solutions to SIPs in \\ref{sec:skewness}.\nThat review is followed by a series of numerical examples which establish fundamental connections between the skewness of QoI maps and the difficulty of accurately approximating solutions with finite sampling.\nNamely, we establish that in addition to the implied invariability of skewness to translations, rotations of maps have no impact on solution accuracy.\nFurthermore, we show that the number of samples required to achieve a predefined level of error is directly proportional to the skewness of the QoI map used to solve the inverse problem.\n\n\\section{A Brief Literature Review of Skewness}\\label{sec:skewness}\n\\input{set-based/skewness.tex}\n\nWe demonstrate that the number of samples required to approximate densities using uniform i.i.d.~sampling is proportional to the skewness of the map used for inversion, though the convergence rate of the algorithm used to solve the SIP is unaffected.\nWe focus on the accuracy of the consistent solutions to the SIP.\nIt is illustrative to begin with the original set-based approximations to solutions developed in \\cite{BBE11}, \\cite{BES12}, and \\cite{BET+14} as the dependence of solutions on skewness is more explicit than in the density-based approach.\nWhile the content of this chapter is concerned with estimating distributions and not individual parameter estimates, we remind the reader that the solutions to the SIP are inherently densities.\nIn Chapter~\\ref{chapter:mud}, we used these densities to produce a parameter estimate, and so we are motivated to study the accuracy of these solutions to the SIP.\n\n\n%%%%%\n\n\n\\section{Set-Based Inversion for Consistent Measures}\\label{sec:set-based}\n% Intro\n\\input{set-based/set_derivation.tex}\n%% \\input{ch02/set_derivation_bayes.tex}\n\n% Numerical Approximation\n\\input{set-based/set_algorithm.tex}\n\n% Descriptions of Error\n\\input{set-based/set_error.tex}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%% discussion of convergence\n\n\\subsection{Convergence}\nTo study the accuracy of the solutions to SIPs using QoI maps with different skewness values (or number of available model evaluations), we need to select a metric on the space of probability measures.\nRates of convergence depend on this choice.\nSince the spaces $\\pspace$ we are considering are generally bounded and finite, the Total Variation metric metrizes weak convergence (see Thm. 
6 in \\cite{GS02}).\nThe latter property of the metric is of notable importance because the QoI maps we study are indeed (component-wise) functionals on the space of model inputs $\\pspace$.\nThus, convergence of a sequence of probability measures under the Total Variation metric implies that the QoIs will also converge component-wise in $\\RR$.\nIn other words, convergence in the Total Variation metric implies the convergence of the sampled QoI map to the exact QoI map since the map is a linear functional of the probability measure.\nMore formally, if $\\PP_{\\pspace,\\ndiscs,\\nsamps,h}$ converges to either $\\PP_{\\pspace,\\ndiscs,\\nsamps}$, $\\PP_{\\pspace,\\ndiscs}$, or $P_\\pspace$ using the Total Variation metric, this implies that the error converged to zero in the numerically computed $\\qoi(\\param^{(j)})$.\nThus, convergence in the Total Variation metric implies convergence of the numerical method used to construct the QoI map.\nFurthermore, recall that weak convergence $\\PP_n \\to \\PP$ is defined to mean\n\\[\n\\int f \\PP_n \\to \\int f \\PP \\text{ as } n \\to \\infty\n\\]\nfor bounded Lipschitz functions $f:\\pspace\\to\\RR$.\nTaking $f = \\Chi_A$, this leads to the following implication:\n\\[\n\\PP_{\\pspace, \\ndiscs, \\nsamps} \\to P_\\pspace \\implies \\PP_{\\pspace, \\ndiscs, \\nsamps} (A) \\to P_\\pspace (A) \\quad \\forall{A\\in\\pborel}.\n\\]\n%provided we rigorously define $\\P\\PP_{\\pspace, \\ndiscs, \\nsamps}$ to measure sets in $\\BB_\\pspace$, which we proceed to do in the following section\\footnote{\\bf{I know this is clumsy, but I'm not exactly sure how to phrase this correctly, because it almost seems like this would be true for all $A\\in\\pspace$ instead. I suppose the omission of the differential operator above is intentionally vague to gloss over this detail. 
Can we sharpen this up?}}\nWe choose Total Variation as the metric used in the numerical results of Section~\\ref{sec:ch03-set} because of its common use in the literature as the ``statistical distance'' between densities \\cite{GS02, Silverman}, and because of its implications for convergence.\n\n\n\\input{set-based/set_accuracy}\n% \\input{ch03/sample_accuracy}\n\n% \\input{set-based/heatrod.tex}\n% \\FloatBarrier\n%\n% \\input{ch03/decay.tex}\n\n\\section{Conclusions}\nIn this chapter we reviewed the skewness property of QoI maps and demonstrated that for solutions to the SIP, maps with lower values of skewness exhibit lower overall finite-sampling approximation error against a reference solution.\nWe saw that the solutions are invariant to transformations such as rotation (which also holds for translations).\nThe latter observation may aid in the selection of optimal maps (i.e., one can disregard the direction of the maps' contours and focus solely on the angles among the generalized contours).\nIn the following chapter, we show how vector-valued QoI maps such as those in Chapter~\\ref{chapter:mud} constructed from data relate to the geometry of the resulting data-spaces and subsequently impact the solutions to the SIP.\nWe demonstrate that even a basic awareness and consideration of skewness can aid in the construction of maps which are more informative and thus increase the precision of parameter estimates.\n\n\\FloatBarrier\n", "meta": {"hexsha": "c9fb3a5f75ea32f0c28b822979224f92e92854b7", "size": 6429, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter04.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "chapter04.tex", "max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "chapter04.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 81.3797468354, "max_line_length": 452, "alphanum_fraction": 0.7911028154, "num_tokens": 1458, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708561, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5604985280427135}}
{"text": "\\chapter{Differential Geometry}\nAs we have noted before, general relativity is a inherent local theory. It is\nconvenient to formulate it in terms of differential geometry.\n\\section{Manifolds}\n\\begin{definition}\nA $n$ dimensional manifold $M$ is a Hausdorff space with countable basis, that\nis locally homeomorphic to $\\mathbb{R}^n$. \n\\end{definition}\n%TODO picture\nWe will give a short introduction to\nthe most important terms.\n\\begin{remark}\nThe requirements Hausdorff and countable basis are of a more technical nature and are satisfied for most of the objects one can imagine \nexcept some pathological examples (we won't go into the details on this).\n\nLocally homeomorphic to $\\mathbb{R}^n$ means there exists a set of \\emph{charts} \n$(\\varphi,U^\\varphi)$ called an \\emph{atlas} $\\mathcal{A}$ with $\\cup_{\\varphi\\in\\mathcal{A}} U^\\varphi =M$, \ni.e. the charts cover the whole manifold. The maps $\\varphi:U^\\varphi\\to \\varphi(U^\\varphi)\\subset\\mathbb{R}^n $ are homeomorphisms, \nmeaning that $U^\\varphi$ is open, $\\varphi$ is onto and both $\\varphi$ and\n$\\varphi^{-1}$ are continuous.\nFurther for any two $\\varphi,\\psi\\in \\mathcal{A}$, the coordinate changes \n$\\varphi\\circ\\psi^{-1}:\\psi(U^\\psi\\cup U^\\varphi)\\to \\phi(U^\\psi\\cup U^\\varphi)$\nbe smooth\\footnotemark .\n\\end{remark}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{coordinateenvironment.pdf}\n\\caption{Coordinate change}\n%TODO Caption\n\\end{figure}\n\n\\footnotetext{Infinitely often differentiable or short $C^\\infty$.}\nWe can now reduce differentiation on the manifold to the ordinary differentiation in $\\mathbb{R}^n$. \nSince physical laws are described in terms of differential equations, we can formulate them on $M$. \nThe fact that the coordinate changes are smooth ensures that differentiability\nis well defined (and thus the physical laws are).\n\n\\begin{sidenote}[Differential Structures]\nThere can be different \\emph{differential structures} on a manifold, \nwhich means there are multiple (maximal)alases, which\ncould not be merged because the coordinate changes would not be $C^\\infty$. Those differentiable structures therefore imply different notions of differentiability. \nRemarkably this may even play a role in some physical theories. \nAs an example an 11d-supergravity can be described as a product\n$\\Reals^{3+1}\\times \\Sphere^7$.\nWhere $\\Sphere^7$ is the seven-sphere and $\\Reals^{3+1}$ Minkovski space.\nThis means on every point in the $\\mathbb{R}^{3+1}$ there is a (small) $\\Sphere^7$  located that contains additional spatial dimensions. \nThe $\\Sphere^7$ has 28 different differential structures, so the choice of\nsuch a structure affects the theory for the above reasons.\n\\end{sidenote}\nAll simple examples we come of can be embedded in a higher space. The\n\\name{Whitney} embedding theorem states that every real $n$-dimensional Manifold\ncan be embedded to $\\Reals^{2n}$ (This is however not true for complex, i.e. analytic manifolds).\nFor example the $\\Sphere^2$ can be interpreted as submanifold of the\n$\\Reals^3$.\nHowever manifolds are objects that exists independent of such embeddings. \nFor example a torus can be thought of as a square with the opposite sides identified (leaving to the left results in re-entering in the left).\n\\begin{sidenote}[Topology of the Universe]\nIn addition to the local structure, we may question the global, i.e. 
\\begin{sidenote}[Topology of the Universe]\nIn addition to the local structure, we may question the global, i.e. the\ntopological structure of the universe.\nOne may for example imagine that we live on the surface of a three-sphere (finite\nbut boundless universe).\nThis might be observable in cross-correlations in the cosmic microwave\nbackground from photons reaching us from different directions but coming from the same event. There is no evidence of such phenomena so far. \nMost models can be excluded with some certainty. A cylindrical\nuniverse is still possible (finite in one direction, infinite in the\nothers).\n\\end{sidenote}\n\\section{Vectors}\nVectors are important objects describing physics. The naive view as an ``arrow\npointing from one point to another'' is flawed.\nFor example on a sphere an arrow connecting two points does not make much sense.\nWe want to find a description of vectors as objects that are naturally related to the structure of the manifold, independent of the embedding.\n\\subsection{Definitions}\nThere are three equivalent definitions for a\n(contravariant) vector:\n\\begin{enumerate}\n    \\item algebraic \n    \\item physical\n    \\item geometrical.\n\\end{enumerate}\nWe start by giving the algebraic definition, which is the most\nabstract and preferred by mathematicians because it is suitable for proofs.\nVectors are identified with derivations, which are formally defined by\n\\begin{definition}[Derivation]\nA derivation $D$ satisfies the following rules, for all $f,g\\in\nC^\\infty(M,\\Reals)$ and $\\lambda \\in \\Reals$:\n\\begin{align}\n    D(f+g) &=Df+Dg\\,,\\\\\n    D(\\lambda f)&=\\lambda Df\\,,\\\\\n    D(fg)&= (Df)g+f(Dg)\\,.\n\\end{align}\n\\end{definition}\nWe then define a vector by \n\\begin{definition}[Vector,\nalgebraic] A vector at $p$ is a derivation on the germ at $p$.\n\\end{definition}\nThe germ is the set of all functions $f\\in C^\\infty(M,\\Reals)$, where we\nidentify all functions that are equal in some neighbourhood of $p$, i.e. vectors are local objects.\nGiven two vectors we can construct a new one, the \\emph{Lie bracket}\n\\begin{equation}\n    [X,Y]f:=X(Yf)-Y(Xf)\\, .\\label{eq:LieBraket}\n\\end{equation}\nThe only property that has to be checked is that it satisfies the Leibniz rule:\n\\begin{equation}\n    XY(fg)=X[(Yf)g+f(Yg)]=(XYf)g+(Yf)(Xg)+(Yg)(Xf)+(XYg)f\n\\end{equation}\nSubtracting $YX(fg)$ proves that $[X,Y]$ is indeed a vector. The fact that we\nhave a natural vector space structure on the set of vectors at $p$ motivates the\nfollowing\n\\begin{definition}\nThe tangent space $T_pM$ is the space of all vectors in $p\\in M$.\n\\end{definition}\nA basis of $T_pM$ is given by the derivations along the\ncoordinates $\\partial_i$; therefore its dimension is equal to that of the manifold $M$.\nProof sketch:\n\\begin{enumerate}\n    \\item Show $f(x^i)=f(0)+x^i\\tilde{f}(x^i)$\n    \\item Write $X=a^i\\partial_i$\n    \\item Show $Xf=0\\quad \\forall f \\iff X=0$\n\\end{enumerate}\nEvery vector $A$ can be written as $A=A^i\\pd{}{{x^i}}$, where $A^i$ are the\ncomponents of the vector. 
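As a quick illustration (a standard computation): on the plane with Cartesian coordinates $(x,y)$ and polar coordinates $(r,\\varphi)$, the chain rule gives\n\\begin{equation}\n\\pd{}{x}=\\cos\\varphi\\,\\pd{}{r}-\\frac{\\sin\\varphi}{r}\\,\\pd{}{\\varphi}\\, ,\n\\end{equation}\nso one and the same vector has components $(1,0)$ in the Cartesian basis and $(\\cos\\varphi,-\\sin\\varphi/r)$ in the polar basis.\n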
We can now look at how the components of the vector\ntransform under a change of coordinates\\footnote{The vector itself is\ninvariant!}.\nWe usually denote the elements of the transformed systems with a bar.\nBy the chain rule we have\n\\begin{equation}\n    A= A^k\\pd{}{{x^k}}= A^k\\pd{\\overline{x}^i}{{x^k}}\\pd{}{{\\overline{x}^i}}\\, .\n\\end{equation}\nWe can also express $A$ directly in the new basis\n\\begin{equation}\n    A= \\overline{A}^i\\pd{}{{\\overline{x}^i}}\\, .\n\\end{equation}\nComparing the coefficients gives the vector transformation law\n\\begin{equation}\n    \\overline{A}^i=A^k\\pd{\\overline{x}^i}{{x^k}}\\label{eq:coefftrafo}\\, .\n\\end{equation}\n\\begin{definition}[Vector,\nphysical] A vector with components $A^i$ is an object that transforms according\nto \\ref{eq:coefftrafo} under a change of coordinates.\n\\end{definition}\nConsider a curve on a manifold $M$, i.e. a\nmap $\\gamma:\\mathbb{R}\\to M$, with $\\gamma(0)=p$, $\\dot{\\gamma}(0)=X$. Then $D_X f=\\od{}{t}(f\\circ\\gamma)(0)$ is\na derivation, namely the directional derivative along $X$.\nConsider the special curves $\\gamma_i(t)=\\varphi^{-1}(\\varphi(p)+te_i)$, with $\\varphi$ a\nchart of $M$. Then $D_{\\dot{\\gamma}_i} f=\\partial_if$,  so $D_{\\dot{\\gamma}_i}$\nrepresents the directional derivative along the $i$-th coordinate, and we can relate derivations to the\ngeometrical tangent space.\n\nSince we have a basis, we can work in (local) coordinates and will do so most of\nthe time.\n\\begin{example}[Lie brackets in local coordinates]\nLet $A=A^i\\partial_i$, $B=B^i\\partial_i$ be vectors, then the \\name{Lie} bracket\n\\eqref{eq:LieBraket} in local coordinates is given as\n\\begin{equation}\n    [A,B]^j=A^i\\partial_iB^j-B^i\\partial_iA^j\\, .\n\\end{equation}\n\\end{example}\nSince the tangent space is a vector space, we can define its dual space.\n\\begin{definition}[Cotangent space] The cotangent space $T_pM^*$ is the set of\nlinear maps from $T_pM$ to $\\mathbb{R}$, i.e.\n\\begin{equation}\n    T_pM^*:=\\{L:T_pM\\to \\mathbb{R}\\, |\\, L \\text{ linear}\\}\\, .\n\\end{equation}\n\\end{definition}\nThe cotangent space is again a vector space of the same dimension. Its elements\nare called \\emph{dual} or \\emph{covariant} vectors.\nWe can define a basis on $T_pM^*$, which we denote by $\\dif x^i$ and  which acts on $T_pM$ via\n\\begin{equation}\n    \\dif x^i(\\partial_j)=\\delta^i_j\\, . \\label{eq:orthdual}\n\\end{equation}\nIt can easily be deduced from \\eqref{eq:orthdual} that the components of a dual vector transform as\n\\begin{equation}\n    \\overline{a}_i=\\pd{x^k}{{\\overline{x}^i}}a_k\\, .\n\\end{equation}\n\\begin{remark}[Dual vectors in Euclidean space]\nIf $\\vec{a},\\vec{b}\\in\\mathbb{R}^n$ contain the components of a vector and a\ndual vector respectively, then the transformation can be written in matrix form\n\\begin{align}\n    \\vec{a}&\\to\\overline{\\vec{a}}= V\\vec{a}\\, ,\\\\\n    \\vec{b}&\\to\\overline{\\vec{b}}=\\left(V\\transpose\\right)^{-1}\\vec{b}\\, ,\n\\end{align}\nwith $V_{ij}=\\dpd{\\overline{x}^i}{{x^j}}$ the\nJacobian of the transformation.\nIn normal calculus we restrict ourselves to orthogonal transformations (i.e.\nmapping orthonormal bases onto each other), for which\n$\\left(O\\transpose\\right)^{-1}=O$.\nThis is the reason why we do not bother to distinguish between vectors and dual vectors there: they transform identically. 
\nIn special relativity, by contrast, we have\n$\\left(\\Lambda\\transpose\\right)^{-1}\\neq\\Lambda$ for a boost, and the difference\nbecomes even more important in general relativity, where the relation can become arbitrarily complicated.\n\\end{remark}\n\\section{Tensors}\nFrom vectors $A$, $B$ we can construct new objects with multiple indices that possess well defined transformation behaviour. \nFor example consider\n\\begin{equation}\n    T^{ij}=A^iB^j\\, ,\n\\end{equation}\nwhich transforms as\n\\begin{equation}\n    \\overline{T}^{ij}=\\pd{\\overline{x}^i}{{x^k}}\\pd{\\overline{x}^j}{{x^l}}A^kB^l=\\pd{\\overline{x}^i}{{x^k}}\\pd{\\overline{x}^j}{{x^l}}T^{kl}\\,\n   .\\label{eq:tensortrafo}\n\\end{equation}\nWe call an object that transforms in this way a \\emph{tensor}. \nAs with vectors, it is possible to define tensors in a coordinate independent\nway.\nAt this point we will make things easier and only consider the physical\ndefinition, i.e. classify tensors by a transformation according to \\eqref{eq:tensortrafo}.\n\nA tensor is said to be symmetric in two indices if it stays invariant when exchanging those indices, e.g.\n\\begin{equation}\n    T_{ab}=T_{ba}\\, .\n\\end{equation}\n\\begin{remark}\nWe have not yet established a relation between upper and lower indices, i.e. we have no metric. Expressions of the form\n\\begin{equation}\n    \\tensor{T}{^a_b}=\\tensor{T}{^b_a}\n\\end{equation}\ntherefore make no sense.\n\\end{remark}\n\\section{The Metric}\nSo far we have not defined a length scale on manifolds. We will do so now by\nintroducing a \\emph{metric}.\n\\begin{definition}[Metric]\nA metric $g$ on a manifold $M$ is a non-degenerate ($\\det(g)\\neq 0$), symmetric\ncovariant two tensor.\n\\end{definition}\nWe have already seen examples of metrics for flat space, e.g. in polar coordinates $g$ was given as\n\\begin{equation}\n    g=\n    \\begin{bmatrix}\n        1 & 0\\\\\n        0 & r^2\\\\\n    \\end{bmatrix}\\,.\n\\end{equation}\nA metric that is positive definite is called a \\emph{\\name{Riemann}ian metric}. In\nrelativity we deal with \\emph{\\name{Lorentz}ian metrics}, for which there are\nvectors besides the zero vector which have zero ``length''. In flat space we have\n$\\tensor{g}{_i_j}=\\tensor{\\eta}{_i_j}$.\nA metric gives two natural notions on the tangent space of the manifold: an\ninner product\n\\begin{equation}\ng(A,B)=\\tensor{g}{_i_j}\\tensor{A}{^i}\\tensor{B}{^j}\n\\end{equation}\nand a pseudo\\footnote{Not positive definite.} norm\n\\begin{equation}\ng(A,A)=\\tensor{g}{_i_j}\\tensor{A}{^i}\\tensor{A}{^j}\\, .\n\\end{equation}\n\\begin{remark}[About\nraising and lowering indices] Suppose we have given a vector $A=A^i\\partial_i$\nwith coordinates $A^i$. Then we can relate it\nin a natural way to a linear form\\footnote{The symbols\n$\\flat$ and $\\sharp$ are borrowed from musical notation.} $A^\\flat:=g(A,\\cdot)$,\n\\begin{equation}\n\\begin{split}\n A^\\flat\\,\\colon T_pM &\\to \\Reals\\\\\n X &\\mapsto g(A,X)\\, ,\n\\end{split}\n\\end{equation}\ni.e. a dual vector. The components of this dual vector are given by its action\non the basis elements of the tangent space\n\\begin{equation}\nA_i:=(A^\\flat)_i=g(A,\\partial_i)=A^jg(\\partial_j,\\partial_i)=g_{ji}A^j\\,,\n\\end{equation}\nwhich is exactly the law for lowering indices. Given the inverse metric we can\nmultiply this equation by it to obtain $A^i$ in terms of $A_i$. \n\\end{remark}\n
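As a short example (using the signature convention $\\tensor{\\eta}{_i_j}=\\mathrm{diag}(1,-1,-1,-1)$): for a vector with components $A^i=(A^0,A^1,A^2,A^3)$, the associated dual vector has components\n\\begin{equation}\nA_i=\\tensor{\\eta}{_i_j}A^j=(A^0,-A^1,-A^2,-A^3)\\, ,\n\\end{equation}\nso in flat space lowering an index merely flips the sign of the spatial components.\n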
\\section{Affine Connections}\nTo differentiate a vector field, we have to relate different tangent spaces. The idea\nis again to generalize from flat space. The link is established by\nintroducing an affine connection.\n\\subsection{Parallel Transport}\nWe consider the parallel transport of a\nvector.\n%TODO pictures\nIf we express a vector in Cartesian coordinates and shift it parallel to itself, its\ncoordinates do not change.\nWe take a look at two operations:\n\\begin{enumerate}\n\\item the change of the vector itself\n\\item the change of its coordinates.\n\\end{enumerate}\nLet $A_i$ be the coordinates of a vector in a system $x^i$ and $B_i$ in a\nsystem associated with coordinates $y^i$ respectively. They are therefore\nrelated by\n\\begin{equation}\nA_i=\\dpd{{y^j}}{{x^i}}B_j\\, ,\\quad B_i=\\pd{{x^j}}{{y^i}}A_j\\, .\n\\end{equation}\nWe look at vectors whose coordinates in the system $y^i$ do not change, i.e.\\ \n$\\delta B_i=0$. The variation of $A_i$ is given by\n\\begin{equation}\n\\delta\nA_i=\\delta\\left(\\dpd{{y^j}}{{x^i}}\\right)B_j\n=\\md{{y^j}}{2}{{x^i}}{}{{x^k}}{}\\delta\nx^k B_j\\, .\n\\end{equation}\nExpressing $B_i$ in terms of $A_i$ yields\n\\begin{equation}\n\\delta A_i = \\md{{y^j}}{2}{{x^i}}{}{{x^k}}{}\\pd{{x^l}}{{y^j}}A_l\\delta x^k\n=:\\affin{l}{i}{k}A_l\\delta x^k\\, .\n\\end{equation}\n$\\affin{l}{i}{k}$ is called the \\emph{affine connection}, or affinity for short.\n\\begin{remark}\nWe can always find a coordinate system in which $\\affin{l}{i}{k}$ vanishes at a given point; such a\nsystem is called a \\emph{\\name{Riemann}ian normal coordinate system} (RNCS).\n\\end{remark}\n%TODO construction i.e. geodesic coordinates\nWe notice that if \n\\begin{equation}\n\\left(\\dpd{{A_i}}{{x^k}}-\\affin{l}{i}{k}A_l\\right)\\delta x^k = 0\\,\n,\\label{eq:covdev}\n\\end{equation}\nthe vector $A$ does not change its coordinates. We define the \\emph{covariant\nderivative} \n\\begin{equation}\n\\tensor{A}{_i_;_j}:=\\tensor{\\nabla}{_j}\\tensor{A}{_i}:=\\pd{{A_i}}{{x^j}}-\\affin{l}{i}{j}A_l\\,\n.\n\\end{equation}\nIt can easily be seen that the covariant derivative of a tensor transforms as a\ntensor, by inspecting \\eqref{eq:covdev} and applying the quotient theorem.\n\\begin{remark}[The\nCovariant Derivative in Electrodynamics] An example from electrodynamics illustrates the idea behind the covariant derivative. \nThe theory is invariant under transformations $\\phi\\to e^{\\imI \\alpha}\\phi$, \nbecause $\\phi^*\\phi$ and $\\phi^*\\nabla\\phi-\\phi\\nabla\\phi^*$ do not change.\nIn order to make the Lagrangian gauge invariant we exchange\n\\begin{equation}\n\\tensor{\\partial}{_\\mu}\\to\\tensor{D}{_\\mu}=\\tensor{\\partial}{_\\mu}+\\imI\\tensor{A}{_\\mu}\\, ,\n\\end{equation}\nwhich effectively produces additional terms in the Lagrangian, namely\n\\begin{equation}\n\\tensor{A}{^\\mu}\\left(\\phi^*\\tensor{\\partial}{_\\mu}\\phi\n-\\phi\\tensor{\\partial}{_\\mu}\\phi^*\\right)=\\tensor{A}{^\\mu}\\tensor{J}{_\\mu}\\,,\n\\quad\\tensor{A}{_\\mu}\\tensor{A}{^\\mu}\\phi^2\\,.\n\\end{equation}\nThe commutator between the covariant derivatives evaluates to\n\\begin{equation}\n\\left[\\tensor{D}{_\\mu},\\tensor{D}{_\\nu}\\right]=\\imI \\tensor{F}{_\\mu_\\nu}\\,.\n\\end{equation}\nSo the noncommutativity is associated with the presence of a field. This is\nsimilar to GR, where it is related to curvature. \n\\end{remark}\nSince we have now established a relation between vectors and dual vectors, we\ncan also determine the covariant derivative of a contravariant vector. Therefore we\nconsider the scalar $A_iB^i$. 
Since the covariant derivative satisfies the\nLeibniz rule we get\n\\begin{align}\n\\tensor{(A_iB^i)}{_{;j}} =\n\\tensor{A}{_i_;_j}\\tensor{B}{^i}+\\tensor{A}{_i}\\tensor{B}{^i_;_j}\\,.\n\\end{align}\nBut for scalars the covariant derivative is identical to the normal derivative\nso that\n\\begin{align}\n(A_iB^i)_{;j} =(A_iB^i)_{,j}=\n\\tensor{A}{_i_,_j}\\tensor{B}{^i}+\\tensor{A}{_i}\\tensor{B}{^i_,_j}\\,.\n\\end{align}\nInserting the covariant derivative of $A_i$ and comparing the two expressions, we get\n\\begin{equation}\n\\tensor{A}{_i}\\tensor{B}{^i_;_j}=\\tensor{A}{_i}\\left(\\tensor{B}{^i_{,j}}+\\Gamma^i_{kj}\\tensor{B}{^k}\\right)\\,.\n\\end{equation}\nSince $A$ was arbitrary, we can deduce that\n\\begin{equation}\n\\tensor{B}{^i_;_j}=\\tensor{B}{^i_{,j}}+\\Gamma^i_{kj}\\tensor{B}{^k}\\,.\n\\end{equation}\nFor a $(1,1)$-tensor we get\n\\begin{equation}\n\\tensor{A}{^i_j_;_k}=\\tensor{A}{^i_j_{,k}}-\\Gamma^a_{jk}\\tensor{A}{^i_a}+\\Gamma^i_{ak}\\tensor{A}{^a_j}\\,\n.\\end{equation}\nSimilar expressions hold for tensors of arbitrary rank, where each index gives\nan additional term containing a contraction with the affinity $\\Gamma^i_{jk}$.\nWe now want to consider curved spaces. Whether a space is curved cannot immediately be read off from\nthe metric; for example, the metric in polar coordinates does not look flat even though it\ndescribes the ordinary $\\Reals^2$.\n\\subsection{Geodesics}\nA curve is a map $\\gamma:\\Reals\\to M$. The parametrisation is arbitrary, e.g.\n\\begin{equation}\n\\begin{split}\n\\gamma:\\, &\\Reals\\to \\Reals^2\\\\\n& t\\mapsto\n\\frac{1}{2} t^2\n\\begin{bmatrix}\n1 \\\\\n1\n\\end{bmatrix}\\, ,\n\\end{split}\n\\end{equation}\nclearly describes a straight line with $y=x$.\nHowever, we notice that $\\od{\\tensor{\\gamma}{^i}}{t}$ is parallel to\n$\\od[2]{\\tensor{\\gamma}{^i}}{t}$. This gives rise to another possible\ngeneralisation of a straight line.\nIn curved coordinates we have\n\\begin{equation}\n\\nabla \\left( \\dod{\\tensor{x}{^i}}{t} \\right)  = \n\\od[2]{\\tensor{x}{^i}}{t}+\\affin{i}{j}{k}\\od{\\tensor{x}{^j}}{t}\\od{\\tensor{x}{^k}}{t}\n=\\lambda(t)\\od{\\tensor{x}{^i}}{t}\\,.\n\\end{equation}\nSuppose we have a different parametrisation $s(t)$\n\\begin{equation}\n\\od{\\tensor{x}{^i}}{t}=\\od{\\tensor{x}{^i}}{s}\\od{s}{t}\\,,\\quad\n\\od[2]{\\tensor{x}{^i}}{t}=\\od[2]{\\tensor{x}{^i}}{s}\\left(\\dod{s}{t}\\right)^2\n+\\od{\\tensor{x}{^i}}{s}\\dod[2]{s}{t}\\, ,\n\\end{equation} \nthen the equation for a straight line reads as\n\\begin{equation}\n\\left(\\dod[2]{\\tensor{x}{^i}}{s}+\\affin{i}{j}{k}\\dod{\\tensor{x}{^j}}{s}\\dod{\\tensor{x}{^k}}{s}\n\\right)\\left(\\dod{s}{t}\\right)^2+\\dod[2]{s}{t}\\dod{\\tensor{x}{^i}}{s}\n=\\lambda(t)\\dod{\\tensor{x}{^i}}{s}\\dod{s}{t}\\,.\n\\end{equation}\nTo get to the usual form of the geodesic equation we choose $s$ so that\n\\begin{equation}\n\\dod[2]{s}{t}=\\lambda(t)\\dod{s}{t}\\, ,\\label{eq:bedaffpara}\n\\end{equation}\nwhich is an ordinary differential equation of the type $\\ddot{s}=\\lambda\\dot{s}$\nand possesses a solution.\nFor this special choice of parametrisation we have\n\\begin{equation}\n\\dod[2]{\\tensor{x}{^i}}{s}+\\affin{i}{j}{k}\\dod{\\tensor{x}{^j}}{s}\\dod{\\tensor{x}{^k}}{s}\n=0\\,.\\label{eq:affingeod}\n\\end{equation}\nWe therefore have a preferred set of parameters\ncalled affine parameters that satisfy \\eqref{eq:bedaffpara}. The character\nof the differential equation implies that the affine parameter is only\ndefined modulo affine transformations $s\\to as+b$. 
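As a quick consistency check (an addition to the notes): substituting $\\tilde{s}=as+b$ with constants $a\\neq 0$ and $b$ into \\eqref{eq:affingeod} gives\n\\begin{equation*}\n\\dod[2]{\\tensor{x}{^i}}{\\tilde{s}}+\\affin{i}{j}{k}\\dod{\\tensor{x}{^j}}{\\tilde{s}}\\dod{\\tensor{x}{^k}}{\\tilde{s}}\n=\\frac{1}{a^2}\\left(\\dod[2]{\\tensor{x}{^i}}{s}+\\affin{i}{j}{k}\\dod{\\tensor{x}{^j}}{s}\\dod{\\tensor{x}{^k}}{s}\\right)=0\\, ,\n\\end{equation*}\nso affine reparametrisations indeed preserve the form of the geodesic equation. 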
This freedom once more\nreflects some kind of gauge invariance.\n\\begin{remark}\nOnly the symmetric part of the affinity contributes to\nthe geodesic equation \\eqref{eq:affingeod}. Along a geodesic one\ncan always compare lengths by means of the affine parameter, without needing a metric.\nThe 'length' defined this way does not have to coincide with the length given by\nthe metric and is also defined for example for lightlike curves (which cannot\nbe parametrised by the arc length).\n\\end{remark}\nIf we demand that the two definitions of a geodesic, \\eqref{eq:affingeod}\nand \\eqref{eq:geodeq}, coincide, we get a preferred\nconnection.\nThis corresponds to the choice $\\affin{k}{i}{j}=\\cSym{k}{i}{j}$.\nThis is called the \\emph{metric}, \\emph{Levi-Civita} or \\emph{Christoffel\nconnection}, and we will always choose it in the following.\nA general metric compatible\nconnection\\footnote{$\\tensor{\\nabla}{_k}\\tensor{g}{_i_j}=0$} satisfies\n\\begin{equation}\n\\affin{k}{i}{j}=\\cSym{k}{i}{j}+\\tensor{T}{^k_i_j}\n+\\tensor{g}{^k^r}\\left(\\tensor{T}{^s_j_r}\\tensor{g}{_s_i}+\\tensor{T}{^s_i_r}\\tensor{g}{_s_j}\\right)\\,,\n\\end{equation}\nwith the torsion tensor\n\\begin{equation}\n\\tensor{T}{^k_i_j}=\\frac{1}{2}\\left(\\affin{k}{i}{j}-\\affin{k}{j}{i}\\right)\\,.\n\\end{equation}\nTherefore the Christoffel connection is the unique metric compatible\nsymmetric connection.\n\\begin{remark}\nNon-Christoffel connections play a role, for example when dealing with spinors.\n\\end{remark}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{fold1.pdf}\\qquad\\qquad\n \\includegraphics{fold2.pdf}\\qquad\\qquad\n \\includegraphics{fold3.pdf}\n\\caption{The geodesics on a cylinder can be obtained by ``folding'' flat space.}\n\\end{figure}\n%TODO pictures\n\\section{The Riemannian Curvature Tensor}\n%TODO Missing definition of R\nWe can contract indices on the Riemann tensor,\n\\begin{equation}\n\\tensor{R}{^i_i_k_l}=\\pd{{\\Gamma^i_{il}}}{{x^k}}-\\pd{{\\Gamma^i_{ik}}}{{x^l}}\\,,\n\\end{equation}\nwhich is zero for a metric affinity. The Ricci tensor is given by\n\\begin{equation}\n\\tensor{R}{_j_k}=\\tensor{R}{^i_j_k_i}\\, .\n\\end{equation}\nBecause of the symmetries, these are all the independent\ncontractions.\nNotice that at this point we cannot raise or lower indices to contract different indices.\nGiven a metric, the curvature or Ricci scalar is defined as\n\\begin{equation}\nR=\\tensor{R}{^i_i}=\\tensor{g}{^i^j}\\tensor{R}{_j_i}\\, .\n\\end{equation}\n%TODO here we assume we have not metric..\nSince we can express the \\name{Riemann} tensor in terms of commutators, there is\nalso a symmetry that can be derived from the Jacobi identity:\n\\begin{equation}\n\\left[A,\\left[B,C\\right]\\right]\n+\\left[C,\\left[A,B\\right]\\right]\n+\\left[B,\\left[C,A\\right]\\right]=0\\,.\n\\end{equation}\nPhysics can be described in terms of differential equations. We would, for example,\nlike to have an object similar to the Laplacian in curved coordinates.\nHowever $\\partial_i\\partial_i$ is not coordinate invariant. We therefore\nintroduce a metric. The equivalence principle implies that space is locally\nMinkowski.\n\\begin{sidenote}\nThere are certain theories that can be formulated without a metric. An example\nis three-dimensional gravity, which is non-dynamical. 
\n\\end{sidenote}\nSince the connection and the metric are not related, we still don't have a length\nscale on the manifold.\n% \\begin{equation}\n% \\tensor{R}{_i_j_k_l}:=\\tensor{g}{_i_a}\\tensor{R}{^a_j_k_l}\n% \\end{equation}\n% Only useful if $R$ and $g$ are related i.e. metric connection.\n% (Bilder)\n% TODO the following fits better into parallel transport section, we can\n% possibly leave out what is written there\n\n\\subsection{Symmetries of the Riemann Tensor}\nThe Riemann tensor is antisymmetric in its last two indices,\n\\begin{equation}\n\\tensor{R}{^i_j_k_l}=-\\tensor{R}{^i_j_l_k}\\,.\n\\end{equation}\nThe \\name{Bianchi} identity for the Riemann tensor reads\n\\begin{equation}\n\\tensor{R}{^i_j_k_l_{;m}}+\\tensor{R}{^i_j_m_k_{;l}}+\\tensor{R}{^i_j_l_m_{;k}}=0\\,.\n\\end{equation}\nProof: If we wrote it all out, we would have to write 22 terms. Instead we\nuse an RNCS so that the Riemann tensor simplifies to\n\\begin{equation}\n\\tensor{R}{^i_j_k_l}\n=\\tensor{\\partial}{_k}\\affin{i}{j}{l}\n-\\tensor{\\partial}{_l}\\affin{i}{j}{k}\\,.\n\\end{equation}\nWith $\\tensor{\\nabla}{_k}=\\tensor{\\partial}{_k}$ in the RNCS we get\n\\begin{equation}\n\\tensor{R}{^i_j_k_l_{;m}}\n=\\tensor{\\partial}{_m}\\tensor{R}{^i_j_k_l}\n=\\tensor{\\partial}{_k}\\tensor{\\partial}{_m}\\affin{i}{j}{l}\n-\\tensor{\\partial}{_l}\\tensor{\\partial}{_m}\\affin{i}{j}{k}\\,.\n\\end{equation}\nPlugging in gives the result in the RNCS, but since the equation is\ntensorial, it holds in all systems. Notice the similarity to\n\\begin{equation}\n\\tensor{\\partial}{_m}\\tensor{F}{_a_b}\n=\\tensor{\\partial}{_m}\\tensor{\\partial}{_a}\\tensor{A}{_b}\n-\\tensor{\\partial}{_m}\\tensor{\\partial}{_b}\\tensor{A}{_a}\\,.\n\\end{equation}\nTherefore the second set of Maxwell's\nequations \\eqref{eq:maxwell_eqs} is in fact a \\name{Bianchi} identity.\nIf the affinity is symmetric, $\\affin{k}{i}{j}=\\affin{k}{j}{i}$, we further have\nthe identity\n\\begin{equation}\n\\tensor{R}{^i_j_k_l}+\\tensor{R}{^i_l_j_k}+\\tensor{R}{^i_k_l_j}=0\\,.\n\\end{equation}\nGiven a metric $\\tensor{g}{_i_j}$, we can raise and lower\nindices (we evaluate again in the RNCS, where the first derivatives of the metric vanish)\n\\begin{equation}\n\\begin{split}\n\\tensor{R}{_i_j_k_l}\n&=\\tensor{g}{_i_a}\\tensor{R}{^a_j_k_l}\\\\\n&=\\tensor{g}{_i_a}\\left(\\tensor{\\partial}{_k}\\affin{a}{j}{l}\n-\\tensor{\\partial}{_l}\\affin{a}{j}{k}\\right)\\\\\n&=\\tensor{g}{_i_a}\\left(\\tensor{\\partial}{_k}\\tensor{g}{^a^s}\\csym{j}{l}{s}\n-\\tensor{\\partial}{_l}\\tensor{g}{^a^s}\\csym{j}{k}{s}\\right)\\\\\n&=\\tensor{\\partial}{_k}\\csym{j}{l}{i}\n-\\tensor{\\partial}{_l}\\csym{j}{k}{i}\\\\\n&=\\frac{1}{2}\\left(\n\\dmd{{\\tensor{g}{_i_l}}}{2}{\\tensor{x}{^j}}{}{\\tensor{x}{^k}}{}\n+\\dmd{{\\tensor{g}{_j_k}}}{2}{\\tensor{x}{^i}}{}{\\tensor{x}{^l}}{}\n-\\dmd{{\\tensor{g}{_i_k}}}{2}{\\tensor{x}{^j}}{}{\\tensor{x}{^l}}{}\n-\\dmd{{\\tensor{g}{_j_l}}}{2}{\\tensor{x}{^i}}{}{\\tensor{x}{^k}}{}\\right)\n\\end{split}\n\\end{equation}\nFrom this we can immediately extract the symmetries\n\\begin{align}\n\\tensor{R}{_i_j_k_l}&=-\\tensor{R}{_j_i_k_l}\\\\\n\\tensor{R}{_i_j_k_l}&=-\\tensor{R}{_i_j_l_k}\n\\end{align} \nIf we further introduce multi-indices $A=(i,j)$, $B=(k,l)$ (in four dimensions\nthere are, by antisymmetry, six independent values for each of $A$ and $B$), we find\n\\begin{equation}\n\\tensor{R}{_A_B} = \\tensor{R}{_B_A}\\,.\n\\end{equation}\nSo $R$ can be thought of as a symmetric $6\\times 6$ matrix.\n\\begin{table}\n    \\centering\n        \\caption{Number of independent components of Riemann\n    and \\name{Ricci} tensor.\\label{tab:Nindcomp}}\n    \\begin{tabulars}{rrr}\n      \t\\toprule\n\t\tdimension&Riemann tensor &Ricci tensor 
\\\\\n\t\t\\midrule\n\t\t1&0&0\\\\\n\t\t2&1&1\\\\\n\t\t3&6&6\\\\\n\t\t4&20&10\\\\\n\t\t$n$&$\\frac{1}{12}n^2(n^2-1)$&$\\frac{1}{2}n(n+1)$ (for $n\\geq 3$)\\\\\n\t\t\\bottomrule\n    \\end{tabulars}\n\\end{table}\nTable~\\ref{tab:Nindcomp} lists the number of independent components of the\ncurvature tensor and the \\name{Ricci} tensor in various dimensions. Accordingly, a\none-dimensional space is always flat, a two-dimensional space is characterised by the\ncurvature scalar $R$ alone, and in three dimensions the Ricci tensor is\nsufficient to determine the Riemann tensor. Therefore four is the lowest dimension in\nwhich the Riemann tensor contains additional information about the curvature of\nspace.\n", "meta": {"hexsha": "3aa07dc0433e438a47a3eb031be749d4d94b0016", "size": 26249, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/04-differential-geometry.tex", "max_stars_repo_name": "Bigben37/GeneralRelativity", "max_stars_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-31T13:18:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-31T13:18:57.000Z", "max_issues_repo_path": "src/04-differential-geometry.tex", "max_issues_repo_name": "QuantumDancer/GeneralRelativity", "max_issues_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/04-differential-geometry.tex", "max_forks_repo_name": "QuantumDancer/GeneralRelativity", "max_forks_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1902356902, "max_line_length": 164, "alphanum_fraction": 0.7244085489, "num_tokens": 8434, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8652240930029118, "lm_q2_score": 0.6477982315512489, "lm_q1q2_score": 0.5604906373428196}}
{"text": "\\Lecture{Jayalal Sharma}{Sept 19, 2020}{07}{From Bijections to PIE}{Anshu Yadav}{$\\alpha$}{JS}\n\n\\section{Introduction}\nIn this lecture, we will continue with the use of bijections and use it in formally proving the two identities that we discussed in class and then see their relationship to the Principal of Inclusion and Exclusion. \n\n\\section{The Identities}\nRecall that we proved following two identities in one of the discussion sessions\n\\begin{align}\n     \\sum_{k=0}^n (-1)^k{n\\choose k} &= 0 \\label{REV1}\\\\ \n     \\sum_{k=0}^m (-1)^k{n\\choose k} &= (-1)^m{n-1\\choose m} \\label{REV2}\n\\end{align}\nIn this section, we will see the proofs for the above equations is detail\n\\subsection{Proof for Eqn. \\eqref{REV1}} \\label{subsec:identity-even-odd-1}\n$$\\sum_{k=0}^n (-1)^k{n\\choose k} = 0$$\n\\begin{proof}\n    The LHS counts the number of even sized subsets of $[n]$ with positive sign and odd size subsets with negative sign. Then we proved the result using bijection between even sized and odd sized subsets of $[n]$. Hence, we get 0 on RHS. Let us formally define the bijection here.\n    \n    Let $E$ be the set of all even sized subsets of $[n]$ and $O$ be the set of all odd sized subsets of $[n]$. Then the bijection  $\\phi_i:E\\rightarrow O$ is defined with respect to an element $i\\in[n]$ as follows.\n    \n    Let $X\\subseteq [n]$, such that $|X|$ is even. Then \n    \\[\n        \\phi_i(X) = \n        \\begin{cases}\n            X\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in X\\\\\n            X\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in X\n        \\end{cases}\n    \\]\n    \\underline{Proof of bijection:}\n    \\begin{description}\n        \\item \\textit{Well-defined:} Given any even sized subset $X$, there are two possibilities: (i) $i\\in X$, (ii) $i\\not\\in X$. In first case, $i$ is removed from $X$, hence its size reduces by one and becomes odd. In the second case, $i$ is added, hence the size of the subset increases by one and becomes odd. Hence, $\\phi$ is well defined.\n        \\item \\textit{Injective:} Let $X$ and $X'$ be two distinct subsets of $[n]$. Then $\\exists j\\in[n]$ such that $j$ is present in exactly one of the two subsets. Wlog, let $j\\in X$ and $j\\not\\in X'$. Now, if $j\\neq i$, then $j\\in \\phi(X)$ and $j\\not\\in \\phi(X')$ and hence $\\phi(X)\\neq \\phi(X')$. On the other hand, if $j=i$, then $j\\not\\in \\phi(X)$ and $j\\in \\phi(X')$. Hence, $\\phi(X)\\neq \\phi(X')$.  \n        \\item \\textit{Surjective:} Let $Y\\in O$ be an odd sized subset of $[n]$. From $Y$, we can recover $X$ such that $\\phi(X) = Y$ by the same operation as in $\\phi$. That is, \n        \\[\n        X= \\begin{cases}\n            Y\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in Y\\\\\n            Y\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in Y\n        \\end{cases}\n        \\]\n        It can easily be verified that in both the cases, $X$ is an even sized subset of $[n]$.\n    \\end{description}\n    This completes the proof.\n\\end{proof}\n%\n%\n\\subsection{Proof for Eqn. \\eqref{REV2}} \\label{subsec:identity-even-odd-2}\n$$\\sum_{k=0}^m (-1)^k{n\\choose k} = {n-1\\choose m}$$\n\\begin{proof}\nNow we look at the second identity which is even more interesting. 
To prove this identity we use an \\emph{almost bijection}, i.e. a bijection between one set and a subset of another set.\n\nIn words, the identity to prove can be described as\n$$\\Large\\substack{\\# \\textrm{ of even sized subsets of  $[n]$}\\\\  \\textrm{of size at most $m$}} - \\substack{\\# \\textrm{ of odd sized subsets of $[n]$} \\\\ \\textrm{of size at most $m$}} = (-1)^m{n-1\\choose m}.$$ Clearly, there cannot be a bijection between the two sets (even sized subsets and odd sized subsets) in this case, since their difference is non-zero. This is where we use an almost bijection.\n\nWe use the following case analysis. \n\\begin{description}\n\\item \\underline{\\textbf{Case 1:} $m$ is even:} Then the identity to prove is:\n\\begin{equation} \\label{eq:even-odd}\n    \\sum_{k=0}^m (-1)^k{n\\choose k} = {n-1\\choose m}\n\\end{equation}\nThis can be interpreted as \n\\begin{equation} \\label{eq:even-odd-1}\n    \\sum_{\\substack{k=0,\\\\k\\text{ is even}}}^m {n\\choose k} -  \\sum_{\\substack{k=1,\\\\k\\text{ is odd}}}^{m-1} {n\\choose k} = {n-1\\choose m}\n\\end{equation}\nLet $E$ be the set of all the even sized subsets of $[n]$ of size at most $m$ and $O$ be the set of odd sized subsets of $[n]$ having size at most $m-1$. Then, Eqn.~\\eqref{eq:even-odd-1} can be intuitively interpreted as follows: there is a subset $E'\\subseteq E$, such that $E'$ is in bijection with $O$ and $|E\\setminus E'| = {n-1\\choose m}$. Thus, we have three tasks at hand:\n\\begin{itemize}\n    \\item identify the set $E'$,\n    \\item define and prove the bijection between $E'$ and $O$, and\n    \\item prove that $|E\\setminus E'| = {n-1\\choose m}$.\n\\end{itemize}\n\\underline{Defining the set $E'$:} Set $E'$ is the union of two sets (here $i\\in[n]$ is an arbitrary fixed element, as in Subsection~\\ref{subsec:identity-even-odd-1}): \n$$E' = \\{X\\subseteq [n]: |X| \\textrm{ is even and } |X|\\le m-2\\}\\cup\\{X\\subseteq [n]: i\\in X \\textrm{ and } |X| = m\\}$$\n\\underline{Defining the bijection:} The bijection $\\phi:E'\\rightarrow O$ is defined in the same way as we defined it for the first identity. That is, for $X\\in E'$,\n\\[\n\\phi(X) = \n\\begin{cases}\nX\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in X\\\\\nX\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in X\n\\end{cases}\n\\]\n\\underline{Proof of bijection:}\n\\begin{description}\n\\item \\textit{Well-defined:} Let $X\\in E'$, then (i) if $|X|\\le m-2$, then $|\\phi(X)|$ is odd and $|\\phi(X)|\\le m-1$, (ii) if $|X| = m$, then $i\\in X$, hence $\\phi(X) = X\\setminus \\{i\\}$. This implies $|\\phi(X)| = m-1$. Thus, in both the cases $\\phi(X)\\in O$.\n\\item \\textit{Injective:} Since the function is the same as in the previous case, the same argument for injectivity works.\n\\item \\textit{Surjective:} Let $Y\\in O$ be an odd sized subset of $[n]$. From $Y$, we can recover $X\\in E'$ such that $\\phi(X) = Y$ by the same operation as in $\\phi$. That is, \n \\[\n X= \n \\begin{cases}\nY\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in Y\\\\\nY\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in Y\n \\end{cases}\n \\]\n It can easily be verified that in both the cases, $|X|$ is even. \n In the first case, since $|Y|\\le m-1$, we have $|X|\\le m-2$, hence $X\\in E'$. In the second case, since $i\\not\\in Y$ and $|Y|\\le m-1$, we have $|X|\\le m$ and $i\\in X$; so either $|X|\\le m-2$, or $|X| = m$ with $i\\in X$, and in both cases $X\\in E'$ by definition.\n\\end{description}\nThis proves the bijection between $E'$ and $O$. \n\n\\underline{Proof for: $|E\\setminus E'| = {n-1\\choose m}$}\n\nFrom the above definitions, $E\\setminus E' = \\{X\\subseteq [n]: |X| = m, i\\not\\in X\\}$. 
This can be interpreted as $E\\setminus E' = \\{X\\subseteq [n]\\setminus \\{i\\}: |X| = m\\}$. Hence, $|E\\setminus E'| = {n-1\\choose m}$.\n\n\\item \\underline{\\textbf{Case 2:} $m$ is odd:} In this case the identity to prove is:\n\\begin{equation}\n\\label{eq:even-odd-3}\n    \\sum_{k=0}^m (-1)^k{n\\choose k} = - {n-1\\choose m}\n\\end{equation}\nThis can be interpreted as \n\\begin{equation}\n\\label{eq:even-odd-2}\n    \\sum_{\\substack{k=0,\\\\k\\text{ is even}}}^{m-1} {n\\choose k} -  \\sum_{\\substack{k=1,\\\\k\\text{ is odd}}}^{m} {n\\choose k} = -{n-1\\choose m}\n\\end{equation}\nEquivalently,\n\\begin{equation}\n\\label{eq:odd-even-1}\n    \\sum_{\\substack{k=1,\\\\k\\text{ is odd}}}^{m} {n\\choose k} - \\sum_{\\substack{k=0,\\\\k\\text{ is even}}}^{m-1} {n\\choose k}  = {n-1\\choose m}\n\\end{equation}\nThis time the set of odd sized subsets of $[n]$ of size at most $m$ is bigger than the set of even sized subsets of $[n]$ of size at most $m$.\nThe proof is the same as that for the case of even $m$. \nLet $E$ be the set of all the even sized subsets of $[n]$ of size at most $m-1$ (since $m$ is odd) and $O$ be the set of odd sized subsets of $[n]$ having size at most $m$. Then~\\eqref{eq:odd-even-1} can be interpreted as follows: there is a subset $O'\\subseteq O$, such that $E$ is in bijection with $O'$ and $|O\\setminus O'| = {n-1\\choose m}$.\n  \nThus, we have three tasks at hand:\n\\begin{itemize}\n    \\item identify the set $O'$,\n    \\item define and prove the bijection between $E$ and $O'$, and\n    \\item prove that $|O\\setminus O'| = {n-1\\choose m}$.\n\\end{itemize}\n\\underline{Defining the set $O'$:} Set $O'$ to be the union of two sets: \n$$O' = \\{Y\\subseteq [n]: |Y| \\text{ is odd and } |Y|\\le m-2\\}\\cup\\{Y\\subseteq [n]: i\\in Y \\text{ and } |Y| = m\\}$$\n\\underline{Defining the bijection:} The bijection $\\phi:E\\rightarrow O'$ is defined in the same way as we defined it for the first identity. That is, for $X\\in E$,\n\\[\n\\phi(X) = \n\\begin{cases}\nX\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in X\\\\\nX\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in X\n\\end{cases}\n\\]\n\\underline{Proof of bijection:}\n\\begin{description}\n\\item \\textit{Well-defined:} Let $X\\in E$, then $\\phi(X)$ is of odd size because an element is either added to or removed from $X$, which is of even size. Now, (i) if $i\\in X$, then $\\phi(X) = X\\setminus\\{i\\}$. Hence, $|\\phi(X)|\\le m-2$ (because $|X|\\le m-1$), which implies $\\phi(X)\\in O'$. (ii) if $i\\not\\in X$, then $\\phi(X) = X\\cup \\{i\\}$. This implies $|\\phi(X)| \\le m$; so either $|\\phi(X)|\\le m-2$, or $|\\phi(X)| = m$ with $i\\in \\phi(X)$, and in both cases $\\phi(X)\\in O'$. This proves that $\\phi$ is well-defined.\n\\item \\textit{Injective:} Since the function is the same as in Subsection~\\ref{subsec:identity-even-odd-1}, the same argument for injectivity works.\n\\item \\textit{Surjective:} Let $Y\\in O'$ be an odd sized subset of $[n]$. From $Y$, we can recover $X\\in E$ such that $\\phi(X) = Y$ by the same operation as in $\\phi$. That is, \n \\[\n X= \n \\begin{cases}\nY\\setminus \\{i\\} & ~~~~~\\text{ if } i\\in Y\\\\\nY\\cup\\{i\\} & ~~~~~\\text{ if } i\\not\\in Y\n \\end{cases}\n \\]\n It can easily be verified that in both the cases, $|X|$ is even. \n In the first case, $|Y|\\le m$ and hence $|X|\\le m-1$. So, $X\\in E$. In the second case, since $i\\not\\in Y$, $|Y|\\le m-2$ (by definition) and hence $|X|\\le m-1$. Hence $X\\in E$.\n\\end{description}\nThis proves the bijection between $E$ and $O'$. 
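Before the counting argument, a small enumeration check of this construction (an illustrative addition, in Python):\n\\begin{verbatim}\n# For small n and odd m, verify |O \\ O'| = C(n-1, m) with i = 1.\nfrom itertools import combinations\nfrom math import comb\n\nn, m, i = 6, 3, 1\nO = [s for k in range(1, m + 1, 2)\n     for s in combinations(range(1, n + 1), k)]\nOp = [s for s in O if len(s) <= m - 2 or (len(s) == m and i in s)]\nassert len(O) - len(Op) == comb(n - 1, m)\n\\end{verbatim}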
\n\n\\underline{Proof for: $|O\\setminus O'| = {n-1\\choose m}$}\\\\\nFrom the above definitions, $O\\setminus O' = \\{Y\\subseteq [n]: |Y| = m, i\\not\\in Y\\}$. This can be interpreted as $O\\setminus O' = \\{Y\\subseteq [n]\\setminus \\{i\\}: |Y| = m\\}$. Hence, $|O\\setminus O'| = {n-1\\choose m}$.\n\\end{description}\nThis completes the proof.\n\\end{proof}\nThis proves both identities.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Principle of Inclusion and Exclusion}\nSuppose we are given $n$ sets $A_1, A_2, \\ldots, A_n\\subseteq G$, where $G$ is some ground set. We are interested in finding the size of $A= A_1\\cup A_2\\cup\\ldots\\cup A_n$. This is a very abstract scenario and we will see specific examples later, but here we are going to see a classic use of the above identities in deriving this number.\n\nSo, we are interested in finding $|A| = |A_1\\cup A_2\\cup\\ldots\\cup A_n|$. \n\nHere is a thought process. Clearly, we can add the sizes of the individual sets as \n$|A| = |A_1|+|A_2|+\\ldots +|A_n|$, but this will over-count if some elements are present in more than one set. So, we need to subtract the double counting. For example, if $x\\in A_1$ and $x\\in A_2$, then it gets counted twice, and to compensate for that we need to subtract $|A_1\\cap A_2|$; so we might attempt $|A| = |A_1|+|A_2|+\\ldots +|A_n| - \\sum_{1\\le i < j\\le n}|A_i\\cap A_j|$. But then, if $x$ is present in $A_1, A_2$ and $A_3$, then it is under-counted (added thrice and subtracted thrice, hence not counted at all). So, again we need to compensate for that by adding $\\sum_{1\\le i < j < k\\le n}|A_i\\cap A_j\\cap A_k|$ to the above expression, and this sequence goes on for any element being present in $k\\le n$ sets; finally we get the expression for $|A|$ as follows\n\\begin{equation} \\label{eq:pie-1}\n    |A| = |A_1|+\\cdots+|A_n|-\\sum_{1\\le i< j\\le n}|A_i\\cap A_j| +\\sum_{1\\le i<j<k\\le n}|A_i\\cap A_j\\cap A_k| -\\cdots + (-1)^{n+1}|A_1\\cap A_2\\cap\\cdots\\cap A_n|\n\\end{equation}\nFor $n=2$, the above expression gives\n$$|A| = |A_1|+|A_2|-|A_1\\cap A_2|$$\nwhich we all must have seen before and can easily prove using a Venn diagram. \n\nIn this section, we will formally prove the above expression for general $n$ using the two identities we proved in the previous section.\n\\begin{proof}\nConsider any $x\\in A_1\\cup A_2\\cup\\cdots\\cup A_n$. Let $x$ appear in $k$ of the $A_i$'s. 
Then let us see how $x$ gets counted:\n\\begin{itemize}\n    \\item[-] $|A_1|+|A_2|+\\cdots +|A_n|$: counts $x$ $k$ times (added)\n    \\item[-] $\\sum_{1\\le i<j\\le n}|A_i\\cap A_j|$: counts $x$ ${k\\choose 2}$ times (subtracted)\n    \\item[-] $\\sum_{1\\le i<j<l\\le n}|A_i\\cap A_j\\cap A_l|$: counts $x$ ${k\\choose 3}$ times (added) \n    \\item[-] and so on $\\ldots$\n\\end{itemize}\nNotice that in terms involving intersections of more than $k$ sets, $x$ never appears.\n\nThus, \n\\begin{eqnarray*} \n{\\Large\\substack{\\# \\text{of times $x$}\\\\ \\text{gets counted}}} &=& {k\\choose 1}-{k\\choose 2}+{k\\choose 3}-\\cdots +(-1)^{k+1}{k\\choose k}\\\\\n&=&-{k\\choose 0}+{k\\choose 1}-{k\\choose 2}+{k\\choose 3}-\\cdots +(-1)^{k+1}{k\\choose k}+{k\\choose 0}\\\\\n&=&-\\sum_{i=0}^k(-1)^{i}{k\\choose i}+{k\\choose 0}\\\\\n%&&\\text{from~\\eqref{eq:identity-1}, $\\sum_{i=0}^k(-1)^k{k\\choose i} = 0$}\\\\\n&=&{k\\choose 0} ~~~~~~~~~~~~~~~~~~~~~~~\\text{from ~\\eqref{REV1}}\\\\\n&=& 1\n\\end{eqnarray*}\nThus, irrespective of the value of $k$, any element $x\\in A_1\\cup A_2\\cup\\cdots\\cup A_n$ is counted exactly once. Hence, every $x\\in A_1\\cup A_2\\cup\\cdots\\cup A_n$ is counted exactly once on the RHS of~\\eqref{eq:pie-1}.\n\nThis proves the PIE.\n\\end{proof}\nNow let us look at the application of the second identity that we derived. This identity is used in deriving a version of PIE which appears very naturally in several contexts. Let us look at one such example.\n\nPIE says that if we want to derive $|A_1\\cup A_2\\cup\\cdots\\cup A_n|$, then the following expression does not give the correct count:\n$$|A_1\\cup A_2\\cup\\cdots\\cup A_n| = |A_1|+|A_2|+\\cdots+|A_n|$$ \nBut we can ask, does this expression give a lower or an upper bound? As we saw, this does over-counting, hence we can write\n$$|A_1\\cup A_2\\cup\\cdots\\cup A_n| \\le |A_1|+|A_2|+\\cdots+|A_n|$$\nNow, suppose we include the next component, i.e. $$|A_1|+|A_2|+\\cdots+|A_n|-\\sum_{1\\le i<j\\le n}|A_i\\cap A_j|$$\nAgain from PIE we know that this also does not give the correct count. But we ask the same question again: does it give a lower or an upper bound? As we saw, this term can do some over-subtraction, and hence this expression gives a lower bound. That is, \n$$|A_1\\cup A_2\\cup\\cdots\\cup A_n| \\ge |A_1|+|A_2|+\\cdots+|A_n|-\\sum_{1\\le i<j\\le n}|A_i\\cap A_j|$$\nSimilarly, \n$$|A_1\\cup A_2\\cup\\cdots\\cup A_n| \\le |A_1|+|A_2|+\\cdots+|A_n|-\\sum_{1\\le i<j\\le n}|A_i\\cap A_j|+\\sum_{1\\le i<j<k\\le n}|A_i\\cap A_j\\cap A_k|$$\nand we continue like this. \n\nLet us now formally establish this observation. We use the same technique that we used in the proof of PIE. \n\nLet $x$ appear in $k$ of the sets $A_1, A_2, \\ldots, A_n$. Suppose we cut off the PIE after intersections of size $m\\le n$. Then \n\\begin{eqnarray*} \n{\\Large\\substack{\\# \\text{of times $x$}\\\\ \\text{gets counted}}} &=& {k\\choose 1}-{k\\choose 2}+\\cdots+(-1)^{m+1}{k\\choose m}\\\\\n%&=&-{k\\choose 0}+{k\\choose 1}+{k\\choose 2}-{k\\choose 3}+\\cdots +(-1)^{k+1}{k\\choose k}+{k\\choose 0}\\\\\n&=&-\\sum_{i=0}^m(-1)^{i}{k\\choose i}+{k\\choose 0}\\\\\n%&&\\text{from~\\eqref{eq:identity-1}, $\\sum_{i=0}^k(-1)^k{k\\choose i} = 0$}\\\\\n&=& 1+(-1)^{m+1}{k-1\\choose m} ~~~~~~~~~~~~~~~~~~~~~~~\\text{from ~\\eqref{REV2}}\n\\end{eqnarray*}\nThus, $x$ is over counted or under counted depending on whether the second term on the RHS is positive or negative. 
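The following small experiment illustrates these alternating bounds on random sets (an illustrative addition, in Python):\n\\begin{verbatim}\n# PIE truncated after m levels: upper bound for odd m, lower for even m.\nfrom itertools import combinations\nimport random\n\nrandom.seed(0)\nsets = [set(random.sample(range(30), 12)) for _ in range(4)]\nunion = set().union(*sets)\n\npartial = 0\nfor m in range(1, len(sets) + 1):\n    term = sum(len(set.intersection(*c)) for c in combinations(sets, m))\n    partial += (-1) ** (m + 1) * term\n    if m % 2 == 1:\n        assert partial >= len(union)   # odd cutoff over-counts\n    else:\n        assert partial <= len(union)   # even cutoff under-counts\nassert partial == len(union)           # the full PIE is exact\n\\end{verbatim}\n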
Let us analyze this for two cases.\n\\begin{description}\n\\item Case 1: $k\\le m$\n\nSince $x$ appears in only $k\\le m$ sets and we cut off only after intersections of size $m$, all possible intersections involving this particular $x$ are added and subtracted, and $x$ cannot appear in any of the intersections of more than $k$ sets. Hence, $x$ is neither under counted nor over counted. In the expression above, ${k-1\\choose m} = 0$ since $k-1 < m$. Hence,\n$$\\# \\text{of times $x$ is counted } = 1$$\n\\item Case 2: $k>m$\n\nIn this case, $x$ can be under counted or over counted depending upon whether $m$ is even or odd.\nIf $m$ is odd then $x$ is over counted.\n\nIf $m$ is even then $x$ is under counted.\n\\end{description}\nNotice that, based on the parity of $m$, either every $x\\in A_1\\cup A_2\\cup\\cdots\\cup A_n$ is correctly counted or over counted, or every $x$ is correctly counted or under counted. Thus, whether a PIE cut off after $m$ levels of intersections gives a lower bound or an upper bound depends only on the parity of $m$. These bounds are known as \\emph{Bonferroni's inequalities}. \n\\begin{remark} \n    We used identity~\\eqref{REV1} to prove PIE. We can actually go the other way round as well, i.e. we can use PIE to prove this identity too.\n\\end{remark}\nThis completes this lecture. In the next lecture we will look at some applications of PIE.\n", "meta": {"hexsha": "0f10f398d9f057ad999a9797aa2f577478c242ea", "size": 16590, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lecture08.tex", "max_stars_repo_name": "narasimhasai07/theory-toolkit", "max_stars_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lecture08.tex", "max_issues_repo_name": "narasimhasai07/theory-toolkit", "max_issues_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lecture08.tex", "max_forks_repo_name": "narasimhasai07/theory-toolkit", "max_forks_repo_head_hexsha": "fde5621c515c2e05e3d91e8b021b745ea6ea2075", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.4142259414, "max_line_length": 770, "alphanum_fraction": 0.651416516, "num_tokens": 5966, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6477982179521103, "lm_q2_score": 0.8652240860523328, "lm_q1q2_score": 0.5604906210739445}}
{"text": "\\documentclass[12pt]{article}\n\n\\title{CSC 420 - Assignment 3}\n\\author{Robert Krency, kre1188@calu.edu \\\\}\n\\date{\\today}\n\n\\usepackage{tikz}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{tabularx}\n\\usepackage{multicol}\n\\usepackage{algpseudocode}\n\\usepackage{algorithm}\n\\usepackage{listings}\n\n% Geometry \n\\usepackage{geometry}\n\\geometry{letterpaper, left=20mm, top=20mm, right=20mm, bottom=20mm}\n\n\n% Add vertical spacing to tables\n\\renewcommand{\\arraystretch}{1.4}\n\n% Macros\n\\newcommand{\\definition}[1]{\\underline{\\textbf{#1}}}\n\n\\newenvironment{rcases}\n  {\\left.\\begin{aligned}}\n  {\\end{aligned}\\right\\rbrace}\n\n% Begin Document\n\\begin{document}\n\n\\maketitle\n\n\\section{Question 1}\n\nThe Magic 3x3 Square is a 3x3 grid of the digits 1 to 9. Each grid cell has a unique number, and when arranged in a certain order,\nthe sum of each row, column, and diagonal will each be 15. \n\nConsider the following square, with letters marking each grid tile, and a possible solution:\n\n\\begin{center}\n\n\\begin{tabular}{c c}\n\n\\begin{tabular}{| c | c | c |}\n    \\hline \n    A & B & C \\\\ \\hline\n    D & E & F \\\\ \\hline\n    G & H & I \\\\ \\hline\n\\end{tabular}\n\n&\n\n\\begin{tabular}{| c | c | c |}\n    \\hline \n    6 & 1 & 8 \\\\ \\hline\n    7 & 5 & 3 \\\\ \\hline\n    2 & 9 & 4 \\\\ \\hline\n\\end{tabular}\n\n\\end{tabular}\n\\end{center}\n\nThis problem can be formulated as a Constraint Satisfaction Problem, with the following properties:\n\n\\begin{itemize}\n    \\item Variables: $X$ = \\{ A, B, C, D, E, F, G, H, I \\}\n    \\item Domains: $X_i$ = \\{ 1, 2, 3, 4, 5, 6, 7, 8, 9 \\}\n    \\item Constraints: $C$ = \\{\n    \\begin{itemize}\n        \\item AllDifferent\\{ A, B, C, D, E, F, G, H, I \\},\n        \\item A + B + C = 15,\n        \\item D + E + F = 15,\n        \\item G + H + I = 15,\n        \\item A + D + G = 15,\n        \\item B + E + H = 15,\n        \\item C + F + I = 15,\n        \\item A + E + I = 15,\n        \\item C + E + G = 15 \\}\n    \\end{itemize}\n\\end{itemize}\n\n\\pagebreak\n\n\\section{Question 2}\n\nThe attached MiniZinc script produces the following output:\n\n\\begin{lstlisting}\n    Running Magic15Square.mzn\n    847msec\n\n    4 9 2 \n    3 5 7 \n    8 1 6 \n    ----------\n    Finished in 847msec.\n\\end{lstlisting}\n\n\n\\end{document}\n", "meta": {"hexsha": "03b41ad083e0bb9f41c37f467b8e4a87b9126908", "size": 2186, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSC 420/Assignments/Assignment 3/a3.tex", "max_stars_repo_name": "Anthony91501/uwp-2022-spring", "max_stars_repo_head_hexsha": "2ca2b95d3a502551870fe3203ba4e97969d2835f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-01-15T21:17:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T05:53:56.000Z", "max_issues_repo_path": "CSC 420/Assignments/Assignment 3/a3.tex", "max_issues_repo_name": "Anthony91501/uwp-2022-spring", "max_issues_repo_head_hexsha": "2ca2b95d3a502551870fe3203ba4e97969d2835f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-29T01:30:06.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-31T17:51:39.000Z", "max_forks_repo_path": "CSC 420/Assignments/Assignment 3/a3.tex", "max_forks_repo_name": "Anthony91501/uwp-2022-spring", "max_forks_repo_head_hexsha": "2ca2b95d3a502551870fe3203ba4e97969d2835f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, 
"max_forks_repo_forks_event_min_datetime": "2022-01-28T19:39:27.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-23T21:09:09.000Z", "avg_line_length": 21.0192307692, "max_line_length": 130, "alphanum_fraction": 0.6148215919, "num_tokens": 782, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5604883442281727}}
{"text": "\n\n\\documentclass{article}\n\\title{Distributions and copulas in GOSM}\n\\author{Ma\\\"{e}l Forcier}\n\\date{April 2017}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{mathtools}\n\\usepackage{amsfonts}\n\\usepackage{amsthm}\n\\usepackage{relsize}\n\\usepackage{dsfont}\n\\usepackage[all]{xy}\n\\usepackage{float}\n\\usepackage{color}\n\\usepackage{comment}\n\\newcommand{\\mf}[1]{\\textcolor{red}{$\\textrm{Mael: }${#1}}}\n\\newcommand{\\code}[2]{\\texttt{#1}}\n\n\\begin{document}\n   \\maketitle\n   \\section{Introduction}\n\n\tWe present a python library to use distribution with copulas.\n\n\n\tWe want to model the random properties of a data. In this section we will take the example of the production of wind power that we want to model by a gaussian.\n\n\tToday we do not know what wind production there would be tomorrow., so we model this wind production by a random variable $X$ that has a lot of properties.\n\n\t\\section{Mathematical Definitions}\n\t\\newtheorem{definition}{Definition}\n\t\\newtheorem{property}{Property}\n\t\\begin{definition}\n\tThe cumulative density function (cdf) of a random variable $X \\in \\mathbb{R}$ is the function $F : \\mathbb{R} \\to [0,1]$ where :\n\t\\begin{equation*}\n\tF(x) = \\mathbb{P}(X\\leq x)\n\t\\end{equation*}\n\t\\end{definition}\n\n\tThis definition also works when X is a multidimensional random\tvariable i.e. $\\in \\mathbb{R}^d$ if we use the notation $(x_1,...,x_d) \\leq (y_1,...,y_d) \\Leftrightarrow \\forall i, x_i \\leq y_i$\n\n\n\t\\begin{definition}\n\tIf the cdf of $X \\in \\mathbb{R}$ is differentiable, the probability density function (pdf) of X is the function $f : \\mathbb{R} \\to [0,1]$ where :\n\t\\begin{equation*}\n\tf(x) = \\frac{dF(x)}{dx} = \\frac{d\\mathbb{P}(X\\leq x)}{dx}\n\t\\end{equation*}\n\tIf $X \\in \\mathbb{R}^d$ and F is regular enough, the probability density function pdf of X is the function $f : \\mathbb{R} \\to [0,1]$ :\n\t\\begin{equation*}\n\tf(x) = \\frac{d^d F(x)}{dx_1...dx_d}\n\t\\end{equation*}\n\t\\end{definition}\n\n\n\n\tEach one of these two functions gives all the informations about how the random variable X behaves. With them, we can compute different charasteristic values of the distribution.\n\n\t\\begin{definition}\n\tThe mean of a random variable $X \\in \\mathbb{R}^d$ is :\n\t\\begin{equation*}\n\t\\mathbb{E}(X) = \\int_{\\mathbb{R}^d} xf(x)dx\n\t\\end{equation*}\n\t\\end{definition}\n\n\t\\begin{definition}\n\tThe variance of a random variable $X \\in \\mathbb{R}^d$ is :\n\t\\begin{equation*}\n\tVar(X) = \\mathbb{E}((X-\\mathbb{E}(X))^2) = \\int_{\\mathbb{R}^d} (x-\\mathbb{E}(X))^2f(x)dx\n\t\\end{equation*}\n\t\\end{definition}\n\n\t\\begin{property}\n\tIf $X \\in \\mathbb{R}$ is a random variable with a continuous cdf F,\n\t\\begin{equation*}\n\t\\text{then } U = F(X) \\text{ is a uniform random variable in [0,1]}\n\t\\end{equation*}\n\t\\end{property}\n\n\t\\section{Distribution user manual}\n\n\tTo model the random properties of data (for example the production of wind power or wind forecast errors). We define distribution as a python object that contains several methods. NB : This distribution python object does not correspond strictly to the mathematical definition of a distribution because it contains several different methods that describe many things linked to the probability law of the variable. 
To simplify, when we talk about a distribution, it will refer to this distribution Python object.\n\n\t\\begin{figure}[H]\n\\[\n   \t\\xymatrix{\n   \t\t & \\boxed{input\\_data} \\ar[dd]_{\\_\\_init\\_\\_} & \\\\\n   \t\t & & \\\\\n   \t\t & \\boxed{parameters} \\ar[ldd]_{generates\\_X} \\ar[rdd]^{calculus} & \\\\\n   \t\t & & \\\\\n   \t\t\\boxed{sample} \\ar@/_/[rr]_{counting} & & \\boxed{functions} \\ar@/_/[ll]_{CDF^{-1}(Uniform\\text{ }variable)}\n   \t}\n   \\]\n   \\caption{Simplified diagram of how a distribution object works}\n\t\\end{figure}\n\n\t\\subsection{Methods and parameters of a distribution object}\n\t\\subsubsection{cdf, pdf and other functions}\n\tThe methods cdf and pdf compute the two characteristic functions defined above. If the object \\texttt{distrobject} represents a random variable X with cdf F and pdf f, \\texttt{distrobject.cdf(x)} will return the value of $F(x)$ and \\texttt{distrobject.pdf(x)} will return the value of $f(x)$. Most of the time, these values are calculated using analytical formulas.\n\n\tFor example, the pdf of a normal distribution is calculated using the formula\n\t\\begin{equation*}\n\tf(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right)\n\t\\end{equation*}\n\n\tTo perform other computations, and depending on the class of the object, one might need other functions linked to the cdf and the pdf. For example, the method \\texttt{cdf\\_inverse} will compute the inverse of the cdf. \\texttt{distrobject.cdf\\_inverse(x)} will return the value $F^{-1}(x)$.\n\n\n\t\\subsubsection{mean, var and other parameters}\n\tOne can see that these formulas depend on parameters. In the previous normal case, they are $\\mu$ and $\\sigma$. \\newline\n\n\tAlthough the mean and the variance are values that we can calculate using the cdf or the pdf, they can also be considered as parameters. In the normal case, $\\mu$ is also the mean and $\\sigma^2$ the variance. If the object \\texttt{distrobject} represents a random variable X, \\texttt{distrobject.mean} will return the value of $\\mathbb{E}(X)$ and \\texttt{distrobject.var} will return the value of $Var(X)$. \\newline\n\n\t Depending on the class of the object, it could have other parameters that would be explained in detail while presenting the different classes. For instance, we can quote the degree of freedom (\\texttt{df}) for Student distributions, the covariance matrix (\\texttt{cov}) for multivariate distributions or the theta parameter (\\texttt{theta}) for Archimedean copulas.\n\n\t\\subsubsection{Initializing Distributions}\n\n\tThe method \\_\\_init\\_\\_ creates a new object of a distribution class. It will be called with the specific parameters of the distribution. For instance, \\texttt{distrobject = UnivariateNormalDistribution(mean=mu, var=sigma**2)} will return a normal distribution object with the parameters \\texttt{mu} for the mean and \\texttt{sigma**2} for the variance.\n\tBut, in practice, it is very useful to fit a distribution to a data set.\n\tThis can be done by calling the \\texttt{fit} method of any distribution class.\n\tThis method will estimate the parameters for the distribution from the\n\tinput data. 
For instance, \\texttt{UnivariateNormalDistribution.fit(data = Y)} will return a normal distribution with the\n\tmean and variance fit to the data (in this case, these parameters are\n\tjust taken to be the sample mean and sample variance of the data).\n\n\t\\subsubsection{\\texttt{generates\\_X} and sample}\n\n\tA distribution object modelling a random variable X also has a method that generates realisations of X following its probability law.\n\n\tFor instance, in the normal case we call the numpy function \\texttt{numpy.random.randn} with the appropriate parameters to generate such variables. In the case of a univariate distribution it can also be obtained by generating a uniform variable in [0,1] and composing with the inverse of the cdf: $F^{-1}(U)$, where U is uniformly distributed on [0,1], follows the same probability law as X.\n\n\tBecause generating many samples can be slow, they are stored automatically in \\texttt{distrobject.X}. Then, \\texttt{distrobject.generates\\_X(n)} returns a vector of n independent realisations of X and stores them in the attribute \\texttt{distrobject.X} by appending these n new values to the old ones.\n\n\n\t \\subsection{Files and classes}\n\n\t In order to make comparisons and find the best distribution model for a variable, I implemented several different distributions. Even if they follow the same global pattern described above, their parameters, their functions and the way they are implemented can be very different, because of a need to speed up the computation or simply because the mathematical objects are really different. All the distributions are coded as classes, each located in a specific file. I explain in this section how the files are organized and how the classes work. At the beginning of each file's subsection, one can see the whole list of the classes it contains.\n\n\n\t \\subsubsection{base\\_distribution.py}\n\t This file contains various abstract classes that define interfaces and common functions for distributions:\\newline\n\t \\newline\n\t \\texttt{BaseDistribution} \\newline\n\t \\texttt{UnivariateDistribution} \\newline\n\t \\texttt{MultivariateDistribution} \\newline\n\n\t \\texttt{BaseDistribution} is the base class from which all the distribution classes will inherit. This class describes, at a minimum, what all distributions must have.\n\t For our purposes, we expect all distributions to define a \\texttt{pdf} method and a \\texttt{fit} method.\\newline\n\n\t \\texttt{UnivariateDistribution} inherits from \\texttt{BaseDistribution}. It serves as the base class for all univariate distributions.\n\t This exports basic methods for computing the cdf, the inverse cdf, and the expectation over a specific region.\n\n\t \\texttt{MultivariateDistribution} inherits from \\texttt{BaseDistribution}. It is the abstract class from which all the distributions with more than one variable will inherit. The particularity of multivariate distributions is that they work with dictionaries. This avoids confusing the different coordinates of the distribution. To this end, any distribution can be constructed or fit by passing via the keyword \\texttt{dimkeys} a list of strings which will serve as names for the dimensions. Then one can call a function which expects a vector as input (say the pdf) with a dictionary mapping dimension names to values. 
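For example, a hypothetical usage sketch following the interface just described (the data values and dimension names here are invented for illustration):\n\\begin{verbatim}\n# Fit a two-dimensional distribution with named dimensions and\n# evaluate its pdf with a dictionary keyed by those names.\nimport numpy as np\n\ndata = np.random.rand(1000, 2)  # placeholder historical data\ndistr = MultiNormalDistribution.fit(  # GOSM class documented below\n    data=data, dimkeys=['wind', 'solar'])\ndensity = distr.pdf({'wind': 0.3, 'solar': 0.7})\n\\end{verbatim}\n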
This abstract class also contains the method \\texttt{rect\\_prob} that computes the probability for the random variable to be in an n-dimensional rectangle defined by two opposite corner points \\texttt{lowerdict} and \\texttt{upperdict}. Even though this computation requires a non-trivial recursion, it only needs the cdf to be computed, and it is the same computation for all multivariate distributions.\n\n\t \\subsubsection{distributions.py}\n\n\t This file contains all the concrete classes of the classical distributions that are not built with copulas. If they are called Univariate, they inherit from \\texttt{UnivariateDistribution}; if they are called Multi, they inherit from \\texttt{MultivariateDistribution}: \\newline\n\\newline\n\t \\texttt{UnivariateBasicEmpiricalDistribution} \\newline\n\t \\texttt{UnivariateEmpiricalDistribution} \\newline\n\t \\texttt{UnivariateEpiSplineDistribution} \\newline\n\t \\texttt{UnivariateUniformDistribution} \\newline\n\t \\texttt{UnivariateNormalDistribution} \\newline\n\t \\texttt{UnivariateStudentDistribution} \\newline\n\t \\texttt{MultiNormalDistribution} \\newline\n\t \\texttt{MultiStudentDistribution} \\newline\n\n\t \\texttt{UnivariateBasicEmpiricalDistribution} is the simplest distribution you can build without making a hypothesis about the model. It assumes that all the results in the \\texttt{input\\_data} are the only possible ones and that they are equiprobable. This gives us a method to generate a sample: we simply pick one element of \\texttt{input\\_data} at random. If we denote $Y = (Y_1,...,Y_n) = \\texttt{input\\_data}$, the functions would be:\n\t\\begin{equation*}\n  \tF(x) = \\frac{1}{n}\\sum_{k=1}^n \\mathds{1}_{Y_k\\leq x}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \t\\mathbb{P}(X=x) = \\frac{1}{n}\\sum_{k=1}^n \\mathds{1}_{Y_k = x}\n  \t\\end{equation*}\n  \tOne can notice that $F$ is not continuous, which implies that $f$ is not well defined. \\newline\n\n\t \\texttt{UnivariateEmpiricalDistribution} works almost like \\texttt{UnivariateBasicEmpiricalDistribution} but avoids the problem of continuity by cleverly interpolating lines between the points we have. When the set of data is big enough it gives the same result as \\texttt{UnivariateBasicEmpiricalDistribution}. \\newline\n\n\t \\texttt{UnivariateEpiSplineDistribution} is a more clever object obtained thanks to the resolution of an optimization problem where the cdf has a particular shape: \\newline\n\t \\begin{equation*}\n  \tF(x) = e^{-g(x)}\\text{, where }g\\text{ is a piecewise defined polynomial}\n  \t\\end{equation*}\n  \t For example, one can choose the number $N$, which indicates into how many subsegments we divide the initial segment to create the pieces of the polynomial. In addition, this class needs the installation of the modelling language Pyomo and of the solver Ipopt. To learn more about these installations, check the \\texttt{GOSM} user manual. This class is unusual because it was extracted from an earlier version of software called \\texttt{Prescient}.\\newline\n\n\n\t \\texttt{UnivariateUniformDistribution} contains only two parameters called \\texttt{a} and \\texttt{b}, which represent respectively the minimum and the maximum of the distribution. 
When an object of this class is called with \\texttt{input\\_data}, the \\texttt{\\_\\_init\\_\\_} method assigns the minimum of \\texttt{input\\_data} to $a$ and its maximum to $b$.\n\t \\begin{equation*}\n\t a = \\texttt{a} = \\texttt{min(input\\_data)}\n\t \\end{equation*}\n\t \\begin{equation*}\n\t b = \\texttt{b}= \\texttt{max(input\\_data)}\n\t \\end{equation*}\n\t \\[\n   \t\tf(x) =\n   \t\t\\begin{cases}\n\n    \t\\frac{1}{b-a} & \\quad \\text{if } x \\in [a,b]\\\\\n    \t0  & \\quad \\text{else}\\\\\n  \\end{cases}\n  \\]\n\t \\[\n   \t\tF(x) =\n   \t\t\\begin{cases}\n        0  & \\quad \\text{if } x \\leq a \\\\\n    \t1 & \\quad \\text{if } x \\geq b \\\\\n    \t\\frac{x-a}{b-a} & \\quad \\text{else}\n  \\end{cases}\n  \\]\n\n  \\texttt{UnivariateNormalDistribution} has two parameters \\texttt{mean} and \\texttt{var} which are stored as attributes. We present below the main formulas that define this class. Each time we represent parameters with their names in the Python code and with their usual mathematical notation, so that the formulas are not too hard to read.\n  \t\\begin{equation*}\n  \t\\mu = \\texttt{mean} =  \\texttt{mean(input\\_data})\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \t\\sigma^2 = \\texttt{var} = \\texttt{var(input\\_data})\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tf(x) = \\frac{1}{\\sqrt{2\\pi \\sigma^2}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tF(x) = \\int_{-\\infty}^x \\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-\\frac{(y-\\mu)^2}{2\\sigma^2}}dy\n  \t\\end{equation*}\n\n  \t\\texttt{UnivariateStudentDistribution}\n\n  \t\\begin{equation*}\n  \t\\mu = \\texttt{mean} = \\texttt{mean(input\\_data)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \t\\nu = \\texttt{df} = \\texttt{2var(input\\_data)/(var(input\\_data)-1)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tf(x) = \\frac{1}{\\sqrt{\\nu\\pi}}\\frac{\\Gamma(\\frac{\\nu+1}{2})}{\\Gamma(\\frac{\\nu}{2})}\\left(1+\\frac{(x-\\mu)^2}{\\nu}\\right)^{-\\frac{\\nu+1}{2}}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tF(x) = \\int_{-\\infty}^x \\frac{1}{\\sqrt{\\nu\\pi}}\\frac{\\Gamma(\\frac{\\nu+1}{2})}{\\Gamma(\\frac{\\nu}{2})}\\left(1+\\frac{(y-\\mu)^2}{\\nu}\\right)^{-\\frac{\\nu+1}{2}}dy\n  \t\\end{equation*}\n\n\n\t \\texttt{MultiNormalDistribution}\n\n  \t\\begin{equation*}\n  \t\\mu = \\texttt{mean} = \\texttt{mean(input\\_data)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tK = \\texttt{cov} = \\texttt{cov(input\\_data)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tf(x) = \\frac{1}{(2\\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}e^{-\\frac{(x-\\mu)^\\top K^{-1} (x-\\mu)}{2}}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tF(x) = \\int_{-\\infty}^x \\frac{1}{(2\\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}e^{-\\frac{(y-\\mu)^\\top K^{-1} (y-\\mu)}{2}}dy\n  \t\\end{equation*}\n\n  \t\\texttt{MultiStudentDistribution}\n\n  \t\\begin{equation*}\n  \t\\mu = \\texttt{mean} = \\texttt{mean(input\\_data)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \t\\nu = \\texttt{df}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tK = \\texttt{cov} = \\texttt{cov(input\\_data)}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tf(x) = \\frac{\\Gamma (\\frac{\\nu +d}{2})}{\\Gamma\n\t (\\frac{\\nu}{2})(\\nu \\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}\\left(1+\\frac{1}{\\nu}(x-\\mu)^\\top K^{-1} (x-\\mu) \\right)^{-\\frac{\\nu +d}{2}}\n  \t\\end{equation*}\n  \t\\begin{equation*}\n  \tF(x) = \\int_{-\\infty}^x 
\\frac{\\Gamma (\\frac{\\nu +d}{2})}{\\Gamma(\\frac{\\nu}{2})(\\nu \\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}\\left(1+\\frac{1}{\\nu}(y-\\mu)^\\top K^{-1} (y-\\mu) \\right)^{-\\frac{\\nu +d}{2}}dy\n  \t\\end{equation*}\n\n\n\t \\subsubsection{copula.py}\n\t This file contains the abstract class \\texttt{CopulaBase} which inherits from \\texttt{MultivariateDistribution} and from which all the concrete copula classes inherit. The mathematical details of copulas are explained in the next sections. As was the case with our distribution objects and classes in Python, our copula objects and classes in Python are different from the strict mathematical definition of a copula, which is just what we call a C function. The inheritance from \\texttt{MultivariateDistribution} is very convenient in our applications, but is non-standard for copula classes.\\newline\n\\newline\n\t \\texttt{CopulaBase} \\newline\n\t \\texttt{GaussianCopula} \\newline\n\t \\texttt{StudentCopula} \\newline\n\t \\texttt{FrankCopula} \\newline\n\t \\texttt{ClaytonCopula} \\newline\n\t \\texttt{GumbelCopula} \\newline\n\t \\texttt{WeigthedCombinedCopula} \\newline\n\t \\texttt{IndependenceCopula} \\newline\n\t \\texttt{EmpiricalCopula} \\newline\n\n\t \\texttt{CopulaBase} is the abstract class that contains the attributes and methods that all the copula classes have in common. The method is to use what we presented as Property 1 on each marginal. Instead of working on the dependence between $X_1$, $X_2$ ... and $X_n$, we prefer to focus on the dependence between $U_1=F_1(X_1)$, $U_2=F_2(X_2)$ ... and $U_n=F_n(X_n)$. It is now much easier to compare the dependence between coordinates that all have the same distribution, which is uniform in [0,1]. We work in what we call the U-space, $[0,1]^d$, as opposed to the X-space, $\\mathbb{R}^d$. One can easily pass from one to the other by composing coordinate by coordinate with the $F_i$ or by using the formulas presented in the next sections. Like all the children of \\texttt{MultivariateDistribution} a copula object has methods and attributes in the X-space, but it also contains their twins in the U-space, as if a copula object contained both the distribution of X and the distribution of U. Considering this analogy, C (as the cdf of U) plays in the U-space the role of the cdf in the X-space, and c (as the pdf of U) plays in the U-space the role of the pdf in the X-space. Finally, \\texttt{generates\\_U} returns samples of U with the correct dependence structure; it plays in the U-space the same role as \\texttt{generates\\_X} in the X-space.\n\n\n\t \\subsubsection{vine.py}\n\t Vine copulas are copulas that are built using other bivariate copulas. I explain them in more detail in the next section.\\newline\n\t \\newline\n\t \\texttt{CVineCopula} \\newline\n\t \\texttt{DVineCopula}\n\n\t\\subsubsection{distribution\\_factory.py}\n\n\t In all the previous files, each definition of a new concrete class begins with \\texttt{@register\\_distribution(name=\"name-of-the-class\")}. This assigns a string name to the class. The file distribution\\_factory.py contains the code that allows us to write:\\newline \\texttt{distr\\_class = distribution\\_factory(copula\\_string)}\\newline\n    \\texttt{distr\\_object = distr\\_class(input)}\\newline\n\n     instead of\\newline\\texttt{if copula\\_string ==\"univariate-uniform\":}\n\n    \\texttt{distr\\_object = UnivariateUniformDistribution(input)} \\newline\n    \\texttt{if copula\\_string ==\"univariate-normal\":}\n\n    \\texttt{distr\\_object = UnivariateNormalDistribution(input)} \\newline etc, for all the cases. 
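To illustrate the mechanism, here is a minimal sketch of such a name-based registry and factory (an illustrative reimplementation, not the actual GOSM code):\n\\begin{verbatim}\n_registry = {}\n\ndef register_distribution(name):\n    def decorator(cls):\n        _registry[name] = cls   # remember the class under its name\n        return cls\n    return decorator\n\ndef distribution_factory(name):\n    return _registry[name]\n\n@register_distribution(name='univariate-uniform')\nclass UnivariateUniformDistribution:\n    def __init__(self, input_data):\n        self.a, self.b = min(input_data), max(input_data)\n\ndistr_class = distribution_factory('univariate-uniform')\ndistr_object = distr_class([0.2, 0.9, 0.5])\nprint(distr_object.a, distr_object.b)  # 0.2 0.9\n\\end{verbatim}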
\\newline\n\n    This practical way of selecting the classes is also used to build distributions that are created from other ones, such as combined copulas or vine copulas.\n\n\t \\subsubsection{tester.py}\n\tThis file contains several unit tests to check the distribution classes. Some are just quick tests to check if the code runs, but most of them attempt to verify that the implemented formulas are correct. In order to do that, the method is to obtain the same result in different ways, or to make a loop on a diagram that should be commutative, like the one in Figure 1. For instance, \\texttt{test\\_pdf\\_cdf} integrates the pdf to compare it to the cdf and differentiates the cdf to verify that it is equal to the pdf. \\texttt{test\\_c\\_with\\_C\\_2\\_dim} integrates \\texttt{c} twice and verifies that the value is equal to \\texttt{C}. \\texttt{test\\_C\\_with\\_sample} checks the lower loop of the diagram in Figure 1 in the U-space: with a big data set created by \\texttt{generates\\_U}, it creates an empirical cdf of U (called \\texttt{C\\_from\\_sample}) and compares it to the actual cdf of U, which is \\texttt{C}. Another example is \\texttt{test\\_with\\_gaussian\\_copula\\_3\\_dim}, which checks that making a distribution with normal (or Gaussian) marginals and a Gaussian copula gives the same result as directly creating a multinormal (or multi-Gaussian) distribution. \\newline\n\\newline\n\t \\texttt{MultiNormalDistributionTester} \\newline\n\t \\texttt{UnivariateNormalDistributionTester} \\newline\n\t \\texttt{MultiStudentDistributionTester} \\newline\n\t \\texttt{UnivariateStudentDistributionTester}\\newline\n\t \\texttt{CopulaTester} \\newline\n\t \\texttt{VineCopulaTester}\n\n\n\n\n   \\section{Copulas properties}\n\n\tIn order to study the dependence of several correlated random variables, we need to focus on a tool called the copula. It is more practical to consider our different variables together as a multidimensional variable. The distributions of the univariate random variables are then called marginals. Copulas permit measurement of the dependence without caring about the marginals. In our object-oriented program, this is really practical because one can work on the dependence with the copula while the marginals are just input variables. To introduce and define copulas, let us quote Sklar's theorem:\n\t\\newtheorem{Sklar}{Theorem}\n\t\\begin{Sklar}\n\t\tSklar, 1959. Let us consider a random vector $X = (X_{1},...,X_{d})$ with a \t\t\tcumulative distribution function $F(x_{1},...,x_{d})= \\mathbb{P}(X_{1}\\leq \tx_{1},..,X_{d}\\leq x_{d}).$ \\newline\n\tThen there exists a function C, called a copula, $C : [0,1]^{d}\\to [0,1]$, such that:\n\t\\begin{equation*}\n\tF(x_{1},...x_{d})=C(F_{1}(x_{1}),...,F_{d}(x_{d}))\n\t\\end{equation*}\n\twhere $F_{i}$ is the cumulative distribution function of the ith marginal.\\newline\n\tThe copula C is unique if the marginals are continuous.\n\t\\end{Sklar}\n\n\tWe will also use the density of the copula:\n\n\t\\begin{definition}\n\tThe copula density of a copula C is the function $c : [0,1]^{d}\\to \\mathbb{R^+}$:\\newline\n\t\\begin{equation*}\n\tc(u_{1},...,u_{d}) = \\frac{\\partial^d C}{\\partial u_{1} ... 
\n\n\n\n\n   \\section{Copulas properties}\n\n\tIn order to study the dependence of several correlated random variables, we use a tool called a copula. It is more practical to consider our different variables together as one multidimensional variable; the distributions of the univariate random variables are then called marginals. Copulas permit measuring the dependence without caring about the marginals. In our object-oriented program this is really practical, because one can work on the dependence with the copula while the marginals are just input variables. To introduce and define copulas, let us quote Sklar's theorem :\n\t\\newtheorem{Sklar}{Theorem}\n\t\\begin{Sklar}\n\t\tSklar, 1959. Let us consider a random vector $X = (X_{1},...,X_{d})$ with a \t\t\tcumulative distribution function $F(x_{1},...,x_{d})= \\mathbb{P}(X_{1}\\leq \tx_{1},..,X_{d}\\leq x_{d}).$ \\newline\n\tThen there exists a function $C$, called a copula, $C : [0,1]^{d}\\to [0,1]$, such that :\n\t\\begin{equation*}\n\tF(x_{1},...x_{d})=C(F_{1}(x_{1}),...,F_{d}(x_{d}))\n\t\\end{equation*}\n\twhere $F_{i}$ is the cumulative distribution function of the $i$th marginal.\\newline\n\tThe copula C is unique if the marginals are continuous.\n\t\\end{Sklar}\n\n\tWe will also use the density of the copula :\n\n\t\\begin{definition}\n\tThe copula density of a copula C is the function $c : [0,1]^{d}\\to \\mathbb{R^+}$:\\newline\n\t\\begin{equation*}\n\tc(u_1,...,u_d) = \\frac{\\partial^d C}{\\partial u_{1} ... \\partial u_{d}} (u_1,...,u_d)\n\t\\end{equation*}\n\t\\newline\n\tBy differentiating Sklar's equation, we obtain :\n\t\\begin{equation*}\n\t\tf(x_1,...,x_d) = c(F_1(x_1),...,F_d(x_d))f_1(x_1)...f_d(x_d)\n\t\\end{equation*}\n\n\t\\end{definition}\n\n\t\\begin{definition}\n\tThe Kendall distribution function  $\\mathcal{K}_F : [0,1]\\to [0,1]$ of a distribution with cumulative distribution function F is : \\newline\n\t\\begin{equation*}\n\t\\mathcal{K}_F (u) = \\mathbb{P} (F(X) \\leq u)\n\t\\end{equation*}\n\twhere X is a random variable following the distribution defined by F.\n\t\\end{definition}\n\n\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\mathcal{K}_F (u)\t&= \\mathbb{P} (F(X) \\leq u) \\\\\n\t\t\t\t\t\t&= \\mathlarger{ \\int_{\\mathbb{R}^d} \\mathds{1}_{F(x) \\leq u}f(x) dx} \\\\\n\t\t\t\t\t\t&= \\mathlarger{ \\int_{\\mathbb{R}^d} \\mathds{1}_{C(F_1(x_1),...,F_d(x_d)) \\leq u} c(F_1(x_1),...,F_d(x_d)) f_1(x_1)...f_d(x_d) dx} \\\\\n\t\t\t\t\t\t&= \\mathlarger{ \\int_{[0,1]^d} \\mathds{1}_{C(y) \\leq u} c(y) dy}\\\\\n\t\t\t\t\t\t&= \\mathlarger{ \\int_{K_u} c(y) dy} \\text{\t, where } K_u = \\{ y\\in [0,1]^d, C(y) \\leq u \\}\\\\\n\t\t\t\t\t\t&= \\mathbb{P} (C(U) \\leq u)\n\t\\end{split}\n\t\\end{equation*}\n\tWhere U is a random variable on $[0,1]^d$ with cdf C (and uniform marginals).\n\n\n\tIntuitively, when we do the change of variables $y_i=F_i(x_i)$ :\\newline We have $dy_i = F_i^\\prime (x_i) dx_i = f_i(x_i)dx_i$. So, indeed $dy = f_1(x_1)...f_d(x_d)dx$.\\newline\n\tThe meticulous proof with the Jacobian determinant gives the same result.\n\t\\newline\n\n\t$\\mathcal{K}_F$ only depends on the copula C of the distribution function.\\newline\n\tThus, we will use the notation $\\mathcal{K}_C$. \\newline\n\tIf we write $\\Gamma_{u_2,...,u_{d}}(x) = C(x,u_2,...,u_d)$, we also have :\n\t\\begin{equation*}\n\t\\mathcal{K}_C (u) = \\mathlarger{ \\int_0^1 ... \\int_0^1 \\bigg(\\int_0^{\\Gamma_{y_2,...,y_{d}}^{-1}(u)} c(y) dy_1 \\bigg) dy_2...dy_d}\n\t\\end{equation*}\n\t\\newline\n\n\n\tWe want to use vine copulas to study multidimensional dependence. To compute such vine copulas, we need the partial derivative of the C function and its inverse. For each of the following copulas in 2 dimensions, I calculated the partial derivative \\begin{math} h(u,v,\\theta) = \\frac{\\partial C_{\\theta}}{\\partial v} (u,v) \\end{math}. I also calculated its inverse \\begin{math} h^{-1} \\end{math} with respect to its first variable : \\begin{math} h(h^{-1}(u,v,\\theta),v,\\theta)=u \\end{math}
\n\n\t\\begin{figure}\n\t\\[\n   \t\\xymatrix{\n   \t\t & C \\ar[ldd]_{\\frac{\\partial}{\\partial u_{1}}} \\ar[rdd]^{\\frac{\\partial^d}{\\partial u_{1} ... \\partial u_{d}}} & \\\\\n   \t\t & & \\\\\n   \t\t\\frac{\\partial C}{\\partial u_{1}} \\ar@/_/[rr]_{\\frac{\\partial^{d-1}}{\\partial u_{2} ... \\partial u_{d}}} & & c \\ar@/_/[ll]_{\\int ... \\int}\n   \t}\n   \\]\n\t\\caption{Relations between copula functions}\n\t\\end{figure}\n\n\n   \\section{Elliptic Copulas}\n   Elliptic copulas are the copulas of elliptically distributed random vectors; here we treat the Gaussian and Student cases. Note the difference between the covariance matrix and the correlation matrix: since the copula is unchanged when the variance of a marginal is rescaled, we only consider matrices with 1 on the diagonal, i.e. correlation matrices.\n\n\t\\subsection{Gaussian Copula}\n\n\tThe Gaussian copula of correlation matrix K is the copula of a Gaussian vector \\begin{math} \\mathcal{N}(0,K) \\end{math}.\\newline\n\t\\newline\n\tBecause one can change the variance of each marginal, we will only consider covariance matrices with only 1's on the diagonal.\n\t\\newline\n\t\\newline\n\tWe will write \\begin{math} \\mathcal{N}(x)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{x} e^{-\\frac{t^{2}}{2}}dt \\end{math} for the cumulative distribution function of the standard normal distribution and \\begin{math} \\mathcal{N}^{-1} \\end{math} for its inverse.\n\n\t\\subsubsection{C function in dimension n}\n\n\n\t\\begin{math}\n\t C_{K}(u_{1},...u_{d})= \\mathlarger{ \\int_{-\\infty}^{\\mathcal{N}^{-1}(u_{1})}...\\int_{-\\infty}^{\\mathcal{N}^{-1}(u_{d})} \\frac{1}{(2\\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}e^{-\\frac{x^\\top K^{-1} x}{2}}dx}\n\t\\end{math}\n\t\\newline\n\t\\newline\n\tWith a change of variable, we also have \\newline\n\t\\newline\n\t\\begin{math}\n\tC_{K}(u_{1},...u_{d})=  \\mathlarger{ \\int_{0}^{u_{1}}...\\int_{0}^{u_{d}} \\frac{1}{\\sqrt{det(K)}}e^{-\\frac{\\mathcal{N}^{-1}(x)^\\top (K^{-1}-I_{d}) \\mathcal{N}^{-1}(x)}{2}}dx}\n\t\\end{math}\n\t\\newline\n\t\\newline\n\t\\newline\n\twhere \\begin{math} \\mathcal{N}^{-1}(x) =  \\begin{pmatrix}\n   \\mathcal{N}^{-1}(x_{1}) \\\\\n  \\vdots   \\\\\n   \\mathcal{N}^{-1}(x_{d})\n \\end{pmatrix}\\end{math}\n\n\\subsubsection{C function in dimension 2}\n\n \tIn dimension 2, any correlation matrix can be written \\begin{math} K =\n \\begin{pmatrix}\n  1 & \\rho \\\\\n  \\rho & 1\n \\end{pmatrix}\n\\end{math}. So we will only consider one parameter \\begin{math} \\rho \\end{math}.\\newline\n\\newline\n\n\\begin{math}\n\tC_{\\rho} (u,v) = \\mathlarger{\\int_{-\\infty}^{\\mathcal{N}^{-1}(u)}\\int_{-\\infty}^{\\mathcal{N}^{-1}(v)} \\frac{1}{2\\pi\\sqrt{1-\\rho^{2}}}e^{-\\frac{x^2-2\\rho x y +y^{2}}{2(1-\\rho^{2})} }dx dy}\n\\end{math}\n\\newline\n\\newline\nWith the same change of variable, we also have\n\\newline\n\\begin{math}\n\tC_{\\rho} (u,v) = \\mathlarger{\\int_{0}^{u}\\int_{0}^{v} \\frac{1}{\\sqrt{1-\\rho^{2}}}e^{-\\frac{\\rho^{2}\\mathcal{N}^{-1}(x)^2-2\\rho\\mathcal{N}^{-1}(x)\\mathcal{N}^{-1}(y) + \\rho^{2} \\mathcal{N}^{-1}(y)^{2}}{2(1-\\rho^{2})} }dx dy}\n\\end{math}
\n\n\\subsubsection{Partial derivative of C}\n\nDifferentiating the second formula, we have \\newline\n\\begin{math}\nh(u,v,\\rho)=\\frac{\\partial C_{\\rho}}{\\partial v} (u,v) = \\mathlarger{\\int_{0}^{u} \\frac{1}{\\sqrt{1-\\rho^{2}}}e^{-\\frac{\\rho^{2}\\mathcal{N}^{-1}(x)^2-2\\rho\\mathcal{N}^{-1}(x)\\mathcal{N}^{-1}(v) + \\rho^{2} \\mathcal{N}^{-1}(v)^{2}}{2(1-\\rho^{2})} }dx}\n\\end{math}\n\\newline\n\\newline\nDifferentiating the first formula above, we obtain \\newline\n\\begin{math}\nh(u,v,\\rho)=\\frac{\\partial C_{\\rho}}{\\partial v} (u,v) = \\mathlarger{\\frac{1}{\\mathcal{N}^{'}(\\mathcal{N}^{-1}(v))}\\int_{-\\infty}^{\\mathcal{N}^{-1}(u)} \\frac{1}{2\\pi\\sqrt{1-\\rho^{2}}}e^{-\\frac{x^2-2\\rho x \\mathcal{N}^{-1}(v) +\\mathcal{N}^{-1}(v)^{2}}{2(1-\\rho^{2})} }dx}\n\\newline\n\\newline\nh(u,v,\\rho)=\\mathlarger{\\int_{-\\infty}^{\\mathcal{N}^{-1}(u)} \\frac{1}{\\sqrt{2\\pi}}e^{-\\frac{(x-\\rho \\mathcal{N}^{-1}(v))^2}{2(1-\\rho^{2})}}dx }\n\\newline\n\\newline\nh(u,v,\\rho)=\\mathlarger{\\int_{-\\infty}^{\\frac{\\mathcal{N}^{-1}(u)-\\rho \\mathcal{N}^{-1}(v)}{\\sqrt{1-\\rho^{2}}}} \\frac{1}{\\sqrt{2\\pi}}e^{-\\frac{y^2}{2}}dy }\n\\newline\n\\newline\nh(u,v,\\rho)=\\mathlarger{  \\mathcal{N}(\\frac{\\mathcal{N}^{-1}(u)-\\rho\\mathcal{N}^{-1}(v)}{\\sqrt{1-\\rho^{2}}})}\n\\end{math}\n\\newline\n\\newline\nInverting this function, we obtain\\newline\n\\newline\n\\begin{math}\nh^{-1}(u,v,\\rho) = \\mathcal{N}(\\sqrt{1-\\rho^2}\\mathcal{N}^{-1}(u)+\\rho \\mathcal{N}^{-1}(v))\n\\end{math}
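\n\n\tAs a sanity check, here is a small sketch (an illustration, not the project's code) of the Gaussian $h$ and $h^{-1}$ functions above, verifying the round trip $h(h^{-1}(u,v,\\rho),v,\\rho)=u$ with SciPy:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef h(u, v, rho):\n    # h(u,v,rho) = N((N^-1(u) - rho N^-1(v)) / sqrt(1-rho^2))\n    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v))\n                    / np.sqrt(1 - rho**2))\n\ndef h_inv(u, v, rho):\n    # h^-1(u,v,rho) = N(sqrt(1-rho^2) N^-1(u) + rho N^-1(v))\n    return norm.cdf(np.sqrt(1 - rho**2) * norm.ppf(u)\n                    + rho * norm.ppf(v))\n\nu, v, rho = 0.3, 0.7, 0.5\nassert abs(h(h_inv(u, v, rho), v, rho) - u) < 1e-9\n\\end{verbatim}\n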
\n\n\t\\subsubsection{c function in dimension n}\n\t\\begin{equation*}\n\tc_{K}(u_1,...,u_d)=\\frac{1}{\\sqrt{det(K)}}e^{-\\frac{\\mathcal{N}^{-1}(u)^\\top (K^{-1}-I_{d}) \\mathcal{N}^{-1}(u)}{2}}\n\t\\end{equation*}\n\n\twhere \\begin{math} \\mathcal{N}^{-1}(u) =  \\begin{pmatrix}\n   \\mathcal{N}^{-1}(u_{1}) \\\\\n  \\vdots   \\\\\n   \\mathcal{N}^{-1}(u_{d})\n \\end{pmatrix}\\end{math}\n\n\t\\subsubsection{c function in dimension 2}\n\t\\begin{math}\n\t\tc_\\rho (u,v) = \\frac{1}{\\sqrt{1-\\rho ^2}} e^{-\\frac{\\rho ^2(\\mathcal{N}^{-1}(u)^2+\\mathcal{N}^{-1}(v)^2) - 2 \\rho \\mathcal{N}^{-1}(u) \\mathcal{N}^{-1}(v)}{2 (1-\\rho ^2)}}\n\t\\end{math}\n\n\n\n\t\\subsection{Student Copula}\n\tThe Student copula of correlation matrix K and degree of freedom \\begin{math} \\nu \\end{math} is the copula of a Student random vector \\begin{math} t_\\nu (0,K) \\end{math}.\\newline\n\t\\newline\n\tBecause one can change the variance of each marginal, we will only consider covariance matrices with only 1's on the diagonal.\n\t\\newline\n\t\\newline\n\tWe will write \\begin{math} t_\\nu(x)=\\frac{\\Gamma (\\frac{\\nu+1}{2})}{\\sqrt{\\nu \\pi} \\Gamma (\\frac{\\nu}{2})}\\int_{-\\infty}^{x} \\left(1+\\frac{t^2}{\\nu}\\right)^{-\\frac{\\nu +1}{2}}dt \\end{math} for the cumulative distribution function of the Student distribution with degree of freedom \\begin{math} \\nu \\end{math} and \\begin{math}  t_\\nu^{-1} \\end{math} for its inverse. \\newline\n\tWe can write both with the regularized incomplete Beta function \\newline \\begin{math} I_{a,b}(x) = \\frac{1}{B(a,b)} \\int_{0}^{x} t^{a-1}(1-t)^{b-1}dt \\newline = \\frac{\\Gamma (a+b)}{\\Gamma (a)\\Gamma (b)} \\int_{0}^{x} t^{a-1}(1-t)^{b-1}dt\\end{math} \\newline\n\t\\newline\n\tand its inverse \\begin{math} I_{a,b}^{-1} \\end{math} : \\newline\n\t\\[\n   \t\tt_\\nu (x) =  \\begin{cases}\n        \\frac{1}{2}I_{\\frac{\\nu}{2},\\frac{1}{2}}(\\frac{\\nu}{x^2+\\nu})  & \\quad \\text{if } x \\leq 0 \\\\\n    \t1-\\frac{1}{2}I_{\\frac{\\nu}{2},\\frac{1}{2}}(\\frac{\\nu}{x^2+\\nu}) & \\quad \\text{if } x \\geq 0\\\\\n  \t\\end{cases}\n  \t\\]\n\n  \tand \\newline\n  \t\\[\n   \t\tt_\\nu^{-1} (x) =  \\begin{cases}\n        -\\sqrt{\\nu (\\frac{1}{I^{-1}_{\\frac{\\nu}{2},\\frac{1}{2}}(2x)}-1)}  & \\quad \\text{if } x \\leq \\frac{1}{2} \\\\\n    \t\\sqrt{\\nu (\\frac{1}{I^{-1}_{\\frac{\\nu}{2},\\frac{1}{2}}(2(1-x))}-1)}  & \\quad \\text{if } x \\geq \\frac{1}{2}\\\\\n  \t\\end{cases}\n  \t\\]\n\n\t\\subsubsection{Generates X}\n\tTo generate our U vector in the $[0,1]^d$ space, we first need to generate an X vector in the $\\mathbb{R}^d$ space following a multivariate Student distribution. Thus, we use the equality in law :\n\t\\begin{equation*}\n\t\\sqrt{\\frac{\\nu}{S}} Z \\sim t_\\nu (0,K)\n\t\\end{equation*}\n\twhere $S \\sim \\chi_\\nu^2$ and $Z \\sim \\mathcal{N} (0,K)$
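\n\n\tA minimal sampling sketch of this equality in law (illustration only; the function name mirrors \\texttt{generates\\_U} but the body is an assumption): draw $Z$ via a Cholesky factor of $K$, draw $S$ from a $\\chi_\\nu^2$, scale, and map each coordinate through $t_\\nu$ to land in the U-space:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import t as student_t\n\ndef generates_U(K, nu, n, rng=np.random.default_rng()):\n    d = K.shape[0]\n    L = np.linalg.cholesky(K)\n    Z = rng.standard_normal((n, d)) @ L.T   # Z ~ N(0, K)\n    S = rng.chisquare(nu, size=(n, 1))      # S ~ chi^2_nu\n    X = np.sqrt(nu / S) * Z                 # X ~ t_nu(0, K)\n    return student_t.cdf(X, df=nu)          # U in [0,1]^d\n\\end{verbatim}\n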
\n\n\t\\subsubsection{C function in dimension n}\n\n\n\t\\begin{math}\n\t C_{K}(u_{1},...u_{d})= \\mathlarger{ \\int_{-\\infty}^{t_\\nu^{-1}(u_{1})}...\\int_{-\\infty}^{t_\\nu^{-1}(u_{d})} \\frac{\\Gamma (\\frac{\\nu +d}{2})}{\\Gamma\n\t (\\frac{\\nu}{2})(\\nu \\pi)^{\\frac{d}{2}}\\sqrt{det(K)}}\\left(1+\\frac{1}{\\nu}x^\\top K^{-1} x \\right)^{-\\frac{\\nu +d}{2}}dx}\n\t\\end{math}\n\t\\newline\n\t\\newline\n\tWith a change of variable, we also have \\newline\n\t\\newline\n\t\\begin{math}\n\tC_{K}(u_{1},...u_{d})=  \\mathlarger{ \\int_{0}^{u_{1}}...\\int_{0}^{u_{d}} \\frac{\\Gamma (\\frac{\\nu +d}{2}) \\Gamma (\\frac{\\nu}{2})^{d-1}}{\\sqrt{det(K)}\\,\\Gamma (\\frac{\\nu +1}{2})^{d}\\,\\Pi_{j=1}^d (1+\\frac{t_\\nu^{-1}(x_j)^2}{\\nu})^{-\\frac{\\nu+1}{2}}}\\left(1+\\frac{1}{\\nu}t_\\nu^{-1}(x)^\\top K^{-1} t_\\nu^{-1}(x) \\right)^{-\\frac{\\nu +d}{2}}dx}\n\t\\end{math}\n\t\\newline\n\t\\newline\n\t\\newline\n\twhere \\begin{math} t_\\nu^{-1}(x) =  \\begin{pmatrix}\n   t_\\nu^{-1}(x_{1}) \\\\\n  \\vdots   \\\\\n   t_\\nu^{-1}(x_{d})\n \\end{pmatrix}\\end{math}\n\n\\subsubsection{C function in dimension 2}\n\n \tIn dimension 2, any correlation matrix can be written \\begin{math} K =\n \\begin{pmatrix}\n  1 & \\rho \\\\\n  \\rho & 1\n \\end{pmatrix}\n\\end{math}. So we will only consider one parameter \\begin{math} \\rho \\end{math}.\\newline\n\\newline\n\n\\begin{math}\n\tC_{\\rho,\\nu} (u,v) = \\mathlarger{\\int_{-\\infty}^{t_\\nu^{-1}(u)}\\int_{-\\infty}^{t_\\nu^{-1}(v)} \\frac{1}{2\\pi\\sqrt{1-\\rho^{2}}}\\left(1+\\frac{x^2 -2\\rho x y + y^2}{\\nu (1-\\rho^2)}\\right)^{-\\frac{\\nu +2}{2}}dx dy}\n\\end{math}\n\\newline\n\n\n\\subsubsection{Partial derivative of C}\n\n\nDifferentiating the formula above, we obtain (cf.\\ \\cite{vineconstruction}) : \\newline\n\\begin{math}\nh(u,v,\\rho,\\nu)=\\frac{\\partial C_{\\rho,\\nu}}{\\partial v} (u,v) = t_{\\nu+1} \\bigg(\\frac{t_\\nu^{-1}(u)-\\rho t_\\nu^{-1}(v)}{\\sqrt{\\frac{(\\nu+t_\\nu^{-1}(v)^2)(1-\\rho ^2)}{\\nu+1}}}\\bigg)\n\\end{math}\n\nInverting this function, we obtain\\newline\n\\newline\n\\begin{math}\nh^{-1}(u,v,\\rho,\\nu) = t_\\nu \\bigg(\\sqrt{\\frac{(\\nu+t_\\nu^{-1}(v)^2)(1-\\rho^2)}{\\nu+1}}\\, t_{\\nu+1}^{-1}(u)+\\rho t_\\nu^{-1}(v)\\bigg)\n\\end{math}\n\n\n\t\\subsubsection{c function in dimension n}\n\n\t\\begin{equation*}\n\t c_{K,\\nu} (u_{1},...,u_{d}) = \\frac{\\Gamma (\\frac{\\nu +d}{2}) \\Gamma (\\frac{\\nu}{2})^{d-1}}{\\sqrt{det(K)}\\,\\Gamma (\\frac{\\nu +1}{2})^{d}\\,\\Pi_{j=1}^d (1+\\frac{t_\\nu^{-1}(u_{j})^2}{\\nu})^{-\\frac{\\nu+1}{2}}}\\left(1+\\frac{1}{\\nu}t_\\nu^{-1}(u)^\\top K^{-1} t_\\nu^{-1}(u) \\right)^{-\\frac{\\nu +d}{2}}\n\t\\end{equation*}\n\t\\newline\n\t where \\begin{math} t_\\nu^{-1}(u) =  \\begin{pmatrix}\n   t_\\nu^{-1}(u_{1}) \\\\\n  \\vdots   \\\\\n   t_\\nu^{-1}(u_{d})\n \\end{pmatrix}\\end{math}\n\t\\subsubsection{c function in dimension 2}\n\n\t\\begin{equation*}\n\t\tc_{\\rho,\\nu} (u,v) = \\frac{\\Gamma (\\frac{\\nu}{2})\\Gamma (\\frac{\\nu +2}{2})}{\\Gamma(\\frac{\\nu+1}{2})^2 (1+\\frac{t_\\nu^{-1} (u)^2}{\\nu})^{-\\frac{\\nu+1}{2}} (1+\\frac{t_\\nu^{-1} (v)^2}{\\nu})^{-\\frac{\\nu+1}{2}} \\sqrt{ 1-\\rho^2} } \\left(1 + \\frac{t_\\nu^{-1} (u)^2 + t_\\nu^{-1} (v)^2 - 2 \\rho t_\\nu^{-1} (u) t_\\nu^{-1} (v)}{\\nu (1-\\rho^2)}\\right)^{-\\frac{\\nu +2}{2}}\n\t\\end{equation*}\n\n\n   \\section{Archimedean Copulas}\n   Archimedean copulas take the form :\n   \\newline\n   \\newline\n   \\begin{math}\n   C(u_{1},...,u_{d})= \\Phi(\\Phi^{-1}(u_{1})+...+\\Phi^{-1}(u_{d}))\n   \\end{math}\n   \\newline\n   \\newline\n   where \\begin{math} \\Phi \\end{math} is the Laplace transform of a positive nonzero random variable Y :\n\t\\begin{math} \\Phi (u)=\\mathbb{E}(e^{-uY})\n\t\\newline\n\t\\newline\n\t\\Phi \\end{math} is called the generator function.\n\t\\newline\n\tIt generally depends on a parameter denoted \\begin{math} \\theta \\end{math}.\n\t\\newline\n\tIn some papers, the definitions of the generator function and its inverse are swapped. 
We will use the definition above, which gives us simpler formulas.\n\t\\newline\n\n\n\tAll of the following formulas can be expressed with the generator function.\\newline\n\tFor example, if we refer to \\cite{archimedeancopulas}, we can write the Kendall distribution function in terms of the generator function :\n\t\\[\n   \t\t\\mathcal{K}_C (x) =\n   \t\t\\begin{cases}\n        \\frac{(-1)^{d-1}(\\Phi^{-1}(0))^{d-1}}{(d-1)!}\\Phi_-^{(d-1)}(\\Phi^{-1}(0))  & \\quad \\text{if } x=0 \\\\\n    \\sum\\limits_{k=0}^{d-2}\\frac{(-1)^{k}(\\Phi^{-1}(x))^{k}\\Phi^{(k)}(\\Phi^{-1}(x))}{k!}+\\frac{(-1)^{d-1}(\\Phi^{-1}(x))^{d-1}\\Phi_-^{(d-1)}(\\Phi^{-1}(x))}{(d-1)!} & \\quad \\text{if } x \\in ]0,1] \\\\\n  \\end{cases}\n  \\]\n   \t\\subsection{Frank Copula}\n   \t\\subsubsection{Generator, generator inverse and derivatives}\n   \t\\begin{math}\n   \t\t\\theta \\in \\mathbb{R} \\backslash \\{ 0 \\} \\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}(t)= -\\frac{1}{\\theta}log(1+e^{-t}(e^{-\\theta}-1))\n   \t\t\\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^{-1}(t)=-log(\\frac{e^{-\\theta t}-1}{e^{-\\theta}-1})\n   \t\t\\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^\\prime (t)= \\frac{e^{-t}}{\\theta (\\frac{1}{e^{-\\theta} -1}+e^{-t})}\n   \t\t\\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^{\\prime \\prime} (t)= \\frac{-e^{-t}}{\\theta (e^{-\\theta} -1) (\\frac{1}{e^{-\\theta} -1}+e^{-t})^2}\n   \t\\end{math}\n\n\n   \t\\subsubsection{C function in dimension 2}\n   \t\\begin{math}\n   \t\tC_{\\theta}(u,v) = -\\frac{1}{\\theta}log(1+\\frac{(e^{-\\theta u}-1)(e^{-\\theta v}-1)}{e^{-\\theta}-1})\n   \t\\end{math}\n\n   \t\\subsubsection{Partial derivative of C and inverse}\n   \t\\begin{math}\n   \t\th(u,v,\\theta)=\\frac{\\partial C_{\\theta}}{\\partial v} (u,v) = \\frac{e^{\\theta u}-1}{e^{\\theta u}+e^{\\theta v}-e^{\\theta (u+v-1)}-1}\n   \t\\newline\n\t\\newline\n\t\\text{We first solve } \\frac{a X + b}{c X + d}=y \\text{, where } a = 1, b =-1, c= 1-e^{\\theta (v-1)}, d = e^{\\theta v}-1\\newline\n\t\\newline\n\t\\text{We then have } X = \\frac{dy-b}{a-cy}\\newline\n\t\\newline\n\t\\text{We solve } e^{\\theta u}=X : u = \\frac{1}{\\theta}log(X)\\newline\n\t\\newline\n\t\\text{So, we now have}\\newline\n  \th^{-1}(u,v,\\theta) = \\frac{1}{\\theta}log(\\frac{(e^{\\theta v}-1)u+1}{1-(1-e^{\\theta (v-1)})u})\n   \t\\end{math}\n\n   \t\\subsubsection{c function in dimension 2}\n\n   \t\\begin{math}\n   \tc_\\theta (u,v) = \\frac{\\theta e^{\\theta (u+v)} (1-e^{-\\theta})}{(e^{\\theta u}+e^{\\theta v}-e^{\\theta (u+v-1)}-1)^2}\n   \t\\end{math}
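\n\n\tHere is a quick numerical sketch (illustration only) checking the Frank $h$ and $h^{-1}$ formulas above against each other:\n\\begin{verbatim}\nimport numpy as np\n\ndef frank_h(u, v, theta):\n    return (np.exp(theta * u) - 1) / (\n        np.exp(theta * u) + np.exp(theta * v)\n        - np.exp(theta * (u + v - 1)) - 1)\n\ndef frank_h_inv(u, v, theta):\n    num = (np.exp(theta * v) - 1) * u + 1\n    den = 1 - (1 - np.exp(theta * (v - 1))) * u\n    return np.log(num / den) / theta\n\nu, v, theta = 0.4, 0.6, 2.0\nassert abs(frank_h(frank_h_inv(u, v, theta), v, theta) - u) < 1e-12\n\\end{verbatim}\n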
\n\n   \t\\subsection{Gumbel Copula}\n   \t\\subsubsection{Generator and generator inverse}\n   \t\\begin{math}\n   \t\t\\theta \\in  [1,\\infty[ \\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}(t)= e^{-t^{ \\frac{1}{\\theta} }}\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^{-1}(t)=(-log(t))^{\\theta} \\newline\n   \t\t\\Phi_{\\theta}^\\prime (t) = -\\frac{1}{\\theta} t^{ \\frac{1-\\theta}{\\theta}} e^{-t^{ \\frac{1}{\\theta} }} \\newline\n   \t\t\\Phi_{\\theta}^{\\prime \\prime} (t) = ((-\\frac{1}{\\theta} t^{ \\frac{1-\\theta}{\\theta}})^2 - \\frac{1-\\theta}{\\theta^2} t^{ \\frac{1-2\\theta}{\\theta}}) e^{-t^{ \\frac{1}{\\theta} }}\n   \t\\end{math}\n   \t\\subsubsection{C function in dimension 2}\n   \t\\begin{math}\n   \t\tC_{\\theta}(u,v) = e^{-((-log(u))^{\\theta}+(-log(v))^{\\theta})^{\\frac{1}{\\theta}}}\n   \t\\end{math}\n   \t\\subsubsection{Partial derivative of C and inverse}\n   \t\\begin{math}\n   \t\th(u,v,\\theta)=\\frac{\\partial C_{\\theta}}{\\partial v} (u,v) = \\frac{1}{v}(-log(v))^{\\theta-1}((-log(u))^{\\theta}+(-log(v))^{\\theta})^{\\frac{1}{\\theta}-1}e^{-((-log(u))^{\\theta}+(-log(v))^{\\theta})^{\\frac{1}{\\theta}}}\n\t\\newline\n\t\\newline\n\t\\text{We have to introduce the Lambert W function, which is the inverse of } x \\to x e^{x} :\\newline\n\tW(x)e^{W(x)}=x\n\t\\newline\\newline\n\t\\text{For } x \\geq 0, ye^{y}= x \\text{ has only one solution y.}\\newline\n\t\\text{So, there is no ambiguity in defining } W(x) \\text{ for } x\\geq 0.\n\t\\newline\n\t\\newline\n\t\\text{We first solve } \\lambda X^\\alpha e^{-X} = y \\text{ where } \\lambda = \\frac{(-log(v))^{\\theta-1}}{v}, \\alpha = 1-\\theta\n\t\\newline\n\t\\newline\n\t\\text{We obtain } X = (\\theta-1) W(\\frac{1}{\\theta-1}(\\frac{y}{\\lambda})^{\\frac{-1}{\\theta-1}})\n\t\\newline\n\t\\newline\n\t\\text{Then, we solve } X = ((-log(u))^{\\theta}+(-log(v))^{\\theta})^{\\frac{1}{\\theta}}\n\t\\newline\n\t\\newline\n\t\\text{which gives us } u = e^{-(X^{\\theta}-(-log(v))^{\\theta})^{\\frac{1}{\\theta}}}\n\t\\newline\n\t\\newline\n\t\\text{Finally, we have }\n  \th^{-1}(u,v,\\theta) = e^{-(((\\theta-1)W(\\frac{-log(v)}{\\theta-1}(vu)^{\\frac{-1}{\\theta -1}}))^{\\theta}-(-log(v))^{\\theta})^{\\frac{1}{\\theta}}}\n\t\\end{math}\n\n\t\\subsubsection{c function in dimension 2}\n\n\t\\begin{math}\n\t\tc_\\theta (u,v) =  \\frac{C_\\theta (u,v)}{u v} ((-log(u))^{\\theta}+(-log(v))^{\\theta})^{-2+\\frac{2}{\\theta}} ((-log(u))(-log(v)))^{\\theta-1} [1+ (\\theta -1)((-log(u))^{\\theta}+(-log(v))^{\\theta})^{-\\frac{1}{\\theta}}]\n\t\\end{math}\n\n\n\n   \t\\subsection{Clayton Copula}\n   \t\\subsubsection{Generator, generator inverse and derivatives}\n   \t\\begin{math}\n   \t\t\\theta \\in  [-1,\\infty[ \\backslash \\{ 0 \\} \\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}(t)= (1+\\theta t)_+^{-\\frac{1}{\\theta} }\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^{-1}(t)=\\frac{1}{\\theta}(t^{-\\theta}-1) \\newline\n   \t\t\\newline\n   \t\t\\Phi_{\\theta}^{(n)}(t) = (-1)^n (1+\\theta t)^{-\\frac{1+n \\theta}{\\theta}} \\prod\\limits_{k=0}^{n-1} (1 + k \\theta)\n   \t\\end{math}\n\n\t\\subsubsection{C function in dimension 2}\n   \t\\begin{math}\n   \t\tC_{\\theta}(u,v) = (max\\{u^{-\\theta}+v^{-\\theta}-1;0\\})^{-\\frac{1}{\\theta}}\n   \t\\end{math}\n\t\\subsubsection{Partial derivative of C and inverse}\n   \t\\[\n   \t\th(u,v,\\theta)=\\frac{\\partial C_{\\theta}}{\\partial v} (u,v) = \\begin{cases}\n        v^{-\\theta-1}(u^{-\\theta}+v^{-\\theta}-1)^{-\\frac{1}{\\theta}-1}  & \\quad \\text{if } u^{-\\theta}+v^{-\\theta} \\geq 1 \\\\\n    0  & \\quad \\text{else} \\\\\n  \\end{cases}\n  \\]\n\n  \t\\begin{math}\n  \th^{-1}(u,v,\\theta) = (u^{-\\frac{\\theta}{\\theta+1}}v^{-\\theta}-v^{-\\theta}+1)^{-\\frac{1}{\\theta}}\n\t\\end{math}\n\twhere u must be nonzero.\n\n\t\\subsubsection{c function in dimension 2}\n\t\\begin{math}\n\t\tc_\\theta (u,v) = (1+\\theta)(u v)^{-1-\\theta}(u^{-\\theta}+v^{-\\theta}-1)^{-\\frac{1}{\\theta}-2}\n\t\\end{math}
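\n\n\tThe Laplace-transform characterisation of the generator gives a direct sampling recipe for Archimedean copulas (the Marshall--Olkin algorithm): draw $Y$, draw independent standard exponentials $E_i$, and set $U_i = \\Phi(E_i/Y)$. For Clayton with $\\theta > 0$, $\\Phi_\\theta$ is the Laplace transform of $\\theta\\,G$ with $G \\sim \\mathrm{Gamma}(1/\\theta)$. A minimal sketch (illustration only, not the project's code):\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_clayton(theta, d, n, rng=np.random.default_rng()):\n    # Y = theta * G with G ~ Gamma(1/theta), so that\n    # E[exp(-t Y)] = (1 + theta t)^(-1/theta) = Phi_theta(t).\n    Y = theta * rng.gamma(1.0 / theta, size=(n, 1))\n    E = rng.exponential(size=(n, d))\n    return (1.0 + theta * E / Y) ** (-1.0 / theta)  # U_i = Phi(E_i / Y)\n\\end{verbatim}\n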
\n\n\t\\section{Empirical Copula}\n\tHaving a sample of multivariate random variables, we can define the finite empirical distribution of this sample, where each realisation of the sample is equiprobable. The empirical copula is the copula of this empirical distribution. Contrary to the previous ones, the empirical copula depends on its marginals.\n\n\t\\begin{equation*}\n\tC(u_1,...,u_d) = \\frac{1}{n}\\sum_{k=1}^n \\mathds{1}_{\\forall i,\\, F_i(x_{k,i})\\leq u_i}\n\t\\end{equation*}\n\n\twhere $n$ is the sample size, $x_{k,i}$ is the $i$th coordinate of the $k$th sample point, and $F_i$ is the cdf of the marginal $i$.\n\n\n\t\\section{Vine Copulas}\n\n\t\\subsection{Canonical Vine Copula}\n\n\t\\subsubsection{Global probability density function}\n\n\t\\begin{equation*}\n\t\tf(x_1,...,x_d)= \\prod_{k=1}^d f(x_k) \\prod_{j=1}^{d-1} \\prod_{i=1}^{d-j} c_{j,j+i|1,...,j-1}(F(x_j|x_1,...,x_{j-1}),F(x_{j+i}|x_1,...,x_{j-1}))\n\t\\end{equation*}\n\n\t\\subsubsection{c function in dimension n}\n\tBy identifying c in the previous equation, we have :\n\n\t\\begin{equation*}\n\t\tc_{1,...,d}(F_1(x_1),...,F_d(x_d))=  \\prod_{j=1}^{d-1} \\prod_{i=1}^{d-j} c_{j,j+i|1,...,j-1}(F(x_j|x_1,...,x_{j-1}),F(x_{j+i}|x_1,...,x_{j-1}))\n\t\\end{equation*}\n\n\tWe now make a conditional independence approximation : \\newline\n\t\\begin{math} c_{j,j+i|1,...,j-1}=1 \\end{math} when \\begin{math} (1,...,j-1) \\neq \\emptyset \\Leftrightarrow j \\neq 1\\end{math}\n\t\\newline\n\t\\newline\n\tWe obtain :\\newline\n\t\\begin{equation*}\n\t\tc_{1,...,d}(F_1(x_1),...,F_d(x_d)) = \\prod_{i=1}^{d-1} c_{1,i+1} (F_1(x_1),F_{i+1}(x_{i+1}))\n\t\\end{equation*}\n\n\tThen,\n\n\t\\begin{equation*}\n\t\tc_{1,...,d}(u_1,...,u_d) = \\prod_{i=1}^{d-1} c_{1,i+1} (u_1,u_{i+1})\n\t\\end{equation*}\n\n\t\\subsection{D Vine Copula}\n\n\t\\subsubsection{Global probability density function}\n\n\t\\begin{equation*}\n\t\tf(x_1,...,x_d)= \\prod_{k=1}^d f(x_k) \\prod_{j=1}^{d-1} \\prod_{i=1}^{d-j} c_{i,i+j|i+1,...,i+j-1}(F(x_i|x_{i+1},...,x_{i+j-1}),F(x_{i+j}|x_{i+1},...,x_{i+j-1}))\n\t\\end{equation*}\n\n\t\\subsubsection{c function in dimension n}\n\tBy identifying c in the previous equation, we have :\n\n\t\\begin{equation*}\n\t\tc_{1,...,d}(F_1(x_1),...,F_d(x_d))= \\prod_{j=1}^{d-1} \\prod_{i=1}^{d-j} c_{i,i+j|i+1,...,i+j-1}(F(x_i|x_{i+1},...,x_{i+j-1}),F(x_{i+j}|x_{i+1},...,x_{i+j-1}))\n\t\\end{equation*}\n\n\tWe now make a conditional independence approximation : \\newline\n\t\\begin{math} c_{i,i+j|i+1,...,i+j-1}=1 \\end{math} when \\begin{math} (i+1,...,i+j-1) \\neq \\emptyset \\Leftrightarrow j\\neq 1\\end{math}\n\n\tWe obtain :\\newline\n\t\\begin{equation*}\n\t\tc_{1,...,d}(F_1(x_1),...,F_d(x_d)) = \\prod_{i=1}^{d-1} c_{i,i+1} (F_i(x_i),F_{i+1}(x_{i+1}))\n\t\\end{equation*}\n\n\tThen,\n\t\\begin{equation*}\n\t\tc_{1,...,d}(u_1,...,u_d) = \\prod_{i=1}^{d-1} c_{i,i+1} (u_i,u_{i+1})\n\t\\end{equation*}
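\n\nAs an illustration of the truncated D-vine density above, here is a small sketch (not the project's code) that composes bivariate Gaussian copula densities along consecutive pairs:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef gaussian_c2(u, v, rho):\n    # Bivariate Gaussian copula density c_rho(u, v).\n    x, y = norm.ppf(u), norm.ppf(v)\n    q = rho**2 * (x**2 + y**2) - 2 * rho * x * y\n    return np.exp(-q / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)\n\ndef dvine_c(us, rhos):\n    # c_{1..d}(u) ~ prod_i c_{i,i+1}(u_i, u_{i+1}) under the\n    # conditional independence approximation.\n    return np.prod([gaussian_c2(us[i], us[i + 1], rhos[i])\n                    for i in range(len(us) - 1)])\n\nprint(dvine_c([0.2, 0.5, 0.8], rhos=[0.3, -0.4]))\n\\end{verbatim}\n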
\n\n\\begin{thebibliography}{9}\n\n\t\\bibitem{archimedeancopulas}\n  \tAlexander J. McNeil, Johanna Neslehova,\n \t \\emph{Multivariate Archimedean Copulas, d-monotone Functions and l1-norm Symmetric Distributions}, Ann. Stat., 37:3059-3097, 2009.\n\n\t\\bibitem{vineconstruction}\n  \tKjersti Aas, Claudia Czado, Arnoldo Frigessi, Henrik Bakken,\n \t \\emph{Pair-copula constructions of multiple dependence},\n  \t2007.\n\n  \t\\bibitem{fourcopulas}\n  \tKjersti Aas,\n \t \\emph{Modelling the dependence structure of financial assets: A survey of four copulas},\n  \t2004.\n\n  \t\\bibitem{kendallfunction}\n  \tJohanna F. Ziegel, Tilmann Gneiting,\n \t \\emph{Copula Calibration}, Electronic Journal of Statistics,\n  \t2014.\n\n\t\\end{thebibliography}\n\\end{document}\n", "meta": {"hexsha": "0b8da178bcc1bd83bef50673d003a4bf4801dd5f", "size": 44026, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "prescient/gosm/Latex/Copula formulae.tex", "max_stars_repo_name": "iSoron/Prescient", "max_stars_repo_head_hexsha": "a3c1d7c5840893ff43dca48c40dc90f083292d26", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2020-06-03T13:54:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T18:20:35.000Z", "max_issues_repo_path": "prescient/gosm/Latex/Copula formulae.tex", "max_issues_repo_name": "iSoron/Prescient", "max_issues_repo_head_hexsha": "a3c1d7c5840893ff43dca48c40dc90f083292d26", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 79, "max_issues_repo_issues_event_min_datetime": "2020-07-30T17:29:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-09T00:06:39.000Z", "max_forks_repo_path": "prescient/gosm/Latex/Copula formulae.tex", "max_forks_repo_name": "bknueven/Prescient", "max_forks_repo_head_hexsha": "6289c06a5ea06c137cf1321603a15e0c96ddfb85", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2020-07-14T17:05:56.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-17T17:51:13.000Z", "avg_line_length": 50.9560185185, "max_line_length": 1320, "alphanum_fraction": 0.6694226139, "num_tokens": 15301, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.5604883392964484}}
{"text": "\\documentclass{article}\n\\usepackage{hyperref,lipsum,amsmath,amssymb}\n\\hypersetup{colorlinks=true} % make sure our hyperlinks are coloured for visibility\n% We define an Author, Title and Date\n\\author{\\emph{Your Name Here}}\n\\title{A Short \\LaTeX{} Worksheet \\\\ \\small Created by Sam Fearn -- s.m.fearn@durham.ac.uk}\n\\date{March 15\\textsuperscript{th}, 2018}\n\n\\begin{document}\n% Create a title from our Author, Title and Date\n\\maketitle\n% We put our content into section environments\n\\section{Introduction}\n\\label{sec:introduction}\n% I didn't have time to discuss labels today, but adding labels to your document allows you to refer back to the relevant place later in your document using the \\ref{label} command. For example, I could refer to the introduction later in my document by saying `see section \\ref{sec:introduction}'. This would produce the output `see section 1'. The advantage os using labels, is that if I add a reference to section 2, and then later add a new section before the old section 2, the reference will automatically be updated to say section 3. Try adding some references to the various sections and subsections of this document.\nThis worksheet has been created using the \\LaTeX{} typesetting system. By reproducing this document you should be able to develop your skills with \\LaTeX{}. You should \\emph{typeset} your file regularly to see whether it is looking correct. Try it now if you haven't already. If everything has worked properly you should now have a PDF file of the worksheet up to this point. \n\n\\section{Content}\n\\label{sec:content}\n\n\\emph{Sections} allow us to add structure our documents. For a short document such as this, we will use \\emph{sections} and \\emph{subsections}. A longer document such as a thesis would also use \\emph{chapters} (and the \\emph{report} document class). The first this we will discuss is how to typeset mathematics. To start with, ignore the `verbatim' code and hyperlinks in the following subsection, these will be explained later. A good reference for some of the symbols and commands in the following subsections is \\href{https://wch.github.io/latexsheet/latexsheet.pdf}{the \\LaTeXe{} cheat sheet}.\n\n\\subsection{Typesetting Mathematics}\n\\label{sub:typesetting_mathematics}\n\nOne of the reasons to use \\LaTeX{} is that it can produce very good looking mathematical formulae. We have already seen some examples in the talk. \\LaTeX{} only recognises mathematical commands while in `maths mode'. There are a few ways to enter maths mode -- which one we use depends on how we want the output to be formatted. If I want the maths to be `inline', part of the current line, then I place it between dollar symbols. Therefore \\$ x = 4 \\$ produces $x=4$. If I want to produce an equation which is placed on its own line then I can use the \\emph{\\textbf{equation} environment}.\n\\begin{verbatim}\n\t\\begin{equation}\n\t    x^3 + 2 \\ge 6.\n\t\\end{equation}\n\\end{verbatim}\nThis produces\n\\begin{equation}\n\tx^3 + 2 \\ge 6.\n\\end{equation}\nSee \\href{https://www.sharelatex.com/learn/Mathematical_expressions}{this webpage} for more information on mathematics modes. Try to reproduce the rest of this subsection as closely as you can.\n\nA \\emph{bijection} is a map between two sets which is both \\emph{injective} and \\emph{surjective}. Let us now define these terms. 
Given two sets $A$ and $B$, a map $\\phi:A \\to B$ is said to be injective if\n\\begin{equation}\\label{eq:injective} % I can also add references to my equations using labels.\n\t\\phi(a_1) = \\phi(a_2) \\iff a_1 = a_2,\\ \\forall\\ a_1,a_2 \\in A.\n\\end{equation}\nFurthermore, the map is surjective if\n\\begin{equation}\\label{eq:surjective}\n\t\\forall\\ b \\in B,\\ \\exists\\ a \\in A\\ |\\ \\phi(a)=b.\n\\end{equation}\n% Try uncommenting the following line.\n% If both equation \\ref{eq:injective} and equation \\ref{eq:surjective} are satisfied, then the map is said to be \\emph{bijective}.\n\nThe talk also covered the important result of the \\emph{Gaussian integral}\n\\begin{equation}\n\t\\int_{-\\infty}^\\infty e^{-\\frac{1}{2}\\left( \\frac{x - \\mu}{\\sigma} \\right)^2} dx = \\sigma\\sqrt{2 \\pi}.\n\\end{equation}\n\nIn the partitions project, we saw how the geometric progression formula\n\\begin{equation}\n\t\\frac{1}{1-x} = 1 + x + x^2 + \\ldots + x^n + \\ldots = \\sum_{n=0}^\\infty x^n,\n\\end{equation}\ncould be used to deduce the generating function for the partition number $p(n)$,\n\\begin{equation}\n\t\\sum_{n=0}^\\infty p(n) x^n = \\prod_{n=1}^\\infty \\frac{1}{1-x^n}.\n\\end{equation}\n\n\\subsection{Verbatim} % (fold)\n\\label{sub:verbatim}\n\nIn order to explain some of the commands that you will need to add to your file, we will need to show the names of the commands on the worksheet. In order for \\LaTeX{} to display the command rather than process it we use an \\emph{environment} called \\textbf{verbatim}.\n\\begin{verbatim}\n\t\\begin{verbatim}\n\t    \\LaTeX{} is now printed rather than processed.\n\t\\end{verbatim }\n\\end{verbatim}\n% Note the space in the inner \\end{verbatim }. This is so \\LaTeX{} understands the difference between the one I want to print verbatim and the actual end of the verbatim environment. \nWe can also produce verbatim output in the middle of a line using \\verb!\\verb|\\LaTeX{}|!.\n% We can use either the exclamation mark or the pipe mark to indicate what content is to be inline verbatim. As previously, I have to use one of each to ensure that \\LaTeX{} understands which one ends the actual verb environment, and which is to be printed verbatim.\n\\subsection{Packages}\n\\label{sub:packages}\n\nOne of the advantages of \\LaTeX{} (or \\TeX{}) is that it can be extended using \\emph{packages}. A package is \\emph{loaded} by adding the line\n\\begin{verbatim}\n\t\\usepackage{package_name}\n\\end{verbatim}\ninto the \\emph{preamble} of your tex file. The preamble is the part of the file before the line \\verb|\\begin{document}|. Everything between \\verb|\\begin{document}| and \\verb|\\end{document}| is called the \\emph{body} of the document. We can use a package called \\href{https://ctan.org/pkg/lipsum?lang=en}{lipsum} to produce dummy text. This hyperlink was in turn created with a package called \\href{https://ctan.org/pkg/hyperref}{hyperref} and the command used was\n\\begin{verbatim}\n\t\\href{https://ctan.org/pkg/lipsum?lang=en}{lipsum}.\n\\end{verbatim}\nThe following paragraph is produced using the lipsum package; you can read about how to use this package from the package documentation on the previously linked webpage for lipsum -- the following is the lipsum paragraph 66.\n\n\\lipsum[66]\n\nTwo packages that we will often want to use are the \\href{https://michael-prokop.at/latextug/amsldoc.pdf}{amsmath and amssymb} packages, which are documented at the previously linked page. 
They add more mathematical commands that you can then use in your documents, such as matrices, unnumbered environments and mathematical fonts. Once I have loaded amsmath and amssymb, I can produce the output\n\\begin{equation*}\n\tA = \\begin{pmatrix}\n\t\t4 - 2i & i \\\\\n\t\t-i & a\n\t\\end{pmatrix} \\in M_2(\\mathbb{C}).\n\\end{equation*}\nThese packages also allow us to align equations using the \\textbf{align} environment. We can use this to define two block matrices $B$ and $C$. We can then give the multiplication of these matrices in a new equation environment. If\n\\begin{align}\n\tB &= \\left(\\begin{array}{c | c} % The c here is to centre align the array columns.\n\t\tP & Q \\\\\n\t\t\\hline\n\t\tR & S\n\t\\end{array}\\right),\\nonumber \\\\\n\tC &= \\left(\\begin{array}{c | c}\n\t\tW & X \\\\\n\t\t\\hline\n\t\tY & Z\n\t\\end{array}\\right),\n\\end{align}\nthen\n\\begin{equation}\n\tBC = \\left(\\begin{array}{c | c}\n\t\tPW + QY & PX + QZ \\\\\n\t\t\\hline\n\t\tRW + SY & RX + SZ\n\t\\end{array}\\right).\n\\end{equation}\n% There's quite a lot going on in the previous few lines; this was supposed to be the hardest part to reproduce for anyone who already had some experience with latex. The \\left( and \\right) commands are used to produce parentheses which scale automatically based on what is inside them. The array environment lets me format content in columns and specify a separator. I use this to produce the vertical line of the block matrix; the horizontal line is produced with the \\hline command. The ampersand character & is used by \\LaTeX{} to align elements. It is used between the B and the equals and between the C and the equals to align these two lines. The double slash \\\\ after the first right) is used to end the first line and start a new line. The ampersand is also used inside the array environment to tell \\LaTeX{} how to align the elements of the columns. Finally, the \\nonumber command suppresses the equation number for the first line of the align environment.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nRegardless of how much of this sheet you have been able to reproduce, I hope you have learnt something about \\LaTeX{}. In my opinion, it is best to learn \\LaTeX{} simply by starting to use it, and answering any questions you have as you go by using the many excellent resources online. The \\href{https://www.sharelatex.com/learn}{ShareLaTeX} website contains a lot of useful help, as does \\href{https://tobi.oetiker.ch/lshort/lshort.pdf}{The Not So Short Introduction To \\LaTeXe{}}. The tex file used to produce this talk will be available shortly so that you can compare with your tex file. 
I have also added some comments to my file to explain some additional things, so I encourage you to read through if you are interested.\n\n\\end{document}\n", "meta": {"hexsha": "1a9ab8ef77e3a778cd9d2a05d92b94c72686ec1b", "size": 9345, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/postfiles/20180626/LaTeXWorksheet.tex", "max_stars_repo_name": "samfearn/al-folio", "max_stars_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/postfiles/20180626/LaTeXWorksheet.tex", "max_issues_repo_name": "samfearn/al-folio", "max_issues_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/postfiles/20180626/LaTeXWorksheet.tex", "max_forks_repo_name": "samfearn/al-folio", "max_forks_repo_head_hexsha": "def82ffb174fd1c3181a72847484958d55b62217", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-06T00:05:16.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-11T14:11:08.000Z", "avg_line_length": 74.76, "max_line_length": 966, "alphanum_fraction": 0.7539860888, "num_tokens": 2512, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7634837581726991, "lm_q1q2_score": 0.5604883353457156}}
{"text": "\\chapter{Fusion}\r\n\\section {Definitions}\r\n{\\bf Definition:}\r\nLet $p$ be a prime, $T \\in S_p(G), W \\le T$ with $W$ weakly closed in $T$ with respect\r\nto $G$ and $D=C_G (W)$.  Then $N_G(W)$ \\emph{controls fusion} in $D$.\r\n\\\\\r\n\\\\\r\n{\\bf Definition:}\r\n$P \\in S_p(G)$.  $X \\in p(G)$ is a \\emph {tame intersection} of\r\n$Q, R \\in S_p (G)$ if $X= Q \\cap R$ and $N_Q (X), N_R (X) \\in S_p(N(X))$.\r\n\\\\\r\n\\\\\r\n{\\bf Definition:}\r\nFor $R, Q \\in S_p(G)$, write $R \\rightarrow_x Q$ if $\\exists Q_i \\in S_p(G), 1 \\le i \\le n$,\r\nand $x_i \\in N_G(P \\cap Q_i)$ such that (1) $P \\cap Q_i$ is a tame intersection of\r\n$P$ and $Q_i$ \r\nfor each $1 \\le i \\le n$,\r\n(2) $P \\cap R \\le P \\cap Q_1$ and $(P \\cap R)^{x_1 x_1 \\ldots x_i} \\le (P \\cap Q_i)$\r\nfor each $1 \\le i \\le n$, and\r\n(3) $R^x=Q$ where $x= x_1 x_2 \\ldots x_n$.\r\n\\section {Alperin's Results}\r\n{\\bf Alperin's Fusion Theorem:}\r\nIf $P \\in S_p(G), g \\in G$ and $ \\langle A, A^g \\rangle \\subseteq P$.  Then \r\nfor $1 \\le i \\le n$, $\\exists Q_i \\in S_p (G)$ and $x_i \\in N(P \\cap Q_i)$ such\r\nthat (1) $g= x_1 x_2 ... x_n$,\r\n(2) $P \\cap Q_i$ is a tame intersection of $P$ and $Q_i$ for each $i$,\r\n(3) $A \\subseteq P \\cap Q_1$ and\r\n$A^{x_1 x_2 ... x_i} \\subseteq P \\cap Q_{i+1}$.\r\n\\\\\r\n\\\\\r\n{\\bf Lemma 1:}\r\n$Q \\rightarrow P, \\forall Q \\in S_p(G)$.\r\n\\\\\r\n\\\\\r\n{\\bf Lemma 2:}\r\n$P \\rightarrow P$. \r\n\\\\\r\n\\\\\r\n{\\bf Lemma 3:}\r\n$\\rightarrow$ is transitive.\r\n\\begin{quote}\r\n\\emph{Proof:} \r\nLet $\\{R_i, y_i: 1 \\le i \\le m \\}$ and\r\n$\\{Q_i, x_i: 1 \\le i \\le n \\}$ accomplish $S \\rightarrow R$ and $R \\rightarrow Q$\r\nrespectively.  Then $R_1 , R_2 , \\ldots , R_m , Q_1, \\ldots , Q_n$ and\r\n$y_1 , y_2 , \\ldots , y_m , x_1 , \\ldots , x_n$ accomplish $A \\rightarrow Q$.\r\n\\end{quote}\r\n{\\bf Lemma 4:}\r\nIf $S \\rightarrow_x P$, $Q^x \\rightarrow P$ and $P \\cap Q = P \\cap S$\r\nthen $Q \\rightarrow P$.\r\n\\begin{quote}\r\n\\emph{Proof:} It suffices to show $Q \\rightarrow Q^x$.\r\nLet $\\{S_i, x_i: 1 \\le i \\le n \\}$\r\naccomplish $S \\rightarrow P$ then\r\n$\\{S_i, x_i: 1 \\le i \\le n \\}$ also accomplishes\r\n$Q \\rightarrow Q^x$.\r\n\\end{quote}\r\n{\\bf Lemma 5:}  If \r\nAssume $R, Q \\in S_p(G)$ with $R \\rightarrow P$ and $P \\cap Q < R \\cap Q$.  Assume further,\r\n$\\forall S \\in S_p(G)$ with $|S \\cap P| > |Q \\cap  P|$ and $S \\rightarrow P$.  Then\r\n$Q \\rightarrow P$.\r\n\\\\\r\n\\\\\r\n{\\bf Lemma 6:}\r\nAssume $P \\cap Q$ is tame and \r\n$S \\rightarrow P, \\forall S \\in S_p(G)$ with $|S \\cap P| > |Q \\cap P|$ and $S \\rightarrow P$\r\nthen $Q \\rightarrow P$.\r\n\\begin{quote}\r\n\\emph{Proof:}\r\nBy Lemma 2, we can assume $Q \\ne P$ and thus $P \\cap Q < P_0 = N_P(P \\cap Q)$\r\nBy hypothesis $P_0$ and $Q_0= N_Q(P \\cap Q)$ are Sylow in $M=N_G(P \\cap Q)$ so there is\r\nan $x \\in M$ with $Q_0^x=P_0$.  Hence $Q \\rightarrow Q_x$ by $Q, x$; further,\r\n$P \\cap Q < P_0 \\le P \\cap Q^x$ so $Q^x \\rightarrow P$ and finally by Lemma 3 $Q \\rightarrow P$.\r\n\\\\\r\n\\\\\r\n\\emph{Proof of Lemma 1:}\r\nPick a counterexample $Q$ with $P \\cap Q$ of maximal order.  By Lemma 2, $P \\ne Q$ so\r\n$P \\cap Q \\ne P$ hence $P \\cap Q < N_P(P \\cap Q)$.  Let $S \\in S_p(G)$ with\r\n$N_P(P \\cap Q) < N_S(P \\cap Q) \\le P \\cap S$, $S \\rightarrow P$ by the maximality of\r\n$P \\cap Q$ this there is a $x \\in G: S \\rightarrow_x P$.  
Now $(P \\cap Q)^x \\le Q^x$,\r\n$P \\cap Q \\le S$ and $S^x=P$ so $(P \\cap Q)^x \\le P$.  Thus\r\n$(P \\cap Q)^x \\le P \\cap Q^x$.  If $(P \\cap Q)^x \\ne P \\cap Q^x$ then\r\n$|P \\cap Q| < |P \\cap Q^x|$ so by the maximality of $P \\cap Q$, $Q^x \\rightarrow P$.  But\r\nthen $Q \\rightarrow P$ by Lemma 4, contradicting the choice of $Q$.  Now we have\r\n$(P \\cap Q)^x = P \\cap Q^x$ and let $T \\in S_p(G)$ with \r\n$N_{Q^x}(P \\cap Q^x) \\le\r\nN_{T}(P \\cap Q^x) \\in S_p(N_G(P \\cap Q^x))$.  Again\r\n$P \\cap Q^x < N_{Q^x}(P \\cap Q^x) \\le T$ so $P \\cap Q^x < T \\cap Q^x$.  Hence if\r\n$T \\rightarrow P$, by Lemma 5 $Q^x \\rightarrow P$, which was already shown false.  Thus\r\nwe do not have $T \\rightarrow P$ and, by the maximality of $|P \\cap Q|$,\r\n$P \\cap Q^x = P \\cap T$.\r\nBy the choice of $T$ and since $P \\cap Q^x = P \\cap T$, we have\r\n$N_T(P \\cap T) \\in S_p(N_G(P \\cap T))$.\r\nBy the choice of $S$, \r\n$N_S(P \\cap Q) \\in S_p(N_G(P \\cap Q))$.  Since\r\n$(P \\cap Q)^x = (P \\cap Q^x) = P \\cap T$ and $S^x=P$, we have \r\n$N_P(P \\cap T) \\in S_p(N_G(P \\cap T))$.  But now, by Lemma 6, $T \\rightarrow P$, contrary to the\r\nprevious observation.\r\n\\\\\r\n\\\\\r\n\\emph{Proof of Alperin:}  By Lemma 1, $P^{g^{-1}} \\rightarrow P$.  Let \r\n$Q_i, x_i, 1 \\le i < n$ accomplish\r\n$P^{g^{-1}} \\rightarrow P$. $ \\langle A, A^{g^{-1}} \\rangle \\subseteq P \\cap P^{g^{-1}}$ so\r\n$A \\subseteq P \\cap P^{g^{-1}} \\le (P \\cap Q_1)$ and $A^{x_1 x_2 \\ldots x_i}\r\n\\le P \\cap Q_{i+1}$\r\nfor $1 \\le i < n$.  Setting $x=x_1 x_2 \\ldots x_{n-1}$, we have\r\n$P = P^{g^{-1}x}$, so $x_n=x^{-1}g \\in N_G(P)$ and\r\n$g= x x_n$.  Finally, let $Q_n=P$ and note that\r\n$A^{x_1 x_2 \\ldots x_{n-1}}= A^{g x_n^{-1}} \\le P^{x_n^{-1}}=P=P \\cap Q_n$ and \r\nthe theorem holds.\r\n\\end{quote}\r\n", "meta": {"hexsha": "f1ec5d776b23b83a19c1a45942ce6a74f582d0a6", "size": 4814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "groups/gtFusion.tex", "max_stars_repo_name": "jlmucb/class_notes", "max_stars_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "groups/gtFusion.tex", "max_issues_repo_name": "jlmucb/class_notes", "max_issues_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "groups/gtFusion.tex", "max_forks_repo_name": "jlmucb/class_notes", "max_forks_repo_head_hexsha": "b8571df2dca933f6594a16eb02b581d38ca4ccfd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.9821428571, "max_line_length": 97, "alphanum_fraction": 0.5712505193, "num_tokens": 2072, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837527911056, "lm_q2_score": 0.734119526900183, "lm_q1q2_score": 0.5604883313949827}}
{"text": "% -*- mode: latex; TeX-master: \"Vorbis_I_spec\"; -*-\n%!TEX root = Vorbis_I_spec.tex\n\\section{Helper equations} \\label{vorbis:spec:helper}\n\n\\subsection{Overview}\n\nThe equations below are used in multiple places by the Vorbis codec\nspecification.  Rather than cluttering up the main specification\ndocuments, they are defined here and referenced where appropriate.\n\n\n\\subsection{Functions}\n\n\\subsubsection{ilog} \\label{vorbis:spec:ilog}\n\nThe \"ilog(x)\" function returns the position number (1 through n) of the highest set bit in the two's complement integer value\n\\varname{[x]}.  Values of \\varname{[x]} less than zero are defined to return zero.\n\n\\begin{programlisting}\n  1) [return\\_value] = 0;\n  2) if ( [x] is greater than zero ) {\n\n       3) increment [return\\_value];\n       4) logical shift [x] one bit to the right, padding the MSb with zero\n       5) repeat at step 2)\n\n     }\n\n   6) done\n\\end{programlisting}\n\nExamples:\n\n\\begin{itemize}\n \\item ilog(0) = 0;\n \\item ilog(1) = 1;\n \\item ilog(2) = 2;\n \\item ilog(3) = 2;\n \\item ilog(4) = 3;\n \\item ilog(7) = 3;\n \\item ilog(negative number) = 0;\n\\end{itemize}\n\n\n\n\n\\subsubsection{float32\\_unpack} \\label{vorbis:spec:float32:unpack}\n\n\"float32\\_unpack(x)\" is intended to translate the packed binary\nrepresentation of a Vorbis codebook float value into the\nrepresentation used by the decoder for floating point numbers.  For\npurposes of this example, we will unpack a Vorbis float32 into a\nhost-native floating point number.\n\n\\begin{programlisting}\n  1) [mantissa] = [x] bitwise AND 0x1fffff (unsigned result)\n  2) [sign] = [x] bitwise AND 0x80000000 (unsigned result)\n  3) [exponent] = ( [x] bitwise AND 0x7fe00000) shifted right 21 bits (unsigned result)\n  4) if ( [sign] is nonzero ) then negate [mantissa]\n  5) return [mantissa] * ( 2 ^ ( [exponent] - 788 ) )\n\\end{programlisting}\n\n\n\n\\subsubsection{lookup1\\_values} \\label{vorbis:spec:lookup1:values}\n\n\"lookup1\\_values(codebook\\_entries,codebook\\_dimensions)\" is used to\ncompute the correct length of the value index for a codebook VQ lookup\ntable of lookup type 1.  The values on this list are permuted to\nconstruct the VQ vector lookup table of size\n\\varname{[codebook\\_entries]}.\n\nThe return value for this function is defined to be 'the greatest\ninteger value for which \\varname{[return\\_value]} to the power of\n\\varname{[codebook\\_dimensions]} is less than or equal to\n\\varname{[codebook\\_entries]}'.\n\n\n\n\\subsubsection{low\\_neighbor} \\label{vorbis:spec:low:neighbor}\n\n\"low\\_neighbor(v,x)\" finds the position \\varname{n} in vector \\varname{[v]} of\nthe greatest value scalar element for which \\varname{n} is less than\n\\varname{[x]} and vector \\varname{[v]} element \\varname{n} is less\nthan vector \\varname{[v]} element \\varname{[x]}.\n\n\\subsubsection{high\\_neighbor} \\label{vorbis:spec:high:neighbor}\n\n\"high\\_neighbor(v,x)\" finds the position \\varname{n} in vector [v] of\nthe lowest value scalar element for which \\varname{n} is less than\n\\varname{[x]} and vector \\varname{[v]} element \\varname{n} is greater\nthan vector \\varname{[v]} element \\varname{[x]}.\n\n\n\n\\subsubsection{render\\_point} \\label{vorbis:spec:render:point}\n\n\"render\\_point(x0,y0,x1,y1,X)\" is used to find the Y value at point X\nalong the line specified by x0, x1, y0 and y1.  
This function uses an\ninteger algorithm to solve for the point directly without calculating\nintervening values along the line.\n\n\\begin{programlisting}\n  1)  [dy] = [y1] - [y0]\n  2) [adx] = [x1] - [x0]\n  3) [ady] = absolute value of [dy]\n  4) [err] = [ady] * ([X] - [x0])\n  5) [off] = [err] / [adx] using integer division\n  6) if ( [dy] is less than zero ) {\n\n       7) [Y] = [y0] - [off]\n\n     } else {\n\n       8) [Y] = [y0] + [off]\n\n     }\n\n  9) done\n\\end{programlisting}\n\n\n\n\\subsubsection{render\\_line} \\label{vorbis:spec:render:line}\n\nFloor decode type one uses the integer line drawing algorithm of\n\"render\\_line(x0, y0, x1, y1, v)\" to construct an integer floor\ncurve for contiguous piecewise line segments. Note that it has not\nbeen relevant elsewhere, but here we must define integer division as\nrounding division of both positive and negative numbers toward zero.\n\n\n\\begin{programlisting}\n  1)   [dy] = [y1] - [y0]\n  2)  [adx] = [x1] - [x0]\n  3)  [ady] = absolute value of [dy]\n  4) [base] = [dy] / [adx] using integer division\n  5)    [x] = [x0]\n  6)    [y] = [y0]\n  7)  [err] = 0\n\n  8) if ( [dy] is less than 0 ) {\n\n        9) [sy] = [base] - 1\n\n     } else {\n\n       10) [sy] = [base] + 1\n\n     }\n\n 11) [ady] = [ady] - (absolute value of [base]) * [adx]\n 12) vector [v] element [x] = [y]\n\n 13) iterate [x] over the range [x0]+1 ... [x1]-1 {\n\n       14) [err] = [err] + [ady];\n       15) if ( [err] >= [adx] ) {\n\n             16) [err] = [err] - [adx]\n             17)   [y] = [y] + [sy]\n\n           } else {\n\n             18) [y] = [y] + [base]\n\n           }\n\n       19) vector [v] element [x] = [y]\n\n     }\n\\end{programlisting}\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "0a13795b236e46d12eb52dcaecfa17097dcdba49", "size": 4894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "external/libvorbis-1.3.6/doc/09-helper.tex", "max_stars_repo_name": "tris790/SDL_mixer", "max_stars_repo_head_hexsha": "a79cbe6dee6990c4c1d61a024b2f996363620b81", "max_stars_repo_licenses": ["Zlib"], "max_stars_count": 1514, "max_stars_repo_stars_event_min_datetime": "2015-01-02T17:00:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T14:11:21.000Z", "max_issues_repo_path": "external/libvorbis-1.3.6/doc/09-helper.tex", "max_issues_repo_name": "tris790/SDL_mixer", "max_issues_repo_head_hexsha": "a79cbe6dee6990c4c1d61a024b2f996363620b81", "max_issues_repo_licenses": ["Zlib"], "max_issues_count": 1462, "max_issues_repo_issues_event_min_datetime": "2015-01-01T10:53:29.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-27T04:35:53.000Z", "max_forks_repo_path": "external/libvorbis-1.3.6/doc/09-helper.tex", "max_forks_repo_name": "tris790/SDL_mixer", "max_forks_repo_head_hexsha": "a79cbe6dee6990c4c1d61a024b2f996363620b81", "max_forks_repo_licenses": ["Zlib"], "max_forks_count": 552, "max_forks_repo_forks_event_min_datetime": "2015-01-02T05:34:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T05:19:19.000Z", "avg_line_length": 27.0386740331, "max_line_length": 125, "alphanum_fraction": 0.6624438087, "num_tokens": 1611, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542925, "lm_q2_score": 0.7341195152660687, "lm_q1q2_score": 0.5604883304139909}}
{"text": "\\documentclass[jou]{apa6}\n\n\\usepackage[american]{babel}\n\n\\usepackage{csquotes}\n\\usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}\n\\DeclareLanguageMapping{american}{american-apa}\n\\addbibresource{bibliography.bib}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%% Discrete Structures\n%% The start of RBS stuff\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n% Working internal and external links in PDF\n\\usepackage{hyperref}\n% Extra math symbols in LaTeX\n\\usepackage{amsmath}\n\\usepackage{gensymb}\n\\usepackage{amssymb}\n% Enumerations with (a), (b), etc.\n\\usepackage{enumerate}\n\n\\let\\OLDitemize\\itemize\n\\renewcommand\\itemize{\\OLDitemize\\addtolength{\\itemsep}{-6pt}}\n\n\\usepackage{etoolbox}\n\\makeatletter\n\\preto{\\@verbatim}{\\topsep=3pt \\partopsep=3pt }\n\\makeatother\n\n% These sizes redefine APA for A4 paper size\n\\oddsidemargin 0.0in\n\\evensidemargin 0.0in\n\\textwidth 6.27in\n\\headheight 1.0in\n\\topmargin -24pt\n\\headheight 12pt\n\\headsep 12pt\n\\textheight 9.19in\n\n\n\n\\title{Discrete Structures (W2): Sets and Predicates}\n\\author{Kalvis}\n\\affiliation{RBS}\n\n\\leftheader{Discrete Structures (W2)}\n\n\\abstract{\nYou will do more experiments with Boolean expressions and \nstart to prove propositional tautologies in Coq.\nYou will also formulate simple \"human-oriented\" proofs\nabout integer arithmetic and other topics.\n}\n\n\\keywords{Coq, Boolean expressions, Syntax trees, tautologies, proofs.}\n\n\\begin{document}\n\\maketitle\n\n\n{\\bf (W2.1)} Try Boolean short-circuit evaluation on Python: \nDefine two functions: {\\tt f(x)} and {\\tt g(x)}. Each of them \nprints a message and then returns value {\\tt False}. For example:\n\\begin{verbatim}\ndef f():\n    print('Running f(x)')\n\treturn False\n\\end{verbatim}\nAfter that evaluate this statment:\n\\begin{verbatim}\nif f(x) and g(x):\n    print('Unreachable statement')\n\\end{verbatim}\nThere should be also eager \n\n%{\\bf (W2.3)} Verify identities on set operations.\n%\n%{\\bf (W2.4)} Indexed operations $\\vee_{i=1}^{k}$, $\\wedge_{i=1}^{k}$, \n%$\\cup_{i=1}^{k}$, $\\cap_{i=1}^{k}$, $\\sum_{i=1}^{k}$, \n%$\\prod_{i=1}^{k}$.\n\n{\\bf (W2.2)} Compare Boolean expressions in Coq, using prefix and infix notation.\n\\begin{verbatim}\nRequire Import Bool.\nEval compute in orb true false.\nEval compute in true || false.\nEval compute in andb true false.\nEval compute in true && false.\nEval compute in negb false.\nEval compute in if true then 3 else 4.\nDefinition a := true.\nEval compute in orb a (negb a).\n\nDefinition eMiddle (a:bool): bool :=\n  orb a (negb a).\nEval compute in eMiddle true.\n\nDefinition Nor (a b:bool): bool :=\n  negb (orb a b).\nEval compute in Nor true true.  \n\\end{verbatim}\n\n\n{\\bf (W2.3)} Use precedence and associativity to draw abstract syntax trees.\nUse boolean commands \"orb\", \"andb\", \"implb\" and \"negb\" to \ncompute in prefix notation this function: \n$$E(a,b,c) := a \\vee \\neg(b \\wedge (a \\rightarrow c)).$$\nDraw the syntax tree of this operation. Its leaves\nare variables $a,b,c$ (and also constants, if they are present in the expression). \nInner nodes are all the Boolean operations with 1 or 2 variables. \nSo, in this tree every inner node has 1 or 2 children. \n\n{\\bf (W2.4)} Read a definition of a DNF (Disjunctive Normal Form). \nCreate a DNF for a 3-argument Boolean function. 
\nYou can pick the $8$ truth values at random.\n\n{\\bf (W2.5)} \nProve that for any odd integer $k$, the square $k^2$ \ngives remainder $1$ when divided by $8$.\n\n{\\bf (W2.6)} \nProve that a positive integer $n$ has an odd number of positive divisors \n(including $1$ and $n$ itself) if and only if $n$ is a full square, i.e.\\ it can be expressed as $k^2$.\n\n{\\bf (W2.7)} Prove that for any prime number $p$, the square root  \n$\\sqrt{p}$ is irrational.\n\n{\\bf (W2.8)}\nProve that for any real number $x$ its rounded value to the nearest tenth \n(rounding to one decimal place) is equal to $\\frac{1}{10}\\lfloor 10x + 0.5 \\rfloor$.\n\n{\\bf (W2.9)}\nProve that there is a function  $f:(-\\pi;\\pi) \\rightarrow \\mathbb{R}$ \nmapping interval $(-\\pi;\\pi)$ to $\\mathbb{R}$ such that every real number\n$y$ has exactly one $x$ such that $f(x)=y$.\n\n{\\bf (W2.10)}\nProve that it is impossible to enumerate all subsets of natural numbers with natural numbers.\n(Natural numbers are all integers $\\geq 0$).\n\n{\\bf (W2.11)}\nTry out the examples in \\url{https://bit.ly/2sfIrUK}. \nDiscuss, if you are unsure, how to write proofs. \nVisit tutorials \\url{https://bit.ly/37Zjso0}, if you need more inspiration. \n\n{\\bf (W2.12)}\nTry out the Coq assignment on \\url{https://bit.ly/37Zjso0}\n(the link \"Assignment about Coq (1 of 5)\"). \n\n\\newpage\n\n\\section{Some Answers}\n\n{\\bf (W2.1)}\n\nThe full Python program looks like this:\n\\begin{verbatim}\ndef f(): print('Runs f()'); return False\ndef g(): print('Runs g()'); return False\ndef h(): print('Runs h()'); return True\n\n## Prints 'Runs f()':\nif f() and g(): print('Unreachable')\n## Prints 'Runs f()', 'Runs g()':\nif f() & g(): print('Unreachable')\n## Prints 'Runs h()' and 'Hi':\nif h() or g(): print('Hi')\n## Prints 'Runs h()', 'Runs g()' and 'Hi'\nif h() | g(): print('Hi')\n\\end{verbatim}\n\n{\\bf (W2.5)} Assume that $n$ is odd. Represent it as \n$n = 2k+1$. Then \n$$n^2 = (2k+1)^2 = 4k^2 +4k + 1 = 4k(k+1) + 1.$$\nConsider two cases:\\\\\n(1) If $k$ is odd, then $k+1$ is even and \n$4\\times (k+1)$ is divisible by $8$.\\\\\n(2) If $k$ is even, then $4 \\times k$ is divisible by \n$8$.\\\\\nIn either case $4k(k+1)$ is divisible by $8$. \nTherefore $4k(k+1) + 1$ always gives remainder $1$ when \ndivided by $8$. \n\n{\\bf (W2.6)} The biconditional (if-and-only-if) statement contains two \nimplications:\\\\\n(1) First, assume that $n$ is a full square: $n = k^2$. We need to \nprove that it has an odd number of positive divisors.\nFor any divisor $d < \\sqrt{n}=k$ of $n$ there exists another divisor \n$d' = n/d$ which is bigger than $\\sqrt{n}$ \\textendash{} so the divisors split into pairs. In addition, $n$ has a divisor $k = \\sqrt{n}$\nthat is paired with itself.\\\\\nExample: The number $36=6^2$ has these divisor pairs:\n$$(1;36),\\;(2;18),\\;(3;12),\\;(4;9),\\;(6;6).$$\nSince the divisor $\\sqrt{n}$ is paired with itself, there is\nan odd number of positive divisors.\\\\\n(2) Secondly, assume that $n$ has an odd number of divisors; we should prove\nthat it is a full square.\\\\\nIn fact, we will prove the contrapositive: If $n$ is {\\bf not} a full square, \nthen it has an even number of divisors. To see this, we also arrange the divisors\nof $n$ in pairs \\textendash{} as before, one of them is less than $\\sqrt{n}$, another\none is bigger than $\\sqrt{n}$. 
Only this time the number $\\sqrt{n}$ itself\ndoes not count as a divisor of $n$, because it is not an integer.\\\\\nExample: The number $12 \\approx (3.4641\\ldots)^2$ has these divisor pairs:\n$$(1;12),\\;(2;6),\\;(3;4).$$\n\n\\end{document}\n\n\n", "meta": {"hexsha": "4ed69b09f1afc9339fefe2e62fef6ea02adbf356", "size": 6649, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/site/discrete-spring2020/static/homeworks/discrete-lab-week02.tex", "max_stars_repo_name": "kapsitis/math", "max_stars_repo_head_hexsha": "f21b172d4a58ec8ba25003626de02bfdda946cdc", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/site/discrete-spring2020/static/homeworks/discrete-lab-week02.tex", "max_issues_repo_name": "kapsitis/math", "max_issues_repo_head_hexsha": "f21b172d4a58ec8ba25003626de02bfdda946cdc", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-07-20T03:40:52.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T21:50:18.000Z", "max_forks_repo_path": "src/site/discrete-spring2020/static/homeworks/discrete-lab-week02.tex", "max_forks_repo_name": "kapsitis/math", "max_forks_repo_head_hexsha": "f21b172d4a58ec8ba25003626de02bfdda946cdc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.3632075472, "max_line_length": 95, "alphanum_fraction": 0.6835614378, "num_tokens": 2110, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056295505783, "lm_q2_score": 0.8128673201042492, "lm_q1q2_score": 0.560314019825551}}
{"text": "\\documentclass[11 pt]{scrartcl}\n\\usepackage[header, margin, koma]{tyler}\n\n\\newcommand{\\hwtitle}{Discussion 3A Recap}\n\n\\pagestyle{fancy}\n\\fancyhf{}\n\\fancyhead[l]{\\hwtitle{}}\n\\fancyhead[r]{Tyler Zhu}\n\\cfoot{\\thepage}\n\n\\begin{document} \n\\title{\\Large \\hwtitle{}}\n\\author{\\large Tyler Zhu}\n\\date{\\large\\today}\n\n\\maketitle \n\n\\section{Modular Arithmetic}\n\\begin{definition}\n    We say that $a$ is \\emph{congruent} to $b \\pmod{m}$ if \n    \\[ a\\equiv b \\pmod{m} \\iff m | a-b \\iff a-b = m\\cdot k, \\quad k\\in\\ZZ .\\] \n\\end{definition}\n\nOne interpretation is that mod $m$ gets the remainder when we divide by $m$, but the mod operator is more powerful than just that. For example, we have that \n\\begin{align*}\n    \\dots \\equiv -9 \\equiv -4 \\equiv \\boxed{1} \\equiv 6 \\equiv 11 \\equiv \\dots \\pmod{5} \\\\ \n    \\dots \\equiv -8 \\equiv -3 \\equiv \\boxed{2} \\equiv 7 \\equiv 12 \\equiv \\dots \\pmod{5} \\\\ \n    \\dots \\equiv -7 \\equiv -2 \\equiv \\boxed{3} \\equiv 8 \\equiv 13 \\equiv \\dots \\pmod{5} \n\\end{align*}\n\nGiven that so many numbers are equivalent to each other when working over a certain modulus, it helps us to agree upon a set of \\emph{representatives} for each equivalence class of numbers. In the above example, the representatives for each class has been boxed. In general, the representatives are $\\{0, 1, \\dots, m-1\\}$. \n\nTo drive this point home, compare to how we say that all of the following fractions are the same, but we use the boxed one as their representative (namely $\\frac 13$): \n\\[ \\dots = \\dfrac{-3}{-9} = \\dfrac{-2}{-6} = \\dfrac{-1}{-3} = \\boxed{\\dfrac{1}{3}} = \\dfrac{2}{6} = \\dfrac{3}{9} = \\dots \\]\n\nDoing math over a modulus is similar to normal arithmetic; addition, subtraction, multiplication, and exponentiation all hold. \n\nDivision is tricky however. Being able to divide by $a$ is equivalent to having an \\emph{inverse}, i.e. a number $x$ which makes $ax \\equiv 1 \\pmod{m}$. The existence of an inverse is equivalent to having a solution $(x,k)$ over integers to the equation $ax = 1 + m\\cdot k$. We saw in the notes (and you can reason why) that this happens only when $\\gcd(a,m) = 1$, and is unique mod $m$. \n\nIn general, it's a good question to ask when we have solutions to the equation \n\\[ ax + by = c\\] \nfor $a,b,d \\in \\ZZ$. If we let $d = \\gcd(a,b)$, then solutions exist only when $d | c$ (take this equation mod $d$ if you don't believe me). Additionally, mod $a$ or $b$ these solutions are unique. \n\nWe even have an algorithm called the \\emph{Extended Euclidean Algorithm} which helps us find solutions to $ax+by = d$, from which we can get solutions to any equation in the above form (the Euclidean Algorithm lets us compute $\\gcd(a,b)$ efficiently; you can imagine why extending it lets us recover the above solutons). \n\nOne common followup is finding the number of solutions to an equation like $10x \\equiv 25 \\pmod{30}$. Think about this (you may find the above context helpful).\n\n\\section{Tips}\n\\itemnum\n    \\ii ``Find the last digit'' $\\rar$ Take mod $10$. ``Last two digits'' $\\rar$ Take mod $100$, and so on. \n    \\ii It's useful to write a number $n$ as $n = \\sum_{i=0}^k d_i 10^i = \\overline{d_kd_{k-1}\\dots d_1d_0}$ when taking mods. \n    \\ii Recall that $a\\equiv b \\implies a^n \\equiv b^n \\pmod{n}$; in other words, reduce the bases of your powers. \n    \\ii It can be helpful to work with different definitions of modular arithmetic. 
One common followup is finding the number of solutions to an equation like $10x \\equiv 25 \\pmod{30}$. Think about this (you may find the above context helpful).\n\n\\section{Tips}\n\\itemnum\n    \\ii ``Find the last digit'' $\\rar$ Take mod $10$. ``Last two digits'' $\\rar$ Take mod $100$, and so on. \n    \\ii It's useful to write a number $n$ as $n = \\sum_{i=0}^k d_i 10^i = \\overline{d_kd_{k-1}\\dots d_1d_0}$ when taking mods. \n    \\ii Recall that $a\\equiv b \\pmod{m} \\implies a^n \\equiv b^n \\pmod{m}$; in other words, reduce the bases of your powers. \n    \\ii It can be helpful to work with different definitions of modular arithmetic. For example, showing that $3x\\equiv 10 \\pmod{21}$ has no solutions is easiest by demonstrating that $3x = 10 + 21k$ reduces to $0\\equiv 1 \\pmod{3}$.\n    \\ii While the EEA is great, finding inverses will often be faster/easier by writing out the numbers $\\equiv 1 \\pmod{m}$. For example, if I'm finding the inverse of $9$ mod $11$, it's easier to look for a multiple of $9$ among $1, 12, 23, 34, 45, \\dots$ and then divide than to do the EEA. \n\\itemend\n\n\\section{Extra Practice}\n\\begin{exercise}\n    Prove that for any number $n$, the alternating sum of the digits of $n$, i.e. $d_0 - d_1 + d_2 - \\dots$, is divisible by 11 if and only if $n$ is divisible by 11. \n\\end{exercise}\n\n\\end{document}\n", "meta": {"hexsha": "155362880ac3deabf3f16c0a9aafc897d2209f85", "size": 4041, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CS70/recap3a/recap3a.tex", "max_stars_repo_name": "cbugwadia32/course-notes", "max_stars_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-07-20T19:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-07T01:19:16.000Z", "max_issues_repo_path": "CS70/recap3a/recap3a.tex", "max_issues_repo_name": "cbugwadia32/course-notes", "max_issues_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CS70/recap3a/recap3a.tex", "max_forks_repo_name": "cbugwadia32/course-notes", "max_forks_repo_head_hexsha": "cc269a2606bab22a5c9b8f1af23f360fa291c583", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-13T08:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-07T17:21:17.000Z", "avg_line_length": 63.140625, "max_line_length": 388, "alphanum_fraction": 0.6973521406, "num_tokens": 1252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8128673155708975, "lm_q1q2_score": 0.5603140063243273}}
{"text": "\\subsection{Soft constraint specifications: Fizz-Buzz}\nWe start with an extremely simple example, highlighting how easy our system makes it to encode and check data integrity constraints. The Fizz-Buzz programming exercise is based on a children's game and frequently found in programming interviews. The synthetic dataset we generated obeys the following rules: for each record \\(x, y\\), $x$ is a number between 0 and 1000, and $y$ is ``Fizz'' if \\(x \\mod 3 = 0\\), ``Buzz'' if \\(x \\mod 5 = 0\\), ``FizzBuzz'' if \\(x \\mod 15 = 0\\), and \\(x\\) otherwise. In our synthetic dataset we introduced three outliers: \\texttt{(25, \"Fizz\")}, \\texttt{(28, \"Woof!\")}, \\texttt{(30, \"Buzz\")}. Each demonstrates a different error, namely swapping ``Fizz'' and ``Buzz'', producing entirely incorrect output, and failing to recognize that a number is divisible by both $3$ and $5$.\n\nA traditional way of checking that all tuples verify the production rule outlined above is to encode this rule itself as a database integrity constraint. This requires encoding the full complexity of the exercise in the rule, and would require manual adjustments if the rules were to change. Instead, a user might want to specify the bare minimum for the system to infer the rules; in this case, it is sufficient to add one extraction rule, mapping integers to two booleans denoting whether they are divisible by $3$ or $5$. In \\dBoost/, this is expressed as:\n\n\\begin{minted}{python3}\n@rule\ndef divisibleBy3or5(x: int) -> (\"div 3\", \"div 5\"):\n  return (x % 3 == 0, x % 5 == 0)\n\\end{minted}\n\nRunning the discrete statistical analyzer on the synthetic datasets suggests that the two columns are correlated, and using the histogram model flags the aforementioned outliers. The output of the program for the \\texttt{(30, \"Buzz\")} line, for example, is similar to:\n\n\\begin{lstnobreak}[gobble=2]\n   $30$ $Buzz$\n   > Values ($30$, '$Buzz$') do not\n     match features ('$div 3$', '$strip numbers$')\n   \u2022 histogram for ('div 3', 'strip numbers'):\n     [532] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 (False, '<num>')\n     [133] \u2588\u2588\u2588\u2588\u2588 (False, 'Buzz')\n     [  1] \u258c (False, 'Fizz')\n     [  1] \u258c (False, 'Woof!')\n     [  1] /\u258c/ $(True, 'Buzz')$\n     [267] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 (True, 'Fizz')\n     [ 66] \u2588\u2588 (True, 'FizzBuzz')\n\\end{lstnobreak}\n\nUsing the partitioned histogram model produces similar output:\n\n\\begin{lstnobreak}[gobble=2]\n   $30$ $Buzz$\n   > Values ($30$, '$Buzz$') do not\n     match features ('$div 3$', '$strip numbers$')\n   \u2022 histogram for ('strip numbers',) if 'div 3' = True:\n     [  1] /\u258c/ $('Buzz',)$\n     [267] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 ('Fizz',)\n     [ 66] \u2588\u2588\u2588\u2588 ('FizzBuzz',)\n   ... 
if 'div 3' = False:\n     [532] \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 ('<num>',)\n     [133] \u2588\u2588\u2588\u2588\u2588 ('Buzz',)\n     [  1] \u258c ('Fizz',)\n     [  1] \u258c ('Woof!',)\n\\end{lstnobreak}\n", "meta": {"hexsha": "1c9a006222ef3c002e1207a8e89a048ac448a600", "size": 2775, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "raha/tools/dBoost/paper/icde/fizzbuzz-evaluation.tex", "max_stars_repo_name": "adrianlut/raha", "max_stars_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 30, "max_stars_repo_stars_event_min_datetime": "2019-07-05T12:03:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T07:44:58.000Z", "max_issues_repo_path": "raha/tools/dBoost/paper/icde/fizzbuzz-evaluation.tex", "max_issues_repo_name": "adrianlut/raha", "max_issues_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-10T12:59:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-10T12:59:43.000Z", "max_forks_repo_path": "raha/tools/dBoost/paper/icde/fizzbuzz-evaluation.tex", "max_forks_repo_name": "adrianlut/raha", "max_forks_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2019-04-21T12:28:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-28T06:42:36.000Z", "avg_line_length": 63.0681818182, "max_line_length": 807, "alphanum_fraction": 0.6436036036, "num_tokens": 791, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333246118695627, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.5602705522661823}}
{"text": "\\documentclass[a4paper,12pt]{article}\n\n\\usepackage{amsmath,amssymb,amsthm,tikz}\n\\usetikzlibrary{calc,arrows.meta}\n\\usepackage[margin=20mm]{geometry}\n\\usepackage{hyperref}\n\n\\setlength{\\parindent}{0pt}\n\\setlength{\\columnsep}{1cm}\n\n\\begin{document}\n\n\\twocolumn\n\n\\thispagestyle{empty}\n\n\\begin{center}\n{\\Large Assignment 12}\\\\\n{\\Large Published on 2020-12-06,}\\\\\n{\\em Estimated Time: 30 minutes,}\\\\\n{\\em Max.grade 10\\textperthousand} \n\\end{center}\n\n\n\\section{Strongly Connected Components}\n\nOne of the multiple practical applications of a DFS traversal of a directed graph\nis finding strongly connected components (strongly connected graphs are defined \nin (Goodrich2011, p.626)), the relevant algorithm is \nknown as Kosaraju's algorithm. \nSee \\url{https://bit.ly/3lI20ec}, \\url{https://bit.ly/3mNU2la}. \n\n{\\bf Definition.} A subset of vertices in a directed graph $S \\subseteq G.V$ makes a strongly \nconnected component, iff for any two distinct vertices $u,v$ there is \na path $u \\leadsto v$ (one or more  and also another path $v \\leadsto u$ that goes \nback from $v$ to $u$).\n\nIf you can travel only in one direction (say, from $u$ to $v$), but cannot return, \nthen $u,v$ should be in different strongly connected components.\n(Same thing, if $u$ and $v$ are mutually unreachable.) Moreover, every \nvertex is strongly connected to itself \\textendash{} so even in the worst case\na graph with $n$ vertices would have at most $n$ strongly connected components \n(containing one vertex each). \n\nFigure~\\ref{fig:strongly-connected-transposed} shows an example of a graph with $n=5$ vertices\nhaving $3$ strongly connected components. Next to that graph is the {\\em transposed graph} $G^T$\nwhere all the edges are reversed.\n\n\\begin{figure}[!htb]\n\\center{\\includegraphics[width=3in]{assignment12-strongly-connected/strongly-connected-transposed.png}}\n\\caption{\\label{fig:strongly-connected-transposed} Graph $G$ and $G^T$}\n\\end{figure}\n\n\\subsection{Kosaraju's algorithm} \n\nThere is a way to find strongly connected components in an \narbitrary graph by \nrunning DFS twice (i.e. it works in linear time $O(n+m)$). \n\n\n$$\\begin{array}{rl}\n  & \\text{\\sc Strongly\\textunderscore{}Connected}(G)\\\\\n  & \\textcolor{teal}{\\text{\\em (compute all finishing times $u.f$)}}\\\\\n1 & \\text{call}\\;\\text{\\sc DFS}(G)\\\\\n  & \\textcolor{teal}{\\text{\\em ($G^T$ is transposed $G$, all edges reversed)}}\\\\\n2 & \\text{compute}\\;G^{T}\\\\\n  & \\textcolor{teal}{\\text{\\em (visit vertices in decreasing $u.f$ order)}}\\\\\n3 & \\text{call}\\;\\text{\\sc DFS}(G^T)\\\\\n4 & \\text{\\bf for each}\\;\\text{tree $T$ in the forest}\\;\\text{\\sc DFS}(G^T)\\\\\n5 & \\hspace{0.5cm} \\text{Output $T$ as a component}\\\\\n\\end{array}$$\n\nTo see how this works, we can run it on the example graph shown earlier. \nAfter the DFS on graph $G$ is run, we get the finishing times \nfor the vertices $0,1,2,3,4$ (all shown in red on the left side \nof Figure~\\ref{fig:strongly-connected-dfs}.). \nAfter that we replace $G$ by $G^T$ (to the right side of \nthe same figure), and assign priorities in the decreasing sequence\nof $u.f$ (the finishing times when running $\\text{\\sc DFS}(G)$). 
\n\n\\begin{figure}[!htb]\n\\center{\\includegraphics[width=3in]{assignment12-strongly-connected/strongly-connected-dfs.png}}\n\\caption{\\label{fig:strongly-connected-dfs} DFS on $G$ and $G^T$}\n\\end{figure}\n\n\nTo make this reverse order obvious, we assign new priorities to \nthe vertices in $G^T$. The new priorities in $G^T$ are the following:\n\n\\begin{itemize}\n\\item Vertex {\\tt \"0\"} has priority $11 - 10 = 1$.\n\\item Vertex {\\tt \"1\"} has priority $11 - 4 = 7$. \n\\item Vertex {\\tt \"2\"} has priority $11 - 5 = 6$. \n\\item Vertex {\\tt \"3\"} has priority $11 - 9 = 2$. \n\\item Vertex {\\tt \"4\"} has priority $11 - 8 = 3$. \n\\end{itemize}\n\nNow run $\\text{\\sc DFS}(G^T)$. It turns out that the DFS algorithm starts\nin the vertex {\\tt \"0\"} once again (since it was finished last in $\\text{\\sc DFS}(G)$). \nBut unlike the DFS algorithm in $G$ itself (it produced just one DFS tree), \nwe get a DFS forest with 3 components (tree/discovery edges shown bold and black in\nFigure~\\ref{fig:strongly-connected-dfs}). \n\n\\begin{itemize}\n\\item $\\{ \\mathtt{\"0\"}, \\mathtt{\"1\"}, \\mathtt{\"2\"} \\}$ (DFS tree has root $\\mathtt{\"0\"}$).\n\\item $\\{ \\mathtt{\"3\"} \\}$ (DFS tree has root $\\mathtt{\"3\"}$).\n\\item $\\{ \\mathtt{\"4\"} \\}$ (DFS tree has root $\\mathtt{\"4\"}$).\n\\end{itemize}\n\nThey represent the strongly connected components in $G$ (they are also \nstrongly connected in $G^T$). \n\n\n\n\\section{Problem}\n\nWe start with the graph shown in Figure~\\ref{fig:problem-graph}.\n\n\\begin{figure}[!htb]\n\\center{\\includegraphics[width=3in]{assignment12-strongly-connected/problem-graph.png}}\n\\caption{\\label{fig:problem-graph} Graph diagram}\n\\end{figure}\n\n\\vspace{10pt}\n{\\bf (A)} Compute the following three numbers from $a,b,c$ (the last \ndigits of your Student ID): \n$$\\left\\{ \\begin{array}{l}\nU = 2 \\cdot ((a+b)\\;\\text{mod}\\;5)\\\\\nV = 2 \\cdot ((b+c)\\;\\text{mod}\\;5)+1\\\\\nW = 2 \\cdot ((c+a)\\;\\text{mod}\\;5)+1\\\\\n\\end{array} \\right.$$\n\nBy $(x\\;\\text{mod}\\;y)$ we denote the remainder when $x$ is divided by $y$. \nAdd to the original graph two new directed edges $(U,V)$ and $(U,W)$. \n(For example, if $U = 2$, $V = 7$, $W = 1$ then add two outgoing edges from \n{\\tt \"2\"} to {\\tt \"7\"} and {\\tt \"1\"}, respectively. If an edge exists, do \nnot add anything.)\n\nDraw the new graph; show the newly added edges in bold or colored differently.\n\n\\vspace{10pt}\n{\\bf (B)} Run the DFS traversal algorithm on the graph $G$. \nMark each vertex \nwith the pair of numbers {\\tt d/f}, where the first number {\\tt d} is the \ndiscovery time, and the second number {\\tt f} is the finishing time. \n\n\\vspace{10pt}\n{\\bf (C)} Draw the transposed directed graph (same vertices, but each arrow points\nin the opposite direction). \nRun the DFS traversal algorithm on $G^T$. Make sure that the DFS \nouter loop visits the vertices in the reverse order by $u.f$ \n(the finishing time for the DFS algorithm in step (B)). \nIn this case you do not produce the discovery/finishing times once again, \njust draw the discovery edges used by the DFS on $G^T$ \\textendash{}\nyou can highlight them (show them in bold or use a different color). \n\n\\vspace{10pt}\n{\\bf (D)} List all the strongly connected components (they are \nthe separate pieces in the forest obtained by running DFS \non $G^T$). 
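\n\nFor self-checking, here is a compact Python sketch of Kosaraju's algorithm (an illustration, with the graph given as a dictionary of adjacency lists; it is not required for the submission):\n\\begin{verbatim}\ndef kosaraju(graph):\n    order, seen = [], set()\n\n    def dfs(u):                      # first pass on G\n        seen.add(u)\n        for v in graph[u]:\n            if v not in seen:\n                dfs(v)\n        order.append(u)              # record finishing order\n\n    for u in graph:\n        if u not in seen:\n            dfs(u)\n\n    gt = {u: [] for u in graph}      # transposed graph G^T\n    for u in graph:\n        for v in graph[u]:\n            gt[v].append(u)\n\n    comps, seen = [], set()\n    for u in reversed(order):        # decreasing u.f\n        if u not in seen:\n            stack, comp = [u], []\n            seen.add(u)\n            while stack:             # second pass on G^T\n                w = stack.pop()\n                comp.append(w)\n                for v in gt[w]:\n                    if v not in seen:\n                        seen.add(v)\n                        stack.append(v)\n            comps.append(comp)\n    return comps\n\\end{verbatim}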
\n\n\\end{document}\n\n", "meta": {"hexsha": "08811112c7f9225a1ddaa2046f210c42e51d5831", "size": 6301, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "data-structures-fall2020/assignments/assignment12-strongly-connected.tex", "max_stars_repo_name": "kapsitis/linen-tracer-682", "max_stars_repo_head_hexsha": "b17f5c8b16e1d83032e0d9679a7219c0cbdba0dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "data-structures-fall2020/assignments/assignment12-strongly-connected.tex", "max_issues_repo_name": "kapsitis/linen-tracer-682", "max_issues_repo_head_hexsha": "b17f5c8b16e1d83032e0d9679a7219c0cbdba0dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2020-07-17T17:42:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-13T23:55:14.000Z", "max_forks_repo_path": "src/site/data-structures-fall2020/assignments/assignment12-strongly-connected.tex", "max_forks_repo_name": "kapsitis/math", "max_forks_repo_head_hexsha": "f21b172d4a58ec8ba25003626de02bfdda946cdc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4207317073, "max_line_length": 103, "alphanum_fraction": 0.7017933661, "num_tokens": 1921, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723317123102956, "lm_q2_score": 0.8333245891029456, "lm_q1q2_score": 0.5602705479018569}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amssymb}\n\\usepackage{mathtools}\n\\usepackage{csquotes}\n\\usepackage[pdfborder={0 0 0}]{hyperref}\n\n\\newcommand\\E{\\mathbb{E}}\n\\newcommand\\N{\\mathcal{N}}\n\\DeclareMathOperator\\cov{cov}\n\\DeclareMathOperator\\var{var}\n\n\\newcommand\\given{\n  \\nonscript\\:\n  \\vert\n  \\nonscript\\:\n  \\mathopen{}\n  \\allowbreak\n}\n\n\\title{Mathematical notes on Bishop's book on Pattern Recognition and Machine Learning}\n\\author{Lukas Prokop}\n\\date{October 2016 to February 2016}\n\n\\begin{document}\n\\maketitle\n\\tableofcontents\n\\clearpage\n\n\\section{Preliminaries}\n\n\\section{Exercise 1.5}\n\n\\begin{description}\n  \\item[Given] $\\var[f] = \\E[(f(x) - \\E[f(x)])^2]$\n  \\item[Find] $\\var[f] = \\E[f(x)^2] - \\E[f(x)]^2$\n\\end{description}\n\nFor the expected value $\\E$, random variables $a$ and $b$ and constant $c$ it holds that\n\\begin{align}\n  \\E[a+b] &= \\E[a] + \\E[b]       \\label{eq:e-add} \\\\\n  \\E[c \\cdot a] &= c \\cdot \\E[a] \\label{eq:e-multi} \\\\\n  \\E[c] &= c                     \\label{eq:e-const}\n\\end{align}\n\nFollowingly, we can prove the desired equation:\n\\begin{align*}\n  \\var[f] &= \\E[(f(x) - \\E[f(x)])^2] \\\\\n          &= \\E[f(x)^2 - 2f(x) \\E[f(x)] + \\E[f(x)]^2] \\\\\n          &\\overset{\\eqref{eq:e-add}}{=} \\E[f(x)^2] - \\E[2 \\cdot f(x) \\cdot \\E[f(x)]] + \\E[f(x)]^2 \\\\\n          &\\overset{\\eqref{eq:e-multi}}{=} \\E[f(x)^2] - \\E[2] \\cdot \\E[f(x)] \\cdot \\E[\\E[f(x)]] + \\E[f(x)]^2 \\\\\n          &\\overset{\\eqref{eq:e-const}}{=} \\E[f(x)^2] - 2 \\cdot \\E[f(x)] \\cdot \\E[f(x)] + \\E[f(x)]^2 \\\\\n          &= \\E[f(x)^2] - \\E[f(x)]^2\n\\end{align*}\n\n\\section{$\\E[ab] = \\E[a] \\cdot \\E[b]$ if $a$ and $b$ are independent}\n%\n\\begin{description}\n  \\item[Given] two independent random variables $a$ and $b$\n  \\item[Find] $\\E[a \\cdot b] = \\E[a] \\cdot \\E[b]$\n\\end{description}\n\nLet $p(x)$ be the probability density function on $\\mathbb{R}$\nand $x$ be the $\\mathbb{R}$-valued random variable (for $p$).\nThe expectation $\\E[x]$ is defined by\n\\[ \\E[x] = \\int_{\\mathbb R} p(x) \\cdot x \\, dx \\]\nWe will prove the formula under this setting.\nThe case that the probability distribution is defined on ${\\mathbb R}^d$\ncan be discussed analogously. 
Furthermore for two independent random variables $a$ and $b$, it holds that\n\\[ p(a,b) = p(a) \\cdot p(b) \\]\n\n\\begin{align*}\n  \\E[x] &= \\int p(x) \\cdot x \\, dx \\\\\n  \\E[a \\cdot b]\n    &= \\int \\int p(a,b) \\cdot ab \\; da \\, db \\\\\n    &= \\int \\int \\left( p(a) \\cdot p(b) \\cdot a \\cdot b \\right)   \\: da \\, db \\\\\n    &= \\int \\int \\left( p(a) \\cdot a \\right)  \\cdot \\left(p(b) \\cdot b\\right)  \\: da \\, db \\\\\n%    &= \\int \\left( \\left( \\int p(a) \\cdot a \\: da\\right) \\cdot p(b) \\cdot b \\right)   \\: db \\\\\n    &= \\left(\\int p(a) \\cdot a \\: da\\right) \\left(\\int p(b) \\cdot b  \\: db\\right) \\\\\n    &= \\E[a] \\cdot \\E[b]\n\\end{align*}\n\n\\section{Exercise 1.6}\n\n\\begin{quote}\n  \\enquote{For two random variables $x$ and $y$, the \\emph{covariance} is defined by\n  \\begin{align*}\n    \\cov{[x,y]} &= \\E_{x,y}[\\{x - \\E_{x}[x]\\}\\{y - \\E_{y}[y]\\}] \\\\\n                &= \\E_{x,y}[xy] - \\E_{x}[x] \\E_{y}[y]\n  \\end{align*}\n  which expresses the extent to which $x$ and $y$ vary together.\n  If $x$ and $y$ are independent, then their covariance vanishes.} \\\\\n  ---page 20\n\\end{quote}\n\n\\begin{description}\n  \\item[Given] two independent random variables $x$ and $y$ with covariance defined by $\\cov[x,y] = \\E_{x,y}[\\{x - \\E_{x}[x]\\}\\{y - \\E_{y}[y]\\}]$\n  \\item[Find] $\\cov[x,y] = 0$\n\\end{description}\n\nIf two random variables $a$ and $b$ are independent, then it holds that\n\\begin{align}\n  \\E[a \\cdot b] &= \\E[a] \\cdot \\E[b]       \\label{eq:e-2-multi}\n\\end{align}\n\n\\begin{align*}\n  \\cov{[x,y]}\n    &= \\E_{x,y}[\\{x - \\E_{x}[x]\\}\\{y - \\E_{y}[y]\\}] \\\\\n    &= \\E_{x,y}[xy - y\\E_x[x] - x\\E_y[y] + \\E_x[x] \\cdot \\E_y[y]] \\\\\n    &\\overset{\\eqref{eq:e-add}}{=} \\E_{x,y}[xy] + \\E_{x,y}[-y \\cdot \\E_x[x]] + \\E_{x,y}[-x \\cdot \\E_y[y]] + \\E_{x,y}[\\E_x[x] \\cdot \\E_y[y]] \\\\\n    &\\overset{\\eqref{eq:e-multi}}{=} \\E_{x,y}[xy] - \\E_x[x] \\cdot \\E_{y}[y] - \\E_{y}[y] \\cdot \\E_{x}[x] + \\E_{x,y}[\\E_x[x] \\cdot \\E_y[y]] \\\\\n    &\\overset{\\eqref{eq:e-2-multi}}{=} \\E_{x,y}[x] \\cdot \\E_{x,y}[y] - 2 \\cdot \\E_x[x] \\cdot \\E_{y}[y] + \\E_{x,y}[\\E_x[x] \\cdot \\E_y[y]] \\\\\n    &\\overset{\\eqref{eq:e-const}}{=} \\E_{x,y}[x] \\cdot \\E_{x,y}[y] - 2 \\cdot \\E_x[x] \\cdot \\E_{y}[y] + \\E_x[x] \\cdot \\E_y[y] \\\\\n    &= 0\n\\end{align*}\n\n\\section{Gaussian interpretation of curve fitting}\n\n\\begin{description}\n  \\item[Given] $p(t \\given x,w,\\beta) = \\N(t \\given y(x,w), \\beta^{-1})$ and\n    \\[ \\N(x \\given \\mu, \\sigma^2) = (2 \\pi \\sigma^2)^{-\\frac12} \\exp\\left(-2^{-1} \\sigma^{-2} (x - \\mu)^2\\right) \\]\n  \\item[Find]\n    \\[ \\ln{p(t \\given x,w,\\beta)} = -\\frac{\\beta}{2} \\cdot \\sum_{n=1}^N \\left(y(x_n,w) - t_n\\right)^2 + \\frac{N}{2} \\ln\\beta - \\frac{N}{2} \\ln(2\\pi) \\]\n\\end{description}\n\nRemember the basic laws for logarithms:\n\\begin{align}\n  \\ln(a \\cdot b) &= \\ln{a} + \\ln{b}              \\label{eq:ln-multi} \\\\\n  \\ln\\left(\\frac{a}{b}\\right) &= \\ln{a} - \\ln{b} \\label{eq:ln-div} \\\\\n  \\ln(\\exp(a)) &= a                              \\label{eq:ln-inv} \\\\\n  \\ln(a^b) &= b \\cdot \\ln(a)                     \\label{eq:ln-power}\n\\end{align}\n\n\\begin{align}\n  \\ln{p(t \\given x,w,\\beta)} &= \\ln\\left[(2 \\pi \\beta^{-1})^{-\\frac12} \\exp\\left(-2^{-1} \\beta (t - y(x,w))^2\\right)\\right] \\\\\n      &\\overset{\\eqref{eq:ln-multi}}{=} \\ln(2 \\pi \\beta^{-1})^{-\\frac12} + \\ln\\exp\\left(-2^{-1} \\beta (t - y(x,w))^2\\right) \\\\\n      &\\overset{\\eqref{eq:ln-inv}}{=} \\ln\\left(\\frac{\\beta}{2\\pi}\\right)^{\\frac12} - \\frac{\\beta}{2}\\left(t - y(x,w)\\right)^2 \\\\\n      &\\overset{\\eqref{eq:ln-power}}{=} \\frac12 \\cdot \\ln\\left(\\frac{\\beta}{2\\pi}\\right) - \\frac{\\beta}{2}\\left(t - y(x,w)\\right)^2 \\\\\n      &\\overset{\\eqref{eq:ln-div}}{=} \\frac12 \\cdot \\ln \\beta - \\frac12 \\cdot \\ln(2\\pi) - \\frac{\\beta}{2}\\left(t - y(x,w)\\right)^2 \\\\\n  \\sum_{n=1}^N \\ln{p(t_n \\given x_n,w,\\beta)} &= \\sum_{n=1}^N \\left(\\frac12 \\ln{\\beta} - \\frac12 \\ln{2\\pi} - \\frac\\beta2 (t_n - y(x_n, w))^2\\right) \\\\\n      &= \\frac{N}{2} \\ln\\beta - \\frac{N}{2} \\ln{2\\pi} - \\frac{\\beta}{2} \\sum_{n=1}^N \\left(y(x_n, w) - t_n\\right)^2\n\\end{align}\n\n\\section{Exercise 1.11}\n\n\\begin{description}\n  \\item[Given] $\\ln{p(x \\given \\mu, \\sigma^2)} = -\\frac1{2\\sigma^2} \\sum_{n=1}^N (x_n - \\mu)^2 - \\frac{N}{2} \\ln{\\sigma^2} - \\frac{N}{2} \\ln(2\\pi)$\n  \\item[Find] $\\mu_{\\text{ML}} = \\frac{1}{N} \\cdot \\sum_{n=1}^N x_n$ for the maximizing $\\mu$ and \\\\\n    $\\sigma^2_{\\text{ML}} = \\frac{1}{N}\\cdot \\sum_{n=1}^N (x_n - \\mu_{\\text{ML}})^2$ for the maximizing $\\sigma^2$\n\\end{description}\n\nSo we want to determine the 2 parameters of a Gaussian distribution, namely $\\mu$ and $\\sigma^2$, in the maximum likelihood case.\nWe begin with $\\mu$:\n\n\\begin{enumerate}\n  \\item Differentiate $\\ln{p(x \\given \\mu, \\sigma^2)}$ with respect to $\\mu$\n    \\begin{align*}\n      \\frac\\partial{\\partial \\mu} \\ln{p(x \\given \\mu, \\sigma^2)}\n      &= \\frac\\partial{\\partial \\mu} \\left(-\\frac{1}{2\\sigma^2} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 - \\frac{N}{2} \\ln{\\sigma^2} - \\frac{N}{2} \\ln(2\\pi)\\right) \\\\\n      &= \\frac\\partial{\\partial \\mu} \\left(-\\frac{1}{2\\sigma^2} \\cdot \\sum_{n=1}^N (x_n^2 - 2 x_n \\mu + \\mu^2) - \\frac{N}{2} \\ln{\\sigma^2} - \\frac{N}{2} \\ln(2\\pi)\\right) \\\\\n      &= -\\frac{1}{2\\sigma^2} \\cdot \\sum_{n=1}^N (-2x_n + 2\\mu) \\\\\n      &= -\\frac1{\\sigma^2} \\cdot \\sum_{n=1}^N (\\mu - x_n)\n    \\end{align*}\n  \\item Set the result to zero\n    \\[ 0 = -\\frac{1}{\\sigma^2} \\cdot \\sum_{n=1}^N (\\mu - x_n) = \\sum_{n=1}^N (\\mu - x_n) = N \\cdot \\mu - \\sum_{n=1}^N x_n \\]\n    \\[ \\implies \\mu_{\\text{ML}} = \\frac1N \\cdot \\sum_{n=1}^N x_n \\qquad \\text{commonly called \\enquote{sample mean}} \\]\n\\end{enumerate}\n\nWe continue with $\\sigma^2$ and use the same approach:\n\n\\begin{enumerate}\n  \\item Differentiate $\\ln{p(x \\given \\mu, \\sigma^2)}$ with respect to $\\sigma^2$\n    \\begin{align*}\n      \\frac{\\partial}{\\partial \\sigma^2} \\ln{p(x \\given \\mu, \\sigma^2)}\n      &= \\frac{\\partial}{\\partial \\sigma^2} \\left(-\\frac{1}{2\\sigma^2} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 - \\frac{N}2 \\ln{\\sigma^2} - \\frac{N}2 \\ln(2\\pi)\\right) \\\\\n      &= \\frac{1}{2\\sigma^4} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 - \\frac{N}{2} \\cdot \\frac{1}{\\sigma^2} \\\\\n      &= \\frac{1}{2\\sigma^2} \\left(\\frac{1}{\\sigma^2} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 - N\\right)\n    \\end{align*}\n  \\item Set the result to zero\n    \\begin{align*}\n      0 &= \\frac{1}{2\\sigma^2} \\left(\\frac{1}{\\sigma^2} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 - N\\right) \\\\\n      N \\cdot \\sigma^2 &= \\sum_{n=1}^N (x_n - \\mu)^2 \\\\\n      \\sigma^2_{\\text{ML}} &= \\frac{1}{N} \\cdot \\sum_{n=1}^N (x_n - \\mu)^2 \\qquad \\text{commonly called \\enquote{sample variance}}\n    \\end{align*}\n\\end{enumerate}
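\n\nAs a quick numerical sanity check (a sketch, not part of Bishop's exercises), the closed-form estimates can be compared against a brute-force grid maximization of the log-likelihood:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nx = rng.normal(2.0, 1.5, size=1000)\n\nmu_ml = x.mean()                  # (1/N) sum x_n\ns2_ml = ((x - mu_ml)**2).mean()   # (1/N) sum (x_n - mu_ML)^2\n\ndef loglik(mu, s2):\n    return (-0.5 * np.sum((x - mu)**2) / s2\n            - 0.5 * len(x) * np.log(2 * np.pi * s2))\n\nmus = np.linspace(1.5, 2.5, 201)\ns2s = np.linspace(1.0, 4.0, 301)\nll = np.array([[loglik(m, s) for s in s2s] for m in mus])\ni, j = np.unravel_index(ll.argmax(), ll.shape)\nprint(mu_ml, s2_ml)   # agrees with mus[i], s2s[j] up to grid step\nprint(mus[i], s2s[j])\n\\end{verbatim}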
\n\n\\section{Precision parameter $\\beta$ in the maximum likelihood case}\n\n\\begin{description}\n  \\item[Given] $\\ln{p(t \\given x,w,\\beta)} = -\\frac{\\beta}{2} \\cdot \\sum_{n=1}^N \\left(y(x_n,w) - t_n\\right)^2 + \\frac{N}2 \\ln{\\beta} - \\frac{N}{2} \\ln(2\\pi)$\n  \\item[Find] $\\frac{1}{\\beta_{\\text{ML}}} = \\frac{1}{N} \\cdot \\sum_{n=1}^N (y(x_n, w_{\\text{ML}}) - t_n)^2$ by maximizing with respect to $\\beta$\n\\end{description}\n\n\\begin{enumerate}\n  \\item Differentiate $\\ln{p(t \\given x,w,\\beta)}$ with respect to $\\beta$\n    \\begin{align*}\n      \\frac{\\partial}{\\partial \\beta} \\ln{p(t \\given x,w,\\beta)}\n      &= \\frac{\\partial}{\\partial \\beta} \\left(-\\frac{\\beta}{2} \\sum_{n=1}^N (y(x_n,w) - t_n)^2 + \\frac{N}{2} \\ln\\beta - \\frac{N}{2} \\ln(2\\pi)\\right) \\\\\n      &= -\\frac12 \\cdot \\sum_{n=1}^N \\left(y(x_n,w) - t_n\\right)^2 + \\frac{N}{2} \\cdot \\frac1\\beta\n    \\end{align*}\n  \\item Set the result to zero\n    \\begin{align*}\n      0 &= -\\frac{1}{2} \\cdot \\sum_{n=1}^N \\left(y(x_n,w) - t_n\\right)^2 + \\frac{N}{2\\beta} \\\\\n      \\frac{N}{\\beta} &= \\sum_{n=1}^N \\left(y(x_n,w) - t_n\\right)^2 \\\\\n      \\frac{1}{\\beta_{\\text{ML}}} &= \\frac{1}{N} \\cdot \\sum_{n=1}^N (y(x_n,w) - t_n)^2\n    \\end{align*}\n\\end{enumerate}\n\n\\end{document}\n", "meta": {"hexsha": "60d83d1647267fe543d3ccc491af687681da9d07", "size": 9852, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/proofs.tex", "max_stars_repo_name": "prokls/bakk_kobe", "max_stars_repo_head_hexsha": "795061ae69a5130d0d553bcaa0cdf2af9e9c9d30", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/proofs.tex", "max_issues_repo_name": "prokls/bakk_kobe", "max_issues_repo_head_hexsha": "795061ae69a5130d0d553bcaa0cdf2af9e9c9d30", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/proofs.tex", "max_forks_repo_name": "prokls/bakk_kobe", "max_forks_repo_head_hexsha": "795061ae69a5130d0d553bcaa0cdf2af9e9c9d30", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.2535211268, "max_line_length": 172, "alphanum_fraction": 0.5568412505, "num_tokens": 4128, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.672331699179286, "lm_q2_score": 0.8333245932423309, "lm_q1q2_score": 0.5602705397425037}}
{"text": "\\subsubsection{lag}\nWe design a lag controller with siso tool box.\n$$\nC(s) = \\dfrac{3.6493(s + 7.347)}{s + 3.694} \n$$\n\\begin{itemize}\n\t\\item all figures from siso toolbox\n\t\\begin{figure}[H]\n\t\t\\caption{All figures from siso toolbox}\n\t\t\\centering\n\t\t\\includegraphics[width=16cm]{../Figure/Q1/Q1_b/lag/siso_all.png}\n\t\\end{figure}\n\t\\newpage\n\t\\item root locus with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{root locus}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/rlocus.png}\n\t\\end{figure}\n\t\\item step response for closeloop system with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{step response for closeloop system}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/feedback_step.png}\n\t\\end{figure}\n\t\\item closeloop bode (magnitude) with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{closeloop bode (magnitude)}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/feedback_bode.png}\n\t\\end{figure}\n\t\\item openloop bode (magnitude) with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{openloop bode (magnitude)}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/openloop_bode.png}\n\t\\end{figure}\n\t\\item sensitivity function with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{sensitivity function}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/s_bode.png}\n\t\\end{figure}\n\t\\item complementary sensitivity function with lag controller\n\t\\begin{figure}[H]\n\t\t\\caption{complementary sensitivity function}\n\t\t\\centering\n\t\t\\includegraphics[width=12cm]{../Figure/Q1/Q1_b/lag/t_bode.png}\n\t\\end{figure}\n\\end{itemize}\nSystem is stable with controller and have a noise cancelation for frequancy after $100_{rad/{\\sec}}$ and it have effect on system about $-20_{dB}$. System have very good disturbance rejection about $1_{rad/{\\sec}}$ and have a good disturbance rejection about $10_{rad/{\\sec}}$ and disturbance have effect on system about $-5_{dB}$. \n\nIn this question we don't know what is plant and actuator and how noise or disturbance effect on system and about what frequancy so we assume that noise is about more than $100_{rad/{\\sec}}$ and disturbance is about $10_{rad/{\\sec}}$ and$-5_{dB}$ is a low effect and system work well.\n\nNo. System have staedy state error. 
", "meta": {"hexsha": "9336905a8fd754ecda708029b0ab4001c43586db", "size": 2406, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW/HW V/Report/Q1/Q1_b/lag/lag.tex", "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW/HW V/Report/Q1/Q1_b/lag/lag.tex", "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW/HW V/Report/Q1/Q1_b/lag/lag.tex", "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.7454545455, "max_line_length": 332, "alphanum_fraction": 0.7477140482, "num_tokens": 743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.6688802603710085, "lm_q2_score": 0.8376199653600371, "lm_q1q2_score": 0.5602674605219767}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\section{The Stokes Problem}\n\\label{STOKES PROBLEM} \nIn this section we discuss how to solve the Stokes problem.\nWe want to calculate the velocity\\index{velocity} field $v$ and pressure $p$\nof an incompressible fluid\\index{incompressible fluid}.\nThey are given as the solution of the Stokes problem\\index{Stokes problem}\n\\begin{equation}\\label{Stokes 1}\n-\\left(\\eta(v_{i,j}+ v_{j,i})\\right)_{,j}+p_{,i}=f_{i}-\\sigma_{ij,j}\n\\end{equation}\nwhere  $f_{i}$ defines an internal force\\index{force, internal} and\n$\\sigma_{ij}$ is an initial stress\\index{stress, initial}.\nThe viscosity $\\eta$ may weakly depend on pressure and velocity.\nIf relevant we will use the notation $\\eta(v,p)$ to express this dependency.\n\nWe assume an incompressible medium:\n\\begin{equation}\\label{Stokes 2}\n-v_{i,i}=0\n\\end{equation}\nNatural boundary conditions are taken in the form \n\\begin{equation}\\label{Stokes Boundary}\n\\left(\\eta(v_{i,j}+ v_{j,i})\\right)n_{j}-n_{i}p=s_{i} - \\alpha \\cdot n_{i} n_{j} v_{j}+\\sigma_{ij} n_{j}\n\\end{equation}\nwhich can be overwritten by constraints of the form \n\\begin{equation}\\label{Stokes Boundary0}\nv_{i}(x)=v^D_{i}(x)\n\\end{equation}\nat some locations $x$ at the boundary of the domain.\n$s_{i}$ defines a normal stress and $\\alpha\\ge 0$ the spring constant for\nrestoring normal force.\nThe index $i$ may depend on the location $x$ on the boundary.\n$v^D$ is a given function on the domain.\n\n\\subsection{Solution Method \\label{STOKES SOLVE}}\nIf we assume that $\\eta$ is independent from the velocity and pressure,\nequations~\\ref{Stokes 1} and~\\ref{Stokes 2} can be written in the block form\n\\begin{equation}\n\\left[ \\begin{array}{cc}\nA     & B^{*} \\\\\nB & 0 \\\\\n\\end{array} \\right]\n\\left[ \\begin{array}{c}\nv \\\\\np \\\\\n\\end{array} \\right]\n=\\left[ \\begin{array}{c}\nG \\\\\n0 \\\\\n\\end{array} \\right]\n\\label{STOKES}\n\\end{equation}\nwhere $A$ is a coercive, self-adjoint linear operator in a suitable Hilbert\nspace, $B$ is the $(-1) \\cdot$ divergence operator and $B^{*}$ is the adjoint\noperator (=gradient operator).\nFor more details on the mathematics see references \\cite{AAMIRBERKYAN2008,MBENZI2005}.\n\nIf $v_{0}$ and $p_{0}$ are given initial guesses for velocity and pressure we\ncalculate a correction $dv$ for the velocity by solving the first equation of\n\\eqn{STOKES}\n \\begin{equation}\\label{STOKES ITER STEP 1}\n A dv_{1} = G - A v_{0} - B^{*} p_{0}\n\\end{equation}\nWe then insert the new approximation $v_{1}=v_{0}+dv_{1}$ to calculate a\ncorrection $dp_{2}$ for the pressure and an additional correction $dv_{2}$ for\nthe velocity by solving\n \\begin{equation}\n \\begin{array}{rcl}\n B A^{-1} B^{*} dp_{2} & = & Bv_{1} \\\\\n A dv_{2} & = & B^{*} dp_{2} \n\\end{array}\n \\label{STOKES ITER STEP 2}\n \\end{equation}\nThe new velocity and pressure are then given by $v_{2}=v_{1}-dv_{2}$ and\n$p_{2}=p_{0}+dp_{2}$ 
which will fulfill the block system~\\ref{STOKES}. \nThis solution strategy is called the Uzawa scheme\\index{Uzawa scheme}.\n\nThere is a problem with this scheme: in practice we will use an iterative\nscheme to solve any problem for operator $A$.\nSo we will be unable to calculate the operator $ B A^{-1} B^{*}$ required for\n$dp_{2}$ explicitly. In fact, we need to use another iterative scheme to solve\nthe first equation in~\\ref{STOKES ITER STEP 2} where in each iteration step\nan iterative solver for $A$ is applied. Another issue is the fact that the\nviscosity $\\eta$ may depend on velocity or pressure and so we need to iterate\nover the three equations~\\ref{STOKES ITER STEP 1} and~\\ref{STOKES ITER STEP 2}. \n\nIn the following we will use the two norms\n\\begin{equation}\n\\|v\\|_{1}^2 = \\int_{\\Omega} v_{j,k}v_{j,k} \\; dx \n\\mbox{ and }\n\\|p\\|_{0}^2= \\int_{\\Omega} p^2 \\; dx.\n\\label{STOKES STOP}\n\\end{equation}\nfor velocity $v$ and pressure $p$.\nThe iteration is terminated if the stopping criterion\n \\begin{equation} \\label{STOKES STOPPING CRITERIA}\n\\max(\\|Bv_{1}\\|_{0},\\|v_{2}-v_{0}\\|_{1}) \\le \\tau \\cdot \\|v_{2}\\|_{1} \n \\end{equation}\nfor a given tolerance $0<\\tau<1$ is met.\nNotice that because of the first equation of~\\ref{STOKES ITER STEP 2} we have\nthat $\\|Bv_{1}\\|_{0}$ equals the norm of $B A^{-1} B^{*} dp_{2}$ and\nconsequently provides a norm for the pressure correction.\n\nWe want to optimize the tolerance choice for solving~\\ref{STOKES ITER STEP 1}\nand~\\ref{STOKES ITER STEP 2}. To do this we write the iteration scheme as a\nfixed point problem. Here we consider the errors produced by the iterative\nsolvers being used.\nFrom \\eqn{STOKES ITER STEP 1} we have\n\\begin{equation} \\label{STOKES total V1}\nv_{1} = e_{1} + v_{0} + A^{-1} ( G - Av_{0} - B^{*} p_{0} ) \n\\end{equation}\nwhere $e_{1}$ is the error when solving~\\ref{STOKES ITER STEP 1}.\nWe will use a sparse matrix solver so we do not have full control over the norm\n$\\|.\\|_{s}$ used in the stopping criterion for this equation.\nIn fact we will have a stopping criterion of the form\n\\begin{equation} \n\\| A e_{1} \\|_{s}  = \\| G - A v_{1} - B^{*} p_{0} \\|_{s} \\le \\tau_{1} \\| G - A v_{0} - B^{*} p_{0} \\|_{s} \n\\end{equation}\nwhere $\\tau_{1}$ is the tolerance which we need to choose.\nThis translates into the condition\n\\begin{equation} \n\\| e_{1} \\|_{1} \\le K \\tau_{1} \\| dv_{1} - e_{1} \\|_{1} \n\\end{equation}\nThe constant $K$ represents some uncertainty combining a variety of unknown\nfactors such as the norm being used and the condition number of the stiffness matrix.\nFrom the first equation of~\\ref{STOKES ITER STEP 2} we have\n\\begin{equation}\\label{STOKES total P2}\np_{2} =  p_{0} + (B A^{-1} B^{*})^{-1} (e_{2} + Bv_{1} )\n\\end{equation}\nwhere $e_{2}$ represents the error when solving~\\ref{STOKES ITER STEP 2}.\nWe use an iterative preconditioned conjugate gradient method\n(PCG)\\index{linear solver!PCG}\\index{PCG} with iteration operator\n$B A^{-1} B^{*}$ using the $\\|.\\|_{0}$ norm.\nAs a suitable preconditioner\\index{preconditioner} for the iteration operator we\nuse $\\frac{1}{\\eta}$ \\cite{ELMAN}, i.e. the evaluation of the preconditioner\n$P$ for a given pressure increment $q$ is the solution of\n\\begin{equation} \\label{STOKES P PREC}\n\\frac{1}{\\eta} (Pq) = q \\; . 
\n\\end{equation}\nNote that in each evaluation of the iteration operator $q=B A^{-1} B^{*} s$\none needs to solve the problem\n\\begin{equation} \\label{STOKES P OPERATOR}\nA w = B^{*} s \n\\end{equation}\nwith sufficient accuracy to return $q=Bw$. We assume that the desired\ntolerance is sufficiently small, for instance one can take $\\tau_{2}^2$ where\n$\\tau_{2}$ is the tolerance for~\\ref{STOKES ITER STEP 2}.\n\nIn an implementation we use the fact that the residual $r$ is given as\n\\begin{equation} \\label{STOKES RES }\n r= B (v_{1} -  A^{-1} B^{*} dp) = B (v_{1}-dv_{2}) = B v_{2}\n\\end{equation}\nIn particular we have $e_{2} = B v_{2}$.\nSo the residual $r$ is represented by the updated velocity $v_{2}=v_{1}-dv_{2}$.\nIn practice, if one uses the velocity to represent the residual $r$ there is\nno need to recover $dv_{2}$ in~\\ref{STOKES ITER STEP 2} after $dp_{2}$ has\nbeen calculated.\nIn PCG the iteration is terminated if\n\\begin{equation} \\label{STOKES P OPERATOR ERROR}\n\\| P^{\\frac{1}{2}}B v_{2} \\|_{0} \\le \\tau_{2} \\| P^{\\frac{1}{2}}B v_{1} \\|_{0}\n\\end{equation}\nwhere $\\tau_{2}$ is the given tolerance. This translates into\n\\begin{equation} \\label{STOKES P OPERATOR ERROR 2}\n\\|e_{2}\\|_{0} = \\| B v_{2} \\|_{0} \\le M \\tau_{2} \\| B v_{1} \\|_{0}\n\\end{equation}\nwhere $M$ accounts for the fact that $P^{\\frac{1}{2}}$ is dropped.\n\nAs we assume that there is no significant error from solving with the operator\n$A$ we have \n\\begin{equation} \\label{STOKES total V2}\nv_{2} =  v_{1} - dv_{2} \n= v_{1}  - A^{-1} B^{*}dp \n\\end{equation}\nCombining the equations~\\ref{STOKES total V1},~\\ref{STOKES total P2} and~\\ref{STOKES total V2}\nand setting the errors to zero we can write the solution process as a fixed\npoint problem\n\\begin{equation} \nv = \\Phi(v,p) \\mbox{ and } p = \\Psi(v,p) \n\\end{equation}\nwith suitable functions $\\Phi(v,p)$ and $ \\Psi(v,p)$ representing the\niteration operator without errors. In fact, for a linear problem, $\\Phi$ and\n$\\Psi$ are constant. 
With this notation we can write the update step in the\nform $p_{2}= \\delta p + \\Psi(v_{0},p_{0})$ and $v_{2}= \\delta v + \\Phi(v_{0},p_{0})$\nwhere the total errors $\\delta p$ and $\\delta v$ are given as\n\\begin{equation} \n \\begin{array}{rcl}\n\\delta p & = &  (B A^{-1} B^{*})^{-1} ( e_{2} + B e_{1} ) \\\\\n\\delta v & = &  e_{1} -  A^{-1} B^{*}\\delta p  \\;.\n\\end{array}\\label{STOKES ERRORS}\n\\end{equation}\nNotice that $B\\delta v = - e_{2}=-Bv_{2}$.\nOur task is now to choose the tolerances $\\tau_{1}$ and $\\tau_{2}$ such that\nthe global errors $\\delta p$ and $\\delta v$ do not stop the convergence of the\niteration process.\n\nTo measure convergence we use\n\\begin{equation} \n\\epsilon = \\max(\\|v_{2}-v\\|_{1}, \\|B A^{-1} B^{*} (p_{2}-p)\\|_{0})\n\\end{equation}\nIn practice using the fact that $B A^{-1} B^{*} (p_{2}-p_{0}) = B v_{1}$\nand assuming that $v_{2}$ gives a better approximation to the true $v$ than\n$v_{0}$ we will use the estimate\n\\begin{equation} \n\\epsilon = \\max(\\|v_{2}-v_{0}\\|_{1}, \\|B v_{1}\\|_{0})\n\\end{equation}\nto estimate the progress of the iteration step after the step is completed.\nNote that the estimate of $\\epsilon$ is used in the stopping\ncriterion~\\ref{STOKES STOPPING CRITERIA}.\nIf $\\chi^{-}$ is the convergence rate assuming exact calculations, i.e.\n$e_{1}=0$ and $e_{2}=0$, we are expecting to maintain $\\epsilon \\le \\chi^{-} \\cdot \\epsilon^{-}$.\nFor the pressure increment we get:\n\\begin{equation} \\label{STOKES EST 1}\n\\begin{array}{rcl}\n\\|B A^{-1} B^{*} (p_{2}-p)\\|_{0}\n & \\le & \\|B A^{-1} B^{*} (p_{2}-\\delta p-p)\\|_{0} +\n\\|B A^{-1} B^{*} \\delta p\\|_{0} \\\\\n & = & \\chi^{-} \\cdot \\epsilon^{-} + \\|e_{2} + B e_{1}\\|_{0}  \\\\\n& \\approx & \\chi^{-} \\cdot \\epsilon^{-} + \\|e_{2}\\|_{0} \\\\\n& \\le & \\chi^{-} \\cdot \\epsilon^{-} + M \\tau_{2} \\|B v_{1}\\|_{0} \\\\  \n\\end{array}\n\\end{equation}\nSo we choose the value for $\\tau_{2}$ from\n\\begin{equation} \\label{STOKES TOL2}\n M \\tau_{2} \\|B v_{1}\\|_{0}  \\le (\\chi^{-})^2 \\epsilon^{-}\n\\end{equation}\nin order to make the perturbation for the termination of the pressure\niteration a second order effect. We use a similar argument for the velocity:\n\\begin{equation}\\label{STOKES EST 2}\n\\begin{array}{rcl}\n\\|v_{2}-v\\|_{1} & \\le & \\|v_{2}-\\delta v-v\\|_{1} + \\| \\delta v\\|_{1} \\\\\n& \\le &  \\chi^{-} \\cdot \\epsilon^{-}  + \\| e_{1} -  A^{-1} B^{*}\\delta p \\|_{1} \\\\\n& \\approx &  \\chi^{-} \\cdot \\epsilon^{-}  + \\| e_{1} \\|_{1} \\\\\n& \\le &  \\chi^{-} \\cdot \\epsilon^{-}  +  K \\tau_{1} \\| dv_{1} - e_{1} \\|_{1}\n\\\\\n& \\le &  ( 1  + K \\tau_{1}) \\chi^{-} \\cdot \\epsilon^{-}\n\\end{array}\n\\end{equation}\nSo we choose the value for $\\tau_{1}$ from\n\\begin{equation} \\label{STOKES TOL1}\nK \\tau_{1} \\le \\chi^{-}\n\\end{equation}\nAssuming we have estimates for $M$ and $K$\\footnote{if no estimates are\navailable, we use the value $1$} we can use~\\ref{STOKES TOL1} and\n\\ref{STOKES TOL2} to get appropriate values for the tolerances.\nAfter the step has been completed we can calculate a new convergence rate\n$\\chi =\\frac{\\epsilon}{\\epsilon^{-}}$.\nFor practical reasons we restrict $\\chi$ to be less than or equal to a given maximum\nvalue $\\chi_{max}\\le 1$.\nIf we see $\\chi \\le \\chi^{-} (1+\\chi^{-})$ our choices for the tolerances were\nsuitable. 
Otherwise, we need to adjust the values for $K$ and $M$.\nFrom the estimates~\\ref{STOKES EST 1} and~\\ref{STOKES EST 2} we establish\n\\begin{equation}\\label{STOKES EST 3}\n\\chi \\le ( 1 + \\max(M \\frac{\\tau_{2} \\|B v_{1}\\|_{0}}{\\chi^{-} \\epsilon^{-}},K \\tau_{1}  ) ) \\cdot \\chi^{-} \n\\end{equation}\nIf we assume that this inequality would hold with equality had we chosen\nthe right values $M^{+}$ and $K^{+}$, then we get\n\\begin{equation}\\label{STOKES EST 3b}\n\\chi =  ( 1 + \\max(M^{+} \\frac{\\chi^{-}}{M},K^{+} \\frac{\\chi^{-}}{K}) ) \\cdot \\chi^{-} \n\\end{equation}\nFrom this equation we can see whether our choice for $K$ was good enough.\nIf it was not, we can calculate a new value\n \\begin{equation}\nK^{+} =  \\frac{\\chi-\\chi^{-}}{(\\chi^{-})^2} K\n\\end{equation}\nIn practice we will use\n \\begin{equation}\nK^{+}  = \\max(\\frac{\\chi-\\chi^{-}}{(\\chi^{-})^2} K,\\frac{1}{2}K,1)\n\\end{equation}\nwhere the second term is used to reduce a potential overestimate of $K$.\nThe same identity is used to update $M$. The updated $M^{+}$ and $K^{+}$\nare then used in the next iteration step to control the tolerances.\n\nIn some cases one can observe that there is a significant change in the\nvelocity but the new velocity $v_{1}$ still has a small divergence, i.e.\nwe have $\\|Bv_{1}\\|_{0} \\ll \\|v_{1}-v_{0}\\|_{1}$.\nIn this case we will get a small pressure increment and consequently only very\nsmall changes to the velocity as a result of the second update step. The second\nstep can therefore be skipped, and we can directly repeat the first update step\nuntil the increment in velocity becomes significant relative to its divergence.\nIn practice we will ignore the second half of the iteration step as long as\n \\begin{equation}\\label{STOKES LARGE BV1}\n\\|Bv_{1}\\|_{0} \\le \\theta \\cdot \\|v_{1}-v_{0}\\| \n\\end{equation}\nwhere $0<\\theta<1$ is a given factor. 
In this case we will also check the\nstopping criterion with $v_{1}\\rightarrow v_{2}$ but we will not correct $M$\nin this case.\n\nStarting from an initial guess $v_{0}$ and $p_{0}$ for velocity and pressure\nthe solution procedure is implemented as follows:\n\\begin{enumerate}\n \\item calculate viscosity $\\eta(v_{0},p_{0})$ and assemble operator $A$ from $\\eta$\n \\item calculate the tolerance $\\tau_{1}$ from \\eqn{STOKES TOL1}\n \\item solve \\eqn{STOKES ITER STEP 1} for $dv_{1}$ with tolerance $\\tau_{1}$\n \\item update $v_{1}= v_{0}+ dv_{1}$\n \\item if $Bv_{1}$ is large (see~\\ref{STOKES LARGE BV1}):\n \\begin{enumerate}\n   \\item calculate the tolerance $\\tau_{2}$ from~\\ref{STOKES TOL2}\n   \\item solve~\\ref{STOKES ITER STEP 2} for $dp_{2}$ and $v_{2}$ with tolerance $\\tau_{2}$\n   \\item update $p_{2}\\leftarrow p_{0}+ dp_{2}$\n \\end{enumerate}\n \\item else:\n  \\begin{itemize}\n    \\item update $p_{2}\\leftarrow p_{0}$ and $v_{2}\\leftarrow v_{1}$\n  \\end{itemize}\n \\item calculate convergence measure $\\epsilon$ and convergence rate $\\chi$\n \\item if stopping criterion~\\ref{STOKES STOPPING CRITERIA} holds:\n \\begin{itemize}\n   \\item return $v_{2}$ and $p_{2}$\n \\end{itemize}\n \\item else:\n \\begin{enumerate}\n   \\item update $M$ and $K$\n   \\item goto step 1 with $v_{0}\\leftarrow v_{2}$ and $p_{0}\\leftarrow p_{2}$.\n \\end{enumerate}\n\\end{enumerate}
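\n\nTo illustrate the structure of this loop, the following toy script runs the same Uzawa update on a small dense block system with numpy (a sketch only, independent of escript; the actual implementation is the \\class{StokesProblemCartesian} class described below):\n\\begin{python}\n  # Toy dense analogue: one Uzawa sweep solves the linear\n  # block system [[A, B*], [B, 0]] [v, p] = [G, 0].\n  import numpy as np\n  rng = np.random.default_rng(1)\n  n, m = 8, 3\n  R = rng.standard_normal((n, n))\n  A = R @ R.T + n * np.eye(n)      # coercive, self-adjoint A\n  B = rng.standard_normal((m, n))  # divergence-like operator\n  G = rng.standard_normal(n)\n  v, p = np.zeros(n), np.zeros(m)\n  for it in range(10):\n      dv1 = np.linalg.solve(A, G - A @ v - B.T @ p)  # step 3\n      v1 = v + dv1\n      S = B @ np.linalg.solve(A, B.T)                # B A^-1 B*\n      dp2 = np.linalg.solve(S, B @ v1)\n      dv2 = np.linalg.solve(A, B.T @ dp2)\n      v, p = v1 - dv2, p + dp2\n      if max(np.linalg.norm(B @ v1),\n             np.linalg.norm(dv1 - dv2)) <= 1e-10 * np.linalg.norm(v):\n          break\n  print(np.linalg.norm(B @ v))     # ~0: v is divergence free\n\\end{python}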
\n\n\\subsection{Functions}\n\n\\begin{classdesc}{StokesProblemCartesian}{domain}\nopens the Stokes problem\\index{Stokes problem} on the \\Domain domain.\nThe domain needs to support LBB compliant elements for the Stokes problem, see~\\cite{LBB} for details\\index{LBB condition}.\nFor instance one can use second order polynomials for velocity and first order\npolynomials for the pressure on the same element.\nAlternatively, one can use macro elements\\index{macro elements} using linear\npolynomials for both pressure and velocity with a subdivided element for the\nvelocity. Typically, the macro element is more cost effective.\nThe fact that pressure and velocity are represented in different ways is\nexpressed by\n\\begin{python}\n  velocity=Vector(0.0, Solution(mesh))\n  pressure=Scalar(0.0, ReducedSolution(mesh))\n\\end{python}\n\\end{classdesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{initialize}{\\optional{f=Data(),\n    \\optional{fixed_u_mask=Data(), \\optional{eta=1,%\n    \\optional{surface_stress=Data(), \\optional{stress=Data()},%\n    \\optional{restoration_factor=0}}}}}}\nassigns values to the model parameters. In any call all values must be set.\n\\var{f} defines the external force $f$, \\var{eta} the viscosity $\\eta$,\n\\var{surface_stress} the surface stress $s$ and \\var{stress} the initial stress $\\sigma$.\nThe locations and components where the velocity is fixed are set by the values\nof \\var{fixed_u_mask}.\n\\var{restoration_factor} defines the restoring force factor $\\alpha$.\nThe method will try to cast the given values to appropriate \\Data class objects.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{solve}{v,p\n\\optional{, max_iter=100 \\optional{, verbose=False \\optional{, usePCG=True }}}}\nsolves the problem and returns approximations for velocity and pressure. \nThe arguments \\var{v} and \\var{p} define initial guesses.\n\\var{v} must have function space \\var{Solution(domain)} and \\var{p} must have\nfunction space \\var{ReducedSolution(domain)}.\nThe values of \\var{v} marked by \\var{fixed_u_mask} remain unchanged.\nIf \\var{usePCG} is set to \\True then the preconditioned conjugate gradient\nmethod (PCG)\\index{preconditioned conjugate gradient method!PCG} scheme is\nused. Otherwise the problem is solved with the generalized minimal residual\nmethod (GMRES)\\index{generalized minimal residual method!GMRES}.\nIn most cases the PCG scheme is more efficient.\n\\var{max_iter} defines the maximum number of iteration steps.\nIf \\var{verbose} is set to \\True, information on the progress of the solver\nis printed.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{setTolerance}{\\optional{tolerance=1.e-4}}\nsets the tolerance in an appropriate norm relative to the right hand side.\nThe tolerance must be non-negative and less than 1.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{getTolerance}{}\nreturns the current relative tolerance.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{setAbsoluteTolerance}{\\optional{tolerance=0.}}\nsets the absolute tolerance for the error in the relevant norm.\nThe tolerance must be non-negative.\nTypically the absolute tolerance is set to 0.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{getAbsoluteTolerance}{}\nreturns the current absolute tolerance.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsVelocity}{}\nreturns the solver options used to solve \\eqn{STOKES ITER STEP 1} and \\eqn{STOKES P OPERATOR} for velocity.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsPressure}{}\nreturns the solver options used to solve the preconditioner \\eqn{STOKES P PREC} for pressure.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsDiv}{}\nreturns the solver options for solving the equation to project the divergence of\nthe velocity onto the function space of pressure.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{setStokesEquation}{\\optional{f=None,\n    \\optional{fixed_u_mask=None, \\optional{eta=None,%\n    \\optional{surface_stress=None, \\optional{stress=None},%\n    \\optional{restoration_factor=None}}}}}}\nassigns new values to the model parameters, see method \\function{initialize} for description of the \nparameter list. In contrast to \\function{initialize} only values given in the argument list are set. \nTypically this method is called to update parameters such as viscosity $\\eta$ within a time integration scheme\nafter the problem has been set up by an initial call of method \\function{initialize}.\n\\end{methoddesc}\n\n\\begin{methoddesc}[StokesProblemCartesian]{updateStokesEquation}{v, p}\nthis method is called by the solver to allow for updating the model parameters during the iteration process for \nincompressibility. In order to implement a non-linear dependence of model parameters such as viscosity $\\eta$\non the current velocity \\var{v} or pressure field \\var{p} this function can be overwritten in the following way:\n\\begin{python}\n    class MyStokesProblem(StokesProblemCartesian):\n       def updateStokesEquation(self, v, p):\n           my_eta=<eta derived from v and p>\n           self.setStokesEquation(eta=my_eta)\n\\end{python}\nNote that \\function{setStokesEquation} is used to update the model. 
\\warning{It is not guaranteed that the iteration converges if model parameters are altered.}\n\\end{methoddesc}\n\n\\subsection{Example: Lid-driven Cavity}\nThe following script \\file{lid_driven_cavity.py}\\index{scripts!\\file{lid_driven_cavity.py}},\nwhich is available in the \\ExampleDirectory, illustrates the usage of the\n\\class{StokesProblemCartesian} class to solve the lid-driven cavity problem:\n\\begin{python}\n  from esys.escript import *\n  from esys.finley import Rectangle\n  from esys.weipa import saveVTK\n  from esys.escript.models import StokesProblemCartesian\n  NE=25\n  dom = Rectangle(NE,NE,order=2)\n  x = dom.getX()\n  sc=StokesProblemCartesian(dom)\n  # locations and components where the velocity is prescribed\n  mask= (whereZero(x[0])*[1.,0]+whereZero(x[0]-1))*[1.,0] + \\\\\n        (whereZero(x[1])*[0.,1.]+whereZero(x[1]-1))*[1.,1]\n  sc.initialize(eta=.1, fixed_u_mask=mask)\n  v=Vector(0., Solution(dom))\n  v[0]+=whereZero(x[1]-1.)   # drive the flow: horizontal velocity 1 on the lid\n  p=Scalar(0.,ReducedSolution(dom))\n  v,p=sc.solve(v, p, verbose=True)\n  saveVTK(\"u.vtu\", velocity=v, pressure=p)\n\\end{python}\n\n", "meta": {"hexsha": "6dc62d2eb032ebe33eeeb1df0df1714fbe63122c", "size": 21102, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user/stokessolver.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user/stokessolver.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/user/stokessolver.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.2763157895, "max_line_length": 152, "alphanum_fraction": 0.7040090987, "num_tokens": 6644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199552262967, "lm_q2_score": 0.6688802471698041, "lm_q1q2_score": 0.5602674426861256}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\begmath 15.3 Cumulative Distribution Function for Chi-Square\n\\hbox{Probability Distribution}\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThe procedure described in this Section computes the Cumulative Distribution\nFunction (CDF) of the Chi-Square probability distribution. The CDF is\nsometimes called the lower tail. The\nlower tail, or CDF, $P(\\chi ^2|\\nu )$, and the upper tail, $Q(\\chi ^2|\\nu )$\nfor the Chi-Square probability distribution with argument $\\chi $ and $\\nu $\ndegrees of freedom are defined by%\n\\begin{gather*}\n\\hspace{-5pt}P(\\chi ^2|\\nu )=\\frac{2^{-\\nu /2}}{\\Gamma (\\nu /2)}\\int_0^{\n\\chi^2}t^{(\\nu / 2)-1}e^{-t/2}dt,\\phantom{=1-P(\\chi ^2|\\nu )}\\\\\n\\hspace{-5pt}Q(\\chi ^2|\\nu )=\\frac{2^{-\\nu /2}}{\\Gamma (\\nu /2)}\n\\int_{\\chi ^2}^\\infty t^{(\\nu / 2)-1}e^{-t/2}dt\\phantom{,}=1-P(\\chi ^2|\\nu )\n\\end{gather*}\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[REAL]  \\ {\\bf CHISQ, NU, P, Q}\n\n\\item[INTEGER]  \\ {\\bf IERR}\n\\end{description}\n\nAssign values to CHISQ and NU, and obtain P $=P(\\chi ^2|\\nu )$ and Q $%\n=Q(\\chi ^2|\\nu )$ by using\n$$\n\\fbox{{\\bf CALL SCDCHI (CHISQ, NU, P, Q, IERR)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[CHISQ]  \\ [in] Argument $\\chi ^2$ of the functions $P(\\chi ^2|\\nu )$\nand $Q(\\chi ^2|\\nu ).$  Must be nonnegative.  Must be positive if NU = 0.\n\n\\item[NU]  \\ [in] Degrees of freedom $\\nu $ of the functions $P(\\chi ^2|\\nu )\n$ and $Q(\\chi ^2|\\nu ).$  Must be nonnegative.  Must be positive if\nCHISQ = 0.\n\n\\item[P]  \\ [out] The value of the function $P(\\chi ^2|\\nu ).$\n\n\\item[Q]  \\ [out] The value of the function $Q(\\chi ^2|\\nu ).$\n\n\\item[IERR]  \\ [out] A flag that normally is zero to indicate successful\ncomputation. See Section E below for discussion of non-zero values.\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nFor double precision computation, change the REAL type statement to DOUBLE\nPRECISION and change the initial letter of the procedure name to D.\n\n\\subsection{Example and Remarks}\n\nSee DRDCDCHI and ODDCDCHI for an example of the usage of this subprogram.\n\nThe procedures SGAMIK and SGAMIE, described in Chapter~2.19, are used to\ncontrol the procedure SGAMI and determine the error estimate it returns.\nSGAMI is used as described in Section D below.\n\n\\subsection{Functional Description}\n\n\\subsubsection{Method}\n\nThe identities $P(\\chi ^2|\\nu ) = P(a,x)$ and $Q(\\chi ^2|\\nu ) = Q(a,x)$,\nwith $a = \\nu /2$ and $x = \\chi ^2/2$, where $P(a,x)$ and $Q(a,x)$ are\nincomplete gamma function ratios, are used. The procedure SGAMI described in\nChapter~2.19 is used to evaluate $P(a,x)$ and $Q(a,x).$\n\n\\subsubsection{Accuracy Tests}\n\nSee Section~2.19.D.\n\n\\subsection{Error Procedures and Restrictions}\n\nThe procedure SGAMI issues error messages, under several conditions, at\nlevel 2 + MSGOFF, where MSGOFF is zero unless specified by a call to SGAMIK\n(see Chapter~2.19) at some time before calling SCDCHI. 
If error termination\nis suppressed by setting MSGOFF $< 0$, or by calling ERMSET (see Chapter\n19.2), IERR will be set to a non-zero value.\n\nIf the desired tolerance could not be achieved, IERR is set to~2.\n\nIf SCDCHI is called with both CHISQ and NU zero, IERR is set to~3 and P is\nset to~3.0.\n\nIf SCDCHI is called with one or both of CHISQ and NU negative, IERR is set\nto~4 and P is set to~4.0.\n\n\\subsection{Supporting Information}\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDCDCHI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCDCHI, DCSEVL, DERF, DERM1, DERV1, DGAM1, DGAMMA, DINITS, DRCOMP,\nDREXP, DRLOG, DXPARG, ERFIN, ERMSG, IERM1, IERV1\\rule[-5pt]{0pt}{8pt}}\\\\\nSCDCHI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCDCHI, SCSEVL, SERF, SERM1, SERV1,\nSGAM1, SGAMMA, SINITS, SRCOMP, SREXP, SRLOG, SXPARG}\\\\\n\\end{tabular}\n\nDesigned and programmed by W. V. Snyder, JPL, 1993.\n\n\n\\begcodenp\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRDCDCHI}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{dcdchi}}\n\n\\vspace{30pt}\\centerline{\\bf \\large ODDCDCHI}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{dcdchi}}\n\\end{document}\n", "meta": {"hexsha": "1c6520cd81f3ec0215ba8031c190c97bab8ea278", "size": 4451, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch15-03.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch15-03.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch15-03.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 35.3253968254, "max_line_length": 98, "alphanum_fraction": 0.710851494, "num_tokens": 1511, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5602055493312843}}
{"text": "%Suggested order of slides\n% slides-linsvm-hard-margin\n% slides-linsvm-hard-margin-dual\n% slides-linsvm-soft-margin\n% slides-linsvm-erm\n% slides-linsvm-optimization\n\n\\subsection{Linear Hard Margin SVM}\n\\includepdf[pages=-]{../slides-pdf/slides-linsvm-hard-margin.pdf}\n\n\\subsection{Hard-Margin SVM Dual}\n\\includepdf[pages=-]{../slides-pdf/slides-linsvm-hard-margin-dual.pdf}\n\n\\subsection{Soft-Margin SVM}\n\\includepdf[pages=-]{../slides-pdf/slides-linsvm-soft-margin.pdf}\n\n\\subsection{SVMs and Empirical Risk Minimization}\n\\includepdf[pages=-]{../slides-pdf/slides-linsvm-erm.pdf}\n\n\\subsection{Support Vector Machine Training}\n\\includepdf[pages=-]{../slides-pdf/slides-linsvm-optimization.pdf}\n\n", "meta": {"hexsha": "98c69fe09d821aab8b6ed0ef1f2e4751ba33e445", "size": 693, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/linear-svm/chapter-order.tex", "max_stars_repo_name": "jukaje/lecture_i2ml", "max_stars_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 93, "max_stars_repo_stars_event_min_datetime": "2019-02-27T17:20:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-24T12:05:52.000Z", "max_issues_repo_path": "slides/linear-svm/chapter-order.tex", "max_issues_repo_name": "jukaje/lecture_i2ml", "max_issues_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 323, "max_issues_repo_issues_event_min_datetime": "2019-02-27T08:02:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T08:11:41.000Z", "max_forks_repo_path": "slides/linear-svm/chapter-order.tex", "max_forks_repo_name": "jukaje/lecture_i2ml", "max_forks_repo_head_hexsha": "cd4900f5190e9d319867b4c0eb9d8e19f659fb62", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 56, "max_forks_repo_forks_event_min_datetime": "2019-02-27T16:25:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T05:53:20.000Z", "avg_line_length": 30.1304347826, "max_line_length": 70, "alphanum_fraction": 0.7705627706, "num_tokens": 195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.766293653760418, "lm_q2_score": 0.7310585669110202, "lm_q1q2_score": 0.5602055403511007}}
{"text": "\n\\subsection{Two dimensional uniform dataset}\n\n\\begin{figure}\n  \\includegraphics[width=\\columnwidth]{experiment-2D-uniform.pdf}\n  \\vspace{2mm}\n  \\centering\n  \\includegraphics[width=.975\\columnwidth]{colormap.pdf}\n  %\n  \\caption{%\n  %\n  {\\bfseries \\sffamily Two dimensional uniform dataset (results)}\n  %\n  Randomized SOM made of $1024$ neurons with a $2$-nearest neighbors induced topology. Model has been trained for $25,000$ epochs on two-dimensional points drawn from a uniform distribution on the unit square. \\textbf{A} Map topology in neural space. \\textbf{B} Map topology in data space. \\textbf{C to H} Receptive field of the map for six samples.\n  %\n  }\n  \\label{fig:2D-uniform:results}\n\\end{figure}\n", "meta": {"hexsha": "d518c8a676124c5f6b1aff241ee3353b65b058da", "size": 708, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article-overleaf/05-appendix-B.tex", "max_stars_repo_name": "rougier/VSOM", "max_stars_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-11-20T06:27:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:20:28.000Z", "max_issues_repo_path": "article-overleaf/05-appendix-B.tex", "max_issues_repo_name": "rougier/VSOM", "max_issues_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article-overleaf/05-appendix-B.tex", "max_forks_repo_name": "rougier/VSOM", "max_forks_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-03T04:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T04:41:57.000Z", "avg_line_length": 37.2631578947, "max_line_length": 349, "alphanum_fraction": 0.7443502825, "num_tokens": 198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7310585727705126, "lm_q1q2_score": 0.5602055370375383}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{III}\n\n\\def\\ntitle{Category Theory}\n\\def\\nlecturer{P.\\ T.\\ Johnstone}\n\n\\def\\nterm{Michaelmas}\n\\def\\nyear{2018}\n\n\\input{header}\n\n%\\usepackage{tipa} % \\textcorner\n\\usepackage{xcolor}\n\n% add annotation to middle of a commutative square in tikzcd\n% https://tex.stackexchange.com/questions/119543/how-can-i-get-symbols-to-appear-in-the-middle-of-commutative-diagrams-using-tikz\n\\tikzset{commutative diagrams/.cd,\nmysymbol/.style={start anchor=center,end anchor=center,draw=none}\n}\n\\newcommand\\MySymb[2][?]{%\n  \\arrow[mysymbol]{#2}[description]{#1}}\n\n\\renewcommand{\\c}[1]{\\mathbf{#1}}\n\\DeclareMathOperator{\\ob}{ob}\n\\DeclareMathOperator{\\mor}{mor}\n\\DeclareMathOperator{\\dom}{dom}\n\\DeclareMathOperator{\\cod}{cod}\n\n\\newcommand{\\Set}{{\\c{Set}}}\n\\newcommand{\\Top}{{\\c{Top}}}\n\\newcommand{\\Rel}{{\\c{Rel}}}\n\\newcommand{\\Cone}{{\\c{Cone}}}\n\\newcommand{\\Sh}{{\\c{Sh}}}\n\\newcommand{\\sh}{{\\c{sh}}}\n\\newcommand{\\Sub}{{\\c{Sub}}}\n\n\\newcommand{\\mono}{\\rightarrowtail}\n\\newcommand{\\epi}{\\twoheadrightarrow}\n\n\\newcommand{\\adjoint}{\\dashv}\n\n\\newcommand{\\T}{{\\mathbb{T}}} % monad\n\\newcommand{\\blue}[1]{\\textcolor{blue}{#1}}\n\n\\DeclareMathOperator{\\ev}{ev} % evaluation\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Definitions and examples}\n\n\\begin{definition}[category]\\index{category}\n  A \\emph{category} \\(\\c C\\) consists of\n  \\begin{enumerate}\n  \\item a collection \\(\\ob \\c C\\) of \\emph{objects} \\(A, B, C, \\dots\\),\n  \\item a collection \\(\\mor \\c C\\) of \\emph{morphisms} \\(f, g, h, \\dots\\),\n  \\item two operations \\(\\dom\\) and \\(\\cod\\) assigning to each \\(f \\in \\mor \\c C\\) a pair of objects, its \\emph{domain} and \\emph{codomain}. We write \\(A \\xrightarrow{f} B\\) to mean \\(f\\) is a morphism and \\(\\dom f = A, \\cod f = B\\),\n  \\item an operation assigning to each \\(A \\in \\ob \\c C\\) a morhpism \\(A \\xrightarrow{1_A} A\\),\n  \\item a partial binary operation \\((f, g) \\mapsto fg\\) on morphisms, such that \\(fg\\) is defined if and only if \\(\\dom f = \\cod g\\) and let \\(\\dom fg = \\dom g, \\cod fg = \\cod f\\) if \\(fg\\) is defined\n  \\end{enumerate}\n  satisfying\n  \\begin{enumerate}\n  \\item \\(f 1_A = f = 1_B f\\) for any \\(A \\xrightarrow{f} B\\),\n  \\item \\((fg) h = f(gh)\\) whenever \\(fg\\) and \\(gh\\) are defined.\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{remark}\\leavevmode\n  \\begin{enumerate}\n  \\item This definition is independent of any model of set theory. If we're given a particuar model of set theory, we call \\(\\c C\\) \\emph{small}\\index{category!small} if \\(\\ob \\c C\\) and \\(\\mor \\c C\\) are sets.\n  \\item Some texts say \\(fg\\) means \\(f\\) followed by \\(g\\) (we are not).\n  \\item Note that a morphism \\(f\\) is an identity if and only if \\(fg = g\\) and \\(hf = h\\) whenever the compositions are defined so we could formulate the definitions entirely in terms of morphisms.\n  \\end{enumerate}\n\\end{remark}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item The category \\(\\Set\\) has all sets as objects and all functions between sets as morphisms (strictly, morphisms \\(A \\to B\\) are pairs \\((f, B)\\) where \\(f\\) is a set theoretic function).\n  \\item The category \\(\\c{Gp}\\) where objects are groups, morphisms are group homomorphisms. 
Similarly \\(\\c{Ring}\\) is the category of rings and \\(\\c{Mod}_R\\) is the category of \\(R\\)-modules.\n  \\item The category \\(\\Top\\) has all topological spaces as objects and continuous functions as morphisms. Similarly \\(\\c{Unif}\\) is the category of uniform spaces with uniformly continuous functions and \\(\\c{Mf}\\) the category of manifolds with smooth maps.\n  \\item The category \\(\\c{Htpy}\\) has the same objects as \\(\\Top\\) but morphisms are homotopy classes of continuous functions.\n\n    More generally, given \\(\\c C\\) we call an equivalence relation \\(\\simeq\\) on \\(\\mor \\c C\\) a \\emph{congruence}\\index{congruence} if \\(f \\simeq g\\) implies \\(\\dom f = \\dom g\\), \\(\\cod f = \\cod g\\), and \\(fh \\simeq gh\\), \\(kf \\simeq kg\\) whenever these composites are defined.\n    Then we have a category \\(\\c C / \\simeq\\) with the same objects as \\(\\c C\\) but congruence classes as morphisms.\n  \\item Given \\(\\c C\\), the \\emph{opposite category}\\index{category!opposite} \\(\\c C^{\\text{op}}\\) has the same objects and morphisms as \\(\\c C\\), but domain and codomain interchanged and \\(fg\\) in \\(\\c C^{\\text{op}}\\) is \\(gf\\) in \\(\\c C\\).\n\n    This leads to the \\emph{duality principle}: if \\(P\\) is a valid statement about categories, so is \\(P^*\\) obtained by reversing all the arrows.\n  \\item A small category with one object is a \\emph{monoid}\\index{monoid}, i.e.\\ a semigroup with \\(1\\). In particular, a group is a small category with one object, in which every morphism is an isomorphism (i.e.\\ for all \\(f\\) there exists \\(g\\) such that \\(fg\\) and \\(gf\\) are identities).\n  \\item A \\emph{groupoid} is a category in which every morphism is an isomorphism. For example, for a topological space \\(X\\), the \\emph{fundamental groupoid} \\(\\pi(X)\\) has all points of \\(X\\) as objects and as morphisms \\(x \\to y\\) the homotopy classes rel \\(\\{0, 1\\}\\) of paths \\(\\gamma: [0, 1] \\to X\\) with \\(\\gamma(0) = x\\) and \\(\\gamma(1) = y\\).\n  \\item A \\emph{discrete}\\index{category!discrete} category is one whose only morphisms are identities. A \\emph{preorder} is a category \\(\\c C\\) in which for any pair \\((A, B)\\) there exists at most one morphism \\(A \\to B\\). A small preorder is a set equipped with a binary relation which is reflexive and transitive. In particular a partially ordered set is a small preorder in which the only isomorphisms are identities.\n  \\item The category \\(\\Rel\\) has the same objects as \\(\\Set\\) but morphisms \\(A \\to B\\) are arbitrary relations. Given \\(R \\subseteq A \\times B, S \\subseteq B \\times C\\), we define\n    \\[\n      S \\compose R = \\{(a, c) \\in A \\times C: \\exists b \\in B \\text{ s.t. } (a, b) \\in R, (b, c) \\in S\\}.\n    \\]\n    The identity \\(1_A: A \\to A\\) is \\(\\{(a, a): a \\in A\\}\\). Similarly, the category \\(\\c{Part}\\) of sets and partial functions (i.e.\\ relations such that for all \\((a, b), (a, b') \\in R\\), \\(b = b'\\)) can be defined.\n  \\item Let \\(K\\) be a field. The category \\(\\c{Mat}_K\\) has natural numbers as objects and morphisms \\(n \\to p\\) are \\((p \\times n)\\) matrices with entries from \\(K\\). Composition is matrix multiplication.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{definition}[functor]\\index{functor}\n  Let \\(\\c C, \\c D\\) be categories. 
A \\emph{functor} \\(F: \\c C \\to \\c D\\) consists of\n  \\begin{enumerate}\n  \\item a mapping \\(A \\mapsto F(A)\\) from \\(\\ob \\c C\\) to \\(\\ob \\c D\\),\n  \\item a mapping \\(f \\mapsto F(f)\\) from \\(\\mor \\c C\\) to \\(\\mor \\c D\\) such that\n    \\begin{align*}\n      \\dom(Ff) &= F(\\dom f), \\cod(Ff) = F(\\cod f) \\\\\n      1_{F(A)} &= F(1_A), (Ff)(Fg) = F(fg)\n    \\end{align*}\n    wherever \\(fg\\) is defined.\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{eg}\n  We write \\(\\c{Cat}\\) for the category whose objects are all small categories and whose morphisms are functors between them.\n\\end{eg}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item We have \\emph{forgetful functors}\\index{functor!forgetful} \\(U: \\c{Gp} \\to \\Set, \\c{Ring} \\to \\Set, \\Top \\to \\Set \\dots\\) and \\(\\c{Ring} \\to \\c{AbGp}\\) (forget multiplication), \\(\\c{Ring} \\to \\c{Mon}\\) (forget addition).\n  \\item Given a set \\(A\\), the free group \\(F(A)\\) has the property that given any group \\(G\\) and any \\(A \\xrightarrow{f} U G\\), there is a unique homomorphism \\(FA \\xrightarrow{\\tilde f} G\\) extending \\(f\\).\n\n    \\(F\\) is a functor \\(\\Set \\to \\c{Gp}\\): given any \\(A \\xrightarrow{f} B\\), we define \\(F(f)\\) to be the unique homomorphism extending \\(A \\xrightarrow{f} B \\embed U FB\\). Functoriality follows from uniqueness: given \\(B \\xrightarrow{g} C\\), \\(F(gf)\\) and \\((Fg)(Ff)\\) are both homomorphisms extending\n    \\[\n      A \\xrightarrow{f} B \\xrightarrow{g} C \\embed U F C.\n    \\]\n  \\item Given a set \\(A\\), we write \\(PA\\) for the set of all subsets of \\(A\\). We can make \\(P\\) into a functor \\(\\Set \\to \\Set\\): given \\(A \\xrightarrow{f} B\\), we define\n    \\[\n      Pf(A') = \\{f(a): a \\in A'\\}\n    \\]\n    for \\(A' \\subseteq A\\).\n\n    But we also have a functor \\(P^*: \\Set \\to \\Set^{\\text{op}}\\) defined by\n    \\[\n      P^*f(B') = \\{a \\in A: f(a) \\in B'\\}\n    \\]\n    for \\(B' \\subseteq B\\).\n  \\item By a \\emph{contravariant functor}\\index{functor!contravariant} \\(\\c C \\to \\c D\\) we mean a functor \\(\\c C \\to \\c D^{\\text{op}}\\) (or equivalently \\(\\c C^{\\text{op}} \\to \\c D\\)). (A \\emph{covariant functor} is one that doesn't reverse arrows.)\n\n    Let \\(K\\) be a field. We have a functor \\(\\cdot^*: \\c{Mod}_K \\to \\c{Mod}^{\\text{op}}_K\\) defined by\n    \\[\n      V^* = \\{\\text{linear maps } V \\to K\\}\n    \\]\n    and if \\(V \\xrightarrow{f} W\\), \\(f^*(\\theta) = \\theta f\\).\n  \\item We have a functor \\(\\cdot^\\text{op}: \\c{Cat} \\to \\c{Cat}\\) which is the identity on morphisms. (Note that this is \\emph{covariant}.)\n  \\item A functor between monoids is a monoid homomorphism.\n  \\item A functor between posets is an order-preserving map.\n  \\item Let \\(G\\) be a group. A functor \\(F: G \\to \\Set\\) consists of a set \\(A = F*\\) together with an action of \\(G\\) on \\(A\\), i.e.\\ a permutation representation of \\(G\\). 
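Concretely, such a functor sends the unique object \\(*\\) to the set \\(A = F*\\) and each \\(g \\in G\\) to a map \\(F(g): A \\to A\\); functoriality forces \\(F(1) = 1_A\\) and \\(F(gh) = F(g)F(h)\\), so each \\(F(g)\\) is a bijection (with inverse \\(F(g^{-1})\\)) and \\(g \\mapsto F(g)\\) is precisely an action of \\(G\\) on \\(A\\).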
Similarly a functor \\(G \\to \\c{Mod}_K\\) is a \\(K\\)-linear representation of \\(G\\).\n  \\item The construction of the fundamental group \\(\\pi_1(X, x)\\) of a space \\(X\\) with basepoint \\(x\\) is a functor\n    \\[\n      \\Top_* \\to \\c{Gp}\n    \\]\n    where \\(\\Top_*\\) is the category of spaces with a chosen basepoint.\n\n    Similarly, the fundamental groupoid is a functor\n    \\[\n      \\Top \\to \\c{Gpd}\n    \\]\n    where \\(\\c{Gpd}\\) is the category of groupoids and functors between them.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{definition}[natural transformation]\\index{natural transformation}\n  Let \\(\\c C, \\c D\\) be categories and \\(F, G: \\c C \\to \\c D\\) be two functors. A \\emph{natural transformation} \\(\\alpha: F \\to G\\) consists of an assignment \\(A \\mapsto \\alpha_A\\) from \\(\\ob \\c C\\) to \\(\\mor \\c D\\) such that \\(\\dom \\alpha_A = FA, \\cod \\alpha_A = GA\\) for all \\(A\\), and for all \\(A \\xrightarrow{f} B\\) in \\(\\c C\\), the square\n  \\[\n    \\begin{tikzcd}\n      FA \\ar[r, \"Ff\"] \\ar[d, \"\\alpha_A\"] & FB \\ar[d, \"\\alpha_B\"] \\\\\n      GA \\ar[r, \"Gf\"] & GB\n    \\end{tikzcd}\n  \\]\n  commutes, i.e.\\ \\(\\alpha_B(Ff) = (Gf)\\alpha_A\\).\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item Given categories \\(\\c C, \\c D\\), we write \\([\\c C, \\c D]\\) for the category whose objects are functors \\(\\c C \\to \\c D\\) and whose morphisms are natural transformations.\n  \\item Let \\(K\\) be a field and \\(V\\) a vector space over \\(K\\). There is a linear map \\(\\alpha_V: V \\to V^{**}\\) given by\n    \\[\n      \\alpha_V(v)(\\theta) = \\theta(v)\n    \\]\n    for \\(\\theta \\in V^*\\). This is the \\(V\\)-component of a natural transformation\n    \\[\n      1_{\\c{Mod}_K} \\to \\cdot^{**}: \\c{Mod}_K \\to \\c{Mod}_K.\n    \\]\n  \\item For any set \\(A\\), we have a mapping \\(\\sigma_A: A \\to PA\\) sending \\(a\\) to \\(\\{a\\}\\). If \\(f: A \\to B\\) then \\(Pf(\\{a\\}) = \\{f(a)\\}\\). So \\(\\sigma\\) is a natural transformation \\(1_\\Set \\to P\\).\n  \\item Let \\(F: \\Set \\to \\c{Gp}\\) be the free group functor and \\(U: \\c{Gp} \\to \\Set\\) the forgetful functor. The inclusions \\(A \\to UFA\\) form a natural transformation \\(1_\\Set \\to UF\\).\n  \\item Let \\(G, H\\) be groups and \\(f, g: G \\to H\\) be two homomorphisms. Then a natural transformation \\(\\alpha: f \\to g\\) corresponds to an element \\(h = \\alpha_*\\) such that \\(h f(x) = g(x) h\\) for all \\(x \\in G\\), or equivalently \\(f(x) = h^{-1} g(x) h\\), i.e.\\ \\(f\\) and \\(g\\) are conjugate group homomorphisms.\n  \\item Let \\(A\\) and \\(B\\) be two \\(G\\)-sets, regarded as functors \\(G \\to \\Set\\). A natural transformation \\(A \\to B\\) is a function \\(f\\) satisfying \\(f(g.a) = g.f(a)\\) for all \\(g \\in G\\) and \\(a \\in A\\), i.e.\\ a \\(G\\)-equivariant map.\n  \\end{enumerate}\n\\end{eg}\n\nWhen we say ``natural isomorphism'', it could formally mean two different things: a natural transformation admitting a natural transformation going the other way which composes with it to the identities, or a natural transformation each of whose components is an isomorphism. It turns out they coincide:\n\n\\begin{lemma}\n  Let \\(F, G: \\c C \\to \\c D\\) be two functors and \\(\\alpha: F \\to G\\) a natural transformation. Then \\(\\alpha\\) is an isomorphism in \\([\\c C, \\c D]\\) if and only if each \\(\\alpha_A\\) is an isomorphism in \\(\\c D\\).\n\\end{lemma}\n\n\\begin{proof}\n  The `only if' direction is trivial. 
For `if', suppose each \\(\\alpha_A\\) has an inverse \\(\\beta_A\\). We need to prove the \\(\\beta\\)'s satisfy the naturality condition: given \\(f: A \\to B\\) in \\(\\c C\\), we need to show that\n  \\[\n    \\begin{tikzcd}\n      GA \\ar[r, \"Gf\"] \\ar[d, \"\\beta_A\"] & GB \\ar[d, \"\\beta_B\"] \\\\\n      FA \\ar[r, \"Ff\"] & FB\n    \\end{tikzcd}\n  \\]\n  commutes. But\n  \\[\n    (Ff) \\beta_A = \\beta_B \\alpha_B(Ff) \\beta_A\n    = \\beta_B (Gf) \\alpha_A \\beta_A\n    =\\beta_B (Gf)\n  \\]\n  by naturality of \\(\\alpha\\).\n\\end{proof}\n\nIn the study of algebraic theories (for example), we are interested in isomorphisms of objects and investigate the properties of objects ``up to isomorphism''. However, in category theory a weaker notion than isomorphism is often more useful:\n\n\\begin{definition}[equivalence]\\index{equivalence}\n  Let \\(\\c C\\) and \\(\\c D\\) be categories. By an \\emph{equivalence} between \\(\\c C\\) and \\(\\c D\\) we mean a pair of functors \\(F : \\c C \\to \\c D, G: \\c D \\to \\c C\\) together with natural isomorphisms \\(\\alpha: 1_{\\c C} \\to GF, \\beta: FG \\to 1_{\\c D}\\). We write \\(\\c C \\simeq \\c D\\) if \\(\\c C\\) and \\(\\c D\\) are equivalent.\n\n  We say a property \\(P\\) of categories is a \\emph{categorical property} if whenever \\(\\c C\\) has \\(P\\) and \\(\\c C \\simeq \\c D\\) then \\(\\c D\\) has \\(P\\).\n\\end{definition}\n\nFor example, being a groupoid or a preorder are categorical properties, but being a group or a partial order are not.\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item The category \\(\\c{Part}\\) is equivalent to the category \\(\\Set_*\\) of pointed sets (and basepoint-preserving functions). We define\n    \\begin{align*}\n      F: \\Set_* &\\to \\c{Part} \\\\\n      (A, a) &\\mapsto A \\setminus \\{a\\} \\\\\n    \\end{align*}\n    and if \\(f: (A, a) \\to (B, b)\\) then \\(Ff(x) = f(x)\\) if \\(f(x) \\neq b\\) and undefined otherwise, and\n    \\begin{align*}\n      G: \\c{Part} &\\to \\Set_* \\\\\n      A &\\mapsto A^+ = (A \\cup \\{A\\}, A)\n    \\end{align*}\n    and if \\(f: A \\to B\\) is a partial function, we define \\(Gf\\) by\n    \\[\n      x \\mapsto\n      \\begin{cases}\n        f(x) & \\text{if } x \\in A \\text{ and } f(x) \\text{ defined} \\\\\n        B & \\text{otherwise}\n      \\end{cases}\n    \\]\n    Then \\(FG\\) is the identity on \\(\\c{Part}\\) but \\(GF\\) is not. However there is an isomorphism\n    \\[\n      (A, a) \\to ((A \\setminus \\{a\\})^+, A \\setminus \\{a\\})\n    \\]\n   sending \\(a\\) to \\(A \\setminus \\{a\\}\\) and everything else to itself. This is natural.\n\n   Note that there can be no isomorphism \\(\\Set_* \\to \\c{Part}\\) since \\(\\c{Part}\\) has a \\(1\\)-element isomorphism class \\(\\{\\emptyset\\}\\) and \\(\\Set_*\\) doesn't.\n \\item The category \\(\\c{fdMod}_K\\) of finite-dimensional vector spaces over \\(K\\) is equivalent to \\(\\c{fdMod}_K^{\\text{op}}\\): the functors in both directions are \\(\\cdot^*\\) and both isomorphisms are the natural transformations given by double dual.\n \\item \\(\\c{fdMod}_K\\) is also equivalent to \\(\\c{Mat}_K\\): we define \\(F: \\c{Mat}_K \\to \\c{fdMod}_K\\) by \\(F(n) = K^n\\), and \\(F(A)\\) is the map represented by \\(A\\) with respect to the standard basis. 
To define the functor \\(G\\) the other way, choose a basis for each finite-dimensional vector space and define\n   \\begin{align*}\n     G(V) &= \\dim V \\\\\n     G(V \\xrightarrow{f} W) &= \\text{ matrix representing \\(f\\) w.r.t.\\ chosen bases}\n   \\end{align*}\n   \\(GF\\) is the identity, provided we choose the standard bases for the spaces \\(K^n\\). \\(FG \\neq 1\\) but the chosen bases give isomorphisms \\(FG(V) = K^{\\dim V} \\to V\\) for each \\(V\\), which form a natural isomorphism.\n  \\end{enumerate}\n\\end{eg}\n\nExample 3 illustrates a general principle: when constructing a pair of functors between equivalent categories, usually one is ``canonical'' and the other requires some choice, and a clever choice results in a particularly simple form for one of the two composites. The next theorem abstracts away the ``choice'' and tells us when a functor is part of an equivalence purely by its properties.\n\nThe criterion is stated in terms of ``bijectivity'' of functors, informally. It is generally a bad idea to look at sur/injectivity of functors on objects. Instead the correct way is to look at their behaviour on morphisms.\n\n\\begin{definition}[faithful, full, essentially surjective]\\index{functor!faithful}\\index{functor!full}\\index{functor!essentially surjective}\n  Let \\(\\c C \\xrightarrow{F} \\c D\\) be a functor.\n  \\begin{enumerate}\n  \\item \\(F\\) is \\emph{faithful} if given \\(f, f' \\in \\mor \\c C\\) with \\(\\dom f = \\dom f', \\cod f = \\cod f'\\) and \\(Ff = Ff'\\), then \\(f = f'\\).\n  \\item \\(F\\) is \\emph{full} if given \\(FA \\xrightarrow{g} FB\\) in \\(\\c D\\) then there exists \\(A \\xrightarrow{f} B\\) in \\(\\c C\\) with \\(Ff = g\\).\n  \\item \\(F\\) is \\emph{essentially surjective} if for every \\(B \\in \\ob \\c D\\) there exists \\(A \\in \\ob \\c C\\) and an isomorphism \\(FA \\to B\\) in \\(\\c D\\).\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{definition}\n  A subcategory \\(\\c C' \\subseteq \\c C\\) is \\emph{full} if the inclusion \\(\\c C' \\to \\c C\\) is a full functor.\n\\end{definition}\n\n\\begin{eg}\n  \\(\\c{Gp}\\) is a full subcategory of \\(\\c{Mon}\\) but \\(\\c{Mon}\\) is not a full subcategory of the category \\(\\c{SGp}\\) of semigroups.\n\\end{eg}\n\n\\begin{lemma}\n  Assuming the axiom of choice, a functor \\(F: \\c C \\to \\c D\\) is part of an equivalence \\(\\c C \\simeq \\c D\\) if and only if it's full, faithful and essentially surjective.\n\\end{lemma}\n\n\\begin{proof}\n  Suppose given \\(G, \\alpha, \\beta\\) as in the definition of equivalence of categories. Then for each \\(B \\in \\ob \\c D\\), \\(\\beta_B\\) is an isomorphism \\(FGB \\to B\\) so \\(F\\) is essentially surjective.\n\n  Given \\(A \\xrightarrow{f} B\\) in \\(\\c C\\), we can recover \\(f\\) from \\(Ff\\) via conjugation by \\(\\alpha\\):\n  \\[\n    \\begin{tikzcd}\n      GFA \\ar[r, \"GFf\"] & GFB \\\\\n      A \\ar[u, \"\\alpha_A\"] \\ar[r, \"f\"] & B \\ar[u, \"\\alpha_B\"]\n    \\end{tikzcd}\n  \\]\n  Hence if \\(A \\xrightarrow{f'} B\\) satisfies \\(Ff = Ff'\\) then \\(f = f'\\).\n\n  Similarly there is a natural preimage given a morphism in \\(\\c D\\). Given \\(FA \\xrightarrow{g} FB\\), define \\(f\\) to be the composite\n  \\[\n    \\begin{tikzcd}\n      GFA \\ar[r, \"Gg\"] & GFB \\\\\n      A \\ar[u, \"\\alpha_A\"] \\ar[r, dashed] & B \\ar[u, \"\\alpha_B\"]\n    \\end{tikzcd}\n  \\]\n  Then \\(GFf = \\alpha_B f \\alpha_A^{-1} = Gg\\). 
As \\(G\\) is faithful for the same reasons as \\(F\\), \\(Ff = g\\).\n\n  Conversely, for each \\(B \\in \\ob \\c D\\), choose \\(GB \\in \\ob \\c C\\) and an isomorphism \\(\\beta_B: FGB \\to B\\) in \\(\\c D\\). Given \\(B \\xrightarrow{g} B'\\), define \\(Gg: GB \\to GB'\\) to be the unique morphism whose image under \\(F\\) is the composition\n  \\[\n    \\begin{tikzcd}\n      B \\ar[r, \"g\"] & B' \\\\\n      FGB \\ar[u, \"\\beta_B\"] \\ar[r, dashed] & FGB' \\ar[u, \"\\beta_{B'}\"]\n    \\end{tikzcd}\n  \\]\n  Faithfulness implies functoriality: given \\(B' \\xrightarrow{g'} B''\\), \\((Gg') (Gg)\\) and \\(G(g'g)\\) have the same image under \\(F\\) so they are equal.\n\n  By construction, \\(\\beta\\) is a natural transformation \\(FG \\to 1_{\\c D}\\).\n\n  Given \\(A \\in \\ob \\c C\\), define \\(\\alpha_A: A \\to GFA\\) to be the unique morphism whose image under \\(F\\) is\n  \\[\n    FA \\xrightarrow{\\beta_{FA}^{-1}} FGFA.\n  \\]\n  \\(\\alpha_A\\) is an isomorphism since \\(\\beta_{FA}\\) also has a unique preimage under \\(F\\). Finally, \\(\\alpha\\) is a natural transformation, since any naturality square for \\(\\alpha\\) is mapped by \\(F\\) to a commutative square (corresponding to a naturality square for \\(\\beta\\)) and \\(F\\) is faithful.\n\\end{proof}\n\nNote that the axiom of choice is only used in the `if' part. The lemma is useful as it saves us from making explicit choices when showing an equivalence by exhibiting inverses. However, note that the choice is still required, even if it is not made explicitly.\n\n\\begin{definition}[skeleton]\\index{skeleton}\\index{category!skeletal}\n  By a \\emph{skeleton} of a category \\(\\c C\\) we mean a full subcategory \\(\\c C_0\\) containing one object from each isomorphism class. We say \\(\\c C\\) is \\emph{skeletal} if it's a skeleton of itself.\n\\end{definition}\n\n\\begin{eg}\n  \\(\\c{Mat}_K\\) is skeletal and the image of \\(F: \\c{Mat}_K \\to \\c{fdMod}_K\\) is a skeleton of \\(\\c{fdMod}_K\\) (essentially because \\(F\\) is full and faithful).\n\\end{eg}\n\n\\begin{remark}\n  Almost any assertion about skeletons is equivalent to the axiom of choice. See example sheet 1 Q2. This is one reason why we should not restrict our attention to skeletal categories.\n\\end{remark}\n\n\\begin{definition}[monomorphism, epimorphism]\\index{monomorphism}\\index{epimorphism}\n  Let \\(A \\xrightarrow{f} B\\) be a morphism in \\(\\c C\\).\n  \\begin{enumerate}\n  \\item We say \\(f\\) is a \\emph{monomorphism} or \\emph{monic} if given any pair \\(g, h: C \\to A\\), \\(fg = fh\\) implies \\(g = h\\).\n  \\item We say \\(f\\) is an \\emph{epimorphism} or \\emph{epic} if it is a monomorphism in \\(\\c C^{\\text{op}}\\), i.e.\\ given any pair \\(g, h: B \\to C\\), \\(gf = hf\\) implies \\(g = h\\).\n  \\end{enumerate}\n\n  We denote monomorphisms by \\(f: A \\mono B\\) and epimorphisms by \\(f: A \\epi B\\).\n\\end{definition}\n\nAny isomorphism is monic and epic. More generally if \\(f\\) has a left inverse then it's monic. We call such monomorphisms \\emph{split}\\index{monomorphism!split}.\n\n\\begin{definition}[balanced]\\index{category!balanced}\n  We say \\(\\c C\\) is a \\emph{balanced} category if any morphism which is both monic and epic is an isomorphism.\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item In \\(\\Set\\), monomorphism is precisely an injection (one direction is easy and for the other direction take \\(C = 1 = \\{*\\}\\)) and epimorphism is precisely a surjection (use morphisms \\(B \\to 2 = \\{0, 1\\}\\)). 
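In detail: if \\(f: A \\to B\\) is not surjective, choose \\(b \\in B\\) outside the image of \\(f\\) and let \\(g, h: B \\to 2\\) be the constant function \\(0\\) and the characteristic function of \\(\\{b\\}\\) respectively; then \\(gf = hf\\) but \\(g \\neq h\\), so \\(f\\) is not epic.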
Thus \\(\\Set\\) is balanced.\n  \\item In \\(\\c{Gp}\\), monomorphism is precisely an injection (use homomorphisms from the free group with one generator, i.e.\\ \\(\\Z \\to A\\)) and epimorphism is precisely a surjection (use free products with amalgamation). Thus \\(\\c{Gp}\\) is balanced.\n  \\item In \\(\\c{Rng}\\), monomorphism is precisely an injection (similar to the free group argument) but the inclusion \\(\\Z \\to \\Q\\) is an epimorphism, since if \\(f, g: \\Q \\to R\\) agree on all integers they agree everywhere. So \\(\\c{Rng}\\) is not balanced.\n  \\item In \\(\\Top\\), monomorphism is precisely an injection and epimorphism is precisely a surjection (same argument as \\(\\Set\\)) but \\(\\Top\\) is not balanced since a continuous bijection need not have a continuous inverse.\n  \\end{enumerate}\n\\end{eg}\n\n\\section{The Yoneda lemma}\n\nIt may seem weird to devote an entire chapter to a lemma. However, although the Yoneda lemma is indeed a lemma, with a simple statement and straightforward proof, it is much more than a normal lemma and underlies the whole of category theory.\n\nHere is a little story about the Yoneda lemma. It is named after Nobuo Yoneda, who is better known as a computer scientist than a mathematician. The result is likely not due to him, and he never actually wrote the lemma down, despite some books claiming that it first appeared in one of Yoneda's papers. The story, according to Saunders Mac Lane, is that after a conference he met Yoneda on a train platform. In the course of conversation Yoneda told him the result; Mac Lane was not aware of it at the time but immediately recognised its importance. Mac Lane later attributed the lemma to Yoneda and, because of Mac Lane's standing in category theory, the name stuck. Perhaps this is the first and only result in mathematics to be enunciated on a train platform!\n\n\\begin{definition}[locally small category]\\index{category!locally small}\n  We say a category \\(\\c C\\) is \\emph{locally small} if, for any two objects \\(A, B\\), the morphisms \\(A \\to B\\) in \\(\\c C\\) form a set \\(\\c C(A, B)\\).\n\\end{definition}\n\nIf we fix \\(A\\) and let \\(B\\) vary, the assignment \\(B \\mapsto \\c C(A, B)\\) becomes a functor \\(\\c C(A, -): \\c C \\to \\Set\\): given \\(B \\xrightarrow{f} C\\), \\(\\c C(A, f)\\) is the mapping \\(g \\mapsto fg\\). Similarly, \\(A \\mapsto \\c C(A, B)\\) defines a functor \\(\\c C(-, B): \\c C^{\\text{op}} \\to \\Set\\).\n\n\\begin{lemma}[Yoneda lemma]\\index{Yoneda lemma}\n  \\label{lem:Yoneda}\n  Let \\(\\c C\\) be a locally small category, \\(A \\in \\ob \\c C\\) and \\(F: \\c C \\to \\Set\\) a functor. Then natural transformations \\(\\c C(A, -) \\to F\\) are in bijection with elements of \\(FA\\).\n\n  Moreover, this bijection is natural in both \\(A\\) and \\(F\\).\n\\end{lemma}\n\n\\begin{proof}\n  We prove the first part now. The second part follows very easily once we have gained some intuition, so we'll come back to it later. 
Given \\(\\alpha: \\c C(A, -) \\to F\\), we define\n  \\[\n    \\Phi(\\alpha) = \\alpha_A(1_A) \\in FA.\n  \\]\n  Conversely, given \\(x \\in FA\\), we define \\(\\Psi(x): \\c C(A, -) \\to F\\) by\n  \\[\n    \\Psi(x)_B (A \\xrightarrow{f} B) = Ff(x) \\in FB\n  \\]\n  which is natural: given \\(g: B \\to C\\), we have\n  \\begin{align*}\n    \\Psi(x)_C \\c C(A, g)(f) &= \\Psi(x)_C (gf) = F(gf) (x) \\\\\n    (Fg) \\Psi(x)_B (f) &= (Fg) (Ff)(x) = F(gf) (x)\n  \\end{align*}\n  \\[\n    \\begin{tikzcd}\n      \\c C(A, B) \\ar[rrr, \"{\\c C(A, g)}\"] \\ar[ddd, \"\\Psi(x)_B\"] &&& \\c C(A, C) \\ar[ddd, \"\\Psi(x)_C\"] \\\\\n      & f \\ar[r, mapsto] \\ar[d, mapsto] & gf \\ar[d, mapsto] \\\\\n      & Ff(x) \\ar[r, mapsto] & F(gf)(x) \\\\\n      FB \\ar[rrr, \"Fg\"] &&& FC\n    \\end{tikzcd}\n  \\]\n  by functoriality of \\(F\\). It remains to show that they are inverse to each other.\n  \\begin{align*}\n    \\Phi\\Psi(x) &= \\Psi(x)_A(1_A) = F(1_A) (x) = x \\\\\n    \\Psi\\Phi(\\alpha)_B(f) &= \\Psi (\\alpha_A(1_A))_B (f) = Ff (\\alpha_A(1_A)) = \\alpha_B \\c C(A, f) (1_A) = \\alpha_B(f)\n  \\end{align*}\n  so \\(\\Psi \\Phi(\\alpha) = \\alpha\\).\n\\end{proof}\n\n\\begin{corollary}[Yoneda embedding]\\index{Yoneda embedding}\n  The assignment \\(A \\mapsto \\c C(A, -)\\) defines a full and faithful functor \\(\\c C^{\\text{op}} \\to [\\c C, \\Set]\\).\n\\end{corollary}\n\n\\begin{proof}\n  Putting \\(F = \\c C(B, -)\\) in the above proof, we get a bijection between \\(\\c C(B, A)\\) and morphisms \\(\\c C(A, -) \\to \\c C(B, -)\\) in \\([\\c C, \\Set]\\). We need to verify that this is functorial. But it sends \\(f: B \\to A\\) to the natural transformation \\(g \\mapsto gf\\). So functoriality follows from associativity.\n\\end{proof}\n\nWe call this functor (or the functor \\(\\c C \\to [\\c C^{\\text{op}}, \\Set]\\) sending \\(A\\) to \\(\\c C(-, A)\\)) the \\emph{Yoneda embedding} of \\(\\c C\\) and typically denote it by \\(Y\\). At first glance, \\([\\c C, \\Set]\\) seems a much more complicated and unwieldy entity. However, its objects are concrete, set-valued functors, and thus easier to deal with. It is in analogy with group representations: instead of an abstract group, we consider its action on a set, which is more explicit and concrete. We'll come back to this point in a minute.\n\nNow we return to the second part of the lemma. Suppose for the moment that \\(\\c C\\) is small, so that \\([\\c C, \\Set]\\) is locally small. Then we have two functors \\(\\c C \\times [\\c C, \\Set] \\to \\Set\\): one sends \\((A, F)\\) to \\(FA\\), and the other is the composite\n\\[\n  \\c C \\times [\\c C, \\Set]\n  \\xrightarrow{Y \\times 1} [\\c C, \\Set]^{\\text{op}} \\times [\\c C, \\Set]\n  \\xrightarrow{[\\c C, \\Set] (-, -)} \\Set\n\\]\nwhere the last map maps a pair of functors to the set of natural transformations between them, and the naturality in the statement of the Yoneda lemma says that these are naturally isomorphic. We can translate this into an elementary statement, making sense even when \\(\\c C\\) isn't small: given \\(A \\xrightarrow{f} B\\) and \\(F \\xrightarrow{\\alpha} G\\), there are two ways of producing an element of \\(GB\\) from a natural transformation \\(\\c C(A, -) \\to F\\). 
Given \\(\\beta: \\c C(A, -) \\to F\\), the two ways give the same result, namely\n\\[\n  \\alpha_B(Ff) \\beta_A (1_A) = (Gf)\\alpha_A \\beta_A (1_A)\n\\]\nwhich is equal to \\(\\alpha_B\\beta_B(f)\\).\n\\[\n  \\begin{tikzcd}\n    {[\\c C, \\Set](\\c C(A, -), F)} \\ar[rrr] \\ar[ddd] \\ar[dr, \"\\Phi_{A, F}\"] &&& {[\\c C, \\Set](\\c C(B, -), F)} \\ar[ddd] \\ar[dl, \"\\Phi_{B, F}\"] \\\\\n    & FA \\ar[r, \"Ff\"] \\ar[d, \"\\alpha_A\"] & FB \\ar[d, \"\\alpha_B\"] \\\\\n    & GA \\ar[r, \"Gf\"] & GB \\\\\n    {[\\c C, \\Set](\\c C(A, -), G)} \\ar[rrr] \\ar[ur, \"\\Phi_{A, G}\"] &&& {[\\c C, \\Set](\\c C(B, -), G)} \\ar[ul, \"\\Phi_{B, G}\"] \\\\\n  \\end{tikzcd}\n\\]\n\n\\begin{definition}[representable functor, representation]\\index{functor!representable}\\index{universal element}\n  We say a functor \\(F: \\c C \\to \\Set\\) is \\emph{representable} if it's isomorphic to \\(\\c C(A, -)\\) for some \\(A\\).\n\n  By a \\emph{representation} of \\(F\\), we mean a pair \\((A, x)\\) where \\(x \\in FA\\) is such that \\(\\Psi(x)\\) is an isomorphism. We also call \\(x\\) a \\emph{universal element} of \\(F\\).\n\\end{definition}\n\n\\begin{corollary}\n  If \\((A, x)\\) and \\((B, y)\\) are both representations of \\(F\\), then there is a unique isomorphism \\(f: A \\to B\\) such that \\((Ff)(x) = y\\).\n\\end{corollary}\n\n\\begin{proof}\n  Consider the composite\n  \\[\n    \\c C(B, -) \\xrightarrow{\\Psi(y)} F \\xrightarrow{\\Psi(x)^{-1}} \\c C(A, -).\n  \\]\n  By the Yoneda embedding, this is of the form \\(Y(f)\\) for a unique isomorphism \\(f: A \\to B\\) and the diagram\n  \\[\n    \\begin{tikzcd}\n      \\c C(B, -) \\ar[rr, \"Y(f)\"] \\ar[dr, \"\\Psi(y)\"'] & & \\c C(A, -) \\ar[dl, \"\\Psi(x)\"] \\\\\n      & F\n    \\end{tikzcd}\n  \\]\n  commutes if and only if \\((Ff) (x) = y\\).\n\\end{proof}\n\n\\begin{eg}\\leavevmode\n  \\label{eg:representable functor}\n  \\begin{enumerate}\n  \\item The forgetful functor \\(\\c{Gp} \\to \\Set\\) is representable by \\((\\Z, 1)\\). Similarly the forgetful functor \\(\\c{Rng} \\to \\Set\\) is representable by \\((\\Z[x], x)\\). The forgetful functor \\(\\Top \\to \\Set\\) is representable by \\((\\{*\\}, *)\\).\n  \\item The functor \\(P^*: \\Set^{\\text{op}} \\to \\Set\\) is representable by \\((\\{0, 1\\}, \\{1\\})\\). This is the bijection between subsets and characteristic functions.\n  \\item Let \\(G\\) be a group. The unique (up to isomorphism) representable functor \\(G(*, -): G \\to \\Set\\) is the \\emph{Cayley representation} of \\(G\\), i.e.\\ the set \\(UG\\) with \\(G\\) acting by left multiplication.\n  \\item Let \\(A\\) and \\(B\\) be two objects of a locally small category \\(\\c C\\). Then we have a functor \\(\\c C^{\\text{op}} \\to \\Set\\) sending \\(C\\) to \\(\\c C(C, A) \\times \\c C(C, B)\\) (note that this is a purely categorical product and requires only the cartesian product of the morphism sets). 
A representation of this, if it exists, is called a (categorical) \\emph{product}\\index{product} of \\(A\\) and \\(B\\), and is denoted\n    \\[\n      \\begin{tikzcd}\n        & A \\times B \\ar[dl, \"\\pi_1\"'] \\ar[dr, \"\\pi_2\"] \\\\\n        A & & B\n      \\end{tikzcd}\n    \\]\n    This pair has the property that, for any pair \\((C \\xrightarrow{f} A, C \\xrightarrow{g} B)\\), there is a unique \\(C \\xrightarrow{h} A \\times B\\) with \\(\\pi_1 h = f\\) and \\(\\pi_2 h = g\\).\n    \\[\n      \\begin{tikzcd}\n        & A \\times B \\ar[dl, \"\\pi_1\"'] \\ar[dr, \"\\pi_2\"] \\\\\n        A & C \\ar[l, \"f\"] \\ar[r, \"g\"] \\ar[u, \"h\", dashed] & B\n      \\end{tikzcd}\n    \\]\n\n    Products exist in many categories of interest: in \\(\\Set, \\c{Gp}, \\c{Rng}, \\Top, \\dots\\) they are ``just'' cartesian products. In posets they are binary meets.\n\n    Dually, we have the notion of \\emph{coproduct}\\index{coproduct}\n    \\[\n      \\begin{tikzcd}\n        & A + B \\\\\n        A \\ar[ur, \"\\nu_1\"] & & B \\ar[ul, \"\\nu_2\"']\n      \\end{tikzcd}\n    \\]\n    These also exist in many categories of interest.\n  \\item Let \\(f, g: A \\to B\\) be morphisms in a locally small category \\(\\c C\\). We have a functor \\(F: \\c C^{\\text{op}} \\to \\Set\\) defined by\n    \\[\n      F(C) = \\{h \\in \\c C(C, A): fh = gh\\},\n    \\]\n    which is a subfunctor of \\(\\c C(-, A)\\). A representation of \\(F\\), if it exists, is called an \\emph{equaliser}\\index{equaliser} of \\((f, g)\\). It consists of an object \\(E\\) and a morphism \\(E \\xrightarrow{e} A\\) such that \\(fe = ge\\) and every \\(h\\) with \\(fh = gh\\) factors uniquely through \\(e\\). In \\(\\Set\\), we take \\(E = \\{x \\in A: f(x) = g(x)\\}\\) and \\(e\\) to be the inclusion. Similar constructions work in \\(\\c{Gp}, \\c{Rng}, \\Top, \\dots\\)\n    \\[\n      \\begin{tikzcd}\n        E \\ar[r, \"e\"] & A \\ar[r, \"f\", shift left] \\ar[r, \"g\"', shift right] & B \\\\\n        C \\ar[u, dashed] \\ar[ur, \"h\"]\n      \\end{tikzcd}\n    \\]\n\n    Dually we have the notion of \\emph{coequaliser}\\index{coequaliser}.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{remark}\n  If \\(e\\) occurs as an equaliser then it is a monomorphism, since any \\(h\\) factors through it in at most one way. We say a monomorphism is \\emph{regular}\\index{monomorphism!regular} if it occurs as an equaliser.\n\n  Split monomorphisms are regular (see example sheet 1 Q6 (i)). Note that a regular mono that is also epic is an isomorphism: if the equaliser \\(e\\) of \\((f, g)\\) is epic then \\(f = g\\), so \\(e \\cong 1_{\\cod e}\\).\n\\end{remark}\n\n\\begin{definition}[separating/detecting family, separator, detector]\\index{separating family}\\index{detecting family}\\index{separator}\\index{detector}\n  Let \\(\\c C\\) be a category, \\(\\mathcal G\\) a class of objects of \\(\\c C\\).\n  \\begin{enumerate}\n  \\item We say \\(\\mathcal G\\) is a \\emph{separating family} for \\(\\c C\\) if, given \\(f, g: A \\to B\\) such that \\(fh = gh\\) for all \\(G \\xrightarrow{h} A\\) with \\(G \\in \\mathcal G\\), then \\(f = g\\). 
(i.e.\\ the functors \\(\\c C(G, -)\\) with \\(G \\in \\mathcal G\\) are collectively faithful)\n  \\item We say \\(\\mathcal G\\) is a \\emph{detecting family} for \\(\\c C\\) if, given \\(A \\xrightarrow{f} B\\) such that every \\(G \\xrightarrow{h} B\\) with \\(G \\in \\mathcal G\\) factors uniquely through \\(f\\), then \\(f\\) is an isomorphism.\n  \\end{enumerate}\n\n  If \\(\\mathcal G = \\{G\\}\\) then we call \\(G\\) a \\emph{separator} or \\emph{detector}.\n\\end{definition}\n\n\\begin{lemma}\\leavevmode\n  \\begin{enumerate}\n  \\item If \\(\\c C\\) is a balanced category then any separating family is detecting.\n  \\item If \\(\\c C\\) has equalisers (i.e.\\ every pair has an equaliser) then any detecting family is separating.\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{enumerate}\n  \\item Suppose \\(\\mathcal G\\) is separating and \\(A \\xrightarrow{f} B\\) satisfies the condition in part 2 of the definition.\n    If \\(g, h: B \\to C\\) satisfy \\(gf = hf\\), then \\(gx = hx\\) for every \\(G \\xrightarrow{x} B\\), so \\(g = h\\), i.e.\\ \\(f\\) is epic.\n\n    Similarly if \\(k, \\ell: D \\to A\\) satisfy \\(fk = f\\ell\\) then \\(ky = \\ell y\\) for any \\(G \\xrightarrow{y} D\\), since both are factorisations of \\(fky\\) through \\(f\\). So \\(k = \\ell\\), i.e.\\ \\(f\\) is monic.\n  \\item Suppose \\(\\mathcal G\\) is detecting and \\(f, g: A \\to B\\) satisfy \\(fh = gh\\) for all \\(G \\xrightarrow{h} A\\) with \\(G \\in \\mathcal G\\). Then the equaliser \\(E \\xrightarrow{e} A\\) of \\((f, g)\\) is an isomorphism so \\(f = g\\).\n  \\end{enumerate}\n\\end{proof}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item In \\([\\c C, \\Set]\\) the family \\(\\{\\c C(A, -): A \\in \\ob \\c C\\}\\) is both separating and detecting. This is just a restatement of the Yoneda lemma.\n  \\item In \\(\\Set\\), \\(1 = \\{*\\}\\) is both a separator and a detector since it represents the identity functor \\(\\Set \\to \\Set\\). Similarly \\(\\Z\\) is both in \\(\\c{Gp}\\) since it represents the forgetful functor \\(\\c{Gp} \\to \\Set\\).\n\n    Dually, \\(2 = \\{0, 1\\}\\) is a coseparator and a codetector in \\(\\Set\\) since it represents \\(P^*: \\Set^{\\text{op}} \\to \\Set\\).\n  \\item In \\(\\Top\\), \\(1 = \\{*\\}\\) is a separator since it represents the forgetful functor \\(\\Top \\to \\Set\\), but not a detector. In fact \\(\\Top\\) has no detecting \\emph{set} of objects: for any infinite cardinality \\(\\kappa\\), let \\(X\\) be a discrete space of cardinality \\(\\kappa\\), and \\(Y\\) the same set with the ``co-\\(\\kappa\\)'' topology, i.e.\\ \\(F \\subseteq Y\\) closed if and only if \\(F = Y\\) or \\(F\\) has cardinality smaller than \\(\\kappa\\). The identity map \\(X \\to Y\\) is continuous but not a homeomorphism. So if \\(\\{G_i: i \\in I\\}\\) is any set of spaces, taking \\(\\kappa\\) larger than the cardinality of each \\(G_i\\) yields an example to show that the set is not detecting.\n  \\item Let \\(\\c C\\) be the category of pointed connected CW-complexes and homotopy classes of (basepoint-preserving) continuous maps. J.\\ H.\\ C.\\ Whitehead proved that if \\(X \\xrightarrow{f} Y\\) in this category induces isomorphisms \\(\\pi_n(X) \\to \\pi_n(Y)\\) for all \\(n\\) then it is an isomorphism in \\(\\c C\\). This says that \\(\\{S^n: n \\geq 1\\}\\) is a detecting set for \\(\\c C\\). 
But P.\\ J.\\ Freyd showed there is no faithful functor \\(\\c C \\to \\Set\\), so no separating \\emph{set}: if \\(\\{G_i: i \\in I\\}\\) were separating then\n    \\[\n      X \\mapsto \\coprod_{i \\in I} \\c C(G_i, X)\n    \\]\n    would be faithful.\n  \\end{enumerate}\n\\end{eg}\n\nNote that any functor of the form \\(\\c C(A, -)\\) preserves monos, but need not preserve epis. We give a name to the objects whose hom-functors do preserve them.\n\n\\begin{definition}[projective, injective]\\index{projective}\\index{injective}\n  We say an object \\(P\\) is \\emph{projective} if given\n  \\[\n    \\begin{tikzcd}\n      & P \\ar[d, \"f\"] \\ar[dl, \"g\"', dashed] \\\\\n      A \\ar[r, \"e\", twoheadrightarrow] & B\n    \\end{tikzcd}\n  \\]\n  there exists \\(P \\xrightarrow{g} A\\) with \\(eg = f\\). If \\(\\c C\\) is locally small, this says \\(\\c C(P, -)\\) preserves epimorphisms.\n\n  Dually an \\emph{injective} object of \\(\\c C\\) is a projective object of \\(\\c C^{\\text{op}}\\).\n\n  Given a class \\(\\mathcal E\\) of epimorphisms, we say \\(P\\) is \\emph{\\(\\mathcal E\\)-projective} if it satisfies the condition for all \\(e \\in \\mathcal E\\).\n\\end{definition}\n\n\\begin{lemma}\n  \\label{lem:representable functors are projectives}\n  Representable functors are (pointwise) projective in \\([\\c C, \\Set]\\).\n\\end{lemma}\n\n\\begin{proof}\n  Suppose given\n  \\[\n    \\begin{tikzcd}\n      & \\c C(A, -) \\ar[d, \"\\beta\"] \\ar[dl, \"\\gamma\"', dashed]\\\\\n      F \\ar[r, \"\\alpha\"] & G\n    \\end{tikzcd}\n  \\]\n  where \\(\\alpha\\) is pointwise surjective. By Yoneda, \\(\\beta\\) corresponds to some \\(y \\in GA\\) and we can find \\(x \\in FA\\) with \\(\\alpha_A(x) = y\\). Now if \\(\\gamma: \\c C(A, -) \\to F\\) corresponds to \\(x\\), then naturality of the Yoneda bijection yields \\(\\alpha\\gamma = \\beta\\).\n\\end{proof}\n\n\\section{Adjunctions}\n\n\\begin{definition}[adjunction]\\index{adjunction}\n  Let \\(\\c C\\) and \\(\\c D\\) be two categories and \\(F: \\c C \\to \\c D, G: \\c D \\to \\c C\\) be two functors. By an \\emph{adjunction} between \\(F\\) and \\(G\\) we mean a bijection between morphisms \\(\\hat f: FA \\to B\\) in \\(\\c D\\) and \\(f: A \\to GB\\) in \\(\\c C\\), which is natural in \\(A\\) and \\(B\\), i.e.\\ given \\(A' \\xrightarrow{g} A\\) and \\(B \\xrightarrow{h} B'\\), \\(h \\hat f(Fg) = \\widehat{(Gh) fg}: FA' \\to B'\\)\n  \\[\n    \\begin{tikzcd}\n      FA \\ar[r, \"\\hat f\"] & B \\ar[d, \"h\"] \\\\\n      FA' \\ar[u, \"Fg\"] \\ar[r, dashed] & B'\n    \\end{tikzcd}\n    \\qquad\n    \\begin{tikzcd}\n      A \\ar[r, \"f\"] & GB \\ar[d, \"Gh\"] \\\\\n      A' \\ar[u, \"g\"] \\ar[r, dashed] & GB'\n    \\end{tikzcd}\n  \\]\n  We say \\(F\\) is \\emph{left adjoint} to \\(G\\) and write \\(F \\adjoint G\\).\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item The functor \\(F: \\Set \\to \\c{Gp}\\) is left adjoint to the forgetful functor \\(U: \\c{Gp} \\to \\Set\\) since any function \\(f: A \\to UB\\) extends uniquely to a group homomorphism \\(\\hat f: FA \\to B\\) and any homomorphism induces a set function. Naturality in \\(B\\) is easy and naturality in \\(A\\) follows from the definition of \\(F\\) as a functor. 
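For instance, when \\(A = \\{x\\}\\) is a singleton we have \\(FA \\cong \\Z\\), and the bijection reads\n\\[\n  \\c{Gp}(\\Z, B) \\cong UB \\cong \\Set(\\{x\\}, UB),\n\\]\nsince a homomorphism out of \\(\\Z\\) is determined by the image of the generator, which may be any element of \\(B\\).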
Similar for \\(\\c{Rng}, \\c{Mod}_K, \\dots\\)\n  \\item The forgetful functor \\(U: \\Top \\to \\Set\\) has a left adjoint \\(D\\) which equips any set with the discrete topology and a right adjoint \\(I\\) which equips any set with the indiscrete topology so \\(D \\adjoint U \\adjoint I\\).\n  \\item The functor \\(\\ob: \\c{Cat} \\to \\Set\\) (recall that \\(\\c{Cat}\\) is the category of small categories) has a left adjoint \\(D\\) sending \\(A\\) to the discrete category with \\(\\ob(DA) = A\\) and only identity morphisms, and a right adjoint \\(I\\) sending \\(A\\) to the category with \\(\\ob(IA) = A\\) and one morphism \\(x \\to y\\) for each \\((x, y) \\in A \\times A\\). In this case \\(D\\) in turn has a left adjoint \\(\\pi_0\\) sending a small category \\(\\c C\\) to its set of \\emph{connected components}, i.e.\\ the quotient of \\(\\ob \\c C\\) by the smallest equivalence relation identifying \\(\\dom f\\) with \\(\\cod f\\) for all \\(f \\in \\mor \\c C\\). So \\(\\pi_0 \\adjoint D \\adjoint \\ob \\adjoint I\\).\n  \\item Let \\(M\\) be the monoid \\(\\{1, e\\}\\) with \\(e^2 = e\\). An object of \\([M, \\Set]\\) is a pair \\((A, e)\\) where \\(e: A \\to A\\) satisfying \\(e^2 = e\\). We have a functor \\(G: [M, \\Set] \\to \\Set\\) sending \\((A, e)\\) to\n    \\[\n      \\{x \\in A: e(x) = x\\} = \\{e(x): x \\in A\\}\n    \\]\n    and a functor \\(F: \\Set \\to [M, \\Set]\\) sending \\(A\\) to \\((A, 1_A)\\). Claim that\n    \\[\n      F \\adjoint G \\adjoint F.\n    \\]\n    Given \\(f: (A, 1_A) \\to (B, e)\\), it must take values in \\(G(B, e)\\) and any \\(g: (B, e) \\to (A, 1_A)\\) is determined by its values on the image of \\(e\\). In some way this is due to the two ways in which the fixed point of \\(e\\) can be written.\n  \\item Let \\(\\c 1\\) be the discrete category with one object \\(*\\). For any \\(\\c C\\), there is a unique functor \\(\\c C \\to \\c 1\\). A left adjoint for this picks out an \\emph{initial object}\\index{initial object} of \\(\\c C\\), i.e.\\ an object \\(I\\) such that there exists a unique \\(I \\to A\\) for each \\(A \\in \\ob C\\). Dually a right adjoint for \\(\\c C \\to \\c 1\\) corresponds to a \\emph{terminal object}\\index{terminal object} of \\(\\c C\\).\n  \\item Let \\(A \\xrightarrow{f} B\\) be a morphism in \\(\\Set\\). We can regard \\(PA\\) and \\(PB\\) as posets, and we have functors \\(Pf: PA \\to PB\\) and \\(P^*f: PB \\to PA\\). Claim \\(Pf \\adjoint P^*f\\): we have \\(Pf(A') \\subseteq B'\\) if and only if \\(f(x) \\in B'\\) for all \\(x \\in A'\\), if and only if \\(A' \\subseteq P^*f(B')\\).\n  \\item Galois connection: suppose givens sets \\(A\\) and \\(B\\) and a relation \\(R \\subseteq A \\times B\\). We define mappings \\(\\cdot^\\ell, \\cdot^r\\) between \\(PA\\) and \\(PB\\)  by\n    \\begin{align*}\n      S^r &= \\{y \\in B: \\forall x \\in S, (x, y) \\in R\\}, S \\subseteq A \\\\\n      T^\\ell &= \\{x \\in A: \\forall y \\in T, (x, y) \\in R\\}, T \\subseteq B\n    \\end{align*}\n    These mappings are order-reversing, i.e.\\ contravariant functors, and \\(T\\subseteq S^r\\) if and only if \\(S \\times T \\subseteq R\\), so by symmetry if and only if \\(S \\subseteq T^\\ell\\). 
We say \(\cdot^r\) and \(\cdot^\ell\) are \emph{adjoint on the right}\index{adjoints on the right}.
  \item The functor \(P^*: \Set^{\text{op}} \to \Set\) is self-adjoint on the right, since a function \(A \to PB\) corresponds bijectively to subsets of \(A \times B\), and hence by symmetry to functions \(B \to PA\). % IID Logic and set
  \end{enumerate}
\end{eg}

\begin{theorem}
  \label{thm:unit-counit characterisation of adjunction}
  Let \(G: \c D \to \c C\) be a functor. Then specifying a left adjoint for \(G\) is equivalent to specifying an initial object of \((A \downarrow G)\) for each \(A \in \ob \c C\), where \((A \downarrow G)\)\index{arrow category} has as objects pairs \((B, f)\) with \(A \xrightarrow{f} GB\), and morphisms \((B, f) \to (B', f')\) are morphisms \(B \xrightarrow{g} B'\) such that the following diagram commutes
  \[
    \begin{tikzcd}
      A \ar[r, "f"] \ar[dr, "f'"'] & GB \ar[d, "Gg"] \\
      & GB'
    \end{tikzcd}
  \]
\end{theorem}

\begin{proof}
  Suppose given \(F \adjoint G\). Consider the morphism \(\eta_A: A \to GFA\) corresponding to \(FA \xrightarrow{1} FA\). Then \((FA, \eta_A)\) is an object of \((A \downarrow G)\). Moreover, given \(g: FA \to B\) and \(f: A \to GB\), the diagram
  \[
    \begin{tikzcd}
      A \ar[r, "\eta_A"] \ar[dr, "f"'] & GFA \ar[d, "Gg"] \\
      & GB
    \end{tikzcd}
  \]
  commutes if and only if
  \[
    \begin{tikzcd}
      FA \ar[r, "1_{FA}"] \ar[dr, "\hat f"'] & FA \ar[d, "g"] \\
      & B
    \end{tikzcd}
  \]
  commutes, by the naturality condition in the definition of adjunction, i.e.\ if and only if \(g = \hat f\). So \((FA, \eta_A)\) is initial in \((A \downarrow G)\).

  Conversely, suppose given an initial object \((FA, \eta_A)\) of \((A \downarrow G)\) for each \(A\). Given \(f: A \to A'\), we define \(Ff: FA \to FA'\) to be the unique morphism making
  \[
    \begin{tikzcd}
      A \ar[r, "\eta_A"] \ar[d, "f"] & GFA \ar[d, "GFf"] \\
      A' \ar[r, "\eta_{A'}"] & GFA'
    \end{tikzcd}
  \]
  commute. Functoriality follows from uniqueness: given \(f': A' \to A''\), \(F(f'f)\) and \((Ff')(Ff)\) are both morphisms \((FA, \eta_A) \to (FA'', \eta_{A''}f'f)\) in \((A \downarrow G)\). To show \(F \adjoint G\): given \(A \xrightarrow{f} GB\), we define \(\hat f: FA \to B\) to be the unique morphism \((FA, \eta_A) \to (B, f)\) in \((A \downarrow G)\). This is a bijection with inverse
  \[
    (FA \xrightarrow{g} B) \mapsto (A \xrightarrow{\eta_A} GFA \xrightarrow{Gg} GB).
  \]
  The latter mapping is natural in \(B\) since \(G\) is a functor, and in \(A\) since by construction \(\eta\) is a natural transformation \(1_{\c C} \to GF\).
\end{proof}

\begin{corollary}
  If \(F\) and \(F'\) are both left adjoint to \(G: \c D \to \c C\) then they are naturally isomorphic.
\end{corollary}

\begin{proof}
  This follows from the fact that an initial object of any category, if it exists, is unique up to isomorphism.
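  Indeed, if \(I\) and \(I'\) are both initial then there are unique morphisms \(f: I \to I'\) and \(g: I' \to I\); both \(gf\) and \(1_I\) are morphisms \(I \to I\) out of an initial object, so \(gf = 1_I\), and similarly \(fg = 1_{I'}\).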
For any \(A\), \((FA, \eta_A)\) and \((F'A, \eta'_A)\) are both initial in \((A \downarrow G)\), so there is a unique isomorphism
  \[
    \alpha_A: (FA, \eta_A) \to (F'A, \eta'_A).
  \]
  In any naturality square for \(\alpha\), the two ways round are both morphisms in \((A \downarrow G)\) whose domain is initial, so they're equal.
\end{proof}

\begin{lemma}
  \label{lem:composition of adjoints}
  Given
  \[
    \begin{tikzcd}
      \c C \ar[r, "F", shift left] & \c D \ar[l, "G", shift left] \ar[r, "H", shift left] & \c E \ar[l, "K", shift left]
    \end{tikzcd}
  \]
  with \(F \adjoint G\) and \(H \adjoint K\), we have
  \[
    HF \adjoint GK.
  \]
\end{lemma}

\begin{proof}
  We have bijections between morphisms \(A \to GKC\), morphisms \(FA \to KC\) and morphisms \(HFA \to C\), which are natural in both \(A\) and \(C\).
\end{proof}

\begin{corollary}
  \label{cor:commutative square of adjoints}
  Given a commutative square
  \[
    \begin{tikzcd}
      \c C \ar[r] \ar[d] & \c D \ar[d] \\
      \c E \ar[r] & \c F
    \end{tikzcd}
  \]
  of categories and functors, if the functors all have left adjoints, then the diagram of left adjoints commutes up to natural isomorphism.
\end{corollary}

\begin{proof}
  By \Cref{lem:composition of adjoints}, both ways round the diagram of left adjoints are left adjoint to the composite \(\c C \to \c F\), so by \Cref{thm:unit-counit characterisation of adjunction} they are isomorphic.
\end{proof}

Actually, we didn't use the full strength of \Cref{lem:composition of adjoints}: if we merely require the original diagram to commute up to natural isomorphism, we get the same conclusion. In practice, however, the version stated above will usually suffice.

\begin{definition}[unit, counit]\index{unit}\index{counit}
  Given an adjunction \(F \adjoint G\), the natural transformation \(\eta: 1_{\c C} \to GF\) emerging in the proof of \Cref{thm:unit-counit characterisation of adjunction} is called the \emph{unit} of the adjunction.

  Dually, the natural transformation \(\varepsilon: FG \to 1_{\c D}\) whose component \(\varepsilon_B: FGB \to B\) corresponds to \(GB \xrightarrow{1_{GB}} GB\) is called the \emph{counit}.
\end{definition}

\begin{theorem}
  Given \(F: \c C \to \c D, G: \c D \to \c C\), specifying an adjunction \(F \adjoint G\) is equivalent to specifying two natural transformations
  \begin{align*}
    \eta: 1_{\c C} &\to GF \\
    \varepsilon: FG &\to 1_{\c D}
  \end{align*}
  satisfying the commutative diagrams
  \[
    \begin{tikzcd}
      F \ar[r, "F\eta"] \ar[dr, "1_F"'] & FGF \ar[d, "\varepsilon_F"] \\
      & F
    \end{tikzcd}
    \qquad
    \begin{tikzcd}
      G \ar[r, "\eta_G"] \ar[dr, "1_G"'] & GFG \ar[d, "G\varepsilon"] \\
      & G
    \end{tikzcd}
  \]
  which are called the \emph{triangular identities}.\index{triangular identities}
\end{theorem}

\begin{proof}
  First suppose given \(F \adjoint G\). Define \(\eta\) and \(\varepsilon\) as in \Cref{thm:unit-counit characterisation of adjunction} and its dual.
Now consider the composite
  \[
    FA \xrightarrow{F\eta_A} FGFA \xrightarrow{\varepsilon_{FA}} FA.
  \]
  Under the adjunction this corresponds to
  \[
    A \xrightarrow{\eta_A} GFA \xrightarrow{1_{GFA}} GFA
  \]
  but this also corresponds to \(1_{FA}\), so \(\varepsilon_{FA} F\eta_A = 1_{FA}\). The other identity is dual.

  Conversely, suppose given \(\eta\) and \(\varepsilon\) satisfying the triangular identities. Given \(A \xrightarrow{f} GB\), let \(\Phi(f)\) be the composite
  \[
    FA \xrightarrow{Ff} FGB \xrightarrow{\varepsilon_B} B
  \]
  and given \(FA \xrightarrow{g} B\), let \(\Psi(g)\) be
  \[
    A \xrightarrow{\eta_A} GFA \xrightarrow{Gg} GB.
  \]
  Both \(\Phi\) and \(\Psi\) are natural. We need to show that \(\Phi\Psi\) and \(\Psi\Phi\) are identity mappings. But
  \begin{align*}
    &\Psi\Phi(A \xrightarrow{f} GB) \\
    =& A \xrightarrow{\eta_A} GFA \xrightarrow{GFf} GFGB \xrightarrow{G \varepsilon_B} GB \\
    =& A \xrightarrow{f} GB \xrightarrow{\eta_{GB}} GFGB \xrightarrow{G\varepsilon_B} GB \\
    =& A \xrightarrow{f} GB
  \end{align*}
  where the second equality is naturality of \(\eta\) and the third is one of the triangular identities. Dually \(\Phi\Psi(g) = g\).
\end{proof}

Sometimes this is taken to be the definition of an adjunction.

Obviously two inverse functors form an adjunction. We have seen before a weaker notion of inverse, namely a pair of functors forming an equivalence of categories. The question is: do they always form an adjunction? The answer is yes, but sometimes we can't see it because we've chosen the wrong isomorphisms.

\begin{lemma}
  Given functors \(F: \c C \to \c D, G: \c D \to \c C\) and natural isomorphisms
  \begin{align*}
    \alpha: 1_{\c C} &\to GF \\
    \beta: FG &\to 1_{\c D}
  \end{align*}
  there are natural isomorphisms
  \begin{align*}
    \alpha': 1_{\c C} &\to GF \\
    \beta': FG &\to 1_{\c D}
  \end{align*}
  which satisfy the triangular identities, so \(F \adjoint G\) and \(G \adjoint F\).
\end{lemma}

This is often summarised as ``every equivalence is an adjoint equivalence''.

\begin{proof}
  We fix \(\alpha' = \alpha\) and modify \(\beta\) by conjugation.

  Let \(\beta'\) be the composite
  \[
    FG \xrightarrow{(FG\beta)^{-1}} FGFG \xrightarrow{(F\alpha_G)^{-1}} FG \xrightarrow{\beta} 1_{\c D}.
  \]
  Note that \(FG\beta = \beta_{FG}\) since
  \[
    \begin{tikzcd}
      FGFG \ar[r, "FG\beta"] \ar[d, "\beta_{FG}"] & FG \ar[d, "\beta"] \\
      FG \ar[r, "\beta"] & 1_{\c D}
    \end{tikzcd}
  \]
  commutes by naturality of \(\beta\), and \(\beta\) is monic (being an isomorphism).

  Now \((\beta'_F)(F\alpha')\) is the composite
   \begin{align*}
     &F \xrightarrow{F\alpha} FGF \xrightarrow{(\beta_{FGF})^{-1}} FGFGF \xrightarrow{(F\alpha_{GF})^{-1}} FGF \xrightarrow{\beta_F} F \\
     =& F \xrightarrow{(\beta_F)^{-1}} FGF \xrightarrow{FGF\alpha} FGFGF \xrightarrow{(F\alpha_{GF})^{-1}} FGF \xrightarrow{\beta_F} F \\
     =& F \xrightarrow{(\beta_F)^{-1}} FGF \xrightarrow{\beta_F} F \\
     =& 1_F
   \end{align*}
  since \(GF \alpha = \alpha_{GF}\) (proved like \(FG\beta = \beta_{FG}\) above, using naturality of \(\alpha\) and the fact that \(\alpha\) is epic).
Similarly \((G\beta')(\alpha'_G)\) is
  \begin{align*}
    &G \xrightarrow{\alpha_G} GFG \xrightarrow{(GFG\beta)^{-1}} GFGFG \xrightarrow{(GF\alpha_G)^{-1}} GFG \xrightarrow{G\beta} G \\
    =& G \xrightarrow{(G\beta)^{-1}} GFG \xrightarrow{\alpha_{GFG}} GFGFG \xrightarrow{(GF\alpha_G)^{-1}} GFG \xrightarrow{G\beta} G \\
    =& G \xrightarrow{(G\beta)^{-1}} GFG \xrightarrow{G\beta} G \\
    =& 1_G
  \end{align*}
\end{proof}

\begin{lemma}
  Suppose \(G: \c D \to \c C\) has a left adjoint \(F\) with counit \(\varepsilon: FG \to 1_{\c D}\). Then
  \begin{enumerate}
  \item \(G\) is faithful if and only if \(\varepsilon\) is pointwise epic,
  \item \(G\) is full and faithful if and only if \(\varepsilon\) is an isomorphism.
  \end{enumerate}
\end{lemma}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item Given \(B \xrightarrow{g} B'\), \(Gg\) corresponds under the adjunction to the composite
    \[
      FGB \xrightarrow{\varepsilon_B} B \xrightarrow{g} B'.
    \]
    Hence the mapping \(g \mapsto Gg\) is injective on morphisms with domain \(B\) (and specified codomain) if and only if \(g \mapsto g\varepsilon_B\) is injective, if and only if \(\varepsilon_B\) is epic.
  \item Similarly, \(G\) is full and faithful if and only if \(g \mapsto g\varepsilon_B\) is bijective. In that case, surjectivity gives \(\alpha: B \to FGB\) with \(\alpha\varepsilon_B = 1_{FGB}\), i.e.\ a left inverse of \(\varepsilon_B\); then
    \[
      \varepsilon_B\alpha \varepsilon_B = \varepsilon_B = 1_B \varepsilon_B,
    \]
    whence \(\varepsilon_B\alpha = 1_B\) by injectivity of \(g \mapsto g\varepsilon_B\). So \(\varepsilon_B\) is an isomorphism, and thus \(\varepsilon\) is an isomorphism.
  \end{enumerate}
\end{proof}

\begin{definition}[reflection, reflective subcategory]\index{reflection}\index{reflective subcategory}
  By a \emph{reflection} we mean an adjunction in which the right adjoint is full and faithful (equivalently, the counit is an isomorphism).

  We say a full subcategory \(\c C' \subseteq \c C\) is \emph{reflective} if the inclusion \(\c C' \to \c C\) has a left adjoint.
\end{definition}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item The category \(\c{AbGp}\) of abelian groups is reflective in \(\c{Gp}\): the left adjoint sends a group \(G\) to its \emph{abelianization} \(G/G'\), where \(G'\) is the subgroup generated by all commutators \([x, y] = xyx^{-1}y^{-1}\) for \(x, y \in G\). (The unit of the adjunction is the quotient map \(G \to G/G'\).)

    This is where the name ``reflective subcategory'' comes from: a mirror is 2-dimensional, so reflection in a mirror cannot create a copy of a 3D object; however, it keeps every 2D detail fully and faithfully. Similarly the abelianization does not tell you everything about \(G\), but as much as an abelian group can. Cf.\ the universal property of the abelianization.
  \item Given an abelian group \(A\), let \(A_T\) denote the torsion subgroup, i.e.\ the subgroup of elements of finite order. The assignment \(A \mapsto A/A_T\) gives a left adjoint to the inclusion \(\c{tfAbGp} \to \c{AbGp}\), where \(\c{tfAbGp}\) is the full subcategory of torsion-free abelian groups.

    On the other hand \(A \mapsto A_T\) is right adjoint to the inclusion \(\c{tAbGp} \to \c{AbGp}\) of torsion abelian groups into abelian groups, so this subcategory is coreflective.
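    Explicitly, a homomorphism from a torsion group \(T\) into \(A\) must take values in \(A_T\), since the image of an element of finite order has finite order; this gives the natural bijection
    \[
      \c{AbGp}(T, A) \cong \c{tAbGp}(T, A_T).
    \]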
There are many examples of (co)reflective subcategories in algebra, and the constructions all give rise to important universal properties.
  \item Let \(\c{KHaus} \subseteq \Top\) be the full subcategory of compact Hausdorff spaces. The inclusion \(\c{KHaus} \to \Top\) has a left adjoint \(\beta\), the \emph{Stone-Čech compactification}\index{Stone-Čech compactification}, so \(\c{KHaus}\) is a reflective subcategory. We'll revisit this example later in the course.
  \item Let \(X\) be a topological space. We say \(A \subseteq X\) is \emph{sequentially closed} if \(x_n \to x_\infty\) and \(x_n \in A\) for all \(n\) imply \(x_\infty \in A\). Note that closed implies sequentially closed but not vice versa. We say \(X\) is \emph{sequential} if all sequentially closed sets are closed, e.g.\ any metric space. Given a space \(X\), let \(X_s\) be the same set with the topology given by the sequentially open sets (complements of sequentially closed sets) of \(X\). Certainly the identity \(X_s \to X\) is continuous, and it defines the counit of an adjunction between the inclusion \(\c{Seq} \to \Top\) and its right adjoint \(X \mapsto X_s\).
  \item If \(X\) is a topological space, the poset \(CX\) of closed subsets of \(X\) is reflective in \(PX\), with reflector given by closure; and the poset \(OX\) of open subsets is coreflective, with coreflector given by interior.
  \end{enumerate}
\end{eg}

\section{Limits}

\begin{definition}[diagram, cone, limit]\index{diagram}\index{cone}\index{limit}\leavevmode
  \begin{enumerate}
  \item Let \(\c J\) be a category (almost always small, often finite). By a \emph{diagram of shape \(\c J\)} in \(\c C\) we mean a functor \(D: \c J \to \c C\).
The objects \(D(j)\) for \(j \in \ob \c J\) are called \emph{vertices} of the diagram, and the morphisms \(D(\alpha)\) for \(\alpha \in \mor \c J\) are called \emph{edges} of \(D\).
 \item Given \(D: \c J \to \c C\), a \emph{cone} over \(D\) consists of an object \(A\) of \(\c C\), called the \emph{apex} of the cone, together with morphisms \(A \xrightarrow{\lambda_j} D(j)\) for each \(j \in \ob \c J\), called the \emph{legs} of the cone, such that
    \[
      \begin{tikzcd}
        & A \ar[dl, "\lambda_j"'] \ar[dr, "\lambda_{j'}"] \\
        D(j) \ar[rr, "D(\alpha)"] && D(j')
      \end{tikzcd}
    \]
    commutes for all \(j \xrightarrow{\alpha} j'\) in \(\mor \c J\).

    Given cones \((A, (\lambda_j)_{j \in \ob \c J})\) and \((B, (\mu_j)_{j \in \ob \c J})\), a \emph{morphism} of cones between them is a morphism \(A \xrightarrow{f} B\) such that
    \[
      \begin{tikzcd}
        A \ar[dr, "\lambda_j"'] \ar[rr, "f"] && B \ar[dl, "\mu_j"] \\
        & D(j)
      \end{tikzcd}
    \]
    commutes for all \(j\).

    We write \(\Cone(D)\) for the category of all cones over \(D\).
  \item A \emph{limit} for \(D\) is a terminal object of \(\Cone(D)\), if this exists.

    Dually we have the notion of a cone \emph{under} a diagram (sometimes called a \emph{cocone}) and of a \emph{colimit} (i.e.\ an initial cone under \(D\)).
  \end{enumerate}
\end{definition}

For example, if \(\c J\) is the category
\[
  \begin{tikzcd}
    \cdot \ar[r] \ar[d] \ar[dr] & \cdot \ar[d] \\
    \cdot \ar[r] & \cdot
  \end{tikzcd}
\]
with 4 objects and 5 non-identity morphisms, a diagram of shape \(\c J\) is a commutative square
\[
  \begin{tikzcd}
    A \ar[r, "f"] \ar[d, "g"] & B \ar[d, "h"] \\
    C \ar[r, "k"] & D
  \end{tikzcd}
\]

On the other hand, to express a not-necessarily-commutative square, we use the shape
\[
  \begin{tikzcd}
    \cdot \ar[r] \ar[d] \ar[dr, shift left] \ar[dr, shift right] & \cdot \ar[d] \\
    \cdot \ar[r] & \cdot
  \end{tikzcd}
\]

An alternative way to understand limits: if \(\c C\) is locally small and \(\c J\) is small, we have a functor \(\c C^{\text{op}} \to \Set\) sending \(A\) to the set of cones with apex \(A\). A limit for \(D\) is a representation of this functor.

A third way to visualise cones: if \(\Delta A\) denotes the constant diagram of shape \(\c J\) with all vertices \(A\) and all edges \(1_A\), then a cone over \(D\) with apex \(A\) is the same thing as a natural transformation \(\Delta A \to D\). \(\Delta\) is a functor \(\c C \to [\c J, \c C]\) and \(\Cone(D)\) is the arrow category \((\Delta \downarrow D)\). So to say that every diagram of shape \(\c J\) in \(\c C\) has a limit is equivalent to saying that \(\Delta\) has a right adjoint. (We say \(\c C\) \emph{has limits of shape \(\c J\)}.)

Dually \(\c C\) has colimits of shape \(\c J\) if and only if \(\Delta: \c C \to [\c J, \c C]\) has a left adjoint.

\begin{eg}\leavevmode
  \begin{enumerate}
  \item Suppose \(\c J = \emptyset\). There is a unique diagram of shape \(\c J\) in \(\c C\); a cone over it is just an object, and a morphism of cones is a morphism of \(\c C\). So a limit for the empty diagram is a terminal object of \(\c C\).
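    For instance, in \(\Set\) any singleton is terminal and \(\emptyset\) is initial, while in \(\c{Gp}\) the trivial group is both.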
We defined limits in terms of terminal objects, but now terminal objects also arise as special limits. Dually, a colimit for the empty diagram is an initial object.
  \item Let \(\c J\) be the category
    \[
      \begin{tikzcd}
        \cdot & \cdot
      \end{tikzcd}
    \]
    A diagram of shape \(\c J\) is a pair of objects \(A, B\); a cone over it is a span
    \[
      \begin{tikzcd}
        & C \ar[dl] \ar[dr] \\
        A & & B
      \end{tikzcd}
    \]
    and a limit for it is a \emph{product}\index{product}
    \[
      \begin{tikzcd}
        & A \times B \ar[dl, "\pi_1"'] \ar[dr, "\pi_2"] \\
        A & & B
      \end{tikzcd}
    \]
    as defined in example~4 on page~\pageref{eg:representable functor}. Dually a colimit for it is a \emph{coproduct}\index{coproduct}.

    More generally, if \(\c J\) is a small discrete category, a diagram of shape \(\c J\) is an indexed family \((A_j: j \in \c J)\), and a limit for it is a product \((\prod_{j \in \c J} A_j \xrightarrow{\pi_j} A_j: j \in \c J)\). Dually, a colimit is a coproduct \((A_j \xrightarrow{\nu_j} \sum_{j \in \c J} A_j: j\in \c J)\), sometimes also written \(\coprod_{j \in \c J} A_j\).
  \item Let \(\c J\) be the category
    \[
      \begin{tikzcd}
        \cdot \ar[r, shift left] \ar[r, shift right] & \cdot
      \end{tikzcd}
    \]
    A diagram of shape \(\c J\) is a parallel pair
    \[
      \begin{tikzcd}
        A \ar[r, shift left, "f"] \ar[r, shift right, "g"'] & B
      \end{tikzcd}
    \]
    a cone over it is
    \[
      \begin{tikzcd}
        & C \ar[dl, "h"'] \ar[dr, "k"] \\
        A & & B
      \end{tikzcd}
    \]
    satisfying \(fh = k = gh\), or equivalently a morphism \(h: C \to A\) satisfying \(fh = gh\). A (co)limit for the diagram is a \emph{(co)equaliser}\index{equaliser}\index{coequaliser} as defined in example~5 on page~\pageref{eg:representable functor}.
  \item Let \(\c J\) be the category
    \[
      \begin{tikzcd}
        & \cdot \ar[d] \\
        \cdot \ar[r] & \cdot
      \end{tikzcd}
    \]
    A diagram of shape \(\c J\) is a cospan
    \[
      \begin{tikzcd}
        & A \ar[d, "f"] \\
        B \ar[r, "g"] & C
      \end{tikzcd}
    \]
    a cone over it is
    \[
      \begin{tikzcd}
        D \ar[r, "p"] \ar[d, "q"] \ar[dr, "r"] & A \\
        B & C
      \end{tikzcd}
    \]
    satisfying \(fp = r = gq\), or equivalently a span \((p, q)\) completing the diagram to a commutative square. A limit for the diagram is called a \emph{pullback}\index{pullback} of \((f, g)\). In \(\Set\), the apex of the pullback is the ``fibre product''
    \[
      A \times_C B = \{(x, y) \in A \times B: f(x) = g(y)\}.
    \]
    Dually, colimits of shape \(\c J^{\text{op}}\) are \emph{pushouts}\index{pushout}. Given
    \[
      \begin{tikzcd}
        A \ar[r, "f"] \ar[d, "g"] & B \\
        C
      \end{tikzcd}
    \]
    we ``push \(g\) along \(f\)'' to get the RHS of the colimit square.
  \item Let \(\c J\) be the poset of natural numbers.
A diagram of shape \(\c J\) is a \emph{directed system}
    \[
      A_0 \xrightarrow{f_0} A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} A_3 \xrightarrow{f_3} \dots
    \]
    A colimit for this is called a \emph{direct limit}\index{direct limit}: it consists of \(A_\infty\) equipped with morphisms \(A_n \xrightarrow{g_n} A_\infty\) satisfying \(g_n = g_{n + 1} f_n\) for all \(n\), and universal among such. Dually we have \emph{inverse systems} and \emph{inverse limits}\index{inverse limit}.
  \end{enumerate}
\end{eg}

\begin{theorem}\leavevmode
  \label{thm:existence of limits}
  \begin{enumerate}
  \item Suppose \(\c C\) has equalisers and all finite (respectively small) products. Then \(\c C\) has all finite (respectively small) limits.
  \item Suppose \(\c C\) has pullbacks and a terminal object. Then \(\c C\) has all finite limits.
  \end{enumerate}
\end{theorem}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item Suppose given \(D: \c J \to \c C\). Form the industrial-strength products
    \[
      P = \prod_{j \in \ob \c J} D(j), \qquad Q = \prod_{\alpha \in \mor \c J} D(\cod \alpha).
    \]
    We have morphisms \(f, g: P \to Q\) defined by
    \[
      \pi_\alpha f = \pi_{\cod \alpha}, \qquad \pi_\alpha g = D(\alpha) \pi_{\dom \alpha}
    \]
    for all \(\alpha\). Let \(e: E \to P\) be an equaliser of \((f, g)\). The composites
    \[
      \lambda_j = \pi_j e: E \to D(j)
    \]
    form a cone over \(D\): given \(\alpha: j \to j'\) in \(\c J\),
    \[
      D(\alpha) \lambda_j = D(\alpha) \pi_j e = \pi_\alpha ge = \pi_\alpha fe = \pi_{j'} e = \lambda_{j'}.
    \]
    Given any cone \((A, (\mu_j: j \in \ob \c J))\) over \(D\), there is a unique \(\mu: A \to P\) with \(\pi_j \mu = \mu_j\) for each \(j\), and
    \[
      \pi_\alpha f \mu = \mu_{\cod \alpha} = D(\alpha) \mu_{\dom \alpha} = \pi_\alpha g \mu
    \]
    for all \(\alpha\), hence \(f\mu = g\mu\), so there exists a unique \(\nu: A \to E\) with \(e\nu = \mu\). So \((E, (\lambda_j: j \in \ob \c J))\) is a limit cone.
  \item It is enough to construct finite products and equalisers. But if \(1\) is the terminal object, then a pullback for
    \[
      \begin{tikzcd}
        & A \ar[d] \\
        B \ar[r] & 1
      \end{tikzcd}
    \]
    has the universal property of a product \(A \times B\), and we can form \(\prod_{i = 1}^n A_i\) inductively as
    \[
      A_1 \times (A_2 \times (A_3 \times \cdots \times (A_{n - 1} \times A_n) \cdots)),
    \]
    the empty product being the terminal object \(1\). Now to form the equaliser of \(f, g: A \to B\), consider the cospan
    \[
      \begin{tikzcd}
        & A \ar[d, "{(1_A, f)}"] \\
        A \ar[r, "{(1_A, g)}"] & A \times B
      \end{tikzcd}
    \]
    A cone over this consists of
    \[
      \begin{tikzcd}
      P \ar[r, "h"] \ar[d, "k"] & A \\
      A
      \end{tikzcd}
    \]
    satisfying \((1_A, f) h = (1_A, g) k\), or equivalently \(1_A h = 1_A k\) and \(fh = gk\), or equivalently a morphism \(h: P \to A\) satisfying \(fh = gh\). So a pullback for \((1_A, f)\) and \((1_A, g)\) is an equaliser of \((f, g)\).
  \end{enumerate}
\end{proof}

\begin{definition}[complete, cocomplete]\index{category!complete}\index{category!cocomplete}
  We say a category \(\c C\) is \emph{complete} if it has all small limits.
Dually, \(\c C\) is \emph{cocomplete} if it has all small colimits.
\end{definition}

For example, \(\Set\) is both complete and cocomplete: products are cartesian products and coproducts are disjoint unions. Similarly \(\c{Gp}, \c{AbGp}, \c{Rng}, \c{Mod}_K\) are all complete and cocomplete.\footnote{Note that the products have as underlying set the cartesian product of those of the components, but coproducts tend to be different. We'll discuss this later.} \(\Top\) is also complete and cocomplete, with limits and colimits constructed on the underlying sets.

\begin{definition}[preserve limit, reflect limit, create limit]\index{preserve limit}\index{reflect limit}\index{create limit}
  Let \(F: \c C \to \c D\) be a functor.
  \begin{enumerate}
  \item We say \(F\) \emph{preserves limits} of shape \(\c J\) if, given \(D: \c J \to \c C\) and a limit cone \((L, (\lambda_j: j \in \ob \c J))\) in \(\c C\), \((FL, (F\lambda_j: j \in \ob \c J))\) is a limit for \(FD\).
  \item We say \(F\) \emph{reflects limits} of shape \(\c J\) if, given \(D: \c J \to \c C\) and a cone \((L, (\lambda_j: j \in \ob \c J))\) such that \((FL, (F\lambda_j: j \in \ob \c J))\) is a limit for \(FD\), then \((L, (\lambda_j: j \in \ob \c J))\) is a limit for \(D\).
  \item We say \(F\) \emph{creates limits} of shape \(\c J\) if, given \(D: \c J \to \c C\) and a limit \((M, (\mu_j: j \in \ob \c J))\) for \(FD\), there exists a cone \((L, (\lambda_j: j \in \ob \c J))\) over \(D\) whose image under \(F\) is isomorphic to the limit cone, and any such cone is a limit for \(D\).
  \end{enumerate}
\end{definition}

\begin{remark}\leavevmode
  \begin{enumerate}
  \item If \(\c C\) has limits of shape \(\c J\) and \(F: \c C \to \c D\) preserves them and reflects isomorphisms, then \(F\) reflects limits of shape \(\c J\).
  \item \(F\) reflects limits of shape \(1\) if and only if \(F\) reflects isomorphisms.
  \item If \(\c D\) has limits of shape \(\c J\) and \(F: \c C \to \c D\) creates them, then \(F\) both preserves and reflects them.
  \item In either statement of \Cref{thm:existence of limits}, we may replace both instances of ``\(\c C\) has'' by either ``\(\c C\) has and \(F: \c C \to \c D\) preserves'' or ``\(\c D\) has and \(F: \c C \to \c D\) creates''.
  \end{enumerate}
\end{remark}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item \(U: \c{Gp} \to \Set\) creates all small limits: given a family \((G_i: i \in I)\) of groups, there is a unique group structure on \(\prod_{i \in I} UG_i\) making the projections homomorphisms, and this makes it a product in \(\c{Gp}\). Similarly for equalisers.

    But \(U\) doesn't preserve coproducts: \(U(G * H) \ncong UG \amalg UH\).
  \item \(U: \Top \to \Set\) preserves all small limits and colimits but doesn't reflect them: if \(L\) is a limit for \(D: \c J \to \Top\) and \(L\) is not discrete, there is another cone with apex \(L_d\), the same underlying set as \(L\) with the discrete topology, mapped to the same limit cone in \(\Set\).
  \item The inclusion functor \(I: \c{AbGp} \to \c{Gp}\) reflects coproducts, but doesn't preserve them.
The direct sum \(A \oplus B\) (the coproduct in \(\c{AbGp}\)) is not normally isomorphic to the free product \(A * B\) (the coproduct in \(\c{Gp}\)), which is not abelian unless \(A\) or \(B\) is trivial; but if \(A\) is trivial then \(A * B \cong A \oplus B \cong B\).
  \end{enumerate}
\end{eg}

The following lemma tells us how to construct limits in functor categories:

\begin{lemma}
  If \(\c D\) has limits of shape \(\c J\) then so does the functor category \([\c C, \c D]\) for any \(\c C\), and the forgetful functor \([\c C, \c D] \to \c D^{\ob \c C}\) creates them.
\end{lemma}

\begin{proof}
  Suppose given a diagram of shape \(\c J\) in \([\c C, \c D]\). Think of it as a functor \(D: \c J \times \c C \to \c D\). For each \(A \in \ob \c C\), let \((LA, (\lambda_{j, A}: j \in \ob \c J))\) be a limit cone for the diagram \(D(-, A): \c J \to \c D\).

  Given \(A \xrightarrow{f} B\) in \(\c C\), the composites
  \[
    LA \xrightarrow{\lambda_{j, A}} D(j, A) \xrightarrow{D(j, f)} D(j, B)
  \]
  form a cone over \(D(-, B)\), since the square
  \[
    \begin{tikzcd}
      D(j, A) \ar[r, "{D(j, f)}"] \ar[d, "{D(\alpha, A)}"] & D(j, B) \ar[d, "{D(\alpha, B)}"] \\
      D(j', A) \ar[r, "{D(j', f)}"] & D(j', B)
    \end{tikzcd}
  \]
  commutes. So there is a unique \(Lf: LA \to LB\) making
  \[
    \begin{tikzcd}
      LA \ar[r, "{\lambda_{j, A}}"] \ar[d, "Lf"] & {D(j, A)} \ar[d, "{D(j, f)}"] \\
      LB \ar[r, "{\lambda_{j, B}}"] & D(j, B)
    \end{tikzcd}
  \]
  commute for all \(j\). Functoriality follows from uniqueness: given \(g: B \to C\), \(L(gf)\) and \(L(g)L(f)\) are factorisations of the same cone through the limit \(LC\). And this is the unique functor structure on \(A \mapsto LA\) making the \(\lambda_{j, -}\) into natural transformations.

  The cone \((L, (\lambda_{j, -}: j \in \ob \c J))\) is a limit: suppose given another cone \((M, (\mu_{j, -}: j \in \ob \c J))\); then for each \(A\), \((MA, (\mu_{j, A}: j \in \ob \c J))\) is a cone over \(D(-, A)\), so it induces a unique \(\alpha_A: MA \to LA\). Naturality of \(\alpha\) follows from uniqueness of factorisations through a limit. So \((M, (\mu_j))\) factors uniquely through \((L, (\lambda_j))\). (This is creation in the strict sense, i.e.\ with equality instead of isomorphism.)
\end{proof}

\begin{remark}
  In any category, a morphism \(A \xrightarrow{f} B\) is monic if and only if
  \[
    \begin{tikzcd}
      A \ar[r, "1_A"] \ar[d, "1_A"] & A \ar[d, "f"] \\
      A \ar[r, "f"] & B
    \end{tikzcd}
  \]
  is a pullback. Hence any functor which preserves pullbacks preserves monomorphisms. In particular, if \(\c D\) has pullbacks, monomorphisms in \([\c C, \c D]\) are just pointwise monomorphisms. See example sheet 1 for a counterexample concerning the necessity of the pullback condition. Now we can delete the word ``pointwise'' in the statement of \Cref{lem:representable functors are projectives}.
\end{remark}

\begin{theorem}[right adjoints preserve limits]\index{right adjoint preserves limits}
  Suppose \(G: \c D \to \c C\) has a left adjoint. Then \(G\) preserves all limits which exist in \(\c D\).
\end{theorem}

\begin{proof}[Proof 1 with additional assumption]
  Suppose \(\c C\) and \(\c D\) both have limits of shape \(\c J\).
We have a commutative diagram
  \[
    \begin{tikzcd}
      \c C \ar[r, "F"] \ar[d, "\Delta"] & \c D \ar[d, "\Delta"] \\
      {[\c J, \c C]} \ar[r, "{[\c J, F]}"] & {[\c J, \c D]}
    \end{tikzcd}
  \]
  where \(\Delta\) sends an object to its constant diagram and \([\c J, F]\) is composition with \(F\). All functors in it have right adjoints (in particular, \([\c J, F] \adjoint [\c J, G]\)). So by \Cref{cor:commutative square of adjoints} the diagram of right adjoints
  \[
    \begin{tikzcd}
      \c D \ar[r, "G"] & \c C \\
      {[\c J, \c D]} \ar[u, "\lim_{\c J}"] \ar[r, "{[\c J, G]}"] & {[\c J, \c C]} \ar[u, "\lim_{\c J}"]
    \end{tikzcd}
  \]
  commutes up to isomorphism, i.e.\ \(G\) preserves limits of shape \(\c J\).
\end{proof}

\begin{proof}[Proof 2]
  Suppose given \(D: \c J \to \c D\) and a limit cone \((L, (\lambda_j: L \to D(j): j \in \ob \c J))\). Given a cone \((A, (\alpha_j: A \to GD(j): j \in \ob \c J))\) over \(GD\), the morphisms \(FA \xrightarrow{\hat \alpha_j} D(j)\) form a cone over \(D\), so they induce a unique \(FA \xrightarrow{\hat \beta} L\) such that \(\lambda_j \hat \beta = \hat \alpha_j\) for all \(j\). Then \(A \xrightarrow{\beta} GL\) is the unique morphism satisfying \((G\lambda_j)\beta = \alpha_j\) for all \(j\). So \((GL, (G\lambda_j: j \in \ob \c J))\) is a limit cone in \(\c C\).
\end{proof}

The last major theorem in this chapter is the adjoint functor theorem. It says that morally the converse of the above theorem is also true, i.e.\ a functor preserving all limits ought to have a left adjoint; this can only fail if some limits do not exist. The ``primeval'' adjoint functor theorem is exactly this: if \(\c D\) has and \(G: \c D \to \c C\) preserves \emph{all} limits, then \(G\) has a left adjoint. However, this is too strong a hypothesis, as a category having limits of arbitrary (not just small) shapes can be shown to be a preorder. Thus there are two further versions, which cut down the ``all limits'' requirement and use some set theory to replace part of it.

\begin{lemma}
  Suppose \(\c D\) has and \(G: \c D \to \c C\) preserves limits of shape \(\c J\). Then for any \(A \in \ob \c C\) the arrow category \((A \downarrow G)\) has limits of shape \(\c J\), and the forgetful functor \(U: (A \downarrow G) \to \c D\) creates them.
\end{lemma}

\begin{proof}
  Suppose given \(D: \c J \to (A \downarrow G)\). Write \(D(j)\) as \((UD(j), f_j)\). Let \((L, (\lambda_j: L \to UD(j))_{j \in \ob \c J})\) be a limit for \(UD\). Then \((GL, (G\lambda_j)_{j \in \ob \c J})\) is a limit for \(GUD\). Since the edges of \(D\) are morphisms in \((A \downarrow G)\), the \(f_j\) form a cone over \(GUD\), so there is a unique \(h: A \to GL\) such that \((G\lambda_j)h = f_j\) for all \(j\), i.e.\ there's a unique \(h\) such that the \(\lambda_j\) are all morphisms \((L, h) \to (UD(j), f_j)\) in \((A \downarrow G)\). We need to show that \(((L, h), (\lambda_j)_{j \in \ob \c J})\) is a limit cone in \((A \downarrow G)\). If \(((C, k), (\mu_j)_{j \in \ob \c J})\) is any cone over \(D\), then \((C, (\mu_j)_{j \in \ob \c J})\) is a cone over \(UD\), so there is a unique \(\ell: C \to L\) with \(\lambda_j \ell = \mu_j\) for all \(j\). We need to show \((G\ell) k = h\).
But
  \[
    (G\lambda_j) (G\ell) k = (G \mu_j)k = f_j = (G\lambda_j)h
  \]
  for all \(j\), so \((G\ell) k = h\) by uniqueness of factorisations through limits.
\end{proof}

Recall that we have seen that a limit for the empty diagram is a terminal object. Dually, we can also consider the diagram of ``maximal size'', namely that of a category over itself, to realise the initial object as a limit. Think for example of posets: the limit of the empty diagram is the maximum element, while the limit of the whole poset regarded as a diagram over itself is the minimum element.

\begin{lemma}
  A category \(\c C\) has an initial object if and only if \(1_{\c C}: \c C \to \c C\), regarded as a diagram of shape \(\c C\) in \(\c C\), has a limit.
\end{lemma}

Note that this is an exception to \(\c J\) being small.

% joke about one-to-one real world map

\begin{proof}
  First suppose \(\c C\) has an initial object \(I\). Then the unique morphisms \((I \to A: A \in \ob \c C)\) form a cone over \(1_{\c C}\), and given any cone \((\lambda_A: C \to A: A \in \ob \c C)\), for any \(A\) the triangle
  \[
    \begin{tikzcd}
      C \ar[r, "\lambda_I"] \ar[dr, "\lambda_A"'] & I \ar[d] \\
      & A
    \end{tikzcd}
  \]
  commutes, so \(\lambda_I\) is a factorisation of \((\lambda_A: A \in \ob \c C)\) through \((I \to A: A \in \ob \c C)\); it is the unique one, as we see by taking \(A = I\).

  Conversely, suppose \((I, (\lambda_A: I \to A)_{A \in \ob \c C})\) is a limit. Then for any \(f: I \to A\) the diagram
  \[
    \begin{tikzcd}
      I \ar[r, "\lambda_I"] \ar[dr, "\lambda_A"'] & I \ar[d, "f"] \\
      & A
    \end{tikzcd}
  \]
  commutes. So far this only makes \(I\) weakly initial, as we don't yet know the factorisations are unique. But putting \(f = \lambda_A\), we get \(\lambda_A \lambda_I = \lambda_A\) for all \(A\), i.e.\ \(\lambda_I\) is a factorisation of the limit cone through itself, so \(\lambda_I = 1_I\). Hence every \(f: I \to A\) satisfies \(f = f\lambda_I = \lambda_A\).
\end{proof}

The primeval adjoint functor theorem follows immediately from the previous two lemmas and 3.3. However, it effectively applies only to functors between preorders. See example sheet 2 Q6.

\begin{theorem}[general adjoint functor theorem]\index{adjoint functor theorem!general}
  Suppose \(\c D\) is locally small and complete. Then \(G: \c D \to \c C\) has a left adjoint if and only if \(G\) preserves all small limits and satisfies the \emph{solution set condition}, which says that for each \(A \in \ob \c C\) there exists a set of morphisms \(\{f_i: A \to GB_i: i \in I\}\) such that every \(h: A \to GC\) factors as
  \[
    A \xrightarrow{f_i} GB_i \xrightarrow{Gg} GC
  \]
  for some \(i\) and some \(g: B_i \to C\).
\end{theorem}

\begin{proof}
  If \(F \adjoint G\) then \(G\) preserves limits. To obtain the solution set, note that \(\{\eta_A: A \to GFA\}\) is a singleton solution set by 3.3, since \((FA, \eta_A)\) is initial in \((A \downarrow G)\).

  Conversely, by 4.10 \((A \downarrow G)\) is complete, it inherits local smallness from \(\c D\), and the solution set condition says precisely that it has a weakly initial set of objects; an initial object of each \((A \downarrow G)\) then yields a left adjoint by 3.3. So we need to show that if \(\mathcal A\) is complete and locally small and has a weakly initial set of objects \(\{B_i: i \in I\}\), then \(\mathcal A\) has an initial object. First form \(P = \prod_{i \in I} B_i\); then \(P\) is weakly initial. Now form the limit of
  \[
    \begin{tikzcd}
      P \ar[r, shift left, "\vdots"'] \ar[r, shift right] & P
    \end{tikzcd}
  \]
  where the edges are all the endomorphisms of \(P\). Denote it \(i: I \to P\).
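  Note that \(i\) is monic: a cone over this diagram is just a morphism \(c: C \to P\) satisfying \(uc = c\) for every endomorphism \(u\) of \(P\) (the two legs agree since \(1_P\) is an edge), so if \(ix = iy\) then \(x\) and \(y\) are factorisations of the same cone through the limit, whence \(x = y\).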
\(I\) is also weakly initial in \(\mathcal A\). Suppose given \(f, g: I \to C\). Form the equaliser \(e: E \to I\) of \((f, g)\). Then there exists \(h: P \to E\) since \(P\) is weakly initial. Now \(ieh: P \to P\) and \(1_P\) are edges of the diagram, so \(iehi = i\). But \(i\) is monic, so \(ehi = 1_I\) and \(e\) is split epic, whence \(f = fehi = gehi = g\). Thus \(I\) is initial.
\end{proof}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item This example comes from Mac Lane. He asked: if we were not given the free group construction, how could we construct a left adjoint to the forgetful functor \(U: \c{Gp} \to \Set\)? By 4.6a, \(U\) creates all small limits, so \(\c{Gp}\) has them and \(U\) preserves them, and \(\c{Gp}\) is locally small. Given a set \(A\), any \(f: A \to UG\) factors as
    \[
      A \to UG' \to UG
    \]
    where \(G'\) is the subgroup generated by \(\{f(x): x \in A\}\), and \(\operatorname{card} G' \leq \max\{\aleph_0, \operatorname{card} A\}\). Let \(B\) be a set of this cardinality. Consider all subsets \(B' \subseteq B\), all group structures on \(B'\) and all mappings \(A \to B'\). These give us a solution set at \(A\). % grudge against Mac Lane. Similar as http://wwwf.imperial.ac.uk/~buzzard/maths/research/notes/the_adjoint_functor_theorem.pdf
  \item Consider the category \(\c{CLat}\) of complete lattices (posets with arbitrary meets and joins). Again the forgetful functor \(U: \c{CLat} \to \Set\) creates all small limits. But A.\ W.\ Hales showed in 1964 that for any cardinal \(\kappa\) there exist complete lattices of cardinality \(\geq \kappa\) generated by three elements. So the solution set condition fails at \(A = \{x, y, z\}\), as we cannot bound the cardinality of the lattices in a putative solution set, and \(U\) doesn't have a left adjoint.
  \end{enumerate}
\end{eg}

The general adjoint functor theorem is general in the sense that it applies to all categories, although it imposes the solution set condition, a rather strong condition on the functor. The special adjoint functor theorem aims to get rid of the condition on the functor, at the price of stronger conditions on the categories.

\begin{definition}[subobject, quotient object]\index{subobject}\index{quotient object}
  By a \emph{subobject} of an object \(A\) of \(\c C\) we mean a monomorphism \(A' \mono A\). The subobjects of \(A\) are preordered by \(A'' \leq A'\) if there is a factorisation
  \[
    \begin{tikzcd}
      A'' \ar[r] \ar[dr, tail] & A' \ar[d, tail] \\
      & A
    \end{tikzcd}
  \]
  Dually we have \emph{quotient objects}.
\end{definition}

\begin{definition}[well-powered]\index{category!well-powered}
  We say \(\c C\) is \emph{well-powered} if each \(A \in \ob \c C\) has a set of subobjects \(\{A_i \mono A: i \in I\}\) such that every subobject of \(A\) is isomorphic to some \(A_i\).

  Dually, if \(\c C^{\text{op}}\) is well-powered, we say \(\c C\) is \emph{well-copowered}. (\emph{Not} cowell-powered, as that suggests badly powered. It also sounds like something powered by Simon Cowell, which is not quite what we study in this course.)
\end{definition}

For example, in \(\Set\) we can take the inclusions \(\{A' \embed A: A' \in PA\}\).
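Similarly \(\c{Gp}\) is well-powered: every subobject of a group \(G\) is isomorphic, as a subobject, to the inclusion of a subgroup of \(G\).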
This is also where the name ``well-powered'' comes from: in the \(\Set\) case it simply says that power sets exist.

Before stating and proving the special adjoint functor theorem, we point out a simple yet powerful observation about pullback squares.

\begin{lemma}
  Suppose given a pullback square
  \[
    \begin{tikzcd}
      P \ar[r, "h"] \ar[d, "k"] & A \ar[d, "f", tail] \\
      B \ar[r, "g"] & C
    \end{tikzcd}
  \]
  with \(f\) monic. Then \(k\) is monic.
\end{lemma}

\begin{proof}
  Suppose \(x, y: D \to P\) satisfy \(kx = ky\). Then
  \[
    fhx = gkx = gky = fhy.
  \]
  Since \(f\) is monic, \(hx = hy\). So \(x\) and \(y\) are factorisations of the same cone through the limit cone \((h, k)\), whence \(x = y\).
\end{proof}

\begin{theorem}[special adjoint functor theorem]\index{adjoint functor theorem!special}
  Suppose \(\c C\) and \(\c D\) are both locally small, and that \(\c D\) is complete and well-powered and has a coseparating set. Then a functor \(G: \c D \to \c C\) has a left adjoint if and only if it preserves all small limits.
\end{theorem}

\begin{proof}
  The `only if' direction is given by the fact that right adjoints preserve limits. For the other direction: for any \(A \in \ob \c C\), \((A \downarrow G)\) is complete by 4.10, locally small, and well-powered, since the subobjects of \((B, f)\) in \((A \downarrow G)\) are just those subobjects \(B' \mono B\) in \(\c D\) for which \(f\) factors through \(GB' \mono GB\). Also if \(\{S_i: i \in I\}\) is a coseparating set for \(\c D\) then the set
  \[
    \{(S_i, f): i \in I, f \in \c C(A, GS_i)\}
  \]
  is coseparating in \((A \downarrow G)\): given \(g, h: (B, f) \to (B', f')\) in \((A \downarrow G)\) with \(g \neq h\), there exists \(k: B' \to S_i\) for some \(i\) with \(kg \neq kh\), and then \(k\) is also a morphism \((B', f') \to (S_i, (Gk) f')\) in \((A \downarrow G)\).

  So we need to show that if \(\mathcal A\) is complete, locally small and well-powered and has a coseparating set \(\{S_i: i \in I\}\), then \(\mathcal A\) has an initial object. Form the product \(P = \prod_{i \in I} S_i\). We have the industrial-strength product, and now we need the industrial-strength pullback. Consider the diagram
  \[
    \begin{tikzcd}[row sep=tiny, column sep=tiny]
      & & P_j \ar[dddr, tail] & P_i \ar[ddd, tail] \\
      & \ar[ddrr, tail] \\
      & \vdots \\
      P' \ar[rrr, tail] & & & P
    \end{tikzcd}
  \]
  whose edges are a representative set of subobjects of \(P\), and form its limit
  \[
    \begin{tikzcd}[row sep=tiny, column sep=tiny]
      I \ar[rrr] \ar[rrrd] \ar[ddd] & & & P_i \\
      & & & P_j \\
      & & \dots \\
      P'
    \end{tikzcd}
  \]
  By the argument in the previous lemma, the legs of the cone are all monic; in particular \(I \mono P\) is monic, and it is a least subobject of \(P\). Hence \(I\) has no proper subobjects, since any subobject of \(I\) is also a subobject of \(P\) and so contains \(I\). So given \(f, g: I \to A\), their equaliser is an isomorphism and hence \(f = g\).

  We get uniqueness for free but need to work harder to show existence. Now let \(A\) be any object of \(\mathcal A\). Form the product \(Q = \prod_{i \in I, f \in \mathcal A (A, S_i)} S_i\). There is an obvious \(h: A \to Q\) defined by \(\pi_{i, f} h = f\); and \(h\) is monic, since the \(S_i\) form a coseparating set (if \(hx = hy\) then \(fx = fy\) for all \(f: A \to S_i\), so \(x = y\)). We also have a morphism \(k: P \to Q\) defined by \(\pi_{i, f} k = \pi_i\).
Now form the pullback
   \[
     \begin{tikzcd}
       B \ar[r] \ar[d, tail] & A \ar[d, "h", tail] \\
       P \ar[r, "k"] & Q
     \end{tikzcd}
   \]
  By the lemma, \(B \to P\) is monic, so \(B\) is a subobject of \(P\). Hence, since \(I\) is a least subobject, there exists
  \[
    \begin{tikzcd}
      I \ar[r] \ar[dr] & B \ar[d] \\
      & P
    \end{tikzcd}
  \]
  and hence a morphism \(I \to B \to A\).
\end{proof}

This result is due to Freyd, who first published it in a book as an exercise for the reader (!).

\begin{eg}
  Consider the inclusion \(I: \c{KHaus} \to \Top\), where \(\c{KHaus}\) is the full subcategory of compact Hausdorff spaces. \(\c{KHaus}\) has and \(I\) preserves small products (by Tychonoff's theorem) and equalisers (since the equaliser of a pair \(f, g: X \to Y\) with \(Y\) Hausdorff is a closed subspace). Both categories are locally small, and \(\c{KHaus}\) is well-powered (subobjects of \(X\) are isomorphic to closed subspaces). The closed interval \([0, 1]\) is a coseparator in \(\c{KHaus}\) by Urysohn's lemma. So by the special adjoint functor theorem \(I\) has a left adjoint \(\beta\), the Stone-Čech compactification\index{Stone-Čech compactification}.
\end{eg}

\begin{remark}\leavevmode
  \begin{enumerate}
  \item % I've never asked Freyd but this is probably how he got the inspiration for AFT, not from Marshall Stone's proof

  Čech's construction of \(\beta\) is as follows: given \(X\), form
  \[
    P = \prod_{f: X \to [0, 1]} [0, 1]
  \]
  and define \(h: X \to P\) by \(\pi_f h = f\). Define \(\beta X\) to be the closure of the image of \(h\).

  % coseparator in \((X \downarrow [0, 1])\), smallest subobject being precisely the closure.

  Čech's proof that this works is essentially the same as SAFT.
\item We could have used GAFT to construct \(\beta\) by a cardinality argument: we get a solution set at \(X\) by considering all continuous \(f: X \to Y\) with \(Y\) compact Hausdorff and \(f(X)\) dense in \(Y\); such \(Y\) have cardinality \(\leq 2^{2^{\operatorname{card} X}}\).
  \end{enumerate}
\end{remark}

\section{Monads}

The idea of a monad is what is left of an adjunction when you cannot see one of the categories. Suppose given \(F: \c C \to \c D, G: \c D \to \c C\) with \(F \adjoint G\). How much of this structure can we describe without mentioning \(\c D\)?
We have
\begin{enumerate}
\item the functor \(T = GF: \c C \to \c C\),
\item the unit \(\eta: 1_{\c C} \to T = GF\),
\item and a ``shadow'' of the counit, namely the natural transformation \(\mu = G\varepsilon_F: TT = GFGF \to GF = T\),
\end{enumerate}
satisfying the commutative diagrams 1, 2
\[
  \begin{tikzcd}
    T \ar[r, "T\eta"] \ar[dr, "1_T"'] & TT \ar[d, "\mu"] & T \ar[l, "\eta_T"'] \ar[dl, "1_T"] \\
    & T
  \end{tikzcd}
\]
by the triangular identities, and 3
\[
  \begin{tikzcd}
    TTT \ar[r, "T\mu"] \ar[d, "\mu_T"] & TT \ar[d, "\mu"] \\
    TT \ar[r, "\mu"] & T
  \end{tikzcd}
\]
by naturality of \(\varepsilon\).

\begin{definition}[monad]\index{monad}\index{unit}\index{multiplication}
  A \emph{monad} \(\T = (T, \eta, \mu)\) on a category \(\c C\) consists of a functor \(T: \c C \to \c C\) and natural transformations \(\eta: 1_{\c C} \to T, \mu: TT \to T\) satisfying the commutative diagrams 1--3 above.

  \(\eta\) and \(\mu\) are called the \emph{unit} and \emph{multiplication} of \(\T\).
\end{definition}

The name ``monad'' was chosen because of the similarity of the monad axioms to those of a monoid. Before that, although well known to mathematicians, the concept didn't really have a settled name: it went by ``standard construction'', then ``triple'' (whence the letter \(\T\)), but both are confessions of failure to come up with a meaningful name! Someone invented the name ``monad'' and Mac Lane popularised it.

\begin{eg}\leavevmode
  \begin{enumerate}
  \item Any adjunction \(F \adjoint G\) induces a monad \((GF, \eta, G\varepsilon_F)\) on \(\c C\) and a \emph{comonad}\index{comonad} \((FG, \varepsilon, F\eta_G)\) on \(\c D\).
  \item Let \(M\) be a monoid. The functor \((M \times - ): \Set \to \Set\) has a monad structure with unit given by \(\eta_A(a) = (1, a)\), where \(1\) is the identity of \(M\), and multiplication \(\mu_A(m, m', a) = (mm', a)\). The monad identities follow from the monoid ones.
  \item Let \(\c C\) be any category with finite products and \(A \in \ob \c C\). The functor \((A \times -): \c C \to \c C\) has a comonad structure with counit \(\varepsilon_B: A \times B \to B\) given by \(\pi_2\) and comultiplication \(\delta_B: A \times B \to A \times A \times B\) given by \((\pi_1, \pi_1, \pi_2)\).
  \end{enumerate}
\end{eg}

Does every monad come from an adjunction? The answer is yes, and was given independently in 1965 by Eilenberg-Moore and by Kleisli. We will cover both constructions.

In example 2 above we have the category \([M, \Set]\). Its forgetful functor to \(\Set\) has a left adjoint, sending \(A\) to \(M \times A\) with \(M\) acting by multiplication on the left factor. This adjunction gives rise to the monad of example 2.

\begin{definition}[Eilenberg-Moore algebra]\index{algebra}
  Let \(\T\) be a monad on \(\c C\).
A \emph{\(\T\)-algebra} is a pair \((A, \alpha)\) with \(A \in \ob \c C\) and \(\alpha: TA \to A\) satisfying the commutative diagrams 4 and 5
  \[
    \begin{tikzcd}
      A \ar[r, "\eta_A"] \ar[dr, "1_A"'] & TA \ar[d, "\alpha"] \\
      & A
    \end{tikzcd}
    \qquad
    \begin{tikzcd}
      TTA \ar[r, "T\alpha"] \ar[d, "\mu_A"] & TA \ar[d, "\alpha"] \\
      TA \ar[r, "\alpha"] & A
    \end{tikzcd}
  \]

  A \emph{homomorphism} \(f: (A, \alpha) \to (B, \beta)\) is a morphism \(f: A \to B\) such that diagram 6
  \[
    \begin{tikzcd}
      TA \ar[r, "Tf"] \ar[d, "\alpha"] & TB \ar[d, "\beta"] \\
      A \ar[r, "f"] & B
    \end{tikzcd}
  \]
  commutes.

  The category of \(\T\)-algebras is denoted \(\c C^\T\).
\end{definition}

\begin{lemma}
  The forgetful functor \(G^\T: \c C^\T \to \c C\) has a left adjoint \(F^\T\), and the adjunction induces \(\T\).
\end{lemma}

\begin{proof}
  We define a ``free'' \(\T\)-algebra functor, mimicking that in the monoid case. Define \(F^\T A = (TA, \mu_A)\) (an algebra by diagrams 2 and 3) and \(F^\T(A \xrightarrow{f} B) = Tf\) (a homomorphism by naturality of \(\mu\)). Clearly \(G^\T F^\T = T\), and the unit of the adjunction is \(\eta\). We define the counit by \(\varepsilon_{(A, \alpha)} = \alpha: (TA, \mu_A) \to (A, \alpha)\) (a homomorphism by diagram 5), which is natural by diagram 6. The triangular identities
  \begin{align*}
    \varepsilon_{FA} (F\eta_A) &= 1_{FA} \\
    G\varepsilon_{(A, \alpha)} \eta_A &= 1_A
  \end{align*}
  are given by diagrams 1 and 4. Finally, the monad induced by \(F^\T \adjoint G^\T\) has functor \(T\) and unit \(\eta\), and
  \[
    G^\T \varepsilon_{F^\T A} = \mu_A
  \]
  by definition of \(F^\T A\).
\end{proof}

Kleisli took a ``minimalist'' approach: if \(F: \c C \to \c D, G: \c D \to \c C\) induce \(\T\), then so do \(F: \c C \to \c D'\) and the restriction \(G|_{\c D'}: \c D' \to \c C\), where \(\c D'\) is the full subcategory of \(\c D\) on the objects \(FA\). So in trying to construct \(\c D\), we may assume \(F\) is surjective (or indeed bijective) on objects. But then morphisms \(FA \to FB\) correspond bijectively to morphisms \(A \to GFB = TB\) in \(\c C\). This gets us halfway there, but we still have to specify how to compose morphisms (in general the domains and codomains don't match up).

\begin{definition}[Kleisli category]\index{Kleisli category}
  Given a monad \(\T\) on \(\c C\), the \emph{Kleisli category} \(\c C_\T\) has \(\ob \c C_\T = \ob \c C\), and morphisms
  \begin{tikzcd}
    A \ar[r, blue] & B
  \end{tikzcd}
  (we use blue arrows to signify morphisms in \(\c C_\T\)) are morphisms \(A \to TB\) in \(\c C\).
The composite
  \begin{tikzcd}
    A \ar[r, blue, "f"] & B \ar[r, blue, "g"] & C
  \end{tikzcd}
  is
  \[
    \begin{tikzcd}
      A \ar[r, "f"] & TB \ar[r, "Tg"] & TTC \ar[r, "\mu_C"] & TC
    \end{tikzcd}
  \]
  and the identity
  \begin{tikzcd}
    A \ar[r, blue] & A
  \end{tikzcd}
  is \(A \xrightarrow{\eta_A} TA\).
\end{definition}

To verify associativity, suppose given
\begin{tikzcd}
  A \ar[r, blue, "f"] & B \ar[r, blue, "g"] & C \ar[r, blue, "h"] & D
\end{tikzcd}
then
\[
  \begin{tikzcd}
    A \ar[r, "f"] & TB \ar[r, "Tg"] & TTC \ar[r, "TTh"] \ar[d, "\mu_C"] & TTTD \ar[r, "T\mu_D"] \ar[d, "\mu_{TD}"] & TTD \ar[d, "\mu_D"] \\
    & & TC \ar[r, "Th"] & TTD \ar[r, "\mu_D"] & TD
  \end{tikzcd}
\]
commutes: the upper way round is \((hg) f\) and the lower is \(h(gf)\). (We used diagram 3 in the rightmost square.)

The unit laws similarly follow from
\[
  \begin{tikzcd}
    A \ar[r, "f"] & TB \ar[r, "T\eta_B"] \ar[dr, "1_{TB}"'] & TTB \ar[d, "\mu_B"] \\
    & & TB
  \end{tikzcd}
  \qquad
  \begin{tikzcd}
    A \ar[r, "f"] \ar[d, "\eta_A"] & TB \ar[dr, "1_{TB}"] \ar[d, "\eta_{TB}"] \\
    TA \ar[r, "Tf"] & TTB \ar[r, "\mu_B"] & TB
  \end{tikzcd}
\]
(using diagrams 1 and 2 respectively in the triangles).

\begin{lemma}
  There exists an adjunction \(F_\T: \c C \to \c C_\T, G_\T: \c C_\T \to \c C\) inducing the monad \(\T\).
\end{lemma}

\begin{proof}
  We define \(F_\T A = A\) and \(F_\T(A \xrightarrow{f} B) = A \xrightarrow{f} B \xrightarrow{\eta_B} TB\). \(F_\T\) preserves identities by definition. For composites, consider \(A \xrightarrow{f} B \xrightarrow{g} C\): we get
  \[
    \begin{tikzcd}
      A \ar[r, "f"] & B \ar[r, "\eta_B"] \ar[d, "g"] & TB \ar[d, "Tg"] \\
      & C \ar[r, "\eta_C"] & TC \ar[r, "T\eta_C"] \ar[dr, "1_{TC}"'] & TTC \ar[d, "\mu_C"] \\
      & & & TC
    \end{tikzcd}
  \]
  (using diagram 1 in the triangle).

  We define
  \begin{align*}
    G_\T A &= TA \\
    G_\T(A \blue{\xrightarrow{f}} B) &= TA \xrightarrow{Tf} TTB \xrightarrow{\mu_B} TB
  \end{align*}
  \(G_\T\) preserves identities by diagram 1. For composites, consider
  \begin{tikzcd}
    A \ar[r, blue, "f"] & B \ar[r, blue, "g"] & C
  \end{tikzcd}:
  we get
  \[
    \begin{tikzcd}
      TA \ar[r, "Tf"] & TTB \ar[r, "TTg"] \ar[d, "\mu_B"] & TTTC \ar[r, "T\mu_C"] \ar[d, "\mu_{TC}"] & TTC \ar[d, "\mu_C"] \\
      & TB \ar[r, "Tg"] & TTC \ar[r, "\mu_C"] & TC
    \end{tikzcd}
  \]
  (using diagram 3 in the last square).

  We have
  \begin{align*}
    G_\T F_\T A &= TA \\
    G_\T F_\T f &= \mu_B (T\eta_B) (Tf) = Tf
  \end{align*}
  so we take \(\eta: 1_{\c C} \to T\) as the unit of \(F_\T \adjoint G_\T\). The counit
  \begin{tikzcd}
    TA \ar[r, "\varepsilon_A"] & A
  \end{tikzcd}
  is \(1_{TA}\), i.e.\ the identity \(TA \to TA\) in \(\c C\) regarded as a blue arrow \(TA \to A\).
To verify naturality consider the square\n  \[\n    \begin{tikzcd}\n      TA \ar[r, blue, \"F_\T G_\T f\"] \ar[d, blue, \"\varepsilon_A\"] & TB \ar[d, blue, \"\varepsilon_B\"] \\\n      A \ar[r, blue, \"f\"] & B\n    \end{tikzcd}\n  \]\n  This expands to\n  \[\n    \begin{tikzcd}\n      TA \ar[r, \"Tf\"] & TTB \ar[r, \"\mu_B\"] & TB \ar[r, \"\eta_{TB}\"] \ar[dr, \"1_{TB}\"'] & TTB \ar[d, \"\mu_B\"] \\\n      & & & TB\n    \end{tikzcd}\n  \]\n  (used diagram 2 in the triangle) so \(\varepsilon\) is natural.\n\n  Finally, \(G_\T(TA \blue{\xrightarrow{\varepsilon_A}} A) = \mu_A\) so\n   \[\n     G_\T(\blue{\varepsilon_A}) \eta_{G_\T A} = \mu_A \eta_{TA} = 1_{TA}\n   \]\n   and \(\blue{(\varepsilon_{F_\T A}) (F_\T \eta_A)}\) is\n   \[\n     \begin{tikzcd}\n       A \ar[r, \"\eta_A\"] & TA \ar[r, \"\eta_{TA}\"] \ar[dr, \"1_{TA}\"'] & TTA \ar[d, \"\mu_A\"] \\\n       & & TA\n     \end{tikzcd}\n   \]\n    (used diagram 2 in the triangle) which is \(\blue{(1_{F_\T A})}\). Also \(G_\T(\blue{\varepsilon_{F_\T A}}) = \mu_A\) so \(F_\T \adjoint G_\T\) induces \(\T\).\n\end{proof}\n\n\begin{theorem}\n  Given a monad \(\T\) on \(\c C\), let \(\c{Adj} (\T)\) be the category whose objects are the adjunctions \(F: \c C \to \c D, G: \c D \to \c C\) inducing \(\T\), and whose morphisms\n  \[\n    \begin{tikzcd}\n      (\c C \ar[r, shift left, \"F\"] & \c D) \ar[l, shift left, \"G\"] \ar[r] & (\c C \ar[r, shift left, \"F'\"] & \c D') \ar[l, shift left, \"G'\"]\n    \end{tikzcd}\n  \]\n  are functors \(H: \c D \to \c D'\) satisfying \(HF = F'\) and \(G'H = G\). Then the Kleisli adjunction is an initial object of \(\c{Adj} (\T)\) and the Eilenberg-Moore adjunction is terminal.\n\end{theorem}\n\n\begin{proof}\n  Let \(F \adjoint G\) be an object of \(\c{Adj} (\T)\). We define the \emph{Eilenberg-Moore comparison functor}\index{Eilenberg-Moore comparison functor} \(K: \c D \to \c C^\T\) by \(KB = (GB, G\varepsilon_B)\) where \(\varepsilon\) is the counit of \(F \adjoint G\). Note this is an algebra by one of the triangle identities for \(F \adjoint G\) and naturality of \(\varepsilon\), and \(K(B \xrightarrow{g} B') = Gg\), a homomorphism by naturality of \(\varepsilon\).\n\n  Clearly \(G^\T K = G\) and\n  \begin{align*}\n    KFA &= (GFA, G\varepsilon_{FA}) = (TA, \mu_A) = F^\T A \\\n    KF(A \xrightarrow{f} A') &= Tf = F^\T f\n  \end{align*}\n  so \(K\) is a morphism of \(\c{Adj} (\T)\).\n\n  Suppose \(K': \c D \to \c C^\T\) is another such, then since \(G^\T K' = G\) we know \(K'B = (GB, \beta_B)\) where \(\beta\) is a natural transformation \(GFG \to G\). Also since \(K' F = F^\T\), we have\n  \[\n    \beta_{FA} = \mu_A = G\varepsilon_{FA}.\n  \]\n  Now given any \(B \in \ob \c D\), consider the diagram\n  \[\n    \begin{tikzcd}\n      GFGFGB \ar[r, \"GFG\varepsilon_B\"] \ar[d, shift right, \"G\varepsilon_{FGB}\"', \"=\"] \ar[d, shift left, \"\beta_{FGB}\"] & GFGB \ar[d, shift right, \"G\varepsilon_B\"'] \ar[d, shift left, \"\beta_B\"] \\\n      GFGB \ar[r, \"G\varepsilon_B\"] & GB\n    \end{tikzcd}\n  \]\n  Both squares commute so \(G\varepsilon_B\) and \(\beta_B\) have the same composite with \(GFG\varepsilon_B\). 
But this is split epic with splitting \(GF\eta_{GB}\) so \(\beta = G\varepsilon\). Hence \(K' = K\).\n\n  We now define the Kleisli comparison functor \(L: \c C _\T \to \c D\) by \(LA = FA\),\n  \[\n    L(A \blue{\xrightarrow{f}} B) = FA \xrightarrow{Ff} FGFB \xrightarrow{\varepsilon_{FB}} FB.\n  \]\n  \(L\) preserves identities by one of the triangle identities for \(F \adjoint G\). Given \(A \blue{\xrightarrow{f}} B \blue{\xrightarrow{g}} C\), we have\n  \[\n    \begin{tikzcd}\n      FA \ar[r, \"Ff\"] & FGFB \ar[r, \"FGFg\"] \ar[d, \"\varepsilon_{FB}\"] & FGFGFC \ar[r, \"FG\varepsilon_{FC}\"] \ar[d, \"\varepsilon_{FGFC}\"] & FGFC \ar[d, \"\varepsilon_{FC}\"] \\\n      & FB \ar[r, \"Fg\"] & FGFC \ar[r, \"\varepsilon_{FC}\"] & FC\n    \end{tikzcd}\n  \]\n  Also\n  \begin{align*}\n    GLA &= TA = G_\T A \\\n    GL(A \blue{\xrightarrow{f}} B) &= (G \varepsilon_{FB}) (GFf) = \mu_B(Tf) = G_\T f \\\n    LF_\T A &= FA \\\n    LF_\T(A \xrightarrow{f} B) &= (\varepsilon_{FB}) (F\eta_B) (Ff) = Ff\n  \end{align*}\n  so \(L\) is a morphism of \(\c{Adj}(\T)\). For future reference, note that \(L\) is full and faithful: its effect on morphisms (with blue domains and codomains) is that of transposition across \(F \adjoint G\).\n\n  Finally for uniqueness, suppose \(L': \c C_\T \to \c D\) is a morphism of \(\c{Adj}(\T)\). We must have \(L'A = FA\) and \(L'\) maps the counit \(TA \blue{\to} A\) to the counit \(FGFA \xrightarrow{\varepsilon_{FA}} FA\). For any \(A \blue{\xrightarrow{f}} B\), we have\n  \begin{align*}\n    \blue{f = 1_{TB}(F_\T f)}\n  \end{align*}\n  so \(L'(\blue{f}) = \varepsilon_{FB} (Ff) = Lf\).\n\end{proof}\n\nIf \(\c C\) has coproducts then so does \(\c C_\T\) since \(F_\T\) preserves them. But in general it has few other limits or colimits. In contrast, we have\n\n\begin{theorem}\leavevmode\n  \begin{enumerate}\n  \item The forgetful functor \(G: \c C^\T \to \c C\) creates all limits which exist in \(\c C\).\n  \item If \(\c C\) has colimits of shape \(\c J\) then \(G: \c C^\T \to \c C\) creates them if and only if \(T\) preserves them.\n  \end{enumerate}\n\end{theorem}\n\n\begin{proof}\leavevmode\n  \begin{enumerate}\n  \item Suppose given \(D: \c J \to \c C^\T\). Write \(D(j) = (GD(j), \delta_j)\) and suppose \((L, (\mu_j: L \to GD(j): j \in \ob \c J))\) is a limit cone for \(GD\). Then the composites\n    \[\n      TL \xrightarrow{T\mu_j} TGD(j) \xrightarrow{\delta_j} GD(j)\n    \]\n    form a cone over \(GD\) since the edges of \(D\) are homomorphisms, so they induce a unique \(\lambda: TL \to L\) such that \(\mu_j \lambda = \delta_j(T\mu_j)\) for all \(j\). The fact that \(\lambda\) is a \(\T\)-algebra structure on \(L\) follows from the fact that the \(\delta_j\) are algebra structures and uniqueness of factorisations through limits. 
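In more detail, for the unit axiom: for each \(j\) we have\n    \[\n      \mu_j \lambda \eta_L = \delta_j (T\mu_j) \eta_L = \delta_j \eta_{GD(j)} \mu_j = \mu_j\n    \]\n    using naturality of \(\eta\) and diagram 4 for \(\delta_j\), so \(\lambda \eta_L = 1_L\) by uniqueness of factorisations through the limit; the associativity axiom is checked in the same way. 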
So \(((L, \lambda), (\mu_j: j \in \ob \c J))\) is the unique lifting of the limit cone over \(GD\) to a cone over \(D\), and it's a limit, since given a cone over \(D\) with apex \((A, \alpha)\), we get a unique factorisation \(A \xrightarrow{f} L\) in \(\c C\), and \(f\) is an algebra homomorphism by uniqueness of factorisations through \(L\).\n  \item For the \(\implies\) direction, \(F^\T: \c C \to \c C^\T\) preserves colimits since it is a left adjoint, and \(G^\T\) preserves them since it creates them; so \(T = G^\T F^\T\) preserves colimits of shape \(\c J\).\n\n    For the \(\impliedby\) direction, suppose given \(D: \c J \to \c C^\T\) as in 1, and a colimit cone \((GD(j) \xrightarrow{\mu_j} L: j \in \ob \c J)\) in \(\c C\), then \((TGD(j) \xrightarrow{T\mu_j} TL: j \in \ob \c J)\) is also a colimit cone, so the composites\n    \[\n      TGD(j) \xrightarrow{\delta_j} GD(j) \xrightarrow{\mu_j} L\n    \]\n    induce a unique \(\lambda: TL \to L\). The rest of the argument is as in 1.\n  \end{enumerate}\n\end{proof}\n\n\begin{definition}[monadicity]\index{monadicity}\n  Given an adjunction \(F \adjoint G\), we say the adjunction (or the functor \(G\)) is \emph{monadic} if the comparison functor \(K: \c D \to \c C^\T\) is part of an equivalence of categories.\n\end{definition}\n\nNote that since the Kleisli comparison \(\c C_\T \to \c D\) is full and faithful, it's part of an equivalence if and only if it (equivalently, \(F\)) is essentially surjective on objects.\n\n\begin{remark}\n  Given any adjunction \(F \adjoint G\), for each object \(B\) of \(\c D\) we have a diagram\n  \[\n    \begin{tikzcd}\n      FGFGB \ar[r, shift left, \"FG\varepsilon_B\"] \ar[r, shift right, \"\varepsilon_{FGB}\"'] & FGB \ar[r, \"\varepsilon_B\"] & B\n    \end{tikzcd}\n  \]\n  with equal composites. The ``primeval'' monadicity theorem asserts that \(\c C^\T\) is characterised in \(\c{Adj}(\T)\) by the fact that these diagrams are all coequalisers.\n\end{remark}\n\n\begin{definition}[reflexivity, split coequaliser]\index{reflexivity}\index{coequaliser!split}\leavevmode\n  \begin{enumerate}\n  \item We say a parallel pair \(f, g: A \to B\) is \emph{reflexive} if there exists \(B \xrightarrow{r} A\) such that \(fr = gr = 1_B\).\n\n    We say \(\c C\) has reflexive coequalisers if it has coequalisers of all reflexive pairs. 
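(For example, the kernel pair \(R \rightrightarrows A\) of any morphism \(f: A \to B\) is reflexive: the diagonal \(A \to R\) induced by \((1_A, 1_A)\) splits both projections. This is the case we will need in 6.7 below.) 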
Equivalently, colimits of shape\n    \[\n      \begin{tikzcd}\n        \cdot \ar[loop, out=90, in=150, looseness=6] \ar[loop, out=210, in=270, looseness=5] \ar[r, shift left] \ar[r, shift right] & \cdot \ar[l]\n      \end{tikzcd}\n    \]\n  \item By a \emph{split coequaliser diagram} we mean a diagram\n    \[\n      \begin{tikzcd}\n        A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B \ar[l, bend left=50, \"t\"] \ar[r, \"h\"] & C \ar[l, bend left=50, \"s\"]\n      \end{tikzcd}\n    \]\n    satisfying \(hf = hg, hs = 1_C, gt = 1_B\) and \(ft = sh\).\n\n    These equations imply that \(h\) is a coequaliser of \((f, g)\): if \(B \xrightarrow{x} D\) satisfies \(xf = xg\) then\n    \[\n      x = xgt = xft = xsh\n    \]\n    so \(x\) factors through \(h\), and the factorisation is unique since \(h\) is split epic.\n\n    Note that split coequalisers are preserved by \emph{all} functors.\n  \item Given a functor \(G: \c D \to \c C\), a parallel pair\n    \begin{tikzcd}\n      A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B\n    \end{tikzcd}\n    is called \emph{\(G\)-split} if there exists a split coequaliser diagram\n    \[\n      \begin{tikzcd}\n        GA \ar[r, shift left, \"Gf\"] \ar[r, shift right, \"Gg\"'] & GB \ar[l, bend left=50, \"t\"] \ar[r, \"h\"] & C \ar[l, bend left=50, \"s\"]\n      \end{tikzcd}\n    \]\n    in \(\c C\).\n\n     Note that\n     \begin{tikzcd}\n       FGFGB \ar[r, shift left, \"FG\varepsilon_B\"] \ar[r, shift right, \"\varepsilon_{FGB}\"'] & FGB\n     \end{tikzcd}\n    is \(G\)-split, since\n    \[\n      \begin{tikzcd}\n        GFGFGB \ar[r, shift left, \"GFG\varepsilon_B\"] \ar[r, shift right, \"G\varepsilon_{FGB}\"'] & GFGB \ar[l, bend left=50, \"\eta_{GFGB}\"] \ar[r, \"G\varepsilon_B\"] & GB \ar[l, bend left=50, \"\eta_{GB}\"]\n      \end{tikzcd}\n    \]\n    is a split coequaliser diagram.\n  \end{enumerate}\n\end{definition}\n\nNote that the aforementioned pair\n\[\n  \begin{tikzcd}\n    FGFGB \ar[r, shift left, \"FG\varepsilon_B\"] \ar[r, shift right, \"\varepsilon_{FGB}\"'] & FGB\n  \end{tikzcd}\n\]\nis reflexive with \(r = F\eta_{GB}\).\n \n\begin{lemma}\n  \label{lem:left adjoint and coequaliser}\n  Suppose given an adjunction\n  \begin{tikzcd}\n    \c C \ar[r, shift left, \"F\"] & \c D \ar[l, shift left, \"G\"]\n  \end{tikzcd}\n  where \(F \adjoint G\), inducing a monad \(\T\) on \(\c C\). Then \(K: \c D \to \c C^\T\) has a left adjoint provided, for every \(\T\)-algebra \((A, \alpha)\), the pair\n  \begin{tikzcd}\n    FGFA \ar[r, shift left, \"F\alpha\"] \ar[r, shift right, \"\varepsilon_{FA}\"'] & FA\n  \end{tikzcd}\n  has a coequaliser in \(\c D\).\n\end{lemma}\n\n\begin{proof}\n  We define \(L: \c C^\T \to \c D\) by taking \(FA \to L(A, \alpha)\) to be a coequaliser for \((F\alpha, \varepsilon_{FA})\); one checks this extends to a functor \(\c C^\T \to \c D\). Recall that \(K\) is defined by \(KB = (GB, G\varepsilon_B)\). For any \(B\), morphisms \(L(A, \alpha) \to B\) correspond bijectively to morphisms \(FA \xrightarrow{f} B\) satisfying \(f(F\alpha) = f(\varepsilon_{FA})\). These correspond to morphisms \(A \xrightarrow{\check f} GB\) satisfying\n  \[\n    \check f \alpha = Gf = G(\varepsilon_B(F \check f)) = (G\varepsilon_B)(T \check f)\n  \]\n  i.e.\ to algebra homomorphisms \((A, \alpha) \to KB\). 
And these bijections are natural in \((A, \alpha)\) and in \(B\).\n\end{proof}\n\n\begin{theorem}[precise monadicity theorem]\index{monadicity theorem}\n  \label{thm:precise monadicity theorem}\n  \(G: \c D \to \c C\) is monadic if and only if \(G\) has a left adjoint and creates coequalisers of \(G\)-split pairs.\n\end{theorem}\n\n\begin{theorem}[refined/reflexive monadicity theorem]\index{monadicity theorem}\n  \label{thm:refined monadicity theorem}\n  Suppose \(\c D\) has, and \(G: \c D \to \c C\) preserves, reflexive coequalisers, and that \(G\) reflects isomorphisms and has a left adjoint. Then \(G\) is monadic.\n\end{theorem}\n\n\begin{proof}\n  \Cref{thm:precise monadicity theorem} \(\implies\): sufficient to show that \(G^\T: \c C^\T \to \c C\) creates coequalisers of \(G^\T\)-split pairs. But this follows from the argument of 5.4 (2), since if \(f, g: (A, \alpha) \to (B, \beta)\) is a \(G^\T\)-split pair, the coequaliser of\n  \begin{tikzcd}\n    A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B\n  \end{tikzcd}\n  in \(\c C\) is preserved by \(T\) and \(TT\).\n\n  \Cref{thm:precise monadicity theorem} \(\impliedby\) and \Cref{thm:refined monadicity theorem}: Let \(\T\) denote the monad induced by \(F \adjoint G\). For any \(\T\)-algebra \((A, \alpha)\), the pair\n  \begin{tikzcd}\n    FGFA \ar[r, shift left, \"F\alpha\"] \ar[r, shift right, \"\varepsilon_{FA}\"'] & FA\n  \end{tikzcd}\n  is both reflexive and \(G\)-split, so has a coequaliser in \(\c D\) and hence by \Cref{lem:left adjoint and coequaliser}, \(K: \c D \to \c C^\T\) has a left adjoint \(L\). For the unit of \(L \adjoint K\) at an algebra \((A, \alpha)\): the coequaliser defining \(L(A, \alpha)\) is mapped by \(K\) to the diagram\n  \[\n    \begin{tikzcd}\n      F^\T TA \ar[r, shift left, \"F^\T\alpha\"] \ar[r, shift right, \"\mu_A\"'] & F^\T A \ar[r] \ar[dr, \"\alpha\"] & KL(A, \alpha) \\\n      & & (A, \alpha) \ar[u, dashed, \"\iota_{(A, \alpha)}\"']\n    \end{tikzcd}\n  \]\n  and \(\iota_{(A, \alpha)}\) is the factorisation of \(F^\T A \to KL(A, \alpha)\) through the \(G^\T\)-split coequaliser \(\alpha\). But either set of hypotheses implies that \(G\) preserves the coequaliser defining \(L(A, \alpha)\), so \(\iota_{(A, \alpha)}\) is an isomorphism. 
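(In more detail: applying \(G\) to the coequaliser defining \(L(A, \alpha)\) gives a coequaliser of \((T\alpha, \mu_A)\) in \(\c C\); since \(\alpha\) is also a (split) coequaliser of this pair, the underlying morphism of \(\iota_{(A, \alpha)}\) is a comparison between two coequalisers of the same pair, hence invertible, and \(G^\T\) reflects isomorphisms.) 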
For the counit \(\xi: LKB \to B\), we have a coequaliser\n  \[\n    \begin{tikzcd}\n      FGFGB \ar[r, shift left, \"FG\varepsilon_B\"] \ar[r, shift right, \"\varepsilon_{FGB}\"'] & FGB \ar[r] \ar[dr, \"\varepsilon_B\"] & LKB \ar[d, dashed, \"\xi_B\"] \\\n      & & B\n    \end{tikzcd}\n  \]\n  Again, either set of hypotheses implies that \(\varepsilon_B\) is a coequaliser of \((FG\varepsilon_B, \varepsilon_{FGB})\) so \(\xi_B\) is an isomorphism.\n\end{proof}\n\n\begin{eg}\leavevmode\n  \begin{enumerate}\n  \item The forgetful functors \(\c{Gp} \to \Set, \c{Rng} \to \Set, \c{Mod}_R \to \Set \dots\) all satisfy the hypotheses of the refined monadicity theorem. For the reflexive coequalisers, use example sheet 4 Q3, which shows that if\n    \[\n      \begin{tikzcd}\n        A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B \ar[r, \"h\"] & C\n      \end{tikzcd}\n    \]\n    is a reflexive coequaliser diagram in \(\Set\) then so is\n    \[\n      \begin{tikzcd}\n        A^n \ar[r, shift left, \"f^n\"] \ar[r, shift right, \"g^n\"'] & B^n \ar[r, \"h^n\"] & C^n.\n      \end{tikzcd}\n    \]\n  \item Any reflection is monadic: this follows from example sheet 3 Q3, but can also be proved using the precise monadicity theorem. Let \(\c D\) be a reflective (full) subcategory of \(\c C\), and suppose a pair \(f, g: A \to B\) in \(\c D\) fits into a split coequaliser diagram\n    \[\n      \begin{tikzcd}\n        A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B \ar[l, blue, bend left=50, \"t\"] \ar[r, blue, \"h\"] & C \ar[l, blue, bend left=50, \"s\"]\n      \end{tikzcd}\n    \]\n    in \(\c C\). Then \(t\) and \(ft = sh\) belong to \(\c D\) since \(\c D\) is full, and hence \(s\) is in \(\c D\) since it's an equaliser of \((1_B, sh)\) and \(\c D\) is closed under limits in \(\c C\). Hence also \(h \in \mor \c D\).\n  \item Consider the composite adjunction\n    \[\n      \begin{tikzcd}\n        \Set \ar[r, shift left, \"F\"] & \c{AbGp} \ar[l, shift left, \"U\"] \ar[r, shift left, \"L\"] & \c{tfAbGp} \ar[l, shift left, \"I\"]\n      \end{tikzcd}\n    \]\n    These two factors are monadic by the above two examples respectively, but the composite isn't, since the monad it induces on \(\Set\) is isomorphic to that induced by \(F \adjoint U\): its category of algebras is \(\c{AbGp}\), which is not equivalent to \(\c{tfAbGp}\).\n  \item Consider the forgetful functor \(U: \Top \to \Set\). This is faithful and has both left and right adjoints (so preserves all coequalisers), but the monad induced on \(\Set\) is \((1, 1, 1)\), whose category of algebras is \(\Set\); so \(U\) is not monadic.\n  \item One may get the impression that monadicity is something enjoyed only by algebraic constructions and not topological ones. This is not so: consider the composite adjunction\n    \[\n      \begin{tikzcd}\n        \Set \ar[r, shift left, \"D\"] & \Top \ar[l, shift left, \"U\"] \ar[r, shift left, \"\beta\"] & \c{KHaus} \ar[l, shift left, \"I\"]\n      \end{tikzcd}\n    \]\n    We shall show that this satisfies the hypotheses of the precise monadicity theorem. Let\n    \[\n      \begin{tikzcd}\n        X \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & Y \ar[l, bend left=50, blue, \"t\"] \ar[r, blue, \"h\"] & Z \ar[l, bend left=50, blue, \"s\"]\n      \end{tikzcd}\n    \]\n    be a split coequaliser in \(\Set\) where \(X\) and \(Y\) have compact Hausdorff topologies and \(f, g\) are continuous. 
Note that the quotient topology on \(Z \cong Y/R\) is compact, so it's the only possible candidate for a compact Hausdorff topology making \(h\) continuous.\n\n    We use a lemma from general topology: if \(Y\) is compact Hausdorff, then a quotient \(Y/R\) is Hausdorff if and only if \(R \subseteq Y \times Y\) is closed. We note\n    \begin{align*}\n      R\n      &= \{(y, y'): h(y) = h(y')\} \\\n      &= \{(y, y'): sh(y) = sh(y')\} \\\n      &= \{(y, y'): ft(y) = ft(y')\}\n    \end{align*}\n    so if we define \(S = \{(x, x'): f(x) = f(x')\} \subseteq X \times X\) then \(R \subseteq (g \times g)(S)\) (since \(gt = 1_Y\)), and the reverse inclusion also holds (since \(hf = hg\)). Now\n    \[\n      \begin{tikzcd}\n        S \ar[r] & X \times X \ar[r, shift left, \"f\pi_1\"] \ar[r, shift right, \"f\pi_2\"'] & Y\n      \end{tikzcd}\n    \]\n    is an equaliser and \(Y\) is Hausdorff, so \(S\) is closed in \(X \times X\) and hence compact. So \(R = (g \times g)(S)\) is compact and hence closed in \(Y \times Y\).\n\n    Morally, a category that is monadic over \(\Set\) can be thought of as algebraic, in the sense that it is defined by operations and equations. This applies to \(\c{KHaus}\) if we allow ``infinitary equations''.\n  \end{enumerate}\n\end{eg}\n\n\begin{definition}[monadic tower]\index{monadic tower}\n  Let\n  \begin{tikzcd}\n    \c C \ar[r, shift left, \"F\"] & \c D \ar[l, shift left, \"G\"]\n  \end{tikzcd}\n  be an adjunction and suppose \(\c D\) has reflexive coequalisers. The \emph{monadic tower} of \(F \adjoint G\) is the diagram\n  \[\n    \begin{tikzcd}[row sep=huge]\n      & \vdots \ar[d, shift left] \ar[dl, shift left] \\\n      \c D \ar[ur, shift left] \ar[r, shift left, \"K'\"] \ar[dr, shift left, \"K\"] \ar[ddr, shift left, \"G\"] & (\c C^\T)^{\mathbb S} \ar[l, shift left, \"L'\"] \ar[d, shift left] \ar[u, shift left] \\\n      & \c C^\T \ar[ul, shift left, \"L\"] \ar[u, shift left] \ar[d, shift left] \\\n      & \c C \ar[uul, shift left, \"F\"] \ar[u, shift left]\n    \end{tikzcd}\n  \]\n  where \(\T\) is the monad induced by \(F \adjoint G\), \(K\) is the comparison functor as in 5.7, \(L\) is its left adjoint as in 5.11, \(\mathbb S\) is the monad induced by \(L \adjoint K\), and so on.\n\n  We say \(F \adjoint G\) has \emph{monadic length} \(n\) if we reach an equivalence after \(n\) steps.\n\end{definition}\n\nMonadic length 0: already an equivalence. Monadic length 1: monadic.\n\nFor example, the adjunction of 5.14c has monadic length \(1\). The adjunction of 5.14d has monadic length \(\infty\).\n\nWe will need one more result for topos theory, on the lifting of left adjoints; the proof is only sketched.\n\n\begin{theorem}\n  Suppose given an adjunction\n  \begin{tikzcd}\n    \c C \ar[r, shift left, \"L\"] & \c D \ar[l, shift left, \"R\"]\n  \end{tikzcd}\n  and monads \(\T, \mathbb S\) on \(\c C, \c D\) respectively, and a functor \(\overline R: \c D^{\mathbb S} \to \c C^\T\) such that\n  \[\n    \begin{tikzcd}\n      \c D^{\mathbb S} \ar[r, \"\overline R\"] \ar[d, \"G^{\mathbb S}\"] & \c C^\T \ar[d, \"G^\T\"] \\\n      \c D \ar[r, \"R\"] & \c C\n    \end{tikzcd}\n  \]\n  commutes up to isomorphism. Suppose also \(\c D^{\mathbb S}\) has reflexive coequalisers. 
Then \(\overline R\) has a left adjoint \(\overline L\).\n\end{theorem}\n\n\begin{proof}\n  Note that if \(\overline L\) exists we must have \(\overline L F^\T \cong F^{\mathbb S} L\) by 3.6. So we'd expect \(\overline L(A, \alpha)\) to be a coequaliser of two morphisms\n  \begin{tikzcd}\n    F^{\mathbb S} LTA \ar[r, shift left, \"F^{\mathbb S}L\alpha\"] \ar[r, shift right, \"?\"'] & F^{\mathbb S}LA.\n  \end{tikzcd}\n  To construct the second morphism, note first that we may assume wlog \(G^\T \overline R = RG^{\mathbb S}\), by transporting \(\T\)-algebra structures along the isomorphism \(G^\T \overline R(B, \beta) \to RB\).\n\n  We obtain \(\phi: TR \to RS\) by starting from\n  \[\n    R \xrightarrow{R\iota} RS = RG^{\mathbb S} F^{\mathbb S} = G^\T \overline R F^{\mathbb S}\n  \]\n  (where \(\iota\) is the unit of \(\mathbb S\)), transposing across \(F^\T \adjoint G^\T\) to get\n  \[\n    F^\T R \to \overline R F^{\mathbb S},\n  \]\n  and finally applying \(G^\T\):\n  \[\n    TR = G^\T F^\T R \xrightarrow{\phi} G^\T \overline R F^{\mathbb S} = RG^{\mathbb S} F^{\mathbb S} = RS.\n  \]\n  Convert it into \(\varphi: LT \to SL\) by\n  \[\n    LT \xrightarrow{LT\gamma} LTRL \xrightarrow{L\phi_L} LRSL \xrightarrow{\delta_{SL}} SL\n  \]\n  where \(\gamma\) and \(\delta\) are the unit and counit of \(L \adjoint R\). Transposing across \(F^{\mathbb S} \adjoint G^{\mathbb S}\), we get \(\overline \varphi: F^{\mathbb S} LT \to F^{\mathbb S} L\). The pair \((F^{\mathbb S} L\alpha, \overline \varphi_A)\) is reflexive, with common splitting \(F^{\mathbb S}L\eta_A\). (\(\overline \varphi_A\) is the question mark in the diagram above.)\n\n  It can be verified (albeit extremely tediously) that the coequaliser of this pair has the universal property we require for \(\overline L(A, \alpha)\).\n\end{proof}\n\n\section{Cartesian closed categories}\n\n\begin{definition}[exponentiable object, cartesian closed category]\index{exponentiable object}\index{category!cartesian closed}\n  Let \(\c C\) be a category with finite products. We say \(A \in \ob \c C\) is \emph{exponentiable} if the functor \((-) \times A: \c C \to \c C\) has a right adjoint \((-)^A\).\n\n  If every object of \(\c C\) is exponentiable, we say \(\c C\) is \emph{cartesian closed}.\n\end{definition}\n\nIntuitively, the exponential object ``internalises'' morphisms: the morphisms \(A \to B\) are packaged as an object \(B^A\) of the category, rather than a set.\n\n\begin{eg}\leavevmode\n  \begin{enumerate}\n  \item \(\Set\) is cartesian closed, with \(B^A = \Set(A, B)\). A function \(f: C \times A \to B\) corresponds to \(\overline f: C \to B^A\) defined by \(\overline f(c)(a) = f(c, a)\).\n  \item \(\c{Cat}\) is cartesian closed with \(\c D^{\c C} = [\c C, \c D]\). In fact we have implicitly used this idea when discussing limits in functor categories.\n  \item In \(\Top\), if an exponential \(Y^X\) exists, its points must be the continuous maps \(X \to Y\). The \emph{compact-open} topology on \(\c{Top}(X, Y)\) has the universal property of an exponential if and only if \(X\) is locally compact.\n\n    Note that finite products of exponentiable objects are exponentiable: since\n    \[\n      (-) \times (A \times B) \cong (- \times A) \times B,\n    \]\n    we have \((-)^{A \times B} \cong ((-)^B)^A\). However, even if \(X\) and \(Y\) are locally compact, \(Y^X\) needn't be. 
So the exponentiable objects don't form a cartesian closed full subcategory.\n  \item A cartesian closed poset is called a \emph{Heyting semilattice}\index{Heyting semilattice}: it's a poset with finite meets \(\wedge\) and a binary operation \(\implies\) satisfying\n    \[\n      a \leq (b \implies c) \text{ if and only if } a \wedge b \leq c.\n    \]\n    For example, a complete poset is a Heyting semilattice if and only if it satisfies the infinite distributive law\n    \[\n      a \wedge \bigvee_{i \in I} b_i = \bigvee_{i \in I} (a \wedge b_i).\n    \]\n    For any topological space \(X\), the lattice \(\mathcal O(X)\) of open sets satisfies this condition, since \(\wedge\) and \(\vee\) coincide with \(\cap\) and \(\cup\). (However it need not satisfy the dual condition: an arbitrary intersection of open sets must be replaced by the interior of the intersection.)\n  \end{enumerate}\n\end{eg}\n\nRecall from example sheet 2 that, if \(B \in \ob \c C\), we define the \emph{slice category}\index{slice category} \(\c C /B\) to have objects which are morphisms \(A \to B\) in \(\c C\) and morphisms which are commutative triangles\n\[\n  \begin{tikzcd}\n    A \ar[r] \ar[dr] & A' \ar[d] \\\n    & B\n  \end{tikzcd}\n\]\nThe forgetful functor \(\c C/B \to \c C\) will be denoted \(\Sigma_B\). If \(\c C\) has finite products, \(\Sigma_B\) has a right adjoint \(B^*\) which sends \(A\) to\n\[\n  A \times B \xrightarrow{\pi_2} B\n\]\nsince morphisms\n\[\n  \begin{tikzcd}\n    C \ar[r, \"{(f, g)}\"] \ar[dr, \"g\"] & A \times B \ar[d, \"\pi_2\"] \\\n    & B\n  \end{tikzcd}\n\]\ncorrespond to morphisms \(f: \Sigma_Bg = C \to A\).\n\n\begin{lemma}\n  If \(\c C\) has all finite limits then an object \(B\) is exponentiable if and only if \(B^*: \c C \to \c C/B\) has a right adjoint \(\Pi_B\).\n\end{lemma}\n\n\begin{proof}\leavevmode\n  \begin{enumerate}\n  \item \(\impliedby\): The composite \(\Sigma_B B^*\) is equal to \((-) \times B\), so we take \((-)^B\) to be \(\Pi_B B^*\).\n  \item \(\implies\): If \(B\) is exponentiable, for any \(f: A \to B\) we define \(\Pi_B(f)\) to be the pullback\n    \[\n      \begin{tikzcd}\n        \Pi_B(f) \ar[r] \ar[d] & A^B \ar[d, \"f^B\"] \\\n        1 \ar[r, \"\overline \pi_2\"] & B^B\n      \end{tikzcd}\n    \]\n    where \(\overline \pi_2\) is the transpose of the projection \(\pi_2: 1 \times B \to B\). Then morphisms \(C \to \Pi_B(f)\) correspond to morphisms \(C \to A^B\) making\n    \[\n      \begin{tikzcd}\n        C \ar[r] \ar[d] & A^B \ar[d, \"f^B\"] \\\n        1 \ar[r, \"\overline \pi_2\"] & B^B\n      \end{tikzcd}\n    \]\n    commute, i.e.\ to morphisms \(C \times B \to A\) making\n    \[\n      \begin{tikzcd}\n        C \times B \ar[r] \ar[dr, \"\pi_2\"'] & A \ar[d, \"f\"] \\\n        & B\n      \end{tikzcd}\n    \]\n    commute.\n  \end{enumerate}\n\end{proof}\n\n\begin{lemma}\n  Suppose \(\c C\) has finite limits. If \(A\) is exponentiable in \(\c C\) then \(B^*A\) is exponentiable in \(\c C/B\) for any \(B\). 
Moreover \(B^*\) preserves exponentials.\n\end{lemma}\n\n\begin{proof}\n  Given an object\n  \begin{tikzcd}\n    C \ar[d, \"f\"] \\\n    B\n  \end{tikzcd}\n  , form the pullback\n  \[\n    \begin{tikzcd}\n      P \ar[r] \ar[d, \"f^{B^*A}\"] & C^A \ar[d, \"f^A\"] \\\n      B \ar[r, \"\overline \pi_1\"] & B^A\n    \end{tikzcd}\n  \]\n  Then for any\n  \begin{tikzcd}\n    D \ar[d, \"g\"] \\\n    B\n  \end{tikzcd}\n  , morphisms \(g \to f^{B^*A}\) in \(\c C/B\) correspond to morphisms \(D \xrightarrow{\overline h} C^A\) making\n  \[\n    \begin{tikzcd}\n      D \ar[r, \"\overline h\"] \ar[d, \"g\"] & C^A \ar[d, \"f^A\"] \\\n      B \ar[r, \"\overline \pi_1\"] & B^A\n    \end{tikzcd}\n  \]\n  commute, and hence to morphisms \(D \times A \xrightarrow{h} C\) making\n  \[\n    \begin{tikzcd}\n      D \times A \ar[r, \"h\"] \ar[dr, \"g\pi_1\"] & C \ar[d, \"f\"] \\\n      & B\n    \end{tikzcd}\n  \]\n  commute. But\n  \[\n    \begin{tikzcd}\n      D \times A \ar[r, \"g \times 1_A\"] \ar[d, \"\pi_1\"] & B \times A \ar[d, \"\pi_1\"] \\\n      D \ar[r, \"g\"] & B\n    \end{tikzcd}\n  \]\n  is a pullback in \(\c C\), i.e.\ a product in \(\c C/B\).\n\n  For the second assertion, note that if \(C \xrightarrow{f} B\) is of the form \(B \times E \xrightarrow{\pi_1} B\) then the pullback defining \(f^{B^*A}\) becomes\n  \[\n    \begin{tikzcd}\n      B \times E^A \ar[r, \"\overline \pi_1 \times 1_{E^A}\"] \ar[d, \"\pi_1\"] & B^A \times E^A \ar[d, \"\pi_1\"] \\\n      B \ar[r, \"\overline \pi_1\"] & B^A\n    \end{tikzcd}\n  \]\n  so \(f^{B^*A} \cong B^*(E^A)\).\n\end{proof}\n\n\begin{remark}\n  \(\c C/B\) is isomorphic to the category of coalgebras for the comonad structure on \((-) \times B\) (5.2c). So the first part of the lemma could be proved using the lifting of adjoints (the last theorem of chapter 5).\n\end{remark}\n\n\begin{definition}[locally cartesian closed]\index{category!locally cartesian closed}\n  We say \(\c C\) is \emph{locally cartesian closed} if it has all finite limits and each \(\c C/B\) is cartesian closed.\n\end{definition}\n\nNote that this includes the fact that \(\c C \cong \c C/1\) is cartesian closed. So the usage is contrary to the normal usage of ``locally'', as it imposes a stronger condition.\n\n\begin{eg}\leavevmode\n  \begin{enumerate}\n  \item \(\Set\) is locally cartesian closed since \(\Set/B \simeq \Set^B\) for any \(B\).\n  \item For any small category \(\c C\), \([\c C, \Set]\) is cartesian closed: by Yoneda\n    \[\n      G^F(A) \cong [\c C, \Set](\c C(A, -), G^F) \cong [\c C, \Set](\c C(A, -) \times F, G)\n    \]\n    so we take the RHS as a definition of \(G^F(A)\) and define \(G^F\) on morphisms \(A \xrightarrow{f} B\) by composition with \(\c C(f, -) \times 1_F\). Note that the class of functors \(H\) for which we have\n    \[\n      [\c C, \Set](H, G^F) \cong [\c C, \Set](H \times F, G)\n    \]\n    is closed under colimits. But every functor \(\c C \to \Set\) is a colimit of representables.\n\n    In fact \([\c C, \Set]\) is locally cartesian closed since all its slice categories \([\c C, \Set]/F\) are of the same form. 
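(Explicitly, \([\c C, \Set]/F \simeq [\textstyle\int F, \Set]\), where \(\int F\) is the \emph{category of elements} of \(F\): its objects are pairs \((A, x)\) with \(x \in FA\), and its morphisms \((A, x) \to (B, y)\) are morphisms \(f: A \to B\) in \(\c C\) with \((Ff)(x) = y\).) 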
See example sheet 4 Q6.\n  \item Any Heyting semilattice \(H\) is locally cartesian closed since \(H/b \cong \downarrow(b)\), the poset of elements \(\leq b\), and \(b^* = (-) \wedge b\) has a right adjoint, namely \(b \implies (-)\).\n  \item \(\c{Cat}\) is \emph{not} locally cartesian closed, since not all strong epis are regular. cf.\ example sheet 3 Q6.\n  \end{enumerate}\n\end{eg}\n\nNote that given\n\begin{tikzcd}\n  A \ar[d, \"f\"] \\\n  B\n\end{tikzcd}\nin \(\c C/B\), the iterated slice \((\c C/B)/f\) is isomorphic to \(\c C/A\), and this identifies \(f^*: \c C/B \to (\c C/B)/f\) with the operation of pulling back morphisms along \(f\). So by 6.3 \(\c C\) is locally cartesian closed if and only if it has finite limits and \(f^*: \c C/B \to \c C/A\) has a right adjoint \(\Pi_f\) for every \(A \xrightarrow{f} B\) in \(\c C\). This can be taken as the definition of a locally cartesian closed category.\n\n\begin{theorem}\n  Suppose \(\c C\) is locally cartesian closed and has reflexive coequalisers. Then every morphism \(A \xrightarrow{f} B\) factors as\n  \[\n    A \stackrel{q}{\epi} I \stackrel{m}{\mono} B\n  \]\n  where \(q\) is regular epic and \(m\) is monic.\n\end{theorem}\n\n\begin{proof}\n  First form the pullback\n  \[\n    \begin{tikzcd}\n      R \ar[r, \"a\"] \ar[d, \"b\"] & A \ar[d, \"f\"] \\\n      A \ar[r, \"f\"] & B\n    \end{tikzcd}\n  \]\n  and then form the coequaliser\n  \[\n    \begin{tikzcd}\n      R \ar[r, shift left, \"a\"] \ar[r, shift right, \"b\"'] & A \ar[r, \"q\"] & I\n    \end{tikzcd}\n  \]\n  (which exists, the kernel pair \((a, b)\) being reflexive). Since \(fa = fb\), \(f\) factors as\n  \[\n    \begin{tikzcd}\n      A \ar[r, \"q\"] & I \ar[r, \"m\"] & B.\n    \end{tikzcd}\n  \]\n  Suppose given\n  \begin{tikzcd}\n    D \ar[r, shift left, \"g\"] \ar[r, shift right, \"h\"'] & I\n  \end{tikzcd}\n  with \(mg = mh\), form the pullback\n  \[\n    \begin{tikzcd}\n      E \ar[r, \"n\"] \ar[d, \"{(k, \ell)}\"] & D \ar[d, \"{(g, h)}\"] \\\n      A \times A \ar[r, \"q \times q\"] & I \times I\n    \end{tikzcd}\n  \]\n  Since \(q \times q\) factors as \((q \times 1_I)(1_A \times q)\) and both factors are pullbacks of \(q\) (which pullback functors preserve, having right adjoints), it is an epimorphism and so is \(n\). Now\n  \[\n    fk = mqk = mgn = mhn = mq\ell = f\ell\n  \]\n  so there exists \(E \xrightarrow{p} R\) with \(ap = k, bp = \ell\).\n\n  Now\n  \[\n    qk = qap = qbp = q\ell,\n  \]\n  i.e.\ \(gn = hn\). But \(n\) is epic so \(g = h\). Hence \(m\) is monic.\n\end{proof}\n\nNote that this implies any strong epi \(A \xrightarrow{f} B\) is regular, since the monic part of its image factorisation is an isomorphism. In particular, regular epimorphisms are stable under composition.\n\n\begin{definition}\n  If \(\c C\) and \(\c D\) are cartesian closed categories and \(F: \c C \to \c D\) preserves products then for each pair of objects \((A, B)\) of \(\c C\), we get a natural morphism \(\theta: F(B^A) \to FB^{FA}\), namely the transpose of\n  \[\n    F(B^A) \times FA \cong F(B^A \times A) \xrightarrow{F(\ev)} FB\n  \]\n  where \(\ev\) is the counit of \((-) \times A \adjoint (-)^A\).\n\n  We say \(F\) is a \emph{cartesian closed functor} if \(\theta\) is an isomorphism for every pair \((A, B)\). 
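For example, for any group \(G\) the forgetful functor \([G, \Set] \to \Set\) preserves products and is cartesian closed: as we will see when discussing toposes below, the exponential \(B^A\) in \([G, \Set]\) has underlying set the set of \emph{all} functions \(A \to B\), and \(\theta\) is a bijection. 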
Note that the second part of 6.4 says that if \(\c C\) is locally cartesian closed then \(f^*: \c C/B \to \c C/A\) is a cartesian closed functor for any \(A \xrightarrow{f} B\).\n\end{definition}\n\n\begin{theorem}\n  Let \(\c C\) and \(\c D\) be cartesian closed categories and \(F: \c C \to \c D\) a functor having a left adjoint \(L\). Then \(F\) is cartesian closed if and only if the canonical morphism \(\varphi_{A, B}\)\n  \[\n    \begin{tikzcd}\n      L(B \times FA) \ar[r, \"{(L\pi_1, L\pi_2)}\"] & LB \times LFA \ar[r, \"1_{LB} \times \varepsilon_A\"] & LB \times A\n    \end{tikzcd}\n  \]\n  is an isomorphism for all \(A \in \ob \c C, B \in \ob \c D\). This condition is called \emph{Frobenius reciprocity}\index{Frobenius reciprocity}.\n\end{theorem}\n\nNote that if \(\c C\) is locally cartesian closed then \(f^*: \c C/B \to \c C/A\) has a left adjoint \(\Sigma_f\) given by composition with \(f\), and it's easy to verify that\n\[\n  \Sigma_f(g \times f^*h) \cong \Sigma_f g \times h.\n\]\n\n\begin{proof}\leavevmode\n  \begin{itemize}\n  \item \(\implies\): Given an inverse for \(\theta: F(C^A) \to FC^{FA}\), we define \(\varphi_{A, B}^{-1}\) to be the composite\n    \[\n      \begin{tikzcd}\n        LB \times A \ar[r, \"L\lambda \times 1\"] & L((B \times FA)^{FA}) \times A \ar[r, \"L(\eta^{FA}) \times 1\"] & L(FL(B \times FA)^{FA}) \times A \ar[d, \"L\theta^{-1} \times 1\"] \\\n        L(B \times FA) & L(B \times FA)^A \times A \ar[l, \"\operatorname{ev}\"] & LF(L(B \times FA)^A) \times A \ar[l, \"\varepsilon \times 1\"]\n      \end{tikzcd}\n    \]\n    The verification is a tedious exercise.\n  \item \(\impliedby\): Given an inverse for \(\varphi\), we define \(\theta^{-1}\) to be\n    \[\n      \begin{tikzcd}\n        F((L(FC^{FA}) \times A)^A) \ar[d, \"F((\varphi^{-1})^A)\"] & FL(FC^{FA}) \ar[l, \"F\lambda\"] & FC^{FA} \ar[l, \"\eta\"] \\\n        F((L(FC^{FA} \times FA))^A) \ar[r, \"F((L\operatorname{ev})^A)\"] & F((LFC)^A) \ar[r, \"F(\varepsilon^A)\"] & F(C^A).\n      \end{tikzcd}\n    \]\n  \end{itemize}\n\end{proof}\n\n\begin{corollary}\n  Suppose \(\c C\) and \(\c D\) are cartesian closed, and \(F: \c C \to \c D\) has a left adjoint \(L\) which preserves finite products. Then \(F\) is cartesian closed if and only if \(F\) is full and faithful.\n\end{corollary}\n\n\begin{proof}\leavevmode\n  \begin{itemize}\n  \item \(\implies\): \(L\) preserves \(1\), so if we substitute \(B = 1\) in the definition of \(\varphi\) above, we get \(LFA \xrightarrow{\varepsilon_A} A\). But \(\varepsilon\) is an isomorphism if and only if \(F\) is full and faithful.\n  \item \(\impliedby\): If \(L\) preserves binary products and \(\varepsilon\) is an isomorphism, then both factors in the definition of \(\varphi\) are isomorphisms.\n  \end{itemize}\n\end{proof}\n\n\begin{definition}[exponential ideal]\index{exponential ideal}\n  Let \(\c C\) be a cartesian closed category. By an \emph{exponential ideal} of \(\c C\) we mean a class of objects (or a full subcategory) \(\mathcal E\) such that \(B \in \mathcal E\) implies \(B^A \in \mathcal E\) for all \(A \in \ob \c C\).\n\end{definition}\n\n\begin{eg}\leavevmode\n  \begin{enumerate}\n  \item We say \(A\) is \emph{subterminal}\index{subterminal} if \(A \to 1\) is monic. 
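(In \(\Set\) the subterminal objects are \(\emptyset\) and the singletons, i.e.\ the sets with at most one element.) 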
In any cartesian closed category \(\c C\), the class \(\operatorname{Sub}_{\c C}(1)\) is an exponential ideal: \(A\) is subterminal if and only if there is at most one morphism \(B \to A\) for any \(B\), if and only if there is at most one morphism \(C \times B \to A\) for any \(B\) and \(C\), so by adjunction if and only if there is at most one morphism \(C \to A^B\) for any \(B\) and \(C\), if and only if \(A^B\) is subterminal for every \(B\).\n\n    More generally, if \(\c C\) is locally cartesian closed then \(\operatorname{Sub}_{\c C}(A)\) is an exponential ideal in \(\c C/A\) for any \(A\). If \(\c C\) also satisfies the hypotheses of 6.7 then \(\operatorname{Sub}_{\c C}(A)\) is reflective in \(\c C/A\).\n  \item Let \(X\) be a topological space. By a \emph{presheaf}\index{presheaf} on \(X\) we mean a functor \(F: \mathcal O(X)^{\text{op}} \to \Set\) where \(\mathcal O(X)\) is the partial order of open subsets of \(X\). So \(F\) consists of sets \(F(U)\) for each open \(U\) and restriction maps \(F(U) \to F(V): x \mapsto x|_V\) whenever \(V \subseteq U\).\n\n    We say \(F\) is a \emph{sheaf}\index{sheaf} if whenever \(U = \bigcup_{i \in I} U_i\) and we're given \(x_i \in F(U_i)\) for each \(i\), such that\n    \[\n      x_i|_{U_i \cap U_j} = x_j|_{U_i \cap U_j}\n    \]\n    for all \((i, j)\), then there exists a unique \(x \in F(U)\) such that \(x|_{U_i} = x_i\) for all \(i\).\n\n    We write \(\Sh(X) \subseteq [\mathcal O(X)^{\text{op}}, \Set]\) for the full subcategory of sheaves. We'll show \(\Sh(X)\) is an exponential ideal: given presheaves \(F, G\), \(G^F(U)\) is the set of natural transformations \(F \times \mathcal O(X) (-, U) \to G\) or equivalently, the set of natural transformations \(F|_U \to G|_U\) where \(F|_U: \mathcal O(U)^{\text{op}} \to \Set\) is the presheaf obtained by restricting \(F\) to open sets contained in \(U\). Now suppose \(G\) is a sheaf. Suppose \(U = \bigcup_{i \in I} U_i\) and suppose given \(\alpha_i: F|_{U_i} \to G|_{U_i}\) for each \(i\) such that\n    \[\n      \alpha_i|_{U_i \cap U_j} = \alpha_j|_{U_i \cap U_j}\n    \]\n    for all \(i, j\). Given \(x \in F(V)\) where \(V \subseteq U\), write \(V_i = V \cap U_i\), then \(V = \bigcup_{i \in I} V_i\). The elements \(x|_{V_i}\), \(i \in I\), satisfy the compatibility condition and hence so do the elements \((\alpha_i)_{V_i} (x|_{V_i}) \in G(V_i)\). So there's a unique \(y \in G(V)\) such that\n    \[\n      y|_{V_i} = (\alpha_i)_{V_i} (x|_{V_i})\n    \]\n    for all \(i\), and we define this to be \(\alpha_V(x)\). This defines a natural transformation \(\alpha: F|_U \to G|_U\), and it's the unique one whose restriction to \(U_i\) is \(\alpha_i\) for each \(i\).\n\n    Note that since \(\Sh(X)\) is closed under finite products (and in fact all limits) in \([\mathcal O(X)^{\text{op}}, \Set]\), it is itself cartesian closed.\n  \end{enumerate}\n\end{eg}\n\n\begin{lemma}\n  Suppose \(\c C\) is cartesian closed and \(\c D \subseteq \c C\) is a (full) reflective subcategory, with reflector \(L: \c C \to \c D\). Then \(\c D\) is an exponential ideal if and only if \(L\) preserves binary products.\n\end{lemma}\n\n\begin{proof}\leavevmode\n  \begin{itemize}\n  \item \(\implies\): Suppose \(A, B \in \ob \c C, C \in \ob \c D\). 
Then we have bijections\n    \begin{align*}\n      A \times B &\to C \\\n      A &\to C^B \\\n      LA &\to C^B \\\n      LA \times B &\to C \\\n      B &\to C^{LA} \\\n      LB &\to C^{LA} \\\n      LA \times LB &\to C\n    \end{align*}\n    so \(LA \times LB\) has the universal property of \(L(A \times B)\).\n  \item \(\impliedby\): Suppose \(B \in \ob \c D, A, C \in \ob \c C\). We have bijections\n    \begin{align*}\n      C &\to B^A \\\n      C \times A &\to B \\\n      LC \times LA \cong L(C \times A) &\to B \\\n      L(LC \times A) &\to B \\\n      LC \times A &\to B \\\n      LC &\to B^A\n    \end{align*}\n    so every \(C \to B^A\) factors through \(C \to LC\), hence \(B^A \in \ob \c D\).\n  \end{itemize}\n\end{proof}\n\n\section{Toposes}\n\nThe notion of topos has its origin in the French school of algebraic geometry. In 1963, when studying cohomology in geometry, Grothendieck studied toposes as categories of ``generalised sheaves''. As sheaves can be seen as representations of spaces, and properties of a space can be detected from its sheaves, toposes became generalised spaces. Thus the name topos: something more fundamental than topology.\n\nJ.\ Giraud gave a characterisation of such categories by (set-theoretic) categorical properties. F.\ W.\ Lawvere and M.\ Tierney (1969--1970) investigated the elementary categorical properties of these categories and came up with the elementary definition. In fact a Grothendieck topos is exactly a Lawvere--Tierney topos which is (co)complete and locally small, and has a separating set of objects (which is what Grothendieck should, but didn't, come up with, by the way).\n\n\begin{definition}[subobject classifier, topos, logical functor]\index{subobject classifier}\index{topos}\index{logical functor}\leavevmode\n  \begin{enumerate}\n  \item Let \(\mathcal E\) be a category with finite limits. A \emph{subobject classifier} for \(\mathcal E\) is a monomorphism \(\top: \Omega' \mono \Omega\) such that for every mono \(m: A' \mono A\) in \(\mathcal E\), there is a unique \(\chi_m: A \to \Omega\) for which there is a pullback square\n    \[\n      \begin{tikzcd}\n        A' \ar[r] \ar[d, \"m\", tail] & \Omega' \ar[d, \"\top\", tail] \\\n        A \ar[r, \"\chi_m\"] & \Omega\n      \end{tikzcd}\n    \]\n    Note that, for any \(A\), there is a unique \(A \to \Omega\) which factors through \(\top: \Omega' \mono \Omega\), so the domain of \(\top\) is actually a terminal object.\n\n    If \(\mathcal E\) is well-powered, we have a functor \(\operatorname{Sub}_{\mathcal E}: \mathcal E^{\text{op}} \to \Set\) sending \(A\) to the set of (isomorphism classes of) subobjects of \(A\) and acting on morphisms by pullback, and a subobject classifier is a representation of this functor.\n  \item A \emph{topos} is a category which has finite limits, is cartesian closed and has a subobject classifier.\n  \item If \(\mathcal E\) and \(\mathcal F\) are toposes, a \emph{logical functor} \(F: \mathcal E \to \mathcal F\) is one which preserves finite limits, exponentials and the subobject classifier.\n  \end{enumerate}\n\end{definition}\n\n\begin{eg}\leavevmode\n  \begin{enumerate}\n  \item \(\Set\) is a topos, with \(\Omega = \{0, 1\}\) and \(\top: 1 \to \{0, 1\}\) picking out \(1\). 
Have, for a subobject \(A' \subseteq A\),\n    \[\n      \chi_{A'}(a) =\n      \begin{cases}\n        1 & a \in A' \\\n        0 & a \notin A'\n      \end{cases}\n    \]\n    The category \(\Set_f\) of finite sets is also a topos, as is the category \(\Set_\kappa\) of sets of cardinality \(< \kappa\), where \(\kappa\) is an infinite cardinal such that if \(\lambda < \kappa\) then \(2^\lambda < \kappa\) (such \(\kappa\) exist: take any strong limit cardinal, e.g.\ \(\beth_\omega\)).\n  \item For any small category \(\c C\), \([\c C^{\text{op}}, \Set]\) is a topos: we've seen that it's cartesian closed, and \(\Omega\) is determined by Yoneda:\n    \[\n      \Omega(A) \cong [\c C^{\text{op}}, \Set](\c C(-, A), \Omega) \cong \{\text{subfunctors of } \c C(-, A)\}.\n    \]\n    So we define \(\Omega(A)\) to be the set of \emph{sieves}\index{sieve} on \(A\), i.e.\ sets \(R\) of morphisms with codomain \(A\) such that if \(f \in R\) then \(fg \in R\) for any \(g\).\n\n    Given \(f: B \to A\) and a sieve \(R\) on \(A\), we define \(f^*R\) to be the set of \(g\) with codomain \(B\) such that \(fg \in R\). This makes \(\Omega\) into a functor \(\c C^{\text{op}} \to \Set\). \(\top: 1 \to \Omega\) is defined by\n    \[\n      \top_A(*) = \{\text{all morphisms with codomain } A\}.\n    \]\n    Given a subfunctor \(m: F' \mono F\), we define \(\chi_m: F \to \Omega\) by\n    \[\n      (\chi_m)_A(x) = \{f: B \to A: Ff(x) \in F'(B)\}.\n    \]\n    This is the unique natural transformation making\n    \[\n      \begin{tikzcd}\n        F' \ar[r] \ar[d, \"m\", tail] & 1 \ar[d, \"\top\", tail] \\\n        F \ar[r, \"\chi_m\"] & \Omega\n      \end{tikzcd}\n    \]\n    a pullback.\n  \item For any space \(X\), \(\Sh(X)\) is a topos. It's cartesian closed. For the subobject classifier we take\n    \[\n      \Omega(U) = \{V \in \mathcal O(X): V \subseteq U\}\n    \]\n    and \(\Omega(U' \to U)\) is the map \(V \mapsto V \cap U'\). \(\Omega\) is a sheaf since if we have \(U = \bigcup_{i \in I} U_i\) and \(V_i \subseteq U_i\) such that \(V_i \cap U_j = V_j \cap U_i\) for each \(i, j\) then \(V = \bigcup_{i \in I} V_i\) is the unique open subset of \(U\) with \(V \cap U_i = V_i\) for each \(i\).\n\n    If \(m: F' \mono F\) is a subsheaf then for any \(x \in F(U)\) the sieve\n    \[\n      \{V \subseteq U: x|_V \in F'(V)\}\n    \]\n    has a greatest element since \(F'\) is a sheaf. So we define \(\chi_m: F \to \Omega\) to send \(x\) to this element.\n  \item Let \(\c C\) be a group \(G\). The topos structure on \([G, \Set]\) is particularly simple: \(B^A\) is the set of all \(G\)-equivariant maps \(f: A \times G \to B\), but such an \(f\) is determined by its values at elements of the form \((a, 1)\) since \(f(a, g) = g . f(g^{-1} . a, 1)\), and this restriction can be any mapping \(A \times \{1\} \to B\). So we can take \(B^A\) to be the set of functions \(A \to B\) with \(G\) acting by\n    \[\n      (g . f) (a) = g . f(g^{-1} . a)\n    \]\n    and \(\Omega = \{0, 1\}\) with trivial \(G\)-action. So the forgetful functor \([G, \Set] \to \Set\) is logical, as is the functor which equips a set \(A\) with the trivial \(G\)-action.\n\n    Moreover, even if \(G\) is infinite, \([G, \Set_f]\) is a topos and the inclusion \([G, \Set_f] \to [G, \Set]\) is logical. 
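(The description of \(\Omega\) agrees with 2 above: a group, viewed as a one-object category, has exactly two sieves on its unique object, namely \(\emptyset\) and the set of all morphisms, and both are fixed by the action \(g^*\).) 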
Similarly if \(\mathcal G\) is a large group (i.e.\ the underlying collection of elements need not be a set), then \([\mathcal G, \Set]\) is a topos.\n  \item Let \(\c C\) be a category such that every \(\c C/A\) is equivalent to a finite category. Then \([\c C^{\text{op}}, \Set_f]\) is a topos. Similarly if \(\c C\) is large but all \(\c C/A\) are small, then \([\c C^{\text{op}}, \Set]\) is a topos. In particular \([\c{On}, \Set]\) is a topos, but it's not locally small.\n  \end{enumerate}\n\end{eg}\n\n\begin{lemma}\n  Suppose \(\mathcal E\) has finite limits and a subobject classifier. Then every monomorphism in \(\mathcal E\) is regular. In particular \(\mathcal E\) is balanced.\n\end{lemma}\n\n\begin{proof}\n  The universal monomorphism \(\top: 1 \mono \Omega\) is split and hence regular. But any pullback of a regular mono is regular: if \(f\) is an equaliser of \((g, h)\) then \(k^*(f)\) is an equaliser of \((gk, hk)\). The second assertion follows since a regular mono that is also epic is an isomorphism.\n\end{proof}\n\nGiven an object \(A\) in a topos \(\mathcal E\), we write \(PA\) for the exponential \(\Omega^A\), and \(\ni_A \mono PA \times A\) for the subobject corresponding to \(\ev: PA \times A \to \Omega\). This has the property that, for any \(B\) and any \(m: R \mono B \times A\), there is a unique \(\ulcorner m \urcorner: B \to PA\) such that\n\[\n  \begin{tikzcd}\n    R \ar[r] \ar[d, \"m\", tail] & \ni_A \ar[d, tail] \\\n    B \times A \ar[r, \"\ulcorner m \urcorner \times 1_A\"] & PA \times A\n  \end{tikzcd}\n\]\nis a pullback.\n\n\begin{definition}[power-object]\index{power-object}\index{topos}\index{logical functor}\n  By a \emph{power-object} for \(A\) in a category \(\mathcal E\) with finite limits, we mean an object \(PA\) equipped with \(\ni_A \mono PA \times A\) satisfying the above.\n\n  We say \(\mathcal E\) is a \emph{weak topos} if every \(A \in \ob \mathcal E\) has a power-object.\n\n  Similarly we say \(F: \mathcal E \to \mathcal F\) is \emph{weakly logical} if \(F(\ni_A) \mono F(PA) \times FA\) is a power-object for \(FA\) for every \(A \in \ob \mathcal E\).\n\end{definition}\n\nA power-object for the terminal object is the same as a subobject classifier.\n\nThis is an ad hoc definition, and we will soon show that a weak topos is cartesian closed, so that we can safely drop the adjective ``weak''. Consequently this may be taken as the definition of a topos.\n\n\begin{lemma}\n  \(P\) is a functor \(\mathcal E^{\text{op}} \to \mathcal E\). Moreover it is self-adjoint on the right.\n\end{lemma}\n\nCompare this with the contravariant power-set functor on \(\Set\), which is a special case.\n\n\begin{proof}\n  Given \(f: A \to B\), we define \(Pf: PB \to PA\) to correspond to the pullback\n  \[\n    \begin{tikzcd}\n      E_f \ar[r] \ar[d, tail] & \ni_B \ar[d, tail] \\\n      PB \times A \ar[r, \"1 \times f\"] & PB \times B\n    \end{tikzcd}\n  \]\n  For any \(\ulcorner m \urcorner: C \to PB\), it's easy to see that \((Pf) \ulcorner m \urcorner\) corresponds to \((1_C \times f)^* (m)\), hence \(f \mapsto Pf\) is functorial. For any \(A\) and \(B\), we have a bijection between subobjects of \(A \times B\) and of \(B \times A\) given by composition with \((\pi_2, \pi_1): A \times B \to B \times A\). 
This yields a (natural) bijection between morphisms \(A \to PB\) and \(B \to PA\).\n\end{proof}\n\nWe write \(\{\}_A: A \to PA\) (pronounced ``singleton'')\index{singleton} for the morphism corresponding to \((1_A, 1_A): A \mono A \times A\).\n\n\begin{lemma}\n  Given \(f: A \to B\), \(\{\}_Bf\) corresponds to \((1_A, f): A \mono A \times B\) and \((Pf) \{\}_B\) corresponds to \((f, 1_A): A \mono B \times A\).\n\end{lemma}\n\n\begin{proof}\n  The square\n  \[\n    \begin{tikzcd}\n      A \ar[r, \"f\"] \ar[d, \"{(1, f)}\", tail] & B \ar[d, \"{(1, 1)}\", tail] \\\n      A \times B \ar[r, \"f \times 1\"] & B \times B\n    \end{tikzcd}\n  \]\n  is a pullback. Similarly for the second assertion.\n\end{proof}\n\n\begin{corollary}\leavevmode\n  \begin{enumerate}\n  \item \(\{\}_A: A \to PA\) is monic.\n  \item \(P\) is faithful.\n  \end{enumerate}\n\end{corollary}\n\n\begin{proof}\leavevmode\n  \begin{enumerate}\n  \item If \(\{\}f = \{\}g\) then \((1_A, f)\) and \((1_A, g)\) are isomorphic as subobjects of \(A \times B\), which forces \(f = g\).\n  \item Similarly if \(Pf = Pg\) then \((Pf)\{\} = (Pg)\{\}\) so we again deduce \(f = g\).\n  \end{enumerate}\n\end{proof}\n\nGiven a mono \(f: A \mono B\) in \(\mathcal E\), we define \(\exists f: PA \to PB\) to correspond to the composite\n\[\n  \begin{tikzcd}\n    \ni_A \ar[r, tail] & PA \times A \ar[r, \"1 \times f\", tail] & PA \times B.\n  \end{tikzcd}\n\]\nThen for any \(\ulcorner m \urcorner: C \to PA\), \((\exists f) \ulcorner m \urcorner\) corresponds to\n\[\n  \begin{tikzcd}\n    R \ar[r, \"m\", tail] & C \times A \ar[r, \"1 \times f\", tail] & C \times B\n  \end{tikzcd}\n\]\nso \(f \mapsto \exists f\) is a functor \(\c{Mono}(\mathcal E) \to \mathcal E\).\n\n\begin{lemma}[Beck-Chevalley condition]\index{Beck-Chevalley condition}\n  Suppose\n  \[\n    \begin{tikzcd}\n      D \ar[r, \"h\"] \ar[d, \"k\", tail] & A \ar[d, \"f\", tail] \\\n      B \ar[r, \"g\"] & C\n    \end{tikzcd}\n  \]\n  is a pullback with \(f\) monic. Then the diagram\n  \[\n    \begin{tikzcd}\n      PA \ar[r, \"\exists f\"] \ar[d, \"Ph\"] & PC \ar[d, \"Pg\"] \\\n      PD \ar[r, \"\exists k\"] & PB\n    \end{tikzcd}\n  \]\n  commutes.\n\end{lemma}\n\n\begin{proof}\n  Consider the diagram\n  \[\n    \begin{tikzcd}\n      E \ar[r] \ar[d, tail] & \ni_A \ar[d, tail] \\\n      PA \times D \ar[r, \"1 \times h\"] \ar[d, \"1 \times k\"] & PA \times A \ar[d, \"1 \times f\"] \\\n      PA \times B \ar[r, \"1 \times g\"] & PA \times C\n    \end{tikzcd}\n  \]\n  The lower square is a pullback since the given square is, and the upper square is a pullback by definition of \(Ph\); hence the composite rectangle is a pullback. But this says precisely that \((Pg)(\exists f)\) and \((\exists k)(Ph)\) name the same subobject of \(PA \times B\).\n\end{proof}\n\n\begin{theorem}[Par\'e]\index{Par\'e theorem}\n  The functor \(P: \mathcal E^{\text{op}} \to \mathcal E\) is monadic.\n\end{theorem}\n\n\begin{proof}\n  It has a left adjoint \(P: \mathcal E \to \mathcal E^{\text{op}}\) by 7.5 (the ``self-adjoint on the right'' lemma). It's faithful by 7.7 (ii) and hence reflects isomorphisms by 7.3. \(\mathcal E^{\text{op}}\) has coequalisers since \(\mathcal E\) has equalisers. 
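It remains to check the hypothesis on reflexive coequalisers: a reflexive coequaliser diagram in \(\mathcal E^{\text{op}}\) is a coreflexive equaliser diagram in \(\mathcal E\), so we must show that \(P\) takes coreflexive equalisers in \(\mathcal E\) to coequalisers in \(\mathcal E\). 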
Suppose\n  \[\n    \begin{tikzcd}\n      A \ar[r, shift left, \"f\"] \ar[r, shift right, \"g\"'] & B \ar[l, \"r\"']\n    \end{tikzcd}\n  \]\n  is a coreflexive pair in \(\mathcal E\), then \(f\) and \(g\) are (split) monic and the equaliser \(e: E \to A\) makes\n  \[\n    \begin{tikzcd}\n      E \ar[r, \"e\"] \ar[d, \"e\", tail] & A \ar[d, \"g\", tail] \\\n      A \ar[r, \"f\"] & B\n    \end{tikzcd}\n  \]\n  a pullback square. Since any cone over\n  \[\n    \begin{tikzcd}\n      & A \ar[d, \"g\"] \\\n      A \ar[r, \"f\"] & B\n    \end{tikzcd}\n  \]\n  has both legs equal, the Beck-Chevalley condition gives \((Pf)(\exists g) = (\exists e)(Pe)\). But we also have \((Pg) (\exists g) = 1_{PA}\) since\n  \[\n    \begin{tikzcd}\n      A \ar[r, \"1\"] \ar[d, \"1\"] & A \ar[d, \"g\"] \\\n      A \ar[r, \"g\"] & B\n    \end{tikzcd}\n  \]\n  is a pullback, and similarly \((Pe)(\exists e) = 1_{PE}\). So\n  \[\n    \begin{tikzcd}\n      PB \ar[r, shift left, \"Pf\"] \ar[r, shift right, \"Pg\"'] & PA \ar[l, bend left=50, \"\exists g\"] \ar[r, \"Pe\"] & PE \ar[l, bend left=50, \"\exists e\"]\n    \end{tikzcd}\n  \]\n  is a split coequaliser and in particular a coequaliser. Hence by 5.13 \(P\) is monadic.\n\end{proof}\n\n\begin{corollary}\leavevmode\n  \begin{enumerate}\n  \item A weak topos has finite colimits. Moreover if it has any infinite limits then it has the corresponding colimits. In particular if it is complete then it is cocomplete.\n  \item If a weakly logical functor has a left adjoint then it has a right adjoint.\n  \end{enumerate}\n\end{corollary}\n\n\begin{proof}\leavevmode\n  \begin{enumerate}\n  \item \(P\) creates all limits which exist, by 5.4 (1).\n  \item By definition if \(F\) is weakly logical then\n    \[\n      \begin{tikzcd}\n        \mathcal E^{\text{op}} \ar[r, \"F\"] \ar[d, \"P\"] & \mathcal F^{\text{op}} \ar[d, \"P\"] \\\n        \mathcal E \ar[r, \"F\"] & \mathcal F\n      \end{tikzcd}\n    \]\n    commutes up to isomorphism. So this follows from 5.16.\n  \end{enumerate}\n\end{proof}\n\n\begin{lemma}\n  Let \(\mathcal E\) be a category with finite limits and suppose \(A \in \ob \mathcal E\) has a power-object \(PA\). Then, for any \(B\), \(B^*(PA)\) is a power-object for \(B^*A\) in \(\mathcal E/B\).\n\end{lemma}\n\n\begin{proof}\n  Given\n  \begin{tikzcd}\n    C \ar[d, \"g\"] \\\n    B\n  \end{tikzcd}\n  we have a pullback square\n  \[\n    \begin{tikzcd}\n      C \times A \ar[r, \"g \times 1\"] \ar[d, \"\pi_1\"] & B \times A \ar[d, \"\pi_1\"] \\\n      C \ar[r, \"g\"] & B\n    \end{tikzcd}\n  \]\n  so \(\Sigma_B(g \times B^*A) \cong C \times A\). Hence\n  \[\n    \Sub_{\mathcal E/B} (g \times B^*A) \cong \Sub_{\mathcal E} (C \times A),\n  \]\n  but if \(h: C \to PA\) corresponds to\n  \begin{tikzcd}\n    R \ar[d, tail] \\\n    C \times A\n  \end{tikzcd}\n  then the upper square of the diagram\n   \[\n     \begin{tikzcd}\n       R \ar[rr] \ar[d, tail] & & B \times \ni_A \ar[d, tail] \\\n       C \times A \ar[rr, \"{(g, h) \times 1_A}\"] \ar[dr, \"g\pi_1\"'] & & B \times PA \times A \ar[dl, \"\pi_1\"] \\\n         & B\n       \end{tikzcd}\n     \]\n    is a pullback. 
\n\\begin{lemma}\n  Let \\(\\mathcal E\\) be a category with finite limits and suppose \\(A \\in \\ob \\mathcal E\\) has a power-object \\(PA\\). Then, for any \\(B\\), \\(B^*(PA)\\) is a power-object for \\(B^*A\\) in \\(\\mathcal E/B\\).\n\\end{lemma}\n\n\\begin{proof}\n  Given\n  \\begin{tikzcd}\n    C \\ar[d, \"g\"] \\\\\n    B\n  \\end{tikzcd}\n  we have a pullback square\n  \\[\n    \\begin{tikzcd}\n      C \\times A \\ar[r, \"g \\times 1\"] \\ar[d, \"\\pi_1\"] & B \\times A \\ar[d, \"\\pi_1\"] \\\\\n      C \\ar[r, \"g\"] & B\n    \\end{tikzcd}\n  \\]\n  so \\(\\Sigma_B(g \\times B^*A) \\cong C \\times A\\). Hence\n  \\[\n    \\Sub_{\\mathcal E/B} (g \\times B^*A) \\cong \\Sub_{\\mathcal E} (C \\times A),\n  \\]\n  but if \\(h: C \\to PA\\) corresponds to\n  \\begin{tikzcd}\n    R \\ar[d, tail] \\\\\n    C \\times A\n  \\end{tikzcd}\n  then the upper square of the diagram\n   \\[\n     \\begin{tikzcd}\n       R \\ar[rr] \\ar[d, tail] & & B \\times \\ni_A \\ar[d, tail] \\\\\n       C \\times A \\ar[rr, \"{(g, h) \\times 1_A}\"] \\ar[dr, \"g\\pi_1\"'] & & B \\times PA \\times A \\ar[dl, \"\\pi_1\"] \\\\\n         & B\n       \\end{tikzcd}\n     \\]\n    is a pullback. So\n    \\begin{tikzcd}\n      B \\times PA \\ar[d, \"\\pi_1\"] \\\\\n      B\n    \\end{tikzcd}\n    equipped with \\(B^*(\\ni_A) \\mono B^*(PA \\times A)\\) is a power object for \\(B^*A\\).\n\\end{proof}\n\n\\begin{theorem}\n  Suppose \\(\\mathcal E\\) is a weak topos. Then for any \\(B \\in \\ob \\mathcal E\\), \\(\\mathcal E/B\\) is a weak topos and \\(B^*: \\mathcal E \\to \\mathcal E/B\\) is weakly logical.\n\\end{theorem}\n\n\\begin{proof}\n  The second assertion follows from the previous lemma. For the first, we need to construct a power object for an arbitrary\n  \\begin{tikzcd}\n    A \\ar[d, \"f\"] \\\\\n    B\n  \\end{tikzcd}\n  in \\(\\mathcal E/B\\). For any \\(g: C \\to B\\), the pullback\n  \\[\n    \\begin{tikzcd}\n      \\Sigma_B(g \\times f) \\ar[r] \\ar[d] & A \\ar[d, \"f\"] \\\\\n      C \\ar[r, \"g\"] & B\n    \\end{tikzcd}\n  \\]\n  is a subobject of \\(C \\times A\\), namely the equaliser of\n  \\[\n    \\begin{tikzcd}\n      C \\times A \\ar[r, shift left, \"g\\pi_1\"] \\ar[r, shift right, \"f\\pi_2\"'] & B\n    \\end{tikzcd}\n  \\]\n\n  Define \\(\\wedge: PA \\times PA \\to PA\\) to correspond to the intersection of \\(\\pi_{13}^*(\\ni_A \\mono PA \\times A)\\) and \\(\\pi_{23}^*(\\ni_A \\mono PA \\times A)\\) and define \\(P_1A \\mono PA \\times PA\\) to be the equaliser of\n  \\begin{tikzcd}\n    PA \\times PA \\ar[r, shift left, \"\\wedge\"] \\ar[r, shift right, \"\\pi_1\"'] & PA.\n  \\end{tikzcd}\n  Then, for any \\(C\\),\n  \\begin{tikzcd}\n    C \\ar[r, \"{(\\ulcorner m \\urcorner, \\ulcorner n \\urcorner)}\"] & PA \\times PA\n  \\end{tikzcd}\n  factors through \\(P_1A\\) if and only if \\(m \\leq n\\) in \\(\\Sub_{\\mathcal E} (C \\times A)\\). Now form the pullback\n  \\[\n    \\begin{tikzcd}\n      Q \\ar[rr] \\ar[d, tail, \"{(h, k)}\"] & & P_1A \\ar[d, tail] \\\\\n      PA \\times B \\ar[r, \"1 \\times \\{\\}\"] & PA \\times PB \\ar[r, \"1 \\times Pf\"] & PA \\times PA\n    \\end{tikzcd}\n  \\]\n  Given any\n  \\begin{tikzcd}\n    C \\ar[d, \"g\"] \\\\\n    B\n  \\end{tikzcd}\n  morphisms \\(g \\xrightarrow{\\ell} k\\) in \\(\\mathcal E /B\\) correspond to morphisms \\(C \\xrightarrow{h \\ell} PA\\) such that the subobject named by \\(h\\ell\\) is contained in that named by \\((Pf)\\{\\}_B g\\). But the latter is indeed \\(\\Sigma_B(g \\times f) \\mono C \\times A\\), so \\(k\\) is a power object for \\(f\\) in \\(\\mathcal E/B\\).\n\\end{proof}\n
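\n\\begin{remark}\n  For orientation, here is what this construction gives in \\(\\Set\\) (a sketch, easily checked by hand, and not needed in what follows): for a function \\(f: A \\to B\\), the power object of \\(f\\) in \\(\\Set/B\\) is, up to isomorphism, the fibrewise power set\n  \\[\n    \\{(b, S) : b \\in B,\\ S \\subseteq f^{-1}(b)\\} \\xrightarrow{\\pi_1} B,\n  \\]\n  since a morphism to it from \\(g: C \\to B\\) over \\(B\\) assigns to each \\(c \\in C\\) a subset of \\(f^{-1}(g(c))\\), and such assignments correspond exactly to subobjects of \\(C \\times_B A = \\Sigma_B(g \\times f)\\).\n\\end{remark}\n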
\n\\begin{corollary}\n  A weak topos is locally cartesian closed. In particular, it is a topos.\n\\end{corollary}\n\n\\begin{proof}\n  For any \\(f: A \\to B\\) in \\(\\mathcal E\\), we can identify \\((\\mathcal E/B)/f\\) with \\(\\mathcal E/A\\), and \\(f^*: \\mathcal E/B \\to \\mathcal E/A\\) with pullback along \\(f\\). Hence all such functors are weakly logical.\n\n  But \\(f^*\\) has a left adjoint \\(\\Sigma_f\\) so by 7.7 (2) (in lecture 7.10 (ii)) it has a right adjoint \\(\\Pi_f\\). Hence by 6.3 \\(\\mathcal E/B\\) is cartesian closed for any \\(B\\).\n\\end{proof}\n\n\\begin{remark}\n  It can be shown that a weakly logical functor is cartesian closed, and hence logical.\n\\end{remark}\n\n\\begin{corollary}\\leavevmode\n  \\begin{enumerate}\n  \\item Any epimorphism in a topos is regular.\n  \\item Any \\(f: A \\to B\\) in a topos factors uniquely up to isomorphism as\n    \\[\n      \\begin{tikzcd}\n        A \\ar[r, \"q\", twoheadrightarrow] & I \\ar[r, tail, \"m\"] & B\n      \\end{tikzcd}\n    \\]\n  \\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n  \\(\\mathcal E\\) is locally cartesian closed by the above corollary and has coequalisers by 7.10(i), so by 6.7 every \\(f\\) factors uniquely as a regular epi followed by a mono. If \\(f\\) itself is epic then the monic part of the factorisation is an isomorphism by 7.3, so \\(f\\) is regular epic.\n\\end{proof}\n\n\\subsection{Sheaves and local operators*}\n\nRecall that \\(\\Sh(X) \\subseteq [\\mathcal O(X)^{\\text{op}}, \\Set]\\) is a full subcategory closed under limits. In fact it's reflective and the reflector \\(L: [\\mathcal O(X)^{\\text{op}}, \\Set] \\to \\Sh(X)\\) preserves finite limits. This suggests considering reflective subcategories \\(\\mathcal D \\subseteq \\mathcal E\\) for which the reflector preserves finite limits (equivalently, pullbacks).\n\n\\begin{lemma}\n  \\label{lem:closure operator}\n  Given such a reflective subcategory and a monomorphism \\(A' \\mono A\\) in \\(\\mathcal E\\), define \\(c(A') \\mono A\\) by the pullback diagram\n  \\[\n    \\begin{tikzcd}\n      c(A') \\ar[r] \\ar[d, tail] & LA' \\ar[d, tail] \\\\\n      A \\ar[r, \"\\eta_A\"] & LA\n    \\end{tikzcd}\n  \\]\n  Then \\(A' \\mapsto c(A')\\) is a closure\\index{closure} operation on \\(\\Sub_{\\mathcal E}(A)\\) and commutes with pullback along a fixed morphism of \\(\\mathcal E\\).\n\\end{lemma}\n\nBy ``closure'' we mean an order-preserving inflationary idempotent operator.\n\n\\begin{proof}\n  The square\n  \\[\n    \\begin{tikzcd}\n      A' \\ar[r, \"\\eta_{A'}\"] \\ar[d, tail] & LA' \\ar[d, tail] \\\\\n      A \\ar[r, \"\\eta_A\"] & LA\n    \\end{tikzcd}\n  \\]\n  commutes, so \\(A' \\mono A\\) factors through \\(c(A')\\), i.e.\\ \\(c\\) is inflationary. Moreover \\(A' \\leq A''\\) in \\(\\Sub(A)\\) implies \\(LA' \\leq LA''\\) in \\(\\Sub(LA)\\), and hence \\(c(A') \\leq c(A'')\\), so \\(c\\) is order-preserving.\n\n  Since \\(L\\eta\\) is an isomorphism,\n  \\[\n    \\begin{tikzcd}\n      LA' \\ar[r, \"L\\eta_{A'}\"] \\ar[d, tail] & LLA' \\ar[d, tail] \\\\\n      LA \\ar[r, \"L\\eta_A\"] & LLA\n    \\end{tikzcd}\n  \\]\n  is a pullback, and since \\(L\\) preserves pullbacks we deduce \\(Lc(A') \\cong LA'\\) in \\(\\Sub(LA)\\). Hence \\(c(c(A')) \\cong c(A')\\).\n\n  For stability under pullback, suppose\n  \\[\n    \\begin{tikzcd}\n      A' \\ar[r] \\ar[d, tail] & B' \\ar[d, tail] \\\\\n      A \\ar[r, \"f\"] & B\n    \\end{tikzcd}\n  \\]\n  is a pullback. Then in the cube\n  \\[\n    \\begin{tikzcd}\n      c(A') \\ar[rr] \\ar[dr] \\ar[dd] & & LA' \\ar[dr] \\ar[dd] \\\\\n      & c(B') \\ar[rr, crossing over] & & LB' \\ar[dd] \\\\\n      A \\ar[rr, \"\\eta_A\", near end] \\ar[dr, \"f\"] && LA \\ar[dr, \"Lf\"] \\\\\n      & B \\ar[rr, \"\\eta_B\"] \\ar[from=uu, crossing over] && LB\n    \\end{tikzcd}\n  \\]\n\n  the front, back and right faces are pullbacks, whence the left face is too.\n\\end{proof}\n
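\nThe three closure axioms are exactly those satisfied by topological closure, a picture worth keeping in mind for the notions of ``dense'' and ``closed'' introduced below. The following Python sketch (a hypothetical finite space; this operator is \\emph{not} induced by a finite-limit-preserving reflector as in \\Cref{lem:closure operator}, it merely satisfies the same axioms) checks the three conditions.\n\\begin{verbatim}\nfrom itertools import combinations\n\nX = {1, 2, 3, 4}\nopens = [set(), {1}, {1, 2}, {1, 2, 3, 4}]   # a topology on X\nclosed_sets = [X - U for U in opens]\n\ndef c(S):   # closure = smallest closed set containing S\n    return set.intersection(*[F for F in closed_sets if S <= F])\n\nsubsets = [set(s) for n in range(len(X) + 1) for s in combinations(X, n)]\nfor S in subsets:\n    assert S <= c(S)                   # inflationary\n    assert c(c(S)) == c(S)             # idempotent\n    for T in subsets:\n        if S <= T:\n            assert c(S) <= c(T)        # order-preserving\nprint('closure axioms verified')\n\\end{verbatim}\n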
\n\\begin{definition}[local operator]\\index{local operator}\n  Let \\(\\mathcal E\\) be a topos. By a \\emph{local operator} in \\(\\mathcal E\\) we mean a morphism \\(j: \\Omega \\to \\Omega\\) making the following diagrams commute:\n  \\[\n    \\begin{tikzcd}\n      1 \\ar[r, \"\\top\"] \\ar[dr, \"\\top\"'] & \\Omega \\ar[d, \"j\"] & \\Omega \\ar[l, \"j\"'] \\ar[dl, \"j\"] \\\\\n      & \\Omega\n    \\end{tikzcd}\n    \\qquad\n    \\begin{tikzcd}\n      \\Omega_1 \\ar[r] \\ar[d, tail] & \\Omega_1 \\ar[d, tail] \\\\\n      \\Omega \\times \\Omega \\ar[r, \"j \\times j\"] & \\Omega \\times \\Omega\n    \\end{tikzcd}\n  \\]\n  where \\(\\Omega_1\\) is the order relation on \\(\\Omega\\) defined in 7.12.\n\n  Given a closure operator on subobjects as in the above lemma, define \\(J \\mono \\Omega\\) to be the closure of \\(\\top: 1 \\mono \\Omega\\) and \\(j: \\Omega \\to \\Omega\\) to be the classifying map of \\(J \\mono \\Omega\\). Then, for any \\(A' \\mono A\\) with classifying map \\(\\chi_m: A \\to \\Omega\\), the composite \\(j \\chi_m\\) classifies \\(c(A') \\mono A\\).\n\\end{definition}\n\nGiven a pullback-stable closure operation \\(c\\) on subobjects, we say \\(A' \\mono A\\) is \\emph{dense}\\index{dense} if \\(c(A') \\mono A\\) is an isomorphism, and \\emph{closed}\\index{closed} if \\(A' \\mono c(A')\\) is an isomorphism.\n\n\\begin{lemma}\n  \\label{lem:factorisation through dense and closed}\n  Suppose given a commutative square\n  \\[\n    \\begin{tikzcd}\n      B' \\ar[r, \"f'\"] \\ar[d, \"n\", tail] & A' \\ar[d, \"m\", tail] \\\\\n      B \\ar[r, \"f\"] & A\n    \\end{tikzcd}\n  \\]\n  with \\(n\\) dense and \\(m\\) closed. Then there is a unique \\(g: B \\to A'\\) with \\(mg = f\\) (and \\(gn = f'\\)).\n\\end{lemma}\n\n\\begin{proof}\n  We have \\(n \\leq f^*(m)\\) in \\(\\Sub(B)\\) so\n  \\[\n    1_B \\cong c(n) \\leq f^*(c(m)) \\cong f^*(m),\n  \\]\n  so we may define \\(g\\) as\n  \\[\n    \\begin{tikzcd}\n      B \\ar[r, \"\\cong\"] & f^*(A') \\ar[r] & A'\n    \\end{tikzcd}\n  \\]\n  Uniqueness is automatic since \\(m\\) is monic.\n\\end{proof}\n\nNote that \\(c(A')\\) may be characterised as the unique (up to isomorphism) subobject \\(A''\\) such that \\(A' \\mono A''\\) is dense and \\(A'' \\mono A\\) is closed.\n
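\n\\begin{remark}\n  In the topological picture sketched above (a heuristic only), this is the familiar fact that the topological closure \\(\\overline{A'}\\) is the unique subset of \\(A\\) containing \\(A'\\) in which \\(A'\\) is topologically dense and which is closed in \\(A\\).\n\\end{remark}\n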
\n\\begin{lemma}\n  Suppose \\(c\\) is induced as in \\Cref{lem:closure operator} by a reflector \\(L: \\mathcal E \\to \\mathcal D\\) preserving finite limits. Then an object \\(A\\) of \\(\\mathcal E\\) belongs to \\(\\mathcal D\\) (up to isomorphism) if and only if, given any diagram\n  \\[\n    \\begin{tikzcd}\n      B' \\ar[r, \"f'\"] \\ar[d, \"m\", tail] & A \\\\\n      B\n    \\end{tikzcd}\n  \\]\n  with \\(m\\) dense, there exists a unique \\(f: B \\to A\\) with \\(fm = f'\\).\n\\end{lemma}\n\n\\begin{proof}\n  Note first that \\(m\\) is dense if and only if \\(Lm\\) is an isomorphism: \\(\\impliedby\\) follows from the definition. \\(\\implies\\) follows since, by the proof of \\Cref{lem:closure operator}, we know \\(L(B')\\) and \\(L(c(B'))\\) are isomorphic in \\(\\Sub(LB)\\).\n\n  Given this, if \\(A\\) is in \\(\\mathcal D\\) then the given diagram extends uniquely to\n  \\[\n    \\begin{tikzcd}\n      B' \\ar[r, \"\\eta_{B'}\"] \\ar[d, tail] & LB' \\ar[d, \"\\cong\"] \\ar[r] & A \\\\\n      B \\ar[r, \"\\eta_B\"] & LB \\ar[ur]\n    \\end{tikzcd}\n  \\]\n  Conversely, suppose \\(A\\) satisfies the condition. Let\n  \\begin{tikzcd}\n    R \\ar[r, shift left, \"a\"] \\ar[r, shift right, \"b\"'] & A\n  \\end{tikzcd}\n  be the kernel-pair of \\(\\eta_A: A \\to LA\\) and \\(d: A \\mono R\\) the factorisation of \\((1_A, 1_A)\\) through \\((a, b)\\). Since \\(L\\eta_A\\) is an isomorphism and \\(L\\) preserves pullbacks, \\(Ld\\) is an isomorphism, so \\(d\\) is dense. Since \\(ad = bd = 1_A\\), the uniqueness in the condition forces \\(a = b\\), so \\(\\eta_A\\) is monic. And \\(\\eta_A\\) is dense, so we get a unique \\(r: LA \\to A\\) with \\(r\\eta_A = 1_A\\). Now \\(\\eta_A r \\eta_A = \\eta_A\\) and since \\(LA\\) satisfies the condition we have \\(\\eta_A r = 1_{LA}\\).\n\\end{proof}\n\nWe say \\(A\\) is a \\emph{sheaf} (for \\(c\\), or for \\(j\\)) if it satisfies the condition in the previous lemma. Given a local operator \\(j\\) on \\(\\mathcal E\\), we write \\(\\sh_j(\\mathcal E)\\) for the full subcategory of \\(j\\)-sheaves in \\(\\mathcal E\\).\n\nOur aim is to show \\(\\sh_j(\\mathcal E)\\) is a topos.\n\n\\begin{lemma}\n  \\(\\sh_j(\\mathcal E)\\) is closed under limits in \\(\\mathcal E\\) and an exponential ideal.\n\\end{lemma}\n\n\\begin{proof}\n  The first assertion follows since the definition involves only morphisms with codomain \\(A\\). For the second, note that if \\(m: B' \\mono B\\) is dense then so is \\(m \\times 1_C: B' \\times C \\mono B \\times C\\) for any \\(C\\) (since it's \\(\\pi_1^*(m)\\)), and so if \\(A\\) is a sheaf then any morphism \\(B' \\times C \\to A\\) extends uniquely to \\(B \\times C \\to A\\); transposing, any morphism \\(B' \\to A^C\\) extends uniquely to a morphism \\(B \\to A^C\\).\n\\end{proof}\n\n\\begin{lemma}\n  If \\(A\\) is a sheaf then a subobject \\(m: A' \\mono A\\) in \\(\\mathcal E\\) is a sheaf if and only if it is closed.\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item \\(\\impliedby\\): Immediate from \\Cref{lem:factorisation through dense and closed}.\n  \\item \\(\\implies\\): Consider\n    \\[\n      \\begin{tikzcd}\n        A' \\ar[r, \"p\", tail] & c(A') \\ar[r, \"q\", tail] & A\n      \\end{tikzcd}\n    \\]\n    \\(p\\) is dense, so since \\(A'\\) is a sheaf we get a unique \\(r: c(A') \\to A'\\) with \\(rp = 1_{A'}\\). But \\(c(A')\\) is closed in the sheaf \\(A\\), hence itself a sheaf, so from \\(prp = p\\) we deduce \\(pr = 1_{c(A')}\\).\n  \\end{itemize}\n\\end{proof}\n\nWe define \\(\\Omega_j \\mono \\Omega\\) to be the equaliser of\n\\begin{tikzcd}\n  \\Omega \\ar[r, shift left, \"j\"] \\ar[r, shift right, \"1_\\Omega\"'] & \\Omega.\n\\end{tikzcd}\nThen for any \\(A\\), morphisms \\(A \\to \\Omega_j\\) correspond to closed subobjects of \\(A\\).\n\n\\begin{lemma}\n  \\(\\Omega_j\\) is a \\(j\\)-sheaf.\n\\end{lemma}\n\n\\begin{proof}\n  We want to show that if \\(m: B \\mono A\\) is a dense mono then pullback along \\(m\\) yields a bijection from closed subobjects of \\(A\\) to closed subobjects of \\(B\\). If \\(n: A' \\mono A\\) is closed then, forming the pullback\n  \\[\n    \\begin{tikzcd}\n      B' \\ar[r, \"m'\", tail] \\ar[d, \"n'\", tail] & A' \\ar[d, \"n\", tail] \\\\\n      B \\ar[r, \"m\", tail] & A\n    \\end{tikzcd}\n  \\]\n  \\(m'\\) is dense, so \\(A' \\mono A\\) is the closure of \\(B' \\mono B \\mono A\\); thus \\(n\\) is recovered from its pullback. It remains to show that if \\(B' \\mono B\\) is closed, it is isomorphic to the pullback along \\(m\\) of its closure in \\(A\\). But (writing \\(A' \\mono A\\) for the closure) we have a factorisation \\(B' \\to m^*A'\\), which is dense since \\(B' \\to A'\\) is dense, and closed since \\(B' \\to B\\) is closed; hence it is an isomorphism.\n\\end{proof}\n
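\n\\begin{remark}\n  For orientation (stated without proof): for \\(\\mathcal E = [\\mathcal O(X)^{\\text{op}}, \\Set]\\) and \\(j\\) the local operator induced by the sheafification reflector, \\(\\Omega_j\\) can be identified with the sheaf \\(U \\mapsto \\{V \\in \\mathcal O(X) : V \\subseteq U\\}\\), the usual subobject classifier of \\(\\Sh(X)\\).\n\\end{remark}\n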
\n\\begin{theorem}\n  For any local operator \\(j\\) on \\(\\mathcal E\\), \\(\\sh_j(\\mathcal E)\\) is a topos. Moreover it's reflective in \\(\\mathcal E\\) and the reflector preserves finite limits.\n\\end{theorem}\n\n\\begin{proof}\n  \\(\\sh_j(\\mathcal E)\\) is cartesian closed and has a subobject classifier \\(\\Omega_j\\) by the lemmas above. To construct the reflector, consider the composite \\(f\\) given by\n  \\[\n    \\begin{tikzcd}\n      A \\ar[r, \"\\{\\}\", tail] & \\Omega^A \\ar[r, \"j^A\", tail] & \\Omega_j^A;\n    \\end{tikzcd}\n  \\]\n  it corresponds to the closure \\((a, b): \\cl A \\mono A \\times A\\) of the diagonal subobject \\((1_A, 1_A): A \\mono A \\times A\\). I claim (with proof omitted) that\n  \\begin{tikzcd}\n    \\cl A \\ar[r, shift left, \"a\"] \\ar[r, shift right, \"b\"'] & A\n  \\end{tikzcd}\n  is the kernel-pair of \\(f\\). Hence any morphism \\(g: A \\to B\\) where \\(B\\) is a sheaf satisfies \\(ga = gb\\). So if we form the image\n  \\[\n    \\begin{tikzcd}\n      A \\ar[r, \"q\", twoheadrightarrow] & I \\ar[r, \"m\", tail] & \\Omega_j^A\n    \\end{tikzcd}\n  \\]\n  of \\(f\\), any such \\(g\\) factors uniquely through \\(q\\).\n\n  Now \\(\\Omega_j^A\\) is a sheaf by the lemmas, so if we form the closure \\(LA \\mono \\Omega_j^A\\) of \\(m\\), we get a morphism \\(A \\to LA\\) through which any morphism from \\(A\\) to a sheaf factors uniquely. Hence \\(L\\) becomes a functor \\(\\mathcal E \\to \\sh_j(\\mathcal E)\\), left adjoint to the inclusion.\n\n  By 6.13, we know \\(L\\) preserves finite products. In fact it preserves equalisers as well (this can be checked, but is omitted here).\n\\end{proof}\n\nFinally, we state the relation between toposes in our sense and the toposes Grothendieck was interested in (i.e.\\ categories of sheaves):\n\n\\begin{theorem}\n  For a category \\(\\mathcal E\\), TFAE:\n  \\begin{enumerate}\n  \\item \\(\\mathcal E\\) is a topos which is complete, locally small and has a separating set of objects.\n  \\item there exist a small category \\(\\c C\\) and a local operator \\(j\\) on \\([\\c C^{\\text{op}}, \\Set]\\) such that\n    \\[\n      \\mathcal E \\simeq \\sh_j([\\c C^{\\text{op}}, \\Set]).\n    \\]\n  \\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}[Sketch of proof]\\leavevmode\n  \\begin{enumerate}\n  \\item \\(2 \\implies 1\\): since \\(\\sh_j([\\c C^{\\text{op}}, \\Set])\\) has the given properties.\n  \\item \\(1 \\implies 2\\): take \\(\\c C\\) to be the full subcategory of \\(\\mathcal E\\) on the separating set and consider\n    \\[\n      \\begin{tikzcd}\n        \\mathcal E \\ar[r, \"Y\"] & {[\\mathcal E^{\\text{op}}, \\Set]} \\ar[r] & {[\\c C^{\\text{op}}, \\Set]}.\n      \\end{tikzcd}\n    \\]\n  \\end{enumerate}\n\\end{proof}\n\n\\printindex\n\\end{document}\n", "meta": {"hexsha": "cd55b2cf36207608b39f082c1244a30470a78df3", "size": 162708, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "III/category_theory.tex", "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_issues_repo_path": "III/category_theory.tex", "max_issues_repo_name": "geniusKuang/tripos", "max_issues_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_forks_repo_path": "III/category_theory.tex", "max_forks_repo_name": "geniusKuang/tripos", "max_forks_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z",
"max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "avg_line_length": 55.7410071942, "max_line_length": 905, "alphanum_fraction": 0.6157287902, "num_tokens": 57340, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5602055337239754}}
{"text": "\\section{Conclusion}\n\nFormal methods in cryptocurrency space are growing significantly.\nCompanies like \\gls{iohk}, creator of Cardano, and Tezos are investing a lot in it.\nThis work contributes to the formal specification and definition of Bitcoin.\n\nA good way of defining Bitcoin is by creating a model of it in a language with dependent types.\nAgda looks like a good language for it, but other languages like CoQ, Lean can do this work too.\n\nIn this work, we define a lot of functionalities about Bitcoin.\nThere were definitions of transactions, transactions tree, block, and blockchain.\nThey were all defined in data constructors and records.\nMost of the model definition was in the transaction tree\nbecause of the state of Bitcoin changes after every transaction.\nThere are other ways of doing the same thing,\nbut I thought that this way is easier to define.\nAnother way could be defining more characteristics of the block and blockchain in their\ntypes instead of doing all of it in the transaction tree (what was done in this work).\n\nSome part of this code is not just for modeling the Bitcoin,\nbut to validates inputs that can be wrong.\nThey were functions that transform terms with simpler types (without dependent types)\nto terms with more complex types (with them).\nFor example, transforming raw transactions into possible valid transactions.\n\n\\subsection{Future Work}\n\nIn this work, there was a code that transforms a raw transaction into a possible valid transaction.\nIt is not a decidable function, because there is no definition of what it is an invalid transaction.\nFrom future work, it should have a definition of what is an invalid raw transaction.\nSo it will avoid that valid transaction will be discarded. \n\nThere is no definition of crypto functions like SHA-256 and elliptic curves in this work.\nOne thing that can be done is importing these functions from some Agda or Haskell packages\n(\\emph{cryptohash} in \\emph{Hackage}).\n\nIn this cryptocurrency, there is no nonce and mining either.\n\nThis work does not have any IO operation.\nSo it is not possible to add transactions in the blockchain from the command line or the network.\n\nThe cryptocurrency of this work does not have any smart contract.\nIt would be good to define some of them in it.\n", "meta": {"hexsha": "730cafd5b0f1fad7501ec8f3894d8886cda0323b", "size": 2263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/conclusion.tex", "max_stars_repo_name": "guilhermehas/crypto-agda", "max_stars_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-02-13T16:56:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-22T19:27:12.000Z", "max_issues_repo_path": "docs/conclusion.tex", "max_issues_repo_name": "guilhermehas/cripto-agda", "max_issues_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-11-01T11:36:06.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-03T14:31:16.000Z", "max_forks_repo_path": "docs/conclusion.tex", "max_forks_repo_name": "guilhermehas/cripto-agda", "max_forks_repo_head_hexsha": "ac91e00abca9a26678d0cbc1bedecf8abef6b703", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.4318181818, 
"max_line_length": 100, "alphanum_fraction": 0.8011489174, "num_tokens": 459, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7310585669110203, "lm_q1q2_score": 0.5602055325474466}}
{"text": "\\subsubsection{CrossValidation}\n\\label{CVPP}\nThe \\textbf{CrossValidation} post-processor is specifically used to evaluate estimator (i.e., ROMs) performance.\nCross-validation is a statistical method of evaluating and comparing learning algorithms by dividing data into\ntwo portions: one used to `train' a surrogate model and the other used to validate the model, based on specific\nscoring metrics. In typical cross-validation, the training and validation sets must crossover in successive\nrounds such that each data point has a chance of being validated against the various sets. The basic form of\ncross-validation is k-fold cross-validation. Other forms of cross-validation are special cases of k-fold or involve\nrepeated rounds of k-fold cross-validation. \\nb It is important to notice that this post-processor currently can\nonly accept \\textbf{PointSet} data object.\n%\n\\ppType{CrossValidation}{CrossValidation}\n%\n\\begin{itemize}\n  \\item \\xmlNode{SciKitLearn}, \\xmlDesc{string, required field}, the subnodes specifies the necessary information\n    for the algorithm to be used in the post-processor. `SciKitLearn' is based on algorithms in SciKit-Learn\n    library, and currently it performs cross-validation over \\textbf{PointSet} only.\n  \\item \\xmlNode{Metric}, \\xmlDesc{string, required field}, specifies the \\textbf{Metric} name that is defined via\n    \\textbf{Metrics} entity. In this xml-node, the following xml attributes need to be specified:\n    \\begin{itemize}\n      \\item \\xmlAttr{class}, \\xmlDesc{required string attribute}, the class of this metric (e.g. Metrics)\n      \\item \\xmlAttr{type}, \\xmlDesc{required string attribute}, the sub-type of this Metric (e.g. SKL, Minkowski)\n    \\end{itemize}\n    \\nb Currently, cross-validation post-processor only accepts \\xmlNode{SKL} metrics with \\xmlNode{metricType}\n    \\xmlString{mean\\_absolute\\_error}, \\xmlString{explained\\_variance\\_score}, \\xmlString{r2\\_score},\n    \\xmlString{mean\\_squared\\_error}, and \\xmlString{median\\_absolute\\_error}.\n\\end{itemize}\n\n\\textbf{Example:}\n\n\\begin{lstlisting}[style=XML]\n<Simulation>\n ...\n  <Files>\n    <Input name=\"output_cv\" type=\"\">output_cv.xml</Input>\n    <Input name=\"output_cv.csv\" type=\"\">output_cv.csv</Input>\n  </Files>\n  <Models>\n    ...\n    <ROM name=\"surrogate\" subType=\"SciKitLearn\">\n      <SKLtype>linear_model|LinearRegression</SKLtype>\n      <Features>x1,x2</Features>\n      <Target>ans</Target>\n      <fit_intercept>True</fit_intercept>\n      <normalize>True</normalize>\n    </ROM>\n    <PostProcessor name=\"pp1\" subType=\"CrossValidation\">\n        <SciKitLearn>\n            <SKLtype>KFold</SKLtype>\n            <n_splits>3</n_splits>\n            <shuffle>False</shuffle>\n        </SciKitLearn>\n        <Metric class=\"Metrics\" type=\"SKL\">m1</Metric>\n    </PostProcessor>\n    ...\n  </Models>\n  <Metrics>\n    <SKL name=\"m1\">\n      <metricType>mean_absolute_error</metricType>\n    </SKL>\n  </Metrics>\n  <Steps>\n    <PostProcess name=\"PP1\">\n        <Input class=\"DataObjects\" type=\"PointSet\">outputDataMC</Input>\n        <Input class=\"Models\" type=\"ROM\">surrogate</Input>\n        <Model class=\"Models\" type=\"PostProcessor\">pp1</Model>\n        <Output class=\"Files\" type=\"\">output_cv</Output>\n        <Output class=\"Files\" type=\"\">output_cv.csv</Output>\n    </PostProcess>\n  </Steps>\n ...\n</Simulation>\n\\end{lstlisting}\n\nIn order to access the results from this 
post-processor, RAVEN will define the variables as ``cv'' +\n``\\_'' + ``MetricName'' + ``\\_'' + ``ROMTargetVariable'' to store the calculation results, and these\nvariables are also accessible by the users through RAVEN entities \\textbf{DataObjects} and \\textbf{OutStreams}.\nIn the previous example, the variable \\textit{cv\\_m1\\_ans} is accessible to the users.\n\n\\paragraph{SciKitLearn}\n\nThe algorithm for cross-validation is chosen by the subnode \\xmlNode{SKLtype} under the parent node \\xmlNode{SciKitLearn}.\nIn addition, a special subnode \\xmlNode{average} can be used to obtain the average cross-validation results.\n\n\\begin{itemize}\n  \\item \\xmlNode{SKLtype}, \\xmlDesc{string, required field}, contains a string that\n    represents the cross-validation algorithm to be used. As mentioned, its format is:\n\n    \\xmlNode{SKLtype}algorithm\\xmlNode{/SKLtype}.\n  \\item \\xmlNode{average}, \\xmlDesc{boolean, optional field}, if `True`, dumps the average cross-validation results into the\n    output files.\n\\end{itemize}\n\n\nBased on the \\xmlNode{SKLtype} several different algorithms are available. In the following paragraphs a brief\nexplanation and the input requirements are reported for each of them.\n\n\\paragraph{K-fold}\n\\textbf{KFold} divides all the samples into $k$ groups of samples, called folds (if $k=n$, this is equivalent to the\n\\textbf{Leave One Out} strategy), of equal sizes (if possible). The prediction function is learned using $k-1$ folds,\nand the fold left out is used for testing.\nIn order to use this algorithm, the user needs to set the subnode:\n\\xmlNode{SKLtype}KFold\\xmlNode{/SKLtype}.\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{n\\_splits}, \\xmlDesc{integer, optional field}, number of folds, must be at least 2. \\default{3}\n  \\item \\xmlNode{shuffle}, \\xmlDesc{boolean, optional field}, whether to shuffle the data before splitting into\n    batches.\n  \\item \\xmlNode{random\\_state}, \\xmlDesc{integer, optional field}, when shuffle=True,\n    pseudo-random number generator state used for shuffling. If not present, use default numpy RNG for shuffling.\n\\end{itemize}\n\n\\paragraph{Stratified k-fold}\n\\textbf{StratifiedKFold} is a variation of \\textit{k-fold} which returns stratified folds: each set contains approximately\nthe same percentage of samples of each target class as the complete set.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}StratifiedKFold\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{labels}, \\xmlDesc{list of integers, (n\\_samples), required field}, contains a label for each sample.\n  \\item \\xmlNode{n\\_splits}, \\xmlDesc{integer, optional field}, number of folds, must be at least 2. \\default{3}\n  \\item \\xmlNode{shuffle}, \\xmlDesc{boolean, optional field}, whether to shuffle the data before splitting into\n    batches.\n  \\item \\xmlNode{random\\_state}, \\xmlDesc{integer, optional field}, when shuffle=True,\n    pseudo-random number generator state used for shuffling. If not present, use default numpy RNG for shuffling.\n\\end{itemize}\n
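\nThese splitters come from scikit-learn's \\texttt{model\\_selection} module. As a point of reference, the following standalone Python sketch (illustrative only: the data and model are hypothetical, and this is not RAVEN code) shows what a KFold cross-validation with a mean absolute error metric computes.\n\\begin{lstlisting}[language=Python]\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.model_selection import KFold\n\nrng = np.random.default_rng(0)\nX = rng.uniform(size=(30, 2))                             # features x1, x2\ny = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=30)   # target 'ans'\n\nscores = []\nfor train, test in KFold(n_splits=3, shuffle=False).split(X):\n    model = LinearRegression().fit(X[train], y[train])\n    scores.append(mean_absolute_error(y[test], model.predict(X[test])))\nprint(scores, np.mean(scores))   # per-fold scores and their average\n\\end{lstlisting}\n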
\n\\paragraph{Label k-fold}\n\\textbf{LabelKFold} is a variation of \\textit{k-fold} which ensures that the same label is not in both testing and\ntraining sets. This is necessary for example if you obtained data from different subjects and you want to avoid\nover-fitting (i.e., learning person specific features) by testing and training on different subjects.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LabelKFold\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{labels}, \\xmlDesc{list of integers with length (n\\_samples, ), required field}, contains a label for\n    each sample. The folds are built so that the same label does not appear in two different folds.\n  \\item \\xmlNode{n\\_splits}, \\xmlDesc{integer, optional field}, number of folds, must be at least 2. \\default{3}\n\\end{itemize}\n\n\\paragraph{Leave-One-Out - LOO}\n\\textbf{LeaveOneOut} (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples\nexcept one, the test set being the sample left out. Thus, for $n$ samples, we have $n$ different training sets and\n$n$ different test sets. This cross-validation procedure does not waste much data, as only one sample is removed from\nthe training set.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LeaveOneOut\\xmlNode{/SKLtype}.\n\n\\paragraph{Leave-P-Out - LPO}\n\\textbf{LeavePOut} is very similar to \\textbf{LeaveOneOut} as it creates all the possible training/test sets by removing\n$p$ samples from the complete set. For $n$ samples, this produces $\\binom{n}{p}$ train-test pairs. Unlike \\textbf{LeaveOneOut}\nand \\textbf{KFold}, the test sets will overlap for $p > 1$.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LeavePOut\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{p}, \\xmlDesc{integer, required field}, size of the test sets\n\\end{itemize}\n\n\\paragraph{Leave-One-Label-Out - LOLO}\n\\textbf{LeaveOneLabelOut} (LOLO) is a cross-validation scheme which holds out the samples according to a third-party\nprovided array of integer labels. This label information can be used to encode arbitrary domain specific pre-defined\ncross-validation folds. 
Each training set is thus constituted by all samples except the ones related to a specific\nlabel.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LeaveOneLabelOut\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{labels}, \\xmlDesc{list of integers, (n\\_samples,), required field}, arbitrary\n    domain-specific stratification of the data to be used to draw the splits.\n\\end{itemize}\n\n\\paragraph{Leave-P-Label-Out}\n\\textbf{LeavePLabelOut} is similar to \\textit{Leave-One-Label-Out}, but removes samples related to $P$ labels for\neach training/test set.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LeavePLabelOut\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{labels}, \\xmlDesc{list of integers, (n\\_samples,), required field}, arbitrary\n    domain-specific stratification of the data to be used to draw the splits.\n  \\item \\xmlNode{n\\_groups}, \\xmlDesc{integer, optional field}, number of samples to leave out in the test split.\n\\end{itemize}\n\n\\paragraph{ShuffleSplit}\n\\textbf{ShuffleSplit} iterator will generate a user-defined number of independent train/test dataset splits. Samples\nare first shuffled and then split into a pair of train and test sets. It is possible to control the randomness for\nreproducibility of the results by explicitly seeding the \\xmlNode{random\\_state} pseudo random number generator.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}ShuffleSplit\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{n\\_splits}, \\xmlDesc{integer, optional field}, number of re-shuffling and splitting iterations\n    \\default{10}.\n  \\item \\xmlNode{test\\_size}, \\xmlDesc{float or integer, optional field}, if float, should be between 0.0 and 1.0 and\n    represent the proportion of the dataset to include in the test split. \\default{0.1}\n    If integer, represents the absolute number of test samples. If not present, the value is automatically set to\n    the complement of the train size.\n  \\item \\xmlNode{train\\_size}, \\xmlDesc{float or integer, optional field}, if float, should be between 0.0 and 1.0 and represent\n    the proportion of the dataset to include in the train split. If integer, represents the absolute number of train\n    samples. If not present, the value is automatically set to the complement of the test size.\n  \\item \\xmlNode{random\\_state}, \\xmlDesc{integer, optional field}, when shuffle=True,\n    pseudo-random number generator state used for shuffling. 
If not present, use default numpy RNG for shuffling.\n\\end{itemize}\n\n\\paragraph{Label-Shuffle-Split}\n\\textbf{LabelShuffleSplit} iterator behaves as a combination of \\textbf{ShuffleSplit} and \\textbf{LeavePLabelOut},\nand generates a sequence of randomized partitions in which a subset of labels are held out for each split.\nIn order to use this algorithm, the user needs to set the subnode:\n\n\\xmlNode{SKLtype}LabelShuffleSplit\\xmlNode{/SKLtype}.\n\nIn addition to this XML node, several others are available:\n\\begin{itemize}\n  \\item \\xmlNode{labels}, \\xmlDesc{list of integers, (n\\_samples)}, labels of samples.\n  \\item \\xmlNode{n\\_splits}, \\xmlDesc{integer, optional field}, number of re-shuffling and splitting iterations\n    \\default{10}.\n  \\item \\xmlNode{test\\_size}, \\xmlDesc{float or integer, optional field}, if float, should be between 0.0 and 1.0 and\n    represent the proportion of the dataset to include in the test split. \\default{0.1}\n    If integer, represents the absolute number of test samples. If not present, the value is automatically set to\n    the complement of the train size.\n  \\item \\xmlNode{train\\_size}, \\xmlDesc{float or integer, optional field}, if float, should be between 0.0 and 1.0 and represent\n    the proportion of the dataset to include in the train split. If integer, represents the absolute number of train\n    samples. If not present, the value is automatically set to the complement of the test size.\n  \\item \\xmlNode{random\\_state}, \\xmlDesc{integer, optional field}, when shuffle=True,\n    pseudo-random number generator state used for shuffling. If not present, use default numpy RNG for shuffling.\n\\end{itemize}\n", "meta": {"hexsha": "35b0dbdb6fa78f22d71dc2f7069068d84a4b40a8", "size": 13151, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/PostProcessors/CrossValidation.tex", "max_stars_repo_name": "dgarrett622/raven", "max_stars_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user_manual/PostProcessors/CrossValidation.tex", "max_issues_repo_name": "dgarrett622/raven", "max_issues_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/PostProcessors/CrossValidation.tex", "max_forks_repo_name": "dgarrett622/raven", "max_forks_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.5684647303, "max_line_length": 128, "alphanum_fraction": 0.7559881378, "num_tokens": 3399, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388083214156, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5601675030745262}}
{"text": "% -----------------------------*- LaTeX -*------------------------------\n\\documentclass[12pt]{report}\n\\usepackage{scribe_hgen486}\n\\usepackage{listings}\n\\usepackage{color}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\n\\definecolor{dkgreen}{rgb}{0,0.6,0}\n\\definecolor{gray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mauve}{rgb}{0.58,0,0.82}\n\n\\lstset{frame=tb,\n  language=R,\n  aboveskip=3mm,\n  belowskip=3mm,\n  showstringspaces=false,\n  columns=flexible,\n  basicstyle={\\small\\ttfamily},\n  numbers=none,\n  numberstyle=\\tiny\\color{gray},\n  keywordstyle=\\color{blue},\n  commentstyle=\\color{dkgreen},\n  stringstyle=\\color{mauve},\n  breaklines=true,\n  breakatwhitespace=true,\n  tabsize=3\n}\n\\begin{document}\n\n\\scribe{Katherine Silliman}\t\t% required\n\\lecturenumber{12}\t\t\t% required, must be a number\n\\lecturedate{February 11}\t\t% required, omit year\n\\lecturer{Matthew Stephens} \n\n\\maketitle\n\n% please leave this comment \n\\framebox[.95\\textwidth]{\\parbox{.93\\textwidth}{ {{\\bf Note:}} These\nlecture notes are still rough, and have only have been mildly\nproofread.  }}\n\\vspace*{.1in}\n\n\n% feel free to delete content below this line \n% ----------------------------------------------------------------------\n\n\nComputing practical on implementing MCMC in R. See \\textit{github.com/stephens999/mcmc-examples/blob/master/MCMC/IntroMCMC.R} for original code example.\n\n\n\\section{Ex.1: Sampling from an exponential distribution using MCMC}\nAny MCMC scheme aims to produce (dependent) samples from a \"target\" distribution $\\pi \\left (x \\right )$.\nIn this case we are going to use the exponential distribution with mean 1 as our target distribution. So we start by defining our target density: \n\n\\begin{lstlisting}\ntarget = function(x){\n    if(x<0){\n        return(0)}\n    else {\n        return( exp(-x))}\n}\n\\end{lstlisting}\n\nRecall that the Metropolis-Hastings algorithm is useful for sampling from a distribution that is proportional to our target distribution. 
It involves the following steps: \n\n\\begin{enumerate}\n\\item Initialization: pick an initial state \\textit{x} (usually at random)\n\\item Randomly draw a state \\textit{x'} from the proposal distribution $Q\\left ( x \\rightarrow {x}' \\right )$\n\\item Accept the state according to the acceptance distribution \\textit{H}:\n\\begin{equation}\nH = \\min\\left (1, \\frac{\\pi \\left ( {x}' \\right ) Q\\left ( {x}' \\rightarrow x \\right )}{\\pi\\left ( x \\right ) Q\\left ( x\\rightarrow {x}' \\right )} \\right ).\n\\end{equation}\nIf it is not accepted, the state remains at \\textit{x}; otherwise it transitions to \\textit{x'}\n\\item Save state x, go to \\#2\n\\item Repeat desired number of iterations\n\\end{enumerate}\nWe can code a Metropolis-Hastings scheme in R like this:\n\\begin{lstlisting}\neasyMCMC = function(niter, startval, proposalsd){\n    x = rep(0, niter)\n    x[1] = startval\n    for(i in 2:niter){\n        currentx = x[i-1]\n        proposedx = rnorm(1, mean = currentx, sd = proposalsd)\n        A = target(proposedx)/target(currentx)\n        if(runif(1) < A){\n            x[i] = proposedx   # accept move with probability min(1,A)\n        } else {\n            x[i] = currentx    # otherwise \"reject\" move, and stay where we are\n        }\n    }\n    return(x)\n}\n\\end{lstlisting}\n\n$\\ast$ Note that since $\\frac{Q\\left ({x}' \\rightarrow x \\right )}{Q \\left (x \\rightarrow {x}' \\right )} = 1$ for this symmetric normal proposal, we are only computing $\\frac{\\pi \\left ({x}' \\right )}{\\pi \\left (x \\right ) }$ to determine acceptance.\n\nPlotting \\textit{x} shows us the \\textbf{trace} of our Markov chain. When \\textit{x} is high-dimensional it is more difficult or impossible to view a trace of \\textit{x}.\n\\begin{lstlisting}\n#Run MCMC 3 times and look at how similar the results are.\nz1=easyMCMC(1000,3,1)\nz2=easyMCMC(1000,3,1)\nz3=easyMCMC(1000,3,1)\nplot(z1,type=\"l\")\nlines(z2,col=2)\nlines(z3,col=3)\n\\end{lstlisting}\n\n\\begin{figure}[!htb]\n\\minipage{0.5\\textwidth}\n\\includegraphics[width=\\linewidth]{mcmcplot3.png}\n\\endminipage\\hfill\n\\minipage{0.5\\textwidth}\n\\includegraphics[width=\\linewidth]{mcmcplot4.png}\n\\endminipage\\hfill\n\\end{figure}\n\nBy playing around with the proposal standard deviation, number of iterations, and starting value we can see how these affect the MCMC output. \n\\begin{description}\n\\item[Starting value:] If the starting value is way outside of the target distribution, it will take longer to converge. The code also will not run with a negative starting value. \n\\item[Proposal SD:] A very small proposal SD will make it take longer to converge, whereas a large SD will result in the chain getting \"stuck\" a lot as values will more frequently get rejected. Intuitively, an SD of 0 will result in a flat trace as x never changes.\n\\item[Number of iterations:] An adequate number of iterations is required in order for the chain to converge. \n\\end{description}\n\n\\textbf{\"sticky chain\":} high autocorrelation among steps\n\nWe can also change the target distribution.\n\\begin{lstlisting}\ntarget = function(x){\n    return((x>0 & x <1) + (x>2 & x<3))\n}\n\\end{lstlisting}\nThis target will have a bimodal distribution. If the SD is too small (i.e. 0.1) it will not work.\n
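\nTo see the effect of the proposal standard deviation quantitatively, here is a compact transcription of the same sampler in Python (Python rather than R, purely as an illustrative sketch) that also records the fraction of accepted proposals, the quantity used for tuning in the next subsection:\n\\begin{lstlisting}[language=Python]\nimport math, random\n\ndef target(x):\n    return math.exp(-x) if x >= 0 else 0.0   # Exp(1) density, up to a constant\n\ndef easy_mcmc(niter, startval, proposalsd):\n    x, accepted = [startval], 0\n    for _ in range(niter - 1):\n        proposed = random.gauss(x[-1], proposalsd)\n        A = target(proposed) / target(x[-1])\n        if random.random() < A:          # accept with probability min(1, A)\n            x.append(proposed); accepted += 1\n        else:\n            x.append(x[-1])              # reject: stay where we are\n    return x, accepted / (niter - 1)\n\nfor sd in (0.1, 1.0, 10.0):\n    chain, rate = easy_mcmc(10000, 3.0, sd)\n    print(sd, round(rate, 2))   # small sd: high acceptance but slow mixing\n\\end{lstlisting}\n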
\n\\subsection{Tuning the MCMC}\n\nTuning your proposal involves finding the best value of the proposal standard deviation in order for the chain to mix faster. \nThis is usually accomplished by running a test chain and looking at the proportion of steps that are accepted. If the proportion of accepted steps is too high, it may slow down mixing. In general, when you have higher dimensions in your target values the optimal acceptance rate is lower.\n\n\\textbf{adaptivity:} where you change the rules (i.e. the proposal SD) for an ongoing MCMC chain based on what you see in the trace. This is risky as it may end up not being a true Markov chain. \nAnother important consideration is that the probability of storing an observation must be unbiased and not depend on what stage the chain is currently in. \nA common way to \\textit{thin} values is to set a consistent interval for recorded observations (i.e. every 10 iterations).\n\n\\section{Ex.3: Estimating an allele frequency and inbreeding coefficient (Gibbs Sampler)}\nIn class we glossed over Example 2 in the IntroMCMC.R, which goes over how to implement an MCMC to estimate the allele frequency in a population showing Hardy-Weinberg equilibrium.\nExample 3 illustrates how to implement a 2 dimensional Markov chain to estimate an allele frequency and inbreeding coefficient.\nFrom IntroMCMC.R:\n\"A slightly more complex alternative than HWE is to assume that there is a tendency for people to mate with others who are slightly more closely-related than \"random\" (as might\nhappen in a geographically-structured population, for example). This will result in an excess of homozygotes compared with\nHWE. A simple way to capture this is to introduce an extra parameter,\nthe \"inbreeding coefficient\" f, and assume that the genotypes AA, Aa and aa have frequencies $fp + \\left ( 1-f\\right )p^2$, $\\left (1-f \\right ) 2p\\left (1-p\\right )$, and $f\\left (1-p \\right) + \\left(1-f\\right )\\left (1-p\\right)^2$.\nIn most cases it would be natural to treat f as a feature of\nthe population, and therefore assume f is constant across loci. For simplicity we will consider just a single locus.\n\nNote that both f and p are constrained to lie between 0 and 1 (inclusive). A simple prior for each of these two parameters is to assume\nthat they are independent, uniform on [0,1]. Suppose that we sample n individuals, and observe nAA with genotype AA,\nnAa with genotype Aa and naa with genotype aa.\"\n\nOne way we can implement an MCMC routine to get f and p is to update both f and p each iteration and assume that they are independent. Another way is to implement a Gibbs Sampler. \nTo do this, we use a \"latent variable\" $Z_i$, a representation of whether an individual came from an inbred mating (with probability $f$) or a non-inbred mating (with probability $1-f$). 
This way we can implement a posterior distribution for p that does not depend on f.\n\n\n\\begin{equation}\n    Z_i =\n    \\begin{cases}\n    1, & \\text{w.p. } f \\\\\n    0, & \\text{w.p. } \\left(1-f \\right)\n    \\end{cases}\n\\end{equation}\n$Z_i \\sim Bernoulli\\left (f \\right )$\n\nThis makes the likelihoods of the data (genotypes) given p and Z not depend on f:\n\\begin{eqnarray}\np\\left (AA|z_i = 1 \\right ) = p \\\\\n%\np\\left (AA|z_i = 0 \\right ) = p^2 \\\\\n%\np\\left (Aa|z_i = 1 \\right ) = 0 \\\\\n%\np\\left (Aa|z_i = 0 \\right ) = 2p\\left (1-p \\right )\\\\\n%\np\\left (aa|z_i = 1 \\right ) = 1-p \\\\\n%\np\\left (aa|z_i = 0 \\right ) = \\left ( 1-p \\right )^2\n\\end{eqnarray}\n\nTo implement the Gibbs sampler, you would iterate over the following steps:\n\\begin{enumerate}\n\\item Sample Z from $p\\left(z| g, f, p\\right)$\n\\item Sample f,p from $p\\left(f, p |g, z\\right)$\n\\end{enumerate}\n\n\\end{document}\n\n", "meta": {"hexsha": "960f8971b315683331cae037fb0eaf50c503197f", "size": 8620, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/scribe_notes_2016/lec12.tex", "max_stars_repo_name": "stephens999/hgen48600", "max_stars_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-03-19T18:02:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:15:28.000Z", "max_issues_repo_path": "docs/scribe_notes_2016/lec12.tex", "max_issues_repo_name": "stephens999/hgen48600", "max_issues_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-02-05T00:34:09.000Z", "max_issues_repo_issues_event_max_datetime": "2017-03-07T20:15:19.000Z", "max_forks_repo_path": "docs/scribe_notes_2016/lec12.tex", "max_forks_repo_name": "stephens999/hgen48600", "max_forks_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:59:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T23:09:29.000Z", "avg_line_length": 44.6632124352, "max_line_length": 297, "alphanum_fraction": 0.7197215777, "num_tokens": 2442, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.828938806208442, "lm_q1q2_score": 0.5601675016466534}}
{"text": "\\documentclass{article}\n\\usepackage{eecstex}\n\\usepackage{circledsteps}\n\n\\title{EE 123 HW 03}\n\\author{Bryan Ngo}\n\\date{2022-02-07}\n\n\\begin{document}\n\n\\maketitle\n\n\\setcounter{section}{2}\n\n\\section{}\n\n\\subsection{}\n\n\\(y[n]\\) is\n\\begin{itemize}\n    \\item \\(y[n \\geqslant L] = 0\\) since an autocorrelation implies that the range of nonzero values is \\([-L - 1, L - 1]\\).\n    \\item Conjugate-symmetric since the frequency domain representatio is purely real.\n    \\item Odd length since the multiplication of two length \\(L\\) signals in the frequency domain is the circular convolution, which creates a length \\(2L - 1\\) signal.\n\\end{itemize}\n\n\\subsection{}\n\nDue to the symmetry and convolution properties of the DFT,\n\\begin{equation}\n    \\bar{y}[n] = x[n] \\, \\Circled{N} \\, x^\\ast[-n]\n\\end{equation}\nwhere we want\n\\begin{equation}\n    y[n] = \\bar{y}[m[n]] = x[n] \\ast x^\\ast[-n]\n\\end{equation}\nLet \\(N = 2L - 1\\) so that we can capture the entire range of the circular autocorrelation.\nSince circular convolution retains the same indices as the original signals, we now have a signal with range \\([0, 2N - 2]\\).\nThus, we must time shift by \\(m[n] = n + (L - 1)\\) to center the signal around 0.\n\n\\newpage\n\\section{}\n\n\\begin{equation}\n    \\begin{array}[]{||c|c||}\n        \\hline\n        k & X[k] \\\\\n        \\hline\n        0 & 3 \\\\\n        2 & 0.5 - 4.5j \\\\\n        4 & 5 \\\\\n        5 & 3.5 + 3.5j \\\\\n        7 & -2.5 - 7j \\\\\n        \\hline\n    \\end{array}\n\\end{equation}\n\n\\subsection{}\n\nUsing the symmetry properties of the DFT, we can complete the DFT as shown:\n\n\\begin{equation}\n    \\begin{array}[]{||c|c||}\n        \\hline\n        k & X[k] \\\\\n        \\hline\n        0 & 3 \\\\\n        1 & -2.5 + 7j \\\\\n        2 & 0.5 - 4.5j \\\\\n        3 & 3.5 - 3.5j \\\\\n        4 & 5 \\\\\n        5 & 3.5 + 3.5j \\\\\n        6 & 0.5 + 4.5j \\\\\n        7 & -2.5 - 7j \\\\\n        \\hline\n    \\end{array}\n\\end{equation}\nmeaning that\n\\begin{equation}\n    x[0] = \\frac{1}{8} \\sum_{k = 0}^7 X[k] = \\frac{11}{8}\n\\end{equation}\n\n\\subsection{}\n\nBy the circular convolution and time-shift theorems of the DFT,\n\\begin{equation}\n    x[n] \\, \\Circled{8} \\, \\delta[n - 1] = e^{-j \\frac{2\\pi}{8} k} X[k]\n\\end{equation}\n\n\\subsection{}\n\nSince \\(W[k]\\) is simply \\(X[k]\\) evaluated at the even terms, we can represent \\(w[n]\\) as\n\\begin{align}\n    W[k] &= X[2k] = \\sum_{n = 0}^7 x[n] W_8^{2kn} \\\\\n    \\overset{\\mathcal{F}^{-1}}{\\implies} w[n] &= \\frac{1}{4} \\sum_{k = 0}^3 W[k] W_4^{-kn} = \\frac{1}{4} \\sum_{k = 0}^3 X[2k] W_4^{-kn} \\\\\n    &= \\frac{1}{4} \\sum_{k = 0}^7 \\frac{1}{2} (1 + (-1)^k) X[k] W_8^{-kn} = \\frac{1}{8} \\sum_{k = 0}^7 (1 + e^{-j \\pi k}) X[k] W_8^{-kn} \\\\\n    &= x[n] + \\sum_{k = 0}^7 e^{-j \\pi k} X[k] W_8^{-kn} \\\\\n    &= x[n] + \\sum_{k = 0}^7 W_8^4 X[k] W_8^{-kn} = x[n] + \\sum_{k = 0}^7 X[k] W_8^{-kn + 4} \\\\\n    &= x[n] + x[(n - 4)_8]\n\\end{align}\n\n\\newpage\n\\section{Faster DFTs?}\n\nSince \\(f[n]\\) and \\(g[n]\\) are both purely real, then \\(j g[n]\\) must be purely imaginary.\nBy the linearity of the DFT, \\(H[k] = F[k] + j G[k]\\).\nWe can then express \\(F[k]\\) and \\(G[k]\\) as\n\\begin{align}\n    F[k] &= \\Re\\{H[k]\\} = \\frac{1}{2} (H[k] + H^\\ast[k]) \\\\\n    G[k] &= \\Im\\{H[k]\\} = \\frac{1}{2j} (H[k] - H^\\ast[k])\n\\end{align}\n\n\\newpage\n\\section{Diagonalizing Circulant 
\n\\newpage\n\\section{Diagonalizing Circulant Matrices}\n\n\\begin{theorem}\n    The DFT matrix diagonalizes all circulant matrices.\n\\end{theorem}\n\\begin{proof}\n    Let \\(\\bm{H}\\) be a circulant matrix\n    \\begin{equation}\n        \\bm{H} =\n        \\begin{bmatrix}\n            c_1 & c_2 & \\cdots & c_{N - 1} & c_N \\\\\n            c_N & c_1 & \\cdots & c_{N - 2} & c_{N - 1} \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n            c_2 & c_3 & \\cdots & c_N & c_1\n        \\end{bmatrix} =\n        \\begin{bmatrix}\n            c[n] \\\\\n            c[(n - 1)_N] \\\\\n            \\vdots \\\\\n            c[(n - (N - 1))_N]\n        \\end{bmatrix}\n    \\end{equation}\n    and \\(\\bm{D}\\) be the \\(N \\times N\\) DFT matrix.\n    Note that the DFT matrix satisfies the following relation: \\(\\bm{D}^{-1} = \\frac{1}{N} \\bm{D}^\\ast\\).\n    Then,\n    \\begin{align}\n        \\bm{D} \\bm{H} \\bm{D}^{-1} &=\n        \\begin{bmatrix}\n            1 & 1 & \\cdots & 1 \\\\\n            1 & e^{-j \\frac{2\\pi}{N}} & \\cdots & e^{-j \\frac{2\\pi}{N} (N - 1)} \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots \\\\\n            1 & e^{-j \\frac{2\\pi}{N} (N - 1)} & \\cdots & e^{-j \\frac{2\\pi}{N} (N - 1) (N - 1)}\n        \\end{bmatrix}\n        \\begin{bmatrix}\n            c[n] \\\\\n            c[(n - 1)_N] \\\\\n            \\vdots \\\\\n            c[(n - (N - 1))_N]\n        \\end{bmatrix}\n        \\frac{1}{N} \\begin{bmatrix}\n            1 & 1 & \\cdots & 1 \\\\\n            1 & e^{j \\frac{2\\pi}{N}} & \\cdots & e^{j \\frac{2\\pi}{N} (N - 1)} \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots \\\\\n            1 & e^{j \\frac{2\\pi}{N} (N - 1)} & \\cdots & e^{j \\frac{2\\pi}{N} (N - 1) (N - 1)}\n        \\end{bmatrix} \\\\\n        &=\n        \\begin{bmatrix}\n            C[0] \\\\\n            W_N C[1] \\\\\n            \\vdots \\\\\n            W_N^{N - 1} C[N - 1]\n        \\end{bmatrix}\n        \\frac{1}{N} \\begin{bmatrix}\n            1 & 1 & \\cdots & 1 \\\\\n            1 & e^{j \\frac{2\\pi}{N}} & \\cdots & e^{j \\frac{2\\pi}{N} (N - 1)} \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots \\\\\n            1 & e^{j \\frac{2\\pi}{N} (N - 1)} & \\cdots & e^{j \\frac{2\\pi}{N} (N - 1) (N - 1)}\n        \\end{bmatrix} \\\\\n        &=\n        \\frac{1}{N} \\begin{bmatrix}\n            N C[0] & 0 & \\cdots & 0 \\\\\n            0 & N C[1] & \\cdots & 0 \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots \\\\\n            0 & 0 & \\cdots & N C[N - 1]\n        \\end{bmatrix}\n        =\n        \\begin{bmatrix}\n            C[0] & 0 & \\cdots & 0 \\\\\n            0 & C[1] & \\cdots & 0 \\\\\n            \\vdots & \\vdots & \\ddots & \\vdots \\\\\n            0 & 0 & \\cdots & C[N - 1]\n        \\end{bmatrix}\n    \\end{align}\n    where we use the circular shift property to rewrite the circulant matrix and use the frequency shift properties of the DFT to prove that the DFT matrix diagonalizes any circulant matrix.\n\\end{proof}\n\n% IDFT{X[k]} = 1/N (DFT(X*[k]))*\n\n\\end{document}\n", "meta": {"hexsha": "5b7461d83f12046797102dd3ca37805e94fd2789", "size": 6039, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw03/hw03.tex", "max_stars_repo_name": "bdngo/ee-123", "max_stars_repo_head_hexsha": "d10fe34fdb95f2d7785eeaec9c578991aca9054d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hw03/hw03.tex", "max_issues_repo_name":
"bdngo/ee-123", "max_issues_repo_head_hexsha": "d10fe34fdb95f2d7785eeaec9c578991aca9054d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw03/hw03.tex", "max_forks_repo_name": "bdngo/ee-123", "max_forks_repo_head_hexsha": "d10fe34fdb95f2d7785eeaec9c578991aca9054d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.453125, "max_line_length": 190, "alphanum_fraction": 0.4874979301, "num_tokens": 2375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8289388019824946, "lm_q1q2_score": 0.5601674987909078}}
{"text": "%!TEX root = ../notes.tex\n\\section{February 14, 2022}\n\\subsection{Groups \\emph{continued}}\n\\begin{example}\n    Some itemize of groups and nongroups:\n    \\begin{itemize}\n        \\item $(\\ZZ/N\\ZZ, +)$: Yes (Abelian).\n        \\item $(\\ZZ/N\\ZZ, \\times)$: No. $0^{-1}$ does not exist (inverse).\n        \\item $((\\ZZ/n\\ZZ)^\\times, \\times)$: Yes (Abelian).\n        \\item $(\\ZZ\\setminus \\{0\\}, \\times)$: No. $2^{-1}$ does not exist (inverse).\n        \\item $(\\ZZ\\setminus \\{0\\}, +)$: No. No identity $e$.\n        \\item $\\left(\\{n\\times n\\text{ matrices} : \\det M \\neq 0\\}, \\times\\right)$: Yes (not Abelian for $n\\geq 2$).\n    \\end{itemize}\n\\end{example}\n\\begin{definition}\n    For $g\\in G$, $x = 1, 2, 3, \\dots$,\n    \\[g^x = \\underbrace{g\\circ g\\circ \\cdots \\circ g}_{x\\text{ times}}\\]\n    We extend this to define $g^0 = e$ and $g^{-n} = (g^n)^{-1}$ (so that our usual exponent rules also apply).\n\\end{definition}\n\\begin{example}\n    From just now, in $(\\ZZ/N\\ZZ, +)$, $1^3 = 3$.\n\\end{example}\n\\begin{definition}[Element Order]\n    The smallest (positive) $n$ with $g^n = e$ is called the \\ul{order} of $g$.\n\n    If there is no such $n$, we say $g$ has \\ul{infinite} order.\n\\end{definition}\n\\begin{proposition}\n    If $G$ is a finite group, then every element $g\\in G$ has finite order.\n\\end{proposition}\n\\begin{proof}\n    Consider all powers of $g$\n    \\[g, g^2, g^3, g^4, \\dots\\]\n    so at some point, we will have\n    \\[g, g^2, g^3, g^4, \\dots, g^i, \\dots, g^j, \\dots\\]\n    where $g^i$ and $g^j$ are equal. Then $g^{j-i} = e$. Hence $G$ has a finite order.\n\\end{proof}\n\\begin{proposition}\n    Let $g\\in G$ have order $k$, with $g^n = e$. Then $k\\mid n$.\n\\end{proposition}\n\\begin{proof}\n    We use the division algorithm. We write\n    \\[n = q\\cdot k + r\\text{ with }0\\leq r < k\\]\n    then we have\n    \\[e = g^n = (g^k)^q g^r = e^q\\cdot g^r\\]\n    so $g^r = e$, which forces $r = 0$ since $0\\leq r < k$. So $n = qk \\Rightarrow k\\mid n$.\n\\end{proof}\n\\begin{theorem}\n    $g^{\\#G} = e$. In particular, $\\ord g\\mid \\#G$.\n\\end{theorem}\n\\begin{proof}[Proof for Abelian groups]\n    Let $G = \\{g_1, \\dots, g_n\\} = \\{gg_1, gg_2, \\dots, gg_n\\}$. No two are equal, since we can take inverse of $g$. We multiply them all together:\n    \\begin{align*}\n        g_1g_2\\cdots g_n          & = (gg_1)\\cdots (gg_n)        \\\\\n        \\cancel{g_1g_2\\cdots g_n} & = \\cancel{g_1\\cdots g_n} g^n \\\\\n        e                         & = g^n\n    \\end{align*}\n    so we have as desired.\n\\end{proof}\nThis is true even if $G$ is not Abelian - it's Lagrange's Theorem, which we won't cover here\\footnote{Covered in Math 1530, Abstract Algebra.}.\n\nNote that our previous cryptosystems: Diffie-Hellman key exchange and Elgamal, works in \\emph{any} group.\n\n\\textbf{Q: Why would we want to be able to pick our group?}\n\nMight we want to do this in a group that allows for fast operations? That makes encryption and decryption easy, but it also makes computing the discrete log difficult. We want groups that are \\emph{easy enough} and \\emph{hard enough}. We might appreciate this by the end of the course\\dots\n\n\\subsection{Computation Complexity}\nHow might we quantify ``easy'' or ``hard'' in cryptography.\n\\begin{example}\n    Let $g\\in G$ a group. Let's consider exponentiation\n    \\[x\\longmapsto g^x\\]\n    if $x$ has $k$ bits (i.e. $x\\approx 2^k$). 
How many steps does it take us to compute $g^x$? At most $2k$ multiplication steps, by repeated squaring.\n\n    What about solving the discrete log problem:\n    \\[g^x\\longmapsto x\\]\n    where $x$ has $k$ bits. How many steps does this take (na\\\"ively, trying every power)? About $2^k$ steps.\n\\end{example}\n\\begin{definition}[Big-O]\n    We say $f(x) = \\mathcal{O}(g(x))$ if there are \\emph{constants} $c$ and $c'$ with\n    \\[f(x) \\leq c\\cdot g(x) \\quad \\text{for all }x\\geq c'\\]\n\\end{definition}\n\\begin{example}\n    Note that $f(x) = \\mathcal{O}(1) \\Leftrightarrow f$ is bounded.\n\\end{example}\n\nIf $f(x) = \\mathcal{O}(x^c)$, then we say this is an ``easy'' problem.\\footnote{We take $x$ to be the number of bits of the input}\n\nIf $f(x) = \\mathcal{O}(c^x)$, we think of this as a ``hard'' problem.\n\n\\begin{proposition}\n    If\n    \\[\\lim_{x\\to \\infty}\\frac{f(x)}{g(x)} < \\infty\\]\n    then $f(x) = \\mathcal{O}(g(x))$.\n\\end{proposition}\n\\begin{proof}\n    Let $L$ denote the limit. Using the definition of the limit, for any $\\varepsilon > 0$:\n    \\[\\left\\vert\\frac{f(x)}{g(x)} - L\\right\\vert < \\varepsilon \\text{ for }x\\geq c\\]\n    then $f(x) < (L + \\varepsilon)\\cdot g(x)$ for $x\\geq c$, so we may take the constants to be $L+\\varepsilon$ and $c$.\n\\end{proof}\n\\begin{example}\n    $2x^2+5x+7 = \\mathcal{O}(x^2)$.\n\\end{example}", "meta": {"hexsha": "3ccc351c8670052a22f660b379d7d31ee6894f5b", "size": 4495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "lectures/2022-02-14.tex", "max_stars_repo_name": "jchen/math1580-notes", "max_stars_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_stars_repo_licenses": ["BSL-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T15:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T15:03:38.000Z", "max_issues_repo_path": "lectures/2022-02-14.tex", "max_issues_repo_name": "jchen/math1580-notes", "max_issues_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_issues_repo_licenses": ["BSL-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/2022-02-14.tex", "max_forks_repo_name": "jchen/math1580-notes", "max_forks_repo_head_hexsha": "9784be9e0faa57bbb3c421d8a104daadebf99a2f", "max_forks_repo_licenses": ["BSL-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.640776699, "max_line_length": 289, "alphanum_fraction": 0.6137931034, "num_tokens": 1642, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764746, "lm_q2_score": 0.8418256532040707, "lm_q1q2_score": 0.5601596670946102}}
{"text": "\\documentclass[t,usenames,dvipsnames]{beamer}\n\\usetheme{Copenhagen}\n\\setbeamertemplate{headline}{} % remove toc from headers\n\\beamertemplatenavigationsymbolsempty\n\n\\usepackage{amsmath, sfmath, tikz, xcolor}\n\\usetikzlibrary{arrows.meta, calc}\n\n\\title{Dot Product and Projection}\n\\author{}\n\\date{}\n\n\\AtBeginSection[]\n{\n  \\begin{frame}\n    \\frametitle{Table of Contents}\n    \\tableofcontents[currentsection]\n  \\end{frame}\n}\n\n\\begin{document}\n\n\\begin{frame}\n    \\titlepage\n\\end{frame}\n\n\\section{Find the dot product of two vectors.}\n\n\\begin{frame}{Dot Product}\n    For vectors $\\vec{v} = \\langle v_1, v_2 \\rangle$ and $\\vec{w} = \\langle w_1, w_2 \\rangle$, the \\alert{dot product} of $\\vec{v}$ and $\\vec{w}$ is given as \n\\begin{align*}\n\\vec{v} \\cdot \\vec{w} &= \\langle v_1, v_2 \\rangle \\cdot \\langle w_1, w_2 \\rangle \\\\\n\\onslide<2->{&= v_1w_1 + v_2w_2}\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 1}\nFind $\\vec{v} \\cdot \\vec{w}$ if $\\vec{v} = \\langle 3, 4 \\rangle$ and $\\vec{w} = \\langle 1, -2 \\rangle$.    \n\\begin{align*}\n\\onslide<2->{\\vec{v}\\cdot\\vec{w} &= \\langle 3, 4\\rangle \\cdot \\langle 1, -2\\rangle} \\\\[8pt]\n\\onslide<3->{&= 3(1) + 4(-2)} \\\\[8pt]\n\\onslide<4->{&= -5} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Dot Product Properties}\nBecause the dot product produces a scalar, it is often called the \\alert{scalar product}.    \\newline\\\\\n\n\\onslide<2->{\\textbf{Properties}} \\newline\\\\\n\\begin{tabular}{ll}\n    \\onslide<3->{$\\vec{v} \\cdot \\vec{w} = \\vec{w} \\cdot \\vec{v}$ &\n    Commutative Property}    \\\\[8pt]\n    \\onslide<4->{$\\vec{u} \\cdot \\left(\\vec{v} + \\vec{w}\\right) = \\vec{u} \\cdot \\vec{v} + \\vec{u} \\cdot \\vec{w}$  &   Distributive Property} \\\\[8pt]\n    \\onslide<5->{$k\\left(\\vec{v}\\right) \\cdot \\vec{w} = k\\left(\\vec{v} \\cdot \\vec{w}\\right) = \\vec{v} \\cdot \\left(k\\vec{w}\\right)$    &   Scalar Property}    \\\\[8pt]\n    \\onslide<6->{$\\vec{v} \\cdot \\vec{v} = \\lVert \\vec{v} \\rVert ^2$  &\n    Relation to Magnitude} \\\\[8pt]\n\\end{tabular}\n\\end{frame}\n\n\\begin{frame}{Example 2}\n    Show that $ \\lVert \\vec{v} - \\vec{w} \\rVert ^2 = \\lVert \\vec{v} \\rVert ^2 - 2\\left( \\vec{v} \\cdot \\vec{w} \\right) + \\lVert \\vec{w} \\rVert ^2 $\n\\begin{align*}\n    \\onslide<2->{\\lVert \\vec{v} - \\vec{w} \\rVert^2 &= (\\vec{v} - \\vec{w})\\cdot (\\vec{v} - \\vec{w})} \\\\[8pt]\n    \\onslide<3->{&= \\vec{v}\\cdot(\\vec{v}-\\vec{w}) -\\vec{w} \\cdot (\\vec{v}-\\vec{w})} \\\\[8pt]\n    \\onslide<4->{&=\\vec{v}\\cdot\\vec{v} - \\vec{v}\\cdot\\vec{w} - \\vec{v}\\cdot\\vec{w} + \\vec{w}\\cdot\\vec{w}} \\\\[8pt]\n    \\onslide<5->{&= \\vec{v}\\cdot\\vec{v} - 2(\\vec{v}\\cdot\\vec{w}) + \\vec{w}\\cdot\\vec{w}} \\\\[8pt]\n    \\onslide<6->{&= \\lVert v\\rVert^2 - 2(\\vec{v}\\cdot\\vec{w}) + \\lVert w \\rVert^2} \\\\\n\\end{align*}\n\\end{frame}\n\n\\section{Find the angle measure between two vectors.}\n\n\\begin{frame}{Angles Between Vectors}\n    If we draw $\\vec{v}$ and $\\vec{w}$ with the same initial point, then the angle $\\theta$ between them is illustrated below:  \\newline\\\\\n\n\\begin{tabular}{p{0.3\\textwidth}p{0.3\\textwidth}p{0.25\\textwidth}}\n\\begin{tikzpicture}[scale=0.9]\n    \\draw [-{Stealth}] (0,0) -- (2,2) node [right] {$\\vec{v}$};\n    \\draw [-{Stealth}] (0,0) -- (1.25,1.25) node [right] {$\\vec{w}$};\n    \\draw [fill=black] (0,0) circle (1pt);\n    \\node at (1,0) [below] {$\\theta = 
0$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[scale=0.9]\n    \\draw [-{Stealth}] (0,0) -- (30:1.5) node [right] {$\\vec{v}$};\n    \\draw [-{Stealth}] (0,0) -- (150:1.5) node [left] {$\\vec{w}$};\n    \\draw [fill=black] (0,0) circle (1pt);\n    \\draw [<->, >=stealth] (30:0.35) arc (30:150:0.35) node [midway, above] {$\\theta$};\n    \\node at (0,0) [below] {$0 \\leq \\theta < \\pi$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[scale=0.9]\n    \\draw [-{Stealth}] (0,0) -- (30:1.5) node [right] {$\\vec{v}$};\n    \\draw [-{Stealth}] (0,0) -- (210:1.5) node [left] {$\\vec{w}$};\n    \\draw [fill=black] (0,0) circle (1pt);\n    \\draw [<->, >=stealth] (30:0.25) arc (30:210:0.25) node [midway, above] {$\\theta$};\n    \\node at (0,0) [below right] {$\\theta = \\pi$};\n\\end{tikzpicture}   \n\\end{tabular}\n\\end{frame}\n\n\\begin{frame}{Dot Product and Angles Between Vectors}\nGeometrically, the dot product between two vectors is \n\\[\n\\vec{v} \\cdot \\vec{w} = \\lVert \\vec{v} \\rVert \\lVert \\vec{w} \\rVert \\cos \\theta\n\\]\n\\vspace{20pt}\n\\pause\n\\begin{tabular}{p{0.5\\textwidth}p{0.35\\textwidth}}\n\\begin{tikzpicture}\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (15:2) node [right] {$\\vec{v}$};\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (120:2) node [left] {$\\vec{w}$};\n    \\draw [-{Stealth}, shorten >= 1pt] (120:2) -- (15:2) node [midway, above right] {$\\vec{v}-\\vec{w}$};\n    \\draw (15:0.35) arc (15:120:0.35) node [midway, above] {$\\theta$};\n    \\draw [fill=black] (15:2) circle (1pt);\n    \\draw [fill=black] (120:2) circle (1pt);\n    \\draw [fill=black] (0,0) circle (1pt);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n    \\draw (0,0) -- (15:2) node [midway, below] {$\\lVert \\vec{v} \\rVert$};\n    \\draw (0,0) -- (120:2) node [midway, left] {$\\lVert \\vec{w} \\rVert$};\n    \\draw (120:2) -- (15:2) node [midway, above right] {$\\lVert \\vec{v}-\\vec{w} \\rVert$};\n    \\draw (15:0.35) arc (15:120:0.35) node [midway, above] {$\\theta$};\n    \\draw [fill=black] (15:2) circle (1pt);\n    \\draw [fill=black] (120:2) circle (1pt);\n    \\draw [fill=black] (0,0) circle (1pt);\n\\end{tikzpicture}   \\\\[8pt]\n\\end{tabular}    \n\\end{frame}\n\n\\begin{frame}{Dot Product and Angles Between Vectors}\nBy the Law of Cosines, $\\lVert \\vec{v} - \\vec{w} \\rVert ^2 = \\lVert \\vec{v} \\rVert ^2 + \\lVert \\vec{w} \\rVert ^2 - 2 \\lVert \\vec{v} \\rVert \\lVert \\vec{w} \\rVert \\cos \\theta$, and by Example 2, $ \\lVert \\vec{v} - \\vec{w} \\rVert ^2  = \\lVert \\vec{v} \\rVert ^2 - 2\\left( \\vec{v} \\cdot \\vec{w} \\right) + \\lVert \\vec{w} \\rVert ^2 $.    \\\\[11pt]\n\\pause\nEquating these and solving for $\\vec{v} \\cdot \\vec{w}$ gives us $\\vec{v} \\cdot \\vec{w} = \\lVert \\vec{v} \\rVert \\lVert \\vec{w} \\rVert \\cos \\theta$, from which\n\\[\n\\cos \\theta = \\frac{\\vec{v} \\cdot \\vec{w}}{\\lVert \\vec{v} \\rVert \\, \\lVert \\vec{w} \\rVert}\n\\]\n\\pause\n\\vspace{11pt}\nTo find the angle between two vectors, solve for $\\theta$:\n\\[\n\\theta = \\cos^{-1}\\left(\\frac{\\vec{v} \\cdot \\vec{w}}{\\lVert \\vec{v} \\rVert \\, \\lVert \\vec{w} \\rVert} \\right) = \\cos^{-1}\\left(\\hat{v} \\cdot \\hat{w} \\right)\n\\]\n\\end{frame}\n\n\\begin{frame}{Example 3a}\nFind the angle between each of the following.   
\\newline\\\\\n(a) \\quad $\\vec{v} = \\langle 3, -3\\sqrt{3} \\rangle$ and $\\vec{w} = \\langle -\\sqrt{3}, 1 \\rangle$\n\\begin{align*}\n    \\onslide<2->{\\vec{v}\\cdot\\vec{w} &= \\langle 3, -3\\sqrt{3} \\rangle \\cdot \\langle -\\sqrt{3}, 1 \\rangle} \\\\[8pt]\n    \\onslide<3->{&= -3\\sqrt{3}-3\\sqrt{3}=-6\\sqrt{3}} \\\\[8pt]\n    \\onslide<4->{\\lVert \\vec{v} \\rVert &= \\sqrt{3^2 + (3\\sqrt{3})^2} = \\sqrt{36}} \\\\[8pt]\n    \\onslide<5->{\\lVert \\vec{w} \\rVert &= \\sqrt{(\\sqrt{3})^2 + 1^2} = \\sqrt{4}} \\\\[8pt]\n    \\onslide<6->{\\theta &= \\arccos\\left(\\frac{-6\\sqrt{3}}{\\sqrt{36\\times 4}}\\right) = \\frac{5\\pi}{6}} \n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3b}\n(b) \\quad $\\vec{v} = \\langle 2, 2 \\rangle$ and $\\vec{w} = \\langle 5, -5 \\rangle$\n\\begin{align*}\n    \\onslide<2->{\\vec{v}\\cdot\\vec{w} &= 2(5) + 2(-5) = 0} \\\\[8pt]\n    \\onslide<3->{\\lVert \\vec{v} \\rVert &= \\sqrt{2^2 + 2^2} = \\sqrt{8}} \\\\[8pt]\n    \\onslide<4->{\\lVert \\vec{w} \\rVert &= \\sqrt{5^2 + 5^2} = \\sqrt{50}} \\\\[8pt]\n    \\onslide<5->{\\theta &= \\arccos\\left(\\frac{0}{\\sqrt{8\\times50}}\\right) = \\frac{\\pi}{2}} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 3c}\n(c) \\quad  $\\vec{v} = \\langle 3, -4 \\rangle$ and $\\vec{w} = \\langle 2, 1 \\rangle$\n\\begin{align*}\n    \\onslide<2->{\\vec{v}\\cdot\\vec{w} &= 3(2) + (-4)(1) = 2} \\\\[8pt]\n    \\onslide<3->{\\lVert \\vec{v} \\rVert &= \\sqrt{3^2+4^2} = \\sqrt{25}} \\\\[8pt]\n    \\onslide<4->{\\lVert \\vec{w} \\rVert &= \\sqrt{2^2 + 1^2} = \\sqrt{5}} \\\\[8pt]\n    \\onslide<5->{\\theta &= \\arccos{\\left(\\frac{2}{\\sqrt{25\\times5}}\\right)} \\approx 80^\\circ}\n\\end{align*}\n\\end{frame}\n\n\\section{Find the projection of one vector onto another.}\n\n\\begin{frame}{Orthogonal Projection}\nTwo vectors are {\\color{blue}\\textbf{orthogonal}} if they meet at a right angle.  \\newline\\\\\n\nIf two vectors are orthogonal, then their dot product is 0.  \n\\end{frame}\n\n\\begin{frame}{Orthogonal Projection}\nThe \\alert{orthogonal projection} of $\\vec{v}$ onto $\\vec{w}$ is a new vector $\\vec{p}$ that is parallel to $\\vec{w}$ and has a magnitude of $\\lVert \\vec{v} \\rVert \\cos \\theta$.    
\\newline\\\\ \\pause\n\n$\\vec{p}$ can be thought of as the ``shadow\" $\\vec{v}$ casts over $\\vec{w}$: \\newline\\\\\n\n\\begin{tabular}{p{0.4\\textwidth}p{0.25\\textwidth}}\n\\begin{tikzpicture}[rotate=30]\n    \\coordinate (A) at (0:3);\n    \\coordinate (B) at (60:3);\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (A) node [right] {$\\vec{w}$};\n    \\draw [fill = black] (A) circle (1pt);\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (B) node [left] {$\\vec{v}$};\n    \\draw [fill = black] (B) circle (1pt);\n    \\draw [color=red] (1.5,0) rectangle +(0.2,0.2);\n    \\draw [dashed] (B) -- (1.5,0);\n    \\node at (0,0) [above right, yshift = 0.1cm] {$\\theta$};\n    \\draw [-{Stealth}, line width = 1.25, color = blue, shorten >= 1pt] (0,0) -- (1.5,0) node [midway, below] {$\\vec{p}$};\n    \\draw [fill=blue] (1.5,0) circle (1pt);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[rotate=30]\n    \\coordinate (B) at (60:3);\n    \\draw  (0,0) -- (B) node [midway, left] {$\\lVert v \\rVert$};\n    \\draw [color=red] (1.5,0) rectangle +(-0.2,0.2);\n    \\draw [dashed] (B) -- (1.5,0);\n    \\node at (0,0) [above right, yshift = 0.1cm] {$\\theta$};\n    \\draw (0,0) -- (1.5,0) node [midway, below right] {$\\lVert \\vec{p} \\rVert = \\lVert v \\rVert \\cos \\theta$};\n\\end{tikzpicture}   \\\\[8pt]\n\\end{tabular}    \n\\end{frame}\n\n\\begin{frame}{Orthogonal Projection}\nWe need $\\vec{p}$ to be parallel to $\\vec{w}$. To do this, we could take the dot product of $\\vec{p}$ and $\\vec{w}$. \\newline\\\\\n\\pause\nHowever, that would likely change the magnitude of $\\vec{p}$. \\newline\\\\\n\\pause\nRecall that when finding a unit vector for a given vector, that unit vector has a magnitude of 1 and is parallel to the original vector.  \\newline\\\\\n\\pause\n\nThus, we can multiply the magnitude of $\\vec{p}$ by a unit vector for $\\vec{w}$.\n\\[\n    \\lVert \\vec{p} \\rVert \\, \\hat{w}\n\\]\n\\pause\nThis will guarantee that $\\vec{w}$ is scaled to $\\vec{p}$ and give us the projection of $\\vec{v}$ onto $\\vec{w}$.\n\\end{frame}\n\n\\begin{frame}{Orthogonal Projection}\n\\begin{minipage}{0.4\\textwidth}\n\\begin{tikzpicture}[rotate=30]\n    \\coordinate (A) at (0:3);\n    \\coordinate (B) at (60:3);\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (A) node [right] {$\\vec{w}$};\n    \\draw [fill = black] (A) circle (1pt);\n    \\draw [-{Stealth}, shorten >= 1pt] (0,0) -- (B) node [left] {$\\vec{v}$};\n    \\draw [fill = black] (B) circle (1pt);\n    \\draw [color=red] (1.5,0) rectangle +(0.2,0.2);\n    \\draw [dashed] (B) -- (1.5,0);\n    \\node at (0,0) [above right, yshift = 0.1cm] {$\\theta$};\n    \\draw [-{Stealth}, line width = 1.25, color = blue, shorten >= 1pt] (0,0) -- (1.5,0) node [midway, below] {$\\vec{p}$};\n    \\draw [fill=blue] (1.5,0) circle (1pt);\n\\end{tikzpicture}\n\\end{minipage}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{align*}\n\\onslide<2->{\\text{proj}_{\\vec{w}} \\vec{v} &= \\lVert {\\color{blue}{\\vec{p}}} \\rVert \\, {\\color{red}\\hat{w} }} \\\\[8pt]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\onslide<3->{&= \\left( {\\color{blue}{\\lVert v \\rVert \\cos \\theta }}\\right) \\left( {\\color{red}\\frac{\\vec{w}}{\\lVert w \\rVert} } \\right)} \\\\[8pt]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\onslide<4->{&= \\lVert {\\color{blue}v} \\rVert \\left( {\\color{blue}\\frac{v \\cdot w}{\\lVert v \\rVert \\, \\lVert w \\rVert} } \\right) \\left({\\color{red}\\frac{\\vec{w}}{\\lVert w \\rVert} } \\right)} 
\\\\[8pt]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\onslide<5->{&= \\left({\\color{blue}\\frac{\\vec{v} \\cdot \\vec{w}}{\\lVert \\vec{w} \\rVert \\, {\\color{red}\\lVert \\vec{w} \\rVert}} } \\right) {\\color{red}\\vec{w} } }  \\\\[8pt]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\onslide<6->{\\text{proj}_{\\vec{w}} \\vec{v} &= \\left({\\color{blue} \\frac{\\vec{v} \\cdot \\vec{w}}{\\lVert \\vec{w} \\rVert ^2} } \\right) {\\color{red}\\vec{w} }}\n\\end{align*}\n\\end{minipage}\n\\end{frame}\n\n\\begin{frame}{Example 4a}\nLet $\\vec{v} = \\langle 1, 8 \\rangle$ and $\\vec{w} = \\langle -1, 2 \\rangle$. Find each of the following.   \\newline\\\\\n(a) \\quad $\\text{proj}_{\\vec{w}} \\vec{v}$\n\\begin{align*}\n    \\onslide<2->{\\text{proj}_{\\vec{w}} \\vec{v} &= \\left(\\frac{\\vec{v} \\cdot \\vec{w}}{\\lVert \\vec{w} \\rVert^2}\\right)\\vec{w}} \\\\[8pt]\n    \\onslide<3->{\\vec{v} \\cdot \\vec{w} = 1(-1) + 8(2) = 15} \\quad & \\quad \\onslide<4->{\\lVert \\vec{w} \\rVert^2 = \\left(\\sqrt{(-1)^2+2^2}\\right)^2 = 5} \\\\[8pt]\n    \\onslide<5->{\\text{proj}_{\\vec{w}} \\vec{v} &= \\frac{15}{5}\\left<-1, 2\\right>} \\\\[8pt]\n    \\onslide<6->{&= 3\\langle -1, 2\\rangle} \\\\[8pt]\n    \\onslide<7->{&= \\langle -3, 6\\rangle} \\\\\n\\end{align*}\n\\end{frame}\n\n\\begin{frame}{Example 4b}\nLet $\\vec{v} = \\langle 1, 8 \\rangle$ and $\\vec{w} = \\langle -1, 2 \\rangle$. Find each of the following.   \\newline\\\\\n(b) \\quad   $\\text{proj}_{\\vec{v}} \\vec{w}$\n\\begin{align*}\n\\onslide<2->{\\text{proj}_{\\vec{v}} \\vec{w} &= \\left(\\frac{\\vec{w} \\cdot \\vec{v}}{\\lVert \\vec{v} \\rVert^2}\\right)\\vec{v}} \\\\[8pt]\n    \\onslide<3->{\\vec{w} \\cdot \\vec{v} = (-1)(1) + 2(8) = 15} \\quad & \\quad \\onslide<4->{\\lVert \\vec{v} \\rVert^2 = \\left(\\sqrt{1^2+8^2}\\right)^2 = 65} \\\\[8pt]\n    \\onslide<5->{\\text{proj}_{\\vec{v}} \\vec{w} &= \\frac{15}{65}\\left<1, 8\\right> = \\frac{3}{13}\\left<1, 8\\right>} \\\\[8pt]\n    \\onslide<6->{&= \\left< \\frac{3}{13}, \\frac{24}{13}\\right>} \\\\[8pt]    \n\\end{align*}\n\\end{frame}\n\n\\section{Find the work done in applying a force to an object.}\n\n\\begin{frame}{Work}\n    In physics, if a constant force $F$ is exerted over a distance (really, \\textit{displacement}) $d$, the work $W$ done by the force is given by $W=Fd$.  \\newline\\\\\n\\pause\nHowever, if the force being applied is not in the direction of motion, we can use the dot product to find the work done. \\newline\\\\\n\\pause\n\\begin{center}\n\\begin{tikzpicture}\n    \\coordinate (O) at (0,0);\n    \\coordinate (F) at (50:2);\n    \\coordinate (A) at (2,0);\n    \\draw [-{Stealth}, dashed] (O) -- (A);\n    \\draw [-{Stealth}] (O) -- (F) node [right] {$\\vec{F}$};\n    \\draw [color=red] ($(O)!(F)!(A)$) rectangle +(0.2,0.2);\n    \\draw ($(O)!(F)!(A)$) -- (F);\n    \\draw [-{Stealth}, line width = 1.25, color=blue] (O) -- ($(O)!(F)!(A)$) node [midway, below] {\\color{blue}$d$};\n    \\node at (O) [yshift=0.2cm, xshift=0.35cm] {$\\theta$};\n\\end{tikzpicture}\n\n$W = \\lVert \\vec{F} \\rVert \\, \\lVert \\vec{d} \\rVert \\cos \\theta = \\vec{F} \\cdot \\vec{d}$\n\\end{center}\n\\end{frame}\n\n\\begin{frame}{Example 5}\nMr. Bain exerts a force of 230 pounds to pull a stack of rocks a distance of 50 ft. over level ground. If the rope makes a $30^\\circ$ angle with the horizontal, how much work did Mr. Bain do pulling the rocks? Assume Mr. Bain exerts the force of 230 pounds at a $30^\\circ$ angle for the duration of the 50 feet.  
\n\\begin{align*}\n    \\onslide<2->{W &= \\lVert F\\rVert \\lVert d \\rVert \\cos\\theta} \\\\[8pt]\n    \\onslide<3->{&= (230\\text{ pounds})(50\\text{ ft})\\cos30^\\circ} \\\\[8pt]\n    \\onslide<4->{&= 11,500\\text{ foot-pounds}\\left(\\frac{\\sqrt{3}}{2}\\right)} \\\\[8pt]\n    \\onslide<5->{&= 5,750\\sqrt{3}\\,(\\approx 9,959.3)\\text{ foot-pounds}} \\\\\n\\end{align*}\n\\end{frame}\n\n\n\\end{document}\n", "meta": {"hexsha": "adf5bc629864e65d6061ad2ee833e060bc060957", "size": 15071, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Dot_Product_and_Projection(BEAMER).tex", "max_stars_repo_name": "BryanBain/Trig_BEAMER", "max_stars_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Dot_Product_and_Projection(BEAMER).tex", "max_issues_repo_name": "BryanBain/Trig_BEAMER", "max_issues_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Dot_Product_and_Projection(BEAMER).tex", "max_forks_repo_name": "BryanBain/Trig_BEAMER", "max_forks_repo_head_hexsha": "3639d0202fc691738d26f922d9e84f0a0d62b42a", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:06.000Z", "avg_line_length": 46.5154320988, "max_line_length": 339, "alphanum_fraction": 0.5805188773, "num_tokens": 6247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6406358411176238, "lm_q2_score": 0.8740772368049822, "lm_q1q2_score": 0.5599652058023282}}
{"text": "\\chapter{The Standard Model}\n\nThe standard model described the interaction of quarks, leptons and gauge bosons.\nThe gauge boson intermediate electromagnetic, weak, and strong interaction.\nIn the standard model, the gauge group is $\\mathrm{SU(3)}\\times \\mathrm{SU(2)} \\times \\mathrm{U(1)}$, where $\\mathrm{SU(3)}$ concerns the color degrees of freedom, the $\\mathrm{SU(2)}\\times \\mathrm{U(1)}$ is gauge group of the electroweak interaction.\n\nThe matters in the standard model consists of quarks and leptons, each of them have three generations:\n\\begin{equation}\n\t\\begin{tabular}{c |c c |c c}\n\t\t\\hline \\hline \n\t\tGeneration & \\multicolumn{2}{c|}{Quarks} & \\multicolumn{2}{c}{Leptons} \\\\ \\hline\n\t\tI & $u$ & $d$ & $e$ & $\\nu_e$ \\\\ \n\t\tII & $c$ & $s$ & $\\mu$ & $\\nu_\\mu$ \\\\  \n\t\tIII & $t$ & $b$ & $\\tau$ & $\\nu_\\tau$ \\\\\n\t\t\\hline \\hline\n\t\\end{tabular}\n\\end{equation}\nThe gauge theory is written as a nonabelian gauge theory, or the Yang-Mills theory, which forbid the gauge field to have mass term.\nAlso, the weak interaction breaks the parity -- it only involve the left-handed spinor field, and such single-handed gauge symmetry forbid the mass term for fermions.\nIn order to obtain the mass, the Higgs field should be introduced, which involves the spontaneous symmetry breaking.\n\n\n\n\n\\section{Nonabelian Gauge Theory}\nThe U(1)-gauge-invariant QED Lagrangian consists of the gauge part and the matter part:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_{\\mathrm{QED}} &= \\mathcal L_\\gamma + \\mathcal L_\\psi \\\\\n\t&= -\\frac{1}{4} F^{\\mu\\nu} F_{\\mu\\nu} + \\bar\\psi(i\\cancel D-m)\\psi.\n\\end{aligned}\n\\end{equation}\nwhere the gauge-covariant derivative \n\\begin{equation}\n\tD_\\mu = \\partial_\\mu - iq A_\\mu\n\\end{equation}\nand the field strength tensor\n\\begin{equation}\n\tF_{\\mu\\nu} = \\frac{i}{q} [D_\\mu, D_\\nu] = \\partial_\\mu A_\\nu -\\partial_\\nu A_\\mu\n\\end{equation}\nare all manifestly invariant under U(1) gauge transformation.\n\nThe nonabelian gauge theory generalize the gauge group to be a general (nonabelian) Lie group parametrized as\n\\begin{equation}\n\tU(x) = \\exp\\left[-i g \\pi^a(x) T^a\\right],\n\\end{equation}\nwhere $\\{T^a\\}$ are the generators of the gauge group.\nFor the nonabelian Lie group, the commutation relation, the Lie algebra, specify the structure of the infinitesimal gauge transformation.\n\n\\subsection{Yang-Mills Theory}\nThe gauge-invariant Lagrangian can be:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_{\\mathrm{n.a.}} &= \\mathcal L_{\\mathrm{YM}} + \\mathcal L_{\\mathrm{matter}} \\\\\n\t&= -\\frac{1}{4} F^{a,\\mu\\nu} F^a_{\\mu\\nu} + \\bar\\psi(i\\cancel D-m)\\psi.\n\\end{aligned}\n\\end{equation}\nThe form of Yang-Mills Lagrangian similar to the Maxwell field:\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{YM}} = -\\frac{1}{4} F^{a,\\mu\\nu} F^a_{\\mu\\nu}.\n\\end{equation}\nTo get gauge-invariant Lagrangian for the matter fields, we simply replace the ordinary derivative with the gauge-covariant derivative:\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{matter}} = \\begin{cases}\n\t\t\\psi(i\\gamma^\\mu D_\\mu - m)\\psi & \\text{Fermion} \\\\\n\t\t(D^\\mu \\phi)^\\dagger D_\\mu \\phi - m^2 \\phi^\\dagger \\phi & \\text{Scalar}\n\t\\end{cases}.\n\\end{equation}\nThe covariant derivative $D_\\mu$ is now defined as\n\\begin{equation}\n\tD_\\mu = \\partial_\\mu - i g A_\\mu^a T^a,\n\\end{equation}\nand the field-strength tensor is 
then\n\\begin{equation}\n\\begin{aligned}\n\tF^a_{\\mu\\nu} &= \\frac{i}{g}[D_\\mu, D_\\nu]^a \\\\\n\t&= \\partial_\\mu A_\\nu^a - \\partial_\\nu A_\\nu^a -ig f^{abc} A^b_\\mu A^c_\\nu,\n\\end{aligned}\n\\end{equation}\nThe field strength tensor $F_{\\mu\\nu}$ transform as\n\\begin{equation}\n\tF_{\\mu\\nu}(x) \\rightarrow U(x) F_{\\mu\\nu}(x) U^\\dagger(x).\n\\end{equation}\nThe Lagrangian \n\\begin{equation*}\n\t\\mathcal L_{\\mathrm{GI}} = -\\frac{1}{2} \\mathrm{Tr}\\left[F_{\\mu\\nu} F^{\\mu\\nu}\\right]\n\\end{equation*}\nis manifestly gauge-invariant.\nAlso, we can also choose the renormalization of the generators so that $\\mathrm{Tr}[T^aT^b] = \\delta^{ab}$.\nIn this way, when express the field strength as $F_{\\mu\\nu} = F_{\\mu\\nu}^a T^a$, the above gauge-invariant Lagrangian is exactly the Yang-Mills Lagrangian $\\mathcal L_{\\mathrm{YM}}$.\n\nThe covariant derivative transforms as\n\\begin{equation}\n\tD_\\mu \\rightarrow U(x) D_\\mu U^\\dagger(x)\n\t=\\partial_\\mu - i g \\left[U(x) A_\\mu(x) U^\\dagger(x) + \\frac{i}{g} U(x)\\partial_\\mu U^\\dagger(x)\\right].\n\\end{equation}\nThe matter part $\\mathcal L_{\\mathrm{matter}}$ is invariant if we define the transformation of the gauge field as:\n\\begin{equation}\\label{eq:SM-YM-GT-1}\n\tA_\\mu(x) \\rightarrow U(x) A_\\mu(x) U^\\dagger(x) + \\frac{i}{g} U(x)\\partial_\\mu U^\\dagger(x).\n\\end{equation}\n\nSimilar to QED case, the gauge redundancy causes trouble in quantization.\nWe follow the Faddeev-Popov procedure to quantize the Yang-Mills Lagrangian.\n\nThe partition function of the Yang-Mills Lagriangian is\n\\begin{equation}\n\tZ = \\int D[A] \\exp\\left(i \\int d^4x \\mathcal{L}[A]\\right)\n\t= \\int D[A_{f}]\\ \\mathcal V_G[A] \\exp\\left(i \\int d^4x \\mathcal{L}_{\\mathrm{YM}}[A_f]\\right)\n\\end{equation}\nwhere $A_f$ is the gauge-fixed field, and $\\mathcal V_G[A]$ denotes the phase volume of the gauge redundancy.\nThe task is to determine $\\mathcal V_G[A]$. \nThis can be done by introducing a gauge-fixing condition:\n\\begin{equation}\n\tZ = \\int D[\\pi] \\mathcal V_G[A]\\int D[A] \\delta(G[A^\\pi]) \\exp\\left(i \\int d^4x \\mathcal{L}_{\\mathrm{YM}}[A^\\pi]\\right),\n\\end{equation}\nwhere the gauge-fixing function is \n\\begin{equation}\\label{eq:SM-YM-GF-1}\n\tG[A] = \\partial^\\mu A_\\mu - X.\n\\end{equation}\nHere $D[\\pi]$ is the Haar measure over the Lie group.\n$A^\\pi$ denotes the gauge transformed field. 
\nSince the Lagrangian is gauge-invariant, we can simply drop the superscript, and the integral over the $\\pi$ field is just a number depending on $A$:\n\\begin{equation}\n\t\\int D[\\pi] \\delta(G[A]) = \\det\\left(\\frac{\\delta G[A^\\pi]}{\\delta \\pi}\\right)^{-1}.\n\\end{equation}\nThe infinitesimal version of the gauge transformation (\\ref{eq:SM-YM-GT-1}) is\n\\begin{equation}\n\tA_\\mu^a \\rightarrow A_\\mu^a +\\frac{1}{g}\\partial_\\mu \\pi^a + f^{abc}A^b_\\mu \\pi^c\n\t\\equiv A_\\mu^a + \\frac{1}{g} D_\\mu \\pi^a,\n\\end{equation}\nwhere the covariant derivative acting on the gauge parameter involves the adjoint representation:\n\\begin{equation}\n\tD_\\mu \\pi^a = \\partial_\\mu \\pi^a + g f^{abc} A_\\mu^b \\pi^c = \\left(\\partial_\\mu \\delta^{ac} -i g A_\\mu^b \\left(T^b_{\\mathrm{adj}}\\right)^{ac}\\right) \\pi^c.\n\\end{equation}\nThe integral is then formally the inverse of an operator determinant, and we thus know\n\\begin{equation}\n\t \\mathcal V_G[A] = \\det(\\partial^\\mu D_\\mu).\n\\end{equation}\nThis functional determinant can be regarded as a functional integral over a ``ghost'' field:\n\\begin{equation}\n\t\\det(\\partial^\\mu D_\\mu) = \\int D[\\bar c,c] \\exp\\left[i \\int d^4 x\\ \\bar c (-\\partial^\\mu D_\\mu)c \\right].\n\\end{equation}\nThe gauge-fixing condition (\\ref{eq:SM-YM-GF-1}) imposes a hard constraint on the field configuration.\nFor convenience, we average over $X$ with a Gaussian weight:\n\\begin{equation}\n\\begin{aligned}\n\tZ &= \\int DX e^{-\\frac{i}{2 \\xi}\\int d^4x\\, X^2}\\int D[A_X] \\exp\\left\\{i\\int d^4x \\left[\\mathcal{L}_{\\mathrm{YM}} - \\bar c\\partial^\\mu D_\\mu c \\right]\\right\\} \\\\\n\t&= \\int D[A] \\exp\\left\\{i\\int d^4x \\left[\\mathcal{L}_{\\mathrm{YM}} -\\frac{1}{2\\xi}(\\partial^\\mu A_\\mu)^2 - \\bar c\\partial^\\mu D_\\mu c \\right]\\right\\}.\n\\end{aligned}\n\\end{equation}\nThe gauge-fixing results in a modified Lagrangian with a gauge-fixing term and a ghost term:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L &= \\mathcal{L}_{\\mathrm{YM}} + \\mathcal{L}_{\\mathrm{gf}} + \\mathcal{L}_{\\mathrm{gh}}, \\\\\n\t\\mathcal{L}_{\\mathrm{gf}} &= -\\frac{1}{2\\xi}(\\partial^\\mu A_\\mu)^2, \\\\\n\t\\mathcal{L}_{\\mathrm{gh}} &= -\\bar c\\partial^\\mu D_\\mu c.\n\\end{aligned}\n\\end{equation}\n\nIn the standard model, the weak and strong interactions are described by the SU(2) and SU(3) Yang-Mills theories respectively, so we mainly focus on those two cases.\n\n\n\\subsection{SU(3) Lie Algebra}\nFor the fundamental representation, the generators of SU(3) are one-half of the \\textit{Gell-Mann matrices}, $T^a = \\frac{1}{2}\\lambda^a$, $a=1,\\dots,8$, where the 8 matrices are:\n\\begin{equation}\n\\begin{aligned}\n\t\\lambda^1 &= \\left[\\begin{array}{ccc}\n\t\t0 & 1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 0\n\t\\end{array} \\right], &\n\t\\lambda^2 &= \\left[\\begin{array}{ccc}\n\t\t0 & -i & 0 \\\\ i & 0 & 0 \\\\ 0 & 0 & 0\n\t\\end{array} \\right], \\\\\n\t\\lambda^3 &= \\left[\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ 0 & -1 & 0 \\\\ 0 & 0 & 0\n\t\\end{array} \\right], &\n\t\\lambda^4 &= \\left[\\begin{array}{ccc}\n\t\t0 & 0 & 1 \\\\ 0 & 0 & 0 \\\\ 1 & 0 & 0\n\t\\end{array} \\right], \\\\\n\t\\lambda^5 &= \\left[\\begin{array}{ccc}\n\t\t0 & 0 & -i \\\\ 0 & 0 & 0 \\\\ i & 0 & 0\n\t\\end{array} \\right], &\n\t\\lambda^6 &= \\left[\\begin{array}{ccc}\n\t\t0 & 0 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 1 & 0\n\t\\end{array} \\right], \\\\\n\t\\lambda^7 &= \\left[\\begin{array}{ccc}\n\t\t0 & 0 & 0 \\\\ 0 & 0 & -i \\\\ 0 & i & 0\n\t\\end{array} \\right], &\n\t\\lambda^8 &= 
\\frac{1}{\\sqrt{3}}\\left[\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & -2\n\t\\end{array} \\right].\n\\end{aligned}\n\\end{equation}\nThe commutation relation among the generator defines the \\textit{structure constant}:\n\\begin{equation}\n\t\\left[T^a, T^b\\right] = i f^{abc} T^c.\n\\end{equation}\nThe product of two SU(3) generators is\n\\begin{equation}\n\tT^a T^b = \\frac{1}{6}\\delta^{ab} + \\frac{1}{2} d^{abc} T^c + \\frac{i}{2} f^{abc} T^c,\n\\end{equation}\nwhere\n\\begin{equation}\n\t\\frac{i}{2} f^{abc} = \\mathrm{Tr}\\left([T^a,T^b]T^c \\right), \\quad\n\t\\frac{1}{2} d^{abc} = \\mathrm{Tr}\\left(\\{T^a,T^b\\}T^c \\right).\n\\end{equation}\nThe cycling properties of the trace expression leads to \n\\begin{equation}\n\tf^{abc} = f^{bca} = f^{cab}, \\quad\n\td^{abc} = d^{bca} = d^{cab}.\n\\end{equation}\nIt also manifest that $f^{abc}$ is totally anti-symmetric and $d^{abc}$ totally symmetric.\nThis also leads to the trace identities:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathrm{Tr}(T^a T^b) &= \\frac{1}{2}\\delta^{ab}, \\\\\n\t\\mathrm{Tr}(T^a T^b T^c) &= \\frac{1}{4}(d^{abc} + if^{abc}), \\\\\n\t\\mathrm{Tr}(T^a T^b T^c T^d) &= \\frac{1}{12} \\delta^{ab}\\delta^{cd} + \\frac{1}{8}(d^{abe}+if^{abe})(d^{cde}+if^{cde}).\n\\end{aligned}\n\\end{equation}\n\nThe generator in representation labeled by $R$ is denoted as $T_R^a$.\nOne important representation apart from the fundamental representation is the adjoint representation, which is a 8-dimensional representation defined as\n\\begin{equation}\n\t(T^a_{\\mathrm{adj}})^{bc} = -i f^{abc}.\n\\end{equation}\n\nAlso, different representations can be characterized by the quadratic Casimir $C_2(R)$ defined as\n\\begin{equation}\n\tT^a_R T^a_R = C_2(R) \\mathbbm 1.\n\\end{equation}\nWe know\n\\begin{equation}\n\tC_2(\\text{fund}) = \\frac{4}{3},\\quad\n\tC_2(\\text{adj}) =3.\n\\end{equation}\nThis also means\n\\begin{equation}\n\tf^{acd} f^{bcd} = 3 \\delta^{ab}.\n\\end{equation}\n\n\n\n\n\n\\subsubsection{Roots and Weights}\nFor a simple Lie algebra, a standard set of generators (called the \\textit{Cartan-Weyl basis}) can be chosen so that it contains a maximal number of mutually commuting subset.\nThe maximum number of such generators is defined as the \\textit{rank} of the algebra.\nThe SU(3) Lie algebra is rank-2, with the Cartan-Weyl basis\n\\begin{equation}\n\tH_1 = \\frac{1}{2}\\lambda_3, \\quad H_2 = \\frac{1}{2}\\lambda_8\n\\end{equation}\nThe common eigenstates of Cartan sub-algebra is denoted as $\\{|\\bm m\\rangle\\}$, where each $\\bm m$ is a vector, called the \\textit{weight}, defined by\n\\begin{equation}\n\tH_i |\\bm m\\rangle = m_i |\\bm m\\rangle.\n\\end{equation}\nApart from the Cartan sub-algebra, other generators can be linearly combined to form a standard non-Hermitian basis $\\{E^\\pm_{\\bm \\alpha}\\}$.\nEach element is labeled by an $r$-dimensional vectors $\\pm \\bm\\alpha$, called the \\textit{root}, which is defined by the commutation relation:\n\\begin{equation}\n\t[H_i, E_{\\bm \\alpha}^\\pm] = \\pm \\alpha_i E^\\pm_{\\bm \\alpha}.\n\\end{equation}\nWe can also denote $E_{\\pm\\bm{\\alpha}} \\equiv E_{\\bm{\\alpha}}^\\pm$ for notational simplicity.\nNote that the action of $E_\\alpha$ will change the weight,\n\\begin{equation}\n\\begin{aligned}\n\tH_i E_{\\bm \\alpha}|\\bm m\\rangle \n\t&= [H_i, E_{\\bm \\alpha}]|\\bm m\\rangle + E_{\\bm \\alpha} H_i |\\bm m\\rangle \\\\\n\t&= (m_i + \\alpha_i) E_{\\bm \\alpha}|\\bm m\\rangle,\n\\end{aligned}\n\\end{equation}\n\nFor SU(3), since 
$\\{H_i\\}$ are all diagonal, the common eigenstates are\n\\begin{equation}\n\t\\begin{bmatrix}\n\t\t1 \\\\ 0 \\\\ 0\n\t\\end{bmatrix} = \\left|\\left(\\frac{1}{2}, \\frac{\\sqrt{3}}{6}\\right)\\right\\rangle, \\quad\n\t\\begin{bmatrix}\n\t\t0 \\\\ 1 \\\\ 0\n\t\\end{bmatrix} = \\left|\\left(-\\frac{1}{2}, \\frac{\\sqrt{3}}{6}\\right)\\right\\rangle, \\quad\n\t\\begin{bmatrix}\n\t\t0 \\\\ 0 \\\\ 1\n\t\\end{bmatrix} = \\left|\\left(0, -\\frac{\\sqrt{3}}{3}\\right)\\right\\rangle.\n\\end{equation}\nThese vectors, plotted in a plane, form a equilateral triangle:\n\\begin{equation}\n\t\\includegraphics[width=0.35\\linewidth,align=c]{pics/SM-LA-SU3-1.png}\n\\end{equation}\nOther generators are labeled by \n\\begin{equation}\n\tE_{\\bm{\\alpha}_1} = \\left[\\begin{array}{ccc}\n\t\t0 & 1 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0\n\t\\end{array}\\right], \\quad\n\tE_{\\bm{\\alpha}_2} = \\left[\\begin{array}{ccc}\n\t\t0 & 0 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 & 0\n\t\\end{array}\\right], \\quad\n\tE_{\\bm{\\alpha}_3} = \\left[\\begin{array}{ccc}\n\t\t0 & 0 & 1 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0\n\t\\end{array}\\right],\n\\end{equation}\nwhere the root vectors are\n\\begin{equation}\n\t\\bm{\\alpha}_1 = (1, 0), \\quad  \n\t\\bm{\\alpha}_2 = \\left(-\\frac{1}{2}, \\frac{\\sqrt 3}{2} \\right), \\quad\n\t\\bm{\\alpha}_3 = \\left(\\frac{1}{2}, \\frac{\\sqrt 3}{2} \\right).\n\\end{equation}\nThe root vectors plotted in a plane form a regular hexagon:\n\\begin{equation}\n\t\\includegraphics[width=0.35\\linewidth,align=c]{pics/SM-LA-SU3-2.png}\n\\end{equation}\n\nThe roots of $\\mathfrak{su}(3)$ are not linearly independent.\nWe can find a set of linear independent roots called the \\textit{simple roots}.\nAll other roots can be expressed as a linear combination of simples roots with all positive/negative coefficients. 
\nThe SU(3) algebra, for example, has $\\bm{\\alpha}_1$ and $\\bm{\\alpha}_2$ as its simple roots, with $\\bm{\\alpha}_3 = \\bm{\\alpha}_1 + \\bm{\\alpha}_2$.\n\n\n\n\\subsubsection{Irreducible Representations}\n\nAn irreducible representation (Irrep) of a Lie algebra is entirely specified by the \\textit{highest weight} state $|\\bm M\\rangle$ in the Irrep, which satisfies\n\\begin{equation}\n\tE_{\\bm \\alpha}|\\bm M\\rangle = 0\n\\end{equation}\nfor all simple roots $\\bm\\alpha$.\nThe Irrep can then be constructed by applying the lowering operators to the highest weight state:\n\\begin{equation}\n\t\\text{Irrep} = \\mathrm{span}\\left\\{ E_{\\bm \\alpha}^- E_{\\bm \\beta}^- \\cdots E_{\\bm \\gamma}^- |\\bm M\\rangle, \\ \\text{all possible sequences}\\right\\}.\n\\end{equation}\n\nThe highest weight state can be systematically constructed using the tensor method.\nFor SU(3), we first need to introduce the complex conjugate of the fundamental representation, $\\bar 3$, defined by\n\\begin{equation}\n\t\\exp(-i \\pi^a T_{\\bar 3}^a) = \\exp(-i \\pi^a T_{3}^a)^*,\n\\end{equation}\nwhich gives\n\\begin{equation}\n\tT_{\\bar 3}^a = - (T_{3}^{a})^T.\n\\end{equation}\nThe definition flips the sign of $\\{H_i\\}$, so the weights plotted in the plane become the upside-down triangle:\n\\begin{equation}\n\t\\includegraphics[width=0.35\\linewidth,align=c]{pics/SM-LA-SU3-3.png}\n\\end{equation}\nThe highest weight state in $\\bar 3$ is $|3\\rangle$.\nFor a general Irrep (which can be labeled as the $(m,n)$-representation), the highest weight state is\n\\begin{equation}\n\t|\\bm M\\rangle = \\bigotimes_{i=1}^m |1\\rangle_i \\bigotimes_{j=m+1}^{m+n} |3\\rangle_j.\n\\end{equation}\nThe ladder operators are now\n\\begin{equation}\n\tE_{\\bm \\alpha} = \\sum_{i=1}^m E_{\\bm \\alpha}^{(i)} - \\sum_{j=m+1}^{m+n} E^{(j)}_{-\\bm \\alpha}.\n\\end{equation}\nThe adjoint representation of SU(3) is the $(1,1)$-representation. 
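\n\nThe algebraic identities collected above are easy to check numerically. Below is a minimal sketch of such a check (our own illustration using \\texttt{numpy}; the variable names are ours):\n\\begin{verbatim}\nimport numpy as np\n\n# Build the eight Gell-Mann matrices; T^a = lambda^a / 2.\nlam = np.zeros((8, 3, 3), dtype=complex)\nlam[0][0, 1] = lam[0][1, 0] = 1\nlam[1][0, 1] = -1j; lam[1][1, 0] = 1j\nlam[2][0, 0] = 1; lam[2][1, 1] = -1\nlam[3][0, 2] = lam[3][2, 0] = 1\nlam[4][0, 2] = -1j; lam[4][2, 0] = 1j\nlam[5][1, 2] = lam[5][2, 1] = 1\nlam[6][1, 2] = -1j; lam[6][2, 1] = 1j\nlam[7] = np.diag([1, 1, -2]) / np.sqrt(3)\nT = lam / 2\n\n# Normalization: Tr(T^a T^b) = delta^{ab} / 2.\nassert np.allclose(np.einsum('aij,bji->ab', T, T), np.eye(8) / 2)\n\n# Structure constants: f^{abc} = -2i Tr([T^a, T^b] T^c).\ncomm = (np.einsum('aij,bjk->abik', T, T)\n        - np.einsum('bij,ajk->abik', T, T))\nf = (-2j * np.einsum('abij,cji->abc', comm, T)).real\n\n# Casimirs: T^a T^a = (4/3) 1 and f^{acd} f^{bcd} = 3 delta^{ab}.\nassert np.allclose(np.einsum('aij,ajk->ik', T, T), 4 / 3 * np.eye(3))\nassert np.allclose(np.einsum('acd,bcd->ab', f, f), 3 * np.eye(8))\n\\end{verbatim}\n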
\n\n\n\n\\subsection{Quantum Chromodynamics}\nThe SU(3) Yang-Mills theory describe the strong interaction, also known as the quantum chromodynamics (QCD).\nThe Feynman rule in QCD is very complicated.\nHere we just consider the simplest tree-level process $u \\bar d \\rightarrow u \\bar d$, corresponding to the diagram\n\\begin{equation}\n\t\\includegraphics[width=0.2\\linewidth,align=c]{pics/SM-YM-1}\n\t= T^a_{ji} T^a_{kl} \\times (\\text{QED-like term}),\n\\end{equation}\nwhere the index $i$ labels the color:\n\\begin{equation}\n\t\\begin{tabular}{c c c}\n\t\t\\hline\n\t\tIndex & Color & Simbol \\\\ \\hline\n\t\t1 & red & $r$ \\\\ \n\t\t2 & green & $g$ \\\\  \n\t\t3 & yellow & $y$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{equation}\nSince the total color is conserved in the scattering process, we can divide the color degrees of freedom to the singlet and octet, corresponding to\n\\begin{equation}\n\t3\\otimes \\bar 3 = 1 \\otimes 8.\n\\end{equation}\nThe singlet states is\n\\begin{equation}\n\t|s\\rangle = \\frac{1}{\\sqrt 3} \\left(|r\\bar r\\rangle + |g\\bar g\\rangle + |y\\bar y\\rangle\\right)\n\\end{equation}\nThe factor then becomes\n\\begin{equation}\n\t\\left(\\frac{1}{\\sqrt{3}} \\right)^2 (T^aT^a)_{jl} = \\frac{4}{9} \\delta_{jl}.\n\\end{equation}\nFor the octet state $|r \\bar y\\rangle$\n\\begin{equation}\n\tT^a_{i1} T^a_{3l} = -\\frac{1}{6} \\delta_{i1}\\delta_{l3}.\n\\end{equation}\nIn QED, we know that two particle with opposite charge are attractive to each other.\nFrom the factors above, we know that color singlet channel is attractive while the color octet channel is repulsive.\nIndeed, in QCD there is phenomenon named \\textit{color confinement}, which means quarks tend to form composite colorless bound state.\n\n\n\n\n\n\\subsection{Gauge Group of the Standard Model}\n\nThe standard model has $\\mathrm{SU(3)}\\times\\mathrm{SU(2)}\\times\\mathrm{U(1)}$ gauge symmetry.\nFor a general field $\\phi_{iI}$, where $i$ labels the electroweak degrees of freedom and $I$ labels the color degrees of freedom.\nThe covariant derivative for $\\phi_{iI}$ is then\n\\begin{equation}\n\tD_\\mu = \\partial_\\mu -i \\left(g_1 B_\\mu Y + g_2 W^a_\\mu T^a_{2} + g_3 A_\\mu^b T^b_3                                                  \\right),\n\\end{equation}\nwhere the operator $Y$, $T_2^a$ and $T_3^b$ is depend on the representation of U(1), SU(2) and SU(3) groups respectively. 
\n\nThe weak interaction only involves the left-handed spinors, so the fermion sector with the largest representation consists of the left-handed quark doublets\n\\begin{equation}\n\tQ_I = \\left[\n\t\t\\begin{pmatrix} u_L \\\\ d_L \\end{pmatrix} \n\t\t\\begin{pmatrix} c_L \\\\ s_L \\end{pmatrix} \n\t\t\\begin{pmatrix} t_L \\\\ b_L \\end{pmatrix}\n\t\\right]_I.\n\\end{equation}\nEach $Q_I$ is in the $\\left(3,2,\\frac{1}{6}\\right)$ representation, where $3$ is the fundamental representation of SU(3), $2$ is the fundamental representation of SU(2), and $\\frac{1}{6}$ is the hypercharge of the U(1) gauge group.\nFor example, the sector of first-generation left-handed quarks is\n\\begin{equation}\n\tQ_1 = \\begin{bmatrix}\n\t\tu_L^r & u_L^g & u_L^y \\\\ d_L^r & d_L^g & d_L^y\n\t\\end{bmatrix}.\n\\end{equation}\nNote that the electric charge satisfies\n\\begin{equation}\n\tq = I_3 + Y,\n\\end{equation}\nwhere $I_3$ is the $z$-component of the isospin.\n\nBesides, the left-handed leptons also participate in the weak interaction but not the strong interaction,\n\\begin{equation}\n\tL_I = \\left[\n\t\t\\begin{pmatrix} \\nu_{e L} \\\\ e_L \\end{pmatrix} \n\t\t\\begin{pmatrix} \\nu_{\\mu L} \\\\ \\mu_L \\end{pmatrix} \n\t\t\\begin{pmatrix} \\nu_{\\tau L} \\\\ \\tau_L \\end{pmatrix}\n\t\\right]_I,\n\\end{equation}\nand thus each $L_I$ is in the $\\left(1,2,-\\frac{1}{2}\\right)$ representation.\n\nThe right-handed quarks form the $\\left(3,1,\\frac{2}{3}\\right)$ and $\\left(3,1,-\\frac{1}{3}\\right)$ representations, the right-handed electrons/muons/tauons form $\\left(1,1,-1\\right)$ representations, and the right-handed neutrinos form trivial $\\left(1,1,0\\right)$ representations.\n\nIn addition to the fermions, there is also a scalar Higgs field in the representation $\\left(1,2,\\frac{1}{2}\\right)$.\nTogether, we say that the standard model contains the representations (only fermions in the first generation are listed):\n\\begin{equation}\n\\begin{tabular}{c|c|c}\n\\hline \\hline\n\\multicolumn{2}{c|}{Particles} & Representation\\tabularnewline\n\\hline \n\\multirow{3}{*}{Quarks} & $Q_{1}$ & $(3,2,1/6)$\\tabularnewline\n & $u_{R}$ & $(3,1,2/3)$\\tabularnewline\n & $d_{R}$ & $(3,1,-1/3)$\\tabularnewline\n\\hline \n\\multirow{3}{*}{Leptons} & $L_{1}$ & $(1,2,-1/2)$\\tabularnewline\n & $e_{R}$ & $(1,1,-1)$\\tabularnewline\n & $\\nu_{eR}$ & $(1,1,0)$\\tabularnewline\n\\hline \nHiggs & $\\varphi$ & $(1,2,1/2)$\\tabularnewline\n\\hline \\hline\n\\end{tabular}\n\\end{equation}\n\n\n\n\n\\section{Lagrangian of the Standard Model}\n\n\\subsection{Electroweak Symmetry Breaking}\nBesides the quarks, leptons and gauge bosons, a doublet scalar field (the Higgs) is introduced in order to generate masses for the particles:\n\\begin{equation}\n\t\\varphi \\equiv \\left[\\begin{array}{c}\n\t\t\\varphi_1 \\\\ \\varphi_2\n\t\\end{array} \\right],\n\\end{equation}\nwhich is an isospin doublet with hypercharge $Y=\\frac{1}{2}I$:\n\\begin{equation}\n\t\\begin{tabular}{c c c c}\n\t\t\\hline \n\t\tField & $I_3$ & $Y$ & $q$\\\\ \\hline\n\t\t$\\varphi_1$ & $1/2$ & $1/2$ & $1$ \\\\ \n\t\t$\\varphi_2$ & $-1/2$ & $1/2$ & $0$ \\\\  \n\t\t\\hline \n\t\\end{tabular}\n\\end{equation}\n\nThe Lagrangian of the Higgs field is \n\\begin{equation}\\label{eq:SM-Higgs-L1}\n\t\\mathcal L = -\\frac{1}{4}(W^a_{\\mu\\nu})^2 - \\frac{1}{4}B_{\\mu\\nu}^2 +(D^\\mu \\varphi)^\\dagger D_\\mu \\varphi- V(\\varphi),\n\\end{equation}\nwhere the field strength tensors are\n\\begin{equation}\n\\begin{aligned}\n\tW^1_{\\mu\\nu} &= \\partial_\\mu W_\\nu^1 - \\partial_\\nu W^1_\\mu + g_2 (W_\\mu^2 W_\\nu^3 - W_\\mu^3 W_\\nu^2), 
\\\\\n\tW^2_{\\mu\\nu} &= \\partial_\\mu W_\\nu^2 - \\partial_\\nu W^2_\\mu + g_2 (W_\\mu^3 W_\\nu^1 - W_\\mu^1 W_\\nu^3), \\\\\n\tW^3_{\\mu\\nu} &= \\partial_\\mu W_\\nu^3 - \\partial_\\nu W^3_\\mu + g_2 (W_\\mu^1 W_\\nu^2 - W_\\mu^2 W_\\nu^1), \\\\\n\tB_{\\mu\\nu} &= \\partial_\\mu B_\\nu - \\partial_\\nu B_\\mu,\n\\end{aligned}\n\\end{equation}\nand the Higgs potential is\n\\begin{equation}\n\tV(\\varphi) = \\frac{\\lambda}{4} \\left(\\varphi^\\dagger \\varphi - \\frac{v^2}{2} \\right)^2.\n\\end{equation}\nThis potential gives $\\varphi$ a nonzero vacuum expectation value (VEV). \nWe can make a gauge transformation so that\n\\begin{equation}\n\t\\varphi_0 = \\langle 0| \\varphi |0\\rangle = \\frac{v}{\\sqrt 2} \\left[\\begin{array}{c}\n\t\t0 \\\\ 1\t\\end{array} \\right].\n\\end{equation}\n\n\nThe kinetic term is the $\\mathrm{SU(2)}\\times\\mathrm{U(1)}$ gauge-invariant Lagrangian\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{kin}} = (D^\\mu \\varphi)^\\dagger D_\\mu \\varphi,\n\\end{equation}\nwhere the covariant derivative is\n\\begin{equation}\n\tD_\\mu = \\partial_\\mu -i \\left(g_1 B_\\mu Y + g_2 W^a_\\mu T_2^a\\right).\n\\end{equation}\nWe can introduce the Weinberg angle\n\\begin{equation}\n\t\\theta_W \\equiv\\arctan \\frac{g_1}{g_2},\n\\end{equation}\nand the fields\n\\begin{equation}\n\\begin{aligned}\n\tW_\\mu^\\pm &\\equiv \\frac{1}{\\sqrt 2}(W^1_\\mu \\mp i W_\\mu^2), \\\\\n\tZ_\\mu &\\equiv \\cos{\\theta_W} W^3_\\mu - \\sin{\\theta_W} B_\\mu, \\\\\n\tA_\\mu &\\equiv \\sin{\\theta_W} W_\\mu^3 + \\cos{\\theta_W} B_\\mu.\n\\end{aligned}\n\\end{equation}\nThe reverse transformation for $W_\\mu^3$ and $B_\\mu$ is\n\\begin{equation}\n\\begin{aligned}\n\tW_\\mu^3 &= \\cos{\\theta_W} Z_\\mu + \\sin{\\theta_W} A_\\mu, \\\\\n\tB_\\mu &= -\\sin{\\theta_W} Z_\\mu +\\cos{\\theta_W} A_\\mu.\n\\end{aligned}\n\\end{equation}\nThe off-diagonal part of the gauge coupling becomes\n\\begin{equation}\n\tg_2 \\sum_{a=1}^{2} W^a_\\mu T^a_2 = \\frac{g_2}{\\sqrt 2}\\begin{bmatrix}\n\t\t0 &  W^+_\\mu \\\\\n\t\tW_\\mu^- & 0\n\t\\end{bmatrix},\n\\end{equation}\nand the diagonal part becomes\n\\begin{equation}\n\\begin{aligned}\n\tg_2 W_\\mu^3 T^3_2 + g_1 B_\\mu Y\n\t&= g_2 (\\cos{\\theta_W} Z_\\mu + \\sin{\\theta_W} A_\\mu) T^3_2 + g_1 Y (-\\sin{\\theta_W} Z_\\mu +\\cos{\\theta_W} A_\\mu) \\\\\n\t&= g_2 \\sin{\\theta_W} A_\\mu \\left(T^3_2 + Y\\right) + eZ_\\mu \\left(\\cot{\\theta_W}T_2^3 - \\tan{\\theta_W}Y \\right) \\\\\n\t&= e A_\\mu q + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu \\left(\\cos^2{\\theta_W}T_2^3 - \\sin^2{\\theta_W} Y\\right) \\\\\n\t&= e A_\\mu q + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu \\left(T_2^3 - \\sin^2{\\theta_W} q\\right).\n\\end{aligned}\n\\end{equation}\nNote that the gauge field $A_\\mu$ describes QED, so we identify the coupling constant with the electric charge:\n\\begin{equation}\n\te = g_2 \\sin{\\theta_W} = g_1 \\cos{\\theta_W}.\n\\end{equation}\n\nFor the Higgs field, \n\\begin{equation}\n\tT^3_2 = \\frac{1}{2}\\sigma^z, \\quad\n\tY = \\frac{1}{2} I,\\quad \n\tq = \\begin{bmatrix}\n\t\t1 & 0 \\\\ 0 & 0\n\t\\end{bmatrix}.\n\\end{equation}\nWith the new definitions of the fields, the gauge coupling becomes\n\\begin{equation}\n\\begin{aligned}\n\tg_2 W^a_\\mu T_2^a + g_1 B_\\mu Y \n\t&= \\frac{e}{2 \\sin{\\theta_W}} \n\t\\begin{bmatrix}\n\t\t2\\sin{\\theta_W} A_\\mu + \\frac{\\cos{2\\theta_W}}{\\cos{\\theta_W}} Z_\\mu & \\sqrt{2} W^+_\\mu \\\\\n\t\t\\sqrt{2} W_\\mu^- & -\\frac{1}{\\cos{\\theta_W}} Z_\\mu\n\t\\end{bmatrix}.\n\\end{aligned}\n\\end{equation}\n
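As a quick numerical cross-check (our own sketch; the coupling values below are merely illustrative, not fitted inputs), one can verify that the rotation to $(Z_\\mu, A_\\mu)$ diagonalizes the neutral gauge-boson mass matrix derived below, leaving the photon massless:\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative electroweak inputs (not measured values): g1, g2, v.\ng1, g2, v = 0.36, 0.65, 246.0\ntheta_w = np.arctan2(g1, g2)        # tan(theta_W) = g1 / g2\n\n# Neutral mass-squared matrix in the (W^3, B) basis from |D_mu phi_0|^2.\nM2 = (v**2 / 4) * np.array([[g2**2, -g1 * g2],\n                            [-g1 * g2, g1**2]])\nphoton_m2, z_m2 = np.linalg.eigvalsh(M2)\n\nm_W = g2 * v / 2                    # = e v / (2 sin theta_W)\nm_Z = np.hypot(g1, g2) * v / 2      # = m_W / cos theta_W\n\nassert np.isclose(photon_m2, 0)     # the photon stays massless\nassert np.isclose(z_m2, m_Z**2)     # the heavy eigenstate is the Z\nassert np.isclose(m_Z * np.cos(theta_w), m_W)\n\\end{verbatim}\n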
Z_\\mu\n\t\\end{bmatrix}.\n\\end{aligned}\n\\end{equation}\nWe can now expand the kinetic term to get the effective mass of the the gauge field in unitary gauge. \nThe mass term is the quadratic part of $\\varphi$:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_{\\mathrm{mass}} \n\t&= \\varphi_0^\\dagger \\left(g_2 W^a_\\mu T_2^a + g_1 B_\\mu Y\\right)^2 \\varphi_0 \\\\\n\t&= \\frac{e^2v^2}{8 \\sin^2{\\theta_W}} \n\t\\begin{bmatrix}\n\t\t0 & 1\n\t\\end{bmatrix} \n\t\\begin{bmatrix}\n\t\t\\cdots & \\sqrt{2} W^+_\\mu \\\\\n\t\t\\sqrt{2} W_\\mu^- & \\frac{1}{\\cos{\\theta_W}} Z_\\mu\n\t\\end{bmatrix}^2\n\t\\begin{bmatrix}\n\t\t0 \\\\ 1\n\t\\end{bmatrix} \\\\\n\t&= m_W W^{+\\mu} W_\\mu^- + \\frac{1}{2}m_Z^2 Z^\\mu Z_\\mu,\n\\end{aligned}\n\\end{equation}\nwhere the mass for $W$ and $Z$ bosons are\n\\begin{equation}\n\tm_W = \\frac{ev}{2 \\sin{\\theta_W}}, \\quad \n\tm_Z = \\frac{e v}{2\\sin{\\theta_W}\\cos{\\theta_W}} = \\frac{m_W}{\\cos{\\theta_W}}.\n\\end{equation}\nNote that the photon field $A_\\mu$ does not obtain mass from the Higgs field.\n\n\n\n\\subsection{Higgs-Gauge Sector}\nWe are now going to express the Lagrangian (\\ref{eq:SM-Higgs-L1}) in terms of the fields we have just introduced.\nIn the unitary gauge, the Higgs field can be written as\n\\begin{equation}\n\t\\varphi = \\frac{1}{\\sqrt 2} \\begin{bmatrix}\n\t\tv + H(x) \\\\ 0\n\t\\end{bmatrix},\n\\end{equation}\nwhere $H(x)$ is a real scalar field.\nThe interaction terms gives\n\\begin{equation}\n\\begin{aligned}\n\tV(H) &= \\frac{\\lambda v^2}{4} H^2 + \\frac{\\lambda v}{4}H^3 + \\frac{\\lambda}{16}H^4 \\\\\n\t&= \\frac{1}{2}m_H^2 H^2 + \\frac{m_H^2}{2v} H^3 + \\frac{m_H^2}{8v^2}H^4,\n\\end{aligned} \n\\end{equation}\nwhere we have defined the Higgs mass\n\\begin{equation}\n\tm_H^2 = \\frac{\\lambda v^2}{2}.\n\\end{equation}\nAlso, the fluctuation around the $\\varphi_0$ also create coupling between Higgs field and the gauge fields.\nWe can simply replace $v^2$ with $(v+H)^2$ in the gauge field mass term to capture the coupling.\nThus, the Lagrangian is\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{H}} = \\partial^\\mu H^\\dagger \\partial_\\mu H - V(H) + \\left(m_W W^{+\\mu} W_\\mu^- + \\frac{1}{2}m_Z^2 Z^\\mu Z_\\mu \\right) \\left(H^2+2vH \\right).\n\\end{equation}\nNow consider the Lagrangian for the gauge field\n\\begin{equation}\n\t\\mathcal L_{\\mathrm{G}} = -\\frac{1}{4}(W^a_{\\mu\\nu})^2 - \\frac{1}{4}B_{\\mu\\nu}^2 + m_W W^{+\\mu} W_\\mu^- + \\frac{1}{2}m_Z^2 Z^\\mu Z_\\mu.\n\\end{equation}\nWe can regroup the field strength as\n\\begin{equation}\n\\begin{aligned}\n\tW^+_{\\mu\\nu} &= D_\\mu W_\\nu^+ - D_\\nu W^+_\\mu, \\\\\n\tW^-_{\\mu\\nu} &= D_\\mu^\\dagger W_\\nu^- - D_\\nu^\\dagger W^-_\\mu, \\\\\n\tW^3_{\\mu\\nu} &= \\sin{\\theta_W} F_{\\mu\\nu} + \\cos{\\theta_W}Z_{\\mu\\nu} -ig_2(W^+_\\mu W^-_\\nu - W_\\mu^- W_\\nu^+), \\\\\n\tB^a_{\\mu\\nu} &= \\cos{\\theta_W} F_{\\mu\\nu} - \\sin{\\theta_W} Z_{\\mu\\nu},\n\\end{aligned}\n\\end{equation}\nwhere we have defined \n\\begin{equation}\n\\begin{aligned}\n\tW^\\pm_{\\mu\\nu} &= \\frac{1}{\\sqrt 2} (W^1_{\\mu\\nu} \\mp i W^2_{\\mu\\nu}), \\\\\n\tF_{\\mu\\nu} &= \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu, \\\\\n\tZ_{\\mu\\nu} &= \\partial_\\mu Z_\\nu - \\partial_\\nu Z_\\mu.\n\\end{aligned}\n\\end{equation}\nThe covariant derivative for $W^\\pm$ field is\n\\begin{equation}\n\\begin{aligned}\n\tD_\\mu &= \\partial_\\mu -i g_2 W^3_\\mu \\\\\n\t&= \\partial_\\mu -i g_2 \\left(\\sin{\\theta_W}A_\\mu + \\cos{\\theta_W} Z_\\mu \\right) \\\\\n\t&= \\partial_\\mu -i 
e \\left(A_\\mu + \\cot{\\theta_W} Z_\\mu \\right)\n\\end{aligned}\n\\end{equation}\nSo the Lagrangian for the gauge field is\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_\\mathrm{G}\n\t=&\\ -\\frac{1}{4}(2W_{\\mu\\nu}^+ W^{-\\mu\\nu} + W_{\\mu\\nu}^3 W^{3\\mu\\nu}) -\\frac{1}{4} B_{\\mu\\nu}B^{\\mu\\nu} + m_W W^{+\\mu} W_\\mu^- + \\frac{1}{2}m_Z^2 Z^\\mu Z_\\mu \\\\\n\t=&\\ -\\frac{1}{4}(F_{\\mu\\nu}F^{\\mu\\nu} + Z_{\\mu\\nu} Z^{\\mu\\nu}) - D^{\\mu} W^{+\\nu} D^\\dagger_\\mu W_\\nu^- + D^\\mu W^{+\\nu} D^\\dagger_\\nu W^-_\\mu \\\\\n\t&\\ +ie (F^{\\mu\\nu} + \\cot{\\theta_W} Z^{\\mu\\nu})W_\\mu^+ W_\\nu^- + m_W W^{+\\mu} W_\\mu^- + \\frac{1}{2}m_Z^2 Z^\\mu Z_\\mu \\\\\n\t&\\ -\\frac{e^2}{2\\sin^2{\\theta_W}} \\left(W^{+\\mu}W^-_\\mu W^{+\\nu}W^-_\\nu - W^{+\\mu}W^+_\\mu W^{-\\nu}W^-_\\nu \\right).\n\\end{aligned}\n\\end{equation}\nWe remark that in the unitary gauge, there is no redundancy of the gauge field.\nNo ghost field is needed in the quantization procedure.\n\n\n\\subsection{Yukawa Coupling and Fermion Mass}\nThe fermion fields in the electroweak interaction\n\\begin{equation}\n\t\\begin{tabular}{c c c c}\n\t\t\\hline \n\t\tField & $I_3$ & $Y$ & $q$\\\\ \\hline\n\t\t$u_L$ & $1/2$ & $1/6$ & $2/3$ \\\\ \n\t\t$d_L$ & $-1/2$ & $1/6$ & $-1/3$ \\\\  \n\t\t$u_R$ & $0$ & $2/3$ & $2/3$ \\\\\n\t\t$d_R$ & $0$ & $-1/3$ & $-1/3$ \\\\ \\hline\n\t\t$e_L$ & $-1/2$ & $-1/2$ & $-1$ \\\\ \n\t\t$\\nu_{eL}$ & $1/2$ & $-1/2$ & $0$ \\\\  \n\t\t$e_R$ & $0$ & $-1$ & $-1$ \\\\\n\t\t$\\nu_{eR}$ & $0$ & $0$ & $0$ \\\\\n\t\t\\hline \n\t\\end{tabular}\n\\end{equation}\n\nThe kinetic term for the fermion Lagrangian is\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_{\\mathrm{kin}}\n\t=&\\ i Q^\\dagger_J \\sigma^\\mu D_\\mu Q_J + i u_{RJ}^\\dagger \\bar\\sigma^\\mu D_\\mu u_{RJ} + i d_{RJ}^\\dagger \\bar\\sigma^\\mu D_\\mu d_{RJ} \\\\\n\t& + i L^\\dagger_J \\sigma^\\mu D_\\mu L_J + i e_{RJ}^\\dagger \\bar\\sigma^\\mu D_\\mu e_{RJ} + i \\nu_{RJ}^\\dagger \\bar\\sigma^\\mu D_\\mu \\nu_{RJ},\n\\end{aligned}\n\\end{equation}\nwhere we have use the index $J$ to label the generation of the fermion:\n\\begin{equation}\n\\begin{aligned}\n\tu_{RJ} &= \\{u_R, c_R, t_R\\}, & \n\td_{RJ} &= \\{d_R, s_R, b_R\\}, \\\\\n\te_{RJ} &= \\{e_R, \\mu_R, \\tau_R\\}, &\n\t\\nu_{RJ} &= \\{\\nu_{eR},\\nu_{\\mu R},\\nu_{\\tau R}\\}.\n\\end{aligned}\n\\end{equation}\n\nThe left-handed SU(2) gauge symmetry forbid any fermion mass term, which is incompatible with the reality.\nHowever, fermion can obtain fermion mass by introducing Yukawa coupling between the Higgs and fermion fields.\n\n\\subsubsection{Quark Sector}\nThe Yukawa coupling between quark and Higgs is\n\\begin{equation}\n\t\\mathcal L_{q,\\mathrm{Yuk}}\n\t= - \\varepsilon^{ij} Y^u_{IJ} Q^\\dagger_{iI} \\varphi^\\dagger_j u_{RJ}\n\t- Y^d_{IJ} Q^\\dagger_{I} \\varphi d_{RJ} +h.c..\n\\end{equation}\nThe first term corresponds to\n\\begin{equation}\n\t\\left(\\bar 3, \\bar 2, -\\frac{1}{6}\\right) \\times \\left(1, \\bar 2, -\\frac{1}{2}\\right) \\times \\left(3, 1, \\frac{2}{3}\\right) = \\left(1,1,0\\right)\\oplus \\cdots,\n\\end{equation}\nwith $\\varepsilon^{ij}$ being the Clebsch-Gordan coefficient, and the second term corresponds to\n\\begin{equation}\n\t\\left(\\bar 3, \\bar 2, -\\frac{1}{6}\\right) \\times \\left(1, 2, \\frac{1}{2}\\right) \\times \\left(3, 1, -\\frac{1}{3}\\right) = \\left(1,1,0\\right).\n\\end{equation}\nNow substitute $\\varphi$ with \n\\begin{equation}\n\t\\varphi = \\frac{1}{\\sqrt 2} \\begin{bmatrix}\n\t\t0 \\\\ 
v+H\n\t\\end{bmatrix}.\n\\end{equation}\nThe Yukawa coupling then gives the quark mass terms:\n\\begin{equation}\n\t\\mathcal L_{Q,\\mathrm{Yuk}}\n\t= -\\frac{v+H}{\\sqrt 2} \\left( Y^u_{IJ} u^\\dagger_{LI} u_{RJ} +Y^d_{IJ} d^\\dagger_{LI} d_{RJ} \\right) +h.c.\n\\end{equation}\nWe can then use a basis transformation (singular value decomposition) to diagonalize the coupling matrices:\n\\begin{equation}\n\tY^u_{IJ} = \\left[U_u M_u K^\\dagger_u\\right]_{IJ}, \\quad\n\tY^d_{IJ} = \\left[U_d M_d K^\\dagger_d\\right]_{IJ},\n\\end{equation}\nwhere $M_u$ and $M_d$ are diagonal matrices.\nWe can change the basis\n\\begin{equation}\n\\begin{aligned}\n\tu_L &\\rightarrow U_u u_L, & u_R &\\rightarrow K_u u_R, \\\\\n\td_L &\\rightarrow U_d d_L, & d_R &\\rightarrow K_d d_R,\n\\end{aligned}\n\\end{equation}\nand define the diagonal masses:\n\\begin{equation}\n\tm_{u_I} = \\frac{v}{\\sqrt 2} (M_u)_{II}, \\quad \n\tm_{d_I} = \\frac{v}{\\sqrt 2} (M_d)_{II},\n\\end{equation}\nthen the Lagrangian in the quark sector can be expressed as:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_Q =&\\ \\sum_I \\bar u_I\\left(i\\cancel D^c-m_{u_I}\\right)u_I + \\sum_I \\bar d_I \\left(i \\cancel D^c-m_{d_I}\\right)d_I \\\\\n\t& -\\frac{H}{v} \\sum_I\\left(m_{u_I}\\bar u_I u_I+m_{d_I}\\bar d_I d_I \\right) + \\mathcal L_{Q-G},\n\\end{aligned}\n\\end{equation}\nwhere $\\cancel D^c$ is the covariant derivative of QCD (neglecting the electroweak gauge fields), and $\\mathcal L_{Q-G}$ denotes the coupling between the quarks and the electroweak gauge fields.\n\n\\subsubsection{Lepton Sector}\nThe coupling between the Higgs and $e_R$ has the general Yukawa form:\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} = -Y^e_{IJ} L_{iI}^\\dagger \\varphi_i e_{RJ} + h.c.,\n\\end{equation}\nwhere $Y^e_{IJ}$ is a general complex coupling matrix in generation space.\nThe Yukawa coupling term corresponds to\n\\begin{equation}\n\t\\left(\\bar 2, \\frac{1}{2}\\right) \\times \\left(2, \\frac{1}{2}\\right) \\times \\left(1, -1\\right) = \\left(1,0\\right),\n\\end{equation}\nwhich is indeed a singlet under gauge transformations (it is also a singlet under Lorentz transformations).\nThe Higgs field then produces the term\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} = -\\frac{Y^e_{IJ}}{\\sqrt 2}(v+H) e_{LI}^\\dagger e_{RJ} + h.c.\n\\end{equation}\nWe then diagonalize the coupling matrix:\n\\begin{equation}\n\t Y^e_{IJ} = \\left[U_{e} M_e K^\\dagger_{e}\\right]_{IJ},\n\\end{equation}\nwhere $M_e$ is diagonal.\nWe can then change the basis by\n\\begin{equation}\n\\begin{aligned}\n\te_R &\\rightarrow K_e e_R, \\\\\n\te_L &\\rightarrow U_e e_L.\n\\end{aligned}\n\\end{equation}\nThe coupling then gives the masses:\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} \n\t= - \\sum_I m_{e_I} \\bar e_I e_I - \\frac{1}{v}\\sum_I m_{e_I} H \\bar e_I e_I,\n\\end{equation}\nwhere\n\\begin{equation}\n\tm_{e_I} = \\frac{v}{\\sqrt 2} (M_e)_{II}.\n\\end{equation}\n\nNow we consider the neutrino masses, which are much smaller than the electron mass.\nThe neutrinos can similarly obtain masses by coupling to the Higgs field.\nMoreover, since the right-handed neutrino is a gauge singlet, a Majorana mass term for it is also allowed, so the Yukawa couplings can be\n\\begin{equation}\n\t\\mathcal L_{\\nu,\\mathrm{Yuk}} = - Y^\\nu_{IJ} \\varepsilon^{ij} L_{iI} \\varphi_j \\nu_{RJ} - \\frac{1}{2} M_{IJ} \\nu_{RI} \\nu_{RJ} + h.c.\n\\end{equation}\nThe first term corresponds to\n\\begin{equation}\n\t\\left(2, -\\frac{1}{2}\\right) \\times \\left(2, \\frac{1}{2}\\right) \\times \\left(1, 0\\right) = \\left(1,0\\right) \\oplus(3,0),\n\\end{equation}\nand $\\varepsilon^{ij}$ is the Clebsch-Gordan 
\\subsubsection{Lepton Sector}\nThe coupling between the Higgs and $e_R$ has the general Yukawa form:\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} = -Y^e_{IJ} L_{iI}^\\dagger \\varphi_i e_{RJ} + h.c.,\n\\end{equation}\nwhere $Y^e_{IJ}$ is a general complex coupling matrix in generation space.\nThe Yukawa coupling term corresponds to\n\\begin{equation}\n\t\\left(\\bar 2, \\frac{1}{2}\\right) \\times \\left(2, \\frac{1}{2}\\right) \\times \\left(1, -1\\right) = \\left(1,0\\right),\n\\end{equation}\nwhich is indeed a singlet under gauge transformations (it is also a singlet under Lorentz transformations).\nThe Higgs field then produces the term\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} = -\\frac{Y^e_{IJ}}{\\sqrt 2}(v+H) e_{LI}^\\dagger e_{RJ} + h.c..\n\\end{equation}\nWe then diagonalize the coupling matrix:\n\\begin{equation}\n\t Y^e = U_{e} M_e K^\\dagger_{e},\n\\end{equation}\nwhere $M_e$ is diagonal, and change the basis by\n\\begin{equation}\n\\begin{aligned}\n\te_R &\\rightarrow K_e e_R, \\\\\n\te_L &\\rightarrow U_e e_L.\n\\end{aligned}\n\\end{equation}\nThe coupling then gives the mass terms:\n\\begin{equation}\n\t\\mathcal L_{e,\\mathrm{Yuk}} \n\t= - \\sum_I m_{e_I} \\bar e_I e_I - \\frac{1}{v}\\sum_I m_{e_I} H \\bar e_I e_I,\n\\end{equation}\nwhere\n\\begin{equation}\n\tm_{e_I} = \\frac{v}{\\sqrt 2} (M_e)_{II}.\n\\end{equation}\n\nNow we consider the neutrino masses, which are much smaller than the electron mass.\nThe neutrinos can similarly obtain mass by coupling to the Higgs field.\nHowever, as there is no gauge symmetry constraint on the right-handed neutrinos, the Yukawa coupling can also contain a Majorana mass term:\n\\begin{equation}\n\t\\mathcal L_{\\nu,\\mathrm{Yuk}} = - Y^\\nu_{IJ} \\varepsilon^{ij} L_{iI} \\varphi_j \\nu_{RJ} - \\frac{1}{2} M_{IJ} \\nu_{RI} \\nu_{RJ} + h.c..\n\\end{equation}\nThe first term corresponds to\n\\begin{equation}\n\t\\left(2, -\\frac{1}{2}\\right) \\times \\left(2, \\frac{1}{2}\\right) \\times \\left(1, 0\\right) = \\left(1,0\\right) \\oplus(3,0),\n\\end{equation}\nwhere $\\varepsilon^{ij}$ is the Clebsch-Gordan coefficient.\nThe second term is automatically a gauge singlet.\nWe note here that the right-handed neutrino does not participate in any gauge interaction, so it can be neglected in the Standard Model.\nThe only role it plays is to generate the neutrino mass.\n\nThe Yukawa potential gives\n\\begin{equation}\n\t\\mathcal L_{\\nu,\\mathrm{mass}} = -\\frac{1}{2}\n\t\\begin{bmatrix}\n\t\t\\nu_L & \\nu_R\n\t\\end{bmatrix} \n\t\\begin{bmatrix}\n\t\t0 & m \\\\ m^T & M\n\t\\end{bmatrix} \n\t\\begin{bmatrix}\n\t\t\\nu_L \\\\ \\nu_R\n\t\\end{bmatrix} + h.c., \\qquad m \\equiv \\frac{v}{\\sqrt 2} Y^\\nu.\n\\end{equation}\nIf $M \\gg m$, the eigenvalues of the mass matrix will be a set of huge masses plus a set of tiny masses (the seesaw mechanism).\n
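For a single generation, the $2\\times 2$ mass matrix can be diagonalized explicitly: the eigenvalues of\n\\begin{equation}\n\t\\begin{bmatrix}\n\t\t0 & m \\\\ m & M\n\t\\end{bmatrix}\n\\end{equation}\nare\n\\begin{equation}\n\t\\lambda_{\\pm} = \\frac{M}{2}\\left(1 \\pm \\sqrt{1+\\frac{4m^2}{M^2}}\\right)\n\t\\simeq M + \\frac{m^2}{M},\\ -\\frac{m^2}{M},\n\\end{equation}\nso the light eigenvalue is of order $m^2/M$, suppressed relative to $m$ by the small ratio $m/M$.\n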
In general, the low-energy part of the mass term is\n\\begin{equation}\n\t\\mathcal L_{\\nu,\\mathrm{mass}} = -\\frac{1}{2} (M_\\nu)_{IJ} \\left(\\nu_{LI} \\nu_{LJ} + \\nu_{LI}^\\dagger \\nu_{LJ}^\\dagger\\right)\\left(1+\\frac{H}{v}\\right)^2,\n\\end{equation}\nwhere\n\\begin{equation}\n\t(M_\\nu)_{IJ} = \\frac{v^2}{2} \\left(Y^{\\nu T} M^{-1} Y^\\nu \\right)_{IJ}.\n\\end{equation}\nWe can also diagonalize the matrix $M_\\nu$ by a unitary matrix $U_\\nu$, corresponding to\n\\begin{equation}\n\t\\nu_L \\rightarrow U_\\nu \\nu_L.\n\\end{equation}\n\nWith the discussion above, we can write down the Lagrangian in the lepton sector:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_L =&\\ \\sum_I \\bar e_I (i\\cancel\\partial-m_{e_I})e_I + \\sum_I \\left[i\\nu_{LI}^\\dagger \\bar\\sigma^\\mu \\partial_\\mu \\nu_{LI} - \\frac{1}{2}m_{\\nu_I} \\left(\\nu_{LI} \\nu_{LI} + \\nu_{LI}^\\dagger \\nu_{LI}^\\dagger\\right)\\right] \\\\\n\t& -\\frac{H}{v} \\sum_I \\left(m_{e_I} \\bar e_I e_I + m_{\\nu_I}\\bar\\nu_{LI}\\nu_{LI}\\right)+ \\mathcal L_{L-G},\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal L_{L-G}$ is the lepton-gauge coupling term.\n\n\n\n\\subsection{Gauge-Current Coupling}\nUsing the formulas\n\\begin{equation}\n\\begin{aligned}\n\tg_2 \\sum_{a=1}^2 W^a_\\mu T^a_2 &= \\frac{e}{\\sqrt{2}\\sin{\\theta_W}}\\begin{bmatrix}\n\t\t0 & W_\\mu^+ \\\\ W_\\mu^- & 0\n\t\\end{bmatrix}, \\\\\n\tg_2 W_\\mu^3 T^3_2 + g_1 B_\\mu Y\n\t&= e A_\\mu q + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu \\left(T_2^3 - \\sin^2{\\theta_W} q\\right),\n\\end{aligned}\n\\end{equation}\nthe coupling terms $\\mathcal L_{Q-G}$ and $\\mathcal L_{L-G}$ take the form:\n\\begin{equation}\n\\begin{aligned}\n\t\\mathcal L_{Q-G} &= \\frac{e}{\\sqrt{2}\\sin{\\theta_W}} \\left(W^{+}_\\mu J_Q^{-\\mu} + W^{-}_\\mu J_Q^{+\\mu} \\right) + e A_\\mu J^{\\mu}_{\\mathrm{EM},Q} + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu J^\\mu_{z,Q}, \\\\\n\t\\mathcal L_{L-G} &= \\frac{e}{\\sqrt{2}\\sin{\\theta_W}}\\left(W^{+}_\\mu J_L^{-\\mu} + W^{-}_\\mu J_L^{+\\mu} \\right) + e A_\\mu J^{\\mu}_{\\mathrm{EM},L} + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu J^\\mu_{z,L}.\n\\end{aligned}\n\\end{equation}\nNote that the off-diagonal currents $J^\\pm$ change under the basis transformations $U_{IJ}$, i.e., the transformation mixes different generations:\n\\begin{equation}\n\\begin{aligned}\n\tJ^{-\\mu}_Q &= (V_Q)_{IJ} u_{LI}^\\dagger \\bar\\sigma^\\mu d_{LJ}, \\\\\n\tJ^{+\\mu}_Q &= (V^\\dagger_Q)_{IJ} d_{LI}^\\dagger \\bar\\sigma^\\mu u_{LJ}, \\\\\n\tJ^{-\\mu}_L &= (V_L)_{IJ} \\nu_{LI}^\\dagger \\bar\\sigma^\\mu e_{LJ}, \\\\\n\tJ^{+\\mu}_L &= (V^\\dagger_L)_{IJ} e_{LI}^\\dagger \\bar\\sigma^\\mu \\nu_{LJ},\n\\end{aligned}\n\\end{equation}\nwhere the mixing matrices are defined as\n\\begin{equation}\n\tV_Q \\equiv U^\\dagger_u U_d, \\quad\n\tV_L \\equiv U_\\nu^\\dagger U_e.\n\\end{equation}\n$V_Q$ is the CKM quark mixing matrix, and $V_L$ is the PMNS lepton mixing matrix.\n\nThe diagonal part is invariant under the basis transformation:\n\\begin{equation}\n\\begin{aligned}\n\tJ^\\mu_{\\mathrm{EM},Q} &= \\frac{2}{3} \\left(u_{LI}^\\dagger \\bar\\sigma^\\mu u_{LI} + u_{RI}^\\dagger \\sigma^\\mu u_{RI}\\right) - \\frac{1}{3} \\left(d_{LI}^\\dagger \\bar\\sigma^\\mu d_{LI} + d_{RI}^\\dagger \\sigma^\\mu d_{RI}\\right), \\\\\n\tJ^\\mu_{\\mathrm{EM},L} &= - e_{LI}^\\dagger \\bar\\sigma^\\mu e_{LI} - e_{RI}^\\dagger \\sigma^\\mu e_{RI}, \\\\\n\tJ_{z,Q}^\\mu &= \\frac{1}{2} u_{LI}^\\dagger \\bar\\sigma^\\mu u_{LI} - \\frac{1}{2} d_{LI}^\\dagger \\bar\\sigma^\\mu d_{LI} -\\sin^2{\\theta_W}J_{\\mathrm{EM},Q}^\\mu, \\\\\n\tJ_{z,L}^\\mu &= \\frac{1}{2} \\nu_{LI}^\\dagger \\bar\\sigma^\\mu \\nu_{LI} - \\frac{1}{2} e_{LI}^\\dagger \\bar\\sigma^\\mu e_{LI} -\\sin^2{\\theta_W}J_{\\mathrm{EM},L}^\\mu,\n\\end{aligned}\n\\end{equation}\nthat is, $J_z^\\mu = J_3^\\mu - \\sin^2{\\theta_W} J_{\\mathrm{EM}}^\\mu$ in each sector.\n\nFinally, we can sum over the currents and write down the total fermion-gauge coupling:\n\\begin{equation}\n\t\\mathcal L_{F-G} = \\frac{e}{\\sqrt{2}\\sin{\\theta_W}} \\left(W^{+}_\\mu J^{-\\mu} + W^{-}_\\mu J^{+\\mu} \\right) + e A_\\mu J^{\\mu}_{\\mathrm{EM}} + \\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} Z_\\mu J^\\mu_{z}.\n\\end{equation}\nFrom the current-gauge coupling, we can recover the effective 4-fermion theory by integrating out the gauge fields.\nIn the low-energy regime ($p^2 \\ll m_W^2$), the vertex function for charged-current processes (e.g.\\ muon decay) is:\n\\begin{equation}\n\t2\\left(\\frac{e}{\\sqrt{2}\\sin{\\theta_W}} \\right)^2 J^+_\\mu \\frac{g^{\\mu\\nu}}{p^2-m_W^2} J^-_\\nu \n\t\\simeq -\\frac{e^2}{m_W^2 \\sin^2{\\theta_W}} J^{+\\mu} J^-_\\mu.\n\\end{equation}\nThe diagonal part of the vertex is\n\\begin{equation}\n\t\\left(\\frac{e}{\\sin{\\theta_W}\\cos{\\theta_W}} \\right)^2 J^z_\\mu \\frac{g^{\\mu\\nu}}{p^2-m_Z^2} J^z_\\nu \n\t\\simeq -\\frac{e^2}{m_Z^2 \\sin^2{\\theta_W}\\cos^2{\\theta_W}} J^{z\\mu} J^z_\\mu.\n\\end{equation}\nSince $m_Z^2\\cos^2{\\theta_W} = m_W^2$, the two suppression factors coincide, and we can identify the Fermi constant\n\\begin{equation}\n\t\\frac{4G_F}{\\sqrt 2} = \\frac{e^2}{m_W^2 \\sin^2{\\theta_W}},\n\\end{equation}\nso that the 4-fermion vertex is\n\\begin{equation}\n\t\\mathcal L_{4F} = -\\frac{4G_F}{\\sqrt 2} \\left(J^{+\\mu} J_\\mu^- + J^{z\\mu} J_\\mu^z \\right).\n\\end{equation}\n\n", "meta": {"hexsha": "65680d44812bde2c0a6ac5c981d647624077ca60", "size": 38894, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/StandardModel.tex", "max_stars_repo_name": "jayren3996/Notes_on_QFT", "max_stars_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/StandardModel.tex", "max_issues_repo_name": "jayren3996/Notes_on_QFT", "max_issues_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/StandardModel.tex", "max_forks_repo_name": "jayren3996/Notes_on_QFT", "max_forks_repo_head_hexsha": "f4a9590b7fda5f4d2f2f230eb6cb5e31e2c40954", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.3765957447, "max_line_length": 278, "alphanum_fraction": 0.6658867692, "num_tokens": 14885, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8244619263765708, "lm_q2_score": 0.6791787121629466, "lm_q1q2_score": 0.5599569893838214}}
{"text": "\\chapter{\u001b$B>oHyJ,J}Dx<0\u001b(B}\n\\section{\u001b$B>oHyJ,J}Dx<0$N0lHL7A\u001b(B}\n\u001b$B>oHyJ,J}Dx<0$H$O!\"FHN)JQ?t$,\u001b(B1\u001b$B$D$NHyJ,J}Dx<0$G!\"\u001b(B$n$\u001b$B3,$N0lHLE*$JHyJ,J}Dx<0\u001b(B\n\u001b$B$O!\"<!$N$h$&$K=q$1$k!#\u001b(B\n\\begin{equation}\n a_1\\frac{du}{dx} + a_2\\frac{d^2u}{dx^2} + \\cdot + a_n\\frac{d^nu}{dx^n}\n  = f(x).\n\\end{equation}\n\n\u001b$B0l3,$NF1<!J}Dx<0$K$D$$$F$OJQ?tJ,N%K!$r;H$C$F4JC1$K2r$1$k!#:#!\"HyJ,J}Dx<0\u001b(B\n\u001b$B$,0l<!$N6a;w$K$h$C$F@.N)$7$F$$$k$H$$$&2>Dj$r$9$k$H!\"\u001b(B$du/dx=f(x)$\u001b$B$O<!$N$h\u001b(B\n\u001b$B$&$K=q$-49$($i$l$k!#\u001b(B\n\\begin{equation}\n du = f(x)dx.\n\\end{equation}\n\u001b$B$3$l$O!\"\u001b(B$u$\u001b$B$N\u001b(B$x$\u001b$B$K$h$kHyJ,$O5i?tE83+$K$h$j<!$N$h$&$KI=$5$l$k$H$-!\"\u001b(B\n\\begin{equation}\n \\phi(x+dx) = \\phi(x) + \\left.\\frac{d\\phi}{dx}\\right|_{x}dx + O(dx^2),\n\\end{equation}\n\u001b$BFs<!0J9_$N9`$rL5;k$7$F!\"\u001b(B$x$\u001b$B$G$N\u001b(B$\\phi$\u001b$B$NHyJ,$O!\"\u001b(B$\\phi$\u001b$B$NL58B>.JQ2=$O\u001b(B\n$d\\phi=\\phi(d+dx)-\\phi(x)$\u001b$B$G$\"$k$H2>Dj$9$k$H!\"\u001b(B\n\\begin{equation}\n d\\phi = \\frac{d\\phi}{dx}dx,\n\\end{equation}\n\u001b$B$G$\"$k$H$$$&2>Dj$K$h$k!#$3$l$K$h$j!\"0l3,$NF1<!J}Dx<0$O!\"<!$N$h$&$J@QJ,$K\u001b(B\n\u001b$B$h$j2r$rF@$i$l$k!#\u001b(B\n\\begin{equation}\n u = \\int dx\\mspace{5mu}f(x).\n\\end{equation}\n\n\u001b$B<!$K!\"0l3,$N8GM-CMLdBj$K$D$$$F9M$($k!#0lHV4JC1$J7A$H$7$F!\"\u001b(B\n\\begin{equation}\n u_x + au = 0,\n\\end{equation}\n\u001b$B$r9M$($k!#$3$N$H$-!\"\u001b(B$a$\u001b$B$OG$0U$N?t$@$,!\"\u001b(B$u$\u001b$B$N4X?t$G$O$J$$$H$9$k!#$3$l$rJQ\u001b(B\n\u001b$B?tJ,N%$9$k$H!\"@QJ,Dj?t\u001b(B$C$\u001b$B$rMQ$$$F!\"\u001b(B\n\\begin{equation}\n u = Ce^{-\\int dx\\mspace{5mu}a},\n\\end{equation}\n\u001b$B$,F@$i$l$k!#\u001b(B\n\n\u001b$B<!$K!\"HsF1<!$N>l9g$N0l3,$NJ}Dx<0$r9M$($k!#\u001b(B\n\\begin{equation}\n u_x + au = f(x).\n\\end{equation}\n\u001b$B$3$3$G!\"$3$NJ}Dx<0$NFC2r$O!\"F1<!<0$N2r$G$\"$k!#HsF1<!9`\u001b(B$f(x)$\u001b$B$,\u001b(B0\u001b$B$N$h$&$J\u001b(B\n\u001b$B>l9g$K$O!\"J}Dx<0$N2r$OF1<!J}Dx<0$N2r$H$J$k!#$=$3$G!\"HsF1<!9`$,\u001b(B0\u001b$B$G$J$$$H\u001b(B\n\u001b$B$-$O!\"F1<!J}Dx<0$N@QJ,Dj?t$O\u001b(B$x$\u001b$B$N4X?t$G$\"$k$H$$$&F6;!$+$i!\"0lHL2r$r\u001b(B\n$C(x)\\exp(\\int dx\\mspace{5mu}\\phi(x))$\u001b$B$H$9$k!#$3$l$rHsF1<!J}Dx<0$KBeF~$9\u001b(B\n\u001b$B$k$3$H$G!\":G=*E*$K$O\u001b(B$C$\u001b$B$K$D$$$F$NHyJ,J}Dx<0$,F@$i$l$k!#$3$l$K$h$j!\"HsF1\u001b(B\n\u001b$B<!J}Dx<0$N0lHL2r$O!\"\u001b(B\n\\begin{equation}\n u = \n  \\left(\\int dx\\mspace{5mu}f(x)e^{-\\int dx\\mspace{5mu}a} + b\\right)\n  e^{-\\int dx\\mspace{5mu}a}\n\\end{equation}\n\u001b$B$H$J$k!#\u001b(B\n\n\u001b$B<!$K!\"\u001b(B$n$\u001b$B<!$NJ}Dx<0$r9M$($k!#\u001b(B$n$\u001b$B<!$NHyJ,J}Dx<0$O\u001b(B$n$\u001b$B$3$N3,$r$b$A!\"$=$N0l\u001b(B\n\u001b$BHL2r$O$=$l$i$N3,$N@~7AOB$G!\"\u001b(B\n\\begin{equation}\n u = \\sum_{i=1}^n a_iu_i,\n\\end{equation}\n\u001b$B$N$h$&$KM?$($i$l$k!#\u001b(B\n\n\u001b$B:#!\"$3$N3,$N@~7AFHN)@-$r9M$($k$?$a$K!\"$3$l$i$N2r$,@~7A=>B0$J>l9g$r9M$($k!#\u001b(B\n\u001b$B$=$N>l9g$O!\"\u001b(B\n\\begin{equation}\n  \\sum_{i=1}^n a_iu_i = 0,\n\\end{equation}\n\u001b$B$N$h$&$K!\"\u001b(B$n$\u001b$BHVL\\$N2r$,$=$NB>$N3,$N@~7AOB$GI=$5$l$k!#$=$7$F!\"$3$l$O\u001b(B$n-1$\n\u001b$B2q$OHyJ,$G$-$k$O$:$G$\"$j!\"\u001b(B\n\\begin{equation}\n \\sum_{i=1}^n a_iu^{(i)}_i = 
Next, consider an $n$-th order equation. An $n$-th order ODE has $n$ independent solutions, and its general solution is given by their linear combination\n\\begin{equation}\n u = \\sum_{i=1}^n a_iu_i.\n\\end{equation}\n\nTo examine the linear independence of these solutions, consider the case in which they are linearly dependent. In that case there exist coefficients $a_i$, not all zero, such that\n\\begin{equation}\n  \\sum_{i=1}^n a_iu_i = 0,\n\\end{equation}\nthat is, one of the solutions can be expressed as a linear combination of the others. This relation can be differentiated up to $n-1$ times, so that\n\\begin{equation}\n \\sum_{i=1}^n a_iu^{(k)}_i = 0, \\qquad k = 0, 1, \\ldots, n-1.\n\\end{equation}\n\nBy these relations, the coefficients $a_i$ can be obtained as the solution of the matrix equation\n\\begin{equation}\n \\begin{pmatrix}\n  u_1 &  u_2  & \\cdots & u_n \\\\\n  u'_1 & u'_2 & \\cdots & u'_n \\\\ \n  \\vdots & \\vdots & \\ddots& \\vdots \\\\\n  u^{(n-1)}_1 & u^{(n-1)}_2 & \\cdots & u^{(n-1)}_n\n \\end{pmatrix}\n \\begin{pmatrix}\n  a_1 \\\\ a_2 \\\\ \\vdots \\\\ a_n\n \\end{pmatrix}\n = 0.\n\\end{equation}\nA nonzero solution of this equation exists only when the determinant of the matrix is $0$, in which case the solutions are linearly dependent; hence a nonvanishing determinant is a necessary condition for the linear independence of the general solution. The matrix is called the Wronskian matrix $W(u_1,\\cdots,u_n)$, and its determinant $\\Delta(u_1,\\cdots,u_n)$ is called the Wronskian:\n\\begin{equation}\n W(u_1, u_2, \\cdots, u_n) =\n\\begin{vmatrix}\n  u_1 &  u_2  & \\cdots & u_n \\\\\n  u'_1 & u'_2 & \\cdots & u'_n \\\\ \n  \\vdots & \\vdots & \\ddots& \\vdots \\\\\n  u^{(n-1)}_1 & u^{(n-1)}_2 & \\cdots & u^{(n-1)}_n\n\\end{vmatrix}.\n\\end{equation}\n\n
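For example, $u_1 = \\cos x$ and $u_2 = \\sin x$, the two solutions of $u'' + u = 0$, have\n\\begin{equation}\n W(\\cos x, \\sin x) =\n\\begin{vmatrix}\n  \\cos x & \\sin x \\\\\n  -\\sin x & \\cos x\n\\end{vmatrix}\n = \\cos^2 x + \\sin^2 x = 1 \\neq 0,\n\\end{equation}\nso these two solutions are linearly independent everywhere.\n\n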
A linear system whose coefficient matrix is the Wronskian matrix can be solved for the constants by Cramer's rule,\n\\begin{equation}\n C_i = \\frac{|W_i|}{|W|},\n\\end{equation}\nwhere $|W_i|$ is the determinant obtained from the Wronskian by replacing its $i$-th column with the right-hand side of the system.\n\n\\section{General Solution of Inhomogeneous Equations}\nSuppose the differential operator of an $n$-th order ODE is given by $L$, for example\n\\[\n L = a_n\\frac{d^n}{dt^n} + a_{n-1}\\frac{d^{n-1}}{dt^{n-1}} + \\cdots + a_0,\n\\]\nand suppose the inhomogeneous equation is defined by\n\\begin{equation}\n L(y) = f(x).\n\\end{equation}\nThe first-order case was treated in the previous section by the variation of constants; we now consider the general case. As for the first-order equation, the fundamental system $u$ satisfies $L(u)=0$ and is expressed as\n\\begin{equation}\n u = C_1u_1 + C_2 u_2 + \\cdots + C_nu_n.\n\\end{equation}\nExtending this to the general solution, we write $y$ in terms of functions $V_i=V_i(x)$ as\n\\begin{equation}\n y = V_1u_1 + V_2u_2 + \\cdots + V_nu_n.\n\\end{equation}\nHere the $V_i$ are functions chosen such that $L(y)=f(x)$ holds; since this requirement imposes only a single relation between the $n$ functions $V_i$ and $f(x)$, the remaining $n-1$ relations can be prescribed arbitrarily.\n\nTo solve the equation conveniently, we impose the conditions\n\\begin{equation}\n \\begin{cases}\n  V_1'u_1 + V_2'u_2 + \\cdots + V_n'u_n = 0, \\\\\n  V_1'u_1' + V_2'u_2' + \\cdots + V_n'u_n' = 0, \\\\\n  \\cdots \\\\\n  V_1'u_1^{(n-2)} + V_2'u_2^{(n-2)}  + \\cdots + V_n'u_n^{(n-2)} = 0.\n \\end{cases}\n \\label{variation_constant_method_suppose1}\n\\end{equation}\n\nBased on these conditions, we substitute $y$ into the differential equation. For the derivatives of $y$, the assumptions above give\n\\begin{equation}\n \\begin{cases}\n  y' = V_1u_1' + V_2u_2' + \\cdots + V_nu_n', \\\\\n  \\cdots \\\\\n  y^{(n-1)} = V_1u_1^{(n-1)} + V_2u_2^{(n-1)} + \\cdots +\n  V_nu_n^{(n-1)}, \\\\\n  y^{(n)} =  V_1u_1^{(n)} + V_2u_2^{(n)} + \\cdots +\n  V_nu_n^{(n)} +\n  V_1'u_1^{(n-1)} + V_2'u_2^{(n-1)} + \\cdots +\n  V_n'u_n^{(n-1)}.\n \\end{cases}\n\\end{equation}\nSubstituting these into the differential equation and using the fact that $L(u_i)=0$ holds for every element of the fundamental system (for example, the terms involving $u_1$ collect into $V_1L(u_1)+V_1'u_1^{(n-1)}$, and since $L(u_1)=0$ only $V_1'u_1^{(n-1)}$ remains), we obtain\n\\begin{equation}\n V_1'u_1^{(n-1)} + V_2'u_2^{(n-1)} + \\cdots + V_n'u_n^{(n-1)}\n = f(x).\n\\end{equation}\nCombining this with Eq.~(\\ref{variation_constant_method_suppose1}) yields the algebraic system\n\\begin{equation}\n \\begin{pmatrix}\n  u_1 & u_2 & \\cdots & u_n \\\\\n  u_1' & u_2' & \\cdots & u_n' \\\\\n  \\vdots & \\vdots & \\ddots & \\vdots \\\\\n  u_1^{(n-1)} & u_2^{(n-1)} & \\cdots & u_n^{(n-1)}\n \\end{pmatrix}\n \\begin{pmatrix}\n  V_1' \\\\ V_2' \\\\ \\vdots \\\\ V_n'\n \\end{pmatrix}\n =\n \\begin{pmatrix}\n  0 \\\\ 0 \\\\ \\vdots \\\\ f(x)\n \\end{pmatrix}.\n\\end{equation}\n\nBy Cramer's rule, this is solved by\n\\begin{equation}\n V_i' = \\frac{|W_i|}{|W|},\n\\end{equation}\nand the general solution is obtained by the integrals\n\\begin{equation}\n y = \\sum_{i=1}^n\\left(\\int dx \\frac{|W_i|}{|W|}\\right)u_i.\n\\end{equation}\n\nHere $|W_i|$ is the determinant obtained from the Wronskian $|W|$ by replacing its $i$-th column with the right-hand side of the system:\n\\begin{equation}\n W_i =\n  \\begin{vmatrix}\n   u_1 & \\cdots & u_{i-1} & 0 & u_{i+1} & \\cdots & u_n \\\\\n   u_1' & \\cdots & u_{i-1}' & 0 & u_{i+1}' & \\cdots & u_n' \\\\\n   \\vdots & & & \\vdots & & & \\vdots \\\\\n   u_1^{(n-1)} & \\cdots & u_{i-1}^{(n-1)} & f(x) & u_{i+1}^{(n-1)} & \\cdots & u_n^{(n-1)} \\\\\n  \\end{vmatrix}.\n\\end{equation}\n\nIn particular, for a second-order equation,\n\\begin{equation}\n V_1 = -\\int dx \\frac{u_2}{W}f(x),\n\\end{equation}\n\\begin{equation}\n V_2 = \\int dx \\frac{u_1}{W}f(x).\n\\end{equation}\n\n\\begin{thebibliography}{10}\n \\bibitem{Ince} INCE, E. 
L., Ordinary Differential Equations, Dover\n\t publishing inc., New York\n\\end{thebibliography}", "meta": {"hexsha": "a2173e583e98e16089092e36c1e02d81235ca2e6", "size": 7746, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ODE.tex", "max_stars_repo_name": "sakairi-nobuyuki/my_technical_document", "max_stars_repo_head_hexsha": "0ff03d6ea0cfa2466b5d169c81811f4009a1bca6", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ODE.tex", "max_issues_repo_name": "sakairi-nobuyuki/my_technical_document", "max_issues_repo_head_hexsha": "0ff03d6ea0cfa2466b5d169c81811f4009a1bca6", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ODE.tex", "max_forks_repo_name": "sakairi-nobuyuki/my_technical_document", "max_forks_repo_head_hexsha": "0ff03d6ea0cfa2466b5d169c81811f4009a1bca6", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.735426009, "max_line_length": 108, "alphanum_fraction": 0.509424219, "num_tokens": 5114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619177503205, "lm_q2_score": 0.6791787121629466, "lm_q1q2_score": 0.5599569835250559}}
{"text": "\\chapter{{\\tt ILUMtx}: Incomplete $LU$ Matrix Object}\n\\label{chapter:ILUMtx}\n\\par\nThe {\\tt ILUMtx} object represents and approximate (incomplete) \n$(L+I)D(I+U)$, $(U^T+I)D(I+U)$ or $(U^H+I)D(I+U)$ factorization.\nIt is a very simple object, rows and columns of $L$ and $U$ are\nstored as single vectors. \nAll computations to compute the factorization and to solve linear\nsystems are performed with sparse BLAS1 kernels.\nPresently, the storage scheme is very simple minded, we use {\\tt\nmalloc()} and {\\tt free()} to handle the individual vectors of the\nrows and columns of $L$ and $U$.\n\\par\nAt present we have one factorization method.\nNo pivoting is performed.\nRows of $U$ are stored, along with columns of $L$ if the matrix is\nnonsymmetric.\nIf a zero pivot is encountered on the diagonal during the\nfactorization, the computation stops and returns a nonzero error\ncode.\n(Presently, there is no ``patch-and-go'' functionality.)\nAn $L_{j,i}$ entry is kept if\n$\n|L_{j,i} D_{i,i}| \\ge \\sigma \\sqrt{|D_{i,i}| \\ |A_{j,j}|},\n$\nwhere $\\sigma$ is a user supplied drop tolerance,\nand similarly for $U_{i,j}$.\nNote, if $A_{j,j} = 0$, as is common for KKT matrices,\nall $L_{j,i}$ and $U_{i,j}$ entries will be kept.\nIt is simple to modify the code to use another drop tolerance\ncriteria, e.g., an absolute tolerance, or one based only on\n$|D_{i,i}|$.\nWe intend to write other factorization methods that will\nconform to a user-supplied nonzero structure for the factors.\n", "meta": {"hexsha": "75e6d9c881d78229ef89d723fdb94a273c882d19", "size": 1457, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/ILUMtx/doc/intro.tex", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/ILUMtx/doc/intro.tex", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/ILUMtx/doc/intro.tex", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "avg_line_length": 41.6285714286, "max_line_length": 66, "alphanum_fraction": 0.7275223061, "num_tokens": 424, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.82446190912407, "lm_q2_score": 0.679178699175393, "lm_q1q2_score": 0.5599569669585469}}
{"text": "% Copyright 2018 Google LLC\n%\n% Use of this source code is governed by an MIT-style\n% license that can be found in the LICENSE file or at\n% https://opensource.org/licenses/MIT.\n\n%!TeX spellcheck = en-US\n\n\\documentclass[eprint.tex]{subfiles}\n\\begin{document}\n\\section{Security of HBSH}\nAssuming the security of the underlying block and stream ciphers,\nwe show here that HBSH has an\nadvantage bound that grows quadratically with the number of queries.\n\\subsection{Definition of HBSH}\\label{funcdef}\nWe provide here an\nequivalent definition of HBSH in functional form.\nWhere a parameter is given as $L \\Concat R$ we have that\n$L \\in \\mathcal{L}$, $R \\in \\mathcal{R}$, $L \\Concat R \\in \\mathcal{M}$.\n\n\\begin{align*}\n    \\xi :{}& \\mathcal{K}_H \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{R} \\\\\n    \\xi_{K_H}(T, L \\Concat R) \\defeq{}& R \\boxplus H_{K_H}(T, L) \\\\\n    %\n    \\allowdisplaybreaks[4] \\vphantom{|} \\\\ \\allowdisplaybreaks[0]\n    %\n    \\phi :{}& \\mathcal{K}_H \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M} \\\\\n    \\phi_{K_H, T}(L \\Concat R) \\defeq{}& L \\Concat \\xi_{K_H}(T, L \\Concat R) \\\\\n    ={}& L \\Concat (R \\boxplus H_{K_H}(T, L)) \\\\\n    \\phi^{-1}_{K_H, T}(L \\Concat R) ={}& L \\Concat (R \\boxminus H_{K_H}(T, L)) \\\\\n    %\n    \\allowdisplaybreaks[4] \\vphantom{|} \\\\ \\allowdisplaybreaks[0]\n    %\n    \\theta :{}& \\Perm(\\mathcal{R}) \\times (\\mathcal{N} \\rightarrow \\bin^{l_S}) \\times \\mathcal{M} \\rightarrow \\mathcal{M} \\\\\n    \\theta_{\\pi, F}(L \\Concat R) \\defeq{}& (L \\oplus F(\\pi(R))[0;\\abs{L}]) \\Concat \\pi(R) \\\\\n    \\theta_{\\pi, F}^{-1}(L \\Concat R) ={}& (L \\oplus F(R)[0;\\abs{L}]) \\Concat \\pi^{-1}(R) \\\\\n    %\n    \\allowdisplaybreaks[4] \\vphantom{|} \\\\ \\allowdisplaybreaks[0]\n    %\n    \\eta :{}& \\mathcal{K}_H \\times \\Perm(\\mathcal{R}) \\times (\\mathcal{N} \\rightarrow \\bin^{l_S}) \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M} \\\\\n    \\eta_{K_H, \\pi, F, T} \\defeq{}& \\phi_{K_H,T}^{-1} \\circ \\theta_{\\pi, F} \\circ \\phi_{K_H,T} \\\\\n    %\n    \\allowdisplaybreaks[4] \\vphantom{|} \\\\ \\allowdisplaybreaks[0]\n    %\n    \\bar{\\eta} :{}& (\\mathcal{N} \\rightarrow \\bin^{l_S}) \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M} \\\\\n    \\bar{\\eta}_{F} \\defeq{}& \\eta_{K_H, E_{K_E}, F} \\quad \\textrm{where} \\ K_E \\Concat K_H \\Concat \\ldots = F(\\lambda) \\\\\n    %\n    \\allowdisplaybreaks[4] \\vphantom{|} \\\\ \\allowdisplaybreaks[0]\n    %\n    \\HBSH :{}& \\mathcal{K}_S \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M} \\\\\n    \\HBSH_{K_S} \\defeq{}& \\bar{\\eta}_{S_{K_S}} \\\\\n\\end{align*}\n\n\\subsection{Security definitions}\n\\parintro{Hash function}\nThe hash function $H$\nmust be $\\epsilon$-almost-$\\Delta$-universal\\label{eadudef}\n($\\epsilon$-$\\Delta$U)\nfor some $\\epsilon$~\\cite{eadu}:\nfor any $g \\in \\mathcal{R}$ and\nany two distinct messages $(T, L) \\neq (T', L')$:\n%\n\\begin{displaymath}\n\\probsub{K \\sample \\mathcal{K}_H}{H_{K}(T, L) \\boxminus H_{K}(T', L') = g} \\leq \\epsilon\n\\end{displaymath}\n%\nGiven bounds on the lengths of $T$ and $L$, the value of $\\epsilon$ for the\nhash function used in HPolyC is given in \\autoref{hpolycepsilon},\nand for Adiantum in \\autoref{adiantumepsilon}.\n\n\\parintro{Block cipher}\nThe block cipher\n$E$\nmust be a super-pseudorandom 
permutation~\\cite{concsym}:\n%\n\\begin{align*}\n    \\advantage{\\pm \\mathrm{prp}}{E}[(A)] \\defeq\n    {}&\\left\\lvert\\probsub{K \\sample \\mathcal{K}_E}{A^{E_K,E_K^{-1}}\\Rightarrow 1}\\right.\n    \\\\\n    {}&\\left. - \\probsub{\\pi \\sample \\Perm(\\mathcal{R})}{A^{\\pi,\\pi^{-1}}\\Rightarrow 1}\\right\\rvert\n    \\\\\n    \\advantage{\\pm \\mathrm{prp}}{E}[(q, t)] \\defeq\n    {}&\\max_{A \\in \\mathcal{A}(q, t)} \\advantage{\\pm \\mathrm{prp}}{E}[(A)]\n\\end{align*}\n%\nwhere $A$ is an adversary,\n$\\Perm(S)$ denotes the set of all permutations on a set $S$,\nand\n$\\mathcal{A}(q, t)$\nis the set of all adversaries that make at most $q$ queries and take at most $t$ time.\n\n\\parintro{Stream cipher}\nOur definition is related to the definition of a PRF in \\cite{concsym}, but\nbecause we model the stream cipher $S$ as a pseudorandom function with a very long output,\nwe bound the adversary not only in how many queries they make,\nbut also in how many bits they read in total.\nThus, a query consists of a pair\n$(N, l_q) \\in \\mathcal{N} \\times \\NN$ where $0 < l_q \\leq l_S$,\nand the response is $S_K(N)[0;l_q]$ or $F(N)[0;l_q]$;\nwe cap the sum of $l_q$ values across queries.\n%\n\\begin{align*}\n    \\advantage{\\mathrm{sc}}{S}[(A)] \\defeq\n    {}&\\left\\lvert \\probsub{K \\sample \\mathcal{K}_S}{A^{S_{K}(.)[0;.]}\\Rightarrow 1}\\right.\n    \\\\\n    {}&\\left. - \\probsub{F \\sample (\\mathcal{N} \\rightarrow \\bin^{l_S})}\n    {A^{F(.)[0;.]}\\Rightarrow 1} \\right\\rvert\n    \\\\\n    \\advantage{\\mathrm{sc}}{S}[(q, l, t)]\n    \\defeq {}&\\max_{A \\in \\mathcal{A}(q, l, t)} \\advantage{\\mathrm{sc}}{S}[(A)]\n\\end{align*}\n%\nwhere $A$ is an adversary,\n$\\mathcal{N} \\rightarrow \\bin^{l_S}$ denotes the set of all\nfunctions from $\\mathcal{N}$ to $\\bin^{l_S}$,\nand\n$\\mathcal{A}(q, l, t)$\nis the set of all adversaries that make at most $q$ queries such that\n$\\sum l_q \\leq l$, and which take at most $t$ time.\n\n\\parintro{Tweakable SPRP}\nLet $\\LP^\\mathcal{T}(\\mathcal{M})$ denote the set of all\ntweakable length-preserving functions\n$\\bm{f} : \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M}$\nsuch that for all $T, M \\in \\mathcal{T} \\times \\mathcal{M}$,\n$\\abs{\\bm{f}(T, M)} = \\abs{M}$. Let $\\Perm^\\mathcal{T}(\\mathcal{M})$ denote\nthe set of $\\bm{\\pi} \\in \\LP^\\mathcal{T}(\\mathcal{M})$ such that\nfor all $T \\in \\mathcal{T}$, $\\bm{\\pi}_{T}$ is a bijection.\nIn an abuse of notation\nwe use $\\bm{\\pi}^{-1}$ to refer to the function\nsuch that $\\bm{\\pi}^{-1}(T, \\bm{\\pi}(T, M)) = M$ ie $(\\bm{\\pi}^{-1})_T = (\\bm{\\pi}_T)^{-1}$.\n\nPer~\\cite{cmc}, for a tweakable, variable-input-length, super-pseudorandom permutation\n$\\bm{E} : \\mathcal{K} \\times \\mathcal{T} \\times \\mathcal{M} \\rightarrow \\mathcal{M}$\nthe distinguishing advantage of an adversary $A$ is:\n%\n\\begin{align*}\n    \\advantage{\\pm \\widetilde{\\mathrm{prp}}}{\\bm{E}}[(A)] \\defeq\n    {}&\\left\\lvert\\probsub{K \\sample \\mathcal{K}}{A^{\\bm{E}_K,\\bm{E}_K^{-1}}\\Rightarrow 1}\\right.\n    \\\\\n    {}&\\left. 
- \\probsub{\\bm{\\pi} \\sample \\Perm^\\mathcal{T}(\\mathcal{M})}\n        {A^{\\bm{\\pi},\\bm{\\pi}^{-1}}\\Rightarrow 1}\\right\\rvert\n    \\\\\n    \\intertext{and}\n    \\advantage{\\pm \\widetilde{\\mathrm{prp}}}{\\bm{E}}[(q, l_T, l_M, t)]\n    \\defeq {}&\n    \\max_{A \\in \\mathcal{A}(q, l_T, l_M, t)} \\advantage{\\pm \\widetilde{\\mathrm{prp}}}{\\bm{E}}[(A)]\n\\end{align*}\n%\nwhere $\\mathcal{A}(q, l_T, l_M, t)$\nis the set of all adversaries that\nmake at most $q$ queries,\nwith tweak of length at most $l_T$\nand message of length at most $l_M$,\nand take at most $t$ time.\n\n\\subsection{Primary claim}\n\\begin{theorem}\\label{hbshadvantage}\n    Where HBSH mode is instantiated with hash function $H$, block cipher $E$ and stream cipher $S$,\n    and where $H$ is $\\epsilon$-almost-$\\Delta$-universal for inputs such that\n    $\\abs{T} \\leq l_T$, $\\abs{L} \\leq l_M - n$, then:\n    %\n    \\begin{align*}\n        \\advantage{\\pm \\widetilde{\\mathrm{prp}}}{\\HBSH}[(q, l_T, l_M, t)]\n        \\leq {}&(\\epsilon + 2(2^{-n}))\\binom{q}{2} \\\\\n        {}&+ \\advantage{\\mathrm{sc}}{S}[(q + 1, \\abs{K_E} + \\abs{K_H} + q(l_M - n), t')] \\\\\n        {}&+ \\advantage{\\pm \\mathrm{prp}}{E}[(q, t')]\\\\\n    \\end{align*}\n    %\n    where $t' = t + \\bigO{q(l_T + l_M)}$.\n\\end{theorem}\n\nThis is proven in what follows. First we use the H-coefficient technique to establish\n\\autoref{xyadv}, a closely related bound; then in \\autoref{hbshproof} we bridge the\ngap between this bound and the desired bound.\n\n\\subsection{H-coefficient technique}\nThe H-coefficient technique was introduced by Patarin in 1991~\\cite{ppdes,hco}.\nIn what follows we rely on the highly recommended exposition of\n\\cite{hco2} Section 3,\n``The H-coefficient Technique in a Nutshell'', though we vary slightly by introducing\na new symbol $\\Upsilon$ so we can distinguish between what is sampled and the adversary oracles.\n\nWe wish to bound the adversary's ability to distinguish between\ntwo ``worlds'', world X (the ``real world'') and world Y (the ``ideal world'').\nAssociated with world X we have\n\\begin{itemize}\n    \\item $\\Omega_X$: a set of instances we sample fairly from. We write\n    $\\probsub{\\Omega_X}{}$ as shorthand for $\\probsub{\\omega \\sample \\Omega_X}{}$.\n    \\item $\\Upsilon_X$: a map from an instance $\\omega \\in \\Omega_X$ to a tuple of\n    deterministic oracles we can present to the adversary.\n    \\item $\\rho_X \\defeq \\probsub{\\Omega_X}{A^{\\Upsilon_X(\\omega)} \\Rightarrow 1}$\\label{rhox}\n    where the adversary $A$ is clear from context.\n    As the adversary interacts with the oracles, a transcript $\\tau$\n    of queries and responses is generated.\n    \\item $X$: a random variable representing a transcript for $A^{\\Upsilon_X(\\omega)}$\n    given $\\omega \\sample \\Omega_X$; we write $\\tau \\sample X$\n    to indicate that $\\tau$ is sampled from this distribution.\n    \\item $\\comp_X$:  We write $\\omega \\in \\comp_X(\\tau)$\n    if a transcript $\\tau$ is ``compatible'' with an instance $\\omega \\in \\Omega_X$,\n    ie if given an adversary $A$ that makes those queries, the oracles $\\Upsilon_X(\\omega)$\n    make those responses and thus $A^{\\Upsilon_X(\\omega)}$ produces that transcript.\n\\end{itemize}\nWe have the same for world Y throughout.\n\nWe assume a deterministic adversary. 
This is without loss of\ngenerality; if we assume a distribution of adversaries $A \\sample \\mathcal{A}$\nthen an advantage bound on each of the deterministic adversaries $A$ bounds the advantage\nof the ensemble $\\mathcal{A}$.\n\nOnce $\\omega$ is sampled, the oracles $\\Upsilon_X(\\omega)$ are then deterministic;\nthe transcript produced by $A^{\\Upsilon_X(\\omega)}$ is thus the unique transcript\ncompatible both with adversary $A$ and instance $\\omega$. Where a transcript\nis not compatible with $A$, $\\prob{X = \\tau} = \\prob{Y = \\tau} = 0$. If either\nof these is not zero, the transcript is compatible with $A$, and\n$\\prob{X = \\tau} = \\probsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}$ and similarly for Y.\n\nThe adversary always returns the same result for the same transcript, so\nits advantage is maximized if it returns 1 exactly when $\\prob{Y = \\tau} > \\prob{X = \\tau}$.\nTherefore:\n\\begin{align*}\n    \\advantage{\\textrm{Y}}{\\textrm{X}}[(A)] ={}&\n    \\abs{\\rho_X - \\rho_Y} \\\\\n    \\leq {}& \\sum_{\\tau: \\prob{Y = \\tau} > \\prob{X = \\tau}}\n    \\left(\\prob{Y = \\tau} - \\prob{X = \\tau}\\right) \\\\\n    = {}& \\sum_{\\tau: \\prob{Y = \\tau} > \\prob{X = \\tau}}\n    \\prob{Y = \\tau}\\left(1 - \\frac{\\prob{X = \\tau}}{\\prob{Y = \\tau}}\\right) \\\\\n    = {}& \\sum_{\\tau: \\prob{Y = \\tau} > 0}\n    \\prob{Y = \\tau}\\left(1 - \\minone{\\frac{\\prob{X = \\tau}}{\\prob{Y = \\tau}}}\\right) \\\\\n    = {}& \\expsub{\\tau \\sample Y}{1 - \\minone{\n           \\frac{\\prob{X = \\tau}}{\\prob{Y = \\tau}}\n        }} \\\\\n    = {}& 1 - \\expsub{\\tau \\sample Y}{\\minone{\n           \\frac{\\probsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}}\n           {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}}\n        }}\n\\end{align*}\nwhere $\\expect{}$ is the expected value.\nWith this rearrangement, the only probability distribution we sum over is that\nof $Y$, which can be more convenient to work with.\n\n\\subsection{Preliminaries}\n\\parintro{World X}\nWorld X is\nan idealized form of HBSH which uses a random function and permutation:\n$\\Omega_X \\defeq \\mathcal{K}_H \\times \\Perm(\\mathcal{R}) \\times (\\mathcal{N} \\rightarrow \\bin^{l_S})$,\nand given $(K_H, \\pi, F) \\in \\Omega_X$,\n$\\Upsilon_X(K_H, \\pi, F) \\defeq (\\eta_{K_H, \\pi, F}, \\eta_{K_H, \\pi, F}^{-1})$.\n\n\\parintro{World Y}\nWorld Y samples fairly from all possible pairs of tweakable length-preserving functions:\n$\\Omega_Y \\defeq \\LP^\\mathcal{T}(\\mathcal{M}) \\times \\LP^\\mathcal{T}(\\mathcal{M})$,\nso given $(\\mathcal{E}, \\mathcal{D}) \\in \\Omega_Y$,\n$\\Upsilon_Y(\\mathcal{E}, \\mathcal{D}) \\defeq (\\mathcal{E}, \\mathcal{D})$.\n\n\\parintro{Transcript}\nOur transcript $\\tau$ is a sequence of tuples\n$(d^i, T^i, P^i, C^i)$\nin\n$\\{+, -\\} \\times \\mathcal{T} \\times \\mathcal{M} \\times \\mathcal{M}$\nfor $i \\in [0 \\ldots q-1]$.\nFor the $i$th sequential query\n$d^i$ is the direction of the query:\nif $d^i = +$ then a plaintext query $T^i, P^i$ is made and the result is $C^i$,\nwhile if $d^i = -$ then a ciphertext query $T^i, C^i$ is made and the result is $P^i$.\n\n\\parintro{Pointless queries}\nWe consider adversaries contained in $\\mathcal{A}(q, l_T, l_M, t)$ for some value of\nthe bounds $q$, $l_T$, $l_M$, $t$.\nWithout loss of generality, we consider only adversaries who do not make ``pointless''\nqueries as defined in \\cite{cmc}. 
Thus for $i < j$, if $d^j = +$ then\n$(T^i, P^i) \\neq (T^j, P^j)$, and similarly if $d^j = -$ then\n$(T^i, C^i) \\neq (T^j, C^j)$.\n\n\\parintro{Bad events}\nWe define various classes of ``bad event'':\n\n\\begin{itemize}\n    \\item $(K_H, \\tau) \\in \\badQ$ if there exists $i < j$ such that either\n    \\begin{itemize}\n        \\item $d^j = +$ and $\\xi_{K_H}(T^i, P^i) = \\xi_{K_H}(T^j, P^j)$, or\n        \\item $d^j = -$ and $\\xi_{K_H}(T^i, C^i) = \\xi_{K_H}(T^j, C^j)$.\n    \\end{itemize}\n    \\item $(K_H, \\tau) \\in \\badR$ if there exists $i < j$ such that either\n    \\begin{itemize}\n        \\item $d^j = +$ and $\\xi_{K_H}(T^i, C^i) = \\xi_{K_H}(T^j, C^j)$, or\n        \\item $d^j = -$ and $\\xi_{K_H}(T^i, P^i) = \\xi_{K_H}(T^j, P^j)$.\n    \\end{itemize}\n\\end{itemize}\n\nFinally we define the disjunction\n$\\bad \\defeq \\badQ \\cup \\badR$.\n\n\\subsection{Lemmas}\n\\begin{lemma} \\label{badQ}\n    For any $\\tau$ such that $\\prob{Y = \\tau} > 0$,\n    \\begin{displaymath}\n        \\probsub{K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\in \\badQ}\n        \\leq \\epsilon \\binom{q}{2}\n    \\end{displaymath}\n\\end{lemma}\n\n\\begin{proof}\nAssume $d^j = +$ for some pair $i, j$, and let $L^i \\Concat R^i = P^i$ and similarly for $P^j$.\nFrom $\\prob{Y = \\tau} > 0$ we know that $\\abs{T^i}, \\abs{T^j} \\leq l_T$ and $\\abs{P^i}, \\abs{P^j} \\leq l_M$,\nand therefore that $\\abs{L^i}, \\abs{L^j} \\leq l_M - n$.\nBecause pointless queries are forbidden we also know that $(T^i, P^i) \\neq (T^j, P^j)$.\n%\n\\begin{align*}\n    {}&\\xi_{K_H}(T^i, L^i \\Concat R^i) = \\xi_{K_H}(T^j, L^j \\Concat R^j) \\\\\n    \\Leftrightarrow{}& R^i \\boxplus H_{K_H}(T^i, L^i) = R^j \\boxplus H_{K_H}(T^j, L^j) \\\\\n    \\Leftrightarrow{}& H_{K_H}(T^i, L^i) \\boxminus H_{K_H}(T^j, L^j) = R^j \\boxminus R^i \\\\\n\\end{align*}\n%\nIf $(T^i, L^i) = (T^j, L^j)$ then $R^i \\neq R^j$ and equality cannot occur.\nOtherwise by the $\\epsilon$-$\\Delta$U property of $H$ this occurs with probability\nat most $\\epsilon$ (where $\\epsilon$ depends on the bounds on\nthe parameters $l_T$, $l_M -n$).\n\nWhere $d^j = -$, a similar argument applies for $C^i$, $C^j$.\nFor an upper bound, we sum across all $\\binom{q}{2}$ pairs $i, j$.\n\\end{proof}\n\n\\begin{lemma} \\label{badR}\n    For any $K_H \\sample \\mathcal{K}_H$,\n    \\begin{displaymath}\n        \\probsub{\\tau \\sample Y}{(K_H, \\tau) \\in \\badR}\n        \\leq 2^{-n} \\binom{q}{2}\n    \\end{displaymath}\n\\end{lemma}\n\n\\begin{proof}\n    Assume $d^j = +$ for some pair $i, j$, and let $L^i \\Concat R^i = C^i$ and similarly for $C^j$.\n    Because pointless queries are forbidden, in world Y,\n    conditioning on all prior queries and responses,\n    all possible values of $C^j$ such that $\\abs{C^j} = \\abs{P^j}$ will be equally likely.\n    In particular, even after conditioning on $L^j$,\n    all values of $R^j$ are equally likely. 
Therefore\n    $\\prob{R^j = R^i \\boxplus H_{K_H}(T^i, L^i) \\boxminus H_{K_H}(T^j, L^j)} = 2^{-n}$.\n\n    Where $d^j = -$, a similar argument applies for $P^i$, $P^j$.\n    For an upper bound, we sum across all $\\binom{q}{2}$ pairs $i, j$.\n\\end{proof}\n\n\\begin{lemma} \\label{notbad}\n    For any $K_H \\in \\mathcal{K}_H$ and transcript $\\tau$ such that $\\prob{Y = \\tau} > 0$ and\n    $(K_H, \\tau) \\notin \\bad$,\n    \\begin{displaymath}\n        \\condprobsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}{\\omega = (K_H, ., .)}\n        \\geq\n        \\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}\n    \\end{displaymath}\n\\end{lemma}\n\n\\begin{proof}\n    In world Y, for any transcript such that $\\prob{Y = \\tau} > 0$,\n    since all queries are distinct, the responses are independent fair random draws of\n    binary strings of the appropriate length, and therefore\n    $\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)} = \\prod_i 2^{-\\abs{P^i}}$.\n\n    For world X,\n    let $P_L^i \\Concat P_R^i = P^i$, $P_M^i = \\xi_{K_H, T^i}(P^i)$ and similarly for $C^i$.\n    Since $(K_H, \\tau) \\notin \\bad$ we have that $P_M^i \\neq P_M^j$\n    and $C_M^i \\neq C_M^j$ for all $i \\neq j$.\n    $(K_H, \\pi, F) \\in \\comp_X(\\tau)$ exactly if, for each $i$:\n    \\begin{align*}\n        {}& \\eta_{K_H, \\pi, F, T^i}(P^i) = C^i\\\\\n        \\Leftrightarrow {}& \\phi_{K_H,T^i}^{-1}(\\theta_{\\pi, F}(\\phi_{K_H,T^i}(P^i))) = C^i\\\\\n        \\Leftrightarrow {}& \\theta_{\\pi, F}(P_L^i \\Concat P_M^i) = C_L^i \\Concat C_M^i \\\\\n        \\Leftrightarrow {}& P_L^i \\oplus F(C_M^i)[0;\\abs{P_L^i}] = C_L^i \\wedge \\pi(P_M^i) = C_M^i \\\\\n        \\Leftrightarrow {}& F(C_M^i)[0;\\abs{P^i} - n] = P_L^i \\oplus C_L^i \\wedge \\pi(P_M^i) = C_M^i \\\\\n    \\end{align*}\n\n    Since $\\pi$ and $F$ are drawn independently, we can consider these conditions on them\n    separately. 
For $F$, since all $C_M^i$ are distinct,\n    these are once again independent fair random draws of\n    binary strings of the appropriate length:\n    \\begin{displaymath}\n        \\probsub{\n            F \\sample (\\mathcal{N} \\rightarrow \\bin^{l_S})\n        }{\n            \\forall_i : F(C_M^i)[0;\\abs{P^i} - n] = P_L^i \\oplus C_L^i\n        } = \\prod_i 2^{-(\\abs{P^i} - n)}\n    \\end{displaymath}\n\n    For $\\pi$, again given that all $P_M^i$ are distinct and all $C_M^i$ are distinct,\n    we have that\n    \\begin{displaymath}\n        \\condprobsub{\n            \\pi \\sample \\Perm(\\mathcal{R})\n        }{\n            \\pi(P_M^j) = C_M^j\n        }{\n            \\forall_{0 \\leq i < j} : \\pi(P_M^i) = C_M^i\n        } = \\frac{1}{2^n - j}\n    \\end{displaymath}\n    (we number queries in the range $i \\in [0 \\ldots q-1]$) and therefore that:\n    \\begin{displaymath}\n        \\probsub{\n            \\pi \\sample \\Perm(\\mathcal{R})\n        }{\n            \\forall_i : \\pi(P_M^i) = C_M^i\n        } = \\prod_i \\frac{1}{2^n - i}\n    \\end{displaymath}\n\n    Therefore:\n    \\begin{align*}\n        {}&\\condprobsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}{\\omega = (K_H, ., .)} \\\\\n        ={}& \\probsub{\n            \\pi \\sample \\Perm(\\mathcal{R}),\n            F \\sample (\\mathcal{N} \\rightarrow \\bin^{l_S})\n        }{\n            \\forall_i : \\eta_{K_H, \\pi, F, T^i}(P^i) = C^i\n        } \\\\\n        ={}& \\prod_i \\frac{1}{2^n - i}2^{-(\\abs{P^i} - n)} \\\\\n        \\geq {}& \\prod_i 2^{-\\abs{P^i}} = \\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)} \\qedhere \\\\\n    \\end{align*}\n\\end{proof}\n\n\\begin{lemma} \\label{xyadv}\n    \\begin{displaymath}\n        \\abs{\\rho_X - \\rho_Y}\n        \\leq (\\epsilon + 2^{-n})\\binom{q}{2}\n    \\end{displaymath}\n\\end{lemma}\n\n\\begin{proof}\n    For any transcript $\\tau$ such that $\\prob{Y = \\tau} > 0$:\n    \\begin{align*}\n        {}& \\minone{\n               \\frac{\\probsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}}\n               {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}}\n            } \\\\\n        ={}& \\minone{\n              \\frac{\\expsub{K_H \\sample \\mathcal{K}_H}{\\condprobsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}{\\omega = (K_H, ., .)}}}\n              {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}}} \\\\\n        \\intertext{$\\minone{\\expect{U}} \\geq \\expect{\\minone{U}}$ for any real random variable $U$, therefore}\n        \\geq{}&\n            \\expsub{K_H \\sample \\mathcal{K}_H}{\n              \\minone{\n              \\frac{\\condprobsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}{\\omega = (K_H, ., .)}}\n              {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}}\n              }} \\\\\n        \\geq{}&\n            \\probsub{K_H \\sample \\mathcal{K}_H}{\n                \\frac{\\condprobsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}{\\omega = (K_H, ., .)}}\n                {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}} \\geq 1\n            } \\\\\n        \\intertext{by \\autoref{notbad}}\n        \\geq{}& \\probsub{K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\notin \\bad} \\\\\n    \\end{align*}\n\n    Using the H-coefficient technique:\n    \\begin{align*}\n        {}&\\abs{\\rho_X - \\rho_Y} \\\\\n        \\leq{}& 1 - \\expsub{\\tau \\sample Y}{\\minone{\n               \\frac{\\probsub{\\Omega_X}{\\omega \\in \\comp_X(\\tau)}}\n               {\\probsub{\\Omega_Y}{\\omega \\in \\comp_Y(\\tau)}}\n            }} \\\\\n        \\leq{}& 
1 - \\expsub{\\tau \\sample Y}{\n            \\probsub{K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\notin \\bad}} \\\\\n        = {}& \\probsub{\\tau \\sample Y, K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\in \\bad} \\\\\n        \\leq {}& \\probsub{\\tau \\sample Y, K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\in \\badQ}\n         + \\probsub{\\tau \\sample Y, K_H \\sample \\mathcal{K}_H}{(K_H, \\tau) \\in \\badR} \\\\\n         \\intertext{by \\autoref{badQ} and \\autoref{badR}}\n        \\leq {}& (\\epsilon + 2^{-n})\\binom{q}{2} \\qedhere \\\\\n    \\end{align*}\n\\end{proof}\n\n\\subsection{Proof of primary claim}\n\\begin{proof}[Proof of \\autoref{hbshadvantage}]\\label{hbshproof}\n    Copying the definitions of $\\rho_X$, $\\rho_Y$ from \\autoref{rhox},\n    we define\n    \\begin{align*}\n        \\rho_V \\defeq{}& \\probsub{K_S \\sample \\mathcal{K}_S}\n            {A^{\\HBSH_{K_S}, \\HBSH_{K_S}^{-1}}\\Rightarrow 1} \\\\\n        \\rho_W \\defeq{}& \\probsub{F \\sample (\\mathcal{N} \\rightarrow \\bin^{l_S})}\n            {A^{\\bar{\\eta}_{F}, \\bar{\\eta}_{F}^{-1}}\\Rightarrow 1} \\\\\n        \\rho_X \\defeq{}& \\probsub{\\Omega_X}{A^{\\Upsilon_X(\\omega)} \\Rightarrow 1} \\\\\n            ={}&\\probsub{K_H, \\pi, F \\sample \\Omega_X}{A^{\\eta_{K_H, \\pi, F}, \\eta_{K_H, \\pi, F}^{-1}} \\Rightarrow 1} \\\\\n        \\rho_Y \\defeq{}& \\probsub{\\Omega_Y}{A^{\\Upsilon_Y(\\omega)} \\Rightarrow 1} \\\\\n            ={}&\\probsub{\\mathcal{E}, \\mathcal{D} \\sample \\LP^\\mathcal{T}(\\mathcal{M}) \\times \\LP^\\mathcal{T}(\\mathcal{M})}{A^{\\mathcal{E}, \\mathcal{D}} \\Rightarrow 1} \\\\\n        \\rho_Z \\defeq{}& \\probsub{\\bm{\\pi} \\sample \\Perm^\\mathcal{T}(\\mathcal{M})}\n            {A^{\\bm{\\pi},\\bm{\\pi}^{-1}}\\Rightarrow 1} \\\\\n    \\end{align*}\n\n    Distinguishing\n    $\\rho_V$ and $\\rho_W$ is distinguishing the substitution of a stream cipher\n    for a random function.\n    Including the key schedule, the adversary distinguishing\n    $\\rho_V$ and $\\rho_W$ makes at most $q + 1$ queries to the stream cipher\n    or random function respectively, and uses at most $\\abs{K_E} + \\abs{K_H} + q(l_M - n)$ bits\n    of the output; by a standard substitution argument per \\cite{cbcsec,concsym},\n    $\\abs{\\rho_V - \\rho_W} \\leq\n    \\advantage{\\mathrm{sc}}{S}[(q + 1, \\abs{K_E} + \\abs{K_H} + q(l_M - n), t')]$\n    where $t' = t + \\bigO{q(l_T + l_M)}$.\n\n    The differences between $\\rho_W$ and $\\rho_X$ are the use of a block cipher\n    in place of a random permutation, and the use of $F(\\lambda)$ to determine\n    $K_E$ and $K_H$. 
Since $F$ is a random function and $F(\\lambda)$ is used\n    only here, this is equivalent to choosing them at random; again by a substitution\n    argument we have that $\\abs{\\rho_W - \\rho_X} \\leq \\advantage{\\pm \\mathrm{prp}}{E}[(q, t')]$.\n\n    $\\abs{\\rho_X - \\rho_Y} \\leq (\\epsilon + 2^{-n})\\binom{q}{2}$ by \\autoref{xyadv}.\n    Since we forbid pointless queries,\n    $\\abs{\\rho_Y - \\rho_Z} \\leq 2^{-n}\\binom{q}{2}$ by Halevi and Rogaway's PRP-RND lemma\n    (\\cite{cmc}, Appendix C, Lemma 6).\n\n    \\autoref{hbshadvantage} follows by summing these bounds:\n    $\\abs{\\rho_V - \\rho_Z} \\leq\n    \\abs{\\rho_V - \\rho_W} + \\abs{\\rho_W - \\rho_X} + \\abs{\\rho_X - \\rho_Y} + \\abs{\\rho_Y - \\rho_Z}$.\n\\end{proof}\n\n\\subbib\n\\end{document}\n", "meta": {"hexsha": "043a7b55609d91ed0b6cf56d600c265136fef341", "size": 23452, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "specification/proof.tex", "max_stars_repo_name": "enowy/adiantum", "max_stars_repo_head_hexsha": "6a2a4f1c9ff5775d6b1924189482ce9a29e7ca15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 401, "max_stars_repo_stars_event_min_datetime": "2018-10-15T15:03:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T14:57:36.000Z", "max_issues_repo_path": "specification/proof.tex", "max_issues_repo_name": "enowy/adiantum", "max_issues_repo_head_hexsha": "6a2a4f1c9ff5775d6b1924189482ce9a29e7ca15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-02-11T15:18:15.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-25T21:54:45.000Z", "max_forks_repo_path": "specification/proof.tex", "max_forks_repo_name": "enowy/adiantum", "max_forks_repo_head_hexsha": "6a2a4f1c9ff5775d6b1924189482ce9a29e7ca15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 53, "max_forks_repo_forks_event_min_datetime": "2018-10-15T07:03:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-15T12:42:05.000Z", "avg_line_length": 45.2741312741, "max_line_length": 170, "alphanum_fraction": 0.6056199898, "num_tokens": 8521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637505099167, "lm_q2_score": 0.651354857898194, "lm_q1q2_score": 0.5599461600536153}}
{"text": "\\lesson{1}{Nov 15 2021 Mon (08:01:30)}{Polynomial Long Division}{Unit 3}\n\n\\subsubsection*{Steps to Dividing Polynomials}\n\n\\begin{enumerate}\n    \\item Divide\n    \\item Multiply\n    \\item Subtract\n    \\item Bring down\n    \\item Repeat\n\\end{enumerate}\n\n\\subsubsection*{Missing terms}\n\nWhen either polynomial does not have all terms according to the descending powers on the variable, it is necessary to add in terms with $0$ coefficients.\n\n\\begin{example}[Dividing Polynomials]\n    \\[ \\polylongdiv{x^2 + 3x + 2}{x + 1} \\]\n    \n    Let me quickly go over everything:\n    \n    \\begin{enumerate}\n        \\item First part, I'm multiplying $x$ by $x$ to get $x^2$.\n        \\item Second part, I'm also multiplying $1$ by $2$ to get $2$ because every time we do something to one, we must do it to all.\n        \\item Third part, I'm moving the answer we get ($x^2 + 2$) down. Now, I also distributed a negative ($-$), because I'm subtracting.\n        \\item Then, I repeat. I move the $2$ down next to the result I got, which was $2x$.\n        \\item Then, I multiply $x$ by $2$, which gets me $2x$.\n        \\item Since I multiplied $x$ by $2$, I have to do the same on the $1$. In the end, it brought me to $2x + 1$. But, I wanted to subtract the entire thing from the other answer, so I negated the whole equation.\n        \\item After subtracting, you are left with a $0$, meaning there is no remainder.\n    \\end{enumerate}\n\\end{example}\n\n\\subsubsection*{Remainders}\n\nIf the quotient has no remainder (a remainder of $0$), then the divisor is said to be a factor of the dividend.\n\n\\newpage\n", "meta": {"hexsha": "012476cae37886587ca1da0635f3bcf81c9d2ace", "size": 1579, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-1.tex", "max_stars_repo_name": "SingularisArt/notes", "max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_stars_repo_licenses": ["Info-ZIP"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z", "max_issues_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-1.tex", "max_issues_repo_name": "SingularisArt/notes", "max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_issues_repo_licenses": ["Info-ZIP"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-1.tex", "max_forks_repo_name": "SingularisArt/notes", "max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad", "max_forks_repo_licenses": ["Info-ZIP"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.5526315789, "max_line_length": 216, "alphanum_fraction": 0.6757441419, "num_tokens": 453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.7826624738835051, "lm_q1q2_score": 0.5599355239131705}}
{"text": "\\section{Representations of Tweets}\n\\label{sec:tweetRepresentations}\n\nIn this section, we present all the algorithms that we investigated for converting our input tweets to numerical representations.\nThese representations try to capture the semantic meaning of the tweets \nwhile having a constant size, which is independent of the words used in each tweet.\nThus, letting $N$ be the number of input tweets, we create an $N \\times D$ matrix\nwith each line representing a different tweet with $D$ features.\n\n\n\\paragraph{\\textbf{Bag of Words Representation}}\nBag of Words \\cite{harris1954distributional} is a very common and straightforward numerical representation of text data.\nGiven that we have a vocabulary $V$ with finite size, we create\nan array of size $|V|$ with $1s$ in the position of the words belonging in this text and $0s$ everywhere else.\nThus, in our case, we would have an $N \\times |V|$ matrix representing our input tweets.\n\nThe problem with this representation is that $|V|$ can be arbitrary big especially if we do not handle appropriately slang and words with spelling mistakes, which can actually be solved by using the preprocessing methods that we proposed in Section~\\ref{sec:preprocessing}. \nAnother problem of this representation is that it does not comprise any semantic information about the significance of each word in a tweet. \n\n\n\\paragraph{\\textbf{TF-IDF Representation}}\nThe significance of the words in tweets is captured by a more sophisticated representation, which uses the Term Frequency - Inverse Document Frequency (TF-IDF).\nTF-IDF \\cite{sparck1972statistical} works by determining the relative frequency of a word in a specific document compared to the inverse proportion of that word over the entire document corpus.\nWords that are common in a single or small group of documents tend to have higher TF-IDF numbers than common words such as articles and prepositions. \n\nAs can be used as a standalone representation, this metric can also be used to weight the Bag of Words representation, in which case the $1s$ in $N \\times |V|$  representation matrix are replaced by $TF-IDFs$.\n\n\n\\paragraph{\\textbf{N-grams Extension}}\nIn both of the above representations the $N \\times |V|$ matrix\ncan become even bigger if we add sequences of contiguous words in our vocabulary.\nThese sequences are called \\textit{n-grams} with $n$ being the maximum number of words in a sequence.  \nFor example, for bigrams, in the worst case the size of the vocabulary becomes $|V| + {|V| \\choose 2}$.\nAs we will see in the experimental evaluation we tested representations with up to 8-grams. \n\n\n\\paragraph{\\textbf{Word Embeddings Representation}}\nWord Embeddings (WE) \\cite{DBLP:journals/corr/MikolovSCCD13} is a recently proposed representation that maps words to high dimensional vectors of real numbers. 
\\paragraph{\\textbf{Word Embeddings Representation}}\nWord Embeddings (WE) \\cite{DBLP:journals/corr/MikolovSCCD13} is a recently proposed representation that maps words to high-dimensional vectors of real numbers. \nThis representation tries to capture the multiple contexts in which a word can appear in a text.\nHence, it encourages words that appear in similar contexts to have vectors that are very close to each other.\n\nThe advantage of WE compared to the other aforementioned representations is that each word is represented by a fixed-size vector regardless of the used vocabulary.\nIn the tweet classification problem, given a vocabulary of size $|V|$, the words can be represented by a $|V| \\times D$ matrix, where $D$ is the dimension of our vectors.\nDepending on the vocabulary, we have different options for WE methods, which are listed below.\n\n\\subsubsection{GloVe Training Methods}\n\\textit{GloVe} \\cite{pennington2014glove} has an open-source implementation of related training methods.\nAs a first option, we used the given training tweet dataset to obtain the representation matrix using both the given implementation and \\textit{glove-python}\\footnote{\\url{http://github.com/maciejkula/glove-python}}, which is the Python implementation of the original algorithm that uses \\textit{SGD} for training.\n\n\\subsubsection{Pre-trained GloVe Embeddings}\nAs another option, we also employed the pre-trained set of Word Embeddings for tweets, published by GloVe\\footnote{\\url{http://nlp.stanford.edu/projects/glove/}}.\nThese vectors were trained on $2$ billion tweets covering $1.2$ million words.\nDespite the one-order-of-magnitude difference in the amount of training data, their corpus is observed to cover a high percentage of the words used in our given train set.\n\n\\subsubsection{Hybrid Method}\nSince not all of the words in our training dataset were found in the pre-trained Word Embeddings, we decided to train our own vectors for these remaining words and append them to the pre-trained ones.\n\n\n\\paragraph{\\textbf{From Word Embeddings to Tweet Embeddings}}\nAll aforementioned representation methods enable us to construct vector representations for all the words in the vocabulary used by the tweets (a $|V| \\times D$ matrix).\nSince tweets contain a subset of the words within the vocabulary, it is necessary to aggregate multiple rows of this matrix in order to create representations for tweets (an $N \\times D$ matrix).\n\nAfter investigating various ways of aggregation (concatenation, summation and averaging), taking the mean of the WEs of all the words in a tweet was found to be the most expressive one for our classification algorithms.\n\n\n\\paragraph{\\textbf{Paragraph Embeddings}}\nOne method that directly constructs embeddings for paragraphs (tweets, in our case) is proposed by \\cite{le2014distributed}.\nThe advantage of this method is the fact that the order of the words in a tweet is also taken into account, by permuting them multiple times, which is claimed to reveal the semantic meaning of the phrases in a paragraph/tweet.\nThere are two model representations described in this paper: Distributed Bag of Words (DBoW) and Distributed Memory (DM), both of which are included in our experiments.\nWe observed that the DBoW model is more efficient and requires less memory.\n", "meta": {"hexsha": "44717f0d199da37288d77ee947be75e447bb03f3", "size": 5885, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/tweetRepresentations.tex", "max_stars_repo_name": "dsar/Twitter_Sentiment_Analysis", "max_stars_repo_head_hexsha": "e2ec0b87a594aa1ee69be59f9959a0244a504047", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-03-10T18:43:53.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-07-31T18:54:45.000Z", "max_issues_repo_path": "report/tweetRepresentations.tex", "max_issues_repo_name": "Saibo-creator/Twitter_Sentiment_Analysis", "max_issues_repo_head_hexsha": "e2ec0b87a594aa1ee69be59f9959a0244a504047", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/tweetRepresentations.tex", "max_forks_repo_name": "Saibo-creator/Twitter_Sentiment_Analysis", "max_forks_repo_head_hexsha": "e2ec0b87a594aa1ee69be59f9959a0244a504047", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2017-12-21T22:01:47.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-08T23:33:34.000Z", "avg_line_length": 82.8873239437, "max_line_length": 314, "alphanum_fraction": 0.7977909941, "num_tokens": 1269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5599355180423091}}
{"text": "\\documentclass[a4paper,twocolumn]{article}\n\\usepackage{enumitem}\n\\usepackage{tikz}\n\\usetikzlibrary{arrows,automata}\n\\usepackage{graphicx}\n\\usepackage{mathtools}\n\\usepackage{marvosym}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage[a4paper, top=0.8in, bottom=1.0in, left=0.8in, right=0.8in]{geometry}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{verbatim}\n\\usepackage{minted}\n\n\\setcounter{secnumdepth}{0}\n\n\n\\begin{document}\n\n\\title{A Introduction to an Algebra of labelled Graphs}\n\\author{Anton Lorenzen}\n\\date{March 2018}\n\n\\maketitle\n\nThe algebraic-graphs package, or just alga, is a library for constructing graphs in Haskell using a functional interface.\nThis is a ground up introduction to alga. You should definitely read it from the beginning on though, even if\nyou have already read the original functional pearl\\footnote{https://github.com/snowleopard/alga-paper} since some definitions are different. \n\n\\section{A Introduction to Algebraic Graphs}\n\nThink of any (finite) graph. As you probably know a graph can be represented as a matrix.\nLet's say we have the vertices $V = (a, b, c, d, e)$:\n\n\\begin{center}\n\\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm,\n                    semithick]\n\n  \\node[state] (C)                    {$c$};\n  \\node[state] (B) [below left  of=C] {$b$};\n  \\node[state] (D) [below right of=C] {$d$};\n  \\node[state] (A) [below right of=B] {$a$};\n  \\node[state] (E) [right of=D]       {$e$};\n\n  \\path (A) edge              node {} (B)\n            edge              node {} (C)\n        (B) edge [loop above] node {} (B)\n            edge              node {} (C)\n        (C) edge              node {} (D)\n            edge [bend left]  node {} (E)\n        (D) edge              node {} (A)\n        (E) edge [bend left]  node {} (A);\n\\end{tikzpicture}\n\\end{center}\n\nThis is equivalent to the following matrix:\n\n\\[ A=\n\\begin{bmatrix}\n  0 & 1 & 1 & 0 & 0 \\\\\n  0 & 1 & 1 & 0 & 0 \\\\\n  0 & 0 & 0 & 1 & 1 \\\\\n  1 & 0 & 0 & 0 & 0 \\\\\n  1 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n\\]\n\nWe can decompose $A$ into 25 matrices each containing one of $A$'s elements and zeros everywhere else:\n\n\\[ A=\n\\begin{bmatrix}\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n+ \n\\begin{bmatrix}\n  0 & 1 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n+ \\dots\n\\]\n\nTODO: (also note that $A + A = A$, since $1$ is the maximum element in each cell).\n\nWe can write each of these matrices as $FromVertex\\xrightarrow{}ToVertex$\nin the case of a $1$ and $\\varepsilon$ in the case of a zero.\nTherefore we have:\n\n\\[ A = \\varepsilon + a\\xrightarrow{}b + \\dots \\]\n\nWe can therefore represent every matrix using the following data type:\n\\begin{minted}{haskell}\nnewtype Vertex a = Vertex a\n\ndata Graph a = Empty\n  | Connect Vertex Vertex\n  | Overlay (Graph a) (Graph a)\n\\end{minted}\n\nHere \\texttt{Connect} corresponds to $\\xrightarrow{}$, \\texttt{Empty} to $\\varepsilon$ and \\texttt{Overlay} to $+$.\nWe require that \\texttt{Overlay} is associative, commutative, has \\texttt{Empty} as an identity element\nand is idempotent ($a\\xrightarrow{}b + a\\xrightarrow{}b = a\\xrightarrow{}b$) just like our matrix addition above.\n\nNow, this isn't too revolutionary 
since we are basically just building an unbalanced binary tree of edges.\nBut it gets more interesting if we allow \\texttt{Connect} to connect more than simple vertices.\nFor example, maybe we could define the following:\n\\begin{minted}[escapeinside=~~]{haskell}\ndata Graph a = ~$\\varepsilon$~\n  | Vertex a\n  | (Graph a) ~$\\xrightarrow{}$~ (Graph a)\n  | (Graph a) ~$+$~ (Graph a)\n\n[x,y,z] = map Vertex [\"x\", \"y\", \"z\"]\n\\end{minted}\nWith the law:\n\\begin{minted}[escapeinside=~~]{haskell}\nx ~$\\xrightarrow{}$~ (y ~$+$~ z) == (x ~$\\xrightarrow{}$~ y) ~$+$~ (x ~$\\xrightarrow{}$~ z)\n(x ~$+$~ y) ~$\\xrightarrow{}$~ z == (x ~$\\xrightarrow{}$~ z) ~$+$~ (y ~$\\xrightarrow{}$~ z)\n\\end{minted}\nThat opens up the question what\n\\begin{minted}[escapeinside=~~]{haskell}\nx ~$\\xrightarrow{}$~ y == x ~$\\xrightarrow{}$~ (y ~$+$~ ~$\\varepsilon$~)\n       == (x ~$\\xrightarrow{}$~ y) ~$+$~ (x ~$\\xrightarrow{}$~ ~$\\varepsilon$~)\n\\end{minted}\nmeans for the $x \\xrightarrow{} \\varepsilon$ part?\nObviously, it needs to be $\\varepsilon$, you might say and you would end up with\na $\\xrightarrow{}$ that looks pretty much like matrix multiplication.\nBut alga took a different route and added another law, which gives it a different spin:\n\\begin{minted}[escapeinside=~~]{haskell}\nx ~$\\xrightarrow{}$~ ~$\\varepsilon$~ == ~$\\varepsilon$~ ~$\\xrightarrow{}$~ x == x\nx ~$\\xrightarrow{}$~ (y ~$\\xrightarrow{}$~ z) == (x ~$\\xrightarrow{}$~ (y ~$+$~ z)) ~$+$~ (y ~$\\xrightarrow{}$~ z)\n(x ~$\\xrightarrow{}$~ y) ~$\\xrightarrow{}$~ z == ((x ~$+$~ y) ~$\\xrightarrow{}$~ z) ~$+$~ (x ~$\\xrightarrow{}$~ y)\n\\end{minted}\nThis law is what makes alga different from existing approaches, because it does it away with the difference between vertices and edges:\nThe $\\xrightarrow{}$ constructor of the graph preserves the vertices contained in it. We can even derive this property from the laws:\n\n\\begin{align*}\n  x \\xrightarrow{} y &= (x \\xrightarrow{} y) \\xrightarrow{} \\varepsilon \\\\\n  &= ((x + y) \\xrightarrow{} \\varepsilon) + (x \\xrightarrow{} y) \\\\\n  &= x + y + (x \\xrightarrow{} y)\n\\end{align*}\n\nConvince yourself that we can see $x$ and $y$ as arbitrary graphs from now on and not just as vertices.\nAlso we will assume that $\\xrightarrow{}$ binds stronger than $+$.\n\nSo, what is a good intuition for $\\xrightarrow{}$? 
To come back to your $A$ matrix above, we can write\n\n\\[ (a + b) \\xrightarrow{} (d + e) = \n\\begin{bmatrix}\n  0 & 0 & 0 & 1 & 1 \\\\\n  0 & 0 & 0 & 1 & 1 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n\\]\n\nExercises:\n\\begin{itemize}\n\\item Show that the laws imply that $\\xrightarrow{}$ is associative.\n\\item Show that the idempotence of $+$ and $\\varepsilon$ being the identity of $+$ follow from the other laws.\n\\item Show that $x \\xrightarrow{} x \\xrightarrow{} x = x \\xrightarrow{} x$.\n\\item Show we can derive the decomposition law from the original paper:\n  \\[ a \\xrightarrow{} b \\xrightarrow{} c = a \\rightarrow{} b + a \\xrightarrow{} c + b \\xrightarrow{} c \\]\n\\end{itemize}\n\n\\subsection{About intuition}\n\nNow that you have worked through the exercises, you will probably agree\nthat we can characterize a graph by the following laws:\n\\begin{itemize}\n\\item Addition: $+$ is associative and commutative.\n\\item Identity: $\\varepsilon$ is the identity of $\\xrightarrow{}$.\n\\item Distribution:\n  \\begin{align*} \n    x \\xrightarrow{} (y + z) &= (x \\xrightarrow{} y) + (x \\xrightarrow{} z) \\\\\n    (x + y) \\xrightarrow{} z &= (x \\xrightarrow{} z) + (y \\xrightarrow{} z)\n  \\end{align*}\n\\item Extraction:\n  \\begin{align*}\n    x \\xrightarrow{} (y \\xrightarrow{} z) &= (x \\xrightarrow{} (y + z)) + (y \\xrightarrow{} z) \\\\\n    (x \\xrightarrow{} y) \\xrightarrow{} z &= ((x + y) \\xrightarrow{} z) + (x \\xrightarrow{} y)\n  \\end{align*}\n\\end{itemize}\n\nThe intuition I gave for these laws above might help you to understand what these laws mean in the contexts of graphs\nand indeed all implementations of these laws as adjacency maps or sets follow this intuition.\nHowever it falls short, if we go outside the context of graphs, so I would like to show you some statements you can't derive from the laws:\n\\begin{itemize}\n\\item \\textit{$+$ and $\\xrightarrow{}$ are distinct}. More specifically,\n  every associative, commutative, idempotent operation with an identity fulfills the laws.\n\\item \\textit{$\\varepsilon$ denotes the empty matrix}. 
Consider for example $a + b = a \\xrightarrow{} b = \\text{min}(a,b)$\n  with the identity being $\\infty$.\n\\end{itemize}\n\n\\section{Labelled Graphs}\n\nLet's go back to our graph from the beginning and add labels to the edges.\n\n\\begin{center}\n\\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm,\n                    semithick]\n\n  \\node[state] (C)                    {$c$};\n  \\node[state] (B) [below left  of=C] {$b$};\n  \\node[state] (D) [below right of=C] {$d$};\n  \\node[state] (A) [below right of=B] {$a$};\n  \\node[state] (E) [right of=D]       {$e$};\n\n  \\path (A) edge              node {k} (B)\n            edge              node {l} (C)\n        (B) edge [loop above] node {m} (B)\n            edge              node {n} (C)\n        (C) edge              node {o} (D)\n            edge [bend left]  node {p} (E)\n        (D) edge              node {q} (A)\n        (E) edge [bend left]  node {r} (A);\n\\end{tikzpicture}\n\\end{center}\n\nThis is equivalent to the following matrix:\n\n\\[ A=\n\\begin{bmatrix}\n  0 & k & l & 0 & 0 \\\\\n  0 & m & n & 0 & 0 \\\\\n  0 & 0 & 0 & o & p \\\\\n  q & 0 & 0 & 0 & 0 \\\\\n  r & 0 & 0 & 0 & 0\n\\end{bmatrix}\n\\]\n\nHere you can think of $k, l, m, \\dots$ as variable names standing for strings, numbers or anything else.\nWe will come back to which elements exactly we can use here in a minute.\n\nAgain we can decompose $A$ into 25 matrices each containing one of $A$'s elements and zeros everywhere else:\n\n\\[ A=\n\\begin{bmatrix}\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n+ \n\\begin{bmatrix}\n  0 & k & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0 \\\\\n  0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n+ \\dots\n\\]\n\nAnd write each of these matrices as $FromVertex\\xrightarrow{edge}ToVertex$.\nTherefore we have:\n\n\\[ A = a\\xrightarrow{0}a + a\\xrightarrow{k}b + \\dots \\]\n\nWe can therefore represent every labelled graph as the following data type:\n\\begin{minted}[escapeinside=~~]{haskell}\ndata Graph l a = ~$\\varepsilon$~\n  | Vertex a\n  | (Graph a) ~$\\xrightarrow{l}$~ (Graph a)\n  | (Graph a) ~$+$~ (Graph a)\n\n[x,y,z] = map Vertex [\"x\", \"y\", \"z\"]\n[k,l,m,n] = [\"k\", \"l\", \"m\", \"n\"]\n\\end{minted}\n\nWith the new laws:\n\\begin{itemize}\n\\item Addition: $+$ is associative and commutative.\n\\item Identity: $\\varepsilon$ is the identity of $\\xrightarrow{l}$.\n\\item Distribution:\n  \\begin{align*} \n    x \\xrightarrow{k} (y + z) &= (x \\xrightarrow{k} y) + (x \\xrightarrow{k} z) \\\\\n    (x + y) \\xrightarrow{k} z &= (x \\xrightarrow{k} z) + (y \\xrightarrow{k} z)\n  \\end{align*}\n\\item Extraction:\n  \\begin{align*}\n    x \\xrightarrow{k} (y \\xrightarrow{l} z) &= (x \\xrightarrow{k} (y + z)) + (y \\xrightarrow{l} z) \\\\\n    (x \\xrightarrow{k} y) \\xrightarrow{l} z &= ((x + y) \\xrightarrow{k} z) + (x \\xrightarrow{l} y)\n  \\end{align*}\n\\item Absorption:\n  \\begin{align*}\n    x \\xrightarrow{k} y + x \\xrightarrow{l} y &= x \\xrightarrow{k + l} y\n  \\end{align*}\n\\end{itemize}\n\nExercise: Under which circumstances is the new $\\xrightarrow{l}$ still associative?\n\n\\subsection{Semirings}\n\nYou have probably noticed that we used a new $+$ operation on our labels above\nand as you might have guessed there are laws for it too, since we might run into contradictions otherwise:\n\\begin{itemize}\n\\item $+$ is commutative, associative and 
idempotent.\n\\item There is an element called $0$ which acts as an identity for $+$.\n\\end{itemize}\n\n\\end{document}\n", "meta": {"hexsha": "d3cbd287827066ded3a3a5cd9aa90a1768e0ab6a", "size": 11116, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Laws.tex", "max_stars_repo_name": "anfelor/alga-tutorials", "max_stars_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-04-15T01:39:19.000Z", "max_stars_repo_stars_event_max_datetime": "2018-04-25T11:39:05.000Z", "max_issues_repo_path": "Laws.tex", "max_issues_repo_name": "anfelor/alga-tutorials", "max_issues_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Laws.tex", "max_forks_repo_name": "anfelor/alga-tutorials", "max_forks_repo_head_hexsha": "1344493bce87e7485912ad5e1628dd31c383d1bd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.5143769968, "max_line_length": 142, "alphanum_fraction": 0.6242353365, "num_tokens": 3742, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5599355157982406}}
{"text": "\\subsection{Recursive types}\n\n\\FOM{} is very powerful, but does not allow us to define (non-positive) recursive\ntypes. Adding a type-level fixed point operator enables us to do\nthis (see e.g. \\cite[Chapter 20]{pierce2002types}).\nHowever, we must make a number of choices about the precise nature of our type-level fixed points. \n\n\\subsubsection{Isorecursive and equirecursive types.}\n\nThe first choice we have is between two approaches to exposing the fixpoint\nproperty of our recursive types.\nSystems with \\emph{equirecursive} types identify $(\\fixo f)$ and $f (\\fixo f)$; whereas\nsystems with \\emph{isorecursive} types provide an isomorphism between\nthe two, using a term $\\unwrap$ to convert the first into the second, and a term\n$\\wrap$ for the other direction.\n\nThe tradeoff is that equirecursive types add no additional terms to the language, but\nhave a more complicated metatheory. Indeed, typechecking \\FOMF{} with equirecursive types is not\nknown to be decidable in general (\\cite{dreyer:recursive-modules,cai-giarrusso-ostermann}). Isorecursive types, on the other hand,\nhave a simpler metatheory, but require additional terms. It is not\ntoo important for an IR to be easy to program by hand, so we opt for\nisorecursive types, with our witness terms being $\\wrap$ and $\\unwrap$.\n\n\\subsubsection{Choosing an appropriate fixpoint operator.}\n\nWe also have a number of options for \\emph{which} fixpoint operator to add. The most obvious choice is a fixpoint\noperator $\\fixo$ which takes fixpoints of type-level endofunctions \nat any kind $K$ (i.e. it has signature $\\fixo : (K \\kindArrow K) \\kindArrow\nK$). In contrast, our language \\FOMF{} has a fixpoint operator $\\ifix$\n(``indexed fix'') which allows us to\ntake fixpoints only at kinds $K \\kindArrow \\Type$. \n\n\\noindent The key advantage of $\\ifix$ over $\\fixo$ is that it is much easier to give\nfully-synthesizing type rules for $\\ifix$. To see this, suppose we had a $\\fixo$\noperator in our language, with corresponding $\\wrap$\nand $\\unwrap$ terms. We now want to write typing rules for $\\wrap$.\nHowever, $\\fixo$ allows us to take fixpoints at \\emph{arbitrary} kinds, whereas\n$\\wrap$ and $\\unwrap$ are terms, which always have types of kind $\\Type$.\nThus, the best we can hope for is to use $\\wrap$ and $\\unwrap$ with \\emph{fully applied} fixed points, i.e.:\n\\begin{displaymath}\n\\begin{array}{l@{\\ }l@{\\ }l@{\\ }l@{\\ }l}\n\\wrap_0 f_0            & t &: \\fixo f_0 & \\quad\\textrm{where} & t : f_0\\ (\\fixo f_0)\\\\\n\\wrap_1 f_1  ~ a1      & t &: \\fixo f_1\\ a1 & \\quad\\textrm{where} & t : f_1\\ (\\fixo f_1)\\ a1\\\\\n\\wrap_2 f_2  ~ a1 ~ a2 & t &: \\fixo f_2\\ a1\\ a2 & \\quad\\textrm{where} & t : f_2\\ (\\fixo f_2)\\ a1\\ a2\\\\\n\\dots & & & & \n\\end{array}\n\\end{displaymath}\nThis must be accounted for in our typing rules for fixed points. \n\nIt is possible to give typing rules for $\\wrap$ that will do the right thing\nregardless of how the fixpoint type is applied. One approach is to use\n\\emph{elimination contexts}, which represent the context in which a type will be\neliminated (i.e. applied). This is the approach taken in\n\\cite{dreyer2005understanding}. However, this incurs a cost, since we cannot\n\\emph{guess} the elimination context (since type synthesis is bottom-up), so we\nmust attach elimination contexts to our terms somehow.\n\nAn alternative approach is to pick a more restricted fixpoint operator. 
Using $\\ifix$\navoids the problems of $\\fixo$: it always produces fixpoints at kind $K \\kindArrow\n\\Type$, which must be applied to precisely one argument of kind $K$ before\nproducing a type of kind $\\Type$. This means we can give relatively straightforward typing rules as shown\nin \\cref{fig:fir_typing}.\n\n\\subsubsection{Adequacy of $\\ifix$.}\n\nPerhaps surprisingly, $\\ifix$ is powerful enough to give us fixpoints at any\nkind $K$. We give a semantic argument here, but the idea is simply \nstated: we can ``CPS-transform'' a kind $K$ into\n$(K \\kindArrow \\Type) \\kindArrow \\Type$, which then has the correct shape for\n$\\ifix$.\n \n\\begin{definition}\n  Let $J$ and $K$ be kinds. Then $J$ is a \\emph{retract} of $K$ if\n  there exist functions $\\phi: J \\kindArrow K$ and $\\psi : K \\kindArrow J$\n  such that $\\psi \\circ \\phi = id$.\n\\end{definition}\n\n\\begin{proposition}\n   \\label{prop:retracts-fixed-points}\n  Suppose $J$ is a retract of $K$ and there is a fixpoint operator \\textnormal{$\\fixo_K$} on $K$.\n  Then there is fixpoint operator \\textnormal{$\\fixo_J$} on $J$.\n\\end{proposition}\n\\begin{proof}\n  Take $\\fixo_J(f) = \\psi (\\fixo_K (\\phi \\circ f \\circ \\psi))$.\n\\end{proof}\n\n\\begin{proposition}\n  \\label{prop:kind-structure}\n  Let $K$ be a kind in \\FOMF{}. Then there is a unique (possibly empty) sequence\n  of kinds $(K_0, \\dots, K_n)$ such that $K = \\seqKindArr{K}{\\Type}$.\n\\end{proposition}\n\\begin{proof}\n  Simple structural induction.\n\\end{proof}\n\n\\begin{proposition}\n  For any kind $K$ in \\FOMF{}, $K$ is a retract of $(K \\kindArrow \\Type)\n  \\kindArrow \\Type$.\n\\end{proposition}\n\\begin{proof}\n  Let $K = \\seqKindArr{K}{\\Type}$ (by \\cref{prop:kind-structure}), and take\n  \\begin{align*}\n    \\phi &: K \\kindArrow (K \\kindArrow \\Type) \\kindArrow \\Type \\\\\n    \\phi &= \\lambda (x :: K) . \\lambda (f :: K \\kindArrow \\Type) . f\\ x\\\\\n    \\psi &: ((K \\kindArrow \\Type) \\kindArrow \\Type) \\kindArrow K\\\\\n    \\psi &= \\lambda (w :: (K \\kindArrow \\Type) \\kindArrow \\Type) . \\lambda (\\seq{a :: K}) . w (\\lambda (o :: K) . o\\ \\seq{a})\\\\\n  \\end{align*}\n\\end{proof}\n\n\\begin{corollary}\n  If there is a fixpoint operator at kind $(K \\kindArrow \\Type) \\kindArrow \\Type$ then there is a fixpoint operator at any kind $K$.\n\\end{corollary}\n\nWe can instantiate $\\ifix$ with $K \\kindArrow \\Type$ to get fixpoints at $(K\n\\kindArrow \\Type) \\kindArrow \\Type$, so $\\ifix$ is sufficient to get fixpoints\nat any kind.\n\nNote that since our proof relies on \\cref{prop:kind-structure}, it will not go\nthrough for arbitrary kinds when there are additional kind forms beyond $\\Type$\nand $\\kindArrow$. However, it will still be true for all kinds of the structure\nshown in \\cref{prop:kind-structure}.\n\nThe fact that retractions preserve the fixed point property is\nwell-known in the context of algebraic topology: see\n~\\cite[Exercise 4.7]{fulton1995algebraic} or\n~\\cite[Proposition 23.9]{bredon1993topology} for example. While retractions\nbetween datatypes are a common tool in theoretical computer science\n(see e.g. \\cite{stirling}), we have been unable to find a version of\n\\cref{prop:retracts-fixed-points} in the computer science\nliterature. 
Nonetheless, we suspect this to be widely known.\n", "meta": {"hexsha": "dadf1872e34851fbacfbbc1e60bef92f5e98bab6", "size": 6507, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/unraveling-recursion/Fixpoints.tex", "max_stars_repo_name": "AriFordsham/plutus", "max_stars_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1299, "max_stars_repo_stars_event_min_datetime": "2018-10-02T13:41:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T01:10:02.000Z", "max_issues_repo_path": "papers/unraveling-recursion/Fixpoints.tex", "max_issues_repo_name": "AriFordsham/plutus", "max_issues_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2493, "max_issues_repo_issues_event_min_datetime": "2018-09-28T19:28:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:31:31.000Z", "max_forks_repo_path": "papers/unraveling-recursion/Fixpoints.tex", "max_forks_repo_name": "AriFordsham/plutus", "max_forks_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 399, "max_forks_repo_forks_event_min_datetime": "2018-10-05T09:36:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T11:18:25.000Z", "avg_line_length": 49.6717557252, "max_line_length": 132, "alphanum_fraction": 0.7224527432, "num_tokens": 1948, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.5599355132934815}}
{"text": "% \n% firstly set up on Jan 2012\n%\n%\n%\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{HGP Method}\n%\n%\n%\n%\nHGP method\\cite{HGP} is a revised method based on OS\\cite{OS1986} framework.\nIt's central idea is to employ the horizontal recurrence relation so that \nto take the contraction of coefficients into consideration. \n\nBefore we step into the discussion, let's define some expressions. The general\nun-contracted integral, for example; the electron repulsion integral is\nexpressed as:\n\\begin{equation}\n (ab|cd) = \\int dr \\int dr^{'} \\chi_{a}(r)\\chi_{b}(r)\\frac{1}{|r-r^{'}|}\n\\chi_{c}(r^{'})\\chi_{d}(r^{'})\n\\label{HGP:1}\n\\end{equation}\nThis is same with the definition in \\ref{OS_ERI_eq:1}. As we have noted in the\nprevious section that the integral are actually based on shell so we only need\nto specify the shells of a, b, c, d. Therefore, the integral in the \\ref{HGP:1}\nis also called ``shell quartet''.\n\nFor the contracted integrals, we express it as:\n\\begin{align}\n \\label{HGP:2}\n[ij|kl] &=\n\\sum_{\\mu}^{K_{\\mu}}\\sum_{\\nu}^{K_{\\nu}}\\sum_{\\lambda}^{K_{\\lambda}}\n\\sum_{\\eta}^{K_{\\eta}}c_{\\mu i}c_{\\nu j}c_{\\lambda k}c_{\\eta l} \\nonumber \\\\\n&\\int dr \\int dr^{'} \\chi_{\\mu}(r)\\chi_{\\nu}(r)\\frac{1}{|r-r^{'}|}\n\\chi_{\\lambda}(r^{'})\\chi_{\\eta}(r^{'})\n\\end{align}\nThe Greek letters are the indices over AO space, the i,j,k,l are indices over\nthe basis set function space. $K$ denotes the contraction degree.\n \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Horizontal Recurrence Relation}\nIn the OS framework, we already derived the recursive relation for the electron\nrepulsion integral:\n\\begin{equation}\n \\begin{split}\n ((a+\\iota_{i})b|cd)^{(m)} &= (P_{i} - A_{i})(ab|cd)^{(m)} +\n\\left(W_{i} -P_{i}\\right)(ab|cd)^{(m+1)} \\\\\n&+\\frac{N_{i}(A)}{2\\epsilon}\\left(((a-\\iota_{i})b|cd)^{(m)}-\\frac{\\rho}{\n\\epsilon }((a-\\iota_{i})b|cd)^{(m+1)}\\right)  \\\\\n&+\\frac{N_{i}(B)}{2\\epsilon}\\left((a(b-\\iota_{i})|cd)^{(m)}-\\frac{\\rho}{\n\\epsilon }(a(b-\\iota_{i})|cd)^{(m+1)}\\right)  \\\\\n&+\\left(\\frac{N_{i}(C)}{2}\\right)\\frac{1}{\\epsilon+\\eta}\n(ab|(c-\\iota_{i})d)^{(m+1)} \\\\\n&+\\left(\\frac{N_{i}(D)}{2}\\right)\\frac{1}{\\epsilon+\\eta}\n(ab|c(d-\\iota_{i}))^{(m+1)} \n \\end{split}\n\\label{HGP_ERI_eq:1}\n\\end{equation}\nHere we noted that in such relation, the shell $a$ and shell $b$ are in some\n``symmetrical'' position thus if we consider the expression of\n$(a(b+\\iota_{i})|cd)^{(m)}$, it's:\n\\begin{equation}\n\\begin{split}\n (a(b+\\iota_{i})|cd)^{(m)} &= (P_{i} - B_{i})(ab|cd)^{(m)} +\n\\left(W_{i} -P_{i}\\right)(ab|cd)^{(m+1)} \\\\\n&+\\frac{N_{i}(A)}{2\\epsilon}\\left(((a-\\iota_{i})b|cd)^{(m)}-\\frac{\\rho}{\n\\epsilon }((a-\\iota_{i})b|cd)^{(m+1)}\\right)  \\\\\n&+\\frac{N_{i}(B)}{2\\epsilon}\\left((a(b-\\iota_{i})|cd)^{(m)}-\\frac{\\rho}{\n\\epsilon }(a(b-\\iota_{i})|cd)^{(m+1)}\\right)  \\\\\n&+\\left(\\frac{N_{i}(C)}{2}\\right)\\frac{1}{\\epsilon+\\eta}\n(ab|(c-\\iota_{i})d)^{(m+1)} \\\\\n&+\\left(\\frac{N_{i}(D)}{2}\\right)\\frac{1}{\\epsilon+\\eta}\n(ab|c(d-\\iota_{i}))^{(m+1)} \n \\end{split}\n\\label{HGP_ERI_eq:2} \n\\end{equation}\nWe can see that most of the terms in \\ref{HGP_ERI_eq:2} are same with\n\\ref{HGP_ERI_eq:1}. 
Then if we subtract the two terms, it gives:\n\\begin{equation}\n \\label{HGP_ERI_HRR}\n (a(b+\\iota_{i})|cd)^{(m)} = ((a+\\iota_{i})b|cd)^{(m)} + \n(A_{i} - B_{i})(ab|cd)^{(m)}\n\\end{equation}\nThe new recursive relation in \\ref{HGP_ERI_HRR} is called ``horizontal\nrecurrence relation''(HRR). The relation in \\ref{HGP_ERI_eq:1} and \n\\ref{HGP_ERI_eq:2} are called ``vertical recurrence relation''(VRR). \nHere it has an important feature, that for the \\ref{HGP_ERI_HRR} \nwe could contract it so that to transform it into:\n\\begin{equation}\n \\label{HGP_ERI_eq:3} \n [a(b+\\iota_{i})|cd]^{(m)} = [(a+\\iota_{i})b|cd]^{(m)} + \n(A_{i} - B_{i})[ab|cd]^{(m)}\n\\end{equation}\nTherefore, the advantages for the HRR relation is that it reduces the\ncalculation inside the loop of primitive function pairs. For example, for the\n$[pp|pp]$ primitives, now we just need to calculate the $[ds|ds]$. Therefore,\nmuch less calculation is done with VRR.\n\nWhat's more, we note that the HRR relation in \\ref{HGP_ERI_HRR} is also applied \nto the other OS recursive relations. For example, for the three center overlap\nintegral whose relationship is:\n\\begin{equation}\n \\begin{split}\n (a+\\iota_{i}|b|c) \n&= (G_{Ai} - A_{i})(a|b|c) + \nN_{i}(A)\\left(\\frac{1}{2(\\alpha+\\beta+\\gamma)}\\right)(a-\\iota_{i}|b|c) \\\\\n&+ \nN_{i}(B)\\left(\\frac{1}{2(\\alpha+\\beta+\\gamma)}\\right)(a|b-\\iota_{i}|c) \\\\\n&+\nN_{i}(C)\\left(\\frac{1}{2(\\alpha+\\beta+\\gamma)}\\right)(a|b|c-\\iota_{i}) \n \\end{split}\n\\end{equation}\nIt's easy to see that the corresponding HRR is:\n\\begin{equation}\n\\label{HGP_ERI_eq:4}\n (a|b+\\iota_{i}|c) = (a+\\iota_{i}|b|c) + \n(A_{i} - B_{i})(a|b|c)\n\\end{equation}\nTherefore, the same formula retains.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{HGP Algorithm to Evaluate Integrals}\n%\n%\n%\nBased on the HRR, we have some new algorithm to evaluate the integrals(the\nreal calculation will be in reverse order):\n\\begin{description}\n \\item[HRR step:]\n For a general shell quartet of $[ab|cd]$, through the HRR it could be\n evidently reduced to the integrals of $[e0|f0]$; where for the $e$ the angular\n momentum would be the sum of $a$ and $b$, for the $f$ it would be sum of $c$\n and $d$.\n \\item[CONTRACTION step:]\n Perform the contraction from $(e0|f0)$ to $[e0|f0]$.\n \\item[VRR step:]\n Calculate the primitive integral of $(e0|f0)$ in VRR. Each of such integrals\n will be reduced to the $(ss|ss)^{(m)}$, while for $(ab|cd)$ $m$ ranges from\n 0 to $a+b+c+d$. \n \\end{description}\n\nHow to evaluate the HGP algorithm? By using the HRR, a lot of calculations\nwithin the loop of primitives pairs now could be moved out of the contraction\nloop (this loop is scaled with $K^{4}$). However, in HRR the angular momentum\nincreases in the right hand side so it implies that more shell quartets\nare needed to be generated; which is sacrifice in terms of the balance. \n\nNow let's give an example to see how the HGP works. 
Suggest that for the \nshell quartet of $[FF|FF]$, by using the HRR we can have:\n\\begin{enumerate}\n \\item $[FF|FF]$ is reduced to $[GD|FF]$ and $[FD|FF]$;\n \\item $[GD|FF]$ is reduced to $[HP|FF]$ and $[GP|FF]$;\n \\item $[FD|FF]$ is reduced to $[GP|FF]$ and $[FP|FF]$;\n \\item $[HP|FF]$ is reduced to $[IS|FF]$ and $[HS|FF]$;\n \\item $[GP|FF]$ is reduced to $[HS|FF]$ and $[GS|FF]$;\n \\item $[FP|FF]$ is reduced to $[GS|FF]$ and $[FS|FF]$\n\\end{enumerate}\nTherefore, there are only four elementary shell quartets, namely the \n$[FS|FF]$, $[GS|FF]$, $[HS|FF]$ and $[IS|FF]$. On the other hand, similar\ntreatment to the ket side will yields $[FF|FS]$, $[FF|GS]$, $[FF|HS]$ and\n$[FF|SI]$ therefore only their combinations are needed:\n\\begin{enumerate}\n \\item $[FS|FS]$, $[FS|GS]$, $[FS|HS]$, $[FS|IS]$\n \\item $[GS|FS]$, $[GS|GS]$, $[GS|HS]$, $[GS|IS]$\n \\item $[HS|FS]$, $[HS|GS]$, $[HS|HS]$, $[HS|IS]$\n \\item $[IS|FS]$, $[IS|GS]$, $[IS|HS]$, $[IS|IS]$\n\\end{enumerate}\nThen in the VRR step we could calculate all of these integrals.\n\n\n", "meta": {"hexsha": "b15e0c34a4e035973e81ec3fe06cc67d4f4e47b5", "size": 7079, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "algorithm/technic/integral/hgp.tex", "max_stars_repo_name": "murfreesboro/fenglai-note", "max_stars_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-16T07:23:48.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-16T07:23:48.000Z", "max_issues_repo_path": "algorithm/technic/integral/hgp.tex", "max_issues_repo_name": "murfreesboro/fenglai-note", "max_issues_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "algorithm/technic/integral/hgp.tex", "max_forks_repo_name": "murfreesboro/fenglai-note", "max_forks_repo_head_hexsha": "7bdf943f681e54948cd68775a31e4c93a53a13f8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.4514285714, "max_line_length": 80, "alphanum_fraction": 0.6283373358, "num_tokens": 2552, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079208, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5598874290232694}}
{"text": "\\chapter{Standardization}\n\n\\section{Problem Statement}\nThe standardization process is a process to convert the instrumental values to physical values. In CCD, what is being read is the potential ($ \\propto $ number of electrons $ \\propto $ flux). But what does a CCD pixel value mean? $ 1 $ ADU can mean $ \\SI{1}{Jy} $ at one CCD but at different one it can mean $ \\SI{5}{Jy} $ because it is designed to be insensitive to photons for some reason. Thus, what astronomers do is\n\n\\begin{enumerate}\n\\item Make a list of objects which have known flux (e.g., star A has spectrum of blahblah, and it has $ V_\\mathrm{std} $ magnitude or flux $ I_\\mathrm{std} $ in the V-band). These stars are called \\textbf{standard stars}.\n\\item Observe the target and the standard stars simultaneously. If they cannot be in the same field of view, observe them at the same night when airmasses are not too different and weather is not changed largely.\n\\end{enumerate}\nNow the power of CCD comes in: It's highly linear, i.e., the pixel counts of $ N $ (of the target of interest) and $ N_\\mathrm{std} $ are very much proportional to the original flux, $ I $ and $ I_0 $. Thus, you can use Pogson's formula, because what it requires is only the ratio of flux: \n\\begin{equation}\\label{eq: Pogson}\n  V - V_\\mathrm{std} \n    = -2.5 \\lg \\frac{I}{I_\\mathrm{std}} \n    =  -2.5 \\lg \\frac{N}{N_\\mathrm{std}} ~.\n\\end{equation}\n\n\\begin{ex}[Simplest Standardization]\nIf the aperture photometry gave pixel count of $ 1000 $ for a standard star of $ V_\\mathrm{std} = \\m{10.00} $ and the object had pixel count of $ 500 $, the above formula will give $ V = \\m{10.75} $. \n\\end{ex}\n\nIn practice\\footnote{I extensively refered to Ch. 6 of ``A Practical Guide to Lightcurve Photometry and Analysis'' by Brian D. Warner, 2e in this subsection.}, especially when stars with catalogued magnitude are not in the same field of view as the target of interest, it is very difficult to use these formulae. In such a case, we have two FITS images to compare: one for target and one for standard star (usually far away from the target). A direct comparison of $ N $ and $ N_0 $ is difficult because\n\\begin{enumerate}\n\\item The atmosphre exists. The magnitude we observe on ground is different from the one we would have observed outside of the atmosphere (space). This gives the $ k' $ and $ k'' $ terms in \\cref{eq: std} below.\n\\item The CCD is not ideally simple. For example, if it is more sensitive to redder wavelength, making red stars brighter than they should be. 
This gives the $ k $ term in \\cref{eq: std} below.\n\\end{enumerate}\nIf all these are considered with proper approximations (derived later in this chapter), we can obtain the following second-order approximation of the standard magnitude of an object seen on CCD:\n\\begin{equation}\\label{eq: std}\n\\begin{aligned}\n  M_f &= m_f + (\\mathrm{effect\\ of\\ atmosphere}) + (\\mathrm{effect\\ of\\ CCD}) \\\\\n    &\\approx m_f - k_f' X - k_f''XC + z_f + k_f C \\\\\n    &\\equiv m_{0f} + z_f + k_f C ~,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n  m_{f} \\equiv m_{0f} + k_f'X + k_f'' XC\n\\end{equation}\nand\n\\begin{itemize}\n\\item $ f $: The filter (V, B, g', etc).\n\\item $ X $: airmass (the simplest approximation is the secant of zenith angle, $ \\sec Z $).\n\\item $ M_f $: The \\emph{standard} apparent magnitude (or the \\emph{true} apparent magnitude) at filter $ f $.\n\\item $ m_f $: The \\emph{instrumental} magnitude ($ m_f = -2.5 \\lg N $).\n\\item $ m_{0f} $: The extra-atmospheric magnitude ($ m_f $ value we would have obtained if we were in space $ X = 0 $).\n\\item $ C $: The \\emph{true} color index\\footnote{Not necessarily include filter $ f $, but it is better that the wavelength ranges of the selected two filters ``contain'' the range of $ f $ for interpolation purpose. Also in many classical literatures, you will see the $ C $ in this equation is the \\textit{observed} color, not the true color. This is just a matter of preference, because anyway it's used as a crude approximation factor, as we will see soon.}, e.g., $ B - V $ or $ r' - i' $.\n\\item $ k_f' $: The first order extinction coefficient at filter $ f $.\n\\item $ k_f'' $: The second order extinction coefficient at filter $ f $.\n\\item $ z_f $: The zero point at filter $ f $.\n\\item $ k_f $: The system transform coefficient at filter $ f $.\n\\item Note: lower- and upper-cased letters are used for the \\emph{instrumental} and \\emph{true} magnitudes, respectively\\footnote{For example, $ v $, $ b $, $ m_{g'} $ are instrumental magnitudes of an object and $ V $, $ B $, and $ M_{g'} $ are true appparent magnitudes of it.}.\n\\end{itemize}\n\n%The example I showed with \\cref{eq: Pogson} is the case when $ k_f = k_f' = k_f'' = 0 $, or equivalently $ k_f = 0 $ and $ X = 0 $. In such a case, the true apparent magnitude is $ V = v + z_V = -2.5 \\lg N + z_V $ where $ v $ is the instrumental magnitude and $ z_V $ is just a constant\\footnote{}. Same goes true for the standard star, so $ V_0 = v_0 + z_V = -2.5 \\lg N_0 + Z_V $. Then \\cref{eq: Pogson} makes sense and $ V- V_0 = v - v_0 $. If the coefficients are non-zero, you can see that \n\nIf the subscript std again means the value for the standard star, from \\cref{eq: std}:\n\\begin{equation}\n\\begin{aligned}\n  V - V_\\mathrm{std}\n    &= (v - v_\\mathrm{std})\n    + k_V'(X - X_\\mathrm{std})\n    + k_V''(X C - X_\\mathrm{std} C_\\mathrm{std})\n    + k_V(C - C_\\mathrm{std})\n    + \\Delta z_V\\\\\n    &\\neq v - v_\\mathrm{std} ~.\n\\end{aligned}\n\\end{equation}\nSo the calculation given in the example is true only if the airmass of the object and standard star are identical AND the true color indices of them are identical. Otherwise, we cannot simply equate the right hand side of \\cref{eq: Pogson} ($ = v - v_\\mathrm{std} $) to the left hand side ($ = V - V_\\mathrm{std} $). In space, we can remove all the atmosphere related terms since $ X = 0 $. 
Also $ \\Delta z \\approx 0 $ is assumed (discussed in \\cref{ss: zeropt}), so only the $ k_f $ term remains. This is why space observation is powerful.\n\n\\section{Understanding the Standardization Formula}\nIn this section, I will discuss about the terms in \\cref{eq: std} with some realistic data and plots. At the same time, I will give a derivation of the equation. Many textbooks only give the former; I do not want to go against that trend, but I wanted more quantitative explanations and justifications of those at the same time. \n\n\\subsection{Atmospheric Extinction}\nThe atmospheric extinction is dependent on the wavelength as in \\cref{fig:air-ext-and-filter}. The extinction is expressed as mag/airmass, i.e., the extincted magnitude when airmass $ X = 1 $, i.e., ``$ m_f - m_{0f} $ at $ X = 1 $'' or $ k_f' + k_f''C $ in the language of \\cref{eq: std}. The extinction is severe at shorter wavelengths, and that is why the Sun looks redder when it rises or sets (i.e., when airmass is larger). \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figs/air-ext-and-filter}\n\\caption{The atmospheric extinction as a function of wavelength at Mauna Kea, \\textit{based on some 4285 standard star spectra obtained on 478 nights spread over a period of 7 years obtained by the Nearby SuperNova Factory using the SuperNova Integral Field Spectrograph.} (excerpt from The Nearby SuperNova Factory+ 2013, A\\&A, 549, 8). The SDSS and Johnson-Cousins filters' filter profiles are overplotted.}\n\\label{fig:air-ext-and-filter}\n\\end{figure}\n\nConsider an object with spectrum $ S_0(\\lambda) $, measured at space, is observed at an airmass of $ X $. The intensity at filter with profile $ f_f(\\lambda) $ without atmosphere is \n\\begin{equation}\n  \\int_{0}^{\\infty} S_0(\\lambda) f_f(\\lambda) d\\lambda \n  = \\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f_f(\\lambda) d\\lambda ~.\n\\end{equation}\nThis is because $ f_f(\\lambda) $ can be set as non-zero for $ \\lambda \\in (\\lambda_1, \\lambda_2) $ and 0 otherwise. If the spectrum undergoes atmospheric extinction described by optical depth of $ \\tau(\\lambda) $, the intensity after the filter throughput is\n\\begin{equation}\n  \\int_{\\lambda_1}^{\\lambda_2} \n    S_0(\\lambda) f_f(\\lambda) e^{-\\int \\tau(\\lambda) dX} d\\lambda \n  \\approx \n    \\int_{\\lambda_1}^{\\lambda_2} \n    S_0(\\lambda) f_f(\\lambda)  e^{-\\tau(\\lambda) X} d\\lambda ~.\n\\end{equation}\nHere, $ e^{\\int -\\tau(\\lambda) dX} \\approx e^{-\\tau(\\lambda) X} $ is used, and the integration is along the optical path along the atmosphere. When extinction is not severe ($ \\tau(\\lambda) X \\ll 1 $ for all $ \\lambda \\in (\\lambda_1, \\lambda_2)$), $ e^{-\\tau(\\lambda) X} \\approx 1 - \\tau(\\lambda) X $ (say ``approx 1''). Also when $ A \\ll 1 $, $ \\lg (1 - A) \\approx - A / \\ln 10 $ (say ``approx 2''). 
Combining these approximations, Pogson's formula states\n\\begin{equation}\n\\begin{aligned}\n  m_f - m_{0f}\n    &= -2.5 \\lg\n      \\qty( \\frac{\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) e^{-\\tau(\\lambda) X} d\\lambda}\n      {\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) d\\lambda} ) \\\\\n    \\mathrm{(using\\ approx\\ 1)}\n    &\\approx -2.5 \\lg \n      \\qty( 1 - \\frac{\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) \\tau(\\lambda) d\\lambda}\n        {\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) d\\lambda} X ) \\\\\n    \\mathrm{(using\\ approx\\ 2)}\n    &\\approx \\frac{2.5}{\\ln 10} \n      \\frac{\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) \\tau(\\lambda) d\\lambda}\n        {\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) d\\lambda} X ~.\n\\end{aligned}\n\\end{equation}\n\nA short digression here. Remembering $ I(\\lambda) = I_0(\\lambda) e^{-\\tau(\\lambda) X} $, we have extinction (magnitude)\n\\begin{equation*}\n  \\Delta m(\\lambda) \n    = -2.5 \\lg \\qty( \\frac{I(\\lambda)}{I_0(\\lambda)} ) \n    = 1.086 \\tau(\\lambda) X ~.\n\\end{equation*} \nThen the $ y $-axis of \\cref{fig:air-ext-and-filter} is $ 1.086 \\tau(\\lambda) \\approx \\tau(\\lambda) $. Thus, you can roughly understand that the $ y $-axis represents $ \\tau(\\lambda) $. Now you can visually confirm that $ \\tau(\\lambda) X \\ll 1 $ (``approx 1'') is reasonable. In cases such as short wavelengths (shorter than $ B $ or $ g $) and high airmass observations, this assumption may break down\\footnote{Also for longer wavelengths ($ \\lambda \\gtrsim 1 \\,\\mathrm{\\mu m} $, including JHK bands), this approximation breaks down severely.}. This is why classical photometric observers dislike observations at airmass $ X \\gtrsim 1.5\\mathrm{-}2 $ (elevation $ \\lesssim 48^\\circ \\mathrm{-} 30^\\circ$). For polarimetry, however, only the \\emph{ratio} of two (orthogonal) electric field intensities is important, so airmass does not matter\\footnote{For example, \\href{https://ui.adsabs.harvard.edu/abs/2018NatCo...9.2486I/abstract}{\\texttt{ItoT+2018, NatCo, 9, 2486}} demonstrated the polarization degree is not seriously affected by airmass ($ X = 1.03 \\mathrm{-} 7 $). The change in the polarization for $ X = 1 $ and 7 is at most 0.05 \\%p. This is because atmospheric scattering is basically a \\emph{forward} scattering, which should not induce any additional polarization degree (except for very small scattering angle multiple scatterings), although the total intensity should decrease.}. The error due to the approximation, however, may not be severe compared to other error sources (e.g., changing weather). \n\nNow coming back to the original logic, we want to further assume that $ \\tau(\\lambda) \\approx \\tilde{c}_1 + \\tilde{c}_2 \\lambda $ for $ \\lambda \\in (\\lambda_1, \\lambda_2) $. This is similar to approximating the black markers in \\cref{fig:air-ext-and-filter} within each filter as a line, because its $ y $-axis is nothing but $ 1.086 \\tau \\approx \\tau $. Then\n\\begin{equation}\n  m_f - m_{0f} \n    \\approx 2.5 \n    \\qty (c_1 + c_2\\frac{\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) \\lambda d\\lambda}\n      {\\int_{\\lambda_1}^{\\lambda_2} S_0(\\lambda) f(\\lambda) d\\lambda} ) X ~.\n\\end{equation}\nHere, $ c_1 $ and $ c_2 $ are also constants. 
If the filter is fixed (e.g., $ V $-band or SDSS $ g' $ filter, etc.), the only unknown thing in the second term in the parentheses is $ S_0(\\lambda) $, i.e., the spectral shape. If it is a black body spectrum, the shape of $ S_0 $ is determined uniquely once the color index $ C $ is known. Even if it is not a perfect black body, it is reasonable to assume that the spectral shape, $ S_0(\\lambda) $, and the color index, $ C $, have a \\textit{nearly} one-to-one relationship\\footnote{For example, if you look at the color-color diagram of stars, A0, F0, and G0 stars all share similar $ U - B $ colors, i.e., color index and spectral shape are not one-to-one. This happens because (1) star spectra are not perfect black bodies and (2) the filter profile is not ``flat'' as a function of $ \\lambda $. But to the first-order approximation, it is acceptable.}. Fortunately, most widely used color indices (such as $ B - V $ or gri colors) are more or less one-to-one for \\emph{many} (but not all) cases. Thus, $ C $ is an indicator of $ S_0(\\lambda) $, so the second term is roughly a function of $ C $, say $ c_2 \\tilde{S}(C) $. Note that this logic does not change even if $ C $ were the \\textit{observed} color index (not the \\textit{true} color index), as I mentioned earlier. In a theoretical development like the one we are doing here, the choice of the true color index makes the concept easier to understand. In practical observation, however, it is beneficial to use the observed color index (as you may see in much of the literature), because that is what you get from the observation, and the true value is what is unknown.\n\nThe final assumption we make here is that the second term is $ c_2 \\tilde{S} (C) \\approx c_{3f} + c_{4f} C $ as the first-order approximation. Here $ c_{3f} $ and $ c_{4f} $ have subscript $ f $ because they depend on the \\textit{filter} profile, but not on the spectral shape under our simplifying assumptions, because all the dependency on the spectral shape is absorbed into $ C $. Then\n\\begin{equation}\n  m_f - m_{0f} \n  \\approx 2.5 (c_1 + c_{3f} + c_{4f} C) X \n  \\equiv k_f' X + k_f'' CX\n\\end{equation}\nThese are the origins of $ k_f' $ and $ k_f'' $ in \\cref{eq: std}.\n\nTo illustrate the result, I used the SDSS filter system, as shown in \\cref{fig:air-ext-bbrad}, and calculated in the following table how much magnitude extinction happens depending on the black body temperature at airmass $ X = 1 $. Note the color $ C $ can be any gri color, such as $ r' - i' $, and it is determined once the blackbody temperature is given.\n\n\\begin{table}[ht!]\n\\centering\n  \\begin{tabular}{c||ccc}\n   & \\multicolumn{3}{c}{Extinction magnitude} \\\\\n   Blackbody & $ m_g' - m_{0 g'} $ & $ m_r' - m_{0 r'} $ & $ m_i' - m_{0 i'} $ \\\\\n  Temperature & $ = k_{g'}' + k_{g'}'' C $ & $ = k_{r'}' + k_{r'}'' C $ & $ = k_{i'}' + k_{i'}'' C $ \\\\\n  \\hline\n  $ \\SI{3000}{K} $  & $ \\m{0.142} $ & $ \\m{0.081} $ & $ \\m{0.034} $ \\\\\n  $ \\SI{6000}{K} $  & $ \\m{0.158} $ & $ \\m{0.084} $ & $ \\m{0.035} $ \\\\\n  $ \\SI{20000}{K} $ & $ \\m{0.171} $ & $ \\m{0.087} $ & $ \\m{0.036} $ \\\\\n  \\end{tabular}\n\\end{table}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figs/air-ext-bbrad}\n\\caption{A black body radiation spectrum ($ T = \\SI{6000}{K} $) before and after the extinction at SDSS bands. Note that the y-axis is changed to \\% per airmass (cf. \\cref{fig:air-ext-and-filter}) by $ 10^{-0.4 \\Delta m} $.}\n\\label{fig:air-ext-bbrad}\n\\end{figure}\n
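\nThe trend in the table above can be reproduced in spirit with a few lines of code; the following sketch (ours) uses a \\emph{hypothetical} linear $ \\tau(\\lambda) $ and a flat filter profile rather than the real SDSS throughputs, so only the qualitative behavior should be trusted:\n\\begin{verbatim}\nimport numpy as np\n\nh, c, kB = 6.626e-34, 2.998e8, 1.381e-23\n\ndef planck(lam, T):\n    # blackbody spectral shape (arbitrary normalization)\n    return lam**-5.0 / np.expm1(h * c / (lam * kB * T))\n\nlam = np.linspace(400e-9, 550e-9, 500)   # a g'-like wavelength window\ntau = 0.4 - 0.5e6 * (lam - 400e-9)       # hypothetical linear tau(lambda)\n\nfor T in (3000, 6000, 20000):\n    S0 = planck(lam, T)\n    ratio = (S0 * np.exp(-tau)).sum() / S0.sum()   # flat filter, X = 1\n    print(T, round(-2.5 * np.log10(ratio), 3))\n# hotter (bluer) blackbodies suffer slightly stronger extinction\n\\end{verbatim}\n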
\nAs can be seen, \\emph{the extinction is stronger for higher temperature (lower color index)}, so you can guess $ k_f'' < 0 $. However, the difference in extinction between the objects gets smaller as we look at longer wavelengths. Thus, you can also guess $ k_f'' \\sim 0 $ (except for short wavelengths, e.g., $ B $ or $ u $).\n\nThese facts can also be understood qualitatively. The higher the temperature, the larger the fraction of energy the spectrum has at shorter wavelengths. Considering that the atmospheric extinction is stronger at shorter wavelengths (see black markers in \\cref{fig:air-ext-bbrad}), a high temperature object will get more ``penalty'' when its light comes through the atmosphere. That is why a high temperature object is more strongly extincted, i.e., $ k_f'' $ is negative. At the same time, the amount of extinction drops significantly at the wavelengths of the $ r $ and $ i $ bands, making $ k_f'' $ itself very small. \n\n\\texttt{SmithJA+(2002)}\\footnote{\\url{https://ui.adsabs.harvard.edu/abs/2002AJ....123.2121S/abstract}} determined the coefficients for the SDSS filters at the 1.0 m Ritchey--Chr\\'{e}tien telescope at the USNO Flagstaff Station, from observations in 1998--2000 (see \\cref{tab: SDSS ext}). The $ k_f' $ value fluctuates considerably from night to night (see Fig. 6 of the original publication), so I just took representative values from visual inspection. They mention that the $ k_f'' $ values were obtained by two independent pipelines, \\texttt{superExcal} (method 1) and \\texttt{solve\\_network} (method 2). Both are quite consistent except for the $ u $ filter. The color $ C $ used for each filter is the nearby filter color: $ u'-g' $ for $ u' $, $ g'-r' $ for $ g' $, etc., and $ i' - z' $ for both $ i' $ and $ z' $. The atmospheric extinction coefficients $ (k_f',\\, k_f'') $ all change as a function of time. We just hope they are reasonably constant during our observations of our targets and standard stars. From experience, we know $ k_f'' $ is always very small for filters at wavelengths longer than the u or B band, but it is not necessarily ignorable, because $ k_f'' XC $ might be larger than the accuracy you want to obtain. \n\n\n\\begin{table}[ht!]\n\\caption{The extinction coefficients of SDSS from \\texttt{SmithJA+(2002)}.}\n\\label{tab: SDSS ext}\n\\centering\n  \\begin{tabular}{c||lllll}\n  Parameter  & u$ ' $ & g$ ' $ & r$ ' $ & i$ ' $ & z$ ' $ \\\\\n  \\hline\n  $ k_f' $ & $ > +0.5 $ & $ +0.20 \\pm 0.05 $ & $ +0.10 \\pm 0.05 $ & $ +0.05 \\pm 0.05 $ & $ +0.05 \\pm 0.05 $\\\\\n  $ k_f'' $ method 1 \n    & $ -0.021 \\pm 0.003 $\n    & $ -0.016 \\pm 0.003 $ \n    & $ -0.004 \\pm 0.003 $\n    & $ +0.006 \\pm 0.003 $ \n    & $ +0.003 \\pm 0.003 $ \\\\\n  $ k_f'' $ method 2\n    & $ -0.032 $ \n    & $ -0.015 $\n    &  $ 0.000 $\n    & $ +0.005 $\n    & $ +0.006 $\n  \\end{tabular}\n\\end{table}\n\n\n%Atmosphere diminishes the flux of the object. For an optical depth of $ \\tau $, an object with initial flux $ I_0 $ will be observed as $ I = I_0 e^{-\\tau} $ and its magnitude will be increased (because flux is decreased) by $ \\Delta m = -2.5 \\lg (I / I_0) = \\frac{2.5}{\\lg e} \\tau = 1.086 \\tau $. But $ \\tau = \\int n \\sigma dl $ for the number density of particles $ n $, the extinction cross-section $ \\sigma $, and traveling distance $ l $. The total traveling distance, $ L $, is $ L_0 / \\cos z = L_0 X $ where $ X $ is the airmass. 
Thus, a simple approximation that $ \\tau \\propto L $ will result in $ \\Delta m \\propto X $. \n\n%The real story is more complicated: This extinction is of course wavelength-dependent (\\cref{fig:air-ext-and-filter}). The black markers show the extinction magnitude per airmass. At zenith, $ X = 1 $ by definition, so this means that $ \\Delta m \\sim \\m{0.2} $ at $ \\lambda = \\SI{4000}{\\AA} $ but is $ \\Delta m \\ll \\m{0.1} $ for $ \\lambda = \\SI{8000}{\\AA} $. Therefore, the extinction is not simply proportional to $ X $, but the proportionality is a function of wavelength.\n\n%\\begin{thm}[Atmospheric Extinction: 1st Order]\n%The extinction due to the atmosphere has the following 1st order term:\n%\\begin{equation*}\n%  \\Delta m(\\lambda) = k'(\\lambda) X ~,\n%\\end{equation*}\n%where $ k'(\\lambda) $ is the first-order extinction coefficient and is a function of wavelength $ \\lambda $. In photometry, we are dealing with filters rather than each single wavelength, so normally we denote \n%\\begin{equation}\\label{eq: air-ext 1st ord}\n%  \\Delta m_f = k_f' X ~.\n%\\end{equation}\n%\\end{thm}\n \n\n%Consider a black body spectrum is underwent this atmospheric extinction: \\cref{fig:air-ext-bbrad}. The extinction is of course a function of wavelength. \n\n%In photometry, we are interested in the total number of photons \\textit{after} multiplied with the filter profile: $ \\int_{\\lambda_1}^{\\lambda_2} S(\\lambda) f(\\lambda) d\\lambda $ where $ S(\\lambda) $ is the spectrum and $ f(\\lambda) $ is the filter profile. Thus, the magnitude change before and after the atmospheric extinction is, if the extinction is $ E(\\lambda) $,\n%\\begin{equation}\n%  \\Delta m = \n%    -2.5 \\lg \\qty (\\frac{\\int_{\\lambda_1}^{\\lambda_2} E(\\lambda) S(\\lambda) f (\\lambda) d\\lambda}{\\int_{\\lambda_1}^{\\lambda_2} S(\\lambda) f(\\lambda) d\\lambda} )\n%\\end{equation} \n%Because it contains a ratio of the integration, it is not simple to calculate $ \\Delta m $. From target to target, what changes is, except for the airmass (Note that $ E(\\lambda) = e^{-\\tau} \\propto \\sim e^X $), the $ S(\\lambda) $. Thus, $ \\Delta m = \\Delta m (X, S) $. For a black body, this is a function of the temperature, and the temperature has (roughly) one-to-one counterpart of the color index. Thus, if the spectral shape is assumed as black body, we can write $ \\Delta m = \\Delta m (X, C) $ where $ C $ is the true color index. Even if it is not a black body, it is sufficient if the spectral shape and color index have \\textit{nearly} one-to-one relationship. The extinction due to the spectral shape should also be proportional to the airmass to the first order approximation. Thus, as an approximation, we add a term proportion to $ CX $ to  \\cref{eq: air-ext 1st ord}:\n\n%\\begin{thm}[Atmospheric Extinction: 2nd Order]\n%The extinction due to the atmosphere is:\n%\\begin{equation}\n%  \\Delta m(\\lambda) = k'(\\lambda) X + k''(\\lambda) C X ~,\n%\\end{equation}\n%where $ k''(\\lambda) $ is the second order atmospheric extinction coefficient.\n%In photometry, we are dealing with filters rather than each single wavelength, so normally we denote \n%\\begin{equation}\\label{eq: air-ext 2nd ord}\n%  \\Delta m_f = k_f' X + k_f'' C X ~.\n%\\end{equation}\n%\\end{thm}\n\n\n\\begin{ex}[Ignoring the Second Order Extinction Term]\nConsider an observation at $ X = 2 $ of a $ C = 0.2 $ star. If you ignore the second order extinction term, you are introducing an uncertainty of $ k_f'' XC = 0.4 k_f'' $. 
According to \\cref{tab: SDSS ext}, this is most likely smaller than 0.01 magnitude. \n\nThe Sun has $ C = g' - r' \\sim 0.5 $, and then the error for a Sun-like star becomes $ 1.0 k_f'' $: it is $ < \\m{0.01} $ for the r, i, and z bands, and about $ 0.02\\mathrm{-}0.03 $ mag for $ u' $.\n\nThe red M0 stars have $ C = g' - r' \\lesssim 1.5 $, and the error is now up to $ 3.0 k_f'' $. The accuracy of $ \\m{0.01} $ can be achieved in the riz bands, but it is risky.\n\\end{ex}\n\nThe calculation above is only for the SDSS observatory at an altitude of 2.3 km. But fortunately, \\texttt{BuchheimB(2005)}\\footnote{\\url{https://ui.adsabs.harvard.edu/abs/2005SASS...24..111B/abstract}} found that $ k_f'' \\lesssim 0.005 $ for the V band even at many low-altitude observatories (including the Bochum observatory at altitude 200 m and the Vainu Bappu Observatory at altitude 700 m), so we likely expect the correction from the second-order extinction term to be small enough.\n\n\n\\subsection{Transformation Coefficient}\nThe sensitivity of the optics, such as filter, lens, mirror, and/or CCD cover glass, is also a function of $ \\lambda $. The argument is identical to that for atmospheric extinction, but there is no $ X $ (a similar parameter would be something like the optical depth of the materials in front of each CCD pixel, but that should be a device-dependent constant). Then the same logic leads us to the conclusion that there should be a color term which tunes the final output of the CCD count, and that is the $ \\tilde{k}_f \\times c $ (``transformation'') term. Here, $ c $ is the color index of the object after all the atmospheric extinction, i.e., the color it truly has just before entering the telescope optics. Once we assume there exists a function $ f_c $ such that $ f_c(C) = c $ is a one-to-one function and linearly approximate $ c = f_c(C) \\approx \\tilde{c}_5 + \\tilde{c}_6 C $ for constants $ \\tilde{c}_5 $ and $ \\tilde{c}_6 $, the transformation term becomes $ \\tilde{k}_f c = k_f C + \\tilde{c}_{7f} $ for a filter-dependent constant $ \\tilde{c}_{7f} $ (with $ k_f \\equiv \\tilde{k}_f \\tilde{c}_6 $). This constant is finally absorbed into another filter-dependent constant, called the zero point, $ z_f $. Therefore we reach \\cref{eq: std}. Even now, the logic holds identically if we had selected $ C $ to be the observed color, not the true color. \n\nThe transformation coefficient, which I denoted $ k_f $, is fortunately nearly constant for a given device. Warner argues that it is enough to update the $ k_f $ (Warner uses the notation $ T_f $) value only about 2--4 times a year, unless you physically change the device elements (e.g., filter, CCD, lens, etc.). Moreover, from experience, we know that this is nearly zero: $ |k_f| \\lesssim 0.1 $, and in many cases $ |k_f| \\lesssim 0.01 $. Since the range of color indices is $ \\mathrm{max}(\\Delta C) \\lesssim \\m{1} $, we have $ |k_f C| \\lesssim 0.1 $, and in many cases, $ |k_f C| \\lesssim 0.01 $.\n\n\n\\subsection{A Note on Linearity} \\label{ss: linearity note}\nAs we noted at the beginning of this chapter, the CCD is highly linear. That means $ N = g N_e = \\alpha N_\\gamma $, where $ N $ is the pixel count (after \\emph{bias} and \\emph{dark} subtraction), $ N_e $ is the total number of photo-electrons, $ g $ is the electronic gain of the CCD (a constant, in units of counts per electron), and $ N_\\gamma $ is the number of photons incident on the CCD out of the true photon number $ N_\\mathrm{\\gamma 0} $. No higher-order terms, no other constants. There is no CCD as of 2021 that returns $ N = \\alpha' N_\\gamma^2 $, etc. 
The $ \\alpha $ value may differ from pixel to pixel due to the inhomogeneity of the optics or CCD pixels, but these are homogenized by the so-called \\emph{flat fielding}, so here I can safely say it is strictly constant over all the pixels. \n\nAny kind of extinction (atmosphere or optics in front of the CCD) is multiplicative only to the linear term, i.e., $ N_\\gamma = N_{\\gamma 0} \\times \\mathrm{something_1} $ (e.g., $ e^{-\\tau} $). There are neither higher-order terms like $ N_{\\gamma 0}^2 $ nor an additive constant. Therefore, $ N = \\mathrm{something_2} \\times N_\\mathrm{\\gamma 0} $ and the instrumental magnitude\n\\begin{equation}\n  m_f \n    := -2.5 \\lg N \n    = \\mathrm{something_3} - 2.5 \\lg N_{\\gamma 0}\n    \\equiv \\mathrm{something_3} + M_f ~.\n\\end{equation} \nThus, thanks to the linearity of the CCD, we have \\textbf{no additional coefficient \\emph{multiplied}} in front of $ m_f $ or $ M_f $. If, for example, $ N $ were $ \\alpha N_\\gamma + \\alpha' $, or there were other terms in the extinction (proportional to $ N^2 $ or a constant radiation from the optics), this simple relation wouldn't hold.\n\n%In reality, the broad-band photometry (the  has some tricky problem. For a simple illustration, assume (1) we don't have atmosphere, (2) have a flat throughput: $ f(\\lambda) = 1 $ for $ \\lambda = [\\lambda_1, \\lambda_2] $ and zero otherwise, (3) CCD and optics has no wavelength-dependency in its sensitivity. We observed two stellar black-body spectra of temperatures $ T_a $ and $ T_b $ ($ T_a < T_b $). Because of different radii of or distance to the stars, consider they are shooting identical number of photons per time to Earth, i.e., $ \\int_{\\lambda_1}^{\\lambda_2} S(\\lambda) / (hc / \\lambda) d\\lambda $ is the same for two stars. The CCD will produce the same number of photo-electrons when it is hit by, e.g., 100 photons of $ \\lambda_1 $ or 100 photons of $ \\lambda_2 $. Thus, the two stars will appear to have identical brightness.\n\nTo emphasize, \\emph{you should not worry about whether to \\textbf{multiply} something in front of $ M_f $ or $ m_f $ to satisfy \\cref{eq: std}}. Their coefficients \\emph{must be unity}. From our experience, most observational experts and electrical engineers would say that you should only care about this if you are sure that some parts of the optics have serious problems (e.g., your CCD has undergone a serious problem and shows non-linearity). \n\n\\subsection{A Note on Zero Point} \\label{ss: zeropt}\nThe zero point $ z_{f} $ is a constant to convert the instrumental magnitude (which is nothing but $ -2.5 $ multiplied by $ \\lg (\\mathrm{count}) $) to the realistic standard magnitude system astronomers have been using. In the intensity sense, this ``addition of a constant'' in magnitude represents a ``multiplication of a constant'' to the instrumental count to measure the intensity. 
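\n\nTo make this bookkeeping concrete, below is a minimal Python sketch of the instrumental-to-standard conversion; the count and zero-point values are hypothetical, chosen only to match the worked example below (extinction and color terms are ignored here).\n\\begin{verbatim}\nimport numpy as np\n\n# Minimal sketch: the zero point z_f shifts the instrumental\n# magnitude (a pure -2.5*log10 of the count rate) onto the\n# standard system. All numbers are hypothetical.\ncounts = 1.0e5        # sky-subtracted integrated counts\nexptime = 100.0       # seconds\nm_inst = -2.5 * np.log10(counts / exptime)  # = -7.5\nz_f = 22.5            # hypothetical zero point\nM_std = m_inst + z_f  # = 15.0\nprint(m_inst, M_std)\n\\end{verbatim}\n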
\n\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% add photon/sec for 0-mag star from Ishiguro's notebook and do realistic calculation\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\nWhat is a typical value of the zero point? For telescopes with diameter $ \\gtrsim \\SI{1}{m} $ in Seoul, let's say we are observing objects of around $ \\m{15} $ with a 100-second exposure. Say the sky-subtracted peak value of our target is\\footnote{Because CCDs are usually operated in 16-bit unsigned integer mode, which represents $ 0 $ to $ 65,535 $, and non-linearity appears around $ 40,000\\mathrm{-}60,000 $ ADU, the sky-subtracted peak pixel value of an intermediate-brightness object in the FOV is of order $ \\sim 10^4 $} $ \\sim 10^4 $, the integral of the profile gives an intensity of $ \\sim 10^{5} $, and the intensity per unit time is $ 10^3 $ (dividing by the exposure time). The instrumental magnitude is $ m \\sim -2.5 \\lg 10^3 = -7.5 $. This means $ z_f \\sim 22.5 $. If we do a realistic calculation, $ z_f $ is mostly within a range of 20 to 25. This is why, in IRAF, the default initial guess of $ z_f $ is 25 mag.\n\nThe zero point must be a constant in the simple model dealt with in this chapter, unless the device is affected by an external disturbance. In reality, the zero point does fluctuate at each exposure, because of the imperfect readout process of the CCD electronics. $ \\Delta z_V \\approx 0 $ is assumed in this chapter. Although you may be uncomfortable with this, it is what is assumed even in professional space-telescope data-reduction processes, if that fact makes you more comfortable.\n\n\\section{Standardization Applied to Photometry}\nNow that we have justified the usage of \\cref{eq: std}, let's look at the cases where it applies. The simplest case is \\textit{differential photometry}. \n\nIf there are many celestial objects (of course including your target) in the field of view with known standard magnitudes, we can use them as standard stars. Although there can be variable stars and galaxies\\footnote{Galaxies can have spectra significantly different from those of black bodies. Therefore, the coefficients $ k_f $ and $ k_f'' $, which are approximations of the spectral shape (i.e., not $ k_f' $), should be different from those derived from standard \\textit{stars}, which are black bodies to the first order. But mostly this effect is not serious because, as we discussed before, both the $ k_f C $ and $ k_f'' X C $ terms are anyway very small. For this reason, people use $ k_f $ and $ k_f'' $ derived from standard stars for their target galaxies (or any non-black-body-like spectra). If you really worry about this, you must conduct spectroscopic observation, not broad-band photometry.}, if most of the objects with known magnitudes are non-variable stars, those \\textit{outliers} will be smoothed out. 
Thus, we simply treat all the celestial objects in the field of view with known standard magnitudes as ``standard stars''. \n\nThis technique is very widely used in variable-star and asteroidal light-curve observations. It is more widely used than absolute photometry, because it is annoying and difficult to observe standard stars at different airmasses while observing your target, which requires telescope time and human labor. In asteroidal science, even single-filter differential photometry is frequently used. \n\n\\subsection{Differential Photometry: Single-Filter}\n\nConsider that there are many stars in the FOV with known (catalogued) magnitudes at multiple filters, so that the true apparent magnitude $ M_f $ and color $ C $ are known. Say, from the photometry, we could determine the instrumental magnitudes of the stars, $ m_f $. Rearranging \\cref{eq: std}:\n\\begin{equation} \\label{eq: zeropt}\n  M_f - m_f = (z_f - k_f' X) + (k_f - k_f'' X) C \\equiv a_f (X) + b_f (X) C\n\\end{equation}\nThe first term on the RHS, $ a_f (X) $, is a constant for all the stars in the same FITS frame, as they will have nearly identical airmass\\footnote{If you observed near the horizon, $ \\Delta X $ across the FOV may not be negligible. For accurate photometry, this should also be taken into account.} $ X $ and the identical zero point $ z_f $. $ M_f - m_f = a_f(X) $ for $ C = 0 $, i.e., a perfectly flat spectrum. The second term is color-dependent, but as we discussed before, it is likely to be very small. Sometimes people just call this value (the LHS) the ``zero point'', although I will stick to using this term for $ z_f $ for clarity. \n\n\\begin{ex}[How small is the second term?]\n  Consider observations made in wavelength ranges around the V or gri bands. From \\cref{tab: SDSS ext}, $ |k_f''| \\sim 0.000 \\mathrm{-} 0.020 $, and we expect that $ k_f \\lesssim 0.02 $ for most optics. Since $ k_f'' $ is mostly negative, $ k_f - k_f'' X $ is likely to be positive, but not always (see \\cref{tab: SDSS ext}). Also, we have $ 1 < X \\lesssim 2 $ in most cases. Then, if you play with many combinations of numbers, $ b_f(X) C = (k_f - k_f'' X) C \\lesssim 0.05 C $, and most likely much smaller than that. Note that the color index is mostly $ -1 \\lesssim C \\lesssim +1 $.\n\\end{ex}\n\nWhat we do now is to draw several test plots as in \\cref{fig:zeropt}. It is better if the colors of the stars span a range wider than about 0.5 ($ \\mathrm{max}(C) - \\mathrm{min} (C) \\gtrsim 0.5 $). On the left side of the figure, I plotted $ M_f $ vs $ m_f $ for $ f = R $ (Johnson--Cousins $ R_C $ filter). The fitted slope is $ \\sim 1.015 $, which is near unity, as we expect from linearity. Sometimes it becomes as high as 1.05 throughout the night. Below it is a residual plot. Normally, since the uncertainty in the magnitude measurement increases for larger magnitudes (fainter stars), the scatter of the residuals must increase at large magnitude. The right panel of the figure shows the residual as a function of the catalogued color of the stars. The color-dependent slope is 0.02, and it was $ \\lesssim 0.05 $ throughout the night; this is small indeed, as we expected above. As we do not know our target's color, assuming a color uncertainty of $ \\pm 0.5 $ will give around a $ \\pm 0.03 $ \\textbf{\\emph{systematic offset to the magnitude}} of our target. Note that this is a systematic parallel shift in magnitude, \\textbf{not a random error}. Therefore, the \\emph{shape} of the light curve will not be affected by this, although the magnitude \\emph{value} may have been shifted by a constant.
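\n\nThe fit behind these diagnostic plots is a plain linear regression of $ M_f - m_f $ against $ C $. Below is a minimal Python sketch; all input numbers are hypothetical placeholders for your own catalogued magnitudes, instrumental magnitudes, and colors.\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical field-star data from one FITS frame:\nM = np.array([14.1, 14.5, 13.9, 15.0, 14.8])      # catalogued mags\nm = np.array([-8.31, -7.90, -8.50, -7.39, -7.61])  # instrumental\nC = np.array([0.3, 0.6, 0.4, 0.9, 0.5])           # catalogued colors\n\n# Fit (M - m) = a_f + b_f * C, cf. the zero-point equation above.\nb_f, a_f = np.polyfit(C, M - m, 1)\nresid = (M - m) - (a_f + b_f * C)\nprint(a_f, b_f, resid.std())\n\\end{verbatim}\n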
\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=\\linewidth]{figs/SNUO_STX16803-2005UD-1-1-20181012-190829_zeropoint.pdf}\n\\caption{A typical linearity curve with residuals (left) and color-zero plot (right) for the 2018-10-12 (UT 19:08:29) observation at the SNUO 1-m telescope, for the field stars near the asteroid (155140) 2005 UD. I used the PS1 DR1 catalog within 13.5 to 15 mag, removed any object flagged as quasar, variable, galaxy, etc., and dropped any star with another nearby object in the PS1 DR1 catalog.}\n\\label{fig:zeropt}\n\\end{figure}\n\nA summary of the three plots:\n\\begin{table}[ht!]\n\\caption{Differential Photometry Diagnostic Plots}\n\\label{tab: 1-filter diff phot}\n\\centering\n  \\begin{tabular}{c|cc|c|l}\n  Name & $ x $-axis & $ y $-axis & Appearance & Comments \\\\\n  \\hline\n  Linearity & $ M_f $ & $ m_f $       & straight line & \n  $ |\\mathrm{slope} - 1| \\gtrsim 0.01 $ means you need to \\\\\n  Curve & & & slope of unity & check your photometry and/or device linearity \\\\\n  \\hline\n  Linearity & $ M_f $ & $ M_f - m_f $ & constant value & \n  Scatter is larger for large $ M_f $ objects \\\\\n  Residual & & & regardless of $ M_f $ & ($ \\because $ faint objects' mags are less accurate) \\\\\n  \\hline\n  Color & $ C $  & $ M_f - m_f $ & $ \\sim $ constant & \n  If you can see a clear trend, \\\\\n  Dependency & & & regardless of $ C $ & the color terms in \\cref{eq: zeropt} are not negligible.\\\\\n\\end{tabular}\n\\end{table}\n\nIf the graphs do not look as expected, that may mean (1) many field objects are variable or highly non-blackbody-like galaxies, so that they are not suitable as ``standard stars'' and the basic assumptions of \\cref{eq: std} break down, (2) the airmass is so high that the 2nd-order approximations do not hold, (3) the catalog $ M_f $ suffers from unknown uncertainties or systematic errors, so the catalog values are not reliable, (4) $ m_f $ was measured incorrectly, or (5) many other possibilities. Check these before going further, because your final results may be affected.\n\n\nConsider that you determined $ (a_f, b_f) $ for all the FITS files you obtained through a night, where each FITS file has a different airmass. Then, if you \n\\begin{itemize}\n\\item plot $ a_f $ against $ X $, the intercept and slope are $ z_f $ and $ -k_f' $,\n\\item plot $ b_f $ against $ X $, the intercept and slope are $ k_f $ and $ -k_f'' $\n\\end{itemize}\n(a numerical sketch of these nightly fits is given at the end of this subsection).\n\nNote, however, that the following assumptions should be met: \n\\begin{enumerate}\n\\item Atmospheric conditions (extinction coefficients $ k' $ and $ k'' $) remain constant over the night, \n\\item The zero point $ z_f $ remains constant over the night, \n\\item The approximations used for \\cref{eq: std} (the 2nd-order approximation) hold.\n\\end{enumerate}\n\nBecause not all of these are met in the Seoul sky, the plots you make may not show a linear trend. They can be so scattered that a linear fit does not seem to make sense, a clear non-linear trend may appear (mostly because the third assumption breaks down at large airmass), etc.\n\nTherefore, as a simple yet reasonable approximation, I assumed $ M_f - m_f $ for all the objects in a single FITS file should be a constant, found this value, and then added it to the instrumental magnitude $ m_f $ of the target of interest.
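\n\nWhen the above assumptions do hold, the nightly fits are two straight-line regressions. Here is a minimal Python sketch; the per-frame $ (X, a_f, b_f) $ values are hypothetical, chosen to be consistent with the typical magnitudes quoted in this chapter.\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical per-frame fit results over one night:\nX   = np.array([1.1, 1.3, 1.6, 2.0])          # airmass per frame\na_f = np.array([22.28, 22.24, 22.18, 22.10])  # a_f = z_f - k_f' X\nb_f = np.array([0.021, 0.023, 0.026, 0.030])  # b_f = k_f - k_f'' X\n\nsa, z_f = np.polyfit(X, a_f, 1)  # slope = -k_f', intercept = z_f\nsb, k_f = np.polyfit(X, b_f, 1)  # slope = -k_f'', intercept = k_f\nprint(z_f, -sa, k_f, -sb)  # ~ 22.5, 0.2, 0.01, -0.01\n\\end{verbatim}\n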
\n\n\n\\subsection{Differential Photometry: Multi-Filter}\nWhen we have observations in more than one filter, we can even eliminate the systematic offset due to the color uncertainty of the object. Just write the equation for two filters, $ x $ and $ y $:\n\\begin{equation}\n\\begin{cases}\n  M_x - m_x &= (z_x - k_x' X_x) + (k_x - k_x''X_x) C_{xy} = a_x(X_x) + b_x(X_x) C_{xy} \\\\\n  M_y - m_y &= (z_y - k_y' X_y) + (k_y - k_y''X_y) C_{xy} = a_y(X_y) + b_y(X_y) C_{xy} \n\\end{cases}\n\\end{equation}\nNote here that, for field stars with known standard magnitudes, $ M_x $, $ M_y $, and thus $ C_{xy} = M_x - M_y $ should all be known. The $ m_x $ and $ m_y $ are known from photometry on the image. Although the $ z $ values are assumed to be constant throughout the night for the same detector, I explicitly put $ z_x $ and $ z_y $ for generality. Following the logic of the single-filter case, if we plot $ M_x - m_x $ as the ordinate and $ C_{xy} $ as the abscissa for $ N $ field stars, the $ y $-intercept is $ a_x $ and the slope is $ b_x $. The same goes for the $ y $ filter. So $ (a_x, b_x) $ and $ (a_y, b_y) $ are determined with their uncertainties.\n\nThen for the target of interest, whose $ M_x $ and $ M_y $ are unknown,\n\\begin{equation}\n\\begin{cases}\n  M_x^\\mathrm{target} - m_x^\\mathrm{target} &= a_x(X_x) + b_x(X_x) C_{xy}^\\mathrm{target} \\\\\n  M_y^\\mathrm{target} - m_y^\\mathrm{target} &= a_y(X_y) + b_y(X_y) C_{xy}^\\mathrm{target}\n\\end{cases}\n\\end{equation}\nor, since $ C_{xy}^\\mathrm{target} = M_x^\\mathrm{target} - M_y^\\mathrm{target} $ (dropping $ X_x $ and $ X_y $ for brevity),\n\\begin{equation}\n  C_{xy}^\\mathrm{target} = \\frac{(m_x^\\mathrm{target} - m_y^\\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)} ~.\n\\end{equation}\nPutting this back into the original equation,\n\\begin{equation}\\label{eq: 2-filter diff phot sol}\n\\begin{cases}\n  M_x^\\mathrm{target}\n    = m_x^\\mathrm{target} + a_x + b_x \\frac{(m_x^\\mathrm{target} - m_y^\\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)} \\\\\n  M_y^\\mathrm{target}\n    = m_y^\\mathrm{target} + a_y + b_y \\frac{(m_x^\\mathrm{target} - m_y^\\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)}\n\\end{cases}\n\\end{equation}\nNow we have the standard magnitude of the target in both the $ x $ and $ y $ filters. Since we have taken the color of the target into account, \\textbf{\\emph{there is no systematic offset}} as in the single-filter case. A numerical sketch of this solution is given at the end of this subsection.\n\nWhat if we have more than 2 filters, say $ x $, $ y $, and $ z $? Solve for $ x $ and $ y $ as above by using $ (a_x, b_x) $ and $ (a_y, b_y) $. Then solve for $ y $ and $ z $, but this time using the color $ C_{yz} $, not $ C_{xy} $, with $ (a_y', b_y') $. Theoretically $ a_y = a_y' $, because the $ a $ term contains no color term. However, $ b_y \\neq b_y' $, because $ k_y'' $ is defined with respect to $ C_{xy} $ in the first case, but with respect to $ C_{yz} $ in the second case. In reality, because what we get are only best-fit values with uncertainties, $ a_y $ and $ a_y' $ can also be different (likely within the error bars).
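\n\nAs promised, here is a minimal Python sketch of \\cref{eq: 2-filter diff phot sol}; the fitted $ (a, b) $ constants and the target's instrumental magnitudes are hypothetical.\n\\begin{verbatim}\n# Hypothetical fit results for filters x and y (from field stars):\na_x, b_x = 22.40, 0.02\na_y, b_y = 21.90, 0.01\n\n# Hypothetical instrumental magnitudes of the target:\nm_x, m_y = -7.80, -7.35\n\n# Color of the target, then its standard magnitudes:\nC_xy = ((m_x - m_y) + (a_x - a_y)) / (1.0 - (b_x - b_y))\nM_x = m_x + a_x + b_x * C_xy\nM_y = m_y + a_y + b_y * C_xy\nprint(C_xy, M_x, M_y)  # M_x - M_y equals C_xy, as it must\n\\end{verbatim}\n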
\n\\section{Photometry Using Standard Stars}\nWhen we need accurate absolute photometry, photometric standard-star observation is essential. Unlike ``field stars with known magnitude,'' standard stars are confirmed non-variable stars with very accurate magnitudes. By observing them, we determine the coefficients ($ k_f $, $ k_f' $, and $ k_f'' $) and the zero point ($ z_f $), mostly for at least two filters, and get the magnitude of our target of interest with high accuracy. Hence, I will only talk about multi-filter observation of standard stars.\n\nA difference for standard-star frames is that there is only one star with a known magnitude in the FOV, so there will be only one point in the right panel of \\cref{fig:zeropt}. Therefore, we select at least two standard stars with different colors, sometimes called a \\textbf{\\emph{blue--red pair}}. Observe one standard star at a given airmass. When the other standard star reaches a similar airmass, observe it, so that you have at least two stars at the same airmass. Then you have two points in the right panel of \\cref{fig:zeropt} at the given airmass. Repeat this for many airmasses, so that you obtain $ a_f $ and $ b_f $ at many airmasses for each filter. Following the logic of the multi-filter case, you can determine all the coefficients.\n\nIf you are not interested in the zero point and the transformation coefficient $ k_f $, you can follow this calculation: Consider a standard star with ID $= i $ that is observed $ N_i $ times; denote each observation by $ j $ $ (j = 1, \\cdots, N_i) $, and write the parameters of \\cref{eq: std} as $ M_{f, i} $, $ C_{i} $ ($ M $ and $ C $ are fixed values for a standard star, so no $ j $ is needed), $ m_{f, i}^{(j)} $, $ X_{f, i}^{(j)} $, etc. We here assume the zero point, $ z_f $, and the coefficients $ k_f $, $ k_f' $, and $ k_f'' $ are almost constant over the night\\footnote{Even SDSS standard stars were observed and analyzed under this assumption. See SmithJA (2002, AJ, 123, 2121).}. Then, following the sign convention of \\cref{eq: zeropt},\n\\begin{equation}\n\\begin{cases}\n  M_{f, i} - m_{f, i}^{(1)}\n    &= z_f - k_f' X_{f, i}^{(1)} - k_f'' X_{f, i}^{(1)} C_{i} + k_f C_{i} ~, \\\\\n  M_{f, i} - m_{f, i}^{(2)}\n    &= z_f - k_f' X_{f, i}^{(2)} - k_f'' X_{f, i}^{(2)} C_{i} + k_f C_{i} ~, \\\\\n  \\qquad\\vdots & \\qquad\\vdots \\\\\n  M_{f, i} - m_{f, i}^{(N_i)}\n    &= z_f - k_f' X_{f, i}^{(N_i)} - k_f'' X_{f, i}^{(N_i)} C_{i} + k_f C_{i}   ~.\n\\end{cases}\n\\end{equation}\nFor the first two observations of the same standard star (ID $= i $) in filter $ f $, subtracting the two gives\n\\begin{equation}\n  \\Delta m_{f, i}^{(1, 2)} \n    = (k_f' + k_f'' C_i) \\Delta X_{f, i}^{(1, 2)} \n  \\quad\\rightarrow\\quad\n  \\qty( \\frac{\\Delta m }{\\Delta X} )_{f, i}^{(1, 2)}\n    = k_f' + k_f'' C_i ~.\n\\end{equation}\nOf course $ \\Delta m_{f, i}^{(1, 2)}  = m_{f, i}^{(1)} - m_{f, i}^{(2)} $ and $ \\Delta X_{f, i}^{(1, 2)}  = X_{f, i}^{(1)} - X_{f, i}^{(2)} $. \n\nTherefore, if we plot $ \\qty ( \\frac{\\Delta m }{\\Delta X} )_{f, i} $ as a function of the color $ C_i $ of standard stars with different color indices (their colors are all known), the linear fit will give an intercept of $ k_f' $ and a slope of $ k_f'' $. For star $ i $, we have $ N_i $ observations, so we have $ \\binom{N_i}{2} = N_i (N_i - 1) / 2 $ points at the single $ C_i $ value. If we have $ N $ standard stars spanning a wide color range, we have $ \\sum_{i=1}^{N} \\binom{N_i}{2} $ points to fit the line. For a simple blue--red pair, $ N = 2 $, so you have only two $ x $-axis values (color indices), but many points at each $ x $-axis value.
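\n\nA minimal Python sketch of this pairwise $ \\Delta m / \\Delta X $ fit is given below; the per-star $ (m, X) $ measurements and colors are hypothetical.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations\n\n# Hypothetical standard stars: (color C_i, [(m, X), ...])\nstars = [\n    (0.3, [(-7.91, 1.10), (-7.76, 1.45), (-7.62, 1.80)]),\n    (1.2, [(-8.05, 1.15), (-7.90, 1.55), (-7.77, 1.90)]),\n]\n\ncolors, dmdx = [], []\nfor C_i, obs in stars:\n    # every pair of observations of one star gives one dm/dX point\n    for (m1, X1), (m2, X2) in combinations(obs, 2):\n        colors.append(C_i)\n        dmdx.append((m1 - m2) / (X1 - X2))\n\nkpp, kp = np.polyfit(colors, dmdx, 1)  # dm/dX = k' + k'' * C\nprint(kp, kpp)  # estimates of k_f' and k_f''\n\\end{verbatim}\n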
\n\n", "meta": {"hexsha": "fcb7d86ca152007204296a8e0da198c60055c8c5", "size": 44865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Books/chaps/04_Std.tex", "max_stars_repo_name": "ysBach/SNU_AOclass", "max_stars_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-03-23T06:14:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-14T01:49:51.000Z", "max_issues_repo_path": "Books/chaps/04_Std.tex", "max_issues_repo_name": "ysBach/SNU_AOclass", "max_issues_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2020-05-04T17:21:49.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-24T11:41:55.000Z", "max_forks_repo_path": "Books/chaps/04_Std.tex", "max_forks_repo_name": "ysBach/SNU_AOclass", "max_forks_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-05-10T14:19:34.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-14T09:18:08.000Z", "avg_line_length": 107.5899280576, "max_line_length": 1643, "alphanum_fraction": 0.6916304469, "num_tokens": 13287, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5598874223719847}}
{"text": "\\newpage\n\\chapter{Transient flamelet}\n\\section{Introduction}\nThe transient flamelet model of Camflow solves the flamelets equations in mixture fraction coordinate. The model solves the species profiles and the temperature profile as a function of time and mixture fraction.\n\\section{Fundamentals}\nThe mass of all atoms of $j$ in a system can be written as\n\\begin{equation}\n m_j = \\sum_{k=1}^{K_g} n_ka_{kj}W_j = \\sum_{k=1}^{K_g} \\frac{m_k}{W_k}a_{kj}W_j,\n\\end{equation}\nwhere, $K_g$ is the number of species, $n_k$ the number of moles of species $k$, $a_{kj}$ is the number of atoms pf element $j$ in species $k$, $W_j$ is the atomic mass of element $j$, $W_k$ is the molecular weight of species $k$, and $m_k$ is the mass of species $k$. Mass fraction of element $j$ can be written as \n\\begin{equation}\n Z_j = \\frac{m_j}{m} = \\sum_{k=1}^{K_g} \\frac{m_k}{W_k}a_{kj} W_j/m = \\sum_{k=1}^{K_g} a_{kj}\\frac{W_j}{W_k}Y_k\n\\label{mixfrac}\n\\end{equation}\nMass fraction $Y_k$ can be converted to mole fractions $X_k$ according to\n\\begin{equation}\n Y_k = \\frac{X_kW_k}{\\bar{W}}\n\\label{mass2mole}\n\\end{equation}\nsubstituting \\ref{mass2mole} into \\ref{mixfrac} gives\n\\begin{equation}\n Z_j = \\frac{W_j}{\\bar{W}} \\sum_{k=1}^{K_g} a_{kj}X_k, \\quad j=1\\ldots K_e\n\\label{mixture_frac}\n\\end{equation}\n\nNow summing up the mass fractions in the species transport equation defined in \\ref{species} and utilizing the relationship \\ref{mixfrac} leads to\n\n\\begin{equation}\n \\frac{\\partial }{\\partial t}\\sum_{k=1}^{K_g} Y_k + \\frac{\\partial}{\\partial x}\\sum_{k=1}^{K_g} (\\rho u Y_k)+ \\frac{\\partial}{\\partial x} \\sum_{k=1}^{K_g} j_kW_k = \\sum_{k=1}^{K_g} \\dot{\\omega}_kW_k \n\\end{equation}\n\n multiplying throughout by $a_{kj}W_j/W_k$ and realizing that the sum over the source term vanishes, the above equation turns to\n \\begin{equation}\n  \\rho \\frac{\\partial}{\\partial t}\\sum_{k=1}^{K_g} a_{kj} \\frac{W_j}{W_k}Y_k + \n  \\rho u \\frac{\\partial}{\\partial x}\\sum_{k=1}^{K_g} a_{kj} \\frac{W_j}{W_k}Y_k +\n \\frac{\\partial}{\\partial x}\\sum_{k=1}^{K_g} a_{kj} \\frac{W_j}{W_k}J_kW_k = 0\n \\end{equation}\n \nsubstituting  \\ref{mixture_frac} into the above equation gives\n\\begin{equation}\n \\rho \\frac{\\partial Z_j}{\\partial t} + \\frac{\\partial (\\rho u Z_j) }{\\partial x} + \\frac{\\partial }{\\partial x} \\sum_{k=1}^{K_g} a_{kj}\\frac{W_j}{W_k}j_kW_k=0, \\quad j=1\\ldots K_e\n\\label{conserve_mix_frac}\n\\end{equation}\n\n\\subsection{Diffusion term simplification}\nThe flux $j_k$ is defined as\n\\begin{equation}\n j_k = -\\frac{\\rho D_km}{W_k}\\frac{dY_k}{dx}\n\\label{flux}\n\\end{equation}\n\nIf the diffusion coefficient of all species are assumed to be the same then the 3rd term of Eq.\\ref{conserve_mix_frac} can be simplified by the use of Eq.\\ref{flux} as\n\\begin{equation}\n \\frac{\\partial }{\\partial x}\\sum_{k=1}^{K_g} a_{kj} \\frac{W_j}{W_k}j_kW_k = \\frac{\\partial }{\\partial x}\\rho \\mathcal{D} \\frac{\\partial }{\\partial x}\\sum_{k=1}^{K_g} a_{kj} \\frac{W_j}{W_k}Y_k = \\frac{\\partial }{\\partial x}\\rho \\mathcal{D} \\frac{\\partial Z_j}{\\partial x}\n\\label{diff_eq_1}\n\\end{equation}\nsubstituting Eq.\\ref{diff_eq_1} back to Eq.~\\ref{conserve_mix_frac} gives\n\\begin{equation}\n \\rho \\frac{\\partial Z_j}{\\partial t} + \\frac{\\partial (\\rho u Z_j)}{\\partial x} + \\frac{\\partial}{\\partial x}\\mathcal{J}_j = 0\n\\label{diff_eq_2}\n\\end{equation}\nwhere \n\\begin{equation}\n \\mathcal{J}_j = 
\n An equation similar to Eq.~\\ref{diff_eq_2} can be derived for the mixture fraction $Z$. The mass fraction of the fuel is basically the sum of element mass fractions, i.e.,\n\\begin{equation}\n Y_{fm} = \\sum_{j=1}^{K_e} Z_{jf}\n\\end{equation}\nThe mixture fraction of fuel is defined as \n\\begin{equation}\n Z = \\frac{Y_{fm}}{Y_{f1}} = \\frac{\\sum_{j=1}^{K_e}Z_{jf}}{Y_{f1}}\n\\end{equation}\nNow a summation over Eq.~\\ref{conserve_mix_frac} leads to\n\n\\begin{equation}\n \\rho \\frac{\\partial Z}{\\partial t} + \n\\frac{\\partial (\\rho u Z)}{\\partial x} - \\frac{\\partial }{\\partial x}\\bigg(\\frac{\\lambda}{c_p} \\frac{\\partial Z}{\\partial x} \\bigg) =0\n\\end{equation}\n\n\\subsection{Transformation to mixture fraction coordinates}\nApplying the following transformation rule\n\\begin{equation}\n \\frac{\\partial}{\\partial x}= \\frac{\\partial Z}{\\partial x} \\frac{\\partial }{\\partial Z}\n\\end{equation}\nto the energy equation Eq.~\\ref{energy} leads to\n\n\n\\begin{equation}\n\\begin{split}\n \\rho \\partialfrac{T}{t} + \\rho u \\partialfrac{Z}{x}\\partialfrac{T}{Z} - \\frac{\\lambda}{c_p}\\partialfrac{Z}{x}\\partialfrac{{^2Z}}{{x\\partial Z}} \\partialfrac{T}{Z} -\n\\frac{\\lambda}{c_p}\\bigg( \\partialfrac{Z}{x}\\bigg)^2 \\partialfrac{{^2T}}{{Z^2}} + \\\\\n\\frac{1}{c_p}\\sum_{k=1}^{K_g} c_{pk} \\partialfrac{Z}{x}\\partialfrac{T}{Z}\\rho\\mathcal{D}\\partialfrac{Z}{x}\\partialfrac{{Y_k}}{Z} + \n\\frac{1}{c_p} \\sum_{k=1}^{K_g} h_k\\dot{\\omega}_k = 0\n\\end{split}\n\\end{equation}\nNow, if the flamelet is thin in the $Z$ direction, an order-of-magnitude analysis shows that the second derivative with respect to $Z$ is the dominating term, leading to the simplified form of the above equation:\n\\begin{equation}\n \\rho \\partialfrac{T}{t} -\\frac{\\lambda}{c_p}\\bigg( \\partialfrac{Z}{x} \\bigg)^2 \\partialfrac{{^2T}}{{Z^2}}\n+ \\frac{\\rho \\mathcal{D}}{c_p} \\bigg(\\partialfrac{Z}{x}\\bigg)^2 \\bigg[\\sum_{k=1}^{K_g}c_{pk}\\partialfrac{{Y_k}}{Z}\\bigg] \\partialfrac{T}{Z} + \\frac{1}{c_p}\\sum_{k=1}^{K_g} h_k\\dot{\\omega}_k = 0\n\\label{T_mixfrac}\n\\end{equation}\nDefining the scalar dissipation rate ($\\chi$) as\n\\begin{equation}\n \\chi = 2\\mathcal{D} \\bigg( \\partialfrac{Z}{x} \\bigg)^2,\n\\end{equation}\nand assuming a unit Lewis number and the same specific heat capacity for all species simplifies Eq.~\\ref{T_mixfrac} to\n\n\\begin{equation}\n \\rho \\partialfrac{T}{t} - \\frac{\\rho \\chi}{2}\\partialfrac{{^2T}}{{Z^2}} + \\frac{1}{c_p} \\sum_{k=1}^{K_g} h_k\\dot{\\omega}_k =0\n\\end{equation}\n\nApplying the same procedure to the species transport equation~\\ref{species} gives\n\\begin{equation}\n \\rho \\partialfrac{{Y_k}}{t} - \\frac{\\rho \\chi}{2}\\partialfrac{{^2Y_k}}{{Z^2}} = \\dot{\\omega}_k W_k\n\\end{equation}\n\nThe scalar dissipation rate $\\chi$ is related to the strain rate $a$ in a counterflow flame by\n\\begin{equation}\n \\chi = \\frac{a \\exp \\bigg( -2[\\mathrm{erfc}^{-1}(2Z)]^2 \\bigg)} {\\pi},\n\\end{equation}\nwhere $Z$ is the mixture fraction.
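\n\nA small Python sketch of this $\\chi(Z)$ relation is shown below; it relies on SciPy's inverse complementary error function, and the strain rate value simply echoes the sample input file in the next section.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import erfcinv\n\ndef chi(Z, a):\n    # chi(Z) = a * exp(-2 * erfcinv(2Z)**2) / pi, for 0 < Z < 1\n    Z = np.asarray(Z, dtype=float)\n    return a * np.exp(-2.0 * erfcinv(2.0 * Z)**2) / np.pi\n\nprint(chi([0.05, 0.2, 0.5], 400.0))  # peaks at Z = 0.5: a/pi\n\\end{verbatim}\n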
\n\n \\section{Input file}\nAn example ``camflow.xml'' input file for the flamelet model is shown below.\n{\\scriptsize{\n\\begin{verbatim}\n<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n<camflow>\n   <reactor model=\"flamelet\">\n    <length unit=\"cm\">1.0</length>\n  </reactor>\n  <op_condition>\n     <temperature>adiabatic</temperature>\n     <pressure unit=\"bar\">1</pressure>\n     <strain>400</strain>\n  </op_condition>\n  <inlet>\n     <fuel>\n       <velocity unit=\"m/s\">1.0</velocity>\n       <temperature unit=\"K\">300.0</temperature>\n       <molefrac>\n        <species name=\"H2\">1.0</species>\n       </molefrac>\n     </fuel>\n     <oxidizer>\n       <velocity unit=\"m/s\">1.0</velocity>\n       <temperature unit=\"K\">300.0</temperature>\n       <molefrac>\n        <species name=\"O2\">0.21</species>\n        <species name=\"N2\">*</species>\n       </molefrac>\n     </oxidizer>\n  </inlet>\n  <solver mode=\"coupled\" solver=\"cvode\" residual=\"on\">\n     <maxTime>10000</maxTime>\n     <iterations>1</iterations>\n     <tols>\n        <species>\n           <aTol>1.e-12</aTol>\n           <rTol>1.e-08</rTol>\n        </species>\n        <temperature>\n           <aTol>1.e-03</aTol>\n           <rTol>1.e-03</rTol>\n        </temperature>\n        <flow>\n           <aTol>1.e-03</aTol>\n           <rTol>1.e-03</rTol>\n        </flow>\n     </tols>\n  </solver>\n  <initialize>\n    <mCenter unit=\"cm\">50</mCenter>\n    <mWidth unit=\"cm\">40</mWidth>\n    <massfrac>\n      <intrmdt name=\"H\">0.1</intrmdt>\n    </massfrac>    \n </initialize>\n <report outfile=\"final\" species=\"mole\">\n </report>\n <grid>grid.inp</grid>\n</camflow>\n\n\\end{verbatim}}\n}\n\nThe input file follows the XML standard. A detailed description of the various elements in the input file is given below.\n\n\\begin{itemize}\n \\item \\textbf{reactor} : The reactor element specifies which model is to be simulated; for the flamelet model Camflow expects ``flamelet'' as the model attribute value. The reactor element also holds the child element length for specifying the nozzle separation, given with the unit attribute. Since the flamelet equations are solved in the mixture fraction coordinate, this element has no meaning in this case.\n\n\\item \\textbf{op\\_condition} : The element op\\_condition describes the operating conditions for the flame. This includes the specification of the pressure and the condition applied to the solution of the energy equation. The flame pressure may be specified in units of ``Pa'', ``atm'', or ``bar''. The temperature element can take the values ``isothermal'', ``adiabatic'', or ``userdefined''. However, for flamelets the option should be either adiabatic or userdefined.\\\\\n\nIn addition to the temperature and pressure, the strain rate needs to be specified for the calculation of the scalar dissipation rate.\n\n\\item \\textbf{inlet} : For flamelets there are two inlets; one for the fuel and the other for the oxidizer. All properties pertaining to the fuel inlet must be specified under the element ``fuel'' and all properties pertaining to the oxidizer inlet must be specified under the ``oxidizer'' element. Both the ``fuel'' and ``oxidizer'' elements must specify the velocity at the inlet using the ``velocity'' element, the temperature using the ``temperature'' element, and the species composition using the ``molefrac'' or ``massfrac'' element. The unit for velocity may be m/s or cm/s, and the temperature can be in C or in K. The appropriate units must be specified as attribute values. The chemical species present in the fuel/oxidizer are specified using species elements, with the species name as an attribute and the corresponding mole/mass fraction as the element content. The last species composition may be specified using ``*''; in this case the ``*'' stands for one minus the sum of the compositions of the other species. \n\n\\item \\textbf{solver}: The solver element holds the solver control specifications. 
The attribute ``mode'' can be specified as ``coupled'' or ``segregated'' for flamelets. The solver name is essentially provided to switch from one solver to another. However, the present version of Camflow uses only CVode as the numerical integrator, and therefore accepts only ``cvode'' as the solver name. When the solver mode is specified as coupled, the governing equations are solved simultaneously; for ``segregated'' mode, the governing equations are solved sequentially a number of times that is specified by the element iterations. By default the value of the iterations element is one. At the end of the number of iterations, the solution mode will automatically switch to coupled. The user is encouraged to use coupled mode for flamelet calculations.\\\\\n\nAdditionally, the integration time may optionally be specified by the element maxTime. By default the value of maxTime is 100000 s. However, the final integration time is maxTime or the time required to reach steady state, whichever is lower. This means the solver will stop the integration if steady state is reached before the specified integration time.\\\\\n\nThe element ``tols'' holds the various tolerances that can be applied to the species, energy, and continuity equations. For species, a relative tolerance of at least $10^{-6}$ should be used. The user may need to adjust the tolerance values for the species in case of solution difficulties.\n\n\\item \\textbf{initialize} : The initialize element can be used to specify various initial conditions. \nA guess value for the intermediate species composition has to be specified to initialize the flow field. When specifying the intermediate compositions, the mixing center and mixing width need to be specified. When these compositions are specified, a Gaussian profile will be generated for the intermediate species, with a peak at the mixing center and a spread specified by the mixing width. The mixing center is specified by the element ``mCenter'' and the mixing width by ``mWidth''. Both of these elements carry a ``unit'' attribute, and an appropriate unit of length must be specified. Unlike the specification of the fuel inlet species composition, the intermediate or product species compositions need not sum up to one. However, the user must ensure that the sum does not exceed one.\\\\\n\nThe temperature profile can be specified using the ``Tprofile'' element with two attributes, namely ``unit\\_L'' for the length unit and ``unit\\_T'' for the temperature unit. The length unit can be ``cm'' or ``m'', whereas the temperature unit can be either ``K'' or ``C''. The actual temperature as a function of reactor position is specified with child position elements carrying the attribute ``x'', which stands for the position within the reactor. If the length unit is specified as ``cm'', then ``x'' is the position from the reactor inlet in cm, and the value of the position element is the temperature at position ``x''. However, this element is not mandatory.\n\n\\item \\textbf{report}: The desired output for the species composition must be specified in this element using the species attribute. ``mole'' or ``mass'' may be used as the attribute value, and correspondingly the output will be produced either in mole fractions or mass fractions.\n\n\\item \\textbf{grid}: The flamelet model requires a discretized geometry in the mixture fraction coordinate. The geometry may be specified using any file that contains the nodes of the discretized mixture fraction coordinate. In the case of flamelets the contents of the grid file do not have any units, as they are simply the mixture fraction coordinates at which the solution is desired. An example is shown below\\\\\n{\\scriptsize{\\begin{verbatim}\n0.0\n0.002\n0.003\n0.004\n0.005\n0.006\n0.007\n0.009\n0.01\n0.015\n0.02\n0.03\n0.04\n0.05\n0.06\n0.07\n0.08\n0.09\n0.1\n\\end{verbatim}\n}}\n\n\\end{itemize}
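\n\nSince the input is plain XML, it can also be inspected programmatically. Below is a minimal Python sketch that pulls a few values out of a ``camflow.xml'' like the sample above; the element paths follow that sample, and the script is illustrative only, not part of Camflow.\n\\begin{verbatim}\nimport xml.etree.ElementTree as ET\n\nroot = ET.parse('camflow.xml').getroot()\nmodel = root.find('reactor').get('model')              # 'flamelet'\nstrain = float(root.find('op_condition/strain').text)  # 400.0\nfuel = {s.get('name'): s.text\n        for s in root.findall('inlet/fuel/molefrac/species')}\nprint(model, strain, fuel)\n\\end{verbatim}\n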
\n\n\\section{Executing the binary}\nThe flamelet model of Camflow expects four input files, namely ``camflow.xml'', ``therm.dat'', ``chem.inp'', and ``tran.dat''. All the files must be present in the working directory. Upon successful execution, the output file ``profile.dat'' is produced, containing the final integration time (s), mixture fraction, scalar dissipation rate (1/s), density (kg/m$^3$), temperature (K), and the species compositions in mass or mole fractions.\n\n\\section{Results}\nFigure~\\ref{flamelet_profile} shows the steady-state profiles of various chemical species as a function of the mixture fraction coordinate, with both fuel and oxidizer entering at 300 K, and Fig.~\\ref{flamelet_temp} shows the temperature profile as a function of mixture fraction.\n\n\\begin{figure*}[h]\n \\centering\n\\includegraphics[scale=0.6]{flamelet_profile.eps}\n\\caption{Species profiles for the hydrogen flame}\n\\label{flamelet_profile}\n\\end{figure*}\n\n\\begin{figure*}[h]\n \\centering\n\\includegraphics[scale=0.6]{flamelet_temp.eps}\n\\caption{Temperature profile for the hydrogen flame}\n\\label{flamelet_temp}\n\\end{figure*}\n\n%===============================================================================================\n%\n%\n%\n%===============================================================================================\n", "meta": {"hexsha": "daedf2c7c2337b5e99fc03e0ac82f6212d901661", "size": 15202, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/supporting-information/camflow/flamelet.tex", "max_stars_repo_name": "sm453/MOpS", "max_stars_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-08T14:06:33.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-04T07:52:19.000Z", "max_issues_repo_path": "doc/supporting-information/camflow/flamelet.tex", "max_issues_repo_name": "sm453/MOpS", "max_issues_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/supporting-information/camflow/flamelet.tex", "max_forks_repo_name": "sm453/MOpS", "max_forks_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-11-15T05:18:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T13:51:20.000Z", "avg_line_length": 57.5833333333, "max_line_length": 989, "alphanum_fraction": 0.7155637416, "num_tokens": 4389, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5598715385322892}}
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\usepackage{framed}\n\\usepackage{xcolor}\n\\usepackage{enumitem}\n\\usepackage{booktabs}\n\\usepackage{ltablex}\n\\usepackage{ifthen}\n\\usepackage[linktoc=all,colorlinks=true,urlcolor=blue]{hyperref}\n\\hypersetup{\n  colorlinks,\n  linkcolor=blue\n}\n\n\\newcommand{\\real}{\\mathbb{R}}\n\\newcommand{\\trans}{{\\rm T}}\n\n\\newenvironment{function}[4]\t\t%A new environment for functions \n{\n\\begin{shaded*}\n\\noindent\n\\verb~#1~\t\t%The function's name.\n\\end{shaded*}\n\\vspace*{-5pt}\n\\ifthenelse{\\equal{\\detokenize{#2}}{\\detokenize{}}}\n{}                              % what to do if there is no description\n{\\noindent\nInput:\n\\begin{description}[leftmargin=1.5cm,labelindent=0.5cm,itemsep=-4pt,topsep=0pt]\t\n#2\n\\end{description}}\n\\ifthenelse{\\equal{\\detokenize{#3}}{\\detokenize{}}}\n{}                              % what to do if there is no description\n{\\noindent \nOutput: \n\\begin{description}[leftmargin=1.5cm,labelindent=0.5cm,itemsep=-4pt,topsep=0pt]\t\t%Output list.\n#3\n\\end{description}}\n\\ifthenelse{\\equal{\\detokenize{#4}}{\\detokenize{}}}\n{}                              % what to do if there is no description\n{\\vspace*{5pt} \\noindent #4}    % otherwise give the description\n\\vspace*{10pt}\n}{}\n\n\n\\setlength{\\oddsidemargin}{0cm}\t\t\t%Page settings.\n\\setlength{\\evensidemargin}{3cm}\n\\setlength{\\marginparsep}{0.75cm}\n\\setlength{\\marginparwidth}{2.5cm}\n\\setlength{\\marginparpush}{1.0cm}\n\\setlength{\\textwidth}{16.5cm}\n\\setlength{\\textheight}{22.86cm}\n\\setlength{\\topmargin}{-1.27cm}\n\n\\definecolor{shadecolor}{rgb}{0.9,0.9,0.9}\n\n\\begin{document}\n\n\\begin{center}\t\t\t%Title.\n\\Large\n{\\bf Modern Robotics:  Mechanics, Planning, and Control} \\\\\n{\\bf Code Library} \\\\\n\\normalsize\nVersion 1.0.1\\\\\n\\vspace*{0.2in}\nHuan Weng and Kevin M. Lynch \\\\\nJuly 6, 2018 \\\\\n(beta version:  January 14, 2017)\n\\end{center}\n\n\\section*{Introduction}\t\t\t\n\nThis is the documentation for the code library accompanying \\emph{Modern Robotics:  Mechanics, Planning, and Control}, by Kevin M. Lynch and Frank C. Park, Cambridge University Press, 2017, \\url{http://modernrobotics.org}.  The code is written for MATLAB, Mathematica, and Python, and originates from students' solutions to programming assignments in courses using material from the book.  The current version of the code is largely the work of Huan Weng, based on contributions from Bill Hunt, Jarvis Schultz, Mikhail Todes, Matthew Collins, Mojtaba Mozaffar, Chang Liu, and Wentao Chen.  \n\nThe code is commented and mostly self-explanatory in conjunction with the book.  An example use is provided with each function.  The primary purpose of the software is to be easy to read and educational, reinforcing the concepts in the book.  The code is optimized neither for efficiency nor robustness (it does not perform error-checking on its inputs).  This is to keep the code as simple and unintimidating as possible.  Users are encouraged to use and modify the code however they wish; the process of using and modifying the code certainly aids in understanding the concepts in the book.\n\nInformation on installing and using the library is available at the code website, \\url{https://github.com/NxRLab/ModernRobotics}.  
Your feedback on bugs or documentation errors is appreciated via the issue tracker at the same site.\n\nThis document provides an overview of the available functions using MATLAB syntax.  Functions are organized according to the relevant chapter in the book.  Basic functions, such as functions to calculate the magnitude of a vector, normalize a vector, test if a value is near zero, and perform matrix operations such as multiplication and inverses, are not documented here.\n\nNotation that is used throughout this document is summarized below.\n\n\\begin{tabularx}{\\linewidth}{llp{4.5in}}\n\\toprule\n{\\bf Math} & {\\bf Computer} & \\\\\n{\\bf symbol} & {\\bf variable} & {\\bf Description} \\\\\n\\midrule\n$R$ & {\\tt R} & $3 \\times 3$ rotation matrix in $SO(3)$. \\\\\n$\\omega$ & {\\tt omg} & $3$-vector angular velocity. \\\\\n$\\hat{\\omega}$ & {\\tt omghat} & $3$-vector unit rotation axis or unit angular velocity. \\\\\n$\\theta$ & {\\tt theta} & Angle of rotation about an axis or distance traveled along a screw axis. \\\\\n$\\hat{\\omega} \\theta$ & {\\tt expc3} & $3$-vector of exponential coordinates for rotation. \\\\\n$[\\omega], [\\hat{\\omega} \\theta] $ & {\\tt so3mat} & $3 \\times 3$ skew-symmetric $so(3)$ representation of $\\omega$ or $\\hat{\\omega} \\theta$. \\\\\n$p$ & {\\tt p} & $3$-vector for a position in space. \\\\\n$T$ & {\\tt T} & $4 \\times 4$ transformation matrix in $SE(3)$ corresponding to $(R,p)$. \\\\\n$[\\operatorname{Ad}_T]$ & {\\tt AdT} & $6 \\times 6$ matrix adjoint representation of $T \\in SE(3)$. \\\\\n$v$ & {\\tt v} & $3$-vector linear velocity. \\\\\n$\\mathcal{V}$ & {\\tt V} & $6$-vector twist $(\\omega, v)$. \\\\\n$\\mathcal{S}$ & {\\tt S} & A normalized $6$-vector screw axis $(\\omega,v)$, where (a) $\\|\\omega\\| = 1$ or (b) $\\|\\omega\\| = 0$ and $\\|v\\| = 1$. \\\\\n$\\mathcal{S}\\theta$ & {\\tt expc6} & $6$-vector of exponential coordinates for rigid-body motion. \\\\\n$[\\mathcal{V}], [\\mathcal{S} \\theta]$ & {\\tt se3mat} & $4 \\times 4$ $se(3)$ representation of $\\mathcal{V}$ or $\\mathcal{S} \\theta$. \\\\\n$M$ & {\\tt M} & End-effector configuration in $SE(3)$ when manipulator is at its zero position. \\\\\n$\\mathcal{B}_i$ & {\\tt Blist} & $\\mathcal{B}_i$ is the screw axis of the $i$th joint expressed in the end-effector frame when the manipulator is at the zero position.  {\\tt Blist} is a list of all the joint screw axes for the manipulator, $i = 1, \\ldots, n$. \\\\\n$\\mathcal{S}_i$ & {\\tt Slist} & $\\mathcal{S}_i$ is the screw axis of the $i$th joint expressed in the space frame when the manipulator is at the zero position.  {\\tt Slist} is a list of all the joint screw axes for the manipulator,  $i = 1, \\ldots, n$. \\\\\n$J_b, J_s$ & {\\tt Jb}, {\\tt Js} & The $6 \\times n$ manipulator Jacobian for a robot with $n$ joints, expressed in the end-effector frame ($J_b$) or the space frame ($J_s$). \\\\\n${\\epsilon}_{\\omega}$ & {\\tt eomg} & A small positive tolerance on the end-effector orientation error when calculating numerical inverse kinematics. \\\\\n${\\epsilon}_{\\nu}$ & {\\tt ev} & A small positive tolerance on the end-effector linear position error when calculating numerical inverse kinematics. 
\\\\\n$\\theta_0$ & {\\tt thetalist0} & A list of joint variables that serve as an initial guess for the inverse kinematics solution.\\\\\n$\\theta_i$ & {\\tt thetalist} & $\\theta_i$ is the joint variable for joint $i$, and {\\tt thetalist} is $\\theta = (\\theta_1, \\ldots, \\theta_n)$.\\\\\n& {\\tt thetamat} & An $N \\times n$ matrix where each row represents $\\theta$ one timestep after the row preceding it in the matrix. \\\\\n$\\dot{\\theta}_i$ & {\\tt dthetalist} & $\\dot{\\theta}_i$ is the rate of change of joint variable $i$, and {\\tt dthetalist} is $\\dot{\\theta} = (\\dot{\\theta}_1, \\ldots, \\dot{\\theta}_n)$. \\\\\n& {\\tt dthetamat} & An $N \\times n$ matrix where each row represents $\\dot{\\theta}$ one timestep after the row preceding it in the matrix. \\\\\n$\\ddot{\\theta}_i$ & {\\tt ddthetalist} & $\\ddot{\\theta}_i$ is the acceleration of joint $i$, and {\\tt ddthetalist} is $\\ddot{\\theta} = (\\ddot{\\theta}_1, \\ldots, \\ddot{\\theta}_n)$. \\\\\n& {\\tt ddthetamat} & An $N \\times n$ matrix where each row represents $\\ddot{\\theta}$ one timestep after the row preceding it in the matrix. \\\\\n$\\mathfrak{g}$ & {\\tt g} & $3$-vector for gravitational acceleration. \\\\\n$\\tilde{\\mathfrak{g}}$ & {\\tt gtilde} & A possibly incorrect model for $\\mathfrak{g}$ used by a controller. \\\\\n$M_{i-1,i} $ & {\\tt Mlist} & $M_{i-1,i} \\in SE(3)$ is the configuration of manipulator link $i$ relative to link $i-1$ when the manipulator is at its zero position.  The link frames are defined at the link centers of mass.  {\\tt Mlist} is a list of all $M_{i-1,i}$ for $i = 1, \\ldots, n+1$.  The frame $\\{n+1\\}$ is the end-effector frame, and it is fixed relative to the frame $\\{n\\}$ of the last link.  It simply offers the opportunity to define an end-effector frame other than at the center of mass of the last link. \\\\\n& {\\tt Mtildelist} & A possibly incorrect model for {\\tt Mlist} used by a controller. \\\\\n$\\mathcal{F}_{\\text{tip}}$ & {\\tt Ftip} & $6$-vector wrench applied by the manipulator end-effector, expressed in the end-effector frame $\\{n+1\\}$. \\\\\n& {\\tt Ftipmat} & An $N \\times 6$ matrix where each row represents $\\mathcal{F}_{\\text{tip}}$ one timestep after the row preceding it in the matrix. \\\\\n$\\mathcal{G}_i$ & {\\tt Glist} & $\\mathcal{G}_i$ is the $6\\times 6$ spatial inertia matrix for link $i$ of the manipulator, and {\\tt Glist} is a list of all $\\mathcal{G}_i$ for $i = 1, \\ldots, n$. \\\\\n& {\\tt Gtildelist} & A possibly incorrect model for {\\tt Glist} used by a controller. \\\\\n$\\tau_i$ & {\\tt taulist} & $\\tau_i$ is the generalized force applied at joint $i$, and {\\tt taulist} is the list of all joint forces/torques $\\tau = (\\tau_1, \\ldots, \\tau_n)$. \\\\\n& {\\tt taumat} & An $N \\times n$ matrix where each row represents $\\tau$ one timestep after the row preceding it in the matrix. \\\\\n$[\\operatorname{ad}_{\\mathcal{V}}]$ & {\\tt adV} & $6 \\times 6$ matrix adjoint representation of $\\mathcal{V} \\in se(3)$, used to calculate the Lie bracket of two twists, $[\\operatorname{ad}_{\\mathcal{V}_1}] \\mathcal{V}_2$. \\\\\n$T_f$ & {\\tt Tf} & The total time of a motion in seconds from rest to rest when calculating trajectories. \\\\\n$\\Delta t$ & {\\tt dt} & A timestep (e.g., between consecutive rows in a matrix representing a trajectory or force history). \\\\\n$t$ & {\\tt t} & The current time. 
\\\\\n$\\theta_{\\text{start}}$ & {\\tt thetastart} & An $n$-vector of initial joint variables with which to start a trajectory.\\\\\n$\\theta_{\\text{end}}$ & {\\tt thetaend} & An $n$-vector of final joint variables with which to end a trajectory.\\\\\n$X_{\\text{start}}$ & {\\tt Xstart} & An initial end-effector configuration $X_{\\text{start}} \\in SE(3)$ with which to start a trajectory.\\\\\n$X_{\\text{end}}$ & {\\tt Xend} & A final end-effector configuration $X_{\\text{end}} \\in SE(3)$ with which to end a trajectory.\\\\\n$e_{\\text{int}}$ & {\\tt eint} & An $n$-vector of the time-integral of joint errors.\\\\\n$\\theta_d$ & {\\tt thetalistd} & An $n$-vector of reference joint variables $\\theta_d$.\\\\\n& {\\tt thetamatd} & An $N \\times n$ matrix where each row represents $\\theta_d$ one timestep after the row preceding it in the matrix. \\\\\n$\\dot{\\theta}_d$ & {\\tt dthetalistd} & An $n$-vector of reference joint velocities $\\dot{\\theta}_d$.\\\\\n& {\\tt dthetamatd} & An $N \\times n$ matrix where each row represents $\\dot{\\theta}_d$ one timestep after the row preceding it in the matrix. \\\\\n$\\ddot{\\theta}_d$ & {\\tt ddthetalistd} & An $n$-vector of reference joint accelerations $\\ddot{\\theta}_d$.\\\\\n& {\\tt ddthetamatd} & An $N \\times n$ matrix where each row represents $\\ddot{\\theta}_d$ one timestep after the row preceding it in the matrix. \\\\\n$K_p$ & {\\tt Kp} & A scalar feedback proportional gain.\\\\\n$K_i$ & {\\tt Ki} & A scalar feedback integral gain.\\\\\n$K_d$ & {\\tt Kd} & A scalar feedback derivative gain.\\\\\n& {\\tt intRes} & The number of integration steps during each timestep $\\Delta t$.  The value must be a positive integer.  Larger values result in slower simulations but less accumulation of integration error.  \\\\\n\\bottomrule\n\\end{tabularx}\n\n\\section*{Chapter 3:  Rigid-Body Motions}\t\t%Chapter 3\t\n\n\\begin{function}\t\t\t%RotInv\n{invR = RotInv(R)}\n{\\item \\verb~R~: Rotation matrix.}\n{\\item \\verb~invR~: The inverse of {\\tt R}.}\n{For efficiency, the inverse is calculated as the transpose rather than a matrix inverse.}\n\\end{function}\t\n\n\\begin{function}\t\t\t%VecToso3\n{so3mat = VecToso3(omg)}\n{\\item \\verb~omg~: A $3$-vector.}\n{\\item \\verb~so3mat~: The corresponding $3 \\times 3$ skew-symmetric matrix in $so(3)$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%so3ToVec\n{omg = so3ToVec(so3mat)}\n{\\item \\verb~so3mat~: A $3 \\times 3$ skew-symmetric matrix (an element of $so(3)$).}\n{\\item \\verb~omg~: The corresponding $3$-vector.}\n{}\n\\end{function}\n
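\nSince the same functions are also provided in the library's Python version (see the introduction), here is a small usage sketch of the two conversions documented above; it assumes the \\texttt{modern\\_robotics} Python package and NumPy are installed.\n\\begin{verbatim}\nimport numpy as np\nimport modern_robotics as mr\n\n# Round-trip between a 3-vector and its so(3) representation.\nomg = np.array([1.0, 2.0, 3.0])\nso3mat = mr.VecToso3(omg)   # [[0,-3,2],[3,0,-1],[-2,1,0]]\nprint(mr.so3ToVec(so3mat))  # recovers [1. 2. 3.]\n\\end{verbatim}\n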
\n\\begin{function}\t\t\t%AxisAng3\n{[omghat,theta] = AxisAng3(expc3)}\n{\\item \\verb~expc3~: A $3$-vector of exponential coordinates for rotation $\\hat{\\omega}\\theta$.}\n{\n\\item \\verb~omghat~: The corresponding unit rotation axis $\\hat{\\omega}$.  \n\\item \\verb~theta~: The corresponding rotation angle $\\theta$.\n}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%MatrixExp3\n{R = MatrixExp3(so3mat)}\n{\\item \\verb~so3mat~: An $so(3)$ representation of exponential coordinates for rotation, $[\\hat{\\omega} \\theta]$.}\n{\\item \\verb~R~: The $R^\\prime \\in SO(3)$ that is achieved by rotating about $\\hat{\\omega}$ by $\\theta$ from an initial orientation $R=I$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%MatrixLog3 \n{so3mat = MatrixLog3(R)}\n{\\item \\verb~R~: Rotation matrix.}\n{\\item \\verb~so3mat~: The corresponding $so(3)$ representation of exponential coordinates.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%DistanceToSO3\n{d = DistanceToSO3(mat)}\n{\\item \\verb~mat~: A $3 \\times 3$ matrix $M$.}\n{\\item \\verb~d~: A measure of the distance from $M$ to $SO(3)$, the space of rotation matrices.  If ${\\rm det}(M)>0$ (the determinant of $M$ should be $1$ if $M \\in SO(3)$), this distance is calculated as $\\|M^\\trans M -I\\|_F$, since $M^\\trans M$ should be the identity matrix if $M\\in SO(3)$.  The Frobenius norm $\\| \\cdot \\|_F$ of a matrix is the square root of the sum of the squares of the absolute values of the elements of the matrix.  If the determinant is not positive, a large value is returned. }\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%TestIfSO3\n{judge = TestIfSO3(mat)}\n{\\item \\verb~mat~: A $3 \\times 3$ matrix $M$.}\n{\\item \\verb~judge~:  $1$ (True) if $M$ is a rotation matrix (an element of $SO(3)$) and $0$ (False) otherwise.  This function calls \\verb~DistanceToSO3(mat)~ and tests if the returned distance is smaller than a small value (which you should feel free to change to suit your purposes).}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%ProjectToSO3\n{R = ProjectToSO3(mat)}\n{\\item \\verb~mat~: A $3 \\times 3$ matrix $M$.}\n{\\item \\verb~R~:  The closest rotation matrix (element of $SO(3)$) to $M$.  This function is only appropriate for matrices $M$ that are ``close'' to $SO(3)$.  
For example, $M$ could be the result of a long series of multiplications of rotation matrices, which has caused the result to drift slightly away from satisfying the conditions of $SO(3)$ (${\\rm det}(M) = 1, M^\\trans M = I$) due to roundoff errors.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%RpToTrans\n{T = RpToTrans(R,p)}\n{\n\\item \\verb~R~: Rotation matrix.\n\\item \\verb~p~: A position $p \\in \\real^3$.\n}\n{\\item \\verb~T~: The corresponding homogeneous transformation matrix $T \\in SE(3)$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%TransToRp\n{[R,p] = TransToRp(T)}\n{\\item \\verb~T~: Transformation matrix.}\n{\n\\item \\verb~R~: The corresponding rotation matrix.\n\\item \\verb~p~: The corresponding position.\n}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%TransInv\n{invT = TransInv(T)}\n{\\item \\verb~T~: Transformation matrix.}\n{\\item \\verb~invT~: Inverse of \\verb~T~.}\n{Uses the structure of transformation matrices to avoid taking a matrix inverse, for efficiency.}\n\\end{function}\n\n\\begin{function}\t\t\t%VecTose3\n{se3mat = VecTose3(V)}\n{\\item \\verb~V~: A $6$-vector (representing a twist, for example).}\n{\\item \\verb~se3mat~: The corresponding $4 \\times 4$ $se(3)$ matrix.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%se3ToVec\n{V = se3ToVec(se3mat)}\n{\\item \\verb~se3mat~: A $4 \\times 4$ $se(3)$ matrix.}\n{\\item \\verb~V~: The corresponding $6$-vector.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%Adjoint\n{AdT = Adjoint(T)}\t\t\n{\\item \\verb~T~: Transformation matrix.}\n{\\item \\verb~AdT~: The corresponding $6 \\times 6$ adjoint representation $[\\operatorname{Ad}_T]$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%ScrewToAxis\n{S = ScrewToAxis(q,s,h)}\n{\n\\item \\verb~q~: A point $q \\in \\real^3$ lying on the screw axis.\n\\item \\verb~s~: A unit vector $\\hat{s} \\in \\real^3$ in the direction of the screw axis.\n\\item \\verb~h~: The pitch $h \\in \\real$ (linear velocity divided by angular velocity) of the screw axis.\n}\n{\\item \\verb~S~: The corresponding normalized screw axis ${\\mathcal S} = (\\omega,v)$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%AxisAng6\n{[S,theta] = AxisAng6(expc6)}\n{\\item \\verb~expc6~: A $6$-vector of exponential coordinates for rigid-body motion, $\\mathcal{S}\\theta$.}\n{\n\\item \\verb~S~: The corresponding normalized screw axis $\\mathcal{S}$.\n\\item \\verb~theta~: The distance traveled along/about $\\mathcal{S}$.\n}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%MatrixExp6\n{T = MatrixExp6(se3mat)}\n{\\item \\verb~se3mat~: An $se(3)$ representation of exponential coordinates for rigid-body motion, $[\\mathcal{S}\\theta]$.}\n{\\item \\verb~T~: The $T^\\prime \\in SE(3)$ that is achieved by traveling along/about the screw axis $\\mathcal{S}$ a distance $\\theta$ from an initial configuration $T=I$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%MatrixLog6\n{se3mat = MatrixLog6(T)}\n{\\item \\verb~T~: Transformation matrix.}\n{\\item \\verb~se3mat~: The corresponding $se(3)$ representation of exponential coordinates.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%DistanceToSE3\n{d = DistanceToSE3(mat)}\n{\\item \\verb~mat~: A $4 \\times 4$ matrix $M$.}\n{\\item \\verb~d~: A measure of the distance from $M$ to $SE(3)$, the\n  space of transformation matrices.  Let $R$ be the top-left $3 \\times 3$ submatrix of $M$, i.e., the portion of $M$ expected to represent a rotation matrix.  
If ${\\rm det}(R)>0$ (the determinant of $R$ should be $1$ if $R \\in SO(3)$), the distance is calculated as $\\|M^\\prime -I\\|_F$, where the top-left $3 \\times 3$ submatrix of $M^\\prime$ is $R^\\trans R$ (which should be the identity matrix if $R \\in SO(3)$), the $1 \\times 4$ bottom row of $M^\\prime$ is the same as the bottom row of $M$, and the elements $M^\\prime_{14}$, $M^\\prime_{24}$, and $M^\\prime_{34}$ are zero.  The Frobenius norm $\\| \\cdot \\|_F$ of a matrix is the square root of the sum of the squares of the absolute values of the elements of the matrix.  If the determinant of $R$ is not positive, a large value is returned. }\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%TestIfSE3\n{judge = TestIfSE3(mat)}\n{\\item \\verb~mat~: A $4 \\times 4$ matrix $M$.}\n{\\item \\verb~judge~:  $1$ (True) if $M$ is a transformation matrix (an element of $SE(3)$) and $0$ (False) otherwise.  This function calls \\verb~DistanceToSE3(mat)~ and tests if the returned distance is smaller than a small value (which you should feel free to change to suit your purposes).}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%ProjectToSE3\n{T = ProjectToSE3(mat)}\n{\\item \\verb~mat~: A $4 \\times 4$ matrix $M$.}\n{\\item \\verb~T~:  The closest transformation matrix (element of $SE(3)$) to $M$.  This function is only appropriate for matrices $M$ that are ``close'' to $SE(3)$.  For example, $M$ could be the result of a long series of multiplications of transformation matrices, which has caused the result to drift slightly away from satisfying the conditions of $SE(3)$ (top-left $3\\times 3$ submatrix is in $SO(3)$ and the bottom row is $[0 \\; 0\\; 0 \\; 1]$) due to roundoff errors.  The top-left $3 \\times 3$ submatrix of $T$ is the $SO(3)$ matrix closest to the top-left $3 \\times 3$ submatrix of $M$, the bottom row of $T$ is $[0 \\; 0\\; 0\\; 1]$, and the elements $T_{14}$, $T_{24}$, and $T_{34}$ are equal to the elements  $M_{14}$, $M_{24}$, and $M_{34}$, respectively.  
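Equivalently (a compact restatement inferred from the descriptions above, not additional library behavior): writing $R$ and $p$ for the rotation part and translation column of $M$, the output assembles the projection of $R$ together with the unchanged $p$ into a homogeneous transformation matrix, just as \\verb~RpToTrans~ would combine $\\operatorname{ProjectToSO3}(R)$ and $p$. 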
}\n{}\n\\end{function}\n\n\\section*{Chapter 4:  Forward Kinematics}\t\t%Chapter 4\n\n\\begin{function}\t\t\t%FKinBody\n{T = FKinBody(M,Blist,thetalist)}\n{\n\\item \\verb~M~: The home configuration of the end-effector.\n\\item \\verb~Blist~: The joint screw axes in the end-effector frame when the manipulator is at the home position.\n\\item \\verb~thetalist~: A list of joint coordinate values.\n}\n{\\item \\verb~T~: The $T \\in SE(3)$ representing the end-effector frame when the joints are at the specified coordinates.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%FKinSpace\n{T = FKinSpace(M,Slist,thetalist)}\n{\n\\item \\verb~M~: The home configuration of the end-effector.\n\\item \\verb~Slist~: The joint screw axes in the space frame when the manipulator is at the home position.\n\\item \\verb~thetalist~: A list of joint coordinate values.\n}\n{\\item \\verb~T~: The $T \\in SE(3)$ representing the end-effector frame when the joints are at the specified coordinates.}\n{}\n\\end{function}\n\n\\section*{Chapter 5:  Velocity Kinematics and Statics}\t\t%Chapter 5\n\n\\begin{function}\t\t\t%JacobianBody\n{Jb = JacobianBody(Blist,thetalist)}\n{\n\\item \\verb~Blist~: The joint screw axes in the end-effector frame when the manipulator is at the home position.\n\\item \\verb~thetalist~: A list of joint coordinate values.\n}\n{\\item \\verb~Jb~: The corresponding body Jacobian $J_b(\\theta) \\in \\real^{6 \\times n}$.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%JacobianSpace\n{Js = JacobianSpace(Slist,thetalist)}\n{\n\\item \\verb~Slist~: The joint screw axes in the space frame when the manipulator is at the home position.\n\\item \\verb~thetalist~: A list of joint coordinate values.\n}\n{\\item \\verb~Js~: The corresponding space Jacobian $J_s(\\theta) \\in \\real^{6 \\times n}$.}\n{}\n\\end{function}\n\n\\section*{Chapter 6:  Inverse Kinematics}\t\t%Chapter 6\n\n\\begin{function}\t\t\t%IKinBody\n{[thetalist,success] = IKinBody(Blist,M,T,thetalist0,eomg,ev)}\n{\n\\item \\verb~Blist~: The joint screw axes in the end-effector frame when the manipulator is at the home position.\n\\item \\verb~M~: The home configuration of the end-effector.\n\\item \\verb~T~: The desired end-effector configuration $T_{sd}$.\n\\item \\verb~thetalist0~: An initial guess $\\theta_0 \\in \\real^n$ that is ``close'' to satisfying $T(\\theta_0) = T_{sd}$.\n\\item \\verb~eomg~: A small positive tolerance on the end-effector orientation error.  The returned joint variables must give an end-effector orientation error less than ${\\epsilon}_{\\omega}$.\n\\item \\verb~ev~: A small positive tolerance on the end-effector linear position error.  The returned joint variables must give an end-effector position error less than ${\\epsilon}_{\\nu}$.\n}\n{\n\\item \\verb~thetalist~:  Joint variables that achieve \\verb~T~ within the specified tolerances.\n\\item \\verb~success~:  A logical value where \\verb~TRUE~ means that the function found a solution and \\verb~FALSE~ means that it ran through the set number of maximum iterations without finding a solution within the tolerances ${\\epsilon}_{\\omega}$ and ${\\epsilon}_{\\nu}$.\n}\n{The algorithm uses an iterative Newton-Raphson root-finding method starting from the initial guess {\\tt thetalist0}.  The algorithm terminates when the stopping criteria are met or after a maximum number of iterations, whichever comes first. The maximum number of iterations has been hardcoded in as a variable in the function, which can be changed if desired. 
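Concretely, writing $T(\\theta_i)$ for the end-effector configuration at the current guess, each iteration extracts the body twist $\\mathcal{V}_b$ from $[\\mathcal{V}_b] = \\log \\left( T^{-1}(\\theta_i) T_{sd} \\right)$ and updates\n\\[\n\\theta_{i+1} = \\theta_i + J_b^\\dagger(\\theta_i) \\mathcal{V}_b,\n\\]\nwhere $J_b^\\dagger$ is the Moore-Penrose pseudoinverse of the body Jacobian (a worked form of the standard body-frame Newton-Raphson step; the iteration index $i$ is ours, not part of the function interface). 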
If the stopping criteria are not met, the function returns the last calculation of \\verb~thetalist~ as well as a \\verb~FALSE~ value for the success variable.}\n\\end{function}\n\n\\begin{function}\t\t\t%IKinSpace\n{[thetalist,success] = IKinSpace(Slist,M,T,thetalist0,eomg,ev)}\n{}\n{}\n{Equivalent to \\verb~IKinBody~, except the joint screw axes are specified in the space frame.}\n\\end{function}\n\n\n\\section*{Chapter 8:  Dynamics of Open Chains}\t\t%Chapter 8\n\nThis chapter is concerned with calculating and simulating the dynamics of a serial-chain manipulator with dynamics of the form\n\\[\n\\tau = M(\\theta)\\ddot{\\theta} + c(\\theta,\\dot{\\theta}) + g(\\theta) + J^\\trans(\\theta) \\mathcal{F}_{\\text{tip}}.\n\\]\n\n\\begin{function}\t\t\t%ad\n{adV = ad(V)}\n{\\item \\verb~V~: A $6$-vector (e.g., a twist).}\n{\\item \\verb~adV~: The corresponding $6 \\times 6$ matrix $[\\operatorname{ad}_{\\mathcal{V}}]$.}\n{Used to calculate the Lie bracket $[\\operatorname{ad}_{\\mathcal{V}_1}]\\mathcal{V}_2$.}\n\\end{function}\n\n\\begin{function}\t\t\t%InverseDynamics\n{taulist = InverseDynamics(thetalist,dthetalist,ddthetalist,\\\\\n\\hspace*{1in} g,Ftip,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~dthetalist~: $n$-vector of joint velocities $\\dot{\\theta}$.\n\\item \\verb~ddthetalist~:  $n$-vector of joint accelerations $\\ddot{\\theta}$.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Ftip~:  Wrench $\\mathcal{F}_{\\text{tip}}$ applied by the end-effector expressed in frame $\\{n+1\\}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~taulist~: The $n$-vector $\\tau$ of required joint forces/torques.}\n{This function uses forward-backward Newton-Euler iterations.}\n\\end{function}\n\n\\begin{function}\t\t\t%MassMatrix\n{M = MassMatrix(thetalist,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~M~: The numerical inertia matrix $M(\\theta)$ of an $n$-joint serial chain at the given configuration $\\theta$.}\n{This function calls {\\tt InverseDynamics} $n$ times, each time passing a $\\ddot{\\theta}$ vector with a single element equal to one and all other inputs set to zero.  
Each call of {\\tt InverseDynamics} generates a single column of the robot's mass matrix, and these columns are assembled to create the full mass matrix.}\n\\end{function}\n\n\\begin{function}\t\t\t%VelQuadraticForces\n{c = VelQuadraticForces(thetalist,dthetalist,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~dthetalist~: $n$-vector of joint velocities $\\dot{\\theta}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~c~: The vector $c(\\theta,\\dot{\\theta})$ of Coriolis and centripetal terms for a given $\\theta$ and $\\dot{\\theta}$.}\n{This function calls {\\tt InverseDynamics} with $\\mathfrak{g}=0$, $\\mathcal{F}_{\\textrm{tip}} = 0$, and $\\ddot{\\theta} = 0$.}\n\\end{function}\n\n\\begin{function}\t\t\t%GravityForces\n{grav = GravityForces(thetalist,g,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~grav~: The joint forces/torques required to balance gravity at $\\theta$.}\n{This function calls \\verb~InverseDynamics~ with $\\dot{\\theta} = \\ddot{\\theta} = 0$ and $\\mathcal{F}_{\\textrm{tip}} = 0$.}\n\\end{function}\n\n\\begin{function}\t\t\t%EndEffectorForces\n{JTFtip = EndEffectorForces(thetalist,Ftip,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~Ftip~:  Wrench $\\mathcal{F}_{\\text{tip}}$ applied by the end-effector expressed in frame $\\{n+1\\}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~JTFtip~: The joint forces and torques $J^\\trans(\\theta) \\mathcal{F}_{\\text{tip}}$ required to create the end-effector force $\\mathcal{F}_{\\text{tip}}$.}\n{This function calls \\verb~InverseDynamics~ with $\\mathfrak{g}=0$ and $\\dot{\\theta} = \\ddot{\\theta} = 0$.}\n\\end{function}\n\n\\begin{function}\t\t\t%ForwardDynamics\n{ddthetalist = ForwardDynamics(thetalist,dthetalist,taulist,\\\\\n\\hspace*{1in} g,Ftip,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~dthetalist~: $n$-vector of joint velocities $\\dot{\\theta}$.\n\\item \\verb~taulist~: The $n$-vector $\\tau$ of required joint forces/torques.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Ftip~:  Wrench $\\mathcal{F}_{\\text{tip}}$ applied by the end-effector expressed in frame $\\{n+1\\}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~ddthetalist~:  The resulting joint accelerations $\\ddot{\\theta}$.}\n{This function computes $\\ddot{\\theta}$ by solving 
\n\\[\nM(\\theta) \\ddot{\\theta} = \\tau - c(\\theta,\\dot{\\theta}) - g(\\theta) - J^\\trans(\\theta) \\mathcal{F}_{\\textrm{tip}}.\n\\]\n}\n\\end{function}\n\n\\begin{function}\t\t\t%EulerStep\n{[thetalistNext,dthetalistNext] = EulerStep(thetalist,dthetalist,ddthetalist,dt)}\n{\n\\item \\verb~thetalist~:  $n$-vector of joint variables $\\theta$.\n\\item \\verb~dthetalist~: $n$-vector of joint velocities $\\dot{\\theta}$.\n\\item \\verb~ddthetalist~:  $n$-vector of joint accelerations $\\ddot{\\theta}$.\n\\item \\verb~dt~: The timestep $\\Delta t$.\n}\n{\n\\item \\verb~thetalistNext~: Vector of joint variables $\\theta$ after $\\Delta t$ from first-order Euler integration.\n\\item \\verb~dthetalistNext~: Vector of joint velocities $\\dot{\\theta}$ after $\\Delta t$ from first-order Euler integration.\n}\n{\n}\n\\end{function}\n\n\\begin{function}\t\t\t%InverseDynamicsTrajectory\n{taumat = InverseDynamicsTrajectory(thetamat,dthetamat,ddthetamat, \\\\\n\\hspace*{1in} g,Ftipmat,Mlist,Glist,Slist)}\n{\n\\item \\verb~thetamat~: An $N \\times n$ matrix of robot joint variables.  Each row is an $n$-vector of joint variables, and the $N$ rows correspond to $N$ time instants.  (The time instants can be thought of as starting at time $0$ and ending at time $T_f$, in increments $\\Delta t = T_f/(N-1)$.)\n\\item \\verb~dthetamat~: An $N \\times n$ matrix of robot joint velocities.\n\\item \\verb~ddthetamat~: An $N \\times n$ matrix of robot joint accelerations.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Ftipmat~: An $N \\times 6$ matrix, where each row is a vector of the form $\\mathcal{F}_{\\textrm{tip}}(k \\Delta t)$. (If there are no tip forces the user should input a zero and a zero matrix will be used).\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n}\n{\\item \\verb~taumat~: The $N \\times n$ matrix of joint forces/torques for the specified trajectory, where each of the $N$ rows is the vector of joint forces/torques at each time step.}\n{This function uses \\verb~InverseDynamics~ to calculate the joint forces/torques required to move the serial chain along the given trajectory.}\n\\end{function}\n\n\\begin{function}\t\t\t%ForwardDynamicsTrajectory\n{[thetamat,dthetamat] = ForwardDynamicsTrajectory(thetalist,dthetalist,taumat, \\\\\n\\hspace*{1in} g,Ftipmat,Mlist,Glist,Slist,dt,intRes)}\n{\n\\item \\verb~thetalist~: $n$-vector of initial joint variables.\n\\item \\verb~dthetalist~: $n$-vector of initial joint velocities.\n\\item \\verb~taumat~: An $N \\times n$ matrix of joint forces/torques, where each row is the joint force/torque at any instant.  The time corresponding to row $k$ is $k\\Delta t$, $k \\in \\{0, \\ldots, N-1\\}$, where $\\Delta t$ is defined below.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Ftipmat~: An $N \\times 6$ matrix, where each row is a vector of the form $\\mathcal{F}_{\\textrm{tip}}(k \\Delta t)$. 
(If there are no tip forces the user should input a zero and a zero matrix will be used).\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n\\item \\verb~dt~: The timestep $\\Delta t$ between consecutive joint forces/torques.\n\\item \\verb~intRes~: This input must be an integer greater than or equal to 1.  {\\tt intRes} is the number of Euler integration steps during each timestep $\\Delta t$.  Larger values result in slower simulations but less accumulation of integration error.  }\n{\n\\item \\verb~thetamat~: The $N \\times n$ matrix of robot joint variables resulting from the specified joint forces/torques.\n\\item \\verb~dthetamat~: The $N \\times n$ matrix of robot joint velocities resulting from the specified joint forces/torques.\n}\n{This function simulates the motion of a serial chain given an open-loop history of joint forces/torques.  It calls a numerical integration procedure that uses \\verb~ForwardDynamics~.}\n\\end{function}\n\n\n\\section*{Chapter 9: Trajectory Generation}\t\t\t%Chapter 9\n\n\\begin{function}\t\t\t%CubicTimeScaling\n{s = CubicTimeScaling(Tf,t)}\n{\n\\item \\verb~Tf~: Total time of the motion $T_f$ in seconds from rest to rest.\n\\item \\verb~t~: The current time $t$ satisfying $0 \\leq t \\leq T_f$.\n}\n{\\item \\verb~s~: The path parameter $s(t)$ corresponding to a third-order polynomial motion that begins (at $s(0)=0$) and ends (at $s(T_f)=1$) at zero velocity.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%QuinticTimeScaling\n{s = QuinticTimeScaling(Tf,t)}\n{\n\\item \\verb~Tf~: Total time of the motion $T_f$ in seconds from rest to rest.\n\\item \\verb~t~: The current time $t$ satisfying $0 \\leq t \\leq T_f$.\n}\n{\\item \\verb~s~: The path parameter $s(t)$ corresponding to a fifth-order polynomial motion that begins (at $s(0)=0$) and ends (at $s(T_f)=1$) at zero velocity and zero acceleration.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%JointTrajectory\n{traj = JointTrajectory(thetastart,thetaend,Tf,N,method)}\n{\n\\item \\verb~thetastart~: The initial joint variables $\\theta_{\\text{start}} \\in \\real^n$.\n\\item \\verb~thetaend~: The final joint variables $\\theta_{\\text{end}}$.\n\\item \\verb~Tf~: Total time of the motion $T_f$ in seconds from rest to rest.\n\\item \\verb~N~: The number of points $N \\geq 2$ in the discrete representation of the trajectory.\n\\item \\verb~method~: The time-scaling method, where 3 indicates cubic (third-order polynomial) time scaling and 5 indicates quintic (fifth-order polynomial) time scaling.\n}\n{\\item \\verb~traj~: A trajectory as an $N \\times n$ matrix, where each row is an $n$-vector of joint variables at an instant in time.  The first row is $\\theta_{\\text{start}}$ and the $N$th row is $\\theta_{\\text{end}}$.  The elapsed time between each row is $T_f/(N-1)$. 
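Concretely, each row is obtained by straight-line interpolation in joint space, $\\theta(t) = \\theta_{\\text{start}} + s(t)(\\theta_{\\text{end}} - \\theta_{\\text{start}})$, evaluated at that row's time instant, with $s(t)$ produced by the selected time-scaling method (a worked form of the construction, added for clarity; see also the note below). 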
}\n{The returned trajectory is a straight-line motion in joint space.}\n\\end{function}\n\n\\begin{function}\t\t\t%ScrewTrajectory\n{traj = ScrewTrajectory(Xstart,Xend,Tf,N,method)}\n{\n\\item \\verb~Xstart~: The initial end-effector configuration $X_{\\text{start}} \\in SE(3)$.\n\\item \\verb~Xend~: The final end-effector configuration $X_{\\text{end}}$.\n\\item \\verb~Tf~: Total time of the motion $T_f$ in seconds from rest to rest.\n\\item \\verb~N~: The number of points $N \\geq 2$ in the discrete representation of the trajectory.\n\\item \\verb~method~: The time-scaling method, where 3 indicates cubic (third-order polynomial) time scaling and 5 indicates quintic (fifth-order polynomial) time scaling.\n}\n{\\item \\verb~traj~: The discretized trajectory as a list of $N$ matrices in $SE(3)$ separated in time by $T_f/(N-1)$.  The first in the list is $X_{\\text{start}}$ and the $N$th is $X_{\\text{end}}$.}\n{This function calculates a trajectory corresponding to a screw motion about a constant screw axis.}\n\\end{function}\n\n\\begin{function}\t\t\t%CartesianTrajectory\n{traj = CartesianTrajectory(Xstart,Xend,Tf,N,method)}\n{\n\\item \\verb~Xstart~: The initial end-effector configuration $X_{\\text{start}} \\in SE(3)$.\n\\item \\verb~Xend~: The final end-effector configuration $X_{\\text{end}}$.\n\\item \\verb~Tf~: Total time of the motion $T_f$ in seconds from rest to rest.\n\\item \\verb~N~: The number of points $N \\geq 2$ in the discrete representation of the trajectory.\n\\item \\verb~method~: The time-scaling method, where 3 indicates cubic (third-order polynomial) time scaling and 5 indicates quintic (fifth-order polynomial) time scaling.\n}\n{\\item \\verb~traj~: The discretized trajectory as a list of $N$ matrices in $SE(3)$ separated in time by $T_f/(N-1)$.  The first in the list is $X_{\\text{start}}$ and the $N$th is $X_{\\text{end}}$.}\n{Similar to {\\tt ScrewTrajectory}, except the origin of the end-effector frame follows a straight line, decoupled from the rotational motion.}\n\\end{function}\n\n\n\\section*{Chapter 11:  Robot Control}\t\t\t%Chapter 11\n\nThe two functions in this chapter focus on the use of the computed torque controller\n\\[\n\\tau = \\widehat{M}(\\theta) \\left(\\ddot{\\theta}_d + K_p \\theta_e + K_i \\int \\theta_e(t) dt + K_d \\dot{\\theta}_e\\right) + \\widehat{h}(\\theta,\\dot{\\theta})\n\\]\nto control the motion of a serial chain in free space.  
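Here $\\theta_e = \\theta_d - \\theta$ denotes the error between the reference joint vector $\\theta_d$ and the actual joint vector $\\theta$ (a clarifying note on notation).  As a sketch of how the two functions below compose with the Chapter 8 routines (a minimal illustration only, assuming the Python edition of this library; the import name \\verb~modern_robotics~ and the wrapper function are our assumptions, not part of this reference):\n\\begin{verbatim}\nimport modern_robotics as mr\n\ndef control_cycle(thetalist, dthetalist, eint, g, Ftip, Mlist, Glist,\n                  Slist, thetalistd, dthetalistd, ddthetalistd,\n                  Kp, Ki, Kd, dt):\n    # commanded joint forces/torques from the computed torque law above\n    tau = mr.ComputedTorque(thetalist, dthetalist, eint, g, Mlist, Glist,\n                            Slist, thetalistd, dthetalistd, ddthetalistd,\n                            Kp, Ki, Kd)\n    # accelerations of the simulated robot under those forces/torques\n    ddthetalist = mr.ForwardDynamics(thetalist, dthetalist, tau, g, Ftip,\n                                     Mlist, Glist, Slist)\n    # first-order Euler integration over one timestep\n    thetalist, dthetalist = mr.EulerStep(thetalist, dthetalist,\n                                         ddthetalist, dt)\n    # accumulate the time-integral of the joint error\n    eint = eint + (thetalistd - thetalist) * dt\n    return thetalist, dthetalist, eint\n\\end{verbatim}\n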
The term $\\widehat{h}(\\theta,\\dot{\\theta})$ comprises the model of centripetal, Coriolis, and gravitational forces, and the term $\\widehat{M}(\\theta)$ is the model of the robot's mass matrix.\n\n\\begin{function}\t\t\t%ComputedTorque\n{taulist = ComputedTorque(thetalist,dthetalist,eint,g,\\\\\n\\hspace*{1in} Mlist,Glist,Slist,thetalistd,dthetalistd,ddthetalistd,Kp,Ki,Kd)}\n{\n\\item \\verb~thetalist~: $n$-vector of initial joint variables.\n\\item \\verb~dthetalist~: $n$-vector of initial joint velocities.\n\\item \\verb~eint~: An $n$-vector of the time-integral of joint errors.\n\\item \\verb~g~:  Gravity vector $\\mathfrak{g}$.\n\\item \\verb~Mlist~:  List of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n\\item \\verb~thetalistd~: $n$-vector of reference joint variables $\\theta_d$.\n\\item \\verb~dthetalistd~: $n$-vector of reference joint velocities $\\dot{\\theta}_d$.\n\\item \\verb~ddthetalistd~: $n$-vector of reference joint accelerations $\\ddot{\\theta}_d$.\n\\item \\verb~Kp~: The feedback proportional gain (identical for each joint).\n\\item \\verb~Ki~: The feedback integral gain (identical for each joint).\n\\item \\verb~Kd~: The feedback derivative gain (identical for each joint).\n}\n{\\item \\verb~taulist~: The vector of joint forces/torques computed by the computed torque controller at the current instant.}\n{}\n\\end{function}\n\n\\begin{function}\t\t\t%SimulateControl\n{[taumat,thetamat] = SimulateControl(thetalist,dthetalist,g,Ftipmat,Mlist,Glist, \\\\\n\\hspace*{1in} Slist,thetamatd,dthetamatd,ddthetamatd,gtilde,Mtildelist, \\\\\n\\hspace*{1in} Gtildelist,Kp,Ki,Kd,dt,intRes)}\n{\n\\item \\verb~thetalist~: $n$-vector of initial joint variables.\n\\item \\verb~dthetalist~: $n$-vector of initial joint velocities. \n\\item \\verb~g~:  Actual gravity vector $\\mathfrak{g}$.\n\\item \\verb~Ftipmat~: An $N \\times 6$ matrix, where each row is a vector of the form $\\mathcal{F}_{\\textrm{tip}}(k \\Delta t)$. (If there are no tip forces the user should input a zero and a zero matrix will be used).\n\\item \\verb~Mlist~:  Actual list of link frames $\\{i\\}$ relative to $\\{i-1\\}$ at the home position.\n\\item \\verb~Glist~:  Actual spatial inertia matrices $\\mathcal{G}_i$ of the links.\n\\item \\verb~Slist~:  Screw axes $\\mathcal{S}_i$ of the joints in a space frame.\n\\item \\verb~thetamatd~: An $N \\times n$ matrix of desired joint variables $\\theta_d$ from the reference trajectory.  The first row is the initial desired joint configuration, and the $N$th row is the final desired joint configuration.  
The time between each row is {\\tt dt}, below.\n\\item \\verb~dthetamatd~: An $N \\times n$ matrix of desired joint velocities $\\dot{\\theta}_d$.\n\\item \\verb~ddthetamatd~: An $N \\times n$ matrix of desired joint accelerations $\\ddot{\\theta}_d$.\n\\item \\verb~gtilde~: The (possibly incorrect) model of the gravity vector.\n\\item \\verb~Mtildelist~: The (possibly incorrect) model of the link frame locations.\n\\item \\verb~Gtildelist~: The (possibly incorrect) model of the link spatial inertias.\n\\item \\verb~Kp~: The feedback proportional gain (identical for each joint).\n\\item \\verb~Ki~: The feedback integral gain (identical for each joint).\n\\item \\verb~Kd~: The feedback derivative gain (identical for each joint).\n\\item \\verb~dt~: The timestep $\\Delta t$ between points on the reference trajectory.\n\\item \\verb~intRes~: This input must be an integer greater than or equal to 1.  {\\tt intRes} is the number of Euler integration steps during each timestep $\\Delta t$.  Larger values result in slower simulations but less accumulation of integration error. \n}\n{\n\\item \\verb~taumat~: An $N \\times n$ matrix of the controller's commanded joint forces/torques, where each row of $n$ forces/torques corresponds to a single time instant.\n\\item \\verb~thetamat~: An $N \\times n$ matrix of actual joint variables, to be compared to \\verb~thetamatd~.\n\\item Plot: Plot of actual and desired joint variables.\n}\n{This function uses {\\tt ComputedTorque}, {\\tt ForwardDynamics}, and numerical integration to simulate the performance of a computed torque control law operating on a serial chain.  Disturbances come in the form of initial position and velocity errors; incorrect models of gravity, the locations of the link center of mass frames, and the link spatial inertias; and errors introduced by the numerical integration.}\n\\end{function}\n\n\n\\end{document}\n", "meta": {"hexsha": "1fc8b8c83dc66f14b966988787eec6f1c339d25d", "size": 41169, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/MRlib.tex", "max_stars_repo_name": "Nutellaman/ModernRobotics", "max_stars_repo_head_hexsha": "88c94eec1e0e4eedbd3ae32819664179a9a5a6ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1126, "max_stars_repo_stars_event_min_datetime": "2016-10-10T19:04:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:22:58.000Z", "max_issues_repo_path": "doc/MRlib.tex", "max_issues_repo_name": "Nutellaman/ModernRobotics", "max_issues_repo_head_hexsha": "88c94eec1e0e4eedbd3ae32819664179a9a5a6ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 34, "max_issues_repo_issues_event_min_datetime": "2017-10-11T04:52:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-17T18:23:05.000Z", "max_forks_repo_path": "doc/MRlib.tex", "max_forks_repo_name": "Nutellaman/ModernRobotics", "max_forks_repo_head_hexsha": "88c94eec1e0e4eedbd3ae32819664179a9a5a6ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 631, "max_forks_repo_forks_event_min_datetime": "2016-10-11T03:43:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T21:41:47.000Z", "avg_line_length": 60.6318114875, "max_line_length": 799, "alphanum_fraction": 0.709222959, "num_tokens": 12880, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7690802370707281, "lm_q1q2_score": 0.5598715308262788}}
{"text": "\\subsection{Grothendieck universes}\\label{subsec:grothendieck_universes}\n\nInstead of having one single universe, we can have multiple universes where each is contained in another one. The upside of this is that we can do category theory formally within set theory --- see the discussions in \\fullref{def:category_size}. The downside of this is that, unlike models of \\hyperref[def:zfc]{\\logic{ZFC}}, models of \\hyperref[def:axiom_of_universes]{\\logic{ZFC+U}} (\\logic{ZFC} with the axiom that every set is contained in some \\hyperref[def:grothendieck_universe]{Grothendieck universe}) are much less studied. In particular, this axiom requires the existence of an unbounded hierarchy of \\hyperref[rem:strongly_inaccessible_cardinal]{regular strong limit cardinal}, unlike \\( \\logic{ZFC} \\) for which only one such cardinal is sufficient.\n\n\\begin{definition}\\label{def:grothendieck_universe}\\mcite{nLab:grothendieck_universe}\n  We say that the set \\( \\mscrU \\) is a \\term{Grothendieck universe} if it satisfied the following conditions:\n  \\begin{thmenum}\n    \\thmitem[def:grothendieck_universe/nonempty]{GU1} It is \\hyperref[def:empty_set]{nonempty}.\n\n    \\thmitem[def:grothendieck_universe/transitive]{GU2} It is a \\hyperref[def:transitive_set]{transitive set}.\n\n    \\thmitem[def:grothendieck_universe/power_set]{GU3} For any \\( A \\in \\mscrU \\), the \\hyperref[def:basic_set_operations/power_set]{power set} \\( \\pow(A) \\) also belongs to \\( \\mscrU \\).\n\n    \\thmitem[def:grothendieck_universe/union]{GU4} For any member \\( A \\in \\mscrU \\) and any \\( A \\)-indexed family \\( \\set{ B_a }_{a \\in A} \\subseteq \\mscrU \\), the union\n    \\begin{equation*}\n      \\bigcup\\set{ B_a \\given a \\in A }\n    \\end{equation*}\n    belongs to \\( \\mscrU \\). 
This is a restriction from unions over completely arbitrary families of sets to those families that can be indexed by members of \\( A \\).\n  \\end{thmenum}\n\n  We formalize the entire concept via the following monstrous formula:\n  \\small\n  \\begin{equation*}\\taglabel[\\op{IsUniverse}]{eq:def:grothendieck_universe/predicate}\n    \\begin{aligned}\n      \\ref{eq:def:grothendieck_universe/predicate}[\\upsilon] \\coloneqq\n        &\\neg \\ref{eq:def:empty_set/predicate}[\\upsilon]\n        \\wedge\n        \\qforall {\\tau \\in \\upsilon}\n        \\vast(\n          \\ref{eq:def:subset/predicate}[\\tau, \\upsilon]\n          \\wedge\n          \\parens[\\Big]\n            {\n              \\qexists {\\xi \\in \\upsilon}\n              \\ref{eq:def:basic_set_operations/power_set/predicate}[\\xi, \\tau]\n            }\n          \\wedge \\\\ &\\wedge\n          \\qforall \\xi\n          \\qforall \\eta\n          \\parens[\\Bigg]\n          {\n            \\parens[\\Big]\n              {\n                \\underbrace\n                  {\n                    \\ref{eq:def:function/predicate}[\\xi, \\tau, \\upsilon]\n                  }_{\\mathclap{\\xi \\T*{is a function} \\tau \\to \\upsilon}}\n                \\wedge\n                \\underbrace\n                  {\n                    \\ref{eq:def:grothendieck_universe/predicate_isimage}[\\eta, \\xi]\n                  }_{ \\mathclap{\\eta \\T*{is the image of} \\xi} }\n              }\n            \\rightarrow\n            \\underbrace\n              {\n                \\qexists {\\zeta \\in \\upsilon} \\ref{eq:def:basic_set_operations/union/predicate}[\\zeta, \\xi]\n              }_{ \\bigcup \\img(\\xi) \\in \\upsilon}\n        }\n      \\vast),\n    \\end{aligned}\n  \\end{equation*}\n  \\normalsize\n  where\n  \\begin{equation*}\\taglabel[\\op{IsImage}]{eq:def:grothendieck_universe/predicate_isimage}\n    \\ref{eq:def:grothendieck_universe/predicate_isimage}[\\rho, \\tau]\n    \\coloneqq\n    \\qforall \\xi\n    \\parens[\\Big]\n    {\n      \\xi \\in \\rho\n      \\leftrightarrow\n      \\underbrace\n      {\n        \\qexists {\\eta \\in \\tau}\n        \\qexists \\zeta\n        \\ref{eq:def:cartesian_product/kuratowski_pair_predicate}[\\eta, \\zeta, \\xi]\n      }_{\\tau(\\zeta) = \\xi \\T*{for some} \\zeta \\in \\dom(\\tau) }\n    }.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{lemma}\\label{thm:grothendieck_universe_contains_finite_sets}\n  Every Grothendieck universe is a superset of the universe of hereditary finite sets \\hyperref[def:universe_of_hereditary_finite_sets]{\\( V_\\omega \\)}.\n\\end{lemma}\n\\begin{proof}\n  Let \\( \\mscrU \\) be a Grothendieck universe.\n\n  \\ref{def:grothendieck_universe/nonempty} ensures that it is nonempty. Then there exists some set \\( A \\in \\mscrU \\). By \\ref{def:grothendieck_universe/power_set}, \\( \\pow(A) \\in \\mscrU \\). 
By \\ref{def:grothendieck_universe/transitive}, \\( \\varnothing \\in \\pow(A) \\in \\mscrU \\) implies \\( \\varnothing \\in \\mscrU \\).\n\n  Finally, from \\ref{def:grothendieck_universe/power_set} by \\fullref{thm:bounded_transfinite_induction} it follows that \\( V_{n+1} = \\pow(V_n) \\) is a member of \\( \\mscrU \\) for every \\( n \\in \\omega \\).\n\n  Since each \\( V_n \\) is then also a subset of \\( \\mscrU \\) by \\ref{def:grothendieck_universe/transitive}, the union \\( V_\\omega = \\bigcup \\set{ V_n \\given n \\in \\omega } \\) is a subset of \\( \\mscrU \\).\n\\end{proof}\n\n\\begin{definition}\\label{def:axiom_of_universes}\\mcite{nLab:grothendieck_universe}\n  The \\term{axiom of universes} states that any set is contained in a \\hyperref[def:grothendieck_universe]{Grothendieck universe}. Symbolically,\n  \\begin{equation}\\label{eq:def:axiom_of_universes}\n    \\begin{aligned}\n      \\qforall \\tau \\qexists \\upsilon \\parens[\\Big]{ \\ref{eq:def:grothendieck_universe/predicate}[\\upsilon] \\wedge \\tau \\in \\upsilon }.\n    \\end{aligned}\n  \\end{equation}\n\n  We usually add this axiom to \\hyperref[def:zfc]{ZFC} and call the resulting \\hyperref[def:first_order_theory]{logical theory} \\logic{ZFC+U}.\n\\end{definition}\n\n\\begin{example}\\label{ex:def:axiom_of_universes}\n  From \\fullref{thm:grothendieck_universe_iff_regular_strong_limit} it follows that the universe of hereditary finite sets \\hyperref[def:universe_of_hereditary_finite_sets]{\\( V_\\omega \\)} is a Grothendieck universe.\n\n  The existence of other universes cannot be proven in \\logic{ZFC}. For this reason, we use the \\hyperref[def:axiom_of_universes]{axiom of universes}.\n\\end{example}\n\n\\begin{proposition}\\label{thm:smallest_grothendieck_universe_existence}\n  Suppose that we are working in \\logic{ZFC+U}. Then there exists a smallest Grothendieck universe.\n\n  More generally, fix a set \\( A \\). Then there exists a smallest Grothendieck universe containing \\( A \\).\n\\end{proposition}\n\\begin{proof}\n  If no set \\( A \\) is given, we simply take \\( A = \\varnothing \\) since it must belong to every universe by definition.\n\n  We use a trick analogous to \\fullref{thm:smallest_inductive_set_existence}.\n\n  The \\hyperref[def:axiom_of_universes]{axiom of universes} states that there exists at least one universe \\( \\mscrU \\) that contains \\( A \\). Define\n  \\begin{equation*}\n    \\widehat \\mscrU \\coloneqq \\set{ x \\in \\mscrU \\given x \\T{belongs to every Grothendieck universe containing} A }.\n  \\end{equation*}\n\n  Now that we have defined \\( \\widehat \\mscrU \\), it remains to verify that it is itself a universe. To show \\ref{def:grothendieck_universe/nonempty}, note that \\( \\varnothing \\) belongs to every Grothendieck universe by \\fullref{thm:grothendieck_universe_contains_finite_sets}, hence \\( \\varnothing \\in \\widehat{\\mscrU} \\); note also that \\( A \\in \\widehat{\\mscrU} \\) by construction.\n\n  The rest of the verification is trivial.\n\\end{proof}\n\n\\begin{definition}\\label{def:large_and_small_sets}\\mcite{nLab:grothendieck_universe}\n  Suppose \\( \\mscrV = (V, I) \\) is a \\hyperref[def:first_order_semantics/satisfiability]{model} of \\logic{ZFC+U}. Let \\( \\mscrU \\) be a fixed Grothendieck universe.\n\n  We say that a set \\( A \\) is \\( \\mscrU \\)-\\term{small} if \\( A \\in \\mscrU \\) and \\( \\mscrU \\)-\\term{moderate} if \\( A \\subseteq \\mscrU \\). 
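For instance (an immediate consequence of the definitions): every \\( \\mscrU \\)-small set is also \\( \\mscrU \\)-moderate by \\ref{def:grothendieck_universe/transitive}, while \\( \\mscrU \\) itself is \\( \\mscrU \\)-moderate but not \\( \\mscrU \\)-small. 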
This situation resembles the difference between sets and proper classes described in \\fullref{def:set_builder_notation}.\n\n  A set that is not \\( \\mscrU \\)-\\term{small} is called \\( \\mscrU \\)-\\term{large}. Note that any strict superset of \\( \\mscrU \\) is \\( \\mscrU \\)-large, but not \\( \\mscrU \\)-moderate.\n\n  Without further context (i.e. in \\enquote{ordinary mathematics}), we assume that \\( \\mscrU \\) refers to the \\hyperref[thm:smallest_grothendieck_universe_existence]{smallest Grothendieck universe} that contains all sets of interest, and instead of the terms \\( \\mscrU \\)-large and \\( \\mscrU \\)-small, we simply use the terms \\term{large} and \\term{small}.\n\n  In category theory, however, if there is nothing to guarantee the existence of a larger Grothendieck universe, we cannot construct the \\hyperref[def:functor_category]{functor category} of \\( \\mscrU \\)-large categories, as discussed in \\fullref{rem:functor_category_size}. This is the main motivation for the \\hyperref[def:axiom_of_universes]{axiom of universes}.\n\\end{definition}\n\n\\begin{example}\\label{ex:large_and_small_sets}\n  \\hfill\n  \\begin{itemize}\n    \\item A set is \\( V_\\omega \\)-small if and only if it is hereditarily finite, and \\( V_\\omega \\)-moderate if and only if each of its members is hereditarily finite.\n\n    For finite mathematics such as most of combinatorics, we rarely need to work outside of \\( V_\\omega \\).\n\n    \\item If \\( \\kappa < \\mu \\) are regular strong limit cardinals, the stage \\( V_\\kappa \\) of von Neumann's hierarchy is \\( V_\\mu \\)-small by \\fullref{thm:def:cumulative_hierarchy/membership}.\n  \\end{itemize}\n\\end{example}\n\n\\begin{theorem}\\label{thm:grothendieck_universe_iff_regular_strong_limit}\\mcite{Kruse1966}\n  The stage \\( V_\\kappa \\) of the \\hyperref[def:cumulative_hierarchy]{von Neumann cumulative hierarchy} is a \\hyperref[def:grothendieck_universe]{Grothendieck universe} for every \\hyperref[rem:strongly_inaccessible_cardinal]{regular strong limit cardinal} \\( \\kappa \\).\n\n  Conversely, for every Grothendieck universe \\( \\mscrU \\), there exists a regular strong limit cardinal \\( \\kappa \\) such that \\( \\mscrU = V_\\kappa \\).\n\\end{theorem}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( \\kappa \\) be a regular strong limit cardinal.\n\n  \\SubProofOf*{def:grothendieck_universe/nonempty} Since \\( \\kappa \\) is infinite and thus \\( 0 < \\kappa \\), from \\fullref{thm:def:cumulative_hierarchy/membership} it follows that \\( V_0 \\in V_\\kappa \\).\n\n  \\SubProofOf*{def:grothendieck_universe/transitive} The set \\( V_\\kappa \\) is transitive as shown in \\fullref{thm:def:cumulative_hierarchy/transitive}.\n\n  \\SubProofOf*{def:grothendieck_universe/power_set} Since \\( \\kappa \\) is a limit ordinal, \\( V_\\kappa \\) satisfies the axiom of power sets as shown in \\fullref{thm:cumulative_hierarchy_model_of_z} and thus if \\( A \\in V_\\kappa \\), then \\( \\pow(A) \\in V_\\kappa \\).\n\n  \\SubProofOf*{def:grothendieck_universe/union} Fix some member \\( A \\in V_\\kappa \\) and some \\( A \\)-indexed family \\( \\set{ B_a }_{a \\in A} \\subseteq V_\\kappa \\). From \\fullref{thm:strong_regular_cardinal_stage_cardinality} it follows that \\( \\card(A) < \\kappa \\) and \\( \\card(B_a) < \\kappa \\) for every \\( a \\in A \\). 
Thus,\n  \\begin{equation*}\n    \\card(\\set{ B_a \\given a \\in A }) \\leq \\card(A) < \\kappa\n  \\end{equation*}\n  and from \\fullref{thm:regular_cardinal_stage_inverse_transitivity} it follows that\n  \\begin{equation*}\n    \\set{ B_a \\given a \\in A } \\in V_\\kappa.\n  \\end{equation*}\n\n  The union\n  \\begin{equation*}\n    \\bigcup \\set{ B_a \\given a \\in A }\n  \\end{equation*}\n  is then a member of a lower stage, hence it also belongs to \\( V_\\kappa \\).\n\n  \\NecessitySubProof Let \\( \\mscrU \\) be a Grothendieck universe and let \\( \\alpha \\) be the smallest \\hi{ordinal} not in \\( \\mscrU \\). We will show that \\( \\mscrU = V_\\alpha \\), gradually proving along the way that \\( \\alpha \\) is actually an inaccessible cardinal.\n\n  \\SubProof*{Proof that \\( V_\\beta \\in \\mscrU \\) for ordinals \\( \\beta < \\alpha \\)} We will use \\fullref{thm:bounded_transfinite_induction} on \\( \\beta < \\alpha \\).\n  \\begin{itemize}\n    \\item From \\fullref{thm:grothendieck_universe_contains_finite_sets} it follows that \\( \\varnothing \\in \\mscrU \\).\n\n    \\item If \\( \\beta < \\alpha \\) and \\( V_\\beta \\in \\mscrU \\), then by \\ref{def:grothendieck_universe/power_set} we have \\( V_{\\beta + 1} = \\pow(V_\\beta) \\in \\mscrU \\).\n\n    We will come back to this step a bit later, but for now note that \\( V_{\\beta + 1} \\in \\mscrU \\) regardless of whether \\( \\beta + 1 \\in \\mscrU \\).\n\n    \\item If \\( \\lambda < \\alpha \\) is a limit ordinal and \\( V_\\beta \\in \\mscrU \\) for every \\( \\beta < \\lambda \\), we have\n    \\begin{equation*}\n      V_\\lambda\n      \\reloset {\\eqref{eq:def:cumulative_hierarchy}} =\n      \\bigcup\\set{ V_\\beta \\given \\beta < \\lambda },\n    \\end{equation*}\n    which is a \\( \\lambda \\)-indexed union of members of \\( \\mscrU \\). Since \\( \\lambda \\in \\mscrU \\), by \\ref{def:grothendieck_universe/union} we have \\( V_\\lambda \\in \\mscrU \\).\n  \\end{itemize}\n\n  \\SubProof*{Proof that \\( \\alpha \\) is a limit ordinal} In the successor case we noted that \\( V_{\\beta + 1} \\in \\mscrU \\) for every \\( \\beta < \\alpha \\) regardless of whether \\( \\beta + 1 < \\alpha \\). Since \\( \\rank(\\beta + 1) = \\beta + 1 \\), it follows that \\( \\beta + 1 \\in V_{\\beta + 2} \\in \\mscrU \\) and thus by \\ref{def:grothendieck_universe/transitive}, \\( \\beta + 1 \\in \\mscrU \\). Therefore, \\( \\alpha \\) cannot be a successor ordinal --- if \\( \\alpha = \\beta + 1 \\), then \\( \\beta \\in \\mscrU \\) by definition of \\( \\alpha \\) and thus \\( \\beta + 1 = \\alpha \\in \\mscrU \\), which is a contradiction.\n\n  Since \\( \\alpha > 0 \\), \\( \\alpha \\) must then be a limit ordinal.\n\n  \\SubProof*{Proof that \\( V_\\alpha \\subseteq \\mscrU \\)} By \\ref{def:grothendieck_universe/transitive}, \\( V_\\beta \\subseteq \\mscrU \\) for every \\( \\beta < \\alpha \\). We can conclude that\n  \\begin{equation*}\n    V_\\alpha\n    \\reloset {\\eqref{eq:def:cumulative_hierarchy}} =\n    \\bigcup\\set{ V_\\beta \\given \\beta < \\alpha }\n    \\subseteq\n    \\mscrU.\n  \\end{equation*}\n\n  In order to show that equality holds, we must first prove that \\( \\alpha \\) is a strongly inaccessible cardinal. But this requires some auxiliary results.\n\n  \\SubProof*{Proof that \\( \\set{ B } \\in \\mscrU \\) for every \\( B \\in \\mscrU \\)} By \\ref{def:grothendieck_universe/power_set} we have that \\( \\pow(\\pow(B)) \\in \\mscrU \\). 
But \\( \\set{ B } \\subseteq \\pow(B) \\) and hence \\( \\set{ B } \\in \\pow(\\pow(B)) \\). By \\ref{def:grothendieck_universe/transitive}, \\( \\set{ B } \\in \\mscrU \\).\n\n  \\SubProof*{Proof that \\( \\kappa = \\alpha \\) is a cardinal} Suppose that \\( \\alpha \\) is not a cardinal, that is, that there exist some \\( \\beta < \\alpha \\) and a bijective function \\( f: \\beta \\to \\alpha \\). Then\n  \\begin{equation*}\n    \\alpha = \\bigcup\\set{ \\set{ f(\\gamma) } \\given \\gamma < \\beta }\n  \\end{equation*}\n  is a \\( \\beta \\)-indexed union of members of \\( \\mscrU \\) (each singleton \\( \\set{ f(\\gamma) } \\) belongs to \\( \\mscrU \\) by the previous step), and hence, by \\ref{def:grothendieck_universe/union}, \\( \\alpha \\in \\mscrU \\). But this contradicts our choice of \\( \\alpha \\) as the smallest ordinal not in \\( \\mscrU \\).\n\n  Therefore, \\( \\alpha \\) is a cardinal. We will henceforth denote it by \\( \\kappa \\) to highlight that it is a cardinal.\n\n  \\SubProof*{Proof that \\( \\card(B) \\in \\mscrU \\) for every \\( B \\in \\mscrU \\)} Let \\( B \\in \\mscrU \\) and let \\( f: B \\to \\card(B) \\) be a bijective function. Then\n  \\begin{equation*}\n    \\card(B)\n    =\n    f[B]\n    =\n    \\set{ f(x) \\given x \\in B }\n    =\n    \\bigcup\\set[\\Big]{ \\set{ f(x) } \\given x \\in B }\n  \\end{equation*}\n   is a \\( B \\)-indexed union of members of \\( \\mscrU \\) and, by \\ref{def:grothendieck_universe/union}, \\( \\card(B) \\in \\mscrU \\).\n\n  \\SubProof*{Proof that \\( \\kappa \\) is a strong limit} For every \\( \\beta < \\kappa \\) by \\ref{def:grothendieck_universe/power_set} we have \\( \\pow(\\beta) \\in \\mscrU \\). We have already shown that \\( \\card(\\pow(\\beta)) \\in \\mscrU \\) and, by \\fullref{thm:cardinal_exponentiation_power_set}, we have\n  \\begin{equation*}\n    \\card(\\pow(\\beta)) = 2^{\\card(\\beta)} = 2^\\beta.\n  \\end{equation*}\n\n  Hence, \\( 2^\\beta < \\kappa \\) and \\( \\kappa \\) is a strong limit.\n\n  \\SubProof*{Proof that \\( \\kappa \\) is regular} Let \\( C \\subseteq \\kappa \\) be an unbounded set. We will show that \\( \\card(C) = \\kappa \\).\n\n  Suppose that \\( \\card(C) < \\kappa \\). Then \\( \\card(C) \\in \\mscrU \\) since \\( \\kappa = \\alpha \\) is the smallest ordinal not contained in \\( \\mscrU \\). Let \\( f: \\card(C) \\to C \\) be a bijective function. Then\n  \\begin{equation*}\n    C = \\bigcup\\set[\\Big]{ \\set{ f(\\gamma) } \\given \\gamma < \\card(C) }\n  \\end{equation*}\n  and by \\ref{def:grothendieck_universe/union}, \\( C \\in \\mscrU \\).\n\n  Since \\( C \\) is unbounded, we have \\( \\sup C \\geq \\kappa \\). But from \\fullref{thm:union_of_set_of_ordinals} it follows that \\( \\bigcup C = \\sup C \\) and by \\ref{def:grothendieck_universe/union}, \\( \\bigcup C \\in \\mscrU \\), which implies that \\( \\sup C < \\kappa \\).\n\n  The obtained contradiction shows that \\( \\card(C) = \\kappa \\). Since \\( C \\) was an arbitrary unbounded set, it follows that \\( \\kappa \\) satisfies \\fullref{def:regular_cardinal/unbounded_subsets} and is thus regular.\n\n  \\SubProof*{Proof that \\( V_\\kappa = \\mscrU \\)} Finally, now that we know that \\( \\kappa \\) is a strongly inaccessible cardinal, we can show that equality holds in \\( V_\\kappa \\subseteq \\mscrU \\).\n\n  Aiming at a contradiction, suppose that \\( \\mscrU \\setminus V_\\kappa \\) is nonempty. 
By the \\hyperref[def:zfc/foundation]{axiom of foundation}, there exists a set \\( C \\in \\mscrU \\setminus V_\\kappa \\) such that\n  \\begin{equation*}\n    C \\cap (\\mscrU \\setminus V_\\kappa) = \\varnothing,\n  \\end{equation*}\n  thus \\( C \\subseteq V_\\kappa \\) (since \\( C \\subseteq \\mscrU \\) by \\ref{def:grothendieck_universe/transitive}). From \\fullref{thm:strong_regular_cardinal_stage_cardinality} it follows that \\( \\card(C) < \\kappa \\) and from \\fullref{thm:regular_cardinal_stage_inverse_transitivity} it follows that \\( C \\in V_\\kappa \\), which contradicts our choice of \\( C \\) as a member of \\( \\mscrU \\setminus V_\\kappa \\).\n\n  Therefore, \\( V_\\kappa = \\mscrU \\).\n\\end{proof}\n\n\\begin{corollary}\\label{thm:grothendieck_universe_is_model_of_zfc}\n  Every uncountable Grothendieck universe is a standard model of \\hyperref[def:zfc]{\\logic{ZFC}}.\n\\end{corollary}\n\\begin{proof}\n  Follows from \\fullref{thm:grothendieck_universe_iff_regular_strong_limit} and \\fullref{thm:cumulative_hierarchy_model_of_zfc}.\n\\end{proof}\n", "meta": {"hexsha": "d19a9259b04041919eba32b415bc909fd1e7587b", "size": 17879, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/grothendieck_universes.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/grothendieck_universes.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/grothendieck_universes.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.5451263538, "max_line_length": 761, "alphanum_fraction": 0.6783377146, "num_tokens": 5776, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5598715262874209}}
{"text": "\\documentclass{report}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{listings}\n\\begin{document}\n\\section{LDA}\n\\subsection{Abstract}\nIn this chapter, we will learn another algorithm of the binary-classification-hard-output: LDA (linear discriminant analysis). Actually, it's also used to reduce the dimension.\\\\\\\\\nWe'll select a direction and project the high-dimensional samples to this direction to divide them into two classes.\n\\subsection{Idea}\nThe core idea of $LDA$ is to make the projected data satisfy two conditions:\n\\begin{itemize}\n\t\\item the distance between samples within the same class is close\n\t\\item the distance between different classes is large.\n\\end{itemize}\n\\subsection{Algorithm}\nFirstly, to reduce the dimension, we have to find out how to calculate the projection length.\\\\\nwe assume a sample $x$ and project it to the direction $w$.\\\\\nAs we know: $w\\cdot x=||w||\\ ||x||\\ \\cos{\\theta}$\\\\\\\\\nHere we assume $||w||=1$ to detemine the unique $w$ to prevent countless solutions caused by scaling.\\\\\nso $w x=||x||\\cos{\\theta}$\\\\\nAnd $||x||\\cos{\\theta}$ is exactly the definition of projection.\\\\\nTherefore, the projection length of the sample on the vector $w$ is $wx$.\\\\\\\\\nThus the projection is $z=w^T x$\nWe assume the number of samples belonging to the two classes is $N1$,$N2$.\\\\\\\\\nBelow, as to the first condition: the distance of sample within the same class is close, we use the variance matrix to represent the overall distribution of each class.\\\\\nHere we use the definitioin of covariance matrix and the covariance matrix of origin data $x$ is denoted as $S$.\\\\\n\\begin{equation}\n\\begin{aligned}C_1: Var_z[C_1]&=\\frac{1}{N_1}\\sum_{i=1}^{N_1} (z_i-\\bar{z_{c1}})(z_i-\\bar{z_{c1}})^T\\\\\n&=\\frac{1}{N_1}\\sum_{i=1}^{N_1}(w^T x_i-\\frac{1}{N_1}\\sum_{j=1}^{N_1}w^T x_j)(w^T x_i-\\frac{1}{N_1}\\sum_{j=1}^{N_1}w^T x_j)^T \\\\&=w^T \\frac{1}{N_1}\\sum_{i=1}^{N_1}(x_i-\\frac{1}{N_1}\\sum_{j=1}^{N_1} x_j)(x_i-\\frac{1}{N_1}\\sum_{j=1}^{N_1} x_j)^T w\\\\&=w^{T} \\frac{1}{N_{1}} \\sum_{i=1}^{N_{1}}\\left(x_{i}-\\bar{x_{c 1}}\\right)\\left(x_{i}-\\bar{x_{c 1}}\\right)^{T} w\\\\&=w^T S_1 w\\\\C_2: Var_z[C_2]&=\\frac{1}{N_2}\\sum_{i=1}^{N_2} (z_i-\\bar{z_{c2}})(z_i-\\bar{z_{c2}})^T\\\\&=w^T S_2 w\n\\end{aligned}\n\\end{equation}\nTherefore the distance between classes can be denoted by:\n$$\nVar_z[C_1]+Var_z[C_2]=w^T(S_1+S_2)w\n$$\nAs to the second condition: the distance between different classes is large\\\\\nThe distance between classes can be denoted by the difference between the mean projection length of two classes.\n\\begin{equation}\n\\begin{aligned}\n(z_{c1}-z_{c2})^2&=(\\frac{1}{N_1}\\sum_{i=1}^{N_1}w^T x_i - \\frac{1}{N_2}\\sum_{i=1}^{N_2}w^T x_i)^2\\\\\n&=(w^T(\\frac{1}{N_1}\\sum_{i=1}^{N_1} x_i - \\frac{1}{N_2}\\sum_{i=1}^{N_2} x_i))^2\\\\\n&=(w^T(\\bar{x_{c1}}-\\bar{x_{c2}}))^2\\\\\n&=w^T(\\bar{x_{c1}}-\\bar{x_{c2}})(\\bar{x_{c1}}-\\bar{x_{c2}})^T w\n\\end{aligned}\n\\end{equation}\nWell, let's look back on our two conditions:\n\\begin{itemize}\n\t\\item the distance of samples within the same class is close\n\t\\item the distance between different classes is large\n\\end{itemize}\nSo it's easy to obtain a intuitive loss function:\n\\begin{equation}\nL(w)=\\frac{Var_z[C_1]+Var_z[C_2]}{(z_{c1}-z_{c2})^2}\n\\end{equation}\nVia minimizing the loss function, we can obtain the target 
\\newpage\n\\subsection{Implement}\n\\begin{lstlisting}[language={python}]\nimport numpy as np\nimport os\nos.chdir(\"../\")  # the models package lives one directory up\nfrom models.linear_models import LDA\n\n# build two classes of 2-D points scattered around two different lines\nx = np.linspace(0, 100, num=100)\nw1, b1 = 0.1, 10\nw2, b2 = 0.3, 30\nepsilon = 2  # noise scale\nv1 = x * w1 + b1 + np.random.normal(scale=epsilon, size=x.shape)\nv2 = x * w2 + b2 + np.random.normal(scale=epsilon, size=x.shape)\nx1 = np.c_[x, v1]\nx2 = np.c_[x, v2]\nl1 = np.ones(x1.shape[0])\nl2 = np.zeros(x2.shape[0])\ndata = np.r_[x1, x2]\nlabel = np.r_[l1, l2]\n\n# fit the projection direction on the two classes and visualize it\nmodel = LDA()\nmodel.fit(x1, x2)\nmodel.draw(data, label)\n\\end{lstlisting}\n\\end{document}", "meta": {"hexsha": "f5023334defa02320dbdc53891539114a10ad333", "size": 5537, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EN-TeX_files/LinearClassification/07_linear_classification_lda.tex", "max_stars_repo_name": "btobab/Machine-Learning-notes", "max_stars_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-08-28T18:47:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T07:36:27.000Z", "max_issues_repo_path": "EN-TeX_files/LinearClassification/07_linear_classification_lda.tex", "max_issues_repo_name": "btobab/Machine-Learning-notes", "max_issues_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EN-TeX_files/LinearClassification/07_linear_classification_lda.tex", "max_forks_repo_name": "btobab/Machine-Learning-notes", 
"max_forks_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T18:47:22.000Z", "avg_line_length": 45.3852459016, "max_line_length": 472, "alphanum_fraction": 0.6705797363, "num_tokens": 2159, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7690802370707283, "lm_q1q2_score": 0.5598715217485628}}
{"text": "\n\\subsection{F test for equal population means}\n\n", "meta": {"hexsha": "dff236387a069bc170df9a7745956b3ad341fcb7", "size": 49, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/hypothesis/02-01-Ftest_equal.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/hypothesis/02-01-Ftest_equal.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/hypothesis/02-01-Ftest_equal.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 12.25, "max_line_length": 46, "alphanum_fraction": 0.7755102041, "num_tokens": 11, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7931059414036511, "lm_q2_score": 0.7057850402140659, "lm_q1q2_score": 0.5597623087475905}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n%\\logo{./resources/pdf/logo.pdf}\n%\\institute{Rice University}\n%\\faculty{Faculty of Whatever Sciences}\n%\\department{Department of Mathematics}\n%\\title{Class Notes}\n%\\subtitle{Based on MATH xxx}\n%\\author{\\textit{Author}\\\\Gabriel \\textsc{Gress}}\n%\\supervisor{Linus \\textsc{Torvalds}}\n%\\context{Well, I was bored...}\n%\\date{\\today}\n\n\\begin{document}\n\n% \\maketitle\n\n% Notes taken on 02/01/21\n\n\\section{Irreducibility Criteria}\n\\label{sec:irreducibility_criteria}\n\nIf \\(R\\) is an integral domain, then for \\(f(x) \\in R[x]\\) monic, of degree \\(>0\\), is irreducible if and only if \\(f(x)\\) cannot be factored as a product of two polynomials of deg \\(\\geq 1\\).\nFortunately, we have a few tools to get irreducibility of polynomials, such as Gauss' Lemma.\\\\\n\nAnother direction we can take is roots:\n\\begin{prop}\n\tLet \\(f(x) \\in F[x]\\) for \\(F \\) a field. Then\n\t\\begin{itemize}\n\t\t\\item \\(f(x)\\) has a degree 1 factor if and only if \\(f(x)\\) has a root \\(\\alpha\\) in \\(F\\), i.e. \\(\\exists \\alpha \\in F\\) such that \\(f(\\alpha)=0\\) \n\t\t\\item \\(f(x)\\) of degree 2 or 3 is reducible if and only if \\(f(x)\\) has a root in \\(F\\)\n\t\\end{itemize}\n\\end{prop}\nIf we look at \\(R = \\Z\\) and \\(F = \\Q\\) specifically, we have more options.\n\\begin{prop}[Rational Root Test]\n\tLet \\(f(x) = \\sum_{i=1}^{n} a_i x^{i} \\in \\Z[x]\\). \n\\begin{itemize}\n\t\\item If \\(\\frac{r}{s}\\in \\Q\\) with \\(gcd(r,s) = 1\\) and \\(\\frac{r}{s}\\) is a root of \\(f(x)\\), then \\(r\\mid a_0\\) and \\(s\\mid a_n\\).\n\t\\item If \\(f(x) \\in Z[x]\\) is monic and if \\(f(\\alpha)\\neq 0\\) for all \\(\\alpha \\in \\Z\\) dividing \\(\\alpha_0\\), then \\(f(x)\\) has no roots in \\(\\Q\\).\n\\end{itemize}\n\\end{prop}\nUnfortunately, while these theorems are powerful, they are relatively dependent on the polynomial being of low degree. Ideals can help us extend these ideas to higher degree polynomials.\n\n\\begin{prop}\n\tLet \\(R\\) be an integral domain, and let \\(I \\triangleleft R\\) be a proper ideal of \\(R\\). Take \\(f(x) \\in R[x]\\) a monic polynomial of degree \\(\\geq 1\\). If the image of \\(f(x)\\) in \\((R / I)[x]\\) is irreducible, then \\(f(x)\\) is irreducible in \\(R[x]\\).\n\\end{prop}\n\nWhile nice, unfortunately many irreducible polynomials are reducible when modulated by the ideal.\n\n\\begin{thm}[Eisenstein-Schonemann Criteria]\n\tLet \\(P\\) be a prime ideal of an integral domain \\(R\\), and take\n\t\\begin{align*}\n\t\tf(x) = x^{n}+ a_{n-1}x^{n-1}+ \\ldots + a_1x + a_0\n\t\\end{align*}\n\tto be a monic polynomial in \\(R[x]\\) of degree \\(\\geq 1\\). Suppose \\(a_{n-1},\\ldots,a_1,a_0 \\in P\\) and \\(a_0 \\not\\in P^2\\). Then \\(f(x) \\) is irreducible in \\(R[x]\\).\n\\end{thm}\nA trick we can use to help apply this is that if \\(f(x)\\) doesn't satisfy the criteria, use \\(f(x-c)\\) and try again. 
If it is (ir)reducible for \\(f(x-c)\\), it is ir(reducible) for \\(f(x)\\).\n\\end{document}\n", "meta": {"hexsha": "a6518e90137c47dc7c4d3d555f792b312afdb5ad", "size": 2821, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture4 - PolyIrreducibilityCrit.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture4 - PolyIrreducibilityCrit.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture4 - PolyIrreducibilityCrit.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.0166666667, "max_line_length": 256, "alphanum_fraction": 0.6536689117, "num_tokens": 981, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.7931059511841119, "lm_q1q2_score": 0.5597623058341776}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{indentfirst}\n\n\\usepackage{geometry}\n\\geometry{textwidth=20cm}\n\n\n\\title{CS131 B1: HW 6}\n\\author{Duy Nguyen}\n\\date{16 November 2016}\n\n\\begin{document}\n\n\\maketitle\n\n\\subsection*{1.}\nThere are 2504 CS students at a school. Of these, 1876 have taken a course in Java, 999 have\ntaken a course in Python, and 345 have taken a course in C++. Further, 876 have taken a courses\nin both Java and Python, 231 have taken courses in both Python and C++, and 290 have taken\ncourses in both Java and C++. If 189 of these students have taken courses in Python, Java, and\nC++, how many of these 2504 students have not taken a course in any of these three\nprogramming languages?\n\nThe number of student who are taking at least a programming language: $1876 + 999 + 345 - 876 - 231 - 290 + 189 = 2012$.\n\nTherefore, the number of student who aren't taking a programing language is $2054 - 2012 = 492$ students.\n\n\\subsection*{2.}\nHow many elements are there in the union of four sets if each of the sets has 100 elements,\neach pair of the sets shares 50 elements, each three of the sets share 25 elements, and there\nare 5 elements in all four sets?\n\\begin{center}\n\\begin{tabular}{r|l}\n    Add all set together &  $100\\cdot 4 = 400$ \\\\\n    Minus all the pairs (6 pairs) & $- 50 \\cdot 6 = 300 $\\\\\n    Add all the triples (4 triples) & $ 25 \\cdot 4 = 100$ \\\\\n    Minus the quad & $-5$ \\\\ \\hline\n    & $195$\n\\end{tabular}\n\\end{center}\n\\subsection*{3.}\nIf the password locking TV consists of 4 digits, how many different passwords my son will have to try in the worst case to unlock the TV?\n\nLike the iPhone old passcode lock system, 10 digits in each spot, 4 spot, so $10^4 = 10,000$.\n\n\\subsection*{4.}\n How many license plates can be made using either 3 uppercase English letters followed by 3 digits or 4 uppercase English letters followed by 2 digits?\n\nIn the first case: 3 uppercase English letters followed by 3 digits, there are $26^3 \\cdot 10^3 = 17,576,000$ different plates.\nIn the second case: 4 uppercase English letters followed by 2 digits, there are $26^4 \\cdot 10^2 = 45,697,600$ different plates.\n\nThus in total there are $17,576,000 + 45,697,600 = 63,273,600$ different plates.\n\n\\subsection*{5.}\nThere are four possibilities for each base in DNA: A, C, G, and T. How many 5-element DNA sequences of bases\n\na. end with A? $4^3 \\cdot 1 = 64$\n\nb. start with T and end with G? $1 \\cdot 4^2 \\cdot 1 = 16$\n\nc. contain only A and T? $ 2^4 = 16$\n\nd. do not contain C? $3^4 = 81$\n\n\\subsection*{6.}\nA committee is formed consisting of one representative from each of the 50 states in the US, where a representative from a state is either the governor or one of the 2 senators from that state. How many ways are there to form this committee?\n\nHow many 50 people group you can pick out from 150 people? $$C(150,50) = \\frac{150!}{50!(150-50)!}$$. \nThe result is $20,128,660,909,731,932,294,240,234,380,929,315,748,140$ different ways. This looks wrong!\n\n\\subsection*{7.}\nA drawer shelf contains 6 pairs of socks, each pair of a certain color (different from the rest), and all the socks are mixed on a shelf. 
How many individual socks should be taken from the shelf to ensure that at least 2 of them are of the same color?\n\n$$\\left \\lceil{\\frac{7}{6}}\\right \\rceil = 2$$\n\nSo at least 7 socks.\n\n\\subsection*{8.}\nShow that if there are 30 students in a class, then at least 2 have the last names that begin with the same letter\n\nThere are 26 letters in the English alphabet, so if there are 27 students at least two of them will share a common first letter in their last name according to Pigeonhole principle: \n$$ \\left \\lceil{\\frac{27}{26}} \\right \\rceil = 2$$. So 30 are more than enough.\n\n\\subsection*{9.}\nWhat should be the minimum number of US students in a college to guarantee that there are at least 100 students from the same US state? \n\nThrough the Pigeonhole principle, we have $$ \\left \\lceil{\\frac{N}{50}} \\right \\rceil = 100$$. We note that $\\frac{4950}{50} = 99$, so 4951 students should suffice to have at least 100 students from an US state.\n\n\\subsection*{10.}\nFind the number of ways that an organization consisting of 10 members can elect a president, a\ntreasurer, and a secretary (one person may be elected just for one position).\n\n$$P(10,3) = 720$$\n\n\\subsection*{11.}\nFind the number of ways that an organization consisting of 10 members can elect 3\nrepresentatives to a senate\n\n$$C(10,3) = 120$$\n\n\\subsection*{12.}\nHow many different strings can be made by reordering the characters of \u201cMassachusetts\u201d?\n\n\\begin{center}\n\\begin{tabular}{c|c}\n    M & $C(13, 1) = 13$ \\\\\n    a & $C(12, 2) = 66$ \\\\\n    s & $C(10, 4) = 210$ \\\\\n    c & $C(6, 1) = 6$ \\\\\n    h & $C(5, 1) = 5$ \\\\\n    u & $C(4, 1) = 4$ \\\\\n    e & $C(3, 1) = 3$ \\\\\n    t & $C(2, 2) = 1 $ \\\\ \\hline\n    & $64,864,800$\n\\end{tabular}\n\\end{center}\n\n\\subsection*{13.}\nHow many different signals, each consisting of 6 flags hung in a vertical line, can be formed from\n4 identical red flags and 2 identical blue flags?\n\n$$\\frac{6!}{4!2!} = 15 $$\n\n\\subsection*{14.}\nSuppose that an ice-cream caf\u00e9 has 10 different flavors of ice cream. In how many different\nways one can choose 3 scoops of ice-cream, so that order of flavors does not matter? 
(There is\nan unlimited amount of each flavor of ice-cream).\n\n$C(10, 3) = 120$ different ways to scoops those ice cream.\n\n\n\n\\end{document}\n", "meta": {"hexsha": "ac1a1b8780175d7e3d7dde5a580e8257f7f253e2", "size": 5392, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "unnatural_rubber/hw/CS131/hw6.tex", "max_stars_repo_name": "zuik/stuff", "max_stars_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "unnatural_rubber/hw/CS131/hw6.tex", "max_issues_repo_name": "zuik/stuff", "max_issues_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "unnatural_rubber/hw/CS131/hw6.tex", "max_forks_repo_name": "zuik/stuff", "max_forks_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.9407407407, "max_line_length": 251, "alphanum_fraction": 0.7091988131, "num_tokens": 1648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.7931059487389968, "lm_q1q2_score": 0.5597623041084521}}
{"text": "% arara: pdflatex\n% arara: pdflatex\n\\documentclass[letterpaper,10pt]{article}\n\\usepackage{preamble}\n\n\\title{Error Propagation}\n\n\\begin{document}\n\\maketitle\n\n\\section{Background}\n\n\"Error propagation\" refers to the task of determining the uncertainty (or error) in a quantity that is calculated from one or more quantities with uncertainty. For example, given two quantities\nwith uncertainty, $x$ and $y$, what is the uncertainty in the product $xy$? In general, if we have $z = f(x,y)$, we want to determine the uncertainty in $z$. The most common definition used\nfor the uncertainty in a quantity is the standard deviation\\footnote{The uncertainty in experimental measurements is commonly taken to be the standard error in the mean. Since standard error is the standard deviation of the sampling distribution, all of the calculations involving the standard deviation will also apply to standard error.}:\n$$\n  \\sigma_x = \\sqrt{ \\ex{ (x - \\nom{x})^2 } } = \\unc{x}\n$$\nSo, if we have an uncertain quantity $x = \\uq{x}$ and a calculation $z = f(x)$, the uncertainty in $z$ is defined as\n$$\n  \\unc{z} = \\sqrt{ \\ex{ (z - \\nom{z})^2 } }.\n$$\nIf $x$ is the result of multiple measurements, ${x_i}$, then we could simple calculate $\\unc{z}$ directly,\n\\begin{align}\n  \\label{eq:calc-nom}\n  \\nom{z} &= \\avg{ f(x_i) },\\\\\n  \\unc{z} &= \\sqrt{ \\avg{ (f(x_i) - \\nom{z})^2 } }.\n\\end{align}\nThis would give the standard deviation for $z$ based on the measured data. However, if $x$ is a random variable described by a probability distribution, then the expectation values should\nbe evaluated in the limit $N \\rightarrow \\infty$. And even if we have ${x_i}$ measurements, $x$ can be considered a random variable that we have approximated using sample statistics.\n\nThere are two commonly used methods for determining the uncertainty in a calculation involving uncertain quantities. The most common method is based on a first-order approximation to the function $f$.\nThe Taylor Series expansion of $f$ about $\\nom{x}$ is:\n$$\nf(x) = f(\\nom{x}) + f^{\\prime}(\\nom{x})(x - \\nom{x}) + \\mathcal{O}( (x-\\nom{x})^2)\n$$\nWith this approximation, the expectation values for $z$ can be evaluated\n\n\\begin{align*}\n  \\nom{z} &= \\ex{ f(x) } \\approx f(\\nom{x}), \\\\\n  \\unc{z} &= \\sqrt{ \\ex{ (f(x) - \\nom{z})^2 } } \\approx \\sqrt{ \\ex{ \\left( f^\\prime(\\nom{x}) (x - \\nom{x}) \\right)^2 } } = \\sqrt{ (f^{\\prime}(\\nom{x}) \\unc{x})^2 } = | f^{\\prime}(\\nom{x}) \\unc{x} |.\n\\end{align*}\n\nThis method is of course an approximation, but it in most cases it is\nsufficient because the uncertainty $\\unc{x}$ should be small if $x$ is supposed\nto be a measurement (if it isn't, then our measurement isn't very good).  There\nare known issues with this method. For example, the uncertainty in $\\sin(x)$ is\nzero if $\\nom{x} = n\\pi/2$ and it may not be possible to compute the\nderivatives of $f(x)$, but it is certainly the ``standard'' method.\n\nThe other most common method for calculating $\\unc{z}$ is based on Monte Carlo.\nValues for $x$ are drawn from its distribution and used to calculate multiple\nvalues of $z$ which are then used to calculate the average and standard\ndeviation. 
While this method is more accurate (it works for arbitrary,\nnon-linear functions), it is also more time consuming and may not be feasible.\n\nOther methods for error propagation based on the Taylor Series expansion of the function $f$ exist (all you have to do is keep more terms), but they are much less common because they require higher moments of $x$ to be known, which\nis often not the case.\n\nIf $f$ is a function of two variables, $x$ and $y$, then the Taylor Series expansion of the function truncated to first order can still be used and the nominal value for $z$ is a straight forward generalization of Equation \\ref{eq:calc-nom}.\nHowever, since $\\unc{z}$ is the expected value of $z - \\nom{z}$ squared, it will contain cross terms of $x$ and $y$. The formulas are:\n\\begin{align}\n  \\nom{z} &= \\ex{ f(x,y) } \\approx f(\\nom{x},\\nom{y}), \\\\\n  \\unc{z} &= \\sqrt{ \\ex{ (f(x,y) - \\nom{z})^2 } } \\approx \\sqrt{ (f^{\\prime}_x(\\nom{x},\\nom{y}) \\unc{x})^2 + (f^{\\prime}_y(\\nom{x},\\nom{y}) \\unc{y})^2 + 2f^{\\prime}_{x}(\\nom{x},\\nom{y})f^{\\prime}_{y}(\\nom{x},\\nom{y})\\sigma_{x,y} }.\n  \\label{eq:calc-unc}\n\\end{align}\nwhere $f^\\prime_x$ and $f^\\prime_y$ indicate the partial derivatives of $f$. The $\\sigma_{x,y}$ in the cross term is called covariance, and is defined as the expectation value of the product of the $x$ and $y$ deviations:\n$$\n\\sigma_{x,y} = \\ex{ (x-\\nom{x})(y-\\nom{y}) }.\n$$\nthe \\emph{correlation} is defined as the covariance normalized to $\\unc{x}\\unc{y}$:\n$$\n\\label{correlation}\nr_{x,y} = \\frac{ \\ex{ (x-\\nom{x})(y-\\nom{y}) } }{\\unc{x}\\unc{y}}.\n$$\nThe two quantities, $x$ and $y$ are said to be \\emph{uncorrelated} if their deviations are independent of each other. In this case, positive and negative deviations in both $x$ and $y$ happen at random, and on average will all cancel\nsuch that $\\sigma_{x,y}$ (and $r_{x,y}$) will be zero.\n\n\n\\section{Numerical Method}\n\\label{sec-numrical-method}\n\nThe method implemented by this module differs slightly from the first-order method. Rather than computing the derivatives of $f(x)$ at $\\nom{x}$, we simply evaluate $f(x)$ at $\\nom{x}$ and $\\nom{x} + \\unc{x}$ and take the difference:\n$$\n\\begin{aligned}\n  \\nom{z} &\\approx f(\\nom{x}), \\\\\n  \\unc{z} &\\approx f(\\upp{x}) - \\nom{x}.\n\\end{aligned}\n$$\nThis has the advantage of being very simple, and can be performed by hand if needed. In fact, that is the original motivation for this module. I needed to determine the uncertainty in a calculation so that I could check the uncertainty\ncalculated by students in a physic laboratory class. This method makes intuitive sense (we just want to see how much different our answer would be if $x$ increased by its uncertainty), but it is not without justification. The first-order\nerror propagation method is based on the approximation\n$$\nf(x) = f(\\nom{x}) + f^{\\prime}(\\nom{x})(x - \\nom{x}).\n$$\nRewriting $x$ here as $\\upp{x}$ gives\n$$\n\\begin{aligned}\nf(\\upp{x}) &= f(\\nom{x}) + f^{\\prime}(\\nom{x})\\unc{x} \\\\\nf^{\\prime}(\\nom{x})\\unc{x} &= f(\\upp{x}) - f(\\nom{x})\n\\end{aligned}\n$$\nSo, to first-order, our calculation agrees with the derivative based method.\n\nWe can do the same for functions of two (or more) quantities, but we must take care of correlations. 
Using the first-order approximation, we can show that\n$$\n\\begin{aligned}\nf^{\\prime}_x(\\nom{x},\\nom{y})\\unc{x} = f(\\upp{x},\\nom{y}) - f(\\nom{x},\\nom{y})\\\\\nf^{\\prime}_y(\\nom{x},\\nom{y})\\unc{y} = f(\\nom{x},\\upp{y}) - f(\\nom{x},\\nom{y})\n\\end{aligned}\n$$\nNow, let $\\unc{z}_x$ and $\\unc{z}_y$ be the uncertainties in $z$ caused by the uncertainty in $x$ and $y$ separately,\n$$\n\\begin{aligned}\n\\unc{z}_{x} &= f^{\\prime}_x(\\nom{x},\\nom{y})\\unc{x} =  f(\\upp{x},\\nom{y}) - f(\\nom{x},\\nom{y})\\\\\n\\unc{z}_{y} &= f^{\\prime}_y(\\nom{x},\\nom{y})\\unc{y} =  f(\\nom{x},\\upp{y}) - f(\\nom{x},\\nom{y})\n\\end{aligned}\n$$\nThen the total uncertainty $\\unc{z}$ can be written as (from Equation \\ref{eq:calc-unc}):\n\n$$\n\\begin{aligned}\n        \\unc{z} = \\sqrt{ \\unc{z}_{x}^2 + \\unc{z}_{y}^2 + 2\\unc{z}_{x}\\unc{z}_{y} r_{x,y} }\n\\end{aligned}\n$$\nIf the correlation coefficient is negative, it is possible for the total uncertainty in $z$ to be zero.\n\n\n\\section{Correlations}\n\\label{sec-correlations}\n\nWhen a calculation involves more than one uncertain quantity, then correlations\nbetween the quantities in the calculation must be considered. In simple\nexperiments (like those performed in an undergraduate physics lab), the\nquantities measured in the experiment are not correlated and correlations\ninvolving the measurements can safely ignore correlation. \\textbf{However}, if a\nseries of calculations are performed with the measurements, then the results of\nthese calculations \\emph{will} be correlated, and additional calculations that\ninvolve the result *cannot* ignore correlation. Basically, if you have a set of\nuncorrelated measurements and you do not want to deal with correlation, then\nyou must evaluate the uncertainty of all of your calculations directly from the\nuncertainty of the measurements. The following example illustrates this.\n\n\n\\subsection{Determining the Correlation Coefficients}\n\\label{sec-determining-the-correlation-coefficients}\n\nThe correlation between two measurements can be directly calculated from Equation \\ref{eq:calc-unc}, it is the average of the product of their deviations. However, if we perform a calculation that involves a measured quantity, then\nthe result of the calculation will be correlated to the measurement and we must take this into account if other calculations involving the result are done.\n\nWe first consider the one dimensional case, $z = f(x)$. The correlation coefficient $r_{z,x}$ is defined as\n$$\nr_{z,x} = \\frac{ \\ex{ (z-\\nom{z})(x-\\nom{x}) } }{\\unc{z}\\unc{x}}.\n$$\nReplacing $z$ with its first order approximation gives\n$$\n\\ex{ (z-\\nom{z})(x-\\nom{x})}  = \\ex{ f^{\\prime}(\\nom{x})(x-\\nom{x})(x-\\nom{x}) } = f^{\\prime}(\\nom{x})\\ex{ (x-\\nom{x})^2 } = f^{\\prime}(\\nom{x})\\unc{x}^2\n$$\nRecall that $\\unc{z} = |f^{\\prime}(\\nom{x})\\unc{x}|$, so the correlation coefficient can be written as \n$$\nr_{z,x} = \\frac{ \\pm\\unc{z} \\unc{x} }{\\unc{z}\\unc{x}} = \\pm 1.\n$$\nSo, the correlation coefficient for $z$ and $x$ will be plus or minus 1, depending on the sign of the derivative $f^{\\prime}$. If the derivative is positive, then the correlation will be positive. This\nindicates that $x$ and $z$ are directly correlated, which makes since if $z$ is directly calculated from $x$.\n\nIf $z$ is a function of two (or more) quantities, then we need to determine the correlation coefficient between $z$ and each of the quantities it depends on. 
The procedure is generally the same.\nLet $z = f(x,y)$. Then, to first order, we can write\n$$\nf(x,y) = f(\\nom{x},\\nom{y})\n      +  f^{\\prime}_x(\\nom{x},\\nom{y}) (x-\\nom{x})\n      +  f^{\\prime}_y(\\nom{x},\\nom{y}) (y-\\nom{y})\n$$\nWith this, the covariance of $z$ and $x$ is \n$$\n\\begin{aligned}\n\\ex{(z-\\nom{z})(x-\\nom{x})} &= \\ex{ (f^{\\prime}_x(\\nom{x},\\nom{y}) (x-\\nom{x}) +  f^{\\prime}_y(\\nom{x},\\nom{y}) (y-\\nom{y}))(x - \\nom{x}) } \\\\\n                            &= \\ex{ (f^{\\prime}_x(\\nom{x},\\nom{y}) (x-\\nom{x})(x - \\nom{x} )} + \\ex{ f^{\\prime}_y(\\nom{x},\\nom{y}) (y-\\nom{y}))(x - \\nom{x}) } \\\\\n                            &= (f^{\\prime}_x(\\nom{x},\\nom{y}) \\ex{ (x-\\nom{x})^2} +  f^{\\prime}_y(\\nom{x},\\nom{y})\\ex{ (y-\\nom{y}))(x - \\nom{x}) } \\\\\n                            &= f^{\\prime}_x(\\nom{x},\\nom{y}) \\unc{x}^2 + f^{\\prime}_y(\\nom{x},\\nom{y})\\sigma_{y,x}\n\\end{aligned}\n$$\nAnd the correlation coefficient is (recall that $\\unc{z}_x = f^{\\prime}_x(\\nom{x},\\nom{y}) \\unc{x}$)\n$$\n\\begin{aligned}\nr_{z,x} &= \\frac{f^{\\prime}_x(\\nom{x},\\nom{y}) \\unc{x}^2}{\\unc{z}\\unc{x}} + \\frac{f^{\\prime}_y(\\nom{x},\\nom{y})\\sigma_{y,x}}{\\unc{z}\\unc{x}} \\\\\n        &= \\frac{\\unc{z}_x\\unc{x}}{\\unc{z}\\unc{x}} + \\frac{f^{\\prime}_y(\\nom{x},\\nom{y})\\unc{y}\\sigma_{y,x}}{\\unc{z}\\unc{y}\\unc{x}}  \\\\\n        &= \\frac{\\unc{z}_x\\unc{x}}{\\unc{z}\\unc{x}} + \\frac{\\unc{z}_y\\sigma_{y,x}}{\\unc{z}\\unc{y}\\unc{x}}  \\\\\n        &= \\frac{\\unc{z}_x}{\\unc{z}} + \\frac{\\unc{z}_y\\sigma_{y,x}}{\\unc{z}\\unc{y}\\unc{x}}  \\\\\n        &= \\frac{\\unc{z}_x}{\\unc{z}} + \\frac{\\unc{z}_y}{\\unc{z}}r_{y,x} \n\\end{aligned}\n$$\nIf $z$ depended on another quantity, then an additional term similar to the second would be required. Assume $z = f(x,y,w)$, then we would have\n$$\nr_{z,x} = \\frac{\\unc{z}_x}{\\unc{z}} + \\frac{\\unc{z}_y}{\\unc{z}}r_{y,x} + \\frac{\\unc{z}_w}{\\unc{z}}r_{w,x}\n$$\nSo, the correlation between $z$ and the quantities it is calculated from depends on the correlation between the quantities themselves. If all quantities are uncorrelated, then\n$z$ will be correlated to each. However, if two or more quantities are correlated, then $z$ could be uncorrelated with them.\n\nThe first term can be rewritten in the same form as the others,\n$$\nr_{z,x} = \\frac{\\unc{z}_x}{\\unc{z}}r_{x,x} + \\frac{\\unc{z}_y}{\\unc{z}}r_{y,x} + \\frac{\\unc{z}_w}{\\unc{z}}r_{w,x}.\n$$\nIn general:\n$$\nr_{z,x_i} = \\sum_j \\frac{\\unc{z}_{x_j}}{\\unc{z}}r_{x_j,x_i} = \\frac{1}{\\unc{z}} \\sum_j r_{x_j,x_i}\\unc{z}_{x_j}.\n$$\nIn fact, this formula works for quantities that $z$ does not directly depend on. 
Let $z = f(x,y,w)$, and consider the correlation between $z$ and $v$,\n$$\n\\begin{aligned}\nr_{z,v} &= \\frac{1}{\\unc{z}\\unc{v}} \\ex{ \\left[f^{\\prime}_x(\\nom{x},\\nom{y},\\nom{w}) (x-\\nom{x})\n                                             + f^{\\prime}_y(\\nom{x},\\nom{y},\\nom{w}) (y-\\nom{y})\n                                             + f^{\\prime}_w(\\nom{x},\\nom{y},\\nom{w}) (w-\\nom{w})\\right] (v-\\nom{v})\n                                    } \\\\\n        &= \\frac{1}{\\unc{z}\\unc{v}} \\left[f^{\\prime}_x(\\nom{x},\\nom{y},\\nom{w}) \\sigma_{x,v}\n                                        + f^{\\prime}_y(\\nom{x},\\nom{y},\\nom{w}) \\sigma_{y,v}\n                                        + f^{\\prime}_w(\\nom{x},\\nom{y},\\nom{w}) \\sigma_{w,v}\\right] \\\\\n        &= \\frac{\\unc{z}_x}{\\unc{z}} r_{x,v}\n         + \\frac{\\unc{z}_y}{\\unc{z}} r_{y,v}\n         + \\frac{\\unc{z}_w}{\\unc{z}} r_{w,v}\n\\end{aligned}\n$$\n\nIt is possible for the uncertainty in $z$ to be zero if its inputs are correlated, which will cause the denominator to be zero. If this happens, then $z$ is\nuncorrelated with the inputs.\n\n\\section{Summary}\n\\label{sec-summary}\n\nIn general, assume we have a calculation we need to perform that is a function of several uncertain quantities:\n$$\nz = f(x,y,w,\\ldots)\n$$\nThe nominal value for $z$ is\n$$\n\\nom{z} = f(\\nom{x},\\nom{y},\\nom{w},\\ldots)\n$$\nThe individual uncertainties are\n$$\n\\begin{aligned}\n\\unc{z}_x &= f(\\upp{x},\\nom{y},\\nom{w},\\ldots) - \\nom{z} \\\\\n\\unc{z}_y &= f(\\nom{x},\\upp{y},\\nom{w},\\ldots) - \\nom{z} \\\\\n\\unc{z}_w &= f(\\nom{x},\\nom{y},\\upp{w},\\ldots) - \\nom{z} \\\\\n\\ldots\n\\end{aligned}\n$$\nThe total uncertainty is\n$$\n\\unc{z} = \\sqrt{ \\unc{z}_x^2 + 2\\unc{z}_x\\unc{z}_y r_{x,y} + \\unc{z}_y^2 + 2\\unc{z}_y\\unc{z}_wr_{y,w} + \\unc{z}_w^2 + \\ldots}.\n$$\nThe quantity $z$ will be correlated to the inputs of $f$. 
The correlation coefficients are\n$$\n\\begin{aligned}\nr_{z,x} &= \\frac{\\unc{z}_x}{\\unc{z}}\n         + \\frac{\\unc{z}_y}{\\unc{z}}r_{y,x}\n         + \\frac{\\unc{z}_w}{\\unc{z}}r_{w,x}\n         + \\ldots \\\\\nr_{z,y} &= \\frac{\\unc{z}_x}{\\unc{z}}r_{x,y}\n         + \\frac{\\unc{z}_y}{\\unc{z}}\n         + \\frac{\\unc{z}_w}{\\unc{z}}r_{w,y}\n         + \\ldots \\\\\nr_{z,w} &= \\frac{\\unc{z}_x}{\\unc{z}}r_{x,w}\n         + \\frac{\\unc{z}_y}{\\unc{z}}r_{y,w}\n         + \\frac{\\unc{z}_w}{\\unc{z}}\n         + \\ldots \\\\\n\\ldots\n\\end{aligned}\n$$\nIf the input quantities are not correlated, then the total uncertainty will simplify\n$$\n\\unc{z} = \\sqrt{ \\unc{z}_x^2 + \\unc{z}_y^2 + \\unc{z}_w^2 + \\ldots}\n$$\nas will the correlations\n$$\n\\begin{aligned}\nr_{z,x} &= \\frac{\\unc{z}_x}{\\unc{z}} \\\\\nr_{z,y} &= \\frac{\\unc{z}_y}{\\unc{z}} \\\\\nr_{z,w} &= \\frac{\\unc{z}_w}{\\unc{z}} \\\\\n\\ldots\n\\end{aligned}\n$$\n\n\\end{document}\n", "meta": {"hexsha": "305abccecec785b99e6e58fab9a81b96872e231f", "size": 14887, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/writups/TechnicalDetails/TechnicalDetails.tex", "max_stars_repo_name": "CD3/libUncertainty", "max_stars_repo_head_hexsha": "b7220cf1ae56032cdc806be41daca2fe1ab43dcb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/writups/TechnicalDetails/TechnicalDetails.tex", "max_issues_repo_name": "CD3/libUncertainty", "max_issues_repo_head_hexsha": "b7220cf1ae56032cdc806be41daca2fe1ab43dcb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/writups/TechnicalDetails/TechnicalDetails.tex", "max_forks_repo_name": "CD3/libUncertainty", "max_forks_repo_head_hexsha": "b7220cf1ae56032cdc806be41daca2fe1ab43dcb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.6042402827, "max_line_length": 340, "alphanum_fraction": 0.6413649493, "num_tokens": 4933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599562, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5597623011950386}}
{"text": "\\subsection{Optimization}\n\\label{sec:optimization}\n\n\\textbf{Purpose}: Iteratively optimize yield, quality, etc. of crops as more environment condition and plant metric data is gathered across different programs over multiple growth cycles.\n\n\\textbf{Function}:\n\\begin{itemize}\n    \\item \\textbf{Inputs}: Growth cycle datasets (Environment data across time (control system portion of $\\vec E(t)$), plant performance metric (PPM) data across time ($\\vec P(t)$), associated program (actuated portion of $\\vec E(t)$))\n    \\item \\textbf{Outputs}: Plant-program performance prediction model, novel programs\n\\end{itemize}\n\n\\textbf{Method}: \n\nAssume a plant's growth rate (or state change) is related to its current internal state $\\vec P \\in \\R^n$ (for $n$ plant metrics) and the environment conditions $\\vec E \\in \\R^m$ (for $m$ environment parameters). Let these both be functions $\\vec P (t),\\vec E(t)$ defined at each $t$, where $t=0$ indicates the time of planting. Assume that this relationship is constant for all members of a given species.\n\nDefine plant state change $\\vec P'$: \n\n$$\\vec P'(t) = \\frac{d}{dt}\\vec P(t)$$\n\nDefine the plant-environment behaviour function $Q$: \n\n$$Q(\\vec P(t), \\vec E(t), t)=\\vec P'(t)$$ \n\nGiven the current internal and external states, determine the plant's state change.\n\n\\begin{enumerate}\n    \\item Set $\\vec E_{set}(t)~\\forall~ t$, aka the program (\\ref{sec:automation});\n    \\item Record $\\vec P(t)~\\forall~ t$ and $\\vec E(t)\\approx \\vec E_{set}(t)~\\forall~ t$;\n    \\item Calculate $\\vec P'(t)~\\forall~ t$;\n    \\item Fit $\\vec Q$ to our data;\n\\end{enumerate}\n\nBy fitting $\\vec Q$ across iterations, we can predict $\\vec P$ at any $\\vec E$ and $t$. For example:\n\n$$\\vec P(t+\\Delta t)=P(t)+\\Delta t\\cdot Q(\\vec P(t),\\vec E(t))$$\n\nGradient ascent with this model can be used to generate novel (theoretically improved) programs.\n\n\\textbf{Features}:\n\\begin{itemize}\n    \\item \\textit{Machine Learning Model}: Represented by $Q$. Operates in the cloud.\n    \\item \\textit{Environment Data} (over time): Represented by $\\vec E(t)$. Collected by sensors (for \\textit{control loop} environment parameters) and extracted from the associated program (for \\textit{actuated instruction} environment parameters). See Section \\ref{sec:automation}.\n    \\item \\textit{PPM Data} (over time): Represented by $\\vec P(t)$. Extracted from computer vision. 
See Section \\ref{sec:automation}.\n\\end{itemize}", "meta": {"hexsha": "3c0170d007bccade716b7f9ab0fe84042ab1e3e6", "size": 2416, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/design/Optimization.tex", "max_stars_repo_name": "UpRouteFoundation/PeaPod", "max_stars_repo_head_hexsha": "8693ad8d73e609ab2ebd4fa2b0db7b2ea6e8de1c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/tex/design/Optimization.tex", "max_issues_repo_name": "UpRouteFoundation/PeaPod", "max_issues_repo_head_hexsha": "8693ad8d73e609ab2ebd4fa2b0db7b2ea6e8de1c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-10-13T04:36:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-13T09:11:25.000Z", "max_forks_repo_path": "docs/tex/design/Optimization.tex", "max_forks_repo_name": "UpRouteFoundation/PeaPod", "max_forks_repo_head_hexsha": "8693ad8d73e609ab2ebd4fa2b0db7b2ea6e8de1c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.9090909091, "max_line_length": 406, "alphanum_fraction": 0.7139900662, "num_tokens": 666, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267660487572, "lm_q2_score": 0.6442251201477016, "lm_q1q2_score": 0.5597200277452996}}
{"text": "\\section{\\textbf{DisBarrierTest}}\n\n\\subsection{Particular Case (problem)}\nThe problem we are trying to solve is that of synchronization on a\nmulti-threaded environment; and the particular mechanism we want to\nuse for that is a barrier. \n\n\\subsection{Solution}\nThe particular type of barrier proposed for this exercise is called a\nDissemination Barrier, and it has a nice tree structure. Imagine that\nin order to implement the barrier, you need to perform ``r'' rounds;\nwhere each rounds means that all the threads involved communicated\nwith other two threads (everyone sent a message to one predefined\nthread and waited to receive a message from another predefined\nthread). We use the communication word loosely here, as in a shared\nmemory computer it will simply mean writing or reading from a shared\nvariable. \\\\ \n\nNow, each round is essentially some sort of permutation; is like you\nalign the $n$ threads in one line, duplicate that line and do a\nshuffle on it, then you pair its elements with the original line. It\nturns out (can be proved), that if you do $ceil(log_2(n))$ rounds and\non each round $k$ use the formula $(i+2^k) \\bmod n$ for thread\ncommunication (where $i$ is the slot associated to each thread on an\narray), then all the threads can safely assumed that everybody has\nfinished its call to \\C{await} and they can move on. The\nimplementation is compact enough to fit here, so here it goes: \\\\ \n\n\\begin{lstlisting}[style=numbers]\npublic class DisBarrier implements Barrier {  \n  int size;\n  int logSize;\n  Node[][] node;\n  ThreadLocal<Boolean> mySense;\n  ThreadLocal<Integer> myParity;\n  \n  public DisBarrier(int capacity) {\n    size = capacity;\n    logSize = 0;\n    while (capacity != 1) {\n      this.logSize++;\n      capacity >>= 1;\n    }\n    node = new Node[logSize][size];\n    for (int r = 0; r < logSize; r++) {\n      for (int i = 0; i < size; i++) {\n        node[r][i] = new Node();\n      }\n    }\n    int distance = 1;\n    for (int r = 0; r < logSize; r++) {\n      for (int i = 0; i < size; i++) {\n        node[r][i].partner = node[r][(i + distance) % size];\n      }\n      distance *= 2;\n    }\n    this.mySense = new ThreadLocal<Boolean>() {\n      protected Boolean initialValue() { return true; };\n    };\n    this.myParity = new ThreadLocal<Integer>() {\n      protected Integer initialValue() { return 0; };\n    };\n  }\n  \n  public void await() {\n    int parity = myParity.get();\n      boolean sense = mySense.get();\n      int i = ThreadID.get();\n    for (int r = 0; r < logSize; r++) {\n      node[r][i].partner.flag[parity] = sense;\n      while (node[r][i].flag[parity] != sense) {}\n    }\n    if (parity == 1) {\n      mySense.set(!sense);\n    }\n    myParity.set(1 - parity);\n  }\n  \n  private static class Node {\n    boolean[] flag = {false, false};  // signal when done\n    Node partner;                     // partner node\n  }\n}\n\\end{lstlisting}\n\\hfill\n\n\\subsection{Experiment Description}\nThe test consists in having a shared array of 8 slots, one per thread\nto be created. Then the test has two phases, both using the barrier;\nthe first phase is to wait all threads to write on their assigned slot\non the array and the second stage is to allow all threads to validate\nthat the rest actually finished their writing. These two phases are\nrepeated 8 times. 
\n\n\\subsection{Observations and Interpretations}\nThe test hangs time to time, on the 24 cores machine (with 2 cores\nmachine, issue was more difficult to reproduce);  the hanging threads\nhave an stack like this: \\\\  \n\n\\begin{verbatim}\n\"Thread-5\" prio=10 tid=0x00007f1bc0174800 nid=0x8414 runnable [0x00007f1b6bb59000]\njava.lang.Thread.State: RUNNABLE\nat barrier.DisBarrier.await(DisBarrier.java:59)\nat barrier.DisBarrierTest$TestThread.run(DisBarrierTest.java:79)\n\\end{verbatim}\n\\hfill\n\nThe stack above refers to the following wait cycle from the \\C{await} method: \\\\\n\n\\begin{lstlisting}[style=nonumbers]\n      while (node[r][i].flag[parity] != sense) {}\n\\end{lstlisting}\n\\hfill\n\nThe problem then implies that the thread in charge of writing to the\nflag of the hanging thread, never made the write ... well, that is the\nappearance but in reality what happened is that the flag variable was\nnot declared as volatile, therefore there is no guarantee about when\nthat new value will be propagated from caches to main memory, and from\nthere to the hanging thread core's cache. Thus, even when the write\nwas actually done the other thread could not see it: \\\\ \n\nAs with other tests from chapter 2, we can solve this issue simply by\ndeclaring the flag as volatile: \\\\\n\n\\begin{lstlisting}[style=nonumbers]\nvolatile boolean[] flag = {false, false}\n\\end{lstlisting}\n\\hfill\n\nWith above fix the test no longer hangs on both 2 and 24 core machines. \n\n", "meta": {"hexsha": "0412206fef561dc4d023d0933e519d0692499c9a", "size": 4717, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "dario/Multiprocessor/DisBarrierTest.tex", "max_stars_repo_name": "rzavalet/multiprocessor", "max_stars_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dario/Multiprocessor/DisBarrierTest.tex", "max_issues_repo_name": "rzavalet/multiprocessor", "max_issues_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dario/Multiprocessor/DisBarrierTest.tex", "max_forks_repo_name": "rzavalet/multiprocessor", "max_forks_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.4661654135, "max_line_length": 82, "alphanum_fraction": 0.7048971804, "num_tokens": 1217, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.6926419894793246, "lm_q1q2_score": 0.5597012824282002}}
{"text": "\\label{s:appendix:interaction}\nThe BCS theory of superconductivity is often introduced as\na consequence of the effective electron-electron interaction\narising from the underlying electron-phonon interaction.\nIn this appendix, we review general two-particle interactions,\nand derive the form of the electron-electron interaction\nwhen applied to the tight-binding model.\nThese results are the foundation for deriving the\nintrinsic TMD superconducting state.\n\n\\section{Electron-electron interaction}\n\nConsider a general spin-independent interaction potential\n$v \\of{\\vc{x}, \\vc{x}'}$ which depends on the positions\n$\\vc{x}$ and $\\vc{x}'$ of a pair of particles\n\\cite{ballentine1998quantum}.\nIn a multiparticle system of dimension $d$ with no self-interaction,\nthe interaction $V$ may be written as an additive pair operator\nin a given bases,\n$C_\u03b1^\u2020 \\Ket{0} = \\Ket{\u03b1} = \\Ket{f_\u03b1} \u2297 \\Ket{\u03c3_\u03b1}$,\nof fermionic Fock operators,\n\\begin{equation}\n  \\label{eq:interaction:general}\n  \\begin{aligned}\n  V\n  & = \\frac{1}{2} \u2211_{i \u2260 j} v \\of{\\vc{Q}_i, \\vc{Q}_j}, \\\\\n  & = \\frac{1}{4} \\sumInt\n    \\Braket{\u03b1 \u03b2 | v | \u03b3 \u03b4} C_\u03b1^\u2020 C_\u03b2^\u2020 C_\u03b4 C_\u03b3, \\\\ % chktex 19\n  & = \\frac{1}{2} \\sumInt\n    v_{\u03b1 \u03b2, \u03b3 \u03b4} C_\u03b1^\u2020 C_\u03b2^\u2020 C_\u03b4 C_\u03b3. % chktex 19\n  \\end{aligned}\n\\end{equation}\nHere, $\\Ket{\u03b1 \u03b2} = \\left( \\Ket{\u03b1} \\Ket{\u03b2}\n- \\Ket{\u03b2} \\Ket{\u03b1} \\right) / \\sqrt{2}$,\nand we introduce the symbol\n\\begin{equation}\n  \\begin{aligned}\n    v_{\u03b1 \u03b2, \u03b3 \u03b4} % chktex 19\n    & = \\left( \\Bra{f_\u03b1} \\Bra{f_\u03b2} \\right) v\n        \\left( \\Ket{f_\u03b3} \\Ket{f_\u03b4} \\right) % chktex 19\n        \\Braket{\u03c3_\u03b1 | \u03c3_\u03b3} \\Braket{\u03c3_\u03b2 | \u03c3_\u03b4}, \\\\ % chktex 19\n    & = \u03b4_{\u03c3_\u03b1 \u03c3_\u03b3} \u03b4_{\u03c3_\u03b2 \u03c3_\u03b4} % chktex 19\n        \u222b \\cc{f}_\u03b1 \\of{\\vc{x}} \\cc{f}_\u03b2 \\of{\\vc{x}'} v % chktex 19\n        \\of{\\vc{x}, \\vc{x}'} f_\u03b3 \\of{\\vc{x}} f_\u03b4 \\of{\\vc{x}'} % chktex 19\n        \\dif^d \\vc{x} \\dif^d \\vc{x}',\n  \\end{aligned}\n\\end{equation}\nso that $\\Braket{\u03b1 \u03b2 | v | \u03b3 \u03b4} = v_{\u03b1 \u03b2, \u03b3 \u03b4} - v_{\u03b1 \u03b2, \u03b4 \u03b3}$. 
% chktex 19\nFor a given interaction, the explicit form of $V$\ndepends on the choice of basis states.\nFor illustration, we consider\nthe position and momentum space descriptions below.\n\nIf one chooses the continuous basis of position eigenstates,\n$\u03a8_\u03c3^\u2020 \\of{\\vc{x}} \\Ket{0} = \\Ket{\\vc{x}} \u2297 \\Ket{\u03c3}$,\nthen \\cref{eq:interaction:general} takes the form of an integral over\ndensity-density interactions,\n\\begin{equation}\n  \\label{eq:interaction:position-space}\n  V\n  = \\frac{1}{2}\n    \u222b v \\of{\\vc{x}, \\vc{x}'}\n    \\normalorder{\u03c1 \\of{\\vc{x}} \u03c1 \\of{\\vc{x}'}}\n    \\dif^d \\vc{x} \\dif^d \\vc{x}',\n\\end{equation}\nwhere the colon denotes normal ordering and\n\\begin{equation}\n  \u03c1 \\of{\\vc{x}} = \u2211_\u03c3 \u03a8_\u03c3^\u2020 \\of{\\vc{x}} \u03a8_\u03c3 \\of{\\vc{x}}.\n\\end{equation}\n\nFor interactions $v \\of{\\vc{r}}$\nwhich depend only on the relative separation,\n$\\vc{r} = \\vc{x}' - \\vc{x}$,\nif one chooses the countable basis\nof box normalized momentum eigenstates in a volume $\u03a9$\nwith periodic boundary conditions,\n$c_{\\vc{p} \u03c3}^\u2020 \\Ket{0} = \\Ket{\\vc{p}} \u2297 \\Ket{\u03c3}$,\nwith\n$\\Braket{\\vc{x} | \\vc{p}} = \u03a9^{-1 / 2} e^{i \\vc{p} \u00b7 \\vc{x}}$,\nthen\n\\begin{equation}\n  \\label{eq:interaction:momentum-space}\n  V\n  = \\frac{1}{2}\n    \u2211_{\\vc{p}, \\vc{p'}, \\vc{q}}\n    \u2211_{\u03c3, \u03c3'}\n    \\tilde{v}_{\\vc{q}}\n    c_{\\vc{p} + \\vc{q} \u03c3}^\u2020 c_{\\vc{p}' - \\vc{q} \u03c3'}^\u2020\n    c_{\\vc{p}' \u03c3'} c_{\\vc{p} \u03c3},\n\\end{equation}\nwhere\n\\begin{equation}\n  \\label{eq:interaction:fourier:coefficients}\n  \\tilde{v}_{\\vc{q}}\n  = \\frac{1}{\u03a9} \u222b v \\of{\\vc{r}} e^{-i \\vc{q} \u00b7 \\vc{r}} \\dif^d \\vc{r}.\n\\end{equation}\n\n\\subsection{Interaction in the tight-binding model}\n\nWe now apply the above to the basis introduced in the tight-binding model.\nIntroduce a Wannier representation,\n\\begin{equation}\n  \\Ket{\u03d5_{\u03bd \\vK}}\n  = \\frac{1}{\\sqrt{N}}\n    \u2211_{\\vR} e^{i \\vK \u22c5 \\vR}\n    \\Ket{\u03d5_{\u03bd \\vR}},\n\\end{equation}\nwhere for convenience we write $\\vR$ for $\\vRn{n}$.\nWe adopt the following section-specific notation for the Fock operators,\n\\begin{subequations}\n  \\begin{alignat}{2}\n    a_{\\vK \u03bd \u03c3}^\u2020 \\Ket{0}\n    &  = \\Ket{\u03d5_{\u03bd \\vK \u03c3}}\n    && = \\Ket{\u03d5_{\u03bd \\vK}} \u2297 \\Ket{\u03c3}, \\\\\n    a_{\\vR \u03bd \u03c3}^\u2020 \\Ket{0}\n    &  = \\Ket{\u03d5_{\u03bd \\vR \u03c3}}\n    && = \\Ket{\u03d5_{\u03bd \\vR}} \u2297 \\Ket{\u03c3},\n  \\end{alignat}\n\\end{subequations}\nwhich explicitly separates out the spin index.\nSince\n\\begin{equation}\n  \\Braket{\\vc{x} | \u03d5_{\u03bd \\vR}}\n  = \\Braket{\\vc{x} - \\vR | \u03c6_\u03bd}\n  = \u03c6_\u03bd \\of{\\vc{x} - \\vR},\n\\end{equation}\nby keeping only on-center, like-orbital terms,\nthe interaction may by simplified to the approximate form\n\\begin{equation}\n  V\n  \u22cd \\frac{1}{2}\n    \u2211_{\\vR, \\vR'}\n    \u2211_{\u03bd, \u03bd'}\n    v_{\\vR \\vR'}^{\u03bd \u03bd'}\n    \\normalorder{n_{\\vR \u03bd} n_{\\vR' \u03bd'}},\n\\end{equation}\nwith the number operators,\n\\begin{equation}\n  n_{\u03bd \\vR}\n  = \u2211_\u03c3 a_{\\vR \u03bd \u03c3}^\u2020 a_{\\vR \u03bd \u03c3},\n\\end{equation}\nand interaction integral,\n\\begin{equation}\n  v_{\\vR \\vR'}^{\u03bd \u03bd'}\n  = \u222b v \\of{\\vc{x}, \\vc{x}'}\n    \\abs{\u03c6_\u03bd \\of{\\vc{x} - \\vR}}^2\n    
\\abs{\u03c6_{\u03bd'} \\of{\\vc{x}' - \\vR'}}^2\n    \\dif^d \\vc{x} \\dif^d \\vc{x}'.\n\\end{equation}\nThis sum over density-density interactions mimics\n\\cref{eq:interaction:position-space}.\n\nFurther, assuming that the interaction\ndepends only on the relative separation,\n$v_{\\vR \\vR'}^{\u03bd \u03bd'} = v_{\u03bd \u03bd'} \\of{\\vR' - \\vR}$,\nwe may introduce the Fourier expansion\n\\begin{equation}\n  \\tilde{v}^{\u03bd \u03bd'}_{\\vc{q}}\n  = \u2211_{\\vR} v_{\u03bd \u03bd'} \\of{\\vR} e^{i \\vc{q} \u00b7 \\vR},\n\\end{equation}\nand switch back to the Bloch representation,\n\\begin{multline}\n  \\label{eq:interaction:tight-binding:momentum}\n  V\n  = \\frac{1}{2} {\\left( \\frac{\u03a9}{N} \\right)}^2\n    \u2211_{\u03bd, \u03bd'}\n    \u2211_{\u03c3, \u03c3'}\n    \u2211_{\\vK', \\bar{\\vK}}\n    \u2211_{\\vK, \\bar{\\vK}'}\n    \u2211_{\\vc{q}}\n    \\tilde{v}^{\u03bd \u03bd'}_{\\vc{q}} \\\\\n    \u03b4 \\left[ \\vc{q} - \\left(\\vK - \\bar{\\vK} \\right) \\right] % chktex 19\n    \u03b4 \\left[ \\vc{q} - \\left(\\bar{\\vK}' - \\vK' \\right) \\right] % chktex 19\n    a_{\\bar{\\vK} \u03bd \u03c3}^\u2020 a_{\\bar{\\vK}' \u03bd' \u03c3'}^\u2020\n    a_{\\vK' \u03bd' \u03c3'} a_{\\vK \u03bd \u03c3}.\n\\end{multline}\nAs a convenience, we have written this\nas a sum over a countable set of momentum states,\nhowever any sum over momentum may be converted to an integral\naccording to the substitution\n$\u2211_{\\vK} \u2192 \\left( N / \u03a9 \\right) \u222b \\dif^d \\vK$.\nUsing this, we may integrate out the $\u03b4$-functions % chktex 19\nand obtain a form that mimics\n\\cref{eq:interaction:momentum-space},\n\\begin{equation}\n  \\label{eq:interaction:tight-binding:final}\n  V\n  = \\frac{1}{2}\n    \u2211_{\\vK, \\vK', \\vc{q}}\n    \u2211_{\u03bd, \u03bd'}\n    \u2211_{\u03c3, \u03c3'}\n    \\tilde{v}^{\u03bd \u03bd'}_{\\vc{q}}\n    a_{\\vK + \\vc{q} \u03bd \u03c3}^\u2020 a_{\\vc{\\vK' - \\vc{q}} \u03bd' \u03c3'}^\u2020\n    a_{\\vK' \u03bd' \u03c3'} a_{\\vK \u03bd \u03c3}.\n\\end{equation}\n", "meta": {"hexsha": "4196ab43246add228e132146bbbe02adb89613ae", "size": 6356, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/_appendix-interaction.tex", "max_stars_repo_name": "razor-x/doctoral-thesis", "max_stars_repo_head_hexsha": "b48dd021d3b796537f2582967a790ca323b9f86d", "max_stars_repo_licenses": ["BSD-Source-Code"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-12-25T23:01:11.000Z", "max_stars_repo_stars_event_max_datetime": "2017-12-25T23:01:11.000Z", "max_issues_repo_path": "tex/_appendix-interaction.tex", "max_issues_repo_name": "evansosenko/doctoral-thesis", "max_issues_repo_head_hexsha": "b48dd021d3b796537f2582967a790ca323b9f86d", "max_issues_repo_licenses": ["BSD-Source-Code"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/_appendix-interaction.tex", "max_forks_repo_name": "evansosenko/doctoral-thesis", "max_forks_repo_head_hexsha": "b48dd021d3b796537f2582967a790ca323b9f86d", "max_forks_repo_licenses": ["BSD-Source-Code"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.4285714286, "max_line_length": 75, "alphanum_fraction": 0.6137507867, "num_tokens": 2603, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5597012773013462}}
{"text": "\\label{ch:pert}\n\n\\section{Predicting interface states}\n\nThe \\maestro\\ hyperbolic equations come in two forms: advective and\nconservative.  The procedure for predicting interface states differs\nslightly depending on which form we are dealing with.\n\n\\subsection{Advective form}\n\nMost of the time, we are dealing with equations in advective form.\nHere, a scalar, $\\phi$, obeys:\n\\begin{equation}\n\\frac{\\partial \\phi}{\\partial t} = -\\Ub \\cdot \\nabla \\phi + f\n\\end{equation}\nwhere $f$ is the force.  This is the form that the perturbation\nequations take, as well as the equation for $X_k$ itself.\n\nA piecewise linear prediction of $\\phi$ to the interface \nwould appear as:\n\\begin{eqnarray}\n\\phi_{i+1/2,j}^{n+1/2} &=& \\phi_{i,j}^n \n    + \\left . \\frac{\\Delta x}{2} \\frac{\\partial \\phi}{\\partial x} \\right |_{i,j}\n    + \\left . \\frac{\\Delta t}{2} \\frac{\\partial \\phi}{\\partial t} \\right |_{i,j} \\\\\n &=& \\phi_{i,j}^n \n    + \\left . \\frac{\\Delta x}{2} \\frac{\\partial \\phi}{\\partial x} \\right |_{i,j}\n    +  \\frac{\\Delta t}{2} \\left ( -u \\frac{\\partial \\phi}{\\partial x} \n                                         -v \\frac{\\partial \\phi}{\\partial y} + f \\right ) \\\\\n &=& \\phi_{i,j}^n + \\frac{\\Delta x}{2} \\left ( 1 - \\frac{\\Delta t}{\\Delta x} u \\right ) \n           \\frac{\\partial \\phi}{\\partial x} \n    \\underbrace{- \\frac{\\Delta t}{2} v \\frac{\\partial \\phi}{\\partial y}}_{\\text{``transverse~term''}} + \\frac{\\Delta t}{2} f\n\\end{eqnarray}\n(see the Godunov notes section for more details).  Here, the\n``transverse term'' is accounted for in {\\tt make\\_edge\\_scal}.  Any\nadditional forces should be added to $f$.  For the perturbation form\nof equations, we add additional advection-like terms to $f$ by calling\n{\\tt modify\\_scal\\_force}.  This will be noted below.\n\n\\subsection{Conservative form}\n\nA conservative equation takes the form:\n\\begin{equation}\n\\frac{\\partial \\phi}{\\partial t} = -\\nabla \\cdot ( \\phi \\Ub) + f\n\\end{equation}\n%\nNow a piecewise linear prediction of $\\phi$ to the interface is \n\\begin{eqnarray}\n\\phi_{i+1/2,j}^{n+1/2} &=& \\phi_{i,j}^n \n    + \\left . \\frac{\\Delta x}{2} \\frac{\\partial \\phi}{\\partial x} \\right |_{i,j}\n    + \\left . \\frac{\\Delta t}{2} \\frac{\\partial \\phi}{\\partial t} \\right |_{i,j} \\\\\n &=& \\phi_{i,j}^n \n    + \\left . \\frac{\\Delta x}{2} \\frac{\\partial \\phi}{\\partial x} \\right |_{i,j}\n    +  \\frac{\\Delta t}{2} \\left ( -\\frac{\\partial (\\phi u)}{\\partial x} \n                                  -\\frac{\\partial (\\phi v)}{\\partial y} + f \\right ) \\\\\n &=& \\phi_{i,j}^n + \\frac{\\Delta x}{2} \\left ( 1 - \\frac{\\Delta t}{\\Delta x} u \\right ) \n           \\frac{\\partial \\phi}{\\partial x} \n    \\underbrace{- \\frac{\\Delta t}{2} \\phi \\frac{\\partial u}{\\partial x} }_{\\text{``non-advective~term''}}\n                \\underbrace{- \\frac{\\Delta t}{2} \\frac{\\partial (\\phi v)}{\\partial y}}_{\\text{``transverse~term''}} + \\frac{\\Delta t}{2} f\n\\end{eqnarray}\nHere the ``transverse term'' is now in conservative form, and an additional\nterm, the non-advective portion of the\n$x$-flux (for the $x$-prediction) appears.  
Both of the underbraced terms are\naccounted for in {\\tt make\\_edge\\_scal} automatically when we call it\nwith {\\tt is\\_conservative = .true.}.\n\n\n%-----------------------------------------------------------------------------\n% density\n%-----------------------------------------------------------------------------\n\\section{Density Evolution}\n\n\\label{sec:pred:density}\n\n\\subsection{Basic equations}\n\nThe full density evolution equation is\n\\begin{eqnarray}\n\\frac{\\partial\\rho}{\\partial t} &=& -\\nabla\\cdot(\\rho\\Ub) \\nonumber \\\\\n&=& -\\Ub\\cdot\\nabla\\rho - \\rho\\nabla\\cdot\\Ub \\, . \\label{rho equation}\n\\end{eqnarray}\nThe species are evolved according to\n\\begin{eqnarray}\n\\frac{\\partial(\\rho X_k)}{\\partial t} &=& -\\nabla\\cdot(\\rho\\Ub X_k) + \\rho \\omegadot_k \\nonumber \\\\\n&=& -\\Ub\\cdot\\nabla(\\rho X_k) - \\rho X_k \\nabla\\cdot\\Ub + \\rho \\omegadot_k \\, . \\label{species equation}\n\\end{eqnarray}\nIn practice, only the species evolution equation is evolved, and the\ntotal density is set as\n\\begin{equation}\n\\rho = \\sum_k (\\rho X_k)\n\\end{equation}\nTo advance $(\\rho X_k)$ to the next timestep, we need to predict a\ntime-centered, interface state.  Algebraically, there are multiple\npaths to get this interface state---we can predict $(\\rho X)$ to the\nedges as a single quantity, or predict $\\rho$ and $X$ separately\n(either in full or perturbation form).  In the notes below, we use the\nsubscript `{\\tt edge}' to indicate what quantity was {\\em predicted} to the\nedges.  In \\maestro, the different methods of computing $(\\rho X)$ on\nedges are controlled by the \\runparam{species\\_pred\\_type} parameter.  The\nquantities predicted to edges and the \nresulting edge state are shown in the table~\\ref{table:pred:species}\n\n\\begin{table}[h]\n\\centering\n\\caption{Summary of species edge state construction}\n\\label{table:pred:species}\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{l|c|c}\n\\hline\n\\hline\n{\\tt species\\_pred\\_type} &   {quantities predicted} & {$(\\rho X_k)$ edge state} \\\\[-5pt]\n & {in {\\tt make\\_edge\\_scal}} & \\\\\n\\hline \n1 / {\\tt predict\\_rhoprime\\_and\\_X}  &  \n  $\\rho'_\\mathrm{edge}$, $(X_k)_\\mathrm{edge}$ &\n  $\\left(\\rho_0^{n+\\myhalf,{\\rm avg}}  \n  + \\rho'_\\mathrm{edge} \\right)(X_k)_\\mathrm{edge}$ \\\\\n2 / {\\tt predict\\_rhoX}  &  \n  $\\sum(\\rho X_k)_\\mathrm{edge}$, $(\\rho X_k)_\\mathrm{edge}$ &\n  $(\\rho X_k)_\\mathrm{edge}$ \\\\\n3 / {\\tt predict\\_rho\\_and\\_X}  &  \n  $\\rho_\\mathrm{edge}$, $(X_k)_\\mathrm{edge}$ &\n  $\\rho_\\mathrm{edge} (X_k)_\\mathrm{edge}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\nWe note the labels {\\tt predict\\_rhoprime\\_and\\_X}, {\\tt predict\\_rhoX}, and\n{\\tt predict\\_rho\\_and\\_X} are provided by the {\\tt pred\\_parameters}\nmodule.\n\n\n\n\n\\subsection{Method 1: {\\tt species\\_pred\\_type} = {\\tt predict\\_rhoprime\\_and\\_X}}\n\nHere we wish to construct $(\\rho_0^{n+\\myhalf,{\\rm avg}}        \n+ \\rho'_\\mathrm{edge})(X_k)_\\mathrm{edge}$.\n\nWe predict both $\\rho'$ and $\\rho_0$ to edges separately and later use them to \nreconstruct $\\rho$ at edges.  
The base state density evolution equation is\n\\begin{equation}\n\\frac{\\partial\\rho_0}{\\partial t} = -\\nabla\\cdot(\\rho_0 w_0 \\eb_r) = \n-w_0\\frac{\\partial\\rho_0}{\\partial r} \n\\underbrace{-\\rho_0\\frac{\\partial w_0}{\\partial r}}_{``\\rho_0 ~ \\text{force\"}}.\n\\label{rho0 equation}\n\\end{equation}\nSubtract (\\ref{rho0 equation}) from (\\ref{rho equation}) and rearrange\nterms, noting that $\\Ub = \\Ubt + w_o\\eb_r$, to obtain the\nperturbational density equation,\n\\begin{equation}\n\\frac{\\partial\\rho'}{\\partial t} = -\\Ub\\cdot\\nabla\\rho' \\underbrace{- \\rho'\\nabla\\cdot\\Ub \n- \\nabla\\cdot(\\rho_0\\Ubt)}_{\\rho' ~ \\text{force}} \\, .\n\\label{rhoprime equation}\n\\end{equation}\n\nWe also need $X_k$ at the edges.  Here, we subtract $X_k \\times$\nEq.~\\ref{rho equation} from Eq.~\\ref{species equation} to obtain\n\\begin{equation}\n\\frac{\\partial X_k}{\\partial t} = -\\Ub \\cdot \\nabla X_k + \\omegadot_k\n\\end{equation}\nWhen using Strang-splitting, we ignore the $\\omegadot_k$ source terms, and\nthen the species equation is a pure advection equation with no force.\n\n\\subsubsection{Predicting $\\rho'$ at edges}\nWe define $\\rho' = \\rho^{(1)} - \\rho_0^n$.  Then we predict $\\rho'$ to\nedges using {\\tt make\\_edge\\_scal} in {\\tt density\\_advance} and the\nunderbraced term in Eq.~\\ref{rhoprime equation} as the forcing.  This\nforce is computed in {\\tt modify\\_scal\\_force}.  This prediction is\ndone in advective form.\n\n\\subsubsection{Predicting $\\rho_0$ at edges}\\label{Predicting rho0 at edges}\nThere are two ways to predict $\\rho_0$ at edges.\n\\begin{enumerate}\n\n\\item We call {\\tt make\\_edge\\_state\\_1d} using the underbraced term\nin (\\ref{rho0 equation}) as the forcing.  This gives us\n$\\rho_0^{n+\\myhalf,{\\rm pred}}$.  This term is used to advect $\\rho_0$\nin {\\bf Advect Base Density}.  In plane-parallel geometries, we also use\n$\\rho_0^{n+\\myhalf,{\\rm pred}}$ to compute $\\etarho$, which will be used \nto compute $\\psi$.\n\n\\item We define $\\rho_0^{n+\\myhalf,{\\rm avg}} = (\\rho_0^n +\n\\rho_0^{(2)})/2$.  We compute $\\rho_0^{(2)}$ from $\\rho_0^n$ using\n{\\bf Advect Base Density}, which advances equation (\\ref{rho0 equation})\nthrough $\\Delta t$ in time.  The $(2)$ in the superscript indicates\nthat we have not called {\\bf Correct Base} yet, which computes\n$\\rho_0^{n+1}$ from $\\rho_0^{(2)}$.  We use $\\rho_0^{(2)}$ rather than\n$\\rho_0^{n+1}$ to construct $\\rho_0^{n+\\myhalf,{\\rm avg}}$ since $\\rho_0^{n+1}$\nis not available yet.  $\\rho_0^{n+\\myhalf,{\\rm avg}}$ is used to construct \n$\\rho$ at edges from $\\rho'$ at edges, and\nthis $\\rho$ at edges is used to compute fluxes for $\\rho X_k$.\n\\end{enumerate}\n\nWe note that in essence these choices reflect a hyperbolic (1)\nvs.\\ elliptic (2) approach.  In \\maestro, if we setup a problem with\n$\\rho = \\rho_0$ initially, and enforce a constraint $\\nabla \\cdot\n(\\rho_0 \\Ub) = 0$ (i.e.\\ the anelastic constraint), then analytically,\nwe should never generate a $\\rho'$.  
To realize this behavior\nnumerically, we use $\\rho_0^{n+\\myhalf,{\\rm avg}}$ in the prediction\nof $(\\rho X_k)$ on the edges to be consistent with the use of the\naverage of $\\rho$ to the interfaces in the projection step at the end\nof the algorithm.\n\n\\subsubsection{Computing $\\rho$ at edges}\\label{Computing rho at edges}\nFor the non-radial edges, we directly add $\\rho_0^{n+\\myhalf,{\\rm avg}}$\nto $\\rho'$ since $\\rho_0^{n+\\myhalf,{\\rm avg}}$ is a cell-centered\nquantity.  For the radial edges, we interpolate to obtain\n$\\rho_0^{n+\\myhalf,{\\rm avg}}$ at radial edges before adding it to\n$\\rho'$.\n\n\\subsubsection{Predicting $X_k$ at edges}\n\\label{sec:pert:predict_X}\nPredicting $X_k$ is straightforward.  We convert the cell-centered\n$(\\rho X_k)$ to $X_k$ by dividing by $\\rho$ in each zone and then we\njust call {\\tt make\\_edge\\_scal} in {\\tt density\\_advance} on $X_k$.\nThe force seen by {\\tt make\\_edge\\_scal} is 0.  The prediction is\ndone in advective form.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 2: {\\tt species\\_pred\\_type} = {\\tt predict\\_rhoX}}\n\nHere we wish to construct $(\\rho X_k)_\\mathrm{edge}$ by predicting\n$(\\rho X_k)$ to the edges as a single quantity.  We recall\nEq.~\\ref{species equation}:\n\\begin{equation}\n\\frac{\\partial(\\rho X_k)}{\\partial t} =\n  -\\nabla \\cdot (\\rho \\Ub X_k) + \\rho \\omegadot_k \\, . \\nonumber\n\\end{equation}\nThe edge state is created by calling {\\tt make\\_edge\\_scal} in {\\tt\n  density\\_advance} with {\\tt is\\_conservative = .true.}.\nWe do not consider the $\\rho \\omegadot_k$ term in the forcing when\nStrang-splitting.\n\n\nWe note that even though it is not needed here, we still compute\n$\\rho_\\mathrm{edge}=\\sum(\\rho X_k)_\\mathrm{edge}$ at the edges since certain \nenthalpy formulations need it.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 3: {\\tt species\\_pred\\_type} = {\\tt predict\\_rho\\_and\\_X}}\n\nHere we wish to construct $\\rho_\\mathrm{edge} (X_k)_\\mathrm{edge}$\nby predicting $\\rho$ and $X_k$ to the edges separately.\n\nPredicting $X_k$ to the edges proceeds exactly as described in\n\\S~\\ref{sec:pert:predict_X}.  \n\nPredicting the full $\\rho$ begins with Eq.~\\ref{rho equation}:\n\\begin{equation}\n\\frac{\\partial\\rho}{\\partial t} \n= -\\Ub\\cdot\\nabla\\rho \\, \\underbrace{- \\rho\\nabla\\cdot\\Ub}_{``\\rho~\\text{force''}} \\, . \\label{rho equation labeled}\n\\end{equation}\nUsing this, $\\rho$ is predicted to the edges using {\\tt\n  make\\_edge\\_scal} in {\\tt density\\_advance}, with the underbraced\nforce computed in {\\tt modify\\_scal\\_force} with {\\tt fullform =\n  .true.}.  
\\MarginPar{we need to switch this over to doing the conservative prediction}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Advancing $\\rho X_k$}\\label{Advancing rhoX_k}\nThe evolution equation for $\\rho X_k$, ignoring the reaction terms\nthat were already accounted for in {\\tt react\\_state}, and the\nassociated discretization are: \\\\\n\n\\noindent {\\tt species\\_pred\\_type = predict\\_rhoprime\\_and\\_X}:\n\\begin{equation}\n\\frac{\\partial\\rho X_k}{\\partial t} = \n-\\nabla\\cdot\\left\\{\\left[\\left({\\rho_0}^{n+\\myhalf,{\\rm avg}}\n+ \\rho'_\\mathrm{edge} \\right)(X_k)_\\mathrm{edge} \\right](\\Ubt+w_0\\eb_r)\\right\\}.\n\\end{equation} \\\\\n\n\n\\noindent {\\tt species\\_pred\\_type = predict\\_rhoX}:\n\\begin{equation}\n\\frac{\\partial\\rho X_k}{\\partial t} = \n-\\nabla\\cdot\\left\\{\\left[\\left(\\rho X_k \\right)_\\mathrm{edge} \\right](\\Ubt+w_0\\eb_r)\\right\\}.\n\\end{equation} \\\\\n\n\n\\noindent {\\tt species\\_pred\\_type = predict\\_rho\\_and\\_X}:\n\\begin{equation}\n\\frac{\\partial\\rho X_k}{\\partial t} = \n-\\nabla\\cdot\\left\\{\\left[\\rho_\\mathrm{edge} (X_k)_\\mathrm{edge} \\right](\\Ubt+w_0\\eb_r)\\right\\}.\n\\end{equation}\n\n\n%-----------------------------------------------------------------------------\n% energy\n%-----------------------------------------------------------------------------\n\\section{Energy Evolution}\n\n\\label{sec:pred:enthalpy}\n\n\\subsection{Basic equations}\n\n\\maestro\\ solves an enthalpy equation.  \nThe full enthalpy equation is\n\\begin{eqnarray}\n\\frac{\\partial(\\rho h)}{\\partial t} &=& -\\nabla\\cdot(\\rho h \\Ub) + \\frac{Dp_0}{Dt} \n+ \\nabla\\cdot \\kth \\nabla T + \\rho H_{\\rm nuc} + \\rho H_{\\rm ext} \\nonumber \\\\\n&=& \\underbrace{-\\Ub\\cdot\\nabla(\\rho h) - \\rho h\\nabla\\cdot\\Ub}_{-\\nabla\\cdot(\\rho h\\Ub)} \n+ \\underbrace{\\psi + (\\Ubt \\cdot \\er) \\frac{\\partial p_0}{\\partial r}}_{\\frac{Dp_0}{Dt}} \n+ \\nabla\\cdot\\kth\\nabla T + \\rho H_{\\rm nuc} + \\rho H_{\\rm ext}.\n\\end{eqnarray}\nDue to Strang-splitting of the reactions, the call to {\\tt\n  react\\_state} has already been made.  
Hence, the goal is to compute\nan edge state enthalpy starting from $(\\rho h)^{(1)}$ using an\nenthalpy equation that does not include the $\\rho H_{\\rm nuc}$ and\n$\\rho H_{\\rm ext}$ terms, which were already accounted for in {\\tt\n  react\\_state}, so our equation becomes\n\\begin{equation}\n\\frac{\\partial(\\rho h)}{\\partial t} = -\\Ub\\cdot\\nabla(\\rho h) - \\rho h\\nabla\\cdot\\Ub \n+ \\underbrace{\\psi + (\\Ubt \\cdot \\er) \\frac{\\partial p_0}{\\partial r} + \\nabla\\cdot\\kth\\nabla T}_{``(\\rho h) ~ \\text{force''}} \\, . \\label{rhoh equation}\n\\end{equation}\nWe define the base state enthalpy evolution equation as\n\\begin{eqnarray}\n\\frac{\\partial(\\rho h)_0}{\\partial t} &=& -\\nabla\\cdot[(\\rho h)_0 w_0\\eb_r] \n+ \\frac{D_0p_0}{Dt} \\nonumber \\\\\n&=& -w_0\\frac{\\partial(\\rho h)_0}{\\partial r} \n- \\underbrace{(\\rho h)_0\\frac{\\partial w_0}{\\partial r}+ \\psi}_{``(\\rho h)_0 ~ \\text{force''}}\n\\enskip .\\label{rhoh0 equation}\n\\end{eqnarray}\n\n\\subsubsection{Perturbational enthalpy formulation}\n\nSubtracting (\\ref{rhoh0 equation}) from (\\ref{rhoh equation}) and\nrearranging terms gives the perturbational enthalpy equation\n\\begin{eqnarray}\n\\frac{\\partial(\\rho h)'}{\\partial t} &=& -\\nabla\\cdot[(\\rho h)'\\Ub] \n- \\nabla\\cdot[(\\rho h)_0\\Ubt] + (\\Ubt \\cdot \\er)\\frac{\\partial p_0}{\\partial r} \n+ \\nabla\\cdot\\kth\\nabla T\\nonumber \\\\\n&=& -\\Ub\\cdot\\nabla(\\rho h)' \\underbrace{- (\\rho h)'\\nabla\\cdot\\Ub \n- \\nabla\\cdot[(\\rho h)_0\\Ubt] + (\\Ubt \\cdot \\er)\\frac{\\partial p_0}{\\partial r}\n+ \\nabla\\cdot\\kth\\nabla T}_{``(\\rho h)' ~ \\text{force''}} \\, . \\label{rhohprime equation}\n\\end{eqnarray}\n\n\\subsubsection{Temperature formulation}\n\nAlternatively, we can consider a temperature evolution equation, derived\nfrom enthalpy, as:\n\\begin{equation}\n\\frac{\\partial T}{\\partial t} = -\\Ub\\cdot\\nabla T\n+ \\frac{1}{\\rho c_p}\\left\\{(1-\\rho h_p)\\left[\\psi\n+ (\\Ubt \\cdot \\er )\\frac{\\partial p_0}{\\partial r}\\right] \n+ \\nabla \\cdot \\kth \\nabla T\n- \\sum_k\\rho\\xi_k\\omegadot_k \n+ \\rho H_{\\rm nuc} + \\rho H_{\\rm ext}\\right\\}.\n\\end{equation}\nAgain, we neglect the reaction terms, since those are handled during\nthe reaction step, so we can write this as:\n\\begin{equation}\n\\frac{\\partial T}{\\partial t} = -\\Ub\\cdot\\nabla T\n\\underbrace{\n+ \\frac{1}{\\rho c_p}\\left\\{(1-\\rho h_p)\\left[\\psi\n+ (\\Ubt \\cdot \\er )\\frac{\\partial p_0}{\\partial r}\\right] \n+ \\nabla \\cdot \\kth \\nabla T \\right \\} }_{``T~\\text{force''}} \\, .\n\\label{T equation labeled}\n\\end{equation}\n\n\\subsubsection{Pure enthalpy formulation}\n\nA final alternative is to consider an evolution equation for $h$\nalone.  This can be derived by expanding the derivative of $(\\rho h)$\nin Eq.~\\ref{rhoh equation} and subtracting off $h \\times$ the\ncontinuity equation (Eq.~\\ref{rho equation}):\n\\begin{equation}\n\\frac{\\partial h}{\\partial t} = -\\Ub \\cdot \\nabla h \n\\underbrace{+ \\frac{1}{\\rho}\n\\left \\{ \\psi + (\\Ubt \\cdot \\er )\\frac{\\partial p_0}{\\partial r}\n+ \\nabla \\cdot \\kth \\nabla T \\right \\} }_{``h~\\text{force''}} \\, .\n\\label{h equation labeled}\n\\end{equation}\n\n\\subsubsection{Prediction requirements}\n\nTo update the enthalpy, we need to compute an interface state for\n$(\\rho h)$.  As with the species evolution, there are multiple\nquantities we can predict to the interfaces to form this state,\ncontrolled by \\runparam{enthalpy\\_pred\\_type}.  
A complexity of the\nenthalpy evolution is that the formation of this edge state will\ndepend on \\runparam{species\\_pred\\_type}.  \n\nThe general procedure for making the $(\\rho h)$ edge state is as follows:\n\\begin{enumerate}\n\\item predict $(\\rho h)$, $(\\rho h)'$, $h$, or $T$ to the edges (depending on {\\tt\n  enthalpy\\_pred\\_type}) using {\\tt make\\_edge\\_scal} and the forces\n  identified in the appropriate evolution equation\n  (Eqs.~\\ref{rhoh equation}, \\ref{rhohprime equation}, \\ref{T equation labeled}, or \\ref{h\n    equation labeled}, respectively).\n\n  The appropriate forces are summarized in table~\\ref{table:pred:hforce}.\n\n\\item if we predicted $T$, convert this predicted\n  edge state to an intermediate ``enthalpy'' state (the quotes\n  indicate that it may be perturbational or full enthalpy) by calling\n  the EOS.\n \n\\item construct the final enthalpy edge state in {\\tt mkflux}.  The\n  precise construction depends on what species and enthalpy quantities\n  are input to {\\tt mkflux}.\n\n\\end{enumerate}\n\nFinally, when \\maestro\\ is run with \\runparam{use\\_tfromp} {\\tt = T}, the\ntemperature is derived from the density, base state pressure ($p_0$),\nand $X_k$.  When run with reactions or external heating, {\\tt\n  react\\_state} updates the temperature after the reaction/heating\nterm is computed.  In {\\tt use\\_tfromp = T} mode, the temperature will\nnot see the heat release, since the enthalpy does not feed in.  Only\nafter the hydro update does the temperature gain the effects of the\nheat release due to the adjustment of the density (which in turn sees\nit through the velocity field and $S$).  As a result, the {\\tt\n  enthalpy\\_pred\\_type}s that predict temperature to the interface\n({\\tt predict\\_T\\_then\\_rhohprime} and {\\tt predict\\_T\\_then\\_h}) will\nnot work.  
\\maestro\\ will abort if the code is run with this\ncombination of parameters.\n\nTable~\\ref{table:pred:hoverview}\ngives a summary\nof the {\\tt enthalpy\\_pred\\_type} behavior.\n\n\n\\begin{table*}[h]\n\\centering\n\\caption{Forcing term into {\\tt make\\_edge\\_scal}}\n\\label{table:pred:hforce}\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{l|c}\n\\hline\n\\hline\n{\\tt enthalpy\\_pred\\_type} &   {advective force} \\\\\n\\hline\n0 / {\\tt predict\\_rhoh}  $(\\rho h)$ & $ \\left [\\psi + (\\Ubt \\cdot \\er)\n  \\frac{\\partial p_0}{\\partial r} + \\nabla \\cdot \\kth \\nabla T \\right ]$ \\\\\n1 / {\\tt predict\\_rhohprime} $((\\rho h)^\\prime)$ &  \n $-(\\rho h)^\\prime \\; \\nabla \\cdot (\\Ubt+w_0 \\er) - \n \\nabla \\cdot (\\Ubt (\\rho h)_0) + (\\Ubt \\cdot \\er) \\frac{\\partial p_0}{\\partial r} + \\nabla \\cdot \\kth \\nabla T$ \\\\\n2 / {\\tt predict\\_h}  $(h)$ & $\\frac{1}{\\rho} \\left [\\psi + (\\Ubt \\cdot \\er)\n  \\frac{\\partial p_0}{\\partial r} + \\nabla \\cdot \\kth \\nabla T \\right ]$ \\\\\n3 / {\\tt predict\\_T\\_then\\_rhohprime} $(T)$ & $\\frac{1}{\\rho c_p} \\left \\{ (1 - \\rho h_p) \n   \\left [\\psi + (\\Ubt \\cdot \\er) \\frac{\\partial p_0}{\\partial r} \\right ] + \\nabla \\cdot \\kth \\nabla T \\right \\}$ \\\\\n4 / {\\tt predict\\_T\\_then\\_h}  $(T)$ & $\\frac{1}{\\rho c_p} \\left\\{ (1 - \\rho h_p) \\left [\\psi + (\\Ubt \\cdot \\er)\n\\frac{\\partial p_0}{\\partial r}\\right ] +  \\nabla \\cdot \\kth \\nabla T \\right\\}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{landscape}\n\\begin{table}[t]\n\\caption{Summary of enthalpy edge state construction}\n\\label{table:pred:hoverview}\n\\renewcommand{\\arraystretch}{1.5}\n{\\small\n\\centering\n\\begin{tabular}{l|l|c|c|c|c}\n\\hline\n\\hline\n{\\tt species\\_pred\\_type} &   \n{\\tt enthalpy\\_pred\\_type} &\n{cell-centered} &\n{intermediate} &\n{species quantity} &\n{final $(\\rho h)$} \\\\[-5pt]\n &\n &\n{quantity predicted} &\n{``enthalpy''} &\n{available in} &\n{edge state} \\\\[-5pt]\n &\n &\n{in {\\tt make\\_edge\\_scal}} &\n{edge state} &\n{\\tt mkflux} &\n \\\\\n\\hline \n1 / {\\tt predict\\_rhoprime\\_and\\_X}  & 0 / {\\tt predict\\_rhoh} &\n  $(\\rho h)$ & $(\\rho h)_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho'_\\mathrm{edge}$ & \n  $(\\rho h)_\\mathrm{edge}$ \\\\\n1 / {\\tt predict\\_rhoprime\\_and\\_X}  & 1 / {\\tt predict\\_rhohprime} &\n  $(\\rho h)'$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho'_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n1 / {\\tt predict\\_rhoprime\\_and\\_X}  & 2 / {\\tt predict\\_h} &\n  $h$ & $h_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho'_\\mathrm{edge}$ & \n  $\\left ( \\rho_0^{n+\\myhalf,{\\rm avg}} + \\rho'_\\mathrm{edge} \\right ) h_\\mathrm{edge}$ \\\\\n1 / {\\tt predict\\_rhoprime\\_and\\_X}  & 3 / {\\tt predict\\_T\\_then\\_rhohprime} &\n  $T$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho'_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n1 / {\\tt predict\\_rhoprime\\_and\\_X}  & 4 / {\\tt predict\\_T\\_then\\_h} &\n  $T$ & $h_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho'_\\mathrm{edge}$ & \n  $\\left ( \\rho_0^{n+\\myhalf,{\\rm avg}} + \\rho'_\\mathrm{edge} \\right ) h_\\mathrm{edge}$ \\\\\n\\hline\n\\hline\n2 / {\\tt predict\\_rhoX}  & 0 / {\\tt predict\\_rhoh} &\n  $(\\rho h)$ & $(\\rho h)_\\mathrm{edge}$ & \n  $(\\rho X)_\\mathrm{edge}$, $\\sum(\\rho 
X)_\\mathrm{edge}$ & \n  $(\\rho h)_\\mathrm{edge}$ \\\\\n2 / {\\tt predict\\_rhoX}  & 1 / {\\tt predict\\_rhohprime} &\n  $(\\rho h)'$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $(\\rho X)_\\mathrm{edge}$, $\\sum(\\rho X)_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n2 / {\\tt predict\\_rhoX}  & 2 / {\\tt predict\\_h} &\n  $h$ & $h_\\mathrm{edge}$ & \n  $(\\rho X)_\\mathrm{edge}$, $\\sum(\\rho X)_\\mathrm{edge}$ & \n  $\\sum(\\rho X)_\\mathrm{edge} h_\\mathrm{edge}$ \\\\\n2 / {\\tt predict\\_rhoX}  & 3 / {\\tt predict\\_T\\_then\\_rhohprime} &\n  $T$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $(\\rho X)_\\mathrm{edge}$, $\\sum(\\rho X)_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n2 / {\\tt predict\\_rhoX}  & 4 / {\\tt predict\\_T\\_then\\_h} &\n  $T$ & $h_\\mathrm{edge}$ & \n  $(\\rho X)_\\mathrm{edge}$, $\\sum(\\rho X)_\\mathrm{edge}$ & \n  $\\sum(\\rho X)_\\mathrm{edge} h_\\mathrm{edge}$ \\\\\n\\hline\n\\hline\n3 / {\\tt predict\\_rho\\_and\\_X}  & 0 / {\\tt predict\\_rhoh} &\n  $(\\rho h)$ & $(\\rho h)_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho_\\mathrm{edge}$ & \n  $(\\rho h)_\\mathrm{edge}$ \\\\\n3 / {\\tt predict\\_rho\\_and\\_X}  & 1 / {\\tt predict\\_rhohprime} &\n  $(\\rho h)'$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n3 / {\\tt predict\\_rho\\_and\\_X}  & 2 / {\\tt predict\\_h} &\n  $h$ & $h_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho_\\mathrm{edge}$ & \n  $\\rho_\\mathrm{edge} h_\\mathrm{edge}$ \\\\\n3 / {\\tt predict\\_rho\\_and\\_X}  & 3 / {\\tt predict\\_T\\_then\\_rhohprime} &\n  $T$ & $(\\rho h)'_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho_\\mathrm{edge}$ & \n  $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho h)'_\\mathrm{edge} \\right ]$ \\\\\n3 / {\\tt predict\\_rho\\_and\\_X}  & 4 / {\\tt predict\\_T\\_then\\_h} &\n  $T$ & $h_\\mathrm{edge}$ & \n  $X_\\mathrm{edge}$, $\\rho_\\mathrm{edge}$ & \n  $\\rho_\\mathrm{edge} h_\\mathrm{edge}$ \\\\\n\\hline\n\\end{tabular}\n} % end \\small\n\\end{table}\n\\end{landscape}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 0: {\\tt enthalpy\\_pred\\_type} = {\\tt predict\\_rhoh}}\n\nHere we wish to construct $(\\rho h)_\\mathrm{edge}$ by predicting\n$(\\rho h)$ to the edges directly.  We use {\\tt make\\_edge\\_scal} with\n{\\tt is\\_conservative = .true.} on $(\\rho h)$, with the underbraced term\nin Eq.~\\ref{rhoh equation} as the force (computed in {\\tt mkrhohforce}).\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 1: {\\tt enthalpy\\_pred\\_type} = {\\tt predict\\_rhohprime}}\n\nHere we wish to construct $\\left [ (\\rho h)_0^{n+\\myhalf,{\\rm avg}} + (\\rho\n  h)'_\\mathrm{edge} \\right ]$ by predicting $(\\rho h)'$ to the edges.\n\n\\subsubsection{Predicting $(\\rho h)'$ at edges}\\label{Predicting rhohprime at edges}\n\nWe define $(\\rho h)' = (\\rho h)^{(1)} - (\\rho h)_0^n$.  
Then we predict \n$(\\rho h)'$ to edges using {\\tt make\\_edge\\_scal} in {\\tt enthalpy\\_advance} \nand the underbraced term in (\\ref{rhohprime equation}) as the forcing (see\nalso table~\\ref{table:pred:hforce} for the forcing term).\nThe first two terms in the $(\\rho h)'$ force are computed in \n{\\tt modify\\_scal\\_force}, and the last two terms are accounted for in\n{\\tt mkrhohforce}.  For spherical problems, we have found that a different \nrepresentation of the pressure term in the $(\\rho h)'$ force gives better\nresults, namely:\n\\begin{equation}\n(\\Ubt \\cdot \\er)\\frac{\\partial p_0}{\\partial r} \\equiv \\Ubt\\cdot\\nabla p_0 = \n\\nabla\\cdot(\\Ubt p_0) - p_0\\nabla\\cdot\\Ubt.\n\\end{equation}\n\n\n\n\\subsubsection{Predicting $(\\rho h)_0$ at edges}\nWe use a procedure analogous to that described in Section \\ref{Predicting\nrho0 at edges} for computing $\\rho_0^{n+\\myhalf,\\rm{avg}}$ to obtain \n$(\\rho h)_0^{n+\\myhalf,\\rm{avg}}$, i.e., \n$(\\rho h)_0^{n+\\myhalf,{\\rm avg}} = [(\\rho h)_0^{n} + (\\rho h)_0^{n+1}]/2$.\n\nFor spherical geometries, however, instead of computing $(\\rho h)_0$ on edges\ndirectly, we compute $\\rho_0$ and $h_0$ separately at the edges, and\nmultiply to get $(\\rho h)_0$.\n\n\n\\subsubsection{Computing $\\rho h$ at edges}\nWe compute $\\rho h$ at edges using a procedure analogous to that\ndescribed in Section \\ref{Computing rho\n  at edges} for computing $\\rho$ at edges.  \n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 2: {\\tt enthalpy\\_pred\\_type} = {\\tt predict\\_h}}\n\nHere, the construction of the interface state depends on what species \nquantities are present.  In all cases, the enthalpy state is found\nby predicting $h$ to the edges.\n\n\nFor {\\tt species\\_pred\\_types}: {\\tt predict\\_rhoprime\\_and\\_X}, we wish to construct \n$(\\rho_0 + \\rho'_\\mathrm{edge} ) h_\\mathrm{edge}$.\n\nFor {\\tt species\\_pred\\_types}: {\\tt predict\\_rho\\_and\\_X} or\n{\\tt predict\\_rhoX}, we wish to construct $\\rho_\\mathrm{edge} h_\\mathrm{edge}$.\n\n\\subsubsection{Predicting $h$ at edges}\n\nWe define $h = (\\rho h)^{(1)}/\\rho^{(1)}$.  Then we predict $h$ to edges\nusing {\\tt make\\_edge\\_scal} in {\\tt enthalpy\\_advance} and the\nunderbraced term in Eq.~\\ref{h equation labeled} as the forcing (see\nalso table~\\ref{table:pred:hforce}).  This force is computed by\n{\\tt mkrhohforce} and then divided by $\\rho$.  
Note: {\\tt mkrhohforce}\nknows about the different {\\tt enthalpy\\_pred\\_type}s and computes\nthe correct force for this type.\n\n\\subsubsection{Computing $\\rho h$ at edges}\n\n{\\tt species\\_pred\\_types}: {\\tt predict\\_rhoprime\\_and\\_X}:\\\\[1mm]\n%\nWe use the procedure described in Section \\ref{Computing rho at\n  edges} for computing $\\rho_\\mathrm{edge}$ from $\\rho_0$ and\n$\\rho'_\\mathrm{edge}$, and then multiply by $h_\\mathrm{edge}$.\n\n\\ \\\\\n{\\tt species\\_pred\\_types}: {\\tt predict\\_rhoX}:\\\\[1mm]\n%\nWe already have $\\sum(\\rho X_k)_\\mathrm{edge}$ and simply multiply by\n$h_\\mathrm{edge}$.\n\n\\ \\\\\n{\\tt species\\_pred\\_types}: {\\tt predict\\_rho\\_and\\_X}:\\\\[1mm]\n%\nWe already have $\\rho_\\mathrm{edge}$ and simply multiply by\n$h_\\mathrm{edge}$.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 3: {\\tt enthalpy\\_pred\\_type} = {\\tt predict\\_T\\_then\\_rhohprime}}\n\nHere we wish to construct $\\left [ (\\rho h)_0 + (\\rho\n  h)'_\\mathrm{edge} \\right ]$ by predicting $T$ to the edges and then\nconverting this to $(\\rho h)'_\\mathrm{edge}$ via the EOS.\n\n\\subsubsection{Predicting $T$ at edges}\n\nWe predict $T$ to edges using {\\tt make\\_edge\\_scal} in {\\tt\n  enthalpy\\_advance} and the underbraced term in Eq.~\\ref{T equation\n  labeled} as the forcing (see also table~\\ref{table:pred:hforce}).\nThis force is computed by {\\tt mktempforce}.\n\n\\subsubsection{Converting $T_\\mathrm{edge}$ to $(\\rho h)'_\\mathrm{edge}$}\n\nWe call the EOS in {\\tt makeHfromRhoT\\_edge} (called from {\\tt\n  enthalpy\\_advance}) to convert from $T_\\mathrm{edge}$ to $(\\rho\nh)'_\\mathrm{edge}$.  For the EOS call, we need $X_\\mathrm{edge}$ and\n$\\rho_\\mathrm{edge}$.  This construction depends on {\\tt\n  species\\_pred\\_type}, since the species edge states may differ\nbetween the various prediction types (see the ``species quantity''\ncolumn in table~\\ref{table:pred:hoverview}).  The EOS inputs are\nconstructed as shown in table~\\ref{table:pred:EOSinputs}.\n\n\\begin{table*}[h]\n\\centering\n\\caption{EOS states in {\\tt makeHfromRhoT\\_edge}}\n\\label{table:pred:EOSinputs}\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{l|c|c}\n\\hline\n\\hline\n{\\tt species\\_pred\\_type} & {$\\rho$ edge state} &   {$X_k$ edge state}  \\\\\n\\hline\n{\\tt predict\\_rhoprime\\_and\\_X} & \n  $\\rho_0^{n+\\myhalf,\\rm{avg}} + \\rho'_\\mathrm{edge}$ & \n  $(X_k)_\\mathrm{edge}$ \\\\\n{\\tt predict\\_rhoX} & \n  $\\sum_k (\\rho X_k)_\\mathrm{edge}$ & \n  $(\\rho X_k)_\\mathrm{edge}/\\sum_k (\\rho X_k)_\\mathrm{edge}$ \\\\\n{\\tt predict\\_rho\\_and\\_X} & \n  $\\rho_\\mathrm{edge}$ & \n  $(X_k)_\\mathrm{edge}$ \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\nAfter calling the EOS, the output of {\\tt makeHfromRhoT\\_edge} is\n$(\\rho h)'_\\mathrm{edge}$.\n\n\\subsubsection{Computing $\\rho h$ at edges}\n\nThe computation of the final $(\\rho h)$ edge state is done identically\nto the {\\tt predict\\_rhohprime} version.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Method 4: {\\tt enthalpy\\_pred\\_type} = {\\tt predict\\_T\\_then\\_h}}\n\nHere, the construction of the interface state depends on what species\nquantities are present.  
In all cases, the enthalpy state is found by\npredicting $T$ to the edges and then converting this to\n$h_\\mathrm{edge}$ via the EOS.\n\nFor {\\tt species\\_pred\\_types}: {\\tt predict\\_rhoprime\\_and\\_X}, we wish to \nconstruct $(\\rho_0 + \\rho'_\\mathrm{edge} ) h_\\mathrm{edge}$.\n\nFor {\\tt species\\_pred\\_types}: {\\tt predict\\_rhoX}, we wish to\nconstruct $\\sum(\\rho X_k)_\\mathrm{edge} h_\\mathrm{edge}$.\n\nFor {\\tt species\\_pred\\_types}: {\\tt predict\\_rho\\_and\\_X}, we wish to\nconstruct $\\rho_\\mathrm{edge} h_\\mathrm{edge}$.\n\n\n\n\\subsubsection{Predicting $T$ at edges}\n\nThe prediction of $T$ to the edges is done identically to the\n{\\tt predict\\_T\\_then\\_rhohprime} version.\n\n\n\\subsubsection{Converting $T_\\mathrm{edge}$ to $h_\\mathrm{edge}$}\n\nThis is identical to the {\\tt predict\\_T\\_then\\_rhohprime} version,\nexcept that on output, we compute $h_\\mathrm{edge}$.\n\n\\subsubsection{Computing $\\rho h$ at edges}\n\nThe computation of the final $(\\rho h)$ edge state is done identically\nto the {\\tt predict\\_h} version.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Advancing $\\rho h$}\nWe update the enthalpy analogously to the species update in \nSection \\ref{Advancing rhoX_k}.  The forcing term does not include reaction\nsource terms accounted for in {\\bf React State}, and is the same\nfor all {\\tt enthalpy\\_pred\\_type}s:\n\\begin{equation}\n\\frac{\\partial(\\rho h)}{\\partial t} = \n-\\nabla\\cdot\\left\\{\\left \\langle (\\rho h) \\right \\rangle_\\mathrm{edge}\n \\left(\\Ubt + w_0\\eb_r\\right)\\right\\} + (\\Ubt \\cdot \\er)\\frac{\\partial p_0}{\\partial r} + \\psi  \\enskip ,\n\\end{equation}\nwhere $\\left \\langle (\\rho h) \\right \\rangle_\\mathrm{edge}$ is the\nedge state for $(\\rho h)$ computed as listed in the final column of\ntable~\\ref{table:pred:hoverview} for the given {\\tt enthalpy\\_pred\\_type}\nand {\\tt species\\_pred\\_type}.\n\n\n\n\n%-----------------------------------------------------------------------------\n% toy_convect\n%-----------------------------------------------------------------------------\n\\section{Experience from {\\tt toy\\_convect}}\n\n\\label{sec:toyconvect}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Why is {\\tt toy\\_convect} Interesting?}\n\nThe {\\tt toy\\_convect} problem consists of a carbon-oxygen white dwarf with \nan accreted layer of solar composition. There is a steep composition \ngradient between the white dwarf and the accreted layer. The convection \nthat begins as a result of the accretion is extremely sensitive to the \namount of mixing.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Initial Observations}\n\nWith {\\tt use\\_tfromp = T} and {\\tt cflfac = 0.7} there is a large difference \nbetween \\runparam{species\\_pred\\_type} {\\tt = 1} and {\\tt species\\_pred\\_type = 2,3} as \nseen in Figure \\ref{fig:spec1_vs_23}. {\\tt species\\_pred\\_type = 1} shows \nquick heating (peak T vs. t) and there is reasonable agreement between {\\tt tfromh} \nand {\\tt tfromp}. {\\tt species\\_pred\\_type = 2,3} show cooling (peak T vs. t) \nand {\\tt tfromh} looks completely unphysical (see Figure \n\\ref{fig:tfromh_unphysical}). 
There are also strange filament-type features in \nthe momentum plots shown in Figure \\ref{fig:mom_filaments}.\n\n% compare spec 1 with spec 2,3\n%-----------------------------------------\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/spec1_vs_23}\n\\caption{Compare {\\tt species\\_pred\\_type = 1,2,3} with {\\tt use\\_tfromp = \nT, enthalpy\\_pred\\_type = 1, cflfac = 0.7}}\n\\label{fig:spec1_vs_23}\n\\end{figure}\n\n% tfromh is unphysical\n%-----------------------------------------\n\\begin{figure}[!h]\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/tfromh_unphysical}\n\\caption{{\\tt tfromh} is unphysical when using {\\tt species\\_pred\\_type = \n2,3, enthalpy\\_pred\\_type = 1, cflfac = 0.7, use\\_tfromp = T}. Shown above \nis {\\tt species\\_pred\\_type = 2}}\n\\label{fig:tfromh_unphysical}\n\\end{minipage}\n\\hspace{0.5cm}\n% momentum filaments\n%-----------------------------------------\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/mom_filaments}\n\\caption{There are strange filament-type features at the bottom of the \ndomain. {\\tt species\\_pred\\_type = 2, enthalpy\\_pred\\_type = 1, cflfac = 0.7, \nuse\\_tfromp = T}}\n\\label{fig:mom_filaments}\n\\end{minipage}\n\\end{figure}\n\nUsing {\\tt use\\_tfromp = F} and \\runparam{dpdt\\_factor} {\\tt $>$ 0} results in many runs \ncrashing very quickly and gives unphysical temperature profiles as seen in \nFigure \\ref{fig:tfrompF_unphys}.\n\n% unphysical temp profiles with use_tfromp = F dpdt > 0\n%-----------------------------------------\n\\begin{figure}[!h]\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/tfrompF_unphys}\n\\caption{Unphysical temperature profile with {\\tt use\\_tfromp = F} and \n{\\tt dpdt\\_factor = 0.1}. {\\tt dpdt\\_factor = 0.2,0.3} lead to the code \ncrashing.}\n\\label{fig:tfrompF_unphys}\n\\end{minipage}\n\\hspace{0.5cm}\n% temp vs time of castro ppm 0,1 and maestro spec = 2\n%----------------------------------------------\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/tfrompF_cfl_1vs7}\n\\caption{Compare {\\tt cflfac = 0.1} with {\\tt cflfac = 0.7} for \n{\\tt use\\_tfromp = F, dpdt\\_factor = 0.0, species\\_pred\\_type = 2, \nenthalpy\\_pred\\_type = 4}}\n\\label{fig:tfrompF_cfl_1vs7}\n\\end{minipage}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Change {\\tt cflfac} and {\\tt enthalpy\\_pred\\_type}}\n\nWith {\\tt species\\_pred\\_type = 1} and {\\tt cflfac = 0.1}, \nthere is much less heating (peak T vs. t) than with {\\tt cflfac = 0.7} \n(the default). There is also a lower overall Mach number (see Figure \n\\ref{fig:spec1_cfl_1vs7}) with the \\runparam{cflfac} {\\tt = 0.1} and excellent agreement \nbetween {\\tt tfromh} and {\\tt tfromp}.\n\n{\\tt use\\_tfromp = F, dpdt\\_factor = 0.0, enthalpy\\_pred\\_type = 3,4 and \nspecies\\_pred\\_type = 2,3} shows cooling (as seen in {\\tt use\\_tfromp = T}) \nwith a comparable rate of cooling (see Figure \\ref{fig:compare_tfromp}) \nto the {\\tt use\\_tfromp = T} case. The \nlargest difference between the two runs is that the {\\tt use\\_tfromp = F} \ncase shows excellent agreement between {\\tt tfromh} and {\\tt tfromp} with \n{\\tt cflfac = 0.7}. 
The filaments in the momentum plot of Figure \n\\ref{fig:mom_filaments} are still present.\n\n% Mach number of cfl 0.1 vs cfl 0.7\n%-----------------------------------------\n\\begin{figure}[!h]\n\\begin{minipage}[!b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/spec1_cfl_1vs7}\n\\caption{Comparing the Mach number of {\\tt cflfac = 0.1} and \n{\\tt cflfac = 0.7}. {\\tt species\\_pred\\_type = 1, enthalpy\\_pred\\_type = 1}}\n\\label{fig:spec1_cfl_1vs7}\n\\end{minipage}\n\\hspace{0.5cm}\n% comparable cooling rates between tfromp = T and tfromp = F\n%-----------------------------------------\n\\begin{minipage}[!b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/compare_tfromp}\n\\caption{Illustrate the comparable cooling rates between \n{\\tt use\\_tfromp = T} and {\\tt use\\_tfromp = F with dpdt\\_factor = 0.0} \nusing {\\tt species\\_pred\\_type = 2, enthalpy\\_pred\\_type = 3,1}}\n\\label{fig:compare_tfromp}\n\\end{minipage}\n\\end{figure}\n\nFor a given {\\tt enthalpy\\_pred\\_type} and {\\tt use\\_tfromp = F}, \n{\\tt species\\_pred\\_type = 2} has a lower Mach number (vs. t) compared to \n{\\tt species\\_pred\\_type = 3}.\n\nAny {\\tt species\\_pred\\_type} with {\\tt use\\_tfromp = F, dpdt\\_factor = 0.0} \nand {\\tt enthalpy\\_pred\\_type = 1} shows significant heating, although \nthe onset of the heating is delayed in {\\tt species\\_pred\\_type = 2,3} (see \nFigure \\ref{fig:compare_tF_d0_h1_s123}). Only \n{\\tt species\\_pred\\_type = 1} gives good agreement between {\\tt tfromh} and \n{\\tt tfromp}.\n\nComparing {\\tt cflfac = 0.7} and {\\tt cflfac = 0.1} with \n{\\tt use\\_tfromp = F, dpdt\\_factor = 0.0, species\\_pred\\_type = 2} and \n{\\tt enthalpy\\_pred\\_type = 4} shows good agreement overall (see Figure \n\\ref{fig:tfrompF_cfl_1vs7}).\n\n% compare tfromp=F%, dpdt=0, h=1, spec=1,2,3\n%-----------------------------------------\n\\begin{figure}[!h]\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/compare_tF_d0_h1_s123}\n\\caption{Compare various {\\tt species\\_pred\\_type} with {\\tt use\\_tfromp = F, \ndpdt\\_factor = 0.0, enthalpy\\_pred\\_type = 1}}\n\\label{fig:compare_tF_d0_h1_s123}\n\\end{minipage}\n\\hspace{0.5cm}\n% compare tfromp=F,dpdt=0,spec=2,h=4, cfl=0.1,0.7\n%-----------------------------------------\n\\begin{minipage}[b]{0.5\\linewidth}\n\\vspace{0pt}\n\\centering\n\\includegraphics[height=0.3\\textheight]{\\pertfigpath/compare_castro}\n\\caption{Compare the {\\tt castro.ppm\\_type CASTRO} runs with the \n{\\tt species\\_pred\\_type MAESTRO} runs.}\n\\label{fig:compare_castro}\n\\end{minipage}\n\\end{figure}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Additional Runs}\n\n\\subsubsection{{\\tt bds\\_type = 1}}\n\nUsing \\runparam{bds\\_type} {\\tt = 1, use\\_tfromp = F, dpdt\\_factor = 0.0, \nspecies\\_pred\\_type = 2, enthalpy\\_pred\\_type = 4} and \n{\\tt cflfac = 0.7} seems to cool off faster, perhaps due to less mixing. 
\nThere are also no momentum filaments at the bottom of the domain.\n\n\\subsubsection{{\\tt evolve\\_base\\_state = F}}\n\nUsing \\runparam{evolve\\_base\\_state} {\\tt = F, use\\_tfromp = F, dpdt\\_factor = 0.0, \nspecies\\_pred\\_type = 2} and {\\tt enthalpy\\_pred\\_type = 4} seems to agree \nwell with the normal {\\tt evolve\\_base\\_state = T} run.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{{\\tt toy\\_convect} in {\\tt CASTRO}}\n\n{\\tt toy\\_convect} was also run using {\\tt CASTRO} with \n{\\tt castro.ppm\\_type = 0,1}. These runs show temperatures that cool \noff rather than increase (see Figure \\ref{fig:compare_castro}), which \nsuggests using {\\tt species\\_pred\\_type = 2,3} instead of \n{\\tt species\\_pred\\_type = 1}. \n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Recommendations}\n\nAll of these runs suggest that running under {\\tt species\\_pred\\_type = \n2 or 3, enthalpy\\_pred\\_type = 3 or 4} with either {{\\tt use\\_tfromp = F} and \n{\\tt dpdt\\_factor = 0.0}} or {\\tt use\\_tfromp = T} gives the most \nconsistent results.\n\n\n\n", "meta": {"hexsha": "c6fb313a15cfda9836411c31fd2bca6c7da629a0", "size": 40622, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Docs/pert_notes/pert.tex", "max_stars_repo_name": "sailoridy/MAESTRO", "max_stars_repo_head_hexsha": "f957d148d2028324a2a1076be244f73dad63fd67", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2017-05-15T15:28:56.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-09T08:13:32.000Z", "max_issues_repo_path": "Docs/pert_notes/pert.tex", "max_issues_repo_name": "sailoridy/MAESTRO", "max_issues_repo_head_hexsha": "f957d148d2028324a2a1076be244f73dad63fd67", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2017-06-14T23:05:00.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-28T16:40:42.000Z", "max_forks_repo_path": "Docs/pert_notes/pert.tex", "max_forks_repo_name": "sailoridy/MAESTRO", "max_forks_repo_head_hexsha": "f957d148d2028324a2a1076be244f73dad63fd67", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2017-06-14T14:52:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-04T07:16:09.000Z", "avg_line_length": 41.366598778, "max_line_length": 147, "alphanum_fraction": 0.6500664664, "num_tokens": 13441, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5597012721744922}}
{"text": "\\documentclass[authordraft=true,dvipsnames]{acmart}\n\n\\usepackage{blindtext}\n\\usepackage{xpatch}\n\\usepackage{tikz}\n\\usepackage{subcaption}\n\\usepackage{pbox}\n\\usepackage{url}\n\n\\usetikzlibrary{matrix,backgrounds}\n\\pgfdeclarelayer{myback}\n\\pgfsetlayers{myback,background,main}\n\n% By default the URLs are put in typewriter type in the body and the\n% bibliography of the document when using the \\url command.  If you are\n% using many long URLs you may want to uncommennt the next line so they\n% are typeset a little smaller.\n\\renewcommand{\\UrlFont}{\\small\\tt}\n\n%% Some hacks to remove all the ACM hardcoded text as I only want to use its style\n\\makeatletter\n\\xpatchcmd{\\ps@firstpagestyle}{Manuscript submitted to ACM}{}{\\typeout{First patch succeeded}}{\\typeout{first patch failed}}\n\\xpatchcmd{\\ps@standardpagestyle}{Manuscript submitted to ACM}{}{\\typeout{Second patch succeeded}}{\\typeout{Second patch failed}}    \\@ACM@manuscriptfalse% Also in titlepage\n\\makeatother\n\\renewcommand\\footnotetextcopyrightpermission[1]{} % removes footnote with conference info\n\\setcopyright{none}\n\\pagestyle{plain} % remove running headers\n%% End of hack\n\n%% Some shorthands\n\\newcommand{\\pref}[1]{(\\ref{#1})}\n\\newcommand{\\bx}[0]{\\mathbf{X}}\n\\newcommand{\\bw}[0]{\\mathbf{W}}\n\\newcommand{\\by}[0]{\\mathbf{Y}}\n\\newcommand{\\bb}[0]{\\mathbf{B}}\n\n%% Drawing a cube in tikz\n\\newcommand{\\tikzcube}[3]{% width, height, depth\n\\foreach \\x in {0,...,#1}\n{   \\draw (\\x ,0  ,#3 ) -- (\\x ,#2 ,#3 );\n    \\draw (\\x ,#2 ,#3 ) -- (\\x ,#2 ,0  );\n}\n\\foreach \\x in {0,...,#2}\n{   \\draw (#1 ,\\x ,#3 ) -- (#1 ,\\x ,0  );\n    \\draw (0  ,\\x ,#3 ) -- (#1 ,\\x ,#3 );\n}\n\\foreach \\x in {0,...,#3}\n{   \\draw (#1 ,0  ,\\x ) -- (#1 ,#2 ,\\x );\n    \\draw (0  ,#2 ,\\x ) -- (#1 ,#2 ,\\x );\n}\n}\n\n%% Highlight the elements in tikz matrix\n\\tikzset{mycolor/.style = {dotted,line width=1bp,color=#1}}%\n\\tikzset{myfillcolor/.style = {draw,fill=#1,opacity=0.3}}%\n\n\\NewDocumentCommand{\\highlight}{O{blue!40} m m}{%\n\\draw[mycolor=#1] (#2.north west)rectangle (#3.south east);\n}\n\n\\NewDocumentCommand{\\fhighlight}{O{blue!40} m m}{%\n\\draw[myfillcolor=#1] (#2.north west)rectangle (#3.south east);\n}\n\n\\title{Convolutional Layer and Its Matrix Implementation}\n\\author{Yinghai Lu}\n\n\\begin{document} \n\\begin{abstract}\nThere are quite some tutorials~\\cite{cs231n,andrew14,jefkin16} and textbooks~\\cite{goodfellow16} explaining the mathematics of convolutional neural networks. However, in order to implement the CNN efficiently, we usually cast it into matrix operations. Few articles discuss this transformation process. \\cite{cs231n} does a good job at the forward propagation pass but the backward propagation part is missing. In addition, many of them discuss simplified model with zero padding and $1$-stride. When I was reading the code of $CAFFE2$, I found myself constantly needing to figure out the all the parameters and all the indexing into the tensors. This notes tries to bridge the gap and provides a reference for matrix implementation of convolutional layer, consider various parameters and flavors. \n\\end{abstract}\n\\maketitle\n\n\\section{Configuration}\nThe input to the convolutional layer is a 4D tensor $\\bx \\in \\mathbb{R} ^{N \\times C \\times H_X \\times W_X}$ (assuming NCHM format), which means a batch of $N$ images of size $H \\times W$, where each image has $C$ channels. 
A set of $M$ filters of size $C \\times H_K \\times W_K$ will be applied to this input to generate the output $\\by$. The filters are thus represented by a tensor $\\bw \\in \\mathbb{R}^{M \\times C \\times H_K \\times W_K}$. Optionally, we can have a bias vector $\\mathbf{b} \\in \\mathbb{R}^{M}$, which offsets the result of the convolution. The output tensor $\\by$ is of size $N \\times M \\times H_Y \\times W_Y$, where the output image (activation map) dimension $H_Y \\times W_Y$ is determined by configurations such as padding and stride. \n\nFor now, we assume zero-padding and stride of size $1$ for simplicity. We don't take group filters into account either. Later, we will see how to extend it. We also assume $N=1$ and omit the $n$ subscript. \n\n\\section{Forward Propagation}\nFor a certain filter $m$, we drag it through the image to apply the convolution. Each time, we apply it onto a $H_K \\times W_K$ tile of the image with $C$ channels and produce one element (pixel) in the output map:\n\\begin{equation} \\label{forward}\n\\by_{m, i,j}=\\sum_{c=0}^{C-1}{\\sum_{p=0}^{H_K-1}{\\sum_{q=0}^{W_K-1}{\\bw_{m,c,p,q}\\bx_{c, i+p,j+q}}}} + \\mathbf{b}_m\n\\end{equation}\n\nThe above equation can be naturally converted to matrix multiplication. Let's first ignore the bias and focus on the convolution term. Given an input sample $\\bx \\in \\mathbb{R}^{C \\times H_X \\times W_X}$, we are going to apply one filter $H_Y \\times W_Y$ times by going from left to right and then from top to bottom. We introduce an operator $Im2Col()$ which takes $\\bx$ as input and creates a 2D matrix $\\bb \\in \\mathbb{R}^{(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)}$, where each column $\\bb_{:,j}$ is made by reshaping the input area to which the filter is going to be applied into a $C \\times H_K \\times W_K$ column. As we are moving the filter, we are going to generate such a column $H_Y \\times W_Y$ times, from left to right, top to bottom of the image. Here is an example of how we generate $\\bb$ from a $2 \\times 3 \\times 3$ input $\\bx$ with a $2 \\times 2 \\times 2$ filter kernel (Figure~\\ref{buffer}, after the short code sketch below). 
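\n\nAs a concrete (if unoptimized) reference, here is a minimal NumPy sketch of $Im2Col()$ for the zero-padding, stride-$1$ case; the function name and layout convention are ours, not $CAFFE2$'s:\n\\begin{verbatim}\nimport numpy as np\n\ndef im2col(X, HK, WK):\n    # X has shape (C, HX, WX).  Returns B of shape\n    # (C*HK*WK, HY*WY) with HY = HX-HK+1, WY = WX-WK+1.\n    # Each column is one receptive field, flattened.\n    C, HX, WX = X.shape\n    HY, WY = HX - HK + 1, WX - WK + 1\n    B = np.empty((C * HK * WK, HY * WY), dtype=X.dtype)\n    col = 0\n    for i in range(HY):          # top to bottom\n        for j in range(WY):      # left to right\n            B[:, col] = X[:, i:i+HK, j:j+WK].ravel()\n            col += 1\n    return B\n\\end{verbatim}\n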
\n\\begin{figure}[h]\n\\begin{subfigure}[b]{0.3\\textwidth}\n\\centering\n\\begin{tikzpicture}[scale=1]\n\\tikzcube{3}{3}{2}\n\\draw[fill=Yellow,opacity=0.3] (0,0,2) -- (0,3,2) -- (3,3,2) -- (3,0,2);\n\\draw[fill=Yellow,opacity=0.3] (3,0,2) -- (3,0,1) -- (3,3,1) -- (3,3,2);\n\\draw[fill=Yellow,opacity=0.3] (0,3,2) -- (0,3,1) -- (3,3,1) -- (3,3,2);\n\n\\node at (-0.25, -0.25, 0) {$\\bx_{0,2,0}$};\n\\node at (-0.25, 0.75, 0) {$\\bx_{0,1,0}$};\n\\node at (-0.25, 1.75, 0) {\\color{Red}{$\\bx_{0,0,0}$}};\n\\node at (0.75, -0.25, 0) {$\\bx_{0,2,1}$};\n\\node at (0.75, 0.75, 0) {\\color{Green}{$\\bx_{0,1,1}$}};\n\\node at (0.75, 1.75, 0) {\\color{Cyan}{$\\bx_{0,0,1}$}};\n\\node at (1.75, -0.25, 0) {$\\bx_{0,2,2}$};\n\\node at (1.75, 0.75, 0) {$\\bx_{0,1,2}$};\n\\node at (1.75, 1.75, 0) {$\\bx_{0,0,2}$};\n\\end{tikzpicture}\n\\caption{Input $\\bx$ of $2 \\times 3 \\times 3$}\n\\end{subfigure}\n\\hspace{20mm}\n\\begin{subfigure}[b]{0.3\\textwidth}\n\\centering\n\\begin{tikzpicture}[baseline=-\\the\\dimexpr\\fontdimen22\\textfont2\\relax ]\n\\matrix(m)[matrix of math nodes,left delimiter=(,right delimiter=)]\n{\n\\color{Red}{\\bx_{0,0,0}} & \\color{Cyan}{\\bx_{0,0,1}} & \\bx_{0,1,0} & \\color{Green}{\\bx_{0,1,1}}\\\\ \n\\color{Cyan}{\\bx_{0,0,1}} & \\bx_{0,0,2} & \\color{Green}{\\bx_{0,1,1}} & \\bx_{0,1,2}\\\\ \n\\bx_{0,1,0} & \\color{Green}{\\bx_{0,1,1}} & \\bx_{0,2,0} & \\bx_{0,2,1}\\\\ \n\\color{Green}{\\bx_{0,1,1}} & \\bx_{0,1,2} & \\bx_{0,2,1} & \\bx_{0,2,2}\\\\ \n\\bx_{1,0,0} & \\bx_{1,0,1} & \\bx_{1,1,0} & \\bx_{1,1,1}\\\\ \n\\bx_{1,0,1} & \\bx_{1,0,2} & \\bx_{1,1,1} & \\bx_{1,1,2}\\\\ \n\\bx_{1,1,0} & \\bx_{1,1,1} & \\bx_{1,2,0} & \\bx_{1,2,1}\\\\ \n\\bx_{1,1,1} & \\bx_{1,1,2} & \\bx_{1,2,1} & \\bx_{1,2,2}\\\\\n};\n\\fhighlight[Yellow]{m-1-1}{m-4-4}\n\\end{tikzpicture}\n\\caption{$\\bb = Im2Col(\\bx)$ of $8 \\times 4$ }\n\\end{subfigure}\n\\caption{An example of $Im2Col()$} \\label{buffer} \n\\end{figure}\n\nWith $\\bb$, if we think of the weight tensor as a 2D matrix of $M \\times (C \\times H_K \\times W_K)$, where each row is just the filter map $m$ flattened, we can easily get the output of applying $\\bw$ to $\\bx$ as \n\\begin{equation} \\label{bm}\n\\by = \\bw \\bb\n\\end{equation}\nwhere $\\by \\in \\mathbb{R}^{M \\times (H_Y \\times W_Y)}$. \n\nNow we amend $\\by$ to add the bias term. What we want to do is to add $\\mathbf{b}_i$ to each element in row $i$ of $\\mathbf{Y}$. This can be done by generating an increment matrix and doing an element-wise add to $\\mathbf{Y}$:\n\\begin{equation} \\label{bias}\n \\mathbf{Y'} =  \\by + \\mathbf{b} \\mathbf{u}^T\n\\end{equation}\nwhere $\\mathbf{u} \\in \\mathbb{R}^{(H_Y \\times W_Y) \\times 1}$ is a vector filled with $1$s. By taking an outer product between $\\mathbf{b}$ and $\\mathbf{u}$, we construct the increment matrix we want and add it to $\\by$. Note that most numerical libraries offer a GEMM routine which performs the operation in the above equation in one call. \n\nApplying this to each sample of the input batch, we can get $\\by$ for each $n \\in \\{0,\\ldots,N-1\\}$. \n \n\\section{Backward Propagation}\nFor backward propagation, we are interested in generating the gradients for inputs ($d\\bx$), for weights ($d\\bw$) and for the bias terms ($d\\mathbf{b}$). In addition to $\\mathbf{X}$ and $\\mathbf{W}$, we also know the output gradient back-propagated from the next layer $d\\by \\in \\mathbb{R}^{N \\times M \\times H_Y \\times W_Y}$. We will focus on one sample of the input batch and hence omit the index $n$ in the following discussion.  
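\n\nBefore deriving the gradients, note that the whole forward pass above, \\pref{bm} and \\pref{bias}, collapses to a reshape and a single GEMM on top of $Im2Col()$.  A NumPy sketch (reusing the hypothetical {\\tt im2col} helper shown earlier; again ours, not $CAFFE2$ code) that also returns $\\bb$ for reuse in the backward pass:\n\\begin{verbatim}\ndef conv_forward(X, W, b, HK, WK):\n    # W has shape (M, C, HK, WK); b has shape (M,).\n    # Computes Y = W B + b u^T; broadcasting b adds the bias\n    # to every column, which is exactly the outer-product trick.\n    C, HX, WX = X.shape\n    HY, WY = HX - HK + 1, WX - WK + 1\n    B = im2col(X, HK, WK)                  # (C*HK*WK, HY*WY)\n    M = W.shape[0]\n    Y = W.reshape(M, -1) @ B + b[:, None]  # one GEMM plus broadcast\n    return Y.reshape(M, HY, WY), B\n\\end{verbatim}\n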
\n\nThe gradient of weight $\\bw_{m,c,p,q}$ with respect to the training error is about how much a perturbation in $\\mathbf{W}_{m,c,p,q}$ is going to affect the output. It's worth noticing how much $\\mathbf{W}_{m,c,p,q}$ is involved in the computation. Basically, for each filter $m$, if we have zero padding and stride of $1$, $\\mathbf{W}_{m,c,p,q}$ will be involved in the computation of each pixel of the output activation map with respect to a specific filter, and there are $H_Y \\times W_Y$ of them. With that, applying the chain rule to the gradient, we get:\n\\begin{equation}\n\\frac{\\partial E}{\\partial \\bw_{m,c,p,q}} = \\sum_{i=0}^{H_Y-1}{\\sum_{j=0}^{W_Y-1}{\\frac{\\partial E}{\\partial \\by_{m,i,j}}\\frac{\\partial \\mathbf{Y}_{m,i,j}}{\\partial  \\mathbf{W}_{m,c,p,q}}}}\n\\end{equation}\nSubstituting $\\by_{m,i,j}$ with \\pref{forward}, we have\n\\begin{equation} \\label{dw}\n\\frac{\\partial E}{\\partial \\bw_{m,c,p,q}} = \\sum_{i=0}^{H_Y-1}{\\sum_{j=0}^{W_Y-1}{\\frac{\\partial E}{\\partial \\by_{m,i,j}}\\bx_{c,i+p,j+q}}}\n\\end{equation}\n \nFor a given sample of $d\\by$ from a batch of $N$, we can treat it as a matrix $d\\mathbf{Y}$ of size $M \\times (H_Y \\times W_Y)$: for each row $m$, each column represents an element $\\mathbf{Y}_{m,i,j}$, flattened into one row. Recalling that the buffer matrix $\\bb=Im2Col(\\bx)$ is a matrix of size $(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)$, we have \n\\begin{equation}\nd\\bw = d\\by \\bb^T\n\\end{equation}\nTo see why, you can expand the matrix multiplication and verify that each element operation corresponds exactly to \\pref{dw}.\n\nNext, let's check the gradient of the bias $\\mathbf{b}_m$. Similar to $\\bw_{m,c,p,q}$, $\\mathbf{b}_m$ contributes to the computation of each of the pixels in an activation map of size $H_Y \\times W_Y$. Similarly, applying the chain rule, we have\n\\begin{equation}\n\\frac{\\partial E}{\\partial \\mathbf{b}_m} =  \\sum_{i=0}^{H_Y-1}{\\sum_{j=0}^{W_Y-1}{\\frac{\\partial E}{\\partial \\by_{m,i,j}}\\frac{\\partial \\mathbf{Y}_{m,i,j}}{\\partial \\mathbf{b}_m}}}\n\\end{equation}\nFrom \\pref{forward}, we know that $\\frac{\\partial \\by_{m,i,j}}{\\partial \\mathbf{b}_m} = 1$. Therefore we have \n\\begin{equation}\nd\\mathbf{b} = d\\by \\mathbf{u}\n\\end{equation}\nwhere $\\mathbf{u}$ is the vector with all ones, as we have already seen in \\pref{bias}.
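\n\nAs a sanity check, both results are one line each in NumPy (a sketch under the same zero-padding, stride-$1$ assumptions; {\\tt B} is the buffer kept from the forward pass):\n\\begin{verbatim}\ndef conv_backward_wb(dY, B, M, C, HK, WK):\n    # dY has shape (M, HY*WY); B = im2col(X).\n    dW = (dY @ B.T).reshape(M, C, HK, WK)  # dW = dY B^T\n    db = dY.sum(axis=1)                    # db = dY u (row sums)\n    return dW, db\n\\end{verbatim}\n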
\nFinally, we still need to compute the gradient of the input, $d\\bx$. Using the chain rule, the gradient of each input $\\mathbf{X}_{c,i,j}$ is \n\\begin{equation}\n\\frac{\\partial E}{\\partial \\bx_{c,i,j}} = \\sum_{p=0}^{H_K-1}{\\sum_{q=0}^{W_K-1}{\\sum_{m=0}^{M-1}{\\frac{\\partial E}{\\partial \\by_{m,i-p,j-q}}\\frac{\\partial \\mathbf{Y}_{m,i-p,j-q}}{\\partial \\mathbf{X}_{c,i,j}}}}}\n\\end{equation}\nSubstituting \\pref{forward} into the above equation, we have\n\\begin{equation} \\label{dx}\n\\frac{\\partial E}{\\partial \\bx_{c,i,j}} = \\sum_{p=0}^{H_K-1}{\\sum_{q=0}^{W_K-1}{\\sum_{m=0}^{M-1}{\\frac{\\partial E}{\\partial \\by_{m,i-p,j-q}} \\bw_{m,c,p,q}}}}\n\\end{equation}\nNote that the ranges of the three levels of summation show intuitively how a pixel $\\bx_{c,i,j}$ contributes to the computation of the output activation map $\\by$. The outer two levels show that each pixel will be involved in computing (at most) $H_K \\times W_K$ pixels in the activation map of one filtering, as is shown by the highlighted elements in Figure~\\ref{buffer}. And the innermost summation means that there are $M$ such filters. First, we can tackle the innermost summation. \nConsider the following matrix $\\bb^* \\in \\mathbb{R}^{(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)}$,\n\\begin{equation} \\label{rbb}\n\\bb^* = \\bw^T d\\by\n\\end{equation}\nwhere each element in $\\bb^*$ corresponds to a combination $\\sum_{m=0}^{M-1}{\\bw_{m,c,p,q}\\,d\\by_{m,i,j}}$. Basically, $\\bb^*$ gives all the information we need to compute $d\\bx_{c,i,j}$, and we can rewrite \\pref{dx} as \n\\begin{equation} \\label{dx2}\n\\frac{\\partial E}{\\partial \\bx_{c,i,j}} = \\sum_{(a,b) \\in A}{\\bb^*_{a,b}}\n\\end{equation}\n\nThe problem is how to pick and choose the set $A$ so that the elements we choose satisfy \\pref{dx}. Notice the structural similarity between $\\bb^*$ and $\\mathbf{B}$ in Figure \\ref{buffer}. Both of them are of the same size and the elements are arranged in the same way. Since $\\mathbf{B} = Im2Col(\\bx)$, we can actually obtain $d\\bx$ from $\\mathbf{B}^*$ through a dual process of $Im2Col()$, which we call $Col2Im()$. What $Col2Im()$ does is the reverse process of $Im2Col()$: it takes each column of the input matrix and patches it back into an output of the same size as the original input. Since the patches overlap with each other, the values are added up, which corresponds to the summation in \\pref{dx2}. Figure~\\ref{dual} shows an example of how we reconstruct an element in $d\\mathbf{X}$ from $\\mathbf{B}^*$, where the input and kernel sizes are the same as in the example of Figure~\\ref{buffer}. The colored boxes show how each column in $\\mathbf{B}^*$ is stretched into a tile of pixels in $d\\bx$. Note how the process is exactly the reverse of that in Figure \\ref{buffer}.\n\\begin{figure}[h]\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\begin{tikzpicture}[baseline=-\\the\\dimexpr\\fontdimen22\\textfont2\\relax ]\n\\matrix(m)[matrix of math nodes,left delimiter=(,right delimiter=)]\n{\n\\color{Red}{\\bw_{0,0,0}d\\by_{0,0}} & \\color{Cyan}{\\bw_{0,0,0} d\\by_{0,1}} & \\bw_{0,0,0}d\\by_{1,0} & \\color{Green}{\\bw_{0,0,0}d\\by_{1,1}}\\\\ \n\\color{Cyan}{\\bw_{0,0,1}d\\by_{0,0}} & \\bw_{0,0,1}d\\by_{0,1} & \\color{Green}{\\bw_{0,0,1}d\\by_{1,0}} & \\bw_{0,0,1}d\\by_{1,1}\\\\ \n\\bw_{0,1,0}d\\by_{0,0} & \\color{Green}{\\bw_{0,1,0}d\\by_{0,1}} & \\bw_{0,1,0}d\\by_{1,0} & \\bw_{0,1,0}d\\by_{1,1}\\\\ \n\\color{Green}{\\bw_{0,1,1}d\\by_{0,0}} & \\bw_{0,1,1}d\\by_{0,1} & \\bw_{0,1,1}d\\by_{1,0} & \\bw_{0,1,1}d\\by_{1,1}\\\\ \n\\bw_{1,0,0}d\\by_{0,0} & \\bw_{1,0,0}d\\by_{0,1} & \\bw_{1,0,0}d\\by_{1,0}& \\bw_{1,0,0}d\\by_{1,1}\\\\ \n\\bw_{1,0,1}d\\by_{0,0} & \\bw_{1,0,1}d\\by_{0,1} & \\bw_{1,0,1}d\\by_{1,0}& \\bw_{1,0,1}d\\by_{1,1}\\\\ \n\\bw_{1,1,0}d\\by_{0,0} & \\bw_{1,1,0}d\\by_{0,1} & \\bw_{1,1,0}d\\by_{1,0}& \\bw_{1,1,0}d\\by_{1,1}\\\\ \n\\bw_{1,1,1}d\\by_{0,0} & \\bw_{1,1,1}d\\by_{0,1} & \\bw_{1,1,1}d\\by_{1,0}& \\bw_{1,1,1}d\\by_{1,1}\\\\\n};\n\\begin{pgfonlayer}{myback}\n\\highlight[Red]{m-1-1}{m-4-1}\n\\highlight[Blue]{m-1-2}{m-4-2}\n\\highlight[ForestGreen]{m-1-3}{m-4-3}\n\\highlight[Orange]{m-1-4}{m-4-4}\n\\end{pgfonlayer}\n\\end{tikzpicture}\n\\caption{$\\bb^* = \\bw^T d\\by$ of $8 \\times 4$ }\n\\end{subfigure} \\\\\n\\vspace{5mm}\n\\begin{subfigure}[c]{0.75\\textwidth}\n\\centering\n\\begin{tikzpicture}[baseline=-\\the\\dimexpr\\fontdimen22\\textfont2\\relax ]\n\\matrix(m)[matrix of math nodes,execute at begin cell = {\\pbox[t]{\\linewidth}}, left delimiter=(,right delimiter=), nodes={align=left, anchor=center, text width=100pt, minimum height=6ex, inner 
sep=4pt}]\n{\n\\color{Red}{\\bw_{0,0,0}d\\by_{0,0}} & \\color{Cyan}{\\bw_{0,0,1}d\\by_{0,0} + \\bw_{0,0,0} d\\by_{0,1}}  &  \\bw_{0,0,1}d\\by_{0,1} \\\\\n\\bw_{0,1,0}d\\by_{0,0} +  \\bw_{0,0,0}d\\by_{1,0}  & { \\color{Green}{$\\bw_{0,1,1}d\\by_{0,0} + \\bw_{0,1,0}d\\by_{0,1}$} \\\\ $+ \\bw_{0,0,1}d\\by_{1,0} + \\bw_{0,0,0}d\\by_{1,1}$} &  \\bw_{0,1,1}d\\by_{0,1} +  \\bw_{0,0,1}d\\by_{1,1} \\\\\n \\bw_{0,1,0}d\\by_{1,0} & \\bw_{0,1,1}d\\by_{1,0} +  \\bw_{0,1,0}d\\by_{1,1}  &   \\bw_{0,1,1}d\\by_{1,1} \\\\\n};\n\\begin{pgfonlayer}{myback}\n\\highlight[Red]{m-1-1}{m-2-2}\n\\highlight[Blue]{m-1-2}{m-2-3}\n\\highlight[ForestGreen]{m-2-1}{m-3-2}\n\\highlight[Orange]{m-2-2}{m-3-3}\n\\end{pgfonlayer}\n\\end{tikzpicture}\n\\caption{First channel of $d\\bx = Col2Im(\\bb^*)$ of $2 \\times 3 \\times 3$}\n\\end{subfigure}\n\\caption{An example of $Col2Im()$ \\label{dual}}\n\\end{figure}\n\n\\section{Different Flavors}\nIn practice, a convolutional layer will be configured with more than just zero padding and stride $1$. Hence, we discuss how to take those parameters into consideration. \n\n\\subsection{Padding, Stride and Dilation}\nThe beauty of $Im2Col()$ is that it abstracts away how we grab the pixels from the input to form the receptive field for convolution. Padding, stride and dilation all just change the way we grab the pixels and are taken care of by the $Im2Col()$ function. Padding tells $Im2Col()$ that, for pixels falling in the padded region, it should just put $0$ into $\\bb$. Stride tells $Im2Col()$ how to move the convolution window around. And dilation tells $Im2Col()$ to stretch the receptive field and collect the pixels that correspond to non-zero weights. Since $Col2Im()$ is just the reverse process of $Im2Col()$, it will also take padding, stride and dilation into consideration. The core of the computation, i.e. the matrix multiplications, does not change.  \n\nAnother important influence of padding ($P$), stride ($S$) and dilation ($D$) is that they determine the size of the output image, i.e. $H_Y$ and $W_Y$, which has been missing in the discussion. With some geometric intuition, we can see that\n\\begin{eqnarray}\nH_Y &=& (H_X + P_T + P_B - (D_H * (H_K - 1) + 1)) / S_H + 1 \\\\\nW_Y &=& (W_X + P_L + P_R - (D_W * (W_K - 1) + 1)) / S_W + 1 \n\\end{eqnarray}\nwhere $P_{\\{T,B,L,R\\}}$ are paddings from top, bottom, left and right, and $D_{\\{H,W\\}}$ and $S_{\\{H,W\\}}$ are dilation and stride along the height and width directions. 
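\n\nIn code, the output-size computation is a direct transcription of the two formulas (a sketch; Python's integer division stands in for the floor implied above):\n\\begin{verbatim}\ndef out_size(HX, WX, HK, WK, PT, PB, PL, PR, SH, SW, DH, DW):\n    # Output activation-map size with per-side padding P,\n    # stride S and dilation D along height (H) and width (W).\n    HY = (HX + PT + PB - (DH * (HK - 1) + 1)) // SH + 1\n    WY = (WX + PL + PR - (DW * (WK - 1) + 1)) // SW + 1\n    return HY, WY\n\\end{verbatim}\n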
\n\\subsection{Group Filters}\\footnote{Group filters currently don't work with dilated convolution in $CAFFE2$}\n\\cite{alexnet} introduces group filters to reduce the number of weights so that the computation can fit into the configuration of GPUs. In addition, it seems that group filters can actually improve the accuracy, and the resulting trained filters look more intuitive when plotted as 2D images. For group filters, we partition a set of $M$ filters into $G$ groups, each containing $M/G$ filters. Each filter in a group, instead of working on all the $C$ channels, works on $C/G$ channels. For all the filters that work on group $g \\in \\{0,\\ldots,G-1\\}$, we collect them together to form a matrix of size $(M/G) \\times ((C/G) \\times H_K \\times W_K)$.\n\\begin{equation*}\n\\bw_g = [\\mathbf{w}_{g \\times M/G}^l, \\mathbf{w}_{g\\times M/G+1}^l, \\ldots, \\mathbf{w}_{(g+1) \\times M/G-1}^{l}]^T\n\\end{equation*}\nwhere $\\mathbf{w}_g^l$ denotes a filter in the $g$-th group that works on the input channels $[l\\times C/G, (l+1)\\times C/G-1]$.\n\nIn addition, we need to tell the $Im2Col()$ function to grab pixels from $C/G$ instead of $C$ channels to form the buffer matrix $\\bb_g$, and its resulting size will be $(C/G \\times H_K \\times W_K) \\times (H_Y \\times W_Y)$. Similar to~\\pref{bm}, the output of this group of filters can be computed as \n\\begin{equation} \\label{bmg}\n\\by_g = \\bw_g \\bb_g\n\\end{equation}\nNote that $\\by_g \\in \\mathbb{R} ^ {(M/G) \\times H_Y \\times W_Y}$. Repeating this for $g \\in \\{0,\\ldots,G-1\\}$ and combining the results gives $\\by$.\n\n\\section{Locally-Connected Layer}\nIn conventional convolutional layers, the weights of a filter are shared among the computation of all the pixels of the output map. In other words, we use the same filter to scan the whole image to extract features. This makes sense in the case of general images because the objects of interest can appear anywhere in the image. However, for some specific applications such as face recognition, this may not be a good idea, because the input is a frontal face photo where specific features such as eyes and mouth appear in specific regions of the image. In this case, we do not want to share the weights of the filters at each location of convolution. We call a convolutional layer without weight sharing a locally-connected layer.\n\n\\subsection{Forward Propagation}\nWithout weight sharing, \\pref{forward} becomes \n\\begin{equation}\n\\by_{m, i,j}=\\sum_{c=0}^{C-1}{\\sum_{p=0}^{H_K-1}{\\sum_{q=0}^{W_K-1}{\\bw_{m,c,p,q,i,j}\\bx_{c, i+p,j+q}}}} + \\mathbf{b}_{m,i,j}\n\\end{equation} \nNotice the additional indices $i,j$ in the weights and biases, since we have a different set of weights for each of the $H_Y \\times W_Y$ convolutional locations. The input weight tensor is now of size $M  \\times C \\times H_K \\times W_K \\times H_Y \\times W_Y$. Similarly, the bias becomes a tensor of size $M \\times H_Y \\times W_Y$. However, the output $\\by$ is still of the same dimension ($N \\times M \\times H_Y \\times W_Y$). Again, we will assume $N=1$ and omit the first dimension in the following discussion.\n\nFirst, we can still use $Im2Col()$ to take care of the padding, stride and dilation and prepare the buffer matrix $\\bb  \\in \\mathbb{R}^{(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)}$. However, we cannot do direct matrix multiplication with parts of $\\bw$ and $\\mathbf{B}$ because it would create terms $\\bw_{m,c,p,q,i,j}\\bx_{c, i'+p,j'+q}$ with $(i,j) \\neq (i',j')$, i.e., mixing weights and pixels from different output locations. Instead, if we arrange $\\bw$ into $\\bw_m \\in \\mathbb{R} ^{(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)}$ for $m=0,\\ldots,M-1$, we can use the Hadamard product to compute the first term of the output, ignoring the bias for now:\n\\begin{equation} \\label{ym2}\n\\by_m = \\mathbf{u}^T (\\bw_m \\odot \\bb)\n\\end{equation}\nwhere $\\odot$ denotes the Hadamard product (element-wise product) and $\\mathbf{u}$ is the same all-ones vector of size $C \\times H_K \\times W_K$. The result $\\by_m$ is an $H_Y \\times W_Y$ vector. 
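\n\nIn NumPy, \\pref{ym2} for a single $m$ is one elementwise product and a column sum (a sketch; {\\tt Wm} is our name for the location-dependent weights of filter $m$ reshaped to the shape of {\\tt B}):\n\\begin{verbatim}\ndef local_conv_forward_m(Wm, B):\n    # Wm and B both have shape (C*HK*WK, HY*WY).\n    # Computes u^T (Wm o B): one output row of length HY*WY.\n    return (Wm * B).sum(axis=0)\n\\end{verbatim}\n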
Repeating this for all $m$ and assembling the results, we get $\\by$. Some remarks: \n\\begin{enumerate}\n\\item When repeating~\\pref{ym2} $M$ times, in terms of complexity, it is the same as the matrix multiplication in~\\pref{bm}. But we are making $2M$ calls ($M$ for the Hadamard product and $M$ for AXPY) instead of $1$. This may introduce inefficiency compared to the weight-sharing convolutional layer. \n\\item cuBLAS and Intel MKL provide support for the Hadamard product (DHAD, vdMul) but OpenBLAS doesn't yet~\\cite{openblas}. This is not a hard blocker though, because it is fairly easy to add this considering OpenBLAS already has elementwise addition. \n\\item The AXPY operation with $\\mathbf{u}$ in~\\pref{ym2} is overkill. Effectively, we just need a reduction operation. BLAS provides the ASUM operation for that. If there is support for that in CUDA, we can just use that. \n\\item We can actually convert the Hadamard product to matrix multiplication by creating a huge block-diagonal matrix. BLAS has built-in support for diagonal matrices, but I am not sure whether this will help us in terms of performance or not. \n\\end{enumerate}\n\nNext, we add the bias terms. Since $\\by$ and $\\mathbf{b}$ are of the same shape, adding the bias is as simple as a matrix addition\n\\begin{equation}\n\\by' = \\by + \\mathbf{b}\n\\end{equation}\n\n\\subsection{Backward Propagation}\nThe input of gradient propagation is the same as for a normal convolutional layer, where we have $\\bx$, $\\bw$ and $d\\by$, except that $\\bw$ is of size $M  \\times C \\times H_K \\times W_K\\times H_Y \\times W_Y$ and $\\mathbf{b}$ is of size $M \\times H_Y \\times W_Y$. For output, $d\\bw$ will be the same size as $\\bw$ and the bias gradient $d\\mathbf{b}$ will be the same size as $\\mathbf{b}$. $d\\bx$ will keep the same size as the input, $N \\times C \\times H_X \\times W_X$.\n\nFirst, we compute the gradient of the weight $\\bw_{m,c,p,q,i,j}$. Note that for a given filter $m$, each set of weights for each convolutional operation only contributes to one pixel in the output activation map, instead of $H_Y \\times W_Y$ of them. Therefore, we have\n\\begin{equation}\n\\frac{\\partial E}{\\partial \\bw_{m,c,p,q,i,j}} = \\frac{\\partial E}{\\partial \\by_{m,i,j}} \\frac{\\partial \\by_{m,i,j}}{\\partial  \\bw_{m,c,p,q,i,j}} =  \\frac{\\partial E}{\\partial \\by_{m,i,j}} \\bx_{c,i+p,j+q}\n\\end{equation}\nNotice that compared to~\\pref{dw}, it is a single multiplication instead of a convolution. Again, we will iterate through filters $m=0,\\ldots,M-1$ and suppose we already have $\\bb=Im2Col(\\bx)$. Considering $d\\by$ as an $M \\times (H_Y \\times W_Y)$ matrix, and $\\bb$ as a matrix of size $(C \\times H_K \\times W_K) \\times (H_Y \\times W_Y)$,  \n\\begin{equation*}\nd\\bw_{m,c,p,q,:,:} = d\\by_{m,:} \\odot \\bb_{k,:}\n\\end{equation*}\ncomputes a vector of $H_Y \\times W_Y$ weight gradients, where we first iterate on each row of $d\\by$ for $m=0,\\ldots,M-1$ and then on each row of $\\bb$ for $k=0,\\ldots,C\\times H_K \\times W_K-1$. There will be $M \\times C \\times H_K \\times W_K$ Hadamard products of size $H_Y \\times W_Y$. \n\n \nSimilarly, the bias gradients can be propagated as \n\\begin{equation}\n\\frac{\\partial E}{\\partial \\mathbf{b}_{m,i,j}} = \\frac{\\partial E} {\\partial \\by_{m,i,j}} \\frac{\\partial \\by_{m,i,j}} {\\partial \\mathbf{b}_{m,i,j}} = \\frac{\\partial E} {\\partial \\by_{m,i,j}} \n\\end{equation}\nSo literally, $d\\mathbf{b}=d\\by$.
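\n\nA NumPy sketch of these two gradients (broadcasting performs all $M \\times C \\times H_K \\times W_K$ Hadamard products at once; illustrative only, with $d\\bw$ returned in the flattened $(M, C \\times H_K \\times W_K, H_Y \\times W_Y)$ arrangement):\n\\begin{verbatim}\ndef local_conv_backward_wb(dY, B):\n    # dY has shape (M, HY*WY); B = im2col(X) has shape (K, HY*WY)\n    # with K = C*HK*WK.  dW[m, k, :] = dY[m, :] * B[k, :].\n    dW = dY[:, None, :] * B[None, :, :]  # (M, K, HY*WY)\n    db = dY.copy()                       # db = dY elementwise\n    return dW, db\n\\end{verbatim}\n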
Finally, we compute the input gradient $d\bx$. A pixel $\bx_{c,i,j}$ in the input still contributes to the computation of the output activation map $\by$ in a locally-connected layer the same way as it does in a convolutional layer. However, the weights are a bit different. The formula to compute $d\bx_{c,i,j}$ is as follows
\begin{equation}
\frac{\partial E}{\partial \bx_{c,i,j}} = \sum_{p=0}^{H_K-1}{\sum_{q=0}^{W_K-1}{\sum_{m=0}^{M-1}{\frac{\partial E}{\partial \by_{m,i-p,j-q}}\frac{\partial \by_{m,i-p,j-q}}{\partial \bx_{c,i,j}}}}} = \sum_{p=0}^{H_K-1}{\sum_{q=0}^{W_K-1}{\sum_{m=0}^{M-1}{\frac{\partial E}{\partial \by_{m,i-p,j-q}} \bw_{m,c,p,q,i-p,j-q}}}}
\end{equation}
The idea is the same as computing $d\bx$ for a convolutional layer. We first want to compute a buffer matrix $\bb^*$ similar to~\pref{rbb} and then get $d\bx = Col2Im(\bb^*)$, where the element of $\bb^*$ at row $(c,p,q)$ and column $(i,j)$ corresponds to $\sum_{m=0}^{M-1}{\bw_{m,c,p,q,i,j}\,d\by_{m,i,j}}$. Observe in Figure~\ref{dual} that each element of $\bb^*$ represents a product between $\bw$ and $d\by$, moving along the kernel pixel index as the row index increases and along the output pixel index as the column index increases. The difference is that for each column, we have different weights. To compute this, we first initialize $\bb^*$ to all zeros.
Next, we iterate on $m=0,\ldots,M-1$ and arrange $\bw$ into $M$ chunks $\bw_m$ (same as in~\pref{ym2}). Then, we iterate on the rows of $\bw_m$:
\begin{equation*}
\bb^*_{k,:} = \bb^*_{k,:} + (\bw_{m})_{k,:} \odot d\by_{m,:}
\end{equation*}
where $k=0,\ldots, C \times H_K \times W_K-1$. Repeating the process $M$ times, we obtain the final result of $\bb^*$. With that, we can obtain $d\bx$ by applying $Col2Im(\bb^*)$.

\subsection{Efficient implementation with Batched GEMM}
From the last section, we see that computing the forward and backward propagation usually involves $M$ calls to the Hadamard product, which may require multiple kernel calls when implemented on a GPU; this is suboptimal. Both Intel MKL and cuBLAS support an operation called $BatchedGEMM$. $BatchedGEMM$ in its simplest form takes two input tensors $A \in \mathbb{R} ^{N \times P \times Q}$ and $B \in \mathbb{R}^{N \times Q \times R}$ and produces an output tensor $C \in \mathbb{R}^{N \times P \times R}$, where
\begin{equation*}
C_{n, :, :} = A_{n, :, :} B_{n, :, :}
\end{equation*}
On the GPU, it is one single kernel call.

To take advantage of that when computing the forward propagation, we now reshape $\bw$ to be of size $H_Y \times W_Y \times M \times C \times H_K \times W_K$. We partition it into $H_Y \times W_Y$ matrices $\bw_k$ of size $M \times (C \times H_K \times W_K)$ and iterate on $k$. Each iteration yields the $M$ results of $\by$ at the same convolution location:
\begin{equation*}
\by'_{i,j,:} = \bw_k \bb_{:,k}
\end{equation*}
where $k$ is the flattened index of the location $(i,j)$. Basically, we can implement this with one call to $BatchedGEMM(\bw, \bb^T)$, where $\bw$ is considered a 3D tensor of size $(H_Y \times W_Y) \times M \times (C \times H_K \times W_K)$ and $\bb^T$ is considered of size $(H_Y \times W_Y) \times (C \times H_K \times W_K) \times 1$. Notice that $\by'$ is of size $(H_Y \times W_Y) \times M$, so we may need to reshape it into $\by$.
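The batched forward pass can be emulated in NumPy, whose matmul broadcasts over the leading (batch) dimension in the same spirit as $BatchedGEMM$; this is a sketch with toy shapes and our own names:
\begin{verbatim}
import numpy as np

M, K, P = 4, 12, 9            # K = C*HK*WK, P = HY*WY
W = np.random.randn(P, M, K)  # weights regrouped per output pixel
B = np.random.randn(K, P)     # buffer matrix from Im2Col()

Bt = B.T.reshape(P, K, 1)     # batch of P column vectors
Yp = np.matmul(W, Bt)         # batched GEMM: result is (P, M, 1)
Y  = Yp.reshape(P, M).T       # reshape into the (M, HY*WY) output
\end{verbatim}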
For backward propagation of the weight gradients, we similarly iterate over the $H_Y \times W_Y$ output pixels and use that as the batch size for $BatchedGEMM$. For each $k=0,\ldots,H_Y \times W_Y-1$, we take a column of $d\by$ and a column of $\bb$ and compute their outer product, which yields an $M \times (C \times H_K \times W_K)$ matrix of weight gradients $d\bw_{i,j,:,:,:,:}$. The implementation is basically one call to $BatchedGEMM(d\by^T, \bb^T)$, where $d\by^T$ is considered of size $(H_Y \times W_Y) \times M \times 1$ and $\bb^T$ of size $(H_Y \times W_Y) \times 1 \times (C \times H_K \times W_K)$.

Finally, for backward propagation of the input gradients, we focus on how to compute $\bb^*$ with $BatchedGEMM$. Again, we use the slices of the input weight $\bw_k$, for $k=0,\ldots,H_Y \times W_Y -1$. Each iteration
\begin{equation*}
\bb^*_{:,k} = \bw_k^T d\by_{:,k}
\end{equation*}
generates one column of $\bb^*$. The whole operation can be wrapped in one call to $BatchedGEMM(\bw^T, d\by^T)$. Similarly, the resulting $\bb^*$ needs reshaping before applying $Col2Im$ to it.

%\section{Tiled Convolutional Layer}

\bibliographystyle{IEEEtran}
\bibliography{cnnref}

\end{document}
{"text": "\\chapter{Coordinate Systems} \\label{Ch:CoordinateSystems}\r\n\r\n%\\section{Temp}\r\n%\r\n%From IAU-76 theory, we can covert from a vector in Earth fixed IAU-76, denoted $\\mathbf{r}_{F_{76}}$, to\r\n%inertial IAU-76/FK5, denoted $\\mathbf{r}_{I_{76}}$ using the following expression.\r\n%%\r\n%\\begin{equation}\r\n%     \\mathbf{r}_{F_{76}}  = \\mathbf{W}(t) \\mathbf{R}(t) \\mathbf{N}(\\Psi,\\epsilon)\\mathbf{P}(t)  \\mathbf{r}_{I_{76}}\r\n%\\end{equation}\r\n%%\r\n%where $\\mathbf{W}(t)$,$\\mathbf{R}(t)$, $\\mathbf{N}(\\Psi,\\epsilon)$, and $\\mathbf{P}(t)$ are respectively polar motion, sidereal time, nutation, and precession.  The IERS provides corrections to $\\Psi$ and $\\epsilon$ to enable software codes that use 1976-based computer implementations to transform from ITRF to GCRF using\r\n%%\r\n%\\begin{equation}\r\n%     \\mathbf{r}_{ITRF}  = \\mathbf{W}(t) \\mathbf{R}(t) \\mathbf{N}(\\Psi + \\delta\\Psi,\\epsilon + \\delta\\epsilon)\\mathbf{P}(t)    \\mathbf{r}_{I_{GCRF}}\r\n%\\end{equation}\r\n%%\r\n%If the fixed frames of the two theories are equivalent, then we can equate the two previous expressions to arrive at\r\n%%\r\n%\\begin{equation}\r\n%     \\mathbf{r}_{I_{76}}   = \\mathbf{P}^T(t) \\mathbf{N}^T(\\Psi ,\\epsilon)\\mathbf{N}(\\Psi + \\delta\\Psi,\\epsilon + \\delta \\epsilon)\\mathbf{P}  (t)  \\mathbf{r}_{I_{GCRF}}\r\n%\\end{equation}\r\n%%\r\n%I suspect this is wrong because\r\n%%\r\n%\\begin{equation}\r\n%     \\mathbf{r}_{ITRF}  \\neq \\mathbf{r}_{F_{76}}\r\n%\\end{equation}\r\n%\r\n%There are numerous coordinate systems used in space mission\r\n%analysis, that when used appropriately can greatly simplify the work\r\n%and yield insight that is not obvious otherwise.  Some examples are\r\n%equatorial and ecliptic systems, and rotating coordinate systems\r\n%based on the relative motion of two bodies such as the Earth and\r\n%Moon.  GMAT has the capability to calculate many parameters in\r\n%different coordinate systems, and these parameters can then be used\r\n%in plots, reports, solvers, control flow statements and stopping\r\n%conditions to name a few.\r\n%\r\n%In this chapter we investigate how GMAT performs coordinate system\r\n%transformations, and how different coordinate systems are defined.\r\n%We begin by defining some notation.  Next, we look at how to\r\n%transform a vector and its first derivative from one coordinate\r\n%system to another when the coordinate systems are translating and\r\n%rotating with respect to one another.  Finally, we look at each\r\n%coordinate system defined in GMAT and discuss how to find its\r\n%rotation matrix and the first derivative of the rotation matrix to\r\n%rotate to the J2000 coordinate system.\r\n\r\n\\section{General Coordinate System \\\\ Transformations } \\index{Coordinate systems!general transformations}\r\n\r\nGMAT has the capability to take a position and velocity vector in\r\none coordinate system, and convert them to another coordinate system\r\nthat may be both translating and rotating with respect to the\r\noriginal system.  In this section we derive the equations governing\r\ncoordinate system transformations and describe the algorithm GMAT\r\nuses to transform position and velocity vectors.\r\n\r\nWe start by defining some notation.  
In Fig.~\ref{fig:Frames}, we see an illustration of a point ``$p$'' and two coordinate systems $\mathcal{F}_\mathcal{O}$ and $\mathcal{F}_\mcF$. Define the position of $p$ expressed in $\mathcal{F}_\mathcal{O}$ as $\mathbf{r}_p^\mathcal{O}$, and the position of point $p$ with respect to frame $\mathcal{F}_\mcF$ as $\mathbf{r}^\mcF_p$. \textit{It is important to note that using this notation,} $\|\mathbf{r}_p^\mathcal{O} \| \neq \|\mathbf{r}_p^\mcF\|$, \textit{because the transformation implied by the notation contains a translation from the origin of $\mathcal{F}_\mathcal{O}$ to the origin of $\mathcal{F}_\mcF$ as well as a coordinate rotation.} $\mathbf{r}_{f/o}$ is the vector from the origin of $\mathcal{F}_\mathcal{O}$ to the origin of $\mathcal{F}_\mcF$. Define the rotation matrix that rotates from $\mathcal{F}_\mathcal{O}$ to $\mathcal{F}_\mcF$ as $\mathbf{R}^{\mcF/\mathcal{O}}$. Finally, let's define the angular velocity $\mbox{\boldmath{$\omega$}}_{f/o}$ as the angular velocity of $\mathcal{F}_\mathcal{O}$ with respect to $\mathcal{F}_\mcF$. To simplify the notation, we assume that a vector is expressed in the frame denoted by the superscript. If we need to define a point ``$p$'' with respect to $\mathcal{F}_\mathcal{O}$, but express the result in $\mathcal{F}_\mcF$, we use the notation $[\mathbf{r}_p^\mathcal{O}]^\mcF$ with square brackets. In summary, we have
%
\begin{tabbing}
\index{Coordinate systems!nomenclature}
%
12345678 \= Reynoldsnumber based on length $s$ \kill
$\mathbf{r}^{\mathcal{O}}_{p}$        \> Position of point $p$ w/r/t frame $\mathcal{F}_\mathcal{O}$
                               expressed in  $\mathcal{F}_\mathcal{O}$ \\
%
$[\mathbf{r}^{\mathcal{O}}_{p} ]^\mcF$      \> Position of point $p$ w/r/t frame $\mathcal{F}_\mathcal{O}$
                                   expressed in $\mathcal{F}_\mcF$   \\
%
$\mathbf{r}_{f/o}^\mcF$        \> Position vector from origin of $\mathcal{F}_\mathcal{O}$ to origin
                         of $\mathcal{F}_\mcF$ expressed in $\mathcal{F}_\mcF$  \\
%
$\mathbf{R}^{\mcF/\mathcal{O}}$      \> Rotation matrix from frame $\mathcal{F}_\mathcal{O}$ to $\mathcal{F}_\mcF$\\
%
$\mbox{\boldmath{$\omega $}}_{f/o}^\mathcal{O}$   \>  Angular velocity of frame
$\mathcal{F}_\mathcal{O}$ w/r/t $\mathcal{F}_\mcF$,   expressed in frame
$\mathcal{F}_\mathcal{O}$ \\
%
$\mbox{\boldmath{$\omega $}}_{f/o}^\mcF$   \>  Angular velocity of
frame $\mathcal{F}_\mathcal{O}$ w/r/t $\mathcal{F}_\mcF$,  expressed in
frame
$\mathcal{F}_\mcF$ \\
\end{tabbing}


\begin{figure*}[htb]
 \centerline{
\begin{picture}(100,230)
\special{psfile= Images/Frames.eps hoffset= -160 voffset= -200
hscale=60 vscale=60}
\makebox(-140,15){$\mathcal{F}_{\mathcal{O}}$}
\makebox(175,140){$\mathcal{F}_{\mcF}$}
\makebox(-300,240){$\mathbf{r}_{o}^\mathcal{O}$}
\makebox(-265,180){$\mathbf{r}_{o/f}$}
\makebox(-165,310){$\mathbf{r}_{o}^\mcF$}
\makebox(-220,430){$o$}
\end{picture}}
\caption{ Illustration of a Translating and Rotating Coordinate
System } \label{fig:Frames}
\end{figure*}


From inspection of Fig.~\ref{fig:Frames} we can write
%
\begin{equation}
      \mathbf{r}_{o}^\mcF = \underbrace{\mathbf{R}^{\mcF/\mathcal{O}}\mathbf{r}_{o}^{\mathcal{O}}}_{Rot.} +
     \underbrace{\mathbf{r}_{o/f}^\mcF}_{Trans.}
     \label{Eq:PosTransform}
\end{equation}
%
Equation (\ref{Eq:PosTransform}) is the equation used to convert a vector known in frame $\mathcal{F}_i$ to a vector in frame $\mathcal{F}_f$, where both a rotation and a translation are required. (In the remainder of this chapter we also use the lighter subscript notation, in which Eq.~(\ref{Eq:PosTransform}) reads $\mathbf{r}_{f} = \mathbf{R}_{fi}\mathbf{r}_{i} + \mathbf{r}_{if}$.) The first term in Eq.~(\ref{Eq:PosTransform}) performs the rotation portion of the transformation. Here, $\mathbf{r}_{i}$ is the position vector w/r/t $\mathcal{F}_i$ and is expressed in $\mathcal{F}_i$. $\mathbf{R}_{fi}$ is the rotation matrix that rotates from $\mathcal{F}_i$ to $\mathcal{F}_f$. $\mathbf{r}_{if}$ is the vector that goes from the origin of $\mathcal{F}_f$ to the origin of $\mathcal{F}_i$, and is expressed in $\mathcal{F}_f$.

We also need to be able to determine the time rate of change of a vector in frame $\mathcal{F}_f$ if we know the time rate of change of the vector in $\mathcal{F}_i$. To determine the equation that describes the transformation, we take the derivative of Eq.~(\ref{Eq:PosTransform}) with respect to time.
\begin{equation}
     \frac{d\mathbf{r}_{f}}{dt} = \frac{d(\mathbf{R}_{fi}\mathbf{r}_{i})}{dt}
     +\frac{d \mathbf{r}_{if}}{dt}
\end{equation}
%
Let's use a single dot above a variable to denote the first derivative of that variable with respect to time. Then, we can expand this to obtain
%
\begin{equation}
     \dot{\mathbf{r}}_{f} = \dot{\mathbf{R}}_{fi}\mathbf{r}_{i} +
     \mathbf{R}_{fi}\dot{\mathbf{r}}_{i}
     +\dot{ \mathbf{r}}_{if}  \label{Eq:WithRdot}
\end{equation}
%
In Eq.~(\ref{Eq:WithRdot}) we see a term that contains the time derivative of the rotation matrix from $\mathcal{F}_i$ to $\mathcal{F}_f$. We can write the time derivative of $\mathbf{R}_{fi}$ as
%
\begin{equation}
     \dot{\mathbf{R}}_{fi} = \mathbf{R}_{fi}
     \mbox{\boldmath{$\omega $}}^{\mbox{x}}_{fi} =  \{\mbox{\boldmath{$\omega
     $}}^{\mbox{x}}_{fi}\}_f\mathbf{R}_{fi}\label{Eq:dRdt}
\end{equation}
%
where $\mbox{\boldmath{$\omega $}}_{fi}$ is the angular velocity of $\mathcal{F}_i$ with respect to $\mathcal{F}_f$ expressed in $\mathcal{F}_i$, and $\{\cdot\}_f$ denotes the same quantity expressed in $\mathcal{F}_f$. The skew symmetric matrix, $\boldsymbol \omega^x$, is defined as
\begin{equation}
\boldsymbol \omega ^x= \left( \begin{array}{ccc}
  0 & -\omega_z  & \omega_y \\
  \omega_z & 0 & -\omega_x \\
  -\omega_y & \omega_x & 0 \\
\end{array} \right)
\end{equation}
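Equation (\ref{Eq:dRdt}) is straightforward to verify numerically. The following sketch (our own illustration, not GMAT code) compares a finite-difference derivative of a simple spinning-frame rotation matrix against $\{\mbox{\boldmath{$\omega $}}^{\mbox{x}}_{fi}\}_f\mathbf{R}_{fi}$:
\begin{verbatim}
import numpy as np

w = 0.3                                    # F_i spins about the z-axis of F_f

def R(t):                                  # rotation matrix from F_i to F_f
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def skew(v):
    x, y, z = v
    return np.array([[0., -z, y], [z, 0., -x], [-y, x, 0.]])

t, h = 1.7, 1e-6
Rdot_fd = (R(t + h) - R(t - h)) / (2 * h)  # finite-difference derivative
Rdot_eq = skew([0., 0., w]) @ R(t)         # right-hand side of Eq. (dRdt)
assert np.allclose(Rdot_fd, Rdot_eq, atol=1e-6)
\end{verbatim}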
In summary, substituting Eq.~(\ref{Eq:dRdt}) into Eq.~(\ref{Eq:WithRdot}), we can transform a derivative vector from $\mathcal{F}_i$ to $\mathcal{F}_f$ using any of the following three equations:
%
\begin{equation}
     \dot{\mathbf{r}}_{f}  = \underbrace{\mathbf{R}_{fi}
     \mbox{\boldmath{$\omega $}}^{\mbox{x}}_{fi}\mathbf{r}_{i} + \mathbf{R}_{fi}\dot{\mathbf{r}}_{i}}_{Rot.}
     + \underbrace{\dot{\mathbf{r}}_{if}}_{Trans.} \label{Eq:Transform1}
\end{equation}
%
\begin{equation}
    \dot{\mathbf{r}}_{f}  = \underbrace{\{\mbox{\boldmath{$\omega $}}^{\mbox{x}}_{fi}\}_f\mathbf{R}_{fi}\mathbf{r}_{i} + \mathbf{R}_{fi}\dot{\mathbf{r}}_{i}}_{Rot.}
     + \underbrace{\dot{\mathbf{r}}_{if}}_{Trans.} \label{Eq:Transform2}
\end{equation}
%
\begin{equation}
     \dot{\mathbf{r}}_{f} = \underbrace{\dot{\mathbf{R}}_{fi}\mathbf{r}_{i} + \mathbf{R}_{fi}\dot{\mathbf{r}}_{i}
     }_{Rot.}+  \underbrace{\dot{\mathbf{r}}_{if}}_{Trans.}\label{Eq:Transform3}
\end{equation}
%
We choose between Eqs.~(\ref{Eq:Transform1}), (\ref{Eq:Transform2}), and (\ref{Eq:Transform3}) depending on the type of information we have, and on which frame is most convenient for expressing the angular velocity $\boldsymbol \omega_{fi}$. In general, we know $\mathbf{r}_{i}$ and $\dot{\mathbf{r}}_{i}$. To perform the transformation we need to determine $\mathbf{R}$, $\dot{\mathbf{R}}$, and $\dot{\mathbf{r}}_{if}$, and these quantities depend on $\mathcal{F}_i$ and $\mathcal{F}_f$.

One of the difficulties in implementing coordinate system transformations in GMAT is that we often can't calculate $\mathbf{R}_{fi}$ and $\dot{\mathbf{R}}_{fi}$ directly. For example, it is nontrivial to directly calculate the rotation matrix from the Earth fixed frame to the Moon fixed frame. Hence, we need to choose a convenient intermediate coordinate system. We choose the axis system defined by Earth's mean equinox and mean equator at the J2000 epoch, denoted $\mathcal{F}_{J_{2k}}$, as the intermediate reference frame for all transformations that require an intermediate transformation. This choice is motivated by the fact that most of the data needed to calculate $\mathbf{R}$ and $\dot{\mathbf{R}}$ is given in a form that makes it fast and convenient to calculate $\mathbf{R}_{J_{2k},i}$ and $\dot{\mathbf{R}}_{J_{2k},i}$.

The steps taken to perform a general coordinate transformation in GMAT are described below and illustrated in Fig.~\ref{fig:RotSequence}. \index{Coordinate systems!transformation algorithm} We start with a vector and its first derivative known in frame $\mathcal{F}_i$, and wish to determine the vector and its first derivative with respect to frame $\mathcal{F}_f$. However, we assume that the transformation to go directly from $\mathcal{F}_i$ to $\mathcal{F}_f$ is not known.

The first step in the process is to perform a rotation from $\mathcal{F}_i$ to $\mathcal{F}_{J_{2k}}$. We define this intermediate system as $\mathcal{F}_1$. No translation is performed in step one.
Using only the rotation portions of Eqs.~(\ref{Eq:PosTransform}) and (\ref{Eq:Transform3}) we see that
%
\begin{equation}
          \left\{\mathbf{r}_i\right\}_{1} = \mathbf{R}_{J_{2k},i}
          \mathbf{r}_{i} \label{Eq:PosRot1}
\end{equation}
%
\begin{equation}
          \left\{\dot{\mathbf{r}}_i\right\}_1 = \dot{\mathbf{R}}_{J_{2k},i}\mathbf{r}_{i} +
          \mathbf{R}_{J_{2k},i}\dot{\mathbf{r}}_{i} \label{Eq:VelRot1}
\end{equation}
%
The second step is to perform a translation from the origin of $\mathcal{F}_i$ to the origin of $\mathcal{F}_f$. We define this second intermediate system as $\mathcal{F}_2$. $\mathcal{F}_2$ has the same origin as $\mathcal{F}_f$ but the same axes as $\mathcal{F}_{J_{2k}}$. From inspection of Fig.~\ref{fig:RotSequence} we can see that
%
\begin{equation}
     \left\{\mathbf{r}_i\right\}_{1}  = \left\{\mathbf{r}_{Ri}\right\}_{J_{2k}}+ \left\{\mathbf{r}_{fR}\right\}_{J_{2k}} +
     \left\{\mathbf{r}_f\right\}_{2}
\end{equation}
%
Solving for $\left\{\mathbf{r}_f\right\}_{2}$ we obtain
%
\begin{equation}
     \left\{\mathbf{r}_f\right\}_{2} =  \left\{\mathbf{r}_i\right\}_{1} -
      \left\{\mathbf{r}_{Ri}\right\}_{J_{2k}} - \left\{
      \mathbf{r}_{fR}\right\}_{J_{2k}}\label{Eq:Translater}
\end{equation}
%
where $\left\{\mathbf{r}_{Ri}\right\}_{J_{2k}}$ is the vector from the origin of $\mathcal{F}_i$ to the origin of $\mathcal{F}_R$ expressed in $\mathcal{F}_{J_{2k}}$. Similarly, $\left\{\mathbf{r}_{fR}\right\}_{J_{2k}}$ is the vector from the origin of $\mathcal{F}_R$ to the origin of $\mathcal{F}_f$ expressed in $\mathcal{F}_{J_{2k}}$. Because the vector $\left\{\mathbf{r}_f\right\}_2$ is expressed in an inertial system, we can take the derivative of Eq.~(\ref{Eq:Translater}) to obtain
%
\begin{equation}
      \left\{\mathbf{v}_f\right\}_{2} =  \left\{\mathbf{v}_i\right\}_{1} - \left\{ \mathbf{v}_{Ri} \right\}_{J_{2k}} - \left\{\mathbf{v}_{fR}\right\}_{J_{2k}}
\end{equation}
%
where $\left\{\mathbf{v}_{Ri}\right\}_{J_{2k}}$ is the velocity of the origin of $\mathcal{F}_R$ w/r/t the origin of $\mathcal{F}_i$. Similarly, $\left\{\mathbf{v}_{fR}\right\}_{J_{2k}}$ is the velocity of the origin of $\mathcal{F}_f$ w/r/t the origin of $\mathcal{F}_R$.
Finally, we perform a rotation from $\mathcal{F}_{J_{2k}}$ to $\mathcal{F}_f$ about the origin of $\mathcal{F}_f$ to obtain the desired quantities.
%
\begin{equation}
    \mathbf{r}_f = \mathbf{R}_{f,J_{2k}}
    \left\{\mathbf{r}_f\right\}_{2} \label{Eq:PosRot2}
\end{equation}
%
\begin{equation}
  \dot{\mathbf{r}}_f = \dot{\mathbf{R}}_{f,J_{2k}}\left\{\mathbf{r}_f\right\}_{2} + \mathbf{R}_{f,J_{2k}}\left\{\mathbf{v}_f\right\}_{2}
   \label{Eq:VelRot2}
 \end{equation}
%
\begin{figure*}[htb]
 \centerline{
\begin{picture}(100,320)
\special{psfile= Images/RotSequence.eps hoffset= -200 voffset= -160
hscale=80 vscale=80}
%
\makebox(-175,315){$\mathcal{F}_i$}
\makebox(350,295){$\mathcal{F}_f$}
\makebox(-140,410){$\mathcal{F}_2$}
\makebox(-440,220){$\mathcal{J}_{2k}$}
\makebox(-830,460){$\mathcal{F}_1$}
\makebox(-730,580){$\mathbf{r}_{i}$}
\makebox(-490,560){$\mathbf{r}_{f}$}
\makebox(-740,290){$\mathbf{r}_{Ri}$}
\makebox(-510,290){$\mathbf{r}_{fR}$}
\end{picture}}\vspace{ -1 in} \caption{ General Coordinate System Transformation Approach in GMAT} \label{fig:RotSequence}
\index{Coordinate systems!transformation algorithm}
\end{figure*}
%


\section{Pseudo-Rotating Coordinate\\ Systems}\index{Pseudo-Rotating
Coordinate Systems} \label{Sec:PseudoRotating}

In mission analysis, it is sometimes useful to consider a rotating coordinate system to be inertial at a given instant in time. In this case, we ignore the effects of rotation on the velocity. Let's call systems where we neglect the rotational effects on velocity pseudo-rotating coordinate systems.

To perform transformations to a pseudo-rotating coordinate system, the equations to convert a position vector do not change and are given by Eqs.~(\ref{Eq:PosRot1}) and (\ref{Eq:PosRot2}). However, the velocity conversion equations change because we neglect the terms that contain $\dot{\mathbf{R}}$. For pseudo-rotating coordinate systems, the velocity transformations shown in Eqs.~(\ref{Eq:VelRot1}) and (\ref{Eq:VelRot2}) become
%
\begin{equation}
          \left\{\frac{d\mathbf{r}_i}{dt}\right\}_1 =  \mathbf{R}_{J_{2k},i}\frac{d\mathbf{r}_{i}}{dt}
\end{equation}
%
and
%
\begin{equation}
  \frac{d\mathbf{r}_f}{dt} =  \mathbf{R}_{f,J_{2k}}\left\{\mathbf{v}_f\right\}_{2}
\end{equation}
%

To perform the transformations described in the last few sections, we need to be able to calculate the rotation matrix between any coordinate system and $\mathcal{F}_{J_{2k}}$, and the derivative of the rotation matrix. In the following sections we calculate these matrices for the systems used in GMAT. We assume that we want the transformation from some generic frame $\mathcal{F}_i$ to $\mathcal{F}_{J_{2k}}$, which is the Earth Mean J2000 Equatorial (MJ2000Eq) system. The rotation matrix from $\mathcal{F}_{J_{2k}}$ to $\mathcal{F}_i$ can then be found from the simple relationships
%
\begin{equation}
      \mathbf{R}_{i,J_{2k}} = \mathbf{R}_{J_{2k},i}^{-1} = \mathbf{R}_{J_{2k},i}^{T}
\end{equation}
%
and
%
\begin{equation}
      \dot{\mathbf{R}}_{i,J_{2k}} = \dot{\mathbf{R}}_{J_{2k},i}^{T}
\end{equation}
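The general transformation algorithm can be summarized in a short sketch. This is a schematic of the data flow only; the function and variable names are ours, not GMAT's, and all origin data are assumed to be expressed in the MJ2000Eq axes:
\begin{verbatim}
import numpy as np

def convert(r_i, v_i, R_J2k_i, Rdot_J2k_i, R_f_J2k, Rdot_f_J2k,
            r_fi_J2k, v_fi_J2k):
    # Step 1: rotate from F_i to the MJ2000Eq axes (PosRot1/VelRot1)
    r1 = R_J2k_i @ r_i
    v1 = Rdot_J2k_i @ r_i + R_J2k_i @ v_i
    # Step 2: translate from the origin of F_i to the origin of F_f;
    # r_fi_J2k = r_Ri + r_fR, the origin offset expressed in MJ2000Eq
    r2 = r1 - r_fi_J2k
    v2 = v1 - v_fi_J2k
    # Step 3: rotate from the MJ2000Eq axes to F_f (PosRot2/VelRot2)
    r_f = R_f_J2k @ r2
    v_f = Rdot_f_J2k @ r2 + R_f_J2k @ v2
    return r_f, v_f
\end{verbatim}
For a pseudo-rotating system (Sec.~\ref{Sec:PseudoRotating}), the same sketch applies with the two $\dot{\mathbf{R}}$ terms dropped.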
\clearpage

\section{ITRF and ICRF}

The computations for the ICRF and ITRF transformations in GMAT are taken from Kaplan\cite{Kaplan:05} and Coppola \emph{et al.}\cite{Coppola:etal:05} and employ the IAU 2000A nutation and IAU 2006 precession models.  The transformation is performed using three intermediate rotations as follows:
%
\begin{equation}
     \mathbf{R} = \mathbf{C}^T\mathbf{R}_3(-\theta)\mathbf{W}
\end{equation}
%
where $\mathbf{W}$ is the polar motion matrix, $\mathbf{R}_3(-\theta)$ is a rotation about the third axis through the Earth rotation angle $\theta$ (see Section \ref{sec:BasicRotationMatrices}), and $\mathbf{C}$ is a single matrix that captures precession, nutation, and frame bias. The time derivative of the rotation matrix assumes that the only significant time variation of the rotation matrix is due to the Earth's spin and is computed using
%
\begin{equation}
     \dot{\mathbf{R}} = \mathbf{C}^T\mathbf{R}_3(-\theta)\boldsymbol{\omega}_E^x\mathbf{W}
\end{equation}
%
where
%
\begin{equation}
   \boldsymbol{\omega}_E^x =
   \left(\begin{array} {ccc}
     0 & -\omega_e & 0 \\
     \omega_e & 0 & 0 \\
     0 & 0 & 0
   \end{array}\right)
\end{equation}
%
and $\omega_e$ is computed using Eq.~(\ref{Eq:EarthAngularVelocity}).

$\mathbf{W}$ is computed from Eq.~6.15 in Kaplan\cite{Kaplan:05} as shown below.
%
\begin{equation}
   \mathbf{W} = \mathbf{R}_3(-s')\mathbf{R}_2(x_p)\mathbf{R}_1(y_p)
\end{equation}
%
The variable $s'$ is computed from
%
\begin{equation}
    s' =  (-47 \mu\mbox{as}) T_{TT}
\end{equation}
%
where $T_{TT}$ is given by Eq.~(\ref{Eq:T_TTComputation}), and $x_p$ and $y_p$ are interpolated using third-order Lagrange interpolation from the Earth Orientation Parameters (EOP) files provided by the IERS. The Earth rotation angle is computed from
%
\begin{equation}
    \theta = 2\pi(0.7790572732640 + 1.00273781191135448(JD_{UT1} - 2451545.0))
\end{equation}
%
where $JD_{UT1}$ is the Julian date expressed in UT1, computed using Eq.~(\ref{Eq:UT1ToUTC}).

$\mathbf{C}^T$ is computed using
%
\begin{equation}
    \mathbf{C}^T = \left(\begin{array}{ccc}
    1 - b X^2 & -b X Y & X\\
    -b X Y & 1-b Y^2 & Y\\
    -X & -Y & 1 - b(X^2 + Y^2)
    \end{array}\right) \mathbf{R}_{3}(s)
\end{equation}
%
where
%
\begin{equation}
    b = \frac{1}{1 + \sqrt{1 - X^2 - Y^2}}
\end{equation}
%
The variables $X$ and $Y$ are respectively the $x$- and $y$-components of the Celestial Intermediate Pole (CIP) unit vector, and $s$ is called the Celestial Intermediate Origin (CIO) locator. The computation of $X$, $Y$, and $s$ requires evaluating series expansions with thousands of terms and many trigonometric function evaluations and is computationally expensive.  Vallado \emph{et
al.} show that it is possible to precompute $X$, $Y$, and $s$ at one day intervals and interpolate the values using ninth-order Lagrange interpolation, which provides values that are accurate to within the error of the theory itself. The interpolation of $X$, $Y$, and $s$ is over two orders of magnitude faster than the series evaluation. Interpolation of data at one day intervals is possible because all physical effects with a period of two days or less are not included in the theory for $X$, $Y$, $s$ and are accounted for in the daily observations provided by the IERS in the EOP files\cite{Kaplan:05}. GMAT interpolates tabulated values of $X$, $Y$, $s$ created using the IAU SOFA routines. $s$ is computed using the routine iauS06a.c, which uses the 2000A nutation model and 2006 precession model. $X$ and $Y$ are computed using the IAU SOFA routine called iauXy06, which also uses the 2000A nutation model and 2006 precession model.


\section{Transformation from ICRF to MJ2000Eq}

With the inclusion of ICRF-based systems, GMAT supports two types of axis systems: (1) axis systems that are based on IAU-1976 theory (called MJ2000Eq in GMAT), and (2) axis systems based on IAU-2000 theory (called ICRF in GMAT).  Rotation from ICRF to MJ2000Eq is performed using the Frame Bias matrix, $\mathbf{B}$, given by Kaplan\cite{Kaplan:05}.
%%
\begin{equation}
   \mathbf{B} = \left(
   \begin{array}{ccc}
       1 - 0.5(d\alpha_0^2 + \xi_0^2) & d \alpha_0 & -\xi_0\\
       -d \alpha_0 - \eta_0 \xi_0 & 1 - 0.5(d \alpha_0^2 + \eta_0^2) & -\eta_0\\
      \xi_0 - \eta_0 d \alpha_0 & \eta_0+\xi_0 d \alpha_0 & 1 - 0.5(\eta_0^2 + \xi_0^2)
   \end{array}
   \right)
\end{equation}
%
where $d\alpha_0 = -14.6 \mbox{ mas}$, $\xi_0 = -16.6170 \mbox{ mas}$, and $\eta_0 = -6.8192 \mbox{ mas}$.

The rotation between any two axis systems, defined here as $\mathcal{A}_1$ and $\mathcal{A}_2$, is performed in three steps, sketched in code after the list:
%
\begin{enumerate}
   \item Convert from $\mathcal{A}_1$ to the base axes for $\mathcal{A}_1$.
   \item If $\mathcal{A}_1$ and $\mathcal{A}_2$ have different base axis systems, use $\mathbf{B}$ to convert from the base axes of $\mathcal{A}_1$ to the base axes of $\mathcal{A}_2$.
   \item Convert from the base axes for $\mathcal{A}_2$ to $\mathcal{A}_2$.
\end{enumerate}
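A minimal sketch of this three-step procedure is given below. The helper names are hypothetical (ours); only the frame bias values come from the text:
\begin{verbatim}
import numpy as np

MAS = np.deg2rad(1.0 / 3.6e6)        # one milliarcsecond in radians
da0, xi0, eta0 = -14.6 * MAS, -16.6170 * MAS, -6.8192 * MAS

# Frame bias matrix B from the expression above
B = np.array([
    [1 - 0.5*(da0**2 + xi0**2),  da0,                        -xi0],
    [-da0 - eta0*xi0,            1 - 0.5*(da0**2 + eta0**2), -eta0],
    [xi0 - eta0*da0,             eta0 + xi0*da0,             1 - 0.5*(eta0**2 + xi0**2)]])

def rotate_axes(r_A1, R_A1_base, R_A2_base, same_base=True):
    r = R_A1_base @ r_A1             # step 1: A1 -> base axes of A1
    if not same_base:
        r = B @ r                    # step 2: swap base systems (use B.T
                                     # for the opposite direction)
    return R_A2_base.T @ r           # step 3: base axes of A2 -> A2
\end{verbatim}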
\section{The $\mathcal{F}_{J_{2k}}$ Inertial System and FK5 Reduction}

It is well known that Newton's laws must be applied in an inertial system. The struggle to determine a truly inertial system has continued since Newton's time. In reality, the best we can do is approximate a truly inertial system in which to apply Newton's laws. In GMAT that system is the FK5 system, here called $\mathcal{F}_{J_{2k}}$. The $\mathcal{F}_{J_{2k}}$ system is referenced to the Earth's equator and the Earth's orbit about the Sun. Because neither of these two planes is fixed in space, we must pick an epoch and define an inertial system based on the geometry at that epoch. This epoch is commonly chosen as the J2000 epoch. In this section, we present the definition of the $\mathcal{F}_{J_{2k}}$ system, and discuss the transformation from $\mathcal{F}_{J_{2k}}$ to the Earth Fixed system. This transformation is called FK5 reduction. We begin with a conceptual discussion of how the Earth's spin axis moves with respect to inertial space. We conclude this section with a presentation of the mathematical theory of FK5 reduction.



\subsection{Overview of FK5 Reduction} \index{FK5 Reduction!overview}
\index{FK5 Reduction!Earth fixed system}

The inertial system most commonly used in astrodynamics as of this writing is the FK5 system. We call this system $\mathcal{F}_{J_{2k}}$. The $\mathcal{F}_{J_{2k}}$ system is used for many calculations in GMAT. The two most important are integrating the equations of motion, and serving as an intermediate system for coordinate system transformations. $\mathcal{F}_{J_{2k}}$ is used throughout the astrodynamics community as the coordinate system in which to represent time varying data such as planetary ephemerides and planetary pole and prime meridian locations.

The rigorous mathematical definition of $\mathcal{F}_{J_{2k}}$ is complex, so let's start with a simple qualitative explanation. The nominal $z$-axis of $\mathcal{F}_{J_{2k}}$ is normal to the Earth's equatorial plane. The nominal $x$-axis points along the line formed by the intersection of the Earth's equatorial plane and the ecliptic plane, in the direction of Aries. The nominal $y$-axis completes the right-handed system. Both the equatorial and ecliptic planes move slowly with respect to inertial space. The rigorous definition of FK5 uses the mean planes of the ecliptic and equator at the J2000 epoch. We'll take a closer look at the mathematical definitions of the mean ecliptic and equator below.


FK5 reduction is the transformation that rotates a vector expressed in the $\mathcal{F}_{J_{2k}}$ system to the Earth Fixed coordinate system. To perform this transformation obviously requires an understanding of how the Earth's orientation changes with respect to $\mathcal{F}_{J_{2k}}$. The time varying orientation of the Earth is complex, and is due to complicated interactions between the Earth and its external environment as well as complicated internal dynamics. In fact, the dynamic orientation of the Earth is so complicated that we can't model it completely, and FK5 reduction is a combination of dynamics models and empirical observations that are updated daily.

We describe the orientation of the Earth using three types of motion. The first type, including precession and nutation, describes how the Earth's principal axis of inertia moves with respect to inertial space\cite{seidelmann}. The motion is illustrated in Fig.~\ref{fig:FK5FigOne}. The principal axis of inertia is defined as the Celestial Ephemeris Pole\index{Celestial Ephemeris Pole}, and because the Earth's mass distribution changes with time, the Celestial Ephemeris Pole is not constant with respect to the Earth's surface. Precession is the coning motion that the Celestial Ephemeris Pole makes around the ecliptic north pole. The other principal component of the motion of the Celestial Ephemeris Pole is commonly called nutation and is the oscillation in the angle between the Celestial Ephemeris Pole and the north ecliptic pole. The theories of precession and nutation come from dynamics models of the Earth's motion. The second type of motion is called sidereal time, and represents a rotation about the Celestial Ephemeris Pole. The sidereal time model is a combination of theory and observation.
The third motion is that of the Earth's instantaneous spin axis with respect to the Earth's surface. As we'll see below, the Earth's spin axis is not constant with respect to the Earth's crust, and its motion is called polar motion. A portion of polar motion is due to complicated dynamics, and a portion is due to unmodelled errors in nutation. Polar motion is determined from observation. Now that we've had a brief introduction to precession, nutation, sidereal time, and polar motion, let's look at each in more detail.

\subsubsection{Precession}\index{FK5 Reduction!precession}

As we mentioned above, precession is the coning motion of the Celestial Ephemeris Pole about the ecliptic north pole and is illustrated in Fig.~\ref{fig:FK5FigOne}. The motion is caused by two primary effects. The first is the motion of the ecliptic plane due to the gravitational effects of the Sun, Moon, and planets on the Earth's orbit, and is called planetary precession. If the Earth's equator were fixed in inertial space, the effects of planetary precession would cause a precession of the equinox of about 12" per century and a decrease in the obliquity of the ecliptic of about 47" per century \cite{seidelmann}. The second cause of precession is the gravitational attraction of the Sun and Moon on the irregular mass distribution of the Earth. This causes a change in the orientation of the Earth's equatorial plane with respect to inertial space with a smooth, long-period motion with a period of 26,000 years. The combined effects of planetary and lunisolar precession are called general precession and account for the secular and long period motion of the Celestial Ephemeris Pole (the short period motion is called nutation). \index{FK5 Reduction!secular motion} \index{FK5 Reduction!long period motion} The secular and long period motion is often used to define a mean equator and equinox, because it does not contain the short period motion that is modelled using nutation. Precession is modelled using three cubic equations that are shown in Sec.~\ref{Sec:PrecessionTheory}.

\subsubsection{Nutation}\index{FK5 Reduction!nutation}

Nutation is the most complex motion in FK5 reduction. According to Seidelmann\cite{seidelmann}, nutation is ``the short period motion of the Earth's rotation axis with respect to a space-fixed coordinate system.'' Nutation is actually a superposition of motions with many different periods, the longest of which is 18.6 years and is associated with the regression of the node of the Moon's orbit. There are nutation effects due to the gravitational torque of the Sun, Moon, and planets on the Earth's irregular mass distribution. There are also nutation effects due to the fact that the Earth is not a rigid body. Nutation motion has an amplitude of about 9" and is usually represented as the sum of two components, one in longitude and one in obliquity.

\clearpage

Nutation is modelled by separating the free and forced motion of the Earth. The forcing terms are due to torques from the Sun, Moon, and planets. The free terms are determined by observation because they are beyond our current modelling abilities. The resulting theory is a series expansion that contains coefficients and is a function of the location of the Sun, Moon, and planets.
Nutation is intimately connected with polar motion; in fact, as we'll see in a later section, errors in nutation modelling are captured in polar motion measurements.
%
\begin{figure}[h]
    \begin{picture}(100,200)
       \special{psfile= Images/NutPrec.eps hoffset= 0 voffset=
       0
       hscale=10 vscale=10}
    \end{picture}
    \makebox(-180,90){$\Upsilon$} \makebox(-200,90){}
    \caption{Inertial Motion of Earth's Spin Axis}
    \label{fig:FK5FigOne}
\end{figure}
%

\clearpage

\subsubsection{Sidereal Time}\index{FK5 Reduction!sidereal time}

%-------------------------------------------------------------------------
%------------------------------------------------------------------------
\begin{figure*}\begin{picture}(100,250)\begin{picture}(100,250)
\makebox(500,250){$\mathbf{R}_{IF} =
\mathbf{PREC}^T\mathbf{NUT}^T\mathbf{ST}^T\mathbf{PM}^T =
\underbrace{\mathbf{R}_3^T(-\zeta)\mathbf{R}_2^T(\Theta)\mathbf{R}_3^T(-z)}_{\mathbf{PREC}^T}
%
\underbrace{\mathbf{R}_1^T(\bar{\epsilon})\mathbf{R}_3^T(-\Delta
\Psi)\mathbf{R}_1^T({-\epsilon})}_{\mathbf{NUT}^T}
%
\underbrace{\mathbf{R}_3^T(\theta_{AST})}_{\mathbf{ST}^T}
%
\underbrace{\mathbf{R}_1^T(-y_p)\mathbf{R}^T_2(-x_{p})}_{\mathbf{PM}^T}$}
%%
\makebox(-510,160){$\mathbf{R}_{FI} = \mathbf{PM}\hspace{.05
in}\mathbf{ST}\hspace{.05 in}\mathbf{NUT}\hspace{.05
in}\mathbf{PREC} =
\overbrace{\mathbf{R}_2(-x_{p})\mathbf{R}_1(-y_p)}^{\mathbf{PM}}
%
\overbrace{\mathbf{R}_3(\theta_{AST})}^{\mathbf{ST}}
%
\overbrace{\mathbf{R}_1({-\epsilon})\mathbf{R}_3(-\Delta
\Psi)\mathbf{R}_1(\bar{\epsilon})}^{\mathbf{NUT}}
%
\overbrace{\mathbf{R}_3(-z)\mathbf{R}_2(\Theta)\mathbf{R}_3(-\zeta)}^{\mathbf{PREC}}$}
%
\makebox(-783,00){FK5} \makebox(-772,18){MODEq}
\makebox(-780,40){MODEc} \makebox(-790,60){TODEc}
\makebox(-796,77){TODEq}\makebox(-810,98){PEF}
\makebox(-810,120){ITRF}
%
\makebox(-820,300){FK5} \makebox(-810,320){MODEq}
\makebox(-817,340){MODEc}
\makebox(-826,360){TODEc}
\makebox(-832,380){TODEq}\makebox(-846,400){PEF}
\makebox(-847,420){ITRF}
%
\drawline[10](-400,0)(-55,0)\drawline[10](-55,0)(-55,65)
\drawline[10](-400,10)(-158,10)\drawline[10](-158,10)(-158,65)
\drawline[10](-400,20)(-185,20)\drawline[10](-185,20)(-185,65)
\drawline[10](-400,30)(-230,30)\drawline[10](-230,30)(-230,65)
\drawline[10](-400,40)(-260,40)\drawline[10](-260,40)(-260,65)
\drawline[10](-400,50)(-310,50)\drawline[10](-310,50)(-310,65)
\drawline[10](-400,60)(-383,60)\drawline[10](-383,60)(-383,65)
%
\drawline[10](-398,150)(-385,150)\drawline[10](-385,150)(-385,140)
\drawline[10](-398,160)(-285,160)\drawline[10](-285,160)(-285,140)
\drawline[10](-398,170)(-258,170)\drawline[10](-258,170)(-258,140)
\drawline[10](-398,180)(-210,180)\drawline[10](-210,180)(-210,140)
\drawline[10](-398,190)(-173,190)\drawline[10](-173,190)(-173,140)
\drawline[10](-398,200)(-130,200)\drawline[10](-130,200)(-130,140)
\drawline[10](-398,210)(-40,210)\drawline[10](-40,210)(-40,140)
%
\end{picture}\end{picture}\caption{Intermediate Transformations and Coordinate Systems in FK5 Reduction}
\end{figure*}
%-------------------------------------------------------------------------
%------------------------------------------------------------------------

\subsection{Precession Calculations} \index{FK5 Reduction!precession}\label{Sec:PrecessionTheory}

The precession angles $\zeta$, $\Theta$, and $z$ and the precession matrix $\mathbf{P}$ are computed as functions of $T_{TDB}$:

\begin{equation}
   JD_{TDB} \approx JD_{TT}
\end{equation}
%
\begin{equation}
   T_{TDB} = \frac{JD_{TDB} - 2,451,545.0}{36525}
\end{equation}
%
\begin{equation}
    \zeta = 2306.2181^{"} T_{TDB} +0.30188 T_{TDB}^2 + 0.017998
    T_{TDB}^3
\end{equation}
%
\begin{equation}
    \Theta = 2004.3109^{"} T_{TDB} -0.42665 T_{TDB}^2 - 0.041833
    T_{TDB}^3
\end{equation}
%
\begin{equation}
    z = 2306.2181^{"}T_{TDB} + 1.09468 T_{TDB}^2 + 0.018203
    T_{TDB}^3
\end{equation}
%
\begin{equation}
     \mathbf{P} =
     \mathbf{R}_3(-z)\mathbf{R}_2(\Theta)\mathbf{R}_3(-\zeta)
\end{equation}

\subsection{Nutation Calculations} \index{FK5 Reduction!nutation} \label{Sec:NutationTheory}

GMAT has the ability to use either the 1980 IAU Theory of Nutation or the IERS 1996 Theory of Nutation. There are some calculations that are common to both, so let's look at them first. The mean obliquity of the ecliptic, $\bar{\epsilon}$, is given in arcseconds by

\begin{equation}
    \begin{split}
    \bar{\epsilon} = & 84381.448 - 46.8150T_{TDB} - 0.00059T_{TDB}^2 \\ & +
    0.001813T_{TDB}^3
    \end{split}
\end{equation}

As we mentioned previously, Earth's nutation is caused by the combined gravitational effect of the Moon and Sun, so we would expect the time dependent locations of the Moon and Sun to appear in the equations for Earth nutation. The theories of nutation described below take the positions of the Moon and Sun into account by modelling the mean anomalies of the Moon and Sun, $l$ and $l'$ respectively, the mean argument of latitude of the Moon, $F$, the difference between the mean longitudes of the Sun and Moon, $D$, and the mean longitude of the ascending node of the Moon's orbit, $\Omega$.
The equations used to determine these values as functions of $T_{TDB}$ are:
%
\begin{equation}
      \begin{split}
      l = & 134.96340251^{\circ} + (1717915923.2178 T_{TDB} + \\ & 31.8792 T_{TDB}^2 + 0.051635 T_{TDB}^3 - 0.00024470
      T_{TDB}^4)^{"}
      \end{split}
\end{equation}
%
\begin{equation}
      \begin{split}
     l' = & 357.52910918^{\circ} + ( 129596581.0481T_{TDB}- \\ & 0.5532
      T_{TDB}^2 - 0.000136T_{TDB}^3 - 0.00001149T_{TDB}^4)^{"}
      \end{split}
\end{equation}
%
\begin{equation}
      \begin{split}
     F = & 93.27209062^{\circ} + (1739527262.8478T_{TDB}
     - \\ &
       12.7512T_{TDB}^2 + 0.001037T_{TDB}^3 + 0.00000417T_{TDB}^4)^{"}
      \end{split}
\end{equation}
%
\begin{equation}
      \begin{split}
      D = & 297.85019547^{\circ} + (1602961601.2090T_{TDB} -
      \\ &
       6.3706T_{TDB}^2 + 0.006593T_{TDB}^3 - 0.00003169T_{TDB}^4)^{"}
      \end{split}
\end{equation}
%
\begin{equation}
      \begin{split}
      \Omega  = & 125.04455501^{\circ} + (  -6962890.2665T_{TDB}
      + \\ &
      7.4722T_{TDB}^2 + 0.007702T_{TDB}^3 - 0.00005939T_{TDB}^4)^{"}
      \end{split}
\end{equation}

\subsubsection{1980 Nutation Theory} \index{FK5 Reduction!1980 nutation theory}

The 1980 IAU theory models the nutation in longitude and obliquity with a 106-term trigonometric series:

\begin{equation}
     \Delta \Psi_{1980} = \sum_{i=1}^{106}\left( A_i + B_i T_{TDB}
     \right) \sin{a_p}
\end{equation}
%
\begin{equation}
     \Delta \epsilon_{1980} = \sum_{i=1}^{106}\left( C_i + D_i T_{TDB}
     \right) \cos{a_p}
\end{equation}
%
\begin{equation}
     a_p = a_{1i} l + a_{2i} l' + a_{3i} F+
     a_{4i} D + a_{5i}\Omega
\end{equation}
%
\begin{equation}
     \mathbf{N} = \mathbf{R}_1(-\epsilon) \mathbf{R}_3(-\Delta
     \Psi)\mathbf{R}_1(\bar \epsilon)
\end{equation}

\subsubsection{1996 Nutation Theory}\index{FK5 Reduction!1996 nutation theory}

The 1996 theory of nutation published by the IERS is a higher fidelity model of Earth nutation. There are two primary differences between the 1980 IAU theory and the 1996 IERS theory. The first difference is that the 1996 theory uses a 263-term series expansion for the lunisolar effects, whereas the 1980 theory uses a 106-term series. The second difference is that the 1996 theory has a second series expansion to account for the effects of nutation caused by the more massive planets. The planetary series expansion is a 118-term series. Let's begin with the equations for the lunisolar effects on Earth nutation, according to the 1996 IERS theory:
%
\begin{equation}
     \Delta \Psi_{1996} = \sum_{i=1}^{263}\left[\left( A_i + B_i T_{TDB}
     \right) \sin{a_p} +
     E_i\cos{a_p}\right]
\end{equation}
%
\begin{equation}
     \Delta \epsilon_{1996} = \sum_{i=1}^{263}\left[\left( C_i + D_i T_{TDB}
     \right) \cos{a_p} + F_i\sin{a_p}\right]
\end{equation}
%
\begin{equation}
     a_p = a_{1i} l + a_{2i} l' + a_{3i} F +
     a_{4i} D + a_{5i}\Omega
\end{equation}
%
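Both the 1980 and 1996 luni-solar series have the same structure, so a single routine can evaluate either one given its coefficient table. The sketch below is our own illustration; the two coefficient rows shown are the leading terms of the 1980 series (in units of $10^{-4}$ arcseconds) and the argument values are arbitrary:
\begin{verbatim}
import numpy as np

def nutation_series(T, coeffs, mult, args):
    # coeffs: one row (A_i, B_i, C_i, D_i) per term
    # mult:   one row (a_1i, ..., a_5i) per term
    # args:   (l, l', F, D, Omega) in radians at the epoch of interest
    ap = np.asarray(mult) @ np.asarray(args)    # argument a_p per term
    A, B, C, D = np.asarray(coeffs).T
    dpsi = np.sum((A + B * T) * np.sin(ap))     # Delta Psi
    deps = np.sum((C + D * T) * np.cos(ap))     # Delta epsilon
    return dpsi, deps

dpsi, deps = nutation_series(
    0.24,
    coeffs=[[-171996.0, -174.2, 92025.0, 8.9],
            [-13187.0, -1.6, 5736.0, -3.1]],
    mult=[[0, 0, 0, 0, 1], [0, 0, 2, -2, 2]],
    args=(2.35, 6.24, 1.63, 5.20, 2.18))
\end{verbatim}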
To calculate the planetary effects on nutation, we begin by calculating the mean heliocentric longitudes of the planets. Only the effects of Venus (V), Mars (M), Jupiter (J), and Saturn (S) are included in the theory. We require the Earth's (E) mean longitude also. The mean longitudes are calculated using:
%
\begin{eqnarray}
    \lambda_{V} &=& 181.979800853^\circ + 58,517.8156748T_{TDB} \nonumber \\
    %
    \lambda_{E} &=& 100.466448494^\circ + 35,999.3728521T_{TDB} \nonumber \\
    %
    \lambda_{M} &=& 355.433274605^\circ + 19,140.299314T_{TDB} \nonumber \\
    %
    \lambda_{J} &=& 34.351483900^\circ + 3,034.90567464T_{TDB} \nonumber \\
    %
    \lambda_{S} &=& 50.0774713998^\circ + 1,222.11379404T_{TDB} \nonumber
\end{eqnarray}
%
The general precession in longitude, $p_a$, is calculated using
%
\begin{equation}
    p_a = 1.39697137214^\circ T_{TDB} + 0.0003086T_{TDB}^2
\end{equation}
%
Finally, the planetary terms are calculated using:
%
\begin{equation}
     \Delta \Psi_{pl} = \sum_{i=1}^{118}\left( A_i + B_i T_{TDB}
     \right) \sin{a_{pl}}
\end{equation}
%
\begin{equation}
     \Delta \epsilon_{pl} = \sum_{i=1}^{118}\left( C_i + D_i T_{TDB}
     \right) \cos{a_{pl}}
\end{equation}
%
\begin{eqnarray}
     a_{pl} = &a_{1i} \lambda_V + a_{2i} \lambda_E + a_{3i} \lambda_M + a_{4i}\lambda_J + a_{5i} \lambda_S + \nonumber\\
     &a_{6i} p_a + a_{7i}D  + a_{8i}F +
    a_{9i} l +  a_{10i}\Omega
\end{eqnarray}
%
\begin{equation}
     \Delta\Psi = \Delta\Psi_{1996} +
     \underbrace{\Delta\Psi_{pl}}_{optional}
\end{equation}
%
\begin{equation}
     \Delta\epsilon = \Delta\epsilon_{1996} +
     \underbrace{\Delta\epsilon_{pl}}_{optional}
\end{equation}
%
\begin{equation}
    \epsilon = \bar{\epsilon} + \Delta\epsilon
\end{equation}
In GMAT, the planetary terms are optional; if the user has selected to include them, they are added to the lunisolar terms as shown above. The nutation rotation matrix is then
\begin{equation}
     \mathbf{N} = \mathbf{R}_1(-\epsilon) \mathbf{R}_3(-\Delta
     \Psi)\mathbf{R}_1(\bar \epsilon)
\end{equation}

\subsection{Sidereal Time Calculations} \index{FK5 Reduction!sidereal time}

To calculate the sidereal time of the Earth, we need the current time, which is then used to determine the Greenwich Mean Sidereal Time (GMST) and the equation of the equinoxes. GMST is calculated using:
%
\begin{equation}
     \begin{split}
        \theta_{GMST} =  & 1.00965822615e6 + 4.746600277219299e10T_{UT1}\\ & + 1.396560T_{UT1}^2 +
        9.3e-5T_{UT1}^3  \hspace{.15 in} \mbox{(arcseconds)}
     \end{split}
\end{equation}
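Since the series yields arcseconds, it is convenient to reduce it to the range $[0^\circ, 360^\circ)$ before use; a small sketch of ours:
\begin{verbatim}
def gmst_deg(T_UT1):
    # GMST series from the equation above, in arcseconds
    arcsec = (1.00965822615e6
              + 4.746600277219299e10 * T_UT1
              + 1.396560 * T_UT1**2
              + 9.3e-5 * T_UT1**3)
    return (arcsec / 3600.0) % 360.0   # reduce to [0, 360) degrees

print(gmst_deg(0.0))                   # about 280.46 deg at J2000
\end{verbatim}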
The calculation of the equation of the equinoxes depends upon the epoch. If the Julian date falls after 2450449.5, then we use
%
\begin{equation}
   EQ_{equinox} = \Delta \Psi \cos{\epsilon} +
   0.00264^{"}\sin{\Omega} + 0.000063^{"}\sin{2\Omega}
\end{equation}
%
If the Julian date falls on or before 2450449.5, we use
\begin{equation}
   EQ_{equinox} = \Delta \Psi \cos{\epsilon}
\end{equation}
%
The apparent sidereal time is then
\begin{equation}
   \theta_{AST} = \theta_{GMST} + EQ_{equinox}
\end{equation}
%
and the sidereal time rotation matrix is
\begin{equation}
   \mathbf{ST} = \mathbf{R}_3(\theta_{AST} )
\end{equation}

\subsection{Polar Motion Calculations} \index{FK5 Reduction!polar motion}

The polar motion rotation matrix is
\begin{equation}
     \mathbf{PM} = \mathbf{R}_2 (-x_p) \mathbf{R}_1(-y_p)
\end{equation}
where $x_p$ and $y_p$ are interpolated from the IERS EOP files, as described above.


\section{ Deriving $\mathbf{R}_{J_{2k},i}$ and $\dot{\mathbf{R}}_{J_{2k},i}$ for Various
Coordinate Systems }

In GMAT, there are numerous coordinate systems that can be used to define variables, stopping conditions, spacecraft states, and other quantities. Some examples include the Earth centered mean ecliptic of J2000 system, the Earth-fixed system, the Mars equator system, and the Earth-Moon rotating system.

In the following subsections, we determine how $\mathbf{R}_{J_{2k},i}$ and $\dot{\mathbf{R}}_{J_{2k},i}$ are calculated for all of the coordinate systems available in GMAT. Let's begin by looking at coordinate systems defined by the equator of a celestial body.

%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Equator System}  \label{Sec:Equator} \index{Coordinate systems!body
equator}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

The Equator axis system has the following nominal configuration:
%
\begin{itemize}
\item $x$-axis:  Along the line formed by the intersection of the body's equator and the ecliptic plane.
%
\item $y$-axis:  Completes the right-handed set.
%
\item $z$-axis:  Normal to the equatorial plane.
\end{itemize}
%
The Equator system in GMAT is a true equator of date axis system. The equatorial coordinate system is defined only for celestial bodies. For a particular body, the equatorial system is defined by the body's equatorial plane and its intersection with the ecliptic plane, at the current epoch. The Earth and Moon have highly accurate models for their equatorial systems and are treated at the end of this section. For the remaining bodies in the solar system, the equatorial coordinate system is calculated in GMAT using data published by the International Astronomical Union (IAU)\cite{Seidelmann:etal:02}. The IAU publishes data that gives the spin axis direction and prime meridian location of all the planets and major moons as a function of time. For the Earth, GMAT uses FK5 reduction for the Equator system. For the Moon, GMAT can use either the IAU data or Euler angles provided in the JPL DE405 files.

Let's look more closely at the data provided by the IAU. Figure \ref{fig:IAUEqDef} contains an illustration of the three variables, $\alpha_o$, $\delta_o$, and $W$, that are used to define a body's spin axis and prime meridian location w/r/t MJ2000Eq.
$\alpha_o$ and $\delta_o$ define a body's spin axis direction. $W$ is the body's sidereal time. The equations for $\alpha_o$, $\delta_o$, and $W$ for the nine planets and the Earth's moon are found in Tables \ref{Table:PlanetsPolesMeridians} and \ref{Table:LunaPoleMeridian}.
%
\begin{figure*}[htb]
 \centerline{
\begin{picture}(100,260)
\special{psfile = Images/IAUEqDef.ps hoffset= -195 voffset= -200
hscale=80 vscale=80} \makebox(55,470){$(\alpha_o,\delta_o)$}
\makebox(265,460){North pole of planet} \makebox(-520,260){$W$}
\makebox(-500,220){$90^{\circ} - \delta_o$}
\makebox(-680,395){$\Upsilon$}
\makebox(-722,315){$90^{\circ}+\alpha_o$} \makebox(-312,355){Prime
Meridian} \makebox(-342,165){Equator of $\mathcal{F}_{J_{2k}}$}
%
\end{picture}}\vspace{ -1 in} \caption{ IAU Definition of Pole and Meridian Locations for Planets
and Moons} \label{fig:IAUEqDef} \index{Coordinate
systems!Equatorial}
\end{figure*}
%
From inspection of Fig. \ref{fig:IAUEqDef} we see that
%
\begin{equation}
     \mathbf{R}_{J_{2k},i} = \mathbf{R}_3^{T}(90^{\circ} + \alpha_o)
     \mathbf{R}_1^{T}(90^{\circ}- \delta_o) \label{Eq:EquatorR}
\end{equation}
%
$\alpha_o$ and $\delta_o$ vary slowly with time, so we can assume the derivative of $\mathbf{R}_{J_{2k},i}$ for the Equator system is the zero matrix.
%
\begin{equation}
  \dot{\mathbf{R}}_{J_{2k},i} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

If the user chooses to use the DE405 files to determine the Moon's orientation, then GMAT gets a set of Euler angles and rates from the DE405 files. We then use the following equations to determine $\mathbf{R}_{J_{2k},i}$ and $\dot{\mathbf{R}}_{J_{2k},i}$:
%
\begin{eqnarray}
   \mathbf{R}_{J_{2k},i} &=&
   \mathbf{R}_3(\theta_1)^T\mathbf{R}_1(\theta_2)^T\\
   %
   \dot{\mathbf{R}}_{J_{2k},i} &=&
   \mathbf{R}_3(\theta_1)^T\dot{\mathbf{R}}_1^T(\theta_2) + \dot{\mathbf{R}}_3^T(\theta_1)\mathbf{R}_1^T(\theta_2)
\end{eqnarray}
%
where
%
\begin{equation}
   \dot{\mathbf{R}}_1(\theta_2) = \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & -\dot{\theta}_2\sin{\theta_2} & \dot{\theta}_2\cos{\theta_2}\\
     0.0 & -\dot{\theta}_2\cos{\theta_2} & -\dot{\theta}_2\sin{\theta_2}
     \end{pmatrix}
\end{equation}
%
and
%
\begin{equation}
   \dot{\mathbf{R}}_3(\theta_1) = \begin{pmatrix}
     -\dot{\theta}_1\sin{\theta_1} & \dot{\theta}_1\cos{\theta_1} & 0.0\\
     -\dot{\theta}_1\cos{\theta_1} & -\dot{\theta}_1\sin{\theta_1} & 0.0 \\
     0.0                           & 0.0                          &
     0.0
     \end{pmatrix}
\end{equation}
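Equation (\ref{Eq:EquatorR}) is simple to implement. The sketch below is ours and assumes the standard astrodynamics definitions of $\mathbf{R}_1$ and $\mathbf{R}_3$ (Sec.~\ref{sec:BasicRotationMatrices}):
\begin{verbatim}
import numpy as np

def R1(a):  # rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1., 0., 0.], [0., c, s], [0., -s, c]])

def R3(a):  # rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.], [-s, c, 0.], [0., 0., 1.]])

def R_J2k_equator(alpha0, delta0):
    # Eq. (EquatorR): R3^T(90 deg + alpha0) R1^T(90 deg - delta0)
    half_pi = np.pi / 2.0
    return R3(half_pi + alpha0).T @ R1(half_pi - delta0).T
\end{verbatim}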
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{MJ2000 Ecliptic (MJ2000Ec)} \label{Sec:MJ2000Ec} \index{Coordinate systems!mean J2000
ecliptic}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

The MJ2000 Ecliptic axis system is defined as follows:
%
\begin{itemize}
\item $x$-axis:  Along the line formed by the intersection of the Earth's mean
                 equator and the mean ecliptic plane, at the J2000
                 epoch.  The axis points in the direction of the
                 first point of Aries.
%
\item $y$-axis:  Completes the right-handed set.
%
\item $z$-axis:  Normal to the mean ecliptic plane at the J2000 Epoch.
\end{itemize}
%
The matrix to rotate from MJ2000 Ecliptic (MJ2000Ec) to MJ2000
Equatorial (MJ2000Eq) is a rotation about the $x$-axis through the
obliquity of the ecliptic at the J2000 epoch, which is
23.439291$^\circ$:
%
\begin{equation}
  \mathbf{R} =   \begin{pmatrix}
     1.0 & 0.0 & 0.0\\
     0.0 & 0.9174820621 & -0.3977771559\\
     0.0 & 0.3977771559 & 0.9174820621
     \end{pmatrix} \label{Eq:Ec2Eq}
\end{equation}
%
GMAT uses more significant digits than included here.  The rotation
matrix is constant by definition, so its time derivative is
identically the zero matrix.
%
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{True of Epoch Equator (TOEEq)}
\label{Sec:TOEEq} \index{Coordinate systems!true of epoch equator}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

The True of Epoch Equator axis system is defined as follows:
%
\begin{itemize}
\item $x$-axis:  Along the true equinox at the chosen epoch.
                 The axis points in the direction of the
                 first point of Aries.
%
\item $y$-axis:  Completes the right-handed set.
%
\item $z$-axis:  Normal to the Earth's true equatorial plane at the chosen Epoch.
\end{itemize}
%
The TOEEq axis system is an intermediate system in FK5 reduction.
$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the TOEEq system
are calculated using the following equations
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{N}^T(t_o)\mathbf{P}^T(t_o)
\end{equation}
%
where $t_o$ is the epoch defined in the coordinate system
description provided by the user in the epoch field.  Hence, $t_o$
is a constant value for the TOEEq system.  For a given $t_o$, the
matrices associated with the TOEEq system only need to be evaluated
once and can be reused later when necessary. $\mathbf{P}(t_o)$ and
$\mathbf{N}(t_o)$ are part of the FK5 reduction algorithm and are
explained in detail in Secs. \ref{Sec:NutationTheory} and
\ref{Sec:PrecessionTheory}.
Because $t_o$ is fixed for a TOEEq
system, the derivative of $\mathbf{R}_{Ii}$ is identically equal to
the zero matrix.
%
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}


%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Mean of Epoch Equator (MOEEq)}
\label{Sec:MOEEq} \index{Coordinate systems!mean of epoch equator}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

The Mean of Epoch Equator axis system is defined as follows:
%
\begin{itemize}
\item $x$-axis:  Along the mean equinox at the chosen epoch.
                 The axis points in the direction of the
                 first point of Aries.
%
\item $y$-axis:  Completes the right-handed set.
%
\item $z$-axis:  Normal to the Earth's mean equatorial plane at the chosen Epoch.
\end{itemize}
%
The MOEEq system is an intermediate system in FK5 reduction, and
$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the MOEEq system
can be calculated using the following equations
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{P}^T(t_o)
\end{equation}
%
where $t_o$ is the epoch defined in the coordinate system
description provided by the user in the epoch field.  Hence, $t_o$ is
a constant value for the MOEEq system.  For a given $t_o$, the
matrices associated with the MOEEq system only need to be evaluated
once and can be reused later when necessary. $\mathbf{P}(t_o)$ is
described in Sec. \ref{Sec:PrecessionTheory}. Because $t_o$ is fixed
for a MOEEq system, the derivative of $\mathbf{R}_{Ii}$ is the zero
matrix.
%
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{True of Date Equator (TODEq)}
\label{Sec:TODEq} \index{Coordinate systems!true of date equator}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the TODEq
system can be calculated using the following equations (see
Vallado\cite{vallado2}, Fig. 3-29).
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{N}^T(t_o)\mathbf{P}^T(t_o)
\end{equation}
%
where $t_o$ is the epoch.  Unlike the TOEEq system, for the TODEq
system $t_o$ is a variable and usually comes from the epoch of the
object whose state we wish to convert.  $\mathbf{P}(t_o)$ and
$\mathbf{N}(t_o)$ are part of the FK5 reduction algorithm and can
be found in Vallado\cite{vallado2}, pgs. 214 - 219.  Because $t_o$
is not fixed for a TODEq system, the derivative of
$\mathbf{R}_{Ii}$ is non-zero.
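As a rough order-of-magnitude check (an added estimate; the value of
about $50.3''$/yr used here is the standard rate of general
precession), the precession and nutation angles change by only tens
of arcseconds per year, so the elements of $\dot{\mathbf{R}}_{Ii}$
are at most of order
%
\begin{equation}
   \frac{50.3''/\mbox{yr}}{(206265''/\mbox{rad})\,(3.16\times 10^{7}\ \mbox{s/yr})}
   \approx 7.7\times 10^{-12}\ \mbox{rad/s}
\end{equation}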
However, we will assume its derivative is negligibly small so that
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Mean of Date Equator (MODEq)}
\label{Sec:MODEq} \index{Coordinate systems!mean of date equator}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the MODEq
system can be calculated using the following equations
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{P}^T(t_o)
\end{equation}
%
where $t_o$ is the epoch.  Unlike the MOEEq system, for the MODEq
system $t_o$ is a variable and usually comes from the epoch of the
object whose state we wish to convert.  $\mathbf{P}(t_o)$ and
$\mathbf{N}(t_o)$ are part of the FK5 reduction algorithm and can
be found in Vallado\cite{vallado2}, pgs. 214 - 219.  Because $t_o$
is not fixed for a MODEq system, the derivative of
$\mathbf{R}_{Ii}$ is non-zero. However, we will assume its
derivative is negligibly small so that
   %
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}


%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Mean of Date Ecliptic (MODEc)}
\label{Sec:MODEc} \index{Coordinate systems!mean of date ecliptic}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the MODEc
system can be calculated using the following equations
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{P}^T(t_o)\mathbf{R}_1^T(\bar{\epsilon})
\end{equation}
%
where $t_o$ is the epoch.  For the MODEc system $t_o$ is a
variable and usually comes from the epoch of the object whose
state we wish to convert.  $\mathbf{P}(t_o)$ comes from the FK5
reduction algorithm and can be found in Vallado\cite{vallado2},
pgs. 214 - 219.  $\bar{\epsilon}$ is given by
Vallado\cite{vallado2}, Eq. (3-52).  For a more complete
discussion, you can also refer to Seidelmann\cite{seidelmann},
pgs. 114 - 115.  Because $t_o$ is not fixed for a MODEc system,
the derivative of $\mathbf{R}_{Ii}$ is non-zero.
However, we will
assume its derivative is negligibly small so that
%
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

%-------------------------------------------------------------------------------------
\subsection{True of Date Ecliptic (TODEc)}
\label{Sec:TODEc} \index{Coordinate systems!true of date ecliptic}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

$\mathbf{R}_{Ii}$ and $\dot{\mathbf{R}}_{Ii}$ for the TODEc
system can be calculated using the following equations
%
\begin{equation}
      \mathbf{R}_{Ii} = \mathbf{P}^T(t_o)\mathbf{R}_1^T(\bar{\epsilon})\mathbf{R}_3^T(-\Delta \Psi)
\end{equation}
%
where $t_o$ is the epoch.  For the TODEc system $t_o$ is a variable
and usually comes from the epoch of the object whose state we wish
to convert.  $\mathbf{P}(t_o)$ is part of the FK5 reduction
algorithm and can be found in Vallado\cite{vallado2}, pgs.
214 - 219.  $\bar{\epsilon}$ is given by Vallado\cite{vallado2},
Eq. (3-52).  $\Delta \Psi$ is given by Eq. (3-62) in
Vallado\cite{vallado2}.  For a more complete discussion, you can
also refer to Seidelmann\cite{seidelmann}, pgs. 114 - 115. Because
$t_o$ is not fixed for a TODEc system, the derivative of
$\mathbf{R}_{Ii}$ is non-zero. However, we will assume its
derivative is negligibly small so that
%
\begin{equation}
  \dot{\mathbf{R}} =   \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}

%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Topocentric Horizon}  \label{Sec:TopocentricHorizon} \index{Coordinate systems!Topocentric Horizon}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------


The Topocentric coordinate system has its origin at a ground station
with the following axis definitions:
\begin{itemize}
\item $x$-axis:  Completes the right-handed set and points south in
the local horizon system.
%
\item $y$-axis:  Points due East.
%
\item $z$-axis:  Normal to the local horizon.  The local horizon is
defined by the selection of the \st{HorizonReference}, which is
either \st{Sphere} or \st{Ellipsoid}.
\end{itemize}

The calculation of the rotation from the Topocentric system to
MJ2000Eq is different for the different ground station state
representations.  Regardless of how the user inputs the state of a
ground station, the Topocentric axes are always defined with respect
to the reference shape of the central body.  GMAT uses the
appropriate transformation to convert the user input to the
Cartesian location of the ground station expressed in the local
body-fixed frame.  This Cartesian representation is used to
calculate the topocentric axes.

Define the axes of the topocentric coordinate
system, expressed in the body fixed system, as $\hat{\mathbf{x}}$,
$\hat{\mathbf{y}}$, and $\hat{\mathbf{z}}$.
Define $x_F$, $y_F$, and $z_F$ as the location of the station in the local body-fixed coordinate system.
%If the \st{StateType} is \st{Spherical} and the \st{HorizonReference} is \st{Sphere} then calculate the body fixed location of the station using.
%%
%\begin{eqnarray}
%     x_F &=& (R_b + h)\cos{\phi_{sp}}\cos{\lambda} \\
%     y_F &=& (R_b + h)\cos{\phi_{sp}}\sin{\lambda} \\
%     z_F &=& (R_b + h)\sin{\lambda}
%\end{eqnarray}
%%
%where $R_b$ is the body's equatorial radius, $h$ is the height above the reference sphere,  $\phi_{sp}$ is the latitude w/r/t to the reference sphere, and $\lambda$ is the longitude.
%If the \st{StateType} is \st{Cartesian}, or \st{Spherical} with the \st{HorizonReference = Sphere} then calculate the latitude w/r/t to the reference ellipsoid as follows:
%%
%\begin{equation}
%     r_{xy} = \sqrt{ x_F^2 + y_F^2 }
%\end{equation}
%%
Calculate the geocentric latitude to use as an initial guess to find
the geodetic latitude
%
\begin{equation}
     \phi_{gd}  \approx \mbox{atan2}(z_F,  r_{xy}  )
\end{equation}
%
where $r_{xy} = \sqrt{x_F^2 + y_F^2}$.
The eccentricity of the body is calculated from the flattening $f$ using
%
\begin{equation}
    e^2 = 2f-f^2
\end{equation}
%

\noindent Set $\delta = 1.0$ to initialize the loop, then,

\noindent While ( $\delta > 10^{-11}$ )
%
\begin{eqnarray}
   \phi' & = & \phi_{gd}\\
   C & = & \frac{R} { \sqrt{1 - e^2\sin^2{\phi'}}    }\\
   \phi_{gd} & = & \mbox{atan} \left( \frac{z_F + C
   e^2\sin{\phi'}}{r_{xy}}
   \right)\\
   \delta & = & | \phi_{gd} - \phi' |
\end{eqnarray}
%
EndWhile\\

\noindent where $R$ is the body's equatorial radius.

The longitude of the station location, $\lambda$, is calculated
using

\begin{equation}
    \lambda = \mbox{atan2}(y_F,x_F)
\end{equation}

%
Finally,
%
\begin{equation}
     \hat{\mathbf{z}} = \left(
     \begin{array}{cc}
          \cos{\phi_{gd}}\cos{\lambda}\\
          \cos{\phi_{gd}}\sin{\lambda}\\
          \sin{\phi_{gd}}
     \end{array}
     \right) \label{Eq:GeodeLatToz}
\end{equation}

\begin{equation}
     \hat{\mathbf{y}} = \frac{\hat{\mathbf{k}}
     \times \hat{\mathbf{z}}}{\left\| \hat{\mathbf{k}} \times \hat{\mathbf{z}} \right\|}
\end{equation}
%
where
%
\begin{equation}
     \hat{\mathbf{k}} = [0 \hspace{.05 in} 0 \hspace{.05 in} 1]^T
\end{equation}
%
Note that $\hat{\mathbf{k}} \times \hat{\mathbf{z}}$ has magnitude
$\cos{\phi_{gd}}$, so the normalization above is required for the
columns below to form a rotation matrix.
%
\begin{equation}
     \hat{\mathbf{x}} = \hat{\mathbf{y}}
     \times \hat{\mathbf{z}}
\end{equation}
%
\begin{equation}
     \mathbf{R}_{FT} = [\hat{\mathbf{x}} \hspace{.05 in} \hat{\mathbf{y}}\hspace{.05 in} \hat{\mathbf{z}}]
\end{equation}

The last step is to determine the rotation matrix from the
topocentric system to the inertial system by using the rotation
matrix from body fixed to inertial, $\mathbf{R}_{IF}$, at the
desired epoch, $t$.  We determine the body fixed rotation matrix
using the algorithm in Sec. 3.1.9.
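For illustration, the geodetic latitude fixed-point iteration above
can be written out as the following short sketch (this is not GMAT
source code; the equatorial radius $R$ and flattening $f$ are assumed
to be supplied by the caller in units consistent with $x_F$, $y_F$,
$z_F$, and the $10^{-11}$ tolerance is the one used in the loop
above).

\begin{verbatim}
import math

def geodetic_latitude(x_F, y_F, z_F, R, f, tol=1.0e-11):
    """Fixed-point iteration for geodetic latitude (sketch).

    Assumes the station is not at a pole, so r_xy > 0.
    """
    e2 = 2.0 * f - f ** 2                # e^2 = 2f - f^2
    r_xy = math.hypot(x_F, y_F)          # r_xy = sqrt(x_F^2 + y_F^2)
    phi_gd = math.atan2(z_F, r_xy)       # geocentric latitude as initial guess
    delta = 1.0
    while delta > tol:
        phi_prev = phi_gd                # phi' in the loop above
        C = R / math.sqrt(1.0 - e2 * math.sin(phi_prev) ** 2)
        phi_gd = math.atan((z_F + C * e2 * math.sin(phi_prev)) / r_xy)
        delta = abs(phi_gd - phi_prev)
    return phi_gd
\end{verbatim}

For a station on the Earth's surface the loop converges in a handful
of iterations, since $e^2 \approx 0.0067$ keeps the correction term
small.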
Once we have evaluated
$\mathbf{R}_{IF}$ and $\dot{\mathbf{R}}_{IF}$ we calculate
$\mathbf{R}_{IT}$ and $\dot{\mathbf{R}}_{IT}$ using
%
\begin{eqnarray}
      \mathbf{R}_{IT} &=& \mathbf{R}_{IF}
      \mathbf{R}_{FT}\\
%
      \dot{\mathbf{R}}_{IT} &=& \dot{\mathbf{R}}_{IF}
      \mathbf{R}_{FT}\label{Eq:TopoRdot}
\end{eqnarray}



%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Celestial Body Fixed}  \label{Sec:Fixed} \index{Coordinate systems!body fixed}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------

The body fixed coordinate system is referenced to the body equator
and the prime meridian of the body.  The body fixed system for
Earth is found by using FK5 reduction to the ITRF system as
described by Vallado.  The ITRF system is the Earth-fixed system.

Vallado denotes the four rotation sequences required to transform
from the ITRF to the FK5 system as [PM], the polar motion, [ST],
the sidereal time, [NUT], the nutation, and [PREC], the
precession. GMAT calculates these four rotation matrices as
described in Vallado.  The rotation matrix from ITRF to FK5 can be
written as follows.
%
\begin{equation}
     \mathbf{R}_{Ii}  = \mathbf{P}^T\mathbf{N}^T\mathbf{ST}^T\mathbf{PM}^T
     \label{Eq:RFK5}
\end{equation}
%
GMAT assumes that the only intermediate rotation that has a
significant time derivative is the sidereal time, [ST].  So, we can
write
%
\begin{equation}
     \dot{\mathbf{R}}_{Ii}  = \mathbf{P}^T\mathbf{N}^T\dot{\mathbf{ST}}^T\mathbf{PM}^T \label{Eq:RdotFK5}
\end{equation}
%
where $\dot{\mathbf{ST}}$ is given by
%
\begin{equation}
    \dot{\mathbf{ST}} =   \begin{pmatrix}
     -\omega_e\sin{\theta_{AST}}  & \omega_e\cos{\theta_{AST}} & 0.0\\
     -\omega_e\cos{\theta_{AST}} & -\omega_e\sin{\theta_{AST}} & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}
%
and $\omega_e$, the Earth's angular velocity in rad/s, is given by
%
\begin{equation}
    \omega_e = 7.29211514670698\times 10^{-5} \left( 1 - \frac{LOD}{86400}  \right) \label{Eq:EarthAngularVelocity}
\end{equation}
%
where $LOD$ is the excess length of day in seconds.
Note that the 2nd edition of Vallado\cite{vallado2} has
inconsistencies in Eqs.~(\ref{Eq:RFK5}) and (\ref{Eq:RdotFK5}), and
they are discussed in the errata to the 2nd edition.  We have
modified Eqs.~(\ref{Eq:RFK5}) and (\ref{Eq:RdotFK5})
according to the errata.

For bodies other than the Earth, the IAU gives the spin axis
direction as a function of time with respect to the MJ2000Eq
system and the rotation of the prime meridian in the MJ2000Eq system.
This data for all of the planets and many moons can be found in
``Report of the IAU/IAG Working Group on Cartographic Coordinates
and Rotational Elements of the Planets and Satellites: 2000'' by
Seidelmann \emph{et al.}\cite{Seidelmann:etal:02}. Figure 1 in this
document explains the three variables, $\alpha_o$, $\delta_o$, and
$W$, that are used to define the body spin axis and prime
meridian location w/r/t J2000.  The values of $\alpha_o$,
$\delta_o$, and $W$ for the nine planets and the Earth's moon are
found on pgs. 8 and 9.
Using the notation found in the reference, we can write
%
\begin{equation}
     \mathbf{R}_{Ii} = \mathbf{R}_3^{T}(90^{\circ} + \alpha_o)
     \mathbf{R}_1^{T}(90^{\circ}- \delta_o)\mathbf{R}_3^T(W)
\end{equation}
%

For the derivative we assume that
%
\begin{equation}
    \frac{d}{dt}\left( \mathbf{R}_3^{T}(90^{\circ} + \alpha_o) \right)  =  \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}
%
and
\begin{equation}
    \frac{d}{dt}\left( \mathbf{R}_1^{T}(90^{\circ}- \delta_o) \right)  =  \begin{pmatrix}
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}
%
\begin{equation}
    \frac{d}{dt}\left( \mathbf{R}_3^{T}(W) \right)  =  \begin{pmatrix}
     -\dot{W}\mbox{sin}(W) & -\dot{W}\mbox{cos}(W) & 0.0\\
     \dot{W}\mbox{cos}(W) & -\dot{W}\mbox{sin}(W) & 0.0\\
     0.0 & 0.0 & 0.0
     \end{pmatrix}
\end{equation}
%
where $\dot{W}$ is the time derivative of $W$ for the given body.
Note that Seidelmann\cite{Seidelmann:etal:02} does not provide the
values for $\dot{W}$, so we include them in Table
\ref{Table:PlanetsPolesMeridians}.
%
In summary,
%
\begin{equation}
      \dot{\mathbf{R}} = \mathbf{R}_3^{T}(90^{\circ} + \alpha_o)
     \mathbf{R}_1^{T}(90^{\circ}- \delta_o)\frac{d}{dt}\mathbf{R}_3^{T}(W)
\end{equation}
%



%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------
\subsection{Body Inertial}  \label{Sec:BodyInertial} \index{Coordinate systems!body inertial}
%-------------------------------------------------------------------------------------
%-------------------------------------------------------------------------------------


The \st{BodyInertial} axis system is an inertial system based on the
equator of the celestial body chosen as the origin of the system.
The origin of a \st{BodyInertial} system must be a celestial body,
and cannot be a spacecraft, libration point, etc. The axes are
defined as follows (except for Earth):
%
\begin{itemize}
\item $x$-axis:  Along the line formed by the intersection of the
body's equator and the $x$-$y$ plane of the FK5 system, at the J2000
epoch.
%
\item $y$-axis:  Completes the right-handed set.
%
\item $z$-axis:  Along the body's instantaneous spin axis direction at the
J2000 epoch.
\end{itemize}
%

For Earth, the \st{BodyInertial} axis system is the FK5 system.  For
all other bodies, the \st{BodyInertial} axis system is based upon
the body's equator and spin axis at the J2000 epoch. So,
\st{BodyInertial} is essentially a true-of-epoch system referenced
to the chosen central body.  The body orientation at the J2000 epoch
is calculated from the IAU data in Seidelmann
\cite{Seidelmann:etal:02} for the Sun, Mercury, Venus, Mars,
Jupiter, Saturn, Uranus, Neptune, and Pluto.  For the Moon, the
orientation at the J2000 epoch comes from the DE405 files.
Because\r\nthe \\st{BodyInertial} system is an inertial system, the derivative\r\nof the rotation matrix is always zero:\r\n%\r\n\\begin{equation}\r\n   \\dot{\\mathbf{R}}_{Ii}  = \\begin{pmatrix}\r\n     0.0 & 0.0 & 0.0\\\\\r\n     0.0 & 0.0 & 0.0\\\\\r\n     0.0 & 0.0 & 0.0\r\n     \\end{pmatrix}\r\n\\end{equation}\r\n%\r\nThe rotation matrix, $\\mathbf{R}_{Ii}$, is different for each\r\ncelestial body.  We begin by calculating the angles $\\alpha$ and\r\n$\\delta$ used to define the bodies orientation with respect to the\r\nFK5 system:\r\n%\r\n\\begin{equation}\r\n    \\alpha = \\alpha_o(T_{J2000})\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\delta = \\delta_o(T_{J2000})\r\n\\end{equation}\r\n%\r\nWhere $T_{J2000} = 2451544.9999999990686$ TDB and the equations for\r\n$\\alpha_o$ and $\\delta_o$ are given by Seidelmann\r\n\\cite{Seidelmann:etal:02} and reproduced in Table\r\n\\ref{Table:PlanetsPolesMeridians}. Finally, the rotation matrix is\r\ncalculated using\r\n%\r\n\\begin{equation}\r\n     \\mathbf{R}_{J_{2k},i} = \\mathbf{R}_3^{T}(90^{\\circ} + \\alpha)\r\n     \\mathbf{R}_1^{T}(90^{\\circ}- \\delta) \\label{Eq:BodyEquatorR}\r\n\\end{equation}\r\n%\r\nThe result is a rotation matrix, that is time invariant, for each\r\ncelestial body.%  The individual matrices are shown below.\r\n%\r\n%\\noindent For the Sun\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.9606338208497771 \\nonumber \\\\\r\n%   R_{21} & = &  0.2778176780544363 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.2494239059807128 \\nonumber \\\\\r\n%   R_{22} & = &  0.8624542595398803 \\nonumber \\\\\r\n%   R_{32} & = &  0.4404093156676427 \\nonumber \\\\\r\n%   R_{13} & = &  0.1223534934723278 \\nonumber \\\\\r\n%   R_{23} & = &  -0.4230720836476433 \\nonumber \\\\\r\n%   R_{33} & = &  0.8977971010607901\r\n%\\end{eqnarray}\r\n%%\r\n%For Mercury\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.9815938660446788 \\nonumber \\\\\r\n%   R_{21} & = &  0.1909803187332696 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.1677571842642237 \\nonumber \\\\\r\n%   R_{22} & = &  0.8622324234816705 \\nonumber \\\\\r\n%   R_{32} & = &  0.4779254910806334 \\nonumber \\\\\r\n%   R_{13} & = &  0.09127436261733374 \\nonumber \\\\\r\n%   R_{23} & = &  -0.4691287304711406 \\nonumber \\\\\r\n%   R_{33} & = &  0.8784003785150228\r\n%\\end{eqnarray}\r\n%%\r\n%For Venus\r\n%%\r\n% \\begin{eqnarray}\r\n%   R_{11} & = &  0.9988399975085459 \\nonumber \\\\\r\n%   R_{21} & = &  0.04815245972043356 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.04437694044018298 \\nonumber \\\\\r\n%   R_{22} & = &  0.9205233405740161 \\nonumber \\\\\r\n%   R_{32} & = &  0.3881590738545506 \\nonumber \\\\\r\n%   R_{13} & = &  0.01869081416890205 \\nonumber \\\\\r\n%   R_{23} & = &  -0.3877088083617988 \\nonumber \\\\\r\n%   R_{33} & = &  0.9215923900425705\r\n%\\end{eqnarray}\r\n%%\r\n%For the Earth\r\n%%\r\n%\\begin{eqnarray}\r\n%    R_{11} & = & 1 \\nonumber \\\\\r\n%    R_{12} & = & 0 \\nonumber \\\\\r\n%    R_{13} & = & 0 \\nonumber \\\\\r\n%    R_{21} & = & 0 \\nonumber \\\\\r\n%    R_{22} & = & 1 \\nonumber \\\\\r\n%    R_{23} & = & 0 \\nonumber \\\\\r\n%    R_{31} & = & 0 \\nonumber \\\\\r\n%    R_{32} & = & 0 \\nonumber \\\\\r\n%    R_{33} & = & 1\r\n%\\end{eqnarray}\r\n%%\r\n%For the Moon\r\n%%\r\n%\\begin{eqnarray}\r\n%    R_{11} & = & 0.998496505205088 \\nonumber \\\\\r\n%    R_{21} & = & -0.0548154092680678 \\nonumber \\\\\r\n%    R_{31} & = & 0 
\\nonumber \\\\\r\n%    R_{12} & = & 0.0499357293985327 \\nonumber \\\\\r\n%    R_{22} & = & 0.909610125238044 \\nonumber \\\\\r\n%    R_{32} & = & 0.412451018902689 \\nonumber \\\\\r\n%    R_{13} & = & -0.0226086714041825 \\nonumber \\\\\r\n%    R_{23} & = & -0.411830900942613 \\nonumber \\\\\r\n%    R_{33} & = & 0.91097977859343\r\n%\\end{eqnarray}\r\n%%\r\n%For Mars\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.6732521982472343 \\nonumber \\\\\r\n%   R_{21} & = &  0.7394129276360177 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.5896387605430038 \\nonumber \\\\\r\n%   R_{22} & = &  0.5368794307891334 \\nonumber \\\\\r\n%   R_{32} & = &  0.6033958972853944 \\nonumber \\\\\r\n%   R_{13} & = &  0.4461587269353554 \\nonumber \\\\\r\n%   R_{23} & = &  -0.4062376142607542 \\nonumber \\\\\r\n%   R_{33} & = &  0.7974417791532832\r\n%\\end{eqnarray}\r\n%%\r\n%For Jupiter\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.9994209020316729 \\nonumber \\\\\r\n%   R_{21} & = &  -0.03402735050216735 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  0.0307100286015568 \\nonumber \\\\\r\n%   R_{22} & = &  0.9019874904580493 \\nonumber \\\\\r\n%   R_{32} & = &  0.430668621100356 \\nonumber \\\\\r\n%   R_{13} & = &  -0.01465451212046692 \\nonumber \\\\\r\n%   R_{23} & = &  -0.4304192217768545 \\nonumber \\\\\r\n%   R_{33} & = &  0.9025101322420253\r\n%\\end{eqnarray}\r\n%%\r\n%For Saturn\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  -0.6506284356468798 \\nonumber \\\\\r\n%   R_{21} & = &  0.7593962330217962 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.7545700815904118 \\nonumber \\\\\r\n%   R_{22} & = &  -0.6464935305479939 \\nonumber \\\\\r\n%   R_{32} & = &  0.1125615694996715 \\nonumber \\\\\r\n%   R_{13} & = &  0.08547883186107168 \\nonumber \\\\\r\n%   R_{23} & = &  0.07323575787752883 \\nonumber \\\\\r\n%   R_{33} & = &  0.9936447519469775\r\n%\\end{eqnarray}\r\n%%\r\n%For Uranus\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.9755767334083795 \\nonumber \\\\\r\n%   R_{21} & = &  -0.2196589111150187 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.05749969269512644 \\nonumber \\\\\r\n%   R_{22} & = &  -0.2553748540714755 \\nonumber \\\\\r\n%   R_{32} & = &  0.9651308042166816 \\nonumber \\\\\r\n%   R_{13} & = &  -0.2119995815377986 \\nonumber \\\\\r\n%   R_{23} & = &  -0.9415591572895125 \\nonumber \\\\\r\n%   R_{33} & = &  -0.2617680858165513\r\n%\\end{eqnarray}\r\n%%\r\n%For Neptune\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.8717809455009272 \\nonumber \\\\\r\n%   R_{21} & = &  0.4898958900230839 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.3337976487639454 \\nonumber \\\\\r\n%   R_{22} & = &  0.5940005535292547 \\nonumber \\\\\r\n%   R_{32} & = &  0.7319443094160927 \\nonumber \\\\\r\n%   R_{13} & = &  0.3585765089087282 \\nonumber \\\\\r\n%   R_{23} & = &  -0.6380951021167846 \\nonumber \\\\\r\n%   R_{33} & = &  0.6813644604126335\r\n%\\end{eqnarray}\r\n%%\r\n%For Pluto\r\n%%\r\n%\\begin{eqnarray}\r\n%   R_{11} & = &  0.7311155947298647 \\nonumber \\\\\r\n%   R_{21} & = &  0.6822536091093958 \\nonumber \\\\\r\n%   R_{31} & = &  0 \\nonumber \\\\\r\n%   R_{12} & = &  -0.1077863335431382 \\nonumber \\\\\r\n%   R_{22} & = &  0.1155058299435205 \\nonumber \\\\\r\n%   R_{32} & = &  0.9874413955017208 \\nonumber \\\\\r\n%   R_{13} & = &  0.6736854558650673 \\nonumber \\\\\r\n%   R_{23} & = &  -0.7219338031331282 \\nonumber 
\\
%   R_{33} & = &  0.1579857286263988
%\end{eqnarray}
%%

\subsection{Object Referenced}\index{Coordinate systems!object
referenced}

An object referenced system is a coordinate system whose axes are
defined by the motion of one object with respect to another
object. GMAT allows the user to define many different types of
Object Referenced system.
%
\begin{figure*}[htb]
 \centerline{
\begin{picture}(100,290)
\special{psfile = Images/ObjectRefCS.eps hoffset= -180 voffset= -180
hscale=70 vscale=70} \makebox(-60,170){}
\makebox(173,368){$\mathbf{r}$} \makebox(-115,488){$\mathbf{v}$}
 \makebox(-175,478){$\mathbf{n}$}\makebox(-245,215){Location of Primary}
 \makebox(-20,335){Location of Secondary}
  \makebox(-270,155){}
\end{picture}}\vspace{ -1 in} \caption{ Diagram of an Object Referenced Coordinate System} \label{fig:ObjectRefCS}
\end{figure*}
%
In Fig. \ref{fig:ObjectRefCS} we see a diagram that defines the
directions a user can choose from in creating an Object Referenced
coordinate system.  There are six directions.  One is the relative
position, denoted here by $\mathbf{r}$, of the secondary object
with respect to the primary object, expressed in an inertial
frame. The second is the relative velocity, denoted here by
$\mathbf{v}$, of the secondary object with respect to the primary,
expressed in an inertial frame. The third direction is the vector
normal to the orbit plane, which is denoted by $\mathbf{n}$
and is calculated from $\mathbf{n} = \mathbf{r} \times
\mathbf{v}$.  The remaining three directions are the negatives
of the first three.

In GMAT, a user can use the directions described above to define
an Object Referenced coordinate system.  In doing so, the user can
choose two of the available directions, and define which of the
three axes, $x$, $y$, and $z$, they desire the direction to be
aligned with.  Given this information, GMAT automatically
constructs an orthogonal coordinate system.  For example, if the user
chooses the $x$-axis to be in the direction of $\mathbf{r}$ and
the $z$-axis to be in the direction of $\mathbf{n}$, then GMAT
completes the right-handed set by setting the $y$-axis in the
direction of $\mathbf{n} \times \mathbf{r}$.  Obviously, there are
some choices that do not yield an orthogonal system, or that yield a
left-handed system.
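For example (an illustrative case, not an exhaustive list), choosing
the $x$-axis along $\hat{\mathbf{r}}$ and the $y$-axis along
$\hat{\mathbf{v}}$ does not yield an orthogonal system, because in
general $\hat{\mathbf{r}} \cdot \hat{\mathbf{v}} \neq 0$ (for an
elliptical orbit the two directions are orthogonal only at the
apsides), and choosing the $x$-axis along $\hat{\mathbf{r}}$ and the
$y$-axis along $-\hat{\mathbf{r}}$ fails because the two directions
are parallel.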
GMAT does not allow the user to select these
pairs of axes and throws an error message.

In general, given the unit vectors that define the axes system of
the Object Referenced system, but expressed in the inertial frame,
GMAT uses the following equations to determine $\mathbf{R}_{Ii}$
and $\dot{\mathbf{R}}_{Ii}$.
%
\begin{equation}
     \mathbf{R}_{Ii} = \left[\hspace{.05 in} \hat{\mathbf{x}} \hspace{.2 in} \hat{\mathbf{y}} \hspace{.2 in} \hat{\mathbf{z}} \hspace{.05 in} \right]
\end{equation}
%
\begin{equation}
     \dot{\mathbf{R}}_{Ii} = \left[\hspace{.05 in} \dot{\hat{\mathbf{x}}} \hspace{.2 in}
     \dot{\hat{\mathbf{y}}} \hspace{.2 in} \dot{\hat{\mathbf{z}}} \hspace{.05 in} \right]
\end{equation}
%
%\begin{equation}
%         \mathbf{R}^{Ii} = \begin{pmatrix}
%               \hat{x}_1 & \hat{y}_1  & \hat{z}_1  \\
%               \hat{x}_2 & \hat{y}_2  & \hat{z}_2  \\
%               \hat{x}_3 & \hat{y}_3  & \hat{z}_3  \\
%               \label{Eq:RObjectReferenced}
%     \end{pmatrix}
%\end{equation}
%and
%\begin{equation}
%         \dot{\mathbf{R}}^{Ii} = \begin{pmatrix}
%               \dot{\hat{x}}_1 & \dot{\hat{y}}_1  & \dot{\hat{z}}_1  \\
%               \dot{\hat{x}}_2 & \dot{\hat{y}}_2  & \dot{\hat{z}}_2  \\
%               \dot{\hat{x}}_3 & \dot{\hat{y}}_3  & \dot{\hat{z}}_3
%               \\\label{Eq:RdotObjectReferenced}
%     \end{pmatrix}
%\end{equation}
%%
%where
%%
%\begin{equation}
%     \hat{\mathbf{x}} = \left[ \hat{x}_1 \hspace{.2 in}  \hat{x}_2 \hspace{.2 in}  \hat{x}_3
%     \right]^T \label{Eq:xhatdef}
%\end{equation}
%%
%\begin{equation}
%     \dot{\hat{\mathbf{x}}} = \left[ \dot{\hat{x}}_1 \hspace{.2 in}  \dot{\hat{x}}_2 \hspace{.2 in}  \dot{\hat{x}}_3
%     \right]^T \label{Eq:yhatdef}
%\end{equation}
%and similarly for $\hat{\mathbf{y}}$, $\dot{\hat{\mathbf{y}}}$,
%$\hat{\mathbf{z}}$ and $\dot{\hat{\mathbf{z}}}$.

Recall that the user chooses two axes to define an Object Referenced
system among the following choices: $\hat{\mathbf{r}}$,
$\hat{\mathbf{v}}$, $\hat{\mathbf{n}}$, $-\hat{\mathbf{r}}$,
$-\hat{\mathbf{v}}$, and $-\hat{\mathbf{n}}$.
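Note that since $\mathbf{n} = \mathbf{r} \times \mathbf{v}$, the
normal direction satisfies
%
\begin{equation}
     \hat{\mathbf{n}} \cdot \hat{\mathbf{r}} = 0, \qquad
     \hat{\mathbf{n}} \cdot \hat{\mathbf{v}} = 0
\end{equation}
%
so any pair of directions that includes $\pm\hat{\mathbf{n}}$ is
automatically orthogonal, while $\hat{\mathbf{r}}$ and
$\hat{\mathbf{v}}$ by themselves generally are not.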
In general, one of\r\nthe axes chosen by the user must be either $\\hat{\\mathbf{n}}$, or\r\n$-\\hat{\\mathbf{n}}$.\r\n\r\n\r\nIf the user defines the $x$-axis and $y$-axis then GMAT determines\r\nthe z axis using\r\n%\r\n\\begin{equation}\r\n    \\hat{\\mathbf{z}} =  \\hat{\\mathbf{x}}\\times\\hat{\\mathbf{y}}\r\n\\end{equation}\r\n%\r\nand\r\n%\r\n\\begin{equation}\r\n    \\dot{\\hat{\\mathbf{z}}} =  \\dot{\\hat{\\mathbf{x}}}\r\n    \\times\\hat{\\mathbf{y}}+  \\hat{\\mathbf{x}} \\times\r\n    \\dot{\\hat{\\mathbf{y}}}\r\n\\end{equation}\r\n%\r\nIf the user defines the $y$-axis and $z$-axis, then GMAT determines\r\nthe $x$ axis using\r\n%\r\n\\begin{equation}\r\n    \\hat{\\mathbf{x}} =  \\hat{\\mathbf{y}}\\times\\hat{\\mathbf{z}}\r\n\\end{equation}\r\n%\r\nand\r\n%\r\n\\begin{equation}\r\n    \\dot{\\hat{\\mathbf{x}}} =  \\dot{\\hat{\\mathbf{y}}}\r\n    \\times\\hat{\\mathbf{z}}+  \\hat{\\mathbf{y}} \\times\r\n    \\dot{\\hat{\\mathbf{z}}}\r\n\\end{equation}\r\n%\r\nAnd finally, if the user defines the $x$-axis and $z$-axis then GMAT\r\ndetermines the $y$ axis using\r\n%\r\n\\begin{equation}\r\n    \\hat{\\mathbf{y}} =  \\hat{\\mathbf{z}}\\times\\hat{\\mathbf{x}}\r\n\\end{equation}\r\n%\r\nand\r\n%\r\n\\begin{equation}\r\n    \\dot{\\hat{\\mathbf{y}}} =  \\dot{\\hat{\\mathbf{z}}}\r\n    \\times\\hat{\\mathbf{x}}+  \\hat{\\mathbf{z}} \\times\r\n    \\dot{\\hat{\\mathbf{x}}}\r\n\\end{equation}\r\n%\r\n\r\n\r\nDepending on the users choice of axes for an Object Referenced\r\ncoordinate system, GMAT will need to calculate $\\hat{\\mathbf{r}}$,\r\n$\\hat{\\mathbf{v}}$, $\\hat{\\mathbf{n}}$, $\\dot{\\hat{\\mathbf{r}}}$,\r\n$\\dot{\\hat{\\mathbf{v}}}$, and $\\dot{\\hat{\\mathbf{n}}}$.  These are\r\ngiven by:\r\n%\r\n\\begin{equation}\r\n\\hat{\\mathbf{r}} = \\frac{\\mathbf{r}}{\\| \\mathbf{r}\r\n     \\|} = \\frac{\\mathbf{r}}{r}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n     \\hat{\\mathbf{v}} = \\frac{\\mathbf{v}}{\\| \\mathbf{v}\r\n     \\|} = \\frac{\\mathbf{v}}{v}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n     \\hat{\\mathbf{n}} = \\frac{\\mathbf{r}\\times \\mathbf{v}}{ \\| \\mathbf{r}\\times \\mathbf{v}\r\n     \\|}\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\dot{\\hat{\\mathbf{r}}} = \\frac{\\mathbf{v}}{r}  -\r\n     \\frac{\\hat{\\mathbf{r}}}{r}\r\n     \\left(\\hat{\\mathbf{r}} \\cdot\r\n     \\mathbf{v} \\right)\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n    \\dot{\\hat{\\mathbf{v}}} = \\frac{\\mathbf{a}}{v} -\r\n    \\frac{\\hat{\\mathbf{v}}}{v}\\left(\\hat{\\mathbf{v}}\\cdot\\mathbf{a}\\right)\r\n\\end{equation}\r\n%\r\n\\begin{equation}\r\n     \\dot{\\hat{\\mathbf{n}}} = \\frac{\\mathbf{r} \\times \\mathbf{a}}{n} - \\frac{\\hat{\\mathbf{n}}}{n} \\left(  \\mathbf{r}\\times\\mathbf{a} \\cdot \\hat{\\mathbf{n}} \\right)\r\n     \\label{Eq:VBN_Ndot}\r\n\\end{equation}\r\n\r\n\r\n\r\n%---------------------------------------------------------------------------------\r\n%---------------------------------------------------------------------------------\r\n\\subsection{Geocentric Solar\r\nEcliptic (GSE)  }\r\n%---------------------------------------------------------------------------------\r\n%---------------------------------------------------------------------------------\r\n\r\n%  Reference for this: http://sscweb.gsfc.nasa.gov/users_guide/Appendix_C.html\r\nThe Geocentric Solar Ecliptic system is a time varying axis system\r\noften used to describe and analyze the Earth's magnetic field. 
The
coordinate system is defined such that
%
\begin{equation}
     \hat{\mathbf{x}} = \frac{\mathbf{r}_{sun}}{\| \mathbf{r}_{sun} \|}
\end{equation}
%
where $\mathbf{r}_{sun}$ is the vector from the Earth to the Sun
in the MJ2000Eq axis system.  The $z$-axis is defined to be the
ecliptic pole. To ensure we have an orthogonal system, we
calculate $\hat{\mathbf{z}}$ using
%
\begin{equation}
     \hat{\mathbf{z}} = \frac{\mathbf{r}_{sun} \times \mathbf{v}_{sun}}{ \|  \mathbf{r}_{sun} \times \mathbf{v}_{sun}   \|}
\end{equation}


% The unit vector pointing in the direction of the
%ecliptic pole, but expressed in the MJ2000Eq system, is simply the
%third column of rotation matrix from MJ2000Ec to MJ2000Eq given in
%Eq.~(\ref{Eq:Ec2Eq}). So,
%%
%\begin{equation}
%  \hat{\mathbf{z}} =   \begin{pmatrix}
%     0.0\\
%     -0.397777155914121383\\
%     0.917482062076895741
%     \end{pmatrix}
%\end{equation}
%
Finally, the $y$-axis completes the right-handed set
%
\begin{equation}
    \hat{\mathbf{y}} = \hat{\mathbf{z}} \times \hat{\mathbf{x}}
\end{equation}
%
We can construct the rotation matrix that goes from the GSE axis
system to the MJ2000Eq axis system as
%
\begin{equation}
     \mathbf{R}_{Ii} = \left[\hspace{.05 in} \hat{\mathbf{x}} \hspace{.2 in} \hat{\mathbf{y}} \hspace{.2 in} \hat{\mathbf{z}} \hspace{.05 in} \right]
\end{equation}
%
We also need to compute the derivative of the rotation matrix.  We
start by computing
%
\begin{equation}
     \frac{d\hat{\mathbf{x}}}{dt} =      \displaystyle\frac{\mathbf{v}_{sun}}{r_{sun}}  -\displaystyle
     \hat{\mathbf{x}}
     \left(\hat{\mathbf{x}}  \cdot
     \frac{\mathbf{v}_{sun}}{r_{sun}} \right)
\end{equation}
%
where $\mathbf{v}_{sun}$ is the velocity of the Sun with respect
to the Earth in the MJ2000Eq system.  We can approximate the
derivative of the $z$-axis using
%
\begin{equation}
     \frac{d\hat{\mathbf{z}}}{dt} \approx      \mathbf{0}
\end{equation}
%
\begin{equation}
     \frac{d\hat{\mathbf{y}}}{dt} =     \hat{\mathbf{z}} \times  \frac{d\hat{\mathbf{x}}}{dt}
\end{equation}
%
\begin{equation}
     \dot{\mathbf{R}}_{Ii} = \left[\hspace{.05 in} \frac{d\hat{\mathbf{x}}}{dt} \hspace{.2 in}
     \frac{d\hat{\mathbf{y}}}{dt} \hspace{.2 in} \frac{d\hat{\mathbf{z}}}{dt} \hspace{.05 in} \right]
\end{equation}


\subsection{Geocentric Solar Magnetic (GSM)}

As in the GSE system, the $x$-axis points from the Earth toward the
Sun:

\begin{equation}
     \hat{\mathbf{x}} = \frac{\mathbf{r}_{sun}}{\| \mathbf{r}_{sun} \|}
\end{equation}
%

Let's define the spherical coordinates of the Earth's dipole in
the Earth fixed frame to be $\lambda_d$ and $\phi_d$.  The
location of the dipole actually changes with time.  Also, the
dipole does not actually pass through the center of the Earth.
However, GMAT currently assumes that the dipole direction is
constant, and passes directly through the center of the Earth.
If
this approximation is not sufficient for future studies, the model
will have to be updated.

\begin{equation}
     \lambda_d = 70.1^{\circ}\mbox{  W}
\end{equation}
%
\begin{equation}
     \phi_d = 78.6 ^{\circ} \mbox{  N}
\end{equation}
%(http://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html)
%Latitude/Longitude of dipole N: 78.6 degrees N/70.1 degrees W
%Dipole offset (planet center to dipole center) distance: 0.0725 Re
%Latitude/Longitude of offset vector: 18.3 degrees N/147.8 degrees
%
The dipole vector in the Earth Fixed system is simply
%
\begin{equation}
    \{\mathbf{r}_d\}_F =   \begin{pmatrix}
               \cos{\phi_d}\cos{(-\lambda_d)}\\
                \cos{\phi_d}\sin{(-\lambda_d)}\\
                \sin{\phi_d}
     \end{pmatrix}
\end{equation}


If $R_{IF}$ is the rotation matrix from the Earth Fixed frame to
MJ2000Eq at the current epoch, then we can write the vector that
describes the dipole direction in the inertial frame as
%
\begin{equation}
        \{\mathbf{r}_d\}_I =   R_{IF}\{\mathbf{r}_d\}_F
\end{equation}
%
Then, the $y$-axis is defined as
%
\begin{equation}
     \hat{\mathbf{y}} = \frac{\{\mathbf{r}_d\}_I \times \hat{\mathbf{x}}  }{\|  \{\mathbf{r}_d\}_I \times \hat{\mathbf{x}}   \|}
\end{equation}
%
the $z$-axis is defined as
%
\begin{equation}
     \hat{\mathbf{z}} = \hat{\mathbf{x}} \times \hat{\mathbf{y}}
\end{equation}
%
and
%
\begin{equation}
     \mathbf{R}_{Ii} = \left[\hspace{.05 in} \hat{\mathbf{x}} \hspace{.2 in} \hat{\mathbf{y}} \hspace{.2 in} \hat{\mathbf{z}} \hspace{.05 in} \right]
\end{equation}
%
To calculate the derivative of the rotation matrix, we know that
%
%
\begin{equation}
     \frac{d\hat{\mathbf{x}}}{dt} =      \displaystyle\frac{\mathbf{v}_{sun}}{r_{sun}}  -\displaystyle
     \hat{\mathbf{x}}
     \left(\hat{\mathbf{x}}  \cdot
     \frac{\mathbf{v}_{sun}}{r_{sun}} \right)
\end{equation}
%
Let's define
%
\begin{equation}
      \mathbf{y}  = (R_{IF}\{\mathbf{r}_d\}_F) \times
      \hat{\mathbf{x}}
\end{equation}
%
and
%
\begin{equation}
      y  = \|(R_{IF}\{\mathbf{r}_d\}_F) \times
      \hat{\mathbf{x}}\|
\end{equation}
%
then
%
\begin{equation}
      \frac{d\mathbf{y}}{dt}  = \dot{\mathbf{y}}=\left(\dot{R}_{IF}\{\mathbf{r}_d\}_F\right) \times
      \hat{\mathbf{x}} + \left(R_{IF}\{\mathbf{r}_d\}_F\right) \times
      \frac{d\hat{\mathbf{x}}}{dt}
\end{equation}
%
Now we can write
%
\begin{equation}
     \frac{d\hat{\mathbf{y}}}{dt} = \dot{\hat{\mathbf{y}}} =
     \frac{\dot{\mathbf{y}}}{y} - \hat{\mathbf{y}}\left(\hat{\mathbf{y}} \cdot \frac{\dot{\mathbf{y}}}{y}  \right)
\end{equation}
%
Finally,
\begin{equation}
    \dot{\hat{\mathbf{z}}} = \dot{\hat{\mathbf{x}}} \times
    \hat{\mathbf{y}}+ \hat{\mathbf{x}} \times
    \dot{\hat{\mathbf{y}}}
\end{equation}
%
and
%
\begin{equation}
     \dot{\mathbf{R}}_{Ii} = \left[\hspace{.05 in} \frac{d\hat{\mathbf{x}}}{dt} \hspace{.2 in}
     \frac{d\hat{\mathbf{y}}}{dt} \hspace{.2 in} \frac{d\hat{\mathbf{z}}}{dt} \hspace{.05 in} \right]
\end{equation}
\subsection{Body-Spin Sun Coordinates}

The body-spin Sun coordinate system is a celestial-body-based coordinate system defined as follows:
%
\begin{itemize}
   \item $x$ points from the central body to the Sun
   \item $y$ completes the right-handed set
   \item $z$ lies in the plane of the body's spin axis and the $x$-axis
\end{itemize}
%
This system is similar to the GSM system, with the following two differences:  (1) the magnetic field vector is replaced with the body's spin axis, and (2) the system is based on the fixed frame of the central body and is not always referenced to the Earth fixed system.

Define the vector $\mathbf{r}_{sun}$ as the vector from the central body to the Sun. Then,
%
\begin{equation}
     \hat{\mathbf{x}} = \frac{\mathbf{r}_{sun}}{\| \mathbf{r}_{sun} \|}
\end{equation}
%
Define the body's spin axis in the body fixed system as
%
\begin{equation}
    \mathbf{r}_s^F =   \begin{pmatrix}
               0\\
                0\\
                1
     \end{pmatrix}
\end{equation}

If $\mathbf{R}^{I/F}$ is the rotation matrix from the central body's fixed frame to
MJ2000Eq at the current epoch, then we can write the vector that
describes the spin axis in the inertial frame as
%
\begin{equation}
     \mathbf{r}_s^I =  \mathbf{R}^{I/F} \mathbf{r}_s^F
\end{equation}
%
Then, the $y$-axis is defined as
%
\begin{equation}
     \hat{\mathbf{y}} = \frac{\mathbf{r}_s^I \times \hat{\mathbf{x}}  }{\|  \mathbf{r}_s^I\times \hat{\mathbf{x}}   \|}
\end{equation}
%
the $z$-axis is defined as
%
\begin{equation}
     \hat{\mathbf{z}} = \hat{\mathbf{x}} \times \hat{\mathbf{y}}
\end{equation}
%
and
%
\begin{equation}
     \mathbf{R}_{Ii} = \left[\hspace{.05 in} \hat{\mathbf{x}} \hspace{.2 in} \hat{\mathbf{y}} \hspace{.2 in} \hat{\mathbf{z}} \hspace{.05 in} \right]
\end{equation}
%
To calculate the derivative of the rotation matrix, we know that
%
%
\begin{equation}
     \frac{d\hat{\mathbf{x}}}{dt} =      \displaystyle\frac{\mathbf{v}_{sun}}{r_{sun}}  -\displaystyle
     \hat{\mathbf{x}}
     \left(\hat{\mathbf{x}}  \cdot
     \frac{\mathbf{v}_{sun}}{r_{sun}} \right)
\end{equation}
%
and
%
\begin{equation}
      \frac{d\mathbf{y}}{dt}  = \dot{\mathbf{y}}=\left(\dot{\mathbf{R}}^{I/F}\mathbf{r}_s^F\right) \times
      \hat{\mathbf{x}} + \mathbf{r}_s^I \times
      \frac{d\hat{\mathbf{x}}}{dt}
\end{equation}
%
Now we can write
%
\begin{equation}
     \frac{d\hat{\mathbf{y}}}{dt} = \dot{\hat{\mathbf{y}}} =
     \frac{\dot{\mathbf{y}}}{y} - \hat{\mathbf{y}}\left(\hat{\mathbf{y}} \cdot \frac{\dot{\mathbf{y}}}{y}  \right)
\end{equation}
%
Finally,
\begin{equation}
    \dot{\hat{\mathbf{z}}} = \dot{\hat{\mathbf{x}}} \times
    \hat{\mathbf{y}}+ \hat{\mathbf{x}} \times
    \dot{\hat{\mathbf{y}}}
\end{equation}
%
and
%
\begin{equation}
     \dot{\mathbf{R}}_{Ii} = \left[\hspace{.05 in} \frac{d\hat{\mathbf{x}}}{dt} \hspace{.2 in}
     \frac{d\hat{\mathbf{y}}}{dt} \hspace{.2 in} \frac{d\hat{\mathbf{z}}}{dt} \hspace{.05 in} \right]
\end{equation}
\section{Appendix 1: Derivatives of ObjectReferenced Unit Vectors}

The derivations of the above quantities are shown below.  We start
by deriving two derivatives with respect to $\mathbf{n}$, where
$\mathbf{n}$ is given by:
\begin{equation}
    \mathbf{n} =
    \mathbf{r}\times\mathbf{v}
\end{equation}
%
We need to determine two derivatives of $\mathbf{n}$. First,
%
\begin{equation}
    \frac{d \mathbf{n}}{dt} = \frac{d }{d
    t}\left(\mathbf{r}\times\mathbf{v}\right) =
    \underbrace{\frac{d \mathbf{r}}{d t} \times\mathbf{v}}_{\mathbf{0}} +  \mathbf{r} \times
    \frac{d\mathbf{v}}{d t}
\end{equation}
%
where the first term vanishes because $d\mathbf{r}/dt \times
\mathbf{v} = \mathbf{v} \times \mathbf{v} = \mathbf{0}$, so we know
that
%
\begin{equation}
    \boxed{\frac{d \mathbf{n}}{dt} =   \mathbf{r} \times
    \mathbf{a}}
\end{equation}
%
The next useful derivative is
%
\begin{equation}
    \frac{dn}{dt} = \frac{d  \| \mathbf{n} \|}{dt} = \frac{d }{
    d
    t}\left( \mathbf{n}^T \mathbf{n} \right)^{1/2} =
    \frac{\mathbf{n}^T}{n}\frac{d \mathbf{n}}{dt}
\end{equation}
%
So we can write
%
\begin{equation}
    \boxed{\frac{dn}{dt} =
    \frac{\mathbf{n}}{n}\cdot\left(\mathbf{r}\times\mathbf{a}\right)}
\end{equation}
%
The following two derivatives are also useful
\begin{equation}
    \frac{dr}{dt} = \frac{d  \| \mathbf{r} \|}{dt} =
    \frac{d}{dt}(\mathbf{r}^T\mathbf{r})^{1/2} = \frac{\mathbf{v}  \cdot \mathbf{r} }{r}
\end{equation}
%
\begin{equation}
     \boxed{\frac{dr}{dt} = \frac{\mathbf{v}  \cdot \mathbf{r}
     }{r}}
\end{equation}
%
\begin{equation}
     \frac{dv}{dt} = \frac{d \| \mathbf{v} \|}{dt} = \frac{d}{dt}(\mathbf{v}^T\mathbf{v})^{1/2} = \frac{\mathbf{v}  \cdot \mathbf{a} }{v}
\end{equation}
%
so we can write
\begin{equation}
     \boxed{\frac{dv}{dt} = \frac{\mathbf{v}  \cdot \mathbf{a}
     }{v}}
\end{equation}

\begin{equation}
     \boxed{\hat{\mathbf{v}} = \frac{\mathbf{v}}{\| \mathbf{v}
     \|}}
\end{equation}
%
\begin{equation}
\boxed{\hat{\mathbf{r}} = \frac{\mathbf{r}}{\| \mathbf{r}
     \|}}
\end{equation}
%
\begin{equation}
     \boxed{\hat{\mathbf{n}} = \frac{\mathbf{r}\times \mathbf{v}}{ \| \mathbf{r}\times \mathbf{v}
     \|}}
\end{equation}
%
The time derivatives are derived as follows.
%
\begin{equation}
    \dot{\hat{\mathbf{r}}} =
    \frac{d \hat{\mathbf{r}}}{ d t} = \frac{d }{ d
    t}\left( \mathbf{r} r^{-1}  \right) = \frac{\mathbf{v}}{r} -
    \frac{\mathbf{r}}{r^2}\left(\frac{\mathbf{r}\cdot\mathbf{v}}{r}\right)
\end{equation}
%
which can be rewritten as
%
\begin{equation}
    \boxed{\frac{d\hat{\mathbf{r}}}{dt} = \frac{\mathbf{v}}{r}  -
     \frac{\hat{\mathbf{r}}}{r}
     \left(\hat{\mathbf{r}} \cdot
     \mathbf{v} \right)}
\end{equation}
%
\begin{equation}
    \dot{\hat{\mathbf{v}}} =
    \frac{d \hat{\mathbf{v}}}{ d t} = \frac{d }{ d
    t}\left( \mathbf{v} v^{-1}  \right) = \frac{\mathbf{a}}{v} -
    \frac{\mathbf{v}}{v^2}\left(\frac{\mathbf{v}\cdot\mathbf{a}}{v}\right)
\end{equation}
%
which can be rewritten as
%
\begin{equation}
    \boxed{\dot{\hat{\mathbf{v}}} = \frac{\mathbf{a}}{v} -
    \frac{\hat{\mathbf{v}}}{v}\left(\hat{\mathbf{v}}\cdot\mathbf{a}\right)}
\end{equation}
%
Finally,
%
\begin{equation}
     \dot{\hat{\mathbf{n}}} = \frac{d}{dt}\left( \mathbf{n} n^{-1}
     \right)= \frac{\mathbf{r} \times \mathbf{a}}{n} - \frac{\mathbf{n}}{n^3} \left(  \mathbf{r}\times\mathbf{a} \cdot \mathbf{n} \right)
\end{equation}
%
\begin{equation}
     \boxed{\dot{\hat{\mathbf{n}}} = \frac{\mathbf{r} \times \mathbf{a}}{n} - \frac{\hat{\mathbf{n}}}{n} \left(  \mathbf{r}\times\mathbf{a} \cdot \hat{\mathbf{n}} \right)
     }
\end{equation}

\subsection{Basic Rotation Matrices} \label{sec:BasicRotationMatrices}

\begin{equation}
   \mathbf{R}_1 = \left(\begin{array}{ccc}
      1 & 0 & 0 \\
      0 & \cos{\theta} & \sin{\theta} \\
      0 & -\sin{\theta} & \cos{\theta}
   \end{array}\right)
\end{equation}

\begin{equation}
   \mathbf{R}_2 = \left(\begin{array}{ccc}
      \cos{\theta}  & 0 & -\sin{\theta} \\
      0 & 1 & 0 \\
      \sin{\theta} & 0 & \cos{\theta}
   \end{array} \right)
\end{equation}


\begin{equation}
   \mathbf{R}_3 = \left(\begin{array}{ccc}
       \cos{\theta} & \sin{\theta} & 0 \\
      -\sin{\theta} &  \cos{\theta} & 0 \\
      0 & 0 & 1
   \end{array}\right)
\end{equation}



\begin{table*} \caption{Recommended Values for Pole and Prime Meridian Locations of the Sun and
Planets\cite{Seidelmann:etal:02}.  Here $T$ is the interval in Julian centuries and $d$ is the interval in days from the J2000 epoch.} \index{Planets!pole locations}
\index{Planets!prime meridian locations}\centering
\begin{tabular}{p{.5 in} p{4.5 in} }
  \hline\hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
   Name & Values  \\
  \hline
  Sun & $\alpha_o = 286.13^{\circ}$ \hspace{.2 in} (deg)\\
       & $\delta_o = 63.87^{\circ}$ \hspace{.2 in} (deg)\\
       &$W = 84.10^{\circ}+ 14.1844000^{\circ}d$  \hspace{.2 in} (deg)\\
       &$\dot{W} =14.1844000^{\circ}$  \hspace{.2 in} (deg/day)\\
       & \\
 %------------------------------------------------------
  Mercury & $\alpha_o = 281.01 - 0.033T$\\
       & $\delta_o = 61.45 - 0.005T$\\
       & $W = 329.548 + 6.1385025d$\\
       &$\dot{W} =6.1385025$\\
       & \\
 %------------------------------------------------------
   Venus & $\alpha_o = 272.76$\\
       & $\delta_o = 67.16$\\
       & $W = 160.20 - 1.4813688d$\\
       &$\dot{W} = - 1.4813688$\\
       & \\
 %------------------------------------------------------
     Earth& $\alpha_o = 0.00 - 0.641T$\\
       & $\delta_o =  90.00 - 0.557T$ \\
       & $W = 190.147 + 360.9856235d$\\
       &$\dot{W} = 360.9856235$\\
       & \emph{Earth Data is included for completeness only, GMAT uses FK5 reduction for the Earth}\\
       & \\
 %------------------------------------------------------
   Mars & $\alpha_o = 317.68143 - 0.1061T$\\
       & $\delta_o = 52.88650 - 0.0609T$\\
       & $W = 176.630 + 350.89198226d$\\
       &$\dot{W} = 350.89198226$\\
       & \\
 %------------------------------------------------------
    Jupiter & $\alpha_o = 268.05 - 0.009T$\\
       & $\delta_o = 64.49 + 0.003T$\\
       & $W = 284.95 + 870.5366420d$\\
       &$\dot{W} = 870.5366420$\\
       & \\
%------------------------------------------------------\r\n    Saturn & $\\alpha_o = 40.589 - 0.036T$\\\\\r\n       & $\\delta_o = 83.537 - 0.004T$\\\\\r\n       & $W =  38.90 + 810.7939024d$\\\\\r\n       &$\\dot{W} = 810.7939024$\\\\\r\n       & \\\\\r\n %------------------------------------------------------\r\n    Uranus & $\\alpha_o = 257.311$\\\\\r\n       & $\\delta_o = -15.175$\\\\\r\n       & $W = 203.81-501.1600928d$\\\\\r\n       &$\\dot{W} = -501.1600928$\\\\\r\n       & \\\\\r\n %------------------------------------------------------\r\n     Neptune & $\\alpha_o = 299.36 + 0.70\\sin{N}$\\\\\r\n       & $\\delta_o = 43.46 - 0.51\\cos{N}$\\\\\r\n       & $W = 253.18 + 536.3128492d - 0.48 \\sin{N}$\\\\\r\n       &$\\dot{W} = 536.3128492- 0.48\\dot{N} \\cos{N}$\\\\\r\n       &$N = 357.85 + 52.316T$\\\\\r\n       &$\\dot{N} = 6.0551\\times 10^{-4}$ \\hspace{.2in}(deg/day)\\\\\r\n       & \\\\\r\n %------------------------------------------------------\r\n     Pluto & $\\alpha_o = 313.02$\\\\\r\n       & $\\delta_o = 9.09$\\\\\r\n       & $W = 236.77 - 56.3623195d$\\\\\r\n       &$\\dot{W} = - 56.3623195$\\\\\r\n %------------------------------------------------------\r\n  \\hline\\hline \\label{Table:PlanetsPolesMeridians}\r\n\\end{tabular}\r\n\\end{table*}\r\n\r\n\\begin{table*} \\caption{Recommended Values for Pole and Prime Meridian Locations of Luna \\cite{Seidelmann:etal:02}} \\index{Luna!pole locations}\r\n\\index{Luna!prime meridian locations}\\centering\r\n\\begin{tabular}{p{.5 in} p{4.5 in} }\r\n  \\hline\\hline\r\n  % after \\\\: \\hline or \\cline{col1-col2} \\cline{col3-col4} ...\r\n   Name & Values  \\\\\r\n  \\hline\r\n %------------------------------------------------------\r\n  Luna & \\[\r\n\\begin{array}{lllll}\r\n               \\alpha_o  = &269.9949 &+ 0.0031T             &- 3.8787\\sin{E1} &- 0.1204\\sin{E2}  \\\\\r\n                        & \\mbox{}  &+ 0.0700\\sin{E3}      &- 0.0172\\sin{E4} &+0.0072\\sin{E6}\\\\\r\n                        & \\mbox{}  &- 0.0052\\sin{E10}      &- 0.0043\\sin{E13}\r\n\\end{array}\r\n\\]\r\n\\\\\r\n %------------------------------------------------------\r\n& \\[\r\n\\begin{array}{lllll}\r\n               \\delta_o  = &66.5392 &+ 0.0130T             &+ 1.5419\\cos{E1} &+0.0239\\cos{E2}  \\\\\r\n                        & \\mbox{  }  &- 0.0278\\cos{E3}      &+0.0068\\cos{E4} &-0.00292\\cos{E6}\\\\\r\n                        & \\mbox{  }  &+0.0009\\cos{E7}    &+.0008\\cos{E10} &-0.0009\\cos{E13}\r\n\\end{array}\r\n\\]\r\n\\\\\r\n %------------------------------------------------------\r\n& \\[\r\n\\begin{array}{lllll}\r\n               W  =     &38.3213     &+ 13.17635815d     &-1.4\\times 10^{-12}d^2 &+3.5610\\sin{E1}  \\\\\r\n                        & \\mbox{  }  &+ 0.1208\\sin{E2}  &-0.0642\\sin{E3}        &+0.0158\\sin{E4}\\\\\r\n                        & \\mbox{  }  &+ 0.0252\\sin{E5}  &-0.0066\\sin{E6}        &-0.0047\\sin{E7}\\\\\r\n                        & \\mbox{  }  &- 0.0046\\sin{E8}  &+0.0028\\sin{E9}        &+0.0052\\sin{E10}\\\\\r\n                        & \\mbox{  }  &+ 0.0040\\sin{E11}  &+0.0019\\sin{E12}        &-0.0044\\sin{E13}\\\\\r\n\\end{array}\r\n\\]\r\n\\\\\r\n %------------------------------------------------------\r\n& \\[\r\n\\begin{array}{llllll}\r\n               \\dot{W}  =     &+ 13.17635815     &-2.8\\times 10^{-12}d   &-.18870\\cos{E1}  \\\\\r\n                        &-.01280\\cos{E2}  &-.835\\cos{E3}        &+.211\\cos{E4}\\\\\r\n                        &+.0248\\cos{E5}  &-.17\\cos{E6}        
&-.061\\cos{E7}\\\\\r\n                        &-.0015\\cos{E8}  &+.0049\\cos{E9}        &-.00083\\cos{E10}\\\\\r\n                        &+.00001\\cos{E11}  &+.00031\\cos{E12}        &-.057\\cos{E13}\\\\\r\n\\end{array}\r\n\\]\r\n\\\\\r\n% %------------------------------------------------------\r\nwhere & \\[\r\n\\begin{array}{lllllll}\r\n                        &E1 = 125.045 - 0.0529921d     &E2 = 250.089 - 0.1059842d \\\\\r\n                        &E3 = 260.008 + 13.0120009d    &E4 = 176.625 + 13.3407154d \\\\\r\n                        &E5 = 357.529 + 0.9856003d     &E6 = 311.589 + 26.4057084d   \\\\\r\n                        &E7 = 134.963 + 13.0649930d    &E8 = 276.617 + 0.3287146d \\\\\r\n                        &E9 = 34.226 + 1.7484877d      &E10 = 15.134 - 0.1589763d \\\\\r\n                        &E11 = 119.743 + 0.0036096d    &E12 = 239.961 + 0.1643573d   \\\\\r\n                        &E13 = 25.053 + 12.9590088d \\\\\r\n\\end{array}\r\n\\]\r\n\\\\\r\n  \\hline\\hline \\label{Table:LunaPoleMeridian}\r\n\\end{tabular}\r\n\\end{table*}\r\n\r\n\\clearpage\r\n\\include{CSFlowChart}\r\n", "meta": {"hexsha": "49ec518c9dd958af5153670d0689910ebf1b7906", "size": 103133, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/SystemDocs/MathematicalSpecification/CoordinateSystems.tex", "max_stars_repo_name": "ddj116/gmat", "max_stars_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_stars_repo_licenses": ["NASA-1.3"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-16T16:58:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-16T16:58:21.000Z", "max_issues_repo_path": "doc/SystemDocs/MathematicalSpecification/CoordinateSystems.tex", "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_licenses": ["NASA-1.3"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/SystemDocs/MathematicalSpecification/CoordinateSystems.tex", "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": ["NASA-1.3"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5448619632, "max_line_length": 786, "alphanum_fraction": 0.6154383175, "num_tokens": 34549, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.899121388082479, "lm_q2_score": 0.6224593312018545, "lm_q1q2_score": 0.5596664978951029}}
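The basic rotation matrices of Section \ref{sec:BasicRotationMatrices} combine with the tabulated pole and prime meridian angles to orient a body-fixed frame. The following Python sketch is illustrative only: it assumes the conventional sequence $\mathbf{R}_3(W)\,\mathbf{R}_1(90^{\circ}-\delta_o)\,\mathbf{R}_3(90^{\circ}+\alpha_o)$ for the inertial-to-body-fixed rotation and uses the Mars coefficients from the table above; it is not GMAT source code, and the epoch handling ($d$ in days, $T$ in Julian centuries past J2000) is an assumption stated in the comments.

\begin{lstlisting}[language=Python]
import numpy as np

def R1(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R3(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def body_fixed(alpha_o, delta_o, W):
    # Assumed convention: R3(W) R1(pi/2 - delta_o) R3(pi/2 + alpha_o)
    return R3(W) @ R1(np.pi/2 - delta_o) @ R3(np.pi/2 + alpha_o)

d = 100.0            # days past the J2000 epoch (assumption)
T = d / 36525.0      # Julian centuries past J2000 (assumption)
alpha = np.radians(317.68143 - 0.1061 * T)   # Mars, from the table
delta = np.radians(52.88650 - 0.0609 * T)
W = np.radians((176.630 + 350.89198226 * d) % 360.0)
R = body_fixed(alpha, delta, W)              # 3x3 rotation matrix
\end{lstlisting}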
{"text": "\\section*{Ex.2.3}\n\\subsection*{Is the identity function $f$ a universal hash function?}\nNo $f$ is not a universal hash function. By the definition of a universal hash function, $h$ must be a random variable with some distribution, which the identity function is not.\n", "meta": {"hexsha": "a91a4a501dc7fef44e4e779f44b3449560b6538f", "size": 267, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Uge3/Ex.2.3.tex", "max_stars_repo_name": "pdebesc/AADS", "max_stars_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Uge3/Ex.2.3.tex", "max_issues_repo_name": "pdebesc/AADS", "max_issues_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Uge3/Ex.2.3.tex", "max_forks_repo_name": "pdebesc/AADS", "max_forks_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.75, "max_line_length": 178, "alphanum_fraction": 0.7677902622, "num_tokens": 66, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8198933359135361, "lm_q1q2_score": 0.5596376561142804}}
{"text": "\\documentclass[]{article}\n\\usepackage{amsmath}\n\\usepackage{bbm}\n\n%opening\n\\title{Unbiased Pixel Cache Tracing}\n\\author{Johannes Jendersie}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Pixel Cache Projection}\n\nFor a randomly chosen cache with position $\\mathbbm{c}_{i}$ and a random point on the light path $\\mathbbm{p}_{i}$ the unbiased light transport is\n\\begin{equation}\n\tw = \\frac{f_r(\\overrightarrow{\\mathbbm{p}_{i-1}\\mathbbm{p}_i}, \\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i})\n\tf_r(\\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i}, \\overrightarrow{\\mathbbm{c}_i, \\mathbbm{c}_{i-1}})}\n\t{p_{cf}(\\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i})/p_c(\\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i})}\n\t\\label{eq:transport0}\n\\end{equation}\nwhere the denominator $p_{cf}/p_c$ is the probability for the connection direction under the distribution of caches.\nIt depends on the global spherical distribution of pixel caches $p_c(\\mathbbm{d})$ with respect to $\\mathbbm{p}$ and on the distribution of directions from the BRDF $p_f(\\mathbbm{d}) = f_r(\\overrightarrow{\\mathbbm{p}_{i-1}\\mathbbm{p}_i}, \\mathbbm{d})$.\nThe common probability is\n\\begin{equation}\n\tp_{cf}(\\mathbbm{d}) = \\frac{p_c(\\mathbbm{d})p_f(\\mathbbm{d})}{\\int_\\Omega\\int_\\Omega p_c(\\mathbbm{d})p_f(\\mathbbm{d}) d\\mathbbm{d} d\\mathbbm{d}}\n\\end{equation}\nbecause both destributions are independent.\nThe integrals normalize the distribution function to 1 and are the only reason why $w$ depends on $p_c$.\nPutting this back into equation \\ref{eq:transport0} gives\n\\begin{equation}\nw = f_r(\\overrightarrow{\\mathbbm{p}_{i-1}\\mathbbm{p}_i}, \\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i})\n\tf_r(\\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i}, \\overrightarrow{\\mathbbm{c}_i, \\mathbbm{c}_{i-1}})\n\t\\frac{\\int_\\Omega\\int_\\Omega p_c(\\mathbbm{d})p_f(\\mathbbm{d}) d\\mathbbm{d} d\\mathbbm{d}}\n\t{p_f(\\overrightarrow{\\mathbbm{p}_i\\mathbbm{c}_i})}\n\\end{equation}\n\nTo solve the integral we would need to iterate over all existing caches which is unfeasible.\nHowever, making assumptions to $p_f$ can remove this ugly term.\n\n\\subsection{Diffuse case}\n\nSetting $p_f$ to an uniform distribution on the hemisphere gives the following quotient.\n\\begin{align}\np_f(\\mathbbm{d}) &= \\frac{1}{2\\pi}\\\\\nq &= \\int_\\frac{\\Omega}{2}\\int_\\frac{\\Omega}{2} p_c(\\mathbbm{d}) d\\mathbbm{d} d\\mathbbm{d}\\\\\n&= 2\\pi p_c\\left(\\in\\frac{\\Omega}{2}\\right)\n\\end{align}\nProblem: we need to integrate over $\\frac{\\Omega}{2}$! $p_f$ is 0 otherwise.\nI.e. 
the inner integral is the probability that the cache is in the positive hemisphere $p_c(\\in\\frac{\\Omega}{2})$ which is constant.\nThe constant integrated over the hemisphere just adds another $2\\pi$.\n%With the assumption that $p_f=1/2\\pi$ for the entire sphere we can go further.\n%This is allowed because we discard the path anyway and the further weight is irrelevant.\n%In that case the inner integral resolves to 1 (as this is a property of the density function $p_c$).\n%\\begin{align}\n%q &= \\int_\\Omega 1 d\\mathbbm{d}\\\\\n%&= 4\\pi\n%\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "ae6871fdedea8bcce42680c536958fc8071304bb", "size": 2995, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documentation/unbiasedpixca.tex", "max_stars_repo_name": "Wumpf/gpugi", "max_stars_repo_head_hexsha": "55fb5d9cf12b34b22e1df0d54c64c1c6e74d8e58", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2015-06-22T06:36:27.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-23T20:56:44.000Z", "max_issues_repo_path": "documentation/unbiasedpixca.tex", "max_issues_repo_name": "Wumpf/gpugi", "max_issues_repo_head_hexsha": "55fb5d9cf12b34b22e1df0d54c64c1c6e74d8e58", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-02-20T14:21:53.000Z", "max_issues_repo_issues_event_max_datetime": "2015-02-20T14:22:02.000Z", "max_forks_repo_path": "documentation/unbiasedpixca.tex", "max_forks_repo_name": "Wumpf/gpugi", "max_forks_repo_head_hexsha": "55fb5d9cf12b34b22e1df0d54c64c1c6e74d8e58", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-04-30T13:53:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-20T14:52:57.000Z", "avg_line_length": 49.0983606557, "max_line_length": 252, "alphanum_fraction": 0.7368948247, "num_tokens": 1023, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8499711908591638, "lm_q2_score": 0.6584174938590246, "lm_q1q2_score": 0.5596359013378613}}
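The diffuse case lends itself to a quick numerical sanity check: with $p_f = 1/2\pi$ on the hemisphere, the quotient reduces to $q = 2\pi \, p_c(\in\frac{\Omega}{2})$. The sketch below estimates that probability by sampling; the cache-direction distribution used (uniform over the full sphere) is a made-up stand-in for the real $p_c$, so the printed value ($\approx\pi$) is only meaningful for this toy choice.

\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the global cache direction distribution p_c:
# directions drawn uniformly on the full sphere (an assumption).
v = rng.normal(size=(100_000, 3))
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)

prob_upper = np.mean(dirs[:, 2] > 0.0)  # mass in the +z hemisphere
q = 2.0 * np.pi * prob_upper            # q = 2*pi * p_c(in upper hemisphere)
print(q)                                # ~ pi for this symmetric stand-in
\end{lstlisting}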
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\n\\title{MAT257 Notes}\n\\author{Jad Elkhaleq Ghalayini}\n\\date{October 22 2018}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{mathtools}\n\\usepackage{enumitem}\n\\usepackage{graphicx}\n\\usepackage{cancel}\n\n\\usepackage[margin=1in]{geometry}\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{definition}{Definition}\n\\newtheorem*{corollary}{Corollary}\n\\newtheorem{exercise}{Exercise}\n\n\\newcommand{\\reals}[0]{\\mathbb{R}}\n\\newcommand{\\nats}[0]{\\mathbb{N}}\n\\newcommand{\\ints}[0]{\\mathbb{Z}}\n\\newcommand{\\rationals}[0]{\\mathbb{Q}}\n\\newcommand{\\brac}[1]{\\left(#1\\right)}\n\\newcommand{\\sbrac}[1]{\\left[#1\\right]}\n\\newcommand{\\mc}[1]{\\mathcal{#1}}\n\\newcommand{\\eval}[3]{\\left.#3\\right|_{#1}^{#2}}\n\\newcommand{\\ip}[2]{\\left\\langle#1,#2\\right\\rangle}\n\\newcommand{\\prt}[2]{\\frac{\\partial #1}{\\partial #2}}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Tangent Space}\nWe begin with some examples:\n\\begin{enumerate}\n\n  \\item Consider an ellipsoid\n  \\[\\frac{x^2}{a^2} + \\frac{y^2}{b^2} + \\frac{z^2}{c^2} = 1\\]\n  The tangent plane at a point \\((x_0, y_0, z_0)\\) of the ellipsoid is given by\n  \\[\\frac{\\cancel{2}x_0}{a^2}(x - x_0) + \\frac{\\cancel{2}y_0}{b^2}(y - y_0) + \\frac{\\cancel{2}z_0}{c^2}(z - z_0) = 0\\]\n  \\[\\frac{x_0}{a^2}x + \\frac{y_0}{b^2}y + \\frac{z_0}{c^2}z = 1\\]\n\n  \\item Let \\(f(x, y) = 0\\) and \\(g(x, y) = 0\\) be curves in \\(\\reals^2\\). Assume the curves are smooth and intersect at point \\((a, b)\\). The two curves are \\underline{orthogonal} at \\((a, b)\\) if\n  \\[\\prt{f}{x}\\prt{g}{x} + \\prt{f}{y}\\prt{g}{y} = 0\\]\n  at \\((a, b)\\). The two curves are \\underline{tangent} at \\((a, b)\\) if the gradients are proportional to each other, i.e.\n  \\[\\prt{f}{x}\\prt{g}{y} - \\prt{f}{y}\\prt{g}{x} = 0\\]\n  at \\((a, b)\\).\n\n  \\item Consider the family of parabolas\n  \\[y^2 - 2p(x + p/2) = 0\\]\n  Here, \\(p\\) is the parameter. These parabolas open along the \\(x\\) axis, either to the right or two the left, with \\(x\\) acting like a function of \\(y\\).\n\n  This kind of family is what's called \\textit{confocal}: these parabolas all have the same focus, 0. Of course, they don't all intersect, but all those which open to the right intersect all those which open to the left. Let's look at the intersections.\n\n  If \\(p_1 > 0\\) and \\(p_2 > 0\\) then the two parabolas\n  \\[f(x, y) = y^2 - 2p_1(x + p_1/2) = 0, g(x, y) = y^2 - 2p_2(x + p_2/2) = 0\\]\n  intersect where\n  \\[f(x, y) = y^2 - 2p_1(x + p_1/2) = 0 = g(x, y) = y^2 - 2p_2(x + p_2/2)\\]\n  Let's show that all the intersections among all these possible parabolas are right angles.\n\n  Let's try to compute the inner product of the gradients. That is, we want to show\n  \\[f_xg_x + f_yg_y = 0\\]\n  at an intersection point. So what's the product of the \\(x\\) derivatives? 
Well it'll just be\n  \\[f_xg_x = (-2p_1)(-2p_2) = 4p_1p_2\\]\n  And what's the product of the \\(y\\) derivatives?\n  \\[f_yg_y = (2y)(2y) = 4y^2\\]\n  So we want to see that\n  \\[f_xg_x + f_yg_y = 4p_1p_2 + 4y^2\\]\n  vanishes when both \\(f\\) and \\(g\\) are zero.\n\n  If you take \\(p_2\\) times \\(f\\), and subtract \\(p_1\\) times \\(g\\), you get rid of the \\(x\\) terms:\n  \\[p_2f - p_1g = p_2y^2 - 2p_2p_1(x + p_1/2) - p_1y^2 + 2p_1p_2(x + p_2/2) = (p_1 - p_2)y^2 + (p_2 - p_1)p_1p_2\\]\n  So we have\n  \\[f_xg_x + f_yg_y = 4p_1p_2 + 4y^2 = 4\\frac{p_2f - p_1g}{p_2 - p_1} = 0\\]\n  Since \\(f = g = 0\\) and \\(p_2 - p_1 \\neq 0\\) at the intersection point.\n\n\\end{enumerate}\nSo these are some examples of computations with tangent spaces.\n\n\\section*{Proof of the Inverse Function Theorem}\nThe hypotheses of the inverse function theorem are as follows: \\(f: \\reals^n \\to \\reals^n\\) is a \\(\\mc{C}^1\\) function in a neighborhood of a point \\(a \\in \\reals^n\\) such that\n\\[\\det f'(a) \\neq 0\\]\nWe have to show that there are open neighborhoods \\(V\\) of \\(a\\) and \\(W\\) of \\(f(a)\\) such that \\(f: V \\to W\\) and has an inverse \\(f^{-1}: W \\to V\\) which is differentiable. Remember of course that the inverse function theorem includes a very important formula for the inverse, but this is theminimum that we have to prove, since if we prove this we can get the formula for the inverse function just from the Chain Rule. We also already checked before that if we have the formula for the inverse, we can check from the formula that the inverse if \\(\\mc{C}^1\\), or in the general case \\(\\mc{C}^r\\).\n\nOne thing that in general it is good to do at the beginning, and it's something that in general we'll do at the beginning of a lot of things in this course, is that when you're given a problem which has a condition depending on the linear part of a function, for example, here, that the linear part is invertible, its always very simple to reduce to the case that the linear part is the identity. So let's just see here why we can make that assumption, that is, that we can assume without loss of generality that\n\\[f'(a) = I\\]\nSo, why? Let\n\\[\\lambda = f'(a)\\]\nIf we know this, how can we use \\(\\lambda\\) and \\(f\\) to find another function such that it's derivative at that point will be the identity? Quite simply, we can write, since \\(\\lambda\\) is invertible,\n\\[g = \\lambda^{-1} \\circ f \\implies g'(a) = (\\lambda^{-1})Df + (\\cancel{D\\lambda^{-1}})f = \\lambda^{-1}\\lambda = I\\]\nby the Chain Rule. And if \\(g\\) has an inverse \\(g^{-1}: W' \\to V\\) where \\(W'\\) is an open neighborhood of \\(g(a)\\), then \\(f\\) has an inverse \\(f^{-1}: W \\to V\\) given by\n\\[f = \\lambda \\circ g \\implies f^{-1} = g^{-1} \\circ \\lambda^{-1}\\]\nwhere \\(W' = \\lambda(W)\\). 
So knowing the theorem for \\(g\\), it follows for \\(f\\).\nSo first we'll show the statement above, but we'll check that the inverse is merely continuous instead of differentiable.\n\nLet's stop here, and hopefully do the entire proof next time.\n\n\\end{document}\n", "meta": {"hexsha": "aa9827bd45e92c7d38d0869b23c922dfa8167623", "size": 5751, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/oct22.tex", "max_stars_repo_name": "imbrem/mat257-notes", "max_stars_repo_head_hexsha": "965b1a0e5e5aae44577c5ed58e98623af1f4560d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/oct22.tex", "max_issues_repo_name": "imbrem/mat257-notes", "max_issues_repo_head_hexsha": "965b1a0e5e5aae44577c5ed58e98623af1f4560d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/oct22.tex", "max_forks_repo_name": "imbrem/mat257-notes", "max_forks_repo_head_hexsha": "965b1a0e5e5aae44577c5ed58e98623af1f4560d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.7714285714, "max_line_length": 599, "alphanum_fraction": 0.662841245, "num_tokens": 2079, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6584175005616829, "lm_q2_score": 0.849971181358171, "lm_q1q2_score": 0.5596359007793078}}
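The confocal-parabola computation above is easy to double-check symbolically. The following sketch (using \texttt{sympy}; not part of the original notes) verifies the identity $f_xg_x + f_yg_y = 4\,(p_2f - p_1g)/(p_2 - p_1)$, which forces the gradients to be orthogonal wherever $f = g = 0$ and $p_1 \neq p_2$.

\begin{lstlisting}[language=Python]
import sympy as sp

x, y, p1, p2 = sp.symbols('x y p1 p2')
f = y**2 - 2*p1*(x + p1/2)
g = y**2 - 2*p2*(x + p2/2)

# inner product of the gradients of f and g
ip = sp.diff(f, x)*sp.diff(g, x) + sp.diff(f, y)*sp.diff(g, y)

# the difference simplifies to 0, proving the identity
print(sp.simplify(ip - 4*(p2*f - p1*g)/(p2 - p1)))   # -> 0
\end{lstlisting}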
{"text": "\\subsection{Synthetic Data}\n\nFirst, tests were run on data generated in situ from a list of frequencies,\ncoefficients, and phase differences\n\n\\begin{align*}\n    c &= \\{c_1, c_2, \\ldots, c_n\\}\\\\\n    \\omega &= \\{\\omega_1, \\omega_2, \\ldots, \\omega_n\\}\\\\\n    \\phi &= \\{\\phi_1, \\phi_2, \\ldots, \\phi_n\\}\\\\\n\\end{align*}\n\nwhich generate a 'generating' function for the discrete signal to follow\n\n\\begin{align*}\n    f(x) = \\sum_{i = 1}^{n} c_i \\cdot sin(w_i x + \\phi_i)\n\\end{align*}\n\nimplemented as a lambda-function in code \n\n\\begin{lstlisting}[language=C++]\nauto f_gen = [w, c, p](int x){\n                float sum = 0;\n                for(int i = 0; i < wlist.size(); i++){\n                    sum += c[i] * sin(w[i]*x + p[i]);\n                }\n                return 7.0*sum/float(std::accumulate(c.begin(), c.end(), 0));\n            };\n            \nauto f = new signal[SIZE];\n\nfor(int x = 0; x < SIZE; x++){\n    f[i] = signal(f_gen(2*x), f_gen(2*x + 1));\n}\n\\end{lstlisting}\n\nThe normalising factor may be succinctly written as \n\n\\begin{align*}\n    a_N = \\frac{7.0}{\\sum_{i = 1}^{n}c_i}~.\n\\end{align*}\n\nIt simply ensures that the cross correlation of the signals do not blow up and\nthat a single signal agnostic threshold value may be chosen as a metric for\nmatching. The factor of 7 scales the float value to the range (-7, 7) so as to\nmaintain reasonable resolution after conversion to integer type to save memory.\n\nThe signal is generated as even/odd pairs to facilitate pairwise storage\nimplemented by \\nameref{sec:doobit}. The typename \\texttt{signal} is an alias\nfor \\texttt{doobit}, kept as such for modularity amongst storage backends, and\nto possibly expand across architectures, saving major edits.\n\nPlaying with the described parameters, experiments were performed, and the\nresults have been tabulated in \\autoref{fig:synthexp}. Since we look for a\nglobal maxima of cross correlation, the phase is irrelevant to the results, but\nfor the sake of completeness, an indication has been provided where phase shifts\nwere tested. 
Several arbitrary values were tested, with the result being\nuneaffected as expected.\n\nAll synthetic tests were performed with a resolution of 1ms, and 400 data points.\n\n\\begin{figure}[ht]\n    \\centering\n    \\def\\arraystretch{1.5}\n    \\setlength\\tabcolsep{2em}\n    \\begin{tabular}{c | c | c}\n        NCC Threshold (t)   & Input frequencies and coefficients (kHz)      & Output (kHz)  \\\\ \\hline\n        0.1                 & 0.3, 0.5, 0.8                                 & 0.3, 0.5, 0.8 \\\\\n        0.1                 & 0.3, 0.5, 0.8$^\\dagger$                       & 0.3, 0.5, 0.8 \\\\\n        0.1                 & 0.3, 0.5$^\\dagger$, 0.8$^\\dagger$             & 0.3, 0.5, 0.8 \\\\\n        0.2                 & 0.3, 0.5$^\\dagger$, 0.8$^\\dagger$             & 0.3, 0.5, 0.8 \\\\\n        0.2                 & 0.3, 0.5$^\\dagger$, 0.8$^\\dagger$             & 0.3, 0.5, 0.8 \\\\\n        0.2                 & 0.3, 4$\\times$ 0.5$^\\dagger$, 0.8$^\\dagger$   & 0.3, 0.5, 0.8 \\\\\n        0.4                 & 0.3, 4$\\times$ 0.5$^\\dagger$, 0.8$^\\dagger$   & 0.3, 0.5, 0.8 \\\\\n        0.5                 & 0.3, 4$\\times$ 0.5$^\\dagger$, 0.8$^\\dagger$   & 0.3, 0.5      \\\\\n        0.6                 & 0.3, 4$\\times$ 0.5$^\\dagger$, 0.8$^\\dagger$   & 0.5           \\\\\n        0.8                 & 0.3, 4$\\times$ 0.5$^\\dagger$, 0.8$^\\dagger$   & $\\emptyset$   \\\\\n        0.1                 & 0.3, 0.5, 0.35$^\\dagger$                      & 0.3, 0.4, 0.5 \\\\\n        0.2                 & 0.3, 0.5, 0.35$^\\dagger$                      & 0.3, 0.5     \n    \\end{tabular}\n    \\captionsetup{justification=centering}\n    \\caption{Experiments with varying synthetic inputs and thresholds.\\\\\n    $\\dagger.$ Phase shifted. Weight unit unless otherwise specified}\n    \\label{fig:synthexp}\n\n\\end{figure}\n\n\\subsection{Real Inputs}\n\nNot having a microphone module for the Arduino, we resorted to passing a\nrecorded array to the device via MATLAB, opening its own can of worms by way of\nbuffer overflows and race conditions, as described in detail in\n\\autoref{subsec:buffinp}. \n\nFor currently unknown reasons, we had memory overflows with the same size of\nreal data as we ran synthetic tests for. Curiously, in this case, the Arduino\ncontinuosly prints \\texttt{'w'} on the Serial line. After much testing we have\nlinked this behaviour to memory overflow, but are yet to find resources\nconfirming it. \n\nReducing the size, however, allows us to perform some tests, limited by the data\ntransfer speed as well, with the serial line taking up to 30 seconds to transfer\nall the data successfully, yet with significant packet loss. The best transfer\nratio we were able to obtain after fine tuning the transfer rates and timing was\n400 packets received for 420 sent, but at this delicate parameter island, the\nArduino would, with roughly half probability, run out of memory. It may be\ndependent on buffering of incoming packets. 
Buffering just a few more packets\nmay have been pushing it over the edge.\n\nFor generating required frequencies, the Android app\n\\href{https://play.google.com/store/apps/details?id=com.luxdelux.frequencygenerator}{Frequency\nSound Generator} was used.\n\nAll real tests were performed with a resolution of 1ms, and 200 data points.\n\n\\begin{figure}[ht]\n    \\centering\n    \\def\\arraystretch{1.5}\n    \\setlength\\tabcolsep{2em}\n    \\begin{tabular}{c | c | c}\n        NCC Threshold (t)   & Input signals                                 & Output (kHz) \\\\ \\hline\n        0.1                 & 0.4 kHz (generated using a phone speaker)     & 0.2$^\\ast$, 0.3, 0.4 \\\\   \n        0.1                 & 0.5 kHz (generated using a phone speaker)     & 0.3, 0.4, 0.5, 0.6$^\\ast$ \\\\   \n        0.1                 & 0.6 kHz (generated using a phone speaker)     & 0.4, 0.5, 0.6, 0.7$^\\ast$ \\\\   \n        0.2                 & 0.6 kHz (generated using a phone speaker)     & 0.4, 0.5, 0.6 \\\\   \n        0.3                 & 0.6 kHz (generated using a phone speaker)     & $\\emptyset$ \\\\  \n        0.1                 & Speech (\"Hello Hello\")                        & 0.2$^\\ast$, 0.3, 0.4, 0.5$^\\ast$, 0.6 \\\\   \n        0.1                 & Middle C (261 Hz)                             & 0.3 \\\\   \n        0.1                 & Middle A (440 Hz)                             & 0.3, 0.4 \n    \\end{tabular}\n    \\captionsetup{justification=centering}\n    \\caption{Experiments with real inputs and thresholds. $\\ast$ indicating the frequency appeared sometimes during several test runs.}\n    \\label{fig:realexp}\n\n\\end{figure}\nWe further noted that the algorithm itself could give us better resolution with more frequencies. However, as the memory in our Arduino was quite strained, we ran into memory overflow issues with garbage data values being sent. Howver, this algorithm can be easily scaled up on better devices by adding more frequencies to test with. \n\n\\subsection{Proofs}\n\nScreenshots for the code compiled and working may be seen in\n\\autoref{fig:pushkcompile} and \\autoref{fig:sankacompile} respectively for the\ngroup emmbers, and a photo each of the physical setups is found in\n\\autoref{fig:realsetup}. These, along with some more photos, and video\ndemonstrations, have been uploaded to the required shared directory, as well as\n\\href{https://drive.google.com/drive/folders/1jjNn4WWWuB7WvTU04xIGaotz5W_Lfz55?usp=sharing}{to\nour own folder, linked here}.\n\n\\begin{figure}[pt]\n    \\centering\n    \\includegraphics[width=0.9\\textwidth]{fig/pushkcompile1.png}\n\n    \\includegraphics[width=0.9\\textwidth]{fig/pushkcompile2.png}\n    \\caption{Code compiled and running tests for Pushkar}\n    \\label{fig:pushkcompile}\n\\end{figure}\n\n\n\\begin{figure}[pt]\n    \\centering\n    \\includegraphics[width=0.9\\textwidth]{fig/sankacompile1.png}\n\n    \\includegraphics[width=0.9\\textwidth]{fig/sankacompile2.png}\n    \\caption{Code compiled and running tests for Sankalp}\n    \\label{fig:sankacompile}\n\\end{figure}\n\n\\begin{figure}[pt]\n    \\centering\n    \\includegraphics[width=0.9\\textwidth]{fig/pushkreal.png}\n    \n    \\includegraphics[width=0.9\\textwidth]{fig/sankareal.png}\n    \\caption{Physical setup. 
Top, Pushkar; bottom, Sankalp.}\n    \\label{fig:realsetup}\n\\end{figure}\n", "meta": {"hexsha": "adc5edafbd9ecd8afafeef2c21e3f7fcf5dbd2ef", "size": 8155, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/results.tex", "max_stars_repo_name": "sankalpgambhir/ardio", "max_stars_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-29T18:00:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-29T18:00:57.000Z", "max_issues_repo_path": "doc/results.tex", "max_issues_repo_name": "sankalpgambhir/ardio", "max_issues_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/results.tex", "max_forks_repo_name": "sankalpgambhir/ardio", "max_forks_repo_head_hexsha": "c70c28ddd00690aab04aeb8f1e84fe90fcdb5d02", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.867816092, "max_line_length": 334, "alphanum_fraction": 0.6255058246, "num_tokens": 2498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355187, "lm_q2_score": 0.7248702642896702, "lm_q1q2_score": 0.5594864078774338}}
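For readers who want to reproduce the matching scheme off-device, the sketch below re-implements the pipeline in Python: a sum-of-sines test signal, a peak normalised cross-correlation (NCC) score against single-frequency templates, and a threshold test. It is an illustration rather than the Arduino code; the 0.5 ms sample spacing (from the even/odd pairing) and the exact normalisation details are assumptions.

\begin{lstlisting}[language=Python]
import numpy as np

def synth(freqs, coeffs, phases, n, dt=0.5e-3):
    # sum-of-sines generating function, scaled to (-7, 7)
    t = np.arange(n) * dt
    s = sum(c * np.sin(2*np.pi*f*t + p)
            for f, c, p in zip(freqs, coeffs, phases))
    return 7.0 * s / sum(coeffs)

def ncc(a, b):
    # peak of the normalised cross-correlation of two signals
    a = (a - a.mean()) / np.linalg.norm(a - a.mean())
    b = (b - b.mean()) / np.linalg.norm(b - b.mean())
    return float(np.max(np.correlate(a, b, mode='full')))

sig = synth([300, 500, 800], [1, 1, 1], [0.0, 0.4, 1.1], 400)
for f_test in [200, 300, 400, 500, 600, 700, 800]:
    if ncc(sig, synth([f_test], [1], [0.0], 400)) > 0.1:  # threshold t
        print(f"{f_test} Hz matched")
\end{lstlisting}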
{"text": "\\section{PIC Implementation}\n\nImplementing PIC into SINATRA was completed using the algorithm seen in Section \\ref{sec:algorithm}. Two important factors were critical during implementation: robustness and efficiency. SINATRA is built to be able to simulate many different fluid scenarios chosen by the user. Therefore, the PIC portion should also have this generality. Therefore, even though the Finite Difference method requires equal cell sizes, the rest of the algorithms are based on the individual cells so that when the mesh is non-uniform the simulation could be updated with minimum work. It was also implemented in an efficient manner for the DSMC setup and algorithms found in SINATRA. It uses flattened arrays, built-in particle and cell index arrays, and a structure which fits the current parallelization within SINATRA. \\textcolor{white}{Live Long and Prosper}\n\n\\subsection{Finite Difference}\n\\label{sec:finite_diff}\n\n\\indent Finite Difference was chosen as the method to discrete Poisson's equation. Finite difference is a simple and common method for discretizing the Poisson equation and is common in PIC codes. There are other algorithms which could be used, but Finite Difference allows for a relatively simple implementation. The derivation below is for a 7-point finite difference for the Poisson equation \\cite{FD_GS,FDM}.\n\nThe derivation begins with the Poisson equation expanded out to each of the 3 directions. \n\n\\begin{equation}\n    \\label{eqn:poisson_expanded}\n    \\nabla^2 \\phi = \\frac{\\partial^2 \\phi}{\\partial x^2} + \\frac{\\partial^2 \\phi}{\\partial y^2} + \\frac{\\partial^2 \\phi}{\\partial z^2} = - \\frac{\\rho}{\\epsilon_0}\n\\end{equation}\n\n% Make those \nNext the Poisson equation can be discretized in the \\(x \\: \\text{,} \\: y \\: \\text{and} \\: z\\) directions as shown in Equations \\ref{eqn:x_partial}, \\ref{eqn:y_partial}, and \\ref{eqn:z_partial}. The indices for \\(x \\: \\text{,} \\: y \\: \\text{and} \\: z\\) are given by \\(i \\: \\text{,} \\: j \\: \\text{and} \\: k\\) respectively \\cite{FD_GS}. \n\n\\begin{equation}\n    \\label{eqn:x_partial}\n    \\frac{\\partial^2 \\phi}{\\partial x^2}(x_i,y_i,z_i) \\approx \\frac{1}{h^2}(\\phi(x_{i-1},y_j,z_k) - 2\\phi(x_i,y_j,z_k) + \\phi(x_{i+1},y_j,z_k)),\n\\end{equation}\n\\begin{equation}\n    \\label{eqn:y_partial}\n    \\frac{\\partial^2 \\phi}{\\partial x^2}(x_i,y_i,z_i) \\approx \\frac{1}{h^2}(\\phi(x_i,y_{j-1},z_k) - 2\\phi(x_i,y_j,z_k) + \\phi(x_i,y_{j+1},z_k)),\n\\end{equation}\n\\begin{equation}\n    \\label{eqn:z_partial}\n    \\frac{\\partial^2 \\phi}{\\partial x^2}(x_i,y_i,z_i) \\approx \\frac{1}{h^2}(\\phi(x_i,y_j,z_{k-1}) - 2\\phi(x_i,y_j,z_k) + \\phi(x_i,y_j,z_{k+1})),\n\\end{equation}\n\nSubstituting these into Equation \\ref{eqn:poisson_expanded} gives Equation \\ref{eqn:poisson_full} below.\n\n\\begin{align}\\label{eqn:poisson_full}\n    \\frac{\\phi(x_{i-1},y_j,z_k) - 2\\phi(x_i,y_j,z_k) + \\phi(x_{i+1},y_j,z_k)}{h^2} +& \\nonumber \\\\\n    \\frac{\\phi(x_i,y_{j-1},z_k) - 2\\phi(x_i,y_j,z_k) + \\phi(x_i,y_{j+1},z_k)}{h^2} +& \\nonumber \\\\\n    \\frac{\\phi(x_i,y_j,z_{k-1}) - 2\\phi(x_i,y_j,z_k) + \\phi(x_i,y_j,z_{k+1})}{h^2} =& - \\frac{\\rho_{ijk}}{\\epsilon_0}\n\\end{align}\n\\(h\\) = Cell side length \\par\n\nThis builds a set of linear equations which is a common problem in linear algebra and therefore multiple methods exist to solve it. First, the matrix must be created. 
This matrix, also called a stencil, only needs to be created once per simulation because it is only dependent on the mesh. Equation \\ref{eqn:stencil} shows the stencil for a 7-point mesh on a \\(3\\times3\\times3\\) node domain. \\par\n\n% Make this prettier  \\ddots\n\\begin{equation}\n\\label{eqn:stencil}\nA = \n\\begin{bmatrix}\nS & I &  & I &  &  &  &  & \\\\ \nI & S &I  &  & I &  &  &  & \\\\ \n & I & S &  &  &I  &  &  & \\\\ \nI &  &  & S & I &  &I &  & \\\\ \n & I &  & I & S & I &  &I  & \\\\ \n &  & I &  & I & S &  &  & I\\\\ \n &  &  & I &  &  & S & I & \\\\ \n &  &  &  & I &  & I & S & I\\\\ \n &  &  &  &  & I &  & I & S\n\\end{bmatrix}\n\\end{equation}\nwhere,\n\\begin{equation} \\nonumber\nS = \n\\begin{bmatrix}\n-6 & 1 & \\\\\n1 & -6 & 1\\\\\n & 1 & -6 \\\\\n\\end{bmatrix}\n\\ \\text{and} \\ I = \n\\begin{bmatrix}\n1 &  & \\\\\n & 1 & \\\\\n & & 1 \\\\\n\\end{bmatrix}\n\\end{equation}\n\n\\indent Boundary conditions in a finite difference model can either be Neumann or Dirichlet. Neumann boundary conditions define the derivative at the boundary while Dirichlet define the value of the solution along the boundary. In this case, that means Neumann boundaries define the electric field while Dirichlet conditions define the value of the electric potential. For PIC simulations, it is best to use Neumann for open or uncharged boundaries while Dirichlet is best for objects like charged surfaces. SINATRA currently only includes Neumann boundary conditions; Dirichlet will be added when the boundary class is updated.\\footnote{This current condition causes the stencil to be ill posed for a direct solver like Gauss-Seidel \\cite{bvp_neumann}. This means that while it will converge upon an answer it is possible that it converged upon an inaccurate result and therefore will skew the results. For current results it has be observed to calculate relatively accurate solutions, however this should be explored in further work and users should be stopped from selecting this situation in the initial conditions.}Neumann boundaries have been implemented by editing the stencil when the node is along a boundary. At that point Equation \\ref{eqn:neumann} is used to change the values of the stencil. An average is calculated between the direction of boundary and the perpendicular direction in order to keep the slip boundary along open walls. \\par\n\n\\begin{equation}\n    \\label{eqn:neumann}\n     A_{x_0,y_j,z_k} = \\frac{-\\phi(x_0,y_j,z_k) + \\phi(x_1,y_j,z_k)}{2 h}\n\\end{equation}\n\n\\indent This stencil was implemented into SINATRA as a flattened 2D matrix which in itself is two flattened 3D matrices. The electric potential is stored as a flattened 3D matrix. The matrix items are selected through conversion functions that go from 3 dimensions to 1 dimension and vice versa, shown in Equations \\ref{eqn:to1D} and \\ref{eqn:to3D}. Figure \\ref{fig:sparse} shows a MATLAB\\textsuperscript{\\textregistered} \\textit{spy} command on SINATRA's stencil. The spy command shows which items in a matrix are non-zero. As seen by these figures the stencil is largely empty. Future work would include using a different data structure and access algorithm to reduce the wasted memory. 
\\par\n\n\n\n\\begin{figure}\n    \\centering\n  \\begin{minipage}[b]{0.49\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/sparse_8.jpg}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.49\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/sparse_64.jpg}\n  \\end{minipage}\n  \\caption[SINATRA Sparse Stencil Matrix]{SINATRA Sparse Stencil Matrix\\textmd{, shown through the MATLAB\\textsuperscript{\\textregistered} spy command (left) 8 Cells in the mesh (right) 64 cells in the mesh.}}\n  \\label{fig:sparse}\n\\end{figure}\n\n\n\\Needspace{5\\baselineskip}\n\\begin{equation}\n    \\label{eqn:to1D}\n    Index = i \\times s^2 + j \\times s + k\n\\end{equation}\n\\(Index\\) = The index of the 1D array \\\\\n\\(i \\: \\text{,} \\: j \\: \\text{and} \\: k\\) = The indices of the 3D array \\\\\n\\(s\\) = The size of one dimension of the 3D array \\par\n\n\\Needspace{8\\baselineskip}\n\\begin{align}\\label{eqn:to3D}\n    i =& \\frac{Index}{s^2} \\nonumber \\\\\n    j =& \\Big(\\frac{Index}{s}\\Big) \\: \\% \\: s \\\\\n    k =& Index \\: \\% \\: s \\nonumber\n\\end{align}\nNote: Values rounded down to closest whole number for use as indices \\newline\n\\(\\%\\) = The modulus operator which gives the remainder of division \\par\n\n\n\n\n\n\n\\indent At this stage, the simulation assumes equal cell sizes and refinement across the whole domain. However, the charge density summation is calculated cell by cell, and the electric field is distributed cell by cell. These allow future iterations to have unequal cell sizes. \\par\n\n\n\\subsection{Gauss-Seidel}\n\\label{sec:gauss}\n\nThe Gauss-Seidel iterative method was chosen to be the linear algebra solver for SINATRA. It was chosen for its simplicity, universality, and robustness. It has one improvement over the simplest iterative method, Jacobi, in that it uses the information as it calculates it. It solves a version of the Equation \\ref{eqn:linearset}.\n\n\n\\Needspace{8\\baselineskip}\n\\begin{equation}\n    \\label{eqn:linearset}\n    A\\cdot x = b\n\\end{equation}\nIn terms of SINATRA, \\\\\n\\(A\\) = Stencil - Matrix of coefficients\\\\\n\\(x\\) = Electric Potential - The vector to be solved \\\\\n\\(b\\) = Charge Density - The calculated vector \\par\n\n\nThe Charge Density (\\(b\\)), as defined in Equation \\ref{eqn:density}, includes the electron density. In SINATRA the electrons are approximated with a fluid assumption as shown in the Boltzmann relationship (\\ref{eqn:e_density}). This thesis will not go into a derivation of the Gauss-Seidel method, but the equation to update the electric potential at each time-step is given by Equation \\ref{eqn:gauss_seidel} \\footnote{A derivation of the Gauss-Seidel method can be found in Reference \\cite{Gauss_eqn}}.\n\n\\begin{equation}\n    \\label{eqn:gauss_seidel}\n    x_i^{k+1} = \\Big( b_i - \\sum_{j=1}^{i-1} A_{i j} x_j^{k+1} - \\sum_{j=i+1}^n A_{i j} x_j^{k} \\Big) / a_{ii}\n\\end{equation}\n\n\n\n\\indent  It was discussed previously how to distribute the ion's charge, but not how to combine the ion charge and the electron charge. Therefore, by discretizing Poisson's Equation (\\ref{eqn:poisson}), combining it with Equations \\ref{eqn:density} and \\ref{eqn:e_density} for the charge density, and putting it in matrix form (\\ref{eqn:stencil}) we get Equation \\ref{eqn:mixed}. 
\n\n\\Needspace{10\\baselineskip}\n\\begin{equation}\n    \\label{eqn:mixed}\n    A \\cdot \\phi = - \\frac{e}{\\epsilon_0} \\Big[n_i - n_o \\exp\\Big(\\frac{\\phi - \\phi_0}{k \\: T_e}\\Big)\\Big]\n\\end{equation}\n\\(A\\) = The sparse stencil \\\\\n\\(e\\) = Charge on an electron \\\\\n\\(\\epsilon_0\\) = Permittivity of free space \\\\\n\\(n\\) = Number density \\\\\n\\(\\phi\\) = Electric Potential \\\\\n\\(\\phi_0\\) = Initial Electric Potential \\\\\n\\(k\\) = Boltzmann Constant \\\\\n\\(T_e\\) = Temperature of the Electrons \\par\n\n\\indent This shows that the \\(b\\) in Equation \\ref{eqn:linearset} is the right hand side of the set of linear equations for the Gauss-Seidel solver. Importantly, note that both sides depend on the electric potential. It means that the right hand side must be recalculated at every time-step. \\par\n", "meta": {"hexsha": "a5c89247c1bb6d4d2c32d76270372d4448ab8433", "size": 10347, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/Implimentation.tex", "max_stars_repo_name": "dclunde/thesis-template", "max_stars_repo_head_hexsha": "7f987df93321a6522a784aa9d0dc788608c2166a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/Implimentation.tex", "max_issues_repo_name": "dclunde/thesis-template", "max_issues_repo_head_hexsha": "7f987df93321a6522a784aa9d0dc788608c2166a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/Implimentation.tex", "max_forks_repo_name": "dclunde/thesis-template", "max_forks_repo_head_hexsha": "7f987df93321a6522a784aa9d0dc788608c2166a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.224852071, "max_line_length": 1453, "alphanum_fraction": 0.7166328404, "num_tokens": 3093, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.559486404856385}}
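To make the preceding machinery concrete, here is a small illustrative Python sketch (not the SINATRA implementation): the index conversions of Equations \ref{eqn:to1D} and \ref{eqn:to3D}, and a matrix-free Gauss--Seidel sweep for the 7-point discretisation of $\nabla^2\phi = -\rho/\epsilon_0$. For simplicity it holds boundary nodes at $\phi = 0$ (a Dirichlet condition) and omits the Boltzmann electron term; both are deliberate departures from the text.

\begin{lstlisting}[language=Python]
import numpy as np

def to1d(i, j, k, s):                 # Equation (to1D)
    return i*s*s + j*s + k

def to3d(idx, s):                     # Equation (to3D)
    return idx // (s*s), (idx // s) % s, idx % s

def gauss_seidel(rho, h, eps0=8.8541878128e-12, sweeps=200):
    # In-place sweeps use freshly updated neighbours, as in the
    # Gauss-Seidel update; boundary nodes stay at phi = 0.
    phi = np.zeros_like(rho)
    nx, ny, nz = rho.shape
    for _ in range(sweeps):
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                for k in range(1, nz - 1):
                    phi[i, j, k] = (phi[i-1, j, k] + phi[i+1, j, k]
                                  + phi[i, j-1, k] + phi[i, j+1, k]
                                  + phi[i, j, k-1] + phi[i, j, k+1]
                                  + h*h*rho[i, j, k]/eps0) / 6.0
    return phi
\end{lstlisting}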
{"text": "\\section{Symbolic Execution in Presence of Unbounded Loops\\label{sec-motor-control}}\n\nMany programs targeting REDFIN share the distinctive feature of\nthe energy estimation program considered in section~\\S\\ref{sec-verification},\ni.e. the existence of an  \\emph{upper bound on execution time},\nsince their termination does not depend on input data.\nHowever, other programs may have a loop\nwhich is guarded by a termination condition that involves computation\nconsidering the input parameters of the program, thus making the loop\n\\emph{unbounded}.\n\nPresence of unbounded loops makes program verification by symbolic execution considerably\nharder~\\cite[p.~50:20]{SurveySymExec-CSUR18}, since the number of program execution\npaths becomes infinite. In this section we consider an example\nof a control program that drives a stepper motor and verify one of its essential\nsafety properties by formulating it as a~\\emph{loop invariant} and ensuring that the\ninvariant holds for every possible state of the loop.\n\n\\subsection{Stepper Motor Control Program}\n\nStepper motors are often deployed as parts of antenna and solar panel pointing units\nin space satellites. We consider a program for controlling a motor with\none degree of freedom. The control Algorithm~\\ref{alg-motor} takes three input\nparameters:\n\\begin{itemize}\n\\item $dist$ --- the distance to move the motor\n\\item $v_{max}$ --- the maximal permitted velocity\n\\item $a_{max}$ --- the maximal permitted acceleration\n\\end{itemize}\n\n\\noindent\nand computes a series of\ndisplacement and velocity values that will be used to move the motor. Since the algorithm is\ndesigned for controlling a stepper motor, the calculations happen in~\\emph{discrete time},\ni.e. every iteration of the $while$ loop corresponds to a time interval; thus\nthe deceleration (i.e. braking) distance is computed as\n\\[\ns_{decel} = a_{max} \\cdot \\frac{decel\\_steps \\cdot (decel\\_steps + 1)}{2},\n\\]\nwhere $decel\\_steps = \\frac{v}{a_{max}}$ is the number of decelerating iterations\nneeded for a full stop.\n\nThe conditional statement in line 9 decides whether\nto accelerate, to keep the velocity, or to decelerate;\nsee Fig.~\\ref{fig-motor} for example plots of velocity and distance travelled\nagainst time. 
The spike at the bottom-right of the velocity plot illustrates\nthe edge case covered by the conditional statement on line 18: if the velocity\nis zero, but the target distance has not yet been reached, the motor must be\nmoved further.\n\n\\begin{algorithm}[h]\n  \\begin{algorithmic}[1]\n\\Require {$dist$, $v_{max}$, $a_{max}$}\n\\State $s \\gets 0$\n\\State $v \\gets 0$\n\\While {$true$}\n  % Compute deceleration distance based on current speed\n  \\State $decel\\_steps\\gets v / a_{max}$\n  \\algorithmiccomment{Compute deceleration distance}\n  \\State $s_{decel} \\gets a_{max} \\cdot decel\\_steps \\cdot (decel\\_steps + 1) / 2$\n  \\algorithmiccomment{based on the current velocity\\hspace{2.55mm}~}\n  \\If {$decel\\_steps \\cdot a_{max} \\neq v$}\n    \\State $s_{decel} \\gets s_{decel} + v$\n  \\EndIf\n  \\State {$v_{next} = min(v_{max}, dist, v + a_{max})$}\n  \\If {$s + s_{decel} + v_{next} \\leq dist$}\n    \\State $v \\gets v_{next}$ \\algorithmiccomment{Accelerate}\n  \\ElsIf {$s + s_{decel} + v \\leq dist$}\n    \\State $ v \\gets v $     \\algorithmiccomment{Keep velocity}\n  \\Else\n     \\algorithmiccomment{Decelerate}\n    \\If {$v > decel\\_steps \\cdot a_{max}$}\n      \\State $v \\gets decel\\_steps \\cdot a_{max}$\n    \\Else\n      \\State $v \\gets v - a_{max}$\n      \\EndIf\n  \\EndIf\n  \\If {$v = 0$}\n    \\If {$s \\neq dist$}  \\algorithmiccomment{Accelerate again to reach target}\n      % didn't quite reach our target after deceleration\n      % => accelerate again\n      \\State $v \\gets min(dist - s, a_{max})$\n    \\Else\n      % reached target\n      \\State $break$  \\algorithmiccomment{Terminate execution}\n      \\EndIf\n      \\EndIf\n  \\State $s \\gets s + v$\n\\EndWhile\n\\end{algorithmic}\n\\caption{Motor Control Algorithm.\\label{alg-motor}}\n\\end{algorithm}\n\nTo deploy the Algorithm~\\ref{alg-motor} to REDFIN, it has been manually\nimplemented in REDFIN assembly. The resulting assembly program comprises 85 lines of\ncode and closely mirrors the high-level pseudocode. Fig.~\\ref{fig-sym-tree} shows\na fragment of the program's symbolic execution tree that corresponds to the\ndecision whether to accelerate, keep the velocity, or decelerate the motor.\n\nThe decision is performed by computing the resulting total distance\ntravelled from start to stop, based on the action taken in the current\ntime step.  First, the total distance is computed if the motor were to\naccelerate for one more time step and then decelerate in the\nsubsequent time steps.  If the computed total distance is less than or\nequal to $dist$, the decision to accelerate is committed.  Otherwise,\nthe algorithm checks whether the targeted distance can be met by\nmaintaining the current velocity for one more time step.  If even that\nwould cause an overshoot, the decision for immediately commencing\ndeceleration is taken. Fig.~\\ref{fig-motor} illustrates this decision process by\nplotting the velocity and distance over time for a specific simulation\nrun.\n\n\\subsection{Loop Invariant Verification}\n\nIn order to ensure that the motor will not introduce disturbances\nand will not lead the whole unit out of its normal mode of operation, the velocity and\nacceleration of the motor must be kept within safe limits. 
This verification condition\nis motivated by the correctness requirements of the whole space satellite unit.\n\nMore formally, the verification condition means\nthat at any iteration $t$ of the loop the values of the\nexpressions $v^t$, velocity, and $\\left| v^t_{next} - v^t \\right|$,\nacceleration, must never exceed the parameters $v_{max}$ and $a_{max}$, respectively.\nThis property is the loop invariant for the motor control program which\nensures that velocity and acceleration always stay within their safe bounds. We\nformalise it as the following predicate that universally quantifies over the\nprogram's inputs and the loop's state:\n\n\\vspace{1mm}\n\\begin{figure}[H]\n\\begin{tcolorbox}\n\\LARGE{\n\\[\n  \\forall\\ v_{max}\\ a_{max}\\ t\\ v^t\\ v^t_{next},\\ v^t \\leq v_{max} \\land \\left| v^t_{next} - v^t \\right| \\leq a_{max}\n\\]}\n\\end{tcolorbox}\n\\caption{Motor control loop invariant.\\label{fig-loop-invariant}}\n\\end{figure}\n\n\\noindent\nWe will verify the loop invariant by using the verification framework in the\n\\emph{branching} mode~\\S\\ref{sec-transformer}. While symbolic execution with\nmerging, which is implemented by the framework's \\emph{merging} mode, allows\nfor intuitive formulation of properties for whole-program verification and\nis very useful for verifying finite programs, as we have\nreported in the section~\\S\\ref{sec-verification}, in the presence of branches\nguarded by symbolic values, it suffers from \\emph{symbolic non-termination};\nthus, for verifying the loop invariant, we rely on the branching mode of\nsymbolic execution.\n\n\\begin{figure}\n\\centerline{\\includegraphics[scale=0.50]{fig/motor_control_graph.pdf}}\n  % \\vspace{-13mm}\n\\caption{Velocity ($v$) and distance travelled ($s$) plotted against time ($t$)\\label{fig-motor}}\n\\Description[Please consult the text of this section for the description]{}\n\\end{figure}\n\n\\noindent\nWe take the following generic approach:\n\n\\begin{itemize}\n  \\item Obtain the binary tree-shaped trace by symbolic execution in~\\emph{branching} mode.\n  \\item Split the trace into linear paths, thus enumerating all possible\n    execution scenarios.\n  \\item For every path perform the analysis:\n    \\begin{itemize}\n      \\item extract the relevant parts of the state from the~\\emph{last} node in\n        the path, i.e. symbolic expressions stored representing $s$,\n        $v$ and $v_{next}$;\n      \\item extract the terminal~\\emph{path condition} $\\phi$\n            from the~\\emph{last} node in the path;\n      \\item construct a symbolic expression representing the~\\emph{verification condition}\n            $\\psi$ (Fig.~\\ref{fig-loop-invariant});\n      \\item verify the property in the given path by checking the following\n        formula for satisfiability: $\\phi \\land \\lnot\\psi$, i.e. the path condition\n        conjoined with negated verification condition.\n\n    \\end{itemize}\n    \\item  The property holds if and only if for every path the solver returns\n           \\textsc{Unsat}, i.e. there are no assignments of the variables which\n           satisfy the~\\emph{negation} of the property to check, considering the\n           terminal path condition.\n\\end{itemize}\n\nAs illustrated by Fig.~\\ref{fig-sym-tree}, every conditional jump instruction produces\ntwo branches in the symbolic execution tree: the one\nwhere the current path condition is conjoined with the jump's guard and the one where it is\nconjoined with the guard's negation. 
However, if the resulting\nconjunction is unsatisfiable, the corresponding branch need not to be explored\nand can be safely pruned.\nThus the symbolic execution engine needs to call an SMT solver every\ntime a conditional jump is encountered to check if the path conditions of the branches\nare satisfiable.\n\n\\begin{figure}\n\\centerline{\\includegraphics[scale=0.4]{fig/sym-tree.pdf}}\n\\caption{Symbolic execution tree of a code fragment with conditional branching.\\label{fig-sym-tree}}\n\\Description[Please consult the text of this section for the description]{}\n\\end{figure}\n\nChecking satisfiability of path conditions is essential for mitigating the path explosion\nproblem. Precondition, when available, is assigned as the initial path condition and thus\nwill become a subterm of every formula submitted to the solver.\nBy identifying strong preconditions, we can drastically reduce the number of\nsatisfiable paths in the symbolic execution tree of a program that is being\nverified, thereby significantly shortening verification times.\n\nWith the branch pruning optimisation in place and the parameters restricted\nto $dist \\in \\mathopen[1, 200\\mathclose]$, $v_{max} \\in \\mathopen[1, 30\\mathclose]$ and\n$a_{max} \\in \\mathopen[1, 30\\mathclose]$, the verification of the loop invariant\ntakes around 40 minutes.\n", "meta": {"hexsha": "011f8acf91351f7070c7d4ff06ce47b73cc71f39", "size": 10036, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/acm-tecs-paper/5-motor-control.tex", "max_stars_repo_name": "tuura/redfin", "max_stars_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-05T19:13:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-05T19:13:46.000Z", "max_issues_repo_path": "papers/acm-tecs-paper/5-motor-control.tex", "max_issues_repo_name": "tuura/redfin", "max_issues_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "papers/acm-tecs-paper/5-motor-control.tex", "max_forks_repo_name": "tuura/redfin", "max_forks_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.3396226415, "max_line_length": 117, "alphanum_fraction": 0.7570745317, "num_tokens": 2565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434978390747, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.559486404856385}}
{"text": "\n\n\\newpage\n\\section{Center of Gravity Computation}\n\n\n\nThe Center of Gravity of the experiment has been calculated considering all the components' mass listed in Section \\ref{components}.\n\n\n\\subsection{Code}\n\n\\lstinputlisting[language=Matlab]{appendix/code/centerofgravity/TUBULAR_COG.m}\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "475f97bf2dcf0a4a6c5f237354d06ed7faacf1aa", "size": 295, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendix/appendix-center-of-gravity-computation.tex", "max_stars_repo_name": "georgeslabreche/tubular-bexus-sed", "max_stars_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-01-17T10:38:07.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-17T10:38:07.000Z", "max_issues_repo_path": "appendix/appendix-center-of-gravity-computation.tex", "max_issues_repo_name": "georgeslabreche/tubular-bexus-sed", "max_issues_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendix/appendix-center-of-gravity-computation.tex", "max_forks_repo_name": "georgeslabreche/tubular-bexus-sed", "max_forks_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 13.4090909091, "max_line_length": 132, "alphanum_fraction": 0.7830508475, "num_tokens": 69, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.5594863972478245}}
{"text": "\\section{A Tutorial on Spectral Clustering}\n\\label{ch:ulrike07}\n\n\\textit{A Tutorial on Spectral Clustering} by Ulrike Von Luxburg. Professor of Computer Science, focus on ML and computational statistics, University of Hamburg.\\\\\nCited by 2597. \\textit{Statistics and computing, 2007}.\n\\newline\n\n\\textbf{Main point} is that \\begin{inparaenum}[\\itshape a\\upshape)]\n\\item spectral clustering obtains high-quality solution to general clustering problem.\n\\item graph Laplacian has important properties.\n\\item three different perspectives  are given as motivation and explanation: a graph cut point of view, a random walks point of view, and a pertubation theory point of view.\n\\end{inparaenum}\n\n\\subsection{graph Laplacian}\nSpectral clustering algorithms attemp to find $k$ subsets of vertices which have the property that vertices within a cluster are similar to each other than to vertices in other clusters. In order to find sets, spectral clustering makes the unnormalized Laplacian $n \\times n$ matrix $L$.\n\n$L = D - W$ where $D$ is the diagonal degree matrix $d_{ii}$ is the detree of vertex $i$ defined as $d_{ii} = \\sum_{j=1}^n$ and $W$ is the weighted adjacency matrix, such that $W_{ii} = 0$ and $w_{ij}$ is the weight of the edge between $v_i \\in V$ and $v_j \\in V$.\n\nNote that we are assuming that the graph is undirect which makes $W$ and $L$ symmetric and the weight of an edge is a non-negative measure of similiarity between the two vertices. Higher weights imply greater similarity.\n\n\\subsubsection{Unnormalized $L$}\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{description}\n\\item[Property 1] \\hfill \\\\\nfor every vector $f \\in R^n$, we have $f^T L f = \\frac{1}{2} \\sum_{i,j=1}^{n} w_{i,j}(f_i - f_j)^2$.\n\\item[Property 2] \\hfill \\\\\n$L$ is symmetric and positive-definite.\n\\item[Property 3] \\hfill \\\\\nThe smallest eigenvalue of $L$ is 0, and the corresponding eigenvector is the constant eigenvector 1.\n\\item[Property 4] \\hfill \\\\\n$L$ has $n$ non-negative, real-valued eigenvalues are $0 = \\lambda_1 \\leq \\lambda_2 \\leq \\cdots \\leq  \\lambda_n$.\n\\end{description}\n\\end{mdframed}\n\\caption{Unnormalized $L$}\n\\end{figure}\n\n\\subsubsection{Normalized $L$}\nHere is the definition of $L_{sym}$ and $L_{rw}$. 
\\\\\n$\\bullet$$L_{sym} = D^{-\\frac{1}{2}} L D^{-\\frac{1}{2}} = I - D^{-\\frac{1}{2}} W D^{-\\frac{1}{2}}$ \\\\\n$\\bullet$$L_{rw} = D^{-1} L = I - D^{-1} W$\n\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{description}\n\\item[Property 1] \\hfill \\\\\nfor every vector $f \\in R^n$, we have $f^T L_{sym} f = \\frac{1}{2} \\sum_{i,j=1}^{n} w_{i,j}(\\frac{f_i}{\\sqrt{d_{ii}}} - \\frac{f_j}{\\sqrt{d_{jj}}})^2$.\n\\item[Property 2] \\hfill \\\\\n$\\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ if and only if $\\lambda$ is an eigenvalue of $L_{sym}$ with eigenvector $w = D^{\\frac{1}{2}} u$.\n\\item[Property 3] \\hfill \\\\\n$\\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ if and only if $u$ solve the gereralized eigenvalue problem $L u = \\lambda D u$.\n\\item[Property 4] \\hfill \\\\\n0 is an eigenvalue of $L_{rw}$ with the constant vector 1 as eigenvector.\\\\\n0 is an eigenvalue of $L_{sym}$ with eigenvector $D^{\\frac{1}{2}} 1 $.\n\\item[Property 5] \\hfill \\\\\n$L_{sym}$ and $L_{rw}$ are positive semi-definite and have $n$ non-negative, real-valued eigenvalues are $0 = \\lambda_1 \\leq  \\lambda_2 \\leq  \\cdots \\leq  \\lambda_n$.\n\\end{description}\n\\end{mdframed}\n\\caption{Normalized $L$}\n\\end{figure}\n\n\\subsection{Approaches}\n\\begin{description}\n\\item[Graph cut] \\hfill \\\\\n$\\bullet$ Apporaximate solution by finding eigenvector. \\\\\n$\\bullet$ There is no guarantee on quality of the solution.\n\\item[Random walk approach] \\hfill \\\\\n$\\bullet$ Minimizing normalized cut is equivalent to minimizing the probability that a random walk on the graph.\n\\item[Perturbation theory approach] \\hfill \\\\\n$\\bullet$ The larger the gap between $\\lambda_{k}$ and $\\lambda_{k+1}$, the closer the piecewise-constant eigenvectors will be.\n\\end{description}\n\n\\newpage\n\n\\subsection{Spectral clustering}\nNormalized and unnormalized spectral clustering is similar.\n\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{enumerate}\n\\item[Input] : Similarity matrix $S \\in R^{n \\times n}$, number $k$ of clusters. \\\\\n\\item[Step 1] : Construct the unnormalized graph Laplacian $L = D - W$, using the weighted $n \\times n$ adjacency matrix $W$, where $w_{ij}$ is a function of $S_{ij}$ that gives non-negative weights. \\\\\n\\item[Step 2] : Find the $k$ eigenvectors $u_1, \\cdots, u_k$ corresponding to the smallest $k$ eigenvalues of $L$ \\\\\n\\item[Step 3] : Let $U = R^{n \\times k}$ be the matrix containing the eigenvectors $u_i$ as columns, and let $y_i \\in R^k$ be the $i$th row of $U$.\\\\\n\\item[Step 4] : Cluster the points $(y_i)_{i=1,\\cdots,n}$ in $R^k$ using $k$-means clustering into clusters $C_1,\\cdots,C_k$.\\\\\n\\item[Output] : clusters $A_1, \\cdots, A_k$ where $A_k - {v_j|y_j \\in C_i}$\n\\end{enumerate}\n\\end{mdframed}\n\\caption{Unnormalized spectral clustering}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{enumerate}\n\\item[Input] : Similarity matrix $S \\in R^{n \\times n}$, number $k$ of clusters. \\\\\n\\item[Step 1] : Construct the unnormalized graph Laplacian $L = D - W$, using the weighted $n \\times n$ adjacency matrix $W$, where $w_{ij}$ is a function of $S_{ij}$ that gives non-negative weights. 
\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{enumerate}\n\\item[Input] : Similarity matrix $S \\in R^{n \\times n}$, number $k$ of clusters. \\\\\n\\item[Step 1] : Construct the unnormalized graph Laplacian $L = D - W$, using the weighted $n \\times n$ adjacency matrix $W$, where $w_{ij}$ is a function of $S_{ij}$ that gives non-negative weights. \\\\\n\\item[Step 2] : Find the $k$ eigenvectors $u_1, \\cdots, u_k$ corresponding to the smallest $k$ eigenvalues of $L$. \\\\\n\\item[Step 3] : Let $U \\in R^{n \\times k}$ be the matrix containing the eigenvectors $u_i$ as columns, and let $y_i \\in R^k$ be the $i$th row of $U$.\\\\\n\\item[Step 4] : Cluster the points $(y_i)_{i=1,\\cdots,n}$ in $R^k$ using $k$-means clustering into clusters $C_1,\\cdots,C_k$.\\\\\n\\item[Output] : clusters $A_1, \\cdots, A_k$ where $A_i = \\{v_j \\mid y_j \\in C_i\\}$\n\\end{enumerate}\n\\end{mdframed}\n\\caption{Unnormalized spectral clustering}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{enumerate}\n\\item[Input] : Similarity matrix $S \\in R^{n \\times n}$, number $k$ of clusters. \\\\\n\\item[Step 1] : Construct the unnormalized graph Laplacian $L = D - W$, using the weighted $n \\times n$ adjacency matrix $W$, where $w_{ij}$ is a function of $S_{ij}$ that gives non-negative weights. \\\\\n\\item[Step 2] : Find the $k$ eigenvectors $u_1, \\cdots, u_k$ corresponding to the smallest $k$ eigenvalues that solve the generalized eigenvector problem $L u = \\lambda D u$. \\\\\n\\item[Step 3] : Let $U \\in R^{n \\times k}$ be the matrix containing the eigenvectors $u_i$ as columns, and let $y_i \\in R^k$ be the $i$th row of $U$.\\\\\n\\item[Step 4] : Cluster the points $(y_i)_{i=1,\\cdots,n}$ in $R^k$ using $k$-means clustering into clusters $C_1,\\cdots,C_k$.\\\\\n\\item[Output] : clusters $A_1, \\cdots, A_k$ where $A_i = \\{v_j \\mid y_j \\in C_i\\}$\n\\end{enumerate}\n\\end{mdframed}\n\\caption{Normalized spectral clustering according to Shi}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{mdframed}\n\\begin{enumerate}\n\\item[Input] : Similarity matrix $S \\in R^{n \\times n}$, number $k$ of clusters. \\\\\n\\item[Step 1] : Construct the normalized graph Laplacian $L_{sym} = I - D^{-\\frac{1}{2}} W D^{-\\frac{1}{2}}$. \\\\\n\\item[Step 2] : Find the $k$ eigenvectors $u_1, \\cdots, u_k$ corresponding to the smallest $k$ eigenvalues of $L_{sym}$. \\\\\n\\item[Step 3] : Let $U \\in R^{n \\times k}$ be the matrix containing the eigenvectors $u_i$ as columns.\\\\\n\\item[Step 4] : Let $T$ be the row-normalized $U$ matrix where $t_{ij} = \\frac{u_{ij}}{(\\sum_k u_{ik}^2)^{1/2}}$ and let $y_i \\in R^k$ be the $i$th row of $T$.\\\\\n\\item[Step 5] : Cluster the points $(y_i)_{i=1,\\cdots,n}$ in $R^k$ into clusters $C_1,\\cdots,C_k$ via $k$-means clustering or any other algorithm that attempts to minimize distortion. \\\\\n\\item[Output] : clusters $A_1, \\cdots, A_k$ where $A_i = \\{v_j \\mid y_j \\in C_i\\}$\n\\end{enumerate}\n\\end{mdframed}\n\\caption{Normalized spectral clustering according to Ng}\n\\end{figure}\n\n", "meta": {"hexsha": "7c5525cf14c21937a20cce461a7bfb3831b09041", "size": 6950, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/references/reference_research/ulrike2007.tex", "max_stars_repo_name": "wsgan001/AnomalyDetection", "max_stars_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/references/reference_research/ulrike2007.tex", "max_issues_repo_name": "wsgan001/AnomalyDetection", "max_issues_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/references/reference_research/ulrike2007.tex", "max_forks_repo_name": "wsgan001/AnomalyDetection", "max_forks_repo_head_hexsha": "397673dc6ce978361a3fc6f2fd34879f69bc962a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-03-16T21:50:52.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-16T21:50:52.000Z", "avg_line_length": 57.4380165289, "max_line_length": 287, "alphanum_fraction": 0.7017266187, "num_tokens": 2300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7853085909370422, "lm_q1q2_score": 0.5593220530246992}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n%opening\n\\title{Diffusion equations for clines at migration-selection balance}\n\\author{Erin Calfee}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{For addition to Graham's pop gen textbook..}\n\nLet $s(x)$ describe the selection coefficient over one-dimensional space, such at that allele 1 has fitness 1 and allele 2 has fitness $1+ s(x)$ at location $x$. Consider the change in frequency of allele 2, $\\Delta q(x)$, at location $x$ in one short time step $\\Delta t$. \\\\\nWe can calculate the change in allele frequency by adding the change due to migration and the change due to selection. This is an approximation supposing migration and selection happen simultaneously and independently, which is sufficiently accurate as long as selection is weak, such that we don't have to worry about the contribution of migrants who should have died from selection (i.e. $ms(x)$ is small):\n\\begin{align*}\n\\frac{\\Delta q(x)}{\\Delta t} = \\frac{\\Delta q_{m}(x)}{\\Delta t} + \\frac{\\Delta q_{s}(x)}{\\Delta t}\n\\end{align*}\n\nWe'll start with the contribution from migration. Assume $\\Delta t$ is small enough that individuals can only disperse a short distance $\\Delta x$ in this time. Let $m$ be the probability an individual migrates distance $\\Delta x$, with equal probability to the left or right, and $1-m$ be the probability an individual doesn't migrate in $\\Delta t$:\n\n\\begin{align*}\n\\frac{\\Delta q_{m}(x)}{\\Delta t} & = (1-m)q(x) + m(\\frac{1}{2}q(x + \\Delta x) + \\frac{1}{2}q(x - \\Delta x)) - q(x) \\\\\n& = q(x) - q(x) + \\frac{m}{2}\\left(q(x + \\Delta x) + q(x - \\Delta x) - 2q(x)\\right) \\\\\n& = \\frac{m}{2}\\left((q(x + \\Delta x) - q(x)) - (q(x) - q(x - \\Delta x))\\right)\n\\end{align*}\nSomehow I'm missing a $\\Delta x^{2}$ in the denominator. \\newline\nFor the contribution of selection, we use the approximation that selection does not strongly distort mean population fitness, i.e. $\\overline{w} \\approx 1$:\n\\begin{align*}\n\\frac{\\Delta q_{s}(x)}{\\Delta t} = (1+s(x))(1-q(x))q(x)\n\\end{align*}\nSumming these terms together and taking the limit as $\\Delta x$ and $\\Delta t$ go to zero, we get the classic diffusion equation for the description of a cline:\n\\begin{align}\n\\frac{dq(x)}{dt} = \\frac{m}{2}\\frac{d^{2}q(x)}{dx^2} + s(x)q(x)(1-q(x))\n\\end{align}\nIt could be that the cline is dynamic and one allele is on the path to fixation as it diffuses across space, but for the equilibrium case at migration-selection balance, $\\frac{dq(x)}{dt} = 0$. If we make assumptions about the way selection pressures are distributed across the environment (e.g. 
For the contribution of selection, we use the approximation that selection does not strongly distort mean population fitness, i.e. $\\overline{w} \\approx 1$:\n\\begin{align*}\n\\frac{\\Delta q_{s}(x)}{\\Delta t} = s(x)(1-q(x))q(x)\n\\end{align*}\nSumming these terms together and taking the limit as $\\Delta x$ and $\\Delta t$ go to zero, we get the classic diffusion equation for the description of a cline:\n\\begin{align}\n\\frac{dq(x)}{dt} = \\frac{m}{2}\\frac{d^{2}q(x)}{dx^2} + s(x)q(x)(1-q(x))\n\\end{align}\nIt could be that the cline is dynamic and one allele is on the path to fixation as it diffuses across space, but for the equilibrium case at migration-selection balance, $\\frac{dq(x)}{dt} = 0$. If we make assumptions about the way selection pressures are distributed across the environment (e.g. a discrete environmental boundary where selection is $-s$ to the left of $x=0$ and $+s$ to the right), we can estimate the strength of selection $s$ by plugging in empirically-estimated migration rates, where $m$ is the variance of individual dispersal distances.\n\\end{document}\n", "meta": {"hexsha": "90f4aec98a5aba136af63f655feeea7ca72bfcd4", "size": 2851, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/cline_equilibrium_proof.tex", "max_stars_repo_name": "hammer/popgen-notes", "max_stars_repo_head_hexsha": "1a8e46c06c5eba4cf23487e0432596c3d44f96c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/cline_equilibrium_proof.tex", "max_issues_repo_name": "hammer/popgen-notes", "max_issues_repo_head_hexsha": "1a8e46c06c5eba4cf23487e0432596c3d44f96c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/cline_equilibrium_proof.tex", "max_forks_repo_name": "hammer/popgen-notes", "max_forks_repo_head_hexsha": "1a8e46c06c5eba4cf23487e0432596c3d44f96c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.0540540541, "max_line_length": 559, "alphanum_fraction": 0.7151876535, "num_tokens": 809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5593220506641049}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Copyright (c) 2003-2018 by The University of Queensland\n% http://www.uq.edu.au\n%\n% Primary Business: Queensland, Australia\n% Licensed under the Apache License, version 2.0\n% http://www.apache.org/licenses/LICENSE-2.0\n%\n% Development until 2012 by Earth Systems Science Computational Center (ESSCC)\n% Development 2012-2013 by School of Earth Sciences\n% Development from 2014 by Centre for Geoscience Computing (GeoComp)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%!TEX root = user.tex\n\\chapter{The escript symbolic toolbox}\n\\label{CHAP:Symbolic}\n\\section{Introduction}\n\\escript builds on the existing Sympy~\\cite{Sympy} symbolic maths library to\nprovide a \\SYMBOL class with support for \\escript Data objects. \\SYMBOL objects\nact as placeholders for a single mathematical symbol, such as \\texttt{x}, or\nfor arbitrarily complex mathematical expressions such as\n\\texttt{c*x**4 + alpha*exp(x) - 2*sin(beta * x)}, where \\texttt{alpha},\n\\texttt{beta}, \\texttt{c}, and \\texttt{x} are also symbols (the symbolic\n``atoms\" of the expression).\nWith the help of the \\EVALUATOR class, these symbols and expressions can\nbe evaluated by substituting numeric values and/or \\escript \\Data objects\nfor the atoms. Escript's \\SYMBOL class has a shape (and thus a rank) as well\nas a dimensionality.\nSymbols are useful to perform mathematical simplifications, compute\nderivatives, take gradients and in the case of \\escript describe PDEs.\nAs an example of how the symbolic toolbox can be used, consider the following\ncode extract.\n\\begin{python}\nimport esys.escript as es\nu = es.Symbol('u')\np = 2*u**2 + 3*u + 1\np2 = es.sin(u)\np3 = p.diff(u)              # p3 = derivative of p with respect to u\nevalu = es.Evaluator()\nevalu.addExpression(p)\nevalu.addExpression(p2)\nevalu.addExpression(p3)\nevalu.subs(u=2*es.symconstants.pi)\nevaluated=evalu.evaluate()\nprint(\"p3 =\", p3)            # The symbols can be printed, this line will print p3.\nprint(evaluated)\n\\end{python}\nRunning this code outputs:\\\\\\texttt{p3 = 4*u + 3\\\\(1 + 6*pi + 8*pi**2, 0, 3 + 8*pi)}.\\\\\nTo get the numeric value of the expression we replace\\\\ \n\\texttt{evalu.evaluate()}\n\\\\with\\\\ \n\\texttt{evalu.evaluate(evalf=True)}.\\\\\nThis results in\n\\texttt{(98.806, 0, 28.132)}.\nThe use of these Symbols becomes more interesting in the context of escript\nwhen they are integrated with escript \\Data objects.\n\n\\pagebreak\n\\section{NonlinearPDE}\nThe \\NLPDE class in escript makes use of the escript \\SYMBOL class and allows\nfor the solution of PDEs of the form:\n\\begin{equation}\n-X_{ij,j} + Y_i = 0\n\\label{symbolic eq1}\n\\end{equation}\nwhere $X$ and $Y$ are both functions of $u_{k,j}$ and $u_k$, and $u$ is the\nunknown function implemented as a \\SYMBOL. $\\nabla\\cdotp x$ denotes divergence\nof $x$.\nThe \\NLPDE class uses the \\SYMBOL class to solve the nonlinear PDE given in\n\\eqn{symbolic eq1}.\nThe class incorporates Newton's method to find the zeroes of the left hand side\nof \\eqn{symbolic eq1} and as a consequence finding the $X$ and $Y$ which\nsatisfy \\eqn{symbolic eq1}. \nConsecutive updates are calculated until the equation is satisfied to the\ndesired level of accuracy. The solution to each update step involves solving a\nlinear PDE. The \\NLPDE class uses $X$ and $Y$ to produce the coefficients of\nthe linear PDE for the update step. 
The linear PDE class given in\n\\Sec{SEC LinearPDE} is used to solve the linear PDEs from the update step.\nThe coefficients of the linear PDE to be solved are calculated as follows: \n\\begin{equation*}\n A_{ijkl} = \\frac{\\partial \\text{X}_{ij}}{\\partial u_{k,l}}, B_{ijk} = \\frac{\\partial \\text{X}_{ij}}{\\partial u_{k}}, C_{ikl} = \\frac{\\partial \\text{Y}_{i}}{\\partial u_{k,l}}, D_{ik} = \\frac{\\partial \\text{Y}_{i}}{\\partial u_{k}}\n\\end{equation*}\n\\section{2D Plane Strain Problem}\nThe \\NLPDE class can be used to solve a 2D plane strain problem. In continuous\nmedia, the stress $\\sigma$ satisfies the equilibrium \\eqn{symbolic eq2}.\n\\begin{equation} \n-\\sigma_{ij,j}=f\n\\label{symbolic eq2}\n\\end{equation} \nHooke's law provides a relation between $\\sigma$ and $\\epsilon$ in the following form\n\\begin{equation}\n\\left[ \\begin{array}{c}\n\\sigma_{00} \\\\\n\\sigma_{11} \\\\\n\\sigma_{01} \\\\\n\\end{array} \\right] = \n\\left[ \\begin{array}{ccc}\nc_{00} & c_{01} & c_{05}\\\\\nc_{01} & c_{11} & c_{15}\\\\\nc_{05} & c_{15} & c_{55}\\\\\n\\end{array}\\right]\n\\left[ \\begin{array}{c}\n\\epsilon_{00} \\\\\n\\epsilon_{11} \\\\\n2\\epsilon_{10} \\\\\n\\end{array} \\right]\n\\label{symbolic eq3}\n\\end{equation}\nwhere $\\epsilon = symmetric(grad(u)) \\text{ or } %\n\\epsilon_{ij}=\\frac{1}{2}\\left(\\frac{\\partial u_i}{\\partial x_j} + %\n{\\frac{\\partial u_j}{\\partial x_i}}\\right)$, $u$ is the unknown function and\n$c_{ij}$ is the stiffness matrix. To fit this to the nonlinear PDE class'\nstandard form, X is set to $C \\times symmetric(grad(u))$.\nThe following \\PYTHON extract shows how an example 2D plane strain problem can\nbe set up. \n\n\\begin{python}\nfrom esys.escript import *\nfrom esys.finley import Rectangle\n#set up domain and symbols\nmydomain = Rectangle(l0=1.,l1=1.,n0=10, n1=10)\nu = Symbol('u',(2,), dim=2)\nq = Symbol('q', (2,2))\nsigma = Symbol('sigma',(2,2))\ntheta = Symbol('theta')\n# q is a rotation matrix represented by a Symbol. Values can be substituted for \n# theta.\nq[0,0]=cos(theta)\nq[0,1]=-sin(theta)\nq[1,0]=sin(theta)\nq[1,1]=cos(theta)\n# Theta gets substituted by pi/4 and masked to lie between .3 and .7 in the \n# vertical direction. Using this masking means that when q is used it will apply\n# only to the specified area of the domain. \nx = Function(mydomain).getX()\nq=q.subs(theta,(symconstants.pi/4)*whereNonNegative(x[1]-.30)*\n    whereNegative(x[1]-.70))\n# epsilon is defined in terms of u and has the rotation applied. \nepsilon0 = symmetric(grad(u))\nepsilon = matrixmult(matrixmult(q,epsilon0),q.transpose(1))\n# For the purposes of demonstration, an arbitrary c with isotropic constraints \n# is chosen here. In order to act as an isotropic material c is chosen such that \n# c00 = c11 = c01 + 2*c55\nc00 = 10\nc01 = 8; c11 = 10\nc05 = 0; c15 = 0; c55 = 1\n# sigma is defined in terms of epsilon\nsigma[0,0] = c00*epsilon[0,0]+c01*epsilon[1,1]+c05*2*epsilon[1,0]\nsigma[1,1] = c01*epsilon[0,0]+c11*epsilon[1,1]+c15*2*epsilon[1,0]\nsigma[0,1] = c05*epsilon[0,0]+c15*epsilon[1,1]+c55*2*epsilon[1,0]\nsigma[1,0] = sigma[0,1]\nsigma0=matrixmult(matrixmult(q.transpose(1),sigma),q)\n# set up boundary conditions\nx=mydomain.getX()\ngammaD=whereZero(x[1])*[1,1]\nyconstraint = FunctionOnBoundary(mydomain).getX()[1]\n# The nonlinear PDE is set up, the values are substituted in and the solution is\n# calculated, y represents an external shearing force acting on the domain. 
\n# In this case a force of magnitude 50 is acting in the x[0] direction.\np = NonlinearPDE(mydomain, u, debug=NonlinearPDE.DEBUG0)\np.setValue(X=sigma0,q=gammaD,y=[-50,0]*whereZero(yconstraint-1),r=[1,1])\nv = p.getSolution(u=[0,0])\n\\end{python}\n%\\pagebreak\nThe way in which the rotation matrix q is set up demonstrates the seamless integration of escript symbols and \\Data objects. A \\SYMBOL is used to set up the matrix, and the values for theta are then substituted in later. The example also demonstrates how the symbolic toolbox can be used as an aid to move easily from a mathematical equation to an escript data object which can be used to do numerical calculations. \nRunning the script calculates the unknown function u and assigns it to v. We can use v to calculate the stress and strain.  \n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{|c|c|c|}\n  \\hline\n   & Anisotropic & Isotropic\\\\%\\multicolumn{2}{|c|}{Isotopic} \\\\\n  \\hline\n  No rotation & \\includegraphics[scale=0.2]{0RotAniso} & \\includegraphics[scale=0.2]{0RotIso}\\\\\n  \\hline\n  60\\textdegree\\ rotation & \\includegraphics[scale=0.2]{MidRotAniso} & \\includegraphics[scale=0.2]{MidRotIso}\\\\ \n  \\hline\n  difference & \\includegraphics[scale=0.2]{diffaniso} & \\raisebox{2cm}{no difference}\\\\ \n  \\hline\n\\end{tabular}\n\\caption{Displacement vectors calculated using \\NLPDE}\n\\label{isovsaniso}\n\\end{table}\nTable \\ref{isovsaniso} shows the result of running the above script under varying values of c and theta. Both isotropic and anisotropic cases are considered. For the anisotropic case, $c_{ij}$ is chosen such that\n$c00 = c11 = c01 + 2*c55$ does not hold.\nTwo values of theta are also considered; one with a masked 60\\textdegree\nrotation in the middle and one with no rotation.\nThe last row of the table shows the difference between rotation in the middle\nand no rotation. In the isotropic case it can be seen that there is no\ndifference in the output when the rotation is applied.\nThere is, however, an obvious difference when the anisotropic case is considered.\n\n\\section{Classes}\nA number of classes are associated with escript symbols. A detailed listing of the definitions and usage is provided below. \n\\subsection{Symbol class}\n\\begin{classdesc}{Symbol}{symbol \\optional{, shape} \\optional{, Dim}}\nDefines a \\SYMBOL object. The first argument \\member{symbol} is a string given\nto represent the \\SYMBOL. The string typically matches the name of the object,\nfor instance \\texttt{u=Symbol('u')}. The next optional \\member{shape} argument\ndefines whether the \\SYMBOL is a scalar, vector, matrix, or tensor and the\nlength or size of it. \\member{dim} is used to define the dimensionality of the\nobject contained in the \\SYMBOL.\nFor a \\SYMBOL definition \\texttt{u = Symbol('u',(10,), dim=2)}, the value of u\nwill be a vector with 10 components and the domain on which u will be used is\n2-dimensional (this is relevant with operations such as \\texttt{grad} where the\nnumber of spatial dimensions is important). \n\\end{classdesc}\n\\subsubsection{Symbol class methods}\n\\begin{methoddesc}[Symbol]{atoms}{\\optional{types}}\nReturns the atoms that form the current \\SYMBOL.\nBy default, only objects that are truly atomic and cannot be divided into\nsmaller pieces are returned: symbols, numbers, and number symbols like $e$ and\n$pi$. 
It is possible to request atoms of any type via the \\member{types}\nargument, however.\n\\end{methoddesc}\n\n\\begin{methoddesc}[Symbol]{coeff}{x \\optional{, expand=true}}\nReturns the coefficient of the term \\texttt{x} or $0$ if there is no \\texttt{x}.\nIf \\texttt{x} is a scalar \\SYMBOL then \\texttt{x} is searched in all components\nof this \\SYMBOL. Otherwise the shapes must match and the coefficients are\nchecked component by component. For example:\n\\begin{python}\n     x=Symbol('x', (2,2))\n     y=3*x\n     print(y.coeff(x))\n     print(y.coeff(x[1,1]))\n\\end{python}\nwill print:\n\\begin{python}\n     [[3 3]\n      [3 3]]\n     [[0 0]\n      [0 3]]\n\\end{python} \n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{diff}{symbols}\nTakes the derivative of the \\SYMBOL object on which the method is called with\nrespect to the symbols specified in the argument \\member{symbols}.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{evalf}{}\nApplies the \\texttt{sympy.evalf} operation on all elements of the \\SYMBOL which\nare of type or inherit from \\texttt{sympy.Basic}.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{expand}{}\nApplies the \\texttt{sympy.expand} operation on all elements in this \\SYMBOL.\n\\end{methoddesc}\n%\\begin{methoddesc}[Symbol]{getDataSubstitutions}{}\n%end{methoddesc}\n\\begin{methoddesc}[Symbol]{getDim}{}\nReturns the \\SYMBOL's spatial dimensionality, or -1 if undefined.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{getRank}{}\nReturns the \\SYMBOL's rank, which is equal to the length of the shape.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{getShape}{}\nReturns the shape of this \\SYMBOL.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{grad}{\\optional{where=none}}\nReturns the gradient of this \\SYMBOL. The \\SYMBOL must have a dimensionality\ndefined in order for \\texttt{grad} to work. As with the normal escript\n\\texttt{grad} function a \\FunctionSpace can be specified using the\n\\member{where} argument. The \\FunctionSpace should be wrapped in a \\SYMBOL.\nTo do this, set up a \\SYMBOL and then use the \\texttt{subs} function to\nsubstitute in the \\FunctionSpace.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{inverse}{}\nFinds the inverse of the \\SYMBOL to which the function is applied. The inverse is only valid for square rank 2 symbols.\n\\end{methoddesc}\n%\\begin{methoddesc}[Symbol]{item}{}\n%\\end{methoddesc} \n%\\begin{methoddesc}[Symbol]{lambdarepr}{}\n%test\n%\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{simplify}{}\nApplies the \\texttt{sympy.simplify} operation on all elements in this \\SYMBOL.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{subs}{old, new}\nSubstitutes or replaces a \\SYMBOL specified in \\member{old} with whatever is in\n\\member{new} for this \\SYMBOL. 
Consider:\n  \\begin{python}\n     import esys.escript as es\n     u=es.Symbol(\"u\")\n     expr=2*u\n     print(expr.subs(u,2))\n\\end{python}\nThis prints 4.\n\\end{methoddesc}\n\\begin{methoddesc}[Symbol]{trace}{axis_offset}\nReturns the trace of the \\SYMBOL object.\n\\end{methoddesc}\n\n\\subsection{Evaluator class}\nThe \\EVALUATOR class is intended to have a group of expressions added to it; substitutions can be made across all expressions, and the expressions can then\nall be evaluated.\n\\subsubsection{Evaluator class methods}\n\\begin{classdesc}{Evaluator}{\\optional{expressions}}\nAn \\EVALUATOR object is initiated via \\texttt{Evaluator()} with an optional\nargument of expressions to store.\n\\end{classdesc}\n\\begin{methoddesc}[Evaluator]{addExpression}{expression}\nAdds an expression to this \\EVALUATOR.\n\\end{methoddesc}\n\\begin{methoddesc}[Evaluator]{evaluate}{\\optional{evalf=False}\\optional{, args}}\nEvaluates all expressions in this \\EVALUATOR and returns the result as a tuple.\n\\member{evalf} can be set to \\texttt{True} to call \\texttt{evalf} on any sympy\nsymbols which may be part of the expression.\n\\member{args} can be provided to make any substitutions before the expression\nis evaluated.\n\\end{methoddesc}\n\\begin{methoddesc}[Evaluator]{subs}{old,new}\nSubstitutes or replaces a \\SYMBOL specified in \\member{old} with whatever is\nin \\member{new} for all expressions in the \\EVALUATOR.\n\\end{methoddesc}\n\n\\subsection{NonlinearPDE class}\n\\begin{classdesc}{NonlinearPDE}{domain, u}\nDefines a general nonlinear, steady, second order PDE for an unknown function\n\\var{u} on a given domain defined through a \\Domain object \\var{domain}.\n\\var{u} is a \\SYMBOL object.\nThe general form is $-div(X) + Y = 0$. \n\\end{classdesc}\n\\iffalse\n\\begin{methoddesc}[NonlinearPDE]{concatenateRow}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{createCoefficient}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getUnknownSymbol}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getLinearSolverOptions}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getLinearPDE}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getNumSolutions}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getShapeOfCoefficient}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getCoefficient}{}\ntest\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{getSensitivity}{}\ntest\n\\end{methoddesc}\n\\fi\n\\begin{methoddesc}[NonlinearPDE]{getSolution}{subs}\nReturns the solution of the PDE. 
\\member{subs} contains the substitutions for\nall symbols used in the coefficients, including the initial value for the\nunknown \\texttt{u}.\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{setOptions}{opts}\n\\begin{verbatim}\n  Allows setting options for the nonlinear PDE.\n  The supported options are:\n    tolerance\n        error tolerance for the Newton method\n    iteration_steps_max\n        maximum number of Newton iterations\n    omega_min\n        minimum relaxation factor\n    atol\n        solution norms less than atol are assumed to be atol.\n        This can be useful if one of your solutions is expected to\n        be zero.\n    quadratic_convergence_limit\n        if the norm of the Newton-Raphson correction is reduced by\n        less than quadratic_convergence_limit between two iteration\n        steps, quadratic convergence is assumed.\n    simplified_newton_limit\n        if the norm of the defect is reduced by less than\n        simplified_newton_limit between two iteration steps and\n        quadratic convergence is detected, the iteration switches to the\n        simplified Newton-Raphson scheme.\n\\end{verbatim}\n\n\\end{methoddesc}\n\\begin{methoddesc}[NonlinearPDE]{setValue}{\n\\optional{X}\\optional{, Y}\n\\optional{, y}\n\\optional{, q}\\optional{, r}}\nAssigns new values to coefficients. By default all values are assumed to be\nzero\\footnote{In fact, it is assumed they are not present by assigning the\nvalue \\code{escript.Data()}. This can be used by the solver library to reduce\ncomputational costs.}.\nIf the new coefficient value is not a \\Data object, it is converted into a\n\\Data object in the appropriate \\FunctionSpace.\n\\end{methoddesc}\n\n\\subsection{Symconsts class}\nSymconsts provides symbolic constants for use in symbolic expressions. These constants are preferred to floating-point approximations as they can cancel exactly when mathematical expressions are evaluated, avoiding numerical imprecision. \n\\begin{verbatim}\nusage:\n  symconsts.pi this provides a Symbol object\n\nAvailable constants currently include:\n  pi and e \n\\end{verbatim}\n", "meta": {"hexsha": "02ba65ab15b994f8dec3b6a342ce90df6ec2c66c", "size": 17138, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user/symbolic.tex", "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user/symbolic.tex", "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_forks_repo_path": "doc/user/symbolic.tex", "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.3160493827, "max_line_length": 412, "alphanum_fraction": 0.7408098961, "num_tokens": 4858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES\n\n", "lm_q1_score": 0.7853085758631159, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.559322047085393}}
{"text": "\\subsubsection{Quantization}\n\n\\begin{frame}{Quantization}\n    Quantization based on \\textit{uWave} \\cite{liu2009uwave}\n    \n    \\begin{block}{Compressing}\n        \\begin{itemize}\n            \\item Recorded acceleration data were compressed to an average value for a window size of 50\n                ms and a step length of 30 ms\n        \\end{itemize}\n    \\end{block}\n    \n    \\begin{block}{Conversion}\n        \\begin{itemize}\n            \\item The compressed records are converted into 33 different levels\n            \\begin{center}\n                \\tiny\n                \\begin{tabular}{ll}\n                    \\textbf{Acceleration data ($a$) in $\\frac{dm}{s^2}$} & \\textbf{Converted value}\\\\\n                    \\hline\n                    $a > 200$ & 16\\\\\n                    $100 < a < 200$ & 11 to 15 (five levels linearly)\\\\\n                    $0 < a < 100$ & 1 to 10 (ten levels linearly)\\\\\n                    $a = 0$ & 0\\\\\n                    $-100 < a < 0$ & -1 to - 10 (ten levels linearly)\\\\\n                    $-200 < a < -100$ & -11 to - 15 (five levels linearly)\\\\\n                    $a < -200$ & -16\n                \\end{tabular}\n            \\end{center}\n        \\end{itemize}\n    \\end{block}\n\\end{frame}\n\n\\begin{frame}{Quantization}{Example - raw acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n        \t\tat={(0.5,1.03)},\n        \t\tanchor=south}}\n            \\begin{axis}[\n                xmin=1,\n                xmax=295,\n                xlabel=time,\n                ylabel=acceleration in $\\frac{dm}{s^2}$,\n                legend columns=4]\n                \\addplot[blue, ultra thick, mark=none] table[x=t, y=x] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{x-axis}\n                \\addplot[red, ultra thick, mark=none] table[x=t, y=y] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{y-axis}\n                \\addplot[green, ultra thick, mark=none] table[x=t, y=z] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{z-axis}\n            \\end{axis}\n        \\end{tikzpicture}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Quantization}{Example - compressed acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n                at={(0.5,1.03)},\n                anchor=south}}\n            \\begin{axis}[\n                xmin=1,\n                xmax=52,\n                xlabel=time,\n                ylabel=acceleration in $\\frac{dm}{s^2}$,\n                legend columns=4]\n                \\addplot[blue, ultra thick, mark=none] table[x=t, y=x] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{x-axis}\n                \\addplot[red, ultra thick, mark=none] table[x=t, y=y] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{y-axis}\n                \\addplot[green, ultra thick, mark=none] table[x=t, y=z] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{z-axis}\n            \\end{axis}\n        \\end{tikzpicture}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Quantization}{Example - compressed \\& converted acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n                at={(0.5,1.03)},\n                
\\begin{frame}{Quantization}{Example - raw acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n        \t\tat={(0.5,1.03)},\n        \t\tanchor=south}}\n            \\begin{axis}[\n                xmin=1,\n                xmax=295,\n                xlabel=time,\n                ylabel=acceleration in $\\frac{dm}{s^2}$,\n                legend columns=4]\n                \\addplot[blue, ultra thick, mark=none] table[x=t, y=x] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{x-axis}\n                \\addplot[red, ultra thick, mark=none] table[x=t, y=y] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{y-axis}\n                \\addplot[green, ultra thick, mark=none] table[x=t, y=z] {../data/fig/quantization/raw.dat};\n                \\addlegendentry{z-axis}\n            \\end{axis}\n        \\end{tikzpicture}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Quantization}{Example - compressed acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n                at={(0.5,1.03)},\n                anchor=south}}\n            \\begin{axis}[\n                xmin=1,\n                xmax=52,\n                xlabel=time,\n                ylabel=acceleration in $\\frac{dm}{s^2}$,\n                legend columns=4]\n                \\addplot[blue, ultra thick, mark=none] table[x=t, y=x] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{x-axis}\n                \\addplot[red, ultra thick, mark=none] table[x=t, y=y] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{y-axis}\n                \\addplot[green, ultra thick, mark=none] table[x=t, y=z] {../data/fig/quantization/compressed.dat};\n                \\addlegendentry{z-axis}\n            \\end{axis}\n        \\end{tikzpicture}\n    \\end{center}\n\\end{frame}\n\n\\begin{frame}{Quantization}{Example - compressed \\& converted acceleration data}\n    \\begin{center}\n        \\begin{tikzpicture}\n            \\pgfplotsset{every axis legend/.append style={\n                at={(0.5,1.03)},\n                anchor=south}}\n            \\begin{axis}[\n                xmin=1,\n                xmax=52,\n                xlabel=time,\n                ylabel=converted acceleration,\n                legend columns=4]\n                \\addplot[blue, ultra thick, mark=none] table[x=t, y=x] {../data/fig/quantization/converted.dat};\n                \\addlegendentry{x-axis}\n                \\addplot[red, ultra thick, mark=none] table[x=t, y=y] {../data/fig/quantization/converted.dat};\n                \\addlegendentry{y-axis}\n                \\addplot[green, ultra thick, mark=none] table[x=t, y=z] {../data/fig/quantization/converted.dat};\n                \\addlegendentry{z-axis}\n            \\end{axis}\n        \\end{tikzpicture}\n    \\end{center}\n\\end{frame}\n", "meta": {"hexsha": "729e8d7378195ace51f4e006b356a2a93c39bca6", "size": 4106, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "presentation/experiment/experimental_protocol/quantization.tex", "max_stars_repo_name": "GordonLesti/SlidingWindowFilter", "max_stars_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-06-22T09:37:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-14T11:43:53.000Z", "max_issues_repo_path": "presentation/experiment/experimental_protocol/quantization.tex", "max_issues_repo_name": "GordonLesti/SlidingWindowFilter", "max_issues_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentation/experiment/experimental_protocol/quantization.tex", "max_forks_repo_name": "GordonLesti/SlidingWindowFilter", "max_forks_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-01-11T23:15:57.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-11T23:15:57.000Z", "avg_line_length": 40.2549019608, "max_line_length": 114, "alphanum_fraction": 0.5148563078, "num_tokens": 1096, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5593220458672756}}
{"text": "\\documentclass{beamer}\n\n\\input{../../shared_slides.tex}\n\n% reference: https://indico.cern.ch/event/845380/attachments/1915103/3241592/Dvurechensky_lectures.pdf\n% https://arxiv.org/abs/1803.00567\n\n% Use this aswell\n% https://www.math.ucdavis.edu/~qlxia/Research/monge.pdf\n\n\\title{Optimal Transport}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\\frame{\\tableofcontents}\n\n\\section{Monge Problem}%\n\\label{sec:}\n\n\\begin{frame}\n  \\frametitle{The Monge Problem (1781)}\n\n  \\begin{minipage}{0.5\\textwidth}\n    \\begin{figure}[ht]\n      \\centering\n      \\includegraphics[width=\\textwidth]{monge-sand.png}\n      \\caption{How to best move piles of sand to fill up holes of the same total volume?\\label{fig:label} }\n    \\end{figure}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}{0.45\\textwidth}\n    \\begin{itemize}\n      \\item $X,Y$ are metric spaces\n      \\item $\\B$ is a Borel $\\sigma$-Algebra (open sets, countable union, \\dots)\n      \\item $\\mu$ is a probability measure\n    \\end{itemize}\n  Given a cost $c: X\\times Y \\to \\R_+$ find a \\textbf{transport map} $T: X \\to Y$ minimizing\n  \\begin{equation}\n    M(\\mu, \\nu) = \\inf_T \\int_X c(x, Tx) \\diff \\mu(x)\n  \\end{equation}\n  s.t.\\ the \\emph{mass} remains the same:\n  \\begin{equation}\n    \\mu(T^{-1}(B)) = \\nu(B) \\quad \\forall B \\in \\B.\n  \\end{equation}\n  \\end{minipage}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Drawbacks of Monge formulation}\n  \\begin{block}{Example}\n    $X=(x_1, \\dots, x_m)$ and $Y=(y_1, \\dots, y_n)$ and\n    $\\mu= \\frac{1}{m}(\\delta_{x_1}+ \\dots + \\delta_{x_m})$ and $\\nu= \\frac{1}{n}(\\delta_{y_1}+ \\dots + \\delta_{y_m})$\n  \\end{block}\n  \\begin{minipage}{0.5\\textwidth}\n    \\underline{If $m=n$}, then $c(\\cdot, \\cdot) = C \\in \\R^{m\\times n}$ is just a (square) matrix and $T$ is a permutation $\\sigma$\n    \\begin{equation}\n      \\begin{aligned}\n        M(\\mu, \\nu) &= \\min_T \\frac{1}{n} \\sum_{i} c(x_i, T(x_i)) \\\\\n        &= \\min_\\sigma \\frac{1}{n} \\sum_{i} c(x_i, y_{\\sigma(i)}) \\\\\n        &= \\min_\\sigma \\frac{1}{n} \\sum_{i} C_{i, \\sigma(i)} \\\\\n      \\end{aligned}\n    \\end{equation}\n  \\end{minipage}\n  \\hfill\n  \\begin{minipage}{0.4\\textwidth}\n    \\underline{If $m \\neq n$}, for example $\\mu=\\delta_{x_1}$ and $\\nu= \\frac12(\\delta_{y_1}+ \\delta_{y_2})$\n    $\\Rightarrow$ No $T$ exists.\n    \\vspace{3.5cm}\n    \\vfill\n  \\end{minipage}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Nonuniqueness of solutions}\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[scale=0.3]{book-shifting.png}\n    \\caption{\\label{fig:label} }\n  \\end{figure}\n\\end{frame}\n\n\\section{Kantorovich formulation}%\n\\label{sec:}\n\n\n\\begin{frame}\n  \\frametitle{Kantorovich's relaxation (1940s)}\n  % we allow for splitting the mass coming from one point (can be spread out)\n  Before: \\emph{Transport map}, $\\forall x$ move all amount to some $y$.\\\\\n  Now: \\emph{Transport plan}, $\\forall (x,y)$ how much to move from $x$ to $y$.\n\n  \\begin{block}{}\n    Find $\\gamma \\in \\P(X \\times Y)$\n    \\begin{equation}\n      \\begin{aligned}\n        K(\\mu, \\mu) &= \\inf_{\\gamma} \\int_{X \\times Y} c(x,y) \\diff \\gamma(x,y) \\\\\n        \\text{s.t.} & \\gamma(A \\times Y) = \\mu(A) \\\\\n                    & \\gamma(X \\times B) = \\nu(B)\n      \\end{aligned}\n    \\end{equation}\n    % Linear in \\gamma\n    for all $A \\in \\B(X)$ and $B \\in \\B(Y)$.\n    Note: 
constraints say that $\\mu$ and $\\nu$ are the marginals of $\\gamma$, i.e.\\ $\\gamma \\in \\Pi(\\mu, \\nu)$.\n  \\end{block}\n\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Kantorovich's relaxation}\n  Allows for many more settings.\n  \\begin{figure}[ht]\n    \\centering\n    \\includegraphics[scale=0.2]{Kantorovich-relaxation.png}\n    \\caption{\\label{fig:label} }\n  \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{}\n  \\begin{block}{Example (as before)}\n    $X=(x_1, \\dots, x_m)$ and $Y=(y_1, \\dots, y_n)$ and\n    $\\mu= \\frac{1}{m}(\\delta_{x_1}+ \\cdots + \\delta_{x_m})$ and $\\nu= \\frac{1}{n}(\\delta_{y_1}+ \\cdots + \\delta_{y_n})$\n  \\end{block}\n  The problem reduces to\n  \\begin{equation}\n    \\min_{\\gamma} \\sum_{i,j} C_{i,j} \\gamma_{i,j},\n  \\end{equation}\n  where again $C_{i,j}$ is the cost of moving mass from $x_i$ to $y_j$.\n  The space of probability measures on $X\\times Y$ is then just the set of matrices ${[\\gamma_{i,j}]}_{i=1,\\dots,m, j=1,\\dots,n}$\n  such that\n  \\begin{equation}\n    \\sum_{i=1}^{m} \\gamma_{i,j} = \\frac{1}{n} \\quad \\text{and} \\quad \\sum_{j=1}^{n} \\gamma_{i,j} = \\frac{1}{m}\n  \\end{equation}\n  called \\textbf{bistochastic matrices}.\n\\end{frame}\n\n\\begin{frame}\n  \\frametitle{Doubly stochastic matrices and the Birkhoff polytope}\n  If $m=n$, then\n  \\begin{equation}\n    \\sum_{i=1}^{m} \\gamma_{i,j} = \\frac{1}{n} = \\sum_{j=1}^{n} \\gamma_{i,j},\n  \\end{equation}\n  and such matrices are called \\textbf{bistochastic matrices} (scaled by $\\frac{1}{n}$).\\\\\n  The set of all such matrices is called the \\textbf{Birkhoff polytope}.\n\n  \\onslide<2->{%\n    \\begin{block}{Vertices of the Birkhoff polytope}\n      Are given by the permutation matrices.\n    \\end{block}\n  }\n  \\onslide<3->{%\n    By linearity, the minimum over the polytope is attained at a vertex:\n  \\begin{equation}\n    \\begin{aligned}\n    &\\min_{\\gamma \\in \\Pi} \\sum_{i,j} C_{i,j} \\gamma_{i,j}\\\\\n    &= \\frac{1}{n}\\min_{\\gamma \\in Perm}\\sum_{i,j} C_{i,j} \\gamma_{i,j}\n    = \\frac{1}{n} \\min_{\\sigma}\\sum_{i} C_{i,\\sigma(i)}\n    \\end{aligned}\n  \\end{equation}\n  }\n\\end{frame}\n\n
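\\begin{frame}[fragile]\n  \\frametitle{Numerical sketch: LP vs.\\ assignment}\n  A quick check of the vertex statement (illustrative sketch; the $4 \\times 4$ cost matrix is an arbitrary assumption): the Kantorovich LP over scaled bistochastic matrices and a direct assignment solver agree.\n\\begin{verbatim}\n# Sketch: Kantorovich LP over bistochastic matrices (scaled by 1/n)\n# versus the minimum over permutations.\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment, linprog\n\nn = 4\nC = np.random.default_rng(0).random((n, n))  # arbitrary costs\n\nA_eq = np.zeros((2 * n, n * n))  # row/column-sum constraints\nfor i in range(n):\n    A_eq[i, i * n:(i + 1) * n] = 1   # row i sums to 1/n\n    A_eq[n + i, i::n] = 1            # column i sums to 1/n\nb_eq = np.full(2 * n, 1.0 / n)\nlp = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))\n\nrows, cols = linear_sum_assignment(C)    # optimal permutation\nprint(lp.fun, C[rows, cols].sum() / n)   # the two values agree\n\\end{verbatim}\n\\end{frame}\n\n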
& \\begin{cases}\n        \\sum_{i}^{} \\gamma_{i,j} = q_j \\\\\n        \\sum_{j}^{} \\gamma_{i,j} = p_i \\\\\n        \\gamma_{i,j} \\ge 0\n      \\end{cases}\n    \\end{aligned}\n  \\end{equation}\n  Linear optimization problem.\n\\end{frame}\n\n% Write here about wasserstein distance?\n% \\section{Wasserstein distance}%\n% \\label{sec:}\n\n\n\\end{document}\n", "meta": {"hexsha": "8c0c3fbf75af4b4bb271bf2c82462b0c23f8550b", "size": 5844, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/Optimal-transport/Optimal_transport.tex", "max_stars_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_stars_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-10-03T14:40:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:34:36.000Z", "max_issues_repo_path": "slides/Optimal-transport/Optimal_transport.tex", "max_issues_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_issues_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-10-21T13:02:40.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-06T19:50:32.000Z", "max_forks_repo_path": "slides/Optimal-transport/Optimal_transport.tex", "max_forks_repo_name": "kiwomuc/optimization-for-DS-lecture", "max_forks_repo_head_hexsha": "43ea50ef85f73b5bbc7659e8c457218ae136bb94", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-10-05T21:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T15:38:30.000Z", "avg_line_length": 31.085106383, "max_line_length": 131, "alphanum_fraction": 0.6146475017, "num_tokens": 2187, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.712232184238947, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5593220422885637}}
{"text": "\\documentclass{article}\n\\usepackage{../allan-eason}\n\n\n\\title{Sample Document using allan-eason.sty}\n\\author{Allan Pan and Eason Shao}\n\\date{\\today}\n\n\n% abcdefghijklmnopqrstuvwxyz\n% zyxwvutsrqpomnlkjihgfedcba\n\n\\begin{document}\n    \\maketitle\n    \\newpage\n    \\tableofcontents\n    \\newpage\n\n    \\section{maths}\n        \\url{https://mathxstudio.github.io/} \\\\\n        % \\href{https://mathxstudio.github.io/}{Allan's website} \\\\\n        \\techlnk{https://mathxstudio.github.io/} \\\\\n        \\defword{https://mathxstudio.github.io/} \\\\\n        https://mathxstudio.github.io/\n        \\subsection{test subsection}\n        Let \\(n\\geq 3\\) be a positive integer. Let \\(C_1,C_2,\\ldots,C_n\\) be unit circles in the plane, with centres \\(O_1,O_2,\\ldots,O_n\\) respectively. If no line meets more than two of the circles, prove that:\n        \\[\\sum\\limits_{1\\leq i<j\\leq n}\\dfrac{1}{O_iO_j}\\leq \\dfrac{(n-1)\\pi}{4}.\\]\n\n        For brevity, let \\(d_{ij}\\) be the length of \\(O_{ij}\\) and let \\(\\angle(ijk)\\) be shorthand for \\(\\angle O_iO_jO_k\\) (or its measure in radians).\n        \\\\\n        First, we eliminate the circles completely and reduce the problem to angles using the following \\textbf{\\color{allanred} Lemma}:\n        \\\\\n        \\begin{allanlemma}\n            For any indicies \\(i,j,m\\) we have the inequalities\n            \\[\\angle(imj)\\geq \\max\\left(\\dfrac{2}{d_{mi}}, \\dfrac{2}{d_{mj}}\\right)\\quad \\mathrm{and} \\quad \\pi -\\angle(imj)\\geq \\max\\left(\\dfrac{2}{d_{mi}}, \\dfrac{2}{d_{mj}}\\right)\\]\n            \\sublemma\n                We first prove the former line. Consider the altitude from \\(O_i\\) to \\(O_mO_j\\). The altitude must have length at least \\(2\\), otherwise its perpendicular bisector passes intersects all of \\(C_i, C_m, C_j\\). Thus\n                \\[2\\leq d_{mi}\\sin\\angle(imj)\\leq \\angle(imj)\\]\n                proving the first line. The seconf line follows by considering the external angle formed by lines \\(O_mO_i\\) and \\(O_mO_j\\) instead of the internal one.\\(\\square\\)\n        \\end{allanlemma}\n        \\showlemma\n        \\begin{allanlemma}\n            another test lemma.\n            \\sublemma\n                proof of lemma.\n        \\end{allanlemma}\n        \\showlemma\n        \\\\\n        Our idea now is for any index \\(m\\) we will make an estimate on \\(\\displaystyle \\sum_{\\substack{1\\leq i\\leq n\\\\i\\neq b} }{\\dfrac{1}{d_{bi}}} \\) for each index \\(b\\). If the centers formed a convex polygon, this would be much simpler, but because we do not have this assumption some more care is needed.\n        \\begin{allanclaim}\n            Suppose \\(O_a,O_b,O_c\\) are consecutive verticies of the convex hull. Then\n            \\[\\dfrac{n-1}{n-2}\\dang(abc)\\geq \\dfrac{2}{d_{1b}}+\\dfrac{2}{d_{2b}}+\\ldots+\\dfrac{2}{d_{nb}}\\]\n            where the term \\(\\displaystyle \\dfrac{2}{d_{bb}}\\) does not appear (obviously).\n            \\subclaim\n                WLOG let's suppose \\((a,b,c)=(2,1,n)\\) and that ...\n        \\end{allanclaim}\n        another line of text...\n        \\begin{allanfact}\n            Describe your fact.\n            \\subfact\n                Describe proof.\n        \\end{allanfact}\n        another line of text...\n        \\begin{allantheorem}[Test theorem]\n            Here is a theorem. Here is a theorem. Here is a theorem. Here is a theorem. Here is a theorem. 
Here is a theorem.\n        \\end{allantheorem}\n        ...\n        \\\\\n        Now suppose there were \\(r\\) vertices in the convex hull. If we sum the first claim across all \\(b\\) on the hull, and the second across all \\(b\\) not on the hull (inside it), we get\n        $$\n        \\begin{aligned}\n            \\sum\\limits_{1\\leq i< j\\leq n} \\dfrac{2}{d_{ij}}\n            &= \\dfrac{1}{2} \\sum\\limits_{b} \\sum\\limits_{i\\neq b} \\dfrac{2}{d_{bi}}\n            \\\\\n            &\\leq \\dfrac{1}{2}\\cdot\\dfrac{n-1}{n-2} ((r-2)\\pi + (n-2)\\pi)\n            \\\\\n            &= \\dfrac{(n-1)\\pi}{4}\n        \\end{aligned}\n        $$\n        as needed (with \\((r-2)\\pi\\) being the sum of all angles in the hull).\n\n        \\begin{allanenvremark}\n            This is the sixth and last problem of IMO 2002, and is a difficult one. Allan put it here to test the latest style file.\n        \\end{allanenvremark}\n\n    \\section{code}\n        \\begin{allanenvhypothesis}\n            test hypothesis.\n        \\end{allanenvhypothesis}\n        type some justifications.\n\n        \\begin{allanpy}\n            allanpy\n            \\begin{lstlisting}\n                # observation from the air\n\n                %matplotlib inline\n                import math\n                import numpy as np\n                import matplotlib.pyplot as plt\n\n                v_car=5.611 # 20km/h on average in hk\n                v_eye=16 # Hz\n                alpha_lag=1.00\n                v_reload=alpha_lag*v_eye\n\n                pie=math.pi\n                r_a=13000\n                rho_car=0.001023\n                delta_d_car=v_car/v_reload\n                L_car=4.71769\n                C_eye=40960000\n                alpha_c1=1.00\n                C1=C_eye*alpha_c1\n                alpha_c2=0.90\n                C2=C_eye*alpha_c2\n                alpha_c3=0.80\n                C3=C_eye*alpha_c3\n\n                dataport=np.arange(0,1.01,0.01)\n                sep_point=1/4096\n\n                def alpha_clarity(x):\n                    if x>0 and x<sep_point:\n                        return float(0)\n                    elif x>=sep_point and x<=1:\n                        return $\\displaystyle\\left[ -\\left( \\frac{4096}{4095} \\right) ^2 \\right] \\cdot \\left[ \\left( x-1 \\right) ^2+1 \\right]$\n                    elif x>1:\n                        return float(1)\n                output=[0 for i in range(len(dataport))]\n                for i in range(len(dataport)):\n                    output[i]=alpha_clarity(dataport[i])\n            \\end{lstlisting}\n            \\begin{allaninpseudo}\n                \\[\\mathrm{data}_i=4\\pi \\sqrt{r_{a}^{2}-\\mathrm{datax}_{i}^{2}}\\cdot \\rho _{\\mathrm{car}}\\cdot \\Delta d_{\\mathrm{car}}\\]\n            \\end{allaninpseudo}\n        \\end{allanpy}\n        another line of text.\n        \\begin{lstlisting}\n                # observation from the air\n\n                %matplotlib inline\n                import math\n                import numpy as np\n                import matplotlib.pyplot as plt\n\n                v_car=5.611 # 20km/h on average in hk\n                v_eye=16 # Hz\n                alpha_lag=1.00\n                v_reload=alpha_lag*v_eye\n\n                pie=math.pi\n                r_a=13000\n                rho_car=0.001023\n                delta_d_car=v_car/v_reload\n                L_car=4.71769\n                C_eye=40960000\n                alpha_c1=1.00\n                C1=C_eye*alpha_c1\n                alpha_c2=0.90\n                
C2=C_eye*alpha_c2\n                alpha_c3=0.80\n                C3=C_eye*alpha_c3\n\n                dataport=np.arange(0,1.01,0.01)\n                sep_point=1/4096\n\n                def alpha_clarity(x):\n                    if x>0 and x<sep_point:\n                        return float(0)\n                    elif x>=sep_point and x<=1:\n                        return float((-(4096/4095)**2)*((x-1)**2)+1)\n                    elif x>1:\n                        return float(1)\n                output=[0 for i in range(len(dataport))]\n                for i in range(len(dataport)):\n                    output[i]=alpha_clarity(dataport[i])\n        \\end{lstlisting}\n        \\begin{codebox}\n            \\Procname{$\\proc{Insertion-Sort}(A)$}\n            \\li \\For $j \\gets 2$ \\To $\\attrib{A}{length}$\n            \\li \\Do\n            $\\id{key} \\gets A[j]$\n            \\li \\Comment Insert $A[j]$ into the sorted sequence\n            $A[1 \\twodots j-1]$.\n            \\li $i \\gets j-1$\n            \\li \\While $i > 0$ and $A[i] > \\id{key}$\n            \\li \\Do\n            $A[i+1] \\gets A[i]$\n            \\li $i \\gets i-1$\n            \\End\n            \\li $A[i+1] \\gets \\id{key}$\n            \\End\n        \\end{codebox}\n        \\begin{codebox}\n            \\Procname{$\\proc{Segments-Intersect}(p_1, p_2, p_3, p_4)$}\n            \\li $d_1 \\gets \\proc{Direction}(p_3, p_4, p_1)$\n            \\li $d_2 \\gets \\proc{Direction}(p_3, p_4, p_2)$\n            \\li $d_3 \\gets \\proc{Direction}(p_1, p_2, p_3)$\n            \\li $d_4 \\gets \\proc{Direction}(p_1, p_2, p_4)$\n            \\li \\If $((d_1 > 0 \\mbox{ and } d_2 < 0) \\mbox{ or }\n            (d_1 < 0 \\mbox{ and } d_2 > 0))$ and\n            \\Indentmore\n            \\zi $((d_3 > 0 \\mbox{ and } d_4 < 0) \\mbox{ or }\n            (d_3 < 0 \\mbox{ and } d_4 > 0))$\n            \\End\n            \\li \\Then \\Return \\const{true}\n            \\li \\ElseIf $d_1 \\isequal 0$ and $\\proc{On-Segment}(p_3, p_4, p_1)$\n            \\li \\Then \\Return \\const{true}\n            \\li \\ElseIf $d_2 \\isequal 0$ and $\\proc{On-Segment}(p_3, p_4, p_2)$\n            \\li \\Then \\Return \\const{true}\n            \\li \\ElseIf $d_3 \\isequal 0$ and $\\proc{On-Segment}(p_1, p_2, p_3)$\n            \\li \\Then \\Return \\const{true}\n            \\li \\ElseIf $d_4 \\isequal 0$ and $\\proc{On-Segment}(p_1, p_2, p_4)$\n            \\li \\Then \\Return \\const{true}\n            \\li \\ElseNoIf \\Return \\const{false}\n            \\End\n        \\end{codebox}\n    \\section{colors}\n        \\color{allanblue} allanblue \\color{black}\n        \\color{allanred} allanred \\color{black}\n        \\color{allangreen} allangreen \\color{black}\n        \\color{allanpurple} allanpurple \\color{black}\n        \\color{allancyan} allancyan \\color{black}\n        \\color{allanorange} allanorange \\color{black}\n        \\color{allanyellow} allanyellow \\color{black}\n        \\color{allandarkblue} allandarkblue \\color{black}\n    \\section{cites}\n        I love bibliography. 
\\cite{bibliography}\n    \\begin{thebibliography}{30}\n        \\bibitem{bibliography} bibliography is important.\n    \\end{thebibliography}\n\\end{document}\n\n", "meta": {"hexsha": "e7b9e5633c4a82e075c891d82efcc9843681e677", "size": 9724, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "example using allan-eason style/example using allan-eason style.tex", "max_stars_repo_name": "EasonSYC/LaTeX-Templates", "max_stars_repo_head_hexsha": "224b477345887ac6bab199b76d49f4bcd9c26917", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-07T11:33:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T21:28:14.000Z", "max_issues_repo_path": "example using allan-eason style/example using allan-eason style.tex", "max_issues_repo_name": "EasonSYC/LaTeX-Templates", "max_issues_repo_head_hexsha": "224b477345887ac6bab199b76d49f4bcd9c26917", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "example using allan-eason style/example using allan-eason style.tex", "max_forks_repo_name": "EasonSYC/LaTeX-Templates", "max_forks_repo_head_hexsha": "224b477345887ac6bab199b76d49f4bcd9c26917", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-07T11:34:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T09:24:31.000Z", "avg_line_length": 42.0952380952, "max_line_length": 310, "alphanum_fraction": 0.5278691896, "num_tokens": 2940, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5593220410704463}}
{"text": "% !TEX root=talk.tex\n\\section{Special Material}\n\n\\begin{frame}[fragile]\n\\frametitle{Mathematics}\n\n\\begin{center}\\begin{minipage}{.7\\textwidth}\\rmfamily\n\\eqref{eq:gratio} relates the golden ratio and the Fibonacci series.  Recall that the golden ratio, $\\varphi = \\frac{1}{2} (1 + \\sqrt{5})$.\n\n\\begin{equation}\\label{eq:gratio}\n\\varphi = 1 + \\sum^{\\infty}_{n=1}\n                \\frac{ (-1)^{n+1} }{ F_n F_{n+1} }\n\\end{equation}\n\\end{minipage}\n\\end{center}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[escapechar=|,basicstyle=\\ttfamily\\small,moretexcs=eqref,emph={equation}]\n\\eqref{eq:gratio} relates the golden ratio and the Fibonacci series. \nRecall that the golden ratio, |\\textcolor{red}{\\large\\ttfamily\\$}|\\phi = \\frac{1}{2} (1 + \\sqrt{5})|\\textcolor{red}{\\large\\ttfamily\\$}|.\n\n\\begin{equation}\\label{eq:gratio}\n\\phi = 1 + \\sum^{\\infty} _{n=1}\n                \\frac{ (-1)^{n+1} }{ F_n F_{n+1} }\n\\end{equation}\n\\end{lstlisting}\n\\vspace*{-1em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Chemical Equations and Molecules}\n\n\\begin{center}\\rmfamily\\small\n\\ce{Zn^2+ <=>[\\ce{+ 2OH-}][\\ce{+ 2H+}]\n$\\underset{\\text{amphoteres Hydroxid}}{\\ce{Zn(OH)2 v}}$\n<=>C[+2OH-][{+ 2H+}]\n$\\underset{\\text{Hydroxozikat}}{\\cf{[Zn(OH)4]^2-}}$\n}\n\\hfil\n\\chemfig{H-C(-[2]H)(-[6]H)-C(-[7]H)=[1]O}\n\\end{center}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\small,moretexcs={ce,chemfig,underset,text,cf},\nemph={mhchem,chemfig}]\n\\usepackage[version=3]{mhchem}   % sufficient for chemical equations\n\\usepackage{chemfig}   % for 2-D molecule drawings\n...\n\\ce{Zn^2+ <=>[\\ce{+ 2OH-}][\\ce{+ 2H+}]\n$\\underset{\\text{amphoteres Hydroxid}}{\\ce{Zn(OH)2 v}}$\n<=> C[+2OH-][{+ 2H+}] \n$\\underset{\\text{Hydroxozikat}}{\\cf{[Zn(OH)4]^2-}}$ }\n\n\\chemfig{H-C(-[2]H)(-[6]H)-C(-[7]H)=[1]O}\n\\end{lstlisting}\n\\vspace*{-1em}\n\\end{beamerboxesrounded}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Linguistics}\n\n\\begin{columns}[T]\n\\begin{column}{.54\\textwidth}\\small\\rmfamily\n\\ex\n\\begingl\n\\gla \\%*Wen liebt seine Mutter?//\n\\glb Whom loves his mother//\n\\glc `Who does his mother love?'//\n\\endgl\n\\xe\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[moretexcs={ex,xe,gla,glb,glc},basicstyle=\\ttfamily\\footnotesize\\lsstyle,\nemph={expex},\nlineskip=-2pt,commentstyle={}]\n\\usepackage{linguex,qtree}\n...\n\\ex\n\\begingl\n\\gla \\%*Wen liebt seine Mutter?//\n\\glb Whom loves his mother//\n\\glc `Who does his mother love?'//\n\\endgl\n\\xe\n\\end{lstlisting}\n\\vskip-1em\n\\end{beamerboxesrounded}\n\\end{column}\n\\begin{column}{.45\\textwidth}\\footnotesize\\rmfamily\n\\Tree [ .S [.NP [.Pron He ] ] [.VP [.V kicked ] [.NP [.Det the ] [.N ball ] ] ] ]\n\n\\medskip\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[moretexcs={ex,xe,gla,glb,glcTree},basicstyle=\\ttfamily\\footnotesize\\lsstyle,\nemph={expex,qtree},\nlineskip=-2pt,commentstyle={}]\n\\usepackage{qtree}\n...\n\\Tree [ .S [.NP [.Pron He ] ] [.VP [.V kicked ] [.NP [.Det the ] [.N ball ] ] ] ]\n\\end{lstlisting}\n\\vspace*{-1em}\n\\end{beamerboxesrounded}\n\n\\end{column}\n\\end{columns}\n\n\\end{frame}\n\n\n\\begin{frame}[fragile]\n\\frametitle{Program 
Listings}\n\n\\begin{columns}\n\\begin{column}{.5\\textwidth}\n\\begin{beamerboxesrounded}{}\n\\vspace{-1em}\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,\nemph={listings,lstlisting},moretexcs={color}]\n\\usepackage{listings,xcolor}\n...\n\\begin{lstlisting}\n[language=C,columns=fullflexible,\nbasicstyle=\\ttfamily,\nkeywordstyle=\\bfseries\\color{red},\ncommentstyle=\\sffamily\\color{green},\nstringstyle=\\rmfamily\\color{orange}]\n#include <stdio.h>\n/* \n | Prints \"hello world\"\n */\nint main(void)\n{\n    printf(\"hello, world\\n\");\n    return 0;\n}\n:\\bfseries\\color{Maroon}\\textbackslash end:{lstlisting}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{column}\n\\hfill\\begin{column}{.46\\textwidth}\n\\begin{lstlisting}[language=C,escapechar=~,lineskip=-2pt,\nbasicstyle=\\ttfamily,\ncommentstyle=\\upshape\\sffamily\\small\\color{SeaGreen4},keepspaces=true,\nkeywordstyle=\\bfseries\\color{Maroon},stringstyle=\\rmfamily\\color{Sienna2}]\n#include <stdio.h>\n\n/* \n | Prints \"hello world\"\n */\nint main(void)\n{\n    printf(\"hello, world\\n\");\n    return 0;\n}\n\\end{lstlisting}\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Network Protocols}\n\\begin{columns}\n\\begin{column}{.505\\textwidth}\n\\begin{beamerboxesrounded}{}\n\\vspace{-1em}\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,\nmoretexcs={bitheader,wordgroupr,bitbox,endwordgroupr,wordbox},\nemph={bytefield,rightwordgroup}]\n\\usepackage{bytefield}\n...\n\\begin{bytefield}{16} \n\\bitheader{0,7,8,15} \\\\ \n\\begin{rightwordgroup}{Header} \n\\bitbox{4}{Tag} & \\bitbox{12}{Mask} \\\\ \n\\bitbox{8}{Source} & \n\\bitbox{8}{Destination} \n\\end{rightwordgroup} \\\\ \n\\wordbox{3}{Data} \n\\end{bytefield} \n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{column}\n\\begin{column}{.49\\textwidth}\\rmfamily\\small\n\\hfill\\begin{bytefield}[bitwidth=.75em]{16}\n\\bitheader{0,7,8,15} \\\\ \n\\begin{rightwordgroup}{Header} \n\\bitbox{4}{Tag} & \\bitbox{12}{Mask} \\\\ \n\\bitbox{8}{Source} & \\bitbox{8}{Destination} \n\\end{rightwordgroup}\\\\ \n\\wordbox{3}{Data} \n\\end{bytefield} \n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Life Sciences}\n\n\\begin{texshade}{examples/AQPpro.MSF.txt}\n\\shadingmode{similar} \n\\threshold[80]{50} \n\\setends{1}{80..112} \n\\hideconsensus \n\\feature{top}{1}{93..93}{fill:$\\downarrow$}{first case (see text)} \n\\feature{bottom}{1}{98..98}{fill:$\\uparrow$}{second case (see text)} \n\\end{texshade}\n\\vskip-1em\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[\nmoretexcs={setends,shadingmode,threshold,hideconsensus,feature,downarrow,uparrow},\nemph={texshade},\nbasicstyle=\\ttfamily\\small,lineskip=-2pt,escapechar=|]\n\\usepackage{texshade}  % for nucleotide and peptide alignments\n...\n\\begin{texshade}{AQPpro.MSF.txt} \n\\shadingmode{similar} \n\\threshold[80]{50} \n\\setends{1}{80..112} \n\\hideconsensus \n\\feature{top}{1}{93..93}{fill:$\\downarrow$}{first case (see text)} \n\\feature{bottom}{1}{98..98}{fill:$\\uparrow$}{second case (see text)} \n\\end{texshade} \n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Circuits and SI Units}\n\\begin{columns}\n\\begin{column}{.49\\textwidth}\\rmfamily\n\\begin{circuitikz}[transform shape,scale=.9]\n\\draw (0,0) node[anchor=east] {B}  to[short, o-*] (1,0)    to[R=20<\\ohm>, *-*] (1,2)\n  to[R=10<\\ohm>, v=$v_x$] (3,2) -- 
(4,2)\n  to[ cI=$\\frac{\\si{\\siemens}}{5} v_x$, *-*] (4,0) -- (3,0)  to[R=5<\\ohm>, *-*] (3,2)\n  (3,0) -- (1,0)   (1,2) to[short, -o] (0,2) node[anchor=east]{A}\n;\\end{circuitikz}\n\\end{column}\n\\begin{column}{.49\\textwidth}\n\\begin{itemize}\\rmfamily\n\\item \\SI{3.45d4}{\\square\\volt\\cubic\\lumen\\per\\farad}\n\\item \\SIlist[per-mode=symbol]{40;85;103}{\\kilo\\metre\\per\\hour}\n\\end{itemize}\n\\end{column}\n\\end{columns}\n\n\\medskip\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,lineskip=-2pt,\nmoretexcs={draw,si,siemens,ohm,SI,SIlist,square,volt,cubic,lumen,per,farad,kilo,metre,hour},\nemph={siunitx,circuitikz},emph={[2]{draw,node,to}}\n]\n\\usepackage{siunitx}\n\\usepackage[siunitx]{circuitikz}\n...\n\\begin{circuitikz}\n\\draw (0,0) node[anchor=east] {B}\n  to[short, o-*] (1,0)    to[R=20<\\ohm>, *-*] (1,2)\n  to[R=10<\\ohm>, v=$v_x$] (3,2) -- (4,2)\n  to[ cI=$\\frac{\\si{\\siemens}}{5} v_x$, *-*] (4,0) -- (3,0)\n  to[R=5<\\ohm>, *-*] (3,2)\n  (3,0) -- (1,0)   (1,2) to[short, -o] (0,2) node[anchor=east]{A}\n;\\end{circuitikz}\n\n\\SI{3.45d4}{\\square\\volt\\cubic\\lumen\\per\\farad}\n\\SIlist[per-mode=symbol]{40;85;103}{\\kilo\\metre\\per\\hour}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Meh, What Good is That? Can't Use it Anywhere Else.}\nActually, you can.\n\n\\bigskip\n\n\\pause\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[moretexcs={PreviewEnvironment,texshade},basicstyle=\\ttfamily,emph={preview}]\n\\usepackage[active,tightpage]{preview}\n\\PreviewEnvironment{texshade}\n...\n\\begin{texshade}\n...\n\\end{texshade}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\n\\begin{itemize}\n\\item Run \\texttt{pdflatex} $\\rightarrow$ cropped \\textsmaller{PDF} containing \\emph{only} contents of \\texttt{texshade}\n\\pause\n\\item ImageMagick: \\verb|convert -depth 150 texshade.pdf texshade.png|\n\\pause\n\\item Multiple environments $\\rightarrow$ multi-page \\textsmaller{PDF} and multiple \\textsmaller{PNG}s\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Bar Codes}\n%%% CAUTION!!! This takes a LOOONG time to compile if you're using pdflatex. 
\n% I'm just going to load the already generated PDFs instead.\n\n% \\begin{pspicture}\n% \\psbarcode{MECARD:N:Malaysia Open Source Conference 2011;TEL:+60196085482;URL:http://www.mosc.my/;EMAIL:secretariat@mosc.my;ADR:Bayview Beach Resort, Baru Ferringgi Penang;NOTE:Malaysia Open Source Conference 2011 (MOSC2011);;}{eclevel=L width=0.75 height=0.75}{qrcode}\n% \\end{pspicture}\\;\n\\includegraphics[page=1]{talk-pics}\\;\n% \\begin{pspicture}\n% \\psbarcode[scalex=0.7,scaley=0.7]{9781860742712}{ includetext guardwhitespace }{ean13} \n% \\end{pspicture}\\;\n\\includegraphics[page=2]{talk-pics}\\;\n% \\begin{pspicture}\n% \\psbarcode[scalex=0.7,scaley=0.7]{978-3-86541-114}{includetext guardwhitespace}{isbn} \n% \\end{pspicture}\\;\\;\n\\includegraphics[page=3]{talk-pics}\\;\n% \\begin{pspicture}\n% \\psbarcode[scalex=0.7,scaley=0.7]{^453^178^121^239}{ columns=2 rows=10}{pdf417}\n% \\end{pspicture}%\n\\includegraphics[page=4]{talk-pics}\n\\llap{\\raisebox{0.45in}{\\includegraphics[page=5]{talk-pics}%\n% \\begin{pspicture}\n% \\psbarcode[scalex=0.6,scaley=0.6]{LE28HS9Z}{includetext}{royalmail}\n% \\end{pspicture}\n}}\n\n\\bigskip\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[moretexcs={psbarcode},escapechar=|,basicstyle=\\ttfamily\\footnotesize,\nemph={pst-barcode},emph={[2]{qrcode,ean13,isbn,pdf417,royalmail,}},\nalsoletter={1347-}\n]\n\\usepackage{auto-pst-pdf}  % Needed if running pdflatex; must use option -shell-escape\n\\usepackage{pstricks,pst-barcode}\n...\n|\\color{Maroon}\\bfseries\\textbackslash begin|{pspicture}\n\\psbarcode{MECARD:N:Malaysia Open Source Conference...}{eclevel=L}{qrcode}\n\\psbarcode{9781860742712}{includetext guardwhitespace}{ean13} \n\\psbarcode{978-3-86541-114}{includetext guardwhitespace}{isbn} \n\\psbarcode{LE28HS9Z}{includetext}{royalmail}\n\\psbarcode{^453^178^121^239}{columns=2 rows=10}{pdf417}\n|\\color{Maroon}\\bfseries\\textbackslash end|{pspicture} \n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Graph Plots}\n\n{\\centering\n\\pgfplotsset{height=.75\\textheight,width=.9\\textwidth}\n\\begin{tikzpicture}[transform shape,scale=.7]\n\\begin{loglogaxis}[xlabel=Dof]\n\\addplot table[x=dof,y=L2] {examples/datafile.dat}; \\addlegendentry{$L_2$};\n\\addplot table[x=dof,y=Lmax] {examples/datafile.dat}; \\addlegendentry{$L_\\text{max}$};\n\\end{loglogaxis} \n\\end{tikzpicture}\n\\par}\n\n\\medskip\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,\nemph={pgfplots,tikzpicture,loglogaxis},\nmoretexcs={addplot, table, addlegendentry,text},lineskip=-2pt]\n\\usepackage{pgfplots}\n...\n\\begin{tikzpicture}\n\\begin{loglogaxis}[xlabel=Dof]\n\\addplot table[x=dof,y=L2]{datafile.dat}; \\addlegendentry{$L_2$};\n\\addplot table[x=dof,y=Lmax]{datafile.dat};  \\addlegendentry{$L_\\text{max}$};\n\\end{loglogaxis} \n\\end{tikzpicture} \n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Spreadsheets}\n\\framesubtitle{(Seriously, use a proper spreadsheet application for complex stuff.)}\n\\begin{center}\\rmfamily\\STautoround*{2}\n\\begin{spreadtab}{{tabular}{l rrr}}\n@ Year ending Mar 31 & @2009 & @2008 & @2007\\\\\\midrule\n@ Revenue & 14580.2 & 11900.4 & 8290.3\\\\\n@ Cost of sales & 6740.2 & 5650.1 & 4524.2\\\\\\cmidrule{2-4}\n@\\emph{Gross profit} & \\STcopy{>}{b2-b3} & 
&\\\\\\cmidrule[\\lightrulewidth]{2-4}\n\\end{spreadtab}\n\\end{center}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\small,moretexcs={STcopy,STautoround*},\nalsoletter={23->}, emph={spreadtab},\nemph={[2]{b2-b3,>}}]\n\\STautoround*{2}\n\\begin{spreadtab}{{tabular}{l rrr}}\n@Year ending Mar 31 & @2009 & @2008 & @2007\\\\ \\hline\n@Revenue & 14580.2 & 11900.4 & 8290.3\\\\\n@Cost of sales & 6740.2 & 5650.1 & 4524.2\\\\ \\cline{2-4}\n@\\emph{Gross profit} & \\STcopy{>}{b2-b3} & &\\\\ \\cline{2-4}\n\\end{spreadtab}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Gantt Charts}\n\n\\scalebox{.85}{\\rmfamily\n\\begin{ganttchart}%\n[y unit title=0.4cm,\ny unit chart=0.5cm,\nvgrid=true,\ntitle/.style={draw=none, fill=RoyalBlue!50!black},\ntitle label font=\\sffamily\\bfseries\\color{white}, title label anchor/.style={below=-1.6ex},\ntitle left shift=.05,\ntitle right shift=-.05,\ntitle height=1,\nbar/.style={draw=none, fill=OliveGreen!75},\nbar height=.6,\nbar label font=\\normalsize\\color{black!50},\ngroup right shift=0,\ngroup top shift=.6,\ngroup height=.3, group peaks height=.2,\nbar incomplete/.style={fill=Maroon}]{1}{16} \n\\gantttitle{2010}{4} \\gantttitle{2011}{12} \\\\ \n\\ganttbar%\n[progress=100, bar progress label font=\\small\\color{OliveGreen!75}, progress label anchor/.style={right=4pt},\nbar label font=\\normalsize\\color{OliveGreen},\nname=pp]%\n  {Preliminary Project}{1}{4} \\\\\n\\ganttset{progress label text={}, link/.style={black, -to}}\n\\ganttgroup{Objective 1}{5}{16} \\\\\n\\ganttbar[progress=4, name=T1A]{Task A}{5}{10} \\\\\n\\ganttlinkedbar[progress=0]{Task B}{11}{16} \\\\\n\\ganttgroup{Objective 2}{5}{16} \\\\\n\\ganttbar[progress=15, name=T2A]{Task A}{5}{13} \\\\\n\\ganttlinkedbar[progress=0]{Task B}{14}{16}\n\\end{ganttchart}\n}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,lineskip=-2pt,\nmoretexcs={gantttitle,ganttbar,ganttlink,ganttgroup,ganttlinkedbar},\nemph={pgfgantt,tikzpicture,ganttchart}]\n\\usepackage{pgfgantt}\n...\n\\begin{ganttchart}[...settings...]{1}{16} \n\\gantttitle{2010}{4} \\gantttitle{2011}{12} \\\\ \n\\ganttbar[progress=100]{Preliminary Project}{1}{4} \\\\ \n\\ganttgroup{Objective 1}{5}{16} \\\\ \n\\ganttbar[progress=4, name=T1A]{Task A}{5}{10} \\\\ \n\\ganttlinkedbar[progress=0]{Task B}{11}{16} \\\\ \n...\n\\end{ganttchart} \n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{`Smart Diagrams'}\n\\begin{columns}[T]\n\n\\begin{column}{.49\\textwidth}\n\\resizebox{\\linewidth}{!}{\\smartdiagram[bubble diagram]{Planning Cycle,Assess,Plan,Implement,Renew}}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,lineskip=-2pt]\n\\usepackage{smartdiagram}\n\\smartdiagram[bubble diagram]{\n  Planning Cycle,Assess,Plan,\n  Implement,Renew}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{column}\n\n\\begin{column}{.49\\textwidth}\n\\resizebox{\\linewidth}{!}{\\smartdiagram[priority descriptive diagram]{Assess,Plan,Implement,Renew}}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\footnotesize,lineskip=-2pt]\n\\usepackage{smartdiagram}\n\\smartdiagram\n  [priority descriptive diagram]{\n  Assess,Plan,Implement,Renew}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{column}\n\n\\end{columns}\n\n% 
\\resizebox{\\linewidth}{\\smartdiagram[priority descriptive diagram]{Planning Cycle,Assess,Plan,Implement,Renew}}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Chess games}\n\n\\begin{columns}\n\\begin{column}{.51\\textwidth}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[basicstyle=\\ttfamily\\small,\nmoretexcs={newgame,mainline,chessboard},\nemph={skak,chessboard}]\n\\usepackage[skaknew]%\n{skak,chessboard}\n...\n\\newgame\n\\mainline{1. e4 e5 2. Nf3 Nc6 3. Bb5 a6}\n\\chessboard[smallboard]\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{column}\n\n\\begin{column}{.47\\textwidth}\n\\rmfamily\n\\newgame\\mainline{1. e4 e5 2. Nf3 Nc6 3. Bb5 a6}\n\n\\chessboard[smallboard]\n\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Crossword Puzzles}\n\\begin{columns}\n\\begin{column}{.3\\textwidth}\\rmfamily\n\\begin{Puzzle}{5}{3}\n|* |* |[1]E|X |* |.\n|[2]A|[3]S|T |* |[4]T|.\n|* |[5]P|A |R |T |.\n\\end{Puzzle}\n\\end{column}\n\\begin{column}{.65\\textwidth}\\rmfamily\n\\begin{PuzzleClues}{\n\\textbf{Across:} }\n  \\Clue{1}{EX}{unit of measure}\n  \\Clue{2}{AST}{\\(\\ast\\)}\n  \\Clue{5}{PART}{sectioning unit}\n\\end{PuzzleClues}\n\\begin{PuzzleClues}{\n\\textbf{Down:} }\n  \\Clue{1}{ETA}{\\(\\eta\\)}\n  \\Clue{3}{SP}{unit of measure}\n  \\Clue{4}{TT}{nonproportional font}\n\\end{PuzzleClues}\n\\end{column}\n\\end{columns}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{multicols}{2}\n\\begin{lstlisting}[escapechar=?,basicstyle=\\ttfamily\\footnotesize,\nmoretexcs={Clue},emph={cwpuzzle,Puzzle,PuzzleClues}]\n\\usepackage{cwpuzzle}\n...\n\\begin{Puzzle}{5}{3}\n|* |* |[1]E|X |* |.\n|[2]A|[3]S|T |* |[4]T|.\n|* |[5]P|A |R |T |.\n\\end{Puzzle}\n\\begin{PuzzleClues}{\n\\textbf{Across:} }\n  \\Clue{1}{EX}{unit of measure}\n  \\Clue{2}{AST}{\\(\\ast\\)}\n  \\Clue{5}{PART}{sectioning unit}\n\\end{PuzzleClues}\n\\begin{PuzzleClues}{\n\\textbf{Down:} }\n  \\Clue{1}{ETA}{\\(\\eta\\)}\n  \\Clue{3}{SP}{unit of measure}\n  \\Clue{4}{TT}{nonproportional font}\n\\end{PuzzleClues}\n\\end{lstlisting}\n\\end{multicols}\n\\vspace{-.5em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n\\begin{frame}[fragile]\n\\frametitle{Song Books with Guitar Tabs}\n\\vskip-.55\\textheight\n\\begin{guitar}\n\\rmfamily\\smallchords\\def\\chordsize{.5em}\n\\renewcommand\\yoff{2}\n\\renewcommand\\xoff{0}\n\\renewcommand\\namefont{\\footnotesize\\sffamily}\n\\renewcommand\\normalsiz{1.1}\n\\renewcommand\\topfretsiz{1.2pt}\n\\newcommand{\\CMaj}{\\chord{t}{n,p3,p2,n,p1,n}{C}}\n\\newcommand{\\Amin}{\\chord{t}{n,n,p2,p2,p1,n}{Am}}\n\\newcommand{\\FMaj}{\\chord{t}{n,n,p3,p2,p1,p1}{F}}\n\\newcommand{\\GMaj}{\\chord{t}{p3,p2,n,n,n,p3}{G}}\nCountry [\\CMaj]road, take me [\\GMaj]home, to the [\\Amin]place I be[\\FMaj]long.\nWest Vir[\\CMaj]ginia, mountain [\\GMaj]momma, take me [\\FMaj]home, country [\\CMaj]road.\n\\end{guitar}\n\n\\begin{beamerboxesrounded}{}\n\\vskip-1em\n\\begin{lstlisting}[moretexcs={chord,CMaj,Amin,FMaj,GMaj},\nemph={gchords,guitar},\nbasicstyle=\\ttfamily\\small]\n\\usepackage{gchords,guitar}\n...\n\\begin{guitar}\n\\newcommand{\\CMaj}{\\chord{t}{n,p3,p2,n,p1,n}{C}}\n\\newcommand{\\Amin}...\nCountry [\\CMaj]road, take me [\\GMaj]home, ...\n\\end{guitar}\n\\end{lstlisting}\n\\vspace{-1em}\n\\end{beamerboxesrounded}\n\\end{frame}\n\n", "meta": {"hexsha": "990fb9ce067fb5a04e01043f057f0572072fb68b", "size": 18126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/domain.tex", "max_stars_repo_name": "omartrinidad/seminar_kga", 
"max_stars_repo_head_hexsha": "1eab2e81ebf5c09952eca2416b41c4e1e012d44f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/domain.tex", "max_issues_repo_name": "omartrinidad/seminar_kga", "max_issues_repo_head_hexsha": "1eab2e81ebf5c09952eca2416b41c4e1e012d44f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/domain.tex", "max_forks_repo_name": "omartrinidad/seminar_kga", "max_forks_repo_head_hexsha": "1eab2e81ebf5c09952eca2416b41c4e1e012d44f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0154559505, "max_line_length": 271, "alphanum_fraction": 0.7095332671, "num_tokens": 6904, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947155710233, "lm_q2_score": 0.8539127492339909, "lm_q1q2_score": 0.559222947032065}}
{"text": "% Created 2018-03-06 mar 16:38\n\\documentclass[letterpaper,fleqn]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage{khpreamble}\n\\usepackage{tabularx}\n\\usepackage{geometry}\n\\usepackage{pgfplots}\n\\pgfplotsset{compat=1.13}\n\\geometry{top=20mm, bottom=20mm, left=24mm, right=18mm}\n\\author{Kjartan Halvorsen}\n\\date{}\n\\title{State feedback exercise}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs 24.5.1 (Org mode 8.2.10)}}\n\\begin{document}\n\n\\maketitle\n\n\\section*{Input signal design}\n\\label{sec-1}\nConsider the sampled double-integrator on state-space form\n\\begin{equation}\n\\begin{aligned}\nx(kh+h) &= \\underbrace{\\bbm 1 & h\\\\ 0 & 1\\ebm}_{\\Phi(h)} x(kh) + \\underbrace{\\bbm \\frac{h^2}{2}\\\\h\\ebm}_{\\Gamma(h)} u(kh)\\\\\ny(kh) &= \\bbm 1 & 0 \\ebm x(kh)\n\\end{aligned}\n\\label{eq:ss}\n\\end{equation}\nWe want to find the input sequence $u(0)$, $u(h)$ that takes the system from the origin of the state space $x(0) = \\bbm 0 & 0 \\ebm\\transp$ to the point $x(2h) = \\bbm a & 0 \\ebm\\transp$ in two time steps. \n\n\\subsection*{Iterate the state space model}\n\\label{sec-1-1}\n\\ldots{} in order to find an expression for $x(2h)$ in terms of the unknown input signals\n\\begin{align*}\nx(h) &= \\Phi x(0) + \\Gamma u(0) \\\\\nx(2h) &= \\Phi x(h) + \\Gamma u(h) = \\Phi\\left(\\Phi x(0) + \\Gamma u(0)\\right) + \\Gamma u(h)\\\\\n          &= \\Phi^2x(0) + \\Phi\\Gamma u(0) + \\Gamma u(h)\n          = \\Phi^2x(0) + \\underbrace{\\bbm & & & & \\ebm}_{W_c} \\bbm u(h)\\\\u(0) \\ebm  \n\\end{align*}\n\n\\subsection*{Set up system of equations}\n\\label{sec-1-2}\nWith \\(u = \\bbm u(h) & u(0) \\ebm\\transp\\) we get the linear system of equations \\(W_c u = x(2)-\\Phi^2x(0)\\). 
In the particular case we are considering here this gives\n\\[ \\bbm & & & & &\\\\ & & & & &\\\\ & & & & & \\ebm \\bbm u(h)\\\\u(0)\\ebm = \\bbm & &  \\\\ && \\\\ &&\\ebm.\\]\n\n\n\\newpage\n\n\\section*{Verify the result}\n\\label{sec-2}\n\n\\subsection*{Calculate the state sequence}\n\\label{sec-2-1}\n\\begin{align*}\nx(h) &= \\\\\nx(2h) &= \n\\end{align*}\n\n\\subsection*{Plot the results}\n\\label{sec-2-2}\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{../figures/empty-input-state-sequences}\n\\end{center}\n\n\\section*{The deadbeat control law}\n\\label{sec-3}\nWe are looking for the linear control law \\(u(kh) = l_0y_{ref}(kh) - Lx(kh) = l_0y_{ref}(kh) - l_1x_1(kh) -l_2x_2(kh)\\) that produced the result above.\n\\subsection*{Set up the equations}\n\\label{sec-3-1}\n\\begin{description}\n\\item[{\\(k=0\\):}] \\(u(0) = \\)\n\\item[{\\(k=1\\):}] \\(u(h) = \\)\n\\item[{\\(k=2\\):}] \\(u(2h) = \\)\n\\end{description}\n\n\\subsection*{Solve the system of equations}\n\\label{sec-3-2}\nWrite the system of equations\n\n\\[ \\bbm & & & & & & & &\\\\ & & & & & & & &\\\\ & & & & & & & & \\\\ & & & & & & & &  \\ebm \\bbm l_0\\\\l_1\\\\l_2\\ebm = \\bbm && &  \\\\ &&& \\\\ &&& \\\\ && &\\ebm.\\]\n\n\\newpage\n\n\\subsection*{The poles of the closed-loop system}\n\\label{sec-3-3}\n\nInserting the control law \\(u(kh) = l_0y_{ref}(kh) - Lx(kh)\\) into the state-space system \\eqref{eq:ss} gives the closed-loop system\n\\begin{equation*}\n\\begin{aligned}\n x(kh+h) &= \\Phi x(kh) + \\Gamma u(kh) = \\Phi x(kh) - \\Gamma L x(kh) + \\Gamma l_0y_{ref}(kh)\\\\\n    &= \\\\\n\\end{aligned}\n\\end{equation*}\n\nDetermine the poles of the closed-loop system.\n% Emacs 24.5.1 (Org mode 8.2.10)\n\\end{document}", "meta": {"hexsha": "a2e5e66d6d2ebc23138ed1823ed2ecdd5da9ec11", "size": 3521, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "state-space/exercises/state-feedback-exercise.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "state-space/exercises/state-feedback-exercise.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "state-space/exercises/state-feedback-exercise.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 30.8859649123, "max_line_length": 204, "alphanum_fraction": 0.6415790968, "num_tokens": 1385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494678483918, "lm_q2_score": 0.8152324871074608, "lm_q1q2_score": 0.5592082907040836}}
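A sketch of the numbers behind the blanks (added for self-checking; the exercise sheet intentionally leaves the matrices empty, and the \bbm/\ebm matrix macros come from khpreamble as above): with $\Phi(h)$ and $\Gamma(h)$ from \eqref{eq:ss},
\begin{align*}
\Phi\Gamma &= \bbm 1 & h\\ 0 & 1\ebm \bbm \frac{h^2}{2}\\ h\ebm = \bbm \frac{3h^2}{2}\\ h\ebm, &
W_c &= \bbm \Gamma & \Phi\Gamma \ebm = \bbm \frac{h^2}{2} & \frac{3h^2}{2}\\ h & h \ebm,
\end{align*}
and with $x(0)=0$ and $x(2h)=\bbm a & 0\ebm\transp$, the second row of $W_c u = x(2h)$ gives $u(h)=-u(0)$, after which the first row gives $h^2 u(0) = a$, i.e. $u(0)=a/h^2$ and $u(h)=-a/h^2$.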
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage[margin=1in]{geometry}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\n\\usepackage{hyperref}\n\n\\usepackage{mathpazo}\n\n\\newcommand{\\nup}{{(\\nu)}}\n\\newcommand{\\evm}{{(-)}}\n\\newcommand{\\evz}{{(0)}}\n\\newcommand{\\evp}{{(+)}}\n\n\\begin{document}\n\n\\begin{center}\n{\\Large Treatment of gas pressure interface state in Castro + radiation}\n\\end{center}\n\n\\subsection*{Background}\n\nThe interface states for radiation work in the primitive variable\nsystem, $q = (\\rho, u, p, (\\rho e)_g, E_r)^\\intercal$, where $p$ is\nthe gas pressure only, and $(\\rho e)_g$ is the gas energy density.\n\nWritten in the form:\n\\begin{equation}\nq_t + A(q) q_x = 0\n\\end{equation}\nThe matrix $A$ takes the form:\n\\begin{equation}\nA = \\left (\n\\begin{matrix}\nu & \\rho & 0 & 0 & 0\\\\\n0 & u & {1}/{\\rho} & 0 & \\lambda_f/{\\rho}\\\\\n0 & c_{g}^{2} \\rho & u & 0 & 0\\\\\n0 & h_{g} \\rho & 0 & u & 0\\\\\n0 & E_{r} \\left(\\lambda_f + 1\\right) & 0 & 0 & u\n\\end{matrix}\\right )\n\\end{equation}\nhere, $c_g$ is the gas sound speed and $h_g = e_g + p/\\rho$ is the \ngas specific enthalpy.\nA has eigenvalues, $\\lambda^\\evm = u - c$, $\\lambda^\\evz = u$, \n$\\lambda^\\evp = u + c$, where the total sound speed, $c$ is related\nto the gas sound speed, $c_g$, as:\n\\begin{equation}\nc^2 = c_g^2 + (\\lambda_f + 1)\\frac{\\lambda_f E_r}{\\rho}\n\\end{equation}\nIn constructing the interface states, start with a reference state,\n$q_\\mathrm{ref}$ and define the jumps carried by each wave as the\nintegral under the parabolic profile with respect to this reference\nstate:\n\\begin{equation}\n\\Delta q^\\nup \\equiv q_\\mathrm{ref} - \\mathcal{I}^\\nup(q)\n\\end{equation}\nand then the interface states are:\n\\begin{equation}\nq_\\mathrm{int} = q_\\mathrm{ref} - \\sum_\\nu (l^\\nup \\cdot \\Delta q^\\nup) r^\\nup\n\\end{equation}\nDefining:\n\\begin{align}\n\\beta^\\evm = ( l^\\evm \\cdot \\Delta q^\\evm ) &= \\frac{1}{2 c^{2}} \\left(\n    \\Delta E_r^\\evm \\lambda_f + \\Delta p^\\evm - \\Delta u^\\evm c \\rho\\right)\\\\\n&= \\frac{1}{2 c^{2}} \\left(\n    \\Delta p_\\mathrm{tot}^\\evm - \\Delta u^\\evm c \\rho\\right)\n\\end{align}\nwhere we recognized that $p_\\mathrm{tot} = p + \\lambda_f E_r$.\nSimilarly, we have:\n\\begin{align}\n\\beta^\\evp = ( l^\\evp \\cdot \\Delta q^\\evp ) &= \\frac{1}{2 c^{2}} \\left(\n    \\Delta E_r^\\evp \\lambda_f + \\Delta p^\\evp + \\Delta u^\\evp c \\rho\\right)\\\\\n           &= \\frac{1}{2 c^{2}} \\left(\n    \\Delta p_\\mathrm{tot}^\\evp + \\Delta u^\\evp c \\rho\\right)\n\\end{align}\nand for the 0-wave, we have three degenerate eigenvalues and\neigenvectors.  We label these with the subscripts $\\rho$, $\\rho e_g$,\nand $E_r$, indicating which of the corresponding components in our\nprimitive variable vector, $q$, is non-zero in the right eigenvector,\n$r^\\nup$.  Then we have:\n\\begin{align}\n\\beta^\\evz_\\rho &= \n    \\Delta\\rho^\\evz  - \\frac{\\Delta p^\\evz_\\mathrm{tot}}{c^2} \\\\\n%\n\\beta^\\evz_{{\\rho e}_g} &= \\Delta(\\rho e)^\\evz_g - \\frac{\\Delta p_\\mathrm{tot}^\\evz}{c^2} h_g \\\\\n%\n\\beta^\\evz_{E_r} &= \\Delta E_r^\\evz - \\frac{\\Delta p_\\mathrm{tot}^\\evz}{c^2} h_r\n\\end{align}\nwhere $h_r = (\\lambda_f + 1)E_r/\\rho$.  
Note, these match the derivation\ndone in the Jupyter/SymPy notebook:\\newline\n{\\footnotesize \\url{https://github.com/zingale/hydro_examples/blob/master/compressible/euler-generaleos.ipynb}}\n\nThe gas pressure update in these terms is:\n\\begin{equation}\np_\\mathrm{int} = p_\\mathrm{ref} - (\\beta^\\evp + \\beta^\\evm) c_g^2 + \\lambda_f \\beta^\\evz_{E_r}\n\\end{equation}\nThis matches the expression in Castro.  Notice that this expression is\nunusual for pressure, since it jumps across the $\\evz$-wave, whereas\nthe total pressure would not.  \n\nCastro computes the edge state of the radiation energy density as:\n\\begin{equation}\n{E_r}_\\mathrm{int} = {E_r}_\\mathrm{ref} - (\\beta^\\evp + \\beta^\\evm) h_r - \\beta^\\evz_{E_r}\n\\end{equation}\nand we see that the total pressure can be constructed as\n${p_\\mathrm{tot}}_\\mathrm{int} = p_\\mathrm{int} + \\lambda_f\n{E_r}_\\mathrm{int}$, giving:\n\\begin{equation}\n{p_\\mathrm{tot}}_\\mathrm{int} = {p_\\mathrm{tot}}_\\mathrm{ref} -\n   (\\beta^\\evp + \\beta^\\evm) c^2\n\\end{equation}\nThis looks, as it should, analogous to the pure hydrodynamics case.\n\n\\subsection*{The Interface States}\n\nWhat is the interface state? For this we need to choose a reference\nstate.  The choice of reference state should not matter if we count\nthe contributions of all waves, since then:\n\\begin{equation}\nq_\\mathrm{int} = q_\\mathrm{ref} - \\sum_\\nu [ l^\\nup \\cdot (q_\\mathrm{ref} - \\mathcal{I}^\\nup(q)) ] r^\\nup\n\\end{equation}\nand $q_\\mathrm{ref}$ cancels out.  However, when we do characteristic\ntracing, this is not guaranteed.\n\nIn Castro, the quantity $\\Delta p^\\evz_\\mathrm{tot}$ is defined as:\n\\begin{equation}\n\\Delta p^\\evz_\\mathrm{tot} =\n  p_\\mathrm{tot,ref} - \\mathcal{I}^\\evz(p_\\mathrm{tot})\n\\end{equation}\nand we adopt the common strategy of picking the reference state to be\nthe $\\mathcal{I}$ corresponding to the fastest wave moving toward the\ninterface.\n\nConsider a zone, $i$, and tracing to the right edge of the zone to\nform the interface state, $i+1/2,L$, where $L$ indicates that it is\nimmediately to the left of the $i+1/2$ interface.  
The fastest wave\nthat can potentially move toward that interface is the $\\evp$ wave,\nso we pick:\n\\begin{equation}\nq_\\mathrm{ref} = \\left (\n   \\begin{array}{c}\n     \\mathcal{I}^\\evp(\\rho) \\\\\n     \\mathcal{I}^\\evp(u) \\\\\n     \\mathcal{I}^\\evp(p_\\mathrm{tot}) \\\\\n     \\mathcal{I}^\\evp((\\rho e)_g) \\\\\n     \\mathcal{I}^\\evp(E_r)\n   \\end{array}\n\\right )\n\\end{equation}\nLooking at the gas pressure interface state, we have:\n\\begin{equation}\np_\\mathrm{int} = p_\\mathrm{ref} - \\frac{1}{2} \\frac{c_g^2}{c^2} \\left \\{\n   \\left [ \\Delta p_\\mathrm{tot}^\\evp + \\Delta u^\\evp c\\rho \\right ]\n + \\left [ \\Delta p_\\mathrm{tot}^\\evm - \\Delta u^\\evm c\\rho \\right ] \\right \\}\n + \\lambda_f \\left [ \\Delta E_r^\\evz - \\frac{h_r}{c^2} \\Delta p^\\evz_\\mathrm{tot} \\right ]\n\\end{equation}\nSubstituting in our choice of reference state, we have:\n\\begin{align}\np_\\mathrm{int} = \\mathcal{I}^\\evp(p) &- \\underbrace{\\frac{1}{2} \\frac{c_g^2}{c^2} \\left \\{\n \\left [ \\mathcal{I}^\\evp(p_\\mathrm{tot}) - \\mathcal{I}^\\evm(p_\\mathrm{tot}) \\right ] \n   - \\rho c \\left [ \\mathcal{I}^\\evp(u) - \\mathcal{I}^\\evm(u) \\right ] \\right \\}}_{\\mbox{\\footnotesize carried by the $\\evm$ wave}} \\\\\n &+ \\underbrace{\\lambda_f \\left \\{ \\left [ \\mathcal{I}^\\evp(E_r) - \\mathcal{I}^\\evz(E_r) \\right ]\n   - \\frac{h_r}{c^2} \\left [ \\mathcal{I}^\\evp(p_\\mathrm{tot}) - \\mathcal{I}^\\evz(p_\\mathrm{tot}) \\right ] \\right \\}}_{\\mbox{\\footnotesize carried by the $\\evz$ wave}}\n\\end{align}\nWe see that the expression for\n$p_\\mathrm{int}$ starts with the gas pressure.  In the \nevent that no other waves are moving toward the interface, we find:\n\\begin{equation}\np_\\mathrm{int} \\rightarrow \\mathcal{I}^\\evp(p)\n\\end{equation}\n(since in our algorithms, we set the $\\beta$'s to $0$ if those waves\nare not moving toward the interface.)\n\n\\subsection*{Alternative?}\n\nA perhaps more consistent way to handle this is to predict $p_\\mathrm{tot}$\nand $E_r$ to the interfaces.  This is consistent with our choice of\nreference state, and using $p_\\mathrm{tot}$ is more analogous to the\npure hydrodynamics case.  We then algebraically construct the\ngas-pressure interface state as:\n\\begin{equation}\np_\\mathrm{int} = {p_\\mathrm{tot}}_\\mathrm{int} - \\lambda_f {E_r}_\\mathrm{int}\n\\end{equation}\n\nNote that this extends naturally to multigroup radiation as well, simply by\nsumming up the independent $E_r$.\n\n\\subsection*{$\\gamma_e$ system}\n\nThe alternate approach that Castro takes to incorporate auxiliary\nthermodynamic information into the system is to evolve an equation for\n$\\gamma_e$.  We use the primitive variable system, $q = (\\tau, u, p,\n\\gamma_e, E_r)^\\intercal$, where $\\tau = 1/\\rho$.  The matrix $A$ now\ntakes the form:\n\\begin{equation}\nA = \\left (\n   \\begin{matrix}\n   u & - \\tau & 0 & 0 & 0\\\\\n   0 & u & \\tau & 0 & \\lambda_f \\tau\\\\\n   0 & \\frac{c_{g}^{2}}{\\tau} & u & 0 & 0\\\\\n   0 & - \\alpha & 0 & u & 0\\\\\n   0 & E_{r} \\left(\\lambda_f + 1\\right) & 0 & 0 & u\n\\end{matrix}\\right)\n\\end{equation}\nThe eigenvalues are unchanged and the eigenvectors are derived in the\nsame Jupyter notebook as with the previous system.  
\nHere, $\\alpha = (\\gamma_e - 1)(\\gamma_e - \\Gamma_1)$.\nNow we have\n\\begin{align}\n\\beta^\\evp = ( l^\\evp \\cdot \\Delta q^\\evp ) &= -\\frac{1}{2C}\n   \\left ( \\Delta u^\\evp + \\frac{\\Delta p_\\mathrm{tot}^\\evp}{C} \\right ) \\\\\n\\beta^\\evz_\\tau = ( l^\\evz_\\tau \\cdot \\Delta q^\\evz ) &= \n   \\Delta \\tau^\\evz + \\frac{\\Delta p_\\mathrm{tot}^\\evz}{C^2} \\\\\n\\beta^\\evz_{\\gamma_e} = ( l^\\evz_{\\gamma_e} \\cdot \\Delta q^\\evz ) &= \n   \\Delta \\gamma_e^\\evz + \\alpha \\frac{\\Delta p_\\mathrm{tot}^\\evz}{\\tau C^2} \\\\\n\\beta^\\evz_{E_r} = ( l^\\evz_{E_r} \\cdot \\Delta q^\\evz ) &= \n   \\Delta E_r^\\evz - \\frac{h_r}{c^2} \\Delta p_\\mathrm{tot}^\\evz \\\\\n\\beta^\\evm = ( l^\\evm \\cdot \\Delta q^\\evm ) &= \\frac{1}{2C} \n   \\left ( \\Delta u^\\evm - \\frac{\\Delta p_\\mathrm{tot}^\\evm}{C} \\right ) \n\\end{align}\nHere we use the Lagrangian sound speed, $C = \\rho c = c/\\tau$.\n\nThe interface states are then:\n\\begin{align}\n\\tau_\\mathrm{int} &= \\tau_\\mathrm{ref} - \\beta^\\evp - \\beta^\\evm - \\beta_\\tau^\\evz \\\\\nu_\\mathrm{int} &= u_\\mathrm{ref} + C (\\beta^\\evp - \\beta^\\evm) \\\\\np_\\mathrm{int} &= p_\\mathrm{ref} + \\frac{c_g^2}{\\tau^2} ( \\beta^\\evp + \\beta^\\evm) + \\beta^\\evz_{E_r} \\lambda_f \\\\\n{\\gamma_e}_\\mathrm{int} &= {\\gamma_e}_\\mathrm{ref} - \\beta_{\\gamma_e}^\\evz \n   - \\frac{\\alpha}{\\tau} (\\beta^\\evp + \\beta^\\evm) \\\\\n{E_r}_\\mathrm{int} &= {E_r}_\\mathrm{ref} + \\frac{h_r}{\\tau^2} (\\beta^\\evp + \\beta^\\evm) - \\beta^\\evz_{E_r}\n\\end{align}\nAgain, we can also construct the total pressure on the interface:\n\\begin{align}\n{p_\\mathrm{tot}}_\\mathrm{int} &= p_\\mathrm{int} + \\lambda_f {E_r}_\\mathrm{int}\\\\\n    &= p_\\mathrm{ref} + \\lambda_f {E_r}_\\mathrm{ref} + \\frac{h_r \\lambda_f + c_g^2}{\\tau^2} (\\beta^\\evp + \\beta^\\evm) \\\\\n    &= {p_\\mathrm{tot}}_\\mathrm{ref} + C^2 (\\beta^\\evp + \\beta^\\evm)\n\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "041f80ed1c1645ea885708c9181a949bb1a3a0be", "size": 9872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Docs/radiation_notes/rad.tex", "max_stars_repo_name": "yingtchen/Castro", "max_stars_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Docs/radiation_notes/rad.tex", "max_issues_repo_name": "yingtchen/Castro", "max_issues_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Docs/radiation_notes/rad.tex", "max_forks_repo_name": "yingtchen/Castro", "max_forks_repo_head_hexsha": "5e9bd2f7a699a45447b92a1c9c3064f6c2e3552c", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7933884298, "max_line_length": 166, "alphanum_fraction": 0.6616693679, "num_tokens": 3576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.5592082870098504}}
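The cancellation behind that last equality deserves one explicit step (a short check added here; it uses only the definitions given in the notes above, in the same $\evp/\evz/\evm$ notation):
\begin{align*}
p_\mathrm{int} + \lambda_f {E_r}_\mathrm{int}
  &= p_\mathrm{ref} + \lambda_f {E_r}_\mathrm{ref}
   + \frac{c_g^2 + \lambda_f h_r}{\tau^2}\,(\beta^\evp + \beta^\evm)
   + \lambda_f \beta^\evz_{E_r} - \lambda_f \beta^\evz_{E_r} \\
  &= {p_\mathrm{tot}}_\mathrm{ref} + C^2 (\beta^\evp + \beta^\evm),
\end{align*}
since $c_g^2 + \lambda_f h_r = c_g^2 + (\lambda_f+1)\lambda_f E_r/\rho = c^2$ and $C^2 = c^2/\tau^2$; the $\evz$-wave contribution drops out of the total pressure, exactly as in the first primitive-variable system.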
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 21, 2014}\n\\maketitle\n\\section*{\\#8}\nfind irreducible factors of $x^4-5x^2+6$ over $\\mathbb{Q}, \\mathbb{Q}(\\sqrt{2}),\\mathbb{R}$\n\n$\\mathbb{Q}(\\sqrt{2})$ is $(x-\\sqrt{2})(x+\\sqrt{2})(x^2-3)$\n\\section*{\\#14}\nif $m|n$ then $mk=n$ and $x^m\\equiv 1\\mod x^m-1$ and $x^{mk}\\equiv 1\\mod x^m-1$ and $x^n-1\\equiv 0\\mod x^m-1$\n\ncan pull out factors too\n\nassume that $x^m-1|x^n-1$ then $x^n-1\\equiv 0\\mod x^m-1$ and $x^n=0\\mod x^m-1$ and $n=mq+r$ and $x^n\\equiv x^r\\mod x^m-1$ and so $(x^r-1)|(x^m-1)$ but $r<m$ and so $r=0$\n\n\\section*{commutative rings}\nwhat is a ring? a set (often denoted with $R$) with two {\\bf binary} operations (indicating closure) similar to fields. ``addition'' operation is abelian group, and ``multiplication'' is associative (and commutative when the ring is commutative) and has an identity. distribution holds. every field is a ring. rings don't require inverse for ``multiplication''.\n\n\\subsection*{examples of commutative rings}\n$\\mathbb{Z},\\mathbb{Z}_n,$ if $K$ is a field then $K[x]$ is a commutative ring.\n\nif $R$ is a commutative ring, then $R[x]$ is a commutative ring.\n\n\\subsection*{definition}\nif $R$ is a comm ring, then we say that $S\\subseteq R$ is a subring if $S$ is a comm ring on the same operations as $R$ and has the same identity element.\n\n\\subsubsection*{example}\n$R\\subseteq R[x]$\n\\section*{proposition}\nif $S\\subseteq R$ then $S$ is a subring iff\n\\begin{enumerate}\n\\item\n$S$ is closed under it's operations\n\\item\nif $a\\in S$ then $-a\\in S$\n\\item\n$1_R\\in S$ (identity in $R$ is in $S$)\n\\end{enumerate}\n\n\\subsubsection*{examples}\ncomplex integers: $\\mathbb{Z}[i]=\\{a+bi:a,b\\in\\mathbb{Z}\\}\\subseteq \\mathbb{C}$\n\n\\subsection*{definition}\nwe say the $a\\in R$ is invertible if $b\\in R$ such that $ab=1_R$. another term for this is saying $a$ is a unit, but some books call the identity a unit, so just call it invertible.\n\n\\subsubsection*{example}\nif $R=\\{0\\}$ then $1_R=0$.\n\n$\\pm 1\\in \\mathbb{Z}$ are only invertible elements in $\\mathbb{Z}$\n\n\\subsection*{notation}\n$R^{\\times}=\\{x:x\\in R,x \\text{ is invertible}\\}$\n\n$R=\\mathbb{Z}_n\\to R^\\times=\\{[k]:\\gcd(k,n)=1\\}$\n\n\\subsection*{proposition}\nif $R$ is a commutative ring, then $(R^\\times,\\cdot)$ is an abelian group.\n\n\\subsection*{definition}\nan element $a$ is called a zero divisor if there exists some $ab=0$ where $b\\ne 0$.\n\nin $\\mathbb{Z}_4$ $[2][2]=[0]$.\n\n\\subsection*{definition}\ngiven $R$ a commutative ring, then we say that $R$ is an integral domain (emphasis on integrity and domain) if $1_R\\ne 0_R$ and $ab=0$ only when $a=0$ or $b=0$ (that is $0_R$ is the only zero divisor).\n\n\\subsubsection*{example}\nevery field is an integral domain. 
note that if $ab=0$ with $a\\ne 0$ then $a^{-1}\\in F$ and $b=a^{-1}ab=a^{-1}0=0$, so we have a contradiction if we assume $b\\ne 0$.\n\n$\\mathbb{R}[x]/\\langle x^2-1\\rangle$ is not a field because $x^2-1$ is reducible, but it is a commutative ring.\n\\end{document}\n\n\n", "meta": {"hexsha": "67672029a07ac23ecfcbadc2f496a3e365999e8f", "size": 3126, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-11-21.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-11-21.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-11-21.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2142857143, "max_line_length": 361, "alphanum_fraction": 0.6842610365, "num_tokens": 1144, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8152324960856177, "lm_q1q2_score": 0.559208286394898}}
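A concrete check of that last claim, added for illustration: in $\mathbb{R}[x]/\langle x^2-1\rangle$ the cosets of the two factors are nonzero but multiply to zero,
\[
[x-1][x+1]=[x^2-1]=[0],
\]
so the quotient has zero divisors and hence cannot be a field; this is the polynomial analogue of $[2][2]=[0]$ in $\mathbb{Z}_4$.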
{"text": "\\documentclass{article}\n%\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{color}\n\\usepackage{tabu}\n\\usepackage{longtable}\n\\usepackage{mathrsfs}\n\\usepackage{enumerate}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\lhead{Final 01}\n\\rhead{Jon Allen}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\\newcommand{\\degree}{\\ensuremath{^\\circ}}\n\n\\begin{document}\n\\subsubsection*{PDE A.}\n\\begin{align*}\n  \\text{PDE.}&&\\frac{\\partial u}{\\partial t}&=\\frac{\\partial^2u}{\\partial x^2}&&\\text{for}&0&<x<1,&0&<t<\\infty\\\\\n  \\text{BC.}&&u_x(0,t)&=0=u_x(1,t)&&\\text{for}&&&0&<t<\\infty\\\\\n  \\text{IC.}&&u(x,0)&=f(x)&&\\text{for}&0&<x<1\n\\end{align*}\n\nFor PDE A, apply separation of variables and, for separated solutions $u=T(t)X(x)$, analyze the associateed eigenvalue problem $X''(x)=\\lambda X(x)$ and determine the eigenfunctions (or their nonexistence) for the cases:\n\\begin{align*}\n  u&=T(t)X(x)\\\\\n  \\frac{\\partial u}{\\partial t}&=T'(t)X(x)\\\\\n  \\frac{\\partial u}{\\partial x}&=X'(x)T(t)\\\\\n  \\frac{\\partial^2 u}{\\partial x^2}&=X''(x)T(t)\\\\\n  T'(t)X(x)&=X''(x)T(t)\\\\\n  \\intertext{$t$, and $x$ are independant of eachother, therefore:}\n  \\frac{T'(t)}{T(t)}&=\\frac{X''(x)}{X(x)}=\\lambda\\\\\n  T'(t)-\\lambda T(t)&=0\\\\\n  \\omega(t)&=e^{\\int{-\\lambda\\,\\mathrm{d}t}}\\\\\n  \\omega(t)T(t)&=\\int{0\\,\\mathrm{d}t}=c_3\\\\\n  T(t)&=c_3e^{\\lambda t}\\\\\n  X''(x)-\\lambda X(x)&=0\\\\\n  X''-\\lambda X=0\\\\\n  r^2+0r-\\lambda&=0\\\\\n  r&=\\frac{{-0}\\pm\\sqrt{0^2-4(-\\lambda)}}{2}\\\\\\\n  &=\\pm\\sqrt{\\lambda}\n\\end{align*}\n\\begin{enumerate}[(a)]\n\\item\n$\\lambda=+\\mu^2>0$\n\\begin{align*}\n  r&=\\pm\\mu\\\\\n  X(x)&=c_1e^{\\mu x}+c_2e^{-\\mu x}\\\\\n  u_x&=X'(x)T(t)=\\left(c_1\\mu e^{\\mu x}-c_2\\mu e^{-\\mu x}\\right)T(t)\\\\\n  u_x(0,t)&=0=u_x(1,t)\\\\\n  \\left(c_1\\mu-c_2\\mu\\right)T(t)&=0=\\left(c_1\\mu e^\\mu-c_2\\mu e^{-\\mu}\\right)T(t)\n  \\intertext{note that if $T(t)=0$ then we are dealing with the trivial case $u(x,t)=0$ which is not what we are looking for, so we say that $T(t)\\ne0$}\n  c_1\\mu-c_2\\mu&=0=c_1\\mu e^{\\mu}-c_2\\mu e^{-\\mu}&\\mu\\ne0\\\\\n  c_1-c_2&=0&c_1=c_2\\\\\n  c_1e^{\\mu}-c_1e^{-\\mu}&=0\\\\\n  e^\\mu&=e^{-\\mu}\\\\\n  e^{2\\mu}&=1\\\\\n  \\ln(e^{2\\mu})&=\\ln(1)=2\\mu=0\\\\\n  \\mu&=0\n\\end{align*}\nBut we have defined $\\mu^2>0$ so we have no solutions.\n\\item\n$\\lambda=0$\n\\begin{align*}\n  r&=\\pm\\sqrt{0}=0\\\\\n  X(x)&=(c_1+c_2x)e^{0x}=c_1+c_2x\\\\\n  u_x(0,t)&=0=u_x(1,t)\\\\\n  c_2T(t)&=0=c_2T(t)\\\\\n  \\intertext{Again we take $T(t)\\ne0$}\n  c_2&=0\\\\\n  X(x)&=c_1&T(t)&=c_3e^{0t}=c_3\\\\\n  u(x,t)&=c_1\\cdot c_3=c_4\n\\end{align*}\nSo we have one eigenfunction, $u(x,t)=c_0$\n\\item\n$\\lambda=-\\mu^2<0$\n\\begin{align*}\n  r&=\\pm\\sqrt{-\\mu^2}=\\pm\\mu i\\\\\n  X(x)&=c_1\\cos(\\mu x)+c_2\\sin(\\mu x)\\\\\n  X'(x)&=-c_1\\mu\\sin(\\mu x)+c_2\\mu\\cos(\\mu x)\\\\\n  u_x(0,t)&=0=u_x(1,t)\\\\\n  [c_2\\mu\\cos(0)-c_1\\mu\\sin(0)]T(t)&=0=[c_2\\mu\\cos(\\mu)-c_1\\mu\\sin(\\mu)]T(t)\\\\\n  \\intertext{Taking $T(t)\\ne0$}\n  c_2\\mu&=0=c_2\\mu\\cos(\\mu)-c_1\\mu\\sin(\\mu)&\\mu>0&\\to c_2=0\\\\\n  -c_1\\mu\\sin(\\mu)&=0\\\\\n  \\intertext{Avoiding the trivial solution requires $\\sin(\\mu)=0$}\n  \\mu&=n\\pi&n&=1,2,3,\\dots\\\\\n  T(t)&=c_3e^{-\\mu^2t}=c_3e^{-n^2\\pi^2t}\\\\\n  u_n(x,t)&=c_ne^{-n^2\\pi^2t}\\cos(n\\pi 
x)\n\\end{align*}\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "5e44c8950992a806d7a89c7eee8fc4578b024cbe", "size": 3183, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-final-01.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-final-01.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-final-01.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.4795918367, "max_line_length": 220, "alphanum_fraction": 0.5934652843, "num_tokens": 1566, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081925, "lm_q2_score": 0.8152324938410784, "lm_q1q2_score": 0.5592082848552574}}
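The three cases above assemble into the full solution of PDE A by superposition; the completion below is the standard one (added here for context, it is not part of the exam problem). The coefficients come from the Fourier cosine series of $f$, so that $u(x,0)=f(x)$ reproduces the initial condition:
\begin{align*}
u(x,t)&=c_0+\sum_{n=1}^{\infty}c_ne^{-n^2\pi^2t}\cos(n\pi x),\\
c_0&=\int_0^1f(x)\,\mathrm{d}x, & c_n&=2\int_0^1f(x)\cos(n\pi x)\,\mathrm{d}x.
\end{align*}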
{"text": "\\chapter{Hypothesis testing} % Casella berger, S373\n\\section{Introduction}\nThis chapter explains hypothesis testing and p-values. A hypothesis is defined as below.\n\\begin{defn}\nA hypothesis is a statement about a population parameter.\n\\end{defn}\nFurthermore one tests two hypotheses against each other. A definiton of this is as follows.\n\\begin{defn}\nThe two complementary hypotheses in a hypothesis testing problem are called the null hypothesis and the alternative hypothesis. They are denoted by $H_0$ and $H_1$, respectivly.\n\\end{defn}\nTo do a hypothesis test one need to calculate a test statistic from data. For a one sided test one can assume that the data is outside the null model for large values of the test statistic. In the case of a two sided test the data can be outside the null hypothesis for small values of the test statistic too. This leads to calculation of p-values\n\\section{P-values} % s397\nA p-value can give the result of a hypothesis test. The following defintion and theorem can be found in \\cite{casella2002statistical}. First we are going to define a valid p-value.\n\\begin{defn}\n\\label{def:valpval}\nA p-value $p(\\boldsymbol{X})$ is a test statistic satisfying $0 \\leq p(\\boldsymbol{x}) \\leq 1$ for every sample point $\\boldsymbol{x}$. Small values of $p(\\boldsymbol{X})$ give evidence that $H_1$ is true. A p-value is valid if, for every $\\theta \\in \\Theta_0$ and every $0 \\leq \\alpha \\leq 1$,\n\\begin{equation}\nP_\\theta (p(\\boldsymbol{X}\\leq \\alpha)) \\leq \\alpha.\n\\end{equation}\n\\label{defn:validpvalue}\n\\end{defn}\nThen a p-value is as follows.\n\\begin{theorem}\n\\label{th:pvalue}\nLet $W(\\boldsymbol{X})$ be a test statistic such that large values of $W$ give evidence that $H_1$ is true. For each sample point $\\boldsymbol{x}$, define\n\\begin{equation*}\np(\\boldsymbol{x}) = \\sup_{\\theta \\in \\Theta_0} P_\\theta (W(\\boldsymbol{X}) \\geq W(\\boldsymbol{x})).\n\\end{equation*}\nThen, $p(\\boldsymbol{X})$ is a valid p-value.\n\\end{theorem}\nThe $\\Theta_0$ is the subset of the parameter space for the null model.\nP-values can also be defined by using sufficient statistics. A p-value is then defined as\n\\begin{equation}\np(\\boldsymbol{x}) = P(W(\\boldsymbol{X}) \\geq W(\\boldsymbol{x}) | S(\\boldsymbol{X}) = S(\\boldsymbol{x})),\n\\label{eq:pvalue}\n\\end{equation}\nwhere $S(\\boldsymbol{x})$ is a sufficient statistic under the null hypothesis. \nBy this definition the p-value given a sufficient statistic is valid as shown below\n\\begin{equation*}\nP_\\theta(p(\\boldsymbol{X}) \\leq \\alpha ) = \\sum_x P(p(\\boldsymbol{X}) \\leq \\alpha | S(\\boldsymbol{X}) =s) P_\\theta (S(\\boldsymbol{X})=s) \\leq \\sum_s \\alpha P_\\theta (S(\\boldsymbol{X})=s) = \\alpha.\n\\end{equation*}\nThis result is for discrete $S(\\boldsymbol{X})$. However for the continuous case the one can replace the sums with integrals.\n\\\\\n\\\\\nFor a two-sided hypothesis test the p-value is found by the equation below,\n\\begin{equation*}\np(\\boldsymbol{x}) = 2(\\min(P(W(\\boldsymbol{X}) \\geq W(\\boldsymbol{x})), P(W(\\boldsymbol{X}) \\leq W(\\boldsymbol{x})))).\n\\end{equation*}\n\\\\ % http://www.math.ntnu.no/~bo/Fordypning/2015/Marius/NHPP-Iran-paper.pdf\n\\section{Test statistics}\n\\label{sec:teststat}\nAs mentioned earlier to calculate the p-value one need a test statistic. A test statistic is often a function which takes a sample as input and gives out a meassure of an certain attribute. 
For our data we will use several types of test statistics. The test statistics are computed from the transformed times; this has also been done in \\cite{lindqvist2011monte}. The transformed times have a uniform distribution between 0 and 1. This is because if the times $t_1,...,t_n$ are from an NHPP then $\\Lambda(t_1),...,\\Lambda(t_n)$ form a homogeneous Poisson process with intensity 1. The transformed times are defined as below.\n\\begin{equation}\nV_j = \\frac{\\Lambda(t_j)}{\\Lambda(\\tau)}.\n\\label{eq:transtimes}\n\\end{equation}\nHow to find the transformed times for our NHPP is discussed in section \\ref{sec:teststatpvalue}.\nThe different test statistics used will be introduced below. Here $\\hat{V}_j$ are the estimated versions of $V_j$ obtained by using estimated parameters in $\\Lambda(\\cdot)$, and we use the conventions $\\hat{V}_0 = 0$ and $\\hat{V}_{n+1} = 1$.\n\\subsection{Greenwood statistic}\nThe Greenwood statistic is a two-sided statistic; the null hypothesis is rejected for both small and large values.\n\\begin{equation*}\nG = \\sum_{j=1}^{n+1} (\\hat{V_j} - \\hat{V}_{j-1})^2\n\\end{equation*}\n\\subsection{Laplace statistic}\nThis is also a two-sided statistic, rejected for both small and large values.\n\\begin{equation*}\nL = \\sqrt{\\frac{12}{n}} \\sum_{j=1}^{n} \\left( \\hat{V_j} - \\frac{1}{2} \\right)\n\\end{equation*}\n\\subsection{Modified Cramer von Mises statistic}\n\\begin{equation*}\nW^2 = \\sum_{j=1}^{n} \\left[ \\hat{V}_j - \\frac{(2j-1)}{2n} \\right]^2 + \\frac{1}{12n}\n\\end{equation*}\n\\subsection{Modified Kolmogorov-Smirnov statistic}\n\\begin{equation*}\nD = \\max[D^+ , D^-]\n\\end{equation*}\n\\begin{equation*}\nD^+ = \\max_{1 \\leq j \\leq n} \\left( \\frac{j}{n} - \\hat{V}_j \\right)\n\\end{equation*}\n\\begin{equation*}\nD^- = \\max_{1 \\leq j \\leq n} \\left( \\hat{V}_j - \\frac{(j-1)}{n} \\right)\n\\end{equation*}\n\\\\\n\\\\\nBoth the Modified Cramer von Mises statistic and the Modified Kolmogorov-Smirnov statistic reject the null hypothesis for large values of the statistic.\n\n\n\\section{Test statistics and p-values in NHPP}\n\\label{sec:teststatpvalue}\nTo be able to use the test statistics in section \\ref{sec:teststat} we need to find the transformed times as defined in equation \\ref{eq:transtimes}. To do this the parameters for the rate function need to be estimated. These estimates can be found by using maximum likelihood estimators as defined in chapter \\ref{chap:like}. From equation \\ref{eq:loglike} we see that finding the maximum likelihood might not be trivial because of the term $\\Lambda(\\tau)$. However, the maximum likelihood can be found numerically in the R programming language by using the built-in function \\texttt{optim}. Furthermore, the integral in equation \\ref{eq:largelambda} can also be solved numerically. Hence the transformed times and test statistics can be calculated.\n\\\\\n\\\\\nTo calculate the p-value in equation \\ref{eq:pvalue} we refer to \\cite{iranNHPP}. 
From this paper we have that the p-value can be estimated by\n\\begin{equation*}\n\\hat{p} = \\#\\{W^* \\geq W_{obs}\\}/M,\n\\end{equation*}\nwhere $W^*$ is the test statistic for a simulated sample, $W_{obs}$ is the observed test statistic from the original data and $M$ is the number of simulated samples.\n", "meta": {"hexsha": "23caca71a73ccbcfcf975827fab911cfafbd7d5e", "size": 6402, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/chapters/hypotesetesting.tex", "max_stars_repo_name": "mariufa/ProsjektOppgave", "max_stars_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Thesis/chapters/hypotesetesting.tex", "max_issues_repo_name": "mariufa/ProsjektOppgave", "max_issues_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/chapters/hypotesetesting.tex", "max_forks_repo_name": "mariufa/ProsjektOppgave", "max_forks_repo_head_hexsha": "3ef2fda314c55322de20f19ca861e4268a5e2d08", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.3265306122, "max_line_length": 730, "alphanum_fraction": 0.7411746329, "num_tokens": 1888, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494421679929, "lm_q2_score": 0.8152324983301568, "lm_q1q2_score": 0.5592082774667902}}
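As a concrete illustration of the transformed times used throughout the chapter above (an added example; the power-law process is one standard NHPP model, not necessarily the one fitted to the data in this thesis): if $\Lambda(t)=(t/\theta)^\beta$, the scale parameter $\theta$ cancels in the defining ratio,
\[
V_j=\frac{\Lambda(t_j)}{\Lambda(\tau)}=\left(\frac{t_j}{\tau}\right)^{\beta},
\]
so the estimated transformed times $\hat{V}_j=(t_j/\tau)^{\hat\beta}$ depend only on the maximum likelihood estimate $\hat\beta$ of the shape parameter, and the test statistics can then be evaluated directly from the $\hat{V}_j$.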
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[super,square]{natbib}\n\\usepackage{tabularx}\n\\usepackage{parskip}\n\\usepackage[margin=1.4in]{geometry}\n\\usepackage{csquotes}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amsthm}\n\n\\newcommand{\\comment}[1]{}\n\n\\title{\\vspace{-3cm} Ergodic Theory}\n\\author{}\n\\date{}\n\n\\comment{\nTODO:\n- add section on statisitcal learning\n}\n\n\\begin{document}\n\\maketitle\n\\vspace{-1.5cm}\n\\section{Dynamical Systems}\nA \\textbf{dynamical system}, usually written as the tuple $(T, X)$, is described by a transformation that maps a phase space onto itself, $T: X \\to X$. The set of points attained from repeated applications of the transformation from some starting point is known as its \\textbf{forward orbit} or trajectory, \n\\[\n\\mathcal{O}(x, T) = \\{ T^i(x) : i \\in \\mathbb Z^+ \\}. \n\\]\n\nIf the trajectory repeats itself, the point is considered \\textbf{periodic}, $T^n(x) = T^m(x), n \\neq m $.\n\n\\section{Measure}\nA measure is an extension of the concepts of length, area, or volume in Euclidean geometry. In a generic space, a measure is a way of assigning a value or weight to different parts of the space.\n\nLet $X$ be a set. A collection $\\mathcal{F}$ of subsets of $X$ is called a \\textbf{$\\sigma$-algebra} if:\n\\begin{enumerate}\n    \\item $X$ is in $\\mathcal{F}$\n    \\item If $A$ is in $\\mathcal{F}$, then so is the complement of $A$\n    \\item  if $A_n$ is a countable sequence of sets in $\\mathcal{F}$ then their union is in $\\mathcal{F}$, i.e. $ \\displaystyle \\bigcup_{n=1}^{\\infty} A_n \\in \\mathcal{F}$\n\\end{enumerate}\nThis definition provides us with sets that can be considered \\emph{events} in a probability space, while elements not in $\\mathcal{F}$ have no defined probability measure. $\\sigma$-algebras allow us to avoid certain undefined behaviour which arises from non-measurable sets.\n\nA function $\\mu: \\mathcal{F} \\mapsto \\mathbb{R}^+ \\cup \\{\\infty\\}$ is called a \\textbf{measure} if:\n\\begin{enumerate}\n    \\item $\\mu(\\emptyset)=0$\n    \\item if $A_n$ is a countable collection of pairwise disjoint sets in $\\mathcal{F}$ (i.e. $A_n \\cap A_m = \\emptyset$ for $n \\neq m$) then\n    \\[\n        \\mu \\Big(  \\bigcup_{n=1}^{\\infty} A_n \\Big) = \\sum_{n=1}^{\\infty}\\mu(A_n).\n    \\]\n\\end{enumerate}\n\nWe call $(X, \\mathcal{F}, \\mu)$ a measure space. If $\\mu(X) = 1$ then we consider $\\mu$ a probability measure similar to $ 0 \\leq Pr(X) \\leq 1$ and may refer to $(X, \\mathcal{F}, \\mu)$ as a probability space. \n\nWe say that $T$ is a measure-preserving transformation\nor, equivalently, $\\mu$ is said to be a $T$-invariant measure if the measure of a set is always the same as the measure of the set of points which map to it, i.e. $\\mu(T^{-1}(A))=\\mu(A)$ for any measurable set $A \\in \\mathcal{F}$.\n\n\n\\section{Ergodicity}\n\nLet $(X, \\mathcal{F}, \\mu)$ be a probability space and let \\ $T : X \\mapsto X$ be a measure-preserving transformation. We say that $T$ is an \\textbf{ergodic transformation}, or that $\\mu$ is an \\textbf{ergodic measure}, if whenever $A \\in \\mathcal{F}$ satisfies $T ^{-1}(A) = A$, then $\\mu(A) = 0$ or $1$.\n\nErgodicity can be understood as a measure theoretic version of irreducibility, similar to the inability to split up markov chains into smaller sub-chains. 
Ergodicity can be viewed as an indecomposability condition and is concerned with how a typical orbit of a dynamical system is distributed throughout the phase space, while ergodic theory studies the qualitative distributional properties of typical orbits of a dynamical system, with these properties being expressed in terms of measure theory. An ergodic dynamical system is one in which, with respect to some probability distribution, all invariant sets either have measure 0 or 1.\n\n\\section{Birkhoff\u2019s Ergodic Theorem}\n\\begin{quote}\n    \\begin{enumerate}\n        \\item\n        Let $(X, \\mathcal B, \\mu, T)$ be a measure-preserving system on a finite measure space ($\\mu(X)< \\infty$), and let $f$ be any integrable function.\n        \n        \\item\n        We consider $\\hat f$ the time-average of $f$,\n        \\[\n            \\hat f(x) = \\lim_{n\\to\\infty}\\frac{1}{n}\\sum_{i=0}^{n-1} f(T^{i}(x)).\n        \\]\n        This limit exists for almost every $x \\in X$, and $\\hat f$ is integrable, i.e. $\\hat f$ is in $L^1(\\mu)$. Moreover, $\\hat f$ is $T$-invariant, i.e., $ \\hat f \\circ T = \\hat f$.\n        \n        \\item\n        We consider $\\bar f$ the space-average of $f$,\n        \\[\n            \\bar f = \\frac{1}{\\mu(X)} \\int_{X} f d\\mu.\n        \\]\n        For a probability space, $\\mu (X)=1$.\n        \n        \\item\n        If $T$ is ergodic, then $\\hat f$ is constant $\\mu$-almost everywhere, so we have $\\hat f = \\bar f$, i.e.\n        \\begin{align*}\n            \\displaystyle \\lim_{n\\to \\infty} \\frac{1}{n}\\sum_{i=0}^{n-1}f(T^{i}(x)) &= \\frac{1}{\\mu(X)} \\int_{X} f d \\mu \n        \\end{align*}\n    \\end{enumerate}\n      \n\\end{quote}\n\nIf a mapping is ergodic, as the number of points averaged along any of its orbits increases to infinity (the time-average), this value will converge to the continuous integral (the space-average). That is, a finite average sampling of points of any orbit will be as accurate as a continuous average integral over the entire state space.\n\n\\section{Statistical Mechanics and the Law of Large Numbers}\nThe term \\emph{ergodic} was originally introduced by Ludwig Boltzmann, the founder of statistical mechanics, to describe a stronger but related property: starting from a random point in state space, orbits will typically pass through every point in state space. To understand this relation in terms of a mathematical space, consider the \\textbf{indicator function} of $A$, $\\mathbb{I_A} :X \\to \\{0,1\\}$ defined by,\n\\[\n\t\\mathbb I_A(x) = \\begin{cases}\n\t\t1 &\\text{ if $x\\in A$}\\\\ 0 &\\text{ otherwise}\n\t\\end{cases}.\n\\]\n\nThe time-average of $\\mathbb I_A$ is the fraction of time that the orbit spends in $A$ while the space-average of $\\mathbb I_A$ is the probability that a randomly picked point is in $A$. In an ergodic system, the two averages are almost always equal, meaning almost all trajectories end up covering the state space in the same way. The classical ergodic model is a version of the \\textbf{law of large numbers} --- the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.\n\n\\section{Ergodic Hierarchy}\nErgodicity is at the bottom level of the ergodic hierarchy, where the higher a system is categorized the more random its behaviour. 
This unpredictability occurs as a result of a decay of correlations between the systems\u2019 past and present states.\n\\[\n\\text{Bernoulli} \\subset \\text{Kolmogorov} \\subset \\text{Strong Mixing} \\subset \\text{Weak Mixing} \\subset \\text{Ergodic}.\n\\]\nWe can gain an intuition for a mixing transformation, $T$, through a comparison with stirring two parts $(A, B)$ of a fluid together, meaning that $T^n(A)$ is the region or $n$-orbit of the first part of fluid after $n$ time units of mixing. If $C$ denotes the entire fluid, then we consider the fluid to be thoroughly mixed if the concentration of the first part of fluid equals $\\mu(A)/\\mu(C)$ both in the whole volume of fluid and in any sub-region $V$ of the volume. That is, the fluid is thoroughly mixed at time $n$ if\n\\[\n    \\frac{\\mu(T^n A \\cap V)}{\\mu(V)}=\\frac{\\mu(A)}{\\mu(C)}\n\\]\nfor any volume $V$ (of non-zero measure). We assume that the volume of the combined liquid is one unit, i.e. $\\mu(C)=1$. Then, for all subsets $A$ and $B$ of $X$, a system is \\textbf{strong mixing} (S-M) if\n\\[\n    \\displaystyle \\lim_{n\\to\\infty} \\mu(T^n B \\cap A) = \\mu(B) \\mu(A).\n\\]\nA system is \\textbf{weak mixing} (W-M) if\n\\[\n    \\displaystyle \\lim_{n\\to \\infty} \\frac{1}{n} \n        \\sum_{k=0}^{n-1} | \\mu(T^k B \\cap A) - \\mu(B)\\mu(A)|=0.\n\\]\n\nA system is \\textbf{K-mixing} (K-M) if for any subsets $A_0,A_1,...,A_r$ of $X$, where $r$ is an arbitrary natural number, the following condition holds:\n\\[\n    \\lim_{n\\to\\infty}\n        \\sup_{B \\in \\sigma(n,r)} \n        |\\mu(B \\cap A_0) - \\mu(B)\\mu(A_0)| = 0,\n\\]\nwhere $\\sigma(n,r)$ is the $\\sigma$-algebra generated by $\\{T^k A_i : k \\geq n,\\ 1 \\leq i \\leq r\\}$.\n\nA partition of $X$ is a division of $X$ into different mutually exclusive parts or atoms that jointly cover $X$. A system is a \\textbf{Bernoulli system} if there exists a partition $\\alpha$ of $X$ so that the images of $\\alpha$ under its transformation $T$ at different instants of time are independent, i.e.\n\\[\n    \\mu(\\delta_i \\cap \\beta_j) = \\mu(\\delta_i) \\mu(\\beta_j)\n\\]\nfor all atoms $\\delta_i$ of $T^k\\alpha$ and all atoms $\\beta_j$ of $T^l\\alpha$ for all $k \\neq l$.\n\n\n\\begin{thebibliography}{}\n\\bibitem{}\nhttps://personalpages.manchester.ac.uk/staff/charles.walkden/magic/\n\n\\bibitem{}\nhttps://plato.stanford.edu/entries/ergodic-hierarchy/\n\n\\bibitem{}\nhttp://bactra.org/notebooks/ergodic-theory.html\n\n\\bibitem{}\nhttps://terrytao.files.wordpress.com/2011/01/measure-book1.pdf\n\n\\end{thebibliography}\n\\end{document}\n", "meta": {"hexsha": "076d3033d4ffb300ac5009c65ed368dc104c2704", "size": 8911, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/2020-ergodic-theory/main.tex", "max_stars_repo_name": "lukepereira/latex-ci", "max_stars_repo_head_hexsha": "4390a2da344ec00a3f651f464c79b7e097cbabe6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2020-09-04T20:32:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T21:30:32.000Z", "max_issues_repo_path": "documents/2020-ergodic-theory/main.tex", "max_issues_repo_name": "lukepereira/latex-ci", "max_issues_repo_head_hexsha": "4390a2da344ec00a3f651f464c79b7e097cbabe6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-13T01:21:22.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-13T02:09:19.000Z", "max_forks_repo_path": "documents/2020-ergodic-theory/main.tex", "max_forks_repo_name": "lukepereira/latex-ci", "max_forks_repo_head_hexsha": 
"4390a2da344ec00a3f651f464c79b7e097cbabe6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.4903225806, "max_line_length": 635, "alphanum_fraction": 0.6918415442, "num_tokens": 2668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494421679929, "lm_q2_score": 0.8152324960856175, "lm_q1q2_score": 0.5592082759271498}}
{"text": "\\section{Conclusion}\nAll in all, we successfully implemented a deep reinforcement learning architecture using convolutional neural networks, trained with Deep Q-Learning, whose input is raw pixel values, varying between \\( \\{0, 1, \\dots, 255\\} \\), of a grayscale version of the image of the current state, and output is the most suitable action from the action space. As we demonstrated in our final presentation demo, our trained agent outperforms the random agent with less than an hour of training. As demonstrated, our own group members also played the game using the keyboard agent, although none of the group members can perform more than \\(20\\) in any of the trials. In this context, the trained agent outperforms every group member with the score of \\(28\\). Therefore, the answer to our initial question is yes! The trained agent with the specified model can outperform both random and human agents.\n\nThroughout the project, we learned the basic approaches of reinforcement learning. We learned that by using CNNs and DQNs, it is possible to learn control policies directly from high-dimensional sensory input using reinforcement learning. The resultant agent is satisfactory to respond our question with ``yes''. In addition to basic theoretical approach, we also had the chance to interact with a practical issue that we were not expecting to face. While we were trying to adjust the exploration versus exploitation hyperparameter, \\(\\varepsilon \\), we faced with an issue that we later learned it is called the credit assignment problem. The credit assignment problem means that it can happen that we choose an action, and we only win or lose hundreds of actions later, leaving us with no idea of as to which of our actions led to this win or lose, thus, making it difficult to learn from our actions~\\autocite{mnih2013playing}. We learned that solely increasing the training duration does increase the performance of the agent. We learned that in order to overcome this credit assignment pitfall, using random restarts actually increases the performance of the agent, by adjusting the exploration versus exploitation problem in the direction of exploration.\n\nAs the future work of the project, performance of the agent can be increased by changing the deep neural network architecture and increasing the training duration. During our project, we have encountered with more complex approaches like double DQNs, Never Give-Up (NGU), etc. Experimentation on these approaches can be expressed as potential future work. 
Furthermore, these approaches can be generalized as \\textit{DeepMind}'s \\texttt{Agent57} to provide trained agents for \\(57\\) atari games~\\autocite{agent57}.\n", "meta": {"hexsha": "0e31411df55120c2d5746421cc5835fbe5a7bf2e", "size": 2685, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "final/conclusion.tex", "max_stars_repo_name": "atahanyorganci/dqn", "max_stars_repo_head_hexsha": "56bc0d964dd1c84ba02780bd9288370d257f126d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "final/conclusion.tex", "max_issues_repo_name": "atahanyorganci/dqn", "max_issues_repo_head_hexsha": "56bc0d964dd1c84ba02780bd9288370d257f126d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "final/conclusion.tex", "max_forks_repo_name": "atahanyorganci/dqn", "max_forks_repo_head_hexsha": "56bc0d964dd1c84ba02780bd9288370d257f126d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 383.5714285714, "max_line_length": 1260, "alphanum_fraction": 0.8074487896, "num_tokens": 538, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5590502998680713}}
{"text": "\nIn this chapter, we give some hints on the automatic computation of the number of index sets, the number of derivatives in the {\\tt Interaction} and the levelMin and LevelMax.\n\n\n\\section{Why is the  relative degree not relevant ?}\nIn this section, we give a very brief overview of the notion of relative degree.\n\n\\subsection{First order Linear complementary systems}\n\n A  Linear Complementarity System (LCS) is defined by\n\\begin{equation}\n  \\label{eq:LCS-bis}\n  \\begin{cases}\n    \\dot x = A x +B \\lambda \\\\\n     y = C x + D \\lambda\\\\\n    0 \\leq  y \\perp \\lambda \\geq 0 \\\\\n  \\end{cases}\n\\end{equation} \n \\begin{definition}[Relative degree in the SISO case]\n      Let us consider a linear system in state representation given by the quadruplet $(A,B,C,D) \\in \\RR^{n\\times n}\\times\\RR^{n \\times m}\\times \\RR^{m\\times n}\\times\\RR^{m\\times m} $:\n      \\begin{equation}\n        \\label{eq:LS}\n        \\begin{cases}\n          \\dot x = A x +B \\lambda \\\\\n          y = C x + D \\lambda\n        \\end{cases}\n      \\end{equation}\n      \\begin{itemize}\n      \\item In the Single Input/ Single Output (SISO) case ($m=1$), the relative\n        degree is defined by the first non zero Markov parameters :\n        \\begin{equation}\n          \\label{eq:Markov-Parameter}\n          D, CB, CAB, CA^2B, \\ldots, CA^{r-1}B, \\ldots\n        \\end{equation}\n      \\item In the multiple input/multiple output (MIMO) case ($m>1$), an \\textit{uniform} relative degree is defined as follows.\n        If $D$ is non singular, the relative degree is equal to $0$. Otherwise, it is assumed to be the first positive integer $r$ such that \n        \\begin{equation}\n          \\label{eq:mimo-r}\n          CA^{i}B =0, \\quad i=0\\ldots q-2\n        \\end{equation}\n        while\n        \\begin{equation}\n          \\label{eq:mimo-r2}\n          CA^{r-1}B \\text{ is non singular}.\n        \\end{equation}\n      \\end{itemize}\n    \\end{definition}\n    The Markov parameters arise naturally when we derive with respect\n    to time the output $y$,\n    \\begin{eqnarray*}\n      \\label{eq:y-derive}\n      y &=& C x + D \\lambda \\\\\n      \\dot y &=& CA x + CB \\lambda, \\text{ if } D= 0  \\\\\n      \\ddot y &=& CA^2 x + CAB \\lambda, \\text{ if }  D=0, CB=0\\\\\n      &\\ldots& \\\\\n      y^{(r)} &=& CA^{r} x + CA^{r-1}B \\lambda, \\text{ if } D=0, CB=0, CA^{r-2}B=0, r=1\\ldots r-2 \\\\\n      &\\ldots&\n    \\end{eqnarray*}\n    and the first non zero Markov parameter allows us to define the\n    output $y$ directly in terms of the input $\\lambda$.\n\nIn continuous time, the nature of solutions depends strongly on the relative degree. When we want to perform the time--integration of such systems, we need also to reduce the relative degree or to known it to correctly operate.\n\nWe can observe that the relative degree $0$ is well defined only by the relation ($D$ nonsingular) and by the nonsmooth law. Indeed, let us imagine that the nonsmooth law is defined by $0\\leq\\dot y \\perp \\lambda \\geq 0 $. We can easily see that the relative degree is reduced.\n\nIn the MIMO, the computation of non uniform relative degree is hard task. 
This is also the case for nonlinear systems.\n\n\n\n\\subsection{Second order Lagrangian systems}\n\n\nLet us consider a second order linear and time-invariant Lagrangian dynamical system (see \\S~\\ref{Sec:LagrangianLineatTIDS})\n\\begin{equation}\n  \\label{eq:rd1}\n  \\begin{cases}\n    M \\dot v + C v + K q = F_{Ext}(t) + p \\\\\n    \\dot q = v\n  \\end{cases}\n\\end{equation}\ntogether with a Lagrangian linear relation\n\\begin{equation}\n  y= Cq + e + D \\lambda + Fz,\n  \\label{eq:rd2}\n\\end{equation}\n\\begin{equation}\n  p = C^t \\lambda\n\\label{eq:rd3}  \n\\end{equation}\nand a simple nonsmooth law,\n\\begin{equation}\n  0\\leq y \\perp \\lambda \\geq 0\n\\label{eq:rd4}  \n\\end{equation}\n\nIf $D>0$, the relative degree is uniformly zero and the system can be solved without differentiating the output~(\\ref{eq:rd2}). Indeed, we know that the solution of the LCP\n\\begin{equation}\n  0\\leq Cq + e + D \\lambda + Fz \\perp \\lambda \\geq 0\n\\label{eq:rd5}  \n\\end{equation}\nis unique and Lipschitz with respect to $q$. It can be denoted as $\\lambda(q) = \\mbox{SOL}(D,Cq + e +Fz)$. Therefore, the differential equation~(\\ref{eq:rd1}) reduces to a standard ODE with a Lipschitz RHS\n \\begin{equation}\n  \\label{eq:rd6}\n  \\begin{cases}\n    M \\dot v + C v + K q = F_{Ext}(t) + C^t \\lambda(q)  \\\\\n    \\dot q = v\n  \\end{cases}\n\\end{equation}\n\nIn the case that we deal with unilateral contact, we usually have $D=0$ and the relative degree of the system is $2$. In this case, the output has to be differentiated as\n\\begin{equation}\n  \\label{eq:rd7}\n   \\dot y= C \\dot q,\n\\end{equation}\nand an impact law has to be added, for instance Newton's impact law\n\\begin{equation}\n  \\label{eq:rd8}\n  \\text{ if } y=0, \\text{ then } \\dot y^+= -e \\dot y^-\n\\end{equation}\nIn the same vein, the equations of motion (\\ref{eq:rd1}) are not sufficient since the velocity may encounter jumps. The dynamics is usually replaced by a measure differential equation of the form\n\\begin{equation}\n  \\label{eq:rd10}\n  \\begin{cases}\n    M dv + C v^+(t) dt + K q(t) dt = F_{Ext}(t)dt + di \\\\\n    \\dot q = v\n  \\end{cases}\n\\end{equation}\nwhere $di$ is the measure that can be related to $p$ thanks to\n\\begin{equation}\n  \\label{eq:rd11}\n  di = p dt + \\sigma \\delta _{t^*}\n\\end{equation}\nif only one jump is expected at ${t^*}$.\n\n\n\\subsection{Conclusion for the implementation}\nFrom the continuous time mathematical analysis, the relative degree is very important to know whether we have to compute the derivatives of the output $y^{(n)}$ and to consider various levels for the input $p, \\sigma, \\ldots$\n\nHowever, in numerical practice, the time--discretization makes an assumption on the relative degree and treats the nonsmooth law at different levels. 
The resulting time discretized system posseses more or less variables.\n\nConsider for instance  (\\ref{eq:rd1}) in the case of the Moreau scheme\n\\begin{subnumcases}{\\label{eq:MoreauTS}}\n  M(v_{k+1}-v_k)  + h  (K q_{k+\\theta}+ C v_{k+\\theta}) = p_{k+1} = G(q_{k+1}) \\lambda_{k+1},\\quad\\,\\\\[1mm] \n  q_{k+1} = q_{k} + h v_{k+\\theta}, \\quad \\\\[1mm]\n  \\dot y_{k+1} = G^\\top(q_{k+1})\\, v_{k+1} \\\\[1mm]\n  \\begin{array}{l}\n    \\text{if }\\quad\\bar y^\\alpha_{k+1} \\leq 0 \\text{ then } 0 \\leq \\dot y^\\alpha_{k+1} + e  \\dot y^\\alpha_{k} \\perp \\lambda^\\alpha_{k+1}  \\geq 0, \\\\[1mm]\n    \\text{otherwise  } \\lambda^\\alpha_{k+1}  =0.\\label{eq:MoreauTSd}\n  \\end{array}, \\alpha \\in \\mathcal I\n\\end{subnumcases} \nand the Schatzman--Paoli  scheme\n\\begin{subnumcases}{}\n  M(q_{k+1}-2q_{k}+q_{k-1})  + h^2 (K q_{k+\\theta}+ C v_{k+\\theta})  =  p_{k+1},\\quad\\,\\\\ \\notag\\\\ \n  v_{k+1}=\\Frac{q_{k+1}-q_{k-1}}{2h}, \\\\ \\notag \\\\\n  y_{k+1} = h\\left(\\Frac{q_{k+1}+e q_{k-1}}{1+e}\\right) \\\\\n  p_{k+1}= G\\left(\\Frac{q_{k+1}+e q_{k-1}}{1+e}\\right) \\lambda_{k+1} \\\\\n  0 \\leq y_{k+1}  \\perp\\lambda_{k+1} \\geq 0 .\n\\end{subnumcases}\n\nWe can see easily that the number of derivatives (or levels) that we store for $y$ and $\\lambda$ is independent on the relative degree but is chosen by the {\\tt OneStepIntegrator} with respect to the type of systems.\n\n\\section{How to define and compute the various levels and the number of indexSets }\n\n\\subsection{$y$ related variables}\n\nThe size of the vector {\\tt y} in the {\\tt Interaction} depends on\n\\begin{itemize}\n\\item the {\\tt  OneStepIntegrator} type.\n  \\begin{itemize}\n  \\item see the difference between the Moreau and Schatzman Paoli\n    scheme,\n  \\item plan the time--discontinuous Galerkin scheme\n  \\item plan the Higher Order Moreau sweeping process (HOSP)\n  \\end{itemize}\n\\item the {\\tt  Simulation} type.\n  \\begin{itemize}\n  \\item In {\\tt Timestepping} or {\\tt Event-driven} we do not need the same number of stored $y$\n  \\end{itemize}\n\n\\item the {\\tt NonSmoothLaw} type.\n  \\begin{itemize}\n  \\item If we consider some cases with or without friction in {\\tt\n      Timestepping} or {\\tt Event-driven}, we need to adapt the number\n    of stored $y$\n  \\end{itemize}\n\n\\end{itemize}\n\nSince the various levels of  {\\tt y} are used to build the index sets we will need from $0$ to a computed size that depends on the previous criteria. Only a part will be used in the {\\tt OneStepNSProblem}.\n\n\\subsection{$\\lambda$ related variables}\n\nThe size of the vector {\\tt lambda} in the {\\tt Interaction} depends on the same criteria than in the previous section.  Only, the number of lambda is not the same as {\\tt y} since a multiplier {\\tt lambda[i]} is not necessarily related to {\\tt y[i]}\n\n\\section{Rules for implementation}\n\nWe can define new members in {\\tt Interaction}:\n\\begin{itemize}\n\\item {\\tt \\_lowerlevelForOutput}, this value is to $0$ by default\n\\item {\\tt \\_upperlevelForOutput},  this value must be computed at initialization with respect to the previous criteria\n\\item {\\tt \\_lowerlevelForInput},  this value must be computed at initialization with respect to the previous criteria\n\\item {\\tt \\_upperlevelForInput},  this value must be computed at initialization with respect to the previous criteria\n\\end{itemize}\n\n\n\n\nThis level are computed in {\\tt Simulation::ComputeLevelsForInputAndOutput}. 
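In essence, that computation takes bounds over the per-interaction levels; as detailed just below, the four global levels are simply minima and maxima over all interactions. A small Python illustration of the aggregation (the names are hypothetical stand-ins for the C++ members listed above, not the actual Siconos API):

\begin{verbatim}
def compute_global_levels(interactions):
    """Aggregate per-interaction level bounds into global levels."""
    return {
        "levelMinForOutput": min(i.lower_level_for_output for i in interactions),
        "levelMaxForOutput": max(i.upper_level_for_output for i in interactions),
        "levelMinForInput":  min(i.lower_level_for_input for i in interactions),
        "levelMaxForInput":  max(i.upper_level_for_input for i in interactions),
    }
\end{verbatim}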
A visitor is used for the {\\tt OneStepIntegrator}. Furthermore, four global levels are computed \n\\begin{itemize}\n\\item {\\_levelMinForOutput} this value is the minimum level for the output {\\tt Interaction::\\_lowerlevelForOutput}  for all the interactions\n\\item {\\_levelMaxForOutput} this value is the maximum level for the output {\\tt Interaction::\\_upperlevelForOutput}  for all the interactions\n\\item {\\_levelMinForInput} this value is the minimum level for the output {\\tt Interaction::\\_lowerlevelForInput}  for all the interactions\n\\item {\\_levelMaxForInput} this value is the maximum level for the output {\\tt Interaction::\\_upperlevelForInput}  for all the interactions\n\\end{itemize}\n\n\n\n\n\\begin{itemize}\n\\item the values {\\tt y[i]} must be initialized from {\\tt \\_lowerlevelForOutput} to {\\tt \\_upperlevelForOutput}.\n\\item the values {\\tt lamdba[i]} must be initialized from {\\tt \\_lowerlevelForInput} to  {\\tt \\_upperlevelForInput}.\n\\item the values {\\tt y[i]} in {\\tt Interaction} must be used in priority to store the i-th derivative of $y$. When it is needed, higher index $i$ can be used for other triggering variables. For instance, for an Event--Driven scheme with a Lagrangian systems with friction, sliding velocity must be stored.\n\\item the values of {\\tt lamdba[i]} must stored the various multiplier for the nonsmooth law. We affect the same index $i$ as for the level of {\\tt y[i]} present in the corresponding nonsmooth law.\n\\item The number of {\\tt IndexSets} should follows {\\tt \\_levelMaxForY}.\n\\end{itemize}\n\n\n\nFor the dynamical systems :\n\\begin{itemize}\n\\item The number of levels for {\\tt \\_r} and {\\tt \\_p} in the DS should follow {\\tt \\_lowerlevelForInput} and {\\tt \\_upperlevelForOutput} of the associated interactions. This is done in {\\tt Interaction::initialize}.\n\\item A new variable should be added in the LagrangianDS to store the multiplier at the position level ({\\tt \\_tau} ?) to avoid the use of {\\tt \\_p[0]}. Indeed, we will continue to assume that {\\tt \\_p} is the input in the equation of motion. For {\\tt lambda} we can  use {\\tt lambda[0]} \n\\end{itemize}\n\nTODO LIST AND QUESTIONS\n\\begin{itemize}\n\\item What about the case of multiples interactions on a DS with various {\\tt \\_lowerlevelForInput} and {\\tt \\_upperlevelForOutput} ? Normally, all the levels should be correctly initialized thanks to the proposed implementation (r2821)\n\\item {\\tt DynamicalSystem::\\_r} should be a VectorOfVectors\n\\item {\\tt DynamicalSystem::\\_r} is split in {\\tt LagrangianDS}. a first part is {\\tt LagrangianDS::\\_p}. The other is not implemented !! 
{\\tt LagrangianDS::\\_tau} ?\n\\end{itemize}\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"DevNotes\"\n%%% End: \n", "meta": {"hexsha": "67f3f016922e72acf360d12d66ba074db8307e4d", "size": 11722, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/sphinx/devel_guide/notes/ChapterLevelsAndIndices.tex", "max_stars_repo_name": "ljktest/siconos", "max_stars_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 137, "max_stars_repo_stars_event_min_datetime": "2015-06-16T15:55:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T06:01:59.000Z", "max_issues_repo_path": "docs/sphinx/devel_guide/notes/ChapterLevelsAndIndices.tex", "max_issues_repo_name": "ljktest/siconos", "max_issues_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 381, "max_issues_repo_issues_event_min_datetime": "2015-09-22T15:31:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-14T09:05:23.000Z", "max_forks_repo_path": "docs/sphinx/devel_guide/notes/ChapterLevelsAndIndices.tex", "max_forks_repo_name": "ljktest/siconos", "max_forks_repo_head_hexsha": "85b60e62beca46e6bf06bfbd65670089e86607c7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2015-08-06T22:57:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T20:30:20.000Z", "avg_line_length": 47.4574898785, "max_line_length": 306, "alphanum_fraction": 0.6937382699, "num_tokens": 3597, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.5590502952320358}}
{"text": "\\section{Results}\n\\label{sec:Results}\nIn this section we describe the results of our methodologies on the observational and treatment data. We investigate the relations between features and symptoms of the real data provided in the files\\footnote{See GitHub} and determine answers to our questions from the data. \n\n\\subsection{Observational Data: Task $1_a$}\n\\label{subsec:1_a_real}\n\\graphicspath{{pictures/task1a/}}\nIn this subsection we use the real data provided in the \\textit{Observation-feature.csv} file. \n\n%Methodology 1, observational data\nApplying methodology\\textsubscript{1} to the observational data, we obtain the following correlations between features and $Death$ shown in Figure \\textbf{Figure \\ref{fig:1a1r}}. We firstly observe that vaccines have relevant negative correlations, while for positive correlations $CovidPositive$ is the major influencer of $Death$.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a1r.png}\n  \\caption{Correlations between Death and other features for Observational Data.}\n    \\label{fig:1a1r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a2r.png}\n  \\caption{Auto-correlations among features for Observational Data.}\n  \\label{fig:1a2r}\n\\end{figure}\n\nThe auto-correlation matrix in \\textbf{Figure \\ref{fig:1a2r}} shows high values of correlation between $CovidPositive$ and some symptoms: such as $No\\_Taste/Smell$, $Pneumonia$ and $Blood\\_Clots$. Hence, similarly to the synthetic data we found that it is necessary to work in a subset of the data containing only the $CovidPositive$ individuals to eliminate the dependence of symptoms on it. As our experiments with synthetic showed, this should allow us to determine the features which are actually predictive of death.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a3r.png}\n  \\caption{Correlations between Death and selected features for Observational Data.}\n    \\label{fig:1a3r}\n\\end{figure}\nThen we obtain the following correlations, where we can observe that vaccines are much more strongly predictive of survival. Perhaps even more notably, the features most predictive of death are no longer dominated solely by symptoms, but now show several genes which are especially predictive of death, namely $Gene_{16}$, $Gene_{27}$, $Gene_{63}$, $Gene_{77}$ in the top 5 features. The only symptom remaining among the top 5 predictive features was $Pneumonia$.\n\n%Methodology 2, observational data 1a\nFollowing from the useful indications from methodology\\textsubscript{1}, we then use methodology\\textsubscript{2}, yielding the conditional probability of death given explanatory features.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a7r.png}\n  \\caption{Conditional Probability of Death given $Feature = 0$.}\n    \\label{fig:1a7r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a8r.png}\n  \\caption{Conditional Probability of Death given $Feature = 1$.}\n    \\label{fig:1a8r}\n\\end{figure}\n\n\\textbf{Figures \\ref{fig:1a7r}} and \\textbf{\\ref{fig:1a8r}} show the conditional probability of death given the features from the dataset. We see in the plots that the probability of death increased by the greatest amount when conditioned on $Pneumonia$, from a mean of approximately $0.013$ to $0.025$. 
We also see a small increase in the probability of death when conditioned on the gene features, however the range of values when the genes are present versus not present overlap quite heavily. In this regard, we are not able to deduce that these genes affect the probability of death from methodology\\textsubscript{2}.\n\n%Methodology 3, observational data 1a\n\nWe then apply our final methodology to the observational data to determine our final conclusions with respect to the effect of genes and symptoms on death in the observational data.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a9r.png}\n  \\caption{Coefficient Values of Logistic Regression observational data, after randomized grid-search 500-fold CV.}\n    \\label{fig:1a9r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1a10r.png}\n  \\caption{95\\%  Confidence  Interval  for  Logistic  Regression Coefficient Values.}\n    \\label{fig:1a10r}\n\\end{figure}\n\nFollowing our procedure, the $Logistic$ $regression$ models were trained on the top 10 negatively correlated and top 10 positively correlated features from observational data. When trained with all features, including $CovidPositive$, the logistic regression models were unable to predict $Death$ at any better than $0.5$ (i.e. random guess). When the models were trained on a subset containing only the $CovidPositive$ population without feature selection, the best model had an accuracy of $0.71$. Finally, when using the 10 most negatively and 10 most positively correlated features, the best scoring model had an accuracy of $0.87$. This approach therefore remains the same in our subsequent tasks and models trained for methodology\\textsubscript{3}.\n\nThe resulting coefficient values are as shown in \\ref{fig:1a9r}, with $95\\%$ confidence intervals for the logistic regression shown in \\textbf{Figure \\ref{fig:1a10r}}. This is to say that we are $95\\%$ confident that the true value of the coefficient will be within the intervals shown in the figure. From this, we can conclude that if a feature has a lower range of possible values, then it is less predictive of death (or more predictive of death if the range is higher than other features). \n\nBased on these results along with the previous methodologies, we return to our first set of questions:\n\n\\begin{itemize}\n    \\item \\textit{Can we predict death rate given age, gender, income, genes and comorbidities?} \nThe logistic regression coefficients and accuracy show that it is possible to train a model which is able to accurately predict death.\n\n    \\item Our second question (\\textit{Which explanatory features are best to predict death?})\nis answered by these results as well, showing that the symptoms and comorbidities which are most effective in predicting death are $Gene_{16}$, $Gene_{63}$ and $Gene_{27}$. $Pneumonia$ did not arise in methodology\\textsubscript{3} as a top predictor of death, and $Gene_7$ does not appear from methodologies $1$ and $2$, so we conclude that these features are most likely comparatively less important predictors of death (though likely still risk factors).\n\\end{itemize}\n \n\\subsection{Observational Data: Task $1_b, 1_c$}\nIn this section we apply our methodologies for estimating the efficacy of vaccines and investigate their possible side-effects. 
The results of methodology\\textsubscript{1} are the same as those from Subsection \\ref{subsec:1_a_real}, but in this case we focus on the fact that each of the vaccines is highly negatively correlated with death (\\textbf{Figure \\ref{fig:1a3r}}), indicating that they are possibly effective in preventing it.\n\n%Methodology 2, observational 1bc\n\nOn the other hand, methodology\\textsubscript{2} differs from our previous analysis, as in this case we are modelling the conditional probability of side-effects (including death) from vaccines.\n\\graphicspath{{pictures/task1b/}}\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1b5r.png}\n  \\caption{Conditional Probability of Death given $Feature = 0$.}\n    \\label{fig:1b5r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1b6r.png}\n  \\caption{Conditional Probability of Death given $Feature = 1$.}\n    \\label{fig:1b6r}\n\\end{figure}\n\n\\textbf{Figures \\ref{fig:1b5r}} and \\textbf{\\ref{fig:1b6r}} show the conditional probabilities of side-effects where the patient is unvaccinated and vaccinated, respectively. The figures show that the probability of $Death$ is greatly reduced when conditioned on all of the vaccines, with $Vaccine1$ being the most effective (specifically, reducing the probability of death the most). In this case we also observe that the quantiles for the probability of death given vaccines lie well below the probability of death without vaccines. Interestingly, while both this methodology and methodology\\textsubscript{1} show $Vaccine1$ to have the greatest effect on reducing the probability of death, they differ on which of $Vaccine2$ or $Vaccine3$ is the second most effective in preventing death.\n\nAn additional aspect of our analysis with methodology\\textsubscript{2} is the estimation of side effects from vaccines. With this methodology we are also able to calculate the conditional probability of symptoms given vaccines. \\textbf{Figures \\ref{fig:1c3r}} and \\textbf{\\ref{fig:1c4r}} show that the probability of the $Fever$ and $Headache$ symptoms was greatly increased when conditioned on the vaccines. Without vaccines the mean probability of $Headache$ was $0.03$ and that of $Fever$ was $0.05$. We did not find a similar association with any other side-effects.\n\nThe mean probability of the $Headache$ symptom increased from approximately $0.03$ to $0.05$ when conditioned on each of the vaccines (all resulted in similar mean values), while the mean probability of $Fever$ increased from $0.05$ to $0.085$. For the probability of $Fever$ and $Headache$ conditioned on each of the vaccines, the quantiles of the probability estimate did not overlap with the probability estimate conditioned on not receiving the vaccine. For this reason we conclude that this is a reasonable indication that the vaccines are associated with $Fever$ and $Headache$ as side effects.\n\n\\graphicspath{{pictures/task1c/}}\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1c3r.png}\n  \\caption{Conditional Probability of Symptoms given $Vaccines = 0$.}\n    \\label{fig:1c3r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{1c4r.png}\n  \\caption{Conditional Probability of Symptoms given $Vaccines = 1$.}\n    \\label{fig:1c4r}\n\\end{figure}\n\n%Methodology 3, observational 1bc\n\nNext, turning to methodology\\textsubscript{3}, we train $Logistic$ $regression$ models predicting symptoms including death. 
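A minimal sketch of how such a cross-validated logistic-regression fit can be set up with scikit-learn (the data arrays, parameter grid and fold count here are illustrative placeholders, not our exact configuration):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)            # stand-in data: 20 binary features
X = rng.integers(0, 2, size=(500, 20)).astype(float)
y = rng.integers(0, 2, size=500)          # stand-in Death indicator

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": np.logspace(-3, 3, 50)},
    n_iter=20, cv=5, scoring="accuracy", random_state=0,
)
search.fit(X, y)
print(search.best_score_)                 # cross-validated accuracy
print(search.best_estimator_.coef_)       # signed coefficient values
\end{verbatim}

The signed coefficients read off from \texttt{coef\_} are what the coefficient-value figures in this section display.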
As with methodology\\textsubscript{1}, we refer to the same models described in \\ref{subsec:1_a_real} to determine the answer to our second set of questions (Tasks 1b and 1c). The top 3 negative features in the logistic regression model are the three vaccines (\\textbf{Figure} \\ref{fig:1a9r}), indicating that they are the most predictive of survival. The order of greatest effectiveness can be loosely observed as $Vaccine1$, $Vaccine2$, $Vaccine3$. Nonetheless, the confidence intervals on the effectiveness of each vaccine (as measured with the coefficient values) broadly overlap (\\textbf{Figure \\ref{fig:1a10r}}) and all three can be considered ``effective'' under this measure (and there may be no reason for a patient to, for instance, prefer $Vaccine1$ to $Vaccine3$ in practice). \n\n%Side effects here\n\nWith this, we can answer the questions posed for Tasks 1b and 1c. \n\\begin{itemize}\n\\item \\textit{Can we predict death rate (efficacy) of a vaccine?}\nYes, our methodologies indicate that all three vaccines are very effective in reducing the chance of death in the Covid-Positive population.\n\n\\item \\textit{Which vaccine is most effective?}\nOur analysis indicates that $Vaccine1$ is the most effective of the three. However, we cannot definitively say whether $Vaccine2$ or $Vaccine3$ is superior, as the results of our methodologies differ on which is more predictive of survival.\n\n\\item \\textit{Can we predict a specific symptom(s) (side-effect(s)) of a vaccine?}\nThrough analyzing the conditional probability estimates obtained in methodology\\textsubscript{2}, we predict that all three of the vaccines in the data are associated with side-effects of $Headache$ and $Fever$. In the case of features such as blood clots, the conditional probability estimates do not differ sufficiently from the corresponding baseline estimates without the vaccines to conclude that the vaccines may cause them as side effects. Blood clots may be associated with $Vaccine2$ as shown in \\ref{fig:1c4r}, but the overlap between this estimate and the baseline is too great to rule out the possibility of this being merely random chance.\n\n\\item \\textit{Which side-effect(s) does each vaccine produce?}\nFrom our analysis it appears that all three vaccines produce $Fever$ and $Headache$ as side-effects, with the unconfirmed possibility of blood clots from $Vaccine2$, as noted above.\n\\end{itemize}\n\\subsection{Treatment Data: Task $2$}\n\\graphicspath{{pictures/task2/}}\nIn the final subsection we use the real data contained in the files \\textit{treatment\\_features.csv}, \\textit{treatment\\_action.csv} and \\textit{treatment\\_outcome.csv} regarding the treatments. \n\n%Methodology 1, treatment data (Task 2)\nWe again perform methodology\\textsubscript{1} in order to obtain the correlations of features in the data. The results shown in \\textbf{Figure \\ref{fig:2_1r}} are similar to those for the observational data, with the new addition of $Treatment2$ as a variable which is highly negatively correlated with death. Notably, we do not observe $Treatment1$ among the top 10 negatively correlated variables.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_1r.png}\n  \\caption{Correlations between Death and selected features for Treatment Data.}\n    \\label{fig:2_1r}\n\\end{figure}\n\n% Methodology 2, treatment data (Task 2)\n\nWe then proceed with analyzing \\textbf{Figures \\ref{fig:2_5r_top}} and \\textbf{\\ref{fig:2_6r_top}} using methodology\\textsubscript{2}. 
The results show that the probability of $Death$ conditioned on $Treatment2$ appears to be lowered, but there is some overlap in the estimates from this method. For this reason, these results may not be as indicative in establishing that $Treatment2$ is effective in preventing $Death$. \n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_5r_top.png}\n  \\caption{Conditional Probability of $Death$ given selected $Features = 0$.}\n    \\label{fig:2_5r_top}\n\\end{figure}\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_6r_top.png}\n  \\caption{Conditional Probability of $Death$ given selected $Features = 1$.}\n    \\label{fig:2_6r_top}\n\\end{figure}\n\nWe also observe in \\textbf{Figures \\ref{fig:2_5r_after}} and \\textbf{\\ref{fig:2_6r_after}} that the probability of symptoms when conditioned on treatments are very similar to the probability when conditioned on the absence of treatment. The one notable feature of these charts is that the probability of $Blood\\_clots$ conditioned on $Treatment 1$ is $0$. This is however a very marginal result and so we can not establish any evidence of side-effects from treatments using this method. We note that the amount of total samples in the data was very small, particularly with respect to certain symptoms. This may be a reason that this methodology does not reach a clear conclusion and estimates several probabilities precisely at $0$.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_5r_after.png}\n  \\caption{Conditional Probability of Symptoms given $Treatments = 0$.}\n    \\label{fig:2_5r_after}\n\\end{figure}\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_6r_after.png}\n  \\caption{Conditional Probability of Symptoms given $Treatments = 1$.}\n    \\label{fig:2_6r_after}\n\\end{figure}\n\n%Methodology 3, treatment data (Task 2)\n\nOur last analysis of the treatment data is performed with methodology\\textsubscript{3}. In this case, due to the scarcity of data points, we train the $Logistic$ $regression$ model using bootstrapping to re-sample the data, in order to balance the outcomes for the training process. The resulting best model accuracy is $0.98$ with confidence intervals as in \\textbf{Figure \\ref{fig:2_12r}}. We observe once again in \\textbf{Figure \\ref{fig:2_11r}} that the the coefficient for $Treatment1$ is not in the top selected features. On the other hand, $Treatment2$ is once again among the most negatively associated variables with $Death$. Combined with our observation from methodology\\textsubscript{1}, this indicates that $Treatment2$ is effective at preventing $Death$, and very likely more effective than $Treatment1$.\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_11r.png}\n  \\caption{Coefficient Values of Logistic Regression, after randomized grid-search 500-fold CV.}\n    \\label{fig:2_11r}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[width=\\linewidth]{2_12r.png}\n  \\caption{95\\%  Confidence  Intervals  for  Logistic  Regression Coefficient Values.}\n    \\label{fig:2_12r}\n\\end{figure}\n\nWe are then able to answer the questions we posed for Task 2.\n\n\\begin{itemize}\n\\item \\textit{Can we predict death rate given a specific treatment?}\nYes, we are able to train a logistic regression model which is able to predict death among the $CovidPositive$ population with an cross-validated accuracy of $0.98$. 
\n\\item \\textit{Which treatment is the most effective?}\nWe estimate that the probability of death for a patient given $Treatment2$ is lower than without treatment, as well as lower than when given $Treatment1$. While $Treatment1$ may still be effective in preventing death (we do not establish that it is not), our analysis indicates that $Treatment2$ is more effective.\n\\item \\textit{Can  we  predict  a  precise  symptom(s)(side-effect(s))  given  a  specific  treatment?}\nWe were not able to establish that any side-effects were more likely given either of the treatments. We speculate that this may be due to the low number of samples for each symptom, such that our calculation of conditional probabilities was not able to make an precise estimate approximating the actual outcomes.\n\\item \\textit{Which side-effect(s) does each treatment produce?}\nAs noted in the previous answer, we cannot establish any side-effects from the treatments, and conversely we also do not establish that the treatments do \\textbf{not} cause side effects.\n\\end{itemize}", "meta": {"hexsha": "f1233b6839f8d5d16979fb31362fbfe49b7b33df", "size": 17817, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "project1/report/content/results.tex", "max_stars_repo_name": "fabiorodp/IN_STK5000_Adaptive_methods_for_data_based_decision_making", "max_stars_repo_head_hexsha": "f8c049ceed6e3123e8676bcd9b29afaba9bd1f9b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "project1/report/content/results.tex", "max_issues_repo_name": "fabiorodp/IN_STK5000_Adaptive_methods_for_data_based_decision_making", "max_issues_repo_head_hexsha": "f8c049ceed6e3123e8676bcd9b29afaba9bd1f9b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project1/report/content/results.tex", "max_forks_repo_name": "fabiorodp/IN_STK5000_Adaptive_methods_for_data_based_decision_making", "max_forks_repo_head_hexsha": "f8c049ceed6e3123e8676bcd9b29afaba9bd1f9b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-25T14:45:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-25T14:45:41.000Z", "avg_line_length": 86.0724637681, "max_line_length": 916, "alphanum_fraction": 0.7839142392, "num_tokens": 4377, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5590502914763701}}
{"text": "\\RequirePackage[l2tabu, orthodox]{nag}\n\\documentclass[a4paper]{article}\n\\usepackage[a4paper]{geometry}\n\\usepackage[colorlinks=false, pdfborder={0 0 0}]{hyperref}\n\\usepackage{array,multirow,amsmath,amssymb,esint,bm,siunitx,cleveref}\n\\renewcommand{\\arraystretch}{2.5}\n\\title{Green's Function}\n\\author{Naitree Zhu}\n\\date{Last modified: \\today}\n\\begin{document}\n\\maketitle\nIn mathematics, a Green's function\\footnote{More information:\\url{http://en.wikipedia.org/wiki/Green\\%27s_function}} is the impulse response of an inhomogeneous differential equation defined on a domain, with specified initial conditions or boundary conditions. Via the superposition principle, the convolution of a Green's function with an arbitrary function $f(x)$ on that domain is the solution to the inhomogenous differential equation for $f(x)$.\n\nGreen's functions are named after the British mathematician George Green, who first developed the concept in the 1830s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.\n\nUnder many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition.\n\\part{Definition and Use}\nA Green's function, $G(x,s)$, of a linear differential operator$L=L(x)$ acting on distributions over a subset of the Euclidean space $R^{n}$, at a point s, is any solution of \n\\begin{equation}\nL^{*}\\,G(x,s)=\\delta(x-s)\n\\end{equation}\nwhere $\\delta$ is the Dirac delta function and $L^{*}$ is the adjoint $L$. In the case of a self-adjoint operator, the equation can be written as \n\\begin{equation}\nL\\,G(x,s)=\\delta(x-s)\n\\end{equation}\nIf the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Also, Green's functions in general are distributions, not necessarily proper functions.\n\nGreen's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green's function of the Hamiltonian is a key concept with important links to the concept of density of states. 
As a side note, the Green's function as used in physics is usually defined with the opposite sign; that is,\n\\begin{equation}\nL\\,G(x,s)=-\\delta(x-s)\n\\end{equation}\nThis definition does not significantly change any of the properties of the Green's function.\n\\part{Motivation}\nLoosely speaking, if such a function $G$ can be found for the operator $L$, then if we multiply the equation (2) for the Green's function by $f(s)$, and then perform an integration in the $s$ variable, we obtain:\n\\begin{align*}\n\\int L\\,G(x,s)\\,f(s)\\,\\mathrm{d}s&=\\int \\delta(x-s)\\,f(s)\\,\\mathrm{d}s\\\\&=f(x)\\\\&=L\\,u(x)\n\\end{align*}\ni.e.\n\\[\nL\\,u(x)=\\int L\\,G(x,s)\\,f(s)\\,\\mathrm{d}s\n\\]\nBecause the operator $L = L(x)$ is linear and acts on the variable $x$ alone, we can take the operator $L$ outside of the integration on the right-hand side (Note it's definitive integral over the region that the operator $L$ is acting on), obtaining\n\\[\nL\\,u(x)=L\\left(\\int G(x,s)\\,f(s)\\,\\mathrm{d}s\\right)\n\\]\nProvided operator $L$ is arbitrary, so it suggests\n\\begin{equation}\nu(x)=\\int G(x,s)\\,f(s)\\,\\mathrm{d}s\n\\end{equation}\nThus, we can obtain the function $u(x)$ through knowledge of the Green's function in equation (2) and the source term on the right-hand side in equation (3). This process relies upon the linearity of the operator $L$. The problem now lies in finding the Green's function $G$ that satisfies equation (2). For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator $L$.\n\nNot every operator $L$ admits a Green's function. A Green's function can also be thought of as a right inverse of $L$. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation (4), may be quite difficult to evaluate. 
However the method gives a theoretically exact result.\n\\part{Finding Green's functions}\n\\section{Eigenvalue expansions}\nIf a differential operator $L$ admits a set of eigenvectors $\\Psi_{n}(x)$ (i.e., a set of functions $\\Psi_{n}$ and scalars $\\lambda_{n}$ satisfying $L\\Psi_{n}=\\lambda_{n}\\Psi_{n}$) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues.\n``Complete'' means that the set of functions $\\lbrace\\Psi_{n}\\rbrace$ satisfies the following completeness relation:\n\\begin{equation}\n\\delta(x-x')=\\sum_{n=0}^\\infty\\Psi_n^\\dag(x)\\Psi_{n}(x')\n\\end{equation}\nThen the following holds:\n\\begin{equation}\nG(x,x')=\\sum_{n=0}^\\infty\\frac{\\Psi_{n}^\\dag(x)\\,\\Psi_{n}(x')}{\\lambda_{n}}\n\\end{equation}\nwhere \\dag represents complex conjugation.\n\nApplying the operator $L$ to each side of this equation results in the completeness relation, which was assumed true.\n\nThe general study of the Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.\n\\section{Table of Green's functions}\nThe following table gives an overview of Green's functions of frequently appearing differential operators, where $\\theta(t)$ is the Heaviside step function,$r=\\sqrt{x^2+y^2+z^2}$ and $\\rho=\\sqrt{x^2+y^2}$.\n\\begin{table}[h]\n\\centering\n\\caption{Green's functions of some common operators}\n\\begin{tabular}{|>{$}c<{$}|>{$}c<{$}|>{$}m{4.85cm}<{$}|}\n\\hline\n\\text{Differential Operator}\\,L & \\text{Green's Function}\\,G & \\text{Example of application}\\\\\n\\hline\n\\displaystyle \\partial_t+\\gamma &\\displaystyle \\theta(t)e^{-\\gamma t}&\\\\\n\\hline\n\\displaystyle \\left(\\partial_t +\\gamma\\right)^2&\\displaystyle \\theta(t)te^{-\\gamma t}&\\\\\n\\hline\n\\multirow{2}{*}{$\\displaystyle \\partial_t^2 + 2\\gamma\\partial_{t}+\\omega_0^2$} & \\displaystyle \\theta(t)e^{-\\gamma t}\\frac{1}\n{\\omega}sin(\\omega t) & \\text{one-dimensional damped harmo-}\\\\ &\\text{with }\\omega=\\sqrt{\\omega_0^2 \n-\\gamma^2}&\\text{nic oscillator}\\\\\n\\hline\n\\displaystyle \\Delta_{2D}=\\partial_x^2 +\\partial_y^2 & \\displaystyle \\frac{1}{2\\pi}\\ln \\rho &\\\\\n\\hline\n\\displaystyle \\Delta=\\partial_x^2 +\\partial_y^2+\\partial_z^2 &\\displaystyle \\frac{-1}{4\\pi r}& \\text{Poisson equation}\\\\\n\\hline\n\\text{Helmholtz operator}\\,\\displaystyle \\Delta +k^2&\\displaystyle \\frac{-e^{-ikr}}{4\\pi r}&\\text{stationary Schr\\\"{o}dinger equation}\\\\\n\\hline\n\\text{D'Alembert operator}\\, \\displaystyle \\square=\\frac{1}{c^2}\\partial_t^2-\\Delta &\\displaystyle \\frac{\\delta(t-\\frac{r}{c})}{4\\pi r}&\\text{wave equation}\\\\\n\\hline\n\\displaystyle \\partial_t-D\\Delta &\\displaystyle \\theta(t)\\left(\\frac{1}{4\\pi Dt}\\right)^{\\frac{3}{2}}e^{\\frac{r^2}{4Dt}}&\\text{diffusion}\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\part{Green's functions for the Laplacian}\nGreen's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities:\n\\begin{equation}\n\\int_V \\left(\\varphi\\nabla^{2}\\psi-\\psi\\nabla^{2}\\varphi\\right)\\mathrm{d}V=\\oint_S\\left(\\varphi\\frac{\\partial\\psi}{\\partial n}-\\psi\\frac{\\partial \\varphi}{\\partial n}\\right)\\mathrm{d}S\n\\end{equation}\nSuppose that the linear differential operator $L$ is the Laplacian $\\nabla^2$, and that there is a  Green's function $G$ for the Laplacian.\n\nAccording to the defining property 
of the Green's function,\n\\begin{equation}\nL\\,G(x,x')=\\nabla^2 G(x,x')=\\delta(x-x')\n\\end{equation}\nLet $\\psi=G$ in Green's second identity.Then:\n\\begin{equation}\n\\int_V \\left(\\varphi\\delta(x-x')-G(x,x')\\nabla'^2\\varphi\\right)\\mathrm{d}V'=\\oint_S\\left(\\varphi\\frac{\\partial G(x,x')}{\\partial n'}-G(x,x')\\frac{\\partial \\varphi}{\\partial n'}\\right)\\mathrm{d}S'\n\\end{equation}\nUsing this expression, it is possible to solve Laplace's equation $\\nabla^2\\varphi=0$ or Poisson's equation $\\nabla^2\\varphi(x)=-\\rho(x)$, subject to either Dirichlet or Neumann boundary conditions. In other words, we can solve for $\\varphi(x)$ everywhere inside a volume where one of the following conditions are satisfied:\n\\begin{enumerate}\n\\item the value of $\\varphi(x)$ is specified on the bounding surface of the volume.\n\\item the normal derivative of $\\varphi(x)$ is specified on the bounding surface.\n\\end{enumerate}\nThis is specified followingly.\n\nSuppose the problem is to solve for $\\varphi(x)$ inside the region. Then according to the defining property of the Dirac delta function, we have\uff1a\n\\begin{equation}\n\\varphi(x)=\\int_V G(x,x')\\rho(x')\\mathrm{d}V'+\\oint_S\\left(\\varphi\\frac{\\partial G(x,x')}{\\partial n'}-G(x,x')\\frac{\\partial \\varphi}{\\partial n'}\\right)\\mathrm{d}S'\n\\end{equation}\n\\begin{enumerate}\n\\item If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that $G(x,x')$ vanishes when either $x$ or $x'$ is on the bounding surface. Thus only the first of the two terms in the surface integral remains.\n\\item If the problem is to solve a Neumann boundary value problem, the Green's function is chosen such that its normal derivative vanishes on the bounding surface, as it would seem to be the most logical choice. However, this is not possible. Intuitively, Green's function suggests that a source of field exists in the region. So the integral of field at the boundary,i.e. flux outward the region must be nonzero. And mathematically,\n\\begin{equation}\n\\oint_S \\frac{\\partial G(x,x')}{\\partial n'}\\mathrm{d}S'=\\int_V \\nabla'^2 G(x,x')\\mathrm{d}V'=\\int_V \\delta(x-x')\\mathrm{d}V'=1\n\\end{equation}\nso $\\frac{\\partial G(x,x')}{\\partial n'}$ can not be 0 at the boundary, otherwise the first integral equals to zero and divergence theorem is violated. Thus, the simplest form the normal derivative can take is that of a constant, namely letting $\\frac{\\partial G(x,x')}{\\partial n'}=\\frac{1}{S}$, where $S$ is the surface area of the surface. The first surface term in eq.~(10) becomes \n\\begin{equation}\n\\oint_S \\frac{\\partial G(x,x')}{\\partial n'}\\mathrm{d}S'=\\left<\\varphi\\right>_S\n\\end{equation}\nwhere $\\left<\\varphi\\right>_S$ is the average value of the potential on the surface. 
This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.\n\\end{enumerate}\n\nSupposing the boundary goes out to infinity and $G=0$ at infinity (Dirichlet problem), the corresponding Green's function is\n\\begin{equation}\nG(x,x')=\\frac{1}{\\lvert x-x' \\rvert}\n\\end{equation}\nApplying this to eq.~(10) gives familiar expression for electric potential:\n\\begin{equation}\n\\varphi(x)=\\int_V \\frac{\\rho(x')}{\\lvert x-x' \\rvert}\\mathrm{d}V'\n\\end{equation}\nNotice that because of the defining property eq.~(8) of Green's function for Laplacian and infinity boundary condition stated above, the corresponding Green's function eq.~(13) is exactly what a point charge would behave under the same boundary condition.\n\\end{document}", "meta": {"hexsha": "b9a6d0b9ba78a99efe2348decef0f967565c3f66", "size": 11046, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "math/Green's function/Green's function.tex", "max_stars_repo_name": "Naitreey/notes-and-knowledge", "max_stars_repo_head_hexsha": "48603b2ad11c16d9430eb0293d845364ed40321c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-05-16T06:06:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-12T08:46:18.000Z", "max_issues_repo_path": "math/Green's function/Green's function.tex", "max_issues_repo_name": "Naitreey/notes-and-knowledge", "max_issues_repo_head_hexsha": "48603b2ad11c16d9430eb0293d845364ed40321c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-04-06T01:46:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-02-13T03:11:33.000Z", "max_forks_repo_path": "math/Green's function/Green's function.tex", "max_forks_repo_name": "Naitreey/notes-and-knowledge", "max_forks_repo_head_hexsha": "48603b2ad11c16d9430eb0293d845364ed40321c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-04-11T11:02:32.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-27T11:59:09.000Z", "avg_line_length": 75.6575342466, "max_line_length": 451, "alphanum_fraction": 0.7450660873, "num_tokens": 3167, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7745833737577158, "lm_q1q2_score": 0.5590502877207044}}
{"text": "% Created 2016-02-27 Sat 15:28\n% -----------------------------*- LaTeX -*------------------------------\n\\documentclass[12pt]{report}\n\\usepackage{scribe_hgen486}\n\\begin{document}\n\n\\scribe{Nicholas Knoblauch}\t\t% required\n\\lecturenumber{11}\t\t\t% required, must be a number\n\\lecturedate{February 9}\t\t% required, omit year\n\\lecturer{John Novembre} \n\n\\maketitle\n\n% please leave this comment \n\\framebox[.95\\textwidth]{\\parbox{.93\\textwidth}{ {{\\bf Note:}} These\nlecture notes are still rough, and have only have been mildly\nproofread.  }}\n\\vspace*{.1in}\n\n\n% feel free to delete content below this line \n% ----------------------------------------------------------------------\n\n\n% please leave this comment \n\\tolerance=1000\n\\providecommand{\\alert}[1]{\\textbf{#1}}\n\n\\title{HMMLec}\n\\author{nwknoblauch}\n\\date{\\today}\n\n\n\\section{Continuous Time Markov Chains}\n\\label{sec-1}\n\\subsection{Differential equations that lead to the poisson distribution}\n\\label{sec-1-1}\n\nWe can describe the probability that the number of events that have occured by time $t$ as $N_t$, and can write the probability as:\n$P(N_t=i)=P_i(t)$\n\nWith the rate parameter $\\lambda$, the probability that $N_t=i$ at $t+h$ is given by:\n \n$P_i(t+h)=P_{\\mathrm{i-1}}(t)(\\lambda h + o(h)) + P_i(t)(1-\\lambda h + o(h)) $\n\nHere we are summing over the probability of two scenarios.  The first represents that $N_t=i-1$ and that there was another event in time $h$.  The second is that $N_t=i$ and that there was no event in time $h$. For small vaues of $h$ we can ignore the probabilty of two steps in time $h$.\n\n\nThe limit as $h$ goes to 0 is: $\\frac{P_i(t+h)-P_i(t)}{h}=\\frac{d}{dt}P_i(t)=P_{i-1}(t)(\\lambda h + o(h)) + P_i(t)(1-\\lambda h + o(h))-P_i(t)$\n\nCancel the one and divide by $h$ to get:\n\n$(\\lambda )P_{i-1}(t) -P_i(t) P_0(t+h)=P_0(t)(1-\\lambda h + o(h))$\n\n $\\frac{d P_0(t)}{dt}= -\\lambda P_0(t)$\n \nThe solution to this differential equation is:\n$P_0(t)= e^{\\mathrm{-\\lambda t}}$ plus some constant\n\n$\\frac{P_i(t)}{dt}=\\lambda P_0(t) -\\lambda P_i(t)=\\lambda e^{-\\lambda t} -\\lambda P_1(t)$\n\nTo get the solution for $P_1$ we use an integrating factor\n\n$\\frac{d P_1(t)}{dt} + \\lambda P_1(t) = \\lambda e^{-\\lambda t}$\n\n$\\int_0^T e^{-\\lambda t} \\frac{d P_1(t)}{dt} + \\lambda e^{-\\lambda t} P_1(t) = \\int_0^T \\lambda$\n\nRemembering the product rule: $(fg)'=f'g+g'f$\n\n$$ \\int_0^T \\frac{d}{dt} = \\int_0^T \\frac{d}{dt} ( e^{\\lambda t} P_1(t)) = \\int_0^T \\lambda dt = P_1(t)=\\lambda t e^{-\\lambda t}$$\n\n\nWe can keep iterating:\n$P_2(t)= \\frac{(\\lambda t)^2}{2} e^{-\\lambda t}$\nAnd eventually we have \n$P_n(t) = \\frac{(\\lambda t)^n}{n!} e^{-\\lambda t}$\n\nThis is the poisson process\n\\subsection{A More General Process, the Pure Birth Process}\n\\label{sec-1-2}\n\nMore general than the poisson process is a ``birth process'', or ``pure birth process''\n\nIn a pure birth process there is a state dependent rate of arrival:\n\n$\\frac{d}{dt} P_i(t) = \\lambda_{i-1} P_{i-1}(t) -\\lambda_i P_i(t)$\n\nThe poisson process is a special case where $\\lambda_i$ is constant.\n\nAnother example is the \\textbf{linear birth process}, where the $\\lambda_i=i \\lambda$ (This is also known as a Yule process)\n\n$P_j(t) = {{j-1}\\choose{k-1}} e^{-\\lambda t} (1-e^{\\lambda t})^{k-i}$ where $k$ is the starting size.  This is a negative binomial distribution (and is a transient solution)\n\nAnother major class is a birth-death process.  
We have a set of birth rates $\\lambda_i$ (for $i \\geq 0$) and a set of death rates $\\mu_i$ (for $i \\geq 1$):\n\n$\\frac{d}{dt} P_i(t) = \\lambda_{i-1} P_{i-1}(t) + \\mu_{i+1}P_{i+1}(t) - (\\lambda_i + \\mu_i)P_i(t)$\n\nIn the linear birth-death process, $\\lambda_i=i \\lambda$ and $\\mu_i = i\\mu$.\n\nThere is also the linear birth-death process with immigration:\n\n$\\lambda_i = i \\lambda + \\theta$\n$\\mu_i = i \\mu$\n\\subsection{Continuous version of the Markov property}\n\\label{sec-1-3}\n\nThe future is independent of the past before the immediate past:\n$P(X(t+s)=j|X(s)=i,X_u=x(u),0 \\leq u < s ) = P(X(t+s)=j|X(s)=i)$\n\\subsection{Rate matrix $Q$}\n\\label{sec-1-4}\n\nWe have been thinking about going from 0 to 1, but we can think in general about going from $i$ to $j$. We define the generator matrix, or rate matrix, $Q$ by\n$P_{i,j}(h) = q_{ij} h + o(h)$ for $j \\neq i$, and\n$P_{i,i}(h)= 1- v_i h + o(h)$, where $v_i = \\sum_{j \\neq i} q_{ij}$.\n\nIf we define $q_{ii}=-v_i$, then $Q = (q_{ij})$.\n\nInter-event times depend on $i$ and they are exponential with rate $v_i$, and the jump probabilities are\n$P_{ij}=\\frac{q_{ij}}{v_i}$.\nThis looks like a discrete Markov chain: there is a discrete Markov chain embedded in the continuous Markov chain.\n\\subsubsection{$Q$ matrix for the Poisson process}\n\\label{sec-1-4-1}\n\nFor the Poisson process, $Q$ is infinite in both directions.\nThe diagonal has $-\\lambda$ along it. One to the right of the diagonal is $\\lambda$, the rate at which we arrive at the next state. (There is no return to previous states in the Poisson process.)\n \n\\subsubsection{$Q$ matrix for the birth-death process}\n\\label{sec-1-4-2}\n\nThe matrix goes on infinitely in both directions. Along the diagonal we have $-\\lambda_0$ at $(0,0)$, followed by $-(\\mu_k+\\lambda_k)$ at $(k,k)$; at $(k,k-1)$ we have $\\mu_k$ and at $(k,k+1)$ we have $\\lambda_k$.\nFor the embedded chain $P$, we have $\\frac{\\mu_k}{\\lambda_k+\\mu_k}$ at $(k,k-1)$, $\\frac{\\lambda_k}{\\lambda_k+\\mu_k}$ at $(k,k+1)$, and on the diagonal $1$ minus the two off-diagonal entries.\n\nThe forward equations are\n$\\frac{d P_{ij}(t)}{dt} = -v_j P_{ij}(t) + \\sum_{k \\neq j} P_{ik}(t) q_{kj}$,\nwhich balances the rate of leaving state $j$ against the rate of going from $i$ to some intermediate state $k$ and then jumping to $j$. In matrix form,\n\n$\\frac{d P_t}{dt} = P_t Q$\n$P_t = e^{Qt}$\n\nIf $Q$ is diagonalizable, then we can write $Q = U D U^{-1}$ where $D$ is a diagonal matrix, and then $e^{Qt} = Ue^{Dt}U^{-1}$.\n\nThere is also a family of methods, called Krylov methods, for exponentiating matrices.\n\\subsection{What about stationary distributions?}\n\\label{sec-1-5}\n\n   \n\\subsubsection{The Global Balance Equations}\n\\label{sec-1-5-1}\n\nDefine $P_i$ as the stationary probability of being in state $i$, which is the limit as $t$ goes to infinity of $P_{ij}(t)$. The global balance equations are\n$v_j P_j = \\sum_{k \\neq j} q_{kj} P_k$.\nThis equation describes a situation where the rate out of state $j$ (weighted by the probability of being in state $j$, i.e. the flux) is equal to the flux into state $j$.\n\nIf a Continuous Time Markov Chain is time reversible, then $P_i q_{ij} = P_j q_{ji}$ and it satisfies the local balance equations.  
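Since $P_t = e^{Qt}$, the transition probabilities of a finite-state chain can be computed directly by matrix exponentiation; a minimal SciPy sketch for a two-state chain (the rates are arbitrary placeholders):

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Two-state chain: leave state 0 at rate 2, leave state 1 at rate 1.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

P_t = expm(Q * 0.7)        # P(t) = exp(Qt) at t = 0.7
print(P_t)
print(P_t.sum(axis=1))     # each row sums to 1
\end{verbatim}

In words, time reversibility means the following.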
What is the flux out for state 0?\n\n\n\\begin{center}\n\\begin{tabular}{rll}\n State  &  Flux out                   &  Flux in                                    \\\\\n\\hline\n     0  &  $\\lambda_0 P_0$            &  $\\mu_1 P_1$                                \\\\\n     1  &  $(\\lambda_1 + \\mu_1) P_1$  &  $\\lambda_0 P_0 + \\mu_2 P_2$                \\\\\n     2  &  $(\\lambda_2 + \\mu_2) P_2$  &  $\\lambda_1 P_1 + \\mu_3 P_3$                \\\\\n     n  &  $(\\lambda_n + \\mu_n) P_n$  &  $\\lambda_{n-1} P_{n-1} + \\mu_{n+1} P_{n+1}$  \\\\\n\\end{tabular}\n\\end{center}\n\n\n\nSetting flux out equal to flux in for state 0 gives $\\lambda_0 P_0 = \\mu_1 P_1$; substituting that into the equation for state 1 gives $\\lambda_1 P_1 = \\mu_2 P_2$, and continuing up the chain:\n\n$\\lambda_0 P_0 = \\mu_1 P_1$\n$\\lambda_1 P_1 = \\mu_2 P_2$\n$\\lambda_n P_n= \\mu_{n+1} P_{n+1}$\n\nWe can start solving everything in terms of $P_0$\n\n$P_1 = \\frac{\\lambda_0}{\\mu_1} P_0$\n$P_2 = \\frac{\\lambda_1}{\\mu_2} P_1$\n$P_n = \\frac{\\lambda_{n-1} \\cdots \\lambda_0}{\\mu_n \\cdots \\mu_1} P_0$\n\nWe know that $1= P_0 + \\sum_{n=1}^\\infty \\frac{\\lambda_{n-1} \\cdots \\lambda_0}{\\mu_n \\cdots \\mu_1} P_0$\n\nIn the birth-death model with constant rates ($\\lambda_n = \\lambda$, $\\mu_n = \\mu$, with $\\lambda < \\mu$):\n$P_n = \\left(\\frac{\\lambda}{\\mu}\\right)^n \\frac{1}{1+\\sum_{i=1}^{\\infty} \\left(\\frac{\\lambda}{\\mu}\\right)^i}$\n\nEven though this is an infinite sum, it is a geometric series, so it turns out that:\n$P_n= \\left(\\frac{\\lambda}{\\mu}\\right)^n \\left(1-\\frac{\\lambda}{\\mu}\\right)$\n\nWe can check this sums to one because $\\sum_{n=1}^\\infty p(1-p)^{n-1} = 1$\n\n\\end{document}\n", "meta": {"hexsha": "5d8e292249dd6f96b1bd255dc9d65a215ccf2ede", "size": 7578, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/scribe_notes_2016/lec11.tex", "max_stars_repo_name": "stephens999/hgen48600", "max_stars_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-03-19T18:02:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:15:28.000Z", "max_issues_repo_path": "docs/scribe_notes_2016/lec11.tex", "max_issues_repo_name": "stephens999/hgen48600", "max_issues_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-02-05T00:34:09.000Z", "max_issues_repo_issues_event_max_datetime": "2017-03-07T20:15:19.000Z", "max_forks_repo_path": "docs/scribe_notes_2016/lec11.tex", "max_forks_repo_name": "stephens999/hgen48600", "max_forks_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:59:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T23:09:29.000Z", "avg_line_length": 39.0618556701, "max_line_length": 288, "alphanum_fraction": 0.6451570335, "num_tokens": 2617, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458152, "lm_q2_score": 0.8031738034238806, "lm_q1q2_score": 0.5589755050715507}}
{"text": "\\section{Test Description and Success Criteria}\nThis test is located in \\texttt{simulation/dynamics/sphericalPendulum/UnitTest/\\newline\ntest\\_sphericalPendulum.py}. In this integrated test there are two spherical pendulums connected to the spacecraft hub.  There are two scenarios being tested: one with the time step being 0.01 and one with the time step 0.001. Both tests should show energy and momentum conservation but the 0.001 should show less integration error. \n\n\\section{Test Parameters}\n\nTo validate and test the code a simulation with two spherical pendulums has been run using the following input parameters:\n\\begin{align}\n\\begin{split}\nl_1=0.3 m, \\qquad m_1=20 kg, \\qquad \\dot{\\varphi}_{1,0}=0.01 rad/s, \\qquad \\dot{\\vartheta}_{1,0}= 0.05 rad/s \\\\\n\\bm{d}_1=\n\\begin{bmatrix}\n0.1 \\\\\n0.1 \\\\\n0.1\\\\\n\\end{bmatrix} m\n\\qquad\n\\qquad\n\\bm{\\hat{p}}_{0_1,1}=\n\\begin{bmatrix}\n\\sqrt{2}/2 \\\\\n0 \\\\\n\\sqrt{2}/2\\\\\n\\end{bmatrix}\n\\qquad\n\\bm{\\hat{p}}_{0_1,2}=\n\\begin{bmatrix}\n0 \\\\\n1 \\\\\n0\\\\\n\\end{bmatrix}\n\\qquad\n\\bm{\\hat{p}}_{0_1,3}=\n\\begin{bmatrix}\n-\\sqrt{2}/2 \\\\\n0 \\\\\n\\sqrt{2}/2\\\\\n\\end{bmatrix}\n\\end{split}\n\\end{align}\n\n\\begin{align}\n\\begin{split}\nl_2=0.4 m, \\qquad m_2=40 kg, \\qquad \\dot{\\varphi}_{2,0}=0.1 rad/s, \\qquad \\dot{\\vartheta}_{2,0}= 0.5 rad/s \\\\\n\\bm{d}_1=\n\\begin{bmatrix}\n0.1 \\\\\n0.1 \\\\\n0.1\\\\\n\\end{bmatrix} m\n\\qquad\n\\bm{\\hat{p}}_{0_2,1}=\n\\begin{bmatrix}\n1 \\\\\n0 \\\\\n0\\\\\n\\end{bmatrix}\n\\qquad\n\\bm{\\hat{p}}_{0_2,2}=\n\\begin{bmatrix}\n0 \\\\\n1 \\\\\n0\\\\\n\\end{bmatrix}\n\\qquad\n\\bm{\\hat{p}}_{0_2,3}=\n\\begin{bmatrix}\n0 \\\\\n0 \\\\\n1 \\\\\n\\end{bmatrix}\n\\end{split}\n\\end{align}\n\n\\begin{table}[htbp]\n\t\\caption{Error Tolerance - Note: Relative Tolerance is $\\textnormal{abs}(\\frac{\\textnormal{truth} - \\textnormal{value}}{\\textnormal{truth}}$)}\n\t\\label{tab:errortol}\n\t\\centering \\fontsize{10}{10}\\selectfont\n\t\\begin{tabular}{| c | c |} % Column formatting, \n\t\t\\hline\n\t\tTest   & Relative Tolerance \\\\\n\t\t\\hline\n\t\tEnergy and Momentum Conservation & 1e-8 \\\\\n\t\t\\hline\t\n\t\\end{tabular}\n\\end{table}\n\n\\section{Test Results}\n\nThe following figures show the conservation of the quantities described in the success criteria for each scenario. The conservation plots are all relative difference plots. All conservation plots show integration error which is the desired result. In the python test these values are automatically checked therefore when the tests pass, these values have all been confirmed to be conserved. An additional note: the angular momentum plots are plotting the change in the components of the angular momentum vector in the inertial frame. The individual components are not labeled because the goal is for each component to show conservation therefore the individual components do not have separate information needing to be specified.  
\n\n\\subsection{Time Step = 0.01}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInOrbitalAngularMomentumOneHundredth.pdf}}\n\t\\caption{Change in Orbital Angular Momentum Time Step = 0.01}\n\t\\label{fig:ChangeInOrbitalAngularMomentumTimeStep01}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInOrbitalEnergyOneHundredth.pdf}}\n\t\\caption{Change in Orbital Energy Time Step = 0.01}\n\t\\label{fig:ChangeInOrbitalEnergyTimeStep01}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInRotationalAngularMomentumOneHundredth.pdf}}\n\t\\caption{Change In Rotational Angular Momentum Time Step = 0.01}\n\t\\label{fig:ChangeInRotationalAngularMomentumTimeStep01}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInRotationalEnergyOneHundredth.pdf}}\n\t\\caption{Change In Rotational Energy Time Step = 0.01}\n\t\\label{fig:ChangeInRotationalEnergyTimeStep01}\n\\end{figure}\n\\clearpage\n\n\\subsection{Time Step = 0.001}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInOrbitalAngularMomentumOneThousandth.pdf}}\n\t\\caption{Change in Orbital Angular Momentum Time Step = 0.001}\n\t\\label{fig:ChangeInOrbitalAngularMomentumTimeStep001}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInOrbitalEnergyOneThousandth.pdf}}\n\t\\caption{Change in Orbital Energy Time Step = 0.001}\n\t\\label{fig:ChangeInOrbitalEnergyTimeStep001}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInRotationalAngularMomentumOneThousandth.pdf}}\n\t\\caption{Change In Rotational Angular Momentum Time Step = 0.001}\n\t\\label{fig:ChangeInRotationalAngularMomentumTimeStep001}\n\\end{figure}\n\\begin{figure}[htbp]\n\t\\centerline{\n\t\t\\includegraphics[width=0.8\\textwidth]{AutoTeX/ChangeInRotationalEnergyOneThousandth.pdf}}\n\t\\caption{Change In Rotational Energy Time Step = 0.001}\n\t\\label{fig:ChangeInRotationalEnergyTimeStep001}\n\\end{figure}\n\\clearpage\n", "meta": {"hexsha": "0db213c01db89d4ecf7459e7cac754bbbf94c292", "size": 4858, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/simulation/dynamics/sphericalPendulum/_Documentation/secTest.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/simulation/dynamics/sphericalPendulum/_Documentation/secTest.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/simulation/dynamics/sphericalPendulum/_Documentation/secTest.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.5034482759, "max_line_length": 731, "alphanum_fraction": 0.7591601482, "num_tokens": 1577, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.8031738057795402, "lm_q1q2_score": 0.5589754965810724}}
{"text": "%!TEX root = ../notes.tex\n\\section{February 2, 2022}\n\\subsection{Inverses mod \\texorpdfstring{$m$}{m}}\n\\recall Last time, we showed in \\cref{existence-of-inverse} that there exists an integer $b$ with with $a\\cdot b\\equiv 1\\mod m$ iff $\\gcd(a, m) = 1$.\n\n\\begin{claim}\n    We further claim that if such a $b$ exists, then it is unique mod $m$.\n\n    That is, if we have\n    \\begin{align*}\n        a\\cdot b_1\\equiv 1\\pmod{m} \\\\\n        a\\cdot b_2\\equiv 1\\pmod{m}\n    \\end{align*}\n    then we have that $b_1\\equiv b_2\\pmod{m}$.\n\\end{claim}\n\\begin{proof}\n    We consider $b_1 ab_2$. We have\n    \\[b_2\\equiv (b_1 a)b_2 = b_2(a b_2) \\equiv b_2\\]\n    all taking mod $m$.\n\\end{proof}\n\nHow, then, could we compute this inverse $b$ efficiently?\n\nRecall that last class, we used the extended Euclidean algorithm to compute the linear combination of $a$ and $m$ efficiently,\n\\begin{align*}\n    1 & = a\\cdot u + m\\cdot v          \\\\\n      & \\equiv a\\cdot \\boxed{u}\\mod{m}\n\\end{align*}\nwhere $u$ is $b$.\n\n\\subsection{Modular Arithmetic \\emph{continued}}\n\\begin{definition}[Ring of Integers mod $m$]\n    $\\ZZ/m\\ZZ = \\{0, 1, 2, \\dots, m-1\\}$ with operations $+, -, \\times \\pmod{m}$.\n\\end{definition}\n\\begin{example}\n    $\\ZZ/4\\ZZ = \\{0, 1, 2, 3\\}$. We have the following operation tables for $\\ZZ/4\\ZZ$:\n    \\[\\begin{array}[]{c|cccc}\n            + & 0 & 1 & 2 & 3 \\\\ \\hline\n            0 & 0 & 1 & 2 & 3 \\\\\n            1 & 1 & 2 & 3 & 0 \\\\\n            2 & 2 & 3 & 0 & 1 \\\\\n            3 & 3 & 0 & 1 & 2 \\\\\n        \\end{array}\\qquad\n        \\begin{array}[]{c|cccc}\n            \\times & 0 & 1 & 2 & 3 \\\\ \\hline\n            0      & 0 & 0 & 0 & 0 \\\\\n            1      & 0 & 1 & 2 & 3 \\\\\n            2      & 0 & 2 & 0 & 2 \\\\\n            3      & 0 & 3 & 2 & 1 \\\\\n        \\end{array}\\]\n\\end{example}\n\n\\begin{definition}[Group of Units mod $m$]\n    We have the set of units in $\\ZZ/m\\ZZ$ as\n    \\begin{align*}\n        (\\ZZ/m\\ZZ)^\\times & = \\{a\\in \\ZZ/m\\ZZ\\mid \\exists b \\text{s.t. }a\\cdot b \\equiv 1\\} \\\\\n                          & = \\{a\\in \\ZZ/m\\ZZ\\mid \\gcd(a, m) = 1\\}\n    \\end{align*}\n\\end{definition}\n\\begin{example}\n    $(\\ZZ/4\\ZZ)^\\times = \\{1, 3\\}$.\n\\end{example}\n\n\\begin{definition}[Euler Totient Function]\n    We have\n    \\begin{equation*}\n        \\varphi(m) = \\# (\\ZZ/m\\ZZ)^\\times\n    \\end{equation*}\n    which counts the number of units modulo $m$.\n\\end{definition}\n\\begin{example}\n    $\\varphi(4) = 2$.\n\\end{example}\n\nLet's investigate the properties of units. Let's say $a_1, a_2$ are units. Which of the following have to be units?\n\n\\begin{table}[h]\n    \\centering\n    \\renewcommand\\arraystretch{1.7}\n    \\begin{tabular}{c|l}\n                       & Does this have to be a unit?                                                \\\\ \\hline\n        $a_1\\cdot a_2$ & \\begin{minipage}[t]{0.7\\textwidth}\n                             \\ul{Yes}! \\vspace{0.5em}\n\n                             Since $\\gcd(a_1, m) = 1$ and $\\gcd(a_2, m) = 2$\n                             so we have $\\gcd(a_1a_2, m) = 1$. We also have $a_1b_1\\equiv 1\\mod m$\n                             and $a_2b_2\\equiv 1\\mod m$,\n                             we have $(a_1a_2)(b_2b_1)\\equiv 1\\mod m$.\n                             \\vspace{0.5em}\n                         \\end{minipage} \\\\ \\hline\n        $a_1 + a_2$    & \\ul{No}. 
\\subsection{Modular Arithmetic \\emph{continued}}\n\\begin{definition}[Ring of Integers mod $m$]\n    $\\ZZ/m\\ZZ = \\{0, 1, 2, \\dots, m-1\\}$ with operations $+, -, \\times \\pmod{m}$.\n\\end{definition}\n\\begin{example}\n    $\\ZZ/4\\ZZ = \\{0, 1, 2, 3\\}$. We have the following operation tables for $\\ZZ/4\\ZZ$:\n    \\[\\begin{array}[]{c|cccc}\n            + & 0 & 1 & 2 & 3 \\\\ \\hline\n            0 & 0 & 1 & 2 & 3 \\\\\n            1 & 1 & 2 & 3 & 0 \\\\\n            2 & 2 & 3 & 0 & 1 \\\\\n            3 & 3 & 0 & 1 & 2 \\\\\n        \\end{array}\\qquad\n        \\begin{array}[]{c|cccc}\n            \\times & 0 & 1 & 2 & 3 \\\\ \\hline\n            0      & 0 & 0 & 0 & 0 \\\\\n            1      & 0 & 1 & 2 & 3 \\\\\n            2      & 0 & 2 & 0 & 2 \\\\\n            3      & 0 & 3 & 2 & 1 \\\\\n        \\end{array}\\]\n\\end{example}\n\n\\begin{definition}[Group of Units mod $m$]\n    We have the set of units in $\\ZZ/m\\ZZ$ as\n    \\begin{align*}\n        (\\ZZ/m\\ZZ)^\\times & = \\{a\\in \\ZZ/m\\ZZ\\mid \\exists b \\text{ s.t. }a\\cdot b \\equiv 1\\} \\\\\n                          & = \\{a\\in \\ZZ/m\\ZZ\\mid \\gcd(a, m) = 1\\}\n    \\end{align*}\n\\end{definition}\n\\begin{example}\n    $(\\ZZ/4\\ZZ)^\\times = \\{1, 3\\}$.\n\\end{example}\n\n\\begin{definition}[Euler Totient Function]\n    We have\n    \\begin{equation*}\n        \\varphi(m) = \\# (\\ZZ/m\\ZZ)^\\times\n    \\end{equation*}\n    which counts the number of units modulo $m$.\n\\end{definition}\n\\begin{example}\n    $\\varphi(4) = 2$.\n\\end{example}\n\nLet's investigate the properties of units. Let's say $a_1, a_2$ are units. Which of the following have to be units?\n\n\\begin{table}[h]\n    \\centering\n    \\renewcommand\\arraystretch{1.7}\n    \\begin{tabular}{c|l}\n                       & Does this have to be a unit?                                                \\\\ \\hline\n        $a_1\\cdot a_2$ & \\begin{minipage}[t]{0.7\\textwidth}\n                             \\ul{Yes}! \\vspace{0.5em}\n\n                             Since $\\gcd(a_1, m) = 1$ and $\\gcd(a_2, m) = 1$,\n                             we have $\\gcd(a_1a_2, m) = 1$. Moreover, since $a_1b_1\\equiv 1\\mod m$\n                             and $a_2b_2\\equiv 1\\mod m$,\n                             we have $(a_1a_2)(b_2b_1)\\equiv 1\\mod m$.\n                             \\vspace{0.5em}\n                         \\end{minipage} \\\\ \\hline\n        $a_1 + a_2$    & \\ul{No}. We have counterexample $m = 4$: $1 + 1$ is not a unit.             \\\\ \\hline\n        $a_1 - a_2$    & \\ul{Also no}. For any $a$, $a - a = 0$, which is never a unit.\n    \\end{tabular}\n    \\renewcommand\\arraystretch{1.05}\n\\end{table}\n\n\\begin{definition}[Prime Number]\n    An integer $n\\geq 2$ is \\ul{prime} if its only (positive) divisors are $1$ and $n$.\n\\end{definition}\n\\begin{example}\n    Numbers like $2, 3, 5, 7, 11, 13, \\dots$.\n\\end{example}\n\nWhat if $m$ is a prime number? Then we have\n\\[(\\ZZ/m\\ZZ)^\\times = \\{1, 2, \\dots, m-1\\}\\]\nso we can divide by elements of $\\ZZ/m\\ZZ$, just like in $\\QQ, \\RR, \\CC$. We can divide by any nonzero element of $\\ZZ/m\\ZZ$. We call these \\ul{fields}!\n\n\\subsection{Fast\\emph{ish} Powering}\n\\begin{problem*}\n    How might we compute $g^a\\mod m$?\n\\end{problem*}\nA na\\\"ive solution might be\n\\begin{lstlisting}[language=Python]\ndef pow_mod(g, a, m): \n    return g ** a % m\n\\end{lstlisting}\nWhat if we tried to compute \\textsf{pow\\_mod(239418762304, 12349876234, 12394876123482783641)} or something of the like? Something like this\\dots\n\n\\begin{center}\n    \\includegraphics[height=5cm]{images/broken_laptop.png}\n\\end{center}\n\nWe could do something a bit more clever, like taking a mod every time we multiply:\n\\begin{lstlisting}[language=Python]\ndef pow_mod(g, a, m): \n    p = 1\n    for i in range(a): \n        p = (p * g) % m\n    return p\n\\end{lstlisting}\n\nYet we \\emph{still} couldn't do \\textsf{pow\\_mod(239418762304, 12349876234, 12394876123482783641)} since that takes an amount of time proportional to \\textsf{a}\\footnote{Which can become big\\dots}.\n\n\\begin{example}\n    Let's try to compute $3^{37} \\pmod{100}$ by hand.\n    \\begin{align*}\n         & 3^1                                   & \\equiv & \\ 3  \\mod{100} \\\\\n         & 3^2                                   & \\equiv & \\ 9  \\mod{100} \\\\\n         & 3^4 = (3^2)^2 = 9^2 = 81              & \\equiv & \\ 81 \\mod{100} \\\\\n         & 3^8 = (3^4)^2 = 81^2 = 6561           & \\equiv & \\ 61 \\mod{100} \\\\\n         & 3^{16} = (3^8)^2 \\equiv 61^2 = 3721   & \\equiv & \\ 21 \\mod{100} \\\\\n         & 3^{32} = (3^{16})^2 \\equiv 21^2 = 441 & \\equiv & \\ 41 \\mod{100}\n    \\end{align*}\n    Since $37 = 32 + 4 + 1$, we can simply do\n    \\[3^{37} = 3^{32} \\cdot 3^{4} \\cdot 3^{1} \\equiv 41\\cdot 81\\cdot 3 = 9963 \\equiv 63 \\mod{100}\\]\n\\end{example}
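\nThe repeated-squaring idea in this example translates directly into code (an illustrative version added here, not from the lecture; Python's built-in \\textsf{pow(g, a, m)} computes the same thing):\n\\begin{lstlisting}[language=Python]\ndef pow_mod(g, a, m):\n    p = 1\n    g = g % m\n    while a > 0:\n        if a & 1:            # this bit of a contributes g^(2^k)\n            p = (p * g) % m\n        g = (g * g) % m      # square: g, g^2, g^4, g^8, ...\n        a >>= 1\n    return p\n\nassert pow_mod(3, 37, 100) == 63\n\\end{lstlisting}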
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2054794521, "max_line_length": 198, "alphanum_fraction": 0.5374686267, "num_tokens": 2037, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.7956581049086031, "lm_q1q2_score": 0.5589737341084012}}
{"text": "% !TEX root = thesis.tex\n\n\\chapter{Incremental construction}\n\\label{ch:incremental-construction}\n\nOne more method to the extrusion of \\refch{ch:extrusion}, \\emph{incremental construction} exploits the property that an $n$-cell can be defined based on a set of $(n-1)$-cells known to form its complete (closed) boundary.\nIn practice, defining an $n$-cell in this manner is significantly more complex than doing so based on extrusion.\nHowever, unlike extrusion it permits the creation of cells of an \\emph{arbitrary shape}.\nIt can also be applied cell by cell in increasing dimension, starting with the construction of isolated 0-cells (embedded in $\\mathbb{R}^n$) first, and then constructing 2-cells, 3-cells and further based on their boundary, allowing for the \\emph{incremental construction} of objects of any dimension---hence the name given here.\n\nThis chapter describes an operation that permits the aforementioned process to be easily applied in practice using a topological data structure.\nBased on combinatorial maps (\\refse{ss:ordered-topological-models}) and their fundamental operations (\\refse{ss:operations-maps}), it connects the separate boundary $(n-1)$-cells together by computing the appropriate adjacency relationships between them, encapsulating low-level details such as the creation of new individual combinatorial elements and setting the correct orientation for the existing ones.\n\nThe chapter begins with some background on the operations and a description of the overall approach in \\refse{se:incremental-approach}.\nIt then explains the steps needed to create the cells of each dimension in \\refse{se:primitives}.\n\\refse{se:incremental-implementation} describes how the approach was implemented based on the Computational Geometry Algorithms Library (CGAL).\n\\refse{se:incremental-experiments} summarises some experiments based on this implementation, generating relatively large objects in up to 4D.\nFinally, \\refse{se:incremental-conclusions} concludes the chapter with the possibilities to use incremental construction to build higher-dimensional datasets.\n\nMost of this chapter is based on the paper:\n\\begin{itemize}\n\\papericaaincrementalconstruction%\n\\end{itemize}\n\n\\section{Background and overall approach}\n\\label{se:incremental-approach}\n\nBased on the Jordan-Brouwer separation theorem \\citep{Lebesgue11,Brouwer11}, it is known that a subset of space homeomorphic to an $(n-1)$-dimensional sphere\\footnote{A generalisation of the concept of a sphere in arbitrary dimensions.\nA 0-sphere is the pair of points, a 1-sphere is a circle, and a 2-sphere a sphere. 
As opposed to disks and balls, circles and spheres are hollow.} $S^{n-1}$ in the $n$-dimensional Euclidean space $\\mathbb{R}^n$ divides the space into two connected components: the \\emph{interior}, which is the region bounded by the sphere, and the \\emph{exterior}, which is the unbounded region in which the sphere is a hole.\nThis means that the principles of boundary representation as described in \\refse{ss:data-models} extend to higher dimensions, and so an $n$-cell in a cell complex can be described (and therefore constructed) based on a set of $(n-1)$-cells that are known to form its complete (closed) boundary\\footnote{This is true in practice.\nHowever, there are fractal-like pathological objects where this is not the case because their interior or exterior components are not homeomorphic to disks \\citep{Alexander24}.\nWhile they do exist, this type of object is certainly out of scope here.}.\n\nThe \\emph{incremental construction} method proposed in this chapter exploits this property by applying this process cell by cell in increasing dimension.\nIsolated vertices are first constructed, which are embedded in $\\mathbb{R}^n$ and are uniquely defined based on their coordinates.\nThese vertices can then be connected to form individual edges, or instead to directly form cycles of implicit edges representing faces, as in most of the typical GIS approaches described in \\refse{ss:formats}.\nSets of faces can be connected to form volumes, sets of volumes to form 4-cells and so on.\n\nIn a similar manner to extrusion, presented in \\refch{ch:extrusion}, this \\emph{incremental construction} process significantly reduces the conceptual complexity of defining and creating higher-dimensional objects.\nThe $(n-1)$-dimensional boundary of an $n$-cell is much easier to conceive than the original $n$-cell because it can be subdivided into multiple simpler $(n-1)$-cells, which can be individually described and constructed using the same method.\n\nHowever, applying this incremental construction process using a topological data structure is not that simple, as it involves many intricate small problems: finding the topological relationships between the cells, appropriately connecting them, keeping track of multiple connected components, avoiding the creation of duplicate cells (as part of the boundary of independently-described higher-dimensional cells), changing the orientation of cells on the fly (since those that have been created separately will often have incompatible orientations), and detecting non-manifold shapes (which would result in invalid structures), among others.\n\nThe incremental construction operator presented in this chapter solves all of the aforementioned issues efficiently.\nIt is based on $n$-dimensional combinatorial maps with linear geometries and it is fully dimension-independent.\nAs it generates all the incidence and adjacency relationships between $(n-1)$-cells in an $n$-dimensional cell complex, it can also be used to obtain these relationships when needed, such as when multiple datasets are combined into one or when a non-topological representation is instead used (\\eg\\ the Simple Features-like approach shown in \\refse{ss:nd-topology}).\n\nIn order to be efficient, the algorithm uses two basic techniques:\n(i) indexes on the lexicographically smallest vertex (\\refse{se:spatial-indexing}) of certain cells, which are used in order to keep track of individual cells (which might be disconnected) and to access cells efficiently; and\n(ii) signature-generating traversals for specific cells \\citep{Gosselin11}, which are used to efficiently compare whether two cells are equivalent by checking if it is possible to find an isomorphism $f$ that maps corresponding darts with equivalent ($\\beta$) or reversed ($\\beta^{-1}$) relations and which are embedded at the same location in $\\mathbb{R}^n$, as was shown in \\refse{se:ndqueries}.\n
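\nAs a schematic illustration of the first technique (mine, not the thesis implementation, which is written in C++ on top of CGAL), such an index can be thought of as a map from the lexicographically smallest vertex of a cell to the cells anchored there:\n\\begin{verbatim}\nfrom collections import defaultdict\n\nclass CellIndex:\n    # Index i-cells by their lexicographically smallest vertex.\n    # Schematic only: a 'cell' is any handle, and 'vertices' is the\n    # tuple of point coordinates on its boundary.\n    def __init__(self):\n        self._index = defaultdict(list)\n\n    def add(self, cell, vertices):\n        self._index[min(vertices)].append(cell)\n\n    def candidates(self, vertices):\n        # only cells sharing the smallest vertex need the full\n        # orientation-aware isomorphism comparison\n        return self._index.get(min(vertices), [])\n\\end{verbatim}\n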
\nThe incremental construction operation as described in this chapter is applicable only to perfect data.\nThe $(n-1)$-cells bounding an $n$-dimensional cell should thus form a manifold and perfectly match each other at their common $(n-2)$-dimensional boundaries.\nHowever, the method can be easily extended to handle some more complex configurations, such as by cleaning the input data using the methods described in \\refch{ch:cleaning}.\n\n\\section{Incremental construction of primitives per dimension}\n\\label{se:primitives}\n\nThe idea of the incremental construction algorithm is to construct cells individually, using lower-dimensional cells that have already been constructed in order to describe part of the boundary of a higher-dimensional cell.\nIn terms of the darts of a combinatorial map, this sometimes implies the reuse of existing darts, and sometimes the creation of new ones which are connected to the existing ones.\nThere is however no strict requirement that the cells are created in strictly increasing dimension, and so new cells can be easily added to an existing cell complex.\n\nDue to the need to embed 0-cells into a point in space, as well as the oriented nature of a combinatorial map, the incremental construction method is different for 0-, 1- and 2-cells.\n3-cells and those of higher dimensions follow a unified procedure.\nThese cases are therefore described separately below, using as an example the creation of the pair of adjacent tetrahedra shown in \\reffig{fig:2tetra}.\n\\marginpar{\n\\captionsetup{type=figure}\n\\centering\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/2tetra}} \\\\\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/2tetra-map}}\n\\caption[Two adjacent tetrahedra as a combinatorial map]{(a) Two adjacent tetrahedra and (b) their representation as a combinatorial map.}\n\\label{fig:2tetra}\n}\n\n\\subsection{Vertices (0-cells)}\n\\label{ss:incremental-vertices}\n\nThe aim of the process of 0-cell construction is to create an isolated dart and an associated point embedding structure for every 0-cell, while avoiding duplicate embedding structures with the same point coordinates.\nBy making point embeddings unique---as defined by their coordinates---, they can be used to compare 0-cells without checking an entire tuple of coordinates.\nMoreover, by maintaining an index of all point embeddings and links from every point embedding to a dart there, it is possible to use this index to access the combinatorial structure at that point.\nIn this manner, it is possible to find either an already free dart that can be reused, or a non-free dart that is part of a larger combinatorial structure, which can then be copied and its copy used instead.\n\nThus, calling a function $\\texttt{make\\_0\\_cell(}x_0, \\ldots, x_n\\texttt{)}$ with the coordinates $x_0, \\ldots, x_n$ describing an $n$-dimensional point, should use the index of 0-cells to return an existing dart embedded at that location (using an existing point embedding) if one exists, or a new dart embedded at that location (using a new point embedding) otherwise.\nIn the latter case, the new point embedding 
and its associated dart should be added to the index.\nThe result of calling this function for all the point coordinates of the vertices in \\reffig{fig:2tetra} is shown in \\reffig{fig:reconstruction-0}.\nNote that the result is identical whether the function is called once for every unique vertex, or multiple times (\\eg\\ once for every vertex in every face or volume).\n\\marginpar{\n\\captionsetup{type=figure}\n\\centering\n\\includegraphics[width=\\marginparwidth]{figs/reconstruction-0}\n\\caption[Constructing the 0-cells of \\reffig{fig:2tetra}]{Constructing the 0-cells of \\reffig{fig:2tetra} consists of creating exactly one dart embedded at each point.\nThese darts are isolated (\\ie\\ not linked by any $\\beta$ relations to the other darts).\nThe direction of the arrows shown here is arbitrary.}\n\\label{fig:reconstruction-0}\n}\n\n\\subsection{Edges (1-cells)}\n\\label{ss:incremental-edges}\n\nGenerally, it is best to skip the generation of 1-cells and proceed directly to the creation of 2-cells from sequences of points.\nIn order to represent an isolated edge in a combinatorial map not one but \\emph{two} darts are required: one embedded at each of the two vertices bounding the edge.\nMore precisely, taking into account the (arbitrary) orientation defined for the edge, the dart embedded at the \\emph{origin} of the edge is connected to the \\emph{destination} by $\\beta_1$, and the destination is connected to the origin by $\\beta_1^{-1}$.\nHaving two darts per edge is not a major problem, but it is wasteful as it often creates unnecessary darts and unnecessary connections between them that would later be deleted.\nFor instance, if a single face that uses all the edges is created from its bounding edges, half of the darts lose their original purpose (\\ie\\ to store a connection to their otherwise unlinked point embeddings) and will be eliminated\\footnote{These could be those embedded at the edges' origin, destination or a mixture of the two.}, which is accompanied with having to reset the $\\beta$ relationships pointing to them.\n\nIn addition, as shown in \\refse{ss:formats}, polygons can be easily described as an ordered sequence of vertices connected by implicit line segments between each consecutive pair.\nTherefore, incrementally constructing $1$-cells often brings no efficiency gains or practical benefits, and so it is normally best to skip dimension one, constructing 2D facets from a sequence of 0D points, 3D volumes from their 2D faces, 4-cells from their 3-cell faces, and so on.\n\nHowever, there is an exception to this rule, as isolated edges and polygonal lines sometimes do need to be explicitly represented.\nIn these cases, it is possible to simply follow the process described for 2-cells below, omitting sewing the last dart of the line segment or polygonal line to the first dart (and vice versa), and appropriately using the 1-cells index to find if a given 1-cell already exists, or otherwise to index the newly created 1-cell or 1-cells (in the case of a polygonal line).\n\n\\subsection{Faces (2-cells)}\n\\label{ss:incremental-faces}\n\nStarting from the unique 0-cells obtained from the procedure presented in \\refse{ss:incremental-vertices}, in order to create a 2-cell from a sequence of 0-cells, three general steps are needed:\n\\begin{figure*}[tb]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2}} \\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-a}\\label{subfig:reconstruction-2-a}} 
\\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-b}\\label{subfig:reconstruction-2-b}} \\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-c}}\\\\\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-d}} \\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-e}}\\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-f}} \\quad\n\\subfloat[]{\\includegraphics[width=0.22\\linewidth]{figs/reconstruction-2-g}\\label{subfig:reconstruction-2-g}}\\\\\n\\caption[Constructing the 2-cells of \\reffig{fig:2tetra}]{The construction of the 2-cells of \\reffig{fig:2tetra}. (a)~Initial configuration: one dart per vertex. (b)~After $b=\\texttt{make\\_2\\_cell(}1,2,3\\texttt{)}$. (c)~After $c=\\texttt{make\\_2\\_cell(}2,4,3\\texttt{)}$. (d)~After $d=\\texttt{make\\_2\\_cell(}1,4,3\\texttt{)}$. (e)~After $e=\\texttt{make\\_2\\_cell(}1,4,2\\texttt{)}$. (f)~After $f=\\texttt{make\\_2\\_cell(}1,3,5\\texttt{)}$. (g)~After $g=\\texttt{make\\_2\\_cell(}5,3,4\\texttt{)}$. (h)~Final result, after $h=\\texttt{make\\_2\\_cell(}4,5,1\\texttt{)}$.}\n\\label{fig:reconstruction-2}\n\\end{figure*}\n\n\\begin{enumerate}\n\n\\item\nThe procedure first checks if the 2-cell that is being requested already exists\\footnote{This test can also be done at the end of the process, which makes it possible to use the easy cell comparison method described in \\refse{se:ndqueries}.\nIn that case, the newly created cells (in the form of darts) that are found to be redundant should be deleted and removed from (or not added to) the indices.}.\nJust as in the creation of 0-cells, a 2-cell being requested might have already been created, either independently or as part of a 3-cell.\nThis can be easily checked using the index of 2-cells with the lexicographically smallest vertex of the 0-cells being passed, and finding if from a dart starting at the lexicographically smallest vertex and following the $\\beta_1$ or $\\beta_1^{-1}$ relations, one passes through the same point embeddings as those passed to this construction method.\n\n\\item\nEach of these 0-cells might be a $1$-free dart $d$ (\\ie\\ $\\beta_1(d) = \\emptyset$) and thus have no dart linked to it by $\\beta_1$, \\ie\\ $\\beta_1^{-1}(d) = \\emptyset$ or $\\forall d^\\prime \\nexists \\beta_1(d^\\prime) = d$, or if $\\beta_1^{-1}$ is not stored, $\\nexists d^\\prime \\mid \\beta_1(d^\\prime) = d$, \\emph{in which case it can be used directly}, such as in \\reffig{subfig:reconstruction-2-a}.\nIt can also be a non $1$-free dart or one that has a dart linked by $\\beta_1$ to it, which means that it is used as part of a $1$-cell (and possibly other higher dimensional cells).\n\nThe darts that are already used as part of $1$-cells are more difficult to handle, as they \\emph{can be reused only when the 1-cell they are part of is also part of the boundary of a geometrically identical $2$-cell as the one that will be constructed using this method}.\nThis needs to be tested in both possible orientations for the sequence of 0-cells, possibly resulting in the reversal of the orientation of some 1-cells.\nIf a given dart \\emph{cannot be reused} (because it is part of a 1-cell that is not part of the 2-cell to be created), a copy of it has to be created.\nThe result of this step is thus a list that contains for every 0-cell a reusable existing dart or a newly created dart.\n\n\\item\nThe darts obtained in the previous step are 1-sewn sequentially 
(using $\\beta_1$ and $\\beta_1^{-1}$), and the last is 1-sewn to the first, forming a closed cycle.\nDuring these sewing operations, the function checks whether every newly created edge (consisting of a pair of linked darts) is present in the index of 1-cells.\nIf a given 1-cell is already there, the new edge's dart is linked to its edge embedding structure.\nIf a 1-cell is not there, the edge is added to the index\\footnote{Here, depending on the kind of expected input data, it might be more convenient to index edges at their origin vertex rather than at their lexicographically smallest.}, a new edge embedding is created, and the edge's dart is linked to it.\nFinally, the newly created 2-cell is then added to the index of 2-cells.\n\nIn order to verify that the 2-cell being generated is a quasi-manifold, it is useful to assert that all darts that are $1$-free and have their $\\beta_1^{-1}$ relation set to $\\emptyset$ at the beginning of this third step, \\ie\\ those that were not copied, should continue to be $1$-free and have a $\\beta_1^{-1}$ set to $\\emptyset$ until they are sewn.\nThis condition, which is applied differently to the 1-cell, ensures that a vertex is not used twice within the same face, and as such, that the 2-cell is a quasi-manifold.\n\n\\end{enumerate}\n\n\\reffig{fig:reconstruction-2} shows the incremental construction procedure for all the 2-cells of \\reffig{fig:2tetra}, which consists of 7 2-cells and is thus obtained by 7 calls to the \\texttt{make\\_2\\_cell} method.\n\n\\subsection{Volumes (3-cells) and higher}\n\\label{ss:incremental-volumes}\n\nThe method to create $i$-cells from their $(i-1)$-cell boundaries is identical for all $i \\geq 3$, allowing for a fully dimension-independent function to be created.\nThis function consists of four general steps:\n\n\\begin{enumerate}\n\n\\item\nFirst of all, the procedure checks whether each $(i-1)$-cell that is passed is $(i-1)$-free or not.\nIf it is $(i-1)$-free, it is reused as part of the $i$-cell, such as in \\reffig{subfig:reconstruction-3-a}.\nIf it is not $(i-1)$-free, then it is already part of a different $i$-cell, so a copy of it is made \\emph{with a reversed orientation}, as shown in \\reffig{subfig:reconstruction-3-b}.\nThis copy can then be used for the construction of the $i$-cell.\n\nFor a given dart $d$ that is known to be part of the $(i-1)$-cell, this copy can be made by taking all the darts in the orbit $C = \\langle \\beta_1, \\ldots, \\beta_{i-2} \\rangle (d)$, creating a new set of darts $C^\\prime$, and inserting a new corresponding dart in $C^\\prime$ for every dart in $C$.\nUsing a function $f : C \\rightarrow C^\\prime$ that maps a dart in $C$ to its corresponding dart in $C^\\prime$, for every dart $c \\in C$ the $\\beta_1$ relations are set as $\\beta_1^{-1}\\left(f\\left(c\\right)\\right) = f\\left(\\beta_1\\left(c\\right)\\right)$ and $\\beta_1\\left(f\\left(c\\right)\\right) = f\\left(\\beta_1^{-1}\\left(c\\right)\\right)$. 
For the relations of higher dimensions, they are set such that $\\forall 2 \\leq j \\leq i-2$, $\\beta_j\\left(f\\left(c\\right)\\right) = f\\left(\\beta_j\\left(c\\right)\\right)$.\n\nNote that the combinatorial structures are copied with reversed orientation, as this ensures that the two (old and new) can be directly $i$-sewn together, if necessary.\n\n\\item\nA temporary \\emph{ridge index} of the $i$-cell is built, containing all the $(i-2)$-cells on the boundary of all $(i-1)$-cells that have been passed to the construction method---not all the $(i-1)$-cells in the cell complex---using their lexicographically smallest vertices, such that their index entries link to a usable dart in the $(i-2)$-cell with a point embedding at the lexicographically smallest vertex.\nThis dart should be one that is $(i-1)$-free, either because it was already free as part of the combinatorial structure of the passed $(i-1)$-cells, or because it is a copy (with reversed orientation) of one that was not $(i-1)$-free.\n\n\\item\nUsing the ridge index, $(i-1)$-cells (\\ie\\ facets) are $(i-1)$-sewn along their common $(i-2)$-cell boundaries (\\ie\\ ridges).\nFor this, for every $(i-2)$-cell on the boundary of an $(i-1)$-cell passed to the function, exactly one matching $(i-2)$-cell should be found in the index, which should be equivalent as compared using the method described in \\refse{se:ndqueries} and might be of the same or opposite orientation (but should not be part of the same $(i-1)$-cell or matched to itself).\nThis criterion ensures that a quasi-$(i-1)$-manifold will be constructed (a toy sketch of this matching step is given at the end of this section).\n\nIf the two $(i-1)$-cells are sewable along their common $(i-2)$-cell boundaries, \\ie\\ they have compatible orientations, they are directly sewn together.\nOtherwise, the orientation of the entire connected component of either of the two $(i-1)$-cells should be reversed.\nIf the two $(i-1)$-cells have incompatible orientations but are part of the same connected component, the cell that is being requested is unorientable, and thus cannot be represented using combinatorial maps.\nAlthough not discussed further here, note that as long as it does form a quasi-manifold it can however be represented using \\emph{generalised maps}.\n\n\\item\nUsing the index of $i$-cells, the newly constructed $i$-cell is compared to others to check if it had already been created, which is also done using the lexicographically smallest vertex of the cell.\nIf an equivalent $i$-cell (with the same or opposite orientation) is found, the function deletes the newly created darts of the cell and their corresponding index entries, and instead finally returns the existing $i$-cell.\nIf an equivalent $i$-cell is not found, the newly created cell is added to the index and returned.\n\n\\end{enumerate}\n\n\\reffig{fig:reconstruction-3} shows the incremental construction procedure for the two 3-cells of \\reffig{fig:2tetra}.\n\\marginpar{\n\\captionsetup{type=figure}\n\\centering\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/reconstruction-3-a}\\label{subfig:reconstruction-3-a}} \\\\\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/reconstruction-3-b}\\label{subfig:reconstruction-3-b}}\n\\caption[Constructing the 3-cells of \\reffig{fig:2tetra}]{The construction of the 3-cells of \\reffig{fig:2tetra}. (a)~After $\\texttt{make\\_3\\_cell(}b,c,d,e\\texttt{)}$. (b)~Final result, after $\\texttt{make\\_3\\_cell(}e,f,g,h\\texttt{)}$.}\n\\label{fig:reconstruction-3}\n}\n
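\nThe ridge-matching step mentioned above can be sketched as follows (a toy illustration added here, not the actual implementation; it ignores darts and orientations, representing each facet simply by the vertex sets of its ridges):\n\\begin{verbatim}\nfrom collections import defaultdict\n\ndef pair_facets_along_ridges(facets):\n    # facets: list of lists of ridges; each ridge is a frozenset of\n    # vertex coordinates. Returns (facet, facet, ridge) sew candidates.\n    ridge_index = defaultdict(list)\n    for f, ridges in enumerate(facets):\n        for ridge in ridges:\n            ridge_index[ridge].append(f)\n    sews = []\n    for ridge, owners in ridge_index.items():\n        if len(owners) != 2:    # quasi-manifold check\n            raise ValueError('ridge not shared by exactly two facets')\n        sews.append((owners[0], owners[1], ridge))\n    return sews\n\\end{verbatim}\n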
\n\\section{Implementation and complexity analysis}\n\\label{se:incremental-implementation}\n\nThe incremental construction algorithm has been implemented in C++11 and made available under the open source MIT licence at \\url{https://github.com/kenohori/lcc-tools}.\nLike the extrusion-related code (\\refse{se:extrusion-implementation}), it requires and builds upon the CGAL packages Combinatorial Maps and Linear Cell Complex, among others.\n% The first package provides data structures and algorithms to store and to efficiently iterate over the darts of a combinatorial map, and the second links the 0-embeddings to other CGAL types in order to store the geometry of a model.\n\nFor this prototype implementation, the lexicographically smallest vertex indices were implemented as C++ Standard Library \\texttt{maps}\\footnote{The exact implementation depends on the library that is used, but normally they are red-black trees.} for every dimension, using point embeddings as keys and \\texttt{lists} of darts as values.\nEach dart in a \\texttt{list} represents a separate cell (of the dimension of the index) that has that point as its lexicographically smallest.\nA custom comparison function is passed as a template so that the points are internally sorted in lexicographical order.\nAs a \\texttt{map} has $O(\\log n)$ search, insertion and deletion times and $O(n)$ space \\citep[\\S{}23.4]{ISO14882:2015}, all of these operations can be performed efficiently.\n\nIf a strictly incremental approach is followed, creating all $i$-cells before proceeding to the $(i+1)$-cells, it is not necessary to maintain indices for all the cells of all dimensions at the same time.\nThe only ones that are needed are those for: all $i$- and $(i-1)$-cells, as well as a temporary index for the $(i-2)$-cells on the boundary of the $(i-1)$-cells for the $i$-cell that is currently being constructed.\nThis means that only three indices---one of which likely covers only a small part of the dataset---need to be kept at a given time.\n\nMost of these indices can be easily built incrementally, adding new cells as they are created in $O(\\log c)$, with $c$ the number of cells of that dimension, assuming that the smallest vertex and a dart embedded there are kept during its construction.\nThe complexity of building any index of cells of any dimension is thus $O(c \\log c)$ and it uses $O(c)$ space.\n\nChecking whether a given cell already exists in the cell complex is more complex.\nFinding a list of cells that have a certain smallest vertex is done in $O(\\log c)$.\nIn an unrealistically pathological case, all existing cells in the complex could have the same smallest vertex, leading to up to $c$ quadratic time cell-to-cell comparisons just to find whether one cell exists.\nHowever, every dart is only part of \\emph{a single} cell of any given dimension\\footnote{This is true for the type of combinatorial maps with linear geometries that are handled here, but not necessarily so in the general case.}, so while every dart could conceivably be a starting point for the comparison, a single dart cannot be used as a starting point in more than one comparison, and thus a maximum of $d_{\\mathrm{complex}}$ identity comparisons will be made for \\emph{all} cells, with $d_{\\mathrm{complex}}$ being the total number of darts in the cell complex.\nFrom these $d_{\\mathrm{complex}}$ darts, two identity comparisons are started, one assuming 
that the two cells (new and existing) have the same orientation, and one assuming opposite orientations.\nEach of these involves a number of dart-to-dart comparisons in the canonical representations that \\emph{cannot} be higher than the number of darts in the smallest of the two cells.\nThe number of darts in the existing cell is unknown, but starting from the number of darts in the newly created cell ($d_{\\mathrm{cell}}$), it is safe to say that no more than $d_{\\mathrm{cell}}$ dart-to-dart comparisons will be made in each identity test, leading to a worst-case time complexity of $O(d_{\\mathrm{complex}} d_{\\mathrm{cell}})$.\nNote that this is similar to an isomorphism test starting at every dart of the complex.\n\nFinally, creating an $i$-cell from a set of $(i-1)$-cells on its boundary is more expensive, since the $(i-2)$-cell (ridge) index needs to be computed for every $(i-1)$-cell.\nFollowing the same reasoning as above, it can be created in $O(r \\log r)$ with $r$ the number of ridges in the $i$-cell, and uses $O(r)$ space.\nChecking whether a single ridge has a corresponding match in the index is done in $O(d_{\\mathrm{cell}} d_{\\mathrm{ridge}})$, with $d_{\\mathrm{cell}}$ the number of darts in the $i$-cell and $d_{\\mathrm{ridge}}$ the number of darts in the ridge to be tested.\nSince this is done for all the ridges in an $n$-cell, the total complexity of this step, which dominates the running time of the algorithm, is\n\n\\begin{equation*}\n\\displaystyle\\sum_{\\mathrm{ridges}} O(d_{\\mathrm{cell}} d_{\\mathrm{ridge}}) = O(d_{\\mathrm{cell}}^{2}).\n\\end{equation*}\n\nThe analyses given above give an indication of the computational and space complexity of the incremental algorithm as a whole.\nHowever, it is worth noting that in realistic cases the algorithm fares far better than in these worst-case scenarios: the number of cells that have a certain smallest vertex is normally far lower than the total number of cells in the complex, most of their darts are not embedded at the smallest vertex, and from these darts most identity comparisons will fail long before reaching the end of their canonical representations.\n\nFinally, one more nuance can affect the performance of this approach.\nWhen two connected components of darts with incompatible orientations have to be joined, the orientation of one of these has to be reversed.\nThis is easily done by obtaining all the connected darts of one of the connected components, preferably the one that is expected to be smaller, and reversing their orientation 2-cell by 2-cell.\nEvery dart $d$ in a 2-cell is then $1$-sewn to the previous dart in the polygonal curve of the 2-cell (\\ie\\ $\\beta_{1}^{-1}(d)$).\nA group of $n$ darts can thus have its orientation reversed in $O(n)$ time.\nThis is not a problem in practice since GIS datasets generally store nearby objects close together, but if a cell complex is incrementally constructed in the worst possible way, \\ie\\ creating as many disconnected groups as possible, this could have to be repeated for \\emph{every} cell of every dimension, creating a very inefficient process.\n\n\\section{Experiments}\n\\label{se:incremental-experiments}\n\n\\subsection*{Simple comparisons with valid primitives}\n\nThe CGAL Linear Cell Complex package provides functions to generate a series of primitives (line segments, triangles, quadrangles, tetrahedra and hexahedra) which are known to be created with correct geometry and topology, and can then be sewn together to generate more complex models.\nModels 
constructed in this manner were thus created independently and compared to those incrementally constructed using the method presented in this chapter.\nBy using the approach shown in \\refse{se:ndqueries}, it was possible to validate that they were equivalent.\n\n\\subsection*{A tesseract}\n\nIn order to present an example in more than three dimensions, a tesseract was also incrementally constructed using the approach presented in this chapter, which is shown in \\reffig{fig:tesseract-darts}.\nA tesseract is a 4-cell bounded by 8 cubical 3-cells, each of which is bounded by 6 square 2-cells.\nIt thus consists of one 4-cell, 8 3-cells, 24 2-cells, 32 1-cells and 16 0-cells.\n\\marginpar{\n\\captionsetup{type=figure}\n\\centering\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/tesseract2}} \\\\\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/tesseract3}} \\\\\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/tesseract}}\n\\caption[A tesseract as a combinatorial map]{A tesseract as a combinatorial map: (a) the darts of its 7 inner cubes, (b) the darts of its 8 cubes, and (c) all of its darts shown together.}\n\\label{fig:tesseract-darts}\n}\n\nUsing the approach presented here, an empty 0-cell index is first created.\nThen, the 16 vertices of the tesseract, each vertex $p_{i}$ being described by a tuple of coordinates $(x_{i},y_{i},z_{i},w_{i})$, were created as $p_{i}=\\texttt{make\\_0\\_cell(}x_{i},y_{i},z_{i},w_{i}\\texttt{)}$, which returns a unique dart embedded at each location that is also added to the 0-cell index.\nAt this point, the algorithm has built an unconnected cell complex consisting solely of 16 completely free darts.\n\nAn empty index of 2-cells is then created.\nEach of the tesseract's 24 square facets are then built based on their vertices as $f_{i}=\\texttt{make\\_2\\_cell(}p_{j},p_{k},p_{l},p_{m}\\texttt{)}$, which 1-sews (copies of) these darts in a loop and returns the dart embedded at the smallest vertex of the facet.\nThese are added to the index of 2-cells.\nSince every vertex is used in 6 different 2-cells, each dart would be copied 5 times.\nThe cell complex at this point thus consists of 24 disconnected groups of 4 darts each.\n\nNext, an empty index of 3-cells is created and the index of 0-cells can be deleted.\nFor each of the 8 cubical 3-cells, a function call of the form $v_{i}=\\texttt{make\\_3\\_cell(}f_{j},f_{k},f_{l},f_{m},f_{n},f_{o}\\texttt{)}$ is made.\nAt this point, an index of the 1D ridges of each face is built, which is used to find the 12 pairs of corresponding ridges that are then be 2-sewn together.\nWhen a 3-cell is created, it is added to the index.\nSince every face bounds two 3-cells, each dart is duplicated once again, resulting in a cell complex of 8 disconnected groups of 24 darts each.\n\nFinally, the tesseract is created with the function $t=\\texttt{make\\_4\\_cell(}v_{1},v_{2},\\dots,v_{8}\\texttt{)}$.\nThis can use the index of 2-cells to find the 24 corresponding pairs of facets that are then 3-sewn to generate the final cell complex.\n\nThe validity of this object was tested by checking whether it formed a valid combinatorial map, testing whether each cube was identical to a cube created with the Linear Cell Complex package, and manually verified the $\\beta$ links of its 192 darts.\n\n\\subsection*{2D+scale data}\n\nIn order to test the incremental construction algorithm and its applicability to data incorporating non-spatial characteristics, a few 2D+scale datasets from \\citet{Meijers11} 
using ATKIS data\\footnote{\\url{http://www.bkg.bund.de/nn_147094/SharedDocs/Download/Barrierefreie-Textversionen/EN-InfoMaterial/EN-Text-Vector-Data.html}} were also incrementally constructed.\nAs shown in \\reffig{fig:utm}, these datasets model the generalisation of a planar partition as a set of stacked prisms.\nBoth of the simple datasets of this figure were created successfully in under a second.\n\\marginpar{\n\\captionsetup{type=figure}\n\\centering\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/utm}} \\\\\n\\subfloat[]{\\includegraphics[width=\\marginparwidth]{figs/utm2}}\n\\caption[Simple 2D+scale datasets]{Simple 2D+scale datasets from \\citet{Meijers11}, which represent the generalisation of a planar partition by merging areas.\nThe dataset shown in (a) has realistic scale intervals such that the generalisation operations are performed at scales that depend on the area of a polygon, while the dataset in (b) uses equally spaced generalisation operations.}\n\\label{fig:utm}\n}\n\nA larger dataset, shown in \\reffig{fig:atkis} and consisting of 698 polyhedra with a total of 457 185 faces, was used as a benchmark to test the performance of the algorithm.\nThis dataset was processed without errors in roughly 30 minutes, including validation tests to ensure that every face created was a valid combinatorial map and that the faces of a volume formed a closed quasi-manifold.\n\\begin{figure}[bp]\n\\includegraphics[width=\\linewidth]{figs/atkis}\n\\caption[A large 2D+scale dataset]{A large 2D+scale dataset from \\citet{Meijers11}, which uses equally spaced generalisation operations that merge two adjacent polygons.}\n\\label{fig:atkis}\n\\end{figure}\n\nIn this latter dataset, an additional test was made using its first 250 polyhedra, which lie at the top of \\reffig{fig:atkis}, comparing the time used for the construction of vertices, faces and volumes with and without the use of indices.\nThe results of both methods were equivalent using the approach shown in \\refse{se:ndqueries}.\nOn average, using indices resulted in a faster vertex construction time by a factor of 2 200, faster face construction by a factor of 56 and faster volume construction by a factor of 38.\nHowever, as \\reffig{fig:construction-times} shows, these speed gains are not uniform throughout the 250 polyhedra.\nThe speed gained from using an index on the vertices increases roughly linearly with the number of constructed vertices, while that gained from the faces index remains roughly constant between a factor of 50 and 60, and that gained from the volumes index tends to slowly increase as well.\n\\begin{figure*}[tbp]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.33\\linewidth]{figs/construction-vertices}\\label{subfig:construction-vertices}}\n\\subfloat[]{\\includegraphics[width=0.33\\linewidth]{figs/construction-faces}\\label{subfig:construction-faces}}\n\\subfloat[]{\\includegraphics[width=0.33\\linewidth]{figs/construction-volumes}\\label{subfig:construction-volumes}}\n\\caption[Construction time speed-up from the use of indices]{Construction time speed-up from the use of indices on the (a) vertices, (b) faces and (c) volumes.}\n\\label{fig:construction-times}\n\\end{figure*}\n\n\\section{Conclusions}\n\\label{se:incremental-conclusions}\n\nCreating computer representations of higher-dimensional objects can be complex.\nCommon construction methods used in 3D, such as directly manipulating combinatorial primitives, or using primitive-level construction operations (\\eg\\ Euler operators 
\\citep{Mantyla88}), rely on our intuition of 3D geometry, and thus do not work well in higher dimensions.\nIt is therefore all too easy to create invalid objects, which then cannot be easily interpreted or fixed.\n\nThe incremental construction method proposed in this section follows a completely different approach, which has a solid underpinning in the Jordan-Brouwer separation theorem \\citep{Lebesgue11,Brouwer11}.\nBy exploiting the principles of boundary representation, it constructs an $i$-cell based on a set of its bounding $(i-1)$-cells.\nSince individual $(i-1)$-cells are easier to describe than the $i$-cell, it thus subdivides a complex representation problem into a set of simpler, more intuitive ones.\nThe method can moreover be incrementally applied to construct cell complexes of any dimension, starting from a set of vertices in $\\mathbb{R}^n$ as defined by an $n$-tuple of their coordinates, and continuing with cells of increasing dimension---optionally creating edges from vertices, then faces from vertices or edges, volumes from faces and so on.\n\nWhile a set of $(i-1)$-cells bounding an $i$-cell can be said to already form a complete representation of an $i$-cell, this is not sufficient for its representation in a topological data structure, which requires the topological relationships between the $(i-1)$-cells to be computed.\nThe incremental construction algorithm proposed in this section thus computes the relationships that are required for two data structures: generalised or combinatorial maps.\nHowever, these relationships are also applicable to most other data structures.\n\nThe algorithm is efficient due to its use of indices using the lexicographically smallest vertex of every cell per dimension, as well as an added index using the lexicographically smallest vertex of the bounding ridges of the cell that is being built.\nIt generates an $i$-cell in $O(d^{2})$ in the worst case, with $d$ the total number of darts in the cell.\nHowever, it fares markedly better in real-world datasets, as cells do not generally share the same lexicographically smallest vertex.\nBy checking all matching ridges within a cell's facets, the algorithm can optionally verify that the cell being constructed forms a combinatorially valid quasi-manifold, avoiding the construction of invalid configurations.\n\nA publicly available implementation of the algorithm has been made based on CGAL Combinatorial Maps, and its source code has been released under a permissive licence.\nIt is worth noting that it is one of very few general-purpose object construction methods that has been described and implemented for four- or higher-dimensional cell complexes.\n\nThe implementation has been tested with simple 2D--4D objects, as well as with large 2D+scale datasets from \\citet{Meijers11}.\nThe constructed objects were tested to verify that they form valid combinatorial maps.\nThe small objects were also manually inspected, visualised, and where possible, they were compared with equivalent objects known to be valid using the method discussed in \\refse{se:ndqueries}.\n\nWhile the incremental construction operation as described in this chapter works only with perfect data, small modifications could make it applicable to additional configurations.\nFor instance, merging adjacent cells that can be perfectly described by a single cell with linear geometry (\\eg\\ collinear edges and coplanar faces) as a preprocessing step can be used to handle cases where cells are subdivided only on one side of a 
boundary.\nSnapping points together can solve several common invalid configurations, such as those described in \\refch{ch:cleaning} and in \\citet{Diakite14}.\n\nFinally, the logical step forward at this point would be the use of the incremental construction algorithm for the creation of large higher-dimensional datasets.\nHowever, since to the best of my knowledge there are no large space-filling higher-dimensional datasets currently available---without even considering any validity criteria---such tests could not be conducted within this thesis.\nThe generation of such a higher-dimensional dataset is thus a necessary first step in this direction and is considered a related piece of future work.\n", "meta": {"hexsha": "8486ec3a48ff0a508f2f0ea7611d9a6947cc1cfa", "size": 40842, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "incremental-construction.tex", "max_stars_repo_name": "kenohori/thesis", "max_stars_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2016-03-04T13:55:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T07:28:24.000Z", "max_issues_repo_path": "incremental-construction.tex", "max_issues_repo_name": "kenohori/thesis", "max_issues_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-02-23T16:34:29.000Z", "max_issues_repo_issues_event_max_datetime": "2019-02-27T10:12:11.000Z", "max_forks_repo_path": "incremental-construction.tex", "max_forks_repo_name": "kenohori/thesis", "max_forks_repo_head_hexsha": "31c026184ba535a491d6a3981c29dd897cba84b5", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2018-10-11T04:08:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T23:58:06.000Z", "avg_line_length": 108.912, "max_line_length": 640, "alphanum_fraction": 0.783139905, "num_tokens": 10052, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540518, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5589737276015402}}
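As a concrete illustration of the indexing scheme described in the conclusions above, the following is a minimal sketch in Python. It is not the CGAL-based implementation; the cell representation and all names are hypothetical, and it only shows the idea of keying cells of each dimension on their lexicographically smallest vertex so that candidate cells can be retrieved without scanning the whole complex.

\begin{verbatim}
# Minimal sketch: index cells of each dimension on their
# lexicographically smallest vertex (hypothetical names; a cell is
# modelled here as a frozenset of vertex coordinate tuples).
from collections import defaultdict

def smallest_vertex(cell):
    return min(cell)  # lexicographic order on coordinate tuples

class CellIndex:
    def __init__(self):
        # dimension -> smallest vertex -> cells sharing that vertex
        self.by_dim = defaultdict(lambda: defaultdict(set))

    def add(self, dim, cell):
        self.by_dim[dim][smallest_vertex(cell)].add(cell)

    def candidates(self, dim, vertex):
        # only cells sharing the smallest vertex need to be compared,
        # which is why real-world performance stays well below O(d^2)
        return self.by_dim[dim].get(vertex, set())

index = CellIndex()
face = frozenset({(0, 0, 0), (1, 0, 0), (0, 1, 0)})
index.add(2, face)
query = frozenset({(0, 1, 0), (0, 0, 0), (1, 0, 0)})
assert query in index.candidates(2, smallest_vertex(query))
\end{verbatim}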
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{algpseudocode}\n\\usepackage{algorithm}\n\\usepackage[margin=1.0in]{geometry}\n\\title{SPR-based Error Estimation}\n\\author{Dan Ibanez and Brian Granzow}\n\\date{Jun 21, 2014}\n\\begin{document}\n\\maketitle\n\n\\section{Overview}\nAn SPR-based error estimation procedure is defined in detail\nin Jie Wan's thesis.\nWe have reimplemented this system using the latest APF\nlibraries and it supports the Albany adaptive cycle.\nThis document details the inner workings of this estimator.\n\nFrom a high level, the SPR-based error estimator works by\nrecovering a $p$-order field from what is essentially\na $(p-1)$-order input field, then it computes the\nerror at each element as a function of the difference\nbetween the input (less accurate) and output\n(hopefully more accurate) fields. \n\nThe input to SPR is actually a ``field\" $\\epsilon$ that contains values\nat integration points rather than nodes.\nThis is typically the result of some kind of gradient computation\n$\\epsilon = \\nabla f$ from a $p$-order field $f$.\nIn mechanics applications, $\\epsilon$ is a strain or stress quantity\nevaluated at integration points.\nNote that the choice of integration points is independent of\nanything else, although typically they should be chosen such\nthat they can integrate a field of order $p$.\n\n\\section{Field Recovery}\nThe SPR recovery process generates a $p$-order field $\\epsilon^*$ which\nhas values at the nodes for the same quantity that the\ninput $\\epsilon$ provides at integration points.\n\nThis recovery is done node-by-node, and the value at each\nnode is recovered by fitting an ordinary $p$-order polynomial\n$q (x, y, z) = v$ using input data $v_i = \\epsilon(x_i, y_i, z_i)$\nat integration points $(x_i,y_i,z_i)$ around the node.\nThe collection of integration points is done by building\na ``patch\" of mesh elements around the node.\n\nFitting a polynomial from data is done by solving a least-squares linear\nsystem $Ac \\approx b$.\nThe solution $c$ should minimize $\\|Ac - b\\|$, and represents\nthe coefficients of the polynomial $q$.\nFor example, if $p=1$:\n\\[q(x,y,z) = c_0 + c_1 x + c_2 y + c_3 z \\]\n\nEach data point becomes a row of the linear system:\n\n\\[q(x_i,y_i,z_i) = c_0 + c_1 x_i + c_2 y_i + c_3 z_i \\approx v_i \\]\n\nWhich can be expressed as a dot product:\n\n\\[q(x_i,y_i,z_i) = (1,x_i,y_i,z_i)(c_0,c_1,c_2,c_3)^T \\approx v_i \\]\n\nWhich yields the matrix equation:\n\n\\[\\begin{bmatrix}\n1 & x_0 & y_0 & z_0 \\\\\n1 & x_1 & y_1 & z_1 \\\\\n1 & x_2 & y_2 & z_2 \\\\\n\\text{...}\n\\end{bmatrix}\n\\begin{bmatrix} c_0 \\\\ c_1 \\\\ c_2 \\\\ c_3 \\end{bmatrix}\n\\approx\n\\begin{bmatrix} v_0 \\\\ v_1 \\\\ v_2 \\\\ ... 
\\end{bmatrix}\n\\]\n\nThe matrix $A$ on the left is then $m\\times n$ where $m$\nis the number of sample points and $n$ is the number of\npolynomial coefficients.\nWe would rather have an over-determined answer than\nan under-determined one, so we require $m \\geq n$.\nTypically, we get $m > n$, in which case the solution\nis not exact and we must choose coefficients to minimize\n\n\\[ \\left( \\sum_{i=0}^{m-1} (q(x_i,y_i,z_i) - v_i)^2 \\right)^\\frac12 \\]\n\nTo solve the least squares problem $Ac \\approx v$, we\nuse a QR factorization approach.\nWe use Householder reflectors to obtain a decomposition\n\n\\[QR = A, QQ^T = I, QRc \\approx v, Rc = Q^Tv\\]\n\nwhere $R$ is upper triangular and $Q$ is stored implicitly\nas a set of Householder vectors.\nThis allows us to compute $y = Q^Tv$ implicitly\n(which is more accurate)\nand then solve $Rc = y$, which is an $n\\times n$ problem,\nwith back substitution.\n\nHowever, the solution $c$ can only be accurately found\nif the matrix $A$ is well-conditioned; in particular, it must have\nfull rank, i.e. rank$(A) = n$.\nWe must be concerned with this because least-squares polynomial\nfitting is well-known for producing ill-conditioned matrices.\nThe QR solver provides a single point of inspection where\na division by zero will occur if $A$ is rank-deficient,\nso it will signal a failure in this case.\n\nDuring recovery, the sample points consist of all\nintegration points in the elements of the patch.\nTherefore, we must increase the number of elements\nin the patch so long as the resulting $A$ does\nnot satisfy the conditions $m \\geq n$ and rank$(A) = n$.\n\nThe elements $S$ in the patch are collected as follows\nuntil those conditions are met:\n\n\\begin{algorithm}\n\\caption{Patch building algorithm}\n\\begin{algorithmic}\n\\State let $o$ be the mesh entity to which the node is associated.\n\\State set $S$ to be the elements adjacent to $o$ $(S \\gets o\\{M^3\\})$.\n\\Loop\n\\For{$d$ from 2 down to 0}\n\\State add the elements adjacent by $d$-dimension entities to $S$\n$(S \\gets S\\{M^d\\}\\{M^3\\})$.\n\\If{$S$ has enough elements}\n\\State \\Return $S$\n\\EndIf\n\\EndFor\n\\EndLoop\n\\end{algorithmic}\n\\end{algorithm}\n\nFinally, once the patch is obtained, the integration points\n$(x_i,y_i,z_i)$ and the field values $\\epsilon(x_i,y_i,z_i) = v_i$\nfrom the input are used to build $A$ and $v$.\nNote that the field values $v_i$ may be tensors instead of scalars,\nin which case the analysis is solved one component at a time.\nEach component $j$ forms a set of scalar values $v^{(j)} = (v_0^{(j)},\\dots,v_{m-1}^{(j)})^T$\nand results in a set of polynomial coefficients $c^{(j)}$\nthat minimize $\\|Ac^{(j)} - v^{(j)}\\|$ and define a polynomial $q^{(j)}(x,y,z)$.\nWe then evaluate this polynomial at the location $(x_a,y_a,z_a)$\nof the node to obtain the field component value at\nthat node: $\\epsilon^*(x_a,y_a,z_a)^{(j)} = q^{(j)}(x_a,y_a,z_a)$.\nOne can also think of the coefficients $c$ as being\ntensors.\nNote that the matrix $A$ remains the same for each component;\nonly the right hand side $v^{(j)}$ changes to give a different\n$c^{(j)}$. 
This allows us to compute the QR decomposition\nof $A$ only once and to repeat only the computations of $y=Q^Tv^{(j)}$\nand $Rc^{(j)}=y$.\n\n\\section{Error Estimation}\n\nA per-element scalar error estimate\n$\\|e_\\epsilon\\|_e$ is computed using the\nnorm of the integrated difference between\ndirect and recovered gradient fields:\n\\[e_\\epsilon=\\epsilon - \\epsilon^*\\]\nThe current implementation uses entry-wise L2 norms on\n$3\\times 3$ matrices, which seems to agree with Wan's thesis.\nThe following notations describe the L2 norm integration\nover an element and over the whole mesh:\n\n\\[\\|A\\|_e=\n\\left(\n\\int_{\\Omega^e} A : A d\\Omega\n\\right)^\\frac12,\n\\|A\\|=\n\\left(\n\\sum_{e=1}^{n_e}\n\\int_{\\Omega^e} A : A d\\Omega\n\\right)^\\frac12\\]\n\nwhere $A:A$ is the Frobenius inner product.\nIf $\\|A\\|$ or $\\|A\\|_e$ are to be computed for\nsome matrix field $A$, it may be advantageous\nto store the field $A:A$.\n\n\\section{Size Field Computation}\n\nA per-element scalar size factor is computed based on\nthe per-element error following this formula:\n\n\\[h^\\text{new}_e = h^\\text{current}_e\n\\|e_\\epsilon\\|^{-\\frac{2}{2p+d}}_e\n\\left(\n\\frac\n{\\hat{\\eta}^2\\|\\epsilon^*\\|^2}\n{\\sum_{i=1}^n\\|e_\\epsilon\\|^\\frac{2d}{2p+d}_i}\n\\right)^\\frac{1}{2p}\n\\]\n\nTo which the following definitions apply:\n\n\\begin{center}\n\\begin{tabular}{ll}\n$d$ & element dimension \\\\\n$p$ & polynomial order of $f$ \\\\\n$\\eta$ & $\\frac{\\|e_\\epsilon\\|}{\\|\\epsilon\\|}$ \\\\\n$\\hat{\\eta}$ & threshold on $\\eta$ for adaptivity \\\\\n$n$ & number of elements in the mesh \\\\\n$h_e^\\text{current}$ & the current element size (longest edge) \\\\\n$h_e^\\text{new}$ & the desired element size for adaptivity \\\\\n\\end{tabular}\n\\end{center}\n\nFinally, a linear isotropic size field at\nthe vertices is recovered from the per-element\nsize factors $h_e^\\text{new}$ by a simple\nprocess of averaging the values from elements\nadjacent to a vertex.\n\n\\section{Target Size Field Computation}\n\nIn practical applications, it is desirable to target a specified\nnumber of elements $N$ during mesh adaptation to avoid over-coarsening\nor over-refinement, and to avoid exhausting the memory of the available\nmachine.\n\nBoussetta et al. (``Adaptive remeshing based on a posteriori error\nestimation for forging simulation'') define a size field specification\nby the following:\n\\[\nh^{new}_e = \\left(\n\\frac{\\theta_{uni}}{\\theta_e} \\right)\n^{( \\frac{2}{2p +d} )} h_e\n\\]\nwhere $\\theta_e$ is the element-wise contribution to the error,\n$\\theta_{uni}$ is the value of a uniformly distributed error\nacross the mesh, $h_e^{new}$ is the new element mesh size,\n$h_e$ is the old element mesh size, $p$ is the polynomial order\nof accuracy of the finite element method, and $d$ is the number\nof spatial dimensions of the mesh.\n\nLet $G$ be a globally computed error metric given by:\n\\[\nG = \\sum_{e=1}^n \\left( \\theta_e \\right) ^{(\\frac{2d}{2p+d})}\n\\]\nwhere $n$ is the total number of elements in the mesh.\nThen the value of the uniformly distributed error is given by\n\\[\n\\theta_{uni} = \\left( \\theta_{imp} \\right) ^{( \\frac{2p+d}{2p} )}\n\\left( G \\right) ^{-(\\frac{2p+d}{4p} )}\n\\]\nwhere $\\theta_{imp}$ is a prescribed accuracy parameter. 
\nTo generate a size field that will yield an adapted\nmesh with approximately $N$ elements, the accuracy parameter\n$\\theta_{imp}$ is given by\n\\[\n\\theta_{imp} = \\left( N \\right) ^{-(\\frac{p}{d})}\n\\left( G \\right)^{(\\frac{2p+d}{2d})}\n\\]\n\nAlgebraic manipulation of these quantities yields a simpler\nequivalent expression for the new element size field:\n\\[\nh_e^{new} = \\left(\\frac{G}{N}\\right)^{\\frac{1}{d}}\n\\left( \\theta_e \\right)^{-(\\frac{2}{2p+d})} h_e\n\\]\nThe element-level error contribution $\\theta_e$ is chosen to be\n\\[\n\\theta_e = \\left( \n\\int_{\\Omega^e}(\\epsilon - \\epsilon^*) : (\\epsilon - \\epsilon^*)\n\\text{d} \\Omega\n\\right) ^{\\frac{1}{2}}\n\\]\nas is done in the previous size field computation.\n\nIn addition to targeting a specified number of elements in the\noutput mesh, Boussetta et al. define user-input parameters\n$\\alpha$ and $\\beta$ to control the gradients of the mesh size\nfield such that\n\\[\n\\alpha < \\frac{h_e^{new}}{h_e} < \\beta\n\\]\n\n\\end{document}\n", "meta": {"hexsha": "5390f0d6b2a7faafa975c890dce548017f6bd533", "size": 9635, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "spr/spr.tex", "max_stars_repo_name": "cwsmith/core", "max_stars_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 138, "max_stars_repo_stars_event_min_datetime": "2015-01-05T15:50:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T01:09:58.000Z", "max_issues_repo_path": "spr/spr.tex", "max_issues_repo_name": "cwsmith/core", "max_issues_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 337, "max_issues_repo_issues_event_min_datetime": "2015-08-07T18:24:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T14:39:03.000Z", "max_forks_repo_path": "spr/spr.tex", "max_forks_repo_name": "cwsmith/core", "max_forks_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 70, "max_forks_repo_forks_event_min_datetime": "2015-01-17T00:58:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-13T04:58:20.000Z", "avg_line_length": 34.6582733813, "max_line_length": 87, "alphanum_fraction": 0.7187337831, "num_tokens": 2864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580952177051, "lm_q2_score": 0.7025300573952052, "lm_q1q2_score": 0.5589737273002541}}
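As a numerical illustration of the recovery step described in the Field Recovery section, the sketch below fits the $p=1$ polynomial on a patch with a QR factorization and reuses the factorization across components. It uses NumPy's Householder-based QR rather than the hand-written solver of the actual implementation; all names are illustrative.

\begin{verbatim}
# Sketch of per-node SPR recovery for p = 1: build A from the patch's
# integration points, factor it once, and reuse Q, R for every
# component v^(j). NumPy's qr is Householder-based.
import numpy as np

def fit_patch(points, values):
    # points: (m, 3) integration points; values: (m, k) components
    m = points.shape[0]
    A = np.hstack([np.ones((m, 1)), points])   # rows (1, x_i, y_i, z_i)
    if m < A.shape[1] or np.linalg.matrix_rank(A) < A.shape[1]:
        raise ValueError("patch too small or rank-deficient: grow it")
    Q, R = np.linalg.qr(A)                     # factor A once
    return np.linalg.solve(R, Q.T @ values)    # (4, k) coefficients c^(j)

def evaluate(coeffs, node):
    return np.concatenate([[1.0], node]) @ coeffs   # epsilon*(node)

pts = np.random.rand(12, 3)                         # toy patch
vals = 1.0 + pts @ np.array([[2.0], [3.0], [4.0]])  # a linear field
c = fit_patch(pts, vals)
print(evaluate(c, np.array([0.5, 0.5, 0.5])))       # recovers it exactly
\end{verbatim}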
{"text": "\\section{Semidecidability of the CF Regularity Problem}\n\\label{sec:semidecidability}\n\nWe know that the context-free regularity problem is undecidable\n\\cite{Pettorossi13}, due to its reduction to the undecidable Post Correspondence\nProblem \\cite{Hopcroft06}.\n\nWe prove the semidecidability and the undecidability of the context-free\nregularity problem.\n\n\\begin{theorem}\n\t\\label{thm:semidecidability-mnf}\n\tGiven a context-free grammar, the problem of saying whether or not there\n\texists an equivalent grammar in Marciani Normal Form is semidecidable and\n\tundecidable.\n\n\t\\begin{proof}\n\t\tLet us consider a context-free grammar in Chomsky Normal Form.\n\t\tLet us derive an equivalent grammar by unfolding every production,\n\t\tuntil getting the productions for the axiom only.\n\t\tNow, it' easy to check if the derived grammar is in Marciani Normal Form.\n\t\tSo, given any context-free grammar in Chomsky Normal Form, it is always\n\t\tpossible to check if there exists an equivalent context-free grammar in\n\t\tMarciani Normal Form.\n\n\t\tAs this possibility holds for grammars in Chomsky Normal Form, then it\n\t\tholds for every context-free grammar \\cite{Pettorossi13}.\n\t\\end{proof}\n\\end{theorem}\n\n\\begin{theorem}\n\t\\label{thm:semidecidability}\n\tThe context-free regularity problem is semidecidable and undecidable.\n\n\t\\begin{proof}\n\t\tFollows from the semidecidability and undecidability of the problem of\n\t\tsaying whether or not, given a context-free grammar, there exists an\n\t\tequivalent grammar in Marciani Normal Form, and from the regularity of\n\t\tthe language generated by a context-free grammar in Marciani Normal Form.\n\t\\end{proof}\n\\end{theorem}\n", "meta": {"hexsha": "58174a145761ca647c91ad2c24d945973c8dbb76", "size": 1631, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "marciani-normal-form/sec/semidecidability.tex", "max_stars_repo_name": "gmarciani/research", "max_stars_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-07-27T13:31:43.000Z", "max_stars_repo_stars_event_max_datetime": "2018-07-20T12:54:12.000Z", "max_issues_repo_path": "marciani-normal-form/sec/semidecidability.tex", "max_issues_repo_name": "gmarciani/research", "max_issues_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "marciani-normal-form/sec/semidecidability.tex", "max_forks_repo_name": "gmarciani/research", "max_forks_repo_head_hexsha": "7cc526fe7cd9916ceaf8285c4e4bc4dce4028537", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-02-17T13:30:49.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-17T13:30:49.000Z", "avg_line_length": 38.8333333333, "max_line_length": 80, "alphanum_fraction": 0.7952176579, "num_tokens": 424, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631541, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.5589737207933934}}
{"text": "\\documentclass[12pt]{article}\n\\bibliographystyle{apsrev4-1}%amsplain}\n%%a4paper compact\n\\usepackage[utf8x]{inputenc}\n\\usepackage{amsfonts, amssymb, amsmath, amscd, eucal, amsthm}\n\\usepackage{array}\n\\usepackage{graphicx}\n\\usepackage{subfig}\n\\usepackage{fancyhdr}\n\\usepackage{color}\n\\usepackage{extarrows}\n\\usepackage{enumerate}\n\\usepackage{multirow}\n\\usepackage[numbers]{natbib}\n\\usepackage{geometry}\n\\usepackage{CJK}\n\\geometry{left=3cm,right=3cm,top=2.5cm,bottom=2.5cm}\n\n%\\theoremstyle{plain}\n%\\newtheorem{thm}{Theorem}[chapter] % reset theorem numbering for each chapter\n%\\theoremstyle{definition}\n%\\newtheorem{defn}[thm]{Definition} % definition numbers are dependent on theorem numbers\n%\\newtheorem{exmp}[thm]{Example} \n\n\n\\pagestyle{fancy}\n%\\chea\n\\rhead{}\n\n%\\setlength{\\textwidth}{13.72cm}\n%\\setlength{\\oddsidemargin}{1cm}\n%\\setlength{\\evensidemargin}{1cm}\n\n\\graphicspath{{pics/}}\n%opening\n\\title{MNIST Experiment in Python}\n%\\author{Jianhong Chen}\n\n\\begin{document}\n\\maketitle\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{example}[theorem]{Example}\n\\begin{CJK*}{UTF8}{gbsn}\n\n\\section{Least square}\n\\subsection{Model}\nFor each data $x_i\\in \\mathbb{R}^{784}, i=1,...,60000$, the corresponding label is $y_i\\in \\mathbb{R}^{10}$ where $y_i=e_j$ if $x_i \\in A_j$ or the label for $x_i$ is j. \n\nIf we denote \nA=\n$\\begin{pmatrix}\n    x_{1}       & 1 \\\\\n    x_{2}       & 1 \\\\\n    ... &... \\\\\n    x_{60000}       & 1\\\\\n\\end{pmatrix} \\in \\mathbb{R}^{60000\\times 785}$ , Y=\n$\\begin{pmatrix}\n    y_{1}        \\\\\n    y_{2}       \\\\\n    ... \\\\\n    y_{60000}     \\\\\n\\end{pmatrix}\\in \\mathbb{R}^{60000\\times 10}$.\n\nWe want to find $\\theta=\\left[ {\\begin{array}{c}\n   W^T \\\\\n   b^T \\\\\n  \\end{array} } \\right]\\in \\mathbb{R}^{785\\times 10}$ to minimize $\\|Y-A\\theta\\|^2_2$.\n  \nThe empirical formulae gives\n\\begin{align}\n\\hat{\\theta}=(A^TA)^{-1}A^TY.\n\\end{align}\n\n\\subsection{Numerical result}\nFor the MNIST dataset, after $\\hat{\\theta}$ is obtained, we can classify $x_i$ based on \n\\begin{align}\n\\hat{y_i}=Wx_i+b.\n\\end{align}\nWe choose the digit with the maximum value as the predicted label for $x_i$. 
The training accuracy is 85.77$\\%$.\n\n\\section{Logistic regression }\n\\subsection{Model}\n\\begin{itemize}\n\\item X: the input image of a handwritten digit\n\\item Y: the true value of the digit\n\\item W,b: weight and bias\n\\item Y$\\_$pred=softmax(Wx+b)\n\\item Loss=cross$\\_$entropy(Y, Y$\\_$pred)+ regularization term\n\\item Optimization method: Adam\n\\end{itemize}\n\n\\subsection{Implementation }\n\\begin{itemize}\n\\item Parameter\n\\begin{enumerate}\n\\item learning rate = 0.001\n\\item training epochs=50\n\\item batch size =100\n\\item regularization coefficient =0.0001\n\\end{enumerate}\n\\item Result\n\\begin{enumerate}\n\\item no regularization: training accuracy=93$\\%$; test accuracy =92$\\%$\n\\item regularization coefficient =0.0001: training accuracy=93$\\%$; test accuracy =92$\\%$\n\\item regularization coefficient =0.001: training accuracy=92$\\%$; test accuracy =92$\\%$\n\\item no regularization+SGD: training accuracy= 88$\\%$; test accuracy =89$\\%$\n\\end{enumerate}\n\n\\end{itemize}\n\n\\section{One hidden layer }\n\\subsection{Model}\n\\begin{itemize}\n\\item X: the input image of a handwritten digit\n\\item Y: the true value of the digit\n\\item Hidden layer size=500\n\\item Loss=cross$\\_$entropy(Y, Y$\\_$pred)+ regularization term\n\\item Optimization method: Adam\n\\end{itemize}\n\n\\subsection{Implementation }\n\\begin{itemize}\n\\item Parameter\n\\begin{enumerate}\n\\item learning rate = 0.001\n\\item training epochs=50\n\\item batch size =100\n\\item regularization coefficient =0.0001\n\\end{enumerate}\n\\item Result\n\\begin{enumerate}\n\\item no regularization: training accuracy=100$\\%$; test accuracy =98.36$\\%$\n\\item regularization coefficient =0.0001: training accuracy=99.00$\\%$; test accuracy =98.04$\\%$\n\\item regularization coefficient =0.001: training accuracy=98.00$\\%$; test accuracy =97.67$\\%$\n\\item no regularization+SGD: training accuracy= 90$\\%$; test accuracy =90.54$\\%$\n\\end{enumerate}\n\n\\end{itemize}\n\n\n\n\n\n\n\n\n\\end{CJK*}\n\\end{document}\n", "meta": {"hexsha": "8cf3bd47240eefbb1d8bc5329540e8559dd45ea1", "size": 4166, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/mnist.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/mnist.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/mnist.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.2287581699, "max_line_length": 170, "alphanum_fraction": 0.7145943351, "num_tokens": 1336, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300449389326, "lm_q2_score": 0.7956581000631542, "lm_q1q2_score": 0.5589737207933934}}
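The least-squares model of Section 1 fits in a few lines; the sketch below uses a least-squares solve instead of forming $(A^TA)^{-1}$ explicitly, which is numerically safer but solves the same minimisation. Loading of the actual MNIST data is assumed and not shown, so random stand-in arrays are used.

\begin{verbatim}
# Sketch of the least-squares classifier: one-hot targets, a bias
# column, and argmax decoding (MNIST loading assumed, not shown).
import numpy as np

def fit_least_squares(X, labels, num_classes=10):
    A = np.hstack([X, np.ones((X.shape[0], 1))])   # bias column
    Y = np.eye(num_classes)[labels]                # one-hot targets e_j
    # minimises ||Y - A theta||_2^2, i.e. theta = (A^T A)^{-1} A^T Y
    theta, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return theta[:-1].T, theta[-1]                 # W (10 x 784), b (10,)

def predict(W, b, X):
    return np.argmax(X @ W.T + b, axis=1)          # digit with max score

X = np.random.rand(500, 784)                       # stand-in data
labels = np.random.randint(0, 10, size=500)
W, b = fit_least_squares(X, labels)
print((predict(W, b, X) == labels).mean())         # training accuracy
\end{verbatim}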
{"text": "\\section{Method}\n\\label{sec:method}\n\nIn the following, we first introduce the mathematical formulation of the weakly-supervised 3D shape completion problem. Subsequently, we briefly discuss the concept of variational auto-encoders (\\VAEs) \\cite{Kingma2013ARXIV} which we use to learn a shape prior.\n% -- which are used to learn a shape prior based on the reference shapes.\nFinally, we formally derive our proposed amortized maximum likelihood (\\AML) approach. The overall framework is also illustrated in Figure \\ref{fig:method}.\n\n\\subsection{Problem Formulation}\n\nIn a supervised setting, our task can be described as follows: given a set of partial observations $\\mathcal{X} = \\{x_n\\}_{n = 1}^N \\subseteq \\mathbb{R}^R$ and corresponding ground truth shapes $\\mathcal{Y}^* = \\{y_n^*\\}_{n = 1}^N \\subseteq \\mathbb{R}^R$, learn a mapping $x_n \\mapsto y_n^*$ that is able to generalize to previously unseen observations. Here, we assume $\\mathbb{R}^R$ to be a suitable vector representation of observations and shapes; in practice, we resort to occupancy grids or signed distance functions (SDFs) defined on regular grids, \\ie, $x_n, y_n^* \\in \\mathbb{R}^{H \\times W \\times D} \\simeq \\mathbb{R}^R$.\nSDFs represent the distance of each voxel's center to the closest point on the surface; we use negative signs for interior voxels.\nFor the (partial) observations, we write $x_n \\in \\{0, 1, \\uk\\}^R$ to make missing information explicit; in particular, $x_{n,i} = \\uk$ corresponds to unobserved voxels, while $x_{n,i} = 1$ and $x_{n,i} = 0$ correspond to occupied and unoccupied voxels, respectively.\n\nOn real data, \\eg, KITTI \\cite{Geiger2012CVPR}, supervised learning is often not possible as obtaining ground truth annotations is labor intensive (\\eg, \\cite{Menze2015CVPR,Xie2016CVPR}). Therefore, we target a weakly-supervised variant of the problem instead. Given observations $\\mathcal{X}$ and a set of reference shapes $\\mathcal{Y} = \\{y_m\\}_{m = 1}^M \\subseteq \\mathbb{R}^R$ both of the same, known object category, learn a mapping $x_n \\mapsto \\tilde{y}(x_n)$ such that the predicted shape $\\tilde{y}(x_n)$ matches the unknown ground truth shape $y_n^*$ as close as possible.\nHere, supervision is provided in the form of the known object category, allowing to derive the reference shapes from (watertight) triangular meshes; on real data, we also assume the object locations to be given in the form of 3D bounding boxes in order to extract the observations $\\mathcal{X}$.\n%\\green{In practice, the reference shapes $\\mathcal{Y}$ are derived from watertight, triangular meshes in order to obtain well-defined occupancy grids and SDFs.}\n%In the following, we assume that the shapes $\\mathcal{Y}$ are available as meshes from which we derive occupancy grids and SDFs; the observations are voxelized as described above.\n\n\\subsection{Shape Prior}\n\\label{subsec:method-prior}\n\nWe propose to use the provided reference shapes $\\mathcal{Y}$ to learn a model of possible 3D shapes over the latent space $\\mathcal{Z} = \\mathbb{R}^Q$ with $Q \\ll R$. The prior model is learned using a \\VAE where the joint distribution $p(y, z)$ decomposes into $p(y, z) = p(y | z)p(z)$ with $p(z)$ being a unit Gaussian, \\ie, $p(z) = \\mathcal{N}(z;0, I_Q)$ with $I_Q \\in \\mathbb{R}^{R \\times R}$ being the identity matrix. Sampling from the model is then performed by choosing $z \\sim p(z)$ and subsequently sampling $y \\sim p(y | z)$. 
For training the generative model, we also need to approximate the posterior $q(z | y) \\approx p(z | y)$, \\ie, the inference model. In the framework of the variational auto-encoder, both the so-called recognition model $q(z | y)$ and the generative model $p(y | z)$ -- corresponding to encoder and decoder -- are represented by neural networks. In particular,\n\\begin{align}\n%\\text{\\bf Encoder: }\nq(z | y) = \\mathcal{N}(z; \\mu(y), \\text{diag}(\\sigma^2(y)))\n\\end{align}\nwhere $\\mu(y), \\sigma^2(y) \\in \\mathbb{R}^Q$ are predicted using the encoder neural network and $p(y_i | z)$ is assumed to be a Bernoulli distribution when working with occupancy grids, \\ie, $p(y_i | z) = \\text{Ber}(y_i ; \\theta_i(z))$, while a Gaussian distribution is used when predicting SDFs, \\ie, $p(y_i | z) = \\mathcal{N}(y_i ; \\mu_i(z), \\sigma^2)$. In both cases, the parameters, \\ie, $\\theta_i(z)$ or $\\mu_i(z)$, are predicted using the decoder neural network. For SDFs, we neglect the variance ($\\sigma^2 = 1$) as it merely scales the training objective.\n\nIn the framework of variational inference, the parameters of the encoder and the decoder are found by maximizing the likelihood $p(y)$. In practice, the likelihood is often intractable. Instead, the evidence lower bound is maximized, resulting in the following loss to be minimized:\n\\begin{align}\n\\mathcal{L}_{\\text{VAE}}(w) = - \\mathbb{E}_{q(z |y)}[\\ln p(y|z)] + \\text{KL}(q(z | y) \\,\\|\\, p(z)),\n\\end{align}\nwhere $w$ are the weights of the encoder and decoder. The Kullback-Leibler divergence $\\text{KL}$ can be computed analytically; the expectation corresponds to a binary cross-entropy error for occupancy grids or a scaled sum-of-squares error for SDFs. The loss $\\mathcal{L}_{\\text{VAE}}$ is minimized using stochastic gradient descent (SGD). We refer to \\cite{Kingma2013ARXIV} for details.\n\n\\subsection{Shape Inference}\n\\label{subsec:method-inference}\n\nAfter learning the shape prior $p(y, z) = p(y| z) p(z)$, shape completion can be formulated as a maximum likelihood (\\ML) problem over the lower-dimensional latent space $\\mathcal{Z} = \\mathbb{R}^Q$. The corresponding negative log-likelihood, \\ie, $-\\ln p(y, z)$, can be written as\n%\n\\begin{align}\n\\mathcal{L}_{\\text{ML}}(z) &= - \\sum_{x_i \\neq \\uk} \\ln p(y_i = x_i | z) - \\ln p(z),\\label{eq:ml}\n\\end{align}\n%\nwhere $x_i \\neq \\uk$ expresses that the summation ranges only over observed voxels.\nAs the prior $p(z)$ is Gaussian, the corresponding negative log-probability $- \\ln p(z) \\propto \\|z\\|_2^2$ results in a quadratic regularizer. As before, the generative model $p(y | z)$ decomposes over voxels.\n% here, we can only consider actually observed voxels, \\ie, $x_i \\neq \\uk$.\nInstead of solving Equation \\eqref{eq:ml} for each observation $x \\in \\mathcal{X}$ individually, we follow the idea of amortized inference \\cite{Gersham2014COGSCI} and train an encoder $z(x;w)$ to \\emph{learn} \\ML. To this end, we keep the generative model $p(y|z)$ fixed and train the weights $w$ of the encoder $z(x;w)$ using the \\ML objective as loss:\n%\n\\begin{align}\n\\mathcal{L}_{\\text{AML}}(w) = - \\sum_{x_i \\neq \\uk} \\ln p(y_i = x_i | z) - \\lambda \\ln p(z),\\label{eq:aml}\n\\end{align}\n%\nwhere we added an additional parameter $\\lambda$ controlling the importance of the shape prior. The exact form of the probabilities $p(y_i = x_i | z)$ depends on the shape representation used. 
In the case of occupancy grids, this term results in a cross-entropy error (as both $y_i$ and $x_i$ are, for $x_i \\neq \\uk$, binary). However, when using SDFs, the term is not well-defined as $p(y_i | z)$ is modeled with a continuous Gaussian distribution, while the observations $x_i$ are binary, \\ie, it is unclear how to define $p(y_i = x_i | z)$. Alternatively, we could derive distance values along the rays corresponding to observed points (\\eg, following \\cite{Steinbrucker2013ICCV}). However, as illustrated in Figure \\ref{fig:method-sdf}, noisy rays lead to invalid observations along the whole ray. This problem can partly be avoided when relying on occupancy to represent the observations.\n\n\\begin{figure}\n    \\centering\n    \\vspace*{-0.25cm}\n    \\begin{subfigure}[t]{0.425\\linewidth}\n        \\vspace{0px}\n        \\centering\n        \\includegraphics[width=1\\linewidth]{fig/method_sdf_2}\n    \\end{subfigure}\n    \\hfill\n    \\begin{subfigure}[t]{0.5\\linewidth}\n        \\vspace{3px}\n        \\centering\n        \\hspace*{-12px}\n        \\includegraphics[width=1\\linewidth]{fig/method_sdf_1}\n    \\end{subfigure}\n    \\vspace*{-10px}\n    \\caption{{\\bf Left: Problem when Predicting SDFs.} Illustration of a ray ({\\color{red}red}) correctly hitting a surface ({\\color{blue}blue}) causing the SDF values and occupancy values in the underlying voxel grid to be correct (\\cf (a)). A noisy ray, however, causes all voxels along the ray to get invalid distances assigned (marked {\\colorbox{red!25}{red}}; \\cf (b)). When using occupancy, in contrast, only the voxels behind the surface are assigned invalid occupancy states (marked {\\colorbox{red!25}{red}}); the remaining voxels are labeled correctly (marked {\\colorbox{green!25}{green}}; \\cf (c)).\n    {\\bf Right: Proposed Gaussian-to-Bernoulli Transformation.} For $p(y_i) := p(y_i | z) = \\mathcal{N}(y_i;\\mu_i(z), \\sigma^2)$ ({\\color{blue}blue}), we illustrate the transformation discussed in Section \\ref{subsec:method-inference}, allowing us to use the binary observations $x_i$ (for $x_i \\neq \\uk$) to supervise the SDF predictions. This is achieved by transforming the predicted Gaussian distribution to a Bernoulli distribution with occupancy probability $\\theta_i(\\mu_i(z)) = p(y_i \\leq 0)$ ({\\color{blue}blue area}).}\n    \\label{fig:method-sdf}\n    \\vspace*{-0.25cm}\n\\end{figure}\n\nIn order to still work with SDFs (to achieve sub-voxel accuracy), we propose to define $p(y_i = x_i | z)$ through a simple transformation. In particular, as $p(y_i | z)$ is modeled as a Gaussian distribution $p(y_i | z) = \\mathcal{N}(y_i ; \\mu_i(z), \\sigma^2)$ where $\\mu_i(z)$ is predicted using the fixed decoder (and $\\sigma^2 = 1$) and $x_i$ is binary (for $x_i \\neq \\uk$), we introduce a mapping $\\theta_i(\\mu_i(z))$ transforming the predicted Gaussian distribution to a Bernoulli distribution with occupancy probability $\\theta_i(\\mu_i(z))$, \\ie, $p(y_i = x_i|z)$ becomes $\\text{Ber}(y_i = x_i; \\theta_i(\\mu_i(z)))$.\n%\n%ag: no need to discuss this I think\n%This is necessary, as the observation $x_i \\neq \\uk$ is inherently binary. Although we could derive distance functions from the observations, \\eg, along the rays corresponding to observed points \\ag{You had a paper here, right?}, even slight noise can result in large changes in the derived distances (along the whole ray). 
\n%\n%\\begin{align}\n%p(y_i = x_i | z) = \\text{Ber}(y_i = x_i; \\theta_i(\\mu_i(z)))\n%\\end{align}\nAs we defined occupied voxels to have negative sign in the SDF, we can derive the occupancy probability $\\theta_i(\\mu_i(z))$ as the probability of a negative distance:\n\\begin{align}\n\\theta_i(\\mu_i(z)) &= \\mathcal{N}(y_i \\leq 0; \\mu_i(z), \\sigma^2)\\label{eq:sdf}\\\\\n%& = \\int_{y_i'}^0 \\mathcal{N}(y_i' \\leq 0; \\mu_i(z), \\sigma^2) dy_i'\\\\\n&= \\frac{1}{2} \\left(1 + \\text{erf}\\left(\\frac{- \\mu_i(z)}{\\sigma \\sqrt{2}}\\right)\\right).\\label{eq:sdf-erf}\n\\end{align}\n%\nHere, $\\text{erf}$ is the error function which, in practice, is approximated following \\cite{Abramowitz1974}. Equation \\eqref{eq:sdf} is illustrated in Figure~\\ref{fig:method-sdf} where the occupancy probability $\\theta_i(\\mu_i(z))$ is computed as the area under the Gaussian bell curve for $y_i \\leq 0$. This per-voxel transformation can easily be implemented as a non-linearity layer and its derivative \\wrt $\\mu_i(z)$ is -- by construction -- a Gaussian distribution. Overall, this transformation allows us to predict SDFs while using binary observations.", "meta": {"hexsha": "f9bed24242d5b19d5df173a260ee56e1b1c9d636", "size": 11196, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sec_method.tex", "max_stars_repo_name": "davidstutz/cvpr2018-shape-completion", "max_stars_repo_head_hexsha": "fdd08d8ab4f1678c72ba676b71da259a0aa43a59", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 62, "max_stars_repo_stars_event_min_datetime": "2018-04-24T03:47:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T12:24:18.000Z", "max_issues_repo_path": "paper/sec_method.tex", "max_issues_repo_name": "davidstutz/cvpr2018-shape-completion", "max_issues_repo_head_hexsha": "fdd08d8ab4f1678c72ba676b71da259a0aa43a59", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sec_method.tex", "max_forks_repo_name": "davidstutz/cvpr2018-shape-completion", "max_forks_repo_head_hexsha": "fdd08d8ab4f1678c72ba676b71da259a0aa43a59", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2018-06-19T03:03:42.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-29T08:28:36.000Z", "avg_line_length": 121.6956521739, "max_line_length": 897, "alphanum_fraction": 0.727581279, "num_tokens": 3239, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8824278757303677, "lm_q2_score": 0.6334102498375401, "lm_q1q2_score": 0.558938861229982}}
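The Gaussian-to-Bernoulli transformation of the last subsection is a one-liner in practice. The sketch below is a minimal NumPy/SciPy version using the erf-based form of the normal CDF; it is not the authors' network layer, only the per-voxel mapping and the resulting cross-entropy term.

\begin{verbatim}
# Sketch of the per-voxel Gaussian-to-Bernoulli mapping: occupancy
# probability is the Gaussian mass on y_i <= 0 (negative SDF inside).
import numpy as np
from scipy.special import erf

def occupancy_probability(mu, sigma=1.0):
    # theta_i(mu_i(z)) = N(y_i <= 0; mu_i(z), sigma^2)
    return 0.5 * (1.0 + erf(-mu / (sigma * np.sqrt(2.0))))

def bernoulli_nll(mu, x_obs):
    # -ln p(y_i = x_i | z) for observed binary x_i
    theta = occupancy_probability(mu)
    return -(x_obs * np.log(theta) + (1 - x_obs) * np.log(1 - theta))

mu = np.array([-2.0, 0.0, 2.0])           # predicted signed distances
print(occupancy_probability(mu))           # ~[0.977, 0.500, 0.023]
print(bernoulli_nll(mu, np.array([1, 1, 0])))
\end{verbatim}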
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n% \\usepackage{graphicx}\n%     \\DeclareGraphicsExtensions{.png, .jpeg}\n% \\usepackage{caption}\n\\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}\n\n\\title{STAT 775: Machine Learning \\\\ HW 08}\n\\author{Terence Henriod}\n\\date{\\today}\n\n\\begin{document}\n\n\\clearpage            % All of\n\\maketitle            % this,\n\\thispagestyle{empty} % removes the page number from the title page\n\n\\begin{abstract}\nIn this assignment, we use Support Vector Machines (SVM) to classify hand-written digits that are typically hard to discern from one another for classifiers.\n\\end{abstract}\n\n\\newpage\n\\section{Exercise 01}\n\\subsection{Problem Statement}\nUsing the zip.data data set from the ESL website and perform classification using Support Vector Machines. Perform binary classification on $7$s vs. $9$s, $3$s vs. $5$s, and $0$s vs. $8$s. You may use a library since implementing the SVM training algorithm is complex.\n%\n%\n\\subsection{Results}\nThe SVM method performed pretty well. Without fiddling with which kernel type was used too much or any parameter values, classification performance was comparable to the better methods we have explored this semester. It appeared that the linear kernel was the best kernel.\n%\n\\subsubsection{The Tables}\nThe confusion matrices for classification are shown below.\\\\\n\\newline\n$7$s vs. $9$s:\\\\\n\\newline\n\\begin{tabular}{| c || c | c || c |}\n  \\hline\n  Actual/Prediction &   7 &   9 &\\% Correct \\\\\n  \\hline\n  \\hline\n  7                 & 136 &  11 & 92.5 \\\\\n  \\hline\n  9                 &   2 & 175 & 98.9 \\\\\n  \\hline\n  \\hline\n  Overall           &     &     & 96.0 \\\\\n  \\hline\n\\end{tabular}\\\\\n\\newline\n\\newline\n$3$s vs. $5$s:\\\\\n\\newline\n\\begin{tabular}{| c || c | c || c |}\n  \\hline\n  Actual/Prediction &   3 &   5 &\\% Correct \\\\\n  \\hline\n  \\hline\n  3                 & 152 &  14 & 91.6 \\\\\n  \\hline\n  5                 &  13 & 147 & 91.9 \\\\\n  \\hline\n  \\hline\n  Overall           &     &     & 91.7 \\\\\n  \\hline\n\\end{tabular}\\\\\n\\newline\n\\newline\n$0$s vs. $8$s:\\\\\n\\newline\n\\begin{tabular}{| c || c | c || c |}\n  \\hline\n  Actual/Prediction &   0 &   8 &\\% Correct \\\\\n  \\hline\n  \\hline\n  0                 & 351 &   8 & 97.8 \\\\\n  \\hline\n  8                 &  15 & 151 & 91.0 \\\\\n  \\hline\n  \\hline\n  Overall           &     &     & 95.6 \\\\\n  \\hline\n\\end{tabular}\\\\\n\\newline\n%\n\\subsubsection{The Code}\nThe following R code was used to perform classification:\n\\begin{verbatim}\n##############\n# STAT775: Machine Learning\n#\n# HW08\n# Exercise 01\n#\n# Using the zip.data data set from the ESL website, use Support Vector Machines\n# (SVM) to perfrom classification on traditionally hard to differentiate digits:\n# 7s vs. 9s, 3s vs. 5s, and 0s vs. 
8s.\n###############\n\n#\n# Initial Setup\n#\nsetwd(\"C:/Users/Terence/Documents/GitHub/STAT775/HW07\")\n\n#\n# Data Cleaning\n#\nDATA.PATH <- \"../DataSets/zip.data/\"\nZIP.TRAIN.FILE.NAME <- paste0(DATA.PATH, \"zip.train\")\nZIP.TEST.FILE.NAME <- paste0(DATA.PATH, \"zip.test\")\n\nconstruct.classification.frame <- function(data, targets) {\n  return(\n    data.frame(\n      'target' = targets,\n      'prediction' = rep(0, length(targets)),\n      data\n    )\n  )\n}\n\nget.data <- function(\n  train.file, test.file, class.1, class.2, num.principle.components = 50) {\n  # Args:\n  #   train.file:\n  #   test.file:\n  #   class.1:\n  #   class.2\n  #   num.principle.components:\n  #\n\n  train <- read.table(train.file)\n  train <- subset(train, V1 == class.1 | V1 == class.2)\n  for (i in 1:nrow(train)) {\n    train[i, 1] <- if (train[i, 1] == class.1) {1} else {2}\n  }\n\n  test <- read.table(test.file)\n  test <- subset(test, V1 == class.1 | V1 == class.2)\n  for (i in 1:nrow(test)) {\n    test[i, 1] <- if (test[i, 1] == class.1) {1} else {2}\n  }\n\n  pca.model <- prcomp(train[, -1])\n\n  train[, -1] <- predict(pca.model, train[, -1])\n  train <- train[, 1:(num.principle.components + 1)]\n  train <- construct.classification.frame(\n    targets = train[, 1],\n    data = train[, -1]\n  )\n\n  test[, -1] <- predict(pca.model, test[, -1])\n  test <- test[, 1:(num.principle.components + 1)]\n  test <- construct.classification.frame(\n    targets = test[, 1],\n    data = test[, -1]\n  )\n\n  return(list(\n    train = train,\n    test = test\n  ))\n}\n\n#\n# Data Presentation and Evaluation\n#\n\ncompute.confusion.matrix <- function(predictions, targets, num.classes = 2) {\n  confusion.matrix <- matrix(0, nrow = num.classes + 1, ncol = num.classes + 1)\n  for (i in 1:length(predictions)) {\n    confusion.matrix[targets[[i]], predictions[[i]]] <-\n      confusion.matrix[targets[[i]], predictions[[i]]] + 1\n  }\n  confusion.matrix[num.classes + 1, num.classes + 1] <-\n    sum(diag(confusion.matrix)) / sum(confusion.matrix)\n  for (i in 1:num.classes) {\n    confusion.matrix[i, num.classes + 1] <-\n      confusion.matrix[i, i] / sum(confusion.matrix[i, 1:num.classes])\n    confusion.matrix[num.classes + 1, i] <-\n      confusion.matrix[i, i] / sum(confusion.matrix[1:num.classes, i])\n  }\n  return(confusion.matrix)\n}\n\n#\n# Main\n#\nlibrary('e1071')\n\ntests <- list(\n  list(7, 9),\n  list(3, 5),\n  list(0, 8)\n)\n\ntest <- tests[[1]]\n\nfor (test in tests) {\n  data <- get.data(\n    train.file = ZIP.TRAIN.FILE.NAME,\n    test.file = ZIP.TEST.FILE.NAME,\n    class.1 = test[[1]],\n    class.2 = test[[2]],\n    num.principle.components = 50\n  )\n\n  svm.model <- svm(\n    x = data$train[, -(1:2)],\n    y = data$train[, 1],\n    type = 'nu-classification',\n    kernel = 'linear'\n  )\n  data$test$prediction <- (predict(svm.model, data$test[, -(1:2)]))\n\n  print(compute.confusion.matrix(\n    predictions = data$test$prediction,\n    targets = data$test$target,\n    num.classes = 2\n  ))\n}\n\\end{verbatim}\n%\n\\end{document}\n", "meta": {"hexsha": "c5bd42ff6c387545a8da0bae90160e7576c2f73b", "size": 5675, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "STAT775/HW08/HW08.tex", "max_stars_repo_name": "T-R0D/Past-Courses", "max_stars_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-03-13T17:32:26.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-27T16:51:22.000Z", 
"max_issues_repo_path": "STAT775/HW08/HW08.tex", "max_issues_repo_name": "T-R0D/Past-Courses", "max_issues_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-29T19:54:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-29T19:54:52.000Z", "max_forks_repo_path": "STAT775/HW08/HW08.tex", "max_forks_repo_name": "T-R0D/Past-Courses", "max_forks_repo_head_hexsha": "0edc83a7bf09515f0d01d23a26df2ff90c0f458a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2016-10-18T03:31:44.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-29T13:23:10.000Z", "avg_line_length": 25.110619469, "max_line_length": 272, "alphanum_fraction": 0.6144493392, "num_tokens": 1777, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.558923187342866}}
{"text": "\\section{Power analysis}\n\\label{sec:power}\n\nIn this section, we'll calculate the variance of our different annotation schemes.\n\n\\newcommand{\\xh}{\\hat{x}}\n\\newcommand{\\xb}{\\bar{x}}\n\\newcommand{\\yh}{\\hat{y}}\n\\newcommand{\\yb}{\\bar{y}}\n\\newcommand{\\rh}{\\hat{r}}\n\\newcommand{\\ph}{\\hat{p}}\n\nLet $\\sC$ be a corpus (our ``population'') that we wish to compute statistics over\nand $\\sP$ be the population of pooled contexts, \n  with $r \\eqdef \\Pr(c \\in \\sP \\given c \\in \\sC)$ (the recall of the pool).\n\nLet $E$ be a randomly drawn subset of $\\sC$ with $n_E$ samples for exhaustive annotation and\n    $P$ be a randomly drawn subset of $\\sP$ with $n_P$ samples for pooling annotation.\n\n\\begin{figure}\n  \\begin{subfigure}{0.49\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/variance-ratio-pool}\n    \\caption{\\label{fig:pool-for-corpus} The relative efficiency of using pooled annotations versus exhaustive annotation, assuming that there are twice as many pooled annotations as there are exhaustive annotations.}\n  \\end{subfigure}\n  \\hfill\n  \\begin{subfigure}{0.49\\textwidth}\n    \\includegraphics[width=\\textwidth]{figures/variance-ratio-unassessed}\n    \\caption{\\label{fig:pool-for-unassessed} The relative efficiency of using unassessed annotations versus exhaustive annotation, assuming that there are twice as many pooled annotations and ten times as many unassessed contexts as there are exhaustive annotations.}\n  \\end{subfigure}\n\\end{figure}\n\n\\begin{theorem}[Relative efficiency of using pooled annotations to compute corpus statistics] \n  \\label{thm:variance-pooled-recall}\n  % Define all the variables\n  Let $x$ be a statistic that we wish to compute on $\\sC$ (e.g.\\ recall),\n    $\\rh$ be an unbiased estimator of $r$ on $E$ with variance $\\sigma_r^2$,\n    $\\yh_p$ be an unbiased estimator of $x$ on $P$ with variance $\\sigma_y^2$,\n  then $\\xh_p = \\rh \\yh_p$ is an unbiased estimator of $x$ on $P$ with variance\n  $$\n  \\sigma_p^2 = \\sigma_r^2 \\sigma_y^2 + y^2 \\sigma_r^2 + r^2 \\sigma_r^2.\n  $$\n\n  Furthermore, if $\\xh_e$ be an unbiased estimator of $x$ on $E$ with variance $\\sigma_e^2$,\n  the relative efficiency of $\\xh_p$ with respect to $\\xh_e$ is,\n  \\begin{align*}\n  \\frac{\\sigma_e^2}{\\sigma_p^2}\n  &\\approx \\frac{1-x}\n      {\n      y (1-r) + \\frac{n_E}{n_P} r (1-y)\n      }.\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  \\begin{align*}\n    \\sigma_e^2 &= \\frac{x (1-x)}{n_E + 1} \\\\\n    \\sigma_r^2 &= \\frac{r (1-r)}{n_E + 1} \\\\\n    \\sigma_y^2 &= \\frac{y (1-y)}{n_P + 1} \\\\\n    \\sigma_p^2 \n    &\\approx y^2 \\frac{r (1-r)}{n_E + 1} + r^2 \\frac{y (1-y)}{n_P + 1} \\\\\n    &= x^2 \\left( \\frac{(1-r)/r}{n_E + 1} + \\frac{(1-y)}{n_P + 1} \\right).\n  \\end{align*}\n\\end{proof}\n\n\\begin{lemma}[Benefit of combining estimators]\n  Let $x_1, \\dots, x_n$ be independent and unbiased estimators for a statistic $x$ with variances $\\sigma_1^2, \\dots \\sigma_n^2$.\n  Let $z = \\sum_{i=1}^n \\alpha_i x_i$ where $\\sum_{i=1}^n \\alpha_i = 1$ be a combined estimator for $x$ with minimal variance.\n  Then, the relative efficiency, i.e ratio of variances, of $z$ and $x_1$ is,\n  $\\frac{\\sigma_z^2}{\\sigma_1^2} = \\sum_{i=1}^n \\frac{\\sigma_i^2}{\\sigma_1^2}$.\n\n  In other words, relative efficiency of a combined estimator with respect\n  to an original estimator is the sum of relative efficiencies of each component\n  estimator with respect to the original estimator. 
\n\\end{lemma}\n\n\n", "meta": {"hexsha": "530945b9cf1f67a8f9b28e0e24e8231d36977bfe", "size": 3363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/emnlp2017/extra-appendix.tex", "max_stars_repo_name": "arunchaganty/kbp-online", "max_stars_repo_head_hexsha": "9f8763d8f4bfb1fb8a01f1f4f506f56625dd38d8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-08-09T14:05:48.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-25T01:34:23.000Z", "max_issues_repo_path": "doc/emnlp2017/extra-appendix.tex", "max_issues_repo_name": "arunchaganty/kbp-online", "max_issues_repo_head_hexsha": "9f8763d8f4bfb1fb8a01f1f4f506f56625dd38d8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2017-01-19T23:18:18.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-23T18:57:54.000Z", "max_forks_repo_path": "doc/emnlp2017/extra-appendix.tex", "max_forks_repo_name": "arunchaganty/kbp-online", "max_forks_repo_head_hexsha": "9f8763d8f4bfb1fb8a01f1f4f506f56625dd38d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-08-08T09:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2018-07-09T09:12:43.000Z", "avg_line_length": 44.25, "max_line_length": 267, "alphanum_fraction": 0.6833184657, "num_tokens": 1093, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7461389986757758, "lm_q1q2_score": 0.5589231831142503}}
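The product-estimator variance in the theorem above is easy to sanity-check by simulation. The sketch below assumes a simple binomial sampling model for both estimators (an assumption of this illustration, not of the paper) and compares the empirical variance of $\hat{r}\hat{y}$ with the closed form.

\begin{verbatim}
# Monte-Carlo check of Var(r_hat * y_hat) = s_r^2 s_y^2
#   + y^2 s_r^2 + r^2 s_y^2 for independent unbiased estimators,
# under an assumed binomial sampling model.
import numpy as np

rng = np.random.default_rng(0)
r, y = 0.8, 0.3                  # illustrative recall and pool statistic
n_E, n_P = 500, 1000             # exhaustive and pooled sample sizes

r_hat = rng.binomial(n_E, r, size=200000) / n_E
y_hat = rng.binomial(n_P, y, size=200000) / n_P

s_r2 = r * (1 - r) / n_E
s_y2 = y * (1 - y) / n_P
predicted = s_r2 * s_y2 + y**2 * s_r2 + r**2 * s_y2
print((r_hat * y_hat).var(), predicted)   # the two should agree closely
\end{verbatim}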
{"text": "\\documentclass[a4paper,11pt]{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{underscore}\n\n\\newcommand{\\code}[1]{\\texttt{#1}}\n\\newtheorem{theorem}{Theorem}\n\n\\title{Drawing thermal ellipsoids with OpenGL}\n\\author{Luc J. Bourhis}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{From anisotropic displacements to ellispoids}\nIn this section, we will work in fractional coordinates.\nIgnoring anomalous scattering, the form factor $F(k)$ of an atom with an anisotropic displacement tensor $U$, sitting on the position $x$, and that has a scattering factor $f(k)$ reads\n\\begin{align}\n  F(k) = f(k) e^{k U k^T} e^{i 2\\pi k x},\n\\end{align}\nwhere $x$ is a 3-vector column of coordinates whereas $k$ is a 3-vector row of Miller indices. The associated electron density $\\rho(x)$ is the inverse Fourier transform of $F(k)$: it transforms the product of the thermal smoothing with the scattering into a convolution of their Fourier transforms,\n\\begin{align}\n  \\rho(x) = \\sum_k \\left[\\int \\rho_0(y) e^{(x-y)^T U^{-1} (x-y)} d^3y \n  \\right] e^{i 2\\pi k x},\n\\end{align}\nwhere $\\rho_0$ is the electron density without thermal smearing.\\footnote{and $y$ is another 3-vector column} The apparition of $U^{-1}$ is just a mathematical property of the Fourier transforms. Thus $p(y) \\ltimes e^{(y-x)^T U^{-1} (y-x)}$ can be viewed as a probability distribution that smoothes $\\rho_0(y)$. One may wish to display the surface corresponding to let's say 50\\% probability. These surfaces are quadrics of equations\n\\begin{align}\n  (y-x)^T U^{-1} (y-x) = r,\n  \\label{quadrics}\n\\end{align}\nfor some constant $r$ depending on the sought probability. This is actually the standard mathematical form for a quadrics equation. This quadrics is an ellispoid if and only if $U$ is positive-definite. This is the only case of interest in term of graphical display but we wish to have a fail-safe procedure when $U$ is non-positive definite (NPD).\n\n\\section{Drawing ellispsoids with OpenGL}\n\nFrom this point on, the crystallographic origin of $U$ does not matter anymore: we just deal with a symmetric matrix.\n\nFirst we move the problem to the origin with the change of variable\n\\begin{align}\n  z = y - x,\\\\\n  \\intertext{and the quadrics equation then becomes}\n  z^T U^{-1} z = r\n  \\label{quadrics:at:orig}\n\\end{align}\n\nThen we need to find a way to transform the problem into drawing a sphere because either GLU or GLUT provides tools to easily and efficiently draw that shape. The simplest idea is to seek a linear transformation that maps the points on the sphere $\\cal S$ of radius $r$ centered at the origin onto our ellipsoid $\\cal E$. Let $M$ be the matrix of that transformation: for any vectors $v$ lying on $\\cal S$, $z=Mv$ shall lie on $\\cal E$, i.e.\n\\begin{align}\n  r &= z^T U^{-1} z = v^T M^T U^{-1} M v,\\nonumber\\\\\n  \\intertext{and since $v^T v$ = r,}\n  1 &= \\frac{v^T M^T U^{-1} M v}{v^T v}\\nonumber.\n\\end{align}\nThis is the Rayleigh quotient of the matrix $M^T U^{-1} M$, whose minimum and maximum values as $v$ spans $\\cal S$ are respectively the minimum and maximum eigenvalue of that matrix. Thus this matrix is the identity matrix and we can conclude that $U=M M^T$.\n\nReciproquely, if $U=M M^T$ for some matrix $M$, then for any vector $z=Mv$ with $v$ on $\\cal S$, we have\n\\begin{align}\n  r &= v^T v = z^T M^{-T} M^{-1} z = z^T U^{-1} z,\\nonumber\n\\end{align}\nand therefore $z$ lies on $\\cal E$. 
Thus we can conclude that\n\\begin{theorem}\n  A linear transformation of matrix $M$ maps $\\cal S$ onto $\\cal E$ if and only if\n  \\begin{align}\n    U = M M^T.\n    \\label{mapping:condition}\n  \\end{align}\n\\end{theorem}\n\nIt shall be obvious that\n\\begin{theorem}\n  If $M$ is a solution of \\eqref{mapping:condition}, then $MQ$ is also a solution for any orthogonal matrix $Q$.\n\\end{theorem}\n\nTwo particular choices stand out from this infinite number of solutions, which we will now discuss. \n\n\\subsection{A scaling followed by a rotation}\nIn this section, $U$ is the matrix of the thermal displacement tensor in Cartesian coordinates.\nSince $U$ is a symmetric matrix, it can be diagonalised:\n\\begin{align}\n  U &= R \\Delta R^T,\\\\\n  \\intertext{where $\\Delta$ is a diagonal matrix and where $R$ is a rotation matrix. Thus a possible mapping satisfying \\eqref{mapping:condition} is}\n  M &= R \\Delta^\\frac{1}{2}.\n\\end{align} \n \nThe advantage of this method lies in its geometrical interpretation: $\\Delta^\\frac{1}{2}$ transforms the sphere into an ellipsoid with the correct dimensions but whose principal axes are the axes of the basis the coordinates $v$ are referred to; then $R$ rotates this ellipsoid into its correct position. As a result, drawing the great circles on the sphere that lie in the planes of equations\\footnote{we denote $v=\\begin{pmatrix}\n  v_1\\\\\n  v_2\\\\\n  v_3\n\\end{pmatrix}$} $v_1=0$, $v_2=0$, and $v_3=0$ will result in drawing the principal ellipses on the ellipsoid. Those are useful visual guides to assess its shape.\n\nIn the \\code{cctbx}, this is the method currently implemented in class \\code{proto\\_ellipsoid} in \\code{gltbx/quadrics.h} -- the change of frame is actually implemented in class \\code{ellipsoid_to_sphere_transform}. Bands are drawn about the great circles using textures created in class \\code{ellipsoid_principal_sections_texture}. This is also the method used in Olex2.\n\n\n\\subsection{The Cholesky factor}\n\nThe Cholesky decomposition of $U$ reads\n\\begin{align}\n  U = L L^T\n\\end{align}\nwhere $L$ is lower triangular. Thus $L$ is a possible solution of \\eqref{mapping:condition}.\nThe main advantage of this method is that the Cholesky decomposition algorithm is much faster than the eigenvalue decomposition used in the previous section, and the former is particularly trivial to implement for a $3\\times 3$ matrix. A further advantage is that it is possible to start with $U$ in fractional coordinates, which is the input one would get from a CIF or a .res file, thus saving the cost of the computation of the Cartesian $U$ that is required by the method described in the previous section.\n\nHowever, there is a drawback. Since $L$ is a mapping that does not preserve orthogonality, the 3 great circles on the sphere introduced in the previous section are not mapped onto the principal ellipses on the ellipsoid. Thus we completely lose the ability to draw those visual clues. 
If this is not deemed important, then the Cholesky method discussed in this section shall be preferred.\n\nThis is the method used by Coot.\n\n\\end{document}\n", "meta": {"hexsha": "85d9f41d38566f8c2d1ae75cad7c2e2c9b47cf36", "size": 6435, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "gltbx/adp-display.tex", "max_stars_repo_name": "rimmartin/cctbx_project", "max_stars_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 155, "max_stars_repo_stars_event_min_datetime": "2016-11-23T12:52:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T15:35:44.000Z", "max_issues_repo_path": "gltbx/adp-display.tex", "max_issues_repo_name": "rimmartin/cctbx_project", "max_issues_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 590, "max_issues_repo_issues_event_min_datetime": "2016-12-10T11:31:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T23:10:09.000Z", "max_forks_repo_path": "gltbx/adp-display.tex", "max_forks_repo_name": "rimmartin/cctbx_project", "max_forks_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 115, "max_forks_repo_forks_event_min_datetime": "2016-11-15T08:17:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T15:30:14.000Z", "avg_line_length": 60.7075471698, "max_line_length": 502, "alphanum_fraction": 0.7418803419, "num_tokens": 1772, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5589231830651412}}
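Both mappings discussed above are a few lines with a linear-algebra library. The sketch below (NumPy, with an illustrative positive-definite $U$) builds $M$ via the eigendecomposition route and via the Cholesky route, and checks that each satisfies $U = MM^T$ and sends sphere points onto the ellipsoid.

\begin{verbatim}
# Sketch: two solutions of U = M M^T for an illustrative Cartesian U.
import numpy as np

U = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.02]])     # positive-definite example

evals, R = np.linalg.eigh(U)           # U = R Delta R^T
M1 = R @ np.diag(np.sqrt(evals))       # scaling followed by a rotation

M2 = np.linalg.cholesky(U)             # lower-triangular L with U = L L^T

for M in (M1, M2):
    assert np.allclose(M @ M.T, U)     # both map S onto E

v = np.array([1.0, 0.0, 0.0])          # on the unit sphere (r = 1)
z = M2 @ v
print(z @ np.linalg.solve(U, z))       # z^T U^{-1} z = 1: on the ellipsoid
\end{verbatim}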
{"text": "  \n\\section{Wavelet Analysis of Time Series}  \n\n\\subsection{Introduction}\n\nSeveral approaches have been proposed for time \nseries  prediction by the WT, based on a   \nneural network \\cite{starck:gonghui99,pred:bashir00},\nKalman filtering \\cite{pred:cristi00,pred:hong98}, or\nan AR (autoregression) model \\cite{pred:soltani00}.\nIn~\\cite{starck:gonghui99,pred:soltani00} the undecimated Haar transform\nwas used. This choice of the Haar transform \nwas motivated by the fact that the wavelet\ncoefficients are calculated only from data obtained previously in time,\nand the choice of an undecimated wavelet transform avoids aliasing problems.\n \nThe \\`a trous wavelet transform with a wavelet function related to a spline \nfunction, as described earlier, is not consistent with a directed\n(time-varying) data stream. We now keep the wavelet function, but alter the \nwavelet transform, to make of it a multiscale transform\nwhich is appropriate for a data stream. We consider a signal s(1), \ns(2),..., s(n), where n is the present time-point.\n\\begin{enumerate}\n\\item For index k sufficiently large, carry out an \\`a trous wavelet transform \n on $\\{s(1), s(2), ... , s(k)\\}$.\n\\item Retain the detail coefficient values, and the continuum value, for the \n   kth time-point only (cf. equation 8): $w_{1,k},w_{2,k},w_{J,k},c_{J,k}$.\n     Note that summing these values gives $s(k)$.\n\\item If k is less than n, set k to k+1 and return to Step 1.\n\\end{enumerate}\n\nThis produces an additive decomposition of the signal, which is similar to \nthe \\`a trous wavelet transform decomposition\nwith the $B_3$ spline on $\\{s(1),s(2),...,s(k),...,s(n)\\}$. The computational time \nis evidently greater, $O(n^2)$ as against $O(n)$. \n\nWe have not touched on an important matter in regard to equation 2: how \nto handle signal boundaries. Although other strategies could be\nenvisaged, we use a mirror approach. This is tantamount, of course, to \nredefining the discrete filter associated with\nthe scaling function in the signal boundary region; and to redefining \nthe associated wavelet function in this region. This strategy is of\nparticular relevance when we work on an ordered data stream. We hypothesize \nfuture data based on values in the immediate past. Not\nsurprisingly there is discrepancy in fit in the succession of scales, which \ngrows with scale as larger numbers of immediately past values\nare taken into account.  \n\n\\subsection{The redundant Haar wavelet transform}\n\n\\begin{figure}[htb]\n\\centerline{\n\\vbox{ \n\\psfig{figure=fig_haartrous.ps,bbllx=1cm,bblly=20cm,bburx=19.cm,bbury=24.5cm,width=15cm,height=4.5cm,clip=}\n}}\n\\caption{This figure shows which pixels of the input signal are used\nto calculate the last wavelet coefficient in the different scales.}\n\\label{fig_haar}\n\\end{figure}  \nThe Haar wavelet transform was first described in the early years of this \ncentury and is described in almost every text on the wavelet\ntransform. As already mentioned, the asymmetry of the wavelet function \nused makes it a good choice for edge detection, i.e.\\ localized\njumps. The usual Haar wavelet transform, however, is a decimated one. \nWe now develop a non-decimated or redundant version of this\ntransform. 
This will be an \`a trous algorithm, but with a different pair of scaling and wavelet functions compared to those used previously.

The non-decimated Haar algorithm is exactly the same as the \`a trous algorithm, except that the low-pass filter $h$, $(\frac{1}{16}, \frac{1}{4}, \frac{3}{8}, \frac{1}{4}, \frac{1}{16})$, is replaced by the simpler filter $(\frac{1}{2},\frac{1}{2})$. Note that $h$ is now non-symmetric. Consider the creation of the first wavelet resolution level: it is created by convolving the original signal with $h$. Then:
\begin{eqnarray}
c_{j+1,l} = 0.5( c_{j,l-2^j} +  c_{j,l})
\end{eqnarray}
and
\begin{eqnarray}
w_{j+1,l} =  c_{j,l}  - c_{j+1,l}
\end{eqnarray}
At any time point $l$, we never use information after $l$ in calculating the wavelet coefficient. Figure \ref{fig_haar} shows which pixels of the input signal are used to calculate the last wavelet coefficient in the different scales. A wavelet coefficient at a position $t$ is calculated from the signal samples at positions less than or equal to $t$, but never larger.
% Examples of the \`a trous Haar transform.
% the Haar \`a trous wavelet transform, will be seen later.

\subsection{Autoregressive Multiscale Prediction}
\label{sect_AR_pred}
\subsubsection{Stationary signal}

\begin{figure}[htb]
\centerline{
\vbox{
\psfig{figure=fig_haarpred.ps,bbllx=1cm,bblly=20cm,bburx=19.cm,bbury=24.5cm,width=15cm,height=4.5cm,clip=}
}}
\caption{Wavelet coefficients which are used for the prediction of the next value.}
\label{fig_haarpred}
\end{figure}
Assuming a stationary signal $D = (d_1, \dots, d_N)$, the AR (autoregressive) multiscale prediction model is:
\begin{eqnarray}
d_{t+1} = \sum_{j=1}^J \sum_{k=1}^{A_j} a_{j,k} w_{j,t-2^{j}(k-1)}  +
\sum_{k=1}^{A_{J+1}} a_{J+1,k} c_{J,t-2^{J}(k-1)}  + \epsilon(t+1)
\end{eqnarray}
where ${\cal W} = \{w_{1}, \dots, w_J, c_J\}$ represents the Haar \`a trous wavelet transform of $D$ ($D = \sum_{j=1}^J w_j + c_J$). For example, choosing $A_j = 1$ for all resolution levels $j$ leads to the equation
\begin{eqnarray}
d_{t+1} = \sum_{j=1}^J  a_{j} w_{j,t} + a_{J+1} c_{J,t}  + \epsilon(t+1)
\end{eqnarray}
Figure~\ref{fig_haarpred} shows which wavelet coefficients are used for the prediction using $A_j = 2$ for all resolution levels $j$, and a wavelet transform with five scales (four wavelet scales + the smoothed array). In this case, we can see that only ten coefficients are used, while low resolution information is still taken into account.
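For illustration only (this is not the mr1d code; the names are ours), a minimal Python sketch of the redundant Haar decomposition defined by the two recurrences above, which produces exactly the coefficients $w_{j,t}$ and $c_{J,t}$ consumed by this prediction model:

\begin{verbatim}
import numpy as np

def redundant_haar(signal, num_scales):
    # Haar a trous transform: returns (w_1..w_J, c_J)
    # with sum_j w_j + c_J == signal (additive decomposition).
    c = np.asarray(signal, dtype=float)
    details = []
    for j in range(num_scales):
        step = 2 ** j
        c_next = c.copy()               # the first 2^j samples are kept
        # c_{j+1,l} = 0.5 * (c_{j,l-2^j} + c_{j,l})
        c_next[step:] = 0.5 * (c[:-step] + c[step:])
        details.append(c - c_next)      # w_{j+1,l} = c_{j,l} - c_{j+1,l}
        c = c_next
    return details, c

s = np.random.rand(64)
w, cJ = redundant_haar(s, num_scales=4)
assert np.allclose(sum(w) + cJ, s)      # scales sum back to the signal
\end{verbatim}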
The use of low resolution information means that a long-term prediction can easily be introduced, either by increasing the number of scales in the wavelet transform, or by increasing the AR order in the last scales, but with a very small additional number of parameters.

To find the $Q = \sum_{j=1}^{J+1} A_{j}$ unknown parameters
$$\{\{a_{1,1},\dots,a_{1,A_1}\},\dots, \{a_{j,1},\dots,a_{j,A_j}\},\dots, \{a_{J,1},\dots,a_{J,A_J}\},\{a_{J+1,1},\dots,a_{J+1,A_{J+1}}\}\}$$
of our model, we need to solve the equation $A X = S$, where $A$, $X$ and $S$ are defined by:
\begin{eqnarray*}
A^t & = & (L_{N-1}, \dots, L_{N-P}) \\
L_i & = & (w_{1,i}, \dots, w_{1,i-2 A_1}, \dots, w_{2,i}, \dots, w_{2,i-2^2 A_2},
\dots, w_{J,i}, \dots, w_{J,i-2^J A_J}, c_{J,i}, \dots, c_{J,i-2^J A_{J+1}}) \\
X^t & = & (a_{1,1},\dots, a_{1,A_1}, a_{2,1}, \dots, a_{2,A_2},
 \dots, a_{J,1}, \dots, a_{J,A_J}, \dots, a_{J+1,1}, \dots, a_{J+1,A_{J+1}}) \\
S^t & = & (d_N, \dots, d_{i+1},  \dots, d_{N-P+1})
\end{eqnarray*}
We have $P$ equations and $Q$ unknowns, so $A$ is a $P \times Q$ matrix ($P$ rows $L_i$, each with $Q$ elements), and $X$ and $S$ are respectively $Q$- and $P$-sized vectors. When $Q$ is larger than $P$, many minimization methods may be used to find $X$. In our experiments, we used the standard SVD method.

\subsubsection{Non-stationary signal}
When the signal is not stationary, the previous method will not correctly model our data. However, in many cases, the non-stationary part affects the low frequency components, while the high frequencies may still give rise to stationary behavior. Then we can separate our signal $D$ into two parts, the low and the high frequencies $L$ and $H$:
\begin{eqnarray*}
L & = & c_J \\
H & = & D - L = \sum_{j=1}^J w_j \\
d_{t+1} & = & l_{t+1} + h_{t+1} \\
\end{eqnarray*}
The smoothed array of the wavelet transform is first subtracted from the data, and we now consider the signal $H$ to be stationary. Our prediction will be the coaddition of two predicted values: one on the signal $H$ by the AR multiscale model, and the second on the low frequency component by some other method. The AR multiscale model gives:
\begin{eqnarray}
h_{t+1} =
\sum_{j=1}^J \sum_{k=1}^{A_j} a_{j,k} w_{j,t-2^{j}(k-1)} +  \epsilon(t+1)
\end{eqnarray}
We now have $Q = \sum_{j=1}^{J} A_{j}$ unknown parameters
$$\{\{a_{1,1},\dots,a_{1,A_1}\},\dots, \{a_{j,1},\dots,a_{j,A_j}\},\dots, \{a_{J,1},\dots,a_{J,A_J}\}\}$$
and we need to solve the equation $A X = S$, where $A$, $X$ and $S$ are defined by:
\begin{eqnarray*}
A^t & = & (L_{N-1}, \dots, L_{N-P}) \\
L_i & = & (w_{1,i}, \dots, w_{1,i-2 A_1}, \dots, w_{2,i}, \dots, w_{2,i-2^2 A_2},
\dots, w_{J,i}, \dots, w_{J,i-2^J A_J}) \\
X^t & = & (a_{1,1},\dots, a_{1,A_1}, a_{2,1}, \dots, a_{2,A_2},
 \dots, a_{J,1}, \dots, a_{J,A_J}) \\
S^t & = & (h_N, \dots, h_{i+1},  \dots, h_{N-P+1})
\end{eqnarray*}
Many methods may be used for the prediction of $l_{t+1}$. The problem is simplified by the fact that $L$ is very smooth. We used a polynomial fitting of degree 3 in our experiments.

\subsubsection{AR order determination}
The AR order at the different scales must now be defined. It can be a user parameter, but an automatic method is generally preferable. At each scale $j$, we need to know how many coefficients should be used. This value $A_j$ may be determined by looking at how the wavelet coefficients at scale $j$ are correlated.
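As an aside, a minimal NumPy sketch (illustration only, not the mr1d implementation) of solving the systems $AX = S$ above in the least-squares sense with an SVD-based solver:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P, Q = 50, 8                     # P equations (rows L_i), Q AR parameters
A = rng.standard_normal((P, Q))  # rows hold the lagged wavelet coefficients
S = rng.standard_normal(P)       # observed values d_N, ..., d_{N-P+1}

# numpy's lstsq computes the minimum-norm least-squares solution via the SVD.
X, residuals, rank, singular_values = np.linalg.lstsq(A, S, rcond=None)
\end{verbatim}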
Each scale is therefore first analyzed separately, and the best AR order $A_j$ at scale $j$ minimizes:
\begin{eqnarray*}
J(A_j) = \log \sigma_{A_j}^2 + {\cal P}(A_j)
\end{eqnarray*}
where $\sigma_{A_j}$ is the prediction error, and ${\cal P}$ is a penalty function, which increases with the AR order. Examples of penalty functions are:
\begin{itemize}
\item AIC: $AIC = \log \sigma_{A_j}^2 + \frac{2 A_j}{N}$
\item AICC: $AICC = \log \sigma_{A_j}^2 + \frac{N +  A_j}{N - A_j - 2}$
\item SIC: $SIC = \log \sigma_{A_j}^2 + \frac{ A_j\log N }{N}$
\end{itemize}

\subsection{Wavelets and autocorrelation function: mr1d\_acor}
\index{mr1d\_acor}
Program {\em mr1d\_acor} calculates the autocorrelation at each scale of the wavelet transform.
{\bf
\begin{center}
 USAGE: mr1d\_acor option signal\_in autocor\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-n number\_of\_scales]} \\
Number of scales used in the multiresolution transform.
\item {\bf [-S Nbr\_of\_Lags]} \\
Number of lags. Default is 10.
\end{itemize}

\subsection{Transition detection: mr1d\_nowcast}
\index{mr1d\_nowcast}
Program {\em mr1d\_nowcast} detects the transitions in all scales at a given position. The wavelet transform used is the Haar transform, so a given wavelet coefficient at position $x$ and at scale $j$ ($j=1..P$, $P$ being the number of scales) is calculated from pixel values between positions $x-2^{j}+1$ and $x$. Only pixels in the signal which are on the left of a given position $x$ (or before a given time for a temporal signal) are used for the calculation of the wavelet coefficients at position $x$. This allows us to detect a new event in a temporal series irrespective of the time scale of the event. By default, the analysed position is the last one of the signal, but other positions can equally well be analyzed using the ``-x'' option.
The program prints for each scale $j$ the following information corresponding to the position $x$:
\begin{itemize}
\item {\bf No detection} \\
 if the wavelet coefficient $\mid w_j(x) \mid  < k \sigma_j$
\item {\bf New upward detection}\\
 if $  w_j(x) > k \sigma_j$  and $\mid w_j(x-1) \mid  < k \sigma_j$
\item {\bf New downward detection}\\
 if $  w_j(x) < - k \sigma_j$ and $\mid w_j(x-1) \mid  < k \sigma_j$
\item {\bf Positive significant structure}\\
 if $  w_j(x) > k \sigma_j$ and $\mid w_j(x-1) \mid  > k \sigma_j$ \\
The first detected coefficient of the structure is also given.
\item {\bf Negative significant structure}\\
 if $  w_j(x) < -k \sigma_j$ and $\mid w_j(x-1) \mid  > k \sigma_j$ \\
The first detected coefficient of the structure is also given.
\item {\bf End of significant structure}\\
 if $\mid w_j(x) \mid  < k \sigma_j$ and $\mid w_j(x-1) \mid  > k \sigma_j$
\end{itemize}
Furthermore, the signal-to-noise ratio of the wavelet coefficient is given.
{\bf
\begin{center}
 USAGE: mr1d\_nowcast option signal\_in
\end{center}}
where options are:
\begin{itemize}
\item {\bf [-m type\_of\_noise]}
{\small
\begin{enumerate}
\baselineskip=0.4truecm
\item Gaussian noise
\item Poisson noise
\item Poisson noise + Gaussian noise
\item Multiplicative noise
\item Non-stationary additive noise
\item Non-stationary multiplicative noise
\item Undefined stationary noise
\item Undefined noise
\end{enumerate}
}
 Default is Gaussian noise.
\item {\bf [-g sigma]}
\item {\bf [-c gain,sigma,mean]}
\item {\bf [-n number\_of\_scales]}
\item {\bf [-s NSigma]}
\item {\bf [-x Position]}  \\
Position to analyse. Default is the last point.
\end{itemize}
\subsubsection*{Examples:}
\begin{itemize}
\item mr1d\_nowcast sig.dat  \\
Analyse the last point of the signal with all default options.
\item mr1d\_nowcast -x 55 -s 10 sig.dat  \\
Analyse the point at position 55, and detect transitions with a signal-to-noise ratio equal to 10.
\end{itemize}

\subsection{Prediction: mr1d\_fcast}
\index{mr1d\_fcast}

Program {\em mr1d\_fcast} performs forecasting by four different methods: the standard AR model (AR), an AR method per scale (where the prediction is the coaddition of the predicted wavelet coefficients), the multiresolution AR model (MAR), and a neural network. The program can be used in two modes: the evaluation mode and the prediction mode. In the evaluation mode, the first part of the time series is used to predict the second part, and the output file contains the same number of values as the input file. The initial values (corresponding to the initial part of the signal) are identical, and the other values are the predicted ones. The prediction error is calculated and printed on the standard output device (screen window). In the prediction mode, the output file contains more values than the input file; the last values correspond to the predicted values. For the AR and MAR models, the order of the model can either be fixed by the user (``-a'' option) or automatically calculated. In the case of automatic MAR model order estimation, the order can be different at each scale.

By default, the signal is assumed to be stationary. If the ``-h'' option is set, we assume a non-stationary signal. In that case, the last scale of the wavelet transform is analyzed differently (i.e.
not with the AR model). Several methods can be selected by the ``-B'' option. The default is the polynomial extrapolation of order 2.

If the ``-L'' option is set, only the last pixels will be used in the analysis.

{\bf
\begin{center}
 USAGE: mr1d\_fcast option signal\_in  signal\_out
\end{center}}
where options are:
\begin{itemize}
\item {\bf [-P predict\_method]}
{\small
\begin{enumerate}
\baselineskip=0.4truecm
\item Autoregressive model.
\item Autoregressive model per scale.
\item Multiresolution autoregressive model.
\item Neural network.
\end{enumerate}
}
Default is the multiresolution autoregressive (MAR) model.
\item {\bf [-n number\_of\_scales]} \\
Number of scales to be used in the multiresolution AR model. Default is 5.
\item {\bf [-a AR\_Order]} \\
AR order used for the prediction. Default is automatically estimated.
\item {\bf [-O Estimation\_AR\_Order\_Method]}
{\small
\begin{enumerate}
\item AIC
\item AICC
\item BIC
\end{enumerate}}
Default is the BIC method.
\item {\bf [-h]} \\
Non-stationary signal. Default is stationary.
\item {\bf [-p Number\_of\_Predict]} \\
Number of predictions. Default is 0.
\item {\bf [-m MinAROrder]} \\
Minimum AR model order. Default is 1.
\item {\bf [-M MaxAROrder]} \\
Maximum AR model order. Default is 10.
\item {\bf [-w]} \\
Write to disk some information about the prediction error and the AR model order. The file contains a 1D table of size $N+3$, where $N$ is the number of scales ($N = 1$ when the MAR model is not used).
\begin{verbatim}
  T[0] = prediction error
  T[1] = Number_of_scale
  T[2] = Percentage of predictions in the interval
         [Pred-SigmaNoise, Pred+SigmaNoise]
  for j = 0 to Number_of_scale-1 do T[j+3] = AR order at scale j
\end{verbatim}
\item {\bf [-L NPix]} \\
Analyse the last NPix pixels. Default is all pixels.
\item {\bf [-B extrapol\_type]}
{\small
\begin{enumerate}
\item Constant border ($c_J(N+i) = c_J(N)$, where $c_J$ is the last scale and $N$ the number of pixels).
\item Mirror border ($c_J(N+i) = c_J(N-i)$).
\item Double mirror border ($c_J(N+i) = 2 c_J(N) - c_J(N-i)$).
\item Polynomial extrapolation (degree 1).
\item Polynomial extrapolation (degree 2).
\end{enumerate}}
Default is 5. Only used if the ``-h'' option is set.
\item {\bf [-T Poly\_Nbr\_Pix]} \\
Number of pixels used for the polynomial extrapolation. Default is 5.
Only used if the ``-h'' option is set and if a polynomial extrapolation is used.
\end{itemize}
\subsubsection*{Examples:}
\begin{itemize}
\item mr1d\_fcast sig.dat eval.dat \\
Evaluate the prediction by the MAR method.
\item mr1d\_fcast -p 1 sig.dat pred.dat \\
Make a prediction at position $N+1$.
\end{itemize}

\section{Time-Frequency Analysis}

\subsection{The Short-Term Fourier Transform: im1d\_stf}
\index{im1d\_stf}
The Short-Term Fourier Transform of a 1D signal $f$ is defined by:
\begin{eqnarray}
STFT(t, \nu) = \int_{-\infty}^{+\infty} e^{-j2\pi\nu \tau} f(\tau)g(\tau-t) d\tau
\end{eqnarray}
If $g$ is the Gaussian window, this corresponds to the Gabor transform. The energy density function, called the {\em spectrogram}, is given by:
\begin{eqnarray}
SPEC(t, \nu) = \mid STFT(t, \nu) \mid^2
=  \mid  \int_{-\infty}^{+\infty} e^{-j2\pi\nu \tau} f(\tau)g(\tau-t) d\tau \mid^2
\end{eqnarray}
The most commonly used windows are:
{\small
\begin{itemize}
\itemsep=1truecm
\baselineskip=0.4truecm
\item Truncated window function:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
    1   &  \mbox{ if }  \mid \hat{P}(\nu) \mid \ge \sqrt{\epsilon}    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
where $\epsilon$ is the regularization parameter.
\item Rectangular window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
    1   &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
where $\Omega$ defines the band-width.
\item Triangular window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
    1 - \frac{\nu}{\Omega}  &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
\item Hamming window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
    0.54 + 0.46 \cos(\frac{2\pi \nu}{\Omega})    &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
\item Hanning window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
    \cos(\frac{\pi \nu}{\Omega})    &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
\item Gaussian window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
     \exp(-4.5 \frac{\nu^2}{\Omega^2})    &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
\item Blackman window:
$
\hat{W}(\nu) =  \left\{
  \begin{array}{ll}
  0.42 + 0.5  \cos(\frac{\pi \nu}{\Omega}) +  0.08  \cos(\frac{2\pi \nu}{\Omega})  &  \mbox{ if }  \mid  \nu \mid \le \Omega    \\
    0   &  \mbox{ otherwise}
  \end{array}
  \right.
$
\end{itemize}
}

The inverse transform is obtained by:
\begin{eqnarray}
f(t) =  \int_{-\infty}^{+\infty}  g(t-\tau)  \int_{-\infty}^{+\infty} e^{j2\pi\nu \tau} STFT(\tau, \nu) d\nu d\tau
\end{eqnarray}

\bigskip
Program {\em im1d\_stf} calculates the Short-Term Fourier Transform of a signal. It outputs three files: the real part of the STF, the imaginary part of the STF, and the spectrogram. A small numerical sketch of the spectrogram computation is given below.
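As an illustration of the definitions above (a sketch only, not the im1d\_stf implementation), a discrete spectrogram with a sliding Hamming window:

\begin{verbatim}
import numpy as np

def spectrogram(signal, window_size=256, step=128):
    # |STFT(t, nu)|^2 on a grid: one row per window position.
    window = np.hamming(window_size)
    rows = []
    for start in range(0, len(signal) - window_size + 1, step):
        segment = signal[start:start + window_size] * window
        rows.append(np.abs(np.fft.rfft(segment)) ** 2)
    return np.array(rows)

# Chirp test signal: the spectrogram ridge drifts upwards in frequency.
t = np.linspace(0.0, 1.0, 8192)
spec = spectrogram(np.sin(2 * np.pi * (50 + 200 * t) * t))
\end{verbatim}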
When the ``-r'' option is set, the signal is reconstructed from the STF real and imaginary parts (the spectrogram is not used for the reconstruction).

\smallskip
The command line is:
{\bf
\begin{center}
 USAGE: im1d\_stf option signal\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-t type\_of\_window ]}
\begin{enumerate}
\itemsep=0.1truecm
\baselineskip=0.4truecm
\item Hamming window
\item Hanning window
\item Gaussian window
\item Blackman window
\item Rectangular window
\end{enumerate}
Default is the Hamming window.
\item {\bf [-w window\_size]} \\
Window size. Default is 1024. If the signal has fewer than 1024 pixels, the default value is the number of pixels divided by four.
\item {\bf [-W window\_param]} \\
Window parameter. Default is 0.5.
\item {\bf [-S Step]} \\
Step between two points. Default is WindowSize/2.
\item {\bf [-r]} \\
Reverse transform.
\item {\bf [-v]} \\
Verbose. Default is no.
\end{itemize}

\subsubsection*{Examples:}
\begin{itemize}
\item im1d\_stf sig.fits stf \\
Creates three files, ``stf\_re.fits'', ``stf\_im.fits'' and ``stf\_spec.fits'', corresponding respectively to the STF real part, the STF imaginary part, and the spectrogram.
\item im1d\_stf -r stf rec \\
Reconstructs a signal from its STF (i.e. ``stf\_re.fits'' and ``stf\_im.fits'').
\end{itemize}

\subsection{Time-Frequency Distribution: im1d\_tfreq}
\index{im1d\_tfreq}

% \subsubsection*{The Wigner-Ville Distribution.}
\index{Wigner-Ville transform}
\index{transform!Wigner-Ville transform}

The Wigner-Ville distribution \cite{ima:wigner32,ima:ville48} of a signal $s(t)$ is
\begin{eqnarray}
W(t, \nu) = \frac{1}{2\pi} \int s^*(t - \frac{1}{2}\tau )
           s(t + \frac{1}{2}\tau ) e^{-i\tau 2 \pi \nu} d\tau
\end{eqnarray}
where $s^*$ is the conjugate of $s$.
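For illustration (a sketch only, not the im1d\_tfreq code), a discrete Wigner-Ville distribution following the definition above, with the usual substitution $\tau \rightarrow 2\tau$ so that only integer sample positions are needed:

\begin{verbatim}
import numpy as np

def wigner_ville(s):
    # W(t, nu) from the instantaneous autocorrelation
    # s(t + tau) s*(t - tau), Fourier-transformed over the lag tau.
    n = len(s)
    kernel = np.zeros((n, n), dtype=complex)
    for t in range(n):
        max_lag = min(t, n - 1 - t, n // 2 - 1)
        lags = np.arange(-max_lag, max_lag + 1)
        kernel[t, lags % n] = s[t + lags] * np.conj(s[t - lags])
    # Conjugate symmetry in the lag makes the transform real;
    # note that the doubled lag halves the usable frequency range.
    return np.fft.fft(kernel, axis=1).real

w = wigner_ville(np.exp(2j * np.pi * 0.1 * np.arange(128)))
\end{verbatim}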
The Wigner-Ville transform is always real (even for a complex signal). In practice, its use is limited by the existence of interference terms, even if they can be attenuated using specific averaging approaches. More details can be found in \cite{ima:cohen95,ima:mallat98}.

% \subsubsection*{The Choi-Williams Distribution.}
% The Choi-Williams distribution of a signal $s(t)$ is
% \begin{eqnarray}
% W(t, \nu) = \frac{1}{4\pi^{\frac{3}{2}}} \int \int
%    \frac{1}{\sqrt{\tau^2/\sigma}} s^*(u - \frac{1}{2}\tau )
%            s(u + \frac{1}{2}\tau ) e^{-\sigma(u-t)^2/\tau^2 - 2i\tau \pi \nu} d\tau
% \end{eqnarray}
%
% \bigskip

Program {\em im1d\_tfreq} calculates a time-frequency distribution of a signal (a spectrogram or a Wigner-Ville distribution, selected with the ``-T'' option).

\smallskip
The command line is:
{\bf
\begin{center}
 USAGE: im1d\_tfreq option signal\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-T TimeFrequency\_Distrib ]}
\begin{enumerate}
\itemsep=0.1truecm
\baselineskip=0.4truecm
\item Short-Term Fourier Transform spectrogram
\item Wigner-Ville distribution
% \item Choi-Williams Distribution
\end{enumerate}
Default is 1.
\item {\bf [-t type\_of\_window ]}
\begin{enumerate}
\itemsep=0.1truecm
\baselineskip=0.4truecm
\item Hamming window
\item Hanning window
\item Gaussian window
\item Blackman window
\item Rectangular window
\end{enumerate}
Default is the Hamming window.
% \item {\bf [-F type\_of\_window (freq. domain) ]} \\
%  Only for Choi-Williams distribution.
% Default is Hamming window.
\item {\bf [-w window\_size (time domain)]} \\
Window size. Default is 1024. If the signal has fewer than 1024 pixels, the default value is the number of pixels divided by four.
\item {\bf [-W window\_param (time domain)]} \\
Window parameter. Default is $0.5$.
\item {\bf [-S Step]} \\
Step between two points. Default is WindowSize/2.
\item {\bf [-v]} \\
Verbose. Default is no.
\end{itemize}
YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5589231830651411}}
{"text": "\\section{Results}\nThe program was configured to generate a dataset of one million games of Blackjack. This number was chosen, because adding more 0\\textquotesingle s to this number increases the time it takes to generate and refine the data set a lot. With one million it took my laptop approximately 5 minutes to generate and 1 minute to refine the data. I could wait longer for more data, but it is not expected to make a difference in the results. one million should be enough data to minimize the coincidence and give reliable results. \\\\\n \\\\\nFigure 1 and table 1 show all hand values considered difficult with the total number of occurrences when\n\\begin{itemize}\n    \\item The player decided to draw \n    \\item The player decided to pass\n    \\item The player decided to draw and won\n    \\item the player decided to pass and won \n\\end{itemize}\nAn interesting trend appears in the graph with this data. You can see that, the higher the hand value, the less successful it is to draw a card (Blue in figure 1). So drawing a card with a hand value of 12 is more successful than drawing a card at a hand value of 17. This makes sense, because a lower hand value can draw more different cards without going over 21, so has a higher probability of winning.\nThe same happens to the passing card results, but in opposite order. The higher the hand value, the higher the probability passing will be successful (Red in figure 1). The interesting part in this graph is the differences between these two trends. For the outer two columns the results are very clear:\n\\begin{itemize}\n    \\item 12: Probability to win much better by drawing\n    \\item 13: Probability to win better by drawing\n    \\item 16: Probability to win better by passing\n    \\item 17: Probability to win way, way better by passing \n\\end{itemize}\nIn the middle of the graph is the most important data. As expected are 14 and 15 the most difficult hands to make a decision for. When you look at the data it is slightly better to draw a card at a value of 14 and to pass at a value of 15. Based on the big dataset, it is safe to say that is also the best move in this situation. Making the right decision won\\textquotesingle t make a big difference for your results while playing the game in real life, but if you happen to play one million games with a value of 14 or 15 in your hand, you could increase your win probability.\n\n\\input{fig/GraphWinChanceCardValue.tex}\n\nFigure 2 and table 2 show the total number of occurrences of each possible outcome over the entire experiment of one million games of Blackjack. You can see that the dealer won 513440 / 1000000 * 100 = 51,3 percent of the games, the player won 421942 / 1000000 * 100 = 42,2 percent of the games and only 64618 / 1000000 * 100 = 6,5 percent of the games resulted in a tie. This has a ratio of 1,22 / 1 / 0.15 with Dealer / Player / Tied. This means that, in the long run, the dealer always wins 1,22 times more games than the player. 
That is 22 percent more games. A small sketch recomputing these figures is given after the graph.

\input{fig/GraphTotalWins.tex}
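As a cross-check of the arithmetic above, here is a minimal Python sketch (our own illustration; the counts are the ones quoted from table 2):

\begin{verbatim}
# Outcome counts over the one million simulated games (table 2).
counts = {"dealer": 513440, "player": 421942, "tie": 64618}
total = sum(counts.values())  # 1000000

for outcome, count in counts.items():
    print(outcome, round(100 * count / total, 1), "percent")

# Dealer/player ratio: about 1.22, i.e. 22 percent more dealer wins.
print(round(counts["dealer"] / counts["player"], 2))
\end{verbatim}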
{"text": "\\documentclass{ximera}\n\\input{../preamble}\n\\title{Exercises: Arc Length}\n%%%%%\\author{Philip T. Gressman}\n\n\\begin{document}\n\\begin{abstract}\nWe practice computing arc length.\n\\end{abstract}\n\\maketitle\n\n\\begin{exercise}%[APEX0704ARCL04]\nFind the arc length of the function on the given interval: \\(\\displaystyle f(x) = \\sqrt{8}x\\) on \\([-1, 1]\\)\n\\[ L = \\answer{6}. \\]\n%%\n%%\n\\end{exercise}\n\n\\begin{exercise}\nFind the arc length of the function on the given interval: \\(\\displaystyle f(x) = \\ln \\left(\\cos x\\right)\\) on \\([0, \\pi/4]\\).\n(You may use the fact that $\\int \\sec x ~ dx = \\ln |\\sec x + \\tan x| + C$.)\n\\[ L = \\answer{\\ln (1 + \\sqrt{2})}. \\]\n\\end{exercise}\n\n\\begin{exercise}%[APEX0704ARCL13]\nSet up the integral to compute the arc length of the function on the given interval: \\(\\displaystyle f(x) = x^2\\) on \\([0, 1]\\).\n\\[ L = \\int_{\\answer{0}}^{\\answer{1}} \\answer{\\sqrt{1+4x^2}} \\ dx\\]\n%\n%\n\\end{exercise}\n\n\\begin{question}%%%%%[2017C.03]\nLet \\(\\displaystyle y = \\frac{x^4}{16} + \\frac{1}{2x^2}\\). Find the arc length for \\(1 \\leq x \\leq \\sqrt{2}\\).\n\\[ L = \\answer{\\frac{7}{16}}. \\]\n\\end{question}\n\n\\section*{Sample Quiz Questions}\n\n\\begin{question}%%%%%[Arclen01]\n\nCompute the arc length of the curve \\[y = \\frac{3}{4}x^{-2} + \\frac{1}{24}x^{4} - 1\\] between the endpoints \\(x = \\sqrt{3}\\) and \\(x = \\sqrt{6}\\).\n(Hints won't be revealed until after you choose a response.)\n\\begin{multiplechoice}\n\\choice{\\(\\displaystyle \\frac{5}{12}\\)}\n\\choice{\\(\\displaystyle \\frac{7}{12}\\)}\n\\choice{\\(\\displaystyle \\frac{3}{4}\\)}\n\\choice{\\(\\displaystyle \\frac{11}{12}\\)}\n\\choice{\\(\\displaystyle \\frac{13}{12}\\)}\n\\choice[correct]{\\(\\displaystyle \\frac{5}{4}\\)}\n\\end{multiplechoice}\n\\begin{feedback}\nApplying the formula for arc length gives that\n\\[\n\\begin{aligned}\nL & = \\int_{\\sqrt{3}}^{\\sqrt{6}} \\sqrt{1 + \\left( \\frac{dy}{dx} \\right)^2}~dx \n   = \\int_{\\sqrt{3}}^{\\sqrt{6}} \\sqrt{1 + \\left( -\\frac{3}{2}x^{-3} + \\frac{1}{6}x^{3} \\right)^2}~dx\n\\end{aligned}\n\\]\n\\begin{hint}\n\\[\n\\begin{aligned}\n  & = \\int_{\\sqrt{3}}^{\\sqrt{6}} \\sqrt{1 + \\frac{9}{4}x^{-6} - \\frac{1}{2} + \\frac{1}{36}x^{6}} ~ dx \n   = \\int_{\\sqrt{3}}^{\\sqrt{6}} \\sqrt{ \\left( \\frac{3}{2}x^{-3} + \\frac{1}{6}x^{3} \\right)^2} ~ dx \\\\\n  & = \\int_{\\sqrt{3}}^{\\sqrt{6}} \\left( \\frac{3}{2}x^{-3} + \\frac{1}{6}x^{3} \\right) ~ dx \n   = \\left.  \\left( -\\frac{3}{4}x^{-2} + \\frac{1}{24}x^{4} \\right) \\right|_{\\sqrt{3}}^{\\sqrt{6}} \\\\\n  & =  \\left( -\\frac{1}{8} + \\frac{3}{2} \\right) - \\left( -\\frac{1}{4} + \\frac{3}{8} \\right) \n   = \\frac{5}{4}.\n\\end{aligned}\n\\]\nNote that you must always take the positive square root in going from line two to line three. 
In particular, if you get a negative answer, you have likely taken the negative square root.
\end{hint}
\end{feedback}

\end{question}

\section*{Sample Exam Questions}

\begin{question}(2017 Midterm 1)
Compute the length of the curve $x = \frac{1}{8} (y^2 + 2y) - \ln (y+1)$ between $y=0$ and $y = 2$.
\begin{multipleChoice}
\choice[correct]{$\displaystyle 1 + \ln 3$}
\choice{$\displaystyle 2 + \ln 6$}
\choice{$\displaystyle 3 + \ln 9$}
\choice{$\displaystyle 4 + \ln 12$}
\choice{$\displaystyle 5 + \ln 15$}
\choice{none of the above}
\end{multipleChoice}
\end{question}

\begin{question}%%%%%[2018.S.4]
 Find the arc length of the following curve between $x=-3$ and $x=3$:
\[ y = 3 \cosh \frac{x}{3}. \]
(Note: $\cosh x = (e^x + e^{-x})/2$.)
\begin{multiplechoice}
\choice{\(\displaystyle \frac{e}{3} - \frac{1}{3e}\)}
\choice{\(\displaystyle \frac{e}{2} - \frac{1}{2e}\)}
\choice{\(\displaystyle e - \frac{1}{e}\)}
\choice{\(\displaystyle 2 e - \frac{2}{e}\)}
\choice[correct]{\(\displaystyle 3 e - \frac{3}{e}\)}
\choice{none of the above}
\end{multiplechoice}

\end{question}

\begin{question}
 A certain curve $y = f(x)$ in the plane has the property that its length between the endpoints $x=0$ and $x=a$ is equal to
\[ \int_0^a \sqrt{1 + \sin^2 t} ~ dt \]
for every value of $a > 0$.  Assuming the curve passes through the points $(0,0)$ and $\left(\frac{\pi}{2},1\right)$, what is $f\left( \frac{\pi}{4} \right)$?
\begin{multipleChoice}
\choice{$\displaystyle \frac{1}{2}$}
\choice{$\displaystyle \frac{1}{\sqrt{2}}$}
\choice[correct]{$\displaystyle 1 - \frac{1}{\sqrt{2}}$}
\choice{$\displaystyle 0$}
\choice{$\displaystyle - \frac{1}{\sqrt{2}}$}
\choice{none of these}
\end{multipleChoice}
\end{question}

\begin{question}%%%%%[2015C.13]

Find the length of the part of the curve \(\displaystyle y = \frac{3}{16} e^{2x} + \frac{1}{3} e^{-2x}\) for \(0 \leq x \leq \ln 2\).
\begin{multiplechoice}
\choice[correct]{\(\displaystyle \frac{13}{16}\)}
\choice{\(\displaystyle \frac{11}{16}\)}
\choice{\(\displaystyle \frac{3}{8}\)}
\choice{\(\displaystyle \frac{9}{8}\)}
\choice{\(\displaystyle \frac{29}{64}\)}
\choice{\(\displaystyle \frac{3}{4}\)}
\end{multiplechoice}

\end{question}

\begin{question}%%%%%[2016C.01]

Find the length of the part of the curve \(\displaystyle y = \frac{x^4}{4} + \frac{1}{8x^2}\) for \(1 \leq x \leq \sqrt{2}\).
\begin{multiplechoice}
\choice[correct]{\(\displaystyle \frac{13}{16}\)}
\choice{\(\displaystyle \frac{11}{16}\)}
\choice{\(\displaystyle \frac{7}{8}\)}
\choice{\(\displaystyle \frac{13\sqrt{2}}{16}\)}
\choice{\(\displaystyle \frac{11\sqrt{2}}{16}\)}
\choice{\(\displaystyle \frac{7\sqrt{2}}{8}\)}
\end{multiplechoice}

\end{question}

\end{document}
"max_issues_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "arclengths/05arclengthpractice.tex", "max_forks_repo_name": "ptgressman/math104", "max_forks_repo_head_hexsha": "3b797f5622f6c7b93239a9a2059bd9e7e1f1c7c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.1523178808, "max_line_length": 186, "alphanum_fraction": 0.6226450641, "num_tokens": 2016, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7461389817407017, "lm_q1q2_score": 0.5589231787874159}}
{"text": "\\chapter{Symmetric-key Cryptography} % Main chapter title\n\\label{Symmetric-key Cryptography} % For referencing the chapter elsewhere, use \\ref{Chapter1} \n\n\\textit{Symmetric-key} cryptography (also commonly referred to as \\textit{private-key cryptography}) is a system of cryptography which\nuses the same keys for both encryption of plaintext and decryption of ciphertext. The keys must be shared between the different\nparties accessing and handling the information. The requirement that all (authorized) parties must have access to the secret key\nis the biggest drawback of symmetric-key cryptography \\cite{wiki:symmetric_key_cryptography}.     \n\nThis chapter will discuss and implement the different types of symmetric-key cryptography algorithms. \nFirst we will explore and implement the various classical cryptography examples (some of which are discussed in the \n\\hyperref[Early Cryptography]{Early Cryptography} chapter). From there, we will continue and take a look at more complex modern-day algorithms \nand discuss their implementation. \n\n\\section{Transposition Ciphers}\n\nOne of the simplest methods of cryptography, a transposition cipher, works by changing the ordering of the message to make it seemingly unreadable.\nThis is similar to an anagram but with a regular defined structure to make it easily decryptable (assuming you know the defined structure or the \\textit{key}).   \n\nMathematically, the ciphertext will be a permutation of the original plaintext. We can use an encryption function to encrypt our plaintext and then use \nthe \\textit{inverse} of our encryption function to decrypt our ciphertext. A generic transposition cipher (one with no specific encryption function) \ncan be expressed mathematically as follows: $$\\mathlarger{P \\rightarrow E, f(x)}$$\n\nIn this example, we are mapping the set $P$ containing the plaintext to the set $E$ containing the ciphertext using the encryption function $f(x)$. \nIn mathematics, this would be referred to a \\textit{bijection} from set $P$ to $E$. Keep in mind that while transposition ciphers certainly\nare functions, we can not easily assign them a simple algebraic formula since they deal with the positions of letters rather than the letters themselves.\n\nDecryption of the ciphertext is defined in a similar fashion, but instead we use the \\textit{inverse bijection} from our ciphertext set $E$ to $P$. \nNotice that now our function $f(x)$ is inversed becoming: $f^{-1}(x)$. The algebraic expression for decryption is written as follows: \n$$\\mathlarger{E \\rightarrow P, f^{-1}(x)}$$\n\nThough there are many different types of transposition ciphers, in this section we will only discuss one and it's benefits, drawbacks, and efficiency.\n\n\\subsection{Rail Fence Cipher}\n\nThe \\textit{rail fence cipher} (also referred to as a \\textit{ZigZag} cipher) is a form of transposition cipher which arranges letters from\nthe plaintext in different \"rails.\" It works by writing letters downwards and diagonally on following \"rails\" of an \n\\textit{imaginary fence} (hence where the name of this cipher comes from). When we reach the bottom rail, we begin writing \nletters upwards. Likewise, when we reach the top rail, we begin writing letters downwards again until the whole plaintext is\nwritten on our rail fence. 
For instance, if our fence has \textit{3} rails and our plaintext is '\textit{WE ARE DISCOVERED FLEE AT ONCE}', the ciphertext would look like so:

\begin{listing}[H]
    \begin{minted}[mathescape,
        numbersep=5pt,
        frame=lines,
        framesep=2mm]{text}

W . . . E . . . C . . . R . . . L . . . T . . . E
. E . R . D . S . O . E . E . F . E . A . O . C .
. . A . . . I . . . V . . . D . . . E . . . N . .

    \end{minted}
    \caption{Example of plaintext encrypted using the rail fence cipher. In this example, we have removed all whitespace from the plaintext.}
\end{listing}

This is then read off to get the ciphertext:

\begin{minted}[mathescape,
    numbersep=5pt,
    frame=lines,
    framesep=2mm]{text}

WECRL TEERD SOEEF EAOCA IVDEN
\end{minted}

Decryption is done in a similar fashion, by laying out the ciphertext on the appropriate rails. Let's take a look at how you go about decrypting the ciphertext above. First, we must know the height and \textit{cycle} of the ciphertext. The height is simply the number of rails that was used to encrypt the plaintext. A cycle is the number of letters which run from the top row to the bottom row and up again, stopping before the top-most row (where a new cycle begins). The cycle can be written mathematically as: $$\mathlarger{C = 2R - 2}$$

This simply says that each cycle is double our number of rails $R$, minus 2. This is because, when we double our number of rails (for going down and up), we will have a duplicate position at the bottom row, so we subtract $1$. We then subtract $1$ again, since our cycle does not include the top-most row. Do note that the rail index is \textit{zero based}. This means that the \textit{first} rail will have an index of \textit{0} and the last rail will have an index of $n - 1$.

Next, we will need to know the distance between the characters on a specific rail. If we refer back to our example, you will notice that both the top and bottom rails have the \textit{same} spacing between each character. This is because both the top and bottom rows are where cycles connect. If you think of it as a triangle, the vertices of the triangle are laid out on the bottom and top rows. The equation for calculating the character distance is: $$\mathlarger{D_c = C - 2r}$$

In simple terms, we take our cycle $C$ and subtract twice the current (zero-based) rail index $r$, factoring in both the up and down movements. As previously mentioned, the distance patterns alternate on the middle rows; this alternation is our cycle minus our \textit{current} character distance. This can be written mathematically as follows: $$\mathlarger{D_n = C - D_c}$$

where $C$ is our cycle, $D_c$ is our current character distance, and $D_n$ is our new character distance.

There is one last step to calculating the character distance: handling the bottom rail. The aforementioned distance equation will \textit{not} work for the bottom rail. Instead, the character distance for the bottom rail is simply our cycle. In fact, the distance for the top rail is also the cycle, but we don't need to handle that case manually, as our equation already does that for us (the subtracted term vanishes when the rail index is \textit{0}).

Given the cycle and the character distances, we can simply reconstruct the rail fence by laying out the ciphertext according to the appropriate character distance. The short sketch below illustrates these formulas on the example above.
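Before turning to the full implementation, here is a small illustrative sketch (not part of the implementation developed below; the variable names are ours) which prints the letters visited on each rail of the 3-rail example, using the cycle and character-distance equations above:

\begin{minted}[mathescape,
    numbersep=5pt,
    frame=lines,
    framesep=2mm]{python}
# Walk each rail of a 3-rail fence using cycle C = 2R - 2 and
# character distance D_c = C - 2r from the equations above.
text = "WEAREDISCOVEREDFLEEATONCE"
rails = 3
cycle = max(2 * rails - 2, 1)

for rail in range(rails):
    ptr = rail
    distance = cycle if rail == rails - 1 else cycle - 2 * rail
    letters = []
    while ptr < len(text):
        letters.append(text[ptr])
        ptr += distance
        if 0 < rail < rails - 1:  # middle rails alternate the two distances
            distance = cycle - distance
    print(rail, ''.join(letters))
\end{minted}

Running this prints \texttt{WECRLTE}, \texttt{ERDSOEEFEAOC} and \texttt{AIVDEN}: exactly the three rails of the earlier figure, read off row by row.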
\begin{figure}[H]
    \centering
    \begin{tikzpicture}
        \def\boxgap{+0.0cm}
        \def\minboxsz{0.8cm}
        \def\boxlinewidth{0.04cm}
        \def\stripgap{1cm}
        \def\lsgap{0.5cm}

        \node at(4.4, 3.2) {\Large Original Message: Hello World};
        \node at(4.4, -0.6) {\Large Encrypted Message: Horel ollWd};
        \draw[step=0.8cm,gray] (0,0) grid (11*\minboxsz,3*\minboxsz);
        \draw [->, thick, blue!25] (0.1, 2.3) -- (2.0, 0.4) -- (3.6, 2.0) -- (5.2, 0.4) -- (6.8, 2.0) -- (8.7, 0.1);
        \draw [->, thick, red!50] (0.2, 2.6) -- (8.6, 2.6);
        \node at(1*\minboxsz/2, 5*\minboxsz/2) {\Large H};
        \node at(3*\minboxsz/2, 3*\minboxsz/2) {\Large e};
        \node at(5*\minboxsz/2, 1*\minboxsz/2) {\Large l};
        \node at(7*\minboxsz/2, 3*\minboxsz/2) {\Large l};
        \node at(9*\minboxsz/2, 5*\minboxsz/2) {\Large o};
        \node at(13*\minboxsz/2, 1*\minboxsz/2) {\Large W};
        \node at(15*\minboxsz/2, 3*\minboxsz/2) {\Large o};
        \node at(17*\minboxsz/2, 5*\minboxsz/2) {\Large r};
        \node at(19*\minboxsz/2, 3*\minboxsz/2) {\Large l};
        \node at(21*\minboxsz/2, 1*\minboxsz/2) {\Large d};
    \end{tikzpicture}
    \caption{A diagram of the rail fence cipher which demonstrates how plaintext is arranged on the rails.}
\end{figure}

Now that we have discussed the theory behind the rail fence cipher, we will go ahead and implement it. First we will write the encryption function. To encrypt plaintext, we need two key pieces of information: our plaintext and the number of rails to encrypt with.

\begin{minted}[mathescape,
    numbersep=5pt,
    frame=lines,
    framesep=2mm]{python}
# Encryption function which takes in the plain_text and rail number.
def encrypt(plain_text, rails):
    # Create our cipher_text string and initialize to empty
    cipher_text = str()

    # TODO: Encryption

    return cipher_text
\end{minted}

As discussed previously, when encrypting, the message is laid out on the rails according to the cycle. Therefore, we will need to calculate the cycle within our function. We can use the aforementioned equation to calculate the cycle. We use the \textit{max} function to make sure that our cycle stays at least 1; this is simply a special case for an encryption with one rail.\footnote{Although encrypting plaintext with one rail won't yield particularly interesting results, we still make sure our algorithm can support it.}

\begin{minted}{python}
cycle = max(rails * 2 - 2, 1) # 1 is special case for 1 rail
\end{minted}

Next, we will write the letters of the plaintext to their corresponding positions on the rails. We will iterate through every rail and lay out the characters which belong to that rail.

In every iteration, we must calculate a few key pieces of data. First, we need to know the distance between the characters on the specific rail. As discussed, there is an alternating pattern between the characters on a rail: one distance for a character going down a cycle, and one for a character going up a cycle (with the top and bottom rails having a constant spacing between characters). Since the top and bottom rails have the same spacing, if the rail is the bottom rail, we set our distance to the cycle. Second, we need a pointer variable which keeps a reference to the current character we are looking at.
This will simply be initialized to our rail index (the starting position of the rail in the text).

Laying out the letters is actually quite simple. Since we know the distance between the characters, we can keep iterating through the plaintext until our pointer is greater than or equal to the length of our plaintext. In every iteration, we increment the pointer by our character distance, making sure to alternate between the two distance patterns.

Here is the iteration code along with the rest of the encryption function.

\begin{listing}[H]
    \begin{minted}[mathescape,
        numbersep=5pt,
        frame=lines,
        framesep=2mm]{python}
def encrypt(plain_text, rails):
    cipher_text = str()
    cycle = max((rails - 1) * 2, 1) # 1 is special case for 1 rail

    for rail in range(rails):
        ptr = rail
        character_distance = cycle - 2 * rail

        # Both the bottom and top rails have a (same) character distance of the cycle.
        if rail == rails - 1:
            character_distance = cycle

        # While we have *something* to write
        while ptr < len(plain_text):
            cipher_text += plain_text[ptr]
            ptr += character_distance

            # If this is not the top or bottom rail
            # alternate between two distance patterns
            if rail != 0 and rail != rails - 1:
                character_distance = cycle - character_distance

    return cipher_text
    \end{minted}
    \caption{Full implementation of encryption in the rail fence cipher.}
\end{listing}

The implementation of decryption is almost identical. As with encryption, when decrypting we must lay out our ciphertext on the corresponding rails. To do this, we can use the \textit{same} algorithm that we used for encryption.

\begin{listing}[H]
    \begin{minted}[mathescape,
        numbersep=5pt,
        frame=lines,
        framesep=2mm]{python}
def decrypt(cipher_text, rails):
    plain_text = [''] * len(cipher_text)
    cipher_index = 0
    cycle = max((rails - 1) * 2, 1)  # 1 is special case for 1 rail

    for rail in range(rails):
        ptr = rail
        character_distance = cycle - 2 * rail

        # Both the bottom and top rails have a (same) character distance of the cycle.
        if rail == rails - 1:
            character_distance = cycle

        # While we have *something* to write
        while ptr < len(plain_text):
            plain_text[ptr] = cipher_text[cipher_index]
            cipher_index += 1

            ptr += character_distance

            # If this is not the top or bottom rail
            # alternate between two distance patterns
            if rail != 0 and rail != rails - 1:
                character_distance = cycle - character_distance

    return ''.join(plain_text)
    \end{minted}
    \caption{Full implementation of decryption in the rail fence cipher.}
\end{listing}

As you can see, there are only slight differences between the encryption and decryption implementations. The first difference is the way we build our result text. When we \textit{encrypt}, we append characters to a string (which is our ciphertext). We can do this because our ciphertext is \textit{one-dimensional}.
Conversely, when we decrypt, since we are laying out characters on rails, we must preserve the order of the fence; we can think of this as \textit{two-dimensional} (since we are not just laying out characters horizontally). The second change is the introduction of a second pointer variable: the \textit{cipher index}. This is simply used to track the current reading index of the ciphertext (so we know which character to read next). There are some other \textit{small} changes in the decryption code, but those are self-explanatory, so they will not be covered.

\section{Substitution Ciphers}

A substitution cipher is a type of cryptography technique where each unit of plaintext is replaced with a unit of ciphertext; a unit may refer to a single letter of the plaintext, a pair of letters, alternating patterns of the aforementioned, and so on \cite{wiki:substitution_cipher}.

A substitution cipher can be compared with a transposition cipher---the two are rather similar. In a transposition cipher, the units of plaintext are rearranged as a permutation, but the units themselves \textit{always} remain unchanged. The latter is unlike a substitution cipher, which preserves the order of the plaintext but alters the units.

\subsection{Caesar Cipher}
The Caesar cipher, arguably the most popular of the substitution ciphers, is another very simple form of cryptography. It works by replacing each letter with the letter a certain number of places up or down in the alphabet. The number of places that we move by is known as the \textit{shift offset}. If you shift the letter \textit{A} by 3 spaces, you get the letter \textit{D}. If you shift the letter \textit{B}, you get the letter \textit{E}.

% Caesar cipher diagram
\begin{figure}[H]
    \centering
    \begin{tikzpicture}
        \def\boxgap{+0.053cm}
        \def\minboxsz{0.8cm}
        \def\boxlinewidth{0.04cm}

        \draw[gray, dotted, line width=0.04cm] (1, 3) -- (1.4, 3);
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz] (a1) at (2, 3) {\Large A};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of a1] (b1) {\Large B};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of b1] (c1) {\Large C};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of c1] (d1) {\Large D};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of d1, fill=cyan] (e1) {\Large \textbf{E}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of e1] (f1) {\Large F};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of f1, dashed, gray] (g1) {\Large \textcolor{gray}{G}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of g1, dashed, gray] (h1) {\Large \textcolor{gray}{H}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of h1, dashed, gray] (i1) {\Large \textcolor{gray}{I}};

        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, dashed, gray] (x2) at (0, 0) {\Large \textcolor{gray}{X}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of x2, dashed, gray] (y2) {\Large
\textcolor{gray}{Y}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of y2, dashed, gray] (z2) {\Large \textcolor{gray}{Z}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of z2] (a2) {\Large A};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of a2, fill=cyan] (b2) {\Large \textbf{B}};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of b2] (c2) {\Large C};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of c2] (d2) {\Large D};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of d2] (e2) {\Large E};
        \node[rectangle, draw, line width=\boxlinewidth, minimum size=\minboxsz, right = \boxgap of e2] (f2) {\Large F};
        \draw[gray, dotted, line width=0.04cm] (7.8, 0) -- (8.2, 0);

        \draw [->, line width=0.04cm, gray] (a1) to[out=270, in=90] (x2);
        \draw [->, line width=0.04cm, gray] (b1) to[out=270, in=90] (y2);
        \draw [->, line width=0.04cm, gray] (c1) to[out=270, in=90] (z2);
        \draw [->, line width=0.04cm] (d1) to[out=270, in=90] (a2);
        \draw [->, line width=0.04cm] (e1) to[out=270, in=90] (b2);
        \draw [->, line width=0.04cm] (f1) to[out=270, in=90] (c2);
        \draw [->, line width=0.04cm, gray] (g1) to[out=270, in=90] (d2);
        \draw [->, line width=0.04cm, gray] (h1) to[out=270, in=90] (e2);
        \draw [->, line width=0.04cm, gray] (i1) to[out=270, in=90] (f2);
    \end{tikzpicture}
    \caption{A diagram of the Caesar cipher using a shift offset of \textit{3}.}
\end{figure}

Here is a simple example of a message encrypted using the cipher with a shift offset of 3:

\begin{minted}[mathescape,
               numbersep=5pt,
               frame=lines,
               framesep=2mm]{text}

Plaintext message:
HELLO WORLD

Ciphertext message:
KHOOR ZRUOG
\end{minted}

Even though the ciphertext appears to be unreadable, the Caesar cipher remains one of the weakest forms of cryptography. This is for several reasons:
\begin{itemize}
    \item The number of possible keys is very small; since the key is an offset by which to shift, it is limited by the number of letters in the alphabet. In the case of the English alphabet, one could very easily try all 25 possibilities.
    \item Since the structure of the original message remains intact, the ciphertext can be subjected to frequency analysis, which would reveal patterns in the message, potentially revealing the key.
    \item By studying multiple different ciphertexts, one could notice patterns within them and recover the key that way.
\end{itemize}

The Caesar cipher can be expressed mathematically as follows: $$\mathlarger{E_i(n)=(n+k)\mod{26}}$$

Here we say that the encryption of the $i$th letter $n$ is equal to a shift of $n+k$, where $k$ is the shift offset (or the key). The result of the shift is divided by $26$, and the remainder of the division is used to make sure that the shift wraps around the alphabet; if the shift goes beyond the range of the alphabet ($0$ to $25$), it wraps back around to the beginning.

Decryption is done in a similar manner, but instead subtracting $k$:
\n\nPlease note that even though this example is for the English alphabet, the aforementioned equations should work for most languages, assuming that they are modified\nto use the letter count of the specific language's alphabet.\n\nThe concept of shifting letters is rather intuitive, but how do we shift letters in code? This is done by representing\neach letter as a \\textit{number} and then \\textit{adding} to or \\textit{subtracting} from this number our shift\noffset. The \\textit{ASCII} (American Standard Code for Information Interchange) scheme is an encoding which \nprovides a numerical representation for each letter; this number is known as an \\textit{ordinal} \\cite{caesar_cipher_invent_with_python}. \nWe can use ASCII to implement this cipher. \n\nIn the ASCII encoding scheme, the capital letters \\textit{A} through \\textit{Z} are represented by the ordinals $65$ through \n$90$. The lowercase letters \\textit{a} through \\textit{z} are represented by the ordinals $97$ through $122$. \nLastly, the numeric digits $0$ through $9$ are represented by the ordinals $48$ through $57$.\n\nModern computers typically use the \\textit{UTF-8} encoding instead of ASCII. Fortunately, UTF-8 is backwards compatible\nwith ASCII, which means that these ordinals in UTF-8 are the same as in ASCII.\n\nThe table below shows the relevant parts of the ASCII table in all of its glory!\n\n\\begin{table}[H]\n    \\centering\n        \\begin{tabular}{ | c c | c c | c c | c c | c c | c c |  } \n            \\hline\n            \\multicolumn{12}{|c|}{ASCII Table} \\\\\n            \\hline\n        % x & n     & x  & n & x  & n & x  & n & x  & n  & x   & n\n            32 & (space) & 48 & 0 & 64 & @ & 80 & P & 96 & ` & 112 & p \\\\ \n            33 & !     & 49 & 1 & 65 & A & 81  & Q & 97  & a  & 113 & q \\\\\n            34 & \"  & 50 & 2 & 66 & B & 82 & R & 98 & b & 114 & r \\\\\n            35 & \\# & 51 & 3 & 67 & C & 83 & S & 99 & c & 115 & s \\\\\n            36 & \\$ & 52 & 4 & 68 & D & 84 & T & 100 & d & 116 & t \\\\\n            37 & \\% & 53 & 5 & 69 & E & 85 & U & 101 & e & 117 & u \\\\\n            38 & \\& & 54 & 6 & 70 & F & 86 & V & 102 & f & 118 & v \\\\\n            39 & '  & 55 & 7 & 71 & G & 87 & W & 103 & g & 119 & w \\\\\n            40 & (  & 56 & 8 & 72 & H & 88 & X & 104 & h & 120 & x \\\\\n            41 & )  & 57 & 9 & 73 & I & 89 & Y & 105 & i & 121 & y \\\\\n            42 & *  & 58 & : & 74 & J & 90 & Z & 106 & j & 122 & z \\\\\n            43 & +  & 59 & ; & 75 & K & 91 & [ & 107 & k & 123 & \\{ \\\\ \n            44 & ,  & 60 & < & 76 & L & 92 & \\textbackslash & 108 & l & 124 & $\\vert$ \\\\\n            45 & -  & 61 & = & 77 & M & 93 & ] & 109 & m & 125 & \\} \\\\\n            46 & .  & 62 & > & 78 & N & 94 & \\textasciicircum & 110 & n & 126 & \\textasciitilde \\\\\n            47 & /  & 63 & ? 
& 79 & O & 95 & \\textunderscore & 111 & o & 127 & (delete) \\\\\n            \\hline\n       \\end{tabular}\n       \\caption{The relevant parts of the ASCII table.}\n\\end{table}\n\nAssume you wanted to shift the letter \\textit{\"A\"} by three spaces; you would do the following:\n\\begin{itemize}\n    \\item Convert \"A\" to its respective ASCII ordinal: 65.\n    \\item Add 3 to 65, to get the ordinal: 68.\n    \\item Convert the new ordinal, 68, back to its letter form to get \\textit{\"D\"}.\n\\end{itemize} \n\nIn Python, the \\texttt{chr()} and \\texttt{ord()} functions allow us to convert between ordinals and letters \\cite{caesar_cipher_invent_with_python}. \nThe \\texttt{chr()} function takes an integer as a parameter and returns the respective ASCII character \\cite{caesar_cipher_invent_with_python}. Conversely, \nthe \\texttt{ord()} function takes a character as a parameter and returns the respective ASCII ordinal \\cite{caesar_cipher_invent_with_python}. 
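\n\nAs a quick sanity check, the three steps above can be reproduced directly with these two functions (a small illustrative snippet, separate from the cipher program below):\n\n\\begin{minted}[numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\n# Convert \"A\" to its ordinal, add the shift offset of 3,\n# then convert the new ordinal back into a letter.\nordinal = ord(\"A\")     # 65\nshifted = ordinal + 3  # 68\nprint(chr(shifted))    # D\n\\end{minted}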
\n\nThe code below represents an implementation of the Caesar cipher:\n\n\\begin{listing}[H]\n    \\begin{minted}[mathescape,\n        numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\ndef cipher(message, shift):\n    result = str()\n    for character in message:\n        # Shift the raw ordinal of every character; note that this\n        # does not wrap around the alphabet, so non-letters can appear.\n        new_ordinal = ord(character) + shift\n        result += chr(new_ordinal)\n        \n    print(f\"Your translated text is {result}\")\n    \n# 1 is encrypt, 2 is decrypt\nmode = int(input(\"Should I encrypt (1) or decrypt (2)? \"))\n    \n# Make sure that our shift offset is within range.\nshift = int(input(\"What is the shift offset? \")) % 26\nmessage = input(\"What is the message? \")\n    \nif mode == 1:\n    cipher(message, shift)\nelif mode == 2:\n    cipher(message, -shift)\nelse:\n    print(\"Invalid mode input. Try again.\")    \n    \\end{minted}\n    \\caption{Full implementation of Caesar cipher in Python.}\n\\end{listing}\n\nThe code for the Caesar cipher is rather simple. The actual cipher process rests within the \\texttt{cipher()} function.\nThe encryption and decryption processes are the reverse of each other---yet even then, they share nearly identical code.\nThe only difference between the two is the \\textit{direction} in which they shift. Since the direction is a binary decision, \nrepresented by an addition or a subtraction, we can simply combine both processes into one generic function. In order to encrypt\nusing the \\texttt{cipher()} function, we pass in the shift as a \\textit{positive} value; when we decrypt, we pass in the shift\nvalue as a \\textit{negative}. More specifically, the encryption and decryption functions are defined as follows:\n\n\\begin{listing}[H]\n    \\begin{minted}[mathescape,\n        numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\ndef encrypt(message, shift):\n    return cipher(message, shift)\n\ndef decrypt(message, shift):\n    return cipher(message, -shift)\n    \\end{minted}\n\\end{listing}\n\nThe above code illustrates the similarity between the two processes. As you can see, whether we decrypt or encrypt is \ndetermined by the sign of the shift offset.\n\n\\subsection{Breaking the Caesar Cipher}\n\nThe simplicity of the Caesar cipher is unparalleled among ciphers---we can see this in its implementation.\nFor instance, the transposition cipher (mentioned earlier in this chapter), though simple, required separate changes to the encryption and\ndecryption processes. Nevertheless, simplicity does come with a cost: as discussed earlier, the Caesar cipher is rather insecure. \nWe can easily try all 25 candidate keys to figure out the ciphertext; this is known as a \\textit{brute force} method\nof breaking a cipher. The following code shows how we can break a Caesar cipher through brute force. \n\n\\begin{listing}[H]\n    \\begin{minted}[mathescape,\n        numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\ndef cipher(message, shift):\n    result = str()\n    for character in message:\n        new_ordinal = ord(character) + shift\n        result += chr(new_ordinal)\n        \n    print(f\"Your translated text is {result}\")\n    \ndef solve(message, max_key):\n    # Try every shift offset from 1 up to (but not including) max_key,\n    # decrypting by shifting in the negative direction.\n    for i in range(1, max_key):\n        cipher(message, -i)\n    \nmessage = input(\"What do you want to solve? \")\nmax_key = int(input(\"What is the max key? \"))\nsolve(message, max_key)  \n    \\end{minted}\n    \\caption{Implementation of a Caesar cipher solver in Python.}\n\\end{listing}\n\nThe solver code is rather simple: we run \\texttt{cipher()} with every possible shift offset from \\textit{1}\nup to (but not including) the \\textit{max key}. Recall that the message \\textit{\"Hello\"} is encrypted to \\textit{\"Khoor\"} when encrypting with a \nshift offset of \\textit{3}. Let's take a look at a sample run of the solver on the encrypted message: \\textit{\"Khoor\"}.\n\n\\newcommand{\\reducedstrut}{\\vrule width 0pt height .9\\ht\\strutbox depth .9\\dp\\strutbox\\relax}\n\\newcommand{\\yellow}[1]{%\n  \\begingroup\n  \\setlength{\\fboxsep}{0pt}%  \n  \\colorbox{yellow}{\\reducedstrut#1\\/}%\n  \\endgroup\n}\n\n\\begin{minted}[\n        escapeinside=||,\n        mathescape=true,\n        numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{text}\nWhat do you want to solve? Khoor\nWhat is the max key? 26\nYour translated text is Jgnnq\nYour translated text is Ifmmp\n|$\\yellow{\\texttt{Your translated text is Hello}}$|\nYour translated text is Gdkkn\nYour translated text is Fcjjm\nYour translated text is Ebiil\nYour translated text is Dahhk\nYour translated text is C`ggj\nYour translated text is B_ffi\nYour translated text is A^eeh\nYour translated text is @]ddg\nYour translated text is ?\\ccf\nYour translated text is >[bbe\nYour translated text is =Zaad\nYour translated text is <Y``c\nYour translated text is ;X__b\nYour translated text is :W^^a\nYour translated text is 9V]]`\nYour translated text is 8U\\\\_\nYour translated text is 7T[[^\nYour translated text is 6SZZ]\nYour translated text is 5RYY\\\nYour translated text is 4QXX[\nYour translated text is 3PWWZ\nYour translated text is 2OVVY\n\\end{minted}\n\\begingroup\n    \\captionof{listing}{Sample run of a Caesar cipher solver in Python.} \n\\endgroup\n\nThe highlighted text above shows the decrypted message: \"Hello.\" As you can see, we were able to decrypt our ciphertext\nusing the brute force method rather simply. \n\nEven though the Caesar cipher is rather basic and can be easily broken, it is not without value. Undoubtedly, the Caesar cipher gives\noff the impression of an encrypted message---and to the layman, \\textit{it is}. For simple usage, the Caesar cipher does indeed\nobscure the meaning of messages. 
In brief, since the Caesar cipher is simple to implement and rather efficient in its\nencryption and decryption processes, it is certainly not a candidate which should be ruled out entirely; rather, one should build on the \nconcepts established by the Caesar cipher in order to create a better cipher.\n\n\\section{Block Ciphers}\n\nA block cipher is a type of cryptography algorithm which operates on \\textit{fixed-length} groups of bits---each such group is called a \\textit{block} \\cite{wiki:block_cipher}.\nBlock ciphers play an integral role in modern-day cryptography design; many cryptography methods are based on the design of the block cipher \\cite{wiki:block_cipher}.\n\nEssentially, a block cipher takes a block of plaintext and produces a block of ciphertext, and vice-versa for decryption. \nThe size of a block can be any number of bits but is fixed within a given scheme. \n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (4.6, 2.5) [rectangle, draw, align=center, minimum width=3.5cm, minimum height=1cm] {Round Key};\n        \\node at (4.6, 0) [rectangle, draw, fill=black!10, align=center, text width = 2cm, minimum width=3.5cm, minimum height=1.6cm] {Encryption Process};\n        \\node at (0, 0) [rectangle, draw, align=center, minimum width=3.5cm, minimum height=1cm] {Block of Plaintext};\n        \\node at (9.2, 0) [rectangle, draw, align=center, minimum width=3.5cm, minimum height=1cm] {Block of Ciphertext};\n        \\draw[->, line width=0.5mm] (1.75, 0) -- (2.85, 0);\n        \\draw[->, line width=0.5mm] (4.6+1.75, 0) -- (4.6+1.75+1.1, 0);\n        \\draw[->, line width=0.5mm] (4.6, 2) -- (4.6, 0.8);\n    \\end{tikzpicture}\n    \\caption{A diagram of the basic flow of a block cipher.}\n\\end{figure}\n\nThough there are different types of block ciphers, most are referred to as \\textit{iterated block ciphers} \\cite{wiki:block_cipher}. \nIn an iterated block cipher, for every block of plaintext, the cipher produces an identically sized block of ciphertext. Each iteration of the cipher is \nknown as a \\textit{round} \\cite{wiki:block_cipher}. In each round, the blocks are operated on via a reversible transformation function \nknown as a \\textit{round function} \\cite{wiki:block_cipher}.\n\nThe round function will also typically take in different \\textit{round keys} as a secondary input. These round keys are created from the original key. \n\n\\begin{align*}\n    \\mathlarger{M_i = R_{K_i}(M_{i-1})}\n\\end{align*}\n\nThe equation above defines the generic operation of a round, where $M_0$ represents the plaintext, $M_r$ represents the ciphertext, and \n$r$ represents the number of rounds.\n\nWhen \\textit{key whitening} is added, that is, XORing the block with additional key material before the first round and after the last, the full encryption of a plaintext block $M$ into a ciphertext block $C$ can be written as:\n\n\\begin{align*}\n    \\mathlarger{M_0\\;} & \\mathlarger{= M \\oplus K_0} \\\\\n    \\mathlarger{M_i\\;} & \\mathlarger{= R_{K_i}(M_{i - 1});\\; i=1 \\ldots r} \\\\\n    \\mathlarger{C\\;} & \\mathlarger{= M_r \\oplus K_{r + 1}} \\\\\n\\end{align*}
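\n\nTo make this round structure concrete, here is a minimal Python sketch of an iterated block cipher with key whitening (the toy round function and the 32-bit block size are stand-ins of our own choosing, picked purely to illustrate the data flow; they provide no real security):\n\n\\begin{minted}[numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\n# A toy iterated block cipher on 32-bit blocks, following\n# M_0 = M xor K_0, M_i = R(M_{i-1}, K_i), C = M_r xor K_{r+1}.\nMASK = 0xFFFFFFFF\n\ndef round_function(block, round_key):\n    # A stand-in reversible transformation: XOR with the round\n    # key, then rotate the 32-bit block left by 5 bits.\n    mixed = (block ^ round_key) & MASK\n    return ((mixed << 5) | (mixed >> 27)) & MASK\n\ndef encrypt_block(block, round_keys):\n    block ^= round_keys[0]            # pre-whitening\n    for key in round_keys[1:-1]:      # the r rounds\n        block = round_function(block, key)\n    return block ^ round_keys[-1]     # post-whitening\n\\end{minted}\n\nDecryption would simply undo each step in the opposite order: XOR away the post-whitening key, invert each round (rotate right, then XOR), and finally remove the pre-whitening key.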
\n\n\\subsection{Feistel Cipher}\n\nA \\textit{Feistel cipher} (also commonly referred to as a \\textit{Feistel network}) is a cryptographic model used for creating block ciphers \\cite{wiki:feistel_cipher}.\nIn a Feistel cipher, a block is split into two equal-sized halves. The round function is then applied to one half, and its output is XORed with the other half \\cite{wiki:feistel_cipher}. \nThe resulting halves are then swapped with each other \\cite{wiki:feistel_cipher}.\n\nThe basic encryption process for a Feistel cipher---where $F(R, K)$ represents the round function and $K_0,K_1,\\ldots,K_n$ represent the subkeys for the rounds \n$0,1,\\ldots,n$ respectively---is as follows:\n\n\\begin{enumerate}\n    \\item Split the plaintext into two equal halves, $L_0$ and $R_0$.\n    \\item For each round $i = 0,1,\\ldots,n$, where $n$ represents the number of rounds, compute the following: \n        \\begin{align*}\n            \\mathlarger{L_{i + 1}\\;} & \\mathlarger{= R_i} \\\\\n            \\mathlarger{R_{i + 1}\\;} & \\mathlarger{= L_i \\oplus F(R_i,K_i)}\n        \\end{align*}\n\\end{enumerate}\n\nThe resulting block of ciphertext is then $(L_{n + 1},R_{n + 1})$.\n\nThe process to decrypt a block of ciphertext is the reverse of the encryption process. For each round $i = n,n-1,\\ldots,0$, \nwe compute the following:\n\\begin{align*}\n    \\mathlarger{R_{i}\\;} & \\mathlarger{= L_{i + 1}} \\\\\n    \\mathlarger{L_{i}\\;} & \\mathlarger{= R_{i + 1} \\oplus F(L_{i + 1},K_i)}\n\\end{align*}\n\nThe resulting block of plaintext is then $(L_0,R_0)$.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\definecolor{greenarrow}{RGB}{0,120,10}\n        \\definecolor{redarrow}{RGB}{120,0,0}\n        \\node at (1, 6) {Plaintext};\n        \\node at (0, 5.4) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$L_0$};\n        \\node at (2, 5.4) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$R_0$};\n         \n        \\draw[fill=purple!10] (1, 4.5) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 4.5) {$K_0$};\n        \\node at (1, 3.5) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 4.2) -- (1, 3.75);\n        \\draw (0, 3.5) circle (0.2cm);\n        \\draw (0, 3.7) -- (0, 3.3);\n        \\draw (-0.2, 3.5) -- (0.2, 3.5);\n        \\draw[->] (0.75, 3.5) -- (0.2, 3.5);\n        \\draw[->] (2, 3.5) -- (1.25, 3.5);\n         \n        \\draw[fill=purple!10] (1, 2) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 2) {$K_1$};\n        \\node at (1, 1) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 1.7) -- (1, 1.25);\n        \\draw (0, 1) circle (0.2cm);\n        \\draw (0, 1.2) -- (0, 0.8);\n        \\draw (-0.2, 1) -- (0.2, 1);\n        \\draw[->] (0.75, 1) -- (0.2, 1);\n        \\draw[->] (2, 1) -- (1.25, 1);\n         \n        \\newcommand{\\down}{3}\n        \\draw[fill=purple!10] (1, 2-\\down) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 2-\\down) {$K_n$};\n        \\node at (1, 1-\\down) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 1.7-\\down) -- (1, 1.25-\\down);\n        \\draw (0, 1-\\down) circle (0.2cm);\n        \\draw (0, 1.2-\\down) -- (0, 0.8-\\down);\n        \\draw (-0.2, 1-\\down) -- (0.2, 1-\\down);\n        \\draw[->] (0.75, 1-\\down) -- (0.2, 1-\\down);\n        \\draw[->] (2, 1-\\down) -- (1.25, 1-\\down);\n         \n        \\draw[->, draw=greenarrow, thick] (0, 5.15) -- (0, 3.7);\n        \\draw[draw=greenarrow, thick] (0, 3.3) -- (0, 3.1) -- (2, 2.9) -- (2, 0.6) -- (0, 0.4);\n        \\draw[->, draw=redarrow, thick] (2, 
5.15) -- (2, 3.1) -- (0, 2.9) -- (0, 1.2);\n        \\draw[draw=redarrow, thick] (0, 0.8) -- (0, 0.6) -- (2, 0.4);\n        \\draw[->, draw=greenarrow, thick] (0, 0) -- (0, -1.8);\n        \\draw[->, draw=greenarrow, thick] (0, -2.2) -- (0, -2.5);\n        \\draw[->, draw=redarrow, thick] (2, 0) -- (2, -2.5);\n         \n        \\node at (0, -2.8) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$R_{n+1}$};\n        \\node at (2, -2.8) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$L_{n+1}$};\n        \\node at (1, -3.5) {Ciphertext};\n         \n        \\draw[thick,dashed] (1, 0.2) -- (1, -0.4);\n         \n        \\begin{scope}[xshift=6cm]\n        \\node at (1, 6) {Ciphertext};\n        \\node at (0, 5.4) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$R_{n+1}$};\n        \\node at (2, 5.4) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$L_{n+1}$};\n         \n        \\draw[fill=purple!10] (1, 4.5) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 4.5) {$K_n$};\n        \\node at (1, 3.5) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 4.2) -- (1, 3.75);\n        \\draw (0, 3.5) circle (0.2cm);\n        \\draw (0, 3.7) -- (0, 3.3);\n        \\draw (-0.2, 3.5) -- (0.2, 3.5);\n        \\draw[->] (0.75, 3.5) -- (0.2, 3.5);\n        \\draw[->] (2, 3.5) -- (1.25, 3.5);\n         \n        \\draw[fill=purple!10] (1, 2) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 2) {$K_{n-1}$};\n        \\node at (1, 1) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 1.7) -- (1, 1.25);\n        \\draw (0, 1) circle (0.2cm);\n        \\draw (0, 1.2) -- (0, 0.8);\n        \\draw (-0.2, 1) -- (0.2, 1);\n        \\draw[->] (0.75, 1) -- (0.2, 1);\n        \\draw[->] (2, 1) -- (1.25, 1);\n         \n        \\draw[fill=purple!10] (1, 2-\\down) ellipse (0.5cm and 0.3cm);\n        \\node at (1, 2-\\down) {$K_0$};\n        \\node at (1, 1-\\down) [rectangle, draw, fill=yellow!10, align=center, minimum width=0.5cm, minimum height=0.5cm] {F};\n        \\draw[->] (1, 1.7-\\down) -- (1, 1.25-\\down);\n        \\draw (0, 1-\\down) circle (0.2cm);\n        \\draw (0, 1.2-\\down) -- (0, 0.8-\\down);\n        \\draw (-0.2, 1-\\down) -- (0.2, 1-\\down);\n        \\draw[->] (0.75, 1-\\down) -- (0.2, 1-\\down);\n        \\draw[->] (2, 1-\\down) -- (1.25, 1-\\down);\n         \n        \\draw[->, draw=redarrow, thick] (0, 5.15) -- (0, 3.7);\n        \\draw[draw=redarrow, thick] (0, 3.3) -- (0, 3.1) -- (2, 2.9) -- (2, 0.6) -- (0, 0.4);\n        \\draw[->, draw=greenarrow, thick] (2, 5.15) -- (2, 3.1) -- (0, 2.9) -- (0, 1.2);\n        \\draw[draw=greenarrow, thick] (0, 0.8) -- (0, 0.6) -- (2, 0.4);\n        \\draw[->, draw=redarrow, thick] (0, 0) -- (0, -1.8);\n        \\draw[->, draw=redarrow, thick] (0, -2.2) -- (0, -2.5);\n        \\draw[->, draw=greenarrow, thick] (2, 0) -- (2, -2.5);\n         \n        \\node at (0, -2.8) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$L_0$};\n        \\node at (2, -2.8) [rectangle, draw, fill=blue!10, align=center, minimum width=2cm, minimum height=0.4cm] {$R_0$};\n        \\node at (1, -3.5) {Plaintext};\n         \n        \\draw[thick,dashed] (1, 0.2) -- (1, -0.4);\n        \\end{scope}\n    \\end{tikzpicture}\n    \\caption{A diagram 
depicting the encryption and decryption process in a Feistel cipher.}\n\\end{figure}\n\n\\subsubsection{Data Encryption Standard}\n\nThe \\textit{Data Encryption Standard} (or \\textit{DES} for short) is a symmetric-key algorithm. The algorithm was developed in the early 1970s by IBM and is based\non the design of the Feistel cipher. The algorithm (after a few changes) was eventually published as an official United States \\textit{Federal Information Processing \nStandard} (or \\textit{FIPS} for short) in 1977. The publication of the algorithm resulted in international adoption and academic scrutiny \\cite{wiki:des}. \n\nThe Data Encryption Standard is now generally regarded as insecure (for many applications) \\cite{wiki:des}. This is mainly because DES has a relatively small key size of \n56 bits. Despite this, DES was \"highly influential in the advancements of modern cryptography.\" \\cite{wiki:des} \n\nDES is structured much like any other iterated block cipher. In the cipher, a block consists of 64 bits \\cite{wiki:des}. Also, DES uses a key to modify the \ntransformational operations---a key is effectively only 56 bits, as the algorithm uses 8 bits for error checking (and thus disregards them in the actual operations) \n\\cite{wiki:des}.\n\nDES additionally makes use of \\textit{initial} and \\textit{final} permutations. Before transforming a block, an initial permutation table is applied to the 64-bit block of data.\nOnce the block is operated on, a final permutation table is applied to the resulting 64-bit block of data. The initial permutation table reverses the actions of the\nfinal permutation table, and vice-versa \\cite{wiki:des}.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (3, 11) {Plaintext (64 bits)};\n        \\draw[fill=blue!10] (0, 10) rectangle (6, 9.5);\n        \\node at (3, 9.75) {Initial Permutation (IP)};\n        \\begin{scope}[yshift=8.2cm]\n        \\draw[fill=yellow!10] (2.2, 0.25) rectangle (3.8, -0.25);\n        \\node at (3, 0) {F};\n        \\draw[draw=red] (0.5, 0) circle [radius=0.2];\n        \\draw[draw=red] (0.3, 0) -- (0.7, 0);\n        \\draw[draw=red] (0.5, 0.2) -- (0.5, -0.2);\n        \\draw[->] (5.5, 0) -- (3.8, 0);\n        \\draw[->] (2.2, 0) -- (0.7, 0);\n        \\end{scope}\n        \\begin{scope}[yshift=6.2cm]\n        \\draw[fill=yellow!10] (2.2, 0.25) rectangle (3.8, -0.25);\n        \\node at (3, 0) {F};\n        \\draw[draw=red] (0.5, 0) circle [radius=0.2];\n        \\draw[draw=red] (0.3, 0) -- (0.7, 0);\n        \\draw[draw=red] (0.5, 0.2) -- (0.5, -0.2);\n        \\draw[->] (5.5, 0) -- (3.8, 0);\n        \\draw[->] (2.2, 0) -- (0.7, 0);\n        \\end{scope}\n        \\begin{scope}[yshift=3.0cm]\n        \\draw[fill=yellow!10] (2.2, 0.25) rectangle (3.8, -0.25);\n        \\node at (3, 0) {F};\n        \\draw[draw=red] (0.5, 0) circle [radius=0.2];\n        \\draw[draw=red] (0.3, 0) -- (0.7, 0);\n        \\draw[draw=red] (0.5, 0.2) -- (0.5, -0.2);\n        \\draw[->] (5.5, 0) -- (3.8, 0);\n        \\draw[->] (2.2, 0) -- (0.7, 0);\n        \\end{scope}\n        \\begin{scope}[yshift=1.0cm]\n        \\draw[fill=yellow!10] (2.2, 0.25) rectangle (3.8, -0.25);\n        \\node at (3, 0) {F};\n        \\draw[draw=red] (0.5, 0) circle [radius=0.2];\n        \\draw[draw=red] (0.3, 0) -- (0.7, 0);\n        \\draw[draw=red] (0.5, 0.2) -- (0.5, -0.2);\n        \\draw[->] (5.5, 0) -- (3.8, 0);\n        \\draw[->] (2.2, 0) -- (0.7, 0);\n        \\end{scope}\n        \\draw[fill=blue!10] (0, 0) rectangle 
(6, -0.5);\n        \\node at (3, -0.25) {Final Permutation (FP)};\n        \\node at (3, -1.5) {Ciphertext (64 bits)};\n         \n        \\draw[->] (0.5, 11) -- (0.5, 10);\n        \\draw[->] (5.5, 11) -- (5.5, 10);\n         \n        \\draw[->] (0.5, 9.5) -- (0.5, 8.4);\n        \\draw (0.5, 8) -- (0.5, 7.5) -- (5.5, 6.9) -- (5.5, 5.5) -- (0.5, 4.9) -- (0.5, 4.8);\n        \\draw[dashed] (0.5, 4.8) -- (0.5, 3.5);\n        \\draw[->] (0.5, 3.5) -- (0.5, 3.2);\n        \\draw[->] (0.5, 2.8) -- (0.5, 2.3) -- (5.5, 1.7) -- (5.5, 0);\n         \n        \\draw[->] (5.5, 9.5) -- (5.5, 7.5) -- (0.5, 6.9) -- (0.5, 6.4);\n        \\draw (0.5, 6) -- (0.5, 5.5) -- (5.5, 4.9) -- (5.5, 4.8);\n        \\draw[dashed] (5.5, 4.8) -- (5.5, 3.5);\n        \\draw[->] (5.5, 3.5) -- (5.5, 2.3) -- (0.5, 1.7) -- (0.5, 1.2);\n        \\draw[->] (0.5, 0.8) -- (0.5, 0);\n         \n        \\draw[->] (0.5, -0.5) -- (0.5, -1.5);\n        \\draw[->] (5.5, -0.5) -- (5.5, -1.5);\n         \n        \\node at (3, 4.2) {for 16 rounds};\n    \\end{tikzpicture}\n    \\caption{The overall structure of DES.}\n\\end{figure}\n\nIn each round, a \\textit{subkey} derived from the original key is used to transform the block of data (the subkeys are supplied in the reverse order when decrypting).\nThe subkeys are produced by a process known as the \\textit{key schedule}. The key schedule defines how the original key should be rotated to create the subkey for a given\nround. A \\textit{permutation choice 1} (or \\textit{PC1} for short) table is applied to the original key before the subkey process. For each subkey, a \\textit{permutation\nchoice 2} (or \\textit{PC2} for short) table is applied.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (6, 10) {Key (64 bits)};\n        \\draw[fill=blue!10] (4, 9) rectangle (8, 8);\n        \\node at (6, 8.5) {PC1};\n        \\draw[->] (6, 9.8) -- (6, 9);\n         \n        \\begin{scope}[yshift=6cm]\n        \\draw[fill=yellow!10] (5, 0.5) rectangle (7, -0.5);\n        \\node at (6, 0) {PC2};\n        \\draw[draw=red] (4, 1.1) rectangle (5, 0.7);\n        \\node at (4.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[draw=red] (7, 1.1) rectangle (8, 0.7);\n        \\node at (7.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[->] (5, 0.9) -- (5.5, 0.9) -- (5.5, 0.5);\n        \\draw[->] (7, 0.9) -- (6.5, 0.9) -- (6.5, 0.5);\n        \\draw[->] (5, 0) -- (3, 0);\n        \\end{scope}\n        \\begin{scope}[yshift=4cm]\n        \\draw[fill=yellow!10] (5, 0.5) rectangle (7, -0.5);\n        \\node at (6, 0) {PC2};\n        \\draw[draw=red] (4, 1.1) rectangle (5, 0.7);\n        \\node at (4.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[draw=red] (7, 1.1) rectangle (8, 0.7);\n        \\node at (7.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[->] (5, 0.9) -- (5.5, 0.9) -- (5.5, 0.5);\n        \\draw[->] (7, 0.9) -- (6.5, 0.9) -- (6.5, 0.5);\n        \\draw[->] (5, 0) -- (3, 0);\n        \\end{scope}\n        \\begin{scope}[yshift=1cm]\n        \\draw[fill=yellow!10] (5, 0.5) rectangle (7, -0.5);\n        \\node at (6, 0) {PC2};\n        \\draw[draw=red] (4, 1.1) rectangle (5, 0.7);\n        \\node at (4.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[draw=red] (7, 1.1) rectangle (8, 0.7);\n        \\node at (7.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[->] (5, 0.9) -- (5.5, 0.9) -- (5.5, 0.5);\n        \\draw[->] (7, 0.9) -- (6.5, 0.9) -- (6.5, 0.5);\n        \\draw[->] (5, 0) -- (3, 0);\n        \\end{scope}\n        
\\begin{scope}[yshift=-1cm]\n        \\draw[fill=yellow!10] (5, 0.5) rectangle (7, -0.5);\n        \\node at (6, 0) {PC2};\n        \\draw[draw=red] (4, 1.1) rectangle (5, 0.7);\n        \\node at (4.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[draw=red] (7, 1.1) rectangle (8, 0.7);\n        \\node at (7.5, 0.9) {\\textcolor{red}{$<<$}};\n        \\draw[->] (5, 0.9) -- (5.5, 0.9) -- (5.5, 0.5);\n        \\draw[->] (7, 0.9) -- (6.5, 0.9) -- (6.5, 0.5);\n        \\draw[->] (5, 0) -- (3, 0);\n        \\end{scope}\n         \n        \\draw (6, 8) -- (6, 7.5);\n        \\draw[->] (6, 7.5) -- (4.5, 7.5) -- (4.5, 7.1);\n        \\draw[->] (6, 7.5) -- (7.5, 7.5) -- (7.5, 7.1);\n        \\draw[->] (4.5, 6.7) -- (4.5, 5.1);\n        \\draw[->] (7.5, 6.7) -- (7.5, 5.1);\n        \\draw (4.5, 4.7) -- (4.5, 3.5);\n        \\draw[dashed] (4.5, 3.5) -- (4.5, 2.5);\n        \\draw[->] (4.5, 2.5) -- (4.5, 2.1);\n        \\draw (7.5, 4.7) -- (7.5, 3.5);\n        \\draw[dashed] (7.5, 3.5) -- (7.5, 2.5);\n        \\draw[->] (7.5, 2.5) -- (7.5, 2.1);\n        \\draw[->] (4.5, 1.7) -- (4.5, 0.1);\n        \\draw[->] (7.5, 1.7) -- (7.5, 0.1);\n         \n        \\node at (1, 6) {Subkey 1 (48 bits)};\n        \\node at (1, 4) {Subkey 2 (48 bits)};\n        \\node at (1, 1) {Subkey 15 (48 bits)};\n        \\node at (1, -1) {Subkey 16 (48 bits)};\n    \\end{tikzpicture}\n    \\caption{The structure of the DES key schedule. The $\\mathsmaller{<<}$ denotes a rotation to the left.}\n\\end{figure}\n\nThe round function $F(R, K)$ is defined as follows:\n\n\\begin{enumerate}\n    \\item The 32-bit half-block is expanded to 48 bits using the \\textit{expansion permutation table} (E). The expansion permutation duplicates half of the bits in the \n          32-bit block in order to expand the data. \n    \\item The result of the expansion is then combined with the given subkey for the round (using an XOR operation).\n    \\item The result of the key combination is split into eight 6-bit blocks of data, which are then processed by the \\textit{substitution boxes}---one 6-bit block for \n    each substitution box. Each substitution box replaces its 6-bit input with a 4-bit output; this substitution is a non-linear transformation.\n    \\item Lastly, the outputs from the substitution process are combined into one 32-bit block of data which is rearranged by a fixed permutation table known as the\n        \\textit{permutation-box}. 
The permutation-box shuffles the output of each substitution to increase cryptographic security.\n\\end{enumerate}\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (1, 6) {Half Block (32 bits)};\n        \\node at (5, 6) {Subkey (48 bits)};\n        \\draw[fill=blue!10] (0, 5.25) rectangle (2, 4.75);\n        \\node at (1, 5) {E};\n        \\draw[draw=red] (3.15, 4) circle [radius=0.2];\n        \\draw[draw=red] (2.95, 4) -- (3.35, 4);\n        \\draw[draw=red] (3.15, 4.2) -- (3.15, 3.8);\n        \\draw[->] (1, 5.8) -- (1, 5.25);\n        \\draw[->] (1, 4.75) -- (1, 4) -- (2.95, 4);\n        \\draw[->] (5, 5.8) -- (5, 4) -- (3.35, 4);\n        \\draw[->] (3.15, 3.8) -- (3.15, 3.4);\n        \\draw (0, 3.4) -- (6.3, 3.4);\n        \\begin{scope}[xshift=0cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S1};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=0.8cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S2};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=1.6cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S3};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=2.4cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S4};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=3.2cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S5};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        
\\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=4cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S6};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=4.8cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S7};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\begin{scope}[xshift=5.6cm]\n        \\draw[fill=yellow!10] (0, 3) rectangle (0.7, 2.5);\n        \\node at (0.35, 2.75) {S8};\n        \\draw[->] (0, 3.4) -- (0, 3);\n        \\draw[->] (0.14, 3.4) -- (0.14, 3);\n        \\draw[->] (0.28, 3.4) -- (0.28, 3);\n        \\draw[->] (0.42, 3.4) -- (0.42, 3);\n        \\draw[->] (0.56, 3.4) -- (0.56, 3);\n        \\draw[->] (0.7, 3.4) -- (0.7, 3);\n        \\draw (0.1, 2.5) -- (0.1, 2.1);\n        \\draw (0.267, 2.5) -- (0.267, 2.1);\n        \\draw (0.433, 2.5) -- (0.433, 2.1);\n        \\draw (0.6, 2.5) -- (0.6, 2.1);\n        \\end{scope}\n        \\draw (0.1, 2.1) -- (6.2, 2.1);\n        \\draw[->] (3.15, 2.1) -- (3.15, 1.5);\n        \\draw[fill=green!10] (0, 1.5) rectangle (6.3, 0.5);\n        \\node at (3.15, 1) {P};\n        \\draw[->] (3.15, 0.5) -- (3.15, 0);\n    \\end{tikzpicture}\n    \\caption{The structure of the round function in DES. 
\\textit{S1-S8} denote their respective substitution box.}\n\\end{figure}
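\n\nTo close the chapter, it is worth seeing the Feistel structure described above in runnable form. The following is a toy Python sketch of a Feistel network in the spirit of DES; the round function, the 32-bit block size, the round count and the hard-coded subkeys are all illustrative stand-ins of our own (the real DES \\textit{F} uses the expansion, substitution boxes and permutation described above):\n\n\\begin{minted}[numbersep=5pt,\n        frame=lines,\n        framesep=2mm]{python}\n# A toy Feistel network on 32-bit blocks (two 16-bit halves), following\n# L_{i+1} = R_i and R_{i+1} = L_i xor F(R_i, K_i).\nMASK16 = 0xFFFF\n\ndef F(half, subkey):\n    # Stand-in round function; it only needs to be deterministic,\n    # not reversible, for the network to be invertible.\n    return ((half * 31 + subkey) ^ (half >> 3)) & MASK16\n\ndef rounds(left, right, subkeys):\n    for k in subkeys:\n        left, right = right, left ^ F(right, k)\n    return left, right\n\ndef encrypt(block, subkeys):\n    left, right = block >> 16, block & MASK16\n    left, right = rounds(left, right, subkeys)\n    return (left << 16) | right\n\ndef decrypt(block, subkeys):\n    # The same network run with the subkeys reversed and the\n    # halves swapped on the way in and on the way out.\n    left, right = block >> 16, block & MASK16\n    right, left = rounds(right, left, list(reversed(subkeys)))\n    return (left << 16) | right\n\nsubkeys = [0x0F0F, 0x3C3C, 0x5A5A, 0xA5A5]\nciphertext = encrypt(0x12345678, subkeys)\nassert decrypt(ciphertext, subkeys) == 0x12345678\n\\end{minted}\n\nThe assertion at the end demonstrates the defining property of the Feistel construction: decryption is the very same network, so the round function never needs to be inverted.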
{"text": "\\documentclass[../main]{subfiles}\n\\begin{document}\n\n\\chapter{Algebra, Number Theory and Related Assumptions}\n\n\\section{Divisors, primes, etc.}\n\n\\begin{lemma}\n\t\tIf $n, m \\in{} \\mathbb{N}$ and $m > 1$, then n is invertible modulo m (there exists p such that np mod m = 1) whenever gcd(n, m) = 1, i. e. when n, m are comprime.\n\\end{lemma}\n\n\\section{Primes and factoring}\n\n\\noindent\n$\\textbf{wFactor}_A (n):$\\\\\n$(x, y) \\leftarrow{} \\mathbb{N} \\times{} \\mathbb{N} \\; with \\; |x| = |y| = n$\\\\\n$N \\leftarrow{} x \\cdot{} y$\\\\\n$(z, w) \\leftarrow{} A(N)$\\\\\n$\\textbf{Result:} \\; z \\cdot{} w = N$\\\\\n$$Pr(wFactor_A (n) = 1) = \\varepsilon{} (n)$$\n\n\\noindent\n$\\textbf{for} \\; i \\leftarrow{} 1 \\; to \\; t \\; \\textbf{do}$\\\\\n$\\quad{}\\quad{}r \\leftarrow{} \\{0, 1\\}^{n-1}$\\\\\n$\\quad{}\\quad{}p \\leftarrow{} 1\\mid\\mid r$\\\\\n$\\quad{}\\quad{}\\textbf{if} \\; p \\; is \\; prime \\; \\textbf{then}$\\\\\n$\\quad{}\\quad{}\\quad{}\\quad{}\\textbf{Result:} \\; p$\\\\\n$\\textbf{Result:} \\; fail$\\\\\n\n\\begin{theorem}\n\tThere exists a constant c such that for every $n > 1$ the number of primes that can be represented in exactly n bits is at least equal to $\\frac{c \\cdot{} 2^{n-1}}{n}$.\n\\end{theorem}\n\n\\begin{theorem}\n\tIf $(\\mathbb{G}, \\cdot)$ has order m, then for each $g \\in{} \\mathbb{G}$, it is true that $g^m = 1_{\\mathbb{G}}$.\n\\end{theorem}\n\n\\begin{corollary}\n\tIf $(\\mathbb{G}, \\cdot)$ has order $m > 1$, then for every $g \\in{} \\mathbb{G}$ and for every i, $g^i = g^{[i \\; mod \\; m]}$.\n\\end{corollary}\n\n\\begin{theorem}\n\tLet $N > 1$. For every natural $e > 0$, we define $f_e : \\mathbb{Z}_N* \\rightarrow{} \\mathbb{Z}_N*$ assuming $f_e (x) = x^e \\; mod \\; N$.\n\tIf $gcd(e, \\Phi (N)) = 1$, then $f_e$ is a permutation. 
\n\n\\begin{lemma}\n\tIf $\\mathbb{G}$ has order $m$ and $g \\in{} \\mathbb{G}$ has order $i$, then $i|m$.\n\\end{lemma}\n\n\\begin{theorem}\n\tIf $\\mathbb{G}$ has prime order, then $\\mathbb{G}$ is cyclic and every $g \\in{} \\mathbb{G}$ with $g \\neq{} 1_{\\mathbb{G}}$ generates $\\mathbb{G}.$\n\\end{theorem}\n\n\\noindent\n$\\textbf{DLog}_{A, GenCG} (n):$\\\\\n$(\\mathbb{G}, q, g) \\leftarrow{} GenCG(1^n)$\\\\\n$h \\leftarrow{} \\mathbb{G}$\\\\\n$x \\leftarrow{} A(\\mathbb{G}, q, g, h)$\\\\\n$\\textbf{Result:} \\; g^x = h$\n$$Pr(DLog_{A, GenCG} (n) = 1) = \\varepsilon{} (n)$$\n\n% slides 43-49\n\n\\begin{theorem}\n\tIf factoring is hard relative to \\textbf{GenModulus}, then $f_{GenModulus}$ is a one-way function.\n\\end{theorem}\n\n% Proof 16/11/2021\n\\begin{theorem}\n\tIf $\\mathbb{G}$ is a finite group where $m = |\\mathbb{G}|$ is the order of $\\mathbb{G}$, then for every $g \\in{} \\mathbb{G}$, it holds that $g^m = 1_{\\mathbb{G}}$.\n\\end{theorem}\n\n\\paragraph{Proof}\n\t\\begin{itemize}\n\t\t\\item Let us make the assumption that $\\mathbb{G}$ is abelian (this of course makes the proof less general, but has the advantage of simplifying it).\n\t\t\\item Take $\\mathbb{G} = \\{g_1, \\ldots, g_m\\}$ and consider any $g \\in{} \\mathbb{G}$.\n\t\t\\item Of course\n\t\t\t\t$$g_1 \\cdot{} g_2 \\cdot{} g_3 \\cdot{} \\ldots{} \\cdot{} g_m = (gg_1)\\cdot{}(gg_2)\\ldots(gg_m) \\quad\\quad \\textcolor{red}{(*)}$$\n\t\t\t\tIndeed, the factors in the rhs are all distinct elements of $\\mathbb{G}$, because if $gg_i = gg_j$,\n\t\t\t\tthen $g^{-1}gg_i = g^{-1}gg_j$ and $g_i = g_j$, which would contradict the fact that $|\\{g_1, \\ldots, g_m\\}| = m$.\n\t\t\\item Since $\\mathbb{G}$ is abelian, we can rearrange the rhs of \\textcolor{red}{(*)} and obtain that\n\t\t\t\t$$g_1g_2 \\cdots{} g_m = g^m \\cdot{} (g_1 \\cdot{} g_2 \\cdots g_m)$$\n\t\t\\item We can then multiply both sides of this equation by $(g_1 g_2 \\cdots{} g_m)^{-1}$ and obtain:\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t1_{\\mathbb{G}} &= (g_1 g_2 \\cdots{} g_m) \\cdot (g_1 g_2 \\cdots{} g_m)^{-1}\t\\\\\n\t\t\t\t\t&= g^m \\cdot (g_1 g_2 \\cdots{} g_m) \\cdot (g_1 g_2 \\cdots{} g_m)^{-1}\t\t\\\\\n\t\t\t\t\t&= g^m \\cdot 1_{\\mathbb{G}}\t\t\t\t\t\t\t\t\t\t\\\\\n\t\t\t\t\t&= g^m\n\t\t\t\t\\end{align*}\n\t\\end{itemize}\n\n\\end{document}
"chapters/07_algebra_number_theory_related_assumptions.tex", "max_issues_repo_name": "TommasoAzz/cryptography-notes", "max_issues_repo_head_hexsha": "e694358042d838029aab271d2099676bf12fcbc3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/07_algebra_number_theory_related_assumptions.tex", "max_forks_repo_name": "TommasoAzz/cryptography-notes", "max_forks_repo_head_hexsha": "e694358042d838029aab271d2099676bf12fcbc3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.0275229358, "max_line_length": 169, "alphanum_fraction": 0.5961494385, "num_tokens": 1809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389930307512, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.5589231747061277}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[a4paper,margin=25mm]{geometry}\n\\usepackage{graphicx,subcaption}\n\\usepackage{amsmath,amsfonts}\n\\title{Notes on Louvain Modularity}\n\\author{G.A. Jarrad}\n\\begin{document}\n\\maketitle\n\\numberwithin{equation}{section}\n\\numberwithin{figure}{section}\n\\numberwithin{table}{section}\n\\section{Undirected Modularity}\\label{sec:Q}\nThis section is motivated by the application of the Louvain modularity clustering algorithm to an undirected graph,\nas (incompletely) described by \\cite{blondel08}.\n\nLet $D_{ij}\\ge 0$ represent the weight of a directed edge\n$i\\rightarrow j$ (if one exists) from vertex $i\\in{\\cal V}$ to vertex $j\\in{\\cal V}$. \nThen the equivalent undirected edge has weight\n$A_{ij}=D_{ij}+D_{ji}-\\delta_{ij}D_{ii}$, such that $A_{ij}=A_{ji}$. The total\nweight of all edges for vertex i (its so-called {\\em vertex weight}) is then given by\n\\begin{eqnarray}\n  A_{i\\cdot} & = & \\sum_{j\\in{\\cal V}} A_{ij}~=~\\sum_{j\\in{\\cal V}} A_{ji}~=~A_{\\cdot i}\\,.\n\\label{eq:in:out}\n\\end{eqnarray}\nThe sum of all vertex weights is then given by\n\\begin{eqnarray}\n  A_{\\cdot\\cdot} & = & \\sum_{i\\in{\\cal V}} A_{i\\cdot}\n  ~=~\\sum_{i\\in{\\cal V}}\\sum_{j\\in{\\cal V}} A_{ij}\\,.\n\\end{eqnarray}\nNote that this counts self edges (i.e.\\ $i$ -- $i$) once and all other edges (i.e.\\ $i$ -- $j$, \n$i\\ne j$) twice. The Louvain modularity algorithm \\cite{blondel08} in fact assumes that there are no self edges, but we\nshall include them here for completeness.\n\nThe modularity score of a clustered, undirected graph is then given by\n\\begin{eqnarray}\n  Q & = & \\sum_{i\\in{\\cal V}}\\sum_{j\\in{\\cal V}} \\left[\n  \\frac{A_{ij}}{A_{\\cdot\\cdot}}-\n  \\frac{A_{i\\cdot}A_{\\cdot j}}{A_{\\cdot\\cdot}^2}\n  \\right]\\,\\delta(c_i,c_j)\\,,\n\\end{eqnarray}\nwhere $c_i$ is the index of the cluster containing vertex $i$. Note that\n$\\delta(c_i,c_j)=1$ if and only if $c_i=c_j=g$ for some cluster index $g$. 
Then $Q$ can be\npartitioned by cluster as\n\\begin{eqnarray}\n  Q & = & \\sum_{g=1}^G\\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} \\left[\n  \\frac{A_{ij}}{A_{\\cdot\\cdot}}-\n  \\frac{A_{i\\cdot}A_{\\cdot j}}{A_{\\cdot\\cdot}^2}\n  \\right]~\\doteq~\\sum_{g=1}^G Q_g\\,,\n\\label{eq:Qg}\n\\end{eqnarray}\nwhere the $g$-th cluster contains vertices ${\\cal V}_g=\\{i\\!\\in\\!{\\cal V}\\;|\\;c_i\\!=\\!g\\}$,\nand therefore ${\\cal V}=\\bigcup_{g=1}^G {\\cal V}_g$.\n\nWe now observe that the sum of edge weights of vertex $i$ for all edges to and from cluster $g$\nis given by\n\\begin{eqnarray}\n  A_{i,g} & = & \\sum_{j\\in{\\cal V}_g} A_{ij}\\,,\n\\end{eqnarray}\nand hence the {\\em internal} cluster weight, namely the total weight of all edges internal to \ncluster $g$, is given by\n\\begin{eqnarray}\n   \\Sigma_g^{\\tt int} & = & \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{ij}\n   ~=~\\sum_{i\\in{\\cal V}_g} A_{i,g}\\,.\n\\label{eq:sig:int}\n\\end{eqnarray}\nNote that this value, like $A_{\\cdot\\cdot}$, also counts self edges ($i$ -- $i$) once and all other edges \n($i$ -- $j$) twice.\nConversely, the {\\em external} cluster weight, namely the total weight of all edges from vertices\nin cluster $g$ to and from vertices in other clusters, is given by\n\\begin{eqnarray}\n   \\Sigma_g^{\\tt ext} & = & \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in\\bar{\\cal V}_g}\n   A_{ij} ~=~\\sum_{j\\in\\bar{\\cal V}_g} A_{j,g}\\,,\n\\label{eq:sig:ext}\n\\end{eqnarray}\nwhere $\\bar{\\cal V}_g={\\cal V}\\backslash{\\cal V}_g$ (or ${\\cal V}-{\\cal V}_g$).\nNote that these external edge weights are only counted once per cluster, but the\nedge $i$ -- $j$ is counted separately for both the cluster containing vertex $i$\nand the other cluster containing vertex $j$.\n\nThe total weight of cluster $g$ is then given by\n\\begin{eqnarray}\n   \\Sigma_g^{\\tt tot} & = & \\Sigma_g^{\\tt int} + \\Sigma_g^{\\tt ext}\n\\nonumber\\\\&=&\n   \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{ij} \n   + \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in\\bar{\\cal V}_g} A_{ij}\n\\nonumber\\\\&=&\n   \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}} A_{ij}~=~\\sum_{i\\in{\\cal V}_g}A_{i\\cdot}\\,.\n\\label{eq:sig:tot}\n\\end{eqnarray}\nWe now observe that\n\\begin{eqnarray}\n    \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{i\\cdot}A_{\\cdot j}\n    & = & \\sum_{i\\in{\\cal V}_g}A_{i\\cdot}\\sum_{j\\in{\\cal V}_g} A_{\\cdot j}\n\\nonumber\\\\&=&\n    \\left(\\sum_{i\\in{\\cal V}_g}A_{i\\cdot}\\right)\\left(\\sum_{j\\in{\\cal V}_g} A_{j\\cdot}\\right)\n\\nonumber\\\\&=&\n    \\left(\\sum_{i\\in{\\cal V}_g}A_{i\\cdot}\\right)^2 ~=~ \\left(\\Sigma_g^{\\tt tot}\\right)^2\\,.\n\\label{eq:sum:sq}\n\\end{eqnarray}\nHence, from equation~\\eqref{eq:Qg}, we see that the modularity score of the $g$-th cluster\nsimplifies to become\n\\begin{eqnarray}\n   Q_g & = & \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} \\left[\n  \\frac{A_{ij}}{A_{\\cdot\\cdot}}-\n  \\frac{A_{i\\cdot}A_{\\cdot j}}{A_{\\cdot\\cdot}^2}\n  \\right]\n~=~    \\frac{\\Sigma_g^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{\\Sigma_g^{\\tt tot}}{A_{\\cdot\\cdot}}\\right)^2\\,,\n\\label{eq:Qg:only}\n\\end{eqnarray}\nfrom equations~\\eqref{eq:sig:int} and~\\eqref{eq:sum:sq}.\n\nThis cluster modularity score $Q_g$ now gives us a handle on how to compute changes in score\ndue to changes in the graph clustering, with the aim of choosing a clustering that maximises\nthe total modularity score $Q$.\nSuppose we merge a singleton cluster containing only vertex $k$ with another 
cluster $g$ to form a new\ncluster $g\\oplus k$ (technically, the new cluster is ${\\cal V}_{g\\oplus k}={\\cal V}_g\\bigcup\\{k\\}$). \nThen, from equation~\\eqref{eq:sig:int}, the new internal cluster weight is\ngiven by\n\\begin{eqnarray}\n   \\Sigma_{g\\oplus k}^{\\tt int} & = & \\sum_{i\\in{\\cal V}_g\\bigcup\\{k\\}}\n   \\sum_{j\\in{\\cal V}_g\\bigcup\\{k\\}} A_{ij}\n\\nonumber\\\\&=&\n    \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{ij}\n    +\\sum_{i\\in{\\cal V}_g}A_{ik}+\\sum_{j\\in{\\cal V}_g}A_{kj}+A_{kk}\n\\nonumber\\\\&=&\n    \\Sigma_g^{\\tt int}+2A_{k,g}+A_{kk}\\,.\n\\label{eq:sig:int:add}\n\\end{eqnarray}\nSimilarly, the new total cluster weight is given by\n\\begin{eqnarray}\n    \\Sigma_{g\\oplus k}^{\\tt tot} & = & \\sum_{i\\in{\\cal V}_g\\bigcup\\{k\\}} A_{i\\cdot}\n~=~\n    \\sum_{i\\in{\\cal V}_g} A_{i\\cdot}+A_{k\\cdot}\n    ~=~\\Sigma_{g}^{\\tt tot}+A_{k\\cdot}\\,,\n\\label{eq:sig:tot:add}\n\\end{eqnarray}\nfrom equation~\\eqref{eq:sig:tot}.\nConsequently, the modularity score of the new cluster is given by\n\\begin{eqnarray}\nQ_{g\\oplus k} & = & \n\\frac{\\Sigma_{g\\oplus k}^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{\\Sigma_{g\\oplus k}^{\\tt tot}}{A_{\\cdot\\cdot}}\\right)^2\n\\nonumber\\\\& = &\n\\frac{\\Sigma_{g}^{\\tt int}+2A_{k,g}+A_{kk}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{\\Sigma_{g}^{\\tt tot}+A_{k\\cdot}}{A_{\\cdot\\cdot}}\\right)^2\\,,\n\\label{eq:Qgpk}\n\\end{eqnarray}\nfrom equations~\\eqref{eq:Qg:only}--\\eqref{eq:sig:tot:add}.\nBy extension, a singleton cluster containing only vertex $k$ is notionally\nformed by merging the vertex with an empty cluster having zero cluster weights, and so the\nmodularity score of the singleton cluster is just\n\\begin{eqnarray}\n    Q_k & = & \\frac{A_{kk}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{A_{k\\cdot}}{A_{\\cdot\\cdot}}\\right)^2\\,.\n\\label{eq:Qk}\n\\end{eqnarray}\nNote that the Louvain modularity algorithm \\cite{blondel08} starts by placing every vertex $k\\in{\\cal V}$ into\nits own singleton cluster, and so initially there are $G=|{\\cal V}|$ such clusters.\nAlso note that, as mentioned above, the Louvain algorithm \\cite{blondel08} implicitly assumes that $A_{kk}=0$.\n\nWe can now compute the total change in modularity caused by adding singleton vertex $k$\nto cluster $g$, namely\n\\begin{eqnarray}\n    \\Delta Q_{(g,k)\\rightarrow g\\oplus k} & = & Q_{g\\oplus k}-Q_g-Q_k\n\\nonumber\\\\&=&\n    \\left[\\frac{\\Sigma_{g}^{\\tt int}+2A_{k,g}+A_{kk}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{\\Sigma_{g}^{\\tt tot}+A_{k\\cdot}}{A_{\\cdot\\cdot}}\\right)^2\\right]\n-\\left[\\frac{\\Sigma_{g}^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{\\Sigma_{g}^{\\tt tot}}{A_{\\cdot\\cdot}}\\right)^2\\right]\n    -\\left[\\frac{A_{kk}}{A_{\\cdot\\cdot}}\n    -\\left(\\frac{A_{k\\cdot}}{A_{\\cdot\\cdot}}\\right)^2\\right]\n\\nonumber\\\\&=&\n\\frac{\\Sigma_{g}^{\\tt int}+2A_{k,g}+A_{kk}}{A_{\\cdot\\cdot}}\n-\\frac{(\\Sigma_{g}^{\\tt tot})^2+2\\Sigma_{g}^{\\tt tot}\nA_{k\\cdot}+A_{k\\cdot}^2}{A_{\\cdot\\cdot}^2} \n-\\frac{\\Sigma_{g}^{\\tt int}+A_{kk}}{A_{\\cdot\\cdot}}\n+\\frac{(\\Sigma_{g}^{\\tt tot})^2+A_{k\\cdot}^2}{A_{\\cdot\\cdot}^2}\n\\nonumber\\\\&=&\n    \\frac{2A_{k,g}}{A_{\\cdot\\cdot}}-\\frac{2\\Sigma_g^{\\tt tot}A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\\,,\n\\label{eq:delta:Q:plus}\n\\end{eqnarray}\nfrom equations~\\eqref{eq:Qg:only} and \\eqref{eq:Qgpk}--\\eqref{eq:Qk}.\nConceptually, this is just the score change upon destroying clusters $k$ and $g$ and then creating a new cluster $g\\oplus k$.
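\n\nEquation~\\eqref{eq:delta:Q:plus} is cheap to evaluate once the cluster aggregates are tracked; a small Python sketch (with variable names of our own choosing, mirroring the symbols above):\n\\begin{verbatim}\ndef delta_q_add(A_kg, A_k, sigma_tot, A_all):\n    # Gain from merging singleton vertex k into cluster g:\n    #   2 A_{k,g} / A_..  -  2 Sigma_g^tot A_{k.} / A_..^2\n    return 2.0 * A_kg / A_all - 2.0 * sigma_tot * A_k / A_all**2\n\n# The two-triangle graph above: moving singleton vertex 3 into {4, 5}\n# has A_{3,g} = 2, A_{3.} = 3, Sigma_g^tot = 4 and A_.. = 14.\nprint(delta_q_add(2.0, 3.0, 4.0, 14.0))  # approximately 0.163\n\\end{verbatim}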
\n\nIn the converse situation, we instead want to remove vertex $k$ from the cluster $g\\oplus k$\n and restore $k$ to its singleton cluster. Since the\naction of adding vertex $k$ to group $g$ (above) is reversible, then removing the vertex must result in a change of \nmodularity score opposite to the change in score caused by adding the vertex.\nThis is just the score change involved in destroying cluster $g\\oplus k$ and creating clusters $g$ and $k$.\n Hence, we obtain\n\\begin{eqnarray}\n    \\Delta Q_{g\\oplus k\\rightarrow(g,k)} & = & Q_g+Q_k-Q_{g\\oplus k}\n~=~    -\\frac{2A_{k,g}}{A_{\\cdot\\cdot}}+\\frac{2\\Sigma_g^{\\tt tot}A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\\,,\n\\end{eqnarray}\nwhere now $A_{k,g}$ is computed from $A_{k,g\\oplus k}$ via\n\\begin{eqnarray}\n    A_{k,g\\oplus k} & = & \\sum_{j\\in{\\cal V}_g\\bigcup\\{k\\}}A_{kj}\n    ~=~\\sum_{j\\in{\\cal V}_g}A_{kj}+A_{kk}~=~A_{k,g}+A_{kk}\n\\nonumber\\\\\n\\Rightarrow A_{k,g} & = & A_{k,g\\oplus k} - A_{kk}\\,,\n\\end{eqnarray}\nand $\\Sigma_g^{\\tt tot}$ is computed from $\\Sigma_{g\\oplus k}^{\\tt tot}$ as\n\\begin{eqnarray}\n    \\Sigma_{g}^{\\tt tot} & = & \\Sigma_{g\\oplus k}^{\\tt tot}-A_{k\\cdot}\\,,\n\\end{eqnarray}\nfrom equation~\\eqref{eq:sig:tot:add}.\nIn terms of the existing cluster $g\\oplus k$, the score change is thus\n\\begin{eqnarray}\n    \\Delta Q_{g\\oplus k\\rightarrow(g,k)} & = &\n    -\\frac{2(A_{k,g\\oplus k}-A_{kk})}{A_{\\cdot\\cdot}}\n    +\\frac{2(\\Sigma_{g\\oplus k}^{\\tt tot}-A_{k\\cdot})A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\\,.\n\\label{eq:delta:Q:minus}\n\\end{eqnarray}\nHence, the combined action of removing vertex $k$ from cluster $g\\oplus k$ (to form cluster $g$)\nand adding it to cluster $g'$ (to form cluster $g'\\oplus k$) gives rise to the total change of score\n\\begin{eqnarray}\n    \\Delta Q_{(g\\oplus k,g')\\rightarrow(g,g'\\oplus k)} & = & \\Delta Q_{g\\oplus k\\rightarrow(g,k)} + \n    \\Delta Q_{(g',k)\\rightarrow g'\\oplus k}\\,,\n\\end{eqnarray}\nutilising equations~\\eqref{eq:delta:Q:plus} and~\\eqref{eq:delta:Q:minus}.\n\n\\section{Directed Modularity}\nThis section is motivated by the application of the Louvain modularity clustering algorithm to a directed graph,\nas described by \\cite{browet14}.\nIn distinction from the case of Section~\\ref{sec:Q}, we now take $A_{ij}=D_{ij}\\ge 0$ to be the directed edge weight from vertex $i$ to vertex $j$ (if such an\nedge exists), such that $A_{ij}\\ne A_{ji}$ in general; this breaks a number of assumptions used in Section~\\ref{sec:Q}.\nFor example, equation~\\eqref{eq:in:out} now becomes\n\\begin{eqnarray}\n  s^{\\tt out}_i~=~A_{i\\cdot}~=~\\sum_{j\\in{\\cal V}} A_{ij}\\,,&&\n  s^{\\tt in}_j~=~\\sum_{i\\in{\\cal V}} A_{ij}~=~A_{\\cdot j}\\,,\n\\end{eqnarray}\nmixing in some of the notation from \\cite{browet14}.\nSimilarly, if we consider the $g$-th cluster, then the external cluster weight from equation~\\eqref{eq:sig:ext} is now replaced by\n\\begin{eqnarray}\n   \\Sigma_g^{\\tt out}~=~\\sum_{i\\in{\\cal V}_g}\\sum_{j\\in\\bar{\\cal V}_g} A_{ij}\\,, && \n   \\Sigma_g^{\\tt in}~=~\\sum_{i\\in\\bar{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{ij}\\,,\n\\end{eqnarray}\nsuch that\n\\begin{eqnarray}\n   S^{\\tt out}_g & = & \\sum_{i\\in{\\cal V}_g} s^{\\tt out}_i~=~\\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}}A_{ij}~=~\\Sigma_g^{\\tt int}+\\Sigma_g^{\\tt out}\\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n   S^{\\tt in}_g & = & \\sum_{j\\in{\\cal V}_g} s^{\\tt in}_j~=~\\sum_{i\\in{\\cal V}}\\sum_{j\\in{\\cal V}_g}A_{ij}~=~\\Sigma_g^{\\tt 
int}+\\Sigma_g^{\\tt in}\\,,\n\\end{eqnarray}\nusing equation~\\eqref{eq:sig:int}.\nIt follows that the modularity score $Q_g$ of the $g$-th cluster becomes\n\\begin{eqnarray}\n   Q_g & = & \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} \\left[\n  \\frac{A_{ij}}{A_{\\cdot\\cdot}}-\n  \\frac{A_{i\\cdot}A_{\\cdot j}}{A_{\\cdot\\cdot}^2}\n  \\right]\n~=~    \\frac{\\Sigma_g^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\frac{S_g^{\\tt out}S_g^{\\tt in}}{A_{\\cdot\\cdot}^2}\\,,\n\\end{eqnarray}\ninstead of equation~\\eqref{eq:Qg:only}.\n\nWe now consider the effect of merging the $g$-th cluster with the singleton cluster containing only vertex $k$, resulting\nin a new cluster $g\\oplus k$.\nThe internal weight $\\Sigma^{\\tt int}_{g\\oplus k}$ of this new cluster is now\nno longer given by equation~\\eqref{eq:sig:int:add}, but instead by\n\\begin{eqnarray}\n   \\Sigma_{g\\oplus k}^{\\tt int} & = & \\sum_{i\\in{\\cal V}_g\\bigcup\\{k\\}}\n   \\sum_{j\\in{\\cal V}_g\\bigcup\\{k\\}} A_{ij}\n\\nonumber\\\\&=&\n    \\sum_{i\\in{\\cal V}_g}\\sum_{j\\in{\\cal V}_g} A_{ij}\n    +\\sum_{i\\in{\\cal V}_g}A_{ik}+\\sum_{j\\in{\\cal V}_g}A_{kj}+A_{kk}\n\\nonumber\\\\&=&\n    \\Sigma_g^{\\tt int}+W(g,k)+W(k,g)+A_{kk}\\,.\n\\end{eqnarray}\nMore generally \\cite{browet14}, $W(g,k)$ is the edge weight from all vertices in the $g$-th cluster to vertex $k$, excluding any self edge\n$k\\rightarrow k$ if $k$ happens to be a member of the $g$-th cluster.  Thus\n\\begin{eqnarray}\n  W(g,k) & = & \\sum_{i\\in{\\cal V}_g\\backslash\\{k\\}}A_{ik}\\,,\n\\label{eq:W:g:k}\n\\end{eqnarray}\nand conversely\n\\begin{eqnarray}\n  W(k,g) & = & \\sum_{j\\in{\\cal V}_g\\backslash\\{k\\}}A_{kj}\\,,\n\\label{eq:W:k:g}\n\\end{eqnarray}\nwhich is the edge weight from vertex $k$ to all vertices (except $k$) in the $g$-th cluster.\n\nSimilarly, the cluster merge $g\\oplus k$ results in new out and in edge weights given by\n\\begin{eqnarray}\n    S_{g\\oplus k}^{\\tt out} & = & \\sum_{i\\in{\\cal V}_g\\bigcup\\{k\\}} A_{i\\cdot}\n~=~\n    \\sum_{i\\in{\\cal V}_g} A_{i\\cdot}+A_{k\\cdot}\n    ~=~S_{g}^{\\tt out}+s^{\\tt out}_k\\,,\n\\label{eq:S:g:k:out}\n\\\\\n    S_{g\\oplus k}^{\\tt in} & = & \\sum_{j\\in{\\cal V}_g\\bigcup\\{k\\}} A_{\\cdot j}\n~=~\n    \\sum_{j\\in{\\cal V}_g} A_{\\cdot j}+A_{\\cdot k}\n    ~=~S_{g}^{\\tt out}+s^{\\tt in}_k\\,,\n\\label{eq:S:g:k:in}\n\\end{eqnarray}\nrespectively; these equations supercede equation~\\eqref{eq:sig:tot:add}.\nConsequently, the modularity score of the new cluster is now given by\n\\begin{eqnarray}\nQ_{g\\oplus k} & = & \n\\frac{\\Sigma_{g\\oplus k}^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\frac{S_{g\\oplus k}^{\\tt out}S_{g\\oplus k}^{\\tt in}}{A_{\\cdot\\cdot}^2}\n\\nonumber\\\\& = &\n\\frac{\\Sigma_{g}^{\\tt int}+W(g,k)+W(k,g)+A_{kk}}{A_{\\cdot\\cdot}}\n    -\\frac{\\left(S_{g}^{\\tt out}+A_{k\\cdot}\\right)\\left(S_{g}^{\\tt in}+A_{\\cdot k}\\right)}{A_{\\cdot\\cdot}^2}\\,,\n\\end{eqnarray}\ninstead of by equation~\\eqref{eq:Qgpk}.\nSimilarly, the modularity score of the singleton cluster $k$ is now just\n\\begin{eqnarray}\n    Q_k & = & \\frac{A_{kk}}{A_{\\cdot\\cdot}}\n    -\\frac{A_{k\\cdot}A_{\\cdot k}}{A_{\\cdot\\cdot}^2}\\,,\n\\end{eqnarray}\nreplacing equation~\\eqref{eq:Qk}.\nFinally, the change in score due to merging clusters $g$ and $k$ is\n\\begin{eqnarray}\n    \\Delta Q_{(g,k)\\rightarrow g\\oplus k} & = & Q_{g\\oplus k}-Q_g-Q_k\n\\nonumber\\\\&=&\n    \\left[\\frac{\\Sigma_{g}^{\\tt int}+W(g,k)+W(k,g)+A_{kk}}{A_{\\cdot\\cdot}}\n    -\\frac{\\left(S_{g}^{\\tt 
out}+A_{k\\cdot}\\right)\\left(S_{g}^{\\tt in}+A_{\\cdot k}\\right)}{A_{\\cdot\\cdot}^2}\\right]\n\\nonumber\\\\&&\n{}-\\left[\\frac{\\Sigma_{g}^{\\tt int}}{A_{\\cdot\\cdot}}\n    -\\frac{S_{g}^{\\tt out}S_{g}^{\\tt in}}{A_{\\cdot\\cdot}^2}\\right]\n    -\\left[\\frac{A_{kk}}{A_{\\cdot\\cdot}}\n    -\\frac{A_{k\\cdot}A_{\\cdot k}}{A_{\\cdot\\cdot}^2}\\right]\n\\nonumber\\\\&=&\n    \\frac{W(g,k)+W(k,g)}{A_{\\cdot\\cdot}}-\\frac{S_g^{\\tt out}A_{\\cdot k}+S_g^{\\tt in}A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\\,,\n\\end{eqnarray}\nwhich replaces equation~\\eqref{eq:delta:Q:plus}.\n\nWe can now compute the change in modularity score involved with removing vertex $k$ from some cluster $g\\oplus k$, resulting in cluster $g$.\nAs in Section~\\ref{sec:Q}, this is just the negative of the score change from merging vertex $k$ with cluster $g$, namely\n\\begin{eqnarray}\n    \\Delta Q_{g\\oplus k\\rightarrow (g,k)} & = & \n    -\\frac{W(g,k)+W(k,g)}{A_{\\cdot\\cdot}}+\\frac{S_{g}^{\\tt out}A_{\\cdot k}+S_{g}^{\\tt in}A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\n\\nonumber\\\\&=&    \n    -\\frac{W(g\\oplus k,k)+W(k,g\\oplus k)}{A_{\\cdot\\cdot}}\n    +\\frac{(S_{g\\oplus k}^{\\tt out}-A_{k\\cdot})A_{\\cdot k}+(S_{g\\oplus k}^{\\tt in}-A_{\\cdot k})A_{k\\cdot}}{A_{\\cdot\\cdot}^2}\n\\,,\n\\end{eqnarray}\nfrom equations~\\eqref{eq:W:g:k}--\\eqref{eq:W:k:g} and~\\eqref{eq:S:g:k:out}--\\eqref{eq:S:g:k:in}. This replaces equation~\\eqref{eq:delta:Q:minus}.\nThe total change in modularity score involved in moving vertex $k$ from cluster $g\\oplus k$ to some other cluster $g'$ is therefore\n\\begin{eqnarray}\n    \\Delta Q_{(g\\oplus k,g')\\rightarrow(g,g'\\oplus k)} & = & \\Delta Q_{g\\oplus k\\rightarrow(g,k)} + \n    \\Delta Q_{(g',k)\\rightarrow g'\\oplus k}\n\\nonumber\\\\\n& = &\n\\frac{W(g',k)+W(k,g')-W(g\\oplus k,k)-W(k,g\\oplus k)}{A_{\\cdot\\cdot}}\n\\nonumber\\\\&&\n{}-\\frac{(S_{g'}^{\\tt out}-S_{g\\oplus k}^{\\tt out}+A_{k\\cdot})A_{\\cdot k}+(S_{g'}^{\\tt in}-S_{g\\oplus k}^{\\tt in}+A_{\\cdot k})A_{k\\cdot}}\n{A_{\\cdot\\cdot}^2}\n\\,.\n\\end{eqnarray}\nMultiplying $\\Delta Q$ by $A_{\\cdot\\cdot}$ gives equation~(3.26) of \\cite{browet14}\nwith $c_1=g\\oplus k$, $c_2=g'$, $s_k^{\\tt out}=A_{k\\cdot}$ and $s_k^{\\tt in}=A_{\\cdot k}$,\nexcept that their use of $m$ \n(the total number of edges) should be\nreplaced with $m_w=A_{\\cdot\\cdot}$ (the total weight of all edges).\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{thebibliography}{9}\n\n\\bibitem{blondel08}\n  Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, Etienne Lefebvre,\n  \\textit{Fast unfolding of communities in large networks},\n  Journal of Statistical Mechanics: Theory and Experiment 2008 (10), P10008; ArXiv: https://arxiv.org/abs/0803.0476.\n\n\\bibitem{browet14}\n  Arnaud Browet,\n  \\textit{Algorithms for community and role detection in networks},\n  Doctoral thesis, Louvain-la-Neuve, September 30, 2014;\n  https://perso.uclouvain.be/arnaud.browet/files/thesis/thesis.pdf.\n\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "0ac526a3b8a394efba89ca0c2c22293088ff3b9b", "size": 17813, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "GraphAnalysis/notes/Louvain-modularity-notes.tex", "max_stars_repo_name": "gaj67/gaj-data-science", "max_stars_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"GraphAnalysis/notes/Louvain-modularity-notes.tex", "max_issues_repo_name": "gaj67/gaj-data-science", "max_issues_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GraphAnalysis/notes/Louvain-modularity-notes.tex", "max_forks_repo_name": "gaj67/gaj-data-science", "max_forks_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.1243386243, "max_line_length": 157, "alphanum_fraction": 0.6535676192, "num_tokens": 7133, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5589231746079093}}
{"text": "\\section{Introduction}\n\nThe objective of the exercise is to study several methods of measuring the\nspeed of sound in air: the resonance method, the phase comparison method, and\nthe time difference method, including successive difference method in\nmeasurement data processing.\n    \nSound is a mechanical wave that propagates through a compressible medium.\nIt's a longitudinal wave since the direction of vibrations of the medium is\nthe same as the direction of propagation. Sound with the frequency higher\nthan $20,000\\ Hz$ is called \\emph{ultrasound}, which is chosen as the signal\nsource in this experiment because its wavelength is short enough to measure\nthe speed precisely.\n    \nThe phase speed $v$, the frequency $f$ and the length $\\lambda$ of a wave are\nrelated by the formula \n\\begin{equation}\\label{vlf}\n    v=\\lambda f.\n\\end{equation}\n\nFor motion with constant speed $v$ along a straight line, we have \n\\begin{equation}\\label{vlt}\n    v=\\frac{L}{t},\n\\end{equation}\nwhere $L$ is the distance travelled over time $t$. ", "meta": {"hexsha": "f78a958f163d282bc8250346ae8438bb1df2feea", "size": 1027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "E4/part/1i.tex", "max_stars_repo_name": "iamwrm/VP141", "max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z", "max_issues_repo_path": "E4/part/1i.tex", "max_issues_repo_name": "iamwrm/VP141", "max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "E4/part/1i.tex", "max_forks_repo_name": "iamwrm/VP141", "max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.08, "max_line_length": 77, "alphanum_fraction": 0.7711781889, "num_tokens": 243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872019117029, "lm_q2_score": 0.746138993030751, "lm_q1q2_score": 0.5589231705266209}}
{"text": "% pathAPEXCalculus_Source/text/06_Hyperbolic_Functions.tex\n%c13ffb9  on Jun 15, 2015\n%@APEXCalculus APEXCalculus Updated files from Version 2.0 to Version 3.0.\n%1 contributor\n%RawBlameHistory     421 lines (353 sloc)  20.7 KB\n\\section{Hyperbolic Functions}\\label{sec:hyperbolic}\n\nThis section defines the derivatives of the hyperbolic functions and describes some of their properties. Recall their definition.\n\n\\begin{definition}{Hyperbolic Functions}{def:hyperbolic_functions} \n{\\noindent%\n\\begin{minipage}{.5\\textwidth}\n\\begin{enumerate}\n\\item\t\t$\\ds \\cosh x = \\frac{e^x+e^{-x}}2$\\index{hyperbolic function!definition}\n\\item\t\t$\\ds \\sinh x = \\frac{e^x-e^{-x}}2$\n\\item\t\t$\\ds \\tanh x = \\frac{\\sinh x}{\\cosh x}$\n\\end{enumerate}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{enumerate}\\addtocounter{enumi}{3}\n\\item\t\t$\\ds \\sech x = \\frac{1}{\\cosh x}$\n\\item\t\t$\\ds \\csch x = \\frac{1}{\\sinh x}$\n\\item\t\t$\\ds \\coth x = \\frac{\\cosh x}{\\sinh x}$\n\\end{enumerate}\n\\end{minipage}\n}\\\\\n\\end{definition}\n\n\\begin{example}{Exploring properties of hyperbolic functions}{ex_hf1}\n\\noindent Use Definition \\ref{def:hyperbolic_functions} to rewrite the following expressions.\n\\begin{enumerate}\n\\item\t\t$\\frac{d}{dx}\\big(\\cosh x\\big)$\n\\item\t\t$\\frac{d}{dx}\\big(\\sinh x\\big)$\n\\item\t\t$\\frac{d}{dx}\\big(\\tanh x\\big)$\n\\end{enumerate}\n\\end{example}\n\n\\begin{solution}\n{\n\\begin{enumerate}\n\\item  \\hfill$\\begin{aligned}[t]\n\t\\frac{d}{dx}\\big(\\cosh x\\big) &= \\frac{d}{dx}\\left(\\frac{e^x+e^{-x}}2\\right) \\\\\n\t\t\t\t\t&= \\frac{e^x-e^{-x}}2\\\\\n\t\t\t\t\t&= \\sinh x.\n\t\\end{aligned}\\hfill$\n\nSo $\\frac{d}{dx}\\big(\\cosh x\\big) = \\sinh x.$\n\t\n\\item  \\hfill$\\begin{aligned}[t]\n\t\\frac{d}{dx}\\big(\\sinh x\\big) &= \\frac{d}{dx}\\left(\\frac{e^x-e^{-x}}2\\right) \\\\\n\t\t\t\t\t&= \\frac{e^x+e^{-x}}2\\\\\n\t\t\t\t\t&= \\cosh x.\n\t\\end{aligned}\\hfill$\n\nSo $\\frac{d}{dx}\\big(\\sinh x\\big) = \\cosh x.$\n\t\n\\item  \\hfill$\\begin{aligned}[t]\n\t\\frac{d}{dx}\\big(\\tanh x\\big) &= \\frac{d}{dx}\\left(\\frac{\\sinh x}{\\cosh x}\\right) \\\\\n\t\t\t\t\t&= \\frac{\\cosh x \\cosh x - \\sinh x \\sinh x}{\\cosh^2 x}\\\\\n\t\t\t\t\t&= \\frac{1}{\\cosh^2 x}\\\\\n\t\t\t\t\t&= \\sech^2 x.\n\t\\end{aligned}\\hfill$\n\nSo $\\frac{d}{dx}\\big(\\tanh x\\big) = \\sech^2 x.$\t\n\\end{enumerate}\n\\vskip-\\baselineskip\n}\n\\end{solution}\n\nThe following Key Idea summarizes the derivatives relating to hyperbolic functions. 
Each can be verified by referring back to Definition \\ref{def:hyperbolic_functions}.\n\\textbf{Derivatives}\n\\begin{enumerate}\n\\item $\\frac{d}{dx}\\big(\\cosh x\\big) = \\sinh x$\n\\item $\\frac{d}{dx}\\big(\\sinh x\\big) = \\cosh x$\n\\item $\\frac{d}{dx}\\big(\\tanh x\\big) = \\sech^2 x$\n\\item $\\frac{d}{dx}\\big(\\sech x\\big) = -\\sech x\\tanh x$\n\\item $\\frac{d}{dx}\\big(\\csch x\\big) = -\\csch x\\coth x$\n\\item $\\frac{d}{dx}\\big(\\coth x\\big) = -\\csch^2x$\n\\end{enumerate}\n\n\n%We practice using Key Idea \\ref{idea:hyperbolic_identities}.\\\\\n\n%\\example{ex_hf2}{Derivatives and integrals of hyperbolic functions}{\n%Evaluate the following derivatives and integrals.\n\n%\\begin{minipage}[t]{.5\\linewidth}\n%\\begin{enumerate}\n%\\item\t\t$\\ds\\frac{d}{dx}\\big(\\cosh 2x\\big)$\n%\\item\t\t$\\ds\\int \\sech^2(7t-3)\\ dt$\n%\\end{enumerate}\n%\\end{minipage}\n%\\begin{minipage}[t]{.5\\linewidth}\n%\\begin{enumerate}\\addtocounter{enumi}{2}\n%\\item\t\t$\\ds \\int_0^{\\ln 2} \\cosh x\\ dx$\n%\\end{enumerate}\n%\\end{minipage}\n%}\n%{\\begin{enumerate}\n%\\item\t\tUsing the Chain Rule directly, we have $\\frac{d}{dx} \\big(\\cosh 2x\\big) = 2\\sinh 2x$.\n%\n%Just to demonstrate that it works, let's also use the Basic Identity found in Key Idea \\ref{idea:hyperbolic_identities}: $\\cosh 2x = \\cosh^2x+\\sinh^2x$.\n%\\begin{align*}\\frac{d}{dx}\\big(\\cosh 2x\\big) = \\frac{d}{dx}\\big(\\cosh^2x+\\sinh^2x\\big) &= 2\\cosh x\\sinh x+ 2\\sinh x\\cosh x\\\\ &= 4\\cosh x\\sinh x.\n%\\end{align*}\n%Using another Basic Identity, we can see that $4\\cosh x\\sinh x = 2\\sinh 2x$. We get the same answer either way.\n\n%\\item\t  We employ substitution, with $u = 7t-3$ and $du = 7dt$. Applying Key Ideas \\ref{idea:linearsub}  and \\ref{idea:hyperbolic_identities} we have:\n%$$ \\int \\sech^2 (7t-3)\\ dt = \\frac17\\tanh (7t-3) + C.$$\n%\n%\\item\t\t$$\\int_0^{\\ln 2} \\cosh x\\ dx = \\sinh x\\Big|_0^{\\ln 2} = \\sinh (\\ln 2) - \\sinh 0 = \\sinh(\\ln 2).$$\n%\n%We can simplify this last expression as $\\sinh x$ is based on exponentials:\n%$$\\sinh(\\ln 2) = \\frac{e^{\\ln 2}-e^{-\\ln 2}}2 = \\frac{2-1/2}{2} = \\frac34.$$\n%\\end{enumerate}\n%\\vskip-1.5\\baselineskip\n\n\n\n\n", "meta": {"hexsha": "401fca80281d29fce90c09dc6eb717fc126fb631", "size": 4286, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "4-derivatives/4-10-der-hyp-fns.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "4-derivatives/4-10-der-hyp-fns.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4-derivatives/4-10-der-hyp-fns.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.7166666667, "max_line_length": 168, "alphanum_fraction": 0.6549230051, "num_tokens": 1732, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.7490872131147276, "lm_q1q2_score": 0.5589231704284028}}
{"text": "\\documentclass[12pt, titlepage, oneside]{article}\n\n\\input{settings}\n\n\\begin{document}\n\t\n\t\\textbf{ELECENG 3TQ3}\\\\\n\t\\textbf{Elston A.}\n\t\n\t\\section{Lecture 1}\n\n\\subsection{Probability vs. Statistics}\n\n\\items\n\\item Statistics is probability formulate differently\n\\item Statistics applies to probability to extract information about data and recommend what decisions should be made\n\\eitems\n\n\\b{Probabilistic: } You have a fair coin, you flip it 100 times, what is the probability to getting more than a 70\\% heads (We expect a small number).\n\n\\b{Statistical: } You have a coin of unknown fairness, you flip it 100 times and get 70\\% heads. Your job would be to draw conclusions about that coin. \n\nThe main difference is that you know the conditions when you are deriving probability of a particular outcome. In statistics, you know the outcome and you are trying to obtain conclusions about the underlying relations, models, and hidden patterns in the data.\n\n\\subsection{Frequency vs. Bayes}\n\nConsider a simple coin toss and assume you toss it $n$ times. Let $nh$ be the number of heads and $nt = n - nh$ be the number of tails.\n\nThe \\b{Frequentist} estimates the probability of heads as $Ph = nh/n$. That is the frequency of the \"head\" event.\n\nThe \\b{Beyesian} attempts to find prior knowledge such as where was the coin minted and built. This \"prior\" information is then used for analyzing the data.\n\n\\subsection{Counting and Sets}\n\nWe need to determine all the possible all the outcomes in order to determine probabilities. To accomplish this we would need to understand sets, unions, intersections, and complements. Also aspects of of combinatorics calculus is needed in order to analyze problems eg. selecting $n$ out of $m$ objects.\n\n\\ex Toss a fair coin 4 times. What is the probability to get exactly two tails? \n\nWe can approach this using the counting approach by finding all the possible outcomes.\n\nS = \\{ HHHH, HHHT, HHTH, HHTT, HTHH, HTHT, HTTH, HTTT, THHH, THHT, THTH, THTT, TTHH, TTHT, TTTH, TTTT\\}\n\nThen we find the set of favorable outcomes\n\nA = \\{HHTT, HTHT, HTTH, THHT, THTH, TTHH\\}\n\nSo the probability of the event to exactly two tails in 4 tosses is the number of favourable outcomes divided by the total outcomes = 6/16.\n\nBut this type of approach becomes impossible once the space grows. \\ex if we had 100 tosses. \n\t\n\\subsection{Formalizing Sets/Counting}\n\nA \\b{set} is a collection of elements: \\ex patients with blood pressure over 140, people with brown hair, etc.\n\nWe use $x \\in S$ to denote \\b{element} x belonging to the set $S$: \\ex John has brown hair, hence John is an element in the set of people with brown hair.\n\nWe say a set $A$ is a \\b{subset} of $S$ if all the elements in $A$ belongs to $S$: \\ex we can pick 6 people from the people with brown hair set and make a new set. This new set would be a subset of the original set of people with brown hair.\n\nAn \\b{empty set} is the set with no elements denoted as $\\emptyset$\n\n\\subsection{Set Operations}\n\n\\b{Complement}: The complement of $A$ in $S$ is all the elements in $S$ that are not in $A$. We denote this as $S-A$ or $A^c$. \n\n\\ex Let $B$ be the set of all the people with brown hair and shorter than 6 feet. The complement of this would be all the people who do not have brown hair or are taller than 6 feet. \n\n\\b{Union}: The union of $A$ and $B$ is the set of all the elements that are in $A$ or $B$ (or in both). 
We denote this as: $A \\u B$.\n\n\\b{Intersection}: The intersection of $A$ and $B$ is the set of all elements that are in both $A$ and $B$. We denote this as: $A\\n B$.\n\n\\b{Disjoint}: Two sets are called disjoint if they have no common elements.\n\n\\b{Difference}: The difference of $A$ and $B$ is the set of all elements that are in $A$ but not in $B$, denoted as $A \\backslash B$.\n\n\\ex Consider a set of 10 cities \\{Berlin, Munich, Madrid, Barcelona, Beijing, Shanghai,  Portland, Washington DC, Ottawa, Toronto\\}\n\nConsider two subsets: 1) M is the set of capitals and 2) N is the set of cities in Europe\n\nM=\\{Berlin, Madrid, Beijing, Washington DC, Ottawa\\}\n\nN=\\{Berlin, Munich, Madrid, Barcelona\\}\n\n\\items\n\\item Intersection: \\{Berlin, Madrid\\}\n\\item Union: \\{Berlin, Munich, Madrid, Barcelona, Beijing, Washington DC, Ottawa\\}\n\\item Complement of M in S: \\{Munich, Barcelona, Shanghai, Portland, Toronto\\}\n\\item Difference $M \\backslash N$ = \\{Beijing, Washington DC, Ottawa\\}\n\\eitems\n\n\n\\subsection{De Morgan's Laws}\n\n\\begin{align}\n(A \\u B)^c = A^c \\n B^c\n\\end{align}\n\\begin{align}\n(A \\n B)^c = A^c \\u B^c\n\\end{align}\n\n\n\n\\subsection{Proof of De Morgan's Laws}\n\n\\begin{align*}\n&x \\in (A \\n B)^c\\\\\n&\\iff x \\notin (A \\n B) \\\\\n&\\iff x \\notin A \\text{ or } x \\notin B\\\\\n&\\iff x \\in A^c \\u B^c\n\\end{align*}\n\\begin{align*}\n&x \\in (A \\u B)^c\\\\\n&\\iff x \\notin (A \\u B)\\\\\n&\\iff x \\notin A \\text{ and } x \\notin B\\\\\n&\\iff x \\in A^c \\n B^c\n\\end{align*}\n\n\\subsection{Venn Diagrams}\n\n\n \\begin{figure}[h!]\n \t\\centering\n \t\\includegraphics[width=0.5\\linewidth]{images/S}\n \t\\caption{Set $S$ of all possible outcomes}\n \t\\label{fig:s}\n \\end{figure}\n \\begin{figure}[h!]\n \t\\centering\n \t\\includegraphics[width=0.2\\linewidth]{images/A}\n \t\\caption[]{The set of all the outcomes in $A$}\n \t\\label{fig:a}\n \\end{figure}\n \n \\begin{figure}[h!]\n \t\\centering\n \t\\includegraphics[width=0.2\\linewidth]{images/B}\n \t\\caption[]{The set of all the outcomes in $B$}\n \t\\label{fig:b}\n \\end{figure}\n \n \\begin{figure}[h!]\n \t\\centering\n \t\\includegraphics[width=0.3\\linewidth]{images/Union}\n \t\\caption[]{Union of set $A$ and set $B$}\n \t\\label{fig:union}\n \\end{figure}\n \n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.2\\linewidth]{images/intersection}\n\t\\caption{Intersection of $A$ and $B$}\n\t\\label{fig:intersection}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.3\\linewidth]{images/Ac}\n\t\\caption{Complement of $A$ or $A^c$}\n\t\\label{fig:ac}\n\\end{figure}\n\\newpage \n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.3\\linewidth]{images/B-A}\n\t\\caption{$B-A$ or $B \\backslash A$}\n\t\\label{fig:b-a}\n\\end{figure}\n\n\\subsection{Set Product and Additional Set Terminology}\n\n$S \\times T = \\{ (s,t) : s \\in S \\text{ and } t \\in T\\}$\n\n\\ex $\\{1,2\\} \\times \\{3,4,5\\} = \\{(1,3),(1,4),(1,5), (2,3), (2,4), (2,5)\\}$\n\nCardinal number is the number of elements in the set. 
Cardinal number is denoted as $\\#s$ or $card\\{S\\}$ or $|S|$.\n\nInclusion - Exclusion principle: $|A \\u B| = |A| + |B| - |A \\n B| $ \n\nWe subtract $|A \\n B|$ to ensure we do not double count.\n\n\\end{document}", "meta": {"hexsha": "f6eecd30788eec9a6e945eabba7ed8ff4e4e9fef", "size": 6351, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture1/lec1.tex", "max_stars_repo_name": "elston-jja/EE3TQ3", "max_stars_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture1/lec1.tex", "max_issues_repo_name": "elston-jja/EE3TQ3", "max_issues_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture1/lec1.tex", "max_forks_repo_name": "elston-jja/EE3TQ3", "max_forks_repo_head_hexsha": "327a69f24b4f062d2554658405daf140daef2813", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.6797752809, "max_line_length": 303, "alphanum_fraction": 0.707447646, "num_tokens": 1983, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.558923166248896}}
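Since finite sets are involved, De Morgan's laws and the inclusion--exclusion principle can be checked mechanically; here is a short sketch using Python sets and the cities example above (the code and names are illustrative additions).
\begin{minted}{python}
S = {"Berlin", "Munich", "Madrid", "Barcelona", "Beijing", "Shanghai",
     "Portland", "Washington DC", "Ottawa", "Toronto"}
M = {"Berlin", "Madrid", "Beijing", "Washington DC", "Ottawa"}  # capitals
N = {"Berlin", "Munich", "Madrid", "Barcelona"}                 # in Europe
assert S - (M | N) == (S - M) & (S - N)   # (M u N)^c = M^c n N^c
assert S - (M & N) == (S - M) | (S - N)   # (M n N)^c = M^c u N^c
assert len(M | N) == len(M) + len(N) - len(M & N)  # inclusion-exclusion
print(M & N)                              # {'Berlin', 'Madrid'}
\end{minted}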
{"text": "%% -*- coding:utf-8 -*-\n\\chapter{Monads}\n\nMonads are very important for pure functional programming languages\nsuch as Haskell. We will start with \\mynameref{def:monoid}\nconsideration, continue with the formal mathematical definition for\nmonad and\nwill finish with programming languages examples later.\n\n\\section{Monoid in \\textbf{Set} category}\nWe are going to consider \\mynameref{def:monoid} in the terms of Set theory and\nwill try to give the definition that is based rather on morphisms then\non internal set structure i.e. we will use\n\\mynameref{def:categorical_approach}. Let $M$ is a set and by the\nmonoid \ndefinition (\\cref{def:monoid})\n$\\forall m_1, m_2 \\in M$ we can define a new element of the set\n$\\mu(m_1, m_2) \\in M$. Later we will use the following notation for\nthe $\\mu$:\n\\[\n\\mu(m_1, m_2) \\equiv m_1 \\cdot m_2.\n\\]\nIf the $(M, \\cdot)$ is monoid then the following 2 conditions have to\nbe satisfied. The first one (associativity) declares that $\\forall\nm_1, m_2, m_3 \\in M$ \n\\[\nm_1 \\cdot ( m_2 \\cdot m_3) = ( m_1 \\cdot\nm_2 ) \\cdot m_3.\n\\]\nThe second one (identity presence) says that\n\\(\n\\exists e \\in M\n\\) such that $\\forall m \\in M$:\n\\begin{equation}\nm \\cdot e = e \\cdot m = m.\n\\label{eq:monoid2}\n\\end{equation}\n\nWith the first one we can define $\\mu$ as a\n\\mynameref{def:morphism} in the following way \n\\[\n\\mu: M\\times M \\to M,\n\\]\nwhere $M \\times M$ is the \\mynameref{ex:set_product} in the\n\\mynameref{def:setcategory}. I.e. $M \\times M, M \\in \\catob{Set}$ and\n$\\mu \\in \\cathom{Set}$.  Consider other objects of $\\cat{Set}$: $A =\nM \\times \\left( M \\times M \\right)$ and $A' = \\left( M \\times M \\right)\n\\times M$. They are not the same but there is a trivial\n\\mynameref{def:isomorphism} between them $A \\cong_\\alpha A'$, where\nthe isomorphism $\\alpha$ defined as\n\\[\n\\alpha(x,(y,z)) = ((x,y),z).\n\\]\nConsider the action of \\mynameref{def:product_of_morphisms} \n$\\idm{M} \\times \\mu$ on $A$:\n\\[\n\\idm{M} \\times \\mu \\left(x,\\left(y,z\\right)\\right) = \n\\left(\\idm{M}(x),\\mu\\left(y,z\\right)\\right) = \n\\left(x, y \\cdot z\\right) \\in M \\times M\n\\]\ni.e. $\\idm{M} \\times \\mu: M \\times \\left( M \\times M \\right) \\to M\n\\times M$. If we act $\\mu$ on the result then we can obtain:\n\\begin{eqnarray}\n\\mu \\left(\\idm{M} \\times \\mu \\left(x,\\left(y,z\\right)\\right)\\right) = \n\\left(\\idm{M}(x),\\mu\\left(y,z\\right)\\right) = \n\\nonumber \\\\\n=\n\\mu\\left(x, y \\cdot z\\right) = x \\cdot (y\\cdot z) \\in M,\n\\nonumber\n\\end{eqnarray}\ni.e. \n$\\mu \\circ \\left(\\idm{M} \\times \\mu\\right): M \\times \\left( M \\times M\n\\right) \\to M$.\n\nFor $A'$ we have the following one:\n\\begin{eqnarray}\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right)\\left(\\left(x,y\\right),z\\right)\n= \\mu\\left(x \\cdot y, z\\right) = (x \\cdot y) \\cdot z.\n\\nonumber\n\\end{eqnarray}\nMonoid associativity requires \n\\[\nx \\cdot (y\\cdot z) = \n(x \\cdot y) \\cdot z\n\\]\ni.e. 
the morphisms is shown in \\cref{fig:monoid_mu_alpha} commute:\n\\begin{equation}\n\\label{eq:monad_monoid_mu}\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right) =\n\\mu \\circ \\left(\\idm{M} \\times \\mu\\right) \\circ \\alpha.\n\\end{equation}\n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M\\times\\left(M \\times M\\right)$] (M31) at (0,3) {};    \n    \\node[ele,label=above:$\\left(M \\times M\\right)\\times M$] (M32) at (6,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M21) at (0,0) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (6,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M31) to\n    node[sloped,above]{$\\cong_\\alpha$} (M32);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M31) to\n    node[sloped,below]{$\\idm{M} \\times \\mu$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M32) to\n    node[sloped,below]{$\\mu \\times \\idm{M}$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu\\circ\\left(\\mu \\times\n    \\idm{M}\\right) = \\mu \\circ \\left(\\idm{M} \\times \\mu\\right) \\circ\n    \\alpha$.} \n  \\label{fig:monoid_mu_alpha}\n\\end{figure}\nVery often the isomorphism $\\alpha$ is omitted i.e. \n\\[\nM\\times\\left(M \\times M\\right)\n= \\left(M \\times M\\right)\\times M = M^3\n\\]\nand the morphism\nequality \\eqref{eq:monad_monoid_mu} is written as follow\n\\[\n\\mu\\circ\\left(\\mu \\times \\idm{M}\\right) =\n\\mu \\circ \\left(\\idm{M} \\times \\mu\\right).\n\\]\nThe corresponding commutative diagram is shown in\n\\cref{fig:monoid_mu}.\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M^3$] (M3) at (0,3) {};    \n    \\node[ele,label=above:$M \\times M$] (M21) at (3,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (0,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M3) to\n    node[sloped,below]{$\\idm{M} \\times \\mu$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M3) to\n    node[sloped,above]{$\\mu \\times \\idm{M}$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu\\circ\\left(\\mu \\times\n    \\idm{M}\\right) = \\mu \\circ \\left(\\idm{M} \\times \\mu\\right)$.}\n  \\label{fig:monoid_mu}\n\\end{figure}\n\nFor \\eqref{eq:monoid2} consider a morphism $\\eta$ from\n\\mynameref{def:singleton_set} \n\\footnote{\n It also is called \\cite{bib:maclane98} as a one point set\n}\n$I = \\{0\\}$ to the special element $e \\in M$ such that\n$\\forall m \\in M: e \\cdot m = m \\cdot e = m$. I.e. $\\eta: I \\to M$ and\n$e = \\eta(0)$. Consider 2 sets $B = I \\times M$ and $B' = M \\times I$. 
\nWe have 2 \\mynameref{def:isomorphism}s: $B \\cong_\\lambda M$ and $B'\n\\cong_\\rho M$ such that\n\\[\n\\lambda(m) = 0 \\times m\n\\] \nand\n\\[\n\\rho(m) = m \\times 0.\n\\] \n\nIf we apply the products (see \\mynameref{def:product_of_morphisms}) $\\eta \\times \\mu$ and\n$\\mu \\times \\eta$ on $B$ and $B'$ respectively then we get\n\\begin{eqnarray}\n\\eta \\times \\idm{M} \\left(0 \\times m\\right) = e \\times m,\n\\nonumber \\\\\n\\idm{M} \\times \\eta \\left(m \\times 0\\right) = m \\times e.\n\\nonumber\n\\end{eqnarray}\nAfter the application of $\\mu$ on the result we obtain\n\\begin{eqnarray}\n\\mu \\left(\\eta \\times \\idm{M} \\left(0 \\times m\\right) \\right) \n= \\mu \\left( e \\times m \\right) = e \\cdot m,\n\\nonumber \\\\\n\\mu \\left( \\idm{M} \\times \\eta \\left(m \\times 0\\right) \\right) = \n\\mu \\left(m \\times e \\right) = m \\cdot e.\n\\nonumber\n\\end{eqnarray}\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=above:$M$] (M') at (0,3) {};    \n    \\node[ele,label=above:$M \\times M$] (M21) at (3,3) {};    \n    \\node[ele,label=below:$M \\times M$] (M22) at (0,0) {};    \n    \\node[ele,label=below:$M$] (M) at (3,0) {};    \n\n    \\draw [double equal sign distance] (M') to (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M') to\n    node[sloped,below]{$\\lambda$} (M21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M') to\n    node[sloped,above]{$\\rho$} (M22);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M22) to\n    node[sloped,above]{$\\mu$} (M);\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (M21) to\n    node[sloped,above]{$\\mu$} (M);\n  \\end{tikzpicture}\n  \\caption{Commutative diagram for $\\mu \\circ (\\eta \\times \\idm{M})\n    \\circ \\lambda = \\mu \\circ (\\idm{M} \\times \\mu) \\circ \\rho =\n    \\idm{M}$ .} \n  \\label{fig:monoid_eta_lambda_rho}\n\\end{figure}\nThe \\eqref{eq:monoid2} leads to the following equation for morphisms\n\\[\n\\mu \\circ (\\eta \\times \\idm{M}) \\circ \\rho = \n\\mu \\circ (\\idm{M} \\times \\mu) \\circ \\lambda = \n\\idm{M}\n\\]\nor the commutative diagram show on \\cref{fig:monoid_eta_lambda_rho}.\n\nBefore given a formal definition lets look at the operations were used\nfor the construction. The first one is the product of 2 objects:\n\\[\nM \\times M.\n\\]\nWe also have 2 pairs of morphisms:\n\\begin{eqnarray}\n\\mu: M \\times M \\to M,\n\\nonumber \\\\\n\\idm{M}: M \\to M.\n\\nonumber\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\eta: I \\to M, \n\\nonumber \\\\\n\\idm{M}: M \\to M.\n\\nonumber\n\\end{eqnarray}\nThe pairs can be combined into one using\n\\mynameref{def:product_of_morphisms} as follows:\n\\begin{eqnarray}\n\\mu \\times \\idm{M}: \\left(M \\times M\\right) \\times M \\to M \\times M,\n\\nonumber \\\\\n\\idm{M} \\times \\mu: M \\times \\left(M \\times M\\right) \\to M \\times M\n\\nonumber\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\eta \\times \\idm{M}: I \\times M \\to M \\times M,\n\\nonumber \\\\\n\\idm{M} \\times \\eta: M \\times I \\to M \\times M.\n\\nonumber\n\\end{eqnarray}\nThe same structure \n\\footnote{not only objects mapping but also morphisms mapping}\nis used by\n\\mynameref{def:functor} \nand\nespecially by \\mynameref{ex:product_bifunctor}.  
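\n\nAs a quick concreteness check (the $(\\mathbb{Z}_6,+)$ example below is purely illustrative, not from the text), both monoid conditions can be verified mechanically for a small finite set:\n\\begin{minted}{python}\nM = range(6)\nmu = lambda a, b: (a + b) % 6  # mu: M x M -> M\neta = lambda _: 0              # eta: I -> M picks out e = eta(0) = 0\nassert all(mu(x, mu(y, z)) == mu(mu(x, y), z) for x in M for y in M for z in M)\nassert all(mu(eta(0), m) == m == mu(m, eta(0)) for m in M)\n\\end{minted}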
\n\n%% If we take into consideration that one-point set is\n%% \\mynameref{ex:set_terminal_object} then we can conclude that the\n%% monoid can be defined for instance in a \\mynameref{def:cartesian_closed_category}\n%% as follow\nNow we are ready to provide the monoid definition in the terms of morphisms.\n\\begin{definition}[Monoid]\n\\label{def:monoid_category}\nConsider \\mynameref{def:setcategory} $\\cat{C}$ with a\n\\mynameref{def:singleton_set} $t \\in \\catob{C}$. The \\mynameref{def:cartesian_product}\nwith \\mynameref{def:product_of_morphisms} forms a\n\\mynameref{def:bifunctor} $\\times$ (see \\cref{ex:product_bifunctor}).\nThe object $m \\in\n\\catob{C}$ is called \\textit{monoid} if the following\nconditions satisfied:\n\\begin{enumerate}\n\\item there is a \\mynameref{def:morphism} $\\mu: m \\times m \\to m$ in\n  the category\n\\item there is another morphism $\\eta: t \\to m$ in the category\n\\item the morphisms satisfy the following conditions:\n\\begin{equation}\n\\mu\\circ\\left(\\mu \\times\n    \\idm{M}\\right) = \\mu \\circ \\left(\\idm{M} \\times \\mu\\right) \\circ\n    \\alpha,\n\\label{eq:monoid_def_mu}\n\\end{equation}\n\\begin{equation}\n\\mu \\circ (\\eta \\times \\idm{M})\n\\circ \\lambda = \\mu \\circ (\\idm{M} \\times \\mu) \\circ \\rho =\n\\idm{M}\n\\label{eq:monoid_def_eta}\n\\end{equation}\nwhere $\\alpha$ (associator) is an \\mynameref{def:isomorphism} between\n$m \\times (m \\times m)$ and $(m \\times m) \\times m$. $\\lambda, \\rho$\nare other isomorphisms:  \n\\[\nm \\cong_\\lambda t \\times m\n\\]\nand\n\\[\nm \\cong_\\rho m \\times t \n\\]\n\\end{enumerate}\n\\end{definition}\n\n\\section{Monoidal category}\nAs we saw in the categorical definition for monoid (see\n\\cref{def:monoid_category}) the category $\\cat{C}$ should satisfy\nseveral conditions to have an object as monoid. Lets formalise the conditions.\n\\begin{definition}[Monoidal category]\n\\label{def:monoidal_category}\nA category $\\cat{C}$ is called \\textit{monoidal category} if it is\nequipped with a \\mynameref{def:monoid} structure i.e. there are\n\\begin{itemize}\n\\item \\mynameref{def:bifunctor} $\\otimes: \\cat{C} \\times \\cat{C} \\tof\n  \\cat{C}$ called \\textit{monoidal product} \\index{Monoidal\n    product!definition} \n\\item an \\mynameref{def:object} $e$ called unit object or identity object\n\\end{itemize}\n\nThe elements should satisfy (up to \\mynameref{def:isomorphism})\nseveral conditions. The first one:\nassociativity: \n\\begin{equation}\na \\otimes \\left( b \\otimes c \\right) \\cong_\\alpha\n  \\left( a \\otimes b \\right) \\otimes c,\n\\nonumber\n\\end{equation}\nwhere $\\alpha$ is called associator. \\index{Associator!definition}\nThe second condition says that\n$e$ can be treated as left and right identity: \n\\begin{eqnarray}\na \\cong_\\lambda e \\otimes a, \n\\nonumber \\\\\na \\cong_\\rho a \\otimes e,\n\\nonumber\n\\end{eqnarray}\nwhere $\\lambda, \\rho$ are called as left and right unitors respectively. \\index{Left unitor!definition} \\index{Right unitor!definition}\n\\end{definition}\n\nIn the \\mynameref{def:setcategory} we have $\\times$ as the monoidal\nproduct (see \\cref{ex:product_bifunctor}). There is also a\nmorphism $\\eta$ from terminal object $t$ to $e$\n\\cite{bib:stackexchange:terminalinmonoid} (see \\cref{def:monoid_category}). 
\n\n\\begin{definition}[Strict monoidal category]\n\\label{def:strict_monoidal_category}\n\\index{Associator}\nA \\mynameref{def:monoidal_category} $\\cat{C}$ is said to be \\textit{strict} if the\nassociator, left and right unitors are all identity morphisms i.e.\n\\[\n\\alpha = \\lambda = \\rho = \\idm{C}.\n\\]\n\\index{Left unitor} \\index{Right unitor}\n\\end{definition}\n\n\\begin{remark}[Monoidal product]\n\\label{rem:monoidal_product}\nThe monoidal product is a binary operation that specifies the exact\nmonoidal structure. Often it is called as \\textit{tensor product} but\nwe will avoid the naming because it is not always the same as the\n\\mynameref{def:tensor_product} introduced for\n\\mynameref{def:hilbert_space}s. We also note that the monoidal product is\na \\mynameref{def:bifunctor}. \n\\end{remark}\n\n\\section{Category of endofunctors}\n\nThe \\mynameref{ex:fun_category} is an example of a category. We can\napply additional limitation and consider only\n\\mynameref{def:endofunctor}s i.e. we will look at the category\n$[\\cat{C}, \\cat{C}]$ - the category of functors from category $\\cat{C}$ to\nthe same category. One of the most popular math definition of a monad\nis the following: \n``All told, a monad in X is just a monoid in the category of\nendofunctors of X''\\cite{bib:maclane98}.\nLater we will give an explanation for that one.\n\nWe start with the formal definition of category of endofunctors and a\ntensor product in the category\n\\begin{definition}[Category of endofunctors]\n\\label{def:category_of_endofunctors}\nLet $\\cat{C}$ is a category, then  the category $[\\cat{C}, \\cat{C}]$ of\nfunctors from category $\\cat{C}$ to the same category is called the\ncategory of endofunctors. The monoidal product in the category is the\nfunctor composition. \n\\end{definition}\n\n\\begin{definition}[Monad]\n  \\label{def:monad}\n  The monad $M$ is an \\mynameref{def:endofunctor} with 2\n  \\mynameref{def:nt}s:\n  \\begin{equation}\n    \\label{eq:monad_mu}\n    \\mu: M \\circ M \\tont M\n  \\end{equation}\n  and\n  \\begin{equation}\n    \\label{eq:monad_eta}\n    \\eta: \\idf{C} \\tont M,\n  \\end{equation}\n  where $\\idf{C}$ is \\mynameref{def:idfunctor}.\n\n  The $\\eta, \\mu$ should satisfy the following conditions:\n  \\begin{eqnarray}\n    \\mu \\circ M \\mu = \\mu \\circ \\mu M, \n    \\nonumber \\\\\n    \\mu \\circ M \\eta = \\mu \\circ \\eta M = \\idnt{M},\n    \\label{eq:monad}\n  \\end{eqnarray}\n  where $M \\mu, M \\eta$ - \\mynameref{def:rw}s, $\\mu M, \\eta M$ -\n  \\mynameref{def:lw}s, $\\idnt{M}$ - \\mynameref{def:idnt} for $M$.\n  \\mynameref{def:vertical_composition} is used in the equations.\n\n  The monad will be denoted later as $\\left<M, \\mu, \\eta\\right>$.\n\\end{definition}\n\nLets look at the requirements \\eqref{eq:monad} more closely. Notice\nthat the functor composition is associative:\n\\[\nM \\circ ( M \\circ M ) = (M \\circ M) \\circ M = M^3.\n\\]\nSecondly \nall rewrite it with \\eqref{eq:lw} and \\eqref{eq:rw} as follows\n\\begin{eqnarray}\n  \\mu \\circ \\left( \\idnt{M} \\star \\mu \\right) = \n  \\mu \\circ \\left( \\mu \\star \\idnt{M} \\right), \n  \\nonumber \\\\\n  \\mu \\circ \\left( \\idnt{M} \\star \\eta \\right) = \n  \\mu \\circ \\left( \\eta \\star \\idnt{M} \\right) = \\idnt{M}.\n  \\label{eq:monad_p1}\n\\end{eqnarray}\nThus we can notice that the pair of operations (composition $\\circ$\nand \\mynameref{def:horizontal_composition} $\\star$) forms the bifunctor (see\n\\mynameref{rem:bifunctor_fun_cat}). 
\n\nThe morphism $\\idnt{M} \\star \\mu$ acts on $M \\circ ( M \\circ M )$ as\n\\[\n\\idnt{M} \\star \\mu : M \\circ ( M \\circ M ) \\tont M \\circ (M \\otimes M)\n\\]\nthus\n\\[\n\\mu \\circ (\\idnt{M} \\star \\mu) : M \\circ ( M \\circ M ) \\tont M \\otimes (M \\otimes M).\n\\]\nSimilarly \n\\[\n\\mu \\circ (\\mu \\star \\idnt{M}) : (M \\circ  M) \\circ M  \\tont ( M \\otimes M) \\otimes M.\n\\]\nI.e. the both morphisms start at the same object $M^3$ and finish also\nat the same point. The equality \n\\begin{equation}\n\\mu \\circ (\\idnt{M} \\star \\mu) = \n\\mu \\circ (\\mu \\star \\idnt{M})\n\\label{eq:monoidalobject1}\n\\end{equation}\nis similar to the conditions on the \\cref{fig:monoid_mu} and can be\nwritten as \\cref{fig:monad_monoid1}. Thus if we compare\n\\eqref{eq:monoidalobject1} and \\eqref{eq:monoid_def_mu} then we can say\nthat they are same if we replace $\\star$ sign with $\\times$ one. I.e.\nin the case we can say that the monad looks like a\n\\mynameref{def:monoid_category}. \n\n\\begin{figure}\n  \\centering\n  \\begin{tikzpicture}[ele/.style={fill=black,circle,minimum\n        width=.8pt,inner sep=1pt},every fit/.style={ellipse,draw,inner\n        sep=-2pt}]\n\n    % the texts\n    \n    \\node[ele,label=left:$M^3$] (m3) at (0,4) {};    \n    \\node[ele,label=left:$M^2$] (m21) at (0,0) {};    \n    \\node[ele,label=right:$M^2$] (m22) at (4,4) {};\n    \\node[ele,label=right:$M$] (m) at (4,0) {};\n\n    \\draw[->,thick,shorten <=2pt,shorten >=2pt] (m3) to\n    node[sloped,above]{$\\idnt{M} \\star \\mu$} (m21);\n    \\draw[->,thick,shorten <=2pt,shorten >=2] (m3) to\n    node[sloped,above]{$\\mu \\star \\idnt{M}$} (m22); \n    \\draw[->,thick,shorten <=2pt,shorten >=2] (m21) to\n    node[sloped,above]{$\\mu$} (m); \n    \\draw[->,thick,shorten <=2pt,shorten >=2] (m22) to\n    node[sloped,above]{$\\mu$} (m); \n  \\end{tikzpicture}\n  \\caption{Monad as monoid in the category of endofunctors.}\n  \\label{fig:monad_monoid1}\n\\end{figure}\n\nFor the identity element consider the same trick: replace in\n\\eqref{eq:monoid_def_eta} tensor\nproduct $\\times$ with \\mynameref{def:horizontal_composition} $\\star$ and\nmorphisms $\\idm{M}, \\rho, \\lambda$ with identity natural transformation\n$\\idnt{M}$. Thus the equation\n\\[\n\\mu \\circ (\\eta \\times \\idm{M})\n\\circ \\lambda = \\mu \\circ (\\idm{M} \\times \\mu) \\circ \\rho =\n\\idm{M}\n\\]\nwill be replaced with\n\\[\n\\mu \\circ (\\eta \\star \\idnt{M}) = \\mu \\circ (\\idnt{M} \\star \\mu) =\n\\idnt{M}\n\\]\nthat is the exact we want to get (see second equation of\n\\eqref{eq:monad_p1}). \n\n%% There should be an identity element for each \\mynameref{def:monoid}.\n%% Lets show that the second equation of  \\eqref{eq:monad_p1} provides us\n%% the required element. Lets $U =\n%% \\eta\\left(\\idf{C}\\right)$. 
\n%% We want to show that \n%% \\begin{equation}\n%% U \\otimes M = M \\otimes U = M, \n%% \\nonumber\n%% \\end{equation}\n%% that is exactly required as identity presence in the\n%% \\mynameref{def:monoid} definition.\n%% Consider the action of the left part of second equation\n%% \\eqref{eq:monad_p1} on $M = M \\circ \\idf{C}$:\n%% \\begin{eqnarray}\n%% \\mu \\circ \\left( \\idnt{M} \\star \\eta \\right) \\left[M \\circ \\idf{C}\\right] = \n%% \\nonumber \\\\\n%% =\n%% \\mu \\left[M \\circ U\\right] = M \\otimes U\n%% \\nonumber\n%% \\end{eqnarray}\n%% For the middle part of  of second equation\n%% \\eqref{eq:monad_p1}\n%% \\begin{eqnarray}\n%% \\mu \\circ \\left( \\eta \\star \\idnt{M} \\right) \\left[M\\right] = \n%% \\mu \\circ \\left( \\eta \\star \\idnt{M} \\right) \\left[\\idf{C} \\circ M\\right] = \n%% \\nonumber \\\\\n%% =\n%% \\mu \\left[U \\circ M\\right] = U \\otimes M.\n%% \\nonumber\n%% \\end{eqnarray}\n%% Thus\n%% finally\n%% \\[\n%% M \\otimes U = U \\otimes M = M\n%% \\]\n%% that finals the proof of monoidal structure.\n\n\\section{Monads in programming languages}\nThere are several examples of \\mynameref{def:monad} implementation in\ndifferent programming languages:\n\n\\subsection{Haskell}\n\n\\begin{example}[Monad][\\textbf{Hask}]\n\\label{ex:monad_haskell}\nIn Haskell monad can be defined from \\mynameref{ex:functor_haskell} as follows \n\\footnote{real definition is quite different from the presented one}\n\\begin{minted}{haskell}\n    class Functor m => Monad m where\n        return :: a -> m a\n        (>>=)  :: m a -> (a -> m b) -> m b\n\\end{minted} \n\nTo show how this one can be get we can start from a definition that is\nsimilar to the math definition:\n\\begin{minted}{haskell}\n    class Functor m => Monad m where\n        return :: a -> m a\n        join  :: m (m a) -> m a\n\\end{minted} \nwhere \\mintinline{haskell}{return} can be treated as $\\eta$\n\\eqref{eq:monad_eta} and \n\\mintinline{haskell}{join} as $\\mu$ \\eqref{eq:monad_mu}. In the case\nthe bind operator \\mintinline{haskell}{>>=} can be implemented as follows\n\\begin{minted}{haskell}\n(>>=)  :: m a -> (a -> m b) -> m b\nma >>= f = join ( fmap f ma )\n\\end{minted} \n\n\\end{example}\n\n\\subsection{C++}\nThe monad in C++ use the functor definition from \\mynameref{ex:functor_cpp}\n\\begin{minted}{c++}\n// from functor.h\ntemplate < template< class ...> class M, class A, class B> \nM<B> fmap(std::function<B(A)>, M<A>);\n\n// file: monad.h\ntemplate < template< class ...> class M, class A> \nM<A> pure(A);\n\ntemplate < template< class ...> class M, class A> \nM<A> join(M< M<A> >);\n\\end{minted}\nwhere \\mintinline{c++}{pure} can be treated as $\\eta$\n\\eqref{eq:monad_eta} and \n\\mintinline{haskell}{join} as $\\mu$ \\eqref{eq:monad_mu}. In the case\nthe bind operator can be implemented as follows\n\\begin{minted}{c++}\ntemplate < template< class ...> class M, class A, class B> \nM<B> bind(std::function< M<B> (A) > f, M<A> a) {\n  return join( fmap<>(f, a) );\n};\n\\end{minted}\n\n\\subsection{Scala}\n\n\\begin{example}[Monad][\\textbf{Scala}]\nThe monad concept is Scala is more close to formal math definition for\n\\mynameref{def:monad}. It can be defined as follows \n\\footnote{real definition is quite different from the presented one}\n\\label{ex:monad_scala}\n\\begin{minted}{scala}\ntrait M[A] {\n  def flatMap[B](f: A => M[B]): M[B]\n}\n  \ndef unit[A](x: A): M[A]\n\\end{minted} \nI.e. \\mintinline{scala}{flatMap} can be considered as $\\mu$ and\n\\mintinline{scala}{unit} as $\\eta$. 
\n\\end{example}\n\nTBD\n\n\\section{Kleisli category}\n\n\\begin{definition}[Kleisli category]\n\\label{def:kleisli_category}\nLet $\\cat{C}$ be a category, $M$ an \\mynameref{def:endofunctor} and\n$\\left<M, \\mu, \\eta\\right>$ a \\mynameref{def:monad}. Then we can\nconstruct a new category $\\cat{C_M}$ as follows:\n\\begin{eqnarray}\n\\catob{C_M} = \\catob{C},\n\\nonumber \\\\\n\\hom_{\\cat{C_M}}\\left(a, b\\right) = \n\\hom_{\\cat{C}}\\left(a, M(b)\\right)\n\\nonumber\n\\end{eqnarray}\ni.e. the objects of the categories $\\cat{C}$ and $\\cat{C_M}$ are the same but\nthe morphisms of $\\cat{C_M}$ form a subset of the morphisms of $\\cat{C}$:\n$\\cathom{C_M} \\subset \\cathom{C}$. The category is called the\n\\textit{Kleisli category}. \n\nThe identity morphism in the Kleisli category is the\n\\mynameref{def:nt} $\\eta$ \\eqref{eq:monad_eta} defined by the monad\n$\\left<M, \\mu, \\eta\\right>$: \n\\[\n\\idm{C_M} = \\eta\n\\]\n\\end{definition}\n\n\\begin{remark}[Kleisli category composition]\nThe \\mynameref{def:kleisli_category} has non-trivial composition rules.\nSuppose we have 2 \\mynameref{def:morphism}s from $\\cathom{C_M}$:\n\\[\nf_M: a \\to b\n\\]\nand\n\\[\ng_M: b \\to c.\n\\]\nThe morphisms have corresponding ones in $\\cat{C}$:\n\\[\nf: a \\to M(b)\n\\]\nand\n\\[\ng: b \\to M(c).\n\\]\nThe composition $g_M \\circ f_M$ gives a new morphism\n\\[\nh_M = g_M \\circ f_M: a \\to c.\n\\]\nThe corresponding one from $\\cat{C}$ is\n\\[\nh: a \\to M(c).\n\\]\nIt has to be pointed out that the compositions in $\\cat{C}$ and\n$\\cat{C_M}$ are not the same:\n\\[\ng_M \\circ f_M \\ne g \\circ f.\n\\]\n\\end{remark}\n\nThe \\mynameref{def:kleisli_category} is widespread in programming,\nespecially because it provides a good description of different types of\ncomputations, for instance \\cite{bib:Moggi91, bib:milewski2018category} \n\\begin{itemize}\n\\item \\textbf{Partiality} i.e. when a function is not defined for every input, for\n  instance the following expression is undefined (or partially\n  defined) for $x = 0$: $f(x) = \\frac{1}{x}$\n\\item \\textbf{Non-Determinism} i.e. when multiple outputs are possible\n\\item \\textbf{Side-effects} i.e. when a function communicates with\n  an environment\n\\item \\textbf{Exception} i.e. when some input is incorrect and can\n  produce an abnormal result. Therefore it is the same as\n  \\textbf{Partiality} and will be considered below as the same type of\n  computation. \n\\item \\textbf{Continuation} i.e. when we need to save the current\n  state of the computation and be able to restore it on demand later\n\\item \\textbf{Interactive input} i.e. a function that reads data from\n  an input device (keyboard, mouse, etc.)\n\\item \\textbf{Interactive output} i.e. a function that writes data to\nan output device (monitor etc.)\n\\end{itemize}\n\n\\subsection{Partiality and Exception}\n\nPartial functions and exceptions can be processed via a monad called\nMaybe. Implementations in different languages are given below, together\nwith a usage example for the following function:\n\\[\nh(x) = \\frac{1}{2 \\sqrt{x}}.\n\\]\nThe function is a composition of 3 functions:\n\\begin{eqnarray}\nf_1(x) = \\sqrt{x},\n\\nonumber \\\\\nf_2(x) = 2 \\cdot x,\n\\nonumber \\\\\nf_3(x) = \\frac{1}{x}\n\\label{eq:monadmaybe_ex_f}\n\\end{eqnarray}\nand as a result the goal can be implemented as the composition:\n\\begin{equation}\nh = f_3 \\circ f_2 \\circ f_1.\n\\label{eq:monadmaybe_ex_h}\n\\end{equation}\n$f_2$ is a \\mynameref{def:pure_function} and is defined $\\forall x \\in \\mathbb{R}$. 
The\nfunctions $f_1, f_3$ are partially defined.\n\n\\begin{example}[Maybe monad][\\textbf{Hask}]\n\\label{ex:maybe_monad_haskell}\nThe Maybe monad can be implemented as follows\n\\begin{minted}{haskell}\n  instance Monad Maybe where\n    return = Just\n    join Just( Just x) = Just x\n    join _ = Nothing\n\\end{minted} \n\nOur functions \\eqref{eq:monadmaybe_ex_f} can be implemented as follows\n\\begin{minted}{haskell}\nf1 :: (Ord a, Floating a) => a -> Maybe a\nf1 x = if x >= 0 then Just(sqrt x) else Nothing \n\nf2 :: Num a => a -> Maybe a\nf2 x = Just (2*x)\n\nf3 :: (Eq a, Fractional a) => a -> Maybe a\nf3 x = if x /= 0 then Just(1/x) else Nothing\n\\end{minted}\n\nThe $h$ \\eqref{eq:monadmaybe_ex_h} is the composition via bind\noperator:\n\\begin{minted}{haskell}\nh :: (Ord a, Floating a) => a -> Maybe a\nh x = (return x) >>= f1 >>= f2 >>= f3\n\\end{minted}\n\nThe usage example is the following:\n\\begin{minted}{bash}\n*Main> h 4\nJust 0.25\n*Main> h 1\nJust 0.5\n*Main> h 0\nNothing\n*Main> h (-1)\nNothing\n\\end{minted}\n\n\\end{example}\n\n\\begin{example}[Maybe monad][\\textbf{C++}]\n\\label{ex:maybe_monad_cpp}\nThe Maybe monad can be implemented as follows\n\\begin{minted}{c++}\ntemplate <class A> using Maybe = std::optional<A>;\n\ntemplate < class A, class B> \nMaybe<B> fmap(std::function<B(A)> f, Maybe<A> a) {\n  if (a) {\n    return f(a.value());\n  }\n  return {};\n}\n\ntemplate < class A> \nMaybe<A> pure(A a) {\n  return a;\n}\n\ntemplate < class A> \nMaybe<A> join(Maybe< Maybe<A> > a){\n  if (a) {\n    return a.value();\n  }\n  return {};\n}\n\\end{minted} \n\nOur functions \\eqref{eq:monadmaybe_ex_f} can be implemented as follows\n\\begin{minted}{c++}\nstd::function<Maybe<float>(float)> f1 =\n    [](float x) {\n      if (x >= 0) {\n        return Maybe<float>(sqrt(x));\n      }\n      return Maybe<float>();\n    };\n\nstd::function<Maybe<float>(float)> f2 = [](float x) { return 2 * x; };\n\nstd::function<Maybe<float>(float)> f3 =\n    [](float x) {\n      if (x != 0) {\n        return Maybe<float>(1 / x);\n      }\n      return Maybe<float>();\n    };\n}\n\\end{minted}\n\nThe $h$ \\eqref{eq:monadmaybe_ex_h} is the composition via bind\noperator:\n\\begin{minted}{c++}\nauto h(float x) {\n  Maybe<float> a = pure(x);\n  return bind(f3,bind(f2,bind(f1, a)));\n};\n\\end{minted}\n\n\\end{example}\n\n\\subsection{Non-Determinism}\n\nTBD\n\n\\subsection{Side effects and interactive input/output}\n\nTBD\n\n\\subsection{Continuation}\n\nTBD\n\n\n\\section{Examples}\n\n\\subsection{Quantum mechanics}\n\n\\begin{definition}[Tensor product]\n  \\label{def:tensor_product}\n  TBD\n\\end{definition}\n\nThe tensor product in quantum mechanics is used for\nrepresenting a system that consists of multiple systems. For instance\nif we have an interaction between an 2 level atom ($a$ is excited\nstate $b$ as a ground state) and one mode light then the\natom has its own Hilber space $\\mathcal{H}_{at}$ with $\\ket{a}$ and\n$\\ket{b}$ as basis \nvectors.  
Light also has its own Hilbert space $\\mathcal{H}_f$ with the Fock states\n$\\{\\ket{n}\\}$ as the basis.\n\\footnote{\n  Strictly speaking, $\\mathcal{H}_f$ is an infinite dimensional Hilbert space and\n  thus falls outside our assumption that the \\textbf{FdHilb} category is\n  a collection of finite dimensional Hilbert spaces only.\n}\nThe resulting system that describes both atom\nand light is represented as the tensor product $\\mathcal{H}_{at}\n\\otimes \\mathcal{H}_f$.\n\nThe morphisms of the \\textbf{FdHilb} category have a connection with the\n\\mynameref{def:tensor_product}. Consider the so called Hilbert-Schmidt\ncorrespondence for finite dimensional Hilbert spaces, i.e. for given\n$\\mathcal{A}$ and $\\mathcal{B}$ there is a natural isomorphism between\nthe tensor product and the linear maps (aka morphisms) between\n$\\mathcal{A}$ and $\\mathcal{B}$:\n\\[\n\\mathcal{A}^\\ast \\otimes \\mathcal{B} \\cong \\hom(\\mathcal{A}, \\mathcal{B})\n\\]\nwhere $\\mathcal{A}^\\ast$ is the \\mynameref{def:dual_space}.\n\n\nTBD\n", "meta": {"hexsha": "9302b413733729ead987371c9f69308d2ec3b13d", "size": 28598, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cattheory/monads.tex", "max_stars_repo_name": "ivanmurashko/articles", "max_stars_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-27T08:59:55.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-27T08:59:55.000Z", "max_issues_repo_path": "cattheory/monads.tex", "max_issues_repo_name": "ivanmurashko/articles", "max_issues_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cattheory/monads.tex", "max_forks_repo_name": "ivanmurashko/articles", "max_forks_repo_head_hexsha": "522db3ad21e96084490acd39a146a335763e5beb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7755555556, "max_line_length": 135, "alphanum_fraction": 0.6750821736, "num_tokens": 10091, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680199891789, "lm_q2_score": 0.7520125848754472, "lm_q1q2_score": 0.5588717037088304}}
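As a companion to the Haskell and C++ Maybe examples above, the same composition can be sketched in Python, modelling Maybe as Optional with None in the role of Nothing (a sketch under that modelling choice; the names are ours). Optional conflates a present None with an absent value, which is acceptable here since the wrapped values are floats.
\begin{minted}{python}
import math
from typing import Callable, Optional

def pure(x: float) -> Optional[float]:      # eta: wrap a plain value
    return x

def bind(ma: Optional[float],
         f: Callable[[float], Optional[float]]) -> Optional[float]:
    return None if ma is None else f(ma)    # bind = join . fmap f

f1 = lambda x: math.sqrt(x) if x >= 0 else None   # partial
f2 = lambda x: 2 * x                              # pure
f3 = lambda x: 1 / x if x != 0 else None          # partial

def h(x: float) -> Optional[float]:         # h = f3 . f2 . f1 via bind
    return bind(bind(bind(pure(x), f1), f2), f3)

print(h(4), h(1), h(0), h(-1))              # 0.25 0.5 None None
\end{minted}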
{"text": "% !TEX root = main.tex\n\n\\chapter{Non-parametric methods}\\label{chap:nonparametric}\n\n%% intro\n%A typical problem in statistics is to estimate the distribution of a random variable from a set of observations. \n%\n%\\smallskip \nSo far we have assumed that the parametric form of the distribution is known, for example $X\\sim\\text{Bernoulli}(\\theta)$ where $\\theta\\in[0,1]$ is unknown or $X\\sim\\text{Poisson}(\\theta)$ where $\\theta>0$ is unknown. We now look at methods for estimating the distribution of a random variable where the parametric form of the distribution is not known. Such methods are called \\emph{non-parametric} or \\emph{distribution-free} methods.\n\n\\smallskip\nWe consider only continuous distributions, so we can assume that the inverse CDF $F^{-1}(u)$ exists for all $u\\in[0,1]$ and that the probability of different observations taking the same value is zero.\n\n\n\\input{10A_quantiles}\n\\input{10B_location_models}\n\\input{10C_one_sample_tests}\n\n", "meta": {"hexsha": "dbc74856a29c1220c98f9798c06b705db4b06647", "size": 956, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2500/10_nonparametric.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2500/10_nonparametric.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2500/10_nonparametric.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 50.3157894737, "max_line_length": 436, "alphanum_fraction": 0.7730125523, "num_tokens": 245, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5588716910230128}}
{"text": "%!TEX root = TDT4265-Summary.tex\r\n\\section{Epipolar geometry}\r\nWith one camera, you know the direction to each pixel/object in an image, but not its distance. With two cameras and epipolar geometry, this can be determined.\r\n\r\n\\begin{figure}[htbp]\r\n    \\centering\r\n    \\includegraphics[width=0.95\\linewidth]{images/Epipolar_geometry}\r\n    \\caption{Epipolar geometry}\r\n    \\label{fig:epipolar-geometry}\r\n\\end{figure}\r\n\r\nIn Figure \\ref{fig:epipolar-geometry}, both cameras see the object $\\V{X}$. The line $\\V{O}_L$--$\\V{X}$ from the left camera to $\\V{X}$, is seen as the epipolar line $\\e_R$--$\\x_R$ by the right camera, marked in red. (Correspondingly, $\\x_L$--$\\e_L$ is an epipolar line for the left camera.) This means that $\\V{X}$ must lie on the red epipolar line, but we don't know where (this is the fundamental ambiguity). This provides a constraint that can be used to create a model describing the geometric transformation between the two image planes.\r\n\r\n\\subsection{Essential matrix}\r\nThe epipolar constraint from the previous section can be formulated as\r\n\\begin{equation}\r\n    \\y_1\\T \\M{E} \\y_0 = 0\r\n\\end{equation}\r\nwhere $\\y_1$ and $\\y_0$ are normalized image coordinates and $\\M{E}$ is the essential matrix. This relation holds if the coordinates correspond to the same point in 3D space. Thus, the essential matrix relates points in global coordinates.\r\n\r\n\\subsection{Fundamental matrix}\r\nThe fundamental matrix $\\M{F}$ does the same as the essential matrix, but relates points given in image pixel coordinates, such as $\\x_L$ and $\\x_R$ in Figure \\ref{fig:epipolar-geometry}. A similar relation holds for image coordinates that describe the same point in space:\r\n\\begin{equation}\r\n    \\x_1\\T \\M{F} \\x_0 = 0\r\n\\end{equation}\r\n\r\n\\subsection{Structure from motion (SfM)}\r\nSfM is a method for using data from many 2D images to create a 3D map. It's based on finding features in all the images taken (a huge amount is necessary). Match features between images to estimate camera location, and then pixel distances.\r\n", "meta": {"hexsha": "c2d99a07c9e249823a14113f9b4f7c492e19e52e", "size": 2027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TDT4265 Computer vision/epipolar-geometry.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TDT4265 Computer vision/epipolar-geometry.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TDT4265 Computer vision/epipolar-geometry.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.8965517241, "max_line_length": 544, "alphanum_fraction": 0.7449432659, "num_tokens": 546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.7520125626441471, "lm_q1q2_score": 0.5588716871872392}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\subsection{Spectrum of synchrotron radiation}\n\n\\marginpar{Saturday\\\\ 2020-8-29, \\\\ compiled \\\\ \\today}\n\nWe will use a rough, heuristic approach. \nRecall that the spectrum depends on \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{\\omega } \\dd{\\Omega }} \\propto \\abs{\\hat{E}(\\omega )}^2\n\\,,\n\\end{align}\n%\nwhere \\(\\hat{E}(\\omega )\\) is the Fourier transform of the electric field \\(E(t)\\).\n\nWhen we compute the Fourier transform we do so referring to the time as seen by the observer; we will see that the observer in any specific direction will only see the radiation for a very short amount of time each revolution because of relativistic beaming. \nIn order to see this, consider a single charge moving along a helical path.  \n\nLet us choose two points 1 and 2 along the trajectory, such that the path length between them is \\(\\Delta s\\) and the angle difference is \\(\\Delta \\theta \\). Let us also denote the radius of curvature of the trajectory as \\(a\\): then we will have \\(\\Delta s = a \\Delta \\theta \\). \n\nWe know that radiation is emitted in a cone with aperture \\(\\sim 1/\\gamma \\): let us choose the two points so that they are at the edges of where the radiation starts and then stops being received by the observer.\nThen, radiation emitted between the two points will always hit the observer.\n\nWe know that in this case \\(\\Delta \\theta  = 2 / \\gamma \\) by the geometry of the setup, since \\(2 / \\gamma \\) is the angle between two diametrically opposed points in the cone. \nThe equation of motion is \n%\n\\begin{align}\nm \\gamma \\dv{\\vec{v}}{t} = \\frac{q}{c} \\vec{v} \\times \\vec{B}\n\\,,\n\\end{align}\n%\nso the modulus of the velocity change, for small enough time differences, is\n%\n\\begin{align}\n\\frac{ \\abs{ \\Delta \\vec{v}}}{ \\Delta t} = \\frac{q}{mc \\gamma } v B \\sin \\alpha \n\\,,\n\\end{align}\n%\nwhere \\(\\alpha \\) is the angle between the velocity and \\(\\vec{B}\\), which is called the \\emph{pitch angle}. \nSince the modulus of \\(\\vec{v}\\) is constant, we can write \\(\\abs{ \\Delta \\vec{v}} = v \\Delta \\theta\\), so that \n%\n\\begin{align}\nv \\frac{ \\Delta \\theta }{\\Delta t} = \\frac{q}{mc \\gamma } vB \\sin \\alpha \n\\,,\n\\end{align}\n%\nbut since the path travelled is given by \\(\\Delta s = v \\Delta t\\) we have \n%\n\\begin{align}\nv^2 \\frac{ \\Delta \\theta }{ \\Delta s} &= \\frac{q}{mc \\gamma } v B \\sin \\alpha  \\\\\n \\frac{ \\Delta \\theta }{ \\Delta s} &= \\frac{q}{mc v \\gamma } B \\sin \\alpha \\\\\n \\frac{ 2/ \\gamma  }{ \\Delta s} &= \\frac{q}{mc v \\gamma } B \\sin \\alpha \\\\\n \\Delta s = \\frac{2 mc v}{q B \\sin \\alpha }\n\\,,\n\\end{align}\n%\nwhich is related to the frequency \\(\\omega _B = q B / mc \\gamma \\) by \n%\n\\begin{align}\n\\Delta s = \\frac{2 b}{\\gamma \\omega _B \\sin \\alpha  }\n\\,.\n\\end{align}\n\nThen, we can derive the time needed to go from point 1 to point 2: \n%\n\\begin{align}\n\\Delta t = \\frac{\\Delta s}{v} = \\frac{2}{\\gamma \\omega _B \\sin \\alpha }\n\\,,\n\\end{align}\n%\nhowever we must be careful: although this is the time as measured in the lab frame, the motion of the particle is highly relativistic and the radiation is emitted in two different places. \n\nLet us call \\(t_1^A\\) and \\(t_2^A\\) the times at which the radiation corresponding to  the start and the end of the pulse arrive at the observer. 
\nThey will be given by \n%\n\\begin{align}\nt_1^{A} = t_1 + \\frac{L}{c} + \\frac{\\Delta s}{c} \n\\qquad \\text{and} \\qquad\nt_2^{A} = t_2 + \\frac{L}{c}  \n\\,.\n\\end{align}\n\nThis means that \n%\n\\begin{align}\nt_1^A- t_2^{A} = t_2 - t_1 - \\frac{\\Delta s}{c}  \n= \\frac{2}{\\gamma \\omega_B \\sin \\alpha } - 2 \\frac{v/c}{\\gamma \\omega _B \\sin \\alpha }\n= \\frac{2}{\\gamma \\omega_B \\sin \\alpha } \\qty(1 - \\frac{v}{c})\n\\,,\n\\end{align}\n%\nbut if the motion is highly relativistic we can approximate \n%\n\\begin{align}\n1 - \\frac{v}{c} \\approx 1 - \\qty(1 - \\frac{1}{2 \\gamma^2}) = \\frac{1}{2 \\gamma^2}\n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\n\\Delta t^{A} = \\frac{2}{\\gamma \\omega _B \\sin \\alpha } \\frac{1}{2 \\gamma^2} = \\frac{1}{\\gamma^3 \\omega _B \\sin \\alpha }\n\\,,\n\\end{align}\n%\nso the reduction of the pulse duration is shorter by a factor \\(\\sim 1/ \\gamma^3\\) than \\(1 / \\omega _B\\): it becomes \\textbf{very short!}\n\nTherefore, the spectrum will be rather broad in frequency, of the order \\(\\sim \\gamma^3 \\omega _B\\). \n\nLet us define the \\textbf{critical pulsation} and the \\textbf{critical frequency}:\n%\n\\begin{align}\n\\omega _c = \\frac{3}{2} \\gamma^3 \\omega _B \\sin \\alpha \n\\qquad \\text{and} \\qquad\n\\nu _c = \\frac{3}{4 \\pi } \\gamma^3 \\omega _B \\sin \\alpha \n\\,,\n\\end{align}\n%\nwhich will roughly be around the cutoff of the spectrum. \n\nWe can see by looking at the old formulas we derived that the dependence of the emitted power per unit solid angle on the angle is always in the form \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{\\Omega }} = F(\\gamma \\theta )\n\\,,\n\\end{align}\n%\nwhich is a consequence of relativistic beaming. Therefore, this will also hold for the dependence of the electric field.\n\nNow, we want to find out how \\(\\theta \\) is related to \\(t\\): we know that \\(\\theta = s/ a\\) while \\(t = (s/v) (1 - v/c)\\); so \n%\n\\begin{align}\nt &= \\frac{a \\theta }{v} \\qty(1 - \\frac{v}{c}) \\\\\n\\gamma t &= \\frac{\\gamma  a \\theta }{v} \\qty(1 - \\frac{v}{c}) \\\\\n\\gamma \\theta &= \\gamma t \\frac{v}{a} \\qty(1 - \\frac{v}{c})^{-1}\n\\approx \\gamma t \\omega _B \\sin \\alpha 2 \\gamma^2 \\propto \\omega _c t\n\\,.\n\\end{align}\n\nTherefore, the electric field depends on time like \\(E(t) \\propto g(\\omega _c t)\\) for some function \\(g\\). \nIts Fourier transform will then look like \n%\n\\begin{align}\n\\hat{E}(\\omega ) &\\propto \\int_{- \\infty }^{\\infty } g(\\omega _c t) e^{-i \\omega t} \\dd{t}  \\\\\n&\\propto \\int_{-\\infty }^{\\infty } g(z) e^{-i \\frac{\\omega}{\\omega _c} z} \\dd{z}\n\\,,\n\\end{align}\n%\nmeaning that \\emph{the dependence of \\(\\hat{E}(\\omega )\\) on \\(\\omega \\) is only through the ratio \\(\\omega / \\omega _c\\).}\n\nRecall that the power per unit solid angle is given by \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{\\omega } \\dd{\\Omega }} \\propto \\abs{\\hat{E}(\\omega )}^2\n\\,,\n\\end{align}\n%\nso if we integrate over the solid angle and divide by the period we find that the power per unit frequency is \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{\\omega }} = c_1 F \\qty(\\frac{\\omega }{\\omega _c} )\n\\,,\n\\end{align}\n%\nwhere \\(c_1 \\) is some constant and \\(F\\) is some function. 
\nWe can evaluate \\(c_1 \\) since we know what is the total radiated power: \n%\n\\begin{align}\n\\dv{w}{t} = P \n= \\frac{2}{3} r_0^2 c \\gamma^2 \\beta_{\\perp}^2B^2 \n= \\frac{2}{3} \\frac{q^{4} \\beta^2 \\sin^2 \\alpha B^2 \\gamma^2}{m^2 c^3}\n\\,,\n\\end{align}\n%\nwhere we used the fact that \\(\\beta _\\perp^2 = \\beta^2 \\sin^2 \\alpha \\) and \\(r_0^2= q^{4} / m^2 c^{4}\\). \nIn terms of the constant \\(c_1 \\) and the function \\(F\\) this will be the integral over the frequency space: \n%\n\\begin{align}\n\\dv{w}{t} \n= c_1 \\int_0^{\\infty} F \\qty(\\frac{\\omega }{\\omega _c}) \\dd{\\omega }  = \nc_1 \\omega _c \\int_0^{\\infty } F(z) \\dd{z} \\overset{!}{=} \n\\frac{2}{3} \\frac{q^{4} \\beta^2 \\sin^2 \\alpha B^2 \\gamma^2}{m^2 c^3}\n\\,,\n\\end{align}\n%\nso we can write the value of the constant \\(c_1 \\) as \n%\n\\begin{align}\nc_1 &= \\frac{2}{3} \\frac{q^{4} \\beta^2 \\sin^2 \\alpha B^2 \\gamma^2}{m^2 c^3} \\frac{2}{3 \\gamma^3 \\omega _B \\sin \\alpha } \\frac{1}{\n\\int_0^{\\infty } F(z) \\dd{z} \n}   \\\\\n&= \\frac{q^3 \\beta^2  \\sin \\alpha B}{mc^2} \\frac{1}{\\int_0^{ \\infty } F(z) \\dd{z}}\n\\,.\n\\end{align}\n\nAs we shall show later the integral will evaluate to \\(\\int_0^{ \\infty } F(z) \\dd{z} = 2 \\pi / \\sqrt{3}\\), so, as long as the particle is ultrarelativistic (\\(\\beta \\approx 1\\)) we will have \n%\n\\begin{align}\nc_1 = \\frac{q^3 \\sin \\alpha B}{mc^2} \\frac{\\sqrt{3}}{ 2 \\pi }\n\\,,\n\\end{align}\n%\ntherefore \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{\\omega }} = \\frac{\\sqrt{3}}{ 2 \\pi }\n\\frac{q^3 B \\sin \\alpha }{mc^2} F \\qty( \\frac{\\omega }{\\omega _c})\n\\,.\n\\end{align}\n\nThe explicit expression of \\(F\\) can be derived \\cite[par.\\ 6.4]{rybickiRadiativeProcessesAstrophysics1979}, but we will not do it here. It is given by the modified Bessel function \n%\n\\begin{align}\nF(z) =  z \\int_{z}^{\\infty } k_{5/3} (x) \\dd{x}\n\\,.\n\\end{align}\n\nIt is a smooth function of \\(z\\), going to zero at 0 and \\(+ \\infty \\), and it has its peak around \\num{.29}, where it reaches a value close to 1 (this means that the peak of synchrotron radiation is around \\(\\omega \\sim \\num{.29} \\omega _c\\)). 
Its decay is then exponential.\nAsymptotically, we have that for \\(z \\ll 1\\) \n%\n\\begin{align}\nF(z) \\sim \\frac{4}{\\sqrt{3} \\Gamma (1/3)} \\qty(\\frac{z}{2})^{1/3}\n\\,,\n\\end{align}\n%\nwhile for \\(z \\gg 1\\): \n%\n\\begin{align}\nF(z) \\sim \\qty(\\frac{\\pi }{2})^{1/2} e^{-z} z^{1/2}\n\\,.\n\\end{align}\n\n\\subsubsection{Synchrotron emission from a nonthermal electron distribution}\n\nIn many astrphyiscal applications we need to consider the emission from electrons whose energies do not follow a thermal distribution but instead something like a powerlaw; the energy losses and gains caused by synchrotron emission actually cause the spectrum to get closer to a powerlaw, which gives further justification for considering this situation.\n\nSo, we will consider a population such that the number density of electrons per unit energy is given by\n%\n\\begin{align}\n\\dv{N}{E} = C E^{-P}\n\\,.\n\\end{align}\n\nSince the energy is given by \\(E = mc^2 \\gamma \\), we can also write \n%\n\\begin{align}\n\\dd{N} = C \\gamma^{-P} \\dd{\\gamma }\n\\,.\n\\end{align}\n\nThen, the emission per unit volume, frequency and time will be \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{V} \\dd{\\omega }} \n&= \\int_{\\gamma_1 }^{\\gamma_2 } \\frac{ \\dd{N}}{ \\dd{\\gamma }} \\frac{ \\dd{w}}{ \\dd{t} \\dd{\\omega }}  \\dd{\\gamma }  \\\\\n&= \\int_{\\gamma_1 }^{\\gamma_2 } c \\gamma^{-P } c_1 F \\qty(\\frac{\\omega }{\\omega _c}) \\dd{\\gamma }\n\\,.\n\\end{align}\n\nRemember that \\(\\omega_c\\) depends on \\(\\gamma \\), as \\(\\omega _c \\propto \\gamma^{2}\\), so we cannot bring it out of the integral. \nWe can write this integral, keeping only the variable parts, as \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{V} \\dd{\\omega }} \n&\\propto \\int_{\\gamma_1 }^{\\gamma_2 } \\gamma^{-P} F \\qty(\\frac{\\omega }{\\omega _c}) \\dd{\\gamma }  \\\\\n&\\propto \\int_{x_1 }^{x_2 } \\omega^{-P / 2} x^{P/2} \\omega^{1/2} \\qty(x)^{3/2} F \\qty(x) \\dd{ x}  \\\\\n&= \\int_{x_1 }^{x_2 } \\omega^{-(P-1)/2} x^{(P-3) / 2} F \\qty(x) \\dd{x}   \\\\\n&= \\omega^{- (P-1 ) / 2} \\int_{x_1 }^{x_2 } x^{(P-3) / 2} F(x) \\dd{x}\n\\,,\n\\end{align}\n%\n\\todo[inline]{in the second equality, should the integration bounds not change?}\nwhere \\(x = \\omega / \\omega _c\\). \nThe integral bounds \\(x_1 \\) and \\(x_2 \\) will depend on \\(\\omega \\) in general --- depending on what the regime of validity of the powerlaw --  but we will brutally approximate \\(x_1 \\sim 0 \\) and \\(x_2 \\sim + \\infty \\).\nIn this case, the integral will not depend on \\(\\omega \\), therefore \n%\n\\begin{align}\n\\frac{ \\dd{w}}{ \\dd{t} \\dd{V} \\dd{\\omega }} \n\\propto \\omega^{- (P-1) / 2}\n\\,.\n\\end{align}\n\nSo, the spectrum of the synchrotron radiation emitted by an electron population whose energies are distributed according to a powerlaw will \\textbf{also be a powerlaw}.\nThe new spectral index is given by \\(S = (P-1) / 2\\). 
\n\n\\end{document}\n", "meta": {"hexsha": "2e29b50ea1fb5ef9dddcd0a0db377a931bb041fa", "size": 11022, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_second_semester/radiative_processes/apr30.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_second_semester/radiative_processes/apr30.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_second_semester/radiative_processes/apr30.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 38.1384083045, "max_line_length": 354, "alphanum_fraction": 0.6429867538, "num_tokens": 3936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.752012562644147, "lm_q2_score": 0.743168019989179, "lm_q1q2_score": 0.5588716871872391}}
{"text": "\\input{../header_function}\r\n\r\n%---------- start document ---------- %\r\n \\section{bigrange -- range-like generator functions}\\linkedzero{bigrange}\r\n%\r\n  \\subsection{count -- count up}\\linkedone{bigrange}{count}\r\n   \\func{count}\r\n   {%\r\n     \\hikiopt{n}{integer}{0}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Count up infinitely from \\param{n} (default to \\(0\\)).\r\n   See \\linklibrary{itertools}.\\linklibraryone{itertools\\#count}{count}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad \\param{n} must be int, long or rational.Integer.\\\\\r\n\r\n  \\subsection{range -- range-like iterator}\\linkedone{bigrange}{range}\r\n   \\func{range}\r\n   {%\r\n     \\hiki{start}{integer},\\ %\r\n     \\hikiopt{stop}{integer}{None},\\ %\r\n     \\hikiopt{step}{integer}{1}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a range-like iterator which generates a finite integer sequence.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad  It can generate more than\r\n   \\linklibrary{sys}.\\linklibraryone{sys\\#maxint}{maxint} elements,\r\n   which is the limitation of the \\linklibraryone{functions\\#range}{range}\r\n   built-in function.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad The argument names do not correspond to their roles, but\r\n   users are familiar with the\r\n   \\linklibraryone{functions\\#range}{range} built-in function of\r\n   \\python and understand the semantics.\r\n   Note that the output is not a list.\\\\\r\n%\r\n\\begin{ex}\r\n>>> range(1, 100, 3) # built-in\r\n[1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31, 34, 37, 40, 43, 46,\r\n 49, 52, 55, 58, 61, 64, 67, 70, 73, 76, 79, 82, 85, 88, 91,\r\n 94, 97]\r\n>>> bigrange.range(1, 100, 3)\r\n<generator object at 0x18f8c8>\r\n\\end{ex}%Don't indent!(indent causes an error.)\r\n\r\n  \\subsection{arithmetic\\_progression -- arithmetic progression iterator}\\linkedone{bigrange}{arithmetic\\_progression}\r\n   \\func{arithmetic\\_progression}\r\n   {%\r\n     \\hiki{init}{integer},\\ %\r\n     \\hiki{difference}{integer}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return an iterator which generates an arithmetic progression\r\n   starting from \\param{init} and \\param{difference} step.\\\\\r\n\r\n  \\subsection{geometric\\_progression -- geometric progression iterator}\\linkedone{bigrange}{geometric\\_progression}\r\n   \\func{geometric\\_progression}\r\n   {%\r\n     \\hiki{init}{integer},\\ %\r\n     \\hiki{ratio}{integer}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return an iterator which generates a geometric progression\r\n   starting from \\param{init} and multiplying \\param{ratio}.\\\\\r\n\r\n  \\subsection{multirange -- multiple range iterator}\\linkedone{bigrange}{multirange}\r\n   \\func{multirange}\r\n   {%\r\n     \\hiki{triples}{list of range triples}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return an iterator over Cartesian product of elements of ranges.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Be cautious that using multirange usually means you are\r\n   trying to do brute force looping.\\\\\r\n   \\spacing\r\n   % input / output\r\n   \\quad The range triples may be 
doubles {\\tt (start, stop)} or single {\\tt (stop,)},\r\n   but they have to be always tuples.\\\\\r\n   \r\n%\r\n\\begin{ex}\r\n>>> bigrange.multirange([(1, 10, 3), (1, 10, 4)])\r\n<generator object at 0x18f968>\r\n>>> list(_)\r\n[(1, 1), (1, 5), (1, 9), (4, 1), (4, 5), (4, 9), (7, 1),\r\n (7, 5), (7, 9)]\r\n\\end{ex}%Don't indent!(indent causes an error.)\r\n\r\n  \\subsection{multirange\\_restrictions -- multiple range iterator with restrictions}\\linkedone{bigrange}{multirange\\_restrictions}\r\n   \\func{multirange\\_restrictions}\r\n   {%\r\n     \\hiki{triples}{list of range triples},\\ %\r\n     \\hiki{**kwds}{keyword arguments}\r\n   }{%\r\n     {\\em iterator}\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad {\\tt multirange\\_restrictions} is an iterator similar to the\r\n   {\\tt multirange} but putting restrictions on each ranges.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad Restrictions are specified by keyword arguments: \\param{ascending},\r\n   \\param{descending}, \\param{strictly\\_ascending} and\r\n   \\param{strictly\\_descending}.\r\n\r\n   A restriction \\param{ascending}, for example, is a sequence that\r\n   specifies the indices where the number emitted by the range should\r\n   be greater than or equal to the number at the previous index.\r\n   Other restrictions \\param{descending}, \\param{strictly\\_ascending}\r\n   and \\param{strictly\\_descending} are similar.  Compare the examples\r\n   below and of \\linkingone{bigrange}{multirange}.\r\n\r\n%\r\n\\begin{ex}\r\n>>> bigrange.multirange_restrictions([(1, 10, 3), (1, 10, 4)], ascending=(1,))\r\n<generator object at 0x18f978>\r\n>>> list(_)\r\n[(1, 1), (1, 5), (1, 9), (4, 5), (4, 9), (7, 9)]\r\n\\end{ex}%Don't indent!(indent causes an error.)\r\n\\C\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "604aa57842963a2672e7fc2d3212323d492a8a32", "size": 4960, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/en/bigrange.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/en/bigrange.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/en/bigrange.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.5135135135, "max_line_length": 131, "alphanum_fraction": 0.6451612903, "num_tokens": 1496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5588716868926151}}
{"text": "\\subsection{The Open Mapping Theorem}\r\nWe want to study complex analytic functions $f:R\\to\\mathbb C$ where $R$ is a Riemann surface.\r\nThe case where $R$ is compact is especially interesting as we can greatly constrain them using the open mapping theorem.\r\n\\begin{theorem}\\label{open_mapping}\r\n    Any non-constant, analytic map of Riemann surfaces $f:R\\to S$ is an open map.\r\n\\end{theorem}\r\n\\begin{proof}\r\n    By the identity principle for Riemann surfaces shows that $f$ is not constant in any open subset.\r\n    Let $W\\subset R$ be open and $p\\in W$.\r\n    Pick charts $(\\phi,U)$ containing $p$ and $(\\psi,V)$ containing $f(p)$, then by the open mapping theorem on complex plane, $\\psi\\circ f(U\\cap W\\cap f^{-1}(V))$ is an open neighbourhood of $\\psi\\circ f(p)$ in $\\psi(f(W)\\cap V)$, so $f(U\\cap W\\cap f^{-1}(V))$ is an open neighbourhood of $f(p)$ in $f(W)$.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    Let $f:R\\to S$ be a non-constant, analytic map of Riemann surfaces.\r\n    If $R$ is compact, then $f$ is surjective and $S$ is also compact.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    By Theorem \\ref{open_mapping}, $f(R)$ is open.\r\n    But $R$ is compact, so $f(R)$ is compact, hence $f(R)$ is also closed as $S$ is Hausdorff.\r\n    But then $S$ is path-connected hence connected, therefore $S=f(R)$ and hence is compact.\r\n\\end{proof}\r\n\\begin{corollary}\r\n    Every analytic function on a compact Riemann surface is constant.\r\n\\end{corollary}\r\n\\begin{proof}\r\n    $\\mathbb C$ is not compact.\r\n\\end{proof}", "meta": {"hexsha": "04b3184495789033a7c9e35f0a1dcc2f1c73ef96", "size": 1492, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5/open.tex", "max_stars_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_stars_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5/open.tex", "max_issues_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_issues_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5/open.tex", "max_forks_repo_name": "david-bai-notes/II-Riemann-Surfaces", "max_forks_repo_head_hexsha": "cbda76f7189c679c4aaccf030b70d310823ead3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.3846153846, "max_line_length": 308, "alphanum_fraction": 0.6849865952, "num_tokens": 451, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124811, "lm_q2_score": 0.7520125682019722, "lm_q1q2_score": 0.5588716827622173}}
{"text": "%!TEX root = ../../template.tex\n\\section{Literature review}%\n\\label{sec:literature_review}\n\n\\subsection{Tomography}%\n\\label{sub:tomography}\n\nTomography is the cross-sectional imaging of an object through the use\nof transmitted or reflected waves, captured by the object exposure to\nthe waves from a set of known angles. It has many different applications\nin science, industry, and most prominently,\nmedicine~\\cite{DeChiffre2014}. Since the invention of the \\gls{CT}\nmachine in 1972, by Hounsfield~\\cite{Gunderman2006}, tomographic imaging\ntechniques have had a revolutionary impact, allowing doctors to see\ninside their patients, without having to subject them to more invasive\nprocedures~\\cite{Kak2001}.\n\nMathematical basis for tomography were set by Johannes Radon in 1917. At\nthe time, he postulated that  it is possible to represent a function\nwritten in $\\mathbb{R}$ in the space of straight lines, $\\mathbb{L}$\nthrough the function's line integrals. A line integral is an integral in\nwhich the function that is being integrated is evaluated along a curved\npath, a line. In the tomographic case, these line integrals represent a\nmeasurement on a ray that traverses the \\gls{ROI}.  Each set of line\nintegrals, characterized by an incidence angle, is called a projection\n(see Figure~\\ref{fig:projection}). To perform a tomographic\nreconstruction, the machine must take many projections around the\nobject. To the set of projections arranged in matrix form by detector\nand projection angle, we call sinogram. All reconstruction methods,\nanalytical and iterative, revolve around going from reality to sinogram\nto image~\\cite{Bruyant2002, Kak2001, Herman1973, Herman1995, Herman2009,\nDefrise2003}.\n\n\\begin{figure}[htpb]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{img/png/projections.png}\n    \\caption{A schematic representation of a projection acquisition. In\n    this image, taken from ~\\cite{Herman2009}, the clear line that comes\n    down at a diagonal angle is a projection.}\n    \\label{fig:projection}\n\\end{figure}\n\nThere are two broad algorithm families when it comes to tomographic\nreconstruction, regarding the physics of the problem. It can involve\neither non-diffracting sources (light travels in straight lines), such\nas the X-Rays in a conventional \\gls{CT} exam; or diffracting sources,\nsuch as micro-waves or ultrasound in more research-oriented\napplications~\\cite{Kak2001}. In this document, I will not address the\nlatter family, since I will not be applying them in my work.\n\nIn any tomographic procedure, the first step is to gather information\nfrom the target object. The first concept one requires for this is to\ndetermine the problem's geometry. There are many different possible\ngeometry, however, there are two that are more important for this\nthesis: parallel and fan-beam geometries. In the parallel case, there\nare as many light sources as there are detectors. Light travels between\nthe source and the detector in straight lines, and the whole set rotates\naround the object's location. Fan-beam geometries are characterized by\nhaving only one light source which rotates around the target object. In\nthis geometry, a set of detectors are placed on the other side of the\nobject, and the lines (rays) between the source and the detectors\ndescribe a fan, thus the name of the technique~\\cite{Herman2009,\nKak2001}.\n\nAs far as algorithms are concerned, there are two main types: analytical\nand iterative. 
The first family includes the most famous algorithm for\nthese applications, the \\gls{FBP}. Iterative algorithms work by\niteratively searching for a solution to the reconstruction equation,\nwhich is basically an underdetermined system of equations (far less\nequations than unknowns).  There are numerous algorithms that work in\nthis way, but in my work, I have identified three that are extensively\nused, both in the field of atmospheric tomography and in medicine:\n\\gls{ART}, \\gls{SART}, and \\gls{MLEM}. The simulator that was developed\nas part of this project uses the last two and \\gls{FBP}. All these\ntechniques are further discussed and presented in\nChapter~\\ref{cha:literature_review}.\n\n\\subsection{\\gls{DOAS}}%\n\\label{sub:doas}\n\nSince the beginning of the 20\\textsuperscript{th} century, scientists\nhave been using spectroscopy to measure reactive trace gases in the\natmosphere, especially ozone. The basis for these applications were set\nby Bouguer, Lambert and Beer, which have separately presented the law\n(Lambert-Beer's) that determines the relationship between light\nextinction and the concentration of an absorber, when it must traverse a\nmedium in which this absorber is present. \\gls{DOAS} is one of the\nmethods that is applied for this purpose. It was developed in 1976, by\nPerner and his colleagues~\\cite{Perner1976}, to detect and quantify the\nhydroxyl radical in the atmosphere. The book by Jochen St\u00fctz and Ulrich\nPlatt~\\cite{Platt2007} is considered by most researchers one of the most\nimportant references in the field and is present in most bibliographies\nof the literature in this subject (it is also one of the main references\nin this thesis). Platt, in particular, has been working with the\ntechnique since its beginning, as one of the elements of Perner's team\nthat published the article about the hydroxyl radical mentioned some\nlines above~\\cite{Perner1976}.\n\nBesides \\gls{DOAS}, Lambert-Beer's law is the basis of many quantitative\nspectroscopy applications. However, most of these techniques are used in\na laboratory context, in which conditions are controlled and very well\nknown. Atmospheric studies do not have this luxury. In the open\natmosphere, there are a number of factors, like Mie and Rayleigh\nscattering, atmospheric turbulence or thermal fluctuations in the\noptical path that make outdoor spectral measurements more complicated.\n\\gls{DOAS} is able to circumvent these difficulties by measuring\ndifferential absorptions, which is to say the difference in absorption\nbetween two different wavelengths~\\cite{Platt2007, Merlaud2013}.\n\nThere are two modalities for \\gls{DOAS} experiments, Active and Passive,\nwhich differ mainly on the use of artificial or natural light sources,\nrespectively. Both methods have their advantages and disadvantages.\nActive systems are more similar to a bench spectroscopy experiment.\nConditions are more controlled (starting with the light source) and\ntherefore, results are usually more reliable and precise, not to mention\nsimpler to reach, since there is no need to account for complex physical\nphenomena, like radiative transfer~\\cite{Platt2007}. However, these\nsystems do require additional material, and many times entire\ninfrastructures have to be built around them~\\cite{Pundt2005}. Passive\nsystems, on the other hand, can be comprised of just a computer, a\nspectrometer and a telescope, making them instrumentally much simpler\nthan their active counterparts. 
This flexibility also comes with the\npossibility to develop new interesting sub-techniques, like\n\\gls{maxdoas}, which allows (for instance) the determination of the\nstratospheric contribution of a certain trace gas (in opposition to its\ntropospheric contribution). The mathematical processing of the acquired\nspectra, nonetheless, is much more complex.\n\n\\subsection{DOAS Tomography}%\n\\label{sub:doas_tomography}\n\n\\gls{DOAS} tomography is a relatively new subject within the realm of\n\\gls{DOAS}. It consists in the application of tomographic methods to\nreconstruct a two-dimensional or three-dimensional \\emph{map} of the\nconcentrations of trace gases in study. The seminal paper that\noriginated this and other remote sensing tomographic techniques was\npublished by Byer and Shepp in 1979~\\cite{Byer1979}. It is not by any\nmeans a very populated literary space. In a systematic review that was\nperformed as part of a course I took during the development of this PhD\nthesis (see Section~\\ref{sec:doas_tomography}), I managed to identify a\ntotal of 13 papers that were clearly about \\gls{DOAS} tomography. In\ndoing this review, I have also found that the largest \\gls{DOAS}\ntomography study was performed in Germany, during the first years of the\n21\\textsuperscript{st} century. This research campaign, called\nBAB-II~\\cite{Laepple2004}, aimed to measure and map traffic-related\nconcentrations for \\gls{no2} in the motorway that goes between\nHeidelberg and Mannheim. A more recent effort that is mention worthy is\nthe paper by the Stutz~\\cite{Stutz2016}, in which the group created a\ntomographic system that was able to perform as a fence  line monitor for\na refinery in Houston, Texas. In between the two studies, and right\nafter BAB-II, Erna Frins published her paper~\\cite{Frins2006}, in which\nshe described the use of a \\gls{maxdoas} system alternately pointed\ntowards sun-illuminated and dark targets in a tomographic manner.\n\nBesides the birds eye view of the literary panorama of the \\gls{DOAS}\ntomography field, this systematic review allowed me to understand two\nimportant gaps in the technology being employed for this research. The\nfirst is that all of the studies that were featured in my review used a\nvery low number of tomographic projections (some dozens of line\nintegrals). This is a problem because it is the single most important\nfactor for the resolution of any tomographic procedure, and although\nresolution is not an absolutely critical factor in atmospheric analysis\n(because the sizes of the target objects - gas plumes - are very large\nand diffuse), it is an important system feature that should be improved.\nMoreover, the second important pattern that was clear from my research\nwas that all but one of the described systems were fixed, and the one\nthat was mobile was composed of a minimum of two spectral acquisition\ndevices and had one of the lowest numbers of projections in all papers.\nThis is a very important gap. 
\\gls{DOAS} tomography has the ability not\nonly to measure but also to map pollutant concentrations and can be an\ninvaluable technique in the fight to understand and track the movement\nof pollutant plumes, but the fact that the available systems require\ndedicated infrastructure to operate and have no mobility at all may be\nthe most important factor leading to the almost non-existing investment\n\\gls{DOAS} tomography systems.\n\nMore than half of the world's population is expected to be living in\ncities by the year 2050, and in the next 10 years we will see an\nincrease in the number of so-called mega cities of more than\n30\\%~\\cite{CABI2019}. This puts a lot of pressure on governmental\nagencies and municipalities (specially in the West) to \"smarten up\"\ntheir urban infrastructures, so that cities can harbour their\ninhabitants with reasonable quality of life for everyone. In fact, a\nrecent report by \\gls{oecd} concluded that in 2010, estimated costs due\nto \\gls{AP} were around 1.7 trillion American dollars, just in\n\\gls{oecd} countries.  The flexibility and mobility that the system I am\ndeveloping brings to the table are two very heavy points in its favour\nas a pollution mapping tool which is unobtainable with the traditional\nmethods, whether \\emph{in-situ} or remote.\n", "meta": {"hexsha": "3eac44d4ebc1d547446c3557d743fcce1d4b013f", "size": 11099, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/introduction/lit_review.tex", "max_stars_repo_name": "ruivalmeida/novathesis", "max_stars_repo_head_hexsha": "ba50f95c3e6e10f5ec3ff4c98cc8bb786246a6ef", "max_stars_repo_licenses": ["LPPL-1.3c"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/introduction/lit_review.tex", "max_issues_repo_name": "ruivalmeida/novathesis", "max_issues_repo_head_hexsha": "ba50f95c3e6e10f5ec3ff4c98cc8bb786246a6ef", "max_issues_repo_licenses": ["LPPL-1.3c"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/introduction/lit_review.tex", "max_forks_repo_name": "ruivalmeida/novathesis", "max_forks_repo_head_hexsha": "ba50f95c3e6e10f5ec3ff4c98cc8bb786246a6ef", "max_forks_repo_licenses": ["LPPL-1.3c"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.4157894737, "max_line_length": 72, "alphanum_fraction": 0.8052076764, "num_tokens": 2645, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.558871682614905}}
{"text": "%The standard model of particle physics is a quantum field theory which describes three of the four known fundamental forces and all known elementary particles.\n%The SM describes the electromagnetic, strong, and weak interactions, but does not provide a description of gravity.\nAs previously mentioned, the standard model of particle physics is a quantum field theory which describes three of the four known fundamental forces: electromagnetic, weak, and strong.\nIn particular, the SM is a gauge field theory, meaning its Lagrangian is invariant under certain local transformations.\nGauge fields are discussed in greater detail in Sec.~\\ref{sec:theory_gauge}.\nIn Sec.~\\ref{sec:theory_qed}, we will see that imposing local gauge invariance on the Dirac Lagrangian gives rise to quantum electrodynamics.\nSec.~\\ref{sec:theory_qcd} describes the strong interaction and Sec.~\\ref{sec:theory_ssbhm} illustrates how spontaneous symmetry breaking and the Higgs mechanism allow for massive gauge fields, enabling us to describe the weak interaction in Sec.~\\ref{sec:theory_ewk}.\n%Sec.~\\ref{sec:theory_qcd} describes the strong interaction and Sec.~\\ref{sec:theory_ewk} describes the electroweak interaction, the unification of the electromagnetic and weak interactions.\n%Finally, in Sec.~\\ref{sec:theory_ssbhm} we will see how the Higgs mechanism allows for massive gauge fields and subsequently generates the masses of the gauge bosons and fermions.\n\n\\subsection{Gauge Fields} \\label{sec:theory_gauge}\nFor an arbitrary Lagrangian made of a single field variable $\\psi$, suppose we impose that its resulting field equations be invariant under the local phase transformation\n\\begin{equation} \\label{eqn:local_phase}\n    \\psi \\to e^{i q \\theta(x)} \\psi.\n\\end{equation}\nThis is deemed a ``local'' phase transformation as the phase $\\theta$ is be a function of $x^\\mu$.\nIn the case that $\\theta$ is a constant, we deem this a ``global'' phase transformation.\nA Lagrangian that is invariant under the transformation in Eqn.~\\ref{eqn:local_phase} is said to be \\emph{gauge invariant}.\nMore generally, theories which are invariant under gauge transformations are called \\emph{gauge theories}.\nAs Sec.~\\ref{sec:theory_qed} will show, quantum electrodynamics is an abelian gauge theory under the symmetry group U(1), with a single gauge field.\nThe SM as a whole is a non-abelian gauge theory under the symmetry group U(1) $\\times$ SU(2) $\\times$ SU(3), with a total of twelve gauge fields corresponding to the spin-1 bosons: the photon, the three massive weak bosons, and the eight gluons.\n\nGauge theories are particularly attractive from a theoretical standpoint for several reasons.\nFirst, demanding gauge invariance seems reasonable a priori -- the transformation in Eqn.~\\ref{eqn:local_phase} is simply a change in the coordinate system we use to define the field $\\psi$, and the physics of the universe should be independent of the particular choice of coordinates we use to describe it.\nSecond, gauge theories have been proven to be renormalizable~\\cite{tHooft:1971qjg}, also a reasonable requirement for a theory we hope will describe the universe.\n\nRenormalization refers to the technique by which a quantum field theory is ``cut off'' above some very high energy scale $\\Lambda$, above which the theory is assumed to no longer be valid.\nIn general, this is motivated by the presence of infinities in perturbative calculations of decay rates and cross sections.\nRather than assume these 
infinities render the Lagrangian a useless description of our universe, renormalization serves as a way of quantitatively applying the qualitative statement that the Lagrangian is a low-energy approximation of a more fundamental theory.\nBy formalizing the idea that the theory is only valid up to a certain energy scale, we are able to avoid the presence of infinities in the calculation of decay rates and cross sections.\n\n\\subsection{Quantum Electrodynamics} \\label{sec:theory_qed}\nStarting with the Dirac Lagrangian (Eqn.~\\ref{eqn:dirac_lagrangian}), suppose we impose that its resulting field equations must be invariant under a local phase transformation (as given by Eqn.~\\ref{eqn:local_phase}).\nInitially, the Dirac Lagrangian is not invariant under the local phase transformation, as an extra term from the derivative of $\\theta$ appears:\n\\begin{equation} \\label{eqn:ld_var}\n    \\mathcal L \\to \\mathcal L - q (\\partial_\\mu \\theta)\\bar{\\psi} \\gamma^\\mu \\psi.\n\\end{equation}\nThe situation can be remedied with the introduction of a vector field $A_\\mu$ which transforms under Eqn.~\\ref{eqn:local_phase} as\n\\begin{equation} \\label{eqn:local_phase_field}\n    A_\\mu \\to A_\\mu + \\partial_\\mu \\theta.\n\\end{equation}\nThe resulting Lagrangian,\n\\begin{equation} \\label{eqn:ld_inv}\n    \\mathcal L = \\mathcal L_{\\text{Dirac}} + q \\bar{\\psi} \\gamma^\\mu \\psi A_\\mu\n\\end{equation}\nis gauge invariant as the second term in Eqn.~\\ref{eqn:ld_inv} cancels exactly with the additional term from the transformation to the field $A_\\mu$ in Eqn.~\\ref{eqn:local_phase_field}.\n\nFrequently this additional field is absorbed into the definition of a \\emph{covariant derivative} \n\\begin{equation}\n    \\mathcal D_\\mu \\equiv \\partial_\\mu + i q A_\\mu\n\\end{equation}\nwhich replaces the original definition of the derivative, and the resulting field equations are then invariant under local phase transformations, as desired.\n\nThe vector field $A_\\mu$ which has been added to the Lagrangian implies the existence of an associated spin-1 particle.\nIn principle, we must also include a free term for the field $A_\\mu$: it is natural to start with the Proca Lagrangian (Eqn.~\\ref{eqn:proca_lagrangian}) which describes the dynamics of free spin-1 particles.\nIt can be shown~\\cite{Griffiths:2008zz} that the mass term in the Proca Lagrangian is not invariant under Eqn.~\\ref{eqn:local_phase_field}: this can be interpreted as a requirement that this new vector field $A_\\mu$ must be massless.\nThe full Lagrangian becomes\n\\begin{align}\n    \\mathcal L_{\\text{QED}} &= \\bar{\\psi}(i \\gamma_\\mu \\partial_\\mu - m) \\psi - \\frac{1}{4} F^{\\mu\\nu}F_{\\mu\\nu} + q (\\bar{\\psi}\\gamma^\\mu \\psi)A_\\mu, \\\\\n                            &= \\bar{\\psi}(i \\gamma_\\mu \\mathcal D_\\mu - m)\\psi - \\frac{1}{4} F^{\\mu\\nu}F_{\\mu\\nu}\n\\end{align}\nwith $F^{\\mu\\nu} = \\partial^\\mu A^\\nu - \\partial^\\nu A^\\mu$, which can be identified as the Lagrangian for quantum electrodynamics.\nThe field $A_\\mu$ is associated with the photon, the constant $q$ with the charge of the electron, the tensor $F^{\\mu\\nu}$ with the electromagnetic field strength, and interactions between photons and electrons with the trilinear term $q (\\bar{\\psi}\\gamma^\\mu \\psi)A_\\mu$.\n\nIt is instructive to reflect on the implications of demanding gauge invariance: we started with a spinor field characterized by the Dirac Lagrangian, as would be natural to do in attempting to describe the 
behavior of electrons.\nNext, by simply requiring that the equations of motion be invariant under changes in the coordinate system used to describe the field (i.e. demanding gauge invariance), we see that there must be an accompanying massless vector field which interacts with the spinor field, which we identify as the photon field.\nThis is truly remarkable: we have inferred the existence of photons just by demanding that the behavior of electrons be independent of the choice of coordinate system used to describe their field.\n\n\\subsection{Quantum Chromodynamics} \\label{sec:theory_qcd}\nAs Sec.~\\ref{sec:pp_parton_model} details, inelastic scattering experiments in the 1960s gave strong evidence of the composite nature of protons.\nZweig~\\cite{Zweig:1964jf} and Gell-Mann~\\cite{GellMann:1964nj} independently proposed a quark model to describe the composite nature, which initially implied that quarks violate the spin-statistics theorem.\nThe remedy to this came in the proposal~\\cite{Greenberg:1964pe} that each quark comes in three different \\emph{colors}.\nMore formally, this is the statement that quarks are assigned to the fundamental representation $SU(3)$, giving rise to a quantum number which has three states which we (arbitrarily) call \\emph{red}, \\emph{green}, and \\emph{blue}.\n\nIn attempting to construct the Lagrangian for quantum chromodynamics (QCD), which describes the strong interaction of quarks, we can again begin with the free Dirac Lagrangian for spin-1/2 particles (Eqn.~\\ref{eqn:dirac_lagrangian}).\nGiven that we have three distinct colors of each quark, the free Lagrangian for a particular flavor is actually a sum of three free Dirac Lagrangians.\nThis is simplified with the notation\n\\begin{equation}\n    \\psi = \\begin{bmatrix}\n        \\psi_r \\\\\n        \\psi_b \\\\\n        \\psi_g \\\\\n    \\end{bmatrix},\n    \\quad\\bar{\\psi} = [\\bar{\\psi}_r ~\\bar{\\psi}_b ~\\bar{\\psi}_g]\n\\end{equation}\nin which the spinor $\\psi$ from the original Dirac Lagrangian has now been promoted to a three-component column vector.\nThe single-particle Dirac Lagrangian is invariant under global phase transformations; in other words, it has $U(1)$ invariance.\nSimilarly, the three-particle Dirac Lagrangian has $U(3)$ invariance:\n\\begin{equation} \\label{eqn:u3_global}\n    \\psi \\to U \\psi, \\quad \\bar{\\psi} \\to \\bar{\\psi}U^\\dagger\n\\end{equation}\nwith $U$ any unitary $3 \\times 3$ matrix\\footnote{A matrix $U$ is said to be \\emph{unitary} if $U^\\dagger U = 1$}.\nWhereas in the case of $U(1)$ symmetry, the invariance has the simple interpretation of a phase, the picture is more subtle for $U(3)$.\nIt can be shown~\\cite{Griffiths:2008zz} that any unitary matrix can be written in the form\n\\begin{equation}\n    U  = e^{i \\theta} e^{i \\bf{\\lambda} \\cdot \\bf{a}}\n\\end{equation}\nwith\n\\begin{equation}\n    \\bf{\\lambda} \\cdot \\bf{a} = \\sum_{i=1}^8 \\lambda_i a_i\n\\end{equation}\nand the matrices $a_i$ identified with the eight Gell-Mann matrices which are the generators of the group $SU(3)$.\nFollowing the development of the QED Lagrangian, we again impose the requirement that the Lagrangian not just be invariant under global transformations as described by Eqn.~\\ref{eqn:u3_global}, but also local transformations.\nIn other words, we want $\\mathcal L$ to be invariant under local $SU(3)$ gauge transformations:\n\\begin{equation}\n    \\psi \\to S \\psi, \\qquad S \\equiv e^{-ig\\bf{\\lambda} \\cdot \\phi(x)} \\text{ and } \\phi \\equiv - 
\\frac{1}{g_s} \\bf{a}\n\\end{equation}\nAs in the case of QED, this can be accomplished through the definition of a covariant derivative\n\\begin{equation}\n    \\mathcal D_\\mu \\equiv \\partial_\\mu + i g_s \\bf{\\lambda} \\cdot \\bf{a},\n\\end{equation}\nresulting in the following Lagrangian which is now invariant under local gauge transformations:\n\\begin{equation}\n    \\mathcal L = \\bar{\\psi}(i \\gamma_\\mu \\mathcal D_\\mu - m) \\psi.\n\\end{equation}\nThis time, we have introduced eight gauge fields $\\bf{A}^\\mu$, corresponding to the eight gluons.\n\nFinally, we must account for the free gluon field.\nAs before, the mass terms are excluded because they violate local gauge invariance.\nHowever, the field strength tensor for QED, $F^{\\mu\\nu} = \\partial^\\mu A^\\nu - \\partial^\\nu A^\\mu$, cannot be directly generalized to QCD due to the fact that transformations of $SU(3)$ are non-Abelian.\nAn additional term is required to restore local gauge invariance, resulting in the QCD field strength tensor\n\\begin{equation}\n    F_{\\mu \\nu}^a = \\partial_\\mu A_\\nu^a - \\partial_\\nu A_\\mu^a - g_s f^{abc} A_\\mu^b A_\\mu^c.\n\\end{equation}\nThe term $f^{abc}$ corresponds to the SU(3) structure constants and are defined by the commutation relation $[\\lambda_a, \\lambda_b] = i f^{abc} \\lambda_c$.\nThis has no analog in QED; it allows for self-interaction of gluons.\nThe full QCD Lagrangian is then given by\n\\begin{equation} \\label{eqn:qcd_lagrangian}\n    \\mathcal L_{\\text{QCD}} = \\bigg(\\sum_f \\bar{\\psi}_f (i \\gamma_\\mu \\mathcal D_\\mu - m_f) \\psi_f \\bigg) - \\frac{1}{4} F_{\\mu \\nu}^a F^{a \\mu \\nu},\n\\end{equation}\nwhere the sum over $f$ corresponds to the different flavors of quarks, of which six have been experimentally observed.\n\nUnlike in QED, in which the magnitude of the force associated with the free photon field \\emph{decreases} with distance, the magnitude of the strong force associated with the free gluon field \\emph{increases} as a function of distance.\nAs a result, particles possessing color charge cannot exist as free particles and are instead confined to bound states of multiple particles which must always be colorless.\n\n\\subsection{Spontaneous Symmetry Breaking \\& The Higgs Mechanism} \\label{sec:theory_ssbhm} \nWe were able to derive the Lagrangians for the electromagnetic and strong interactions by starting with the Dirac Lagrangian describing free spin-1/2 particles and imposing the principle of local gauge invariance.\nThis involved the introduction of additional vector fields, with which we are able to associate the mediators of each force: the photon (for QED) and the eight gluons (for QCD).\nThe fact that the mass term in the Proca Lagrangian is not locally gauge invariant implies that the mediators must be massless; conveniently, photons and gluons are indeed observed to be massless.\nGiven the success of the method of imposing local gauge invariance for deriving the Lagrangians for the electromagnetic and strong interactions, it is natural to extend the method to the weak interaction.\nAn immediate challenge, however, is the fact that the mediators of the weak interaction, the $W$ and $Z$ bosons, are not massless.\nLocal gauge invariance can still be applied to the weak interaction, but it requires reinterpreting the original field variables in a form that allows us to expand about their ground state.\nBy doing so, we find that symmetries in the original Lagrangian are broken because of the fact that the ground state does not 
share the symmetry of the original Lagrangian.\nThis allows for locally invariant massive gauge fields, and as a consequence, implies the presence of a massive scalar particle, which we will identify with the Higgs boson.\n\nSpontaneous symmetry breaking and the Higgs mechanism can be illustrated through a toy Lagrangian composed of a single complex field:\n\\begin{equation} \\label{eqn:spon_orig}\n    \\mathcal L = \\frac{1}{2} (\\partial_\\mu \\phi)^*(\\partial^\\mu \\phi) + \\frac{1}{2} \\mu^2 (\\phi^* \\phi) - \\frac{1}{4} \\lambda^2 (\\phi^* \\phi)^2, \\qquad \\phi \\equiv \\phi_1 + i \\phi_2\n\\end{equation}\nIn this Lagrangian, the mass term $(1/2) \\mu^2 (\\phi^* \\phi)$ appears to have the wrong sign: naively, a positive coefficient implies that the particle associated with the $\\phi$ field has an imaginary mass.\nPhysically, this does not make sense.\nThe subtlety lies in the fact that the Feynman calculus is a perturbative procedure, and must be performed by expanding about a system's ground state.\nInterpreting $\\frac{1}{2} \\mu^2 (\\phi^* \\phi) - \\frac{1}{4} \\lambda^2 (\\phi^* \\phi)^2$ as the \\emph{potential} term in the Lagrangian, we can expand about its minimum and apply the Feynman calculus.\nIn contrast to previously considered fields, the minimum does not occur at $\\phi_1 = \\phi_2 = 0$, but rather is defined by the circle\n\\begin{equation} \\label{eqn:spon_gs}\n    \\phi_{1_{\\text{min}}}^2 + \\phi_{2_{\\text{min}}}^2 = \\frac{\\mu^2}{\\lambda^2}.\n\\end{equation}\nChoosing $\\phi_{1_{\\text{min}}} = \\mu/\\lambda$ and $\\phi_{2_{\\text{min}}} = 0$, let us next rewrite the Lagrangian in terms of fields which can be treated as fluctuations about the vacuum state, defining\n\\begin{equation} \\label{eqn:spon_gs}\n    \\eta \\equiv \\phi_1 - \\frac{\\mu}{\\lambda}, \\qquad \\xi \\equiv \\phi_2.\n\\end{equation}\nIn terms of these new fields, the Lagrangian becomes\n\\begin{equation} \\label{eqn:spon_transf}\n    \\mathcal L = \\frac{1}{2} (\\partial_\\mu \\eta)(\\partial^\\mu \\eta) - \\mu^2 \\eta^2 + \\frac{1}{2} (\\partial_\\mu \\xi)(\\partial^\\mu \\xi) - \\mu \\lambda(\\eta^3 + \\eta \\xi^2) - \\frac{\\lambda^2}{4}(\\eta^4 + \\xi^4 + 2 \\eta^2 \\xi^2) + \\frac{\\mu^4}{4 \\lambda^2}.\n\\end{equation}\nThe original Lagrangian (Eqn.~\\ref{eqn:spon_orig}) was invariant under rotations in $\\phi_1, \\phi_2$ space\\footnote{More precisely, the original Lagrangian is invariant under SO(2)}; however, this rotational symmetry is no longer manifest in the $\\eta, \\xi$ space.\nThe continuous SO(2) symmetry has been broken by the choice of a particular ground state.\nThe particular ground state we chose, $\\phi_{1_{\\text{min}}} = \\mu/\\lambda$ and $\\phi_{2_{\\text{min}}} = 0$, is arbitrary: the system could just as easily choose any other ground state which satisfies Eqn.~\\ref{eqn:spon_gs}.\nFor this reason, we say that the symmetry has been \\emph{spontaneously} broken.\n\nExamining Eqn.~\\ref{eqn:spon_transf}, we can identify that the particle associated with the $\\eta$ field has mass $m_\\eta = \\sqrt{2} \\mu$ and that the particle associated with the $\\xi$ field is massless.\nIn fact, Goldstone's theorem~\\cite{Goldstone:1962es} shows that the spontaneous breaking of a continuous global symmetry is associated with one or more massless scalar particles, referred to as \\emph{Goldstone bosons}.\n\nNext, let us impose the condition of local gauge invariance on the original Lagrangian, Eqn.~\\ref{eqn:spon_orig}, demanding that it be invariant under transformations 
of the form $\\phi \\to e^{i \\theta(x)} \\phi$.\nAs before, we can introduce a massless gauge field $A^\\mu$ and replace derivatives with covariant derivatives to satisfy local gauge invariance.\nThe Lagrangian becomes\n\\begin{multline} \\label{eqn:spon_full}\n    \\mathcal L = \\frac{1}{2} (\\partial_\\mu \\eta)(\\partial^\\mu \\eta) - \\mu^2 \\eta^2 + \\frac{1}{2} (\\partial_\\mu \\xi)(\\partial^\\mu \\xi) \n    - \\frac{1}{4} F^{\\mu\\nu} F_{\\mu\\nu} + \\frac{1}{2} \\bigg(\\frac{q\\mu}{\\lambda}\\bigg)^2 A_\\mu A^\\mu \\\\\n    + q \\Big[ \\eta(\\partial_\\mu \\xi) - \\xi (\\partial_\\mu \\eta) \\Big] A^\\mu + q^2 \\frac{\\mu}{\\lambda} \\eta (A_\\mu A^\\mu)\n    + \\frac{1}{2} q^2 (\\xi^2 + \\eta^2)(A_\\mu A^\\mu) \\\\\n    - \\mu \\lambda(\\eta^3 + \\eta \\xi^2) - \\frac{\\lambda^2}{4}(\\eta^4 + \\xi^4 + 2 \\eta^2 \\xi^2) \n    + \\frac{\\mu}{\\lambda}q (\\partial_\\mu \\xi) A^\\mu\n    + \\frac{\\mu^4}{4 \\lambda^2}.\n\\end{multline}\nThe gauge field $A^\\mu$ that we introduced to impose local gauge invariance now has a quadratic term $(1/2) (q\\mu)\\lambda)^2 A_\\mu A^\\mu$, which we can associate with a \\emph{massive} gauge boson.\nA mass term associated with the gauge field $A^\\mu$ has appeared because of the fact that we have rewritten the Lagrangian in a form that allows us to expand about its ground state.\nIn terms of the original $\\phi_1$ and $\\phi_2$ fields, no mass term for $A^\\mu$ appears, but once a ground state has been selected (transforming to $\\eta$ and $\\xi$ fields) the gauge boson associated with $A^\\mu$ acquires mass: spontaneous symmetry breaking generates masses for gauge bosons.\n\nThe Lagrangian of Eqn.~\\ref{eqn:spon_full} still presents some difficulties in its physical interpretation.\nThere is a bilinear term proportional to $(\\partial_\\mu \\xi)A^\\mu$ which we would interpret as allowing for a $\\xi$ particle to suddenly become an $A^\\mu$ gauge boson.\nThis implies that we have not yet fully cast the Lagrangian in a form that makes its physical interpretation apparent and can be solved by choosing a particular gauge.\nThe Lagrangian of Eqn.~\\ref{eqn:spon_orig} is invariant under global U(1) phase transformations $\\phi \\to e^{i\\theta} \\phi$.\nIf we choose $\\theta = - \\tan^{-1}(\\phi_2/\\phi_1)$, the transformed field $\\phi'$ is real ($\\phi'_2 = 0$), implying that $\\xi = 0$: the problematic bilinear term has been eliminated by the choice of gauge.\n\nWe have shown that a gauge boson can acquire mass through the spontaneous breaking of a continuous global symmetry (the SO(2) symmetry of the complex scalar field $\\phi$).\nWith a proper choice of gauge, we identify a real scalar field $\\eta$ and a massive scalar particle associated with this field.\nThis process by which gauge bosons can acquire mass is known as the \\emph{Higgs mechanism}~\\cite{Higgs:1964pj,Englert:1964et} and the massive scalar is known as a Higgs boson.\n\n\\subsection{Electroweak Interactions} \\label{sec:theory_ewk}\nThe weak and electromagnetic interactions can be unified in a single electroweak interaction, originally developed by Glashow, Weinberg, and Salam~\\cite{Glashow:1959wxa,Weinberg:1967tq,Salam:1968rm}.\nThe GWS theory of weak interactions begins with an $SU(2) \\otimes U(1)$ gauge symmetry.\nThe symmetry is broken spontaneously through the introduction of a scalar field, leading to the generation of masses for the gauge bosons of the $SU(2)$ component and leaving the gauge boson of the $U(1)$ symmetry massless.\nThe former will be identified as the 
massive vector gauge bosons, the $W^\\pm$ and the $Z$, while the latter will be identified as the massless photon.\nThe particle associated with the scalar field responsible for the spontaneous symmetry breaking will be identified as the Higgs boson.\n\nAs before, we demand that the Lagrangian be invariant under local gauge transformations, this time of the form $\\phi \\to e^{i \\alpha^a \\tau^a} e^{i \\beta/2} \\phi$ and define a covariant derivative for $\\phi$:\n\\begin{equation}\n    D_\\mu \\phi = (\\partial_\\mu - i g A^a_\\mu \\tau^a - \\frac{i}{2} g' B_\\mu) \\phi, \\qquad \\tau^a = \\sigma^a/2\n\\end{equation}\nwhere $\\sigma^a$ are the Pauli spin matrices, $A^a_\\mu$ corresponds to the $SU(2)$ gauge bosons and $B_\\mu$ corresponds to the $U(1)$ gauge boson.\nWith a quartic potential for the scalar field $\\phi$, as in the example of Sec.~\\ref{sec:theory_ssbhm}, the field has a minimum defined by a circle in the $\\phi_1, \\phi_2$ plane (with $\\phi = \\phi_1 + i \\phi_2$).\nThe original $SO(2)$ symmetry of $\\phi$ will be spontaneously broken when a particular ground state along this circle is chosen.\nAssuming a ground state of $\\phi_1 = (1/\\sqrt{2})v, \\phi_2 = 0$, choosing the gauge $\\alpha^1 = \\alpha^2 = 0,~\\alpha^3 = \\beta$, and rewriting the Lagrangian about this field configuration leads to the generation of masses for the bosons of the $A^a_\\mu$ field and leaves the boson of the $B_\\mu$ field massless.\nExpressing the Lagrangian in terms of the ground state, rather than the original field $\\phi$, we find that the following terms appear:\n\\begin{equation}\n    \\Delta \\mathcal L = \\frac{1}{2} \\frac{v^2}{4} \\bigg[ g^2 (A^1_\\mu)^2 + g^2 (A^2_\\mu)^2 + (g' B_\\mu - g A^3_\\mu)^2 \\bigg].\n\\end{equation}\nAgain we see that rewriting the Lagrangian so that it can be expanded about its spontaneously chosen ground state breaks the original symmetry and gives rise to mass terms for the $SU(2)$ gauge field. 
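To make the mass content explicit, the quadratic form above may be rewritten (a short check, using the same normalization conventions as this section) as\n\\begin{equation}\n    \\Delta \\mathcal L = \\frac{1}{2} \\bigg(\\frac{gv}{2}\\bigg)^2 \\Big[ (A^1_\\mu)^2 + (A^2_\\mu)^2 \\Big] + \\frac{1}{2} \\bigg(\\frac{v\\sqrt{g^2 + g'^2}}{2}\\bigg)^2 \\bigg(\\frac{g A^3_\\mu - g' B_\\mu}{\\sqrt{g^2 + g'^2}}\\bigg)^2,\n\\end{equation}\nso the normalized combination $(g A^3_\\mu - g' B_\\mu)/\\sqrt{g^2 + g'^2}$ acquires a mass while the orthogonal combination $(g' A^3_\\mu + g B_\\mu)/\\sqrt{g^2 + g'^2}$ does not; these are precisely the mass eigenstates identified below.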
\n\nThe original fields can be expressed in terms of their mass eigenstates, which will make the physical interpretation of this theory more transparent.\nIt can be shown~\\cite{Peskin:1995ev} that these eigenstates are\n\\begin{align}\n    W^\\pm_\\mu &= \\frac{1}{\\sqrt{2}} (A^1_\\mu \\mp i A^2_\\mu) \\\\\n    Z^0_\\mu &= \\frac{1}{\\sqrt{g^2 + g'^2}}(g A^3_\\mu - g' B_\\mu) \\\\\n    A_\\mu &= \\frac{1}{\\sqrt{g^2 + g'^2}}(g'A_\\mu^3 + g B_\\mu)\n\\end{align}\n\nThe $W^\\pm_\\mu$ field has vector bosons of mass $m_W = gv/2$, the $Z^0_\\mu$ field has a vector boson of mass $m_Z = \\sqrt{g^2 + g'^2} v/2$, and the $A_\\mu$ field remains massless.\nThe covariant derivative can be rewritten in terms of the mass eigenstates and the \\emph{weak mixing angle}, $\\theta_w$, defined as\n\\begin{equation}\n    \\begin{pmatrix} Z^0 \\\\ A \\end{pmatrix} =\n    \\begin{pmatrix} \\cos \\theta_w & -\\sin \\theta_w \\\\ \\sin \\theta_w & \\cos \\theta_w \\end{pmatrix} \\begin{pmatrix} A^3 \\\\ B \\end{pmatrix}.\n\\end{equation}\nAs the field $A_\\mu$ will be identified as the electromagnetic vector potential, it is helpful to define $e$, which will be identified as the electron charge\n\\begin{equation}\n    e = \\frac{gg'}{\\sqrt{g^2 + g'^2}}.\n\\end{equation}\nIn this notation the covariant derivative becomes\n\\begin{equation}\n    D_\\mu \\phi = \\bigg[ \\partial_\\mu - i \\frac{g}{\\sqrt{2}} (W^+_\\mu T^+ + W_\\mu^- T^-) - i \\frac{g}{\\cos \\theta_w} Z_\\mu (T^3 - \\sin^2 \\theta_w Q) - ieA_\\mu Q \\bigg] \\phi.\n\\end{equation}\n\nWe can next couple the $SU(2) \\otimes U(1)$ gauge fields of the electroweak interaction to the leptons and quarks.\nAs the W boson couples only to the left-handed helicity states of leptons and quarks, it is helpful to decompose the kinetic energy term for fermions into left- and right-handed components:\n\\begin{equation}\n    \\bar{\\psi} i \\gamma^\\mu \\partial_\\mu \\psi = \\bar{\\psi}_L i \\gamma^\\mu \\partial_\\mu \\psi_L + \\bar{\\psi}_R i \\gamma^\\mu \\partial_\\mu \\psi_R,\n\\end{equation}\nso that $\\psi_L$ and $\\psi_R$ can couple differently to the gauge fields.\n\nBecause the left- and right-handed components of fermion fields couple differently to the gauge fields, they have different quantum numbers, and consequently simple mass terms are forbidden by gauge invariance.\nExperimentally, we know that the fermions are not massless, so this poses a problem.\nAgain, spontaneous symmetry breaking can remedy this and allow for fermions to acquire mass.\n\nAssuming the scalar field $\\phi$ undergoes spontaneous symmetry breaking (``acquires a vacuum expectation value''), we can add terms to the Lagrangian which describe interactions between $\\phi$ and left- and right-handed components of fermions.\nFor example, for the electrons:\n\\begin{equation}\n\\Delta \\mathcal L_e = - \\lambda_e \\bar{E}_L \\cdot \\phi e_R + \\text{ h.c.}, \\qquad E_L = \\begin{pmatrix} \\nu_e \\\\ e^- \\end{pmatrix} \n\\end{equation}\nAssuming the same ground state as before, $\\phi = (0 \\quad v/\\sqrt{2})$, we obtain a mass term for the electron:\n\\begin{equation}\n    \\Delta \\mathcal L_e = \\frac{-1}{\\sqrt{2}} \\lambda_e v \\bar{e}_L e_R + \\text{ h.c.},\n\\end{equation}\nwith this interaction referred to as the Yukawa term and $\\lambda_e$ as the Yukawa coupling.\nYukawa terms for the other leptons and the quarks can be obtained in a similar fashion, such that the mass of any fermion is given by\n\\begin{equation}\n    m_f = \\frac{1}{\\sqrt{2}} \\lambda_f v.\n\\end{equation}\nNeither the 
magnitude of the Yukawa couplings, $\\lambda_f$, nor the vacuum expectation value, $v$, is known a priori; both must be measured experimentally.\n\nOne final subtlety of the electroweak interaction is that while the W boson couples to leptons only of the same generation, it can couple to quarks from different generations.\nThis mixing is a result of the fact that the mass eigenstates of quarks are different from the weak isospin eigenstates.\nThe mixing of these eigenstates is described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix~\\cite{Cabibbo:1963yz,Kobayashi:1973fv}:\n\\begin{equation}\nV_{\\text{CKM}} = \\begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\\\ V_{cd} & V_{cs} & V_{cb} \\\\ V_{td} & V_{ts} & V_{tb} \\end{pmatrix}.\n\\end{equation}\nThe CKM matrix is experimentally observed to be nearly diagonal, which has the physical consequence that W-mediated interactions between quarks of different generations are much weaker than those between quarks of the same generation.\n\n\\subsection{Shortcomings of the Standard Model} \\label{sec:theory_sm_problems}\nThough the SM has proved to be compatible with nature in nearly every experimental test, it is known to be an incomplete theory.\nSome of the most glaring shortcomings of the SM are summarized in this section, but this is by no means an exhaustive list.\n\nFirst, the SM does not provide a description of gravity, and is (so far) incompatible with the theory of \\emph{general relativity}, currently the most successful theory of gravity.\nSecond, the SM cannot explain dark matter and dark energy, with strong evidence for dark matter given by the inconsistency of galactic rotation curves with the amount of visible matter~\\cite{Rubin:1980zd}.\nThird, the SM does not explain the observed dominance of matter over antimatter in the universe~\\cite{Dine:2003ax} -- CP violation (which is allowed and observed in the SM) can account for some of this asymmetry, but not nearly enough to explain the observed value.\nFinally, the SM is known to be self-inconsistent at very high energies, with the electromagnetic coupling and the Higgs self-coupling both diverging at arbitrarily high energies~\\cite{Gockeler:1997dn,Chivukula:1999az}.\nThis issue, called \\emph{quantum triviality}, implies that the SM is only valid up to some finite energy scale.\n\nGiven the abundance of issues with the SM, it is widely believed that the SM is a low-energy approximation of some more fundamental theory.\nWith the stipulation that the SM is fundamentally an incorrect description of the universe, it is natural to search for the ways in which the SM does \\emph{not} correctly describe our universe.\nIn this paradigm, the experimental success of the SM is puzzling -- if the SM is wrong, why does every experimental test agree with its predictions?\nOne of the more promising approaches is to test the SM at increasingly high energy scales, as is done through analysis of the 13 TeV center-of-mass collisions at the Large Hadron Collider (LHC).\nAmong the countless measurements which can be made with the datasets from the LHC's numerous experiments, measuring the properties of the Higgs boson is a promising avenue to search for departures from the SM predictions given the Higgs boson's unique role in mass generation and electroweak symmetry breaking.\n", "meta": {"hexsha": "3815be467b5806269a397ef8bc53f026dcfe351b", "size": 29096, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/sm.tex", "max_stars_repo_name": "sam-may/phd_thesis", 
"max_stars_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/sm.tex", "max_issues_repo_name": "sam-may/phd_thesis", "max_issues_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/sm.tex", "max_forks_repo_name": "sam-may/phd_thesis", "max_forks_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 98.9659863946, "max_line_length": 312, "alphanum_fraction": 0.7538149574, "num_tokens": 8069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8723473813156294, "lm_q2_score": 0.640635861701035, "lm_q1q2_score": 0.5588570163317796}}
{"text": "\\chapter{Introduction}\n\n\n\\epigraph{A complex piece of software consists of simple parts and simple\nrelations between those parts; the programmer's task is to state those\nparts and those relationships, in whatever order is best for human\ncomprehension\\dots%---not in some rigidly determined order like top-down or\n%bottom-up.\n}{``Literate Programming'' (1984)\\\\ % section J\n\\textsc{Donald Knuth}} % The Computer Journal, vol. 27\n\n\n\\section{Structure of a General Circulation Model}\n\nA simple general circulation model consists of a ``dynamical core'':\nmodel the atmosphere as a Navier-Stokes fluid, the Earth as a rotating\nsphere. This consists of three equations for the conservation of\nmomenta, another equation for the conservation of mass (for atmosphere),\nand some thermodynamic equation of state for describing pressure as\nrelated to density. Atmospheric general circulation models add equations\nof radiative transfer to the dynamic core, and some physical\nparametrizations for subgrid details.\n\nWe will first examine the atmosphere as a fluid. Since the atmosphere is\n``so shallow'' and ``so slow'', we can make various approximations to\nthe Navier-Stokes equations to make it easier to solve.\nBut we will have six unknowns (the velocity vector of the atmosphere,\nits density, pressure, and temperature) and only four equations from\nfluid mechanics alone.\n\nFortunately, the atmosphere is amazingly close to an ideal gas. So we\ncan draw upon a rich thermodynamical well. This will be done in the\nsecond section.\n\nWe can add more realism to our model than ``just atmosphere''. Doing so\nrequires care when structuring the code. The usual architecture has a\nseparate module for each model component, which interact with each other\nvia a ``coupler'' pattern --- a specific maestro which coordinates the\ntime-stepping.\n\nSome discussion of radiative transfer will be given, where sunlight and\nheat emitted from the Earth affect atmospheric temperatures. This is\nwhere more than 90\\% of CPU cycles are spent in climate models.\n\n\\section{Fluid Mechanics of Atmosphere}\n\n\\subsection{Navier-Stokes Equations}\n\nThe conservation of momenta amounts to Newton's second Law\n\\begin{equation}\n  \\frac{\\materialD\\vec{u}}{\\materialD t}\n  =\\frac{-1}{\\rho}\\nabla p + \\nu\\nabla^{2}\\vec{u} + \\vec{F}\n\\end{equation}\nwhere $\\vec{u}$ is the velocity vector field for the atmosphere, $\\rho$\nits density, $p$ pressure, $\\nu$ kinematics viscosity, and $\\vec{F}$\nexternal forces like gravity. Here we use the material derivative\n\\begin{equation}\n  \\frac{\\materialD f}{\\materialD t} = \\partial_{t}f + (\\vec{u}\\cdot\\grad)f\n\\end{equation}\nwhich should match the intuition of, if $\\vec{x}(t)$ is the position of\na parcel of atmospheric fluid at time $t$, taking the total time\nderivative of $f(\\vec{x}(t))$ where we note\n$\\partial_{t}\\vec{x}(t)=\\vec{u}(t)$.\n\nThe conservation of mass may be written in a variety of ways, usually as\n\\begin{equation}\n  \\frac{\\materialD\\rho}{\\materialD t}+\\rho\\grad\\cdot\\vec{u}=0.\n\\end{equation}\nThe atmosphere typically doesn't have a constant density, so this can't\nbe simplified much further.\n\nFor the equation of state, we'll need to invoke thermodynamics.\nFortunately, the atmosphere is close enough to an ideal gas that we can\nrely on quite a few results from the literature. 
The interested reader\nmay peruse Vallis~\\cite{vallis_2017} or Holton and\nHakim~\\cite{holton2013dynamicMeteorology} for details.\n\n\\subsection{In Rotating Reference Frame}\n\nWe can approximate the Earth as a rotating sphere. The angular velocity\nvector $\\vec{\\Omega}$ is roughly constant, and points towards the North\npole. We relate the time derivative of quantities\nas measured in the inertial frame $\\D\\vec{A}/\\D t$ to how they appear in a\nrotating reference frame $(\\D\\vec{A}/\\D t)'$ (primes indicate quantities\nin the rotating frame) by\n\\begin{equation}\n  \\frac{\\D\\vec{A}}{\\D t} = \\left(\\frac{\\D\\vec{A}}{\\D t}\\right)' + \\vec{\\Omega}\\times\\vec{A}.\n\\end{equation}\nWhen applied to the position $\\vec{r}$ of a body, we find the velocity\nin an inertial frame is related to the velocity in a rotating frame by\n\\begin{equation}\n  \\frac{\\D\\vec{r}}{\\D t} = \\left(\\frac{\\D\\vec{r}}{\\D t}\\right)' + \\vec{\\Omega}\\times\\vec{r}\n\\end{equation}\nand acceleration in the rotating frame may be found from\n\\begin{equation}\n  \\frac{\\D}{\\D t}(\\vec{v} - \\vec{\\Omega}\\times\\vec{r})\n  = \\frac{\\D\\vec{v}'}{\\D t}+\\vec{\\Omega}\\times\\vec{v}'\n\\end{equation}\nwhere $\\vec{v} = \\D\\vec{r}/\\D t$ and $\\vec{v}' = (\\D\\vec{r}/\\D t)'$.\nAfter some algebra, we conclude\n\\begin{equation}\n  \\frac{\\D\\vec{v}'}{\\D t}\n  = \\frac{\\D \\vec{v}}{\\D t}\n    - 2\\vec{\\Omega}\\times\\vec{v}'\n    - \\vec{\\Omega}\\times(\\vec{\\Omega}\\times\\vec{r}).\n\\end{equation}\nThat is, we have to add the Coriolis effect\n$-2\\vec{\\Omega}\\times\\vec{v}'$ and the centrifugal pseudoforce\n$-\\vec{\\Omega}\\times(\\vec{\\Omega}\\times\\vec{r})$. We usually sweep the\ncentrifugal contribution into the external force term.\n\nThe conservation of momentum equations in the rotating frame would\nbecome\n\\begin{equation}\n  \\frac{\\materialD\\vec{u}}{\\materialD t} + 2\\vec{\\Omega}\\times\\vec{u}\n  = \\frac{-1}{\\rho}\\grad p + \\vec{F}\n\\end{equation}\nwhere we understand $\\vec{u}$ refers to the fluid velocity for the\natmosphere as observed in the rotating frame.\n\n\\subsection{Spherical Coordinates in Rotating Frame}\n\nWe change coordinates to spherical coordinates, taking $\\lambda$ to\ndescribe the longitude (Eastwards angle), $\\theta$ the latitude (angular\ndistance polewards), and $r$ the radial distance from the center of the\nEarth. 
In these coordinates, the material derivative of any scalar quantity $\\varphi$ is\n\\begin{equation}\n  \\frac{\\materialD\\varphi}{\\materialD t}\n  = \\frac{\\partial\\varphi}{\\partial t}\n    + \\frac{u}{r\\cos(\\theta)}\\frac{\\partial\\varphi}{\\partial\\lambda}\n    + \\frac{v}{r}\\frac{\\partial\\varphi}{\\partial\\theta}\n    + w\\frac{\\partial\\varphi}{\\partial r}.\n\\end{equation}\nCare must be taken when computing the viscosity term, because the vector\nLaplacian is defined as\n\\begin{equation}\n  \\nabla^{2}\\vec{A} := \\grad(\\grad\\cdot\\vec{A}) - \\curl(\\curl\\vec{A}).\n\\end{equation}\nOnly in Cartesian coordinates does it coincide with the familiar Laplacian.\n\n\\begin{prop}\n  Mass conservation in spherical coordinates is\n  \\begin{equation}\n    \\frac{\\partial\\rho}{\\partial t}\n    + \\frac{1}{r\\cos(\\theta)}\\frac{\\partial(u\\rho)}{\\partial\\lambda}\n    + \\frac{1}{r\\cos(\\theta)}\\frac{\\partial}{\\partial\\theta}(v\\rho\\cos(\\theta))\n    + \\frac{1}{r^{2}}\\frac{\\partial}{\\partial r}(r^{2}w\\rho)\n    = 0\n  \\end{equation}\n  where $\\lambda$ is the longitude coordinate, $\\theta$ the\n  latitude coordinate, $r$ the vertical/radial coordinate; and\n  $\\vec{u}=(u,v,w)$ is the velocity pointing East $u$, North $v$, and\n  outward $w$.\n\\end{prop}\n\n\\begin{prop}\n  The momentum conservation equations in a rotating frame using spherical\n  coordinates are (neglecting the viscosity term)\n  \\begin{subequations}\n    \\begin{align}\n      \\frac{\\materialD u}{\\materialD t} -\\left(2\\Omega + \\frac{u}{r\\cos(\\theta)}\\right)(v\\sin(\\theta)-w\\cos(\\theta))\n      &= \\frac{-1}{\\rho r\\cos(\\theta)}\\frac{\\partial p}{\\partial\\lambda}\\\\\n      \\frac{\\materialD v}{\\materialD t} +\\left(2\\Omega + \\frac{u}{r\\cos(\\theta)}\\right)u\\sin(\\theta)\n      &= \\frac{-1}{\\rho r}\\frac{\\partial p}{\\partial\\theta} \\\\\n      \\frac{\\materialD w}{\\materialD t} - \\frac{u^{2}+v^{2}}{r} -\n      2\\Omega u\\cos(\\theta)\n      &= \\frac{-1}{\\rho}\\frac{\\partial p}{\\partial r}-g.\\label{eq:vertical-rotating-momentum-conservation}\n    \\end{align}\n  \\end{subequations}\n  The quadratic terms on the left-hand side involving factors of $1/r$\n  are usually called ``metric terms'', and those involving factors of\n  $\\Omega$ are Coriolis terms.\n\\end{prop}\n\nNeglecting the viscosity term may be justified by observing that, in MKS\nunits, it is typically about $10^{-7}$ times the size of the $\\materialD\\vec{u}/\\materialD t$\nterm. For more on this, see Holton and\nHakim~\\cite{holton2013dynamicMeteorology}, \\S2.4, for a scale analysis of the\nmomentum equations.\n\n\\begin{prop}\n  The radius of the Earth $a$ may be approximated as\n  \\begin{subequations}\n    \\begin{equation}\n      a = \\SI{20000/\\pi}{\\km}\n  \\end{equation}\n  or, to some 44 digits\n  \\begin{equation}\n    a\\approx\n    \\SI{6366.1977236758134307553505349005744813783858}{\\km}.%\\,2962\n  \\end{equation}\n  \\end{subequations}\n\\end{prop}\n\\begin{proof}\n  Recall from history, in 1791, the meter was defined such that a\n  quarter of the Earth's circumference through the North and South\n  poles, starting at the equator, would be\n\\begin{equation}\n  (2\\pi a)/4 = \\SI{1e4}{\\km}\n\\end{equation}\nwhere $a$ is Earth's radius. Modern measurements note the meridional\ncircumference of the Earth is $\\SI{40007.86}{\\km}$, so this\nhistoric definition has an absolute error of $\\SI{7.86}{\\km}$ (making\nit a useful approximation). 
In reality, Earth's equatorial\nradius $a_{eq}\\approx \\SI{6378.137}{\\km}$ and polar radius\n$a_{pol}\\approx\\SI{6356.752}{\\km}$ have a geometric mean of\n$\\SI{6367.44}{\\km}$.\n\\end{proof}\n\\begin{cor}\n  The inverse of Earth's radius is\n  \\begin{subequations}\n  \\begin{equation}\n    \\frac{1}{a} = \\frac{\\pi}{\\SI{2e4}{\\km}}\n  \\end{equation}\n  or, to 36 digits\n  \\begin{equation}\n    \\frac{1}{a}\\approx \\SI{1.57079632679489661923132169163975144e-4}{\\per\\km}\n  \\end{equation}\n  \\end{subequations}\n\\end{cor}\n\nThis speeds up performance when doing calculations, since multiplication\nis between 5 and 9 times faster than division on x86 CPUs\\footnote{See, e.g.,\nWittmann and friends~\\cite{wittmann2015short} or more informally \\url{https://latkin.org/blog/2014/11/09/a-simple-benchmark-of-various-math-operations/}}.\nThese ratios are in terms of CPU latency (i.e., the time from the start of\nthe instruction until the operation concludes).\n\n\\subsection{Primitive Equations}\n\nThere are three simplifications we could make: the shallow fluid\napproximation, the hydrostatic approximation, and the traditional\napproximation.\\footnote{My guide for this material is drawn\nprincipally from Vallis~\\cite[\\S\\S2.2.4--2.2.5]{vallis_2017}, who cites\nPhillips~\\cite{phillips1966TheEquationsofMotionforaShallowRotatingAtmosphereandtheTraditionalApproximation}\nand White~\\cite{white2002}.}\n\nWe usually simplify these equations further, noting that the height of\nthe atmosphere is no greater than $\\SI{100}{\\km}$, which is about\n60 times smaller than the radius of the Earth. So we could think of the\natmosphere as a shallow fluid.\\marginpar{(1) Shallow fluid: $r=a+z\\approx a$,\\\\\n  $\\partial/\\partial r=\\partial/\\partial z$}\nThat is to say, we treat $r=a+z$ where\n$a$ is the radius of the Earth and $z\\ll a$ the altitude. Consequently,\nwe replace $\\partial/\\partial r$ by $\\partial/\\partial z$ and all other\noccurrences of $r$ by $a$. If we make this simplification, then we\nare also forced to make the hydrostatic\napproximation: in the vertical\nmomentum equation, the gravitational term is balanced by the pressure\ngradient term (and these are the dominant terms in the equation; the\nother terms are negligible in comparison). That is to say, we use\\marginpar{(2) Hydrostatic Approximation}\n\\begin{equation}\n  \\frac{\\partial p}{\\partial z}=-\\rho g\n\\end{equation}\ninstead of Eq \\eqref{eq:vertical-rotating-momentum-conservation}. These\ntwo approximations are a bundle deal --- they are taken together, or not\nat all. For large-scale atmospheric (and oceanic) flow on Earth, they're\na pretty good approximation.\n\nThe traditional approximation\\marginpar{(3) Traditional approximation} discards the Coriolis terms involving $\\cos(\\theta)$ in the\nmomentum equations, and the smaller metric terms $uw/r$ and\n$vw/r$. 
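For a rough sense of scale (an order-of-magnitude sketch; the values $u\\sim 10\\,\\mathrm{m}\\cdot\\mathrm{s}^{-1}$ and $w\\sim 10^{-2}\\,\\mathrm{m}\\cdot\\mathrm{s}^{-1}$ are typical assumptions for large-scale flow), a discarded term such as $2\\Omega w\\cos(\\theta)\\approx 10^{-6}\\,\\mathrm{m}\\cdot\\mathrm{s}^{-2}$ at mid-latitudes is roughly a thousand times smaller than the retained term $2\\Omega v\\sin(\\theta)\\approx 10^{-3}\\,\\mathrm{m}\\cdot\\mathrm{s}^{-2}$. 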
This approximation is largely independent of the shallow fluid\nand hydrostatic approximations, and may be empirically justified by\nexamining the scale of the various terms.\n\n\\begin{prop}[Momentum Equations]\nTaking all three approximations gives us the conservation of momentum equations:\n\\begin{subequations}\n  \\begin{align}\n    \\frac{\\materialD u}{\\materialD t}\n    - 2\\Omega v\\sin(\\theta) - \\frac{uv}{a}\\tan(\\theta)\n    &= \\frac{-1}{\\rho a\\cos(\\theta)}\\frac{\\partial p}{\\partial\\lambda}\\\\\n    \\frac{\\materialD v}{\\materialD t}\n    + 2\\Omega u\\sin(\\theta) + \\frac{u^{2}\\tan(\\theta)}{a}\n    &= \\frac{-1}{\\rho a}\\frac{\\partial p}{\\partial\\theta} \\\\\n    0 &= \\frac{-1}{\\rho}\\frac{\\partial p}{\\partial z}-g.\n  \\end{align}\n\\end{subequations}\n\\end{prop}\n\n\\begin{note}\nIn our approximations, we now have\n\\begin{equation}\n  \\frac{\\materialD}{\\materialD t}=\\frac{\\partial}{\\partial t} +\n  \\frac{u}{a\\cos(\\theta)}\\frac{\\partial}{\\partial\\lambda} +\n  \\frac{v}{a}\\frac{\\partial}{\\partial\\theta} + w\\frac{\\partial}{\\partial z}.\n\\end{equation}\n\\end{note}\n\n\\begin{defn}\nIt is conventional to introduce the \\define{Coriolis parameter}\n\\begin{equation}\n  f = 2\\Omega\\sin(\\theta)\n\\end{equation}\nto simplify calculations. Empirically, $\\Omega=7.2921150\\times10^{-5}\\,\\mathrm{rad}\\cdot\\mathrm{s}^{-1}$.\n\\end{defn}\n\n\n\\begin{prop}[Continuity Equation]\nThe mass conservation equation for a shallow atmosphere becomes\n\\begin{equation}\n  \\frac{\\partial\\rho}{\\partial t}\n  + \\frac{u}{a\\cos(\\theta)}\\frac{\\partial\\rho}{\\partial\\lambda}\n  + \\frac{v}{a}\\frac{\\partial\\rho}{\\partial\\theta}\n  + w\\frac{\\partial\\rho}{\\partial z}\n  + \\rho\\left(\\frac{1}{a\\cos(\\theta)}\\frac{\\partial u}{\\partial\\lambda}\n              + \\frac{1}{a\\cos(\\theta)}\\frac{\\partial(v\\cos(\\theta))}{\\partial\\theta}\n              + \\frac{\\partial w}{\\partial z}\n       \\right)\n  = 0\n\\end{equation}\nor equivalently\n\\begin{equation}\n  \\frac{\\partial\\rho}{\\partial t}\n  + \\frac{1}{a\\cos(\\theta)}\\frac{\\partial(u\\rho)}{\\partial\\lambda}\n  + \\frac{1}{a\\cos(\\theta)}\\frac{\\partial(v\\rho\\cos(\\theta))}{\\partial\\theta}\n  + \\frac{\\partial(w\\rho)}{\\partial z}\n  = 0.\n\\end{equation}\n\\end{prop}\n\n\\section{Thermodynamics for Atmospheric Fluid}\nWe need some way to relate density to pressure, which requires\nthermodynamics. The atmosphere is approximately an ideal gas, so we could\nuse\n\\begin{equation}\n  p = \\rho R T\n\\end{equation}\nwhere $R=\\SI{287}{\\joule\\per\\kilogram\\per\\kelvin}$ is the gas\nconstant for dry air and $T$ is temperature (an unknown). We need another\nequation to fix $T$. A first approximation simply fixes $T=T_{0}$ to be\nconstant, a second approximation fixes $T=T_{0}-T_{1}z$ or something\nsimilar. But realistically, we want to model temperature from first\nprinciples.\n\nFor a simple ideal gas, the heat capacity is constant, so\n\\begin{equation}\n  I = cT\n\\end{equation}\nwhere $c$ is a constant and $I$ satisfies the fundamental thermodynamic\nrelation\n\\begin{equation}\n  \\D I = T\\,\\D\\eta - p\\,\\D\\alpha + \\mu\\,\\D S\n\\end{equation}\nwhere $I$ is the internal energy of the fluid, $\\alpha=1/\\rho$ is the\nspecific volume, $\\eta$ is the specific entropy, $\\mu$ is the chemical\npotential, and $S$ is the composition of the fluid in terms of\n``species'' of molecules. 
If $\\dot{Q}$ is the heating applied to the\nfluid and $\\dot{Q}_{E} = \\dot{Q} + \\mu\\dot{S}$ is the total rate of\nenergy change, then the internal energy equation\n\\begin{equation}\n  \\frac{\\materialD I}{\\materialD t} + \\frac{p}{\\rho}\\grad\\cdot\\vec{u} = \\dot{Q}_{E}\n\\end{equation}\nor the entropy equation\n\\begin{equation}\n\\frac{\\materialD \\eta}{\\materialD t} = \\frac{1}{T}\\dot{Q}\n\\end{equation}\ncould be used to fix the remaining parameters, provided we have a\nboundary condition giving us $\\dot{Q}$ and $\\dot{S}$. For an ideal dry\ngas (like the atmosphere) we have\n\\begin{equation}\n  \\D I = c_{v}\\,\\D T\n\\end{equation}\nwhich gives us\n\\begin{equation}\nc_{v}\\frac{\\materialD T}{\\materialD t} + p\\alpha\\grad\\cdot\\vec{u} = \\dot{Q}.\n\\end{equation}\nArguably, for a steady state, $\\dot{Q}=0$. Realistically, the Sun\n\\emph{is} an external heat source, and we need to model how it heats the\natmosphere, which is the rich subject of radiative transfer.\n\n\\begin{rmk}\n  We can combine several results from thermodynamics to write the first\n  law as\n  \\begin{equation}\n    \\dot{Q} = c_{p}\\frac{\\materialD T}{\\materialD t} - \\frac{1}{\\rho}\\frac{\\materialD p}{\\materialD t}\n  \\end{equation}\n  where for the atmosphere\n  $c_{p}=1003\\,\\mathrm{J}\\cdot\\mathrm{kg}^{-1}\\cdot\\mathrm{K}^{-1}$, and\n  $\\dot{Q}$ is the heat rate per unit mass. This is typically used in\n  numerical weather prediction. See chapter 2 of\n  Kalnay~\\cite{kalnay_2002} but be warned: Kalnay uses $Q$ whereas we\n  use $\\dot{Q}$.\n\\end{rmk}\n\n\\section{Radiative Transfer}\n\nWe need to incorporate heating from sunlight, which requires radiative\ntransfer --- for a review, see Liou~\\cite{liou2002introduction} and\nStamnes, Thomas, and Stamnes~\\cite{stamnes_thomas_stamnes_2017}.\nThe latter are part of the team who coded the \\DISORT/\\index{\\DISORT/}\nlibrary\\footnote{\\DISORT/ homepage \\url{http://www.rtatmocn.com/disort/}},\nwhich underpins many radiative transfer libraries like the\n\\RRTM/\\index{\\RRTM/} suite\\footnote{\\url{http://rtweb.aer.com/}}.\n\nThe basic radiative transfer equation is an integro-differential\nequation\n\\begin{equation}\\label{eq:integro-diff-radiative-transfer}\n  \\mu\\frac{\\D I(\\tau,\\mu,\\phi)}{\\D\\tau}\n  = I(\\tau, \\mu, \\phi) - S(\\tau, \\mu, \\phi)\n\\end{equation}\nwhere $\\tau$ is the optical depth of the medium; $I$ the diffuse\nspecific intensity in a cone of unit solid angle along direction $\\mu$,\n$\\phi$; and $S$ is the ``source function''\n\\begin{equation}\n  \\begin{split}\n    S(\\tau, \\mu, \\phi) &= Q^{\\text{(beam)}}(\\tau,\\mu,\\phi)  + Q^{\\text{(therm)}}(\\tau) \\\\\n    &\\qquad + \\frac{\\omega(\\tau)}{4\\pi}\n  \\int^{2\\pi}_{0}\\int^{1}_{-1} P(\\tau,\\mu,\\phi;\\mu',\\phi')I(\\tau,\\mu',\\phi')\\,\\D\\phi'\\,\\D\\mu'.\n  \\end{split}\n\\end{equation}\nHere $\\omega(\\tau)$ is the ``single scattering albedo'' of the medium\n(i.e., the fraction of an incident beam which is scattered by that\nvolume), and $P$ is the ``scattering phase function'' describing the\nangular scattering pattern of an infinitesimal volume. 
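As a quick sanity check on Eq \\eqref{eq:integro-diff-radiative-transfer} (a sketch; sign and orientation conventions for $\\tau$ and $\\mu$ vary between authors), suppressing the source function entirely ($S=0$) leaves $\\mu\\,\\D I/\\D\\tau = I$, whose solution\n\\begin{equation}\n  I(\\tau,\\mu,\\phi) = I(\\tau_{0},\\mu,\\phi)\\, e^{(\\tau-\\tau_{0})/\\mu}\n\\end{equation}\nis Beer--Lambert-style exponential attenuation of a beam along its slant path, with no emission or scattering into the beam. 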
The curious\nreader may profitably read chapters 5--8 of Stamnes and\nfriends~\\cite{stamnes_thomas_stamnes_2017}.\n\n\\begin{note}\nLaszlo and friends~\\cite{Laszlo2016} explain how \\DISORT/ solves Eq\n\\eqref{eq:integro-diff-radiative-transfer} in three basic steps:\n\\begin{enumerate}\n\\item Transform the equation into a set of radiative transfer equations\n  by separation of azimuth dependence;\n\\item Transforming these equations into a system of ordinary\n  differential equations;\n\\item Solving the System of ODEs using linear algebra.\n\\end{enumerate}\nThis strategy may be found in Chandrasekhar's book\n\\emph{Radiative Transfer} (1960) and refined over time.\n\\end{note}\n\n\n\\section{ADT Calculus}\n\nFollowing scientific programming tradition from SAGA\\footnote{\\url{https://www.ii.uib.no/saga/index.html}},\nRouson and friends~\\cite{rouson2011scientific,rouson2008grid}, and much\nhard earned experience, we will be encoding mathematical equations\ndirectly into the abstract data types.\n\n% There are a lot of people observing sin(x) is inaccurate on Intel CPUs\n% - https://hackernoon.com/help-my-sin-is-slow-and-my-fpu-is-inaccurate-a2c751106102\n% - https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/\n% - http://notabs.org/fpuaccuracy/\n%\n% Also see:\n% - https://latkin.org/blog/2014/11/09/a-simple-benchmark-of-various-math-operations/\n", "meta": {"hexsha": "0e9d1d7701fb0656245ec8c45bb1b82493cae1cd", "size": 19069, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/intro.tex", "max_stars_repo_name": "pqnelson/gcm", "max_stars_repo_head_hexsha": "dfbe1f42368fc3513f1e2548a963d0182951e56f", "max_stars_repo_licenses": ["BSD-3-Clause-Clear"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-23T00:20:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-23T00:20:23.000Z", "max_issues_repo_path": "tex/intro.tex", "max_issues_repo_name": "pqnelson/gcm", "max_issues_repo_head_hexsha": "dfbe1f42368fc3513f1e2548a963d0182951e56f", "max_issues_repo_licenses": ["BSD-3-Clause-Clear"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/intro.tex", "max_forks_repo_name": "pqnelson/gcm", "max_forks_repo_head_hexsha": "dfbe1f42368fc3513f1e2548a963d0182951e56f", "max_forks_repo_licenses": ["BSD-3-Clause-Clear"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.5647321429, "max_line_length": 154, "alphanum_fraction": 0.7256279826, "num_tokens": 5786, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430562234878, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.5588038232518026}}
{"text": "\\section{Computing Limits: Algebraically}\\label{sec:ComputingLimitsAlg}\r\n\\subsection*{Properties of limits}\r\nWith reference to Theorem \\ref{thm:LimitProduct}, we can derive \r\na handful of theorems to give us the tools to compute many limits\r\nwithout explicitly working with the precise definition of a limit.\r\n\r\n\\begin{theorem}{Limit Properties}{PropertiesLimits}\r\n Suppose that $ L, M $ and $ k $ are real numbers where  $\\ds \\lim_{x\\to a}f(x)=L$ and $\\ds \\lim_{x\\to a}g(x)=M$. Then\r\n\\begin{itemize}\r\n\\item[Constant Multiple Rule] $\\ds\\lim_{x\\to a} kf(x) =\\ds k\\lim_{x\\to a}f(x)=kL$\r\n\\item $\\ds\\lim_{x\\to a} (f(x)+g(x)) = \\ds\\lim_{x\\to a}f(x)+\\lim_{x\\to a}g(x)=L+M$\r\n\\item $\\ds\\lim_{x\\to a} (f(x)-g(x)) = \\ds\\lim_{x\\to a}f(x)-\\lim_{x\\to a}g(x)=L-M$\r\n\\item $\\ds\\lim_{x\\to a} (f(x)g(x)) = \\ds\\lim_{x\\to a}f(x)\\cdot\\lim_{x\\to a}g(x)=LM$\r\n\\item $\\ds\\lim_{x\\to a} \\frac{f(x)}{g(x)} = \\ds\\frac{\\ds\\lim_{x\\to a}f(x)}{\\ds\\lim_{x\\to a}g(x)}=\\frac{L}{M},\\hbox{ if $M$ is not 0}$\r\n\\end{itemize}\r\n\\end{theorem}\r\n\r\nRoughly speaking, these rules say that to compute the limit of an\r\nalgebraic expression, it is enough to compute the limits of the\r\n``innermost bits'' and then combine these limits. This often means\r\nthat it is possible to simply plug in a value for the variable, since\r\n$\\ds \\lim_{x\\to a} x =a$.\r\n\r\n\\begin{example}{Limit Properties}{LimitProperties}\r\nCompute $\\ds\\lim_{x\\to 1}{x^2-3x+5\\over x-2}$.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\n If we apply the theorem in all its gory detail, we get\r\n\\begin{eqnarray*}\r\n\\lim_{x\\to 1}{x^2-3x+5\\over x-2}&=&\r\n{\\ds\\lim_{x\\to 1}(x^2-3x+5)\\over \\ds\\lim_{x\\to1}(x-2)}\\cr\r\n\\\\\r\n&=&{(\\ds\\lim_{x\\to 1}x^2)-(\\ds\\lim_{x\\to1}3x)+(\\ds\\lim_{x\\to1}5)\\over \r\n  (\\ds\\lim_{x\\to1}x)-(\\ds\\lim_{x\\to1}2)}\\cr\r\n\\\\\r\n&=&{(\\ds\\lim_{x\\to 1}x)^2-3(\\ds\\lim_{x\\to1}x)+5\\over (\\ds\\lim_{x\\to1}x)-2}\\cr\r\n\\\\\r\n&=&{1^2-3\\cdot1+5\\over 1-2}\\cr\r\n\\\\\r\n&=&{1-3+5\\over -1} = -3\r\n\\end{eqnarray*}\r\n\\end{solution}\r\n \r\nIt is worth commenting on the trivial limit $\\ds \\lim_{x\\to1}5$. From one\r\npoint of view this might seem meaningless, as the number 5 can't\r\n``approach'' any value, since it is simply a fixed number. However, 5 can,\r\nand should, be interpreted here as the function that has value 5\r\neverywhere, $f(x)=5$, with graph a horizontal line. From this point of\r\nview it makes sense to ask what happens to the values of the function (height of the graph)\r\nas $x$ approaches 1.\r\n\r\nWe're primarily interested in limits\r\nthat aren't so easy, namely, limits in which a denominator approaches\r\nzero. There are a handful of algebraic tricks that work on many of\r\nthese limits.\r\n\r\n\\begin{example}{Zero Denominator}{zerodenominator}\r\nCompute $\\ds\\lim_{x\\to1}{x^2+2x-3\\over x-1}$.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\n We can't simply plug in $x=1$ because that makes the denominator\r\n zero.  
However:\r\n\\begin{eqnarray*}\r\n\\lim_{x\\to1}{x^2+2x-3\\over x-1}&=&\\lim_{x\\to1}{(x-1)(x+3)\\over x-1}\\cr\r\n\\\\\r\n&=&\\lim_{x\\to1}(x+3)=4\\\r\n\\end{eqnarray*}\r\n\\end{solution}\r\n\r\nThe technique used to solve the previous example can be referred to as \\ifont{factor and cancel}.\r\nIts validity comes from the fact that we are allowed to cancel $x-1$ from the numerator and denominator.\r\nRemember in Calculus that we have to make sure we don't cancel zeros, so we require $x-1\\neq 0$ in order\r\nto cancel it. But looking back at the definition of a limit using $x\\to 1$, the key point for this example\r\nis that we are taking values of $x$ close to $1$ but \\ifont{not} equal to $1$. This is\r\nexactly what we wanted ($x\\neq 1$) in order to cancel this common factor.\r\n\r\nWhile Theorem~\\ref{thm:PropertiesLimits} is very helpful, we\r\nneed a bit more to work easily with limits. Since the theorem applies\r\nwhen some limits are already known, we need to know the behavior of\r\nsome functions that cannot themselves be constructed from the simple\r\narithmetic operations of the theorem, such as $\\ds\\sqrt{x}$. Also,\r\nthere is one other extraordinarily useful way to put functions\r\ntogether: composition. If $f(x)$ and\r\n$g(x)$ are functions, we can form two functions by composition:\r\n$f(g(x))$ and $g(f(x))$. For example, if $\\ds f(x)=\\sqrt{x}$ and $\\ds\r\n\\ds g(x)=x^2+5$, then $\\ds f(g(x))=\\sqrt{x^2+5}$ and $\\ds\r\ng(f(x))=(\\sqrt{x})^2+5=x+5$.  Here is a companion to\r\nTheorem~\\ref{thm:PropertiesLimits} for composition:\r\n\r\n\\begin{theorem}{Limit of Composition}{LimitComposition}\r\nSuppose that $\\ds \\lim_{x\\to a}g(x)=L$ and $\\ds \\lim_{x\\to L}f(x)=f(L)$. Then\r\n$$\\lim_{x\\to a} f(g(x)) = f(L).$$\r\n\\end{theorem}\r\n\r\nNote the special form of the condition on $f$: it is not enough to\r\nknow that $\\ds\\lim_{x\\to L}f(x) = M$, though it is a bit tricky to see\r\nwhy. We have included an example in the exercise section to illustrate this tricky\r\npoint for those who are interested. Many of the most familiar functions do have this property, and\r\nthis theorem can therefore be applied. For example:\r\n\r\n\\begin{theorem}{Continuity of Roots}{ContinuityRoots}\r\n Suppose that $n$ is a positive integer. Then\r\n$$\\lim_{x\\to a}\\root n\\of{x} = \\root n\\of{a},$$\r\nprovided that $a$ is positive if $n$ is even.\r\n\\end{theorem}\r\n\r\nThis theorem is not too difficult to prove from the definition of limit.\r\n\r\nAnother of the most common algebraic tricks is called \\ifont{rationalization}. 
\r\nRationalizing makes use of the difference of squares formula $(a-b)(a+b)=a^2-b^2$.\r\nHere is an example.\r\n\r\n\\begin{example}{Rationalizing}{Rationalizing}\r\nCompute $\\ds\\lim_{x\\to-1} {\\sqrt{x+5}-2\\over x+1}$.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\n\\begin{eqnarray*}\r\n\\lim_{x\\to-1} {\\sqrt{x+5}-2\\over x+1}&=&\r\n\\lim_{x\\to-1} {\\sqrt{x+5}-2\\over x+1}\\cdot{\\sqrt{x+5}+2\\over \\sqrt{x+5}+2}\\cr\r\n\\\\\r\n&=&\\lim_{x\\to-1} {x+5-4\\over (x+1)(\\sqrt{x+5}+2)}\\cr\r\n\\\\\r\n&=&\\lim_{x\\to-1} {x+1\\over (x+1)(\\sqrt{x+5}+2)}\\cr\r\n\\\\\r\n&=&\\lim_{x\\to-1} {1\\over \\sqrt{x+5}+2}={1\\over4}\r\n\\end{eqnarray*}\r\nAt the very last step we have used Theorems~\\ref{thm:LimitComposition} and \\ref{thm:ContinuityRoots}.\r\n\\end{solution}\r\n\r\n\\begin{example}{Left and Right Limit}{leftright}\r\nEvaluate $\\ds\\lim_{x\\to 0}{x\\over|x|}$.\r\n\\end{example}\r\n\r\n\\begin{solution} \r\nThe function $f(x)=x/|x|$ is undefined at 0; when $x>0$, $|x|=x$ and\r\nso $f(x)=1$; when $x<0$, $|x|=-x$ and $f(x)=-1$. Thus\r\n$$\\ds \\lim_{x\\to 0^-}{x\\over|x|}=\\lim_{x\\to 0^-}-1=-1$$\r\nwhile \r\n$$\\ds \\lim_{x\\to 0^+}{x\\over|x|}=\\lim_{x\\to 0^+}1=1.$$\r\nThe limit of $f(x)$ must be equal to both the left and right limits; since they are\r\ndifferent, the limit $\\ds \\lim_{x\\to 0}{x\\over|x|}$ does not exist.\r\n\\end{solution}\r\n\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for \\ref{sec:ComputingLimitsAlg}}\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\nCompute the limits. If a limit does not exist, explain why.\r\n\\begin{multicols}{2}\r\n\\begin{enumerate}\r\n\t\\item\t$\\ds \\lim_{x\\to 3}{x^2+x-12\\over x-3}$\r\n\t\\item\t$\\ds \\lim_{x\\to 1}{x^2+x-12\\over x-3}$\r\n\t\\item\t$\\ds \\lim_{x\\to -4}{x^2+x-12\\over x-3}$\r\n\t\\item\t$\\ds \\lim_{x\\to 2} {x^2+x-12\\over x-2}$\r\n\t\\item\t$\\ds \\lim_{x\\to 1} {\\sqrt{x+8}-3\\over x-1}$\r\n\t\\item\t$\\ds \\lim_{x\\to 0^+} \\sqrt{{1\\over x}+2} - \\sqrt{1\\over x}$\r\n\t\\item\t$\\ds\\lim _{x\\to 2} 3$\r\n\t\\item\t$\\ds\\lim _{x\\to 4 } 3x^3 - 5x $\r\n\t\\item\t$\\ds \\lim _{x\\to 0 } {4x - 5x^2\\over x-1}$\r\n\t\\item\t$\\ds\\lim _{x\\to 1 } {x^2 -1 \\over x-1 }$\r\n\t\\item\t$\\ds\\lim _{x\\to 0^ + } {\\sqrt{2-x^2 }\\over x}$\r\n\t\\item\t$\\ds\\lim _{x\\to 0^ + } {\\sqrt{2-x^2}\\over x+1}$\r\n\t\\item\t$\\ds\\lim _{x\\to a } {x^3 -a^3\\over x-a}$\r\n\t\\item\t$\\ds\\lim _{x\\to 2 } (x^2 +4)^3$\r\n\\end{enumerate}\r\n\\end{multicols}\r\n\\begin{sol}\r\n\\begin{multicols}{2}\r\n\\begin{enumerate}\r\n\t\\item\t7\r\n\t\\item\t5\r\n\t\\item\t0\r\n\t\\item\tundefined\r\n\t\\item\t$1/6$\r\n\t\\item\t0\r\n\t\\item\t3\r\n\t\\item\t172\r\n\t\\item\t0\r\n\t\\item\t2\r\n\t\\item\tdoes not exist\r\n\t\\item\t$\\ds \\sqrt2$\r\n\t\\item\t$\\ds 3a^2$\r\n\t\\item\t512\r\n\\end{enumerate}\r\n\\end{multicols}\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex}\r\nLet $f(x)=\\left\\{ \r\n\\begin{array}{cc}\r\n1 & \\text{if }x\\neq 0 \\\\ \r\n0 & \\text{if }x=0%\r\n\\end{array}%\r\n\\right.$ and $g(x)=0$. What are the values of $\r\nL=\\lim_{x\\to 0}g(x)$ and $M=\\lim_{x\\to L}f(x)$? Is it true that $\\lim_{x\\to\t0}f(g(x))=M$? 
What are some noteworthy differences\r\nbetween this example and Theorem~\\ref{thm:LimitComposition}?\r\n\\begin{sol}\r\n\t$L=0$ and $M=1.$ No.\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n\\end{enumialphparenastyle}\r\n\r\n\\clearpage", "meta": {"hexsha": "b77bf587a0ce4819fd22813173fb8df00253dd69", "size": 8233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "3-limits/3-4-limits-props.old.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "3-limits/3-4-limits-props.old.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "3-limits/3-4-limits-props.old.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4719626168, "max_line_length": 134, "alphanum_fraction": 0.6482448682, "num_tokens": 3116, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6723316991792861, "lm_q2_score": 0.8311430520409023, "lm_q1q2_score": 0.5588038204397178}}
{"text": "\n\\subsection{Q-Q plots}\n\nPlot quartiles of variables against each other.\n\n", "meta": {"hexsha": "eacfae99d53745489438c32252ffe73a4df335a2", "size": 74, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/summaryMultiple/04-03-QQ.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/summaryMultiple/04-03-QQ.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/summaryMultiple/04-03-QQ.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 12.3333333333, "max_line_length": 47, "alphanum_fraction": 0.7702702703, "num_tokens": 17, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8311430311279739, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5588038009224193}}
{"text": "% 17fiberbundles.tex\n\n\\begin{quote}\n  A vector bundle is a family of vector spaces parameterized by points in the base space. How do we parameterize a family of manifolds, say Lie groups?\n\\end{quote}\n\n\\section{ 17.1. Fiber Bundles and Principal Bundles }\n\n\\subsection{ 17.1a. Fiber Bundles }\n\n\\subsection{ 17.1b. Principal Bundles and Frame Bundles }\n\nframe $\\mathbf{e}$ at $p$ chosen\n\n\\begin{equation}\n  f(p) = e_{\\alpha}(p) g_{\\alpha}(p) \\quad \\quad \\quad \\, (17.4)\n\\end{equation}\n\n\\begin{equation}\n\\begin{aligned}\n  & \\Phi_{\\alpha} : U_{\\alpha} \\times G \\to \\pi^{-1}(U_{\\alpha}) \\\\ \n  & \\Phi_{\\alpha}(p,g) = e_{\\alpha}(p) g = (e_{\\alpha})_i g^i_{ \\, \\, j } = f_j\n\\end{aligned}\n\\end{equation}\n\nin an overlap, the same frame (17.4) will have another representation\n\n\\begin{equation}\n  \\mathbf{f}(p) = \\mathbf{e}_{\\beta}{(p)} g_{\\beta}(p) \\quad \\quad \\quad \\, (17.5)\n\\end{equation}\n\n\\[\n\\begin{gathered}\n  e_{\\beta}(p) = e_{\\alpha}(p) \\tau_{\\alpha \\beta}(p) \\\\ \n  \\tau_{\\alpha \\beta}(p) \\equiv \\tau_{\\alpha \\beta} \\\\ \n  g_{\\alpha}(p) = \\tau_{\\alpha \\beta}(p) g_{\\beta}(p)\n\\end{gathered}\n\\]\n\ndiffeomorphism \n\\[\n\\tau_{\\alpha \\beta}(p)  : G \\to G\n\\]\nleft translation of $G$ by (transition) matrix $\\tau_{\\alpha \\beta}(p)$\n\n\n\n\\subsection{ 17.1c. Action of the Structure Group on a Principal Bundle}\n\nLet $\\mathbf{f} = (\\mathbf{f}_1 \\dots \\mathbf{f}_n)$ frame at $p$, $\\mathbf{f} \\in P$\n\n\\begin{theorem}[17.8]\n  \\[\n(f \\in P , g\\in G) \\to (fg) \\in P \n\\]\nfreely when $g\\neq e$ and \n\\[\n\\pi(fg) = \\pi(f)\n\\]\ni.e. preserves fibers\n\\end{theorem}\n\nProof:\n  $\\pi(\\mathbf{f}) = p$ \\\\\n\n$\\Phi_{\\alpha} : U_{\\alpha} \\times G \\to \\pi^{-1}(U_{\\alpha})$ \\quad \\quad \\, local trivialization \\\\\n$\\Phi_{\\alpha}(p,g_{\\alpha}) = \\mathbf{f} \\Longrightarrow \\Phi_{\\alpha}^{-1}(\\mathbf{f}) = (p,g_{\\alpha})$ \n\\quad $\\exists \\, ! \\, g_{\\alpha}$ for $\\mathbf{f}$\n\nLet $g \\in G$, \n\nright action of $g$ on $\\pi^{-1}(U_{\\alpha})$ is (locally action)\n\n\\[\n\\Phi_{\\alpha}(p,g_{\\alpha}g) = fg\n\\]\n\nif $p \\in U_{\\alpha} \\bigcap U_{\\beta}$ \n\n\\[\n\\begin{gathered}\n  fg = \\Phi_{\\beta}(p,g_{\\beta}g) = \\Phi_{\\beta}(p, \\tau_{\\beta \\alpha}(p)g_{\\alpha} g ) = \\Phi_{\\alpha}(p,g_{\\alpha} g) \n\\end{gathered}\n\\]\n$\\tau_{\\beta \\alpha} = \\Phi_{\\beta}^{-1} \\Phi_{\\alpha}$\n\n\\hrulefill\n\nWe see in this proof that the essential point is that \\emph{left translations} in $G$ (say by $\\tau_{\\beta \\alpha}$) \\emph{ commute with right translations} (say by $g$).  \n\n\n\n\\subsection{ 17.2. }\n\n\n\\subsection{ 17.3. Chern's Proof of the Gauss-Bonnet-Poincar\\'{e} Theorem }\n\n\n\\subsubsection{ 17.3a. 
A Connection in the Frame Bundle of a Surface }\n\n\n\\begin{equation}\n  \\omega \\left( \\frac{d\\mathbf{x}}{dt} \\right) \\in \\mathfrak{g} = \\mathfrak{u}{ (1) } \\quad \\quad \\quad \\, (17.14)\n\\end{equation}\n\n\n\n\n\n\n", "meta": {"hexsha": "0949f1934e7c12bd3a345d09d92183bcc211e1bb", "size": 2731, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/17fiberbundles.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/17fiberbundles.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/17fiberbundles.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 24.6036036036, "max_line_length": 172, "alphanum_fraction": 0.6188209447, "num_tokens": 1067, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5587687228797432}}
{"text": "\n\\chapter{Quantification of PCA/PLS-DA Class Separations}\n\n\\begin{quote}\n{\\it\n  People want to see patterns in the world. ... So important is this skill\n  that we apply it everywhere, warranted or not.}\n\\\\\\\\\n -- Benoit Mandelbrot\n\\end{quote}\n\n\\section{Introduction}\n\n\\begin{doublespace}\nThe importance placed on interpretation of PCA, PLS-DA and OPLS-DA scores plots\nnecessitates the use of quantitative procedures to determine the significance\nof separations between multiple experimental groups in scores space. However,\nno de facto protocol or metric exists to provide a means of reporting the\ndegree or significance of group separation\n\\cite{werth:abio2010,goodpaster:abio2010,goodpaster:cils2011}.\nAnderson et al. used the $J_2$ criterion\n\\cite{anderson:metab2008,koutroumbas2006} to assess the quality of\nresulting scores clusters according to the average withing-group and\nbetween-group scatters for all groups. However, the $J_2$ metric only provides\nan overall estimate of group separation without fine-grained information on\neach pair of groups \\cite{koutroumbas2006}. A similar problem exists\nwith the related Davies-Bouldin index \\cite{davies:ieee1979}, which\nchooses a worst-case estimate of group overlap as its figure of merit. Dixon\net al. \\cite{dixon:jchemo2009} also comprehensively reported the\nperformances of four cluster separation indices based on modifications of\nmetrics used to validate separation for unsupervised clustering algorithms.\nAlternatively, the PCAtoTree protocol constructs dendrograms from Euclidean\ndistance matrices computed from PCA scores for the PHYLIP\n\\cite{felsenstein:clad1989} software suite using a bootstrapping routine to\ndetermine branch node significance \\cite{werth:abio2010,retief:mmbio2000}.\nHowever, it was recently shown that hypothesis testing using a Mahalanobis\ndistance metric and the $T^2$ and $F$ distributions can provide a statistical\nmeans of quantifying group similarity \\cite{goodpaster:cils2011}, suggesting\nthe possibility of returning $p$ values for full statistical quantitation of\ngroup separations in scores space.\n\\end{doublespace}\n\n\\section{Materials and Methods}\n\n\\begin{doublespace}\nThe methods described below were implemented in software using the C\nprogramming language with minimal external dependencies, so the programs\nmay be compiled and executed on any modern GNU/Linux distribution.\n\\end{doublespace}\n\n\\subsection{Probability Calculation}\n\n\\begin{doublespace}\nUnder the assumption that each group in the scores space is distributed as a\nmultivariate normal random variable, the separations between groups may be\ncalculated using the squared Mahalanobis distance metric\n\\cite{mahalanobis:pnisi1936}:\n\\begin{equation}\nD_M^2 =\n  (\\mathbf{u}_j - \\mathbf{u}_i)^T\n  \\mathbf{S}_p^{-1}\n  (\\mathbf{u}_j - \\mathbf{u}_i)\n\\end{equation}\n\nIn the above equation, $\\mathbf{u}_i, \\mathbf{u}_j \\in \\mathbb{R}^p$ are the\n$p$-variate sample means of groups $i$ and $j$, respectively, and\n$\\mathbf{S}_p \\in \\mathbb{R}^{p \\times p}$ is the pooled variance-covariance\nmatrix, a weighted sum of the covariance matrices from groups $i$ and\n$j$:\n\\begin{equation}\n\\mathbf{S}_p = \\frac{n_i \\mathbf{S}_i + n_j \\mathbf{S}_j}{n_i + n_j}\n\\end{equation}\nwhere $n_i$ and $n_j$ are the number of observations in groups $i$ and $j$,\nrespectively. 
The Mahalanobis distance may then be related to a Hotelling's\n$T^2$ statistic by the following scaling \\cite{mardia1979}:\n\\begin{equation}\nT^2 = \\left( \\frac{n_i n_j}{n_i + n_j} \\right) D_M^2\n\\end{equation}\n\nThis $T^2$ statistic is an extension of the Student's $t$ statistic to\nhypothesis tests in multiple dimensions, and may be related to an $F$\ndistribution by a final scaling \\cite{mardia1979}:\n\\begin{equation}\nx_F =\n  \\frac{n_i + n_j - p - 1}{p (n_i + n_j - 2)} T^2 \\sim\n  F(p, n_i + n_j - p - 1)\n\\end{equation}\n\nIt can be seen from this final relation that evaluation of the complement of\nthe cumulative $F$-distribution function at $x_F$ yields the $p$ value under\nthe null hypothesis that the points in groups $i$ and $j$ are in fact\ndrawn from the same distribution.\n\\end{doublespace}\n\n\\begin{figure}[ht!]\n\\includegraphics[width=6in]{figs/utils/01-opls.png}\n\\caption\n      [Confidence Ellipses and $p$-dendrogram of Example OPLS-DA Scores.]{\n  {\\bf Confidence Ellipses and $p$-dendrogram of Example OPLS-DA Scores.}\n  \\\\\n  ({\\bf A}) 2D OPLS-DA scores plot illustrating 95\\% confidence ellipses for\n  a model having one predictive (PLS) and one orthogonal (OSC) component. The\n  symbol shape and color of each point correspond to the groups in ({\\bf B}).\n  Discrimination in the first component is between wild-type and\n  antibiotic-treated {\\it Mycobacterium smegmatis}, and separations along the\n  second component indicate metabolic differences between different antibiotic\n  treatments. The antibiotics cluster together based on a shared biological\n  target (cell wall synthesis, mycolic acid biosynthesis, or transcription,\n  translation and DNA supercoiling). ({\\bf B}) Dendrogram generated from the\n  scores in ({\\bf A}) using Mahalanobis distances, with $p$ values for the null\n  hypothesis reported at each branch.\n}\n\\label{figure.10.1}\n\\end{figure}\n\n\\subsection{Dendrogram Generation}\n\n\\begin{doublespace}\nThe implementation of the tree-generation procedure is a classical UPGMA\nalgorithm \\cite{sokal:uksci1958}. When $p$ values are reported at\neach branch point, a single tree is generated based on the matrix of\nMahalanobis distances between groups. In the case of bootstrapped trees, the\ngroups are randomly resampled with replacement while preserving group size.\nThe desired number of trees is then generated using Euclidean distances\nbetween group means. The final tree used to report bootstrap probabilities\nis built using a Euclidean distance matrix calculated from the original\n(non-resampled) dataset.\n\\end{doublespace}\n\n\\subsection{Confidence Ellipse Calculation}\n\n\\begin{doublespace}\nWhen viewing PCA and PLS-DA scores plots, it has been common practice to apply\nhand-drawn ellipses to inform group membership, or even to omit such ellipses\nentirely. This may lead to inconsistent or erroneous interpretation of\nexperimental results. Instead, the fact that the Mahalanobis distances of a\nset of $p$-variate points from their sample mean follow a $\\chi^2$ distribution\nhaving $p$ degrees of freedom \\cite{hotelling:ams1931} may be leveraged\nto estimate 95\\% confidence ellipses for scores in any number of dimensions.\nThe sample mean $\\mathbf{u}$ and sample covariance matrix $\\mathbf{S}$ for each\ngroup must first be calculated from its scores-space data. 
Then, each group\ncovariance matrix is decomposed into its eigenvalues and eigenvectors,\n\\begin{equation}\n\\mathbf{S} = \\mathbf{Q} \\mathbf{\\Lambda} \\mathbf{Q}^{-1}\n\\end{equation}\nwhere $\\mathbf{Q} \\in \\mathbb{R}^{p \\times p}$ is an orthogonal matrix holding\nthe eigenvectors of $\\mathbf{S}$, and\n$\\mathbf{\\Lambda} \\in \\mathbb{R}^{p \\times p}$ is a diagonal matrix holding the\ncorresponding eigenvalues of $\\mathbf{S}$.\n\nFor the case of two-dimensional scores data, the 95\\% confidence ellipse for a\ngroup is as follows:\n\\end{doublespace}\n\n\\begin{equation}\n\\begin{bmatrix}\nx(t) \\\\\ny(t)\n\\end{bmatrix}\n = \\mathbf{u} + \\mathbf{Q} \\sqrt{\\mathbf{\\Lambda} F_{0.95,2}^{-1}}\n\\begin{bmatrix}\n\\cos(t) \\\\\n\\sin(t)\n\\end{bmatrix}\n\\end{equation}\n\n\\begin{doublespace}\nwhere $F_{0.95,2}^{-1}$ is the value of the inverse $\\chi^2$ cumulative\ndistribution function at $1 - \\alpha = 0.95$ with two degrees of freedom, and the\nsquare-root is taken element-wise over $\\mathbf{\\Lambda}$. Similarly, a\nthree-dimensional (3D) confidence ellipsoid may be obtained from the\nfollowing parametric equation:\n\\end{doublespace}\n\n\\begin{equation}\n\\begin{bmatrix}\nx(u,v) \\\\\ny(u,v) \\\\\nz(u,v)\n\\end{bmatrix}\n = \\mathbf{u} + \\mathbf{Q} \\sqrt{\\mathbf{\\Lambda} F_{0.95,3}^{-1}}\n\\begin{bmatrix}\n\\cos(u) \\cos(v) \\\\\n\\cos(u) \\sin(v) \\\\\n\\sin(v)\n\\end{bmatrix}\n\\end{equation}\n\n\\begin{doublespace}\nwhere the parameters $t$, $u$ and $v$ are all evaluated on $(0,2\\pi)$. These\nmethods allow for the inclusion of confidence regions onto two- and\nthree-dimensional scores plots that reflect the 95\\% membership boundaries\nfor each group. The approach assumes normally distributed within-group errors.\n\\figref{10.1}{Figures 10.1A} and \\figref{10.2}{10.2} illustrate the inclusion\nof these group confidence regions in representative PCA and OPLS-DA scores,\nrespectively. The ellipses and ellipsoids clearly define statistically\nsignificant class separation and also provide an example in which multiple\ngroups actually belong to the same underlying biological classification.\n\\end{doublespace}\n\n\\begin{SCfigure}\n\\includegraphics[width=3.25in]{figs/utils/02-pca.png}\n\\caption\n      [Confidence Ellipsoids from PCA Scores.]{\n  {\\bf Confidence Ellipsoids from PCA Scores.}\n  \\\\\n  3D PCA scores plot with superimposed 95\\% confidence ellipsoids drawn as\n  meshes containing group points. The ellipsoids define the statistical\n  significance of class separation and provide an illustration where two\n  groups are distinct in three-dimensional scores space.\n}\n\\label{figure.10.2}\n\\end{SCfigure}\n\n\\section{Results and Discussion}\n\n\\begin{doublespace}\nThe described PCA utilities software package consists of a set of standalone\nC programs that generate dendrograms from PCA, PLS-DA and OPLS-DA scores,\nreport $p$ values and bootstrap numbers on tree branches, and incorporate\nconfidence ellipses/ellipsoids into scores plots. The $p$ values reported for\nevery pair of distinct groups in scores space provide a truly quantitative\nmeans to discuss group separations. Support for the generation of dendrograms\nwith these $p$ values at each branch point is also included as an alternative\nto bootstrapping for answering the question of tree uniqueness. This\neliminates the prior dependence on PHYLIP \\cite{retief:mmbio2000}\nreported for the original PCAtoTree \\cite{werth:abio2010} software\npackage. 
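To make the probability calculation of the Methods section concrete, a minimal sketch follows (for illustration only, not the distributed C implementation; the NumPy/SciPy dependencies and the function name are assumptions), computing the $p$ value for one pair of groups directly from their scores:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import f as f_dist\n\ndef separation_p_value(Xi, Xj):\n    # Xi, Xj: (n_i x p) and (n_j x p) arrays of scores for two groups\n    ni, p = Xi.shape\n    nj = Xj.shape[0]\n    # pooled variance-covariance matrix, weighted as in the Methods\n    Sp = (ni * np.cov(Xi, rowvar=False)\n          + nj * np.cov(Xj, rowvar=False)) / (ni + nj)\n    d = Xj.mean(axis=0) - Xi.mean(axis=0)\n    D2 = d @ np.linalg.solve(Sp, d)    # squared Mahalanobis distance\n    T2 = (ni * nj / (ni + nj)) * D2    # Hotelling's T^2\n    xF = (ni + nj - p - 1) / (p * (ni + nj - 2)) * T2\n    # complement of the cumulative F distribution at x_F\n    return f_dist.sf(xF, p, ni + nj - p - 1)\n\\end{verbatim}\nA linear solve is used in place of an explicit matrix inverse when applying $\\mathbf{S}_p^{-1}$, which is the numerically preferable choice. 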
\begin{SCfigure}\n\includegraphics[width=3.25in]{figs/utils/02-pca.png}\n\caption\n      [Confidence Ellipsoids from PCA Scores.]{\n  {\bf Confidence Ellipsoids from PCA Scores.}\n  \\\n  3D PCA scores plot with superimposed 95\% confidence ellipsoids drawn as\n  meshes containing group points. The ellipsoids define the statistical\n  significance of class separation and provide an illustration where two\n  groups are distinct in three-dimensional scores space.\n}\n\label{figure.10.2}\n\end{SCfigure}\n\n\section{Results and Discussion}\n\n\begin{doublespace}\nThe described PCA utilities software package consists of a set of standalone\nC programs that generate dendrograms from PCA, PLS-DA and OPLS-DA scores,\nreport $p$ values and bootstrap numbers on tree branches, and incorporate\nconfidence ellipses/ellipsoids into scores plots. The $p$ values reported for\nevery pair of distinct groups in scores space provide a truly quantitative\nmeans to discuss group separations. Support for the generation of dendrograms\nwith these $p$ values at each branch point is also included as an alternative\nto the bootstrap for answering the question of tree uniqueness. This\neliminates the prior dependence on PHYLIP \cite{retief:mmbio2000}\nreported for the original PCAtoTree \cite{werth:abio2010} software\npackage. The reporting of $p$ values is complementary to bootstrapping methods\nin cases of highly overlapped groups, in that it provides a more direct,\ninterpretable quantitation of group separation.\n\\\\\nIn comparison with PCAtoTree, the PCA utilities software package now uses\nMahalanobis distances because this metric is more appropriate for multivariate\ndata. De Maesschalck et al. \cite{demaesschalck:cils2000} provide an\nexceptional introduction to the use of Mahalanobis distances with PCA.\nSpecifically, Mahalanobis distances account for different variances in each\nscores-space direction ($t_1$, $t_2$, $t_3$, etc.) and are invariant\nto scaling transformations. This accounting for variance-covariance structure\nensures that the use of a Mahalanobis distance metric for dendrogram generation\nincludes cluster shape and orientation in the analysis of group separation.\nAlso, Mahalanobis distances calculated between groups in PCA scores space will\nclosely approximate those calculated from the original data matrix while\navoiding possible multicollinearities among the original variables. This is\nnot true of Mahalanobis distances in PLS or OPLS scores space, because of the\nunderlying supervision of the PLS algorithm. These features differ from the\nEuclidean distance metric, which is a special case of the Mahalanobis metric\nthat arises when the group covariance matrices equal the identity matrix.\n\figref{10.1}{Figure 10.1B} illustrates the dendrogram structure based on the\nuse of Mahalanobis distances determined from a set of scores, and\n\figref{10.3}{Figure 10.3} shows the dendrogram structure based on\nEuclidean distances from the same scores.\n\end{doublespace}\n\n\begin{SCfigure}\n\includegraphics[width=3.25in]{figs/utils/03-tree.png}\n\caption\n      [Dendrogram Generated using Euclidean Distances.]{\n  {\bf Dendrogram Generated using Euclidean Distances.}\n  \\\n  Bootstrapped dendrogram generated from the scores data in Figure 10.1A using\n  a Euclidean distance metric. Bootstrap statistics reported at each branch\n  were computed from 5,000 bootstrap iterations.\n}\n\label{figure.10.3}\n\end{SCfigure}\n\n\begin{doublespace}\nIt is important to note that our software is not a means of determining the\nreliability of PCA or PLS-DA models, but only a tool set for quantifying the\nscores that those models produce. In the case of PCA scores, significance of\nthe principal components used must be inferred based on the explained sum of\nsquares or another cross-validation technique\n\cite{eastment:tech1982,krzanowski:biom1987}. PLS-DA models require\nrigorous cross-validation to ensure model reliability, as they almost always\nyield perfect separations between the scores of different groups\n\cite{kjeldahl:jchemo2010}. With that in mind, separations in PLS-DA scores\nplots between groups not under discrimination may reflect true experimental\ndifferences, as opposed to the forced separations between discriminated\ngroups. Thus, the interpretation of any results from the PCA utilities must be\ndone with the knowledge of the underlying algorithm's mathematical intent, and\nonly after the model has been validated. While we demonstrated confidence\nregion generation using only 2D and 3D scores plots, it is important to note\nthat the PCA utilities software package places no restriction on the number of\ncomponents or on which components may be used during dendrogram generation and\n$p$ value calculation. 
Any dimensionality or choice of scores may be used with\nthe described methods, provided all components are suitably validated.\n\\\\\\\\\nThe updated and enhanced version of PCAtoTree, called PCA utilities, provides\na novel means of quantifying and visualizing separation significance in PCA,\nPLS-DA and OPLS-DA scores plots. Importantly, PCA utilities enables single-step\nmethodologies for generating informative scores plots and dendrograms of\nexperimental groups in {\\it any} study utilizing PCA, PLS-DA or OPLS-DA to\nelucidate group structure in complex datasets, including metabolic\nfingerprinting and untargeted metabolic profiling. The tools are distributed\nunder version 3.0 of the GNU General Public License \\cite{gpl3} and are freely\navailable at \\url{http://bionmr.unl.edu/pca-utils.php}.\n\\end{doublespace}\n\n\\bibliographystyle{abbrv}\n\\bibliography{bworley}\n\n", "meta": {"hexsha": "a9a92982c99601222817014a5739bb77ad889ae2", "size": 14195, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "bworley-ch10.tex", "max_stars_repo_name": "geekysuavo/dissertation", "max_stars_repo_head_hexsha": "b0ae8bc72104a22b2ed01c155604c2fd69cebb31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-22T21:52:03.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-22T21:52:03.000Z", "max_issues_repo_path": "bworley-ch10.tex", "max_issues_repo_name": "geekysuavo/dissertation", "max_issues_repo_head_hexsha": "b0ae8bc72104a22b2ed01c155604c2fd69cebb31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bworley-ch10.tex", "max_forks_repo_name": "geekysuavo/dissertation", "max_forks_repo_head_hexsha": "b0ae8bc72104a22b2ed01c155604c2fd69cebb31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-07-08T14:36:58.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-08T14:36:58.000Z", "avg_line_length": 47.0033112583, "max_line_length": 79, "alphanum_fraction": 0.7897851356, "num_tokens": 3692, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7401743563075447, "lm_q1q2_score": 0.5587687226342732}}
{"text": "% !TEX root = main.tex\n% !TEX spellcheck = en-US\n\n\n\n\\section{Polynomial Commitment Schemes}\n\\label{sec:pcom}\nA polynomial commitment scheme $\\PCOM = (\\kgen, \\com, \\open, \\verify)$ consists of four\nalgorithms and allows to commit to a polynomial $\\p{f}$ and later open the evaluation in a\npoint $z$ to some value $s=\\p{f}(z)$. More formally:\n\\begin{description}\n\\item[$\\kgen(1^\\secpar, \\maxdeg)$:] The key generation algorithm takes in a security\n  parameter $\\secpar$ and a parameter $\\maxdeg$ which determines the maximal degree of the\n  committed polynomial. It outputs a structured reference string $\\srs$ (the commitment\n  key). Note that $\\srs$ implicitly determines $\\secpar$ and $\\maxdeg$.\n\\item[$\\com(\\srs, \\p{f})$:] The commitment algorithm $\\com(\\srs, \\p{f})$ takes\n  in $\\srs$ and a polynomial $\\p{f}$ with maximum degree $\\maxdeg$, and outputs\n  a commitment $c$.\n\\item[$\\open(\\srs, z, s, \\p{f})$:] The opening algorithm\n  takes as input $\\srs$, an evaluation point $z$, a\n  value $s$ and the polynomial $\\p{f}$. It outputs an opening $o$.\n\\item[$\\verify(\\srs, c, z, s, o)$:] The verification algorithm takes in $\\srs$,\n  a commitment $c$, an evaluation point $z$, a value $s$ and an opening $o$. It\n  outputs 1 if $o$ is a valid opening for $(c, z, s)$ and 0 otherwise.\n\\end{description} \n\nA secure polynomial commitment $\\PCOM$ should satisfy correctness, evaluation binding,\nopening uniqueness, hiding and knowledge-binding.  Note that since we are in the updatable\nsetting, $\\srs$ in these security definitions is the SRS that $\\advse$ finalizes using the\nupdate oracle $\\initU$ (See~\\cref{fig:upd}).\n\n\\begin{description}\n\\item[Evaluation binding:] A $\\ppt$ adversary $\\adv$ which outputs a commitment\n  $\\vec{c}$ and evaluation points $\\vec{z}$ has at most negligible chances to open\n  the commitment to two different evaluations $\\vec{s}, \\vec{s'}$. That is, let\n  $k \\in \\NN$ be the number of committed polynomials, $l \\in \\NN$ number of\n  evaluation points, $\\vec{c} \\in \\GRP^k$ be the commitments, $\\vec{z} \\in\n  \\FF_p^l$ be the arguments the polynomials are evaluated at, $\\vec{s},\\vec{s}'\n  \\in \\FF_p^k$ the evaluations, and $\\vec{o},\\vec{o}' \\in \\FF_p^l$ be the\n  commitment openings. Then for every $\\ppt$ adversary $\\adv$\n\t\\[\n\t\t\\condprob{\n\t\t\t\\begin{matrix}\n\t\t\t\t  \\verify(\\srs, \\vec{c}, \\vec{z}, \\vec{s}, \\vec{o}) = 1,  \\\\ \n\t\t\t\t  \\verify(\\srs, \\vec{c}, \\vec{z}, \\vec{s}', \\vec{o}') = 1, \\\\\n\t\t\t\t  \\vec{s} \\neq \\vec{s}'\n\t\t\t\\end{matrix}}\n\t\t\t{\n\t\t\t\\begin{matrix}\n%\t\t\t\t& \\srs \\gets \\kgen(\\secparam, \\maxdeg),\\\\\n\t\t\t\t (\\vec{c}, \\vec{z}, \\vec{s}, \\vec{s}', \\vec{o}, \\vec{o}') \\gets \\adv^{\\initU}(\\maxdeg)\n\t\t\t\\end{matrix}\n\t\t} \\leq \\negl\\,.\n\t\\]\n\n\\end{description}\n\t\n\\begin{description}\n\\item[Hiding:] We also formalize notion of $k$-hiding property of a polynomial commitment scheme. Let $\\HHH$ be a set of size $\\maxdeg + 1$ and $\\ZERO_\\HHH$ its\n  vanishing polynomial. 
\begin{description}\n\item[Evaluation binding:] A $\ppt$ adversary $\adv$ that outputs commitments\n  $\vec{c}$ and evaluation points $\vec{z}$ has at most negligible probability of opening\n  the commitments to two different evaluation vectors $\vec{s}, \vec{s}'$. That is, let\n  $k \in \NN$ be the number of committed polynomials, $l \in \NN$ the number of\n  evaluation points, $\vec{c} \in \GRP^k$ be the commitments, $\vec{z} \in\n  \FF_p^l$ be the arguments the polynomials are evaluated at, $\vec{s},\vec{s}'\n  \in \FF_p^k$ be the evaluations, and $\vec{o},\vec{o}' \in \FF_p^l$ be the\n  commitment openings. Then for every $\ppt$ adversary $\adv$\n\t\[\n\t\t\condprob{\n\t\t\t\begin{matrix}\n\t\t\t\t  \verify(\srs, \vec{c}, \vec{z}, \vec{s}, \vec{o}) = 1,  \\ \n\t\t\t\t  \verify(\srs, \vec{c}, \vec{z}, \vec{s}', \vec{o}') = 1, \\\n\t\t\t\t  \vec{s} \neq \vec{s}'\n\t\t\t\end{matrix}}\n\t\t\t{\n\t\t\t\begin{matrix}\n%\t\t\t\t& \srs \gets \kgen(\secparam, \maxdeg),\\\n\t\t\t\t (\vec{c}, \vec{z}, \vec{s}, \vec{s}', \vec{o}, \vec{o}') \gets \adv^{\initU}(\maxdeg)\n\t\t\t\end{matrix}\n\t\t} \leq \negl\,.\n\t\]\n\n\end{description}\n\t\n\begin{description}\n\item[Hiding:] We also formalize the notion of the $k$-hiding property of a polynomial commitment scheme. Let $\HHH$ be a set of size $\maxdeg + 1$ and $\ZERO_\HHH$ its\nvanishing polynomial. We say that a polynomial commitment scheme is \emph{hiding} with\n  security $\epsh(\secpar)$ if for every $\ppt$ adversary $\adv$ and every $k \in \NN$,\n  \begin{align*}\n    \condprob\n   { b' = b}{\n    (f_0, f_1, c, k, b') \gets \adv^{\initU, \oraclec}(\maxdeg, c), f_0, f_1 \in \FF^{\maxdeg}\n    [X]}\n\leq \frac{1}{2} + \epsh(\secpar),\n  \end{align*}\n  where $c = f'_b (\chi)$ for a random bit $b$ and the polynomial\n      $f'_b (X) = f_b(X) + \ZERO_\HHH (X) (a_0 + a_1 X + \ldots + a_{k - 1} X^{k -\n        1})$ with uniformly random coefficients $a_0, \ldots, a_{k-1}$,\nand where the oracle $\oraclec$, on the adversary's evaluation query $x$, adds $x$ to the initially empty set\n      $Q_x$ and, if $|Q_x| \leq k$, returns $f'_b (x)$.\n\n  \end{description}\n\n\begin{description}\n\item[Commitment of knowledge:] Intuitively, a commitment scheme is ``of knowledge'' if, whenever an\nadversary produces a (valid) commitment $c$ that it can open correctly at an evaluation point, it must\nknow the underlying polynomial $\p{f}$ behind that commitment.  Formally, for every $\ppt$ adversary $\adv$ that produces\n  a commitment $c$, an evaluation $s$ and an opening $o$ there\n  exists a $\ppt$ extractor $\ext$ such that\n\[\n  \condprob{\n    \begin{matrix}\n       \deg \p{f} \leq \maxdeg,\n       c = \com(\srs, \p{f}),\\\n       \verify(\srs, c, z, s, o) = 1\n    \end{matrix}\n        }{\n    \begin{matrix}\n      %\n     % & \srs \gets \kgen(\secparam, \maxdeg),\\\n      c \gets \adv^{\initU}(\maxdeg),\n      z \sample \FF_p \\\n      (s, o) \gets \adv(c, z), \\\n   \p{f} = \ext_\adv(\srs, c)\\\n    \end{matrix}}\n  \geq 1 - \epsk(\secpar).\n\]\nIn that case we say that $\PCOM$ is $\epsk(\secpar)$-knowledge.\n\end{description}\n\n%%% Local Variables:\n%%% mode: latex\n%%% TeX-master: \"main\"\n%%% End:\n", "meta": {"hexsha": "7c64ad33a7eb84d8fe5cf9d49932cd75b0548519", "size": 4584, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "SCN2022/polynomial-commitment-schemes.tex", "max_stars_repo_name": "clearmatics/research-plonkext", "max_stars_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SCN2022/polynomial-commitment-schemes.tex", "max_issues_repo_name": "clearmatics/research-plonkext", "max_issues_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SCN2022/polynomial-commitment-schemes.tex", "max_forks_repo_name": "clearmatics/research-plonkext", "max_forks_repo_head_hexsha": "7da7fa2b6aa17142ef8393ace6aa532f3cfd12b4", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.0550458716, "max_line_length": 160, "alphanum_fraction": 0.6380890052, "num_tokens": 1542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.754914997895581, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.558768722634273}}
{"text": "%% This document created by Scientific Word (R) Version 2.5\n%% Starting shell: mathart1\n\n\n\\documentclass[12pt,thmsb,titlepage,final,oneside,letterpaper]{article}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n% \\usepackage{sw20elba}\n\\usepackage{setspace}\n\\usepackage{hyperref}\n\\usepackage{caption}\n\n\\setcounter{MaxMatrixCols}{10}\n% \\TCIDATA{TCIstyle=article/art1.lat,elba,article}\n\n% %TCIDATA{OutputFilter=LATEX.DLL}\n% %TCIDATA{Version=5.00.0.2606}\n% %TCIDATA{<META NAME=\"SaveForMode\" CONTENT=\"1\">}\n% %TCIDATA{BibliographyScheme=Manual}\n% %TCIDATA{Created=Sun Oct 14 23:49:17 2007}\n% %TCIDATA{LastRevised=Tuesday, July 24, 2012 15:26:47}\n% %TCIDATA{<META NAME=\"GraphicsSave\" CONTENT=\"32\">}\n% %TCIDATA{Language=American English}\n\n\\onehalfspacing\n\\addtolength{\\oddsidemargin}{-.10in}\n\\addtolength{\\evensidemargin}{-.10in}\n\\addtolength{\\textwidth}{0.2in}\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{acknowledgement}{Acknowledgement}\n\\newtheorem{algorithm}{Algorithm}\n\\newtheorem{axiom}{Assumption}\n\\newtheorem{case}{Case}\n\\newtheorem{claim}{Claim}\n\\newtheorem{conclusion}{Conclusion}\n\\newtheorem{condition}{Condition}\n\\newtheorem{conjecture}{Conjecture}\n\\newtheorem{corollary}{Corollary}[section]\n\\newtheorem{criterion}{Criterion}\n\\newtheorem{definition}{Definition}\n\\newtheorem{example}{Example}\n\\newtheorem{exercise}{Exercise}\n\\newtheorem{lemma}{Lemma}[section]\n\\newtheorem{notation}{Notation}\n\\newtheorem{problem}{Problem}\n\\newtheorem{proposition}{Proposition}\n\\newtheorem{remark}{Comments}\n\\newtheorem{solution}{Solution}\n\\newtheorem{summary}{Summary}\n\\newenvironment{proof}[1][Proof]{\\noindent\\textbf{#1.} }{\\ \\rule{0.5em}{0.5em}}\n\\captionsetup{labelformat=empty, labelsep= none, justification= justified,width=.90\\textwidth,aboveskip=3pt}\n\\captionsetup{justification=raggedright, singlelinecheck=false}\n\\captionsetup{font={small,sf}}\n% \\input{tcilatex}\n\n\\begin{document}\n\n\\author{$%\n\\begin{array}{c}\n\\text{Donald W. 
K.\\ Andrews}^{\\ast } \\\\ \n\\text{Cowles Foundation} \\\\ \n\\text{Yale University} \\\\ \n\\vspace{0.2in} \\\\ \n\\text{Xu Cheng} \\\\ \n\\text{Department of Economics} \\\\ \n\\text{University of Pennsylvania}%\n\\end{array}%\n$}\n\\title{\\textbf{GMM Estimation and}\\\\\n\\textbf{Uniform Subvector Inference}\\\\\n\\textbf{with Possible Identification Failure}}\n\\date{First Draft: August, 2007\\\\\nRevised: \\today\\bigskip \\\\\n$^{\\ast }$Andrews gratefully acknowledges the research support of the\nNational Science Foundation via grant numbers SES-0751517 and SES-1058376.\nThe authors thank a co-editor, two referees, Tim Armstrong, Xiaohong Chen,\nSukjin Han, Yuichi Kitamura, Peter Phillips, Eric Renault, Frank\nSchorfheide, and Ed Vytlacil for helpful comments.}\n\\maketitle\n\n% %TCIMACRO{\\TeXButton{empty page}{\\thispagestyle{empty}}}%\n% %BeginExpansion\n% \\thispagestyle{empty}%\n% %EndExpansion\n% $\\vspace*{0.5cm}$\n\n\\begin{center}\n\\textbf{Abstract}$\\vspace{0.25cm}$\n\\end{center}\n\nThis paper determines the properties of standard generalized method of\nmoments (GMM) estimators, tests, and confidence sets (CS's) in moment\ncondition models in which some parameters are unidentified or weakly\nidentified in part of the parameter space. The asymptotic distributions of\nGMM estimators are established under a full range of drifting sequences of\ntrue parameters and distributions. The asymptotic sizes (in a uniform sense)\nof standard GMM tests and CS's are established.\n\nThe paper also establishes the correct asymptotic sizes of \\textquotedblleft\nrobust\\textquotedblright\\ GMM-based Wald, $t,$ and quasi-likelihood ratio\ntests and CS's whose critical values are designed to yield robustness to\nidentification problems.\n\nThe results of the paper are applied to a nonlinear regression model with\nendogeneity and a probit model with endogeneity and possibly weak\ninstrumental variables.$\\vspace{2in}$\n\n\\noindent \\emph{Keywords}\\textbf{:} Asymptotic size, confidence set,\ngeneralized method of moments, GMM estimator, identification, nonlinear\nmodels, test, Wald test, weak identification.\\newline\n\\newline\n\n\\noindent \\emph{JEL Classification Numbers}: C12, C15.\\newpage\n\n%TCIMACRO{\\TeXButton{pageno}{\\setcounter{page}{1}}}%\n%BeginExpansion\n\\setcounter{page}{1}%\n%EndExpansion\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Introduction\\label{Intro\nSec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}This paper gives a set of GMM\nregularity conditions that are akin to the classic conditions in Hansen\n(1982) and Pakes and Pollard (1989). But, they allow for singularity of the\nGMM estimator's variance matrix due to the lack of identification of some\nparameters in part of the parameter space.\\footnote{%\nThroughout the paper, we use the term identification/lack of identification\nin the sense of identification by a GMM or minimum distance criterion\nfunction $Q_{n}(\\theta ).$ Lack of identification by $Q_{n}(\\theta )$ means\nthat $Q_{n}(\\theta )$ is flat in some directions in part of the parameter\nspace. See Assumption GMM1(i) below for a precise definition. Lack of\nidentification by the criterion function $Q_{n}(\\theta )$ is not the same as\nlack of identification in the usual or strict sense of the term, although\nthere is often a close relationship.} This paper is a sequel to Andrews and\nCheng (2012a) (AC1). 
The latter paper provides results for general extremum\nestimators, $t$ tests, and QLR tests in the presence of possible weak\nidentification under high-level assumptions. Here we provide more primitive\nconditions for GMM-based statistics by verifying the high-level assumptions\nof AC1. This paper provides results for Wald tests and CS's that apply not\nonly to GMM estimators, but also to other extremum estimators covered by\nAC1. This paper also provides some results for minimum distance (MD)\nestimators, tests, and CS's. Lastly, the paper analyzes two specific models\nthat are not considered in AC1.\n\nUnder the conditions given, the asymptotic distributions of GMM estimators\nand Wald and quasi-likelihood ratio (QLR) test statistics are established.\nThe asymptotic sizes of standard GMM tests and confidence sets (CS's) are\nestablished. In many cases, their asymptotic sizes are not correct. We show\nthat Wald and QLR statistics combined with \\textquotedblleft identification\nrobust\\textquotedblright\\ critical values have correct asymptotic sizes (in\na uniform sense).\n\nIn contrast to standard GMM results in the literature, the results given\nhere cover a full range of drifting sequences of true parameters and\ndistributions. Such results are needed to establish the (uniform) asymptotic\nsize properties of tests and CS's and to give good approximations to the\nfinite-sample properties of estimators, tests, and CS's under weak\nidentification. Non-smooth sample moment conditions are allowed, as in Pakes\nand Pollard (1989) and Andrews (2002).\n\nWe consider moment condition models where the parameter $\\theta $ is of the\nform $\\theta =(\\beta ,\\zeta ,\\pi ),$ where $\\pi $ is identified if and only\nif $\\beta \\neq 0,$ $\\zeta $ is not related to the identification of $\\pi ,$\nand $\\psi =(\\beta ,\\zeta )$ is always identified. The parameters $\\beta ,$ $%\n\\zeta ,$ and $\\pi $ may be scalars or vectors. For example, this framework\napplies to the nonlinear regression model $Y_{i}=\\beta \\cdot h\\left(\nX_{1,i},\\pi \\right) +X_{2,i}^{\\prime }\\zeta +U_{i}$ with endogenous\nvariables $X_{1,i}$ or $X_{2,i}$ and instruments (IV's) $Z_{i}.$ Here lack\nof identification of $\\pi $ when $\\beta =0$ occurs because of nonlinearity.\nThis framework also applies to the probit model with endogeneity: $%\ny_{i}^{\\ast }=Y_{i}\\pi +X_{i}^{\\prime }\\zeta _{1}^{\\ast }+U_{i}^{\\ast },$\nwhere one observes $y_{i}=1(y_{i}^{\\ast }>0),$ the endogenous variable $%\nY_{i},$ and the exogenous regressor vector $X_{i}$ and the reduced form for $%\nY_{i}$ is $Y_{i}=Z_{i}^{\\prime }\\beta +X_{i}^{\\prime }\\zeta _{2}+V_{i}.$ In\nthis case, lack of identification of $\\pi $ occurs when $\\beta =0$ because\nthe IV's are irrelevant.\n\nWe determine the asymptotic properties of GMM estimators and tests under\ndrifting sequences of true parameters $\\theta _{n}=(\\beta _{n},\\zeta\n_{n},\\pi _{n})$ for $n\\geq 1,$ where $n$ indexes the sample size. 
The\nbehavior of GMM estimators and tests depends on the magnitude of $||\beta\n_{n}||.$ The asymptotic behavior of these statistics varies across three\ncategories of sequences $\{\beta _{n}:n\geq 1\}:$ Category I(a) $\beta\n_{n}=0 $ $\forall n\geq 1,$ $\pi $ is unidentified; Category I(b) $\beta\n_{n}\neq 0$ and $n^{1/2}\beta _{n}\rightarrow b\in R^{d_{\beta }},$ $\pi $\nis weakly identified; Category II $\beta _{n}\rightarrow 0$ and $%\nn^{1/2}||\beta _{n}||\rightarrow \infty ,$ $\pi $ is semi-strongly\nidentified; and Category III $\beta _{n}\rightarrow \beta _{0}\neq 0,$ $\pi $\nis strongly identified.\n\nFor Category I sequences, GMM estimators, tests, and CS's are shown to have\nnon-standard asymptotic properties. For Category II and III\ sequences, they\nare shown to have standard asymptotic properties such as normal and\nchi-squared distributions. However, for Category II sequences, the rates of\nconvergence of estimators of $\pi $ are slower than $n^{1/2}$ and tests\nconcerning $\pi $ do not have power against $n^{-1/2}$-local alternatives.\n\nNumerical results for the nonlinear regression model with endogeneity show\nthat the GMM estimators of both $\beta $ and $\pi $ have highly non-normal\nasymptotic and finite-sample ($n=500$) distributions when $\pi $ is\nunidentified or weakly identified. The asymptotics provide excellent\napproximations to the finite-sample distributions. Nominal $95\%$ standard $%\nt $ confidence intervals (CI's) for $\beta $ are found to have asymptotic\nsize equal to $68\%$ and finite-sample size of $72\%.$ In contrast, nominal $%\n95\%$ standard QLR CI's for $\beta $ have asymptotic and finite-sample size\nof $93\%.$ There are no asymptotic size distortions for the standard $t$ and\nQLR CI's for $\pi $ and the finite-sample sizes are close to the asymptotic\nvalues. However, the CI's for $\pi $ are far from being similar\nasymptotically or in finite samples. The robust CI's for $\beta $ have\ncorrect asymptotic size. Their finite-sample sizes are $91.5\%$ for $t$ CI's\nand $95\%$ for QLR CI's for nominal $95\%$ CI's.\n\nTo conclude, the numerical results show that (i) weak identification can\nhave substantial effects on the properties of estimators and standard tests\nand CS's; (ii) the asymptotic results of the paper provide useful\napproximations to the finite-sample distributions of estimators and test\nstatistics under weak identification and identification failure; and (iii)\nthe robust tests and CS's improve the size properties of tests and CS's in\nfinite samples noticeably compared to standard tests and CS's.\n\nLike the results in Hansen (1982), Pakes and Pollard (1989), and Andrews\n(2002), the present paper applies when the GMM criterion function has a\nstochastic quadratic approximation as a function of $\theta .$ This rules\nout a number of models of interest in which identification failure may\nappear, including regime switching models, mixture models, abrupt transition\nstructural change models, and abrupt transition threshold autoregressive\nmodels.\footnote{%\nFor references concerning results for these models, see AC1.}\n\nNow, we discuss the literature related to this paper. The following papers\nare companions to this one: AC1, Andrews and Cheng (2012b) (AC1-SM), and\nAndrews and Cheng (2011a) (AC2). These papers provide related, complementary\nresults to the present paper. AC1 provides results under high-level\nconditions and analyzes the ARMA(1, 1) model in detail. 
AC1-SM provides\nproofs for AC1 and related results. AC2 provides primitive conditions and\nresults for estimators and tests based on log likelihood criterion\nfunctions. It provides applications to a smooth transition threshold\nautoregressive (STAR) model and a nonlinear binary choice model.\n\nCheng (2008) establishes results for a nonlinear regression model with\nmultiple sources of weak identification, whereas the present paper only\nconsiders a single source. However, the present paper applies to a much\nbroader range of models.\n\nTests of $H_{0}:\\beta =0$ versus $H_{1}:\\beta \\neq 0$ are tests in which a\nnuisance parameter $\\pi $ only appears under the alternative. Such tests\nhave been considered in the literature since Davies (1977). The results of\nthis paper cover tests of this sort, as well as tests for a whole range of\nlinear and nonlinear hypotheses that involve $(\\beta ,\\zeta ,\\pi )$ and\ncorresponding CS's.\n\nThe weak instrument (IV) literature is closely related to this paper. This\nis true especially of Stock and Wright (2000), Kleibergen (2005), and\nGuggenberger, Kleibergen, Mavroeidis, and Chen (2013). In comparison to\nStock and Wright (2000), the present paper differs because it focuses on\ncriterion functions that are indexed by a parameter $\\beta $ that determines\nthe strength of identification. It also differs in that it considers\nsubvector analysis. In contrast to Kleibergen (2005) and Guggenberger,\nKleibergen, Mavroeidis, and Chen (2013), the present paper does not focus on\nLagrange multiplier statistics. Rather, it investigates the behavior of\nstandard estimators and tests, as well as robust tests based on Wald and QLR\nstatistics. Other related papers from the weak IV literature include Nelson\nand Startz (1990), Dufour (1997), Staiger and Stock (1997), Kleibergen\n(2002), and Moreira (2003).\n\nAntoine and Renault (2009, 2010) and Caner (2010) consider GMM estimation\nwith IV's that lie in the semi-strong category, using our terminology.\nNelson and Startz (2007) and Ma and Nelson (2008) analyze models like those\nconsidered in this paper. They do not provide asymptotic results or robust\ntests and CS's of the sort given in this paper. Andrews and Mikusheva (2011)\nand Qu (2011) consider Lagrange multiplier tests in a maximum likelihood\ncontext where identification may fail, with emphasis on dynamic stochastic\ngeneral equilibrium models. Andrews and Mikusheva (2011) consider subvector\ninference based on Anderson-Rubin-type minimum distance statistics.\n\nIn likelihood scenarios, Lee and Chesher (1986) consider Lagrange multiplier\ntests and Rotnitzky, Cox, Bottai, and Robbins (2000) consider maximum\nlikelihood estimators and likelihood ratio tests, when the model is\nidentified at all parameter values, but the information matrix is singular\nat some parameter values, such as those in the null hypothesis. This is a\ndifferent situation than considered here for two reasons. First, the present\npaper considers situations where identification fails at some parameter\nvalues in the parameter space (and this causes the GMM variance matrix to be\nsingular at these parameter values). Second, this paper considers GMM-based\nstatistics rather than likelihood-based statistics.\n\nSargan (1983), Phillips (1989), and Choi and Phillips (1992) establish\nfinite-sample and asymptotic results for linear simultaneous equations\nmodels when some parameters are not identified. 
Shi and Phillips (2011)\nprovide results for a nonlinear regression model with nonstationary\nregressors in which identification may fail.\n\nThe remainder of the paper is organized as follows. Section \\ref{Estimator &\nCrit Fn Sec} defines the GMM estimators, criterion functions, tests, and\nconfidence sets considered in the paper and specifies the drifting sequences\nof distributions that are considered. It also introduces the two examples\nthat are considered in the paper. Section \\ref{Assumptions Sec} states the\nassumptions employed. Section \\ref{Estimation Results Sec} provides the\nasymptotic results for the GMM estimators. Section \\ref{Wald Tests Sec}\nestablishes the asymptotic distributions of Wald statistics under the null\nand under alternatives, determines the asymptotic size of standard Wald\nCS's, and introduces robust Wald tests and CS's, whose asymptotic size is\nequal to their nominal size. Section \\ref{QLR Tests Sec} considers QLR CS's\nbased on the GMM criterion function. Section \\ref{Numerical Results Sec}\nprovides numerical results for the nonlinear regression model with\nendogeneity.\n\nAndrews and Cheng (2011b) provides five supplemental appendices to this\npaper. Supplemental Appendix A verifies the assumptions of the paper for the\nprobit model with endogeneity. Supplemental Appendix B provides proofs of\nthe GMM estimation results given in Section \\ref{Estimation Results Sec}. It\nalso provides some results for minimum distance estimators. Supplemental\nAppendix C provides proofs of the Wald test and CS results given in Section %\n\\ref{Wald Tests Sec}. Supplemental Appendix D provides some results used in\nthe verification of the assumptions for the two examples. Supplemental\nAppendix E provides some additional numerical results for the nonlinear\nregression model with endogeneity.\n\nAll limits below are taken \\textquotedblleft as $n\\rightarrow \\infty .$%\n\\textquotedblright\\ We let $\\lambda _{\\min }(A)$ and $\\lambda _{\\max }(A)$\ndenote the smallest and largest eigenvalues, respectively, of a matrix $A.$\nAll vectors are column vectors. For notational simplicity, we often write $%\n(a,b)$ instead of $(a^{\\prime },b^{\\prime })^{\\prime }$ for vectors $a$ and $%\nb.$ Also, for a function $f(c)$ with $c=(a,b)$ $(=(a^{\\prime },b^{\\prime\n})^{\\prime }),$ we often write $f(a,b)$ instead of $f(c).$ Let $0_{d}$\ndenote a $d$-vector of zeros. Because it arises frequently, we let $0$\ndenote a $d_{\\beta }$-vector of zeros, where $d_{\\beta }$ is the dimension\nof a parameter $\\beta .$\n\nWe let $X_{n}(\\pi )=o_{p\\pi }(1)$ mean that $\\sup_{\\pi \\in \\Pi }\\allowbreak\n\\left\\vert \\left\\vert X_{n}(\\pi )\\right\\vert \\right\\vert =o_{p}(1),$ where $%\n\\left\\vert \\left\\vert \\cdot \\right\\vert \\right\\vert $ denotes the Euclidean\nnorm. 
We let $\\Rightarrow $ denote weak convergence of a sequence of\nstochastic processes indexed by $\\pi \\in \\Pi $ for some space $\\Pi .$ We\nemploy the uniform metric $d$ on the space $\\mathcal{E}_{v}$ of $R^{v}$%\n-valued functions on $\\Pi .$ See AC1-SM for more details regarding this.\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Estimator, Criterion\nFunction, and Examples\\label{Estimator & Crit Fn Sec}}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}GMM Estimators \\label%\n{Extremum Estrs Subsec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}The GMM sample criterion function is%\n\\begin{equation}\nQ_{n}(\\theta )=\\overline{g}_{n}(\\theta )^{\\prime }\\mathcal{W}_{n}(\\theta )%\n\\overline{g}_{n}(\\theta )/2,  \\label{GMM CF}\n\\end{equation}%\nwhere $\\overline{g}_{n}(\\theta ):\\Theta \\rightarrow R^{k}$ is a vector of\nsample moment conditions and $\\mathcal{W}_{n}(\\theta ):\\Theta \\rightarrow\nR^{k\\times k}$ is a symmetric random weight matrix.\n\nThe paper considers inference when $\\theta $ is not identified (by the\ncriterion function $Q_{n}(\\theta )$) at some points in the parameter space.\nLack of identification occurs when $Q_{n}(\\theta )$ is flat with respect to\n(wrt) some sub-vector of $\\theta .$ To model this identification problem, $%\n\\theta $ is partitioned into three sub-vectors:%\n\\begin{equation}\n\\theta =(\\beta ,\\zeta ,\\pi )=(\\psi ,\\pi ),\\text{ where }\\psi =(\\beta ,\\zeta\n).\n\\end{equation}%\nThe parameter $\\pi \\in R^{d_{\\pi }}$ is unidentified when $\\beta =0$ $(\\in\nR^{d_{\\beta }}).$ The parameter $\\psi =(\\beta ,\\zeta )\\in R^{d_{\\psi }}$ is\nalways identified. The parameter $\\zeta \\in R^{d_{\\zeta }}$ does not effect\nthe identification of $\\pi .$ These conditions allow for a broad range of\ncases, including cases where reparametrization is used to transform a model\ninto the framework considered here.\n\nThe true distribution of the observations $\\{W_{i}:i\\geq 1\\}$ is denoted $%\nF_{\\gamma }$ for some parameter $\\gamma \\in \\Gamma .$ We let $P_{\\gamma }$\nand $E_{\\gamma }$ denote probability and expectation under $F_{\\gamma }.$\nThe parameter space $\\Gamma $ for the true parameter,\\ referred to as the\n\\textquotedblleft true parameter space,\\textquotedblright\\ is compact and is\nof the form:%\n\\begin{equation}\n\\Gamma =\\{\\gamma =(\\theta ,\\phi ):\\theta \\in \\Theta ^{\\ast },\\phi \\in \\Phi\n^{\\ast }(\\theta )\\},  \\label{True Par Space Gamma}\n\\end{equation}%\nwhere $\\Theta ^{\\ast }$ is a compact subset of $R^{d_{\\theta }}$ and $\\Phi\n^{\\ast }(\\theta )\\subset \\Phi ^{\\ast }$ $\\forall \\theta \\in \\Theta ^{\\ast }$\nfor some compact metric space $\\Phi ^{\\ast }$ with a metric that induces\nweak convergence of the bivariate distributions $(W_{i},W_{i+m})$ for all $%\ni,m\\geq 1.$\\footnote{%\nThat is, the metric satisfies: if $\\gamma \\rightarrow \\gamma _{0},$ then $%\n(W_{i},W_{i+m})$ under $\\gamma $ converges in distribution to $%\n(W_{i},W_{i+m})$ under $\\gamma _{0}$ for all $i,m\\geq 1.$ For example, in an\ni.i.d. 
situation, the metric on $\\Phi ^{\\ast }$ can be the uniform metric on\nthe distribution of $W_{i}.$ In a stationary time series context, it can be\nthe supremum over $m\\geq 1$ of the uniform metric on the space of\ndistributions of the vectors $(W_{i},W_{i+m}).$ Note that $\\Gamma $ is a\nmetric space with metric $d_{\\Gamma }(\\gamma _{1},\\gamma _{2})=||\\theta\n_{1}-\\theta _{2}||+d_{\\Phi ^{\\ast }}(\\phi _{1},\\phi _{2}),$ where $\\gamma\n_{j}=(\\theta _{j},\\phi _{j})\\in \\Gamma $ for $j=1,2$ and $d_{\\Phi ^{\\ast }}$\nis the metric on $\\Phi ^{\\ast }.$}\\textbf{\\ }In the case of a moment\ncondition model, the parameter $\\phi $ indexes the part of the distribution\nof the observations that is not determined by the moment conditions, which\ntypically is infinite dimensional.\n\nBy definition, the GMM estimator $\\widehat{\\theta }_{n}$ (approximately)\nminimizes $Q_{n}(\\theta )$ over an \\textquotedblleft optimization parameter\nspace\\textquotedblright\\ $\\Theta $:\\footnote{%\nThe $o(n^{-1})$ term in (\\ref{Defn of Thetahat}), and in (\\ref{Defn psi})\nand (\\ref{Defn pihat}) below, is a fixed sequence of constants that does not\ndepend on the true parameter $\\gamma \\in \\Gamma $ and does not depend on $%\n\\pi $ in (\\ref{Defn psi}).}%\n\\begin{equation}\n\\widehat{\\theta }_{n}\\in \\Theta \\text{ and }Q_{n}(\\widehat{\\theta }%\n_{n})=\\inf_{\\theta \\in \\Theta }Q_{n}(\\theta )+o(n^{-1}).\n\\label{Defn of Thetahat}\n\\end{equation}%\nWe assume that the interior of $\\Theta $ includes the true parameter space $%\n\\Theta ^{\\ast }$ (see Assumption B1 below). This ensures that the asymptotic\ndistribution of $\\widehat{\\theta }_{n}$ is not affected by boundary\nrestrictions for any sequence of true parameters in $\\Theta ^{\\ast }.$ The\nfocus of this paper is not on the effects of boundary restrictions.\n\nWithout loss of generality, the optimization parameter space $\\Theta $ can\nbe written as%\n\\begin{eqnarray}\n\\Theta \\hspace{-0.08in} &=&\\hspace{-0.08in}\\{\\theta =(\\psi ,\\pi ):\\psi \\in\n\\Psi (\\pi ),\\pi \\in \\Pi \\},\\text{ where}  \\notag \\\\\n\\Pi \\hspace{-0.08in} &=&\\hspace{-0.08in}\\{\\pi :(\\psi ,\\pi )\\in \\Theta \\text{\nfor some }\\psi \\}\\text{ and}  \\notag \\\\\n\\Psi (\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\{\\psi :(\\psi ,\\pi )\\in\n\\Theta \\}\\text{ for }\\pi \\in \\Pi .  \\label{Defn of Par Sp Pi and Psi}\n\\end{eqnarray}%\nWe allow $\\Psi (\\pi )$ to depend on $\\pi $ and, hence, $\\Theta $ need not be\na product space between $\\psi $ and $\\pi .$\n\nThe main focus of this paper is on GMM estimators, but the results also\napply to minimum distance (MD) estimators. However, the assumptions employed\nwith MD estimators are not as primitive. The MD sample criterion function is\ndefined exactly as the GMM criterion function is defined in (\\ref{GMM CF})\nexcept that $\\overline{g}_{n}(\\theta )$ is not a vector of moment\nconditions, but rather, is the difference between an unrestricted estimator $%\n\\widehat{\\xi }_{n}$ of a parameter $\\xi _{0}$ and a vector of restrictions $%\nh(\\theta )$ on $\\xi _{0}.$ That is, \n\\begin{equation}\n\\overline{g}_{n}(\\theta )=\\widehat{\\xi }_{n}-h(\\theta ),\\text{ where }\\xi\n_{0}=h(\\theta _{0}).  
\label{MD defn}\n\end{equation}%\nSee Schorfheide (2011) for a discussion of MD estimation of dynamic\nstochastic general equilibrium models and weak identification problems in\nthese models.\n\n\subsection{\hspace{-0.2in}\textbf{.}\hspace{0.18in}Example 1: Nonlinear\nRegression with Endogeneity}\n\n\hspace{0.25in}The first example is a nonlinear regression model with\nendogenous regressors estimated using instrumental variables (IV's). The\nIV's are assumed to be strong. Potential identification failure in this\nmodel arises due to the nonlinearity in the regression function. Let $%\nh(x,\pi )\in R$ be a function of $x$ that is known up to the\nfinite-dimensional parameter $\pi \in R^{d_{\pi }}.$ The model is%\n\begin{equation}\nY_{i}=\beta \cdot h\left( X_{1,i},\pi \right) +X_{2,i}^{\prime }\zeta +U_{i}%\n\text{ and }EU_{i}Z_{i}=0\n\end{equation}%\nfor $i=1,...,n,$ where $X_{i}=(X_{1,i},X_{2,i})\in R^{d_{X}},$ $X_{2,i}\in\nR^{d_{X_{2}}},$ $Z_{i}\in R^{k},$ and $k\geq d_{X_{2}}+d_{\pi }+1.$ The\nregressors $X_{i}$ may be endogenous or exogenous. The function $h(x,\pi )$\nis assumed to be twice continuously differentiable wrt $\pi .$ Let $h_{\pi\n}(x,\pi )$ and $h_{\pi \pi }(x,\pi )$ denote the first- and second-order\npartial derivatives of $h(x,\pi )$ wrt $\pi .$ For example, Areosa, McAleer,\nand Medeiros (2011) consider GMM estimation of smooth transition models with\nendogeneity (which are nonlinear regression models). In their case $h(x,\pi\n) $ involves the logistic function. They provide an empirical application of\nthis model to inflation rate targeting in Brazil.\n\nThe GMM sample criterion function is%\n\begin{eqnarray}\nQ_{n}(\theta )\hspace{-0.08in} &=&\hspace{-0.08in}\overline{g}_{n}(\theta\n)^{\prime }\mathcal{W}_{n}\overline{g}_{n}(\theta )/2,\text{ where }%\n\overline{g}_{n}(\theta )=n^{-1}\sum_{i=1}^{n}U_{i}(\theta )Z_{i},\text{ } \n\notag \\\nU_{i}\left( \theta \right) \hspace{-0.08in} &=&\hspace{-0.08in}Y_{i}-\beta\nh(X_{1,i},\pi )-X_{2,i}^{\prime }\zeta ,\text{ and }\mathcal{W}_{n}=\left(\nn^{-1}\sum_{i=1}^{n}Z_{i}Z_{i}^{\prime }\right) ^{-1}.  \label{gmm CF}\n\end{eqnarray}\n\nFor simplicity, we use the optimal weight matrix under homoskedasticity.\nAlternatively, one can employ the optimal weight matrix under\nheteroskedasticity using a preliminary estimator $\overline{\theta }_{n}.$\nProvided $\mathcal{W}_{n}(\theta )$ and $\overline{\theta }_{n}$ satisfy the\nAssumptions in Lemma \ref{Lemma Two step weight matrix} below, all\nresults hold for this two-step estimator as well. For example, the\npreliminary estimator $\overline{\theta }_{n}$ can be the estimator obtained\nunder homoskedasticity, which is shown below to satisfy the Assumptions in\nLemma \ref{Lemma Two step weight matrix}.\n\nWhen $\beta =0,$ $U_{i}(\theta )$ does not depend on $\pi .$ In consequence, \n$Q_{n}(\theta )$ does not depend on $\pi $ when $\beta =0.$\n\n
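To make the criterion concrete, the following minimal numerical sketch\n(purely illustrative; the logistic choice of $h$ echoes the smooth transition\nexample mentioned above) evaluates $Q_{n}(\theta )$ in (\ref{gmm CF}) and\nconfirms that it is flat in $\pi $ when $\beta =0$:\n\begin{verbatim}\nimport numpy as np\n\ndef h(x, pi):  # e.g. a logistic transition function\n    return 1.0 / (1.0 + np.exp(-(x - pi)))\n\ndef Q_n(theta, Y, X1, X2, Z):\n    beta, zeta, pi = theta\n    U = Y - beta * h(X1, pi) - X2 @ zeta   # residuals U_i(theta)\n    g = Z.T @ U / len(Y)                   # sample moments\n    W = np.linalg.inv(Z.T @ Z / len(Y))    # homoskedastic weight matrix\n    return 0.5 * g @ W @ g\n\nrng = np.random.default_rng(0)\nn = 100\nZ = rng.normal(size=(n, 4))\nX1 = Z[:, 0] + rng.normal(size=n)\nX2 = rng.normal(size=(n, 2))\nY = rng.normal(size=n)\nz0 = np.zeros(2)\nassert np.isclose(Q_n((0.0, z0, -3.0), Y, X1, X2, Z),\n                  Q_n((0.0, z0, 3.0), Y, X1, X2, Z))\n\end{verbatim}\n\n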
with distribution $\\phi \\in \\Phi ^{\\ast },$\nwhere $\\Phi ^{\\ast }$ is a compact metric space with a metric $d_{\\Phi }$\nthat induces weak convergence of $(X_{i},Z_{i},U_{i}).$ In this example, the\nparameter of interest is $\\theta =(\\beta ,\\zeta ,\\pi )$ and the nuisance\nparameter is $\\phi ,$ which is infinite dimensional.\n\nThe true parameter space for $\\theta $ is%\n\\begin{equation}\n\\Theta ^{\\ast }=\\mathcal{B}^{\\ast }\\times \\mathcal{Z}^{\\ast }\\times \\Pi\n^{\\ast },\\text{ where }\\mathcal{B}^{\\ast }=[-b_{1}^{\\ast },b_{2}^{\\ast\n}]\\subset R,  \\label{gmm theta space}\n\\end{equation}%\n$b_{1}^{\\ast }\\geq 0,$ $b_{2}^{\\ast }\\geq 0,$ $b_{1}^{\\ast }$ and $%\nb_{2}^{\\ast }$ are not both equal to $0,$ $\\mathcal{Z}^{\\ast }\\subset\nR^{d_{\\zeta }}$ is compact, and $\\Pi ^{\\ast }\\subset R^{d_{\\pi }}$ is\ncompact.\n\nSuppose $||h_{\\pi \\pi }(x,\\pi _{1})-h_{\\pi \\pi }(x,\\pi _{2})||\\leq M_{\\pi\n\\pi }(x)\\delta $ $\\forall \\pi _{1},\\pi _{2}\\in \\Pi $ with $||\\pi _{1}-\\pi\n_{2}||\\leq \\delta $ for some non-stochastic function $M_{\\pi \\pi }(x):%\n\\mathcal{X}\\rightarrow R^{+}$ that satisfies the conditions in (\\ref{gmm phi\nspace}) below, where $\\delta $ is some positive constant and $\\mathcal{X}$\ndenotes the union of the supports of $X_{1,i}$ over all $\\phi \\in \\Phi\n^{\\ast }.$ Define%\n\\begin{eqnarray}\nd_{i}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}(h\\left( X_{1,i},\\pi \\right)\n,X_{2,i},h_{\\pi }\\left( X_{1,i},\\pi \\right) )\\in R^{d_{X_{2}}+d_{\\pi }+1}%\n\\text{ and}  \\notag \\\\\nd_{\\psi ,i}^{\\ast }(\\pi _{1},\\pi _{2})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n(h\\left( X_{1,i},\\pi _{1}\\right) ,h\\left( X_{1,i},\\pi _{2}\\right)\n,X_{2,i})\\in R^{d_{X_{2}}+2}.\n\\end{eqnarray}%\nLet $E_{\\phi }$ denote expectation under $\\phi .$ For any $\\theta ^{\\ast\n}\\in \\Theta ^{\\ast },$ the true parameter space for $\\phi $ is \n\\begin{eqnarray}\n&&\\Phi ^{\\ast }(\\theta ^{\\ast })\\overset{}{=}\\{\\phi \\overset{}{\\in }\\Phi\n^{\\ast }:E_{\\phi }U_{i}Z_{i}\\overset{}{=}0,\\text{ }E_{\\phi\n}(U_{i}^{2}|X_{i},Z_{i})=\\sigma ^{2}(X_{i},Z_{i})>0\\text{ a.s.},\\text{ }%\nE_{\\phi }|U_{i}|^{4+\\varepsilon }  \\notag \\\\\n&&\\overset{}{\\leq }C,\\text{ }E_{\\phi }\\sup_{\\pi \\in \\Pi }\\left( ||h\\left(\nX_{1,i},\\pi \\right) ||^{2+\\varepsilon }+||h_{\\pi }\\left( X_{1,i},\\pi \\right)\n||^{2+\\varepsilon }+||h_{\\pi \\pi }\\left( X_{1,i},\\pi \\right)\n||^{1+\\varepsilon }\\right) \\overset{}{\\leq }C,  \\notag \\\\\n&&E_{\\phi }(\\left\\Vert X_{2,i}\\right\\Vert ^{2+\\varepsilon\n}+||Z_{i}||^{2+\\varepsilon }+M_{\\pi \\pi }(X_{1,i}))\\overset{}{\\leq }C,\\text{ \n}\\lambda _{\\min }(E_{\\phi }Z_{i}Z_{i}^{\\prime })\\geq \\varepsilon ,  \\notag \\\\\n&&E_{\\phi }Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{1},\\pi _{2})^{\\prime }\\in\nR^{k\\times (d_{X_{2}}+2)}\\text{ has full column rank }\\forall \\pi _{1},\\pi\n_{2}\\overset{}{\\in }\\Pi \\text{ with }\\pi _{1}\\overset{}{\\neq }\\pi _{2}, \n\\notag \\\\\n&&E_{\\phi }Z_{i}d_{i}(\\pi )\\in R^{k\\times (d_{X_{2}}+d_{\\pi }+1)}\\text{ has\nfull column rank }\\forall \\pi \\overset{}{\\in }\\Pi \\},  \\label{gmm phi space}\n\\end{eqnarray}%\nfor some constants $C<\\infty $ and $\\varepsilon >0.$ Note that in this\nexample $\\Phi ^{\\ast }(\\theta ^{\\ast })$ does not depend on $\\theta ^{\\ast\n}. 
$\n\n\\subsection{\\hspace{-0.2in}\\textbf{.}\\hspace{0.18in}Example 2: Probit Model\nwith Endogeneity and\\newline\nPossibly Weak Instruments\\label{Simul Probit Ex Sec}}\n\n\\hspace{0.25in}The second example is a probit model with endogeneity and\nIV's that may be weak or irrelevant, which causes identification issues.\nConsider the following two-equation model with endogeneity of $Y_{i}$ in the\nfirst equation:%\n\\begin{eqnarray}\ny_{i}^{\\ast }\\hspace{-0.08in} &=&\\hspace{-0.08in}Y_{i}\\pi +X_{i}^{\\prime\n}\\zeta _{1}^{\\ast }+U_{i}^{\\ast }\\text{ and}  \\notag \\\\\nY_{i}\\hspace{-0.08in} &=&\\hspace{-0.08in}Z_{i}^{\\prime }\\beta +X_{i}^{\\prime\n}\\zeta _{2}+V_{i},  \\label{Probit-Model}\n\\end{eqnarray}%\nwhere $y_{i}^{\\ast },Y_{i},U_{i}^{\\ast },V_{i}\\in R,$ $X_{i}\\in R^{d_{X}},$ $%\nZ_{i}\\in R^{d_{Z}},$ and $\\{(X_{i},Z_{i},U_{i},V_{i}):i=1,...,n\\}$ are\ni.i.d. The outcome variable $y_{i}^{\\ast }$ of the first equation is not\nobserved. Only the binary indicator $y_{i}=1(y_{i}^{\\ast }>0)$ is observed,\nalong with $Y_{i},$ $X_{i},$ and $Z_{i}.$ That is, we observe $%\n\\{W_{i}=(y_{i},Y_{i},X_{i},Z_{i}):$ $i=1,...,n\\}.$ Similar models with\nbinary, truncated, or censored endogenous variables are considered in\nAmemiya (1974), Heckman (1978), Nelson and Olson (1978), Lee (1981), Smith\nand Blundell (1986), Rivers and Vuong (1988), among others.\n\nThe reduced-form equations of the model are%\n\\begin{eqnarray}\ny_{i}^{\\ast }\\hspace{-0.08in} &=&\\hspace{-0.08in}Z_{i}^{\\prime }\\beta \\pi\n+X_{i}^{\\prime }\\zeta _{1}+U_{i}\\text{ and}  \\notag \\\\\nY_{i}\\hspace{-0.08in} &=&\\hspace{-0.08in}Z_{i}^{\\prime }\\beta +X_{i}^{\\prime\n}\\zeta _{2}+V_{i},\\text{ where}  \\notag \\\\\n\\zeta _{1}\\hspace{-0.08in} &=&\\hspace{-0.08in}\\zeta _{1}^{\\ast }+\\pi \\zeta\n_{2}\\text{ and }U_{i}=U_{i}^{\\ast }+\\pi V_{i}.  \\label{Probit-red eqns}\n\\end{eqnarray}%\nThe variables $(X_{i},Z_{i})$ are independent of the errors $(U_{i},V_{i})$\nand the errors $(U_{i},V_{i})$ have a joint normal distribution with mean\nzero and covariance matrix $\\Sigma _{uv},$ where%\n\\begin{equation}\n\\Sigma _{uv}=\\left( \n\\begin{array}{cc}\n1 & \\rho \\sigma _{v} \\\\ \n\\rho \\sigma _{v} & \\sigma _{v}^{2}%\n\\end{array}%\n\\right) .\n\\end{equation}%\nThe parameter of interest is $\\theta =(\\beta ,\\zeta ,\\pi ),$ where $\\zeta\n=(\\zeta _{1},\\zeta _{2}).$\n\nIn this model, weak identification of $\\pi $ occurs when $\\beta $ is close\nto $0.$ We analyze a GMM estimator of $\\theta ,$ and corresponding tests\nconcerning functions of $\\theta ,$ in the presence of weak identification or\nlack of identification.\n\nLet $L(\\cdot )$ denote the distribution function of the standard normal\ndistribution. Let $L^{\\prime }(x)$ and $L^{\\prime \\prime }(x)$ denote the\nfirst- and second-order derivatives of $L(x)$ wrt $x.$ We use the\nabbreviations \n\\begin{equation}\nL_{i}(\\theta )=L(Z_{i}^{\\prime }\\beta \\pi +X_{i}^{\\prime }\\zeta _{1}),\\text{ \n}L_{i}^{\\prime }(\\theta )=L^{\\prime }(Z_{i}^{\\prime }\\beta \\pi\n+X_{i}^{\\prime }\\zeta _{1}),\\text{ and }L_{i}^{\\prime \\prime }(\\theta\n)=L^{\\prime \\prime }(Z_{i}^{\\prime }\\beta \\pi +X_{i}^{\\prime }\\zeta _{1}).\n\\end{equation}\n\nNow we specify the moment conditions for the GMM estimator. 
The\nlog-likelihood function based on the first reduced-form equation in (\ref%\n{Probit-red eqns}) and $y_{i}=1(y_{i}^{\ast }>0)$ is \n\begin{equation}\n\ell (\theta )=\sum_{i=1}^{n}\left[ y_{i}\log \left( L_{i}(\theta )\right)\n+(1-y_{i})\log \left( 1-L_{i}(\theta )\right) \right] .\n\end{equation}%\nLet $a=\beta \pi $ and $a_{0}=\beta _{0}\pi _{0}.$ The log-likelihood\nfunction $\ell (\theta )$ depends on $\theta $ only through $a$ and $\zeta\n_{1}.$ The expectation of the score function wrt $(a,\zeta _{1})$ yields the\nfirst set of moment conditions%\n\begin{eqnarray}\n&&E_{\gamma _{0}}w_{1,i}(\theta _{0})(y_{i}-L_{i}(\theta _{0}))\overline{Z}%\n_{i}\overset{}{=}0,\text{ where }  \notag \\\n&&w_{1,i}(\theta )\overset{}{=}\frac{L_{i}^{\prime }(\theta )}{L_{i}(\theta\n)(1-L_{i}(\theta ))}\text{ and }\overline{Z}_{i}\overset{}{=}%\n(X_{i},Z_{i})\in R^{d_{X}+d_{Z}}.  \label{Probit-Momt1}\n\end{eqnarray}%\nThe second reduced-form equation in (\ref{Probit-red eqns}) implies \n\begin{equation}\nE_{\gamma _{0}}V_{i}(\theta _{0})\overline{Z}_{i}=0,\text{ where }%\nV_{i}(\theta )=Y_{i}-Z_{i}^{\prime }\beta -X_{i}^{\prime }\zeta _{2}.\n\label{Probit-Momt2}\n\end{equation}\n\nWe consider a two-step GMM estimator of $\theta $ based on the moment\nconditions in (\ref{Probit-Momt1}) and (\ref{Probit-Momt2}). The resulting\nestimator has not appeared in the literature previously, but it is close to\nestimators in the papers referenced above, e.g., see Rivers and Vuong\n(1988). The GMM sample criterion function is%\n\begin{eqnarray}\nQ_{n}(\theta )\hspace{-0.08in} &=&\hspace{-0.08in}\overline{g}_{n}(\theta\n)^{\prime }\mathcal{W}_{n}\overline{g}_{n}(\theta )/2,\text{ where}\n\label{Probit-SmplCrit} \\\n\overline{g}_{n}(\theta )\hspace{-0.08in} &=&\hspace{-0.08in}%\nn^{-1}\sum_{i=1}^{n}e_{i}(\theta )\otimes \overline{Z}_{i}\in\nR^{2(d_{X}+d_{Z})}\text{ and }e_{i}(\theta )=\binom{w_{1,i}(\theta\n)(y_{i}-L_{i}(\theta ))}{Y_{i}-Z_{i}^{\prime }\beta -X_{i}^{\prime }\zeta\n_{2}}.  \notag\n\end{eqnarray}%\nIn the first step, the weight matrix $\mathcal{W}_{n}$ is the identity\nmatrix, yielding an estimator $\overline{\theta }_{n}.$ In the second step, $%\n\mathcal{W}_{n}$ is the optimal weight matrix that takes the form%\n\begin{equation}\n\mathcal{W}_{n}=\mathcal{W}_{n}(\overline{\theta }_{n}),\text{ where }%\n\mathcal{W}_{n}(\theta )=n^{-1}\sum_{i=1}^{n}\left( e_{i}(\theta\n)e_{i}(\theta )^{\prime }\right) \otimes (\overline{Z}_{i}\overline{Z}%\n_{i}^{\prime }).  \label{Probit-Weight}\n\end{equation}\n\n
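As a purely illustrative numerical sketch (invented names and data, not part\nof the paper's formal analysis), the stacked moment vector $\overline{g}_{n}(\theta )$\nin (\ref{Probit-SmplCrit}) can be computed as follows:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\ndef g_n(theta, y, Y, X, Z):\n    beta, zeta1, zeta2, pi = theta\n    Zbar = np.column_stack([X, Z])    # Zbar_i in R^(dX + dZ)\n    idx = Z @ beta * pi + X @ zeta1   # index Z'beta*pi + X'zeta1\n    L, Lp = norm.cdf(idx), norm.pdf(idx)\n    w1 = Lp / (L * (1.0 - L))\n    e = np.column_stack([w1 * (y - L), Y - Z @ beta - X @ zeta2])\n    # average of kron(e_i, Zbar_i) over i, a vector in R^(2(dX + dZ))\n    return np.einsum('ij,ik->jk', e, Zbar).ravel() / len(y)\n\end{verbatim}\n\n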
The optimization and true parameter spaces $\Theta $ and $\Theta ^{\ast }$\nare $\Theta =\mathbf{\times }_{j=1}^{k}[-b_{L,j},b_{H,j}]\times \mathcal{Z}%\n\times \Pi $ and $\Theta ^{\ast }=\mathbf{\times }_{j=1}^{k}[-b_{L,j}^{\ast\n},b_{H,j}^{\ast }]\times \mathcal{Z}^{\ast }\times \Pi ^{\ast },$ where $%\nb_{L,j},b_{H,j},b_{L,j}^{\ast },b_{H,j}^{\ast }\in R,$ $0\leq b_{L,j}^{\ast\n}<b_{L,j},$ $0\leq b_{H,j}^{\ast }<b_{H,j},$ $b_{L,j}^{\ast },b_{H,j}^{\ast\n} $ are not both $0,$ for $j=1,...,k,$ $\mathcal{Z}^{\ast }\subset int(%\n\mathcal{Z})\subset R^{2d_{X}},$ $\Pi ^{\ast }\subset int(\Pi )\subset R,$ $%\n\mathcal{Z}^{\ast },\mathcal{Z},\Pi ^{\ast },$ and $\Pi $ are compact.%\n\footnote{%\nNote that $\mathcal{Z}$ and $\mathcal{Z}^{\ast }$ are not related to the\nsupport of $Z_{i}.$ Rather, they are the optimization and true parameter\nspaces for $\zeta ,$ which has dimension $2d_{X}.$}\n\nDefine $\overline{w}_{1,i}=\sup_{\theta \in \Theta }|w_{1,i}(\theta )|$ and $%\n\overline{w}_{2,i}=\sup_{\theta \in \Theta }|w_{2,i}(\theta )|,$ where $%\nw_{2,i}(\theta )=L_{i}^{\prime \prime }(\theta )/$\linebreak $(L_{i}(\theta\n)(1-L_{i}(\theta ))).$\n\nThe nuisance parameter $\phi $ is defined by $\phi =(\rho ,\sigma _{v},F)\in\n\Phi ^{\ast },$ where $F$ is the distribution of $(X_{i},Z_{i})$ and $\Phi\n^{\ast }$ is a compact metric space with a metric $d_{\Phi }$ that induces\nweak convergence of $(X_{i},Z_{i}).$ We use $P_{\phi }$ and $E_{\phi }$ to\ndenote probability and expectation under $\phi ,$ respectively, for random\nquantities that depend only on $(X_{i},Z_{i}).$ For any $\theta ^{\ast }\in\n\Theta ^{\ast },$ the true parameter space for $\phi $ is%\n\begin{eqnarray}\n\Phi (\theta ^{\ast })\hspace{-0.08in} &=&\hspace{-0.08in}\{\phi =(\rho\n,\sigma _{v},F)\in \Phi :|\rho |<1,\sigma _{v}\geq \varepsilon ,\text{ }%\nP_{\phi }(\overline{Z}_{i}^{\prime }c=0)<1\text{ for any }c\neq 0,  \notag \\\n&&E_{\phi }(||\overline{Z}_{i}||^{4+\varepsilon }+\overline{w}%\n_{1,i}^{4+\varepsilon }+\overline{w}_{2,i}^{2+\varepsilon })\overset{}{\leq }%\nC\},  \label{Simul Probit Phi Space}\n\end{eqnarray}%\nfor some $C<\infty $ and $\varepsilon >0.$ Note that in this example, $\Phi\n(\theta ^{\ast })$ does not depend on $\theta ^{\ast }.$\n\nThe verification of the assumptions of this paper for this example is given\nin Supplemental Appendix A.\n\n\subsection{\hspace{-0.23in}\textbf{.}\hspace{0.18in}Confidence Sets and\nTests}\n\n\hspace{0.25in}We return now to the general framework. We are interested in\nthe effect of lack of identification or weak identification on the GMM\nestimator $\widehat{\theta }_{n}.$ Also, we are interested in its effects on\nCS's for various functions $r(\theta )$ of $\theta $ and on tests of null\nhypotheses of the form $H_{0}:r(\theta )=v.$\n\nA CS is obtained by inverting a test. A nominal $1-\alpha $ CS for $r(\theta\n)$ is \n\begin{equation}\nCS_{n}=\{v:\mathcal{T}_{n}(v)\leq c_{n,1-\alpha }(v)\},  \label{invert CI}\n\end{equation}%\nwhere $\mathcal{T}_{n}\left( v\right) $ is a test statistic, such as a $t,$\nWald, or QLR statistic, and $c_{n,1-\alpha }\left( v\right) $ is a critical\nvalue for testing $H_{0}:r(\theta )=v.$ The critical values considered in\nthis paper may depend on the null value $v$ of $r(\theta )$ as well as on\nthe data. 
The coverage probability of a CS for $r(\theta )$ is%\n\begin{equation}\nP_{\gamma }(r(\theta )\in CS_{n})=P_{\gamma }(\mathcal{T}_{n}(r(\theta\n))\leq c_{n,1-\alpha }(r(\theta ))),  \label{coverage prob}\n\end{equation}%\nwhere $P_{\gamma }\left( \cdot \right) $ denotes probability when $\gamma $\nis the true value.\n\nWe are interested in the finite-sample size of the CS, which is the smallest\nfinite-sample coverage probability of the CS over the parameter space. It is\napproximated by the asymptotic size, which is defined to be%\n\begin{equation}\nAsySz=\underset{n\rightarrow \infty }{\lim \inf \text{ }}\underset{\gamma\n\in \Gamma }{\inf }\text{ }P_{\gamma }(r(\theta )\in CS_{n}).\n\label{AsySz Confid Set defn}\n\end{equation}\n\nFor a test, we are interested in its null rejection probabilities and in\nparticular its maximum null rejection probability, which is the size of the\ntest. A test's asymptotic size is an approximation to the latter. The null\nrejection probabilities and asymptotic size of a test are given by%\n\begin{eqnarray}\n&&P_{\gamma }(\mathcal{T}_{n}(v)>c_{n,1-\alpha }(v))\text{ for }\gamma \n\overset{}{=}(\theta ,\phi )\overset{}{\in }\Gamma \text{ with }r(\theta )%\n\underset{}{=}v\text{ and}  \notag \\\n&&AsySz\overset{}{=}\underset{n\rightarrow \infty }{\lim \sup }\text{ }%\n\underset{\gamma \in \Gamma :r(\theta )=v}{\sup }\text{ }P_{\gamma }(%\n\mathcal{T}_{n}(v)>c_{n,1-\alpha }(v)).  \label{AsySz Test defn}\n\end{eqnarray}\n\n\subsection{\hspace{-0.23in}\textbf{.}\hspace{0.18in}Drifting Sequences of\nDistributions\label{Drifting Seqs of Distns}}\n\n\hspace{0.25in}To determine the asymptotic size of a CS or test, we need to\nderive the asymptotic distribution of the test statistic $\mathcal{T}%\n_{n}(v_{n})$ under sequences of true parameters $\gamma _{n}=(\theta\n_{n},\phi _{n})$ and $v_{n}=r(\theta _{n})$ that may depend on $n.$ The\nreason is that the value of $\gamma $ at which the finite-sample size of a\nCS or test is attained may vary with the sample size. Similarly, to\ninvestigate the finite-sample behavior of the GMM estimator under weak\nidentification, we need to consider its asymptotic behavior under drifting\nsequences of true distributions---as in Stock and Wright (2000).\n\nResults in Andrews and Guggenberger (2009, 2010) and Andrews, Cheng, and\nGuggenberger (2009) show that the asymptotic sizes of CS's and tests are\ndetermined by certain drifting sequences of distributions. 
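As a toy numerical illustration (not from the paper) of the three categories\nof sequences described in the Introduction, the following snippet shows how\n$n^{1/2}\beta _{n}$ behaves in each case:\n\begin{verbatim}\nimport numpy as np\nn = np.array([1e2, 1e4, 1e6])\nb = 0.5\ncases = {\n    'I(b): beta_n = b / sqrt(n)': b / np.sqrt(n),\n    'II:   beta_n = n**(-1/4)  ': n ** -0.25,\n    'III:  beta_n = b          ': np.full(3, b),\n}\nfor name, beta_n in cases.items():\n    print(name, np.sqrt(n) * beta_n)  # -> b; -> infinity; -> infinity\n\end{verbatim}\n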
In this paper,\nthe following sequences $\\{\\gamma _{n}\\}$ are key: \n\\begin{eqnarray}\n\\Gamma \\left( \\gamma _{0}\\right) \\hspace{-0.08in} &=&\\hspace{-0.08in}\\left\\{\n\\{\\gamma _{n}\\in \\Gamma :n\\geq 1\\}:\\gamma _{n}\\rightarrow \\gamma _{0}\\in\n\\Gamma \\right\\} ,  \\label{Seq's of Gamma Par's} \\\\\n\\Gamma \\left( \\gamma _{0},0,b\\right) \\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\left\\{ \\{\\gamma _{n}\\}\\in \\Gamma \\left( \\gamma _{0}\\right) :\\beta _{0}=0%\n\\text{ and }n^{1/2}\\beta _{n}\\rightarrow b\\in (R\\cup \\left\\{ \\pm \\infty\n\\right\\} )^{d_{\\beta }}\\right\\} ,\\text{ and}  \\notag \\\\\n\\Gamma \\left( \\gamma _{0},\\infty ,\\omega _{0}\\right) \\hspace{-0.08in} &=&%\n\\hspace{-0.08in}\\left\\{ \\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0}):n^{1/2}||\\beta _{n}||\\rightarrow \\infty \\text{ and }\\beta _{n}/||\\beta\n_{n}||\\rightarrow \\omega _{0}\\in R^{d_{\\beta }}\\right\\} ,  \\notag\n\\end{eqnarray}%\nwhere $\\gamma _{0}=(\\beta _{0},\\zeta _{0},\\pi _{0},\\phi _{0})$ and $\\gamma\n_{n}=(\\beta _{n},\\zeta _{n},\\pi _{n},\\phi _{n}).$\n\nThe sequences in $\\Gamma \\left( \\gamma _{0},0,b\\right) $ are in Categories I\nand II and are sequences for which $\\{\\beta _{n}\\}$ is \\emph{close} to $0$: $%\n\\beta _{n}\\rightarrow 0.$ When $||b||<\\infty ,$ $\\{\\beta _{n}\\}$ is within $%\nO(n^{-1/2})$ of $0$ and the sequence is in Category I. The sequences in $%\n\\Gamma \\left( \\gamma _{0},\\infty ,\\omega _{0}\\right) $ are in Categories II\nand III and are more \\emph{distant }from $\\beta =0$: $n^{1/2}||\\beta\n_{n}||\\rightarrow \\infty .$ The sets $\\Gamma (\\gamma _{0},0,b)$ and $\\Gamma\n(\\gamma _{0},\\infty ,\\omega _{0})$ are \\emph{not} disjoint. Both contain\nsequences in Category II.\n\nThroughout the paper we use the terminology: \\textquotedblleft under $%\n\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0})$\\textquotedblright\\ means\n\\textquotedblleft when the true parameters are $\\{\\gamma _{n}\\}\\in \\Gamma\n(\\gamma _{0})$ for any $\\gamma _{0}\\in \\Gamma ;$\\textquotedblright\\\n\\textquotedblleft under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$%\n\\textquotedblright\\ means \\textquotedblleft when the true parameters are $%\n\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ for any $\\gamma _{0}\\in \\Gamma $\nwith $\\beta _{0}=0$ and any $b\\in (R\\cup \\left\\{ \\pm \\infty \\right\\}\n)^{d_{\\beta }};$\\textquotedblright\\ and \\textquotedblleft under $\\{\\gamma\n_{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0})$\\textquotedblright\\ means\n\\textquotedblleft when the true parameters are $\\{\\gamma _{n}\\}\\in \\Gamma\n(\\gamma _{0},\\infty ,\\omega _{0})$ for any $\\gamma _{0}\\in \\Gamma $ and any $%\n\\omega _{0}\\in R^{d_{\\beta }}$ with $||\\omega _{0}||=1.$\\textquotedblright\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Assumptions \\label%\n{Assumptions Sec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}This section provides relatively\nprimitive sufficient conditions for GMM estimators.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Assumption GMM1\\label%\n{Section GMM}}\n\n\\hspace{0.25in}The first assumption specifies the basic identification\nproblem. It also provides conditions that are used to determine the\nprobability limit of the GMM estimator, when it exists, under all categories\nof drifting sequences of distributions.\\medskip\n\n\\noindent \\textbf{Assumption GMM1. 
}(i) If $\\beta =0,$ $\\overline{g}%\n_{n}(\\theta )$ and $\\mathcal{W}_{n}(\\theta )$ do not depend on $\\pi ,$ $%\n\\forall \\theta \\in \\Theta ,$ $\\forall n\\geq 1,$ for any true parameter $%\n\\gamma ^{\\ast }\\in \\Gamma .$\n\n\\noindent (ii) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0}),$ $%\n\\sup_{\\theta \\in \\Theta }||\\overline{g}_{n}(\\theta )-g_{0}(\\theta ;\\gamma\n_{0})||\\rightarrow _{p}0$ and $\\sup_{\\theta \\in \\Theta }||\\mathcal{W}%\n_{n}(\\theta )-\\mathcal{W}(\\theta ;\\gamma _{0})||\\allowbreak \\rightarrow\n_{p}0 $ for some non-random functions $g_{0}(\\theta ;\\gamma _{0}):\\Theta\n\\times \\Gamma \\rightarrow R^{k}$ and $\\mathcal{W}(\\theta ;\\gamma\n_{0}):\\Theta \\times \\Gamma \\rightarrow R^{k\\times k}.$\n\n\\noindent (iii) When $\\beta _{0}=0,$ $g_{0}(\\psi ,\\pi ;\\gamma _{0})=0$ if\nand only if $\\psi =\\psi _{0},$ $\\forall \\pi \\in \\Pi ,$ $\\forall \\gamma\n_{0}\\in \\Gamma .$\n\n\\noindent (iv) When $\\beta _{0}\\neq 0,$ $g_{0}(\\theta ;\\gamma _{0})=0$ if\nand only if $\\theta =\\theta _{0},$ $\\forall \\gamma _{0}\\in \\Gamma .$\n\n\\noindent (v) $g_{0}(\\theta ;\\gamma _{0})$ is continuously differentiable in \n$\\theta $ on $\\Theta ,$ with its partial derivatives wrt $\\theta $ and $\\psi \n$ denoted by $g_{\\theta }(\\theta ;\\gamma _{0})\\in R^{k\\times d_{\\theta }}$\nand $g_{\\psi }(\\theta ;\\gamma _{0})\\in R^{k\\times d_{\\psi }},$ respectively.\n\n\\noindent (vi) $\\mathcal{W}(\\theta ;\\gamma _{0})$ is continuous in $\\theta $\non $\\Theta $ $\\forall \\gamma _{0}\\in \\Gamma .$\n\n\\noindent (vii) $0<\\lambda _{\\min }(\\mathcal{W}(\\psi _{0},\\pi ;\\gamma\n_{0}))\\leq \\lambda _{\\max }(\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0}))<\\infty\n, $ $\\forall \\pi \\in \\Pi ,$ $\\forall \\gamma _{0}\\in \\Gamma .$\n\n\\noindent (viii) $\\lambda _{\\min }(g_{\\psi }(\\psi _{0},\\pi ;\\gamma\n_{0})^{\\prime }\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0})g_{\\psi }(\\psi\n_{0},\\pi ;\\gamma _{0}))>0,$ $\\forall \\pi \\in \\Pi ,$ $\\forall \\gamma _{0}\\in\n\\Gamma $ with $\\beta _{0}=0.$\n\n\\noindent (ix) $\\Psi (\\pi )$ is compact $\\forall \\pi \\in \\Pi ,$ and $\\Pi $\nand $\\Theta $ are compact.\n\n\\noindent (x) $\\forall \\varepsilon >0,$ $\\exists \\delta >0$ such that $%\nd_{H}\\left( \\Psi \\left( \\pi _{1}\\right) ,\\Psi \\left( \\pi _{2}\\right) \\right)\n<\\varepsilon $ $\\forall \\pi _{1},\\pi _{2}\\in \\Pi $ with $\\left\\Vert \\pi\n_{1}-\\pi _{2}\\right\\Vert <\\delta ,$ where $d_{H}\\left( \\cdot \\right) $ is\nthe Hausdorff metric.\\medskip\n\nAssumption GMM1(i) is the key condition that concerns the lack of\nidentification (by the moment functions) when $\\beta =0.$ Assumptions\nGMM1(ii)-(x) are mostly fairly standard GMM regularity conditions, but with\nsome adjustments due to the lack of identification of $\\pi $ when $\\beta =0,$\ne.g., see Assumption GMM1(iii). Note that Assumption GMM1(viii) involves the\nderivative matrix of $g_{0}(\\theta ;\\gamma _{0})$ with respect to $\\psi $\nonly, not $\\theta =(\\psi ,\\pi ).$ In consequence, this assumption is not\nrestrictive.\n\nThe weight matrix $\\mathcal{W}_{n}(\\theta )$ depends on $\\theta $ only when\na continuous updating GMM estimator is considered. 
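For instance, when $\\overline{g}_{n}(\\theta )=n^{-1}\\sum_{i=1}^{n}g(W_{i},\\theta )$ as in Assumption GMM3(i) below, one standard continuous-updating choice of weight matrix (used here only for illustration) is%\n\\begin{equation*}\n\\mathcal{W}_{n}(\\theta )=\\left( n^{-1}\\sum_{i=1}^{n}g(W_{i},\\theta )g(W_{i},\\theta )^{\\prime }-\\overline{g}_{n}(\\theta )\\overline{g}_{n}(\\theta )^{\\prime }\\right) ^{-1},\n\\end{equation*}%\nwhich depends on $\\theta $ through the moment functions themselves.\n\n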
For a two-step estimator, \n$\\mathcal{W}_{n}(\\theta )$ depends on a preliminary estimator $\\overline{%\n\\theta }_{n},$ but does not depend on $\\theta .$ Let $\\mathcal{W}_{n}(%\n\\overline{\\theta }_{n})$ be the weight matrix for a two-step estimator.\n(This is a slight abuse of notation because in (\\ref{GMM CF}) $\\mathcal{W}%\n_{n}(\\theta )$ and $\\overline{g}_{n}(\\theta )$ are indexed by the same $%\n\\theta ,$ whereas here they are different.)\n\nFor the weight matrix of a two-step estimator to satisfy Assumption\nGMM1(ii), we need \n\\begin{equation}\n\\mathcal{W}_{n}(\\overline{\\theta }_{n})\\rightarrow _{p}\\mathcal{W}(\\theta\n_{0};\\gamma _{0})  \\label{conv in prob}\n\\end{equation}%\nfor some non-random matrix $\\mathcal{W}(\\theta _{0};\\gamma _{0})$ under $%\n\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0}).$ This is not an innocuous\nassumption in the weak identification scenario because the preliminary\nestimator $\\overline{\\theta }_{n}$ may be inconsistent. Lemma \\ref{Lemma Two\nstep weight matrix} below shows that (\\ref{conv in prob}) holds despite the\ninconsistency of $\\overline{\\pi }_{n}$ that occurs under $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$ where $\\overline{\\theta }%\n_{n}=(\\overline{\\psi }_{n},\\overline{\\pi }_{n}).$\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma Two step weight matrix}Suppose $%\n\\overline{\\theta }_{n}=(\\overline{\\psi }_{n},\\overline{\\pi }_{n})$ is an\nestimator of $\\theta $ such that \\emph{(i)} $\\overline{\\theta }%\n_{n}\\rightarrow _{p}\\theta _{0}$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0}),$ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}\\neq 0,$ \\emph{(ii)}\n$\\overline{\\psi }_{n}\\rightarrow _{p}\\psi _{0}$ under $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0}),$ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0,$ \n\\emph{(iii)} $\\mathcal{W}_{n}(\\theta )$ satisfies Assumptions \\emph{GMM1(i),}\n\\emph{GMM1(ii),} and \\emph{GMM1(vi),} and \\emph{(iv)} $\\Pi $ is compact.\nThen, $\\mathcal{W}_{n}(\\overline{\\theta }_{n})\\rightarrow _{p}\\mathcal{W}%\n(\\theta _{0};\\gamma _{0})$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0})$ $%\n\\forall \\gamma _{0}\\in \\Gamma .$\n\\end{lemma}\n\n\\noindent \\textbf{Comments.} \\textbf{1.} Lemma \\ref{Lemma Two step weight\nmatrix} allows for inconsistency of $\\overline{\\pi }_{n},$ i.e., $\\overline{%\n\\pi }_{n}-\\pi _{n}\\neq o_{p}(1),$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0})$ with $\\beta _{0}=0.$ Inconsistency occurs under $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$ see Theorem \\ref{Thm dist'n\nof estimator b=finite}(a) below.\n\n\\textbf{2.} Typically, the preliminary estimator $\\overline{\\theta }_{n}$ is\nobtained by minimizing $Q_{n}(\\theta )$ in (\\ref{GMM CF}) with a weight\nmatrix $\\mathcal{W}_{n}(\\theta )$ that does not depend on $\\theta $ or any\nestimator of $\\theta .$ In such cases, the properties of $\\overline{\\theta }%\n_{n}$ assumed in Lemma \\ref{Lemma Two step weight matrix} hold provided\nAssumption GMM1 holds with the specified weight matrix.\\footnote{%\nThis follows from the combination of Lemma \\ref{Lemma GMM A and B3} in\nAppendix A and Lemma 3.1 of AC1.}\\medskip\n\n\\noindent \\textbf{Example 1 (cont.). 
}For this example, the key quantities\nin Assumption GMM1 are \n\\begin{eqnarray}\ng_{0}(\\theta ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\phi\n_{0}}(\\beta _{0}h(X_{1,i},\\pi _{0})-\\beta h(X_{1,i},\\pi )+X_{2,i}^{\\prime\n}(\\zeta _{0}-\\zeta ))Z_{i},  \\notag \\\\\n\\mathcal{W}(\\theta ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\mathcal{%\nW}(\\gamma _{0})=\\left( E_{\\phi _{0}}Z_{i}Z_{i}^{\\prime }\\right) ^{-1}, \n\\notag \\\\\ng_{\\psi }(\\theta ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}-E_{\\phi\n_{0}}Z_{i}d_{\\psi ,i}(\\pi )^{\\prime },\\text{ and }g_{\\theta }(\\theta ;\\gamma\n_{0})=-E_{\\phi _{0}}Z_{i}d_{\\theta ,i}(\\pi )^{\\prime },\\text{ where}  \\notag\n\\\\\nd_{\\psi ,i}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}(h(X_{1,i},\\pi\n),X_{2,i})\\in R^{d_{X_{2}}+1}\\text{ and }  \\notag \\\\\nd_{\\theta ,i}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}(h(X_{1,i},\\pi\n),X_{2,i},\\beta h_{\\pi }(X_{1,i},\\pi ))\\in R^{d_{X_{2}}+d_{\\pi }+1}.\n\\label{form of g_0 and weight}\n\\end{eqnarray}\n\nAssumption GMM1(i) holds by the form of $\\overline{g}_{n}(\\theta )$ and $%\n\\mathcal{W}_{n}$ in (\\ref{gmm CF}) and the fact that $U_{i}(\\theta )$ does\nnot depend on $\\pi $ when $\\beta =0.$ Assumption GMM1(ii) holds by the\nuniform LLN in Lemma \\ref{Lemma uniform convergence} in Supplemental\nAppendix D under the conditions in (\\ref{gmm phi space}).\n\nTo verify Assumption GMM1(iii), we write%\n\\begin{equation}\ng_{0}(\\psi ,\\pi ;\\gamma _{0})-g_{0}(\\psi _{0},\\pi ;\\gamma _{0})=E_{\\phi\n_{0}}(-\\beta h(X_{1,i},\\pi )+X_{2,i}^{\\prime }(\\zeta _{0}-\\zeta ))Z_{i}= \n\\left[ E_{\\phi _{0}}Z_{i}d_{\\psi ,i}(\\pi )^{\\prime }\\right] \\Delta ,\n\\label{GMM1(iii)}\n\\end{equation}%\nwhere $\\Delta =(-\\beta ,\\zeta _{0}-\\zeta )\\in R^{d_{X_{2}}+1}.$ We need to\nshow that when $\\beta _{0}=0$ the quantity in (\\ref{GMM1(iii)}) does not\nequal zero $\\forall \\psi \\neq \\psi _{0}$ and $\\forall \\pi \\in \\Pi .$ This\nholds because $d_{\\psi ,i}(\\pi )$ is a sub-vector of $d_{\\psi ,i}^{\\ast\n}(\\pi _{1},\\pi _{2})$ and $E_{\\phi }Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{1},\\pi\n_{2})^{\\prime }$ has full column rank $\\forall \\pi _{1},\\pi _{2}\\in \\Pi $\nwith $\\pi _{1}\\neq \\pi _{2}$ by (\\ref{gmm phi space}).\n\nTo verify Assumption GMM1(iv), we write%\n\\begin{eqnarray}\ng_{0}(\\theta ;\\gamma _{0})-g_{0}(\\theta _{0};\\gamma _{0})\\hspace{-0.08in} &=&%\n\\hspace{-0.08in}E_{\\phi _{0}}(\\beta _{0}h(X_{1,i},\\pi _{0})-\\beta\nh(X_{1,i},\\pi )+X_{2,i}^{\\prime }(\\zeta _{0}-\\zeta ))Z_{i}  \\notag \\\\\n&=&\\hspace{-0.08in}\\left[ E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{0},\\pi\n)^{\\prime }\\right] c,  \\label{GMM1(iv)}\n\\end{eqnarray}%\nwhere $c=(\\beta _{0},-\\beta ,\\zeta _{0}-\\zeta )\\in R^{d_{X_{2}}+2}.$ We need\nto show that when $\\beta _{0}\\neq 0$ the quantity in (\\ref{GMM1(iv)}) does\nnot equal zero when $\\theta \\neq \\theta _{0}.$ This holds when $\\pi \\neq \\pi\n_{0}$ because $E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{0},\\pi )^{\\prime\n} $ has full column rank for $\\pi \\neq \\pi _{0}$ by (\\ref{gmm phi space}).\nWhen $\\pi =\\pi _{0},$%\n\\begin{equation}\ng_{0}(\\theta ;\\gamma _{0})-g_{0}(\\theta _{0};\\gamma _{0})=g_{0}(\\psi ,\\pi\n_{0};\\gamma _{0})-g_{0}(\\psi _{0},\\pi _{0};\\gamma _{0})=\\left[ E_{\\phi\n_{0}}Z_{i}d_{\\psi ,i}(\\pi _{0})^{\\prime }\\right] \\Delta _{1},\n\\label{GMM1(iv)-2}\n\\end{equation}%\nwhere $\\Delta _{1}=(\\beta _{0}-\\beta ,\\zeta _{0}-\\zeta )\\in R^{d_{X_{2}}+1}.$\nThe 
quantity in (\\ref{GMM1(iv)-2}) does not equal zero for $\\psi \\neq \\psi _{0}$ because $E_{\\phi _{0}}Z_{i}d_{\\psi ,i}(\\pi _{0})^{\\prime }$ has full column rank. This completes the verification of Assumption GMM1(iv).\n\nAssumption GMM1(v) holds by the assumption that $h(x,\\pi )$ is twice continuously differentiable wrt $\\pi $ and the moment conditions in (\\ref{gmm phi space}). Assumption GMM1(vi) holds automatically because $\\mathcal{W}(\\theta ;\\gamma _{0})=(E_{\\phi _{0}}Z_{i}Z_{i}^{\\prime })^{-1}$ does not depend on $\\theta .$ Assumption GMM1(vii) holds because $E_{\\phi _{0}}Z_{i}Z_{i}^{\\prime }\\in R^{k\\times k}$ is positive definite $\\forall \\gamma _{0}\\in \\Gamma .$ Assumption GMM1(viii) holds because $\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0})=(E_{\\phi _{0}}Z_{i}Z_{i}^{\\prime })^{-1}$ is positive definite and $g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})$ has full column rank by the conditions in (\\ref{gmm phi space}). Assumption GMM1(ix) holds because $\\Theta =\\mathcal{B}\\times \\mathcal{Z}\\times \\Pi ,$ and $\\mathcal{B},$ $\\mathcal{Z},$ $\\Pi $, and $\\Psi =\\mathcal{B}\\times \\mathcal{Z}$ are all compact. Assumption GMM1(x) holds automatically because $\\Psi $ does not depend on $\\pi .$ $\\square $\n\nFor brevity, the verifications of Assumptions GMM1 and GMM2-GMM5 below for the probit model with endogeneity are given in Section \\ref{Example 2 Verif of As.s Sec}.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Assumption GMM2}\n\n\\hspace{0.25in}The next assumption, Assumption GMM2, is used when verifying that the GMM criterion function satisfies a quadratic approximation with respect to $\\psi $ when $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ and with respect to $\\theta $ when $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}).$ In the former case, the expansion is around the value%\n\\begin{equation}\n\\psi _{0,n}=(0,\\zeta _{n}),\n\\end{equation}%\nrather than around the true value $\\psi _{n}=(\\beta _{n},\\zeta _{n}).$ The reason for expanding around $\\psi _{0,n}$ is that the first term in the expansion of $Q_{n}(\\psi ,\\pi )$ does not depend on $\\pi $ when $\\psi =\\psi _{0,n}$ by Assumption GMM1(i).\n\nUnder $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0}),$ define the centered sample moment conditions by%\n\\begin{equation}\n\\widetilde{g}_{n}\\left( \\theta ;\\gamma _{0}\\right) =\\overline{g}_{n}\\left( \\theta \\right) -g_{0}\\left( \\theta ;\\gamma _{0}\\right) .\n\\end{equation}\n\nWe define a matrix $B(\\beta )$ that is used to normalize the (generalized) first-derivative matrix of the sample moments $\\overline{g}_{n}(\\theta )$ so that it is full-rank asymptotically. Let $B(\\beta )$ be the $d_{\\theta }\\times d_{\\theta }$ diagonal matrix defined by%\n\\begin{equation}\nB(\\beta )=Diag\\{1_{d_{\\psi }}^{\\prime },\\iota (\\beta )1_{d_{\\pi }}^{\\prime }\\},\n\\end{equation}%\nwhere $\\iota (\\beta )=\\beta $ if $\\beta $ is a scalar and $\\iota (\\beta )=||\\beta ||$ if $\\beta $ is a vector.\\footnote{The matrix $B(\\beta )$ is defined differently in the scalar and vector $\\beta $ cases because in the scalar case the use of $\\beta ,$ rather than $||\\beta ||,$ produces noticeably simpler (but equivalent) formulae, but in the vector case $||\\beta ||$ is required.}\\medskip\n\n
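In Example 1, for instance, the last $d_{\\pi }$ columns of $g_{\\theta }(\\theta ;\\gamma _{0})=-E_{\\phi _{0}}Z_{i}d_{\\theta ,i}(\\pi )^{\\prime }$ carry the factor $\\beta ,$ and post-multiplication by $B^{-1}(\\beta )$ cancels it:%\n\\begin{equation*}\ng_{\\theta }(\\theta ;\\gamma _{0})B^{-1}(\\beta )=-E_{\\phi _{0}}Z_{i}(h(X_{1,i},\\pi ),X_{2,i}^{\\prime },h_{\\pi }(X_{1,i},\\pi )^{\\prime }),\n\\end{equation*}%\nso the normalized derivative matrix does not degenerate as $\\beta \\rightarrow 0.$\n\n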
\\noindent \\textbf{Assumption GMM2. }(i) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ \\newline\n$\\sup_{\\psi \\in \\Psi (\\pi ):||\\psi -\\psi _{0,n}||\\leq \\delta _{n}}||\\widetilde{g}_{n}(\\psi ,\\pi ;\\gamma _{0})-\\widetilde{g}_{n}(\\psi _{0,n},\\pi ;\\gamma _{0})||/(n^{-1/2}+||\\psi -\\psi _{0,n}||)=o_{p\\pi }(1)$ for all constants $\\delta _{n}\\rightarrow 0.$\n\n\\noindent (ii) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}),$ $\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }||\\widetilde{g}_{n}(\\theta ;\\gamma _{0})-\\widetilde{g}_{n}(\\theta _{n};\\gamma _{0})||/(n^{-1/2}+||B(\\beta _{n})(\\theta -\\theta _{n})||)=o_{p}(1)$ for all constants $\\delta _{n}\\rightarrow 0,$ where $\\Theta _{n}\\left( \\delta _{n}\\right) =\\{\\theta \\in \\Theta :\\left\\Vert \\psi -\\psi _{n}\\right\\Vert \\leq \\delta _{n}\\left\\Vert \\beta _{n}\\right\\Vert $ and $\\left\\Vert \\pi -\\pi _{n}\\right\\Vert \\leq \\delta _{n}\\}.$\\medskip\n\nWhen $\\overline{g}_{n}\\left( \\theta \\right) $ is continuously differentiable in $\\theta ,$ Assumption GMM2 is easy to verify. In this case, Assumption GMM2$^{\\ast }$ below is a set of sufficient conditions for Assumption GMM2.\n\nAssumption GMM2 allows for non-smooth sample moment conditions. It is analogous to Assumption GMM2(d) of Andrews (2002), which in turn is shown to be equivalent to condition (iii) of Theorem 3.3 of Pakes and Pollard (1989). In contrast to these conditions in the literature, Assumption GMM2 applies under drifting sequences of true parameters and provides conditions that allow for weak identification. Nevertheless, Assumption GMM2 can be verified by methods used in Pakes and Pollard (1989) and Andrews (2002).\\medskip\n\n\\noindent \\textbf{Assumption GMM2}$^{\\ast }$\\textbf{.} (i) $\\overline{g}_{n}(\\theta )$ is continuously differentiable in $\\theta $ on $\\Theta $ $\\forall n\\geq 1.$\n\n\\noindent (ii) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ $\\sup_{\\theta \\in \\Theta :||\\psi -\\psi _{0,n}||\\leq \\delta _{n}}\\left\\Vert (\\partial /\\partial \\psi ^{\\prime })\\overline{g}_{n}(\\theta )-g_{\\psi }(\\theta ;\\gamma _{0})\\right\\Vert =o_{p}(1)$ for all constants $\\delta _{n}\\rightarrow 0.$\n\n\\noindent (iii) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}),$ $\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }\\left\\Vert \\left( (\\partial /\\partial \\theta ^{\\prime })\\overline{g}_{n}(\\theta )-g_{\\theta }(\\theta ;\\gamma _{0})\\right) B^{-1}(\\beta _{n})\\right\\Vert =o_{p}(1)$ for all constants $\\delta _{n}\\rightarrow 0.$\\medskip\n\nWhen $\\overline{g}_{n}(\\theta )$ takes the form of a sample average, Assumption GMM2$^{\\ast }$ can be verified by a uniform LLN and the switch of $E$ and $\\partial $ under some regularity conditions.\\medskip\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma GMM Smooth Sufficient}Assumption \\emph{GMM2}$^{\\ast }$ implies Assumption \\emph{GMM2.}\n\\end{lemma}\n\n\\noindent \\textbf{Example 1 (cont.). 
}We verify Assumption GMM2 in this\nexample using the sufficient condition Assumption GMM2$^{\\ast }.$ The key\nquantities in Assumption GMM2$^{\\ast }$ are%\n\\begin{equation}\n\\frac{\\partial }{\\partial \\psi ^{\\prime }}\\overline{g}_{n}(\\theta\n)=n^{-1}\\sum_{i=1}^{n}Z_{i}d_{\\psi ,i}(\\pi )^{\\prime }\\text{ and }\\frac{%\n\\partial }{\\partial \\theta ^{\\prime }}\\overline{g}_{n}(\\theta\n)=n^{-1}\\sum_{i=1}^{n}Z_{i}d_{\\theta ,i}(\\pi )^{\\prime }.\n\\label{g_bar  partial der}\n\\end{equation}\n\nAssumption GMM2$^{\\ast }$(i) holds with the partial derivatives given in (%\n\\ref{g_bar partial der}). Assumption GMM2$^{\\ast }$(ii) holds by the uniform\nLLN given in Lemma \\ref{Lemma uniform convergence} in Supplemental Appendix\nD under the conditions in (\\ref{gmm phi space}). Assumption GMM2$^{\\ast }$%\n(iii) holds by this uniform LLN and $\\beta /\\beta _{n}=1+o(1)$ for $\\theta\n\\in \\Theta _{n}(\\delta _{n}).$ $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Assumption GMM3}\n\n\\hspace{0.25in}Under Assumptions GMM1 and GMM2, Assumption GMM3 below is\nused when establishing the asymptotic distribution of the GMM estimator\nunder weak and semi-strong identification, i.e., when $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0},0,b).$\n\nDefine the $k\\times d_{\\beta }$ matrix of partial derivatives of the average\npopulation moment function wrt the true $\\beta $ value, $\\beta ^{\\ast },$ to\nbe%\n\\begin{equation}\nK_{n,g}(\\theta ;\\gamma ^{\\ast })=n^{-1}\\sum_{i=1}^{n}\\frac{\\partial }{%\n\\partial \\beta ^{\\ast \\prime }}E_{\\gamma ^{\\ast }}g(W_{i},\\theta ),\n\\label{exp fn2}\n\\end{equation}%\nwhere $\\gamma ^{\\ast }=(\\beta ^{\\ast },\\zeta ^{\\ast },\\pi ^{\\ast },\\phi\n^{\\ast }).$ The domain of the function $K_{n,g}(\\theta ;\\gamma ^{\\ast })$ is \n$\\Theta _{\\delta }\\times \\Gamma _{0},$ where $\\Theta _{\\delta }=\\{\\theta \\in\n\\Theta :||\\beta ||<\\delta \\}$ and $\\Gamma _{0}=\\{\\gamma _{a}=(a\\beta ,\\zeta\n,\\pi ,\\phi )\\in \\Gamma :$ $\\gamma =(\\beta ,\\zeta ,\\pi ,\\phi )\\in \\Gamma $\nwith $||\\beta ||<\\delta $ and $a\\in \\lbrack 0,1]\\}$ for some $\\delta >0.$%\n\\footnote{%\nThe constant $\\delta >0$ is as in Assumption B2(iii) stated below. 
The set $\\Gamma _{0}$ is not empty by Assumption B2(ii).}\\medskip\n\n\\noindent \\textbf{Assumption GMM3.} (i) $\\overline{g}_{n}\\left( \\theta \\right) $ takes the form $\\overline{g}_{n}(\\theta )=n^{-1}\\sum_{i=1}^{n}g(W_{i},\\theta )$ for some function $g\\left( W_{i},\\theta \\right) \\in R^{k}$ $\\forall \\theta \\in \\Theta .$\n\n\\noindent (ii) $E_{\\gamma ^{\\ast }}g(W_{i},\\psi ^{\\ast },\\pi )=0$ $\\forall \\pi \\in \\Pi ,$ $\\forall i\\geq 1$ when the true parameter is $\\gamma ^{\\ast }$ $\\forall \\gamma ^{\\ast }=(\\psi ^{\\ast },\\pi ^{\\ast },\\phi ^{\\ast })\\in \\Gamma $ with $\\beta ^{\\ast }=0.$\n\n\\noindent (iii) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ $n^{-1/2}\\sum_{i=1}^{n}(g(W_{i},\\psi _{0,n},\\pi _{n})-E_{\\gamma _{n}}g(W_{i},\\psi _{0,n},\\pi _{n}))\\rightarrow _{d}N(0,\\allowbreak \\Omega _{g}(\\gamma _{0}))$ for some $k\\times k$ matrix $\\Omega _{g}(\\gamma _{0}).$\n\n\\noindent (iv) (a) $K_{n,g}(\\theta ;\\gamma ^{\\ast })$ exists $\\forall (\\theta ,\\gamma ^{\\ast })\\in \\Theta _{\\delta }\\times \\Gamma _{0},$ $\\forall n\\geq 1.$ (b) For some non-stochastic $k\\times d_{\\beta }$ matrix-valued function $K_{g}(\\psi _{0},\\pi ;\\gamma _{0}),$ $K_{n,g}(\\psi _{n},\\pi ;\\widetilde{\\gamma }_{n})\\rightarrow K_{g}(\\psi _{0},\\pi ;\\gamma _{0})$ uniformly over $\\pi \\in \\Pi $ for all non-stochastic sequences $\\{\\psi _{n}\\}$ and $\\{\\widetilde{\\gamma }_{n}\\}$ such that $\\widetilde{\\gamma }_{n}\\in \\Gamma ,$ $\\widetilde{\\gamma }_{n}\\rightarrow \\gamma _{0}=(0,\\zeta _{0},\\pi _{0},\\phi _{0})$ for some $\\gamma _{0}\\in \\Gamma ,$ $(\\psi _{n},\\pi )\\in \\Theta ,$ and $\\psi _{n}\\rightarrow \\psi _{0}=(0,\\zeta _{0}).$ (c) $K_{g}(\\psi _{0},\\pi ;\\gamma _{0})$ is continuous on $\\Pi $ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0.$\n\n\\noindent (v) $\\forall \\omega _{0}\\in R^{d_{\\beta }}$ with $||\\omega _{0}||=1,$ $K_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\omega _{0}=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})S$ for some $S\\in R^{d_{\\psi }}$ if and only if $\\pi =\\pi _{0}.$\n\n\\noindent (vi) Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ $n^{-1}\\sum_{i=1}^{n}(\\partial /\\partial \\psi ^{\\prime })E_{\\gamma _{n}}g(W_{i},\\psi ,\\pi )|_{(\\psi ,\\pi )=\\theta _{n}}\\rightarrow g_{\\psi }(\\theta _{0};\\gamma _{0}).$\\medskip\n\nAssumption GMM3(iii) can be verified using a triangular array CLT. Although Assumption GMM3(iv) is somewhat complicated, it is not restrictive, see the verification of it in the two examples. A set of primitive sufficient conditions for Assumption GMM3(iv) is given in Appendix A of AC1-SM.\\footnote{The sufficient conditions are for Assumption C5 of AC1, which is the same as Assumption GMM3(iv) but with $m(W_{i},\\theta )$ of AC1 in place of $g(W_{i},\\theta ).$}\n\nIn Assumption GMM3(v), the equality holds for $\\pi =\\pi _{0}$ with $S=-[I_{d_{\\beta }}:0_{d_{\\beta }\\times d_{\\zeta }}]^{\\prime }\\omega _{0}$ by Lemma 9.3 in AC1-SM under the assumptions therein. For any $\\pi \\neq \\pi _{0},$ Assumption GMM3(v) requires that no nonzero linear combination of the columns of $K_{g}(\\psi _{0},\\pi ;\\gamma _{0})$ lies in the column space of $g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0}).$\n\nWith identically distributed observations, Assumption GMM3(vi) can be verified by the exchange of $E$ and $\\partial $ under suitable regularity conditions.\\medskip\n\n
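Heuristically, $K_{n,g}$ measures the drift that a small true $\\beta ^{\\ast }$ induces in the moments. Since $E_{\\gamma ^{\\ast }}g(W_{i},\\psi _{0,n},\\pi )=0$ when $\\beta ^{\\ast }=0$ by Assumption GMM3(ii), a mean-value expansion in $\\beta ^{\\ast }$ around $0$ gives, under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$%\n\\begin{equation*}\nn^{1/2}E_{\\gamma _{n}}\\overline{g}_{n}(\\psi _{0,n},\\pi )=K_{n,g}(\\psi _{0,n},\\pi ;\\widetilde{\\gamma }_{n})\\,n^{1/2}\\beta _{n}\\rightarrow K_{g}(\\psi _{0},\\pi ;\\gamma _{0})b,\n\\end{equation*}%\nwhere $\\widetilde{\\gamma }_{n}$ denotes a mean value whose $\\beta $ component lies between $0$ and $\\beta _{n}.$ This limit is the noncentrality term that appears in the asymptotic distributions below.\n\n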
\\noindent \\textbf{Example 1 (cont.). }For this example, the key quantities in Assumption GMM3 are%\n\\begin{eqnarray}\ng(W_{i},\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}(Y_{i}-\\beta h(X_{1,i},\\pi )-X_{2,i}^{\\prime }\\zeta )Z_{i},  \\notag \\\\\n\\Omega _{g}(\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\phi _{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime },\\text{ and}  \\notag \\\\\nK_{n,g}(\\theta ;\\gamma ^{\\ast })\\hspace{-0.08in} &=&\\hspace{-0.08in}K_{g}(\\theta ;\\gamma ^{\\ast })=E_{\\phi ^{\\ast }}h(X_{1,i},\\pi ^{\\ast })Z_{i}.  \\label{GMM3_key quant}\n\\end{eqnarray}\n\nAssumption GMM3(i) holds with $g(W_{i},\\theta )$ in (\\ref{GMM3_key quant}).\n\nTo verify Assumption GMM3(ii), we have%\n\\begin{equation}\nE_{\\phi ^{\\ast }}g(W_{i},\\theta )=E_{\\phi ^{\\ast }}\\left( U_{i}+\\beta ^{\\ast }h(X_{1,i},\\pi ^{\\ast })-\\beta h(X_{1,i},\\pi )+X_{2,i}^{\\prime }(\\zeta ^{\\ast }-\\zeta )\\right) Z_{i}.  \\label{GMM3 - Eg fn}\n\\end{equation}%\nWhen $\\beta =\\beta ^{\\ast }=0$ and $\\zeta =\\zeta ^{\\ast },$ $E_{\\phi ^{\\ast }}g(W_{i},\\theta )=0$ $\\forall \\pi \\in \\Pi .$\n\nNext, we show that Assumption GMM3(iii) holds with $\\Omega _{g}(\\gamma _{0})$ in (\\ref{GMM3_key quant}). Define%\n\\begin{eqnarray}\nG_{g,n}(\\pi _{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}n^{-1/2}\\sum_{i=1}^{n}\\left( g(W_{i},\\psi _{0,n},\\pi _{n})-E_{\\phi _{n}}g(W_{i},\\psi _{0,n},\\pi _{n})\\right)  \\label{gmm G_g} \\\\\n\\hspace{-0.08in} &=&\\hspace{-0.08in}n^{-1/2}\\sum_{i=1}^{n}U_{i}Z_{i}+\\beta _{n}[n^{-1/2}\\sum_{i=1}^{n}\\left( h(X_{1,i},\\pi _{n})Z_{i}-E_{\\phi _{n}}h(X_{1,i},\\pi _{n})Z_{i}\\right) ].  \\notag\n\\end{eqnarray}%\nBy the CLT for triangular arrays of row-wise i.i.d. random variables given in Lemma \\ref{Lemma CLT, array} in Supplemental Appendix C, $n^{-1/2}\\sum_{i=1}^{n}U_{i}Z_{i}\\rightarrow _{d}N(0,\\Omega _{g}(\\gamma _{0})).$ The second term in the second line of (\\ref{gmm G_g}) is $o_{p}(1)$ because $\\beta _{n}\\rightarrow 0$ and $n^{-1/2}\\sum_{i=1}^{n}(h(X_{1,i},\\pi _{n})Z_{i}-E_{\\phi _{n}}h(X_{1,i},\\allowbreak \\pi _{n})Z_{i})=O_{p}(1)$ by the CLT in Lemma \\ref{Lemma CLT, array} in Supplemental Appendix C. Hence, $G_{g,n}(\\pi _{n})\\rightarrow _{d}N(0,\\Omega _{g}(\\gamma _{0})).$\n\nNext, we show that Assumption GMM3(iv) holds with $K_{n,g}(\\theta ;\\gamma ^{\\ast })$ and $K_{g}(\\theta ;\\gamma ^{\\ast })$ in (\\ref{GMM3_key quant}). Assumption GMM3(iv)(a) is implied by (\\ref{GMM3 - Eg fn}) and the moment conditions in (\\ref{gmm phi space}). The convergence in Assumption GMM3(iv)(b) holds because $\\phi _{n}\\rightarrow \\phi _{0}$ induces weak convergence of $(X_{i},Z_{i})$ by the definition of the metric on $\\Phi ^{\\ast }$ and $E_{\\phi }\\sup_{\\pi \\in \\Pi }||h(X_{1,i},\\allowbreak \\pi )Z_{i}||^{1+\\delta }\\leq C$ for some $\\delta >0$ and $C<\\infty $ by the conditions in (\\ref{gmm phi space}). The convergence holds uniformly over $\\pi \\in \\Pi $ by Lemma \\ref{Lemma uniform convergence} in Supplemental Appendix D because $\\Pi $ is compact and $E_{\\phi ^{\\ast }}\\sup_{\\pi \\in \\Pi }||h_{\\pi }(X_{1,i},\\pi )||\\cdot ||Z_{i}||\\leq C$ for some $C<\\infty .$ Assumption GMM3(iv)(c) holds because $\\Pi $ is compact, $h(x,\\pi )$ is continuous in $\\pi ,$ and $E_{\\phi ^{\\ast }}\\sup_{\\pi \\in \\Pi }||h(X_{1,i},\\pi )||\\cdot ||Z_{i}||\\leq C$ for some $C<\\infty $ by the conditions in (\\ref{gmm phi space}). 
This completes the verification of Assumption GMM3(iv).\n\nTo verify Assumption GMM3(v), note that for $S\\in R^{d_{X_{2}}+1}$ we have%\n\\begin{eqnarray}\n&&K_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\omega _{0}-g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})S  \\notag \\\\\n\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\phi _{0}}Z_{i}h(X_{1,i},\\pi _{0})\\omega _{0}+E_{\\phi _{0}}Z_{i}d_{\\psi ,i}(\\pi )^{\\prime }S  \\notag \\\\\n\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{0},\\pi )^{\\prime }\\Delta _{2},\\text{ where }\\Delta _{2}=(\\omega _{0},S)\\neq 0_{d_{\\zeta }+2}.\n\\end{eqnarray}%\nBecause $E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{0},\\pi )^{\\prime }$ has full column rank for all $\\pi \\neq \\pi _{0}$ by (\\ref{gmm phi space}), $K_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\omega _{0}\\neq g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})S$ for any $\\pi \\neq \\pi _{0}.$ When $\\pi =\\pi _{0},$ $K_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\omega _{0}=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})S$ if $S=(-\\omega _{0},0_{d_{\\zeta }})$ $(\\in R^{d_{\\zeta }+1}).$ This completes the verification of Assumption GMM3 for this example. $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Assumption GMM4}\n\n\\hspace{0.25in}To obtain the asymptotic distribution of $\\widehat{\\pi }_{n}$ when $\\beta _{n}=O(n^{-1/2})$ via the continuous mapping theorem, we use Assumption GMM4 stated below.\n\nUnder Assumptions GMM1(i) and GMM1(ii), $\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0})$ does not depend on $\\pi $ when $\\beta _{0}=0.$ For simplicity, let $\\mathcal{W}(\\psi _{0};\\gamma _{0})$ abbreviate $\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0})$ when $\\beta _{0}=0.$\n\nThe following quantities arise in the asymptotic distributions of $\\widehat{\\theta }_{n}$ and various test statistics when $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ and $||b||<\\infty .$ Define%\n\\begin{eqnarray}\n\\Omega (\\pi _{1},\\pi _{2};\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}g_{\\psi }(\\psi _{0},\\pi _{1};\\gamma _{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma _{0})\\Omega _{g}(\\gamma _{0})\\mathcal{W}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi _{0},\\pi _{2};\\gamma _{0}),  \\notag \\\\\nH(\\pi ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0}),\\text{ and}  \\notag \\\\\nK(\\psi _{0},\\pi ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma _{0})K_{g}(\\psi _{0},\\pi ;\\gamma _{0}).  \\label{Omega, H, and K defns}\n\\end{eqnarray}%\nLet $G(\\cdot ;\\gamma _{0})$ denote a mean zero Gaussian process indexed by $\\pi \\in \\Pi $ with bounded continuous sample paths and covariance kernel $\\Omega (\\pi _{1},\\pi _{2};\\gamma _{0})$ for $\\pi _{1},\\pi _{2}\\in \\Pi .$\n\nNext, we define a \\textquotedblleft weighted non-central chi-square\\textquotedblright\\ process $\\{\\xi (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ that arises in the asymptotic distributions. Let%\n\\begin{equation}\n\\xi (\\pi ;\\gamma _{0},b)=-\\frac{1}{2}\\left( G(\\pi ;\\gamma _{0})+K(\\pi ;\\gamma _{0})b\\right) ^{\\prime }H^{-1}(\\pi ;\\gamma _{0})\\left( G(\\pi ;\\gamma _{0})+K(\\pi ;\\gamma _{0})b\\right) .  \\label{Defn of Xi Stoch Process}\n\\end{equation}\n\n
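For each fixed $\\pi ,$ $\\xi (\\pi ;\\gamma _{0},b)$ is a quadratic form in a noncentral Gaussian vector, which motivates the terminology: writing $\\mathcal{G}_{\\pi }=G(\\pi ;\\gamma _{0})+K(\\pi ;\\gamma _{0})b$ (a symbol introduced only for this remark),%\n\\begin{equation*}\n-2\\xi (\\pi ;\\gamma _{0},b)=\\mathcal{G}_{\\pi }^{\\prime }H^{-1}(\\pi ;\\gamma _{0})\\mathcal{G}_{\\pi },\\text{ where }\\mathcal{G}_{\\pi }\\sim N(K(\\pi ;\\gamma _{0})b,\\Omega (\\pi ,\\pi ;\\gamma _{0})).\n\\end{equation*}\n\n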
Under Assumptions GMM1-GMM3, $\\{\\xi (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ has bounded continuous sample paths a.s.\\medskip\n\n\\noindent \\textbf{Assumption GMM4.\\ }Each sample path of the stochastic process $\\{\\xi (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ in some set $A(\\gamma _{0},b)$ with $P_{\\gamma _{0}}(A(\\gamma _{0},b))=1$ is minimized over $\\Pi $ at a unique point (which may depend on the sample path), denoted $\\pi ^{\\ast }(\\gamma _{0},b),$ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0,$ $\\forall b$ with $||b||<\\infty .$\\medskip\n\nIn Assumption GMM4, $\\pi ^{\\ast }(\\gamma _{0},b)$ is random.\n\nNext, we provide a sufficient condition for Assumption GMM4. We partition $g_{\\psi }(\\theta ;\\gamma _{0})\\allowbreak \\in R^{k\\times d_{\\psi }}$ as%\n\\begin{equation}\ng_{\\psi }(\\theta ;\\gamma _{0})=[g_{\\beta }(\\theta ;\\gamma _{0}):g_{\\zeta }(\\theta ;\\gamma _{0})],\n\\end{equation}%\nwhere $g_{\\beta }(\\theta ;\\gamma _{0})\\in R^{k\\times d_{\\beta }}$ and $g_{\\zeta }(\\theta ;\\gamma _{0})\\in R^{k\\times d_{\\zeta }}.$ When $\\beta _{0}=0,$ $g_{\\zeta }(\\psi _{0},\\pi ;\\gamma _{0})$ does not depend on $\\pi $ by Assumptions GMM1(i) and GMM3(i) and is denoted by $g_{\\zeta }(\\psi _{0};\\gamma _{0})$ for simplicity. When $d_{\\beta }=1$ and $\\beta _{0}=0,$ define%\n\\begin{equation}\ng_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})=[g_{\\beta }(\\psi _{0},\\pi _{1};\\gamma _{0}):g_{\\beta }(\\psi _{0},\\pi _{2};\\gamma _{0}):g_{\\zeta }(\\psi _{0};\\gamma _{0})]\\in R^{k\\times (d_{\\zeta }+2)}.\n\\end{equation}\n\n\\noindent \\textbf{Assumption GMM4}$^{\\ast }$\\textbf{.} (i) $d_{\\beta }=1$ (i.e., $\\beta $ is a scalar).\n\n\\noindent (ii) $g_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})$ has full column rank, $\\forall \\pi _{1},\\pi _{2}\\in \\Pi $ with $\\pi _{1}\\neq \\pi _{2},$ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0.$\n\n\\noindent (iii) $\\Omega _{g}(\\gamma _{0})$ is positive definite, $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0.$\\medskip\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma As GMM4}Assumptions \\emph{GMM1-GMM3 }and \\emph{GMM4}$^{\\ast }$ imply Assumption \\emph{GMM4.}\n\\end{lemma}\n\n\\noindent \\textbf{Example 1 (cont.). }We verify Assumption GMM4 in this example using the sufficient condition Assumption GMM4$^{\\ast }.$ The key quantity in Assumption GMM4$^{\\ast }$ is%\n\\begin{equation}\ng_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})=-E_{\\phi _{0}}Z_{i}(h(X_{1,i},\\pi _{1}),h(X_{1,i},\\pi _{2}),X_{2,i}^{\\prime })=-E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{1},\\pi _{2})^{\\prime }.\n\\end{equation}\n\nAssumption GMM4$^{\\ast }$(i) holds automatically. Assumption GMM4$^{\\ast }$(ii) holds because $E_{\\phi _{0}}Z_{i}d_{\\psi ,i}^{\\ast }(\\pi _{1},\\pi _{2})^{\\prime }$ has full column rank $\\forall \\pi _{1},\\pi _{2}\\in \\Pi $ with $\\pi _{1}\\neq \\pi _{2}$ by (\\ref{gmm phi space}). Assumption GMM4$^{\\ast }$(iii) holds with $\\Omega _{g}(\\gamma _{0})=E_{\\phi _{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime }$ because $E_{\\phi _{0}}Z_{i}Z_{i}^{\\prime }$ is positive definite and $E_{\\phi _{0}}(U_{i}^{2}|Z_{i})>0$ a.s. This completes the verification of Assumption GMM4. $\\square $\n\n
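To illustrate the randomness of $\\pi ^{\\ast }(\\gamma _{0},b),$ the following sketch (illustrative only; it takes $d_{\\psi }=1$ so that $G,$ $H,$ and $K$ are scalar-valued, and \\texttt{Omega}, \\texttt{H}, \\texttt{K} are hypothetical callables returning the quantities in (\\ref{Omega, H, and K defns})) simulates one sample path of $\\xi (\\pi ;\\gamma _{0},b)$ on a grid and records its minimizer:\n\\begin{verbatim}\nimport numpy as np\n\ndef draw_pi_star(pi_grid, Omega, H, K, b, rng):\n    # Simulate (G(pi_1),...,G(pi_m)) from its Gaussian law, evaluate\n    # xi(pi) = -(G(pi) + K(pi) b)^2 / (2 H(pi)), and minimize on the grid.\n    # (A tiny ridge may be added to Sigma for numerical stability.)\n    m = len(pi_grid)\n    Sigma = np.array([[Omega(p, q) for q in pi_grid] for p in pi_grid])\n    G = rng.multivariate_normal(np.zeros(m), Sigma)\n    xi = np.array([-0.5 * (G[j] + K(p) * b) ** 2 / H(p)\n                   for j, p in enumerate(pi_grid)])\n    return pi_grid[np.argmin(xi)]\n\\end{verbatim}\nRepeated draws trace out the distribution of $\\pi ^{\\ast }(\\gamma _{0},b)$ that appears in Theorem \\ref{Thm dist'n of estimator b=finite} below.\n\n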
\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Assumption GMM5}\n\n\\hspace{0.25in}Under Assumptions GMM1 and GMM2, Assumption GMM5 is used below to establish the asymptotic distribution of the GMM estimator under semi-strong and strong identification, i.e., when $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}).$\\medskip\n\n\\noindent \\textbf{Assumption GMM5. }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}),$\n\n\\noindent (i) $n^{1/2}\\overline{g}_{n}(\\theta _{n})\\rightarrow _{d}N(0,V_{g}(\\gamma _{0}))$ for some symmetric and positive definite $k\\times k$ matrix $V_{g}(\\gamma _{0}),$\n\n\\noindent (ii) for all constants $\\delta _{n}\\rightarrow 0,$ $\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }||(g_{\\theta }(\\theta ;\\gamma _{0})-g_{\\theta }(\\theta _{n};\\gamma _{0}))B^{-1}(\\beta _{n})||=o(1),$ and\n\n\\noindent (iii) $g_{\\theta }(\\theta _{n};\\gamma _{0})B^{-1}(\\beta _{n})\\rightarrow J_{g}(\\gamma _{0})$ for some matrix $J_{g}(\\gamma _{0})\\in R^{k\\times d_{\\theta }}$ with full column rank.\\footnote{In the vector $\\beta $ case, $J_{g}(\\gamma _{0})$ may depend on $\\omega _{0}$ as well as $\\gamma _{0}.$}\\medskip\n\nNow, we define two key quantities that arise in the asymptotic distribution of the estimator $\\widehat{\\theta }_{n}$ when $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}).$ Let%\n\\begin{eqnarray}\nV(\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}J_{g}(\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma _{0})V_{g}\\left( \\gamma _{0}\\right) \\mathcal{W}(\\theta _{0};\\gamma _{0})J_{g}\\left( \\gamma _{0}\\right) \\text{ and}  \\notag \\\\\nJ(\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}J_{g}(\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma _{0})J_{g}\\left( \\gamma _{0}\\right) .  \\label{J and V Matrices}\n\\end{eqnarray}%\nLet $G^{\\ast }(\\gamma _{0})\\sim N(0_{d_{\\theta }},V(\\gamma _{0}))$ for $\\gamma _{0}\\in \\Gamma .$\\medskip\n\n\\noindent \\textbf{Example 1 (cont.). }The key quantities in Assumption GMM5 for this example are%\n\\begin{equation}\nV_{g}(\\gamma _{0})=E_{\\phi _{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime }\\text{ and }J_{g}(\\gamma _{0})=-E_{\\phi _{0}}Z_{i}d_{i}(\\pi _{0})^{\\prime }.  \\label{GMM C5 -1}\n\\end{equation}\n\nAssumption GMM5(i) holds by the CLT for triangular arrays of row-wise i.i.d. random variables given in Lemma \\ref{Lemma CLT, array} in Supplemental Appendix C. 
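To see why the CLT applies directly, note that $g(W_{i},\\theta _{n})=U_{i}Z_{i}$ when the true parameter is $\\gamma _{n},$ so that%\n\\begin{equation*}\nn^{1/2}\\overline{g}_{n}(\\theta _{n})=n^{-1/2}\\sum_{i=1}^{n}U_{i}Z_{i}\\rightarrow _{d}N(0,E_{\\phi _{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime })=N(0,V_{g}(\\gamma _{0})).\n\\end{equation*}\n\n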
Assumption GMM5(ii) holds with $g_{\\theta }(\\theta ;\\gamma _{0})$ defined as in (\\ref{form of g_0 and weight}) because $\\beta _{n}/\\beta =1+o(1)$ for $\\theta \\in \\Theta _{n}(\\delta _{n})$ and $g_{\\theta }(\\theta ;\\gamma _{0})B^{-1}(\\beta )=-E_{\\phi _{0}}Z_{i}d_{i}(\\pi )^{\\prime }$ is continuous in $\\pi $ uniformly over $\\pi \\in \\Pi ,$ which in turn holds by the moment conditions in (\\ref{gmm phi space}) and the compactness of $\\Pi .$\n\nAssumption GMM5(iii) holds because%\n\\begin{equation}\ng_{\\theta }(\\theta _{n};\\gamma _{n})B^{-1}(\\beta _{n})=-E_{\\phi _{n}}Z_{i}d_{i}(\\pi _{n})^{\\prime }\\rightarrow -E_{\\phi _{0}}Z_{i}d_{i}(\\pi _{0})^{\\prime },\n\\end{equation}%\nwhere the convergence holds because (i) $E_{\\phi _{n}}Z_{i}d_{i}(\\pi )^{\\prime }\\rightarrow E_{\\phi _{0}}Z_{i}d_{i}(\\pi )^{\\prime }$ uniformly over $\\pi \\in \\Pi $ by arguments analogous to those used in the verification of Assumption GMM3(iv)(b) and (ii) $\\pi _{n}\\rightarrow \\pi _{0}.$ The matrix $J_{g}(\\gamma _{0})$ has full column rank by (\\ref{gmm phi space}). This completes the verification of Assumption GMM5. $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Minimum Distance Estimators}\n\n\\hspace{0.25in}Assumptions GMM1, GMM2, GMM4, and GMM5 apply equally well to the MD estimator as to the GMM estimator. Only Assumption GMM3 does not apply to the MD estimator. In place of part of Assumption GMM3, we employ the following assumption for MD estimators.\\medskip\n\n\\noindent \\textbf{Assumption MD. }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ $n^{1/2}\\overline{g}_{n}(\\psi _{0,n},\\pi _{n})=O_{p}(1).$\\medskip\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Parameter Space Assumptions\\label{Par Space Subsec}}\n\n\\hspace{0.25in}Next, we specify conditions on the parameter spaces $\\Theta $ and $\\Gamma .$\n\nDefine $\\Theta _{\\delta }^{\\ast }=\\{\\theta \\in \\Theta ^{\\ast }:||\\beta ||<\\delta \\},$ where $\\Theta ^{\\ast }$ is the true parameter space for $\\theta ,$ see (\\ref{True Par Space Gamma}). 
The optimization parameter space $\\Theta $ satisfies:\\medskip\n\n\\noindent \\textbf{Assumption B1.} (i) $int(\\Theta )\\supset \\Theta ^{\\ast }.$\n\n\\noindent (ii) For some $\\delta >0,$ $\\Theta \\supset \\{\\beta \\in R^{d_{\\beta }}:||\\beta ||<\\delta \\}\\times \\mathcal{Z}^{0}\\times \\Pi \\supset \\Theta _{\\delta }^{\\ast }$ for some non-empty open set $\\mathcal{Z}^{0}\\subset R^{d_{\\zeta }}$ and $\\Pi $ as in (\\ref{Defn of Par Sp Pi and Psi}).\n\n\\noindent (iii) $\\Pi $ is compact.\\medskip\n\n\\noindent Because the optimization parameter space is user selected, Assumption B1 can be made to hold by the choice of $\\Theta .$\n\nThe true parameter space $\\Gamma $ satisfies:\\medskip\n\n\\noindent \\textbf{Assumption B2.} (i) $\\Gamma $ is compact and (\\ref{True Par Space Gamma}) holds.\n\n\\noindent (ii) $\\forall \\delta >0,$ $\\exists \\gamma =(\\beta ,\\zeta ,\\pi ,\\phi )\\in \\Gamma $ with $0<||\\beta ||<\\delta .$\n\n\\noindent (iii) $\\forall \\gamma =(\\beta ,\\zeta ,\\pi ,\\phi )\\in \\Gamma $ with $0<||\\beta ||<\\delta $ for some $\\delta >0,$ $\\gamma _{a}=(a\\beta ,\\zeta ,\\pi ,\\phi )\\in \\Gamma $ $\\forall a\\in \\lbrack 0,1].$\\medskip\n\n\\noindent Assumption B2(ii) guarantees that $\\Gamma $ is not empty and that there are elements $\\gamma $ of $\\Gamma $ whose $\\beta $ values are non-zero but are arbitrarily close to $0,$ which is the region of the true parameter space where near lack of identification occurs. Assumption B2(iii) ensures that $\\Gamma $ is compatible with the existence of the partial derivatives that arise in (\\ref{exp fn2}) and Assumption GMM3.\\medskip\n\n\\noindent \\textbf{Example 1 (cont.). }Given the definitions in (\\ref{gmm theta space})-(\\ref{gmm phi space}), the true parameter space $\\Gamma $ is of the form in (\\ref{True Par Space Gamma}). Thus, Assumption B2(i) holds. Assumption B2(ii) follows from the form of $\\mathcal{B}^{\\ast }$ given in (\\ref{gmm theta space}). Assumption B2(iii) follows from the form of $\\mathcal{B}^{\\ast }$ and the fact that $\\Theta ^{\\ast }$ is a product space and $\\Phi ^{\\ast }(\\theta ^{\\ast })$ does not depend on $\\beta ^{\\ast }.$ Hence, the true parameter space $\\Gamma $ satisfies Assumption B2.\n\nThe optimization parameter space $\\Theta $ takes the form%\n\\begin{equation}\n\\Theta =\\mathcal{B}\\times \\mathcal{Z}\\times \\Pi ,\\text{ where }\\mathcal{B}=[-b_{1},b_{2}]\\subset R,\n\\end{equation}%\n$b_{1}>b_{1}^{\\ast },$ $b_{2}>b_{2}^{\\ast },$ $\\mathcal{Z}\\subset R^{d_{\\zeta }}$ is compact, $\\Pi \\subset R^{d_{\\pi }}$ is compact, $\\mathcal{Z}^{\\ast }\\subset int(\\mathcal{Z}),$ and $\\mathcal{B}^{\\ast }\\subset int(\\mathcal{B}).$ Given these conditions, Assumptions B1(i) and B1(iii) follow immediately. Assumption B1(ii) holds by taking $\\delta <\\min \\{b_{1}^{\\ast },b_{2}^{\\ast }\\}$ and $\\mathcal{Z}^{0}=int(\\mathcal{Z}).$ $\\square $\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}GMM Estimation Results \\label{Estimation Results Sec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}This section provides the asymptotic results of the paper for the GMM estimator $\\widehat{\\theta }_{n}.$ Define a concentrated GMM estimator $\\widehat{\\psi }_{n}(\\pi )$ $(\\in \\Psi (\\pi ))$ of $\\psi $ for given $\\pi \\in \\Pi $ by%\n\\begin{equation}\nQ_{n}(\\widehat{\\psi }_{n}(\\pi ),\\pi )=\\inf_{\\psi \\in \\Psi (\\pi )}Q_{n}(\\psi ,\\pi )+o(n^{-1}).  \\label{Defn psi}\n\\end{equation}\n\n
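As a computational illustration only, the minimization in (\\ref{Defn psi}) can be profiled on a grid over $\\Pi .$ The sketch below does this for Example 1, where the inner problem is linear in $\\psi $ and hence has a closed form; the choice $h(x,\\pi )=e^{\\pi x},$ the function names, and the neglect of the $o(n^{-1})$ tolerance are all hypothetical simplifications:\n\\begin{verbatim}\nimport numpy as np\n\ndef psi_hat(pi, Y, X1, X2, Z, W):\n    # Inner step: minimize g_bar' W g_bar over psi = (beta, zeta).\n    # With D = [h(X1, pi) : X2], g_bar_n = Z'Y/n - (Z'D/n) psi, so the\n    # minimizer solves the normal equations (A'WA) psi = A'W b.\n    D = np.column_stack([np.exp(pi * X1), X2])\n    A = Z.T @ D / len(Y)\n    b = Z.T @ Y / len(Y)\n    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)\n\ndef pi_hat(pi_grid, Y, X1, X2, Z, W):\n    # Outer step: minimize the concentrated criterion over the grid.\n    def Qc(pi):\n        D = np.column_stack([np.exp(pi * X1), X2])\n        g = Z.T @ (Y - D @ psi_hat(pi, Y, X1, X2, Z, W)) / len(Y)\n        return g @ W @ g   # criterion up to a positive constant\n    return min(pi_grid, key=Qc)\n\\end{verbatim}\nThe quantity computed in \\texttt{Qc} is (a positive multiple of) the concentrated criterion $Q_{n}^{c}(\\pi )$ defined next.\n\n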
Let $Q_{n}^{c}(\\pi )$ denote the concentrated GMM criterion function $Q_{n}(\\widehat{\\psi }_{n}(\\pi ),\\pi ).$ Define an extremum estimator $\\widehat{\\pi }_{n}$ $(\\in \\Pi )$ by%\n\\begin{equation}\nQ_{n}^{c}(\\widehat{\\pi }_{n})=\\inf_{\\pi \\in \\Pi }Q_{n}^{c}(\\pi )+o(n^{-1}).  \\label{Defn pihat}\n\\end{equation}\n\nWe assume that the GMM estimator $\\widehat{\\theta }_{n}$ in (\\ref{Defn of Thetahat}) can be written as $\\widehat{\\theta }_{n}=(\\widehat{\\psi }_{n}(\\widehat{\\pi }_{n}),\\widehat{\\pi }_{n}).$ Note that if (\\ref{Defn psi}) and (\\ref{Defn pihat}) hold and $\\widehat{\\theta }_{n}=(\\widehat{\\psi }_{n}(\\widehat{\\pi }_{n}),\\widehat{\\pi }_{n}),$ then (\\ref{Defn of Thetahat}) automatically holds.\n\nFor $\\gamma _{n}=(\\beta _{n},\\zeta _{n},\\pi _{n},\\phi _{n})\\in \\Gamma ,$ let $Q_{0,n}=Q_{n}(\\psi _{0,n},\\pi ),$ where $\\psi _{0,n}=(0,\\zeta _{n}).$ Note that $Q_{0,n}$ does not depend on $\\pi $ by Assumption GMM1(i).\n\nDefine the Gaussian process $\\{\\tau (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ by%\n\\begin{equation}\n\\tau (\\pi ;\\gamma _{0},b)=-H^{-1}(\\pi ;\\gamma _{0})(G(\\pi ;\\gamma _{0})+K(\\pi ;\\gamma _{0})b)-(b,0_{d_{\\zeta }}),  \\label{Defn of Tau Process}\n\\end{equation}%\nwhere $(b,0_{d_{\\zeta }})\\in R^{d_{\\psi }}.$ Note that, by (\\ref{Defn of Xi Stoch Process}) and (\\ref{Defn of Tau Process}), $\\xi (\\pi ;\\gamma _{0},b)=-(1/2)(\\tau (\\pi ;\\gamma _{0},b)+(b,0_{d_{\\zeta }}))^{\\prime }H(\\pi ;\\gamma _{0})(\\tau (\\pi ;\\gamma _{0},b)+(b,0_{d_{\\zeta }})).$ Let%\n\\begin{equation}\n\\pi ^{\\ast }(\\gamma _{0},b)=\\underset{\\pi \\in \\Pi }{\\arg \\min }\\ \\xi (\\pi ;\\gamma _{0},b).  \\label{PiStar Process}\n\\end{equation}\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{. }\\label{Thm dist'n of estimator b=finite}Suppose Assumptions \\emph{GMM1-GMM4,} \\emph{B1, }and \\emph{B2} hold. Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$\n\n\\noindent $\\emph{(a)}$ $\\left( \n\\begin{array}{c}\nn^{1/2}(\\widehat{\\psi }_{n}-\\psi _{n}) \\\\ \n\\widehat{\\pi }_{n}\n\\end{array}\n\\right) \\rightarrow _{d}\\left( \n\\begin{array}{c}\n\\tau (\\pi ^{\\ast }(\\gamma _{0},b);\\gamma _{0},b) \\\\ \n\\pi ^{\\ast }(\\gamma _{0},b)\n\\end{array}\n\\right) ,$ and\n\n\\noindent $\\emph{(b)}$ $n\\left( Q_{n}(\\widehat{\\theta }_{n})-Q_{0,n}\\right) \\rightarrow _{d}\\inf_{\\pi \\in \\Pi }\\xi (\\pi ;\\gamma _{0},b).$\n\\end{theorem}\n\n\\noindent \\textbf{Comments. 1. }The results of Theorem \\ref{Thm dist'n of estimator b=finite} and Theorem \\ref{Thm dist'n of estimator b=inf} below are the same as those in Theorems 3.1 and 3.2 of AC1, but they are obtained under more primitive conditions, which are designed for GMM estimators.\n\n\\textbf{2. 
}Define the Gaussian process $\\{\\tau _{\\beta }(\\pi ;\\gamma\n_{0},b):\\pi \\in \\Pi \\}$ by \n\\begin{equation}\n\\tau _{\\beta }(\\pi ;\\gamma _{0},b)=S_{\\beta }\\tau (\\pi ;\\gamma _{0},b)+b,\n\\label{Asy Distn of Betahat(pi)}\n\\end{equation}%\nwhere $S_{\\beta }=[I_{d_{\\beta }}:0_{d_{\\beta }\\times d_{\\zeta }}]$ is the $%\nd_{\\beta }\\times d_{\\psi }$ selector matrix that selects $\\beta $ out of $%\n\\psi .$ The asymptotic distribution of $n^{1/2}\\widehat{\\beta }_{n}$\n(without centering at $\\beta _{n}$) under $\\Gamma (\\gamma _{0},0,b)$ with $%\n||b||<\\infty $ is given by $\\tau _{\\beta }(\\pi ^{\\ast }(\\gamma\n_{0},b);\\gamma _{0},b).$ This quantity appears in the asymptotic\ndistributions of the Wald and $t$ statistics below.\n\n\\textbf{3. }Assumption GMM4 is not needed for Theorem \\ref{Thm dist'n of\nestimator b=finite}(b).\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{. }\\label{Thm dist'n of estimator b=inf}Suppose\nAssumptions \\emph{GMM1-GMM5, B1, }and \\emph{B2} hold. Under $\\{\\gamma\n_{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}),$\n\n\\noindent \\emph{(a)} $n^{1/2}B(\\beta _{n})(\\widehat{\\theta }_{n}-\\theta\n_{n})\\rightarrow _{d}-J^{-1}(\\gamma _{0})G^{\\ast }(\\gamma _{0})\\sim\nN(0_{d_{\\theta }},J^{-1}(\\gamma _{0})V(\\gamma _{0})J^{-1}(\\gamma _{0})),$ and\n\n\\noindent \\emph{(b)} $n(Q_{n}(\\widehat{\\theta }_{n})-Q_{n}(\\theta\n_{n}))\\rightarrow _{d}-\\frac{1}{2}G^{\\ast }(\\gamma _{0})^{\\prime\n}J^{-1}(\\gamma _{0})G^{\\ast }(\\gamma _{0}).$\n\\end{theorem}\n\n\\noindent \\textbf{Comment. }The results of Theorems \\ref{Thm dist'n of\nestimator b=finite} and \\ref{Thm dist'n of estimator b=inf} hold for minimum\ndistance estimators under the assumptions listed in Supplemental Appendix B.\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Wald Confidence Sets and\nTests\\label{Wald Tests Sec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}In this section, we consider a CS for\na function $r(\\theta )$ of $\\theta $ by inverting a Wald test of the\nhypotheses $H_{0}:r(\\theta )=v$ for $v\\in r(\\Theta ).$ We also consider Wald\ntests of $H_{0}.$ We establish the asymptotic distributions of the Wald\nstatistic under drifting sequences of null and alternative distributions\nthat cover the entire range of strengths of identification. We determine the\nasymptotic size of standard Wald CS's. We introduce robust Wald CS's whose\nasymptotic size is guaranteed to equal their nominal size. The results in\nthis section apply not just to Wald statistics based on GMM estimators, but\nto Wald tests based on any of the estimators considered in AC1 and AC2 as\nwell.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Wald Statistics}\n\n\\hspace{0.25in}The Wald statistics are defined as follows. 
Let%\n\\begin{equation}\n\\Sigma (\\gamma _{0})=J^{-1}\\left( \\gamma _{0}\\right) ^{\\prime }V(\\gamma\n_{0})J^{-1}(\\gamma _{0})\\text{ and }\\widehat{\\Sigma }_{n}=\\widehat{J}%\n_{n}^{-1}\\widehat{V}_{n}\\widehat{J}_{n}^{-1},  \\label{Variance Matrix Defns}\n\\end{equation}%\nwhere $\\widehat{J}_{n}$ and $\\widehat{V}_{n}$ are estimators of $J(\\gamma\n_{0})$ and $V(\\gamma _{0}).$ The Wald statistic takes the form%\n\\begin{equation}\nW_{n}(v)=n(r(\\widehat{\\theta }_{n})-v)^{\\prime }(r_{\\theta }(\\widehat{\\theta \n}_{n})B^{-1}(\\widehat{\\beta }_{n})\\widehat{\\Sigma }_{n}B^{-1}(\\widehat{\\beta \n}_{n})r_{\\theta }(\\widehat{\\theta }_{n})^{\\prime })^{-1}(r(\\widehat{\\theta }%\n_{n})-v),  \\label{Defn Wald}\n\\end{equation}%\nwhere $r_{\\theta }(\\theta )=(\\partial /\\partial \\theta ^{\\prime })r(\\theta\n)\\in R^{d_{r}\\times d_{\\theta }}.$\n\nWhen $d_{r}=1,$ the $t$ statistic takes the form%\n\\begin{equation}\nT_{n}(v)=\\frac{n^{1/2}(r(\\widehat{\\theta }_{n})-v)}{(r_{\\theta }(\\widehat{%\n\\theta }_{n})B^{-1}(\\widehat{\\beta }_{n})\\widehat{\\Sigma }_{n}B^{-1}(%\n\\widehat{\\beta }_{n})r_{\\theta }(\\widehat{\\theta }_{n})^{\\prime })^{1/2}}.\n\\label{Defn T}\n\\end{equation}\n\n\\noindent Although these definitions of the Wald and $t$ statistics involve $%\nB^{-1}(\\widehat{\\beta }_{n}),$ they are the same as the standard definitions\nused in practice. By Theorem \\ref{Thm dist'n of estimator b=inf}(a), when $%\n\\beta _{0}\\neq 0,$ $B^{-1}(\\beta _{0})\\Sigma (\\gamma _{0})B^{-1}(\\beta _{0})$\nis the asymptotic covariance matrix of $\\widehat{\\theta }_{n}.$ In the Wald\nstatistics, the asymptotic covariance is replaced by the estimator $B^{-1}(%\n\\widehat{\\beta }_{n})\\widehat{\\Sigma }_{n}B^{-1}(\\widehat{\\beta }_{n}).$ The\nsame form of the Wald statistics is used under all sequences of true\nparameters $\\gamma _{n}\\in \\Gamma (\\gamma _{0}).$\n\nIn the results below (except in Section \\ref{Wald under Alt Subsec}), we\nconsider the behavior of the Wald statistics when the null hypothesis holds.\nThus, under a sequence $\\{\\gamma _{n}\\},$ we consider the sequence of null\nhypotheses $H_{0}:r(\\theta )=v_{n},$ where $v_{n}$ equals $r(\\theta _{n})$\nand $\\gamma _{n}=(\\theta _{n},\\phi _{n}).$ We employ the following\nnotational simplification:%\n\\begin{equation}\nW_{n}=W_{n}(v_{n}),\\text{ where }v_{n}=r(\\theta _{n}).\n\\label{Null Defn of Wald and t Stats}\n\\end{equation}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Rotation}\n\n\\hspace{0.25in}To obtain the asymptotic distribution of the Wald statistic\nwe consider a rotation of $r(\\widehat{\\theta }_{n})$ and $r_{\\theta }(%\n\\widehat{\\theta }_{n})$ by a matrix $A(\\widehat{\\theta }_{n}).$ The rotation\nis designed to separate the effects of the randomness in $\\widehat{\\psi }%\n_{n} $ and $\\widehat{\\pi }_{n},$ which have different rates of convergence\nfor some sequences $\\{\\gamma _{n}\\}.$ Similar rotations are carried out in\nthe analysis of partially-identified models in Sargan (1983) and Phillips\n(1989), in the nonstationary time series literature, e.g., see Park and\nPhillips (1988), and in the GMM analysis in Antoine and Renault (2009, 2010).\n\nWe partition $r_{\\theta }(\\theta )$ conformably with $\\theta =(\\psi ,\\pi )$:%\n\\begin{equation}\nr_{\\theta }(\\theta )=[r_{\\psi }(\\theta ):r_{\\pi }(\\theta )].\n\\end{equation}%\nSuppose $rank(r_{\\pi }(\\theta ))=d_{\\pi }^{\\ast }$ $(\\leq \\min (d_{r},d_{\\pi\n}))$ $\\forall \\theta \\in \\Theta _{\\delta }$ for 
some $\\delta >0.$ (This is Assumption R1(iii) below). For $\\theta \\in \\Theta _{\\delta },$ let $A(\\theta )=[A_{1}(\\theta )^{\\prime }:A_{2}(\\theta )^{\\prime }]^{\\prime }\\in O(d_{r}),$ where the rows of $A_{1}(\\theta )\\in R^{(d_{r}-d_{\\pi }^{\\ast })\\times d_{r}}$ span the null space of $r_{\\pi }(\\theta )^{\\prime },$ the rows of $A_{2}(\\theta )\\in R^{d_{\\pi }^{\\ast }\\times d_{r}}$ span the column space of $r_{\\pi }(\\theta ),$ and $O(d_{r})$ stands for the orthogonal group of degree $d_{r}$ over the reals. Hence,%\n\\begin{equation}\nA(\\theta )r_{\\pi }(\\theta )=\\left[ \n\\begin{array}{c}\nA_{1}(\\theta )r_{\\pi }(\\theta ) \\\\ \nA_{2}(\\theta )r_{\\pi }(\\theta )\n\\end{array}\n\\right] =\\left[ \n\\begin{array}{c}\n0_{(d_{r}-d_{\\pi }^{\\ast })\\times d_{\\pi }} \\\\ \nr_{\\pi }^{\\ast }(\\theta )\n\\end{array}\n\\right] ,\n\\end{equation}%\nwhere $r_{\\pi }^{\\ast }(\\theta )\\in R^{d_{\\pi }^{\\ast }\\times d_{\\pi }}$ has full row rank $d_{\\pi }^{\\ast }.$ For simplicity, hereafter we write the $0$ matrix as $0$ when there is no confusion about its dimension.\n\nWith the $A(\\theta )$ rotation, the derivative matrix $r_{\\theta }(\\theta )$ becomes%\n\\begin{equation}\nr_{\\theta }^{A}(\\theta )=A(\\theta )r_{\\theta }(\\theta )=\\left[ \n\\begin{array}{cc}\nr_{\\psi }^{\\ast }(\\theta ) & 0 \\\\ \nr_{\\psi }^{0}(\\theta ) & r_{\\pi }^{\\ast }(\\theta )\n\\end{array}\n\\right] ,  \\label{r_A matrix}\n\\end{equation}%\nwhere the $(d_{r}-d_{\\pi }^{\\ast })\\times d_{\\psi }$ matrix $r_{\\psi }^{\\ast }(\\theta )$ has full row rank $d_{r}-d_{\\pi }^{\\ast }.$ When $d_{\\pi }^{\\ast }=d_{r},$ $A_{1}(\\theta )$ and $[r_{\\psi }^{\\ast }(\\theta ):0]$ disappear. When $d_{\\pi }^{\\ast }=0,$ $A_{2}(\\theta )$ and $[r_{\\psi }^{0}(\\theta ):r_{\\pi }^{\\ast }(\\theta )]$ disappear.\n\nThe effect of randomness in $\\widehat{\\pi }_{n}$ on $r(\\widehat{\\theta }_{n})$ is concentrated in the full rank matrix $r_{\\pi }^{\\ast }(\\widehat{\\theta }_{n})$ because the upper right corner of $r_{\\theta }^{A}(\\widehat{\\theta }_{n})$ is $0.$ The effect of randomness in $\\widehat{\\psi }_{n}$ on $r(\\widehat{\\theta }_{n})$ is incorporated in both $r_{\\psi }^{\\ast }(\\widehat{\\theta }_{n})$ and $r_{\\psi }^{0}(\\widehat{\\theta }_{n}).$\n\nUsing the rotation by $A(\\widehat{\\theta }_{n}),$ the Wald statistic in (\\ref{Defn Wald}) can be written as%\n\\begin{equation}\nW_{n}=n(r(\\widehat{\\theta }_{n})-v)^{\\prime }A(\\widehat{\\theta }_{n})^{\\prime }(r_{\\theta }^{A}(\\widehat{\\theta }_{n})B^{-1}(\\widehat{\\beta }_{n})\\widehat{\\Sigma }_{n}B^{-1}(\\widehat{\\beta }_{n})r_{\\theta }^{A}(\\widehat{\\theta }_{n})^{\\prime })^{-1}A(\\widehat{\\theta }_{n})(r(\\widehat{\\theta }_{n})-v),\n\\end{equation}%\nwhere the first $d_{r}-d_{\\pi }^{\\ast }$ rows of $A(\\widehat{\\theta }_{n})r(\\widehat{\\theta }_{n})$ only depend on the randomness in $\\widehat{\\psi }_{n},$ not $\\widehat{\\pi }_{n},$ asymptotically by the choice of $A(\\widehat{\\theta }_{n}).$\n\nDefine a $d_{r}\\times d_{\\theta }$ matrix%\n\\begin{equation}\nr_{\\theta }^{\\ast }(\\theta )=\\left[ \n\\begin{array}{cc}\nr_{\\psi }^{\\ast }(\\theta ) & 0 \\\\ \n0 & r_{\\pi }^{\\ast }(\\theta )\n\\end{array}\n\\right] .\n\\end{equation}\n\n
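One concrete way to construct $A(\\theta )$ (a sketch only; any orthogonal matrix whose row blocks span the stated spaces works, and a numerical rank tolerance stands in for the constant rank in Assumption R1(iii)) is from a singular value decomposition of $r_{\\pi }(\\theta )$:\n\\begin{verbatim}\nimport numpy as np\n\ndef rotation(r_pi):\n    # r_pi: d_r x d_pi. Returns A = [A1' : A2']' in O(d_r) with\n    # A1 r_pi = 0 and the rows of A2 spanning the column space of r_pi.\n    U, s, Vt = np.linalg.svd(r_pi)\n    r = int(np.sum(s > 1e-12))    # d_pi_star = rank(r_pi)\n    A2 = U[:, :r].T               # left singular vectors: column space\n    A1 = U[:, r:].T               # orthogonal complement: null space of r_pi'\n    return np.vstack([A1, A2])\n\\end{verbatim}\n\n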
The matrix $r_{\\theta }^{\\ast }(\\theta ),$ rather than $r_{\\theta }^{A}(\\theta ),$ appears in the asymptotic distribution below. The reason is as follows. Because $\\widehat{\\psi }_{n}$ converges faster than $\\widehat{\\pi }_{n}$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$ as shown in Theorems \\ref{Thm dist'n of estimator b=finite} and \\ref{Thm dist'n of estimator b=inf}, the effect of randomness in $\\widehat{\\pi }_{n}$ is an order of magnitude larger than that in $\\widehat{\\psi }_{n}.$ As a result, the limit of $r_{\\psi }^{0}(\\widehat{\\theta }_{n})$ in (\\ref{r_A matrix}) does not show up in the asymptotic distributions of the Wald and $t$ statistics. On the other hand, the limit of $r_{\\psi }^{\\ast }(\\widehat{\\theta }_{n})$ does appear in the asymptotic distribution because it is the effect of randomness in $\\widehat{\\psi }_{n}$ separated from that in $\\widehat{\\pi }_{n}.$\n\nWhen $r_{\\pi }(\\theta )$ has full row rank, i.e., $d_{\\pi }^{\\ast }=d_{r},$ for all $\\theta \\in \\Theta _{\\delta },$ we have $A(\\theta )=I_{d_{r}},$ $r_{\\theta }^{A}(\\theta )=r_{\\theta }(\\theta ),$ and $r_{\\theta }^{\\ast }(\\theta )=[0:r_{\\pi }(\\theta )].$ In this case, rotation is not needed to concentrate the randomness in $\\widehat{\\pi }_{n}.$ Also, when $d_{r}=1,$ we have $A(\\theta )=1,$ so no rotation is employed.\n\nDefine%\n\\begin{equation}\n\\eta _{n}(\\theta )=\\left\\{ \n\\begin{array}{cc}\nn^{1/2}A_{1}(\\theta )(r(\\psi _{n},\\pi )-r(\\psi _{n},\\pi _{n})) & \\text{if }d_{\\pi }^{\\ast }<d_{r} \\\\ \n0 & \\text{if }d_{\\pi }^{\\ast }=d_{r}.\n\\end{array}\n\\right.  \\label{Defn eta}\n\\end{equation}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Function $\\mathbf{r(\\protect\\theta )}$ of Interest}\n\n\\hspace{0.25in}The function of interest, $r(\\theta ),$ satisfies the following assumptions.\\medskip\n\n\\noindent \\textbf{Assumption R1. }(i) $r(\\theta )$ is continuously differentiable on $\\Theta .$\n\n\\noindent (ii) $r_{\\theta }(\\theta )$ has full row rank $d_{r}$ $\\forall \\theta \\in \\Theta .$\n\n\\noindent (iii) $rank(r_{\\pi }(\\theta ))=d_{\\pi }^{\\ast }$ for some constant $d_{\\pi }^{\\ast }\\leq \\min (d_{r},d_{\\pi })$ $\\forall \\theta \\in \\Theta _{\\delta }=\\{\\theta \\in \\Theta :||\\beta ||<\\delta \\}$ for some $\\delta >0.$\\medskip\n\n\\noindent \\textbf{Assumption R2. }$\\eta _{n}(\\widehat{\\theta }_{n})\\rightarrow _{p}0$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ $\\forall b\\in (R\\cup \\left\\{ \\pm \\infty \\right\\} )^{d_{\\beta }}.$\\medskip\n\nThree different sufficient conditions for the high-level Assumption R2 are given by Assumptions R2$^{\\ast }$(i)-(iii) below. Any one of them is sufficient for Assumption R2 (under the conditions in Lemma \\ref{Lemma Sufficient R2} below).\\medskip\n\n\\noindent \\textbf{Assumption R2}$^{\\ast }$\\textbf{.} (i) $d_{\\pi }^{\\ast }=d_{r}.$\n\n\\noindent (ii) $d_{r}=1.$\n\n\\noindent (iii) The column space of $r_{\\pi }(\\theta )$ is the same $\\forall \\theta \\in \\Theta _{\\delta }$ for some $\\delta >0.$\\medskip\n\nAssumption R2$^{\\ast }$(i) requires that the restrictions only involve $\\pi .$ Alternatively, Assumption R2$^{\\ast }$(ii) requires that only one restriction appears. 
Alternatively, R2$^{\\ast }$(iii) is satisfied when $%\nr_{\\pi }(\\theta )=a(\\theta )R_{\\pi },$ where $a(\\theta ):\\Theta _{\\delta\n}\\rightarrow R,$ $a(\\theta )\\neq 0,$ and $R_{\\pi }\\in R^{d_{r}\\times d_{\\pi\n}}.$ A special case is when $r_{\\pi }(\\theta )$ is constant due to the\nrestrictions being linear.$\\medskip $\n\n\\noindent \\textbf{Assumption }R$_{\\text{\\textbf{L}}}$\\textbf{. }$r(\\theta\n)=R\\theta ,$ where $R\\in R^{d_{r}\\times d_{\\theta }}$ has full row rank $%\nd_{r}.\\medskip $\n\nAssumption R$_{\\text{L}}$ is a sufficient condition for Assumptions R1 and\nR2.$\\medskip $\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma Sufficient R2}Assumptions \\emph{R2}$%\n^{\\ast }$\\emph{(i)} and \\emph{R2}$^{\\ast }$\\emph{(ii)} each \\emph{(}%\nseparately\\emph{)} implies Assumption \\emph{R2}. Assumption \\emph{R2}$^{\\ast\n}$\\emph{(iii)} combined with Assumption \\emph{GMM1 (}or Assumptions \\emph{A}\nand \\emph{B3(i)-(ii)} of \\emph{AC1) }implies Assumption \\emph{R2.}\n\\end{lemma}\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma Sufficient Linear}Assumption \\emph{R}%\n$_{\\text{\\emph{L}}}$ implies Assumptions \\emph{R1} and \\emph{R2.}\n\\end{lemma}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Variance Matrix\nEstimators\\label{Var Matrix Subsec}}\n\n\\hspace{0.25in}The estimators of the components of the asymptotic variance\nmatrix are assumed to satisfy the following assumptions. Two forms are given\nfor Assumption V1 that follows. The first applies when $\\beta $ is a scalar\nand the second applies when $\\beta $ is a vector. The reason for the\ndifference is that the normalizing matrix $B(\\beta )$ is different in these\ntwo cases.\n\nWhen $\\beta $ is a scalar, let $J(\\theta ;\\gamma _{0})$ and $V(\\theta\n;\\gamma _{0})$ for $\\theta \\in \\Theta $ be some non-stochastic $d_{\\theta\n}\\times d_{\\theta }$ matrix-valued functions such that $J(\\theta _{0};\\gamma\n_{0})=J(\\gamma _{0})$ and $V(\\theta _{0};\\gamma _{0})=V(\\gamma _{0}),$ where \n$J(\\gamma _{0})$ and $V(\\gamma _{0})$ are as in (\\ref{J and V Matrices}) (or\nas in Assumptions D2 and D3 of AC1). Let \n\\begin{equation}\n\\Sigma (\\theta ;\\gamma _{0})=J^{-1}(\\theta ;\\gamma _{0})V(\\theta ;\\gamma\n_{0})J^{-1}(\\theta ;\\gamma _{0})\\text{ and }\\Sigma (\\pi ;\\gamma _{0})=\\Sigma\n(\\psi _{0},\\pi ;\\gamma _{0}).  \\label{Sigma defn scalar beta}\n\\end{equation}%\nLet $\\Sigma _{\\beta \\beta }(\\pi ;\\gamma _{0})$ denote the upper left (1,1)\nelement of $\\Sigma (\\pi ;\\gamma _{0}).$\n\nAssumption V1 below applies when $\\beta $ is a scalar.\\medskip\n\n\\noindent \\textbf{Assumption V1 (scalar }$\\mathbf{\\beta }$\\textbf{). 
}(i) $%\n\\widehat{J}_{n}=\\widehat{J}_{n}(\\widehat{\\theta }_{n})$ and $\\widehat{V}_{n}=%\n\\widehat{V}_{n}(\\widehat{\\theta }_{n})$ for some (stochastic) $d_{\\theta\n}\\times d_{\\theta }$ matrix-valued functions $\\widehat{J}_{n}(\\theta )$ and $%\n\\widehat{V}_{n}(\\theta )$ on $\\Theta $ that satisfy $\\sup_{\\theta \\in \\Theta\n}||\\widehat{J}_{n}(\\theta )-J(\\theta ;\\gamma _{0})||\\rightarrow _{p}0$ and $%\n\\sup_{\\theta \\in \\Theta }||\\widehat{V}_{n}(\\theta )-V(\\theta ;\\gamma\n_{0})||\\rightarrow _{p}0$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$\nwith $|b|<\\infty .$\n\n\\noindent (ii) $J(\\theta ;\\gamma _{0})$ and $V(\\theta ;\\gamma _{0})$ are\ncontinuous in $\\theta $ on $\\Theta $ $\\forall \\gamma _{0}\\in \\Gamma $ with $%\n\\beta _{0}=0.$\n\n\\noindent (iii) $\\lambda _{\\min }(\\Sigma (\\pi ;\\gamma _{0}))>0$ and $\\lambda\n_{\\max }(\\Sigma (\\pi ;\\gamma _{0}))<\\infty $ $\\forall \\pi \\in \\Pi ,$ $%\n\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0.$\\medskip\n\nWhen $\\beta $ is a vector, i.e., $d_{\\beta }>1,$ we reparameterize $\\beta $\nas $(||\\beta ||,\\omega ),$ where $\\omega =\\beta /||\\beta ||$ if $\\beta \\neq\n0 $ and by definition $\\omega =1_{d_{\\beta }}/||1_{d_{\\beta }}||$ with $%\n1_{d_{\\beta }}=(1,...,1)\\in R^{d_{\\beta }}$ if $\\beta =0.$ Correspondingly, $%\n\\theta $ is reparameterized as $\\theta ^{+}=(||\\beta ||,\\omega ,\\zeta ,\\pi\n). $ Let $\\Theta ^{+}=\\{\\theta ^{+}:$ $\\theta ^{+}=(||\\beta ||,\\beta\n/||\\beta ||,\\zeta ,\\pi ),$ $\\theta \\in \\Theta \\}.$ Let $\\widehat{\\theta }%\n_{n}^{+}$ and $\\theta _{0}^{+}$ be the counterparts of $\\widehat{\\theta }%\n_{n} $ and $\\theta _{0}$ after reparametrization.\n\nWhen $\\beta $ is a vector, let $J(\\theta ^{+};\\gamma _{0})$ and $V(\\theta\n^{+};\\gamma _{0})$ denote some non-stochastic $d_{\\theta }\\times d_{\\theta }$\nmatrix-valued functions such that $J(\\theta _{0}^{+};\\gamma _{0})=J(\\gamma\n_{0})$ and $V(\\theta _{0}^{+};\\gamma _{0})=V(\\gamma _{0}).$ Let \n\\begin{eqnarray}\n\\Sigma (\\theta ^{+};\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\nJ^{-1}(\\theta ^{+};\\gamma _{0})V(\\theta ^{+};\\gamma _{0})J^{-1}(\\theta\n^{+};\\gamma _{0})\\text{ and}  \\notag \\\\\n\\Sigma (\\pi ,\\omega ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\Sigma\n(||\\beta _{0}||,\\omega ,\\zeta _{0},\\pi ;\\gamma _{0}).\n\\label{Sigma defn vector beta}\n\\end{eqnarray}%\nLet $\\Sigma _{\\beta \\beta }(\\pi ,\\omega ;\\gamma _{0})$ denote the upper left \n$d_{\\beta }\\times d_{\\beta }$ sub-matrix of $\\Sigma (\\pi ,\\omega ;\\gamma\n_{0}).$\n\nAssumption V1 below applies when $\\beta $ is a vector.\\medskip\n\n\\noindent \\textbf{Assumption V1 (vector }$\\mathbf{\\beta }$)\\textbf{. 
}(i) $%\n\\widehat{J}_{n}=\\widehat{J}_{n}(\\widehat{\\theta }_{n}^{+})$ and $\\widehat{V}%\n_{n}=\\widehat{V}_{n}(\\widehat{\\theta }_{n}^{+})$ for some (stochastic) $%\nd_{\\theta }\\times d_{\\theta }$ matrix-valued functions $\\widehat{J}%\n_{n}(\\theta ^{+})$ and $\\widehat{V}_{n}(\\theta ^{+})$ on $\\Theta ^{+}$ that\nsatisfy $\\sup_{\\theta ^{+}\\in \\Theta ^{+}}||\\widehat{J}_{n}(\\theta\n^{+})-J(\\theta ^{+};\\gamma _{0})||\\rightarrow _{p}0$ and $\\sup_{\\theta\n^{+}\\in \\Theta ^{+}}||\\widehat{V}_{n}(\\theta ^{+})-V(\\theta ^{+};\\gamma\n_{0})||\\rightarrow _{p}0$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$\nwith $||b||<\\infty .\\footnote{%\nThe functions $J(\\theta ^{+};\\gamma _{0})$ and $V(\\theta ^{+};\\gamma _{0})$\ndo not depend on $\\omega _{0},$ only $\\gamma _{0}.$}$\n\n\\noindent (ii) $J(\\theta ^{+};\\gamma _{0})$ and $V(\\theta ^{+};\\gamma _{0})$\nare continuous in $\\theta ^{+}$ on $\\Theta ^{+}$ $\\forall \\gamma _{0}\\in\n\\Gamma $ with $\\beta _{0}=0.$\n\n\\noindent (iii) $\\lambda _{\\min }(\\Sigma (\\pi ,\\omega ;\\gamma _{0}))>0$ and $%\n\\lambda _{\\max }(\\Sigma (\\pi ,\\omega ;\\gamma _{0}))<\\infty $ $\\forall \\pi\n\\in \\Pi ,$ $\\forall \\omega \\in R^{d_{\\beta }}$ with $||\\omega ||=1,$ $%\n\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0.$\n\n\\noindent (iv) $P(\\tau _{\\beta }(\\pi ^{\\ast }(\\gamma _{0},b),\\gamma\n_{0},b)=0)=0$ $\\forall \\gamma _{0}\\in \\Gamma $ with $\\beta _{0}=0$ and $%\n\\forall b$ with $||b||<\\infty .$\\medskip\n\nThe following assumption applies with both scalar and vector $\\beta .$%\n\\medskip\n\n\\noindent \\textbf{Assumption V2. }Under $\\Gamma (0,\\infty ,\\omega _{0}),$ $%\n\\widehat{J}_{n}\\rightarrow _{p}J(\\gamma _{0})$ and $\\widehat{V}%\n_{n}\\rightarrow _{p}V(\\gamma _{0}).$\\medskip\n\n\\noindent \\textbf{Example 1 (cont.). 
}In this example, $\\beta $ is a scalar.\nThe estimators of $J(\\gamma _{0})$ and $V(\\gamma _{0})$ are%\n\\begin{equation}\n\\widehat{J}_{n}=\\widehat{J}_{n}(\\widehat{\\theta }_{n})\\text{ and }\\widehat{V}%\n_{n}=\\widehat{V}_{n}(\\widehat{\\theta }_{n}),\n\\end{equation}%\nrespectively, where%\n\\begin{eqnarray}\n\\widehat{J}_{n}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\widehat{J}%\n_{g,n}(\\theta )^{\\prime }\\mathcal{W}_{n}\\widehat{J}_{g,n}\\left( \\theta\n\\right) ,  \\notag \\\\\n\\widehat{V}_{n}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\widehat{J}%\n_{g,n}(\\theta )^{\\prime }\\mathcal{W}_{n}\\widehat{V}_{g,n}\\left( \\theta\n\\right) \\mathcal{W}_{n}\\widehat{J}_{g,n}\\left( \\theta \\right) ,  \\notag \\\\\n\\widehat{J}_{g,n}(\\theta )^{\\prime }\\hspace{-0.08in} &=&\\hspace{-0.08in}%\nn^{-1}\\sum_{i=1}^{n}Z_{i}d_{i}(\\pi )^{\\prime },\\text{ and }\\widehat{V}%\n_{g,n}\\left( \\theta \\right) =n^{-1}\\sum_{i=1}^{n}U_{i}^{2}(\\theta\n)Z_{i}Z_{i}^{\\prime }.\n\\end{eqnarray}%\nThe key quantities in Assumption V1 (scalar $\\beta $) are \n\\begin{eqnarray}\n&&J(\\theta ;\\gamma _{0})\\overset{}{=}J_{g}(\\theta ;\\gamma _{0})^{\\prime }%\n\\mathcal{W}(\\gamma _{0})J_{g}(\\theta ;\\gamma _{0})\\text{ and}  \\notag \\\\\n&&V(\\theta ;\\gamma _{0})\\overset{}{=}J_{g}(\\theta ;\\gamma _{0})^{\\prime }%\n\\mathcal{W}(\\gamma _{0})V_{g}(\\theta ;\\gamma _{0})\\mathcal{W}(\\gamma\n_{0})J_{g}(\\theta ;\\gamma _{0}),\\text{ where}  \\notag \\\\\n&&J_{g}(\\theta ;\\gamma _{0})\\overset{}{=}-E_{\\phi _{0}}Z_{i}d_{i}(\\pi\n)^{\\prime },\\text{ }\\mathcal{W}(\\gamma _{0})\\overset{}{=}(E_{\\phi\n_{0}}Z_{i}Z_{i}^{\\prime })^{-1},\\text{ and}  \\label{gmm J and V} \\\\\n&&V_{g}(\\theta ;\\gamma _{0})\\overset{}{=}E_{\\phi\n_{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime }+2E_{\\phi _{0}}[\\beta _{0}h(X_{1,i},\\pi\n_{0})-\\beta h(X_{1,i},\\pi )+X_{2,i}(\\zeta _{0}-\\zeta )]Z_{i}Z_{i}^{\\prime } \n\\notag \\\\\n&&\\hspace{0.78in}+E_{\\phi _{0}}[\\beta _{0}h(X_{1,i},\\pi _{0})-\\beta\nh(X_{1,i},\\pi )+X_{2,i}^{\\prime }(\\zeta _{0}-\\zeta )]^{2}Z_{i}Z_{i}^{\\prime\n}.  
\\notag\n\\end{eqnarray}\n\nAssumption V1(i) holds by the uniform LLN given in Lemma \\ref{Lemma uniform\nconvergence} in Supplemental Appendix D using the moment conditions in (\\ref%\n{gmm phi space}), Assumption GMM1(ii), and the continuous mapping theorem.\nAssumption V1(ii) holds by the continuity of $h(x,\\pi )$ and $h_{\\pi }(x,\\pi\n)$ in $\\pi $ and the conditions in (\\ref{gmm phi space}).\n\nTo verify Assumption V1(iii), note that%\n\\begin{eqnarray}\n\\Sigma (\\pi ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}J^{-1}(\\psi\n_{0},\\pi ;\\gamma _{0})V(\\psi _{0},\\pi ;\\gamma _{0})J^{-1}(\\psi _{0},\\pi\n;\\gamma _{0}),\\text{ where}  \\notag \\\\\nJ_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n-E_{\\phi _{0}}Z_{i}d_{i}(\\pi )^{\\prime }\\text{ and }V_{g}(\\psi _{0},\\pi\n;\\gamma _{0})=E_{\\phi _{0}}U_{i}^{2}Z_{i}Z_{i}^{\\prime }\n\\end{eqnarray}%\nwhen $\\beta _{0}=0.$ We have: $\\Sigma (\\pi ;\\gamma _{0})$ is positive\ndefinite (pd) and finite $\\forall \\pi \\in \\Pi $ because both $J(\\psi\n_{0},\\pi ;\\gamma _{0})$ and $V(\\psi _{0},\\pi ;\\gamma _{0})$ are pd and\nfinite, which in turn holds because (i) $\\mathcal{W}(\\gamma _{0})$ is pd and\nfinite by Assumption GMM1(vii), (ii) $J_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\in\nR^{k\\times d_{\\theta }}$ has full rank by (\\ref{gmm phi space}), and (iii) $%\nV_{g}(\\psi _{0},\\pi ;\\gamma _{0})$ is pd and finite by (\\ref{gmm phi space}%\n). This completes the verification of Assumption V1.\n\nAssumptions V1(i) and V1(ii) hold not only under $\\{\\gamma _{n}\\}\\in \\Gamma\n(\\gamma _{0},0,b),$ but also under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},\\infty ,\\omega _{0})$ in this example. This and $\\widehat{\\theta }%\n_{n}\\rightarrow _{p}\\theta _{0}$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},\\infty ,\\omega _{0}),$ which holds by Theorem \\ref{Thm dist'n of\nestimator b=inf} (because Assumptions GMM1-GMM5, B1, and B2 have been\nverified above), imply that Assumption V2 holds. This completes the\nverification of Assumption V2. $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Asymptotic Null\nDistribution of the Wald Statistic\\label{Subsec Asy Distn Wald Stat}}\n\n\\hspace{0.25in}The asymptotic null distribution of the Wald statistic under $%\nH_{0}$ depends on the following quantities. The limit distribution of $%\n\\widehat{\\omega }_{n}(\\pi )=\\widehat{\\beta }_{n}(\\pi )/||\\widehat{\\beta }%\n_{n}(\\pi )||$ under $\\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty $ is given\nby%\n\\begin{equation}\n\\omega ^{\\ast }(\\pi ;\\gamma _{0},b)=\\frac{\\tau _{\\beta }(\\pi ;\\gamma _{0},b)%\n}{||\\tau _{\\beta }(\\pi ;\\gamma _{0},b)||}\\text{ for }\\pi \\in \\Pi ,\n\\label{Omegastar defn}\n\\end{equation}%\nwhere $\\tau _{\\beta }(\\pi ;\\gamma _{0},b)$ is defined in (\\ref{Asy Distn of\nBetahat(pi)}). 
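When $\\beta $ is a scalar, $||\\tau _{\\beta }(\\pi ;\\gamma _{0},b)||=|\\tau\n_{\\beta }(\\pi ;\\gamma _{0},b)|,$ so $\\omega ^{\\ast }(\\pi ;\\gamma _{0},b)$\nreduces to $\\mathrm{sgn}(\\tau _{\\beta }(\\pi ;\\gamma _{0},b))\\in \\{-1,1\\};$\nwhen $\\beta $ is a vector, $\\omega ^{\\ast }(\\pi ;\\gamma _{0},b)$ takes values\non the unit sphere in $R^{d_{\\beta }}.$ 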
Let $\\overline{B}(\\pi ;\\gamma _{0},b)$ be a $d_{r}\\times\nd_{r} $ matrix-valued function of $\\tau _{\\beta }(\\pi ;\\gamma _{0},b)$\ndefined as%\n\\begin{equation}\n\\overline{B}(\\pi ;\\gamma _{0},b)=\\left[ \n\\begin{array}{cc}\nI_{(d_{r}-d_{\\pi }^{\\ast })} & 0 \\\\ \n0 & \\iota (\\tau _{\\beta }(\\pi ;\\gamma _{0},b))I_{d_{\\pi }^{\\ast }}%\n\\end{array}%\n\\right]\n\\end{equation}%\nwhere $\\iota (\\beta )=\\beta $ when $\\beta $ is a scalar and $\\iota (\\beta\n)=||\\beta ||$ when $\\beta $ is a vector.\n\nLet%\n\\begin{eqnarray}\nr_{\\theta }^{\\ast }(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}r_{\\theta\n}^{\\ast }(\\psi _{0},\\pi ),\\text{ }r_{\\psi }^{\\ast }(\\pi )=r_{\\psi }^{\\ast\n}(\\psi _{0},\\pi )\\text{ and}  \\notag \\\\\n\\overline{\\Sigma }(\\pi ;\\gamma _{0},b)\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\left\\{ \n\\begin{array}{ll}\n\\Sigma (\\pi ;\\gamma _{0}) & \\text{if }\\beta \\text{ is a scalar} \\\\ \n\\Sigma (\\pi ,\\omega ^{\\ast }(\\pi ;\\gamma _{0},b);\\gamma _{0}) & \\text{if }%\n\\beta \\text{ is a vector,}%\n\\end{array}%\n\\right.  \\label{Sigmabar defn}\n\\end{eqnarray}%\nwhere $\\Sigma (\\pi ;\\gamma _{0})$ and $\\Sigma (\\pi ,\\omega ;\\gamma _{0})$\nare defined in (\\ref{Sigma defn scalar beta}) and (\\ref{Sigma defn vector\nbeta}), respectively.\n\nDefine a stochastic process $\\{\\lambda (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$\nby%\n\\begin{eqnarray}\n&&\\hspace{-0.08in}\\lambda (\\pi ;\\gamma _{0},b)\\hspace{-0.08in}  \\notag \\\\\n&=&\\hspace{-0.08in}\\tau ^{A}(\\pi ;\\gamma _{0},b)^{\\prime }\\overline{B}(\\pi\n;\\gamma _{0},b)(r_{\\theta }^{\\ast }(\\pi )\\overline{\\Sigma }(\\pi ;\\gamma\n_{0},b)r_{\\theta }^{\\ast }(\\pi )^{\\prime })^{-1}\\overline{B}(\\pi ;\\gamma\n_{0},b)\\tau ^{A}(\\pi ;\\gamma _{0},b),\\text{ where}  \\notag \\\\\n&&\\hspace{-0.25in}\\tau ^{A}(\\pi ;\\gamma _{0},b)\\overset{}{=}\\left( \n\\begin{array}{c}\nr_{\\psi }^{\\ast }(\\pi )\\tau (\\pi ;\\gamma _{0},b) \\\\ \nA_{2}(\\psi _{0},\\pi )(r(\\psi _{0},\\pi )-r(\\psi _{0},\\pi _{0}))%\n\\end{array}%\n\\right) \\overset{}{\\in }R^{d_{r}}.\n\\end{eqnarray}\n\nWith linear restrictions, the stochastic process $\\lambda (\\pi ;\\gamma\n_{0},b)$ can be simplified. 
Under Assumption R$_{\\text{L}}$, $r_{\\theta\n}(\\theta )=R$ does not depend on $\\theta ,$ and, hence, $A(\\theta )$ and $%\nr_{\\theta }^{\\ast }(\\theta )$ do not depend on $\\theta .$ Define $R^{\\ast\n}=r_{\\theta }^{\\ast }(\\theta )$ under Assumption R$_{\\text{L}}$.\nSpecifically, \n\\begin{equation}\nR^{A}=AR=\\left[ \n\\begin{array}{cc}\nR_{\\psi }^{\\ast } & 0 \\\\ \nR_{\\psi }^{0} & R_{\\pi }^{\\ast }%\n\\end{array}%\n\\right] \\text{ and }R^{\\ast }=\\left[ \n\\begin{array}{cc}\nR_{\\psi }^{\\ast } & 0 \\\\ \n0 & R_{\\pi }^{\\ast }%\n\\end{array}%\n\\right] ,\n\\end{equation}%\nwhere $R_{\\psi }^{\\ast }\\in R^{(d_{r}-d_{\\pi }^{\\ast })\\times d_{\\psi }}$\nand $R_{\\pi }^{\\ast }\\in R^{d_{\\pi }^{\\ast }\\times d_{\\pi }}.$\n\nDefine a stochastic process $\\{\\lambda _{L}(\\pi ;\\gamma _{0},b):\\pi \\in \\Pi\n\\}$ by%\n\\begin{eqnarray}\n&&\\hspace{-0.08in}\\lambda _{L}(\\pi ;\\gamma _{0},b)  \\notag \\\\\n&=&\\hspace{-0.08in}\\overline{\\tau }(\\pi ;\\gamma _{0},b)^{\\prime }R^{\\ast\n\\prime }\\overline{B}(\\pi ;\\gamma _{0},b)(R^{\\ast }\\overline{\\Sigma }(\\pi\n;\\gamma _{0},b)R^{\\ast \\prime })^{-1}\\overline{B}(\\pi ;\\gamma _{0},b)R^{\\ast\n}\\overline{\\tau }(\\pi ;\\gamma _{0},b),\\text{ where}  \\notag \\\\\n&&\\hspace{-0.25in}\\overline{\\tau }(\\pi ;\\gamma _{0},b)\\overset{}{=}(\\tau\n(\\pi ;\\gamma _{0},b)^{\\prime },(\\pi -\\pi _{0})^{\\prime })^{\\prime }\\overset{}%\n{\\in }R^{d_{\\theta }}.  \\label{Omega Process}\n\\end{eqnarray}%\nUnder the linear restriction of Assumption R$_{\\text{L}}$, $\\lambda _{L}(\\pi\n;\\gamma _{0},b)=\\lambda (\\pi ;\\gamma _{0},b)$ and the asymptotic\ndistribution of the Wald statistic can be simplified by replacing the\nstochastic process $\\{\\lambda (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ with $%\n\\{\\lambda _{L}(\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ in the asymptotic results\nbelow.\n\nThe following theorem establishes the asymptotic null distribution of the\nWald statistic for nonlinear restrictions that satisfy Assumption R2. (The\nnull holds by the definition $W_{n}=W_{n}(v_{n})$ in (\\ref{Null Defn of Wald\nand t Stats}).)\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{.} \\label{Theorem Wald Nonlinear}Suppose Assumptions \n\\emph{B1-B2, R1-R2, }and \\emph{V1-V2 }hold. In addition, suppose Assumptions \n\\emph{GMM1-GMM5 }hold \\emph{(}or Assumptions \\emph{A, B3, C1-C8, }and \\emph{%\nD1-D3} of \\emph{AC1 }hold\\emph{).}\n\n\\noindent \\emph{(a) }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$\nwith $||b||<\\infty ,$ $W_{n}\\rightarrow _{d}\\lambda (\\pi ^{\\ast }(\\gamma\n_{0},b);\\gamma _{0},b).$\n\n\\noindent \\emph{(b) }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty\n,\\omega _{0}),$ $W_{n}\\rightarrow _{d}\\chi _{d_{r}}^{2}.$\n\\end{theorem}\n\nA special case of Theorem \\ref{Theorem Wald Nonlinear} is the following\nresult for linear restrictions.\n\n\\begin{corollary}\n\\hspace{-0.08in}\\textbf{.} \\label{Cor Wald Linear}Suppose Assumptions \\emph{%\nB1-B2, R}$_{\\text{\\emph{L}}},$ and \\emph{V1-V2} hold. 
In addition, suppose\nAssumptions \\emph{GMM1-GMM5 }hold \\emph{(}or Assumptions \\emph{A, B1-B3,\nC1-C8, }and \\emph{D1-D3 }of \\emph{AC1 }hold\\emph{).}\n\n\\noindent \\emph{(a) }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$\nwith $||b||<\\infty ,$ $W_{n}\\rightarrow _{d}\\lambda _{L}(\\pi ^{\\ast }(\\gamma\n_{0},b);\\gamma _{0},b).$\n\n\\noindent \\emph{(b) }Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty\n,\\omega _{0}),$ $W_{n}\\rightarrow _{d}\\chi _{d_{r}}^{2}.$\n\\end{corollary}\n\nSpecific forms of the stochastic process $\\lambda (\\pi ;\\gamma _{0},b)$ are\nprovided in the following examples. In Examples r1-r4, $r(\\theta )$ is\nlinear in $\\theta $ and Corollary \\ref{Cor Wald Linear} applies. In Example\nr5, $r(\\theta )$ is nonlinear in $\\theta $ and Assumption R2 is\nverified.\\medskip\n\n\\noindent \\textbf{Example r1.}\\label{exp 1} When $r(\\theta )=\\psi ,$ $%\nR=R^{\\ast }=[I_{d_{\\psi }}:0],$ and $\\lambda _{L}(\\pi ;\\gamma _{0},b)=\\tau\n(\\pi ;\\gamma _{0},b)^{\\prime }\\overline{\\Sigma }_{\\psi \\psi }^{-1}(\\pi\n;\\allowbreak \\gamma _{0},b)\\tau (\\pi ;\\gamma _{0},b),$ where $\\overline{%\n\\Sigma }_{\\psi \\psi }(\\pi ;\\gamma _{0},b)$ is the upper left $d_{\\psi\n}\\times d_{\\psi }$ block of $\\overline{\\Sigma }(\\pi ;\\gamma _{0},b).$\\medskip\n\n\\noindent \\textbf{Example r2.}\\label{exp 2} When $r(\\theta )=\\pi ,$ $%\nR=R^{\\ast }=[0:I_{d_{\\pi }}],$ and $\\lambda _{L}(\\pi ;\\gamma _{0},b)=||\\tau\n_{\\beta }(\\pi ;\\gamma _{0},b)||^{2}(\\pi -\\pi _{0})^{\\prime }\\allowbreak \n\\overline{\\Sigma }_{\\pi \\pi }^{-1}(\\pi ;\\gamma _{0},b)(\\pi -\\pi _{0}),$\nwhere $\\overline{\\Sigma }_{\\pi \\pi }(\\pi ;\\gamma _{0},b)$ is the lower right \n$d_{\\pi }\\times d_{\\pi }$ block of $\\overline{\\Sigma }(\\pi ;\\gamma _{0},b).$%\n\\medskip\n\n\\noindent \\textbf{Example r3.}\\label{exp 3} When $d_{\\psi }=d_{\\pi }$ and $%\nr(\\theta )=\\psi +\\pi ,$ $R=[I_{d_{\\psi }}:I_{d_{\\pi }}],$ $R^{\\ast\n}=[0_{d_{\\psi }}:I_{d_{\\pi }}],$ and $\\lambda _{L}(\\pi ;\\gamma\n_{0},b)=||\\tau _{\\beta }(\\pi ;\\gamma _{0},b)||^{2}(\\pi -\\pi _{0})^{\\prime }%\n\\overline{\\Sigma }_{\\pi \\pi }^{-1}(\\pi ;\\gamma _{0},b)(\\pi -\\pi _{0}).$ Note\nthat $\\lambda _{L}(\\pi ;\\gamma _{0},b)$ is the same in this example as in\nExample r2. This occurs because $d_{\\pi }^{\\ast }=d_{r}$ so that the\nrandomness in $\\widehat{\\psi }_{n}$ is completely dominated by that in $%\n\\widehat{\\pi }_{n}.$ Although $R$ is different in Examples r2 and r3, $%\nR^{\\ast }$ is the same in both examples.\\medskip\n\n\\noindent \\textbf{Example r4.}\\label{exp 4} When $r(\\theta )=\\theta ,$ $%\nR=R^{\\ast }=I_{d_{\\theta }},$ and $\\lambda _{L}(\\pi ;\\gamma _{0},b)=%\n\\overline{\\tau }(\\pi ;\\gamma _{0},b)^{\\prime }\\overline{B}(\\pi ;\\gamma\n_{0},b)\\allowbreak \\overline{\\Sigma }^{-1}(\\pi ;\\gamma _{0},b)\\overline{B}%\n(\\pi ;\\gamma _{0},b)\\overline{\\tau }(\\pi ;\\gamma _{0},b).$\\medskip\n\n\\noindent \\textbf{Example r5.}\\label{exp 5} When $\\theta =(\\beta ,\\pi\n)^{\\prime },$ $r(\\theta )=(\\beta ,\\pi ^{2})^{\\prime },$ and $\\beta $ and $%\n\\pi $ are scalars, we have \n\\begin{equation}\nr_{\\theta }(\\theta )=r_{\\theta }^{\\ast }(\\theta )=\\left[ \n\\begin{array}{cc}\n1 & 0 \\\\ \n0 & 2\\pi%\n\\end{array}%\n\\right] ,\\text{ and }A(\\theta )=I_{2}.\n\\end{equation}%\nAssumption R2$^{\\ast }$(iii) holds because $A_{2}(\\theta )$ does not depend\non $\\theta .$ This implies that Assumption R2 holds. 
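Concretely, here%\n\\begin{equation}\nr_{\\pi }(\\theta )=\\left[ \n\\begin{array}{c}\n0 \\\\ \n2\\pi%\n\\end{array}%\n\\right] =2\\pi \\left[ \n\\begin{array}{c}\n0 \\\\ \n1%\n\\end{array}%\n\\right] ,\n\\end{equation}%\nso Assumption R2$^{\\ast }$(iii) holds in the form $r_{\\pi }(\\theta\n)=a(\\theta )R_{\\pi }$ with $a(\\theta )=2\\pi $ and $R_{\\pi }=(0,1)^{\\prime },$\nwhere $a(\\theta )\\neq 0$ provided $0\\notin \\Pi .$ 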
The stochastic process $%\n\\{\\tau ^{A}(\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ can be simplified to $\\tau\n^{A}(\\pi ;\\gamma _{0},b)=(\\tau (\\pi ;\\gamma _{0},b),\\pi ^{2}-\\pi _{0}^{2}).$%\n\\medskip\n\nNext we show that Assumption R2 is not superfluous. In certain cases, the\nWald statistic diverges to infinity in probability under $H_{0}.$\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{.} \\label{Theorem Divergence}Suppose Assumptions \n\\emph{B1-B2, R1, }and \\emph{V1 }hold. In addition, suppose Assumptions \\emph{%\nGMM1-GMM4 }hold \\emph{(}or Assumptions \\emph{A, B1-B3, }and \\emph{C1-C8 }of \n\\emph{AC1 }hold\\emph{).} Under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b),$\n$W_{n}\\rightarrow _{p}\\infty $ if $||\\eta _{n}(\\widehat{\\theta }%\n_{n})||\\rightarrow _{p}\\infty .$\n\\end{theorem}\n\n\\noindent \\textbf{Comment. }This theorem provides a high-level condition\nunder which the Wald statistic diverges to infinity in probability under the\nnull. This result holds for sequences $\\{\\gamma _{n}\\}$ in both the weak and\nsemi-strong identification categories. The Wald statistic, which uses $%\nr_{\\theta }(\\widehat{\\theta }_{n})$ in the covariance matrix estimation, is\ndesigned for the standard case in which $\\widehat{\\theta }_{n}$ converges to \n$\\theta _{n}$ at rate $n^{-1/2}.$ When $\\widehat{\\pi }_{n}$ is inconsistent\nor converges to $\\pi _{n}$ slower than $n^{-1/2},$ the estimator of the\ncovariance matrix does not necessarily provide a proper normalization for\nthe Wald statistic to have a non-degenerate limit.$\\medskip $\n\n\\noindent \\textbf{Example r6.}\\label{exp 6} We now demonstrate that\nrestrictions exist for which Assumption R2 fails to hold. Suppose $\\theta\n=(\\beta ,\\pi )^{\\prime },$ $r(\\theta )=((\\beta +1)\\pi ,\\pi ^{2})^{\\prime },$\nand $\\beta $ and $\\pi $ are both scalars. Then, we have%\n\\begin{eqnarray}\nr_{\\theta }(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left[ \n\\begin{array}{cc}\n\\pi  & \\beta +1 \\\\ \n0 & 2\\pi \n\\end{array}%\n\\right] ,\\text{ }A_{1}(\\theta )=\\frac{1}{||(-2\\pi ,\\beta +1)||}(-2\\pi ,\\beta\n+1),\\text{ and}  \\notag \\\\\n\\eta _{n}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}-\\frac{n^{1/2}}{%\n||(-2\\pi ,\\beta +1)||}[-2\\pi (\\beta _{n}+1)(\\pi -\\pi _{n})+(\\beta +1)(\\pi\n^{2}-\\pi _{n}^{2})].\n\\end{eqnarray}%\nConsider a sequence $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b).$ Suppose\nAssumptions B1, B2, and GMM1-GMM5 hold. If $|b|<\\infty ,$ assume $P(\\pi\n^{\\ast }(\\gamma _{0},b)=0)=0$ (which typically holds when $\\Pi $ contains a\nnondegenerate interval). 
Some calculations show that under $\\{\\gamma _{n}\\},$\nwe have $\\eta _{n}(\\widehat{\\theta }_{n})=||(-2\\pi _{0},1)||^{-1}\\allowbreak\n\\times \\lbrack n^{1/2}\\beta _{n}(\\widehat{\\pi }_{n}-\\pi\n_{n})]^{2}(n^{1/4}\\beta _{n})^{-2}(1+o(1))+O_{p}(1).$\\footnote{%\nThis holds because $\\eta _{n}(\\theta )=-\\frac{n^{1/2}}{||(-2\\pi ,\\beta +1)||}%\n[-2\\pi (\\beta _{n}+1)(\\pi -\\pi _{n})+\\allowbreak (\\beta _{n}+1)(\\pi ^{2}-\\pi\n_{n}^{2})+\\allowbreak (\\beta -\\beta _{n})(\\pi ^{2}-\\pi _{n}^{2})]\\allowbreak\n=\\frac{n^{1/2}}{||(-2\\pi ,\\beta +1)||}[(\\beta _{n}+1)(\\pi -\\pi\n_{n})^{2}-\\allowbreak (\\beta -\\beta _{n})(\\pi ^{2}-\\pi _{n}^{2})].$ Hence, $%\n\\eta _{n}(\\widehat{\\theta }_{n})||(-2\\widehat{\\pi }_{n},\\widehat{\\beta }%\n_{n}+1)||\\allowbreak =n^{1/2}(\\widehat{\\pi }_{n}-\\pi\n_{n})^{2}(1+o(1))-\\allowbreak n^{1/2}(\\widehat{\\beta }_{n}-\\beta _{n})(%\n\\widehat{\\pi }_{n}^{2}-\\pi _{n}^{2})\\allowbreak =[n^{1/2}\\beta _{n}(\\widehat{%\n\\pi }_{n}-\\pi _{n})]^{2}(n^{1/4}\\beta _{n})^{-2}\\allowbreak (1+o(1))+O_{p}(1)\n$ using Theorem \\ref{Thm dist'n of estimator b=finite}(a) or \\ref{Thm dist'n\nof estimator b=inf}(a). (The $O_{p}(1)$ term is $o_{p}(1)$ if $|b|=\\infty .$%\n) Because $||(-2\\widehat{\\pi }_{n},\\widehat{\\beta }_{n}+1)||\\rightarrow\n_{p}\\allowbreak ||(-2\\pi _{0},1)||<\\infty ,$ the claim follows.} In\nconsequence, if $n^{1/4}\\beta _{n}\\rightarrow 0,$ then $\\eta _{n}(\\widehat{%\n\\theta }_{n})\\rightarrow _{p}\\infty $ and Theorem \\ref{Theorem Divergence}\napplies.\\footnote{%\nWhen $|b|=\\infty ,$ this holds because $n^{1/2}\\beta _{n}(\\widehat{\\pi }%\n_{n}-\\pi _{n})$ has an asymptotic normal distribution by Theorem \\ref{Thm\ndist'n of estimator b=inf}(a). When $|b|<\\infty ,$ this holds because $%\n[n^{1/2}\\beta _{n}(\\widehat{\\pi }_{n}-\\pi _{n})]^{2}(n^{1/4}\\beta\n_{n})^{-2}=n^{1/2}(\\widehat{\\pi }_{n}-\\pi _{n})^{2},$ $\\widehat{\\pi }%\n_{n}\\rightarrow _{d}\\pi ^{\\ast }(\\gamma _{0},b)$ by Theorem \\ref{Thm dist'n\nof estimator b=finite}(a), and $P(\\pi ^{\\ast }(\\gamma _{0},b)=0)=0.$}\n\nSequences for which $n^{1/2}\\beta _{n}\\rightarrow \\infty $ and $n^{1/4}\\beta\n_{n}\\rightarrow 0$ are in the semi-strong identification category. Hence,\nthis example shows that even for sequences in the semi-strong identification\ncategory, in which case both $\\widehat{\\beta }_{n}$ and $\\widehat{\\pi }_{n}$\nare consistent and asymptotically normal, the Wald statistic can diverge to\ninfinity for nonlinear restrictions due to the different rates of\nconvergence of $\\widehat{\\beta }_{n}$ and $\\widehat{\\pi }_{n}.$\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Asymptotic Distribution\nof the Wald Statistic\\newline\nUnder the Alternative\\label{Wald under Alt Subsec}}\n\n\\hspace{0.25in}Next, we provide the asymptotic distributions of the Wald\nstatistic under alternative hypotheses, which yield power results for the\nWald test and false coverage probabilities for Wald CS's. Suppose the\nconditions of Theorem \\ref{Theorem Wald Nonlinear} hold. The following\nresults are obtained by modifying the proof of Theorem \\ref{Theorem Wald Nonlinear}.\nSuppose the sequence of null hypothesis values of $r(\\theta )$ is $%\n\\{v_{n,0}^{null}:n\\geq 1\\}.\\footnote{%\nBy allowing $v_{n,0}^{null}$ to depend on $n,$ we obtain results for\ndrifting null values. For example, if $r(\\theta )=\\beta ,$ this provides\nresults when the null and local alternative values of $\\beta $ are $n^{-1/2}$%\n-local to zero. 
This is useful for obtaining asymptotic false coverage\nprobabilities of CS's for $\\beta $ when the true value of $\\beta $ is close\nto zero. In this case, the relevant null values also are close to zero, in an \n$n^{-1/2}$-local to zero sense.}$ We consider the case where the true\nparameters $\\{\\gamma _{n}\\}$ satisfy $r(\\theta _{n})\\neq v_{n,0}^{null}.$\n\nFirst, consider the alternative hypothesis distributions $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0},0,b)$ with $b\\in R^{d_{\\beta }}.$ Suppose the sequence\nof true values $\\{\\theta _{n}\\}$ satisfies $n^{1/2}(r(\\theta\n_{n})-v_{n,0}^{null})\\rightarrow d$ for some $d\\in R^{d_{r}}.$ Then, the\nasymptotic distribution of $W_{n}(v_{n,0}^{null})$ is given by the\nexpression in Theorem \\ref{Theorem Wald Nonlinear}(a), but with $\\tau\n^{A}(\\pi ;\\gamma _{0},b)$ in the definition of $\\lambda (\\pi ;\\gamma _{0},b)$\nreplaced by $\\tau ^{A\\ast }(\\pi ;\\gamma _{0},b)=\\tau ^{A}(\\pi ;\\gamma\n_{0},b)+(A_{1}(\\psi _{0},\\pi )d,0_{d_{\\pi }^{\\ast }}).$ Alternatively,\nsuppose the sequence of true values satisfies $r(\\theta\n_{n})-v_{n,0}^{null}\\rightarrow d_{0}\\in R^{d_{r}}$ and $d_{0}\\neq 0.$ When $%\nA_{1}(\\theta )\\neq 0$ $\\forall \\theta \\in \\Theta ,$ $W_{n}(v_{n,0}^{null})%\n\\rightarrow _{p}\\infty .$ When $A_{1}(\\theta )=0$ $\\forall \\theta \\in \\Theta\n,$ the asymptotic distribution of $W_{n}(v_{n,0}^{null})$ is given by the\nexpression in Theorem \\ref{Theorem Wald Nonlinear}(a), but with $\\tau\n^{A}(\\pi ;\\gamma _{0},b)$ in the definition of $\\lambda (\\pi ;\\gamma _{0},b)$\nreplaced by $\\tau ^{A\\ast \\ast }(\\pi ;\\gamma _{0},b)=\\tau ^{A}(\\pi ;\\gamma\n_{0},b)+(0_{d_{r}-d_{\\pi }^{\\ast }},A_{2}(\\psi _{0},\\pi )d_{0}).$\n\nNext, consider the alternative hypothesis distributions $\\{\\gamma _{n}\\}\\in\n\\Gamma (\\gamma _{0},\\infty ,\\omega _{0})$ with $\\beta _{0}\\neq 0.$ When $%\nn^{1/2}(r(\\theta _{n})-v_{n,0}^{null})\\rightarrow d$ for some $d\\in\nR^{d_{r}},$ $W_{n}(v_{n,0}^{null})$ converges in distribution to a\nnon-central $\\chi _{d_{r}}^{2}$ distribution with noncentrality parameter $%\n\\delta ^{2}=d^{\\prime }(r_{\\theta }(\\theta _{0})B^{-1}(\\beta\n_{0})\\allowbreak \\Sigma (\\gamma _{0})B^{-1}(\\beta _{0})r_{\\theta }(\\theta\n_{0})^{\\prime })^{-1}d.$ Alternatively, when $r(\\theta\n_{n})-v_{n,0}^{null}\\rightarrow d_{0}$ for some $d_{0}\\in R^{d_{r}}$ with $%\nd_{0}\\neq 0,$ $W_{n}\\rightarrow _{p}\\infty .$\n\nLastly, consider the alternative hypothesis distributions $\\{\\gamma\n_{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0})$ with $\\beta _{0}=0.$\nSuppose the restrictions satisfy $r(\\theta )=(r_{1}(\\psi ),r_{2}(\\theta ))$\nfor $r_{2}(\\theta )\\in R^{d_{\\pi }^{\\ast }}$ with $d_{\\pi }^{\\ast }\\geq 0$\nand the $d_{\\pi }^{\\ast }\\times d_{\\pi }$ matrix $(\\partial /\\partial \\pi\n^{\\prime })r_{2}(\\theta )$ has full rank $d_{\\pi }^{\\ast }.\\footnote{%\nUnder these conditions on $r(\\theta ),$ one can take $A(\\theta )=I_{d_{r}}.$}\n$ Let $v_{n,0}^{null}=(v_{n,0,1}^{null},v_{n,0,2}^{null})$ for $%\nv_{n,0,2}^{null}\\in R^{d_{\\pi }^{\\ast }}.$ When%\n\\begin{equation}\nn^{1/2}(r_{1}(\\theta _{n})-v_{n,0,1}^{null})\\rightarrow d_{1}\\in\nR^{d_{r}-d_{\\pi }^{\\ast }}\\text{ and }n^{1/2}\\iota (\\beta _{n})(r_{2}(\\theta\n_{n})-v_{n,0,2}^{null})\\rightarrow d_{2}\\in R^{d_{\\pi }^{\\ast }},\n\\label{Local deviations}\n\\end{equation}%\nthe asymptotic distribution of $W_{n}(v_{n,0}^{null})$ is a non-central $%\n\\chi _{d_{r}}^{2}$ distribution with 
noncentrality parameter $\\delta\n^{2}=d^{\\prime }(r_{\\theta }^{\\ast }(\\theta _{0})\\Sigma (\\gamma\n_{0})r_{\\theta }^{\\ast }(\\theta _{0})^{\\prime })^{-1}d,$ where $%\nd=(d_{1},d_{2})\\in R^{d_{r}}.$ Note that the local alternatives in (\\ref%\n{Local deviations}) are $n^{-1/2}$-alternatives for the $r_{1}(\\psi )$\nrestrictions, but are more distant $n^{-1/2}\\iota (\\beta _{n})^{-1}$%\n-alternatives for the $r_{2}(\\theta )$ restrictions due to the slower $%\nn^{1/2}\\iota (\\beta _{n})$-rate of convergence of $\\widehat{\\pi }_{n}$ in\nthe present context. Alternatively, when $r(\\theta\n_{n})-v_{n,0}^{null}\\rightarrow d_{0}$ for some $d_{0}\\in R^{d_{r}}$ with $%\nd_{0}\\neq 0,$ $W_{n}\\rightarrow _{p}\\infty .$\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Asymptotic Size of\nStandard Wald Confidence Sets}\n\n\\hspace{0.25in}Here, we determine the asymptotic size of a standard CS for $%\nr(\\theta )\\in R^{d_{r}}$ obtained by inverting Wald tests, i.e., \n\\begin{equation}\nCS_{W,n}=\\{v:W_{n}(v)\\leq \\chi _{d_{r},1-\\alpha }^{2}\\},  \\label{wald CS}\n\\end{equation}%\nwhere the Wald statistic $W_{n}(v)$ is as in (\\ref{Defn Wald}), $\\chi\n_{d_{r},1-\\alpha }^{2}$ is the $1-\\alpha $ quantile of a chi-square\ndistribution with $d_{r}$ degrees of freedom, and $1-\\alpha $ is the nominal\nsize of the CS.\n\nThe asymptotic size of the CS above is determined using the asymptotic\ndistribution of $W_{n}=W_{n}(r(\\theta _{n}))$ under drifting sequences of\ntrue parameters, as given in Theorems \\ref{Theorem Wald Nonlinear} and \\ref%\n{Theorem Divergence}. For $||b||<\\infty ,$ define \n\\begin{eqnarray}\nh\\hspace{-0.08in} &=&\\hspace{-0.08in}(b,\\gamma _{0}),\\text{ }H=\\{h=(b,\\gamma\n_{0}):||b||<\\infty ,\\gamma _{0}\\in \\Gamma \\text{ with }\\beta _{0}=0\\},\\text{\nand}  \\notag \\\\\nW(h)\\hspace{-0.08in} &=&\\hspace{-0.08in}\\lambda (\\pi ^{\\ast }(\\gamma\n_{0},b);\\gamma _{0},b).  \\label{Defn of W(h) and T(h)}\n\\end{eqnarray}%\nAs defined, $W(h)$ is the asymptotic distribution of $W_{n}$ under $\\{\\gamma\n_{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ for $||b||<\\infty $ determined in\nTheorem \\ref{Theorem Wald Nonlinear}(a).\n\nLet $c_{W,1-\\alpha }(h)$ denote the $1-\\alpha $ quantile of $W(h)$ for $h\\in\nH.$\n\nAs in (\\ref{AsySz Confid Set defn}), $AsySz$ denotes the asymptotic size of\na CS of nominal level $1-\\alpha .$ The asymptotic size results use the\nfollowing distribution function (df) continuity assumption, which typically\nis not restrictive.$\\medskip $\n\n\\noindent \\textbf{Assumption V4.} The df of $W(h)$ is continuous at $\\chi\n_{d_{r},1-\\alpha }^{2}$ and $\\sup_{h\\in H}c_{W,1-\\alpha }(h)$ $\\forall h\\in\nH.$\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{.} \\label{Theorem Wald Asy Sz}Suppose Assumptions \n\\emph{B1-B2, R1-R2, V1-V2, }and\\emph{\\ V4 hold. }In addition, suppose\nAssumptions \\emph{GMM1-GMM5 }hold \\emph{(}or Assumptions \\emph{A,} \\emph{%\nB1-B3,} \\emph{C1-C8,} and \\emph{D1-D3} of \\emph{AC1 }hold\\emph{).} Then, the\nstandard nominal $1-\\alpha $ Wald CS satisfies%\n\\begin{equation*}\nAsySz=\\min \\{\\inf_{h\\in H}P(W(h)\\leq \\chi _{d_{r},1-\\alpha }^{2}),\\text{ }%\n1-\\alpha \\}.\n\\end{equation*}\n\\end{theorem}\n\n\\noindent \\textbf{Comment. 
}Under Assumption R$_{\\text{L}}$ (i.e., linearity\nof $r(\\theta )$), Theorem \\ref{Theorem Wald Asy Sz} holds with $W(h)$\nreplaced by the equivalent, but simpler, quantity $W_{L}(h)=\\lambda _{L}(\\pi\n^{\\ast }(\\gamma _{0},b);\\gamma _{0},b)$ for $h=(b,\\gamma _{0}).$ This holds\nby Corollary \\ref{Cor Wald Linear}(a).\\medskip\n\nTheorem \\ref{Theorem Divergence} shows that the Wald statistic $W_{n}$\ndiverges to infinity in some circumstances, e.g., see Example r6 in Section %\n\\ref{Subsec Asy Distn Wald Stat} above. In such cases, the standard Wald CS\nhas asymptotic size equal to 0.\n\n\\begin{corollary}\n\\hspace{-0.08in}\\textbf{.} \\label{Corollary Wald 2}Suppose Assumptions \\emph{%\nB1-B2, R1,} and \\emph{V1} hold. In addition, suppose Assumptions \\emph{%\nGMM1-GMM5 }hold \\emph{(}or Assumptions \\emph{A,} \\emph{B1-B3,} \\emph{C1-C8,}\nand \\emph{D1-D3} of \\emph{AC1 }hold\\emph{).} If $||\\eta _{n}(\\widehat{\\theta \n}_{n})||\\rightarrow _{p}\\infty $ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},0,b)$ for some $\\gamma _{0}\\in \\Gamma $ and $||b||<\\infty ,$ the\nstandard nominal $1-\\alpha $ Wald CS has $AsySz=0.$\n\\end{corollary}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Robust Wald Confidence\nSets\\label{Robust Wald CS Subsec}}\n\n\\hspace{0.25in}Next, we construct Wald CS's that have correct asymptotic\nsize. These CS's are robust to the strength of identification. The CS's for $%\nr(\\theta )$ are constructed by inverting a robust Wald test that combines\nthe Wald test statistic with a robust critical value that differs from the\nusual $\\chi _{d_{r}}^{2}$-quantile, which is designed for the\nstrong-identification case. The first robust CS uses the least favorable\n(LF) critical value. The second robust CS, called a type 2 robust CS, is\nintroduced in AC1. It uses a data-dependent critical value. It is smaller\nthan the LF robust CS under strong identification and, hence, is preferable.\n\n\\subsubsection{\\hspace{-0.19in}\\textbf{.}\\hspace{0.18in}Least Favorable\nCritical Value\\textbf{\\label{LF Subsubsec}}}\n\n\\hspace{0.25in}The LF critical value is%\n\\begin{equation}\nc_{W,1-\\alpha }^{LF}=\\max \\{\\sup_{h\\in H}c_{W,1-\\alpha }(h),\\chi\n_{d_{r},1-\\alpha }^{2}\\}.  \\label{LF defn}\n\\end{equation}\n\nThe LF critical value can be improved (i.e., made smaller) by exploiting the\nknowledge of the null hypothesis value of $r(\\theta ).$ For instance, if the\nnull hypothesis specifies the value of $\\pi $ to be $3,$ then the supremum\nin (\\ref{LF defn}) does not need to be taken over all $h\\in H,$ only over\nthe $h$ values for which $\\pi =3.$ We call such a critical value a\nnull-imposed (NI) LF critical value. Using an NI-LF critical value increases\nthe computational burden because a different critical value is employed for\neach null hypothesis value.\\footnote{%\nTo be precise, let $H(v)=\\{h=(b,\\gamma _{0})\\in H:||b||<\\infty ,r(\\theta\n_{0})=v\\},$ where $\\gamma _{0}=(\\theta _{0},\\phi _{0}).$ By definition, $%\nH(v) $ is the subset of $H$ that is consistent with the null hypothesis $%\nH_{0}:r(\\theta _{0})=v,$ where $\\theta _{0}$ denotes the true value. 
The\nNI-LF critical value, denoted $c_{W,1-\\alpha }^{LF}(v),$ is defined by\nreplacing $H$ by $H(v)$ in (\\ref{LF defn}) when the null hypothesis value is \n$r(\\theta _{0})=v.$ Note that $v$ takes values in the set $V_{r}=\\{v_{0}:$ $%\nr(\\theta _{0})=v_{0}$ for some $h=(b,\\gamma _{0})\\in H\\}.$}$^{,}$\\footnote{%\nWhen $r(\\theta )=\\beta $ and the null hypothesis imposes that $\\beta =v,$\nthe parameter $b$ can be imposed to equal $n^{1/2}v.$ In this case, $%\nH(v)=H_{n}(v)=\\{h=(b,\\gamma _{0})\\in H:b=n^{1/2}v\\}.$ The asymptotic size\nresults given below for NI-LF CS's and NI robust CS's hold in this case.}\n\nWhen part of $\\gamma $ is unknown under $H_{0}$ but can be consistently\nestimated, a \\emph{plug-in} LF (or plug-in NI-LF) critical value can be\nused that has correct size asymptotically and is smaller than the LF (or\nNI-LF) critical value. The plug-in critical value replaces elements of $%\n\\gamma $ with consistent estimators in the formulae in (\\ref{LF defn}) and\nthe supremum over $H$ is reduced to a supremum over the resulting subset of $%\nH,$ denoted $\\widehat{H}_{n},$ for which the consistent estimators appear in\neach vector $\\gamma .$\\footnote{%\nFor example, if $\\zeta $ is consistently estimated by $\\widehat{\\zeta }_{n},$\nthen $H$ is replaced by $\\widehat{H}_{n}=\\{h=(b,\\gamma )\\in H:\\gamma =(\\beta\n,\\widehat{\\zeta }_{n},\\pi ,\\phi )\\}.$ If a plug-in NI-LF critical value is\nemployed, $H(v)$ is replaced by $H(v)\\cap \\widehat{H}_{n},$ where $H(v)$ is\ndefined in a footnote above. The parameter $b$ is not consistently\nestimable, so it cannot be replaced by a consistent estimator.}\n\n\\subsubsection{\\hspace{-0.19in}\\textbf{.}\\hspace{0.18in}Type 2 Robust\nCritical Value\\label{Type 2 robust CI Subsubsec}}\n\n\\hspace{0.25in}Next, we define the type 2 robust critical value. It improves\non the LF critical value. It employs an identification category selection\n(ICS) procedure that uses the data to determine whether $b$ is finite.%\n\\footnote{%\nWhen $\\beta $ is specified by the null hypothesis, it is not necessary to use\nan ICS procedure. 
Instead, we recommend using a (possibly plug-in) NI-LF\ncritical value, see the footnote above.} The ICS procedure chooses between\nthe identification categories $\\mathcal{IC}_{0}:||b||<\\infty $ and $\\mathcal{%\nIC}_{1}:||b||=\\infty .$ The identification-category selection statistic is%\n\\begin{equation}\nA_{n}=\\left( n\\widehat{\\beta }_{n}^{\\prime }\\widehat{\\Sigma }_{\\beta \\beta\n,n}^{-1}\\widehat{\\beta }_{n}/d_{\\beta }\\right) ^{1/2},\n\\label{Defn of A_n for robust CS}\n\\end{equation}%\nwhere $\\widehat{\\Sigma }_{\\beta \\beta ,n}$ is the upper left $d_{\\beta\n}\\times d_{\\beta }$ block of $\\widehat{\\Sigma }_{n},$ which is defined in (%\n\\ref{Variance Matrix Defns}).\n\nThe type 2 robust critical value provides a continuous transition from a\nweak-identification critical value to a strong-identification critical value\nusing a transition function $s(x).$ Let $s(x)$ be a continuous function on $%\n[0,\\infty )$ that satisfies: (i) $0\\leq s(x)\\leq 1,$ (ii) $s(x)$ is\nnon-increasing in $x,$ (iii) $s(0)=1,$ and (iv) $s(x)\\rightarrow 0$ as $%\nx\\rightarrow \\infty .$ Examples of transition functions include (i) $%\ns(x)=\\exp (-c\\cdot x)$ for some $c>0$ and (ii) $s(x)=(1+c\\cdot x)^{-1}$ for\nsome $c>0.\\footnote{%\nIf $c_{W,1-\\alpha }^{LF}=\\infty ,$ $s(x)$ should be taken to equal $0$ for $x\n$ sufficiently large, where $\\infty \\times 0$ equals $0$ in (\\ref{type 2\nrobust cv}). Then, the critical value $\\widehat{c}_{W,1-\\alpha ,n}$ is\ninfinite if $A_{n}$ is small and is finite if $A_{n}$ is sufficiently large.}\n$ For example, in the nonlinear regression\\ model with endogeneity, we use\nthe function $s(x)=\\exp (-2x).$\n\nThe type 2 robust critical value is%\n\\begin{eqnarray}\n\\widehat{c}_{W,1-\\alpha ,n}\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left\\{ \n\\begin{tabular}{ll}\n$c_{B}$ & $\\text{if }A_{n}\\leq \\kappa $ \\\\ \n$c_{S}+[c_{B}-c_{S}]\\cdot s(A_{n}-\\kappa )$ & $\\text{if }A_{n}>\\kappa ,$\nwhere%\n\\end{tabular}%\n\\right.  \\notag \\\\\nc_{B}\\hspace{-0.08in} &=&\\hspace{-0.08in}c_{W,1-\\alpha }^{LF}+\\Delta _{1},%\n\\text{ }c_{S}=\\chi _{d_{r},1-\\alpha }^{2}+\\Delta _{2},\n\\label{type 2 robust cv}\n\\end{eqnarray}%\nand $\\Delta _{1}\\geq 0$ and $\\Delta _{2}\\geq 0$ are asymptotic\nsize-correction factors that are defined below. Here, \\textquotedblleft $B$%\n\\textquotedblright\\ denotes Big, and \\textquotedblleft $S$%\n\\textquotedblright\\ denotes Small. When $A_{n}\\leq \\kappa ,$ $\\widehat{c}%\n_{W,1-\\alpha ,n}$ equals the LF critical value $c_{W,1-\\alpha }^{LF}$ plus a\nsize-correction factor $\\Delta _{1}.$ When $A_{n}>\\kappa ,$ $\\widehat{c}%\n_{W,1-\\alpha ,n}$ is a linear combination of $c_{W,1-\\alpha }^{LF}+\\Delta\n_{1}$ and $\\chi _{d_{r},1-\\alpha }^{2}+\\Delta _{2},$ where $\\Delta _{2}$ is\nanother size-correction factor. 
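To make the construction concrete, the following sketch (in Python; the\nfunction and argument names are ours, and the inputs $c_{W,1-\\alpha }^{LF},$\n$\\chi _{d_{r},1-\\alpha }^{2},$ $\\Delta _{1},$ $\\Delta _{2},$ and $\\kappa $\nare assumed to have been computed already) implements (\\ref{Defn of A_n for robust CS}) and (\\ref{type 2 robust cv}) with the transition function $s(x)=\\exp (-2x)$:\n\\begin{verbatim}\nimport numpy as np\n\ndef ics_stat(beta_hat, Sigma_bb_hat, n):\n    # A_n = (n * beta' Sigma_bb^{-1} beta / d_beta)^{1/2}\n    beta_hat = np.atleast_1d(np.asarray(beta_hat, dtype=float))\n    Sigma_bb_hat = np.atleast_2d(np.asarray(Sigma_bb_hat, dtype=float))\n    quad = beta_hat @ np.linalg.solve(Sigma_bb_hat, beta_hat)\n    return np.sqrt(n * quad / beta_hat.size)\n\ndef s(x, c=2.0):\n    # transition function: s(0) = 1, non-increasing, s(x) -> 0 as x -> inf\n    return np.exp(-c * x)\n\ndef type2_robust_cv(A_n, kappa, c_LF, chi2_quantile, Delta1, Delta2):\n    c_B = c_LF + Delta1            # "Big": LF critical value + correction\n    c_S = chi2_quantile + Delta2   # "Small": chi-square quantile + correction\n    if A_n <= kappa:\n        return c_B\n    return c_S + (c_B - c_S) * s(A_n - kappa)\n\\end{verbatim}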
The weight given to the standard critical\nvalue $\\chi _{d_{r},1-\\alpha }^{2}$ increases with the strength of\nidentification, as measured by $A_{n}-\\kappa .$\n\nThe ICS statistic $A_{n}$ satisfies $A_{n}\\rightarrow _{d}A(h)$ under $\\{\n\\gamma _{n}\\} \\in \\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$ where $%\nA(h) $ is defined by%\n\\begin{equation}\nA(h)=\\left( \\tau _{\\beta }(\\pi ^{\\ast };\\gamma _{0},b)^{\\prime }\\Sigma\n_{\\beta \\beta }^{-1}(\\pi ^{\\ast };\\gamma _{0})\\tau _{\\beta }(\\pi ^{\\ast\n};\\gamma _{0},b)/d_{\\beta }\\right) ^{1/2},  \\label{A(h) defn}\n\\end{equation}%\nwhere $\\pi ^{\\ast }$ abbreviates $\\pi ^{\\ast }(\\gamma _{0},b),$ $\\tau\n_{\\beta }(\\pi ;\\gamma _{0},b)$ is defined in (\\ref{Asy Distn of Betahat(pi)}%\n), and $\\Sigma _{\\beta \\beta }(\\pi ;\\gamma _{0})$ is the upper left (1,1)\nelement of $\\Sigma (\\psi _{0},\\pi ;\\gamma _{0})$ for $\\Sigma (\\theta ;\\gamma\n_{0})=J^{-1}(\\theta ;\\gamma _{0})V(\\theta ;\\gamma _{0})J^{-1}(\\theta ;\\gamma\n_{0}).$\\footnote{%\nThe convergence in distribution follows from Theorem \\ref{Thm dist'n of\nestimator b=finite}(a) and Assumption V1.}$^{,}$\\footnote{%\nIn the vector $\\beta $ case, $\\Sigma _{\\beta \\beta }^{-1}(\\pi ^{\\ast\n};\\gamma _{0})$ is replaced in (\\ref{A(h) defn}) by a slightly different\nexpression; see footnote 51 of AC1. When the type 2 robust critical value\nis considered in the vector $\\beta $ case, $h$ is defined to include $\\omega\n_{0}=\\lim_{n\\rightarrow \\infty }\\beta _{n}/||\\beta _{n}||\\in R^{d_{\\beta }}$\nas an element, i.e., $h=(b,\\gamma _{0},\\omega _{0})$ and $H=\\{h=(b,\\gamma\n_{0},\\omega _{0}):||b||<\\infty ,\\gamma _{0}\\in \\Gamma $ with $\\beta\n_{0}=0,||\\omega _{0}||=1\\}$ because the true value $\\omega _{0}$ affects the\nasymptotic distribution of $A_{n}.$}$^{,}$\\footnote{%\nAlternatively to the ICS statistic $A_{n},$ one can use an NI-ICS statistic $%\nA_{n}(v),$ which employs the restricted estimator $\\widetilde{\\beta }_{n}(v)$\nof $\\beta $ in place of $\\widehat{\\beta }_{n}$ and a different weight\nmatrix. See AC1 for details.}\n\nUnder $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty ,$ the\nasymptotic null rejection probability of a test based on the statistic $%\nW_{n} $ and the robust critical value $\\widehat{c}_{W,1-\\alpha ,n}$ is equal\nto%\n\\begin{eqnarray}\n&&\\hspace{-0.2in}NRP(\\Delta _{1},\\Delta _{2};h)\\overset{}{=}P(W(h)\\overset{}{%\n>}c_{B}\\text{ \\& }A(h)\\overset{}{\\leq }\\kappa )+P(W(h)\\overset{}{>}c_{A}(h)%\n\\text{ \\& }A(h)\\overset{}{>}\\kappa )  \\notag \\\\\n&&\\hspace{-0.2in}\\hspace{1.21in}\\overset{}{=}P(W(h)\\overset{}{>}c_{B})+P(W(h)%\n\\overset{}{\\in }(c_{A}(h),c_{B}]\\text{ \\& }A(h)\\overset{}{>}\\kappa ),\\text{\nwhere}  \\notag \\\\\n&&\\hspace{-0.2in}c_{A}(h)\\overset{}{=}c_{S}+(c_{B}-c_{S})\\cdot s(A(h)-\\kappa\n).  
\\label{NRP defn}\n\\end{eqnarray}\n\nThe constants $\\Delta _{1}$ and $\\Delta _{2}$ are chosen such that $%\nNRP(\\Delta _{1},\\Delta _{2};h)\\leq \\alpha $ $\\forall h\\in H.$ In particular,\nwe define $\\Delta _{1}=\\sup_{h\\in H_{1}}\\Delta _{1}(h),$ where $\\Delta\n_{1}(h)\\geq 0$ solves $NRP(\\Delta _{1}(h),0;h)=\\alpha $ (or $\\Delta\n_{1}(h)=0 $ if $NRP(0,0;h)<\\alpha ),$ $H_{1}=\\{(b,\\gamma _{0}):(b,\\gamma\n_{0})\\in H$ \\& $||b||\\leq ||b_{\\max }||+D\\},$ $b_{\\max }$ is defined such\nthat $c_{W,1-\\alpha }(h)$ is maximized over $h\\in H$ at $h_{\\max }=(b_{\\max\n},\\gamma _{\\max })\\in H$ for some $\\gamma _{\\max }\\in \\Gamma ,$ and $D$ is a\nnon-negative constant, such as $1.$ We define $\\Delta _{2}=\\sup_{h\\in\nH}\\Delta _{2}(h),$ where $\\Delta _{2}(h)$ solves $NRP(\\Delta _{1},\\Delta\n_{2}(h);h)=\\alpha $ (or $\\Delta _{2}(h)=0$ if $NRP(\\Delta _{1},0;\\allowbreak\nh)<\\alpha ).\\footnote{%\nWhen $NRP(0,0;h)>\\alpha ,$ a unique solution $\\Delta _{1}(h)$ typically\nexists because $NRP(\\Delta _{1},0;h)$ is always non-increasing in $\\Delta\n_{1}$ and is typically strictly decreasing and continuous in $\\Delta _{1}.$\nIf no exact solution to $NRP(\\Delta _{1}(h),0;h)=\\alpha $ exists, then $%\n\\Delta _{1}(h)$ is taken to be any value for which $NRP(\\Delta\n_{1}(h),0;h)\\leq \\alpha $ and $\\Delta _{1}(h)\\geq 0$ is as small as\npossible. Analogous comments apply to the equation $NRP(\\Delta _{1},\\Delta\n_{2}(h);h)=\\alpha $ and the definition of $\\Delta _{2}(h).$}^{,}\\footnote{%\nWhen the LF critical value is achieved at $||b||=\\infty ,$ i.e., $\\chi\n_{d_{r},1-\\alpha }^{2}\\geq \\sup_{h\\in H}c_{W,1-\\alpha }(h),$ the standard\nasymptotic critical value $\\chi _{d_{r},1-\\alpha }^{2}$ yields a test or CI\nwith correct asymptotic size and the constants $\\Delta _{1}$ and $\\Delta\n_{2}$ are not needed. Hence, here we consider the case where $||b_{\\max\n}||<\\infty .$ If $\\sup_{h\\in H}c_{W,1-\\alpha }(h)$ is not attained at any\npoint $h_{\\max },$ then $b_{\\max }$ can be taken to be any point such that $%\nc_{W,1-\\alpha }(h_{\\max })$ is arbitrarily close to $\\sup_{h\\in\nH}c_{W,1-\\alpha }(h)$ for some $h_{\\max }=(b_{\\max },\\gamma _{\\max })\\in\nH. $}$ As defined, $\\Delta _{1}$ and $\\Delta _{2}$ can be computed\nsequentially, which eases computation.\n\nGiven the definitions of $\\Delta _{1}$ and $\\Delta _{2},$ the asymptotic\nrejection probability is always less than or equal to the nominal level $%\n\\alpha $ and it is close to $\\alpha $ when $h$ is close to $h_{\\max }$ (due\nto the adjustment by $\\Delta _{1})$ and when $||b||$ is large (due to the\nadjustment by $\\Delta _{2}).$\n\nThe type 2 robust critical value can be improved by employing NI and/or\nplug-in versions of it, denoted by $\\widehat{c}_{W,1-\\alpha ,n}(v).$ These\nare defined by replacing $c_{W,1-\\alpha }^{LF}$ in (\\ref{type 2 robust cv})\nby the NI-LF or plug-in NI-LF critical value and making $c_{B},$ $\\Delta\n_{1},$ and $\\Delta _{2}$ depend on the null value $v,$ denoted $c_{B}(v),$ $%\n\\Delta _{1}(v),$ and $\\Delta _{2}(v).$ We recommend using these versions\nwhenever possible because they lead to smaller CS's.\n\nFor any given value of $\\kappa ,$ the type 2 robust CS has correct\nasymptotic size due to the choice of $\\Delta _{1}$ and $\\Delta _{2}.$ In\nconsequence, a good choice of $\\kappa $ depends on the false coverage\nprobabilities (FCP's) of the robust CS. 
(An FCP of a CS for $r(\\theta )$ is\nthe probability that the CS includes a value different from the true value $%\nr(\\theta ).)$ The numerical work in this paper and in AC1 and AC2 shows that\nif a reasonable value of $\\kappa $ is chosen, such as $\\kappa =1.5$ or $2.0,$\nthe FCP's of type 2 robust CS's are not sensitive to deviations from this\nvalue of $\\kappa .$ This is because the size-correction constants $\\Delta\n_{1}$ and $\\Delta _{2}$ have to adjust as $\\kappa $ is changed in order to\nmaintain correct asymptotic size. The adjustments of $\\Delta _{1}$ and $%\n\\Delta _{2}$ offset the effect of changing $\\kappa .$\n\nOne can select $\\kappa $ in a simple way, i.e., by taking $\\kappa =1.5$ or $%\n2.0,$ or one can select $\\kappa $ in a more sophisticated way that\nexplicitly depends on FCP's. Both methods yield similar results for the\ncases that we have considered.\n\nThe more sophisticated method of choosing $\\kappa $ is to minimize the\naverage FCP of the robust CS over a chosen set of $\\kappa $ values denoted\nby $\\mathcal{K}.$ First, for given $h\\in H,$ one chooses a null value $%\nv_{H_{0}}(h)$ that differs from the true value $v_{0}=r(\\theta _{0})$ (where \n$h=(b,\\gamma _{0})$ and $\\gamma _{0}=(\\theta _{0},\\phi _{0})).$ The null\nvalue $v_{H_{0}}(h)$ is selected such that the robust CS based on a\nreasonable choice of $\\kappa ,$ such as $\\kappa =1.5$ or $2,$ has an FCP that\nis in a range of interest, such as close to $0.50.\\footnote{%\nWhen $b$ is close to $0,$ the FCP may be larger than $0.50$ for all\nadmissible $v$ due to weak identification. In such cases, $v_{H_{0}}(h)$ is\ntaken to be the admissible value that minimizes the FCP for the selected\nvalue of $\\kappa $ that is being used to obtain $v_{H_{0}}(h).$}$ Second,\none computes the FCP of the value $v_{H_{0}}(h)$ for each robust CS with $%\n\\kappa \\in \\mathcal{K}.$ Third, one repeats steps one and two for each $h\\in \n\\mathcal{H},$ where $\\mathcal{H}$ is a representative subset of $H.\\footnote{%\nWhen $r(\\theta )=\\pi ,$ we do not include $h$ values in $\\mathcal{H}$ for\nwhich $b=0$ because when $b=0$ there is no information about $\\pi $ and it\nis not necessarily desirable to have a small FCP.}$ The optimal choice of $%\n\\kappa $ is the value that minimizes over $\\mathcal{K}$ the average over $%\nh\\in \\mathcal{H}$ of the FCP's at $v_{H_{0}}(h).$\n\nIn summary, the steps used to construct a type 2 robust Wald (or $t$) test\nare as follows: (1) Estimate the model using the standard GMM estimator,\nyielding $\\widehat{\\beta }_{n}$ and the covariance matrix $\\widehat{\\Sigma }%\n_{\\beta \\beta ,n}.$ (2) Compute the Wald statistic using the formula in (\\ref%\n{Defn Wald}). (3) Construct the ICS statistic $A_{n}$ defined in (\\ref{Defn\nof A_n for robust CS}). (4) Simulate the LF critical value $c_{W,1-\\alpha\n}^{LF}$ and the size correction factors $\\Delta _{1}$ and $\\Delta _{2}$\nbased on the asymptotic formulae in (\\ref{Defn of W(h) and T(h)}), (\\ref%\n{A(h) defn}), and (\\ref{NRP defn}) and the description below (\\ref{NRP defn}%\n), for a given value of $\\kappa .$ (5) Compute the type 2 robust critical\nvalue $\\widehat{c}_{W,1-\\alpha ,n}$ defined in (\\ref{type 2 robust cv}),\nemploying the NI and/or plug-in versions when applicable. (6) Choose $%\n\\kappa $ by minimizing the FCP of the type 2 robust CI. 
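Schematically, given routines for the Wald statistic $W_{n}(v)$ and the\nrobust critical value $\\widehat{c}_{W,1-\\alpha ,n}(v)$ (\\texttt{wald\\_stat}\nand \\texttt{robust\\_cv} below; the names are ours), the test inversion\ndescribed below reduces to:\n\\begin{verbatim}\ndef robust_wald_ci(v_grid, wald_stat, robust_cv):\n    # CI = all null values v at which the type 2 robust test fails to\n    # reject, i.e., { v : W_n(v) <= c-hat_{W,1-alpha,n}(v) }\n    return [v for v in v_grid if wald_stat(v) <= robust_cv(v)]\n\\end{verbatim}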
The last step can be\navoided when the type 2 robust CI constructed is not very sensitive to the\nchoice of $\\kappa ,$ which is typically the case found in our simulation\nstudies. For a type 2 robust CI for a particular parameter, one takes the CI\nto consist of all null values of the parameter for which the type 2 robust\ntest fails to reject the null hypothesis. This can be computed by grid\nsearch or some more sophisticated method, such as a multi-step grid search\nwhere the fineness of the grid varies across the steps.\n\n\\subsubsection{\\hspace{-0.19in}\\textbf{.}\\hspace{0.18in}Asymptotic Size of\nRobust Wald CS's\\label{Asy Size Robust Wald CS Subsubsec}}\n\n\\hspace{0.25in}In this section, we show that the LF and data-dependent\nrobust CS's defined above have correct asymptotic size. The asymptotic size\nresults rely on the following df continuity conditions, which are not\nrestrictive in most examples.$\\medskip $\n\n\\noindent \\textbf{Assumption LF.} (i) The df of $W(h)$ is continuous at $%\nc_{W,1-\\alpha }(h)$ $\\forall h\\in H.$\n\n\\noindent (ii) If $c_{W,1-\\alpha }^{LF}>\\chi _{d_{r},1-\\alpha }^{2},$ $%\nc_{W,1-\\alpha }^{LF}$ is attained at some $h_{\\max }\\in H.\\medskip $\n\n\\noindent \\textbf{Assumption NI-LF.} (i) The df of $W(h)$ is continuous at $%\nc_{W,1-\\alpha }(h)$ $\\forall h\\in H(v),$ $\\forall v\\in V_{r}.$\n\n\\noindent (ii) For some $v\\in V_{r},$ $c_{W,1-\\alpha }^{LF}(v)=\\chi\n_{d_{r},1-\\alpha }^{2}$ or $c_{W,1-\\alpha }^{LF}(v)$ is attained at some $%\nh_{\\max }\\in H.\\medskip $\n\nFor $h\\in H,$ define%\n\\begin{equation}\n\\widehat{c}_{W,1-\\alpha }(h)=\\left\\{ \n\\begin{tabular}{ll}\n$c_{B}$ & $\\text{if }A(h)\\leq \\kappa $ \\\\ \n$c_{S}+[c_{B}-c_{S}]\\cdot s(A(h)-\\kappa )$ & $\\text{if }A(h)>\\kappa .$%\n\\end{tabular}%\n\\right.  \\label{Defn of chat(h)}\n\\end{equation}%\nAs defined, $\\widehat{c}_{W,1-\\alpha }(h)$ equals $\\widehat{c}_{W,1-\\alpha\n,n}$ with $A(h)$ in place of $A_{n}.$ The asymptotic distribution of $%\n\\widehat{c}_{W,1-\\alpha ,n}$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},0,b)$ for $||b||<\\infty $ is the distribution of $\\widehat{c}%\n_{W,1-\\alpha }(h).$\n\nDefine $\\widehat{c}_{W,1-\\alpha }(h,v)$ analogously to $\\widehat{c}%\n_{W,1-\\alpha }(h),$ but with $c_{W,1-\\alpha }^{LF},$ $\\Delta _{1},$ and $%\n\\Delta _{2}$ replaced by $c_{W,1-\\alpha }^{LF}(v),$ $\\Delta _{1}(v),$ and $%\n\\Delta _{2}(v),$ respectively, for $v\\in V_{r}.$ The asymptotic distribution\nof $\\widehat{c}_{W,1-\\alpha ,n}(v)$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},0,b)$ for $||b||<\\infty $ is the distribution of $\\widehat{c}%\n_{W,1-\\alpha }(h,v).\\medskip $\n\n\\noindent \\textbf{Assumption Rob2. }(i) $P(W(h)=\\widehat{c}_{W,1-\\alpha\n}(h))=0$ $\\forall h\\in H.$\n\n\\noindent (ii) If $\\Delta _{2}>0,$ $NRP(\\Delta _{1},\\Delta _{2};h^{\\ast })%\n\\overset{}{=}\\alpha $ for some point $h^{\\ast }\\in H,$ where $\\Delta _{1}$\nand $\\Delta _{2}$ are defined following (\\ref{NRP defn}).$\\medskip $\n\n\\noindent \\textbf{Assumption NI-Rob2. 
}(i) $P(W(h)=\\widehat{c}_{W,1-\\alpha\n}(h,v))=0$ $\\forall h\\in H(v),$ $\\forall v\\in V_{r}.$\n\n\\noindent (ii) For some $v\\in V_{r},$ $\\Delta _{2}(v)=0$ or $NRP(\\Delta\n_{1}(v),\\Delta _{2}(v);h^{\\ast })=\\alpha $ for some point $h^{\\ast }\\in\nH(v), $ where $\\Delta _{1}(v)$ and $\\Delta _{2}(v)$ are defined following (%\n\\ref{NRP defn}).\n\n\\begin{theorem}\n\\hspace{-0.08in}\\textbf{.} \\label{Theorem Robust Size Wald CS}Suppose\nAssumptions \\emph{B1-B2, R1-R2, }and \\emph{V1-V2 hold. }In addition, suppose\nAssumptions \\emph{GMM1-GMM5 }hold \\emph{(}or Assumptions \\emph{A,} \\emph{%\nB1-B3,} \\emph{C1-C8,} and \\emph{D1-D3} of \\emph{AC1 }hold\\emph{). }Then, the\nnominal $1-\\alpha $ robust Wald CS has $AsySz=1-\\alpha $ when based on the\nfollowing critical values\\emph{:} \\emph{(a) LF,} \\emph{(b) NI-LF, (c) }type \n\\emph{2} robust, and \\emph{(d) }type \\emph{2} \\emph{NI} robust, provided the\nfollowing additional Assumptions hold, respectively\\emph{:} \\emph{(a) LF,\n(b) NI-LF,} \\emph{(c) Rob2, }and \\emph{(d) NI-Rob2.}\n\\end{theorem}\n\n\\noindent \\textbf{Comments. 1. }Plug-in versions of the robust Wald CS's\nconsidered in Theorem \\ref{Theorem Robust Size Wald CS} also have\nasymptotically correct size under continuity assumptions on $c_{W,1-\\alpha\n}(h)$ that typically are not restrictive. For brevity, we do not provide\nformal results here.\n\n\\noindent \\textbf{2. }If part (ii) of Assumption LF, NI-LF, Rob2, or NI-Rob2\ndoes not hold, then the corresponding part of Theorem \\ref{Theorem Robust\nSize Wald CS} still holds, but with $AsySz\\geq 1-\\alpha .$\n\n\\noindent \\textbf{3. }A third type of robust critical value, referred to as\ntype 1, is considered in AC1. Critical values of this type can be employed\nwith Wald statistics. The resulting type 1 robust CS's out-perform LF robust\nCS's in terms of FCP's, but are inferior to type 2 robust CS's. However,\nthey are easier to compute than type 2 robust CS's.\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}QLR Confidence Sets and\nTests\\label{QLR Tests Sec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}In this section, we introduce CS's\nbased on the quasi-likelihood ratio (QLR) statistic. For brevity,\ntheoretical results for the QLR procedures are given in AC1. However, we\ndefine QLR procedures here because numerical results are reported for them\nin the numerical results section.\n\nWe consider CS's for a function $r(\\theta )$ $(\\in R^{d_{r}})$ of $\\theta $\nobtained by inverting QLR tests. 
The function $r(\\theta )$ is assumed to be\nsmooth and to be of the form%\n\\begin{equation}\nr(\\theta )=\\left[ \n\\begin{array}{c}\nr_{1}(\\psi ) \\\\ \nr_{2}(\\pi )%\n\\end{array}%\n\\right] ,  \\label{Form of r(theta)}\n\\end{equation}%\nwhere $r_{1}(\\psi )\\in R^{d_{r_{1}}},$ $d_{r_{1}}\\geq 0$ is the number of\nrestrictions on $\\psi ,$ $r_{2}(\\pi )\\in R^{d_{r_{2}}},$ $d_{r_{2}}\\geq 0$\nis the number of restrictions on $\\pi ,$ and $d_{r}=d_{r_{1}}+d_{r_{2}}.$\n\nFor $v\\in r(\\Theta ),$ we define a restricted estimator $\\widetilde{\\theta }%\n_{n}(v)$ of $\\theta $ subject to the restriction that $r(\\theta )=v.$ By\ndefinition, \n\\begin{equation}\n\\widetilde{\\theta }_{n}(v)\\in \\Theta ,\\text{ }r(\\widetilde{\\theta }%\n_{n}(v))=v,\\text{ and }Q_{n}(\\widetilde{\\theta }_{n}(v))=\\inf_{\\theta \\in\n\\Theta :r(\\theta )=v}Q_{n}(\\theta )+o(n^{-1}).\n\\end{equation}\n\nFor testing $H_{0}:r(\\theta )=v,$ the QLR test statistic is%\n\\begin{equation}\nQLR_{n}(v)=2n(Q_{n}(\\widetilde{\\theta }_{n}(v))-Q_{n}(\\widehat{\\theta }%\n_{n}))/\\widehat{s}_{n},  \\label{Defn QLR Test Statistic}\n\\end{equation}%\nwhere $\\widehat{s}_{n}$ is a real-valued scaling factor that is employed in\nsome cases to yield a QLR statistic that has an asymptotic $\\chi\n_{d_{r}}^{2} $ null distribution under strong identification. See AC1 for\ndetails.\n\nLet $c_{n,1-\\alpha }(v)$ denote a nominal level $1-\\alpha $ critical value\nto be used with the QLR test statistic. It may be stochastic or\nnon-stochastic. The usual choice, based on the asymptotic distribution of\nthe QLR statistic under standard regularity conditions, is the $1-\\alpha $\nquantile of the $\\chi _{d_{r}}^{2}$ distribution: $c_{n,1-\\alpha }(v)=\\chi\n_{d_{r},1-\\alpha }^{2}.$\n\nA critical value that delivers a robust QLR CS for $r(\\theta )$ that has\ncorrect asymptotic size can be constructed using the same approach as in\nSection \\ref{Asy Size Robust Wald CS Subsubsec}. 
Details of this robust critical value construction are given in AC1.

Given a critical value $c_{n,1-\alpha }(v),$ the nominal level $1-\alpha $ QLR CS for $r(\theta )$ is 
\begin{equation}
CS_{r,n}^{QLR}=\{v\in r(\Theta ):QLR_{n}(v)\leq c_{n,1-\alpha }(v)\}.
\label{Defn of QLR CS}
\end{equation}

\section{ \hspace{-0.34in}\textbf{.}\hspace{0.2in}Numerical Results: Nonlinear Regression Model with Endogeneity\label{Numerical Results Sec}}

\hspace{0.25in}\setcounter{equation}{0}In this section, we provide asymptotic and finite-sample simulation results for the nonlinear regression model with endogeneity.

The model we consider consists of a structural equation with two right-hand-side endogenous variables $X_{1}$ and $X_{2},$ where $X_{1}$ is a nonlinear regressor and $X_{2}$ is a linear regressor, and two reduced-form equations for $X_{1}$ and $X_{2},$ respectively:%
\begin{eqnarray}
Y_{i}\hspace{-0.08in} &=&\hspace{-0.08in}\zeta _{1}+\beta \cdot h(X_{1,i},\pi )+\zeta _{2}X_{2,i}+U_{i},  \notag \\
X_{1,i}\hspace{-0.08in} &=&\hspace{-0.08in}\lambda _{1}+\lambda _{2}Z_{1,i}+V_{1,i},  \notag \\
X_{2,i}\hspace{-0.08in} &=&\hspace{-0.08in}\lambda _{3}+\lambda _{4}Z_{2,i}+\lambda _{5}Z_{3,i}+V_{2,i},
\end{eqnarray}%
where $Y_{i},X_{1,i},X_{2,i}\in R$ are endogenous variables, $Z_{1,i},Z_{2,i},Z_{3,i}\in R$ are excluded exogenous variables, $h(x,\pi )=(|x|^{\pi }-1)/\pi $, and $\theta =(\beta ,\zeta _{1},\zeta _{2},\pi )^{\prime }\in R^{4}$ is the unknown parameter.\footnote{The absolute value of $x$ is employed in $h(x,\pi )$ to guarantee $h(x,\pi )\in R$ when $\pi $ is not an integer. With the data generating process specified below, $X_{1,i}$ is positive with probability close to $1.$ Hence, $h(X_{1,i},\pi )$ is approximately the Box-Cox transformation of $X_{1,i}.$} The data generating process (DGP) satisfies $(\zeta _{1},\zeta _{2})=(-2,2),$ $(\lambda _{1},\lambda _{2})=(3,1),$ $(\lambda _{3},\lambda _{4},\lambda _{5})=(0,1,1),$ $\{(Z_{1,i},Z_{2,i},Z_{3,i},U_{i},V_{1,i},V_{2,i}):i=1,...,n\}$ are i.i.d., $(Z_{1,i},Z_{2,i},Z_{3,i})$ and $(U_{i},V_{1,i},V_{2,i})$ are independent, $(Z_{1,i},Z_{2,i},Z_{3,i})\sim N(0,I_{3}),$ $U_{i}\sim N(0,0.25),$ $V_{k,i}\sim N(0,1)$ and $\mathrm{Corr}(U_{i},V_{k,i})=0.5$ for $k=1$ and $2,$ and $\mathrm{Corr}(V_{1,i},V_{2,i})=0.5.$

The IV's for the GMM estimator of $\theta $ are $Z_{i}=(1,Z_{1,i},Z_{1,i}^{2},Z_{2,i},Z_{3,i})^{\prime }\in R^{5}.$ Thus, five moment conditions are used to estimate four parameters.

The true parameter space for $\pi $ is $[1.5,3.5]$ and the optimization space for $\pi $ is $[1,4].$ The finite-sample results are for $n=500.$ The number of simulation repetitions is 20,000.\footnote{The discrete values of $b$ for which computations are made run from $0$ to $30$, with a grid of $0.2$ for $b$ between $0$ and $10,$ a grid of $1$ for $b$ between $10$ and $20$, and a grid of $2$ for $b$ between $20$ and $30.$}
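To make the simulation design concrete, the following sketch (in Python with NumPy; the function and variable names are ours, not part of the simulation code used for the reported results) draws one sample of size $n$ from the DGP above and forms the IV's:
\begin{verbatim}
import numpy as np

def h(x, pi):
    # Box-Cox-type transform used in the structural equation.
    return (np.abs(x) ** pi - 1.0) / pi

def simulate(n, beta, pi, rng):
    # (U, V1, V2): jointly normal with Var(U) = 0.25, Var(Vk) = 1,
    # Corr(U, Vk) = 0.5 for k = 1, 2, and Corr(V1, V2) = 0.5,
    # so Cov(U, Vk) = 0.5 * 0.5 * 1 = 0.25.
    cov = np.array([[0.25, 0.25, 0.25],
                    [0.25, 1.00, 0.50],
                    [0.25, 0.50, 1.00]])
    u, v1, v2 = rng.multivariate_normal(np.zeros(3), cov, size=n).T
    z1, z2, z3 = rng.standard_normal((3, n))    # (Z1,Z2,Z3) ~ N(0, I_3)
    x1 = 3.0 + z1 + v1                          # (lambda_1, lambda_2) = (3, 1)
    x2 = z2 + z3 + v2                           # (lambda_3,4,5) = (0, 1, 1)
    y = -2.0 + beta * h(x1, pi) + 2.0 * x2 + u  # (zeta_1, zeta_2) = (-2, 2)
    iv = np.column_stack([np.ones(n), z1, z1 ** 2, z2, z3])  # five IV's
    return y, x1, x2, iv

# One draw with n = 500, b = 4 (i.e., beta = 4/sqrt(500)), pi_0 = 1.5:
y, x1, x2, iv = simulate(500, 4 / np.sqrt(500), 1.5,
                         np.random.default_rng(0))
\end{verbatim}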
Figures 1 and 2 provide the asymptotic and finite-sample densities of the GMM estimators of $\beta $ and $\pi $ when the true $\pi $ value is $\pi _{0}=1.5$. Each figure gives the densities for $b=0,$ $4,$ $10,$ and $30,$ where $b$ indexes the magnitude of $\beta $. Specifically, for the finite-sample results, $b=n^{1/2}\beta ,$ so that, e.g., $b=4$ corresponds to $\beta =4/\sqrt{500}\approx 0.18$ when $n=500.$ Figures S-1 and S-2 in Supplemental Appendix E provide analogous results for $\pi _{0}=3.0.$

Figure 1 shows that the GMM estimator of $\beta $ has a distribution that is very far from a normal distribution in the unidentified and weakly-identified cases. The figure shows a build-up of mass at $0$ in the unidentified case and a bi-modal distribution in the weakly-identified case. Figure 2 shows that there is a build-up of mass at the boundaries of the optimization space for the estimator of $\pi $ in the unidentified and weakly-identified cases. Figures 1 and 2 indicate that the asymptotic approximations developed here work very well.

% Figure 1 (graphics/GMM_beta_dens_15.eps): Asymptotic and Finite-Sample (n=500) Densities of the Estimator of beta in the Nonlinear Regression Model with Endogeneity when pi_0 = 1.5.

% Figure 2 (graphics/GMM_pi_dens_15.eps): Asymptotic and Finite-Sample (n=500) Densities of the Estimator of pi in the Nonlinear Regression Model with Endogeneity when pi_0 = 1.5.
Figures S-3 to S-6 in Supplemental Appendix E provide the asymptotic and finite-sample ($n=500$) densities of the $t$ and QLR statistics for $\beta $ and $\pi $ when $\pi _{0}=1.5.$ These figures show that, in the case of weak identification, the $t$ and QLR statistics are not well approximated by standard normal and $\chi _{1}^{2}$ distributions. Here too, the asymptotic approximations developed in this paper work very well.

% Figure 3 (graphics/GMM_Quant.eps): Asymptotic 0.95 Quantiles of the |t| and QLR Statistics for Tests Concerning beta and pi in the Nonlinear Regression Model with Endogeneity.

% Figure 4 (graphics/GMM_Std_CP_15.eps): Coverage Probabilities of Standard |t| and QLR CI's for beta and pi in the Nonlinear Regression Model with Endogeneity when pi_0 = 1.5.

Figure 3 provides graphs of the $0.95$ asymptotic quantiles of the $|t|$ and QLR statistics concerning $\beta $ and $\pi $ as a function of $b$ for $\pi _{0}=1.5,$ $2.0,$ $3.0,$ and $3.5.$ For the $|t|$ statistic concerning $\beta ,$ for small to medium $b$ values, the graphs exceed the $0.95$ quantile under strong identification (given by the horizontal black line). This implies that tests and CI's that employ the $|t|$ statistic for $\beta $ and the standard critical value (based on the normal distribution) have incorrect size. For the QLR statistic for $\beta $, the graphs slightly exceed the $0.95$ quantile under strong identification when $b$ is $0$ or almost $0$ and fall below the $0.95$ quantile under strong identification for other small to medium $b$ values. The graphs in Figure 3(b) imply that tests and CI's that employ the QLR statistic for $\beta $ and the standard critical value (based on the $\chi _{1}^{2}$ distribution) have small size distortions due to the under-coverage for $b$ values close to $0.$ Given the heights of the graphs in Figures 3(c) and 3(d), tests and CI's that employ the $|t|$ statistic for $\pi $ have correct asymptotic size when $\pi _{0}=1.5$ and $2.0$ and have slight size distortion when $\pi _{0}=3.0$ and $3.5,$ whereas those that employ the QLR statistic for $\pi $ always have correct asymptotic size.

Figure 4 reports the asymptotic and finite-sample CP's of nominal $0.95$ standard $|t|$ and QLR CI's for $\beta $ and $\pi $ when $\pi _{0}=1.5.$ For example, the smallest asymptotic and finite-sample CP's (over $b$) are around $0.68$ and $0.93$ for the $|t|$ and QLR CI's for $\beta ,$ respectively. There is no size distortion for the $|t|$ and QLR CI's for $\pi .$ Note that the asymptotic CP's provide a good approximation to the finite-sample CP's. Figure S-7 in Supplemental Appendix E provides analogous results for $\pi _{0}=3.0.$

% Figure 5 (graphics/GMM_Rob_CP_15.eps): Coverage Probabilities of Robust |t| and QLR CI's for beta and pi in the Nonlinear Regression Model with Endogeneity when pi_0 = 1.5. No smooth transition is employed.

Next, we consider CI's that are robust to weak identification. For the robust CI for $\beta ,$ we impose the null value of $b=n^{1/2}\beta _{0},$ where $\beta _{0}$ is the true value of $\beta $ under the null. With the knowledge of $b$ under the null, no identification-category-selection procedure is needed. 
Imposing the null value of $b$ also results in a smaller LF critical value. As indicated in Figure 3(a), the NI-LF critical value for the $|t|$ CI for $\beta $ is attained at $\pi _{0}=1.5$ for all $b$ values. In consequence, the robust $|t|$ CI for $\beta $ is asymptotically similar when $\pi _{0}=1.5,$ as shown in Figure 5(a). Figure 5(a) also reports the finite-sample ($n=500$) CP's of the robust $|t|$ CI for $\beta .$ The smallest and largest finite-sample CP's are around $0.91$ and $0.97,$ as opposed to $0.68$ and $1.00$ for the standard $|t|$ CI. Figure 5(b) shows that the robust QLR CI for $\beta $ tends to over-cover for a range of small to medium $b$ values, but the asymptotic size is correct. Figures S-8(a) and S-8(b) in Supplemental Appendix E provide analogous results for $\pi _{0}=3.0.$ The robust CI's for $\beta $ are not asymptotically similar when $\pi _{0}=3.0,$ but they have correct asymptotic size and the asymptotic and finite-sample CP's are close for all $b$ values.

The robust CI's for $\pi $ are constructed with the null value $\pi _{0}$ imposed. When $\pi _{0}=1.5,$ the robust $|t|$ and QLR CI's are the same as the standard $|t|$ and QLR CI's, respectively, because the NI-LF critical values equal the standard critical values in both cases. In consequence, Figures 5(c) and 5(d) are the same as Figures 4(c) and 4(d), respectively. The robust $|t|$ and QLR CI's for $\pi $ when $\pi _{0}=3.0$ are reported in Figures S-8(c) and S-8(d) in Supplemental Appendix E. In this case, the NI-LF critical value for the robust $|t|$ CI for $\pi $ is slightly larger than the standard critical value, as shown in Figure 3(c). We apply the smooth transition in (\ref{Defn of chat(h)}) to obtain critical values for the robust $|t|$ CI for $\pi ,$ where the transition function is $s(x)=\exp (-2x)$ and the constants are $\kappa =1.5$ and $D=1.$ The choices of $s(x)$ and $D$ were made, after some experimentation, because they yield CP's that are relatively close to the nominal level $0.95$ across different values of $b.$ A wide range of $\kappa $ values yield similar results (because the constants $\Delta _{1}$ and $\Delta _{2}$ adjust to maintain correct asymptotic size as $\kappa $ is changed). Figures S-7(c) and S-8(c) show that, when $\pi _{0}=3.0,$ the standard $|t|$ CI for $\pi $ suffers from size distortion but the robust $|t|$ CI for $\pi $ has correct asymptotic size. When $\pi _{0}=3.0,$ the robust QLR CI for $\pi $ is the same as the standard QLR CI for $\pi ,$ as shown in Figures S-7(d) and S-8(d).
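In terms of computation, the data-dependent critical value in (\ref{Defn of chat(h)}) is a one-line calculation once $A_{n},$ $c_{S},$ and $c_{B}$ (obtained as described in AC1) are available. A minimal sketch (in Python; the inputs are taken as given), with the transition function and $\kappa $ employed here, is:
\begin{verbatim}
import math

def smooth_crit(A_n, c_S, c_B, kappa=1.5):
    # c-hat = c_B                                 if A_n <= kappa,
    #         c_S + (c_B - c_S) * s(A_n - kappa)  if A_n >  kappa,
    # with transition function s(x) = exp(-2 x).
    if A_n <= kappa:
        return c_B
    return c_S + (c_B - c_S) * math.exp(-2.0 * (A_n - kappa))
\end{verbatim}
For $A_{n}$ just above $\kappa ,$ the value is close to the larger constant $c_{B}$ (so the critical value is continuous at $\kappa $); as $A_{n}$ grows, it decays smoothly toward $c_{S}.$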
Besides $b$ and $\pi _{0},$ the construction of a robust CI also requires the $\zeta $ value in order to obtain the LF (or NI-LF) critical value through simulation. In this model, $\zeta =(\zeta _{1},\zeta _{2})^{\prime }.$ Because $\zeta $ can be consistently estimated, we recommend plugging in the estimator $\widehat{\zeta }_{n}$ in place of $\zeta _{0}$ in practice. To ease the computational burden required to simulate the CP's, the finite-sample CP's of the robust CI's reported in Figures 5 and S-8 are constructed using the true value $\zeta _{0},$ rather than the estimated value $\widehat{\zeta }_{n}.$\footnote{With a single sample, the computational burden is the same whether the true value $\zeta _{0}$ or the estimated value $\widehat{\zeta }_{n}$ is employed. However, in a simulation study, it is much faster to simulate the critical values for a range of true values of $b$ and $\pi _{0}$ and the single true value of $\zeta _{0}$ once, and then use them in each of the simulation repetitions, rather than to simulate a new critical value for each simulation repetition, which is required if $\widehat{\zeta }_{n}$ is employed.} However, the difference between the robust CI's constructed with $\widehat{\zeta }_{n}$ and $\zeta _{0}$ typically is relatively minor. A comparison is reported in Table S-1 of AC2 in the context of a smooth transition autoregressive model.

\newpage

\begin{center}
{\LARGE R}{\Large EFERENCES}
\end{center}

\begin{description}
\item Amemiya, T. (1974) Multivariate regression and simultaneous-equation models when the dependent variables are truncated normal. \emph{Econometrica} 42, 999--1012.

\item Andrews, D.W.K. (2002) Generalized method of moments estimation when a parameter is on a boundary. \emph{Journal of Business and Economic Statistics} 20, 530--544.

\item Andrews, D.W.K. \& X. Cheng (2011a) Maximum likelihood estimation and uniform inference with sporadic identification failure. Cowles Foundation Discussion Paper No. 1824, Yale University.

\item Andrews, D.W.K. \& X. Cheng (2011b) Supplemental appendices for \textquotedblleft Generalized method of moments estimation and uniform subvector inference with possible identification failure.\textquotedblright\ Cowles Foundation Discussion Paper No. 1828, Yale University.

\item Andrews, D.W.K. \& X. Cheng (2012a) Estimation and inference with weak, semi-strong, and strong identification. \emph{Econometrica} 80, forthcoming.

\item Andrews, D.W.K. \& X. Cheng (2012b) Supplemental material for \textquotedblleft Estimation and inference with weak, semi-strong, and strong identification.\textquotedblright\ Econometric Society website.

\item Andrews, D.W.K., X. Cheng, \& P. Guggenberger (2009) Generic results for establishing the asymptotic size of confidence sets and tests. Cowles Foundation Discussion Paper No. 1813, Yale University.

\item Andrews, D.W.K. \& P. Guggenberger (2009) Validity of subsampling and `plug-in asymptotic' inference for parameters defined by moment inequalities. \emph{Econometric Theory} 25, 669--709.

\item Andrews, D.W.K. \& P. Guggenberger (2010) Asymptotic size and a problem with subsampling and with the $m$ out of $n$ bootstrap. \emph{Econometric Theory} 26, 426--468.

\item Andrews, I. \& A. Mikusheva (2011) Maximum likelihood inference in weakly identified DSGE models. Unpublished manuscript, Department of Economics, MIT.

\item Andrews, I. \& A. Mikusheva (2012) A geometric approach to weakly identified econometric models. Unpublished manuscript, Department of Economics, MIT.

\item Antoine, B. \& E. Renault (2009) Efficient GMM with nearly weak instruments. \emph{Econometrics Journal} 12, S135--S171.

\item Antoine, B. \& E. Renault (2010) Efficient inference with poor instruments, a general framework. In D. Giles \& A. Ullah (eds.), \emph{Handbook of Empirical Economics and Finance.} Taylor and Francis.

\item Areosa, W.D., M. McAleer, \& M.C. Medeiros (2011) Moment-based estimation of smooth transition regression models with endogenous variables. \emph{Journal of Econometrics} 165, 100--111.

\item Caner, M. (2010) Testing, estimation in GMM and CUE with nearly weak identification. \emph{Econometric Reviews} 29, 330--363.
\item Cheng, X. (2008) Robust confidence intervals in nonlinear regression under weak identification. Unpublished working paper, Department of Economics, Yale University.

\item Choi, I. \& P.C.B. Phillips (1992) Asymptotic and finite sample distribution theory for IV estimators and tests in partially identified structural equations. \emph{Journal of Econometrics} 51, 113--150.

\item Davies, R.B. (1977) Hypothesis testing when a nuisance parameter is present only under the alternative. \emph{Biometrika} 64, 247--254.

\item Dufour, J.-M. (1997) Impossibility theorems in econometrics with applications to structural and dynamic models. \emph{Econometrica} 65, 1365--1387.

\item Guggenberger, P., F. Kleibergen, S. Mavroeidis, \& L. Chen (2013) On the asymptotic sizes of subset Anderson-Rubin and Lagrange multiplier tests in linear instrumental variables regression. \emph{Econometrica}, forthcoming.

\item Hansen, L.P. (1982) Large sample properties of generalized method of moments estimators. \emph{Econometrica} 50, 1029--1054.

\item Heckman, J.J. (1978) Dummy endogenous variables in a simultaneous equation system. \emph{Econometrica} 46, 931--959.

\item Kleibergen, F. (2002) Pivotal statistics for testing structural parameters in instrumental variables regression. \emph{Econometrica} 70, 1781--1803.

\item Kleibergen, F. (2005) Testing parameters in GMM without assuming that they are identified. \emph{Econometrica} 73, 1103--1123.

\item Lee, L.-F. (1981) Simultaneous equations models with discrete endogenous variables. In C.F. Manski \& D. McFadden (eds.), \emph{Structural Analysis of Discrete Data and Econometric Applications}. MIT Press.

\item Lee, L.-F. \& A. Chesher (1986) Specification testing when score test statistics are identically zero. \emph{Journal of Econometrics} 31, 121--149.

\item Ma, J. \& C.R. Nelson (2008) Valid inference for a class of models where standard inference performs poorly; including nonlinear regression, ARMA, GARCH, and unobserved components. Unpublished manuscript, Department of Economics, U. of Washington.

\item Moreira, M.J. (2003) A conditional likelihood ratio test for structural models. \emph{Econometrica} 71, 1027--1048.

\item Nelson, C.R. \& R. Startz (1990) Some further results on the exact small sample properties of the instrumental variables estimator. \emph{Econometrica} 58, 967--976.

\item Nelson, C.R. \& R. Startz (2007) The zero-information-limit condition and spurious inference in weakly identified models. \emph{Journal of Econometrics} 138, 47--62.

\item Nelson, F. \& L. Olson (1978) Specification and estimation of a simultaneous-equation model with limited dependent variables. \emph{International Economic Review} 19, 695--709.

\item Pakes, A. \& D. Pollard (1989) Simulation and the asymptotics of optimization estimators. \emph{Econometrica} 57, 1027--1057.

\item Park, J.Y. \& P.C.B. Phillips (1988) Statistical inference in regressions with integrated processes: part 1. \emph{Econometric Theory} 4, 468--497.

\item Phillips, P.C.B. (1989) Partially identified econometric models. \emph{Econometric Theory} 5, 181--240.

\item Qu, Z. (2011) Inference and specification testing in DSGE models with possible weak identification. Unpublished working paper, Department of Economics, Boston University.

\item Rivers, D. \& Q.H. Vuong (1988) Limited information estimators and exogeneity tests for simultaneous probit models. \emph{Journal of Econometrics} 39, 347--366.

\item Rotnitzky, A., D.R. Cox, M. Bottai, \& J. Robins (2000) Likelihood-based inference with singular information matrix. \emph{Bernoulli} 6, 243--284.

\item Sargan, J.D. (1983) Identification and lack of identification. \emph{Econometrica} 51, 1605--1633.

\item Schorfheide, F. (2011) Estimation and evaluation of DSGE models: progress and challenges. NBER Working Paper 16781.

\item Shi, X. \& P.C.B. Phillips (2011) Nonlinear cointegrating regression under weak identification. \emph{Econometric Theory} 28, 1--39.

\item Smith, R.J. \& R.W. Blundell (1986) An exogeneity test for a simultaneous equation tobit model with an application to labor supply. \emph{Econometrica} 54, 679--685.

\item Staiger, D. \& J.H. Stock (1997) Instrumental variables regression with weak instruments. \emph{Econometrica} 65, 557--586.

\item Stock, J.H. \& J.H. Wright (2000) GMM with weak instruments. \emph{Econometrica} 68, 1055--1096.\newpage 
\end{description}

\begin{center}
\thispagestyle{empty}
$\vspace*{1.95cm}$

{\LARGE Supplemental Appendices}$\bigskip $

{\Large for}$\bigskip $

{\LARGE GMM Estimation and\medskip }

{\LARGE Uniform Subvector Inference\medskip }

{\LARGE with Possible Identification Failure}$\vspace*{1.95cm}$

{\large Donald W. K. Andrews}

{\large Cowles Foundation for Research in Economics}

{\large Yale University}$\bigskip $

{\large Xu Cheng}

{\large Department of Economics}

{\large University of Pennsylvania}$\vspace*{0.75cm}$

{\large First Draft: August, 2007}

{\large Revised: \today}\bigskip
\end{center}

\newpage

\section{ \hspace{-0.34in}\textbf{.}\hspace{0.2in}Outline}

\setcounter{page}{1}
\hspace{0.25in}This Supplement includes five Supplemental Appendices (denoted A-E) to the paper \textquotedblleft GMM Estimation and Uniform Subvector Inference with Possible Identification Failure,\textquotedblright\ denoted hereafter by AC3. Supplemental Appendix A verifies the assumptions of AC3 for the probit model with endogeneity. Supplemental Appendix B provides proofs of the GMM estimation results given in Section \ref{Estimation Results Sec} of AC3. It also provides some results for minimum distance estimators. Supplemental Appendix C provides proofs of the Wald test and CS results given in Section \ref{Wald Tests Sec} of AC3. Supplemental Appendix D gives some results that are used in the verification of the assumptions for the two examples of AC3. 
Supplemental Appendix E\nprovides additional numerical results to those provided in AC3 for the\nnonlinear regression model with endogeneity.\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Supplemental Appendix A:\nProbit Model with Endogeneity: Verification of Assumptions\\label{Example 2\nVerif of As.s Sec}}\n\n\\setcounter{equation}{0}\\hspace{0.25in}In this Supplemental Appendix, we\nverify Assumptions GMM1-GMM5 and V1-V2 for the probit model with endogeneity\nand possibly weak instruments. Assumptions B1 and B2 hold immediately in\nthis model given the definitions of $\\Theta ,$ $\\Theta ^{\\ast },$ and $\\Phi\n^{\\ast }(\\theta )$ in Section \\ref{Simul Probit Ex Sec} of AC3.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Verification of\nAssumption GMM1}\n\n\\hspace{0.25in}Assumption GMM1(i) holds by (\\ref{Probit-SmplCrit}) and (\\ref%\n{Probit-Weight}) because $Z_{i}^{\\prime }\\beta \\pi $ does not depend on $\\pi \n$ when $\\beta =0.$\n\nThe quantity $g_{0}(\\theta ;\\gamma )$ that appears in Assumptions\nGMM1(ii)-(v) is \n\\begin{eqnarray}\ng_{0}(\\theta ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\gamma\n_{0}}e_{i}(\\theta )\\otimes \\overline{Z}_{i}=E_{\\gamma _{0}}e_{0,i}(\\theta\n)\\otimes \\overline{Z}_{i},\\text{ where}  \\notag \\\\\ne_{0,i}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\binom{w_{1,i}(\\theta\n)(L_{i}(\\theta _{0})-L_{i}(\\theta ))}{Z_{i}^{\\prime }(\\beta _{0}-\\beta\n)-X_{i}^{\\prime }(\\zeta _{2,0}-\\zeta _{2})}\\in R^{2}.\n\\end{eqnarray}%\nThe first uniform convergence condition in Assumption GMM1(ii) follows from\nthe ULLN given in Lemma \\ref{Lemma uniform convergence} in Supplemental\nAppendix D because $E_{\\gamma _{0}}(y_{i}|X_{i},Z_{i})=L_{i}(\\theta _{0})$\nwhen the true value is $\\gamma _{0}=(\\theta _{0},\\phi _{0}).$\n\nWhen $\\mathcal{W}_{n}(\\theta )$ is the identity matrix, $\\mathcal{W}(\\theta\n;\\gamma _{0})$ in Assumption GMM1(ii) also is the identity matrix. 
When $\mathcal{W}_{n}(\theta )$ is the optimal weight matrix defined in (\ref{Probit-Weight}), Assumption GMM1(ii) holds with 
\begin{eqnarray}
\mathcal{W}(\theta ;\gamma _{0})\hspace{-0.08in} &=&\hspace{-0.08in}E_{\gamma _{0}}\left( e_{i}(\theta )e_{i}(\theta )^{\prime }\right) \otimes (\overline{Z}_{i}\overline{Z}_{i}^{\prime })=E_{\gamma _{0}}[\mathcal{W}_{e,i}(\theta ;\gamma _{0})\otimes (\overline{Z}_{i}\overline{Z}_{i}^{\prime })],\text{ where}  \notag \\
\mathcal{W}_{e,i}(\theta ;\gamma _{0})\hspace{-0.08in} &=&\hspace{-0.08in}E_{\gamma _{0}}\left( e_{i}(\theta )e_{i}(\theta )^{\prime }|\overline{Z}_{i}\right) =\left( 
\begin{array}{cc}
\mathcal{W}_{11,i}(\theta ) & \mathcal{W}_{12,i}(\theta ) \\ 
\mathcal{W}_{12,i}(\theta ) & \mathcal{W}_{22,i}(\theta )%
\end{array}%
\right)  \label{Probit_WeightMatrix}
\end{eqnarray}%
and $\mathcal{W}_{11,i}(\theta ),\mathcal{W}_{12,i}(\theta ),$ and $\mathcal{W}_{22,i}(\theta )$ are defined in (\ref{Probit_W11})-(\ref{Probit_W22}) below.\footnote{Note that $\mathcal{W}_{11,i}(\theta ),\mathcal{W}_{12,i}(\theta ),$ and $\mathcal{W}_{22,i}(\theta )$ all depend on $\gamma _{0}.$ We omit $\gamma _{0}$ from these terms for notational simplicity.} The convergence condition in Assumption GMM1(ii) holds for the optimal weight matrix $\mathcal{W}_{n}(\theta )$ by the ULLN given in Lemma \ref{Lemma uniform convergence} in Supplemental Appendix D.

Now we derive the elements of $\mathcal{W}_{e,i}(\theta ;\gamma _{0})$ in (\ref{Probit_WeightMatrix}). Note that 
\begin{equation}
P_{\gamma _{0}}(y_{i}=1|\overline{Z}_{i})=L_{i}(\theta _{0})\text{ and }P_{\gamma _{0}}(y_{i}=0|\overline{Z}_{i})=1-L_{i}(\theta _{0}).
\end{equation}%
The upper-left element of $\mathcal{W}_{e,i}(\theta ;\gamma _{0})$ is%
\begin{equation}
\mathcal{W}_{11,i}(\theta )=E_{\gamma _{0}}(w_{1,i}(\theta )^{2}(y_{i}-L_{i}(\theta ))^{2}|\overline{Z}_{i})=w_{1,i}(\theta )^{2}(L_{i}(\theta _{0})-2L_{i}(\theta _{0})L_{i}(\theta )+L_{i}(\theta )^{2}),  \label{Probit_W11}
\end{equation}%
where the second equality uses $y_{i}^{2}=y_{i}$ and $E_{\gamma _{0}}(y_{i}|\overline{Z}_{i})=L_{i}(\theta _{0}).$ The lower-right element of $\mathcal{W}_{e,i}(\theta ;\gamma _{0})$ is%
\begin{equation}
\mathcal{W}_{22,i}(\theta )=E_{\gamma _{0}}((Y_{i}-Z_{i}^{\prime }\beta -X_{i}^{\prime }\zeta _{2})^{2}|\overline{Z}_{i})=\sigma _{v}^{2}+(Z_{i}^{\prime }(\beta _{0}-\beta )+X_{i}^{\prime }(\zeta _{2,0}-\zeta _{2}))^{2}.  \label{Probit_W22}
\end{equation}
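The expression for $\mathcal{W}_{22,i}(\theta )$ follows (given the reduced form for $Y_{i},$ under which $Y_{i}-Z_{i}^{\prime }\beta -X_{i}^{\prime }\zeta _{2}=V_{i}+Z_{i}^{\prime }(\beta _{0}-\beta )+X_{i}^{\prime }(\zeta _{2,0}-\zeta _{2})$) from $E_{\gamma _{0}}(V_{i}|\overline{Z}_{i})=0$ and $E_{\gamma _{0}}(V_{i}^{2}|\overline{Z}_{i})=\sigma _{v}^{2}.$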
To calculate the off-diagonal term of $\mathcal{W}_{e,i}(\theta ;\gamma _{0}),$ note that%
\begin{eqnarray}
E_{\gamma _{0}}(V_{i}|\overline{Z}_{i},y_{i}\hspace{-0.08in} &=&\hspace{-0.08in}1)=E_{\gamma _{0}}(V_{i}|\overline{Z}_{i},U_{i}>-(Z_{i}^{\prime }\beta _{0}\pi _{0}+X_{i}^{\prime }\zeta _{1,0}))=\sigma _{v}\rho \frac{L_{i}^{\prime }(\theta _{0})}{L_{i}(\theta _{0})}\text{ and}  \notag \\
E_{\gamma _{0}}(V_{i}|\overline{Z}_{i},y_{i}\hspace{-0.08in} &=&\hspace{-0.08in}0)=E_{\gamma _{0}}(V_{i}|\overline{Z}_{i},-U_{i}>Z_{i}^{\prime }\beta _{0}\pi _{0}+X_{i}^{\prime }\zeta _{1,0})=-\sigma _{v}\rho \frac{L_{i}^{\prime }(\theta _{0})}{1-L_{i}(\theta _{0})}.
\end{eqnarray}%
The off-diagonal term of $\mathcal{W}_{e,i}(\theta ;\gamma _{0})$ is%
\begin{eqnarray}
&&\mathcal{W}_{12,i}(\theta )  \notag \\
\hspace{-0.08in} &=&\hspace{-0.08in}E_{\gamma _{0}}(w_{1,i}(\theta )(y_{i}-L_{i}(\theta ))(Y_{i}-Z_{i}^{\prime }\beta -X_{i}^{\prime }\zeta _{2})|\overline{Z}_{i})  \notag \\
\hspace{-0.08in} &=&\hspace{-0.08in}w_{1,i}(\theta )\sum_{k=0,1}(k-L_{i}(\theta ))\left[ E_{\gamma _{0}}(V_{i}|\overline{Z}_{i},y_{i}=k)+Z_{i}^{\prime }(\beta _{0}-\beta )+X_{i}^{\prime }(\zeta _{2,0}-\zeta _{2})\right] P_{\gamma _{0}}(y_{i}=k|\overline{Z}_{i})  \notag \\
\hspace{-0.08in} &=&\hspace{-0.08in}w_{1,i}(\theta )\left[ (1-L_{i}(\theta ))\sigma _{v}\rho \frac{L_{i}^{\prime }(\theta _{0})}{L_{i}(\theta _{0})}L_{i}(\theta _{0})+L_{i}(\theta )\sigma _{v}\rho \frac{L_{i}^{\prime }(\theta _{0})}{1-L_{i}(\theta _{0})}(1-L_{i}(\theta _{0}))\right] +  \notag \\
&&w_{1,i}(\theta )\left[ (1-L_{i}(\theta ))L_{i}(\theta _{0})-L_{i}(\theta )(1-L_{i}(\theta _{0}))\right] \left[ Z_{i}^{\prime }(\beta _{0}-\beta )+X_{i}^{\prime }(\zeta _{2,0}-\zeta _{2})\right]   \notag \\
\hspace{-0.08in} &=&\hspace{-0.08in}w_{1,i}(\theta )\left[ \sigma _{v}\rho L_{i}^{\prime }(\theta _{0})+\left( L_{i}(\theta _{0})-L_{i}(\theta )\right) \left( Z_{i}^{\prime }(\beta _{0}-\beta )+X_{i}^{\prime }(\zeta _{2,0}-\zeta _{2})\right) \right] .  \label{Probit_W12}
\end{eqnarray}
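The final equality in (\ref{Probit_W12}) combines the two bracketed expressions using
\begin{equation*}
(1-L_{i}(\theta ))\sigma _{v}\rho L_{i}^{\prime }(\theta _{0})+L_{i}(\theta )\sigma _{v}\rho L_{i}^{\prime }(\theta _{0})=\sigma _{v}\rho L_{i}^{\prime }(\theta _{0})
\end{equation*}%
(after cancelling $L_{i}(\theta _{0})$ and $1-L_{i}(\theta _{0})$ in the respective fractions) and
\begin{equation*}
(1-L_{i}(\theta ))L_{i}(\theta _{0})-L_{i}(\theta )(1-L_{i}(\theta _{0}))=L_{i}(\theta _{0})-L_{i}(\theta ).
\end{equation*}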
Now we verify Assumptions GMM1(iii) and GMM1(iv). We write $g_{0}(\theta ;\gamma _{0})=(g_{1,0}(\theta ;\gamma _{0})^{\prime },g_{2,0}(\theta ;\gamma _{0})^{\prime })^{\prime }$ for $g_{j,0}(\theta ;\gamma _{0})\in R^{d_{X}+d_{Z}}$ for $j=1,2.$ We have 
\begin{equation*}
g_{2,0}(\theta ;\gamma _{0})^{\prime }\xi =\xi ^{\prime }E_{\gamma _{0}}\overline{Z}_{i}\overline{Z}_{i}^{\prime }\xi >0\text{ for }\xi =((\beta _{0}-\beta )^{\prime },(\zeta _{2,0}-\zeta _{2}))^{\prime },
\end{equation*}%
where the inequality holds because $E_{\gamma _{0}}\overline{Z}_{i}\overline{Z}_{i}^{\prime }$ is positive definite since $P_{\phi _{0}}(\overline{Z}_{i}^{\prime }c=0)<1$ for any $c\neq 0$ by (\ref{Simul Probit Phi Space}). Hence, $g_{2,0}(\theta ;\gamma _{0})=0$ if and only if $\beta =\beta _{0}$ and $\zeta _{2}=\zeta _{2,0}.$ Now, for $\theta $ with $\beta =\beta _{0}$ and $\zeta _{2}=\zeta _{2,0},$ 
\begin{equation}
g_{1,0}(\theta ;\gamma _{0})=E_{\gamma _{0}}w_{1,i}(\theta )(L_{i}(\theta _{0})-L_{i}(\theta ))\overline{Z}_{i}\text{ and }L_{i}(\theta )=L(Z_{i}^{\prime }\beta _{0}\pi +X_{i}^{\prime }\zeta _{1}).
\end{equation}%
If $\beta _{0}\neq 0,$ the conditions $g_{1,0}(\theta ;\gamma _{0})=0$ are more restrictive than the population first-order conditions for the standard probit ML estimator for a probit model with regression function $Z_{i}^{\prime }\beta _{0}\pi +X_{i}^{\prime }\zeta _{1}$ (because the latter have the multiplicative factor $(Z_{i}^{\prime }\beta _{0},X_{i}^{\prime })^{\prime },$ rather than $\overline{Z}_{i}$). The latter have a unique solution at the true parameter vector because, as is well known, the population log likelihood function of the probit model is strictly concave. Hence, $g_{1,0}(\theta ;\gamma _{0})=0$ only if $\pi =\pi _{0}$ and $\zeta _{1}=\zeta _{1,0}$ and Assumption GMM1(iv) holds. 
If $\\beta _{0}=0,$ then\nthe same argument holds but with the regression function being $%\nX_{i}^{\\prime }\\zeta _{1},$ rather than $Z_{i}^{\\prime }\\beta _{0}\\pi\n+X_{i}^{\\prime }\\zeta _{1}.$ In this case, $g_{1,0}(\\theta ;\\gamma _{0})=0$\nonly if $\\zeta _{1}=\\zeta _{1,0}$ and Assumption GMM1(iii) holds.\n\nThe partial derivatives $g_{\\psi }(\\theta ;\\gamma _{0})$ and $g_{\\theta\n}(\\theta ;\\gamma _{0})$ in Assumptions GMM1(v) and GMM1\\linebreak (viii) are \n\\begin{eqnarray}\ng_{\\psi }(\\theta ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\phi\n_{0}}\\binom{\\overline{Z}_{i}a_{i}(\\theta )d_{1\\psi ,i}(\\pi )^{\\prime }}{%\n\\overline{Z}_{i}d_{2\\psi ,i}^{\\prime }}\\text{ and }g_{\\theta }(\\theta\n;\\gamma _{0})=E_{\\phi _{0}}\\binom{\\overline{Z}_{i}a_{i}(\\theta\n)d_{1,i}(\\theta )^{\\prime }}{\\overline{Z}_{i}d_{2,i}^{\\prime }},\\text{ where}\n\\notag \\\\\nd_{1\\psi ,i}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}(\\pi\nZ_{i},X_{i},0_{d_{X}})\\in R^{d_{Z}+2d_{X}},\\text{ }d_{2\\psi\n,i}=(Z_{i},0_{d_{X}},X_{i})\\in R^{d_{Z}+2d_{X}},  \\notag \\\\\nd_{1,i}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}(d_{1\\psi ,i}(\\pi\n),Z_{i}^{\\prime }\\beta )\\in R^{d_{Z}+2d_{X}+1},\\text{ }d_{2,i}=(d_{2\\psi\n,i},0)\\in R^{d_{Z}+2d_{X}+1},\\text{ and}  \\label{Probit_1st derivative} \\\\\na_{i}(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\frac{L_{i}^{\\prime\n}(\\theta )^{2}+L_{i}^{\\prime \\prime }(\\theta )(L_{i}(\\theta )-L_{i}(\\theta\n_{0}))}{L_{i}(\\theta )(1-L_{i}(\\theta ))}-\\frac{L_{i}^{\\prime }(\\theta\n)^{2}(L_{i}(\\theta )-L_{i}(\\theta _{0}))(1-2L_{i}(\\theta ))}{L_{i}(\\theta\n)^{2}(1-L_{i}(\\theta ))^{2}}.  \\notag\n\\end{eqnarray}%\nAssumptions GMM1(v) and GMM1(vi) hold by the continuity of $w_{1,i}(\\theta )$\nand $L_{i}(\\theta )$ in $\\theta $ and the moment conditions in (\\ref{Simul\nProbit Phi Space}).\n\nNext, we verify Assumption GMM1(vii). To show $\\lambda _{\\min }(\\mathcal{W}%\n(\\psi _{0},\\pi ;\\gamma _{0}))>0,$ $\\forall \\pi \\in \\Pi ,$ $\\forall \\gamma\n_{0}\\in \\Gamma ,$ we show that for any $c=(c_{1}^{\\prime },c_{2}^{\\prime\n})^{\\prime }$ with $||c||>0,$ $c^{\\prime }\\mathcal{W}(\\psi _{0},\\pi ;\\gamma\n_{0})c>0,$ where $c_{j}\\in R^{d_{X}+d_{Z}}$ for $j=1,2.$ Let%\n\\begin{equation}\nU_{i}^{\\ast }(\\theta )=w_{1,i}(\\theta )(U_{i}+L_{i}(\\theta\n_{0})-L_{i}(\\theta )).  \\label{GMM1(vii) 1}\n\\end{equation}%\nFor $\\theta \\in (\\psi _{0},\\pi ),$ we have%\n\\begin{eqnarray}\nc^{\\prime }\\mathcal{W}(\\psi _{0},\\pi ;\\gamma _{0})c\\hspace{-0.08in} &=&%\n\\hspace{-0.08in}c^{\\prime }\\left[ E_{\\gamma _{0}}\\left( \n\\begin{array}{c}\nU_{i}^{\\ast }(\\theta ) \\\\ \nV_{i}%\n\\end{array}%\n\\right) \\left( \n\\begin{array}{c}\nU_{i}^{\\ast }(\\theta ) \\\\ \nV_{i}%\n\\end{array}%\n\\right) ^{\\prime }\\otimes \\overline{Z}_{i}\\overline{Z}_{i}^{\\prime }\\right] c\n\\notag \\\\\n&=&\\hspace{-0.08in}E_{\\gamma _{0}}E_{\\gamma _{0}}((U_{i}^{\\ast }(\\theta\n)c_{1}^{\\prime }\\overline{Z}_{i}+V_{i}c_{2}^{\\prime }\\overline{Z}_{i})^{2}|%\n\\overline{Z}_{i})  \\notag \\\\\n&\\geq &\\hspace{-0.08in}E_{\\gamma _{0}}E_{\\gamma _{0}}((U_{i}w_{1,i}(\\theta\n)c_{1}^{\\prime }\\overline{Z}_{i}+V_{i}c_{2}^{\\prime }\\overline{Z}_{i})^{2}|%\n\\overline{Z}_{i}),  \\label{c'Wc}\n\\end{eqnarray}%\nwhere the inequality holds because $E_{\\gamma _{0}}(w_{1,i}(\\theta\n)(L_{i}(\\theta _{0})-L_{i}(\\theta ))c_{1}^{\\prime }\\overline{Z}%\n_{i}V_{i}c_{2}^{\\prime }\\overline{Z}_{i}|\\overline{Z}_{i})=0$ a.s. 
The rhs of (\ref{c'Wc}) equals zero only if $E_{\gamma _{0}}((U_{i}w_{1,i}(\theta )c_{1}^{\prime }\overline{Z}_{i}+V_{i}c_{2}^{\prime }\overline{Z}_{i})^{2}|\overline{Z}_{i})=0$ a.s. But,%
\begin{equation}
E_{\gamma _{0}}((U_{i}w_{1,i}(\theta )c_{1}^{\prime }\overline{Z}_{i}+V_{i}c_{2}^{\prime }\overline{Z}_{i})^{2}|\overline{Z}_{i})>0  \label{GMM1(vii) 3}
\end{equation}%
for all $\overline{Z}_{i}$ for which $c_{j}^{\prime }\overline{Z}_{i}\neq 0$ for $j=1$ and $j=2$ because $w_{1,i}(\theta )>0$ a.s., $(U_{i},V_{i})$ is independent of $\overline{Z}_{i},$ and $|Cov(U_{i},V_{i})|=|\rho |<1.$ By (\ref{Simul Probit Phi Space}), $P_{\gamma _{0}}(c_{j}^{\prime }\overline{Z}_{i}\neq 0$ for $j=1$ and $j=2)>0.$ Hence, we conclude that $c^{\prime }\mathcal{W}(\psi _{0},\pi ;\gamma _{0})c>0.$

In addition, $\lambda _{\max }(\mathcal{W}(\psi _{0},\pi ;\gamma _{0}))<\infty $ because $||\mathcal{W}(\psi _{0},\pi ;\gamma _{0})||=||E_{\phi _{0}}[\mathcal{W}_{e,i}(\theta ;\gamma _{0})\otimes (\overline{Z}_{i}\overline{Z}_{i}^{\prime })]||<\infty $ using (\ref{Probit_W11})-(\ref{Probit_W22}) and $E_{\phi _{0}}(||\overline{Z}_{i}||^{4+\varepsilon }+\overline{w}_{1,i}^{4+\varepsilon })<\infty $ for some $\varepsilon >0$ by (\ref{Simul Probit Phi Space}), where $||\cdot ||$ denotes the Frobenius norm. Thus, Assumption GMM1(vii) holds.

Assumption GMM1(viii) holds because $\mathcal{W}(\psi _{0},\pi ;\gamma _{0})$ is non-singular $\forall \pi \in \Pi $ and $g_{\psi }(\psi _{0},\pi ;\gamma _{0})$ has full column rank because $P_{\phi _{0}}(\overline{Z}_{i}^{\prime }c=0)<1$ for all $c\neq 0.$

Assumption GMM1(ix) holds automatically by the assumptions on the parameter space.

Assumption GMM1(x) holds because $\Psi (\pi )$ does not depend on $\pi $ in this example.

\subsection{\hspace{-0.23in}\textbf{.}\hspace{0.18in}Verification of Assumption GMM2}

\hspace{0.25in}We verify Assumption GMM2 using the sufficient condition Assumption GMM2$^{\ast }.$ Assumption GMM2$^{\ast }$(i) holds because $e_{i}(\theta )$ is continuously differentiable in $\theta .$ Assumption GMM2$^{\ast }$(ii) holds by the ULLN given in Lemma \ref{Lemma uniform convergence} in Supplemental Appendix D. Assumption GMM2$^{\ast }$(iii) holds by the same ULLN using $||\beta ||/||\beta _{n}||=1+o(1)$ for $\theta \in \Theta _{n}(\delta _{n})$ and $||\beta _{n}||\neq 0$ for $n$ large for $\{\gamma _{n}\}\in \Gamma (\gamma _{0},\infty ,\omega _{0}).$

\subsection{\hspace{-0.23in}\textbf{.}\hspace{0.18in}Verification of Assumption GMM3}

\hspace{0.25in}Assumption GMM3(i) holds with 
\begin{equation}
g(W_{i},\theta )=e_{i}(\theta )\otimes \overline{Z}_{i}.
\end{equation}%
Assumption GMM3(ii) holds because $E_{\gamma ^{\ast }}g(W_{i},\psi ^{\ast },\pi )=E_{\gamma ^{\ast }}e_{0,i}(\psi ^{\ast },\pi )\otimes \overline{Z}_{i}=0$ when $\beta ^{\ast }=0.$

Assumption GMM3(iii) holds by the CLT for triangular arrays of row-wise i.i.d. random variables given in Lemma \ref{Lemma CLT, array} of Supplemental Appendix D. 
The variance matrix is%\n\\begin{eqnarray}\n\\Omega _{g}(\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}E_{\\gamma\n_{0}}\\left( e_{i}(\\theta _{0})e_{i}(\\theta _{0})^{\\prime }\\right) \\otimes (%\n\\overline{Z}_{i}\\overline{Z}_{i}^{\\prime })=\\mathcal{W}(\\theta _{0};\\gamma\n_{0})  \\notag \\\\\n&=&\\hspace{-0.08in}E_{\\gamma _{0}}\\left( \n\\begin{array}{cc}\nw_{1,i}(\\theta _{0})L_{i}^{\\prime }(\\theta _{0}) & w_{1,i}(\\theta\n_{0})L_{i}^{\\prime }(\\theta _{0})\\rho \\sigma _{v} \\\\ \nw_{1,i}(\\theta _{0})L_{i}^{\\prime }(\\theta _{0})\\rho \\sigma _{v} & \\sigma\n_{v}^{2}%\n\\end{array}%\n\\right) \\otimes \\left( \\overline{Z}_{i}\\overline{Z}_{i}^{\\prime }\\right) ,\n\\label{Probit_Omega}\n\\end{eqnarray}%\nwhere the second and third equalities follow from (\\ref{Probit_WeightMatrix}%\n) and (\\ref{Probit_W11})-(\\ref{Probit_W22}) with $\\theta =\\theta _{0}$ and $%\nw_{1,i}(\\theta _{0})(L_{i}(\\theta _{0})$\\allowbreak $-L_{i}(\\theta\n_{0})^{2})\\allowbreak =L_{i}^{\\prime }(\\theta _{0}).$\n\nTo verify Assumption GMM3(iv), first note that%\n\\begin{equation}\nE_{\\gamma ^{\\ast }}g(W_{i},\\theta )=E_{\\gamma ^{\\ast }}\\binom{w_{1,i}(\\theta\n)(L_{i}(\\theta ^{\\ast })-L_{i}(\\theta ))}{Z_{i}^{\\prime }(\\beta ^{\\ast\n}-\\beta )-X_{i}^{\\prime }(\\zeta _{2}^{\\ast }-\\zeta _{2})}\\otimes \\overline{Z}%\n_{i}.  \\label{Expected g}\n\\end{equation}%\nThe derivative of $E_{\\gamma ^{\\ast }}g(W_{i},\\theta )$ wrt $\\beta ^{\\ast }$\nis%\n\\begin{equation}\nK_{n,g}(\\theta ;\\gamma ^{\\ast })=E_{^{\\phi \\ast }}\\left( \n\\begin{array}{c}\nw_{1,i}(\\theta )L_{i}^{\\prime }(\\theta ^{\\ast })\\pi ^{\\ast }\\overline{Z}%\n_{i}Z_{i}^{\\prime } \\\\ \n\\overline{Z}_{i}Z_{i}^{\\prime }%\n\\end{array}%\n\\right)  \\label{Probit Kg}\n\\end{equation}%\n$\\forall (\\theta ,\\gamma ^{\\ast })\\in \\Theta _{\\delta }\\times \\Gamma _{0}$\nand $\\forall n\\geq 1.$ This verifies Assumption GMM3(iv)(a). Assumptions\nGMM3(iv)(b) and (c) hold with $K_{g}(\\theta ;\\gamma _{0})=K_{n,g}(\\theta\n;\\gamma _{0}).$\n\nTo verify Assumption GMM3(v), note that $a_{i}(\\psi _{0},\\pi\n)=w_{1,i}(\\theta _{0})L_{i}^{\\prime }(\\theta _{0})$ when $\\beta _{0}=0.$\nUsing (\\ref{Probit_1st derivative}) and (\\ref{Probit Kg}), this yields%\n\\begin{eqnarray}\ng_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\nE_{\\phi _{0}}M_{i}(\\theta _{0})\\left( \n\\begin{array}{c}\nd_{1\\psi ,i}(\\pi )^{\\prime } \\\\ \nd_{2\\psi ,i}^{\\prime }%\n\\end{array}%\n\\right) ,\\text{ }K_{g}(\\psi _{0},\\pi ;\\gamma _{0})=E_{\\phi _{0}}M_{i}(\\theta\n_{0})\\left( \n\\begin{array}{c}\n\\pi _{0}Z_{i}^{\\prime } \\\\ \nZ_{i}^{\\prime }%\n\\end{array}%\n\\right) ,\\text{ where}  \\notag \\\\\nM_{i}(\\theta _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left( \n\\begin{array}{cc}\nw_{1,i}(\\theta _{0})L_{i}^{\\prime }(\\theta _{0})\\overline{Z}_{i} & 0_{d_{Z}}\n\\\\ \n0_{d_{Z}} & \\overline{Z}_{i}%\n\\end{array}%\n\\right) .  
\\label{Probit_g_psi}\n\\end{eqnarray}%\nAssumption GMM3(v) holds because (i) $M_{i}(\\theta _{0})$ has full rank\na.s., (ii) $d_{2\\psi ,i}S=Z_{i}^{\\prime }$ for $S=(S_{1},S_{2},S_{3})\\in\nR^{d_{Z}\\times d_{X}\\times d_{X}}$ if and only if $S_{1}=1_{d_{Z}}$ and $%\nS_{3}=0_{d_{X}},$ and (iii) $d_{1\\psi ,i}(\\pi )S=\\pi _{0}Z_{i}$ for $%\nS=(1_{d_{Z}},S_{2},0_{d_{X}})$ if and only if $S_{2}=0_{d_{X}}$ and $\\pi\n=\\pi _{0}.$\n\nAssumption GMM3(vi) holds by (\\ref{Expected g}), (\\ref{Probit_g_psi}), an\nexchange of \\textquotedblleft $E$\\textquotedblright\\ and \\textquotedblleft $%\n\\partial ,$\\textquotedblright\\ the moment conditions in (\\ref{Simul Probit\nPhi Space}), and some calculations. The left-hand side does not depend on an\naverage over $n$ because the observations are identically distributed.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Verification of\nAssumption GMM4}\n\n\\hspace{0.25in}When $d_{Z}>1,$ we do not have a proof that Assumption GMM4\nholds. In this case, we just assume that it does. However, when $d_{Z}=1,$\nAssumption GMM4 can be verified by verifying Assumption GMM4$^{\\ast }.$ In\nthis case, Assumption GMM4$^{\\ast }$(i) holds automatically. Using (\\ref%\n{Probit_g_psi}), we obtain%\n\\begin{equation}\ng_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})=E_{\\phi\n_{0}}M_{i}(\\theta _{0})\\left( \n\\begin{array}{c}\n\\pi _{1}Z_{i}^{\\prime },\\pi _{2}Z_{i}^{\\prime },X_{i}^{\\prime\n},0_{d_{X}}^{\\prime } \\\\ \nZ_{i}^{\\prime },Z_{i}^{\\prime },0_{d_{X}}^{\\prime },X_{i}^{\\prime }%\n\\end{array}%\n\\right) ,\n\\end{equation}%\nwhere $M_{i}(\\theta _{0})$ is of full column rank a.s. Assumption GMM4$%\n^{\\ast }$(ii) holds because $P_{\\phi _{0}}(\\overline{Z}_{i}^{\\prime }c=0)<1$\nfor $c\\neq 0$ and $\\pi _{1}\\neq \\pi _{2}.$ Assumption GMM4$^{\\ast }$(iii)\nholds with $\\Omega _{g}(\\gamma _{0})=\\mathcal{W}(\\theta _{0};\\gamma _{0})$\nby (\\ref{Probit_WeightMatrix}) and (\\ref{Probit_Omega}) because $\\mathcal{W}%\n(\\theta _{0};\\gamma _{0})$ is positive definite by the verification of\nAssumption GMM1(vii) in (\\ref{GMM1(vii) 1})-(\\ref{GMM1(vii) 3}).\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Verification of\nAssumption GMM5}\n\n\\hspace{0.25in}The verification of Assumption GMM5(i) is analogous to that\nof Assumption GMM3\\linebreak (iii). 
The variance matrix $V_{g}(\\gamma _{0})$\nis equal to $\\Omega _{g}(\\gamma _{0})$ defined in (\\ref{Probit_Omega}).\n\nAssumption GMM5(ii) holds with $g_{\\theta }(\\theta ;\\gamma _{0})$ in (\\ref%\n{Probit_1st derivative}) using $||\\beta ||/||\\beta _{n}||=1+o(1)$ for $%\n\\theta \\in \\Theta _{n}(\\delta _{n}),$ $||\\beta _{n}||\\neq 0$ for $n$ large\nfor $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0}),$ and the\nmoment conditions in (\\ref{Simul Probit Phi Space}).\n\nAssumption GMM5(iii) holds with \n\\begin{equation}\nJ_{g}(\\gamma _{0})=E_{\\phi _{0}}M_{i}(\\theta _{0})\\binom{\\pi\n_{0}Z_{i}^{\\prime },X_{i}^{\\prime },0_{d_{X}}^{\\prime },Z_{i}^{\\prime\n}\\omega _{0}}{Z_{i}^{\\prime },0_{d_{X}}^{\\prime },X_{i}^{\\prime },0}\n\\end{equation}%\nusing (\\ref{Probit_1st derivative}) and (\\ref{Probit_g_psi}) and $\\beta\n_{n}/||\\beta _{n}||\\rightarrow \\omega _{0}.$ The matrix $J_{g}(\\gamma _{0})$\nhas full column rank because $P_{\\phi }(\\overline{Z}_{i}^{\\prime }c=0)<1$\nfor $c\\neq 0.$\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Verification of\nAssumptions V1 and V2 (Vector $\\mathbf{\\protect\\beta }$)}\n\n\\hspace{0.25in}Here we verify Assumptions V1(i)-V1(iii) (vector $\\beta $)\nand V2. We do not verify Assumption V1(iv) (vector $\\beta $). However, it\nshould hold because $\\tau _{\\beta }(\\pi ;\\gamma _{0},b)$ is a Gaussian\nprocess.\n\nWe estimate $J(\\gamma _{0})$ and $V(\\gamma _{0})$ by $\\widehat{J}_{n}=%\n\\widehat{J}_{n}(\\widehat{\\theta }_{n}^{+})$ and $\\widehat{V}_{n}=\\widehat{V}%\n_{n}(\\widehat{\\theta }_{n}^{+}),$ respectively, where%\n\\begin{eqnarray}\n\\widehat{J}_{n}(\\theta ^{+})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\widehat{J}%\n_{g,n}(\\theta ^{+})^{\\prime }\\mathcal{W}_{n}\\widehat{J}_{g,n}(\\theta ^{+}),%\n\\text{ }\\widehat{V}_{n}(\\theta ^{+})=\\widehat{J}_{g,n}(\\theta ^{+})^{\\prime }%\n\\mathcal{W}_{n}\\widehat{V}_{g,n}(\\theta ^{+})\\mathcal{W}_{n}\\widehat{J}%\n_{g,n}(\\theta ^{+}),  \\notag \\\\\n\\widehat{J}_{g,n}(\\theta ^{+})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\nn^{-1}\\sum_{i=1}^{n}M_{i}(\\theta )\\binom{\\pi Z_{i}^{\\prime },X_{i}^{\\prime\n},0_{d_{X}}^{\\prime },Z_{i}^{\\prime }\\omega }{Z_{i}^{\\prime\n},0_{d_{X}}^{\\prime },X_{i}^{\\prime },0}\\text{ and }  \\notag \\\\\n\\widehat{V}_{g,n}(\\theta ^{+})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\nn^{-1}\\sum_{i=1}^{n}\\left( e_{i}(\\theta )e_{i}(\\theta )^{\\prime }\\right)\n\\otimes \\left( \\overline{Z}_{i}\\overline{Z}_{i}^{\\prime }\\right) .\n\\end{eqnarray}\n\nAssumption V1(i) (vector $\\beta $) holds with \n\\begin{eqnarray}\nJ(\\theta ^{+};\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}J_{g}(\\theta\n^{+};\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma _{0})J_{g}(\\theta\n^{+};\\gamma _{0})\\text{ and}  \\notag \\\\\nV(\\theta ^{+};\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}J_{g}(\\theta\n^{+};\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma _{0})V_{g}(\\theta\n^{+};\\gamma _{0})\\mathcal{W}(\\theta _{0};\\gamma _{0})J_{g}(\\theta\n^{+};\\gamma _{0}),\n\\end{eqnarray}%\nwhere $J_{g}(\\theta ^{+};\\gamma _{0})$ and $V_{g}(\\theta ^{+};\\gamma _{0})$\nare defined analogously to $\\widehat{J}_{g}(\\theta ^{+})$ and $\\widehat{V}%\n_{g}(\\theta ^{+}),$ respectively, but with $n^{-1}\\sum_{i=1}^{n}$ replaced\nby $E_{\\gamma _{0}}.$ The uniform convergence conditions of Assumption V1(i)\nfor $\\widehat{J}_{n}(\\theta ^{+})$ and $\\widehat{V}_{n}(\\theta ^{+})$ follow\nfrom the uniform convergence of 
$\widehat{J}_{g,n}(\theta ^{+})$ and $\widehat{V}_{g,n}(\theta ^{+})$ and $\mathcal{W}_{n}\rightarrow _{p}\mathcal{W}(\theta _{0};\gamma _{0}).$ The former holds by the ULLN given in Lemma \ref{Lemma uniform convergence} in Supplemental Appendix D. When $\mathcal{W}_{n}$ is the identity matrix, the latter holds automatically. When $\mathcal{W}_{n}$ is the optimal weight matrix that involves a first-step estimator $\overline{\theta }_{n}$ and $\overline{\theta }_{n}$ is based on the identity weight matrix, the convergence in probability of $\mathcal{W}_{n}$ holds by Lemma \ref{Lemma Two step weight matrix}. The assumptions of Lemma \ref{Lemma Two step weight matrix} follow from Theorems \ref{Thm dist'n of estimator b=finite}(a) and \ref{Thm dist'n of estimator b=inf}(a).

Assumption V1(ii) (vector $\beta $) holds by the continuity of $M_{i}(\theta )$ and $e_{i}(\theta )$ in $\theta $ and the moment conditions in (\ref{Simul Probit Phi Space}).

Assumption V1(iii) (vector $\beta $) holds provided that $J(\theta ^{+};\gamma _{0})$ and $V(\theta ^{+};\gamma _{0})$ are both finite and non-singular when $\beta _{0}=0.$ To this end, we need that $J_{g}(\theta ^{+};\gamma _{0}),$ $V_{g}(\theta ^{+};\gamma _{0}),$ and $\mathcal{W}(\theta ;\gamma _{0})$ are all finite and non-singular. This holds using the forms of these matrices and $P_{\phi }(\overline{Z}_{i}^{\prime }c=0)<1$ for $c\neq 0$ by the arguments used in the verifications of Assumptions GMM5(iii), GMM5(i), and GMM1(vii), respectively.

Assumption V2 follows from (i) the uniform convergence of $\widehat{J}_{g,n}(\theta ^{+})$ and $\widehat{V}_{g,n}(\theta ^{+}),$ which holds by the ULLN given in Lemma \ref{Lemma uniform convergence} in Supplemental Appendix D, (ii) $\widehat{\theta }_{n}^{+}\rightarrow _{p}\theta _{0}^{+}$ under $\{\gamma _{n}\}\in \Gamma (\gamma _{0},\infty ,\omega _{0}),$ which holds by Theorem \ref{Thm dist'n of estimator b=inf}(a) and $\widehat{\beta }_{n}/||\widehat{\beta }_{n}||\rightarrow \omega _{0}$ (see Lemma 9.4(b) of Appendix B of AC1-SM), and (iii) $\mathcal{W}_{n}\rightarrow _{p}\mathcal{W}(\theta _{0};\gamma _{0}),$ which holds by Lemma \ref{Lemma Two step weight matrix}.

\newpage

\section{ \hspace{-0.34in}\textbf{.}\hspace{0.2in}Supplemental Appendix B: Proofs of GMM\newline Estimation Results}

\subsection{\hspace{-0.23in}\textbf{.}\hspace{0.18in}Lemmas}

\setcounter{equation}{0}\hspace{0.25in}This Supplemental Appendix proves the results in Theorems \ref{Thm dist'n of estimator b=finite} and \ref{Thm dist'n of estimator b=inf} of AC3. The method of proof is to show that Assumptions B1, B2, and GMM1-GMM5 imply the high-level assumptions in AC1, viz., Assumptions A, B3, C1-C8, and D1-D3 of AC1. Given this, Theorems 3.1 and 3.2 of AC1 imply Theorems \ref{Thm dist'n of estimator b=finite} and \ref{Thm dist'n of estimator b=inf} because the results of these theorems are the same; only the assumptions differ.

\begin{lemma}
\hspace{-0.08in}\textbf{.} \label{Lemma GMM A and B3}Suppose Assumption \emph{GMM1} holds. 
Then,\n\n\\noindent \\emph{(a)} Assumption \\emph{A} of \\emph{AC1 }holds and\n\n\\noindent \\emph{(b)} Assumption \\emph{B3} of \\emph{AC1 }holds with $Q(\\theta\n;\\gamma _{0})=g_{0}(\\theta ;\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta ;\\gamma\n_{0})g_{0}(\\theta ;\\gamma _{0}).$\n\\end{lemma}\n\nUnder Assumptions GMM1 and GMM2, Assumption GMM3 is used to show that the\n\"C\" assumptions of AC1 hold for the GMM estimator. As above, $\\mathcal{W}%\n(\\psi _{0};\\gamma _{0})$ abbreviates $\\mathcal{W}(\\psi _{0},\\pi ;\\gamma\n_{0}) $ when $\\beta _{0}=0.$\n\n\\begin{lemma}\n\\textbf{\\hspace{-0.08in}.} \\label{Lemma GMM suff condition C}Suppose\nAssumptions \\emph{GMM1-GMM3 }hold. Then, the following are true.\n\n\\noindent \\emph{(a)} Assumption \\emph{C1} of \\emph{AC1} holds with $D_{\\psi\n}Q_{n}(\\theta )=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}%\n(\\psi _{0};\\gamma _{0})\\overline{g}_{n}(\\theta )$ and \\newline\n$D_{\\psi \\psi }Q_{n}(\\theta )=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }%\n\\mathcal{W}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0}).$\n\n\\noindent \\emph{(b)} Assumption \\emph{C2} of \\emph{AC1 }holds with $%\nm(W_{i},\\theta )=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}%\n(\\psi _{0};\\gamma _{0})g(W_{i},\\theta ).$\n\n\\noindent \\emph{(c)} Assumption \\emph{C3} of \\emph{AC1 }holds with $\\Omega\n(\\pi _{1},\\pi _{2};\\gamma _{0})=g_{\\psi }(\\psi _{0},\\pi _{1};\\gamma\n_{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma _{0})\\Omega _{g}(\\gamma _{0})%\n\\newline\n\\times \\mathcal{W}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi _{0},\\pi _{2};\\gamma\n_{0}).$\n\n\\noindent \\emph{(d)} Assumption \\emph{C4} of \\emph{AC1 }holds with $H(\\pi\n;\\gamma _{0})=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}%\n(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})=$\\newline\n$D_{\\psi \\psi }Q_{n}(\\theta ).$\n\n\\noindent \\emph{(e)} Assumption \\emph{C5} of \\emph{AC1 }holds with $%\nK_{n}(\\theta ;\\gamma ^{\\ast })=g_{\\psi }(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime\n}\\mathcal{W}(\\psi _{0};\\gamma _{0})K_{n,g}(\\theta ;\\gamma ^{\\ast })\\in\nR^{d_{\\psi }\\times d_{\\beta }},$ and $K(\\psi _{0},\\pi ;\\gamma _{0})=g_{\\psi\n}(\\psi _{0},\\pi ;\\gamma _{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma\n_{0})K_{g}(\\psi _{0},\\pi ;\\gamma _{0}).$\n\n\\noindent \\emph{(f) }Assumption \\emph{C7} of \\emph{AC1 }holds.\n\n\\noindent \\emph{(g) }Assumption \\emph{C8} of \\emph{AC1 }holds.\n\\end{lemma}\n\n\\noindent \\textbf{Comments.} \\textbf{1.} To obtain Lemma \\ref{Lemma GMM suff\ncondition C}(a), Assumption GMM3 is sufficient but not necessary. When $%\n\\overline{g}_{n}(\\theta )$ is not a sample average, as occurs with the MD\nestimator, Assumption MD can be used in conjunction with Assumptions GMM1\nand GMM2 to obtain Lemma \\ref{Lemma GMM suff condition C}(a). 
In this case,\nAssumptions C2-C5 of AC1 can be verified directly without using Assumption\nGMM3.\n\n\\textbf{2.} Lemma \\ref{Lemma GMM suff condition C}(c)-(e) provide the\nquantities that appear in Assumption C6 of AC1, which is the same as\nAssumption GMM4.\\medskip\n\n\\begin{lemma}\n\\textbf{\\hspace{-0.08in}.} \\label{Lemma GMM Sufficient D}Suppose Assumptions \n\\emph{GMM1, }$\\emph{GMM2,}$ and \\emph{GMM5 }hold.\n\n\\noindent \\emph{(a)} Assumption \\emph{D1} of \\emph{AC1 }holds with $%\nDQ_{n}(\\theta )=g_{\\theta }(\\theta _{0};\\gamma _{0})^{\\prime }\\mathcal{W}%\n(\\theta _{0};\\gamma _{0})\\overline{g}_{n}(\\theta )$ and \\newline\n$D^{2}Q_{n}(\\theta )=g_{\\theta }(\\theta _{0};\\gamma _{0})^{\\prime }\\mathcal{W%\n}(\\theta _{0};\\gamma _{0})g_{\\theta }(\\theta _{0};\\gamma _{0}).$\n\n\\noindent \\emph{(b)} Assumption \\emph{D2} of \\emph{AC1 }holds with $J(\\gamma\n_{0})=J_{g}(\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma\n_{0})J_{g}\\left( \\gamma _{0}\\right) .$\n\n\\noindent \\emph{(c)} Assumption \\emph{D3} of \\emph{AC1 }holds with $V(\\gamma\n_{0})=J_{g}(\\gamma _{0})^{\\prime }\\mathcal{W}(\\theta _{0};\\gamma\n_{0})V_{g}\\left( \\gamma _{0}\\right) \\mathcal{W}(\\theta _{0};\\gamma\n_{0})J_{g}\\left( \\gamma _{0}\\right) .$\n\\end{lemma}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Minimum Distance\nEstimators}\n\n\\hspace{0.25in}For the MD estimator, Assumption MD can be used in place of\nAssumption GMM3 to obtain Assumption C1 of AC1.\n\n\\begin{corollary}\n\\textbf{\\hspace{-0.08in}.} \\label{Corollary MD}Assumptions \\emph{GMM1,} \n\\emph{GMM2,} and \\emph{MD} imply that Assumption \\emph{C1} of \\emph{AC1 }%\nholds with $D_{\\psi }Q_{n}(\\theta )$ and $D_{\\psi \\psi }Q_{n}(\\theta )$\ndefined as in Lemma \\emph{\\ref{Lemma GMM suff condition C}(a).}\n\\end{corollary}\n\nIn addition to the result of Corollary \\ref{Corollary MD}, Lemmas \\ref{Lemma\nGMM A and B3} and \\ref{Lemma GMM Sufficient D} show that Assumptions A, B3,\nand D1-D3 of AC1 hold for the MD estimator under Assumptions GMM1, GMM2, and\nGMM5. Hence, in order to obtain the results of Theorems 3.1 and 3.2 of AC1\nfor MD estimators and other results concerning CS's, one just needs to\nverify Assumptions C2-C8 of AC1.\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Proofs of Lemmas\\label%\n{GMM MD Proofs}}\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma GMM A and B3}.} Assumption A of\nAC1 is implied by Assumption GMM1(i).\n\nAssumption GMM1(ii) implies that Assumption B3(i) of AC1 holds with $%\nQ(\\theta ;\\gamma _{0})=g_{0}(\\theta ;\\gamma _{0})^{\\prime }\\mathcal{W}%\n(\\theta ;\\gamma _{0})g_{0}(\\theta ;\\gamma _{0}).$\n\nNow we verify Assumptions B3(ii) and B3(iii) of AC1 by using Lemma 8.1 in\nAppendix A of AC1-SM, which shows that Assumption B3$^{\\ast }$ of AC1-SM is\nsufficient for Assumptions B3(ii) and B3(iii) of AC1. Assumption B3$^{\\ast }$%\n(i) of AC1-SM holds by Assumptions GMM1(v) and GMM1(vi). Assumption B3$%\n^{\\ast }$(ii) of AC1-SM holds by Assumptions GMM1(iii) and GMM1(vii).\nAssumption B3$^{\\ast }$(iii) of AC1-SM holds by Assumptions GMM1(iv) and\nGMM1(vii). Hence, Assumption B3 of AC1 holds. $\\square $\\bigskip\n\nWe prove Lemma \\ref{Lemma GMM Sufficient D} first and then prove Corollary %\n\\ref{Corollary MD} and Lemma \\ref{Lemma GMM suff condition C}.\\medskip\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma GMM Sufficient D}. }We start\nwith the proof of part (a). 
For notational simplicity, in this proof $%\ng_{0}(\\theta ;\\gamma _{0}),$ $g_{\\theta }(\\theta ;\\gamma _{0}),$ $g_{\\psi\n}(\\theta ;\\gamma _{0}),$ and $\\mathcal{W}(\\theta ;\\gamma _{0})$ are\nabbreviated to $g_{0}(\\theta ),$ $g_{\\theta }(\\theta ),$ $g_{\\psi }(\\theta\n), $ and $\\mathcal{W}(\\theta ),$ respectively.\n\nWe start with the case in which $\\mathcal{W}_{n}(\\theta )=I_{k}.$ When $%\nDQ_{n}\\left( \\theta _{n}\\right) $ and $D^{2}Q_{n}\\left( \\theta _{n}\\right) $\ntake the form in Lemma \\ref{Lemma GMM Sufficient D}(a), the remainder term\nin Assumption D1 becomes%\n\\begin{equation}\nR_{n}^{\\ast }(\\theta )=\\left\\Vert \\overline{g}_{n}(\\theta )\\right\\Vert\n^{2}/2-\\left\\Vert \\overline{g}_{n}(\\theta _{n})\\right\\Vert ^{2}/2-\\overline{g%\n}_{n}(\\theta _{n})^{\\prime }g_{\\theta }(\\theta _{0})(\\theta -\\theta\n_{n})-\\left\\Vert g_{\\theta }(\\theta _{0})(\\theta -\\theta _{n})\\right\\Vert\n^{2}/2.  \\label{R star GMM}\n\\end{equation}%\nWe approximate $R_{n}^{\\ast }(\\theta )$ by replacing $g_{\\theta }(\\theta\n_{0})(\\theta -\\theta _{n})$ by $g_{0}(\\theta )-g_{0}(\\theta _{n})$ and get%\n\\begin{equation}\nR_{n}^{\\dag }(\\theta )=\\left\\Vert \\overline{g}_{n}(\\theta )\\right\\Vert\n^{2}/2-\\left\\Vert \\overline{g}_{n}(\\theta _{n})\\right\\Vert ^{2}/2-\\overline{g%\n}_{n}(\\theta _{n})^{\\prime }\\left( g_{0}(\\theta )-g_{0}(\\theta _{n})\\right)\n-\\left\\Vert g_{0}(\\theta )-g_{0}(\\theta _{n})\\right\\Vert ^{2}/2.\n\\label{R star star}\n\\end{equation}%\nLet $a,c,$ and $d$ be $k-$vectors for which $a=c+d.$ By the Cauchy-Schwarz\ninequality, \n\\begin{equation}\n\\left\\vert \\left\\Vert a\\right\\Vert ^{2}-\\left\\Vert c\\right\\Vert\n^{2}\\right\\vert =\\left\\vert \\left\\Vert d\\right\\Vert ^{2}+2c^{\\prime\n}d\\right\\vert \\leq \\left\\Vert d\\right\\Vert ^{2}+2\\left\\Vert c\\right\\Vert\n\\left\\Vert d\\right\\Vert .  \\label{inequality}\n\\end{equation}%\nLet $a=g_{0}(\\theta )-g_{0}(\\theta _{n})$ and $c=g_{\\theta }(\\theta\n_{0})(\\theta -\\theta _{n}),$ then \n\\begin{eqnarray}\nd\\hspace{-0.08in} &=&\\hspace{-0.08in}a-c=g_{0}(\\theta )-g_{0}(\\theta\n_{n})-g_{\\theta }(\\theta _{0})(\\theta -\\theta _{n})  \\notag \\\\\n\\hspace{-0.08in} &=&\\hspace{-0.08in}[(g_{\\theta }(\\theta _{n}^{\\dag\n})-g_{\\theta }(\\theta _{0}))B^{-1}(\\beta _{n})]B(\\beta _{n})(\\theta -\\theta\n_{n})=o(\\left\\Vert B(\\beta _{n})\\left( \\theta -\\theta _{n}\\right)\n\\right\\Vert ),  \\label{c small op}\n\\end{eqnarray}%\nwhere the first two equalities hold by definition, the third equality\nfollows from element-by-element mean-value expansions, where $\\theta\n_{n}^{\\dag }$ is between $\\theta $ and $\\theta _{n}$ (and $\\theta _{n}^{\\dag\n}$ may depend on the row)$,$ and the last equality follows from Assumption\nGMM5(ii). By Assumptions GMM5(ii) and GMM5(iii), \n\\begin{equation}\nc=g_{\\theta }\\left( \\theta _{0}\\right) \\left( \\theta -\\theta _{n}\\right) = \n\\left[ g_{\\theta }\\left( \\theta _{0}\\right) B^{-1}\\left( \\beta _{n}\\right) %\n\\right] B\\left( \\beta _{n}\\right) \\left( \\theta -\\theta _{n}\\right) =O\\left(\n\\left\\Vert B\\left( \\beta _{n}\\right) \\left( \\theta -\\theta _{n}\\right)\n\\right\\Vert \\right) .  
\\label{b term}\n\\end{equation}%\nHence, \n\\begin{eqnarray}\n&&\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }\\frac{n|R_{n}^{\\dag\n}(\\theta )-R_{n}^{\\ast }(\\theta )|}{(1+n^{1/2}||B(\\beta _{n})(\\theta -\\theta\n_{n})||)^{2}}  \\notag \\\\\n\\hspace{-0.18in} &=&\\hspace{-0.08in}\\frac{1}{2}\\sup_{\\theta \\in \\Theta\n_{n}\\left( \\delta _{n}\\right) }\\frac{n\\left\\vert 2\\overline{g}_{n}(\\theta\n_{n})^{\\prime }d+\\left\\Vert g_{0}(\\theta )-g_{0}(\\theta _{n})\\right\\Vert\n^{2}-\\left\\Vert g_{\\theta }(\\theta _{0})(\\theta -\\theta _{n})\\right\\Vert\n^{2}\\right\\vert }{(1+n^{1/2}||B(\\beta _{n})(\\theta -\\theta _{n})||)^{2}} \\\\\n\\hspace{-0.18in} &\\leq &\\hspace{-0.08in}\\frac{1}{2}\\sup_{\\theta \\in \\Theta\n_{n}\\left( \\delta _{n}\\right) }n\\left( 2\\left\\Vert \\overline{g}_{n}(\\theta\n_{n})\\right\\Vert \\left\\Vert d\\right\\Vert \\hspace{-0.02in}+\\hspace{-0.02in}%\n\\left\\Vert d\\right\\Vert ^{2}\\hspace{-0.02in}+\\hspace{-0.02in}2\\left\\Vert\nc\\right\\Vert \\left\\Vert d\\right\\Vert \\right) \\hspace{-0.02in}%\n/(1+n^{1/2}||B(\\beta _{n})(\\theta -\\theta _{n})||)^{2}=o_{p}\\left( 1\\right) ,\n\\notag\n\\end{eqnarray}%\nwhere the first equality follows from (\\ref{R star GMM}) and (\\ref{R star\nstar}), the inequality holds by (\\ref{inequality}), and the second equality\nuses (\\ref{c small op}), (\\ref{b term}), and $\\overline{g}_{n}\\left( \\theta\n_{n}\\right) =O_{p}(n^{-1/2})$, where the latter holds by Assumption GMM5(i).\nThus, it suffices to show that Assumption D1(ii) holds with $R_{n}^{\\ast\n}(\\theta )$ replaced by $R_{n}^{\\dag }(\\theta ).$\n\nNote that%\n\\begin{eqnarray}\nR_{n}^{\\dag }(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left\\Vert \n\\overline{g}_{n}(\\theta )\\right\\Vert ^{2}/2-\\left\\Vert \\overline{g}%\n_{n}\\left( \\theta _{n}\\right) +g_{0}\\left( \\theta \\right) -g_{0}\\left(\n\\theta _{n}\\right) \\right\\Vert ^{2}/2  \\notag \\\\\n&=&\\hspace{-0.08in}\\left\\Vert \\widetilde{g}_{n}(\\theta )-\\widetilde{g}%\n_{n}(\\theta _{n})\\right\\Vert ^{2}/2+\\left( g_{0}(\\theta )-g_{0}(\\theta _{n})+%\n\\overline{g}_{n}(\\theta _{n})\\right) ^{\\prime }(\\widetilde{g}_{n}(\\theta )-%\n\\widetilde{g}_{n}(\\theta _{n})),  \\label{r star 1}\n\\end{eqnarray}%\nwhere the first equality follows from (\\ref{R star star}) and the second\nequality uses $\\left\\Vert a\\right\\Vert ^{2}-\\left\\Vert c\\right\\Vert\n^{2}=\\left\\Vert a-c\\right\\Vert ^{2}+2c^{\\prime }(a-c)$ with $a=\\overline{g}%\n_{n}\\left( \\theta \\right) ,$ $c=\\overline{g}_{n}(\\theta _{n})+g_{0}(\\theta\n)-g_{0}(\\theta _{n}),$ and $a-c=\\widetilde{g}_{n}(\\theta )-\\widetilde{g}%\n_{n}(\\theta _{n}).$\n\nWe have \n\\begin{equation}\n\\eta _{n}=\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }\\frac{%\nn^{1/2}\\left\\Vert \\widetilde{g}_{n}(\\theta )-\\widetilde{g}_{n}(\\theta\n_{n})\\right\\Vert }{1+n^{1/2}\\left\\Vert B(\\beta _{n})(\\theta -\\theta\n_{n})\\right\\Vert }=o_{p}\\left( 1\\right) ,  \\label{eta}\n\\end{equation}%\nwhere the $o_{p}\\left( 1\\right) $ term holds by Assumption GMM2(ii)$.$ By (%\n\\ref{r star 1}), (\\ref{eta}), and the triangle inequality,%\n\\begin{eqnarray}\n&&\\hspace{-0.08in}\\sup_{\\theta \\in \\Theta _{n}\\left( \\delta _{n}\\right) }%\n\\frac{2n|R_{n}^{\\dag }(\\theta )|}{\\left( 1+n^{1/2}\\left\\Vert B(\\beta\n_{n})(\\theta -\\theta _{n})\\right\\Vert \\right) ^{2}}\\hspace{-0.08in}  \\notag\n\\\\\n&\\leq &\\hspace{-0.08in}\\eta _{n}^{2}+2\\sup_{\\theta \\in \\Theta _{n}\\left(\n\\delta 
_{n}\\right) }\\frac{n^{1/2}\\left\\Vert g_{0}(\\theta )-g_{0}(\\theta\n_{n})\\right\\Vert +n^{1/2}||\\overline{g}_{n}(\\theta _{n})||}{%\n1+n^{1/2}\\left\\Vert B(\\beta _{n})(\\theta -\\theta _{n})\\right\\Vert }\\eta _{n}\n\\notag \\\\\n&=&\\hspace{-0.08in}\\eta _{n}^{2}+O_{p}(1)\\eta _{n}=o_{p}(1),\n\\end{eqnarray}%\nwhere the first equality holds because $\\overline{g}_{n}\\left( \\theta\n_{n}\\right) =O_{p}(n^{-1/2})$ and $\\left\\Vert g_{0}\\left( \\theta \\right)\n-g_{0}\\left( \\theta _{n}\\right) \\right\\Vert =O(||B(\\beta _{n})$\\allowbreak $%\n(\\theta -\\theta _{n})||)$ uniformly on $\\Theta _{n}\\left( \\delta _{n}\\right)\n.$ To see that the latter holds, element-by-element mean-value expansions\ngive%\n\\begin{equation}\ng_{0}\\left( \\theta \\right) -g_{0}\\left( \\theta _{n}\\right) =(g_{\\theta\n}(\\theta _{n}^{\\dag })B^{-1}(\\beta _{n}))B\\left( \\beta _{n}\\right) \\left(\n\\theta -\\theta _{n}\\right) =\\left( J_{g}(\\gamma _{0})+o\\left( 1\\right)\n\\right) B\\left( \\beta _{n}\\right) \\left( \\theta -\\theta _{n}\\right) ,\n\\label{G diff}\n\\end{equation}%\nwhere $\\theta _{n}^{\\dag }$ lies between $\\theta $ and $\\theta _{n}$ and the\nlast equality follows from Assumptions GMM5(ii) and GMM5(iii). This\ncompletes the proof of Lemma \\ref{Lemma GMM Sufficient D}(a) for the case in\nwhich $\\mathcal{W}_{n}\\left( \\theta \\right) =I_{k}.$\n\nNext, Lemma \\ref{Lemma GMM Sufficient D}(a) is established for the case\nwhere $\\mathcal{W}_{n}(\\theta )$ is as in Assumption GMM1. By Assumptions\nGMM1(ii) and GMM1(vii), we know that $\\mathcal{W}_{n}(\\theta )$ is symmetric\nand positive definite in a neighborhood of $\\theta _{0}.$ Hence, both $%\n\\mathcal{W}(\\theta )$ and $\\mathcal{W}_{n}(\\theta )$ have square roots,\ndenoted by $\\mathcal{W}^{1/2}(\\theta )$ and $\\mathcal{W}_{n}^{1/2}(\\theta ),$\nrespectively$.$ The idea is to use the same proof as above, but with $%\n\\overline{g}_{n}(\\theta ),$ $g_{0}(\\theta ),$ and $g_{\\theta }(\\theta _{0})$\nreplaced by $\\mathcal{W}_{n}^{1/2}(\\theta )\\overline{g}_{n}(\\theta ),$ $%\n\\mathcal{W}^{1/2}(\\theta _{0})g_{0}(\\theta ),$ and $\\mathcal{W}^{1/2}(\\theta\n_{0})g_{\\theta }(\\theta _{0}).$ With these changes, $R_{n}^{\\ast }(\\theta )$\nin (\\ref{R star GMM}) becomes%\n\\begin{eqnarray}\n&&R_{n}^{\\ast \\ast }(\\theta )\\overset{}{=}||\\mathcal{W}_{n}^{1/2}(\\theta )%\n\\overline{g}_{n}(\\theta )||^{2}/2-||\\mathcal{W}_{n}^{1/2}(\\theta _{n})%\n\\overline{g}_{n}(\\theta _{n})||^{2}/2-  \\label{R star star replace} \\\\\n&&\\overline{g}_{n}(\\theta _{n})^{\\prime }\\mathcal{W}_{n}^{1/2}(\\theta\n_{n})^{\\prime }\\mathcal{W}^{1/2}(\\theta _{0})g_{\\theta }(\\theta _{0})(\\theta\n-\\theta _{n})-||\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right) g_{\\theta\n}(\\theta _{0})(\\theta -\\theta _{n})||^{2}/2.  
\\notag\n\\end{eqnarray}%\nTo show the condition in Assumption D1(ii) holds for $R_{n}^{\\ast \\ast\n}\\left( \\theta \\right) ,$ the method used for the case $\\mathcal{W}%\n_{n}\\left( \\theta \\right) =I_{k}$ works provided that Assumptions GMM2(ii)\nand GMM5, which are used in the foregoing proof, hold with the same changes.\nAssumption GMM5 obviously does with $V_{g}\\left( \\gamma _{0}\\right) $ and $%\nJ_{g}\\left( \\gamma _{0}\\right) $ adjusted to $\\mathcal{W}^{1/2}\\left( \\theta\n_{0}\\right) V_{g}\\left( \\gamma _{0}\\right) \\mathcal{W}^{1/2}\\left( \\theta\n_{0}\\right) $ and $\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right) J_{g}\\left(\n\\gamma _{0}\\right) ,$ respectively.\n\nWe now show Assumption GMM2(ii) also holds with the changes above. For $%\n\\theta \\in \\Theta _{n}(\\delta _{n}),$%\n\\begin{eqnarray}\n&&\\hspace{-0.08in}||\\mathcal{W}_{n}^{1/2}\\left( \\theta \\right) \\overline{g}%\n_{n}\\left( \\theta \\right) -\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right)\ng_{0}\\left( \\theta \\right) -\\mathcal{W}_{n}^{1/2}\\left( \\theta _{n}\\right) \n\\overline{g}_{n}\\left( \\theta _{n}\\right) +\\mathcal{W}^{1/2}\\left( \\theta\n_{0}\\right) g_{0}\\left( \\theta _{n}\\right) ||  \\notag \\\\\n&\\leq &\\hspace{-0.08in}||\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right)\n||\\left\\Vert \\widetilde{g}_{n}(\\theta )-\\widetilde{g}_{n}(\\theta\n_{n})\\right\\Vert +||\\mathcal{W}_{n}^{1/2}\\left( \\theta \\right) -\\mathcal{W}%\n^{1/2}\\left( \\theta _{0}\\right) ||\\left\\Vert \\overline{g}_{n}\\left( \\theta\n\\right) -\\overline{g}_{n}\\left( \\theta _{n}\\right) \\right\\Vert +  \\notag \\\\\n&&\\hspace{-0.08in}||\\mathcal{W}_{n}^{1/2}\\left( \\theta \\right) -\\mathcal{W}%\n_{n}^{1/2}\\left( \\theta _{n}\\right) ||\\left\\Vert \\overline{g}_{n}\\left(\n\\theta _{n}\\right) \\right\\Vert  \\notag \\\\\n&\\leq &\\hspace{-0.08in}O\\left( 1\\right) \\left\\Vert \\widetilde{g}_{n}(\\theta\n)-\\widetilde{g}_{n}(\\theta _{n})\\right\\Vert +o_{p}\\left( 1\\right)\n(\\left\\Vert \\widetilde{g}_{n}(\\theta )-\\widetilde{g}_{n}(\\theta\n_{n})\\right\\Vert +\\left\\Vert g_{0}\\left( \\theta \\right) -g_{0}\\left( \\theta\n_{n}\\right) \\right\\Vert )+  \\notag \\\\\n&&\\hspace{-0.08in}o_{p}\\left( 1\\right) \\left\\Vert \\overline{g}_{n}\\left(\n\\theta _{n}\\right) \\right\\Vert  \\label{GMM adjustment} \\\\\n&=&\\hspace{-0.08in}o_{p}(n^{-1/2}\\sup_{\\theta \\in \\Theta _{n}(\\delta\n_{n})}(1+n^{1/2}||B(\\beta _{n})(\\theta -\\theta _{n})||))+O\\left( \\left\\Vert\n\\left( B\\left( \\beta _{n}\\right) \\left( \\theta -\\theta _{n}\\right) \\right)\n\\right\\Vert \\right) =o_{p}(1),  \\notag\n\\end{eqnarray}%\nwhere the first inequality follows from adding and subtracting $\\mathcal{W}%\n^{1/2}\\left( \\theta _{0}\\right) \\overline{g}_{n}\\left( \\theta \\right) ,$%\n\\linebreak $\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right) \\overline{g}%\n_{n}\\left( \\theta _{n}\\right) ,$ and $\\mathcal{W}_{n}^{1/2}\\left( \\theta\n\\right) \\overline{g}_{n}\\left( \\theta _{n}\\right) $ and invoking the\ntriangle inequality, the second inequality holds by Assumptions GMM1(ii),\nGMM1(vi), and GMM1(vii), the first equality holds by Assumption GMM2(ii), (%\n\\ref{G diff}), and $\\overline{g}_{n}\\left( \\theta _{n}\\right)\n=O_{p}(n^{-1/2}),$ and the second equality holds by the definition of $%\n\\Theta _{n}(\\delta _{n})$ and $B(\\beta _{n})$. 
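To spell out this last step (under our reading of the definitions in AC1, where $\Theta _{n}(\delta _{n})=\{\theta \in \Theta :||\theta -\theta _{n}||\leq \delta _{n}\}$ with $\delta _{n}\rightarrow 0,$ and the entries of $B(\beta _{n})$ are bounded because $\{\beta _{n}\}$ lies in a bounded set), for some $C<\infty ,$
\begin{equation*}
\sup_{\theta \in \Theta _{n}(\delta _{n})}||B(\beta _{n})(\theta -\theta _{n})||\leq C\sup_{\theta \in \Theta _{n}(\delta _{n})}||\theta -\theta _{n}||\leq C\delta _{n}=o(1),
\end{equation*}%
so the $O(||B(\beta _{n})(\theta -\theta _{n})||)$ term in (\ref{GMM adjustment}) is $o(1)$ uniformly on $\Theta _{n}(\delta _{n}).$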
By (\\ref{GMM adjustment}),\nthe condition in Assumption D1(ii) holds with $R_{n}^{\\ast }(\\theta )$\nchanged to $R_{n}^{\\ast \\ast }(\\theta ).$\n\nWhen the random derivative matrices take the form in Lemma \\ref{Lemma GMM\nSufficient D}(a), the remainder term in Assumption D1(i) is%\n\\begin{eqnarray}\nR_{n}^{\\ast }(\\theta )\\hspace{-0.08in} &=&\\hspace{-0.08in}||\\mathcal{W}%\n_{n}^{1/2}\\hspace{-0.02in}\\left( \\theta \\right) \\overline{g}_{n}(\\theta\n)||^{2}/2-||\\mathcal{W}_{n}^{1/2}\\hspace{-0.02in}\\left( \\theta _{n}\\right) \n\\overline{g}_{n}(\\theta _{n})||^{2}/2-\\overline{g}_{n}(\\theta _{n})^{\\prime }%\n\\mathcal{W}\\left( \\theta _{0}\\right) g_{\\theta }(\\theta _{0})^{\\prime\n}(\\theta -\\theta _{n})-  \\notag \\\\\n&&\\hspace{-0.08in}||\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right) g_{\\theta\n}(\\theta _{0})(\\theta -\\theta _{n})||^{2}/2.\n\\end{eqnarray}%\nWe now show the difference between $R_{n}^{\\ast }\\left( \\theta \\right) $ and \n$R_{n}^{\\ast \\ast }\\left( \\theta \\right) $ in (\\ref{R star star replace}) is\nsmall enough so that the condition in Assumption D1(ii) holds for $%\nR_{n}^{\\ast }\\left( \\theta \\right) $ provided it holds for $R_{n}^{\\ast \\ast\n}\\left( \\theta \\right) .$ For $\\theta \\in \\Theta _{n}(\\delta _{n}),$%\n\\begin{eqnarray}\n&&\\hspace{-0.08in}|R_{n}^{\\ast }\\left( \\theta \\right) -R_{n}^{\\ast \\ast\n}\\left( \\theta \\right) |\\overset{}{=}|\\overline{g}_{n}(\\theta _{n})^{\\prime\n}(\\mathcal{W}_{n}^{1/2}\\left( \\theta _{n}\\right) -\\mathcal{W}^{1/2}\\left(\n\\theta _{0}\\right) )^{\\prime }\\mathcal{W}^{1/2}\\left( \\theta _{0}\\right)\ng_{\\theta }(\\theta _{0})(\\theta -\\theta _{n})|  \\notag \\\\\n&\\leq &\\hspace{-0.08in}\\left\\Vert \\overline{g}_{n}\\left( \\theta _{n}\\right)\n\\right\\Vert \\cdot ||\\mathcal{W}_{n}^{1/2}\\left( \\theta _{n}\\right) -\\mathcal{%\nW}^{1/2}\\left( \\theta _{0}\\right) ||\\cdot ||\\mathcal{W}^{1/2}\\left( \\theta\n_{0}\\right) ||\\cdot \\left\\Vert g_{\\theta }(\\theta _{0})B^{-1}\\left( \\beta\n_{n}\\right) \\right\\Vert \\cdot  \\notag \\\\\n&&\\hspace{-0.08in}\\left\\Vert B\\left( \\beta _{n}\\right) \\left( \\theta -\\theta\n_{n}\\right) \\right\\Vert  \\notag \\\\\n&=&\\hspace{-0.08in}o_{p}(n^{-1/2}\\left\\Vert B\\left( \\beta _{n}\\right) \\left(\n\\theta -\\theta _{n}\\right) \\right\\Vert )=o_{p}(1),\n\\end{eqnarray}%\nwhere the second last equality holds by Assumptions GMM1 and GMM5. This\ncompletes the proof of part (a).\n\nPart (b) follows from part (a) and Assumptions GMM5(ii) and GMM5(iii).\n\nPart (c) follows from part (a) and Assumptions GMM5(i)-(iii). $\\square $%\n\\bigskip\n\nWe now prove Corollary \\ref{Corollary MD} and then use Corollary \\ref%\n{Corollary MD} to prove Lemma \\ref{Lemma GMM suff condition C}.\\medskip\n\n\\noindent \\textbf{Proof of Corollary \\ref{Corollary MD}. 
}The proof is\nanalogous to the proof of Lemma \\ref{Lemma GMM Sufficient D}(a) with \\ (i) $%\nDQ_{n}\\left( \\theta _{n}\\right) $ and $D^{2}Q_{n}\\left( \\theta _{n}\\right) $\nin Lemma \\ref{Lemma GMM Sufficient D}(a) changed to $D_{\\psi }Q_{n}\\left(\n\\psi _{0,n},\\pi \\right) $ and $D_{\\psi \\psi }Q_{n}\\left( \\psi _{0,n},\\pi\n\\right) $ in Lemma \\ref{Lemma GMM suff condition C}(a)$,$ (ii) $R_{n}^{\\ast\n}(\\theta )$ changed to $R_{n}(\\psi ,\\pi ),$ (iii) $\\theta _{n}$ and $\\theta\n-\\theta _{n}$ changed to $(\\psi _{0,n},\\pi )$ and $\\psi -\\psi _{0,n},$ (iv) $%\ng_{\\theta }\\left( \\cdot \\right) $ changed to $g_{\\psi }(\\cdot ),$ where as\nabove $g_{\\theta }(\\cdot )$ and $g_{\\psi }(\\cdot )$ abbreviate $g_{\\theta\n}(\\cdot ;\\gamma _{0})$ and $g_{\\psi }(\\cdot ;\\gamma _{0}),$ respectively$,$\n(v) $B(\\beta _{n})$ and $B^{-1}(\\beta _{n})$ deleted throughout, (vi) $%\n\\theta _{n}^{\\dag }$ changed to $(\\psi _{0,n}^{\\dag }(\\pi ),\\pi )$ with $%\n\\psi _{0,n}^{\\dag }(\\pi )$ between $\\psi $ and $\\psi _{0,n},$ (vii) $\\theta\n\\in \\Theta _{n}\\left( \\delta _{n}\\right) $ changed to $\\psi \\in \\Psi \\left(\n\\pi \\right) $ and $\\left\\Vert \\psi -\\psi _{0,n}\\right\\Vert \\leq \\delta _{n},$\nand (viii) $O_{p}\\left( 1\\right) $ and $o_{p}\\left( 1\\right) $ changed to $%\nO_{p\\pi }(1)$ and $o_{p\\pi }(1),$ where the uniformity over $\\Pi $ usually\nholds using the compactness of $\\Pi ,$ and (ix) $\\mathcal{W}\\left( \\theta\n_{0}\\right) $ changed to $\\mathcal{W}(\\psi _{0};\\gamma _{0}).$ Note that\nAssumptions GMM3(iii) and MD hold with $\\pi _{n}$ replaced by $\\pi $ $%\n\\forall \\pi \\in \\Pi $ under Assumption GMM1(i). The assumptions that are\nreferenced in the proof also are changed accordingly. Specifically, the\nproof goes through with Assumption GMM2(ii) changed to Assumption GMM2(i),\nAssumption GMM5(i) changed to Assumption MD, Assumption GMM5(ii) changed to\nthe continuity of $g_{\\psi }(\\theta ,\\pi )$ uniformly over $\\Pi ,$ which is\nimplied by Assumption GMM1(vii) and the compactness of $\\Pi ,$ and\nAssumption GMM5(iii) changed to the continuity of $g_{\\psi }(\\theta ).$ (The\nassumption that $J_{g}(\\gamma _{0})$ has full column rank is not used in the\nproof of Lemma \\ref{Lemma GMM Sufficient D}(a).)\n\nAssumption C1(iii) follows from the form of $D_{\\psi }Q_{n}(\\theta )$ and $%\nD_{\\psi \\psi }Q_{n}(\\theta )$ in Lemma \\ref{Lemma GMM suff condition C} and\nAssumption GMM1(i). $\\square $\\bigskip\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma GMM suff condition C}. }First we\nprove part (a). Under Assumption GMM3, we can show Assumption MD holds using\na proof that is similar to the proof of Lemma 9.1 in Appendix B of AC1-SM\nwith (i) $D_{\\psi }Q_{n}\\left( \\psi _{0,n},\\pi \\right) $ changed to $%\n\\overline{g}_{n}(\\psi _{0,n},\\pi ),$ (ii) $m\\left( W_{i},\\theta \\right) $\nchanged to $g\\left( W_{i},\\theta \\right) ,$ (iii) Assumptions C2, C3, and C5\nof AC1 changed to the corresponding conditions in Assumptions GMM3. By\nCorollary \\ref{Corollary MD}, Lemma \\ref{Lemma GMM suff condition C}(a)\nholds under Assumptions GMM1-GMM3.\n\nPart (b) follows from part (a) and Assumptions GMM3(i) and GMM3(ii).\n\nPart (c) follows from part (b) and Assumptions GMM1(i) and GMM3(iii).\n\nPart (d) follows from part (a), $H(\\pi ;\\gamma _{0})=D_{\\psi \\psi\n}Q_{n}(\\psi _{0,n},\\pi ),$ and Assumption GMM1\\allowbreak (viii).\n\nPart (e) follows from part (a) and Assumption GMM3(iv).\n\nNow we verify part (f). 
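The verification rests on an elementary projection bound, recorded here for completeness: for any matrix $X$ with full column rank and any conformable vector $Y,$
\begin{equation*}
Y^{\prime }X(X^{\prime }X)^{-1}X^{\prime }Y=||P_{X}Y||^{2}\leq ||Y||^{2},\text{ where }P_{X}=X(X^{\prime }X)^{-1}X^{\prime },
\end{equation*}%
because $P_{X}$ is symmetric and idempotent, with equality iff $Y=XS$ for some vector $S,$ i.e., iff $Y$ lies in the column space of $X.$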
Note that when $\\beta _{0}=0$ as in Assumption C7, $%\nK_{g}(\\psi _{0},\\pi ;\\gamma _{0})$ does not depend on $\\pi $ by Assumptions\nGMM1(i) and GMM3(i). Given the form of $H(\\pi ;\\gamma _{0})$ and $K(\\pi\n;\\gamma _{0})$ in parts (d) and (e), for any $\\pi \\in \\Pi ,$%\n\\begin{eqnarray}\n&&\\omega _{0}^{\\prime }K(\\pi ;\\gamma _{0})^{\\prime }H^{-1}(\\pi ;\\gamma\n_{0})K(\\pi ;\\gamma _{0})\\omega _{0}\\overset{}{=}Y^{\\prime }X(\\pi )(X(\\pi\n)^{\\prime }X(\\pi ))^{-1}X(\\pi )^{\\prime }Y\\overset{}{\\leq }Y^{\\prime }Y,%\n\\text{ where}  \\notag \\\\\n&&X(\\pi )\\overset{}{=}\\mathcal{W}^{1/2}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi\n_{0},\\pi ;\\gamma _{0})\\text{, }Y\\overset{}{=}\\mathcal{W}^{1/2}(\\psi\n_{0};\\gamma _{0})K_{g}(\\psi _{0},\\pi ;\\gamma _{0})\\omega _{0},\n\\label{C6(ii) eq}\n\\end{eqnarray}%\nand $Y$ does not depend on $\\pi .$ The inequality in (\\ref{C6(ii) eq}) holds\nbecause $X(\\pi )(X(\\pi )^{\\prime }X(\\pi ))^{-1}\\allowbreak X(\\pi )^{\\prime }$\nis a projection matrix. The inequality holds as an equality when $\\mathcal{W}%\n^{1/2}(\\psi _{0};\\gamma _{0})$\\linebreak $\\times K_{g}(\\psi _{0},\\pi ;\\gamma\n_{0})\\omega _{0}=\\mathcal{W}^{1/2}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\psi\n_{0},\\pi ;\\gamma _{0})S$ for some $S\\in R^{d_{\\psi }}.$ By Assumptions\nGMM1(vii) and GMM3(v), the inequality in (\\ref{C6(ii) eq}) holds as an\nequality iff $\\pi =\\pi _{0}.$ This completes the verification of Assumption\nC7.\n\nTo verify Assumption C8 as in part (g), we have%\n\\begin{eqnarray}\n\\frac{\\partial }{\\partial \\psi ^{\\prime }}E_{\\gamma _{n}}D_{\\psi }Q_{n}(\\psi\n_{n},\\pi _{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}g_{\\psi }(\\psi _{0},\\pi\n_{n};\\gamma _{0})^{\\prime }\\mathcal{W}(\\psi _{0};\\gamma _{0})\\frac{\\partial \n}{\\partial \\psi ^{\\prime }}E_{\\gamma _{n}}\\overline{g}_{n}(\\theta _{n}) \n\\notag \\\\\n&=&\\hspace{-0.08in}g_{\\psi }(\\psi _{0},\\pi _{n};\\gamma _{0})^{\\prime }%\n\\mathcal{W}(\\psi _{0};\\gamma _{0})\\left( n^{-1}\\sum_{i=1}^{n}\\frac{\\partial \n}{\\partial \\psi ^{\\prime }}E_{\\gamma _{n}}g(W_{i},\\theta _{n})\\right)  \\notag\n\\\\\n&\\rightarrow &\\hspace{-0.08in}g_{\\psi }(\\theta _{0};\\gamma _{0})^{\\prime }%\n\\mathcal{W}(\\psi _{0};\\gamma _{0})g_{\\psi }(\\theta _{0};\\gamma _{0})=H(\\pi\n_{0};\\gamma _{0}),\n\\end{eqnarray}%\nwhere the first equality holds by Lemma \\ref{Lemma GMM suff condition C}(a),\nthe second equality holds by Assumption GMM3(i), the convergence holds by\nAssumption GMM3(vi) and the continuity of $g_{\\psi }(\\theta ;\\gamma _{0})$\nin $\\pi $ in Assumption GMM1(v), and the third equality holds by Lemma \\ref%\n{Lemma GMM suff condition C}(d). $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Proofs of Section 3\nLemmas}\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma Two step weight matrix}. 
}By the triangle inequality,%
\begin{equation}
\left\Vert \mathcal{W}_{n}(\overline{\theta }_{n})-\mathcal{W}(\theta _{0};\gamma _{0})\right\Vert \leq \left\Vert \mathcal{W}_{n}(\overline{\theta }_{n})-\mathcal{W}(\overline{\theta }_{n};\gamma _{0})\right\Vert +\left\Vert \mathcal{W}(\overline{\theta }_{n};\gamma _{0})-\mathcal{W}(\theta _{0};\gamma _{0})\right\Vert ,  \label{weight ineq}
\end{equation}%
where the first term on the rhs is $o_{p}(1)$ because $\mathcal{W}_{n}(\theta )$ converges to $\mathcal{W}(\theta ;\gamma _{0})$ uniformly over $\Theta .$ When $\beta _{0}\neq 0,$ the second term on the rhs of (\ref{weight ineq}) is $o_{p}(1)$ because $\mathcal{W}(\theta ;\gamma _{0})$ is continuous in $\theta $ and $\overline{\theta }_{n}\rightarrow _{p}\theta _{0}.$ When $\beta _{0}=0,$ to show the second term on the rhs of (\ref{weight ineq}) is $o_{p}(1)$, we have%
\begin{eqnarray}
&&\hspace{-0.08in}\left\Vert \mathcal{W}(\overline{\theta }_{n};\gamma _{0})-\mathcal{W}(\theta _{0};\gamma _{0})\right\Vert  \notag \\
&\leq &\hspace{-0.08in}\left\Vert \mathcal{W}(\overline{\psi }_{n},\overline{\pi }_{n};\gamma _{0})-\mathcal{W}(\psi _{0},\overline{\pi }_{n};\gamma _{0})\right\Vert +||\mathcal{W}(\psi _{0},\overline{\pi }_{n};\gamma _{0})-\mathcal{W}(\psi _{0},\pi _{0};\gamma _{0})||  \notag \\
&\leq &\hspace{-0.08in}\sup_{\pi \in \Pi }\left\Vert \mathcal{W}(\overline{\psi }_{n},\pi ;\gamma _{0})-\mathcal{W}(\psi _{0},\pi ;\gamma _{0})\right\Vert ,  \label{weight ineq 2}
\end{eqnarray}%
where the first inequality holds by the triangle inequality, and the second inequality holds because $\mathcal{W}(\psi _{0},\pi ;\gamma _{0})$ does not depend on $\pi $ when $\beta _{0}=0,$ which in turn holds by Assumptions GMM1(i) and GMM1(ii)$.$ The third line of (\ref{weight ineq 2}) is $o_{p}(1)$ because $\overline{\psi }_{n}\rightarrow _{p}\psi _{0}$ and $\mathcal{W}(\psi ,\pi ;\gamma _{0})$ is continuous in $\psi $ uniformly over $\pi \in \Pi ,$ where the latter holds because $\mathcal{W}(\theta ;\gamma _{0})$ is continuous in $\theta $ and $\Pi $ is compact. This completes the proof. $\square $\bigskip

\noindent \textbf{Proof of Lemma \ref{Lemma GMM Smooth Sufficient}.} First we show that Assumption GMM2(ii) holds under Assumption GMM2$^{\ast }$. 
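Recall the sense in which element-by-element mean-value expansions are used here (a standard fact, recorded for completeness): for differentiable $f=(f_{1},...,f_{k})^{\prime }$ with $f_{j}:\Theta \rightarrow R,$
\begin{equation*}
f_{j}(\theta )-f_{j}(\theta _{n})=\frac{\partial }{\partial \theta ^{\prime }}f_{j}(\theta _{n}^{\dag ,j})(\theta -\theta _{n})
\end{equation*}%
for some $\theta _{n}^{\dag ,j}$ between $\theta $ and $\theta _{n},$ which is written compactly as $f(\theta )-f(\theta _{n})=\frac{\partial }{\partial \theta ^{\prime }}f(\theta _{n}^{\dag })(\theta -\theta _{n})$ with the understanding that $\theta _{n}^{\dag }$ may differ across rows.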
For $%\n\\theta \\in \\Theta _{n}(\\delta _{n}),$%\n\\begin{eqnarray}\n\\widetilde{g}_{n}(\\theta ;\\gamma _{0})-\\widetilde{g}_{n}(\\theta _{n};\\gamma\n_{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\frac{\\partial }{\\partial \\theta\n^{\\prime }}\\widetilde{g}_{n}(\\theta _{n}^{\\dag };\\gamma _{0})(\\theta -\\theta\n_{n})  \\notag \\\\\n&=&\\hspace{-0.08in}\\left( [\\frac{\\partial }{\\partial \\theta ^{\\prime }}%\ng_{n}(\\theta _{n}^{\\dag };\\gamma _{0})-g_{\\theta }(\\theta _{n}^{\\dag\n};\\gamma _{0})]B^{-1}(\\beta _{n})\\right) B(\\beta _{n})(\\theta -\\theta _{n}) \n\\notag \\\\\n&=&\\hspace{-0.08in}o_{p}(||B(\\beta _{n})(\\theta -\\theta _{n})||),\n\\label{gmm sm 1}\n\\end{eqnarray}%\nwhere the first equality holds by element-by-element mean-value expansions\nwith $\\theta _{n}^{\\dag }$ between $\\theta $ and $\\theta _{n}$ (and $\\theta\n_{n}^{\\dag }$ may depend on the row)$,$ the second equality holds by the\ndefinition of $\\widetilde{g}_{n}(\\theta ,\\gamma _{0}),$ and the last\nequality holds uniformly over $\\theta \\in \\Theta _{n}(\\delta _{n})$ by\nAssumption GMM2$^{\\ast }$(iii). Assumption GMM2(ii) follows from (\\ref{gmm\nsm 1}) using the \"$||B(\\beta _{n})(\\theta -\\theta _{n})||$\" part of the\ndenominator in Assumption GMM2(ii).\n\nThe proof for Assumption GMM2(i) is analogous to the proof of\nAssumption\\linebreak GMM2(ii). For $\\psi \\in \\Psi (\\pi ):||\\psi -\\psi\n_{0,n}||\\,\\leq \\delta _{n},$%\n\\begin{eqnarray}\n\\widetilde{g}_{n}(\\psi ,\\pi ;\\gamma _{0})-\\widetilde{g}_{n}(\\psi _{0,n},\\pi\n;\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left( \\frac{\\partial }{%\n\\partial \\psi ^{\\prime }}g_{n}(\\psi _{0,n}^{\\dag }(\\pi ),\\pi ;\\gamma\n_{0})-g_{\\psi }(\\psi _{0,n}^{\\dag }(\\pi ),\\pi ;\\gamma _{0})\\right) (\\psi\n-\\psi _{0,n})  \\notag \\\\\n&=&\\hspace{-0.08in}o_{p\\pi }(||\\psi -\\psi _{0,n}||),  \\label{gmm sm 2}\n\\end{eqnarray}%\nwhere the first equality holds by element-by-element mean-value expansions\nwith $\\psi _{0,n}^{\\dag }(\\pi )$ between $\\psi $ and $\\psi _{0,n}$ (and $%\n\\psi _{0,n}^{\\dag }(\\pi )$ may depend on the row), and the second equality\nholds uniformly over $\\psi \\in \\Psi (\\pi ):||\\psi -\\psi _{0,n}||\\,\\leq\n\\delta _{n}$ by Assumption GMM2$^{\\ast }$(ii). Assumption GMM2(i) follows\nfrom (\\ref{gmm sm 2}) using the \"$||\\psi -\\psi _{0,n}||$\" part of the\ndenominator in Assumption GMM2(i). $\\square \\bigskip $\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma As GMM4}. }Assumption GMM4 is\nthe same as Assumption C6 of AC1. Hence, it suffices to verify the latter.\nWe verify Assumption C6 of AC1 by verifying the sufficient condition\nAssumption C6$^{\\ast \\ast }$ given in Lemma 8.5 in Appendix A of AC1-SM.\nBecause $\\beta $ is a scalar, it remains to show Assumption C6$^{\\ast \\ast }$%\n(ii) of AC1 holds. 
By Lemma \\ref{Lemma GMM suff condition C}(c), the\ncovariance matrix $\\Omega _{G}(\\pi _{1},\\pi _{2};\\gamma _{0})$ in Assumption\nC6$^{\\ast \\ast }$(ii) is%\n\\begin{eqnarray}\n\\Omega _{G}(\\pi _{1},\\pi _{2};\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in%\n}g_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})^{\\prime }%\n\\widetilde{\\Omega }_{g}(\\gamma _{0})g_{\\psi }^{\\ast }(\\psi _{0},\\pi _{1},\\pi\n_{2};\\gamma _{0})^{\\prime },\\text{ where}  \\notag \\\\\n\\widetilde{\\Omega }_{g}(\\gamma _{0})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\mathcal{W}(\\psi _{0};\\gamma _{0})\\Omega _{g}(\\gamma _{0})\\mathcal{W}(\\psi\n_{0};\\gamma _{0})\n\\end{eqnarray}%\nand $\\widetilde{\\Omega }_{g}(\\gamma _{0})$ does not depend on $\\pi _{1}$ and \n$\\pi _{2}$ by Assumptions GMM1(i) and GMM3(i). Because $g_{\\psi }^{\\ast\n}(\\psi _{0},\\pi _{1},\\pi _{2};\\gamma _{0})\\in R^{k\\times (d_{\\zeta }+2)}$\nand $k\\geq d_{\\theta }\\geq d_{\\zeta }+2,$ Assumption C6$^{\\ast \\ast }$(ii)\nis implied by Assumptions GMM1$^{\\ast }$(vii), GMM4$^{\\ast }$(ii), and GMM4$%\n^{\\ast }$(iii). $\\square $\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Supplemental Appendix C:\nProofs for Wald Tests}\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Proofs of Asymptotic\nDistributions}\n\n\\setcounter{equation}{0}\\hspace{0.25in}Most of the results in Section \\ref%\n{Wald Tests Sec} of AC3 are stated to hold under some combination of\nAssumptions GMM1-GMM5 or under certain assumptions from AC1 (plus some other\nassumptions). We prove the results of this section using the stated\nassumptions from AC1. Lemmas \\ref{Lemma GMM A and B3}-\\ref{Lemma GMM\nSufficient D} in Supplemental Appendix B show that the appropriate\ncombination of Assumptions GMM1-GMM5 imply the corresponding assumptions\nfrom AC1.\\medskip\n\n\\noindent \\textbf{Proof of Lemma \\ref{Lemma Sufficient R2}. }(i) When $%\nd_{\\pi }^{\\ast }=d_{r},$ $\\eta _{n}(\\widehat{\\theta }_{n})=0$ by definition\nin (\\ref{Defn eta}).\n\n(ii) When $d_{r}=1,$ $d_{\\pi }^{\\ast }=0$ or $d_{\\pi }^{\\ast }=1$ by\nAssumption R1(iii). If $d_{\\pi }^{\\ast }=1,$ $\\eta _{n}(\\widehat{\\theta }%\n_{n})=0$ by definition in (\\ref{Defn eta}). If $d_{\\pi }^{\\ast }=0,$ $r_{\\pi\n}(\\theta )=0$ for $\\theta \\in \\Theta _{\\delta }$ by Assumption R1(iii). 
By the mean-value expansion, we have%
\begin{equation}
r(\psi _{n},\widehat{\pi }_{n})-r(\psi _{n},\pi _{n})=r_{\pi }(\psi _{n},\widetilde{\pi }_{n})(\widehat{\pi }_{n}-\pi _{n}),  \label{MVT_pi}
\end{equation}%
where $\widetilde{\pi }_{n}$ is between $\widehat{\pi }_{n}$ and $\pi _{n}.$ For $n$ large enough that $||\beta _{n}||<\delta ,$ $(\psi _{n},\widetilde{\pi }_{n})\in \Theta _{\delta }$ and $r_{\pi }(\psi _{n},\widetilde{\pi }_{n})=0,$ which implies $\eta _{n}(\widehat{\theta }_{n})=o_{p}(1).$

(iii) From (\ref{MVT_pi}), we have%
\begin{equation}
\eta _{n}(\widehat{\theta }_{n})=n^{1/2}A_{1}(\widehat{\theta }_{n})r_{\pi }(\psi _{n},\widetilde{\pi }_{n})(\widehat{\pi }_{n}-\pi _{n}).
\end{equation}%
Under Assumption R2$^{\ast }$(iii), $A_{1}(\widehat{\theta }_{n})r_{\pi }(\psi _{n},\widetilde{\pi }_{n})\rightarrow _{p}0$ because the column space of $r_{\pi }(\theta )$ is the same for all $\theta \in \Theta _{\delta },$ by definition the rows of $A_{1}(\theta )$ are in the null space of $r_{\pi }(\theta )^{\prime }$ $\forall \theta \in \Theta _{\delta },$ and $\widehat{\theta }_{n}\in \Theta _{\delta }$ holds with probability that goes to one by Lemma 3.1(a) of AC1 using Assumptions A and B3(i)-(ii) of AC1. This gives the desired result. $\square $\bigskip

\noindent \textbf{Proof of Lemma \ref{Lemma Sufficient Linear}. }Under Assumption R$_{\text{L}}$, $r_{\theta }(\theta )=R$ $\forall \theta \in \Theta $ and $R$ has full row rank. Assumption R1 is satisfied directly. Moreover, under Assumption R$_{\text{L}}$, $r_{\pi }(\theta )$ does not depend on $\theta .$ This implies Assumption R2$^{\ast }$(iii), which is a sufficient condition for Assumption R2 by Lemma \ref{Lemma Sufficient R2}. $\square $\bigskip

The proof of Theorem \ref{Theorem Wald Nonlinear} below uses the following lemma.\textbf{\ }Define $\widehat{\omega }_{n}=\widehat{\beta }_{n}/||\widehat{\beta }_{n}||.$

\begin{lemma}
\hspace{-0.08in}\textbf{. }\label{Lemma Omega_hat}Suppose Assumption \emph{V1 (}vector $\beta $\emph{) }holds. 
In addition, suppose Assumptions \emph{GMM1-GMM4 }hold \emph{(}or Assumptions \emph{A, B1-B3,} and \emph{C1-C8} of \emph{AC1 }hold\emph{).}

\noindent \emph{(a)} Under $\{\gamma _{n}\}\in \Gamma (\gamma _{0},0,b)$ with $||b||<\infty ,$ $\widehat{\omega }_{n}\rightarrow _{d}\omega ^{\ast }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b).$

\noindent \emph{(b)} Under $\{\gamma _{n}\}\in \Gamma (\gamma _{0},\infty ,\omega _{0}),$ $\widehat{\omega }_{n}\rightarrow _{p}\omega _{0}.$
\end{lemma}

\noindent \textbf{Proof of Lemma \ref{Lemma Omega_hat}.} To prove Lemma \ref{Lemma Omega_hat}(a), we have%
\begin{equation}
\widehat{\omega }_{n}=n^{1/2}\widehat{\beta }_{n}/||n^{1/2}\widehat{\beta }_{n}||\rightarrow _{d}\frac{\tau _{\beta }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b)}{||\tau _{\beta }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b)||}=\omega ^{\ast }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b)
\end{equation}%
by the continuous mapping theorem, because $n^{1/2}\widehat{\beta }_{n}\rightarrow _{d}\tau _{\beta }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b)$ by Theorem \ref{Thm dist'n of estimator b=finite}(a) and Comment 2 to Theorem \ref{Thm dist'n of estimator b=finite}(a) and $P(\tau _{\beta }(\pi ^{\ast };\gamma _{0},b)=0)=0$ by Assumption V1(iv) (vector $\beta $).

Next, we prove that Lemma \ref{Lemma Omega_hat}(b) holds when $\beta _{0}=0.$ By Lemma 3.4 in AC1, $||\beta _{n}||^{-1}(\widehat{\beta }_{n}-\beta _{n})=o_{p}(1).$ This implies that $\widehat{\beta }_{n}=\beta _{n}+||\beta _{n}||o_{p}(1)$ and $||\widehat{\beta }_{n}||/||\beta _{n}||=1+o_{p}(1).$ Hence,%
\begin{equation}
\widehat{\omega }_{n}=\frac{\widehat{\beta }_{n}}{||\widehat{\beta }_{n}||}=\frac{\widehat{\beta }_{n}-\beta _{n}}{||\beta _{n}||}\frac{||\beta _{n}||}{||\widehat{\beta }_{n}||}+\frac{\beta _{n}}{||\beta _{n}||}\frac{||\beta _{n}||}{||\widehat{\beta }_{n}||}\rightarrow _{p}\omega _{0}.
\end{equation}

Under $\{\gamma _{n}\}\in \Gamma (\gamma _{0},\infty ,\omega _{0})$ with $\beta _{0}\neq 0,$ $\widehat{\omega }_{n}\rightarrow _{p}\omega _{0}$ by the continuous mapping theorem given that $\widehat{\beta }_{n}\rightarrow _{p}\beta _{0}$ by Lemma 3.3(b)\label{Lemma consistency of pihat b=inf IN AC1} in AC1. $\square $\bigskip

\noindent \textbf{Proof of Theorem \ref{Theorem Wald Nonlinear}.} Under the null hypothesis $H_{0}:r(\theta _{n})=v_{n},$ the Wald statistic defined in (\ref{Defn Wald}) with $v=v_{n}$ becomes%
\begin{equation}
W_{n}=n(r(\widehat{\theta }_{n})-r(\theta _{n}))^{\prime }(r_{\theta }(\widehat{\theta }_{n})B^{-1}(\widehat{\beta }_{n})\widehat{\Sigma }_{n}B^{-1}(\widehat{\beta }_{n})r_{\theta }(\widehat{\theta }_{n})^{\prime })^{-1}(r(\widehat{\theta }_{n})-r(\theta _{n})).  
\label{Wald Nonlinear 1}
\end{equation}

Before proving the specific results in parts (a) and (b), we analyze the Wald statistic under $\{\gamma _{n}\}\in \Gamma (\gamma _{0},0,b).$ With the rotation represented by $A(\widehat{\theta }_{n}),$ the Wald statistic in (\ref{Wald Nonlinear 1}) can be written as%
\begin{equation}
W_{n}=n(r(\widehat{\theta }_{n})-r(\theta _{n}))^{\prime }A(\widehat{\theta }_{n})^{\prime }(r_{\theta }^{A}(\widehat{\theta }_{n})B^{-1}(\widehat{\beta }_{n})\widehat{\Sigma }_{n}B^{-1}(\widehat{\beta }_{n})r_{\theta }^{A}(\widehat{\theta }_{n})^{\prime })^{-1}A(\widehat{\theta }_{n})(r(\widehat{\theta }_{n})-r(\theta _{n})).  \label{Wald Nonlinear 2}
\end{equation}%
To deal with the normalizing matrix $B^{-1}(\widehat{\beta }_{n}),$ part of which diverges as $n\rightarrow \infty $ and $\beta _{n}\rightarrow 0,$ we define a $d_{r}\times d_{r}$ matrix 
\begin{equation}
B^{\ast }(\widehat{\beta }_{n})=\left[ 
\begin{array}{cc}
I_{(d_{r}-d_{\pi }^{\ast })} & 0 \\ 
0 & \iota (\widehat{\beta }_{n})I_{d_{\pi }^{\ast }}%
\end{array}%
\right] ,  \label{B_star}
\end{equation}%
where $\iota (\beta )=\beta $ when $\beta $ is a scalar and $\iota (\beta )=||\beta ||$ when $\beta $ is a vector. We write the Wald statistic in (\ref{Wald Nonlinear 2}) as%
\begin{eqnarray}
W_{n}\hspace{-0.08in} &=&\hspace{-0.08in}\varrho (\widehat{\theta }_{n})^{\prime }(\overline{r}_{\theta }(\widehat{\theta }_{n})\widehat{\Sigma }_{n}\overline{r}_{\theta }(\widehat{\theta }_{n})^{\prime })^{-1}\varrho (\widehat{\theta }_{n}),\text{ where}  \label{Wald Nonlinear 3} \\
\varrho (\widehat{\theta }_{n})\hspace{-0.08in} &=&\hspace{-0.08in}n^{1/2}B^{\ast }(\widehat{\beta }_{n})A(\widehat{\theta }_{n})(r(\widehat{\theta }_{n})-r(\theta _{n}))\text{ and }\overline{r}_{\theta }(\widehat{\theta }_{n})=B^{\ast }(\widehat{\beta }_{n})r_{\theta }^{A}(\widehat{\theta }_{n})B^{-1}(\widehat{\beta }_{n}).  
\\notag\n\\end{eqnarray}%\nNote that \n\\begin{equation}\n\\overline{r}_{\\theta }(\\widehat{\\theta }_{n})=\\left[ \n\\begin{array}{cc}\nr_{\\psi }^{\\ast }(\\widehat{\\theta }_{n}) & 0 \\\\ \n\\iota (\\widehat{\\beta }_{n})r_{\\psi }^{0}(\\widehat{\\theta }_{n}) & r_{\\pi\n}^{\\ast }(\\widehat{\\theta }_{n})%\n\\end{array}%\n\\right] =r_{\\theta }^{\\ast }(\\widehat{\\theta }_{n})+\\left[ \n\\begin{array}{cc}\n0 & 0 \\\\ \n\\iota (\\widehat{\\beta }_{n})r_{\\psi }^{0}(\\widehat{\\theta }_{n}) & 0%\n\\end{array}%\n\\right] =r_{\\theta }^{\\ast }(\\widehat{\\theta }_{n})+o_{p}(1),  \\label{r_bar}\n\\end{equation}%\nwhere the $o_{p}\\left( 1\\right) $ term holds because $\\iota (\\widehat{\\beta }%\n_{n})=o_{p}\\left( 1\\right) $ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},0,b)$ and $r_{\\psi }^{0}(\\widehat{\\theta }_{n})=O_{p}(1)$ under\nAssumption R1(i).\n\nThe next step is to derive the asymptotic distribution of $\\varrho (\\widehat{%\n\\theta }_{n})$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b).$ Note that%\n\\begin{eqnarray}\nr(\\widehat{\\theta }_{n})-r(\\theta _{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}r(%\n\\widehat{\\psi }_{n},\\widehat{\\pi }_{n})-r(\\psi _{n},\\widehat{\\pi }%\n_{n})+r(\\psi _{n},\\widehat{\\pi }_{n})-r(\\psi _{n},\\pi _{n})  \\notag \\\\\n&=&\\hspace{-0.08in}r_{\\psi }(\\widehat{\\theta }_{n})(\\widehat{\\psi }_{n}-\\psi\n_{n})+(r(\\psi _{n},\\widehat{\\pi }_{n})-r(\\psi _{n},\\pi\n_{n}))+o_{p}(n^{-1/2}),  \\label{decomp}\n\\end{eqnarray}%\nwhere the first equality is trivial and the second equality holds by a\nmean-value expansion, $\\widehat{\\psi }_{n}-\\psi _{n}=O_{p}(n^{-1/2}),$ and\nAssumption R1(i). From (\\ref{B_star}) and $A(\\theta )=[A_{1}^{\\prime\n}(\\theta ):A_{2}^{\\prime }(\\theta )]^{\\prime },$ we have \n\\begin{eqnarray}\n\\varrho (\\widehat{\\theta }_{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left( \n\\begin{array}{c}\nn^{1/2}A_{1}(\\widehat{\\theta }_{n})(r(\\widehat{\\theta }_{n})-r(\\theta _{n}))\n\\\\ \nn^{1/2}\\iota (\\widehat{\\beta }_{n})A_{2}(\\widehat{\\theta }_{n})(r(\\widehat{%\n\\theta }_{n})-r(\\theta _{n}))%\n\\end{array}%\n\\right) =\\varrho _{1}(\\widehat{\\theta }_{n})+\\varrho _{2}(\\widehat{\\theta }%\n_{n})+o_{p}\\left( 1\\right) ,\\text{ where}  \\notag \\\\\n\\varrho _{1}(\\widehat{\\theta }_{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\left( \n\\begin{array}{c}\nn^{1/2}A_{1}(\\widehat{\\theta }_{n})r_{\\psi }(\\widehat{\\theta }_{n})(\\widehat{%\n\\psi }_{n}-\\psi _{n}) \\\\ \nn^{1/2}\\iota (\\widehat{\\beta }_{n})A_{2}(\\widehat{\\theta }_{n})(r(\\psi _{n},%\n\\widehat{\\pi }_{n})-r(\\psi _{n},\\pi _{n}))%\n\\end{array}%\n\\right) ,  \\notag \\\\\n\\varrho _{2}(\\widehat{\\theta }_{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\left( \n\\begin{array}{c}\n\\eta _{n}(\\widehat{\\theta }_{n}) \\\\ \nn^{1/2}\\iota (\\widehat{\\beta }_{n})A_{2}(\\widehat{\\theta }_{n})r_{\\psi }(%\n\\widehat{\\theta }_{n})(\\widehat{\\psi }_{n}-\\psi _{n})%\n\\end{array}%\n\\right) =\\left( \n\\begin{array}{c}\n\\eta _{n}(\\widehat{\\theta }_{n}) \\\\ \no_{p}\\left( 1\\right)%\n\\end{array}%\n\\right) ,  \\label{Varrho}\n\\end{eqnarray}%\nthe second equality in $\\varrho (\\widehat{\\theta }_{n})$ uses (\\ref{decomp}%\n), and the $o_{p}\\left( 1\\right) $ term associated with $\\varrho _{2}(%\n\\widehat{\\theta }_{n})$ holds by $n^{1/2}(\\widehat{\\psi }_{n}-\\psi\n_{n})=O_{p}(1)$ and $\\iota (\\widehat{\\beta }_{n})=o_{p}(1)$ under $\\{\\gamma\n_{n}\\}\\in \\Gamma (\\gamma _{0},0,b).$ Under Assumption R2, $\\eta 
_{n}(%\n\\widehat{\\theta }_{n})=o_{p}\\left( 1\\right) ,$ and, hence, $\\varrho _{2}(%\n\\widehat{\\theta }_{n})=o_{p}\\left( 1\\right) .$\n\nIn part (a), in which case $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ and \n$||b||<\\infty ,$ we have%\n\\begin{eqnarray}\n\\varrho _{1}(\\widehat{\\theta }_{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\overline{B}_{n}(\\widehat{\\pi }_{n})\\tau _{n}^{A}(\\widehat{\\pi }_{n}),\\text{\nwhere}  \\notag \\\\\n\\tau _{n}^{A}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left( \n\\begin{array}{c}\nr_{\\psi }^{\\ast }(\\widehat{\\psi }_{n}(\\pi ),\\pi )n^{1/2}(\\widehat{\\psi }%\n_{n}(\\pi )-\\psi _{n}) \\\\ \nA_{2}(\\widehat{\\psi }_{n}(\\pi ),\\pi )(r(\\psi _{n},\\pi )-r(\\psi _{n},\\pi\n_{n}))%\n\\end{array}%\n\\right) \\text{ and }  \\notag \\\\\n\\overline{B}_{n}(\\pi )\\hspace{-0.08in} &=&\\hspace{-0.08in}\\left[ \n\\begin{array}{cc}\nI_{(d_{r}-d_{\\pi }^{\\ast })} & 0 \\\\ \n0 & \\iota (n^{1/2}\\widehat{\\beta }_{n}(\\pi ))I_{d_{\\pi }^{\\ast }}%\n\\end{array}%\n\\right] .  \\label{Varrho 1}\n\\end{eqnarray}%\nUsing Assumption R1(i), Lemma 3.1(a) of AC1, Lemma 9.2(b) in Appendix B of\nAC1-SM, and $\\tau _{n}(\\pi )=n^{1/2}(\\widehat{\\psi }_{n}(\\pi )-\\psi\n_{n})\\Rightarrow \\tau (\\pi ;\\gamma _{0},b)$ in (9.21) of AC1-SM, we have%\n\\begin{equation}\n\\left( \n\\begin{array}{c}\n\\tau _{n}^{A}(\\cdot ) \\\\ \n\\overline{B}_{n}(\\cdot )%\n\\end{array}%\n\\right) \\Rightarrow \\left( \n\\begin{array}{c}\n\\tau ^{A}(\\cdot ;\\gamma _{0},b) \\\\ \n\\overline{B}(\\cdot ;\\gamma _{0},b)%\n\\end{array}%\n\\right)  \\label{tao_A and B_bar}\n\\end{equation}%\nunder $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$ with $||b||<\\infty .$\nFrom (\\ref{Wald Nonlinear 3}), (\\ref{r_bar}), (\\ref{Varrho}), and (\\ref%\n{Varrho 1}), in the case of a scalar $\\beta ,$ we have%\n\\begin{eqnarray}\nW_{n}\\hspace{-0.08in} &=&\\hspace{-0.08in}\\tau _{n}^{A}(\\widehat{\\pi }%\n_{n})^{\\prime }\\overline{B}_{n}(\\widehat{\\pi }_{n})(r_{\\theta }^{\\ast }(%\n\\widehat{\\theta }_{n})\\widehat{\\Sigma }_{n}r_{\\theta }^{\\ast }(\\widehat{%\n\\theta }_{n})^{\\prime })^{-1}\\overline{B}_{n}(\\widehat{\\pi }_{n})\\tau\n_{n}^{A}(\\widehat{\\pi }_{n})+o_{p}\\left( 1\\right)  \\notag \\\\\n&=&\\hspace{-0.08in}\\lambda _{n}(\\widehat{\\pi }_{n})+o_{p}(1)\\rightarrow\n_{d}\\lambda (\\pi ^{\\ast }(\\gamma _{0},b);\\gamma _{0},b),  \\label{Wald last}\n\\end{eqnarray}%\nwhere $\\lambda _{n}(\\pi )$ is defined implicitly, $\\widehat{\\Sigma }_{n}=%\n\\widehat{\\Sigma }_{n}(\\widehat{\\theta }_{n})=\\widehat{J}_{n}(\\widehat{\\theta \n}_{n})^{-1}\\widehat{V}_{n}(\\widehat{\\theta }_{n})\\widehat{J}_{n}(\\widehat{%\n\\theta }_{n})^{-1}$ by Assumption V1 (scalar $\\beta $), and the convergence\nfollows from the joint convergence $(\\lambda _{n}(\\cdot ),\\widehat{\\pi }%\n_{n})\\Rightarrow (\\lambda (\\cdot ;\\gamma _{0},b),\\pi ^{\\ast }(\\gamma\n_{0},b)) $ and the continuous mapping theorem. 
The latter joint convergence holds by (\ref{tao_A and B_bar}), Assumptions V1 (scalar $\beta $) and R1, Theorem \ref{Thm dist'n of estimator b=finite}(a), the uniform consistency of $\widehat{\psi }_{n}(\pi )$ over $\pi \in \Pi ,$ and the fact that $\tau _{n}^{A}(\cdot ),$ $\overline{B}_{n}(\cdot ),$ and $\widehat{\pi }_{n}$ are continuous functions of the empirical process $G_{n}(\cdot )$ with probability one.

In the case of a vector $\beta ,$ (\ref{Wald last}) holds with $\widehat{\Sigma }_{n}(\widehat{\theta }_{n})$ replaced by $\widehat{\Sigma }_{n}(\widehat{\theta }_{n}^{+})=\widehat{J}_{n}^{-1}(\widehat{\theta }_{n}^{+})\allowbreak \widehat{V}_{n}(\widehat{\theta }_{n}^{+})\widehat{J}_{n}^{-1}(\widehat{\theta }_{n}^{+})$ using Assumption V1 (vector $\beta )$ and with $\lambda _{n}(\widehat{\pi }_{n})$ replaced by $\lambda _{n}(\widehat{\pi }_{n},\widehat{\omega }_{n}),$ which is defined implicitly. In this case, the convergence in (\ref{Wald last}) follows from the joint convergence $(\lambda _{n}(\cdot ),\widehat{\pi }_{n},\widehat{\omega }_{n})\Rightarrow (\lambda (\cdot ;\gamma _{0},b),$ $\pi ^{\ast }(\gamma _{0},b),$ $\omega ^{\ast }(\pi ^{\ast }(\gamma _{0},b);\gamma _{0},b)),$ which holds by the same argument as above plus Lemma \ref{Lemma Omega_hat}(a) and Assumption V1 (vector $\beta $). This completes the proof of part (a).

The proof of part (b) is the same for the scalar and vector $\beta $ cases because it relies on Assumption V2, which applies in both cases. To prove part (b), we first analyze the case where $\{\gamma _{n}\}\in \Gamma (\gamma _{0},\infty ,\omega _{0})$ and $\beta _{0}=0.$ In this case, $\{\gamma _{n}\}\in \Gamma (\gamma _{0},0,b)$ with $b\notin R^{d_{\beta }},$ so (\ref{Wald Nonlinear 2})-(\ref{Varrho}) apply. 
As in (\\ref{Variance Matrix Defns}%\n), $\\Sigma (\\gamma _{0})=J^{-1}(\\gamma _{0})V(\\gamma _{0})J^{-1}(\\gamma\n_{0}).$ We have%\n\\begin{eqnarray}\n\\varrho _{1}(\\widehat{\\theta }_{n})\\hspace{-0.08in} &=&\\hspace{-0.08in}%\n\\left( \n\\begin{array}{c}\nn^{1/2}r_{\\psi }^{\\ast }(\\widehat{\\theta }_{n})(\\widehat{\\psi }_{n}-\\psi\n_{n}) \\\\ \nn^{1/2}\\iota (\\widehat{\\beta }_{n})A_{2}(\\widehat{\\theta }_{n})(r_{\\pi }(%\n\\widehat{\\theta }_{n})+o_{p}\\left( 1\\right) )(\\widehat{\\pi }_{n}-\\pi _{n})%\n\\end{array}%\n\\right)  \\notag \\\\\n&=&\\hspace{-0.08in}\\left( \n\\begin{array}{c}\nn^{1/2}r_{\\psi }^{\\ast }(\\widehat{\\theta }_{n})(\\widehat{\\psi }_{n}-\\psi\n_{n}) \\\\ \nn^{1/2}\\iota (\\beta _{n})A_{2}(\\widehat{\\theta }_{n})r_{\\pi }(\\widehat{%\n\\theta }_{n})(\\widehat{\\pi }_{n}-\\pi _{n})+o_{p}\\left( 1\\right)%\n\\end{array}%\n\\right)  \\notag \\\\\n&=&\\hspace{-0.08in}r_{\\theta }^{\\ast }(\\widehat{\\theta }_{n})n^{1/2}B(\\beta\n_{n})(\\widehat{\\theta }_{n}-\\theta _{n})+o_{p}(1)  \\notag \\\\\n&\\rightarrow &\\hspace{-0.15in}_{d}\\text{ }N(0,r_{\\theta }^{\\ast }(\\theta\n_{0})\\Sigma (\\gamma _{0})r_{\\theta }^{\\ast }(\\theta _{0})),  \\label{Varrho 2}\n\\end{eqnarray}%\nwhere the first equality holds by a mean-value expansion, the fact that $%\n\\widehat{\\pi }_{n}$ is consistent under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},\\infty ,\\omega _{0})$, and the continuity of $r_{\\pi }(\\theta )$ which\nholds by Assumption R1, the second equality holds by $n^{1/2}(\\widehat{\\beta \n}_{n}-\\beta _{n})=O_{p}(1)$ and $||\\beta _{n}||n^{1/2}(\\widehat{\\pi }%\n_{n}-\\pi _{n})=O_{p}(1)$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma\n_{0},\\infty ,\\omega _{0}),$ the third equality holds by the definitions of $%\nB(\\beta )$ and $r_{\\theta }^{\\ast }(\\theta ),$ and the convergence in\ndistribution holds by Theorem \\ref{Thm dist'n of estimator b=inf}(a). The\nresult of part (b) follows from (\\ref{Wald Nonlinear 3}), (\\ref{r_bar}), (%\n\\ref{Varrho}), (\\ref{Varrho 2}), and Assumptions D2 and D3(ii) of AC1 and\nAssumption V2.\n\nUnder $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},\\infty ,\\omega _{0})$ and $%\n\\beta _{0}\\neq 0,$ \n\\begin{equation}\nn^{1/2}(r(\\widehat{\\theta }_{n})-r(\\theta _{n}))\\rightarrow\n_{d}N(0,r_{\\theta }(\\theta _{0})B^{-1}(\\beta _{0})\\Sigma (\\gamma\n_{0})B^{-1}(\\beta _{0})r_{\\theta }(\\theta _{0})^{\\prime })  \\label{wald b 3}\n\\end{equation}%\nby Theorem \\ref{Thm dist'n of estimator b=inf}(a) and the delta method. By\nAssumptions R1(i) and V2, \n\\begin{equation}\nr_{\\theta }(\\widehat{\\theta }_{n})B^{-1}(\\widehat{\\beta }_{n})\\widehat{%\n\\Sigma }_{n}B^{-1}(\\widehat{\\beta }_{n})r_{\\theta }(\\widehat{\\theta }%\n_{n})^{\\prime }\\rightarrow _{p}r_{\\theta }(\\theta _{0})B^{-1}(\\beta\n_{0})\\Sigma \\left( \\gamma _{0}\\right) B^{-1}(\\beta _{0})r_{\\theta }(\\theta\n_{0})^{\\prime }.  \\label{wald b 4}\n\\end{equation}%\nThe desired result follows from (\\ref{Wald Nonlinear 1}), (\\ref{wald b 3}),\nand (\\ref{wald b 4}). $\\square $\\bigskip\n\n\\noindent \\textbf{Proof of Corollary \\ref{Cor Wald Linear}. }By Lemma \\ref%\n{Lemma Sufficient Linear}, Assumption R2 is satisfied. 
Based on Theorem \\ref%\n{Theorem Wald Nonlinear}, it suffices to show that the stochastic process $%\n\\{\\lambda (\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ can be written as $\\{\\lambda\n_{L}(\\pi ;\\gamma _{0},b):\\pi \\in \\Pi \\}$ under Assumption R$_{\\text{L}}$.\nUnder Assumption R$_{\\text{L}}$, $r_{\\theta }(\\theta ),$ $A(\\theta ),$ and $%\nr_{\\theta }^{\\ast }(\\theta )$ do not depend on $\\theta ,$ and, hence,%\n\\begin{equation}\n\\tau ^{A}(\\pi ;\\gamma _{0},b)=\\left( \n\\begin{array}{c}\nr_{\\psi }^{\\ast }\\tau (\\pi ;\\gamma _{0},b) \\\\ \nA_{2}r_{\\pi }\\cdot (\\pi -\\pi _{0})%\n\\end{array}%\n\\right) =\\left( \n\\begin{array}{c}\nr_{\\psi }^{\\ast }\\tau (\\pi ;\\gamma _{0},b) \\\\ \nr_{\\pi }^{\\ast }\\cdot (\\pi -\\pi _{0})%\n\\end{array}%\n\\right) =R^{\\ast }\\overline{\\tau }(\\pi ;\\gamma _{0},b).  \\label{Wald linear}\n\\end{equation}%\nThe desired result follows from (\\ref{Wald linear}) and $r_{\\theta }^{\\ast\n}(\\pi )=R^{\\ast }$ $\\forall \\pi \\in \\Pi .$ $\\square $\\bigskip\n\n\\noindent \\textbf{Proof of Theorem \\ref{Theorem Divergence}. }From the proof\nof Theorem \\ref{Theorem Wald Nonlinear}, we know that $\\varrho _{1}(\\widehat{%\n\\theta }_{n})=O_{p}(1)$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b).$\nTherefore, when $||\\eta _{n}(\\widehat{\\theta }_{n})||\\rightarrow _{p}\\infty\n, $ it follows from (\\ref{Varrho}) that $||\\varrho (\\widehat{\\theta }%\n_{n})||\\rightarrow _{p}\\infty .$ This result, together with (\\ref{Wald\nNonlinear 3}), (\\ref{r_bar}), and Assumptions R1 and V1, completes the\nproof. $\\square $\n\n\\subsection{\\hspace{-0.23in}\\textbf{.}\\hspace{0.18in}Proofs of Asymptotic\nSize Results}\n\n\\noindent \\textbf{Proof of Theorem \\ref{Theorem Wald Asy Sz}. }The proof is\nthe same as the proof of Theorem 4.4 of AC1, which is given in Appendix B of\nAC1-SM, but with $|T_{n}|,$ $|T(h)|,$ and $z_{1-\\alpha /2}$ replaced by $%\nW_{n},$ $W(h),$ and $\\chi _{d_{r},1-\\alpha }^{2},$ respectively; with\nTheorem 4.1 of AC1 replaced by Theorem \\ref{Theorem Wald Nonlinear}; and\nwith Assumption V3 of AC1 replaced by Assumption V4. $\\square $\\bigskip\n\n\\noindent \\textbf{Proof of Corollary \\ref{Corollary Wald 2}. }By Theorem \\ref%\n{Theorem Divergence}, $P_{\\gamma _{n}}(W_{n}\\leq \\chi _{d_{r},1-\\alpha\n}^{2})\\rightarrow _{p}0$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0},0,b)$\nfor which $||\\eta _{n}(\\widehat{\\theta }_{n})||\\rightarrow _{p}\\infty .$ As\na result, the nominal $1-\\alpha $ Wald CS has $AsySz=0$ by the definition of\nasymptotic size. $\\square $\\bigskip\n\n\\noindent \\textbf{Proof of Theorem \\ref{Theorem Robust Size Wald CS}. }The\nproof of Theorem \\ref{Theorem Robust Size Wald CS} is the same as the proof\nof Theorem 5.1 of AC1, which is given in Appendix B of AC1-SM, but with $%\n|T_{n}|,$ $|T(h)|,$ and $z_{1-\\alpha /2}$ replaced by $W_{n},$ $W(h),$ and $%\n\\chi _{d_{r},1-\\alpha }^{2},$ respectively; with $c_{|t|,1-\\alpha }^{LF},$ $%\nc_{|t|,1-\\alpha }(h),...$ replaced by $c_{W,1-\\alpha }^{LF},$ $c_{W,1-\\alpha\n}(h),...$ throughout; with Theorem 4.1\\label{Theorem T stat IN AC1 copy(1)}\nof AC1 replaced by Theorem \\ref{Theorem Wald Nonlinear}; and with Assumption\nV3 of AC1 replaced by Assumption V4. 
$\\square $\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Supplemental Appendix D:\nUniform LLN\\newline\nand CLT}\n\n\\setcounter{equation}{0}\\hspace{0.25in}In this Supplemental Appendix, we\nstate a uniform convergence result, a uniform LLN, and a CLT that are used\nin the verification of Assumptions GMM1-GMM5 in the two examples considered\nin the paper. Specifically, Lemma \\ref{Lemma uniform convergence} is a\nuniform convergence result for non-stochastic functions, Lemma \\ref{Lemma\nUniform LLN, drift} is a uniform LLN, and Lemma \\ref{Lemma CLT, array} is a\nCLT. The latter two results are for strong mixing triangular arrays. These\nare standard sorts of results. The proofs of these Lemmas are given in\nAppendix A of Andrews and Cheng (2011).\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma uniform convergence}Let $%\n\\{q_{n}(\\theta ):n\\geq 1\\}$ be non-stochastic functions on $\\Theta .$\nSuppose \\emph{(i)} $q_{n}(\\theta )\\rightarrow 0$ $\\forall \\theta \\in \\Theta\n, $ \\emph{(ii)} $||q_{n}(\\theta _{1})-q_{n}(\\theta _{2})||\\leq C\\delta $ $%\n\\forall \\theta _{1},\\theta _{2}\\in \\Theta $ with $||\\theta _{1}-\\theta\n_{2}||\\leq \\delta ,$ $\\forall n\\geq 1,$ for some $C<\\infty $ and all $\\delta\n>0,$ and \\emph{(iii)} $\\Theta $ is compact. Then, $\\sup_{\\theta \\in \\Theta\n}||q_{n}(\\theta )||\\rightarrow 0.$\n\\end{lemma}\n\n\\noindent \\textbf{Assumption S1. }Under any $\\gamma _{0}\\in \\Gamma ,$ $%\n\\{W_{i}:i\\geq 1\\}$ is a strictly stationary and strong mixing sequence with\nmixing coefficients $\\alpha _{m}\\leq Cm^{-A}$ for some $A>d_{\\theta\n}q/(q-d_{\\theta })$ and some $q>d_{\\theta }\\geq 2,$ or $\\{W_{i}:i\\geq 1\\}$\nis an i.i.d. sequence and the constant $q$ equals $2+\\delta $ for some $%\n\\delta >0.$\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma Uniform LLN, drift}Suppose \\emph{(i)}\nAssumption \\emph{S1} holds, $\\emph{(ii)}$ for some function $M_{1}(w):%\n\\mathcal{W}\\rightarrow R^{+}$ and all $\\delta >0,$ $||s(w,\\theta\n_{1})-s(w,\\theta _{2})||\\leq M_{1}(w)\\delta ,$ $\\forall \\theta _{1},\\theta\n_{2}\\in \\Theta $ with $||\\theta _{1}-\\theta _{2}||\\leq \\delta ,$ $\\forall\nw\\in \\mathcal{W},$ \\emph{(iii)} $E_{\\gamma }\\sup_{\\theta \\in \\Theta\n}||s(W_{i},\\theta )||^{1+\\varepsilon }+E_{\\gamma }M_{1}(W_{i})\\leq C$ $%\n\\forall \\gamma \\in \\Gamma $ for some $C<\\infty $ and $\\varepsilon >0,$ and \n\\emph{(iv)} $\\Theta $ is compact. 
Then, $\\sup_{\\theta \\in \\Theta\n}||n^{-1}\\sum_{i=1}^{n}s(W_{i},\\theta )-E_{\\gamma _{0}}s(W_{i},\\theta\n)||\\rightarrow _{p}0$ under $\\{\\gamma _{n}\\}\\in \\Gamma (\\gamma _{0})$ and $%\nE_{\\gamma _{0}}s(W_{i},\\theta )$ is uniformly continuous on $\\Theta $ $%\n\\forall \\gamma _{0}\\in \\Gamma .$\n\\end{lemma}\n\n\\noindent \\textbf{Comment.} Note that the centering term in Lemma \\ref{Lemma\nUniform LLN, drift} is $E_{\\gamma _{0}}s(W_{i},\\theta ),$ rather than $%\nE_{\\gamma _{n}}s(W_{i},\\theta ).$\n\n\\begin{lemma}\n\\hspace{-0.08in}\\textbf{.} \\label{Lemma CLT, array}Suppose \\emph{(i)}\nAssumption \\emph{S1} holds, \\emph{(ii)} $s(w)\\in R$ and $E_{\\gamma\n}|s(W_{i})|^{q}\\leq C$ $\\forall \\gamma \\in \\Gamma $ for some $C<\\infty $ and \n$q$ as in Assumption \\emph{S1}$.$ Then, $n^{-1/2}\\sum_{i=1}^{n}(s(W_{i})-E_{%\n\\gamma _{n}}s(W_{i}))\\rightarrow _{d}N(0,V_{s}(\\gamma _{0}))$ under $\\{\n\\gamma _{n}\\} \\in \\Gamma (\\gamma _{0})$ $\\forall \\gamma _{0}\\in \\Gamma ,$\nwhere $V_{s}(\\gamma _{0})=\\sum_{m=-\\infty }^{\\infty }\\allowbreak Cov_{\\gamma\n_{0}}(s(W_{i}),s(W_{i+m})).$\n\\end{lemma}\n\n\\section{ \\hspace{-0.34in}\\textbf{.}\\hspace{0.2in}Supplemental Appendix E:\nNumerical Results}\n\n\\hspace{0.25in}Here we report some additional numerical results for the\nnonlinear regression model with endogeneity.\n\nFigures S-1 and S-2 report asymptotic and finite-sample ($n=500$) densities\nof the estimators for $\\beta $ and $\\pi $ when $\\pi _{0}=3.0.$ Figures S-3\nto S-6 report asymptotic and finite-sample ($n=500$) densities of the $t$\nand QLR statistics for $\\beta $ and $\\pi $ when $\\pi _{0}=1.5.$ Figures S-7\nand S-8 report CP's of nominal $0.95$ standard and robust $|t|$ and QLR CI's\nfor $\\beta $ and $\\pi $ when $\\pi _{0}=3.0.$\n\n\\FRAME{ftbpFU}{3.557in}{1.7038in}{0pt}{\\Qcb{Figure S-1. Asymptotic and\nFinite-Sample ($n=500$) Densities of the Estimator of $\\protect\\beta $ in\nthe Nonlinear Regression Model with Endogeneity when $\\protect\\pi _{0}=3.0.$}%\n}{}{gmm_beta_dens_30.eps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file \"F\";width\n3.557in;height 1.7038in;depth 0pt;original-width 6.9851in;original-height\n2.9447in;cropleft \"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom\n\"0.0512\";filename 'graphics/GMM_beta_dens_30.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.5561in}{1.7038in}{0pt}{\\Qcb{Figure S-2. Asymptotic and\nFinite-Sample ($n=500$) Densities of the Estimator of $\\protect\\pi $ in the\nNonlinear Regression Model with Endogeneity when $\\protect\\pi _{0}=3.0.$}}{}{%\ngmm_pi_dens_30.eps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file \"F\";width\n3.5561in;height 1.7038in;depth 0pt;original-width 6.9851in;original-height\n2.9447in;cropleft \"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom\n\"0.0512\";filename 'graphics/GMM_pi_dens_30.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.557in}{1.7038in}{0pt}{\\Qcb{Figure S-3. 
Asymptotic and\nFinite-Sample ($n=500$) Densities of the $t$ Statistic for $\\protect\\beta $\nin the Nonlinear Regression Model with Endogeneity when $\\protect\\pi %\n_{0}=1.5 $ and the Standard Normal Density (Black Line).}}{}{%\ngmm_beta_t_15.eps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file \"F\";width\n3.557in;height 1.7038in;depth 0pt;original-width 6.9851in;original-height\n2.9447in;cropleft \"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom\n\"0.0512\";filename 'graphics/GMM_beta_t_15.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.557in}{1.7038in}{0pt}{\\Qcb{Figure S-4. Asymptotic and\nFinite-Sample (n=500) Densities of the QLR Statistic for $\\protect\\beta $ in\nthe Nonlinear Regression Model with Endogeneity when $\\protect\\pi _{0}=1.5$\nand the $\\protect\\chi _{1}^{2}$ Density (Black Line).}}{}{gmm_beta_qlr_15.eps%\n}{\\special{language \"Scientific Word\";type \"GRAPHIC\";maintain-aspect-ratio\nTRUE;display \"USEDEF\";valid_file \"F\";width 3.557in;height 1.7038in;depth\n0pt;original-width 6.9851in;original-height 2.9447in;cropleft\n\"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom \"0.0512\";filename\n'graphics/GMM_beta_qlr_15.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.557in}{1.7038in}{0pt}{\\Qcb{Figure S-5. Asymptotic and\nFinite-Sample ($n=500$) Densities of the $t$ Statistic for $\\protect\\pi $ in\nthe Nonlinear Regression Model with Endogeneity when $\\protect\\pi _{0}=1.5$\nand the Standard Normal Density (Black Line).}}{}{gmm_pi_t_15.eps}{\\special%\n{language \"Scientific Word\";type \"GRAPHIC\";maintain-aspect-ratio\nTRUE;display \"USEDEF\";valid_file \"F\";width 3.557in;height 1.7038in;depth\n0pt;original-width 6.9851in;original-height 2.9447in;cropleft\n\"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom \"0.0512\";filename\n'graphics/GMM_pi_t_15.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.557in}{1.7038in}{0pt}{\\Qcb{Figure S-6. Asymptotic and\nFinite-Sample (n=500) Densities of the QLR Statistic for $\\protect\\pi $ in\nthe Nonlinear Regression Model with Endogeneity when $\\protect\\pi _{0}=1.5$\nand the $\\protect\\chi _{1}^{2}$ Density (Black Line).}}{}{gmm_pi_qlr_15.eps}{%\n\\special{language \"Scientific Word\";type \"GRAPHIC\";maintain-aspect-ratio\nTRUE;display \"USEDEF\";valid_file \"F\";width 3.557in;height 1.7038in;depth\n0pt;original-width 6.9851in;original-height 2.9447in;cropleft\n\"0.0861\";croptop \"1\";cropright \"0.9281\";cropbottom \"0.0512\";filename\n'graphics/GMM_pi_qlr_15.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.4832in}{3.4177in}{0pt}{\\Qcb{Figure S-7. Coverage\nProbabilities of Standard $|t|$ and QLR CI's for $\\protect\\beta $ and $%\n\\protect\\pi $ in the Nonlinear Regression Model with Endogeneity when $%\n\\protect\\pi _{0}=3.0.$}}{}{gmm_std_cp_30.eps}{\\special{language \"Scientific\nWord\";type \"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file\n\"F\";width 3.4832in;height 3.4177in;depth 0pt;original-width\n7.4858in;original-height 6.7464in;cropleft \"0.0876\";croptop\n\"0.9567\";cropright \"0.9268\";cropbottom \"0.0432\";filename\n'graphics/GMM_Std_CP_30.eps';file-properties \"XNPEU\";}}\n\n\\FRAME{ftbpFU}{3.4832in}{3.4177in}{0pt}{\\Qcb{Figure S-8. 
Coverage\nProbabilities of Robust $|t|$ and QLR CI's for $\\protect\\beta $ and $\\protect%\n\\pi $ in the Nonlinear Regression Model with Endogeneity when $\\protect\\pi %\n_{0}=3.0,$ $\\protect\\kappa =1.5,D=1,$ and $s(x)=\\exp (-2x).$}}{}{%\ngmm_rob_cp_30.eps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file \"F\";width\n3.4832in;height 3.4177in;depth 0pt;original-width 7.4858in;original-height\n6.7464in;cropleft \"0.0876\";croptop \"0.9567\";cropright \"0.9268\";cropbottom\n\"0.0432\";filename 'graphics/GMM_Rob_CP_30.eps';file-properties \"XNPEU\";}}\n\n\\begin{center}\n{\\LARGE R}{\\Large EFERENCE}\n\\end{center}\n\n\\begin{description}\n\\item Andrews, D.W.K. \\& X. Cheng (2011) Maximum likelihood estimation and\nuniform inference with sporadic identification failure.\\ Cowles Foundation\nDiscussion Paper No. 1824, Yale University.\n\\end{description}\n\n\\end{document}\n", "meta": {"hexsha": "5e44c7853b5a9a82f78273a582c5443bac7ade67", "size": 265623, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reference/AC14.tex", "max_stars_repo_name": "sangrey/RiskPriceInference", "max_stars_repo_head_hexsha": "9ec8b235e3d1f24281890a5f689840affd3f495e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reference/AC14.tex", "max_issues_repo_name": "sangrey/RiskPriceInference", "max_issues_repo_head_hexsha": "9ec8b235e3d1f24281890a5f689840affd3f495e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reference/AC14.tex", "max_forks_repo_name": "sangrey/RiskPriceInference", "max_forks_repo_head_hexsha": "9ec8b235e3d1f24281890a5f689840affd3f495e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.8497811381, "max_line_length": 252, "alphanum_fraction": 0.6559145857, "num_tokens": 100163, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7401743505760728, "lm_q1q2_score": 0.5587687101448913}}
{"text": "% -----------------------------*- LaTeX -*------------------------------\n\\documentclass[12pt]{report}\n\\usepackage{scribe_hgen486}\n\\usepackage{bbm}\n\n\\begin{document}\n\n\\scribe{Justin Hsu}\t\t% required\n\\lecturenumber{4}\t\t\t% required, must be a number\n\\lecturedate{January 14}\t\t% required, omit year\n\\lecturer{Matthew Stephens} \n\n\\maketitle\n\n% please leave this comment \n\\framebox[.95\\textwidth]{\\parbox{.93\\textwidth}{ {{\\bf Note:}} These\nlecture notes are still rough, and have only have been mildly\nproofread.  }}\n\\vspace*{.1in}\n\n\n% feel free to delete content below this line \n% ----------------------------------------------------------------------\n\\section{Likelihood Analysis}\nConsider the set of data of 100 tusks, 40 of which have the \"1\" allele, 60 with the \"0\" allele\nThen the data has the likelihood function \\begin{equation} L(q) = P(Data|M_q)\\end{equation}. We can write this as \\begin{equation} L(q) = q^{40}(1-q)^{60}\\end{equation}\n\nNow consider the log of the likelihood function: \\begin{equation} \\ell(q) = 40log(q) + 60 log(1-q)\\end{equation}\n\nWe can estimate q by finding the value of q that maximizes $L(q)$. This is known as the Maximum Likelihood estimator (mle), which we denote as $\\hat{q}$. A useful feature is that the value that maximizes the likelihood function also maximizes the log likelihood function.\n\\newcommand{\\argmax}{\\arg\\!\\max}\n\\begin{equation}\n\\begin{aligned} \\hat{q} = \\argmax_q L(q) \\\\\n= \\argmax_q \\ell(q)\n\\end{aligned} \n \\end{equation}\n\nThis is useful because it is sometimes easier to find the maximum of $\\ell(q)$.\n\nReturning to the elephant tusk example, we find the maximum of the likelihood function by taking the derivative of the log likelihood.\n\\begin{equation}\n\\begin{aligned}\n\\ell'(q) = \\frac{40}{q} - \\frac{60}{1-q}\\\\\n0 = \\frac{40}{q} - \\frac{60}{1-q} \\\\\n\\hat{q} = \\frac{40}{100}\n\\end{aligned}\n\\end{equation}\n\nWe can extend this generally so that given two populations $n_1$ and $n_0$, we have a likelihood function \\begin{equation}\nL(q) = q^{n_1}(1-q)^{n_0}\n\\end{equation}\nand a log likelihood function \\begin{equation}\n\\ell(q) = n_1log(q) - n_0log(1-q)\n\\end{equation}\nThen the maximum likelihood estimate will have the form \\begin{equation}\n\\hat{q} = \\frac{n_1}{n_1+n_0}\n\\end{equation}\n\nThis is the maximum likelihood of the Binomial Distribution.\n\n\\section{Mixture Models}\nWe now move on to mixture models, which our models that consist of a mixture of two or more distributions. As an example, consider the heights of all humans of these worlds. What would be the distribution of these heights. We could assume that they are normally distributed, but what if the male heights come from a different distribution than the female heights?\n\nSuppose we have \\begin{equation}\n\\begin{aligned}\nmale height\\sim N(\\mu_m,\\sigma^2_m)\\\\\nfemale height \\sim N(\\mu_f,\\sigma^2_f)\n\\end{aligned}\n\\end{equation} and suppose the population is 50\\% male and 50\\% female.\n\nLet $X$ be the height of a randomly chosen person. 
\nReturning to our elephant tusk example, suppose we have data $X = (X_1,X_2,...,X_n)$ on $n$ tusks, and that we know the allele frequencies. \n\nLet the proportion of elephants that are Savannah be $\\Pi_S$. \n\nLet $Z_i \\in \\lbrace S,F\\rbrace$ denote whether tusk $i$ came from a Savannah or a Forest elephant. We call $\\lbrace S,F\\rbrace$ the \"component labels\".\n\nThen we have the mixture model \\begin{equation}\n\\begin{aligned}\nP(X_i = x_i|\\Pi_S) = Pr(Z_i=S)Pr(X_i = x_i|Z_i=S) + Pr(Z_i = F)Pr(X_i=x_i|Z_i = F) \\\\\n = \\Pi_S Pr(X_i = x_i|Z_i=S) + (1-\\Pi_S)Pr(X_i=x_i|Z_i = F)\n\\end{aligned}\n\\end{equation}\n\nMore generally \\begin{equation}\nPr(X_i = x_i) = \\sum_{k}\\Pi_k Pr(X_i = x_i|Z_i = k)\n\\end{equation}\nwhere $\\Pi_1,\\dots,\\Pi_k$ are nonnegative and sum to 1.\n\nThe likelihood function of this mixture model is\n\\begin{equation}\n\\begin{aligned}\nL(\\Pi_S) = P(X|\\Pi_S)\\\\=\\prod_{i=1}^{n}P(X_i=x_i|\\Pi_S)\n\\end{aligned}\n\\end{equation}\n\nWhen we take the log of this likelihood function, we get \\begin{equation}\n\\begin{aligned}\n\\ell(\\Pi_S) = \\sum_{i=1}^{n}\\log(Pr(X_i=x_i|\\Pi_S))\\\\\n= \\sum_{i=1}^{n}\\log[\\Pi_S P(X_i=x_i|Z_i=S)+(1-\\Pi_S)P(X_i=x_i|Z_i=F)]\n\\end{aligned}\n\\end{equation}\n\nUnlike the example with the binomial distribution, this log likelihood is difficult to differentiate, so to find the maximum, we must rely on numerical methods.\n\n\\section{EM Algorithm}\nThe Expectation Maximization (EM) Algorithm is a method for finding maximum likelihood estimates for a model. The key idea behind the EM algorithm is \"data augmentation\": augmenting the observed data with data which we do not have but wish we had. Suppose our data is $X$; then the augmented data is $(X,Z)$, where $Z$ is the \"missing data\".\n\nLet $L(\\theta) = P(X|\\theta)$ be the \"marginal likelihood\"/\"observed likelihood\". The \"complete likelihood\" is $L_{comp}(\\theta) = P(X,Z|\\theta)$. \n\nThe steps of the EM algorithm are as follows:\n\\begin{enumerate}\n\n\\item Choose some $\\theta_0$.\n\n\\item E step: Form the \"expected\" complete log likelihood by taking the expectation over $Z$. In other words, find \\begin{equation}\nQ(\\theta,\\theta_0) = E_{Z|X,\\theta_0}[\\ell_{comp}(\\theta;Z,X)]\n\\end{equation}\n\n\\item M step: Choose the value of $\\theta$ which maximizes $Q(\\theta,\\theta_0)$.\n\n\\item The maximizing $\\theta$ becomes your new $\\theta_0$. Repeat the E and M steps until $\\ell(\\theta)$ does not change very much.\n\\end{enumerate}\n\nThe advantage of the EM algorithm is that the likelihood never decreases from one iteration to the next.\n\nSometimes the algorithm will converge to a local optimum rather than a global optimum. In practice the algorithm is therefore run multiple times from different starting values.\n
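\nTo make the steps concrete, here is a minimal sketch of EM for the tusk mixture. The arrays pS and pF are assumptions: pS[i] $= Pr(X_i=x_i|Z_i=S)$ and pF[i] $= Pr(X_i=x_i|Z_i=F)$, precomputed from the known allele frequencies.\n\\begin{verbatim}\nimport numpy as np\n\ndef em_mixture(pS, pF, pi0=0.5, n_iter=100):\n    pi = pi0                 # current estimate of Pi_S\n    for _ in range(n_iter):\n        # E step: posterior probability that tusk i is Savannah\n        w = pi * pS / (pi * pS + (1 - pi) * pF)\n        # M step: maximize the expected complete log likelihood\n        pi = w.mean()        # sum_i E[1(Z_i=S)|X,theta_0] / n\n    return pi\n\\end{verbatim}\nThe M step line is exactly the update derived below from the complete log likelihood.\n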
\nReturning to the elephant tusk mixture model, the complete likelihood is \\begin{equation}\n\\begin{aligned}\nL_{comp}(\\Pi_S) = P(X,Z|\\Pi_S) = \\prod_{i=1}^{n}P(X_i,Z_i|\\Pi_S) \\\\\n = \\prod_{i=1}^{n}P(X_i|Z_i)P(Z_i|\\Pi_S)\n \\propto \\prod_{i=1}^{n}\\Pi_S^{\\mathbbm{1}_{Z_i=S}}(1-\\Pi_S)^{\\mathbbm{1}_{Z_i=F}}\n\\end{aligned}\n\\end{equation}\nsince the factors $P(X_i|Z_i)$ do not depend on $\\Pi_S$.\n\n$\\mathbbm{1}$ stands for the indicator function, which is 1 for the given event and 0 otherwise.\n\nTaking the log of this expression, we get\n\\begin{equation}\n\\ell_{comp}(\\Pi_S) = \\underset{constant}{C} + \\sum_i \\mathbbm{1}(Z_i=S)\\log(\\Pi_S) + \\sum_i \\mathbbm{1}(Z_i=F)\\log(1-\\Pi_S)\n\\end{equation}\nwhere $C$ collects the terms that do not involve $\\Pi_S$.\n\nThe expectation of an indicator function is the probability of the corresponding event. Taking the expectation of the complete log likelihood over $Z$ and maximizing over $\\Pi_S$, we find that it is maximized at\n\\begin{equation}\n\\hat{\\Pi}_S = \\frac{\\sum_i E(\\mathbbm{1}(Z_i=S)|X,\\theta_0)}{n},\n\\end{equation}\nwhere the denominator is $n$ because $E(\\mathbbm{1}(Z_i=S)|X,\\theta_0) + E(\\mathbbm{1}(Z_i=F)|X,\\theta_0) = 1$ for each $i$.\n\n\\end{document}\n\n", "meta": {"hexsha": "b39dab070563138f5d941067bbe59c6b0c4cdb16", "size": 7012, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/scribe_notes_2016/lec4.tex", "max_stars_repo_name": "stephens999/hgen48600", "max_stars_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-03-19T18:02:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:15:28.000Z", "max_issues_repo_path": "docs/scribe_notes_2016/lec4.tex", "max_issues_repo_name": "stephens999/hgen48600", "max_issues_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-02-05T00:34:09.000Z", "max_issues_repo_issues_event_max_datetime": "2017-03-07T20:15:19.000Z", "max_forks_repo_path": "docs/scribe_notes_2016/lec4.tex", "max_forks_repo_name": "stephens999/hgen48600", "max_forks_repo_head_hexsha": "e5901f91d81ba4902ae1399a2db3f0903454b34e", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:59:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-11T23:09:29.000Z", "avg_line_length": 42.2409638554, "max_line_length": 363, "alphanum_fraction": 0.7037934969, "num_tokens": 2180, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105720171531, "lm_q2_score": 0.8397339696776499, "lm_q1q2_score": 0.5587678611054397}}
{"text": "\\section{Computational Methods and Software}\n\\label{sec:software}\n\nThe previous section described a series of models \nfor the system of interest. \n%These models are too \n%complicated to solve exactly, and must instead be \n%instantiated with software to produce a numerical \n%result. \nThis section details the numerical formulation\nand simulation of these models. It begins with a \ndiscussion of the numerical discretization of the equations \nof interest. The mesh discretization is then described. \nNext, the scientific software in which these numerical \nmodels are used is discussed. Finally, the tool\nchain and supercomputer systems are briefly introduced. \n\n\\subsection{Discretization Scheme}\n\nTo numerically solve the Navier-Stokes equations on a computer, a\nGalerkin finite element method (FEM) is used, which\nrequires that the equations in section~\\ref{sub_sec:ns_en} \nbe cast into a weak form. Manipulating these partial differential\nequations into a variational formulation is accomplished by multiplying \nthe equations by appropriate test functions and integrating over the\ndomain, $\\Omega$. The resulting weak problem is to find, \n$(u,p,T) \\in H^1(\\Omega)^3 \\times L_2(\\Omega) \\times H^1(\\Omega)$ such \nthat \n\n%\n% http://www.numerik.uni-hd.de/Oberwolfach-Seminar/CFD-Course.pdf\n%\n\\begin{align}\n  (\\frac{\\partial u}{\\partial t}, v) + (u \\cdot \\nabla u,v) + \\nu\n   (\\nabla u, \\nabla v)  \n  -(p,\\nabla \\cdot u) &= (gT'/T_0,v)\n \\label{eqn:ns_weak} \\\\\n (\\nabla \\cdot u,q) &= 0\n \\label{eqn:cont_weak} \\\\\n (\\frac{\\partial T}{\\partial t}, w) + (u \\cdot \\nabla T, w) + (k \\, \\nabla\n T, \\nabla w) &= 0.\\label{eqn:en_weak}\n\\end{align}\n\n$\\forall (v,q,w) \\in H^1(\\Omega)^3 \\times L_2(\\Omega) \\times\nH^1(\\Omega)$, where $(u,v) = \\int_\\Omega u \\cdot v \\, dx$.  \nSome of the simulations presented here were conducted under\nsteady conditions, for which the $\\frac{\\partial}{\\partial t}$ terms\nvanish. A Galerkin FEM scheme is obtained by posing the weak form in\nterms of discrete subspaces of the function spaces specified above\ndefined using piecewise-polynomial basis functions. All of the\nsimulations discussed in this work were accomplished using \nlinear basis functions for both the velocity and pressure.\n%Typically, the use of equal order elements for velocity and pressure is\n%ruled out in the standard Galerkin FEM formulation by the Babuska-Brezzi\n%condition\\cite{bb-cond}. \nThe scheme is\nstable with equal-order elements for velocity and pressure due to the\nintroduction of streamline upwind/Petrov-Galerkin (SUPG) stabilization\nterms as first described by Hughes\\cite{Hughes198685,supg} and extended\nto natural convection as in Becker and Braack\\cite{Becker2002428}. The\nstabilization terms add artificial dissipation that approaches zero as the\nresidual converges. This scheme is ``consistent'' because the\nunderlying order of convergence of the numerical method is not \naffected\\cite{hughes2000finite}. \n\nThis stabilization is accomplished by introducing an additional term,\n$\\langle Lc,S\\phi \\rangle_\\tau$, to the weak form defined in Equations\n\\ref{eqn:ns_weak}-\\ref{eqn:en_weak}. Here $L()$ is the operator for the PDEs\nin \\ref{sub_sec:ns_en}, S is a stabilization operator\nwhich is chosen to be the negative adjoint of the differential terms of\n$L()$, and $c$ and $\\phi$ are state and test functions, i.e. \n$ c= (u,p,T)$, and $\\phi = ( v,w,q )$. 
\n\n%\n% gave not completely described numerical methods\n% for instance, have not indicated the stabilization schemes\n% do not need complete equations, but should permit someone to access \n% the literature and construct precisely the numerical formulations used\n%\n\n\n\\subsection{Mesh Discretization}\n\n%\n% what about mesh...\n%\nThe domains described in subsection~\\ref{sec:bc} are\nconsistently discretized. The domain extents\nare scaled by system diameter but the same number of grid points are used\nfor every simulation. Thus, while the ratio of the domain length to system\ndiameter remains fixed, the grid spacing increases proportionally with\ndomain length. \n\nThe mesh has a uniform spacing in the lateral directions, except for a\nsingle refinement in the region of the vanes. Typically, the grid is \nroughly one hundred points in the streamwise and spanwise directions\nbefore the refinement. The refinement halves the spacing (doubles the\nnumber of points) in all three\ncoordinate directions, $\\{x,y,z\\}$, in this region. The refinement is made\nfrom the ground to 1.5 times the height of the vanes and cone. \n\nThe mesh is non-uniform in height to\nresolve the boundary layer. This is accomplished by redistributing a\nmesh uniform in height, $z$, according to\n\\begin{equation}\n z = \\begin{cases} C_1(z-L_z)+L_z,& \\text{if } z \\geq z_\\delta\\\\\n      C_2 \\left(\\text{exp}(C_3 z) - 1\\right),                 & \\text{otherwise}\n     \\end{cases}\n\\end{equation}\n\nwhere $z_\\delta$ is the chosen height of the boundary layer mesh, and\n$C_1-C_3$ are scaling coefficients. This gives the mesh an exponentially\nvarying character, with the coefficients chosen to ensure ten or more\npoints in the boundary layer, isotropic spacing in cells outside of\nit, and smooth blending between these two regimes. Each boundary layer\nspacing was tested against a finer spacing to ensure that the results\nwere not sensitive to the choice of spacing. A horizontal slice through a\nrepresentative domain is shown in Figure \\ref{fig:meshing}. The single\nrefinement in the region of the vanes is visible, as well as the finer\nmeshed boundary layer region near the ground. \n\n  \\begin{figure}[!htb]\n    \\begin{center}\n     \\includegraphics[width = 10 cm]{figs/meshing}\n     \\caption{Horizontal slice through the domain, to show a\n     representative meshing. The single refinement region around the\n     vanes is visible, as well as the finer boundary layer mesh near the\n     ground.}\n     \\label{fig:meshing}\n    \\end{center}\n  \\end{figure}\n
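\nA minimal sketch of this redistribution, with illustrative values for the coefficients (not the thesis's actual values):\n\\begin{verbatim}\nimport numpy as np\n\ndef redistribute(z, Lz, z_delta, C1, C2, C3):\n    # Map a uniform vertical coordinate z in [0, Lz] to a stretched one:\n    # exponential clustering in the boundary layer, linear blend above.\n    z = np.asarray(z, dtype=float)\n    return np.where(z >= z_delta,\n                    C1 * (z - Lz) + Lz,\n                    C2 * (np.exp(C3 * z) - 1.0))\n\n# Example: 101 uniform points stretched toward the ground\nz_uniform = np.linspace(0.0, 2.0, 101)\nz_stretch = redistribute(z_uniform, Lz=2.0, z_delta=0.5,\n                         C1=1.2, C2=0.05, C3=4.0)\n\\end{verbatim}\n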
\nFinally, the diffusivities are proportionally\nscaled with grid size to ensure that the cell Reynolds number, \n\\begin{equation}\n \\text{Re}_\\text{cell} = \\frac{\\text{max}(\\Delta x,\\Delta y) u}{\\nu_T}\n\\end{equation}\n is maintained for every simulation, in order to ensure stability.\n\n\n%% hmin = 0.001  \n%% hmax = 0.4  \n%% zb = 2.0\n%% hrat = ${/ ${mesh-options/hmax} ${mesh-options/hmin}}\n%% zmax = ${mesh-options/domain_x3_max}\n%% loghrat = ${= log ${mesh-options/hrat}}\n%% c2 = ${/ ${mesh-options/zb} ${- ${mesh-options/hrat} 1}} \n%% zetab = ${/ ${mesh-options/zmax} ${+ 1 ${/ ${- ${mesh-options/zmax} ${mesh-options/zb}} ${* ${mesh-options/c2} ${mesh-options/hrat} ${mesh-options/loghrat}}}}}\n%% c3 = ${/ ${mesh-options/loghrat} ${mesh-options/zetab}}\n%% mesh_nx3 = ${= ceil ${/ ${* ${mesh-options/zmax} ${mesh-options/c2} ${mesh-options/c3}} ${mesh-options/hmin}}}\n%% c1 = ${* ${mesh-options/c2} ${mesh-options/c3} ${= exp ${* ${mesh-options/c3} ${mesh-options/zetab}}}}\n%% redistribute = '{x}{y}{if(z>${mesh-options/zetab},${mesh-options/c1}*(z-${mesh-options/zmax})+${mesh-options/zmax},${mesh-options/c2}*(exp(${mesh-options/c3}*z)-1))}' \n\n%After operation, solutions are evaluated to ensure that \n%the qualitative character of the solution does not change.\n\n\\subsection{Software Stack}\n\nThe numerical approximations described above have been implemented using\nthe GRINS library\\cite{GRINSpaper} by Bauman and Stogner using the\nlibMesh\\cite{libMeshPaper} FEM infrastructure. GRINS was designed to\nsupport multiphysics FEM applications and the reuse and extension of\nmathematical modeling kernels; it provides interfaces to existing\nsolver and discretization libraries to enable modern solution\nstrategies while retaining the flexibility to\neffectively address a wide range of science and engineering problems.  \n\nGRINS provides a platform that enables powerful numerical algorithms\nsuch as adjoint-based AMR, adaptive modeling, sensitivity analysis,\nand, eventually, uncertainty quantification. While few of these\ncapabilities are in use for the present work, they could be useful in\nfuture investigations. \n\nGRINS stands for ``General Reacting Incompressible Navier-Stokes'',\nwhich roughly encapsulates the physical regimes it was originally\ndesigned to simulate. GRINS is open-source, and available on\n\\href{https://www.github.com/grinsfem/grins}{GitHub}. It is released \nunder LGPL2.1.  GRINS is heavily unit tested, with over 60 tests\navailable to ensure the reliability of results regardless of install\nplatform. \n\n%The remainder of this subsection is devoted to\n%discussing the underlying libraries used and the description of the\n%GRINS framework.  \n% PETSC\\cite{petsc} trilinos\\cite{trilinos}\n\n% GRINS also uses the fparser\\cite{fparser}\n% library to support both parsing and compilation of mathematical\n% functions into high \n% performance kernels. This capability allows for easy specification of\n% boundary conditions, initial conditions, or constitutive equations from an input file. 
\n\n% Currently, libMesh has been scaled tens of thousands of cores and has\n% been run on over 100,000 cores on the BG/Q machine Mira at Argonne National\n% Lab\\cite{libmesh-scaling}\n\n%In principle, alternative software libraries/frameworks such as\n%FEniCS\\cite{fenics}, OpenFOAM\\cite{openfoam}, etc. would likely be\n%capable of simulating this regime. \n\n\n%\n% INCLUDE IN THESIS\n%\n% \\subsection{Solver Options}\n\n% GRINS uses PETSC\\cite{petsc} and trilinos\\cite{trilinos} for numerical\n% linear algebra, such as constructing and using sparse matrices, finding\n% the iterative solution of linear systems, and for preconditioning.  \n\n% While a variety of solver options have been tested in PETSC, all the\n% results shown in this document use GMRES with block Jacobi for\n% preconditioning\\cite{Saad:2003} for the linear solve. \n\n% This uses the inverse of the diagonal block for that processor for\n% preconditioning. \n\n% the preconditioner it's going to use to precondition the linear system\n% for the solution of the diagonal block. To approximate this, incomplete\n% LU factorization is used. \n\n% ILU(0) factorization\n\n\n\n%% (11:41:54 AM) nick: ``-ksp_view -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 0''\n%% (11:42:00 AM) Paul Bauman: OK\n%% (11:42:17 AM) Paul Bauman: -pc_type is the preconditioner for the entire linear system\n%% (11:42:25 AM) Paul Bauman: You're doing bjacobi = block Jacobi\n%% (11:42:28 AM) nick: right\n%% (11:42:36 AM) nick: and does anyone have a good reference I can learn this better? i feel as if I cant look this up, for some reason\n%% (11:42:39 AM) Paul Bauman: That is just using the inverse of the diagonal block for that processor\n%% (11:42:46 AM) Paul Bauman: Now\n%% (11:42:55 AM) Paul Bauman: that is a linear solve\n%% (11:43:17 AM) Paul Bauman: So you can use all the linear solver technology to solve or approximately that block\n%% (11:43:24 AM) Paul Bauman: Hence, -sub_pc_type\n%% (11:43:32 AM) hil left the room.\n%% (11:43:46 AM) Paul Bauman: That's the preconditioner it's going to use to precondition the linear system for the solution of the diagonal block\n%% (11:43:53 AM) Paul Bauman: You're telling it to use incomplete lu\n%% (11:44:26 AM) Paul Bauman: Now the -sub_pc_factor_levels options applies to ilu\n%% (11:44:28 AM) hil entered the room.\n%% (11:44:44 AM) Paul Bauman: The incomplete part of imcomplete LU is about the level of fill you use\n%% (11:45:05 AM) Paul Bauman: The more levels of fill you have, the more ``complete'' the incomplete LU will be\n%% (11:45:08 AM) Paul Bauman: Does that make sense?\n%% (11:45:23 AM) nick: no, that is where you lost me\n%% (11:45:39 AM) nick: i dont think i know this level of fill\n%% (11:46:17 AM) Paul Bauman: Check out Youssef Saad's book if more curious about the subject\n%% (11:46:37 AM) nick: cool thanks\n%% (11:46:38 AM) Paul Bauman: Suffice it to say, you heopfully shouldn't ever need to go past 3 or 4 levels of fill\n%% (11:47:03 AM) Paul Bauman: Also, if you've got superlu installed with the PETSc, consider using -sub_pc_factor_mat_solver_package superlu\n%% (11:47:15 AM) Paul Bauman: That's a *much* faster/better implementation than PETSc's\n\n\n\\subsection{Tool Chain and Simulation Custodianship}\n\nSimulations are performed on the Texas Advanced Computing Center (TACC)\nsupercomputers Lonestar Four and Stampede. Run durations for transient\ncases are typically twelve hours to perform several hundred timesteps. 
\nThese runs are submitted to the production queue and use \n264-528 processing cores, \nor 22-44 nodes on Lonestar (with 12 cores per node), and a similar number\nfor Stampede. The runs typically have several million degrees of freedom (DoF), \nand the local number of DoF per core is maintained at $O(10^4)$. This was\nselected due to memory constraints, after a strong scaling\nanalysis of the performance of the code on these resources, and\nafter consulting with the software developers.  \n\nAfter a run terminates, several scripts are automatically invoked. \nThese scripts archive the run (outside of the volatile /scratch \nproduction directories) and, simultaneously, label the concluded run with\nunique metadata that defines the system environment, the job's input\nfiles and run definitions, as well as information detailing the\nhypothesis or physics the job was intended to investigate. Finally, once\na week a script performs \\textbf{rsync} on the entire archived database to\nensure more than single redundancy for the runs. \n\nIn other words, the workflow is designed to permit rapid queuing of a\nseries of runs (in parallel) to investigate a variety of conditions or\nscenario parameters. This capability is necessary for the optimization\ncampaign detailed in \\ref{sec:proposed_work}, where running many\nconcurrent investigations will be required to adequately sample the\nconfiguration space.  \n", "meta": {"hexsha": "8e38942d18fa1e58d39e1eb0a8005bc653e25f35", "size": 14817, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "disputatio/propositum/software.tex", "max_stars_repo_name": "nicholasmalaya/paleologos", "max_stars_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-04T17:49:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-04T17:49:42.000Z", "max_issues_repo_path": "disputatio/propositum/software.tex", "max_issues_repo_name": "nicholasmalaya/paleologos", "max_issues_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "disputatio/propositum/software.tex", "max_forks_repo_name": "nicholasmalaya/paleologos", "max_forks_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-01-04T16:08:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-16T19:34:24.000Z", "avg_line_length": 48.2638436482, "max_line_length": 170, "alphanum_fraction": 0.7598029291, "num_tokens": 3882, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.839733963661418, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.5587678515303982}}
{"text": "\n\\subsection{Implications of axiom schema of specification}\n\n\\subsubsection{All finite subsets exist}\n\nFinite subsets. Don't know about infinite subsets\n\nIf we can define a subset, by the axiom of specification it exists.\n\nFor example if set \\(\\{a,b,c\\}\\) exists, we can define a preterite to select any subset of this.\n\nFor example we can use define a \\(P(x)\\) as \\(x=a\\lor x=b\\) to extract the subset \\(\\{a,b\\}\\).\n\nIf a subset is infinitely large, \n\n\\subsubsection{Intersections of finite sets exist}\n\nCan prove exists from this axiom\n\n\\subsubsection{If any set exists, the empty set exists}\n\n\\(\\forall x \\forall a \\exists s[(P(x)\\land x\\in a )\\leftrightarrow (x\\in s)]\\)\n\n", "meta": {"hexsha": "dd266ec35f2b394f2dd8c1cf22575419e6f602c6", "size": 675, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/logic/setsSpecification/01-02-specificationResults.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/logic/setsSpecification/01-02-specificationResults.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/logic/setsSpecification/01-02-specificationResults.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.125, "max_line_length": 96, "alphanum_fraction": 0.7274074074, "num_tokens": 178, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5586676743784622}}
{"text": "\\section{Exercises in Statistics}\\label{S:xsStatistics}\n\\begin{ExerciseList}\n\\Exercise What is the sample mean and sample variance of the following dataset:\n\\[\n1, 3, 2, 1, 2, 3, 3\n\\]\n\\Answer Apply the formulas for the sample mean, $\\overline{X_7}$, and sample variance.\n\\end{ExerciseList}\n\n", "meta": {"hexsha": "dffb0713a988fa7dd11611af22c208e42b8156a2", "size": 290, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "matlab/csebook/ExsInStatistics.tex", "max_stars_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_stars_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-19T07:54:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T13:55:18.000Z", "max_issues_repo_path": "matlab/csebook/ExsInStatistics.tex", "max_issues_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_issues_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "matlab/csebook/ExsInStatistics.tex", "max_forks_repo_name": "raazesh-sainudiin/computational-statistical-experiments", "max_forks_repo_head_hexsha": "edb33db9a05b32645e8337c03729c0b8d02fa728", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-07-18T07:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-19T11:28:24.000Z", "avg_line_length": 29.0, "max_line_length": 86, "alphanum_fraction": 0.7448275862, "num_tokens": 89, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8104789178257654, "lm_q2_score": 0.6893056104028797, "lm_q1q2_score": 0.5586676651705547}}
{"text": "%!TEX root = ../TTK26-Summary.tex\n\\section{Muscle modeling}\nThe main purpose of a mathematical model is\n\\begin{itemize}\n    \\item \\emph{comprehension}, the ability to aid in understanding the system, and\n    \\item \\emph{prediction}, the ability to predict dynamics outside of experimental boundaries.\n\\end{itemize}\nFurther, a good model should have\n\\begin{itemize}\n    \\item \\emph{credibility}, that it predicts well, and\n    \\item \\emph{tractability}, that it is simple.\n\\end{itemize}\nIn a way, these oppose each other: A very precise model is often also very complex. It is necessary to balance these qualities.\n\n\\subsection{Types of muscle models}\nDivided first by the level they represent:\n\\begin{enumerate}\n    \\item Microscopic (crossbridge/sarcomere) models\n        \\begin{enumerate}\n            \\item Conventional cross-bridge models (introduced by Huxley)\n            \\item Unconventional cross-bridge models (based on different postulates than Huxley's)\n        \\end{enumerate}\n    \\item Macroscopic (whole muscle) models\n        \\begin{enumerate}\n            \\item Viscoelastic models (consider muscle as viscoelastic material)\n            \\item Hill-type models (based on Hill, 1938)\n            \\item Black box models (use system identification methods)\n        \\end{enumerate}\n    \\item Fiber models\n\\end{enumerate}\n\n\\subsection{Conventional microscopic models}\nIntroduced by Huxley, 1957 and improved upon since. Assumes all sarcomeres identical, and macroscopic properties can be calculated through integrals on $n(x,t)$, where $n$ is ...\n\nHuxley's model was extended by hill with more states of the ...\n\n\\subsection{Unconventional microscopic models}\nFounded on different assumptions.\n\\paragraph{Bornhorst \\& Minardi} Modeling each cross-bridge as linear energy converters.\n\\paragraph{Iwazumi} No direct binding between myosine and actin, ATP causes hydrolysis.\n\\paragraph{Tirosh} Hydrodynamic model\n\\paragraph{Hatze} Assume elastisity in Z-disks and M-lines rather than in the cross-bridges.\n\n\\subsection{Macroscopic models}\n\\paragraph{Viscoelastic} Assume muscle is viscoelastic material. Can be represented by a spring-damper in series with an undamped spring.\n\\paragraph{Hill-type} Improvement upon viscoelastic. Most used model for dynamic analysis and control.  Consists of a series elastic element, a contractile element, and a parallel elastic element. Made to model contraction under max stimuli over short contraction distances. Has been further developed in many directions.\n\\begin{equation}\n    \\dot{L} = C(P) \\dot(P) - F(P, P_0)\n\\end{equation}\n\n\\subsection{Black box models}\nBased on statistics: Do a bunch of experiments, and fit a model to the data. 
\n\\subsection{Fiber models}\nIn contrast to the models above, fibres are not assumed uniform.\n\n\\subsection{Distribution-moment models}\nRepresent the bond distribution function $n(x,t)$ from the Huxley model as a Gaussian probability distribution whose moments correspond to stiffness, force, and elastic energy.\n", "meta": {"hexsha": "22776cb82e01ed6185579aaaf1ef7d84a3d1ad85", "size": 3356, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TTK26 Biomedisinsk instrumentering og regulering/tex/sec-w41-muscle-modeling.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TTK26 Biomedisinsk instrumentering og regulering/tex/sec-w41-muscle-modeling.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TTK26 Biomedisinsk instrumentering og regulering/tex/sec-w41-muscle-modeling.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9428571429, "max_line_length": 321, "alphanum_fraction": 0.7574493445, "num_tokens": 826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703224, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5586676588596564}}
{"text": "\\chapter{Optimal Power and Natural Gas Flow}\r\n\\label{chap:formulation}\r\n\r\nIn this chapter, we explain in detail the optimization model behind \\mpng{}. We introduce the problem as an interdependent steady-state optimal power and natural flow, where a linear objective function models the overall operational cost, subject to several linear and non-linear constraints coming from both technical and operational conditions.\r\n\r\nIn practice, the state-variables of a natural gas network change quite slowly in comparison to a power system~\\cite{Cui2016}. Then, we consider a window of analysis of one entire day for the gas network. On the other hand, as it is found in most of the simulation tools, the power system is modeled using standard windows of one hour. However, a 24-hour time resolution would introduce too much complexity in terms of computational and mathematical tractability. Therefore, we propose a multi-period window of analysis defined by the user. Figure \\ref{fig:periods} shows how the optimization model allows $n$ periods for the power system, whose individual duration is such that $\\{t_i\\in \\mathbb{N}, \\; i=1,2,...,n : \\sum_i t_i = 24\\}$, and one entire day for the natural gas system. The interconnection between systems is always consistent. Namely, each individual period of the power system is precisely linked to the single period of the natural gas network.\r\n\r\n\\begin{figure}[!ht]\r\n\t\\centering\r\n\t\\includegraphics[scale=1.0]{Figures/Time_resolution}\r\n\t\\caption{Periods considered in \\mpng{}.}\r\n\t\\label{fig:periods}\r\n\\end{figure}\r\n\r\n\\section*{Nomenclature}\r\n\\subsection*{Indexes}\r\n\\begin{labeling}{Longer label\\quad}\r\n\\item [$i$, $j$] Gas nodes.\r\n\\item [$m$, $n$] Electric nodes (buses). 
\r\n\\item [$o$] Gas pipeline.\r\n\\item [$c$] Compressor.\r\n\\item [$l$] Transmission line.\r\n\\item [$w$] Gas well.\r\n\\item [$e$] Power generator.\r\n\\item [$ref$] Reference bus.\r\n\\item [$r$] Spinning reserve.\r\n\\item [$\\sigma$] Type of gas load.\r\n\\end{labeling}\r\n\r\n\\subsection*{Parameters}\r\n\r\n\\begin{labeling}{Longer label\\quad}\r\n\\item [$\\alpha^{i}_{\\pi_+}, \\alpha^{i}_{\\pi _-}$] Penalties for over-pressure and under-pressure at node $i$.\r\n\\item [$\\alpha_{\\gamma}$] Penalties for non-supplied gas.\r\n\\item [$\\alpha_{\\epsilon}$] Penalties for non-supplied electricity.\r\n\\item [$C^{w}_{G}$] Gas cost at the well $w$.\r\n\\item [$C^{oij}_{O}$] Transport cost of pipeline $o$, from node $i$ to node $j$.\r\n\\item [$C^{cij}_{C}$] Compression cost of compressor $c$, from node $i$ to node $j$.\r\n\\item [$C^{i}_{S}$] Storage cost at node $i$.\r\n\\item [$C^{i}_{S_+}$] Storage outflow price at node $i$.\r\n\\item [$C^{i}_{S_-}$] Storage inflow price at node $i$.\r\n\\item [$C^{e}_{E}$] Power generation cost (excluding gas cost).\r\n\\item [$\\eta^{q}_{e}$] Thermal efficiency at generator $e$.\r\n\\item [$D_{g}^{i \\sigma}$] Gas demand of type $\\sigma$ at node $i$.\r\n\\item [$D_{e}^{tm}$] Electricity demand at bus $m$ at time $t$.\r\n\\item [$\\bar{g}^{w}$, $\\underline{g}^{w}$] Gas production limits.\r\n\\item [$\\overline{\\pi}^{i}$, $\\underline{\\pi}^{i}$] Quadratic pressure limits at node $i$.\r\n\\item [$S^{i}_{0}$] Initial stored gas at node $i$.\r\n\\item [$\\overline{S}^{i}$, $\\underline{S}^{i}$] Storage limits at node $i$.\r\n\\item [$\\kappa^{oij}$] Weymouth constant of pipeline $o$.\r\n\\item [$\\delta^{oij}$] Threshold for gas flow capacities.\r\n\\item [$\\beta^{cij}$] Compression ratio of compressor $c$.\r\n\\item [$Z^{c}$] Ratio parameter of compressor $c$.\r\n\\item [$B^{c}$] Compressor design parameter of compressor $c$.\r\n\\item [$x$, $y$, $z$] Gas consumption parameters of gas-driven compressors.\r\n\\item [$\\overline{f}^{oij}_{g}$] Gas transport capacity of pipeline $o$, from node $i$ to node $j$.\r\n\\item [$\\overline{f}^{cij}_{g}$] Gas flow capacity of compressor $c$, from node $i$ to node $j$.\r\n\\item [$\\overline{f}^{i}_{s}$,$\\underline{f}^{i}_{s}$] Storage outflow capacities at node $i$.\r\n\\item [$\\overline{p}_{g}^{e}$, $\\underline{p}_{g}^{e}$] Active power generation limits of generator $e$.\r\n\\item [$\\overline{q}_{g}^{e}$, $\\underline{q}_{g}^{e}$] Reactive power generation limits of generator $e$.\r\n\\item [$\\overline{V}^{tm}$, $\\underline{V}^{tm}$] Voltage limits for bus $m$ at time $t$.\r\n\\item [$\\mathbb{S}^{l}$] Transmission capacity of power line $l$.\r\n%\\item [$x^{mn}_{l}$] Reactance of line $l$, from bus $m$ to bus $n$.\r\n\\item [$R^{tr}$]  Spinning reserve in the $r$-th spinning reserve zone at time $t$.\r\n%\\item [$M$] Generators assignment matrix.\r\n%\\item [$L$] Compressors assignment matrix.\r\n\\item [$u^{te}$] Unit commitment state for generator $e$ at time $t$.\r\n\\item [$\\tau^{t}$] Energy weight related to period of time $t$.\r\n\\item [$E^{e}$] Available energy for hydroelectric generator $e$, \\break during the total analysis window.\r\n\\end{labeling}\r\n\r\n\\subsection*{Sets}\r\n\r\n\\begin{labeling}{Longer label\\quad}\r\n\\item [$\\cal{N}$] Gas nodes, $\\left| \\cal{N} \\right|= n_{\\cal{N}}$.\r\n\\item [$\\cal{N}_{S}$] Gas nodes with storage, $\\cal{N}_{S} \\subset \\cal{N} $, $\\left| \\cal{N}_{S} \\right|= n_{\\cal{S}}$.\r\n\\item [${\\cal{O}}$] Gas pipelines, $\\left| \\cal{O} 
 \\right| = n_{\\cal{O}}$\r\n\\item [${\\cal{C}}$] Compressors, ${\\cal{C}}_{G} \\cup {\\cal{C}}_{E}$, $\\left| \\cal{C}  \\right| = n_{\\cal{C}}$ \r\n\\item [${\\cal{C}}_{G}$] Compressors controlled by natural gas, ${\\cal{C}}_{G} \\subseteq {\\cal{C}}$, $\\left| {\\cal{C}}_{G}  \\right| = n_{{\\cal{C}}_{G}}$ \r\n\\item [${\\cal{C}}_{E}$] Compressors controlled by electric power, ${\\cal{C}}_{E} \\subseteq {\\cal{C}}$, $\\left| {\\cal{C}}_{E}  \\right| = n_{{\\cal{C}}_{P}}$ \r\n\\item [${\\cal{W}}$] Gas wells, $\\left| \\cal{W} \\right|= n_{\\cal{W}}$.\r\n%\\item [${\\cal{W}}^{i}$] Gas wells at node $i$, ${\\cal{W}}^{i} \\subset \\cal{W} $, $\\left| {\\cal{W}}^{i} \\right|= n_{{\\cal{W}}^{i}}$.\r\n\\item [$\\cal{B}$] Power buses, $\\left| \\cal{B} \\right|= n_{\\cal{B}}$.\r\n\\item [$\\cal{L}$] Power lines, $\\left| \\cal{L} \\right|= n_{\\cal{L}}$.\r\n\\item [$\\cal{E}$] Power unit generators, ${\\cal{E}}_{H} \\cup {\\cal{E}}_{G}^{i} =\\cal{E}$, $\\left| \\cal{E} \\right|= n_{\\cal{E}}$.\r\n\\item [${\\cal{E}}_{H}$] Hydroelectric power units, ${\\cal{E}}_{H} \\subseteq \\cal{E} $, $\\left| {\\cal{E}}_{H} \\right|= n_{{\\cal{E}}_{H}}$.\r\n\\item [${\\cal{E}}^{i}_{G}$] Gas-fired power units connected to gas node $i$, \\break ${\\cal{E}}^{i}_{G} \\subseteq \\cal{E}$, $\\left| {\\cal{E}}^{i}_{G} \\right|= n_{{\\cal{E}}_{G}}$.\r\n\\item [${\\cal{Z}}_{r}$] Spinning reserve zones. \r\n\\item [${\\cal{F}}^{i}_{G}$, ${\\cal{T}}^{i}_{G}$] Connected pipelines to node $i$ at side \\textit{From} or \\textit{To}.\r\n\\item [${\\cal{F}}^{i}_{C}$, ${\\cal{T}}^{i}_{C}$] Connected compressors to node $i$ at side \\textit{From} or \\textit{To}.\r\n\\item [${\\cal{F}}^{m}_{E}$, ${\\cal{T}}^{m}_{E}$] Connected power lines to bus $m$ at side \\textit{From} or \\textit{To}. \r\n\\item [$\\cal{T}$] Periods of analysis.\r\n\\item [$\\Sigma$] Different types of gas demands.\r\n\\end{labeling}\r\n\r\n\\subsection*{Variables}\r\n\r\n\\begin{labeling}{Longer label\\quad}\r\n\\item [${f}_{g}^{oij}$] Gas flow in pipeline $o$, from node $i$ to node $j$.\r\n\\item [${f}_{g_+}^{oij}$ ${f}_{g_-}^{oij}$] Positive and negative gas flow in pipeline $o$.\r\n\\item [${f}_{g}^{cij}$] Gas flow in compressor $c$, from node $i$ to node $j$.\r\n\\item [$\\psi^{c}$] Power consumed by compressor $c$.\r\n\\item [$\\phi^{c}$] Gas consumed by compressor $c$, connected to node $i$ at side \\textit{from}.\r\n\\item [$\\gamma^{i \\sigma}$] Non-served gas of type $\\sigma$ at node $i$.\r\n\\item [$\\pi^{i}$] Quadratic pressure at node $i$.\r\n\\item [${\\pi}^{i}_{+}$, ${\\pi}^{i}_{-}$] Quadratic over/under pressures at node $i$.\r\n\\item [$g^{w}$] Gas production at well $w$.\r\n\\item [$f_{s}^{i}$] Storage outflow difference.\r\n\\item [$f_{s_+}^{i}$, $f_{s_-}^{i}$] Storage outflow and inflow.\r\n\\item [$p_{g}^{te}$] Active power production of generator $e$ at time $t$.\r\n\\item [$q_{g}^{te}$] Reactive power production of generator $e$ at time $t$.\r\n\\item [$V^{tm}$] Voltage magnitude of bus $m$ at time $t$.\r\n\\item [$\\theta^{tm}$] Voltage angle of bus $m$ at time $t$.\r\n\\item [$\\epsilon^{tm}$] Non-served active power of bus $m$ at time $t$.\r\n\\end{labeling}\r\n%\\section{~}\r\n\\section{Objective function}\r\n\r\nThe cost function presented in Equation \\ref{eq:obj_func} consists of several linear components, both from the power and the gas networks. It comprises the sum of the operation cost  of the interdependent system. 
In the case of the natural gas network, it includes the natural gas extraction cost, the transportation cost, the storage cost, the penalties associated with quadratic over- and under-pressures, and non-supplied natural gas. In the case of the power network, it includes the generation cost and the penalties associated with non-supplied power demand.\r\n\r\n%The cost function is represented by the equation \\ref{obj_func} and is composed by several linear components. The first component is the gas production cost at each of the wells. As well as the first component, the second one is the power generation cost for every power plant, for the complete period. \\color{red}The third component is the cost of the storage flow at every node with storage availability. \\color{black}The fourth  expression is the storage cost, having into account the previous storage level and the outing flow. The fifth component is the gas transport cost for each pipeline. Finally, the last three component are the penalties cost for over/under pressure, non-supply gas and non-supply power, respectively. Related to the non-gas supply, is important to clarify that the non-supply cost depends on the type of user. \r\n\r\n\\begin{equation}\r\n\\begin{aligned}\r\nC \\left( x \\right) = & \\sum_{w \\in \\cal{W}}{C^{w}_{G} g^{w}} + \\sum_{t \\in \\cal{T}} {\\tau}^{t}  \\sum_{e \\in \\cal{E}} {C^{e}_{E} p_{g}^{te}}\\\\ \r\n\t\t\t\t& + \\sum_{i \\in {\\cal{N}}_{S}}{\\left({C^{i}_{S_+} f^{i}_{s_+}} - {C^{i}_{S_-} f^{i}_{s_-}}  \\right)}\\\\\r\n\t\t\t\t& + \\sum_{i \\in {\\cal{N}}_{S}}{C^{i}_{S} \\left( S^{i}_{0} - f^{i}_{s} \\right)} \\\\\r\n\t\t\t\t& + \\sum_{o \\in \\cal{O}}{C^{oij}_{O} f^{oij}_{g_+}} - \\sum_{o \\in \\cal{O}}{C^{oij}_{O} f^{oij}_{g_-}} \\\\\r\n\t\t\t\t& + \\sum_{c \\in \\cal{C}}{C^{cij}_{C} f^{cij}_{g}} \\\\ \r\n\t\t\t\t& + \\sum_{i \\in \\cal{N}}{\\alpha^{i}_{\\pi_+} \\pi^{i}_{+}} + \\sum_{i \\in \\cal{N}}{ \\alpha^{i}_{\\pi_-} \\pi^{i}_{-}} \\\\\r\n\t\t\t\t& + \\sum_{i \\in \\cal{N}}\\sum_{\\sigma \\in \\Sigma} {\\alpha_{\\gamma}^{i \\sigma}\\gamma^{i \\sigma}} + \\alpha_{\\epsilon} \\sum_{t \\in \\cal{T}} {\\tau}^{t} \\sum_{m \\in \\cal{B}} {\\epsilon^{tm}}  \r\n\\end{aligned}\r\n\\label{eq:obj_func}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nThe aim of the Optimal Power and Natural Gas Flow is to minimize the linear objective $C(x)$ subject to the constraints explained below.\r\n\r\n\\section{Constraints}\r\n\r\n%The model of the optimal power and natural gas flow consists in maintain the balance in both networks. For the natural gas network the nodal balance must be has linear and non-linear constraints, which model the \r\n\r\n\\subsection{Gas network}\r\n\r\nEquation \\ref{eq:node_gas_balance} shows the gas balance for a specific node $k$ during a day. This gas balance is composed of the incoming and outgoing flows associated with pipelines and compressors at node $k$, the outgoing stored flow in the available storage, the generation related to that node, and the total gas demand. In detail, the gas demand is composed of the required flow in the gas-fired power plants and compressors, and the total gas demand of the rest of the consumers, excluding the non-supplied natural gas. 
\r\n\r\n\\begin{equation}\r\n\\begin{array}{l}\\displaystyle\r\n\\sum_{o \\in {\\cal{T}}^{k}_{G}}{{f}_{g}^{oij}} - \\sum_{o \\in {\\cal{F}}^{k}_{G}}{{f}_{g}^{oij}} + \\sum_{c \\in {\\cal{T}}^{k}_{C}}{{f}_{g}^{cij}} - \\sum_{c \\in {\\cal{F}}^{k}_{C}}{ \\left({f}_{g}^{cij} + \\phi^{c}\\right)} + {f}^{k}_{s}\\\\ [0.8cm] \\displaystyle\r\n+ \\sum_{w \\in {\\cal{W}}^{k}}{g^{w}}  - \\sum_{t \\in \\cal{T}} {\\tau}^{t} \\sum_{e \\in {\\cal{E}}^{k}_{G}}\\left({\\eta^{q}_{e}}\\cdot {p_{g}^{te}}\\right) = {\\sum_{\\sigma \\in \\Sigma}\\left( D^{\\sigma k}_{g} - \\gamma^{\\sigma k}\\right)} \\\\\r\n\\end{array}\r\n,\\;\\; \\forall k \\in \\cal{N}.\r\n\\label{eq:node_gas_balance}\r\n\\end{equation}\r\n\r\n\\newpage\r\n\\noindent \\textbf{Nodes}\\\\\r\n\r\nConstraints related to node $k$ are those that involve variables of non-supplied gas demands and quadratic pressures. The non-supplied gas at node $k$ for a specific user $\\sigma$ cannot exceed the total demand of that user. This constraint is presented in Equation \\ref{eq:nsg_limits}.\r\n\r\n\\begin{equation}\r\n{0 \\le \\gamma^{\\sigma k} \\le D^{\\sigma k}_{g}, \\quad \\forall \\sigma \\in \\Sigma, \\quad \\forall k \\in \\cal{N}}.\r\n\\label{eq:nsg_limits}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nMoreover, Equations \\ref{overp} and \\ref{underp} are the constraints that characterize the quadratic over-pressure and under-pressure at each node of the system, respectively. \r\n\r\n\\begin{equation}\r\n\\begin{array}{l}\r\n \\pi^{k} \\le \\overline{\\pi}^{k} + \\pi^{k}_{+}\\\\\r\n 0 \\le \\pi^{k}_{+}\\\\\r\n\\end{array} \r\n,\\quad \\forall k  \\in {\\cal{N}}\\\\ \r\n\\label{overp}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{array}{l}\r\n\\underline{\\pi}^{k} - \\pi^{k}_{-} \\le \\pi^{k}\\\\\r\n0 \\le \\pi^{k}_{-}\\\\\r\n\\end{array} \r\n,\\quad \\forall k  \\in {\\cal{N}}.\\\\ \r\n\\label{underp}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\n\\noindent \\textbf{Wells}\\\\\r\n\r\nThe constraints related to gas well injection depend on each well's characteristics. The injection limits are represented as follows:\r\n\r\n\\begin{equation}\r\n\\underline{g}^{w} \\le g^{w} \\le \\overline{g}^{w}, \\quad \\forall w \\in \\cal{W}.\r\n\\label{eq:g_limits}\r\n\\end{equation}\r\n%\\vspace{5cm}\r\n\r\n\\noindent \\textbf{Pipelines} \\\\\r\n\r\nThe gas flow in pipeline $o$, connecting nodes $i$ and $j$, depends on the quadratic pressures of those nodes. This behavior is given by the Weymouth Equation \\ref{eq:wey}. In particular, the gas flow is allowed to be bidirectional within a physical limit for a maximum daily transportation according to Equation \\ref{fgo_limits}. On the other hand, as the transport cost is always a positive quantity no matter the direction, variables $f^{oij}_{g+}$ and $f^{oij}_{g-}$ make a nonnegative contribution to the objective function. Equation \\ref{eq:fgo} shows the sum of both directional flows to determine the actual flow in the direction \\textit{from} node $i$ \\textit{to} node $j$. These flow variables are constrained by Equations \\ref{eq:fgopos_limits} and \\ref{eq:fgoneg_limits}. As seen, the positive gas flow must be greater than or equal to zero but lower than or equal to the maximum transport capacity multiplied by a threshold factor, $\\delta^{oij}$, which states an extra flow margin. Analogously, the negative gas flow has similar bounds on the negative side.\r\n\r\n\r\n\\begin{equation}\r\n{f}^{oij}_{g} = {{\\kappa}^{oij}} sgn \\left(\\pi^{i}-\\pi^{j}\\right) {\\sqrt{\\left|\\pi^{i}-\\pi^{j}\\right|}}, \\quad \\forall o \\in {\\cal{O}}\r\n\\label{eq:wey}\r\n\\end{equation}\r\n\\begin{equation}\r\n - \\overline{f}^{oij}_{g}  \\le f^{oij}_{g} \\le  \\overline{f}^{oij}_{g},  \\quad \\forall o \\in {\\cal{O}}\r\n\\label{fgo_limits}\r\n\\end{equation}\r\n\\begin{equation}\r\n{f}^{oij}_{g} =  f^{oij}_{g_+} + {f}^{oij}_{g_-}, \\quad \\forall o \\in {\\cal{O}}\r\n\\label{eq:fgo}\r\n\\end{equation}\r\n\\begin{equation}\r\n0 \\le f^{oij}_{g_+} \\le \\delta^{oij} \\cdot \\overline{f}^{oij}_{g}, \\quad \\forall o \\in {\\cal{O}}\r\n\\label{eq:fgopos_limits}\r\n\\end{equation}\r\n\\begin{equation}\r\n- \\delta^{oij} \\cdot \\overline{f}^{oij}_{g} \\le f^{oij}_{g_-} \\le 0, \\quad \\forall o \\in {\\cal{O}}.\r\n\\label{eq:fgoneg_limits}\r\n\\end{equation}\r\n
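\r\nTo make the pipeline model concrete, the following minimal sketch evaluates the Weymouth relation of Equation \\ref{eq:wey} and the signed flow split of Equation \\ref{eq:fgo}; all numerical values are illustrative, not data from \\mpng{}.\r\n\\begin{verbatim}\r\nimport numpy as np\r\n\r\ndef weymouth_flow(kappa, pi_i, pi_j):\r\n    # kappa: Weymouth constant; pi_i, pi_j: quadratic nodal pressures\r\n    return kappa * np.sign(pi_i - pi_j) * np.sqrt(abs(pi_i - pi_j))\r\n\r\nf = weymouth_flow(kappa=5.0, pi_i=3600.0, pi_j=2500.0)  # positive: i -> j\r\nf_pos, f_neg = max(f, 0.0), min(f, 0.0)                  # f = f_pos + f_neg\r\n\\end{verbatim}\r\n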
\r\n\\noindent \\textbf{Compressors}\\\\\r\n\r\nCompressors allow the network to recover from pressure losses along the gas network. This process demands energy. The power consumption of compressor $c$ between the suction node $i$ and the discharge node $j$ is given by Equation \\ref{eq:hp_fc}. It depends on the quadratic pressure ratio of nodes $i$ and $j$, and the gas flow through the compressor. Moreover, the additional gas required by a gas-driven compressor depends on its consumed power as shown in Equation \\ref{eq:g_fc}. As the gas flow through compressors is restricted to flow in one direction, the flow limits of compressor $c$ are fixed according to Equation \\ref{eq:gfa_limits}. Finally, the quadratic pressure at suction and discharge nodes must fall within acceptable margins, as presented in Equation \\ref{eq:presa_rel}, where $\\beta^{cij}$ is the maximum compression ratio.\\\\\r\n\r\n\\begin{equation}\r\n\\psi^{c} = {B^{c}}{f}^{cij}_{g} \\cdot \\left[ {\\left( \\frac{\\pi^{j}}{\\pi^{i}} \\right)}^{\\frac{Z^{c}}{2}} - 1 \\right],   \\quad \\forall c \\in {\\cal{C}}\r\n\\label{eq:hp_fc}\r\n\\end{equation}\r\n\\begin{equation} \r\n{\\phi}^{c} = x + y \\psi^{c} +  z {\\psi^{c}}^{2},  \\quad \\forall c \\in {\\cal{C}}_{G}\r\n\\label{eq:g_fc}\r\n\\end{equation}\r\n\\begin{equation}\r\n0 \\le {f}^{cij}_{g} \\le \\overline{f}^{cij}_{g},  \\quad \\forall c \\in {\\cal{C}}\r\n\\label{eq:gfa_limits}\r\n\\end{equation}\r\n\\begin{equation}\r\n\\begin{aligned}\r\n&\\pi^{i} \\le \\pi^{j} \\le \\beta^{cij} \\pi^{i}\\\\\r\n&\\beta^{cij} \\ge 1 \\\\\r\n\\end{aligned}\r\n,\\quad\\forall i,j \\in {\\cal{N}}, \\quad \\forall c \\in {\\cal{C}}.\r\n\\label{eq:presa_rel}\r\n\\end{equation}\r\n
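\r\nAs a numerical illustration of Equations \\ref{eq:hp_fc} and \\ref{eq:g_fc}, the sketch below evaluates the power consumed by a compressor and the gas consumed by a gas-driven compressor; the parameter values are made-up for demonstration only.\r\n\\begin{verbatim}\r\ndef compressor_power(B, Z, flow, pi_i, pi_j):\r\n    # psi = B * f * [ (pi_j/pi_i)^(Z/2) - 1 ]\r\n    return B * flow * ((pi_j / pi_i) ** (Z / 2.0) - 1.0)\r\n\r\ndef gas_consumed(x, y, z, psi):\r\n    # phi = x + y*psi + z*psi^2\r\n    return x + y * psi + z * psi ** 2\r\n\r\npsi = compressor_power(B=0.08, Z=0.4, flow=120.0, pi_i=2500.0, pi_j=3600.0)\r\nphi = gas_consumed(x=0.0, y=0.002, z=1e-6, psi=psi)\r\n\\end{verbatim}\r\n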
Similarly, the maximum inflow is the difference between the maximum volume of the unit and the currently available stored gas.\r\n\r\n\\begin{equation}\r\nf_{s}^{k} = f_{s_+}^{k}  - f_{s_-}^{k}, \\quad \\forall k \\in \\cal{N}\r\n\\label{eq:fs}\r\n\\end{equation}\r\n\\begin{equation}\r\n\\underline{f}^{k}_{s} \\le f^{k}_{s} \\le \\overline{f}^{k}_{s}, \\quad \\forall k \\in \\cal{N}\r\n\\label{eq:fs_limits}\r\n\\end{equation}\r\n\\begin{equation}\r\n0 \\le f_{s_+}^{k} \\le S^{k}_{0}  - \\underline{S}^{k}, \\quad \\forall k \\in \\cal{N}\r\n\\label{eq:fs+}\r\n\\end{equation}\r\n\\begin{equation}\r\n0 \\le f_{s_-}^{k} \\le \\overline{S}^{k} - S^{k}_{0}, \\quad \\forall k \\in \\cal{N}.\r\n\\label{eq:fs-}\r\n\\end{equation}\r\n\r\n\\subsection{Power network}\r\n\r\nThe power network balance equations of active and reactive power are given by Equation \\ref{eq:power_balance}. The model also takes into consideration the non-supplied power demand and the power consumed by compressors connected to the power system.\r\n \r\n\\begin{equation}\r\n\\begin{array}{l}\r\ng_{p_m}\\left(\\theta^{tm}, V^{tm}, p_{g}^{te}, \\epsilon^{te}, \\psi^{c}\\right) = 0\\\\ \r\ng_{q_m}\\left(\\theta^{tm}, V^{tm}, q_{g}^{te}\\right) = 0\\\\\r\n\\\\\r\n\\quad \\forall m \\in {\\cal{B}} \\quad \\forall t  \\in {\\cal{T}} \\quad \\forall c  \\in {\\cal{C}}_{E}. \r\n\\end{array}\r\n\\label{eq:power_balance}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nThe main variables of the power system are the voltage angles $\\theta^{tm}$ and the voltage magnitudes $V^{tm}$ at each bus $m$ for every period of time $t$, as well as the active generation $p^{te}_{g}$ and reactive generation $q^{te}_{g}$ at each generator $e$ for each time period. The voltage limits are represented by Equation \\ref{eq:v_lims}, and the generation limits are shown in Equation \\ref{eq:pq_lims}.\r\n\r\n\\begin{equation}\r\n\\begin{aligned}\r\n&\\theta^{t_\\text{ref}} = 0\\\\\r\n&\\underline{V}^{tm} \\le V^{tm}  \\le \\overline{V}^{tm}\\\\\r\n\\end{aligned} \r\n\\;,\\quad \\forall m \\in {\\cal{B}} \\quad \\forall t  \\in {\\cal{T}}  \r\n\\label{eq:v_lims}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{aligned}\r\n&\\underline{p}_{g}^{e} \\le p_{g}^{te}  \\le \\overline{p}_{g}^{e}\\\\\r\n&\\underline{q}_{g}^{e} \\le q_{g}^{te}  \\le \\overline{q}_{g}^{e}\\\\\r\n\\end{aligned} \r\n\\;,\\quad \\forall e \\in {\\cal{E}}, \\quad \\forall t  \\in {\\cal{T}}.  \r\n\\label{eq:pq_lims}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nThe power flow limits are bidirectional and are presented in Equation \\ref{eq:sft_lims}, where $\\mathbb{S}_{fl}$ and $\\mathbb{S}_{tl}$ are the complex power injections at side $from$ and $to$ of line $l$, respectively.\r\n\r\n\\begin{equation}\r\n\\begin{aligned}\r\n&|\\mathbb{S}_{fl}\\left(\\theta,V\\right)| \\le \\overline{\\mathbb{S}}_{fl}\\\\\r\n&|\\mathbb{S}_{tl}\\left(\\theta,V\\right)| \\le \\overline{\\mathbb{S}}_{tl}\\\\\r\n\\end{aligned} \r\n\\;,\\quad \\forall l \\in {\\cal{L}}.\r\n\\label{eq:sft_lims}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\n The non-supplied active power demand at bus $m$ cannot exceed the total bus demand, according to Equation \\ref{nsd_limits}.\r\n \r\n\\begin{equation}\r\n0 \\le \\epsilon^{tm} \\le D^{tm}_{e}, \\quad \\forall m \\in {\\cal{B}}, \\quad \\forall t  \\in {\\cal{T}}.  \r\n\\label{nsd_limits}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nThe model also considers the required spinning reserve for each zone $r$ at every time $t$. 
This constraint is given by Equation \\ref{eq:reserve}.\r\n\r\n\\begin{equation}\r\n%\\begin{array}{l}\r\n\\sum_{e \\in {\\cal{Z}}_{r}}{u^{te} \\left( \\overline{p}_{g}^{e} - p_{g}^{te} \\right)} \\ge R^{tr} ,\\quad \\forall r \\in {\\cal{Z}}_{r}, \\quad \\forall t  \\in {\\cal{T}}.\r\n%\\end{array}\r\n\\label{eq:reserve}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\nFinally, the model takes into consideration the maximum available energy during a day for certain generators, especially the energy stored in the dams of hydro-power plants. Equation \\ref{eq:hydro_e} represents such a constraint.\r\n\r\n\\begin{equation}\r\n\\sum_{t \\in {\\cal{T}}}{{\\tau}^{t} p_{g}^{te}} \\le E^{e}, \\quad \\forall e \\in {\\cal{E}}_{H}.\r\n\\label{eq:hydro_e}\r\n\\vspace{0.3cm}\r\n\\end{equation}\r\n\r\n\\newpage", "meta": {"hexsha": "b67a5216efcf82ce7b82b469e511c448f4eda907", "size": 21543, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MPNG_User's_Manual/Chapters/Formulation.tex", "max_stars_repo_name": "MATPOWER/mpng", "max_stars_repo_head_hexsha": "550370c8dda85c3aea0e86a63a907a09955aa3ce", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-10-01T16:05:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-06T10:57:21.000Z", "max_issues_repo_path": "MPNG_User's_Manual/Chapters/Formulation.tex", "max_issues_repo_name": "Segama/mpng", "max_issues_repo_head_hexsha": "0c99baa5ee8c43851a0b12192476fc5d7beeaa37", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MPNG_User's_Manual/Chapters/Formulation.tex", "max_forks_repo_name": "Segama/mpng", "max_forks_repo_head_hexsha": "0c99baa5ee8c43851a0b12192476fc5d7beeaa37", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-01T16:09:07.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-20T04:16:02.000Z", "avg_line_length": 62.625, "max_line_length": 1060, "alphanum_fraction": 0.663046001, "num_tokens": 7097, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706734, "lm_q2_score": 0.6757646075489391, "lm_q1q2_score": 0.558662571076248}}
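To make the pipeline model above concrete, here is a minimal Python sketch (an illustration added here, not part of the MPNG code) that evaluates the Weymouth relation of Equation \ref{eq:wey} and the directional flow split of Equation \ref{eq:fgo}; the constant $\kappa^{oij}$ and the squared nodal pressures are hypothetical values chosen only for demonstration.

\begin{verbatim}
import math

def weymouth_flow(kappa, pi_i, pi_j):
    # f = kappa * sgn(pi_i - pi_j) * sqrt(|pi_i - pi_j|),
    # where pi_i and pi_j are the squared nodal pressures (eq:wey).
    d = pi_i - pi_j
    return kappa * math.copysign(math.sqrt(abs(d)), d)

def split_flow(f):
    # f = f_pos + f_neg (eq:fgo); both halves then enter the
    # transport cost with a positive sign, whatever the direction.
    return max(f, 0.0), min(f, 0.0)

# Hypothetical data: kappa = 0.5, squared pressures 64 and 49.
f = weymouth_flow(0.5, 64.0, 49.0)       # positive: flows from i to j
f_pos, f_neg = split_flow(f)
assert abs(f - (f_pos + f_neg)) < 1e-12
print(round(f, 3), f_pos, f_neg)         # 1.936 ... 0.0
\end{verbatim}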
{"text": "\n\\section{Review of stochastic calculus}\r\n\\subsection{Riemann integration}\r\n\\subsection{The It\\^o integral}\r\n", "meta": {"hexsha": "88f5858d5c4e6822617b716b1296446f06f62ab5", "size": 109, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/appendix/app_1.tex", "max_stars_repo_name": "mxochicale/trying-latex-action", "max_stars_repo_head_hexsha": "5841132df742cf270fc7471598254b0352675f6a", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2016-02-04T10:36:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-27T18:07:03.000Z", "max_issues_repo_path": "thesis/appendix/app_1.tex", "max_issues_repo_name": "mxochicale/trying-latex-action", "max_issues_repo_head_hexsha": "5841132df742cf270fc7471598254b0352675f6a", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2016-02-04T11:21:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-08T08:24:11.000Z", "max_forks_repo_path": "thesis/appendix/app_1.tex", "max_forks_repo_name": "mxochicale/trying-latex-action", "max_forks_repo_head_hexsha": "5841132df742cf270fc7471598254b0352675f6a", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-02-04T19:26:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-03T21:44:09.000Z", "avg_line_length": 21.8, "max_line_length": 40, "alphanum_fraction": 0.7798165138, "num_tokens": 27, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117855317473, "lm_q2_score": 0.6757645879592642, "lm_q1q2_score": 0.5586625491109288}}
{"text": "\n\\documentclass[12pt, a4paper]{article}\n\n\\input{preamble}\n\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{Arnau Abella}\n\\lhead{Randomized Algorithms - UPC}\n\\rfoot{Page \\thepage}\n\n\\title{%\n  \\vspace{-10ex}\n  An Exploratory Assignment on Minimum Spanning Trees\n}\n\\author{%\n  Arnau Abella \\\\\n  \\large{Universitat Polit\\`ecnica de Catalunya}\n}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n%%%%%%%%%%%%%%%%%%%%\n\n\\vspace{5ex}\n\n\\section{Introduction}\\label{sec:1}\n\nLet the weight of a tree be the sum of the squares of its edges lengths. Given a set of points $P$ in the unit square $I \\times I$, let $W(P)$ be the weight of the \\textit{minimum spanning tree (MST)} of $P$, where an edge length is the \\textit{Euclidean distance} between its endpoints. If $P$ consist of the four corners of the square, then $W(P) = 3$. Gilbert and Pollack \\cite{gil68} proved that $W(P)$ is $\\mathcal{O}(1)$ and this was extended to an arbitrary number of dimensions by Bern and Eppstein \\cite{bern93}. While more recent \\textit{divide-and-conquer} approaches have show that $W(P) \\leq 4$, no point set is known with $W(P) > 3$, and hence it has been widely conjectured that $W(P) \\leq 3$. In 2013, it was proven that $W(P) < 3.41$ \\cite{aichholzer2013sum}. Here we show an empirical experiment to check whether $W(P) < 3.41$ holds for any $MST(P)$.\n\n\\section{Experiment}\\label{sec:2}\n\nIn order to check the previous theorem, we generate uniformaly at random points in the unite square $P$ and compute the weight of the $MST$. We do this with an increasing number of points in order to explore the solution space. It is important to note that the exploration is not exhaustive since exploring the whole solution space would require a large amount of computational power and the search does not explicitly aims for the degenerate instances where $W(P) > 3.41$ may happen.\n\nThese random points are generated using a pseudo-random number generator (PRNG) that uses \\textit{Marsaglia's MWC256} (also known as \\textit{MWC8222}) which has a period of $2^{8222}$ and fares well in tests of randomness. It is also extremely fast, between $2-3$ times faster than the \\textit{Mersenne Twister}.\n\n\\newpage\n\nOnce the random points are generated, we build a complete undirected graph $G = (V, E)$ where $|V| = n$ and $|E| = \\binom{n}{2}$ using an \\textit{inductive graph} representation which is efficiently implemented as a \\textit{big-endian patricia trees}.\n\nThen, we search the \\textit{minimum spanning tree} on the inducive graph using \\textit{Prim's algorihm} (see Algorithm \\ref{algo:prim}). 
The inductive implementation has $\\mathcal{O}(|V|^2)$ time complexity, which could be improved to $\\mathcal{O}(|E| + |V| \\log |V|)$ using a \\textit{Fibonacci heap}.\n\n\\begin{algorithm}[H]\n  \\SetAlgoLined\n  \\DontPrintSemicolon\n  \\SetKwInput{Input}{Input}\n  \\SetKwInput{Output}{Output}\n  \\Input{An undirected weighted graph $G = (V,E)$}\n  \\Output{The minimum spanning tree of the input graph $G$}\n  \\ForEach{$v \\in V$}{\n    $key(v) = \\infty$\\;\n    $parent(v) = NIL$\\;\n  }\n  $key(r) = 0$ \\tcp*[r]{Pick \\textit{u.a.r.} the initial vertex $r \\in V$}\n  $Q = V$\\;\n  \\While{$Q \\neq \\emptyset$}{\n    $u = EXTRACT-MIN(Q)$\\;\n    \\ForEach{$v \\in ADJ(u)$}{\n      \\If{$v \\in Q$ and $w(u,v) < key(v)$}{\n        $key(v) = w(u,v)$\\;\n        $parent(v) = u$\\;\n      }\n    }\n  }\n  \\caption{Minimum Spanning Tree Prim's Algorithm}\n  \\label{algo:prim}\n\\end{algorithm}\n\nWe emphasise that there are more efficient algorithms such as Karger, Klein and Tarjan's linear-time randomized algorithm \\cite{karger1995randomized} or Bernard Chazelle's almost-linear algorithm \\cite{chazelle2000minimum} but we decided to use Prim's algorithm because it is the simplest to implement and the performance was good enough for this experiment (see Table \\ref{table:1}).\n\n\\begin{table}[H]\n  \\center\n  \\begin{tabular}{ccccc}\n    $|V|$  & Time       & $R^2$   & $\\mu$      & $\\sigma$    \\\\ \\hline\n    $128$  & $8.1$   ms & $0.996$ & $8.358$ ms & $327.8$ $\\mu$s \\\\\n    $512 $ & $406.0$ ms & $1.0  $ & $391.9$ ms & $10.33$ ms    \\\\\n    $1024$ & $153.6$ ms & $1.0  $ & $150.3$ ms & $3.178$ ms    \\\\\n    $2048$ & $943.1$ ms & $1.0  $ & $934.4$ ms & $14.24$ ms\n  \\end{tabular}\n\\caption{Prim's Algorithm benchmark.}\n\\label{table:1}\n\\end{table}\n\nOne optimization applied to the search is the removal of edges that are unlikely to be used. The minimum spanning tree is unlikely to use any edge of weight greater than $k(n)$ for some function $k(n)$. We estimated the function $k(n)$ by calculating the $MST$ on random graphs of increasing size in the unit square (see Table \\ref{table:2}).\n\n\\begin{table}[H]\n  \\center\n  \\begin{tabular}{cc}\n    $|V|$  & $W_{max}$     \\\\ \\hline\n    $16$   & $0.220$ \\\\\n    $32$   & $0.150$ \\\\\n    $64$   & $0.060$ \\\\\n    $128$  & $0.040$ \\\\\n    $256$  & $0.020$ \\\\\n    $512$  & $0.013$ \\\\\n    $1024$ & $0.008$ \\\\\n    $2048$ & $0.004$ \\\\\n  \\end{tabular}\n  \\caption{Estimation of $k(n)$.}\n\\label{table:2}\n\\end{table}\n\nThe code of the experiment is publicly available at \\url{https://github.com/monadplus/mst-experiment}.\n\n\\section{Results}\\label{sec:3}\n\nThe experiment was run on a ThinkPad T495 (AMD Ryzen 7 Pro 3700U, $13934$ MiB of RAM, Linux 5.8.10-arch1-1). The experiment was implemented purely in Haskell on the Glasgow Haskell Compiler (GHC) 8.8.4. 
The experiment was run with up to $8196$ vertices, at which point the execution time increased to $30'$ of CPU time and we could not generate reliable results.\n\n\\begin{table}[H]\n  \\center\n  \\begin{tabular}{ccc}\n    $|V|$  & $\\mu_W$    & $\\sigma_W$ \\\\ \\hline\n    $16$   & $1.1067605$ & $0.21767415$ \\\\\n    $32$   & $1.1015033$ & $0.20797518$ \\\\\n    $64$   & $1.2079728$ & $0.12993012$ \\\\\n    $128$  & $1.1780577$ & $0.10956229$ \\\\\n    $256$  & $1.2000419$ & $0.08088318$ \\\\\n    $512$  & $1.1488761$ & $0.02073750$ \\\\\n    $1024$ & $1.2366604$ & $0.05549732$ \\\\\n    $2048$ & $1.2135003$ & $0.00306511$ \\\\\n    $4096$ & $1.1810393$ & $0.01102620$\n  \\end{tabular}\n\\caption{Experiment results.}\n\\label{table:3}\n\\end{table}\n\nThe results show that $W(P)$ is $\\mathcal{O}(1)$ (see Table \\ref{table:3} and Figure \\ref{fig:1}). Note that the deviation decreases as the number of vertices increases. As a conclusion, we believe that there is no evidence to refute the claim $W(P) < 3.41$.\n\nFor future work, the search could be tuned to better explore the degenerate instances, which would lead to a stronger position from which to accept or refute the theorem.\n\n\\begin{figure}[H]\n  \\center\n  \\includegraphics[scale=0.4]{plot}\n  \\caption{Experiment results.}\n  \\label{fig:1}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%\n\n\\bibliographystyle{unsrt}\n\\bibliography{refs}\n\n\\end{document}\n", "meta": {"hexsha": "a3eb207b839f48d2ba9589459df74e6774126322", "size": 6647, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/report.tex", "max_stars_repo_name": "monadplus/mst-experiment", "max_stars_repo_head_hexsha": "a9498227304a743b1174a7f6f248d98ec2755d59", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/report.tex", "max_issues_repo_name": "monadplus/mst-experiment", "max_issues_repo_head_hexsha": "a9498227304a743b1174a7f6f248d98ec2755d59", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/report.tex", "max_forks_repo_name": "monadplus/mst-experiment", "max_forks_repo_head_hexsha": "a9498227304a743b1174a7f6f248d98ec2755d59", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.2176870748, "max_line_length": 868, "alphanum_fraction": 0.6676696254, "num_tokens": 2177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.709019146082187, "lm_q2_score": 0.7879312056025699, "lm_q1q2_score": 0.5586583105678422}}
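For readers who want to reproduce the experiment without the Haskell setup, the following is a minimal Python sketch of Algorithm \ref{algo:prim} on the complete Euclidean graph. It uses a lazy binary heap instead of the inductive-graph representation from the report, and freshly generated random points that only mimic the unit-square setup, so the printed number is illustrative rather than one of the reported results.

\begin{verbatim}
import heapq, random

def mst_weight(points):
    # Prim's algorithm; edge weights are squared Euclidean lengths,
    # so the returned total is W(P) as defined in the introduction.
    # (Squaring is monotone, so the MST edge set is unchanged.)
    n = len(points)
    in_tree = [False] * n
    heap = [(0.0, 0)]          # (key, vertex), starting from vertex 0
    total = 0.0
    while heap:
        key, u = heapq.heappop(heap)
        if in_tree[u]:
            continue           # stale entry from the lazy heap
        in_tree[u] = True
        total += key
        ux, uy = points[u]
        for v, (vx, vy) in enumerate(points):
            if not in_tree[v]:
                d2 = (ux - vx) ** 2 + (uy - vy) ** 2
                heapq.heappush(heap, (d2, v))
    return total

random.seed(0)
P = [(random.random(), random.random()) for _ in range(1024)]
print(mst_weight(P))           # typically around 1.2, well below 3.41
\end{verbatim}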
{"text": "\\documentclass[11pt]{article}\n\\usepackage{natbib,unatbib}\n\\usepackage[nohide,twocolumn]{ulecnot}\n\\usepackage{usynsem}\n\\usepackage{uprog}\n\\usepackage{utheorem}\n\\usepackage{linguex}\n\t\\renewcommand{\\refdash}{}\n\n\\lhead{COGS 543 -- Computational Semantics}\n\\chead{$\\lambda$-Calculus}\n\\rhead{Updated \\it \\today}\n\\lfoot{Umut \\\"Ozge}\n\\cfoot{}\n\\rfoot{Page \\thepage/\\pageref{LastPage}}\n\\setlength{\\headheight}{13.6pt}\n\n\\usepackage{tikz-qtree}\n\n\\begin{document}\n\n\n\\section{Introduction} Here are two functions:\n\n\\begin{align}\n\\label{f} f(x) & = x + 2\\\\ \n\\label{g}g(x) & = x^2\n\\end{align}\n\nThe composition of these functions is expressed as:\n\n\\begin{align}\nf\\circ g = f(g(x))\\label{fog}\n\\end{align}\n\n\n\\xref{fog} is of no use without looking up the definitions of the names $f$ and $g$ in \\xref{f} and \\xref{g}. Furthermore, it is simply impossible to define a function that takes two functions as arguments and gives a third function which is the composition of the first two. Actually, it is impossible to define any higher order function within the language of our expressions -- i.e.\\ numbers, arithmetic operators and parentheses.\n\nGiven a formal language -- the language of arithmetic in our case -- lambda calculus adds to the language the lambda operator `$\\lambda$' and the dot `.'. This additions allow one to write anonymous functions who do not bear names like $f$ or $g$, by which you look up definitions. Instead, in lambda calculus, the name and the function definition are one and the same entities. Here is how functions in \\xref{f} and \\xref{g} are rendered in lambda calculus:\n\n\\begin{align}\n\\label{lf} & (\\lambda x.x + 2)\\\\ \n\\label{lg} & (\\lambda x.x^2)\n\\end{align}\n\nApplication of a function to its argument is depicted by concatenating the function and its argument and enclosing both in a parentheses. To compute the result of application, you substitute the argument to its place as in standard functions and get rid of the lambda, the variable and the dot.\n\n\\begin{align}\n\\label{af} & ((\\lambda x.x + 2) 8)\\\\ \n & = 8 +2  = 10\\nonumber\n\\end{align}\n\n\\begin{align}\n\\label{ag} & ((\\lambda x.x^2) 8)\\\\\n& = 8^2 = 64\\nonumber\n\\end{align}\n\nSo far everything appears as a little more cryptic way of defining functions and computing the result of applying a function to its argument. The real power of lambda calculus becomes visible when you apply functions to other functions, in exactly the same way you apply them to non-function arguments. We will illustrate this over function composition. Things will get a little complicated at this point; either go very slowly, or have a first pass, read on and come back. In either case you need to use pencil and paper to understand the transitions between forms.  Now let us have the following higher order function that composes two functions -- the lambda calculus way of having $f_1\\circ f_2$:\n\n\\begin{align}\n(\\lambda f_1.(\\lambda f_2.(\\lambda z.(f_1(f_2z)))))\n\\end{align}\n\nFirst apply this to $(\\lambda x. x + 2)$:\n\n\\begin{align}\n& ((\\lambda f_1.(\\lambda f_2.(\\lambda z.(f_1(f_2z))))) (\\lambda x. x +2))\\\\\n=& (\\lambda f_2.(\\lambda z.((\\lambda x. x +2)(f_2z)))) \\nonumber\n\\end{align}\n\nNow apply this result to the second function $(\\lambda y. y^2)$:\n\n\\begin{align}\n&  ((\\lambda f_2.(\\lambda z.((\\lambda x. x +2)(f_2z))))(\\lambda y. y^2)) \\\\\n=& (\\lambda z.((\\lambda x. x+2)((\\lambda y. 
y^2)z))) \\nonumber\n\\end{align}\n\nWe obtained the composition of the functions defined in \\xref{f} and \\xref{g}. To see that this is indeed the case, apply this composite to an argument, say 6:\n\n\\begin{align}\n\\label{faint} & ((\\lambda z.((\\lambda x. x+2)((\\lambda y. y^2)z))) 6)\\\\\n&=  ((\\lambda x. x+2)((\\lambda y. y^2)6))\\nonumber \n\\end{align}\n\nIf you look carefully, you will notice two more patterns of the form $(f a)$ -- a function $f$ applied to argument $a$ -- in the result we obtained in \\xref{faint}. The one that is easier to see is  \n\\begin{align}\n\\label{occ1} ((\\lambda y. y^2)6)\n\\end{align}\n\nIf you perform this application, you will obtain 36 as the result. Replacing the occurrence of \\xref{occ1} in \\xref{faint} with 36 will give,\n\n\\begin{align}\n& ((\\lambda x. x+2)36)\\\\\n& = 36+2 = 38 \\nonumber\n\\end{align}\n\n\nWe now turn to a detailed and formal characterization of the concepts we encountered.\n\n\n\\section{The set of lambda terms}\n\nFrom here on, we take a rather abstract stance towards lambda calculus.  The discussion will apply to any domain of use, not just arithmetic. Try to perceive the lambda calculus as a system of manipulating expressions constructed from a set of names, the lambda, the dot, and left and right parentheses, which we call \\uterm{lambda terms}. Also refrain from assuming that $x, y, z$ stand for variables, $f, g, h$ for functions, $a, b, c$  for constants, etc. Names are just names; although our use may be suggestive at times, there is no necessary association between a name and the type of object it stands for.\n\n\\begin{udefinition}[Lambda terms]\\label{dfexp}\nLet $A=\\{ a,b,\\ldots,z\\}$ be the set of \\uterm{names}; we extend the set by subscripting, like $x_1,x_2,\\ldots$, as needed, so that we have an infinite set of names.\n\n\\begin{itemize}\n\\item[i.] all names are lambda terms; \n\\item[ii.] if $\\omega_1$ and $\\omega_2$ are lambda terms, so is $(\\omega_1\\omega_2)$; \\hfill (concatenation/application)\n\\item[iii.] if $\\omega$ is a lambda term and $\\alpha$ is a name, then\n$(\\lambda\\alpha.\\omega)$ is a lambda term; \\hfill (abstraction) \n\\item[iv.] nothing else is a lambda term.\n\\end{itemize}\n\\qed\n\\end{udefinition} \n\nWe usually shorten ``lambda term'' to ``term''. Given a name $\\alpha$, we call the expression `$\\lambda\\alpha$' a ``lambda binder''.\n\n\\begin{uexample}\\label{fullex}\nSome example terms:\n\\renewcommand{\\arraycolsep}{6pt}\n\\renewcommand{\\arraystretch}{2}\n\n$$\n\\begin{array}{cccc}\nx & (xy) & (x(yz))  & ((xy)z) \\\\\n(\\lambda x.x) & (\\lambda y.(\\lambda x.x)) & (\\lambda z.(x(\\lambda y.(yz)))) & (x(\\lambda z.(\\lambda y.(yz))))\\\\\n(x(\\lambda x.x)) & ((\\lambda y.(\\lambda x.x))(\\lambda x.x)) & (((\\lambda y.(\\lambda x.x))(\\lambda x.x))(xy)) & ((x(yz))((xy)z))\n\\end{array}\n$$\n\n\\qed\n\\end{uexample}\n\nOnce again note that in lambda calculus we write $(f x)$ (simplified to $fx$) rather than the usual $f(x)$\nto represent the application of function $f$ to the argument $x$.\n\n\\section{Notational conventions}\n\\label{scnotcon}\n\n\\begin{itemize}\n\n\\item[A.] Omit outer parentheses.\n\n\\item[B.] Concatenation of terms associates to the left:    \n\n\\begin{align*}\n\\omega_1\\omega_2\\omega_3\\ldots\\omega_n & \\equiv (\\ldots((\\omega_1\\omega_2)\\omega_3)\\ldots\\omega_n)\\\\\nfxyz & \\equiv   (((fx)y)z)\\\\\nfx(yz) & \\equiv   ((fx)(yz))\\\\\nf(xyz) & \\equiv   (f((xy)z))\\\\\nf(xy)z & \\equiv   ((f(xy))z) \n\\end{align*}\n\n\\item[C.] 
The scope of the dot extends to the right until the first right parenthesis whose matching pair falls to the left of the dot. In restoring parentheses, put a left parenthesis just before the first lambda sign you encounter to the left of the dot, then start to scan rightwards, and put a right parenthesis at the point where you encounter either a right parenthesis left unmatched during your rightward scan or the end of the term.\n\n\\begin{align*}\n(\\lambda x.xx)y & \\equiv ((\\lambda x.xx)y) \\not\\equiv \\lambda x.xxy\\\\\n\\lambda x.xyz & \\equiv (\\lambda x.xyz) \\not\\equiv (\\lambda x. x)yz\n\\end{align*}\n\n\\item[D.] Stacked lambda binders associate to right, and dots between lambda binders are deleted:\n\\begin{align*}\n\\lambda \\alpha_1\\lambda \\alpha_2\\lambda \\alpha_3\\ldots\\lambda \\alpha_n.\\omega &\\equiv (\\lambda\\alpha_1.(\\lambda\\alpha_2.(\\lambda\\alpha_3\\ldots(\\lambda\\alpha_n.\\omega)\\ldots)))\\\\\n\\lambda f \\lambda x.f(fx)  &\\equiv  (\\lambda f.(\\lambda x.(f(fx))))\\\\\n\\lambda f.(\\lambda x\\lambda y.xyy)zf & \\equiv (\\lambda f.(((\\lambda x.(\\lambda y.((xy)y)))z)f))\n\\end{align*}\n\n\\end{itemize}\n\n\\begin{uexample}\nThe terms in Example~\\ref{fullex} in simplified form:\n\n\\begin{align*}\n(xy) &\\equiv xy\\\\\n(x(yz)) &\\equiv x(yz)\\\\\n((xy)z) &\\equiv xyz\\\\\n(\\lambda x.x) &\\equiv \\lambda x.x\\\\\n(\\lambda y.(\\lambda x.x)) &\\equiv \\lambda y.\\lambda x.x\\\\\n(\\lambda z.(x(\\lambda y.(yz)))) &\\equiv \\lambda z.x(\\lambda y.yz)\\\\\n(x(\\lambda z.(\\lambda y.(yz)))) &\\equiv x(\\lambda z\\lambda y.yz)\\\\\n(x(\\lambda x.x)) &\\equiv x(\\lambda x.x)\\\\\n((\\lambda y.(\\lambda x.x))(\\lambda x.x)) &\\equiv (\\lambda y\\lambda x.x)\\lambda x.x\\\\\n(((\\lambda y.(\\lambda x.x))(\\lambda x.x))(xy)) &\\equiv (\\lambda y\\lambda x.x)(\\lambda x.x)(xy)\\\\\n((x(yz))((xy)z)) &\\equiv x(yz)(xyz)\n\\end{align*}\n\\qed\n\\end{uexample}\n\nHere are some exercises from \\cite{selinger13} to train your hand on notational conventions:\n\n\\begin{uexercise}\nSimplify the following terms by removing parentheses and dots using the conventions above:\n\\begin{itemize}\n\\item[i.] $(\\lambda x.(\\lambda y.(\\lambda z.((xz)(yz)))))$\n\\item[ii.] $(((ab)(cd))((ef)(gh)))$\n\\item[iii.] $(\\lambda x.((\\lambda y.(yx))(\\lambda v.v)z)u)(\\lambda w.w)$\n\\end{itemize}\n\\qed\n\\end{uexercise}\n\n\\begin{uexercise}\nRestore the parentheses to the following terms:\n\\begin{itemize}\n\\item[i.] $xxxx$  \n\\item[ii.] $\\lambda x.x\\lambda y.y$ \n\\item[iii.] $\\lambda x.(x\\lambda y.yxx)x$ \n\\end{itemize}\n\\qed\n\\end{uexercise}\n\n\\section{Bondage and freedom}\n\n\\subsection{Expressions as binary trees}\n\nFirst observe that every compound term (= a term more complicated than a bare name) corresponds to a binary tree; this is obvious if you take the steps (ii) and (iii) in Definition~\\xref{dfexp} as an instruction to form a binary branching tree. Here are some examples:\n\n\\smallskip\n\\ex.\n\\parbox[t]{0.2\\textwidth}{\na. \\Tree [.$(\\lambda z.(x(\\lambda y.(yz))))$ \n\t\t\t[.$\\lambda z$ ]\n\t\t\t[.$(x(\\lambda y.(yz)))$ \n\t\t\t\t[.$x$ ]\t\n\t\t\t\t[.$(\\lambda y.(yz))$ \n\t\t\t\t\t[.$\\lambda y$ ]\t\t\t\t\t\n\t\t\t\t\t[.$(yz)$ \n\t\t\t\t\t\t[.$y$ ]\n\t\t\t\t\t\t[.$z$ ]\n\t\t\t\t\t]\t\t\t\t\t\n\t\t\t\t]\n\t\t\t]\n]\n}\n\\parbox[t]{0.22\\textwidth}{\nb. 
\\Tree [.$(\\lambda f.(\\lambda g.(\\lambda x.((fx)(gx)))))$ \n\t\t[.$\\lambda f$ ]\n\t\t[.$(\\lambda g.(\\lambda x.((fx)(gx))))$ \n\t\t\t[.$\\lambda g$ ]\n\t\t\t[.$(\\lambda x.((fx)(gx)))$ \n\t\t\t\t[.$\\lambda x$ ]\t\n\t\t\t\t[.$((fx)(gx))$\n\t\t\t\t\t[.$(fx)$ \n\t\t\t\t\t\t[.$f$ ]\n\t\t\t\t\t\t[.$x$ ]\n\t\t\t\t\t]\n\t\t\t\t\t[.$(gx)$ \n\t\t\t\t\t\t[.$g$ ]\n\t\t\t\t\t\t[.$x$ ]\n\t\t\t\t\t]\n\t\t\t\t]\n\t\t\t]\n\t\t]\n]\n}\n\n\nNow the same examples simplified by the conventions of Section~\\ref{scnotcon}:\n\n\n\\ex.\\label{extree}\n\\parbox[t]{0.2\\textwidth}{\na. \\Tree [.$\\lambda z.x(\\lambda y.yz)$ \n\t\t\t[.$\\lambda z$ ]\n\t\t\t[.$x(\\lambda y.yz)$ \n\t\t\t\t[.$x$ ]\t\n\t\t\t\t[.$(\\lambda y.yz)$ \n\t\t\t\t\t\t[.$\\lambda y$ ]\t\t\t\t\t\n\t\t\t\t\t[.$yz$ \n\t\t\t\t\t\t[.$y$ ]\n\t\t\t\t\t\t[.$z$ ]\n\t\t\t\t\t]\t\t\t\t\t\n\t\t\t\t]\n\t\t\t]\n]\n}\n\\parbox[t]{0.23\\textwidth}{\nb. \\Tree [.$\\lambda f\\lambda g\\lambda x.fx(gx)$ \n\t\t[.$\\lambda f$ ]\n\t\t[.$\\lambda g\\lambda x.fx(gx)$ \n\t\t\t[.$\\lambda g$ ]\n\t\t\t[.$\\lambda x.fx(gx)$ \n\t\t\t\t[.$\\lambda x$ ]\t\n\t\t\t\t[.$fx(gx)$\n\t\t\t\t\t[.$fx$ \n\t\t\t\t\t\t[.$f$ ]\n\t\t\t\t\t\t[.$x$ ]\n\t\t\t\t\t]\n\t\t\t\t\t[.$gx$ \n\t\t\t\t\t\t[.$g$ ]\n\t\t\t\t\t\t[.$x$ ]\n\t\t\t\t\t]\n\t\t\t\t]\n\t\t\t]\n\t\t]\n]\n}\n\n\nWe need some auxiliary definitions before we define bondage and freedom.  \n\n\\medskip\n\\noindent {\\bf Occurrence}\\\\\nThe notions of bondage and freedom are defined for \\uterm{occurrences} of names and lambda binders. Each leaf\\footnote{Leaves are nodes without children.} in the construction tree of a lambda term is either an occurrence of a name or a lambda binder. Names also occur as part of lambda binders, but in the technical sense we define here, an occurrence of a name does not include such cases. For instance, the name $x$ has 1 occurrence in \\xxref{extree}{a} and 2 occurrences in \\xxref{extree}{b}.\n\n\\medskip\n\\noindent {\\bf Step}\\\\\nIn traversing a tree, going from one node to an adjacent\\footnote{Two nodes are adjacent if there exists a line connecting them without any nodes in between.} node is one \\uterm{step}. Steps can be taken down or up.\n\n\\medskip\n\\noindent {\\bf Command}\\\\\nA node $\\alpha$ in a tree $\\Gamma$ commands a node $\\beta$ in the same tree iff you can go one step up from $\\alpha$ and then go down one or more steps to reach $\\beta$.\n\n\n\\medskip\n\\noindent {\\bf Bondage}\\\\\nGiven a term $\\Gamma$ and a name $\\alpha$, an occurrence of $\\alpha$ is \\uterm{bound} in $\\Gamma$ by an occurrence of a lambda binder  $\\lambda\\alpha$ iff the occurrence of $\\lambda\\alpha$ commands the occurrence of $\\alpha$ and there is no occurrence of $\\lambda\\alpha$ that commands $\\alpha$ and is closer to it.\n\nGiven a term $\\Gamma$ and a name $\\alpha$, an occurrence of $\\alpha$ is \\uterm{bound} in $\\Gamma$ iff there exists at least one occurrence of a lambda binder $\\lambda\\alpha$ in $\\Gamma$ binding that occurrence of $\\alpha$  in $\\Gamma$.\n\nNote that bondage is defined between occurrences of names and lambda binders; therefore, it does not make sense to ask whether $\\alpha$ is bound in a term $\\Gamma$ or not; one has to specify which occurrence of $\\alpha$ s/he is talking about.\n\n\\medskip\n\\noindent {\\bf Freedom}\\\\\nGiven an expression $\\Gamma$ and a name $\\alpha$, an occurrence of $\\alpha$ is \\uterm{free} in $\\Gamma$ if it is not bound by any occurrence of a lambda binder in $\\Gamma$. 
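Since bondage and freedom are defined by induction on the construction tree, they are easy to compute mechanically. Below is a small Python sketch, added for illustration and mirroring the function \sysm{FN} sketched in the commented-out definition that follows; the tuple encoding ('var', a), ('app', t1, t2), ('lam', a, t) is a hypothetical choice, not part of these notes.

\begin{verbatim}
def free_names(term):
    # By induction on the construction of the term: a bare name is
    # free in itself; application unions the two sides; abstraction
    # removes the name its binder binds.
    tag = term[0]
    if tag == 'var':
        return {term[1]}
    if tag == 'app':
        return free_names(term[1]) | free_names(term[2])
    if tag == 'lam':
        return free_names(term[2]) - {term[1]}
    raise ValueError('not a lambda term')

# lambda z . x (lambda y . y z): only x occurs free (tree (a) above).
t = ('lam', 'z',
     ('app', ('var', 'x'),
             ('lam', 'y', ('app', ('var', 'y'), ('var', 'z')))))
print(free_names(t))   # {'x'}
\end{verbatim}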
\n\n\n% It will be useful to define a function \\sysm{FN}, which gives the set of free names in a term:\n% \n% \\begin{udefinition}[Set of free names.]\n% \n% The function \\sysm{FN} that maps the set of lambda terms to the power set of the set of names: \n% \\begin{itemize}\n% \\item[i.] \\sysm{FN (\\omega) = \\{\\omega \\}  }, if $\\omega$ is a name;\n% \\item[ii.] \\sysm{FN ((\\omega_1\\omega_2)) = FN(\\omega_1) \\cup FN(\\omega_2)};\n% \\item[iii.] \\sysm{FN (\\lambda \\alpha.\\omega) = FN(\\omega) - \\{ \\alpha\\}}.\n% \\end{itemize}\n% \\qed\n% \\end{udefinition}\n% \n\n\\begin{uexample}\n\\begin{itemize}\n\\item[]\n\\item In \\xxref{extree}{a}, there is a single occurrence of $x$ and it is free in \\xxref{extree}{a}.\n\\item In \\xxref{extree}{b}, there is no free occurrence of a name; all the occurrences of all the names are bound by some occurrence of a lambda binder.\n\\item Our definitions allow for expressions like $\\lambda x\\lambda k.c y$, where all the occurrences of all names are free and none of the lambda binders binds any name. \n\\item In $\\lambda x\\lambda y.z (y x) y $ both occurrences of $y$ are bound by $\\lambda y$, whereas in $(\\lambda x\\lambda y.z (y x)) y $, the second $y$ (counting from left to right) is free.\n\\end{itemize}\n\\qed\n\\end{uexample}\n\n\n\n\\section{$\\alpha$-equivalence}\n\nThe functions $\\lambda x.x$ and $\\lambda y.y$ are one and the same function; they just do the same thing using different names. Similarly for $\\lambda f\\lambda g.fa(ga)$ and $\\lambda g\\lambda f.ga(fa)$. We express this relation of being alphabetical variants of the same term as \\uterm{\\sysm{\\mathbf{\\alpha}}-equivalence}. Symbolically,\n\n$$\n\\lambda f\\lambda g.fa(ga) \\equiv_\\alpha \\lambda g\\lambda f.ga(fa)\n$$\n\nAgain think of $\\alpha$-equivalence over trees:\n\n\\parbox[t]{0.2\\textwidth}{\n\\Tree [.$\\lambda f\\lambda g.fa(ga)$ \n\t\t[.$\\lambda f$ ]\n\t\t[.$\\lambda g.fa(ga)$ \n\t\t\t[.$\\lambda g$ ]\t\n\t\t\t[.$fa(ga)$ \n\t\t\t\t\t[.$fa$ $f$ $a$ ]\t\n\t\t\t\t\t[.$ga$ $g$ $a$ ]\n\t\t\t]\n\t\t]\n]\n}\n\\parbox[t]{0.2\\textwidth}{\n\\Tree [.$\\lambda g\\lambda f.ga(fa)$ \n\t\t[.$\\lambda g$ ]\n\t\t[.$\\lambda f.ga(fa)$ \n\t\t\t[.$\\lambda f$ ]\t\n\t\t\t[.$ga(fa)$ \n\t\t\t\t\t[.$ga$ $g$ $a$ ]\t\n\t\t\t\t\t[.$fa$ $f$ $a$ ]\n\t\t\t]\n\t\t]\n]\n}\n\nThe terms have structurally identical trees with identical binding relations. 
The only difference between the two concerns the bound names.\nHere is a formal definition:\n\n\\begin{udefinition}[$\\alpha$-equivalence]\nLet $\\Lambda_1$ and $\\Lambda_2$ be lambda terms with construction trees $\\tau_1$ and $\\tau_2$;\\\\\nand let $\\nu_1$ and $\\nu_2$ be the sets of nodes of $\\tau_1$ and $\\tau_2$, respectively;\\\\\n$\\Lambda_1$ and $\\Lambda_2$ are $\\alpha$-equivalent iff\nthere exists a one-to-one correspondence $c$ between $\\nu_1$ and $\\nu_2$ such that,\n\\begin{itemize}\n\\item for all nodes $i,j$ in $\\nu_1$, $i$ immediately dominates\\footnote{A node $\\nu_1$ immediately dominates node $\\nu_2$ iff it takes only one step going down from $\\nu_1$ to $\\nu_2$} $j$ iff $c(i)$ immediately dominates $c(j)$; \n\\item for all nodes $i$ in $\\nu_1$, $i$ is some name $\\alpha$, free in $\\tau_1$, iff $c(i)$ is some name $\\alpha$, free in $\\tau_2$;\n\\item for all nodes $i,j$ in $\\nu_1$, $i$ is a lambda binder binding $j$ iff $c(i)$ is a lambda binder binding $c(j)$.\n\\end{itemize}\n\\qed\n\\end{udefinition}\n\n\nThe above definition gives a declarative specification of what it means for two terms to be $\\alpha$-equivalent. It does not tell us how to go about obtaining an $\\alpha$-equivalent term from a given one. The essence of $\\alpha$-equivalence is to obtain a term that has the same configuration and binding relations as the original, but possibly differing in some bound names. This looks like a renaming operation where you change the name on a lambda binder and all the occurrences of that name bound by the binder, crucially without touching any free names. Care needs to be taken, however, in renaming bound variables. For instance, take the following term:\n\n\\begin{align}\\label{alphaorg}\n\\lambda x.z(\\lambda y.yx)\n\\end{align}\n\nYou want to replace the name $x$ with $z$. If you replace all the occurrences, you would obtain,\n\n\\begin{align}\\label{alphawrong}\n\\lambda z.z(\\lambda y.yz)\n\\end{align}\n\nwhere you made the first $z$, which was free in \\xref{alphaorg}, accidentally bound in \\xref{alphawrong}. In other words, you failed to obtain an $\\alpha$-equivalent term:\n\n$$\n\\lambda x.z(\\lambda y.yx)\\not\\equiv_\\alpha \\lambda z.z(\\lambda y.yz)\n$$\n\n\nLikewise, renaming $x$ to $y$ would again yield a non-$\\alpha$-equivalent term:\n\n$$\n\\lambda x.z(\\lambda y.yx)\\not\\equiv_\\alpha \\lambda y.z(\\lambda y.yy)\n$$\n\n\nAt this point one way to avoid such accidental bondages is to always rename to a name that is not in the term at all. But this move misses the general concept of $\\alpha$-equivalence. Observe that it is legitimate to rename $y$ to $z$ in \\xref{alphaorg}:\n\n$$\n\\lambda x.z(\\lambda y.yx)\\equiv_\\alpha \\lambda x.z(\\lambda z.zx)\n$$\n\nWe take another strategy for avoiding accidental bondage. We again make use of trees and define renaming as a single-step procedure that targets a specific occurrence of a lambda binder. 
Here is a rough algorithm:\n\n\\begin{ualgorithm}[Single shot rename]\n\\label{renamealg}\n\\begin{algorithmic}\n\\item[]\n\\item[]\n\\Function{Rename}{\\sysm{term, binder, name}} \n\\State \\sysm{nodes \\gets } nodes bound by \\sysm{binder} \n\\For{\\sysm{node} in \\sysm{nodes} } \n\\State replace the name on \\sysm{node} with \\sysm{name}\n\\State check for new binders of \\sysm{node}\n \t \t\\If{any new binder dominated by \\sysm{binder} } \n \t \t\t\\State ERROR\n \t \t\\EndIf\t\n\\EndFor\n\\State replace the name on \\sysm{binder} with \\sysm{name}\n\\If{new bound nodes not in \\sysm{nodes} are created }\n\\State ERROR!\n\\EndIf\n\\EndFunction\n\\end{algorithmic}\n\\qed\n\\end{ualgorithm}\n\nTwo terms $\\Lambda_1$ and $\\Lambda_2$ are $\\alpha$-equivalent iff one can be obtained from the other by zero or more applications of Algorithm~\\xref{renamealg}.  \n\n\\section{Substitution}\n\nWe will see a process quite similar to renaming. In renaming you alphabetically change a lambda binder and the names it binds. You do this carefully so that you do not introduce binding relations that were not present among the nodes of the original tree. The process of \\uterm{substitution} again operates on a lambda term, but replaces all \\emph{free} occurrences of a name with a \\emph{lambda term}; therefore, substitution may alter the tree structure of the lambda term it operates on. Again, care is needed not to accidentally bind the free names in the substituted term. The substitution of a term $\\nu$ for $\\alpha$ in a term $\\Lambda$ is denoted as:\n$$\n\\subs{\\Lambda}{\\nu}{\\alpha}\n$$\n\nand defined via induction as:\n\n\\begin{udefinition}[Substitution]\n\n\\begin{itemize}\n\\item[]\n\\item[i.] $\\subs{\\alpha}{\\nu}{\\alpha}= \\nu$;\n\\item[ii.] $\\subs{\\gamma}{\\nu}{\\alpha}= \\gamma$, if $\\gamma$ is a name and $\\gamma\\neq\\alpha$;\n\\item[iii.] $\\subs{(\\omega_1\\omega_2)}{\\nu}{\\alpha}= (\\subs{\\omega_1}{\\nu}{\\alpha}\\subs{\\omega_2}{\\nu}{\\alpha})$;\n\\item[iv.] $\\subs{(\\lambda \\alpha.\\omega)}{\\nu}{\\alpha}= (\\lambda \\alpha.\\omega)$;\n\\item[v.] $\\subs{(\\lambda \\gamma.\\omega)}{\\nu}{\\alpha} = (\\lambda \\gamma.\\subs{\\omega}{\\nu}{\\alpha})$, where $\\gamma\\neq\\alpha$ and $\\gamma$ is not free in $\\nu$.\n\n\\end{itemize}\n\n\\qed\n\\end{udefinition}\n\nObserve that substitution can get blocked in clause (v). For instance, take\n\n\\begin{align}\n& \\subs{(k(\\lambda z.zx))}{(\\lambda y.zy)}{x}\\\\\n \\equiv & \\, \\subs{k}{(\\lambda y.zy)}{x}\\subs{(\\lambda z.zx)}{(\\lambda y.zy)}{x}  \\nonumber\\\\\n \\equiv & \\, k \\subs{(\\lambda z.zx)}{(\\lambda y.zy)}{x} \\nonumber\n\\end{align}\n\nThe substitution operation gets blocked at $\\subs{(\\lambda z.zx)}{(\\lambda y.zy)}{x}$, since $z$ is free in $(\\lambda y.zy)$. What needs to be done in such situations is to switch to an $\\alpha$-equivalent term. First, \n\n\\begin{align}\n& (k(\\lambda z.zx)) \\equiv_\\alpha (k(\\lambda f.fx)) \n\\end{align}\n\nthen\n\n\\begin{align}\n& \\subs{(k(\\lambda f.fx))}{(\\lambda y.zy)}{x}\\\\\n \\equiv & \\, \\subs{k}{(\\lambda y.zy)}{x}\\subs{(\\lambda f.fx)}{(\\lambda y.zy)}{x}  \\nonumber\\\\\n \\equiv & \\, k (\\lambda f.\\subs{(fx)}{(\\lambda y.zy)}{x}) \\nonumber\\\\\n \\equiv & \\, k (\\lambda f.f(\\lambda y.zy))\\nonumber\n\\end{align}\n\n\\section{$\\beta$-reduction}\n\\ezimeti{\n\\item An expression of the form $(\\lambda \\alpha.\\gamma)\\omega$ is called a\n\\uterm{$\\beta$-redex}. 
\n\n\\item It can be \\uterm{$\\beta$-reduced} to $\\subs{\\gamma}{\\omega}{\\alpha}$,\ncalled a \\uterm{reduct}, if $\\omega$ is free for $\\alpha$ in $\\gamma$.\n\n\\item  A lambda expression can be reduced by turning all the redexes into reducts,\nwhich results in a \\uterm{$\\beta$-normal} form.\n\n\\item Some example reductions:\n\\begin{align*}\n(\\lambda f.fx)g  & \\breduce gx \\\\\n(\\lambda f.fx)ga  & \\breduce gxa \\\\\n(\\lambda f.fx)(ga) & \\breduce gax \\\\\n(\\lambda f\\lambda x.fx)g a & \\breduce ga \\\\\n%(\\lambda x\\lambda y \\lambda z.x(yz))f & \\breduce \\\\\n\\end{align*}\n\n\\item There may be more than one redex in an\nexpression:\n\n\\begin{align*}\n(\\lambda x.y)((\\lambda z.zz)(\\lambda w.w)) & \\breduce (\\lambda x.y)((\\lambda w.w)(\\lambda w.w))\\\\\n\t\t\t\t\t\t\t\t\t\t\t& \\breduce (\\lambda x.y)(\\lambda w.w)\\\\\n\t\t\t\t\t\t\t\t\t\t\t& \\breduce y \n\\end{align*}\n\n\\item Another reduction of the same expression would be:\n\n\\begin{align*}\n(\\lambda x.y)((\\lambda z.zz)(\\lambda w.w)) & \\breduce y \n\\end{align*}\n\n\\item The first is called the \\uterm{applicative order} reduction; the second is\ncalled the \\uterm{normal order} reduction.\n\n\\item Reduce the following expressions:\n\\begin{align*}\n(\\lambda x. m x)j\\\\\n(\\lambda y. y j)m\\\\\n(\\lambda x.\\lambda y. y(y x))jm\\\\\n(\\lambda y.y j)(\\lambda x. m x)\\\\\n(\\lambda x. xx)(\\lambda y. yyy)\n\\end{align*}\n}\n\n\n\n\n\n% \\item Reduce the following expression in both applicative and normal order:\n% \n% \\begin{align*}\n% (\\lambda p_1\\lambda p_2. p_1 0 \\land p_2)(\\lambda x. x\\not= 0)((\\lambda y.\\frac{5}{y} = 0)0)\n% \\end{align*}\n\n\n\n\\section{Lambda calculus in action: some examples}\n\n\\subsection{Logic}\n\\ezimeti{\n\\item Let's define the truth values:\n\n\\begin{align*}\n\\combf{T} \\equiv \\lambda x\\lambda y.x\\\\\n\\combf{F} \\equiv \\lambda x\\lambda y.y\n\\end{align*}\n\n\\item Verify that $\\lambda x\\lambda y.yxy$ behaves like \\emph{and} in prefix\nnotation (i.e. $\\land p q$). \n\n\\item Can you think of lambda expressions for $\\lor$ and $-$?\n\\item What about an if-then-else function that applies to the test, a function\nto execute if the test is true and a function to execute if the test is false?     \n\n}\n\\subsection{Arithmetic}\n\n\\ezimeti{\n\n\\item Numbers can be represented as lambda expressions in the following way:\n\\begin{align*}\n0 & \\equiv \\lambda f\\lambda x.x\\\\\n1 & \\equiv \\lambda f\\lambda x.fx\\\\\n2 & \\equiv \\lambda f\\lambda x.f(fx)\\\\\n3 & \\equiv \\lambda f\\lambda x.f(f(fx))\\\\\n\\vdots\n\\end{align*}\n\n\n\\item A successor function which returns $n+1$ given $n$ is:\n\\begin{align*}\n\\mathbf{S}\\equiv \\lambda a\\lambda f\\lambda x. f(afx)\n\\end{align*}\n\n\\item Here are addition and multiplication (again in prefix notation); verify\nthat they do what they are meant to do. 
\n\n\\begin{align*}\n\\mathbf{+} & \\equiv \\lambda a\\lambda b\\lambda f\\lambda x.af(bfx)\\\\\n\\mathbf{\\times} & \\equiv \\lambda a\\lambda b\\lambda f.a(bf)\n\\end{align*}\n\n}\n\n\\section{Lambda Calculus in LISP}\n\nWe will adopt \\href{https://slideplayer.com/slide/11363727/}{the implementation} provided by \\href{https://es-static.fbk.eu/people/cimatti/}{Alessandro Cimatti}, with some minor modifications.\n\n\n\n\\bibliographystyle{plain}\n\\bibliography{ozge}\n\\end{document}\n", "meta": {"hexsha": "ee93b624de01adde98b6a281a4c3c9b523936e3d", "size": 24087, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/03_lambda-calculus.tex", "max_stars_repo_name": "umutozge/cogs543", "max_stars_repo_head_hexsha": "34d03f2c8eae75d643d5d9d08813371a07305b64", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2019-04-18T11:26:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T09:37:09.000Z", "max_issues_repo_path": "notes/03_lambda-calculus.tex", "max_issues_repo_name": "umutozge/cogs543", "max_issues_repo_head_hexsha": "34d03f2c8eae75d643d5d9d08813371a07305b64", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/03_lambda-calculus.tex", "max_forks_repo_name": "umutozge/cogs543", "max_forks_repo_head_hexsha": "34d03f2c8eae75d643d5d9d08813371a07305b64", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.1727416799, "max_line_length": 700, "alphanum_fraction": 0.6794951634, "num_tokens": 7618, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7577943822145997, "lm_q2_score": 0.7371581741774411, "lm_q1q2_score": 0.5586143231952363}}
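As a quick sanity check of the encodings above, here is a small Python sketch (an added illustration, not part of the original notes or the linked LISP implementation) that runs the opening composition example and the Church-numeral arithmetic directly as Python lambdas.

\begin{verbatim}
# Function composition, as in the Introduction:
compose = lambda f1: lambda f2: lambda z: f1(f2(z))
print(compose(lambda x: x + 2)(lambda y: y ** 2)(6))    # 38

# Church numerals and the combinators S, + and x from the notes:
zero  = lambda f: lambda x: x
succ  = lambda a: lambda f: lambda x: f(a(f)(x))
plus  = lambda a: lambda b: lambda f: lambda x: a(f)(b(f)(x))
times = lambda a: lambda b: lambda f: a(b(f))

def church(n):
    # The numeral n: apply the successor n times to zero.
    c = zero
    for _ in range(n):
        c = succ(c)
    return c

def to_int(c):
    # Decode a Church numeral by applying it to integer successor.
    return c(lambda k: k + 1)(0)

assert to_int(plus(church(2))(church(3))) == 5
assert to_int(times(church(2))(church(3))) == 6
\end{verbatim}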
{"text": "%-------------------------------------------------------------------------------\n\\section{Gradient Analysis Framework}\n%-------------------------------------------------------------------------------\n\nThe gradient analysis framework works similarly to autodifferentiation, computing the gradient of each operation and using the results as inputs to the next gradient. However, unlike autodifferentiation, nonsmooth functions are handled by Proximal Gradients, so that a meaningful gradient can be computed over all program operations. Since Proximal Gradients require local Lipschitz Constants to bound their sampling in some cases, Lipschitz Constants are also propogated over the program along with the gradient.\n\n\\noindent \\textbf{Gradient Propogation with Chain Rule.} We model a program as an arbitrary function $P$, that operates on the system state $x$ and produces a modified system state $x'$. Since programs are generally nonsmooth, we can use subgradients to approximate the gradient of $P$. Programs in general are too complex to model, but since the program $P$ is composed of individual operations whose behavior is known, we model $P$ as a composition of $N$ individual functions on the system state that represent each operation in the program:\n\n\\vspace{-10pt}\\begin{align*}\nP(x) = P_N \\circ P_{N-1} \\circ \\cdots \\circ P_2 \\circ P_1 (x)\n\\end{align*}\n\nSubgradients compose under the chain rule, so we apply the same approach used in Automatic Differentation to evaluate the subgradient of $P$,  $\\nabla_{sub} P(x)$ by computing the product of the subgradients of each operation in $P$ ~\\cite{rockafellar2009variational}. \n\n\\vspace{-10pt}\\begin{align*}\n  \\nabla_{sub} P(x) = \\nabla_{sub} P_N * \\nabla_{sub} P_{N-1} * \\cdots * \\nabla_{sub} P_2 * \\nabla_{sub} P_1\n\\end{align*}\n\nThis is the same approach used in Automatic Differentiation, but applied to discrete functions and subgradients instead of continuous functions with gradients. Then, instead of attempting to evaluate a subgradient over the program function $P$, which has unknown behavior and operate on a large number of variables, subgradients can be evaluated on the individual operations $P_i$, which have known behavior and usually only operate on 1 or 2 variables.\n\nFor some operations that are analytically differentiable on continuous variables, the subgradient of the operation on discrete variables can also be computed analytically. However, the subgradients of nonsmooth operations must be approximated with Proximal Gradients. \n\n\\noindent \\textbf{Sampling Bounds with Lipschitz Constants.} In order to bound the samples required to evaluate the Proximal Gradient, Lipschitz Bounds must also be known for each operation. Fortunately, Lipschitz Bounds can be computed for each type of nonsmooth operation in a program based on its behavior and the Lipschitz bounds of its input. 
This means that in addition to propagating the gradient through a program, we also compute the Lipschitz Constant on each operation based on the Lipschitz Constants of its inputs, and propagate the result to the next function.\n\nIn some cases the Lipschitz Constant $K_f$ cannot be derived from knowledge of the function, so it is estimated by sampling the function at $x \\pm K$ and taking the maximum difference.\n\n\\noindent \\textbf{Partial Derivatives.} A gradient is composed of partial derivatives with regard to each part of the input, so when defining rules for computing the gradient we focus on partial derivatives with regard to the variable in the system state that was first marked for tracking, which is denoted $x_i$. Typically $x_i$ is a byte in an input read from a file or socket, but may also be an internal program variable.\n\n\n\\noindent \\textbf{Program Inputs.} In the case where $g$ and $h$ are direct inputs to the program, such as bytes read from a file, we consider them to be identity functions with a partial derivative of 1 if they operate on the byte of the input that the partial derivative is being taken with regard to, and 0 otherwise. The Lipschitz Constant in this case is also 1.\n\n\\noindent \\textbf{Propagation rules.} We then define gradient and Lipschitz Constant propagation rules as follows, where $f$ represents the current operation and $g$ and $h$ represent the prior operations on its input.\n\n\\begin{enumerate}\n  \\item \\textbf{Addition and Multiplication.} Addition and multiplication, as well as subtraction, are examples of discrete functions whose subgradients can still be computed analytically and therefore do not require sampling. The partial derivatives and associated Lipschitz Constants with regard to the $i$th input variable $x_i$ can then be computed as follows based on the definition of $f$ as $f\\left(g, h\\right) = g+h$:\n\n\\vspace{-10pt}\\begin{align*}\n  K_f &= K_{g} + K_{h} &\\tfrac{df}{dx_i} &= \\tfrac{dg}{dx_i} + \\tfrac{dh}{dx_i}\n\\end{align*}\n\nMultiplication can likewise rely on analytical derivatives as it is a linear operation:\n\n\\vspace{-10pt}\\begin{align*}\n  K_f &= h*K_{g} + g*K_{h} & \\frac{df}{dx_i} &= h\\frac{dg}{dx_i} + g\\frac{dh}{dx_i}\n\\end{align*}\n\n    \\gabe{These are pretty obvious, textual description may be sufficient.}\n\n\n\\item \\textbf{Integer Division.} Unlike division with continuous values, division on integers is not a linear function because any fractional portion of the result is truncated. Therefore sampling the function is necessary to evaluate an accurate partial derivative. To compute the Lipschitz Constant, we use the quotient rule and consider the case that causes the maximum possible change in $f$, where $\\tfrac{dg}{dx_i} = K_g$, and $\\tfrac{dh}{dx_i} = -K_h$.\n\n\\vspace{-10pt}\\begin{align*}\n  K_f &= \\frac{K_g h + K_h g}{h^2} & \n  \\frac{df}{dx_i} &= prox_{\\nabla f}\\left(\\bar{x}, K_f, \\lambda\\right)\n\\end{align*}\n\n\\item \\textbf{Modulo Operations.} Like integer division, modulo operations are also generally nonsmooth and require evaluating the proximal gradient. 
In cases where the modulo operand is constant, which we found to be the usual case in our test programs, the Lipschitz Constant for the modulo function is simply the modulo operand:\n\n\\vspace{-10pt}\\begin{align*}\n  K_f &= h &\n  \\frac{df}{dx_i} &= prox_{\\nabla f}\\left(\\bar{x}, K_f, \\lambda\\right)\n\\end{align*}\n\nIn cases where the derivative of the modulo operand is nonzero, the Lipschitz Constant is estimated by sampling and used to evaluate the Proximal Gradient. \n\n\n\\item \\textbf{Shift Operations.} Shift operations can be defined as a multiplication by a power of two, but have the additional condition that bits shifted past the width of the datatype are dropped. Therefore the Lipschitz Constant is also estimated by sampling and used to evaluate the Proximal Gradient.\n\n\n  \\gabe{May want to combine integer division, modulo and shift.}\n\n\\item \\textbf{Bitwise Operations.} When one of the operands of a bitwise operation is a constant, or at least has a 0 derivative with regard to the marked input, the Lipschitz Constant can be set to the value of the highest set bit in the constant operand. However, in cases where both operands have nonzero derivatives, the Lipschitz Constant must be estimated and used to evaluate the Proximal Gradient. \n\n\n\\item \\textbf{Memory Indexing.} Indexing into an array in memory can be modeled as an arbitrary nonsmooth function $f\\left(g\\right)$. Since we cannot derive the Lipschitz Constant $K_f$ from knowledge of the function, we estimate it by sampling the function at $g \\pm g'$ and taking the maximum difference. The proximal gradient can be computed using the estimated Lipschitz Constant.\n\n%\\begin{align*}\n  %f\\left(g\\right) &= \\textrm{mem}\\left[g\\right]\\\\\n  %\\frac{df}{dx_i} &= prox_{\\nabla f}\\left(\\bar{x}, K_f, \\lambda\\right)\n%\\end{align*}\n\n\\item \\textbf{Load Operations.} Load operations will normally take on the saved partial derivative of whatever variable was originally saved to that memory location. However, there is a possibility that one or more types with smaller bit widths are being loaded into a single larger width type. In these cases each partial derivative and Lipschitz Constant associated with an address being loaded, if there are more than one, is shifted by its offset with the base address and summed with the others.\n\n\\vspace{-10pt}\\begin{align*}\n  K_f &= \\sum_j K_j 2^{offset_j} &\n  \\frac{df}{dx_i} &= \\sum_j \\frac{d \\textrm{addr}_j}{dx_i}2^{offset_j}\n\\end{align*}\n\n\n\\item \\textbf{Integer Branches.} In cases where a variable can take on different values based on which portion of the branch was executed, we model the branch and subsequent merge as either a piecewise point function or a piecewise step function, depending on whether the branch comparison defines a single value or a set of values, such as greater than. In order to evaluate the Proximal Gradient on this function, the alternate branch path must be sampled to determine its value in the merge operation. In practice any execution of an alternate path will be invalid, but for many branches it will still result in an accurate representation of program behavior. In cases where the sample execution does not return to the merge operator, the alternate branch portion of the piecewise function is considered to be undefined, resulting in a partial derivative of 0. 
Once the path has been sampled, the Proximal Gradient can be evaluated on it normally, and the associated Lipschitz Constant can be set to the size of the step in the piecewise function.\n\n  This approach will handle most single branches, but breaks down in the case where multiple nested branches on different values must be bypassed for the merge variable to be set. To handle these cases, when sampling alternate branch paths any subsequent branch encountered can also be sampled, and the Proximal Gradient computed over each branch individually. Figure \\ref{fig:branch_exs} shows how this process works on nested branches with two conditions, \\tc{x>1} and \\tc{set>0}. When the branch is initially executed with \\tc{x=0} and \\tc{set=0}, the result is \\tc{y=0}. To evaluate the Proximal Gradient, the first branch is sampled by executing the alternate path with \\tc{x=2}. This execution also results in \\tc{y=0}, but during it the second nested branch on \\tc{set} is encountered and sampled with \\tc{set=1}, which results in \\tc{y=1}. These samples are then used to form a function that is 1 for \\tc{x>1} and \\tc{set>0} and 0 elsewhere. The closest point to \\tc{x=0, set=0} that gives the greatest change on this function is \\tc{x=2, set=1}, so the Proximal Gradient is evaluated at that point, resulting in partial derivatives of $\\tfrac{dy}{dx}=\\tfrac{1}{2}$ and $\\tfrac{dy}{dset} = 1$.\n\n\\begin{figure}\n  %\\centering\n  \\begin{subfigure}{0.38\\columnwidth}\n    %\\includegraphics[width=\\linewidth]{figs/sosp-working_ex3}\n    \\begin{lstlisting}\n// int x=0 \n// bool set=0\nint y=0;\nif (x>1)\n    if (set==1)\n        y=1;\n    \\end{lstlisting}\n    %\\caption{\\label{fig:branch_ex1}Example of nested branch with $\\tfrac{dy}{dx}=\\tfrac{1}{2}$ and $\\tfrac{dy}{dset} = 1$}\n  \\end{subfigure}\n  \\hspace{0.1cm}\n  \\begin{subfigure}{0.58\\columnwidth}\n    \\includegraphics[width=\\textwidth]{figs/nested_branch_plot.jpg}\n    %\\caption{\\label{fig:branch_ex2}Surface}\n  \\end{subfigure}\n  \\vspace{-0pt}\n  \\caption{\\label{fig:branch_exs}Example of nested branch with $\\tfrac{dy}{dx}=\\tfrac{1}{2}$ and $\\tfrac{dy}{dset} = 1$.}\n  \\vspace{-0pt}\n\\end{figure}\n\n\n\\item \\textbf{Floating Point Branches.} Unlike derivatives of integer operations, derivatives of floating point operations can generally be computed analytically, as is done in existing autodifferentiation frameworks. The exception to this is branches, which are normally not handled. Like integer branches, we model branches with floating point values as a piecewise step function, and execute the alternate branch path to determine the value of the step. However, unlike integer branches, floating point branches are not locally Lipschitz, because they have an infinite slope at the point of discontinuity \\gabe{add a figure showing this}, so a Proximal Gradient cannot be evaluated on them. 
Instead, we apply a Gaussian smoothing function, where the variance of the Gaussian used is set according to the magnitude of the input derivative.\n\\end{enumerate}\n\n\n", "meta": {"hexsha": "649d5a09c7486ee1f35a39e918665c7d5a4b8d47", "size": 12171, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "submissions/sosp2019/sections/methodology.tex", "max_stars_repo_name": "sillywalk/grazz", "max_stars_repo_head_hexsha": "a0adb1a90d41ff9006d8c1476546263f728b3c83", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "submissions/sosp2019/sections/methodology.tex", "max_issues_repo_name": "sillywalk/grazz", "max_issues_repo_head_hexsha": "a0adb1a90d41ff9006d8c1476546263f728b3c83", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "submissions/sosp2019/sections/methodology.tex", "max_forks_repo_name": "sillywalk/grazz", "max_forks_repo_head_hexsha": "a0adb1a90d41ff9006d8c1476546263f728b3c83", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 98.9512195122, "max_line_length": 1194, "alphanum_fraction": 0.7641935749, "num_tokens": 3008, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8670357598021707, "lm_q2_score": 0.6442251133170357, "lm_q1q2_score": 0.5585662106084756}}
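Since this draft leaves the exact form of the prox operator open, the following Python sketch is only a stand-in that illustrates the sampling idea behind Proximal Gradients on a nonsmooth integer operation: probe the function inside a Lipschitz-bounded window and take the slope of the largest change. The window, tie-breaking, and scaling here are assumptions made for illustration, not the framework's actual definition.

\begin{verbatim}
def sampled_gradient(f, x, K):
    # Stand-in for prox_{grad f}(x, K, lambda): sample f over the
    # Lipschitz-bounded window [x - K, x + K] and return the slope
    # to the sample with the largest magnitude of change.
    base = f(x)
    best = 0.0
    for dx in range(-int(K), int(K) + 1):
        if dx == 0:
            continue
        slope = (f(x + dx) - base) / dx
        if abs(slope) > abs(best):
            best = slope
    return best

# Integer division g // 7 is flat almost everywhere, so a naive
# one-step finite difference usually reports 0 ...
f = lambda g: g // 7
print(f(10) - f(9))                  # 0
# ... while sampling within the window recovers a useful slope.
print(sampled_gradient(f, 10, K=7))  # 0.25 (slope to the nearest jump)
\end{verbatim}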
{"text": "\\chapter{Assignment: Clustering}\n\\label{hw:clustering}\n\n\\newthought{Clustering helps to discover groups of data}, for example similar countries based on certain socio-economic features. For this task, we will use \\textit{HDI} data set from the \\widget{Datasets} widget. The data reports the Human Development Index for the year 2016 for 188 countries. While HDI has its limitations, the data offers an interesting exercise in clustering.\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[width=\\linewidth]{workflow.png}%\n  \\caption{Workflow for the assignment.}\n\\end{figure}\n\n\\begin{enumerate}\n    \\item Try Euclidean and cosine distance with Ward linkage. Which one works better? Why?\n    \\item How many groups did you discover? What number would make sense?\n    \\item Explain the final clusters. What defines each of them?\n    \\item Use Euclidean distance and Ward linkage. Can you explain why is Cuba clustered together with South Korea and the United Stated? Use \\widget{Box Plot} and the Data output from Hierarchical Clustering to answer this question.\n\\end{enumerate}\n\n\\begin{wrapfigure}{o}{1.0\\textwidth}\n  \\includegraphics[scale=0.4]{clustering.png}\n\\end{wrapfigure}\n", "meta": {"hexsha": "acf6db0b27ffbbe1d76b31ff2d07e89a6eadeab0", "size": 1177, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignments/010-clustering/clustering.tex", "max_stars_repo_name": "PrimozGodec/orange-lecture-notes", "max_stars_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-10-13T14:31:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:47:06.000Z", "max_issues_repo_path": "assignments/010-clustering/clustering.tex", "max_issues_repo_name": "PrimozGodec/orange-lecture-notes", "max_issues_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-02-26T13:33:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-25T19:15:34.000Z", "max_forks_repo_path": "assignments/010-clustering/clustering.tex", "max_forks_repo_name": "PrimozGodec/orange-lecture-notes", "max_forks_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-01-19T16:55:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-21T20:35:41.000Z", "avg_line_length": 53.5, "max_line_length": 381, "alphanum_fraction": 0.7774001699, "num_tokens": 285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5585633383694145}}
{"text": "\\section{History and motivation}\r\n\r\n\r\n\\qquad Justification logic is one of those few subjects in which a historical introduction is more fruitful than a plain exposition of the syntax and the semantics of the logic.\r\n\r\n\\qquad In the debate around foundations of mathematics one of the philosophical positions that arose was Brouwer's intuitionism. Briefly, intuitionism says that the truth of a mathematical statement should be identified with the proof of that statement. Summarizing the core idea of this position in a slogan: \\textit{truth means provability}. Starting from this core idea an informal semantics was created. Now, this semantics is known as Brouwer\u2013Heyting\u2013Kolmogorov (BHK) semantics. It gives an informal meaning to the logical connectives $\\bot, \\e, \\ou, \\impli, \\nao$ in the following way:\r\n\r\n\\begin{itemize}\r\n\t\\item $\\bot$ is a proposition which has no proof (an absurdity, e.g. $0=1$).\r\n\t\\item A proof of $\\varphi \\e \\psi$ consist of a proof of $\\varphi$ and a proof of $\\psi$.\r\n\t\\item A proof of $\\varphi \\ou \\psi$ is given by exhibiting either a proof of $\\varphi$ or a proof of $\\psi$.\r\n\t\\item A proof of $\\varphi \\impli \\psi$ is a construction which, given a proof of $\\varphi$, returns a proof of $\\psi$.\r\n\t\\item A proof of $\\nao \\varphi$ is a construction which transforms any proof of $\\varphi$ into a proof of a contradiction. \\footnote{By this definition, we can clearly treat $\\nao \\varphi$ as an abbreviation of $\\varphi \\impli \\bot$.} \r\n\\end{itemize}\r\n\r\n\\qquad Using this semantics we can give an informal argument to show that some formulas are intuitionistic validities (formulas like $\\varphi \\impli \\varphi$, $\\varphi \\impli (\\psi \\impli \\varphi)$ and $\\bot \\impli \\varphi$) and show that some formulas that are classical validities are not validities by this interpretation (formulas like $\\varphi \\ou \\nao \\varphi$ and $\\nao\\nao\\varphi \\impli \\varphi$). More important than to decide whether some formula is a validity or not, this semantics gives us a way to grasp the intended reasoning that \\textit{intuitionistic logic} (Int) wants to capture.\r\n\r\n\\qquad The first step toward a formalization of this semantics was given by G\u00f6del in 1933 \\cite{Goedel33}. He added a new unary operator $B$ to classical logic; $B\\varphi$ should be read as `$\\varphi$ is provable' ($B$ stand for `beweisbar', the German word for `provable'). This new operator was added in order to express the notion of provability in classical mathematics. 
To describe the behavior of this operator G\u00f6del constructed the following calculus:\r\n\r\n\\begin{itemize}\r\n \\item[] All tautologies\r\n \\item[] $B\\varphi \\impli \\varphi$\r\n \\item[] $B(\\varphi \\impli \\psi) \\impli$ $(B\\varphi \\impli B\\psi)$\r\n \\item[] $B\\varphi \\impli BB\\varphi$\r\n \\item[] (\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$\r\n \\item[] (\\textit{Internalization})  $\\teo \\varphi$ $\\Rightarrow$ $\\teo B\\varphi$\r\n\\end{itemize}\r\n\r\n\r\n\r\n\\qquad Since this axiom system is equivalent to Lewis's S4 when we translate $B\\varphi$ by $\\Box\\varphi$, we will refer to this calculus of provability in classical mathematics simply as S4.\r\n\r\n\r\n\r\n\\qquad Based on the intuitionistic notion of truth as provability, it is possible to define the following translation from formulas of intuitionistic logic to formulas of S4:\r\n\\pagebreak\r\n\\begin{itemize}\r\n\t\\item $p^{B} = Bp$;\r\n\t\\item $\\bot^{B} = \\bot$;\t\r\n\t\\item $(\\varphi \\e \\psi)^{B} = (\\varphi^{B} \\e \\psi^{B})$;\r\n\t\\item $(\\varphi \\ou \\psi)^{B} = (\\varphi^{B} \\ou \\psi^{B})$;\t\r\n\t\\item $(\\varphi \\impli \\psi)^{B} = B(\\varphi^{B} \\impli \\psi^{B})$.\r\n\\end{itemize}\r\n\r\n\\qquad It was shown by G\u00f6del, McKinsey and Tarski (for all the references see \\cite{Artemov06}) that this translation `makes sense', i.e., that the following theorem holds:\r\n\r\n\\begin{center}\r\n\tFor every formula $\\varphi$, Int $\\teo \\varphi$ iff S4 $\\teo \\varphi^{B}$.\r\n\\end{center}\r\n\r\n\\qquad The next step is to give a formal interpretation of the $B$ operator. One natural interpretation is the following: fix a first-order version of Peano Arithmetic (\\textbf{PA}); $B$ should be interpreted as the predicate $\\ex y Proof (y,x)$ which asserts that there exists a proof (in \\textbf{PA}) with G\u00f6del number $y$ for a formula with G\u00f6del number $x$. This predicate has the following property:\r\n\r\n\r\n\\begin{center}\r\n\tFor every sentence $\\varphi$ in the language of \\textbf{PA}, $\\textbf{PA} \\teo \\varphi$ iff $Proof (n,\\ulcorner \\varphi\\urcorner)$ holds for some $n$.\r\n\\end{center}\r\n\r\n\\qquad For simplicity, we will use $Prov(x)$ as an abbreviation of $\\ex y Proof (y,x)$. Let $*$ be a bijection between the sentences of \\textbf{PA} and the propositional variables. We can extend the mapping $*$ to give an arithmetical interpretation of all S4 formulas as follows:\r\n\r\n\\begin{itemize}\r\n\t\\item $\\bot^{*} = \\bot$;\r\n\t\\item $(\\varphi \\e \\psi)^{*} = (\\varphi^{*} \\e \\psi^{*})$;\r\n\t\\item $(\\varphi \\ou \\psi)^{*} = (\\varphi^{*} \\ou \\psi^{*})$;\t\r\n\t\\item $(\\varphi \\impli \\psi)^{*} = (\\varphi^{*} \\impli \\psi^{*})$;\r\n\t\\item $(B\\varphi)^{*} = Prov(\\ulcorner \\varphi^{*}\\urcorner)$.\r\n\\end{itemize}\r\n\r\n\\qquad On the one hand, it was straightforward how to interpret the modal formulas in the language of \\textbf{PA}; on the other hand it was not clear how to give a formal interpretation of this provability calculus (S4) in \\textbf{PA}. In \\cite{Goedel33} G\u00f6del pointed out that S4 does not correspond to the calculus of the predicate $Prov(x)$ in \\textbf{PA}, simply because S4 proves the formula $B(B(\\bot) \\impli \\bot)$. Using the above translation this formula corresponds to $Prov(\\ulcorner Prov(\\ulcorner\\bot\\urcorner) \\impli \\bot\\urcorner)$.
And since the following sentences are equivalent in \\textbf{PA}:\r\n\r\n\\begin{center}\r\n\t$Prov(\\ulcorner\\bot\\urcorner) \\impli \\bot$\\\\\r\n\t$\\nao Prov(\\ulcorner\\bot\\urcorner)$\\\\\r\n\t$Consist(\\textbf{PA})$,\r\n\\end{center}\r\n$Prov(\\ulcorner Prov(\\ulcorner\\bot\\urcorner) \\impli \\bot\\urcorner)$ means that the consistency of \\textbf{PA} is internally provable in \\textbf{PA}, which contradicts G\u00f6del's Second Incompleteness Theorem.\r\n\r\n\\qquad In a lecture in 1938 \\cite{Goedel38} G\u00f6del suggested a way to remedy this problem. Instead of using the implicit representation of proofs by the existential quantifier in the formula $\\ex y Proof (y,x)$ one can use explicit variables for proofs (like $t$) in the formula $Proof (t,x)$. Along these lines, G\u00f6del proposed expanding the language of classical propositional logic with variables for proofs and adding the following ternary operator $tB(\\varphi,\\psi)$ which should be read as `$t$ is a derivation of $\\psi$ from $\\varphi$'. Using $tB(\\varphi)$ as an abbreviation of $tB(\\top,\\varphi)$, G\u00f6del formulated the following axiom system:\r\n\r\n\\begin{itemize}\r\n\t\\item[] All tautologies\r\n\t\\item[] $tB(\\varphi)\\impli \\varphi$\r\n\t\\item[] $tB(\\varphi,\\psi) \\impli (sB(\\psi,\\theta) \\impli f(t,s)B(\\varphi,\\theta))$\\footnote{To understand the motivation behind this function $f$ consider the following. Suppose $t$ is a derivation of $\\psi$ from $\\varphi$ and $s$ is a derivation of $\\theta$ from $\\psi$. Then it can be easily seen that the concatenation of $t$ and $s$, $t^{\\smallfrown}s$, is a derivation of $\\theta$ from $\\varphi$. So, if $t$ is a derivation of $\\psi$ from $\\varphi$ and $s$ is a derivation of $\\theta$ from $\\psi$, then $f(t,s)= t^{\\smallfrown}s$.}\r\n\t\\item[] $tB(\\varphi) \\impli t\\p B(tB(\\varphi))$\r\n\t\\item[] (\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$\r\n\t\\item[] (\\textit{Internalization})  $\\teo \\varphi$ $\\Rightarrow$ $\\teo tB(\\varphi)$ (where $t$ is a derivation of $\\varphi$).\r\n\\end{itemize}\r\n\r\n\r\n\\qquad G\u00f6del only formulated this system; he did not show how it could be used as a bridge between Int and \\textbf{PA}. Independently of G\u00f6del's system presented in \\cite{Goedel38} (the lecture was published only in 1998), Sergei Artemov (in \\cite{Artemov01}) proposed the use of explicit variables and constants for proofs and some basic operations between proofs (Application `$\\cdot$', Sum `$+$' and Verifier `$!$'). Instead of having $B\\varphi$ (or the more modern notation of provability logic $\\Box \\varphi$), the non-classical formulas are of the form $t$$:$$\\varphi$ (which should be read as `$t$ is a proof of $\\varphi$'), where $t$ is a simple or complex term composed of proof variables or constants.
With this new language Artemov stipulated the following axiom system to capture the behavior of this explicit provability:\r\n\r\n\\begin{itemize}\r\n\t\\item[] All tautologies\r\n\t\\item[] $t$$:$$\\varphi \\impli \\varphi$\r\n\t\\item[] $t$$:$$(\\varphi \\impli \\psi) \\impli$ $(s$$:$$\\varphi \\impli$ $[t\\cdot s]$$:$$\\psi)$\r\n\t\\item[] $t$$:$$\\varphi \\impli$ $!t$$:$$t$$:$$\\varphi$\r\n\t\\item[] $t$$:$$\\varphi \\impli$ $[t+s]$$:$$\\varphi$\r\n\t\\item[] $s$$:$$\\varphi \\impli$ $[t+s]$$:$$\\varphi$\r\n\t\\item[] (\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$\r\n\t\\item[] (\\textit{axiom necessitation})  $\\teo c$$:$$\\varphi$, where $\\varphi$ is an axiom and $c$ is a justification constant.\r\n\\end{itemize}\r\n\r\n\\qquad This logic was called \\textit{Logic of Proofs} (LP) and it was the first example of justification logic.\r\n\r\n\r\n\\qquad If $\\varphi$ is an S4 formula, there is a mapping $r$ (called a \\textit{realization}) from the occurrences of $B$'s (or boxes) into terms. The result of this mapping on $\\varphi$ is denoted $\\varphi^{r}$. The following theorem expresses the connection between S4 and LP:\r\n\r\n\\textbf{(Realization Theorem between S4 and LP)} For every $\\varphi$ in the language of S4, there is a realization $r$ such that\r\n\r\n\\begin{center}\r\nS4 $\\teo \\varphi$ iff LP $\\teo \\varphi^{r}$\r\n\\end{center}\r\n\r\n\\qquad There is a way to define an interpretation $*$ of the LP formulas into the sentences of \\textbf{PA} (for details see \\cite{Artemov01}). And with all this machinery Artemov was able to prove the following result:\r\n\r\n\\textbf{(Provability Completeness of Intuitionistic Logic)} For every $\\varphi$, for every interpretation $*$, there is a realization $r$ such that\r\n\r\n\\begin{center}\r\nInt $\\teo \\varphi$ iff S4 $\\teo \\varphi^{B}$ iff LP $\\teo (\\varphi^{B})^{r}$ iff \\textbf{PA} $\\teo ((\\varphi^{B})^{r})^{*}$\r\n\\end{center}\r\n\r\n\r\n\\qquad This result shows that instead of the philosophical attitude of understanding intuitionistic logic as a kind of reasoning different from the one that classical logic wants to capture, we can interpret intuitionistic logic as provability in \\textit{classical mathematics}. Thus, the primitive notions that appear in the BHK semantics (`proof' and `construction') can have a formal meaning in a classical setting.\r\n\r\n\r\n\\qquad Going beyond the specific problem of the formalization of BHK semantics, justification logic can be seen as a new tool to introduce the notion of \\textit{justifications} in the well-established discussion of epistemic logic (for a more detailed discussion see \\cite{Artemov08}). Instead of using modal formulas like $\\Box \\varphi$ to express:\r\n\r\n\\begin{center}\r\nFor a given agent, $\\varphi$ is known,\r\n\\end{center}\r\nwe use justification formulas like $t$$:$$\\varphi$ to express:\r\n\r\n\\begin{center}\r\nFor a given agent, $\\varphi$ is known for the reason $t$.\r\n\\end{center}\r\n\r\n\\qquad Informally, we can see the terms $t, s, \\dots$ as justifications and the operators $+, \\cdot, !$ as means of epistemic action. In fact, this point of view enables us to see justification logic as something bigger than a logic of explicit provability; justification logic can be seen as a logic of \\textit{explicit knowledge}.\r\n\r\n\\qquad Our main interest in justification logic lies in this connection with epistemic logic.
We are not going to focus on the arithmetical interpretation of this logic; instead, we are going to work only with the Kripke-style semantics introduced by Melvin Fitting for this logic. But it is important to have the provability interpretation in mind because some of the choices made to formulate specific aspects of justification logic are directly motivated by the relationship with provability logic and the arithmetical interpretation.\r\n\r\n\\section{The propositional case: language and axiom system}\r\n\r\n\\begin{defn} (Basic vocabulary)\t\r\n\t\\begin{itemize} \r\n\t\t\\item $p, q, p\\p, q\\p, \\dots$ (\\textit{propositional variables});\r\n\t\t\\item $\\impli, \\bot$ (\\textit{boolean connectives});\r\n\t\t\\item $x,y,z, \\dots$ (\\textit{justification variables});\r\n\t\t\\item $a, b, \\dots$, with indices, $1, 2, \\dots$ (\\textit{justification constants});\r\n\t\t\\item $+$, $\\cdot$ (\\textit{justification operators});\r\n\t\t\\item $),($ (\\textit{parentheses}).\r\n\t\\end{itemize}\r\n\\end{defn}\r\n\r\n\\begin{defn} (Justification terms)\r\n\t\\begin{center}\r\n\t\t$ t :: = x$   $|$ $c$ $|$  $(t \\cdot t)$ $|$ $(t + t)$ \r\n\t\\end{center}\r\n\\end{defn}\r\n\r\n\r\n\r\n\\begin{defn} (Justification formulas)\r\n\t\\begin{center}\r\n\t\t$ \\varphi :: = p$   $|$ $\\bot$ $|$  $(\\varphi \\impli \\varphi)$ $|$ $t$$:$$\\varphi$\r\n\t\\end{center}\r\n\\end{defn}\r\n\r\n\\qquad We define $\\nao, \\e, \\see$ and $\\ou$ as usual. Sometimes, to help readability, we use the brackets `$],[$' together with `$),($'.\r\n\r\n\\qquad The minimal justification logic J$_{0}$ is axiomatized by the following axiom schemes and inference rules:\\\\\r\n\r\nAll tautologies\\\\\r\n\r\n(\\textit{Application Axiom}) $t$$:$$(\\varphi \\impli \\psi) \\impli$ $(s$$:$$\\varphi \\impli$ $[t\\cdot s]$$:$$\\psi)$\\\\\r\n\r\n(\\textit{Sum Axioms}) $t$$:$$\\varphi \\impli$ $[t+s]$$:$$\\varphi$, $s$$:$$\\varphi \\impli$ $[t+s]$$:$$\\varphi$\\\\\r\n\r\n(\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$\\\\\r\n\r\n(\\textit{axiom necessitation})  $\\teo c$$:$$\\varphi$, where $\\varphi$ is an axiom and $c$ is a justification constant.\\\\\r\n\r\n\\qquad The notion of derivation in this system, J$_{0}$ $\\teo \\varphi$, is defined as usual.\r\n\r\n\\begin{defn}\r\nLet $\\C$ be a non-empty set of formulas. We say that $\\C$ is a \\textit{constant specification} if for every $\\varphi \\in \\C$, $\\varphi = c$$:$$\\psi$ where $c$ is a justification constant and $\\psi$ is an axiom. A proof meets constant specification $\\C$ provided that whenever the inference rule `axiom necessitation' is used to introduce $c$$:$$\\psi$, then $c$$:$$\\psi \\in \\C$.\r\n\r\n\\qquad We say that a constant specification $\\C$ is \\textit{axiomatically appropriate} if i) for every axiom $\\varphi$ there is a justification constant $c_{1}$ such that $c_{1}$$:$$\\varphi\\in \\C$; and ii) if $c_{n}$$:$$c_{n-1}$$:$$\\cdots$$:$$c_{1}$$:$$\\varphi\\in \\C$, then $c_{n+1}$$:$$c_{n}$$:$$\\cdots$$:$$c_{1}$$:$$\\varphi\\in \\C$, for each $n \\geq 1$.\r\n\\end{defn}\r\n\r\n\r\n\\qquad For a constant specification $\\C$, by J$_{\\C}$ we mean J$_{0}$ plus formulas from $\\C$ as additional axioms.\r\n\r\n\\begin{teor}\r\n(Internalization) Suppose $\\C$ is an axiomatically appropriate constant specification. In these conditions, J$_{\\C}$ satisfies internalization.
That is, if J$_{\\C}$ $\\teo \\varphi$ then J$_{\\C}$ $\\teo t$$:$$\\varphi$, for some justification term $t$.\r\n\\end{teor}\r\n\r\n\\qquad There are some well-known examples of justification logic other than J$_{0}$; in this thesis we are going to mention only two of them. The first one is the already mentioned Logic of Proofs (LP): it extends the language of J$_{0}$ with the unary justification operator $!$ and has the following additional axiom schemes:\\\\\r\n\r\n(\\textit{Factivity Axiom}) $t$$:$$\\varphi \\impli \\varphi$\\\\\r\n\r\n(\\textit{Positive Introspection Axiom}) $t$$:$$\\varphi \\impli$ $!t$$:$$t$$:$$\\varphi$\\\\\r\n\r\n\\qquad The second one is called JT45; it extends the language of LP with the unary justification operator $?$ and has the following additional axiom scheme:\\\\\r\n\r\n(\\textit{Negative Introspection Axiom}) $\\nao t$$:$$\\varphi \\impli$ $?t$$:$$\\nao t$$:$$\\varphi$\\\\\r\n\r\n\\qquad We have stated the Internalization Theorem above for J$_{\\C}$, but this theorem also holds for LP and JT45. Because of the Positive Introspection Axiom we can prove this result for LP and JT45 with a weaker notion of axiomatically appropriate constant specification $\\C$. In both of these logics we just say that a constant specification $\\C$ is axiomatically appropriate if for every axiom $\\varphi$ there is a justification constant $c$ such that $c$$:$$\\varphi\\in \\C$. It should be noted that the Internalization Theorem is just an explicit form of the necessitation rule.\r\n\r\n\\qquad Informally speaking, the \\textit{forgetful projection} of a justification formula $\\varphi$, denoted $\\varphi^{\\circ}$, is the result of replacing every subformula $t$$:$$\\psi$ with $\\Box\\psi$. We also commented on the notion of \\textit{realization}. With these two notions we can state more clearly the relationship between modal logic and justification logic.\r\n\r\n\r\n\r\n\\begin{defn}\r\nSuppose KL is a normal modal logic and let JL be a justification logic mentioned above. We say that JL is a \\textit{counterpart} of KL if the following holds:\r\n\r\n\\begin{itemize}\r\n\\item  If JL $\\teo \\varphi$, then KL $\\teo \\varphi^{\\circ}$.\r\n\\item  If KL $\\teo \\varphi$, then there is a realization $r$ such that JL $\\teo \\varphi^{r}$.\r\n\\end{itemize}\r\n\\end{defn}\r\n\r\n\r\n\\qquad It can be proved that for an axiomatically appropriate constant specification $\\C$:\r\n\r\n\\begin{center}\r\nJ$_{\\C}$ is a counterpart of K\\\\\r\n\r\nLP is a counterpart of S4\\\\\r\n\r\nJT45 is a counterpart of S5\r\n\\end{center}\r\n\r\n\r\n\r\n\\section{From propositional logic to first-order}\r\n\r\n\\qquad Before we start presenting the first-order version of JT45, we need to remember some properties of derivations in classical first-order logic. It is useful to remember these details, because first-order justification logic tries to mirror some aspects of the individual variables in classical first-order derivations.
\r\n\r\n\\qquad Let $\\varphi(x)$ be any tautology, and let $t$ be the following derivation:\r\n\r\n\\begin{enumerate}[1.]\r\n\t\\item $\\varphi(x)$ \r\n\t\\item $\\todo x \\varphi(x)$                 (generalization)\r\n\t\\item $\\todo x\\varphi(x) \\impli (Px \\impli \\todo x\\varphi(x))$ (tautology)\r\n\t\\item $Px \\impli \\todo x\\varphi(x)$ (Modus Ponens)\r\n\\end{enumerate}\r\n\r\n\\qquad Although $x$ is free in the formula $Px \\impli \\todo x\\varphi(x)$, if $c$ is an individual term we cannot substitute $c$ for $x$ in $t$ in order to obtain a derivation $t(c / x)$ of $Pc \\impli \\todo x\\varphi(x)$ (if we do that, we ruin the derivation at step 2).\r\n\t\r\n\\qquad Now, let $s$ be the following derivation:\r\n\t\r\n\\begin{enumerate}[1.]\r\n\\item $\\varphi(x)$ \r\n\\item $\\todo x \\varphi(x)$                 (generalization)\r\n\\item $\\todo x\\varphi(x) \\impli (Py \\impli \\todo x\\varphi(x))$ (tautology)\r\n\\item $Py \\impli \\todo x\\varphi(x)$ (Modus Ponens)\r\n\\end{enumerate}\r\n\t\r\n\\qquad $y$ is free in the formula $Py \\impli \\todo x\\varphi(x)$ and moreover for every individual term $c$ the result of substituting $c$ for $y$ in $s$, $s(c/y)$, is a derivation of $Pc \\impli \\todo x\\varphi(x)$.\r\n\r\n\r\n\\qquad These examples show us that there are two different roles of variables in a derivation: a variable can be a \\textit{formal symbol} that can be subjected to generalization or a \\textit{place-holder} that can be substituted for. In $t$, $x$ is both a formal symbol and a place-holder. And in $s$, $x$ is a formal symbol and $y$ is a place-holder.\r\n\t\r\n\\qquad This consideration motivates the following definition:\r\n\t\r\n\\begin{center}\r\n$x$ \\textit{is free in the derivation} $t$ of the formula $\\varphi$ iff for every individual term $c$, $t(c/x)$ is a derivation of $\\varphi(c/x)$.\r\n\\end{center}\r\n\r\n\t\r\n\\qquad In propositional justification logic we write $t$$:$$\\varphi$ to express that $t$ is a derivation of $\\varphi$. In order to represent the distinct roles of variables in first-order justification logic, we are going to write formulas of the form:\r\n\r\n\\begin{center}\r\n$t$$:$$Px \\impli \\todo x\\varphi(x)$\\\\\r\n$s$$:_{\\{y\\}}$$Py \\impli \\todo x\\varphi(x)$\r\n\\end{center}\r\n\t\r\n\t\r\n\\qquad The role of $\\{y\\}$ in $s$$:_{\\{y\\}}$$Py \\impli \\todo x\\varphi(x)$ is to point out that $y$ is free in the derivation $s$ of $Py \\impli \\todo x\\varphi(x)$.
\r\n\r\n\r\n\r\n", "meta": {"hexsha": "20a685b9dc1464175cf360234269cc41837f64f8", "size": 19938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/chapters/J_introduction.tex", "max_stars_repo_name": "felipessalvatore/dissertacao_mestrado", "max_stars_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/chapters/J_introduction.tex", "max_issues_repo_name": "felipessalvatore/dissertacao_mestrado", "max_issues_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/chapters/J_introduction.tex", "max_forks_repo_name": "felipessalvatore/dissertacao_mestrado", "max_forks_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.4522968198, "max_line_length": 866, "alphanum_fraction": 0.7025779918, "num_tokens": 5880, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7772998611746911, "lm_q1q2_score": 0.5585633309550658}}
{"text": "\\section{Brief History of Burgers' Equation}\n    \n    The simplest fluids (called Newtonian fluids), are described by the well-known Navier-Stokes equations, named after Claude-Louis Navier and George Gabriel Stokes. These are a set of non-linear (PDEs), which are obtained by applying the principles of conservation of mechanics and thermodynamics on a volume of fluid to obtain the so-called integral formulation of the equations. Applying certain considerations, especially those in which the tangential forces have a linear relationship with the velocity gradient (Newton's viscosity law), the differential formulation is obtained which is generally more useful for solving the problems that arise in the mechanics of fluids. For further details about the Navier-Stokes equations see \\cite{Acheson2001, Batchelor1967, Landau1987, Currie1974, Temam1984}. \\\\\n    \n    Let $v$ be a vector field, Navier-Stokes equations are given as follows\n    \\begin{equation}\n        \\left \\lbrace \\begin{array}{ll}\n    \t\\nabla \\cdot v = 0, \\\\\n    \t(\\rho v)_t + (\\nabla \\cdot \\rho v) v + \\nabla p - \\mu \\nabla^2 v - \\rho G = 0.\n    \t\\end{array}  \\right .\n    \t\\label{navierstokes}\n    \\end{equation}\n    It is well known that when $\\rho$ is considered the density, $p$ the pressure, $v$ the velocity and $\\mu$ the viscosity of a fluid, these equations describe the dynamics of an incompressible fluid (free divergence, and $\\rho_t = 0$), where $G$ represents the gravitational effects. \\\\\n    \n    In contrast to equation (\\ref{navierstokes}), this can be investigated in one spatial dimension. Simplification in (\\ref{navierstokes}) of the $x$ component of the velocity vector, which we will call $v^x$, gives\n    \\begin{equation*}\n        \\rho \\frac{\\partial v^x}{\\partial t} + \\rho v^x \\frac{\\partial v^x}{\\partial x} + \\rho v^y \\frac{\\partial v^x}{\\partial y} + \\rho v^z \\frac{\\partial v^x}{\\partial z} + \\frac{\\partial p}{\\partial x} - \\mu \\left(\\frac{\\partial^2 v^x}{\\partial x^2} + \\frac{\\partial^2 v^x}{\\partial y^2} + \\frac{\\partial^2 v^x}{\\partial z^2} \\right) - \\rho G^x = 0.\n    \\end{equation*}\n\n    \\noindent Considering a $1D$ problem with no pressure gradient, the above equation reduces to\\\\\n    \\begin{equation}\n        \\rho \\frac{\\partial v^x}{\\partial t} + \\rho v^x \\frac{\\partial v^x}{\\partial x} - \\mu \\frac{\\partial^2 v^x}{\\partial x^2} - \\rho G^x = 0.\n        \\label{1.2}\n    \\end{equation}\n    If we use now the traditional variable $v$ rather than $v^x$, take $\\alpha$ to be the kinematic viscosity, i.e, $\\alpha = \\frac{\\mu}{\\rho}$ and $G \\equiv 0$, then the equation (\\ref{1.2}) becomes just the viscid Burgers' equation\n    \\begin{equation}\n    \\label{Burgers_Equation}\n        \\frac{\\partial v(x, t)}{\\partial t} +  \\underbrace{v(x, t) \\frac{\\partial v(x, t)}{\\partial x}}_{\\textbf{Convection}} - \\underbrace{\\alpha \\frac{\\partial^2 v(x, t)}{\\partial x^2}}_{\\textbf{Diffusion}} = 0.\n    \\end{equation}\n\n    Some assumptions  are  made,  namely: $\\rho =$ constant (density), $\\mu =$ constant  (viscosity),  $p=$ constant (pressure). \\\\\n    \n    Burgers' equation was introduced in 1915 by Harry Bateman \\cite{Bateman1915}, an English mathematician, in his paper along with its corresponding initial condition and boundary values. 
Later in 1939, Johannes Martinus Burgers \\cite{Burgers1939,Burgers1948}, a Dutch physicist, simplified the Navier-Stokes equation (\\ref{navierstokes}) by just dropping the pressure term, and in 1948 explained the mathematical modeling of turbulence with the help of equation (\\ref{Burgers_Equation}). The equation is named after Burgers because he became one of the leading figures in the field of fluid mechanics; the name honors his contributions. \\\\\n    \n    Equation (\\ref{Burgers_Equation}) is a nonlinear partial differential equation, where the second term is known as the convective part of the equation and the third as the diffusive part. This equation appears in several areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, traffic flow, and many others. It is generally considered a toy model, i.e., a tool that is used to understand part of the internal behavior of the general problem. \\\\\n    \n    The formulation given by equation (\\ref{Burgers_Equation}) is called the strong form, i.e., the partial differential equation is required to be satisfied at each point $x$ of its domain and for each $t$. This formulation can be written as follows\n    \\begin{equation}\n    \t\\frac{\\partial v}{\\partial t} + A(v) + F(v) = 0, \\hspace{2mm} t > 0,\n    \t\\label{strong}\n    \\end{equation}\n\twhere $F$ and $A$ are given by\n\t\\begin{equation*}\n\t\tA(v) = - \\alpha v_{xx}, \\hspace{3mm} F(v) = \\frac{1}{2} (v^2)_x , \\hspace{3mm} x \\in I.\n\t\\end{equation*} \n\tMultiplying both sides of (\\ref{strong}) by an arbitrary smooth test function $\\phi \\in X$ of compact support and integrating over the space $I$, we get\n    \\begin{equation}\n    \t\\displaystyle \\int_{I} \\frac{\\partial v}{\\partial t} \\phi dx + \\int_{I} A(v) \\phi dx + \\int_{I} F(v) \\phi dx = 0, \\hspace{2mm} \\forall \\phi \\in X,\\hspace{2mm} \\forall t > 0.\n    \t\\label{weak}\n    \\end{equation}\n\tThe formulation (\\ref{weak}) is called the weak form of (\\ref{Burgers_Equation}), and the $\\phi$ are known as the test functions. If we denote $\\langle \\cdot, \\cdot \\rangle$ as the inner product in $X$, then (\\ref{weak}) can be written in compact form as follows\n    \\begin{equation}\n    \t\\left \\langle \\frac{\\partial v}{\\partial t} + A(v) + F(v), \\phi\\right\\rangle = 0, \\hspace{2mm} \\forall \\phi \\in X, \\hspace{2mm} \\forall t > 0.\n    \t\\label{compact_weak}\n    \\end{equation}\n\tNote that the two formulations, (\\ref{weak}) and (\\ref{strong}), are equivalent if the solution is smooth enough; however, the weak formulation can accommodate less regular solutions than the strong form. In fact, the solution to (\\ref{weak}) is known as the distribution solution of the original equation (\\ref{Burgers_Equation}), since it can be shown to satisfy (\\ref{Burgers_Equation}) in the sense of distributions. For more details, see Schwartz \\cite{Schwartz1966}, Lions and Magenes \\cite{Lions1972}, and Renardy and Rogers \\cite{Renardy1993}. \\\\\n\t\n\tProper use of these formulations allows one to recover the strong form from the weak form; therefore, an appropriate way to design a numerical method is to first choose one of the formulations satisfied by the exact solution, then restrict the choice of test functions to a finite-dimensional space, replace $u$ with the discrete solution $u_N$, and possibly replace the exact integration with quadrature rules. 
\\\n\t\n   \n    \\subsection{Analytical solutions for Burgers' equation.}\n    \n    When we want to know the precision of an approximation, one alternative is numerical experiments that can be compared with some other solution considered a good reference. For this type of test, it is best to have exact solutions to compare against, and fortunately equation (\\ref{Burgers_Equation}) can be solved exactly by the Hopf-Cole transformation, introduced independently by Eberhard Hopf \\cite{Hopf1950} and Julian David Cole \\cite{Cole1951}, which converts Burgers' equation into a linear parabolic equation that can be solved exactly for any initial condition. \\\\\n  \n   \t\\noindent Now consider the initial value problem for equation (\\ref{Burgers_Equation}):\n    \\begin{equation}\n        \\left \\lbrace \\begin{array}{ll}\n    \tu_t + u u_x = \\alpha u_{xx} & x \\in \\mathbb{R}, \\hspace{2mm} t > 0, \\hspace{2mm} \\alpha > 0 \\\\\n    \tu (x, 0) = u_0 (x)  & x \\in \\mathbb{R}.\n    \t\\end{array}  \\right .\n    \t\\label{IVP}\n    \\end{equation}\n\tThe Hopf-Cole transformation is given by\n    \\begin{equation}\n        u = -2 \\alpha \\frac{\\varphi_x}{\\varphi}.\n        \\label{Hopf_tranform}\n    \\end{equation}\n    Substituting (\\ref{Hopf_tranform}) into each term of (\\ref{IVP}), we find that\n    \\begin{equation*}\n        u_t = \\frac{2 \\alpha (\\varphi_t \\varphi_x - \\varphi \\varphi_{x t})}{\\varphi^2} \\hspace{2mm} , \\hspace{2mm} u u_x = \\frac{4 \\alpha^2 \\varphi_x (\\varphi \\varphi_{xx} - \\varphi^2_x)}{\\varphi^3},\n    \\end{equation*}\n    and \n    \\begin{equation*}\n        \\alpha u_{xx} = - \\frac{2 \\alpha^2 (2 \\varphi_x^3 - 3 \\varphi \\varphi_{xx} \\varphi_x + \\varphi^2 \\varphi_{xxx})}{\\varphi^3}.\n    \\end{equation*}\n    Substituting these expressions into (\\ref{IVP}),\n    \\begin{align*}\n        \\frac{2 \\alpha (-\\varphi \\varphi_{x t} +  \\varphi_x ( \\varphi_t - \\alpha \\varphi_{xx}) + \\alpha \\varphi \\varphi_{xxx})}{\\varphi^2} = 0, \n    \\end{align*}\n\tso we have the following,\n\t\\begin{align*}    \n        - \\varphi \\varphi_{x t} + \\varphi_x (\\varphi_t - \\alpha \\varphi_{xx}) + \\alpha \\varphi \\varphi_{xxx} = 0 &\\Longleftrightarrow \\varphi_x (\\varphi_t - \\alpha \\varphi_{xx}) = \\varphi (\\varphi_{x t} - \\alpha \\varphi_{xxx}) \\\\\n        &\\Longleftrightarrow \\varphi_x (\\varphi_t - \\alpha \\varphi_{xx}) =  \\varphi (\\varphi_t - \\alpha \\varphi_{xx})_x.\n    \\end{align*}\n    Therefore, if $\\varphi$ solves the equation $\\varphi_t - \\alpha \\varphi_{xx} = 0$, $x \\in \\mathbb{R}$, then $u(x, t)$ given by the transformation (\\ref{Hopf_tranform}) solves Burgers' equation.\\\\\n    \n    \\noindent To completely transform the problem (\\ref{IVP}) we still have to work with the initial condition function.
To do this, note that (\\ref{Hopf_tranform}) can be written as\n    \\begin{equation}\n        u = -2 \\alpha (\\log \\varphi)_x,\n        \\label{Hopf2}\n    \\end{equation}\n    \n    \\noindent hence, we get\n    \\begin{equation*}\n        \\varphi (x, t) = \\displaystyle e^{- \\int \\frac{u(x, t)}{2 \\alpha} dx}.\n    \\end{equation*}\n    It is clear from (\\ref{Hopf2}) that multiplying $\\varphi$ by a constant does not affect $u$, so we can write the last equation as\n    \\begin{equation}\n        \\varphi (x, t) = \\displaystyle e^{- \\int_{0}^{x} \\frac{u(y, t)}{2 \\alpha} dy}.\n        \\label{phi}\n    \\end{equation}\n    The initial condition on (\\ref{IVP}) must be transformed by using (\\ref{Hopf2}) to get\n    \\begin{equation*}\n        \\varphi (x, 0) = \\varphi_0 (x) = \\displaystyle e^{- \\int_{0}^{x} \\frac{u_0 (y)}{2 \\alpha} dy}.\n    \\end{equation*}\n    In summary, we have reduced the problem (\\ref{IVP}) to the following one\n    \\begin{equation}\n        \\left \\lbrace \\begin{array}{ll}\n    \t\\varphi_t - \\alpha \\varphi_{xx} = 0,  & x \\in \\mathbb{R}, \\hspace{2mm} t > 0, \\hspace{2mm} \\alpha > 0, \\\\\n    \t\\varphi (x, 0) = \\varphi_0 (x) = \\displaystyle e^{- \\int_{0}^{x} \\frac{u_0 (y)}{2 \\alpha} dy}, & x \\in \\mathbb{R}.\n    \t\\end{array}  \\right .\n    \\label{heat}\n    \\end{equation}\n    \\paragraph{Parabolic Equation.} The general solution of the initial value problem for equation (\\ref{heat}) is well known and can be handled by a variety of methods. An interesting method related to spectral methods is the following: one can take the Fourier transform with respect to $x$ for both the equation and the initial condition $\\varphi_0 (x)$ to obtain a first-order ordinary differential equation (ODE) as follows\n    \\begin{equation*}\n        \\left \\lbrace \\begin{array}{ll}\n    \t\\hat{\\varphi}_t = -\\xi^2 \\alpha \\hat{\\varphi},  & \\xi \\in \\mathbb{R}, \\hspace{2mm} t > 0, \\hspace{2mm} \\alpha > 0, \\\\\n    \t\\hat{\\varphi} (\\xi, 0) = \\hat{\\varphi}_0 (\\xi), & \\xi \\in \\mathbb{R},\n    \t\\end{array}  \\right .\n    \\end{equation*}\n    where $\\hat{\\varphi} (\\xi, t) = \\displaystyle \\int_{-\\infty}^{\\infty} \\varphi (x, t) e^{i \\xi x} dx $. 
\\\n    \n    \\noindent Then the solution for this problem is\n    \\begin{equation*}\n         \\hat{\\varphi} (\\xi, t) =  \\hat{\\varphi}_0 (\\xi) e^{- \\xi^2 \\alpha t}.\n    \\end{equation*}\n    \n    \\noindent To recover $\\varphi (x, t)$ we have to use the inverse Fourier transformation $F^{-1}$, namely,\n    \\begin{equation*}\n        \\varphi (x, t) = F^{-1} (\\hat{\\varphi} (\\xi, t)) = F^{-1} (\\hat{\\varphi}_0 e^{-\\xi^2 \\alpha t}) = \\varphi_0 (x) \\ast F^{-1} (e^{-\\xi^2 \\alpha t}),\n    \\end{equation*}\n    where $\\ast$ denotes the convolution product.\\\\\n    \n    \\noindent On the other hand\n    \\begin{equation*}\n        F^{-1} (e^{-\\xi^2 \\alpha t}) = \\frac{1}{2 \\sqrt{\\pi \\alpha t}} e^{- \\frac{x^2}{4 \\alpha t}},\n    \\end{equation*}\n    so the initial value problem (\\ref{heat}) has the analytic solution\n    \\begin{equation*}\n        \\varphi (x, t) = \\frac{1}{2 \\sqrt{\\pi \\alpha t}} \\displaystyle \\int_{-\\infty}^{\\infty} \\varphi_0 (\\xi) e^{- \\frac{(x - \\xi)^2}{4 \\alpha t}} d\\xi.\n    \\end{equation*}\n    Finally, from (\\ref{Hopf_tranform}), we obtain the analytic solution for the problem (\\ref{IVP})\n    \\begin{equation}\n    \\label{Exact_Solution}\n        u (x, t) =  \\displaystyle \\frac{\\int_{-\\infty}^{\\infty} \\frac{x - \\xi}{t} \\varphi_0 (\\xi) e^{- \\frac{(x - \\xi)^2}{4 \\alpha t}} d\\xi}{\\int_{-\\infty}^{\\infty} \\varphi_0 (\\xi) e^{- \\frac{(x - \\xi)^2}{4 \\alpha t}} d\\xi}. \n    \\end{equation}\n    \n    However, the previous solution cannot always be calculated explicitly, and we must implement an efficient numerical integration method to obtain a good approximation; a quadrature sketch is given after the figure below.\n    \n    \\begin{figure}[H]\n    \t\\centering\n    \t\\includegraphics[width=11cm]{introduction/figures/Exact_Solution_alpha=001.png}\n    \t\\caption{Exact solution for (\\ref{IVP}) with initial condition $u_0 (x) = e^{-0.05x^2}$ using the equation (\\ref{Exact_Solution}) for $x \\in [-60, 60]$, $t \\in [0, 100]$, and $\\alpha = 0.01$.}\n    \t\\label{Exact_Solution_alpha=0.01}\n    \\end{figure}
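As an illustration, here is a minimal quadrature sketch in Python (our own illustration; the grid bounds, the resolution, and the use of the initial condition from the figure are assumptions) that evaluates (\\ref{Exact_Solution}) with the trapezoidal rule. The two exponentials are combined and shifted by their maximum before exponentiating, since $\\varphi_0$ alone can overflow for small $\\alpha$; the shift cancels in the ratio.

\\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

def burgers_exact(x, t, u0, alpha, lo=-200.0, hi=200.0, n=20001):
    # Quadrature grid in xi; wide enough that the kernel decays.
    xi = np.linspace(lo, hi, n)
    # Running integral int_0^xi u0(y) dy, shifted to vanish at xi = 0.
    I = cumulative_trapezoid(u0(xi), xi, initial=0.0)
    I -= np.interp(0.0, xi, I)
    # log(phi0) + log(kernel), stabilized against overflow.
    expo = -I / (2.0 * alpha) - (x - xi) ** 2 / (4.0 * alpha * t)
    kernel = np.exp(expo - expo.max())
    num = trapezoid((x - xi) / t * kernel, xi)
    den = trapezoid(kernel, xi)
    return num / den

u0 = lambda y: np.exp(-0.05 * y ** 2)
print(burgers_exact(10.0, 50.0, u0, alpha=0.01))
\\end{verbatim}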
YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.558563326270652}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[usenames]{color} %used for font color\n\\usepackage{amsmath, amssymb, amsthm}\n\\usepackage{wasysym}\n\\usepackage[utf8]{inputenc} %useful to type directly diacritic characters\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{float}\n\\usepackage{mathtools}\n\\usepackage [english]{babel}\n\\usepackage [autostyle, english = american]{csquotes}\n\\MakeOuterQuote{\"}\n\\graphicspath{ {./} }\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\prob}{\\mathbb{P}}\n\\newcommand{\\degrees}{^{\\circ}}\n\\DeclarePairedDelimiter\\ceil{\\lceil}{\\rceil}\n\\DeclarePairedDelimiter\\floor{\\lfloor}{\\rfloor}\n\n\\author{Tianshuang (Ethan) Qiu}\n\\begin{document}\n\n\\section{Mon Dis, 2c}\nLet $a\\leq b \\leq c \\leq d$, we can do this since $\\N$ is ordered. We rearrange $a+b+c+d = abcd$ to $a+b+c = d(abc-1)$\n\\newline\nSince we have assumed an order, we have $a+b+c \\leq 3d$, so $abc-1 \\leq 3$, $abc \\leq 4$\n\\newline\nNow in this case $abc$ can only be $1,2,3,4$. We consider each of these cases:\n\\newline\n$abc = 1 \\implies a=1, b=1, c=1$, solving for our original equation gives $3 = 0d$, which does not have a solution.\n\\newline\n$abc = 2$, by our assumed order we have $a=1, b=1, c=2$, and we solve $2d=4+d$, $d=4$, this gives a solution.\n\\newline\n$abc = 3$, by our assumed order we have $a=1, b=1, c=3$, and we solve $3d=5+d$, which does not have an integer solution.\n\\newline\n$abc = 4$, by our assumed order we have $a_0=1, b_0=1, c_0=4$, or $a_1=1, b_1 = 2, b_2=2$. Solving the first equation gives $d = 2$, which contradicts our assumption of $c \\leq d$, so this is not a solution. Solving the latter gives $4d = 5+d$, which does not have an integer solution.\n\\newline\nSo finally we have our solution: rearrangements of $1,1,2,4$ is the only natural solution to this system.\n\n\\section{Mon Dis, 4g}\n$$(7,29) \\to (7+1,29+1) = (8, 30) \\to (4, 15)$$\n$$(4,15) \\to (4+11, 15+11) = (15, 26)$$\n$$(7,29) \\to (7+19, 29+19) = (26, 48)$$\n$$((4, 15), (15,26)) \\to (4, 26)$$\n$$((4,26), (26,48)) \\to (4,48)$$\n$$(4, 48) \\to (2, 24) \\to (1, 12)$$\n\n\\end{document}\n", "meta": {"hexsha": "6b9f0bb72303640a5fbd1d062f5fba6612a98ab5", "size": 2147, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week8/main.tex", "max_stars_repo_name": "TianshuangQiu/Math74-Homework", "max_stars_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week8/main.tex", "max_issues_repo_name": "TianshuangQiu/Math74-Homework", "max_issues_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week8/main.tex", "max_forks_repo_name": "TianshuangQiu/Math74-Homework", "max_forks_repo_head_hexsha": "89f1c999b9744af6062185ab91834887a81ca3d5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5094339623, "max_line_length": 285, "alphanum_fraction": 0.6707033069, "num_tokens": 838, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7772998611746912, "lm_q1q2_score": 0.55856331221741}}
{"text": "\\ignore{\n\\documentstyle[11pt]{report}\n\\textwidth 13.7cm\n\\textheight 21.5cm\n\\newcommand{\\myimp}{\\verb+ :- +}\n\\newcommand{\\ignore}[1]{}\n\\def\\definitionname{Definition}\n\n\\makeindex\n\\begin{document}\n}\n\\chapter{\\label{chapter:tabling}Tabling}\nThe Picat system is a term-rewriting system. For a predicate call, Picat selects a matching rule and rewrites the call into the body of the rule. For a function call $C$, Picat rewrites the equation $C = X$ where $X$ is a variable that holds the return value of $C$. Due to the existence of recursion in programs, the term-rewriting process may never terminate. Consider, for example, the following program:  \n\\begin{verbatim}\n    reach(X,Y) ?=> edge(X,Y).\n    reach(X,Y) => reach(X,Z),edge(Z,Y).\n\\end{verbatim}\nwhere the predicate \\texttt{edge} defines a relation, and the predicate \\texttt{reach} defines the transitive closure of the relation. For a query such as \\texttt{reach(a,X)}, the program never terminates due to the existence of left-recursion in the second rule. Even if the rule is converted to right-recursion, the query may still not terminate if the graph that is represented by the relation contains cycles. \n\nAnother issue with recursion is redundancy. Consider the following problem: \\emph{Starting in the top left corner of a $N\\times N$ grid, one can either go rightward or downward. How many routes are there through the grid to the bottom right corner?} The following gives a program in Picat for the problem:\n\\begin{verbatim}\n    route(N,N,_Col) = 1.\n    route(N,_Row,N) = 1.\n    route(N,Row,Col) = route(N,Row+1,Col)+route(N,Row,Col+1).\n\\end{verbatim}\nThe function call \\texttt{route(20,1,1)} returns the number of routes through a 20$\\times$20 grid. The function call \\texttt{route(N,1,1)} takes exponential time in \\texttt{N}, because the same function calls are repeatedly spawned during the execution, and are repeatedly resolved each time that they are spawned.\n\n\\section{Table Declarations}\nTabling\\index{tabling} is a memoization technique that can prevent infinite loops and redundancy. The idea of tabling\\index{tabling} is to memorize the answers to subgoals and use the answers to resolve their variant descendants. In Picat, in order to have all of the calls and answers of a predicate or function tabled\\index{tabling}, users just need to add the keyword \\texttt{table}\\index{\\texttt{table}} before the first rule.\n\n\\subsection*{Example}\n\\begin{verbatim}\n    table\n    reach(X,Y) ?=> edge(X,Y).\n    reach(X,Y) => reach(X,Z),edge(Z,Y).\n\n    table\n    route(N,N,_Col) = 1.\n    route(N,_Row,N) = 1.\n    route(N,Row,Col) = route(N,Row+1,Col)+route(N,Row,Col+1).\n\\end{verbatim}\nWith tabling\\index{tabling}, all queries to the \\texttt{reach} predicate are guaranteed to terminate, and the function call \\texttt{route(N,1,1)} takes only \\texttt{N}$^2$ time.\n\nFor some problems, such as planning problems, it is infeasible to table\\index{tabling} all answers, because there may be an infinite number of answers. For some other problems, such as those that require the computation of aggregates, it is a waste to table\\index{tabling} non-contributing answers. Picat allows users to provide table modes\\index{mode-directed tabling} to instruct the system about which answers to table\\index{tabling}. 
For a tabled\\index{tabling} predicate, users can give a \\emph{table mode declaration}\\index{mode-directed tabling} in the form ($M_{1},M_{2},\\ldots,M_{n}$), where each $M_{i}$ is one of the following: a plus-sign (+) indicates input, a minus-sign (-) indicates output, \\texttt{max} indicates that the corresponding variable should be maximized, and \\texttt{min} indicates that the corresponding variable should be minimized. The last mode $M_{n}$ can be \\texttt{nt}, which indicates that the argument is not tabled. Two types of data can be passed to a tabled predicate as an \\texttt{nt} argument: (1) global data that are the same for all the calls of the predicate, and (2) data that are functionally dependent on the input arguments. Input arguments are assumed to be ground. Output arguments, including \\texttt{min} and \\texttt{max} arguments, are assumed to be variables. An argument with the mode \\texttt{min} or \\texttt{max} is called an \\emph{objective} argument. Only one argument can be an objective to be optimized. As an objective argument can be a compound value, this limit is not essential, and users can still specify multiple objective variables to be optimized. When a table mode\\index{mode-directed tabling} declaration is provided, Picat tables\\index{tabling} only one optimal answer for the same input arguments.\n\n\\subsection*{Example}\n\\begin{verbatim}\n    table(+,+,-,min)\n    sp(X,Y,Path,W) ?=>\n        Path = [(X,Y)],\n        edge(X,Y,W).\n    sp(X,Y,Path,W) =>\n        Path = [(X,Z)|Path1],\n        edge(X,Z,Wxz),\n        sp(Z,Y,Path1,W1),\n        W = Wxz+W1.\n\\end{verbatim}\nThe predicate \\texttt{edge(X,Y,W)} specifies a weighted directed graph, where \\texttt{W} is the weight of the edge between node \\texttt{X} and node \\texttt{Y}. The predicate \\texttt{sp(X,Y,Path,W)} states that \\texttt{Path} is a path from \\texttt{X} to \\texttt{Y} with the minimum weight \\texttt{W}. Note that whenever the predicate \\texttt{sp/4} is called, the first two arguments must always be instantiated. For each pair, the system stores only one path with the minimum weight. \n\nThe following program finds a shortest path among those with the minimum weight for each pair of nodes:\n\\begin{verbatim}\n    table (+,+,-,min).\n    sp(X,Y,Path,WL) ?=>\n        Path = [(X,Y)],\n        WL = (Wxy,1),\n        edge(X,Y,Wxy).\n    sp(X,Y,Path,WL) =>\n        Path = [(X,Z)|Path1],\n        edge(X,Z,Wxz),\n        sp(Z,Y,Path1,WL1),\n        WL1 = (Wzy,Len1),\n        WL = (Wxz+Wzy,Len1+1).\n\\end{verbatim}\nFor each pair of nodes, the pair of variables \\texttt{(W,Len)} is minimized, where \\texttt{W} is the weight, and \\texttt{Len} is the length of a path. The built-in function \\texttt{compare\\_terms($T_1$,$T_2$)} is used to compare answers. Note that the order is important. If the term were \\texttt{(Len,W)}, then the program would find a shortest path, breaking a tie by selecting one with the minimum weight.\n\nThe tabling system is useful for offering dynamic programming solutions for planning problems. 
The following shows a tabled program for general planning problems:\n\\begin{verbatim}\n    table (+,-,min)\n    plan(S,Plan,Len),final(S) => Plan=[],Len=0.\n    plan(S,Plan,Len) =>\n        action(Action,S,S1),\n        plan(S1,Plan1,Len1),\n        Plan = [Action|Plan1],\n        Len = Len1+1.\n\\end{verbatim}\nThe predicate \\texttt{action(Action,S,S1)} selects an action and performs the action on state \\texttt{S} to generate another state, \\texttt{S1}.\n\n\\subsection*{Example}\nThe program shown in Figure \\ref{fig:farmer} solves the Farmer's problem: {\\em The farmer wants to get his goat, wolf, and cabbage to the other side of the river. His boat isn't very big, and it can only carry him and either his goat, his wolf, or his cabbage. If he leaves the goat alone with the cabbage, then the goat will gobble up the cabbage. If he leaves the wolf alone with the goat, then the wolf will gobble up the goat. When the farmer is present, the goat and cabbage are safe from being gobbled up by their predators.}\n\n\n\\begin{figure}\n\\begin{center}\n\\begin{verbatim}\n    go =>\n        S0=[s,s,s,s],\n        plan(S0,Plan,_),\n        writeln(Plan.reverse()).\n\n    table (+,-,min)\n    plan([n,n,n,n],Plan,Len) => Plan=[], Len=0.\n    plan(S,Plan,Len) =>\n        Plan=[Action|Plan1],\n        action(S,S1,Action),\n        plan(S1,Plan1,Len1),\n        Len=Len1+1.\n    \n    action([F,F,G,C],S1,Action) ?=>\n        Action=farmer_wolf,\n        opposite(F,F1),\n        S1=[F1,F1,G,C],\n        not unsafe(S1).\n    action([F,W,F,C],S1,Action) ?=>\n        Action=farmer_goat,\n        opposite(F,F1),\n        S1=[F1,W,F1,C],\n        not unsafe(S1).\n    action([F,W,G,F],S1,Action) ?=>\n        Action=farmer_cabbage,\n        opposite(F,F1),\n        S1=[F1,W,G,F1],\n        not unsafe(S1).\n    action([F,W,G,C],S1,Action) ?=>\n        Action=farmer_alone,\n        opposite(F,F1),\n        S1=[F1,W,G,C],\n        not unsafe(S1).\n\n    index (+,-) (-,+)\n    opposite(n,s).\n    opposite(s,n).\n\n    unsafe([F,W,G,_C]),W==G,F!==W => true.\n    unsafe([F,_W,G,C]),G==C,F!==G => true.\n\\end{verbatim}\n\\end{center}\n\\caption{\\label{fig:farmer}A program for the Farmer's problem.}\n\\end{figure}\n\n\\section{The Tabling Mechanism}\nThe Picat tabling\\index{tabling} system employs the so-called \\emph{linear tabling}\\index{linear tabling} mechanism, which computes fixpoints by iteratively evaluating looping subgoals. The system uses a data area, called the \\emph{table area}\\index{tabling}, to store tabled\\index{tabling} subgoals and their answers. The tabling area can be initialized with the following built-in predicate:\n\n\\begin{itemize}\n\\item \\texttt{initialize\\_table}\\index{\\texttt{initialize\\_table/0}}: This predicate initializes the table area.   \n\\end{itemize}\nThis predicate clears up the table area. It's the user's responsibility to ensure that no data in the table area are referenced by any part of the application.\n\nLinear tabling relies on the following three primitive operations to access and update the table\\index{tabling} area.\n\n\\begin{description}\n\\item[Subgoal lookup and registration:] This operation is used when a tabled\\index{tabling} subgoal is encountered during execution. It looks up the subgoal table\\index{tabling} to see if there is a variant of the subgoal. If not, it inserts the subgoal (termed a \\emph{pioneer} or \\emph{generator}) into the subgoal table\\index{tabling}. 
It also allocates an answer table\\index{tabling} for the subgoal and its variants. Initially, the answer table\\index{tabling} is empty. If the lookup finds that there already is a variant of the subgoal in the table\\index{tabling}, then the record that is stored in the table\\index{tabling} is used for the subgoal (called a \\emph{consumer}). Generators and consumers are handled differently. In linear tabling\\index{linear tabling}, a generator is resolved using rules, and a consumer is resolved using answers; a generator is iterated until the fixed point is reached, and a consumer fails after it exhausts all of the existing answers.\n\\item[Answer lookup and registration:] This operation is executed when a rule succeeds in generating an answer for a tabled\\index{tabling} subgoal. If a variant of the answer already exists in the table\\index{tabling}, then it does nothing; otherwise, it inserts the answer into the answer table\\index{tabling} for the subgoal, or it tables\\index{tabling} the answer according to the mode declaration. Picat uses the lazy consumption strategy (also called the local strategy). After an answer is processed, the system backtracks to produce the next answer. \n\\item[Answer return:] When a consumer is encountered, an answer is returned immediately, if an answer exists. On backtracking, the next answer is returned. A generator starts consuming its answers after it has exhausted all of its rules. Under the lazy consumption strategy, a top-most looping generator does not return any answer until it is complete.\n\\end{description}\n\n\\ignore{\n\\section{Initialization of the Table Area}\n\\begin{itemize}\n\\item \\texttt{table\\_get\\_all($Goal$) = $List$}\\index{\\texttt{table\\_get\\_all/1}}:  This function returns a list of answers of the subgoals that are subsumed by $Goal$. For example, the \\texttt{table\\_get\\_all(\\_)}\\index{\\texttt{table\\_get\\_all/1}} fetches all of the answers in the table\\index{tabling}, since any subgoal is subsumed by the anonymous variable\\index{anonymous variable}.\n\\item \\texttt{table\\_get\\_one($Goal$)}\\index{\\texttt{table\\_get\\_one/1}}: If there is a subgoal in the subgoal table\\index{tabling} that is a variant of \\texttt{$Goal$}, and that has answers, then \\texttt{$Goal$} is unified with the first answer. 
This predicate fails if there is no variant subgoal in the table\\index{tabling}, or if there is no answer available.\n\\end{itemize}\n}\n\n\\ignore{\n\\end{document}\n}\n", "meta": {"hexsha": "4a0447bfab5b741a003497cf25a3374c713fcc91", "size": 12065, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/tabling.tex", "max_stars_repo_name": "ponyatov/pycat", "max_stars_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-02T01:10:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-08T18:13:13.000Z", "max_issues_repo_path": "src/doc/tabling.tex", "max_issues_repo_name": "ponyatov/pycat", "max_issues_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:55:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-23T20:29:31.000Z", "max_forks_repo_path": "src/doc/tabling.tex", "max_forks_repo_name": "ponyatov/pycat", "max_forks_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-08T17:56:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T17:56:16.000Z", "avg_line_length": 72.245508982, "max_line_length": 1771, "alphanum_fraction": 0.7258184832, "num_tokens": 3243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7772998508568416, "lm_q1q2_score": 0.5585633048030613}}
{"text": "\n\\subsection{The value function of a policy}\n\n\\subsubsection{Other}\n\n\\(P(s_{t+1}|H)=P(s_{t+1}|s_t=s, a_t=a)=P_{s,a}\\)\n\n\\(E[r_t|H]=E[r_t|s_t=s, a_t=a]=R_{s,a}\\)\n\n\\subsubsection{Optimal policies}\n\nExpected return is greater or identical to any other policy.\n\n", "meta": {"hexsha": "ea3c9af44b0bb35bc6327eb3be75561ef96afa32", "size": 257, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/ai/MDP/02-03-valuePolicy.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/ai/MDP/02-03-valuePolicy.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/ai/MDP/02-03-valuePolicy.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.3571428571, "max_line_length": 60, "alphanum_fraction": 0.6809338521, "num_tokens": 92, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891305219503, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5584133494831583}}
{"text": "% Copyright 2018-2021 Melvin Eloy Irizarry-Gelp\u00ed\n\\setcounter{chapter}{9}\n\\chapter{Diffraction}\n%\nIn this experiment you will learn about diffraction of light from a laser due to a single slit.\n%\n\\section{Preliminary}\n%\nWhen light passes through a double slit, an interference pattern is produced. The physical mechanism responsible for this is \\textbf{diffraction}. The \\textbf{interference pattern} arises as the combination of the \\textbf{diffraction pattern} from each slit in the double slit.\n\nYou looked at interference patterns from a double slit in the previous experiment: A collection of dark and bright fringes. A diffraction pattern is similar, but has less structure (e.g. no ``internal'' fringes). For the interference pattern, you found that the distance between nearby \\textbf{peaks} was almost constant and close to the theoretical value\n\\begin{equation}\n    \\Delta y \\approx \\frac{\\ell \\lambda}{d}\n\\end{equation}\nHere $\\ell$ is the longitudinal distance between the slit and the screen where the pattern appears, $\\lambda$ is the wavelength of the laser light, and $d$ is the separation between the two slits. If the distance between nearby peaks is the same, then it follows that the distance $y_{n}$ from the central peak to the $n$-th peak is given by\n\\begin{equation}\n    y_{n} \\approx \\frac{n \\ell \\lambda}{d}\n\\end{equation}\nThere is an analogous result for a diffraction pattern from a single slit. However, for a single slit there is no slit separation $d$, so instead the \\textbf{slit width} $a$ plays a role:\n\\begin{equation}\n    x_{n} \\approx \\frac{n \\ell \\lambda}{a}\n\\end{equation}\nThe interpretation is also different: $x_{n}$ is now the distance from the central peak to the $n$-th dark region (valley). In order to emphasize this subtle difference, I encourage you to use $y$ for distance \\textbf{from the center to a bright peak}, and $x$ for distance \\textbf{from the center to a dark valley}.\n%\n\\subsection{Intensity}\n%\nSince the interference and diffraction patterns involve dark and bright regions, it is useful to measure the brightness along the screen. In your experiment this is done with a light intensity sensor. For practical purposes, light intensity is measured in units of percentage (\\%). Dark regions have small percentage of intensity, and bright regions have large percentages. One observation about the intensity of the many bright regions in a pattern is that the intensity appears to change from peak to peak. For single slit diffraction there is a particular prediction for the ratio of the intensity of bright peaks. Let $I_{0}$ be the intensity of the central bright peak, $I_{1}$ be the intensity of the first bright peak, and $I_{2}$ be the intensity of the second bright peak. Then you expect that\n\\begin{align}\n    \\frac{I_{1}}{I_{0}} = 0.045 && \\frac{I_{2}}{I_{0}} = 0.016\n\\end{align}\nThat is, the intensity of the first bright peak away from the center should be about 4.5\\% the intensity of the central bright peak. Similarly, the intensity of the second bright peak away from the center should be about 1.6\\% the intensity of the central bright peak. 
These relations should hold for bright peaks on both sides.\n%\n\section{Experiment}\n%\nThe experiment consisted of collecting light intensity data for diffraction patterns from different single slits:\n\begin{itemize}\n    \item Single slit with $a = 0.08$ mm\n    \item Single slit with $a = 0.04$ mm\n    \item Single slit with $a = 0.16$ mm\n\end{itemize}\nFor each single slit configuration you should have two runs, for consistency.\n%\n\section{Analysis}\n%\nThe analysis for this experiment is very similar to the analysis in the previous experiment with the interference pattern. The main difference is that you need to find the position of dark valleys now, instead of bright peaks before.\n%\n\section{My Data}\n%\nFor my data I used the \textbf{red laser}, so the wavelength value is 6.35{ }\texttimes{ }10\textsuperscript{\textminus 4} mm. I also collected data for the $a = 0.02$ mm slit (runs 7 and 8).\n%\n\section{Your Data}\n%\nFor your data you used the \textbf{green laser}, so the wavelength value is 5.32{ }\texttimes{ }10\textsuperscript{\textminus 4} mm.\n%\n% \newpage\n% \section{Your Report}\n% %\n% Your report should include the following:\n% \begin{itemize}\n%     \item A table like Table \ref{table.11.results.12} with the distance values for the $a = 0.08$ mm slit (either run 1 or run 2).\n%     \item A table like Table \ref{table.11.results.34} with the distance values for the $a = 0.04$ mm slit (either run 3 or run 4).\n%     \item A table like Table \ref{table.11.results.56} with the distance values for the $a = 0.16$ mm slit (either run 5 or run 6).\n%     \item A table like Table \ref{table.11.intensity.8} with the intensity ratios for the $a = 0.08$ mm slit (either run 1 or run 2).\n%     \item A table like Table \ref{table.11.intensity.4} with the intensity ratios for the $a = 0.04$ mm slit (either run 3 or run 4).\n%     \item A table like Table \ref{table.11.intensity.16} with the intensity ratios for the $a = 0.16$ mm slit (either run 5 or run 6).\n%     \item A chart like Figure \ref{figure.11.chart1} with the intensity profile for the $a = 0.08$ mm slit (either run 1 or run 2).\n%     \item A chart like Figure \ref{figure.11.chart2} with the intensity profile for the $a = 0.04$ mm slit (either run 3 or run 4).\n%     \item A chart like Figure \ref{figure.11.chart3} with the intensity profile for the $a = 0.16$ mm slit (either run 5 or run 6).\n%     \item Which diffraction slit yields the dark valleys that are closest to the central bright peak?\n%     \item Which diffraction slit yields the dark valleys that are farthest from the central bright peak?\n% \end{itemize}\n%\n\newpage\n\section{Tables}\n%\n\begin{table}[ht!]\n    \centering\n    \begin{tabular}{l|r}\n        \textbf{Feature} & \textbf{Position} (mm) \\\n        \hline\n        Valley 3L & 50.08 \\\n        Valley 2L & 57.32 \\\n        Valley 1L & 65.11 \\\n        \hline\n        Peak 0C & 72.77 \\\n        \hline\n        Valley 1R & 79.8 \\\n        Valley 2R & 87.29 \\\n        Valley 3R & 94.61 \\\n        \hline\n    \end{tabular}\n    \caption{Positions for Run 2 ($a = 0.08$ mm)}\n    \label{table.11.pos.12}\n\end{table}\n%\n\begin{table}[ht!]\n    \centering\n    \begin{tabular}{l|r}\n        \textbf{Feature} & \textbf{Position} (mm) \\\n        \hline\n        Valley 2L & 51.73 \\\n        Valley 1L & 65.19 \\\n        \hline\n        Peak 0C & 81.15 \\\n        \hline\n        Valley 1R & 95.46 \\\n       
 Valley 2R & 109.98 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Positions for Run 4 ($a = 0.04$ mm)}\n    \\label{table.11.pos.34}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Feature} & \\textbf{Position} (mm) \\\\\n        \\hline\n        Valley 3L & 66.8 \\\\\n        Valley 2L & 70.57 \\\\\n        Valley 1L & 74.34 \\\\\n        \\hline\n        Peak 0C & 77.94 \\\\\n        \\hline\n        Valley 1R & 81.66 \\\\\n        Valley 2R & 85.43 \\\\\n        Valley 3R & 89.24 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Positions for Run 5 ($a = 0.16$ mm)}\n    \\label{table.11.pos.56}\n\\end{table}\n%\n\\newpage\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Difference} & \\textbf{Distance} (mm) \\\\\n        \\hline\n        Valley 3L -- Peak 0C & \\textminus 22.69 \\\\\n        Valley 2L -- Peak 0C & \\textminus 15.45 \\\\\n        Valley 1L -- Peak 0C & \\textminus 7.66 \\\\\n        \\hline\n        Valley 1R -- Peak 0C & 7.03 \\\\\n        Valley 2R -- Peak 0C & 14.52 \\\\\n        Valley 3R -- Peak 0C & 21.84 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Distances for Run 2 ($a = 0.08$ mm)}\n    \\label{tablel.11.dis.12}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Difference} & \\textbf{Distance} (mm) \\\\\n        \\hline\n        Valley 2L -- Peak 0C & \\textminus 29.42 \\\\\n        Valley 1L -- Peak 0C & \\textminus 15.96 \\\\\n        \\hline\n        Valley 1R -- Peak 0C & 14.31 \\\\\n        Valley 2R -- Peak 0C & 28.83 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Distances for Run 4 ($a = 0.04$ mm)}\n    \\label{tablel.11.dis.34}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Difference} & \\textbf{Distance} (mm) \\\\\n        \\hline\n        Valley 3L -- Peak 0C & \\textminus 11.14 \\\\\n        Valley 2L -- Peak 0C & \\textminus 7.37 \\\\\n        Valley 1L -- Peak 0C & \\textminus 3.6 \\\\\n        \\hline\n        Valley 1R -- Peak 0C & 3.72 \\\\\n        Valley 2R -- Peak 0C & 7.49 \\\\\n        Valley 3R -- Peak 0C & 11.3 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Distances for Run 5 ($a = 0.16$ mm)}\n    \\label{tablel.11.dis.56}\n\\end{table}\n%\n\\newpage\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r|r|r}\n        $n$ & \\textbf{Observed} $x_{n}$ (mm) & \\textbf{Expected} $x_{n}$ (mm) & \\textbf{P.D.} (\\%) \\\\\n        \\hline\n        \\textminus 3 & \\textminus 22.69 & \\textminus 22.38 & 1.37 \\\\\n        \\textminus 2 & \\textminus 15.45 & \\textminus 14.92 & 3.53 \\\\\n        \\textminus 1 & \\textminus 7.66 & \\textminus 7.46 & 2.66 \\\\\n        \\hline\n        +1 & 7.03 & 7.46 & \\textminus 5.78 \\\\\n        +2 & 14.52 & 14.92 & \\textminus 2.70 \\\\\n        +3 & 21.84 & 22.38 & \\textminus 2.43 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Results for Run 2 ($a = 0.08$ mm)}\n    \\label{table.11.results.12}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r|r|r}\n        $n$ & \\textbf{Observed} $x_{n}$ (mm) & \\textbf{Expected} $x_{n}$ (mm) & \\textbf{P.D.} (\\%) \\\\\n        \\hline\n        \\textminus 2 & \\textminus 29.42 & \\textminus 29.85 & \\textminus 1.42 \\\\\n        \\textminus 1 & \\textminus 15.96 & \\textminus 14.92 & 6.95 \\\\\n        \\hline\n        +1 & 14.31 & 14.92 & \\textminus 4.10 \\\\\n        +2 & 28.83 
& 29.85 & \\textminus 3.40 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Results for Run 4 ($a = 0.04$ mm)}\n    \\label{table.11.results.34}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r|r|r}\n        $n$ & \\textbf{Observed} $x_{n}$ (mm) & \\textbf{Expected} $x_{n}$ (mm) & \\textbf{P.D.} (\\%) \\\\\n        \\hline\n        \\textminus 3 & \\textminus 11.14 & \\textminus 11.19 & \\textminus 0.46 \\\\\n        \\textminus 2 & \\textminus 7.37 & \\textminus 7.46 & \\textminus 1.22 \\\\\n        \\textminus 1 & \\textminus 3.6 & \\textminus 3.73 & \\textminus 3.50 \\\\\n        \\hline\n        +1 & 3.72 & 3.73 & \\textminus 0.28 \\\\\n        +2 & 7.49 & 7.46 & 0.39 \\\\\n        +3 & 11.3 & 11.19 & 0.97 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Results for Run 5 ($a = 0.16$ mm)}\n    \\label{table.11.results.56}\n\\end{table}\n%\n\\newpage\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Ratio} & \\textbf{Value} \\\\\n        \\hline\n        $I_{\\text{3L}} / I_{\\text{0C}}$ & 0.017 \\\\\n        $I_{\\text{2L}} / I_{\\text{0C}}$ & 0.025 \\\\\n        $I_{\\text{1L}} / I_{\\text{0C}}$ & 0.055 \\\\\n        \\hline\n        $I_{\\text{1R}} / I_{\\text{0C}}$ & 0.065 \\\\\n        $I_{\\text{2R}} / I_{\\text{0C}}$ & 0.031 \\\\\n        $I_{\\text{3R}} / I_{\\text{0C}}$ & 0.021 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Intensity ratios for Run 2 ($a = 0.08$ mm)}\n    \\label{table.11.intensity.8}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Ratio} & \\textbf{Value} \\\\\n        \\hline\n        $I_{\\text{1L}} / I_{\\text{0C}}$ & 0.086 \\\\\n        \\hline\n        $I_{\\text{1R}} / I_{\\text{0C}}$ & 0.099 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Intensity ratios for Run 4 ($a = 0.04$ mm)}\n    \\label{table.11.intensity.4}\n\\end{table}\n%\n\\begin{table}[ht!]\n    \\centering\n    \\begin{tabular}{l|r}\n        \\textbf{Ratio} & \\textbf{Value} \\\\\n        \\hline\n        $I_{\\text{3L}} / I_{\\text{0C}}$ & 0.010 \\\\\n        $I_{\\text{2L}} / I_{\\text{0C}}$ & 0.020 \\\\\n        $I_{\\text{1L}} / I_{\\text{0C}}$ & 0.051 \\\\\n        \\hline\n        $I_{\\text{1R}} / I_{\\text{0C}}$ & 0.059 \\\\\n        $I_{\\text{2R}} / I_{\\text{0C}}$ & 0.025 \\\\\n        $I_{\\text{3L}} / I_{\\text{0C}}$ & 0.014 \\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Intensity ratios for Run 5 ($a = 0.16$ mm)}\n    \\label{table.11.intensity.16}\n\\end{table}\n%\n\\FloatBarrier\n\\newpage\n\\section{Figures}\n%\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[scale=0.74]{image/11-diffraction/chart1.pdf}\n\t\\caption{Intensity for diffraction pattern with $a = 0.08$ mm.}\n\t\\label{figure.11.chart1}\n\\end{figure}\n%\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[scale=0.74]{image/11-diffraction/chart2.pdf}\n\t\\caption{Intensity for diffraction pattern with $a = 0.04$ mm.}\n\t\\label{figure.11.chart2}\n\\end{figure}\n%\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[scale=0.74]{image/11-diffraction/chart3.pdf}\n\t\\caption{Intensity for diffraction pattern with $a = 0.16$ mm.}\n\t\\label{figure.11.chart3}\n\\end{figure}\n%", "meta": {"hexsha": "cbd8fbeecc1dc9a7afed6a44e8fccc00abe3767e", "size": 12771, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/11-diffraction.tex", "max_stars_repo_name": "meirizarrygelpi/phys-208L", "max_stars_repo_head_hexsha": 
"ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/11-diffraction.tex", "max_issues_repo_name": "meirizarrygelpi/phys-208L", "max_issues_repo_head_hexsha": "ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/11-diffraction.tex", "max_forks_repo_name": "meirizarrygelpi/phys-208L", "max_forks_repo_head_hexsha": "ce8b093f9e730c87a2b4c6208b4d4da518035267", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5428571429, "max_line_length": 802, "alphanum_fraction": 0.6214078772, "num_tokens": 4330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.7606506635289835, "lm_q1q2_score": 0.5584085052462077}}
{"text": "\\documentclass{article}\n\\usepackage{enumerate}\n\\usepackage{amsmath, amsthm, amssymb}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[parfill]{parskip}\n\n\\title{Econ C103 Problem Set 2}\n\\author{Sahil Chinoy}\n\n\\begin{document}\n\\maketitle{}\n\n\\subsection*{Exercise 1}\n\n\\begin{enumerate}[(a)]\n\n\t\\item\n\n\tIn this context, a mechanism consists of\n\n\t\\begin{itemize}\n\n\t\t\\item A set of messages $M$.\n\n\t\t\\item A rule that maps from the set of messages to the (real-valued) tax the worker pays $t: M \\rightarrow \\mathbb{R}$.\n\n\t\t\\item A rule that maps from the set of messages to the type of job the worker has $x: M \\rightarrow \\{l,h\\}$.\n\n\t\\end{itemize}\n\n\t\\item\n\n\tA direct mechanism is one in which the worker reports their ability $\\theta$. This implies $M = [\\underline{\\theta}, \\bar{\\theta}]$.\n\n\t\\item\n\n\tAn incentive compatible direct mechanism is one in which it is always optimal for the worker to tell the truth about their ability, that is, $x(m) = x(\\theta)$. This implies\n\n\t\\begin{equation*}\n\tw_{x(\\theta)} - t(\\theta) - \\frac{x(\\theta)}{\n\t\\theta} \\geq \\max_m \\left \\{ w_{x(m)} - t(m) - \\frac{x(m)}{\\theta} \\right \\}\n\t\\end{equation*}\n\n\t\\item\n\n\tIncentive compatibility implies that an individual with true ability $\\theta$ should prefer to report $\\theta \\neq \\theta'$.\n\n\t\\begin{equation*}\n\tw_{x(\\theta)} - t(\\theta) - \\frac{x(\\theta)}{\n\t\\theta} \\geq w_{x(\\theta')} - t(\\theta') - \\frac{x(\\theta')}{\n\t\\theta}\n\t\\end{equation*}\n\t\\begin{equation}\n\t\\frac{1}{\\theta} (x(\\theta') - x(\\theta)) \\geq w_{x(\\theta')} - t(\\theta') - (w_{x(\\theta)} - t(\\theta))\n\t\\end{equation}\n\n\tLikewise, an individual with true ability $\\theta'$ should prefer to report $\\theta' \\neq \\theta$.\n\n\t\\begin{equation*}\n\tw_{x(\\theta')} - t(\\theta') - \\frac{x(\\theta')}{\n\t\\theta'} \\geq w_{x(\\theta)} - t(\\theta) - \\frac{x(\\theta)}{\n\t\\theta'}\n\t\\end{equation*}\n\t\\begin{equation}\n\t\\frac{1}{\\theta'} (x(\\theta) - x(\\theta')) \\geq w_{x(\\theta)} - t(\\theta) - (w_{x(\\theta')} - t(\\theta'))\n\t\\end{equation}\n\n\tAdding (1) and (2)\n\n\t\\begin{equation*}\n\t(\\frac{1}{\\theta} - \\frac{1}{\\theta'})(x(\\theta') - x(\\theta)) \\geq 0\n\t\\end{equation*}\n\n\tSince $0 \\leq \\underline{\\theta}$, both $\\theta$ and $\\theta'$ are positive, so we can rearrange to find\n\n\t\\begin{equation*}\n\t(\\theta'- \\theta)(x(\\theta') - x(\\theta)) \\geq 0\n\t\\end{equation*}\n\n\tThus, for $\\theta' > \\theta$, $x(\\theta') > x(\\theta)$. This proves $x(\\theta)$ is monotonically increasing.\n\n\t\\item \n\n\tSince $x = \\{l,h\\}$, we define $\\theta_0 = \\sup \\{ \\theta: x(\\theta) = l \\}$. 
Since a worker will always report the type that leads to the lowest taxes, the tax can only depend on the type of job the worker has, not on their ability.\n\n\t\begin{equation*}\n\tt(\theta) =  \begin{cases} \n\t      t_1, &\theta > \theta_0 \\\n\t      t_0, & \theta < \theta_0\n\t   \end{cases}\n\t\end{equation*}\n\n\tWorkers with ability $\theta < \theta_0$ should prefer the low-intensity job and associated tax\n\n\t\begin{equation}\n\t\tw_l - t_0 - \frac{l}{\theta_0} \geq w_h - t_1 - \frac{h}{\theta_0}\n\t\end{equation}\n\n\tLikewise, workers with ability $\theta > \theta_0$ should prefer the high-intensity job and associated tax\n\n\t\begin{equation}\n\t\tw_l - t_0 - \frac{l}{\theta_0} \leq w_h - t_1 - \frac{h}{\theta_0}\n\t\end{equation}\n\n\tCombining (3) and (4)\n\n\t\begin{equation*}\n\t\tw_l - t_0 - \frac{l}{\theta_0} = w_h - t_1 - \frac{h}{\theta_0}\n\t\end{equation*}\n\t\begin{equation*}\n\t\tt_1 = t_0 + (w_h - w_l) - \frac{h-l}{\theta_0}\n\t\end{equation*}\n\n\tSo\n\n\t\begin{equation*}\n\tt(\theta) =  \begin{cases} \n\t      t_0 + (w_h - w_l) - \frac{h-l}{\theta_0}, &\theta > \theta_0 \\\n\t      t_0, & \theta < \theta_0\n\t   \end{cases}\n\t\end{equation*}\n\n\t\item\n\n\tSince the outcome of any mechanism can be implemented in an incentive-compatible direct mechanism, the set of outcomes that can be implemented in any mechanism is characterized by the job allocation rule\n\n\t\begin{equation*}\n\tx(\theta) =  \begin{cases} \n\t      h, &\theta > \theta_0 \\\n\t      l, & \theta < \theta_0\n\t   \end{cases}\n\t\end{equation*}\n\n\tAnd the taxation rule\n\n\t\begin{equation*}\n\tt(\theta) =  \begin{cases} \n\t      t_0 + (w_h - w_l) - \frac{h-l}{\theta_0}, &\theta > \theta_0 \\\n\t      t_0, & \theta < \theta_0\n\t   \end{cases}\n\t\end{equation*}\n\n\t\item\n\n\tConsider an indirect mechanism where the worker decides between high and low intensity jobs and reports only their wage $w_x \in \mathbb{R}$. Then $M = \mathbb{R}$. One outcome that can be implemented in this mechanism is a uniform tax $\tau$ on wages\n\n\t\begin{equation*}\n\tt(w_x) = \tau w_x\n\t\end{equation*}\n\n\tThe allocation rule is meaningless, since the worker decides their own job.\n\n\t\item\n\n\tThe participation constraint implies the agent has non-negative utility in all cases\n\n\t\begin{equation*}\n\t\max_m \left \{ w_{x(m)} - t(m) - \frac{x(m)}{\theta} \right \} \geq 0\n\t\end{equation*}\n\n\tIn the context of taxation, this means that wages net of taxes and the cost of effort from working must be non-negative. If we consider a world in which unemployment is an option, i.e. the worker can choose to hold no job, receive no wage, and pay no tax ($x = w_x = t =0$), then the participation constraint is binding in the sense that an agent can opt for unemployment and thereby refuse to participate in the mechanism. If we require that everyone hold a job and pay taxes, then the participation constraint does not apply.\n\n\t\item\n\n\tFor $l = w_l = 0$, we have\n\n\t\begin{equation*}\n\tE[t] = F(\theta_0) t_0 + (1 - F(\theta_0)) (t_0 + w_h - \frac{h}{\theta_0})\n\t\end{equation*}\n\n\tIf we restrict ourselves to mechanisms where $t \leq w_{x(\theta)}$, then $t_0 \leq 0$, since $w_l = 0$. Clearly, the expected tax revenue is increasing in $t_0$, so the maximum expected revenue occurs for $t_0 = 0$. 
This implies a tax structure\n\n\t\\begin{equation*}\n\tt(\\theta) =  \\begin{cases} \n\t      w_h - \\frac{h}{\\theta_0}, &\\theta > \\theta_0 \\\\\n\t      0, & \\theta < \\theta_0\n\t   \\end{cases}\n\t\\end{equation*}\n\n\t\\item\n\n\tWhen $t=0$, the worker will choose the high intensity job if\n\n\t\\begin{equation*}\n\tw_h - \\frac{h}{\\theta} > 0\n\t\\end{equation*}\n\t\\begin{equation*}\n\t\\theta > \\frac{h}{w_h}\n\t\\end{equation*}\n\n\tWith $w_h = 4$ and $h = 1$, workers with ability $\\theta > \\frac{1}{4}$ will choose the high intensity job. Given that $\\theta$ is uniformly distributed, this implies the worker will choose the high intensity job with probability $\\frac{3}{4}$.\n\n\t\\item \n\n\tWith $t_0 = 0$, $w_h = 4$, $h=1$, $F(\\theta) = \\theta$\n\n\t\\begin{equation*}\n\tE[t] = (1 - \\theta_0) (4 - \\frac{1}{\\theta_0}) = 5 - 4\\theta_0 - \\frac{1}{\\theta_0}\n\t\\end{equation*}\n\n\tThe expected revenue is maximized for $\\frac{d E[t]}{d \\theta_0} = 0$\n\n\t\\begin{equation*}\n\t-4 + \\frac{1}{\\theta_0^2} = 0\n\t\\end{equation*}\n\t\\begin{equation*}\n\t\\theta_0 = \\frac{1}{2}\n\t\\end{equation*}\n\n\tSo, workers with ability $\\theta > \\frac{1}{2}$ will choose the high intensity job. Given that $\\theta$ is uniformly distributed, this implies the worker will choose the high intensity job with probability $\\frac{1}{2}$.\n\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "529ddb1f20022079dae188f4aa0626cf103a98c7", "size": 6943, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw2/hw2.tex", "max_stars_repo_name": "sahilchinoy/econ103", "max_stars_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-08T22:59:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-08T22:59:12.000Z", "max_issues_repo_path": "hw2/hw2.tex", "max_issues_repo_name": "sahilchinoy/econ103", "max_issues_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw2/hw2.tex", "max_forks_repo_name": "sahilchinoy/econ103", "max_forks_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-10-29T10:06:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-27T14:30:33.000Z", "avg_line_length": 32.4439252336, "max_line_length": 524, "alphanum_fraction": 0.6559124298, "num_tokens": 2386, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342971, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.558408502146042}}
{"text": "\\section{NT Functions and Polynomials}\n\n\\begin{Remark}\n    The main idea for most NT FEs is to take aid from LARGE integers on the\n    left side of the divisibility.\n\\end{Remark}\n\n\\prob{https://artofproblemsolving.com/community/c6h597243p3544096}{ISL 2013 N1}{E}{Let $\\mathbb{Z} _{>0}$ be the set of positive integers. Find all functions $f: \\mathbb{Z} _{>0}\\rightarrow \\mathbb{Z} _{>0}$ such that\n    \\[ m^2 + f(n) \\mid mf(m) +n \\]\nfor all positive integers $m$ and $n$.}\n\n\\solu{Go with the flow.}\n\n\n\n\\prob{https://artofproblemsolving.com/community/c6h356076p1935854}{ISL 2010 N5}{M}{Find all functions $g:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that \\[\\left(g(m)+n\\right)\\left(g(n)+m\\right)\\] is a perfect square for all $m,n\\in\\mathbb{N}.$}\n\n\\solu{Playig around with some primes give us that for every ``big'' primes, we need to have every residue class present in the range of $ g $. Now with this fact, we can prove the injectivity as well. Now we want to show that $ g(n+1)=g(n)+1 $. How to show that? We can show that by saying that no prime $ p $ exits such that $ p|g(n+1)-g(n) $, in other words, again the residue classes.}\n\n\n\n\\prob{https://artofproblemsolving.com/community/c6h27188p169519}{ISL 2004 N3}{E}{Find all functions $ f: \\N\\to\\N$ satisfying\n    \\[ \\left. \\left(f\\left(m\\right)^{2}+f\\left(n\\right)\\right) \\right|\n        \\left(m^{2}+n\\right)^{2}\\]\nfor any two positive integers $ m$ and $ n$.}\n\n\\begin{solution}\n    First get $ f(1)=1 $, then using casework, get $ f(p-1)=p-1 $. Then let $\n    m=(p-1) $, and use the division algorithm to reduce the dividend.\n\\end{solution}\n\n\\begin{Remark}\n    In nt-fe s with ``divisible'' condition, one occurring idea is to get an\n    infinite set of integers with the desired output value, and somehow keep\n    those numbers only in the divisor, and make them vanish from the dividend.\n    Most of the time, using division algorithm to reduce the dividend does the\n    trick.\n\\end{Remark}\n\n\n\n\\prob{https://artofproblemsolving.com/community/c6h355799p1932945}{ISL 2009 N5}{E}{Let $P(x)$ be a non-constant polynomial with integer coefficients. Prove that there is no function $T$ from the set of integers into the set of integers such that the number of integers $x$ with $T^n(x)=x$ is equal to $P(n)$ for every $n\\geq 1$, where $T^n$ denotes the $n$-fold application of $T$.}\n\n\\solu{Write down what the problem is actually saying. Integer functions are like cycles, use that.}\n\n\n\\prob{}\n{ISL 2019 N4}{M}{\n    Find all functions $f:\\mathbb{N} \\to \\mathbb{N}$ such that for all $a, b\n    \\in \\mathbb{N}, a+b < C$ for some constant $C$, we have\n    \\[a + f(b) | a^2 + bf(a)\\] \n}\n\n\\begin{solution}\n    The main idea is to get primes on the left hand side somehow. 
We want to\n    show for primes, $p|f(p)$.\n\\end{solution}\n\n\n", "meta": {"hexsha": "74b349a286f832040d47e4c1f326570a286fa506", "size": 2774, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "nt/sec3_fe_poly.tex", "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 48, "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_issues_repo_path": "nt/sec3_fe_poly.tex", "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nt/sec3_fe_poly.tex", "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "avg_line_length": 46.2333333333, "max_line_length": 388, "alphanum_fraction": 0.6888968998, "num_tokens": 875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5584084932965456}}
{"text": "\\section{Formal ledger rules}\n\\label{sec:model}\n\n\\begin{ruledfigure}{t}\n  \\begin{displaymath}\n    \\begin{array}{rll}\n      \\B{} && \\mbox{the type of Booleans}\\\\\n      \\N{} && \\mbox{the type of natural numbers}\\\\\n      \\Z{} && \\mbox{the type of integers}\\\\\n      \\H{} && \\mbox{the type of bytestrings: } \\bigcup_{n=0}^{\\infty}\\{0,1\\}^{8n}\\\\\n      (\\phi_1 : T_1, \\ldots, \\phi_n : T_n) && \\mbox{a record type with fields $\\phi_1, \\ldots, \\phi_n$ of types $T_1, \\ldots, T_n$}\\\\\n      t.\\phi && \\mbox{the value of $\\phi$ for $t$, where $t$ has type $T$ and $\\phi$ is a field of $T$}\\\\\n      \\Set{T} && \\mbox{the type of (finite) sets over $T$}\\\\\n      \\List{T} && \\mbox{the type of lists over $T$, with $\\_[\\_]$ as indexing and $|\\_|$ as length}\\\\\n      h::t && \\mbox{the list with head $h$ and tail $t$}\\\\\n      x \\mapsto f(x) && \\mbox{an anonymous function}\\\\\n      \\hash{c} && \\mbox{a cryptographic collision-resistant hash of $c$}\\\\\n      \\Interval{A} && \\mbox{the type of intervals over a totally-ordered set $A$}\\\\\n      \\FinSup{K}{M} && \\mbox{the type of finitely supported functions from a type $K$ to a monoid $M$}\n    \\end{array}\n  \\end{displaymath}\n  \\caption{Basic types and notation}\n  \\label{fig:basic-types}\n\\end{ruledfigure}\n%\nOur formal ledger model follows the style of the UTXO-with-scripts model from~\\cite{Zahnentferner18-UTxO} adopting the notation from~\\cite{eutxo-1-paper} with basic types defined as in Figure~\\ref{fig:basic-types}.\n\n\\paragraph{Finitely-supported functions.}\n\\label{sec:fsfs}\n%\nWe model token bundles as finitely-supported functions.\nIf $K$ is any type and $M$ is a monoid with identity element $0$, then a function $f: K \\rightarrow M$ is \\textit{finitely supported} if $f(k) \\ne 0$ for only finitely many $k \\in K$.\nMore precisely, for $f: K \\rightarrow M$ we define the \\textit{support} of $f$ to be\n%%\n$\\supp(f) = \\{k \\in K : f(k) \\ne 0\\}$\n%%\nand\n%%\n$\\FinSup{K}{M} = \\{f : K \\rightarrow M : \\left|\\supp(f)\\right| < \\infty \\}$.\n%%\n\nIf $(M,+,0)$ is a monoid then $\\FinSup{K}{M}$ also becomes a monoid if we define addition pointwise (i.e., $(f+g)(k) = f(k) + g(k)$), with the identity element being the zero map.\nFurthermore, if $M$ is an abelian group then $\\FinSup{K}{M}$ is also an abelian group under this construction, with $(-f)(k) = -f(k)$.\nSimilarly, if $M$ is partially ordered, then so is $\\FinSup{K}{M}$ with comparison defined pointwise: $f \\leq g$ if and only if $f(k) \\leq g(k)$ for all $k \\in K$.\n\nIt follows that if $M$ is a (partially ordered) monoid or abelian group then so is $\\FinSup{K}{\\FinSup{L}{M}}$ for any two sets of keys $K$ and $L$.\nWe will make use of this fact in the validation rules presented later in the paper (see Figure~\\ref{fig:validity}).\nFinitely-supported functions are easily implemented as finite maps, with a failed map lookup corresponding to returning 0.\n\n\\subsection{Ledger types}\n\n%\n\\begin{ruledfigure}{htb}\n  \\begin{displaymath}\n    \\begin{array}{rll}\n      \\multicolumn{3}{l}{\\textsc{Ledger primitives}}\\\\\n      \\Quantity && \\mbox{an amount of currency, forming an abelian group (typically \\Z{})}\\\\\n      \\Asset && \\mbox{a type consisting of identifiers for individual asset classes}\\\\\n      \\Tick && \\mbox{a tick}\\\\\n      \\Address && \\mbox{an ``address'' in the blockchain}\\\\\n      \\TxId && \\mbox{the identifier of a transaction}\\\\\n      \\txId : \\utxotx 
\\rightarrow \\TxId && \\mbox{a function computing the identifier of a transaction}\\\\\n      \\lookupTx : \\Ledger \\times \\TxId \\rightarrow \\utxotx && \\mbox{retrieve the unique transaction with a given identifier}\\\\\n      \\verify : \\PublicKey\\times\\H\\times\\H \\rightarrow \\B && \\mbox{signature verification}\\\\\n      \\FPScript && \\mbox{forging policy scripts}\\\\\n      \\scriptAddr : \\Script \\rightarrow \\Address && \\mbox{the address of a script}\\\\\n      \\applyScript{\\_} : \\Script \\to (\\Address \\times \\utxotx \\times \\Set{\\Output}) \\to\n      \\B && \\mbox{apply script inside brackets to its arguments}\\\\\n    \\\\\n    \\multicolumn{3}{l}{\\textsc{Ledger types}} \\\\\n    \\Policy &=& \\Address \\qquad \\mbox{(an identifier for a custom currency)}\\\\\n    \\Signature &=& \\H\\\\\n    \\\\\n    \\Quantities   &=& \\FinSup{\\Policy}{\\FinSup{\\Asset}{\\Quantity}}\\\\\n    \\\\\n    \\Output &=& (\\addr: \\Address, \\val: \\Quantities)\\\\\n    \\\\\n    \\OutputRef &=& (\\txrefid: \\TxId, \\idx: \\s{Int})\\\\\n    \\\\\n    \\Input &=& ( \\outputref : \\OutputRef\\\\\n             & &\\ \\validator: \\Script)\\\\\n    \\\\\n    \\utxotx &=&(\\inputs: \\Set{\\Input},\\\\\n               & &\\ \\outputs: \\List{\\Output},\\\\\n               & &\\ \\validityInterval: \\Interval{\\Tick},\\\\\n               & &\\ \\forge: \\Quantities\\\\\n               & &\\ \\scripts: \\Set{\\FPScript},\\\\\n               & &\\ \\sigs: \\Set{\\Signature})\\\\\n    \\\\\n    \\Ledger &=&\\!\\List{\\utxotx}\\\\\n    \\end{array}\n  \\end{displaymath}\n  \\caption{Ledger primitives and basic types}\n  \\label{fig:ledger-types}\n\\end{ruledfigure}\n%\nFigure~\\ref{fig:ledger-types} defines the ledger primitives and types that we need to define the \\UTXOma\\ model.\nAll outputs use a pay-to-script-hash scheme, where an output is locked with the hash of a script. We use a single scripting language for forging policies and to define output locking scripts. Just as in Bitcoin, this is a restricted domain-specific language (and not a general-purpose language); the details follow in Section~\\ref{sec:fps-language}.\nWe assume that each transaction has a unique identifier derived from its value by a hash function. 
This is the basis of the $\\lookupTx$ function to look up a transaction, given its unique identifier.\n\n\\paragraph{Token bundles.}\n\nWe generalise per-output transferred quantities from a plain \\Quantity\\ to a bundle of \\Quantities.\nA \\Quantities{} represents a token bundle: it is a mapping from a policy and an \\emph{asset}, which defines the asset class, to a \\Quantity{} of that asset.\\footnote{\n  We have chosen to represent \\Quantities{} as a finitely-supported function whose values are themselves finitely-supported functions (in an implementation, this would be a nested map).\n  We did this to make the definition of the rules simpler (in particular Rule~\\ref{rule:forging}).\n  However, it could equally well be defined as a finitely-supported function from tuples of \\Policy{}s and \\Asset{}s to \\Quantity{}s.\n}\nSince a \\Quantities\\ is indexed in this way, it can represent any combination of tokens from any assets (hence why we call it a token \\emph{bundle}).\n\n\\paragraph{Asset groups and forging policy scripts.}\n\nA key concept is the \\emph{asset group}.\nAn asset group is identified by the hash of special script that controls the creation and destruction of asset tokens of that asset group.\nWe call this script the \\emph{forging policy script}.\n\n\\paragraph{Forging.}\n\nEach transaction gets a $\\forge$ field, which simply modifies the required balance of the transaction by the $\\Quantities$ inside it: thus a positive $\\forge$ field indicates the creation of new tokens.\nIn contrast to outputs, $\\Quantities$ in forge fields can\nalso be negative, which effectively burns existing tokens.\\footnote{\nThe restriction on outputs is enforced by Rule~\\ref{rule:all-outputs-are-non-negative}.  We simply do not impose such a restriction on the $\\forge$ field: this lets us define rules in a simpler way, with cleaner notation. }\n\nAdditionally, transactions get a $\\scripts$ field holding a set of forging policy scripts: \\(\\Set{\\FPScript}\\).\nThis provides the forging policy scripts that are required as part of validation when tokens are minted or destroyed (see Rule~\\ref{rule:forging} in Figure~\\ref{fig:validity}). The forging scripts of the assets being forged are\nexecuted and the transaction is only considered valid if the execution of the script returns $\\true$.\nA forging policy script is executed in a context that provides access to the main components of the forging transaction, the UTXOs it spends, and the policy ID.\nThe passing of the context provides a crucial piece of the puzzle regarding self-identification: it includes the script's own $\\Policy$, which avoids the problem of trying to include the hash of a script inside itself.\n\n\\paragraph{Validity intervals.}\n\\label{para:validity-intervals}\n\nA transaction's \\emph{validity interval} field contains an interval of ticks (monotonically increasing units of ``time'', from~\\cite{eutxo-1-paper}).\nThe validity interval states that the transaction must only be validated if the current tick is within the interval. The validity interval, rather than the actual current chain tick value, must be used\nfor script validation. 
In an otherwise valid transaction, passing the current\ntick to the evaluator\ncould result in different script validation outcomes at different ticks, which\nwould be problematic.\n\n\paragraph{Language clauses.}\n\nIn our choice of the set of predicates $\texttt{p1, ..., pn}$ to include in the\nscripting language definition, we adhere to the following heuristic: we only\nadmit predicates with quantification over finite structures\npassed to the evaluator in the transaction-specific data, i.e. sets,\nmaps, and lists. The computations we allow in the predicates themselves are well-known\ncomputable functions, such as hashing, signature checking, arithmetic operations, comparisons,\netc.\n\nThe gamut of policies expressible in the model we propose here is\nfully determined by the collection of predicates,\nassembled into a single script by logical connectives\n\texttt{\&\&}, \texttt{||}, and \texttt{Not}.\nDespite being made up of only hard-coded predicates and connectives,\nthe resulting policies can be quite\nexpressive, as we will demonstrate in the upcoming applications section.\nWhen specifying forging predicates, we use $\texttt{tx.\_}$ notation to access\nthe fields of a transaction.\n\n\subsection{Transaction validity}\n\label{sec:validity}\n\n\begin{ruledfigure}{t}\n  \begin{displaymath}\n  \begin{array}{lll}\n  \multicolumn{3}{l}{\txunspent : \utxotx \rightarrow \Set{\s{OutputRef}}}\\\n  \txunspent(t) &=& \{(\txId(t),1), \ldots, (\txId(t),\left|t.outputs\right|)\}\\\n  \\\n  \multicolumn{3}{l}{\unspent : \s{Ledger} \rightarrow \Set{\s{OutputRef}}}\\\n  \unspent([]) &=& \emptymap \\\n  \unspent(t::l) &=& (\unspent(l) \setminus t.\inputs) \cup \txunspent(t)\\\n  \\\n  \multicolumn{3}{l}{\getSpent : \s{Input} \times \s{Ledger} \rightarrow \s{Output}}\\\n  \getSpent(i,l) &=& \lookupTx(l, i.\outputref.\txrefid).\outputs[i.\outputref.\idx]\n  \end{array}\n  \end{displaymath}\n  \caption{Auxiliary validation functions}\n  \label{fig:validation-functions}\n\end{ruledfigure}\n%\n\begin{ruledfigure}{t}\n\begin{enumerate}\n\n\item\n  \label{rule:slot-in-range}\n  \textbf{The current tick is within the validity interval}\n  \begin{displaymath}\n    \msf{currentTick} \in t.\i{validityInterval}\n  \end{displaymath}\n\n\item\n  \label{rule:all-outputs-are-non-negative}\n  \textbf{All outputs have non-negative values}\n  \begin{displaymath}\n    \textrm{For all } o \in t.\outputs,\ o.\val \geq 0\n  \end{displaymath}\n\n\item\n  \label{rule:all-inputs-refer-to-unspent-outputs}\n  \textbf{All inputs refer to unspent outputs}\n  \begin{displaymath}\n    \{i.\outputref: i \in t.\inputs \} \subseteq \unspent(l).\n  \end{displaymath}\n\n\item\n  \label{rule:value-is-preserved}\n  \textbf{Value is preserved}\n  \begin{displaymath}\n    t.\forge + \sum_{i \in t.\inputs} \getSpent(i, l).\val = \sum_{o \in t.\outputs} o.\val\n  \end{displaymath}\n\n\item\n  \label{rule:no-double-spending}\n  \textbf{No output is double spent}\n  \begin{displaymath}\n    \textrm{If } i_1, i \in t.\inputs \textrm{ and }  i_1.\outputref = i.\outputref\n    \textrm{ then } i_1 = i.\n  \end{displaymath}\n\n\item\n  \label{rule:all-inputs-validate}\n  \textbf{All inputs validate}\n  \begin{displaymath}\n    \textrm{For all } i \in t.\inputs,\\n    \applyScript{i.\validator}(\scriptAddr(i.\validator), t,\n    \{\getSpent(i, l) ~\vert~ i~\in~ t.\inputs\}) = \true\n  
\\end{displaymath}\n\n\\item\n  \\label{rule:validator-scripts-hash}\n  \\textbf{Validator scripts match output addresses}\n  \\begin{displaymath}\n    \\textrm{For all } i \\in t.\\inputs,\\ \\scriptAddr(i.\\validator) = \\getSpent(i, l).\\addr\n  \\end{displaymath}\n\n\\item\n  \\label{rule:forging}\n  \\textbf{Forging}\\\\\n  A transaction with a non-zero \\forge{} field is only\n  valid if either:\n  \\begin{enumerate}\n  \\item the ledger $l$ is empty (that is, if it is the initial transaction).\n  \\item \\label{rule:custom-forge}\n    for every key $h \\in \\supp(t.\\forge)$, there\n    exists $s \\in t.\\scripts$ with\n    $h = \\scriptAddr(s)$.\n  \\end{enumerate}\n  \\medskip\n  % ^ There's no space between these items without this, but all the other items have space due to \\displaymath\n\n\\item\n  \\label{rule:all-mps-validate}\n  \\textbf{All scripts validate}\n  \\begin{displaymath}\n    \\textrm{For all } s \\in t.\\scripts,\\ \\applyScript{s}(\\scriptAddr(s), t,\n    \\{\\getSpent(i, l) ~\\vert~ i~\\in~ t.\\inputs\\}) = \\true\n  \\end{displaymath}\n\n\\end{enumerate}\n\\caption{Validity of a transaction $t$ in a ledger $l$}\n\\label{fig:validity}\n\\end{ruledfigure}\n%\nFigure~\\ref{fig:validity} defines what it means for a transaction $t$ to be valid for a valid ledger $l$ during the tick \\currentTick, using some auxiliary functions from Figure~\\ref{fig:validation-functions}. A ledger $l$ is \\textit{valid} if either $l$ is empty or $l$ is of the form $t::l^{\\prime}$ with $l^{\\prime}$ valid and $t$ valid for $l^{\\prime}$.\n\nThe rules follow the usual structure for an UTXO ledger, with a number of modifications and additions.\nThe new \\textbf{Forging} rule (Rule~\\ref{rule:forging}) implements the support for forging policies by requiring that the currency's forging policy is included in the transaction --- along with Rule~\\ref{rule:all-mps-validate} which ensures that they are actually run!\nThe arguments that a script is applied to are the ones discussed earlier.\n\nWhen forging policy scripts are run, they are provided with the appropriate transaction data, which allows them to enforce conditions on it.\nIn particular, they can inspect the $\\forge$ field on the transaction, and so a forging policy script can identify how much of its own currency was forged, which is typically a key consideration in whether to allow the transaction.\n\nWe also need to be careful to ensure that transactions in our new system preserve value correctly.\nThere are two aspects to consider:\n\\begin{itemize}\n\\item\n  We generalise the type of value to \\Quantities.\n  However, since \\Quantities\\ is a monoid (see Section~\\ref{sec:model}), Rule~\\ref{rule:value-is-preserved} is (almost) identical to the one in the original UTXO model, simply with a different monoid.\n  Concretely, this amounts to preserving the quantities of each of the individual token classes in the transaction.\n\\item\n  We allow forging of new tokens by including the forge field into the balance in Rule~\\ref{rule:value-is-preserved}.\n\\end{itemize}\n", "meta": {"hexsha": "ea3620a5d3ba54e178f3cd8a2636589fa5312684", "size": 14752, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "papers/utxoma/model.tex", "max_stars_repo_name": "AriFordsham/plutus", "max_stars_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1299, "max_stars_repo_stars_event_min_datetime": "2018-10-02T13:41:39.000Z", "max_stars_repo_stars_event_max_datetime": 
"2022-03-28T01:10:02.000Z", "max_issues_repo_path": "papers/utxoma/model.tex", "max_issues_repo_name": "AriFordsham/plutus", "max_issues_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2493, "max_issues_repo_issues_event_min_datetime": "2018-09-28T19:28:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:31:31.000Z", "max_forks_repo_path": "papers/utxoma/model.tex", "max_forks_repo_name": "AriFordsham/plutus", "max_forks_repo_head_hexsha": "f7d34336cd3d65f62b0da084a16f741dc9156413", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 399, "max_forks_repo_forks_event_min_datetime": "2018-10-05T09:36:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T11:18:25.000Z", "avg_line_length": 52.1272084806, "max_line_length": 357, "alphanum_fraction": 0.7076328633, "num_tokens": 4215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.847967769904032, "lm_q2_score": 0.6584174938590246, "lm_q1q2_score": 0.5583168139334388}}
{"text": "\\documentclass[fleqn,12pt]{article}\n\\newcommand{\\expectation}[1]{\\langle #1 \\rangle}\n\\begin{document}\n\\title{Statistical Function Notes}\n\\author{Brian Gough}\n\\date{November 2008}\n\\maketitle\n\n\\section{Weighted mean and variance}\nWe have $N$ samples $x_i$ drawn from a gaussian distribution\n$G(\\mu,\\sigma)$ (or any distribution with finite first and second\nmoments).  Each sample has a weight $w_i$ which represents the\nrelative value we place on it.  Given the estimate of the mean\n%\n\\begin{eqnarray}\n\\bar{x} &=& {1 \\over W} \\sum_i w_i x_i \\\\\nW       &=& \\sum_i w_i\n\\end{eqnarray}\n%\n\\noindent\nwe want an unbiased estimator of the variance of the underlying\ndistribution $\\sigma^2$.  \n\nWe start with the standard definition of the sample variance $V$ and\ncompute the bias correction factor.\n%\n\\begin{eqnarray}\nV &=& {1\\over W} \\sum_i w_i (x_i - \\bar{x})^2 \\\\\n  &=& {1\\over W} \\sum_i w_i \\left(x_i - {1\\over W}\\sum_j w_j x_j\\right)^2 \\\\\n  &=& {1\\over W} \\sum_i w_i \\left(x_i^2 - {2 \\over W} x_i \\sum_j w_j x_j \n       + {1 \\over W^2} (\\sum_j w_j x_j)^2\\right) \\\\\n  &=& {1\\over W} \\left( \\sum_i w_i x_i^2 \n       - {2 \\over W} \\sum_i w_i x_i \\sum_j w_j x_j\n       + {1 \\over W} (\\sum_j w_j x_j)^2\\right)\\\\\n  &=& {1\\over W} \\left( \\sum_i w_i x_i^2 \n       - {1 \\over W} \\sum_i w_i x_i \\sum_j w_j x_j\\right)\\\\\n  &=& {1\\over W} \\left( \\sum_i w_i x_i^2 \n       - {1 \\over W} \\sum_{ij} w_i w_j x_i x_j\\right)\n\\end{eqnarray}\n%\nWe find the expectation value $\\expectation{V}$ using the Gaussian\nformulas $\\expectation{x_i} = \\mu$, $\\expectation{x_i x_j} = \\mu^2 +\n\\delta_{ij} \\sigma^2$.  We assume that any random contribution\ndependent on the weights themselves is zero or can be\nneglected in comparison to $\\sigma$.\n\n%\n\\begin{eqnarray}\n\\expectation{V}   &=& {1\\over W} \\left( \\sum_i w_i \\expectation{x_i^2}\n       - {1 \\over W} \\sum_{ij} w_i w_j \\expectation{x_i x_j}\\right)\\\\\n      &=& {1\\over W} \\left( \\sum_i w_i (\\mu^2 + \\sigma^2)\n       - {1 \\over W} \\sum_{ij} w_i w_j (\\mu^2 + \\delta_{ij} \\sigma^2)\\right)\\\\\n      &=& {1\\over W} \\left( W (\\mu^2 + \\sigma^2)\n       - {1 \\over W} ( W^2 \\mu^2 +( \\sum_i w_i^2) \\sigma^2)\\right)\\\\\n      &=& {1\\over W} \\left(W \\sigma^2 - {1 \\over W} ( \\sum_i w_i^2)\\sigma^2\\right)\\\\\n      &=& \\left({{W^2 - \\sum_i w_i^2} \\over W^2}\\right) \\sigma^2\n\\end{eqnarray}\n%\nTherefore an unbiased estimator $U$ of $\\sigma^2$ is\n%\n\\begin{eqnarray}\nU &=& {W^2 \\over {(W^2 - \\sum_i w_i^2)}} \\expectation{V}\\\\\n  &=& {W^2 \\over {(W^2 - \\sum_i w_i^2)}} {1\\over W} \\sum_i w_i (x_i - \\bar{x})^2 \\\\\n  &=& {W \\over {(W^2 - \\sum_i w_i^2)}} \\sum_i w_i (x_i - \\bar{x})^2\n\\end{eqnarray}\n%\nAnd this is the formula used in GSL.\n\\subsection{Notes}\nNote the following properties:\n\n\\begin{itemize}\n\\item\nThe formula is invariant under rescaling of the weights.\n\n\\item \nFor equal weights $w_i = w$ the factor reduces to $N/(N^2-N) =\n1/(N-1)$, which is the familiar factor of the unbiased estimator of\nthe variance for data without weights.\n\n\\item\nWhen $\\sum_i (w_i/W)^2 \\ll 1$ the commonly-used weighted variance\nformula $V = (1/W)\\sum_i w_i (x_i - \\bar{x})^2$ is a good\napproximation.\n\\end{itemize}\n\nIf we assume that the ``experimental errors'' arising from the weights\ncontribute, the underlying variance $\\sigma^2$ is overestimated by\nthis formula (e.g. 
consider the case $\\sigma = 0$---all the variation\nwill come from the gaussian fluctuations represented by the\n$w_i$). The appropriate expectation in this case is $\\expectation{x_i\n  x_j} = \\mu^2 + \\delta_{ij} (\\sigma^2 + 1/w_i)$\n\\end{document}\n\n", "meta": {"hexsha": "07cbba33f156b851a83e265f36b3fd813feced7c", "size": 3499, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "folding_libs/gsl-1.14/doc/statnotes.tex", "max_stars_repo_name": "parasol-ppl/PPL_utils", "max_stars_repo_head_hexsha": "92728bb89692fda1705a0dee436592d97922a6cb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "folding_libs/gsl-1.14/doc/statnotes.tex", "max_issues_repo_name": "parasol-ppl/PPL_utils", "max_issues_repo_head_hexsha": "92728bb89692fda1705a0dee436592d97922a6cb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "folding_libs/gsl-1.14/doc/statnotes.tex", "max_forks_repo_name": "parasol-ppl/PPL_utils", "max_forks_repo_head_hexsha": "92728bb89692fda1705a0dee436592d97922a6cb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.2234042553, "max_line_length": 84, "alphanum_fraction": 0.6496141755, "num_tokens": 1269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417487156366, "lm_q2_score": 0.8479677526147223, "lm_q1q2_score": 0.5583167968662165}}
{"text": "\\section{Perceptrons}\n\\smallskip \\hrule height 2pt \\smallskip\n\n\\begin{itemize}\n\t\\item Decision mechanism: sign of $w \\cdot x$.  If + say yes, if - say no.  % week 6 audio\n\t\\item Linear classifier in feature space.\n\t\\item Error driven, not probabilistic. \\hfill \\\\\n\t\t\\begin{itemize}\n\t\t\t\\item Mistake driven rather than model drive. \\hfill \\\\\n\t\t\t\\item Parameters from reactions to mistakes  % end of slide set 7\n\t\t\\end{itemize}\n\t\\item does better with linearly separable data. \n\t\\item Parameters are from a discriminative interpretation % end of slide set 7\n\t\\item To train, you go through the data until the held-out accuracy maxes out. % end of slide set 7\n\t\\item  Just moving the plane to satisfy your labels.\n\t\t\tThen project new sample into this space and see which side it was on.  % week 6 audio\n\t\\item Note you can scale your $w$ (weight) vector(s) by any constant because all you care about is sign($w \\cdot x$).\n\t\tThis rescales your gamma by that constant too! \\hfill \\\\\n\t\\item Bias allows you to have decision boundaries that don't go through the origin.  \n\t\tYou could us any number, but 1 is used by convention. \n\t\\item Go through the data point by point, not feature by feature.\n\t\tFeature by feature would assume independence of the features (bad).  % Week 6 audio.  \n\t\\item If linearly separable, it will converge.  And we will know how fast it will converge.  % Week 6 audio\n\\end{itemize}\n\n\\subsection{Properties of Perceptron}\n\\underline{Separability}: some parameters get the training set perfectly correct. \\hfill \\\\\n\\underline{Convergence}: if the training is separable, the perceptron will eventually converge. \\hfill \\\\\n\\underline{Mistake Bound}: the maximum number of mistakes (for the binary case) is related to the \nmargin or degree of separability: $mistakes \\leq \\frac{R^2}{\\gamma^2}$. \\hfill \\\\\n\\hfill \\\\  \\hfill \\\\\n\nSort of inspired by what might happen in a human brain.\nNeurons send info.  A module sums them up \\& produces a result. \n\n\n\\subsection{Problems with the Perceptron}\n\\includegraphics[width=2.8in]{figures/perceptron_problems.pdf}\nLines might not be optimal: issues with generalizibility. \\hfill \\\\\nLeads into SVMs. \\hfill \\\\\n\n \\subsection{Linear Classifiers}\n Inputs are feature values.  \\hfill \\\\\n Each feature has a weight.  \\hfill \\\\\n Sum is the activation.  activation$_w(x) = \\sum_i w_i x_i = w \\cdot x$  \\hfill \\\\\n If the activation is positive, chose output class 1.  \\hfill \\\\\n If the activation is negative, chose output class 2.  \\hfill \\\\\n \n \\includegraphics[width=1.5in]{figures/linear_classifier_cartoon.pdf}  \\hfill \\\\\n \n For a binary decision rule:   \\hfill \\\\\n In the space of feature vectors: \n \\begin{itemize}\n \t\\item examples are points\n\t\\item any weight vector is a hyperplane\n\t\\item one side corresponds to y = +1\n\t\\item the other side corresponds to y = -1\n\t\\item ??? The $w \\cdot x = 0$ is the solution to the line.\n \\end{itemize}\n \n \\includegraphics[width=1.5in]{figures/binary_decision_rule.pdf} \\hfill \\\\\n The black line is the decision boundary.     \\hfill \\\\\n $w$ is a vector normal to the decision boundary, and points towards the + label points. 
\n \n \\subsubsection{Binary Perceptron Algorithm}\n \\begin{itemize}\n \t\\item start with zero weights: $w=0$\n\t\\item for $ t = 1 \\dots T$ (T passes over the data):\n\t\t\\begin{itemize}\n\t\t\t\\item for $i = 1 \\dots n$ (each training example)\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item  Classify with current weights: \\hfill \\\\\n\t\t\t\t\t$ y = sign(w \\cdot x^i)$ \\hfill \\\\\n\t\t\t\t\t(sign($z$)  is $+1$ if $z > 0$, else -1) \\hfill \\\\\n\t\t\t\t\\item if correct (i.e. $y = y^i$), don't change weights.\n\t\t\t\t\\item if it was wrong, update with $w = w + y^i x^i$\n\t\t\t\\end{itemize}\n\t\t\\end{itemize}\n \\end{itemize}\nFigure showing \"you got it wrong:\"  \\hfill \\\\\n \\includegraphics[width=1.0in]{figures/binary_perceptron_rule.pdf}  \\hfill \\\\\n The -1 is the $y^i$ in the equation above.   \\hfill \\\\\n   \\hfill \\\\\n  \n \\includegraphics[width=3.2in]{figures/perceptron_chugging_example.pdf}\n \n \\underline{$w \\cdot x$ and the boundary between positive and negative answers} \\hfill \\\\\nIf a point has w * x = 1000, a small change in x might change w * x' to 999, or 1001, but it surely won't make w * x a negative value. On the other hand, if w * x = 0.0001, even tiny changes to x might make w * x' negative. And by extension, where w * x = 0, even the tiniest change might make w * x' change sign. So the values where w * x = 0 are those that are on the boundary between positive and negative examples.[1]\n\nSo \\textbf{the equation w * x = 0 defines the boundary between the positive and negative region}. Now let's unpack that statement. w is a fixed vector, while x is a variable point. So think of w = [w1, w2] as constants, and x = [x1, x2] as variables. The equation w * x = 0 is just another way of writing the equation w1 x1 + w2 x2 = 0. But this is a linear equation in two variables, so it defines a line. You can algebraically solve the equation for x2, and that gives you the \"standard form\" of the equation of a line, which you can then draw.\n\n[1] The boundary of a region is defined as the set of points where even points very close by can be outside that region.\n\n \n \\subsection{Multiclass Decision Rule}\n \n If we have more than two classes: \n \\begin{itemize}\n \t\\item we have a weight vector for \\textbf{each} class: $w_y$\n\t\\item we calculate an activation for each class: \\hfill \\\\\n\t\tactivation$_w(x,y) = w_y \\cdot x$\n\t\\item the highest activation wins:  \\hfill \\\\\n\t\t$y^* = \\argmax_y(activation_w(x,y))$ \\hfill \\\\\n\t\t\"win the vote\" \n \\end{itemize}\n \nFor each point, look at all 3 classes and see which is farthest from the hyperplane.\nYou can treat the dot product (+ bias) as a confidence: how far from the hyperplane you are.  % week 6 audio. \n\\hfill \\\\ \n\\hfill \\\\\n \n\\includegraphics[width=1.5in]{figures/multiclass_decision_rule_planes.pdf}\n \n\\includegraphics[width=2.5in]{figures/perceptron_multiclass--win_the_vote.pdf}\n\n \\subsubsection{Multiclass Perceptron Algorithm}\n \\begin{itemize}\n \t\\item start with zero weights: $w_y=0$\n\t\\item for $ t = 1 \\dots T$ (T passes over the data):\n\t\t\\begin{itemize}\n\t\t\t\\item for $i = 1 \\dots n$ (each training example)\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item  Classify with current weights: \\hfill \\\\\n\t\t\t\t\t(no more $ y = sign(w_y \\cdot x^i)$) \\hfill \\\\\n\t\t\t\t\tinstead:  $y= \\argmax_y w_y \\cdot x^i$ \\hfill \\\\\n\t\t\t\t\\item if correct (i.e. 
$y = y^i$), don't change weights.\n\t\t\t\t\\item if it was wrong, update two vectors: \\hfill \\\\\n\t\t\t\t\t$w_{y^i} = w_{y^i} + x^i$ \\hfill \\\\\n\t\t\t\t\tIf class 2 was the right label, add $x^i$ to $w_2$. \\hfill \\\\\n\t\t\t\t\tAlso want to push down the wrong classification:  \\hfill \\\\\n\t\t\t\t\t(We didn't have to push one down in the binary case because adding to one side pushed down the other.)\n\t\t\t\t\t$w_y = w_y - x^i$ \\hfill \\\\\n\t\t\t\\end{itemize}\n\t\t\\end{itemize}\n \\end{itemize}\n \\includegraphics[width=1.0in]{figures/multi_perceptron_rule.pdf}\n \n \\subsection{Linear Separability}\n \\subsubsection{binary case}\nRecall $ \\displaystyle  ||x||_2 = \\sqrt{\\sum_i x_i^2}$ \n \n The data is linearly separable with margin $\\gamma$ if:   \\hfill \\\\\n$\\exists w . \\forall t . y^t (w \\cdot x^t) \\geq \\gamma > 0$.  \\hfill \\\\\nPlain English: the data is linearly separable if there exists a w that has a margin greater than zero for all points $t$.  \\hfill \\\\\nNote: for $y^t = 1$, $w \\cdot x^t \\geq \\gamma$ and for $y^t = -1$, $w \\cdot x^t \\leq -\\gamma$.  \\hfill \\\\\nPlain English: points having label = 1 have a dot product that is greater than $\\gamma$, and points that have label = -1 have a dot product that is more negative than $-\\gamma$.  \\hfill \\\\\n \\includegraphics[width=1.0in]{figures/lin_sep_margin.pdf}\n \n \\subsubsection{maximum number of mistakes for training linearly separable binary data}\nHere, assume the data is separable with margin $\\gamma$ and the weight vector is a unit vector: \\hfill \\\\\nIn math notation, this is: $\\exists w^*$ such that $||w^*||_2 = 1$ and $\\forall t. y^t(w^* \\cdot x^t) \\geq \\gamma$. \\hfill \\\\\nRecall that you can scale your $w$ (weight) vector(s) by any constant because all you care about is sign($w \\cdot x$),\nbut that this scales your $\\gamma$.  You are just multiplying the equation above by a constant.  \\hfill \\\\\n\nAlso assume some maximum radius R for your data points:  \n$\\forall t. ||x^t||_2 \\leq R$ \\hfill \\\\\nThen the number of mistakes (parameter updates) made by the perceptron is bounded by \n$\\displaystyle mistakes \\leq \\frac{R^2}{\\gamma^2}$. \\hfill \\\\\nFor full inductive proof, see slides.  \nStrategy: let $w^k$ be the weights after the k-th update (mistake).  \nThen $k^2 \\gamma^2 \\leq ||w^k||_2^2 \\leq k R^2$ \\hfill \\\\\nTherefore $k \\leq \\frac{R^2}{\\gamma^2}$.  \\hfill \\\\\n\\textbf{If there is a linear separator, the Perceptron will find it!} \\hfill \\\\\n \\hfill \\\\\n \nThe \\# of mistakes you need to make for a perceptron to converge is bounded by the ratio $R^2/\\gamma^2$. \\hfill \\\\\n \\hfill \\\\\n\nWhat if gamma is small?\n    We are starting to violate the linear separability assumption. \n    We shouldn't trust the perceptron in this regime.\n\n\n\\subsection{Logistic Regression \\& Perceptron similarities}\n\\underline{Logistic regression}:  \\hfill \\\\\nIn vector notation, y is in the set $\\{ 0, 1\\}$. \\hfill \\\\\n$w = w + \\nu \\sum_j [y^j - P(y^j | x^j, w)] x^j$ \\hfill \\\\\n\n\\underline{Perceptron}:  \\hfill \\\\\nWhen y is in $\\{ 0, 1\\}$: \\hfill \\\\\n$w = w + [y^j - sign^0(w \\cdot x^j)] x^j$  \\hfill \\\\\nNote: sign$^0(z) =  + 1$ if $z > 0$ and 0 otherwise.  \\hfill \\\\\n\\hfill \\\\\nDifferences:   \\hfill \\\\\n1. Online vs. batch learning.  Logistic regression is batch, Perceptron is on-line.  \\hfill \\\\\n2. Logistic is probabilistic and Perceptron is error-driven. 
\n \n\n\n", "meta": {"hexsha": "43adb2b82242d6ff77e88685c3a1f0b69520d19b", "size": 9540, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/perceptrons.tex", "max_stars_repo_name": "JanetMatsen/Machine-Learning", "max_stars_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 25, "max_stars_repo_stars_event_min_datetime": "2016-02-07T23:35:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-26T05:13:33.000Z", "max_issues_repo_path": "tex/perceptrons.tex", "max_issues_repo_name": "JanetMatsen/Machine-Learning", "max_issues_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/perceptrons.tex", "max_forks_repo_name": "JanetMatsen/Machine-Learning", "max_forks_repo_head_hexsha": "12e1f701eb7de89b97d5caffe86b0267731e4cb5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2016-08-29T00:15:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-06T22:36:19.000Z", "avg_line_length": 50.2105263158, "max_line_length": 546, "alphanum_fraction": 0.6972746331, "num_tokens": 2933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7634837743174788, "lm_q1q2_score": 0.5581513628596075}}
{"text": "\\input{../../UCLHeader.tex}\n\\input{../../UCLCommands.tex}\n\\begin{document}\n\\title{Quantum Error Correction - Lecture 5}\n\\author{with Dan Browne}\n\\maketitle\n\\tableofcontents\n\n\\section{Introduction} \nIn this lecture, we shall introduce the concept of fault-tolerant quantum computing. We will especially focus on the threshold theorem and emphasise how important efficient scaling methods becomes for fault-tolerant computing. \n\n\\section{The Clifford group}\nThe concept of a Clifford algebra might sound related. This is a type of associative algebra that generalises the notion of real numbers, complex numbers, quaternions and so on. \n\nThe Clifford group is something else, however. It is the set of unitary gates that maps to stabiliser states. \n\nWhat then are stabiliser states? They are states, or codewords in a stabiliser code with $k = 0$. That is, the stabiliser code that does not encode any qubits. This code has the property $n = m$, thus the number of independent stabiliser generators is $m = n$. \n\nFor example, the very simplest code contains the following states and stabilisers. \n\n\\begin{tabular}{ c c}\n\\hline \nState & Stabiliser \\\\ \\hline\n$\\ket{0}$ & $Z$ \\\\ \n$\\ket{1}$ & $- Z$ \\\\ \n$\\ket{00} + \\ket{11}$ & $XX, ZZ$ \\\\ \n$\\ket{000} + \\ket{111}$ & $XXX, ZZI, IZZ$ \\\\ \n\\end{tabular}\n\nFor example, the Hadamard gate acts as a stabiliser in that it maps a codeword to the same codespace. \n\\beq\nH \\ket{0} \\rightarrow \\ket{+}\n\\eeq\n\\beq\nH \\ket{1} \\rightarrow \\ket{-}\n\\eeq\nWe find that $H$ is a Clifford group unitary. So it the $Z$ operator, \n\\beq\nZ \\ket{0} = \\ket{0}\n\\eeq\n\\beq\nZ \\ket{1} = - \\ket{1}\n\\eeq\nNote the global phase here. It does not affect the stabiliser group since the states $\\ket{1}$ and $-\\ket{1}$ are experimentally indistinguishable. \nThe CNOT gate is another example of a member of the Clifford group. \n\nNow that we have listed a few gates, we might perhaps rather have a simple way to demonstrate the characteristics of Clifford group gates. We can do so by considering a general operator $\\sigma$ as part of the Clifford group and then use a set of unitary matrices $\\{U\\}$ in a similarity transformation to map the elements of one Puali group to other elements in the Pauli group. That is, \n\\beq\n\\sigma \\rightarrow \\sigma' = U\\sigma U^\\dagger\n\\eeq\nwhere $\\sigma$ is some Pauli operator (holds for the $N$-qubit case as well). That is, the Clifford group is the group of unitary matrices that maps elements in the Pauli group to another element in the Pauli group. \n\n\nA convenient fact which we shall now prove is that this operation is equivalent to conjugation of the operators. So, a stabiliser $S_j$ fulfils the following relationship\n\\beq\nS_j \\ket{\\psi} = \\ket{\\psi}\n\\eeq\nfor all $j$. We say that $S_j \\in $ the stabiliser group. Consider now \n\\beq\nU S_j \\ket{\\psi} = U \\ket{\\psi}\n\\eeq\nBut since we know that $U^\\dagger U = \\identity$, we can insert this into the expression to find, \n\\beq\nU S_j U^\\dagger U \\ket{\\psi} = U \\ket{\\psi}\n\\eeq\nwhich again holds for all $j$. In fact, $US_j U^\\dagger$ is just another stabiliser. \n\nIn order to prove that all the objects that we can obtain for $US_jU$ with different unitary matrices form a group, we need to prove the group properties. First, we have the identity. This is quite clearly filled by the identity matrix $\\identity$. Secondly, there exists an inverse since we are dealing with unitary matrices. 
By taking the conjugate transpose, we obtain the inverse. Finally, the group is closed because, as we showed, the product $U S_j U^\\dagger$ still maps the state $U \\ket{\\psi}$ to itself. Thus, $US_j U^\\dagger$ is also a stabiliser, and the group is closed. \n\nLet us have a look at another gate which is a member of the Clifford group. This is the $S$-gate, given by \n\\beq\nS = \\bpmat 1 & 0 \\\\ 0 & i \\epmat\n\\eeq\nWe find that $S \\in$ the Clifford group, since \n\\beq\nS Z S^\\dagger = Z\n\\eeq\nThat is, $S$ maps $Z$ to itself, and thus stays in the Clifford group. This can be easily confirmed since $[S, Z]= 0$. \n\nThe $S$ gate has the following effect. \n\\beq\nS \\ket{+} = \\ket{+i}\n\\eeq\n\\beq\nS \\ket{+i} = \\ket{-}\n\\eeq\nOn the Bloch sphere, the $S$ gate can be thought of as a rotation of 90 degrees on the equator. \n\nFor a single qubit, the Clifford group is generated by $H$ and $S$. This is all we need. \n\nRecall that the stabiliser states in the single qubit group lie on an octahedron in the Bloch sphere - one for each axial extremal point. Thus, the symmetry group for the stabilisers is the symmetry group of the octahedron. Here, all $X,Y,Z$ are rotations of 180 degrees around a certain axis. The Hadamard gate is a reflection about a diagonal line. \n\nHowever, we want to be able to go from the single qubit Clifford group to the $N$-qubit Clifford group. It turns out that it is generated by the set\n\\beq\n\\{H, S, CNOT\\}\n\\eeq\nwhich must act on every qubit and every pair of qubits. Note the similarity between this set and the universal quantum computing gate set. \n\nWe now wish to prove that the CNOT gate is a member of the Clifford group. It suffices to check that \n\\beq\nU \\sigma U^\\dagger = \\sigma' \n\\eeq\nwhere $\\sigma$ is some Pauli operator. We use the following notation $CNOT_{CT}$ where $C$ is the control qubit and $T$ is the target qubit. We know that CNOT is self-inverse. We can check that \n\\beq\nCNOT_{CT} ( Z\\otimes I) CNOT_{CT} = Z \\otimes I\n\\eeq\nThe control commutes with $Z$ gates. We also find that \n\\beq\nCNOT ( X \\otimes I ) CNOT = X\\otimes X\n\\eeq\n\\beq\nCNOT(I \\otimes Z ) CNOT = Z\\otimes Z\n\\eeq\nNote that the commutator relations are preserved. Finally, \n\\beq\nCNOT(I\\otimes X)CNOT = I \\otimes X\n\\eeq\nSince we proved this for generators of the Pauli group, we know it applies to the entire group. \n\n\\section{Gottesman-Knill Theorem}\nTheorem: The Clifford group circuits acting on stabiliser states are easy to simulate classically. Any quantum circuit on $n$ qubits acting on an initial stabiliser state consisting of a polynomial in $n$ different gates can be effectively simulated in $\\mathcal{O}(n)$ gates on a classical computer. \n\n\\emph{Proof idea}: We shall here not prove the entire theorem, but rather give a flavour of the proof ingredients. \n\\begin{itemize}\n\\item Represent states by stabiliser generators and show that we end up with unitary update rules. \n\\item Then, must show that the measurement update rules are all efficiently simulatable. \n\\end{itemize}\n\nThe consequences are substantial. It shows that Clifford gates, stabiliser states and the Pauli group are not enough for universal quantum computing. \n\nSo given stabiliser states, Clifford unitaries and Pauli measurements, what must we add for true universal computing? \n\nIt turns out that we need to add a single non-Clifford unitary. 
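The conjugation relations above are easy to verify numerically; the following numpy sketch (illustrative, not from the lecture) checks that $S$, $H$ and CNOT map Pauli operators to Pauli operators.
\begin{verbatim}
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1]); S = np.diag([1, 1j])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])

def conj(U, P):                     # U P U^dagger
    return U @ P @ U.conj().T

assert np.allclose(conj(S, Z), Z)                              # S Z S+ = Z
assert np.allclose(conj(H, Z), X)                              # H Z H+ = X
assert np.allclose(conj(CNOT, np.kron(Z, I)), np.kron(Z, I))   # Z on control
assert np.allclose(conj(CNOT, np.kron(X, I)), np.kron(X, X))   # X on control
assert np.allclose(conj(CNOT, np.kron(I, Z)), np.kron(Z, Z))   # Z on target
assert np.allclose(conj(CNOT, np.kron(I, X)), np.kron(I, X))   # X on target
\end{verbatim}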
We can consider using a non-stabiliser state as an ancilla, or performing some non-Pauli measurements to achieve this. It is usually enough to add just one of these components. \n\nWe paraphrase the following theorem:\n\n\\textbf{Theorem}: (Nebe, Rains, Sloane) Given a non-Clifford unitary $U$, the set\n\\beq\n\\{U, U^\\dagger, CNOT, H, S\\}\n\\eeq\nis approximately universal. \n\nIt means that we can approximate universality with other sets. A gate set is approximately universal if any unitary $U$ can be approximated to arbitrary accuracy by a gate sequence contained in the set. \n\nSo, by using the set above, we can \\emph{efficiently} simulate any other set, thus gaining approximate universality. As long as we are happy with a little bit of error. \n\n\\section{The Solovay-Kitaev theorem}\nAgain, we paraphrase this theorem: Given a set of (single qubit) unitaries which is approximately universal for single qubit quantum computing, we can represent an arbitrary single qubit unitary to arbitrary accuracy \\emph{efficiently}. \n\nThat is, if $U$ is our target unitary, $S$ is our unitary achieved as a sequence of gates, and we want\n\\beq\n\\norm{S - U}< \\epsilon\n\\eeq\nwhere the norm is\n\\beq\n\\norm{S - U } = \\max_{\\ket{\\psi}} \\norm{S \\ket{\\psi} - U \\ket{\\psi} }\n\\eeq\nIt turns out that it suffices to use a sequence of length $\\mathcal{O} \\left( \\log^c{\\frac{1}{\\epsilon}} \\right)$ where $c \\simeq 1.7$. Here, the scaling is the important thing -- it shows that the process is efficient. \n\nSo we just need one such approximately universal gate set to efficiently approximate anything. \n\nWhat then, shall we use? We can make use of the so-called $T$ gate. \nIt is defined as\n\\beq\nT = \\sqrt{S}\n\\eeq\nIf we write\n\\beq\nS = \\bpmat 1 & 0 \\\\ 0 & e^{i \\pi/2} \\epmat\n\\eeq\nThen\n\\beq\nT = \\bpmat 1 & 0 \\\\ 0 & e^{i \\pi /4} \\epmat\n\\eeq\nIt is also known as a $\\pi/8$ gate because we could write\n\\beq\nT= e^{i \\pi /8}\\bpmat e^{- i \\pi/8} &0 \\\\ 0 & e^{i \\pi/8}\n\\epmat\n\\eeq\nHowever, the name $T$-gate is the more common jargon. This is a non-Clifford gate with many more properties. We know that\n\\beq\nT^2 = S\n\\eeq\nAlso, for $T $ and $T^\\dagger$, it follows\n\\beq\nT^\\dagger = T^\\dagger T^\\dagger T = S^\\dagger T = S^\\dagger S^\\dagger S T = ZST\n\\eeq\nSo we can obtain the inverse by multiplication of other gates. This becomes very clear if we view it in the Bloch sphere. \n\nAs a result, we can conclude that the set $\\{CNOT, H, T\\}$ is an approximately universal set. For fault tolerance, it is very useful to have a universal gate set. In general, we can approximate any small evolution as \n\\beq\ne^{i Ht} \\simeq I + i Ht\n\\eeq\nThat is, we divide it up into gates. \n\n\\section{Fault-Tolerant Quantum Computing}\nWe first introduce the Threshold Theorem. There exist many formulations of it, but we shall use a simple error model to demonstrate its purpose. \n\n\\textbf{Threshold Theorem}: Assume the following simplified error model: every component in our computation (e.g. the preparation of input states, unitary gates, and measurements) can fail. \n\nAssume that every component succeeds with probability $1-p$ and fails with probability $p$. We take this probability to be very small $p\\ll 1$. \n\nHere, fail means that the gate is followed by an arbitrary error -- anything can happen. Even if we use quantum error correcting codes, concatenation and frequent error correction, we still need a fault-tolerant construction to stop errors from spreading. 
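The $T$-gate identities above can be confirmed with a few lines of numpy (again only an illustrative check):
\begin{verbatim}
import numpy as np

Z = np.diag([1, -1])
S = np.diag([1, np.exp(1j * np.pi / 2)])
T = np.diag([1, np.exp(1j * np.pi / 4)])

assert np.allclose(T @ T, S)                  # T^2 = S
assert np.allclose(T.conj().T, Z @ S @ T)     # T+ = Z S T
\end{verbatim}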
\n\nWe find that certain methods cause errors to spread to the extent that they multiply. We thus need to prevent single errors from turning into multiple errors. \n\nGiven all this, efficient, reliable quantum computation is possible provided\n\\beq\np<p_{th}\n\\eeq\nwhere $p_{th}$ is a threshold rate. Then, the errors will be suppressed through correction. \n\nHow would we do this? It turns out that we only need a code that can correct a single error.  The key idea is that we use a distance 3 code that allows us to correct one error and call the set of qubits supporting the code a code block. \n\nFor example, we can take the Steane code ($n = 7$, $d = 3$, $k = 1$). Here, let 7 qubits represent a code block - this is one single encoded qubit. Then, in order to implement a fault-tolerant design, we would try and ensure that the probability of more than one error in each code block is heavily suppressed. \n\nSo fault-tolerant constructions want to design circuits such that one component failure leads to at most one error on one single qubit in one separate code block. \n\nThe key idea is the following. It means that if this is achieved, a code block will only fail to have its error corrected if two or more independent component failures occur. This notion of \\textbf{independence} is very important. \n\nIf a component failure rate is $p$, then the probability of two independent failures is $p^2$, which is indeed very small. We can do the same kind of analysis with $d>3$ codes. \n\nThis means estimating the probability of a code block failure occurring. It will be \n\\beq\np^2 \\times \\mbox{no. of pairs of error locations}\n\\eeq\nwhere the second term simply means all the possible things that can go wrong. We must consider every component as an error location, e.g. the number of pairs of failed components which cause code blocks to fail.\n\nFor example, we can look at the encoded CNOT gate in the Steane code. Since the CNOT gate operates on two logical qubits, it must act on two separate code blocks. After we act with the CNOT gate, we must measure each code block for an error and then correct them individually. \n\nCounting all pairs of errors that can occur gives us about $\\sim 10^4$. If we then calculate the probability of an error occurring times the number of ways in which this can happen, we get\n\\beq\ncp = 10^4 \\cdot 10^{-4} = 1\n\\eeq\nWe would want this quantity to be really small, preferably smaller than 1. \n\nBut note that our one layer of encoding has brought us from $p$ to $p^2$. We write\n\\beq\ncp^2\n\\eeq\nwhere $c$ denotes pairs of error locations. \n\n\\section{Code concatenation}\nWe replace every qubit by a code-block and every component by an encoded CNOT gate. We worked out before that we got $cp^2$ for one layer. For two layers of encoding, we get\n\\beq\ncp^2 \\rightarrow c \\left( cp^2 \\right)^2 = c^3 p^4 = \\frac{(cp)^4}{c}\\eeq\nThen, we go from $2 \\rightarrow 2\\cdot 7 \\rightarrow 2 \\cdot 7^2$ qubits in the case of the Steane code. \n\nWe can then repeat the concatenation. After $n$ layers of encoding, we have\n\\beq\n\\frac{(cp)^{2^n}}{c}\n\\eeq\nNow, if $cp<1$, this quantity gets exponentially suppressed. However, we do end up having to use $2\\cdot 7^n$ qubits for the encoding. We can show, though, that the error is suppressed faster than the number of qubits grows. \n\nMore generally, let $d$ be the size of a code block. We then get a number of $d^n$ physical qubits and gates that scale with $d^n$. 
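A back-of-the-envelope sketch in Python (the numbers are illustrative) of how concatenation suppresses the logical error rate as $(cp)^{2^k}/c$ while the qubit overhead grows only as $d^k$:
\begin{verbatim}
c, p, d = 1e4, 1e-5, 7            # error-location pairs, physical rate, block size
for k in range(1, 5):             # k levels of concatenation
    p_logical = (c * p) ** (2 ** k) / c
    print(k, p_logical, d ** k)   # doubly exponential suppression vs d^k qubits
\end{verbatim}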
It is not too bad, considering that the error is suppressed with exponent $2^n$, which is much faster. \n\nConsider then a quantum algorithm with problem size $n$ and a polynomial number $f(n)$ of components. We can accept an overall error $\\calE$, where $\\calE$ must not scale with $n$. Thus, if we repeat the algorithm, we bring down the error. This is the case for e.g. Grover's unstructured search algorithm. Note that by problem size, we mean the parameter that determines the complexity of the problem. \n\nThe encoded component error at the top layer is $p$. Then, the probability of all components succeeding for a number of $f(n)$ operations is\n\\beq\n(1- p)^{f(n)} \\simeq 1 - pf(n)\n\\eeq\nSo here, we want $pf(n) \\ll \\calE$. To explore this more closely, set $p = \\epsilon$, where\n\\beq\n\\epsilon = \\frac{(cp)^{2^n}}{c}\n\\eeq\nis the quantity we considered before. So then, for $k$ levels of encoding (we switch the name of the variable since $n$ is already taken), we get\n\\beq\n\\frac{(cp)^{2^k}}{c} f(n) \\ll \\calE\n\\eeq\nWe can solve this for $k$, see Nielsen and Chuang Section 10.6 for details. However, the overhead from encoding is $d^k$, and we can show that\n\\beq\nd^k = \\mathcal{O} \\left( \\mathrm{poly} \\left( \\log ( f(n)/\\calE ) \\right) f(n) \\right)\n\\eeq\nwhich is polylogarithmic in $n$ up to the $f(n)$ factor, so it's actually pretty good. Thus, provided we can encode everything, all our gates and all the qubits, we can have fault-tolerant computing. \n\nHowever, the architecture for such a system is difficult, and the overhead is substantial. \n\nQuestion: What is meant by overhead? I don't seem to have written that down. \n\n\n\n\n\n\n\n\n\\end{document}", "meta": {"hexsha": "bc115628a66a4a02b296fab25669488e428a7213", "size": 15115, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Error Correction Lecture 7 March 2016.tex", "max_stars_repo_name": "sqvarfort/Quantum-Error-Correction-Notes", "max_stars_repo_head_hexsha": "3628ece1bf999b4ed57ce1badb376bd91b40dc26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2016-04-01T04:53:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-09T07:01:28.000Z", "max_issues_repo_path": "Error Correction Lecture 7 March 2016.tex", "max_issues_repo_name": "sqvarfort/Quantum-Error-Correction-Notes", "max_issues_repo_head_hexsha": "3628ece1bf999b4ed57ce1badb376bd91b40dc26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Error Correction Lecture 7 March 2016.tex", "max_forks_repo_name": "sqvarfort/Quantum-Error-Correction-Notes", "max_forks_repo_head_hexsha": "3628ece1bf999b4ed57ce1badb376bd91b40dc26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.4826388889, "max_line_length": 578, "alphanum_fraction": 0.7421104863, "num_tokens": 4258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7310585786300048, "lm_q1q2_score": 0.5581513549910876}}
{"text": "\\section{The identity types of pushouts}\n\n\\subsection{Characterizing families of maps over pushouts}\n  \n\\begin{defn}\n    Consider a span $\\mathcal{S}$\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A & S \\arrow[l,swap,\"f\"] \\arrow[r,\"g\"] & B,\n    \\end{tikzcd}\n  \\end{equation*}\n  and consider $P,Q:\\mathsf{Fam\\usc{}pushout}(\\mathcal{S})$.\n  A morphism of descent data from $P$ to $Q$ over $\\mathcal{S}$ is defined to be a triple $(h_A,h_B,h_S)$ consisting of\n  \\begin{align*}\n    h_A & : \\prd{x:A} P_A(x)\\to Q_A(x) \\\\\n    h_B & : \\prd{y:B} P_B(y)\\to Q_B(y)\n  \\end{align*}\n  equipped with a homotopy $h_S$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=huge]\n      P_A(f(s)) \\arrow[r,\"h_A(f(s))\"] \\arrow[d,swap,\"P_S(s)\"] & Q_A(f(s)) \\arrow[d,\"Q_S(s)\"] \\\\\n      P_B(g(s)) \\arrow[r,swap,\"h_B(g(s))\"] & Q_B(g(s))\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes for every $s:S$. We write $\\mathsf{hom}_{\\mathcal{S}}(P,Q)$ for the type of morphisms of descent data over $\\mathcal{S}$.\n\n  An equivalence of descent data from $P$ to $Q$ is a morphism $h$ such that $h_A$ and $h_B$ are families of equivalences.\n\\end{defn}\n\n\\begin{rmk}\\label{rmk:id-hom-Fam-pushout}\n  The identity type $h=h'$ of $\\mathsf{hom}_{\\mathcal{S}}(P,Q)$ is characterized as the type of triples $(H_A,H_B,H_S)$ consisting of\n  \\begin{align*}\n    H_A & : \\prd{a:A} h_A(a)\\htpy h'_A(a) \\\\\n    H_B & : \\prd{b:B} h_B(b)\\htpy h'_B(b)\n  \\end{align*}\n  and a homotopy $K_S(s)$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      h_B(g(s))\\circ P_S(s) \\arrow[d] \\arrow[r] & Q_S(s)\\circ h_A(f(s)) \\arrow[d] \\\\\n      h'_B(g(s))\\circ P_S(s) \\arrow[r] & Q_S(s)\\circ h'_A(g(s))\n    \\end{tikzcd}\n  \\end{equation*}\n  of homotopies commutes for every $s:S$.\n\\end{rmk}\n\n\\begin{defn}\\label{defn:hom-Fam-pushout-map}\n  Consider a commuting square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      S \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & B \\arrow[d,\"j\"] \\\\\n      A \\arrow[r,swap,\"i\"] & X\n    \\end{tikzcd}\n  \\end{equation*}\n  with $H:i\\circ f \\htpy j \\circ f$, and let $P$ and $Q$ be type families over $X$. We define a map\n  \\begin{equation*}\n    \\Big(\\prd{x:X}P(x)\\to Q(x)\\Big) \\to \\mathsf{hom}_{\\mathcal{S}}(\\mathsf{desc\\usc{}fam}(P),\\mathsf{desc\\usc{}fam}(Q)).\n  \\end{equation*}\n\\end{defn}\n\n\\begin{constr}\n  Let $h:\\prd{x:X}P(x)\\to Q(x)$. Then we define\n  \\begin{align*}\n    h_A & : \\prd{a:A}P(i(a))\\to Q(i(a)) \\\\\n    h_B & : \\prd{b:B}P(j(b))\\to Q(j(b))\n  \\end{align*}\n  by $h_A(a,p)\\defeq h(i(a),p)$ and $h_B(b,q)\\defeq h(j(b),q)$. Then it remains to define for every $s:S$ a homotopy $h_S(s)$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=huge]\n      P(i(f(s))) \\arrow[r,\"h_A(f(s))\"] \\arrow[d,swap,\"\\mathsf{tr}_P(H(s))\"] & Q(i(f(s))) \\arrow[d,\"\\mathsf{tr}_Q(H(s))\"] \\\\\n      P(j(g(s))) \\arrow[r,swap,\"h_B(g(s))\"] & Q(j(g(s)))\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes. 
Note that every family of maps $h:\\prd{x:X}P(x)\\to Q(x)$ is natural in the sense that for any path $p:x=x'$ in $X$, there is a homotopy $\\psi(p,h)$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      P(x) \\arrow[r,\"h(x)\"] \\arrow[d,swap,\"\\mathsf{tr}_P(p)\"] & Q(x) \\arrow[d,\"\\mathsf{tr}_Q(p)\"] \\\\\n      P(x') \\arrow[r,swap,\"{h(x')}\"] & Q(x')\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes. Therefore we define $h_S(s)\\defeq\\psi(H(s),h)$.\n\\end{constr}\n\n\\begin{thm}\\label{thm:hom-Fam-pushout}\n  The map defined in \\cref{defn:hom-Fam-pushout-map} is an equivalence.\n\\end{thm}\n\n\\begin{proof}\n  We will first construct a commuting triangle\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=-3em]\n      \\phantom{\\mathsf{hom}_{\\mathcal{S}}(\\mathsf{desc\\usc{}fam}(P),\\mathsf{desc\\usc{}fam}(Q))} & \\prd{x:X}P(x)\\to Q(x) \\arrow[dl] \\arrow[dr] & \\phantom{\\mathsf{dep\\usc{}cocone}_{(i,j,H)}(x\\mapsto P(x)\\to Q(x))} \\\\\n      \\mathsf{dep\\usc{}cocone}_{(i,j,H)}(x\\mapsto P(x)\\to Q(x)) \\arrow[rr,densely dotted] & &\n      \\mathsf{hom}_{\\mathcal{S}}(\\mathsf{desc\\usc{}fam}(P),\\mathsf{desc\\usc{}fam}(Q))\n    \\end{tikzcd}\n  \\end{equation*}\n  Recall from \\cref{thm:dependent-pullback-property-pushout} that $X$ satisfies the dependent universal property, so the map on the left is an equivalence. Therefore we will prove the claim by showing that the bottom map is an equivalence.\n\n  In order to construct the bottom map, we first note that for any two maps $f:P(x)\\to Q(x)$ and $f':P(x')\\to Q(x')$ and any path $p:x=x'$, there is an equivalence\n  \\begin{equation*}\n    \\varphi(p,f,f'):\\Big(\\mathsf{tr}_{x\\mapsto P(x)\\to Q(x)}(p,f)=f'\\Big) \\simeq \\Big(\\prd{y:P(x)} f'(\\mathsf{tr}_P(p,y))=\\mathsf{tr}_Q(p,f(y))\\Big).\n  \\end{equation*}\n  The equivalence $\\varphi$ is defined by path induction on $p$, where we take\n  \\begin{equation*}\n    \\varphi(\\refl{},f,f')\\defeq \\mathsf{htpy\\usc{}eq}\\circ\\mathsf{inv}.\n  \\end{equation*}\n  Now we define the bottom map in the asserted triangle to be the map\n  \\begin{equation*}\n    (h_A,h_B,h_S)\\mapsto (h_A,h_B,\\lam{s}\\varphi(H(s),h_A(f(s)),h_B(g(s)),h_S(s))).\n  \\end{equation*}\n  Note that this map is an equivalence, since it is the induced map on total spaces of an equivalence.\n\n  It remains to show that the triangle commutes. By \\cref{rmk:id-hom-Fam-pushout} it suffices to construct families of homotopies\n  \\begin{align*}\n    K_A : \\prd{a:A} h_{i(a)}\\htpy h_{i(a)} \\\\\n    K_B : \\prd{b:B} h_{j(b)}\\htpy h_{j(b)}\n  \\end{align*}\n  and for each $s:S$ a homotopy $K_S(s)$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=13em]\n      h_{j(g(s))} \\circ \\mathsf{tr}_P(H(s)) \\arrow[d] \\arrow[r,\"{\\psi(H(s),h)}\"] & \\mathsf{tr}_Q(H(s)) \\circ h_{i(f(s))} \\arrow[d] \\\\\n      h_{j(g(s))} \\circ \\mathsf{tr}_P(H(s)) \\arrow[r,swap,\"{\\varphi(H(s),h_{i(f(s))},h_{j(g(s))},\\apd{h}{H(s)})}\"] & \\mathsf{tr}_Q(H(s))\\circ h_{i(f(s))}\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes. Of course, we take $K_A(a)\\defeq\\mathsf{htpy\\usc{}refl}$ and $K_B(b)\\defeq\\mathsf{htpy\\usc{}refl}$, so it suffices to show that\n  \\begin{equation*}\n    \\psi(H(s),h)\\htpy \\varphi(H(s),h_{i(f(s))},h_{j(g(s))},\\apd{h}{H(s)}).\n  \\end{equation*}\n  Now we would like to proceed by homotopy induction on $H:i\\circ f \\htpy j\\circ g$. 
However, we can only do so after we generalize the problem sufficiently to a situation where $H$ has free endpoints. It is indeed possible by homotopy induction to construct for every $f,g:S\\to X$ equipped with a homotopy $H:f\\htpy g$, every family of maps $h:\\prd{x:X}P(x)\\to Q(x)$ and every $s:S$, a homotopy\n  \\begin{equation*}\n    \\psi(H(s),h)\\htpy \\varphi(H(s),h_{f(s)},h_{g(s)},\\apd{h}{H(s)}).\\qedhere\n  \\end{equation*}\n\\end{proof}\n\n\\subsection{Characterizing the identity types of pushouts}\n\n\\begin{defn}\n  Consider a span $\\mathcal{S}$ equipped with $a:A$, and consider\n  $P:\\mathsf{Fam\\usc{}pushout}(\\mathcal{S})$ equipped with $p:P_A(a)$. We say that $P$ is \\define{universal} if for every $Q:\\mathsf{Fam\\usc{}pushout}(\\mathcal{S})$ the evaluation map\n  \\begin{equation*}\n    \\mathsf{hom}_{\\mathcal{S}}(P,Q)\\to Q_A(a)\n  \\end{equation*}\n  given by $h\\mapsto h_A(a,p)$ is an equivalence.\n\\end{defn}\n\n\\begin{lem}\n  Consider a pushout square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      S \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & B \\arrow[d,\"j\"] \\\\\n      A \\arrow[r,swap,\"i\"] & X\n    \\end{tikzcd}\n  \\end{equation*}\n  with $H:i\\circ f \\htpy j\\circ g$, and let $a:A$. Furthermore, let $P$ be the descent data for the type family $x\\mapsto i(a)=x$ over $X$. Then $P$ is universal.\n\\end{lem}\n\n\\begin{proof}\n  Since $\\mathsf{desc\\usc{}fam}$ is an equivalence, it suffices to show that for every type family $Q$ over $X$, the map\n  \\begin{equation*}\n    \\mathsf{hom}_{\\mathcal{S}}(\\mathsf{desc\\usc{}fam}(\\mathsf{Id}(i(a))),\\mathsf{desc\\usc{}fam}(Q))\\to Q(i(a))\n  \\end{equation*}\n  given by $h\\mapsto h_A(a,\\refl{i(a)})$ is an equivalence. \n  Note that we have a commuting triangle\n  \\begin{equation*}\n    \\begin{tikzcd}\n      \\prd{x:X}(i(a)=x)\\to Q(x) \\arrow[r] \\arrow[dr,swap,\"\\mathsf{ev\\usc{}refl}\"] &\n      \\mathsf{hom}_{\\mathcal{S}}(\\mathsf{desc\\usc{}fam}(\\mathsf{Id}(i(a))),\\mathsf{desc\\usc{}fam}(Q)) \\arrow[d,\"{h\\mapsto h_A(\\refl{i(a)})}\"] \\\\\n      & Q(i(a))\n    \\end{tikzcd}\n  \\end{equation*}\n  The map $\\mathsf{ev\\usc{}refl}$ is an equivalence by \\cref{thm:yoneda}, and the top map is an equivalence by \\cref{thm:hom-Fam-pushout}. Therefore it follows that the remaining map is an equivalence.\n\\end{proof}\n\n\\begin{thm}\n  Consider a pushout square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      S \\arrow[r,\"g\"] \\arrow[d,swap,\"f\"] & B \\arrow[d,\"j\"] \\\\\n      A \\arrow[r,swap,\"i\"] & X\n    \\end{tikzcd}\n  \\end{equation*}\n  with $H:i\\circ f \\htpy j\\circ g$, and let $a:A$. Furthermore consider a pair $(P,p_0)$ consisting of $P:\\mathsf{Fam\\usc{}pushout}(\\mathcal{S})$ and $p_0:P_A(a)$. If $P$ is universal, then we have two families of equivalences\n  \\begin{align*}\n    e_A & : \\prd{x:A}P_A(x)\\simeq (i(a)=i(x)) \\\\\n    e_B & : \\prd{y:B} P_B(y)\\simeq (i(a)=j(y)) \n  \\end{align*}\n  equipped with a homotopy $e_S$ witnessing that the square\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=huge]\n      P_A(f(s)) \\arrow[r,\"P_S(s)\"] \\arrow[d,swap,\"e_A(f(s))\"] & P_B(g(s)) \\arrow[d,\"e_B(g(s))\"] \\\\\n      (i(a)=i(f(s))) \\arrow[r,swap,\"\\lam{p}\\ct{p}{H(s)}\"] & (i(a)=j(g(s)))\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes for each $s:S$, and an identification $e_A(a,p_0)=\\refl{i(a)}$.\n\\end{thm}\n\n\\begin{thm}\n  Let $X$ be a pointed type with base point $x_0:X$. 
Then the loop space of $\\susp{X}$ is the initial type $Y$ equipped with a base point $y_0:Y$, and a pointed map\n  \\begin{equation*}\n    X \\to_\\ast (Y\\simeq Y).\n  \\end{equation*}\n\\end{thm}\n\n\\begin{proof}\n  The type of pairs $(Y,\\mu)$ consisting of a pointed type $Y$ and a pointed map $\\mu:X\\to_\\ast (Y \\simeq Y)$ is equivalent to the type of triples $(Y,Z,\\mu)$ consisting of a pointed type $Y$, a type $Z$, and a map $\\mu:X\\to (Y\\simeq Z)$.  \n\\end{proof}\n\n\\begin{cor}\n  The loop space of $\\sphere{2}$ is the initial type $X$ equipped with a point $x_0:X$ and a homotopy $H:\\idfunc\\htpy\\idfunc$.\n\\end{cor}\n\n\\begin{exercises}\n\\exercise Consider the suspension\n  \\begin{equation*}\n    \\begin{tikzcd}\n      P \\arrow[r] \\arrow[d] & \\unit \\arrow[d,\"\\south\"] \\\\\n      \\unit \\arrow[r,swap,\"\\north\"] & \\susp{P}\n    \\end{tikzcd}\n  \\end{equation*}\n  of a proposition $P$. Show that $(\\north=\\south)\\simeq P$. \n\\exercise Show that if $X$ has decidable equality, then $\\susp{X}$ is a $1$-type.\n\\exercise Consider a pushout square\n  \\begin{equation*}\n    \\begin{tikzcd}\n      A \\arrow[r] \\arrow[d,swap,\"f\"] & \\unit \\arrow[d,\"j\"] \\\\\n      B \\arrow[r,swap,\"i\"] & X\n    \\end{tikzcd}\n  \\end{equation*}\n  where $f:A\\to B$ is an embedding.\n  \\begin{subexenum}\n  \\item Show that there are equivalences\n  \\begin{align*}\n    (i(b)=i(y)) & \\simeq (b=y)\\ast \\fib{f}{b} \\\\\n    (i(b)=j(\\ttt)) & \\simeq \\fib{f}{b}\n  \\end{align*}\n  for any $b,y:B$.\n  \\item Use \\cref{ex:trunc-join-with-prop} to show that if $B$ is a $k$-type, then so is $X$, for any $k\\geq 0$.\n  \\end{subexenum}\n\\exercise Consider the join\n  \\begin{equation*}\n    \\begin{tikzcd}\n      P \\times X \\arrow[r,\"\\proj 2\"] \\arrow[d,swap,\"\\proj 1\"] & X \\arrow[d,\"\\inr\"] \\\\\n      P \\arrow[r,swap,\"\\inl\"] & \\join{P}{X}\n    \\end{tikzcd}\n  \\end{equation*}\n  of a proposition $P$ and an arbitrary type $X$.\n  \\begin{subexenum}\n  \\item Show that for any $x,y:X$ there is an equivalence\n    $e:(\\inr(x)=\\inr(y))\\simeq \\join{P}{(x=y)}$ for which the triangle\n  \\begin{equation*}\n    \\begin{tikzcd}[column sep=tiny]\n      \\phantom{\\join{P}{(x=y)}} & (x=y) \\arrow[dl,swap,\"\\apfunc{\\inr}\"] \\arrow[dr,\"\\inr\"] & \\phantom{(\\inr(x)=\\inr(y))} \\\\\n      (\\inr(x)=\\inr(y)) \\arrow[rr,swap,\"e\"] & & \\join{P}{(x=y)}\n    \\end{tikzcd}\n  \\end{equation*}\n  commutes.\n  \\end{subexenum}\n\\exercise \\label{ex:trunc-join-with-prop}Show that if $X$ is a $k$-type, then so is $\\join{P}{X}$.\n\\end{exercises}\n", "meta": {"hexsha": "5383d44b4d3a6e5ca6fd5567cd90dfcbf4052796", "size": 11798, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/id-pushout.tex", "max_stars_repo_name": "hemangandhi/HoTT-Intro", "max_stars_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 333, "max_stars_repo_stars_event_min_datetime": "2018-09-26T08:33:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T23:50:15.000Z", "max_issues_repo_path": "Book/id-pushout.tex", "max_issues_repo_name": "hemangandhi/HoTT-Intro", "max_issues_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-06-18T04:16:04.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-16T15:27:01.000Z", "max_forks_repo_path": "Book/id-pushout.tex", "max_forks_repo_name":
"hemangandhi/HoTT-Intro", "max_forks_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2018-09-26T09:08:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-16T00:33:50.000Z", "avg_line_length": 45.5521235521, "max_line_length": 395, "alphanum_fraction": 0.6204441431, "num_tokens": 4572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5581356448814144}}
{"text": "\\section{Imaging} \\label{sec:imaging}\n\n%In addition to multiple cameras at different locations and poses, an uniform lightning is also required to minimize specular difficulties in the texture.\n\nDigital stereo vision in the end analyses digital images; this section introduces the basic image acquisition steps and concerns that affect reconstruction quality. Cameras are never ideal, and practical algorithms take lens imperfections into account (e.g. \\cite{opencv}). Images are commonly taken with digital cameras that project a three-dimensional view from a single viewpoint to an image plane, and finally to a discrete grid of numerical values that describe light intensities.\n\nPractical details, such as depth of field, sharpness, aberrations and others are not considered, as they vary greatly depending on the used hardware and are out of scope of this work.\nIt suffices to say that in a practical system the choice of good optics is a key to good quality reconstruction.\nAccuracy and errors depend on not only decidable physical parameters of a stereo imaging rig, such as camera positioning, but also on e.g.~physical construction errors, lens imperfections, camera sensor noise, image compression and algorithmic accuracy. \\cite{hollsten2013imagequality, kyto2011method,rieke2009evaluation}.\n\n%In addition to plain photographic cameras, reconstruction can be done using e.g. laser scanners or light field cameras. Those are not covered in this work.\n\n\\subsection{Pinhole camera}\n\n\\simplegfx{h}{0.6\\textwidth}{pinhole-camera}\n{Pinhole camera principle. The box represents a camera; image seen through the small hole is formed to the plane on its back, rotated upside down.}\n\nA physical camera is in its simplest form modeled as a pinhole camera; an ideal device that projects an image upside down on its film through a small aperture.\nIllustration given in image \\ref{fig:pinhole-camera}.\nIn computer vision, this projection is given as a $3 \\times 4$ matrix, when homogeneous coordinates are used.\nHomogeneous coordinates add an extra dimension to the interpretation of coordinates and each point becomes a line that crosses the origin in a dimension one higher than the original.\nIn addition, several vector operations become more convenient to manipulate. 
\\cite{hartley03multiview,heyden2005multiple}\n\nThe pinhole model (or, perspective projection) states that the world point $(x, y, z)$ is projected to the image plane ($f$ units away from the origin) at $(u, v)$:\n\n\\begin{equation}\n\\begin{pmatrix}\nu \\\\ v\n\\end{pmatrix}\n=\n-\\frac{f}{z} \\begin{pmatrix}\nx \\\\ y\n\\end{pmatrix}\n\\end{equation}\n\nLight rays travel through the pinhole camera's aperture to the image plane that is $f$ units behind the pinhole.\nThe result can be derived from similar triangles with a common vertex at the aperture.\nSometimes the sign is inverted, which results in a plane between the pinhole (i.e.~camera origin) and the actual point, where the image is not rotated; this can be more convenient to analyse.\n\\cite{hartley03multiview}\n\nSetting the camera at the origin and using homogeneous coordinates, the mapping is given, up to a scale factor, with a camera matrix as\n\n\\begin{equation}\n\\begin{pmatrix}\nu \\\\ v \\\\ 1\n\\end{pmatrix}\n=\n\\begin{pmatrix}\nx \\\\ y \\\\ z/f\n\\end{pmatrix}\n=\n\\begin{pmatrix} \\label{eq:cmat}\n\t1 & 0 & 0 & 0 \\\\\n\t0 & 1 & 0 & 0 \\\\\n\t0 & 0 & 1/f & 0\n\\end{pmatrix}\n\\begin{pmatrix}\nx \\\\ y \\\\ z \\\\ 1\n\\end{pmatrix}\n\\end{equation}\n\nThe camera position and rotation in a global coordinate frame can be encoded in a matrix so that the point $(x,y,z)$ in global coordinate frame is first transformed relative to the camera's origin; section \\ref{sec:coord} discusses this in more detail.\n\n\\subsection{Optics}\n\nIn practice, no actual camera works ideally; imperfections in the lenses project points to positions that differ from those predicted by straight lines in this linear case.\nLens distortions deviate the rays, and no system is in perfect focus, so that one light ray spreads out as a circle.\nIn reconstructing, methods that estimate the points and minimize errors are used, as no model predicts the camera perfectly.\n\nConstruction of optical systems is well studied. \\cite{kingslake1989history}\nActual camera lenses consist of not only a single glass element but many, especially in the case of zoom lenses. In this work, the inner workings of these systems are ignored and equations assume a simple projective model, which is a safe assumption when the image is in focus.\n\nThe following equation applies for a thin lens while capturing sharp images:\n\n\\begin{equation}\n\t\\frac{1}{a} + \\frac{1}{b} = \\frac{1}{f} \\label{eq:focal}\n\\end{equation}\n\nwhere $f$ is the focal length of the lens, $a$ is the distance between the lens and the film, and $b$ is the distance between the lens and the imaged source.\n\n%The focal length has a direct influence to field of view, as given in figure TODO. Longer focal length (long-focus lens, often referred to as telephoto lens) results to a more zoomed in picture, as opposed to a wide-angle lens. \n\nAll practical optical systems (lenses) introduce some non-linear distortion that affects the performance of the ideal pinhole model.\nCommon distortions are the purely radial so-called barrel and pincushion distortions, where the magnification is a nonlinear function of image ray distance from the center of the lens.\nFisheye lenses are commonly known to have this kind of effect.\nTangential distortion is less common, particularly in great magnitudes, and is often ignored. Its cause is small misalignments in separate elements in a single optical system; lenses being offset from each other and not parallel to the image plane. 
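A minimal numpy sketch (with an assumed focal length) of the projection in equation \ref{eq:cmat}: a world point in homogeneous coordinates is mapped through the $3 \times 4$ camera matrix and then dehomogenized.
\begin{verbatim}
import numpy as np

f = 0.05                                  # focal length in metres (assumed)
P = np.array([[1, 0, 0,     0],
              [0, 1, 0,     0],
              [0, 0, 1 / f, 0]])          # camera at the origin

Xw = np.array([0.2, 0.1, 2.0, 1.0])       # world point (x, y, z, 1)
u, v, s = P @ Xw
print(u / s, v / s)                       # image point (f x / z, f y / z)
\end{verbatim}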
\\cite{kingslake1989history}\n\nWilson \\cite{wilson2004anton} discusses optical systems' relation to depth of field, focus and distortions.\n\nIt should be noted that the nonlinear optical distortions are different from the inevitable perspective projection distortion that happens when projecting a 3D scene to a 2D plane, which is taken into account in the reconstruction.\n\n\\simplefig{h}{%\n\\includegraphics[width=0.2\\textwidth]{gfx/barrel-distortion}\n\\includegraphics[width=0.2\\textwidth]{gfx/pincushion-distortion}\n}{fig:distortions}\n{Barrel (left) and pincushion distortions that would show up in an image of a grid of straight lines. For a lens with no distortion, the lines would not be curved.}\n\nDistortion should be corrected in software, as the following stereo algorithms assume that the images are free of nonlinear errors, i.e. straight lines in the world should remain straight in 2D images after the projective transformation.\n%In particular, image rectification (discussed later in \\ref{sec:rectification} won't work if this straightness does not remain; the assumption that similar features should be found on horizontal lines wouldn't hold on distorted images. \\cite{hartley03multiview} \n\nThe radial correction used by the OpenCV library to create a new image of the original pixel values at new positions \\cite{opencv} is\n\n\\begin{align}\n\tx_{corr} &= x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\\\\\n\ty_{corr} &= y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)\n\\end{align}\n\nTrucco and Verri \\cite{trucco1998introductory} use only the first two coefficients. For tangential distortion:\n\n\\begin{align}\nx_{corr} &= x + (2 p_1 x y + p_2 (r^2 + 2 x^2))\\\\\ny_{corr} &= y + (2 p_2 x y + p_1 (r^2 + 2 y^2))\n\\end{align}\n\n$x$ and $y$ are the original coordinates in the distorted image, $x_{corr}$ and $y_{corr}$ are the corrected ones, $k_1$, $k_2$, $k_3$, $p_1$ and $p_2$ are coefficients specific to the distortion, and $r$ equals the distance to the image center located at $(x_c,~y_c)$:\n\n\\begin{equation}\nr = \\sqrt{(x - x_c)^2 + (y - y_c)^2}\n\\end{equation}\n\n\\subsection{Shutter}\n\nIn dynamic (i.e.~moving) environments, the process of acquiring several images consecutively is an important thing to consider.\nWhen a group of cameras are capturing the same target, they should operate synchronously and grab images at the same infinitesimally small points in time.\nIn reality, there is some error: the cameras do not work in perfect sync, and their sensors take some time to acquire an image.\n\nMoreover, a CCD sensor should be used in place of a cheaper CMOS sensor; CCDs incorporate a ``global shutter'', i.e.~the whole sensor images the scene at once.\nA phenomenon called ``rolling shutter'' is common in CMOS sensors: the readout happens row by row, so moving objects get distorted because different rows see the scene at slightly different times.\nThis should be avoided, but it can be also taken into account and fixed. 
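The correction formulas above translate directly into code; the following Python sketch (illustrative, not the OpenCV implementation) combines the radial and tangential terms, with the coefficients assumed to come from a calibration step.
\begin{verbatim}
def undistort_point(x, y, xc, yc, k1, k2, k3, p1, p2):
    # squared distance to the image centre (xc, yc); r^2 = r2, r^4 = r2^2, ...
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_corr = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_corr = y * radial + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return x_corr, y_corr
\end{verbatim}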
\\cite{ait2006simultaneous,bradley2009synchronization}\nUniform lighting is also usually assumed: when comparing pixel intensities, some figures might not look the same in different brightnesses, which is one of many reasons to use cameras that apply as few automatic adjustments as possible.\n\n\\subsection{Video}\n\nHumans perceive motion when a previously seen object is seen again in a nearby location or similar position.\nCurrent digital video technology encodes motion in a sequence of still images, usually displayed at a constant rate.\nThree dimensional motion is usually no different: it is encoded as discrete poses in sequence.\nIn order to do object capture in stereo, video material from two or more cameras is used to initially capture a sequence of still photos.\n\nWhen scanning a scene continuously, a camera grabs frames using the same principles as in photos, but does it in sequence, at a speed that is called frame rate.\nAnother notable point from the capture viewpoint is the shutter speed; in movies, the shutter is often deliberately open so long that fast motion is blurred, because it is considered aesthetically pleasing to the human eye; even though the motion is blurred, more information about the scene movement is encoded per frame than when exposures are infinitesimally short.\n\\cite{wilson2004anton}\nFor sharp images that are preferred in reconstruction, this is to be avoided.\n\nSynchronizing all the cameras that shoot the same scene might not be a trivial issue in practice, depending on the gear used.\nProfessional grade cameras can be synced to a single clock generator, so that they all operate on the same frequency and phase.\nThe same method is used when shooting with machine vision cameras that have external trigger input.\nThis still leaves a small phase difference caused by unequal transmission paths from the clock generator.\nSynchronizable camcorders are very expensive, though, and consumer-grade hardware usually lacks all possibilities to properly sync frequency or phase, not to mention frequency drift or frame jitter.\nA clapperboard is a ubiquitous and simple way to sync video and audio, but it still leaves a maximum of half a frame lag between the camera sequences; this is illustrated in figure \\ref{fig:syncproblems}.\nWhen cameras open their shutters at different times, they effectively shoot a different scene, breaking one of the most fundamental assumptions of stereo vision: that the images encode geometrically the same objects.\nThis can be compensated to some degree with optical flow \\cite{bradley2009synchronization}.\n\n\\simplefig{h!}{\n\\begin{tikzpicture}[scale=1.5]\n\t\\draw (0,0) rectangle (6,-1);\n\t\\draw (0,-1) rectangle (6,-2);\n\t\\foreach \\x in {0,...,5} {\n\t\t\\draw (\\x*1, 0) -- (\\x*1, -2.5);\n\t\t\\node at (\\x*1, -2.6) {{\\x}t};\n\n\t\t\\draw [fill=red] (\\x*1, 0) rectangle (\\x*1+0.4, -1);\n\t\t\\draw [fill=green] (\\x*1+0.3, -1) rectangle (\\x*1+0.3+0.4, -2);\n\t}\n\t\\draw (0.3, 0) -- (0.3, -2.5);\n\t\\node at (0.3, -2.6) {e};\n\t\\node at (-0.3, -0.5) {A};\n\t\\node at (-0.3, -1.5) {B};\n\t% P, Z\n\\end{tikzpicture}\n}{fig:syncproblems}\n{Video phase difference.\nRed rectangles illustrate exposure times of camera A, green rectangles the same for camera B.\nFrame period is t, and the cameras have a constant exposure time offset of an arbitrary e.}\n\nA faster frame rate encodes information more often, which is preferable because pixels of the same object are more difficult to match across longer distances when tracking objects; faster shutter speeds 
help to reduce motion blur.\nA fast shutter (i.e.~short exposure) obviously needs to be compensated for by using more sensitive sensors or more light to get equivalently bright images.\nNoise vs. motion blur is a common tradeoff that has to be made when building a stereo vision system.\n\nVideo recording and motion tracking are best considered orthogonal issues; while a single static case can be scanned in three dimensions, so can each frame of a sequence, separately.\nWhile this sounds tempting, it might not be computationally feasible, because the reconstruction must be started all over again for each frame and the topology is uncorrelated between the frames.\n% Assumptions that the scene is locally almost the same can help and speed up computations. [?]\n\n%Section \\ref{sec:tracking} will \n", "meta": {"hexsha": "761d92f3aa9079c44e3c405b68bd60e8498015bf", "size": 12801, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "semma/tex/imaging.tex", "max_stars_repo_name": "sooda/thesis", "max_stars_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2015-06-06T23:57:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T01:13:52.000Z", "max_issues_repo_path": "semma/tex/imaging.tex", "max_issues_repo_name": "sooda/thesis", "max_issues_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "semma/tex/imaging.tex", "max_forks_repo_name": "sooda/thesis", "max_forks_repo_head_hexsha": "25838d80eeb93d7fc2028c6a69ee745652c9efa4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-27T03:55:10.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-27T03:55:10.000Z", "avg_line_length": 69.5706521739, "max_line_length": 485, "alphanum_fraction": 0.7790016405, "num_tokens": 3084, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867873410141, "lm_q2_score": 0.6992544210587586, "lm_q1q2_score": 0.5581356398788913}}
{"text": "%!TEX root = ../TTT4150-Summary.tex\n\\section{Loxodromes}\n\nA \\emph{loxodrome} curve is an Earth course of constant bearing (also called a rhumb). Generally not a great circle, and therefore not the shortest route, but easy to use, as great circles require continuous change of bearing.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Conformal maps}\n\nA \\emph{conformal map} distorts east-west and north-south by the same amount, which means that it preserves angles. Distortion depends only on position (actually only latitude). The \\emph{Mercator map} is a special conformal map, on which a rhumb/loxodrome is a straight line.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{img/esa_mercator_map}\n\t\\caption{A Mercator map}\n\\end{figure}\n", "meta": {"hexsha": "0f55bf59635d5de87b9056da06f4443add60c34d", "size": 784, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TTT4150 Navigation systems/tex/sec-loxodromes.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TTT4150 Navigation systems/tex/sec-loxodromes.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TTT4150 Navigation systems/tex/sec-loxodromes.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.0, "max_line_length": 276, "alphanum_fraction": 0.7193877551, "num_tokens": 202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.5581356381675979}}
{"text": "\\chapter{Introduction}\n\\label{chpr:intro}\n\\section{Problem under analysis}\n\nBitcoin \\cite{BTC} is a decentralized network whose aim is to provide a peer-to-peer electronic payment system. In recent years it has astonished people who had the opportunity to study it, with an increasing number of supporters and detractors worldwide: it has been called a bubble, a Ponzi scheme, it has been labelled as sound money and store of value. But labels can be dangerous, and excessive labelling is usually useless: for this reason the present work does not aim at settling the dispute. Bitcoin is hard to understand, being at the crossroad of various disciplines, going from networking theory to cryptography, from economics to game theory. Among this multitude, mathematics and cryptography are perhaps the easiest way to approach Bitcoin, being difficult to misinterpret these two disciplines: of course there are engineering trade-offs and debates around them, but the technical foundations are sound.\n\\\\\nFor this reason, the present work wants to give a brief but satisfactory introduction to the mathematical structures and the cryptographic primitives behind Bitcoin, focusing in particular on the role played by the digital signature scheme. We will outline the problems presented by ECDSA, the scheme actually implemented, and show how they could be solved by Schnorr, a scheme that might be introduced in the protocol in the coming years. On top of that, we will delve into the technicalities of higher level constructions enabled by Schnorr, that will help in solving some of the key problems of Bitcoin, namely privacy and fungibility\\footnote{Fungibility is the property of a good or a commodity whose individual units are interchangeable.}. Bitcoin was intended to be anonymous, but it is not: it is pseudonymous, in the sense that it is possible to link transactions to a unique identity through blockchain analysis. If this digital identity is unveiled, connecting to a real identity, that person could be persecuted for his involvement in Bitcoin (e.g. in authoritarian regimes). The lack of privacy contributes to the lack of fungibility, allowing to distinguish between different bitcoins\\footnote{Typically Bitcoin identifies the protocol, while bitcoins are the coins transferred using the protocol.}. These problems affects heavily the monetary use case and in general have to be properly addressed: their improvement will be the \\textit{leitmotiv} of this present work.\n\n\\bigskip\n\\noindent\nThe present thesis has been written during the author's fellowship at the Digital Gold Institute, a research and development center focused on teaching, consulting, and advising about scarcity in digital domain (Bitcoin and crypto-assets) and the underlying blockchain technology. 
The author has actively contributed to the development of the Python library that can be found at \\url{https://github.com/dginst/BitcoinBlockchainTechnology}: for this reason, some optimization procedures are presented throughout the thesis.\n\n\\bigskip\n\n\\bigskip\n\n\\section{Thesis structure}\nIn Chapter \\ref{chpr:math} we will start presenting the mathematical foundations of the cryptography underpinning the Bitcoin protocol: in particular we will recall the concepts of group and field (Section \\ref{groupsfields}), we will introduce the general idea of elliptic curve (EC, Section \\ref{ec}) and show how to induce a group structure on an EC through a proper definition of the addition operation (Section \\ref{grouplaw}).\n\n\\bigskip\n\\noindent\nIn Chapter \\ref{chpr:ecc} we will give an overview of the public key cryptography based on elliptic curves: thanks to the mathematical introduction of Chapter \\ref{chpr:math} we will be able to understand what an elliptic curve over a finite field is (Section \\ref{ecoverff}); then we will discuss elliptic curves' parameters selection and validation (Section \\ref{ecparam}), we will present the basic idea behind asymmetric cryptography based on EC (Section \\ref{keypairs}) and we will show how to improve the core operation of asymmetric elliptic curve cryptography (ECC), scalar multiplication, through a proper change of coordinates (Section \\ref{jac}). As a practical use case the section will be concluded with the presentation of the Bitcoin curve (secp256k1, Section \\ref{btccurve}). The chapter will be concluded by the description of the hard problem at the basis of elliptic curves' widespread adoption in recent years, the so called discrete logarithm problem (DLP in Section \\ref{dlp}).\n\n\\bigskip\n\\noindent\nChapter \\ref{chpr:dss} will deal with ECDSA (Section \\ref{ecdsa}) and Schnorr (Section \\ref{schnorr}). Throughout the chapter many topics will be touched, among which the issues of ECDSA, Schnorr's solutions and multi-signature schemes.\n\n\\bigskip\n\\noindent\nFinally in Chapter \\ref{chpr:application} we will delve in the technicalities of Schnorr's applications: we will study a multi-signature scheme that recovers key aggregation (MuSig, Section \\ref{musig}), a threshold signature scheme (Section \\ref{threshold}) and the benefits arising from the use of adaptor signatures (Section \\ref{adaptor}) for cross-chain atomic swaps (Section \\ref{atomic}) and for the Lightning Network (Section \\ref{ln}).\n\n\\bigskip\n\\noindent\nChapter \\ref{chpr:conclusion} will conclude the thesis with a summary in which we will draw interesting conclusions.\n\\bigskip\n\n\\bigskip\n\n\\section{Notation}\n\\begin{itemize}\n\\item Prime numbers: the lowercase letter $p$ is used to represent an odd prime number;\n\\item Fields: in Chapter \\ref{chpr:math} for a general field the letter $K$ is used, while the finite fields of order $q = p^k$, where $p$ is an odd prime and $k$ an integer, are represented as $\\mathbb{F}_q$;\n\\item Elliptic curves: in general an elliptic curve over a field $K$ is denoted by $E(K)$, which represents the set of points satisfying the generalized Weierstrass equation with coefficients in $K$ plus the point at infinity. From Chapter \\ref{chpr:ecc} onward, we will deal with EC over finite fields: in this case lowercase letters are used to denote scalars, while the uppercase equivalent denotes the linked EC point, e.g. $qG = Q = (x_Q, y_Q)$ ($G$ is reserved to the generator of the group). 
Whenever a second generator is needed we use the capital letter $H$: this does not constitute a conflict with the cofactor $h$ of the group since typically the two generators are NUMS (\\textit{nothing up my sleeves}), meaning that we do not know the discrete logarithm of one with respect to the other and viceversa;\n\\item Elliptic curve's key pair: the pair of private and public key is denoted as $\\{q, Q\\}$, where $Q = qG$. Whenever a second point is needed we use the couple $\\{r, R\\}$; if more key pairs are needed subscripts are used, e.g. $q_1G = Q_1 = (x_1, y_1)$, $q_2G = Q_2 = (x_2, y_2)$ and $q_3G = Q_3 = (x_3, y_3)$. Notice that, for the sake of clarity, also the coordinate representation is adapted.\n\\item Algorithms:\n    \\begin{itemize}\n\t\t\\item $||$ refers to byte array concatenation;\n\t\t\\item $a \\gets b$ refers to the operation of assignment;\n\t\t\\item $z \\xleftarrow{\\text{\\$}} Z$ denotes uniform sampling from the set $Z$ and assignment to $z$;\n\t\t\\item The function $\\text{bytes}(x)$, where $x$ is an integer, returns the byte encoding of $x$;\n\t\t\\item The function $\\text{bytes}(Q)$, where $Q$ is an EC point, returns $bytes(0\\text{x}02 + (y_Q \\& 1))$ $ || \\ bytes(x_Q)$\\footnote{Here \\& denotes the bitwise AND operator, thus $y_Q \\& 1$ checks whether $y_Q$ is even (prepend 0x02) or odd (prepend 0x03), following the standard proposed in \\cite{RefWork:2} about the compressed representation of an EC point. An EC point can be expressed in compressed or uncompressed form: the compressed form requires the $x$ coordinate and one additional byte to make explicit the $y$ coordinate, while the uncompressed form requires both the coordinates.};\n\t\t\\item The function $\\text{int}(x)$, where $x$ is a byte array, returns the unsigned integer whose byte encoding is $x$;\n\t\t\\item The function $\\text{hash}(x)$, where $x$ is a byte array, returns the hash of $x$. In particular, when dealing with the Schnorr signature, it returns the 32 bytes SHA-256 of $x$;\n\t\t\\item The function $\\text{jacobi}(x)$, where $x$ is an integer, returns the Jacobi symbol $\\left(\\frac{x}{p}\\right)$. In general we have (gcd here stands for greatest common divisor): $$\\left(\\frac{x}{p}\\right)= \\begin{cases} 0, & \\text{if gcd}(x, p) \\neq 1 \\\\ \\pm 1, & \\mbox{if gcd}(x, p) = 1 \\end{cases}$$\n\t\tMoreover we have that if $\\left(\\frac{x}{p}\\right) = - 1$ then $x$ is not a quadratic residue modulo $p$ (i.e. it has not a square root modulo $p$) and that if $x$ is a quadratic residue modulo $p$ and $\\text{gcd}(x, p) = 1$, then $\\left(\\frac{x}{p}\\right) = 1$. However, unlike the Legendre symbol, if $\\left(\\frac{x}{p}\\right) = 1$ then $x$ may or may not be a quadratic residue modulo $p$. Fortunately enough, since $p$ is an odd prime we have that the Jacobi symbol is equal to the Legendre symbol. 
Thus we can check only whether $\\left(\\frac{x}{p}\\right) = 1$ to assess if $x$ is or is not a quadratic residue modulo $p$.\n\t\t\\\\\n\t\tMoreover, by Euler's criterion we have that $\\left(\\frac{x}{p}\\right) = x^{\\frac{p - 1}{2}} \\ (\\text{mod} \\ p)$, so that we have an efficient way to compute the Jacobi symbol.\n\t\\end{itemize}\t\n\\end{itemize}\n", "meta": {"hexsha": "733f20c41e3f2f2565a066bc67bb8c1cc20aaa50", "size": 9361, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/Introduction.tex", "max_stars_repo_name": "gionasoldati/thesis", "max_stars_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/Introduction.tex", "max_issues_repo_name": "gionasoldati/thesis", "max_issues_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapters/Introduction.tex", "max_forks_repo_name": "gionasoldati/thesis", "max_forks_repo_head_hexsha": "e8b3b3828f4ccb0a35e26381b361425e09a51b11", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-11-06T23:47:52.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-06T23:47:52.000Z", "avg_line_length": 156.0166666667, "max_line_length": 1483, "alphanum_fraction": 0.7674393761, "num_tokens": 2315, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867681382279, "lm_q2_score": 0.6992544335934767, "lm_q1q2_score": 0.5581356364563043}}
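The Euler-criterion computation of the Jacobi symbol described in the notation section is easy to make concrete. Below is a minimal Python sketch (our own illustration, not code from the BitcoinBlockchainTechnology library mentioned above); it assumes $p$ is an odd prime, so that the Jacobi and Legendre symbols coincide:
\begin{minted}[baselinestretch=0.8]{python}
def jacobi(x: int, p: int) -> int:
    """Jacobi symbol (x/p) for an odd prime p, via Euler's criterion.

    For an odd prime p, (x/p) = x^((p-1)/2) mod p, which evaluates
    to 0, 1 or p-1; the last value encodes -1.
    """
    assert p > 2 and p % 2 == 1  # assumption: p is an odd prime
    r = pow(x, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# 2 is a quadratic residue mod 7 (3*3 = 9 = 2 mod 7), while 3 is not:
assert jacobi(2, 7) == 1 and jacobi(3, 7) == -1
assert jacobi(14, 7) == 0  # gcd(14, 7) != 1
\end{minted}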
{"text": "\nThis chapter studies an embedded Domain Specific Language for logic\nprogramming.  First, we give a quick introduction of \\textit{$\\mu$Kanren}, a\npurely functional implementation of this language and, second, we extend the\nHOL Light theorem prover in order to introduce the relational paradigm in its\ntactics mechanism.\n\n\\section{$\\mu$Kanren and relational programming}\n\nThe central tenet of relational programming is that \\textit{programs\ncorresponds to relations that generalize mathematical functions}; our interest\nhere is to deepen our understanding of the underlying concepts and data\nstructures of languages in the \\textit{miniKanren} family. The main reference\nthat drives our work is \\citep{Friedman:Reasoned:Schemer} and advanced topics\nare discussed in Byrd's dissertation \\citep{Byrd:PhD}.\n\nThe heavy use of higher order functions, infinite streams of objects and\nunification \\`a-la Robinson makes possible to implement $\\mu$Kanren\n\\citep{Hemann:muKanren}, a purely functional core of miniKanren; we repeat the\nexercise of coding it using different programming languages, in particular\n\\begin{description}\n\\item[Python]\n    we provide both a complete implementation of the abstract definition and a\n    test suite that stresses our functions against \\textit{all} questions in the\n    reference book. Moreover, we characterize our code with a \\textit{fair}\n    enumeration strategy based on the \\textit{dovetail} techniques used in the\n    enumeration of the Rationals numbers; precisely, the monadic function\n    \\verb|mplus(streams, interleaving)| enumerates the states space\n    \\verb|streams|, using different strategies according to the argument\n    \\verb|interleaving|.\n\n    In order to understand states enumeration can be helpful to use a \"matrix\n    notation\", that associates a row to each stream \u03b1 of states in\n    \\verb|streams|, which is an \\textit{iterable} object over a \\textit{countably},\n    possibly infinite, set of \\textit{states streams}, so the matrix could have\n    infinite rows.  In parallel, since each states stream \u03b1 lying on a row is an\n    \\textit{iterable} object over a \\textit{countably}, possibly infinite, set of\n    \\textit{satisfying states}, the matrix could have infinite columns too;\n    therefore, the matrix we are building could be infinite in both dimensions.\n    So, let \\verb|streams| be represented as\n    \\begin{displaymath}\n        \\left(\\begin{array}{ccccc}\n        s_{00} & s_{01} & s_{02} & s_{03} & \\ldots \\\\\n        s_{10} & s_{11} & s_{12} & \\ldots &        \\\\\n        s_{20} & s_{21} & \\ldots &        &        \\\\\n        s_{30} & \\ldots &        &        &        \\\\\n        \\ldots &        &        &        &        \\\\\n        \\end{array}\\right)\n    \\end{displaymath}\n    where each $s_{i,j}$ is a state that carries a substitution which satisfies\n    the relation under study.  Such states are visited according to the\n    \\textit{dovetail} techniques which enumerates by interleaving \\verb|state|\n    objects lying on the same \\textit{rising diagonal}, resulting in a\n    \\textit{fair, complete scheduler} in the sense that \\textit{every} satisfying\n    \\verb|state| object will be reached, eventually. 
For the sake of clarity,\n    enumeration proceeds as follows\n    \\begin{displaymath}\n    s_{00}, s_{10}, s_{01}, s_{20}, s_{11}, s_{02}, s_{30}, s_{21},\n    s_{12}, s_{03}, \\ldots\n    \\end{displaymath}\n    with respect to its implementation\n    \\begin{minted}[baselinestretch=0.8]{python}\n    def mplus(streams, interleaving):\n\n        if interleaving:\n\n            try: \u03b1 = next(streams)\n            except StopIteration: return\n            else: S = [\u03b1]\n\n            while S:\n\n                for j in reversed(range(len(S))):\n                    \u03b2 = S[j]\n                    try: s = next(\u03b2)\n                    except StopIteration: del S[j]\n                    else: yield s\n\n                try: \u03b1 = next(streams)\n                except StopIteration: pass\n                else: S.append(\u03b1)\n\n        else:\n\n            for \u03b1 in streams: yield from \u03b1\n    \\end{minted}\n\n\\item[Scheme] we provide our implementation using the same language that\noriginal authors use for their canonical version. We diverge from them in the\nway we represent substitutions, choosing an \\textit{union-find}  data structure\nthat allows us to maintain a balanced tree to track associations.\nThe overhead work that was necessary to implement a fully-flagged\n$\\mu$Kanren yield a Scheme library to define and manipulate infinite streams of\nobjects, and this allows us to have another way to define Riordan arrays for free, such as\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(test '((1)\n        (1 1)\n        (1 2 1)\n        (1 3 3 1)\n        (1 4 6 4 1)\n        (1 5 10 10 5 1)\n        (1 6 15 20 15 6 1)\n        (1 7 21 35 35 21 7 1)\n        (1 8 28 56 70 56 28 8 1)\n        (1 9 36 84 126 126 84 36 9 1))\n ((list\u25cbtake 10) (riordan-array stream:1s stream:1s)))\n\\end{minted}\nlying on the procedural abstraction \\verb|riordan-array| that consumes two\nformal power series and produces a stream of lists, each one denoting a\ntriangle's row; its definition is clear and elegant in our opinion,\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(define riordan-array\n (\u039b (d h)\n  (stream:dest/car+cdr (d \u2205)\n   ((dcar dcdr) (stream:cons\n                 (list dcar)\n                 ((stream:zip-with cons)\n                  dcdr (riordan-array (series:\u00d7 d h) h)))))))\n\\end{minted}\nwhere \\verb|stream:\u00d7| denotes the \\textit{series convolution operator} and\nthe syntactic abstraction \\verb|\u039b| is defined as an \"augmented lambda\" form\nthat allows us to define delayed by means of \\verb|delay-force|,\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(define-syntax \u039b\n (syntax-rules ()\n  ((\u039b args body ...)\n   (lambda args (delay-force (begin body ...))))))\n\\end{minted}\n\n\\item[Smalltalk] this implementation is an exercise in object-oriented\nprogramming and its coding has been driven by a test-first approach\n\\citep{beck:TDD} and it is a literal port of the canonical one.\n\n\\item[OCaml] finally, this version is preparatory for the extension of the HOL\nLight theorem prover which we are going to describe in the rest of this\nchapter.\n\n\\end{description}\n\nAll these prototypes can be found in \\citep{Nocentini:kanrens} and some\nexamples follows using the Scheme implementation to show the power of the\npresent paradigm.\n\n\\begin{example} The \\textit{context free grammar} $\\mathcal{D}$\nthat defines the set of \\textit{Dyck paths}\n\\begin{displaymath}\n\\mathcal{D} = 
\\varepsilon\\,|\\,\\circ\\,\\mathcal{D}\\,\\bullet\\,\\mathcal{D}\n\\end{displaymath}\nis encoded in the relation \\verb|dyck\u00ba| defined by the $\\mu$Kanren goal\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(define dyck\u00ba\n (lambda (\u03b1)\n  (cond\u00ba/\u00a7\n   ((null\u00ba \u03b1))\n   ((fresh (\u03b2 \u03b3) (\u2227\n                  (dyck\u00ba \u03b2)\n                  (dyck\u00ba \u03b3)\n                  (append\u00ba `(\u25cb . ,\u03b2) `(\u25cf . ,\u03b3) \u03b1)))))))\n\\end{minted}\nand an enumeration is reported in Table \\ref{tbl:kanren:dyck:path}.\n\\begin{margintable}\n\\begin{minted}[baselinestretch=0.8]{scheme}\n        (()\n         (\u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cb \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cf)\n         (\u25cb \u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cf \u25cb \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cf \u25cb \u25cf \u25cf)\n         (\u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cf \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cf \u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cf \u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf)\n         (\u25cb \u25cb \u25cf \u25cb \u25cf \u25cf \u25cb \u25cb 
\u25cf \u25cf)\n         (\u25cb \u25cf \u25cb \u25cb \u25cb \u25cf \u25cf \u25cf \u25cb \u25cb \u25cf \u25cf))\n\\end{minted}\n\\caption{First $42$ Dyck paths enumerated by relation \\texttt{dyck\u00ba}.}\n\\label{tbl:kanren:dyck:path}\n\\end{margintable}\n\\end{example}\n\n\\begin{example}\nThe recurrence relation for the Fibonacci numbers\n\\begin{displaymath}\nf_{n+2} = f_{n+1} + f_{n}, \\quad n \\geq 0\n\\end{displaymath}\nis encoded in the relation \\verb|fibonacci\u00ba| defined by the goal\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(define fibonacci\u00ba\n (lambda (depth n \u03b1)\n  (cond\n   ((zero? depth) (\u2261 \u03b1 (list n)))\n   (else (fresh (\u03b2 \u03b3)\n          (\u2227\n           (fibonacci\u00ba (sub1 depth) (sub1 n) \u03b2)\n           (fibonacci\u00ba (sub1 depth) (sub2 n) \u03b3)\n           (append\u00ba \u03b2 \u03b3 \u03b1)))))))\n\\end{minted}\nwhich enumerates the following identities\n\\begin{displaymath}\n    \\begin{array}{c}\n      f_{n+2} = f_{n} + f_{n+1} \\\\\n      f_{n+2} = f_{n-2} + 2\\,f_{n-1} + f_{n}  \\\\\n      f_{n+2} = f_{n-4} + 3\\,f_{n-3} + 3\\,f_{n-2} + f_{n-1} \\\\\n      f_{n+2} = f_{n-6} + 4\\,f_{n-5} + 6\\,f_{n-4} + 4\\,f_{n-3} + f_{n-2} \\\\\n      f_{n+2} = f_{n-8} + 5\\,f_{n-7} + 10\\,f_{n-6} + 10\\,f_{n-5} + 5\\,f_{n-4} + f_{n-3} \\\\\n      f_{n+2} = f_{n-10} + 6\\,f_{n-9} +15\\,f_{n-8} + 20\\,f_{n-7} +15\\,f_{n-6} + 6\\,f_{n-5} + f_{n-4} \\\\\n      %((-12 1) (-11 7) (-10 21) (-9 35) (-8 35) (-7 21) (-6 7) (-5 1))\n      %((-14 1) (-13 8) (-12 28) (-11 56) (-10 70) (-9 56) (-8 28) (-7 8) (-6 1))\n    \\end{array}\n\\end{displaymath}\ncompacted in $\\displaystyle f_{n} = \\sum_{i=0}^{j}{{\n{j}\\choose{i} }\\,f_{n-2\\,j+i}}$ where\n$\\displaystyle j\\leq \\frac{n}{2}$.\n\\end{example}\n\n\\begin{example}\nThe recurrence relation for the Pascal triangle\n\\begin{displaymath}\n\\begin{split}\nd_{0,0} &= 1, \\\\\nd_{n+1, 0} &= d_{n,0}, \\quad n \\geq 0 \\\\\nd_{n+1, k+1} &= d_{n,k} + d_{n,k+1}, \\quad n,k \\geq 0 \\\\\n\\end{split}\n\\end{displaymath}\nis encoded in the relation \\verb|tartaglia\u00ba| defined by the goal\n\\begin{minted}[baselinestretch=0.8]{scheme}\n(define tartaglia\u00ba\n (lambda (depth n k \u03b1)\n  (cond\n   ((zero? 
depth) (\u2261 \u03b1 (list (list n k))))\n   (else (fresh (\u03b2 \u03b3)\n          (\u2227\n           (tartaglia\u00ba (sub1 depth) (sub1 n) (sub1 k) \u03b2)\n           (tartaglia\u00ba (sub1 depth) (sub1 n) k \u03b3)\n           (append\u00ba \u03b2 \u03b3 \u03b1)))))))\n\\end{minted}\nwhich enumerates the following identities\n\\begin{displaymath}\n    \\begin{array}{c}\n      d_{n+1,k+1} = d_{n,k+1} + d_{n,k} \\\\\n      d_{n+1,k+1} = d_{n-1, k+1} + 2\\,d_{n-1,k} + d_{n-1,k-1}  \\\\\n      d_{n+1,k+1} = d_{n-2, k+1} + 3\\,d_{n-2,k} + 3\\,d_{n-2,k-1} + d_{n-2,k-2} \\\\\n      d_{n+1,k+1} = d_{n-3, k+1} + 4\\,d_{n-3,k} + 6\\,d_{n-3,k-1} + 4\\,d_{n-3,k-2} + d_{n-2,k-3} \\\\\n      d_{n+1,k+1} = d_{n-4, k+1} + 5\\,d_{n-4,k} + 10\\,d_{n-4,k-1} + 10\\,d_{n-4,k-2} + 5\\,d_{n-4,k-3} + d_{n-4,k-4} \\\\\n      d_{n+1,k+1} = d_{n-5, k+1} + 6\\,d_{n-5,k} +15\\,d_{n-5,k-1} + 20\\,d_{n-5,k-2} +15\\,d_{n-5,k-3} + 6\\,d_{n-5,k-4} + d_{n-5,k-5} \\\\\n    \\end{array}\n\\end{displaymath}\ncompacted in $\\displaystyle { {p+m}\\choose{r+m} } = \\sum_{j=0}^{m-1}{{\n{m-1}\\choose{j} }{ {p-1+m}\\choose{r-j+m} }}$ for $p,r,m\\in\\mathbb{N}$\n-- recall that $\\displaystyle d_{n,k}={ {n}\\choose{k} }$.\n\\end{example}\n\n\\section{Toward certified computation}\n\\label{sec:introduction}\n\nTheorem provers are employed to construct logically verified truths.\nIn this work, we propose an extended language of tactics which support\nthe derivation of formally verified theorems in the spirit of the\nlogic programming paradigm.\n\nOur setup, is based on the HOL Light theorem prover, in which we\nextend the currently available tactics mechanism with three basic\nfeatures: (i)~the explicit use of meta-variables, (ii)~the ability to\nbacktrack during the proof search, (iii)~a layer of tools and\nfacilities to interface with the underlying proof mechanism.\n\nThe basic building block of our framework are ML procedures that we\ncall \\emph{solvers}, which are a generalization of HOL tactics and\nare~--as well as tactics-- meant to be used compositionally to define\narbitrarily complex proof search strategies.\n\nWe say that our approach is \\emph{semi-certified} because\n\\begin{itemize}\n\\item on one hand, the produced solutions are formally proved\n  theorems, hence their validity is guaranteed by construction;\n\\item on the other hand, the completeness of the search procedure\n  cannot be enforced in our framework and consequently has to be\n  ensured by a meta-reasoning.\n\\end{itemize}\n\nAt the present stage, our implementation \\citep{Maggesi:Nocentini:kanrenlight}\nis intended to be a test bed for experiments and further investigation on this\nreasoning paradigm.\n\n\n\\section{A simple example}\n\\label{sec:an-simple-example}\n\nTo give the flavor of our framework, we show how to perform simple\ncomputations on lists.\n\nConsider first the problem of computing the concatenation of two lists\n\\verb|[1; 2]| and \\verb|[3]|.  One natural way to approach this\nproblem is by using rewriting.  In HOL Light, this can be done by using\n\\emph{conversions} with the command\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# REWRITE_CONV [APPEND] `APPEND [1;2] [3]`;;\n\\end{minted}\nwhere the theorem\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# APPEND;;\nval it : thm =\n  |- (!l. APPEND [] l = l) /\\\n     (!h t l. 
APPEND (h :: t) l = h :: APPEND t l)\n\\end{minted}\ngives the recursive equations for the operator \\verb|APPEND|.\n\nOur implementation allows us to address the same problem from a\nlogical point of view.  We start by proving two theorems\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# APPEND_NIL;;\nval it : thm = |- !l. APPEND [] l = l\n\n# APPEND_CONS;;\nval it : thm =\n  |- !x xs ys zs. APPEND xs ys = zs\n                  ==> APPEND (x :: xs) ys = x :: zs\n\\end{minted}\nthat gives the logical rules that characterize the \\verb|APPEND|\noperator.  Then we define a \\emph{solver}\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlet APPEND_SLV : solver =\n  REPEAT_SLV (CONCAT_SLV (ACCEPT_SLV APPEND_NIL)\n                         (RULE_SLV APPEND_CONS));;\n\\end{minted}\nwhich implements the most obvious strategy for proving a relation of\nthe form \\verb|`APPEND x y = z`| by structural analysis on the list\n\\verb|`x`|.  The precise meaning of the above code will be clear later\nin this note; however, this can be seen as the direct translation of\nthe Prolog program\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nappend([],X,X).\nappend([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).\n\\end{minted}\n\nThen, the problem of concatenating the two lists is described by the\nterm\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n`??x. APPEND [1;2] [3] = x`\n\\end{minted}\nwhere the binder \\verb|`(??)`| is a syntactic variant of the usual\nexistential quantifier \\verb|`(?)`|, which introduces the\n\\emph{meta-variables} of the \\emph{query}.\n\nThe following command\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlist_of_stream\n  (solve APPEND_SLV\n         `??x. APPEND [1; 2] [3] = x`);;\n\\end{minted}\nruns the search process where the \\verb|solve| function starts the\nproof search and produces a stream (i.e., a lazy list) of\n\\emph{solutions} and the outermost \\verb|list_of_stream| transform the\nstream into a list.\n\nThe output of the previous command is a single solution which is\nrepresented by a pair where the first element is the instantiation for\nthe meta-variable \\verb|`x`|and the second element is a HOL theorem\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nval it : (term list * thm) list =\n  [([`x = [1; 2; 3]`], |- APPEND [1; 2] [3] = [1; 2; 3])]\n\\end{minted}\n\nNow comes the interesting part: as in logic programs, our search\nstrategy (i.e., the \\verb|APPEND_SLV| solver) can be used for backward\nreasoning.\n\nConsider the variation of the above problem where we want to enumerate\nall possible splits of the list \\verb|[1; 2; 3]|.  This can be done by\nsimply changing the goal term in the previous query:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# list_of_stream\n    (solve APPEND_SLV\n           `??x y. APPEND x y = [1;2;3]`);;\n\nval it : (term list * thm) list =\n  [([`x = []`; `y = [1; 2; 3]`],\n    |- APPEND [] [1; 2; 3] = [1; 2; 3]);\n   ([`x = [1]`; `y = [2; 3]`],\n    |- APPEND [1] [2; 3] = [1; 2; 3]);\n   ([`x = [1; 2]`; `y = [3]`],\n    |- APPEND [1; 2] [3] = [1; 2; 3]);\n   ([`x = [1; 2; 3]`; `y = []`],\n    |- APPEND [1; 2; 3] [] = [1; 2; 3])]\n\\end{minted}\n\n\\section{A library of solvers}\n\\label{sec:library-solvers}\n\nOur framework is based on ML procedures called \\emph{solvers}.  Solvers\ngeneralizes classical HOL tactics in two ways, (i)~they facilitate the\nmanipulation of meta-variables in the goal and (ii)~they allows us to backtrack\nduring the proof search. 
We observe that the tactics mechanism currently\nimplemented in HOL Light already provides basic support for meta-variables in\ngoals; however, it seems to be used only internally in the implementation of\nthe intuitionistic tautology prover \\texttt{ITAUT\\_TAC}.\n\nWe provide a library of basic solvers with the convention that their names end\nin \\verb|_SLV| as in \\verb|REFL_SLV|, for instance.\n\nEvery HOL tactic can be `promoted' into a solver using the ML function\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nTACTIC_SLV : tactic -> solver\n\\end{minted}\nA partial list of solvers approximately corresponding to classical HOL\ntactics is \\verb|ACCEPT_SLV|, \\verb|NO_SLV|, \\verb|REFL_SLV|,\n\\verb|RULE_SLV| (corresponding to \\verb|MATCH_MP_TAC|).\n\nNotice that these solvers are different from their corresponding\ntactics because they either\n\\begin{enumerate}\n\\item use the stream mechanism instead of OCaml exceptions to\n  handle the control flow; or\n\\item perform some kind of unification.\n\\end{enumerate}\n\nFor (1), a very basic example is the solver \\verb|NO_SLV| which,\ninstead of raising an exception, returns the empty stream of\nsolutions.\n\nOne example of (2) is the \\verb|REFL_SLV| solver: when it is applied\nto the goal\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n?- x + 1 = 1 + x\n\\end{minted}\nwhere \\verb|x| is a meta-variable, it closes the goal by augmenting the\ninstantiation with the substitution \\verb|1/x| and\nproducing the theorem \\verb!|- 1 + 1 = 1 + 1!.  Observe that the\ncorresponding \\verb|REFL_TAC| fails in this case.\n\nAs for tactics, we have a collection of higher-order solvers.  Some of\nthem are the analogues of the corresponding tacticals:\n\\verb|ASSUM_LIST_SLV|,\n\\verb|CHANGED_SLV|,\n\\verb|EVERY_SLV|,\n\\verb|MAP_EVERY_SLV|,\n\\verb|POP_ASSUM_LIST_SLV|,\n\\verb|POP_ASSUM_SLV|,\n\\verb|REPEAT_SLV|,\n\\verb|THENL_SLV|,\n\\verb|THEN_SLV|,\n\\verb|TRY_SLV|,\n\\verb|UNDISCH_THEN_SLV|.\n\n\nGiven two solvers $s_1$ and $s_2$, the solver combinator\n\\verb|CONCAT_SLV| makes a new solver that sequentially collects all\nsolutions of $s_1$ followed by all solutions of $s_2$.  This is the\nmost basic construction for introducing backtracking into the proof\nstrategy.\n\nFrom \\verb|CONCAT_SLV|, a number of derived combinators are defined to\ncapture the most common enumeration patterns; here\nwe give a brief list of those combinators without an explicit\ndescription. However, we hope that the reader can guess the actual\nbehaviour from both their name and their ML type:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nCOLLECT_SLV : solver list -> solver\nMAP_COLLECT_SLV : ('a->solver) -> 'a list -> solver\nCOLLECT_ASSUM_SLV : thm_solver -> solver\nCOLLECT_X_ASSUM_SLV : thm_solver -> solver\n\\end{minted}\n\n%% 'Balanced' solvers?\n%% % let MPLUS_SLV (slv1:solver) (slv2:solver) : solver =\n%% %   fun g -> mplusf (slv1 g) (fun _ -> slv2 g);;\n%%\n%% % let INTERLEAVE_SLV (slvl:solver list) : solver =\n%% %   if slvl = [] then NO_SLV else end_itlist MPLUS_SLV slvl;;\n%%\n%% % let MAP_INTERLEAVE_SLV (slvf:'a->solver) (lst:'a list) : solver =\n%% %   INTERLEAVE_SLV (map slvf lst);;\n%%\n%% % let INTERLEAVE_ASSUM_SLV (tslv:thm_solver) : solver =\n%% %   fun (mvs,(asl,w) as g) -> MAP_INTERLEAVE_SLV (tslv o snd) asl g;;\n%%\n%% % let INTERLEAVE_X_ASSUM_SLV (tslv:thm_solver) : solver =\n%% %   INTERLEAVE_ASSUM_SLV (fun th -> UNDISCH_THEN_SLV (concl th) tslv);;\n\nSolvers can be used interactively.  
Typically, we can start a new goal\nwith the command \\verb|gg| and execute solvers with \\verb|ee|.  The\ncommand \\verb|bb| restore the previous proof state and \\verb|pp|\nprints the current goal state.  The stream of results is produced by\na call to \\verb|top_thms()|.\n\nHere is an example of interaction.  We first introduce the goal,\nnotice the use of the binder \\verb|(??)| for the meta-variable \\verb|x|:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# gg `??x. 2 + 2 = x`;;\nval it : mgoalstack =\n`2 + 2 = x`\n\\end{minted}\none possible solution is by using reflexivity, closing the proof\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# ee REFL_SLV;;\nval it : mgoalstack =\n\\end{minted}\nwe can now form the resulting theorem\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# list_of_stream(top_thms());;\nval it : thm list = [|- 2 + 2 = 2 + 2]\n\\end{minted}\n\nNow, if one want to find a different solution, we can restore the\ninitial state\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# bb();;\nval it : mgoalstack =\n`2 + 2 = x`\n\\end{minted}\nthen we use a different solver that allows us to unify with the\nequation \\verb?|- 2 + 2 = 4?\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# ee (ACCEPT_SLV(ARITH_RULE `2 + 2 = 4`));;\nval it : mgoalstack =\n\\end{minted}\nand again, we take the resulting theorem\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# list_of_stream(top_thms());;\nval it : thm list = [|- 2 + 2 = 4]\n\\end{minted}\n\nFinally, we can change the proof strategy to find both solutions by\nusing backtracking\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# bb();;\nval it : mgoalstack =\n`2 + 2 = x`\n\n# ee (CONCAT_SLV REFL_SLV (ACCEPT_SLV(ARITH_RULE `2 + 2 = 4`)));;\nval it : mgoalstack =\n# list_of_stream(top_thms());;\nval it : thm list = [|- 2 + 2 = 2 + 2; |- 2 + 2 = 4]\n\\end{minted}\n\nThe function\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nsolve : solver -> term -> (term list * thm) stream\n\\end{minted}\nruns the proof search non interactively and produces a list of\nsolutions as already shown in Section~\\ref{sec:an-simple-example}.  In\nthis last case it would be\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# list_of_stream\n    (solve (CONCAT_SLV REFL_SLV (ACCEPT_SLV(ARITH_RULE `2 + 2 = 4`)))\n           `??x. 2 + 2 = x`);;\nval it : (term list * thm) list =\n  [([`x = 2 + 2`], |- 2 + 2 = 2 + 2);\n   ([`x = 4`], |- 2 + 2 = 4)]\n\\end{minted}\n\n%\\section{Advanced solvers}\n%\\label{sec:advanced-solvers}\n\n% - PROLOG_SLV (come chiamarla pero'?)\n% - ITAUT_SLV (bug di ITAUT_TAC)\n\n\\section{Case study: Evaluation for a lisp-like language}\n\\label{sec:lisp-eval}\n\nThe material in this section is strongly inspired from the ingenious\nwork of Byrd, Holk and Friedman about the miniKanren system\n\\citep{Byrd:2012:MLU:2661103.2661105}, where the authors work with the\nsemantics of the Scheme programming language. 
Here, we target a lisp-like\nlanguage, implemented as an object language inside the HOL prover.\nOur language is substantially simpler than Scheme; in\nparticular, it uses dynamic (instead of lexical) scope for variables.\nNonetheless, we believe that this example can suffice to illustrate\nthe general methodology.\n\nFirst, we need to extend our HOL Light environment with an object\ndatatype \\verb|sexp| for encoding S-expressions.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlet sexp_INDUCT,sexp_RECUR = define_type\n  \"sexp = Symbol string\n        | List (sexp list)\";;\n\\end{minted}\nFor instance the sexp \\verb|(list a (quote b))| is represented as HOL\nterm with\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n`List [Symbol \"list\";\n       Symbol \"a\";\n       List [Symbol \"quote\";\n             Symbol \"b\"]]`\n\\end{minted}\nThis syntactic representation can be hard to read and gets quickly\ncumbersome as the size of the terms grows.  Hence, we also introduce a\nnotation for concrete sexp terms, which is activated by the syntactic\npattern \\verb|'(|\\ldots\\verb|)|.  For instance, the above example\nis written in the HOL concrete syntax for terms as\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n`'(list a (quote b))`\n\\end{minted}\n\nWith this setup, we can easily specify the evaluation rules for our\nminimal lisp-like language.  This is an inductive predicate with rules\nfor: (i) quoted expressions; (ii) variables; (iii) lambda\nabstractions; (iv) lists; (v) unary applications.  We define a ternary\npredicate \\verb|`|$EVAL\\ e\\ x\\ y\\mathtt{}$\\verb|`|, where $e$\nis a variable environment expressed as associative list, $x$ is the\ninput program and $y$ is the result of the evaluation.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlet EVAL_RULES,EVAL_INDUCT,EVAL_CASES = new_inductive_definition\n  `(!e q. EVAL e (List [Symbol \"quote\"; q]) q) /\\\n   (!e a x. RELASSOC a e x ==> EVAL e (Symbol a) x) /\\\n   (!e l. EVAL e (List (CONS (Symbol \"lambda\") l))\n                 (List (CONS (Symbol \"lambda\") l))) /\\\n   (!e l l'. ALL2 (EVAL e) l l'\n             ==> EVAL e (List (CONS (Symbol \"list\") l)) (List l')) /\\\n   (!e f x x' v b y.\n      EVAL e f (List [Symbol \"lambda\"; List[Symbol v]; b]) /\\\n      EVAL e x x' /\\ EVAL (CONS (x',v) e) b y\n      ==> EVAL e (List [f; x]) y)`;;\n\\end{minted}\n\nWe now use our framework for running a certified evaluation process\nfor this language.  First, we define a solver for a single step of\ncomputation.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlet STEP_SLV : solver =\n  COLLECT_SLV\n    [CONJ_SLV;\n     ACCEPT_SLV EVAL_QUOTED;\n     THEN_SLV (RULE_SLV EVAL_SYMB) RELASSOC_SLV;\n     ACCEPT_SLV EVAL_LAMBDA;\n     RULE_SLV EVAL_LIST;\n     RULE_SLV EVAL_APP;\n     ACCEPT_SLV ALL2_NIL;\n     RULE_SLV ALL2_CONS];;\n\\end{minted}\nIn the above code, we collect the solutions of several different\nsolvers.  Other than the five rules of the \\verb|EVAL| predicate, we\ninclude specific solvers for conjunctions and for the two predicates\n\\verb|REL_ASSOC| and \\verb|ALL2|.\n\nThe top-level recursive solver for the whole evaluation predicate is now easy to define:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nlet rec EVAL_SLV : solver =\n   fun g -> CONCAT_SLV ALL_SLV (THEN_SLV STEP_SLV EVAL_SLV) g;;\n\\end{minted}\n\nLet us make a simple test.  
The evaluation of the expression\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n((lambda (x) (list x x x)) (list))\n\\end{minted}\ncan be obtained as follows:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# get (solve EVAL_SLV\n             `??ret. EVAL []\n                          '((lambda (x) (list x x x)) (list))\n                          ret`);;\n\nval it : term list * thm =\n  ([`ret = '(() () ())`],\n   |- EVAL [] '((lambda (x) (list x x x)) (list)) '(() () ()))\n\\end{minted}\n\nAgain, we can use the declarative nature of logic programs to run the\ncomputation backwards.  For instance, one intriguing exercise is the\ngeneration of \\textit{quine} programs, that is, programs that evaluates to\nthemselves.  In our formalization, they are those terms $q$ satisfying\nthe relation \\verb|`EVAL|~\\verb|[]|~$q$~$q$\\verb|`|.  The following command\ncomputes the first two quines found by our solver.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# let sols = solve EVAL_SLV `??q. EVAL [] q q`);;\n# take 2 sols;;\n\nval it : (term list * thm) list =\n  [([`q = List (Symbol \"lambda\" :: _3149670)`],\n    |- EVAL [] (List (Symbol \"lambda\" :: _3149670))\n       (List (Symbol \"lambda\" :: _3149670)));\n   ([`q =\n      List\n      [List\n       [Symbol \"lambda\"; List [Symbol _3220800];\n        List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]];\n       List\n       [Symbol \"lambda\"; List [Symbol _3220800];\n        List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]]]`],\n    |- EVAL []\n       (List\n       [List\n        [Symbol \"lambda\"; List [Symbol _3220800];\n         List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]];\n        List\n        [Symbol \"lambda\"; List [Symbol _3220800];\n         List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]]])\n       (List\n       [List\n        [Symbol \"lambda\"; List [Symbol _3220800];\n         List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]];\n        List\n        [Symbol \"lambda\"; List [Symbol _3220800];\n         List [Symbol \"list\"; Symbol _3220800; Symbol _3220800]]]))]\n\\end{minted}\n\nOne can easily observe that any lambda expression is trivially a quine\nfor our language.  This is indeed the first solution found by our\nsearch:\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n([`q = List (Symbol \"lambda\" :: _3149670)`],\n |- EVAL []\n         (List (Symbol \"lambda\" :: _3149670))\n         (List (Symbol \"lambda\" :: _3149670)))\n\\end{minted}\n\nThe second solution is more interesting.  Unfortunately it is\npresented in a form that is hard to decipher.  A simple trick can help\nus to present this term as a concrete sexp term: it is enough to\nreplace the HOL generated variable (\\verb|`_3149670`|) with a concrete\nstring.  
This can be done by an ad hoc substitution.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\n# let [_; i2,s2] = take 2 sols;;\n# vsubst [`\"x\"`,hd (frees (rand (hd i2)))] (hd i2);;\n\nval it : term =\n  `q = '((lambda (x) (list x x)) (lambda (x) (list x x)))`\n\\end{minted}\n\nIf we take one more solution from \\verb|sols| stream, we get a new\nquine, which, interestingly enough, is precisely the one obtained in\n\\citep{Byrd:2012:MLU:2661103.2661105}.\n\\begin{minted}[baselinestretch=0.8]{ocaml}\nval it : term =\n  `q =\n   '((quote (lambda (x) (list x (list (quote quote) x))))\n     (quote (quote (lambda (x) (list x (list (quote quote) x))))))`\n\\end{minted}\n\n\\section*{Conclusions}\n\\label{sec:conclusions}\n\nWe presented a rudimentary framework implemented on top of the HOL\nLight theorem prover that enables a logic programming paradigm for\nproof searching.  More specifically, it facilitates the use of\nmeta-variables in HOL goals and permits backtracking during the proof\nconstruction.\n\nIt would be interesting to enhance our framework with more features:\n\\begin{itemize}\n\\item Implement higher-order unification such as Miller's higher-order\n  patterns, so that our system can enable higher-order logic\n  programming in the style of $\\lambda$Prolog.\n\\item Support constraint logic programming, e.g., by adapting the data\n  structure that represent goals.\n\\end{itemize}\n\nDespite the simplicity of the present implementation, we have already\nshown the implementation of some paradigmatic examples of\nlogic-oriented proof strategies.  In the code base, some further\nexamples are included concerning a quicksort implementation and a\nsimple example of a logical puzzle.\n\n", "meta": {"hexsha": "e42ec8b42d9ff2aaec6a5c8a3097e04d868a8758", "size": 30271, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "SCILP/scilp.tex", "max_stars_repo_name": "massimo-nocentini/PhD-thesis", "max_stars_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SCILP/scilp.tex", "max_issues_repo_name": "massimo-nocentini/PhD-thesis", "max_issues_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SCILP/scilp.tex", "max_forks_repo_name": "massimo-nocentini/PhD-thesis", "max_forks_repo_head_hexsha": "f30ec2cb9cdf1e93532935448c3438700b9fcbba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.6109693878, "max_line_length": 133, "alphanum_fraction": 0.6557761554, "num_tokens": 9879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867825403176, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.5581356315194598}}
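As a usage note for the \verb|mplus(streams, interleaving)| scheduler shown earlier in this chapter, the following Python sketch (our own toy example, independent of the prototypes in \citep{Nocentini:kanrens}) makes the difference between the two strategies visible on an infinite matrix of pairs; it assumes the \verb|mplus| generator reproduced above is in scope:
\begin{minted}[baselinestretch=0.8]{python}
from itertools import count, islice

def row(i):
    # Row i of the matrix: the infinite stream (i, 0), (i, 1), ...
    return ((i, j) for j in count())

def rows():
    # A countably infinite stream of infinite streams.
    for i in count():
        yield row(i)

# Fair, dovetailing enumeration: it visits rising diagonals, so every
# pair is reached eventually -- compare with s00, s10, s01, s20, ...
print(list(islice(mplus(rows(), interleaving=True), 10)))
# [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), (3, 0), (2, 1), (1, 2), (0, 3)]

# Unfair concatenation: it never leaves the first row.
print(list(islice(mplus(rows(), interleaving=False), 4)))
# [(0, 0), (0, 1), (0, 2), (0, 3)]
\end{minted}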
{"text": "\\section{Another Application and Outlook}\r\nHow can one obtain information about a Markov chain from the observations of it?\r\nSuppose $(X_i)_{i=0,\\ldots,n}$ is given as observations.\r\nFor any candidate $\\tilde{P}=(\\tilde{p}_{ij})$, we define\r\n$$l(\\tilde{P})=\\log(\\tilde{p}_{X_0X_1}\\tilde{p}_{X_1X_2}\\cdots\\tilde{p}_{X_{n-1}X_n})=\\sum_{i,j\\in I}N_{ij}(n)\\tilde{p}_{ij}$$\r\nwhere $N_{ij}(n)=\\sum_{m=0}^{n-1}1_{X_m=i,X_{m+1}=j}$.\r\nThe maximum likelihood estimator $\\hat{P}=\\hat{P}(n)$ is the maximiser of $l$.\r\n\\begin{lemma}\r\n    We have $\\hat{p}_{ij}(n)=N_{ij}(n)/V_i(n)$ where like in the last section $V_i(n)=\\sum_{k=0}^{n-1}1_{X_k=i}$.\r\n\\end{lemma}\r\n\\begin{proof}\r\n    Use your favourite optimisation method, like Lagrange multipliers.\r\n\\end{proof}\r\n\\begin{theorem}\r\n    If $P$ is positive recurrent, then\r\n    $$\\mathbb P[\\hat{p}_{ij}(n)\\to p_{ij}\\text{ as }n\\to\\infty]=1$$\r\n\\end{theorem}\r\n\\begin{proof}\r\n    Consider $N_{ij}=\\sum_{m=1}^{V_i}Y_m$ where $Y_m$ is the indicator of the $m^{th}$ transition being from $i$ to $j$ and $V_i=V_i(\\infty)$.\r\n    By the Markov property, $Y_i$ are i.i.d. with mean $p_{ij}$ and independent from $V_i$.\r\n    As $P$ is positive recurrent, $\\mathbb P[V_i(n)\\to\\infty,n\\to\\infty]=1$, so the strong law of large numbers gives\r\n    $$\\mathbb P\\left[ \\hat{p}_{ij}(n)=\\frac{1}{V_i(n)}\\sum_{i=1}^{V_i(n)}Y_i\\to p_{ij}\\text{ as }n\\to\\infty \\right]=1$$\r\n    as desired.\r\n\\end{proof}\r\nThe course is basically finished here.\r\nWe might as well take a outlook on the subject.\\\\\r\nFor an aperiodic and irreducible finite-state Markov chain, we have seen that $\\mathbb P[X_n=i]\\to\\pi_i$ as $n\\to\\infty$.\r\nThus, conversely, to sample from a given distribution $\\pi$ (on say $N$ states), one may try to find a Markov chain with $\\pi$ as its invariant distribution and then run it for a long time.\r\nThis is known as a Markov Chain Monte Carlo (MCMC) studies in the 50s by Metropolis \\& Ulam.\r\nThere are different ways to find such a Markov chain.\r\nThe most famous one is Metropolis' algorithm originated from an article by Metropolis, Rosenbluth, Rosenbluth, Teller \\& Teller (1953).\r\nWe will not explain it here -- you can look it up elsewhere.\\\\\r\nThere is another question of theoretical and practical relevance, which is the rate of this convergence.\r\nFor example, we might want to know more about\r\n$$\\min\\{n:\\sum_i|\\mathbb P[X_n=i]-\\pi_1|<\\epsilon\\}$$\r\nThis depends very much on the particular structure of the Markov chain and is expected to be highly nontrivial.\r\nIn fact, this is a subject of current research interest.\\\\\r\nThere are other perspectives of Markov chains we can study, for example we can look at random walks on groups, or consider continuous time and mixing time of Markov chains.", "meta": {"hexsha": "c9539dfadeef1b204d3ec52e68fa82ccd6efb986", "size": 2720, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "11/out.tex", "max_stars_repo_name": "david-bai-notes/IB-Markov-Chains", "max_stars_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "11/out.tex", "max_issues_repo_name": "david-bai-notes/IB-Markov-Chains", "max_issues_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "11/out.tex", "max_forks_repo_name": "david-bai-notes/IB-Markov-Chains", "max_forks_repo_head_hexsha": "cef4f20b59106a1deaed4de2f503e594e3ffc61d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.5789473684, "max_line_length": 190, "alphanum_fraction": 0.7022058824, "num_tokens": 878, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.7981867801399695, "lm_q1q2_score": 0.5581356248384823}}
{"text": "\\documentclass{report}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{listings}\n\\begin{document}\n\\section{Logistic Regression}\n\\subsection{Abstract}\nIn this chapter, we will learn an algorithm for binary classification - soft output: Logistic regression. It mainly relies on an activation function: $sigmoid$. Since the range of the function is $(0,1)$, it can approximate the probability value.\n\\subsection{Origin}\nThe following is the explanation of sigmoid in $shuhuai$ teacher's handout.\n\\begin{quotation}\n\tSometimes we only need to get the probability of a category, so we need a function that can output the value of $(0, 1)$ interval. Considering the two classification model, we hope to model $p(C|x)$ using the discriminant model and Bayesian theorem: $$p\\left(C_{1} \\mid x\\right)=\\frac{p\\left(x \\mid C_{1}\\right) p\\left(C_{1}\\right)}{p\\left(x \\mid C_{1}\\right) p\\left(C_{1}\\right)+p\\left(x \\mid C_{2}\\right) p\\left(C_{2}\\right)}$$\n\tset $a=\\ln \\frac{p\\left(x \\mid C_{1}\\right) p\\left(C_{1}\\right)}{p\\left(x \\mid C_{2}\\right) p\\left(C_{2}\\right)}$, so:\n$$\np\\left(C_{1} \\mid x\\right)=\\frac{1}{1+\\exp (-a)}\n$$\nThe above formula is called $Logistic\\ Sigmoid$ function and its parameters denote the logarithm of the two types of the joint probability ratios. In the discriminant, we don't care the specific value of the parameter and we just use the form of the function.\n\\end{quotation}\nOf course, it doesn't matter if we can't understand tearcher's advanced explanation. We just need to know that we have the activation function $sigmoid$ now, which can be used to get the probability of a category.\n\\subsection{Algorithm}\nFirstly, we suppose the logistic regression model is:\n$$\nf(x)=\\sigma(w^Tx)\n$$\nIn the formula, $\\sigma(a)=sigmoid(a)$, we usually denote the activation function by $\\sigma$.\\\\\nSo, if we find out the best value of $w$, the best model under the assumption is determined.\\\\\nThe parameters of probability discrinant model is usually determined by maximum likelihood estimation.\\\\\nTo determine the likelihood function, we have to make some marks firstly:\n$$\np_1=\\sigma(w^Tx) \\qquad p_0=1-p_1\n$$\nIn the formula, $p_1$ is the probability of $x$ belonging to class $1$ and $p_0$ is the probability of class $0$.\\\\\nThen we can obtain the likelihood function of the model:\n$$\np(y|w;x)=p_1^yp_0^{1-y}\n$$\nThe likelihood function seem to be a little obscure, but actually it's reasonable:\n\\begin{itemize}\n\t\\item when $y$ is $1$: $p(y|w;x)=p_1^1p_0^0=p_1$\n\t\\item when $y$ is $0$: $p(y|w;x)=p_1^0p_0^1=p_0$\n\\end{itemize}\nWell, then we can determine the parameters via $MLE$.\n\\begin{equation}\n\\begin{aligned}\n\\hat{w}=argmax(J(w))&=argmax(p(Y|w;X))\\\\\n&=argmax(log(p(Y|w;X)))\\\\\n&=argmax(log(\\prod_{i=1}^n p(y_i|w;x_i)))\\\\\n&=argmax(\\sum_{i=1}^n log(p(y_i)|w;x_i))\\\\\n&=argmax(\\sum_{i=1}^n y\\ log\\, p_1+(1-y)log\\,p_0)\n\\end{aligned}\n\\end{equation}\nNotes, the formula is the opposite of the cross entropy formula mutiplied by $N$ and the logarithm in $MLE$ also match the exponential function to obtain the stable gradient in a large interval.\\\\\\\\\nBy differentiating the above formula, we note that:\n$$\np_1'=p_1(1-p_1)\n$$\nOf course, it's easy to obtain since it's just the chain rule. 
We only need to be a little bit careful.\\\\\nFinally, we obtain the result:\n$$\n\\frac{\\partial}{\\partial w}J(w)=\\sum_{i=1}^{N}\\left(y_{i}-p_{1}\\right) x_{i}\n$$\nLast but not least, we are to obtain the maximun of $p(p|w;x)$, so we need to use gradient ascent instead of gradient descent. Certainly they are similar, just add a negative character.\n\\newpage\n\\subsection{Implement}\n\\begin{lstlisting}[language={python}]\nimport os\nos.chdir(\"../\")\nfrom models.linear_models import Logistic_regression\nimport numpy as np\nimport warnings\n\nepsilon = 1\nnum_test = 100\nnum_base = 1000\nratio = 0.6\nk1, k2 = 3, 5\nb1, b2 = 1, 2\nX = np.linspace(0, 100, num_base)\nX_train = X[:-num_test]\nX_test = X[-num_test:]\nv1 = X_train[:round(len(X_train) * ratio)] * k1 + b1\nv2 = X_train[round(len(X_train) * ratio):] * k2 + b2\nv1 += np.random.normal(scale=epsilon, size=v1.shape)\nv2 += np.random.normal(scale=epsilon, size=v2.shape)\nvalue = np.r_[v1, v2]\ndata = np.c_[X_train, value]\nl1 = np.ones_like(v1)\nl2 = np.zeros_like(v2)\nlabel = np.r_[l1, l2]\nv_test_c1 = X_test * k1 + b1\nl_test_c1 = np.ones_like(v_test_c1)\ndata_test = np.c_[X_test, v_test_c1]\n\nmodel = Logistic_regression(10, 1000, lr=1e-3)\nmodel.fit(data, label)\nprint(model.get_params())\nprint(model.predict(data_test, l_test_c1))\n\\end{lstlisting}\n\\end{document}", "meta": {"hexsha": "edc501119b62183db6d6f9574103b264104546bf", "size": 4492, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EN-TeX_files/LinearClassification/08_linear_classification_logistic_regression.tex", "max_stars_repo_name": "btobab/Machine-Learning-notes", "max_stars_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-08-28T18:47:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T07:36:27.000Z", "max_issues_repo_path": "EN-TeX_files/LinearClassification/08_linear_classification_logistic_regression.tex", "max_issues_repo_name": "btobab/Machine-Learning-notes", "max_issues_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EN-TeX_files/LinearClassification/08_linear_classification_logistic_regression.tex", "max_forks_repo_name": "btobab/Machine-Learning-notes", "max_forks_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-28T18:47:22.000Z", "avg_line_length": 45.3737373737, "max_line_length": 430, "alphanum_fraction": 0.7235084595, "num_tokens": 1429, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825006, "lm_q2_score": 0.7154240079185319, "lm_q1q2_score": 0.5580256541654313}}
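To complement the library-based script above, here is a self-contained NumPy sketch of the same gradient-ascent update $w \leftarrow w + \eta\sum_i (y_i - \sigma(w^Tx_i))\,x_i$ (our own minimal illustration with made-up data; the learning rate and iteration count are arbitrary choices):
\begin{minted}[baselinestretch=0.8]{python}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_logistic(X, y, lr=0.01, n_iter=2000):
    """Gradient ascent on the log-likelihood: grad J(w) = X^T (y - p1)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p1 = sigmoid(X @ w)        # p(y=1 | x) for every sample
        w += lr * X.T @ (y - p1)   # ascent step (flip the sign for descent)
    return w

# Toy data: one feature separates the classes; a bias column is appended.
rng = np.random.default_rng(0)
X = np.c_[np.r_[rng.normal(-2, 1, 50), rng.normal(2, 1, 50)], np.ones(100)]
y = np.r_[np.zeros(50), np.ones(50)]
w = fit_logistic(X, y)
print(((sigmoid(X @ w) > 0.5) == (y == 1)).mean())  # training accuracy
\end{minted}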
{"text": "\\section{Visualization Details and Discussion}\n\\label{sec:supp-visualization}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\includegraphics[width=0.425\\textwidth]{plots_supp_hessian}\n\t\\vspace*{2px}\n\t\n\t\\hspace*{-0.35cm}\n\t\\includegraphics[width=0.44\\textwidth]{plots_supp_hessian2}\n\t\\vspace*{-4px}\n\t\\caption{\\textbf{Visualization in ``Hessian'' Direction:} \\RCE visualized in the direction of the largest Hessian eigenvalue (\\ie, the corresponding eigenvector). The eigenvalues quantify the ``rate of change'' along the corresponding eigenvector. Thus, the largest eigenvalue represents a worst-case direction in weight space. Clearly, \\RCE increases significantly in these directions.}\n\t\\label{fig:supp-visualization}\n\t\\vspace*{-6px}\n\\end{figure}\n\n\\textbf{Visualization Details:}\n%\nFor the plots in the main paper, we compute the mean \\RCE across $10$ random, normalized directions; for adversarial directions, we plot max \\RCE over $10$ adversarial directions. After normalization, we re-scale the weight directions to have length $0.5$ for random directions and $0.025$ for adversarial directions. This essentially ``zooms in'' and is particularly important when visualizing along adversarial weight directions. In all cases, we estimate \\RCE on one batch of $128$ test examples for $51$ evenly spaced  step sizes in $[-1, 1]$. We found that using more test examples does not change the \\RCE landscape significantly. \\figref{fig:supp-visualization} shows additional visualizations along the direction of the largest Hessian eigenvalue (also using per-layer normalization, multiplied by $0.5$).\n\n\\textbf{Discussion of \\cite{LiNIPS2018}:}\n%\nOriginally, \\cite{LiNIPS2018} uses a per-filter normalization instead of our per-layer normalization. Specifically, this means\n\\begin{align}\n\t\\hat{\\nu}^{(l,i)} &= \\frac{\\nu^{(l)}}{\\|\\nu^{(l,i)}\\|_2} \\|w^{(l,i)}\\|_2\\quad\\text{ for layer }l\\text{, filter }i,\\label{eq:supp-normalization-filter}\n\\end{align}\ninstead of our normalization outlined in the main paper:\n\\begin{align}\n\t\\hat{\\nu}^{(i)} &= \\frac{\\nu^{(l)}}{\\|\\nu^{(l)}\\|_2} \\|w^{(l)}\\|_2\\quad\\text{ for layer }l.\\label{eq:supp-normalization}\n\\end{align}\nFurthermore, \\cite{LiNIPS2018} does not consider changes in the biases or batch normalization parameters. Instead, we also normalize the biases as above and take them into account for visualization (but not the batch normalization parameters). More importantly, \\cite{LiNIPS2018} considers only (clean) \\CE, while we focus on \\RCE. Compared to the plots from the main paper, \\figref{fig:supp-visualization-li} shows that the difference between filter-wise and layer-wise normalization has little impact in visually judging flatness. Generally, filter-wise normalization makes the \\RCE landscape ``look'' flatter. 
However, this is mainly because the absolute step size, \\ie, $\\|\\hat{\\nu}\\|_2$, is smaller compared to layer-wise normalization: for our AT baseline, this is (on average) $\\|\\hat{\\nu}\\|_2 \\approx 33.13$ for layer-wise and $\\|\\hat{\\nu}\\|_2 \\approx 21.49$ for filter-wise normalization.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\n\t\\includegraphics[width=0.425\\textwidth]{plots_supp_random_filter}\n\t\\vspace*{2px}\n\t\n\t\\hspace*{-0.35cm}\n\t\\includegraphics[width=0.44\\textwidth]{plots_supp_random_filter2}\n\t\\vspace*{-4px}\n\t\\caption{\\textbf{Filter-Wise Normalization:} Compared to the \\RCE landscape visualizations in the main paper, using per-layer normalization in \\eqnref{eq:supp-normalization}, we follow \\cite{LiNIPS2018} and use filter-wise normalization in \\eqnref{eq:supp-normalization-filter}. Again, we plot mean \\RCE across $10$ random directions. However, this does not change results significantly, flatness remains difficult to judge and compare in an objective way. Filter-wise normalization, however, ``looks'' generally flatter.}\n\t\\label{fig:supp-visualization-li}\n\t\\vspace*{-6px}\n\\end{figure}\n\n\\section{Computing Flatness in \\RCE}\n\\label{sec:supp-flatness-computation}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\begin{minipage}[t]{0.31\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_epochs_small}\n\t\\end{minipage}\n\t\\hspace*{2px}\n\t\\begin{minipage}[t]{0.31\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_epochs}\n\t\\end{minipage}\n\t\\hspace*{2px}\n\t\\begin{minipage}[t]{0.31\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_epochs_large}\n\t\\end{minipage}\n\t\\vspace*{-6px}\n\t\\caption{\\textbf{Flatness Throughout Training, Ablation:} We plot train and test \\RCE, maximum Hessian eigenvalue $\\lambda_{\\text{max}}$, average-/worst-case flatness of (clean) \\CE as well as average-/worst-case flatness on \\RCE. We consider $\\xi{=}0.25$/$\\xi{=}0.001$ (left), $\\xi{=}0.5$/$\\xi{=}0.003$ (middle and main paper), and $\\xi{=}0.75$/$\\xi{=}0.005$ for average-/worst-case flatness, respectively. If the neighborhood $b_\\xi(w)$ is chosen too small (left), increases/changes in flatness during robust overfitting are difficult to measure due to fluctuations throughout training. Chosen too large (right), in contrast, worst-case flatness (both in \\CE and \\RCE) quickly reaches unreasonably high loss values. This becomes problematic when comparing across models.}\n\t\\label{fig:supp-flatness-epochs}\n\t\\vspace*{-6px}\n\\end{figure*}\n\n\\textbf{Average-Case Flatness:}\n%\nThe average-case flatness measure in \\RCE is defined as:\n\\begin{align}\n\t\\hspace*{-0.2cm}\\begin{split}\n\t\t\\mathbb{E}_{\\nu}[\\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x{+}\\delta, w{+}\\nu), y)]\\text{\\hphantom{aaaaaaa}}\\\\\n\t\t\\text{\\hphantom{aaaaaaa}}- \\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x{+}\\delta;w), y)\n\t\\end{split}\\label{eq:supp-average}\n\\end{align}\nwhere $\\mathbb{E}_{\\nu}$ denotes the expectation over random weight perturbations $\\nu \\in B_\\xi(w)$, $\\mathcal{L}$ is the cross-entropy loss and $\\max_{\\|\\delta\\|_\\infty \\leq \\epsilon}\\mathcal{L}(f(x{+}\\delta;w), y)$ represents the robust loss (\\RCE). 
The first term is computed by randomly sampling $10$ weight perturbations from\n\\begin{align}\n\tB_\\xi(w) = \\{\\nu : \\|\\nu^{(l)}\\|_2 \\leq \\xi \\|w^{(l)}\\|_2\\;\\forall\\text{ layers }l\\}.\\label{eq:supp-ball}\n\\end{align}\nFor each weight perturbation $\\nu$, the robust loss, defined as $\\max_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x{+}\\delta; w{+}\\nu), y)$, is estimated using PGD with $20$ iterations ($\\epsilon = \\nicefrac{8}{255}$, learning rate $0.007$ and signed gradient). This is done \\emph{per-batch} (of size $128$) for the \\emph{first} $1000$ test examples. Alternatively, the weight perturbations $\\nu$ could also be fixed across batches (\\ie, $10$ samples in total for $\\lceil\\nicefrac{1000}{128}\\rceil$ batches). However, this is not possible for our worst-case flatness measure, as discussed next. Thus, for comparability, we sample random weight perturbations \\emph{for each} batch individually. The second term is computed using PGD-$20$ with $10$ restarts, choosing the worst-case adversarial examples per test example (\\ie, maximizing \\RCE).\n\nSampling in $B_\\xi(w)$ is accomplished by sampling individually per layer. That is, for each layer $l$, we compute $\\xi' := \\xi \\cdot \\|w^{(l)}\\|_2$ given the original weights $w$. Then, a random vector $\\nu^{(l)}$ with $\\|\\nu^{(l)}\\|_2 \\leq \\xi'$ is sampled. This is done for each layer, handling weights and biases as separate layers, but ignoring batch normalization \\cite{IoffeICML2015} parameters.\n\n\\textbf{Worst-Case Flatness:}\n%\nWorst-case flatness is defined as:\n\\begin{align}\n\t\\begin{split}\n\t\t\\max_{\\nu \\in B_\\xi(w)} \\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} &\\mathcal{L}(f(x{+}\\delta; w{+}\\nu), y)\\\\\n\t\t&- \\max\\limits_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x{+}\\delta;w), y).\n\t\\end{split}\\label{eq:supp-worst}\n\\end{align}\nHere, the expectation over $\\nu$ in \\eqnref{eq:supp-average} is replaced by a maximum over $\\nu \\in B_\\xi(w)$, considering smaller $\\xi$. In practice, the first term in \\eqnref{eq:supp-worst} is computed by \\emph{jointly} optimizing over the weight perturbation $\\nu$ and the input perturbations $\\delta_b$ on a per-batch basis (of size $B = 128$). This means that, after randomly initializing $\\delta_b$ for $b = 1,\\ldots,B$ and $\\nu \\in B_\\xi(w)$, each iteration computes and applies the updates\n\\begin{align}\n\t\\Delta_\\nu = \\nabla_\\nu \\sum_{b = 1}^B\\mathcal{L}(f(x_b + \\delta_b; w + \\nu), y_b)\\\\\n\t\\Delta_{\\delta_b} = \\nabla_{\\delta_b} \\sum_{b = 1}^B\\mathcal{L}(f(x_b + \\delta_b; w + \\nu), y_b)\n\\end{align}\nbefore projecting $\\delta_b$ and $\\nu$ onto the constraints $\\|\\delta_b\\|_\\infty \\leq \\epsilon$ and $\\|\\nu^{(l)}\\|_2 \\leq \\xi \\|w^{(l)}\\|_2$. The latter projection is applied on a per-layer basis, similar to sampling as described above. For the adversarial weight perturbation $\\nu$, we use learning rate $0.001$, after normalizing the update $\\Delta_\\nu$ per-layer as in \\eqnref{eq:supp-normalization}. We run $20$ iterations with $10$ restarts for each batch.\n\n\\textbf{Flatness of Clean Loss Landscape:}\n%\nWe can also consider both \\eqnref{eq:supp-average} and \\eqnref{eq:supp-worst} on the \\emph{clean} (cross-entropy) loss (``\\CE''), \\ie, $\\mathcal{L}(f(x; w{+}\\nu), y)$ instead of $\\max_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x{+}\\delta; w{+}\\nu), y)$. \\red{We note that \\RCE is an upper bound of (clean) \\CE. 
Thus, flatness in \\RCE and flatness in \\CE are connected. However, the Pearson correlation between \\RCE and average-case flatness in (clean) \\CE is only $0.27$, compared to $0.85$ for average-case flatness in \\emph{\\RCE}. This indicates that correctly measuring flatness in \\RCE is crucial to empirically establish a relationship between robustness and flatness.}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\hspace*{-1.75cm}\n\t\\begin{minipage}[t]{0.28\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.5cm]{plots_supp_flatness_correlation_seq_loss_std}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.2\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.5cm]{plots_supp_flatness_correlation_seq_train_loss}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.2\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.5cm]{plots_supp_flatness_correlation_adv_loss_e0003}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.25\\textwidth}\n\t\t\t\\vspace*{0px}\n\t\t\t\n\t\t\t\\includegraphics[height=3.5cm]{plots_supp_flatness_seq_early_stopping_all}\n\t\t\t\n\t\t\t\\begin{tikzpicture}[overlay,remember picture]\n\t\t\t\t\\node[anchor=south west] at (1.8,0.75){\\includegraphics[height=1.4cm]{plots_supp_flatness_seq_early_stopping_zoom}};\n\t\t\t\\end{tikzpicture}\n\t\t\\end{minipage}\n\t\\vspace*{-18px}\n\t\\caption{\\textbf{Left: Standard Deviation of Average-Case Flatness:} We plot \\RCE (y-axis) against the standard deviation (std) in our average-case flatness measure (x-axis). Note that the standard deviation is due to the random weight perturbations $\\nu$ in \\eqnref{eq:supp-average}. Interestingly, more robust methods are not only flatter, but our average-case flatness measure also has lower standard deviation. \\textbf{Middle Left: Average-Case Flatness of Train \\RCE:} \\emph{Test} \\RCE plotted against our average-case flatness measure as computed on \\emph{training} examples. Even on the training set, flatness is predictive of robust generalization, \\ie, adversarial robustness on the test set. The relationship, however, is weaker compared to average-case flatness on test examples. \\textbf{Middle Right: Worst-Case Flatness in (Clean) \\CE:} As worst-case flatness in the \\emph{clean} \\CE landscape also mirrors robust overfitting in \\figref{fig:supp-flatness-epochs}, we plot \\RCE against worst-case flatness in \\CE. Even though flatness is measured considering clean \\CE, many methods improving robustness (\\ie, lower \\RCE) exhibit surprisingly good flatness. \\textbf{Right: Early Stopping for all Models:} \\RCE vs\\onedot average-case flatness for all models where early stopping improves adversarial robustness. For example, this is not the case for AutoAugment or AT with unlabeled examples. Across all models, early stopping improves both robustness and flatness. For clarity, we provide a zoomed-in plot for the lower left corner.}\n\t\\label{fig:supp-flatness-misc}\n\\end{figure*}\n\n\\subsection{Ablation for Flatness Measures}\n\\label{sec:supp-flatness-ablation}\n \n\\textbf{Flatness Throughout Training:}\n%\n\\figref{fig:supp-flatness-epochs} shows average- and worst-case flatness on both the clean and the robust loss (\\CE and \\RCE) throughout training of our AT baseline. We consider different sizes of the neighborhood $B_\\xi(w)$ for computing our flatness measures: $\\xi{=}0.25$/$\\xi{=}0.001$ (left), $\\xi{=}0.5$/$\\xi{=}0.003$ (middle, as in the main paper), and $\\xi{=}0.75$/$\\xi{=}0.005$ for average-/worst-case flatness, respectively. 
While average-case flatness of \\emph{clean} \\CE does \\emph{not} mirror robust overfitting very well, its worst-case counterpart increases during overfitting, even though \\RCE is \\emph{not} taken into account. Furthermore, if the neighborhood $B_\\xi(w)$ is chosen too small, the flatness measures are not sensitive enough to be discriminative (\\cf left). \\figref{fig:supp-flatness-epochs} also shows that, throughout training of one model, the largest Hessian eigenvalue mirrors robust overfitting. Overall, this means that early stopping essentially improves adversarial robustness by finding flatter minima. This is confirmed in \\figref{fig:supp-flatness-misc} (right), showing that early stopping consistently improves robustness and flatness.\n\n\\textbf{Standard Deviation in Average-Case Flatness:} \n%\nIn \\figref{fig:supp-flatness-misc} (left), the x-axis plots the standard deviation in our average-case flatness measure (in \\RCE). Note that the standard deviation originates in the random samples $\\nu$ used to calculate \\eqnref{eq:supp-average}. First of all, the standard deviation tends to be small (\\ie, $\\leq 0.3$) across almost all models. This means that our findings in the main paper, \\ie, the strong correlation between flatness and \\RCE, are supported by low standard deviation. More importantly, the standard deviation \\emph{decreases} for particularly robust methods.\n\n\\textbf{Average-Case Flatness on Training Examples:}\n%\n\\figref{fig:supp-flatness-misc} (middle left) shows that average-case flatness in \\RCE is also predictive of robust generalization when computed on \\emph{training} examples. However, the correlation between (test) \\RCE and (train) flatness is less clear, \\ie, there is a larger ``spread'' across methods. Here, we use the first $1000$ training examples to compute average-case flatness.\n\n\\textbf{Worst-Case Flatness on \\emph{Clean} \\CE:}\n%\nIn \\figref{fig:supp-flatness-epochs}, worst-case flatness on clean \\CE also correlates with robust overfitting. Thus, in \\figref{fig:supp-flatness-misc} (middle right), we plot \\RCE against worst-case flatness of \\CE, showing that there is no clear relationship across models. Nevertheless, many methods improving adversarial robustness also result in flatter minima in the clean loss landscape. This is sensible as \\RCE is generally an upper bound for (clean) \\CE. 
On the other hand, flatness in \\CE is \\emph{not} discriminative enough to clearly distinguish between robust and less robust models.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\begin{minipage}[t]{0.3\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_seq_loss_e025}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_seq_loss_e05}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_seq_loss_e075}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_seq_loss_e1}\n\t\\end{minipage}\n\t\\\\[6px]\n\t\n\t{\\color{black!75}\\rule{\\textwidth}{0.65px}}\n\t\\\\[-6px]\n\t\n\t\\begin{minipage}[t]{0.3\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_joint_loss_e000075}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_joint_loss_e0001}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_joint_loss_e0003}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.22\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=3.6cm]{plots_supp_flatness_correlation_joint_loss_e0005}\n\t\\end{minipage}\n\t\\caption{\\textbf{Flatness in \\RCE, Ablation for $B_\\xi(w)$:} \\RCE (y-axis) plotted against average-case (top) and worst-case (bottom) flatness in \\RCE (x-axis).\n\t\\textbf{Top:} We consider $\\xi \\in \\{0.25, 0.5, 0.75, 1\\}$ for average-case flatness. The clear relationship between adversarial robustness, \\ie, low \\RCE, and flatness shown for $\\xi = 0.5$ in the main paper can be reproduced in all cases. \\textbf{Bottom:} For worst-case flatness, we consider $\\xi \\in \\{0.00075, 0.001, 0.003, 0.005\\}$. When chosen too large, \\eg, $\\xi = 0.005$, however, variance seems to increase, making the relationship less clear. For small $\\xi$, \\eg, $\\xi = 0.00075$, the correlation between robustness and flatness is pronounced, except for a few outliers, including AT-AWP \\cite{WuNIPS2020}.}\n\t\\label{fig:supp-flatness-ablation}\n\\end{figure*}\n\n\\textbf{Ablation for $B_\\xi(w)$:}\n%\nFor computing our average- and worst-case flatness measures (in \\RCE), we considered various sizes of neighborhoods in weight space, \\ie, $B_\\xi(w)$ from \\eqnref{eq:supp-ball} for different $\\xi$. \\figref{fig:supp-flatness-ablation} considers $\\xi \\in \\{0.25, 0.5, 0.75, 1\\}$ for average-case flatness (top) and $\\xi \\in \\{0.00075, 0.001, 0.003, 0.005\\}$ for worst-case flatness (bottom). In both cases, we plot \\RCE (y-axis) against flatness in \\RCE (x-axis), as in the main paper. Average-case flatness using small $\\xi = 0.25$ results in significantly smaller values, between $0$ and $0.4$, \\ie, the increase in \\RCE in random weight directions is rather small. Still, the relationship between adversarial robustness and flatness is clearly visible. The same holds for larger $\\xi \\in \\{0.75, 1\\}$. 
Worst-case flatness generally gives a less clear picture regarding the relationship between robustness and flatness.\nAdditionally, for larger $\\xi \\in \\{0.003, 0.005\\}$, variance seems to increase such that this relationship becomes less pronounced. In contrast to average-case flatness, the variance is not induced by the $10$ restarts used for \\eqnref{eq:supp-worst}, but caused by training itself. Indeed, re-training our AT baseline leads to a worst-case flatness in \\RCE of $5.1$, a significant reduction from the $6.49$ obtained for our original baseline. Overall, however, the observations from the main paper can be confirmed using different sizes of the neighborhood $B_\\xi(w)$.\n\n\\section{Scaling Networks and Scale-Invariance}\n\\label{sec:supp-scaling}\n\n\\textbf{Scale-Invariance:}\n%\nIn the main paper, we presented a simple experiment to show that our measures of flatness in \\RCE are scale-invariant: we scaled weights \\emph{and} biases of \\emph{all} convolutional layers in our adversarially trained ResNet-18 \\cite{HeCVPR2016} by factor $s \\in \\{0.5, 2\\}$. Note that all convolutional layers in the ResNet are followed by batch normalization layers \\cite{IoffeICML2015}. Thus, the effect of scaling is essentially ``canceled out'', \\ie, these convolutional layers are scale-invariant, and the prediction stays roughly constant. \\figref{fig:supp-scaling} (left) shows \\RCE landscape visualizations for AT and its scaled variants in random and adversarial weight directions. Clearly, scaling AT has negligible impact on the \\RCE landscape in both cases. \\figref{fig:supp-scaling} (right) shows that our flatness measures remain invariant, as well. As $B_\\xi(w)$ in \\eqnref{eq:supp-ball} is defined \\emph{per-layer} (weights and biases separately) and \\emph{relative} to $w$, the neighborhood increases alongside the weights, rendering visualization and flatness measures invariant.\n\\red{Scaling up specific layers while scaling down others, as discussed in \\cite{DinhICML2017}, causes the neighborhood $B_\\xi(w)$ to increase or decrease in size for these particular layers. Thus, following \\cite{DinhICML2017}, scaling up the first layer of a two-layer ReLU network by $\\alpha$ and scaling down the second layer by $\\nicefrac{1}{\\alpha}$ (keeping the output constant) has no effect in terms of measuring flatness as the per-layer neighborhood $B_\\xi(w)$ is scaled accordingly, as well.}\nThe Hessian eigenspectrum, in contrast, scales with the models, \\cf \\tabref{tab:supp-convexity}, and is not suited to quantify flatness.\n\n\\textbf{Convexity and Flatness:}\n%\n\\tabref{tab:supp-convexity} also presents the convexity metric introduced in \\cite{LiNIPS2018}: $\\nicefrac{|\\lambda_{\\text{min}}|}{|\\lambda_{\\text{max}}|}$ with $\\lambda_{\\text{min}}$/$\\lambda_{\\text{max}}$ being the smallest/largest Hessian eigenvalue. Note that the Hessian is computed following \\cite{LiNIPS2018} \\wrt the \\emph{clean} (cross-entropy) loss, not taking into account adversarial examples. The intuition is that negative eigenvalues with large absolute value correspond to non-convex directions in weight space. If these eigenvalues are large in relation to the positive eigenvalues, there is assumed to be significant non-convexity ``around'' the found minimum. \\tabref{tab:supp-convexity} shows that this fraction is usually very small, as also found in \\cite{LiNIPS2018}. A sketch of how these extremal eigenvalues can be estimated is given below.
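\n\nFor reference, $\\lambda_{\\text{max}}$ can be estimated via Hessian-vector products and power iteration; the following is a minimal sketch (our own illustration, not necessarily the exact implementation used for \\tabref{tab:supp-convexity}). $\\lambda_{\\text{min}}$ can be obtained analogously by applying power iteration to the shifted operator $\\lambda_{\\text{max}} I - H$:\n\\begin{verbatim}\nimport torch\n\ndef max_hessian_eigenvalue(loss, params,\n                           iters=100):\n    # Power iteration with Hessian-vector\n    # products: Hv is obtained by differentiating\n    # the gradient-vector product.\n    grads = torch.autograd.grad(\n        loss, params, create_graph=True)\n    v = [torch.randn_like(p) for p in params]\n    lam = 0.0\n    for _ in range(iters):\n        norm = torch.sqrt(\n            sum((v_i ** 2).sum() for v_i in v))\n        v = [v_i / norm for v_i in v]\n        gv = sum((g * v_i).sum()\n                 for g, v_i in zip(grads, v))\n        hv = torch.autograd.grad(\n            gv, params, retain_graph=True)\n        # Rayleigh quotient v^T H v (v has norm 1).\n        lam = sum((h * v_i).sum()\n                  for h, v_i in zip(hv, v)).item()\n        v = [h.detach() for h in hv]\n    return lam\n\\end{verbatim}\n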
However, \\tabref{tab:supp-convexity} also shows that this convexity measure is not clearly correlated with adversarial robustness.\n\n\\section{Detailed Experimental Setup}\n\\label{sec:supp-setup}\n\nWe focus our experiments on \\CifarT \\cite{Krizhevsky2009}, consisting of $50\\text{k}$ training examples and $10\\text{k}$ test examples of size $32\\times32$ (in color) with $K = 10$ class labels. We use all training examples during training, but withhold the \\emph{last} $500$ test examples for early stopping. Evaluation is performed on the \\emph{first} $1000$ test examples, due to long runtimes of AutoAttack \\cite{CroceARXIV2020} and our flatness measures (on \\RCE). Any evaluation on the training set is performed on the first $1000$ training examples (\\eg, in \\figref{fig:supp-flatness-misc}, middle).\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\begin{minipage}[t]{0.26\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_scaling_random}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.155\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_scaling_adversarial}\n\t\\end{minipage}\n\t\\hspace*{0.25cm}\n\t\\begin{minipage}[t]{0.38\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\\scriptsize\n\t\t{\n\t\t\t\\setlength{\\tabcolsep}{0pt}\n\t\t    \\newcolumntype{C}[1]{@{}>{\\centering\\arraybackslash}p{#1}@{}}\n\t\t\t\\begin{tabularx}{\\textwidth}{|X|C{0.85cm}|C{1.5cm}|C{0.85cm}|C{0.85cm}|C{0.85cm}|}\n\t\t\t\t\\hline\n\t\t\t\t\\hspace*{2px} Model & \\multicolumn{2}{c|}{\\bfseries Robustness} & \\multicolumn{3}{c|}{\\bfseries Flatness}\\\\\n\t\t\t\t\\hline\n\t\t\t\t& \\RTE $\\downarrow$ & \\RTE $\\downarrow$ & Worst & Avg & Worst\\\\\n\t\t\t\t& (test) & (train) & \\emph{Loss} & \\textbf{RLoss} & \\textbf{RLoss}\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\hline\t\t\n\t\t\t\t\\hspace*{2px} Scaled $\\times0.5$ & 60.9 & 8.4 (-52.5) &  0.86 & 1.36 & 6.50\\\\\n\t\t\t\t\\hspace*{2px} AT (baseline) & 61.0 & 8.4 (-52.6) & 0.86 & 1.21 & 6.48\\\\\n\t\t\t\t\\hspace*{2px} Scaled $\\times2$ & 61.0 & 8.3 (-52.7) & 0.86 & 1.27 & 6.49\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabularx}\n\t\t}\n\t\\end{minipage}\n\t\\vspace*{-6px}\n\t\\caption{\\textbf{Flatness and Scale-Invariance.} \\textbf{Left:} We plot average \\RCE and worst \\RCE along random and adversarial directions, as discussed in \\secref{sec:supp-visualization}, for AT and its scaled variants, $\\times0.5$ and $\\times2$. Clearly, the \\RCE landscapes look nearly identical. \\textbf{Right:} Robustness against PGD-$20$ on train and test examples (the difference between train and test \\RTE in parentheses), as well as average- and worst-case flatness measures on \\RCE. For completeness, we also include worst-case flatness on clean \\CE. All of these measures are nearly invariant to scaling. The shown differences can be attributed to randomness in computing these measures.}\n\t\\label{fig:supp-scaling}\n\n\\end{figure*}\n\nAs network architecture, we use ResNet-18 \\cite{HeCVPR2016} with batch normalization \\cite{IoffeICML2015} and ReLU activations. Our AT baseline (\\ie, default model) is trained using SGD for $150$ epochs, batch size $128$, learning rate $0.05$, reduced by factor $0.1$ at $60$, $90$ and $120$ epochs, weight decay $0.005$ and momentum $0.9$. We save snapshots every $5$ epochs to perform early stopping, but do \\emph{not} use early stopping by default. We whiten input examples by subtracting the (per-channel) mean and dividing by the standard deviation. 
We use standard data augmentation, considering random flips and cropping (by up to $4$ pixels per side). By default, we use $7$ iterations of PGD, with learning rate $0.007$, signed gradient and $\\epsilon = \\nicefrac{8}{255}$ to compute $L_\\infty$ adversarial examples. Note that no momentum \\cite{DongCVPR2018} or backtracking \\cite{StutzICML2020} is used for PGD. The training curves in \\figref{fig:supp-training-curves} correspond to robustness measured using the $7$-iteration PGD attack used for training, which we also use for early stopping (with $5$ random restarts).\n\nFor evaluation, we run PGD for $20$ iterations and $10$ random restarts, taking the worst-case adversarial example per test example \\cite{StutzICML2020}. Our results considering robust loss (\\RCE) are based on PGD, while we report robust test error (\\RTE) using AutoAttack \\cite{CroceARXIV2020}. \\red{Note that AutoAttack does \\emph{not} maximize cross-entropy loss as it stops when adversarial examples are found. Thus, it is not suitable to estimate \\RCE.} Robust test error is calculated as the fraction of test examples that are either mis-classified or successfully attacked. The distinction between PGD-$20$ and AutoAttack is important: AutoAttack thus under-estimates \\RCE, while PGD-$20$ generally underestimates \\RTE. Computation of our average- and worst-case flatness measures is detailed in \\secref{sec:supp-flatness-computation}.\n\nEverything is implemented in PyTorch~\\cite{PaszkeNIPSWORK2017}.\n\n\\section{Methods}\n\\label{sec:supp-methods}\n\nIn the following, we briefly elaborate on the individual methods considered in our experiments.\n\n\\textbf{Learning Rate Schedules:}\n%\nBesides our default, multi-step learning rate schedule (learning rate $0.05$, reduced by factor $0.1$ after epochs $60$, $90$, and $120$), we followed \\cite{PangARXIV2020b} and implemented the following learning rate schedules: First, simply using a constant learning rate of $0.05$. Second, only two ``late'' learning rate reductions at epochs $140$ and $145$, as done in \\cite{QinNIPS2019}. Third, using a cyclic learning rate, interpolating between a learning rate of $0.2$ and $0$ for $30$ epochs per cycle, as, \\eg, done in \\cite{WongARXIV2020}. We consider training for up to $4$ cycles ($=120$ epochs). These learning rate schedules are available as part of PyTorch \\cite{PaszkeNIPSWORK2017}.\n\n\\textbf{Label Smoothing:}\n%\nIn \\cite{SzegedyCVPR2016}, label smoothing is introduced as regularization to improve (clean) generalization by \\emph{not} enforcing one-hot labels in the cross-entropy loss. Instead, for label $y$ and $K = 10$ classes, a target distribution $p \\in [0,1]^K$ (subject to $\\sum_i p_i = 1$) with $p_y = 1 - \\tau$ (correct label) and $p_i = \\nicefrac{\\tau}{K - 1}$ for $i \\neq y$ (all other labels) is enforced. During AT, we only apply label smoothing for the weight update, not for PGD. We consider $\\tau \\in \\{0.1, 0.2, 0.3\\}$.\n\n\\textbf{Label Noise:}\n%\nInstead of explicitly enforcing a ``smoothed'' target distribution, we also consider injecting label noise during training. In each batch, we sample random labels for a fraction $\\tau$ of the examples. Note that the labels are sampled uniformly across all $K = 10$ classes. Thus, in expectation, the enforced target distribution is $p_y = 1 - \\tau + \\nicefrac{\\tau}{K}$ and $p_i = \\nicefrac{\\tau - \\nicefrac{\\tau}{K}}{K - 1}$, which can easily be verified numerically, as sketched below. 
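\n\nAs a quick check of this expectation, consider the following Monte-Carlo sketch (the variable names are ours, for illustration only):\n\\begin{verbatim}\nimport numpy as np\n\nK, tau, y, n = 10, 0.3, 0, 1000000\nlabels = np.full(n, y)\n# A fraction tau of the examples receives a\n# label drawn uniformly from all K classes.\nnoisy = np.random.rand(n) < tau\nlabels[noisy] = np.random.randint(0, K,\n                                  noisy.sum())\np = np.bincount(labels, minlength=K) / n\n# p[y] ~ 1 - tau + tau/K = 0.73;\n# p[i], i != y, ~ (tau - tau/K)/(K - 1) = 0.03\n\\end{verbatim}\n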
As a result, this is equivalent to label smoothing with $\\tau' = \\tau - \\nicefrac{\\tau}{K}$. In contrast to label smoothing, this distribution is not enforced explicitly in the cross-entropy loss. As above, adversarial examples are computed against the true labels (without label noise) and label noise is injected for the weight update. We consider $\\tau \\in \\{0.1, 0.2, 0.3, 0.4, 0.5\\}$. While label smoothing does not further improve adversarial robustness for $\\tau > 0.3$, label noise proved very effective in avoiding robust overfitting, which is why we also consider $\\tau = 0.4$ or $0.5$.\n\n\\textbf{Weight Averaging:}\n%\nTo implement weight averaging \\cite{IzmailovUAI2018}, we follow \\cite{GowalARXIV2020} and keep a ``running'' average $\\bar{w}$ of the model's weights throughout training, updated in each iteration $t$ as follows: \n\\begin{align}\n\t\\bar{w}^{(t)} = \\tau \\bar{w}^{(t - 1)} + (1 - \\tau) w^{(t)}\n\\end{align}\nwhere $w^{(t)}$ are the weights in iteration $t$ \\emph{after} the gradient update. Weight averaging is motivated by finding the weights $\\bar{w}$ in the center of the found local minimum. As, depending on the learning rate, training tends to oscillate, the average of the iterates is assumed to be close to the actual center of the minimum. In our experiments, we consider $\\tau \\in \\{0.98, 0.985, 0.99, 0.9975\\}$.\n\n\\textbf{Weight Clipping:}\n%\nFollowing \\cite{StutzMLSYS2021}, we implement weight clipping by clipping the weights to $[-w_{\\text{max}}, w_{\\text{max}}]$ after each training iteration. We found that $w_{\\text{max}}$ can be chosen as small as $0.005$, which works particularly well; larger $w_{\\text{max}}$ does \\emph{not} have significant impact on adversarial robustness for AT. \\cite{StutzMLSYS2021} argues that weight clipping together with minimizing cross-entropy loss leads to more redundant weights, improving robustness to random weight perturbations. As a result, we also expect weight clipping to improve flatness. We consider $w_{\\text{max}} \\in \\{0.005, 0.01, 0.025\\}$.\n\n\\textbf{Ignoring Incorrect Examples \\& Preventing Label Leaking:}\n%\nAs robust overfitting in AT leads to large \\RCE on incorrectly classified test examples, we investigate whether (a) \\emph{not} computing adversarial examples on incorrectly classified examples (during training) or (b) computing adversarial examples against the predicted (not true) label (during training) helps to mitigate robust overfitting. These changes can be interpreted as ablations of MART \\cite{WangICLR2020} and are easily implemented, as the sketch below shows. Note that option (b) essentially computes adversarial examples without label leaking \\cite{KurakinARXIV2016}. 
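\n\nBoth options require only small changes to the standard AT loop. The following is a simplified sketch, where \\texttt{pgd\\_attack} stands in for the PGD procedure from \\secref{sec:supp-setup}:\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef at_variant_loss(model, x, y, option):\n    with torch.no_grad():\n        pred = model(x).argmax(dim=1)\n    if option == 'a':\n        # (a) Only attack correctly classified\n        # training examples.\n        mask = pred.eq(y)\n        x_adv = x.clone()\n        x_adv[mask] = pgd_attack(\n            model, x[mask], y[mask])\n    else:\n        # (b) Attack the predicted labels to\n        # prevent label leaking.\n        x_adv = pgd_attack(model, x, pred)\n    # The weight update uses the true labels y.\n    return F.cross_entropy(model(x_adv), y)\n\\end{verbatim}\n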
However, as shown in \\figref{fig:supp-ii-pll}, these two variants of AT have little to no impact on robust overfitting.\n\n\\begin{table}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\scriptsize\n\t\\begin{tabularx}{0.475\\textwidth}{|X|@{\\hspace*{3px}}c@{\\hspace*{3px}}|@{\\hspace*{3px}}c@{\\hspace*{3px}}|@{\\hspace*{3px}}c@{\\hspace*{3px}}|}\n\t\t\\hline\n\t\tModel (\\RTE against AutoAttack \\cite{CroceICML2020}) & \\RTE $\\downarrow$ & $\\lambda_{\\text{max}}$ & $\\frac{|\\lambda_{\\text{min}}|}{|\\lambda_{\\text{max}}|}$\\\\\n\t\t\\hline\n\t\t\\hline\n\t\tAT (baseline) & 62.8 & 1990 & 0.088\\\\\n\t\tScaled $\\times0.5$ & 62.8 & 7936 & 0.088\\\\\n\t\tScaled $\\times2$ & 62.8 & 505 & 0.088\\\\\n\t\t\\hline\n\t\tBatch size $8$ & 58.2 & 3132 & 0.027\\\\\n\t\tAdam & 57.5 & 540 & 0.047\\\\\n\t\t\\hline\n\t\tLabel smoothing & 61.2 & 2484 & 0.085\\\\\n\t\tSelf-supervision & 57.1 & 389 & 0.041\\\\\n\t\tEntropy-SGD & 58.6 & 5773 & 0.054\\\\\n\t\tTRADES & 56.7 & 947 & 0.089\\\\\n\t\tMART & 61.0 & 1285 & 0.087\\\\\n\t\tAT-AWP & 54.3 & 1200 & 0.241\\\\\n\t\t\\hline\n\t\\end{tabularx}\n\t\\vspace*{-6px}\n\t\\caption{\\textbf{Hessian Eigenvalue $\\lambda_{\\text{max}}$ and Convexity:} For the models from \\figref{fig:supp-visualization} and \\ref{fig:supp-visualization-li}, we report \\RTE against AutoAttack \\cite{CroceARXIV2020}, the maximum Hessian eigenvalue $\\lambda_{\\text{max}}$ and the convexity measure of \\cite{LiNIPS2018} computed as $\\nicefrac{|\\lambda_{\\text{min}}|}{|\\lambda_{\\text{max}}|}$. This fraction is supposed to quantify the degree of non-convexity around the found minimum. As can be seen, neither $\\lambda_{\\text{max}}$ nor convexity correlates well with adversarial robustness. Regarding $\\lambda_{\\text{max}}$, this is due to the Hessian eigenspectrum not being scale-invariant, as shown for scaled versions ($\\times0.5$ and $\\times2$) of our AT baseline.}\n\t\\label{tab:supp-convexity}\n\t\\vspace*{-6px}\n\\end{table}\n\n\\textbf{AutoAugment:}\n%\nIn \\cite{CubukARXIV2018}, an automatic procedure for finding data augmentation policies, called AutoAugment, is proposed. We use the found \\CifarT policy (\\cf \\cite{CubukARXIV2018}, appendix), which includes quite extreme augmentations. For example, large translations are possible, rendering the image nearly completely uniform and leaving only a few pixels at the border. In practice, AutoAugment usually prevents convergence and, thus, avoids overfitting. We further combine AutoAugment with CutOut \\cite{DevriesARXIV2017} (using random $16\\times 16$ ``cutouts''). We apply both AutoAugment and CutOut on top of our standard data augmentation, \\ie, random flipping and cropping. We use publicly available PyTorch implementations\\footnote{\\url{https://github.com/DeepVoltaire/AutoAugment}, \\url{https://github.com/uoguelph-mlrg/Cutout}}.\n\n\\textbf{Entropy-SGD:}\n%\n\\cite{ChaudhariICLR2017} explicitly encourages flatter minima by taking the so-called ``local'' entropy into account. As a result, Entropy-SGD not only finds ``deep'' minima (\\ie, low loss values) but also flat ones. In practice, this is done using nested SGD: the inner loop approximates the local entropy using stochastic gradient Langevin dynamics (SGLD), while the outer loop updates the weights; a simplified sketch of this nested structure is given below. The number of inner iterations is denoted by $L$. While the original work \\cite{ChaudhariICLR2017} uses $L$ in $[5, 20]$ on \\CifarT, we experiment with $L \\in \\{1, 2, 3, 5\\}$. 
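\n\nThe following is a heavily simplified sketch of one Entropy-SGD step; the hyper-parameter names and values are ours, for illustration only, and details such as the exact SGLD noise scaling follow \\cite{ChaudhariICLR2017} only approximately:\n\\begin{verbatim}\nimport torch\n\ndef entropy_sgd_step(model, loss_fn, batches, L,\n                     lr=0.05, sgld_lr=0.1,\n                     gamma=0.03, noise=1e-4,\n                     alpha=0.75):\n    # L SGLD iterations estimate the gradient of\n    # the local entropy via the running average\n    # mu of the inner iterates.\n    w = [p.detach().clone()\n         for p in model.parameters()]\n    mu = [w_l.clone() for w_l in w]\n    for i in range(L):\n        x, y = batches[i]\n        model.zero_grad()\n        loss_fn(model(x), y).backward()\n        for p, w_l, mu_l in zip(\n                model.parameters(), w, mu):\n            # SGLD step on the inner iterate.\n            p.data -= sgld_lr * (\n                p.grad + gamma * (p.data - w_l))\n            p.data += noise * torch.randn_like(p)\n            mu_l.mul_(alpha).add_(\n                p.data, alpha=1 - alpha)\n    for p, w_l, mu_l in zip(\n            model.parameters(), w, mu):\n        # Outer update: w <- w - lr*gamma*(w - mu).\n        p.data = w_l - lr * gamma * (w_l - mu_l)\n\\end{verbatim}\n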
Note that, for fair comparison, we train for $\\nicefrac{150}{L}$ epochs.\nFor details on the Entropy-SGD algorithm, we refer to \\cite{ChaudhariICLR2017}. Our implementation follows the official PyTorch implementation\\footnote{\\url{https://github.com/ucla-vision/entropy-sgd}}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\includegraphics[width=0.425\\textwidth]{plots_supp_understanding_incorrect2}\n\t\\vspace*{-6px}\n\t\\caption{\\textbf{Approaches to Handling Incorrect Examples:} We plot test \\RCE on all (solid), correctly classified (dotted) and incorrectly classified (dashed) examples throughout training. We consider our AT baseline ({\\color{plot0}light blue}), ignoring incorrectly classified training examples in the \\RCE computation during training ({\\color{plot1}dark blue}) and preventing label leaking by computing adversarial examples against the \\emph{predicted} labels during training ({\\color{plot2}rose}). However, these ``simple'' approaches to tackling the high \\RCE on incorrectly classified test examples are not successful in reducing robust overfitting. As outlined in the main paper, MART \\cite{WangICLR2020} ({\\color{plot3}green}) is able to dampen overfitting through an additional robust KL-loss weighted by confidence; see text.}\n\t\\label{fig:supp-ii-pll}\n\t\\vspace*{-6px}\n\\end{figure}\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_wd}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_ls}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_main_understanding_ablation_ln}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_bs}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_ssl}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_mart}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_trades}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.23\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_understanding_ablation_esgd}\n\t\\end{minipage}\n\t\\caption{\\textbf{Training Curves for Varying Hyper-Parameters:} We plot \\RCE for selected methods and hyper-parameters to demonstrate the impact of hyper-parameters on avoiding or reducing robust overfitting. Note that, for TRADES, we show both \\RCE on adversarial examples computed by maximizing the KL-divergence in \\eqnref{eq:supp-methods-trades} (solid) and on adversarial examples obtained by maximizing cross-entropy loss (``CE'', dotted).}\n\t\\label{fig:supp-training-ablation}\n\\end{figure*}\n\n\\textbf{Activation Functions:}\n%\nWe consider three recently proposed activation functions: SiLU \\cite{ElfwingNN2018}, MiSH \\cite{MisraBMVC2020} and GeLU \\cite{HendrycksARXIV2016}. 
These are defined as:\n\\begin{align}\n\t(\\text{SiLU})\\quad &x \\sigma(x)\\text{ with } \\sigma(x) = \\nicefrac{1}{(1 + \\exp(-x))},\\\\\n\t(\\text{MiSH})\\quad &x \\tanh(\\log(1 + \\exp(x))),\\\\\n\t(\\text{GeLU})\\quad &x \\sigma(1.702 x).\n\\end{align}\nAll of these activation functions can be seen as smooth versions of the ReLU activation.\nIn \\cite{SinglaARXIV2021}, some of these activation functions are argued to avoid robust overfitting due to lower curvature compared to ReLU.\n\n\\textbf{AT-AWP:}\n%\nAT with adversarial weight perturbations (AT-AWP) \\cite{WuNIPS2020} computes adversarial weight perturbations \\emph{on top} of adversarial examples to further regularize training. This is similar to our worst-case flatness measure of \\RCE; however, adversarial examples and adversarial weights are computed sequentially, not jointly, and only one iteration is used to compute adversarial weights. Specifically, after computing adversarial examples $\\tilde{x} = x + \\delta$,\nan adversarial weight perturbation $\\nu$ is computed by solving\n\\begin{align}\n\t\\max_{\\nu \\in B_\\xi(w)} \\mathcal{L}(f(\\tilde{x}; w + \\nu), y)\n\\end{align}\nwith $B_\\xi(w)$ as in \\eqnref{eq:supp-ball}, using one iteration of gradient ascent with a fixed step size of $\\xi$. The gradient is normalized per layer as in \\eqnref{eq:supp-normalization}.\nWe considered $\\xi \\in \\{0.0005, 0.001, 0.005, 0.01, 0.015, 0.02\\}$ and between $1$ and $7$ iterations and found that $\\xi = 0.01$ and $1$ iteration works best (similar to \\cite{WuNIPS2020}).\n\n\\textbf{TRADES:} \\cite{ZhangICML2019} proposes an alternative formulation of AT that allows a better trade-off between adversarial robustness and (clean) accuracy. The loss to be minimized is\n\\begin{align}\n\t\\begin{split}\n\t\t\\mathcal{L}&(f(x;w), y)\\\\\n\t\t&+ \\lambda \\max_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\text{KL}(f(x;w), f(x + \\delta;w)).\\label{eq:supp-methods-trades}\n\t\\end{split}\n\\end{align}\nDuring training, adversarial examples are computed by maximizing the KL-divergence (instead of cross-entropy loss), \\ie, using the second term in \\eqnref{eq:supp-methods-trades}. Commonly, $\\lambda = 6$ is chosen; however, we additionally tried $\\lambda \\in \\{1, 3, 6, 9\\}$; a simplified sketch of the resulting loss is given below. 
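\n\nThe following simplified sketch illustrates \\eqnref{eq:supp-methods-trades}; it assumes that \\texttt{x\\_adv} has already been computed by maximizing the KL term, and the official implementation differs in details:\n\\begin{verbatim}\nimport torch.nn.functional as F\n\ndef trades_loss(model, x, x_adv, y, lam=6.0):\n    logits = model(x)\n    logits_adv = model(x_adv)\n    # KL(f(x) || f(x_adv)): target is the clean\n    # distribution, input the adversarial one.\n    kl = F.kl_div(\n        F.log_softmax(logits_adv, dim=1),\n        F.softmax(logits, dim=1),\n        reduction='batchmean')\n    return F.cross_entropy(logits, y) + lam * kl\n\\end{verbatim}\n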
We follow the official implementation\\footnote{\\url{https://github.com/yaodongyu/TRADES}}.\n\n\\begin{figure*}\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_eps}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.12\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_0005p}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_ls03}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_ln03}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_cyc4}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_wd005}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.145\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_esgd1}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\n\t\\begin{minipage}[t]{0.12\\textwidth} \n\t\t\\hphantom{\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_gelu}}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_trades6}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_mart6}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_aa}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_silu}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.11\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_gelu}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.14\\textwidth}\n\t\t\\includegraphics[height=3.25cm]{plots_supp_flatness_epochs_correlation_joint_mish}\n\t\\end{minipage}\n\t\\vspace*{-6px}\n\t\\caption{\\textbf{Flatness Throughout Training:} Complementary to the main paper, we plot \\RCE against average-case flatness in \\RCE for selected methods throughout training epochs. \\red{Early epochs are shown in {\\color{blue!60!black}dark blue}, late epochs are shown in {\\color{red!60!black}dark red}.} For cyclic learning rate, we show $4$ cycles with a total of $120$ epochs. For many methods not avoiding robust overfitting, flatness decreases alongside an increase in \\RCE during overfitting. Using, \\eg, AutoAugment, label noise or Entropy-SGD, in contrast, both effects are reduced.}\n\t\\label{fig:supp-methods-flatness-epochs}\n\t\\vspace*{-6px}\n\\end{figure*}\n\n\\textbf{MART} \\cite{WangICLR2020} explicitly addresses the problem of incorrectly classified examples during training. 
First, the cross-entropy loss $\\mathcal{L}$ for training is replaced by a binary cross-entropy loss $\\mathcal{L}_{\\text{bin}}$, \\ie, classifying the correct class vs\\onedot the most-confident ``other'' class:\n\\begin{align}\n\t\\begin{split}\n\t\t\\mathcal{L}_{\\text{bin}}(f(x; w), y) =& -\\log(f_y(x;w))\\\\\n\t\t& - \\log(1 - \\max_{y' \\neq y} f_{y'}(x; w)).\n\t\\end{split}\n\\end{align}\nSecond, the KL-divergence used in TRADES in \\eqnref{eq:supp-methods-trades} is combined with a confidence-based weight:\n\\begin{align}\n\t\\begin{split}\n\t\t\\mathcal{L}_{\\text{bin}}&(f(\\tilde{x}; w), y)\\\\ &+ \\lambda\\, \\text{KL}(f(x;w), f(\\tilde{x}; w)) (1 - f_y(x; w)).\n\t\\end{split}\n\\end{align}\nAdversarial examples are still computed by maximizing the regular cross-entropy loss. We follow the official implementation\\footnote{\\url{https://github.com/YisenWang/MART}}. MART is successful in reducing robust overfitting on incorrectly classified examples, as shown in \\figref{fig:supp-ii-pll}.\n\n\\textbf{PGD-$\\tau$:}\n%\nIn \\cite{ZhangARXIV2020}, a variant of PGD is proposed for AT: PGD-$\\tau$ stops the maximization $\\tau$ iterations \\emph{after} the label has flipped. This is supposed to find ``friendlier'' adversarial examples that can be used for AT. Note that $\\tau = 0$ also does \\emph{not} compute adversarial examples on incorrectly classified training examples. We consider $\\tau \\in \\{0, 1, 2, 3\\}$.\n\n\\textbf{Self-Supervision:} Following \\cite{HendrycksNIPS2019}, we implement AT using rotation-prediction as an \\emph{additional} self-supervised task. Note, however, that no additional (unlabeled) training examples are used. Specifically, the following learning problem is tackled:\n\\begin{align}\n\t\\begin{split}\n\t\t\\max&_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(x + \\delta;w), y)\\\\\n\t\t&+ \\lambda \\max_{\\|\\delta\\|_\\infty \\leq \\epsilon} \\mathcal{L}(f(\\text{rot}(x + \\delta, r); w), y_{r})\\\\\n\t\t&r\\in\\{0, 90, 180, 270\\}, y_r \\in \\{0, 1, 2, 3\\},\n\t\\end{split}\n\\end{align}\nwhere $\\text{rot}(x, r)$ rotates the training example $x$ by $r$ degrees.\nIn practice, we split every batch in half: the first half uses the original training examples with correct labels. Examples in the second half are rotated randomly by $\\{0, 90, 180, 270\\}$ degrees, and the labels correspond to the rotation (\\ie, $\\{0, 1, 2, 3\\}$). Adversarial examples are computed against the true or rotation-based labels. Note that, in contrast to common practice \\cite{SohnARXIV2020}, we do \\emph{not} predict all four possible rotations in every batch, but just one, randomly drawn per example. We still use $150$ epochs in total. We consider $\\lambda \\in \\{0.5, 1, 2, 4, 8\\}$.\n \n\\textbf{Additional Unlabeled Examples:} As proposed in \\cite{CarmonNIPS2019,UesatoNIPS2019}, we also consider additional, pseudo-labeled examples during training. We use the provided pseudo-labeled data from \\cite{CarmonNIPS2019} and split each batch in half: using $50\\%$ original \\CifarT training examples, and $50\\%$ pseudo-labeled training examples from \\cite{CarmonNIPS2019}. We still use $150$ epochs in total. We follow the official PyTorch implementation\\footnote{\\url{https://github.com/yaircarmon/semisup-adv}}.\n\n\\subsection{Training Curves}\n\\label{sec:supp-methods-ablation}\n\n\\figref{fig:supp-training-ablation} shows (test) \\RCE throughout training for selected methods and hyper-parameters. Across all methods, we found that hyper-parameters have a large impact on robust overfitting. 
For example, weight decay or smaller batch sizes can reduce and delay robust overfitting considerably if regularization is ``strong'' enough, \\ie, large weight decay or low batch size (to induce more randomness). For the other methods, the differences between hyper-parameters are more subtle. However, across all cases, reduced overfitting generally goes hand in hand with higher \\RCE on training examples, \\ie, the robust generalization gap is reduced. This indicates that avoiding convergence on training examples plays an important role in avoiding robust overfitting.\n\nTraining curves for all methods are shown in \\figref{fig:supp-training-curves}.\n\n\\subsection{Flatness for Methods}\n\\label{sec:supp-methods-flatness}\n\n\\begin{figure*}[t]\n\\begin{minipage}{\\textwidth}\n\t\\centering\n\t\\vspace*{-0.2cm}\n\t\\hspace*{-0.4cm}\n\t\\begin{minipage}[t]{0.18\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_correlation_seq_loss_methods_lr}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.18\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\t\t\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_correlation_seq_loss_methods_ls}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.18\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\t\t\n\t\t\\includegraphics[width=\\textwidth]{plots_supp_flatness_correlation_seq_loss_methods_flat}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.26\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\t\t\n\t\t\\includegraphics[width=1\\textwidth]{plots_supp_flatness_correlation_seq_loss_methods_at}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.01\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\hspace*{4px}{\\color{black!75}\\rule{0.65px}{3.15cm}}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.18\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\t\t \n\t\t\\includegraphics[width=1\\textwidth]{plots_supp_flatness_correlation_joint_loss_methods_flat}\n\t\\end{minipage}\n\n\t\\vspace*{-6px}\n\t\\captionof{figure}{\\textbf{Robustness and Flatness for Varying Hyper-Parameters:} \\textbf{Left:} \\RCE (y-axis) plotted against average-case flatness of \\RCE (x-axis) for various groups of methods: learning rate schedules (left), label smoothing/noise and weight decay (middle left), weight clipping, Entropy-SGD and AT-AWP (middle right) as well as AT with self-supervision, MART and TRADES (right). As outlined in \\secref{sec:supp-methods}, we considered multiple hyper-parameter settings per method and show that favorable hyper-parameters in terms of adversarial robustness also result in improved flatness. That is, in most cases, varying hyper-parameters creates (roughly) a diagonal line in these plots. Interestingly, weight clipping can be considered an outlier: adversarial robustness improves while \\emph{average-case} flatness \\emph{worsens}. \\textbf{Right:} \\RCE (y-axis) plotted against \\emph{worst-case} flatness in \\RCE (x-axis). Here, flatness for weight clipping aligns well with \\RCE.}\n\t\\label{fig:supp-flatness-methods}\n\t\\vspace*{12px}\n\\end{minipage}\n\\end{figure*}\n\n\n\\textbf{Flatness Throughout Training:}\n%\n\\figref{fig:supp-methods-flatness-epochs} shows \\RCE (y-axis) plotted against average-case flatness in \\RCE (x-axis) throughout training, \\ie, over epochs ({\\color{blue!50!black}dark blue} to {\\color{red!50!black}dark red}), for methods not shown in the main paper. 
Strikingly, using higher $\\epsilon{=}\\nicefrac{9}{255}$ or alternative activation functions (SiLU \\cite{ElfwingNN2018}, GeLU \\cite{HendrycksARXIV2016} or MiSH \\cite{MisraBMVC2020}) affects neither robust overfitting nor flatness significantly. Interestingly, as discussed in the main paper, label smoothing avoids sharper minima during overfitting, but does \\emph{not} avoid an increased \\RCE. Methods that consistently reduce or avoid robust overfitting, \\eg, weight clipping, label noise, strong weight decay or AutoAugment, avoid both the increase in \\RCE and the worsening of flatness. Clearly, the observations from the main paper are confirmed: flatness usually worsens alongside \\RCE during robust overfitting.\n\n\\textbf{Flatness Across Hyper-Parameters:}\n%\nIn \\figref{fig:supp-flatness-methods}, we consider flatness when changing hyper-parameters of selected methods. As before, we plot \\RCE (y-axis) against average-case flatness in \\RCE (x-axis) for various groups of methods: learning rate schedules (first column), label smoothing/noise and weight decay (second column), methods explicitly improving flatness, \\ie, weight clipping, Entropy-SGD and AT-AWP (third column), as well as self-supervision, MART and TRADES (fourth column). Except for weight clipping, hyper-parameter settings with improved adversarial robustness also favor flatter minima. In most cases, this relationship follows a clear, diagonal line. For weight clipping, in contrast, the relationship is reversed: \\RCE improves while average-case flatness worsens. Thus, \\figref{fig:supp-flatness-methods} (fifth column) considers worst-case flatness in \\RCE. Here, ``stronger'' weight clipping improves both robustness \\emph{and} flatness. This supports our discussion in the main paper: methods need at least ``some kind'' of flatness, average- or worst-case, in order to improve adversarial robustness.\n\n\\section{Results in Tabular Form}\n\\label{sec:supp-results}\n\n\\tabref{tab:supp-table-error} and \\ref{tab:supp-table-loss} report the quantitative results from all our experiments. Besides flatness in \\RCE, we also report both average- and worst-case flatness in (clean) \\CE. As described in the main paper, we use $\\xi = 0.5$ for average-case flatness and $\\xi = 0.003$ for worst-case flatness. In \\tabref{tab:supp-table-error}, methods are sorted (in ascending order) by \\RTE against AutoAttack \\cite{CroceICML2020}. Additionally, we split all methods into four groups: \\colorbox{colorbrewer3!15}{good}, \\colorbox{colorbrewer5!15}{average}, \\colorbox{colorbrewer1!15}{poor} and \\colorbox{colorbrewer0!15}{worse} robustness at $57\\%$, $60\\%$ and $62.8\\%$ \\RTE. These thresholds correspond roughly to the $30\\%$ and $70\\%$ percentiles of all methods with $\\RTE \\leq 62.8\\%$. As our AT baseline obtains $62.8\\%$ \\RTE, we group all methods with higher \\RTE than $62.8\\%$ in \\colorbox{colorbrewer0!15}{worse} robustness. In \\tabref{tab:supp-table-loss}, methods are sorted (in ascending order) by \\RCE against PGD. 
Finally, \\tabref{tab:supp-table-robustbench-error} and \\ref{tab:supp-table-robustbench-loss} report \\RTE and \\RCE, together with our average- and worst-case flatness (of \\RCE) measures for the evaluated, pre-trained models from RobustBench \\cite{CroceARXIV2020b}.\n\n\\newpage\n\\clearpage\n\\begin{figure*}\n\\begin{minipage}{\\textwidth}\n\t\\centering\n\t\\scriptsize\n\t{\n\t\\begin{tabularx}{\\textwidth}{|X|c|c|c||c|c|c||c|c|}\n\t\t\\hline\n\t\tModel & \\multicolumn{3}{c||}{\\bfseries Test Robustness} & \\multicolumn{3}{c||}{\\bfseries Train Robustness} & \\multicolumn{2}{c|}{\\bfseries Flatness}\\\\\n\t\t\\hline\n\t\t(sorted by \\RCE on AA) & \\TE & \\RTE & \\RTE & \\TE & \\RTE & \\RTE & Avg & Worst\\\\\n\t\t(PGD = PGD-$20$, $10$ restarts) & (test) & (test) & (test) & (train) & (train) & (train) & \\RCE & \\RCE\\\\\n\t\t(AA = AutoAttack \\cite{CroceARXIV2020}) && (PGD) & (AA) && (PGD) & (AA) &&\\\\\n\t\t\\hline\n\t\t\\hline\n\t\tCarmon et al. [16] & 10.31 & 37.6 & 40.8 & 1.93 & 16.8 & 19.2 & 0.7 & 0.34\\\\\n\t\tEngstrom et al. [36] & 12.97 & 45.3 & 49.2 & 6.71 & 33.1 & 36.3 & 0.23 & 0.51\\\\\n\t\tPang et al. [93] & 14.87 & 36.6 & 45.8 & 7.79 & 20.5 & 28.6 & 0.08 & 0.07\\\\\n\t\tWang [238] & 12.5 & 37.1 & 42.8 & 8.07 & 24.8 & 32.1 & 0.61 & 0.34\\\\\n\t\tWong et al. [131] & 16.66 & 54.4 & 57.6 & 11.86 & 44.9 & 49.2 & 0.3 & 0.16\\\\\n\t\tWu et al. [133] & 14.64 & 41.5 & 43.9 & 2.2 & 14.5 & 16.5 & 0.49 & 0.09\\\\\n\t\tZhang et al. [148] & 15.08 & 44.1 & 46.4 & 7.83 & 29.9 & 33.6 & 0.61 & 0.43\\\\\n\t\tZhang et al. [149] & 15.48 & 43 & 47.2 & 4.85 & 26.3 & 30.1 & 0.51 & 0.13\\\\\n\t\t\\hline\n\t\\end{tabularx} \n\t}\n\t\\vspace*{-8px}\n\t\\captionof{table}{\\textbf{RobustBench \\cite{CroceARXIV2020b}: \\TE, \\RTE and Flatness in \\RCE:} \\TE and \\RTE on train and test examples as well as average- and worst-case flatness in \\RCE for pre-trained models from RobustBench. In contrast to \\tabref{tab:supp-table-error}, the RobustBench models were obtained using early stopping.}\n\t\\label{tab:supp-table-robustbench-error}\n\t\\vspace*{12px}\n\\end{minipage}\n\\begin{minipage}{\\textwidth}\n\t\\centering\n\t\\scriptsize\n\t{\n\t\\begin{tabularx}{\\textwidth}{|X|c|c|c||c|c|c||c|c|}\n\t\t\\hline\n\t\tModel & \\multicolumn{3}{c||}{\\bfseries Test Robustness} & \\multicolumn{3}{c||}{\\bfseries Train Robustness} & \\multicolumn{2}{c|}{\\bfseries Flatness}\\\\\n\t\t\\hline\n\t\t(sorted by \\RCE on AA) & \\CE & \\RCE & \\RCE & \\CE & \\RCE & \\RCE & Avg & Worst\\\\\n\t\t(PGD = PGD-$20$, $10$ restarts) & (test) & (test) & (test) & (train) & (train) & (train) & \\RCE & \\RCE\\\\\n\t\t(AA = AutoAttack \\cite{CroceARXIV2020}) && (PGD) & (AA) && (PGD) & (AA) &&\\\\\n\t\t\\hline\n\t\t\\hline\n\t\tCarmon et al. [16] & 0.53 & 1.02 & 0.63 & 0.36 & 0.62 & 0.41 & 0.7 & 0.34\\\\\n\t\tEngstrom et al. [36] & 0.44 & 1.25 & 0.59 & 0.29 & 0.82 & 0.41 & 0.23 & 0.51\\\\\n\t\tPang et al. [93] & 1.84 & 1.98 & 1.86 & 1.8 & 1.91 & 1.8 & 0.08 & 0.07\\\\\n\t\tWang [128] & 0.64 & 1.11 & 0.73 & 0.54 & 0.9 & 0.6 & 0.61 & 0.34\\\\\n\t\tWong et al. [131] & 0.57 & 1.37 & 0.73 & 0.46 & 1.11 & 0.61 & 0.3 & 0.16\\\\\n\t\tWu et al. [133] & 0.63 & 1.13 & 0.72 & 0.37 & 0.61 & 0.41 & 0.49 & 0.09\\\\\n\t\tZhang et al. [148] & 0.55 & 1.19 & 0.66 & 0.39 & 0.83 & 0.48 & 0.61 & 0.43\\\\\n\t\tZhang et al. 
[149] & 0.85 & 1.27 & 0.93 & 0.71 & 1.01 & 0.76 & 0.51 & 0.13\\\\\n\t\t\\hline\n\t\\end{tabularx} \n\t}\n\t\\vspace*{-8px}\n\t\\captionof{table}{\\textbf{RobustBench \\cite{CroceARXIV2020b}: \\emph{\\CE, \\RCE} and Flatness in \\RCE:} \\CE and \\RCE on train and test examples as well as average- and worst-case flatness in \\RCE for pre-trained models from RobustBench. In contrast to \\tabref{tab:supp-table-loss}, the RobustBench models were obtained using early stopping.}\n\t\\label{tab:supp-table-robustbench-loss}\n\t\\vspace*{-6px}\n\\end{minipage}\n\\end{figure*}\n\\FloatBarrier\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_eps_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_eps_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_i5_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_i5_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_i14_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_i14_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ii_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ii_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_pll_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_pll_error}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_0005p_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_0005p_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ls03_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ls03_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ln03_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ln03_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_late_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_late_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_const_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_const_error}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\t\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_cyc4_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_c
urves_cyc4_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t \n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_lr02_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_lr02_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_b16_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_b16_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_b512_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_b512_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_nowd_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_nowd_error}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_wd005_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_wd005_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_esgd1_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_esgd1_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_tau0_loss}\n\t\t\n\t\t\\hspace*{-0.15cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_tau0_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_trades6_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_trades6_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_mart6_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_mart6_error}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_aa_loss}\n\t\t\n\t\t\\hspace*{-0.15cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_aa_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_500k_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_500k_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_silu_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_silu_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_gelu_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_gelu_error}\n\t\\end{minipage}\n\t\\b
egin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_mish_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_mish_error}\n\t\\end{minipage}\n\t\\\\[2.5px]\n\t\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_dropout_loss}\n\t\t\n\t\t\\hspace*{-0.15cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_dropout_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ssl4_loss}\n\t\t\n\t\t\\hspace*{-0.25cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_ssl4_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.19\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_awit001_loss}\n\t\t\n\t\t\\hspace*{-0cm}\n\t\t\\includegraphics[height=1.8cm]{plots_supp_training_curves_awit001_error}\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.01\\textwidth}\n\t\t\\vspace*{0px}\n\t\t\n\t\t\\hfill\n\t\\end{minipage}\n\t\\begin{minipage}[t]{0.38\\textwidth}\n\t\t\\vspace*{4px}\n\t\t\n\t\t\\caption{\\textbf{Training Curves:} Test and train \\RCE (top) and \\RTE (bottom), including \\RTE for early stopping, for all considered methods with selected hyper-parameters. \\textbf{*} Train and test \\RCE correspond to the attacks used during training, \\eg, PGD-$\\tau$ or maximizing KL-divergence for TRADES. \\textbf{$\\dagger$} Reported \\RCE corresponds to \\RCE on adversarial examples \\emph{without} adversarial weights.}\n\t\t\\label{fig:supp-training-curves}\n\t\\end{minipage}\n\\end{figure*}\n\\begin{table*}[t]\n\t\\centering\n\t\\vspace*{-0.5cm}\n\t\\scriptsize\n\t{\n\t\\begin{tabularx}{\\textwidth}{|X|c|c|c||c|c|c||c|c||c|c|c|c|}\n\t\t\\hline\n\t\tModel & \\multicolumn{3}{c||}{\\bfseries Test Robustness} & \\multicolumn{3}{c||}{\\bfseries Train Robustness} & \\multicolumn{2}{c||}{\\bfseries Early Stopping} & \\multicolumn{4}{c|}{\\bfseries Flatness}\\\\\n\t\t\\hline\n\t\t(sorted by \\RTE on AA) & \\TE & \\RTE & \\RTE & \\TE & \\RTE & \\RTE & \\RTE & \\RTE & Avg & Worst & Avg & Worst\\\\\n\t\t(PGD = PGD-$20$, $10$ restarts) & (test) & (test) & (test) & (train) & (train) & (train) & (stop) & (stop) & \\CE & \\CE & \\bfseries\\RCE & \\bfseries\\RCE\\\\\n\t\t(AA = AutoAttack \\cite{CroceARXIV2020}) && (PGD) & (AA) && (PGD) & (AA) & (PGD) & (AA) &&&&\\\\\n\t\t\\hline\n\t\t\\hline\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} +Unlabeled & 16.96 & 45.9 & 48.9 & 12.6 & 38.6 & 43.2 & 45.3 & 48.9 & 0.12 & 4.64 & 0.32 & 1.2\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Cyclic $\\times2$ & 19.66 & 51.2 & 53.6 & 7.64 & 32.3 & 35.4 & 51 & 53.6 & 0.09 & 3.93 & 0.35 & 1.5\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} AutoAugment & 16.89 & 49.5 & 54.0 & 12.25 & 42.8 & 47.9 & 49.5 & 53.5 & 0.13 & 15.01 & 0.49 & 0.69\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} AT-AWP $\\xi{=}0.01$ & 21.4 & 50.7 & 54.3 & 13.52 & 37.4 & 43.1 & 48.9 & 53.6 & 0.12 & 6.17 & 0.35 & 2.68\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} AT-AWP $\\xi{=}0.005$ & 20.05 & 52.5 & 55 & 7.34 & 28.1 & 31.8 & 50.8 & 53.3 & 0.15 & 6.98 & 0.54 & 4.46\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Label noise $\\tau{=}0.4$ & 20.56 & 52.4 & 55 & 9.66 & 32.8 & 36.8 & 51.2 & 54.8 & 0.11 & 3.95 & 0.21 & 0.96\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} TRADES 
$\\lambda{=}9$ & 23.03 & 52.4 & 55 & 2.92 & 16.4 & 18.8 & 49.7 & 53 & 0.19 & 5.04 & 0.45 & 3.08\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Cyclic $\\times3$ & 20.04 & 53.1 & 55.2 & 5.62 & 26.9 & 30.6 & 53.1 & 55.2 & 0.1 & 4.1 & 0.53 & 0.93\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Cyclic & 22.42 & 53.2 & 55.4 & 13.09 & 39.5 & 43.5 & 53.2 & 55.4 & 0.07 & 2.6 & 0.22 & 0.41\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Label noise $\\tau{=}0.5$ & 22.71 & 51.3 & 55.4 & 15.04 & 40.4 & 45.5 & 51.3 & 55.4 & 0.09 & 0.43 & 0.16 & 0.13\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Label noise $\\tau{=}0.3$ & 19.9 & 54.2 & 56.2 & 5.47 & 26.9 & 30 & 51.8 & 55.5 & 0.15 & 3.37 & 0.33 & 0.93\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Weight clipping $w_{max}{=}0.005$ & 21.39 & 54.1 & 56.5 & 10.19 & 35.6 & 39 & 54.1 & 56.5 & 0.74 & 10.49 & 0.41 & 4.58\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} TRADES $\\lambda{=}6$ & 21.68 & 55.3 & 56.7 & 1.74 & 13.5 & 15.8 & 50.1 & 53.4 & 0.21 & 5.12 & 0.57 & 2.26\\\\\n\t\t\\rowcolor{colorbrewer3!15}\\hspace*{2px} Cyclic $\\times4$ & 19.85 & 55.2 & 56.9 & 4.01 & 23.1 & 26 & 55.1 & 56.9 & 0.16 & 6.65 & 0.62 & 0.8\\\\\n\t\t\\hline\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Self-supervision $\\lambda{=}4$ & 17.1 & 55.3 & 57.1 & 5.76 & 41.9 & 45 & 55.3 & 56.8 & 0.12 & 5.59 & 0.34 & 2.64\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Adam & 25.84 & 53.9 & 57.5 & 18.87 & 47.9 & 52.3 & 53.9 & 57.5 & 0.22 & 2.65 & 0.56 & 0.9\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Entropy-SGD ($L{=}2$) & 24.53 & 54.4 & 57.6 & 9.03 & 35.4 & 38.8 & 52.6 & 55.2 & 0.08 & 1.76 & 0.27 & 0.7\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Self-supervision $\\lambda{=}1$ & 15.9 & 56.9 & 58.1 & 1.48 & 28.3 & 31.6 & 55.9 & 57.5 & 0.12 & 6.98 & 0.46 & 3.87\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Weight decay $0.05$ & 19.32 & 56.2 & 58.1 & 5.03 & 29 & 32.8 & 52 & 54.8 & 0.12 & 5.77 & 0.51 & 3.94\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Batch size $8$ & 17.73 & 57.1 & 58.2 & 3.46 & 26.8 & 31.4 & 55.6 & 58.2 & 0.32 & 24.01 & 1.55 & 12.27\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Entropy-SGD ($L{=}1$) & 25.42 & 56 & 58.6 & 12.79 & 42.8 & 46.1 & 53.2 & 56.9 & 0.09 & 3.24 & 0.28 & 1.8\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Self-supervision $\\lambda{=}0.5$ & 16.16 & 58 & 58.6 & 1.26 & 28 & 30.7 & 56.7 & 58.3 & 0.1 & 6.48 & 0.45 & 3.29\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} AT-AWP $\\xi{=}0.001$ & 18.75 & 57.3 & 58.7 & 1.34 & 15.1 & 18.3 & 52.1 & 54.6 & 0.34 & 20.42 & 1.44 & 13.82\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Self-supervision $\\lambda{=}2$ & 15.72 & 57.4 & 58.7 & 2.47 & 33.4 & 36.6 & 55.8 & 57.7 & 0.1 & 21.79 & 0.47 & 3.47\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} MART $\\lambda{=}9$ & 22.06 & 57 & 58.8 & 3.86 & 16 & 22 & 50 & 55 & 0.18 & 8.08 & 0.7 & 3.42\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Weight decay $0.01$ & 18.52 & 57.2 & 58.9 & 2.06 & 20.1 & 23.2 & 51.7 & 55.3 & 0.25 & 16.46 & 0.9 & 7.19\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Batch size $16$ & 18.12 & 58.3 & 59 & 1.82 & 20.4 & 24.5 & 52.5 & 55.6 & 0.33 & 22.11 & 1.41 & 11.39\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Self-supervision $\\lambda{=}8$ & 19.6 & 56.6 & 59 & 12.08 & 50 & 53.3 & 56.6 & 58.6 & 0.11 & 3.59 & 0.29 & 1.76\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} TRADES $\\lambda{=}3$ & 20.51 & 57.7 & 59.1 & 0.94 & 13.4 & 15.5 & 52.3 & 54.9 & 0.2 & 19.08 & 
0.71 & 3.48\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Weight decay $0.005$ & 18.79 & 58.2 & 59.4 & 2.03 & 20.2 & 23.9 & 51.8 & 54.3 & 0.26 & 19.67 & 1.2 & 8.35\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Label noise $\\tau{=}0.2$ & 19.45 & 57.5 & 59.5 & 2.34 & 18.8 & 22.2 & 50.2 & 53 & 0.18 & 9.79 & 0.39 & 1.4\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} MART $\\lambda{=}3$ & 20.89 & 58.9 & 59.6 & 1.94 & 14.4 & 19.2 & 53.3 & 57.4 & 0.17 & 10.53 & 1.01 & 3.99\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Weight clipping $w_{max}{=}0.01$ & 19.15 & 58 & 59.6 & 3.28 & 21.5 & 24.8 & 56.7 & 58.5 & 0.66 & 15.1 & 0.26 & 7.41\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} Learning rate $0.2$ & 19.17 & 58.3 & 59.7 & 0.46 & 9.4 & 12.4 & 54.3 & 56.6 & 0.2 & 24.41 & 1.44 & 5.75\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} MiSH & 19.29 & 58.9 & 59.8 & 0.06 & 4.5 & 5.3 & 51.8 & 53.7 & 0.25 & 10.04 & 1.58 & 3.55\\\\\n\t\t\\rowcolor{colorbrewer5!15}\\hspace*{2px} ``Late'' multi-step & 20.63 & 58.5 & 59.8 & 1.6 & 16.4 & 18.4 & 54.2 & 57.8 & 0.17 & 5.24 & 0.81 & 2.96\\\\\n\t\t\\hline\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} SiLU & 19.45 & 59.7 & 60 & 0.07 & 4.8 & 5.6 & 51.3 & 53.7 & 0.3 & 9.97 & 1.68 & 4.2\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight averaging ($\\tau{=}0.9975$) & 19.63 & 59.7 & 60 & 0.19 & 7.9 & 10 & 50.5 & 53 & 0.23 & 15.66 & 1.29 & 6\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight clipping $0.025$ & 18.91 & 59.2 & 60.4 & 0.73 & 12.5 & 15.6 & 52.1 & 54.9 & 0.32 & 17.4 & 0 & 8.61\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Batch size $32$ & 18.72 & 59.6 & 60.5 & 0.56 & 12 & 14.6 & 53.7 & 55.6 & 0.18 & 19.34 & 1.22 & 7.88\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Entropy-SGD ($L{=}3$) & 24 & 58.5 & 60.5 & 5.25 & 29.9 & 33.9 & 56.7 & 59.3 & 0.09 & 2.91 & 0.33 & 1.03\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Label noise $\\tau{=}0.1$ & 19.39 & 59 & 60.8 & 1.12 & 14.1 & 17.5 & 51.9 & 55 & 0.2 & 16.75 & 0.69 & 3.55\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Larger $\\epsilon{=}9/255$ & 21.3 & 60.4 & 60.9 & 0.47 & 8.9 & 11.1 & 51.3 & 53.8 & 0.21 & 10.26 & 1.34 & 5.85\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Label smoothing $\\tau{=}0.1$ & 19.55 & 59.6 & 61 & 0.2 & 6.4 & 8.5 & 52.5 & 55 & 0.26 & 8.87 & 0.85 & 2.66\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} MART $\\lambda{=}6$ & 21.51 & 58.7 & 61 & 3.21 & 16.1 & 20.8 & 49.2 & 54.7 & 0.18 & 13.52 & 0.74 & 3.17\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight averaging ($\\tau{=}0.98$) & 20.01 & 60.6 & 61 & 0.2 & 7.6 & 9.9 & 54.3 & 56.3 & 0.23 & 12.8 & 1.37 & 5.6\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight decay $0.001$ & 19.47 & 59.9 & 61 & 0.36 & 10.4 & 13.3 & 52 & 54.8 & 0.24 & 8.36 & 1.3 & 6.78\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Batch size $64$ & 19.06 & 60.5 & 61.1 & 0.3 & 9.2 & 11.1 & 51.2 & 54.4 & 0.18 & 23.13 & 1.14 & 5.96\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} GeLU & 20.64 & 60.8 & 61.1 & 0.01 & 2.7 & 3.2 & 54.9 & 56.7 & 0.23 & 14.31 & 1.56 & 4.13\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Label smoothing $\\tau{=}0.3$ & 19.41 & 59.4 & 61.2 & 0.27 & 5.7 & 8 & 51.1 & 54 & 0.29 & 18.42 & 0.65 & 2.72\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} MART $\\lambda{=}1$ & 20.51 & 59.4 & 61.2 & 1.04 & 11.4 & 14.7 & 50.3 & 55.4 & 0.17 & 7.97 & 0.87 & 3.1\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight averaging ($\\tau{=}0.99$) & 20.41 & 60.3 & 61.4 & 0.19 & 7.8 & 
9.6 & 51.7 & 54.2 & 0.22 & 6.12 & 1.44 & 4.98\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Dropout & 18.91 & 60.5 & 61.6 & 0.58 & 13 & 16.7 & 51.2 & 54.5 & 0.2 & 13.81 & 1.52 & 7.01\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} PGD-14 & 20.8 & 60.6 & 61.6 & 0.22 & 7.1 & 9.3 & 53.6 & 56.1 & 0.27 & 20.9 & 1.48 & 5.35\\\\\n\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Entropy-SGD ($L{=}5$) & 23.48 & 59.5 & 61.7 & 3.01 & 22.2 & 25.9 & 53.2 & 56.6 & 0.1 & 3.57 & 0.46 & 1.49\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Ignore incorrect & 18.4 & 60.5 & 61.8 & 0.06 & 6.3 & 9 & 54.4 & 56.4 & 0.21 & 14.65 & 1.68 & 5.93\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Learning rate $0.1$ & 19.23 & 61.1 & 61.9 & 0.26 & 8.9 & 11.5 & 51.9 & 54.2 & 0.21 & 17.63 & 1.23 & 5.26\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} TRADES $\\lambda{=}1$ & 17.54 & 59.5 & 61.9 & 0.15 & 16.6 & 20.7 & 56.6 & 59.6 & 0.16 & 12.68 & 0.78 & 4.3\\\\\n\t\t\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Weight averaging ($\\tau{=}0.985$) & 20.27 & 61.7 & 62.3 & 0.18 & 7.4 & 9.4 & 55.9 & 58 & 0.22 & 15.66 & 1.35 & 6.51\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Label smoothing $\\tau{=}0.2$ & 20.07 & 60.2 & 62.4 & 0.26 & 5.1 & 7.8 & 51.9 & 54.6 & 0.28 & 9.94 & 0.69 & 2.61\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} Prevent label leaking & 18.38 & 62.1 & 62.4 & 0.38 & 8.6 & 10.8 & 55.3 & 57.7 & 0.22 & 14.62 & 1.48 & 6\\\\\n\t\t\\rowcolor{colorbrewer1!15}\\hspace*{2px} AT (baseline) & 20.2 & 61 & 62.8 & 0.33 & 8.5 & 10.7 & 52.3 & 54.6 & 0.21 & 21.05 & 1.22 & 6.49\\\\\n\t\t\\hline\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Const learning rate $0.05$ & 24.96 & 60.7 & 62.9 & 6.17 & 32.9 & 37.8 & 55.4 & 58.9 & 0.09 & 3.52 & 0.44 & 0.9\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} PGD-5 & 20.22 & 61.8 & 62.9 & 0.11 & 7.3 & 10.4 & 55.1 & 57.4 & 0.17 & 20.4 & 1.24 & 4.19\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Batch size $256$ & 20.86 & 62.6 & 63.3 & 0.28 & 8.2 & 10.3 & 56.9 & 58.4 & 0.3 & 11.22 & 1.35 & 8.33\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} PGD-$7$-$3$ & 17.17 & 61.7 & 63.3 & 0.08 & 19.5 & 25.2 & 51.3 & 58.8 & 0.17 & 7.4 & 1.08 & 5.29\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Batch size $512$ & 22.58 & 62.9 & 63.5 & 0.64 & 11 & 14.2 & 58.6 & 60 & 0.48 & 23.97 & 1.92 & 16.22\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Learning rate $0.01$ & 22.83 & 63 & 63.5 & 1.05 & 15.2 & 18 & 57.8 & 59.7 & 0.56 & 23.42 & 2.25 & 16.02\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} No weight decay & 23.37 & 64.8 & 65.7 & 0.23 & 9.2 & 12.7 & 57.1 & 60.3 & 0.66 & 21.05 & 2.53 & 11.75\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} PGD-$7$-$0$ & 14.67 & 63.8 & 65.7 & 0.09 & 23.7 & 30 & 59.4 & 61.4 & 0.11 & 6.86 & 1.28 & 2.8\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} PGD-$7$-$2$ & 16.19 & 63.6 & 65.9 & 0.1 & 20.9 & 28.1 & 58.3 & 62.3 & 0.14 & 22.81 & 1.21 & 2.55\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} PGD-$7$-$1$ & 15.02 & 64.1 & 67.1 & 0.11 & 25.9 & 34.3 & 58.8 & 63.8 & 0.13 & 11.71 & 1.15 & 2.33\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Const learning rate $0.01$ & 25.87 & 66.7 & 67.4 & 0.67 & 18.5 & 20.7 & 58.4 & 61 & 0.33 & 15.09 & 1.37 & 8.27\\\\\n\t\t\\rowcolor{colorbrewer0!15}\\hspace*{2px} Const learning rate $0.005$ & 27.24 & 68.3 & 69.2 & 0.42 & 15.5 & 16.7 & 61.1 & 65.5 & 0.59 & 20.63 & 2.06 & 15.74\\\\\n\t\t\\hline\n\t\\end{tabularx}\n\t}\n\t\\vspace*{-8px}\n\t\\caption{\\textbf{Results: \\TE, \\RTE and Flatness in 
\\CE and \\RCE.} \\TE and \\RTE (PGD-$20$ and AutoAttack \\cite{CroceICML2020}) on test and train examples, together with average- and worst-case flatness in (clean) \\CE and \\RCE. Methods sorted by (test) \\RTE against AutoAttack and split into \\colorbox{colorbrewer3!15}{good}, \\colorbox{colorbrewer5!15}{average}, \\colorbox{colorbrewer1!15}{poor} and \\colorbox{colorbrewer0!15}{worse} robustness at $57\\%$, $60\\%$ and $62.8\\%$ \\RTE, see text.}\n\t\\label{tab:supp-table-error}\n\t\\vspace*{-6px}\n\\end{table*}\n\\begin{table*}[t]\n\t\\centering\n\t\\vspace*{-0.5cm}\n\t\\scriptsize\n\t{\n\t\\begin{tabularx}{\\textwidth}{|X|c|c|c||c|c|c||c|c|}\n\t\t\\hline\n\t\tModel & \\multicolumn{3}{c||}{\\bfseries Test Robustness} & \\multicolumn{3}{c||}{\\bfseries Train Robustness} & \\multicolumn{2}{c|}{\\bfseries Early Stopping}\\\\\n\t\t\\hline\n\t\t(sorted by \\RCE on PGD) & \\CE & \\RCE & \\RCE & \\CE & \\RCE & \\RCE & \\RCE & \\RCE\\\\\n\t\t(PGD = PGD-$20$, $10$ restarts) & (test) & (test) & (test) & (train) & (train) & (train) & (stop) & (stop)\\\\\n\t\t(AA = AutoAttack \\cite{CroceARXIV2020}) && (PGD) & (AA) && (PGD) & (AA) & (PGD) & (AA)\\\\\n\t\t\\hline\n\t\t\\hline\n\t\t+Unlabeled & 0.57 & 1.18 & 0.67 & 0.47 & 0.94 & 0.56 & 1.18 & 0.67\\\\\n\t\tAutoAugment & 0.58 & 1.3 & 0.71 & 0.48 & 1.08 & 0.61 & 1.3 & 0.71\\\\\n\t\tAT-AWP $\\xi{=}0.01$ & 0.7 & 1.31 & 0.81 & 0.55 & 0.99 & 0.62 & 1.3 & 0.81\\\\\n\t\tCyclic & 0.68 & 1.41 & 0.8 & 0.49 & 0.97 & 0.58 & 1.41 & 0.8\\\\\n\t\tAdam & 0.8 & 1.46 & 0.89 & 0.66 & 1.19 & 0.74 & 1.45 & 0.89\\\\\n\t\tWeight clipping $w_{max}{=}0.005$ & 0.77 & 1.48 & 0.91 & 0.53 & 0.99 & 0.62 & 1.47 & 0.9\\\\\n\t\tTRADES $\\lambda{=}9$ & 0.77 & 1.52 & 0.9 & 0.33 & 0.58 & 0.37 & 1.42 & 0.9\\\\\n\t\tLabel noise $\\tau{=}0.4$ & 0.93 & 1.55 & 1.05 & 0.71 & 1.15 & 0.8 & 1.5 & 1.05\\\\\n\t\tCyclic $\\times2$ & 0.6 & 1.55 & 0.74 & 0.32 & 0.76 & 0.42 & 1.55 & 0.74\\\\\n\t\tAT-AWP $\\xi{=}0.005$ & 0.59 & 1.57 & 0.74 & 0.29 & 0.66 & 0.38 & 1.36 & 0.74\\\\\n\t\tSelf-supervision $\\lambda{=}8$ & 0.59 & 1.58 & 0.76 & 0.43 & 1.24 & 0.62 & 1.57 & 0.76\\\\\n\t\tEntropy-SGD ($L{=}2$) & 0.72 & 1.59 & 0.83 & 0.4 & 0.87 & 0.5 & 1.44 & 0.83\\\\\n\t\tLabel noise $\\tau{=}0.5$ & 1.12 & 1.59 & 1.22 & 1 & 1.39 & 1.08 & 1.59 & 1.22\\\\\n\t\tEntropy-SGD ($L{=}1$) & 0.77 & 1.59 & 0.87 & 0.5 & 1.06 & 0.6 & 1.44 & 0.87\\\\\n\t\tLabel noise $\\tau{=}0.3$ & 0.78 & 1.62 & 0.94 & 0.45 & 0.91 & 0.55 & 1.47 & 0.94\\\\\n\t\tWeight decay $0.05$ & 0.61 & 1.65 & 0.78 & 0.28 & 0.73 & 0.39 & 1.33 & 0.78\\\\\n\t\tSelf-supervision $\\lambda{=}4$ & 0.51 & 1.68 & 0.71 & 0.25 & 1.02 & 0.45 & 1.62 & 0.71\\\\\n\t\tWeight clipping $w_{max}{=}0.01$ & 0.62 & 1.71 & 0.83 & 0.22 & 0.61 & 0.31 & 1.59 & 0.83\\\\\n\t\tTRADES $\\lambda{=}6$ & 0.7 & 1.74 & 0.86 & 0.2 & 0.44 & 0.25 & 1.4 & 0.86\\\\\n\t\tCyclic $\\times3$ & 0.6 & 1.75 & 0.74 & 0.24 & 0.65 & 0.34 & 1.59 & 0.74\\\\\n\t\tEntropy-SGD ($L{=}3$) & 0.69 & 1.84 & 0.85 & 0.26 & 0.72 & 0.38 & 1.57 & 0.85\\\\\n\t\tSelf-supervision $\\lambda{=}2$ & 0.47 & 1.86 & 0.69 & 0.13 & 0.8 & 0.33 & 1.53 & 0.69\\\\\n\t\t\n\t\tLabel noise $\\tau{=}0.2$ & 0.68 & 1.89 & 0.9 & 0.22 & 0.63 & 0.32 & 1.4 & 0.9\\\\\n\t\tCyclic $\\times4$ & 0.6 & 1.9 & 0.78 & 0.2 & 0.57 & 0.28 & 1.44 & 0.78\\\\\n\t\tConst learning rate $0.05$ & 0.75 & 1.92 & 0.89 & 0.31 & 0.81 & 0.43 & 1.54 & 0.88\\\\\n\t\tSelf-supervision $\\lambda{=}0.5$ & 0.48 & 2.01 & 0.71 & 0.09 & 0.67 & 0.27 & 1.6 & 0.71\\\\\n\t\tEntropy-SGD ($L{=}5$) & 0.71 & 2.06 & 0.89 & 0.18 & 0.57 & 0.28 & 1.5 & 0.86\\\\\n\t\tSelf-supervision 
$\\lambda{=}1$ & 0.48 & 2.08 & 0.72 & 0.09 & 0.67 & 0.27 & 1.63 & 0.7\\\\\n\t\tLabel smoothing $\\tau{=}0.3$ & 0.77 & 2.12 & 1.02 & 0.15 & 0.47 & 0.21 & 1.46 & 0.97\\\\\n\t\tTRADES $\\lambda{=}3$ & 0.65 & 2.16 & 0.85 & 0.1 & 0.34 & 0.17 & 1.42 & 0.83\\\\\n\t\tBatch size $8$ & 0.56 & 2.22 & 0.78 & 0.17 & 0.64 & 0.3 & 1.86 & 0.76\\\\\n\t\tLabel noise $\\tau{=}0.1$ & 0.63 & 2.22 & 0.87 & 0.1 & 0.42 & 0.19 & 1.37 & 0.86\\\\\n\t\tWeight decay $0.01$ & 0.58 & 2.23 & 0.78 & 0.12 & 0.47 & 0.22 & 1.35 & 0.78\\\\\n\t\tLabel smoothing $\\tau{=}0.2$ & 0.71 & 2.26 & 0.98 & 0.09 & 0.35 & 0.14 & 1.44 & 0.89\\\\\n\t\t\n\t\tMART $\\lambda{=}9$ & 0.7 & 2.36 & 0.93 & 0.24 & 0.48 & 0.3 & 1.41 & 0.93\\\\\n\t\tWeight clipping $0.025$ & 0.57 & 2.37 & 0.81 & 0.06 & 0.32 & 0.14 & 1.39 & 0.81\\\\\n\t\tWeight decay $0.005$ & 0.58 & 2.44 & 0.8 & 0.11 & 0.46 & 0.23 & 1.43 & 0.76\\\\\n\t\tLabel smoothing $\\tau{=}0.1$ & 0.64 & 2.48 & 0.88 & 0.04 & 0.24 & 0.09 & 1.43 & 0.8\\\\\n\t\tMART $\\lambda{=}6$ & 0.71 & 2.58 & 0.93 & 0.21 & 0.45 & 0.27 & 1.4 & 0.93\\\\\n\t\t``Late'' multi-step & 0.66 & 2.63 & 0.87 & 0.09 & 0.36 & 0.17 & 1.47 & 0.87\\\\\n\t\tTRADES $\\lambda{=}1$ & 0.56 & 2.68 & 0.81 & 0.04 & 0.38 & 0.18 & 1.74 & 0.74\\\\\n\t\tMART $\\lambda{=}3$ & 0.69 & 2.71 & 0.89 & 0.14 & 0.38 & 0.21 & 1.48 & 0.89\\\\\n\t\tAT-AWP $\\xi{=}0.001$ & 0.62 & 2.71 & 0.84 & 0.08 & 0.34 & 0.16 & 1.35 & 0.78\\\\\n\t\tBatch size $16$ & 0.57 & 2.78 & 0.83 & 0.1 & 0.46 & 0.22 & 1.63 & 0.74\\\\\n\t\tLearning rate $0.01$ & 0.72 & 2.94 & 0.96 & 0.07 & 0.34 & 0.16 & 1.74 & 0.85\\\\\n\t\tMART $\\lambda{=}1$ & 0.7 & 2.99 & 0.94 & 0.08 & 0.29 & 0.15 & 1.44 & 0.94\\\\\n\t\tPGD-$7$-$3$ & 0.55 & 3.03 & 0.8 & 0.04 & 0.48 & 0.19 & 1.7 & 0.73\\\\\n\t\tPGD-$7$-$2$ & 0.54 & 3.2 & 0.79 & 0.03 & 0.48 & 0.21 & 2.12 & 0.71\\\\\n\t\tPGD-$7$-$1$ & 0.51 & 3.3 & 0.75 & 0.04 & 0.61 & 0.25 & 2.43 & 0.68\\\\\n\t\tBatch size $512$ & 0.77 & 3.41 & 1.04 & 0.05 & 0.27 & 0.12 & 1.86 & 0.85\\\\\n\t\tPGD-$7$-$0$ & 0.49 & 3.43 & 0.74 & 0.03 & 0.58 & 0.21 & 2.48 & 0.65\\\\\n\t\tDropout & 0.66 & 3.44 & 0.9 & 0.04 & 0.3 & 0.13 & 1.44 & 0.76\\\\\n\t\t\n\t\tConst learning rate $0.01$ & 0.89 & 3.46 & 1.07 & 0.04 & 0.43 & 0.17 & 1.67 & 0.97\\\\\n\t\tWeight decay $0.001$ & 0.69 & 3.52 & 0.92 & 0.03 & 0.24 & 0.1 & 1.41 & 0.77\\\\\n\t\tBatch size $32$ & 0.67 & 3.54 & 0.91 & 0.04 & 0.27 & 0.11 & 1.69 & 0.73\\\\\n\t\tBatch size $64$ & 0.69 & 3.62 & 0.93 & 0.03 & 0.22 & 0.09 & 1.44 & 0.76\\\\\n\t\tConst learning rate $0.005$ & 0.97 & 3.65 & 1.24 & 0.04 & 0.35 & 0.14 & 1.62 & 1.12\\\\\n\t\tMiSH & 0.69 & 3.65 & 0.92 & 0.01 & 0.1 & 0.04 & 1.45 & 0.73\\\\\n\t\tLearning rate $0.1$ & 0.7 & 3.65 & 0.94 & 0.02 & 0.19 & 0.09 & 1.39 & 0.79\\\\\n\t\tLearning rate $0.2$ & 0.72 & 3.66 & 0.97 & 0.03 & 0.21 & 0.1 & 1.67 & 0.75\\\\\n\t\tLarger $\\epsilon{=}9/255$ & 0.78 & 3.69 & 1.01 & 0.03 & 0.19 & 0.09 & 1.45 & 0.79\\\\\n\t\tPrevent label leaking & 0.67 & 3.76 & 0.91 & 0.02 & 0.21 & 0.08 & 1.74 & 0.7\\\\\n\t\tSiLU & 0.71 & 3.81 & 0.96 & 0.01 & 0.11 & 0.04 & 1.47 & 0.73\\\\\n\t\tPGD-14 & 0.76 & 3.84 & 1.05 & 0.02 & 0.15 & 0.07 & 1.52 & 0.78\\\\\n\t\tWeight averaging ($\\tau{=}0.9975$) & 0.73 & 3.88 & 1 & 0.02 & 0.17 & 0.08 & 1.34 & 0.81\\\\\n\t\tAT (baseline) & 0.75 & 3.91 & 0.99 & 0.02 & 0.19 & 0.08 & 1.53 & 0.77\\\\\n\t\tWeight averaging ($\\tau{=}0.98$) & 0.75 & 3.94 & 1.05 & 0.02 & 0.16 & 0.08 & 1.96 & 0.8\\\\\n\t\tWeight averaging ($\\tau{=}0.985$) & 0.75 & 3.95 & 1 & 0.02 & 0.16 & 0.07 & 1.94 & 0.77\\\\\n\t\tIgnore incorrect & 0.71 & 3.99 & 0.96 & 0.01 & 0.16 & 0.07 & 1.65 & 
0.7\\\\\n\t\tWeight averaging ($\\tau{=}0.99$) & 0.76 & 3.99 & 1.07 & 0.02 & 0.18 & 0.07 & 1.53 & 0.74\\\\\n\t\tBatch size $256$ & 0.79 & 4.02 & 1.07 & 0.02 & 0.18 & 0.08 & 1.74 & 0.81\\\\\n\t\tNo weight decay & 0.9 & 4.17 & 1.15 & 0.02 & 0.22 & 0.1 & 1.55 & 0.91\\\\\n\t\tPGD-5 & 0.77 & 4.22 & 0.99 & 0.01 & 0.17 & 0.08 & 1.62 & 0.77\\\\\n\t\tGeLU & 0.82 & 4.27 & 1.06 & 0 & 0.06 & 0.02 & 1.7 & 0.79\\\\\n\t\t\\hline\n\t\\end{tabularx} \n\t}\n\t\\vspace*{-8px}\n\t\\caption{\\textbf{Results: \\emph{\\CE} and \\emph{\\RCE}.} \\CE and \\RCE (PGD-$20$ and AutoAttack \\cite{CroceICML2020}) on test and train examples corresponding to the results in \\tabref{tab:supp-table-error}.}\n\t\\label{tab:supp-table-loss}\n\t\\vspace*{-6px}\n\\end{table*}", "meta": {"hexsha": "498ff8f78da2298b02119ccab7e5593ff44a4fd4", "size": 81814, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/supp_main.tex", "max_stars_repo_name": "davidstutz/iccv2021-robust-flatness", "max_stars_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-08T21:27:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T19:09:04.000Z", "max_issues_repo_path": "paper/supp_main.tex", "max_issues_repo_name": "davidstutz/iccv2021-robust-flatness", "max_issues_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/supp_main.tex", "max_forks_repo_name": "davidstutz/iccv2021-robust-flatness", "max_forks_repo_head_hexsha": "d63daf8fc0221d07d8cfc8b7a5bcdc213403a17b", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 75.4046082949, "max_line_length": 1545, "alphanum_fraction": 0.6933898844, "num_tokens": 31615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929104825007, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5580256447001717}}
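The robust test errors in the tables above are measured with PGD-$20$ using $10$ random restarts, besides AutoAttack. As a rough illustration of how such an $L_\infty$ PGD evaluation loop is typically structured, here is a minimal sketch, assuming a PyTorch classifier \texttt{model} in eval mode with inputs in $[0,1]$; the helper name \texttt{pgd\_linf} and the defaults (\texttt{eps}, \texttt{alpha}) are illustrative assumptions, not the exact attack configuration behind the tables:
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=20, restarts=10):
    # Keep the worst (highest-loss) perturbed example found per input
    # across all random restarts; hyper-parameters are assumptions.
    worst_x = x.clone()
    worst_loss = torch.full((x.shape[0],), -float('inf'), device=x.device)
    for _ in range(restarts):
        # Random start inside the eps-ball.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y, reduction='none')
            grad = torch.autograd.grad(loss.sum(), delta)[0]
            with torch.no_grad():
                # Signed gradient ascent step, then project back onto
                # the eps-ball and the valid image range [0, 1].
                delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
                delta = (x + delta).clamp(0, 1) - x
        with torch.no_grad():
            loss = F.cross_entropy(model(x + delta), y, reduction='none')
            better = loss > worst_loss
            worst_loss = torch.where(better, loss, worst_loss)
            worst_x[better] = (x + delta)[better]
    return worst_x  # robust error = fraction misclassified on worst_x
\end{lstlisting}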
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{multicol}\n\\usepackage{listings}\n\\usepackage{verbatim}\n\\usepackage{color}\n\\usepackage{geometry}\n\\usepackage{float}\n\\usepackage{amsmath}\n\n\\usepackage{pdflscape}\n\\usepackage{hyperref}\n\\setlength{\\belowcaptionskip}{-10pt}\n\\setlength{\\abovecaptionskip}{-30pt}\n\\floatstyle{boxed} \n\\restylefloat{figure}\n\\usepackage{graphicx}\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.5,0.5,0.5}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.95,0.95,0.92}\n\n\\lstdefinestyle{mystyle}{\n\tbackgroundcolor=\\color{backcolour},   \n\tcommentstyle=\\color{codegreen},\n\tkeywordstyle=\\color{blue},\n\tnumberstyle=\\tiny\\color{codegray},\n\tstringstyle=\\color{codepurple},\n\tbasicstyle=\\footnotesize,\n\tbreakatwhitespace=false,         \n\tbreaklines=true,                 \n\tcaptionpos=b,                    \n\tkeepspaces=true,                 \n\tnumbers=left,                    \n\tnumbersep=5pt,                  \n\tshowspaces=false,                \n\tshowstringspaces=false,\n\tshowtabs=false,                  \n\ttabsize=2\n}\n\n\\lstset{style=mystyle}\n\\title{Data Mining\\\\\n\t\tHome work 07\\\\Association rules, }\n\\author{Aqeel Labash\\\\ \\textbf{Lecturer:} Jaak Vilo}\n\\date{23 March 2016}\n\n\\geometry{\n\ta4paper,\n\ttotal={170mm,257mm},\n\tleft=10mm,\n\ttop=5mm,\n}\n\\begin{document}\n\t\\maketitle\n\\section*{First Question}\nFor this task I used \\textbf{arule} library. Specifically the functions \\textbf{apriori, eclat}.\\\\\n\\textbf{apriori:} get list of rules.\\\\\n\\textbf{eclat:} get list of item sets.\\\\\nboth of them need optional  parameters (support , minimum length , only apriori :confidence ).\\\\\nOnly apriori need appearance as optional option which determine lhs or rhs for the rules.\\\\\nThe code I used is here :\n\\begin{lstlisting}[language=R]\n###First Question ####\nlibrary(arules)\nsupermarket = read.transactions('supermarket.txt',format = 'basket',sep=\" \")\ntim = proc.time()\nrules = apriori(supermarket,parameter = list(minlen=2,supp = 0.01,conf=0.05))#,conf=0.5))\nprint (proc.time()-tim)\ntim = proc.time()\nitmset = eclat(supermarket,parameter = list(supp = 0.01, maxlen = 15))\nprint (proc.time()-tim)\ninspect(itmset)\nhead(inspect(rules))\nhead(inspect(itmset))\n?apriori\n?eclat\n\\end{lstlisting}\nI selected a low confidence to get more rules because raising it will decrease them (choose least number came to my mind).\\\\\nHere is the head of rules :\\\\\n\\begin{tabular}{|c*{6}{|c}}\n\t\\hline\n&lhs&rhs&support&confidence&lift\\\\\n\\hline\n1&{14914}&{5330}&0.01111024&0.89240506&4.568581\\\\\n\\hline\n2&{5330}&{14914}&0.01111024&0.05687777&4.568581\\\\\n\\hline\n3&{12562}&{5330}&0.01623198&0.86736842&4.440408\\\\\n\\hline\n4&{5330}&{12562}&0.01623198&0.08309802&4.440408\\\\\n\\hline\n5&{11995}&{5330}&0.01591679&0.76226415&3.902337\\\\\n\\hline\n6&{5330}&{11995}&0.01591679&0.08148447&3.902337\\\\\n\\hline\n\\end{tabular} \\\\ \\\\\nHere is the head of item sets :\\\\ \\\\\n\\begin{tabular}{|c*{3}{|c}}\n\t\\hline\n&items&support\\\\\n\t\\hline\n1&{14914,5330}&0.01111024\\\\\n\t\\hline\n2&{12562,5330}&0.01623198\\\\\t\\hline\n3&{11995,5330}&0.01591679\\\\\t\\hline\n4&{6385,9108}&0.01193759\\\\\t\\hline\n5&{5330,6385}&0.01012529\\\\\t\\hline\n6&{4037,9108}&0.01079505\\\\ \t\\hline\n\\end{tabular}\n\nApriori used 0.156 sec.\neclat used 0.158 sec.\n\\section*{Second Question}\nFor this task I used the following 
code to get the top of every list:\\\\\n\\begin{lstlisting}[language=R]\nhigh.support<- sort(rules, decreasing = TRUE, na.last = NA, by = \"support\")\nhigh.confidence<- sort(rules, decreasing = TRUE, na.last = NA, by = \"confidence\")\nhigh.lift<- sort(rules, decreasing = TRUE, na.last = NA, by = \"lift\")\n\\end{lstlisting}\nAfter that I used this code to build the contingency matrix for every rule. It is kind of a straightforward solution (brute force).\n\\begin{lstlisting}[language=R]\n\n#### Second Question #####\nhigh.support<- sort(rules, decreasing = TRUE, na.last = NA, by = \"support\")[1:10,]\nhigh.confidence<- sort(rules, decreasing = TRUE, na.last = NA, by = \"confidence\")[1:10,]\nhigh.lift<- sort(rules, decreasing = TRUE, na.last = NA, by = \"lift\")[1:10,]\nlst<-read.csv('supermarket.txt',header = FALSE,sep=\" \")\n\nFindAllInfoV2 <- function(rule, dataset){\n# Extract the left and right hand sides of the rule\nlhs.tbl <- itemInfo(lhs(rule))[which(as(lhs(rule), \"matrix\")[1, ] == 1), ]\nrhs.tbl <- itemInfo(rhs(rule))[which(as(rhs(rule), \"matrix\")[1, ] == 1), ]\nTP = 0\nTN= 0 \nFP =0\nFN = 0\n\nfor(i in seq_len(nrow(dataset)))\n{\n#left hand side present?\nl <- sum(lhs.tbl %in% dataset[i,])\nr <- sum(rhs.tbl %in% dataset[i,])\nl <- l>=length(lhs.tbl)\nr <- r>=length(rhs.tbl)\nif (l)\n{\n#right hand side also present\nif (r)\n{\nTP<-TP+1\n}\nelse\n{\nFN<-FN+1\n}\n}\n#left hand side absent\nelse\n{\n#but right hand side present\nif (r)\n{\nFP<-FP+1\n}\n#right hand side also absent\nelse\n{\nTN<-TN+1\n}\n}\n}\nleftside =0\nif (length(lhs.tbl)>1)\n{\nleftside = paste(lhs.tbl, collapse = ',')\n}\nelse\n{\nleftside = strtoi(lhs.tbl, base = 0L)\n}\nreturn (c(quality(rule)[1],quality(rule)[2],quality(rule)[3],left =leftside,right =strtoi(rhs.tbl, base = 0L),F11= TP,F10=FN,F01=FP,F00=TN))\n}\ndfsupport<- data.frame()\nfor (i in seq_len(length(high.support)))\n{\ndfsupport<-rbind(dfsupport,FindAllInfoV2(high.support[i],lst))\n}\n\ndfconfidence<- data.frame()\nfor (i in seq_len(length(high.confidence)))\n{\ndfconfidence<-rbind(dfconfidence,FindAllInfoV2(high.confidence[i],lst))\n}\ndflift<-data.frame()\nfor (i in seq_len(length(high.lift)))\n{\ndflift<-rbind(dflift,FindAllInfoV2(high.lift[i],lst))\n}\ndfsupport\ndfconfidence\ndflift\n\\end{lstlisting}\nThe previous code builds 3 tables for the rules with the top support, lift and confidence among the rules we found.\\\\\nSupport table:\\\\\n\\begin{tabular}{|l|*{9}{c|}}\n\t\\hline\nN&support&confidence&lift&left&right&F11&F10&F01&F00\\\\\n\\hline\n1&0.06973446&0.3569988&1.626812&5330&9108&1396&3562&4174&23428\\\\\n\\hline\n2&0.06973446&0.3177738&1.626812&9108&5330&1396&4174&3562&23428\\\\\n\\hline\n3&0.02935151&0.3755040&1.711139&13973&9108&449&1535&5121&25455\\\\\n\\hline\n4&0.02935151&0.1337522&1.711139&9108&13973&449&5121&1535&25455\\\\\n\\hline\n5&0.02907572&0.2940239&1.339841&11217&9108&538&1972&5032&25018\\\\\n\\hline\n6&0.02907572&0.1324955&1.339841&9108&11217&538&5032&1972&25018\\\\\n\\hline\n7&0.02718462&0.2749004&1.407326&11217&5330&357&2153&4601&25449\\\\\n\\hline\n8&0.02718462&0.1391690&1.407326&5330&11217&357&4601&2153&25449\\\\\n\\hline\n9&0.02671184&0.2833264&1.291093&14155&9108&377&2016&5193&24974\\\\\n\\hline\n10&0.02671184&0.1217235&1.291093&9108&14155&377&5193&2016&24974\\\\\n\\hline\n\\end{tabular}\\\\\\\\\nThe Confidence table:\\\\ 
\n\\begin{tabular}{|l|*{9}{c|}}\n\t\\hline\nN&support&confidence&lift&left&right&F11&F10&F01&F00\\\\\n\\hline\n1&0.01111024&0.8924051&4.568581&14914&5330&137&179&4821&27423\\\\\n\\hline\n2&0.01623198&0.8673684&4.440408&12562&5330&308&167&4650&27435\\\\\n\\hline\n3&0.01591679&0.7622642&3.902337&11995&5330&275&255&4683&27347\\\\\n\\hline\n4&0.01296194&0.5007610&2.281924&13973,5330&9108&125&178&5445&26812\\\\\n\\hline\n5&0.01296194&0.4416107&2.260783&13973,9108&5330&125&324&4833&27278\\\\\n\\hline\n6&0.02194469&0.4258410&1.940520&3723&9108&407&901&5163&26089\\\\\n\\hline\n7&0.01386810&0.4141176&1.887098&4185&9108&226&624&5344&26366\\\\\n\\hline\n8&0.02450556&0.4005151&1.825112&3423&9108&499&1054&5071&25936\\\\\n\\hline\n9&0.01028288&0.3782609&1.723702&11217,5330&9108&101&256&5469&26734\\\\\n\\hline\n10&0.02935151&0.3755040&1.711139&13973&9108&449&1535&5121&25455\\\\\n\\hline\n\\end{tabular}\\\\ \\\\ \nThe top lift table : \\\\\n\\begin{tabular}{|l|*{9}{c|}}\n\t\\hline\nN&support&confidence&lift&left&right&F11&F10&F01&F00\\\\\\hline\n1&0.01111024&0.89240506&4.568581&14914&5330&137&179&4821&27423\\\\\\hline\n2&0.01111024&0.05687777&4.568581&5330&14914&137&4821&179&27423\\\\\\hline\n3&0.01623198&0.86736842&4.440408&12562&5330&308&167&4650&27435\\\\\\hline\n4&0.01623198&0.08309802&4.440408&5330&12562&308&4650&167&27435\\\\\\hline\n5&0.01591679&0.76226415&3.902337&11995&5330&275&255&4683&27347\\\\\\hline\n6&0.01591679&0.08148447&3.902337&5330&11995&275&4683&255&27347\\\\\\hline\n7&0.01063746&0.20642202&3.373731&3723&3423&267&1041&1286&29966\\\\\\hline\n8&0.01063746&0.17385705&3.373731&3423&3723&267&1286&1041&29966\\\\\\hline\n9&0.01036167&0.20107034&2.572363&3723&13973&107&1201&1877&29375\\\\\\hline\n10&0.01036167&0.13256048&2.572363&13973&3723&107&1877&1201&29375\\\\\\hline\n\\end{tabular}\n\\section*{Third Question}\nFor this task I think actually Jaccard measurement could help. 
\\(\\zeta = \\frac{P(A\\cap B)}{P(A)+P(B)-P(A\\cap B)}\\). With this measurement we can know how well the rule predicts a correct answer.\nTo calculate the Jaccard value I used the following code:\n\n\\begin{lstlisting}[language=R]\n###### Third Question #####\n\ncalculatelaplace<-function(thedata)\n{\n#Jaccard = f11/(f1plus+fplus1-f11)\n#fplus1= f11+f01\n#f1plus= f11+f10\nthedata$Jaccard <- thedata$F11/(thedata$F11+thedata$F01+thedata$F10)\nreturn (thedata)\n}\ndfsupport<-calculatelaplace(dfsupport)\ndflift<-calculatelaplace(dflift)\ndfconfidence<-calculatelaplace(dfconfidence)\nfor (i in seq_len(10))\n{\n\nprint (c(i,dfsupport$Jaccard[i],dfconfidence$Jaccard[i],dflift$Jaccard[i]))\n}\n\\end{lstlisting}\nIn the following table we can see the Jaccard measurement for the previous 3 lists (linked by row number):\\\\\n\\begin{tabular}{|l|*{4}{c|}}\\hline\nNumber&Sup\\_Jac&Confid\\_Jac&Lift\\_Jac\\\\ \\hline\n1&0.15286903&0.02666926&0.02666926\\\\\\hline\n2&0.15286903&0.06009756&0.02666926\\\\\\hline\n3&0.06319493&0.05275273&0.06009756\\\\\\hline\n4&0.06319493&0.02174669&0.06009756\\\\\\hline\n5&0.07133386&0.02366528&0.05275273\\\\\\hline\n6&0.07133386&0.06289600&0.05275273\\\\\\hline\n7&0.05020391&0.03648692&0.10292984\\\\\\hline\n8&0.05020391&0.07533213&0.10292984\\\\\\hline\n9&0.04969681&0.01733608&0.03359498\\\\\\hline\n10&0.04969681&0.06319493&0.03359498\\\\ \\hline\n\\end{tabular}\n\\section*{Fourth Question}\nActually there are so many things coming to my mind.\n\\begin{enumerate}\n\t\\item We can see what people buy in general at a specific time.\n\t\\item What are the most bought items in a certain area.\n\t\\item Most bought items together.\n\t\\item Detect the approximate area for a customer (home, work) depending on their usual buying location and products.\n\t\\item Find what type of area is around the shop depending on the most bought items (residential, companies, sports ...).\n\t\\item Specify the customer's status (married, single) depending on frequent products bought.\n\t\\item Specify the customer's gender depending on frequent items bought.\n\\end{enumerate}\n\\section*{Fifth Question}\nIn this task I picked the first rule in the support table, whose contingency table looks like the following: \\\\\n\\begin{tabular}{|l|*{3}{c|}}\n\t\\hline\n&9108&not 9108 \\\\ \\hline\n5330&F11=1396&F10 = 3562\\\\ \\hline\nnot 5330&F01=4174&F00= 23428 \\\\ \\hline\n\\end{tabular}\n\\textbf{Total:} 32560 transactions. Now to calculate \\(P(A|B)\\) we use the formula:\\[P(A|B)=\\frac{P(A\\cap B)}{P(B)}\\] where \\(A\\) is \"5330\" and \\(B\\) is \"9108\".\n\\[P(B) = \\frac{1396+4174}{32560},\\quad P(A\\cap B)=\\frac{1396}{32560}\\Longrightarrow P(A|B) = \\frac{\\frac{1396}{32560}}{\\frac{5570}{32560}}=\\frac{1396}{5570} \\simeq 0.25\\]\nTo calculate \\(P(B|A)\\) we can use Bayes' rule. First we describe how Bayes' rule is derived, after that we use it.\n\\[P(A|B) = \\frac{P(A\\cap B)}{P(B)}\\Longrightarrow P(A\\cap B) = P(A|B)P(B) \\]\n\\[P(B|A) = \\frac{P(A\\cap B)}{P(A)}\\Longrightarrow P(A\\cap B) = P(B|A)P(A) \\]\nFrom the previous two formulas we get: \n\\[P(A|B)P(B) = P(B|A)P(A) \\Longrightarrow P(B|A) = \\frac{P(A|B)P(B)}{P(A)}\\]\nwhich is Bayes' rule.\n\\[P(A) = \\frac{1396+3562}{32560} = \\frac{4958}{32560}\\]\n\\[P(B|A) = \\frac{P(A|B)P(B)}{P(A)} = \\frac{0.25 \\times 5570}{4958}\\simeq 0.28 \\]\n\\section*{Sixth Question}\nFor this task I did the following:\n\\begin{enumerate}\n\t\\item I tried to run it but it didn't work, so I built it again following the commands in 
\\href{https://courses.cs.ut.ee/MTAT.03.183/2016_spring/uploads/Main/krimp_compilation.txt}{this link}.\n\t\\item Copied the krimp binary to the bin folder where all the configuration files exist.\n\t\\item Changed the datadir.conf folders to fit Linux.\n\t\\item Changed fic.conf to point to the supermarket (conf) file instead of compress (conf).\n\t\\item Modified fic.user.conf so the application can use up to 4GB of RAM instead of 1GB.\n\t\\item Copied supermarket.txt to the dataset folder and changed the suffix to dat.\n\t\\item Modified convertdb.conf: changed the dbName value to supermarket. After that I executed krimp with convertdb.conf to convert my file to a db, with the command krimp convertdb.conf.\n\t\\item I replicated the compress file, changed the file name, and uncommented dataType = bai32, because uint8 and bm128 weren't able to handle more than 128 items (our database has more than 15k items).\n\t\\item I did many experiments with the number of threads, but all the threads worked on the same core, so I changed it to a single thread.\n\t\\item I built a Python script to get a specific number of items (plus the items of one extra line).\n\t\\item I used 127 and got about 133 items. I started the application at 12:40 AM on 25 Mar, which didn't work.\n\\end{enumerate}\nHere is the Python code I used to generate the new file:\n\\begin{lstlisting}[language=Python]\nimport numpy as np\n#read file\nf = open('supermarket.txt')\nlines = f.readlines()\n\n#Set around max number of items\nbreakpoint = 127\nt = set()\n\ndef AddElements(line):\n    elements = line.split(\" \")\n    for element in elements:\n        t.add(element.strip())\n\n#Shuffle the lines to get random lines.\nnp.random.shuffle(lines)\noutputfile = []\n\n#Write selected lines to file\ndef WriteToFile():\n    output = open('supermarket.dat', 'w')\n    for l in outputfile:\n        output.write(l)\n\n#Add lines until the item limit is passed\nfor line in lines:\n    outputfile.append(line)\n    AddElements(line)\n    if len(t) > breakpoint:\n        break\n\n#Write lines to file\nWriteToFile()\n\\end{lstlisting}\nThe next morning I woke up to this error:\n\\begin{lstlisting}\n** Processing conf: 'fic.conf'\n* Verbosity:\t\t2\n* Max Mem Usage:\t4096mb\n* Priority:\t\tOpzij, opzij, opzij!\nMaak plaats, maak plaats, maak plaats!\nWij hebben ongelovelijke haast!\n\n** Database :: \n* File:\t\tsupermarket.db\n* Database:\t\t19t 19r, 154i, 1066.07bits\n* \t\t\tpruned below support 0, maximum set length 39\n* Alphabet:\t\t133 items\n* Internal datatype:\t32bit bitmap array\n\n** ItemSetCollection ::\n* Mining:\t\tStoring chunk #177             \n!! Run-time fatal exception:\n! WriteItemSet - Error writing ItemSet\n\\end{lstlisting}\nIt took about 60 GB and left about 137 MB of hard disk space.\n\\begin{figure}[H]\n\\includegraphics[scale=1]{size.png}\n\\end{figure}\n\\begin{figure}[H]\n\t\\includegraphics[scale=1]{space.png}\n\\end{figure}\nThe previous try took about 9 hours, from around 12 to 8:49.\nI tried to play with the parameters but nothing worked out, at least for me.\\\\\n\\section*{Sixth Question Another try}\nI got help from Lisa Yankovskaya and Faiz Ali Shah about the configuration. I noticed that I used the wrong parameter for minimum support (support count). I updated that value to 325 (0.01 * 32560), which is the same as I used for the \\textbf{apriori} and \\textbf{eclat} functions. 
Using 325 gave me about 80 candidates where apriori gave about 90 rules. I kept modifying the minimum support till I got 90 at a minimum support of 284.\\\\\nAfter that I executed ./krimp analysdb.confg, which generated the analysis file, and here is the header:\n\\begin{lstlisting}\n** KRIMP database analysis **\n\n* General information\n\nNumber of rows:\t\t\t25382\nHas bin sizes:\t\t\tno\nNumber of items:\t\t207515\nAverage row length:\t\t8.18\nAlphabet length:\t\t15699\nStandard DB size:\t\t2375775.82\nCurrent data type:\t\tbit\n\n* Alphabet\n\nValue\t\tCount\t\t\t\tStdLength\n0=>9108\t\t  5570 (21.9%)\t\t\t5.219\n1=>5330\t\t  4958 (19.5%)\t\t\t5.387\n2=>11217\t\t  2510 (9.9%)\t\t\t6.369\n3=>14155\t\t  2393 (9.4%)\t\t\t6.438\n4=>13973\t\t  1984 (7.8%)\t\t\t6.709\n5=>7893\t\t  1847 (7.3%)\t\t\t6.812\n6=>14754\t\t  1656 (6.5%)\t\t\t6.969\n7=>3423\t\t  1553 (6.1%)\t\t\t7.062\n8=>7595\t\t  1513 (6.0%)\t\t\t7.100\n9=>2508\t\t  1370 (5.4%)\t\t\t7.243\n\\end{lstlisting}\nWe can notice that 9108 is the highest one in both apriori \\& krimp, and the first rule is made of the first elements in krimp.\\\\\nHopefully that is what was wanted from this question.\\\\\nLinks to the files (.R, .ipython, .py, .tex, .pdf) can be found \\href{https://github.com/aqeel13932/DM/tree/master/HW07}{here}.\n\\begin{center}\n\\textbf{E.O.F}\n\\end{center}\n\n\\end{document}", "meta": {"hexsha": "ba36bdac41755ee5b1cdd11a1baf582e8411dac3", "size": 15752, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HW07/HW07.tex", "max_stars_repo_name": "aqeel13932/DM", "max_stars_repo_head_hexsha": "acf47c79d43ded6eb58d03f325c6d660d572ae6e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW07/HW07.tex", "max_issues_repo_name": "aqeel13932/DM", "max_issues_repo_head_hexsha": "acf47c79d43ded6eb58d03f325c6d660d572ae6e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW07/HW07.tex", "max_forks_repo_name": "aqeel13932/DM", "max_forks_repo_head_hexsha": "acf47c79d43ded6eb58d03f325c6d660d572ae6e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.5438596491, "max_line_length": 419, "alphanum_fraction": 0.7203529711, "num_tokens": 5769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7799929002541068, "lm_q1q2_score": 0.5580256373825332}}
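All of the measures used in this homework (support, confidence, lift, Jaccard, and the conditional probabilities of the fifth question) can be read directly off a rule's contingency counts $F_{11}$, $F_{10}$, $F_{01}$, $F_{00}$. Here is a minimal Python sketch of those formulas, using the counts of the first rule in the support table; the variable names are illustrative, and small discrepancies with the apriori output are expected since the counting code above is approximate:
\begin{lstlisting}[language=Python]
# Contingency counts of the first rule in the support table:
# A = "5330" on the left-hand side, B = "9108" on the right.
F11, F10, F01, F00 = 1396, 3562, 4174, 23428
N = F11 + F10 + F01 + F00          # 32560 transactions in total

p_a  = (F11 + F10) / N             # P(A)
p_b  = (F11 + F01) / N             # P(B)
p_ab = F11 / N                     # P(A and B) = support of the rule

conf_a_to_b = p_ab / p_a           # P(B|A) = 1396/4958 ~ 0.28
conf_b_to_a = p_ab / p_b           # P(A|B) = 1396/5570 ~ 0.25
lift = p_ab / (p_a * p_b)          # symmetric in A and B
jaccard = F11 / (F11 + F10 + F01)  # same formula as calculatelaplace()

print(p_ab, conf_a_to_b, conf_b_to_a, lift, jaccard)
\end{lstlisting}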
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{exercises}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% --------------------------------------------------------------------------------------------\n\\section*{Exercise 2.8 Position keyword in {\\tt ::Indices}}\n\n\\begin{cadabra}\n   {a,b,c}::Indices(position=free).\n\n   foo := A_{a b} + A^{a b}.                       # cdb (ex-0208.101,foo)\n\n   substitute (foo, $A_{a b} -> B_{a b}$)          # cdb (ex-0208.102,foo)\n\n   {p,q,r}::Indices(position=fixed).\n\n   foo := A_{p q} B^{p q} + A^{p q} B_{p q}.       # cdb (ex-0208.201,foo)\n\n   canonicalise (foo)                              # cdb (ex-0208.202,foo)\n\n   {u,v,w}::Indices(position=independent).\n\n   foo := A_{u v} B^{u v} + A^{u v} B_{u v}.       # cdb (ex-0208.301,foo)\n\n   canonicalise (foo)                              # cdb (ex-0208.302,foo)\n\n\\end{cadabra}\n\n\\begin{align*}\n   \\cdb{ex-0208.101} &= \\Cdb{ex-0208.102}\\\\[5pt]\n   \\cdb{ex-0208.201} &= \\Cdb{ex-0208.202}\\\\[5pt]\n   \\cdb{ex-0208.301} &= \\Cdb{ex-0208.302}\n\\end{align*}\n\n\\end{document}\n", "meta": {"hexsha": "346c19895f1960a97e2fddd597c0ca6b2a66435f", "size": 1061, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/exercises/ex-0208.tex", "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_issues_repo_path": "source/cadabra/exercises/ex-0208.tex", "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/exercises/ex-0208.tex", "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "avg_line_length": 27.2051282051, "max_line_length": 94, "alphanum_fraction": 0.4919886899, "num_tokens": 375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484144, "lm_q2_score": 0.7799928900257126, "lm_q1q2_score": 0.5580256205996349}}
{"text": "\\section{Qualitative approximation of neural networks}\nThe first question about ${\\rm DNN}_1$ is about the approximation\nproperties for any continuous functions. Here we have the next\ntheorem. \n\nThe first proof for this theorem above can be found in\n\\cite{leshno1993multilayer} and summarized in\n\\cite{pinkus1999approximation}.  %The next theorem plays an important role in the proof of above lemma, which is first proved in \\cite{leshno1993multilayer} with several steps. \nHere we can present a\nmore direct and simple version.\n\n\n\\begin{theorem}[Universal Approximation Property of Shallow Neural\n Networks]\n Let $\\sigma$ be a Riemann integrable function and $\\sigma\\in\n L_{loc}^\\infty(\\mathbb{R})$.  Then $\\Sigma_d(\\sigma)$ \n in dense in\n $C(\\Omega)$ for any compact $\\Omega\\subset \\mathbb{R}^n$ if and and\n only if $\\sigma$ is not a polynomial!\n\nNamely, if $\\sigma$ is not a polynomial,  then, for  any $f\\in C(\\bar \\Omega)$,\n there exists a sequence $\\phi_n \\in {\\rm DNN}_1$ such that\n$$\n\t\\max_{x\\in \\bar \\Omega} |\\phi_n(x) - f(x)| \\to 0, \\quad n \\to \\infty.\n$$\n\\end{theorem}\n\n\\begin{proof}\nLet us first prove the theorem in a special case that $\\sigma\\in\nC^\\infty(\\mathbb{R})$.\nSince $\\sigma\\in C^\\infty(\\mathbb{R})$, it follows that for every $\\omega,b$,\n\\begin{equation}\n \\frac{\\partial}{\\partial \\omega_j}\\sigma(\\omega\\cdot x+b) = \n \\lim_{n\\rightarrow \\infty}\\frac{\\sigma((\\omega+h e_j)\\cdot x+b)-\\sigma(\\omega\\cdot x+b)}{h} \\in \\overline{\\Sigma}_d(\\sigma)\n\\end{equation}\nfor all $j=1,...,d$. \n\nBy the same argument, for $\\alpha = (\\alpha_1,...,\\alpha_d)$\n$$D^\\alpha_\\omega\\sigma(\\omega\\cdot x+b)\\in\\overline{\\Sigma}_d(\\sigma)$$\nfor all $k\\in\\mathbb{N}$, $j=1,...,d$, $\\omega\\in\\mathbb{R}^d$ and $b\\in\\mathbb{R}$.\n\nNow \n$$\nD^\\alpha_\\omega\\sigma(\\omega\\cdot x+b)=x^\\alpha\\sigma^{(k)}(\\omega\\cdot x+b)\n$$\nwhere $k=|\\alpha|$ and $x^\\alpha = x_1^{\\alpha_1}\\cdots x_d^{\\alpha_d}$.  Since\n$\\sigma$ is not a polynomial there exists a $\\theta_k\\in\\mathbb{R}$\nsuch that $\\sigma^{(k)}(\\theta_k)\\ne0$.  Taking $\\omega=0$ and\n$b=\\theta_k$, we thus see that $x_j^k\\in\\overline{\\Sigma}_d(\\sigma)$.\nThus, all polynomials of the form $x_1^{k_1}\\cdots x_d^{k_d}$ are in\n$\\overline{\\Sigma}_d(\\sigma)$.\n\nThis implies that $\\overline{\\Sigma}_d(\\sigma)$ contains all\npolynomials.  By Weierstrass's Theorem \\cite{stone1948generalized} it\nfollows that $\\overline{\\Sigma}_d(\\sigma)$ contains $C(K)$ for each\ncompact $K\\subset\\mathbb{R}^n$. That is $\\Sigma_d(\\sigma)$ is dense in\n$C(\\mathbb{R}^d)$.\n\nNow we consider the case that $\\sigma$ is only Riemann integrable. \nConsider the mollifier $\\eta$\n\t\\begin{equation*}\n\t\\begin{aligned}\n\t\\eta(x)=\\frac{1}{\\sqrt {\\pi}}e^{-x^2}.\n\t\\end{aligned}\n\t\\end{equation*}\nSet $\\eta_\\epsilon=\\frac{1}{\\epsilon}\\eta(\\frac{x}{\\epsilon})$. 
Then consider $\\sigma_{\\eta_\\epsilon}$:\n\\begin{equation}\n\\sigma_{\\eta_\\epsilon}(x):=\\sigma\\ast{\\eta_\\epsilon}(x)=\\int_{\\mathbb{R}}\\sigma(x-y){\\eta_\\epsilon}(y)dy\n\\end{equation}\nIt can be seen that $\\sigma_{\\eta_\\epsilon}\\in C^\\infty(\\mathbb{R})$.\nWe first notice that\n$\\overline{\\Sigma}_1(\\sigma_{\\eta_\\epsilon})\\subset\\overline{\\Sigma}_1(\\sigma)$,\nwhich can be verified easily by checking that the Riemann sums of\n$\\sigma_{\\eta_\\epsilon}(x)=\\int_{\\mathbb{R}}\\sigma(x-y){\\eta_\\epsilon}(y)dy$\nare in $\\overline{\\Sigma}_1(\\sigma)$.\n\nFollowing the argument at the beginning of the proof, we\nwant to show that $\\overline{\\Sigma}_1(\\sigma_{\\eta_\\epsilon})$\ncontains all polynomials.  For this purpose, it suffices to show that\nthere exist $\\theta_k$ and $\\sigma_{\\eta_\\epsilon}$ such that\n$\\sigma_{\\eta_\\epsilon}^{(k)}(\\theta_k)\\ne0$ for each $k$. If not, then\nthere must be a $k_0$ such that\n$\\sigma_{\\eta_\\epsilon}^{(k_0)}(\\theta)=0$ for all\n$\\theta\\in\\mathbb{R}$ and all $\\epsilon>0$.  Thus the\n$\\sigma_{\\eta_\\epsilon}$'s are all polynomials of degree at most\n$k_0-1$.  It is known that $\\eta_\\epsilon\\in\nC^\\infty(\\mathbb{R})$ and $\\sigma\\ast\\eta_\\epsilon$ uniformly\nconverges to $\\sigma$ on compact sets in $\\mathbb{R}$, and the\n$\\sigma\\ast\\eta_\\epsilon$'s are all polynomials of degree at most\n$k_0-1$. Polynomials of a fixed degree form a closed linear subspace,\ntherefore $\\sigma$ is also a polynomial of degree at most $k_0-1$,\nwhich leads to a contradiction.\n\\end{proof}\n", "meta": {"hexsha": "ca9c01e049535fd49c270b86250d94482352d908", "size": 4233, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/DNN_Qualitative0.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/DNN_Qualitative0.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/DNN_Qualitative0.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0319148936, "max_line_length": 177, "alphanum_fraction": 0.7030474841, "num_tokens": 1426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.8056321843145405, "lm_q1q2_score": 0.558014684043602}}
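The heart of the smooth case in the proof above is that difference quotients in $\omega$ of the units $\sigma(\omega\cdot x+b)$ remain two-term elements of $\Sigma_d(\sigma)$, so their limits, the monomials $x^\alpha\sigma^{(k)}(\theta)$, lie in the closure. A small numeric check of the one-dimensional, first-order case; the choice $\sigma=\tanh$ and the point $\theta=0.5$ are assumptions for illustration only (any smooth non-polynomial activation would do):
\begin{lstlisting}[language=Python]
import numpy as np

sigma = np.tanh          # a smooth, non-polynomial activation (assumed)
theta = 0.5              # a point where sigma'(theta) != 0
x = np.linspace(-1.0, 1.0, 5)
h = 1e-6

# (sigma((0+h)x + theta) - sigma(0x + theta)) / h is a difference of two
# network units; as h -> 0 it converges to x * sigma'(theta).
quotient = (sigma(h * x + theta) - sigma(theta)) / h

sigma_prime = 1.0 - np.tanh(theta) ** 2   # derivative of tanh
print(np.max(np.abs(quotient - x * sigma_prime)))  # small, O(h)
\end{lstlisting}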
{"text": "\\chapter*{Symbols}\n\\label{sec:symbols}\n\\addcontentsline{toc}{chapter}{Symbols}\n\n\n\\section*{Symbols}\n\n\\begin{tabbing}\n \\hspace*{1.6cm} \\= \\kill\n $\\eta$ \\> Learning rate \\\\ [0.5ex]\n $\\gamma$ \\> Model parameter \\\\ [0.5ex]\n $\\kappa$ \\> Largest subset size \\\\ [0.5ex]\n $\\nu$ \\> Noise-to-data ratio \\\\ [0.5ex]\n $\\sigma$ \\> Gaussian distribution's standard deviation \\\\ [0.5ex]\n $\\mathcal{D}$ \\> Set of data samples \\\\ [0.5ex]\n $K$ \\> Latent coherence dimensions \\\\ [0.5ex]\n $L$ \\> Latent diversity dimensions \\\\ [0.5ex]\n $M$ \\> Number of features \\\\ [0.5ex]\n $\\mathcal{N}$ \\> Set of noise samples \\\\ [0.5ex]\n $\\mathbb{R}$ \\> Real numbers \\\\ [0.5ex]\n $S,T$ \\> Set \\\\ [0.5ex]\n $S_{t}$ \\> Sequence \\\\ [0.5ex]\n $V$ \\> Ground set \\\\ [0.5ex]\n $Z$ \\> Partition function \\\\ [0.5ex]\n\\end{tabbing}\n\n\\section*{Matrices}\n\n\\begin{tabbing}\n  \\hspace*{1.6cm} \\= \\kill\n  $\\mathbf{B}$ \\> Feature diversity weights \\\\ [0.5ex]\n  $\\mathbf{E}$ \\> Feature coherence weights \\\\ [0.5ex]\n  $\\mathcal{I}$ \\> Identity \\\\ [0.5ex]\n  $\\mathbf{W}^{b}$ \\> Coherence weights \\\\ [0.5ex]\n  $\\mathbf{W}^{e}$ \\> Diversity weights \\\\ [0.5ex]\n  $\\mathbf{X}$ \\> Features \\\\ [0.5ex]\n\\end{tabbing}\n\n\\section*{Vectors}\n\n\\begin{tabbing}\n  \\hspace*{1.6cm} \\= \\kill\n  $\\boldsymbol{\\theta}$ \\> Model parameters \\\\ [0.5ex]\n  $\\mathbf{a}$ \\> Utility weights for features \\\\ [0.5ex]\n  $\\mathbf{u}$ \\> Utility weights \\\\ [0.5ex]\n  $\\mathbf{G}$ \\> Accumulated gradient \\\\ [0.5ex]\n\\end{tabbing}\n\n\\section*{Indices}\n\n\\begin{tabbing}\n  \\hspace*{1.6cm} \\= \\kill\n  $i, j$ \\> Element \\\\ [0.5ex]\n  $k$ \\> Feature \\\\ [0.5ex]\n  $c$ \\> Coherence dimension \\\\ [0.5ex]\n  $d$ \\> Diversity dimension \\\\ [0.5ex]\n\\end{tabbing}\n\n\\section*{Acronyms and Abbreviations}\n\\begin{tabbing}\n \\hspace*{1.6cm}  \\= \\kill\n ETH \\> Eidgen\u00f6ssische Technische Hochschule \\\\[0.5ex]\n FLID \\> Facility Location Diversity \\\\[0.5ex]\n FLDC \\> Facility Location Diversity and Coherence \\\\[0.5ex]\n FFLDC \\> Featurized Facility Location Diversity and Coherence \\\\ [0.5ex]\n MLE \\> Maximum Likelihood Estimation \\\\ [0.5ex]\n NCE \\> Noise Contrastive Estimation \\\\ [0.5ex]\n LAS \\> Learning \\& Adaptive Systems \\\\ [0.5ex]\n SGD \\> Stochastic Gradient Descent \\\\ [0.5ex]\n TVD \\> Total Variation Distance \\\\ [0.5ex]\n\\end{tabbing}", "meta": {"hexsha": "8c0de2cb3c48d99d327626edc6f696bf24850bf2", "size": 2221, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/report/chapters/symbols.tex", "max_stars_repo_name": "dballesteros7/master-thesis-2015", "max_stars_repo_head_hexsha": "8c0bf9a6eef172fc8167a30780ae0666f8ea2d88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/report/chapters/symbols.tex", "max_issues_repo_name": "dballesteros7/master-thesis-2015", "max_issues_repo_head_hexsha": "8c0bf9a6eef172fc8167a30780ae0666f8ea2d88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/report/chapters/symbols.tex", "max_forks_repo_name": "dballesteros7/master-thesis-2015", "max_forks_repo_head_hexsha": "8c0bf9a6eef172fc8167a30780ae0666f8ea2d88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2816901408, "max_line_length": 73, "alphanum_fraction": 0.6204412427, "num_tokens": 835, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5580146770531552}}
{"text": "\\section{Axisymmetric Favre-Averaged Navier-Stokes Equations}\n\\subsection{Overview of Averaging Properties}\n\n\tThe difference between the laminar multi-species axisymmetric equations already derived and expressed in \nEqs.\\ref{eqn:speciesfinal},\\ref{eqn:xmom},\\ref{eqn:rmom}, and \\ref{eqn:energy} and their turbulent \ncounterparts is that the flow variables in the turbulent equations represent values averaged over a \nsufficiently long period of time to establish a meaningful average (i.e. a period many times larger than\nthe period of the turbulent fluctuations).  This averaging process can be done numerous\nways, such as the familiar Reynolds average,\n\n\\begin{equation}\n\t\\overline{\\phi} = \\frac{1}{T}\\int_{t=0}^{t=T} \\phi dt = \\frac{1}{2T}\\int_{t=-T}^{t=T} \\phi dt\n\\label{eqn:reynolds}\n\\end{equation}\n\n\twhere the flow property $\\phi$ would then be broken down into its Reynolds averaged value ($\\overline{\\phi}$)\nand a corresponding turbulent fluctuation component ($\\phi'$),\n\n\\begin{equation}\n\t\\phi = \\overline{\\phi} + \\phi'\n\\label{eqn:reybreakdown}\n\\end{equation}\n\n\tHowever, breaking down the flow variables in this fashion does not take advantage of the fact\nthat many of the quantities of interest in the Navier Stokes equations involve products of density.  \nThus defining a density weighted average, called the Favre average as\n\n\\begin{equation}\n\t\\tilde{\\phi} = \\frac{1}{\\overline{\\rho} T}\\int_{t=0}^{t=T} \\rho \\phi dt = \\frac{1}{2\\overline{\\rho} T}\n\t\\int_{t=-T}^{t=T} \\rho \\phi dt\n\\label{eqn:favre}\n\\end{equation}\n\n\tone can decompose the flow variables into a Favre averaged component ($\\tilde\\phi$) and a fluctuating turbulent\ncomponent ($\\phi''$) (note that $\\overline{\\rho}$ is the Reynolds averaged density as calculated using Eq. \\ref{eqn:reynolds}).\n\n\\begin{equation}\n\t\\phi = \\tilde{\\phi} + \\phi''\n\\label{eqn:favbreakdown}\n\\end{equation}\n\n\tBy decomposing the flow variables into Favre averages and Favre fluctuating components the\nresulting turbulent equations can be greatly simplified.  At this point several points should be emphasized\nconcerning the two types of averages.  First, all the rules that govern the Reynolds averaging of sums of \nquantities, products of quantities, derivatives of quantities, etc. also apply to the Favre averaging of \nthese same quantities.  Second, there is a distinction to be made between a Reynolds averag\\emph{ed} quantity \nand a quantity undergoing Reynolds averag\\emph{ing} (or a Favre averag\\emph{ed} quantity and the process of \nFavre averag\\emph{ing}).  An averag\\emph{ed} quantity is one that has been decomposed into an average and\na fluctuating component (i.e. Eqs. \\ref{eqn:reybreakdown} and \\ref{eqn:favbreakdown}) where as the averag\\emph{ing}\nprocess refers to the process by which one is taking an average (Eqs. \\ref{eqn:reynolds} and \\ref{eqn:favre}).\nAlthough these two items are related, as indeed one cannot have a Favre averaged quantity without first performing\nthe Favre averaging process, the distinction lies in the fact that one can take the Reynolds average of a Favre\naveraged quantity (or vise versa) which is ultimately what we would like to do.  
As an example,\n\n\\begin{displaymath}\n\t\\overline{\\tilde{\\phi}+\\phi''}\n\\end{displaymath}\n\n\tbut since the Reynolds (or Favre) average of a sum of two quantities is equal to the sum of the Reynolds\n(or Favre) average of the quantities, i.e., \n\n\\begin{equation}\n\t\\overline{\\phi_1 + \\phi_2} = \\overline{\\phi_1} + \\overline{\\phi_2}\n\\label{eqn:sums}\t\n\\end{equation}\t \n\n\tone can rewrite the previous expression as\n\n\\begin{displaymath}\n\t\\overline{\\tilde{\\phi} + \\phi''} = \\overline{\\tilde{\\phi}} + \\overline{\\phi''} \n\\end{displaymath}\n\n\tHowever, if it is also noted that the average of an averaged quantity is simply the averaged \nquantity itself then,\n\n\\begin{equation}\n\t\\overline{\\tilde{\\phi}} = \\tilde{\\phi}\n\\label{eqn:aveofave}\n\\end{equation}\n\n\tthen the previous expression can be simplified to\n\n\\begin{equation}\n\t\\overline{\\tilde{\\phi} + \\phi''} = \\tilde{\\phi} + \\overline{\\phi''}\t\n\\label{eqn:reyoffav}\n\\end{equation}\n\n\tNow at this point it might be tempting to say that $\\overline{\\phi''} = 0$ since when Reynolds averaging a \nReynolds averaged quantity\n\n\\begin{displaymath}\n\t\\overline{\\phi} = \\overline{(\\overline{\\phi} + \\phi')} = \\overline{\\phi} + \\overline{\\phi'}\n\\end{displaymath}\n\n\ttherefore,\n\n\\begin{equation}\n\t\\overline{\\phi'} = 0\n\\label{eqn:reyaveoffluc}\n\\end{equation}\n\n\tHowever, although the Reynolds average of a Reynolds fluctuating component is zero (as shown in\nEq. \\ref{eqn:reyaveoffluc}) the same cannot be said of the Reynolds average of a Favre fluctuating component \n($\\overline{\\phi''}$) as follows.  Taking the Reynolds average of the product of density and a given quantity,\n\n\\begin{displaymath}\n\t\\overline{\\rho \\phi} = \\frac{1}{T} \\int_{t=0}^{t=T} \\rho \\phi dt\n\\end{displaymath}\n\n\tand comparing this to the definition of a Favre average (Eq. \\ref{eqn:favre}) one obtains the result\n\n\\begin{equation}\n\t\\overline{\\rho \\phi} = \\overline{\\rho}\\tilde{\\phi}\n\\label{eqn:rhophi}\n\\end{equation}\n\n\tUsing the definition of a Favre averaged quantity (Eq. \\ref{eqn:favbreakdown}) and substituting\nfor $\\tilde{\\phi}$ using Eq. \\ref{eqn:rhophi} one obtains,\n\n\\begin{displaymath}\n\t\\phi'' = \\phi - \\frac{\\overline{\\rho \\phi}}{\\overline{\\rho}}\n\\end{displaymath}\n\n\tDecomposing $\\phi$ into its Reynolds average and its Reynolds fluctuating component while making use \nof another rule of Reynolds averaging which states\n\n\\begin{equation}\n\t\\overline{\\phi_1 \\phi_2} = \\overline{\\phi_1} \\hspace{0.1cm}\\overline{\\phi_2} + \\overline{\\phi_1' \\phi_2'}\n\\label{eqn:product}\n\\end{equation}\n\n\tone can write,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\t\\phi'' = (\\overline{\\phi} + \\phi') - \\frac{\\overline{\\rho}\\overline{\\phi} + \\overline{\\rho'\\phi'}}\n\t\t\t{\\overline{\\rho}} \\\\ \\\\\n\t\t\\phi'' = \\overline{\\phi} + \\phi' - \\overline{\\phi} - \\frac{\\overline{\\rho'\\phi'}}\n\t\t\t{\\overline{\\rho}} \\\\ \\\\\n\t\t\\phi'' = \\phi' - \\frac{\\overline{\\rho'\\phi'}}{\\overline{\\rho}}\n\t\\end{array}\n\\end{displaymath}\n\n\tTaking the Reynolds average of the above expression will yield an expression for the Reynolds average\nof a Favre fluctuating component (which is what we are looking for)\n\n\\begin{displaymath}\n\t\\overline{\\phi''} = \\overline{\\phi' - \\overline{(\\frac{\\rho'}{\\overline{\\rho}})\\phi'}}\n\\end{displaymath}\n\n\twhich can be rewritten using the relation in Eq. 
\\ref{eqn:sums} as\n\n\\begin{displaymath}\n\t\\overline{\\phi''} = \\overline{\\phi'} - \\overline{(\\frac{\\rho'}{\\overline{\\rho}})\\phi'}\n\\end{displaymath}\n\n\twhich can then be reduced using the results of Eq. \\ref{eqn:reyaveoffluc} to\n\n\\begin{equation}\n\t\\overline{\\phi''} = \\overline{(\\frac{\\rho'}{\\overline{\\rho}})\\phi'}\n\\label{eqn:reyoffavfluc}\n\\end{equation}\n\n\tThus is has been clearly shown that the quantity $\\overline{\\phi''}$ is not equal to zero and hence\nEq. \\ref{eqn:reyoffav} cannot be reduced any further.  So then, the obvious question becomes why use Favre\naveraged values in the first place when this produces no apparent simplifications while performing a Reynolds averaging\nprocess.  The answer lies in the fact that for many of the quantities in the Navier-Stokes equations, one is not merely \ninterested in the Reynolds average of a given quantity, but rather the Reynolds average of a given quantity\nmultiplied by density....\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\t\\overline{\\rho \\phi} = \\overline{\\rho (\\tilde{\\phi} + \\phi'')} \\\\ \\\\\n\t\t\\overline{\\rho \\phi} = \\overline{\\rho \\tilde{\\phi}} + \\overline{\\rho \\phi''} \\\\ \\\\\n\t\t\\overline{\\rho \\phi} = \\overline{\\rho} \\tilde{\\phi} + \\overline{\\rho \\phi''}\n\t\\end{array}\n\\end{displaymath}\n\n\tIn the above it is noted that Eqs. \\ref{eqn:sums}, and \\ref{eqn:aveofave} where used.\nHowever, comparing the result expressed in Eq. \\ref{eqn:rhophi} to this expression one can conclude that \nthe Reynolds averaging of the product of density and a Favre fluctuating component is equal to zero,\n\n\\begin{equation}\n\t\\overline{\\rho \\phi''} = 0\n\\label{eqn:rhophidp}\n\\end{equation}\n\n\tNot only does decomposing the flow variables into Favre averages make Reynolds averaging ofdouble products \ninvolving density simplify, but Reynolds averaging of triple products involving density are also greatly simplified,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\t\\overline{\\rho \\phi_1 \\phi_2} = \\overline{\\rho (\\tilde{\\phi}_1 + \\phi_1'') \n\t\t(\\tilde{\\phi}_2 + \\phi_2'')} \\\\ \\\\\n\t\t\\overline{\\rho \\phi_1 \\phi_2} = \\overline{\\rho \\tilde{\\phi}_1 \\tilde{\\phi}_2 + \n\t\t\\rho \\tilde{\\phi}_1\\phi_2'' + \\rho \\phi_1'' \\tilde{\\phi}_2 + \\rho \\phi_1'' \\phi_2'' }\n\t\\end{array}\n\\end{displaymath}\n\n\tUsing Eq. \\ref{eqn:sums} followed by the result derived in Eq. \\ref{eqn:rhophidp} this expression\ncan be rewritten,\n\n\\begin{equation}\n\t\\overline{\\rho \\phi_1 \\phi_2} = \\overline{\\rho}\\tilde\\phi_1 \\tilde\\phi_2 + \n\t\\overline{\\rho \\phi_1'' \\phi_2''}\t\n\\label{eqn:tripleprod}\n\\end{equation}\n\n\tThe use of Eq. 
\\ref{eqn:rhophidp} also allows the simplification of a quadruple product involving density (and in\nfact will continue to allow simplifications as compared to a Reynolds averaged quantity for any number\nof products involving density, although more than quadruple products are not encountered in the derivation\nof the turbulent Navier-Stokes equations) as follows,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\t\\overline{\\rho \\phi_1 \\phi_2 \\phi_3} = \\overline{\\rho (\\tilde \\phi_1 + \\phi_1'')(\\tilde \\phi_2 + \\phi_2'')\n\t\t\t(\\tilde \\phi_3 + \\phi_3'')} \\\\ \\\\\n\t\t\\overline{\\rho \\phi_1 \\phi_2 \\phi_3} = \\overline{\\rho (\\tilde \\phi_1\\tilde \\phi_2 + \\tilde \\phi_1\\phi_2''\n\t\t\t+ \\phi_1''\\tilde \\phi_2 + \\phi_1''\\phi_2'')(\\tilde \\phi_3 + \\phi_3'')} \\\\ \\\\\n\t\t= \\overline{\\rho}\\tilde \\phi_1\\tilde \\phi_2\\tilde \\phi_3 + \\tilde \\phi_1\\tilde \\phi_2\n\t\t\\overline{\\rho\\phi_3''} + \\tilde \\phi_1\\tilde \\phi_3\\overline{\\rho \\phi_2''} + \\tilde \\phi_1\n\t\t\\overline{\\rho \\phi_2''\\phi_3''} + \\tilde \\phi_2\\tilde \\phi_3\\overline{\\rho\\phi_1''} +\n\t\t\\tilde \\phi_2\\overline{\\rho\\phi_1''\\phi_3''} + \\tilde \\phi_3\\overline{\\rho\\phi_1''\\phi_2''}\n\t\t+\\overline{\\rho\\phi_1''\\phi_2''\\phi_3''}\n\t\\end{array}\n\\end{displaymath}\n\t\n\twhich can be reduced to the final result using Eq. \\ref{eqn:rhophidp},\n\n\\begin{equation}\n\t\\overline{\\rho \\phi_1 \\phi_2 \\phi_3} = \\overline{\\rho}\\tilde \\phi_1\\tilde \\phi_2\\tilde \\phi_3 + \n\t\t\\tilde \\phi_1\\overline{\\rho \\phi_2''\\phi_3''} +\n\t\t\\tilde \\phi_2\\overline{\\rho\\phi_1''\\phi_3''} + \\tilde \\phi_3\\overline{\\rho\\phi_1''\\phi_2''}\n\t\t+\\overline{\\rho\\phi_1''\\phi_2''\\phi_3''}\n\\label{eqn:quadprod}\n\\end{equation}\n\n\tAlthough not shown here, the Reynolds averaging of a triple (and quadruple) product involving density \nusing Reynolds averaged quantities contains more terms due to the fact that $\\overline{\\rho \\phi'} \\not= 0$.\n\n%----------------------------------------------------------------------------------------------------------------------\n\\subsection{Momentum Along the $\\theta$ Co-Ordinate Direction}\n\n\tNow it has been stated that one of the main advantages of the axisymmetric formulation of the Navier-Stokes \nequations is the reduction of the governing set of equations by an entire dimension which drastically reduces \nthe computational effort required in the solution of a given problem.  However, the physical mechanisms of \nturbulence are inherently three dimensional and as such any derivation of turbulent quantities must be done \nwith this in mind.  Therefore, although in the end the axisymmetric equations of interest will contain only \ntwo momentum equations (thereby making use of the assumption that $w = 0$), the derivation of these equations \nmust be done without making use of the restriction that the circumferential velocity be zero.  Note that this \ndoes not negate the fact that the flow be axisymmetric, as the condition expressed by Eq. 
\\ref{eqn:dtheta} \nstill applies, however this does require the derivation of the $\\theta$ direction momentum equation (and the \naddition of some terms to the energy equation which will be derived in the next section).\n\t\n\n\\begin{figure}[ht]\n\\includegraphics[height=5cm]{./thetadir.eps}\n% One can specify [angle=x,width=y,height=z] for the graphics\n\\caption[$x$ - $r$ planes]{$x$ - $r$ planes : All fluid properties are depicted by solid lines while \n\t\t\t\tforces acting on the infantesimal fluid element are depicted by dashed lines (these include\n\t\t\t\tthe pressure acting on the left and right $x$ - $r$ planes\n\t\t\t\tand the shears acting on all faces of the volume)}\n\\label{fig:thetadir}\n\\end{figure}\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\textrm{$\\theta$ Momentum in through the bottom $x$ - $\\theta$ plane} &\n\t\t= & (\\rho w)(v)dx(R_o \\theta)\\\\\n \t\t& \\\\ & \\\\\n\t\t\\textrm{$\\theta$ Momentum out through the top $x$ - $\\theta$ plane} &\n\t\t= & (\\rho w + \\frac{\\partial \\rho w}{\\partial r}dr)(v + \\frac{\\partial v}{\\partial r}dr)dx\n\t\t(R_o + dr) \\theta \n\t\\end{array}\n\\end{displaymath}\n\n\tTaking the difference between the outflow and inflow $x$ - $\\theta$ planes yields (while again neglecting\ntriple products of differentials and combining terms),\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\\Delta \\theta_{momentum_{x \\theta}}=\n\t\t\\begin{array}{c}\n\t(\\rho vw + \\rho w \\frac{\\partial v}{\\partial r}dr + \n\tv \\frac{\\partial \\rho w}{\\partial r}dr + \\frac{\\partial v}{\\partial r} \\frac{\\partial \\rho w}{\\partial r}dr^2 -\n\t\\rho vw)dx(R_o \\theta) \\\\\n\t + (\\rho vw + \\rho w \\frac{\\partial v}{\\partial r}dr + v \\frac{\\partial \\rho w}{\\partial r}dr + \n\t\\frac{\\partial v}{\\partial r} \\frac{\\partial \\rho w}{\\partial r}dr^2)dxdr \\theta\n\t\t\\end{array} \\\\ \\\\\n\t\\Delta \\theta_{momentum_{x \\theta}}= (\\frac{\\partial \\rho vw}{\\partial r})dxdr (R_o \\theta) + (\\rho vw)dxdr \\theta\n\t\\end{array}\n\\end{displaymath}\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\textrm{$\\theta$ Momentum in through the front $r$ - $\\theta$ plane} &\n\t\t= & (\\rho w)(u)dr(R_o \\theta)\\\\\n \t\t& \\\\ & \\\\\n\t\t\\textrm{$\\theta$ Momentum out through the back $r$ - $\\theta$ plane} &\n\t\t= & (\\rho w + \\frac{\\partial \\rho w}{\\partial x}dx)(u + \\frac{\\partial u}{\\partial x}dx)dr(R_o \\theta) \n\t\\end{array}\n\\end{displaymath}\n\n\tTaking the difference between the outflow and inflow $r$ - $\\theta$ planes yields,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\\Delta \\theta_{momentum_{r \\theta}}=(\\rho uw + \\rho w \\frac{\\partial u}{\\partial x}dx + \n\tu \\frac{\\partial \\rho w}{\\partial x}dx +\n\t \\frac{\\partial u}{\\partial x} \\frac{\\partial\\rho w}{\\partial x}dx^2 - \\rho uw)dr(R_o \\theta) \\\\ \\\\\n\t\\Delta \\theta_{momentum_{r \\theta}}=(\\frac{\\partial \\rho uw}{\\partial x})dxdr(R_o \\theta)\n\t\\end{array}\n\\end{displaymath}\n\n\tAllowing for an unsteady change in the $\\theta$ direction momentum and applying Newton's second law\n\n\\begin{displaymath}\n\t\\frac{\\partial \\rho w}{\\partial t}dxdr(R_o \\theta) + \\Delta \\theta_{momentum_{r \\theta}} + \\Delta \\theta_{momentum_{x \\theta}} = \n\t\\sum \\textrm{Forces}_\\theta\n\\end{displaymath}\n\n\twhich after making the appropriate substitutions yields\n\n\\begin{displaymath}\n\t\\Big\\{(\\frac{\\partial \\rho w}{\\partial t} + \\frac{\\partial \\rho uw}{\\partial x} +\n\t\\frac{\\partial \\rho 
vw}{\\partial r})(R_o \\theta) + (\\rho vw \\theta)\\Big\\}dxdr = \\sum \\textrm{Forces}_\\theta\n\\end{displaymath}\n\n\tHowever, $\\theta$ can be eliminated through the use of Eq.\\ref{eqn:theta} to obtain\n\n\\begin{equation}\n\t\\Big\\{\\frac{\\partial \\rho w}{\\partial t} + \\frac{\\partial \\rho uw}{\\partial x} +\n\t\\frac{\\partial \\rho vw}{\\partial r} + \\frac{1}{r}(\\rho vw)\\Big\\}dxdr = \\sum \\textrm{Forces}_\\theta\n\\label{eqn:thetamomchg}\n\\end{equation}\n\n\tExamining each of the forces contributing to an overall force in the $\\theta$ direction individually, one \ncan complete the right hand side of Eq.\\ref{eqn:thetamomchg}.  As for the $r$ direction derivations, the distance\nbetween the centre of the infantesimal volume and the boundaries will be considered to be $\\frac{\\theta}{2}$.\nAs well, the definition of two additional stress terms is required for the purposes of this derivation.  However, \nsince the $\\theta$ momentum equation is only a stepping stone used in deriving the turbulent set of axisymmetric \nequations, they are included here for completeness only.\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the left $x$ - $r$ plane} \\\\\n\t\t\\textrm{by the pressure and shears (normal and parallel)} \\\\\n\t\t\\textrm{on its surface in the $\\theta$ direction}\n\t\t\\end{array} \n\t\t& = &\n\t\t\\begin{array}{c}\n\t\t(-p \\cos{\\frac{\\theta}{2}} + \\tau_{\\theta \\theta}\\cos{\\frac{\\theta}{2}} \\\\\n\t\t-\\tau_{\\theta r}\\sin{\\frac{\\theta}{2}}) dxdr\n\t\t\\end{array} \\\\\n\t& \\\\ & \\\\\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the right $x$ - $r$ plane}\\\\\n\t\t\\textrm{by the pressure and shears (normal and parallel)} \\\\\n\t\t\\textrm{on its surface in the $\\theta$ direction}\n\t\t\\end{array} \n\t\t& = &\n\t\t\\begin{array}{c}\n\t\t(p \\cos{\\frac{\\theta}{2}} - \\tau_{\\theta \\theta}\\cos{\\frac{\\theta}{2}} \\\\\n\t\t-\\tau_{\\theta r}\\sin{\\frac{\\theta}{2}}) dxdr\t\n\t\t\\end{array}\n\t\\end{array} \n\\end{displaymath}\n\\\\\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the bottom $x$- $\\theta$ plane} \\\\\n\t\t\\textrm{by the shear on its surface}\n\t\t\\end{array} & = &\n\t\t-\\tau_{r\\theta}dx (R_o \\theta) \\\\\n   \t& \\\\ & \\\\\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the top $x$ - $\\theta$ plane}\\\\\n\t\t\\textrm{by the shear on its surface}\n\t\t\\end{array} & = &\n\t\t(\\tau_{r\\theta} + \\frac{\\partial \\tau_{r\\theta}}{\\partial r}dr) dx (R_o + dr) \\theta\n\t\\end{array}\n\\end{displaymath}\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the front $r$- $\\theta$ plane} \\\\\n\t\t\\textrm{by the shear on its surface}\n\t\t\\end{array} & = &\n\t\t-\\tau_{x\\theta}dr (R_o \\theta) \\\\\n   \t& \\\\ & \\\\\n\t\t\\begin{array}{c}\n\t\t\\textrm{Force exerted on the back $r$ - $\\theta$ plane}\\\\\n\t\t\\textrm{by the shear on its surface}\n\t\t\\end{array} & = &\n\t\t(\\tau_{x\\theta} + \\frac{\\partial \\tau_{x\\theta}}{\\partial x}dx) dr (R_o \\theta) \n\t\\end{array}\n\\end{displaymath}\n\n\tSumming all the forces considered, the total force in the $\\theta$ - direction becomes,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\\sum \\textrm{Forces}_\\theta & = &\n\t\t\\begin{array}{c}\n\t\t\t(-p \\cos{\\frac{\\theta}{2}} + p \\cos{\\frac{\\theta}{2}} \n\t\t\t + \\tau_{\\theta \\theta}\\cos{\\frac{\\theta}{2}} - \\tau_{\\theta 
\\theta}\\cos{\\frac{\\theta}{2}}\n\t\t\t-\\tau_{\\theta r}\\sin{\\frac{\\theta}{2}} -\\tau_{\\theta r}\\sin{\\frac{\\theta}{2}})dxdr \\\\\n\t\t\t+ (-\\tau_{r\\theta}dx + \\tau_{r\\theta}dx + \\frac{\\partial \\tau_{r\\theta}}{\\partial r}dxdr\n\t\t\t- \\tau_{x\\theta}dr + \\tau_{x\\theta}dr + \\frac{\\partial \\tau_{x\\theta}}{\\partial x}dxdr)(R_o \\theta) \\\\\n\t\t\t+(\\tau_{r\\theta} + \\frac{\\partial \\tau_{r\\theta}}{\\partial r}dr)dxdr \\theta\n\t\t\\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\twhich after canceling like terms, inserting Eqs. \\ref{eqn:smallangle} and \\ref{eqn:theta} while neglecting \ntriple products of differentials (and noting $\\tau_{\\theta r} = \\tau_{r \\theta}$) reduces to\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t\\sum \\textrm{Forces}_\\theta & = &\n\t\t\\Big\\{\\frac{\\partial \\tau_{x\\theta}}{\\partial x}\n\t\t+ \\frac{\\partial \\tau_{r\\theta}}{\\partial r} + \\frac{1}{r}(2\\tau_{r \\theta})\\Big\\}dxdr\n\t\\end{array}\n\\label{eqn:thetaforces}\n\\end{equation}\n\n\tTherefore combining Eqs. \\ref{eqn:thetamomchg} and \\ref{eqn:thetaforces} yields the desired form of the \n$\\theta$ momentum equation,\n\n\\begin{equation}\n\\frac{\\partial}{\\partial t}(\\rho w) + \\frac{\\partial}{\\partial x}(\\rho uw) +\n\\frac{\\partial}{\\partial r}(\\rho vw) + \\frac{1}{r}(\\rho vw) = \n\\frac{\\partial}{\\partial x}(\\tau_{x\\theta}) + \\frac{\\partial}{\\partial r}(\\tau_{r\\theta}) + \n\\frac{1}{r}(2\\tau_{r\\theta})\n\\label{eqn:thetamom}\n\\end{equation}\n\n\\subsubsection{Definition of Shear Stresses Normal to the $\\theta$ - Direction}\n \n\\begin{displaymath}\n\t\\tau_{x\\theta} = \\tau_{\\theta x} = \\mu(\\frac{\\partial w}{\\partial x} + \\frac{1}{r}\\frac{\\partial u}{\\partial \\theta})\n\\end{displaymath}\n\\begin{displaymath}\n\t\\tau_{r\\theta} = \\tau_{\\theta r} = \\mu(\\frac{1}{r}\\frac{\\partial v}{\\partial \\theta} + r\\frac{\\partial}{\\partial r}\n\t\\frac{w}{r})\n\\end{displaymath}\n\n\twhere when combined with the axisymmetric assumption of Eq. \\ref{eqn:dtheta} both of these quantities can be\nreduced to\n\n\\begin{equation}\n\t\\tau_{x\\theta} = \\tau_{\\theta x} = \\mu\\frac{\\partial w}{\\partial x}\t\n\\label{eqn:tauxtheta}\n\\end{equation}\n\n\\begin{equation}\n\t\\tau_{r\\theta} = \\tau_{\\theta r} = \\mu r\\frac{\\partial}{\\partial r}\\frac{w}{r} = \n\t\\mu \\frac{\\partial w}{\\partial r} - \\frac{\\mu}{r}w\t\n\\label{eqn:taurtheta}\n\\end{equation}\n\n%----------------------------------------------------------------------------------------------------------------------\n\\subsection{Circumferential Additions to the Axisymmetric Energy Equation}\n\n\tThe axisymmetric energy equation as derived previously (see Eq. 
\\ref{eqn:energyshear}) is repeated below\n(without the turbulence terms as these will derived presently),\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t\t\\begin{array}{c}\n\t\t\\Big\\{\\frac{\\partial}{\\partial t}(\\rho E) +\\frac{\\partial}{\\partial x}(\\rho E u + pu) \\\\\n\t\t+ \\frac{\\partial}{\\partial r}(\\rho E v + pv) + \\frac{1}{r}(\\rho E v + pv)\\Big\\}\n\t\t\\end{array} & = &\n\t\t\\begin{array}{c}\n\t\t\t\\frac{\\partial}{\\partial x}\\Big\\{\n\t\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial x}h_k) \n\t\t\t- q_{c_x} +  \\tau_{xx}u + \\tau_{xr}v \\Big\\} \\\\\n\t\t\t\\frac{\\partial}{\\partial r}\\Big\\{\n\t\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k)\n\t\t\t-  q_{c_r} + \\tau_{rx}u + \\tau_{rr}v  \\Big\\} \\\\\n\t\t\t+\\frac{1}{r}\\Big\\{\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) - q_{c_r} + \\tau_{rx}u \n\t\t\t+ \\tau_{rr}v\\Big\\}\n\t\t\\end{array}  \n\t\\end{array}\n\\label{eqn:energy2}\n\\end{equation}\n\n\tAny additions to this equation are going to be added to one of the four main components that comprise the\noverall energy equation, (i) the rate of change of total energy, (ii) energy transfer due to species diffusion,\n(iii) energy transfer due to heat, (iv) rate of work done.  Starting with the first component, since the \naxisymmetric assumption of Eq. \\ref{eqn:dtheta} negates any change of properties in the circumferential \ndirection, this requires that the total energy entering and exiting the infantesimal volume through the \nleft and right $x$ - $r$ planes be equal.  Therefore there is no addition to this component of the energy\nequation.  However, one must be careful in the definition of E, the specific energy, as it was defined as the \nsum of the specific internal and kinetic energies (Eq. \\ref{eqn:spenergy})\n\n\\begin{displaymath}\n\tE = e + \\frac{1}{2}\\vec{V}^2 \n\\end{displaymath}\n\n\tNow since the assumption of zero circumferential velocity is not being applied at this point in the derivation, \nthe velocity vector must be understood as the vector sum of all \\emph{three} co-ordinate direction velocities,\n\n\\begin{equation}\n\t\\vec{V}^2 = (u^2 + v^2 + w^2)\n\\label{eqn:totvel}\n\\end{equation}\n\n\tConsidering next the second and third components of the energy equation, since these both rely on \ngradients in concentration and temperature respectively, they too produce no circumferential additions due to the \nassumption of axisymmetric flow.  This leaves only the fourth component, the rate of work done on the infantesimal\nvolume, to be considered.  In this case, if $w$ is not assumed to be equal to zero, then there is indeed an\naddition that must be made to the overall energy equation.  Considering the forces acting in the circumferential\ndirection as shown in Fig. 
\\ref{fig:thetadir}\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t   \\begin{array}{c}\n\t\t\\textrm{Work due to pressure acting} \\\\ \\textrm{on the right $x$ - $r$ plane}\n\t   \\end{array} & = &\n\t(-p\\cos{\\frac{\\theta}{2}})w dxdr\n\t\\\\ & & \\\\\n\t   \\begin{array}{c}\n\t\t\\textrm{Work due to pressure acting} \\\\ \\textrm{on the left $x$ - $r$ plane}\n\t   \\end{array} & = &\n\t(+p\\cos{\\frac{\\theta}{2}})w dxdr\n\t\\\\ & & \\\\\n\t   \\begin{array}{c}\n\t\t\\textrm{Work due to normal shear acting} \\\\ \\textrm{on the right $x$ - $r$ plane}\n\t   \\end{array} & = &\n\t(\\tau_{\\theta \\theta}\\cos{\\frac{\\theta}{2}})w dxdr\n\t\\\\ & & \\\\\n\t   \\begin{array}{c}\n\t\t\\textrm{Work due to normal shear acting} \\\\ \\textrm{on the left $x$ - $r$ plane}\n\t   \\end{array} & = &\n\t-(\\tau_{\\theta \\theta}\\cos{\\frac{\\theta}{2}})w dxdr\n\t\\end{array}\n\\end{displaymath}\n\n\tAs can be seen, these work terms cancel by inspection.  However, the remaining stress terms must\nbe accounted for as follows,\n\n\\begin{displaymath}\n       \\begin{array}{ccc}\n\t\\begin{array}{c}\n\t\t\\textrm{Work due to shear acting} \\\\ \\textrm{on the bottom $x$ - $\\theta$ plane}\n\t\\end{array} & = &\n\t-(\\tau_{r\\theta}w)dx(R_o \\theta)\n\t\\\\ & & \\\\\n\t\\begin{array}{c}\n\t\t\\textrm{Work due to shear acting} \\\\ \\textrm{on the top $x$ - $\\theta$ plane}\n\t\\end{array} & = &\n\t(\\tau_{r \\theta} + \\frac{\\partial \\tau_{r \\theta}}{\\partial r}dr)(w + \\frac{\\partial w}{\\partial r}dr)\n\tdx(R_o + dr) \\theta\n\t\\\\ & & \\\\\n\t\\begin{array}{c}\n\t\t\\textrm{Work due to shear acting} \\\\ \\textrm{on the front $r$ - $\\theta$ plane}\n\t\\end{array} & = &\n\t-(\\tau_{x \\theta}w)dr(R_o \\theta)\n\t\\\\ & & \\\\\n\t\\begin{array}{c}\n\t\t\\textrm{Work due to shear acting} \\\\ \\textrm{on the back $r$ - $\\theta$ plane}\n\t\\end{array} & = &\n\t(\\tau_{x \\theta} + \\frac{\\partial \\tau_{x \\theta}}{\\partial x}dx)(w + \\frac{\\partial w}{\\partial x}dx)\n\tdr (R_o \\theta)\n       \\end{array}\n\\end{displaymath}\n\n\tSumming the above terms yields,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\Delta \\dot{W}_\\theta & = &\n\t\t\\begin{array}{c}\n\t\t(-\\tau_{r \\theta}w dx  + \\tau_{r \\theta}w dx + w\\frac{\\partial \\tau_{r \\theta}}{\\partial r}dxdr\n\t\t+\\tau_{r \\theta}\\frac{\\partial w}{\\partial r}dxdr + \\frac{\\partial \\tau_{r \\theta}}{\\partial r}\n\t\t\\frac{\\partial w}{\\partial r}dxdr^2 \\\\\n\t\t-\\tau_{x \\theta}w dr + \\tau_{x \\theta} w dr + w\\frac{\\partial \\tau_{x \\theta}}{\\partial x}dxdr\n\t\t+\\tau_{x \\theta}\\frac{\\partial w}{\\partial x}dxdr + \\frac{\\partial \\tau_{x \\theta}}{\\partial x}\n\t\t\\frac{\\partial w}{\\partial x}dx^2dr)(R_o \\theta) \\\\\n\t\t+(\\tau_{r \\theta}w + w\\frac{\\partial \\tau_{r \\theta}}{\\partial r}dr\n\t\t+\\tau_{r \\theta}\\frac{\\partial w}{\\partial r}dr + \\frac{\\partial \\tau_{r \\theta}}{\\partial r}\n\t\t\\frac{\\partial w}{\\partial r}dr^2)(dxdr\\theta)\n\t\t\\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\tUsing the result expressed in Eq. 
\\ref{eqn:theta}, combining derivatives and neglecting terms with triple \nproducts of differentials (and higher) yields\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\Delta \\dot{W}_\\theta & = & \\Big\\{\n\t\t\\frac{\\partial \\tau_{x \\theta}w}{\\partial x} + \\frac{\\partial \\tau_{r \\theta}w}{\\partial r}\n\t\t+ \\frac{1}{r}(\\tau_{r \\theta}w)\\Big\\}dxdr\n\t\\end{array}\n\\end{displaymath}\n\n\tIt is noted again that the remaining shears acting parallel to the left and right $x$ - $r$ planes\nproduce no contribution to the work terms as they act perpendicular to the circumferential velocity (see \nEq. \\ref{eqn:workdotproduct}).\n\n\tTherefore including the possibility of a non-zero value for the circumferential velocity in the \nenergy equation by combining $\\Delta \\dot{W}_\\theta$ with Eq. \\ref{eqn:energy2} yields,\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t\t\\begin{array}{c}\n\t\t\\Big\\{\\frac{\\partial}{\\partial t}(\\rho E) +\\frac{\\partial}{\\partial x}(\\rho E u + pu) \\\\\n\t\t+ \\frac{\\partial}{\\partial r}(\\rho E v + pv) + \\frac{1}{r}(\\rho E v + pv)\\Big\\}\n\t\t\\end{array} & = &\n\t\t\\begin{array}{c}\n\t\t\t\\frac{\\partial}{\\partial x}\\Big\\{\n\t\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial x}h_k) \n\t\t\t- q_{c_x} +  \\tau_{xx}u + \\tau_{xr}v + \\tau_{x\\theta}w\\Big\\} \\\\\n\t\t\t\\frac{\\partial}{\\partial r}\\Big\\{\n\t\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k)\n\t\t\t-  q_{c_r} + \\tau_{rx}u + \\tau_{rr}v  + \\tau_{r\\theta}w\\Big\\} \\\\\n\t\t\t\\frac{1}{r}\\Big\\{\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) - q_{c_r} + \\tau_{rx}u \n\t\t\t+ \\tau_{rr}v + \\tau_{r\\theta}w\\Big\\}\n\t\t\\end{array}  \n\t\\end{array}\n\\label{eqn:wenergy}\n\\end{equation}\n\n\tOf course, if $w$ is assumed to be zero, then the Eq. \\ref{eqn:wenergy} collapses as expected back\nto Eq. \\ref{eqn:energy2}.\n\n%--------------------------------------------------------------------------------------------------------------\n\\subsection{Species Conservation}\n\n\tStarting with the laminar species conservation equation as previously derived and shown in \nEq. \\ref{eqn:speciesfinal} and rewritten here for convenience,\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}(\\rho_k) + \\frac{\\partial}{\\partial x}(\\rho_k u) + \\frac{\\partial}{\\partial r}(\\rho_k v)\n\t+ \\frac {1}{r}(\\rho_k v) = \\frac{\\partial}{\\partial x}(\\nu_k \\frac{\\partial c_k}{\\partial x}) + \\frac{\\partial}\n\t{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r}) + \\frac {1}{r}(\\nu_k \\frac{\\partial c_k}{\\partial r})\n\\label{eqn:speciesfinal2}\n\\end{equation}\n\n\twe will make use of the relation expressed in Eq. \\ref{eqn:conc} to phrase the above equation entirely in \nterms of species concentrations, $c_k$, and then take the Reynolds average of the resulting expression.  
However,\nas noted earlier, although a Reynolds averag\\emph{ing} process is being used, the actual flow properties (concentration\nand velocities) being averaged are Favre averag\\emph{ed}.\n\n\\begin{displaymath}\n\t\\overline{\n\t\\frac{\\partial}{\\partial t}(\\rho c_k) + \\frac{\\partial}{\\partial x}(\\rho c_k u) + \\frac{\\partial}{\\partial r}(\\rho c_k v)\n\t+ \\frac {1}{r}(\\rho c_k v)} = \\overline{\\frac{\\partial}{\\partial x}(\\nu_k \\frac{\\partial c_k}{\\partial x}) + \n\t\\frac{\\partial}{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r}) + \\frac {1}{r}(\\nu_k \\frac{\\partial c_k}{\\partial r})}\n\\end{displaymath}\n\n\tAt this point we will need to make use of another property of Reynolds averaging which states that the\nReynolds average of the derivative of a quantity is equal to the derivative of the Reynolds average of that quantity,\n\n\\begin{equation}\n\t\\overline{\\frac{\\partial \\phi}{\\partial x}} = \\frac{\\partial \\overline{\\phi}}{\\partial x}\n\\label{eqn:reyderiv}\n\\end{equation} \n\n\tUsing this result and that expressed in Eq. \\ref{eqn:sums} we can consider each term in the modified species\nconcentration equation individually.  Therefore, starting with the time dependent term,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial t}(\\rho c_k)} = \\frac{\\partial}{\\partial t}\\overline{\\rho c_k} = \n\t\\frac{\\partial}{\\partial t}\\overline{\\rho}\\tilde c_k\n\\end{displaymath}\n\n\twhere Eq. \\ref{eqn:rhophi} was used to simplify the double product.  Considering next the spatial derivative\nterms (both $u$ and $v$ are treated in the same fashion, thus only one will be shown),\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\rho c_k v)} = \\frac{\\partial}{\\partial r}\\overline{\\rho c_k v} = \n\t\\frac{\\partial}{\\partial r}\\Big\\{(\\overline{\\rho}\\tilde c_k\\tilde{v}) + \\overline{\\rho c_k'' v''}\\Big\\}\n\\end{displaymath}\n\n\twhere Eq. \\ref{eqn:tripleprod} is used to simplify the triple product.  Now since the inviscid axisymmetric \nsource term is the same as the above with $\\frac {1}{r}$ replacing the derivative $\\frac{\\partial}{\\partial r}$,\none arrives at the same result given that the $\\frac{1}{r}$ term is constant with respect to the\nReynolds average at a given location and hence is unaffected by the averaging process.  Thus summarizing the \nresults of Reynolds averaging the inviscid terms,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}\\overline{\\rho}\\tilde c_k + \\frac{\\partial}{\\partial x}\\Big\\{\n\t(\\overline{\\rho}\\tilde c_k\\tilde{u}) + \\overline{\\rho c_k'' u''}\\Big\\} + \\frac{\\partial}{\\partial r}\\Big\\{\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v}) + \\overline{\\rho c_k'' v''}\\Big\\}\t+ \\frac{1}{r}\\Big\\{\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v}) + \\overline{\\rho c_k'' v''}\\Big\\} = \\textrm{Viscous Terms}\n\\end{displaymath}\n\n\tAt this point it is noted that the inviscid axisymmetric turbulent equation has three additional terms when\ncompared to its laminar counterpart.  These terms are often referred to as \\emph{Reynolds stresses} which exist only\nfor turbulent flow, given that if there exist no fluctuations in the flow properties the double primed quantities are\nzero (they are referred to as stresses as their units are $N/m^2$).  
Leaving these terms for the moment and considering \nthe remaining viscous terms in the species conservation equation (again considering only one spatial dimension as the \nother is treated exactly the same),\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r})} = \n\t\\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\nu_k} \\overline{\\frac{\\partial c_k}{\\partial r}} +\n\t\\overline{\\nu_k' (\\frac{\\partial c_k}{\\partial r})'}\\Big\\}\n\\end{displaymath}\n\n\twhere Eqs. \\ref{eqn:product} and \\ref{eqn:reyderiv} were applied.  If one further decomposes the concentration \ninto its Favre averaged components while making further use of Eqs. \\ref{eqn:reyderiv} and \\ref{eqn:sums} one can write,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r})} = \\frac{\\partial}{\\partial r}\\Big\\{\n\t\\overline{\\nu_k}\\frac{\\partial}{\\partial r}\n\t\\overline{(\\tilde c_k + c_k'')} + \\overline{\\nu_k' (\\frac{\\partial c_k}{\\partial r})'}\\Big\\} = \\frac{\\partial}\n\t{\\partial r}\\Bigg\\{\n\t\\overline{\\nu_k}\\frac{\\partial}{\\partial r} \\tilde c_k + \\Big\\{\\overline{\\nu_k}\\frac{\\partial}{\\partial r}\\overline{c_k''} \n\t+ \\overline{\\nu_k' (\\frac{\\partial c_k}{\\partial r})'}\\Big\\}\\Bigg\\}\n\\end{displaymath}\n\n\tAt this point we make a simplification regarding the terms in the inner curly braces.  It is noted that these\nterms represent the effect of the turbulent fluctuating motion on the laminar diffusion terms (as \nthese are the terms under consideration).  However, with the knowledge that the diffusion driven by the large scale \nmovement of turbulent eddies within the flow field will be orders of magnitude more significant than that driven\nby laminar diffusion (i.e. due solely to gradients in concentration) the terms in the inner curly braces will be neglected.\nThis does not mean one assumes these turbulent fluctuating terms to be zero, but rather that in comparison to the Reynolds\nstresses (which are responsible for the large scale movement of turbulent eddies) the effect of the overall laminar\ndiffusion terms (including those in the inner curly braces) will be neglible.  
Thus the turbulent viscous terms will be \napproximated by,\n\n\\begin{equation}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r}}) \\approx \n\t\\frac{\\partial}{\\partial r}(\\overline{\\nu_k}\\frac{\\partial \\tilde c_k}{\\partial r})\n\\label{eqn:difffluxapprox}\n\\end{equation}\n\n\tExtending this result to the remaining viscous terms (the $x$ direction derivative and the viscous axisymmetric \nsource term) while combining the inviscid results and rearranging yields,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}\\overline{\\rho}\\tilde c_k + \\frac{\\partial}{\\partial x}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{u})  + \\frac{\\partial}{\\partial r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v})\t+ \\frac{1}{r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v})  = \n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial x}\\Big\\{\n\t\t\\overline{\\nu_k}\\frac{\\partial \\tilde c_k}{\\partial x} - \\overline{\\rho c_k'' u''}\\Big\\} +\n\t\t\\frac{\\partial}{\\partial r} \\Big\\{\\overline{\\nu_k}\\frac{\\partial \\tilde c_k}{\\partial r} \n\t\t- \\overline{\\rho c_k'' v''}\\Big\\} \\\\\n\t\t+ \\frac{1}{r}\\Big\\{\\overline{\\nu_k}\\frac{\\partial \\tilde c_k}{\\partial r} \n\t\t- \\overline{\\rho c_k'' v''}\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\n\tThe only task remaining is to develop an expression for calculating the Reynolds stresses.  To do this one\ncan make use of the Boussinesq approximation, which is an eddy viscosity formulation (i.e., one proposes the existence\nof a turbulent eddy viscosity similar in function to the familiar laminar viscosity, with the important difference however\nthat this eddy viscosity is a property of both the state of the flow and the properties of the fluid itself, as opposed to \nthe laminar viscosity which is dependent solely on the state of the fluid) which relates the products of fluctuating \nproperties to the gradient in the average value of the property along a given direction,\n\n\\begin{equation}\n\t\\overline{\\rho \\phi'' u''} \\approx -\\mu_T \\frac{\\partial \\tilde{\\phi}}{\\partial x}\n\\label{eqn:boussinesq}\n\\end{equation}\n\n\twhere $\\mu_T$ is the turbulent eddy viscosity, the derivative is taken with respect to the \ndirection of the fluctuating velocity, and $\\phi ''$ is the fluctuating property of interest.  Thus using \nEq. \\ref{eqn:boussinesq} one can rewrite the Reynolds stresses as,\n\n\\begin{equation}\n\t\\overline{\\rho v'' c_k''} = -\\frac{\\mu_T}{Sc_T}\\frac{\\partial \\tilde c_k}{\\partial r}\n\\label{eqn:turbdiffapprox}\n\\end{equation}\t\n\n\twhere the Schmidt number is used to relate the turbulent eddy viscosity to the laminar diffusion\nco-efficient (see Eq. \\ref{eqn:nuk}).  \n\n\\begin{equation}\n\tSc_T = \\frac{\\mu_T}{\\rho D_{kl}} = \\frac{\\mu_T}{\\nu_k}\n\\label{eqn:schmidt}\n\\end{equation} \n\n\tAlso note that the above Reynolds stress approximation is exactly the same in the $x$ direction \nbut with $x$ replacing $r$.  
Therefore one can rewrite the turbulent axisymmetric species continuity equation as,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}\\overline{\\rho}\\tilde c_k + \\frac{\\partial}{\\partial x}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{u})  + \\frac{\\partial}{\\partial r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v})\t+ \\frac{1}{r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v}) = \n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial x}\\Big\\{(\\overline{\\nu_k} + \\frac{\\mu_T}{Sc_T})\n\t\t\\frac{\\partial \\tilde c_k}{\\partial x}\\Big\\} +\n\t\t\\frac{\\partial}{\\partial r} \\Big\\{(\\overline{\\nu_k} + \\frac{\\mu_T}{Sc_T})\n\t\t\\frac{\\partial \\tilde c_k}{\\partial r}\\Big\\} \\\\\n\t\t+ \\frac{1}{r}\\Big\\{(\\overline{\\nu_k} + \\frac{\\mu_T}{Sc_T})\\frac{\\partial \\tilde c_k}{\\partial r}\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\n\tIf one further defines a turbulent diffusion co-efficient as,\n\n\\begin{equation}\n\t\\nu_k^* = \\overline{\\nu_k} + \\frac{\\mu_T}{Sc_T}\n\\label{eqn:nukstar} \n\\end{equation}\n\n\tone can write the final version of the turbulent equation as,\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}\\overline{\\rho}\\tilde c_k + \\frac{\\partial}{\\partial x}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{u})  + \\frac{\\partial}{\\partial r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v})\t+ \\frac{1}{r}\n\t(\\overline{\\rho}\\tilde c_k\\tilde{v}) = \\frac{\\partial}{\\partial x}(\n\t\\nu_k^*\\frac{\\partial \\tilde c_k}{\\partial x}) +\n\t\\frac{\\partial}{\\partial r} (\\nu_k^*\\frac{\\partial \\tilde c_k}{\\partial r})\n\t+ \\frac{1}{r}(\\nu_k^*\\frac{\\partial \\tilde c_k}{\\partial r})\n\\label{eqn:turbspeciesfinal}\n\\end{equation}\n\n\tIn this form, one can immediately see the benefit of using the Boussinesq approximation as \nwhen compared with Eq. \\ref{eqn:speciesfinal2} (with the substitution made for $\\rho_k$),\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\rho c_k) + \\frac{\\partial}{\\partial x}(\\rho c_k u) + \\frac{\\partial}{\\partial r}(\\rho c_k v)\n\t+ \\frac {1}{r}(\\rho c_k v) = \\frac{\\partial}{\\partial x}(\\nu_k \\frac{\\partial c_k}{\\partial x}) + \\frac{\\partial}\n\t{\\partial r}(\\nu_k \\frac{\\partial c_k}{\\partial r}) + \\frac {1}{r}(\\nu_k \\frac{\\partial c_k}{\\partial r})\n\\end{displaymath}\n\n\tone can see that both the laminar and turbulent equations have exactly the same form.  Thus from the viewpoint\nof their solution, the only thing that needs to be modified is the definition of the diffusion co-efficient, using either\nEq. \\ref{eqn:nukstar} or Eq. \\ref{eqn:nuk} (although it is also noted that in the case of turbulent flow one must remember\nthat the equation is solving for the Favre averaged value of species concentration).  \n\n\tAs a final consideration, since the global continuity equation is directly related to the species concentration\nequation, it is prudent to develop its turbulent counterpart here as well.  Starting with the laminar axisymmetric \nequation (see Eq. 
\\ref{eqn:globalcont} and taking the Reynolds average\n\t\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u) + \n\t\\frac{\\partial}{\\partial r}(\\rho v) + \\frac{1}{r}(\\rho v)} = 0 \n\\end{displaymath}\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}) + \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde{u})\n\t + \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde{v}) + \\frac{1}{r}(\\overline{\\rho}\\tilde{v}) = 0\n\\label{eqn:turbglobalcont}\n\\end{equation}\n\n\twhere Eqs. \\ref{eqn:sums}, \\ref{eqn:rhophi}, and \\ref{eqn:reyderiv} are used to simplify the above expression.  \nIt is clearly seen that there is no difference between the laminar and turbulent global continuity equations except for \nthe quantities being solved for, as the form of the two equations is identical.\n\n%----------------------------------------------------------------------------------------------------------------------------\n\\subsection{$x$ Momentum Equation}\n\n\tThe previously derived laminar $x$ momentum equation without substitution of the stress terms is repeated here\nin a slightly rearranged form so that we may derive its turbulent counterpart.\n\n\\begin{equation}\n\\overline{\\frac{\\partial}{\\partial t}(\\rho u) + \\frac{\\partial}{\\partial x}(\\rho u^2 + p) +\n\\frac{\\partial}{\\partial r}(\\rho uv) + \\frac{1}{r}(\\rho u v)} = \\overline{ \n\\frac{\\partial}{\\partial x}(\\tau_{xx}) + \\frac{\\partial}{\\partial r}(\\tau_{rx}) + \\frac{1}{r}(\\tau_{rx})}\n\\label{eqn:xmomshear2}\n\\end{equation}\n\n\tUsing Eq. \\ref{eqn:sums} to break the turbulent equation into individual components and\nstarting first with the time dependent term, using Eqs. \\ref{eqn:reyderiv} and \\ref{eqn:rhophi} \none can write,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial t}(\\rho u)} = \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{u})\n\\end{displaymath}\n\n\tConsidering next the inviscid $x$ derivative term,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}(\\rho u^2 + p)} = \\frac{\\partial}{\\partial x}(\\overline{\\rho u u}) + \n\t\\frac{\\partial}{\\partial x}(\\overline{p}) = \\frac{\\partial}{\\partial x}\\Big\\{\\overline{\\rho}\\tilde{u}\\tilde{u} +\n\t\\overline{\\rho u'' u''} + \\overline{p}\\Big\\}\n\\end{displaymath}\n\n\twhere Eq. \\ref{eqn:tripleprod} was used to evaluate $\\overline{\\rho u u}$.  
Using similar logic for \nthe inviscid $r$ derivative term (and hence the inviscid axisymmetric source term as well),\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\rho uv)} = \\frac{\\partial}{\\partial r}(\\overline{\\rho u v}) \n\t= \\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\rho}\\tilde{u}\\tilde{v} +\n\t\\overline{\\rho u'' v''}\\Big\\}\n\\end{displaymath}\n\n\\begin{displaymath}\n\t\\overline{\\frac{1}{r}(\\rho uv)} = \\frac{1}{r}(\\overline{\\rho u v}) \n\t= \\frac{1}{r}\\Big\\{\\overline{\\rho}\\tilde{u}\\tilde{v} +\n\t\\overline{\\rho u'' v''}\\Big\\}\n\\end{displaymath}\n\n\tCombining the inviscid turbulence results yields,\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{u}) + \\frac{\\partial}{\\partial x}\\Big\\{\n\t\\overline{\\rho}\\tilde{u}^2 + \\overline{p} + \\overline{\\rho u'' u''}\\Big\\} + \\frac{\\partial}{\\partial r}\\Big\\{\n\t\\overline{\\rho}\\tilde{u}\\tilde{v} + \\overline{\\rho u'' v''}\\Big\\} + \\frac{1}{r}\\Big\\{\n\t\\overline{\\rho}\\tilde{u}\\tilde{v} + \\overline{\\rho u'' v''}\\Big\\} = \\textrm{Viscous Terms}\n\\label{eqn:xmomturbinv}\n\\end{equation}\n\n\tMoving on to the viscous terms, we shall need to make use of the definitions of the various stress terms\ndefined earlier so that the proper variables can be decomposed into their Favre averaged components.  \nStarting with the definition of $\\tau _{xx}$ as shown in Eq. \\ref{eqn:tauxxlambda}  \nand substituting Stokes hypothesis (Eq. \\ref{eqn:stokes}) one can write,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}(\\tau_{xx})} = \\overline{\\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\mu \\frac{\\partial u}{\\partial x} - \\frac{2}{3}\\mu (\\nabla \\cdot \\vec{V})} \\Big\\}\n\\end{displaymath}\n\n\tUsing the by now familiar rules of Reynolds averaging (Eqs. \\ref{eqn:sums} and \\ref{eqn:reyderiv}) and\ndecomposing the above terms using Eq. \\ref{eqn:product} yields,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\mu \\frac{\\partial u}{\\partial x} - \\frac{2}{3}\\mu (\\nabla \\cdot \\vec{V})} \\Big\\} = \\frac{\\partial}{\\partial x}\\Big\\{\n\t2 \\overline{\\mu} \\frac{\\partial \\overline{u}}{\\partial x} + 2\\overline{\\mu' (\\frac{\\partial u}{\\partial x})'}\n\t- \\frac{2}{3}\\overline{\\mu} \\overline{(\\nabla \\cdot \\vec{V})} - \\frac{2}{3}\\overline{\\mu'(\\nabla \\cdot \\vec{V})'}\\Big\\}\n\\end{displaymath}\n\n\tAt this point one will make use of some foresight combined with the experience gained from deriving\nthe turbulent version of the species conservation equation.  Since in the species conservation equation each\nof the Reynolds stresses arising out of the inviscid flux terms was modeled and then combined with the viscous\nterm along the matching co-ordinate direction, we assume that this process will also hold true here.  Thus \nwe will neglect the Reynolds fluctuating components (') arising from the viscous terms for no other reason than that \nwe know they form only a part of the turbulent terms to be added to the $x$ momentum equation.  
This does not mean \nthat these terms are\nassumed to be zero, or even negligible (as was the case in the species conservation equation as the turbulent\ndiffusion term is known to be magnitudes larger than the laminar counterpart), but rather that these terms are \ncapable of being neglected without reducing the resulting equation back down to its laminar origin (as it should\nbe kept in mind that by taking the Reynolds average of Favre averaged quantities we hope to \\emph{add} terms to the \nlaminar equation to reflect the existence of turbulence).  However, it is noted that if indeed the Reynolds stresses\nare much larger than these terms we are neglecting then this process has a physical justification as well.  \nNeglecting these fluctuating terms leaves,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\mu \\frac{\\partial u}{\\partial x} - \\frac{2}{3}\\mu (\\nabla \\cdot \\vec{V})} \\Big\\} = \\frac{\\partial}{\\partial x}\\Big\\{\n\t2 \\overline{\\mu} \\frac{\\partial \\overline{u}}{\\partial x} + \n\t- \\frac{2}{3}\\overline{\\mu} \\overline{(\\nabla \\cdot \\vec{V})} \\Big\\}\n\\end{displaymath}\n\n\tExamining the second term in more detail using Eq. \\ref{eqn:delv},\n\n\\begin{displaymath}\n\t\\overline{(\\nabla \\cdot \\vec{V})} = \\overline{(\\frac{\\partial u}{\\partial x} +  \\frac{\\partial v}{\\partial r} \n\t+ \\frac{v}{r} )} = \\frac{\\partial}{\\partial x}\\overline{u} + \\frac{\\partial}{\\partial r}\\overline{v} +\n\t\\frac{1}{r}\\overline{v}\n\\end{displaymath}\n\t\n\twhere again the Reynolds averaging rules have been applied.  Decomposing the velocities into their Favre\naveraged quantities and their Favre fluctuating components yields,\n\n\\begin{displaymath}\n\t\\overline{(\\nabla \\cdot \\vec{V})} = \\frac{\\partial}{\\partial x}\\overline{(\\tilde{u}+u'')} + \n\t\\frac{\\partial}{\\partial r}\\overline{(\\tilde{v}+v'')} +\n\t\\frac{1}{r}\\overline{(\\tilde{v}+v'')} = (\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}\n\t{\\partial r} + \\frac{\\tilde{v}}{r}) + \\Big\\{\\frac{\\partial}{\\partial x}\\overline{u''} + \\frac{\\partial}{\\partial r}\n\t\\overline{v''} + \\frac{1}{r}\\overline{v''}\\Big\\}\n\\end{displaymath}\n\n\tAlthough it is acknowledged that none of the terms in the curly braces can be assumed to be zero (as shown in \nEq. \\ref{eqn:reyoffavfluc}) we will neglect these terms in order to simplify the Reynolds average of this\nterm.  
Thus with this simplification one can write,\n\n\\begin{equation}\n\t\\overline{(\\nabla \\cdot \\vec{V})} \\approx (\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}\n\t{\\partial r} + \\frac{\\tilde{v}}{r}) \n\\label{eqn:delvturb}\n\\end{equation}\n\n\tGoing back to the equation for $\\tau_{xx}$ and decomposing the $u$ velocity into a Favre average and\nFavre fluctuating component,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial x}\\Big\\{\n\t2 \\overline{\\mu} \\frac{\\partial \\overline{u}}{\\partial x} + \n\t- \\frac{2}{3}\\overline{\\mu} \\overline{(\\nabla \\cdot \\vec{V})} \\Big\\} = \\frac{\\partial}{\\partial x}\\Big\\{\n\t2 \\overline{\\mu} \\frac{\\partial}{\\partial x}\\overline{(\\tilde{u}+u'')} + \n\t\\frac{2}{3}\\overline{\\mu} \\overline{(\\nabla \\cdot \\vec{V})} \\Big\\} = \\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\overline{\\mu}\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\\overline{\\mu} \\overline{(\\nabla \\cdot \\vec{V})}\n\t+2\\overline{\\mu}\\frac{\\partial}{\\partial x}\\overline{u''}\\Big\\}\n\\end{displaymath}\n\n\tAs was done for the $(\\nabla \\cdot \\vec{V})$ term we will neglect the last term above despite the fact that\nit has been proved that the Reynolds average of a Favre fluctuating component is not zero (Eq. \\ref{eqn:reyoffavfluc}).\nTherefore using the result expressed above and in Eq. \\ref{eqn:delvturb}, $\\tau_{xx}$ can be approximated by\n\n\\begin{equation}\n\t\\overline{\\frac{\\partial}{\\partial x}(\\tau_{xx})} \\approx \\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\overline{\\mu}\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\\overline{\\mu} (\\frac{\\partial \\tilde{u}}\n\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r}) \n\t\\Big\\}\n\\label{eqn:tauxxturb}\n\\end{equation}\n\n\tUsing the definition of $\\tau_{rx}$ (Eq. \\ref{eqn:taurx}) and taking the Reynolds average (using Eqs. \\ref{eqn:product}\nand \\ref{eqn:reyderiv} yields),\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\tau_{rx})} = \\frac{\\partial}{\\partial r}\\Big\\{\n\t\\overline{\\mu(\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial r})}\\Big\\} = \\frac{\\partial}{\\partial r}\\Big\\{\n\t\\overline{\\mu}(\\overline{\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial r}}) + \n\t\\overline{\\mu'(\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial r})'}\\Big\\}\n\\end{displaymath}\n\n\tHere again we note that there will be a Reynolds stress to be combined with this term and hence \nwe will neglect the fluctuating components.  Decomposing the\nvelocities into their Favre averages and Favre fluctuating components while applying Eqs. 
\\ref{eqn:sums} and \n\\ref{eqn:reyderiv} yields,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\mu(\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial r})}\\Big\\} \n\t= \\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\mu}[\\frac{\\partial}{\\partial x}\\overline{(\\tilde{v} + v'')} + \n\t\\frac{\\partial}{\\partial r}\\overline{(\\tilde{u} + u'')}]\\Big\\} =  \\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\mu}\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r}) + \\overline{\\mu}\n\t(\\frac{\\partial}{\\partial x}\\overline{v''} + \\frac{\\partial}{\\partial r}\\overline{u''})\\Big\\}\n\\end{displaymath}\n\n\tAs was done for $\\tau_{xx}$ and $(\\nabla \\cdot \\vec{V})$, we will again neglect these fluctuating components \ndespite the results of Eq. \\ref{eqn:reyoffavfluc} thus yielding,\n\n\\begin{equation}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\tau_{rx})} \\approx \\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\mu}\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\t\n\\label{eqn:taurxturb}\n\\end{equation}\n\n\tGiven the similarity between the viscous axisymmetric source term and the term derived above, the\nsame logic can be applied to yield,\n\n\\begin{displaymath}\n\t\\overline{\\frac{1}{r}(\\tau_{rx})} \\approx \\frac{1}{r}\\Big\\{\\overline{\\mu}\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\t\n\\end{displaymath}\n\n\tThus combining this last result with those obtained in Eqs. \\ref{eqn:xmomturbinv}, \\ref{eqn:tauxxturb},\nand \\ref{eqn:taurxturb} and rearranging yields,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{u}) + \\frac{\\partial}{\\partial x}(\n\t\\overline{\\rho}\\tilde{u}^2 + \\overline{p}) + \\frac{\\partial}{\\partial r}(\n\t\\overline{\\rho}\\tilde{u}\\tilde{v}) + \\frac{1}{r}(\\overline{\\rho}\\tilde{u}\\tilde{v}) \n\t& = & \n\t\t\\begin{array}{c} \n\t\\frac{\\partial}{\\partial x}\\Big\\{\n\t2\\overline{\\mu}\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\\overline{\\mu} (\\frac{\\partial \\tilde{u}}\n\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r}) - \\overline{(\\rho u'' u'')} \\Big\\}\n\t\\\\ + \\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\mu} \n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r}) - \\overline{(\\rho u'' v'')}\\Big\\}\\\\\n\t+ \\frac{1}{r}\\Big\\{\\overline{\\mu}\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r}) - \\overline{(\\rho u'' v'')}\\Big\\}\t\n\t\t\\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\tUsing the Boussinesq approach tailored to the fact that we would like each of the Reynolds stresses to be\ncombined with their viscous counterparts, we will model each of the Reynolds stresses as the product of a turbulent\neddy viscosity and term composed of the same derivatives as found in the matching viscous stress terms.  
Thus for\n$(\\rho u'' u'')$ we will write,\n\n\\begin{equation}\n\t-\\overline{(\\rho u'' u'')} \\approx \\mu_T\\Big\\{2\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k\n\\label{eqn:rhouu}\n\\end{equation}\n\n\tIt should be noted that a new term, $\\rho k $, has been introduced here which is to be understood as the kinetic\nenergy of turbulence.  This is a three dimensional quantity composed of the fluctuating components of velocity and\ncan be expressed as (using Favre fluctuating components),\n\n\\begin{equation}\n\t\\overline{\\rho} k = \\frac {1}{2}(\\overline{\\rho u'' u''} + \\overline{\\rho v'' v''} + \\overline{\\rho w'' w''}) =\n\t\\frac{1}{2}\\sum_{i=0}^{i=2}\\overline{\\rho v_i'' v_i''}\n\\label{eqn:rhok}\n\\end{equation} \n\n\tOne must be careful not to confuse $w''$ with $\\tilde{w}$ in the sense that even if one makes the assumption\nthat the average circumferential velocity is zero (i.e. $\\tilde{w}=0$), this does not necessarily require that the \nfluctuating value of this velocity be zero as well.  Hence for the axisymmetric equations in which there is no\n$\\theta$ momentum equation used (as is the case for $\\tilde{w}=0$) one must still understand the definition of \n$\\overline{\\rho} k$ to include $w''$.  This point will become clearer after all the Reynolds stresses are modeled, \nwhere for the approximations to be self consistent the kinetic energy of turbulence must include the fluctuating \ncircumferential velocity.\n\n\tAs for the remaining $(\\rho u'' v'')$Reynolds stress we will write,\n\n\\begin{equation}\n\t-\\overline{(\\rho u'' v'')} \\approx \\mu_T(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\n\\label{eqn:rhouv}\n\\end{equation}\n\n\tThus substituting the approximations made in Eqs. 
\\ref{eqn:rhouu} and \\ref{eqn:rhouv} into the $x$ momentum \nequation yields,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{u}) + \\frac{\\partial}{\\partial x}(\n\t\\overline{\\rho}\\tilde{u}^2 + \\overline{p}) + \\frac{\\partial}{\\partial r}(\n\t\\overline{\\rho}\\tilde{u}\\tilde{v}) + \\frac{1}{r}(\\overline{\\rho}\\tilde{u}\\tilde{v})\n\t& = & \n\t\t\\begin{array}{c} \n\t\\frac{\\partial}{\\partial x}\\Bigg\\{(\\overline{\\mu}+\\mu_T)\n\t\\Big\\{2\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}(\\frac{\\partial \\tilde{u}}\n\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\} -\\frac{2}{3}\\overline{\\rho}k \\Bigg\\}\n\t\\\\ \n\t+ \\frac{\\partial}{\\partial r}\\Big\\{(\\overline{\\mu} + \\mu_T) \n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\\\\\n\t+ \\frac{1}{r}\\Big\\{(\\overline{\\mu} + \\mu_T)\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\t\n\t\t\\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\tDefining a total turbulent co-efficient of viscosity as,\n\n\\begin{equation}\n\t\\mu^* = \\overline{\\mu} + \\mu_T\n\\label{eqn:mustar}\n\\end{equation}\n\n\tone can write the turbulent axisymmetric $x$ momentum equation as,\n\n\\begin{equation}\t\n\t\\frac{\\partial}{\\partial t}(\\rho \\tilde{u}) + \\frac{\\partial}{\\partial x}(\\rho \\tilde{u}^2 + \n\t[\\overline{p}+\\frac{2}{3}\\overline{\\rho}k]) +\n\t\\frac{\\partial}{\\partial r}(\\rho \\tilde{u}\\tilde{v}) + \\frac{1}{r}(\\rho \\tilde{u} \\tilde{v}) =  \n\t\\frac{\\partial}{\\partial x}(\\tilde{\\tau}_{xx}) + \\frac{\\partial}{\\partial r}(\\tilde{\\tau}_{rx}) \n\t+ \\frac{1}{r}(\\tilde{\\tau}_{rx})\n\\label{eqn:xmomfinalturb}\n\\end{equation}\n\n\twhere the definitions of the stresses remain unchanged but with $\\mu$ replaced by $\\mu^*$ and the\nvelocities replaced by their Favre averages.  
Note that as with the species equation, the turbulent version
of the axisymmetric $x$ momentum equation has the exact same form as its laminar counterpart

\begin{displaymath}
	\frac{\partial}{\partial t}(\rho u) + \frac{\partial}{\partial x}(\rho u^2 + p) +
	\frac{\partial}{\partial r}(\rho uv) + \frac{1}{r}(\rho u v) =
	\frac{\partial}{\partial x}(\tau_{xx}) + \frac{\partial}{\partial r}(\tau_{rx}) + \frac{1}{r}(\tau_{rx})
\end{displaymath}

but for the use of Favre averaged quantities, a turbulent co-efficient of viscosity, and an additional turbulent
pressure component.

%-------------------------------------------------------------------------------------------------------------------------
\subsection{$r$ Momentum Equation}

	Repeating the process used to derive the turbulent $x$ momentum equation, we will start with the laminar
version of the $r$ momentum equation and take the Reynolds average of each term individually,

\begin{equation}
	\overline{\frac{\partial}{\partial t}(\rho v) + \frac{\partial}{\partial x}(\rho uv) +
	\frac{\partial}{\partial r}(\rho v^2 + p) + \frac{1}{r}(\rho v^2)} =
	\overline{\frac{\partial}{\partial x}(\tau_{xr})
	+ \frac{\partial}{\partial r}(\tau_{rr}) + \frac{1}{r}(\tau_{rr} - \tau_{\theta \theta})}
\label{eqn:rmomshear2}
\end{equation}

	The time dependent term ($\rho v$) and the $x$ derivative term ($\rho u v$) have already been
considered in deriving the $x$ momentum equation (there the derivative of $\rho u v$ was taken with respect to
$r$ rather than $x$, and the time dependent term contained $\rho u$ rather than $\rho v$), and so only the
results will be repeated here,

\begin{displaymath}
	\overline{\frac{\partial}{\partial t}(\rho v)} = \frac{\partial}{\partial t}(\overline{\rho}\tilde{v})
\end{displaymath}

\begin{displaymath}
	\overline{\frac{\partial}{\partial x}(\rho uv)} = \frac{\partial}{\partial x}\Big\{\overline{\rho}\tilde{u}\tilde{v} +
	\overline{\rho u'' v''}\Big\}
\end{displaymath}

	Likewise, the $r$ derivative term has the same form as the $x$ derivative term of the $x$ momentum equation
but for the use of $v$ in place of $u$, thus the same logic can be applied to obtain the result,

\begin{displaymath}
	\overline{\frac{\partial}{\partial r}(\rho v^2 + p)} = \frac{\partial}{\partial r}\Big\{
	\overline{\rho}\tilde{v}\tilde{v} + \overline{\rho v'' v''} + \overline{p}\Big\}
\end{displaymath}

	Noting that the inviscid axisymmetric source term is identical to the $r$ derivative term minus the
pressure, one can extend the above result to yield,

\begin{displaymath}
	\overline{\frac{1}{r}(\rho v^2)} = \frac{1}{r}\Big\{
	\overline{\rho}\tilde{v}\tilde{v} + \overline{\rho v'' v''}\Big\}
\end{displaymath}

	Thus combining the inviscid results,

\begin{equation}
	\frac{\partial}{\partial t}(\overline{\rho}\tilde{v}) + \frac{\partial}{\partial x}\Big\{
	\overline{\rho}\tilde{u}\tilde{v} + \overline{\rho u'' v''}\Big\} + \frac{\partial}{\partial r}\Big\{
	\overline{\rho}\tilde{v}^2 + \overline{p} + \overline{\rho v'' v''}\Big\} + \frac{1}{r}\Big\{
	\overline{\rho}\tilde{v}^2 + \overline{\rho v'' v''}\Big\} = \textrm{Viscous Terms}
\label{eqn:rmomturbinv}
\end{equation}
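	Before moving on, it is worth recording the exact identity that underlies each of these inviscid results
(cf. Eq. \ref{eqn:tripleprod}): for any pair of flow quantities $\phi$ and $\psi$,

\begin{displaymath}
	\overline{\rho \phi \psi} = \overline{\rho}\tilde{\phi}\tilde{\psi} + \overline{\rho \phi'' \psi''}
\end{displaymath}

	which follows from decomposing $\phi$ and $\psi$ into Favre averaged and Favre fluctuating parts and noting
that $\overline{\rho \phi''} = 0$ (Eq. \ref{eqn:rhophidp}).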
that since $\\tau_{xr} = \\tau_{rx}$ we can use the \nresults from the $x$ momentum equation on $\\tau_{rx}$ (noting that here the derivative is taken with \nrespect to $x$ and not $r$).  Therefore,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}(\\tau_{xr})} \\approx \\frac{\\partial}{\\partial x}\\Big\\{\\overline{\\mu}\n\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\t\n\\end{displaymath}\n\n\tFrom the definition of $\\tau_{rr}$ shown in Eq. \\ref{eqn:taurrlambda},\nwe note that it has exactly the same form as $\\tau_{xx}$ and hence we will extend those results yielding,\n\n\\begin{equation}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\tau_{rr})} = \\overline{\\frac{\\partial}{\\partial r}\\Big\\{\n\t2\\mu \\frac{\\partial v}{\\partial r} - \\frac{2}{3}\\mu (\\nabla \\cdot \\vec{V}) \\Big\\}} \n\t \\approx \\frac{\\partial}{\\partial r}\\Big\\{\n\t2\\overline{\\mu}\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}\\overline{\\mu} (\\frac{\\partial \\tilde{u}}\n\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r}) \n\t\\Big\\}\n\\label{eqn:taurrturb}\n\\end{equation}\n\n\tThe last term to be considered is the viscous axisymmetric source term, \n\n\\begin{displaymath}\n\t\\overline{\\frac{1}{r}(\\tau_{rr} - \\tau_{\\theta \\theta})} = \\frac{1}{r}(\\overline{\\tau_{rr}} - \n\t\\overline{\\tau_{\\theta \\theta}})\n\\end{displaymath}\n\n\twhere Eq. \\ref{eqn:sums} was used to obtain the two separate terms.  The first term of the viscous \naxisymmetric source term is nearly \nidentical to the $r$ derivative term, but for $\\frac{1}{r}$ replacing the derivative, hence the results of\nEq. \\ref{eqn:taurrturb} will be extended here.  However, $\\tau_{\\theta \\theta}$ still requires separate\nconsideration.  Using the definition for this stress in Eq. \\ref{eqn:tauthetathetalambda} along with Stokes'\nhypothesis \\ref{eqn:stokes},\n\n\\begin{displaymath}\n\t-\\frac{1}{r}\\overline{(\\tau_{\\theta \\theta})} = -\\frac{1}{r}\\Big\\{\\overline{2 \\mu \\frac{v}{r} -\\frac{2}{3}\\mu\n\t(\\nabla \\cdot \\vec{V})}\\Big\\}  \n\\end{displaymath}\n\n\twhich when expanded with the results of Eqs. \\ref{eqn:sums} and \\ref{eqn:product} yields,\n\n\\begin{displaymath}\n\t-\\frac{1}{r}\\overline{(\\tau_{\\theta \\theta})} = -\\frac{1}{r}\\Big\\{2\\overline{\\mu}\\frac{\\overline{v}}{r} \n\t -\\frac{2}{3}\\overline{\\mu}(\\overline{\\nabla \\cdot \\vec{V}}) + [2\\overline{\\mu'(\\frac{v}{r})'} -\n\t\\frac{2}{3}\\overline{\\mu'(\\nabla \\cdot \\vec{V})'}]    \\Big\\} \n\\end{displaymath}\n\n\tAt this point we are going to break with the normal chain of events, in that we are not going to neglect\nthe fluctuating components (those in the square brackets) of this particular stress.  Since the reasoning behind \nneglecting similar components\nfor the other stresses ($\\tau_{xx}, \\tau_{rr}, \\textrm{and} \\tau_{rx}$) stemmed from the fact that one would still be left \nwith additional turbulent terms once the Reynolds stresses arising out of the inviscid terms was taken into account, \napplying the same reasoning here one is no longer permitted to neglect these fluctuating terms as there is no \ncorresponding Reynolds stress to be combined with $\\tau_{\\theta \\theta}$.  
	The reason no corresponding Reynolds stress appears for $\tau_{\theta \theta}$ is that
the axisymmetric assumption removed the inviscid term that would have, when treated for turbulence, given rise
to $\overline{(\rho w'' w'')}$.  However, since both the Reynolds stress $(\rho w'' w'')$ and the terms in the
square brackets above represent fluctuations in the circumferential stress component, these two quantities
describe the same physical phenomenon and hence can be treated as the same term, i.e.,

\begin{equation}
	-\overline{(\rho w'' w'')} = 2\overline{\mu'(\frac{v}{r})'} -
	\frac{2}{3}\overline{\mu'(\nabla \cdot \vec{V})'}
\label{eqn:rhoww}
\end{equation}


	Going back to the overall expression for $\tau_{\theta\theta}$ and decomposing $v$ into its Favre averaged
components yields,

\begin{displaymath}
	-\frac{1}{r}\overline{(\tau_{\theta \theta})} = -\frac{1}{r}\Big\{2\overline{\mu}\frac{1}{r}\overline{(\tilde{v}+v'')}
	 -\frac{2}{3}\overline{\mu}(\overline{\nabla \cdot \vec{V}}) -\overline{(\rho w'' w'')} \Big\} = -\frac{1}{r}\Big\{
	 2\overline{\mu}\frac{\tilde{v}}{r} -\frac{2}{3}\overline{\mu}(\overline{\nabla \cdot \vec{V}})
	-\overline{(\rho w'' w'')} +2\overline{\mu}\frac{1}{r}\overline{(v'')}\Big\}
\end{displaymath}

	Given that we have included a Reynolds stress in the turbulent version of $\tau_{\theta \theta}$ we will now
continue neglecting terms as was done for the previous stresses.  Thus in order to simplify the resulting equations,
we will neglect the last term above (despite Eq. \ref{eqn:reyoffavfluc}) resulting in,

\begin{equation}
	-\frac{1}{r}\overline{(\tau_{\theta \theta})} = -\frac{1}{r}\Big\{
	 2\overline{\mu}\frac{\tilde{v}}{r} -\frac{2}{3}\overline{\mu}(\overline{\nabla \cdot \vec{V}})
	-\overline{(\rho w'' w'')}\Big\}
\label{eqn:tauthetathetaturb}
\end{equation}

	Therefore combining Eq. 
\ref{eqn:rmomturbinv} with the preceding results for the stresses and rearranging
yields,

\begin{displaymath}
	\frac{\partial}{\partial t}(\overline{\rho}\tilde{v}) + \frac{\partial}{\partial x}
	(\overline{\rho}\tilde{u}\tilde{v}) + \frac{\partial}{\partial r}(\overline{\rho}\tilde{v}^2 + \overline{p})
	+ \frac{1}{r}(\overline{\rho}\tilde{v}^2) =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{\overline{\mu}
		(\frac{\partial \tilde{v}}{\partial x} + \frac{\partial \tilde{u}}{\partial r}) - \overline{\rho u'' v''}\Big\}
		\\ + \frac{\partial}{\partial r}\Big\{
		2\overline{\mu}\frac{\partial \tilde{v}}{\partial r} - \frac{2}{3}\overline{\mu} (\frac{\partial \tilde{u}}
		{\partial x} + \frac{\partial \tilde{v}}{\partial r} + \frac{\tilde{v}}{r})
		- \overline{\rho v'' v''}\Big\}  \\
		+ \frac{1}{r}\Big\{
		2\overline{\mu}\frac{\partial \tilde{v}}{\partial r} - \frac{2}{3}\overline{\mu} (\frac{\partial \tilde{u}}
		{\partial x} + \frac{\partial \tilde{v}}{\partial r} + \frac{\tilde{v}}{r}) - \overline{\rho v'' v''}
		\\
	 	-[ 2\overline{\mu}\frac{\tilde{v}}{r} - \frac{2}{3}\overline{\mu}(\overline{\nabla \cdot \vec{V}})
		 - \overline{\rho w'' w''}]
		\Big\}
	\end{array}
\end{displaymath}

	The last step is to model each of the Reynolds stresses appearing in the above equation.  Starting with
$\rho u'' v''$ we note that this has already been approximated using Eq. \ref{eqn:rhouv}.  Next considering
$\rho v'' v''$, one can use the approximation for $\rho u'' u''$ as expressed by Eq. \ref{eqn:rhouu} as a guide
to approximating this stress as,

\begin{equation}
	-\overline{(\rho v'' v'')} \approx \mu_T\Big\{2\frac{\partial \tilde{v}}{\partial r} - \frac{2}{3}
	(\frac{\partial \tilde{u}}{\partial x} + \frac{\partial \tilde{v}}{\partial r} + \frac{\tilde{v}}{r})\Big\}
	-\frac{2}{3}\overline{\rho}k
\label{eqn:rhovv}
\end{equation}

	As for the remaining terms, those from the viscous axisymmetric source term, we note that $\overline{(\nabla
\cdot \vec{V})}$ has already been approximated by Eq. \ref{eqn:delvturb} while $\rho w'' w''$ already contains a
guide to its approximation given its definition in Eq. \ref{eqn:rhoww}.  Since this last Reynolds stress was
found to be equal to an already existing fluctuating stress, it follows then to approximate this term by simply
substituting the laminar viscosity for a turbulent one and using Favre averaged quantities,

\begin{equation}
	-\overline{(\rho w'' w'')} \approx \mu_T \Big\{2\frac{\tilde{v}}{r} -
	\frac{2}{3}\overline{(\nabla \cdot \vec{V})} \Big\} - \frac{2}{3}\overline{\rho}k
\label{eqn:rhowwapprox}
\end{equation}

	In all three cases, the one odd addition to the Reynolds stresses containing squares of fluctuating
velocities has been the term $\frac{2}{3}\overline{\rho}k$.  This term is added so that the approximations made
for each of these stresses are consistent with the definition of the kinetic energy of turbulence expressed in
Eq. \ref{eqn:rhok}.  
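	All three of these normal stress approximations (Eqs. \ref{eqn:rhouu}, \ref{eqn:rhovv}, and
\ref{eqn:rhowwapprox}) may be viewed as instances of a single Boussinesq form which, written in Cartesian index
notation for orientation only (the strain components take a slightly different form in axisymmetric
co-ordinates), reads

\begin{displaymath}
	-\overline{\rho v_i'' v_j''} \approx \mu_T\Big(\frac{\partial \tilde{v}_i}{\partial x_j}
	+ \frac{\partial \tilde{v}_j}{\partial x_i}
	- \frac{2}{3}\delta_{ij}\overline{(\nabla \cdot \vec{V})}\Big) - \frac{2}{3}\delta_{ij}\overline{\rho}k
\end{displaymath}

	where the $\delta_{ij}$ structure of the added terms is what guarantees the consistency check carried out
next.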
Since $\\overline{\\rho}k$ is simply the sum of the three previously mentioned Reynolds stresses, then \nfor the approximations to be consistent with this definition their addition should yield the kinetic energy of\nturbulence,\n\n\\begin{displaymath}\n\t\\overline{(-\\rho u'' u'')} + \\overline{(-\\rho v'' v'')} + \\overline{(-\\rho w'' w'')} =\n\t\t\\begin{array}{c}\n\t\\mu_T\\Big\\{2\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k  \\\\ +\n\t\\mu_T\\Big\\{2\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k \\\\ + \\mu_T \\Big\\{2\\frac{v}{r} -\n\t\\frac{2}{3}(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \n\t\\frac{\\tilde{v}}{r}) \\Big\\} - \\frac{2}{3}\\overline{\\rho}k\n\t\t\\end{array}\n\\end{displaymath}\n\n\twhere Eqs. \\ref{eqn:rhouu}, \\ref{eqn:rhovv}, and \\ref{eqn:rhowwapprox} are used.  Rearranging this slightly \none can obtain,\n\n\\begin{displaymath}\n\t= \\mu_T \\Big\\{2(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\n\t-\\frac{6}{3}(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \n\t\\frac{\\tilde{v}}{r})\\Big\\} -\\frac{6}{3}\\overline{\\rho}k\n\\end{displaymath}\n\n\tTherefore,\n\n\\begin{displaymath}\n\t\\overline{(\\rho u'' u'')} + \\overline{(\\rho v'' v'')} + \\overline{(\\rho w'' w'')} =\n\t2\\overline{\\rho}k\t\n\\end{displaymath}\n\n\twhich is the same result as expressed in Eq. \\ref{eqn:rhok}!  Thus it can be seen that without the \n$\\frac{2}{3}\\overline{\\rho}k$ term added to each of the appropriate Reynolds stress approximations, one would be\nproviding two separate definitions for the turbulent kinetic energy.  It is also worth noting that the definition of \n$\\overline{\\rho}k$ must include the fluctuating\ncircumferential velocity component, even if the average of this value is zero, if one wishes to use the \nabove made approximations.\n\tGoing back to the $r$ momentum equation and substituting the approximations for the Reynolds stresses\n(Eqs. 
\\ref{eqn:rhouv}, \\ref{eqn:rhovv}, and \\ref{eqn:rhowwapprox}) yields,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{v}) + \\frac{\\partial}{\\partial x}\n\t(\\overline{\\rho}\\tilde{u}\\tilde{v}) + \\frac{\\partial}{\\partial r} (\\overline{\\rho}\\tilde{v}^2 \n\t+ \\overline{p})+ \\frac{1}{r}(\\overline{\\rho}\\tilde{v}^2) =\n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial x}\\Big\\{(\\overline{\\mu} + \\mu_T)\n\t\t(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\} \n\t\t\\\\ + \\frac{\\partial}{\\partial r}\\Big\\{\n\t\t(\\overline{\\mu} + \\mu_T)\\Big[2\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}(\\frac{\\partial \\tilde{u}}\n\t\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big] -\\frac{2}{3}\\overline{\\rho}k\n\t\t\\Big\\}  \\\\\n\t\t+ \\frac{1}{r}\\Big\\{\n\t\t(\\overline{\\mu} + \\mu_T)\\Big[2\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}(\\frac{\\partial \\tilde{u}}\n\t\t{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big] -\\frac{2}{3}\\overline{\\rho}k \n\t\t\\\\ -  \n\t \t(\\overline{\\mu}+\\mu_T)\\Big[2\\frac{\\tilde{v}}{r} -\\frac{2}{3}(\\overline{\\nabla \\cdot \\vec{V}}) \n\t\t\\Big] +\\frac{2}{3}\\overline{\\rho}k\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\n\tUsing the previously defined $\\mu^*$ (Eq. \\ref{eqn:mustar}) and the definitions of the viscous stresses\n(with $\\mu^*$ replacing $\\mu$ and Favre averaged quantities replacing laminar ones) one can simplify the above equation \nand rewrite it in a more convenient form,\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\tilde{v}) + \\frac{\\partial}{\\partial x}\n\t(\\overline{\\rho}\\tilde{u}\\tilde{v}) + \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde{v}^2 + \n\t[\\overline{p} + \\frac{2}{3}\\overline{\\rho}k])\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde{v}^2) =\n\t\\frac{\\partial}{\\partial x}(\\tilde\\tau_{xr}) + \\frac{\\partial}{\\partial r}(\\tilde\\tau_{rr}) + \n\t\\frac{1}{r}(\\tilde\\tau_{rr} - \\tilde\\tau_{\\theta \\theta})\n\\label{eqn:rmomfinalturb}\n\\end{equation}\n\n\tCompared to the laminar $r$ momentum equation,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\rho v) + \\frac{\\partial}{\\partial x}(\\rho uv) +\n\t\\frac{\\partial}{\\partial r}(\\rho v^2 + p) + \\frac{1}{r}(\\rho v^2) = \n\t\\frac{\\partial}{\\partial x}(\\tau_{xr})\n\t+ \\frac{\\partial}{\\partial r}(\\tau_{rr}) + \\frac{1}{r}(\\tau_{rr} - \\tau_{\\theta \\theta})\n\\end{displaymath}\n\n\tthis form highlights the similarity between the turbulent and laminar versions of the $r$ momentum \nequation, with the only differences being the use of a turbulent co-efficient of viscosity, Favre averaged quantities,\nand an additional turbulent pressure contribution (as is the case for the $x$ momentum equation as well).\n\n%-----------------------------------------------------------------------------------------------------------------------\n\\subsection{Energy Equation}\n\n\tTo derive the turbulent axisymmetric energy equation, we shall make use of the equation derived\nwithout making use of the added assumption that circumferential velocity be zero (Eq. \\ref{eqn:wenergy}.  \nThis will allow for the substitution of turbulent kinetic energy as defined by Eq. \\ref{eqn:rhok} into the \nresulting energy equation.  
Thus repeating the laminar axisymmetric energy equation here for convenience,

\begin{displaymath}
	\begin{array}{ccc}
		\begin{array}{c}
		\frac{\partial}{\partial t}(\rho E) +\frac{\partial}{\partial x}(u[\rho E + p]) \\
		+ \frac{\partial}{\partial r}(v[\rho E + p]) + \frac{1}{r}(v[\rho E + p])
		\end{array} & = &
		\begin{array}{c}
			\frac{\partial}{\partial x}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial x}h_k)
			- q_{c_x} +   \tau_{xx}u + \tau_{xr}v + \tau_{x\theta}w\Big\} \\
			+ \frac{\partial}{\partial r}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k)
			-  q_{c_r} + \tau_{rx}u +  \tau_{rr}v  + \tau_{r\theta}w\Big\} \\
			+\frac{1}{r}\Big\{\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k) - q_{c_r} + \tau_{rx}u
			+ \tau_{rr}v + \tau_{r\theta}w\Big\}
		\end{array}
	\end{array}
\end{displaymath}

	Now since the total specific energy, $E$, can be expressed in terms of the sum of the specific internal energy
of the individual species ($c_k e_k = c_k h_k -p/\rho$) plus the specific kinetic energy (where Eq. \ref{eqn:totvel}
is used to express the velocity as the sum of individual co-ordinate velocities),

\begin{equation}
	E = \sum_{k}c_k h_k - \frac{p}{\rho} + \frac{1}{2}\sum_{i=0}^{i=2}v_i^2
\label{eqn:spenergy2}
\end{equation}

	one can use this relation to rewrite the energy equation,

\begin{displaymath}
	\begin{array}{ccc}
		\begin{array}{c}
		\frac{\partial}{\partial t}(\rho[\sum_{k}c_k h_k -\frac{p}{\rho} + \frac{1}{2}\sum_{i=0}^{i=2}v_i^2]) \\ \\
		+\frac{\partial}{\partial x}(\rho u[\sum_{k}c_k h_k + \frac{1}{2}\sum_{i=0}^{i=2}v_i^2]) \\ \\
		+ \frac{\partial}{\partial r}(\rho v[\sum_{k}c_k h_k + \frac{1}{2}\sum_{i=0}^{i=2}v_i^2]) \\ \\
		+ \frac{1}{r}(\rho v[\sum_{k}c_k h_k + \frac{1}{2}\sum_{i=0}^{i=2}v_i^2])
		\end{array} & = &
		\begin{array}{c}
			\frac{\partial}{\partial x}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial x}h_k)
			- q_{c_x} +   \tau_{xx}u + \tau_{xr}v + \tau_{x\theta}w\Big\} \\
			+ \frac{\partial}{\partial r}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k)
			-  q_{c_r} + \tau_{rx}u +  \tau_{rr}v  + \tau_{r\theta}w\Big\} \\
			+\frac{1}{r}\Big\{\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k) - q_{c_r} + \tau_{rx}u
			+ \tau_{rr}v + \tau_{r\theta}w\Big\}
		\end{array}
	\end{array}
\end{displaymath}

	Taking the Reynolds average of this entire equation, and using the relation expressed in Eq. \ref{eqn:sums},
one can consider the averaging of each of the above terms individually (as has been done for the previous
turbulent equations).  Starting with the inviscid time dependent term,

\begin{displaymath}
	\overline{\frac{\partial}{\partial t}(\rho[\sum_{k} c_k h_k -\frac{p}{\rho} + \frac{1}{2}
	\sum_{i=0}^{i=2}v_i^2])} =
	\overline{\frac{\partial}{\partial t}(\sum_{k}\rho c_k h_k)} - \overline{\frac{\partial p}{\partial t}}
	+ \overline{\frac{\partial}{\partial t}(\frac{1}{2} \sum_{i=0}^{i=2} \rho v_i v_i)}
\end{displaymath}

	which when combined with the results of Eqs. 
\ref{eqn:reyderiv} and \ref{eqn:tripleprod} yields,

\begin{displaymath}
	\frac{\partial}{\partial t}\overline{(\sum_{k}\rho c_k h_k)} - \frac{\partial \overline{p}}{\partial t}
	+ \frac{\partial}{\partial t}\overline{(\frac{1}{2} \sum_{i=0}^{i=2} \rho v_i v_i)} =
	\frac{\partial}{\partial t}\Big\{\sum_k \overline{\rho}\tilde c_k \tilde h_k + \overline{\sum_k \rho c_k'' h_k''}
	+\frac{1}{2}\sum_{i=0}^{i=2}\overline{\rho}\tilde{v_i}\tilde{v_i} + \frac{1}{2}
	\overline{\sum_{i=0}^{i=2}\rho v_i'' v_i''} -\overline{p}\Big\}
\end{displaymath}

	If we now define the enthalpic energy of turbulence, $g$, as

\begin{equation}
	\overline{\rho}g = \sum_{k}\overline{\rho c_k'' h_k''}
\label{eqn:rhog}
\end{equation}

	while using the relation expressed earlier for the specific internal energy of a given species as
a guide to defining the Favre averaged specific internal energy of the gas as a whole (i.e. summed over
the individual species),

\begin{equation}
	\tilde e = \sum_{k} \tilde c_k \tilde h_k - \frac{\overline{p}}{\overline{\rho}}
\label{eqn:spintenergyturb}
\end{equation}

	we can rewrite the above result as,

\begin{displaymath}
	\frac{\partial}{\partial t}\Big\{\sum_k \overline{\rho}\tilde c_k \tilde h_k -\overline{p}
	+ \overline{\sum_k \rho c_k'' h_k''}
	 + \frac{1}{2}\overline{\sum_{i=0}^{i=2}\rho v_i'' v_i''}
	+\frac{1}{2}\sum_{i=0}^{i=2}\overline{\rho}\tilde{v_i}\tilde{v_i}  \Big\} =
	\frac{\partial}{\partial t}\Big\{\overline{\rho}(\tilde e + g + k
	+\frac{1}{2}\sum_{i=0}^{i=2}\tilde{v_i}\tilde{v_i}) \Big\}
\end{displaymath}

	where the relation expressed in Eq. \ref{eqn:rhok} for the kinetic energy of turbulence
has also been used.  Considering next the $x$ derivative term and applying Eqs. \ref{eqn:reyderiv} and
\ref{eqn:quadprod}, one can write,

\begin{displaymath}
	\frac{\partial}{\partial x}\overline{(\sum_{k}\rho u c_k h_k)} + \frac{\partial}{\partial x}
	\overline{(\frac{1}{2}\sum_{i=0}^{i=2}\rho u v_i v_i)} =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{\sum_k \overline{\rho}\tilde{u}\tilde c_k \tilde h_k
		+\overline{\sum_k \rho u'' c_k'' h_k''} + \sum_k \tilde u \overline{\rho c_k'' h_k''}
		+\sum_k \tilde c_k \overline{\rho u'' h_k''} + \sum_k \tilde h_k \overline{\rho u'' c_k''} \\
		+ \frac{1}{2}\sum_{i=0}^{i=2}\overline{\rho}\tilde u \tilde v_i \tilde v_i + \frac{1}{2}
		\overline{\sum_{i=0}^{i=2} \rho u'' v_i'' v_i''} + \frac{1}{2}\sum_{i=0}^{i=2}\tilde u
		\overline{\rho v_i'' v_i''} + \sum_{i=0}^{i=2}\tilde v_i \overline{\rho u'' v_i''}\Big\}
	\end{array}
\end{displaymath}

	Rearranging a bit and using the expressions in Eqs. 
\\ref{eqn:rhog}, \\ref{eqn:spintenergyturb}, and \\ref{eqn:rhok} \nthe above result can be rewritten as,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial x}\\overline{(\\sum_{k}\\rho u c_k h_k)} + \\frac{\\partial}{\\partial x}\n\t\\overline{(\\frac{1}{2}\\sum_{i=0}^{i=2}\\rho u v_i v_i)} = \n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial x}\\Big\\{\\overline{\\rho}\\tilde u(\\tilde e + \\frac{\\overline{p}}{\\overline{\\rho}} \n\t\t + g + k + \\frac{1}{2}\\sum_{i=0}^{i=2}\\tilde v_i \\tilde v_i)\n\t\t+\\sum_k \\tilde c_k \\overline{\\rho u'' h_k''} + \\sum_k \\tilde h_k \\overline{\\rho u'' c_k''} \\\\\n\t\t +\\overline{\\sum_k \\rho u'' c_k'' h_k''} + \\frac{1}{2}\n\t\t\\overline{\\sum_{i=0}^{i=2} \\rho u'' v_i'' v_i''}  + \\sum_{i=0}^{i=2}\\tilde v_i \\overline{\\rho u'' v_i''}\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\t\n\tThe $r$ and $x$ derivative terms are treated in the exact same manner given\nthat the only differences between the two are that $u$ is replaced by $v$ and that the derivative is taken along a\ndifferent direction, both of which make no difference in the averaging process.  Thus,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial r}\\overline{(\\sum_{k}\\rho v c_k h_k)} + \\frac{\\partial}{\\partial r}\n\t\\overline{(\\frac{1}{2}\\sum_{i=0}^{i=2}\\rho v v_i v_i)} = \n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial r}\\Big\\{\\sum_k \\overline{\\rho}\\tilde{v}\\tilde c_k \\tilde h_k \n\t\t+\\overline{\\sum_k \\rho v'' c_k'' h_k''} + \\sum_k \\tilde v \\overline{\\rho c_k'' h_k''}\n\t\t+\\sum_k \\tilde c_k \\overline{\\rho v'' h_k''} + \\sum_k \\tilde h_k \\overline{\\rho v'' c_k''} \\\\\n\t\t+ \\frac{1}{2}\\sum_{i=0}^{i=2}\\overline{\\rho}\\tilde v \\tilde v_i \\tilde v_i + \\frac{1}{2}\n\t\t\\overline{\\sum_{i=0}^{i=2} \\rho v'' v_i'' v_i''} + \\frac{1}{2}\\sum_{i=0}^{i=2}\\tilde v \n\t\t\\overline{\\rho v_i'' v_i''} + \\sum_{i=0}^{i=2}\\tilde v_i \\overline{\\rho v'' v_i''}\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\n\twhich can be simplified similarly to the $x$ derivative terms,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial r}\\overline{(\\sum_{k}\\rho v c_k h_k)} + \\frac{\\partial}{\\partial r}\n\t\\overline{(\\frac{1}{2}\\sum_{i=0}^{i=2}\\rho v v_i v_i)} = \n\t\\begin{array}{c}\n\t\t\\frac{\\partial}{\\partial r}\\Big\\{\\overline{\\rho}\\tilde v(\\tilde e + \\frac{\\overline{p}}{\\overline{\\rho}} \n\t\t + g + k + \\frac{1}{2}\\sum_{i=0}^{i=2}\\tilde v_i \\tilde v_i)\n\t\t+\\sum_k \\tilde c_k \\overline{\\rho v'' h_k''} + \\sum_k \\tilde h_k \\overline{\\rho v'' c_k''} \\\\\n\t\t +\\overline{\\sum_k \\rho v'' c_k'' h_k''} + \\frac{1}{2}\n\t\t\\overline{\\sum_{i=0}^{i=2} \\rho v'' v_i'' v_i''}  + \\sum_{i=0}^{i=2}\\tilde v_i \\overline{\\rho v'' v_i''}\\Big\\}\n\t\\end{array}\n\\end{displaymath}\n\n\tThis result also applies to the inviscid axisymmetric source term, with $\\frac{\\partial}{\\partial r}$ replaced\nby $\\frac{1}{r}$.  
Therefore, summarizing the inviscid results so far while rearranging the additional turbulent terms
by placing them with various viscous terms one has,

\begin{equation}
	\begin{array}{ccc}
	   \begin{array}{c}
		\frac{\partial}{\partial t}\Big\{\overline{\rho}(\tilde e + g + k
		+ \frac{1}{2}\sum_{i=0}^{i=2}\tilde{v_i}\tilde{v_i}) \Big\}  \\
		+ \frac{\partial}{\partial x}\Big\{\overline{\rho}\tilde u(\tilde e + \frac{\overline{p}}{\overline{\rho}}
		+ g + k + \frac{1}{2}\sum_{i=0}^{i=2}\tilde v_i \tilde v_i)\Big\}  \\
		+ \frac{\partial}{\partial r}\Big\{\overline{\rho}\tilde v(\tilde e + \frac{\overline{p}}{\overline{\rho}}
		+ g + k + \frac{1}{2}\sum_{i=0}^{i=2}\tilde v_i \tilde v_i)\Big\}  \\
		+ \frac{1}{r}\Big\{\overline{\rho}\tilde v(\tilde e + \frac{\overline{p}}{\overline{\rho}}
		+ g + k + \frac{1}{2}\sum_{i=0}^{i=2}\tilde v_i \tilde v_i)\Big\}
	   \end{array}
	   & = &
	   \begin{array}{c}
		\frac{\partial}{\partial x}\Big\{
		(\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial x})} - \sum_k \tilde h_k \overline{\rho u'' c_k''})
		\\ +(- \overline{q_{c_x}} - \sum_k \tilde c_k \overline{\rho u'' h_k''}) \\ + (\overline{\tau_{xx}u} +
		\overline{\tau_{xr}v} + \overline{\tau_{x\theta}w}
		- \sum_{i=0}^{i=2}\tilde v_i \overline{\rho u'' v_i''}) \\ - (\overline{\sum_k \rho u'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho u'' v_i'' v_i''}) \Big\}
		\\ \\ +
		\frac{\partial}{\partial r}\Big\{
		(\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial r})} - \sum_k \tilde h_k \overline{\rho v'' c_k''})
		\\ +(- \overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''}) \\ + (\overline{\tau_{rx}u} +
		\overline{\tau_{rr}v} + \overline{\tau_{r\theta}w}
		- \sum_{i=0}^{i=2}\tilde v_i \overline{\rho v'' v_i''}) \\ - (\overline{\sum_k \rho v'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho v'' v_i'' v_i''}) \Big\}
		\\ \\+
		\frac{1}{r}\Big\{
		(\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial r})} - \sum_k \tilde h_k \overline{\rho v'' c_k''})
		\\ +(- \overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''}) \\ + (\overline{\tau_{rx}u} +
		\overline{\tau_{rr}v} + \overline{\tau_{r\theta}w}
		- \sum_{i=0}^{i=2}\tilde v_i \overline{\rho v'' v_i''}) \\ - (\overline{\sum_k \rho v'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho v'' v_i'' v_i''}) \Big\}
	   \end{array}
	\end{array}
\label{eqn:invenergyturb}
\end{equation}

	where it is noted that the results of Eqs. \ref{eqn:sums} and \ref{eqn:reyderiv} have already been applied.
As well, if we define a new turbulent total specific energy, $\tilde E$, as

\begin{equation}
	\tilde E = \tilde e + g + k + \frac{1}{2}\sum_{i=0}^{i=2}\tilde v_i^2
\label{eqn:spenergyturb}
\end{equation}

	which contains the extra turbulent enthalpic and kinetic energies, as well as Favre averaged values of the
specific internal and kinetic energies when compared to the laminar specific total energy $E$, the inviscid terms
can be greatly simplified.

	Although Eq. 
\ref{eqn:invenergyturb} looks complicated, upon closer inspection one notes that the
three large viscous terms all have similar forms, and in fact are composed of only four distinct parts, each
of which relates to either diffusion, heat conduction, viscosity, or pure turbulence (this last grouping
being composed of terms solely arising from turbulence, while the other three contain laminar and turbulent
components).  Each of these parts will be considered individually as follows.

\subsubsection{Diffusion Terms}

	Considering the diffusive terms on the viscous side of the energy equation, given the similarity between the
$x$ and $r$ derivative terms, as well as the similarity between the $r$ derivative and viscous axisymmetric source term,
only the $r$ derivative terms will be examined in detail as these results can be easily extended to the other two
sets of terms.  From the derivation of the laminar species conservation equation, the diffusive flux ${j_k}_x$
can be defined using Eqs. \ref{eqn:fick} and \ref{eqn:nuk} as

\begin{equation}
	{j_k}_x = -\nu_k \frac{\partial c_k}{\partial x}
\label{eqn:diffflux}
\end{equation}

	which also holds true in the radial direction, with $x$ replaced by $r$ in Eq. \ref{eqn:diffflux}.  Thus
the diffusive terms can be rewritten,

\begin{displaymath}
	\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial r})} - \sum_k \tilde h_k \overline{\rho v'' c_k''} =
	-\overline{\sum_{k}h_k {j_k}_r} - \sum_k \tilde h_k \overline{\rho v'' c_k''} =
	-\overline{\sum_{k}(\tilde h_k + h_k''){j_k}_r} - \sum_k \tilde h_k \overline{\rho v'' c_k''}
\end{displaymath}

	where it is noted that by decomposing enthalpy into a Favre averaged value and a Favre fluctuating
component, we are assuming a constant specific heat at constant pressure, $c_p$, as enthalpy can be
expressed as the product of temperature and this specific heat.  At the very least, if one assumes a thermally perfect
gas (where $c_p$ can vary with temperature), then one must realize that we are neglecting the fluctuating components
of this variable caused by the fluctuations in temperature.

	Comparing Eqs. \ref{eqn:diffflux} and \ref{eqn:difffluxapprox}, we note that the diffusive flux has already been
approximated while deriving the turbulent axisymmetric species conservation equation and hence one can write,

\begin{displaymath}
	-\overline{\sum_{k}(\tilde h_k + h_k''){j_k}_r} - \sum_k \tilde h_k \overline{\rho v'' c_k''} =
	\sum_{k}\Big\{\tilde h_k (\overline{\nu_k} \frac{\partial \tilde c_k}{\partial r} - \overline{\rho v'' c_k''})
	+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial r}}\Big\}
\end{displaymath}

	where Eq. \ref{eqn:sums} is also used.  For the turbulent term arising out of the inviscid diffusive
term ($\overline{\rho v'' c_k''}$), recalling the species conservation equation one realizes that this quantity
has already been approximated by Eq. \ref{eqn:turbdiffapprox}, thus

\begin{displaymath}
	=\sum_{k}\Big\{\tilde h_k (\overline{\nu_k} \frac{\partial \tilde c_k}{\partial r} + \frac{\mu_T}{Sc_T}
	\frac{\partial \tilde c_k}{\partial r})
	+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial r}}\Big\}
\end{displaymath}
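	Here the laminar and turbulent diffusive contributions combine naturally into a single effective
co-efficient, namely the one defined by Eq. \ref{eqn:nukstar} (restated here for convenience),

\begin{displaymath}
	\nu_k^* = \overline{\nu_k} + \frac{\mu_T}{Sc_T}
\end{displaymath}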
	Using this $\nu_k^*$ yields,

\begin{equation}
	\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial r})} - \sum_k \tilde h_k \overline{\rho v'' c_k''} =
	\sum_{k}\Big\{\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r}
	+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial r}}\Big\}
\label{eqn:rdiffturb}
\end{equation}

	This result can be used directly in the axisymmetric viscous source term, while in the $x$ direction
this result can be extended as,

\begin{equation}
	\overline{\sum_k (\nu_k h_k\frac{\partial c_k}{\partial x})} - \sum_k \tilde h_k \overline{\rho u'' c_k''} =
	\sum_{k}\Big\{\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial x}
	+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial x}}\Big\}
\label{eqn:xdiffturb}
\end{equation}

\subsubsection{Heat Flux Terms}

	Evaluating the heat transfer terms, we will make use of Eq. \ref{eqn:rheatflux} to replace ${q_c}_r$
while also applying the results of Eq. \ref{eqn:product},

\begin{displaymath}
	-\overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''} = \overline{\kappa \frac{\partial T}{\partial r}}
	- \sum_k \tilde c_k \overline{\rho v'' h_k''} = \overline{\kappa}\overline{\frac{\partial T}{\partial r}}
	+\overline{\kappa ' (\frac{\partial T}{\partial r})'} - \sum_k \tilde c_k \overline{\rho v'' h_k''}
\end{displaymath}

	Using Eq. \ref{eqn:reyderiv} and decomposing the temperature into a Favre averaged and Favre fluctuating component,

\begin{displaymath}
	= \overline{\kappa}\frac{\partial}{\partial r}
	\overline{(\tilde T + T'')} + \overline{\kappa '(\frac{\partial T}{\partial r})'}
	- \sum_k \tilde c_k \overline{\rho v'' h_k''} = \overline{\kappa}\frac{\partial \tilde T}{\partial r}
	+ [\overline{\kappa}\frac{\partial}{\partial r}\overline{(T'')} + \overline{\kappa '(\frac{\partial T}{\partial r})'}]
	- \sum_k \tilde c_k \overline{\rho v'' h_k''}
\end{displaymath}

	As has been done previously to aid in the simplification of the resulting equations, the terms in the
square brackets will be neglected (although this does not assume that these terms are zero), as the
presence of the inviscid turbulent term assures the resulting equation will not collapse back to its laminar form.
Thus one is left with,

\begin{displaymath}
	-\overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''} =
	\overline{\kappa}\frac{\partial \tilde T}{\partial r} - \sum_k \tilde c_k \overline{\rho v'' h_k''}
\end{displaymath}

	Now, if one assumes a calorically perfect gas, enthalpy can be expressed as the product of a constant
specific heat at constant pressure and temperature ($h=c_p T$), and one can write

\begin{displaymath}
	\tilde h_k + h_k'' = {c_p}_k (\tilde T + T'')
\end{displaymath}

	using Favre averaged temperature.  
From the above relation one can define

\begin{equation}
	\tilde h_k = {c_p}_k \tilde T
\label{eqn:enthturb}
\end{equation}

\begin{equation}
	h_k'' = {c_p}_k T''
\label{eqn:flucenthturb}
\end{equation}

	Further, if one defines a turbulent specific heat for the entire flow as,

\begin{equation}
	\overline{c_p} = \sum_{k} \tilde c_k {c_p}_k
\label{eqn:cpturb}
\end{equation}

	then one can write for the heat flux terms,

\begin{displaymath}
	-\overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''} =
	\overline{\kappa}\frac{\partial \tilde T}{\partial r} - \sum_k \tilde c_k {c_p}_k \overline{\rho v'' T''} =
	\overline{\kappa}\frac{\partial \tilde T}{\partial r} - \overline{c_p} \overline{\rho v'' T''}
\end{displaymath}

	Using the Boussinesq approximation the last term in the above expression can be approximated as,

\begin{equation}
	\overline{\rho v'' T''} = -\frac{\mu_T}{Pr_T}\frac{\partial \tilde T}{\partial r}
\label{eqn:heatfluxapprox}
\end{equation}

	using the turbulent Prandtl number to relate the turbulent eddy viscosity to a turbulent (eddy)
conductivity, $\kappa_T$,

\begin{equation}
	Pr_T = \frac{\mu_T \overline{c_p}}{\kappa_T}
\label{eqn:prandtl}
\end{equation}

	Therefore the heat flux terms can be rewritten as,

\begin{displaymath}
	-\overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''} =
	(\overline{\kappa} + \frac{\mu_T \overline{c_p}}{Pr_T})\frac{\partial \tilde T}{\partial r}
\end{displaymath}

	and defining a total turbulent heat flux co-efficient as,

\begin{equation}
	\kappa^* = \overline{\kappa} +  \frac{\mu_T \overline{c_p}}{Pr_T}
\label{eqn:kappastar}
\end{equation}

	then one can write the final result,

\begin{equation}
	-\overline{q_{c_r}} - \sum_k \tilde c_k \overline{\rho v'' h_k''} =
	\kappa^*\frac{\partial \tilde T}{\partial r}
\label{eqn:rheatturb}
\end{equation}

	and extending this result to the $x$ direction,

\begin{equation}
	-\overline{q_{c_x}} - \sum_k \tilde c_k \overline{\rho u'' h_k''} =
	\kappa^*\frac{\partial \tilde T}{\partial x}
\label{eqn:xheatturb}
\end{equation}

\subsubsection{Viscosity Terms}

	Examining the viscous terms present in Eq. \ref{eqn:invenergyturb} we note that many of these
terms have been previously approximated while deriving the momentum equations, thus those results will
be used here as well.  Starting with the $x$ derivative terms there are three viscous stresses to be considered,

\begin{displaymath}
	\overline{u \tau_{xx}} = \overline{(\tilde u + u'') \tau_{xx}} = \tilde u \overline{\tau_{xx}} +
	\overline{u'' \tau_{xx}}
\end{displaymath}

\begin{displaymath}
	\overline{v \tau_{xr}} = \overline{(\tilde v + v'') \tau_{xr}} = \tilde v \overline{\tau_{xr}} +
	\overline{v'' \tau_{xr}}
\end{displaymath}

\begin{displaymath}
	\overline{w \tau_{x\theta}} = \overline{(\tilde w + w'') \tau_{x\theta}} = \tilde w \overline{\tau_{x\theta}} +
	\overline{w'' \tau_{x\theta}}
\end{displaymath}

	where the velocities have been decomposed using Favre averaging and the results of Eqs. \ref{eqn:sums} and
\ref{eqn:aveofave} have been applied.  
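	The remaining term of this group expands over its sum as (identifying $(v_0, v_1, v_2)$ with $(u, v, w)$
as in Eq. \ref{eqn:totvel}),

\begin{displaymath}
	\sum_{i=0}^{i=2}\tilde v_i \overline{\rho u'' v_i''} = \tilde u \overline{\rho u'' u''}
	+ \tilde v \overline{\rho u'' v''} + \tilde w \overline{\rho u'' w''}
\end{displaymath}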
	Combining this expansion with the above results yields,

\begin{displaymath}
	\tilde u (\overline{\tau_{xx}} - \overline{\rho u'' u''}) + \tilde v (\overline{\tau_{xr}}
	- \overline{\rho u'' v''}) + \tilde w(\overline{\tau_{x\theta}} - \overline{\rho u'' w''})
	+ (\overline{u''\tau_{xx}} + \overline{v''\tau_{xr}} + \overline{w''\tau_{x\theta}})
\end{displaymath}

	At this point we note that Eqs. \ref{eqn:tauxxturb} and \ref{eqn:taurxturb} can be used
to approximate $\overline{\tau_{xx}}$ and $\overline{\tau_{xr}} = \overline{\tau_{rx}}$ while
Eqs. \ref{eqn:rhouu} and \ref{eqn:rhouv} can be used to approximate the first two Reynolds stresses.
The remaining circumferential terms have not been previously treated and hence will be considered here.
Using the definition of $\tau_{x\theta}$ as given by Eq. \ref{eqn:tauxtheta},

\begin{displaymath}
	\overline{\tau_{x\theta}} = \overline{\mu\frac{\partial w}{\partial x}} =
	\overline{\mu}\frac{\partial \overline{w}}{\partial x} + \overline{\mu'(\frac{\partial w}{\partial x})'} =
	\overline{\mu}\frac{\partial}{\partial x}\overline{(\tilde w + w'')} + \overline{\mu'(\frac{\partial w}{\partial x})'} =
	\overline{\mu}\frac{\partial \tilde w}{\partial x} + [\overline{\mu}\frac{\partial \overline{w''}}{\partial x} +
	\overline{\mu'(\frac{\partial w}{\partial x})'}]
\end{displaymath}

	where Eqs. \ref{eqn:product} and \ref{eqn:reyderiv} have also been used while decomposing the
circumferential velocity.  Following the same logic as used for the other viscous stresses, since we know that there
is a Reynolds stress to be combined with this term, we will neglect the fluctuating components in the square brackets.
The corresponding Reynolds stress, $-\overline{\rho u'' w''}$, can be approximated using the Boussinesq approximation,

\begin{equation}
	-\overline{\rho u'' w''} \approx \mu_T  \frac{\partial \tilde w}{\partial x}
\label{eqn:rhouw}
\end{equation}

	Rewriting these viscous terms in light of all of the above yields,

\begin{displaymath}
	\begin{array}{c}
	\tilde u \Big\{(\overline \mu + \mu_T)[2\frac{\partial \tilde u}{\partial x} -\frac{2}{3}
	\overline{(\nabla \cdot \vec{V})}] -\frac{2}{3}\overline{\rho} k \Big\} +
	\tilde v \Big\{(\overline \mu + \mu_T)(\frac{\partial \tilde v}{\partial x} + \frac{\partial \tilde u}
	{\partial r}) \Big\}
	\\
	+ \tilde w \Big\{(\overline{\mu} + \mu_T)(\frac{\partial \tilde w}{\partial x})
	\Big\} + (\overline{u''\tau_{xx}} + \overline{v''\tau_{xr}} + \overline{w''\tau_{x\theta}})
	\end{array}
\end{displaymath}

	Using the definitions of the various stresses (noting the use of Favre averaged quantities) along with
Eq. 
\ref{eqn:mustar}, the above can be reduced to,

\begin{equation}
	(\overline{\tau_{xx}u} + \overline{\tau_{xr}v} + \overline{\tau_{x\theta}w}
	- \sum_{i=0}^{i=2}\tilde v_i \overline{\rho u'' v_i''}) = (\tilde \tau_{xx} \tilde u +
	\tilde \tau_{xr} \tilde v + \tilde \tau_{x\theta} \tilde w -\frac{2}{3}\overline{\rho}k\tilde u)
	 + (\overline{u''\tau_{xx}} + \overline{v''\tau_{xr}} + \overline{w''\tau_{x\theta}})
\label{eqn:enerxvisturb}
\end{equation}

	Moving on to the viscous $r$ derivative terms (and hence the viscous axisymmetric source terms)
there are again three stresses to be considered,

\begin{displaymath}
	\overline{u \tau_{rx}} = \overline{(\tilde u + u'') \tau_{rx}} = \tilde u \overline{\tau_{rx}} +
	\overline{u'' \tau_{rx}}
\end{displaymath}

\begin{displaymath}
	\overline{v \tau_{rr}} = \overline{(\tilde v + v'') \tau_{rr}} = \tilde v \overline{\tau_{rr}} +
	\overline{v'' \tau_{rr}}
\end{displaymath}

\begin{displaymath}
	\overline{w \tau_{r\theta}} = \overline{(\tilde w + w'') \tau_{r\theta}} = \tilde w \overline{\tau_{r\theta}} +
	\overline{w'' \tau_{r\theta}}
\end{displaymath}

	where the velocities have been decomposed using Favre averaging and the results of Eqs. \ref{eqn:sums} and
\ref{eqn:aveofave} have been applied.  Expanding again the remaining term in this group over its sum and combining with
the above yields,

\begin{displaymath}
	\tilde u (\overline{\tau_{rx}} - \overline{\rho v'' u''}) + \tilde v (\overline{\tau_{rr}}
	- \overline{\rho v'' v''}) + \tilde w(\overline{\tau_{r\theta}} - \overline{\rho v'' w''})
	+ (\overline{u''\tau_{rx}} + \overline{v''\tau_{rr}} + \overline{w''\tau_{r\theta}})
\end{displaymath}

	Here again we note that Eqs. \ref{eqn:taurrturb} and \ref{eqn:taurxturb} can be used
to approximate $\overline{\tau_{rr}}$ and $\overline{\tau_{xr}} = \overline{\tau_{rx}}$ respectively, while
Eqs. \ref{eqn:rhouv} and \ref{eqn:rhovv} can be used to approximate the first two Reynolds stresses.
The circumferential term $\tau_{r\theta}$ as given by Eq. \ref{eqn:taurtheta} can be treated as follows,

\begin{displaymath}
	\overline{\tau_{r\theta}} = \overline{\mu r\frac{\partial}{\partial r}\frac{w}{r}} = r\Big\{
	\overline \mu \frac{\partial}{\partial r}\frac{\overline{w}}{r} +
	\overline{\mu'(\frac{\partial}{\partial r}\frac{w}{r})'} \Big\} = r\Big\{
	\overline \mu \frac{\partial}{\partial r}\overline{(\frac{\tilde w}{r} + \frac{w''}{r})} +
	\overline{\mu'(\frac{\partial}{\partial r}\frac{w}{r})'} \Big\}
\end{displaymath}

	where $w$ has been decomposed into Favre averaged components and the results of
Eqs. \ref{eqn:product} and \ref{eqn:reyderiv} have also been applied.  
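	Before proceeding, it is convenient to note the purely algebraic identity,

\begin{displaymath}
	r\frac{\partial}{\partial r}\Big(\frac{w}{r}\Big) = \frac{\partial w}{\partial r} - \frac{w}{r}
\end{displaymath}

	which shows that $\tau_{r\theta}$ vanishes for solid body rotation ($w \propto r$), as a shear stress should.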
Following the same
logic as before, since we know that there is a Reynolds stress to be combined with this term, we will neglect the
fluctuating components leaving,

\begin{equation}
	\overline{\tau_{r\theta}} \approx r
	\overline \mu \frac{\partial}{\partial r}\frac{\tilde w}{r}
\label{eqn:taurthetaturb}
\end{equation}

	Approximating the remaining Reynolds stress using the Boussinesq approach and knowledge of the stress
this term is to be combined with,

\begin{equation}
	-\overline{\rho v'' w''} \approx r \mu_T \frac{\partial}{\partial r}\frac{\tilde w}{r}
\label{eqn:rhovw}
\end{equation}

	Substituting the various approximations yields,

\begin{displaymath}
	\begin{array}{c}
	\tilde u \Big\{(\overline \mu + \mu_T)(\frac{\partial \tilde v}{\partial x} + \frac{\partial \tilde u}
	{\partial r}) \Big\} + \tilde v \Big\{(\overline \mu + \mu_T)[2\frac{\partial \tilde v}{\partial r} -\frac{2}{3}
	\overline{(\nabla \cdot \vec{V})}] -\frac{2}{3}\overline{\rho} k \Big\}
	\\
	+ \tilde w \Big\{(\overline{\mu} + \mu_T)(r\frac{\partial}{\partial r}\frac{\tilde w}{r})
	\Big\} + (\overline{u''\tau_{rx}} + \overline{v''\tau_{rr}} + \overline{w''\tau_{r\theta}})
	\end{array}
\end{displaymath}

	Again, if Eq. \ref{eqn:mustar} is used in the definition of the various stresses along with Favre averaged
quantities, the above can be reduced to,

\begin{equation}
	(\overline{\tau_{rx}u} + \overline{\tau_{rr}v} + \overline{\tau_{r\theta}w}
	- \sum_{i=0}^{i=2}\tilde v_i \overline{\rho v'' v_i''}) = (\tilde \tau_{rx} \tilde u +
	\tilde \tau_{rr} \tilde v + \tilde \tau_{r\theta} \tilde w -\frac{2}{3}\overline{\rho}k\tilde v)
	 + (\overline{u''\tau_{rx}} + \overline{v''\tau_{rr}} + \overline{w''\tau_{r\theta}})
\label{eqn:enerrvisturb}
\end{equation}

	The only terms left to be considered are the pure turbulent ones.  However, before considering these terms it
will prove helpful to summarize the results obtained so far.  Combining the results of
Eqs. \ref{eqn:rdiffturb} and \ref{eqn:xdiffturb} (diffusion), Eqs. \ref{eqn:rheatturb} and \ref{eqn:xheatturb}
(heat conduction), and Eqs. \ref{eqn:enerxvisturb} and \ref{eqn:enerrvisturb} (viscosity) with Eqs. 
\ref{eqn:spenergyturb} (inviscid)
and \ref{eqn:invenergyturb} yields,

\begin{displaymath}
	\begin{array}{c}
		\frac{\partial}{\partial t}(\overline{\rho}\tilde E) +
		\frac{\partial}{\partial x}(\tilde u[\overline{\rho}\tilde E + \overline{p}]) \\ +
		\frac{\partial}{\partial r}(\tilde v[\overline{\rho}\tilde E + \overline{p}]) +
		\frac{1}{r}(\tilde v[\overline{\rho}\tilde E + \overline{p}])
	\end{array} =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{
		(\sum_{k}[\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial x}
		+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial x}}])
		\\ +(\kappa^*\frac{\partial \tilde T}{\partial x}) \\ + [(\tilde \tau_{xx} \tilde u +
		\tilde \tau_{xr} \tilde v + \tilde \tau_{x\theta} \tilde w -\frac{2}{3}\overline{\rho}k\tilde u)
	 	+ (\overline{u''\tau_{xx}} + \overline{v''\tau_{xr}} + \overline{w''\tau_{x\theta}})] \\
		- (\overline{\sum_k \rho u'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho u'' v_i'' v_i''}) \Big\}
		\\ \\ +
		\frac{\partial}{\partial r}\Big\{
		(\sum_{k}[\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r}
		+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial r}}])
		\\ +(\kappa^*\frac{\partial \tilde T}{\partial r}) \\ + [(\tilde \tau_{rx} \tilde u +
		\tilde \tau_{rr} \tilde v + \tilde \tau_{r\theta} \tilde w -\frac{2}{3}\overline{\rho}k\tilde v)
	 	+ (\overline{u''\tau_{rx}} + \overline{v''\tau_{rr}} + \overline{w''\tau_{r\theta}})] \\
		- (\overline{\sum_k \rho v'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho v'' v_i'' v_i''}) \Big\}
		\\ \\+
		\frac{1}{r}\Big\{
		(\sum_{k}[\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r}
		+ \overline{h_k'' \nu_k \frac{\partial c_k}{\partial r}}])
		\\ +(\kappa^*\frac{\partial \tilde T}{\partial r}) \\ + [(\tilde \tau_{rx} \tilde u +
		\tilde \tau_{rr} \tilde v + \tilde \tau_{r\theta} \tilde w -\frac{2}{3}\overline{\rho}k\tilde v)
	 	+ (\overline{u''\tau_{rx}} + \overline{v''\tau_{rr}} + \overline{w''\tau_{r\theta}})] \\
		- (\overline{\sum_k \rho v'' c_k'' h_k''}
		+ \frac{1}{2}\overline{\sum_{i=0}^{i=2} \rho v'' v_i'' v_i''}) \Big\}
	\end{array}
\end{displaymath}

	Rearranging the above by considering the $\frac{2}{3}\overline{\rho}k$ terms as additions to the pressure
and placing the new turbulence terms with the ones yet to be considered,

\begin{displaymath}
	\begin{array}{c}
		\frac{\partial}{\partial t}(\overline{\rho}\tilde E) +
		\frac{\partial}{\partial x}(\tilde u[\overline{\rho}\tilde E + \overline{p} + \frac{2}{3}\overline{\rho}k]) \\ +
		\frac{\partial}{\partial r}(\tilde v[\overline{\rho}\tilde E + \overline{p} + \frac{2}{3}\overline{\rho}k]) +
		\frac{1}{r}(\tilde v[\overline{\rho}\tilde E + \overline{p} + \frac{2}{3}\overline{\rho}k])
	\end{array} =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{
		(\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial x} + \kappa^*\frac{\partial \tilde T}
		{\partial x} + \tilde \tau_{xx} \tilde u + \tilde \tau_{xr} \tilde v + \tilde \tau_{x\theta} \tilde w)
	 	\\
		+ (\sum_k \overline{h_k'' \nu_k \frac{\partial c_k}{\partial x}} - \overline{\sum_k \rho u'' c_k'' h_k''})
		\\
		+ 
(\\overline{u''\\tau_{xx}} + \\overline{v''\\tau_{xr}} + \\overline{w''\\tau_{x\\theta}} \n\t\t-\\frac{1}{2}\\overline{\\sum_{i=0}^{i=2} \\rho u'' v_i'' v_i''}) \\Big\\} \n\t\t\\\\ \\\\ + \n\t\t\\frac{\\partial}{\\partial r}\\Big\\{\n\t\t(\\sum_{k}\\tilde h_k \\nu_k^* \\frac{\\partial \\tilde c_k}{\\partial r} + \\kappa^*\\frac{\\partial \\tilde T}\n\t\t{\\partial r} + \\tilde \\tau_{rx} \\tilde u + \\tilde \\tau_{rr} \\tilde v + \\tilde \\tau_{r\\theta} \\tilde w) \n\t \t\\\\\n\t\t(\\sum_k \\overline{h_k'' \\nu_k \\frac{\\partial c_k}{\\partial r}} - \\overline{\\sum_k \\rho v'' c_k'' h_k''})\n\t\t\\\\\n\t\t+ (\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}} \n\t\t-\\frac{1}{2}\\overline{\\sum_{i=0}^{i=2} \\rho v'' v_i'' v_i''}) \\Big\\} \n\t\t\\\\ \\\\+ \n\t\t\\frac{1}{r}\\Big\\{\n\t\t(\\sum_{k}\\tilde h_k \\nu_k^* \\frac{\\partial \\tilde c_k}{\\partial r} + \\kappa^*\\frac{\\partial \\tilde T}\n\t\t{\\partial r} + \\tilde \\tau_{rx} \\tilde u + \\tilde \\tau_{rr} \\tilde v + \\tilde \\tau_{r\\theta} \\tilde w) \n\t \t\\\\\n\t\t(\\sum_k \\overline{h_k'' \\nu_k \\frac{\\partial c_k}{\\partial r}} - \\overline{\\sum_k \\rho v'' c_k'' h_k''})\n\t\t\\\\\n\t\t+ (\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}} \n\t\t-\\frac{1}{2}\\overline{\\sum_{i=0}^{i=2} \\rho v'' v_i'' v_i''}) \\Big\\} \n\t\\end{array}\n\\end{displaymath}\n\n\tAt this point we are forced to concede that there exist a large number of fluctuating quantities\nleft in the viscous terms which cannot be computed directly.  Therefore, the enthalpic terms will be \napproximated as a group, as will the velocity terms using the following relations:\n\n\\begin{equation}\n\t\\sum_k \\overline{h_k'' \\nu_k \\frac{\\partial c_k}{\\partial x}} - \\overline{\\sum_k \\rho u'' c_k'' h_k''} \\approx\n\t(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_g})\\frac{\\partial g}{\\partial x}\n\\label{eqn:energyxend1}\n\\end{equation}\n\n\\begin{equation}\n\t\\overline{u''\\tau_{xx}} + \\overline{v''\\tau_{xr}} + \\overline{w''\\tau_{x\\theta}} \n\t-\\frac{1}{2}\\overline{\\sum_{i=0}^{i=2} \\rho u'' v_i'' v_i''} \\approx\n\t(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial x}\n\\label{eqn:energyxend2}\n\\end{equation}\n\n\twhile in the $r$ direction\n\n\\begin{equation}\n\t\\sum_k \\overline{h_k'' \\nu_k \\frac{\\partial c_k}{\\partial r}} - \\overline{\\sum_k \\rho v'' c_k'' h_k''} \\approx\n\t(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_g})\\frac{\\partial g}{\\partial r}\n\\label{eqn:energyrend1}\n\\end{equation}\n\n\\begin{equation}\n\t\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}} \n\t-\\frac{1}{2}\\overline{\\sum_{i=0}^{i=2} \\rho v'' v_i'' v_i''} \\approx\n\t(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\n\\label{eqn:energyrend2}\n\\end{equation}\n\n\twhere $\\sigma_g$ and $\\sigma_k$ are commonly referred to as closure co-efficients.  
Substituting these approximations
into the turbulent axisymmetric energy equation yields the desired form,

\begin{equation}
	\begin{array}{c}
		\frac{\partial}{\partial t}(\overline{\rho}\tilde E) +
		\frac{\partial}{\partial x}\Big\{\tilde u(\overline{\rho}\tilde E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\}
		 \\ +
		\frac{\partial}{\partial r}\Big\{\tilde v(\overline{\rho}\tilde E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\} +
		\frac{1}{r}\Big\{\tilde v(\overline{\rho}\tilde E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\}
	\end{array} =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{
		(\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial x} + \kappa^*\frac{\partial \tilde T}
		{\partial x} + \tilde \tau_{xx} \tilde u + \tilde \tau_{xr} \tilde v + \tilde \tau_{x\theta} \tilde w)
	 	\\
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_g})\frac{\partial g}{\partial x}
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial x} \Big\}
		\\ \\ +
		\frac{\partial}{\partial r}\Big\{
		(\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r} + \kappa^*\frac{\partial \tilde T}
		{\partial r} + \tilde \tau_{rx} \tilde u + \tilde \tau_{rr} \tilde v + \tilde \tau_{r\theta} \tilde w)
	 	\\
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_g})\frac{\partial g}{\partial r}
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial r} \Big\}
		\\ \\+
		\frac{1}{r}\Big\{
		(\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r} + \kappa^*\frac{\partial \tilde T}
		{\partial r} + \tilde \tau_{rx} \tilde u + \tilde \tau_{rr} \tilde v + \tilde \tau_{r\theta} \tilde w)
	 	\\
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_g})\frac{\partial g}{\partial r}
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial r} \Big\}
	\end{array}
\label{eqn:energyturb}
\end{equation}


	Comparing this to the laminar version of the axisymmetric energy equation,

\begin{displaymath}
	\begin{array}{ccc}
		\begin{array}{c}
		\frac{\partial}{\partial t}(\rho E) +\frac{\partial}{\partial x}(u[\rho E + p]) \\
		+ \frac{\partial}{\partial r}(v[\rho E + p]) + \frac{1}{r}(v[\rho E + p])
		\end{array} & = &
		\begin{array}{c}
			\frac{\partial}{\partial x}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial x}h_k)
			- q_{c_x} +   \tau_{xx}u + \tau_{xr}v + \tau_{x\theta}w\Big\} \\
			+ \frac{\partial}{\partial r}\Big\{
			\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k)
			-  q_{c_r} + \tau_{rx}u +  \tau_{rr}v  + \tau_{r\theta}w\Big\} \\
			+\frac{1}{r}\Big\{\sum_k (\nu_k \frac{\partial c_k}{\partial r}h_k) - q_{c_r} + \tau_{rx}u
			+ \tau_{rr}v + \tau_{r\theta}w\Big\}
		\end{array}
	\end{array}
\end{displaymath}

	the additional turbulent terms can be easily identified.  There is an additional turbulent pressure component and
a transfer of both turbulent kinetic and enthalpic energy in Eq. \ref{eqn:energyturb}, as well as the necessary substitution
of Favre averaged quantities in place of laminar ones.  
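	For reference, the three effective transport co-efficients introduced in the course of this derivation are,

\begin{displaymath}
	\mu^* = \overline{\mu} + \mu_T \qquad
	\kappa^* = \overline{\kappa} + \frac{\mu_T \overline{c_p}}{Pr_T} \qquad
	\nu_k^* = \overline{\nu_k} + \frac{\mu_T}{Sc_T}
\end{displaymath}

	each consisting of a laminar (molecular) part plus an eddy contribution supplied by the turbulence model.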
It is also worth noting that if one wishes to re-assert the \nassumption of zero circumferential average velocity ($\\tilde w = 0$) in order to eliminate the $\\theta$ direction\nmomentum equation, Eq. \\ref{eqn:energyturb} still holds true but with the three corresponding $\\tau_{i\\theta}$ stress terms removed.\n\n%----------------------------------------------------------------------------------------------------------------------------\n\\subsection{Turbulent Kinetic Energy (TKE) Equation} \n\n\tGiven that a new flow variable has been introduced through the definition of $k$ (Eq. \\ref{eqn:rhok}), we \nnow require at least one additional equation in order to solve the system.  To do this one can make use of the\nmomentum equations to produce an equation that governs the conservation of turbulent kinetic energy.  This is \naccomplished by multiplying each of the co-ordinate direction momentum equations by the corresponding fluctuating\nvelocity, and then summing these equations.  \n\n%------------------------------------------------------------------------------------------------------------------------------\n\\subsubsection{$x$ Direction Contribution}\n\n\tStarting with the laminar $x$ momentum equation rewritten here for\nconvenience (see Eq. \\ref{eqn:xmomshear2}),\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\rho u) + \\frac{\\partial}{\\partial x}(\\rho u^2 + p) +\n\t\\frac{\\partial}{\\partial r}(\\rho uv) + \\frac{1}{r}(\\rho u v) =  \n\t\\frac{\\partial}{\\partial x}(\\tau_{xx}) + \\frac{\\partial}{\\partial r}(\\tau_{rx}) + \\frac{1}{r}(\\tau_{rx})\n\\end{displaymath}\n\n\tand expanding the inviscid derivative terms using the product rule from the Calculus, the above equation can\nbe rewritten as,\n\n\\begin{displaymath}\n\t\\Big\\{\\rho \\frac{\\partial u}{\\partial t} + u\\frac{\\partial \\rho}{\\partial t}\\Big\\} + \\Big\\{\n\t\\rho u \\frac{\\partial u}{\\partial x} + u \\frac{\\partial \\rho u}{\\partial x} + \\frac{\\partial p}{\\partial x}\\Big\\} \n\t+ \\Big\\{\\rho v \\frac{\\partial u}{\\partial r} + u\\frac{\\partial \\rho v}{\\partial r}\\Big\\} + \\frac{1}{r}(\\rho u v) =  \n\t \\frac{\\partial}{\\partial x}(\\tau_{xx}) + \\frac{\\partial}{\\partial r}(\\tau_{rx}) + \\frac{1}{r}(\\tau_{rx})\n\\end{displaymath}\n\t\n\twhich can be rearranged to a more convenient format,\n\n\\begin{displaymath}\n\t\\rho \\frac{\\partial u}{\\partial t} + \\rho u \\frac{\\partial u}{\\partial x} \n\t+ \\rho v \\frac{\\partial u}{\\partial r}  + \\frac{\\partial p}{\\partial x} + \n\tu\\Big\\{\\frac{\\partial \\rho}{\\partial t} +\\frac{\\partial \\rho u}{\\partial x} + \n\t\\frac{\\partial \\rho v}{\\partial r} + \\frac{1}{r}(\\rho v)\\Big\\} =  \n\t \\frac{\\partial}{\\partial x}(\\tau_{xx}) + \\frac{\\partial}{\\partial r}(\\tau_{rx}) + \\frac{1}{r}(\\tau_{rx})\n\\end{displaymath}\n\n\tIn this form, the terms in the curly braces can be seen to be equal to the axisymmetric global continuity equation,\nEq. \\ref{eqn:globalcont}, and are hence equal to zero.  Multiplying the remaining terms by the Favre fluctuating component\nof $x$ direction velocity ($u''$) and then taking the Reynolds average of the entire equation (which is equal to taking the\naverage of the individual terms due to Eq. 
\\ref{eqn:sums}) yields,\n\n\\begin{equation}\n\t\\overline{\\rho u'' \\frac{\\partial u}{\\partial t}} + \\overline{\\rho u u'' \\frac{\\partial u}{\\partial x}} \n\t+ \\overline{\\rho v u'' \\frac{\\partial u}{\\partial r}}  + \\overline{u''\\frac{\\partial p}{\\partial x}} \n\t= \\overline{u''\\frac{\\partial}{\\partial x}(\\tau_{xx})} + \\overline{u''\\frac{\\partial}{\\partial r}(\\tau_{rx})} \n\t + \\overline{u''\\frac{1}{r}(\\tau_{rx})}\n\\label{eqn:uxmomshear}\n\\end{equation}\n\n\tStarting with the time derivative term and decomposing $u$ into its Favre averaged components,\n\n\\begin{displaymath}\n\t\\overline{\\rho u'' \\frac{\\partial u}{\\partial t}} = \\overline{\\rho u'' \\frac{\\partial}{\\partial t}(\\tilde u\n\t+ u'')} = \\overline{\\rho u''}\\frac{\\partial \\tilde u}{\\partial t} + \\overline{\\rho u'' \\frac{\\partial u''}{\\partial t}}\n\t= \\overline{\\rho u'' \\frac{\\partial u''}{\\partial t}} = \\overline{\\frac{1}{2}\\rho u''\n\t\\frac{\\partial u''}{\\partial t}} + \\overline{\\frac{1}{2}\\rho u''\\frac{\\partial u''}{\\partial t}}\n\\end{displaymath}\n\n\twhere Eqs. \\ref{eqn:reyderiv}, \\ref{eqn:aveofave}, and \\ref{eqn:rhophidp} have been used.  At this point\none can manipulate the remaining terms using the relation,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial t}(\\frac{1}{2}\\rho u'' u'')} = \\overline{\\frac{1}{2}\\rho\\frac{\\partial}{\\partial t}\n\t(u''u'')} + \\overline{\\frac{1}{2}u''u''\\frac{\\partial \\rho}{\\partial t}} = \\Big\\{\\overline{\\frac{1}{2}\\rho u''\n\t\\frac{\\partial u''}{\\partial t}} + \\overline{\\frac{1}{2}\\rho u''\\frac{\\partial u''}{\\partial t}} \\Big\\}\n\t+ \\overline{\\frac{1}{2}u''u''\\frac{\\partial \\rho}{\\partial t}}\n\\end{displaymath}\n\n\tfrom which we will use the relationship,\n\n\\begin{displaymath}\n\t\\overline{\\frac{1}{2}\\rho u''\n\t\\frac{\\partial u''}{\\partial t}} + \\overline{\\frac{1}{2}\\rho u''\\frac{\\partial u''}{\\partial t}} = \n\t\\overline{\\frac{1}{2}\\rho\\frac{\\partial}{\\partial t}(u''u'')} = \\overline{\\frac{\\partial}{\\partial t}\n\t(\\frac{1}{2}\\rho u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial \\rho}{\\partial t}}\n\\end{displaymath}\n\n\tThus one can write the time derivative term as,\n\n\\begin{equation}\n\t\\overline{\\rho u'' \\frac{\\partial u}{\\partial t}} = \\frac{\\partial}{\\partial t}\\overline{\n\t(\\frac{1}{2}\\rho u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial \\rho}{\\partial t}}\t\n\\label{eqn:timeterm}\n\\end{equation}\n\n\tMoving on to the $x$ derivative term and again decomposing $u$ into Favre average components,\n\n\\begin{displaymath}\n\t\\overline{\\rho u u'' \\frac{\\partial u}{\\partial x}} = \\overline{\\rho(\\tilde u + u'')u''\\frac{\\partial}{\\partial x}\n\t(\\tilde u + u'')} = \\overline{\\rho \\tilde u u'' \\frac{\\partial \\tilde u}{\\partial x}} + \\overline{\\rho u'' u''\n\t\\frac{\\partial \\tilde u}{\\partial x}} + \\overline{\\rho \\tilde u u'' \\frac{\\partial u''}{\\partial x}} +\n\t\\overline{\\rho u'' u'' \\frac{\\partial u''}{\\partial x}}\n\\end{displaymath}\n\n\tApplying the results of Eqs. 
\\ref{eqn:reyderiv},\\ref{eqn:aveofave}, and \\ref{eqn:rhophidp} one can write,\n\n\\begin{displaymath}\n\t\\overline{\\rho u u'' \\frac{\\partial u}{\\partial x}} = \\overline{\\rho u''} \\tilde u \\frac{\\partial \\tilde u}{\\partial x}\n\t+ \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} + \\overline{(\\tilde u + u'')(\\rho u'' \n\t\\frac{\\partial u''}{\\partial x})} = \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} + \\overline{(u \\rho u'' \n\t\\frac{\\partial u''}{\\partial x})}\n\\end{displaymath}\n\n\twhere it is noted that one Favre averaged and one Favre fluctuating component were recombined to create a single\nterm containing the derivative of $u''$.  This term can then be rewritten using the following relation,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}(\\frac{1}{2}\\rho u u'' u'')}= \\overline{\\frac{1}{2}u''u''\\frac{\\partial}\n\t{\\partial x}(\\rho u)} + \\overline{\\frac{1}{2}\\rho u\\frac{\\partial}{\\partial x}(u'' u'')} = \\overline{\\frac{1}{2}u''u''\n\t\\frac{\\partial}{\\partial x}(\\rho u)} + \\Big\\{ \\overline{\\frac{1}{2}\\rho u u'' \\frac{\\partial u''}{\\partial x}}\n\t+ \\overline{\\frac{1}{2}\\rho u u'' \\frac{\\partial u''}{\\partial x}}\\Big\\}\n\\end{displaymath}\n\n\tfrom which one can make use of the result,\n\n\\begin{displaymath}\n\t\\overline{(u \\rho u'' \\frac{\\partial u''}{\\partial x})} = \\overline{\\frac{1}{2}\\rho u u'' \\frac{\\partial u''}{\\partial x}}\n\t+ \\overline{\\frac{1}{2}\\rho u u'' \\frac{\\partial u''}{\\partial x}} = \\overline{\\frac{\\partial}{\\partial x}\n\t(\\frac{1}{2}\\rho u u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial x}(\\rho u)} \n\\end{displaymath}\n\n\twhich allows the $x$ derivative term to be expressed as,\n\n\\begin{displaymath}\n\t\\overline{\\rho u u'' \\frac{\\partial u}{\\partial x}} = \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x}\n\t+  \\overline{\\frac{\\partial}{\\partial x}\n\t(\\frac{1}{2}\\rho u u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial x}(\\rho u)} \n\\end{displaymath}\n\n\tNow re-decomposing the $x$ velocity in the largest derivative term of the above equation into Favre components\nand applying Eqs. \\ref{eqn:reyderiv} and \\ref{eqn:aveofave} yields,\n\n\\begin{equation}\n\t\\overline{\\rho u u'' \\frac{\\partial u}{\\partial x}} = \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x}\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial x}(\\frac{1}{2}\n\t\\overline{\\rho u'' u'' u''})- \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial x}(\\rho u)}\n\\label{eqn:xderivterm}\n\\end{equation}\n\n\tFor the $r$ derivative term the logic is the same with the outcome only slightly different.  Starting again\nby decomposing the velocities into Favre averaged components,\n\n\\begin{displaymath}\n\t\\overline{\\rho v u'' \\frac{\\partial u}{\\partial r}} = \\overline{\\rho (\\tilde v + v'')u''\\frac{\\partial}{\\partial r}\n\t(\\tilde u + u'')} = \\overline{\\rho \\tilde v u'' \\frac{\\partial \\tilde u}{\\partial r}} + \\overline{\\rho v'' u''\n\t\\frac{\\partial \\tilde u}{\\partial r}} + \\overline{\\rho \\tilde v u'' \\frac{\\partial u''}{\\partial r}} +\n\t\\overline{\\rho v'' u'' \\frac{\\partial u''}{\\partial r}} \n\\end{displaymath}\n\n\tand again making use of Eqs. 
\\ref{eqn:reyderiv}, \\ref{eqn:aveofave}, and \\ref{eqn:rhophidp} one obtains\n\n\\begin{displaymath}\n\t\\overline{\\rho v u'' \\frac{\\partial u}{\\partial r}} = \\overline{\\rho u''} \\tilde v \\frac{\\partial \\tilde u}{\\partial r}\n\t+ \\overline{\\rho u'' v''}\\frac{\\partial \\tilde u}{\\partial r} + \\overline{(\\tilde v + v'')(\\rho u'' \n\t\\frac{\\partial u''}{\\partial r})} = \\overline{\\rho u'' v''}\\frac{\\partial \\tilde u}{\\partial r} + \\overline{(v \\rho u'' \n\t\\frac{\\partial u''}{\\partial r})}\n\\end{displaymath}\n\n\twhere it is again noted that one Favre averaged and one Favre fluctuating component were recombined to create a single\nterm containing the derivative of $u''$.  This term can then be rewritten using the following relation,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial r}(\\frac{1}{2}\\rho v u'' u'')}= \\overline{\\frac{1}{2}u''u''\\frac{\\partial}\n\t{\\partial r}(\\rho v)} + \\overline{\\frac{1}{2}\\rho v\\frac{\\partial}{\\partial r}(u'' u'')} = \\overline{\\frac{1}{2}u''u''\n\t\\frac{\\partial}{\\partial r}(\\rho v)} + \\Big\\{ \\overline{\\frac{1}{2}\\rho v u'' \\frac{\\partial u''}{\\partial r}}\n\t+ \\overline{\\frac{1}{2}\\rho v u'' \\frac{\\partial u''}{\\partial r}}\\Big\\}\n\\end{displaymath}\n\n\tfrom which one can extract the result,\n\n\\begin{displaymath}\n\t\\overline{(v \\rho u''\\frac{\\partial u''}{\\partial r})} = \\overline{\\frac{1}{2}\\rho v u'' \\frac{\\partial u''}{\\partial r}}\n\t+ \\overline{\\frac{1}{2}\\rho v u'' \\frac{\\partial u''}{\\partial r}} = \\overline{\\frac{\\partial}{\\partial r}\n\t(\\frac{1}{2}\\rho v u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial r}(\\rho v)}\n\\end{displaymath}\n\n\tthus allowing the $r$ derivative term to be written as,\n\n\\begin{displaymath}\n\t\\overline{\\rho v u'' \\frac{\\partial u}{\\partial r}} = \\overline{\\rho u'' v''}\\frac{\\partial \\tilde u}{\\partial r}\t\n\t+ \\overline{\\frac{\\partial}{\\partial r}\n\t(\\frac{1}{2}\\rho v u'' u'')} - \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial r}(\\rho v)}\n\\end{displaymath}\n\n\tRe-decomposing the $r$ velocity in the largest derivative term of the above equation into Favre components\nand applying Eqs. 
\\ref{eqn:reyderiv} and \\ref{eqn:aveofave} yields the desired form of the $r$ derivative term,\n\n\\begin{equation}\n\t\\overline{\\rho v u'' \\frac{\\partial u}{\\partial r}} = \\overline{\\rho v'' u''}\\frac{\\partial \\tilde u}{\\partial r}\n\t+ \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial r}(\\frac{1}{2}\n\t\\overline{\\rho v'' u'' u''})- \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial r}(\\rho v)}\n\\label{eqn:rderivterm}\n\\end{equation}\n\n\tThe last term on the left hand side to be considered is the pressure term, which we shall treat as follows,\n\n\\begin{displaymath}\n\t\\overline{u''\\frac{\\partial p}{\\partial x}} = \\overline{u''\\frac{\\partial}{\\partial x}(\\overline{p} + p')} = \n\t\\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} + \\overline{u''\\frac{\\partial p'}{\\partial x}}\n\\end{displaymath}\n\n\tThe last term in the above expression can be rewritten using the relation,\n\n\\begin{displaymath}\n\t\\overline{\\frac{\\partial}{\\partial x}(u''p')} = \\overline {u''\\frac{\\partial p'}{\\partial x}} + \\overline {p'\\frac\n\t{\\partial u''}{\\partial x}}\n\\end{displaymath}\n\n\tTherefore,\n\n\\begin{equation}\n\t\\overline{u''\\frac{\\partial p}{\\partial x}} = \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} \n\t+ \\overline{\\frac{\\partial}{\\partial x}(u''p')} - \\overline {p'\\frac{\\partial u''}{\\partial x}}\n\\label{eqn:pderivterm}\n\\end{equation}\n\n\tThe only terms left to be considered are the ones involving the viscous stresses.  To phrase these terms in \nforms that will lend themselves to convenient approximation in the final TKE equation, these terms can be expressed\nas components of larger derivatives using the inverse of the Product Rule as follows:\n\n\\begin{equation}\n\t\\overline{u''\\frac{\\partial \\tau_{xx}}{\\partial x}} = \\frac{\\partial}{\\partial x}\\overline{(u''\\tau_{xx})} -\n\t\\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}}\n\\label{eqn:tauxxterm}\n\\end{equation}\n\t\n\twhile for the other stress term,\n\n\\begin{equation}\n\t\\overline{u''\\frac{\\partial \\tau_{rx}}{\\partial r}} = \\frac{\\partial}{\\partial r}\\overline{(u''\\tau_{rx})} -\n\t\\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}}\n\\label{eqn:taurxterm}\n\\end{equation}\n\n\tThe last term to be considered is the viscous axisymmetric source term, which since it contains no derivative \nterms will be left as is (noting that the $\\frac{1}{r}$ factor is unaffected by the Reynolds averaging and can be \ntaken outside of it).  \nThus combining Eqs. \\ref{eqn:timeterm} - \\ref{eqn:taurxterm} one can write Eq. 
\\ref{eqn:uxmomshear} as,\n\n\\begin{displaymath}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\t\n\t\\frac{\\partial}{\\partial t}\\overline{(\\frac{1}{2}\\rho u'' u'')} - \\overline{\\frac{1}{2}u''u''\n\t\\frac{\\partial \\rho}{\\partial t}} \\\\\n\t+ \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x}\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial x}(\\frac{1}{2}\n\t\\overline{\\rho u'' u'' u''})- \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial x}(\\rho u)} \\\\\n\t+ \\overline{\\rho v'' u''}\\frac{\\partial \\tilde u}{\\partial r}\n\t+ \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial r}(\\frac{1}{2}\n\t\\overline{\\rho v'' u'' u''})- \\overline{\\frac{1}{2}u''u''\\frac{\\partial}{\\partial r}(\\rho v)} \\\\\n\t+ \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} \n\t+ \\overline{\\frac{\\partial}{\\partial x}(u''p')} - \\overline {p'\\frac{\\partial u''}{\\partial x}} \\\\\n      \\end{array}\n\t& = &\n      \\begin{array}{c}\n\t\\frac{\\partial}{\\partial x}\\overline{(u''\\tau_{xx})} - \\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}}\\\\\n\t + \\frac{\\partial}{\\partial r}\\overline{(u''\\tau_{rx})} -\n\t\\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}} +  \\frac{1}{r}\\overline{u''\\tau_{rx}}\n      \\end{array}\t\n   \\end{array}\n\\end{displaymath}\n\n\tThis equation can be rearranged to highlight which terms are related and to phrase it in a more recognizable form, \nas follows,\n\n\\begin{displaymath}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\frac{\\partial}{\\partial t}\\overline{(\\frac{1}{2}\\rho u'' u'')}\t+ \\frac{\\partial}{\\partial x}\n\t(\\tilde u \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho u'' u''})\\\\\n\t-\\overline{\\frac{1}{2}u''u''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}} \\\\ \n\t+ \\Big\\{ \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} + \\overline{\\frac{\\partial}{\\partial x}(u''p')} \n\t- \\overline {p'\\frac{\\partial u''}{\\partial x}} \\Big\\} \\\\\n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{(u''\\tau_{xx})} - \\frac{1}{2}\\overline{\\rho u'' u'' u''}\\Big\\} \\\\\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{(u''\\tau_{rx})} - \\frac{1}{2}\\overline{\\rho v'' u'' u''}\\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} +  \\overline{\\rho v'' u''}\n\t\\frac{\\partial \\tilde u}{\\partial r}  \\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}} + \\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}}  \\Big\\}\n\t-\\frac{1}{r}\\Big\\{\\overline{u''\\tau_{rx}} \\Big\\}\n      \\end{array}\n   & = & 0\n   \\end{array}\t\n\\end{displaymath}\n\n\tRecalling the global continuity equation (Eq. 
\\ref{eqn:globalcont}), multiplying it by the term\n$-\\frac{1}{2}u''u''$ and taking the Reynolds average of the entire equation, one can write\n\n\\begin{displaymath}\n\t -\\overline{\\frac{1}{2}u''u''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v) + \\frac{1}{r}(\\rho v)\\Big\\}} = 0\n\\end{displaymath}\n\n\tTherefore,\n\n\\begin{displaymath}\n\t\\overline{\\frac{1}{r}\\frac{1}{2}\\rho v u''u''} = -\\overline{\\frac{1}{2}u''u''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) \n\t+ \\frac{\\partial}{\\partial x}(\\rho u) + \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}}\n\\end{displaymath}\n\n\tThis allows the new form of the $x$ direction turbulent momentum equation to be written as,\n\n\\begin{equation}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\frac{\\partial}{\\partial t}\\overline{(\\frac{1}{2}\\rho u'' u'')}\t+ \\frac{\\partial}{\\partial x}\n\t(\\tilde u \\overline{\\frac{1}{2}\\rho u'' u''}) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho u'' u''})\\\\\n\n\t+ \\Big\\{ \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} + \\overline{\\frac{\\partial}{\\partial x}(u''p')} \n\t- \\overline {p'\\frac{\\partial u''}{\\partial x}} \\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{(u''\\tau_{xx})} - \\frac{1}{2}\\overline{\\rho u'' u'' u''}\\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{(u''\\tau_{rx})} - \\frac{1}{2}\\overline{\\rho v'' u'' u''}\\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} +  \\overline{\\rho v'' u''}\n\t\\frac{\\partial \\tilde u}{\\partial r}  \\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}} + \\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}}\\Big\\} \\\\\n\n\t-\\frac{1}{r}\\Big\\{\\overline{u''\\tau_{rx}} - \\overline{\\frac{1}{2} \\rho v u'' u''} \\Big\\}\n      \\end{array}\n   & = 0 = & \n      \\begin{array}{c}\n\t\\Big\\{\\textrm{Turbulent Kinetic Energy Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Pressure Terms}\\Big\\} \\\\\n\t- \\frac{\\partial}{\\partial x}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t- \\frac{\\partial}{\\partial r}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Production Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Dissipation Terms}\\Big\\} \\\\\n\t- \\frac{1}{r}\\Big\\{\\textrm{Axisymmetric Source Terms}\\Big\\}\n      \\end{array}\t\n   \\end{array}\t\t\n\\label{eqn:turbxmom}\n\\end{equation}\n\n\twhere each group of terms has been given a description.  The final turbulent kinetic energy equation\nwill be composed of these various broad components summed over all three co-ordinate directions. \n\n%------------------------------------------------------------------------------------------------------------------------------\n\\subsubsection{$r$ Direction Contribution}\n\n\tHaving completed the required manipulations on the $x$ direction momentum equation, we will continue with the \nsame process on the $r$ direction momentum equation.  However, given that the processes in both cases are very similar,\nsome of the derivations will be abbreviated if it is clear how to arrive at the result from comparison with the preceding\nsection.  
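Before repeating this process in the other co-ordinate directions, it is worth collecting the two averaging \nidentities that do most of the work throughout (standard Favre averaging results, consistent with the relations \ninvoked above): for any flow variable $\\phi$,\n\n\\begin{displaymath}\n\t\\overline{\\rho \\phi''} = \\overline{\\rho \\phi} - \\overline{\\rho}\\tilde \\phi = 0\n\t\\qquad \\textrm{and} \\qquad\n\t\\overline{\\phi''} = \\overline{\\phi} - \\tilde \\phi = -\\frac{\\overline{\\rho' \\phi'}}{\\overline{\\rho}}\n\\end{displaymath}\n\n\tThe first of these eliminates terms such as $\\overline{\\rho u''}\\tilde u \\frac{\\partial \\tilde u}{\\partial x}$, \nwhile the second explains why factors such as $\\overline{u''}$ survive in the pressure terms.\n\n\t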
Starting again with the laminar $r$ direction momentum equation (but already expanded using the Product Rule and\nsimplified using the global continuity equation), multiplying in this case by the fluctuating component of $r$ direction\nvelocity, and taking the Reynolds average yields,\n\n\\begin{equation}\n\t\\overline{\\rho v'' \\frac{\\partial v}{\\partial t}} + \\overline{\\rho u v'' \\frac{\\partial v}{\\partial x}} \n\t+ \\overline{\\rho v v'' \\frac{\\partial v}{\\partial r}}  + \\overline{v''\\frac{\\partial p}{\\partial r}} \n\t= \\overline{v''\\frac{\\partial}{\\partial x}(\\tau_{xr})} + \\overline{v''\\frac{\\partial}{\\partial r}(\\tau_{rr})} \n\t + \\overline{v''\\frac{1}{r}(\\tau_{rr} - \\tau_{\\theta \\theta})}\n\\label{eqn:vrmomshear}\n\\end{equation}\n\n\tAll of the derivative terms in the $r$ direction are treated in the same manner as in the $x$ direction, with\nonly the variables changed.  Thus, considering first the time derivative term and extending the result shown in \nEq. \\ref{eqn:timeterm} yields,\n\n\\begin{equation}\n\t\\overline{\\rho v'' \\frac{\\partial v}{\\partial t}} = \\overline{\\frac{\\partial}{\\partial t}\n\t(\\frac{1}{2}\\rho v'' v'')} - \\overline{\\frac{1}{2}v''v''\\frac{\\partial \\rho}{\\partial t}}\t\n\\label{eqn:timeterm2}\n\\end{equation}\n\n\tFor both the $x$ and $r$ derivative terms we note that these are very similar in form to those found\nin the $x$ direction derivation with $v''$ substituted for $u''$ and $v$ for $u$ in the actual derivatives.  Thus\nextending the results of Eqs. \\ref{eqn:xderivterm} and \\ref{eqn:rderivterm} yields the $x$ and $r$ derivative terms \nrespectively,\n\n\\begin{equation}\n\t\\overline{\\rho u v'' \\frac{\\partial v}{\\partial x}} = \\overline{\\rho u'' v''}\\frac{\\partial \\tilde v}{\\partial x}\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\frac{1}{2}\\rho v'' v''}) + \\frac{\\partial}{\\partial x}(\\frac{1}{2}\n\t\\overline{\\rho u'' v'' v''})- \\overline{\\frac{1}{2}v''v''\\frac{\\partial}{\\partial x}(\\rho u)}\n\\label{eqn:xderivterm2}\n\\end{equation}\n\n\tand\n\n\\begin{equation}\n\t\\overline{\\rho v v'' \\frac{\\partial v}{\\partial r}} = \\overline{\\rho v'' v''}\\frac{\\partial \\tilde v}{\\partial r}\n\t+ \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho v'' v''}) + \\frac{\\partial}{\\partial r}(\\frac{1}{2}\n\t\\overline{\\rho v'' v'' v''})- \\overline{\\frac{1}{2}v''v''\\frac{\\partial}{\\partial r}(\\rho v)}\n\\label{eqn:rderivterm2}\n\\end{equation}\n\n\tThe pressure derivative term is treated exactly the same but for the direction along which the derivative is taken,\nthus from Eq. 
\\ref{eqn:pderivterm},\n\n\\begin{equation}\n\t\\overline{v''\\frac{\\partial p}{\\partial r}} = \\overline{v''}\\frac{\\partial \\overline{p}}{\\partial r} \n\t+ \\overline{\\frac{\\partial}{\\partial r}(v''p')} - \\overline {p'\\frac{\\partial v''}{\\partial r}}\n\\label{eqn:pderivterm2}\n\\end{equation}\n\n\tMoving on to the viscous stress terms, again the inverse of the Product Rule can be applied to yield,\n\n\\begin{equation}\n\t\\overline{v''\\frac{\\partial \\tau_{xr}}{\\partial x}} = \\frac{\\partial}{\\partial x}\\overline{(v''\\tau_{xr})} -\n\t\\overline{\\tau_{xr}\\frac{\\partial v''}{\\partial x}}\n\\label{eqn:tauxrterm}\n\\end{equation}\n\t\n\tas well as,\n\n\\begin{equation}\n\t\\overline{v''\\frac{\\partial \\tau_{rr}}{\\partial r}} = \\frac{\\partial}{\\partial r}\\overline{(v''\\tau_{rr})} -\n\t\\overline{\\tau_{rr}\\frac{\\partial v''}{\\partial r}}\n\\label{eqn:taurrterm}\n\\end{equation}\n\n\tThe axisymmetric source term will be left as is but for the removal of the $\\frac{1}{r}$ term from \nunder the Reynolds average,\n\n\\begin{equation}\n\t\\overline{v''\\frac{1}{r}(\\tau_{rr} - \\tau_{\\theta \\theta})} = \n\t\\frac{1}{r}(\\overline{v''\\tau_{rr} - v''\\tau_{\\theta \\theta}})\n\\label{eqn:sourceterm2}\n\\end{equation}\n\n\tAt this point we can again rewrite Eq. \\ref{eqn:vrmomshear} with the results of \nEqs. \\ref{eqn:timeterm2}-\\ref{eqn:sourceterm2} to yield (where the terms have already been placed in a more \nconvenient arrangement),\n\n\\begin{displaymath}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\frac{\\partial}{\\partial t}\\overline{(\\frac{1}{2}\\rho v'' v'')}\t+ \\frac{\\partial}{\\partial x}\n\t(\\tilde u \\overline{\\frac{1}{2}\\rho v'' v''}) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho v'' v''})\\\\\n\t-\\overline{\\frac{1}{2}v''v''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}} \\\\ \n\t+ \\Big\\{ \\overline{v''}\\frac{\\partial \\overline{p}}{\\partial r} + \\overline{\\frac{\\partial}{\\partial r}(v''p')} \n\t- \\overline {p'\\frac{\\partial v''}{\\partial r}} \\Big\\} \\\\\n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{(v''\\tau_{xr})} - \\frac{1}{2}\\overline{\\rho u'' v'' v''}\\Big\\} \\\\\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{(v''\\tau_{rr})} - \\frac{1}{2}\\overline{\\rho v'' v'' v''}\\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\rho u'' v''}\\frac{\\partial \\tilde v}{\\partial x} +  \\overline{\\rho v'' v''}\n\t\\frac{\\partial \\tilde v}{\\partial r}  \\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\tau_{xr}\\frac{\\partial v''}{\\partial x}} + \\overline{\\tau_{rr}\\frac{\\partial v''}{\\partial r}}  \\Big\\}\n\t-\\frac{1}{r}\\Big\\{\\overline{v''\\tau_{rr}} -\\overline{v''\\tau_{\\theta \\theta}} \\Big\\}\n      \\end{array}\n   & = & 0\n   \\end{array}\t\n\\end{displaymath}\n\n\tAgain recalling the global continuity equation (Eq. \\ref{eqn:globalcont}), multiplying it this time by the term\n$-\\frac{1}{2}v''v''$ and taking the Reynolds average of the entire equation yields,\n\n\\begin{displaymath}\n\t -\\overline{\\frac{1}{2}v''v''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}} = \\overline{\\frac{1}{r}\\frac{1}{2} \\rho v v'' v''}\n\\end{displaymath}\n\n\twhich again allows the simplification of the first term in curly braces.  Adding the above results to\nEq. 
\\ref{eqn:turbxmom} by placing the various terms into their corresponding categories yields,\n\n\\begin{equation}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\frac{\\partial}{\\partial t}\\Big\\{\\overline{\\frac{1}{2}\\rho u'' u''} + \\overline{\\frac{1}{2}\\rho v'' v''}\\Big\\}\t\n\t+\\frac{\\partial}{\\partial x}\\Big\\{\\tilde u (\\overline{\\frac{1}{2}\\rho u'' u''} \n\t+ \\overline{\\frac{1}{2}\\rho v'' v''})\\Big\\} \\\\\n\t+\\frac{\\partial}{\\partial r}\\Big\\{\\tilde v (\\overline{\\frac{1}{2}\\rho u'' u''} \n\t+ \\overline{\\frac{1}{2}\\rho v'' v''})\\Big\\}\\\\\n\n\t+ \\Big\\{ \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} + \\overline{\\frac{\\partial}{\\partial x}(u''p')} \n\t- \\overline {p'\\frac{\\partial u''}{\\partial x}} + \\overline{v''}\\frac{\\partial \\overline{p}}{\\partial r} + \n\t\\overline{\\frac{\\partial}{\\partial r}(v''p')} - \\overline {p'\\frac{\\partial v''}{\\partial r}} \\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{u''\\tau_{xx}} + \\overline{v''\\tau_{xr}}\n\t- \\frac{1}{2}\\overline{\\rho u'' u'' u''} - \\frac{1}{2}\\overline{\\rho u'' v'' v''}\\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{u''\\tau_{rx}} +  \\overline{v''\\tau_{rr}} \n\t- \\frac{1}{2}\\overline{\\rho v'' u'' u''} - \\frac{1}{2}\\overline{\\rho v'' v'' v''}\\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} + \\overline{\\rho v'' u''}\n\t\\frac{\\partial \\tilde u}{\\partial r} + \\overline{\\rho u'' v''}\\frac{\\partial \\tilde v}{\\partial x}\n\t+ \\overline{\\rho v'' v''}\\frac{\\partial \\tilde v}{\\partial r}  \\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}} + \\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}}\n\t+ \\overline{\\tau_{xr}\\frac{\\partial v''}{\\partial x}} + \\overline{\\tau_{rr}\\frac{\\partial v''}{\\partial r}} \\Big\\} \\\\\n\n\t-\\frac{1}{r}\\Big\\{\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} -\\overline{v''\\tau_{\\theta \\theta}}\n\t- \\overline{\\frac{1}{2} \\rho v u'' u''} - \\overline{\\frac{1}{2} \\rho v v''v''} \\Big\\}\n      \\end{array}\n   & = & \n      \\begin{array}{c}\n\t\\Big\\{\\textrm{Turbulent Kinetic Energy Terms}\\Big\\} \\\\ \\\\\n\t+ \\Big\\{\\textrm{Pressure Terms}\\Big\\} \\\\\n\t- \\frac{\\partial}{\\partial x}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t- \\frac{\\partial}{\\partial r}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Production Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Dissipation Terms}\\Big\\} \\\\\n\t- \\frac{1}{r}\\Big\\{\\textrm{Axisymmetric Source Terms}\\}\n      \\end{array}\t\n   \\end{array}\t\t\n\\label{eqn:turbxrmom}\n\\end{equation}\n\n%------------------------------------------------------------------------------------------------------------------------------\n\\subsubsection{$\\theta$ Direction Contribution}\n\n\tAt this point the true usefulness of deriving the $\\theta$ direction momentum equation shows itself, as without it\nthe turbulent kinetic energy equation cannot be derived.  This is due to the fact that some of the approximations used \nto develop the turbulent Navier-Stokes equations (see Eqs. \\ref{eqn:rhouu}, \\ref{eqn:rhoww}, and \\ref{eqn:rhovv}) and the\ndefinition of $k$ itself (Eq. \\ref{eqn:rhok}), require that there exist three dimensions to remain \nself consistent.  
Thus although the ultimate application of these equations will be to flows where the Favre averaged \ncircumferential velocity is zero ($\\tilde w = 0$), in which case no third momentum equation is required, the turbulent kinetic\nenergy remains a three-dimensional quantity; ergo, the equation describing its conservation must be derived using all three\ndimensions.  This is clearly evident from examining Eq. \\ref{eqn:turbxrmom}, where it can be seen that the turbulent \nkinetic energy as defined by Eq. \\ref{eqn:rhok} does not yet appear.  Therefore taking the laminar $\\theta$ direction \nmomentum equation as expressed in Eq. \\ref{eqn:thetamom}, expanding derivatives and simplifying using the global\ncontinuity equation, multiplying the result by $w''$, and taking the Reynolds average (the same process as was done for the\nlaminar $x$ and $r$ momentum equations), one can write,\n\n\\begin{equation}\n\t\\overline{\\rho w'' \\frac{\\partial w}{\\partial t}} + \\overline{\\rho u w'' \\frac{\\partial w}{\\partial x}} \n\t+ \\overline{\\rho v w'' \\frac{\\partial w}{\\partial r}}  \n\t= \\overline{w''\\frac{\\partial}{\\partial x}(\\tau_{x \\theta})} \n\t+ \\overline{w''\\frac{\\partial}{\\partial r}(\\tau_{r \\theta})} \n\t+ \\overline{w''\\frac{1}{r}(2\\tau_{r \\theta})}\n\\label{eqn:wthetamomshear}\n\\end{equation}\n\n\tStarting again with the time derivative term and extending the results of Eq. \\ref{eqn:timeterm} (replacing \n$u''$ with $w''$ and $u$ with $w$) yields,\n\n\\begin{equation}\n\t\\overline{\\rho w'' \\frac{\\partial w}{\\partial t}} = \\overline{\\frac{\\partial}{\\partial t}\n\t(\\frac{1}{2}\\rho w'' w'')} - \\overline{\\frac{1}{2}w''w''\\frac{\\partial \\rho}{\\partial t}}\t\n\\label{eqn:timeterm3}\n\\end{equation}\n\t\n\tMoving next to the $x$ and $r$ derivative terms and making the same substitutions in Eqs. \\ref{eqn:xderivterm}\nand \\ref{eqn:rderivterm} yields,\n\n\\begin{equation}\n\t\\overline{\\rho u w'' \\frac{\\partial w}{\\partial x}} = \\overline{\\rho u'' w''}\\frac{\\partial \\tilde w}{\\partial x}\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\frac{1}{2}\\rho w'' w''}) + \\frac{\\partial}{\\partial x}(\\frac{1}{2}\n\t\\overline{\\rho u'' w'' w''})- \\overline{\\frac{1}{2}w''w''\\frac{\\partial}{\\partial x}(\\rho u)}\n\\label{eqn:xderivterm3}\n\\end{equation}\n\n\tand\n\n\\begin{equation}\n\t\\overline{\\rho v w'' \\frac{\\partial w}{\\partial r}} = \\overline{\\rho v'' w''}\\frac{\\partial \\tilde w}{\\partial r}\n\t+ \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho w'' w''}) + \\frac{\\partial}{\\partial r}(\\frac{1}{2}\n\t\\overline{\\rho v'' w'' w''})- \\overline{\\frac{1}{2}w''w''\\frac{\\partial}{\\partial r}(\\rho v)}\n\\label{eqn:rderivterm3}\n\\end{equation}\n\n\tSince the axisymmetric assumption of Eq. \\ref{eqn:dtheta} precludes any pressure changes in the circumferential\ndirection, there is no pressure term to be considered.  
Going directly to the viscous stress terms, again a simple \napplication of the inverse of the Product Rule yields,\n\n\\begin{equation}\n\t\\overline{w''\\frac{\\partial \\tau_{x \\theta}}{\\partial x}} = \\frac{\\partial}{\\partial x}\\overline{(w''\\tau_{x\\theta})} -\n\t\\overline{\\tau_{x\\theta}\\frac{\\partial w''}{\\partial x}}\n\\label{eqn:tauxthetaterm}\n\\end{equation}\n\t\n\tas well as,\n\n\\begin{equation}\n\t\\overline{w''\\frac{\\partial \\tau_{r\\theta}}{\\partial r}} = \\frac{\\partial}{\\partial r}\\overline{(w''\\tau_{r\\theta})} -\n\t\\overline{\\tau_{r\\theta}\\frac{\\partial w''}{\\partial r}}\n\\label{eqn:taurthetaterm}\n\\end{equation}\n\n\twhile the source term will be rewritten as,\n\n\\begin{equation}\n\t\\overline{w''\\frac{1}{r}(2\\tau_{\\theta r})} = \\frac{2}{r}(\\overline{w'' \\tau_{r \\theta}})\n\\label{eqn:sourceterm3}\n\\end{equation}\n\t\n\tCombining the results of Eqs. \\ref{eqn:timeterm3} - \\ref{eqn:sourceterm3} with Eq. \\ref{eqn:wthetamomshear}\nyields,\n\n\\begin{displaymath}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\overline{\\frac{\\partial}{\\partial t}(\\frac{1}{2}\\rho w'' w'')}\t+ \\frac{\\partial}{\\partial x}\n\t(\\tilde u \\overline{\\frac{1}{2}\\rho w'' w''}) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\frac{1}{2}\\rho w'' w''})\\\\\n\t-\\overline{\\frac{1}{2}w''w''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}} \\\\ \n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{w''\\tau_{x\\theta}} - \\frac{1}{2}\\overline{\\rho u'' w'' w''}\\Big\\} \\\\\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{w''\\tau_{r\\theta}} - \\frac{1}{2}\\overline{\\rho v'' w'' w''}\\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\rho u'' w''}\\frac{\\partial \\tilde w}{\\partial x} +  \\overline{\\rho v'' w''}\n\t\\frac{\\partial \\tilde w}{\\partial r}  \\Big\\} \\\\\n\t+ \\Big\\{\\overline{\\tau_{x\\theta}\\frac{\\partial w''}{\\partial x}} \n\t+ \\overline{\\tau_{r\\theta}\\frac{\\partial w''}{\\partial r}}  \\Big\\}\n\t-\\frac{2}{r}\\Big\\{\\overline{w''\\tau_{r\\theta}} \\Big\\}\n      \\end{array}\n   & = & 0\n   \\end{array}\t\n\\end{displaymath}\n\n\tThe first term in the curly braces can again be simplified using the global continuity equation multiplied by\nthe term $-\\frac{1}{2}w''w''$ and Reynolds averaged,\n\n\\begin{displaymath}\n\t -\\overline{\\frac{1}{2}w''w''\\Big\\{\\frac{\\partial}{\\partial t}(\\rho) + \\frac{\\partial}{\\partial x}(\\rho u)\n\t+ \\frac{\\partial}{\\partial r}(\\rho v)\\Big\\}} = \\overline{\\frac{1}{r}\\frac{1}{2} \\rho v w'' w''}\n\\end{displaymath}\n\n\tUsing the above results and adding these to the result expressed in Eq. 
\\ref{eqn:turbxrmom} one can write,\n\n\\begin{equation}\n  \\begin{array}{ccc}\n   \\begin{array}{c}\n\t\\Bigg\\{\\frac{\\partial}{\\partial t}\\Big\\{\\overline{\\frac{1}{2}\\rho u'' u''} + \\overline{\\frac{1}{2}\\rho v'' v''} \n\t+\\overline{\\frac{1}{2}\\rho w'' w''} \\Big\\}\t\n\t+\\frac{\\partial}{\\partial x}\\Big\\{\\tilde u (\\overline{\\frac{1}{2}\\rho u'' u''} \n\t+ \\overline{\\frac{1}{2}\\rho v'' v''} + \\overline{\\frac{1}{2}\\rho w'' w''} )\\Big\\} \\\\\n\t+\\frac{\\partial}{\\partial r}\\Big\\{\\tilde v (\\overline{\\frac{1}{2}\\rho u'' u''} \n\t+ \\overline{\\frac{1}{2}\\rho v'' v''} + \\overline{\\frac{1}{2}\\rho w'' w''} )\\Big\\} \\Bigg\\}\\\\\n\n\t+ \\Big\\{ \\overline{u''}\\frac{\\partial \\overline{p}}{\\partial x} + \\overline{\\frac{\\partial}{\\partial x}(u''p')} \n\t- \\overline {p'\\frac{\\partial u''}{\\partial x}} + \\overline{v''}\\frac{\\partial \\overline{p}}{\\partial r} + \n\t\\overline{\\frac{\\partial}{\\partial r}(v''p')} - \\overline {p'\\frac{\\partial v''}{\\partial r}} \\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial x}\\Big\\{ \\overline{u''\\tau_{xx}} + \\overline{v''\\tau_{xr}} + \\overline{w''\\tau_{x\\theta}}\n\t- \\frac{1}{2}\\overline{\\rho u'' u'' u''} - \\frac{1}{2}\\overline{\\rho u'' v'' v''} \n\t- \\frac{1}{2}\\overline{\\rho u'' w'' w''}\\Big\\} \\\\\n\n\t-\\frac{\\partial}{\\partial r}\\Big\\{ \\overline{u''\\tau_{rx}} +  \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}} \n\t- \\frac{1}{2}\\overline{\\rho v'' u'' u''} - \\frac{1}{2}\\overline{\\rho v'' v'' v''}  \n\t- \\frac{1}{2}\\overline{\\rho v'' w'' w''}\\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} + \\overline{\\rho v'' u''}  \n\t\\frac{\\partial \\tilde u}{\\partial r} + \\overline{\\rho u'' v''}\\frac{\\partial \\tilde v}{\\partial x}\n\t+ \\overline{\\rho v'' v''}\\frac{\\partial \\tilde v}{\\partial r} + \\overline{\\rho u'' w''}\n\t\\frac{\\partial \\tilde w}{\\partial x}+ \\overline{\\rho v'' w''}\\frac{\\partial \\tilde w}{\\partial r} \\Big\\} \\\\\n\n\t+ \\Big\\{\\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}} + \\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}} \n\t+ \\overline{\\tau_{xr}\\frac{\\partial v''}{\\partial x}} + \\overline{\\tau_{rr}\\frac{\\partial v''}{\\partial r}}\n\t+ \\overline{\\tau_{x\\theta}\\frac{\\partial w''}{\\partial x}} \n\t+ \\overline{\\tau_{r\\theta}\\frac{\\partial w''}{\\partial r}}\\Big\\} \\\\\n\t\n\t-\\frac{1}{r}\\Big\\{\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}}\n\t+ \\overline{w''\\tau_{r\\theta}} -\\overline{v''\\tau_{\\theta \\theta}}\n\t- \\overline{\\frac{1}{2} \\rho v u'' u''} - \\overline{\\frac{1}{2} \\rho v v''v''} \n\t- \\overline{\\frac{1}{2} \\rho v w'' w''} \\Big\\}\n   \\end{array}\n  & = & 0\n  \\end{array}\t\t\n\\label{eqn:turbxrthetamom}\n\\end{equation}\n\n\twhere the above equation can be broken down into smaller groups as before,\n\n\\begin{equation}\n   \\begin{array}{ccc}\n      \\begin{array}{c}\n\t\\Bigg \\{ \\Big \\{ \\textrm{Turbulent Kinetic Energy Terms}\\Big \\} \\Bigg \\} \\\\ \\\\\n\t+ \\Big \\{ \\textrm{Pressure Terms}\\Big \\} \\\\\n\t- \\frac{\\partial}{\\partial x}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t- \\frac{\\partial}{\\partial r}\\Big\\{\\textrm{Turbulent Kinetic Energy Flux Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Production Terms}\\Big\\} \\\\\n\t+ \\Big\\{\\textrm{Dissipation Terms}\\Big\\} \\\\\n\t- \\frac{1}{r}\\Big\\{\\textrm{Axisymmetric 
Source Terms}\\Big\\}\n      \\end{array}\n\t& = & 0\n   \\end{array}\n\\label{eqn:turbgroups}\n\\end{equation}\n\n%------------------------------------------------------------------------------------------------------------------------------\n\\subsection{Simplification of the TKE Equation}\n\n\tEquation \\ref{eqn:turbxrthetamom} is the complete turbulent kinetic energy equation.  Although it contains \nnumerous unknown turbulent terms, certain simplifications can be made.  The first is to notice that the \n\\emph{Turbulent Kinetic Energy Terms} can be simplified using the definition of $k$ (Eq. \\ref{eqn:rhok}) so that they \ncan be rewritten as,\n\n\\begin{equation}\n\t\\textrm{Turbulent Kinetic Energy Terms} = \\frac{\\partial}{\\partial t}(\\overline{\\rho} k)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\rho} k) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\rho} k)\n\\label{eqn:tketerms}\n\\end{equation}\n\n\tTaking a closer look at the \\emph{Turbulent Kinetic Energy Flux Terms} and recalling the derivation of the \nturbulent energy equation, it is noted that the approximations made in Eqs. \\ref{eqn:energyxend2} and \\ref{eqn:energyrend2}\ncan be used to simplify these terms,\n\n\\begin{equation}\n\t\\textrm{Turbulent Kinetic Energy Flux Terms} = (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial x}\n\t+ (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\n\\label{eqn:tkefluxterms}\n\\end{equation}\t\n\n\tThe \\emph{Axisymmetric Source Terms} still contain the velocity $v$, which must first be decomposed into its Favre \naveraged and fluctuating components,\n\n\\begin{displaymath}\n   \\begin{array}{c}\n\t\\Big\\{\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}}\n\t+ \\overline{w''\\tau_{r\\theta}} -\\overline{v''\\tau_{\\theta \\theta}}\n\t- \\overline{\\frac{1}{2} \\rho v u'' u''} - \\overline{\\frac{1}{2} \\rho v v''v''} \n\t- \\overline{\\frac{1}{2} \\rho v w'' w''} \\Big\\} \n\t\\\\\n\t= \\Big\\{\\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}}\n\t+ \\overline{w''\\tau_{r\\theta}} -\\overline{v''\\tau_{\\theta \\theta}}\n\t- \\overline{\\frac{1}{2} \\rho (\\tilde v + v'') u'' u''} - \\overline{\\frac{1}{2} \\rho (\\tilde v + v'') v''v''} \n\t- \\overline{\\frac{1}{2} \\rho (\\tilde v + v'') w'' w''} \\Big\\} \n   \\end{array}\t\n\\end{displaymath}\n\n\twhich can be simplified using the result of Eq. \\ref{eqn:aveofave} and rearranged to\n\n\\begin{displaymath}\n   \\begin{array}{c}\n\t= \\Big\\{\\tilde v (-\\frac{1}{2} \\overline{\\rho u'' u''} - \\frac{1}{2}\\overline{\\rho v'' v''}\n\t-\\frac{1}{2}\\overline{\\rho w'' w''}) \n\t\\\\\n\t+ \\overline{u''\\tau_{rx}} + \\overline{v''\\tau_{rr}} + \\overline{w''\\tau_{r\\theta}}\n\t+ \\overline{w''\\tau_{r\\theta}} -\\overline{v''\\tau_{\\theta \\theta}}\n\t- \\overline{\\frac{1}{2} \\rho v'' u'' u''} - \\overline{\\frac{1}{2} \\rho v'' v''v''} \n\t- \\overline{\\frac{1}{2} \\rho v'' w'' w''} \\Big\\} \n   \\end{array}\n\\end{displaymath}\n\n\tIn this form the definition of $\\overline{\\rho} k$ and the approximation made in Eq. 
\\ref{eqn:energyrend2} can be\nused to simplify the above expression into\n\n\\begin{equation}\n\t\\textrm{Axisymmetric Source Terms} = -\\tilde v \\overline{\\rho} k\n\t+ (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\n\t+ \\overline{w''\\tau_{r\\theta}} - \\overline{v''\\tau_{\\theta \\theta}}\n\\label{eqn:tkesourceterms}\n\\end{equation}\n\n\tLooking at the units of the last two terms in Eq. \\ref{eqn:tkesourceterms}, we will call these terms ``Turbulent \nWork Terms'',\n\n\\begin{equation}\n\t\\textrm{Turbulent Work Terms} = \\overline{w''\\tau_{r\\theta}} - \\overline{v''\\tau_{\\theta \\theta}}\n\\label{eqn:tketurbworkterms}\n\\end{equation}\n\n\tNext we can define a quantity called the dissipation rate, $\\varepsilon$, as being equal to the \\emph{Dissipation\nTerms},\n\n\\begin{equation}\n\t\\overline{\\rho} \\varepsilon = \\overline{\\tau_{xx}\\frac{\\partial u''}{\\partial x}} \n\t+ \\overline{\\tau_{xr}\\frac{\\partial v''}{\\partial x}} \n\t+ \\overline{\\tau_{x\\theta}\\frac{\\partial w''}{\\partial x}} + \\overline{\\tau_{rx}\\frac{\\partial u''}{\\partial r}}\n\t+ \\overline{\\tau_{rr}\\frac{\\partial v''}{\\partial r}} + \\overline{\\tau_{r\\theta}\\frac{\\partial w''}{\\partial r}}\n\\label{eqn:dissipation}\n\\end{equation}\n\n\tIf one further defines the rate at which turbulent kinetic energy is created, $P_k$,  as equal to the negative of\nthe \\emph{Production Terms} (the negative is used so that $P_k$ will become positive when placed on the right side \nof the equality),\n\n\\begin{displaymath}\n\t- P_k = \\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} + \\overline{\\rho v'' u''}  \n\t\\frac{\\partial \\tilde u}{\\partial r} + \\overline{\\rho u'' v''}\\frac{\\partial \\tilde v}{\\partial x}\n\t+ \\overline{\\rho v'' v''}\\frac{\\partial \\tilde v}{\\partial r} + \\overline{\\rho u'' w''}\n\t\\frac{\\partial \\tilde w}{\\partial x}+ \\overline{\\rho v'' w''}\\frac{\\partial \\tilde w}{\\partial r}\n\\end{displaymath} \n\n\\begin{equation}\n\tP_k = -\\overline{\\rho u'' u''}\\frac{\\partial \\tilde u}{\\partial x} - \\overline{\\rho u'' v''}\n\t(\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})  \n\t- \\overline{\\rho v'' v''}\\frac{\\partial \\tilde v}{\\partial r} - \\Big\\{\\overline{\\rho u'' w''}\n\t\\frac{\\partial \\tilde w}{\\partial x}+ \\overline{\\rho v'' w''}\\frac{\\partial \\tilde w}{\\partial r}\\Big\\}\n\\label{eqn:pk}\n\\end{equation} \n\n\twhich can be rewritten using the approximations made for the various Reynolds stresses (Eqs. 
\\ref{eqn:rhouu},\n\\ref{eqn:rhouv}, \\ref{eqn:rhovv}, \\ref{eqn:rhouw}, and \\ref{eqn:rhovw}) to yield,\n\n\\begin{equation}\n\tP_k = \n\t\\begin{array}{c}\n\t\\Bigg\\{ \\mu_T\\Big\\{2\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k\\Bigg\\} \\frac{\\partial \\tilde u}{\\partial x}  \n\t+ \\Big\\{ \\mu_T(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\n\t(\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})\n\t\\\\ + \\Bigg\\{ \\mu_T\\Big\\{2\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k \\Bigg\\} \\frac{\\partial \\tilde v}{\\partial r}\n\t+ \\Bigg\\{ \\Big\\{\\mu_T \\frac{\\partial \\tilde w}{\\partial x}\\Big\\}\\frac{\\partial \\tilde w}{\\partial x}\n\t+ \\Big\\{ \\mu_T r\\frac{\\partial}{\\partial r}(\\frac{\\tilde w}{r})\\Big\\}\\frac{\\partial \\tilde w}{\\partial r} \\Bigg\\}\n\t\\end{array}\n\\label{eqn:totalpk}\n\\end{equation}\n\t\n\tAt this point a simplification will be made.  The major thrust of using the axisymmetric equations\nis to reduce the computational effort required to solve the flow, which requires that the circumferential velocity, $w$, or\nits average (in the case of turbulence), be equal to zero, thereby negating the need for the third momentum equation. \nThis assumption has already been applied implicitly by the fact that no turbulent $\\theta$ direction momentum equation\nhas been derived (although the laminar version was used \\emph{within} the derivation of the TKE equation).  Applying this\nfurther assumption to Eq. 
\\ref{eqn:totalpk} eliminates the last terms in the large curly braces leaving,\n\n\\begin{equation}\n\tP_k = \n\t\\begin{array}{c}\n\t\\Bigg\\{ \\mu_T\\Big\\{2\\frac{\\partial \\tilde{u}}{\\partial x} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k\\Bigg\\} \\frac{\\partial \\tilde u}{\\partial x}  \n\t+ \\Big\\{ \\mu_T(\\frac{\\partial \\tilde{v}}{\\partial x} + \\frac{\\partial \\tilde{u}}{\\partial r})\\Big\\}\n\t(\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})\n\t\\\\ + \\Bigg\\{ \\mu_T\\Big\\{2\\frac{\\partial \\tilde{v}}{\\partial r} - \\frac{2}{3}\n\t(\\frac{\\partial \\tilde{u}}{\\partial x} + \\frac{\\partial \\tilde{v}}{\\partial r} + \\frac{\\tilde{v}}{r})\\Big\\}\n\t-\\frac{2}{3}\\overline{\\rho}k \\Bigg\\} \\frac{\\partial \\tilde v}{\\partial r}\n\t\\end{array}\n\\label{eqn:pknotaus}\n\\end{equation}\n\n\tExamining this equation in closer detail, we note that many of the terms are simply the\nviscous stress terms with $\\mu$ replaced by $\\mu_T$ and the velocities replaced with their Favre averaged counterparts.\nDenoting these changes with $^*$ (in this case $\\tilde \\tau$ is not used as the co-efficient of viscosity is simply\n$\\mu_T$ and not $(\\overline{\\mu} + \\mu_T)$) we can write,\n\n\\begin{equation}\n\tP_k = \n\t\\begin{array}{c}\n\t(\\tau^*_{xx} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde u}{\\partial x}  \n\t+ \\tau^*_{xr} (\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})\n\t+ (\\tau^*_{rr} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde v}{\\partial r}\n\t\\end{array}\n\\label{eqn:pktaus}\n\\end{equation} \n\n\twhere the various stresses are still defined by Eqs. \\ref{eqn:taurx} - \\ref{eqn:taurrlambda} but with the \nchanges mentioned above applied.  As was done during the derivation of the laminar axisymmetric equations, in order to \nkeep all the terms unique to the axisymmetric formulation of the Navier-Stokes equations separated from the remaining \ntraditional 2D terms one can define,\n\n\\begin{equation}\n\t\\hat{\\tau}^*_{xx} = \\mu_T(\\frac{4}{3}\\frac{\\partial \\tilde u}{\\partial x}-\\frac{2}{3}\\frac{\\partial \\tilde v}{\\partial r})\n\\label{eqn:tauxxhatstar}\n\\end{equation}\n\n\\begin{equation}\n\t\\hat{\\tau}^*_{rr} = \\mu_T(-\\frac{2}{3}\\frac{\\partial \\tilde u}{\\partial x}+\\frac{4}{3}\\frac{\\partial \\tilde v}{\\partial r})\n\\label{eqn:taurrhatstar}\n\\end{equation}\n\n\twhich then allows $\\tau^*_{xx}$ and $\\tau^*_{rr}$ to be written as\n\n\\begin{equation}\n\t\\tau^*_{xx} = \\hat{\\tau}^*_{xx} - \\mu_T \\frac{2}{3} \\frac{\\tilde v}{r}\n\\label{eqn:tauxxstar}\n\\end{equation}\n\n\\begin{equation}\n\t\\tau^*_{rr} = \\hat{\\tau}^*_{rr} - \\mu_T \\frac{2}{3} \\frac{\\tilde v}{r}\n\\label{eqn:taurrstar}\n\\end{equation}\n\n\tand thus Eq. 
\\ref{eqn:pktaus} can be rewritten as,\n\n\\begin{displaymath}\n\tP_k = \n\t\\begin{array}{c}\n\t(\\hat{\\tau}^*_{xx} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde u}{\\partial x}  \n\t+ \\tau^*_{xr} (\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})\n\t+ (\\hat{\\tau}^*_{rr} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde v}{\\partial r}\n\t-\\mu_T\\frac{2}{3}\\frac{\\tilde v}{r}(\\frac{\\partial \\tilde u}{\\partial x} + \\frac{\\partial \\tilde v}{\\partial r})\n\t\\end{array}\n\\end{displaymath} \n\n\tIf we further define\n\n\\begin{equation}\n\tP_{k_{2D}} = (\\hat{\\tau}^*_{xx} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde u}{\\partial x}  \n\t+ \\tau^*_{xr} (\\frac{\\partial \\tilde u}{\\partial r} + \\frac{\\partial \\tilde v}{\\partial x})\n\t+ (\\hat{\\tau}^*_{rr} -\\frac{2}{3}\\overline{\\rho}k) \\frac{\\partial \\tilde v}{\\partial r}\n\\label{eqn:pk2d}\n\\end{equation}\n\n\tand\n\n\\begin{equation}\n\tP_{k_{be3D}} = -\\mu_T\\frac{2}{3}\\frac{\\tilde v}{r}(\\frac{\\partial \\tilde u}{\\partial x} + \\frac{\\partial \\tilde v}\n\t{\\partial r})\n\\label{eqn:pkbe3d}\n\\end{equation}\n\n\tthen the total turbulent kinetic energy production can be written as the sum of these two terms,\n\n\\begin{equation}\n\tP_k = P_{k_{2D}} + P_{k_{be3D}}\n\\label{eqn:pktauhats}\n\\end{equation}\n\n\tAs a last step, we will neglect the \\emph{Turbulent Work Terms} (Eq. \\ref{eqn:tketurbworkterms}) as well as \nthe \\emph{Pressure Terms} due to\nthe lack of a means to sufficiently approximate these quantities.  Combining the remaining results expressed in \nEqs. \\ref{eqn:tketerms}-\\ref{eqn:dissipation} and Eq. \\ref{eqn:pktauhats} into Eq. \\ref{eqn:turbxrthetamom} while further\ndefining the co-efficient $\\mu^*_k$ as,\n\n\\begin{equation}\n\t\\mu^*_k = \\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k}\n\\label{eqn:mukstar}\n\\end{equation}\n\nyields the final form of the turbulent kinetic energy equation,\n\n\\begin{equation}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho} k)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\rho} k) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\rho} k)\n\t+ \\frac{1}{r}(\\tilde v \\overline{\\rho} k)\n\t- \\frac{\\partial}{\\partial x}(\\mu^*_k\\frac{\\partial k}{\\partial x})\n\t- \\frac{\\partial}{\\partial r}(\\mu^*_k\\frac{\\partial k}{\\partial r}) \n\t- \\frac{1}{r}(\\mu^*_k\\frac{\\partial k}{\\partial r})\n\t= P_k - \\overline{\\rho} \\varepsilon\n\\label{eqn:tkefinalb}\n\\end{equation}\n\n\tIt should be noted here that if one lets \n\n\\begin{equation}\n\tS_k = P_k - \\overline{\\rho}\\varepsilon\n\\label{eqn:sk}\n\\end{equation}\n\n\tthen Eq. \\ref{eqn:tkefinalb} matches\nexactly the conservation equation derived in the laminar section (Eq. \\ref{eqn:tkefinal}) where the existence of the turbulent \nkinetic energy was simply assumed.  
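Indeed, with $S_k$ so defined, Eq. \\ref{eqn:tkefinalb} is an instance of the generic axisymmetric \nconvection-diffusion equation for a conserved scalar, written here with placeholder symbols $\\phi$, \n$\\mu_\\phi$, and $S_\\phi$ purely for comparison,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho} \\phi)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\rho} \\phi) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\rho} \\phi)\n\t+ \\frac{1}{r}(\\tilde v \\overline{\\rho} \\phi)\n\t- \\frac{\\partial}{\\partial x}(\\mu_\\phi\\frac{\\partial \\phi}{\\partial x})\n\t- \\frac{\\partial}{\\partial r}(\\mu_\\phi\\frac{\\partial \\phi}{\\partial r}) \n\t- \\frac{1}{r}(\\mu_\\phi\\frac{\\partial \\phi}{\\partial r})\n\t= S_\\phi\n\\end{displaymath}\n\n\twith $\\phi = k$, $\\mu_\\phi = \\mu^*_k$, and $S_\\phi = S_k$.\n\n\t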
This is convenient for the simple fact that one can derive the TKE equation by treating \n$k$ in a manner similar to any other conserved quantity (with the addition of a \nspecific source term) and arrive at the correct conservation equation.\n\n%----------------------------------------------------------------------------------------------------------------------------\n\\subsection{Reynolds Averaged Pressure}\n\t\n\tSome mention should be made of the fact that the pressure was decomposed into a Reynolds average and \na Reynolds fluctuating component while deriving the TKE equation, which differs from the manner in which all other flow \nvariables are treated.  Recalling\nthe sections on the derivation of the turbulent $x$ and $r$ direction momentum equations, it is noted that the pressure\nwas left un-decomposed and hence resulted in a $\\overline{p}$ term occurring in each of these equations.  Thus for consistency,\nthe pressure is not Favre averaged as this would create two pressure terms, a $\\tilde p$ and a $\\overline{p}$.  However,\nsince we are treating pressure slightly differently than the other flow variables, it is appropriate to examine this term \nin closer detail.  The equation of state can be used to define pressure, which for a multi-species perfect gas is simply\nthe sum of the individual gas species partial pressures (Dalton's Law of Partial Pressures),\n\n\\begin{equation}\n\tp = \\sum_k p_k\n\\label{eqn:dalton}\n\\end{equation}   \n\t\n\twhere the individual species partial pressures, $p_k$, also obey the equation of state for perfect gases (i.e., no\nintermolecular forces)\n\n\\begin{equation}\n\tp_k = \\rho_k R_k T = \\rho c_k R_k T\n\\label{eqn:pres_k}\n\\end{equation}\n\t\n\tDecomposing the species concentration ($c_k$) and temperature into their Favre averaged and Favre fluctuating\ncomponents while taking the Reynolds average of the entire equation yields,\n\n\\begin{displaymath}\n\t\\overline{p_k} = \\overline{\\rho (\\tilde c_k + c_k'') R_k (\\tilde T + T'')} = R_k \\Big\\{ \\overline{\\rho}\\tilde c_k \n\t\\tilde T + \\overline{\\rho c_k''}\\tilde T + \\overline{\\rho T''}\\tilde c_k + \\overline{\\rho c_k'' T''}\\Big\\}\n\\end{displaymath}\n\n\tsince $R_k$ is a constant for each species.  This equation can be simplified using Eq. \\ref{eqn:rhophidp} and then\nsubstituted into Eq. \\ref{eqn:dalton} to yield\n\n\\begin{equation}\n\t\\overline{p} = \\sum_{k} R_k \\overline{\\rho}\\tilde c_k \\tilde T + \\sum_{k}R_k\\overline{\\rho c_k'' T''}\n\\label{eqn:pbar}\n\\end{equation}\n\n\tNow to approximate the turbulent term in Eq. \\ref{eqn:pbar}, we will make use of Eq. \\ref{eqn:flucenthturb}, which\nwas based on the assumption of a calorically perfect gas (constant specific heat); hence, fluctuations in $c_p$ are \nignored even when a thermally perfect gas is assumed (i.e. when $c_p$ is a function of temperature).  \nHaving noted this fact, using Eq. 
\\ref{eqn:flucenthturb} along with the definition of the specific gas constant ($R_k$) \n\n\\begin{equation}\n\tR_k = c_{p_k} - c_{v_k}\n\\label{eqn:spgasconst}\n\\end{equation}\n\t\n\tone can write,\n\n\\begin{displaymath}\n\t\\overline{p} = \\sum_{k} R_k \\overline{\\rho}\\tilde c_k \\tilde T + \\sum_{k}R_k\\overline{\\rho c_k'' (\\frac{h_k''}{c_{p_k}})}\n\t= \\sum_{k} R_k \\overline{\\rho}\\tilde c_k \\tilde T + \\sum_{k}\\frac{(c_{p_k} - c_{v_k})}{c_{p_k}}\n\t\\overline{\\rho c_k'' h_k''}\n\\end{displaymath}\n\n\tRecalling the definition of the ratio of specific heats ($\\gamma$) for an individual gas species\n\n\\begin{equation}\n\t\\gamma_k = \\frac{c_{p_k}}{c_{v_k}}\n\\label{eqn:gamma}\n\\end{equation}\n\n\tone can simplify the expression for the Reynolds averaged pressure to,\n\n\\begin{displaymath}\n\t\\overline{p} = \\sum_{k} R_k \\overline{\\rho}\\tilde c_k \\tilde T + \\sum_{k}\\frac{\\gamma_k -1}{\\gamma_k}\n\t\\overline{\\rho c_k'' h_k''}\n\\end{displaymath}\n\n\tAs a last step, if one approximates the ratio of specific heats as being the same for all of the \nspecies (for a monatomic gas this value is 1.667 while for a diatomic gas it is 1.4, for temperatures between $\\approx\n300 - 2000$ K) then the terms involving $\\gamma_k$ can be taken out of the summation.  Recalling Eq. \\ref{eqn:rhog}\nwe note that the final result can be written as,\n\n\\begin{equation}\n\t\\overline{p} \\approx \\sum_{k} R_k \\overline{\\rho}\\tilde c_k \\tilde T + \\frac{\\gamma -1}{\\gamma}\\overline{\\rho}g\n\\label{eqn:pbarturb}\t\n\\end{equation}\n\n%---------------------------------------------------------------------------------------------------------------------------\n\\subsection{Turbulent Energy Dissipation Equation}\n\n\tIn deriving the TKE equation, an as-yet-unseen quantity was defined as the dissipation rate of turbulent kinetic\nenergy ($\\varepsilon$).  Since the purpose of developing the TKE equation was to provide closure to the set of governing \nequations (as the definition of $k$ required an additional equation to complete the system), the addition of another unknown \nquantity defeated the original goal.  However, in this section we will define one more turbulence equation which will provide \na solution for the new quantity, $\\varepsilon$, while refraining from introducing any other new variables.  Recalling \nEq. 
\ref{eqn:dissipation} we have a definition of the rate at which the turbulent kinetic energy dissipates,

\begin{displaymath}
	\overline{\rho} \varepsilon = \overline{\tau_{xx}\frac{\partial u''}{\partial x}}
	+ \overline{\tau_{xr}\frac{\partial v''}{\partial x}}
	+ \overline{\tau_{x\theta}\frac{\partial w''}{\partial x}} + \overline{\tau_{rx}\frac{\partial u''}{\partial r}}
	+ \overline{\tau_{rr}\frac{\partial v''}{\partial r}} + \overline{\tau_{r\theta}\frac{\partial w''}{\partial r}}
\end{displaymath}

	The derivation of a conservation equation for $\varepsilon$ involves decomposing the dissipation rate into two
components called the solenoidal and dilatational dissipation rates, $\varepsilon_s$ and $\varepsilon_d$ respectively,

\begin{equation}
	\varepsilon = \varepsilon_s + \varepsilon_d
\label{eqn:epsilon}
\end{equation}

	The main difference between these two terms is that the solenoidal dissipation rate depends solely on the
fluctuations in vorticity ($\vec{\omega} = \vec{\nabla} \times \vec{V}$), while the dilatational dissipation is a
compressibility effect and is postulated to relate directly to $\varepsilon_s$.  This decomposition
process has the advantage of allowing a conservation equation for $\varepsilon_s$ to be derived from the vorticity balance
equation.  Starting by defining $\varepsilon_s$ as

\begin{equation}
	\overline{\rho}\varepsilon_s = \overline{(\frac{\mu}{\rho})} \Big\{\overline{\rho \omega_x'' \omega_x''} +
	\overline{\rho \omega_r'' \omega_r''} + \overline{\rho \omega_\theta'' \omega_\theta''}\Big\} =
	\overline{(\frac{\mu}{\rho})}\sum_{i=0}^{2}\overline{\rho \omega_i''\omega_i''}
\label{eqn:epsilons}
\end{equation}

	then taking the vorticity balance equation (also referred to as the Helmholtz vorticity equation) written as,

\begin{equation}
	\frac{\partial \vec{\omega}}{\partial t} + (\vec{V} \cdot \vec{\nabla})\vec{\omega} - (\vec{\omega} \cdot \vec{\nabla})
	\vec{V} + \vec{\omega}(\vec{\nabla} \cdot \vec{V}) = \vec{\nabla} \times
	\Big\{\frac{1}{\rho}(\vec{\nabla} \cdot \vec{\tau} - \vec{\nabla}p) \Big\}
\label{eqn:helmholtz}
\end{equation}

	one can break Eq. \ref{eqn:helmholtz} down into its co-ordinate direction components to arrive at a conservation equation
for the solenoidal dissipation rate as follows.  In a fashion similar to that done for the TKE equation, one multiplies
each of these components by the factor $2\overline{(\frac{\mu}{\rho})}\rho \omega_i''$ and takes the Reynolds average of the
resulting three equations (one for each of the $x$, $r$, and $\theta$ directions); when these three equations are summed, one
obtains an equation for $\varepsilon_s$ of the form (after some simplification),

\begin{displaymath}
	\frac{\partial}{\partial t}(\overline{\rho}\varepsilon_s) + \frac{\partial}{\partial x}(\tilde u \overline{\rho}
	\varepsilon_s) + \frac{\partial}{\partial r}(\tilde v \overline{\rho}\varepsilon_s) + \frac{1}{r}
	(\overline{\rho}\tilde v \varepsilon_s) = \ldots
\end{displaymath}

	This process is not shown explicitly because the terms obtained on the right-hand side of the above
expression are very complex; they are not modeled on a term-by-term basis (as was done for the TKE equation) and thus
are not used.  
Instead, a more global approach is taken and the solenoidal dissipation rate is modeled after the
TKE equation as,

\begin{equation}
	\frac{\partial}{\partial t}(\overline{\rho}\varepsilon_s) + \frac{\partial}{\partial x}(\tilde u \overline{\rho}
	\varepsilon_s) + \frac{\partial}{\partial r}(\tilde v \overline{\rho}\varepsilon_s) + \frac{1}{r}
	(\overline{\rho}\tilde v \varepsilon_s)
	- \frac{\partial}{\partial x}(\mu_\varepsilon^* \frac{\partial \varepsilon_s}{\partial x})
	- \frac{\partial}{\partial r}(\mu_\varepsilon^*
	\frac{\partial \varepsilon_s}{\partial r}) - \frac{1}{r}(\mu_\varepsilon^*
	\frac{\partial \varepsilon_s}{\partial r})
	\approx \frac{\varepsilon_s}{k}
	(C_{\varepsilon_1}P_k - C_{\varepsilon_2}\overline{\rho}\varepsilon_s)
\label{eqn:soldissipation}
\end{equation}

	where

\begin{equation}
	\mu_\varepsilon^* = \overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}}
\label{eqn:muepsilonstar}
\end{equation}

	and both $C_{\varepsilon_1}$ and $C_{\varepsilon_2}$ are closure co-efficients.  At this point a choice must be made
as to how one wishes to complete these turbulence equations, in that the various closure co-efficients, or constants, appearing
in both the TKE and solenoidal dissipation rate equations must be determined.  There are numerous methods available in the
literature; however, the one that will be used in this case is the $k\omega$ model of Wilcox.  In order to use
this method, Eq. \ref{eqn:soldissipation} must first be modified slightly through the introduction of the specific dissipation
rate, $\omega$ (which is not to be confused with the vorticity),

\begin{equation}
	\omega = \frac{\varepsilon_s}{k}
\label{eqn:length}
\end{equation}

	which is simply the ratio of solenoidal dissipation to turbulent kinetic energy (or the rate of dissipation of turbulence
per unit of energy).  If we now use the substantial derivative defined as,

\begin{displaymath}
	\frac{D}{Dt} = \frac{\partial}{\partial t} + (\vec{V} \cdot \nabla)
\end{displaymath}

	which for cylindrical co-ordinates becomes

\begin{equation}
	\frac{D}{Dt} = \frac{\partial}{\partial t} + \tilde u \frac{\partial}{\partial x}
	+ \tilde v \frac{\partial}{\partial r} + \frac{\tilde w}{r}\frac{\partial}{\partial \theta}
\label{eqn:substderiv}
\end{equation}

	we can take the substantial derivative of $\varepsilon_s = \omega k$ (from Eq. \ref{eqn:length}) while using the
chain rule to obtain,

\begin{displaymath}
	\frac{D\varepsilon_s}{Dt} = \omega\frac{Dk}{Dt} + k\frac{D\omega}{Dt}
\end{displaymath}

	which, when multiplied by $\overline{\rho}$, divided through by $k$, and expanded using Eq. \ref{eqn:substderiv},
yields (while also applying the axisymmetric condition, Eq. \ref{eqn:dtheta}, to simplify Eq. 
\\ref{eqn:substderiv}),\n\n\\begin{displaymath}\n\t\\overline{\\rho}\\frac{D\\omega}{Dt} = \\frac{\\overline{\\rho}}{k}\\frac{D\\varepsilon_s}{Dt} \n\t-\\frac{\\overline{\\rho}}{k}\\omega\\frac{Dk}{Dt} \n\\end{displaymath}\n\n\\begin{equation}\n\t\\overline{\\rho}\\Big\\{\\frac{\\partial\\omega}{\\partial t} + \\tilde u\\frac{\\partial\\omega}{\\partial x} \n\t+ \\tilde v\\frac{\\partial\\omega}{\\partial r}\\Big\\} = \n\t\\frac{1}{k}\\overline{\\rho}\\Big\\{\\frac{\\partial\\varepsilon_s}{\\partial t} + \\tilde u\\frac{\\partial\\varepsilon_s}{\\partial x}\n\t+ \\tilde v\\frac{\\partial\\varepsilon_s}{\\partial r}\\Big\\} - \\frac{\\omega}{k}\\overline{\\rho}\n\t\\Big\\{\\frac{\\partial k}{\\partial t} + \\tilde u\\frac{\\partial k}{\\partial x} \n\t+ \\tilde v\\frac{\\partial k}{\\partial r}\\Big\\}\n\\label{eqn:diss}\n\\end{equation}\n\n\tThe above expression can be rewritten by examining the following relations,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega) & = & \\omega\\frac{\\partial}{\\partial t}(\\overline{\\rho})\n\t\t+ \\overline{\\rho}\\frac{\\partial\\omega}{\\partial t} \\\\ \\\\\n\t\t\\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega) & = & \n\t\t\\omega\\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u)+ \\overline{\\rho}\\tilde u\\frac{\\partial\\omega}{\\partial x}\n\t\t\\\\ \\\\\n\t\t\\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega) & = & \n\t\t\\omega\\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v)+ \\overline{\\rho}\\tilde v\\frac{\\partial\\omega}{\\partial r}\n\t\\end{array}\n\\end{displaymath}\n\n\twhich when summed can be written as,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega)\n\t+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega) = \\omega\\Big\\{\\frac{\\partial}{\\partial t}(\\overline{\\rho})\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u) + \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v) \\Big\\}\n\t+ \\overline{\\rho}\\Big\\{\\frac{\\partial\\omega}{\\partial t} + \\tilde u\\frac{\\partial\\omega}{\\partial x}\n\t+ \\tilde v\\frac{\\partial\\omega}{\\partial r}\\Big\\}\n\\end{displaymath}\n\n\tIt is noted that the first term in curly braces can be simplified using the global continuity equation \n(Eq. \\ref{eqn:globalcont}) thus yielding the final desired relation upon rearranging,\n\n\\begin{equation}\n\t\\overline{\\rho}\\Big\\{\\frac{\\partial\\omega}{\\partial t} + \\tilde u\\frac{\\partial\\omega}{\\partial x}\n\t+ \\tilde v\\frac{\\partial\\omega}{\\partial r}\\Big\\} = \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\omega)\n\\label{eqn:dissA}\n\\end{equation}\n\n\twhere the terms on the left hand side of Eq. \\ref{eqn:dissA} match those found on the left hand side\nof Eq. \\ref{eqn:diss}.  Similar logic can be used to obtain relations of the same form for the remaining\ntwo sets of terms in curly braces found in Eq. 
\\ref{eqn:diss},\n\n\\begin{equation}\n\t\\overline{\\rho}\\Big\\{\\frac{\\partial\\varepsilon_s}{\\partial t} + \\tilde u\\frac{\\partial\\varepsilon_s}{\\partial x}\n\t+ \\tilde v\\frac{\\partial\\varepsilon_s}{\\partial r}\\Big\\} = \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\varepsilon_s)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\varepsilon_s)\n\t+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\varepsilon_s)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\varepsilon_s)\n\\label{eqn:dissB}\n\\end{equation}\n\n\\begin{equation}\n\t\\overline{\\rho}\\Big\\{\\frac{\\partial k}{\\partial t} + \\tilde u\\frac{\\partial k}{\\partial x}\n\t+ \\tilde v\\frac{\\partial k}{\\partial r}\\Big\\} = \\frac{\\partial}{\\partial t}(\\overline{\\rho}k)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u k)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v k)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v k)\n\\label{eqn:dissC}\n\\end{equation}\n\n\tTherefore, combining the results of Eqs. \\ref{eqn:dissA} - \\ref{eqn:dissC} with Eq. \\ref{eqn:diss} one obtains,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\omega) & = &\n\t   \\begin{array}{c}\n\t\t\\frac{1}{k}\\Big\\{\\frac{\\partial}{\\partial t}(\\overline{\\rho}\\varepsilon_s)\n\t\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\varepsilon_s)\n\t\t+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\varepsilon_s)\n\t\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\varepsilon_s)\\Big\\} \\\\\n\t\t- \\frac{\\omega}{k}\\Big\\{ \\frac{\\partial}{\\partial t}(\\overline{\\rho}k)\n\t\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u k)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v k)\n\t\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v k)\\Big\\}\n\t   \\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\tAt this point it is noticed that the terms in the first set of curly braces can be replaced by \nEq. \\ref{eqn:soldissipation} while the second set of curly braces can be replaced by Eq. 
\\ref{eqn:tkefinalb} to yield,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n\t \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\omega) & = &\n\t   \\begin{array}{c}\n\t\t\\frac{1}{k}\\Big\\{\\frac{\\partial}{\\partial x}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial x}] \n\t\t+ \\frac{\\partial}{\\partial r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial r}]\n\t\t+ \\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial r}] \\\\ \n\t\t+ \\frac{\\varepsilon_s}{k}(C_{\\varepsilon_1}P_k - C_{\\varepsilon_2}\\overline{\\rho}\\varepsilon_s)\\Big\\} \\\\\n\t\t- \\frac{\\omega}{k}\\Big\\{\\frac{\\partial}{\\partial x}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial x}] \n\t\t+ \\frac{\\partial}{\\partial r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial r}]\n\t\t+ \\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial r}] \\\\\n\t\t+ P_k - \\overline{\\rho}\\varepsilon \\Big\\}\n\t   \\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\tApplying the relations expressed in Eqs. \\ref{eqn:epsilon} and \\ref{eqn:length} the above can be rearranged to\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\overline{\\rho}\\tilde u\\omega)+ \\frac{\\partial}{\\partial r}(\\overline{\\rho}\\tilde v\\omega)\n\t+ \\frac{1}{r}(\\overline{\\rho}\\tilde v\\omega) & = &\n\t   \\begin{array}{c}\n\t\t\\frac{\\omega}{k}\\Big\\{(C_{\\varepsilon_1}P_k - C_{\\varepsilon_2}\\overline{\\rho}\\varepsilon_s)\n\t\t- P_k + \\overline{\\rho}(\\varepsilon_s + \\varepsilon_d) \\Big\\} \\\\\n\t\t+ \\frac{1}{k}\\Big\\{\n\t\t\\frac{\\partial}{\\partial x}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial x}] \n\t\t+ \\frac{\\partial}{\\partial r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial r}]\n\t\t+ \\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\t\\frac{\\partial \\varepsilon_s}{\\partial r}]\\Big\\} \n\t\t\\\\\n\t\t- \\frac{\\omega}{k}\\Big\\{\\frac{\\partial}{\\partial x}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial x}] \n\t\t+ \\frac{\\partial}{\\partial r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial r}]\n\t\t+ \\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial r}] \n\t\t\\Big\\}\n\t   \\end{array}\n\t\\end{array}\n\\label{eqn:diss2}\n\\end{equation}\n\n\tFrom Eq. \\ref{eqn:length}, it is noted that the derivatives involving $\\varepsilon_s$ can be expressed in terms\nof derivatives of $\\omega$ and $k$ as follows.  
Taking the first term as an example,

\begin{displaymath}
	\frac{1}{k}\frac{\partial}{\partial x}\Big\{(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \varepsilon_s}{\partial x}\Big\} = \frac{1}{k}\frac{\partial}{\partial x}\Big\{(\overline{\mu}
	+ \frac{\mu_T}{\sigma_{\varepsilon_s}})\frac{\partial}{\partial x}(\omega k)\Big\} = \frac{1}{k}
	\frac{\partial}{\partial x}\Big\{(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	(\omega \frac{\partial k}{\partial x} + k \frac{\partial \omega}{\partial x})\Big\}
\end{displaymath}

	Applying the chain rule once again yields,

\begin{displaymath}
	= \frac{1}{k}\Bigg\{ \omega \frac{\partial}{\partial x}\Big\{(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial k}{\partial x}\Big\} + \frac{\partial k}{\partial x}(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \omega}{\partial x} + k \frac{\partial}{\partial x}\Big\{(\overline{\mu} +
	\frac{\mu_T}{\sigma_{\varepsilon_s}})\frac{\partial \omega}{\partial x}\Big\} + \frac{\partial \omega}{\partial x}
	(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})\frac{\partial k}{\partial x} \Bigg\}
\end{displaymath}

	where it is noted that the first term in the above expression can be combined with the first
$\frac{\partial k}{\partial x}$ term in Eq. \ref{eqn:diss2}, which allows the following relation to be written,

\begin{equation}
	\frac{1}{k}\Big\{\frac{\partial}{\partial x}[(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \varepsilon_s}{\partial x}] - \omega \frac{\partial}{\partial x}[(\overline{\mu} + \frac{\mu_T}{\sigma_k})
	\frac{\partial k}{\partial x}]\Big\} = \frac{\omega}{k}\frac{\partial}{\partial x}
	\Big\{(\frac{\mu_T}{\sigma_{\varepsilon_s}}
	-\frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial x}\Big\} + \frac{2}{k}(\overline{\mu}
	+ \frac{\mu_T}{\sigma_{\varepsilon_s}})\frac{\partial k}{\partial x}\frac{\partial \omega}{\partial x}
	+ \frac{\partial}{\partial x}\Big\{(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \omega}{\partial x}\Big\}
\label{eqn:simpx}
\end{equation}

	Similar logic can be applied for the $r$ derivative terms to yield,

\begin{equation}
	\frac{1}{k}\Big\{\frac{\partial}{\partial r}[(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \varepsilon_s}{\partial r}] - \omega \frac{\partial}{\partial r}[(\overline{\mu} + \frac{\mu_T}{\sigma_k})
	\frac{\partial k}{\partial r}]\Big\} = \frac{\omega}{k}\frac{\partial}{\partial r}
	\Big\{(\frac{\mu_T}{\sigma_{\varepsilon_s}}
	-\frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial r}\Big\} + \frac{2}{k}(\overline{\mu}
	+ \frac{\mu_T}{\sigma_{\varepsilon_s}})\frac{\partial k}{\partial r}\frac{\partial \omega}{\partial r}
	+ \frac{\partial}{\partial r}\Big\{(\overline{\mu} + \frac{\mu_T}{\sigma_{\varepsilon_s}})
	\frac{\partial \omega}{\partial r}\Big\}
\label{eqn:simpr}
\end{equation}

	The remaining source terms can be treated in the same manner by substituting Eq. 
\\ref{eqn:length}, \n\n\\begin{displaymath}\n\t\\frac{1}{k}\\frac{1}{r}\\Big\\{(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\\frac{\\partial \\varepsilon_s}{\\partial r}\\Big\\} = \\frac{1}{k}\\frac{1}{r}\\Big\\{(\\overline{\\mu} \n\t+ \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\\frac{\\partial}{\\partial r}(\\omega k)\\Big\\} = \\frac{1}{k} \n\t\\frac{1}{r}\\Big\\{(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t(\\omega \\frac{\\partial k}{\\partial r} + k \\frac{\\partial \\omega}{\\partial r})\\Big\\}\n\\end{displaymath}\n\t\n\twhich can be added to the remaining $\\frac{\\partial k}{\\partial r}$ source term,\n\n\\begin{displaymath}\n\t\\frac{1}{k}\\frac{1}{r}\\Big\\{\\omega(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}}) \\frac{\\partial k}{\\partial r}\n\t+ k(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}}) \\frac{\\partial \\omega}{\\partial r}\n \t- \\omega(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\\Big\\} \n\\end{displaymath}\n\t\n\tand thus be simplified to \n\n\\begin{equation}\n\t\\frac{1}{k}\\Big\\{\\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}})\n\t\\frac{\\partial \\varepsilon_s}{\\partial r}] - \\omega\\frac{1}{r}[(\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\n\t\\frac{\\partial k}{\\partial r}] \\Big\\} = \\frac{1}{r}\\Big\\{\\frac{\\omega}{k}(\\frac{\\mu_T}{\\sigma_{\\varepsilon_s}}\n\t- \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r} + (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_{\\varepsilon_s}}) \n\t\\frac{\\partial \\omega}{\\partial r}\\Big\\}\n\\label{eqn:simpsource}\n\\end{equation}\n\n\tTherefore, combining the expressions in Eqs. \\ref{eqn:simpx}-\\ref{eqn:simpsource} with Eq. \\ref{eqn:diss2} yields\nthe desired form for the specific dissipation rate equation,\n\n\\begin{displaymath}\n\t\\begin{array}{ccc}\n \t   \\begin{array}{c}\n\t \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u\\overline{\\rho}\\omega)+ \\frac{\\partial}{\\partial r}(\\tilde v\\overline{\\rho}\\omega)\n\t+ \\frac{1}{r}(\\tilde v\\overline{\\rho}\\omega) \\\\ \\\\\n\t- \\frac{\\partial}{\\partial x}(\\mu^*_{\\varepsilon}\n\t\\frac{\\partial \\omega}{\\partial x}) - \\frac{\\partial}{\\partial r}(\\mu^*_{\\varepsilon}\\frac{\\partial \\omega}{\\partial r})\n\t- \\frac{1}{r}(\\mu^*_{\\varepsilon}\\frac{\\partial \\omega}{\\partial r})\n\t   \\end{array}\n\t& = &\n\t   \\begin{array}{c}\n\t\t\\frac{\\omega}{k}\\Big\\{(C_{\\varepsilon_1}- 1)P_k - (C_{\\varepsilon_2}-1)\\overline{\\rho}\\varepsilon_s\n\t\t+ \\overline{\\rho}\\varepsilon_d \\Big\\} \\\\ \\\\\n\t\t+ \\Bigg\\{\\frac{2}{k}\\mu^*_{\\varepsilon}(\\frac{\\partial \\omega}{\\partial x}\\frac{\\partial k}{\\partial x} \n\t\t+\\frac{\\partial \\omega}{\\partial r}\\frac{\\partial k}{\\partial r}) \n\t\t\\\\ \\\\ + \\frac{\\omega}{k}\\Big\\{\n\t\t\\frac{\\partial}{\\partial x}\\big\\{(\\frac{\\mu_T}{\\sigma_{\\varepsilon_s}} - \\frac{\\mu_T}{\\sigma_k})\n\t\t\\frac{\\partial k}{\\partial x}\\big\\} \n\t\t+ \\frac{\\partial}{\\partial r}\\big\\{(\\frac{\\mu_T}{\\sigma_{\\varepsilon_s}} \n\t\t- \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\\big\\} + \\frac{1}{r}(\\frac{\\mu_T}{\\sigma_{\\varepsilon_s}} \n\t\t- \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}\t\\Big\\} \\Bigg\\}\n\t   \\end{array}\n\t\\end{array}\n\\end{displaymath}\n\n\twhere Eq. \\ref{eqn:muepsilonstar} has been applied.  
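	Before collecting the right-hand side into a single source term, it is worth noting that the diffusion identities used
above can be machine-checked.  The following SymPy sketch (our illustration, not part of the original derivation; the
function names \texttt{m} and \texttt{mk}, standing for $\overline{\mu} + \mu_T/\sigma_{\varepsilon_s}$ and
$\overline{\mu} + \mu_T/\sigma_k$, are ours) verifies the $x$ direction identity of Eq. \ref{eqn:simpx} for arbitrary
smooth fields:

\begin{verbatim}
# Symbolic check of Eq. (simpx): m(x) and mk(x) stand for the two total
# viscosities (mu_bar + mu_T/sigma_eps) and (mu_bar + mu_T/sigma_k).
import sympy as sp

x = sp.symbols('x')
k, w, m, mk = (sp.Function(f)(x) for f in ('k', 'w', 'm', 'mk'))
eps_s = w*k                               # epsilon_s = omega*k, Eq. (length)

lhs = (sp.diff(m*sp.diff(eps_s, x), x)
       - w*sp.diff(mk*sp.diff(k, x), x))/k
rhs = ((w/k)*sp.diff((m - mk)*sp.diff(k, x), x)   # mu_bar cancels here
       + (2/k)*m*sp.diff(k, x)*sp.diff(w, x)
       + sp.diff(m*sp.diff(w, x), x))

assert sp.simplify(lhs - rhs) == 0        # identity holds for any smooth k, w
\end{verbatim}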
If one now lets the terms on the right-hand side of the specific dissipation rate equation derived above
equal $S_\omega$,

\begin{equation}
	S_\omega = \begin{array}{c}
		\frac{\omega}{k}[(C_{\varepsilon_1}- 1)P_k - (C_{\varepsilon_2}-1)\overline{\rho}\varepsilon_s
		+ \overline{\rho}\varepsilon_d ]
		+ \Bigg\{\frac{2}{k}\mu^*_{\varepsilon}(\frac{\partial \omega}{\partial x}\frac{\partial k}{\partial x}
		+\frac{\partial \omega}{\partial r}\frac{\partial k}{\partial r})
		\\ + \frac{\omega}{k}\Big\{
		\frac{\partial}{\partial x}\big\{(\frac{\mu_T}{\sigma_{\varepsilon_s}} - \frac{\mu_T}{\sigma_k})
		\frac{\partial k}{\partial x}\big\}
		+ \frac{\partial}{\partial r}\big\{(\frac{\mu_T}{\sigma_{\varepsilon_s}}
		- \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial r}\big\} + \frac{1}{r}(\frac{\mu_T}{\sigma_{\varepsilon_s}}
		- \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial r}	\Big\} \Bigg\}
	   \end{array}
\label{eqn:somega}
\end{equation}

	then the specific dissipation rate equation can be written as,

\begin{equation}
	 \frac{\partial}{\partial t}(\overline{\rho}\omega)
	+ \frac{\partial}{\partial x}(\tilde u\overline{\rho}\omega)+ \frac{\partial}{\partial r}(\tilde v\overline{\rho}\omega)
	+ \frac{1}{r}(\tilde v\overline{\rho}\omega)
	- \frac{\partial}{\partial x}(\mu^*_{\varepsilon}
	\frac{\partial \omega}{\partial x}) - \frac{\partial}{\partial r}(\mu^*_{\varepsilon}\frac{\partial \omega}{\partial r})
	- \frac{1}{r}(\mu^*_{\varepsilon}\frac{\partial \omega}{\partial r})
	 =  S_\omega
\label{eqn:spdissipation}
\end{equation}

	which exactly matches the equation derived in the laminar section when $\omega$ is treated as a conserved quantity.  It
should be noted here that in the $k\omega$ turbulence model of Wilcox, the terms in the curly braces of Eq. \ref{eqn:somega} are
neglected, resulting in a source term for $\omega$ equal to,

\begin{equation}
	S_\omega = \frac{\omega}{k}\Big\{(C_{\varepsilon_1}- 1)P_k - (C_{\varepsilon_2}-1)\overline{\rho}\varepsilon_s
		+ \overline{\rho}\varepsilon_d \Big\}
\label{eqn:somega2}
\end{equation}

%----------------------------------------------------------------------------------------------------------------------------
\subsection{Wilcox $k\omega$ Turbulence Model}

	In the previous sections we derived two additional conservation equations to describe the behavior of
the turbulent quantities arising out of the Favre averaging process.  However, in doing so one is left with other
quantities that remain to be evaluated and are not part of the set of flow variables.  Thus one is required to
make further approximations in order to provide closure to the overall set of equations to be solved.  This process
can be done in numerous ways; however, this section will examine the approximations made when using the $k\omega$
turbulence model of Wilcox.  The first and most notable approximation required is the definition of the
turbulent viscosity, $\mu_T$, as this term is introduced as soon as the Reynolds stresses are
approximated.  
In this particular turbulence model $\mu_T$ is approximated as,

\begin{equation}
	\mu_T \approx \frac{9}{100}\overline{\rho}\frac{k}{\omega}
\label{eqn:mut}
\end{equation}

	In approximating the turbulent species conservation equation the turbulent Schmidt number was introduced, while
when considering the turbulent heat flux terms in the energy equation the turbulent Prandtl number appears for the first time.
These two values are approximated as,

\begin{equation}
	\begin{array}{cc}
	Sc_T \approx 1.0 & Pr_T \approx 0.9
	\end{array}
\label{eqn:sctandprt}
\end{equation}

	The remaining terms to be defined are all introduced while deriving the two turbulence equations for $k$ and
$\omega$.  The various constants found in these equations (Eqs. \ref{eqn:tkefinalb} and \ref{eqn:spdissipation})
are taken as,

\begin{equation}
	\begin{array}{cccc}
	\sigma_k \approx 2 & \sigma_{\varepsilon_s} \approx 2 & C_{\varepsilon_1} \approx \frac{14}{9} &
	C_{\varepsilon_2} \approx \frac{11}{6}
	\end{array}
\label{eqn:turbconstants}
\end{equation}

	In deriving the specific dissipation rate equation, nothing was said of the dilatational dissipation term.  In
the model of Wilcox used here, this term is simply an algebraic function of its solenoidal counterpart,

\begin{equation}
	\overline{\rho}\varepsilon_d = \frac{3}{2}\overline{\rho}\varepsilon_s \max(M^2_T - \frac{1}{16},0)
\label{eqn:dildissipation}
\end{equation}

	where $M_T$ is the turbulent Mach number defined as,

\begin{equation}
	M_T = \frac{\sqrt{2k}}{a}
\label{eqn:turbmach}
\end{equation}

	and $a$ is the familiar speed of sound.  At this point one now has all that is required to find the
turbulent kinetic energy source term $S_k$ (Eq. \ref{eqn:sk}),

\begin{displaymath}
	S_k = P_k - \overline{\rho}\varepsilon  = P_k -\overline{\rho}\varepsilon_s - \overline{\rho}\varepsilon_d
\end{displaymath}

	and the specific dissipation rate source term as defined by Eq. \ref{eqn:somega2},

\begin{displaymath}
		S_\omega = \frac{\omega}{k}\Big\{(C_{\varepsilon_1}- 1)P_k - (C_{\varepsilon_2}-1)\overline{\rho}\varepsilon_s
		+ \overline{\rho}\varepsilon_d \Big\}
\end{displaymath}

%--------------------------------------------------------------------------------------------------------------------------------
\subsection{Summary of Equations}

	The final set of axisymmetric, Favre-averaged equations can be written in exactly the same form as previously
derived in Eqs. \ref{eqn:final}-\ref{eqn:finalHv}.  
Written out on an equation by equation basis they look like the
following:

\begin{description}
	\item[Species Conservation Equation]
	\begin{displaymath}
	\frac{\partial}{\partial t}(\overline{\rho}\tilde c_k) + \frac{\partial}{\partial x}
	(\overline{\rho}\tilde c_k\tilde{u})  + \frac{\partial}{\partial r}
	(\overline{\rho}\tilde c_k\tilde{v})	+ \frac{1}{r}
	(\overline{\rho}\tilde c_k\tilde{v}) = \frac{\partial}{\partial x}(
	\nu_k^*\frac{\partial \tilde c_k}{\partial x}) +
	\frac{\partial}{\partial r} (\nu_k^*\frac{\partial \tilde c_k}{\partial r})
	+ \frac{1}{r}(\nu_k^*\frac{\partial \tilde c_k}{\partial r})
	\end{displaymath}
	\item[$x$ Direction Momentum Equation]
	\begin{displaymath}
	\frac{\partial}{\partial t}(\overline{\rho} \tilde{u}) + \frac{\partial}{\partial x}(\overline{\rho} \tilde{u}^2 +
	[\overline{p}+\frac{2}{3}\overline{\rho}k]) +
	\frac{\partial}{\partial r}(\overline{\rho} \tilde{u}\tilde{v}) + \frac{1}{r}(\overline{\rho} \tilde{u} \tilde{v}) =
	\frac{\partial}{\partial x}(\hat{\tilde{\tau}}_{xx}) + \frac{\partial}{\partial r}(\tilde{\tau}_{rx})
	+ \frac{1}{r}(\tilde{\tau}_{rx}) - \frac{2}{3}\frac{\partial}{\partial x}(\mu^* \frac{\tilde v}{r})
	\end{displaymath}
	\item[$r$ Direction Momentum Equation]
	\begin{displaymath}
	\frac{\partial}{\partial t}(\overline{\rho}\tilde{v}) + \frac{\partial}{\partial x}
	(\overline{\rho}\tilde{u}\tilde{v}) + \frac{\partial}{\partial r}(\overline{\rho}\tilde{v}^2 +
	[\overline{p} + \frac{2}{3}\overline{\rho}k])
	+ \frac{1}{r}(\overline{\rho}\tilde{v}^2) =
	\end{displaymath}
	\begin{displaymath}
	\frac{\partial}{\partial x}(\tilde\tau_{xr}) + \frac{\partial}{\partial r}(\hat{\tilde\tau}_{rr}) +
	\frac{1}{r}(\hat{\tilde\tau}_{rr} - \tilde\tau_{\theta \theta} -\frac{2}{3}\mu^*\frac{\tilde v}{r}) - \frac{2}{3}
	\frac{\partial}{\partial r}(\mu^*\frac{\tilde v}{r})
	\end{displaymath}
	\item[Energy Equation]
	\begin{displaymath}
	\begin{array}{c}
		\frac{\partial}{\partial t}(\overline{\rho}E) +
		\frac{\partial}{\partial x}\Big\{\tilde u(\overline{\rho}E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\}
		 \\ +
		\frac{\partial}{\partial r}\Big\{\tilde v(\overline{\rho}E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\} +
		\frac{1}{r}\Big\{\tilde v(\overline{\rho}E + [\overline{p} + \frac{2}{3}\overline{\rho}k])\Big\}
	\end{array} =
	\begin{array}{c}
		\frac{\partial}{\partial x}\Big\{
		\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial x} + \kappa^*\frac{\partial \tilde T}
		{\partial x} + \hat{\tilde \tau}_{xx} \tilde u + \tilde \tau_{xr} \tilde v
	 	\\
		+ \tilde \tau_{x\theta} \tilde w + (\overline{\mu} + \frac{\mu_T}{\sigma_g})\frac{\partial g}{\partial x}
		+ (\overline{\mu} + \frac{\mu_T}{\sigma_k})\frac{\partial k}{\partial x} \Big\}
		\\ \\ +
		\frac{\partial}{\partial r}\Big\{
		\sum_{k}\tilde h_k \nu_k^* \frac{\partial \tilde c_k}{\partial r} + \kappa^*\frac{\partial \tilde T}
		{\partial r} + \tilde \tau_{rx} \tilde u + \hat{\tilde \tau}_{rr} \tilde v
	 	\\
		+ \tilde \tau_{r\theta} \tilde w + (\overline{\mu} + \frac{\mu_T}{\sigma_g})\frac{\partial g}{\partial r}
		+ (\overline{\mu} + 
\\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r} \\Big\\} \n\t\t\\\\ \\\\+ \n\t\t\\frac{1}{r}\\Big\\{\n\t\t\\sum_{k}\\tilde h_k \\nu_k^* \\frac{\\partial \\tilde c_k}{\\partial r} + \\kappa^*\\frac{\\partial \\tilde T}\n\t\t{\\partial r} + \\tilde \\tau_{rx} \\tilde u + \\hat{\\tilde \\tau}_{rr} \\tilde v  \n\t \t\\\\\n\t\t+ \\tilde \\tau_{r\\theta} \\tilde w+ (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_g})\\frac{\\partial g}{\\partial r}\n\t\t+ (\\overline{\\mu} + \\frac{\\mu_T}{\\sigma_k})\\frac{\\partial k}{\\partial r}-\\frac{2}{3}\\mu^*\\frac{\\tilde v^2}{r} \n\t\t\\Big\\}\n\t\t\\\\\n\t\t-\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu^*\\frac{\\tilde u \\tilde v}{r}) \n\t\t-\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu^*\\frac{\\tilde v^2}{r})  \n\t\\end{array}\n\t\\end{displaymath}\n\t\\item[Turbulent Kinetic Energy Equation]\n\t\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\overline{\\rho} k)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u \\overline{\\rho} k) + \\frac{\\partial}{\\partial r}(\\tilde v \\overline{\\rho} k)\n\t+ \\frac{1}{r}(\\tilde v \\overline{\\rho} k)\n\t- \\frac{\\partial}{\\partial x}(\\mu^*_k\\frac{\\partial k}{\\partial x})\n\t- \\frac{\\partial}{\\partial r}(\\mu^*_k\\frac{\\partial k}{\\partial r}) \n\t- \\frac{1}{r}(\\mu^*_k\\frac{\\partial k}{\\partial r}) = S_k\n\t\\end{displaymath}\t\n\t\\item[Turbulent Energy Dissipation Rate Equation]\n\t\\begin{displaymath}\n\t \\frac{\\partial}{\\partial t}(\\overline{\\rho}\\omega)\n\t+ \\frac{\\partial}{\\partial x}(\\tilde u\\overline{\\rho}\\omega)+ \\frac{\\partial}{\\partial r}(\\tilde v\\overline{\\rho}\\omega)\n\t+ \\frac{1}{r}(\\tilde v\\overline{\\rho}\\omega) \\\\ \\\\\n\t- \\frac{\\partial}{\\partial x}(\\mu^*_{\\varepsilon}\n\t\\frac{\\partial \\omega}{\\partial x}) - \\frac{\\partial}{\\partial r}(\\mu^*_{\\varepsilon}\\frac{\\partial \\omega}{\\partial r})\n\t- \\frac{1}{r}(\\mu^*_{\\varepsilon}\\frac{\\partial \\omega}{\\partial r})\n\t =  S_\\omega\n\t\\end{displaymath}\n\\end{description}\n\n\twhere it must be remembered that the stress terms although still defined by Eqs. \\ref{eqn:taurx},\n\\ref{eqn:tauthetatheta}, \\ref{eqn:tauxxhat}, \\ref{eqn:taurrhat}, \\ref{eqn:tauxtheta}, and \\ref{eqn:taurtheta} are\nnow understood to be evaluated using Favre averaged quantities while using the total turbulent co-efficient of \nviscosity as defined by Eq. 
\ref{eqn:mustar}.
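	For readers implementing the above closure, the algebraic relations of the Wilcox $k\omega$ model collapse into a
few lines of code.  The sketch below is a minimal Python illustration of ours (it is not part of any solver; the
function name and argument list are assumptions made for the example) evaluating $\mu_T$, $S_k$, and $S_\omega$ at a
single point from Eqs. \ref{eqn:mut}, \ref{eqn:length}, \ref{eqn:dildissipation}, \ref{eqn:turbmach},
\ref{eqn:sk}, and \ref{eqn:somega2}, given the averaged state and the production term $P_k$:

\begin{verbatim}
# Minimal sketch of the Wilcox k-omega closure relations (illustrative only).
def wilcox_closure(rho, k, w, a, P_k):
    """Return (mu_T, S_k, S_omega) for one grid point.

    rho : Reynolds averaged density       k : turbulent kinetic energy
    w   : specific dissipation rate       a : speed of sound
    P_k : TKE production term
    """
    C_e1, C_e2 = 14.0/9.0, 11.0/6.0       # closure coefficients, Eq. (turbconstants)
    mu_T  = (9.0/100.0)*rho*k/w           # turbulent viscosity, Eq. (mut)
    eps_s = w*k                           # solenoidal dissipation, Eq. (length)
    M_T2  = 2.0*k/a**2                    # squared turbulent Mach number, Eq. (turbmach)
    eps_d = 1.5*eps_s*max(M_T2 - 1.0/16.0, 0.0)   # dilatational part, Eq. (dildissipation)
    S_k = P_k - rho*(eps_s + eps_d)                      # Eq. (sk)
    S_w = (w/k)*((C_e1 - 1.0)*P_k
                 - (C_e2 - 1.0)*rho*eps_s + rho*eps_d)   # Eq. (somega2)
    return mu_T, S_k, S_w
\end{verbatim}

	Note that for turbulent Mach numbers below $1/4$ (i.e., $M_T^2 \leq 1/16$) the dilatational contribution vanishes
and the source terms reduce to their solenoidal-only forms.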
{"text": "\n\\begin{abstract}\n\tThis article introduces a generalization of  the discrete optimal transport, with applications to color image manipulations. This new formulation includes a relaxation of the mass conservation constraint and a regularization term.  These two features are crucial for image processing tasks, which necessitate to take into account families of multimodal histograms, with large mass variation across modes.  \t \n\t The corresponding relaxed and regularized transportation problem is the solution of a convex optimization problem. Depending on the regularization used, this minimization can be solved using standard linear programming methods or first order proximal splitting schemes.\n\t The resulting transportation plan can be used as a color transfer map, which is robust to mass variation across image color palettes. Furthermore, the regularization of the transport plan helps remove colorization artifacts due to noise amplification.\n\tWe also extend this framework to compute the barycenter of distributions. The barycenter is the solution of an optimization problem, which is separately convex with respect to the barycenter and the transportation plans, but not jointly convex. A block coordinate descent scheme converges to a stationary point of the energy. We show that the resulting algorithm can be used for color normalization across several images. The relaxed and regularized barycenter defines a common color palette for those images. Applying color transfer toward this average palette performs a color normalization of the input images.  \n\\end{abstract}\n\n\\begin{keywords}Optimal Transport, color transfer, variational regularization, convex optimization, proximal splitting, manifold learning.\\end{keywords}\n\n\\begin{AMS}90C25, 68U10\\end{AMS}\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Introduction}\n\nA large class of image processing problems involves probability densities estimated from local or global image features. In contrast to most distances from information theory (e.g. the Kullback-Leibler divergence), optimal transport (OT) takes into account the spatial location of the density modes~\\cite{Villani03}. Furthermore, it also provides as a by-product a warping (the so-called transport plan) between the densities. This plan can be used to perform image modifications such as color transfer. However, an important flaw of this OT plan is that it is in general highly irregular, thus introducing unwanted artifacts in the modified images. In this article, we propose a variational formalism to relax and regularize the transport. This novel regularized OT improves visually the results for color image modifications. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Color Normalization and Color Transfer}\n\nThe problem of imposing some histogram on an image has been tackled since the beginning of image processing. Classic problems are histogram equalization or histogram specification (see for example~\\cite{Gonzalez:2001}). Given two images, the goal of color transfer is to impose on one of the images the histogram of the other one. An approach to color transfer based on matching statistical properties (mean and covariance) is proposed by Reinhard et al.~\\cite{Reinhard01} for the $\\ell\\al\\beta$ color space, and generalized by Xiao and Ma~\\cite{Xiao:2006} to any color space. Wang and Huang~\\cite{WangH04} use similar ideas to generate a sequence of the same image with a changing histogram.  
Morovic and Sun~\\cite{Morovic03} and Delon~\\cite{Delon04} show that histogram transfer is directly related to the OT problem.\n\nA special case of color transfer is color normalization where the goal is to impose the same histogram, normally some ``average'' histogram, on a set of different images. An application for the color balancing of videos is proposed by Delon~\\cite{Delon:2006} to correct flickering in old movies. In the context of canceling illumination, this problem is also known as color constancy and it has been thoroughly studied by Land and McCann who propose the Retinex theory (see~\\cite{Land:71} and~\\cite{Amestoy09} for a modern formulation). Canceling the illumination of a scene is an important component in the computer vision pipeline, and it is regularly used as a preprocessing to register/compare several images taken with different cameras or illumination conditions, as a preprocessing before registration, see~\\cite{Csink98} for instance. \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Optimal Transport and Imaging}\n\\label{subsec-ot-imaging}\n\n%%\n\\paragraph{Discrete optimal transport}\n\nThe discrete OT is the solution of a convex linear program originally introduced by Kantorovitch~\\cite{Kantorovitch-OT}. It corresponds to the convex relaxation of a combinatorial problem when the densities are sums of the same number of Dirac masses. This relaxation is tight (i.e. the solution of the linear program is an assignment) and it extends the notion of OT to an arbitrary sum of weighted Diracs, see for instance~\\cite{Villani03}. Although there exist dedicated linear solvers (transportation simplex~\\cite{Dantzig-Book}) and combinatorial algorithms (such as the Hungarian~\\cite{Kuhn-hungarian} and auction algorithms~\\cite{Bertsekas1988}), computing OT is still a challenging task for densities composed of thousands of Dirac masses. \n\n%%\n\\paragraph{Optimal transport and its associated distance}\n\nThe OT distance (also known as the Wasserstein distance or the Earth Mover distance) has been shown to produce state of the art results for the comparison of statistical descriptors, see for instance~\\cite{Rubner98}. Image retrieval performance as well as computational time are both greatly improved by using non-convex cost functions, see~\\cite{Pele-ICCV}. \n\nAnother line of applications of OT makes use of the transport plan to warp an input density onto another. OT is strongly connected to fluid dynamic partial differential equations~\\cite{Benamou00}. These connections have been used to perform image registration~\\cite{haker-ijcv}.  The estimation of the transport plan is also an interesting way of tackling  the challenging problem of color transfer between images, see for instance~\\cite{Reinhard01,Morovic03,McCollum07}. For grayscale images, the usual histogram equalization algorithm corresponds to the application of the 1-D OT plan to an image, see for instance~\\cite{Delon04}. It thus makes sense to consider the 3-D OT as a mathematically-sound way to perform color palette transfer, see for instance~\\cite{Pitie07} for an approximate transport method. When doing so, it is important to cope with variations in the modes of the color palette across images, which makes the mass conservation constraint of OT problematic. A workaround is to consider parametric densities such as Gaussian mixtures and defines ad-hoc matching between the components of the mixture, see~\\cite{Tai-cvpr-colortransfer}. 
In our work, we tackle this issue by defining a novel notion of OT well adapted to color manipulation. \n\n%%\n\\paragraph{Optimal transport barycenter}\n\nIt is natural to extend the classical barycenter of points to barycenter of densities by minimizing a weighted sum of OT distances toward a family of input distributions. In the special case of two input distributions, this corresponds to the celebrated displacement interpolation defined by McCann~\\cite{mccann1997convexity}. Existence and uniqueness of such a barycenter is proved by Agueh and Carlier~\\cite{Carlier_wasserstein_barycenter}, which also show the equivalence with the multi-marginal transportation problem introduced by Gangbo and {\\'S}wi\\c{e}ch~\\cite{gangbo1998optimal}.  Displacement interpolation (i.e. barycenter between a pair of distributions) is used by Bonneel et al.~\\cite{Bonneel-displacement} for computer graphics applications.  Rabin et al.~\\cite{Rabin_ssvm11} apply this OT barycenter for texture synthesis and mixing. The image mixing is achieved by computing OT barycenters of empirical distributions of wavelet coefficients.  A similar approach is proposed by Ferradans et al.~\\cite{2013-ssvm-mixing} for static and dynamic texture mixing using Gaussian distributions. \\\\\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Regularized and relaxed transport}\n\\label{subsec-regul-intro}\n\n\n% Another kind of artifacts can nevertheless be still observed with these last approaches. Pixels that were originally close in the color space can be very different after the color transfer. Generalizing the Optimal Transport framework with regularity priors on the transport map would therefore be a good solution in this case. Few theoretical results exist on the regularity of the transport map \\cite{}, and no satisfying non combinatorial algorithms have been proposed up to our knowledge. \n\n\n%%\n\\paragraph{Removing transport artifacts}\n\nThe OT map between complicated densities is usually irregular. Using directly this transport plan to perform color transfer creates artifacts and amplifies the noise in flat areas of the image. Since the transfer is computed over the 3-D color space, it does not take into account the pixel-domain regularity of the image. The visual quality of the transfer is thus improved by denoising the resulting transport using a pixel-domain regularization either as a post-processing~\\cite{Papadakis_ip11} or by solving a variational problem~\\cite{Papadakis_ip11,Rabin_icip11}.\n\n\n% The perfect transfer of color is not satisfying in real applications and one may generally observe outliers in the final images. Indeed, as the transfer is realized in the color space, it does not take into account the fact that coherent colors should be transferred to neighbor pixels. As a consequence, methods have been proposed to consider the spatial nature of images and model some regularity priors on the image domain. In \\cite{Papadakis_ip11}, the color transfer is formalized as an energy minimization problem in the image domain, which allows directly incorporating spatial regularization of the colors. The energy then involves the $L_2$ distance between cumulated color histograms instead of relying on the Wasserstein distance. 
The post-regularization of the  image color has also been proposed in \\cite{Rabin_ip11}, where the color transfer is realized with the Sliced Wasserstein Distance.\n\n\n%%\n\\paragraph{Transport regularization}\n\nA more theoretically grounded way to tackle the problem of colorization artifacts should use directly a regularized OT. This corresponds to adding a regularization penalty to the OT energy. This however leads to difficult non-convex variational problems, that have not yet been solved in a satisfying manner either theoretically or numerically. The only theoretical contribution we are aware of is the recent work of Louet and Santambrogio~\\cite{louet-regularizaton-1d}. They show that in 1-D the (un-regularized) OT is also the solution of the Sobolev regularized transport problem.\n%%\n\n%%\n\\paragraph{Graph regularization and matching}\n\nFor imaging applications, we use regularizations built on top of a graph structure connecting neighboring points in the input density. This follows ideas introduced in manifold learning~\\cite{isomap}, that have been applied to various image processing problems, see for instance~\\cite{elmoataz-graph}. Using graphs enables us to design regularizations that are adapted to the geometry of the input density, that often has a manifold-like structure. \n\nThis idea of graph-based regularization of OT can be interpreted as a soft version of the graph matching problem, which is at the heart of many computer vision tasks, see~\\cite{Belongie-graph-match,Yefeng-graph-match}. Graph matching is a quadratic assignment problem, known to be NP-hard to solve.  Similarly to our regularized OT formulation, several convex approximations have been proposed, including for instance linear programming~\\cite{Almohamad-graph-match} and SDP programming~\\cite{schellewald-ivc}. \n\n%%\n\\paragraph{Transport relaxation}\n\nThe result of Louet and Santambrogio~\\cite{louet-regularizaton-1d} is deceiving from the applications point of view, since it shows that, in 1-D, no regularization is possible if one  maintains a 1:1 assignment between the two densities. This is our first motivation for introducing a relaxed transport which is not a bijection between the densities.  Another (more practical) motivation is that relaxation is crucial to solve imaging problems such as color transfer. Indeed, the color distributions of natural images are multi-modals. An ideal color transfer should match the modes together. This cannot be achieved by classical OT because these modes often do not have the same mass. A typical example is for two images with strong foreground and background dominant colors (thus having bi-modal densities) but where the proportion of pixels in foreground and background are not the same. Such simple examples cannot be handled properly with OT. Allowing a controlled variation of the matched densities thus requires an appropriate relaxation of the mass conservation constraint. Mass conservation relaxation is related to the relaxation of the bijectivity constraint in graph matching, for which a convex formulation is proposed in~\\cite{Zaslavskiy-graph-match}. \n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Contributions}\n\nIn this paper, we generalize the discrete formulation of OT to tackle the two major flaws that we just mentioned: i) the lack of regularity of the transport and ii) the need for a relaxed matching between densities. 
Our main contribution is the integration of these two properties in a unified variational formulation to compute a regular transport map between two empirical densities. The corresponding optimization problem is convex and can be solved using standard convex optimization procedures. We propose two optimization algorithms adapted to the different class of regularizations. We apply this framework to the color transfer problem and obtain results that are comparable to the state of the art. \nOur second contribution takes advantage of the proposed regularized OT energy to compute the barycenter of several empirical densities. \nWe develop a block-coordinate descent method that converges to a stationary point of the non-convex barycenter energy. We show an application\n to color normalization between a set of photographs. Numerical results show the relevance of these approaches to imaging problems. The matlab code to reproduce the figures of this article is available online\\footnote{\\url{https://github.com/siraferradans/ColorTransfer}}.\n\nPart of this work was presented at the conference SSVM 2013~\\cite{2013-ssvm-regul-ot}. \n\n\n", "meta": {"hexsha": "8a6f4434a90b93a6c137b444a96b81b563555f09", "size": 14777, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/sec-intro.tex", "max_stars_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_stars_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_stars_repo_licenses": ["CECILL-B"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-06-27T03:15:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-19T17:21:04.000Z", "max_issues_repo_path": "paper/sections/sec-intro.tex", "max_issues_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_issues_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_issues_repo_licenses": ["CECILL-B"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/sec-intro.tex", "max_forks_repo_name": "gpeyre/2013-SIIMS-regularized-ot", "max_forks_repo_head_hexsha": "4d20033657717e3e0d744e3ce95fbc9afc6e5096", "max_forks_repo_licenses": ["CECILL-B"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2016-10-12T17:29:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-04T01:52:32.000Z", "avg_line_length": 149.2626262626, "max_line_length": 1266, "alphanum_fraction": 0.8012451783, "num_tokens": 3041, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.5580146719417504}}
{"text": "\\documentclass{article}\n\\usepackage{enumerate}\n\\usepackage{amsmath, amsthm, amssymb}\n\\usepackage[margin=1in]{geometry}\n\\usepackage[parfill]{parskip}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\n\\title{Econ C103 Problem Set 4}\n\\author{Sahil Chinoy}\n\\date{February 14, 2017}\n\n\\begin{document}\n\\maketitle{}\n\n\\subsection*{Exercise 1}\n\n\\begin{enumerate}[(a)]\n\t\\item\n\n\tLet $\\hat{m}_i = \\max_{j \\neq i} \\{m_j\\}$ denote the maximum bid submitted by agents $j \\neq i$.\n\n\t\\begin{itemize}\n\n\t\t\\item\n\n\t\tThe set of possible messages for each agent is $M_i = \\mathbb{R}_+$.\n\n\t\t\\item\n\n\t\tThe allocation is\n\n\t\t$$x_i = \n\t\t\t\\begin{cases} \n\t\t      1 & \\text{if } m_i > \\hat{m}_i \\\\\n\t\t      0 & \\text{otherwise}\n\t\t   \\end{cases}.\n\t\t$$\n\n\t\t\\item\n\n\t\tThe transfer is\n\n\t\t$$t_i = \n\t\t\t\\begin{cases} \n\t\t      \\hat{m}_i & \\text{if } m_i > \\hat{m}_i \\\\\n\t\t      0 & \\text{otherwise}\n\t\t   \\end{cases}.\n\t\t$$\n\n\t\\end{itemize}\n\n\t\\item\n\n\tWe proceed by cases.\n\n\t\\begin{enumerate}[1.]\n\n\t\t\\item\n\n\t\tIf $\\theta_i > \\hat{m}_i$, then\n\n\t\t\\begin{itemize}\n\t\t\t\\item Bids $m_i > \\theta_i$ result in winning the auction with positive payoff $\\theta_i - \\hat{m}_i > 0$.\n\t\t\t\\item Bids $\\hat{m}_i < m_i < \\theta_i$ result in winning the auction with positive payoff $\\theta_i - \\hat{m}_i > 0$.\n\t\t\t\\item Bids $m_i < \\hat{m}_i$ result in losing the auction with payoff 0.\n\t\t\\end{itemize}\n\n\t\tSo the utility-maximizing bid is any $m_i > \\hat{m}_i$.\n\n\t\t\\item\n\n\t\tIf $\\theta_i < \\hat{m}_i$, then\n\n\t\t\\begin{itemize}\n\t\t\t\\item Bids $m_i > \\hat{m}_i$ result in winning the auction with negative payoff $\\theta_i - \\hat{m}_i < 0$.\n\t\t\t\\item Bids $\\theta_i < m_i < \\hat{m}_i$ result in losing the auction with payoff 0.\n\t\t\t\\item Bids $m_i < \\theta_i$ result in losing the auction with payoff 0.\n\t\t\\end{itemize}\n\n\t\tSo the utility-maximizing bid is any $m_i < \\hat{m}_i$.\n\n\t\\end{enumerate}\n\n\t\\item\n\n\tBidding truthfully, i.e. $m_i = \\theta_i$, maximizes the agent's utility. We can see this from the previous section: If $\\theta_i > \\hat{m}_i$, then any bid $m_i > \\hat{m}_i$ is optimal, so $m_i = \\theta_i$ is optimal. Likewise, if $\\theta_i < \\hat{m}_i$, then any bid $m_i < \\hat{m}_i$ is optimal, so $m_i = \\theta_i$ is optimal. \n\n\t\\item\n\n\tWe have just shown that the strategy of bidding truthfully, $s_i(\\theta_i) = \\theta_i$, is optimal for each agent independent of what the other agents do. Bidding truthfully is thus by definition a dominant strategy equilibrium. It is a \\textit{unique} dominant strategy equilibrium because any deviation from this bid $s_i(\\theta_i) = \\theta_i \\pm \\epsilon$ would not be optimal in some cases ($\\theta_i + \\epsilon$ is not optimal if $\\theta_i < \\hat{m}$ and $\\theta_i - \\epsilon$ is not optimal if $\\theta_i > \\hat{m}$).\n\n\tThe expected revenue is thus the expected value of the second of $n$ draws from the distribution of types $F$. For $\\theta$ to be the second-highest bid, we need exactly $n-2$ of the other $n-1$ agents to bid less than $\\theta$, which occurs with probability $F(\\theta)^{(n-2)}$, and we need exactly one of the other $n-1$ agents to bid more than $\\theta$, which occurs with probability $(n-1)(1-F(\\theta))$. 
So\n\n\t\\begin{equation*}\n\t\\mathbb{E}[t] = \\int \\limits_0^{\\bar{\\theta}} \\theta f(\\theta) F(\\theta)^{(n-2)} (n-1)(1-F(\\theta)) \\; d\\theta\n\t\\end{equation*}\n\n\t\\item\n\n\tNo. Consider the best response of player 1. If all other players bid 0, then player 1 could bid $\\bar{\\theta}$ and win the auction with transfer 0. But if one other player bids $m$, and $\\theta_1 < m$, then the best response is to bid $m_1 < m$. So, the best response of player 1 depends on the behavior of other players; bidding $\\bar{\\theta}$ is not always optimal and thus this not a dominant strategy equilibrium.\n\n\t\\item\n\n\tYes. Player 1 expects $\\hat{m}_1 = 0$, so any $m_1 > 0$ results in winning the auction, with positive payoff. The other players expect $\\hat{m}_i = \\bar{\\theta}$, so they expect to lose the auction no matter what message they send; any $m_i \\in [0, \\bar{\\theta}]$ results in payoff 0. Each agent's strategy is optimal given their expectation of the other agents' strategies, so this is a Bayes-Nash equilibrium.\n\n\t\\item\n\n\tGiven this strategy profile, the good will be allocated to player 1 with transfer $\\hat{m}_1 = 0$, so the expected revenue is 0.\n\n\\end{enumerate}\n\n\\subsection*{Exercise 2}\n\n\\begin{enumerate}[(a)]\n\n\t\\item\n\n\tThe intuition is that if $s_i$ is the best response to \\textit{every} possible set of messages $m_{-i}$, it is also the best response to the expectation of the Bayes-Nash equilibrium set of messages $\\mathbb{E} [s_{-i}(\\theta) \\; | \\; \\theta]$.\n\n\tFormally, if $s_i$ is a dominant strategy equilibrium, then\n\n\t\\begin{equation*}\n\t\\forall m_{-i}: s_i(\\theta_i) \\in \\argmax_{m_i \\in M_i} u(a(m_i, m_{-i}), \\theta_i) \n\t\\end{equation*}\n\n\tso\n\n\t\\begin{equation*}\n\t\\forall m_{i} \\; \\forall m_{-i}: u(a(s_i(\\theta_i), m_{-i}), \\theta_i) > u(a(m_i, m_{-i}), \\theta_i).\n\t\\end{equation*}\n\n\tThen\n\n\t\\begin{equation*}\n\t\\forall \\theta_i \\; \\forall m_{i} : u(a(s_i(\\theta_i), s_{-i}(\\theta_i)), \\theta_i) > u(a(m_i, s_{-i}(\\theta_i)), \\theta_i).\n\t\\end{equation*}\n\n\tand if $\\theta_i$ is distributed with density $f(\\theta_i)$ and support $[\\underline{\\theta_i}, \\bar{\\theta_i}]$\n\n\t\\begin{equation*}\n\t\\forall m_{i}: \\int \\limits_{\\underline{\\theta_i}}^{\\bar{\\theta_i}} f(\\theta_i) \\; u(a(s_i(\\theta_i), s_{-i}(\\theta_i)), \\theta_i) \\; d\\theta_i > \\int \\limits_{\\underline{\\theta_i}}^{\\bar{\\theta_i}} f(\\theta_i) \\; u(a(m_i, s_{-i}(\\theta_i)), \\theta_i) \\; d\\theta_i.\n\t\\end{equation*}\n\n\tThus\n\n\t\\begin{equation*}\n\t\\forall m_{i} : \\mathbb{E} [u(a(s_i(\\theta), s_{-i}(\\theta_i)), \\theta_i) \\; | \\; \\theta_i ] > \\mathbb{E} [u(a(m_i, s_{-i}(\\theta_i)), \\theta_i) \\; | \\; \\theta_i ]\n\t\\end{equation*}\n\n\tand\n\n\t\\begin{equation*}\n\ts_i(\\theta_i) \\in \\argmax_{m_i \\in M_i} \\mathbb{E} [u(a(m_i, s_{-i}(\\theta_i)), \\theta_i) \\; | \\; \\theta_i ]\n\t\\end{equation*}\n\n\tso $s_i$ is a Bayes-Nash equilibrium.\n\n\t\\item\n\n\tConsider a mechanism in which two agents $A$ and $B$ are each allocated a good if and only if they send the same message from the set $\\{ 0, 1 \\}$. Formally, $M_i = \\{0, 1\\}$, $x_i = 1$ if $m_A = m_B = 0$ or $m_A = m_B = 1$, otherwise $x_i = 0$. Assume the agents both have positive valuation for the good and that there are no transfers, i.e. $t_i = 0$.\n\n\tThen the best response for $A$ is $m_A = 0$ if $m_B = 0$, or $m_A = 1$ if $m_B = 1$. The situation is symmetric for $B$. 
\n\tThere is no response that is optimal \\textit{independent} of what the other agent does; thus there is no dominant strategy equilibrium.\n\n\tThere are, however, two pure-strategy Bayes-Nash equilibria: $m_A = m_B = 1$ and $m_A = m_B = 0$.\n\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "9e8b270db121a383d75ec5065e79dc63358439f6", "size": 6628, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hw4/hw4.tex", "max_stars_repo_name": "sahilchinoy/econ103", "max_stars_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-08T22:59:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-08T22:59:12.000Z", "max_issues_repo_path": "hw4/hw4.tex", "max_issues_repo_name": "sahilchinoy/econ103", "max_issues_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw4/hw4.tex", "max_forks_repo_name": "sahilchinoy/econ103", "max_forks_repo_head_hexsha": "ab2ecbb759eb811e953157e7f04f5a003066a62c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-10-29T10:06:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-27T14:30:33.000Z", "avg_line_length": 39.4523809524, "max_line_length": 523, "alphanum_fraction": 0.6658117079, "num_tokens": 2295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455588, "lm_q2_score": 0.8056321889812553, "lm_q1q2_score": 0.5580146668303455}}
{"text": "\\documentclass[letterpaper,landscape]{article}\r\n\\usepackage{minitoc}\r\n\\usepackage{fullpage,enumitem,amsmath,amssymb,graphicx,algpseudocode,algorithm,array}\r\n\\usepackage{tikz}\r\n\\usepackage{mathtools}\r\n\\usepackage{minted}\r\n\\usepackage{multicol}\r\n\\usepackage{fancyhdr}\r\n\\usepackage{lastpage}\r\n\\usepackage{setspace}\r\n\\usepackage{titlesec}\r\n\\usepackage[margin=12pt,top=30pt,headsep=6pt,headheight=12pt]{geometry}\r\n\r\n\\setminted{fontsize=\\footnotesize,\r\n           frame=single,framesep=6pt,\r\n           linenos,numbersep=6pt,xleftmargin=12pt,\r\n           breaklines=true}\r\n\\usemintedstyle{colorful}\r\n\r\n\\pagestyle{fancy}\r\n\\fancyhf{}\r\n\\lhead{Stanford Cardinal '16 Notebook}\r\n\\rhead{\\thepage/24}\r\n\\chead{\\leftmark}\r\n\r\n\\titleformat{\\section}{\\normalfont\\large\\bfseries}{\\thesection}{1em}{\\setstretch{0.1}}\r\n\\titleformat{\\subsection}{\\normalfont\\large\\bfseries}{\\thesubsection}{1em}{\\setstretch{0.1}}\r\n\r\n%\\title{Stanford Cardinal '16 Notebook}\r\n\\begin{document}\r\n\r\n\\begingroup\r\n\\let\\cleardoublepage\\clearpage\r\n\\endgroup\r\n\r\n\\begin{multicols*}{2}\r\n\r\n  %\\raggedcolumns  \r\n\r\n  \\tableofcontents\r\n  \\clearpage\r\n  \r\n  \\section{Graph Algorithms}\r\n\r\n  \\subsection{SAP Maximum flow}\r\n  \\inputminted{cpp}{src/graph_maxflow.cpp}\r\n  \r\n  \\subsection{Dinic's maximum flow}\r\n  \\inputminted{cpp}{src/graph_dinic.cpp}\r\n\r\n  \\subsection{Min-cost max-flow}\r\n  \\inputminted{cpp}{src/graph_mincost_maxflow.cpp}\r\n\r\n  \\subsection{Min-cost max-flow 2}\r\n  \\inputminted{cpp}{src/graph_mincost_maxflow_2.cpp}\r\n\r\n  \\subsection{Kuhn-Munkres bipartite matching}\r\n  \\inputminted{cpp}{src/KM.cpp}\r\n\r\n  \\subsection{Hopcroft-Karp bipartite matching}\r\n  \\inputminted{cpp}{src/graph_hopcroft_karp.cpp}\r\n\r\n  \\subsection{Biconnected components}\r\n  \\inputminted{cpp}{src/graph_bcc.cpp}\r\n\r\n  \\subsection{Tarjan's SCC, articulation points, bridges}\r\n  \\inputminted{cpp}{src/graph_scc.cpp}\r\n  \r\n  \\subsection{Global min-cut}\r\n  \\inputminted{cpp}{src/Stoer-Wagner.cpp}\r\n\r\n  \\subsection{Constructing Euler Tour}\r\n  \\inputminted{cpp}{src/Euler.cpp}\r\n  \r\n  \\section{Math}\r\n  \r\n  \\subsection{Combinatorial formulas}\r\n  \r\n   $\\sum_{k=0}^{n}k^{2}=n(n+1)(2n+1)/6$\\\\\r\n   $\\sum_{k=0}^{n}k^{3}=n^{2}(n+1)^{2}/4$\\\\\r\n   $\\sum_{k=0}^{n}k^{4}=(6n^{5}+15n^{4}+10n^{3}-n)/30$\\\\\r\n   $\\sum_{k=0}^{n}k^{5}=(2n^{6}+6n^{5}+5n^{4}-n^{2})/12$\\\\\r\n   $\\sum_{k=0}^{n}x^{k}=(x^{n+1}-1)/(x-1)$\\\\\r\n   $\\sum_{k=0}^{n}kx^{k}=(x-(n+1)x^{n+1}+nx^{n+2})/(x-1)^{2}$\\\\\r\n   ${n \\choose k}=\\frac{n!}{(n-k)!k!}$\\\\\r\n   ${n \\choose k}={n-1 \\choose k}+{n-1 \\choose k-1}$\\\\\r\n   ${n \\choose k}=\\frac{n}{n-k}{n-1 \\choose k}$\\\\\r\n   ${n \\choose k}=\\frac{n-k+1}{k}{n \\choose k-1}$\\\\\r\n   ${n+1 \\choose k}=\\frac{n+1}{n-k+1}{n \\choose k}$\\\\\r\n   ${n \\choose k+1}=\\frac{n-k}{k+1}{n \\choose k}$\\\\\r\n   $\\sum_{k=1}^{n}k\\tbinom{n}{k}=n2^{n-1}$\\\\\r\n   $\\sum_{k=1}^{n}k^{2}\\tbinom{n}{k}=(n+n^{2})2^{n-2}$\\\\\r\n   ${m+n \\choose r}=\\sum_{k=0}^{r}{m \\choose k}{n \\choose r-k}$\\\\\r\n   ${n \\choose k}=\\prod_{i=1}^{k}\\frac{n-k+i}{i}$\\\\\r\n   \r\n\r\n%  \\subsection{Miller-Rabin primality testing \\& Pollard-Rho factorization}\r\n%  \\inputminted{cpp}{src/math_pollard_rho.cpp}\r\n  \r\n  \\subsection{Number theory identities}\r\n \\textbf{Lucas' Theorem:} For non-negative integers $m$ and $n$ and a prime $p$,\r\n \r\n 
$$\\binom{m}{n}\\equiv\\prod_{i=0}^k\\binom{m_i}{n_i}\\pmod p,$$\r\nwhere\r\n$$m=m_kp^k+m_{k-1}p^{k-1}+\\cdots +m_1p+m_0$$\r\nis the base $p$ representation of $m$, and similarly for $n$.\r\n  \r\n  \\inputminted{cpp}{src/math_finite_fields.cpp}\r\n  \\inputminted{cpp}{src/primes.cpp}\r\n  \r\n  \\subsection{Burnside's Lemma}\r\n  Let $G$ be a finite group that acts on a set $X$. For each $g$ in $G$ let $X^g$ denote the set of elements in $X$ that are fixed by $g$, which means $X^g=\\{x\\in X| g(x)=x\\}$. Burnside's lemma asserts the following formula for the number of orbits, denoted $|X/G|$:\r\n  \\begin{align*}\r\n  |X/G|=\\frac{1}{|G|} \\sum_{g\\in G} |X^g|\r\n  \\end{align*}\r\n  For example, the number of necklaces of $n$ beads in $k$ colours, counted up to rotation, is $\\frac{1}{n}\\sum_{d|n}\\varphi(d)k^{n/d}$.\r\n  \r\n  \\subsection{NTT}\r\n  \\inputminted{cpp}{src/math_ntt.cpp}\r\n\r\n  \\subsection{FFT}\r\n  \\inputminted{cpp}{src/math_fft.cpp}\r\n\r\n  \\subsection{Simplex}\r\n  \\inputminted{cpp}{src/math_simplex.cpp}\r\n\r\n  \\subsection{Numerical integration}\r\n  RK4: to integrate $\\dot{y} = f(t, y)$ with $y_0 = y(t_0)$, compute\r\n  \\begin{align*}\r\n  \tk_1 &= f(t_n, y_n) \\\\\r\n    k_2 &= f(t_n + \\frac h 2, y_n + \\frac h 2 k_1) \\\\\r\n    k_3 &= f(t_n + \\frac h 2, y_n + \\frac h 2 k_2) \\\\\r\n    k_4 &= f(t_n + h, y_n + h k_3) \\\\\r\n    y_{n+1} &= y_n + \\frac h 6 (k_1 + 2k_2 + 2k_3 + k_4) \r\n  \\end{align*}
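\r\n  A minimal sketch of one step (ours, not from a \\texttt{src/} file):\r\n  \\begin{minted}{cpp}\r\n// One classical RK4 step for dy/dt = f(t, y).\r\ntemplate <class F>\r\ndouble rk4_step(F f, double t, double y, double h) {\r\n  double k1 = f(t, y);\r\n  double k2 = f(t + h / 2, y + h / 2 * k1);\r\n  double k3 = f(t + h / 2, y + h / 2 * k2);\r\n  double k4 = f(t + h, y + h * k3);\r\n  return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);\r\n}\r\n  \\end{minted}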
"ext": "tex", "lang": "TeX", "max_stars_repo_path": "ExtraCode/stanford-cardinal-16/notebook.tex", "max_stars_repo_name": "Alex7Li/ICPCNotebook", "max_stars_repo_head_hexsha": "c6ff4abace39913ff49b4c683a79adbcf008abf7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ExtraCode/stanford-cardinal-16/notebook.tex", "max_issues_repo_name": "Alex7Li/ICPCNotebook", "max_issues_repo_head_hexsha": "c6ff4abace39913ff49b4c683a79adbcf008abf7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ExtraCode/stanford-cardinal-16/notebook.tex", "max_forks_repo_name": "Alex7Li/ICPCNotebook", "max_forks_repo_head_hexsha": "c6ff4abace39913ff49b4c683a79adbcf008abf7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-09T22:25:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-09T22:25:27.000Z", "avg_line_length": 29.2824074074, "max_line_length": 266, "alphanum_fraction": 0.6545454545, "num_tokens": 2276, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.7905303137346444, "lm_q1q2_score": 0.5579444692696442}}
{"text": "\n\\section{Algorithm}\n\\label{sect:algorithm}\n\\index{algorithm}\n\n\nTo illustrate how the event selection algorithm works\nconsider the decay \n$B\\rightarrow D^{*} \\tau \\bar\\nu$,\n$D^* \\rightarrow D\\pi$, and $\\tau \\to \\pi \\nu$.\nThe general case is a straight forward generalization of \nthis example. The decay amplitude can be written as\n\\def\\AmpB{A^{B\\rightarrow D^*\\tau\\nu}_{\\lambda_{D^*}\\lambda_{\\tau}}}\n\\def\\AmpDstar{A^{D^*\\rightarrow D\\pi}_{\\lambda_{D^*}}}\n\\def\\Amptau{A^{\\tau\\rightarrow \\pi\\nu}_{\\lambda_{\\tau}}}\n\\begin{equation}\nA=\\sum_{\\lambda_{D^*}\\lambda_{\\tau}}\\AmpB\n\\times \\AmpDstar\n\\times \\Amptau,\n\\label{Eq:ampeq}\n\\end{equation}\nwhere $\\lambda^{\\phantom '}_{D^*}$ and $\\lambda^{\\phantom '}_{\\tau}$ label the states of spin degrees of freedom of the\n$D^{*}$ and the $\\tau$, respectively. Thus, \n$\\AmpB$ represents the decay amplitude for $B \\to D^* \\tau \\nu$\nfor the six different combinations of $D^*$ and\n$\\tau$ states. \n\nA possible implementation of Eq.~\\ref{Eq:ampeq} is to generate kinematics according to phase\nspace for the entire decay chain and to calculate the probability,\nthe amplitude squared, which is \nused in an accept-reject algorithm.  This approach has two serious\nlimitations.  First, the maximum probability of the decay chain\nmust be known.  This is logicistally difficult given the large\nnumber of potential decay chains in $B$ decays.  Second, for long\ndecay chains the accept-reject algorithm can be very inefficient as\nthe entire chain must be regenerated if the event is rejected. \nWe have implemented an algorithm that generates a decay chain as a \nsequence of sub-decays, thus avoiding both of these limitations.\n\nFirst the decay of the $B$ is considered. Kinematics are generated\naccording to phase space and the probability is calculated\n\\begin{equation}\nP_B=\\sum_{\\lambda^{\\phantom '}_{D^*}\\lambda^{\\phantom '}_{\\tau}}|\\AmpB|^2.\n\\end{equation}\nThe kinematics are regenerated until the event \npasses an accept-reject algorithm based on $P_B$. \nAfter decaying the $B$ we form the spin density matrix\n\\begin{equation}\n\\rho^{D^*}_{\\lambda^{\\phantom '}_{D^*}\\lambda^{'}_{D^*}}=\\sum_{\\lambda^{\\phantom '}_{\\tau}}\\AmpB\n[{A^{B\\rightarrow D^*\\tau\\nu}_{\\lambda^{'}_{D^*}\\lambda^{\\phantom '}_{\\tau}}}]^*,\n\\end{equation}\nwhich describes a $D^*$ from the $B \\to D^* \\tau \\nu$ decay after summing over\nthe degrees of freedom for the $\\tau$.\nTo generate the $D^* \\to D \\pi$ decay, proceed as with the $B$,\nincluding also $\\rho^{D^*}$\n\\begin{equation}\nP_{D^*}=\n{{1} \\over {{\\rm Tr}\\,\\rho^{D^*}}} \n\\sum_{\\lambda^{\\phantom '}_{D^*} \\lambda^{'}_{D^*}}\n\\rho^{D^*}_{\\lambda^{\\phantom '}_{D^*}\\lambda^{'}_{D^*}} \n\\AmpDstar\n[{A^{D^*\\rightarrow D\\pi}_{\\lambda^{'}_{D^*}}}]^*,\n\\label{Eq:probdstar}\n\\end{equation}\nwhere the scale factor, $1/{\\rm Tr}\\,\\rho^{D^*}$, \nis proportional to the decay rate, and does not affect the \nangular distributions.  This scale factor makes\nthe maximum decay probability of each sub-decay \nindependent of the full decay chain.\n\nFinally, we decay the $\\tau$. 
We form the density matrix\n\\begin{equation}\n\\tilde\\rho^{D^*}_{\\lambda^{\\phantom '}_{D^*}\\lambda^{'}_{D^*}}=\\AmpDstar\n[{A^{D^*\\rightarrow D\\pi}_{\\lambda^{'}_{D^*}}}]^*,\n\\end{equation}\nwhich encapsulates the information about the $D^*$ decay\nneeded to properly decay the $\\tau$ with the full correlations between\nall kinematic variables in the decay. \nUsing the $\\tilde\\rho^{D^*}$ matrix we calculate the spin density\nmatrix of the $\\tau$\n\\begin{equation}\n\\rho^{\\tau}_{\\lambda^{\\phantom '}_{\\tau}\\lambda^{'}_{\\tau}}=\\sum_{\\lambda^{\\phantom '}_{D^*}\\lambda^{'}_{D^*}}\n\\tilde\\rho^{D^*}_{\\lambda^{\\phantom '}_{D^*}\\lambda^{'}_{D^*}}\n\\AmpB[{A^{B\\rightarrow D^*\\tau\\nu}_{\\lambda^{'}_{D^*}\\lambda^{'}_{\\tau}}}]^*.\n\\end{equation}\nAs in the other decays, kinematics are generated according to phase space\nand the accept-reject is based on the probability calculated \nas in Eq.~\\ref{Eq:probdstar}, replacing $D^*$ with $\\tau$.\n\n\nThe example above conveys the idea; in general, consider the decay\n\\begin{equation}\nA\\rightarrow B_1B_2...B_N,\n\\end{equation}\nwhere the amplitudes are denoted by\n\\begin{equation}\nA^{A\\rightarrow B_1B_2...B_N}_{\\lambda_A\\lambda_{B_1}\\lambda_{B_2}...\\lambda_{B_N}}.\n\\end{equation}\nThe probability, $P_A$, for this decay is given by\n\\begin{equation}\nP_A=\\sum_{\\lambda_A\\lambda'_{A}\\lambda_{B_1}...\\lambda_{B_N}}\n\t\\rho^{A}_{\\lambda_A\\lambda'_{A}}\n\tA^{A\\rightarrow B_1B_2...B_N}_{\\lambda_A\\lambda_{B_1}\\lambda_{B_2}...\\lambda_{B_N}}\n[A^{A\\rightarrow B_1B_2...B_N}_{\\lambda'_{A}\\lambda_{B_1}\\lambda_{B_2}...\\lambda_{B_N}}]^*,\n\\end{equation}\nrescaled, as in Eq.~\\ref{Eq:probdstar}, by $1/{\\rm Tr}\\,\\rho^{A}$ in the accept-reject step.\nThe forward spin-density matrix $\\rho^{B_i}$, given that $B_j$, $j<i$,\nhave been decayed and have backward spin-density matrices $\\hat\\rho^{B_j}$,\nis given by\n\\begin{equation}\n\\rho^{B_i}_{\\lambda_{B_i}\\lambda'_{B_i}}=\n\t\\sum_{\\lambda_A\\lambda'_{A}\n\t\\lambda_{B_1}...\\lambda_{B_{i-1}}\\lambda_{B_{i+1}}...\\lambda_{B_N} \\atop\n\t\\lambda'_{B_1}...\\lambda'_{B_{i-1}}}\n\t\\rho^{A}_{\\lambda_A\\lambda'_{A}}\n\t\\hat\\rho^{B_1}_{\\lambda_{B_1}\\lambda'_{B_1}}...\n\t\\hat\\rho^{B_{i-1}}_{\\lambda_{B_{i-1}}\\lambda'_{B_{i-1}}}\n\tA^{A\\rightarrow B_1B_2...B_N}_{\\lambda_A\\lambda_{B_1}\\lambda_{B_2}...\\lambda_{B_N}}\n[A^{A\\rightarrow B_1B_2...B_N}_{\\lambda'_{A}\\lambda'_{B_1}...\\lambda'_{B_{i}}\\lambda_{B_{i+1}}...\\lambda_{B_N}}]^*.\n\\end{equation}\nAfter all the $B_i$ have been decayed, the backward spin-density matrix is given by\n\\begin{equation}\n\\hat\\rho^{A}_{\\lambda_{A}\\lambda'_{A}}=\n\t\\sum_{\\lambda_{B_1}...\\lambda_{B_N} \\atop\n\t\\lambda'_{B_1}...\\lambda'_{B_{N}}}\n\t\\hat\\rho^{B_1}_{\\lambda_{B_1}\\lambda'_{B_1}}...\n\t\\hat\\rho^{B_{N}}_{\\lambda_{B_{N}}\\lambda'_{B_N}}\n\tA^{A\\rightarrow B_1B_2...B_N}_{\\lambda_A\\lambda_{B_1}\\lambda_{B_2}...\\lambda_{B_N}}\n[A^{A\\rightarrow B_1B_2...B_N}_{\\lambda'_{A}\\lambda'_{B_1}...\\lambda'_{B_N}}]^*.\n\\end{equation}\n
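\nAs an illustration of this bookkeeping (a sketch of ours, not the generator source; the amplitude function and its bound are stand-ins), the accept-reject step and the forward spin-density matrix for the $B$ decay of the example might look like:\n\\begin{verbatim}\n# Illustrative sketch; amp_B stands in for the physical amplitude\n# A[lambda_D*, lambda_tau] evaluated at a phase-space point.\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef amp_B(kin):\n    lam_d = np.arange(3)[:, None]      # D* helicities\n    lam_t = np.arange(2)[None, :]      # tau helicities\n    return np.exp(1j * kin[0] * lam_d) * (1.0 + kin[1] * lam_t)\n\nP_MAX = 20.0  # bound on P_B for this dummy amplitude\n\nwhile True:                            # accept-reject on the B decay alone\n    kin = rng.uniform(size=2)          # stand-in for phase-space generation\n    A = amp_B(kin)\n    P_B = np.sum(np.abs(A) ** 2)\n    if rng.uniform() < P_B / P_MAX:\n        break\n\n# Forward spin-density matrix of the D*, tau degrees of freedom summed over:\nrho_Dstar = np.einsum(\"dt,et->de\", A, A.conj())\n\\end{verbatim}\n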
", "meta": {"hexsha": "56ad62a886b172e7ebee0cbe48ae17f946c47b62", "size": 11449, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "EvtGen1_06_00/doc/evt_algorithm.tex", "max_stars_repo_name": "klendathu2k/StarGenerator", "max_stars_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-24T19:37:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T06:57:20.000Z", "max_issues_repo_path": "EvtGen1_06_00/doc/evt_algorithm.tex", "max_issues_repo_name": "klendathu2k/StarGenerator", "max_issues_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EvtGen1_06_00/doc/evt_algorithm.tex", "max_forks_repo_name": "klendathu2k/StarGenerator", "max_forks_repo_head_hexsha": "7dd407c41d4eea059ca96ded80d30bda0bc014a4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.0748031496, "max_line_length": 119, "alphanum_fraction": 0.7112411564, "num_tokens": 3708, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303137346446, "lm_q2_score": 0.7057850340255386, "lm_q1q2_score": 0.5579444643774258}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{February 24, 2014}\n\\maketitle\n\n\\section*{lesson 12}\n\\subsection*{definition}\nGiven $f(x)$ on $\\mathbb{R}$.\n\\begin{align*}\n  \\mathcal{F}[f]&=F(\\xi)=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{e^{-i\\xi x}f(x)\\,\\mathrm{d}x}\n\\end{align*}\nInverse transform recovers $f(x)$ from $F(\\xi)$:\n\\begin{align*}\n  f(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{e^{+i\\xi x}F(\\xi)\\,\\mathrm{d}\\xi}&\\text{Deep Theorem}\n\\end{align*}\n\\subsection*{problem}\npage 93\n\\begin{align*}\n  PDE&&u_t&=\\alpha ^2u_{xx}&-\\infty<&x<\\infty&0<&t<\\infty\\\\\n  IC&&u(x,0)&=\\phi(x)&-\\infty<&x<\\infty\n\\end{align*}\napply $\\mathcal{F}$ to pde. $U(\\xi,t)=\\mathcal{F}[u(x,t)]$. Use property 3 (derivative): $\\mathcal{F}[u_{xx}]=$\n\nHow are $\\mathcal{F}[f']$ and $\\mathcal{F}[f]$ related? page 91.\n\\begin{align*}\n  \\mathcal{F}[f']&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{e^{-i\\xi x}f'(x)\\,\\mathrm{d}x}\\\\\n  &=\\frac{1}{\\sqrt{2\\pi }}\\left[e^{-i\\xi x}f(x)\\right]_{-\\infty}^{+\\infty}-\\int_{-\\infty}^{+\\infty}{(-i\\xi)e^{-i\\xi x}f(x)\\,\\mathrm{d}x}\\\\\n  \\intertext{\\emph{note:} implicit conditions on $f(x)$ to insure integrals exist $f(+\\infty)=f(-\\infty)=0$}\n  &=0+i\\xi\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{e^{-i\\xi x}f(x)\\,\\mathrm{d}x}\\\\\n  \\intertext{property 3}\n  \\mathcal{F}[f']&=i\\xi\\mathcal{F}[f]\\\\\n  \\mathcal{F}[f'']&=\\xi^2\\mathcal{F}[f]\n\\end{align*}\nSo $\\mathcal{F}[u_{xx}]=-\\xi^2U(\\xi,t)$. $\\frac{\\mathrm{d}U}{\\mathrm{d}t}=-\\alpha ^2\\xi^2U$ with $U(\\xi,0)=\\phi(\\xi)$\n\n\\subsubsection*{step 2}\nsolve prolem $U(\\xi,t)=\\phi(\\xi)e^{-\\alpha ^2\\xi^2 t}$.\n\\subsubsection*{step 3}\ninvert transform\n\n\\subsection*{property 4}\n\\subsubsection*{convolution theorem}\ndefinition: given $f(x),gx)$ on $\\mathbb{R}$\n\\begin{align*}\n  f*g(x)&=1\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-s)g(s)\\,\\mathrm{d}s}\\\\\n  \\intertext{theorem}\n  \\mathcal{F}[f*g(x)]&=\\mathcal{F}[f]\\mathcal{F}[g]=F(\\xi)G(\\xi)&\\text{deep theorem}\n\\end{align*}\nsample calculation: find the convolution of two functions. Text example\n\\begin{align*}\n  \\left.\\begin{aligned}\n    f(x)&=x\\\\\n    g(x)&=e^{-x^2}\n  \\end{aligned}\\right\\}&f*g(x)=\\frac{x}{\\sqrt{2}}\\\\\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-u)g(u)\\,\\mathrm{d}u}\\\\\n  &=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{(x-u)e^{-u^2}\\,\\mathrm{d}u}\\\\\n  &=\\frac{1}{\\sqrt{2\\pi }}\\left[x\\int_{-\\infty}^{+\\infty}{e^{-u^2}\\,\\mathrm{d}u}-\\int_{-\\infty}^{+\\infty}{ue^{-u^2}\\,\\mathrm{d}u}\\right]&\\text{odd integral}\\\\\n  &=\\frac{x}{\\sqrt{2\\pi }}\\sqrt{\\pi }\n  f(x)&=e^{-x^2}\\\\\n  g(x)&=x\\\\\n  f*g(x)&=\\frac{1}{\\sqrt{\\pi }}\\int_{-\\infty}^{+\\infty}{e^{-(x-u)^2}u\\,\\mathrm{d}u}\\\\\n  \\intertext{property:$f*g(x)=g*f(x)$ commutativity}\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-u)g(u)\\,\\mathrm{d}u}\\\\\n  u&=-\\infty\\qquad\\gets x-u\\\\\n  x-u&=+\\infty\n\\end{align*}\nnote that the definition in the book has a negative $i$ and mathematica doesn't. and then that makes the signs flipped on the inverse transform as well. 
\\subsection*{property 4}\n\\subsubsection*{convolution theorem}\nDefinition: given $f(x), g(x)$ on $\\mathbb{R}$,\n\\begin{align*}\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-s)g(s)\\,\\mathrm{d}s}\\\\\n  \\intertext{theorem}\n  \\mathcal{F}[f*g(x)]&=\\mathcal{F}[f]\\mathcal{F}[g]=F(\\xi)G(\\xi)&\\text{deep theorem}\n\\end{align*}\nSample calculation (text example): find the convolution of two functions.\n\\begin{align*}\n  \\left.\\begin{aligned}\n    f(x)&=x\\\\\n    g(x)&=e^{-x^2}\n  \\end{aligned}\\right\\}&\\quad f*g(x)=\\frac{x}{\\sqrt{2}}\\\\\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-u)g(u)\\,\\mathrm{d}u}\\\\\n  &=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{(x-u)e^{-u^2}\\,\\mathrm{d}u}\\\\\n  &=\\frac{1}{\\sqrt{2\\pi }}\\left[x\\int_{-\\infty}^{+\\infty}{e^{-u^2}\\,\\mathrm{d}u}-\\int_{-\\infty}^{+\\infty}{ue^{-u^2}\\,\\mathrm{d}u}\\right]&&\\text{second integral odd}\\\\\n  &=\\frac{x}{\\sqrt{2\\pi }}\\sqrt{\\pi }=\\frac{x}{\\sqrt{2}}.\n\\end{align*}\nWith the roles reversed, $f(x)=e^{-x^2}$ and $g(x)=x$,\n\\begin{align*}\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{e^{-(x-u)^2}u\\,\\mathrm{d}u},\n\\end{align*}\nwhich gives the same answer: convolution is commutative, $f*g(x)=g*f(x)$, since the substitution $v=x-u$ ($\\mathrm{d}u=-\\mathrm{d}v$, with $v$ running from $+\\infty$ to $-\\infty$ as $u$ runs from $-\\infty$ to $+\\infty$) gives\n\\begin{align*}\n  f*g(x)&=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(x-u)g(u)\\,\\mathrm{d}u}=\\frac{1}{\\sqrt{2\\pi }}\\int_{-\\infty}^{+\\infty}{f(v)g(x-v)\\,\\mathrm{d}v}=g*f(x).\n\\end{align*}\nNote that the definition in the book has a negative $i$ and Mathematica's doesn't, which flips the signs on the inverse transform as well. Careful!\n\nFor the heat problem this yields\n\\begin{align*}\n  u(x,t)&=\\int_{-\\infty}^{\\infty}{\\phi(x-u)\\frac{1}{\\sqrt{\\pi}}\\frac{1}{2\\alpha \\sqrt{t}}e^{-\\frac{u^2}{4\\alpha ^2t}}\\,\\mathrm{d}u},\\\\\n  \\intertext{where the kernel is positive with unit area,}\n  \\int_{-\\infty}^{\\infty}{\\frac{1}{\\sqrt{\\pi}}\\frac{1}{2\\alpha \\sqrt{t}}e^{-\\frac{u^2}{4\\alpha ^2t}}\\,\\mathrm{d}u}&=1,\n\\end{align*}\nso $u(x,t)$ is a weighted average of the initial data $\\phi$.\n\\end{document}\n", "meta": {"hexsha": "0f2ea6ff8a9a4b333e75af3dc422ee1cf2573f96", "size": 3651, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-notes-2014-02-24.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-notes-2014-02-24.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-notes-2014-02-24.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.9879518072, "max_line_length": 232, "alphanum_fraction": 0.5962749932, "num_tokens": 1632, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.7905303137346444, "lm_q1q2_score": 0.5579444594852072}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\begmath 2.11 Finite Legendre Series\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThis subroutine computes the value of a finite sum of Legendre polynomials,\n\\begin{equation*}\ny=\\sum_{j=0}^{\\text{N}}a_jP_j(x)\n\\end{equation*}\nfor a specified summation limit, N, argument, $x$, and sequence of\ncoefficients, $a_j$. The Legendre polynomials are defined in \\cite{ams55:or-poly}.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf N}\n\n\\item[REAL]  \\ {\\bf X, Y, A}($0:m\\geq $ N)\n\\end{description}\n\nAssign values to X, N, and A(0), A(1), ... A(N).\n$$\n\\fbox{{\\bf CALL SLESUM (S, N, A, Y)}}\n$$\nThe sum will be stored in Y.\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] Argument of the polynomials.\n\n\\item[N]  \\ [in] Highest degree of polynomials in sum.\n\n\\item[A()]  \\ [in] The coefficients must be given in A(J), J = 0, ..., N.\n\n\\item[Y]  \\ [out] Computed value of the sum.\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nFor double precision usage, change the REAL statement to DOUBLE PRECISION\nand change the subroutine name from SLESUM to DLESUM.\n\n\\subsection{Examples and Remarks}\n\nSee DRSLESUM and ODSLESUM for an example of the usage of SLESUM. DRSLESUM\nevaluates the following identity, the coefficients of which were obtained\nfrom Table~22.9, page~798, of~\\cite{ams55:or-poly}.\n\\begin{equation*}\nz = y - w = 0,\n\\end{equation*}\nwhere\n\\begin{multline*}\ny = 0.07P_0(x) + 0.27P_1(x) + 0.20P_2(x)\\\\\n+ 0.28P_3(x) + 0.08P_4(x) + 0.08P_5(x),\n\\end{multline*}\nand\n\\begin{equation*}\nw = 0.35x^4 + 0.63x^5.\n\\end{equation*}\n\n\\subsection{Functional Description}\n\nThe sum is evaluated by the following algorithm:\n\\begin{gather*}\nb_{N+2}=0,\\quad b_{N+1}=0,\\\\\nb_k=\\frac{2k+1}{k+1}b_{k+1}x-\\frac{k+1}{k+2}b_{k+2}+a_k,\\ \\ k=N,...,0,\\\\\ny=b_0.\n\\end{gather*}\nFor an error analysis applying to this algorithm see \\cite{Ng:1968:DSS} and\n\\cite{Ng:1971:RAC}. The first four Legendre polynomials are \\begin{gather*}\nP_0(x) = 1,\\quad P_1(x) = x,\\\\\nP_2(x) = 1.5x^2 - 0.5,\\quad P_3(x) = 2.5x^3 - 1.5x.\n\\end{gather*}\nFor $k \\geq 2$ the Legendre polynomials satisfy the recurrence\n\\begin{equation*}\nkP_k(x) = (2k-1)xP_{k-1}(x) - (k-1)P_{k-2}(x).\n\\end{equation*}\n\nThe Legendre polynomials are orthogonal relative to integration over the\ninterval [$-$1,~1] and are normally used only with an argument, $x$, in this\ninterval.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nThe subroutine will return Y = 0 if N $< 0$. It is recommended that $x$\nsatisfy $|x| \\leq 1.$\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.2in} {\\bf Required Files}\\vspace{2pt} \\\\\nDLESUM & \\hspace{.35in}DLESUM\\rule[-5pt]{0pt}{8pt}\\\\\nSLESUM & \\hspace{.35in}SLESUM\\\\\n\\end{tabular}\n\nBased on a 1974 program by E. W. Ng, JPL. Present version by C. L. Lawson\nand S. Y. 
\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nThe subroutine will return Y = 0 if N $< 0$. It is recommended that $x$\nsatisfy $|x| \\leq 1.$\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.2in} {\\bf Required Files}\\vspace{2pt} \\\\\nDLESUM & \\hspace{.35in}DLESUM\\rule[-5pt]{0pt}{8pt}\\\\\nSLESUM & \\hspace{.35in}SLESUM\\\\\n\\end{tabular}\n\nBased on a 1974 program by E. W. Ng, JPL. Present version by C. L. Lawson\nand S. Y. Chiu, JPL, 1983.\n\n\n\\begcodenp\n\n\\medskip\n\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRSLESUM}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{slesum}}\n\n\\vspace{30pt}\\centerline{\\bf \\large ODSLESUM}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{slesum}}\n\n\\end{document}\n", "meta": {"hexsha": "0f98a7fefce590767ebb8f8f77a33a53f93c6c4a", "size": 3495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch02-11.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch02-11.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch02-11.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 27.3046875, "max_line_length": 98, "alphanum_fraction": 0.7067238913, "num_tokens": 1269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370111, "lm_q2_score": 0.7905303087996143, "lm_q1q2_score": 0.5579444560021368}}
{"text": "\n\\section{Introduction}\n\nThe {\\tt{}matrix-lua} module adds simple matrix support to Lua.\nSupported functions are\n\\begin{itemize}\n  \\item {\\tt{}matrix(m,n)}: create a new zero matrix of size $m$ by $n$\n  \\item {\\tt{}A(i,j)}:      get $A_{ij}$\n  \\item {\\tt{}A[k]}:        get or set the $k$th element (column-major 1-based)\n  \\item {\\tt{}C\\ =\\ A+B}:     add two matrices\n  \\item {\\tt{}C\\ =\\ A-B}:     subtract two matrices\n  \\item {\\tt{}C\\ =\\ -A}:      take the unary negation of a matrix\n  \\item {\\tt{}C\\ =\\ A*B}:     multiply two matrices\n  \\item {\\tt{}A.print()}:   print matrix\n  \\item {\\tt{}A.factor()}:  factor matrix\n  \\item {\\tt{}A.solve(x)}:  compute $x := A^{-1}x$\n  \\item {\\tt{}A.clone()}:   create a copy of the matrix\n  \\item {\\tt{}A.slice(i1,i2,j1,j2)}:   create a slice of the matrix\n  \\item {\\tt{}A.free()}:    free matrix\n\\end{itemize}\nSupported variables are\n\\begin{itemize}\n  \\item {\\tt{}A.m}:  return number of rows in matrix\n  \\item {\\tt{}A.n}:  return number of columns in matrix\n\\end{itemize}\n\n\n\\section{Interface}\n\n\\nwfilename{matrix-lua.nw}\\nwbegincode{1}\\sublabel{NWmatD-matC-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-matC-1}}}\\moddef{matrix-lua.h~{\\nwtagstyle{}\\subpageref{NWmatD-matC-1}}}\\endmoddef\n#ifndef MATRIX_LUA_H\n#define MATRIX_LUA_H \n\n#include <lua.h>\n\nstruct lua_matrix_struct \\{\n    int owns_data;  /* True if this matrix owns storage  */\n    int m, n, ld;   /* Array size and leading dimension  */\n    double* data;   /* Array data                        */\n    int* ipiv;      /* Pivot array for factored matrices */\n\\};\n\ntypedef struct lua_matrix_struct* lua_matrix_t;\n\nvoid         lua_matrix_register(lua_State* L);\nvoid         lua_matrix_push(lua_State* L, lua_matrix_t A);\nlua_matrix_t lua_matrix_get(lua_State* L, int index);\n\nlua_matrix_t lua_matrix_create(int m, int n);\nlua_matrix_t lua_matrix_slice(lua_matrix_t A, int i1, int i2, int j1, int j2);\nvoid         lua_matrix_destroy(lua_matrix_t self);\n\n#endif /* MATRIX_LUA_H */\n\\nwnotused{matrix-lua.h}\\nwendcode{}\\nwbegindocs{2}\\nwdocspar\n\nThe {\\tt{}lua{\\char95}matrix{\\char95}register} function registers the functions of\nthe module with the Lua interpreter.  The {\\tt{}lua{\\char95}matrix{\\char95}push}\nand {\\tt{}lua{\\char95}matrix{\\char95}get} functions add and get matrices\non the Lua stack.\n\nThe {\\tt{}lua{\\char95}matrix{\\char95}create} and {\\tt{}lua{\\char95}matrix{\\char95}destroy} functions create\nnew matrices.  The {\\tt{}lua{\\char95}matrix{\\char95}slice} function creates a ``parasite''\nmatrix that uses the same storage as the original matrix but which\nhas different indexing.  
In Matlab parlance, the slice function returns\n{\\tt{}A(i1:i2,\\ j1:j2)}.\n\n\n\\section{Implementation}\n\n\\nwenddocs{}\\nwbegincode{3}\\sublabel{NWmatD-matC.2-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-matC.2-1}}}\\moddef{matrix-lua.c~{\\nwtagstyle{}\\subpageref{NWmatD-matC.2-1}}}\\endmoddef\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n#include \"matrix-lua.h\"\n\n\\LA{}macros~{\\nwtagstyle{}\\subpageref{NWmatD-mac6-1}}\\RA{}\n\\LA{}static prototypes~{\\nwtagstyle{}\\subpageref{NWmatD-staH-1}}\\RA{}\n\\LA{}static data~{\\nwtagstyle{}\\subpageref{NWmatD-staB-1}}\\RA{}\n\\LA{}static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}\\RA{}\n\\LA{}functions~{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}\\RA{}\n\\nwnotused{matrix-lua.c}\\nwendcode{}\\nwbegindocs{4}\\nwdocspar\n\n\n\\subsection{Matrix tag}\n\nLua provides \\emph{tags} for user data, with semantics known only\nto the host program.  The {\\tt{}lua{\\char95}matrix{\\char95}tag} is the tag for\nthe Lua matrix type.  Note that you \\emph{could} get into trouble\nwith this if multiple interpreters are simultaneously active,\nsince the different interpreters may not end up allocating the same\ntag value.\n\n\\nwenddocs{}\\nwbegincode{5}\\sublabel{NWmatD-staB-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staB-1}}}\\moddef{static data~{\\nwtagstyle{}\\subpageref{NWmatD-staB-1}}}\\endmoddef\nstatic int lua_matrix_tag;\n\n\\nwused{\\\\{NWmatD-matC.2-1}}\\nwendcode{}\\nwbegindocs{6}\\nwdocspar\n\n\n\\subsection{Accessor macros}\n\nWe use a couple of convenience macros for getting the matrix entries.\nThe macros use one-based indexing for consistency with the Lua\nindex conventions.\n\n\\nwenddocs{}\\nwbegincode{7}\\sublabel{NWmatD-mac6-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-mac6-1}}}\\moddef{macros~{\\nwtagstyle{}\\subpageref{NWmatD-mac6-1}}}\\endmoddef\n#define Mij(M,i,j) (M->data[((j)-1)*M->ld + ((i)-1)])\n#define Mrow0(M,k) ( ((k)-1) % (M->m) )\n#define Mcol0(M,k) ( ((k)-1) / (M->m) )\n#define Mk(M,k)    ( M->data[Mcol0(M,k) * M->ld + Mrow0(M,k)] )\n\n\\nwused{\\\\{NWmatD-matC.2-1}}\\nwendcode{}\\nwbegindocs{8}\\nwdocspar\n\n\\subsection{Matrix call getter}\n\nWe use Lua tag methods to handle the case when the ``call'' syntax\nis applied to a matrix object.  If there is one numeric argument $i$,\nwe get the $(i,1)$ matrix entry; if there are two numeric arguments,\nwe get the $(i,j)$ entry.\n\n\\nwenddocs{}\\nwbegincode{9}\\sublabel{NWmatD-regI-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\endmoddef\nlua_pushcclosure(L, lua_matrix_call, 0);\nlua_settagmethod(L, lua_matrix_tag, \"function\");\n\\nwalsodefined{\\\\{NWmatD-regI-2}\\\\{NWmatD-regI-3}\\\\{NWmatD-regI-4}\\\\{NWmatD-regI-5}\\\\{NWmatD-regI-6}\\\\{NWmatD-regI-7}\\\\{NWmatD-regI-8}}\\nwused{\\\\{NWmatD-fun9-4}}\\nwendcode{}\\nwbegindocs{10}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{11}\\sublabel{NWmatD-staG-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\endmoddef\nstatic int lua_matrix_call(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int n = lua_gettop(L);\n    int i, j;\n\n    if (n < 2 || n > 3)\n        lua_error(L, \"Wrong number of arguments\");\n\n    if (!lua_isnumber(L,2) || (n == 3 && !lua_isnumber(L,3)))\n        lua_error(L, \"Index must be a number\");\n\n    i = (int) lua_tonumber(L,2);\n    j = (n == 3) ? 
(int) lua_tonumber(L,3) : 1;\n\n    if (i <= 0 || i > A->m || j <= 0 || j > A->n)\n        lua_error(L, \"Index out of range\");\n\n    lua_settop(L,0);\n    lua_pushnumber(L, Mij(A,i,j));\n    return 1;\n\\}\n\n\\nwalsodefined{\\\\{NWmatD-staG-2}\\\\{NWmatD-staG-3}\\\\{NWmatD-staG-4}\\\\{NWmatD-staG-5}\\\\{NWmatD-staG-6}\\\\{NWmatD-staG-7}\\\\{NWmatD-staG-8}\\\\{NWmatD-staG-9}\\\\{NWmatD-staG-A}\\\\{NWmatD-staG-B}\\\\{NWmatD-staG-C}\\\\{NWmatD-staG-D}\\\\{NWmatD-staG-E}\\\\{NWmatD-staG-F}\\\\{NWmatD-staG-G}\\\\{NWmatD-staG-H}}\\nwused{\\\\{NWmatD-matC.2-1}}\\nwendcode{}\\nwbegindocs{12}\\nwdocspar\n\n\n\\subsection{Matrix table getter}\n\nThe indexed read operation does two different things.  If the index\nis a number $k$, we try to return the $k$th element of the matrix\n(as arranged in column-major order with one-based indexing).\nIf the index is a string, then we probably have a method call,\nwhich is handled by the {\\tt{}lua{\\char95}matrix{\\char95}getmethod} function described\nlater.  (Recall that the dot syntax in Lua is a type of indexing,\nso that {\\tt{}A.print()} is equivalent to {\\tt{}A[\"print\"]()}.)\n\nThe {\\tt{}gettable} tag method receives the tagged object and the index.\n\n\\nwenddocs{}\\nwbegincode{13}\\sublabel{NWmatD-regI-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-2}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_gettable, 0);\nlua_settagmethod(L, lua_matrix_tag, \"gettable\");\n\\nwendcode{}\\nwbegindocs{14}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{15}\\sublabel{NWmatD-staG-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-2}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_gettable(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int k;\n\n    if (!lua_isnumber(L,2)) \\{\n        if (lua_isstring(L,2))\n            return lua_matrix_getmethod(L);\n        else\n            lua_error(L, \"Index must be a number\");\n    \\}\n    k = (int) lua_tonumber(L,2);\n\n    if (k <= 0 || k > A->m * A->n)\n        lua_error(L, \"Index out of range\");\n\n    lua_settop(L,0);\n    lua_pushnumber(L, Mk(A,k));\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{16}\\nwdocspar\n\n\n\\subsection{Matrix setter}\n\nThe indexed write operation can only do one thing: set a matrix\nentry.  
The {\\tt{}settable} tag method gets the tagged object,\nthe index, and the value to write on the stack.\n\n\\nwenddocs{}\\nwbegincode{17}\\sublabel{NWmatD-regI-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-3}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_settable, 0);\nlua_settagmethod(L, lua_matrix_tag, \"settable\");\n\\nwendcode{}\\nwbegindocs{18}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{19}\\sublabel{NWmatD-staG-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-3}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_settable(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,1);\n    int k;\n\n    if (!lua_isnumber(L,2))\n        lua_error(L, \"Index must be a number\");\n\n    if (!lua_isnumber(L,3))\n        lua_error(L, \"Value must be a number\");\n\n    k = (int) lua_tonumber(L,2);\n    if (k <= 0 || k > A->m*A->n)\n        lua_error(L, \"Index out of range\");\n\n    Mk(A,k) = lua_tonumber(L,3);\n\n    lua_settop(L,0);\n    return 0;\n\\}\n\n\\nwendcode{}\\nwbegindocs{20}\\nwdocspar\n\n\n\\subsection{Matrix arithmetic operations}\n\nThe {\\tt{}lua{\\char95}matrix{\\char95}add} and {\\tt{}lua{\\char95}matrix{\\char95}sub} methods are attached\nto the add and subtract events.  The {\\tt{}lua{\\char95}matrix{\\char95}unm} method is\nattached to the unary negation event.\n\n\\nwenddocs{}\\nwbegincode{21}\\sublabel{NWmatD-regI-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-4}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_add, 0);\nlua_settagmethod(L, lua_matrix_tag, \"add\");\n\\nwendcode{}\\nwbegindocs{22}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{23}\\sublabel{NWmatD-staG-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-4}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_add(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NWmatD-getJ-1}}\\RA{}\n\n    \\LA{}check summand conformality~{\\nwtagstyle{}\\subpageref{NWmatD-cheQ-1}}\\RA{}\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = Mij(A,i,j) + Mij(B,i,j);\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}\\RA{}\n\\}\n\n\\nwendcode{}\\nwbegindocs{24}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{25}\\sublabel{NWmatD-regI-5}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-5}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_sub, 0);\nlua_settagmethod(L, lua_matrix_tag, \"sub\");\n\\nwendcode{}\\nwbegindocs{26}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{27}\\sublabel{NWmatD-staG-5}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-5}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_sub(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NWmatD-getJ-1}}\\RA{}\n\n    \\LA{}check summand conformality~{\\nwtagstyle{}\\subpageref{NWmatD-cheQ-1}}\\RA{}\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = Mij(A,i,j) - Mij(B,i,j);\n\n    \\LA{}return 
\\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}\\RA{}\n\\}\n\n\\nwendcode{}\\nwbegindocs{28}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{29}\\sublabel{NWmatD-regI-6}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-6}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_unm, 0);\nlua_settagmethod(L, lua_matrix_tag, \"unm\");\n\\nwendcode{}\\nwbegindocs{30}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{31}\\sublabel{NWmatD-staG-6}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-6}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_unm(lua_State* L)\n\\{\n    int i, j;\n    \\LA{}get unary operand~{\\nwtagstyle{}\\subpageref{NWmatD-getH-1}}\\RA{}\n\n    C = lua_matrix_create(A->m, A->n);\n\n    for (i = 1; i <= A->m; ++i)\n        for (j = 1; j <= A->n; ++j)\n            Mij(C,i,j) = -Mij(A,i,j);\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}\\RA{}\n\\}\n\n\\nwendcode{}\\nwbegindocs{32}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{33}\\sublabel{NWmatD-regI-7}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-7}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_pushcclosure(L, lua_matrix_mul, 0);\nlua_settagmethod(L, lua_matrix_tag, \"mul\");\n\\nwendcode{}\\nwbegindocs{34}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{35}\\sublabel{NWmatD-staG-7}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-7}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_mul(lua_State* L)\n\\{\n    extern int dgemm_(char* transA, char* transB, int* m, int* n, int* k,\n                      double* alpha, double* A, int* ldA,\n                      double* B, int* ldB,\n                      double* beta, double* C, int* ldC);\n\n    double one  = 1;\n    double zero = 0;\n    \\LA{}get binary operands~{\\nwtagstyle{}\\subpageref{NWmatD-getJ-1}}\\RA{}\n\n    if (A->n != B->m)\n        lua_error(L, \"Nonconformal matrices\");\n\n    C = lua_matrix_create(A->m, B->n);\n\n#ifdef HAS_LAPACK\n\n    /* C := A*B; dgemm takes m = rows(C), n = cols(C), k = cols(A) */\n    dgemm_(\"N\", \"N\", &(C->m), &(C->n), &(A->n),\n           &one, A->data, &(A->ld),\n           B->data, &(B->ld),\n           &zero, C->data, &(C->ld));\n\n#else\n    lua_error(L, \"Dense linear algebra routines not linked\\\\n\");\n#endif /* HAS_LAPACK */\n\n    \\LA{}return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}\\RA{}\n\\}\n\n\\nwendcode{}\\nwbegindocs{36}\\nwdocspar\n\n
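For orientation, a typical interactive use of the arithmetic and solve\noperations (illustrative only; it assumes the module has been registered\nand the LAPACK-backed routines are linked) looks like:\n\\begin{verbatim}\n  A = matrix(2,2, {4, 1,\n                   1, 3})\n  b = matrix{1, 2}\n  C = A*A + A - (-A)       -- exercises mul, add, sub, and unm\n  x = b.clone()\n  A.solve(x)               -- x := A^{-1} b\n  x.print()\n\\end{verbatim}\n\n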
\\nwenddocs{}\\nwbegincode{37}\\sublabel{NWmatD-getJ-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-getJ-1}}}\\moddef{get binary operands~{\\nwtagstyle{}\\subpageref{NWmatD-getJ-1}}}\\endmoddef\nlua_matrix_t A, B, C;\n\nif (lua_tag(L,1) != lua_matrix_tag || lua_tag(L,2) != lua_matrix_tag)\n    lua_error(L, \"Invalid operands\");\n\nA = lua_touserdata(L,1);\nB = lua_touserdata(L,2);\n\\nwused{\\\\{NWmatD-staG-4}\\\\{NWmatD-staG-5}\\\\{NWmatD-staG-7}}\\nwendcode{}\\nwbegindocs{38}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{39}\\sublabel{NWmatD-getH-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-getH-1}}}\\moddef{get unary operand~{\\nwtagstyle{}\\subpageref{NWmatD-getH-1}}}\\endmoddef\nlua_matrix_t A = lua_touserdata(L,1);\nlua_matrix_t C;\n\\nwused{\\\\{NWmatD-staG-6}}\\nwendcode{}\\nwbegindocs{40}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{41}\\sublabel{NWmatD-cheQ-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-cheQ-1}}}\\moddef{check summand conformality~{\\nwtagstyle{}\\subpageref{NWmatD-cheQ-1}}}\\endmoddef\nif (A->m != B->m || A->n != B->n)\n    lua_error(L, \"Nonconformal matrices\");\n\n\\nwused{\\\\{NWmatD-staG-4}\\\\{NWmatD-staG-5}}\\nwendcode{}\\nwbegindocs{42}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{43}\\sublabel{NWmatD-retC-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}}\\moddef{return \\code{}C\\edoc{}~{\\nwtagstyle{}\\subpageref{NWmatD-retC-1}}}\\endmoddef\nlua_settop(L,0);\nlua_pushusertag(L, C, lua_matrix_tag);\nreturn 1;\n\\nwused{\\\\{NWmatD-staG-4}\\\\{NWmatD-staG-5}\\\\{NWmatD-staG-6}\\\\{NWmatD-staG-7}}\\nwendcode{}\\nwbegindocs{44}\\nwdocspar\n\n\\subsection{{\\tt{}matrix} command}\n\nThe {\\tt{}matrix} command creates a new matrix of a specified size.\nThe user is allowed to leave off the second argument $n$;\nif it is not explicitly specified, $n$ is assumed to be $1$.\nThe sizes are only somewhat sanity checked -- there is no\ncheck to make sure that $mn$ is not humongous, and we fail\ngracelessly if the memory allocations fail.\n\nAfter the numeric arguments, the user may also specify a table\nof array contents in \\emph{row-major} order.  The idea is that\n\\begin{verbatim}\n  A = matrix(2,2, {1, 2,\n                   3, 4})\n\\end{verbatim}\nis probably the most natural way to initialize a little two-by-two\nexample matrix.  It should also be possible to write\n\\begin{verbatim}\n  A = matrix{1, 2, 3}\n\\end{verbatim}\nto get a three-element vector.\n\n\\nwenddocs{}\\nwbegincode{45}\\sublabel{NWmatD-regI-8}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-regI-8}}}\\moddef{register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}}\\plusendmoddef\nlua_register(L, \"matrix\", lua_matrix);\n\\nwendcode{}\\nwbegindocs{46}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{47}\\sublabel{NWmatD-staG-8}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-8}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix(lua_State* L)\n\\{\n    int nargs = lua_gettop(L);\n    lua_matrix_t A;\n    int i,j;\n    int m = 0, n = 0;\n    int init_table = 0;\n\n    \\LA{}get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NWmatD-getP-1}}\\RA{}\n\n    A = lua_matrix_create(m,n);\n\n    \\LA{}initialize matrix from table~{\\nwtagstyle{}\\subpageref{NWmatD-iniS-1}}\\RA{}\n\n    lua_settop(L,0);\n    lua_pushusertag(L, A, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{48}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{49}\\sublabel{NWmatD-getP-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-getP-1}}}\\moddef{get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NWmatD-getP-1}}}\\endmoddef\nfor (i = 1; i <= nargs; ++i) \\{\n    if (lua_isnumber(L,i)) \\{\n        int val = (int) lua_tonumber(L,i);\n        if (val <= 0)\n            lua_error(L, \"Size out of bounds\");\n\n        if      (m == 0)  m = val;\n        else if (n == 0)  n = val;\n        else              lua_error(L, \"Too many size parameters\");\n\n    \\} else if (lua_istable(L,i)) \\{\n        if (init_table == 0)\n            init_table = i;\n        else\n            lua_error(L, \"Too many initializers\");\n    \\}\n\\}\n\n\\nwalsodefined{\\\\{NWmatD-getP-2}}\\nwused{\\\\{NWmatD-staG-8}}\\nwendcode{}\\nwbegindocs{50}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{51}\\sublabel{NWmatD-getP-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-getP-2}}}\\moddef{get \\code{}matrix\\edoc{} parameters~{\\nwtagstyle{}\\subpageref{NWmatD-getP-1}}}\\plusendmoddef\nif 
(m == 0) \\{\n    if (init_table) \\{\n        m = lua_getn(L,init_table);\n        if (m == 0)\n            lua_error(L, \"Insufficient initializer entries\");\n    \\}\n\\}\n\nif (n == 0)\n    n = 1;\n\nif (init_table && lua_getn(L,init_table) > m*n)\n    lua_error(L, \"Initializer too large\");\n\n\\nwendcode{}\\nwbegindocs{52}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{53}\\sublabel{NWmatD-iniS-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-iniS-1}}}\\moddef{initialize matrix from table~{\\nwtagstyle{}\\subpageref{NWmatD-iniS-1}}}\\endmoddef\nif (init_table) \\{\n    int k = 1;\n    for (i = 1; i <= m; ++i) \\{\n        for (j = 1; j <= n; ++j) \\{\n            lua_rawgeti(L,init_table, k++);\n            if (lua_isnumber(L,-1))\n                Mij(A,i,j) = lua_tonumber(L,-1);\n            lua_pop(L,1);\n        \\}\n    \\}\n\\}\n\\nwused{\\\\{NWmatD-staG-8}}\\nwendcode{}\\nwbegindocs{54}\\nwdocspar\n\n\\subsection{Matrix {\\tt{}print} method}\n\nMatlab's wrapped matrix output format is \\emph{really} nice when you\nhave to inspect matrices of moderate size.  The {\\tt{}print{\\char95}matrix} routine\nemulates Matlab's behavior for {\\tt{}format\\ short\\ e}.\n\n\\nwenddocs{}\\nwbegincode{55}\\sublabel{NWmatD-calP-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-calP-1}}}\\moddef{call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NWmatD-calP-1}}}\\endmoddef\nlua_pushvalue(L, -1);\nlua_pushstring(L, buf);\nlua_rawcall(L, 1, 0);\n\\nwused{\\\\{NWmatD-staG-9}}\\nwendcode{}\\nwbegindocs{56}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{57}\\sublabel{NWmatD-staG-9}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-9}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic void print_matrix(lua_State* L, lua_matrix_t A)\n\\{\n    int i, j, c;\n    int m = A->m;\n    int n = A->n;\n    char buf[256];\n\n    lua_getglobal(L, \"print\");\n    for (c = 1; c <= n; c += 6) \\{\n\n        if (n > 6) \\{\n            sprintf(buf, \"  Columns %d through %d\\\\n\", \n                    c, (c+5 > n) ? 
n : c+5);\n            \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NWmatD-calP-1}}\\RA{}\n        \\}\n\n        for (i = 1; i <= m; ++i) \\{\n            char num[64];\n\n            *buf = '\\\\0';\n            for (j = c; j <= n && j < c+6; ++j) \\{\n                if (Mij(A,i,j))\n                    sprintf(num, \"  % .4e\", Mij(A,i,j));\n                else\n                    sprintf(num, \"            0\");\n                strcat(buf, num);\n            \\}\n            \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NWmatD-calP-1}}\\RA{}\n\n        \\}\n        buf[0] = ' ';\n        buf[1] = '\\\\0';\n        \\LA{}call Lua print for buffer~{\\nwtagstyle{}\\subpageref{NWmatD-calP-1}}\\RA{}\n    \\}\n    lua_pop(L,1);\n\\}\n\n\\nwendcode{}\\nwbegindocs{58}\\nwdocspar\n\nThe {\\tt{}matrix{\\char95}print} routine (also known as the {\\tt{}print} method\nfor a matrix object) uses {\\tt{}print{\\char95}matrix} to output a reasonably\npretty representation of the matrix.\n\n\\nwenddocs{}\\nwbegincode{59}\\sublabel{NWmatD-staG-A}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-A}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_print(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Wrong number of arguments\");\n\n    print_matrix(L, A);\n    lua_settop(L, 0);\n    return 0;\n\\}\n\n\\nwendcode{}\\nwbegindocs{60}\\nwdocspar\n\n\n\\subsection{Matrix {\\tt{}factor} method}\n\nThe {\\tt{}matrix{\\char95}factor} function computes $A = PLU$ using LAPACK's %'\n{\\tt{}DGETRF} routine.\n\n\\nwenddocs{}\\nwbegincode{61}\\sublabel{NWmatD-staG-B}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-B}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_factor(lua_State* L)\n\\{\n    extern int dgetrf_(int* m, int* n, double* A, int* ldA,\n                       int* ipiv, int* info);\n\n    lua_matrix_t A = lua_touserdata(L,-1);\n    int info;\n\n    if (A->m != A->n)\n        lua_error(L, \"Matrix must be square\");\n\n#ifdef HAS_LAPACK\n    if (A->ipiv == NULL) \\{\n        A->ipiv = calloc(A->m, sizeof(int));\n        dgetrf_(&(A->m), &(A->n), A->data, &(A->ld), A->ipiv, &info);    \n        if (info != 0) \\{\n            printf(\"dgetrf failed with error code %d\\\\n\", info);\n            free(A->ipiv);   /* don't leave a stale factorization behind */\n            A->ipiv = NULL;\n            lua_error(L, \"Error during factorization\");\n        \\}\n    \\}\n#else\n    lua_error(L, \"Dense linear algebra not linked\\\\n\");\n#endif\n\n    lua_settop(L,0);\n    lua_pushusertag(L, A, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{62}\\nwdocspar\n\n\n\\subsection{Matrix {\\tt{}solve} method}\n\nThe {\\tt{}matrix{\\char95}solve} function computes $x := A^{-1} x$ given a factored\nmatrix $A$.  
If the matrix has not already been factored, we will factor\nit.\n\n\\nwenddocs{}\\nwbegincode{63}\\sublabel{NWmatD-staG-C}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-C}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_solve(lua_State* L)\n\\{\n    extern int dgetrs_(char* trans, int* n, int* nrhs, double* A, int* ldA,\n                       int* ipiv, double* B, int* ldB, int* info);\n\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n    int info;\n\n    if (lua_gettop(L) != 2)\n        lua_error(L, \"Wrong number of arguments\");\n\n    if (lua_tag(L,1) != lua_matrix_tag)\n        lua_error(L, \"Argument must be a matrix\");\n    B = lua_touserdata(L,1);\n\n    if (A->ipiv == NULL) \\{\n        lua_matrix_factor(L);\n    \\}\n\n    if (A->n != B->m)\n        lua_error(L, \"Dimension mismatch\");\n\n#ifdef HAS_LAPACK\n    dgetrs_(\"N\", &(A->n), &(B->n), A->data, &(A->ld), A->ipiv, \n            B->data, &(B->ld), &info);    \n    if (info != 0)\n        printf(\"Solve failed with error code %d\\\\n\", info);\n\n#else\n    lua_error(L, \"Dense linear algebra libraries not linked\\\\n\");\n#endif\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{64}\\nwdocspar\n\n\n\\subsection{Matrix {\\tt{}clone} method}\n\nThe {\\tt{}matrix{\\char95}clone} function ({\\tt{}clone} method) makes a full\ncopy of a matrix object.\n\n\\nwenddocs{}\\nwbegincode{65}\\sublabel{NWmatD-staG-D}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-D}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_clone(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n    int j;\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Too many arguments\");\n\n    B = lua_matrix_create(A->m, A->n);\n    for (j = 1; j <= A->n; ++j)\n        memcpy(&Mij(B,1,j), &Mij(A,1,j), A->m * sizeof(double));\n\n    if (A->ipiv) \\{\n        B->ipiv = calloc(A->m, sizeof(int));\n        memcpy(B->ipiv, A->ipiv, A->m * sizeof(int));\n    \\}\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{66}\\nwdocspar\n\n\\subsection{Matrix {\\tt{}slice} method}\n\nThe matrix {\\tt{}slice} command returns a copy of a subscripted slice\nof the matrix $A$.\n\n\\nwenddocs{}\\nwbegincode{67}\\sublabel{NWmatD-staG-E}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-E}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_slice_method(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_matrix_t B;\n    int n = lua_gettop(L);\n    int i1, i2, j1 = 1, j2 = 1;\n\n    \\LA{}get slice subscript arguments~{\\nwtagstyle{}\\subpageref{NWmatD-getT-1}}\\RA{}\n\n    B = lua_matrix_slice(A,i1,i2,j1,j2);\n\n    lua_settop(L,0);\n    lua_pushusertag(L, B, lua_matrix_tag);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{68}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{69}\\sublabel{NWmatD-getT-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-getT-1}}}\\moddef{get slice subscript arguments~{\\nwtagstyle{}\\subpageref{NWmatD-getT-1}}}\\endmoddef\nif (n != 3 && n != 5)\n    lua_error(L, \"Wrong number of arguments\");\n\nif (!lua_isnumber(L,1) || !lua_isnumber(L,2) ||\n    (n == 5 && (!lua_isnumber(L,3) || !lua_isnumber(L,4))))\n    lua_error(L, \"Subscripts must be numeric\");\n\ni1 = 
(int) lua_tonumber(L,1);\ni2 = (int) lua_tonumber(L,2);\nif (n == 5) \\{\n    j1 = (int) lua_tonumber(L,3);\n    j2 = (int) lua_tonumber(L,4);\n\\}\n\nif (i1 <= 0 || i2 < i1 || i2 > A->m ||\n    j1 <= 0 || j2 < j1 || j2 > A->n)\n    lua_error(L, \"Bad subscripts\");\n\n\\nwused{\\\\{NWmatD-staG-E}}\\nwendcode{}\\nwbegindocs{70}\\nwdocspar\n\n\\subsection{Matrix {\\tt{}free} method}\n\nThe {\\tt{}matrix{\\char95}free} function (also known as the {\\tt{}free} method)\ndeallocates the object memory.  It should probably be called on\nLua garbage collection, but it isn't yet.\n\n\\nwenddocs{}\\nwbegincode{71}\\sublabel{NWmatD-staG-F}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-F}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_free(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n\n    if (lua_gettop(L) != 1)\n        lua_error(L, \"Too many arguments\");\n\n    lua_matrix_destroy(A);\n\n    lua_settop(L,0);\n    return 0;\n\\}\n\n\\nwendcode{}\\nwbegindocs{72}\\nwdocspar\n\n\n\\subsection{{\\tt{}m} and {\\tt{}n} fields}\n\n\\nwenddocs{}\\nwbegincode{73}\\sublabel{NWmatD-staG-G}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-G}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_m(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_settop(L,0);\n    lua_pushnumber(L, A->m);\n    return 1;\n\\}\n\nstatic int lua_matrix_n(lua_State* L)\n\\{\n    lua_matrix_t A = lua_touserdata(L,-1);\n    lua_settop(L,0);\n    lua_pushnumber(L, A->n);\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{74}\\nwdocspar\n\n\n\\subsection{Method recall}\n\nWhen a matrix object is indexed by a method name string,\nwe return a Lua closure that implements the method.\nSo when the user requests {\\tt{}A.print}, for instance,\nthe returned closure object will have {\\tt{}A} as its final\nargument when it is called.\n\nOn entry, the Lua stack contains the matrix object\nand the name string.\n\n\\nwenddocs{}\\nwbegincode{75}\\sublabel{NWmatD-staH-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staH-1}}}\\moddef{static prototypes~{\\nwtagstyle{}\\subpageref{NWmatD-staH-1}}}\\endmoddef\nstatic int lua_matrix_getmethod(lua_State* L);\n\n\\nwused{\\\\{NWmatD-matC.2-1}}\\nwendcode{}\\nwbegindocs{76}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{77}\\sublabel{NWmatD-staG-H}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-staG-H}}}\\moddef{static functions~{\\nwtagstyle{}\\subpageref{NWmatD-staG-1}}}\\plusendmoddef\nstatic int lua_matrix_getmethod(lua_State* L)\n\\{\n    const char* name = lua_tostring(L,2);\n\n    lua_pop(L,1);\n    if (strcmp(name, \"print\") == 0)\n        lua_pushcclosure(L, lua_matrix_print, 1);\n    else if (strcmp(name, \"free\") == 0)\n        lua_pushcclosure(L, lua_matrix_free, 1);\n    else if (strcmp(name, \"factor\") == 0)\n        lua_pushcclosure(L, lua_matrix_factor, 1);\n    else if (strcmp(name, \"solve\") == 0)\n        lua_pushcclosure(L, lua_matrix_solve, 1);\n    else if (strcmp(name, \"clone\") == 0)\n        lua_pushcclosure(L, lua_matrix_clone, 1);\n    else if (strcmp(name, \"slice\") == 0)\n        lua_pushcclosure(L, lua_matrix_slice_method, 1);\n    else if (strcmp(name, \"m\") == 0)\n        lua_matrix_m(L);\n    else if (strcmp(name, \"n\") == 0)\n        lua_matrix_n(L);\n    else\n        lua_error(L, \"Invalid method name\");\n\n    return 1;\n\\}\n\n\\nwendcode{}\\nwbegindocs{78}\\nwdocspar\n\n
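For illustration, here is a short hypothetical Lua session using the methods above (a sketch only: the name under which the constructor is exposed to Lua is determined by the registration code elsewhere in this module, so we start from an existing matrix {\\tt{}A} and right-hand side {\\tt{}b}):\n\n\\begin{verbatim}\n  A.factor()       -- LU factorization of A via DGETRF\n  x = A.solve(b)   -- overwrites b with the solution of A*x = b\n  x.print()        -- Matlab-style display of the result\n\\end{verbatim}\n\nNote that {\\tt{}solve} returns its right-hand side argument, so {\\tt{}x} and {\\tt{}b} refer to the same matrix object.\n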
\n\\subsection{Public matrix manipulation}\n\n\\nwenddocs{}\\nwbegincode{79}\\sublabel{NWmatD-fun9-1}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}}\\endmoddef\nlua_matrix_t lua_matrix_create(int m, int n)\n\\{\n    lua_matrix_t C;\n\n    C = calloc(1, sizeof(*C));\n    C->owns_data = 1;\n    C->ld = m;\n    C->m  = m;\n    C->n  = n;\n    C->data = calloc(m*n, sizeof(double));\n    return C;\n\\}\n\nvoid lua_matrix_destroy(lua_matrix_t self)\n\\{\n    if (self->ipiv)\n        free(self->ipiv);\n\n    if (self->owns_data)\n        free(self->data);\n\n    free(self);\n\\}\n\n\\nwalsodefined{\\\\{NWmatD-fun9-2}\\\\{NWmatD-fun9-3}\\\\{NWmatD-fun9-4}}\\nwused{\\\\{NWmatD-matC.2-1}}\\nwendcode{}\\nwbegindocs{80}\\nwdocspar\n\n\\nwenddocs{}\\nwbegincode{81}\\sublabel{NWmatD-fun9-2}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-fun9-2}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}}\\plusendmoddef\nlua_matrix_t lua_matrix_slice(lua_matrix_t A, int i1, int i2, int j1, int j2)\n\\{\n    lua_matrix_t C;\n\n    C = calloc(1, sizeof(*C));\n    C->owns_data = 0;\n    C->ld   = A->ld;\n    C->m    = (i2-i1)+1;\n    C->n    = (j2-j1)+1;\n    C->data = &Mij(A,i1,j1);\n    return C;\n\\}\n\n\\nwendcode{}\\nwbegindocs{82}\\nwdocspar\n\n\\subsection{Setting and getting matrices}\n\nThe {\\tt{}lua{\\char95}matrix{\\char95}push} function is a thin wrapper around\n{\\tt{}lua{\\char95}pushusertag}.  Similarly, {\\tt{}lua{\\char95}matrix{\\char95}get} is a\nthin wrapper around {\\tt{}lua{\\char95}touserdata}.  The only reason we\ndon't want the user to directly use the {\\tt{}lua{\\char95}pushusertag} and \n{\\tt{}lua{\\char95}touserdata} functions is that then we would have\nto expose the matrix tag value to the world.  
That wouldn't\nbe a tragedy, but it would be nice to keep it private.\n\n\\nwenddocs{}\\nwbegincode{83}\\sublabel{NWmatD-fun9-3}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-fun9-3}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}}\\plusendmoddef\nvoid lua_matrix_push(lua_State* L, lua_matrix_t matrix)\n\\{\n    lua_pushusertag(L, matrix, lua_matrix_tag);\n\\}\n\nlua_matrix_t lua_matrix_get(lua_State* L, int index)\n\\{\n    if (index > lua_gettop(L))\n        lua_error(L, \"Index out of range\");\n\n    if (lua_tag(L,index) != lua_matrix_tag)\n        lua_error(L, \"Variable is not a matrix\");\n\n    return lua_touserdata(L,index);\n\\}\n\n\\nwendcode{}\\nwbegindocs{84}\\nwdocspar\n\n\n\\subsection{Registration functions}\n\n\\nwenddocs{}\\nwbegincode{85}\\sublabel{NWmatD-fun9-4}\\nwmargintag{{\\nwtagstyle{}\\subpageref{NWmatD-fun9-4}}}\\moddef{functions~{\\nwtagstyle{}\\subpageref{NWmatD-fun9-1}}}\\plusendmoddef\nvoid lua_matrix_register(lua_State* L)\n\\{\n    lua_matrix_tag = lua_newtag(L);\n    \\LA{}register functions~{\\nwtagstyle{}\\subpageref{NWmatD-regI-1}}\\RA{}\n\\}\n\n\\nwendcode{}\n\n\\nwixlogsorted{c}{{call Lua print for buffer}{NWmatD-calP-1}{\\nwixd{NWmatD-calP-1}\\nwixu{NWmatD-staG-9}}}%\n\\nwixlogsorted{c}{{check summand conformality}{NWmatD-cheQ-1}{\\nwixu{NWmatD-staG-4}\\nwixu{NWmatD-staG-5}\\nwixd{NWmatD-cheQ-1}}}%\n\\nwixlogsorted{c}{{functions}{NWmatD-fun9-1}{\\nwixu{NWmatD-matC.2-1}\\nwixd{NWmatD-fun9-1}\\nwixd{NWmatD-fun9-2}\\nwixd{NWmatD-fun9-3}\\nwixd{NWmatD-fun9-4}}}%\n\\nwixlogsorted{c}{{get binary operands}{NWmatD-getJ-1}{\\nwixu{NWmatD-staG-4}\\nwixu{NWmatD-staG-5}\\nwixu{NWmatD-staG-7}\\nwixd{NWmatD-getJ-1}}}%\n\\nwixlogsorted{c}{{get \\code{}matrix\\edoc{} parameters}{NWmatD-getP-1}{\\nwixu{NWmatD-staG-8}\\nwixd{NWmatD-getP-1}\\nwixd{NWmatD-getP-2}}}%\n\\nwixlogsorted{c}{{get slice subscript arguments}{NWmatD-getT-1}{\\nwixu{NWmatD-staG-E}\\nwixd{NWmatD-getT-1}}}%\n\\nwixlogsorted{c}{{get unary operand}{NWmatD-getH-1}{\\nwixu{NWmatD-staG-6}\\nwixd{NWmatD-getH-1}}}%\n\\nwixlogsorted{c}{{initialize matrix from table}{NWmatD-iniS-1}{\\nwixu{NWmatD-staG-8}\\nwixd{NWmatD-iniS-1}}}%\n\\nwixlogsorted{c}{{macros}{NWmatD-mac6-1}{\\nwixu{NWmatD-matC.2-1}\\nwixd{NWmatD-mac6-1}}}%\n\\nwixlogsorted{c}{{matrix-lua.c}{NWmatD-matC.2-1}{\\nwixd{NWmatD-matC.2-1}}}%\n\\nwixlogsorted{c}{{matrix-lua.h}{NWmatD-matC-1}{\\nwixd{NWmatD-matC-1}}}%\n\\nwixlogsorted{c}{{register functions}{NWmatD-regI-1}{\\nwixd{NWmatD-regI-1}\\nwixd{NWmatD-regI-2}\\nwixd{NWmatD-regI-3}\\nwixd{NWmatD-regI-4}\\nwixd{NWmatD-regI-5}\\nwixd{NWmatD-regI-6}\\nwixd{NWmatD-regI-7}\\nwixd{NWmatD-regI-8}\\nwixu{NWmatD-fun9-4}}}%\n\\nwixlogsorted{c}{{return \\code{}C\\edoc{}}{NWmatD-retC-1}{\\nwixu{NWmatD-staG-4}\\nwixu{NWmatD-staG-5}\\nwixu{NWmatD-staG-6}\\nwixu{NWmatD-staG-7}\\nwixd{NWmatD-retC-1}}}%\n\\nwixlogsorted{c}{{static data}{NWmatD-staB-1}{\\nwixu{NWmatD-matC.2-1}\\nwixd{NWmatD-staB-1}}}%\n\\nwixlogsorted{c}{{static functions}{NWmatD-staG-1}{\\nwixu{NWmatD-matC.2-1}\\nwixd{NWmatD-staG-1}\\nwixd{NWmatD-staG-2}\\nwixd{NWmatD-staG-3}\\nwixd{NWmatD-staG-4}\\nwixd{NWmatD-staG-5}\\nwixd{NWmatD-staG-6}\\nwixd{NWmatD-staG-7}\\nwixd{NWmatD-staG-8}\\nwixd{NWmatD-staG-9}\\nwixd{NWmatD-staG-A}\\nwixd{NWmatD-staG-B}\\nwixd{NWmatD-staG-C}\\nwixd{NWmatD-staG-D}\\nwixd{NWmatD-staG-E}\\nwixd{NWmatD-staG-F}\\nwixd{NWmatD-staG-G}\\nwixd{NWmatD-staG-H}}}%\n\\nwixlogsorted{c}{{static 
prototypes}{NWmatD-staH-1}{\\nwixu{NWmatD-matC.2-1}\\nwixd{NWmatD-staH-1}}}%\n\\nwbegindocs{86}\\nwdocspar\n\\nwenddocs{}\n", "meta": {"hexsha": "c2825b228589ddd001f54449927aee0a321fe370", "size": 34529, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sugar30/src/tex/matrix-lua.tex", "max_stars_repo_name": "davidgarmire/sugar", "max_stars_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sugar30/src/tex/matrix-lua.tex", "max_issues_repo_name": "davidgarmire/sugar", "max_issues_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sugar30/src/tex/matrix-lua.tex", "max_forks_repo_name": "davidgarmire/sugar", "max_forks_repo_head_hexsha": "699534852cb37fd2225a8b4b0072ebca96504d23", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.3655555556, "max_line_length": 435, "alphanum_fraction": 0.6723913232, "num_tokens": 12528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7057850154599563, "lm_q2_score": 0.7905303186696747, "lm_q1q2_score": 0.5579444531838406}}
{"text": "\\section{The \\equalrange algorithm}\n\\Label{sec:equalrange}\n\\Label{sec:equalrangeii}\n\nThe \\equalrange algorithm is one of the four binary search algorithms\nof the \\cxx Standard Library \\cite[\\S 28.7.3.3]{cxx-17-draft}.\nAs with the other binary search algorithms \\equalrange requires that\nits input array is in increasing order.\nThe specification of \\equalrange states that it \\emph{combines} the results of the algorithms\n\\specref{lowerbound} and \\specref{upperbound}.\n\nFor our purposes we have modified\n\\equalrange\nto take an array of type \\valuetype.\nMoreover, instead of a pair of iterators, our version returns a pair of indices.\nTo be more precise, the return type of \\equalrange is the\nstruct \\sizetypepair from Listing~\\ref{lst:size_type_pair}.\nThus, the signature of \\equalrange now reads:\n\n\\begin{lstlisting}[style = acsl-block]\n\n  size_type_pair\n  equal_range(const value_type* a, size_type n, value_type v);\n\\end{lstlisting}\n\nFigure~\\ref{fig:equalrange} combines Figure~\\ref{fig:lowerbound} with Figure~\\ref{fig:upperbound}\nin order visualize the behavior of \\equalrange for select test cases.\nThe two types of arrows~$\\rightarrow$ and~$\\dashrightarrow$ represent the\nrespective fields \\inl{first} and \\inl{second} of the return value.\nFor values that are not contained in the array, the two arrows point to the same index.\nMore generally, if \\equalrange returns the pair $(\\mathtt{lb},\\mathtt{ub})$, then\nthe difference $\\mathtt{ub} - \\mathtt{lb}$ is equal to the number of occurrences of the \nargument \\inl{v} in the array.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=0.60\\textwidth]{Figures/equal_range.pdf}\n\\caption{\\Label{fig:equalrange}Some examples for \\equalrange}\n\\end{figure}\n\n\\FloatBarrier\n\nWe will provide two implementations of \\equalrange and verify both of them.\nThe first implementation \\implref{equalrange} just straightforwardly\ncalls \\specref{lowerbound} and \\specref{upperbound} and simply\nreturns the pair of their respective results.\nThe second implementation \\implref{equalrangeii}, which is more elaborate, follows the\noriginal \\cxx code by attempting to minimize duplicate computations.\n%\nLet $(\\mathtt{lb}, \\mathtt{ub})$ be the return value  \\equalrange, then\nthe conditions~\\eqref{eq:lower-bound-result}--\\eqref{eq:upper-bound-right} can\nbe merged into the inequality\n%\n\\begin{align}\n\\Label{eq:equal-range-result}\n0 \\leq \\mathtt{lb} \\leq \\mathtt{ub} \\leq n \n\\end{align}\n\nand the following three implications for a valid index $k$ of the array under\nconsideration\n%\n\\begin{alignat}{3}\n\\Label{eq:equal-range-left}\n0           &\\leq  k < \\mathtt{lb} && \\qquad\\Longrightarrow\\qquad &&  a[k] < \\mathtt{v} \\\\\n\\Label{eq:equal-range-middle}\n\\mathtt{lb} &\\leq k < \\mathtt{ub} && \\qquad\\Longrightarrow\\qquad &&  a[k] = \\mathtt{v} \\\\\n\\Label{eq:equal-range-right}\n\\mathtt{ub} &\\leq k < n           && \\qquad\\Longrightarrow\\qquad &&  a[k] > \\mathtt{v} \n\\end{alignat}\n\n%\\clearpage\n\nHere are some justifications for these conditions.\n\n\\begin{itemize}\n\\item \nConditions~\\eqref{eq:equal-range-left} and~\\eqref{eq:equal-range-right} are just the \nConditions~\\eqref{eq:lower-bound-left} and~\\eqref{eq:upper-bound-right}, respectively.\n\n\\item\nThe Inequality~\\eqref{eq:equal-range-result} follows from the Inequalities~\\eqref{eq:lower-bound-result}\nand~\\eqref{eq:upper-bound-result} and the following considerations:\nIf 
$\\mathtt{ub}$ were less than $\\mathtt{lb}$, then according to~\\eqref{eq:equal-range-left}\nwe would have $a[\\mathtt{ub}] < \\mathtt{v}$.\nOn the other hand, we know from~\\eqref{eq:equal-range-right} that the opposite\ninequality $\\mathtt{v} < a[\\mathtt{ub}]$ holds.\nTherefore, we have $\\mathtt{lb} \\leq \\mathtt{ub}$.\n\n\\item\nCondition~\\eqref{eq:equal-range-middle} follows from the combination of~\\eqref{eq:lower-bound-right}\nand~\\eqref{eq:upper-bound-left} and the fact that~$\\leq$ is a total order on the integers.\n\\end{itemize}\n\n%\\clearpage\n\n\\subsection{Formal specification of \\equalrange}\n\nThe following listing shows the specification of \\specref{equalrange}\n(and of \\equalrangeii).\n\n\\input{Listings/equal_range.h.tex}\n\nThe precondition \\inl{increasing} expresses\nthat the array values need to be in increasing order.\n\nThe postconditions reflect the conditions listed above and can be expressed\nusing the already introduced predicates\n\\logicref{AllEqual},\n\\logicref{StrictUpperBound} and \\logicref{StrictLowerBound}.\n\n\\begin{itemize}\n\\item Condition~\\eqref{eq:equal-range-result} becomes postcondition \\inl{result}\n\\item Condition~\\eqref{eq:equal-range-left} becomes postcondition \\inl{left}\n\\item Condition~\\eqref{eq:equal-range-middle} becomes postcondition \\inl{middle}\n\\item Condition~\\eqref{eq:equal-range-right} becomes postcondition \\inl{right}\n\\end{itemize}\n\n%\\clearpage \n\n\\subsection{Implementation of \\equalrange}\n\nOur first implementation of \\implref{equalrange} is shown in the following listing.\nWe just call the two functions \\specref{lowerbound} and \\specref{upperbound}\nand return their respective results as a pair.\n\n\\input{Listings/equal_range.c.tex}\n\nIn a very early version of this document we had proven the similar assertion\n\\inl{first <= second} with the interactive theorem prover \\coq.\nAfter reviewing this proof we formulated the new assertion \\inl{aux}\nthat uses a fact from the postcondition of \\specref{upperbound}.\nThe benefit of this reformulation is that both the assertion \\inl{aux}\nand the postcondition \\inl{first <= second} can now be verified automatically.\n\n%\\clearpage \n\n\\subsection{Implementation of \\equalrangeii}\n\nThe first implementation of \\implref{equalrange} does more work than needed.\nIn the following listing \\implref{equalrangeii} we show that it is possible to\nperform as much range reduction as possible before calling \\specref{upperbound}\nand \\specref{lowerbound} on the reduced ranges.\n\n\n\\input{Listings/equal_range2.c.tex}\n\nDue to the higher code complexity of the second implementation, additional\nassertions had to be inserted in order to ensure that \\wpframac is able to verify the\ncorrectness of the code.\nAll of these assertions are related to pointer arithmetic and shifting base pointers.\nThey fall into three groups and are briefly discussed below.\nIn order to enable the automatic verification of these properties we\nadded the following collection of \\logicref{ArrayBoundsShift}.\n\n\\input{Listings/ShiftLemmas.acsl.tex}\n\n\\subsubsection*{The \\inl{increasing} properties}\n\nBoth \\specref{lowerbound} and \\specref{upperbound} expect that they\noperate on increasingly ordered arrays.\nThis is of course also true for \\specref{equalrange}, however,\ninside our second implementation we need a more specific formulation, namely,\n\n\\begin{lstlisting}[style=acsl-block]\n\n        Increasing(a + middle, last - middle)\n\\end{lstlisting}\n\nWith the 
three-argument form of the predicate \\logicref{Increasing}\nwe can formulate an intermediate step.\nThis enables the provers to verify the preconditions of the call to\n\\specref{lowerbound} automatically.\nA similar assertion is present before the call to \\specref{upperbound}.\n\n%\\clearpage\n\n\\subsubsection*{The \\inl{strict} and \\inl{constant} properties}\n\nPart of the postconditions of \\specref{equalrange} is that \\inl{v}\nis both a strict upper and a strict lower bound.\nHowever, the calls to \\lowerbound and \\upperbound only give us\n\n\\begin{lstlisting}[style=acsl-block]\n\n        StrictUpperBound(a + first, 0, left - first, v) \n\n        StrictLowerBound(a + middle, right - middle, last - middle, v)\n\\end{lstlisting}\n\nwhich is not enough to reach the desired postconditions automatically.\nOne intermediate step for each of the assertions was sufficient to guide\nthe prover to the desired result.\n\nConceptually similar to the \\inl{strict} properties,\nthe \\inl{constant} properties guide the prover towards\n\n\\begin{lstlisting}[style=acsl-block]\n\n        LowerBound(a, left, n, v) \n\n        UpperBound(a, 0, right, v)\n\\end{lstlisting}\n\nCombining these properties allows the postcondition \\inl{middle} to be derived automatically.\n\n%\\clearpage\n\n", "meta": {"hexsha": "c8253306f0f9b9d0e22b2f4fc2bd68b8f36dd313", "size": 7980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/binary-search/equal_range.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/binary-search/equal_range.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/binary-search/equal_range.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 38.3653846154, "max_line_length": 104, "alphanum_fraction": 0.7721804511, "num_tokens": 2151, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947290421275, "lm_q2_score": 0.8519528000888387, "lm_q1q2_score": 0.5579393981708618}}
{"text": "\\section{Mathematical Models}\n\\label{sec:mathmodel}\nBased on data volume rates, we do need mathematical models to identify which are the source addresses that may be proponents of an attack. \nIn letterature we found different approaches, \\cite{detection_by_path_analaysis} categorise data into predictable and non predictable data, their mathematical models must be able judge the data by using a threshold. These possible models are the tools for self-similarity analysis such as correlation coefficient and distance matrix. They \\cite{detection_by_path_analaysis} use \\textit{Pearson's correlation coefficient}, which is defined as:\n\n\\begin{equation}\n\\label{eq:pearson_corr}\n\t\\rho_{X, Y} = \\frac{E[(X-\\mu_{X})(Y-\\mu_{Y})]}{\\sigma_X\\sigma_Y}\n\\end{equation}\n\nThe correlation is used to measure dependence between two quantities (variables) $X$ and $Y$ with expected values $\\mu_X$ and $\\mu_Y$ and standard deviations $\\sigma_X$ and $\\sigma_Y$. Both value of the standard deviations are finite and nonzero ($0 < \\sigma_X< \\infty$ and $0 < \\sigma_Y < \\infty$). One of the impressive properties of the correlation is symmetric measurement, in other words, whichever data comes first, we can still get the same result as measuring. In paper \\cite{detection_by_path_analaysis} they propose several methodology algorithms, we report two of them\n\\begin{enumerate}\n\t\\item Correlation between arrival rate and sequence number.\n\tDenoted $X$ as a sample set of arrival rate ($X = {\\label{_k}}$, where $k=$ 0, 1, 2, ..., N) and $Y$ as a sample set of sequence number ($Y = {k}$  where $k=$ 0, 1, 2, ..., N). Calculate the correlation value $(\\rho_{X, Y})$ from the two variables: $X$ and $Y$. The expected value is between $-1$ and $1$, $-1 > \\rho_{X, Y} > 1 $.\n\t\\item Correlation of self arrival rate. Denoted $X$ as a sample sequence of arrival rate ($X = {\\lambda_{(2k)}}$, where $k=$ 0, 1, 2, ..., N) and $Y$ as a sequence number of time interval ($Y = {\\lambda_{(2k + 1)}}$ where $k=$ 0, 1, 2, ..., N). Calculate the correlation value $(\\rho_{X, Y})$ from the two variables: $X$ and $Y$. The expected value is between $-1$ and $1$, $-1 > \\rho_{X, Y} > 1 $.\n\\end{enumerate}\nIn order to use the previous mentioned methodology, \\cite{detection_by_path_analaysis} define two thresholds: upper $\\tau_U$ and lower $\\tau_L$. 
In order to use the aforementioned methodology, \\cite{detection_by_path_analaysis} define two thresholds: an upper one $\\tau_U$ and a lower one $\\tau_L$. The upper threshold must not exceed 1 and must stay above the lower threshold; conversely, the lower threshold must not fall below 0 and must stay below the upper threshold.\nThese thresholds help to classify the degree of dependency of the arrival rate data into two categories.\n\\begin{enumerate}\n\t\\item \\textit{Predictable attack rate}: the data will be classified as a predictable attack rate if the correlation value is close to 0 or 1.\n\t\\item \\textit{Nonpredictable attack rate}: the data will be classified as a nonpredictable attack rate if the correlation value is not close to 0 or 1.\n\\end{enumerate} \nUnfortunately, a single correlation result $\\rho_{X, Y}$ cannot determine whether arrival data is attacking or legitimate; a series of correlation results is needed to confirm the situation.\nUsing these methods, the attack detection must respond as quickly as possible after the attack reaches the victim, performing an online analysis.\n\nSince we use a big data framework to analyse log files offline, which may contain attacks, we cannot use the methodology proposed by \\cite{detection_by_path_analaysis}. Using big data we process the log files in order to extract the features used for our analysis. For simplicity we explain each one by referring to a single entry, which represents an exchange between a source IP and the server.\n\n\\begin{itemize}\n\t\\item \\textit{n\\_packets} \\\\ The number of packets.\n\t\\item \\textit{total\\_volume} \\\\ The sum of the lengths of all packets.\n\t\\item \\textit{time\\_difference} \\\\ The time difference between the first communication and the last one; we consider it as a time window.\n\t\\item \\textit{ratio\\_vol\\_td} \\\\ The volume per second exchanged during the time window.\n\\end{itemize}\n\nThe mathematical model must be able to judge the data by using thresholds; these will be provided by our mathematical tool, as we will see later in this section. Other mathematical models are treated in \\cite{detection_by_path_analaysis}, even though we have reported the most important one. \n\nUsing \\textit{python} to manipulate the results returned by \\textit{Pig}, we decide to use the \\textbf{standard deviation}, together with the mean, as a graphical guideline model and the \\textbf{Quantile Range Outliers} method to identify suspicious source addresses mathematically.\nThe formula for the sample standard deviation is\n\n\\begin{equation}\n\\label{eq:standard_dev}\n\ts = \\sqrt{\\frac{1}{N-1}\\sum_{i=1}^N(x_i - \\bar{x})^2}\n\\end{equation}\nwhere $x_1, x_2, \\dots, x_N$ are the sample values, $\\bar{x}$ is the mean and $N$ is the number of samples in the dataset generated by the \\textit{Pig-latin} script.\n\nThe \\textit{standard deviation} is a description of the data's spread, i.e. how widely it is distributed about the mean.  \nA smaller standard deviation indicates that more of the data is clustered about the mean.  \nA larger one indicates that the data are more spread out.\nUsing this information, we can make inferences about the data: for example, we can determine whether a particular point lies significantly above the typical data volume, i.e. whether it represents a statistical anomaly worth investigating, based on how many standard deviations away from the mean it is located.
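\nA minimal sketch of such a check (assuming the per-source features are available in the same \\textit{pandas} dataframe \\texttt{dataframe} used in Lst. \\ref{lst:qrom}; the three-sigma multiplier is illustrative only, since our actual thresholds come from the quantile method below):\n\n\\begin{lstlisting}[numbers=left, columns=flexible, breaklines=true, frame=tb, caption={standard deviation check (sketch)}, label={lst:std_sketch}]\n    # mean and sample standard deviation of the exchanged data volume\n    mean = dataframe['total_volume'].mean()\n    std = dataframe['total_volume'].std()  # pandas uses N-1 by default\n\n    # flag sources lying more than 3 standard deviations above the mean\n    suspects = dataframe[dataframe['total_volume'] > mean + 3 * std]\n\\end{lstlisting}\n\n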
The \\textit{quantile range outliers} method of outlier detection uses the quantile distribution of the values in a column to locate the extreme values. Quantiles are useful for detecting outliers because there is no distributional assumption associated with them. Data are simply sorted from smallest to largest. For example, the $20^{th}$ quantile is the value at which 20\\% of the values are smaller. Extreme values are found using a multiplier of the interquantile range, the distance between two specified quantiles.\nIn statistics and probability, \\textit{quantiles} are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. \n\nIn our work we identify outliers by calculating the $25^{th}$ and the $75^{th}$ percentiles and then computing the interquantile range as $Q_{75}-Q_{25}$. We chose $1.5$ as the multiplicative factor $\\alpha$ for the interquantile range. \nWe use this coefficient as a \\textit{cut off} to determine the upper and lower bounds for outlier data; these will represent our thresholds. \nThe following listing reports the code which implements this feature\\footnote{In Lst. \\ref{lst:qrom} we report only the method applied to the data volume exchanged; we do the same for the traffic volume (Kb/s)}.\n \n \\begin{lstlisting}[numbers=left, columns=flexible, breaklines=true, frame=tb, caption={quantile range outliers method}, label={lst:qrom}]\n    from numpy import percentile\n\n    q25, q75 = percentile(dataframe['total_volume'], 25), percentile(dataframe['total_volume'], 75)\n    inter_quartile_range = q75 - q25\n    # calculate the outlier cutoff\n    cut_off = inter_quartile_range * 1.5\n    lower, upper = q25 - cut_off, q75 + cut_off\n    # identify outliers\n    data_outliers = [x for x in dataframe['total_volume'] if x < lower or x > upper]\n\\end{lstlisting}\n
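\nAs a worked example of the cutoff with purely illustrative numbers: if $Q_{25} = 10$ MB and $Q_{75} = 30$ MB, the interquantile range is $20$ MB and the cutoff is $1.5 \\cdot 20 = 30$ MB, giving the thresholds $10 - 30 = -20$ MB and $30 + 30 = 60$ MB. Since data volumes are nonnegative, the lower bound is never triggered here, and any source that exchanged more than $60$ MB would be flagged as an outlier.\n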
\nWe decide to output two plots and one file: \\textit{data analysis}, which shows the amount of data, in megabytes, exchanged between a source and the server, and \\textit{volume analysis}, which is like the previous one but represents the data volume per second exchanged between a source and the server. In the \\texttt{.txt} file we report the outliers by IP address and the values of the analysis we calculate, such as traffic volume and data exchanged.\n\\bigskip\n", "meta": {"hexsha": "faab4bb680fd3c42d70e3aa2c73600cb1918627d", "size": 7933, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/MathematicalModel.tex", "max_stars_repo_name": "AndreaPerazzoli/DDoS-Network-Flow-Forensics-Analyser-report", "max_stars_repo_head_hexsha": "732e53e29c99f7c3d61e4a8da31e760aeaa917cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/MathematicalModel.tex", "max_issues_repo_name": "AndreaPerazzoli/DDoS-Network-Flow-Forensics-Analyser-report", "max_issues_repo_head_hexsha": "732e53e29c99f7c3d61e4a8da31e760aeaa917cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/MathematicalModel.tex", "max_forks_repo_name": "AndreaPerazzoli/DDoS-Network-Flow-Forensics-Analyser-report", "max_forks_repo_head_hexsha": "732e53e29c99f7c3d61e4a8da31e760aeaa917cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.6712328767, "max_line_length": 579, "alphanum_fraction": 0.7688138157, "num_tokens": 1954, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513731336202, "lm_q2_score": 0.6224593241981982, "lm_q1q2_score": 0.5578800240324604}}
{"text": "\\subsection{Classical Field Theory}\nTo begin to understand the Standard Model, a quantum field theory, it is helpful to first understand the classical notion of a field.\nA field is a physical quantity defined as a function of space and time.\nThe physical quantity may be as simple as a scalar (e.g. the temperature at each point in space and time) or may be a vector (e.g. the electric field).\nMore generally, the physical quantity is a tensor of arbitrary rank.\n\nWith the notion of a field defined, we may next ask how to use these fields to describe the behavior of the universe.\nIn classical mechanics, this is achieved through constructing a Lagrangian density, $\\mathcal L$ (referred to simply as the ``Lagrangian''), as a function of one or more fields, $\\phi(x)$, and their derivatives:\n\\begin{equation} \\label{eqn:lagrangian}\n    \\mathcal L = \\mathcal L(\\phi, \\partial_\\mu \\phi).\n\\end{equation}\nOne of the fundamental concepts of classical mechanics is the principle of least action, which states that the action, $S$, defined as the time integral of the Lagrangian,\n\\begin{equation} \\label{eqn:action}\n    S = \\int L~dt = \\int \\mathcal L(\\phi, \\partial_\\mu \\phi)~d^4x \n\\end{equation}\nfor a given system will be minimized as the system evolves between two points in time~\\cite{Peskin:1995ev}:\n\\begin{equation} \\label{eqn:least_action}\n    \\delta S = 0.\n\\end{equation}\nThrough inserting Eqn.~\\ref{eqn:action} into Eqn.~\\ref{eqn:least_action} and integrating by parts, one obtains the Euler-Lagrange equations of motion:\n\\begin{equation}\n    \\partial_\\mu \\bigg( \\frac{\\partial \\mathcal L}{\\partial (\\partial_\\mu \\phi)} \\bigg) - \\frac{\\partial \\mathcal L}{\\partial \\phi} = 0.\n\\end{equation}\nThough the SM Lagrangian is vastly more complex than the toy Lagrangian of Eqn.~\\ref{eqn:lagrangian}, and is composed of quantum rather than classical fields, the same principle of least action allows us to describe the dynamics and interactions of the fundamental particles of the universe.\n\n\\subsection{Quantum Mechanics}\nWhile classical mechanics provides a good description of our universe in some regimes, namely when we are dealing with large objects which move much more slowly than the speed of light, it cannot explain many of the observed phenomena in nature.\nA standard motivation for the need of quantum theory, is the so-called ``ultraviolet catastrophe''~\\cite{Schwartz:2013pla}, in which the classical prediction for the energy radiated by a blackbody, an object of some fixed temperature, diverges: the intensity of light radiated increases without bound as a function of frequency.\nThe ultraviolet catastrophe led Planck to propose a solution whose core principle was the idea that light may be radiated only at specific energies -- in other words, that the energy was \\emph{quantized}.\nThis paved the way for the development of quantum mechanics, which proved very successful for describing blackbody radiation, developing models of the hydrogen atom, and many other applications.\n\nHowever, one of the major shortcomings of quantum mechanics is the fact that it cannot describe the production or annihilation of particles~\\cite{Zee:2003mt}.\nMore generally, quantum mechanics is incompatible with the theory of special relativity.\nThis fundamental limitation of quantum mechanics motivated the development of theories which could explain the universe at both very small scales (i.e. 
``quantum'') and at very high (relativistic) energies.\nQuantum field theory has emerged as the most successful theoretical framework for doing so.\n\n\\subsection{The Klein-Gordon Field}\nTo describe the universe in terms of quantum fields, it is helpful to examine a toy example: start with a classical field and make the necessary modifications to reinterpret the dynamical variables as quantum mechanical operators which obey the canonical commutation relations of quantum mechanics\\footnote{A full description of quantum mechanics is beyond the scope of this thesis. A description of quantum mechanical operators and the derivation of their canonical commutation relations can be found in many textbooks on quantum mechanics, e.g.~\\cite{Griffiths:qm}.} (following the treatment in ~\\cite{Peskin:1995ev}).\nWe then see that the allowed states of the resulting quantum field have a natural physical interpretation as particles.\n\nIn choosing a toy example for a quantum field theory, it is helpful to begin with a ``derivation''~\\cite{Griffiths:2008zz} of the Schr{\\\"o}dinger equation, which forms the basis of quantum mechanics.\nBeginning with the classical energy-momentum relation\n\\begin{equation}\n    \\frac{\\bf{p}^2}{2m} + V = E,\n\\end{equation}\none can promote the momentum and energy variables to quantum mechanical operators which act on the wave function $\\Psi$, making the substitutions $\\bf{p} \\to -i\\hbar\\nabla$ and $E \\to i\\hbar \\partial/\\partial t$, and obtain the Schr{\\\"o}dinger equation\n\\begin{equation}\n    - \\frac{\\hbar^2}{2m} \\nabla^2 \\Psi + V \\Psi = i\\hbar \\frac{\\partial \\Psi}{\\partial t}.\n\\end{equation}\nAs one of the primary aims of quantum field theory is to provide a description of particles which is consistent with special relativity, it is natural to start with the relativistic energy-momentum relation\n\\begin{equation}\n    E^2 - \\bf{p}^2 = m^2,\n\\end{equation}\nand again promote the momentum and energy variables to quantum mechanical operators.\nDoing so leads one to the \\emph{Klein-Gordon equation}, originally proposed to describe the behavior of relativistic electrons\\footnote{In fact, the Klein-Gordon equation does not provide a satisfactory description of relativistic electrons. 
It applies only to scalar (spin 0) particles, of which the Higgs boson is the only known example in nature.}~\\cite{Klein:kge,Gordon:kge}:\n\\begin{equation}\n    -\\frac{\\partial^2 \\Psi}{\\partial t^2} + \\nabla^2 \\Psi = m^2 \\Psi.\n\\end{equation}\nHowever, we are still working in the context of the wave function for the dynamics of a single particle -- in this paradigm we are still unable to describe the annihilation and pair production of particles.\nWe will see that this is possible working in the field theory framework, so it is natural to next ask: what Lagrangian density will give rise to the Klein-Gordon equation?\nThe Lagrangian for the classical Klein-Gordon field is given by\n\\begin{equation} \\label{eqn:classical_kg}\n    \\mathcal L = \\frac{1}{2} (\\partial_\\mu \\phi)^2 - \\frac{1}{2} m^2 \\phi^2.\n\\end{equation}\nIndeed, inserting this Lagrangian into the Euler-Lagrange equations yields $\\partial_\\mu \\partial^\\mu \\phi + m^2 \\phi = 0$, which is precisely the Klein-Gordon equation.\n%The reason for starting with this specific Lagrangian is the fact that its resulting equation of motion is the Klein-Gordon equation, originally proposed to describe the behavior of relativistic electrons~\\cite{Klein:kge,Gordon:kge}.\nRather than the Lagrangian formalism, it is often more convenient to work with the Hamiltonian formalism, in which a conjugate momentum density $\\pi \\equiv \\partial L/\\partial \\dot{\\phi}$ is used instead of the time-derivative of the field variable, $\\dot{\\phi}$.\nThe Hamiltonian density is then defined as\n\\begin{equation}\n    \\mathcal H \\equiv \\sum \\pi \\dot{\\phi} - \\mathcal L.\n\\end{equation}\nSee~\\cite{Fetter:cm} for a description of the Hamiltonian formalism. \n\nReturning to the classical Klein-Gordon field, the Hamiltonian density is given by:\n\\begin{equation} \\label{eqn:classical_kg_ham}\n    \\mathcal H = \\frac{1}{2} \\pi^2 + \\frac{1}{2} (\\nabla \\phi)^2 + \\frac{1}{2} m^2 \\phi^2.\n\\end{equation}\nThe variables $\\pi$ and $\\phi$ can then be promoted to quantum mechanical operators which obey the canonical commutation relations\n\\begin{align}\n    [\\phi(\\bf{x}), \\pi(\\bf{y})] &= i \\delta^{(3)}(\\bf{x} - \\bf{y}) \\\\\n    [\\phi(\\bf{x}), \\phi(\\bf{y})] &= [\\pi(\\bf{x}), \\pi(\\bf{y})] = 0.\n\\end{align}\nNext, it is convenient to rewrite $\\phi$ and $\\pi$ in terms of so-called ladder operators\\footnote{The motivation for the use of the ladder operators can be found in any standard quantum mechanics textbook, e.g.~\\cite{Griffiths:qm}},  $a_{\\bf{p}}$ and $a^{\\dagger}_{\\bf{p}}$, defined implicitly as\n\\begin{align}\n    \\phi(x) &= \\int \\frac{d^3p}{(2\\pi)^3} \\frac{1}{\\sqrt{2\\omega_{\\bf{p}}}} (a_{\\bf{p}} + a^{\\dagger}_{-\\bf{p}}) e^{i \\bf{p}\\cdot\\bf{x}}, \\\\\n    \\pi(x) &= \\int \\frac{d^3p}{(2\\pi)^3} (-i) \\sqrt{\\frac{\\omega_{\\bf{p}}}{2}} (a_{\\bf{p}} - a^{\\dagger}_{-\\bf{p}}) e^{i \\bf{p}\\cdot\\bf{x}},\n\\end{align}\nand with $\\omega_{\\bf{p}} \\equiv \\sqrt{|\\bf{p}|^2 + m^2}$.\nCombining the commutation relations with the definitions of the ladder operators, the Hamiltonian may be written~\\cite{Peskin:1995ev} as\n\\begin{equation}\n    H = \\int \\frac{d^3p}{(2\\pi)^3} \\omega_{\\bf{p}} \\bigg( a^{\\dagger}_{\\bf{p}} a_{\\bf{p}} + \\frac{1}{2} \\Big[a_{\\bf{p}}, a^{\\dagger}_{\\bf{p}} \\Big] \\bigg).\n\\end{equation}\nCalculating the commutators of the Hamiltonian and the ladder operators, $[H, a^{\\dagger}_{\\bf{p}}] = \\omega_{\\bf{p}} a^{\\dagger}_{\\bf{p}}$ and $[H, a_{\\bf{p}}] = -\\omega_{\\bf{p}} a_{\\bf{p}}$, we obtain a natural physical interpretation.\nThe operator $a^{\\dagger}_{\\bf{p}}$ acting on the ground state creates a state with momentum 
and energy given by $\\bf{p}$ and $\\omega_{\\bf{p}}$, respectively -- in other words, it creates a particle with momentum $\\bf{p}$ and energy $\\omega_{\\bf{p}}$.\nSimilarly, acting on this excited state with the operator $a_{\\bf{p}}$ returns the system to the ground state -- it annihilates a particle with momentum $\\bf{p}$ and energy $\\omega_{\\bf{p}}$.\n\nAlthough the fields describing the various particles in the SM are considerably more complex than the scalar field in this example, the Klein-Gordon field still serves to illustrate a valuable point: the quantum field theory framework allows us to describe the creation and annihilation of particles with an energy-momentum relation that is consistent with special relativity.\nIn particular, the Klein-Gordon Lagrangian (Eqn.~\\ref{eqn:classical_kg}) describes a field whose excitations are particles of spin-zero and mass $m$.\nMore generally, the vast majority of fundamental particles are not spin-zero and we will need more complicated Lagrangians to describe their dynamics.\n\n\\subsection{Spinor \\& Vector Fields} \\label{sec:theory_sv_fields}\nOther than the Higgs boson, all of the currently known fundamental particles are either spin-$\\frac{1}{2}$ (fermions) or spin-1 (bosons).\nHow can we move beyond the Klein-Gordon Lagrangian and construct Lagrangians for spin-$\\frac{1}{2}$ and spin-1 particles?\nIn general, the business of constructing Lagrangians in quantum field theory is not as rigorously motivated as in classical field theory, where Lagrangians are derived by the relation $L = T - U$ for a given physical system.\nLagrangians in quantum field theory are usually motivated by writing down the most general Lagrangian which respects all of the symmetries of the physical system.\nAlternatively, we might choose a Lagrangian which yields the desired equations of motion.\n\nThe Lagrangian for spin-$\\frac{1}{2}$ particles can be motivated by picking one whose resulting equations of motion are the Dirac equation, which Dirac showed describes the dynamics of spin-$\\frac{1}{2}$ particles~\\cite{Dirac:1928hu}.\nOne such choice is the following, called the Dirac Lagrangian~\\cite{Peskin:1995ev}:\n\\begin{equation} \\label{eqn:dirac_lagrangian}\n    \\mathcal L_{\\text{Dirac}} = \\bar{\\psi}(i \\gamma_\\mu \\partial_\\mu - m) \\psi,\n\\end{equation}\nwhose resulting equation of motion is the Dirac equation\n\\begin{equation}\n    (i \\gamma_\\mu \\partial_\\mu - m) \\psi(x) = 0.\n\\end{equation}\nIn the preceding equations, $\\psi$ represents a four-component spinor with its adjoint $\\bar{\\psi} \\equiv \\psi^\\dagger\\gamma^0$ and the $\\gamma^\\mu$ a set of matrices which satisfy the anticommutation relation\n\\begin{equation}\n    \\{\\gamma_\\mu, \\gamma_\\nu\\} = 2 g^{\\mu\\nu},\n\\end{equation}\nwith $g^{\\mu\\nu}$ the Minkowski metric. 
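\n\nAs a quick consistency check (a standard one-line computation, not tied to any particular reference): because $\\mathcal L_{\\text{Dirac}}$ contains no derivatives of $\\bar{\\psi}$, the Euler-Lagrange equation for $\\bar{\\psi}$ reduces to\n\\begin{equation}\n    \\frac{\\partial \\mathcal L_{\\text{Dirac}}}{\\partial \\bar{\\psi}} = (i \\gamma_\\mu \\partial_\\mu - m) \\psi = 0,\n\\end{equation}\nwhich is exactly the Dirac equation quoted above.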
\n\nThe Lagrangian for spin-1 particles can be motivated by selecting a Lagrangian whose equations of motion are consistent with the dynamics of the photon, a familiar spin-1 particle.\nSuch a Lagrangian is the Proca Lagrangian~\\cite{Griffiths:2008zz}, which describes a four-component vector field $A^\\mu$\n\\begin{equation} \\label{eqn:proca_lagrangian}\n    \\mathcal L_{\\text{Proca}} = -\\frac{1}{4} F_{\\mu \\nu}F^{\\mu \\nu} + \\frac{1}{2} m^2 A_\\mu A^\\mu.\n\\end{equation}\nThe resulting field equation is then~\\cite{Griffiths:2008zz}\n\\begin{equation}\n    \\partial_\\mu F^{\\mu\\nu} + m^2A^\\nu = 0,\n\\end{equation}\nwhich for the case of the photon (which is massless, $m=0$) restores Maxwell's equations in empty space: $\\partial_\\mu F^{\\mu\\nu} = 0$.\nThe Klein-Gordon, Dirac, and Proca Lagrangians form the basis from which the SM Lagrangian is constructed.\n\n%\\subsection{Perturbative Expansions via Feynman Diagrams}\n%Having now written down Lagrangians which are representative of the dynamics of particles in the SM, we next wish to calculate physically observable quantities, namely cross sections and decay rates.\n%The resulting Euler-Lagrange equations of motion for Lagrangians like those considered in Sec.~\\ref{sec:theory_sv_fields} are, in general, nonlinear differential equations with no known exact solutions.\n%Rather than exact solutions, we must settle for perturbative solutions.\n\n%In practice, this is done through an expansion in orders of the coupling constant, a parameter in a Lagrangian which represents the strength of an interaction.\n\n\n%\\subsection{Renormalization}\n\n\n%In attempting to construct Lagrangians to describe the dynamics of spin-$\\frac{1}{2}$ and spin-1 particles, there are several considerations\n", "meta": {"hexsha": "2858b9a7de095edeb5fa2fabb8c6a10c327b0601", "size": 13475, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/qft.tex", "max_stars_repo_name": "sam-may/phd_thesis", "max_stars_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/qft.tex", "max_issues_repo_name": "sam-may/phd_thesis", "max_issues_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/qft.tex", "max_forks_repo_name": "sam-may/phd_thesis", "max_forks_repo_head_hexsha": "acd61f340e5677deba412b1b3baecd124c32440f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.2945205479, "max_line_length": 620, "alphanum_fraction": 0.7570315399, "num_tokens": 3644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.896251362048962, "lm_q2_score": 0.6224593312018546, "lm_q1q2_score": 0.5578800234097482}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\title{Cardano's Formula for Cubic equations}\n\\date{2021 - 11 - 18}\n\\author{Nnadiukwu Miracle}\n\\begin{document}\n\t\\maketitle\n\\centering\n\t\\section*{Abstract}\n\tGerolamo Cardano was born in Pavia in 1501 as the illegitimate child of a jurist. He attended the University of Padua and became a physician in the town of Sacco, after being rejected in his home town in Milan. He became one of the most famous doctors in Europe,  having treated the pope. He was also an astrologer and an avid gambler, to which he wrote the book on Games of chance, which was the first treatise on the Mathematics of probability.\\cite{elsevier}\n\t\\centering\n\t\\section*{Introduction to Cardano's formula}\n\tCardano's formula for solution of cubic equations for an equation like:\n\t\t\\begin{equation*}\n\t\tx^3 + ax^2 + a_2x + a_3 = 0\n\t\\end{equation*}\n\tThe parameters Q, R, S and T can be computed thus:\n\t\t\\begin{align*}\n\t\t\tQ &=\\dfrac{3a_2 - a^2_1}{a}  \t&R &= \\frac{9a_1a_2 - 27a_3 - 2a^3_1}{54}\\\\\n\t\t\t\\linebreak\n\t\t\tS &=3\\sqrt{R + \\sqrt{-Q^3 + R^2}}  &T &= 3\\sqrt{R - \\sqrt{Q^3 + R^2}}\t\\\\\t\t\n\t\t\\end{align*}\nto give the roots:\n\\begin{align*} \n\tx_1 &=\tS + T - \\frac{1}{3}a_1\\\\\n\tx_2 &=\t\\frac{-(S + T)}{2} - \\frac{a_1}{3} + i\\frac{\\sqrt{3}(s - T)}{2}\\\\\n\tx_3 &=\t\\frac{-(S + T)}{2} - \\frac{a_1}{3} - i\\frac{\\sqrt{3}(s - T)}{2}\n\\end{align*} \nNote: $x^3$ must not have a co-efficient\n\\subsection*{Some examples}\n\\begin{itemize}\n\t\\item $x^3 - 3x^2 + 4 = 0$\n\t\\item $x^3 - 3x^2 + 4 = 0$\n\\end{itemize}\n\n\n\t\\bibliographystyle{ieeetr}\n\t\\bibliography{cardano}\n\\end{document}", "meta": {"hexsha": "404fd61297f99e8ca050c9c92f25e536c5c0c723", "size": 1566, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSC 101 - miracle - LaTEX/class project 3.tex", "max_stars_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_stars_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSC 101 - miracle - LaTEX/class project 3.tex", "max_issues_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_issues_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSC 101 - miracle - LaTEX/class project 3.tex", "max_forks_repo_name": "Nnadiukwu-Miracle/miracleCSC101", "max_forks_repo_head_hexsha": "9a20f57f5164ce75a0ba7f84f3f3e766d093f16b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.1538461538, "max_line_length": 462, "alphanum_fraction": 0.6717752235, "num_tokens": 605, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859597, "lm_q2_score": 0.7279754548076478, "lm_q1q2_score": 0.5578429633417109}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 26, 2014}\n\\maketitle\n\\section*{fundamental theorem of ring homomorphism}\n\n\\subsection*{observation}\nlet $\\varphi:R\\to S$ be a given ring homomorphism (R,S are commutative rings)\n\nwe want to define $\\varepsilon:R[x]\\to S$ ring homomorphism such that $\\begin{cases}\\varphi(r)=\\varepsilon(r)&\\forall r\\in R\\\\\\varepsilon(x)=s&s\\text{ fixed }\\in S\\end{cases}$\n\nthen $\\varepsilon(a_0+a_1x+\\dots+a_nx^n)=\\varphi(a_0)+\\phi(a_1)s+\\dots+\\varphi(a_n)s^n$\n\n\\subsection*{construction}\nwe define $R_1+R_2+\\dots+R_n=\\{(a_1,a_2,\\dots,a_n):a_i\\in R_i\\}$ with component wise operations. this is a ring.\n\n\\subsection*{definition}\ngiven $R$ a commutative ring, we define the characteristic of$R$ to be $\\text{char } R$ to be the smallest $n$ such that $1_1+1_2+...+1_n=0$. If it doesn't exists we say $\\text{char } R$ is zero.\n\n\\subsection*{prop}\nthe characteristic of an integral domain is either $0$ or prime.\n\n\\subsubsection*{proof}\nif 0 then done, lets, assume that it's not prime.\n\nthen $\\text{char } R=n=\\alpha\\beta$ and $0=1_1+1_2+\\dots+1_n=1_1+1_2+\\dots+1_\\alpha+1_1+\\dots+1_\\beta=\\alpha\\beta$. Because we are in an integral domain then $\\alpha=0$ or $\\beta=0$ and so we have a contradiction because $\\alpha<n$ and the same for $\\beta$ and so we have a contradiction because $n$ is the smallest to be 0 and so $n$ is prime.\n\n\\subsection*{corollary}\nif $K$ is a field then characteristic is 0 or prime\n\\end{document}\n\n\n", "meta": {"hexsha": "37f2e79f04a09dcb505358c36a48143948885f66", "size": 1648, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-notes-2014-11-26.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-notes-2014-11-26.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-notes-2014-11-26.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.3255813953, "max_line_length": 344, "alphanum_fraction": 0.7269417476, "num_tokens": 559, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.727975460709318, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.5578429600933792}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\n\n\\begin{document}\n\n%taken from https://en.wikipedia.org/wiki/Lagrangian_mechanics\n\n\\section{Lagrangian Mechanics}\n\\subsection{Lagrangian and action}\n% this is a comment that is ignored\nThe core element of \\href{https://en.wikipedia.org/wiki/Lagrangian_mechanics}{Lagrangian mechanics} is the Lagrangian function, which summarizes the dynamics of the entire system in a very simple expression. The physics of analyzing a system is reduced to choosing the most convenient set of generalized coordinates, determining the kinetic and potential energies of the constituents of the system, then writing down the equation for the Lagrangian to use in Lagrange's equations. It is defined by\n$$ L = T -V $$\nwhere $T$ is the total kinetic energy and $V$ is the total potential energy of the system.\n\nThe next fundamental element is the action $\\mathcal{S}$, defined as the time integral of the Lagrangian:\n$$\\mathcal{S} = \\int_{t_1}^{t_2} L\\,\\mathrm{d}t.$$\nThis also contains the dynamics of the system, and has deep theoretical implications (discussed below). Technically action is a functional, rather than a function: its value depends on the full Lagrangian function for all times between $t_1$ and $\\bigl. t_2\\bigr.$. Its dimensions are the same as angular momentum.\n\nIn classical field theory, the physical system is not a set of discrete particles, but rather a continuous field defined over a region of 3d space. Associated with the field is a Lagrangian density $\\mathcal{L}(\\mathbf{r},t)$ defined in terms of the field and its derivatives at a location $\\bigl.\\mathbf{r}\\bigr.$. The total Lagrangian is then the integral of the Lagrangian density over 3d space (see volume integral):\n $$L(t) = \\int  \\mathcal{L}(\\mathbf{r},t) \\mathrm{d}^3 \\mathbf{r}$$\n where $\\bigl.\\mathrm{d}^3\\mathbf{r}\\bigr.$ is a 3d differential volume element, must be used instead. The action becomes an integral over space and time:\n $$\\mathcal{S} = \\int_{t_1}^{t_2}\\int \\mathcal{L}(\\mathbf{r},t) \\mathrm{d}^3\\mathbf{r} \\mathrm{d}t.$$\n\n \\subsection{Hamilton's principle of stationary action}\n Let $q_0$ and $q_1$ be the coordinates at respective initial and final times $t_0$ and $t_1$. Using the calculus of variations, it can be shown that Lagrange's equations are equivalent to Hamilton's principle: The trajectory of the system between $t_0$ and $t_1$ has a stationary action $\\mathcal{S}$.\n\n By stationary, we mean that the action does not vary to first-order from infinitesimal deformations of the trajectory, with the end-points $(q_0, t_0)$ and $(q_1,t_1)$ fixed. Hamilton's principle can be written as: %this is another comment that is ignored\n $$\\delta \\mathcal{S} = 0. $$\n Thus, instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action.\n\n Hamilton's principle is sometimes referred to as the principle of least action, however the action functional need only be stationary, not necessarily a maximum or a minimum value. Any variation of the functional gives an increase in the functional integral of the action.\n\n We can use this principle instead of Newton's Laws as the fundamental principle of mechanics, this allows us to use an integral principle (Newton's Laws are based on differential equations so they are a differential principle) as the basis for mechanics. 
However, it is not widely stated that Hamilton's principle is a variational principle only in the presence of holonomic constraints; if we are dealing with nonholonomic systems, the variational principle should be replaced with one involving d'Alembert's principle of virtual work. Working only with holonomic constraints is the price we have to pay for using an elegant variational formulation of mechanics.\n\n\\end{document}\n", "meta": {"hexsha": "7767bcae6e2fb5cdb4031dc69fe4328ae893a6f7", "size": 3827, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "example/example.tex", "max_stars_repo_name": "dino-r/casttex", "max_stars_repo_head_hexsha": "cb8daa3d982b526d7ba9fedd75c30fb944af8bd2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "example/example.tex", "max_issues_repo_name": "dino-r/casttex", "max_issues_repo_head_hexsha": "cb8daa3d982b526d7ba9fedd75c30fb944af8bd2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "example/example.tex", "max_forks_repo_name": "dino-r/casttex", "max_forks_repo_head_hexsha": "cb8daa3d982b526d7ba9fedd75c30fb944af8bd2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 100.7105263158, "max_line_length": 651, "alphanum_fraction": 0.7797230206, "num_tokens": 944, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936430859598, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5578429497744735}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath,amssymb}\n\\usepackage{hyperref}\n\\usepackage[symbol]{footmisc}\n\n\\newcommand{\\F}{\\mathbb{F}}\n\\newcommand{\\G}{\\mathbb{G}}\n\\newcommand{\\hs}{\\mathcal{H}}\n\\newcommand{\\enc}{\\operatorname{Enc}}\n\\newcommand{\\dec}{\\operatorname{Dec}}\n\n\\title{Triptych multisignature analysis}\n\\author{Cypher Stack}\n\\date{\\today}\n\n\\begin{document}\n\n\\maketitle\n\nThis document describes research and analysis into multisignature support for the Triptych zero-knowledge proving system and its associated transaction model.\n\n\\textbf{All material in this document should be considered experimental and unsuitable for production without thorough independent review.}\n\n\n\\section{Introduction}\n\nTriptych \\cite{triptych} is a zero-knowledge proving system that can be used in a multidimensional linkable ring signature construction to build a confidential transaction protocol.\nProofs constructed using Triptych scale in size logarithmically with the size of the input anonymity set, and can take advantage of efficient verification using batching and multiscalar multiplication.\n\nA currently open question is the extent to which the protocol can support multisignature operations, where a group of players each holding a portion of a private signing key can jointly produce a valid transaction without mutual trust in each other or another third party.\nIt is a requirement that the resulting proof be indistinguishable from one produced by a single player holding a private key; that is, the resulting proof must be verifiable using a standard public key.\n\nA confidential protocol based on Triptych requires the use of a private key in two places: during generation of a one-of-many authorizing proof, and when constructing a linking tag used to prevent double-spend attempts.\nThe one-of-many proof uses the private key in a linear fashion, which allows for straightforward approaches to multiparty computation.\nHowever, the linking tag is constructed using the inverse of the private key, posing a challenge to such operations.\nFortunately, this inversion operation is similar to that used in other protocols, so we can apply earlier work toward a solution.\n\nLet $\\G$ be the prime-order subgroup of the \\texttt{ed25519} elliptic curve group, and let $\\F$ be its scalar field.\nLet $G$, $H$, and $U$ be independent fixed generators of $\\G$ known to all players.\nLet $\\hs: \\{0,1\\}^* \\to \\F$ be a cryptographic hash function.\nWe often prefix input to $\\hs$ with an integer value, to act as an arbitrary domain separator; note that any equivalent domain separation method may be substituted, and that our notation is used for convenience.\nWhen referring to sets of keys generated by players, we assume a fixed and publicly-known ordering, such as lexicographic ordering, is used.\n\n\n\\section{Key generation}\n\nKeys are generated in a manner similar to that of MuSig \\cite{musig}.\nWe assume a set of $\\nu$ players who wish to generate linear shares of a private and public key constructed in the following way.\nThroughout this section, we assume actions taken by a player $\\alpha \\in [0,\\nu)$ are separately taken by all $\\nu$ players in parallel.\n\\begin{enumerate}\n    \\item Each player $\\alpha$ chooses random $(a_\\alpha, b_\\alpha) \\in \\F^2$ and computes $B_\\alpha \\equiv b_\\alpha G$.\n    It generates a private key $\\gamma_\\alpha$ and corresponding public key $\\Gamma_\\alpha$ for the Paillier 
cryptosystem compatible with encryption of \\texttt{ed25519} prime-order subgroup scalar elements.\n    It sends the tuple $(a_\\alpha, B_\\alpha, \\Gamma_\\alpha)$ to all players.\n    \\item Each player $\\alpha$ computes the aggregate private view key $$a \\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(0, \\{a_\\beta G\\}_{\\beta=0}^{\\nu-1}, \\alpha) a_\\alpha$$ and corresponding aggregate public view key $$A \\equiv aG = \\sum_{\\alpha=0}^{\\nu-1} \\hs(0, \\{a_\\beta G\\}_{\\beta=0}^{\\nu-1}, \\alpha) a_\\alpha G.$$\n    \\item Each player $\\alpha$ computes the aggregate public spend key $$B \\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1}, \\alpha) B_\\alpha.$$\n\\end{enumerate}\n\nThe aggregate public key pair $(A,B)$ is used to receive transactions.\n\n\n\\section{Transaction generation and scanning}\n\nA sender produces a transaction for the address $(A,B)$ as usual; we describe the process here for the sake of notation clarity.\n\\begin{enumerate}\n    \\item The sender chooses random $r \\in \\F$ and sets $R \\equiv rG$.\n    \\item The sender computes the output public key $P \\equiv \\hs(2,rA)G + B$.\n    \\item The sender chooses random $s_1 \\in \\F$ and, using a valid value $v \\in \\F$, computes a commitment $C_{\\operatorname{val}} \\equiv s_1 G + vH$.\n    \\item The sender computes other auxiliary transaction information as needed.\n\\end{enumerate}\n\nThe sender includes $R, P, C_{\\operatorname{val}}$ among other public transaction information, and submits the transaction to the network.\n\nAny of the $\\nu$ players holding shares of the address $(A,B)$ scan transactions for outputs destined for the address as usual, using the known aggregate private view key $a$ and aggregate public spend key $B$.\nSpecifically, for a given output public key $P$ with associated $R$, they test if $\\hs(2,aR)G + B = P$, and ignore if the equality does not hold.\nIf it does, we note the associated private key:\n\\begin{align*}\np &\\equiv \\hs(2,aR) + b \\\\\n&= \\hs(2,aR) + \\sum_{\\alpha=0}^{\\nu-1} \\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1}, \\alpha) b_\\alpha\n\\end{align*}\nOf course, each player $\\alpha$ holds only its own share $b_\\alpha$ of the aggregate private spend key.\n\n\n\\section{Linking tag computation}\n\nTo spend the output represented by the public output key $P$, the players must cooperatively compute the associated linking tag $J$.\nIn the Triptych protocol, this tag is defined (continuing our rather bespoke notation from this documentation, rather than directly using the notation from \\cite{triptych}) as follows:\n\\begin{align*}\nJ &\\equiv p^{-1}U \\\\\n&= \\left( \\hs(2,aR) + \\sum_{\\alpha=0}^{\\nu-1} \\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1}, \\alpha) b_\\alpha \\right)^{-1}U\n\\end{align*}\n\nThe non-linearity of the linking tag with respect to the associated output private key makes such a cooperative computation more complex.\nHowever, we can apply a method of Gennaro and Goldfeder \\cite{gennaro}, which was originally devised for use in multiparty threshold ECDSA signature computation.\nThe result will be that all players obtain $J$ without revealing their private spend key shares to each other.\n\nBecause $p$ consists of an additive prefix $\\hs(2,aR)$ that is not distributed between players, we define players' secret shares $\\{p_\\alpha\\}_{\\alpha=0}^{\\nu-1}$ of $p$ according to their index:\n\\begin{itemize}\n    \\item $p_0 := \\hs(2,aR) + \\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1}, 0) b_0$\n    \\item $p_{\\alpha > 0} := 
\\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1}, \\alpha) b_\\alpha$\n\\end{itemize}\nThe result is that $\\sum_{\\alpha=0}^{\\nu-1} p_\\alpha = p$.\nThe designation of indices does not affect protocol security.\n\nA more comprehensive version of this method assumes the use of range proofs coupled with Paillier ciphertexts; however, the authors hypothesize (but do not prove) that no significant information leakage occurs if these proofs are omitted.\n\nWe specifically require the use of Paillier public-key encryption, the keys for which have been introduced previously.\nLet $\\enc_\\alpha$ represent Paillier encryption using the public key $\\Gamma_\\alpha$ for player $\\alpha$, and $\\dec_\\alpha$ represent Paillier decryption using the private key $\\gamma_\\alpha$ for this player.\n\nThe process proceeds as follows.\n\\begin{enumerate}\n    \\item Each player $\\alpha$ chooses random $g_\\alpha \\in \\F$ and sets $G_\\alpha \\equiv g_\\alpha U$.\n    It generates a commitment $C_{\\operatorname{tag},\\alpha}$ to $G_\\alpha$ and sends the commitment to all players.\n    \\item Each player $\\alpha$ sets $c_\\alpha \\equiv \\enc_\\alpha(p_\\alpha)$. It sends $c_\\alpha$ to all players.\n    \\item For each other player $\\beta$, player $\\alpha$ chooses random $b_{\\alpha\\beta} \\in \\F$.\n    It sends the value $g_\\alpha c_\\beta + \\enc_\\beta(-b_{\\alpha\\beta})$ to player $\\beta$, who decrypts using $\\gamma_\\beta$ and defines $a_{\\beta\\alpha}$ to be the resulting plaintext.\n    \\item Each player $\\alpha$ computes $\\delta_\\alpha := p_\\alpha g_\\alpha + \\sum_{\\beta \\neq \\alpha}(a_{\\alpha\\beta} + b_{\\alpha\\beta})$.\n    It computes a Schnorr proof of representation $\\Pi_\\alpha$ of $G_\\alpha$ with respect to the generator $U$.\n    It sends $G_\\alpha$, $\\Pi_\\alpha$, and $\\delta_\\alpha$ to all players.\n    \\item Each player computes $\\delta := \\sum_{\\beta=0}^{\\nu-1} \\delta_\\beta$.\n    It ensures that the commitment $C_{\\operatorname{tag},\\beta}$ and Schnorr proof $\\Pi_\\beta$ are valid for each $\\beta$, and aborts otherwise.\n    It then computes the aggregate linking tag $J := \\delta^{-1} \\sum_{\\beta=0}^{\\nu-1} G_\\beta$.\n\\end{enumerate}\n\nNote that in this protocol, we use additive notation on Paillier ciphertexts, and multiplicative notation between field elements in $\\F$ and Paillier ciphertexts.\nBecause of Paillier encryption homomorphism, these operations are well defined.\n\n\n\\section{Authorizing proof}\n\nTriptych requires the use of a one-of-many authorizing proof.\nWhen embedded with a linking tag and message using the non-interactive Fiat-Shamir heuristic, this proof serves as a digital signature demonstrating that the signer knows the private key corresponding to both an input key and the linking tag.\nThe proof effectively proceeds in two stages.\nIn the first stage, the signer hides the index corresponding to the known public key in the input set.\nIn the second stage, the signer demonstrates knowledge of the private key corresponding to the public key at the hidden index, and further shows that the linking tag is constructed correctly.\nThroughout this section, we assume the input set contains $N$ key tuples, where $N = n^m$ for some $n,m \\in \\mathbb{N}$.\n\nThe proof proceeds as follows.\n\\begin{enumerate}\n    \\item Any designated player assembles an input set $\\{(M_{i,0}, M_{i,1})\\}_{i=0}^{N-1}$.\n    A secret index $0 \\leq \\ell < N$ is chosen. 
Each $M_{i,0} \\in \\G$ is an existing output public key, and the designated output public key is set such that $M_{\\ell,0} = P$.\n    \\item Any designated player computes a commitment offset $C_{\\operatorname{off}} \\equiv s_2 G + vH$, where $s_2 \\in \\F$ is chosen according to the transaction protocol as usual.\n    Each $M_{i,1}$ is the amount commitment corresponding to $M_{i,0}$, offset by $C_{\\operatorname{off}}$.\n    Let $s \\equiv s_1 - s_2$, such that $M_{\\ell,1} = sG$.\n    \\item Each player $\\alpha$ selects random $$r_A^{(\\alpha)}, r_B^{(\\alpha)}, r_C^{(\\alpha)}, r_D^{(\\alpha)}, \\left\\{a_{j,i}^{(\\alpha)}\\right\\}_{i=1,j=0}^{n-1,m-1}, \\left\\{\\rho_j^{(\\alpha)}\\right\\}_{j=0}^{m-1} \\in \\F$$ according to the proof protocol.\n    It sends the values $$r_A^{(\\alpha)}, r_B^{(\\alpha)}, r_C^{(\\alpha)}, r_D^{(\\alpha)}, \\left\\{a_{j,i}^{(\\alpha)}\\right\\}_{i=1,j=0}^{n-1,m-1}, \\left\\{\\rho_j^{(\\alpha)}G\\right\\}_{j=0}^{m-1}, \\left\\{\\rho_j^{(\\alpha)}J\\right\\}_{j=0}^{m-1}$$ to all players.\n    \\item Each player computes the following values:\n        \\begin{align*}\n            r_A &\\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(3, \\{r_A^{(\\beta)}\\}_{\\beta=0}^{\\nu-1}, \\alpha) r_A^{(\\alpha)} \\\\\n            r_B &\\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(4, \\{r_B^{(\\beta)}\\}_{\\beta=0}^{\\nu-1}, \\alpha) r_B^{(\\alpha)} \\\\\n            r_C &\\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(5, \\{r_C^{(\\beta)}\\}_{\\beta=0}^{\\nu-1}, \\alpha) r_C^{(\\alpha)} \\\\\n            r_D &\\equiv \\sum_{\\alpha=0}^{\\nu-1} \\hs(6, \\{r_D^{(\\beta)}\\}_{\\beta=0}^{\\nu-1}, \\alpha) r_D^{(\\alpha)} \\\\\n            \\left\\{a_{j,i}\\right\\}_{i=1,j=0}^{n-1,m-1} &\\equiv \\left\\{\\sum_{\\alpha=0}^{\\nu-1} \\hs(7, \\{a_{j,i}^{(\\beta)}\\}_{\\beta=0}^{\\nu-1}, \\alpha) a_{j,i}^{(\\alpha)}\\right\\}_{i=1,j=0}^{n-1,m-1} \\\\\n            \\{\\rho_j G\\}_{j=0}^{m-1} &\\equiv \\left\\{\\sum_{\\alpha=0}^{\\nu-1} \\hs(8, \\{\\rho_j^{(\\beta)}G, \\rho_j^{(\\beta)}J\\}_{\\beta=0}^{\\nu-1}, \\alpha) \\rho_j^{(\\alpha)}G\\right\\}_{j=0}^{m-1} \\\\\n            \\{\\rho_j J\\}_{j=0}^{m-1} &\\equiv \\left\\{\\sum_{\\alpha=0}^{\\nu-1} \\hs(8, \\{\\rho_j^{(\\beta)}G, \\rho_j^{(\\beta)}J\\}_{\\beta=0}^{\\nu-1}, \\alpha) \\rho_j^{(\\alpha)}J\\right\\}_{j=0}^{m-1}\n        \\end{align*}\n        It uses the index $\\ell$ to compute the set $\\{\\sigma_{j,i}\\}_{i,j=0}^{n-1,m-1}$ according to the proof protocol, and further uses this collection of values to compute $A, B, C, D$ as well.\n    \\item Each player computes the hash value $\\mu$, the values $\\{X_j,Y_j\\}_{j=0}^{m-1}$, and the challenge $\\xi$ according to the proof protocol.\n    It computes the set $\\{f_{j,i}\\}_{i=1,j=0}^{n-1,m-1}$ and the values $z_A, z_C, K$.\n    \\item Each player $\\alpha$ computes $$z^{(\\alpha)} \\equiv \\hs(1, \\{B_\\beta\\}_{\\beta=0}^{\\nu-1},\\alpha) b_\\alpha \\xi^m - \\sum_{j=0}^{m-1} \\hs(8, \\{\\rho_j^{(\\beta)}G, \\rho_j^{(\\beta)}J\\}_{\\beta=0}^{\\nu-1}, \\alpha) \\rho_j^{(\\alpha)}\\xi^j$$ and sends this value to all players.\n    \\item Each player computes $$z \\equiv (\\hs(2, aR) + \\mu s)\\xi^m + \\sum_{\\alpha=0}^{\\nu-1} z^{(\\alpha)}$$ and assembles the complete proof.\n\\end{enumerate}\n\n\n\\section{Observations}\n\nA possible modification is to replace the Schnorr proof of representation of the value $G_\\alpha$ in the Paillier inversion protocol with a proof that the associated commitment $C_{\\operatorname{tag},\\alpha}$ has a known discrete logarithm.\n\nIt may be 
necessary to require a specific commitment round for the authorizing proof pre-challenge values.\nCurrently, these values are computed using MuSig-type hash aggregation in order to assert that the resulting aggregated values are uniformly distributed and cannot be controlled by malicious players.\nHowever, the aggregation uses a common coefficient set to compute values of the form $\\{\\rho_j G\\}$ and $\\{\\rho_j J\\}$.\n\nIt may be secure in practice to compress the Paillier inversion protocol, sending the initial commitment and encrypted key shares simultaneously.\n\n\n\\section{Conclusions}\n\nThe use of cooperative signing in the Triptych transaction protocol imposes significant communication complexity.\nNotably, the inversion construction of linking tags requires an expensive many-to-many share conversion that itself requires multiple communication rounds between all signers, and a Paillier-based approach to this inversion itself requires support for arbitrary RSA groups.\n\nBecause security of the authorizing proof relies in part on Fiat-Shamir transcripts, signers cannot complete this proof (namely, the challenge values) without first computing the corresponding linking tag, as the tag must be embedded in the transcript.\nFurther, it is not possible to complete the entire pre-challenge portion of the authorizing proof prior to the linking tag computation, as this also requires knowledge of the complete tag.\n\nIf efficient multisignature operations are a priority for which this level of communication complexity is infeasible, it may be beneficial to investigate alternate transaction protocols that use linear linking tag constructions and/or simpler cooperative authorization proofs.\nThe author is aware of (and collaborating on the development of) at least two newer trustless protocols that take this approach: Seraphis\\footnote{\\url{https://gist.github.com/UkoeHB/24f6be6125684046c5c4d9f49351bccf}} and Lelantus 3\\footnote{Disclaimer: The Firo project funds Cypher Stack research that includes development of this protocol.}.\n\nIt remains for stakeholders and interested parties to assess the feasibility of this construction as it applies to protocols of interest, and determine if Cypher Stack or other researchers and developers should continue to formalize, optimize, implement, or deploy it in software.\n\n\n\\bibliographystyle{plain}\n\\bibliography{main}\n\n\\end{document}\n", "meta": {"hexsha": "0bdb2d971b57eab4f839999322da83cbc2f89921", "size": 15820, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "AaronFeickert/triptych-multisig", "max_stars_repo_head_hexsha": "76d31232a2cec30db0e5094a814e3dbc0568ffa2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-07-28T21:25:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-05T19:12:51.000Z", "max_issues_repo_path": "main.tex", "max_issues_repo_name": "AaronFeickert/triptych-multisig", "max_issues_repo_head_hexsha": "76d31232a2cec30db0e5094a814e3dbc0568ffa2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "AaronFeickert/triptych-multisig", "max_forks_repo_head_hexsha": "76d31232a2cec30db0e5094a814e3dbc0568ffa2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-29T00:15:19.000Z", 
"max_forks_repo_forks_event_max_datetime": "2021-09-10T20:00:28.000Z", "avg_line_length": 81.5463917526, "max_line_length": 344, "alphanum_fraction": 0.7247787611, "num_tokens": 4437, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404018582426, "lm_q2_score": 0.600188359260205, "lm_q1q2_score": 0.5578393098214443}}
{"text": "\n\\subsection{Eplett's identity}\n\n\\citeauthor{riordan:intro:combinatorial:analysis},\nin his book \\cite{riordan:intro:combinatorial:analysis},\nasks for a solution of the following exercise:\n\n\\begin{quote}\n    let $\\lbrace a_{n} \\rbrace_{n\\in\\mathbb{N}}$ be a sequence, with $a_{0}\\neq0$, \n    and $\\lbrace a_{n}^{\\prime} \\rbrace_{n\\in\\mathbb{N}}$ be its \\emph{inverse}:\n    if $A(t)$ and $A^{\\prime}(t)$ are two \\ac{fps} over them, respectively, \n    then $A(t)A^{\\prime}(t)=1$. Show that:\n    \\begin{displaymath}                \n        a_{n}^{\\prime} = \\frac{(-1)^{n}}{a_{0}^{n+1}}\n            \\left|\n            \\begin{array}{ccccc}\n                a_1 & a_2 & \\ldots & a_{n-1} & a_{n}\\\\\n                a_0 & a_1 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                0   & a_0 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    for $n \\geq 2$.\n    \\marginpar{another way to find the inverse of a given sequence}\n\\end{quote}\n\n\\begin{proof}\nAlthough not explicitly requested, %to show for $n\\geq2$, \nit's useful to recall formula about $a_{0}^{\\prime}$ and $a_{1}^{\\prime}$, \nsince they'll occur in the following. \n\nFirst of all we interpret the product\n$A(t)A^{\\prime}(t)=1$ as the Cauchy product over the ring of \\ac{fps}, \nnamely as a convolution, therefore: \n\\begin{displaymath}\n    \\left[t^{0}\\right]A(t)A^{\\prime}(t)=a_{0}a_{0}^{\\prime}=1\n\\end{displaymath}\nis the relation which defines $a_{0}^{\\prime}=\\frac{1}{a_0}$.  On the other hand: \n\\begin{displaymath}\n    \\left[t^{1}\\right]A(t)A^{\\prime}(t)=a_{0}a_{1}^{\\prime}+a_{1}a_{0}^{\\prime}=0\n\\end{displaymath}\nis the relation which defines $a_{1}^{\\prime}=-\\frac{a_1}{a_{0}^{2}}$.\n\nThe proof proceeds by induction on $n$:\n\\begin{itemize}\n    \\item base $n=2$:\n        \\begin{displaymath}                \n            a_{2}^{\\prime} = \\frac{1}{a_{0}^{3}}\n                \\left|\n                \\begin{array}{cc}\n                    a_1 & a_2 \\\\\n                    a_0 & a_1 \\\\\n                \\end{array}\n                \\right| \n                = \\frac{a_{1}^{2}-a_{0}a_{2}}{a_{0}^{3}}\n        \\end{displaymath}                \n        since $\\big[t^{2}\\big]A(t)A^{\\prime}(t)=a_{0}a_{2}^{\\prime}\n            +a_{1}a_{1}^{\\prime}+a_{2}a_{0}^{\\prime}=0$, we have:\n            \\begin{displaymath}\n                \\begin{split}\n                    a_{0}a_{2}^{\\prime} &= -\\left(a_{1}a_{1}^{\\prime}+a_{2}a_{0}^{\\prime}\\right)\n                        = -\\left(-\\left(\\frac{a_{1}}{a_{0}}\\right)^{2}+\\frac{a_{2}}{a_{0}}\\right)\\\\\n                    a_{2}^{\\prime} &= \\frac{a_{1}^{2}}{a_{0}^{3}}-\\frac{a_{2}}{a_{0}^{2}}\n                \\end{split}\n            \\end{displaymath}\n        which proves the base case;\n\n    \\item assume the statement holds for $n$ and show it still holds for $n+1$, therefore\n        we're left to prove the following:\n    \\begin{displaymath}                \n        a_{n+1}^{\\prime} = \n            -\\frac{1}{a_{0}}\\frac{(-1)^{n}}{a_{0}^{n+1}}\n            \\left|\n            \\begin{array}{ccccc}\n                a_1 & a_2 & \\ldots & a_{n} & a_{n+1}\\\\\n                a_0 & a_1 & \\ldots & a_{n-1} & a_{n}\\\\\n                0   & a_0 & \\ldots & 
a_{n-2} & a_{n-1}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    expanding the determinant:\n    \\begin{displaymath}                \n        a_{1}\n            \\left|\n            \\begin{array}{ccccc}\n                a_1 & a_2 & \\ldots & a_{n-1} & a_{n}\\\\\n                a_0 & a_1 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                0   & a_0 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n        - a_{0}\n            \\left|\n            \\begin{array}{ccccc}\n                a_2 & a_3 & \\ldots & a_{n} & a_{n+1}\\\\\n                a_0 & a_1 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                0   & a_0 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    therefore:\n    \\begin{displaymath}                \n        a_{n+1}^{\\prime} = \n            -\\frac{a_{1}}{a_{0}}a_{n}^{\\prime}\n        - \n            \\frac{1}{a_{0}}\\frac{(-1)^{n-1}}{a_{0}^{n}}\n            \\left|\n            \\begin{array}{ccccc}\n                a_2 & a_3 & \\ldots & a_{n} & a_{n+1}\\\\\n                a_0 & a_1 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                0   & a_0 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    expanding the determinant a second time:\n    \\marginpar{tedious manipulations, skip them at a first reading}\n    \\begin{displaymath}                \n        a_{2}\n            \\left|\n            \\begin{array}{ccccc}\n                a_1 & a_2 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                a_0 & a_1 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n        - a_{0}\n            \\left|\n            \\begin{array}{ccccc}\n                a_3 & a_4 & \\ldots & a_{n} & a_{n+1}\\\\\n                a_0 & a_1 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                0   & a_0 & \\ldots & a_{n-4} & a_{n-3}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    therefore:\n    \\begin{displaymath}                \n        a_{n+1}^{\\prime} = \n            -\\frac{a_{1}}{a_{0}}a_{n}^{\\prime}\n            -\\frac{a_{2}}{a_{0}}a_{n-1}^{\\prime}\n            -\\frac{1}{a_{0}}\\frac{(-1)^{n-2}}{a_{0}^{n-1}}\n            \\left|\n            \\begin{array}{ccccc}\n                a_3 & a_4 & \\ldots & a_{n} & a_{n+1}\\\\\n                a_0 & a_1 & \\ldots 
& a_{n-3} & a_{n-2}\\\\\n                0   & a_0 & \\ldots & a_{n-4} & a_{n-3}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & a_{0} & a_{1}\\\\\n            \\end{array}\n            \\right|\n    \\end{displaymath}                \n    expanding determinants in this fashion $n$ times yields:\n    \\begin{displaymath}                \n        a_{n+1}^{\\prime} = \n            -\\frac{a_{1}}{a_{0}}a_{n}^{\\prime}\n            -\\frac{a_{2}}{a_{0}}a_{n-1}^{\\prime}\n            \\ldots\n            -\\frac{a_{n}}{a_{0}}a_{1}^{\\prime}\n            -\\frac{a_{n+1}}{a_{0}}a_{0}^{\\prime}\n    \\end{displaymath}                \n    which is the same as saying:\n    \\begin{displaymath}                \n        a_{0}a_{n+1}^{\\prime}  \n            +a_{1}a_{n}^{\\prime}\n            +a_{2}a_{n-1}^{\\prime}\n            \\ldots\n            +a_{n}a_{1}^{\\prime}\n            +a_{n+1}a_{0}^{\\prime}\n            = 0\n    \\end{displaymath}                \n    which is the relation we get from $\\big[t^{n+1}\\big]A(t)A^{\\prime}(t)=0$,\n    so the induction step holds and the argument is proved as required.\n\n\\end{itemize}\n\n\\end{proof}\n\nThe previous exercise is actually a lemma for an identity which\n\\citeauthor{eplett:1979} finds and discusses in \\cite{eplett:1979}.  Let\n$\\lbrace d_{nk}\\rbrace_{n,k\\in\\mathbb{N}}$ be the Catalan triangle as defined\nin \\autoref{eq:shapiro:catalan:triangle:expanded}, so \\citeauthor{eplett:1979}'s\nresult reads as follows.\n\n\\begin{theorem}\n    \\marginpar{\\ldots this combination can also be expressed using an $A$-matrix, as\n        we will see later}\n    \\begin{displaymath}                \n        c_{j-1}=\\sum_{k=1}^{j}{(-1)^{k-1}d_{jk}}\n    \\end{displaymath}                \n    In other words, the coefficient $c_{j-1}$, the first one lying on row $j-1$,\n    combines the coefficients lying on row $j$, summed with alternating signs.\n    \\label{thm:eplett:identity}\n\\end{theorem}\n\\begin{proof}\n    Let $\\vect{a}=\\lbrace a_{n}\\rbrace_{n\\in\\mathbb{N}}$ be a sequence such\n    that $a_{0}=1$ and consider its \\emph{inverse} sequence, call it $\\vect{b}$,\n    where $b_{0}=1$ by the inverse relation on the very first coefficient.\n    By the previous exercise, observe that:\n    \\begin{displaymath}                \n        a_{n} = b_{n}+a_{1}b_{n-1}+a_{2}b_{n-2}+\\ldots+a_{n-1}b_{1}\n    \\end{displaymath}                \n    holds \\emph{if and only if}:\n    \\begin{equation}                \n        b_{n} = (-1)^{n-1}\n            \\left|\n            \\begin{array}{ccccc}\n                a_1 & a_2 & \\ldots & a_{n-1} & a_{n}\\\\\n                1 & a_1 & \\ldots & a_{n-2} & a_{n-1}\\\\\n                0   & 1 & \\ldots & a_{n-3} & a_{n-2}\\\\\n                \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n                0 & 0 & \\ldots & a_{1} & a_{2}\\\\\n                0 & 0 & \\ldots & 1 & a_{1}\\\\\n            \\end{array}\n            \\right|\n        \\label{eq:eplett:det:inverse:identity:lemma}\n    \\end{equation}                \n    As we have seen in\n    \\autoref{eq:shapiro:catalan:triangle:convolution:for:generic:coefficient}, \n    we can rewrite the theorem's \\ac{rhs}, calling the new term $b_{j}$:\n    \\begin{displaymath}                \n        c_{j-1}=\\sum_{k=1}^{j}{(-1)^{k-1}d_{jk}}\n            = \\sum_{k=1}^{j}{(-1)^{k-1}\\sum_{i_{1}+\\ldots+i_{k}=j}{c_{i_{1}}\\cdots c_{i_{k}}}}=b_{j}\n    \\end{displaymath}        
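\n    As a quick numerical check of the identity being proved (taking the first rows of the triangle of \\autoref{eq:shapiro:catalan:triangle:expanded} to be $(2,1)$, $(5,4,1)$ and $(14,14,6,1)$): for $j=3$ we get $5-4+1=2=c_{2}$, and for $j=4$ we get $14-14+6-1=5=c_{3}$, as the theorem claims.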
        \n    % TODO in order to be truly precise, we should prove that the term abstracted out by `b_{j}'\n    % actually equals the determinant of Catalan matrix.\n    Applying \\autoref{eq:eplett:det:inverse:identity:lemma} \\emph{from right to left}, \n    it is necessary to find a sequence $\\vect{a}$ such that:\n    \\begin{displaymath}                \n        a_{n} = c_{n-1}+a_{1}c_{n-2}+a_{2}c_{n-3}+\\ldots+a_{n-1}c_{0}\n    \\end{displaymath}                \n    According to \\citeauthor{feller:intro:combinatorial:analysis}, page $94$ of\n    \\cite{feller:intro:combinatorial:analysis}, the following convolution identity over\n    Catalan numbers holds, for $n>0$:\n    \\begin{displaymath}                \n        c_{n}=\\sum_{k=1}^{n}{c_{k-1}c_{n-k}}\\quad\\wedge\\quad c_{0}=1\n    \\end{displaymath}                \n    Therefore, setting $a_{n}=c_{n}$, the sequence $\\vect{a}$ is found and the theorem\n    is proved.\n\n\\end{proof}\n\\quad\n\\\\\\\\\nThe above proof shows another interesting identity, which relates Catalan\nnumbers to matrix determinants\\marginpar{yet another characterization for\ncoefficient $c_{j}$ using matrix theory}:\n\\begin{displaymath}                \n    c_{n-1} = (-1)^{n-1}\n        \\left|\n        \\begin{array}{ccccc}\n            c_1 & c_2 & \\ldots & c_{n-1} & c_{n}\\\\\n            1 & c_1 & \\ldots & c_{n-2} & c_{n-1}\\\\\n            0   & 1 & \\ldots & c_{n-3} & c_{n-2}\\\\\n            \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n            0 & 0 & \\ldots & c_{1} & c_{2}\\\\\n            0 & 0 & \\ldots & 1 & c_{1}\\\\\n        \\end{array}\n        \\right|\n\\end{displaymath}                \n\nSplitting the sum in the theorem's \\ac{rhs} according to the parity of $k$, we can restate it as:\n\\begin{displaymath}                \n    c_{j-1} + \\sum_{{k \\text{ is even}}}{d_{jk}}\n        = \\sum_{{k \\text{ is odd}}}{d_{jk}}\\qquad\\text{where } k\\in\\mathbb{N}\\setminus \\lbrace0\\rbrace\n\\end{displaymath}\nSince $d_{j0}$ has no meaning by definition of the triangle, we can\nattach to it a combinatorial meaning: let $d_{j0}$ denote the\nnumber of pairs of paths that intersect for the first time at\nstep $j$, and possibly again afterwards. Therefore $d_{j0}=c_{j-1}$, so \nwe can rewrite one more time:\n\\begin{displaymath}                \n    \\sum_{{k \\text{ is even}}}{d_{jk}}\n        = \\sum_{{k \\text{ is odd}}}{d_{jk}} \\qquad\\text{where } k\\in\\mathbb{N}\n\\end{displaymath}\nThis last version supplies a combinatorial \ninterpretation in terms of \\emph{non-intersecting} pairs of paths:\n\\begin{quote}\n    \\marginpar{a combinatorial interpretation of \\citeauthor{eplett:1979}'s result}\n    consider the set $S$ of pairs of paths of length $j$. Then,\n    the subset of $S$ composed of \\emph{non-intersecting} pairs of paths \n    with \\emph{even} distance, augmented by the subset of $S$ of \n    pairs of paths that \\emph{intersect} for the first time at step $j$,\n    equals, in number, the subset of $S$ composed of pairs of paths \n    with \\emph{odd} distance\n\\end{quote}\n\\begin{proof}\n    Consider a pair of paths $(\\vect{\\rho},\\vect{\\sigma})$ of length $j$ with\n    \\emph{even} distance $k$.  If a pair of steps that \\emph{do not\n    point in the same direction} is applied to $(\\vect{\\rho},\\vect{\\sigma})$,\n    then a new pair of paths $(\\vect{\\rho}^{\\prime},\\vect{\\sigma}^{\\prime})$ of\n    length $j+1$ with \\emph{odd} distance $k+1$ is built. 
This production is a\n    bijection; therefore the number of pairs of paths with \\emph{even} distance\n    equals the number with \\emph{odd} distance, under the constraint that all\n    the paths have the same length, as required.\n\\end{proof}\n\\quad\\\\\\\\\nAlthough the identity stated in \\autoref{thm:eplett:identity} can be considered\n\\marginpar{\\autoref{thm:eplett:identity} can be applied to Renewal arrays too}\nuseless for our work, it is precious for two reasons: first, it characterizes\nCatalan numbers using matrix theory and, second, it allows\n\\citeauthor{rogers:eplett:identity} to find further results about the theory of\n\\emph{Renewal arrays}, as described in \\cite{rogers:eplett:identity}.\n", "meta": {"hexsha": "e97e538d27f5e0191434334a4ea6be3168245d13", "size": 14008, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classicthesis/Chapters/back-to-the-basics/eplett.tex", "max_stars_repo_name": "massimo-nocentini/master-thesis", "max_stars_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classicthesis/Chapters/back-to-the-basics/eplett.tex", "max_issues_repo_name": "massimo-nocentini/master-thesis", "max_issues_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classicthesis/Chapters/back-to-the-basics/eplett.tex", "max_forks_repo_name": "massimo-nocentini/master-thesis", "max_forks_repo_head_hexsha": "0d82bfcc82c92512d0795f286256a19f39b9b1f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.3215339233, "max_line_length": 102, "alphanum_fraction": 0.5063535123, "num_tokens": 4872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8128673178375734, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.557585898883344}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{January 22, 2014}\n\\maketitle\n\n\\section*{prototype problem}\nlesson 2\n\nPDE is $u_t=\\alpha^2u_{xx}$ on $0<x<L,0<t<\\infty$\n\ninitial conditions $u(x,0)=?$ on $0<x<L,t=0$\n\nboundary conditions $u(0,t)=T_1$ and $u(L,t)=T_2$ where $x=0,L$ and $0\\leq t<\\infty$\n\nother types:\n\\begin{align*}\n  u_t&=\\alpha^2 u_{xx}-\\beta (u-u_0), \\beta > 0\\\\\n  u_t&=\\alpha^2 u_{xx}+f(x,t)\\\\\n  u_t&=\\alpha^2 u_{xx}-\\omega u_x\n\\end{align*}\n\n\\section*{steady state solution}\na solution independant of time. $u=U(x)$\n\nPDE becomes $0=\\alpha^2 U_{xx}$\n\\begin{align*}\n  U''(x)&=0\\\\\n  U(x)&=c_2x+c_1\\\\\n  U(0)&=T_1\\\\\n  U(L)&=T_2\\\\\n  U(x)&=T_1+c_2x\\\\\n  U(L)&=T_2=T_1+c_2L\\\\\n  c_2&=\\frac{T_2-T_1}{L}\\\\\n  U(x)&=T_1+\\frac{T_2-T_1}{L}x\n\\end{align*}\n\\section*{energy conservation stuff}\nthe temperature of the bar determines the total heat energy\n\\begin{align*}\n  E&=\\int_0^L{cu(x,t),\\mathrm{d}x}\\\\\n  c&=\\text{specific heat} \\left(\\frac{cal}{\\mathrm{d}y cm}\\right)\\\\\n  \\frac{\\mathrm{d}E}{\\mathrm{d}t}&=\\frac{\\mathrm{d}}{\\mathrm{d}t}\\int_0^L{cu,\\mathrm{d}x}=c\\int_0^L{\\frac{\\partial u}{\\partial t},\\mathrm{d}x}=c\\int_0^L{\\alpha^2 u_{xx},\\mathrm{d}x}=\\alpha^2 c(u_x(L,t)-u_x(0,t))\n\\end{align*}\nfor steady state: $\\frac{\\mathrm{d}E}{\\mathrm{d}t}=\\alpha^2 c(\\frac{T_2-T_1}{L}-\\frac{T_2-T_1}{L})=0$\n\\section*{lesson 3}\nfourier's law of heat flow: the rate of flow of heat energy at the position $x_0$ (in positive direction in bar) is equal to $-k\\frac{\\partial u}{\\partial x}(x_0,t)$\n\nsee page 22 in text\n\nrate of heat flow is $\\frac{\\text{calories/sec}}{\\text{cm}^2}=\\frac{\\text{cal}}{\\text{sec deg cm}}\\cdot\\frac{\\text{deg}}{\\text{cm}}$\n\nheat flow at $x=L$ is $\\alpha^2 c u_x(L,t)$ and heat flow at $x=0$ is $\\alpha^2 c u_2(0,t)$\n\\section*{boundary conditions}\nat a boundary, common conditions are:\n\\begin{align*}\n  u&=g(t)\\\\\n  u_x+\\lambda u&=g(t)\\\\\n  u_x&=g(t) \\text{most commonly} u_x=0 \\text{called insulated condition}\n\\end{align*}\n\\end{document}\n", "meta": {"hexsha": "a67a47bf8e9267881f952903c84f2e24e8be6ed9", "size": 2103, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "partial differential equations/pde-notes-2014-01-22.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "partial differential equations/pde-notes-2014-01-22.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "partial differential equations/pde-notes-2014-01-22.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.9264705882, "max_line_length": 211, 
"alphanum_fraction": 0.6524013314, "num_tokens": 894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6859494550081926, "lm_q2_score": 0.8128673201042492, "lm_q1q2_score": 0.5575858952194798}}
{"text": "% !TEX root = scombinatorics.tex\n\\documentclass[scombinatorics.tex]{subfiles}\n\\begin{document}\n\\chapter{Stability}\n\\label{sauer}\n\n\n\n\\def\\medrel#1{\\parbox[t]{5ex}{$\\displaystyle\\hfil #1$}}\n\\def\\ceq#1#2#3{\\parbox[t]{20ex}{$\\displaystyle #1$}\\medrel{#2}{$\\displaystyle #3$}}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{The order property}\\label{ladder}\n\nThe \\emph{chain index\\/} of $\\phi(\\U\\,;b)_{b\\in\\V}$, or of $\\phi(x\\,;z)$ when $\\U$ and $\\V$ are clear, is the maximal length (that is, $n+1$) of a chain of the form\n\n\\ceq{\\ssf{ch}\\hfill\\phi(A\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A\\,;b_n)}\n\nfor some set $A\\subseteq\\U$ and some $b_0,\\dots,b_n\\in\\V$.\nNote that we allow $\\phi(A\\,;b_0)$ to be empty.\nThis choice produces a small asymmetry below in the definition of ladder; see also Fact~\\ref{fact_stability_dual}.\n\n\\begin{example}\n  If $\\phi(\\U\\,;b)_{b\\in\\V}$ consists of just one set, the chain index is $1$.\n  If it contains two distinct sets, the  chain index is at least $2$ and it is exactly $2$ if there are no more two sets, or if all sets are disjoint.\\QED\n\\end{example}\n\nIf a maximal length does not exist, we say that $\\phi(x\\,;z)$ is \\emph{unstable,} or that it has the \\emph{order-property.} \nOtherwise we say that it is \\emph{stable.}\n\nIn place of requiring the existence of the chain in \\ssf{ch}, we could equivalently ask for a pair of tuples $a_1,\\dots,a_n\\in\\U$ and $b_0,\\dots,b_n\\in\\V$ such that\n\n\\ceq{\\ssf{ld}\\hfill\\phi(a_h\\,;b_k)}\n{\\IFF}\n{h\\le k.}\n\nWe call this pair of tuples a \\emph{ladder\\/} of length $n+1$.\nWe may also say \\emph{ladder index\\/} instead of chain index.\nSetting $A=\\{a_1,\\dots,a_n\\}$ we easily obtain a chain from a ladder, the converse is left as an easy exercise for the reader.\n\n\\begin{exercise}\n  Let $\\phi(x\\,;z)$ have chain index $n+1$ or more. \n  Let $A\\subseteq\\U$ be a minimal set such that a chain as in \\ssf{ch} obtains for some $b_0,\\dots,b_n\\in\\V$.\n  Prove that there is a ladder $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$ such that $A=\\{a_1,\\dots,a_n\\}$ and conclude that $\\phi(x\\,;z)$ has ladder index $n+1$.\\QED\n\\end{exercise}\n\nThe following facts are obvious but worth noting.\n\n\\begin{fact}\\label{fact_stability_dual}\n  Let $\\phi(x\\,;z)$ have ladder index $n+1$. \n  Then $\\phi(x\\,;z)^{\\rm op}$ has ladder index $\\ge n$.\\QED\n\\end{fact}\n \n\\begin{fact}\\label{fact_stability_neg}\n  Let $\\phi(x\\,;z)$ have ladder index $n$. 
\n  Then $\\neg\\phi(x\\,;z)$ has ladder index $n$.\\QED\n\\end{fact}\n\n\\begin{proof}\n  If for all $h\\in(n]$ and $k\\in[n]$ \n\n  \\ceq{\\hfill\\neg\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k}\n\n  then $a'_h=a_{n+1-h}$ and $b'_k=b_{n-k}$ satisfy\n\n  \\ceq{\\hfill\\phi(a'_h\\,;b'_k)}\n  {\\IFF}\n  {\\phi(a_{n+1-h}\\,;b_{n-k})}\n\n  \\ceq{}\n  {\\IFF}\n  {n+1-h>n-k}\n\n  \\ceq{}\n  {\\IFF}\n  {h\\le k}\n\\end{proof}\n \nThe following definition is connected with those above, though in a less evident manner.\nWe write ${}^n2$ for the set of binary sequences of length $n$ or, more precisely, the set of functions $s:[n)\\to[2)$.\nWe write $s_h$ for the value of $s$ at $h$, and $s{\\restriction} h$ for the restriction of $s$ to $[h)$.\nWe define ${}^{<n}2=\\big\\{r\\, :\\, r\\in {}^h2,\\ h\\in[n)\\big\\}$.\n\nA \\emph{branching tree\\/} of height $n$ for the formula $\\phi(x\\,;z)$ is a function \n\n\\ceq{\\hfill\\bar{a}\\ :\\ {}^{<n}2}{\\to}{\\U}\n\\nopagebreak[4]\\par\n\\ceq{\\hfill r}{\\mapsto}{a_r},\n\nwhich we may also present by writing $\\bar a=\\<a_r:r\\in{}^{<n}2\\>$, such that\n\n\\ceq{\\ssf{2r}\\hfill\\0}\n{\\neq}\n{\\bigcap_{h=0}^{n-1}\\neg^{1-s_h}\\,\\phi(a_{s\\restriction h}\\,;\\V)}\\hfill for all $s\\in {}^n2$,\n\nwhere $\\neg^i$, for $i$ a nonnegative integer, denotes a negation symbol repeated $i$ times.\nIn other words \\ssf{2r} requires the existence of some $\\<b_s:s\\in{}^n2\\>$ such that\n\n\\ceq{\\hfill\\phi(a_{s\\restriction h}\\,;b_s)}\n{\\IFF}\n{s_h=1}\\hfill for all pairs $s\\in {}^n2$ and $h\\in[n)$\n\nor, with slightly different notation,\n\n\\ceq{\\hfill\\phi(a_r\\,;b_s)}\n{\\IFF}\n{r^\\frown1\\subseteq s}\\hfill for all pairs $r\\subset s\\in {}^n2$.\n\nIt helps to represent a branching tree as follows.\nFor definiteness, fix $n=3$.\nConsider a full binary tree of height $n+1$ and assign to each internal node (different from root and leaves) a formula as depicted below.\nThen \\ssf{2r} requires that all formulas in each branch $s\\in2^n$ are satisfied by some $b_s\\in\\V$.\n\\medskip\n\n% Set the overall layout of the tree\n\\tikzstyle{level 1}=[level distance=3.5cm, sibling distance=2.5cm]\n\\tikzstyle{level 2}=[level distance=3.5cm, sibling distance=1.2cm]\n\\tikzstyle{level 3}=[level distance=2.5cm, sibling distance=0.5cm]\n\\tikzstyle{level 4}=[level distance=0.5cm, sibling distance=0.5cm]\n\n% Define styles for bags and leafs\n\\tikzstyle{bag0} = [text width=2.5ex, align=left]\n\\tikzstyle{bag} = [text width=7.5ex, align=right]\n%\\tikzstyle{end} = [circle, minimum width=3pt,fill, inner sep=0pt]\n\n\\begin{tikzpicture}[grow=right]\n\\node[bag0] {{$\\top$}}\n    child {\n        node[bag] {$\\phi(a_{\\0};z)$}      \n            child {\n                node[bag] {$\\phi(a_1;z)$}\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{11};z)$\\rlap{ ----- $b_{111}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize$\\llap{$\\neg$}\\phi(a_{11};z)$\\rlap{ ----- $b_{110}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n            child {\n                node[bag] {\\llap{$\\neg$}$\\phi(a_1;z)$}\n                edge from parent\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{10};z)$\\rlap{ ----- $b_{101}$}}\n                       edge from parent\n                    }    \n                    child {\n                     
  node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{10};z)$\\rlap{ ----- $b_{100}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n       edge from parent \n    }\n    child {\n        node[bag] {\\llap{$\\neg$}$\\phi(a_{\\0};z)$}         \n            child {\n                node[bag] {$\\phi(a_0;z)$}\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{01};z)$\\rlap{ ----- $b_{011}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{01};z)$\\rlap{ ----- $b_{010}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            }\n            child {\n                node[bag] {\\llap{$\\neg$}$\\phi(a_0;z)$}\n                edge from parent\n                    child {\n                       node[bag] {\\footnotesize$\\phi(a_{00};z)$\\rlap{ ----- $b_{001}$}}\n                       edge from parent\n                    }    \n                    child {\n                       node[bag] {\\footnotesize\\llap{$\\neg$}$\\phi(a_{00};z)$\\rlap{ ----- $b_{000}$}}\n                       edge from parent\n                    }  \n                 edge from parent\n            } \n        edge from parent\n    };\n\\end{tikzpicture}\n\\smallskip\n\nThe Shelah \\emph{2-rank\\/} of $\\phi(x\\,;z)$ is the maximal height of a branching tree for $\\phi(x\\,;z)$.\nIf such a maximal integer does not exist, we say that the 2-rank is infinite.\n\n\\begin{example}\n  If $\\phi(\\U\\,;b)_{b\\in\\V}$ consists of just one set, the only branching tree for $\\phi(x\\,;z)$ is the empty tree.\n  Therefore the 2-rank is $0$.\n  If there are at least two distinct definable sets, then we can always find $a_\\0, b_0,b_1$ such that $\\phi(a_\\0,b_0)\\niff\\phi(a_\\0,b_1)$ hence the 2-rank is at least $1$.\\QED  \n\\end{example}\n\nA branching tree $\\bar a'=\\<a'_r:r\\in{}^{<m}2\\>$ is a \\emph{subtree\\/} of $\\bar a$ if there is an $\\subseteq$-preserving map $f:{}^{<m}2\\to{}^{<n}2$ such that $a'_r=a_{fr}$.\n\nThe theorem below needs the following Ramsey-like lemma.\n\n% \\begin{lemma}\\label{lem_Ramsey_on_trees}\n%   Let $\\bar a=\\<a_r\\, :\\, r\\in{}^{<2m}2\\>$ be a branching tree for $\\phi(x\\,;z)$.\n%   Let $k\\in[2m]$ be given.\n%   Then, for every $2$-coloring of $\\range(\\bar a)$, there is a branching tree $\\bar a'=\\<a'_r\\, :\\, r\\in{}^m2\\>$ such that  $\\range(\\bar a')$ is a monochromatic subset of $\\range(\\bar a)$.\\QED\n% \\end{lemma}\n\n% As it happens, a more general lemma is easier to prove.\n\n\\begin{lemma}\\label{lem_Reamsey_tree}\n  Let $\\bar a=\\<a_r\\, :\\, r\\in{}^{<n}2\\>$ be a branching tree for $\\phi(x\\,;z)$.\n  Let $k\\in[n]$ be given.\n  Then, for every red-blue coloring of $\\range(\\bar a)$,  there is a monochromatic branching subtree, say $\\bar a'=\\<a'_r\\, :\\, r\\in{}^{n'}2\\>$, and either all nodes of $\\bar a'$ are red and $n'=k$, or they are all blue and $n'=n-k$.\n\\end{lemma}\n\n\\begin{proof}\n  Induction on $n$.\n  First, note that the lemma is trivial when $k=0$ or $k=n$.\n  In particular the lemma holds for $n=1$.\n\n  Assume the lemma true for $n$ and prove it for $n+1$.\n  Let  $\\bar a=\\<a_r\\, :\\, r\\in{}^{n+1}2\\>$ be given.\n  Fix some $k\\in[n+1]$. 
\n  We want a branching tree $\\bar a'$ that is either red of height $k$ or blue of height $n+1-k$.\n  \n  If we discard the trivial cases, we can assume $k\\in(n]$.\n\n  For $i=0,1$ we define $\\bar a_i=\\<a_{i^\\frown r}\\;:\\;r\\in {}^{<n}2\\>$.\n\n  First suppose that $a_\\0$ is blue.\n  If, for either $i=0$ or $i=1$, there is a red branching subtree $\\bar a'_i$ of $\\bar a_i$ of height $k$, we are done.\n  Otherwise, for both $i=0,1$ there is a blue branching tree $\\bar a'_i$ of height $n-k$.\n  Then we graft these two trees on $a_\\0$, which is also blue, and obtain the required blue tree of height $n-k+1$.\n  \n  Now suppose that $a_\\0$ is red.\n  If, for either $i=0$ or $i=1$, there is a blue branching tree $\\bar a'_i\\subseteq \\bar a_i$ of height $n-(k-1)$, we are done.\n  Otherwise, we graft on $a_\\0$ two red trees of height $k-1$ to obtain a red tree of height $k$.  \n\\end{proof}\n\nWe are now ready to characterize stability via the 2-rank.\nThe proof is based on Hodges~\\cite{hodges}.\nIt is a direct proof which yields an explicit bound on the 2-rank given the ladder index (it also yields the converse, but this is easy).\nThis bound may be far from optimal.\n\nShelah was the first to prove the equivalence \\ssf{1}$\\IFF$\\ssf{2} below.\nHis proof is model theoretic and does not give explicit bounds.\nHowever, it introduces some deep insight into stable formulas that we will present in the next section.\n\n\\begin{theorem}\\label{thm_hodges}\n  The following are equivalent\n  \\begin{itemize}\n    \\item[1.] $\\phi(x\\,;z)$ is stable;\n    \\item[2.] $\\phi(x\\,;z)$ has finite 2-rank.\n  \\end{itemize}\n  Precisely, if $n_{\\rm ld}$ and $n_{\\rm 2r}$ are the ladder index and the 2-rank, respectively, then \n  \n  \\ceq{\\hfill n_{\\rm ld}}{<}{2^{n_{\\rm 2r}+1};}\n\n  \\ceq{\\hfill n_{\\rm 2r}}{<}{2^{n_{\\rm ld}+1}-2.}\n\\end{theorem}\n\n\\begin{proof}[Proof (\\kern.1ex\\ssf{2}\\kern.3ex\\boldmath$\\IMP$\\ssf{1})]\n  % \\def\\ceq#1#2#3{\\parbox[t]{29ex}{$\\displaystyle #1$}\\medrel{#2}{$\\displaystyle #3$}}\n  % \\ssf{1}$\\IMP$\\ssf{2} \n  % We prove the contrapositive.\n  % If there is a binary tree of height $n$ then there is a ladder of length $n$.\n  % This also proves the first inequality of the proposition.\n\n  % Let $\\{a_s\\; :\\; s\\in2^{<n}\\}$ and $\\{b_s\\; :\\; s\\in2^{n}\\}$ satisfy \\ssf{2r}.\n  % Apply \\ssf{2r} to sequences of the form $1^k\\,0^{n-k}$ for $0\\le k\\le n$.\n\n  % % Define $a'_1,\\dots,a'_n$ and $b'_0,\\dots,b'_n$, where $a'_h=a_{1^{h-1}}$ and $b'_k=b_{1^k0^{n-k}}$.\n  % % By \\ssf{2r} we have\n\n  % \\ceq{\\hfill\\phi(a_{1^k\\,0^{n-k}\\,\\restriction\\, h-1}\\;;\\,b_{1^k\\,0^{n-k}})}\n  % {\\IFF}\n  % {\\big(1^k0^{n-k}\\big)_{h-1}=1}\n\n  % \\ceq{}\n  % {\\IFF}\n  % {h\\le k}\n\n  \n  We prove the contrapositive.\n  We show that if there is a ladder of length $m=2^n$, say $a_1,\\dots,a_{m-1}$ and $b_0,\\dots,b_{m-1}$, then there is a branching tree $\\bar a'$ of height $n$.\n  This also proves the first inequality above.\n  In fact, if $n_{\\rm ld}\\ge 2^{n_{\\rm 2r}+1}$, there would exist a branching tree of height $n_{\\rm 2r}+1$, which is a contradiction.\n  \n  The branching tree $\\bar a'=\\<a'_r\\, :\\, r\\in{}^{<n}2\\>$ is defined as follows\n\n  \\quad $a'_r=a_h$\\quad  where $h$ is obtained reading $r^\\frown1^\\frown0^{n-|r|-1}$ as an $n$-digit binary number.\n\n  To verify \\ssf{2r} we define for $s\\in{}^n2$ \n  \n  \\quad $b'_s=b_k$\\quad  where $k$ is obtained reading $s$ as an $n$-digit binary number.\n\n  Then it is easy to verify that for all 
pairs $r\\subset s\\in{}^n2$\n\n  \\ceq{\\hfill\\phi(a'_r\\,;b'_s)}\n  {\\IFF}\n  {\\phi(a_h\\,;b_k)}\\hfill where $h$ and $k$ are like above\n\n  \\ceq{}\n  {\\IFF}\n  {h\\le k}\n\n  \\ceq{}\n  {\\IFF}\n  {r^\\frown1^\\frown0^{n-|r|-1}\\ \\le\\ s}\\hfill  as $n$-digit binary numbers\n\n  \\ceq{}\n  {\\IFF}\n  {r^\\frown1\\ \\subseteq\\ s}\n\n  \\textbf{(\\ssf{1}\\kern.2ex\\boldmath$\\IMP$\\ssf{2}\\kern.1ex)}\\ \n  We prove the contrapositive.\n  We claim that if there is a branching tree $\\bar a$ of height $2^n-2$ then there is a ladder $a_1,\\dots,a_{n-1}$ and $b_0,\\dots,b_{n-1}$, of length $n$, such that $a_1,\\dots,a_{n-1} \\in \\range(\\bar a)$ and $b_0,\\dots,b_{n-1} \\in \\{b_s:s\\in{}^{2^n-2}2\\}$.\n  This yields also the second inequality of the theorem.\n  In fact, if $n_{\\rm 2r}\\ge 2^{n_{\\rm ld}+1}-2$, there would exist a ladder of length $n_{\\rm ld}+1$ which is a contradiction.\n\n  As $n_{\\rm ld}\\ge 1$, we start the induction from $n=2$.\n  In this case the claim is witnessed by ladder $a_\\0$ and $b_0,b_1$.\n  Now we assume the claim is true for $n$ and prove it for $n+1$.\n\n  Let $\\<a_r\\, :\\, r\\in{}^{<\\, 2m+2}2\\>$, where $m=2^n-2$, be a branching tree of height $2^{n+1}-2$.\n  To each $b\\in\\{b_s:s\\in{}^{2m+2 }2\\}$ we associate a red-blue coloring of $\\range(\\bar a)$ as follows.\n  A node $a\\in\\range(\\bar a)$ is colored\n  \n  \\qquad\\qquad \\llap{red}\\qquad if $\\phi(a\\,;b)$ holds;\n  \n  \\qquad\\qquad \\llap{blue}\\qquad otherwise, that is, $\\neg\\phi(a\\,;b)$.\n  \n  We consider two cases that are exhaustive by Lemma~\\ref{lem_Reamsey_tree}.\n  Note that we are applying the lemma only to the subtree $\\<a_{1^\\frown r}:r\\in{}^{2m+1}2\\>$.\n\n  Case 1: for some $b$ there is a red subtree of $\\<a_{1^\\frown r}:r\\in{}^{2m+1}2\\>$ of height $m+1$.\n  Let $\\bar a'$ be this red tree and consider its subtree $\\<a'_{0^\\frown r}\\,:\\,r\\in {}^{<\\,m}2\\>$.\n  By induction hypothesis, there are $A\\subseteq\\{a'_{0^\\frown r}\\,:\\,r\\in {}^{<\\,m}2\\}$ and $b_0,\\dots,b_{n-1}$ such that\n\n  \\ceq{(1)\\hfill\\phi(A\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A\\,;b_{n-1})}\n\n  Let $A'=A\\cup\\{a'_\\0\\}$ then\n\n  \\ceq{(2)\\hfill\\phi(A'\\,;b_0)\\ \\subset}{\\dots}{\\subset\\ \\phi(A'\\,;b_{n-1})}\n\n  In fact, as $b_0,\\dots,b_{n-1}\\in\\neg\\phi(a'_\\0\\,;\\V)$, this is the same chain as (1).\n  Therefore, if we extend the chain on the right with $\\phi(A'\\,;b)=A'$, we obtain the required chain of length $n+1$.\n  \n  Case 2: for every $b$ there is a blue subtree of $\\<a_{1^\\frown r}:r\\in{}^{2m+1}2\\>$ of height $m$.\n  Pick any $b\\in\\{b_s:s\\in{}^{2m+2}2\\}$ such that $\\neg\\phi(a_\\0, b)$ and let $\\bar a'$ the corresponding blue subtree.\n  Apply the induction hypothesis to obtain $A\\subseteq\\range(\\bar a')$ and $b_0,\\dots,b_{n-1}$ such that (1).\n  We claim that (2) above holds with $A'=A\\cup\\{a_\\0\\}$.\n  In fact, $b_0,\\dots,b_{n-1}\\in\\phi(a_\\0\\,;\\V)$ so (2) is the chain in (1) with all sets augmented by $a_\\0$.\n  We can extend the chain on the left with $\\phi(A'\\,;b)=\\0$ and obtain the required chain of length $n+1$.\n\\end{proof}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Approximable sets}\n\nThe notion of approximable set is a combinatorial counterpart of the model theoretical notion of externally definable set.\n\n\\begin{definition}\\label{def_approx}\n  We say that 
$\\Aa\\subseteq\\U$ is \\emph{approximable\\/} if for every finite set $A\\subseteq\\U$ there is a $b\\in\\V$ such that $\\phi(A\\,;b)=\\Aa\\cap A$.\n  If we also have that $\\phi(\\U\\,;b)\\subseteq\\Aa$, then we say that $\\Aa$ is approximable \\emph{from below.}\\QED\n\\end{definition}\n\nThe following is immediate.\n\n\\begin{fact}\n  The following are equivalent for every $\\Aa\\subseteq\\U$\n  \\begin{itemize}\n    \\item[1.] $\\Aa$ is approximable from below;\n    \\item[2.] for every finite set $A\\subseteq\\Aa$ there is a $b\\in\\V$ such that $A\\subseteq\\phi(\\U\\,;b)\\subseteq\\Aa$.\\QED\n  \\end{itemize}\n\\end{fact}\n\nTowards the main theorem of this section we prove three separate lemmas.\n\n\\begin{lemma}\n  Let $\\phi(x\\,;z)$ be a stable formula.\n  Every set $\\Aa\\subseteq\\U$ approximable by $\\phi(x\\,;z)$ is approximable from below by the formula\n\n  \\ceq{\\hfill \\psi(x\\,;z_0,\\dots,z_n)}{=}{\\bigwedge_{i=0}^n\\phi(x\\,;z_i)}\n  \n  where $n+1$ is the ladder index of $\\phi(x\\,;z)$.\n\\end{lemma}\n\n\\begin{proof}\n  Let $A\\subseteq\\Aa$ be finite.\n  We prove that $A\\subseteq\\psi(\\U\\,;b_0,\\dots,b_n)\\subseteq\\Aa$ for some $b_0,\\dots,b_n$.\n  To obtain the $b_k$, we construct a ladder inductively.\n  Pick any $b_0$ such that $A\\subseteq\\phi(\\U\\,;b_0)$, which we can do by approximability.\n  Now, suppose that $a_1,\\dots,a_k$ and $b_0,\\dots,b_k$ have been defined.\n  If \n\n  \\ceq{\\hfill A}{\\subseteq}{\\bigcap_{i=0}^k\\phi(\\U\\,;b_i)\\medrel{\\subseteq}\\Aa}\n\n  we set $b_{k+1}=\\dots=b_n=b_k$ and stop, as we already have the required parameters.\n  Otherwise pick any element\n\n  \\ceq{\\hfill a_{k+1}}{\\in}{\\bigcap_{i=0}^k\\phi(\\U\\,;b_i)\\sm\\Aa}\n\n  and some $b_{k+1}$ such that\n\n  \\ceq{\\hfill A}{\\subseteq}{\\phi(\\U\\,;b_{k+1})\\medrel{\\subseteq}\\U\\sm\\{a_1,\\dots,a_{k+1}\\}}.\n\n  Such a parameter $b_{k+1}$ exists because $\\Aa$ is approximable (apply Definition~\\ref{def_approx} with $A\\cup\\{a_1,\\dots,a_{k+1}\\}$ for $A$).\n  Note that at stage $n$ we have constructed a ladder of length $n$ for $\\neg\\phi(x\\,;z)$.\n  In fact for all $h\\in(n]$ and $k\\in[n]$ we have\n  \n  \\ceq{\\hfill\\neg\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k.}\n  \n  As $\\phi(x\\,;z)$ has ladder index $n+1$, by Fact~\\ref{fact_stability_neg} it is not possible to further prolong this chain, hence\n\n  \\ceq{\\hfill A}{\\subseteq}{\\bigcap_{i=0}^n\\phi(\\U\\,;b_i)\\medrel{\\subseteq}\\Aa}\n\n  as required.\n\\end{proof}\n\nWe prove that the formula $\\psi(x\\,;z_0,\\dots,z_n)$ in the lemma above is itself stable, though with a larger ladder index.\n  \n\\begin{lemma}\n  Let $\\phi_i(x\\,;z)$, where $i=1,\\dots,m$, be formulas with ladder index $n_i$. 
Let\n  \n  \\ceq{\\hfill \\phi(x\\,;z)}{=}{\\bigwedge_{i=1}^m\\phi_i(x\\,;z)}\n  \n  Then $\\phi(x\\,;z)$ has ladder index $< R(n_1+1,\\dots, n_m+1)$, the Ramsey number for $m$-colorings.\n\\end{lemma}\n  \n\\begin{proof} \n  Suppose for a contradiction that there is a ladder $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$, where $n=R(n_1+1,\\dots, n_m+1)-1$.\n  Let $C_i$ contain the pairs $\\{h,k\\}$ such that $\\neg\\phi_i(a_h\\,;b_k)$ and $0\\le k<h\\le n$.\n  Then from \\ssf{ld} we obtain \n  \n  \\ceq{\\hfill\\bigcup_{i=1}^mC_i}{=}{\\binom{[n]}{2}}\n  \n  By the definition of $n$, for some $i\\in[m]$, there is a set $H$ of cardinality $n_i+1$ such that $H^{(2)}\\subseteq C_i$.\n  Assume $i{=}1$ for definiteness.\n\n  Write $a'_1,\\dots,a'_{n_1}$ and $b'_0,\\dots,b'_{n_1}$ for the tuples obtained by restricting $a_1,\\dots,a_n$ and $b_0,\\dots,b_n$ to the indexes in $H$.\n  These tuples witness that $\\phi_1(x\\,;z)$ has ladder index at least $n_1+1$, which contradicts the assumption of the lemma.\n\\end{proof}\n\nTo apply this lemma to the formula $\\psi(x\\,;z_0,\\dots,z_n)$ in the lemma above, let $z=z_0,\\dots,z_n$ and let $\\phi_i(x\\,;z)=\\phi(x\\,;z_i)$.\nThat is, $\\phi_i(x\\,;z)$ defines a relation between $\\U$ and $\\V^{n+1}$ that depends only on the $i$-th coordinate.\nThe ladder index of every $\\phi_i(x\\,;z)$ is the same as that of $\\phi(x\\,;z_0)$ as a relation between $\\U$ and $\\V$.\n\n\n\\begin{lemma}\n  If $\\Aa$ is approximated from below by a stable formula $\\phi(x\\,;z)$ then for some $b_0,\\dots,b_n\\in\\V$ \n  \\\\[-1ex]\n  \\ceq{\\hfill \\Aa}{=}{\\bigcup^n_{i=0}\\phi(\\U\\,;b_i)}\n  \n  where $n+1$ is the ladder index of $\\phi(x\\,;z)$. \n\\end{lemma}\n  \n\\begin{proof}\n  Let $A\\subseteq\\U$ be finite.\n  To obtain the $b_k$, we construct a ladder by stages.\n  Pick any $b_0$ such that $\\phi(\\U\\,;b_0)\\subseteq\\Aa$.\n  Now, suppose that $a_1,\\dots,a_k$ and $b_0,\\dots,b_k$ have been defined.\n  If \n  \\\\[-1ex]\n  \\ceq{\\hfill\\Aa}{\\subseteq}{\\bigcup_{i=0}^k\\phi(\\U\\,;b_i)}\n\n  we set $b_{k+1}=\\dots=b_n=b_k$ and stop.\n  As the construction guarantees the converse inclusion, we have the required parameters.\n  Otherwise pick any element\n\n  \\ceq{\\hfill a_{k+1}}{\\in}{\\Aa\\sm\\bigcup_{i=0}^k\\phi(\\U\\,;b_i)}\n\n  and some $b_{k+1}$ such that\n\n  \\ceq{\\hfill \\{a_1,\\dots,a_{k+1}\\}}{\\subseteq}{\\phi(\\U\\,;b_{k+1})\\medrel{\\subseteq}\\Aa}.\n\n  Such a parameter $b_{k+1}$ exists because $\\Aa$ is approximable from below.\n  Note that at stage $n$ we have constructed a ladder of length $n$ for $\\phi(x\\,;z)$.\n  In fact, for all $h\\in(n]$ and $k\\in[n]$ we have\n  \n  \\ceq{\\hfill\\phi(a_h\\,;b_k)}\n  {\\IFF}\n  {h\\le k.}\n  \n  As $\\phi(x\\,;z)$ has ladder index $n+1$, it is not possible to prolong this chain, hence\n\n  \\ceq{\\hfill\\Aa}{=}{\\bigcup_{i=0}^n\\phi(\\U\\,;b_i)}\n\n  as required.\n\\end{proof}\n  \nThe main theorem of this section is an immediate corollary of the three lemmas above.\n\n\\begin{theorem}\\label{thm_stable_definability}\n  Let $\\phi(x\\,;z)$ be stable.\n  Then there are $m, n$ such that every set $\\Aa\\subseteq\\U$ approximable by $\\phi(x\\,;z)$ is definable by $\\psi(x\\,;\\bar b)$, where\n\n  \\ceq{\\hfill\\psi(x\\,;\\bar z)}{=}{\\bigvee_{j=0}^m\\bigwedge_{i=0}^n\\phi(x\\,;z_{i,j})}\n  \n  and $\\bar b\\in\\V^{n\\times m}$ is some $n{\\times}m$-tuple of parameters.\\QED\n\\end{theorem}\n\nNote that the numbers $m,n$ in the theorem above do not depend on the set $\\Aa$ approximated by $\\phi(x\\,;z)$. 
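Indeed, inspecting the proofs of the three lemmas yields explicit (if generous) bounds: the first lemma lets us take $n+1$ to be the ladder index of $\\phi(x\\,;z)$, the third lemma bounds $m+1$ by the ladder index of the conjunction $\\bigwedge_{i=0}^n\\phi(x\\,;z_i)$, and the second lemma bounds the latter by $R(n+2,\\dots,n+2)$, the Ramsey number for $(n+1)$-colorings. 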
In particular, $m$ and $n$ depend only on the ladder index of $\\phi(x\\,;z)$.\n\nWe conclude this section with a sketch of an argument that shows that stable formulas have finite 2-rank, that is, \\ssf{1}$\\IMP$\\ssf{2} in Theorem~\\ref{thm_hodges}.\n\nAs a matter of fact, below we prove an apparently weaker claim:\n\\begin{itemize}\n  \\item[\\#] if there is a branching tree of height $\\omega$, say $\\{a_s:s\\in2^{<\\omega}\\}$, then the ladder index is infinite.\n\\end{itemize}\nIn general, infinite 2-rank does not imply the existence of a branching tree of height $\\omega$.\nNevertheless, by a standard model-theoretic compactness argument, \\# suffices to obtain \\ssf{1}$\\IMP$\\ssf{2} in Theorem~\\ref{thm_hodges}.\nModel theory is beyond the scope of these notes, hence here we only present the argument that proves \\#.\n\nFirst note that Theorem~\\ref{thm_stable_definability} remains true when $\\U$ is replaced by a subset $\\U'\\subseteq\\U$.\nIt is also clear that the formula $\\psi(x\\,;\\bar z)$ that proves the theorem for $\\U$ also works for any $\\U'\\subseteq\\U$ (in fact, the ladder index does not decrease when moving to a subset).\n\nFor $s\\in2^\\omega$, let $\\U_s=\\{a_{s\\restriction n}:n<\\omega\\}\\subseteq\\U$.\nLet $\\Aa_s=\\{a_{s\\restriction n}:n<\\omega,\\ s_n=1\\}$.\nThen $\\Aa_s\\subseteq\\U_s$ is approximable with respect to the collection $\\phi(\\U_s\\,;b)_{b\\in\\V}$.\nTherefore, from the theorem above, we obtain $\\Aa_s=\\psi(\\U_s\\,;\\bar b_s)$ for some  $\\bar b_s\\in\\V^{n\\times m}$, where the formula $\\psi(x\\,;\\bar z)$ does not depend on $s$.\n\nFor $s\\neq s'$, let $n$ be least such that $s_n\\neq s'_n$.\nThen $a_{s\\restriction n}=a_{s'\\restriction n}$ belongs to $\\U_s\\cap\\U_{s'}$ and to $\\Aa_s\\simdiff\\Aa_{s'}$.\nTherefore $\\bar b_s\\neq\\bar b_{s'}$, hence there is a continuum of such tuples $\\bar b_s$.\nOn the other hand, without loss of generality we can assume that $\\V$ is a countable set. \nA contradiction. \n\n% Assume the 2-rank is infinite and reason for a contradiction.\n% Assume for the moment that the rank is $\\omega$, in the sense that there is an infinite branching tree $\\<a_r:r\\in2^{<\\omega}\\>$. 
\n% Let $\\<b_s:s\\in2^{\\omega}\\>$ be witnesses of \\ssf{2r}.\n% Let $Q\\subseteq\\{b_s:s\\in2^{\\omega}\\}$ the set of sequences that are eventually $0$.\n% Note that in the proof of Theorem~\\ref{thm_stable_definability} we can require that $\\bar b\\in Q^{n\\times m}$.\n% But there are \n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\\section{Stable Erd\\H{o}s-Hajnal}\n\n\\end{document}\n", "meta": {"hexsha": "257858a99593137f6fc7375e69e931e4abb8f9f5", "size": 24108, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "stable.tex", "max_stars_repo_name": "domenicozambella/scombinatorics", "max_stars_repo_head_hexsha": "d0a03c1472a4b345d713e082ebdd0d1361068b88", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "stable.tex", "max_issues_repo_name": "domenicozambella/scombinatorics", "max_issues_repo_head_hexsha": "d0a03c1472a4b345d713e082ebdd0d1361068b88", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-27T12:37:11.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-27T12:37:11.000Z", "max_forks_repo_path": "stable.tex", "max_forks_repo_name": "domenicozambella/scombinatorics", "max_forks_repo_head_hexsha": "d0a03c1472a4b345d713e082ebdd0d1361068b88", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-19T08:23:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-19T08:23:27.000Z", "avg_line_length": 43.2818671454, "max_line_length": 256, "alphanum_fraction": 0.6087605774, "num_tokens": 8630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5574834084186844}}
{"text": "\\newcommand\\lbxmonad{\\texttt{xmonad}\\xspace}\n\n\\section{Functional Correctness Invariants}\\label{sec:structures}\n\nSo far, we have considered a variety of general, application independent\ncorrectness criteria. Next, let us see how we can use \\toolname to specify \nand statically verify critical, application specific correctness properties,\nusing two illustrative case studies: red-black trees and the stack-set data\nstructure introduced in the \\lbxmonad system.\n\n\\subsection{Red-Black Trees}\\label{sec:redblack}\n\nRed-Black trees have several non-trivial invariants that are ideal for \nillustrating the effectiveness of refinement types and contrasting with\nexisting approaches based on GADTs~\\cite{Kahrs01}.\n%\nThe structure can be defined via the following Haskell type:\n%\n\\begin{code}\n  data Col    = R | B\n  data Tree a = Leaf \n              | Node Col a (Tree a) (Tree a)\n\\end{code}\n%\nHowever, a @Tree@ is a valid Red-Black tree only if it \nsatisfies three crucial invariants:\n%\n\\begin{itemize}\n  \\item{\\emphbf{Order:}} \n    The keys must be binary-search ordered, \\ie the key at each node must\n    lie between the keys of the left and right subtrees of the node,\n  \\item{\\emphbf{Color:}}\n    The children of every \\emph{red} @Node@ must be colored \\emph{black}, \n    where each @Leaf@ can be viewed as black,\n  \\item{\\emphbf{Height:}}\n    The number of black nodes along any path from each @Node@ to its @Leaf@s \n    must be the same.\n\\end{itemize}\n\nRed-Black trees are especially tricky as various operations create \ntrees that can temporarily violate the invariants. Thus, while \nthe above invariants can be specified with singletons and GADTs, \nencoding all the properties (and the temporary violations) results\nin a proliferation of data constructors that can somewhat obfuscate \ncorrectness. In contrast, with refinements, we can specify and verify\nthe invariants in isolation (if we wish) and can trivially compose\nthem simply by \\emph{conjoining} the refinements.\n\n\\mypara{Color Invariant}\nTo specify the color invariant, we define a \\emph{black-rooted tree} as:\n%\n\\begin{code}\n  measure isB           :: Tree a -> Prop \n    isB (Node c x l r)  = c == B\n    isB (Leaf)          = True\n\\end{code}\n%\nand then we can describe the color invariant simply as:\n%\n\\begin{code}\n  measure isRB          :: Tree a -> Prop\n    isRB (Leaf)         = True\n    isRB (Node c x l r) = isRB l && isRB r &&\n                          c = R => (isB l && isB r)\n\\end{code}\n%\nThe insertion and deletion procedures create intermediate \\emph{almost} \nred-black trees where the color invariant may be violated at the root. 
\nRather than create new data constructors we define almost red-black \ntrees with a measure that just drops the invariant at the root:\n%\n\\begin{code}\n  measure almostRB          :: Tree a -> Prop\n    almostRB (Leaf)         = True\n    almostRB (Node c x l r) = isRB l && isRB r\n\\end{code}\n\n\\mypara{Height Invariant}\nTo specify the height invariant, we define a black-height measure:\n%\n\\begin{code}\n  measure bh          :: Tree a -> Int\n    bh (Leaf)         = 0\n    bh (Node c x l r) = bh l + if c = R then 0 else 1\n\\end{code}\n%\nand we can now specify black-height balance as:\n%\n\\begin{code}\n  measure isBal          :: Tree a -> Prop\n    isBal (Leaf)         = True\n    isBal (Node c x l r) = bh l = bh r \n                         && isBal l && isBal r \n\\end{code}\n%\nNote that @bh@ only considers the left sub-tree, \nbut this is legitimate, because @isBal@ will \nensure the right subtree has the same @bh@.\n\n\\mypara{Order Invariant}\nWe refine the data definition of @Tree@ \nto encode the ordering property:\n%\n\\begin{code}\n  data Tree a\n    = Leaf\n    | Node { c   :: Col\n           , key :: a\n           , lt  :: Tree {v:a | v < key }\n           , rt  :: Tree {v:a | key < v } }\n\\end{code}\n%\n\n\\mypara{Composing Invariants}\nFinally, we can compose the invariants and define a \nRed-Black tree with the alias:\n%\n\\begin{code}\n  type RBT a = {v:Tree a | isRB v && isBal v}\n\\end{code}\n%\nAn almost Red-Black tree is the above with @isRB@ \nreplaced with @almostRB@, \\ie it does not require any \nnew types or constructors.\nIf desired, we can ignore a particular invariant \nsimply by replacing the corresponding refinement \nabove with @true@.\nGiven the above -- and suitable signatures -- \\toolname \nverifies the various insertion, deletion and rebalancing\nprocedures for a Red-Black Tree library.\n\n\\subsection{Stack Sets in XMonad}\\label{sec:xmonad}\n\n\\lbxmonad is a dynamically tiling \\texttt{X11} \nwindow manager that is written and configured in Haskell. \nThe set of windows managed by \\lbxmonad is organized into a\nhierarchy of types. At the lowest level we have a \n\\emph{set} of windows @a@ represented as a @Stack a@:\n%\n\\begin{code}\n  data Stack a = Stack { focus :: a   \n                       , up    :: [a] \n                       , down  :: [a] }\n\\end{code}\n%\nThe above is a zipper~\\cite{zipper} where @focus@ is the \n``current'' window and @up@ and @down@ are the windows ``before''\nand ``after'' it.\n%\nEach @Stack@ is wrapped inside a @Workspace@ that also has\ninformation about layout and naming:\n%\n\\begin{code}\n  data Workspace i l a = Workspace \n     { tag    :: i\n     , layout :: l\n     , stack  :: Maybe (Stack a) }\n\\end{code}\n%\nwhich is, in turn, wrapped inside a @Screen@:\n%\n\\begin{code}\n  data Screen i l a sid sd = Screen \n    { workspace    :: Workspace i l a\n    , screen       :: sid\n    , screenDetail :: sd }\n\\end{code}\n%\nThe set of all screens is represented by the top-level zipper:\n%\n\\begin{code}\n  data StackSet i l a sid sd = StackSet \n    { cur :: Screen i l a sid sd  \n    , vis :: [Screen i l a sid sd]\n    , hid :: [Workspace i l a]   \n    , flt :: M.Map a RationalRect } \n\\end{code}\n\n\n\\mypara{Key Invariant: Uniqueness of Windows}\nThe key invariant for the @StackSet@ type is that each window @a@\nshould appear at most once in a @StackSet i l a sid sd@. 
That is,\na window should \\emph{not be duplicated} across stacks or workspaces.\nInformally, we specify this invariant by defining a measure for the \n\\emph{set of elements} in a list, @Stack@, @Workspace@ and @Screen@,\nand then we use that measure to assert that the relevant sets are \ndisjoint.\n\n\\mypara{Specification: Unique Lists} To specify that the set of elements\nin a list is unique, \\ie there are no duplicates in the list we first define\na measure denoting the set using Z3's~\\citep{z3} built-in theory of sets:\n%\n\\begin{code}\n  measure elts  :: [a] -> Set a \n    elts ([])   = emp  \n    elts (x:xs) = cup (sng x) (elts xs) \n\\end{code}\n%\nNow, we can use the above to define uniqueness:\n%\n\\begin{code}\n  measure isUniq  :: [a] -> Prop \n    isUniq ([])   =  true \n    isUniq (x:xs) =  notIn x xs && isUniq xs\n\\end{code}\n%\nwhere @notIn@ is an abbreviation: \n%\n\\begin{code}\n  predicate notIn X S = not (mem X (elts S))\n\\end{code}\n\n\\mypara{Specification: Unique Stacks}\nWe can use @isUniq@ to define unique, \\ie, duplicate free,\n@Stack@s as:\n%\n\\begin{code}\n  data Stack a = Stack \n   { focus :: a   \n   , up    :: {v:[a] | Uniq1 v focus}\n   , down  :: {v:[a] | Uniq2 v focus up} }\n\\end{code}\n%\nusing the aliases\n%\n\\begin{code}\n  predicate Uniq1 V X    = isUniq V  && notIn X V\n  predicate Uniq2 V X Y  = Uniq1 V X && disjoint Y V\n  predicate disjoint X Y = cap (elts X) (elts Y) = emp\n\\end{code}\n%\n\\ie the field @up@ is a unique list of elements different \nfrom @focus@, and the field @down@ is additionally disjoint \nfrom @up@.\n\n\\mypara{Specification: Unique StackSets}\nIt is straightforward to lift the @elts@ measure to \nthe @Stack@ and the wrapper types @Workspace@ and \n@Screen@, and then correspondingly lift @isUniq@ to \n@[Screen]@ and \\hbox{@[Workspace]@.}\n%\nHaving done so, we can use those measures to refine \nthe type of @StackSet@ to stipulate that there are \nno duplicates:\n%\n\\begin{code}\n  type UniqStackSet i l a sid sd \n    = {v: StackSet i l a sid sd | NoDups v} \n\\end{code}\n%\nusing the predicate aliases\n%\n\\begin{code}\n  predicate NoDups V \n    =  disjoint3 (hid V) (cur V) (vis V) \n    && isUniq (vis V) && isUniq (hid V)\n  \n  predicate disjoint3 X Y Z \n    =  disjoint X Y && disjoint Y Z && disjoint X Z\n\\end{code}\n%\n\\toolname automatically turns the record selectors of \nrefined data types to measures that return the values \nof appropriate fields, hence @hid x@ (resp. 
@cur x@, @vis x@)\nare the values of the \\hbox{@hid@,} @cur@ and @vis@ fields of\na @StackSet@ named @x@.\n\n%%%% \\begin{code}\n%%%% data Stack a = Stack { focus :: a   \n%%%%                      , up    :: ULNEq a focus\n%%%%                      , down  :: ULNEq a focus }\n%%%% \n%%%% data StackSet i l a sid sd = StackSet \n%%%%    { lcurrent  ::  Screen i l a sid sd   \n%%%%    , lvisible  :: [Screen i l a sid sd]\n%%%%    , lhidden   :: [Workspace i l a]\n%%%%    , lfloating :: M.Map a RationalRect     \n%%%%    }\n%%%% \\end{code}\n%%%% %\n%%%% data Stack a = Stack { focus  :: !a   \n%%%%                      , up     :: [a]   \n%%%%                      , down   :: [a] } \n%%%%                                   \n%%%% data Stack a = Stack { focus :: a   \n%%%%                      , up    :: UListDif a focus\n%%%%                      , down  :: UListDif a focus }\n%%%% \n%%%% data Workspace i l a = Workspace  { tag :: !i, layout :: l, stack :: Maybe (Stack a) }\n%%%% \n%%%% data Workspace i l a = Workspace  { tag :: i, layout :: l, stack :: Maybe (UStack a) }\n%%%% \n%%%% type UStack a = {v:(Stack a) | (ListDisjoint (getUp v) (getDown v))}\n%%%% \n%%%% \n%%%% data Screen i l a sid sd = Screen { workspace     :: !(Workspace i l a)\n%%%%                                   , screen        :: !sid\n%%%%                                   , screenDetail  :: !sd }\n%%%% \n%%%% data StackSet i l a sid sd =\n%%%%     StackSet { current  :: !(Screen i l a sid sd)    -- ^ currently focused workspace\n%%%%              , visible  :: [Screen i l a sid sd]     -- ^ non-focused workspaces, visible in xinerama\n%%%%              , hidden   :: [Workspace i l a]         -- ^ workspaces not visible anywhere\n%%%%              , floating :: M.Map a RationalRect      -- ^ floating windows\n%%%%              } \n%%%% \n%%%% A workspace is just a @Stack@ of virtual workspaces \n%%%% tagged with a tag @i@ and its layout @l@\n%%%% @Workspace i l a@.\n%%%% \n%%%% To view a workspace on a physical screen one needs to \n%%%% associate the workspace with physical screen's id @sid@\n%%%% and details @sd@, \n%%%% forming the new data structure @Screen i l a sid sd@.\n%%%% \n%%%% Xinerama in X11 allows viewing multiple virtual workspaces\n%%%% simultaneously. \n%%%% %\n%%%% While only one the current one will ever be in focus (i.e. will\n%%%% receive keyboard events), other workspaces may be passively\n%%%% viewable.  \n%%%% %\n%%%% We thus need to track which virtual workspaces are\n%%%% associated (viewed) on which physical screens.  
\n%%%% %\n%%%% To keep track of\n%%%% this, \\lbxmonad's main data structure  @StackSet i l a sid sd@ \n%%%% keeps, apart from the current screen,\n%%%% separate lists of visible but non-focused\n%%%% workspaces (@Screen@) , and non-visible workspaces (@Workspace@).\n\n%%%% \\mypara{Unique Stack} \n%%%% Assume the existence of a predicate \n%%%% @ULNeq (X::a) (XS::[a])@ that ensures that \n%%%% (1)~the list @XS@ has no duplicates and\n%%%% (2)~@X@ is not an element of @XS@.\n%%%% %\n%%%% With that magical predicate we define an \\emph{almost unique}\n%%%% stack: \n%%%% \\begin{code}\n%%%% data Stack a = Stack { focus :: a   \n%%%%                      , up    :: ULNEq a focus\n%%%%                      , down  :: ULNEq a focus }\n%%%% \\end{code}\n%%%% The stack has a @focus@ element @a@ and \n%%%% and two unique lists of @a@'s @up@ and @down@ of the focus.\n%%%% \n%%%% The above Stack is almost unique, as an element \n%%%% may appear both in the @up@ and the @down@ lists.\n%%%% %\n%%%% We define a type alias for @U@nique@Stack@\n%%%% that rejects with possibility:\n%%%% \n%%%% \\begin{code}\n%%%% type UStack a = {v:(Stack a) |  (LDisjoint (up v) (down v))}\n%%%% \n%%%% predicate LDisjoint X Y = \n%%%%   (Set_emp (Set_cap (elts X) (elts Y)))\n%%%% \\end{code}\n%%%% \n%%%% \n%%%% Note that the above definitions crucially depend on set theoretic properties.\n%%%% Thus, verification of \\lbxmonad is achieved using an SMT back-end \n%%%% that supports set theory (like Z3~\\citep{z3}).\n%%%% \n%%%% We slightly modify the above measure definition\n%%%% to define @dups@, a measure that returns the \n%%%% duplicate elements of a list.\n%%%% %\n%%%% \\begin{code}\n%%%%   measure dups :: [a] -> Set a\n%%%%   dups ([])  = emp v\n%%%%   dups(x:xs) = if mem x (elts xs) \n%%%%                  then cup (sng x) (dups xs)\n%%%%                  else (dups xs)\n%%%% \\end{code}\n%%%% \n%%%% Using @dups@ we define the magical list type @ULNEq a N@ \n%%%% as a list @v@ that has no duplicates, \\ie the set @dups v@\n%%%% is empty, and @N@ does not belong to the set @elts v@.\n%%%% \n%%%% \\begin{code}\n%%%% type ULNEq a N = {v:[a] | ( (UL v) && (not (LElt N v))}\n%%%% predicate LElt N LS  = (Set_mem N (elts LS)) \n%%%% predicate UL     LS  = (Set_emp (dups LS)   )\n%%%% \\end{code}\n%%%% \n%%%% Throughout verification\n%%%% we need to establish and use the invariant \n%%%% that each Stack is Unique.\n%%%% %\n%%%% This is achieved by the following annotation\n%%%% \\begin{code}\n%%%% using (Stack a) as (UStack a)\n%%%% \\end{code}\n%%%% that allows \\toolname to \n%%%% (1)~ use the invariant:\n%%%% each time a Stack value is retrieved from the environment\n%%%% \\toolname strengthens its type with the disjointness information;\n%%%% (2)~ prove the invariant:\n%%%% each time @Stack@ data constructor is used,\n%%%% an disjoint constraint should be proved.\n%%%% Failure to prove this constraint will raise an \n%%%% ``Invariant Check'' error.\n%%%% \n%%%% \\mypara{Unique StackSet} \n%%%% Establishing Uniqueness on StackSets is a generalization of the above procedure.\n%%%% \n%%%% The definition of a @StackSet@ includes the @current@ Screen, the list of @visible@ screens,\n%%%% and the list of @hidden@ workspaces.\n%%%% \\begin{code}\n%%%% data StackSet i l a sid sd = StackSet \n%%%%    { lcurrent  ::  Screen i l a sid sd   \n%%%%    , lvisible  :: [Screen i l a sid sd]\n%%%%    , lhidden   :: [Workspace i l a]\n%%%%    , lfloating :: M.Map a RationalRect     \n%%%%    }\n%%%% \\end{code}\n%%%% %\n%%%% \\toolname 
automatically turns the record selectors of refined data types\n%%%% to measures that return the appropriate fields.\n%%%% %\n%%%% Thus we infix the refined selectors with an @l@\n%%%% to distinguish between the haskell (\\eg, @current@)\n%%%% and the logical (\\eg, @lcurrent@) selectors. \n%%%% \n%%%% To prove absence of duplicates we need to @use@\n%%%% only @StackSet@s that have no duplicates:  \n%%%% \\begin{code}\n%%%% using (StackSet i l a sid sd) \n%%%%  as  {v:StackSet i l a sid sd|(NoDuplicates v)}\n%%%% \\end{code}\n%%%% \n%%%% @NoDuplicates@ ensures that the elements of\n%%%% @hidden@, @current@, and @visible@ \n%%%% workspaces are mutually disjoint\n%%%% and that the @visible@ and @hidden@ workspaces\n%%%% have no duplicates.\n%%%% %\n%%%% \\begin{code}\n%%%% predicate NoDuplicates SS = \n%%%%     (Disjoint3  (workspacesElts (lhidden  SS)) \n%%%%                 (screenElts     (lcurrent SS)) \n%%%%                 (screensElts    (lvisible SS))) \n%%%%   &&\n%%%%     (Set_emp (screensDups    (lvisible SS))) \n%%%%   &&\n%%%%     (Set_emp (workspacesDups (lhidden  SS)))\n%%%% \\end{code}\n%%%% %\n%%%% Again we used recursively defined measures to grap \n%%%% the elements and the duplicates of the structures.\n%%%% For example, @screenElts@ returns \n%%%% the elements of the stack of the workspace of the screen, \n%%%% and is used by @screensDup@ to grap the duplicates of \n%%%% a list of Screens.\n%%%% \n\\mypara{Verification}\n\\toolname uses the above refined type to verify the key invariant,\nnamely, that no window is duplicated.\n%\n% However, the verification process, while eventually successful, \n% revealed several limitations of our approach.\n%\nThree key actions of the, eventually successful, verification process\ncan be summarized as follows:\n\\begin{itemize}\n\\item\\emph{Strengthening library functions.} \n  \\lbxmonad repeatedly concatenates the lists of a Stack. %edit to fix overful box\n  %\n  To prove that for some \\hbox{@s:Stack a@,} @(up s ++ down s)@ is a unique list,\n  the type of @(++)@ needs to capture that concatenation of two unique and\n  disjoint lists is a unique list.\n  %\n  For verification, we assumed that Prelude's @(++)@ satisfies this property.\n  %\n  But, not all arguments of @(++)@ are unique disjoint lists:\n  @\"StackSet\" ++ \"error\"@ is a trivial example that does not satisfy\n  the assumed preconditions of @(++)@ thus creating a type error.\n  % \n  Currently, \\toolname does not support intersection types, \n  thus we used an unrefined @(++.)@ variant of @(++)@ for such cases.\n     \n\\item\\emph{Restrict the functions' domain.}\n%% \\RJ{Seems like you HAVE to do this -- has nothing to do with LH?}\n  @modify@ is a @maybe@-like function that given a default value @x@,\n  a function @f@, and a StackSet @s@, applies @f@ on the @Maybe@\n  values inside @s@. 
\n  %\n\\begin{code}\nmodify :: x:{v:Maybe (Stack a) | isNothing v}\n       -> (y:Stack a -> Maybe {v:Stack a | SubElts v y})\n       -> UniqStackSet i l a s sd \n       -> UniqStackSet i l a s sd\n\\end{code}\n\t%\n        Since inside the StackSet @s@ each @y:Stack a@ could be replaced\n    with either the default value @x@ or with @f y@, we need to\n    ensure that both these alternatives will not insert duplicates.\n\t%\n\tThis imposes the curious precondition that the default\n\tvalue should be @Nothing@.\n\n\t\t\t\n\t\\item\\emph{Code inlining}\n    %\n    Given a tag @i@ and a StackSet @s@,  @view i s@ will set the current Screen \n    to the screen with tag @i@, if such a screen exists in @s@.\n    %\n    Below is the original definition for @view@ in case when a screen with tag \n    @i@ exists in visible screens\n    %\n\\begin{code}\nview :: (Eq s, Eq i) => i \n     -> StackSet i l a s sd \n     -> StackSet i l a s sd\nview i s    \n  | Just x <- find ((i==).tag.workspace) (visible s)\n  = s { current = x\n      , visible = current s \n                : deleteBy (equating screen) x (visible s) } \n\\end{code}\n    %\n    Verification of this code is difficult as we cannot suitably type @find@. \n    %\n    Instead we \\emph{inline} the call to @find@ and the field update into a \n    single recursive function @raiseIfVisible i s@ that in-place replaces @x@ \n    with the current screen.  \n\\end{itemize}\n\nFinally, \\lbxmonad comes with an extensive suite of QuickCheck properties,\nthat were formally verified in Coq~\\cite{Swierstra2012}. In future work~\\ref{chapter:conclusion},\nit would be interesting to do a similar verification with \\toolname, \nto compare the refinement types to proof-assistants.\n\n%%% TODO \\subsection{QuickCheck Properties}\n%%% TODO \n%%% TODO \\lbxmonad is tested against $113$ \\texttt{quickcheck} properties.\n%%% TODO %\n%%% TODO Of those $15$ check the uniqueness invariant \n%%% TODO and the rest $113$ check various functional properties.\n%%% TODO %\n%%% TODO We started the endeavour of verifying these properties with \\toolname.\n%%% TODO %\n%%% TODO We looked at a sample of $15$ properties to conclude that\n%%% TODO which we categorized as follows:\n%%% TODO \\begin{itemize}\n%%% TODO \\item\\emph{Easy to be proved.}\n%%% TODO Consider the \\texttt{quickcheck} property that checks that @view@ing \n%%% TODO a @StackSet a@ is idempotent:\n%%% TODO \\begin{code}\n%%% TODO prop_view_idem (x :: T) (i :: NonNegative Int) \n%%% TODO   = i `tagMember` x ==> view i (view i x) == (view i x)\n%%% TODO \\end{code}\n%%% TODO %\n%%% TODO The above property directly translated to a haskell function\n%%% TODO \\begin{code}\n%%% TODO type Valid     = {v:Bool | (Prop v) }\n%%% TODO \n%%% TODO prop_view_idem :: StackSet i l a sid sd -> i -> Valid\n%%% TODO prop_view_idem x i \n%%% TODO   | i `tagMember` x = view i (view i) == v\n%%% TODO   | otherwise       = True\n%%% TODO \\end{code}\n%%% TODO %\n%%% TODO By the above type signature,\n%%% TODO \\ie by the result type @Valid@, \n%%% TODO we specify that the function should always returns True.\n%%% TODO %\n%%% TODO When typechecking the above function,\n%%% TODO \\toolname proves that the property holds.\n%%% TODO %\n%%% TODO \\toolname is able to verify this property as the result type of @view@\n%%% TODO is strengthens with a refinement \n%%% TODO (in this case @EqTag x i => x = v@) \n%%% TODO that directly implies this property.\n%%% TODO \n%%% TODO The above is generalizing to (10/17) properties that we 
checked:\n%%% TODO strengthening the function types by refinements that \\toolname can prove\n%%% TODO is sufficient to verify these properties.\n%%% TODO \n%%% TODO \\item\\emph{Can be estimated.}\n%%% TODO In some other properties (like checking that @view@ing is reversible),\n%%% TODO two StackSets (@s1@ and @s2@) were normalized before being compared,\n%%% TODO that is their elements were first sorted.\n%%% TODO %\n%%% TODO In our logic we do not support any operation that can normalize structures in such a way.\n%%% TODO %\n%%% TODO Thus we cannot prove this exact property.\n%%% TODO %\n%%% TODO Instead we approximated it, by proving that proving that @s1@ and @s2@\n%%% TODO have the same sets of elements.\n%%% TODO %\n%%% TODO We approximated (3/17) of the properties.\n%%% TODO \n%%% TODO \\item\\emph{Their proof cannot be supported, currently.} [1]\n%%% TODO One \\texttt{quickcheck} property checks \n%%% TODO that @i@ cannot belong to an empty stackset.\n%%% TODO %\n%%% TODO We used abstract refinements to encode empty stacksets.\n%%% TODO %\n%%% TODO Proving the above property would be easy \n%%% TODO if we could mix abstract and concrete refinements in logical formulas\n%%% TODO or if \\toolname supported sum types.\n%%% TODO %\n%%% TODO Both the alternatives constitutes features that\n%%% TODO we would like to extend \\toolname with in the near future.\n%%% TODO %\n%%% TODO Still, currently we are not able to prove such kind of properties.\n%%% TODO \\item\\emph{Cannot be expressed}\n%%% TODO Other properties %, like @prop_focus_left_master@ \n%%% TODO check that the order of the windows is not affected by certain operations.\n%%% TODO %\n%%% TODO Though not infeasible we acknowledge that \\toolname \n%%% TODO is not appropriate for reasoning about order preserving\n%%% TODO and verification of such properties \n%%% TODO would require many code modifications.\n%%% TODO %\n%%% TODO (3/17) are order preserving properties.\n%%% TODO \\end{itemize}\n", "meta": {"hexsha": "f2dbc89b36ccf9b6d1515e8e22d02cda0f19d465", "size": 22331, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "text/realworldhaskell/structures.tex", "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_issues_repo_path": "text/realworldhaskell/structures.tex", "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "text/realworldhaskell/structures.tex", "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "avg_line_length": 35.959742351, "max_line_length": 106, "alphanum_fraction": 0.6435000672, "num_tokens": 6069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802423634963, "lm_q2_score": 0.7248702761768249, "lm_q1q2_score": 0.5574834076841669}}
{"text": "%!TEX root = ../main.tex\n\n%=============================================================================== \n%\n%    Chapter: vectors\n%\n%=============================================================================== \n\n\\chapter{Vectors}\n\\label{chapter:vectors}\n\nIn this chapter we'll learn how to manipulate multi-dimensional objects called vectors.\t\t\t\t\t\t\\index{vector|textbf}\nVectors are the precise way to describe directions in space.\nWe need vectors in order to describe physical quantities like forces, velocities, and accelerations.\n\nVectors are built from ordinary numbers,\nwhich form the \\emph{components} of the vector.\nYou can think of a vector as a list of numbers,\nand \\emph{vector algebra} as operations\nperformed on the numbers in the list.\nVectors can also be manipulated as geometric objects,\nrepresented by arrows in space.\nFor instance, the arrow that corresponds to the vector $\\vec{v}=(v_x,v_y)$ starts at the origin $(0,0)$\nand ends at the point $(v_x,v_y)$.\nThe word vector comes from the Latin \\emph{vehere},\nwhich means \\emph{to carry}.\nIndeed, the vector $\\vec{v}$ takes the point $(0,0)$ and carries it to the point $(v_x,v_y)$.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{figures/vectors/vector_components.pdf}\n\t\\vspace{-2mm}\n\t\\caption{\tThe vector $\\vec{v}=(3,2)$ is an arrow in the Cartesian plane.\n\t\t\tThe horizontal component of $\\vec{v}$ is $v_x=3$\n\t\t\tand the vertical component  is $v_y=2$.}\n\t\\label{fig:vector_components}\n\\end{figure}\n\n\n\n\n%!TEX root = ../main.tex\n\n%=======================================================================  vectors\n\\section{Vectors}\n\\label{sec:vectors}\n\n\tVectors are extremely useful in all areas of life.\n\tIn physics, for example, we use a vector to describe the velocity of an object.\n\tIt is not sufficient to say that the speed of a tennis ball is 200 kilometres per hour:\n\twe must also specify the direction in which the ball is moving.\n\tBoth of the two velocities \n\t\\[\n\t \\vec{v}_1 = (200,0) \n\t \\qquad \\textrm{and}\n\t \\qquad \\vec{v}_2=(0,200)\n\t\\]\n\tdescribe motion at the speed of $200$ kilometres per hour;\n\tbut since one velocity points along the $x$-axis, and the other points along the $y$-axis,\n\tthey are \\emph{completely} different velocities. 
\n\tThe velocity vector contains information about the object's speed \\emph{and} its direction.\n\tThe direction makes a big difference.\n\tIf it turns out the tennis ball is hurtling toward you, you'd better get out of the way!\n\n\tThe main idea in this chapter is that \\textbf{vectors are not the same as numbers}.\n\tWe'll start by defining what vectors are.\n\tThen we'll describe all the mathematical operations we can perform with vectors,\n\twhich include\n\t\tvector addition $\\vec{u}+\\vec{v}$, \n\t\tvector subtraction $\\vec{u}-\\vec{v}$,\n\t\tvector scaling $\\alpha\\vec{v}$,\n\t\tand other operations.\n\n\n\t\\subsection{Definitions}\n\t\\label{vectors:definitions}\n\n\t\tA two-dimensional vector $\\vec{v}$ corresponds to a \\emph{pair of numbers}:\n\t\t\\[\n\t\t\t\\vec{v} = (v_x, v_y),\n\t\t\\]\n\t\twhere $v_x$ is the \\emph{$x$-component} of the vector and $v_y$ is its \\emph{$y$-component}.\t\t\t\t\t\\index{components}\n\t\tWe denote the set of two-dimensional vectors as $\\mathbb{R}^2$,\n\t\tsince the components of a two-dimensional vector are specified by two real numbers.\n\t\tWe'll use the mathematical shorthand $\\vec{v} \\in \\mathbb{R}^2$ to define a two-dimensional vector $\\vec{v}$.\n\t\tVectors in $\\mathbb{R}^2$ can be represented as arrows in the Cartesian plane.\n\t\tSee the vector $\\vec{v}=(3,2)$ illustrated in Figure~\\ref{fig:vector_components}.\n\n\t\tWe can also define three-dimensional vectors like the vector $\\vec{v} = (v_x, v_y, v_z) \\in \\mathbb{R}^3$,\n\t\twhich has three components.\n\t\tA three-dimensional coordinate system is similar to the Cartesian coordinate system you're familiar with,\n\t\tand includes the additional $z$-axis that measures the height above the plane.\n\t\tIn fact,\n\t\tthere's no limit to the number of dimensions for vectors.\n\t\tWe can define vectors in an $n$-dimensional space:\t\t\t\t\t\t\t\t\\index{dimension}\n\t\t$\\vec{v} = (v_1, v_2, \\ldots, v_n) \\in \\mathbb{R}^n$.\n\t\tFor the sake of simplicity,\n\t\twe'll define all the vector operation formulas using two-dimensional vectors.\n\t\tUnless otherwise indicated in the text,\n\t\tall the formulas we give for two-dimensional vectors $\\vec{v} \\in \\mathbb{R}^2$\n\t\talso apply to $n$-dimensional vectors $\\vec{v} \\in \\mathbb{R}^n$.\n\n\n\t\t\\subsubsection{Vector operations}\n\n\t\t\tConsider two vectors,\n\t\t\t$\\vec{u}=(u_x,u_y) $ and $\\vec{v}=(v_x,v_y)$,\n\t\t\tand assume that $\\alpha \\in \\mathbb{R}$ is an arbitrary constant. 
\n\t\t\tThe following operations are defined for these vectors:\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item \t\\textbf{Addition:}\t$\\vec{u} + \\vec{v} = (u_x+v_x,\\, u_y+v_y)$\n\t\t\t\t\\item \t\\textbf{Subtraction:}\t$\\vec{u} - \\vec{v} = (u_x-v_x,\\, u_y-v_y)$\n\t\t\t\t\\item \t\\textbf{Scaling:}\t\t$\\alpha \\vec{u} = (\\alpha u_x,\\, \\alpha u_y)$\n\t\t\t\t\\item \t\\textbf{Dot product:}\t$\\vec{u} \\cdot \\vec{v}  = u_xv_x+u_yv_y$\t\t\t\t\t\t\t\t\\index{dot product}\n\t\t\t\t\\item \t\\textbf{Length:}\t\t$\\|\\vec{u}\\| = \\sqrt{\\vec{u}\\cdot\\vec{u}} = \\sqrt{u_x^2+u_y^2}$.\n\t\t\t\t\t\tThe vector's length is also called the \\emph{norm} of the vector.\n\t\t\t\t\t\tWe sometimes use the letter $u$ to denote the length of the vector $\\vec{u}$.\n\t\t\t\\end{itemize}\n\n\t\t\t\\noindent\n\t\t\tNote there is no vector division operation.\n\n\t\t\tFor vectors in a three-dimensional space $\\vec{u}=(u_x,u_y,u_z) \\in \\mathbb{R}^3$\n\t\t\tand $\\vec{v}=(v_x,v_y,v_z) \\in \\mathbb{R}^3$,\n\t\t\twe can also define the \\textbf{cross product} operation\t\t\t\t\t\t\t\t\t\t\t\t\\index{cross product}\n\t\t\t$\\vec{u} \\times \\vec{v} = (u_yv_z-u_zv_y,\\; u_zv_x-u_xv_z,\\; u_xv_y-u_yv_x)$.\n\t\t\tThe dot product and the cross product are new operations that you probably haven't seen before.\n\n\n\t\t\\subsubsection{Vector representations}\n\n\t\t\tWe'll use three equivalent ways to denote vectors in two dimensions:\n\t\t\t\\begin{itemize}\n\t\t\t    \\item   \t$\\vec{v} =(v_x, v_y)$: component notation.\n\t\t\t\t\tThe vector is written as a pair of numbers called the \\emph{components} or \\emph{coordinates} of the vector.\t\\index{coordinates}\n\t\t\t    \\item \t$\\vec{v} =v_x\\hat{\\imath}+ v_y\\hat{\\jmath}$: unit vector notation.\n\t\t\t\t\tThe vector is expressed as a combination of the unit vectors\n\t\t\t\t\t$\\hat{\\imath} = (1,0)$ and $ \\hat{\\jmath} = (0,1)$.\n\t\t\t    \\item   \t$\\vec{v}=\\|\\vec{v}\\|\\angle \\theta$: length-and-direction notation (polar coordinates).\n\t\t\t\t\tThe vector is expressed in terms of its \\emph{length} $\\|\\vec{v}\\|$\n\t\t\t\t\tand the angle $\\theta$ that the vector makes with the $x$-axis.\n\t\t\t\\end{itemize}\n\n\t\t\t\\begin{figure}[htb]\n\t\t\t\t\\centering\n\t\t\t\t\\includegraphics[width=0.6\\textwidth]{figures/vectors/vector_components_annotated.pdf}\n\t\t\t\t\\vspace{-2mm}\n\t\t\t\t\\caption{The vector $\\vec{v}=(v_x,v_y) = v_x\\hat{\\imath}+ v_y\\hat{\\jmath} = \\|\\vec{v}\\|\\angle \\theta$.}\n\t\t\t\t\\label{fig:vector_components_annotated}\n\t\t\t\\end{figure}\n\n\t\t\t\\noindent\n\t\t\tWe use the component notation for doing vector algebra calculations since it is most compact.\n\t\t\tThe unit vector notation shows explicitly that the vector $\\vec{v}$ corresponds to the sum of\n\t\t\t$v_x\\hat{\\imath}$ (a displacement of $v_x$ steps in the direction of the $x$-axis)\n\t\t\tand $v_y\\hat{\\jmath}$ (a displacement of $v_y$ steps in the direction of the $y$-axis).\n\t\t\tThe length-and-direction notation describes the vector $\\vec{v}$\n\t\t\tas a displacement of $\\|\\vec{v}\\|$ steps in the direction of the angle $\\theta$.\n\t\t\tWe'll use all three ways of denoting vectors throughout the rest of the book,\n\t\t\tand we'll learn how to convert between them.\n\n\n\t\\subsection{Exercises}\n\t\\label{basis:exercises}\n\n\t\t\\input{problems/chapter3_exercises_vectors.tex}\n\n%!TEX root = ../main.tex\n\n\\section{Vectors problems}\n\\label{sec:vec_problems}\n\n\\vspace{-2mm}\n\nYou learned a bunch of vector formulas and you saw some 
vector diagrams,\nbut did you really learn how to solve problems with vectors?\nThere is only one way to find out: test yourself by solving problems.\n\nI've said it before and I don't want to repeat myself too much,\nbut it's worth saying again: the more problems you solve, the better you'll understand the material.\nIt's now time for you to try the following vector problems to make sure you're on top of things.\n\n\\medskip\n\n{ \\small\n\n\\begin{problems}{ch3}\n\n\t\\begin{problem}\n\t\tGiven the vectors $\\vec{u}=(1,1,1)$, $\\vec{v} = (2,3,1)$, and $\\vec{w}=(-1,-1,2)$,\n\t\tcompute the following products:\n\t\t\n\t\t\\threecol\n\t\t\t\\textbf{a)}~$\\vec{u} \\cdot \\vec{v}$\n\n\t\t\t\\textbf{b)}~$\\vec{u} \\cdot \\vec{w}$\n\t\t\t\n\t\t\t\\textbf{c)}~$\\vec{v} \\cdot \\vec{w}$\n\t\t\\endthreecol\n\t\t\n\t\t\\threecol\n\t\t\t\\textbf{d)}~$\\vec{u} \\times \\vec{v}$\n\n\t\t\t\\textbf{e)}~$\\vec{u} \\times \\vec{w}$\n\t\t\t\n\t\t\t\\textbf{f)}~$\\vec{v} \\times \\vec{w}$\n\t\t\\endthreecol\n\t\t\n\t\t\\begin{answer}\\textbf{a)}~$6$.\n\t\t\t\t\t\\textbf{b)}~$0$.\n\t\t\t\t\t\\textbf{c)}~$-3$.\n\t\t\t\t\t\\textbf{d)}~$(-2, 1, 1)$.\n\t\t\t\t\t\\textbf{e)}~$(3, -3, 0)$.\n\t\t\t\t\t\\textbf{f)}~$(7, -5, 1)$.\\end{answer}\n\t\\end{problem}\n\n\n\t\\begin{problem}\n\t\tGiven the vectors $\\vec{p} =(1,1,0,3,3)$ and $\\vec{q}=(1,2,3,4,5)$, calculate the following expressions:\n\t\t\\threecol\n\t\t\t\\textbf{a)}~$\\vec{p}+\\vec{q}$\n\t\t\t\n\t\t\t\\textbf{b)}~$\\vec{p}-\\vec{q}$\n\t\t\t\n\t\t\t\\textbf{c)}~$\\vec{p} \\cdot \\vec{q}$\n\t\t\\endthreecol\n\t\t\n\t\t\\begin{answer}\\textbf{a)}~$(2,3,3,7,8)$.\n\t\t\t\t\t\\textbf{b)}~$(0,-1,-3,-1,-2)$.\n\t\t\t\t\t\\textbf{c)}~$30$.\\end{answer}\n\t\\end{problem}\n\n\n\t\\begin{problem}\n\t\tFind a unit vector that is perpendicular to both  $\\vec{u}=(1, 0, 1)$ and $\\vec{v} =(1, 2, 0)$.\n\t\t\\begin{hint}\n\t\t\tUse the cross product.\n\t\t\\end{hint}\n\t\t\\begin{answer}$(-\\frac{2}{3}, \\frac{1}{3}, \\frac{2}{3})$ or $(\\frac{2}{3}, -\\frac{1}{3}, -\\frac{2}{3})$.\\end{answer}\n\t\t\\begin{solution}See \\href{http://bit.ly/1cOa8yo}{\\texttt{bit.ly/1cOa8yo}} for calculations.\n\t\t\\end{solution}\n\t\\end{problem}\n\n\t\\begin{problem}\n\t\tFind a vector that is orthogonal to both $\\vec{u}_1 = (1, 0, 1)$ and $\\vec{u}_2 =(1, 3, 0)$, \n\t\t and whose dot product with the vector  $\\vec{v}= (1, 1, 0)$ is equal to $8$.\n\t\t%\t\\begin{hint}\n\t\t%\t\\end{hint}\n\t\t\\begin{answer}$(12, -4, -12)$.\\end{answer}\n\t\t\\begin{solution}\n\t\t\tAny multiple of the vector $\\vec{u}_1 \\times \\vec{u}_2 = (-3,1,3)$ \n\t\t\tis perpendicular to both $\\vec{u}_1$ and \t$\\vec{u}_2$.\n\t\t\tWe must find a multiplier $t \\in \\mathbb{R}$ such that \n\t\t\t$t(-3,1,3) \\cdot (1, 1, 0) = 8$.\n\t\t\tComputing the dot product we find $-3t + t = 8$, so $t=-4$.\n\t\t\tThe vector we're looking for is $(12, -4, -12)$.\n\t\t\tSee \\href{http://bit.ly/1nmYH8T}{\\texttt{bit.ly/1nmYH8T}} for calculations.\n\t\t\\end{solution}\n\t\\end{problem}\n\n\n\\end{problems}\n\n} % /small\n\n", "meta": {"hexsha": "ce4109c1d08c65cb714d65edc929d9c5e20f24d0", "size": 10413, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sources/extracted/03_vectors.tex", "max_stars_repo_name": "minireference/sample-book", "max_stars_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2020-10-19T21:21:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T16:42:13.000Z", 
"max_issues_repo_path": "sources/extracted/03_vectors.tex", "max_issues_repo_name": "minireference/sample-book", "max_issues_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sources/extracted/03_vectors.tex", "max_forks_repo_name": "minireference/sample-book", "max_forks_repo_head_hexsha": "83e827d7e0c3f5fea1d08815810f0b08bef503e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-12T19:03:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-12T19:03:04.000Z", "avg_line_length": 39.1466165414, "max_line_length": 133, "alphanum_fraction": 0.6574474215, "num_tokens": 3441, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5574834053166314}}
{"text": "%`\n%\\nonstopmode\n\\hbadness=100000\n\\documentclass[a4paper, 12pt]{article}\n\\usepackage{amsmath,amsfonts,caption,float,geometry,graphicx,mathtools,pythonhighlight,textcomp,url,verbatim,subcaption,tabularx, longtable, ulem, hyperref, tikz} %,parskip\n\\geometry{ a4paper, total={170mm,257mm}, left=20mm, top=20mm}\n\\newcommand{\\matr}[1]{\\underline{\\underline{\\textbf{#1}}}}\n\\newcommand{\\ve}[1]{\\boldsymbol{#1}}\n\\newcommand{\\pythoncode}[2]{\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\begin{adjustwidth}{-1.3cm}{-1.3cm}\n\\texttt{#1}\n\\inputpython{#2}{1}{1500}\n\\end{adjustwidth}\n}\n\\usepackage[toc, page]{appendix}\n% \\usepackage[dvipsnames]{xcolor}\n% \\definecolor{subr}{rgb}{0.8, 0.33, 0.0}\n% \\definecolor{func}{rgb}{0.76, 0.6, 0.42}\n\n\\begin{document}\n% \\includegraphics[width=8cm]{CoverPage/UoBlogo.pdf}\n% \\hrule\n% \\bigbreak\n% \\textbf{F}usion Neutron \\textbf{Acti}vation Spectra \\textbf{U}nfolding by \\textbf{N}eural \\textbf{N}etworks \\\\\n% (FACTIUNN)                                      \\\\\n% \\hrule\n% \\bigbreak\n% \\begin{minipage}[b]{0.4\\textwidth}\n%     \\includegraphics[height=2cm]{CoverPage/CCFElogo.jpeg}\n%   \\end{minipage}\n%   \\hfill\n%   \\begin{minipage}[b]{0.4\\textwidth}\n%     \\includegraphics[height=3cm]{CoverPage/UKAEAlogo.jpeg}\n% \\end{minipage}\n    \n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{rl}\nauthor:&Ocean Wong          \\\\\n       &(Hoi Yeung Wong)    \\\\\ndate:  &May 2020       \\\\\nOrganization:&Culham Centre for Fusion Energy\\\\\n            & Sheffield Hallam University\\\\\nReivewed by:&Bashir Mitchell\\\\\nMotivated by:&Boredom of quarantine\n\\end{tabular}\n\\end{table}\n\\hrule\n\\begin{abstract}\nThis document explores the intuition for exponentiation of complex numbers\n\\end{abstract}\n% \\hline\n% \\twocolumn\n\\begin{center}\n\\chapter{}\n\\end{center}\n\\section{Introduction}\nTo most physics students and graduates, two of the three basic arithmetic operations of complex numbers $z \\in \\mathbb{C}$ has been made intuitive:\n\\begin{longtable}{ccc}\n\\hline\noperation& representation & intuition\\\\\n\\hline\naddition & $z_1+z_2$ & translation of the $z_1$ plane by the vector $z_2$\\\\\nmultiplication & $(z_1)(z_2)$ & dilation of the complex plane by a factor $|z_2|$\\\\ && and rotation by a factor $arg(|z_2|)$. \\\\\n\\hline\n\\end{longtable}\n\nHowever, the third commonly used arithmetic operation, exponentiation ${z_1}^{z_2}$, is not made obvious/intuitive as part of their physics training.\n\nThis text aims to provide intuition for that.\n\n\\section{Set up and calculation}\nIn the previous two operations we have visualized $z_1$ as a member on the infinitely extending complex plane, while $z_2$ describes an operation on this plane\\footnote{This asymmetric nature of $z_1$ and $z_2$ makes the communtativity of complex multiplication unintuitive. But that's a story for another time}. We will use the same approach in the following set up:\n\nLet $z_1= e^{x} e^{iy}$. 
And for convenience, let's denote $z_2 = a+ib$.\n\nThen the operation $z_1^{z_2}$ gives\n    \n\\begin{align}\nz_3 = z_1^{z_2}  & = e^{ax} e^{iay} \\cdot e^{ibx} e^{-by}\\\\\n                & = e^{ax - by} e^{iay + bx} \\\\\n                & = e^{(ax-by) +i (ay+bx)} \\label{initial algebra}\n\\end{align}\n\n\\section{Trick}\nThe trick here is to recognize the matrix-algebra nature of equation \\ref{initial algebra}.\n\nThe real part of the exponent looks like the dot product $ (a, -b) \\cdot (x, y)$, while\nthe imaginary part of the exponent looks like the dot product $(b, a) \\cdot (x, y)$.\n\nThus we can re-write the real and imaginary parts of the resulting $z_3$ as a 2D vector, given by the transformation\n\\begin{equation}\n\\begin{pmatrix}\na & -b\\\\\nb & a\n\\end{pmatrix}\n\\begin{pmatrix}\nx\\\\\ny\n\\end{pmatrix}\\label{transformation equation}\n\\end{equation}\n\n\\section{Intuition}\\label{Intuition}\nTherefore the intuition of raising $z_1$ to the power of $z_2$ can be broken down into four steps:\n\\begin{enumerate}\n    \\item Decompose $z_1$ into $x=\\ln|z_1|$, $y=\\arg(z_1)$. This will turn the polar coordinate lines on the Argand diagram (figure \\ref{polar}) into cartesian coordinates with $-\\infty < x < \\infty$, $-\\pi< y\\le \\pi$ (figure \\ref{cartesian}). The points $(-\\infty, y)$ with $-\\pi < y\\le\\pi$ in figure \\ref{cartesian} all correspond to the origin in figure \\ref{polar}.\n    \\item Construct a matrix\n    $\\matr{M}=\\begin{pmatrix}\n    a & -b\\\\\n    b & a\n    \\end{pmatrix}$ by decomposing $z_2$ into $z_2=a+ib$.\n    \\item Apply this matrix transformation $\\matr{M}$ (equation \\ref{transformation equation}) to figure \\ref{cartesian}.\n    \\item Now exponentiate the transformed figure \\ref{cartesian} (the inverse of the transformation in step 1).\n\\end{enumerate}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.4\\textwidth]{polar.png}\n    \\captionof{figure}{Input complex plane with polar coordinate lines overlaid. Exponentiating figure \\ref{cartesian} will return this figure.\n    The origin in figure \\ref{cartesian} corresponds to the point $(1,0)$ in this figure.\n    }\\label{polar}\n\\end{figure}\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=0.4\\textwidth]{cartesian.png}\n    \\captionof{figure}{The output complex plane with cartesian coordinates overlaid. Each point on the cartesian coordinate lines corresponds to a point on the polar coordinate lines in figure \\ref{polar}. 
This can be obtained by taking the logarithm of figure \\ref{polar} to give $x=\\ln|z|$, $y=\\arg(z)$.}\\label{cartesian}\n\\end{figure}\nThe lines in the figures have corresponding colours: the deep purple line gets mapped to the deep purple line after transformation, etc.\n\n\\section{Correspondence to our expectation in special cases}\nWhen $a=n$, $b=0$, \n$\\begin{pmatrix}\na & -b\\\\\nb & a\n\\end{pmatrix}$ becomes a simple scaling matrix with no skewing component, agreeing with our expectation that ${z_1}^{n}$ simply multiplies $z_1$ by itself $n$ times.\n\nIf both the real and imaginary parts are zero, then the transformation in step 3 will squash everything down to the origin; and step 4 will transform the origin back to the number $1+0i$, confirming our expectation that $z_1^0 = 1$.\n\n\\section{Example: $i^i$}\nFor the case of raising any complex number $z_1$ to the power of $i$, we have\n\\begin{align}\n    z_2 &= 0 + 1i.\n\\end{align}\nTherefore, using the checklist laid out in section \\ref{Intuition}:\n\n\\begin{enumerate}\n    \\item Step 1 is as described in section \\ref{Intuition}. At this point $z_1 = i = (0,1)$ will be transformed to $\\ln(z_1) = 0+i\\arg(i) = (0, \\arg(i)) = (0, \\frac{\\pi}{2})$.\n    \\item Step 2 gives the matrix \n$\\matr{M} = \\begin{pmatrix}\n0 & -1\\\\\n1 & 0\n\\end{pmatrix}$, which is a rotation matrix that rotates everything anti-clockwise by $90 ^\\circ$.\n    \\item Applying this to $z_1$ will give us figure \\ref{cartesian} rotated to the left by $90^\\circ$. If we consider only the point $z_1 = i$, it will now be mapped from $(0, \\frac{\\pi}{2})$ (step 1) to the point $(-\\frac{\\pi}{2}, 0)$.\n    \\item Exponentiating this result will give us the solution $z_3$. In this case we exponentiate $-\\frac{\\pi}{2} + 0i$ to obtain $e^{-\\frac{\\pi}{2}} \\approx 0.20788$.\n\\end{enumerate}\n\n\\section{Interesting results}\nIn fact, when $z_2$ is purely imaginary, the unit circle of the $z_1$ plane (whose exponent has real part $=0$) will be mapped to the real number line in the output $z_3$. This is because when $a=0$, the matrix transformation becomes a $90^\\circ$ rotation plus a scaling operation, swapping the real and the imaginary parts of the exponent.\n\\section{Multiple solutions}\nOf course, the point $(x,y)$ and the point $(x, y + 2n\\pi)$ on figure \\ref{cartesian} both map to the same point on figure \\ref{polar}. But these two points may be transformed differently by step 3. 
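For example, when $z_2=i$ the transformation is the $90^\\circ$ rotation from the example above: it maps the representation $(x,y)$ to $(-y,x)$ and the representation $(x,y+2n\\pi)$ to $(-y-2n\\pi,x)$, so after step 4 the two candidate answers differ by a factor of $e^{-2n\\pi}$. 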
Therefore one gets multiple answers for $z_3$, depending on which representation of $z_1$ on figure \\ref{cartesian} was chosen before applying the transformation given by $z_2$.\n\nAn example of this is that $i^i$ actually has multiple solutions, \n\\begin{align}\n    i^i = e^{-\\frac{\\pi}{2}} e^{2n\\pi} = 0.20788 (e^{2n\\pi})\n    & & \\forall n \\in \\mathbb{Z}\n\\end{align}\n% \\begin{figure}[H]\n% \\centering\n% \\includegraphics[width=1\\textwidth]{path/to/file.png} %Stupid latex doesn't allow two dots in the filename.\n% \\caption{Short description} \\label{somethingcatchy}\n% \\end{figure}\n\n\\end{document}\n%`", "meta": {"hexsha": "46edf36c10e541c9094668895cc1b444fde569ed", "size": 8258, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exponentiate.tex", "max_stars_repo_name": "OceanNuclear/ComplexAnalysis", "max_stars_repo_head_hexsha": "7adccc0d81ea1216e305d35e248e6d886c682e6c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exponentiate.tex", "max_issues_repo_name": "OceanNuclear/ComplexAnalysis", "max_issues_repo_head_hexsha": "7adccc0d81ea1216e305d35e248e6d886c682e6c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exponentiate.tex", "max_forks_repo_name": "OceanNuclear/ComplexAnalysis", "max_forks_repo_head_hexsha": "7adccc0d81ea1216e305d35e248e6d886c682e6c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.9204545455, "max_line_length": 367, "alphanum_fraction": 0.7107047711, "num_tokens": 2574, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802317779601, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5574834045821142}}
{"text": "\\lab{Stochastic Differential Equations}{Stochastic Differential Equations}\n\\label{lab:sde}\n\\objective{Stochastic differential equations are used to model stochastic processes. \nIn this lab we will explore Brownian motion and then derive the Euler-Maruyama numerical method for SDEs.\nWe will build an Euler-Maruyama numerical solver and use this solver to predict future stock prices.}\n\nStochastic differential equations combine the concepts of Brownian motion and differential equations in order to model stochastic processes.\nA stochastic process is a mathematical object made from a family of random variables.\nThese processes model events that occur with random changes over time, such as bacterial population growth, movement of gas molecules, and fluctuating stock prices.\nTo understand stochastic processes, it is imperative to understand Brownian motion.\n\n\\section*{Brownian Motion}\n\nBrownian motion is random movement described by the Wiener process.\nThe Wiener process is a stochastic process $W(t)\\sim\\mathscr{N}(0,t)$ which satisfies the following conditions:\n\n\\begin{enumerate}\n\\item $W(0)=0$,\\\\\n\\item $W(t-s)=W(t) - W(s)$,\\\\\n\\item $W(t)$ is independent for all $t$.\n\\end{enumerate}\n\nFor example, imagine a point at zero on a number line.\nThe point can only move one number away and must move left or right.\nThe probability of moving left and right is equal and can be modeled by a coin flip.\nLet's say that landing on heads moves the point right and tails moves the point left.\nOn our first flip, we get heads and the point moves from 0 to 1.\nNow we flip the coin again.\nThis coin flip does not depend on the previous coin flip and determines whether the point moves back to 0 or moves to 2.\nOn the second flip, we get tails and the point moves back to 0.\nContinuing this process shows the random movement of the point on the number line.\n\n\\begin{figure}\n\\captionsetup[subfigure]{justification=justified}\n\\begin{center}\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\begin{tikzpicture}\n\\draw(-2,0)--(-1,0)--(0,0)--(1,0)--(2,0);\n\\draw[fill] (-2,0) circle [radius=0.1];\n\\draw[fill] (-1,0) circle [radius=0.1];\n\\draw[fill, blue] (0,0) circle [radius=0.1];\n\\draw[fill] (1,0) circle [radius=0.1];\n\\draw[fill] (2,0) circle [radius=0.1];\n\\node at (-2,-.5) {-2};\n\\node at (-1,-.5) {-1};\n\\node at (0,-.5) {0};\n\\node at (1,.-.5) {1};\n\\node at (2,-.5) {2};\n\\end{tikzpicture}\n\\caption{Initial position}\n\\vspace{.3cm}\n\\end{subfigure}\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\begin{tikzpicture}\n\\draw(-2,0)--(-1,0)--(0,0)--(1,0)--(2,0);\n\\\n\\draw[fill] (-2,0) circle [radius=0.1];\n\\draw[fill] (-1,0) circle [radius=0.1];\n\\draw[fill] (0,0) circle [radius=0.1];\n\\draw[fill, blue] (1,0) circle [radius=0.1];\n\\draw[fill] (2,0) circle [radius=0.1];\n\\node at (-2,-.5) {-2};\n\\node at (-1,-.5) {-1};\n\\node at (0,-.5) {0};\n\\node at (1,.-.5) {1};\n\\node at (2,-.5) {2};\n\\end{tikzpicture}\n\\caption{Heads is flipped and point moves right.}\n\\vspace{.3cm}\n\\end{subfigure}\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\begin{tikzpicture}\n\\draw(-2,0)--(-1,0)--(0,0)--(1,0)--(2,0);\n\\draw[fill] (-2,0) circle [radius=0.1];\n\\draw[fill] (-1,0) circle [radius=0.1];\n\\draw[fill, blue] (0,0) circle [radius=0.1];\n\\draw[fill] (1,0) circle [radius=0.1];\n\\draw[fill] (2,0) circle [radius=0.1];\n\\node at (-2,-.5) {-2};\n\\node at (-1,-.5) {-1};\n\\node at (0,-.5) {0};\n\\node at (1,.-.5) {1};\n\\node at (2,-.5) 
\n\\end{tikzpicture}\n\\caption{Tails is flipped and the point moves left, back to 0.}\n\\vspace{.3cm}\n\\end{subfigure}\n\\begin{subfigure}{\\textwidth}\n\\centering\n\\begin{tikzpicture}\n\\draw(-2,0)--(-1,0)--(0,0)--(1,0)--(2,0);\n\\draw[fill] (-2,0) circle [radius=0.1];\n\\draw[fill, blue] (-1,0) circle [radius=0.1];\n\\draw[fill] (0,0) circle [radius=0.1];\n\\draw[fill] (1,0) circle [radius=0.1];\n\\draw[fill] (2,0) circle [radius=0.1];\n\\node at (-2,-.5) {-2};\n\\node at (-1,-.5) {-1};\n\\node at (0,-.5) {0};\n\\node at (1,-.5) {1};\n\\node at (2,-.5) {2};\n\\end{tikzpicture}\n\\caption{Tails is flipped again and the point moves left, to $-1$.}\n\\vspace{.3cm}\n\\end{subfigure}\n\\caption{Example of a random walk modeling Brownian motion. The blue dot moves left or right with equal probability, as determined by a coin flip: heads moves the dot right, tails moves it left.}\n\\end{center}\n\\end{figure}\n\nAs we flip the coin $t$ times, the distribution of the point's position is approximately normally distributed with mean 0 and variance $t$.\nNote that $dW\\sim\\mathscr{N}(0,\\Delta t)$ is also normally distributed because increments satisfy $W(t)-W(s)\\sim\\mathscr{N}(0,t-s)$.\nUsing $dW$, we can describe the movement of a point with the differential equation\n\\begin{equation}\n\\frac{dS_t}{S_t}=g(t,S_t)dW\n\\label{eqn:brownian-motion}\n\\end{equation}\nwhere $g(t,S_t)$ is a scalar function and $S_t$ is the position of the point at time $t$.\nWe can manipulate Equation \\ref{eqn:brownian-motion} to model the position of the point numerically:\n\n\n\\begin{align*}\ndS_t&=g(t,S_t)S_tdW\\\\\nS_{t+1}-S_t&\\approx g(t,S_t)S_tdW\\\\\nS_{t+1}&\\approx S_t+g(t,S_t)S_tdW\n\\end{align*}\nwhere $dW\\sim\\mathscr{N}(0,\\Delta t)$; for unit time steps this is simply $\\mathscr{N}(0,1)$.\nThis model is itself a stochastic differential equation that is based completely on Brownian motion.\n\n\\begin{problem}\nWrite a function \\li{brownian_motion()} that accepts a scalar function \\li{g}, initial condition \\li{y0}, and an array of time points \\li{t}.\nThe function should return an array of the solution to Equation \\ref{eqn:brownian-motion} evaluated at $t$.\n\nAnimate this function for $g(t,S_t)=1$, $y_0=(1,1)$ and $t\\in[0,100)$.\nThe animation should show a particle moving with a tail indicating its previous position.\n\n(Hint: Because $y_0\\in\\mathbb{R}^2$, $dW\\in\\mathbb{R}^2\\times\\mathbb{R}^2$ so that each dimension of $y_0$ moves independently. 
Thus, $dW=\\begin{pmatrix}d_{w_1}&0\\\\0&d_{w_2}\\end{pmatrix}$ where $d_{w_i}\\sim\\mathscr{N}(0,1)$).\n\\label{prob:brownian-motion}\n\\end{problem}\n\n\\section*{Euler-Maruyama Method}\n\nA stochastic differential equation (SDE) is a differential equation that involves at least one stochastic component.\nEquation \\ref{eqn:brownian-motion} is an example of an SDE, as its only component is stochastic.\nHowever, most SDEs are not made of just one stochastic component.\nA commonly used stochastic differential equation which contains a non-stochastic component is\n\\begin{equation}\n\\frac{dS_t}{S_t}=f(t,S_t)dt + g(t,S_t)dW\n\\label{eqn:general-model}\n\\end{equation}\nwhere $f(t,S_t)$ and $g(t,S_t)$ are scalar functions and $dW\\sim\\mathscr{N}(0,\\Delta t)$.\nIn this equation, the first term is a standard differential equation while the second term represents Brownian motion.\nThe combination allows for more accurate models of stochastic processes.\nTo solve Equation \\ref{eqn:general-model}, we apply the Euler-Maruyama method.\nThis method combines the Euler method for ODEs with an additional stochastic component.\nNote that Problem \\ref{prob:brownian-motion} is the Euler-Maruyama method where $f(t,S_t)=0$.\nWe solve for $S_{t+1}$ as follows:\n\n\\begin{align*}\ndS_t&=f(t,S_t)S_tdt + g(t,S_t)S_tdW\\\\\nS_{t+1}-S_t&\\approx f(t,S_t)S_tdt + g(t,S_t)S_tdW\\\\\nS_{t+1}&\\approx S_t+f(t,S_t)S_tdt + g(t,S_t)S_tdW.\n\\end{align*}\n\n\\begin{problem}\nWrite a function \\li{euler_maruyama()} which accepts a scalar function \\li{f}, a scalar function \\li{g}, initial condition \\li{y0}, and an array of time points \\li{t}.\nThe function should return an array of the solution to Equation \\ref{eqn:general-model}.\n\nTo test your function, set $f(t,S_t)=1-(S_t)^2$, $g(t,S_t)=0.1$, $y_0=1$ and $t\\in[0,10)$.\nPlotting your solution should produce a curve which randomly oscillates with $y$ generally in the interval $[0,2]$.\nAn example is shown in Figure \\ref{fig:prob2}.\n% try doing histogram instead\n\\label{prob:euler-maruyama}\n\\end{problem}\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{figures/problem2.pdf}\n\\caption{Possible solution for $f(t,S_t)=1-(S_t)^2$, $g(t,S_t)=0.1$, $y_0=1$ and $t\\in[0,10)$.}\n\\label{fig:prob2}\n\\end{figure}
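\n\nThe following is a minimal sketch of such a solver (the function name matches the problem statement, but the NumPy details are one possible choice of ours; at each step $dW$ is drawn as $\\sqrt{\\Delta t}\\,\\mathscr{N}(0,1)$, and Problem \\ref{prob:brownian-motion} is recovered by passing in $f(t,S_t)=0$):\n\\begin{lstlisting}\nimport numpy as np\n\ndef euler_maruyama(f, g, y0, t):\n    # Approximate dS = f(t,S)S dt + g(t,S)S dW at the points of t.\n    y = np.zeros((len(t),) + np.shape(y0))\n    y[0] = y0\n    for i in range(1, len(t)):\n        dt = t[i] - t[i-1]\n        dW = np.random.normal(0., np.sqrt(dt), size=np.shape(y0))\n        drift = f(t[i-1], y[i-1]) * y[i-1] * dt\n        diffusion = g(t[i-1], y[i-1]) * y[i-1] * dW\n        y[i] = y[i-1] + drift + diffusion\n    return y\n\\end{lstlisting}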
\n\n\\section*{Drift and Volatility}\n\nSDEs are often used in mathematical finance models.\nParticularly, the Geometric Brownian Motion (GBM) model is useful in predicting future stock prices.\nThe GBM is defined as follows:\n\\begin{equation}\ndS_t=\\mu S_tdt+\\sigma S_tdW,\n\\label{eqn:gbm}\n\\end{equation}\nwhere $\\mu$ is the drift of the stock and $\\sigma$ is the volatility.\nThe drift $\\mu$ of a stock is the average change in return of historical stock data.\nThe volatility of a stock is the standard deviation of the change in return of historical stock data.\nIt is expected that $\\mu$ and $\\sigma$ will remain the same for future stock data.\n\nTo find $\\mu$ and $\\sigma$, let $\\theta=(\\mu,\\sigma)$.\nWe want to find $\\theta_{MAP}$ (maximum a posteriori) which maximizes the probability that $\\mu$ and $\\sigma$ fit the historical data.\nWe can calculate $\\theta_{MAP}$ using Bayes' theorem:\n\\begin{equation}\n\\theta_{MAP}\\Bigg(\\frac{dS}{S}\\Bigg)=\\text{argmax}_\\theta P\\Bigg(\\theta|\\frac{dS}{S}\\Bigg)=\\text{argmax}_\\theta\\frac{P\\Big(\\frac{dS}{S}|\\theta\\Big)P(\\theta)}{\\int_\\Theta P\\Big(\\frac{dS}{S}|\\vartheta\\Big)P(\\vartheta)d\\vartheta}=\\text{argmax}_\\theta P\\Big(\\frac{dS}{S}|\\theta\\Big)P(\\theta),\n\\label{eqn:map}\n\\end{equation}\nwhere $\\Theta$ is the collection of all possible $\\theta$.\nSince no information is known about $P(\\theta)$, we choose $P(\\theta)$ to be equal over all $\\theta$.\nThis avoids giving bias to one value of $\\theta$ over another.\nTo ease calculations, we choose our improper prior to be uniformly 1, $P(\\theta)=1$.\nThis implies $\\theta_{MAP}=\\text{argmax}_\\theta P(\\frac{dS}{S}|\\theta)=\\theta_{MLE}$.\n\nNow it is necessary to determine the probability distribution of $P(\\frac{dS}{S}|\\theta)$.\nPlotting the change in stock price results in the histogram shown in Figure \\ref{fig:distribution}.\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{figures/distribution.pdf}\n\\caption{Distribution of change in stock price.}\n\\label{fig:distribution}\n\\end{figure}\nThe change in stock price is approximately normally distributed and thus we consider $P(\\frac{dS}{S})\\sim\\mathscr{N}(\\mu,\\sigma^2)$.\nNow we can calculate $\\theta_{MAP}$ as follows:\n\n\\begin{equation}\n\\begin{split}\n\\theta_{MAP}&=\\text{argmax}_\\theta P(\\frac{dS}{S}|\\theta)\\\\\n&=\\text{argmax}_{\\theta}\\prod_{i=1}^N\\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\text{exp}(-\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2})\\\\\n&=\\text{argmax}_{\\theta}\\log(\\prod_{i=1}^N\\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\text{exp}(-\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2}))\\\\\n&=\\text{argmax}_{\\theta}\\sum_{i=1}^N\\log(\\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\text{exp}(-\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2}))\\\\\n&=\\text{argmax}_\\theta\\sum_{i=1}^N\\log(\\frac{1}{\\sqrt{2\\pi\\sigma^2}})-\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2}\\\\\n&=\\text{argmax}_\\theta N\\log(\\frac{1}{\\sqrt{2\\pi\\sigma^2}})-\\sum_{i=1}^N\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2}\\\\\n&=\\text{argmin}_\\theta N\\log(\\sqrt{2\\pi\\sigma^2})+\\sum_{i=1}^N\\frac{((\\frac{dS}{S})_i-\\mu)^2}{2\\sigma^2}\\\\\n&=\\text{argmin}_\\theta N\\log(\\sqrt{2\\pi\\sigma^2})+\\frac{1}{2\\sigma^2}\\sum_{i=1}^N((\\frac{dS}{S})_i-\\mu)^2\n\\end{split}\n\\label{eqn:calculation}\n\\end{equation}\nThe end result of Equation \\ref{eqn:calculation} gives an equation that can be optimized using \\li{scipy.optimize.minimize} to find $\\theta$.\n\n\\begin{problem}\nWrite a function \\li{theta()} which takes in an array of historical data.\nUse \\li{scipy.optimize.minimize} and Equation \\ref{eqn:calculation} to calculate the optimal $\\mu$ and $\\sigma$.\nReturn $\\mu$ and $\\sigma$.\nFor the closing stock prices of \\li{google_stock.csv}, $\\mu\\approx1129.4321$ and $\\sigma\\approx1.8548$.\n\n(Hint: Use the sample mean and sample variance as an initial guess for \\li{scipy.optimize.minimize}. Also use \\li{method='Nelder-Mead'}.)\n\\label{prob:theta}\n\\end{problem}
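\n\nA minimal sketch of one such implementation (the objective function is the final line of Equation \\ref{eqn:calculation}; the variable names are our own):\n\\begin{lstlisting}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef theta(data):\n    # Minimize the negative log-likelihood from Equation (calculation).\n    def nll(params):\n        mu, sigma = params\n        if sigma <= 0:          # keep sigma in the valid region\n            return np.inf\n        N = len(data)\n        return (N * np.log(np.sqrt(2 * np.pi * sigma**2))\n                + np.sum((data - mu)**2) / (2 * sigma**2))\n    guess = [np.mean(data), np.var(data)]   # sample statistics, per the hint\n    result = minimize(nll, guess, method='Nelder-Mead')\n    return result.x             # (mu, sigma)\n\\end{lstlisting}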
\n\n\\begin{problem}\nUse \\li{euler_maruyama()} to predict the future closing stock prices of Google stock for $t\\in[377,427)$.\nPlot the original data and the average predicted stock prices.\nReturn an array of the average future stock prices.\nYour plot should look similar to Figure \\ref{fig:stock}.\n\n(Hint: Let $f(t,S_t)=\\mu$ and $g(t,S_t)=\\sigma$. Each $t$ represents one day, so there should be 50 predicted values.)\n\\label{stock}\n\\end{problem}\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{figures/future_stock.pdf}\n\\caption{Possible plot of future Google stock prices.}\n\\label{fig:stock}\n\\end{figure}\n\n\\section*{Convergence of Euler-Maruyama}\n\nThe Euler-Maruyama method converges strongly as $\\Delta t\\rightarrow 0$.\nThis can be seen heuristically by looking at the expected value of the error of the numerical method.\nNote that equation \\ref{eqn:gbm} can be solved analytically to get the solution\n\\begin{equation}\nS(t)=s_0e^{\\bigl(\\mu-\\frac{\\sigma^2}{2}\\bigr)t+\\sigma W(t)}\n\\label{eqn:analytic}\n\\end{equation}\nwhere $s_0=S(0)$.\nLet $A_{\\Delta t}(t)$ be the approximation of $S(t)$ with step size $\\Delta t$.\nBecause we are working with SDEs, we take the supremum over $t$ of the expected error and define\n\n\\begin{equation}\nE(A_{\\Delta t})=\\sup_t\\mathbb{E}\\Bigl(|S(t)-A_{\\Delta t}(t)|\\Bigr)\\approx C(\\Delta t)^\\gamma\n\\end{equation}\nwhere $\\gamma$ is the order of convergence and $C$ is some constant.\nTaking the log of both sides, we get\n\\begin{equation}\n\\log(E(A_{\\Delta t}))=\\log(C)+\\gamma\\log(\\Delta t).\n\\end{equation}\n\nTo find the order of convergence $\\gamma$, we can plot $E(A_{\\Delta t})$ against $\\Delta t$ on log axes.\nThis should result in a straight line with slope $\\gamma$, giving us the convergence rate.\nFor Euler-Maruyama, $\\gamma=\\frac{1}{2}$.\n\n% Compare to Euler which has convergence of order 1\n\\begin{problem}\nWrite a function \\li{convergence()} that calculates the convergence of Euler-Maruyama.\nCalculate the drift and volatility of the closing prices in \\li{google_stock.csv} and use Euler-Maruyama to predict stock values for $t\\in[0,50)$.\nCalculate the difference between the analytic solution in Equation \\ref{eqn:analytic} and the predicted data from Euler-Maruyama.\nPlot the difference for $dt=2^i$ for $i\\in[0,1,2,3,4,5]$ on a log-log plot.\nYour plot should be a straight line with slope $\\frac{1}{2}$.\n\n(Hint: Try using \\li{np.arange}).\n\\end{problem}\n\n\n
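A hedged sketch of this experiment (with illustrative $\\mu$ and $\\sigma$ rather than the fitted values; the same Brownian increments drive both the analytic solution and the approximation, as strong convergence requires):\n\\begin{lstlisting}\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nmu, sigma, s0, T = 0.05, 0.2, 1.0, 50.0   # illustrative parameters\ndts = 2.0**np.arange(6)                   # dt = 1, 2, 4, 8, 16, 32\nerrors = []\nfor dt in dts:\n    n = int(T / dt)\n    diffs = np.zeros((200, n))            # 200 sample paths\n    for k in range(200):\n        dW = np.random.normal(0., np.sqrt(dt), n)\n        W = np.cumsum(dW)\n        t = dt * np.arange(1, n + 1)\n        exact = s0 * np.exp((mu - sigma**2 / 2) * t + sigma * W)\n        approx = s0\n        for i in range(n):\n            approx = approx + mu * approx * dt + sigma * approx * dW[i]\n            diffs[k, i] = abs(exact[i] - approx)\n    errors.append(diffs.mean(axis=0).max())   # sup_t of the mean error\nplt.loglog(dts, errors)                   # slope roughly 1/2\nplt.show()\n\\end{lstlisting}\n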
YES", "lm_q1_score": 0.6513548782017746, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.5574628231839741}}
{"text": "\\subsection{Functions}\nThis section decribes all the functions in the file ``ptwXY\\_functions.c''.\n\n\\subsubsection{ptwXY\\_pow}\n\\setargumentNameLengths{ptwXY}\nThis function applies the math operation $y_i = y_i^{p}$ to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_pow(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double p );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{p}{The exponent.}\n    \\vskip 0.05 in \\noindent\nThis function infills to maintain the initial accuracy.\n\n\\subsubsection{ptwXY\\_exp}\n\\setargumentNameLengths{ptwXY}\nThis function applies the math operation $y_i = \\exp( a \\, y_i )$ to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_exp(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double a );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{a}{The exponent coefficient.}\n    \\vskip 0.05 in \\noindent\nThis function infills to maintain the initial accuracy.\n\n\\subsubsection{ptwXY\\_convolution}\nThis function returns the convolution of \\highlight{ptwXY1} and \\highlight{ptwXY2}.\n\\setargumentNameLengths{ptwXY1}\n\\CallingC{ptwXYPoints *ptwXY\\_convolution(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2,}\n    \\addArgument{int mode );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{mode}{Flag to determine the initial x-values for calculating the convolutions.}\n    \\vskip 0.05 in \\noindent\nUser should set \\highlight{mode} to 0. \n\n\\subsubsection{ptwXY\\_inverse}\nThis function returns a new instance of \\highlight{ptwXYPoints} that is the inverse of \\highlight{ptwXY1}.\nThat is, the returned points are $(y_{\\rm i},x_{\\rm i})$ where $(x_{\\rm i},y_{\\rm i})$ are the points of \\highlight{ptwXY1}.\nAll y-values of \\highlight{ptwXY1} must be descending or increasing. 
If the y-values of \\highlight{ptwXY1} are descending,\nthe returned points are reversed to ensure that, in the returned instance, $X_{\\rm i} < X_{\\rm i+1}$ where $X_{\\rm i}$ is an\nx-value of the new returned instance.\n\\setargumentNameLengths{ptwXY1}\n\\CallingC{ptwXYPoints *ptwXY\\_inverse(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\vskip 0.05 in \\noindent\nIf an error occurs, NULL is returned.\n", "meta": {"hexsha": "0d661c59f2020b6f4b57c1c867f19f3b0ba233a7", "size": 3081, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "numericalFunctions/Doc/ptwXY_functions.tex", "max_stars_repo_name": "Mathnerd314/gidiplus", "max_stars_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_stars_repo_licenses": ["MIT-0", "MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-08-29T23:46:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T10:16:25.000Z", "max_issues_repo_path": "numericalFunctions/Doc/ptwXY_functions.tex", "max_issues_repo_name": "Mathnerd314/gidiplus", "max_issues_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_issues_repo_licenses": ["MIT-0", "MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-04T16:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-01T01:54:34.000Z", "max_forks_repo_path": "numericalFunctions/Doc/ptwXY_functions.tex", "max_forks_repo_name": "Mathnerd314/gidiplus", "max_forks_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_forks_repo_licenses": ["MIT-0", "MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-03T22:41:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T22:54:43.000Z", "avg_line_length": 54.0526315789, "max_line_length": 124, "alphanum_fraction": 0.7552742616, "num_tokens": 940, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8558511396138366, "lm_q2_score": 0.6513548646660543, "lm_q1q2_score": 0.5574628032174589}}
{"text": "\\section{Introduction}\n\\label{sec:intro}\n\nThe use of Machine Learning in attempts to find new physics by probing High Energy Particle interactions has been ubiquitous, but unsupervised techiques are yet to find their place in this landscape. With vast volumes of data available from experiments like the Large Hadron Collider (LHC), it becomes increasingly desirable and ever more plausible that Unsupervised and Model-Free learning take a leading role in the hunt for new Physics.\n\nJet substructure is the analysis of radiation patterns and particle distributions within the collimated sprays of particles (jets) emerging from high-energy collisions. \\cite{energyflow_poly}.\nIt has found a central importance in many of the searches for new physics at the Large Hadron Collider (LHC) and otherwise, some of them being:\n\\begin{itemize}\n    \\item Standard Model measurements\n    \\item Identification of boosted heavyparticles\n    \\item Discrimination of quark- from gluon- initiated jets\n    \\item Search of Beyond Standard Model particles\n\\end{itemize}\n\n\\subsection{Energy Flow Polynomials (EFPs)}\n\nWe have had several methods to represent jets as inputs to computational models, from a list of 4-vectors of constituent particles, to images of the jet itself amongst others, but either they are extremely sparse representations of the jets (as in the images) making is hard to ML models to learn something meaningful, or they have had an ordering (as in lists) not providing permutation invariance.\n\n\\begin{figure}\n    \\centering\n    \\begin{subfigure}[b]{0.49\\textwidth}\n        \\includegraphics[width=\\textwidth]{img/efp-qcd-particlenetdata.png}\n        \\caption{Histograms for QCD Jets}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.49\\textwidth}\n        \\includegraphics[width=\\textwidth]{img/efp-top-particlenetdata.png}\n        \\caption{Histograms for Top Jets}\n    \\end{subfigure}\n    \\caption{EFP Polynomials on Top Tagging data}\n    \\label{fig:efp-histogram}\n\\end{figure}\n\n\\paragraph{EFPs as a method of jet representation} are both dense representations which inherently have permutation, as well as translational and rotational invariance. For any given jet, it's EFPs are Polynomials of the following form ~\\cite{energyflow_poly}:\n\\begin{equation}\n    \\text{EFP}_{G} = \\sum_{i_1 = 1}^{M} \\dots \\sum_{i_N = 1}^{M} z_{i_1} \\dots z_{i_N} \\prod_{k,l \\in G} \\theta_{i_k i_l}\n\\end{equation}\nIn addition, linear models trained on these polynomials perform nearly as well as the best Neural Networks on jet tagging tasks while having over an order magnitude fewer parameters. 
Owing to the fact that very simple supervised models can perform well using EFPs as the input representation, it seems natural that the same would apply to unsupervised techniques.\n\n\n\n\\subsection{Latent Dirichlet Allocation}\n\n\\textcolor{red}{Add details on what LDA does and its general issues, low accuracy, etc.}", "meta": {"hexsha": "6816e256ed77fa90ecf21435d6fb67cff24f1ae4", "size": 2899, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/tex/intro.tex", "max_stars_repo_name": "Blizzard57/jet-tagging", "max_stars_repo_head_hexsha": "b338b0d55380a6d21e2f78d7098650b0806f7cfc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/tex/intro.tex", "max_issues_repo_name": "Blizzard57/jet-tagging", "max_issues_repo_head_hexsha": "b338b0d55380a6d21e2f78d7098650b0806f7cfc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/tex/intro.tex", "max_forks_repo_name": "Blizzard57/jet-tagging", "max_forks_repo_head_hexsha": "b338b0d55380a6d21e2f78d7098650b0806f7cfc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.4186046512, "max_line_length": 439, "alphanum_fraction": 0.7771645395, "num_tokens": 705, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5574374141277451}}
{"text": "In order to give a better sense of the approach to reliability analysis and\noptimization presented in \\sref{chaos-reliability-analysis} and\n\\sref{chaos-optimization}, we consider a concrete application, meaning that we\nspecify the uncertain parameters and discuss the accompanying computations. This\napplication is also utilized for the quantitative evaluation of our technique\npresented in the next section, \\sref{chaos-optimization-results}.\n\n\\subsection{\\problemtitle}\n\nAssume that the structure of the reliability model $R(\\cdot | \\vg)$ of the\nsystem at hand is the one given in \\eref{reliability-model} where each\nindividual reliability function $R_i(\\cdot | \\vg_i)$ is the one shown in\n\\eref{weibull-reliability} with its own parameters $\\scale_i$ and $\\shape_i$.\n\nDuring each iteration, the temperature of processing element~$i$ exhibits \\nk{i}\ncycles. Each cycle generally has different characteristics and hence causes a\ndifferent amount of damage to the processing element. This aspect is accounted\nfor by adjusting $\\scale_i$ as shown in \\eref{thermal-cycling-scale}. The shape\nparameter $\\shape_i$ is known to be indifferent to temperature \\cite{chang2006}.\nFor simplicity, assume that $\\shape_i$ does not depend on process parameters\neither, and that $\\shape_i = \\shape$ for $i = \\range{1}{\\np}$.\n\nUnder the above assumptions, \\rref{weibull-homogeneity} applies, and the\nlifetime $\\life: \\Omega \\to \\real$ of the system has a Weibull distribution as\nfollows:\n\\[\n  \\life | (\\scale, \\shape) \\sim \\mathrm{Weibull}(\\scale, \\shape)\n\\]\nwhere $\\scale$ is the one given in \\rref{weibull-homogeneity} combined with\n\\eref{thermal-cycling-scale}. Even though the reliability model has two\nparameters, only one of them is uncertain to the designer, namely $\\scale$.\nTherefore, we treat the model as if it was parameterized only by $\\scale$. The\nshape parameter $\\shape$ is assumed to be implicitly given.\n\nIn the case of reliability analysis under process variation without any\naccompanying exploration of the design space, one can proceed to constructing a\n\\ac{PC} expansion of $\\scale$. Having obtained this lightweight surrogate, the\nreliability of the system can be studied from various perspectives. In the\ncurrent scenario, however, the quantity of interest \\g is the one given in\n\\eref{chaos-optimization-quantity}, since it allows for evaluating the objective\nfunction and constraints defined in \\eref{chaos-optimization-objective} and\n\\eref{chaos-optimization-constraints}, respectively. In\n\\eref{chaos-optimization-quantity}, the component denoted by \\life stands for\nthe parameterization of the reliability model; consequently, it is $\\scale$ in\nthe illustrative application developed in this section.\n\nLet us now turn our attention to the uncertain parameters \\vu of the problem\nbeing addressed. We focus on two crucial process parameters: the effective\nchannel length and gate oxide thickness. Each processing element is then\nassigned two random variables corresponding to the two process parameters, which\nmeans that $\\nu = 2 \\np$ in the current example; see also \\sref{chaos-problem}.\n\n\\begin{remark}\nThe variability in a process parameter at a spatial location can be modeled as a\ncomposition of several parts---such as inter-lot, inter-wafer, inter-die, and\nintra-die variations---which is demonstrated in\n\\sref{chaos-transient-application}. In this section, we illustrate a different\napproach. 
From a mathematical perspective, it is sufficient to consider only one\nrandom variable per location with an adequate distribution and correlations with\nrespect to the other locations.\n\\end{remark}\n\nBased on \\sref{chaos-formulation}, the parameters \\vu are assumed to be given as\na set of marginal distributions and a correlation matrix denoted by\n$\\set{F_i}_{i = 1}^\\nu$ and $\\correlation{\\vu}$, respectively. Note that the\nnumber of distinct marginals is only two, since \\np components of \\vu correspond\nto the same process parameter.\n\nBoth process parameters, the effective channel length and gate oxide thickness,\ncorrespond to Euclidean distances; they take values on bounded intervals of the\npositive half of the real line. Consequently, similarly to\n\\sref{chaos-transient-application}, we model the two process parameters using\nthe four-parameter family of beta distributions shown in\n\\eref{beta-distribution}. Without loss of generality, the parameters are assumed\nto be independent of each other, and the correlations between those elements of\n\\vu that correspond to the same process parameter are assumed to be given by the\ncorrelation function shown in \\eref{bayes-correlation}.\n\nThe process parameters manifest themselves in the calculations associated with\nthe power model shown in \\eref{chaos-power-model-bulk} through static power.\nAnalogously to \\sref{chaos-transient-application}, the modeling here is based on\n\\up{SPICE} simulations of a series of \\up{CMOS} invertors. The invertors are\ntaken from the 45-nm open cell library by NanGate \\cite{nangate} and configured\naccording to the 45-nm \\up{PTM} \\up{HP} model \\cite{ptm}. The simulations are\nperformed on a fine-grained and sufficiently broad three-dimensional grid\ncomprising the effective channel length, gate oxide thickness, and temperature;\nthe results are tabulated. An interpolation algorithm is subsequently employed\nwhenever static power is to be evaluated at a particular point within the range\nof the grid. The output of this model is scaled up to account for about 40\\% of\nthe total power consumption \\cite{liu2007}. Regarding temperature, the thermal\n\\up{RC} circuit utilized for dynamic steady-state analysis is constructed by\nvirtue of HotSpot \\cite{skadron2003} as described in \\sref{temperature-model}.\n\nAt this point, the two outputs of Stage~1 are now specified.\n\n\\subsection{Probability Transformation}\n\nAt Stage~2 in \\fref{chaos-overview}, the uncertain parameters \\vu are\ntransformed into a vector of independent random variables \\vz via a suitable\ntransformation $\\transform$. Specifically, we use the one given in\n\\eref{probability-transformation}, which also includes model order reduction.\nUnlike \\sref{chaos-transient-application}, in this section, we let \\vz obey the\nstandard Gaussian distribution and, therefore, tailor $\\transform$ accordingly;\nsee \\xref{probability-transformation}.\n\n\\subsection{Surrogate Construction}\n\nSince the auxiliary variables $\\vz = (\\z_i)_{i = 1}^\\nz$ are Gaussian, the\npolynomial basis considered at Stage~3 is to be composed of Hermite polynomials,\nwhich is the exact scenario described in \\xref{polynomial-chaos}. The variables\nalso tell us how to approach numerical integration needed for evaluation of the\ncoefficients of \\ac{PC} expansions: since we are interested in integrals with\nrespect to the standard Gaussian measure, Gauss--Hermite quadratures\n\\cite{maitre2010} are worth considering. 
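\n\nAs a small illustration of this step, the following sketch computes the coefficients of a one-dimensional expansion with respect to the probabilists' Hermite polynomials via Gauss--Hermite quadrature; the code and all names in it are ours, not part of the framework:\n\\begin{verbatim}\nimport numpy as np\nfrom numpy.polynomial.hermite_e import hermegauss, hermeval\nfrom math import factorial, sqrt, pi\n\ndef pc_coefficients(g, order, n_quad=20):\n    # Nodes/weights for the weight exp(-x^2/2); renormalize so the\n    # weights integrate against the standard Gaussian density.\n    x, w = hermegauss(n_quad)\n    w = w / sqrt(2.0 * pi)\n    coeffs = []\n    for n in range(order + 1):\n        basis = np.zeros(n + 1)\n        basis[n] = 1.0\n        He_n = hermeval(x, basis)   # He_n at the quadrature nodes\n        # <g, He_n> / <He_n, He_n>, with <He_n, He_n> = n!\n        coeffs.append(np.sum(w * g(x) * He_n) / factorial(n))\n    return np.array(coeffs)\n\n# example: g(z) = z^2 has coefficients (1, 0, 1) in {1, z, z^2 - 1}\nprint(pc_coefficients(lambda z: z**2, order=2))\n\\end{verbatim}\n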
These quadratures are especially\nefficient, since they belong to the class of Gaussian quadratures and thus\ninherit their properties; see \\xref{numerical-integration}.\n\nLastly, let us illustrate the Hermite basis. In the case of working with only\none standard Gaussian variable ($\\nz = 1$), a second-level \\ac{PC} expansion\n($\\lc = 2$) of a three-dimensional quantity of interest \\vg is as follows:\n\\[\n  \\chaos{1}{2}{\\vg}\n  = \\hat{\\vg}_{(0)} \\psi_{(0)}\n  + \\hat{\\vg}_{(1)} \\psi_{(1)}\n  + \\hat{\\vg}_{(2)} \\psi_{(2)}\n\\]\nwhere $\\set{\\hat{\\vg}_{\\vi}} \\subset \\real^3$,\n\\begin{align*}\n  & \\psi_{(0)}(\\vz) = 1, \\\\\n  & \\psi_{(1)}(\\vz) = \\z_1, \\text{ and} \\\\\n  & \\psi_{(2)}(\\vz) = \\z_1^2 - 1.\n\\end{align*}\nAt Stage~4, the expansion is post-processed as described in\n\\sref{chaos-optimization}.\n", "meta": {"hexsha": "e99f311050dff10e93558ec47c98125975d34d16", "size": 7625, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "include/uncertainty/process/development/optimization-application.tex", "max_stars_repo_name": "IvanUkhov/thesis", "max_stars_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "include/uncertainty/process/development/optimization-application.tex", "max_issues_repo_name": "IvanUkhov/thesis", "max_issues_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "include/uncertainty/process/development/optimization-application.tex", "max_forks_repo_name": "IvanUkhov/thesis", "max_forks_repo_head_hexsha": "95a7e2ee7664b94156906322610555e36e53cfe0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.4814814815, "max_line_length": 80, "alphanum_fraction": 0.7832131148, "num_tokens": 1870, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5574374069065196}}
{"text": "\\chapter{Experimental Tests in the Solar System}\nHow can we test relativistic effects in our solar system?\n\\begin{itemize}\n  \\item The sum of the mass of all planets is much smaller than the solar mass\n  $M_{\\astrosun}\\approx \\unit[2\\cdot 10^{30}]{kg}$. The heaviest planet is\n  Jupiter with a mass of $M\\textsubscript{Jup}\\approx \\unit[2\\cdot\n  10^{27}]{kg}$, therefore we can assume the planets to be testparticles.\n  \\item The sun is in good approximation a spherically symmetric object. We can\n  therefore use the Schwarzschild metric.\n\\end{itemize}\n% Consider the variation of the energy functional \n% \\begin{equation}\n% \\int\\tensor{g}{_\\mu_\\nu}\\tensor{\\dot{x}}{^\\mu}\\tensor{\\dot{x}}{^\\nu}\\dif\\lambda\\,.\n% \\end{equation}\nWe define\n$K:=-\\tensor{g}{_\\mu_\\nu}\\tensor{\\dot{x}}{^\\mu}\\tensor{\\dot{x}}{^\\nu}$, which is\nconserved along geodesics and it holds true that\n\\begin{equation} \nK=\\begin{cases}\n-1& \\mathrm{\\ timelike\\ geodesics}\\\\\n\\phantom{-}0& \\mathrm{\\ lightlike\\ geodesics}\n\\end{cases}\\,.\n\\end{equation}\nUsing the Schwarzschild metric, we can explicitly write\n\\begin{equation}\nK=-e^{2a(r)}\\dot{t}^2+e^{2b(r)}\\dot{r}^2+r^2\\left(\\dot{\\vartheta}+\\sin^2\\vartheta\n\\dot{\\phi}\\right)\n\\end{equation}\nA Killing-vector $\\tensor{\\xi}{^\\mu}$ satisfies\n\\begin{equation}\n\\tensor{\\xi}{_\\mu}\\tensor{\\dot{x}}{^\\mu}=\\mathrm{\\ const.}\n\\end{equation}\nFor the Schwarzschild metric, there are four independent Killing-vectors,\ncorresponding to 3 rotations, and staticity (time independence).\nConservation of angular momentum leads to a motion in a plane, w.l.o.g.\\ we can\nchose a coordinate system in which $\\vartheta=\\nicefrac{\\pi}{2}$.\nThe Killing-Vectors are given by\n\\begin{align}\n\\tensor*{\\xi}{_{(\\varphi)}^\\mu}&=(\\partial_\\varphi)^\\mu=\\tensor*{\\delta}{_\\varphi^\\mu}\\,,\\\\\n\\tensor*{\\xi}{_{(t)}^\\mu}&=(\\partial_t)^\\mu=\\tensor*{\\delta}{_t^\\mu}\\,.\n\\end{align}\nThe associated conserved quantities are\n\\begin{align}\nE&:=\\tensor*{\\xi}{_{(\\varphi)}^\\mu}\\tensor{g}{_\\mu_\\nu}\\tensor{\\dot{x}}{^\\nu}\n=\\tensor{g}{_t_t}\\dot{t}\n=e^{2a}\\dot{t}\n=\\left(1-\\frac{2M}{r}\\right)\\dot{t}\n\\\\\nL&:=\\tensor*{\\xi}{_{(t)}^\\mu}\\tensor{g}{_\\mu_\\nu}\\tensor{\\dot{x}}{^\\nu}\n=\\tensor{g}{_\\varphi_\\varphi}\\dot{\\varphi}\n=r^2\\dot{\\varphi}\\,.\n\\end{align}\nFor massless\n%TODO stimmt das?\nparticles, we can think of $E$ and $L$ as conserved energy and angular momentum.\nIt follows\n\\begin{equation}\nK=-\\left(1-\\frac{2M}{r}\\right)\\dot{t}^2+\\left(1-\\frac{2M}{r}\\right)^{-1}\\dot{r}^2+r^2\\varphi^2\n\\end{equation}\nIf we insert the conserved quantities $L,E$ and multiply by\n$\\frac{1}{2}\\left(1-\\frac{2M}{r}\\right)$, we get\n\\begin{equation}\n\\frac{E^2}{2}=\\frac{\\dot{r}^2}{2}\n+\\left(1-\\frac{2M}{r}\\right)\\left(\\frac{L^2}{2r}-\\frac{K}{2}\\right)\\,.\n\\end{equation}\nThis expression can be rearranged to the Form\n\\begin{equation}\n\\frac{\\dot{r}^2}{2}+V\\textsubscript{eff}(r)\n=\\varepsilon\\,,\n\\end{equation}\nwith the convenient definitions \n\\begin{equation}\nV\\textsubscript{eff}(r):=\\frac{MK}{r}+\\frac{L^2}{2r^2}-\\frac{ML^2}{r^3}\\,,\\quad\n\\varepsilon:=\\frac{E^2+K}{2}\\,.\n\\end{equation}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{plot2.pdf}\n\\caption{Effective Potential in Newtonian physics.}\n%TODO unit of a!\n\\end{figure}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{plot1.pdf}\n\\caption{Effective Potential in GR for 
various\n$a=L/(mr\\textsubscript{S})$.}\n%TODO unit of a!\n\\end{figure}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{plot3.pdf}\n\\caption{Effective potential for a photon.}\n%TODO unit of a!\n\\end{figure}\n\n\\section{Perihelion Shift of Mercury}\n% \\begin{figure}[b]\n% \\centering\n% \\begin{tikzpicture}[auto,node distance=3cm,thick,main node/.style={circle,draw,font=\\sffamily\\Large\\bfseries}]\n% \\draw [black,thick,dashed] (0,0) circle (1/1.6);\n% \\draw [black,thick,dashed] (0,0) circle (1/0.4);\n% \\draw [thick, color=red, domain=0:6*pi, samples=200, smooth]\n%   plot (xy polar cs:angle=\\x r, radius={1/(1-0.6*cos(\\x r+0.2*\\x r))});\n% \\node [fill=white] at (0,1) {Perihelion};s\n% \\node at (0,0) {\\Huge\\textbullet};\n% \\node (1) at (40:1/1.6) {\\textbullet};\n% \\node (2) at (90:1/1.6) {\\textbullet};\n% \\node (3) at (140:1/1.6) {\\textbullet};\n% \\end{tikzpicture} \n% \\caption{Perhelion shift of mercury (exaggerated)}\n% \\end{figure}\nIt has been known for a long time that the perihelion of Mercury shifts by\nroughly $5600''$ per century\\footnote{An arcsecond ($''$) is the 3600th part of a degree.}.\nIf one subtracts all other known effects, an unexplained fraction of\n$43''/\\textrm{century}$ remains.\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{perishift.pdf}\n\\caption{Perihelion shift of Mercury (exaggerated).}\n\\end{figure}\n\nThere were several proposed explanations for this, including\n\\begin{itemize}\n  \\item a new planet called Vulcan between Mercury and the Sun,\n  \\item a change to Newton's $1/r^2$-law,\n  \\item effects due to the Sun's quadrupole moment. \n\\end{itemize}\nWe will now look at what is predicted by GR.\nThe trajectories of planets ($K=-1$) are given by\n\\begin{equation}\n\\dot{r}^2-\\frac{2M}{r}+\\frac{L^2}{r^2}-\\frac{2ML^2}{r^3}=E^2-1\\label{eq:planeteq}\\,.\n\\end{equation}\nIf we think of the radius as a function of the angle, we can write\n\\begin{equation}\n\\dot{r}=\\od{r}{\\lambda}=\\od{r}{\\phi}\\od{\\phi}{\\lambda}\\,.\n\\end{equation}\nMultiplying \\eqref{eq:planeteq} with\n$\\left(\\od{\\phi}{\\lambda}\\right)^{-2}=\\frac{1}{\\dot{\\phi}^2}=\\frac{r^4}{L^2}$\nyields\n\\begin{equation}\n\\left(\\dod{r}{\\phi}\\right)^2-\\frac{2Mr^3}{L^2}+r^2-2Mr\n=\\left(E^2-1\\right)\\frac{r^4}{L^2}\\,.\\label{eq:orbitrphi}\n\\end{equation}\nSimilar to the treatment of the Kepler problem in classical mechanics, we\ndefine\n\\begin{equation}\nu(\\phi):=\\frac{L^2}{Mr(\\phi)}\\,.\n\\end{equation}\nThis implies\n\\begin{equation}\n\\dif r=-\\frac{L^2}{Mu^2}\\dif u\\,,\\quad\n\\left(\\dod{r}{\\phi}\\right)^2=\\frac{L^4}{M^2u^4}\\left(\\dod{u}{\\phi}\\right)^2\\,.\n\\end{equation} \nIn terms of the new variables \\eqref{eq:orbitrphi} reads\n\\begin{equation}\n\\left(\\dod{u}{\\phi}\\right)^2-2u+u^2-\\frac{2M^2}{L^2}u^3=(E^2-1)\\frac{L^2}{M^2}\\,.\n\\end{equation}\nWe differentiate with respect to $\\phi$ and get\\footnote{Primes denote $\\phi$-derivatives.}\n\\begin{equation}\n2u^\\prime u^{\\prime\\prime}-2u^\\prime+2uu^{\\prime}-\\frac{6M^2}{L^2}u^2u^\\prime=0\\,.\n\\end{equation}\nDividing by $2u^\\prime$ yields\n\\begin{equation}\nu^{\\prime\\prime}-1+u=\\frac{3M^2}{L^2}u^2\\,,\n\\end{equation}\nwhich is an equation similar to the Kepler problem. 
The difference is in the\nright-hand side, which equals zero in the classical case.\nWe may calculate an approximation by perturbative corrections\n\\footnote{The quantity relevant for the error of a given order is\n$\\frac{3M^2}{L^2}$.}\n\\begin{equation}\nu=u_0+u_1+u_2+\\dots\\,.\n\\end{equation}\nAt zeroth order, we neglect the quadratic term, so that we are left with\n\\begin{equation}\nu_0^{\\prime\\prime}-1+u_0=0\\,,\n\\end{equation}\nwhich is equal to the Newtonian problem, with exact solution\n\\begin{equation}\nu_0(\\phi)=1+\\varepsilon\\cos\\phi\\,.\n\\end{equation}\n\\begin{figure}[hbtp!]\n\\centering\n \\includegraphics{ellipsegeo.pdf}\n\\caption{An ellipse is the solution to the classical unperturbed Kepler\nproblem.}\n\\end{figure}\n\nAt first order we have\n\\begin{equation}\n\\begin{split}\nu_1^{\\prime\\prime}-1+u_1&=\\frac{3M^2}{L^2}u_0^2\\\\\n&=\\frac{3M^2}{L^2}\\left(1+\\varepsilon\\cos\\phi\\right)^2\\\\\n&=\\frac{3M^2}{L^2}\\left(1+\\frac{\\varepsilon^2}{2}+2\\varepsilon\\cos\\phi+\\frac{1}{2}\\varepsilon^2\\cos\n2\\phi\\right)\\,,\n\\end{split}\n\\end{equation}\nwhere we made use of trigonometric identities in the last step.\nKeeping the homogeneous part and the secularly growing piece of the particular\nsolution (the remaining bounded terms only shift the orbit slightly), the first\norder solution reads\n\\begin{equation}\nu_1(\\phi)=1+\\varepsilon\\cos\\phi+\\frac{3M^2}{L^2}\\varepsilon\\,\\phi\\sin\\phi\\,,\n\\end{equation}\nwhich can be checked by using\n\\begin{align}\n\\dod[2]{}{\\phi}(\\phi\\sin\\phi)+\\phi\\sin\\phi&=2\\cos\\phi\\,,\\\\\n\\dod[2]{}{\\phi}(\\cos 2\\phi)+\\cos 2\\phi&=-3\\cos 2\\phi\\,.\n\\end{align}\nAssuming that the motion is still approximately an ellipse whose perihelion\nslowly precesses, a Taylor expansion yields\n\\begin{equation}\nu(\\phi)=1+\\varepsilon\\cos[(1-\\delta)\\phi]\n\\simeq\n1+\\varepsilon\\cos\\phi\n+\\varepsilon\\delta\\phi\\sin\\phi+\\landauO(\\delta^2)\\,.\n\\end{equation}\nComparing this to the expression for $u_1$ we find\n\\begin{equation}\n\\delta=\\frac{3M^2}{L^2}\n\\end{equation}\nand the error is of order $\\delta^2$.\n%\\begin{equation}\n% r(\\phi)=\\frac{p}{1+\\varepsilon\\cos\\phi}\n% =\\frac{a\\left(1-\\varepsilon^2\\right)}{1+\\varepsilon\\cos\\phi}\n% \\end{equation}\nIf there is a perihelion at $\\phi_p$, the next one will occur\nat $\\phi_p+2\\pi+\\Delta\\phi$ with\n\\begin{equation}\n\\Delta\\phi=2\\pi\\delta=\\frac{6\\pi M^2}{L^2}\\,.\n\\end{equation}\nThe zeroth order solution gives \n\\begin{equation}\nr_0(\\phi)=\\frac{L^2}{M(1+\\varepsilon\\cos\\phi)}\\stackrel{!}{=}\n\\frac{a(1-\\varepsilon^2)}{1+\\varepsilon\\cos\\phi}\\,.\n\\end{equation}\nSo $L^2\\approx M(1-\\varepsilon^2)a$ with an error of order $\\delta$ that can\nbe neglected.\nWith this approximation and restoring factors of $c$ and $G$ we arrive at the following\nexpression for the perihelion shift:\n\\begin{equation}\n\\Delta\\phi=\\frac{6\\pi G M_{\\astrosun}}{c^2 (1-\\varepsilon^2)a}\\,.\n\\end{equation}\nAs expected, the effect scales with $1/a$, and thus the planet nearest to the Sun\nreceives the strongest effect. 
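\n\nAs a quick numerical cross-check of the perturbative result, one can integrate the orbit equation directly and read off the perihelion advance. The following sketch uses illustrative, exaggerated parameters, and all names and values in it are our own:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\n# Integrate u'' = 1 - u + delta*u^2 with delta = 3M^2/L^2 (exaggerated\n# here for visibility) and compare the advance with 2*pi*delta.\ndelta, eps = 1e-2, 0.2056\n\ndef rhs(phi, y):                  # y = (u, u')\n    return [y[1], 1.0 - y[0] + delta * y[0]**2]\n\nsol = solve_ivp(rhs, [0.0, 40.0 * np.pi], [1.0 + eps, 0.0], max_step=1e-3)\nu, phi = sol.y[0], sol.t\nperi = [i for i in range(1, len(u) - 1)\n        if u[i] >= u[i-1] and u[i] > u[i+1]]    # perihelia maximize u\nprint(np.diff(phi[peri]).mean() - 2.0 * np.pi)  # roughly 2*pi*delta\n\\end{verbatim}\n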
For Mercury we have\n\\begin{equation}\n\\frac{GM_{\\astrosun}}{c^2}=\\unit[1.48]{km}\\,,\\quad a =\\unit[5.79\\cdot\n10^7]{km}\\,,\\quad\\varepsilon=0.2056\\,,\n\\end{equation}\nwhich results in a perihelion shift of\n\\begin{equation}\n\\Delta\\phi\\textsubscript{mer}\n=\\unitfrac[5.03\\cdot 10^{-7}]{rad}{orbit}\n=\\unitfrac[43]{{}^{\\prime\\prime}}{century}\\,,\n\\end{equation}\nwhich fits the observation within the error margin.", "meta": {"hexsha": "42d15a7ad3fdbd06f2a2a0640aaa272b44bbeada", "size": 9441, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/10-experimental-tests.tex", "max_stars_repo_name": "Bigben37/GeneralRelativity", "max_stars_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-31T13:18:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-31T13:18:57.000Z", "max_issues_repo_path": "src/10-experimental-tests.tex", "max_issues_repo_name": "QuantumDancer/GeneralRelativity", "max_issues_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/10-experimental-tests.tex", "max_forks_repo_name": "QuantumDancer/GeneralRelativity", "max_forks_repo_head_hexsha": "c3ca730b97d2f90a6e74da296cf1b5bb0305126b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5930232558, "max_line_length": 112, "alphanum_fraction": 0.6991844084, "num_tokens": 3554, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5574374009557994}}
{"text": "\n\\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size\n\\usepackage{physics}\n\\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs\n\\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default\n\\usepackage[english]{babel} % English language/hyphenation\n\\usepackage{amsmath,amsfonts,amsthm} % Math packages\n\\usepackage{braket}\n\\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template\n\\usepackage{tikz}\n\\usepackage{amsmath}\n\\usepackage{sectsty} % Allows customizing section commands\n\\allsectionsfont{\\centering \\normalfont\\scshape} % Make all sections centered, the default font and small caps\n\\usepackage[mathscr]{euscript}\n\\usepackage{bm}\n\\newcommand{\\uvec}[1]{\\boldsymbol{\\hat{\\textbf{#1}}}}\n\\usepackage[thinlines]{easytable}\n\\usepackage{fancyhdr} % Custom headers and footers\n\\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers\n\\fancyhead{} % No page header - if you want one, create it in the same way as the footers below\n\n\\usepackage{multicol}\n\\fancyfoot[L]{} % Empty left footer\n\\fancyfoot[C]{} % Empty center footer\n\\fancyfoot[R]{\\thepage} % Page numbering for right footer\n\\renewcommand{\\headrulewidth}{0pt} % Remove header underlines\n\\renewcommand{\\footrulewidth}{0pt} % Remove footer underlines\n\\setlength{\\headheight}{13.6pt} % Customize the height of the header\n\\usepackage{float}\n\\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)\n\n\\setlength\\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text\n\n%----------------------------------------------------------------------------------------\n%\tTITLE SECTION\n%----------------------------------------------------------------------------------------\n\n\\newcommand{\\horrule}[1]{\\rule{\\linewidth}{#1}} % Create horizontal rule command with 1 argument of height\n\n\\title{\t\n\\normalfont \\normalsize \n\\textsc{California State University San Marcos \\\\ Dr. De Leone, Physics 323} \\\\ [25pt] % Your university, school and/or department name(s)\n\\horrule{0.5pt} \\\\[0.4cm] % Thin top horizontal rule\n\\huge H.W. 
3 \\\\ % The assignment title\n\\horrule{2pt} \\\\[0.5cm] % Thick bottom horizontal rule\n}\n\n\\author{Josh Lucas} % Your name\n\n\\date{\\normalsize\\today} % Today's date or a custom date\n\n\\begin{document}\n\n\\maketitle % Print the title\n\n%----------------------------------------------------------------------------------------\n%\tPROBLEM 1\n%----------------------------------------------------------------------------------------\n\n\\section{State Vectors}\nConsider the following state vectors:\n\\begin{equation*}\n\\ket{\\psi_1} = 2\\ket{+} + 3\\ket{-};\\quad \\ket{\\psi_2} = -3i\\ket{+} + 2\\ket{-};\\quad \\ket{\\psi_3} = \\ket{+} + e^{i\\frac{ \\pi}{4}}\\ket{-};  \n\\end{equation*}\n\\textbf{a) Calculate the inner product of $\\bra{\\psi_2}$ with $\\ket{\\psi_1}$}\\\\\n\\begin{align*}\n\\braket{\\psi_2 |\\psi_1} & = (-3i)^{*}(2) \\braket{+|+} +(2)^{*}(3)\\braket{-|-}\\\\\n   & =  6i + 6\\\\\n   \\braket{\\psi_2 | \\psi_1} &= 6+6i\n\\end{align*}\nNote that the coefficients of $\\bra{\\psi_2}$ are the complex conjugates of those of $\\ket{\\psi_2}$.\\\\\n\\textbf{b) Normalize each state vector}\\\\\n \\begin{align*}\n1 & = \\braket{\\psi_1 | \\psi_1} \\\\\n & = C^* \\big \\{\\ 2\\bra{+} +3\\bra{-}\\ \\big\\}\\ C\\big \\{\\ 2\\ket{+}+3\\ket{-} \\big \\} \\\\\n& = C^*C \\big \\{ 2^2 \\braket{+|+} + 2(3)\\braket{+|-} + 3(2)\\braket{-|+} + 3^2 \\braket{-|-} \\big \\}  \\\\\n 1 & = 13|C|^2 \\\\\n |C_{\\psi_1}| & = \\frac{1}{\\sqrt{13}} \\\\\n \\ket{\\psi_1} & = \\frac{1}{\\sqrt{13}} \\big ( 2\\ket{+} + 3\\ket{-} \\big )\n\\end{align*}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n \\begin{align*}\n1 & = \\braket{\\psi_2 | \\psi_2} \\\\\n & = C^* \\big \\{\\ 3i\\bra{+} +2\\bra{-}\\ \\big\\}\\ C\\big \\{\\ -3i\\ket{+}+2\\ket{-} \\big \\} \\\\\n& = C^*C \\big \\{ -9(i)^2 \\braket{+|+} +3i(2)\\braket{+|-} + 2(-3i)\\braket{-|+} + 2^2 \\braket{-|-} \\big \\}  \\\\\n& = C^*C \\big \\{ 9 + 4 \\big \\} \\\\  \n 1 & = 13|C|^2 \\\\\n |C_{\\psi_2}| & = \\frac{1}{\\sqrt{13}} \\\\\n \\ket{\\psi_2} & = \\frac{1}{\\sqrt{13}} \\big ( -3i\\ket{+} + 2\\ket{-} \\big )\n\\end{align*}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{align*}\n1 & = \\braket{\\psi_3 | \\psi_3} \\\\\n  & = C^* \\big \\{  \\bra{+} + e^{-i\\tfrac{\\pi}{4}} \\bra{-} \\big \\} C \\big \\{ \\ket{+} + e^{i\\tfrac{\\pi}{4}} \\ket{-} \\big \\} \\\\\n  & = C^*C \\big \\{ \\braket{+|+} + e^{i\\tfrac{\\pi}{4} - i\\tfrac{\\pi}{4}} \\braket{-|-} \\big \\} \\\\\n  & = C^* C \\big \\{ 1+ 1 \\big \\} \\\\\n  1 & = 2|C|^2 \\\\\n  |C_{\\psi_3}| & = \\frac{1}{\\sqrt{2}} \\\\\n  \\ket{\\psi_3} & = \\frac{1}{\\sqrt{2}} \\big ( \\ket{+} +   e^{i\\tfrac{\\pi}{4}} \\ket{- }\\big )\n\\end{align*}\n \\textbf{c) For each normalized state vector, use Postulate 4 to calculate the probability that the spin-component is up or down along the z-axis.}
\n \\begin{multicols}{2}\n \\noindent\n\\begin{align*}\n \\mathscr{P_+} & = |\\braket{+|\\psi_1 }|^2 \\\\\n & = |\\bra{+} \\big[ \\tfrac{2}{\\sqrt{13}}\\ket{+} + \\tfrac{3}{\\sqrt{13}} \\ket{-} \\big] |^2 \\\\\n & = |\\tfrac{2}{\\sqrt{13}} \\braket{+|+} + \\tfrac{3}{\\sqrt{13}} \\braket{+|-}|^2 \\\\\n & = \\bigg | \\tfrac{2}{\\sqrt{13}} \\bigg |^2 \\\\\n\\mathscr{P_+} & = \\frac{4}{13}\n  \\end{align*}\n   \\begin{align*}\n \\mathscr{P_-} & = |\\braket{-|\\psi_1 }|^2 \\\\\n & = |\\bra{-} \\big[ \\tfrac{2}{\\sqrt{13}}\\ket{+} + \\tfrac{3}{\\sqrt{13}} \\ket{-} \\big] |^2 \\\\\n & = |\\tfrac{2}{\\sqrt{13}} \\braket{-|+} + \\tfrac{3}{\\sqrt{13}} \\braket{-|-}|^2 \\\\\n & = \\bigg | \\tfrac{3}{\\sqrt{13}} \\bigg |^2 \\\\\n\\mathscr{P_-} & = \\frac{9}{13}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n  \\begin{multicols}{2}\n  \\noindent\n  \\begin{align*}\n \\mathscr{P_+} & = |\\braket{+|\\psi_2 }|^2 \\\\\n & = |\\bra{+} \\big[ \\tfrac{-3i}{\\sqrt{13}}\\ket{+} + \\tfrac{2}{\\sqrt{13}} \\ket{-} \\big] |^2 \\\\\n & = |\\tfrac{-3i}{\\sqrt{13}} \\braket{+|+} + \\tfrac{2}{\\sqrt{13}} \\braket{+|-}|^2 \\\\\n & = \\bigg | \\tfrac{-3i}{\\sqrt{13}} \\bigg |^2 \\\\\n\\mathscr{P_+} & = \\frac{9}{13}\n  \\end{align*}\n   \\begin{align*}\n \\mathscr{P_-} & = |\\braket{-|\\psi_2 }|^2 \\\\\n & = |\\bra{-} \\big[ \\tfrac{-3i}{\\sqrt{13}}\\ket{+} + \\tfrac{2}{\\sqrt{13}} \\ket{-} \\big] |^2 \\\\\n & = |\\tfrac{-3i}{\\sqrt{13}} \\braket{-|+} + \\tfrac{2}{\\sqrt{13}} \\braket{-|-}|^2 \\\\\n & = \\bigg | \\tfrac{2}{\\sqrt{13}} \\bigg |^2 \\\\\n\\mathscr{P_-} & = \\frac{4}{13}\n  \\end{align*}\n  \\end{multicols}\n  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n  \\begin{multicols}{2}\n  \\noindent\n  \\begin{align*}\n  \\mathscr{P_+} & = |\\braket{+ | \\psi_3}|^2 \\\\\n  & = |\\bra{+} \\big [ \\tfrac{1}{\\sqrt{2}} \\ket{+} + \\tfrac{ e^{i\\tfrac{\\pi}{4}}}{\\sqrt{2}} \\ket{-} \\big ]|^2 \\\\\n  & = |\\tfrac{1}{\\sqrt{2}} \\braket{+|+} + \\tfrac{ e^{i\\tfrac{\\pi}{4}}}{\\sqrt{2}} \\braket{+|-}  |^2 \\\\\n  & = \\bigg | \\frac{1}{\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_+} & = \\frac{1}{2}\n\\end{align*}\n\\begin{align*}\n  \\mathscr{P_-} & = |\\braket{- | \\psi_3}|^2 \\\\\n  & = |\\bra{-} \\big [ \\tfrac{1}{\\sqrt{2}} \\ket{+} + \\tfrac{ e^{i\\tfrac{\\pi}{4}}}{\\sqrt{2}} \\ket{-} \\big ]|^2 \\\\\n  & = |\\tfrac{1}{\\sqrt{2}} \\braket{-|+} + \\tfrac{ e^{i\\tfrac{\\pi}{4}}}{\\sqrt{2}} \\braket{-|-}  |^2 \\\\\n  & = \\bigg |  \\tfrac{ e^{i\\tfrac{\\pi}{4}}}{\\sqrt{2}} \\bigg |^2 \\\\\n  \\mathscr{P_-} & = \\frac{1}{2}\n\\end{align*}\n\\end{multicols}\n \\textbf{d) Would you expect to find the same probabilities for the measured spin-components along the x- and y- axes?}\\\\\n \\\\\n We would not expect the same probabilities along the x- and y-axes in general. Those probabilities depend on the relative phase between the $\\ket{+}$ and $\\ket{-}$ components, which differs between the three states.
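\n\nAs a concrete check (using the standard states $\\ket{\\pm}_x = \\tfrac{1}{\\sqrt{2}}\\big(\\ket{+} \\pm \\ket{-}\\big)$; this verification is our own addition):\n\\begin{align*}\n\\mathscr{P}_{+x} & = |\\braket{+_x|\\psi_1}|^2 = \\bigg | \\tfrac{1}{\\sqrt{2}}\\Big(\\tfrac{2}{\\sqrt{13}} + \\tfrac{3}{\\sqrt{13}}\\Big) \\bigg |^2 = \\frac{25}{26}\\,, \\\\\n\\mathscr{P}_{-x} & = |\\braket{-_x|\\psi_1}|^2 = \\bigg | \\tfrac{1}{\\sqrt{2}}\\Big(\\tfrac{2}{\\sqrt{13}} - \\tfrac{3}{\\sqrt{13}}\\Big) \\bigg |^2 = \\frac{1}{26}\\,,\n\\end{align*}\nwhich indeed differs from the $\\frac{4}{13}$ and $\\frac{9}{13}$ found along the z-axis.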
\n\n\\section{Phase of quantum state vector}\n\\textbf{Show that a change in the overall phase of a quantum state vector does not change the probability of obtaining a particular result in a measurement. }\\\\\n\\\\\nThe overall phase of a quantum state vector is not physically measurable; only differences in phase are. Taking the modulus squared pairs the phase with its complex conjugate, so the overall phase cancels.\n\\begin{align*}\n\\mathscr{P_\\pm} & = |\\braket{\\pm|\\psi}|^2 \\\\\n\\mathscr{P}_{\\psi_\\alpha} & = |\\braket{\\pm | e^{i\\alpha}\\psi}|^2 \\\\\n& = |e^{i\\alpha} \\braket{\\pm | \\psi}|^2 \\\\\n& = (e^{i\\alpha} \\braket{\\pm | \\psi})(e^{i\\alpha} \\braket{\\pm | \\psi})^* \\\\\n& = e^{i\\alpha}e^{-i\\alpha}\\, \\braket{\\pm | \\psi}\\braket{\\pm | \\psi}^* \\\\\n& = |\\braket{\\pm | \\psi}|^2 \\\\\n\\mathscr{P}_{\\psi_\\alpha} & = \\mathscr{P_\\pm}\n\\end{align*}\n%----------------------------------------------------------------------------------------\n\n\\end{document}", "meta": {"hexsha": "03c891401f31fad98dbb84260c6f37fd1f2b7dde", "size": 8545, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "H.W. 3 Due Monday 9-10-2018/Phys323 H.W.3.tex", "max_stars_repo_name": "Epikarsios/QuantumHW", "max_stars_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "H.W. 3 Due Monday 9-10-2018/Phys323 H.W.3.tex", "max_issues_repo_name": "Epikarsios/QuantumHW", "max_issues_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "H.W. 3 Due Monday 9-10-2018/Phys323 H.W.3.tex", "max_forks_repo_name": "Epikarsios/QuantumHW", "max_forks_repo_head_hexsha": "1db9c1f3d6e4627a6848a69b8530b9f24c14a889", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.1091954023, "max_line_length": 162, "alphanum_fraction": 0.5394967817, "num_tokens": 3233, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710086, "lm_q2_score": 0.8333246015211008, "lm_q1q2_score": 0.5573943764390008}}
{"text": "%\\lipsum[1-20]\n\nChapter~\\ref{chp:bgm} has provided a framework for modeling the self-assembly of a polyhedron using a Markov process. When using a transition rate matrix of the form \n\\begin{align}\n  \\label{eq:Qdef_rep}\n  Q_{jk} =\n  \\begin{cases}\n   S_{jk}e^{-\\beta\\left(E_{jk} - E_{j}\\right)} & \\text{if } [x^j] \\leftrightarrow [x^k]  \\\\\n   -z_j       & \\text{if } j = k \\\\\n   0 & \\text{else}\n  \\end{cases}\n\\end{align}\nthe intermediate energies $E_j$ and transition barrier energies $E_{jk}$ must be specified. Using minus the number of closed edges in an intermediate has a nice, physically motivated interpretation. The choice of barrier heights, however, has not yet been addressed. Using information of an intermediate's geometric configuration space that can be gathered using our manifold reflected Brownian motion scheme, we derive energy barrier heights and thus the rate matrix $Q$. This combination of combinatorial and geometric structure provides a unique insight into the various themes of self-assembly.\n\n\\section{Deriving Rates}\n\nConsider the process of adding a face to an intermediate. In the combinatorial configurations space this is a transition from one node of the graph to another. However, for this transition to occur in a more physical setting, the geometry of the intermediate plays a large role. One can imagine that there are geometric configurations are contorted in a way as to prevent a face from being added at a particular location. For instance consider the linkage of two triangles as an intermediate of the tetrahedron. Since the combinatorial configuration space is rather trivial, adding a face will results in the rigid linkage of three triangles all sharing a vertex. Since the equilateral triangle has angles of $\\frac{\\pi}{3}$, the angle between the two edges of the two triangle linkage that the third face attaches to must be close to $\\frac{\\pi}{3}$ for it to attach easily. We extend and formalize this idea for face attachment on a Building Game intermediate to obtain the rates we desire.\n\n%Since we assume that each polyhedron is composed solely of triangles, each attachable face will have three vertices. For each of these vertices, there is either $0, 1,$ or $2$ vertices on the existing intermediate that they will attach to. If there are zero vertices on the intermediate that the vertex on the new face combines with, it means that the attachment occurs \n\n\nIf there are three vertices $v_a, v_b, v_c$ in the intermediate that each combine with one of the three vertices on the added triangle, and $(v_a, v_b)$ and $(v_b, v_c)$ are edges in one of the intermediate's existing triangles, the rate of attachment for the face is given by the proportion of time the configuration--undergoing manifold reflected Brownian motion--has $\\frac{\\pi}{3} - \\epsilon < \\angle( v_a, v_b, v_c) < \\frac{\\pi}{3} + \\epsilon$. Here $\\epsilon$ is a parameter reflecting how close to the ideal angle the configuration must be in order to allow attachment. If there are no such triplets $v_a, v_b, v_c$ where a face is being attached, the attachment occurs at a unit rate. If there are more than one triplet $v_a, v_b, v_c$, the attachment rate is averaged across each such triplet. Taking this empirical rate and summing over the other faces corresponding to the degeneracies of the transition of interest, the empirical forward transitions rates $\\hat{Q}_{jk}$ are derived. 
By comparing these computed rates with the analytic form for our transition rate matrix, we can derive the barrier heights $E_{jk}$ for each transition. Since the barriers are symmetric, this information also gives the reverse transition rates $Q_{kj}$.\n\\begin{align}\n\\hat{Q}_{jk} &= Q^0_{jk} \\\\\n\t&= S_{jk}e^{-\\beta_0(E_{jk} - E_j)} \\\\\nE_{jk} &= E_j-\\frac{1}{\\beta_0}\\log\\left(\\hat{Q}_{jk}/S_{jk}\\right)\n\\end{align}\nHere $\\beta_0$ is the inverse temperature used to derive the barriers from the empirical computations. Since \n\\begin{align}\nQ_{jk} \t&= S_{jk}e^{-\\beta(E_{jk} - E_j)} \\\\\n &= S_{jk} e^{-\\beta\\left(E_j-\\frac{1}{\\beta_0}\\log\\left(\\hat{Q}_{jk}/S_{jk}\\right) - E_j\\right)} \\\\\n &= S_{jk}\\left(\\hat{Q}_{jk}/S_{jk}\\right)^{\\frac{\\beta}{\\beta_0}},\n\\end{align}\nthe choice of $\\beta_0$ effectively specifies the units of $\\beta$. Typically, we choose $\\beta_0 = 1$.\n\nTo derive the empirical rates, each intermediate was simulated for $10,000,000$ steps with a stepsize of $h = 0.05$. By storing each of the relevant angles either step by step or in a histogram, the rates can be computed for each choice of $\\epsilon$ at a later time as needed.\n\n\\section{Self-Assembly Statistics}\n\nAfter we have computed the rate matrix $Q$ for the octahedron as a function of $\\epsilon$ and $\\beta$, we are now able to compute the different statistics derived in Section~\\ref{sec:StochMod}. \n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.6]{images/octahedron_pi.eps}\n\\caption{Octahedron stationary distributions as a function of $\\beta$.}\n\\label{fig:OctaPi}\n\\end{figure}\nFor instance, the stationary distribution of each octahedron intermediate is plotted as a function of $\\beta$ in Figure~\\ref{fig:OctaPi}.\n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.6]{images/octahedron_log_pi.eps}\n\\caption{Natural log of the octahedron stationary distributions as a function of $\\beta$.}\n\\label{fig:OctaLogPi}\n\\end{figure}\nTo better view how these probabilities decay as the temperature decreases, Figure~\\ref{fig:OctaLogPi} presents the natural logarithm of these stationary probabilities.\n\nWith the rate matrix, we can also look at how the occupation probabilities change over time. \n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.6]{images/octahedron_finite_dist.eps}\n\\caption{Occupation probabilities for octahedron intermediates as a function of time.}\n\\label{fig:OctaFinDist}\n\\end{figure}\nIf the process begins with a single face, the probabilities diffuse through the graph according to $e^{Qt}e_1$ and eventually converge to the stationary distribution $\\pi$ as $t \\to \\infty$. Figure~\\ref{fig:OctaFinDist} shows how these occupation probabilities change over time.\n\nAnother way to visualize the dynamic behavior of our Markov process is to look at individual transitions and the rates with which they occur.\n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.5]{images/octahedron_ccs.eps}\n\\caption{Combinatorial configuration space for the octahedron.}\n\\label{fig:OctaCCS}\n\\end{figure}\nFigure~\\ref{fig:OctaCCS} shows the octahedron combinatorial configuration space with the edges labeled with the joint transition rate $\\pi_j Q_{jk} = \\pi_k Q_{kj}$ for the choices $\\beta = 0.8$ and $\\epsilon = 0.5$.\n\nSince our model uses the two parameters $\\epsilon$ and $\\beta$, it is of particular interest to examine how the choice of these parameters affects the behavior of the Markov process. 
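As a complement to the rate construction above, a minimal Python/NumPy sketch of the barrier-height recovery and the $\beta$-rescaling; the arrays $\hat{Q}$, $S$ and $E$ are assumed given over the intermediates, and all names are illustrative.

\begin{lstlisting}[language=Python]
# Sketch: E_jk from empirical rates, and rescaling Q to any beta.
import numpy as np

def barrier_heights(Q_hat, S, E, beta0=1.0):
    """E_jk = E_j - log(Qhat_jk / S_jk) / beta0 on edges with S_jk > 0."""
    E_jk = np.full(Q_hat.shape, np.nan)
    mask = (S > 0) & (Q_hat > 0)
    Ej = np.broadcast_to(np.asarray(E)[:, None], Q_hat.shape)
    E_jk[mask] = Ej[mask] - np.log(Q_hat[mask] / S[mask]) / beta0
    return E_jk

def rates_at_beta(Q_hat, S, beta, beta0=1.0):
    """Q_jk = S_jk * (Qhat_jk / S_jk)^(beta/beta0); diagonal set to -z_j."""
    Q = np.zeros(Q_hat.shape)
    mask = S > 0
    Q[mask] = S[mask] * (Q_hat[mask] / S[mask]) ** (beta / beta0)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to zero
    return Q
\end{lstlisting}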
\n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.22]{images/octahedron_pi_Q_grid.eps}\n\\caption{The effect of $\\beta$ and $\\epsilon$ on the transition rates and stationary distribution of the octahedron combinatorial configuration space.}\n\\label{fig:OctaPiGrid}\n\\end{figure}\nFigure~\\ref{fig:OctaPiGrid} shows how the stationary distributions (node size) and the transition rates (edge thickness) change for three choices of $\\epsilon$ and three choices of $\\beta$. These plots provide an effective way of visualizing which edges and pathways are more probable for different parameter choices. \n\nOne of the most interesting Markov process statistics is the expected formation time $E[\\tau]$.\n\\begin{figure}[ht]\n\\centering\n  \\includegraphics[scale=0.6]{images/octahedron_tau.eps}\n\\caption{Expected formation times for the octahedron as a function of $\\epsilon$ and $\\beta$.}\n\\label{fig:OctaTau}\n\\end{figure}\nFigure~\\ref{fig:OctaTau} shows the relation between $\\beta$ and the expected formation time for several choices of $\\epsilon$. At $\\beta = 0$, the Markov process is essentially a random walk on the graph, weighted only by the degeneracy of each transition and unaffected by the choice of $\\epsilon$. At the other extreme, as $\\beta$ grows, each transition occurs at an increasingly small rate. This leads to a sharp increase in formation times. Interestingly, there is an intermediary range of $\\beta$ values where a minimum of the formation time function occurs. \n", "meta": {"hexsha": "7623d3519d15b74dfc2f3ed0b80796e5a96999d2", "size": 8314, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/results.tex", "max_stars_repo_name": "Danie1Johnson/thesis", "max_stars_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/results.tex", "max_issues_repo_name": "Danie1Johnson/thesis", "max_issues_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/results.tex", "max_forks_repo_name": "Danie1Johnson/thesis", "max_forks_repo_head_hexsha": "cc1137b2ab121771a6937a903f835268a5ec313e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.3626373626, "max_line_length": 1250, "alphanum_fraction": 0.7700264614, "num_tokens": 2105, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245787544825, "lm_q2_score": 0.668880247169804, "lm_q1q2_score": 0.5573943502099711}}
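The expected formation time $E[\tau]$ itself can be obtained from $Q$ by a standard first-passage-time linear solve; the sketch below (Python/NumPy) assumes the completed polyhedron is the last state and the process starts from a single face. It is a generic construction, not necessarily the exact computation used for the plots above.

\begin{lstlisting}[language=Python]
# Sketch: expected formation time via a first-passage linear solve.
import numpy as np

def expected_formation_time(Q, start=0, target=-1):
    """Expected first-passage time to `target` for a CTMC with generator Q."""
    n = Q.shape[0]
    target %= n
    keep = [i for i in range(n) if i != target]   # transient states
    A = Q[np.ix_(keep, keep)]
    t = np.linalg.solve(A, -np.ones(len(keep)))   # solves A t = -1
    return t[keep.index(start)]
\end{lstlisting}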
{"text": "\\section{Data Structures}\n\\subsection{Binary-Indexed Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/fenwick/BIT.cpp\"}\n\\hrulefill\n\\subsection{2D Binary-Indexed Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/fenwick/2D_BIT.cpp\"}\n\\hrulefill\n\\subsection{Range BIT}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/fenwick/BIT_range.cpp\"}\n\\hrulefill\n\\subsection{RMQ}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/static rmq/max_index.cpp\"}\n\\hrulefill\n\\subsection{Segment Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/segment tree/classic_segt.cpp\"}\n\\hrulefill\n\\subsection{KD-Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/kd_tree.cpp\"}\n\\hrulefill\n\\subsection{Wavelet Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/wavelet_tree.cpp\"}\n\\hrulefill\n\\subsection{Lazy Treap}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/BST/lazy_treap.cpp\"}\n\\hrulefill\n\\subsection{Range LEQ query}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/range_leq.cpp\"}\n\\hrulefill\n\\subsection{Persistent Segment Tree}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Data Structures/segment tree/persistent_segtree.cpp\"}\n\\hrulefill\n\n\\section{Flow and Matching}\n\\subsection{Max Flow}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Flow & Matching/dinic.cpp\"}\n\\hrulefill\n\\subsection{Bipartite Matching}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Flow & Matching/hopcroft_karp.cpp\"}\n\\hrulefill\n\\subsection{Min-cost Flow}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Flow & Matching/min_cost_flow.cpp\"}\n\\hrulefill\n\\subsection{LP Solver}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Flow & Matching/simplex.cpp\"}\n\\hrulefill\n\\subsection{Stable Marriage}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Flow & Matching/stable_marriage.cpp\"}\n\\hrulefill\n\n\\section{Geometry}\n\\subsection{2D Floating-point Geometry}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/base_ld.cpp\"}\n\\hrulefill\n\\subsection{3D Geometry}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/3d.cpp\"}\n\\hrulefill\n\\subsection{2D Convex Hull}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/convex_hull.cpp\"}\n\\hrulefill\n\\subsection{Pair of Closest Points}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/closest_points.cpp\"}\n\\hrulefill\n\\subsection{Triangulation}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/triangulation.cpp\"}\n\\hrulefill\n\\subsection{Delaunay 
Triangulation}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/delaunay.cpp\"}\n\\hrulefill\n\\subsection{3D Convex Hull}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Geometry/3dcnv.cpp\"}\n\\hrulefill\n\n\\section{Graphs}\n\\subsection{Bridge Edges}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/bridge.cpp\"}\n\\hrulefill\n\\subsection{Dijkstra's}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/dijkstra.cpp\"}\n\\hrulefill\n\\subsection{Euler Path}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/eulerpath.cpp\"}\n\\hrulefill\n\\subsection{Kruskal's Algorithm}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/kruskal.cpp\"}\n\\hrulefill\n\\subsection{Strongly-connected Components}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/scc.cpp\"}\n\\hrulefill\n\\subsection{Topological Sorting}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Graphs/top_sort.cpp\"}\n\\hrulefill\n\n\\section{Math}\n\\subsection{Floating-Point Matrix}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Math/matrix_fp.cpp\"}\n\\hrulefill\n\\subsection{Finite Field Matrix}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Math/matrix_ll.cpp\"}\n\\hrulefill\n\\subsection{Primes}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Math/primes.cpp\"}\n\\hrulefill\n\\subsection{Formulas}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Math/formulas.cpp\"}\n\\hrulefill\n\n\\section{Miscellaneous}\n\\subsection{2-SAT}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/2sat.cpp\"}\n\\hrulefill\n\\subsection{Convex Hull Trick}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/convex_hull_trick.cpp\"}\n\\hrulefill\n\\subsection{Divide and Conquer Optimization}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/divide_conquer_optimization.cpp\"}\n\\hrulefill\n\\subsection{Fast C++ Input}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/fast_input.cpp\"}\n\\hrulefill\n\\subsection{Longest Increasing Subsequence}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/longest_incr_subseq.cpp\"}\n\\hrulefill\n\\subsection{Java}\n\\raggedbottom\\lstinputlisting[style=java]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/Main.java\"}\n\\hrulefill\n\\subsection{Other}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Miscellaneous/random.cpp\"}\n\\hrulefill\n\n\\section{Strings}\n\\subsection{Suffix Array}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Strings/suffix_array.cpp\"}\n\\hrulefill\n\\subsection{KMP Algorithm}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Strings/kmp.cpp\"}\n\\hrulefill\n\\subsection{String 
Hashing}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Strings/range_hash.cpp\"}\n\\hrulefill\n\\subsection{Z-Algorithm}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Strings/Z_algo.cpp\"}\n\\hrulefill\n\n\\section{Transforms}\n\\subsection{FFT}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Transforms/fft1.cpp\"}\n\\hrulefill\n\\subsection{NTT}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Transforms/ntt.cpp\"}\n\\hrulefill\n\\subsection{FHWT and Similar}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Transforms/fwht.cpp\"}\n\\hrulefill\n\n\\section{Trees}\n\\subsection{Centroid Decomposition}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Trees/centroid_decomposition.cpp\"}\n\\hrulefill\n\\subsection{Heavy-Light Decomposition}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Trees/hld.cpp\"}\n\\hrulefill\n\\subsection{LCA}\n\\raggedbottom\\lstinputlisting[style=cpp]{\"/home/jack/comp/Competitive-Programming/Trees/lca.cpp\"}\n\\hrulefill\n\n", "meta": {"hexsha": "138cf4d7910abb76cfbc706de088bebac96437d0", "size": 7528, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "_Notebook/contents.tex", "max_stars_repo_name": "Cephian/Competitive-Programming", "max_stars_repo_head_hexsha": "6f14ba02245577b858db45f5f1368985aaa9fbfb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-16T20:28:08.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-17T23:29:01.000Z", "max_issues_repo_path": "_Notebook/contents.tex", "max_issues_repo_name": "Cephian/Competitive-Programming", "max_issues_repo_head_hexsha": "6f14ba02245577b858db45f5f1368985aaa9fbfb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_Notebook/contents.tex", "max_forks_repo_name": "Cephian/Competitive-Programming", "max_forks_repo_head_hexsha": "6f14ba02245577b858db45f5f1368985aaa9fbfb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.3493975904, "max_line_length": 135, "alphanum_fraction": 0.8241232731, "num_tokens": 2225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.837619947119304, "lm_q2_score": 0.665410572017153, "lm_q1q2_score": 0.5573611681456335}}
{"text": "\\section{\\toolname}\\label{sec:Toolname}\n\nSimilar to few-shot learning, positive-unlabeled (PU)~learning must overcome limited labeled data.  In fact, PU~learning could be viewed as an instance of zero-shot learning since labeled negative instances are non-existent!  Given this overlap, it may be reasonably expected that ideas originally proposed for Siamese networks may also be applicable to PU~learning.\n\nShown in Figure~\\ref{fig:Toolname}, our deep positive-unlabeled learning architecture, \\textit{\\toolname}, relies on a bifurcated autoencoder.  Each input ${\\xBase\\in\\xDomain}$ is mapped by encoder~$g_e$ to (concatenated) latent space ${\\zBase\\in\\mathbb{R}^{\\abs{\\zP}+\\abs{\\zS}+\\abs{\\zN}}}$. Unlike Siamese networks which map instances to~$\\mathbb{R}^m$, \\toolname\\ is a function ${\\fPU:\\xDomain\\mapsto\\xDomain\\times\\xDomain}$.  Each instance in the generated tuple corresponds to an output of one of two decoders.  As described in the next section, decoder~$\\fPUp$ is trained to accurately reconstruct specifically positive instances while decoder~$\\fPUn$ is trained to correctly reconstruct negative instances.\n\nObserve that the positive and negative decoder inputs --- ${\\lbrack \\zS~\\zP \\rbrack}$ and ${\\lbrack \\zS~\\zN \\rbrack}$ respectively --- are not identical.  The shared latent vector component,~$\\zS$, contains the mutual information needed to reconstruct \\textit{both} positive and negative instances while $\\zP$ and~$\\zN$ contain class-specific reconstruction information --- i.e.,~for positive and negative respectively.\n\n\\begin{figure}[t]\n  \\centering\n  \\input{tikz/deep_pu.tex}\n  \\caption{\\toolname\\ network architecture}\\label{fig:Toolname}\n\\end{figure}\n\n\\subsection{Learning}\n\nThis subsection outlines our novel ideas for training \\toolname\\@.\n\n\\paragraph{Loss Function} Just as Siamese network training requires the unique triplet loss, \\toolname\\ similarly uses a novel loss function we call the \\textit{\\attLossLow}.\n\nLet $\\xBase\\in\\xDomain$ be a training example with label~${y\\in\\{\\negLabel,\\posLabel\\}}$. Consider first the more straightforward case where ${\\xBase\\in\\Pos}$ necessitating that ${y=\\posLabel}$.  As explained above, \\toolname's positive decoder output,~$\\xHatP$, should be a more accurate reconstruction of~$\\xBase$ than the negative decoder output,~$\\xHatN$. If $\\xBase$ is considered the ``anchor,'' $\\xHatP$ and $\\xHatN$ can serve as the triplet loss' $\\exP$ and~$\\exN$ respectively. The fundamental intuition outlined in Section~\\ref{sec:Siamese} still applies.  Our positive \\attLossLow\\ is shown in Eq.~\\eqref{eq:Loss:AttP}.  As with the triplet loss, $\\alpha$ is a hyperparameter, and distance metric~$\\distSym$'s selection application-specific with mean-squared error often adequate.\n\n\\begin{equation}\\label{eq:Loss:AttP}\n  \\lPosAtt = \\max\\Big\\{ \\puDistDiff + \\alpha, 0 \\Big\\}\n\\end{equation}\n\nConsider next the alternative case where ${\\xBase\\in\\Unlabel}$. $y$~is unknown so the triplet loss cannot be directly used, but it helps guide the intuition.\n\nWhen ${y=\\posLabel}$, then during training the distance between~$\\xBase$ and~$\\exP$ should decrease while the distance between~$\\xBase$ and~$\\exN$ should increase.  If ${y=\\negLabel}$, the direction of changes of these distances is reversed.  $\\xBase$~can be thought of as being \\textit{attracted} to the decoder associated with its label~$y$. 
The unlabeled \\attLossLow\\ in Eq.~\\eqref{eq:Loss:AttU} modifies the triplet loss to incorporate this attraction.  The basic intuition behind this loss function, put colloquially, is that each ${\\xBase\\in\\Unlabel}$ is driven to ``pick a side'' --- either the positive or negative class.\n\n\\begin{equation}\\label{eq:Loss:AttU}\n  \\lUAtt = \\max\\Big\\{ - \\big\\lvert\\puDistDiff\\big\\rvert + \\alpha, 0 \\Big\\}\n\\end{equation}\n\nWhen ${\\puDist{\\xBase}{\\xHatP} < \\puDist{\\xBase}{\\xHatN}}$ --- i.e.,~$\\xBase$ appears ``more positive'' --- Eq.~\\eqref{eq:Loss:AttU} is equivalent to Eq.~\\eqref{eq:Loss:AttP}, and the triplet loss' intuition applies. In the opposite case, where $\\xBase$ appears ``more negative'' --- ${\\puDist{\\xBase}{\\xHatN} < \\puDist{\\xBase}{\\xHatP}}$ --- the absolute value inverts the intuition, meaning the loss is minimized when the distance between $\\xBase$ and~$\\xHatN$ is reduced and the distance between $\\xBase$ and~$\\xHatP$ increased.\n\nThe \\attLossLow\\ can introduce instability during training since an obvious minimum is to attract all unlabeled instances to one decoder and produce a maximally poor reconstruction on the other decoder.  To ensure minimum reconstruction quality, a reconstruction error term is added to both the positive and unlabeled attractive losses as shown in Eq.~\\eqref{eq:Loss:PuP} and Eq.~\\eqref{eq:Loss:PuU} respectively. ${\\lambda\\in\\mathbb{R}_{{>}0}}$ is a hyperparameter.\n\n\\begin{align}\n  \\lPuP &= \\lPosAtt + \\lambda\\puDist{\\xBase}{\\xHatP} \\label{eq:Loss:PuP}\\\\\n  \\lPuU &= \\lUAtt + \\underbrace{\\lambda\\min\\Big\\{\\puDist{\\xBase}{\\xHatP}, \\puDist{\\xBase}{\\xHatN}\\Big\\}}_{\\text{Reconstruction Quality}}\\label{eq:Loss:PuU}\n\\end{align}\n\n\\begin{algorithm}[t]\n  \\caption{\\toolname\\ training algorithm}\\label{alg:Complete}\n  \\input{alg/complete_alg.tex}\n\\end{algorithm}\n\n\\paragraph{Training Algorithm} Training is divided into three disjoint phases.  In the first phase, the encoder and negative decoder are fit to minimize the reconstruction error on~$\\Unlabel$ similar to a standard, stacked autoencoder; the positive decoder is untouched during this stage.  Once $\\Unlabel$'s reconstruction error has adequately converged, training stops, and all weights in the encoder are frozen except those associated exclusively with $\\zP$.  This ensures that during the next training phase, the performance of the negative decoder is not degraded.\n\nStage~2 trains the encoder and positive decoder on~$\\Pos$, similar again to a standard autoencoder.  We allow the positive decoder to train for twice as many epochs as the negative decoder.  This increases the likelihood that the positive decoder can reconstruct positive examples more accurately than the negative decoder.  Similarly, since the positive decoder has, until this point, never seen a negative training example, its reconstruction performance on negative instances should be poor.\n\nBefore starting the final phase, all network weights are unfrozen. The encoder and both decoders are then trained on interleaved batches from $\\Pos$ and~$\\Unlabel$ using the loss functions in Eq.~\\eqref{eq:Loss:PuP} and~\\eqref{eq:Loss:PuU}.  If hyperparameter $\\alpha$ is initially set too high, attraction of unlabeled examples to the positive decoder can be unstable and result in unpredictable network behavior. We address this by setting the initial $\\alpha$ close to zero and linearly increasing its value after each epoch.  
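A matching sketch of the unlabeled \attLossLow\ and the regularised losses of Eqs.~\eqref{eq:Loss:PuP} and~\eqref{eq:Loss:PuU}, under the same mean-squared-error assumption; this is illustrative only, not the reference implementation.

\begin{lstlisting}[language=Python]
# Sketch: unlabeled attractive loss and the regularised PU losses.
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def unlabeled_attractive_loss(x, x_hat_p, x_hat_n, alpha=1.0):
    diff = mse(x, x_hat_p) - mse(x, x_hat_n)
    return max(-abs(diff) + alpha, 0.0)            # "pick a side"

def loss_pu_positive(x, x_hat_p, x_hat_n, alpha=1.0, lam=0.1):
    diff = mse(x, x_hat_p) - mse(x, x_hat_n)
    return max(diff + alpha, 0.0) + lam * mse(x, x_hat_p)

def loss_pu_unlabeled(x, x_hat_p, x_hat_n, alpha=1.0, lam=0.1):
    recon = min(mse(x, x_hat_p), mse(x, x_hat_n))  # reconstruction quality
    return unlabeled_attractive_loss(x, x_hat_p, x_hat_n, alpha) + lam * recon
\end{lstlisting}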
The previously mentioned batch interleaving also promotes stability by ensuring the performance of both decoders remains in sync.\n\n% \\begin{algorithm}[t]\n%   \\caption{Joint training of the positive and unlabeled decoders}\\label{alg:JointTraining}\n%   \\input{alg/attractive_training.tex}\n% \\end{algorithm}\n\n\\subsection{Inference}\n\nFor unlabeled example~${\\xBase\\in\\Unlabel}$, if $g_{p}$ yields a better reconstruction than $g_{n}$, it can be reasonably concluded that $\\xBase$ is positively labeled; otherwise, $\\xBase$ is more likely negatively labeled.  This intuition is the basis for \\toolname's inference function shown in Eq.~\\eqref{eq:PU:ClassificationFunc}.\n\n  \\begin{equation}\\label{eq:PU:ClassificationFunc}\n    \\hat{y} = -\\sign{\\puDistDiff}\n  \\end{equation}\n\n\\noindent\nIn rare cases where ${\\puDist{\\xBase}{\\xHatP}=\\puDist{\\xBase}{\\xHatN}}$, $\\xBase$ is equally likely to be either negatively or positively labeled.  For simplicity, such examples are assigned a positive label.\n\n\n", "meta": {"hexsha": "5879da43e05c1b480cb281b57a3b2849e82c7d83", "size": 7733, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "project/final_report/deep_pu.tex", "max_stars_repo_name": "ZaydH/cis572", "max_stars_repo_head_hexsha": "8b57f99c268ddb0c160266803ca96b3999beab4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "project/final_report/deep_pu.tex", "max_issues_repo_name": "ZaydH/cis572", "max_issues_repo_head_hexsha": "8b57f99c268ddb0c160266803ca96b3999beab4c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project/final_report/deep_pu.tex", "max_forks_repo_name": "ZaydH/cis572", "max_forks_repo_head_hexsha": "8b57f99c268ddb0c160266803ca96b3999beab4c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 107.4027777778, "max_line_length": 791, "alphanum_fraction": 0.7628346049, "num_tokens": 2056, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8376199714402812, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.5573611676558173}}
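The inference rule of Eq.~\eqref{eq:PU:ClassificationFunc}, with ties assigned the positive label as described, reduces to a few lines; a sketch, again assuming mean-squared error as the distance.

\begin{lstlisting}[language=Python]
# Sketch: y_hat = -sign(d(x, x_hat_p) - d(x, x_hat_n)); ties -> positive.
import numpy as np

def predict_label(x, x_hat_p, x_hat_n):
    d_p = float(np.mean((np.asarray(x) - np.asarray(x_hat_p)) ** 2))
    d_n = float(np.mean((np.asarray(x) - np.asarray(x_hat_n)) ** 2))
    return +1 if d_p <= d_n else -1
\end{lstlisting}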
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage[\\graphtype]{mfpic}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\opengraphsfile{pl02-17}\n\\begmath 2.17 Fresnel Integrals\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThe subprograms described in this chapter compute the Fresnel Integrals%\n\\begin{equation*}\n\\hspace{-13pt}C(x)=\\int_0^x\\!\\!\\cos \\left( \\frac \\pi 2t^2\\right) dt\\quad\n\\text{and}\\quad S(x)=\\int_0^x\\!\\!\\sin \\left( \\frac \\pi 2t^2\\right) dt,\n\\end{equation*}\nand the associated functions%\n\\begin{align*}\n\\hspace{-13pt}f(x)&=\\left[ \\frac 12-S(x)\\right] \\cos \\left(\n\\frac \\pi 2x^2\\right) -\\left[\\frac 12-C(x)\\right] \\sin \\left(\n\\frac \\pi 2x^2\\right)\\\\\n\\hspace{-13pt}g(x)&=\\left[ \\frac 12-C(x)\\right] \\cos \\left(\n\\frac \\pi 2x^2\\right) +\\left[\\frac 12-S(x)\\right] \\sin \\left(\n\\frac \\pi 2x^2\\right)\n\\end{align*}\nas defined by Equations 7.3.1,~7.3.2, 7.3.5 and~7.3.6 in \\cite{ams55}.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\nTo compute $C(x)$ use\n\n{\\bf REAL \\ SFRENC, X, Y}\n$$\n\\fbox{{\\bf Y = SFRENC(X)}}\n$$\nTo compute $S(x)$ use\n\n{\\bf REAL \\ SFRENS, X, Y}\n$$\n\\fbox{{\\bf Y = SFRENS(X)}}\n$$\nTo compute $f(x)$ use\n\n{\\bf REAL \\ SFRENF, X, Y}\n$$\n\\fbox{{\\bf Y = SFRENF(X)}}\n$$\nTo compute $g(x)$ use\n\n{\\bf REAL \\ SFRENG, X, Y}\n$$\n\\fbox{{\\bf Y = SFRENG(X)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] The value at which the function is to be evaluated.\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nChange the REAL type statements to double precision, and change the initial\nletter of the subprogram names from S to D. It is important that the\nsubprogram names be explicitly typed.\n\\vspace{10pt}\n\n\\hspace{5pt}\\mbox{\\input pl02-17 }\n\n\\subsection{Examples and Remarks}\n\nSee DRSFRENL and ODSFRENL for an example of the usage of this subprogram.\n\nThere are no restrictions on the range of applicability of these functions.\nThe accuracy of the trigonometric functions decreases, however, for large $%\n|x|$. Thus evaluation of $C(x)$, $S(x)$, $f(-|x|)$ or $g(-|x|)$ for large $%\n|x|$ will be less accurate than evaluation of $f(|x|)$ and $g(|x|)$ for the\nsame value of $x$. When formulating an application, one should when possible\nuse $C(x)$ and $S(x)$ when $|x|\\leq 1.6$ (to achieve maximum efficiency). 
To\nachieve maximum accuracy and efficiency use $f(x)$ and $g(x)$ when $x>1.6$,\nand avoid using $x<-1.6.$\n\n\\subsection{Functional Description}\n\n\\begin{table*}\n\\begin{center}\n\\begin{tabular}{cl*{6}{r}}\n& \\multicolumn{1}{c}{\\bf Argument} & \\multicolumn{1}{c}{\\bf Mean} &\n\\multicolumn{1}{c}{\\bf Max} & \\multicolumn{1}{c}{\\bf Mean} &\n\\multicolumn{1}{c}{\\bf Max} & \\multicolumn{1}{c}{\\bf Mean} &\n\\multicolumn{1}{c}{\\bf Max}\\\\\n{\\bf Function} & \\multicolumn{1}{c}{\\bf Interval} &\n\\multicolumn{1}{c}{\\bf ULP} & \\multicolumn{1}{c}{\\bf ULP} &\n\\multicolumn{1}{c}{\\bf REL} & \\multicolumn{1}{c}{\\bf REL} &\n\\multicolumn{1}{c}{\\bf ABS} & \\multicolumn{1}{c}{\\bf ABS}\\\\\n$C(x)$ & [0..1.2] & 0.57 $\\rho $ & 2.18 $\\rho $ & 0.40 $\\rho $ &\n1.29 $\\rho $ & 0.19 $\\rho $ & 0.83 $\\rho $\\\\\n& (1.2..1.6] & 0.70 $\\rho $ & 2.52 $\\rho $ & 0.48 $\\rho $ &\n1.55 $\\rho $ & 0.19 $\\rho $ & 0.63 $\\rho $\\\\\nS($x)$ & [0..1.2] & 0.74 $\\rho $ & 2.42 $\\rho $ & 0.52 $\\rho $ &\n1.39 $\\rho $ & 0.09 $\\rho $ & 0.65 $\\rho $\\\\\n& (1.2..1.6] & 0.75 $\\rho $ & 2.20 $\\rho $ & 0.55 $\\rho $ &\n1.55 $\\rho $ & 0.38 $\\rho $ & 1.10 $\\rho $\\\\\n$f(x)$ & (1.6..1.9] & 0.51 $\\rho $ & 1.50 $\\rho $ & 0.36 $\\rho $ &\n1.10 $\\rho $ & 0.06 $\\rho $ & 0.19 $\\rho $\\\\\n& (1.9..2.4] & 0.30 $\\rho $ & 0.96 $\\rho $ & 0.26 $\\rho $ &\n0.77 $\\rho $ & 0.04 $\\rho $ & 0.12 $\\rho $\\\\\n& (2.4..6.0] & 0.43 $\\rho $ & 1.15 $\\rho $ & 0.29 $\\rho $ &\n0.80 $\\rho $ & 0.02 $\\rho $ & 0.07 $\\rho $\\\\\n& (6.0..50.0] & 0.45 $\\rho $ & 1.05 $\\rho $ & 0.31 $\\rho $ &\n0.71 $\\rho $ & 0.00 $\\rho $ & 0.03 $\\rho $\\\\\n& (50..1000] & 0.39 $\\rho $ & 1.02 $\\rho $ & 0.27 $\\rho $ &\n0.72 $\\rho $ & 0.00 $\\rho $ & 4E$-$3 $\\rho $\\\\\n$g(x)$ & (1.6..1.9] & 0.53 $\\rho $ & 1.92 $\\rho $ & 0.38 $\\rho $ &\n1.11 $\\rho $ & 0.01 $\\rho $ & 0.02 $\\rho $\\\\\n& (1.9..2.4] & 1.06 $\\rho $ & 3.43 $\\rho $ & 0.75 $\\rho $ &\n1.93 $\\rho $ & 0.01 $\\rho $ & 0.02 $\\rho $\\\\\n& (2.4..6.0] & 1.51 $\\rho $ & 4.04 $\\rho $ & 1.01 $\\rho $ &\n2.61 $\\rho $ & 0.00 $\\rho $ & 0.01 $\\rho $\\\\\n& (6.0..50.0] & 1.40 $\\rho $ & 3.62 $\\rho $ & 0.97 $\\rho $ &\n2.11 $\\rho $ & 0.00 $\\rho $ & 3E$-$4 $\\rho $\\\\\n& (50..1000] & 1.09 $\\rho $ & 2.40 $\\rho $ & 0.75 $\\rho $ &\n1.50 $\\rho $ & 0.00 $\\rho $ & 2E$-$7 $\\rho $\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\nThe computer approximations for these functions use Chebyshev rational\napproximations developed by W. J. Cody, described in \\cite{Cody:1968:CAF}. Cody\nprovides approximations for $C(x)$ and $S(x)$ for $|x| \\leq 1.6$, and for $%\nf(x)$ and $g(x)$ for $x > 1.6$. The approximations for $f(x)$ and $g(x)$ for\n$x > 2.4$ have the same asymptotic form as the functions. We compute $f(x)$\nand $g(x)$ from $S(x)$ and $C(x)$ when $|x| \\leq 1.6$, and vice versa for $x\n> 1.6$. For $x < 0$ we use $C(-x) = -C(x)$, $S(-x) = -S(x)$, $g(-x) = \\cos\n(\\pi /2\\ x^2) + \\sin (\\pi /2\\ x^2) - g(x)$ and $f(-x) = \\cos (\\pi /2\\ x^2) -\n\\sin (\\pi /2\\ x^2) - f(x)$.\n\nThe approximations and programming were checked by comparing the double\nprecision functions to an extended precision computation of $w(z)$, the\nFadeeva function described in Chapter~2.16. Testing consisted of dividing\nseveral regions of the argument range into~200 equal-sized intervals, and\nselecting a point randomly in each interval. To test $f(x)$ and $g(x)$ when $%\n50 < x < 1000$ we divided the range $10^{-3} < 1/x < .02$ into~200 equal\nsubranges. 
In each interval we report the error in units of the last\nposition of the test value in the column headed ULP, the error relative to\nthe true value in the column headed REL, and the absolute error in the\ncolumn headed ABS. The quantity $\\rho $ is the round off level, that is, the\ndifference between~1.0 and the next representable number, which is provided\nby D1MACH(4) (Chapter~19.1). For IEEE arithmetic, $\\rho \\approx 2.22$E$-$16 in\ndouble precision. The results are summarized above.\n\nCody's testing of the approximations, as described in \\cite{Cody:1968:CAF},\nindicates a relative accuracy in the approximations of~15 to~18 digits, so\none should not expect to achieve more accuracy simply by carrying out the\ncalculations using more precision, as, for example, by using double\nprecision on a Cray computer.\n\nThe errors in $f(x)$ and $g(x)$ decrease as $x$ increases in the range $6 <\nx \\leq 1000$. The approximations for $f(x)$ and $g(x)$ have the same\nasymptotic form as the functions when $x > 2.4$, and therefore they become\nmore accurate as $x$ increases. For IEEE format double precision arithmetic,\nthe approximation for $f(x)$ is identical to the asymptotic expansion when $%\nx > 29$, and the approximation for $g(x)$ is identical to the asymptotic\nexpansion when $x > 14.$\n\nWe verified correct programming of $f(x)$ and $g(x)$ for $|x| \\leq 1.6$, and\nfor $C(x)$ and $S(x)$ for $x > 1.6$, by comparing results to values in table\n7.7 in \\cite{ams55}. Extensive accuracy testing would simply have validated\nthe trigonometric function routines.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nThere are no restrictions on the argument range for these functions; they do\nnot announce any errors.\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}c}\n\\bf Entry & \\hspace{.27in} {\\bf Required Files}\\vspace{2pt} \\\\\nDFRENC & \\hspace{.2in} AMACH, DFRENL\\\\\nDFRENF & \\hspace{.2in} AMACH, DFRENL\\\\\nDFRENG & \\hspace{.2in} AMACH, DFRENL\\\\\nDFRENS & \\hspace{.2in} AMACH, DFRENL\\\\\nSFRENC & \\hspace{.2in} AMACH, SFRENL\\\\\nSFRENF & \\hspace{.2in} AMACH, SFRENL\\\\\nSFRENG & \\hspace{.2in} AMACH, SFRENL\\\\\nSFRENS & \\hspace{.2in} AMACH, SFRENL\\\\\n\\end{tabular}\n\nSubprograms designed and developed by W. V. 
Snyder, JPL, 1992.\n\n\n\\begcodenp\n\n\\enlargethispage*{10pt}\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\\centerline{\\bf \\large DRSFRENL}\\vspace{-5pt}\n\\lstinputlisting{\\codeloc{sfrenl}}\n\n\\vspace{0pt}\\centerline{\\bf \\large ODSFRENL}\\vspace{2pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{sfrenl}}\n\n\\closegraphsfile\n\\end{document}\n", "meta": {"hexsha": "0ca365c81b504878b8548d0de88a33682926cea3", "size": 8361, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch02-17.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch02-17.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch02-17.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 38.888372093, "max_line_length": 98, "alphanum_fraction": 0.6440617151, "num_tokens": 3192, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5572482646463821}}
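For readers working outside Fortran, the definitions above are easy to cross-check in Python via \texttt{scipy.special.fresnel}, which returns the pair $(S(x), C(x))$; the auxiliary functions then follow directly from Equations 7.3.5 and~7.3.6. This is a sketch for verification only, not part of the MATH77 library.

\begin{lstlisting}[language=Python]
# Sketch: C(x), S(x) via SciPy, plus the auxiliary f(x) and g(x) from
# f = (1/2 - S)cos(pi x^2/2) - (1/2 - C)sin(pi x^2/2), and similarly g.
import numpy as np
from scipy.special import fresnel

def fresnel_aux(x):
    S, C = fresnel(x)                  # note: SciPy returns (S, C)
    c = np.cos(np.pi * x**2 / 2)
    s = np.sin(np.pi * x**2 / 2)
    f = (0.5 - S) * c - (0.5 - C) * s
    g = (0.5 - C) * c + (0.5 - S) * s
    return C, S, f, g
\end{lstlisting}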
{"text": "\\documentclass{article}\n\\begin{document}\n\\author{Ross Brown}\n\\title{Proposed Implementations/Changes}\n\\maketitle\n\\section{Evaluating}\n\\subsection{Current Issue}\nThe algorithms are running on 3 datasets at the moment for an arbitrary number of runs. They are then adjusted/modified manually to visually improve these results which is of little value.\n\\subsection{Solution}\nUsing a set of datasets (currently about 2000 datasets), $X$, portioned into training, validating, and testing subsets ($X_\\mathrm{train}$, $X_\\mathrm{test}$), more robust results can be produced. The idea for these subsets arises from the equivalent idea seen in machine learning.\n\n$X_\\mathrm{train}$ will be used to fit parameters in the algorithm and the $X_\\mathrm{test}$ will have the fitted algorithm applied and the scoring reported. An $X_\\mathrm{valid}$ is not deemed necessary at the moment.\n\\section{Scoring}\n\\subsection{Current Issues}\nSimply taking the mse of the results does not fit in with the biological aspect: finding highly active compounds. The scoring thus needs to fulfil the following criteria:\n\\begin{itemize}\n    \\item Weighting to higher pXa\n\\end{itemize}\nAnd would preferably meet the following:\n\\begin{itemize}\n    \\item Lightweight - would rather have computational power used on the active learning than the scoring.\n    \\item Independent of dataset size and distribution.\n\\end{itemize}\n\\subsection{Solutions}\n\\begin{itemize}\n    \\item Weighted mse:\n          $$\\sigma{}=\\sum_i{w(y_i-\\bar{y})^2}$$\n          Where $w$ may be:\n          $$w_i=y_i^\\alpha$$\n          It is unknown what $\\alpha{}$ would be (so probably 1 according to Occam's razor).\n    \\item Number of `top' results to contain $a$ of the top $b$ true top values.\n    \\item Number of iterations needed to get either of the above below a predefined value.\n\\end{itemize}\nLikely to choose the first solution as simple to implement with a target of 5 iterations. A validation could be used to determine the number of iterations (i.e. look at the rate of improvement and stop at a certain rate).\n\\end{document}", "meta": {"hexsha": "1b29bea0ad5f715dd98c65ba81f39b3f3bc337e5", "size": 2073, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/newIdeas/idea.tex", "max_stars_repo_name": "rjb255/researchProject", "max_stars_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/newIdeas/idea.tex", "max_issues_repo_name": "rjb255/researchProject", "max_issues_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/newIdeas/idea.tex", "max_forks_repo_name": "rjb255/researchProject", "max_forks_repo_head_hexsha": "7b0c118ee1adaf0c68f83d5b4a043c6aa5a55331", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.2285714286, "max_line_length": 281, "alphanum_fraction": 0.7563917028, "num_tokens": 498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5572482646463821}}
{"text": "\n\n\\chapter{Results}\n\\label{cha:results}\n\n% Move this to Ericssons appendix\n%\\section{Positive feedback on fan and overheating faults}\n%\\noindent\\fbox{\n%\t\\parbox{\\textwidth}\n%\t{\n%\t\tComment on the fan issue found while mining faults\n%\t}\n%}\n\n\n\\section{Comparative imputing experiments}\n\\label{sec:imp-exps}\n\nA set of thorough experiments has been designed to compare both model estimation techniques and empirically determine which one performs better for the current dataset.\n\nThe starting point is a database containing approximately two months of measurements from several \\acp{rbs}. Then, the database is mined to find $n=50$ signals per feature with no missing data. After that, data has been artificially removed at a set ratio using a uniform distribution so that it can be simulated a \\ac{mcar} scenario \\cite{rantou2017missing}. \n\nThe two algorithms run to estimate the process models and use them to run the Kalman smoothing. It is used the coefficient of determination $R^2$ to compare the performance of the models. \n\n\\begin{equation}\\label{eq:R2}\n\tR^2 = 1 - \\frac{\\sum_{i}{(y_i - \\hat{y}_i)^2}}{\\sum_{i}{(y_i - \\bar{y})^2}}\n\\end{equation}\n\nWhere $y_i$ refers to the $i$-th data sample, $\\hat{y}$ its estimated value and $\\bar{y}$ the series mean. \n\nThus, having perfect predictions would cancel out the right-hand term's numerator and produce a score equal to 1. Therefore, the closer the score is to 1, the better quality has been the data imputation.  \n\nThe process of removing data, imputing and computing the coefficient of determination value is performed iteratively while increasing the amount of simulated missing data. The mean of the $R^2$ values from the  selected \\ac{rbs} are reported. \n\nIn Figure \\ref{alg:imputing_exp}, it is shown the experiment's algorithm pseudocode in order to give more context of the meaning of the results plots.\n\n\\begin{figure}[hptb]\n\\begin{lstlisting}[keywords={,let, input, output, return, datatype, function, in, if, else, foreach, while, begin, end, do, }, mathescape=true, tabsize=4, basicstyle=\\ttfamily\\small]\ninput : database, ratio_step, ratio_min, ratio_max\noutput: mean_scores\n\nlet n $\\gets$ rbs batch size\nlet grid $\\gets$ [ratio_min : ratio_step : ratio_max]\nlet mean_scores $\\gets$ []\n\nforeach feature in database.get_features() do:\n\tsites $\\gets$ database.get_random_sites(n)\n\ti $\\gets$ 0\n\tforeach ratio in grid do:\n\t\tlet arima_scores $\\gets$ []\n\t\tlet structural_scores $\\gets$ []\n\t\tj $\\gets$ 0\n\t\tforeach site in sites do:\n\t\t\tdata $\\gets$ database.get_site_data(site, feature)\n\t\t\tsim_data $\\gets$ remove_uniform_random_data(data, ratio)\n\t\t\t\n\t\t\timputed_arima $\\gets$ impute_with_arima(sim_data)\n\t\t\timputed_structural $\\gets$ impute_with_structural_model(sim_data)\n\t\t\t\n\t\t\tarima_scores[j] $\\gets$ $R^2$(data, imputed_arima)\n\t\t\tstructural_scores[j] $\\gets$ $R^2$(data, imputed_structural)\n\t\t\tj $\\gets$ j + 1\n\t\tmean_scores[i] $\\gets$ (ratio, mean(arima_scores), mean(structural_scores))\n\t\ti $\\gets$ i + 1\nreturn mean_scores\n\\end{lstlisting}\n\\caption{Comparative imputing experiment pseudocode.}\n\\label{alg:imputing_exp}\n\\end{figure}\n\n\\pagebreak\n\nFigure \\ref{fig:imp_exp_good} shows the results of the imputation comparison for the radio traffic as example. It can be seen that the $R^2$ value decreases --as expected-- when the missing data ratio increase. 
It can also be seen that, in this example and in most others, structural models perform better than ARIMA models for imputation.\n\n\\begin{figure}[hptb]\n\t\\centering\n\t\\includegraphics[width=0.7\\textwidth]{imp_traffic}\n\t\\caption{Radio traffic load imputation}\n\t\\label{fig:imp_exp_good}\n\\end{figure}\n\n%\\begin{figure}[hptb]\n%\t\\centering\n%\t\\begin{subfigure}{.48\\textwidth}\n%\t\t\\includegraphics[width=\\textwidth]{imp_traffic}\n%\t\t\\caption{Radio traffic load imputation}\n%\t\t\\label{fig:imp_radio_traffic}\n%\t\\end{subfigure}%\n%\t\\hfill\n%\t\\begin{subfigure}{.48\\textwidth}\n%\t\t\\includegraphics[width=\\textwidth]{imp_temp}\n%\t\t\\caption{Cabinet temperature imputation}\n%\t\t\\label{fig:imp_temp}\n%\t\\end{subfigure}\n%\t\\caption{Imputation experiments good results}\n%\t\\label{fig:imp_exp_good}\n%\\end{figure}\n\n\nAlthough promising results were obtained for some of the imputed signals, there were other cases where the experiment did not perform as expected. These cases are discussed in the following subsections.\n\n\\subsection{Unintuitive $R^2$ values}\n\nThere are cases in which Auto-ARIMA estimation shows poor and even negative $R^2$ values, as shown in Figure \\ref{fig:imp_exp_issue}. These are unintuitive results, as $R^2$ values are usually expected to be limited to the $[0,1]$ interval. \n\nNonetheless, this holds only for linear models, where the worst fitted model is assumed to be the observations' mean \\cite{wackerly}. Thus, having negative $R^2$ values implies that the observations' mean explains more variance than the fitted model.\n\n\\begin{figure}[hptb]\n\t\\centering\n\t\\begin{subfigure}{.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_sys_voltage}\n\t\t\\caption{\\ac{pdu} system voltage imputation.}\n\t\t\\label{fig:imp_pdu_sys_voltage}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.48\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{imp_power_load}\n\t\t\\caption{Average \\ac{psu} utilisation imputation.}\n\t\t\\label{fig:imp_psu_load}\n\t\\end{subfigure}\n\t\\caption{Imputation experiments with bad results.}\n\t\\label{fig:imp_exp_issue}\n\\end{figure}\n\n\\subsection{Optim convergence failures}\n\nThe model estimation threw runtime exceptions for some features due to the R code calls to the \\texttt{optim} library not converging. These exceptions were found to be an actual bug in the library that occurs when the function being optimised tends to a constant value, as shown in Figure \\ref{fig:imp_conv_issue}. A dedicated routine was written to catch this exception whenever it was thrown and run a simple interpolation instead of estimating the model.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.55\\linewidth]{imp_conv_issue}\n\t\\caption{Example of problematic signals for Auto-ARIMA estimation.}\n\t\\label{fig:imp_conv_issue}\n\\end{figure}\n\n\\subsection{Conclusion of the imputing experiments}\n\nThe imputations using ARIMA models have proven to be unstable for some signals. Even though the structural model does not excel on those same signals either, it still fares better than the very negative $R^2$ scores produced by the ARIMA approximations. In conclusion, based on this experiment's results, the structural models' approach has been chosen to impute the missing data and construct the database. \n\n\n\\section{Forecasting initial experiments}\n\n\nThe average \\ac{psu} load signal shown in Figure \\ref{fig:forecast_experiment_signal} has been chosen to run the following experiments. 
It is not a particularly easy signal since it contains some severe outliers in the first days that turned the power to almost zero, and in the end, the average power consumption seems to decrease.\n\nIt is necessary to notice that in every forecasting model, the further the prediction horizon is, the more uncertainty and, thus, the lower performance. For this experiment, it has been decided to leave the last 10\\% of data for testing purposes.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.6\\linewidth]{forecast_experiment_signal}\n\t\\caption{Average PSU load signal used for the forecasting experiments}\n\t\\label{fig:forecast_experiment_signal}\n\\end{figure}\n\nIn the following sections, the baselines models will be implemented first. Then, results of the univariate Prophet model will be shown. %To, later, add more exogenous regressors to observe how the predictions are improved. \nThe models will be evaluated by using the scores described in Section \\ref{subsec:eval_criteria}.\n\n\\subsection{Evaluation criteria}\\label{subsec:eval_criteria}\n\\subsubsection*{Coefficient of determination}\n\nAs previously mentioned, the coefficient of determination $R^2$ is a measure of how much variance of $\\hat{y}$ is explained by the variance in $y$ in a linear regression context. It is compared against the considered \\emph{worst possible linear approximation} which corresponds to the samples mean.\n\nAlthough the current application is not linear, $R^2$ is still considered a valuable score to compare predictor performances. Nonetheless, as the linearity assumption is not met, its values would reside in $(-\\infty, 1]$, where 0 still is the mean value, but it is no longer considered the worst possible fit.\n\n%\\begin{equation}\\label{eq:rsq}\n%\tR^2 = 1 - \\frac{\\sum_{i}{(y_i - \\hat{y}_i)^2}}{\\sum_{i}{(y_i - \\bar{y}_i)^2}}\n%\\end{equation}\n\n\n\\subsubsection*{Mean absolute error}\n\nIt measures, in absolute terms, the deviations from the true values. \\ac{mae} is preferred, given its interpretability, over \\ac{rmse} \\cite{MAE}.\n\n\\begin{equation}\\label{eq:mae}\n\t\\text{MAE}\t= \\frac{1}{N} \\sum_{i=1}^{N}{ \\left| y_i - \\hat{y}_i \\right| }\n\\end{equation}\n\n\n\\subsubsection*{Mean out-of-bounds error}\n\nAs a result of the fitting or forecasting process, Prophet's output is comprised by the mean prediction $\\hat{y}_t$ and also its lower and upper boundaries $\\hat{y}_t - \\Delta\\hat{y}_t$ and $\\hat{y}_t + \\Delta\\hat{y}_t$, respectively. Where $\\Delta \\hat{y}_t$ is the computed confidence interval half-magnitude for the time $t$. The width of this interval is defined by the user and defaults to $80\\%$ \n\nLet  the \\ac{obe} be the out-of-confidence-bands error defined as zero if the true value $y_t$ lies inside the confidence interval, and if it lies outside, define the distance to the closest confidence boundary as defined in (\\ref{eq:obe}). Then the \\ac{mobe} of $N$ samples can be obtained by taking the mean of these values to obtain an overall performance score as in (\\ref{eq:mobe}). 
\n\n\\begin{equation}\n\t\\text{OBE}_t = \n\t\\begin{cases}\n\t\t\\hfil 0  & \\text{, if } \\hat{y}_t - \\Delta\\hat{y}_t \\leq y_t \\leq \\hat{y}_t + \\Delta\\hat{y}_t \\\\\n\t\t\\bm{\\min} \\Big\\{ \\big| y_t - (\\hat{y}_t - \\Delta\\hat{y}_t) \\big| , \\big\t| y_t - (\\hat{y}_t + \\Delta\\hat{y}_t) \\big| \\Big\\} & \\text{, otherwise}\\\\\n\t\\end{cases}\\label{eq:obe} \n\\end{equation}\n\t\n\\begin{equation}\n\t\\label{eq:mobe}\n\t\\text{MOBE} = \\frac{1}{N} \\sum_{t=1}^{N}{ \\text{OBE}_t }\n\\end{equation}\n\n\\subsection{Baseline predictions models}\n\nFor the imputation step, the seasonal ARIMA and structural models have been learnt to fill the missing values by applying the Kalman smoothing algorithm. Furthermore, these models can also be used to forecast future observations. They are not expected to overperform. Nonetheless, predicting with them can be used as a benchmark to compare other models' improvements over their baseline.  \n\n\\subsubsection*{Structural model predictions}\n\\label{subsubsec:structts_preds}\n\nFigure \\ref{fig:base_sts_overall} shows the overall performance of the structural time series predictor. It can be seen that its predictive mean tends to follow the test dataset mean, whereas the confidence interval is wider the further the prediction horizon is. More detailed plots are available in Appendix \\ref{app:structts_preds}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{baseline_imp_overall-sts}\n\t\\caption{Structural model 3-days baseline overall performance}\n\t\\label{fig:base_sts_overall}\n\\end{figure}\n\nWhen plotting how the predictions $\\hat{y}$ and the real values $y$ are distributed, the ideal prediction would be denoted by the positive unitary line. Therefore, the results in Figure \\ref{fig:base_sts_joints} can be considered very poor.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.55\\linewidth]{baseline_imp_joints-sts}\n\t\\caption{Structural model 3-days baseline joint distribution}\n\t\\label{fig:base_sts_joints}\n\\end{figure}\n\n\n\\subsubsection*{Auto-ARIMA implementation}\n\nSame as in Section \\ref{subsubsec:structts_preds}, an Auto-ARIMA model is trained and used to make predictions. In this case, it can be seen that, in the short-range, the predictive mean tries, but poorly succeeds to follow the actual data fluctuations. However later, in the long-range, it converges to the data mean. More detailed plots of the results can be found in Appendix \\ref{app:arima_preds}.   \n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{baseline_imp_overall}\n\t\\caption{Auto-ARIMA 3-days baseline overall performance}\n\t\\label{fig:base_arima_overall}\n\\end{figure}\n\nAlthough the joint distribution of predictions and true values has been slightly improved, it still is very poor.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.55\\linewidth]{baseline_imp_joints}\n\t\\caption{Auto-ARIMA 3-days baseline joint distribution}\n\t\\label{fig:base_arima_joints}\n\\end{figure}\n\n\\subsubsection*{Overall baselines scores}\n\nIn Table \\ref{table:base_scores}, the long-range prediction of both baselines is summarised. 
They perform very poorly; nonetheless, as naive baselines, no other outcome is expected.\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|}\n\t\t\\hline\n\t\tScore\t\t& Structural Model\t& Auto-ARIMA \\\\\n\t\t\\hline\n\t\t\\ac{mae} \t&  1.87 \t\t\t& 1.95 \\\\\n\t\t$R^2$ \t\t& -0.02\t\t\t\t& -0.12 \\\\\n\t\t\\ac{mobe} \t&  0\t\t\t\t& 0 \\\\\n\t\t\\ac{rmse}\t& 2.02 \t\t\t\t& 2.12 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Baselines long-range performance}\n\t\\label{table:base_scores}\n\\end{table}\n\n\\subsection{Univariate Prophet model implementation}\n\nFigure \\ref{fig:prophet_uni_split} shows the complete results of training and testing for a univariate Prophet model. In comparison to the baselines, it can now be observed that the predictive mean tends to follow a clearer seasonal pattern. Nonetheless, in the testing set, it appears to miss the trend change.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{prophet_uni_split}\n\t\\caption{Training, test and predictions from the univariate Prophet model}\n\t\\label{fig:prophet_uni_split}\n\\end{figure}\n\nMore detailed plots can be found in Appendix \\ref{app:uniprophet}.\n\n\n\\subsubsection*{Model learnt components}\n\nFigure \\ref{fig:prophet_uni_components} presents the learnt approximation of every component in the \\ac{gam}. Although the trend did not overfit the outliers, it still learnt that there is a breakpoint and changed the piecewise trend.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{prophet_uni_components}\n\t\\caption{Learnt components from the univariate Prophet model}\n\t\\label{fig:prophet_uni_components}\n\\end{figure}\n\n\\subsubsection*{Data and predictions joint distribution}\n\nIf the joint distributions for the true values $y$ and the predicted values $\\hat{y}$ are plotted, it can be seen how the outlier in the training set was not learnt and how the approximation becomes erratic in the test set. Nonetheless, they already show a considerable improvement over the baseline distributions.\n\n\\begin{figure}[hptb]\n\t\\centering\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{prophet_train_res_joint}\n\t\t\\caption{Train}\n\t\t\\label{fig:prophet_train_res_joint}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{prophet_test_res_joint}\n\t\t\\caption{Test}\n\t\t\\label{fig:prophet_test_res_joint}\n\t\\end{subfigure}\n\t\\caption{Univariate Prophet joint distributions of $y$ and $\\hat{y}$}\n\t\\label{fig:prophet_res_joint}\n\\end{figure}\n\n\n\\subsection{Prophet implementation using exogenous regressors}\n\nTo fully exploit the flexibility of \\acp{gam}, Prophet's API allows defining custom regressors, which in the current work will be called exogenous variables in resemblance to SARIMAX models. These variables are the ones explained in Chapter \\ref{cha:data_analysis}.\n\nThe overall performance can be seen in Figure \\ref{fig:prophet_multi_split}. Compared to the univariate model, it can be seen that the confidence bounds are now narrower since there is less uncertainty in the predictions. It can also be seen that the model can now predict the trend change in the test set. 
\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{figures/prophet_multi_split}\n\t\\caption{Training, test and predictions from the multivariate Prophet model}\n\t\\label{fig:prophet_multi_split}\n\\end{figure}\n\n\n\\subsubsection*{Model learnt components}\n\nThe model components now show the additive factor of all the exogenous regressors, which seems to contain the information not learnt by the univariate training residuals. \n\n\n%\n%The $R^2$ score has considerably improved. Nonetheless, the other scores have become worse. The interpretation of this decline could be that the confidence band is now narrower than the univariate model. Thus, being out of the confidence band is now likely than before.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{prophet_multi_components}\n\t\\caption{Learnt components from multivariate Prophet}\n\t\\label{fig:prophet_multi_components}\n\\end{figure}\n\n\\subsubsection*{Data and predictions joint distribution}\n\nThe joint distributions for $y$ and $\\hat{y}$ in Figure \\ref{fig:prophet_test_res_joint_multi} now show that the model has significantly improved its prediction accuracy compared with the baselines. \n\nIn the plot for the training set, it can be seen that the trend model approximates the abrupt drop. Nonetheless, the positive trend in the correlation between $y$ and $\\hat{y}$ in the test set shows that the model is not overfitting the training data. Moreover, the trend drop in the testing set is also approximated without issues. \n\n\\begin{figure}[hptb]\n\t\\centering\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{prophet_train_res_joint_multi}\n\t\t\\caption{Train}\n\t\t\\label{fig:prophet_train_res_joint_multi}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{prophet_test_res_joint_multi}\n\t\t\\caption{Test}\n\t\t\\label{fig:prophet_test_res_joint_multi}\n\t\\end{subfigure}\n\t\\caption{Multivariate Prophet joint distributions of $y$ and $\\hat{y}$}\n\t\\label{fig:prophet_res_joint_multi}\n\\end{figure}\n\n\n\\subsection{Example forecast performance comparison}\n\nIn order to quantify the improvement in long-range predictions made by using the Prophet model, Table \\ref{table:prophet_scores} summarises the performance scores.\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{|l|c|c|c|c|}\n\t\\hline\n\t\t\t\t\t& \\ac{mae}\t&\t$R^2$\t\t& \\ac{mobe}  & \\ac{rmse} \t\\\\\n\t\\hline \t\t\t\t\t\t\t\t\t\t\t\t\n\tUnivar - Train \t&\t2.16\t&\t0.14\t\t& \t0.04\t&\t2.53\t\t\\\\\n\tUnivar - Test \t&\t2.11\t&\t-0.57\t\t&\t0.04\t&\t2.70\t\t\\\\\n\tMultivar - Train&\t0.55\t&\t0.83\t\t& \t0.05\t&\t1.13\t\t\\\\\n\tMultivar - Test &\t0.57\t&\\textbf{0.85}\t&\t0.03\t&\\textbf{0.85}\t\\\\\n\t\\hline\n\\end{tabular}\n\\caption{Example of long-range prediction performance by the Prophet model}\n\\label{table:prophet_scores}\n\\end{table}\n\n\nIn this example, the significant improvement achieved by using the exogenous predictors in the multivariate model can be observed, obtaining an $R^2=0.85$, which for a non-linear case can be considered a good approximation.\n\n\n\\section{Exhaustive forecasting experiments}\n\nAfter visualising how the models comparatively perform in an example case, a general view of how they perform is needed. 
Therefore, similar to the experiments in Section \\ref{sec:imp-exps}, it is performed an exhaustive iterative evaluation of the predictions is performed while moving the predicting horizon further and further. \n\nIn the following subsections, the exhaustive baseline results are presented, and then Prophet model results are obtained to visualise its general improvement.\n\n\\subsection{Baselines predictions moving horizon}\n\n\\begin{figure}[htpb]\n\t\\centering\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{baseline_db-structts_MAE}\n\t\t\\caption{Baseline MAE}\n\t\\end{subfigure}%\n\t\\hfill\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{baseline_db-structts_RMSE}\n\t\t\\caption{Baseline RMSE}\n\t\\end{subfigure}\n\t\\caption{Baseline predictions performance vs forecast horizon time}\n\t\\label{fig:baseline_performances}\n\\end{figure}\n\n\nFigure \\ref{fig:baseline_performances} shows the Auto-ARIMA and structural model performances together. The curves are interesting due to their peak-valley seasonal-alike shapes. Their possible explanation is related to the original signal peaks and valleys variability, i.e., the minima are more show less variability than the maxima. Therefore, the predictions for the minima are less error-prone than for the maxima. \n\nOn the other hand, it can be seen that, opposed to the results for the imputation phase, the structural model provides poorer predictions than the ARIMA. Thus, if any of the baselines would have to be chosen, the Auto-ARIMA would be the best candidate.\n\n\n\\subsection{Univariate Prophet prediction moving horizon}\n\\label{subsec:Results-Experiments-UnivariateProphet}\n\nThis set of experiments have been found to require hefty computing power and time. Since the following overall experiment took $\\sim17$ hours to finish, it has been limited to sampling 50 models performances for a three days long-range horizon maximum.\n\n\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\begin{subfigure}{0.49\\linewidth}\n%\t\t\\includegraphics[width=\\linewidth]{}\n%\t\t\\caption{}\n%\t\\end{subfigure}\n%\t\\hfill\n%\t\\begin{subfigure}{0.49\\linewidth}\n%\t\t\\includegraphics[width=\\linewidth]{}\n%\t\t\\caption{}\n%\t\\end{subfigure}\n%\t\\caption{}\n%\t\\label{fig:}\n%\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_uni_mae_time}\n\t\t\\caption{\\ac{mae}}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_uni_rmse_time}\n\t\t\\caption{\\ac{rmse}}\n\t\\end{subfigure}\n\t\\caption{Univariate Prophet predictions performance vs forecast horizon time}\n\t\\label{fig:uni_prophet_performances}\n\\end{figure}\n\nIn the results in Figure \\ref{fig:uni_prophet_performances}, it can be observed that as long the time horizon increases, as expected, the prediction errors tend to increase also. While \\ac{mae} show that the prediction errors are consistently higher than the training error, the \\ac{rmse} shows lower scores for the testing set than for the training when the time horizon is close to the present time. This shift can be explained by the fact that \\ac{rmse} penalises higher the large errors compared to \\ac{mae}. \n\nIt is important to notice that, as the \\ac{psu} utilisation signal is already in percent, these results are easily interpretable as percent error also. 
Therefore, in the long-range, the univariate Prophet model, shows that in average it can achieve $\\text{MAE}\\sim2.5$ and $\\text{RMSE} \\sim 3.6$ \\ac{psu} per-cent utilisation.\n\n%\\subsubsection*{Mean Absolut Error}\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\includegraphics[width=0.8\\linewidth]{exp_prophet_uni_mae_time}\n%\t\\caption{Univariate moving prediction horizon: \\ac{mae}}\n%\t\\label{fig:exp_prophet_uni_mae}\n%\\end{figure}\n%\n%\\subsubsection*{Root Mean Squared Error}\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\includegraphics[width=0.8\\linewidth]{exp_prophet_uni_rmse_time}\n%\t\\caption{Univariate moving prediction horizon: \\ac{rmse}}\n%\t\\label{fig:exp_prophet_uni_rmse}\n%\\end{figure}\n\n\n\n\\subsection{Multivariate Prophet prediction moving horizon}\n\nNow it is the turn of the Prophet model with external regressors that showed the best performance in the first example. Therefore, it is expected to maintain that tendency but now with more confidence that the performance in the example was not some fortunate random event.\n\nThese experiments have consumed even more computing time than the previous in Section \\ref{subsec:Results-Experiments-UnivariateProphet}, having spent more than 18.5 hours to finish.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_mv_mae_time}\n\t\t\\caption{\\ac{mae}}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_mv_rmse_time}\n\t\t\\caption{\\ac{rmse}}\n\t\\end{subfigure}\n\t\\caption{Multivariate Prophet predictions performance vs forecast horizon time}\n\t\\label{fig:multi_prophet_performances}\n\\end{figure}\n\nThe results in Figure \\ref{fig:multi_prophet_performances} show that the long-range prediction results for the Prophet model using exogenous regressors are consistently good having both, $\\text{MAE}$ and $\\text{RMSE}$ below $1\\%$.\n\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\includegraphics[width=0.8\\linewidth]{exp_prophet_mv_mae_time}\n%\t\\caption{Multivariate moving prediction horizon: \\ac{mae}}\n%\t\\label{fig:exp_prophet_mv_mae}\n%\\end{figure}\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\includegraphics[width=0.8\\linewidth]{exp_prophet_mv_rmse_time}\n%\t\\caption{Multivariate moving prediction horizon: \\ac{rmse}}\n%\t\\label{fig:exp_prophet_mv_rmse}\n%\\end{figure}\n\nOther noticeable results are the ones shown in Figure \\ref{fig:exp_prophet_mv_r2}, in which the $R^2$ scores in long-range are $\\sim0.88$. Although it might seem unintuitive to have better $R^2$ in the long-range rather than in the short term, it can be explained by the amount of data used to compute the score: the less data, the more likely to have \\emph{a general bad approximation}. \n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{exp_prophet_mv_r2_time}\n\t\\caption{Multivariate Prophet model $R^2$ vs forecast horizon time}\n\t\\label{fig:exp_prophet_mv_r2}\n\\end{figure}\n\n\\subsection{Time performances}\n\nFigure \\ref{fig:time-perfs} shows the average consumed time for running the exhaustive experiments in Ericsson's computing farm. It can be seen that the testing time is very similar for univariate and multivariate case. On the other hand, the training time for the multivariate model is slightly higher than the univariate one. Nonetheless, both are under half of a second on average. 
It is also interesting that the longer the prediction range, the times remain almost constant.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_uni_time}\n\t\t\\caption{Univariate experiment computing time}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}{0.49\\linewidth}\n\t\t\\includegraphics[width=\\linewidth]{exp_prophet_mv_time}\n\t\t\\caption{Multivariate experiment computing time}\n\t\\end{subfigure}\n\t\\caption{Experiments computing times}\n\t\\label{fig:time-perfs}\n\\end{figure}\n\n\n\n\\section{Performance summary}\n\nFinally, for ease of comparison this section summarises the average performances per day for all the models under analysis. Table \\ref{table:mae_perf_summary} shows that the Prophet model with exogenous regressors vastly outperforms the other methods in terms of \\ac{mae}.\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\t\t\\textbf{Model} & \\textbf{1 day} & \\textbf{2 day}\t& \\textbf{3 day} \\\\\n\t\t\\hline\n\t\tStructural\t\t & \t\t2.55 \t&\t\t3.17 \t\t& 3.08 \t\t\t\\\\\n\t\t\\hline\n\t\tARIMA \t\t\t &\t\t2.59 \t& \t\t2.812\t\t& 2.70 \t\t\t\\\\\n\t\t\\hline\n\t\tUniVar-Prophet \t & \t\t2.88 \t& \t\t2.96 \t\t& 3.01 \t\t\t\\\\\n\t\t\\hline\n\t\tMultiVar-Prophet & \\textbf{0.41}& \\textbf{0.40}\t\t& \\textbf{0.40} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{\\ac{mae} Summary of models performance by day}\n\t\\label{table:mae_perf_summary}\n\\end{table}\n\nLikewise, in Table \\ref{table:rmse_perf_summary}, it can be observed the long-range stability of the multivariate Prophet model having, again, the best \\ac{rmse} scores.\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\t\t\\textbf{Model} & \\textbf{1 day} & \\textbf{2 day}\t& \\textbf{3 day} \\\\\n\t\t\\hline\n\t\tStructural\t\t & \t\t3.27 \t&\t\t4.31 \t\t& 4.35 \t\t\t\\\\\n\t\t\\hline\n\t\tARIMA \t\t\t &\t\t3.22 \t& \t\t3.77\t\t& 3.81 \t\t\t\\\\\n\t\t\\hline\n\t\tUniVar-Prophet \t & \t\t3.54\t& \t\t3.88\t\t& 4.02 \t\t\t\\\\\n\t\t\\hline\n\t\tMultiVar-Prophet & \\textbf{0.62}& \\textbf{0.71} \t& \\textbf{0.68}\t\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{\\ac{rmse} Summary of models performance by day}\n\t\\label{table:rmse_perf_summary}\n\\end{table}\n\nLastly, Table \\ref{table:r2_perf_summary} presents how the multivariate Prophet model is even capable of having acceptable $R^2$ scores, which in this type of data is not an easy task.\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\t\t\\textbf{Model} & \\textbf{1 day} & \\textbf{2 day}\t& \\textbf{3 day} \\\\\n\t\t\\hline\n\t\tStructural\t\t & \t\t-7.23 \t&\t\t-8.90 \t\t& -2.90 \t\t  \\\\\n\t\t\\hline\n\t\tARIMA \t\t\t &\t\t-95.76 \t& \t\t-2.25\t\t& -0.23 \t\t  \\\\\n\t\t\\hline\n\t\tUniVar-Prophet \t & \t\t-203.76\t& \t\t-1.04 \t\t& -0.98 \t      \\\\\n\t\t\\hline\n\t\tMultiVar-Prophet & \\textbf{0.33}& \t\\textbf{0.85} \t& \\textbf{0.86} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{$R^2$ Summary of models performance by day}\n\t\\label{table:r2_perf_summary}\n\\end{table}\n\n\n\n\\section{Power headroom estimation}\n\nUntil this point of the work, the research has been mainly focused on PSU loads forecasting. Nonetheless, as mentioned in the introductory context, the goal is to predict the power headroom defined in (\\ref{eq:phdroom}). 
However, as there are no direct measurements of power headroom that can be learnt, it needs to be derived from the power consumption.  \n\nTherefore, to provide a power headroom forecast, after estimating the loads future values, a further step is needed to translate that information into power headroom terms. The following sections will propose a way to manage to achieve it and discuss other aspects of its technical applicability.\n\n\\subsection{Power headroom derivation as PSU utilisation complement}\n\nAs it has been exposed in Section \\ref{subsec:data_description:power_supply}, the power load is a measure of \\emph{how many percentual power capacity it is being used at that time}. Thus, it is straightforward to claim that the percentual power headroom is its complement.\n\n\\begin{align}\\label{eq:ph_deriv}\n\t\\begin{split}\n\t\tP_{h[\\%]}\t&= 100 - P_{L[\\%]} \\\\\n\t\t\\text{Where }P_{h[\\%]}\t&: \\text{Power headroom in percents} \\\\\n\t\t\\text{and } P_{L[\\%]}\t&: \\text{Power loads consumptions in percents}\n\t\\end{split}\n\\end{align}\n\nIf the interest is to obtain a measurement in Watts units, the installed power capacity in the RBS, $P_{max}$, needs to be known so that its proportion can be computed as showed in (\\ref{eq:phwatts}).\n\n\\begin{align}\\label{eq:phwatts}\n\t\\begin{split}\n\t\tP_h\t&= P_{max} \\cdot \\frac{P_{h[\\%]}}{100} \\\\\n\t\t\\text{Where } P_{max}\t&: \\text{Installed power capacity in Watts}\n\t\\end{split}\n\\end{align}\n\nFigure \\ref{fig:ph_der}, shows the \\ac{psu} utilisation signal used as example test in Chapter \\ref{cha:forecasting} and its derived power headroom.\n \t\n\n%\\begin{figure}[H]\n%\t\\centering\n%\t\\begin{subfigure}{0.49\\linewidth}\n%\t\t\\includegraphics[width=\\linewidth]{}\n%\t\t\\caption{}\n%\t\\end{subfigure}\n%\t\\hfill\n%\t\\begin{subfigure}{0.49\\linewidth}\n%\t\t\\includegraphics[width=\\linewidth]{}\n%\t\t\\caption{}\n%\t\\end{subfigure}\n%\t\\caption{}\n%\t\\label{fig:}\n%\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{power_headroom_derivation}\n\t\\caption{Power headroom derivation from PSU Load}\n\t\\label{fig:ph_der}\n\\end{figure}\n\n\n%\\begin{figure}[hptb]\n%\t\\centering\n%\t\\begin{subfigure}{.65\\textwidth}\n%\t\t\\centering\n%\t\t\\includegraphics[width=\\textwidth]{power_headroom_derivation}\n%\t\t\\caption{Derived power headroom in percents}\n%\t\t\\label{fig:ph_der_percent}\n%\t\\end{subfigure}%\n%\t\\hfill\n%\t\\begin{subfigure}{.65\\textwidth}\n%\t\t\\centering\n%\t\t\\includegraphics[width=\\textwidth]{power_headroom_derivation_watts}\n%\t\t\\caption{Derived power headroom in Watts}\n%\t\t\\label{fig:ph_der_watts}\n%\t\\end{subfigure}\n%\t\\caption{Power headroom derivation from PSU Load}\n%\t\\label{fig:ph_der}\n%\\end{figure}\n\n\\subsection{Criterion considerations}\n\nIn general terms, an alarm is meant to make someone notice that something abnormal is happening in a given process or, in a forecasting context, may (or will) occur in the future, to support the operator response \\cite{iec_alarms}.\n\nIn the present works' context, the alarm meaning can be conceived as the \\ac{rbs} not having enough power to maintain its by-design behaviour.\n\nAlthough it should be possible to define how likely a power overload is, given the system's current state in probability terms, triggering an alarm is a binary task. The system is operating either in a \\emph{\"safe\"}  or in an \\emph{\"abnormal\"} region. 
Then, crossing a boundary that separates these two regions is the trigger for the alarm. \n\nThere are different techniques to tune an optimal trigger level. It could be done based on a system variable, or latent variable, or even a joint distribution of the two kinds \\cite{izadi2009alarms}. \n\nChoosing the best possible boundary is considered to be out of scope. Nonetheless, as an initial approach, it can be proposed to be settable by the user according to their needs and expertise. \n\nFrom this assumption and the derivations in (\\ref{eq:ph_deriv}) and (\\ref{eq:phwatts}), the following guidelines should be taken into consideration. \n\n\\subsubsection*{Values interpretation}\n\nPercentages values are not always able to fully represent the system conditions. For example, it is very different having 10\\% of power headroom where the total capacity is 10 kW and 1 kW since the relative oscillations could be drastically different. Therefore, the power magnitude should be taken into consideration. \n\n\\subsubsection*{PSUs do not partially fail}\n\nThe power capacity is not continuous. It will have discrete values as shown in Figure \\ref{fig:ph_lvls}, and also, the \\acp{psu} can have different power contributions. This quantisation means that in order to set the threshold is needed to take into account the worst-case scenario, which can be depicted by the sentence: \\emph{\"can the \\ac{rbs} work if the next failure is the most power contributing \\ac{psu}?\"}\n\n%\\begin{figure}[hptb]\n%\t\\centering\n%\t\\begin{subfigure}{.65\\textwidth}\n%\t\t\\centering\n%\t\t\\includegraphics[width=\\textwidth]{power_headroom_levels_percent}\n%\t\t\\caption{Power headroom and discrete availability in percents}\n%\t\t\\label{fig:ph_lvls_percent}\n%\t\\end{subfigure}%\n%\t\\hfill\n%\t\\begin{subfigure}{.65\\textwidth}\n%\t\t\\centering\n%\t\t\\includegraphics[width=\\textwidth]{power_headroom_levels_watts}\n%\t\t\\caption{Power headroom and discrete availability in Watts}\n%\t\t\\label{fig:ph_lvls_watts}\n%\t\\end{subfigure}\n%\t\\caption{Power headroom derivation from PSU Load}\n%\t\\label{fig:ph_lvls}\n%\\end{figure}\n\n\\begin{figure}[hptb]\n\t\\centering\n\t\\includegraphics[width=.65\\textwidth]{power_headroom_levels_percent}\n\t\\caption{Power headroom and discrete availability in percents}\n\t\\label{fig:ph_lvls}\n\\end{figure}\n\n\\pagebreak\n\\subsection{Alarm triggering and the $n$-level safety criteria}\n\nLet $P_{max}$ be the maximum power capacity installed and working in an \\ac{rbs},  $\\bar{P}_{Load}$ the average \\ac{psu} utilisation and $\\Delta p$ the confidence interval used when training the Prophet model. 
Let $N$ be the amount of working \\acp{psu}, let $\\{P_{PSU_1}, \\ldots, P_{PSU_N}\\}$ such that $P_{PSU_1} > \\ldots > P_{PSU_N}$ are the descending sorted power contributions from the working \\acp{psu}.\n\nThen, let the $n$-level safety criteria, be the $n$ amount of worst-case \\ac{psu} failures to keep as operational margin:\n\n\\begin{align}\n\t\\begin{split}\n\t\tP_{max} - \\sum_{i=1}^{n}{P_{PSU_i}} &> \\bar{P}_{Load} + \\Delta p \\\\\n\t\tP_{max} - \\left\\{\\bar{P}_{Load} + \\Delta p \\right\\} &> \\sum_{i=1}^{n}{P_{PSU_i}}\n\t\\end{split}\n\\end{align}\n\nHowever, as the power headroom is by definition the difference between the installed power capacity and the power being consumed, it can be said that: \n\n\\begin{align}\n\t\\begin{split}\n\t\tP_{h} &= P_{max} - \\left\\{\\bar{P}_{Load} + \\Delta p \\right\\} \\\\\n\t\tP_{h} &> \\sum_{i=1}^{n}{P_{PSU_i}} \\\\\n\t\t\\therefore P_{crit_n} &= \\sum_{i=1}^{n}{P_{PSU_i}}\n\t\\end{split}\n\\end{align} \n\nWhere $ P_{crit_n}$ is the $n$-level criteria threshold. Thus, whenever the power headroom forecast $\\hat{P}_h$ at time $t+k$, $t$ being the current time, meets the condition in (\\ref{eq:n-trigger}) a fault alarm must be triggered\n\n\\begin{equation}\n\t\\label{eq:n-trigger}\n\t\\hat{P}_{h_{t+k}} \\geq P_{crit_n}\n\\end{equation}\n\n\n\n\n\n\n\n", "meta": {"hexsha": "f57f3d88a6ae3cfb99cbfdd75501f62493466108", "size": 35624, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/results.tex", "max_stars_repo_name": "agustinvalencia/MasterThesis", "max_stars_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sections/results.tex", "max_issues_repo_name": "agustinvalencia/MasterThesis", "max_issues_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/results.tex", "max_forks_repo_name": "agustinvalencia/MasterThesis", "max_forks_repo_head_hexsha": "25e3b4689264647bd1418f249de7a5272234008a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.5673202614, "max_line_length": 513, "alphanum_fraction": 0.7580844375, "num_tokens": 10165, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920020959544, "lm_q2_score": 0.6959583187272711, "lm_q1q2_score": 0.557248259597073}}
{"text": "% ***********************************************************************************\r\n% Pure LaTeX part to be inserted in a document (be careful of depencies of packages & commands\r\n% Prepared by XXX and YYY under the supervision of Arnaud de La Fortelle\r\n% Fall 2017\r\n% 2D wave propagation subsection of the modeling part\r\n% ***********************************************************************************\r\n\r\n\\subgroup{1}{Bradley Cage and Lin Yang}\r\n\r\n\\paragraph{Description}\r\n\\begin{figure}[htb]\r\n\t\\centering\r\n\t\\includegraphics[width=10cm]{Figures/2D_waves_system.png}       \r\n\t\\caption{The membrane system }\r\n\t\\label{2D_waves_system.fig}\r\n\\end{figure}\r\n\r\nOur system is comprised of a flexible membrane stretched to some shape, with all of its edges fixed in place. The desired goal is to understand the vertical position of the various points on the membrane over time. The membrane in this system has vertical deflections which are small compared to its overall size, and deflections happen only in the vertical direction.\r\n\r\nThis 2D system is a continuation of the 1D wave equation, and is a natural precursor to the 3D wave case. \r\n\r\n\r\n\\paragraph{Model}\r\nAssumptions:\r\n\\begin{itemize}\r\n\t\\item Membrane has uniform planar density $\\rho$\r\n    \\item The tension per unit length, $F_t$, caused by stretching the membrane is the same at all points and in all directions and does not change during the motion\r\n    \\item Vertical position is given by some function $u(x,y,t)$\r\n\\end{itemize}\r\n\\begin{figure}[htb]\r\n\t\\centering\r\n\t\\includegraphics[width=15cm]{Figures/2D_waves_model.png}       \r\n\t\\caption{The force analysis of a small section of the membrane system }\r\n\t\\label{2D_waves_model.fig}\r\n\\end{figure}\r\nWe begin from basic principles.\r\n\r\n$$\\Sigma F = m\\vec{a}$$\r\n\r\n\\noindent Taking some small section of the membrane $\\ud x$ by $\\ud y$, we can replace mass and acceleration and since we know density and that $\\vec{a}$ is the second derivative of position with respect to time, thus enabling us to rewrite the equation.\r\n\r\n\\begin{equation}\r\n\\label{no_balance}\r\n\\Sigma F = \\rho \\ud x\\ud y \\frac{\\partial^2u}{\\partial t^2}\r\n\\end{equation}\r\n\r\n\\noindent Performing a force balance on the section of membrane in the x and y directions gives us tensions at each on each side, then resolved to their vertical components. Remember that since tension is constant per unit length, we must multiply the force acting on each side by the length of that side. 
Thus, the force acting on this balance lets us rewrite $\\Sigma F$ (that is we only consider vertical forces, forces acting in the $x-u$ and $y-u$ planes): \r\n\r\n$$\\Sigma F = F_x + F_y$$\r\n\r\n$$F_x = F_t\\ud y\\Big[\\sin\\big( \\theta (x+\\ud x,y,t) \\big) - \\sin \\big((\\theta (x,y,t)\\big)\\Big]$$\r\n$$F_y = F_t\\ud x\\Big[\\sin\\big( \\beta (x,y+\\ud y,t) \\big) - \\sin \\big((\\beta (x,y,t)\\big)\\Big]$$\r\n\r\n\\noindent We can confidently use the small angle approximation $\\sin$ in the x direction\r\n\r\n$$ \\sin(\\theta) \\approx \\tan(\\theta) = \\frac{\\partial u}{\\partial x} = u_x$$\r\n\r\n\\noindent and likewise in the y direction \r\n\r\n$$ \\sin(\\beta) \\approx \\tan(\\beta) = \\frac{\\partial u}{\\partial y} = u_y$$\r\n\r\n\\noindent to get our equations into the form\r\n\r\n$$F_x = F_t\\ud y\\Big[u_x(x+\\ud x,y,t) - u_x(x,y,t)\\Big]$$\r\n$$F_y = F_t\\ud x\\Big[u_y(x,y+\\ud y,t) - u_y(x,y,t)\\Big]$$\r\n\r\n\\noindent From there we can sum these forces and plug them back in to equation \\ref{no_balance}\r\n\r\n$$\\rho \\ud x\\ud y \\frac{\\partial^2u}{\\partial t^2} = F_t\\bigg[dy\\Big[u_x(x+\\ud x,y,t) - u_x(x,y,t)\\Big]+dx\\Big[u_y(x,y+\\ud y,t) - u_y(x,y,t)\\Big] \\bigg]$$\r\n\r\n\\noindent We then divide by $\\ud x$ and $\\ud y$ and take the limit as $\\ud x,\\ud y \\to 0$:\r\n\r\n$$\\rho\\frac{\\partial^2u}{\\partial t^2} = \\lim_{\\ud x,\\ud y\\to 0} F_t \\bigg[ \\frac{u_x(x+\\ud x,y,t) - u_x(x,y,t)}{\\ud x} + \\frac{u_y(x,y+\\ud y,t) - u_y(x,y,t)}{\\ud y} \\bigg]$$\r\n\r\n\\noindent We recognize that we now have derivatives in the form of difference quotients, and can take the partial derivative of each one (since $u$ is a function of multiple variables)\r\n\r\n\\begin{equation}\r\n\\rho \\frac{\\partial^2u}{\\partial t^2} = F_t\\bigg[\\frac{\\partial}{\\partial x}u_x + \\frac{\\partial}{\\partial y}u_y \\bigg] = F_t\\bigg[\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2}\\bigg]\r\n\\end{equation}\r\n\r\n\\noindent Dividing over the uniform  tension, we reach our final form.\r\n\r\n\\begin{equation}\r\n\\frac{\\rho}{F_t}\\frac{\\partial^2u}{\\partial t^2} = \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2}\r\n\\end{equation}\r\n\r\n\\noindent We can adhere to standard conventions and write our final 2D wave equation as \r\n\r\n\r\n\\begin{align}\r\na^2 \\frac{\\partial^2u}{\\partial t^2} &= \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} & a &= \\sqrt{\\frac{\\rho}{F_t}}\r\n\\label{final_eq}\r\n\\end{align}\r\n\r\n\\noindent Equation \\ref{final_eq} is also commonly written using the Laplace operator:\r\n\r\n\\begin{equation}\r\na^2 \\frac{\\partial^2u}{\\partial t^2} = \\nabla^2 u\r\n\\end{equation}\r\n\r\n\\paragraph{Initial and boundary conditions}\r\nThe initial state (i.e. at $t=0$) of the system is given by two functions:\r\n\\begin{itemize}\r\n\t\\item the vertical position all over the membrane $u(x,y,0)=u_0(x,y)$;.\r\n  \\item the membrane speed $\\frac{\\partial u(x,y,0)}{\\partial t}=u_1(x,y)$. 
When --- as usually assumed --- the membrane is at rest, then $u_1=0$.\r\n\\end{itemize}\r\n\\begin{figure}[htb]\r\n\t\\centering\r\n\t\\includegraphics[width=10cm]{Figures/2D_waves_boundary_conditions.png}       \r\n\t\\caption{The membrane system's boundaries}\r\n\t\\label{2D_waves_boundary_conditions.fig}\r\n\\end{figure}\r\n\\noindent The membrane's area is ab (with the edge length being a,b respectively).\r\n\r\nFor membrane the boundary conditions can be of the two types (or a combination of): \r\n\\begin{itemize}\r\n\t\\item The vertical positions of the 4 edges of the membrane remain fixed all the time, and usually at 0, i.e., $u(x,0,t)=0$, $u(0,y,t)=0$, $u(a,y,t)=0$, $u(x,b,t)=0$.\r\n\t\\item Another type of boundary condition is the reflecting or no-flux boundary condition: this implies a derivative condition along the normal at the boundary and it is more technical to write (though not less meaningful in terms of physics).\r\n\\end{itemize}\r\n\r\nSome hints for the control and the cost function of the 2D wave equation:\r\n\\begin{itemize}\r\n\t\\item Control: Maintain the highest vertical deflection of the membrane to be some constant height H   \r\n  \\item The cost function then could be finding the smallest force exerted on the membrane to maintain the highest height H in the membrane system  \r\n\\end{itemize}", "meta": {"hexsha": "22db821ae0c450e20e72379b040d4fd81edfbb65", "size": 6530, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "modeling-2Dwaves.tex", "max_stars_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_stars_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-01-08T02:54:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-29T06:19:28.000Z", "max_issues_repo_path": "modeling-2Dwaves.tex", "max_issues_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_issues_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "modeling-2Dwaves.tex", "max_forks_repo_name": "QinganZhao/Course-Support-for-CE-291F-Control-and-Optimization-of-Distributed-Parameters-Systems", "max_forks_repo_head_hexsha": "3bbe532eab793efa6c3a3a4569d155dd39c0102c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-16T17:29:03.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-16T17:29:03.000Z", "avg_line_length": 52.24, "max_line_length": 462, "alphanum_fraction": 0.6868300153, "num_tokens": 1898, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.66192288918838, "lm_q2_score": 0.8418256452674008, "lm_q1q2_score": 0.5572236633082702}}
{"text": "%\n% Copyright 2018 Parakram Majumdar\n%\n% Licensed under the Apache License, Version 2.0 (the \"License\");\n% you may not use this file except in compliance with the License.\n% You may obtain a copy of the License at\n%\n%     http://www.apache.org/licenses/LICENSE-2.0\n%\n% Unless required by applicable law or agreed to in writing, software\n% distributed under the License is distributed on an \"AS IS\" BASIS,\n% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n% See the License for the specific language governing permissions and\n% limitations under the License.\n%\n\n\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[legalpaper, portrait, margin=1in]{geometry}\n\\usepackage{tikz}\n\\usetikzlibrary{shapes.geometric, arrows,positioning}\n\n\\title{\n                         Paragraph                    \\\\\n  \\large          A Backpropagation engine            \\\\ \n                          in C++\n}\n\\author{Parakram Majumdar}\n\n\\begin{document}\n  \\maketitle\n  \n\\tikzstyle{neuron} = [rectangle,\n                      text centered,\n                      draw=black,\n                      minimum width=1.5cm,\n                      minimum height=.8cm,\n                      fill=blue!20]\n\\tikzstyle{arrow} = [thick,->,>=stealth]\n\n\\section{Introduction}\n  The \\emph{training} of Artificial Neural Networks\n  involves calibrating the \\emph{weights} of each neuron\n  so that the \\emph{output} generated by the neural network \n  matches the expected output of a \\emph{training set}. \n  Most training algorithms employ some form of \\emph{gradient descent},\n  wherein each weight is adjusted in the opposite direction of\n  the partial derivative of the error with respect to that weight.\n  \n  \\[ \\Delta w = -c \\times \\frac{\\partial o}{\\partial w} \\]\n  \n  The \\emph{backpropagation} algorithm employs \\emph{algorithmic differentiation}\n  to compute the gradient, and is especially common for training\n  \\emph{deep} neural networks.\n  \n  This document describes the anatomy of \\emph{Paragraph},\n  a backpropagation engine written in C++.\n  \n\\section{Basic Concepts}\n  A neural network consists of interconnected neurons, where each neuron,\n  based on the input signals received from input connections,\n  sends an output signal to output connections.\n  \n  \\begin{center}\n  \\begin{tikzpicture}[node distance=2cm]\n    \\node (n) [neuron] {Neuron};\n    \\draw [arrow] ++(-2, -0.5) -- (n.west);\n    \\draw [arrow] ++(-2, 0) -- (n.west);\n    \\draw [arrow] ++(-2, 0.5) -- (n.west);\n    \\draw [arrow] (n.east) -- ++(1.25, -0.5);\n    \\draw [arrow] (n.east) -- ++(1.25, 0);\n    \\draw [arrow] (n.east) -- ++(1.25, 0.5);\n  \\end{tikzpicture}\n  \\end{center}\n  \n  Thus, the network can be considered a directed computational \\texttt{Graph},\n  where each \\texttt{Node} represents a neuron.\n  Each node is one of two types :\n  \\begin{itemize}\n    \\item \\texttt{Operation}: \n      Like neurons, operations react to their inputs to produce an output.\n    \\item \\texttt{Variable}:\n      Variables act as ``input nodes'', and do not have any inputs of their own.\n      They allow the graph to be manipulated by external systems.\n  \\end{itemize}\n  \n  The nodes communicate to each other via \\texttt{Tensor}s, \n  or multi-dimensional arrays of real numbers (\\texttt{double}s).\n  The tensors also generalize quite well to represent gradients of multi-dimensional\n  functions. 
\n  These gradients can then be used to calibrate the variables in a graph.\n  \n\\section{Tensors}\n  \\texttt{Tensor}s are multi-dimensional arrays of real numbers.\n  An $n$-dimensional tensor has fixed dimensionalities \n  $\\left\\{d_1, d_2, ..., d_n\\right\\}$, which drive the coordinates\n  of each number stored in the array.\n  \n  Common operations on tensors are described in the TensorFunctions document.\n  A notable operation is the \\emph{tensor multiplication},\n  a generalization of matrix multiplication,\n  which is key in determining the gradient with respect to a node,\n  given the gradients of its outputs.\n  For example, suppose we have a node $g$ \n  which takes another node $f$ as input, \n  then\n  \\[ \\nabla (g \\circ f) = \\nabla g \\cdot_n \\nabla f \\]\n  Where $\\cdot_n$ is the tensor multiplication operator on $n$ dimensions,\n  $n$ being the dimesionality of $f$.\n  \n  Many useful computations on tensors can work in an iterative fashion,\n  traversing the elements of the tensor in a row-major ordering.\n  This means that storing the elements in contiguous memory in row-major\n  ordering allows for noticable speed-ups from the co-location of data.\n  With most modern operating systems providing virtual memory mappings,\n  simply \\texttt{alloc}ating large contiguous chunks of memory \n  is a viable option.\n  \n  \n  Furthermore, it is also critical that tensors allow for \n  sparse representation of the data.\n  A typical real life deep learning network will consist of\n  many sparse but otherwise large tensors,\n  rendering the computations intractable unless the sparsity is exploited.\n\n\\section{Node}\n  A Node represents a tensor that is computed as part of a Graph.\n  As mentioned earlier, nodes can be of two types: Variables and Operations.\n  \n  Variables, do not have any input nodes, and therefore,\n  for the purposes of a single computation traversal, are contant tensors.\n  However the graph allows for changing the values of these Variables\n  across multiple computation traversals.\n  Thus, effectively, if the graph can be considered a single function\n  that outputs a tensor, its Variable nodes represent its inputs.\n  \n  Operations represent functions that go from multiple input tensors\n  to a single output tensor.\n  The backpropagation algorithm requires each operation to have \n  a well defined first derivative with respect to its inputs.\n  \n  Paragraph represents a node as a thin \\texttt{struct}\n  with an \\texttt{enum} representing the node type,\n  and an \\texttt{int} index uniquely identifying the node in a graph.\n  \n  Furthermore, the indices of the nodes happen to be in a\n  topologically sorted order of the graph,\n  as the Graph Builder assigns indices to the outputs of a node\n  only after the node itself has been assigned.\n  This fact is useful in determining the order in which\n  nodes must be traversed in order to complete a traversal in the graph.\n  However, in general, a graph may choose to ignore this ordering\n  during computation, for example to reduce peak memory usage.\n  The Graph does, however, use the node indices for storing and retrieving\n  properties of each node, such as a node's name, its inputs and outputs,\n  and for operation nodes, the tensor functor.\n  \n\\section{Graph}\n  \\subsection{Forward propagation}\n    Given the input values of Variable nodes, the Graph can compute\n    the values for a set of output nodes.\n    The input values for Variables which are not dependencies of the \n    
target output nodes need not be provided.\n  \n  \\subsection{Backward propagation}\n    The Graph also allows for the computation of the gradients\n    of a given set of output nodes with respect to another given set\n    of variable nodes.\n    This is done in two traversals.\n    \n    First, a forward propagation step computes and stores\n    the values of all intermediate nodes.\n    \n    Then, using a generalized chain rule for multi-argument tensor functions,\n    a backward propagation step computes the derivative of the outputs\n    with respect to each node.\n    \n  \\subsection{Order of computation}\n    It is somewhat easy to ensure that only nodes which are necessary\n    are traversed, and are traversed only once (per propagation).\n    This ensures that the computation time is minimized.\n    \n    Furthermore, it is also easy to ensure that the computed values \n    of each node is dropped out of storage as soon as they become unnecessary.\n    This allows us to reduce the peak memory consumption,\n    which is key for making large graphs and/or large inputs tractable.\n    However, the \\emph{order} of computation of nodes, \n    which may have further impact on key memory utilization, \n    is currently not minimized. This requires future work.\n  \n\\section{Graph Builder}\n  While on one hand, it quite useful to have immutable graphs\n  in a distributed processing environment, immutable graphs are quite \n  cumbersome to set up.\n  Thus, it is useful to have a separate construct, a Graph Builder,\n  which is mutable and therefore convenient for building large graphs,\n  and which can generate an immutable Graph on demand.\n  \n  The Graph Builder object allows the user to add nodes to the graph,\n  along with their names, types, and in the case of operation nodes,\n  their inputs and corresponding tensor functions.\n  It automatically assigns indices to the nodes.\n  \n\n\\end{document}\n", "meta": {"hexsha": "2441bf01dd81d3d6785284f5fa7dac596daf4eff", "size": 8728, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/tex/BackPropagation.tex", "max_stars_repo_name": "appu226/ParaGraph", "max_stars_repo_head_hexsha": "82604a286b50d414515f22521882115980f2ee80", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-12-17T08:52:06.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-17T08:52:06.000Z", "max_issues_repo_path": "doc/tex/BackPropagation.tex", "max_issues_repo_name": "appu226/ParaGraph", "max_issues_repo_head_hexsha": "82604a286b50d414515f22521882115980f2ee80", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2018-03-31T17:26:01.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-04T15:17:48.000Z", "max_forks_repo_path": "doc/tex/BackPropagation.tex", "max_forks_repo_name": "appu226/ParaGraph", "max_forks_repo_head_hexsha": "82604a286b50d414515f22521882115980f2ee80", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.1642512077, "max_line_length": 84, "alphanum_fraction": 0.7278872594, "num_tokens": 2053, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256472515683, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5572236533930408}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\title{Sieve of Eratosthenes}\n\\date{}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n% For algorithms\n\\usepackage[linesnumbered,ruled,lined,boxed,commentsnumbered]{algorithm2e}\n\n\\usepackage{titling}\n\n\\setlength{\\droptitle}{-10em}\n\n\\begin{document}\n\\maketitle\n\n\\begin{algorithm}[H]\n\t\\KwData{A postive integer $n \\geq 2$}\n\t\\KwResult{An array $M[2 \\text{ .. } n]$ such that $M[i] = 0$ if $i$ is\n\tprime and 1 if $i$ is composite}\n\t\\BlankLine\n\t\\Begin {\n\t\tLet $M[2 \\text{ .. } n]$ be initialized with all elements = 0\\;\n\t\t\\BlankLine\n\t\t\\For{$i \\leftarrow 2$ to $n$} {\n\t\t\t\\If{$M[i] \\neq 1$} {\n\t\t\t\t\\For{$j \\leftarrow 2i$ and $j \\leq n$} {\n\t\t\t\t\t$M[j] \\leftarrow 1$\\;\n\t\t\t\t\t$j \\leftarrow j+i$\\;\n\t\t\t\t}\n\t\t\t}\n\t\t\t$i \\leftarrow i+1$\\;\n\t\t}\n\t\treturn $M$\\;\n\t}\n\\caption{Sieve of Eratosthenes}\n\\end{algorithm}\n\n\\section{Proof of Correctness (By Induction)}\n\t\\subsection{Inductive Hypothesis}\n\tLet $P(i) :=$ At the beginning of the outer for loop, all prime\tnumbers\n\t$< i$ have been found (set to 0 in M) and the multiples of all these\n\tprimes have been marked (set to 1) in M.       \n\n\t\\subsection{Base Case}\n\tInitially, $i = 2$. As there are no prime numbers less than 2 and our\n\tindexing for M begins from 2, P(2) is trivially true.\n\n\t\\subsection{Inductive Step}\n\tAssume the $P(k)$ to be true for some $k \\geq 2$. When $i = k+1$, we have\n\ttwo cases. If $k$ is composite, it can expressed as the product of\n\tprimes i.e $k = p_1^{\\alpha_{1}} p_2^{\\alpha_{2}} \\text{ ... } p_n^{\\alpha_{n}}$\n\twhere $p_{j}$ is a prime number $< k$. According to our assumption, this would mean\n\tthat $k$ has already been marked (set to 1 in M). The inner loop will not execute.\n\t\\BlankLine\n\tIf $k$ is indeed prime, then none of the previous marking procedures would\n\thave affected $M[k]$. As $M[k]$ was initially set 0, it will continue to remain 0.\n\tIn addition to this, the inner loop will execute successfully and mark all multiples\n\tof $k$.\t\n\t\\BlankLine\n\tIn both the cases, we have now found all prime numbers less than $k+1$ and marked\n\ttheir multiples i.e $P(k+1)$ holds true.\n\n\t\\subsection{Termination}\n\tThe procedure terminates when $i > n$. As $i$ is always incremented by 1, $i$ will always\n\tbe equal to $n+1$ when the procedure terminates. $\\therefore P(n+1)$ is true i.e all prime\n\tnumbers $ < n+1$ have been found. 
$\\blacksquare$\n\\end{document}\n", "meta": {"hexsha": "b35e02dc2826ab9646140e2905b27424aa3c91ec", "size": 2401, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "primality/Eratosthenes/Eratosthenes.tex", "max_stars_repo_name": "abcdjdj/proof_correctness", "max_stars_repo_head_hexsha": "d0bdf505ea1dc9c8f00cac89be753e8b43cfdce3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "primality/Eratosthenes/Eratosthenes.tex", "max_issues_repo_name": "abcdjdj/proof_correctness", "max_issues_repo_head_hexsha": "d0bdf505ea1dc9c8f00cac89be753e8b43cfdce3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "primality/Eratosthenes/Eratosthenes.tex", "max_forks_repo_name": "abcdjdj/proof_correctness", "max_forks_repo_head_hexsha": "d0bdf505ea1dc9c8f00cac89be753e8b43cfdce3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.8169014085, "max_line_length": 91, "alphanum_fraction": 0.6817992503, "num_tokens": 820, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228758499942, "lm_q2_score": 0.8418256432832333, "lm_q1q2_score": 0.5572236507663091}}
{"text": "\\documentclass[]{article}\n\\usepackage{xcolor}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\n\n%opening\n\\title{MTH 343 Numerical Analysis Lecture 6: Secant Method}\n\\author{Sheikh Abdul Raheem Ali}\n\n\\begin{document}\n\t\\newcommand{\\cfbox}[2]{%\n\t\t\\colorlet{currentcolor}{.}%\n\t\t{\\color{#1}%\n\t\t\t\\fbox{\\color{currentcolor}#2}}%\n\t}\n\n\\maketitle\n\nSuppose \\cfbox{red}{f(x) = 0}\n\n\\section*{Technique:}\n\n\\begin{enumerate}\n\t\\item Begin with two values $ x_0 $ \\& $ x_1 $ near the root $ r $. Draw a line joining the two points. The point $ x_2 $ where this line hits the x-axis is the first approximation of the exact root $ r $. \n\t\\item Continue this repeatedly, always choosing the last computed values for drawing this straight line. \n\\end{enumerate}\n\n\\section*{Derivation:}\n\n\\begin{align*}\n\t\\frac{y-y_1}{x-x_1} = m \\text{ or } y - y_1 = m(x - x_1)s\\\\\n\t\\frac{y - f(P_1)}{x-P_1} &= \\frac{f(P_1) - f(P_0)}{P_1 - P_0}\\\\\n\t y &= f(P_1) + \\frac{f(P_1) - f(P_0)}{P_1 - P_0}(x - P_1)\\\\\n\t\\text{To find x-intercept, set y=0},\\\\\n\t0 &= f(P_1) + \\frac{f(P_1)-f(P_0)}{P_1 - P_0}(x-P_1)\\\\\n\t0 &= f(P_1)(P_1-P_0) + (f(P_1) - f(P_0))(x - P_1)\\\\\n\t(f(P_1)-f(P_0))(x-P_1) &= - f(P_1)(P_1 - P_0)\\\\\n\tx &= P_1 - \\frac{f(P_1)(P_1 - P_0)}{f(P_1) - f(P_0)}\\\\\n\tP_2 &= P_1 - \\frac{f(P_1)(P_1 - P_0)}{f(P_1) - f(P_0)}\n\\end{align*}\n\n\\section*{Example:}\n\n\\[ f(x) = x^2 - 2 \\]\n\n%A dirty hack to get the table centered\n\\begin{enumerate}\n\t\\item[]\n\t\\centering\n\t\\begin{tabular}{c c c}\n\t\t$ n $ & $ P_{n} $ & $ f(P_n) $ \\\\\n\t\t2&1.6&0.56\\\\\n\t\t3&1.47826&0.18525\\\\\n\t\t4&1.418079&0.01094\\\\\n\t\t5&1.414299&0.000241 \n\t\\end{tabular}\n\\end{enumerate}\n\n\\section*{Disadvantages of the Secant Method}\n\n\\begin{enumerate}\n\t\\item If the fraction is far from linear near the root, the iterates can fly off to points far from the root. This can be fixed by plotting $ f(x) $ and changing the starting points.\n\t\n\t\\item If an iterate duplicates a previous one this results in an endless loop that never reaches the value of the true solution.\n\t\n\t\\item Secant method doesn't have the bracketing property of the bisection method. Therefore, the method doesn't always converge. 
But if it does, it is generally faster than the bisection method.\n\\end{enumerate}\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width= 0.8\\linewidth]{secant.png}\n\t\\caption{Visual demonstration of the secant method}\n\\end{figure}\n\n\\end{document}\n", "meta": {"hexsha": "f688b0afe2841dcd1fba179a6d19ad2d02439826", "size": 2346, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Numerical/Lecture_6/Lecture6.tex", "max_stars_repo_name": "sheikheddy/aus-files", "max_stars_repo_head_hexsha": "0c38d15d560ccbb8231c8ef210916ea94a0f004b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Numerical/Lecture_6/Lecture6.tex", "max_issues_repo_name": "sheikheddy/aus-files", "max_issues_repo_head_hexsha": "0c38d15d560ccbb8231c8ef210916ea94a0f004b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Numerical/Lecture_6/Lecture6.tex", "max_forks_repo_name": "sheikheddy/aus-files", "max_forks_repo_head_hexsha": "0c38d15d560ccbb8231c8ef210916ea94a0f004b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0769230769, "max_line_length": 207, "alphanum_fraction": 0.6636828645, "num_tokens": 899, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228758499942, "lm_q2_score": 0.8418256432832333, "lm_q1q2_score": 0.5572236507663091}}
{"text": "% To be compiled by XeLaTeX, preferably under TeX Live.\n% LaTeX source for ``Yanqi Lake Lectures on Algebra'' Part III.\n% Copyright 2019  \u674e\u6587\u5a01 (Wen-Wei Li).\n% Permission is granted to copy, distribute and/or modify this\n% document under the terms of the Creative Commons\n% Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)\n% https://creativecommons.org/licenses/by-nc/4.0/\n\n% To be included\n\\chapter{Serre's criterion for normality and depth}\n\nReferences: \\cite[\\S 11]{Eis95} and \\cite[{X.1}]{Bour98}. Except in the last section, we will try to avoid the use of depth as in \\cite{Eis95}.\n\n\\section{Review of discrete valuation rings}\nLet $R$ be an integral domain.\n\n\\begin{definition}\\index{discrete valuation ring}\n\tA \\emph{discrete valuation} on a field $K$ is a surjective map $v: K^\\times \\to \\Z$ such that\n\t\\begin{itemize}\n\t\t\\item $v(xy) = v(x) + v(y)$, and\n\t\t\\item $v(x+y) \\geq \\min\\{v(x), v(y) \\}$\n\t\\end{itemize}\n\tfor all $x,y \\in K^\\times$, where we set $v(0) = +\\infty$ for convenience. We say a domain $R$ with $K := \\mathrm{Frac}(R)$ is a \\emph{discrete valuation ring}, abbreviated as DVR, if there exists a discrete valuation $v$ such that\n\t\\[ R = v^{-1}([0, +\\infty]). \\]\n\tWe say $t \\in R$ is a \\emph{uniformizer} if $v(t)=1$.\\index{uniformizer}\n\\end{definition}\nIt follows immediately that $R^\\times = v^{-1}(0)$. Uniformizers are unique up to $R^\\times$. Note that $v(K^\\times) = \\Z$ implies that $R$ cannot be a field.\n\n\\begin{example}\n\tThe ring $\\Z_p$ of $p$-adic integers ($p$: prime number) together with the usual $p$-adic valuations are standard examples of DVR. The algebra of formal power series $\\Bbbk\\llbracket X\\rrbracket$ are also DVR: the valuation of $\\sum_n a_n X^n$ is the smallest $n$ such that $a_n \\neq 0$.\n\t\n\tMore generally, in the geometric context, discrete valuations can be defined by looking at the vanishing order of rational/meromorphic functions along subvarieties of codimension one with suitable regularities.\n\\end{example}\n\n\\begin{lemma}\\label{prop:dvr-regular}\n\tLet $R$ be a discrete valuation ring with valuation $v$ and uniformizer $t$. Every ideal $\\mathfrak{a} \\neq \\{0\\}$ of $R$ has the form $(t^r)$ for a unique $r \\geq 0$. In particular, $R$ is a local principal ideal domain which is not a field, hence is of dimension $1$.\n\\end{lemma}\n\\begin{proof}\n\tTake $r := \\min\\{v(x) : x \\in \\mathfrak{a} \\}$.\n\\end{proof}\n\nIn the exercises below, we assume $R$ is a discrete valuation ring with valuation $v$.\n\\begin{exercise}\n\tShow that $t$ is a uniformizer if and only if it generates the maximal ideal of $R$.\n\\end{exercise}\n\\begin{exercise}\n\tReconstruct $v$ from the ring-theoretic structure of $R$.\n\\end{exercise}\n\nRecall that a regular local ring $R$ with $\\dim R = 1$ is a Noetherian local ring whose maximal ideal $\\mathfrak{m}$ is principal and nonzero; elements generating $\\mathfrak{m}$ are called the regular parameters for $R$.\n\\begin{proposition}\\label{prop:reg-local-dvr}\n\tSuppose $t$ is a regular parameter in a regular local ring $R$ of dimension one, then $R$ is a domain, and every element $x \\in R \\smallsetminus \\{0\\}$ can be uniquely written as $x = t^r u$ with $r \\geq 0$ and $u \\in R^\\times$. This makes $R$ into a discrete valuation ring by setting $v(x) = r$, for which $t$ is a uniformizer.\n\t\n\tTherefore $R$ is a discrete valuation ring. 
Conversely, every discrete valuation ring is regular local of dimension $1$.\n\\end{proposition}\n\\begin{proof}\n\tFrom Theorem \\ref{prop:regular-local-domain} we know regular local rings are Noetherian domains. By applying Krull's Intersection Theorem (Corollary \\ref{prop:Krull-intersection-domain}) to the powers of $(t)$, we see that $r := \\sup\\{k \\geq 0: x \\in (t)^k \\}$ is finite. Write $x = t^r u$. Since $R^\\times = R \\smallsetminus \\mathfrak{m}$, we see $u \\in R^\\times$. As to uniqueness, suppose $t^r u = t^s w$ with $r \\geq s$, then $t^{r-s} = u^{-1}w \\in R^\\times$ implies $r=s$, hence $u=w$ as $R$ is a domain. As every element of $\\text{Frac}(R)^\\times$ is uniquely expressed as $t^r u$ with $r \\in \\Z$, one readily checks that $v(t^r u) = r$ satisfies all the requirements of discrete valuation.\n\t\n\tThe converse direction has been addressed in Lemma \\ref{prop:dvr-regular}.\n\\end{proof}\n\nTo recap, in dimension one we have\n\\[ \\text{regular local ring} \\iff \\text{discrete valuation ring}. \\]\nThis will be related to normality later on.\n\n\\begin{exercise}\n\tExplain that the regular local rings of dimension $0$ are just fields.\n\\end{exercise}\n\n\\section{Auxiliary results on the total fraction ring}\nLet $R$ be a ring, henceforth assumed Noetherian. If there exist a non zero-divisor $t \\in R$ and $\\mathfrak{p} \\in \\text{Ass}(R/(t))$, we say $\\mathfrak{p}$ is \\emph{associated to a non zero-divisor}.\n\n\\begin{lemma}\\label{prop:zero-test-ass}\n\tLet $M$ be a finitely generated $R$-module. An element $x \\in M$ is zero if and only if its image in $M_{\\mathfrak{p}}$ is zero for every maximal element $\\mathfrak{p}$ in $\\mathrm{Ass}(M)$.\n\\end{lemma}\n\\begin{proof}\n\tSuppose $x \\neq 0$. Since $M$ is Noetherian, among ideals of the form $\\text{ann}(y)$ there is a maximal one containing $\\text{ann}(x)$, and we have seen in Lemma \\ref{prop:Ass-maximal} that such an ideal $\\mathfrak{p}$ belongs to $\\text{Ass}(M)$. Since $\\text{ann}(x) \\subset \\mathfrak{p}$, we have $x/1 \\in M_{\\mathfrak{p}} \\smallsetminus \\{0\\}$.\n\\end{proof}\n\nCall a ring \\emph{reduced}\\index{reduced ring} if it has no nilpotent element except zero.\n\\begin{lemma}\n\tSuppose $R$ is reduced, then $\\mathrm{Ass}(R)$ consists of minimal primes.\n\\end{lemma}\n\\begin{proof}\n\tAs $R$ is reduced, $\\{0\\} = \\sqrt{0_R}$ is the intersection of minimal prime ideals $\\mathfrak{p}_1, \\mathfrak{p}_2, \\ldots$ (all lying in $\\text{Ass}(R)$, hence finite in number). By the theory of primary decompositions, one infers that $\\text{Ass}(R) = \\{\\mathfrak{p}_1, \\ldots \\}$.\n\\end{proof}\n\nFor the next result, we denote by $T$ the set of non zero-divisors of $R$. Recall that the \\emph{total fraction ring} $K(R)$ is $R[T^{-1}]$; this is the largest localization such that $R \\to K(R)$ is injective, and $K(R) = \\text{Frac}(R)$ when $R$ is a domain. The map $\\mathfrak{p} \\mapsto \\mathfrak{p}K(R)$ sets up an order-preserving bijection $\\text{Ass}(R) \\rightiso \\text{Ass}(K(R))$: indeed, if $\\mathfrak{p} \\ni t$ for some $t \\in T$, then $\\mathfrak{p}$ cannot belong to $\\text{Ass}(R)$ because the union of $\\text{Ass}(R)$ equals $R \\smallsetminus T$.\n\n\\begin{lemma}\\label{prop:K-vs-localization}\n\tLet $R$ be reduced. Then $K(R) \\rightiso \\prod_{\\mathfrak{p}} K(R/\\mathfrak{p})$ as $R$-algebras, where $\\mathfrak{p}$ ranges over the minimal prime ideals of $R$. 
For any multiplicative subset $S \\subset R$ there is a canonical isomorphism of $R[S^{-1}]$-algebras\n\t\\begin{align*}\n\tK(R[S^{-1}]) & \\rightiso K(R)[S^{-1}].\n\t\\end{align*}\n\\end{lemma}\nIn other words, the formation of total fraction ring commutes with localizations.\n\\begin{proof}\n\tEach element in $K(R) = R[T^{-1}]$ is either a zero-divisor or invertible. The set of zero-divisors of $K(R)$ is the union of minimal prime ideals $\\mathfrak{p}_i K(R)$ of $K(R)$ (where $\\text{Ass}(R) = \\{ \\mathfrak{p}_1, \\ldots, \\mathfrak{p}_m\\}$ by an earlier discussion), therefore each prime ideal of $K(R)$ must equal some $\\mathfrak{p}_i K(R)$, by prime avoidance (Proposition \\ref{prop:prime-avoidance}). Hence $\\mathfrak{p}_1 K(R), \\ldots, \\mathfrak{p}_m K(R)$ are also the maximal ideals in $K(R)$, with zero intersection. Chinese Remainder Theorem entails that $K(R) \\simeq \\prod_{i=1}^m K(R)/\\mathfrak{p}_i K(R)$. To conclude the first part, notice that $K(R)/\\mathfrak{p}_i K(R) = (R/\\mathfrak{p}_i)[T^{-1}]$; this is a field in generated by an isomorphic copy of $R/\\mathfrak{p}_i$ since $\\mathfrak{p}_i \\cap T = \\emptyset$, hence equals $\\text{Frac}(R/\\mathfrak{p}_i)$.\n\t\n\tAs for the second part, one decomposes $K(R[S^{-1}])$ and $K(R)[S^{-1}]$ by the previous step, noting that\n\t\\begin{compactitem}\n\t\t\\item $R[S^{-1}]$ is reduced;\n\t\t\\item $\\text{Ass}(R[S^{-1}]) = \\left\\{ \\mathfrak{p}_i R[S^{-1}]: 1 \\leq i \\leq m, \\; \\mathfrak{p}_i \\cap S = \\emptyset \\right\\}$ consists of minimal primes;\n\t\t\\item $K\\left( R[S^{-1}] \\big/ \\mathfrak{p}_i R[S^{-1}]\\right) \\simeq K(R/\\mathfrak{p}_i) = K(R/\\mathfrak{p}_i)[S^{-1}]$ when $\\mathfrak{p}_i \\cap S = \\emptyset$, by the arguments above;\n\t\t\\item $K(R/\\mathfrak{p}_i)[S^{-1}] = \\{0\\}$ when $\\mathfrak{p}_i \\cap S \\neq \\emptyset$.\n\t\\end{compactitem}\n\tA term-by-term comparison finishes the proof.\n\\end{proof}\n\nWe use Lemma \\ref{prop:K-vs-localization} to interchange $K(\\cdot)$ and localizations in what follows.\n\\begin{lemma}\\label{prop:intersection-ass}\n\tSuppose $R$ is reduced. Then $x \\in K(R)$ belongs to $R$ if and only if its image in $K(R)_{\\mathfrak{p}} = K(R_{\\mathfrak{p}})$ belongs to $R_{\\mathfrak{p}}$ for every prime $\\mathfrak{p}$ associated to a non zero-divisor.\n\\end{lemma}\n\\begin{proof}\n\tOnly the ``if'' direction requires a proof. Write $x = a/t$ with $t$ not a zero-divisor. Suppose that $a \\notin (t)$, i.e. $a$ does not map to zero in $R/(t)$. Lemma \\ref{prop:zero-test-ass} asserts there exists $\\mathfrak{p} \\in \\text{Ass}(R/(t))$ such that $a$ does not map to $0 \\in (R/(t))_{\\mathfrak{p}} = R_{\\mathfrak{p}}/t R_{\\mathfrak{p}}$. It follows that the image of $a/t$ in $K(R_{\\mathfrak{p}})$ does not lie in $R_{\\mathfrak{p}}$.\n\\end{proof}\n\n\\section{On normality}\nFix a Noetherian ring $R$.\n\n\\begin{exercise}\n\tSuppose $R$ is a domain, $K := \\text{Frac}(R)$. If $R = \\bigcap_{i \\in I} R_i$ where $\\{ R_i \\subset K\\}_{i \\in I}$ are subrings such that $\\text{Frac}(R_i) = K$ and $R_i$ is normal, for each $i$, then $R$ is normal.\n\\end{exercise}\n\n\\begin{proposition}\\label{prop:normality-principal}\n\tLet $R$ be a Noetherian domain. Then $R$ is normal if and only if for every principal ideal $(t) \\subset R$ and every $\\mathfrak{p} \\in \\mathrm{Ass}(R/(t))$, the ideal $\\mathfrak{p}R_{\\mathfrak{p}}$ is principal.\n\\end{proposition}\n\\begin{proof}\n\tAssume the conditions above. 
To prove the normality of $R$, it suffices to use $R = \\bigcap R_{\\mathfrak{p}}$ where $\\mathfrak{p}$ ranges over the primes associated to nonzero principal ideals (consequence of Lemma \\ref{prop:intersection-ass}). Indeed, each $R_{\\mathfrak{p}}$ is regular, hence normal by Proposition \\ref{prop:reg-local-dvr}, therefore so is their intersection by the previous exercise.\n\t\n\tConversely, assume $R$ is normal and let $\\mathfrak{p} \\in \\text{Ass}(R/(t))$ with $t \\neq 0$, we have to show $\\mathfrak{p}R_{\\mathfrak{p}}$ is principal. Upon replacing $R$ by $R_{\\mathfrak{p}}$ and recalling how associated primes behave under localization, we may even assume $R$ is local with maximal ideal $\\mathfrak{p}$. Express $\\mathfrak{p}$ as the annihilator of some $\\bar{x} \\in R/(t)$ with $x \\in R$. Define the fractional ideal\n\t\\[ \\mathfrak{p}^{-1} := \\{ y \\in \\text{Frac}(R) : y\\mathfrak{p} \\subset R \\}. \\]\n\tIt is an $R$-submodule of $\\text{Frac}(R)$ containing $R$. Define the $R$-submodule $\\mathfrak{p}^{-1}\\mathfrak{p}$ of $\\text{Frac}(R)$ in the obvious way. Clearly $\\mathfrak{p} \\subset \\mathfrak{p}^{-1}\\mathfrak{p} \\subset R$. By maximality of $\\mathfrak{p}$, exactly one of the $\\subset$ is equality. If $\\mathfrak{p}\\mathfrak{p}^{-1}=\\mathfrak{p}$, every element of $\\mathfrak{p}^{-1}$ is integral over $R$, hence $\\mathfrak{p}^{-1} \\subset R$ by normality (integrality is ``witnessed'' by the module $\\mathfrak{p}$). From $\\mathfrak{p}x \\subset (t)$ we see $x/t \\in \\mathfrak{p}^{-1} = R$; this would imply $\\bar{x} = 0$ and $\\mathfrak{p}=R$, which is absurd.\n\t\n\tTherefore we must have $\\mathfrak{p}\\mathfrak{p}^{-1} = R$. This implies that $\\mathfrak{p}y \\not\\subset \\mathfrak{p}$ for some $y \\in \\mathfrak{p}^{-1}$, hence $\\mathfrak{p}y = R$ since $R$ is local. Hence $\\mathfrak{p} = y^{-1}R \\simeq R$ is principal.\n\\end{proof}\n\n\\begin{corollary}\\label{prop:normality-dvr}\n\tThe following are equivalent for a local domain $R$:\n\t\\begin{enumerate}[(i)]\n\t\t\\item $R$ is normal of dimension $1$;\n\t\t\\item $R$ is a regular local ring of dimension $1$;\n\t\t\\item $R$ is a discrete valuation ring.\n\t\\end{enumerate}\n\\end{corollary}\n\\begin{proof}\n\t(iii) $\\implies$ (i). We have seen that discrete valuation rings are principal ideal rings of dimension $1$, therefore also normal by unique factorization property.\n\t\n\t(i) $\\implies$ (ii). Under the normality assumption, choose any $t \\in R \\smallsetminus \\{0\\}$. Since $\\dim R = 1$ and $\\{0\\}$ is a prime ideal, the associated prime of $(t)$ can only be the maximal ideal $\\mathfrak{m}$, which is principal by Proposition \\ref{prop:normality-principal}. This shows that $R$ is regular local.\n\t\n\t(ii) $\\implies$ (iii) is included in Proposition \\ref{prop:reg-local-dvr}.\n\\end{proof}\n\n\\begin{corollary}\n\tLet $R$ be a Noetherian normal domain. Then\n\t\\[ R = \\bigcap_{\\mathrm{ht}(\\mathfrak{p}) = 1} R_{\\mathfrak{p}} \\]\n\tinside $\\mathrm{Frac}(R)$.\n\\end{corollary}\n\\begin{proof}\n\tEvidently $\\subset$ holds. By Lemma \\ref{prop:intersection-ass} together with Proposition \\ref{prop:normality-principal}, $R$ can be written as an intersection of $R_{\\mathfrak{p}}$ where $\\mathfrak{p}$ is associated to some non zero-divisor, such that $\\mathfrak{p}R_{\\mathfrak{p}}$ is principal; it suffices to show $\\text{ht}(\\mathfrak{p})=1$. 
From $\\mathfrak{p} \\neq \\{0\\}$ we see $\\text{ht}(\\mathfrak{p}) \\geq 1$; on the other hand, by Hauptidealsatz or by the discussion on regular local rings, we see $\\text{ht}(\\mathfrak{p}) = \\text{ht}(\\mathfrak{p}R_{\\mathfrak{p}}) \\leq 1$.\n\\end{proof}\n\n\\section{Serre's criterion}\n\n\\begin{lemma}\\label{prop:reduced-test}\n\tLet $R$ be a Noetherian ring. Suppose that\n\t\\begin{compactitem}\n\t\t\\item the primes in $\\mathrm{Ass}(R)$ are all minimal, and\n\t\t\\item $R_{\\mathfrak{p}}$ is a field for every minimal prime ideal $\\mathfrak{p}$,\n\t\\end{compactitem}\n\tthen $R$ is reduced.\n\\end{lemma}\n\\begin{proof}\n\tTake a minimal primary decomposition $\\{0\\} = I_1 \\cap \\cdots \\cap I_m$ with $\\text{Ass}(R/I_j) = \\{\\mathfrak{p}_j = \\sqrt{I_j} \\}$ and $\\text{Ass}(R) = \\{\\mathfrak{p}_1, \\ldots, \\mathfrak{p}_m \\}$. By the properties of primary ideals, $\\mathfrak{p}_j = \\sqrt{I_j} \\supset I_j$ for all $j$. By assumption each $\\mathfrak{p}_j$ is minimal, and $R_{\\mathfrak{p}_j}$ is a field. From the uniqueness of non-embedded components in primary decompositions, $I_j = \\Ker\\left[R \\to R_{\\mathfrak{p}_j}\\right]$ is a prime contained in $\\mathfrak{p}_j$, hence $I_j = \\mathfrak{p}_j$. We deduce that $\\{0\\} = \\bigcap_{j=1}^m \\mathfrak{p}_j$, thereby showing $\\sqrt{0_R} = \\{0\\}$.\n\\end{proof}\n\n\\begin{theorem}[J.-P. Serre]\\label{prop:Serre-criterion}\\index{Serre's criterion}\n\tA Noetherian ring $R$ is a finite direct product of normal domains if and only if the following two conditions hold.\n\t\\begin{description}\n\t\t\\item[R1] The localization of $R$ at every prime ideal of height $1$ (resp. $0$) is a discrete valuation ring (resp. a field).\n\t\t\\item[S2] For every non zero-divisor $t$ of $R$, the primes in $\\mathrm{Ass}(R/(t))$ are all of height $1$; the primes in $\\mathrm{Ass}(R)$ are all of height $0$.\n\t\\end{description}\n\\end{theorem}\nThe condition \\textbf{R1} means regularity in codimension $\\leq 1$. The condition \\textbf{S2} is often rephrased in terms of \\emph{depth}, which will be discussed in Proposition \\ref{prop:S2}.\n\n\\begin{proof}\n\tWe begin with the $\\implies$ direction. Suppose $R = R_1 \\times \\cdots \\times R_n$ where each $R_i$ is a normal domain. As is well-known, $\\Spec(R) = \\bigsqcup_{i=1}^n \\Spec(R_i)$ as topological spaces: to be precise, the elements of $\\Spec(R)$ take the form $\\mathfrak{p} = R_1 \\times \\cdots \\times \\mathfrak{p}_i \\times \\cdots \\times R_n$, where $\\mathfrak{p}_i \\in \\Spec(R_i)$. We have\n\t\\[ \\text{ht}(\\mathfrak{p}) = \\text{ht}(\\mathfrak{p}_i), \\quad R_{\\mathfrak{p}} \\simeq (R_i)_{\\mathfrak{p}_i}. \\]\n\tFurthermore, one easily checks that\n\t\\[ \\text{Ass}(R/(t)) = \\bigsqcup_{i=1}^n \\text{Ass}(R_i/(t_i)), \\quad t = (t_1, \\ldots, t_n) \\in R \\]\n\tcompatibly with the description above.\n\t\n\tThis reduces the verification of \\textbf{S2} to the case of normal domains, which is addressed in Proposition \\ref{prop:normality-principal}. The condition \\textbf{R1} is implied by Corollary \\ref{prop:normality-dvr} since normality is preserved under localizations.\n\t\n\tAssume conversely \\textbf{R1} and \\textbf{S2}. They imply the conditions of Lemma \\ref{prop:reduced-test}, hence $R$ is reduced. Now for every prime $\\mathfrak{p}$ associated to a non zero-divisor, we have $\\text{ht}(\\mathfrak{p})=1$ and $R_{\\mathfrak{p}}$ is a normal domain by \\textbf{R1} $\\wedge$ \\textbf{S2}. 
By Lemma \\ref{prop:intersection-ass} (as $R$ is reduced), $R$ is integrally closed in $K(R)$: indeed, if $x \\in K(R)$ is integral over $R$, so is its image in $K(R_{\\mathfrak{p}}) = K(R)_{\\mathfrak{p}}$ for every $\\mathfrak{p}$ as above, therefore lies in $R_{\\mathfrak{p}}$. Decompose $K(R) = \\prod_{i=1}^m K(R/\\mathfrak{p}_i)$ as in Lemma \\ref{prop:K-vs-localization}. The idempotents $e_i \\in K(R)$ associated to this decomposition are trivially integral over $R$: $e_i^2 - e_i = 0$, hence $e_i \\in R$ for all $i$. It follows that $R = Re_1 + \\cdots + Re_m \\prod_{i=1}^m Re_i \\subset K(R)$ and one easily checks that $Re_i = R/\\mathfrak{p}_i$.\n\t\n\tFinally, since $R$ is integrally closed in $K(R)$, the decomposition above implies $R/\\mathfrak{p}_i$ is integrally closed in $K(R/\\mathfrak{p}_i)$. All in all, we have written $R$ as a direct product of normal domains.\n\\end{proof}\n\n\\begin{exercise}\n\tRecall that for an $R$-module $M$, a prime ideal $\\mathfrak{p} \\in \\text{Ass}(M)$ is called \\emph{embedded} if $\\mathfrak{p}$ is not a minimal element in $\\text{Ass}(M)$. Show that for $M=R$, embedded primes are primes in $\\text{Ass}(R)$ with height $> 0$. For $M=R/(t)$ where $t$ is not a zero-divisor, embedded primes are primes in $\\text{Ass}(R/(t))$ with height $> 1$. Use this to rephrase \\textbf{S2} as follows: there are no embedded primes in $\\text{Ass}(R/(t))$ ($t$ not a zero-divisor) or in $\\text{Ass}(R)$.\n\\end{exercise}\n\n\\begin{exercise}\n\tSuppose a ring $R$ is isomorphic to a direct product $\\prod_{i \\in I} R_i$. Show that $R$ is a domain if and only if $|I|=1$ (say $I=\\{i_0\\}$), and $R_{i_0}$ is a domain.\n\\end{exercise}\n\n\\begin{corollary}\n\tA Noetherian domain $R$ is normal if and only if \\textbf{R1} and \\textbf{S2} hold for $R$.\n\\end{corollary}\n\\begin{proof}\n\tImmediate from the previous exercise and Theorem \\ref{prop:Serre-criterion}.\n\\end{proof}\n\n\\section{Introduction to depth}\nBased on \\cite{Bour98}, we give a brief account on the notion of depth. Let $R$ be a ring and $M$ be an $R$-module, $M \\neq \\{0\\}$. Recall the $\\Ext$-functors\n\\[ \\Ext^n_R(X,Y) := H^n(R\\mathcal{H}\\text{om}(X,Y)) = \\text{Hom}_{D^+(R\\dcate{Mod})}(X, Y[n]). \\]\n\n\\begin{definition}[Depth of a module]\\index{depth}\n\tLet $I$ be a proper ideal of $R$. We define the \\emph{depth} of $M$ relative to $I$ as\n\t\\[ \\text{depth}_I(M) := \\inf \\left\\{n \\geq 0: \\Ext^n_R(R/I, M) \\neq 0 \\right\\} \\]\n\twith values in $\\Z_{\\geq 0} \\sqcup \\{+\\infty\\}$.\n\\end{definition}\n\n\\begin{proposition}\\label{prop:depth-zero}\n\tFor $I$, $M$ as above, the following are equivalent:\n\t\\begin{enumerate}[(i)]\n\t\t\\item $\\mathrm{depth}_I(M)=0$;\n\t\t\\item for all $x \\in I$, the homomorphism $M \\xrightarrow{x} M$ is not injective;\n\t\t\\item $\\mathrm{Ass}(M) \\cap V(I) \\neq \\emptyset$.\n\t\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n\tIn each case we have $M \\neq \\{0\\}$. If (i) holds, then $M \\xrightarrow{x} M$ vanishes on the image of some nonzero $R/I \\to M$, hence (ii). If (ii) holds, the union of $\\text{Ass}(M)$ will cover $I$, and (iii) follows by prime avoidance. 
Finally, suppose $\\mathfrak{p} \\in \\text{Ass}(M) \\cap V(I)$, there is an embedding $R/\\mathfrak{p} \\hookrightarrow M$, which yields a non-zero $R/I \\to M$.\n\\end{proof}\n\n\\begin{definition}\\index{regular sequence}\n\tA sequence $x_1, \\ldots, x_n \\in R$ is called an $M$-\\emph{regular sequence} of length $n$ if $(x_1, \\ldots, x_n)M \\subsetneq M$ and\n\t\\[ 0 \\to M/(x_1, \\ldots, x_{k-1}) \\xrightarrow{x_k} M/(x_1, \\ldots, x_{k-1}) \\]\n\tis exact for all $1 \\leq k \\leq n$.\n\\end{definition}\n\n\\begin{lemma}\n\tLet $M$ be an $R$-module, $x_1, \\ldots, x_r$ be an $M$-regular sequence lying in an ideal $I \\subsetneq R$. We have $\\mathrm{depth}_I(M) = r + \\mathrm{depth}_I(M/(x_1, \\ldots, x_r)M)$.\n\\end{lemma}\n\\begin{proof}\n\tThe case $r=1$ follows by staring at the long exact sequence attached to $0 \\to M \\xrightarrow{x_1} M \\to M/x_1 M \\to 0$. The general case follows by induction on $r$.\n\\end{proof}\n\n\\begin{theorem}\\label{prop:regular-vs-depth}\n\tAssume $R$ Noetherian, $M$ finitely generated and $I \\subsetneq R$.\n\t\\begin{enumerate}[(i)]\n\t\t\\item $\\mathrm{depth}_I(M)$ is the supremum of the lengths of $M$-regular sequences with elements in $I$.\n\t\t\\item Suppose $\\mathrm{depth}_I(M) < +\\infty$. Every $M$-regular sequence with elements in $I$ can be extended to one of length $\\mathrm{depth}_I(M)$.\n\t\t\\item The depth of $M$ relative to $I$ is finite if and only if $V(I) \\cap \\Supp(M) \\neq \\emptyset$, or equivalently $IM \\neq M$.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\tTo prove (i) and (ii), by the previous Lemma we are reduced to show that $\\text{depth}_I(M) > 0$ implies the existence of $x \\in I$ which is not a zero-divisor of $M$; this follows from Proposition \\ref{prop:depth-zero}.\n\t\n\tNow pass to the word ``equivalently'' in (iii). We have $M \\neq IM$ if and only if $(M/IM)_{\\mathfrak{p}} = M_{\\mathfrak{p}}/ I_{\\mathfrak{p}} M_{\\mathfrak{p}} \\neq 0$ for some prime ideal $\\mathfrak{p}$. That quotient always vanishes when $\\mathfrak{p} \\not\\supset I$, in which case $I_{\\mathfrak{p}} = R_{\\mathfrak{p}}$. On the other hand, when $\\mathfrak{p} \\in V(I)$ we have $I_{\\mathfrak{p}} \\subset \\text{rad}(R_{\\mathfrak{p}})$, thus the non-vanishing is equivalent to $M_{\\mathfrak{p}} \\neq \\{0\\}$ by Nakayama's Lemma \\ref{prop:NAK}.\n\t\n\tThe ``if'' direction of (iii) is based on the following fact\n\t\\[ IM \\neq M \\implies \\text{depth}_I(M) < +\\infty \\]\n\twhich will be proved in the next lecture (Theorem \\ref{prop:Koszul-depth}) using \\emph{Koszul complexes}. As for the ``only if'' direction, $V(I) \\cap \\Supp(M) = \\emptyset$ implies $I + \\text{ann}(M) = R$, but the elements in $I + \\text{ann}(M)$ annihilate each $\\Ext^n(R/I, M)$, hence $\\text{depth}_I(M)=+\\infty$.\n\\end{proof}\n\n\\begin{figure}[h]\n\t\\centering \\vspace{1em} \\includegraphics[height=220pt]{Jean-Louis_Koszul.jpg} \\\\ \\vspace{1em}\n\t\\begin{minipage}{0.7\\textwidth}\n\t\t\\small Jean-Louis Koszul (1921---2018) created the Koszul complexes in order to define a cohomology theory for Lie algebras; this device turns out to be a general, convenient construction in homological algebra, which will be discussed in the next lecture. The study of ``Koszulness'' in a broader (eg. operadic) context is now an active area of research. J.-L. Koszul is also a second-generation member of the Bourbaki group. 
Source: by Konrad Jacobs - \\href{http://owpdb.mfo.de/detail?photo_id=2273}{Oberwolfach Photo Collection}, ID 2273.\n\t\\end{minipage}\n\\end{figure}\n\n\\begin{corollary}\n\tWith the same assumptions, let $(x_1, \\ldots, x_r)$ be an $M$-regular sequence with $x_i \\in I$. It is of length $\\mathrm{depth}_I(M)$ if and only if $\\mathrm{Ass}(M/(x_1, \\ldots, x_r)M) \\cap V(I) \\neq \\emptyset$.\n\\end{corollary}\n\\begin{proof}\n\tThe sequence has length $\\text{depth}_I(M)$ if and only if $R$-module $M/(x_1, \\ldots, x_r)M$ has depth zero, so it remains to apply Proposition \\ref{prop:depth-zero}.\n\\end{proof}\n\n\\begin{corollary}\n\tLet $R$ be a Noetherian local ring with maximal ideal $\\mathfrak{m}$, and $M \\neq \\{0\\}$ a finitely generated $R$-module, then $\\mathrm{depth}_{\\mathfrak{m}}(M) \\leq \\dim M$.\n\\end{corollary}\n\\begin{proof}\n\tConsider the following situation: $x \\in \\mathfrak{m}$ is not a zero-divisor for $M \\neq \\{0\\}$. In the discussion of dimensions, we have seen that $d(M) \\geq d(M/xM) \\geq d(M/xM)-1$, where $d(\\cdot)$ is the degree of Hilbert--Samuel polynomial; on the other hand, since the alternating sum of Hilbert--Samuel polynomials in $0 \\to M \\xrightarrow{x} M \\to M/xM \\to 0$ has degree $< d(M)$, we infer that $d(M/xM) = d(M) - 1$. Hence $\\dim(M/xM) = \\dim(M) - 1$. By relating depth to $M$-regular sequences, we deduce $\\text{depth}_{\\mathfrak{m}}(M) \\leq \\dim M$.\n\\end{proof}\n\n\\begin{definition}[Cohen--Macaulay modules]\\index{Cohen--Macaulay module}\n\tLet $R$ be a Noetherian ring. A finitely generated $R$-module $M$ is called \\emph{Cohen--Macaulay} if\n\t\\[ \\mathrm{depth}_{\\mathfrak{m}A_{\\mathfrak{m}} }(M_{\\mathfrak{m}}) = \\dim M_{\\mathfrak{m}} \\]\n\tfor every $\\mathfrak{m} \\in \\MaxSpec(R) \\cap \\Supp(M)$. We say $R$ is a Cohen--Macaulay ring if it is Cohen--Macaulay as a module.\n\\end{definition}\n\n\\begin{example}\n\tRegular local rings are Cohen--Macaulay, although we do not prove this here. Another important class of Cohen--Macaulay rings is the algebra of invariants $A^G$ where $A$ is the algebra of regular functions on an affine $\\Bbbk$-variety $\\mathcal{X}$ with \\emph{rational singularities} (eg. $A = \\Bbbk[X_1, \\ldots, X_n]$) with an action by a reductive $\\Bbbk$-group $G$ (finite groups allowed), and we assume $\\text{char}(\\Bbbk)=0$. Here is the reason: Boutot \\cite{Bou87} proved the GIT quotient $\\mathcal{X}/\\!/G$ has rational singularities as well, hence is Cohen--Macaulay; in characteristic zero this strengthens an earlier theorem of Hochster--Roberts. These algebras are interesting objects from the geometric, algebraic or even combinatorial perspectives.\n\\end{example}\n\n\\begin{exercise}\n\tShow by using Proposition \\ref{prop:depth-zero} that $\\text{depth}_{\\mathfrak{m}}(R) = 0$ if $R$ is local with maximal ideal $\\mathfrak{m}$ and has dimension zero.\n\\end{exercise}\n\n\\begin{proposition}\\label{prop:S2}\n\tThe condition \\textbf{S2} in Theorem \\ref{prop:Serre-criterion} is equivalent to\n\t\\[ \\mathrm{ht}(\\mathfrak{p}) \\geq i \\implies \\mathrm{depth}_{\\mathfrak{p}R_{\\mathfrak{p}}}(R_{\\mathfrak{p}}) \\geq i \\]\n\tfor all $\\mathfrak{p} \\in \\Spec(R)$ and $i \\in \\{1,2\\}$.\n\\end{proposition}\n\\begin{proof}\n\tAssume the displayed conditions. We first show that every $\\mathfrak{p} \\in \\text{Ass}(R)$ has height zero. 
Upon localization we may assume $R$ local with maximal ideal $\\mathfrak{p}$, thus $\\mathfrak{p}$ has depth zero by Proposition \\ref{prop:depth-zero}; our conditions force $\\text{ht}(\\mathfrak{p})=0$. Next, consider $\\mathfrak{p} \\in \\text{Ass}(R/(t))$ with $t$ non zero-divisor. Note that $t$ remains a non zero-divisor in $R_{\\mathfrak{p}}$, and $\\mathfrak{p}R_{\\mathfrak{p}} \\in \\text{Ass}(R_{\\mathfrak{p}}/(t))$. Hence $\\text{depth}(R_{\\mathfrak{p}}) = \\text{depth}(R_{\\mathfrak{p}}/(t)) + 1 = 1$ since $\\text{depth}_{\\mathfrak{p}}(R_{\\mathfrak{p}}/(t)) = 0$ by Proposition \\ref{prop:depth-zero}. Our conditions force $\\text{ht}(\\mathfrak{p}) \\leq 1$ and $t \\in \\mathfrak{p}$ implies $\\text{ht}(\\mathfrak{p}) > 0$.\n\t\n\tConversely, assume \\textbf{S2}. Suppose $\\text{ht}(\\mathfrak{p}) \\geq 1$. If $R_{\\mathfrak{p}}$ has depth zero then $\\text{Ass}(R_{\\mathfrak{p}}) \\cap V(\\mathfrak{p}R_{\\mathfrak{p}}) \\neq \\emptyset$, which implies $\\mathfrak{p}R_{\\mathfrak{p}} \\in \\text{Ass}(R_{\\mathfrak{p}})$ thus $\\mathfrak{p} \\in \\text{Ass}(R)$, contradicting \\textbf{S2}. Next, suppose $\\text{ht}(\\mathfrak{p}) \\geq 2$. If $R_{\\mathfrak{p}}$ has depth $\\leq 1$, the standard property\n\t\\[ \\forall i \\geq 0, \\quad \\Ext^i_{R_{\\mathfrak{p}}}(X_{\\mathfrak{p}}, Y_{\\mathfrak{p}}) \\simeq \\Ext^i_R(X,Y)_{\\mathfrak{p}} \\]\n\tvalid for Noetherian $R$ and finitely generated $R$-modules $X, Y$, yields\n\t\\[ 0 \\leq \\text{depth}_{\\mathfrak{p}}(R) \\leq \\text{depth}_{\\mathfrak{p}R_{\\mathfrak{p}}}(R_{\\mathfrak{p}}) \\leq 1. \\]\n\tIf $\\text{depth}_{\\mathfrak{p}}(R)=0$, there exists of $\\mathfrak{p}' \\supset \\mathfrak{p}$ and $\\mathfrak{p}' \\in \\text{Ass}(R)$ by Proposition \\ref{prop:depth-zero}, but \\textbf{S2} implies $\\text{ht}(\\mathfrak{p}) \\leq \\text{ht}(\\mathfrak{p}') = 0$ which is absurd. If $\\text{depth}_{\\mathfrak{p}}(R)=1$, there exists a non zero-divisor $x$ in $R$ such that $\\text{depth}_{\\mathfrak{p}}(R/(x)) = 0$. The same argument furnishes $\\mathfrak{p}' \\supset \\mathfrak{p}$ such that $\\mathfrak{p}' \\in \\text{Ass}(R/(x))$. Now \\textbf{S2} implies $\\text{ht}(\\mathfrak{p}) \\leq \\text{ht}(\\mathfrak{p}')=1$, again a contradiction.\n\\end{proof}\n\nNow the conditions \\textbf{R1} and \\textbf{S2} can be generalized to arbitrary $k \\in \\Z_{\\geq 0}$:\n\\begin{description}\n\t\\item[Rk] $\\text{ht}(\\mathfrak{p}) \\leq k$ implies $R_{\\mathfrak{p}}$ is a regular local ring, for every $\\mathfrak{p} \\in \\Spec(R)$;\n\t\\item[Sk] $\\text{depth}(R_{\\mathfrak{p}}) \\geq \\min\\{ k, \\text{ht}(\\mathfrak{p})\\}$ for every $\\mathfrak{p} \\in \\Spec(R)$.\n\\end{description}\nOne readily checks their compatibility with Proposition \\ref{prop:S2}. 
Note that \\textbf{S0} is trivial, whilst $R$ is Cohen--Macaulay if and only if it satisfies \\textbf{Sk} for all $k$.\n\n\\begin{example}\n\tFrom Theorem \\ref{prop:Serre-criterion}, we see that any finite direct product of normal domains of dimension $\\leq 2$ is Cohen--Macaulay.\n\\end{example}\n\n\\begin{exercise}\n\tShow that \\textbf{S1} holds if and only if there are no embedded primes in $\\text{Ass}(R)$.\n\\end{exercise}", "meta": {"hexsha": "4f402329bba2e19355c9090f8c33a46585457c98", "size": 29068, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "YAlg3-7.tex", "max_stars_repo_name": "wenweili/Yanqi-Algebra-3", "max_stars_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2019-07-09T06:22:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-05T14:44:14.000Z", "max_issues_repo_path": "YAlg3-7.tex", "max_issues_repo_name": "wenweili/Yanqi-Algebra-3", "max_issues_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "YAlg3-7.tex", "max_forks_repo_name": "wenweili/Yanqi-Algebra-3", "max_forks_repo_head_hexsha": "4223e9973c97342ecb09b444b9fc3c30ffd53aa3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-07-10T23:47:58.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-21T03:32:08.000Z", "avg_line_length": 87.8187311178, "max_line_length": 960, "alphanum_fraction": 0.6868721618, "num_tokens": 10253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5571807745591537}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\n\\begin{document}\n\n\\chapter{Algorithm Complexity Analysis}\n\\label{chapter_algorithm_analysis}\nWhen a software program runs on a machine, we  genuinely care about the \\textit{hardware space} and the \\textit{running time} that it takes to complete the execution of the program; space and running time is the cost we need to pay to get the problem solved. The lower the cost, the happier we would be. Thus, \\texttt{space} and \\texttt{running time} are two metrics we use to evaluate the performance of programs, or rather say, algorithms. \n\nNow, if I ask you the question, \"How to evaluate the performance of algorithms?\" Do not go low and tell me, \"You just write the code and run it on a computer?\" Because here is the reality: (a) These two metrics are mostly possible to vary as using different the physical machine and the programming languages, and (b) The cost will be too high. First, when we are solving a problem, we would always try to come up with many possible solutions--algorithms. Implementing and running all candidates just boost your cost of labor and finance. Second, even at the best case, you only have one candidate, but what if your designated machine can not load the program due to the memory limit, what if your algorithm takes millions of years to run, would you prefer to sit and wait? \n\nWith these situation, it is obvious that we need  to \\textit{predict} algorithm's performance--running time and space--without implementing or running on a particular machine, and meanwhile the prediction should be independent of the hardwares. In this chapter, we will study the complexity analysis method that strives to enable us such ability.  The space complexity is mostly obvious and way easier to obtain compared with its counterpart-time complexity. This decides that in this chapter, the analysis of time complexity will outweigh the pages we spent on space complexity. Before we dive into a plethora of algorithms and data structures, learning the complexity analysis techniques can help us evaluate each algorithm. \n\n\n% We organize the chapter:\n% \\begin{enumerate}\n%     \\item Introduction\n%     \\item Asymptotic notations\n%     \\item Amortized Analysis\n%     \\item Hands-on Examples\n% \\end{enumerate}\n\n\\section{Introduction}\nIn reality, it is impossible to predict the exact behavior of an algorithm, thus complexity analysis only try to extract the main influencing factors and ignore some trivial details. The complexity analysis is thus only \\textit{approximate}, but it works. \n\n\\paragraph{What are the main influencing factors? }\n\nImagine sorting an array of integers with size 10 and size 10,000,000. The time and space it takes to these two input size will mostly be a huge difference. Thus, the number of items in the \\textit{input size} is a straightforward factor. Assume we use $n$ to denote the size of the input, and the complexity analysis will define an expression of the running time as $T(n)$ and the space as $S(n)$. \n\nIn complexity analysis, RAM model is based upon, where instructions/operators are executed one after another, without concurrency. Therefore, the running time of algorithm on a particular input can be expressed as counting the number of \\textit{operations or ``steps''} to run. 
\n\n\\paragraph{What are the difference cases?}\n\nYet, when two input instance has exactly the same size, but with different values, such that one array where the input array is already sorted, and the other is totally random, the time it takes to these two cases will possibly vary, depending on the sorting algorithm that you chose. In complexity analysis, \\textit{best-case}, \\textit{worst-case}, \\textit{average-case} complexity analysis is used to differentiate the behavior of the same algorithm applied on different input instance.  \n\\begin{enumerate}\n    \\item \\textbf{Worst-case}: The behavior of the algorithm or an operation of a data structure with respect to the worst possible case of input instance.  This gave us a way to measure the upper bound on the running time for any input, which is denoted as $O$. Knowing it gives us a guarantee that the algorithm will never take any longer. \n    \\item \\textbf{Average-case}:  The expected behavior when the input is randomly drawn from a given distribution. Average case running time is used as an estimate complexity for a normal case. The expected case here offers us asymptotic bound $\\Theta$.  Computation of average-case running time entails knowing all possible input sequences, the probability distribution of occurrence of these sequences, and the running times for the individual sequences. Often it is assumed that all inputs of a given size are equally likely. \n    \\item \\textbf{Best-case}: The possible best behavior when  the input data is arranged in a way, that your algorithms run least amount of time. Best case analysis can lead us to the lower bound $\\Omega$ of an algorithm or data structure. \n\\end{enumerate}\n\n\\paragraph{Toy Example: Selection Sort} Given a list of integers, sort the item incrementally.\n\\begin{lstlisting}[numbers=none]\nFor example, given the list A=[10, 3, 9, 2, 8, 7, 9], the sorted list will be:\nA=[2, 3, 7, 8, 9, 9, 10].\n\\end{lstlisting}\nThere are many sorting algorithms, in this case, let us examine the \\textit{selection sort}. Given the input array $A$, and size to be $n$, we have index $[0, n-1]$. In selection sort, each time we select the current largest item and swap it with item at its corresponding position in the sorted list, thus dividing the list into two parts: unsorted list on the left and sorted list on the right. For example, at the first pass, we choose 10 from $A[0,n-1]$ and swap it with $A[n-1]$, which is 9;  at the second pass, we choose the largest item 9 from $A[0,n-2]$ and swap it with 7 at $A[n-2]$, and so. Totally, after $n-1$ passes we will get an incrementally sorted array. More details of selection sort can be found in Chapter~\\ref{chapter_sorting}. \n\nIn the implementation, we use \\texttt{ti} to denote the target position and \\texttt{li} the index of the largest item which can only get by scanning. 
We show the Python code:\n\\begin{lstlisting}[language=Python]\ndef selectSort(a):                       cost           times\n  '''Implement selection sort'''\n  n = len(a)\n  for i in range(n - 1): #n-1 passes, \n    ti = n - 1 -i                         c             n-1        \n    li = 0                                c             n-1\n    for j in range(n - i):\n      if a[j] > a[li]:                    c \\sum_{i=0}^{n-2}(n-i) \n        li = j                            c \\sum_{i=0}^{n-2}(n-i)\n    # swap li and ti\n    print('swap', a[li], a[ti])\n    a[ti], a[li] = a[li], a[ti]           c            n-1\n    print(a)\n  return a\n\\end{lstlisting}\nFirst, we ignore the distinction between different operation types and treat all alike with a cost of $c$. In the above code, the line that comes with notations--\\texttt{cost} and \\texttt{times}--are operations. In line $5$, we first point at the target position \\texttt{ti}. Because of the \\texttt{for} loop above it, this operation will be called $n-1$ times. Same for line $6$ and $12$. For operation in line $8$ and $9$, the times it operated is denoted as $\\sum_{i=0}^{n-2}(n-i)$ due to two nested \\texttt{for} loops. And the range of $j$ is dependable of the outer loop with $i$.  We get our running time $T(n)$ by summing up these cost on the variable of $i$.\n\\begin{align}\n\\label{complexity_eq_1}\n    T(n) &= 3c*(n-1) + \\sum_{i=0}^{n-2} {2c(n-i)}\\\\\n    &= 3c*(n-1) + 2c (n+(n-1)+(n-2)+...+2) \\notag \\\\\n    &= 3c*(n-1) + 2c (\\frac{(n-1)*(2+n)}{2})\\notag \\\\\n    &=cn^2+cn-2+3cn-3c\\notag \\\\\n    &=cn^2+4cn-3c-2\\label{complexity_eq_1_2} \\\\\n    &=an^2+bn+c\\label{complexity_eq_1_3}\n\\end{align}\nWe use three constants $a, b, c$ to rewrite Eq. \\ref{complexity_eq_1_2} with Eq.\\ref{complexity_eq_1_3}. \n\nIn the case of sorting, an incrementally sorted array will potentially be the best-cases that takes the lest running time and on the other hand decrementally sorted array will be the worst-case. However, in the example of selection sorted array, even if the input is perfect sorted, the algorithm does not consider this case, it still runs n-1 passes, each pass it still scans from a fixed size of window to find the largest item (you would only know it is the largest by looking all cases). Thus, in this case, the best-case, worst-case, and average-case all happens to have the same running time shown in Eq.~\\ref{complexity_eq_1_3}. \n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%Asymptotic notations%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Asymptotic Notations}\n\\paragraph{Order of Growth and Asymptotic Running Time} In Equation~\\ref{complexity_eq_1_3} we end up with three constant a, b, c and two terms with order  $n^2$ and $n$. When the input is large enough, all the lower order terms, even if with large constant,  will become relatively insignificant to the highest term; we thus neglect the lower terms and end up with $an^2$. Further, we neglect the constant coefficient $a$ for the same reason. However, we can not say $T(n)=n^2$, because we know mathematically speaking, it is wrong. 
\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.8\\columnwidth]{fig/big_o_complexity_chart.png}\n    \\caption{Order of Growth of Common Functions}\n    \\label{fig:big_o_complexity_chart}\n\\end{figure}\n\nInstead, since we are only interested with property of $T(n)$ when $n$ is large enough, we say the relation between the original complexity function $an^2+bn+c$ is ``asymptotically equivalent to''  $n^2$, which reads ``$T(n)$ is is asymptotic to $n^2$'' and denoted as $T(n)=an^2+bn+c \\asymp n^2$. Form Fig.~\\ref{fig:big_o_complexity_chart}, we can visualize that when $n$ is large enough, the term $n$ is trivial compared with $n^2$. \n\nIn this way, we manage to classify our complexity into a group of families, say, exponential $2^n$ or polynomial $n^2$.% For example, if the input size is $n$, then we can have functions like $f_1 = an+b$ and $f_2=an^2+bn+c$. But normally, we would only get the highest order of function, and simplified them to $g_1=n$ and $g_2=n^2$, with different notation. Function $f_1$ and $g_1$ are of the same order of magnitude or growth and $g_1$ can approach the curve of $f_1$ arbitrarily closely. The relation of them is  \\textbf{asymptotic}, and denoted as $f_1 \\asymp g_1$.\n\n\\subsubsection{Definition of Asymptotic Notations} We mentioned ``asymptotically equivalent'' relation, which can be formalized and  defined with $\\Theta$-Notation as $T(n)=\\Theta(n)$, one of the main three asymptotic notations--asymptotically equivalent, smaller, and larger--we will cover in this section.\n\\paragraph{$\\Theta$-Notation}  For a given function $g(n)$, we define $\\Theta(g(n))$(pronounced as ``big theta'') as a set of functions $\\Theta(g(n))=\\{f(n)\\}$, that each $f(n)$ can be bounded by $g(n)$ by $0 \\leq c_1g(n)\\leq f(n)\\leq c_2g(n)$ for all $n\\geq n_0$ for positive constant $c_1$, $c_2$ and $n_0$. We show this relation in Fig.~\\ref{fig:asym_notation}. Strictly speaking, we would write $f(n)\\in\\Theta(g(n))$ to indicate that $f(n)$ is just one member of the set of functions that $\\Theta(g(n))$ can represent. However, in the field of computer science, we write  $f(n)=\\Theta(g(n))$ instead. \n\nWe say $g(n)$ is an  \\textit{asymptotically tight bound} of $f(n)$.  For example, we can say $n^2$ is asymptotically tight bound for $2n^2+3n+4$ or $5n^2+3n+4$ or $3n^2$ or any other similar functions. We can denote our running time as $T(n)=\\Theta(n^2)$.\n\n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width = 1\\columnwidth]{fig/notations.png}\n    \\caption{Graphical examples for asymptotic notations. Replace f(n) with T(n) }\n    \\label{fig:asym_notation}\n\\end{figure}\n\n\\paragraph{$O$-Notation}  Further, we define the  \\textit{asymptotically upper bound} of a set of functions $\\{f(n)\\}$ as $O(g(n))$(pronounced as ``big oh'' of $f(n)$), with $0 \\leq  f(n)\\leq cg(n)$ for all $n\\geq n_0$ for positive constant $c$ and $n_0$. We show this relation in Fig.~\\ref{fig:asym_notation}.\n\nNote that $T(n) = \\Theta(g(n))$ implies that $T(n)=O(g(n))$, but not the other way around. With $2n^2+3n+4$ or $5n^2+3n+4$ or $3n^2$, it also be denoted as $T(n)=O(n^2)$. Big Oh notation is widely applied in computer science to describe either the running time or the space complexity.\n\n\\paragraph{$\\Omega$-Notation} It provides \\textbf{asymptotic lower bound} running time. 
With $T(n)=\\Omega(g(n))$(pronounced as ``big omega'') we represent a set of functions that $0 \\leq cg(n) \\leq f(n)$ for all $n\\geq n_0$ for positive constant $c$ and $n_0$. \n\n\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{Does it mean that $O$ is worst-case, $\\Theta$ is the average-case and $\\Omega$ is the best-case? How does it relate to this three cases. }\n\\end{bclogo}\n\n\\subsubsection{Properties of Asymptotic Comparisons}\nWe should note that only if $f(n)=O(g(n))$ and $f(n)=\\Omega(g(n))$, we can have $f(n)=\\Theta(g(n))$.\n\n\\begin{table}[!ht]\n\\begin{small}\n\\centering\n\\noindent\\captionof{table}{ Analog of Asymptotic Relation}\n \\noindent \\begin{tabular}{|p{0.4\\columnwidth}|p{0.4\\columnwidth}| }\n  \\hline\n Notation & Similar Relations   \\\\ \\hline\n$f(n)=\\Theta(g(n))$  & $f(n)=g(n)$ \\\\\\hline\n$f(n)=O(g(n))$  &$f(n)\\leq g(n)$\\\\ \\hline\n$f(n)=\\Omega(g(n))$  & $f(n)\\geq g(n)$\\\\\\hline\n\\end{tabular}\n  \\label{tab:asymptotic_notation_property}\n  \\end{small}\n\\end{table}\n\nIt is fair to denote the relation of $g(n)$ and $f(n)$ to similar relation as between real numbers as shown in Table.~\\ref{tab:asymptotic_notation_property}. Thus the properties of real numbers, such as transitivity, reflexivity, symmetry, transpose symmetry all holds for asymptotic notations. \n\n\\section{Practical Guideline}\nThe previous two sections, we introduced the complexity function $T(n)$, how it is influenced by different cases of input instance--worst, average, and best cases, and how that we can use asymptotic notations to focus the complexity only on the dominant term in function $T(n)$. In this section, we would like to provide some practical guideline that arise in real application. \n\\paragraph{Input Size and Running Time}\nIn general, the time taken by an algorithm grows with the size of the input, so it is universal to describe the running time of a program as a function of the size of its input. $f(n)$, with the input size denoted as $n$.\n\nThe notation of \\textbf{input size} depends on specific problems and data structures. For example, the size of the array can be denoted as integer $n$, the total numbers of bits when it come to binary notation, and sometimes, if the input is matrix or graph, we need to use two integers such as $(m, n)$ for a two-dimensional matrix or $(V, E)$ for the vertices and edges in a graph. \n\nWe use function $T$ to denote the running time. With input size of $n$, our running time can be denoted as $T(n)$. Given $(m, n)$, it can be $T(m, n)$. \n\n\\paragraph{Worst-case Analysis is Preferred} \nIn reality, worst-case input is chosen as our indicator over the best input and average input for: (a) best input is not representative; there is usually an input for the algorithm become trivial; (b) the average-input is sometimes very hard to define and measure; (3) In some cases, the worst-case input is very close to the average and to the observational input; (4)The algorithm with the best efficiency on the worst-case usually achieve the best performance. \n\n\\paragraph{Relate Asymptotic Notations to Three Cases of Input Instance}\nIt might seemingly confusing about how the asymptotic notation relates to the three cases of input instance--worst-case, best-case, and average case. \n\nThink about it this way, asymptotic notations apply to any function that it abstract away some lower-term to characterize the property of the function when the input is large or infinite. 
Therefore, it has nothing to do with these three cases in this way. \n\nHowever, assume we are trying to characterize the complexity of an algorithm, and we analyzed its best-case and worst case input:\n\\begin{itemize}\n    \\item Worst-case: $T(n)=an^2+bn+c$, now we can say $T(n)=\\Theta(n^2)$, which indicates that $T(n)=\\Omega(n^2)$ and $T(n)=O(n^2)$.\n    \\item Best-case: $T(n)=an$, we can say $T(n)=\\Theta(n)$, which indicates that $T(n)=\\Omega(n)$ and $T(n)=O(n)$.\n\\end{itemize}\nIn order to describe the complexity of our algorithm in general; put aside the particular input instance. Such as the the average case analysis, which is typically hard to ``average'' between different input, we can  come up with an estimation, and safely say for the time complexity in general is $an\\leq T(n)\\leq an^2+bn+c$. This can be further expanded as:\n\\begin{equation}\n    c_1n\\leq an\\leq T(n)\\leq an^2+bn+c \\leq c_2n^2\n\\end{equation}\nEquivalently, we are safe to characterize a lower-bound based on best-case and an upper-bound based on the worst-case, thus we say the time complexity of our algorithm as $T(n)=\\Omega(n), T(n)=O(n^2)$. \n\n\n\\paragraph{Big Oh is a Popular Notation to Complexity Analysis}\nAs we have concluded that the worst-case analysis is both easy to get and good indicator of the overall complexity. Big Oh as the absolute upper bound of the worst-case would also indicate the upper bound of the algorithm in general. \n\nEven if we can get a tight bound for the algorithm as in the case of selection sort, it is always right to say that its an upper bound because $\\Theta(g(n))$ is a subset of $O(g(n))$. This is like, we know dog is categorized as canine, and canine is in the type of mammal, thus, we are right to say that dog is a species of mammal. \n\n\n\n\n% \\subsubsection{Four Types of Complexity Analysis}\n%  As we will see in the remaining content of the book, different algorithm is affected differently with its input data distribution. We category this influence in three levels: worst-case, average-case, and best case.\n% \\begin{enumerate}\n%     \\item \\textbf{Worst-case}: The behavior of the algorithm or an operation of a data structure with respect to the worst possible case of input instance.  This gave us a way to measure the upper bound on the running time for any input, which is denoted as $O$. Knowing it gives us a guarantee that the algorithm will never take any longer. \n%     \\item \\textbf{Average-case}:  The expected behavior when the input is randomly drawn from a given distribution. Average case running time is used as an estimate complexity for a normal case. The expected case here offers us asymptotic bound $\\Theta$.  Computation of average-case running time entails knowing all possible input sequences, the probability distribution of occurrence of these sequences, and the running times for the individual sequences. Often it is assumed that all inputs of a given size are equally likely. \n%     \\item \\textbf{Best-case}: The possible best behavior when  the input data is arranged in a way, that your algorithms run least amount of time. Best case analysis can lead us to the lower bound $\\Omega$ of an algorithm or data structure. \n% \\end{enumerate}\n% The selection algorithm's complexity will not be affected by its input, because no matter what data distribution is, we need to traverse the fixed range of array to find the largest item. For these type of algorithms, usually we have $T(n)=O(g(n))=\\Theta(g(n))=\\Omega(g(n))$. 
However, there are a lot of other algorithms that differs from different inputs. \n\n% All of these notations are applied to functions.  However, in the practical interviews, when the interviewer asks you to give the time and space complexity, you do not necessarily to give them the answer for each notation, you can just use $O$ to denote, with regarding to different cases introduced in the next section.  Here, we provide the most likely used growth rate plotted in Fig.~\\ref{fig:big_o_complexity_chart}.\n\n\n\\section{Time Recurrence Relation}\nWe have studied recurrence relation throughly in Chapter.~\\ref{chapter_recurrence_relation}. How does it relate to complexity analysis? We can represent either recursive function or iterative function with time recurrence relation. Therefore, the complexity analysis can be done in two steps: (1) get the recurrence relation and (2) solve the recurrence relation.\n\\begin{itemize}\n    \\item For recursive function, this representation is natural. For example, in the merge sort, it can be easily represented as $T(n)=2T(n/2)+O(n)$, that each step it divides a problem of size $n$ into two subproblems each with half size, and the cost to combine the solution of these two subproblems will be at most $n$, that is why we add $O(n)$.\n    \\item A time recurrence relation can be easily applied on iterative program too. Say, in the simple task where we try to search a target in a list array, we can write a recurrence relation function to it as $T(n)=T(n-1)+1$. Because, in the scanning process, one move reduce the problem to a smaller size, and the case of it is 1. Using the asymptotic notation, we can further write it as $T(n)=T(n-1)+O(1)$. Solving this recurrence relation straightforwardly through iteration method, we can have $T(n)=O(n)$. \n\\end{itemize}\n\nAs in the chapter.~\\ref{chapter_divide_and_conquer}, there are generally two ways of reducing a problem: divide and conquer and Reduce by Constant size, which is actually a non-homogenous recurrence relation. \n\n In Chapter.~\\ref{chapter_recurrence_relation}, we showed how to solve linear recurrence relation and get absolute answer, it was seemingly complex and terrifying. Good news, as complexity analysis is about estimating the cost, so we can loose ourselves a bit and sometimes a lower/upper bound is good enough, and the base case will almost always be $O(1)=1$. \n \n \\subsection{General Methods to Solve Recurrence Relation}\n We have shown in Chapter.~\\ref{chapter_recurrence_relation} there are iterative method and mathematical induction as general methods to try to solve an easy recurrence relation. We demonstrate how these two methods can be used in solving time recurrence relations first. Additionally, we introduce recursion tree method. \n \\subsubsection{Iterative Method}\nThe most straightforward method for solving recurrence relation no matter its linear or non-linear is the \\textit{iterative method}. Iterative method is a technique or procedure in computational mathematics that it iteratively replace/substitute each $a_n$ with its recurrence relation $\\Psi(n, a_{n-1}, a_{n-2}, ..., a_{n-k})$ till all items ``disappear'' other than the initial values. Iterative method is also called substitution method. \n\nWe demonstrate iteration with a simple non-overlapping recursion. 
\n\\begin{align}\n\\label{complexity_eq_binary_search}\n    T(n)&=T(n/2)+O(1)\\\\\n    &=T(n/2^2)+O(1)+O(1)\\notag\\\\\n    &=T(n/2^3)+3O(1)\\notag\\\\\n    &=...\\notag\\\\\n    &=T(1)+kO(1)\n\\end{align}\nWe have $\\frac{n}{2^k}=1$, we solve this equation and will get $k=\\log_2 n$. Most likely $T(1)=O(1)$ will be the initial condition, we replace this, and we get $T(n)=O(\\log_2 n)$.\n\nHowever, when we try to apply iteration on the third recursion:  $T(n)=3T(n/4)+O(n)$. It might be tempting to assume that $T(n)=O(n\\log n)$ due to the fact that $T(n)=2T(n/2)+O(n)$ leads to this time complexity.\n\\begin{align}\n\\label{complexity_non_overlap_1}\n    T(n)&=3T(n/4)+O(n)\\\\\n    &=3(3T(n/4^2)+n/4)+n=3^2T(n/4^2)+n(1+3/4)\\notag\\\\\n    &=3^2(3T(n/4^3)+n/4^2)+n(1+3/4)=3^3T(n/4^3)+n(1+3/4+3/4^2)\\\\\n    &=...\\\\\n    &=3^kT(n/4^k)+n\\sum_{i=0}^{k-1}(\\frac{3}{4})^{i}\n\\end{align}\n\\subsubsection{Recursion Tree}\nSince the term of T(n) grows, the iteration can look messy. We can use recursion tree to better visualize the process of iteration. In a recursive tree, each node represents the value of a single subproblem, and a leaf would be a subproblem. As a start, we expand $T(n)$ as a node with value $n$ as root, and it would have three children each represents a subproblem $T(n/4)$. We further do the same with each leaf node, until the subproblem is trivial and be a base case. In practice, we just need to draw a few layers to find the rule. The cost will be the sum of costs of all layers.  The process can be seen in  Fig.~\\ref{fig:recursive_tree}. \n\\begin{figure}[!ht]\n    \\centering\n    \\includegraphics[width=0.98\\columnwidth]{fig/recursion_tree_non_overlap.png}\n    \\caption{The process to construct a recursive tree for $T(n) = 3T(\\floor*{n/4}) + O(n)$. There are totally k+1 levels. Use a better figure.  }\n    \\label{fig:recursive_tree}\n\\end{figure}\n In this case, it is the base case $T(1)$. Through the expansion with iteration and recursion tree, our time complexity function becomes:\n\\begin{align}\n\\label{complexity_non_overlap_2}\n    T(n)&=\\sum_{i=1}^{k}L_i + L_{k+1}\\\\\n    &=n\\sum_{i=1}^{k}(3/4)^{i-1}+3^kT(n/4^k)\n\\end{align}\n\nIn the process, we can see that Eq.~\\ref{complexity_non_overlap_2} and Eq.~\\ref{complexity_non_overlap_1} are the same.  Because $T(n/4^k)=T(1)=1$, we have $k=\\log_4 n$. \n\\begin{align}\n\\label{complexity_non_overlap_2}\n    T(n)&\\leq n\\sum_{i=1}^{\\infty}(3/4)^{k-1}+3^kT(n/4^k)\\\\\n    &\\leq 1/(1-3/4)n+3^{\\log_4 n} T(1)= 4n+n^{log_4 3}\n    &\\leq 5n \\\\\n    &=O(n)\n\\end{align}\n\n\n\n\\subsubsection{Mathematical Induction}\nMathematical induction is a mathematical proof technique, and is essentially used to prove that a property $P(n)$ holds for every natural number $n$, i.e. for $n=0, 1, 2, 3$, and so on. Therefore, in order to use induction, we need to make a \\textit{guess} of the closed-form solution for $a_n$. Induction requires two cases to be proved. \n\\begin{enumerate}\n    \\item \n \\textit{Base case:} proves that the property holds for the number $0$. \n\\item \\textit{Induction step:} proves that, if the property holds for one natural number $n$, then it holds for the next natural number $n+1$.\n\\end{enumerate}\n\nFor $T(n)=2\\times T(n-1) +1, T_0 = 0$, we can have the following result by expanding $T(i), i \\in [0, 7]$.\n\\begin{lstlisting}[numbers=none]\nn    0 1 2 3 4 5 6 7\nT_n  0 3 7 15 31 63 127\n\\end{lstlisting}\nIt is not hard that we find the rule and guess $T(n) = 2^n-1$. 
Now, we prove this equation by induction:\n\\begin{enumerate}\n    \\item Show that the basis is true: $T(0) = 2^0 -1 = 0$.\n    \\item Assume it holds true for $T(n-1)$. By induction, we get\n    \\begin{align}\n        T(n)&=2T(n-1) + 1 \\\\\n        &=2 (2^{n-1} - 1) + 1 \\\\\n        &= 2^n -1\n    \\end{align}\n    Now we show that the induction step holds true too. \n\\end{enumerate}\n\n\\begin{bclogo}[couleur = blue!30, arrondi=0.1,logo=\\bccrayon,ombre=true]{Solve $T(n)=T(n/2)+O(1)$ and $T(2n)\\leq2T(n)+2n-1, T(2)=1$.}\n\\end{bclogo}\n \n \\subsection{Solve Divide-and-Conquer Recurrence Relations}\nAll the previous recurrence relation, either homogeneous or non-homogeneous, they fall into the bucket of \\textit{decrease and conquer} (maybe not right), and either is yet another type of recursion--Divide and Conquer. Same here, we ignore how we get such recurrence but focus on how to solve it. \n\nWe write our divide and conquer recurrence relation using the time complexity function, there are two types as shown in Eq.\\ref{divide_conquer_eq1}(n are divided equally) and E1.\\ref{divide_conquer_eq2}(n are divided unequally):\n\\begin{equation}\n    T(n)=aT(n/b)+f(n)\n    \\label{divide_conquer_eq1}\n\\end{equation}\nwhere $a\\leq 1, b>1$, and $f(n)$ is a given function, which usually has $f(n)= cn^k$.\n\\begin{equation}\n    T(n)=\\sum_{i=1}^{k}a_iT(n/b_i)+f(n)\n    \\label{divide_conquer_eq2}\n\\end{equation}\nConsidering that the first type is much more commonly seen that the other, we only learn how to solve the first type; in fact, at least, I assume you that within this book, the second type will never appear. \n\n\\paragraph{Sit and Deduct} For simplicity, we assume $n=b^m$, so that $n/b$ is always integer. First, let us use the iterative method, and expand Eq.~\\ref{divide_conquer_eq1} up till $n/b^m$ times so that $T(n)$ become $T(1)$:\n\\begin{align}\n    T(n)&=aT(n/b)+cn^k\\\\\n    &=a(aT(n/b^2)+c(n/b)^k)+cn^k\\\\\n    &=a(a(T(n/b^3)+c(n/b^2)^k)+c(n/b)^k)+cn^k\\\\\n    &\\vdots\\\\\n    &=a(a(\\ldots T(n/b^m)+c(n/b^{m-1})^{k})+\\ldots)+cn^k\\\\ \n    &=a(a(\\ldots T(1)+cb^{k})+\\ldots)+cn^k\n\\end{align}\nNow, assume $T(1)=c$ for simplicity and for getting rid of this constant part in our sequence. Then,\n\\begin{equation}\n    T(n)=ca^m+ca^{m-1}b^k+ca^{m-2}b^{2k}+\\ldots+cb^{mk},\n\\end{equation}\nwhich implies that\n\\begin{align}\nT(n)&=c\\sum_{i=0}^ma^{m-i}b^{ik}\\\\\n&=ca^m\\sum_{i=0}^m(\\frac{b^k}{a})^i\n\\end{align}\nSo far, we get a geometric series, which is a good sign to get the closed-form expression. We first summarize all possible substitutions that will help our further analysis.\n\\begin{align}\n    f(n)&=cn^k\\\\\n    n&=b^m\\\\\n    & \\xrightarrow{} \\\\\n    \\label{eq_divide_conquer_sub_2}\n    m&=\\log_b n\\\\\n    f(n)&=cb^{mk}\\\\\n    a^m&=a^{\\log_b n}=n^{\\log_b a}\\label{eq_divide_conquer_sub_1}\n\\end{align}\nDepending on the relation between $a$ and $b^k$, there are three cases:\n\\begin{enumerate}\n    \\item $b^k < a$: In this case, $\\frac{b^k}{a}<1$, so the geometric series converges to a constant even if $m$ goes to infinity. Then, we have an upper bound for $T(n)$, $T(n)<ca^m$, which converts to $T(n)=O(a^m)$. According to Eq.~\\ref{eq_divide_conquer_sub_1}, we further get:\n    \\begin{equation}\n        T(n)=O(n^{\\log_b a})\n    \\end{equation}\n    \\item $b^k = a$: With $\\frac{b^k}{a}=1$, $T(n)=O(a^mm)$. 
With Eq.~\\ref{eq_divide_conquer_sub_1} and Eq.~\\ref{eq_divide_conquer_sub_2}, our upper bound is:\n    \\begin{equation}\n        T(n)=O(n^{\\log_b a}\\log_b n)\n    \\end{equation}\n    \\item $b^k > a$: In this case, we denote $\\frac{b^k}{a}=d$ ($d$ is a constant and $d>1$). Use the standard formula for summing over a geometric series:\n    \\begin{align}\n        T(n)&=ca^m\\frac{d^{m+1}-1}{d-1}=O(a^m\\frac{d^{m+1}-1}{d-1})\\\\\n        &=O(b^{mk})=O(n^k)=O(f(n))\n    \\end{align}\n\\end{enumerate}\n\n\n\n\\subsubsection{Master Method} Comparison between $b^k$ and $a$ equals to the comparison between $b^{km}$ between $a^m$ . From the above substitution, it further equals to compare  $f(n)$ to $n^{\\log_b a}$. This is when master method kicks in and we will see how it helps us to apply these three cases into real situation. \n\nCompare   $f(n)/c=n^k$ with  $n^{\\log_b a}$. Intuitively, the larger of the two functions would dominate the solution to the recurrence. Now, we rephrase the three cases using the master method for the easiness of memorization.  \n\\begin{enumerate}\n    \\item If $n^k<n^{\\log_b a}$, or say \\textit{polynomially smaller} than by a factor of $n^{\\epsilon}$ for some constant $\\epsilon > 0$, we have:\n     \\begin{equation}\n        T(n)=O(n^{\\log_b a})\n    \\end{equation}\n    \\item If $n^k>n^{\\log_b a}$, similarily, we need it to be polynomially larger than a factor of $n^{\\epsilon}$ for some constant $\\epsilon > 0$, we have:\n         \\begin{equation}\n        T(n)=O(f(n))\n    \\end{equation}\n    \\item If $n^k=n^{\\log_b a}$, then:\n        \\begin{equation}\n        T(n)=O(n^{\\log_b a}\\log_b n)\n    \\end{equation}\n   \n\\end{enumerate}\n\n \n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%% Complexity Analysis\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Hands-on Example: Insertion Sort}\n\\label{sec_time_complexity}\nIn this section, we are expecting to see example that has different asymptotic bound as the input differs; where we focus more on the worst-case and average-case analysis. Along the analysis of complexity, we will also see how asymptotic notation can be used in equations or inequalities to assist the process. \n\nBecause most of the time, the average-case running time will be asmptotically equal to the worst-case, thus we do not really try to analyze it at the first place. In the case of best-case, it would only matter if you know your application context fits right in, otherwise, it will be trivial and non-helpful in the comparison of multiple  algorithms. We will see example below. \n\n\\paragraph{Insertion Sort: Worst-case and Best-case} There is another sorting algorithm--insertion sort--it sets aside another array $S$ to save the sorted items. At first, we can put the first item in which itself is already sorted. At the second pass, we put A[1] into the right position in $S$. Until the last item is handled, we return the sorted list. 
The code is:\n\\begin{lstlisting}[language=Python]\ndef insertionSort(a):\n  '''implement insertion sort'''\n  if not a or len(a) == 1:\n    return a\n  n = len(a)\n  sl = [a[0]] + [None] *(n-1) # sorted list\n  for i in range(1, n): # items to be inserted into the sorted\n    key = a[i]\n    j = i-1 \n\n    while j >= 0 and sl[j] > key: # compare key from the last sorted element\n      sl[j+1] = sl[j] # shift a[j] backward\n      j -= 1\n    sl[j+1] = key\n    print(sl)\n  return sl\n\\end{lstlisting}\nFor the first \\texttt{for} loop in line 7, it will sure has $n-1$ passes. However, for the inner \\texttt{while} loop, the real times of execution of statement in line 12 and 13 depends on the state between \\texttt{sl} and \\texttt{key}. If we try to sort the input array \\texttt{a} incrementally such that A=[2, 3, 7, 8, 9, 9, 10], and if the input array is already sorted, then there will be no items in the sorted list can be larger than our key which result only the execution of line 14. This is the best case, we can denote the running time of the \\texttt{while} loop by $\\Omega(1)$ because it has constant running time at its best case. However, if the input array is a reversed as the desired sorting, which means it is decreasing sorted such as A=[10, 9, 9, 9, 7, 3, 2], then the inner \\texttt{while} loop will has $n-i$, we denote it by $O(n)$. We can denote our running time equation as:\n\\begin{align}\n    T(n)&=T(n-1)+O(n)\\\\\n    &=O(n^2) \\notag\n\\end{align}\nAnd, \n\\begin{align}\n    T(n)&=T(n-1)+\\Omega(1)\\\\\n    &=\\Omega(n)\\notag\n\\end{align}\nUsing simple iteration, we can solve the math formula and have the asymptotic upper bound and lower bound for the time complexity of insertion sort. \n\nFor the average case, we can assume that each time, we need half time of comparison of $n-i$, we can have the following equation:\n\\begin{align}\n    T(n)&=T(n-1)+\\Theta(n/2)\\\\\n    &=T(n-2)+\\Theta(\\frac{n}{2}+\\frac{n-1}{2}) \\notag\\\\\n    &=\\Theta(n^2)\\notag\n\\end{align}\nFor algorithm that is stable in complexity, we conventionally analyze its average performance, and it is better to use $\\Theta$-notation in the running time equation and give the asymptotic tight bound like in the selection sort. For algorithm such as insertion sort, whose complexity varies as the input data distribution we conventionally analyze its worst-case and use $O$-notation.\n\n\n\n% \\paragraph{Simple example} These are just straightforward for us to analyze the running time. Sometimes, things become more obscure. Then, we need more advanced techniques to help us handle. For example, we use recurrence function to represent the the time we need when the problem decrease the size. Such that for one for loop, we can use $T(n)=T(n-1)+O(1)$, and for two nested for loops, normally $T(n)=T(n-1)+O(n)$ is enough to represent this situation. For a divide and conquer problem, we might get a recurrence function as $T(n) = T(n/2)+O(1)$. With recurrence function, the time complexity analysis is conveniently converted to a math problem and things get to be more interesting. We can divide the recurrence function into two types: non-overlapping as $T(n) = T(a*n/b)+f(n)$, and with over-lapping as $T(n) = T(a*n/b)+f(n)$. 
\n\n\n%%%%%%%%%%%%%%%%%%Amortized analysis%%%%%%%%%%%%%\n\\section{*Amortized Analysis}\n\\label{sec_amortized_analysis}\nThere are two different ways to evaluate an algorithm/data structure: \n\n\\begin{enumerate}\n    \\item Consider each operation separately: look at each operation incurred in the algorithm/data structure separately, and give the worst-case running time $O$ and average running time $\\Theta$ for each operation. For the whole algorithm, these are then summed up, weighted by how many times each operation is incurred.\n    \\item Amortize among a sequence of (related) operations: Amortized analysis can be used to show that the average cost of an operation is small if one averages over a sequence of operations, even though a single operation might be expensive. Amortized analysis guarantees the average performance of each operation in the worst case. \n\\end{enumerate}\n\nAmortized analysis does not look at each operation on a given data structure purely in isolation; it averages the time required to perform a sequence of different data structure operations over all performed operations. With amortized analysis, we might see that even though one single operation might be expensive, its amortized cost over all operations is small. Different from average-case analysis, probability is not involved. From the example below, we will see that amortized analysis views the data structure in its applicable scenario and asks what the average cost of each operation is to complete the task, a cost that is achievable given any input. Therefore, for the cost of the same operation: worst-case $\\geq$ amortized $\\geq$ average.\n\nThere are three types of amortized analysis:\n\\begin{enumerate}\n    \\item Aggregate Analysis: bound the total cost $T(n)$ of a sequence of $n$ operations, so that each operation has amortized cost $T(n)/n$.\n    \\item Accounting Method: assign each operation an amortized cost that may differ from its actual cost, storing the overcharge as credit on the data structure to pay for undercharged operations later.\n    \\item Potential method: define a potential function on the states of the data structure; the amortized cost of an operation is its actual cost plus the change in potential. \n\\end{enumerate}
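\nAs a concrete instance of aggregate analysis, here is a minimal sketch using the classic dynamic array (table doubling) example; the class and the \\texttt{writes} counter are illustrative names for this sketch, not code used elsewhere in this book:\n\\begin{lstlisting}[language=Python]\nclass DynamicArray:\n  '''toy dynamic array that doubles its capacity when full'''\n  def __init__(self):\n    self.capacity = 1\n    self.size = 0\n    self.data = [None]\n    self.writes = 0 # total basic writes, for aggregate analysis\n\n  def append(self, item):\n    if self.size == self.capacity: # expensive append: copy all items over\n      self.capacity *= 2\n      new_data = [None] * self.capacity\n      for i in range(self.size):\n        new_data[i] = self.data[i]\n        self.writes += 1\n      self.data = new_data\n    self.data[self.size] = item # cheap part: a single write\n    self.writes += 1\n    self.size += 1\n\narr = DynamicArray()\nfor x in range(1000):\n  arr.append(x)\nprint(arr.writes / 1000.0) # about 2 writes per append on average\n\\end{lstlisting}\nA single \\texttt{append} can cost $O(n)$ when it triggers a copy, but copies happen only when the size reaches $1, 2, 4, \\ldots$, so $n$ appends cost fewer than $3n$ basic writes in total; the amortized cost per append is therefore $O(1)$.\n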
\n%%%%%%%%%%%%%%%%%%space complexity%%%%%%%%%%%%%%%\n\\section{Space Complexity}\nThe analysis of space complexity is more straightforward, given that we are essentially the ones who allocate space for the application. We simply link it to the size of the items in the data structures. The only obscure part concerns \\textit{recursive programs}, which take space from the stack that is hidden from users by the programming language's compiler or interpreter. A recursive program can be represented as a recursion tree; the maximum stack space it needs is decided by the height of the recursion tree, thus $O(h)$, given $h$ as the height.\n\n\\paragraph{Space and Time Trade-off} In the field of algorithm design, we can usually trade space for time efficiency or trade time for space efficiency. For example, if your algorithm runs on a backend server that must respond to user requests, then decreasing the response time is especially useful; normally we want to decrease the time complexity by sacrificing more space, as long as the extra space is not a problem for the physical machine. But in some cases, decreasing the space usage is more important and needed; then we might go for alternative algorithms that use less space but have higher time complexity. \n\n\\section{Summary}\nFor your convenience, we provide a table that shows the time complexity of frequently used recurrence equations. \n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=1\\columnwidth] {fig/complexity_cheatsheet.png}\n    \\caption{The cheat sheet for time and space complexity with recurrence functions. For example, $T(n) = T(n-1)+T(n-2)+...+T(1)+O(n)$ solves to $O(2^n)$. The growth rates are called factorial, exponential, quadratic, linearithmic, linear, logarithmic, and constant. }\n    \\label{fig:cheat_sheet}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%Examples%%%%%%%%%%%%%%%%%%%%%%%%%\n% \\subsection{More Examples}\n% \\begin{examples}[resume]\n% \\item \\textbf{ Pow(x, n) (50).}\n    \n%     Solution: T(n)= T(n/2)+O(1), the complexity is the same as the binary search, $O(logn)$.\n% \\begin{lstlisting}[language=Python]\n% def myPow(self, x, n):\n%     \"\"\"\n%     :type x: float\n%     :type n: int\n%     :rtype: float\n%     \"\"\"\n%     if n==0:\n%         return 1\n%     if n<0:\n%         n=-n\n%         x=1.0/x\n%     def helper(n):\n%         if n==1:\n%             return x\n        \n%         h = n//2\n%         r = n-h\n%         value = helper(h) #T(n/2), then we have O(1)\n%         if r==h:                \n%             return value*value\n%         else: #r is going to be 1 bigger\n%             return value*value*x\n%     return helper(n)\n% \\end{lstlisting}\n% \\end{examples}\n\n\n%%%%%%%%%%%%%%Cheat sheet%%%%%%%%%%%%%%%%%%%%%%\n% \\subsection{Big-O Cheat Sheet}\n% \\label{complexity_subsec_cheat_sheet}\n% In this section, we provide the plotting of common seen time complexity functions (shown in Fig~\\ref{fig:big_o_complexity_chart}): including $log_2{n}$, $n$, $n\\log_2{n}$, $n^2$, $2^n$, and $n!$, so that we can sense the complexity change as the input size n increase. Resource found on \\url{http://bigocheatsheet.com/}.\n\n\n% \\begin{table}[h]\n% \\begin{small}\n% \\centering\n% \\noindent\\captionof{table}{ Explanation of Common Growth Rate}\n%  \\noindent \\begin{tabular}{|p{0.15\\columnwidth}|p{0.2\\columnwidth}| p{0.65\\columnwidth}|}\n%   \\hline\n%  Growth Rate & Name & Example operations   \\\\ \\hline\n% $O(1)$  & Constant& append, get item, set item \\\\\\hline\n% $O(\\log{n})$  &Logarithmic& binary search in the sorted array\\\\ \\hline\n% $O(n)$  & Linear & Copy, iteration\\\\ \\hline\n% $O(n\\log{n})$  & Linear-Logarithmic& MergeSort, QuickSort\\\\ \\hline\n% $O(n^2)$  & Quadratic& Nested Loops\\\\ \\hline\n% $O(n^3)$  &Cubic& Matrix Multiplication\\\\ \\hline\n% $O(2^n)$  & Exponential& Backtracking, Combination\\\\ \\hline\n% $O(n!)$  & factorial & Permutation\\\\ \\hline\n% \\end{tabular}\n%   \\label{tab:single_sequence}\n%   \\end{small}\n% \\end{table}\n\n% Also, we provide the average and worst time and space complexity for the some classical data structure's operations (shown in Fig.~\\ref{fig:data_structure_complexity}) and of algorithms (shown in Fig.~\\ref{fig:data_structure_complexity}).\n% \\begin{figure}[h!]\n%     \\centering\n%     \\includegraphics[width=0.9\\columnwidth]{fig/common_data_structure_operations.png}\n%     \\includegraphics[width=0.9\\columnwidth]{fig/array_sorting_algorithms.png}\n%     \\caption{Complexity of Common Data structures}\n%     \\label{fig:data_structure_complexity}\n% \\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%Exercise\n\\section{Exercises}\n\\subsection{Knowledge Check}\n\\begin{enumerate}\n    \\item Use iteration and recursion tree to get the time complexity of $T(n)=T(n/3)+2T(2n/3)+O(n)$. \n    \\item Get the time complexity of $T(n)=2T(n/2)+O(n^2)$.\n    \\item Get the time complexity of $T(n)=T(n-1)+T(n-2)+T(n-3)+...+T(1)+O(1)$. 
\n\\end{enumerate}\n\\end{document}", "meta": {"hexsha": "9716cf2e4a6267a9280d7eaa439106f8496e45bd", "size": 42069, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/chapter_5_algorithm_analysis.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/chapter_5_algorithm_analysis.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/chapter_5_algorithm_analysis.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.7466410749, "max_line_length": 896, "alphanum_fraction": 0.7143740046, "num_tokens": 11743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646140788307, "lm_q2_score": 0.8244619306896955, "lm_q1q2_score": 0.5571421984152097}}
{"text": "\\documentclass[12pt]{article}\n\n\\addtolength{\\textwidth}{1.0in}\n\\addtolength{\\oddsidemargin}{-0.5in}\n\\addtolength{\\evensidemargin}{-0.5in}\n\\addtolength{\\textheight}{1.0in}\n\\addtolength{\\topmargin}{-0.5in}\n\n\\usepackage [english]{babel}\n\\usepackage [autostyle, english = american]{csquotes}\n\\MakeOuterQuote{\"}\n\n\\title{$p$-Value Computations within the GASP Pipeline}\n\n\\author{Jake Martinez}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\nIn any given motif finding pipeline, scoring methods for the results generated are essential to\nanalysis and evaluation of the pipeline's effectiveness. In this paper, we will explore the\nvarious proposed methods for computing $p$-values.\n\n\n\\section{Methods}\nIn order to calculate $p$-value we must first understand the purpose of $p$-value. In general terms,\n$p$-value is the probability that the observed outcome is equal to or more extreme than the expected\noutcome. Essentially, it is used to quantify the statistical significance of observed results in an\neffort to reject the established null hypothesis[1]. The null hypothesis $H_0$ is formulated to \nrepresent the condition where no significant event occurs; conversely, the alternative hypothesis \n$H_1$ represents the condition where a significant event occurs. To reject the null hypothesis, we \nset a threshold $\\alpha$ for comparison to the $p$-value. For our pipeline, let the arbitrarily chosen \nthreshold to reject the null hypothesis be $\\alpha = 1\\times10^{-6}$,\n\n$$ H_0\\textrm{: }p \\geq 1.0\\times10^{-6} $$\n$$ H_1\\textrm{: }p < 1.0\\times10^{-6} $$\n\nOnce we have determined the threshold for rejection of the null hypothesis, we can move on to the\ncalculation of the $p$-value. This calculation is the two-sided test for the distribution of the\ndata; in our case we will use the binomial distribution. Other potential distributions that could be\nused for our application include the hypergeometric and chi-square distributions; however, the binomial\nshould be sufficient for our purposes. To computationally approximate the integral of the binomial \ndistribution, we can use the \\texttt{scipy.stats} package which contains a function called \n\\texttt{binom\\_test()}. The parameters $\\theta = \\{x,n,p\\}$ of this function are: $x$ the number of \nsuccesses, $n$ the total number of trials, $p$ the probability of success. More specifically to our \npipeline, $x$ is the number of $l$-mers that are occurrences of motif $m$ and $n$ is the total number \nof $l$-mers in the dataset, where $m$ is of length $l$. The last parameter is a little less \nstraightforward because the value needs to be estimated.\n\n\n\\subsection{Na\\\"{i}ve Approach}\nThis approach to computing an estimate for the probability of success (in our case this just means finding\nan $l$-mer that is statistically significant) simply relies on finding the sample mean of distinct $l$-mer\noccurrences as an unbiased estimator for the parameter $p$. Below is the simple calculation of the sample \nmean $\\hat{p}$ in terms of parameters $x$ and $n$,\n\n$$ \\hat{p} = \\frac{x}{n} $$\n\n\n\\subsection{Markov Model Approach}\nThe na\\\"{i}ve approach for estimation is not quite adequate when determining the statistical significance \nas there is much of the data not being taken into account. 
Using a Markov model instead of the na\\\"{i}ve \napproach for the estimation increases the utilization of the information present in the data.\nA Markov chain determines the probability of a given sequence from the product of transition probabilities \nin the Markov model, rather than just using the sample mean. For a first-order Markov model, the probability\nis represented by the following,\n\n$$ P = Pr(S_1)\\cdot\\prod_{i = 2}^{n} Pr(S_i|S_{i-1}) $$\n\n\\noindent where $Pr(S_i)$ is the probability of character $S_i$ based on the background model of sequence $S$ \nand $Pr(S_i|S_{i-1})$ is the transition probability of $S_i$ given that the preceding character is $S_{i-1}$.\nThe Markov model is constructed by creating a mapping for all possible transitions within the model (16\npossibilities for a first-order model, $4^{k+1}$ for a $k$-th order model). The sequences are then iterated through to\ndetermine the number of times each possible transition occurs. Transition probabilities are then assigned\nvalues by dividing the number of occurrences of the given transition by the total number of transitions from the\nparticular starting node. For example: if the transition $A \\rightarrow A$ occurs 6 times, \n$A \\rightarrow T$ occurs 3 times,\n$A \\rightarrow C$ occurs 8 times, and\n$A \\rightarrow G$ occurs 5 times, the probabilities for transitions from $A$ would be:\n$$Pr(A|A) = \\frac{6}{22} = .273$$\n$$Pr(T|A) = \\frac{3}{22} = .136$$\n$$Pr(C|A) = \\frac{8}{22} = .364$$\n$$Pr(G|A) = \\frac{5}{22} = .227$$\n\\\\\n\\\\\nTo illustrate the usefulness of a Markov model, suppose that in a series of sequences there are two distinct \nmotifs that occur an equal number of times. In the na\\\"{i}ve approach, these two distinct motifs would have \nthe same estimated probability; with the Markov model, however, it is unlikely that these two distinct \nmotifs would have the same estimated probability, as the products of their transition probabilities would \nhave to be equal.\n\n\n\\section{Results \\& Discussion}\nTo analyze the effectiveness of our calculations, we can calculate the $p$-value for each possible $l$-mer\nin our sequence data. Ideally, only motifs that have a high likelihood of being actual motifs will be\nscored well with low $p$-values and reject the null hypothesis. Using the sequences file \n\\texttt{positive\\_genes.fasta} and a motif length of 12 as input for the pipeline, there were 23040 distinct \nsequences found in the unfiltered data. Applying the low-complexity filter, this number drops down to 21350. 
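\n\nFor concreteness, here is a minimal sketch of the test described in the Methods section using \\texttt{binom\\_test()}; the counts and the estimated success probability are hypothetical placeholders rather than values from the pipeline:\n\\begin{verbatim}\nfrom scipy.stats import binom_test  # exact binomial test, two-sided by default\n\nx = 12          # hypothetical: occurrences of motif m among the l-mers\nn = 50000       # hypothetical: total number of l-mers in the dataset\np_hat = 1.2e-4  # hypothetical: estimated probability of success for m,\n                # e.g. from the naive estimator or the Markov model above\n\np_value = binom_test(x, n, p_hat)\nprint(p_value, p_value < 1e-6)  # compare against the threshold alpha\n\\end{verbatim}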
\nThe twenty motif sequences with the lowest $p$-values are: \n\n\n\\begin{center}\n\t\\begin{tabular}{| c c | c c|}\n\t\t\\hline\n\t\t\\multicolumn{2}{| c |}{Unfiltered} & \\multicolumn{2}{| c |}{Filtered} \\\\\n\t\t\\hline\n\t\t\\textit{TTTTTTTTTTTT} & $2.649998\\times10^{-58}$ & CAGCACTCCTGC & $3.482304\\times10^{-8}$ \\\\\n\t\t\\textit{TTTTTCTTTTTT} & $1.142853\\times10^{-10}$ & TCAGCACTCCTG & $1.248463\\times10^{-7}$ \\\\\n\t\t\\textit{TTTTTTCTTTTT} & $2.128183\\times10^{-8}$ & GGAATGCTTCCT & $4.785597\\times10^{-7}$ \\\\\n\t\tCAGCACTCCTGC & $3.075061\\times10^{-8}$ & GCTTTCGACGTA & $4.785597\\times10^{-7}$ \\\\\n\t\tTCAGCACTCCTG & $1.102143\\times10^{-7}$ & ATGCTTTCGACG & $4.786337\\times10^{-7}$ \\\\\n\t\tATGCTTTCGACG & $3.675220\\times10^{-7}$ & TGGAATGCTTCC & $4.792504\\times10^{-7}$ \\\\\n\t\tGGAATGCTTCCT & $3.696426\\times10^{-7}$ & CCAAACCTCATT & $6.646762\\times10^{-7}$ \\\\\n\t\tGCTTTCGACGTA & $3.696426\\times10^{-7}$ & TGCTTTCGACGT & $6.918790\\times10^{-7}$ \\\\\n\t\tTGGAATGCTTCC & $3.701234\\times10^{-7}$ & CAACAAAAATCC & $7.902159\\times10^{-7}$ \\\\\n\t\tTGCTTTCGACGT & $4.390882\\times10^{-7}$ & ACACACCAAATA & $7.907213\\times10^{-7}$ \\\\\n\t\tGCTTCCTTTCTG & $5.525666\\times10^{-7}$ & GTTTCAAGAACG & $8.213701\\times10^{-7}$ \\\\\n\t\tCTTCCTTTCTGC & $5.849518\\times10^{-7}$ & CAAAAATCCAAG & $8.478478\\times10^{-7}$ \\\\\n\t\tAAACTACCAAGG & $7.029208\\times10^{-7}$ & CTTCCTTTCTGC & $8.659784\\times10^{-7}$ \\\\\n\t\tGTTTCAAGAACG & $9.367088\\times10^{-7}$ & GCTTCCTTTCTG & $9.295849\\times10^{-7}$ \\\\\n\t\tTGTTTTAAATTA & $1.045540\\times10^{-6}$ & CAATGCTTTCGA & $1.104609\\times10^{-6}$ \\\\\n\t\tAATTGCTTGGCA & $1.104837\\times10^{-6}$ & AATGCTTTCGAC & $1.105315\\times10^{-6}$ \\\\\n\t\tCCAAACCTCATT & $1.118712\\times10^{-6}$ & AATTGCTTGGCA & $1.185921\\times10^{-6}$ \\\\\n\t\tAATGCTTTCGAC & $1.173514\\times10^{-6}$ & ATGAATCCAAAC & $1.224741\\times10^{-6}$ \\\\\n\t\tCAATGCTTTCGA & $1.176328\\times10^{-6}$ & CCTTTCTGCTAA & $1.486207\\times10^{-6}$ \\\\\n\t\tGGTCAATGCTTT & $1.318213\\times10^{-6}$ & CTTTCGACGTAT & $1.594581\\times10^{-6}$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{center}\n\\noindent \\textit{Note: the italicized motifs in the \"Unfiltered\" column were not present in the \n\tfiltered data} \\\\ \n\n\nThe currently implemented method of computing $p$-values in the GASP pipeline can only take a single\nconsensus motif as input. To make the pipeline more robust, it would be beneficial to also add the ability\nto take multiple motifs or a PWM as input parameters in addition to a single consensus motif [2]. Although\nthe results so far do seem moderately promising, it is difficult to tell the effectiveness of these \ncomputations for $p$-value without other scoring mechanisms for relative comparison and increased\nconfidence.\n\n\\subsection{Side Notes on Unfiltered vs. Filtered Data}\nAfter studying some of the $p$-values generated from the unfiltered and filtered data, there were some\ninteresting discrepancies between the two as a result of the low-complexity sequence masking. 
In general,\nmost of the sequences retain their relative order within the top twenty sequences; however there are some\nnotable outliers:\n\n\\begin{center}\n\t\\begin{tabular}{| c | c | c | c |}\n\t\t\\hline\n\t\tUnfiltered Motif & $p$-value & Unfiltered Rank & Filtered Rank \\\\\n\t\t\\hline\n\t\tTTTTTTTTTTTT & $2.6499983151712726\\times10^{-58}$ & 1 & N/A \\\\\n\t\tTTTTTCTTTTTT & $1.1428538156799511\\times10^{-10}$ & 2 & N/A \\\\\n\t\tTTTTTTCTTTTT & $2.1281839612839577\\times10^{-8}$ & 3 & N/A \\\\\n\t\tCAGCACTCCTGC & $3.0750613401552712\\times10^{-8}$ & 4 & 1 \\\\\n\t\tTCAGCACTCCTG & $1.1021432100320698\\times10^{-7}$ & 5 & 2 \\\\\n\t\tATGCTTTCGACG & $3.6752200803119519\\times10^{-7}$ & 6 & 5 \\\\\n\t\tGGAATGCTTCCT & $3.6964267262968008\\times10^{-7}$ & 7 & 3 \\\\\n\t\tGCTTTCGACGTA & $3.6964267262968008\\times10^{-7}$ & 8 & 4 \\\\\n\t\tTGGAATGCTTCC & $3.7012341268741639\\times10^{-7}$ & 9 & 6 \\\\\n\t\tTGCTTTCGACGT & $4.3908822022035107\\times10^{-7}$ & 10 & 8 \\\\\n\t\tGCTTCCTTTCTG & $5.5256668497249088\\times10^{-7}$ & 11 & 14 \\\\\n\t\tCTTCCTTTCTGC & $5.8495188300691534\\times10^{-7}$ & 12 & 13 \\\\\n\t\tAAACTACCAAGG & $7.029208550022092\\times10^{-7}$ & 13 & 5145 \\\\\n\t\tGTTTCAAGAACG & $9.3670887239749622\\times10^{-7}$ & 14 & 11 \\\\\n\t\tTGTTTTAAATTA & $1.0455404078440172\\times10^{-6}$ & 15 & 323 \\\\\n\t\tAATTGCTTGGCA & $1.1048377352226602\\times10^{-6}$ & 16 & 17 \\\\\n\t\tCCAAACCTCATT & $1.1187120969193656\\times10^{-6}$ & 17 & 7 \\\\\n\t\tAATGCTTTCGAC & $1.1735144778682587\\times10^{-6}$ & 18 & 16 \\\\\n\t\tCAATGCTTTCGA & $1.1763281539111292\\times10^{-6}$ & 19 & 15 \\\\\n\t\tGGTCAATGCTTT & $1.3182132144143742\\times10^{-6}$ & 20 & 24 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{center}\n\nAs seen in the table above, there is a huge discrepancy between the ranking for unfiltered motif rank 13\nand the filtered counterpart with rank 5145. The $p$-values are quite different as well, with the unfiltered\n$p = 7.029208550022092\\times10^{-7}$ and the filtered $p = 8.5705808504802467\\times10^{-4}$.\n\n\\pagebreak\n\\section{References}\n[1] Milton, J. Susan, and Jesse C. Arnold. \\textit{Introduction to Probability and Statistics: \\\\\n\\indent \\indent Principles and Applications for Engineering and the Computing Sciences}. Boston: \\\\\n\\indent \\indent McGraw-Hill, 2003. Print. \\\\\n\n\\noindent [2] Zhang J, Jiang B, Li M, Tromp J, Zhang X, Zhang M: \"Computing exact P-values for \\\\\n\\indent \\indent DNA motifs\",  \\textit{Bioinformatics} 2007, 23(5): 531-537. 
\\\\\n\n\\end{document}\n", "meta": {"hexsha": "8657127dd233854128d8031cc6dc4b43e9f871d8", "size": 10964, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Papers/pvalue.tex", "max_stars_repo_name": "yanshen43/MCAT", "max_stars_repo_head_hexsha": "336c5deea456dae0916fbc8935930402a4acbcad", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Papers/pvalue.tex", "max_issues_repo_name": "yanshen43/MCAT", "max_issues_repo_head_hexsha": "336c5deea456dae0916fbc8935930402a4acbcad", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Papers/pvalue.tex", "max_forks_repo_name": "yanshen43/MCAT", "max_forks_repo_head_hexsha": "336c5deea456dae0916fbc8935930402a4acbcad", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.1041666667, "max_line_length": 110, "alphanum_fraction": 0.7280189712, "num_tokens": 3525, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619350028204, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5571421959462199}}
{"text": "\\chapter{Generalized cofinite filters}\n\nThe following is a straightforward generalization of cofinite filter.\n\n\\begin{defn}\n  $\\Omega_{1 a} = \\bigsqcap^{\\mathfrak{A}}_{X \\in\n  \\operatorname{coatoms}^{\\mathfrak{Z}}} X$; $\\Omega_{1 b} =\n  \\bigsqcap^{\\mathfrak{A}}_{X \\in \\operatorname{coatoms}^{\\mathfrak{A}}} X$.\n\\end{defn}\n\n\\begin{prop}\nThe following is an implications tuple:\n\\begin{enumerate}\n \\item\\label{omeq-pow} $(\\mathfrak{A},\\mathfrak{Z})$ is a powerset filtrator.\n \\item\\label{omeq-flt} $(\\mathfrak{A},\\mathfrak{Z})$ is a primary filtrator.\n \\item\\label{omeq-res} $\\Omega_{1 a} = \\Omega_{1 b}$ for this filtrator.\n\\end{enumerate}  \n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n \\item[\\ref{omeq-pow}$\\Rightarrow$\\ref{omeq-flt}] Obvious.\n \\item[\\ref{omeq-flt}$\\Rightarrow$\\ref{omeq-res}] Proposition~\\ref{coat}.\n\\end{description}\n\\end{proof}\n\n\\begin{prop}\n  Let $(\\mathfrak{A},\\mathfrak{Z})$ be a primary filtrator.\n  Let $\\mathfrak{Z}$ be a subset of $\\subsets U$. Let it be a\n  meet-semilattice with greatest element.\n  Let also every non-coempty cofinite set lies in\n  $\\mathfrak{Z}$. Then\n  \\begin{equation}\n    \\corestar \\Omega = \\setcond{ Y \\in \\mathfrak{Z} }{\n    \\card \\atoms^{\\mathfrak{Z}} Y \\geq \\omega } .\n    \\label{d-cofin}\n  \\end{equation}\n\\end{prop}\n\n\\begin{proof}\n  $\\Omega$ exists by corollary~\\ref{filt-is-complete}.\n\\begin{multline*}\n  Y \\in \\corestar \\Omega \\Leftrightarrow Y \\nasymp^{\\mathfrak{A}} \n  \\bigsqcap^{\\mathfrak{A}}_{X \\in \\operatorname{coatoms}^{\\mathfrak{Z}}} X\n  \\Leftrightarrow \\\\ \\text{(by properties of filter bases)} \\Leftrightarrow \\\\\n  \\forall S \\in \\subsets_{\\operatorname{fin}} \\operatorname{coatoms}^{\\mathfrak{Z}} : Y\n  \\nasymp^{\\mathfrak{A}} \\bigsqcap^{\\mathfrak{A}} S \\Leftrightarrow \\\\\n  \\text{(corollary~\\ref{f-meet-closed})} \\Leftrightarrow \\forall S \\in \\subsets_{\\operatorname{fin}}\n  \\operatorname{coatoms}^{\\mathfrak{Z}} : Y \\nasymp \\bigsqcap S \\Leftrightarrow \\\\\n  \\forall K \\in \\subsets_{\\operatorname{fin}} U : Y \\setminus K \\neq \\emptyset\n  \\Leftrightarrow \\\\ \\card Y \\geq \\omega \\Leftrightarrow \\card\n  \\atoms^{\\mathfrak{Z}} Y \\geq \\omega.\n\\end{multline*}\n  (Here $\\subsets_{\\operatorname{fin}}$ denotes\n  the set of finite subsets.)\n\\end{proof}\n\n\\begin{cor}\n  Formula (\\ref{d-cofin}) holds for both reloids and funcoids.\n\\end{cor}\n\n\\begin{proof}\n  For reloiods it's straightforward, for funcoids take that they are\n  isomorphic to filters on lattice $\\Gamma$.\n\\end{proof}\n\n\\begin{cor}\n$\\Omega^{\\mathsf{FCD}} \\ne \\bot^{\\mathsf{FCD}}$ (for $\\mathsf{FCD}(A,B)$ where $A\\times B$ is an infinite set).\n\\end{cor}\n\n\\begin{prop}\\label{omega-bot}\nThe following is an implications tuple:\n\\begin{enumerate}\n \\item\\label{omega-bot-pow} $(\\mathfrak{A},\\mathfrak{Z})$\n  is a powerset filtrator.\n \\item\\label{omega-bot-flt} $(\\mathfrak{A},\\mathfrak{Z})$\n   is a primary filtrator over an atomic ideal base\n   and $\\forall \\alpha \\in\n  \\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}}: a\n  \\nsqsubseteq X$.\n \\item\\label{omega-bot-cond} $\\Omega_{1 a}$ and $\\Cor \\Omega_{1 a}$ are defined,\n  $\\forall \\alpha \\in\n  \\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}}: a\n  \\nsqsubseteq X$ and $\\mathfrak{Z}$ is an atomic poset.\n \\item\\label{omega-bot-res} $\\Cor \\Omega_{1 
a} = \\bot^{\\mathfrak{Z}}$.\n\\end{enumerate}  \n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n\\item[\\ref{omega-bot-pow}$\\Rightarrow$\\ref{omega-bot-flt}]\n  Obvious.\n\\item[\\ref{omega-bot-flt}$\\Rightarrow$\\ref{omega-bot-cond}]\n  Obvious.\n\\item[\\ref{omega-bot-cond}$\\Rightarrow$\\ref{omega-bot-res}]\n  Suppose $\\alpha \\in \\atoms^{\\mathfrak{Z}} \\Cor \\Omega_{1 a}$. Then\n  $\\exists X \\in \\up \\Omega_{1 a} : \\alpha \\nsqsubseteq X$.\n  Therefore $\\alpha \\notin \\atoms^{\\mathfrak{Z}} \\Cor \\Omega_{1 a}$. So $\\atoms^{\\mathfrak{Z}}\n  \\Cor \\Omega_{1 a} = \\emptyset$ and thus by atomicity $\\Cor\n  \\Omega_{1 a} = \\bot^{\\mathfrak{Z}}$.\n\\end{description}\n\\end{proof}\n\n\\begin{cor}\n  $\\Cor \\Omega^{\\mathsf{FCD}} = \\bot$.\n\\end{cor}\n\n\\begin{prop}\nThe following is an implications tuple:\n\\begin{enumerate}\n\\item\\label{om-max-pow} $(\\mathfrak{A},\\mathfrak{Z})$ is a powerset filtrator.\n\\item\\label{om-max-flt} $(\\mathfrak{A},\\mathfrak{Z})$ is a primary filtrator\n  over an atomic meet-semilattice with greatest element such that\n  $\\forall \\alpha \\in\n  \\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}}: \\alpha\n  \\nsqsubseteq X$.\n\\item\\label{om-max-cond} $\\mathfrak{A}$ is\n  a complete lattice,\n  $\\forall \\alpha \\in\n  \\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}}: \\alpha\n  \\nsqsubseteq X$ and~$(\\mathfrak{A}; \\mathfrak{Z})$ is a filtered filtrator over an atomic poset.\n\\item\\label{om-max-res} $\\Omega_{1 a} = \\max \\setcond{ \\mathcal{X} \\in \\mathfrak{A} }{\n  \\Cor \\mathcal{X} = \\bot^{\\mathfrak{Z}} }$\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n\\item[\\ref{om-max-pow}$\\Rightarrow$\\ref{om-max-flt}]\n  Obvious.\n\n\\item[\\ref{om-max-flt}$\\Rightarrow$\\ref{om-max-cond}]\n  Obvious.\n\n\\item[\\ref{om-max-cond}$\\Rightarrow$\\ref{om-max-res}]\n  Due to the last proposition, it is enough to show that $\\Cor \\mathcal{X}\n  = \\bot^{\\mathfrak{Z}} \\Rightarrow \\mathcal{X} \\sqsubseteq \\Omega_{1 a}$ for\n  every $\\mathcal{X} \\in \\mathfrak{A}$.\n  \n  Let $\\Cor \\mathcal{X} = \\bot^{\\mathfrak{Z}}$ for some $\\mathcal{X} \\in\n  \\mathfrak{A}$. Because our filtrator is filtered, it's enough to show~$X\\in\\up\\mathcal{X}$\n  for every~$X \\in \\up \\Omega_{1 a}$. $X = a_0 \\sqcap \\ldots \\sqcap a_n$ for $a_i$\n  being coatoms of $\\mathfrak{Z}$. $a_i \\sqsupseteq \\mathcal{X}$ because\n  otherwise $a_i \\nsqsupseteq \\Cor \\mathcal{X}$. 
So $X\n  \\in \\up \\mathcal{X}$.\n\\end{description}\n\\end{proof}\n\n\\begin{prop}\nThe following is an implications tuple:\n\\begin{enumerate}\n\\item\\label{omfinm-pow} $(\\mathfrak{A},\\mathfrak{Z})$ is\n  a powerset filtrator.\n\\item\\label{omfinm-flt} $(\\mathfrak{A},\\mathfrak{Z})$ is\n  a primary filtrator over a meet-semilattice.\n\\item\\label{omfinm-res} $\\up \\Omega_{1a} = \\setcond{ \\bigsqcap S }{ S \\in\n  \\subsets_{\\operatorname{fin}} \\operatorname{coatoms}^{\\mathfrak{Z}} }$\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n \\item[\\ref{omfinm-pow}$\\Rightarrow$\\ref{omfinm-flt}]\n  Obvious.\n \\item[\\ref{omfinm-flt}$\\Rightarrow$\\ref{omfinm-res}]\n  Because $\\setcond{ \\bigsqcap S }{ S \\in\n  \\subsets_{\\operatorname{fin}} \\operatorname{coatoms}^{\\mathfrak{Z}} }$ is a\n  filter.\n\\end{description}\n\\end{proof}\n\n\\begin{cor}\n  $\\up \\Omega^{\\mathsf{FCD}} = \\up\n  \\Omega^{\\mathsf{RLD}}$.\n\\end{cor}\n\n\\begin{defn}\n$\\Omega_{1c} =\n\\bigsqcup(\\atoms^{\\mathfrak{A}}\\setminus\\mathfrak{Z})$.\n\\end{defn}\n\n\\begin{prop}\nThe following is an implications tuple:\n\\begin{enumerate}\n \\item\\label{omeq-ca-pow} $(\\mathfrak{A}; \\mathfrak{Z})$ is a powerset\n  filtrator.\n \\item\\label{omeq-ca-flt} $(\\mathfrak{A}; \\mathfrak{Z})$ is a down-aligned\n  filtered complete lattice filtrator over an atomistic poset and\n  $\\forall \\alpha \\in \\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}} : \\alpha \\nsqsubseteq X$.\n \\item\\label{omeq-ca-res} $\\Omega_{1c} = \\Omega_{1a}$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{description}\n \\item[\\ref{omeq-ca-pow}$\\Rightarrow$\\ref{omeq-ca-flt}]\n  Obvious.\n \\item[\\ref{omeq-ca-flt}$\\Rightarrow$\\ref{omeq-ca-res}]\n  For $x\\in\\atoms^{\\mathfrak{A}}\\setminus\\mathfrak{Z}$ we have\n  $\\Cor x=\\bot$ because otherwise $\\bot\\ne\\Cor x\\sqsubset x$.\n  Thus by the previous proposition $x\\sqsubseteq\\Omega_{1a}$ and so\n  $\\Omega_{1c} =\n  \\bigsqcup(\\atoms^{\\mathfrak{A}}\\setminus\\mathfrak{Z}) \\sqsubseteq\n  \\Omega_{1a}$.\n\n  If $x\\in\\atoms\\Omega_{1a}$ then $x\\notin\\mathfrak{Z}$ because otherwise\n  $\\Cor x\\ne\\bot$. So \\[ \\Omega_{1a}=\\bigsqcup\\atoms \\Omega_{1a}=\n  \\bigsqcup(\\atoms \\Omega_{1a}\\setminus\\mathfrak{Z})\\sqsubseteq\n  \\bigsqcup(\\atoms^{\\mathfrak{A}}\\setminus\\mathfrak{Z}) =\n  \\Omega_{1c}. 
\\]\n\\end{description}\n\\end{proof}\n\n\\begin{thm}\\label{cor-adjom}\nThe following is an implications tuple:\n\\begin{enumerate}\n\\item\\label{cor-adj-omega-pow}\n  $(\\mathfrak{A},\\mathfrak{Z})$ is a powerset filtrator.\n\\item\\label{cor-adj-omega-cond}\n  $(\\mathfrak{A},\\mathfrak{Z})$ is a primary filtrator over\n  a complete atomic boolean lattice.\n\\item\\label{cor-adj-omega-flt} All of the following:\n  \\begin{enumerate}\n    \\item $\\mathfrak{A}$ is an atomistic complete starrish lattice.\n    \\item $\\mathfrak{Z}$ is a complete atomistic lattice.\n    \\item $(\\mathfrak{A},\\mathfrak{Z})$ is a filtered\n      down-aligned filtrator with binarily meet-closed core.\n  \\end{enumerate}\n\\item\\label{cor-adj-omega-res} $\\Cor'$ is the lower adjoint of\n  $\\Omega_{1c}\\sqcup^{\\mathfrak{A}}-$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n~\n\\begin{widedisorder}\n\\item[\\ref{cor-adj-omega-pow}$\\Rightarrow$\\ref{cor-adj-omega-cond}]\nObvious.\n\n\\item[\\ref{cor-adj-omega-cond}$\\Rightarrow$\\ref{cor-adj-omega-flt}]\nObvious.\n\n\\item[\\ref{cor-adj-omega-flt}$\\Rightarrow$\\ref{cor-adj-omega-res}]\nThe filtrator is with join-closed core by theorem~\\ref{semifilt-joinclosed}.\n\nWe will prove $\\Cor'\\mathcal{X} \\sqsubseteq \\mathcal{Y} \\Leftrightarrow\n\\mathcal{X} \\sqsubseteq \\Omega_{1c} \\sqcup \\mathcal{Y}$.\n\nBy atomisticity it is equivalent to:\n$\\atoms^{\\mathfrak{A}}\\Cor'\\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}\\mathcal{Y}\n\\Leftrightarrow\n\\atoms^{\\mathfrak{A}}\\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}(\\Omega_{1c} \\sqcup \\mathcal{Y})$;\nby theorem~\\ref{dual-core-join} this becomes\n$\\atoms^{\\mathfrak{A}}\\Cor'\\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}\\mathcal{Y}\n\\Leftrightarrow\n\\atoms^{\\mathfrak{A}}\\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}\\Omega_{1c} \\cup \\atoms^{\\mathfrak{A}}\\mathcal{Y}$;\nwhich, by the computations below, is equivalent to:\n$\\atoms^{\\mathfrak{Z}}  \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{Z}}  \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{A}}  \\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}\n\\Omega_{1c} \\cup \\atoms^{\\mathfrak{A}}  \\mathcal{Y}$.\n\n$\\Cor' \\mathcal{X} \\sqsubseteq \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{A}} \\Cor' \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{A}}  \\mathcal{Y} \\Rightarrow\n\\atoms^{\\mathfrak{Z}} \\Cor' \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{Z}}  \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{Z}}  \\mathcal{X} \\subseteq \\atoms^{\\mathfrak{Z}}\n\\mathcal{Y}$;\n\n$\\atoms^{\\mathfrak{Z}}  \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{Z}}  \\mathcal{Y} \\Rightarrow\n\\text{(theorem~\\ref{cor-join-atom})} \\Rightarrow\n\\Cor' \\mathcal{X}\n\\sqsubseteq \\Cor' \\mathcal{Y} \\Rightarrow\n\\text{(theorem~\\ref{f-cor-max})} \\Rightarrow\n\\Cor' \\mathcal{X} \\sqsubseteq \\mathcal{Y}$.\n\nFinishing the proof:\n$\\atoms^{\\mathfrak{A}} \\mathcal{X} \\subseteq \\atoms^{\\mathfrak{A}}\n\\Omega_{1c} \\cup \\atoms^{\\mathfrak{A}} \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{A}} \\mathcal{X} \\subseteq\n(\\atoms^{\\mathfrak{A}} \\setminus \\mathfrak{Z})\n\\cup \\atoms^{\\mathfrak{A}} \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{Z}} \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{A}} \\mathcal{Y} \\Leftrightarrow\n\\atoms^{\\mathfrak{Z}} \\mathcal{X} \\subseteq\n\\atoms^{\\mathfrak{Z}} \\mathcal{Y}$.\n\\end{widedisorder}\n\\end{proof}\n\nNext there is an alternative proof of the above theorem.\nThis alternative proof requires the additional condition\n$\\forall \\alpha \\in 
\\atoms^{\\mathfrak{Z}} \\exists X \\in \\operatorname{coatoms}^{\\mathfrak{Z}} : \\alpha\n\\nsqsubseteq X$, however.\n\n\\begin{proof}\nDefine $\\Omega = \\Omega_{1a} = \\Omega_{1c}$.\n\nThe filtrator is with join-closed core by theorem~\\ref{semifilt-joinclosed}.\n\nIt's enough to prove that\n\\[\n\\mathcal{X}\\sqsubseteq\\Omega\\sqcup^{\\mathfrak{A}}\\Cor'\\mathcal{X}\\quad\\text{and}\\quad\\Cor'(\\Omega\\sqcup^{\\mathfrak{A}}\\mathcal{Y})\\sqsubseteq\\mathcal{Y}.\n\\]\n$\\Cor'(\\Omega\\sqcup^{\\mathfrak{A}}\\mathcal{Y}) =\n\\text{(theorem~\\ref{dual-core-join})} =\n\\Cor'\\Omega\\sqcup^{\\mathfrak{Z}}\\Cor'\\mathcal{Y} =\n\\text{(proposition~\\ref{omega-bot})} =\n\\bot^{\\mathfrak{Z}}\\sqcup^{\\mathfrak{Z}}\\Cor'\\mathcal{Y} \\sqsubseteq\n\\text{(theorem~\\ref{f-cor-max})} \\sqsubseteq\n\\mathcal{Y}$.\n\n$\\Omega\\sqcup^{\\mathfrak{A}}\\Cor'\\mathcal{X} =\n\\bigsqcup\\atoms(\\Omega\\sqcup^{\\mathfrak{A}}\\Cor'\\mathcal{X}) =\n\\bigsqcup(\\atoms\\Omega\\cup\\atoms\\Cor'\\mathcal{X}) =\n\\bigsqcup\\atoms\\Omega\\sqcup\\bigsqcup\\atoms\\Cor'\\mathcal{X} \\sqsupseteq\n\\bigsqcup(\\atoms\\mathcal{X}\\setminus\\mathfrak{Z}) \\sqcup\n\\bigsqcup(\\atoms\\mathcal{X}\\cap\\mathfrak{Z}) =\n\\bigsqcup((\\atoms\\mathcal{X}\\setminus\\mathfrak{Z}) \\cup\n(\\atoms\\mathcal{X}\\cap\\mathfrak{Z})) =\n\\bigsqcup\\atoms\\mathcal{X} = \\mathcal{X}$.\n\\end{proof}\n\n\\begin{cor}\n Under the conditions of the last theorem,\n $\\Cor'\\bigsqcup^{\\mathfrak{A}} S=\\bigsqcup^{\\mathfrak{A}} \\rsupfun{\\Cor'}S$.\n\\end{cor}\n\n\\begin{prop}\nThe following is an implications tuple:\n\\begin{enumerate}\n \\item\\label{corom-pow} $(\\mathfrak{A},\\mathfrak{Z})$ is a powerset\n  filtrator.\n \\item\\label{corom-flt} $(\\mathfrak{A},\\mathfrak{Z})$ is a primary\n  filtrator over a complete atomic boolean lattice.\n \\item\\label{corom-cond} All of the following:\n  \\begin{enumerate}\n    \\item $\\mathfrak{A}$ is an atomistic complete co-brouwerian lattice.\n    \\item $\\mathfrak{Z}$ is a complete atomistic lattice.\n    \\item $(\\mathfrak{A},\\mathfrak{Z})$ is a filtered\n      down-aligned filtrator with binarily meet-closed core.\n  \\end{enumerate}\n \\item\\label{corom-res} $\\Cor'\\mathcal{X} = \\mathcal{X}\\psetminus\\Omega_{1c}$\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n~\n\\begin{enumerate}\n \\item[\\ref{corom-pow}$\\Rightarrow$\\ref{corom-flt}]\n  Obvious.\n \\item[\\ref{corom-flt}$\\Rightarrow$\\ref{corom-cond}]\n  Because a complete atomic boolean lattice is isomorphic\n  to a powerset.\n \\item[\\ref{corom-cond}$\\Rightarrow$\\ref{corom-res}]\n  Theorems~\\ref{cor-adjom} and~\\ref{cobrow}.\n\\end{enumerate}\n\\end{proof}\n\n\\begin{prop}\n  ~  \n  \\begin{enumerate}\n    \\item $\\langle \\Omega^{\\mathsf{FCD}} \\rangle \\{ x \\} = \\Omega^U$;\n    \n    \\item $\\langle \\Omega^{\\mathsf{FCD}} \\rangle p = \\top$ for every\n    nontrivial atomic filter $p$.\n  \\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n  $\\langle \\Omega^{\\mathsf{FCD}} \\rangle \\{ x \\} =\n  \\bigsqcap^{\\mathfrak{A}}_{y \\in U} (U \\setminus \\{ y \\}) = \\Omega^U$;\n  $\\langle \\Omega^{\\mathsf{FCD}} \\rangle p = \\bigsqcap^{\\mathfrak{A}}_{y\n  \\in U} \\top = \\top$.\n\\end{proof}\n\n\\begin{prop}\n  $\\tofcd \\Omega^{\\mathsf{RLD}} =\n  \\Omega^{\\mathsf{FCD}}$.\n\\end{prop}\n\n\\begin{proof}\n  $\\tofcd \\Omega^{\\mathsf{RLD}} =\n  \\bigsqcap^{\\mathsf{FCD}} \\up \\Omega^{\\mathsf{RLD}} =\n  \\Omega^{\\mathsf{FCD}}$.\n\\end{proof}\n\n\\begin{prop}\n  $(\\mathsf{RLD})_{\\operatorname{out}} \\Omega^{\\mathsf{FCD}} =\n  
\\Omega^{\\mathsf{RLD}}$.\n\\end{prop}\n\n\\begin{proof}\n  $(\\mathsf{RLD})_{\\operatorname{out}} \\Omega^{\\mathsf{FCD}} =\n  \\bigsqcap^{\\mathsf{RLD}} \\up \\Omega^{\\mathsf{FCD}} =\n  \\bigsqcap^{\\mathsf{RLD}} \\up \\Omega^{\\mathsf{RLD}} =\n  \\Omega^{\\mathsf{RLD}}$.\n\\end{proof}\n\n\\begin{prop}\n  $(\\mathsf{RLD})_{\\operatorname{in}} \\Omega^{\\mathsf{FCD}} = \\Omega^{\\mathsf{RLD}}$.\n\\end{prop}\n\n\\begin{proof}\n  \\begin{multline*}\n  (\\mathsf{RLD})_{\\operatorname{in}} \\Omega^{\\mathsf{FCD}} = \\bigsqcup\n  \\setcond{ a \\times^{\\mathsf{RLD}} b }{ a \\in\n  \\atoms^{\\mathscr{F}}, b \\in \\atoms^{\\mathscr{F}}, a\n  \\times^{\\mathsf{FCD}} b \\sqsubseteq \\Omega^{\\mathsf{FCD}}\n  } = \\\\\n  \\bigsqcup \\setcond{ a \\times^{\\mathsf{RLD}} b }{\n  a \\in \\atoms^{\\mathscr{F}}, b \\in\n  \\atoms^{\\mathscr{F}}, \\text{not $a$ and $b$ both trivial} } = \\\\\n  \\bigsqcup \\setcond{ \\bigsqcup \\atoms (a \\times^{\\mathsf{RLD}} b)\n  }{ a \\in \\atoms^{\\mathscr{F}}, b \\in\n  \\atoms^{\\mathscr{F}}, \\text{not $a$ and $b$ both trivial} } = \\\\\n  \\bigsqcup \\bigcup \\setcond{ \\atoms (a \\times^{\\mathsf{RLD}} b) }{\n  a \\in \\atoms^{\\mathscr{F}}, b \\in\n  \\atoms^{\\mathscr{F}}, \\text{not $a$ and $b$ both trivial} } = \\\\\n  \\bigsqcup \\left( \\text{nontrivial atomic reloids under $A \\times B$} \\right) =\n  \\Omega^{\\mathsf{RLD}}.\n  \\end{multline*}\n\\end{proof}\n", "meta": {"hexsha": "3f14fd40eb7cc6db581b37450b3a95001e8f89b2", "size": 15509, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-cofinite.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-cofinite.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-cofinite.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.0674418605, "max_line_length": 153, "alphanum_fraction": 0.6741247018, "num_tokens": 6025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619263765707, "lm_q2_score": 0.6757646140788307, "lm_q1q2_score": 0.5571421955005527}}
{"text": "% Copyright 2017 Markus J. Pflaum, licensed under CC BY-NC-ND 4.0\n% main author: \n%   Markus J. Pflaum\n%\n\\section{Function spaces and their topologies}\n\\label{sec:function-spaces-topologies}\n\n\\begin{proposition}\n  \\label{thm:metric-structure-space-bounded-maps}\n  Let $X$ be a topological space and $(Y,d)$ a metric space.\n  Then the following holds true.\n  \\begin{romanlist}\n  \\item\\label{ite:metric-space-bounded-maps}\n  The space\n  \\[\n    \\bFcts (X,Y) = \\big\\{ f : X \\to Y \\bigmid\n    \\exists y_0\\in Y \\, \\exists C >0 \\, \\forall x\\in X : \\:  d\\big(f(x),y_0\\big) \\leq C\\big\\}\n    %\\sup_{x\\in X} d\\big(f(x),y_0\\big) < \\infty \\text{ for some } y_0\\in Y \\big\\}\n  \\]\n  of bounded functions from $X$ to $Y$ is a metric space with metric\n  \\[\n    \\varrho : \\bFcts (X,Y) \\times \\bFcts (X,Y) \\to \\R_{\\geq 0} , \\: (f,g) \\mapsto\n     \\sup_{x\\in X} d\\big( f(x),g(x)\\big)   \\ .\n  \\]\n  \\item\n  \\label{ite:completeness-space-bounded-maps-range-complete-metric-space}\n    If $(Y,d)$ is complete, then $(\\bFcts (X,Y),\\varrho)$ is so, too.\n  \\item\n  \\label{ite:closedness-bounded-continuous-functions-space-bounded-functions}\n    The space\n    \\[\n      \\shContFcts_\\textup{b} (X,Y) = \\shContFcts (X,Y) \\cap \\bFcts (X,Y) \n    \\]\n    of continuous bounded functions from $X$ to $Y$ is a closed subspace of $\\bFcts (X,Y)$. \n  \\end{romanlist}\n\\end{proposition}\n\\begin{proof}\n  Note first that  by the triangle inequality there exists for every $f\\in \\bFcts (X,Y)$ and $y\\in Y$\n  a real number $C_{f,y}>0$ such that\n  \\[ \n    d\\big(f(x),y\\big) \\leq C_{f,x} \\quad \\text{for all } x\\in X \\ .\n  \\]  \n  \\begin{adromanlist}\n  \\item\n    Before verifying the axioms of a metric for $\\varrho$ we need to show that $\\varrho$ is well-defined meaning that\n    $\\sup_{x\\in X} d\\big( f(x),g(x)\\big) < \\infty$ for all $f,g \\in \\bFcts (X,Y)$. \n    To this end fix some $y\\in Y$ and observe using the triangle inequality\n    that  \n    \\[\n      d\\big( f(x),g(x)\\big)  \\leq d\\big( f(x), y\\big) + d\\big( y, g(x)\\big)\n      \\leq C_{f,y} + C_{g,y} \\quad \\text{for all } x \\in X \\ . \n    \\]\n    Since furthermore $d\\big( f(x),g(x)\\big) \\geq 0$ for all $x\\in X$, the map $\\varrho$ is well-defined indeed\n    with image in $\\R_{\\geq 0}$.\n    If $\\varrho (f,g) = 0$, then $d \\big( f(x),g(x) \\big)=0$ for all $x\\in X$, hence $f=g$.\n    Obviously, $\\varrho$ is symmetric since $d$ is symmetric. Finally, let\n    $f,g,h\\in \\bFcts (X,Y)$ and check using the triangle inequality for $d$:\n    \\begin{equation*}\n      \\begin{split}\n      \\varrho (f,g) & = \\sup_{x\\in X} d\\big( f(x),g(x)\\big)  \\leq\n      \\sup_{x\\in X} \\left( d\\big( f(x),h (x)\\big) + d\\big( h(x),g(x)\\big)\\right)  \\leq \\\\\n      & \\leq \\sup_{x\\in X} d\\big( f(x),h (x)\\big) + \\sup_{x\\in X} d\\big( h(x),g(x)\\big) =\n      d(f,h) + d(h,g) \\ . \n      \\end{split}\n    \\end{equation*}\n    Hence  $\\varrho$ is a metric. \n   \\item\n     Assume $(Y,d)$ to be complete and let $(f_n)_{n\\in\\N}$ be a  Cauchy sequence in $\\bFcts (X,Y)$.\n     Let $\\varepsilon >0$ and choose $N_\\varepsilon \\in \\N$ so that\n     \\[\n           \\varrho (f_n,f_m) < \\varepsilon \\quad \\text{for all } n,m\\geq N \\ . 
\n     \\]\n     Then for every $x\\in X$ the relation\n     \\begin{equation}\n       \\label{eq:cauchy-sequence-relation-values}\n       d\\big(f_n(x),f_m(x)\\big) < \\varepsilon \\quad \\text{for all } n,m\\geq N_\\varepsilon \n     \\end{equation}\n     holds true, so $(f_n(x))_{n\\in \\N}$ is a Cauchy sequence in $Y$.\n     By completeness of $(Y,d)$ it has a limit which we denote by $f(x)$.\n     By passing to the limit $m\\to \\infty$ in \\eqref{eq:cauchy-sequence-relation-values}\n     one obtains that\n     \\begin{equation}\n       \\label{eq:limit-relation-values}\n       d\\big(f(x),f_n(x)\\big) \\leq \\varepsilon \\quad \\text{for all } x \\in X \\text{ and } n \\geq N_\\varepsilon \\ . \n     \\end{equation}\n     Using the triangle inequality one infers from this for an element $y\\in Y$ which we now fix that\n     \\[\n       d\\big(f(x),y\\big) \\leq  d\\big(f(x),f_{N_1}(x)\\big) + d\\big(f_{N_1}(x),y \\big) \\leq\n       1  +  C_{f_{N_1},y} \\ . \n     \\]\n     Hence $f$ is a bounded function. Moreover, \\eqref{eq:limit-relation-values} entails\n     that\n     \\[\n       \\varrho (f,f_n) = \\sup_{x\\in X} d\\big(f(x),f_n(x)\\big) \\leq \\varepsilon\n       \\quad \\text{for all } n \\geq N_\\varepsilon \\ ,\n     \\]\n     so $(f_n)_{n\\in \\N}$ converges to $f$. \n   \\item\n     We have to show that the limit $f$ of a sequence $(f_n)_{n\\in\\N}$ of functions\n     $f_n \\in \\shContFcts_\\textup{b} (X,Y)$ which converges in $(\\bFcts (X,Y),\\varrho)$ has to be continuous.\n     To this end let $\\varepsilon >0$ and choose $N_\\varepsilon \\in \\N$ so that\n     \\[\n             \\varrho (f_n,f) < \\frac{\\varepsilon}{3}  \\quad \\text{for all } n \\geq N_\\varepsilon \\ .\n     \\]\n     Let $x_0\\in X$. By continuity of $f_{N_\\varepsilon}$ there exists a neighborhood $U\\subset X$ of $x_0$ so\n     that\n     \\[\n          d\\big(f_{N_\\varepsilon} (x),f_{N_\\varepsilon}(x_0)\\big) <  \\frac{\\varepsilon}{3} \\quad \\text{for all } x\\in U \\ . \n     \\]\n     By the triangle inequality one concludes that\n     \\[\n       d\\big(f (x),f(x_0)\\big) \\leq \n       d\\big(f(x),f_{N_\\varepsilon}(x)\\big) + d\\big(f_{N_\\varepsilon} (x),f_{N_\\varepsilon}(x_0)\\big) +\n       d\\big(f_{N_\\varepsilon} (x_0),f(x_0)\\big)< \\varepsilon \n     \\]\n     for all $x\\in U$. Hence $f$ is continuous at $x_0$. Since $x_0\\in X$ was arbitrary, $f$ is a continuous map,\n     hence an element of $\\shContFcts_\\textup{b} (X,Y)$. 
\n  \\end{adromanlist}\\mbox{ }\n\\end{proof}\n\n\\begin{proposition}\n  Let $X$ be a topological space and $\\fldK$ the division algebra of real or complex numbers or of quaternions.\n  Then the following holds true.\n  \\begin{romanlist}\n  \\item\n    The space $\\bFcts (X,\\fldK)$ of bounded $\\fldK$-valued functions on $X$ can be expressed as\n    \\begin{equation}\n      \\label{eq:algebra-bounded-functions-values-real-complex-numbers-quaternions}\n      \\bFcts (X,\\fldK) = \n      \\big\\{ f : X \\to \\fldK \\bigmid \n      \\exists C > 0 \\, \\forall x\\in X :\\:  |f(x)| \\leq C \\big\\} \\ .\n    \\end{equation}\n    It carries the structure of a $\\fldK$-algebra by pointwise addition and multiplication of functions\n    and becomes  a Banach algebra when equipped with the \\emph{supremums-norm}\n    \\[\n      \\| \\cdot \\|_\\infty : \\: \\bFcts (X,\\fldK) \\to \\R_{\\geq 0},\\quad\n      f \\mapsto \\sup_{x\\in X} |f(x)| \\ .\n    \\]\n    \\item \n    The subspace $\\shContFcts_\\textup{b} (X,\\fldK) \\subset \\bFcts (X,\\fldK)$ \n    of bounded continuous $\\fldK$-valued functions on $X$ is a closed subalgebra of\n    $\\big( \\bFcts (X,\\fldK) , \\| \\cdot \\|_\\infty\\big)$,\n    so a Banach algebra as well when endowed with the supremums-norm. \n    For $X$ compact this means in particular that the algebra $\\big( \\shContFcts (X,\\fldK), \\| \\cdot \\|_\\infty\\big)$\n    is a Banach algebra.\n  \\end{romanlist}\n\\end{proposition}\n\n\\begin{proof}\n  Eq.~\\eqref{eq:algebra-bounded-functions-values-real-complex-numbers-quaternions} is obvious since\n  the distance of two elements $a,b\\in \\fldK$ is given by $d(a,b) = |a-b|$, so in particular\n  $d(a,0) = |a|$. Let $f,g \\in \\bFcts (X,\\fldK)$ and choose $C_f,C_g \\geq 0$ so that\n  $ |f(x)| \\leq C_f $ and $ |g(x)| \\leq C_g $ for all $x \\in X$. Then, by the triangle inequality\n  and absolute homogeneity of the absolute value, \n  \\[\n    | f(x) + g (x)| \\leq C_f + C_g , \\quad | a\\, f(x) | \\leq |a| \\, C_f , \\quad \\text{and} \\quad\n    | f(x) \\cdot g (x)| \\leq C_f \\cdot C_g \\ . \n  \\]\n  Hence the sum and the product of two bounded functions are bounded and so is any scalar multiple of a bounded function.\n  Therefore, $\\bFcts (X,\\fldK)$ is an algebra over $\\fldK$. Using the triangle inequality and absolute homogeneity of the\n  absolute value again one verifies that $\\| \\cdot \\|_\\infty$ is indeed a norm on $\\bFcts (X,\\fldK)$ and that\n  it fulfills $\\|fg \\|_\\infty\\leq \\|f \\|_\\infty \\cdot \\|g \\|_\\infty$ for all $f,g \\in \\bFcts (X,\\fldK)$. \n  Furthermore, by definition, $ \\|f \\|_\\infty = \\varrho (f,0)$ for all $f \\in \\bFcts (X,\\fldK)$, where $\\varrho$ is\n  defined as in \\Cref{thm:metric-structure-space-bounded-maps}.\n  Since $(\\bFcts (X,\\fldK),\\varrho) $ is a complete metric space,\n  $(\\bFcts (X,\\fldK),\\|\\cdot \\|_\\infty)$ therefore is a Banach algebra.  This proves the first claim.\n   \n  For the second observe that for $f, g \\in \\shContFcts_\\textup{b} (X,\\fldK)$ and $a\\in \\fldK$ the\n  sum $f+g$, the scalar multiple $af$, and the product $f\\cdot g$ are elements of  $\\shContFcts_\\textup{b} (X,\\fldK)$ again.\n  To verify this let $x\\in X$ and $\\varepsilon >0$. 
Choose neighborhoods\n  $U_1$ and $U_2$ of $x$ so that\n  \\[\n    | f (y) - f(x) |< \\min \\left\\{\\frac{\\varepsilon}{2}, \\frac{\\varepsilon}{|a|+1}, \\frac{\\varepsilon}{2(|g(x)|+1)} \\right\\}\n    \\quad \\text{for } y\\in U_1\n  \\]   \n  and\n  \\[\n    | g(y) - g(x) | < \\min \\left\\{1,\\frac{\\varepsilon}{2},\\frac{\\varepsilon}{2(|f(x)|+1)} \\right\\} \\quad\\text{for } y\\in U_2 \\ .\n  \\]\n  Then for all $y\\in U_1\\cap U_2$ \n  \\begin{equation*}\n    \\begin{split}\n      | (f+g) (y) - (f+g) (x) | & \\leq  | f (y) - f(x) | +  | g (y) - g (x) | < \\varepsilon \\ , \\\\\n      | (af) (y) - (af) (x) | & \\leq  |a|  \\cdot | f (y) - f(x) | < \\varepsilon \\ , \\\\\n      | (f\\cdot g) (y) - (f\\cdot g) (x) | & \\leq |g(y)| \\cdot | f (y) - f(x) |  +  |f(x) | \\cdot | g (y) - g (x) |\n       < \\varepsilon \\ .\n    \\end{split}\n  \\end{equation*}\n  This means that $f+g$, $af$ and $fg$ are continuous in $x$, hence elements of $\\shContFcts_\\textup{b} (X,\\fldK)$\n  since $x \\in X$ was arbitrary. So $\\shContFcts_\\textup{b} (X,\\fldK)$ is a subalgebra of $\\bFcts (X,\\fldK)$.   \n  By \\Cref{thm:metric-structure-space-bounded-maps} one knows that $\\shContFcts_\\textup{b} (X,\\fldK)$\n  is a closed subspace of $\\bFcts (X,\\fldK)$. The rest of the claim is obvious.\n\\end{proof}\n\n\\para\nAs the next step, we introduce seminorms and their topologies on spaces of differentiable functions defined over an\nopen set $\\Omega\\subset\\R^n$. We agree that from now on $\\Omega$ will always denote in this section an open subset\nof $\\R^n$. For any differentiability order $m\\in \\N \\cup \\{ \\infty\\}$\nthe symbol $\\shContFcts^m (\\Omega)$ stands for the space of $m$-times continuously differentiable complex valued\nfunctions on $\\Omega$. For $i=1,\\ldots,n$ we denote by $x^i : \\R^n \\to \\R$  the $i$-th coordinate function\nand, if $m\\geq 1$, by $\\partial_i : \\shDiffFcts{m} (\\Omega) \\to \\shDiffFcts{m-1} (\\Omega)$\nthe operator which maps $f \\in \\shContFcts^m (\\Omega)$ to the partial derivative\n$\\frac{\\partial f}{\\partial x^i}$. More generally, if $\\alpha \\in \\N^n$ is a multiindex satisfying\n$|\\alpha| = \\alpha_1+\\ldots+\\alpha_n \\leq m$, then we write\n$\\partial^\\alpha : \\shDiffFcts{m} (\\Omega) \\to \\shDiffFcts{m-|\\alpha|} (\\Omega)$  for the higher order partial derivative\nwhich maps $f \\in \\shDiffFcts{m} (\\Omega)$ to\n$\\frac{\\partial^{|\\alpha|} f}{\\partial x_1^{\\alpha_1} \\cdot \\ldots \\cdot \\partial x_n^{\\alpha_n}}$. Recall that the\nsum and the product of two $m$-times differentiable functions and scalar multiples of  $m$-times differentiable functions\nare again $m$-times differentiable, hence $\\shDiffFcts{m} (\\Omega)$ forms a $\\C$-algebra.\nNow we define $\\shDiffExtFcts{m} (\\Omega)$ to be the space of continuous functions on the closure $\\closure{\\Omega}$\nwhich are $m$-times continuously differentiable on $\\Omega$ such that each of their partial derivatives of order $\\leq m$\nhas a continuous extension to $\\closure{\\Omega}$. Since the operators  $\\partial_i$ are linear and also derivations\nby the Leibniz rule,  $\\shDiffExtFcts{m} (\\Omega)$ is a subalgebra of $\\shDiffFcts{m} (\\Omega)$. In general, these\nalgebras do not coincide as for example the function $\\frac 1x$ on $\\R_{>0}$ shows. 
It is an element\nof $\\shDiffFcts{\\infty} (\\R_{>0})$ but cannot be extended to a continuous function on $\\R_{\\geq 0}$,\nand so is not an element of $\\shDiffExtFcts{\\infty} (\\R_{>0})$.\n\nIf $X \\subset \\R^n$ is locally closed, which means that $X$ is the intersection of an open and a closed subset of $\\R^n$,\nthen define $\\shDiffFcts{m} (X)$ as the quotient space $\\shDiffFcts{m} (\\Omega) / \\shVanIdealJ{X} (\\Omega)$, where\n$\\Omega \\subset \\R^n$ open is chosen so that $X = \\closure{X} \\cap \\Omega$ and where\n$\\shVanIdealJ{X}$ denotes the ideal sheaf of all $m$-times continuously differentiable functions\nvanishing on $X$, that is\n\\[\n   \\shVanIdealJ{X} (\\Omega) = \\big\\{ f \\in \\shDiffFcts{m} (\\Omega)  \\bigmid f|_X = 0 \\big\\}  \\ .\n\\]\nUsing a smooth partition of unity type of argument one shows that  $\\shDiffFcts{m} (X)$ does not depend on the particular\nchoice of the neighborhood $\\Omega$ in which $X$ is relatively closed and that $\\shDiffFcts{m} (X)$ can be naturally\nidentified with the space of continuous functions on $X$ which have an extension to an element of\n$\\shDiffFcts{m} (\\Omega)$.  \n\n\\begin{proposition}\n  Let $\\Omega \\subset \\R^n$ be open and bounded and $m\\in \\gzN$. Then  $\\shDiffExtFcts{m} (\\Omega)$\n  equipped with the norm\n  \\[\n    \\| \\cdot \\|_{\\Omega,m} : \\shDiffExtFcts{m} (\\Omega) \\to \\R_{\\geq 0}, \\quad\n    f \\mapsto \\max_{|\\alpha| \\leq m} \\, \\sup_{x\\in \\Omega} \\big| \\partial^\\alpha f (x) \\big|\n  \\]\n  is a Banach space.\n\\end{proposition}", "meta": {"hexsha": "26c717c84e22ac9ccb2eaa2499b09ddb2d245762", "size": 12953, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Example/sections/function-spaces-topologies.tex", "max_stars_repo_name": "martinpflaum/latex_to_html", "max_stars_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-11-13T15:10:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T14:08:26.000Z", "max_issues_repo_path": "Example/sections/function-spaces-topologies.tex", "max_issues_repo_name": "martinpflaum/latex_to_html", "max_issues_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-11T13:18:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T22:02:11.000Z", "max_forks_repo_path": "Example/sections/function-spaces-topologies.tex", "max_forks_repo_name": "martinpflaum/latex_to_html", "max_forks_repo_head_hexsha": "65096594cb0891e56954627dc0abeb09bae6d2b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-13T15:22:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-13T15:22:47.000Z", "avg_line_length": 53.9708333333, "max_line_length": 124, "alphanum_fraction": 0.6258009727, "num_tokens": 4774, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.7718435083355187, "lm_q1q2_score": 0.5570728177053378}}
{"text": "\\section{Main Track Methods}\n\nThe main track of methods in community detection is quite clear according the published years of milestone papers. Modularity based papers are in dominant position in early 2000s after the concept of modularity is defined by \\cite{newman2004fast} and \\cite{newman2006modularity}. Spectral clustering methods start to raise in early 2010s, which aims to calculate eigenvectors from graph Laplacian matrix. \\cite{nascimento2011spectral} is a representative survey paper summarizes a series of relevant researches. Matrix factorization based approaches are also very popular in the similar time period to spectral clustering, as both types of approaches utilize matrix decomposition techniques. Stochastic block model is a type of statistical inference model which has a lot of variants in the recent decade. Recently with the rapid increasing in deep learning, more models tend to detect communities in either an end-to-end fashion or by formulating conventional models under deep learning frameworks. In this section, the six discussed tracks are the most popular ones and related studies are summarized in following paragraphs.\n\n\\subsection{Modularity}\n\nAs the most important metric to measure the fitness of a community partition, modularity reflects the superiority of how much the communities preserves in-community edges better than a random partition model. Mark Newman published a series of papers regarding to explain the modularity from various perspectives, such as edge connections and adjacency matrix transformation. Among these papers, \\cite{blondel2008fast} and \\cite{newman2006modularity} are two representative works to explain what is modularity and how to find out a proper community partition which maximizes the graph modularity efficiently. \n\nLouvain method \\cite{blondel2008fast} so far is the most efficient method to find the optimal partition with largest modularity. Within the paper, modularity is interpreted as the measurement to compare the density of within-community edges with that of between-community edges. It is calculated as:\n\n\\begin{equation}\n\tQ = \\frac{1}{2|E|}\\sum_{ij}(A_{ij} - \\frac{k_ik_j}{2|E|})\\delta(c_i,c_j)\n\\end{equation}\n\nwhere   $k_i$ is the degree of node $i$.  $\\delta(c_i,c_j) = 1$ if node $i$ and $j$ are in the same community, it equals to 0 otherwise.\n\nThe efficient Louvain method contains two phases and optimizes the modularity through an iterative approach. In the initialization step, all nodes are assigned to a single-node community. Then, for each node $i$, the paper considers to remove $i$ from its current community and plug into one of the communities which its neighbor nodes belong to. It will be re-assigned to a community which has the largest modularity gain. The second phase will run iteratively until no further modularity gain achieved. \\cite{traag2019louvain} is a follow-up work of Louvain method to guarantee the generated communities are well-connected.\n\nFPMQA model \\cite{bu2013fast} is a parallel model to detect modularity based communities efficiently. The steps are similar as Louvain method. It also initializes a set of single-node communities, and merges communities which leads to the largest modularity gain in each step. 
\n\nThe FPMQA model \\cite{bu2013fast} is a parallel model which detects modularity-based communities efficiently. The steps are similar to those of the Louvain method: it also initializes a set of single-node communities and, in each step, merges the pair of communities which leads to the largest modularity gain. The FPMQA model makes use of a mark array to store community states (busy or free) and merges communities based on their states in a parallel fashion.\n\nIn \\cite{newman2006modularity}, it is theoretically proven that the original modularity optimization problem can be rewritten as an eigenvalue and eigenvector calculation on a defined modularity matrix. In a two-community detection scenario, the modularity can be written as:\n\n\\begin{equation}\nQ = \\frac{1}{4|E|}s^TBs\n\\end{equation}\nwhere $s$ is a column vector in which $s_i = 1$ if node $i$ is in community 1 or $s_i = -1$ if node $i$ is in community 2, and $B$ is a symmetric matrix in which $B_{ij} = A_{ij} - \\frac{k_ik_j}{2|E|}$. In the end, after matrix transformations, the final goal is to find a community assignment $s$ that concentrates as much weight as possible on the eigenvectors of $B$ with positive eigenvalues.\n\nA further approach, \\cite{jiang2012modularity}, solves modularity maximization from the nonnegative matrix factorization (NMF) perspective. It runs on the modularity Laplacian matrix instead of the modularity matrix.  \n\n\\cite{nicosia2009extending} extends the modularity to directed graphs for overlapping community detection. The modified modularity is calculated as:\n\\begin{equation}\n\tQ= \\frac{1}{|E|}\\sum_{c \\in C}\\sum_{i,j \\in V}[r_{ijc}A_{ij}- s_{ijc}\\frac{k_{in}k_{out}}{|E|}]\n\\end{equation}\nwhere $r_{ijc}$ is the weight of the contribution of the edge between nodes $i$ and $j$ to the modularity of community $c$, and $s_{ijc}$ is the weight of the contribution in the null model where communities are randomly assigned. The whole approach is optimized through a genetic process.\n\n\\cite{cafieri2011locally} introduces a locally divisive heuristic to maximize graph modularity in undirected and unweighted graphs. It hierarchically divides the graph using the Kernighan-Lin heuristic, which proceeds by bipartition to reassign nodes from one community to the other. In each step, the bipartition reassignment which gains the largest modularity improvement is selected. \\cite{xiang2016local} also considers local graph connectivity and proposes a local modularity optimization to detect node communities.\n\n\\cite{yang2016modularity} applies a stacked auto-encoder, a type of deep learning technique, to reconstruct the modularity matrix. Besides that, \\cite{chen2014community} introduces several other variants of modularity, including modularity density, fine-tuned modularity and fine-tuned modularity density. Modularity density avoids the resolution limit problem of the original modularity, and the two fine-tuned variants improve the measurement by splitting and merging graph structures. Derived from modularity density, \\cite{sun2013maximizing} proposes a new measurement named modularity intensity to indicate the cohesiveness of a community partition, which is defined as:\n\\begin{equation}\n\tM = \\sum_{i=1}^{C}\\frac{\\alpha F(C_i,C_i) - \\beta F(C_i,\\bar{C_i})}{|C_i|}\n\\end{equation}\nwhere $|C_{i}|$ refers to the number of nodes in community $C_i$, $F(C_s,C_t) = \\sum_{\\forall i\\in C_s, \\forall j\\in C_t} A_{ij}\\cdot B_{ij}$, and $\\bar{C_i} = C - C_i$. $A$ is the adjacency matrix and $B$ is the edge weight matrix, which is calculated as:\n\n\\begin{equation}\n\tB_{ij} = \\frac{\\sum_{t=1}^{|V|}A_{it}\\cdot A_{tj} + A_{ij}}{\\sqrt{\\sum_{t=1}^{|V|}A_{it} \\cdot \\sum_{t=1}^{|V|}A_{tj}}}\n\\end{equation}\n where $i,j \\in V$ and $i \\neq j$.
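\n\nThe modularity intensity $M$ above is straightforward to evaluate; the sketch below is an assumed dense NumPy implementation (function names and the default $\\alpha = \\beta = 1$ are illustrative, and zero-degree nodes are not handled).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef modularity_intensity(A, communities, alpha=1.0, beta=1.0):\n    # communities: list of arrays of node indices; alpha, beta as above\n    k = A.sum(axis=1)\n    B = (A @ A + A) / np.sqrt(np.outer(k, k))   # edge weight matrix\n    np.fill_diagonal(B, 0.0)                    # defined for i != j only\n    W = A * B                                   # summand of F(Cs, Ct)\n    n = A.shape[0]\n    M = 0.0\n    for C in communities:\n        rest = np.setdiff1d(np.arange(n), C)    # complement of C\n        F_in = W[np.ix_(C, C)].sum()\n        F_out = W[np.ix_(C, rest)].sum()\n        M += (alpha * F_in - beta * F_out) / len(C)\n    return M\n\\end{verbatim}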
\n\n\\cite{bagrow2012communities} studies a series of different tree and tree-like graphs, such as Cayley trees, z-ary trees and other clique or tree graphs. It theoretically proves that these types of graphs always admit communities which have high modularity scores, and that adding non-tree-like components to the graph destroys this phenomenon. \n\nTo address the resolution limit of modularity, \\cite{zhang2013normalized} proposed a refined modularity metric involving a community degree factor, which compares the sum of average degree differences between the detected communities and randomly generated communities. The calculating formula is defined as:\n\n\\begin{equation}\nQ = \\sum_{k=1}^{K}\\frac{\\sum_{ij}(A_{ij} - \\frac{k_ik_j}{2|E|})S_{ik}S_{jk}}{\\sum_{i=1}^{|V|}S_{ik}}\n\\end{equation}\n\n$S_{ik} = 1$ if node $i$ is in community $k$, otherwise it equals 0. Compared with the original modularity, the only difference is that the normalized modularity divides by the community size.\n\n\\cite{newman2016equivalence} is a very classic mathematical paper which proves the equivalence between the modularity method and the degree-corrected stochastic block model under special parameter settings.\n\nOn the contrary to modularity, \\cite{chen2014anti} proposes an anti-modularity metric to detect anti-communities in graphs. An anti-community is a particular type of node partition where nodes have no or few connections within the community and are densely connected with nodes from other communities. The anti-modularity is defined as:\n\n\\begin{equation}\nQ = \\frac{1}{|V|}\\sum_{c \\in C} \\sum_{i,j \\in c}(\\sum_{k=1}^{|V|}a_{ik}a_{kj} -\\frac{ k_ik_j }{|V|})\n\\end{equation}\n\nwhere $\\sum_{k=1}^{|V|}a_{ik}a_{kj}$ is the number of two-step paths between nodes $i$ and $j$ passing through a third node $k$. The paper theoretically proves that anti-modularity maximization is fundamentally a principal component analysis of the adjacency matrix. In the end, the paper proposes a label propagation method to find a community partition with maximized anti-modularity. In detail, community labels are assigned to a set of seed nodes and propagated to other nodes through their connections; nodes with the same community label are then assigned to the same community.
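\n\nFor reference, a generic label propagation run (plain LPA as shipped with NetworkX, not the anti-modularity-guided variant of \\cite{chen2014anti}) looks like the following sketch.\n\n\\begin{verbatim}\nimport networkx as nx\nfrom networkx.algorithms.community import label_propagation_communities\n\n# two 4-cliques joined by a single edge\nG = nx.barbell_graph(4, 0)\ncommunities = label_propagation_communities(G)\nprint([sorted(c) for c in communities])\n\\end{verbatim}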
\n\n\\subsection{Spectral Clustering}\nSpectral clustering is a family of approaches which leverages graph matrices (the modularity matrix, adjacency matrix or Laplacian matrix) to find the top eigenvectors or other graph characteristics so as to learn low dimensional representations for nodes. By reducing the dimension of sparse graph matrices, it can preserve denser graph information and explore more cohesive node relationships. Spectral clustering has a close relationship with stochastic block models; \\cite{lei2015consistency} proves that the spherical k-median spectral clustering method can be extended to degree-corrected stochastic block models. \\cite{nascimento2011spectral} is a survey paper which clarifies some graph terminologies and introduces several graph cut and spectral clustering methods. However, as it was published over a decade ago, the mentioned models are a bit dated and lack detailed model explanations.\n\n\\cite{newman2013spectral} is a theory paper which shows that, with a proper selection of parameters (i.e., how the community membership matrix is chosen), the normalized cut via the spectral method is equivalent to the modularity based method and the stochastic block model. \\cite{bruna2013spectral} extends convolutional neural networks (CNN) to general graph Laplacian matrices by adding a CNN operator on the eigenvectors of the graph Laplacian. \\cite{krzakala2013spectral} shows that spectral algorithms on a defined nonbacktracking matrix perform better than on the adjacency matrix or other first-order approximate matrices. The nonbacktracking matrix $B$ is defined as:\n\\begin{equation}\nB_{(u\\rightarrow v),(w\\rightarrow x)} = \n\\begin{cases}\n1,&   v = w,u \\neq x\\\\ \n0,  & otherwise\\\\  \n\\end{cases}\n\\end{equation}\n\n \\cite{saade2014spectral} introduces a spectral clustering method on the Bethe Hessian matrix, which is also called the deformed Laplacian:\n\\begin{equation}\n\tH(r) :=(r^2-1)\\mathbb{I}- rA + D \n\\end{equation}\nwhere $|r|>1$ is a regularization term. Particularly, $|r| = \\sqrt{c}$ is used in this paper, where $c$ is the average node degree in the graph. The eigenvectors of the matrix are calculated, and those with negative eigenvalues in $H(\\sqrt{c})$ or $H(-\\sqrt{c})$ are selected. A standard k-means method is thereafter applied on these eigenvectors to generate node communities. Particularly, the negative eigenvalues of $H(\\sqrt{c})$ reveal the assortative communities, while those of $H(-\\sqrt{c})$ reveal the disassortative communities.\n\n\\cite{liu2013large} proposes a spectral method for large-scale graphs by generating supernodes connected to the regular nodes. To reduce the graph size, it generates supernodes and connects them to regular nodes; supernodes are regarded as cluster indicators to detect regular node communities. In the end, the original graph turns into a bipartite graph containing two types of nodes. To construct $k$ supernodes, the paper first selects $k$ seed nodes and calculates all the remaining nodes' shortest distances to these seed nodes in order to group them into $k$ subsets as supernodes. The original graph can then be converted to a bipartite graph recording the relationship between the two node types, where each row refers to a supernode and each column refers to a regular node. After that, an SVD approach is applied on the bipartite graph adjacency matrix to learn low dimensional representations for all nodes. In the end, a k-means clustering method finally helps to detect node communities. \n\n\\cite{nadakuditi2012graph} proposes a hierarchical and k-way spectral clustering method which decomposes the top eigenvectors of a constructed similarity matrix $W$. $W$ can be calculated as either a noisy hierarchical block matrix or a noisy k-block diagonal matrix. It proves that spectral clustering can tolerate the noise in the similarity matrix $W$ and still achieve a good community partition even when $W$ involves extra noise. \\cite{rohe2011spectral} summarizes the mathematical theory and proofs behind spectral clustering and stochastic block models. It also points out the significance of spectral clustering for graph visualization. \\cite{chaudhuri2012spectral} studies a spectral clustering method on graphs generated by the Extended Planted Partition (EPP) model. The EPP model generates graphs from a random graph model with a hidden community partition which affects the edge generation probabilities. To facilitate spectral clustering in such graphs, all graph nodes are randomly split into two sets $P$ and $Q$. Nodes in $Q$ are projected to the subspace generated by the bottom $k$ eigenvectors of the random walk based graph Laplacian of $P$, and further community detection can group the nodes in $Q$ into communities. The nodes in $P$ are partitioned in a similar way.
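\n\nThe Bethe Hessian construction of \\cite{saade2014spectral} described earlier in this subsection can be sketched as follows; this is an assumed dense implementation for the assortative case ($r = \\sqrt{c}$ only), with illustrative function and parameter names.\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef bethe_hessian_communities(A, n_communities=2):\n    # H(r) = (r^2 - 1) I - r A + D with r = sqrt(average degree);\n    # keep eigenvectors with negative eigenvalues, then run k-means.\n    n = A.shape[0]\n    D = np.diag(A.sum(axis=1))\n    r = np.sqrt(A.sum() / n)          # sqrt of the average degree\n    H = (r**2 - 1) * np.eye(n) - r * A + D\n    vals, vecs = np.linalg.eigh(H)\n    X = vecs[:, vals < 0]             # informative eigenvectors\n    if X.shape[1] == 0:               # fallback when none are negative\n        X = vecs[:, :1]\n    return KMeans(n_clusters=n_communities, n_init=10).fit_predict(X)\n\\end{verbatim}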
\n\n\\cite{mahoney2012local} is a locally-biased spectral clustering method which involves extra constraints for community detection. It is formulated as: \n\\begin{equation}\n\\begin{split}\n&min \\,\\, x^T(D-A)x\\\\\n&s.t.\\quad  \\left\\{\\begin{array}{lc}\nx^TDx = 1\\\\\n(x^TD\\mathbb{I})^2 = 0\\\\\n(x^TDs)^2 > k\\\\ \\end{array}\\right.\n\\end{split}\n\\end{equation}\nwhere $s$ is the constraint vector, $x$ is the low dimensional node representation to be optimized, and $k$ is a threshold parameter. The method offers a feasible solution which satisfies the local constraints. \n\n\\cite{zhang2015multiway} is a multiway spectral method which maps modularity maximization to a vector partitioning problem. Through a set of transformations, the paper finally aims to learn a representation $r_i$ for each node $i$ which maximizes the modularity score $Q$ of the graph, where $Q$ is defined as:\n\\begin{equation}\n\tQ = \\frac{1}{2|E|}\\sum_{s=1}^{k}\\big |\\sum_{i \\in s}r_i \\big |^2\n\\end{equation}\nwhere $s$ refers to each community and $i \\in s$ means node $i$ is in community $s$.\n\n\\cite{joseph2016impact} discusses the importance of choosing regularization factors for community detection. It theoretically proves that the cluster discovery result does not fully depend on the minimum node degree, and that regularization can better support grouping low-degree nodes into well-structured communities. In this paper, the eigengap is defined as the gap between the $k$ smallest eigenvalues and the rest of the eigenvalues; it can be controlled by the regularization factors and in turn affects spectral clustering performance.\n\nSCORE model \\cite{jin2015fast} uses the coordinate-wise ratios between the leading eigenvector and the other leading eigenvectors to construct a decomposed matrix for clustering, which significantly reduces the nuisance caused by the degree heterogeneity problem. TSC model \\cite{benson2015tensor} is a spectral method on high-order graphs where graph information beyond edges implicitly connects nodes. Instead of finding pairwise relationships between nodes, it aims to detect node triplet relationships; therefore, instead of constructing a 2-D matrix for matrix decomposition, it builds a 3-D tensor to hold node transition probabilities. \n\nSClump model \\cite{li2019spectral} proposes a spectral clustering method for heterogeneous graphs. It constructs a similarity matrix based on metapaths from \\cite{sun2013pathselclus}, which calculates pairwise node similarities for each metapath $P_i$ and sums them to construct the metapath similarity matrix $\\sum_{i} \\lambda_i P_i$. Thereafter, it aims to learn a refined similarity matrix $S$ which exhibits a clear clustering structure; the final objective function turns out to be:\n\\begin{equation}\n\\min ||S-\\sum_{i} \\lambda_i P_i||^2_F + \\alpha ||S||^2_F + \\beta ||\\lambda|| + 2 \\gamma\\sum_{i}^k\\sigma_i(L_S)\n\\end{equation}\nwhich is constrained by $\\sum_{j=1}^n S_{ij} = 1, S_{ij} \\geq 0$ and $\\sum_{i}\\lambda_i = 1, \\lambda_i \\geq 0$, where $\\sigma_i(L_S)$ is the $i$-th smallest eigenvalue of the graph Laplacian $L_S$ of $S$.
\n\n\\cite{mercado2019spectral} extends spectral clustering to signed graphs by constructing a family of signed power mean Laplacians, defined by a transformation function of the normalized Laplacians generated from both the positive edges and the negative edges. \\cite{wu2018scalable} is a scalable method which utilizes graph random binning features (RB), and the CoreCut model \\cite{zhang2018understanding} introduces the relationship between regularized spectral clustering and graph conductance minimization. \n\n\n\\subsection{Stochastic Block Model}\n\nThe stochastic block model (SBM) is a major track of community detection methods, and there are many papers on this topic. As a type of approach derived from statistical inference, most papers in this track are theory papers aiming to prove their model efficiency under certain scenarios such as lower and upper bound limitations. Hereby, I will broadly introduce the general goal of each paper instead of the detailed theoretical proofs. \\cite{karrer2011stochastic} is one of the most classic papers; it introduces the SBM concept as a generative random graph model which reconstructs the original graph from communities. It assumes there exists a community partition $C$ and an expected edge weight $w_{rs}$ between node $i$ in community $r$ and node $j$ in community $s$ following a Poisson distribution. After a set of transformations, the final generative probability to maximize turns out to be:\n\\begin{equation}\n\\log P(G|w,C) = \\sum_{rs} (m_{rs}\\log w_{rs}-n_rn_sw_{rs})\n\\end{equation}\nwhere $m_{rs} = \\sum_{ij}A_{ij}\\delta_{g_i,r}\\delta_{g_j,s}$ refers to the total number of edges between group $r$ and group $s$, and $n_{r}$ is the number of nodes in group $r$. \nThe paper also proves the equivalence between the standard SBM and modularity for undirected graphs. It further extends to a degree-corrected SBM to overcome the limitation caused by node degree heterogeneity. \n\n\\cite{abbe2017community} is the latest survey paper, published in 2017. It is a very detailed paper which introduces the general forms of the stochastic block model. The main content discusses the thresholds of SBM phase transitions for exact recovery, weak recovery and partial recovery. It also extends to other related tracks of approaches such as graph splitting, semidefinite programming and spectral clustering. In the end, it points out several open questions such as possible extensions to semi-supervised or dynamic graphs. \n\n\\cite{mossel2014belief} is a theory paper which discusses the graph reconstruction problem of a sparse SBM with only two communities. The intra- and inter-community edge connection probabilities are $a/n$ and $b/n$, where $a,b$ are parameters and $n = |V|$ is the number of nodes. The paper proposes a belief propagation model to correctly assign node community labels under the general condition $(a-b)^2> C(a+b)$, where $C$ is a constant. \\cite{mossel2016density} also discusses the same binary SBM where there are only two communities in the graph. It converts the original community recovery problem into a transformed tree reconstruction problem. Particularly, it analyzes the density evolution of belief propagation on trees with Gaussian approximations.
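\n\nThe two-community planted-partition SBM just described is easy to sample from; the following is a minimal sketch (the function name is an illustrative assumption, edges are Bernoulli with probabilities $a/n$ inside and $b/n$ across communities, and an even $n$ is assumed).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_two_community_sbm(n, a, b, seed=None):\n    # n nodes split into two equal communities; an edge appears with\n    # probability a/n inside a community and b/n across communities.\n    rng = np.random.default_rng(seed)\n    labels = np.repeat([0, 1], n // 2)          # assumes even n\n    p = np.where(labels[:, None] == labels[None, :], a / n, b / n)\n    upper = np.triu(rng.random((n, n)) < p, k=1)  # no self-loops\n    A = (upper | upper.T).astype(int)\n    return A, labels\n\nA, labels = sample_two_community_sbm(200, a=12.0, b=2.0, seed=0)\n\\end{verbatim}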
\n\n\\cite{peixoto2012entropy} is an ensemble SBM model containing several existing SBM variants. It uses an entropy based log-likelihood function to infer community structure under two different scenarios, a soft constraint and a hard constraint. For a soft constraint, each imposed node degree refers to its average value among all SBM variants, while for a hard constraint, each imposed node degree should be exactly the same among all variants.\n\n\\cite{latouche2011overlapping} is an overlapping SBM which associates with each node a latent vector following a multivariate Bernoulli distribution. The edge generative probability is calculated based on the latent factors of the edge's start node and end node. \\cite{abbe2015community} discusses the current limitations in partial or exact recovery of the SBM and generalizes the discussion to overlapping communities.\n\n\\cite{qin2013regularized} uses a degree-corrected SBM to generate the adjacency matrix which is used for spectral clustering. With proper adjustments in parameter selection, the regularized spectral clustering can achieve better performance on graphs where node degrees vary significantly. It also points out that choosing a parameter close to the average degree can balance the performance differences of several competing models. \\cite{zhang2016minimax} is a theory paper which proposes a minimax rate to discuss the lower and upper bounds for exact or partial recovery in the SBM.\n\n\\cite{zhao2012consistency} checks the performance consistency of the degree-corrected SBM. By comparing a set of different models, it proves that the degree-corrected SBM always obtains a stable model performance. Modularity based methods need special parameter constraints in order to achieve a consistent result, while likelihood-based methods do not. \\cite{celisse2012consistency} is another paper which checks the consistency of maximum-likelihood and variational estimators in the SBM. It proves that in the SBM, variational estimators can be asymptotically equivalent to maximum-likelihood estimators for estimating the edge appearance probability between nodes. \\cite{yan2014model} focuses on model selection in the SBM, as there are too many parameters to tune and general model-selection criteria cannot work properly. The paper discusses the influence of different log-likelihood ratio distributions in both the standard SBM and the degree-corrected SBM. It further extends to sparse graphs to explore more graph scenarios. Moreover, it proposes a belief propagation based linear-time approximation for log-likelihoods, which is shown to have relatively satisfactory agreement. \n\n\\cite{yun2016optimal} introduces a new labeled SBM task where labels appear with a certain probability $p(l,i,j)$ between two nodes in communities $i$ and $j$. In the end, the paper proposes a spectral partition method under the SBM to reconstruct the communities from the observation of these labels. \n\n\\cite{peixoto2014efficient} utilizes an optimized Markov chain Monte Carlo (MCMC) method to efficiently infer the SBM in large graphs. It heuristically defines the probability of moving a node from community $r$ to $s$ as:\n\\begin{equation}\np(r \\rightarrow s| t) = \\frac{e_{ts}+ \\epsilon}{e_{t} + \\epsilon B}\n\\end{equation}\n\nwhere $t$ is the community label of a randomly chosen neighbor of the node being moved, $e_{t}$ is the number of edge endpoints in community $t$, $e_{ts}$ is the number of edges between communities $t$ and $s$, $\\epsilon$ is a tuning parameter and $B$ is the number of communities. \n\nGiven a random labeled graph generated by the standard SBM, \\cite{xu2014edge} aims to infer its edge community distributions from node latent attributes. Specifically, it proves that no model works well without observations if the average node degree is below a certain threshold.
\n\n\\cite{he2015stochastic} proposes an SBM based edge community detection task and addresses the community size heterogeneity problem which is often ignored by previous works. In its definition, an edge $<i,j>$ is generated by two nodes $i,j$ selected from a community $z$. The node selection probabilities are $\\theta_{iz}$ and $\\theta_{jz}$, and the community size is $w_z$. In its generative model, a community $z$ is first chosen with $w_z$ nodes, and nodes $i$ and $j$ are selected in this community with the aforementioned probabilities to form an edge. The expected number of edges between the two nodes in community $z$ is calculated as:\n\\begin{equation}\n\\hat{A}^z_{ij} = w_z\\theta_{iz}\\theta_{jz}\n\\end{equation}\nAnd the final expected number of edges between nodes $i$ and $j$ is the sum over all possible communities, $\\hat{A}_{ij} = \\sum_{z}\\hat{A}^z_{ij}$.\n\n\n\\cite{wang2017likelihood} is a likelihood based SBM which can be extended to the degree-corrected SBM with proper parameters. It calculates the asymptotic distribution under overfitting and underfitting situations and proves that its result remains stable when the average node degree grows at a polylog rate. \\cite{lei2016goodness} is a goodness-of-fit test on the SBM which offers a baseline result for comparisons with other competing models. \\cite{gao2018community} is an in-depth exploration particularly of the degree-corrected SBM and proposes a polynomial time algorithm for asymptotic optimization in the degree-corrected SBM. \n\n\n%,\\cite{heimlicher2012community},\\cite{yun2014accurate},\n%\n%\\cite{peixoto2017nonparametric},\\cite{sarkar2015role},\\cite{lyzinski2014perfect}\n\n\n\\subsection{Deep Learning}\n\nIn recent years, deep learning techniques have become more and more involved in community detection tasks. Related approaches leverage graph neural networks (GNN), graph convolutional networks (GCN) or other standard deep frameworks (e.g., autoencoders) to learn community embeddings or direct node-community distributions. \n\nThe autoencoder is a very intuitive approach to compress a node's original one-hot encoding into a latent low-dimensional space. By nature, the node latent vector can be regarded as its community distribution or be further clustered by other community detection models. \\cite{huang2014deep} is the first work which uses the autoencoder technique to learn node embeddings. It uses four fully connected layers as the encoder to project the original input to a latent space; a symmetric decoder then processes the latent vector to reconstruct the original input. A locality-preserving constraint (from k-nearest-neighbor nodes) and a group sparsity constraint (from same-community nodes generated by group lasso) are combined with the reconstruction error as the final objective to minimize. In the end, a k-means clustering method is applied to the latent node representations for community detection. Similarly, \\cite{tian2014learning} is a very classic method which learns non-linear node embeddings via a stacked autoencoder and applies k-means to obtain node communities afterwards. \\cite{sun2017non} is another similar approach but with shared weights in both the encoder and decoder, and the final optimized weight is regarded as the node-community matrix. MGAE model \\cite{wang2017mgae} is a marginalized autoencoder which leverages both graph structure and content information in a unified GCN framework. The model considers both the graph adjacency matrix and node content to learn latent node representations, and a spectral clustering is performed on top of the representations to detect the final node communities.
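\n\nIn the spirit of the autoencoder-plus-k-means pipeline described above (but much simpler: reconstruction loss only, without the locality-preserving or group sparsity constraints of \\cite{huang2014deep}), a minimal PyTorch sketch could look as follows; the architecture sizes and function name are illustrative assumptions.\n\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nfrom sklearn.cluster import KMeans\n\ndef autoencoder_communities(A, hidden=16, k=2, epochs=200):\n    # A: dense adjacency matrix as a float tensor of shape (n, n);\n    # each row of A is treated as the input encoding of one node.\n    n = A.shape[0]\n    enc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, hidden))\n    dec = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, n))\n    opt = torch.optim.Adam(\n        list(enc.parameters()) + list(dec.parameters()), lr=1e-3)\n    for _ in range(epochs):\n        opt.zero_grad()\n        loss = nn.functional.mse_loss(dec(enc(A)), A)  # reconstruction\n        loss.backward()\n        opt.step()\n    with torch.no_grad():\n        z = enc(A).numpy()              # latent node representations\n    return KMeans(n_clusters=k, n_init=10).fit_predict(z)\n\\end{verbatim}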
\n\n\\cite{cavallari2017learning} jointly models three tasks: learning node embeddings, learning community embeddings and detecting communities. The community embeddings are generated from multivariate Gaussian distributions, and a Gaussian mixture model (GMM) helps to learn the generative probability of a node from a community. Meanwhile, the node embeddings are optimized by a skip-gram model with negative sampling, with the goal of generating nodes from their neighbor nodes. Similarly, \\cite{sun2019vgraph} is also a multi-task generative model which jointly learns node embeddings and detects communities. In its assumption, a node is represented as a mixture of communities, and a community is a multinomial distribution over all nodes. The generative probability of a neighbor node $u$ of node $v$ is calculated as:\n\\begin{equation}\np(u|v) = \\sum_{c}p(u|c)p(c|v)\n\\end{equation}\nThe probability is the sum over all conditions in which node $v$ first generates a community $c$, and community $c$ thereafter generates node $u$. The whole generative process can be optimized under a deep framework using stochastic gradient descent.\n\nDLC model \\cite{shao2015deep} proposes a single layer transformation:\n\\begin{equation}\n\\min_{W,C} ||A-WDC||^2_F + \\lambda||C||^2_F\n\\end{equation}\nwhere $W$ is the linear transformation function, $D$ is the dictionary in the linear coding, and $C$ is the code of the graph nodes. This linear transformation can be stacked into multiple layers to achieve a deep linear coding schema. The paper also theoretically proves its equivalence to marginalized denoising autoencoding with spectral clustering.  \n\nComE model \\cite{zheng2016node} proposes an interesting task: learning community embeddings instead of node embeddings. Instead of representing a community as a single vector, and inspired by the Gaussian mixture model, ComE formulates community embeddings as multivariate Gaussian distributions given by a tuple of a mean vector and a covariance matrix. Therefore, the main output of this model is a community embedding $(\\psi_k,\\Sigma_k)$ for each community $k \\in \\{1,..,K\\}$. Node embeddings $\\phi$ are learned as prior knowledge via the LINE method \\cite{tang2015line} to support community embedding. The whole process is optimized as a generative model which uses community embeddings to generate node embeddings, $p(v_i|c_i=k,\\phi_i,\\psi_k,\\Sigma_k)$.\n\nGEMSEC model \\cite{rozemberczki2019gemsec} jointly learns node embeddings and clusters nodes into communities. It uses negative sampling to maximize the generative probability of neighbor nodes given the current node. Meanwhile, it adds a community cost to the node embedding objective to learn community centers:\n\\begin{equation}\n\\argmin \\sum_{v\\in V}\\big[ ln \\Big(\\sum_{u\\in V}exp(f(v)\\cdot f(u)) \\Big) \\big] + \\gamma \\sum_{v \\in V}\\min_{c\\in C}||f(v)-u_c||_2\n\\end{equation}\nwhere $f(v)$ is the embedding of node $v$ and $N_S(v)$ is the collection of nodes within a window around node $v$ in random walks. The first term is the node embedding learning cost, and the second term is the community center distance cost.
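\n\nThe second term of the GEMSEC objective above is straightforward to evaluate on its own; a minimal NumPy sketch (the function name is an illustrative assumption) is:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef community_center_cost(F, centers, gamma=0.1):\n    # F: (n, d) node embeddings f(v); centers: (k, d) community centers.\n    # Returns gamma * sum_v min_c ||f(v) - u_c||_2, i.e. the clustering\n    # penalty term of the objective above.\n    d = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=2)\n    return gamma * d.min(axis=1).sum()\n\\end{verbatim}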
\n\nDFuzzy model \\cite{bhatia2018dfuzzy} is a three-step approach to learn fuzzy clusters. In the beginning, a pre-training step learns the initial community centers through personalized PageRank, and an autoencoder is trained with the PageRank result to learn the initial node communities around the center nodes. Second, modularity is used to refine the partition result of the initial communities. Third, the community centers are updated based on the distances of the remaining nodes in the community, which are calculated from the last layer of the model. The whole process is iteratively updated until convergence.\n\n\\cite{yang2016modularity} learns node embeddings with a deep neural network and extends its model to a semi-supervised community detection task by involving pairwise node constraints. The node embedding learning process is a standard stacked autoencoder approach which first projects each node to a latent space and reconstructs it afterwards. To involve the pairwise node constraints, the objective function adds $\\lambda Tr(H^TLH)$ to the reconstruction error of the stacked autoencoder, where $H$ is the learned low dimensional node matrix, which is also regarded as the node-community distribution, and $L$ is the Laplacian matrix of the node constraint matrix. \n\n\\cite{bruna2017community} is among the first works to really apply graph neural network (GNN) techniques to community detection tasks. It is a stacked model which contains multiple stacked components. Each layer of a component involves the adjacency matrix in a non-linear transformation, and the final output is the node community labels. The whole approach can be optimized in an end-to-end fashion. \n\nCluster-GCN model \\cite{chiang2019cluster} is an efficient model which leverages graph convolutional networks (GCN) for community detection. Its innovation is a mini-batch SGD update which samples a subgraph and optimizes the corresponding node embeddings. This strategy significantly reduces the computational cost of the original GCN framework without losing graph information. \n\nMRFasGCN model \\cite{jin2019graph} solves a rather complex task: using both a graph convolutional network (GCN) and a Markov random field (MRF) for semi-supervised community detection in attribute graphs with semantic information. The model input contains the adjacency matrix $A$, the node attribute matrix $X$ and a node similarity matrix $K$ calculated from $A$ and $X$. The first two layers are GCN layers involving only $A$ and $X$ with the ReLU activation function: in detail, the first layer is represented as $AXW^{(0)}$ and the second layer as $AX^{(0)}W^{(1)}$. The third layer takes $K$ into account as $KX^{(2)}W^{(2)}$. The last layer's result is passed to an MRF layer to learn the pairwise constraints between nodes.  \n\nFor other works, \\cite{zhang2019attributed} exploits high-order graph convolutional networks and theoretically discusses the influence of the order on model performance in attribute graph clustering. DMGC \\cite{luo2020deep} is the latest work which detects communities in multiple graphs simultaneously via an attention module and a minimum-entropy loss. CommunityGAN \\cite{jia2019communitygan} jointly learns node embeddings and detects overlapping communities through a generative adversarial net (GAN). The generator aims to generate motifs from nodes to approximate the real graph, while the discriminator aims to distinguish fake motifs produced by the generator from ground-truth motifs.
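\n\nA GCN layer of the kind used by MRFasGCN above amounts to one sparse propagation step; the sketch below is an assumed NumPy version (the symmetric normalization with self-loops is the common GCN convention, added here as an assumption rather than taken from the paper, and all names are illustrative).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef gcn_layer(A_hat, H, W):\n    # One propagation step of the form described above: multiply the\n    # (normalized) adjacency, the layer input and a weight matrix,\n    # then apply ReLU.\n    return np.maximum(A_hat @ H @ W, 0.0)\n\n# toy forward pass: X holds node attributes, two stacked layers\nrng = np.random.default_rng(0)\nA = rng.integers(0, 2, (5, 5))\nA = np.triu(A, 1); A = A + A.T                 # random undirected graph\nA_hat = A + np.eye(5)                          # add self-loops\nd = A_hat.sum(axis=1)\nA_hat = A_hat / np.sqrt(np.outer(d, d))        # symmetric normalization\nX = rng.random((5, 3))\nH1 = gcn_layer(A_hat, X, rng.random((3, 8)))   # first layer\nH2 = gcn_layer(A_hat, H1, rng.random((8, 2)))  # second layer\n\\end{verbatim}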
\n\n\\subsection{Matrix Factorization}\nMany matrices can be derived from a graph to reveal its structure; some of the most popular ones are the degree matrix $D$, the adjacency matrix $A$, the Laplacian matrix $L = D-A$, the normalized Laplacian matrix $D^{-1/2}LD^{-1/2}$, the stochastic random walk matrix $Q=AD^{-1}$ and the modularity matrix $M_{uv} = A_{uv}-d_ud_v/2|E|$. Therefore, matrix factorization can by nature be directly applied to these matrices to learn hidden node representations. Intuitively, if each dimension of the representation is regarded as a community, the node vectors can also indicate their affiliations with each community, which addresses the overlapping community problem. Inspired by this, nonnegative matrix factorization has been a popular track of methods for years.\n\n\\cite{wang2011community} is the first work that utilizes nonnegative matrix factorization (NMF) for community detection. The paper proposes three NMF based techniques: Symmetric NMF, Asymmetric NMF and Joint NMF. Symmetric NMF is the simplest form of graph NMF; it assumes the graph is undirected so that the adjacency matrix $A$ is symmetric. The objective of Symmetric NMF is to learn a low dimensional representation of the nodes, which refers to the probabilities of all nodes belonging to the communities:\n\\begin{equation}\n\t\t\\min_{X \\geq 0} ||A-XX^T||^2_F\n\\end{equation}\nIn a directed graph, Asymmetric NMF is used to learn a node community membership matrix $X$ and a diagonal matrix $S$ which shows the connectivity within each community. Therefore, the objective function changes slightly compared with the Symmetric NMF method:\n\n\\begin{equation}\n\\min_{X,S \\geq 0} ||A-XSX^T||^2_F\n\\end{equation}\n\nJoint NMF considers an even more complex scenario, a heterogeneous graph whose adjacency matrix $A$ refers to the connections between two types of nodes, while $U$ and $D$ refer to the connections within each individual node type. Therefore, to learn a latent matrix $X$ which uncovers the memberships of the two node types, it considers all three sources of information:  \n\n \\begin{equation}\n \\min_{X,\\alpha, \\beta \\geq 0} ||A-X||^2_F + \\alpha ||U-XX^T||^2_F + \\beta ||D-X^T X||^2_F\n \\end{equation}\n \nSymNMF model \\cite{kuang2012symmetric} comprehensively explains how to apply NMF to graph clustering. Derived from the standard NMF which decomposes the adjacency matrix, it introduces another similarity matrix $D^{-1/2}AD^{-1/2}$ and explains its equivalence to the normalized cut method. The paper also generalizes the similarity matrix to a kernel function $\\phi(X)\\phi(X)^T$. \\cite{kuang2015symnmf} is a follow-up work on the same SymNMF model but with more detailed supplementary material. BIGCLAM \\cite{yang2013overlapping}, a previously mentioned overlapping community detection model, also leverages nonnegative matrix factorization techniques.\n\n\\cite{yang2012clustering} uses multi-step random walks to calculate node pairwise similarities. The original similarity matrix is the normalized Laplacian matrix $Q=D^{-1/2}AD^{-1/2}$ where $D$ is the diagonal degree matrix. The $j$ step random walk similarity matrix is calculated as $(\\alpha Q)^j$ where $\\alpha$ is a decay factor. Summing over all possible numbers of steps $j$, the limit is $\\sum_j^{\\infty}(\\alpha Q)^j = (I-\\alpha Q)^{-1}$. Therefore, the all-step similarity matrix is used to replace the adjacency matrix in the objective function. The goal is to learn a low dimensional node-community distribution $W$ by minimizing the reconstruction error with respect to the similarity matrix:\n\\begin{equation}\n\\min_{W \\geq 0} ||c^{-1}(I-\\alpha Q)^{-1} - WW^T||^2_{F}\n\\end{equation}\nwhere $c=\\sum_{ij}[(I-\\alpha Q)^{-1}]_{ij}$ is a normalizing factor.
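\n\nFor the Symmetric NMF objective $\\min_{X \\geq 0} ||A-XX^T||^2_F$ introduced above, a damped multiplicative update is a common heuristic; the sketch below uses that rule as an assumption (it is not the specific optimizer of any cited paper, and the function name is illustrative).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef symmetric_nmf(A, k, iters=500, beta=0.5, eps=1e-9, seed=None):\n    # Minimize ||A - X X^T||_F^2 over X >= 0 with the damped\n    # multiplicative update X <- X * (1 - beta + beta * AX / (X X^T X)).\n    rng = np.random.default_rng(seed)\n    X = rng.random((A.shape[0], k))\n    for _ in range(iters):\n        X *= 1.0 - beta + beta * (A @ X) / (X @ (X.T @ X) + eps)\n    return X   # row i ~ soft membership of node i in the k communities\n\\end{verbatim}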
\n\n\\cite{tang2014uncovering} proposes a two-step matrix factorization approach. First, a singular value decomposition of the adjacency matrix is calculated as $A= U\\Sigma V^T$, and the top ranked columns of the three matrices are selected to get a refined adjacency matrix $N$ with more condensed information. A Bayesian nonnegative matrix factorization (BNMF) \\cite{psorakis2011overlapping} is thereafter applied on the matrix $N$ by assuming each entry of $N$ follows a Poisson distribution. In detail, it calculates the posterior distribution of two smaller matrices $W$ and $H$ by maximizing the posterior criterion:\n\\begin{equation}\n\\max_{W,H,\\beta \\geq 0} p(W,H,\\beta|N)\n\\end{equation}\nwhere $\\beta$ is a scale hyperparameter with a Gamma distribution, and $W,H$ both have half-normal probability distributions.\n\nBNMTF \\cite{zhang2012overlapping} is a bounded nonnegative matrix factorization approach which uses tri-factorization to decompose the adjacency matrix $A$ into three sub-matrices $U,B,U^T$. $U$ is the node-community matrix where each row refers to a node's distribution over all communities, bounded between 0 and 1, and $B$ encodes the connections between communities. The goal is to use the three matrices to regenerate a matrix $\\hat{A} = UBU^T$ such that $A \\approx \\hat{A}$. The reconstruction error can be measured using either the square loss $l_{sq}$ or the KL-divergence $l_{kl}$, which are calculated as:\n\\begin{equation}\n\\begin{aligned} \nl_{sq}(A,U,B) = ||A-UBU^T||^2_F \\\\\nl_{kl}(A,U,B) = \\sum_{ij}(a_{ij}ln\\frac{a_{ij}}{\\hat{a_{ij}}} - a_{ij} + \\hat{a_{ij}})\n\\end{aligned} \n\\end{equation}\n\nwhere $a_{ij}$ is the related datapoint in $A$ and $\\hat{a_{ij}}$ is the estimated value of $a_{ij}$.\n\nSBMF \\cite{zhang2013overlapping} not only detects overlapping communities via a binary matrix factorization, but also detects node outliers in unweighted graphs. Given the binary adjacency matrix $A$, $A_{ij} = 0$ if there are no edges between nodes $i$ and $j$, otherwise $A_{ij} = 1$. The paper aims to find a binary community matrix $U$ where $U_{it} = 1$ if node $i$ is in community $t$ and 0 if not. If a node $i$ belongs to multiple communities, the sum of its vector will be larger than 1 ($\\sum_{t} U_{it} > 1$), and $\\sum_{t} U_{it} = 0$ if node $i$ is an outlier. The model assumes there are a few outlier nodes which do not belong to any community, and that the outlier nodes should be as few as possible. Therefore, starting from the basic matrix factorization objective function, it adds a term for the outlier node penalty and uses the L1 norm in the matrix factorization objective:\n\\begin{equation}\n\\min_{U} ||A- UU^T||_1 + \\sum_i [1-\\Theta(\\sum_j U_{ij})]\n\\end{equation}\n where $\\Theta(X)$ equals 1 if $X>0$ and 0 if $X \\leq 0$.
\n \n\\cite{liu2017semi} is a semi-supervised NMF approach which involves prior information (must-links $l_{ml}$) in the NMF objective function. The prior information constructs a constraint matrix $M$ where \n\\begin{equation}\nM_{ij} = \n\\begin{cases}\n1,&   i = j\\\\ \n0,  & (v_i,v_j )\\in l_{ml} \\\\ \n\\epsilon,  & others\\\\ \n\\end{cases}\n\\end{equation}\nThe constraint matrix is involved in the conventional objective function to learn the node-community matrix $X$:\n\\begin{equation}\n\\min_{X} ||A-XX^T||^2_F + \\frac{\\lambda}{2}\\sum_{ij}||x_i - x_j||^2M_{ij}\n\\end{equation}\nwhere $x_i$ is the $i$-th row of the node-community matrix $X$. \\cite{shi2015community} is a similar semi-supervised approach on unweighted, undirected graphs but with both must-link and cannot-link constraints. \n\nGraph regularization is a typical strategy used on top of constructed graph matrices to enhance the performance of matrix factorization models. DNMTF \\cite{shang2012graph} is a graph dual regularization nonnegative matrix tri-factorization model which considers graph regularization terms from both structure based and feature based perspectives. A k-nearest-neighborhood method helps to construct a data graph from the graph topological structure and a feature graph from the node feature information. The objective function therefore includes three components:\n\\begin{equation}\n\t\\min_{U,S,V \\geq 0} ||A- USV^T||^2_F + \\lambda Tr(V^TL_V V) + \\mu Tr(U^T L_UU)\n\\end{equation}\nwhere $U,S,V$ are the matrices to be learned, $L_V$ and $L_U$ are the graph Laplacian matrices of the data graph and the feature graph, $Tr(\\cdot)$ is the trace of the related matrix, and $\\lambda$ and $\\mu$ are the weights of the two regularization terms.\n\nNMTF \\cite{pei2015nonnegative} utilizes three types of graph regularization to capture user similarity, message similarity and user connections seamlessly in social networks. It contains three binary graph matrices ($M_{u-u},M_{u-f}, M_{t-f}$), two similarity matrices ($S_{u-u},S_{t-t}$) and one binary interaction matrix ($R$). The goal of this paper is to learn a user binary cluster matrix $U$, a message binary cluster matrix $V$ and a word binary cluster matrix $W$. Combining the conventional NMF approach with three regularization terms, the overall objective function is defined as:\n\\begin{equation}\n\\begin{aligned}\n\\min_{U,V,W,H_1,H_2,H_3}||M_{u-u}-UH_1U^T||^2_F + ||M_{t-f}-VH_2W^T||^2_F + ||M_{u-f}-UH_3W^T||^2_F + \\\\\n\\alpha Tr(U^TL^uU)+\\beta Tr(V^TL^tV) + \\gamma  Tr(U^TL^\\tau U)\n\\end{aligned}\n\\end{equation}\n$L^u,L^t,L^\\tau$ are the Laplacian matrices of $S_{u-u},S_{t-t},R$, and $H_1,H_2,H_3$ are three matrices to be optimized. To ensure that a user/message can only belong to one cluster, the objective function is constrained by $UU^T=I$ and $VV^T=I$.\n\nMHGNMF model \\cite{wu2018nonnegative} extends graph regularization to hypergraphs by considering high-order node information to enhance model performance. RGNMF \\cite{huang2018robust} considers an extra error matrix $S$ in the conventional NMF approach:\n\\begin{equation}\n\\min_{U,V,S} ||A-UV^T -S||^2_F + \\gamma ||S||_1 + \\mu \\sum_{ii^{'}}  W^U_{ii^{'}}||U_i-U_{i^{'}}||_2 + \\lambda W_{jj^{'}}^V||V_j-V_{j^{'}}||_2\n\\end{equation} \nwhere the last two terms are graph regularization terms. \n\nDANMF model \\cite{ye2018deep} is a deep autoencoder-like method which learns node communities via an encoder-decoder component. It is a very straightforward method which uses multiple layers to transform the original adjacency matrix into a low dimensional community space (the encoder component), and then uses the node-community matrix to reconstruct the original adjacency matrix (the decoder component). The overall loss is the combination of the two component errors with a regularization term. M-NMF model \\cite{wang2017community} incorporates an NMF based node representation learning model and a modularity based community model, and optimizes them jointly.
\n\n\\subsection{Flow-Based}\nFlow-based models assume either random walks or information propagation in the graph. Through such a flow-like process, the energy or information of each node is propagated to its neighbor nodes. When the process becomes stable, nodes containing similar types or amounts of information are grouped into the same community. By nature, flow-based models are all Markov chains, as the next step of the flow depends only on the current node. This type of approach is quite scattered: random walk models, local search models, Potts models and heat kernel based models all fall into this category. \n\nThe Map Equation \\cite{rosvall2008maps} is a metric which quantifies how well a community partition can compress graph information. Derived from this metric, several variants have been proposed to serve more complex scenarios such as hierarchical community detection \\cite{rosvall2011multilevel}, dynamic community detection \\cite{rosvall2014memory} and sparse Markov chains which address the vast parameter problem of high-order Markov chains \\cite{persson2016maps}. Particularly, \\cite{rosvall2014memory} shows that second-order Markov dynamics in random walks can significantly influence community detection, node ranking and information propagation.  \n\nIn detail, \\cite{rosvall2008maps} uses the probability flow of random walks as a proxy of information flow in the graph by introducing the Map Equation metric. In order to describe a random walk path, it utilizes Huffman codes to encode nodes, assigning shorter codewords to hub nodes and longer codewords to rare nodes. A path can then be described as the concatenation of the codewords of all nodes appearing in the path. The goal of this approach is to find an optimized community partition $C$ which groups all nodes into $k$ communities and has the minimum average description length of all paths, $L(M)$. The metric is defined as:\n\\begin{equation} \n L(M) = q_{\\curvearrowright}H(\\mathcal{L})+\\sum_{i=1}^{k} \\textit{$p_{\\circlearrowright}^{i}$}H(\\textit{$\\mathcal{P}^{i}$}) \n\\end{equation}\nThis metric contains two parts: first, the entropy $H(\\mathcal{L})$ of the random walk movements between communities, and second, the entropy $H(\\mathcal{P}^{i})$ of the random walk movements within each community (where exiting the current community is also considered a step of the random walk). To minimize the Map Equation metric, a deterministic greedy search algorithm is proposed and refined via a simulated annealing process. \n\n \\cite{rosvall2011multilevel} extends the original map equation to a hierarchical version. For a hierarchical map $M$ with $k$ communities, where each community $i$ contains a submap $M^i$ and $m^i$ sub-communities, the hierarchical map equation is calculated in a nested way:\n \n \\begin{equation}\n L(M)= q_{\\curvearrowright}H(\\mathcal{L})+\\sum_{i=1}^{k} L(M^i)\n \\end{equation}
\n \nUEOC model \\cite{jin2011markov} uses a constrained Markov random walk strategy to detect overlapping communities. It contains four steps: first, select the node with the maximum degree among the nodes without a community membership; second, apply random walks from the selected node with a constrained number of steps, calculate the probability of each node being the end node, and rank the nodes; third, select nodes with a conductance score larger than a threshold to be in the same community as the target node; fourth, if there are still nodes that have not been assigned to at least one community, repeat from step 1 until no such nodes remain. \n\n\\cite{kloster2014heat} introduces a heat kernel based diffusion model to detect communities in a local and deterministic manner. The overall goal of this approach is to approximate a heat kernel $h$ as a diffusion function on the graph:\n\\begin{equation}\nh = e^{-t} \\left (\\sum_{k=0}^{\\infty} \\frac{t^k}{k!}(P)^{k}\\right)s \n\\end{equation}\nwhere $P=AD^{-1}$ is the random walk transition matrix, $D$ is the degree matrix, $t$ is a coefficient, and $s$ is an initial seed vector which sums to one. In the end, the subgraphs with low conductance scores are regarded as single communities.\n\n\\cite{zlatic2010topologically} studies a particular topologically biased random walk (TBRW) on graphs for community detection. A standard transition matrix $T$ is calculated as:\n\\begin{equation}\nT_{ij} = \\frac{W_{ij}}{\\sum_{l} W_{lj}}\n\\end{equation} \nwhere $W_{ij}$ is the edge weight between two nodes $i$ and $j$. In a biased random walk, the edge weight can be defined as $W_{ij} = A_{ij}e^{\\beta x_i}$, where $x_i$ can be any defined factor such as the related node degree. After the transition matrix is calculated, spectral clustering can be leveraged to select the top eigenvectors as node community distributions.\n\n\\cite{liu2010detecting} proposes two simulated annealing algorithms, SADI (dissimilarity-index-based) and SADD (diffusion-distance-based), to generate communities with maximized modularity under a k-means framework. A dissimilarity index measures the proximity between each pair of nodes, and the diffusion distance metric calculates the distance difference of each pair of nodes to the rest of the nodes in the graph. Each community centroid is calculated given a certain community partition and the related metrics. After that, a simulated annealing k-means function is applied to find the best partition with the largest modularity based energy score. Nodes are merged or separated in a recurrent way according to the gain or loss of graph modularity.\n\n\\cite{li2012potts} uses a Potts model via a Markov process which treats nodes as spins and simulates spin dynamics through spin-spin correlations. The detected dynamics are used to find hierarchical node communities. \\cite{lambiotte2012ranking} ranks nodes and detects their communities via the PageRank algorithm with teleportation. Instead of standard teleportation to nodes (where the next step of a random walk has a small chance of jumping to a randomly selected node instead of a directly connected one), this paper proves that teleportation to edges can achieve a more robust community partition result.
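\n\nThe heat kernel $h$ of \\cite{kloster2014heat} above can be approximated by truncating the exponential series; the sketch below is an assumed dense NumPy version (the function name and truncation length are illustrative, and zero-degree nodes are not handled).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef heat_kernel_scores(A, seeds, t=5.0, terms=30):\n    # Truncated series for h = exp(-t) * sum_k (t^k / k!) P^k s\n    # with P = A D^{-1}, following the definition above.\n    n = A.shape[0]\n    P = A / A.sum(axis=0, keepdims=True)   # column-stochastic A D^{-1}\n    s = np.zeros(n)\n    s[seeds] = 1.0 / len(seeds)            # seed vector summing to one\n    h, term = np.zeros(n), s.copy()\n    for k in range(terms):\n        h += term                          # term == (t^k / k!) P^k s\n        term = (t / (k + 1)) * (P @ term)\n    return np.exp(-t) * h\n\\end{verbatim}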
\n\n\\cite{wang2013fuzzy} is a fuzzy overlapping community detection method based on a distance matrix calculated via local random walks. The $t$ step local random walk (LRW) index between two nodes $i$ and $j$ is defined as:\n\\begin{equation}\ns_{ij}^{LRW}(t) = \\frac{k_i}{2|E|}\\cdot \\pi_{ij}(t) + \\frac{k_j}{2|E|}\\cdot \\pi_{ji}(t)\n\\end{equation}\nwhere $k_i$ is the degree of node $i$ and $\\pi_{ij}(t)$ is the probability of a $t$ step random walk with start node $i$ and end node $j$. Derived from the local random walk index, the $t$ step superposed random walk (SRW) index between $i$ and $j$ is calculated as $s_{ij}^{SRW}(t) = \\sum_{l=1}^{t}s_{ij}^{LRW}(l)$. Subsequently, each datapoint $d_{ij}$ of the distance matrix $D$ is calculated as:\n\n\\begin{equation}\nd_{ij}=\n\\begin{cases}\n1- s_{ij}^{SRW}(t),    & \\quad  i \\neq j\\\\ \n0,  & i = j\\\\ \n\\end{cases}\n\\end{equation}\n\nAfter that, a standard multidimensional scaling (MDS) method is applied on $D$ to project each node into a low dimensional space, and an existing fuzzy c-means (FCM) method helps to learn each node's fuzzy communities from the projected space.\n\n\\cite{yang2014closed} discovers that three to four step closed walks in the graph can implicitly reveal community structure. A closed walk is defined as a random walk with the same start and end node; for example, the walk `1-2-3-1' is a closed walk while `1-2-3' is not. It measures an edge importance score involving three step and four step closed walks, defined as:\n\\begin{equation}\n\ts_{ij} = \\frac{z_{ij}^{(3)}+1}{min(k_i-1,k_j-1)} + \\frac{z_{ij}^{(4)}+1}{min(k_i-1,k_j-1)}\n\\end{equation}\nwhere $z_{ij}^{(k)}$ refers to the number of $k$ step closed walks in which the edge $e_{ij}$ participates. Edges are iteratively removed from the graph according to their importance scores, and the remaining disjoint graph components are naturally regarded as communities. \n\nFor other works, \\cite{orecchia2014flow} is a flow based local partition method with theoretically proven efficiency and effectiveness. \\cite{salnikov2016using} explains how second-order Markov processes in graph random walks can better reveal community structure in dynamic graphs. NetMRF \\cite{he2018network} is a Markov random field model which infers node communities by belief propagation. \\cite{ibrahim2019nonlinear} proposes a class of non-linear diffusion functions between nodes and compares their performance with the non-linear transformation functions in neural networks.
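\n\nThe LRW/SRW distance matrix of \\cite{wang2013fuzzy} above can be sketched with dense transition matrix powers as follows (an assumed small-graph implementation with an illustrative function name; isolated nodes are not handled).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef srw_distance_matrix(A, t=3):\n    # Superposed random walk distances, following the LRW and SRW\n    # definitions above.\n    k = A.sum(axis=1)\n    two_m = A.sum()\n    P = A / k[:, None]            # row-stochastic transition matrix\n    pi = np.eye(A.shape[0])\n    S = np.zeros_like(A, dtype=float)\n    for _ in range(t):\n        pi = pi @ P               # pi[i, j] = l-step walk probability\n        S += (k[:, None] / two_m) * pi + (k[None, :] / two_m) * pi.T\n    D = 1.0 - S\n    np.fill_diagonal(D, 0.0)\n    return D\n\\end{verbatim}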
\n\n\\subsection{Summary}\nIn this section, I explored the major tracks of community detection solutions and introduced the most representative works of the recent decade. In fact, these tracks are not fully independent of each other. For example, spectral clustering and stochastic block models share similar mathematical forms, and modularity methods are becoming more involved in deep learning methods. A graph can be viewed either as a matrix recording pairwise node relationships or as a flow where information propagates through edges, and all tracks of methods are developed from these two understandings of graphs. There are some other tracks of approaches as well, which are either similar to existing tracks (graph cut methods are similar to spectral clustering) or classic models awaiting more exploration (information theory models).", "meta": {"hexsha": "a89bad37395ecb6677f5b43cb7a9f9ffbcc67997", "size": 53401, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter2/chapter2.3.tex", "max_stars_repo_name": "RoyZhengGao/thesis", "max_stars_repo_head_hexsha": "b73b473d5b8a5d948080420edeb899c60d88c9e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter2/chapter2.3.tex", "max_issues_repo_name": "RoyZhengGao/thesis", "max_issues_repo_head_hexsha": "b73b473d5b8a5d948080420edeb899c60d88c9e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter2/chapter2.3.tex", "max_forks_repo_name": "RoyZhengGao/thesis", "max_forks_repo_head_hexsha": "b73b473d5b8a5d948080420edeb899c60d88c9e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 150.0028089888, "max_line_length": 1598, "alphanum_fraction": 0.7883934758, "num_tokens": 12812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080671950640465, "lm_q2_score": 0.689305616785446, "lm_q1q2_score": 0.5570052562977078}}
{"text": "\\chapter{Sets, Etc.}\n\n\\section{Sets}\n\n\\begin{enumerate}\n\n\\item[B.1{-}1] {Draw Venn diagrams that illustrate the first of the distributive\nlaws (B.1).}\n\n\\begin{framed}\n\\begin{center}\n\\def\\firstcircle{(0,0) circle (0.5cm)}\n\\def\\secondcircle{(60:0.75cm) circle (0.5cm)}\n\\def\\thirdcircle{(0:0.75cm) circle (0.5cm)}\n\n\\begin{tikzpicture}\n  \\draw \\firstcircle node (a) {$A$};\n  \\draw \\secondcircle node (b) {$B$};\n  \\draw \\thirdcircle node (c) {$C$};\n  \\fill[gray, fill opacity=0.2] \\firstcircle;\n  \\node [align=flush center, below=1.2cm] at (b) { $A$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,0) { $\\cap$ };\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,1) { $\\cap$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\draw \\firstcircle node (a) {$A$};\n  \\draw \\secondcircle node (b) {$B$};\n  \\draw \\thirdcircle node (c) {$C$};\n  \\fill[gray, fill opacity=0.2] \\secondcircle;\n  \\fill[gray, fill opacity=0.2] \\thirdcircle;\n  \\node [align=flush center, below=1.2cm] at (b) { $(B \\cup C)$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,0) { $=$ };\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,1) { $=$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\draw \\firstcircle node (a) {$A$};\n  \\draw \\secondcircle node (b) {$B$};\n  \\draw \\thirdcircle node (c) {$C$};\n  \\begin{scope}\n    \\clip \\firstcircle;\n    \\fill[gray, fill opacity=0.2] \\secondcircle;\n  \\end{scope}\n  \\begin{scope}\n    \\clip \\firstcircle;\n    \\fill[gray, fill opacity=0.2] \\thirdcircle;\n  \\end{scope}\n  \\node [align=flush center, below=1.2cm] at (b) { $A \\cap (B \\cup C)$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,0) { $=$ };\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,1) { $=$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\draw \\firstcircle node (a) {$A$};\n  \\draw \\secondcircle node (b) {$B$};\n  \\draw \\thirdcircle node (c) {$C$};\n  \\begin{scope}\n    \\clip \\firstcircle;\n    \\fill[gray, fill opacity=0.2] \\secondcircle;\n  \\end{scope}\n  \\node [align=flush center, below=1.2cm] at (b) { $(A \\cap B)$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,0) { $\\cup$ };\n  \\node [align=flush center,text width=0.5cm, minimum height=0.5cm] at (0,1) { $\\cup$ };\n\\end{tikzpicture}\n\\begin{tikzpicture}\n  \\draw \\firstcircle node (a) {$A$};\n  \\draw \\secondcircle node (b) {$B$};\n  \\draw \\thirdcircle node (c) {$C$};\n  \\begin{scope}\n    \\clip \\firstcircle;\n    \\fill[gray, fill opacity=0.2] \\thirdcircle;\n  \\end{scope}\n  \\node [align=flush center, below=1.2cm] at (b) { $(A \\cap C)$ };\n\\end{tikzpicture}\n\\end{center}\n\\end{framed}\n\n\\item[B.1{-}2] {Prove the generalization of DeMorgan's laws to any finite\ncollection of sets:\n\\[\n  \\overline{A_1 \\cap A_2 \\cap \\cdots \\cap A_n} = \\overline{A_1} \\cup \\overline{A_2} \\cup \\cdots \\cup \\overline{A_n},\n\\]\n\\[\n  \\overline{A_1 \\cup A_2 \\cup \\cdots \\cup A_n} = \\overline{A_1} \\cap \\overline{A_2} \\cap \\cdots \\cap \\overline{A_n}.\n\\]\n}\n\n\\begin{framed}\nThe base case, which occurs when $n = 2$, is given (from the text book). 
Now,\nlet us assume it holds for $n$ and show that it also holds for $n + 1$.\n\nFor the first DeMorgan's law, we have\n\\begin{equation*}\n\\begin{aligned}\n  \\overline{A_1 \\cap A_2 \\cap \\cdots \\cap A_n \\cap A_{n + 1}}\n  &= \\overline{(A_1 \\cap A_2 \\cap \\cdots \\cap A_n) \\cap A_{n + 1}}\\\\\n  &= \\overline{(A_1 \\cap A_2 \\cap \\cdots \\cap A_n)} \\cup \\overline{A_{n + 1}}\\\\\n  &= (\\overline{A_1} \\cup \\overline{A_2} \\cup \\cdots \\cup \\overline{A_n}) \\cup \\overline{A_{n + 1}}\\\\\n  &= \\overline{A_1} \\cup \\overline{A_2} \\cup \\cdots \\cup \\overline{A_n} \\cup \\overline{A_{n + 1}}.\n\\end{aligned}\n\\end{equation*}\n\nFor the second DeMorgan's law, we have\n\\begin{equation*}\n\\begin{aligned}\n  \\overline{A_1 \\cup A_2 \\cup \\cdots \\cup A_n \\cup A_{n + 1}}\n  &= \\overline{(A_1 \\cup A_2 \\cup \\cdots \\cup A_n) \\cup A_{n + 1}}\\\\\n  &= \\overline{(A_1 \\cup A_2 \\cup \\cdots \\cup A_n)} \\cap \\overline{A_{n + 1}}\\\\\n  &= (\\overline{A_1} \\cap \\overline{A_2} \\cap \\cdots \\cap \\overline{A_n}) \\cap \\overline{A_{n + 1}}\\\\\n  &= \\overline{A_1} \\cap \\overline{A_2} \\cap \\cdots \\cap \\overline{A_n} \\cap \\overline{A_{n + 1}}.\n\\end{aligned}\n\\end{equation*}\n\\end{framed}\n\n\\item[B.1{-}3]{($\\star$) Prove the generalization of equation (B.3), which is\ncalled the \\textbf{\\emph{principle of inclusion and exclusion}}:\n\\begin{equation*}\n\\begin{aligned}\n  | A_1 &\\cup A_2 \\cup \\cdots \\cup A_n | =\\\\\n        &|A_1| + |A_2| + \\cdots + |A_n|\\\\\n        &- |A_1 \\cap A_2| - |A_1 \\cap A_3| - \\cdots && \\text{(all pairs)}\\\\\n        &+ |A_1 \\cap A_2 \\cap A_3| + \\cdots && \\text{(all triples)}\\\\\n        &\\qquad\\qquad\\qquad\\vdots\\\\\n        &+ (-1)^{n - 1} |A_1 \\cap A_2 \\cap \\cdots \\cap A_n|.\n\\end{aligned}\n\\end{equation*}\n}\n\n\\begin{framed}\nSkipped.\n\\end{framed}\n\n\\item[B.1{-}4]{Show that the set of odd natural numbers is countable.}\n\n\\begin{framed}\nLet $\\mathbb{O}$ denote the set of odd natural numbers.\n\nThe function $f(n) = 2n + 1$ is a 1-1 correspondence from $\\mathbb{N}$ to\n$\\mathbb{O}$.  Thus, $\\mathbb{O}$ is countable.\n\\end{framed}\n\n\\newpage\n\n\\item[B.1{-}5]{Show that for any finite set $S$, the power set $2^S$ has\n$2^{|S|}$ elements (that is, there are $2^{|S|}$ distinct subsets of $S$).}\n\n\\begin{framed}\nFor the base case, consider a set with a single element $x$. We have\n\\[\n  2^{\\{x\\}} = \\{\\emptyset, \\{x\\}\\},\n\\]\nwhich shows that the power set of a set with a single element has cardinality\n$2^1 = 2$.\n\nLet $C(\\cdot)$ denote the cardinality of a power set. Let $S$ be a set of size\n$n$, and let us assume that the power set of $S$ has cardinality\n$C(S) = 2^{|S|} = 2^n$. Now, let $S'$ be the set $S$ with one additional element\n$x$, such that $|S'| = n + 1$. The power set of $S'$ will consist of all sets\nin the power set of $S$ plus all those same sets again, with the element $x$\nadded. Thus, we have\n\\[\n  C(S') = 2 \\cdot C(S) = 2 \\cdot 2^n = 2^{n + 1}.\n\\]\n\\end{framed}
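\n\nAs an informal sanity check of B.1-5 (not part of the proof), the following Python sketch enumerates power sets by size and verifies the $2^{|S|}$ count:\n\n\\begin{verbatim}\nfrom itertools import chain, combinations\n\ndef power_set(s):\n    # all subsets of s: every combination of every size\n    items = list(s)\n    return list(chain.from_iterable(\n        combinations(items, r) for r in range(len(items) + 1)))\n\nfor n in range(6):\n    subsets = power_set(range(n))\n    assert len(subsets) == 2 ** n   # matches the claim of B.1-5\n\\end{verbatim}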
Thus, we have\n\\[\n  C(S') = 2 \\cdot C(S) = 2 \\cdot 2^n = 2^{n + 1}.\n\\]\n\\end{framed}\n\n\\item[B.1{-}6]{Give an inductive definition for an $n$-tuple by extending the\nset-theoretic definition for an ordered pair.}\n\n\\begin{framed}\n\\begin{equation*}\n\\begin{aligned}\n  (a) &= \\{a\\}\\\\\n  (a, b) &= \\{a, \\{a, b\\}\\}\\\\\n  (a, b, c) &= \\{a, \\{a, b\\}, \\{a, b, c\\}\\}\\\\\n  (a_1, a_2, \\dots, a_n) &= (a_1, a_2, \\dots, a_{n - 1}) \\cup \\{\\{a_1, a_2, \\dots, a_n\\}\\}\n\\end{aligned}\n\\end{equation*}\n\\end{framed}\n\n\\end{enumerate}\n\n\\newpage\n\n\\section{Relations}\n\n\\begin{enumerate}\n\n\\item[B.2{-}1]{Prove that the subset relation ``$\\subseteq$'' on all subsets of\n$\\mathbb{Z}$ is a partial order but not a total order.}\n\n\\begin{framed}\nLet $\\mathbb{S}$ denote all the subsets of $\\mathbb{Z}$. Let $A = \\{1\\}$,\n$B = \\{2\\}$ be two subsets of $\\mathbb{Z}$. We have $A \\not\\subseteq B$ and\n$B \\not\\subseteq A$. Thus, the subset relation ``$\\subseteq$'' on\n$\\mathbb{S} \\times \\mathbb{S}$ is not a total relation and therefore is not\na total order.\n\nFor the relation $\\subseteq$ on $\\mathbb{S}$ to be a partial order, the\nfollowing properties need to hold: (1) reflexivity, (2) antisymmetry,\n(3) transitivity. Since $A \\subseteq A$, for all $A \\in \\mathbb{S}$, the\nrelation ``$\\subseteq$'' on $\\mathbb{S} \\times \\mathbb{S}$ is reflexive. To be\nantisymmetric, we need to show that if $A \\subseteq B$ and $B \\subseteq A$, then\n$A = B$, for all $A, B \\in \\mathbb{S}$. Since $A \\subseteq B$, for all $a \\in A$\nwe have $a \\in B$ and since $B \\subseteq A$, for all $b \\in B$ we have\n$b \\in A$.  Thus, $A = B$ and the relation ``$\\subseteq$'' on\n$\\mathbb{S} \\times \\mathbb{S}$ is antisymmetric. To be transitive, we need to\nshow that if $A \\subseteq B$ and $B \\subseteq C$, then $A \\subseteq C$, for all\n$A, B, C \\in \\mathbb{S}$. So let $a \\in A$.  Since $A \\subseteq B$, we have\n$a \\in B$. Since $a \\in B$ and $B \\subseteq C$, we have $a \\in C$. Thus,\n$A \\subseteq C$ and the relation ``$\\subseteq$'' on\n$\\mathbb{S} \\times \\mathbb{S}$ is transitive.\n\\end{framed}\n\n\\item[B.2{-}2]{Show that for any positive integer $n$, the relation ``equivalent\nmodulo $n$'' is an equivalence relation on the integers. (We say that\n$a \\equiv b$ (mod $n$) if there exists an integer $q$ such that $a - b = qn$.)\nInto what equivalence classes does this relation partition the integers?}\n\n\\begin{framed}\nFor the relation ``equivalent modulo $n$'' to be an equivalence relation on\n$\\mathbb{Z} \\times \\mathbb{Z}$, the following needs to hold:\n\\begin{enumerate}\n  \\item $a \\equiv a$ (mod $n$), for all $a, n \\in \\mathbb{Z}$ (reflexivity)\n  \\item $a \\equiv b$ (mod $n$) implies $b \\equiv a$ (mod $n$), for all $a, b, n \\in \\mathbb{Z}$ (symmetry)\n  \\item $a \\equiv b$ (mod $n$) and $b \\equiv c$ (mod $n$) implies $a \\equiv c$ (mod $n$), for all $a, b, c, n \\in \\mathbb{Z}$ (transitivity)\n\\end{enumerate}\n\nFor the reflexivity property, we have that $a - a = qn$ holds directly for $q = 0$.\n\nFor the symmetry property, we have that $a - b = p n$ implies $b - a = q n$ holds directly for $q = -p$.\n\nFor the transitivity property, we have that $a - b = pn$ and $b - c = qn$ implies $a - c = rn$ holds for $r = p + q$, since\n\\[\n  (a - b) + (b - c) = pn + qn \\rightarrow a - c = (p + q) n.\n\\]\n\nThe relation partitions the integers into the $n$ residue classes\n$[0], [1], \\dots, [n - 1]$, where $[i] = \\{i + qn : q \\in \\mathbb{Z}\\}$.\n\n\\end{framed}\n\n\\item[B.2{-}3]{Give examples of relations that are\n\\begin{enumerate}\n\\item[a.] 
reflexive and symmetric but not transitive,\n\\item[b.] reflexive and transitive but not symmetric,\n\\item[c.] symmetric and transitive but not reflexive.\n\\end{enumerate}\n}\n\n\\begin{framed}\n\\begin{enumerate}\n\\item The relation ``is neighbor of'' is reflexive (one is a neighbor of\n  oneself) and symmetric ($a$ ``is neighbor of'' $b$ implies $b$ ``is neighbor\n  of'' $a$), but not transitive ($a$ ``is neighbor of'' $b$ and $b$ ``is\n  neighbor of'' $c$ does not imply $a$ ``is neighbor of'' $c$).\n\\item The relation ``$\\le$'' is reflexive ($a \\le a$) and transitive ($a \\le b$\n  and $b \\le c$ imply $a \\le c$), but not symmetric ($a \\le b$ does not imply\n  $b \\le a$).\n\\item Consider the relation ``$a + b > \\infty$'' on $\\mathbb{Z} \\times \\mathbb{Z}$.\n  This relation is empty. However, it is symmetric ($a\\;R\\;b$ implies $b\\;R\\;a$)\n  and transitive ($a\\;R\\;b$ and $b\\;R\\;c$ imply $a\\;R\\;c$), but not reflexive\n  since for no $a \\in \\mathbb{Z}$ is it the case that $a\\;R\\;a$.\n\\end{enumerate}\n\\end{framed}\n\n\\item[B.2{-}4]{Let $S$ be a finite set, and let $R$ be an equivalence relation\non $S \\times S$. Show that if in addition $R$ is antisymmetric, then the\nequivalence classes of $S$ with respect to $R$ are singletons.}\n\n\\begin{framed}\nFor every $a, b \\in S$ such that $a\\;R\\;b$, by symmetry $b\\;R\\;a$, and by\nantisymmetry $a = b$. Thus, $[a] = \\{b \\in S : a\\;R\\;b\\} = \\{a\\}$ for all\n$a \\in S$, which implies that the equivalence classes are singletons.\n\\end{framed}\n\n\\item[B.2{-}5]{Professor Narcissus claims that if a relation $R$ is symmetric\nand transitive, then it is also reflexive. He offers the following proof. By\nsymmetry, $a\\;R\\;b$ implies $b\\;R\\;a$. Transitivity, therefore, implies\n$a\\;R\\;a$. Is the professor correct?}\n\n\\begin{framed}\nNo. The argument only works for relations in which for every $a$ there is some\n$b$ such that $a\\;R\\;b$: then by symmetry $b\\;R\\;a$, and by transitivity\n$a\\;R\\;a$. For instance, an empty relation (like the one from Question B.2-3,\nitem (c)) is symmetric and transitive, but not reflexive.\n\\end{framed}\n\n\\end{enumerate}\n\n\\newpage\n\n\\section{Functions}\n\n\\begin{enumerate}\n\n\\item[B.3{-}1]{Let $A$ and $B$ be finite sets, and let $f : A \\rightarrow B$ be\na function. Show that\n\\begin{enumerate}\n\\item[a.] if $f$ is injective, then $|A| \\le |B|$;\n\\item[b.] if $f$ is surjective, then $|A| \\ge |B|$.\n\\end{enumerate}\n}\n\n\\begin{framed}\n\\begin{enumerate}\n  \\item Since $f$ is injective, we have that $|A| = |f(A)|$. Also, we have\n    \\begin{equation*}\n    \\begin{cases}\n      |B| = |f(A)|, & f \\text{ is surjective},\\\\\n      |B| > |f(A)|, & f \\text{ is not surjective}.\n    \\end{cases}\n    \\end{equation*}\n    Thus, $|B| \\ge |f(A)| = |A| \\rightarrow |A| \\le |B|$.\n  \\item Since $f$ is surjective, we have $|f(A)| = |B|$. Also, we have\n    \\begin{equation*}\n    \\begin{cases}\n      |A| = |f(A)|, & f \\text{ is injective},\\\\\n      |A| > |f(A)|, & f \\text{ is not injective}.\n    \\end{cases}\n    \\end{equation*}\n    Thus, $|A| \\ge |f(A)| = |B| \\rightarrow |A| \\ge |B|$.\n\\end{enumerate}\n\\end{framed}\n\n\\item[B.3{-}2]{Is the function $f(x) = x + 1$ bijective when the domain and the\ncodomain are $\\mathbb{N}$? 
Is it bijective when the domain and the codomain are\n$\\mathbb{Z}$?}\n\n\\begin{framed}\nOn the set of naturals, $f$ is injective but not surjective, since there is no\n$a \\in \\mathbb{N}$ such that $0 = f(a)$, which makes\n$f(\\mathbb{N}) \\neq \\mathbb{N}$.\n\nOn the set of integers, $f$ is both injective and surjective, and therefore\nbijective.\n\\end{framed}\n\n\\item[B.3{-}3]{Give a natural definition for the inverse of a binary relation\nsuch that if a relation is in fact a bijective function, its relational inverse\nis its functional inverse.}\n\n\\begin{framed}\nLet $R$ be a binary relation on the sets $A$ and $B$, such that\n$R \\subseteq A \\times B$. The general definition of the inverse of $R$ is given\nby\n\\[\n  R^{-1} = \\{(b, a) \\in B \\times A : (a, b) \\in R\\}.\n\\]\n\nWhen $R$ is a bijective function, we have: (1) for all $b \\in B$, there\nis at most one $a \\in A$ such that $a\\;R\\;b$ (injective) and (2) for all\n$b \\in B$ there is at least one $a \\in A$ such that $a\\;R\\;b$ (surjective).\nTherefore, when $R$ is bijective, each element of $A$ is related to exactly\none element of $B$ and vice-versa, which implies\n\\[\n  f(a) = b \\iff f^{-1}(b) = a,\n\\]\nfor all $a \\in A$ and for all $b \\in B$.\n\\end{framed}\n\n\\item[B.3{-}4]{($\\star$) Give a bijection from $\\mathbb{Z}$ to\n$\\mathbb{Z} \\times \\mathbb{Z}$.}\n\n\\begin{framed}\nSkipped.\n\\end{framed}\n\n\\end{enumerate}\n", "meta": {"hexsha": "5ac06bf9565555ab6bbd957f1723e3317b6ea434", "size": 13748, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/AB.tex", "max_stars_repo_name": "danielmoraes/clrs", "max_stars_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2016-07-08T17:39:19.000Z", "max_stars_repo_stars_event_max_datetime": "2016-07-08T17:39:19.000Z", "max_issues_repo_path": "chapters/AB.tex", "max_issues_repo_name": "danielmoraes/clrs", "max_issues_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-31T20:41:48.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-31T20:41:48.000Z", "max_forks_repo_path": "chapters/AB.tex", "max_forks_repo_name": "danielmoraes/clrs", "max_forks_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-03-12T04:51:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-12T04:51:51.000Z", "avg_line_length": 36.9569892473, "max_line_length": 140, "alphanum_fraction": 0.6351469305, "num_tokens": 5216, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8198933271118221, "lm_q1q2_score": 0.5568540833703922}}
{"text": "\\section{Case studies}\\label{s:case-studies}\nIn this section, we explore some practical cases in which the imprecisions of floating-point numbers can cause problems and give some possible solutions.\n\n\\subsection{Problem ``Keeping the Dogs Apart''}\\label{ss:dogs}\nWe will first talk about problem ``Keeping the Dogs Apart'', which we mentioned before, because it is a good example of accumulation of error and how to deal with it. It was written by Markus Fanebust Dregi for NCPC 2016. You can read the full statement and submit it at \\url{https://open.kattis.com/problems/dogs}.\n\nHere is a summarized problem statement: There are two dogs A and B, walking at the same speed along different polylines $A_0 \\ldots A_{n-1}$ and $B_0 \\ldots B_{m-1}$, made of 2D integer points with coordinates in $[0,10^4]$. They start at the same time from $A_0$ and $B_0$ respectively. What is the closest distance they will ever be from each other before one of them reaches the end of its polyline? The relative/absolute error tolerance is $10^{-4}$, and $n,m \\leq 10^5$.\n\nThe idea of the solution is to divide the time into intervals where both dogs stay on a single segment of their polyline. Then the problem reduces to the simpler task of finding the closest distance that get when one walks on $[PQ]$ and the other on $[RS]$, with $|PQ|=|RS|$. This division into time intervals can be done with the two-pointers technique: if we remember for each dog how many segments it has completely walked and their combined length, we can work out when is the next time on of the dogs will switch segments.\n\nThe main part of the code looks like this. We assume that \\lstinline|moveBy(a,b,t)| gives the point on a segment $[AB]$ at a certain distance $t$ from $A$, while \\lstinline|minDist(p,q,r,s)| gives the minimum distance described above for $P,Q,R,S$.\n\\begin{lstlisting}\nint i = 0, j = 0; // current segment of A and B\ndouble ans = abs(a[0]-b[0]), // closest distance so far\n      sumA = 0, // total length of segments fully walked by A\n      sumB = 0; // total length of segments fully walked by B\n\n// While both dogs are still walking\nwhile (i+1 < n && j+1 < m) {\n    double start = max(sumA, sumB), // start of time interval\n              dA = abs(a[i+1]-a[i]), // length of current segment of A\n              dB = abs(b[j+1]-b[j]), // length of current segment of B\n            endA = sumA + dA, // time at which A will end this segment\n            endB = sumB + dB, // time at which B will end this segment\n             end = min(endA, endB); // end of time interval\n    \n    // Compute start and end positions of both dogs\n    pt p = moveBy(a[i], a[i+1], start-sumA),\n       q = moveBy(a[i], a[i+1], end-sumA),\n       r = moveBy(b[j], b[j+1], start-sumB),\n       s = moveBy(b[j], b[j+1], end-sumB);\n    \n    // Compute closest distance for this time interval\n    ans = min(ans, minDist(p,q,r,s));\n    \n    // We get to the end of the segment for one dog or the other,\n    // so move to the next and update the sum of lengths\n    if (endA < endB) {\n        i++;\n        sumA += dA;\n    } else {\n        j++;\n        sumB += dB;\n    }\n}\n// output ans\n\\end{lstlisting}\n\nAs we said in section~\\ref{ss:accumulation}, the sums \\lstinline|sumA| and \\lstinline|sumB| accumulate very large errors. Indeed, they can both theoretically reach $M=\\sqrt{2}\\times 10^9$, and are based on up to $k=10^5$ additions. 
\nAs we said in section~\\ref{ss:accumulation}, the sums \\lstinline|sumA| and \\lstinline|sumB| accumulate very large errors. Indeed, they can both theoretically reach $M=\\sqrt{2}\\times 10^9$, and are based on up to $k=10^5$ additions. With \\lstinline|double|, $\\epsilon = 2^{-53}$, so we could reach up to $kM\\epsilon \\approx 0.016$ in absolute error in both \\lstinline|sumA| and \\lstinline|sumB|. Since this error directly translates into errors in $P,Q,R,S$ and is bigger than the tolerance of $10^{-4}$, this causes WA.\n\nIn the rest of this section, we will look at two ways we can avoid this large accumulation of error in \\lstinline|sumA| and \\lstinline|sumB|. Since this is currently much bigger than what could have been caused by the initial length computations, \\lstinline|moveBy()| and \\lstinline|minDist()|, we will consider those errors to be negligible for the rest of the discussion.\n\n\\subsubsection{Limiting the magnitude involved}\nThe first way we can limit the accumulation of error in \\lstinline|sumA| and \\lstinline|sumB| is to realize that in fact, we only care about the difference between them: if we add a certain constant to both variables, this doesn't change the value of \\lstinline|start-sumA|, \\lstinline|end-sumA|, \\lstinline|start-sumB| or \\lstinline|end-sumB|, so the value of \\lstinline|p|, \\lstinline|q|, \\lstinline|r|, \\lstinline|s| is unchanged.\n\nSo we can adapt the code by adding these lines at the end of the \\lstinline|while| loop:\n\\begin{lstlisting}\ndouble minSum = min(sumA, sumB);\nsumA -= minSum;\nsumB -= minSum;\n\\end{lstlisting}\n\nAfter this, one of \\lstinline|sumA| and \\lstinline|sumB| becomes zero, while the other carries the error on both. In total, at most $n+m$ additions and $n+m$ subtractions are performed on them, for a total of $k \\leq 4\\times 10^5$. But since the difference between \\lstinline|sumA| and \\lstinline|sumB| never exceeds the length of one segment, that is, $M=\\sqrt{2} \\times 10^4$, the error is much lower than before:\n\\[kM\\epsilon = \\left(4\\times 10^5\\right) \\times \\left(\\sqrt{2}\\times 10^4\\right) \\times 2^{-53} \\approx 6.3 \\times 10^{-7}\\]\nso it gives an AC verdict.\n\nSo here we managed to reduce the precision errors on our results by reducing the magnitude of the numbers that we manipulate. Of course, this is only possible if the problem allows it.\n\n\\subsubsection{Summing positive numbers more precisely}\nNow we present a different way to reduce the precision error, based on the fact that all the terms in the sum we're considering are positive. This is a good thing, because it avoids catastrophic cancellation (see section~\\ref{ss:abs-rel}).\n\nIn fact, addition of nonnegative numbers conserves relative precision: if you sum two nonnegative numbers $a$ and $b$ with relative errors of $k_a\\epsilon$ and $k_b\\epsilon$ respectively, the worst-case relative error on $a+b$ is about\\footnote{It could in fact go up to $(\\max(k_a,k_b)(1+\\epsilon)+1)\\epsilon$ but the difference is negligible for our purposes.} $(\\max(k_a,k_b)+1)\\epsilon$.\n\nLet's say we need to compute the sum of $n$ nonnegative numbers $a_1,\\ldots,a_n$. We suppose they are exact. 
If we perform the addition in the conventional order, like this:\n\\[\\Bigg(\\cdots\\Big(\\big((a_1+a_2)+a_3\\big)+a_4\\Big)+\\cdots\\Bigg)+a_n\\]\nthen\n\\begin{itemize}\n\\item $a_1+a_2$ will have a relative error of $(\\max(0,0)+1)\\epsilon = \\epsilon$;\n\\item $(a_1+a_2)+a_3$ will have a relative error of $(\\max(1,0)+1)\\epsilon = 2\\epsilon$;\n\\item $\\big((a_1+a_2)+a_3\\big)+a_4$ will have a relative error of $(\\max(2,0)+1)\\epsilon = 3\\epsilon$;\n\\item \\ldots and so on.\n\\end{itemize}\nSo the complete sum will have an error of $(n-1)\\epsilon$, not better than what we had before.\n\nBut what if we computed the additions in another order? For example, with $n=8$, we could do this:\n\\[\\big((a_1+a_2)+(a_3+a_4)\\big) + \\big((a_5+a_6)+(a_7+a_8)\\big)\\]\nthen all additions of two numbers have error $\\epsilon$, all additions of 4 numbers have error $(\\max(1,1)+1)\\epsilon = 2\\epsilon$, and the complete addition has error $(\\max(2,2)+1)\\epsilon = 3\\epsilon$, which is much better than $(n-1)\\epsilon = 7\\epsilon$. In general, for $n = 2^k$, we can reach a relative precision of $k\\epsilon$.\n\nWe can use this grouping technique to create an accumulator such that the relative error after adding $n$ numbers is at most $2\\log_2(n)\\epsilon$.\\footnote{We could even get $(\\log_2{n}+1)\\epsilon$ but we don't know a way to do it faster than $O(n\\log n)$.} Here is an $O(n)$ implementation:\n\\begin{lstlisting}\nstruct stableSum {\n    int cnt = 0;\n    vector<double> v, pref{0};\n    void operator+=(double a) {\n        assert(a >= 0);\n        int s = ++cnt;\n        while (s % 2 == 0) {\n            a += v.back();\n            v.pop_back(), pref.pop_back();\n            s /= 2;\n        }\n        v.push_back(a);\n        pref.push_back(pref.back() + a);\n    }\n    double val() {return pref.back();}\n};\n\\end{lstlisting}\n\nLet's break this code down. This structure provides two operations: \\lstinline|operator+=| to add a number to the sum, and \\lstinline|val()| to get the current value of the sum. Array \\lstinline|v| contains the segment sums that currently form the complete sum, similar to Fenwick trees: for example, if we have added 11 elements, \\lstinline|v| would contain three elements:\n\\[v = \\{a_1+\\cdots+a_8,\\ a_9+a_{10},\\ a_{11}\\}\\]\nwhile \\lstinline|pref| contains the prefix sums of \\lstinline|v|: \\lstinline|pref[i]| contains the sum of the $i$ first elements of \\lstinline|v|.\n\nThe \\lstinline|operator+=| function performs the grouping: when adding a new element \\lstinline|a|, it will merge it with the last element of \\lstinline|v| while they contain the same number of terms, then \\lstinline|a| is added to the end of \\lstinline|v|. For example, if we add the $12\nth$ element $a_{12}$, the following steps will happen:\n\\begin{align*}\nv &= \\{a_1+\\cdots+a_8,\\ a_9+a_{10},\\ a_{11}\\} \\quad &a = a_{12}\\\\\nv &= \\{a_1+\\cdots+a_8,\\ a_9+a_{10}\\} \\quad &a = (a_{11})+a_{12}\\\\\nv &= \\{a_1+\\cdots+a_8\\} \\quad &a = (a_9+a_{10})+a_{11}+a_{12}\\\\\nv &= \\{a_1+\\cdots+a_8,\\ a_9+a_{10}+a_{11}+a_{12}\\}\n\\end{align*}\n\nThe number of additions we have to make for the $i\nth$ number is the number of times $i$ is divisible by 2. Since we only add one element to \\lstinline|v| when adding an element to the sum, this is amortized constant time.\n
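\nAs a quick illustration (the values here are hypothetical), the accumulator is used like a plain sum:\n\\begin{lstlisting}\nstableSum acc;\nfor (int i = 0; i < 100000; i++)\n    acc += 0.1; // terms must be nonnegative, as enforced by the assert\ndouble total = acc.val(); // ~10000, with relative error O(log(n) * epsilon)\n\\end{lstlisting}\n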
\nBy simply changing the types of \\lstinline|sumA| and \\lstinline|sumB| to \\lstinline|stableSum| and adding \\lstinline|.val()| whenever the value is read in the initial code, we can get down to an error of about\n\\[2\\log_2\\big(10^5\\big) M \\epsilon = \\Big(2\\log_2\\big(10^5\\big)\\Big) \\times \\left(\\sqrt{2} \\times 10^9\\right) \\times 2^{-53} \\approx 5.2 \\times 10^{-6}\\]\nwhich also gives an AC verdict.\\footnote{Theoretically we can't really be sure though, since both \\lstinline|sumA| and \\lstinline|sumB| could have that error, and we still have to take into account the other operations performed.}\n\nThis is not as good as the precision obtained with the previous method, but that method was specific to the problem, while this one can be applied whenever we need to compute sums of nonnegative numbers.\n\n\\subsection{Quadratic equation}\nAs another example, we will study the precision problems that can occur when computing the roots of an equation of the type $ax^2+bx+c=0$ with $a\\neq 0$. We will see how some precision problems are unavoidable, while others can be circumvented by rewriting the expression we evaluate.\n\nWhen working with full-precision reals, we can solve quadratic equations in the following way. First we compute the discriminant $\\Delta = b^2-4ac$. If $\\Delta < 0$, there is no solution, while if $\\Delta \\geq 0$ there are 1 or 2 solutions, given by \\[x = \\frac{-b \\pm \\sqrt{\\Delta}}{2a}\\]\n\nThe first difficulty when working with floating-point numbers is the computation of $\\Delta$: if $\\Delta \\approx 0$, that is $b^2 \\approx 4ac$, then the imprecisions can change the sign of $\\Delta$, therefore changing the number of solutions.\n\nEven if that does not happen, since we have to perform a square root, the problems that we illustrated with line-circle intersection in section~\\ref{ss:other-operations} can also happen here.\\footnote{Which is not surprising, since the bottom of a parabola looks a lot like a circle.} Take the example of equation $x^2-2x+1=0$, which has a single root $x=1$. If there is a small error on $c$, it can translate into a large error on the roots. For example, if $c=1-10^{-6}$, then the roots become\n\\[x = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a} = \\frac{2 \\pm \\sqrt{4 - 4(1-10^{-6})}}{2} = \\frac{2 \\pm 2\\sqrt{10^{-6}}}{2} = 1 \\pm 10^{-3},\\]\nwhere the error $10^{-3}$ is much bigger than the initial error on $c$.\n\n\\centerFig{cases0}\n\nEven if the computation of $\\sqrt{\\Delta}$ is very precise, a second problem can occur. If $b$ and $\\sqrt{\\Delta}$ have a similar magnitude, in other words when $b^2 \\gg ac$, then catastrophic cancellation will occur for one of the roots. For example if $a=1,b=10^4,c=1$, then the roots will be:\n\\[x_1 = \\frac{-10^4 - \\sqrt{10^8-4}}{2} \\approx -10^4 \\qquad x_2 = \\frac{-10^4 + \\sqrt{10^8-4}}{2} \\approx -10^{-4}\\]\n\nThe computation of $x_1$ goes fine because $-b$ and $-\\sqrt{\\Delta}$ have the same sign. 
But because the magnitude of $-b+\\sqrt{\\Delta}$ is $10^8$ times smaller than the magnitude of $b$ and $\\sqrt{\\Delta}$, the relative error on $x_2$ will be $10^8$ times bigger than the relative error on $b$ and $\\sqrt{\\Delta}$.\n\nFortunately, in this case we can avoid catastrophic cancellation entirely by rearranging the expression:\n\\begin{align*}\n\\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}\n&= \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a} \\times \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}\\\\\n&= \\frac{(-b^2) - \\left(\\sqrt{b^2-4ac}\\right)^2}{2a \\left(-b \\mp \\sqrt{b^2-4ac}\\right)}\\\\\n&= \\frac{4ac}{2a \\left(-b \\mp \\sqrt{b^2-4ac}\\right)}\\\\\n&= \\frac{2c}{-b \\mp \\sqrt{b^2-4ac}}\n\\end{align*}\nIn this new expression, since the sign of the operation is opposite from the sign in the original expression, catastrophic cancellation happens in only one of the two.\n\nSo if $b \\geq 0$, we can use $\\frac{-b-\\sqrt{\\Delta}}{2a}$ for the first solution and $\\frac{2c}{-b-\\sqrt{\\Delta}}$ for the second solution, while if $b \\leq 0$, we can use $\\frac{2c}{-b+\\sqrt{\\Delta}}$ for the first solution and $\\frac{-b+\\sqrt{\\Delta}}{2a}$ for the second solution. We only need to be careful that the denominator is never zero.\n\nThis gives a safer way to find the roots of a quadratic equation. This function returns the number of solutions, and places them in \\lstinline|out| in no particular order.\n\\begin{lstlisting}\nint quadRoots(double a, double b, double c, pair<double,double> &out) {\n    assert(a != 0);\n    double disc = b*b - 4*a*c;\n    if (disc < 0) return 0;\n    double sum = (b >= 0) ? -b-sqrt(disc) : -b+sqrt(disc);\n    out = {sum/(2*a), sum == 0 ? 0 : (2*c)/sum};\n    return 1 + (disc > 0);\n}\n\\end{lstlisting}\n\nIn many cases, there are several ways to write an expression, and they can have very different behaviors when used with floating-point numbers. So if you realize that the expression you are using can cause precision problems in some cases, it can be a good idea to rearrange the expression to handle them, as we did here.\n\n\\subsection{Circle-circle intersection}\nThis last case study examines one possible implementation for the intersection of two circles. It will show us why we shouldn't rely too much on mathematical truths when building our programs.\n\nWe want to know whether two circles of centers $C_1,C_2$ and radii $r_1,r_2$ touch, and if they do what are the intersection points. Here, we will solve this problem with triangle inequalities and the cosine rule.\\footnote{The way we actually implement it in this book is completely different.} Let $d = |C_1C_2|$. The question of whether the circles touch is equivalent to the question of whether there exists a (possibly degenerate) triangle with edge lengths $d,r_1,r_2$.\n\n\\centerFig{cases1}\n\nWe know that such a triangle exists iff the triangle inequalities are respected, that is:\n\\[|r_2-r_1| \\leq d \\leq r_1+r_2\\]\nIf this is true, then we can find the angle at $C_1$, which we'll call $\\alpha$, thanks to the cosine rule:\n\\[\\cos\\alpha = \\frac{d^2+r_1^2-r_2^2}{2dr_1}\\]\n\nOnce we have $\\alpha$, we can find the intersection points in the following way: if we take vector $\\vv{C_1C_2}$, resize it to have length $r_1$, then rotate by $\\alpha$ in either direction, this gives the vectors from $C_1$ to the two intersection points.\n\n\\centerFig{cases2}\n\nThis gives the following code. 
It uses a function \\lstinline|abs()| to compute the length of a vector (see section~\\ref{ss:point-representation}) and a function \\lstinline|rot()| to rotate a vector by a given angle (see section~\\ref{ss:rotation}).\n\\begin{lstlisting}\nbool circleCircle(pt c1, double r1, pt c2, double r2, pair<pt,pt> &out) {\n    double d = abs(c2-c1);\n    if (d < abs(r2-r1) || d > r1+r2) // triangle inequalities\n        return false;\n    double alpha = acos((d*d + r1*r1 - r2*r2)/(2*d*r1));\n    pt rad = (c2-c1)/d*r1; // vector C1C2 resized to have length r1\n    out = {c1 + rot(rad, -alpha), c1 + rot(rad, alpha)};\n    return true;\n}\n\\end{lstlisting}\n\nThis implementation is quite nice, but unfortunately it will sometimes output \\lstinline|nan| values. In particular, if\n\\[r_1 = 0.625 \\qquad r_2 = 0.3750000000000004 \\qquad d = 1.0000000000000004\\]\nthen the triangle inequalities are respected, so the function returns \\lstinline|true|, but the program computes\n\\[\\frac{d^2 + r_1^2 - r_2^2}{2dr_1} > 1\\]\n\nIn fact, this is mathematically impossible! The cosine rule should give values in $[-1,1]$ as long as the edge lengths respect the triangle inequality. To make sure, we can compute:\n\\begin{align*}\n\\frac{d^2 + r_1^2 - r_2^2}{2dr_1} > 1\\ \n&\\Rightarrow\\ d^2 + r_1^2 - r_2^2 > 2dr_1\\\\\n&\\Leftrightarrow\\ (d-r_1)^2 > r_2^2\\\\\n&\\Leftrightarrow\\ |d-r_1| > r_2\\\\\n&\\Leftrightarrow\\ d > r_2+r_1 \\quad\\mbox{or}\\quad r_1 > d+r_2\n\\end{align*}\nIndeed, both are impossible because of the triangle inequalities. So this must be the result of a few unfortunate roundings made while computing the expression.\n\nThere are two possible solutions to this. The first solution would be to just treat the symptoms: make sure the cosine is never outside $[-1,1]$ by either returning \\lstinline|false| or by moving it inside:\n\\begin{lstlisting}\ndouble co = (d*d + r1*r1 - r2*r2)/(2*d*r1);\nif (abs(co) > 1) {\n    return false; // option 1\n    co /= abs(co); // option 2 (keep only one of the two)\n}\ndouble alpha = acos(co);\n\\end{lstlisting}\n\nThe second solution, which we recommend, is based on the principles that we should always try to minimize the number of comparisons we make, and that if we have to do some computation that might fail (giving a result of \\lstinline|nan| or infinity), then we should test the input of that computation \\emph{directly}.\n\nSo instead of testing the triangle inequalities, we test the value of $\\cos\\alpha$ directly, because it turns out that it will be in $[-1,1]$ iff the triangle inequalities are verified. 
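(Symmetrically, $\\frac{d^2 + r_1^2 - r_2^2}{2dr_1} < -1 \\Leftrightarrow (d+r_1)^2 < r_2^2 \\Leftrightarrow r_2 > d + r_1$, which is the remaining triangle inequality.) 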
This gives the following code, which is a bit simpler and safer.\n\\begin{lstlisting}\nbool circleCircle(pt c1, double r1, pt c2, double r2, pair<pt,pt> &out) {\n    double d = abs(c2-c1), co = (d*d + r1*r1 - r2*r2)/(2*d*r1);\n    if (abs(co) > 1) return false;\n    double alpha = acos(co);\n    pt rad = (c2-c1)/d*r1; // vector C1C2 resized to have length r1\n    out = {c1 + rot(rad, -alpha), c1 + rot(rad, alpha)};\n    return true;\n}\n\\end{lstlisting}\n", "meta": {"hexsha": "ccad2735a8759dd8c98bf24cd22ec1a2b3b7f004", "size": 18575, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "archives/codelibraries/cp-geo-master/precision/cases.tex", "max_stars_repo_name": "cbarnson/UVa", "max_stars_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_stars_repo_licenses": ["Unlicense", "MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-09-07T17:00:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-05T02:08:35.000Z", "max_issues_repo_path": "archives/codelibraries/cp-geo-master/precision/cases.tex", "max_issues_repo_name": "cbarnson/UVa", "max_issues_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_issues_repo_licenses": ["Unlicense", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "archives/codelibraries/cp-geo-master/precision/cases.tex", "max_forks_repo_name": "cbarnson/UVa", "max_forks_repo_head_hexsha": "0dd73fae656613e28b5aaf5880c5dad529316270", "max_forks_repo_licenses": ["Unlicense", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.3958333333, "max_line_length": 527, "alphanum_fraction": 0.7055720054, "num_tokens": 5573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.5567988229946198}}
{"text": "\\chapter{A Compartmental Model of Political Revolution}\nIn many ways a revolution is like an infectious disease. Though the mechanics of transmission are inherently different, there are notable parallels. The idea of it spreads through the population with an origin, hotspots and occasional eruptive dynamics.\\\\\n\nThere is an established link between meme propogation and infectious disease\\cite{meme-epidemic}. Here we use `meme' by Richard Dawkins' definition of a cultural entity that appears to exhibit self-replication\\cite{selfish-gene}. At the centre of a revolution is a `meme', a self-replicating thought that inspires change against the regime. However, a revolutionary meme is a meme of a special kind in that it carries a risk to those who actively disseminate the meme. The reason to not share a conventional meme is usually simply apathy. However, a reason not to share a political meme is fear of imprisonment, social isolation and worse. There is thus an extra disencentive not to share the meme as doing so risks negative personal repurcusions from the authorities.\\\\\n\nResearch has shown that conventional memes replicate very similarly to infectious disease and can borrow the same models and fit the parameters to datasets of memes\\cite{meme-epidemic}. In this model, the transmission parameter is given as a constant. The susceptible agent consider whether they want to share the meme and then either does with probability $\\alpha$ or doesn't with probability $1-\\alpha$. Note that there they do not need to look outside of themselves; it does not depend on the makeup of the wider population of any exterior consideration. This decision can easily be incorporated into the transmitability constant of standard epdemiological models.\\\\\n\nRevolutionary memes however involve more considerations. A susceptible agent sees the meme and is either inclined to spread it or not. This is identical to the general meme case. However, suppose they are inclined to spread it. At this point it differs. Having done the same personal considerations, they now have the external consideratinos: what are teh chances that sharing this will increase the probability of significant regime change? How many other people are actively sharing this? What are the chances I'll be caught for sharing this? If I am caught, will I be put in jail? If so, for how long? To include these considerations, we need a model that goes beyond the standard epidemiological models to incorporate a non-constant transmitability parameter that depends on the population demographic\\footnote{Some epidemiological models do have non-constant transmitability constants. The general Kermack-McKendrick epidemic model includes factors that depend on how long the agent has the infection\\cite{kermack-mckendrick}. However, these are factors belonging \\textit{to} the agent and are not exterior to it.}.\\\\\n\nWe will start by analysing a special case of the Kermack\u2013McKendrick epidemic model known as the SIR model and apply it to general memes. We will then extend the model by splitting one class, infected, into two: lapsed infected (carriers) and active infected. We will then introduce a non-constant transmittability constant and allow movement between the two infected classes.\\\\\n\nThese will all be continuous, deterministic models. This assumption is acceptable when the populations and compartments are all relatively large. 
\nHow it works, graphic of the compartments, equations\\\\\nSolve model for when revolution happens\n\\section{My SIIR model}\nSplit $I$ into two parts: $I_1$, infected who do not spread, and $I_2$, infected who do spread.\\\\\nShow this can be rewritten by absorbing $I_1$ into $R$, so anyone infected but not acting is effectively removed.\n\\section{SIIR with sleeping revolutionaries}\nAllow movement between $I_1,I_2$\n\\subsection{With linear movement between revolutionaries}\n\n\\subsection{With non-linear movement between revolutionaries}\n\n\\section{SIIR with stochasticity}\nThese models have been useful and relatively easy to analyse. However revolutions are, like much of human life and probably more so, sensitive to random occurrences. An external war is declared, a politician falls ill or the next would-be revolutionary forgets their newly formed manifesto on the way back from the pub. To account for this we need either branching processes or stochastic differential equations.", "meta": {"hexsha": "51287ca2a79497f69c768c6d31a1c8641df0c572", "size": 4769, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Writing/TeX_files/Old/chapter03.tex", "max_stars_repo_name": "joekroese/math-of-revolution", "max_stars_repo_head_hexsha": "c831ea3d5f6c56c3861522f71ec47e1a22f9ff2c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-12-07T18:16:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-16T10:54:20.000Z", "max_issues_repo_path": "Writing/TeX_files/Old/chapter03.tex", "max_issues_repo_name": "joekroese/math-of-revolution", "max_issues_repo_head_hexsha": "c831ea3d5f6c56c3861522f71ec47e1a22f9ff2c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Writing/TeX_files/Old/chapter03.tex", "max_forks_repo_name": "joekroese/math-of-revolution", "max_forks_repo_head_hexsha": "c831ea3d5f6c56c3861522f71ec47e1a22f9ff2c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 144.5151515152, "max_line_length": 1122, "alphanum_fraction": 0.812749004, "num_tokens": 995, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7853085859124002, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5567988181661131}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\\begin{document}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Examples\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nIn this chapter, we mainly discuss about the array based questions. We first categorize these problems into different type, and then each type can usually be solved and optimized with nearly the best efficiency. \n\nGiven an array, a subsequence is composed of elements whose subscripts are increasing in the original array. A subarray is a subset of subsequence, which is contiguous subsequence. Subset contain any possible combinations of the original array. For example, for array [1, 2, 3, 4]:\n\\begin{lstlisting}[numbers=none]\nSubsequence\n[1, 3]\n[1, 4]\n[1, 2, 4]\nSubarray\n[1, 2]\n[2, 3]\n[2, 3, 4]\nSubset includes different length of subset, either\nlength 0: []\nlength 1: [1], [2], [3], [4]\nlength 2: [1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]\n\\end{lstlisting}\n\nHere array means one dimension list. For array problems, math will play an important role here. The rules are as follows:\n\\begin{itemize}\n    \\item Subarray: using dynamic programming based algorithm to make brute force $O(n^3)$ to $O(n)$. Two pointers for the increasing subarray. Prefix sum, or kadane's algorithm plus sometimes with the hashmap, or two pointers (three pointers) for the maximum subarray. \n    \\item Subsequence: using dynamic programming based algorithm to make brute force $O(2^n)$ to $O(n^2)$, which corresponds to the seqence type of dynamic programming. \n    \\item Duplicates: 217, 26, 27, 219, 287, 442;\n    \\item Intersections of Two Arrays: \n\\end{itemize}\n\nBefore we get into solving each type of problems, we first introduce the algorithms we will needed in this Chapter, including two pointers (three pointers or sliding window), prefix sum, kadane's algorithm. Kadane's algorithm can be explained with sequence type of dynamic programming. \n\n    % Easy problems: Duplicates:  Intersection: 349. Intersection of Two Arrays; Consecutive: 485. Max Consecutive Ones\n    % Maximum/Minimum subarray: 718, 53. Maximum Subarray, 325. Maximum Size Subarray Sum Equals k. 209. Minimum Size Subarray Sum Solutions: divide and conquer, special sum and hashtable, two pointers (sliding window) for minimum\n    % Sum of K numbers of elements: Target, return either the index or the elements(might need to avoid repetition). (2/3/4 sums)\n    % Partition a list into K equal part: DP\n    \nAfter this chapter, we need to learn the step to solve these problems:\n\\begin{enumerate}\n    \\item Analyze the problem and categorize it.  To know the naive solution's time complexity can help us identify it.\n    \\item If we can not find what type it is, let us see if we can \\textit{convert}. If not, we can try to identify a simple version of this problem, and then upgrade the simple solution to the more complex one. \n    \\item Solve the problem with the algorithms we taught in this chapter.\n    \\item Try to see if there is any more solutions. \n    \n\n    % \\textit{Note: If the problem is complex, trying to see the simple version, and then upgrade the simple version to a complex one. e.g. (487. Max Consecutive Ones II, 485. Max Consecutive Ones)}\n    \\item Check the special case. (Usually very important for this type of problems)\n\\end{enumerate}\n% Including two pointers both from the start, or two pointers one is from the beginning and the other is from the end. 
Also, the sliding window, and the flexible sliding windows, also find the cycle algorithm. \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Subarray }\n\\textit{Note: for subarrays the most important feature is contiguity. Here, we definitely will not use sorting.} Given an array of size $n$, the total number of subarrays is $\\sum_{i=1}^{n} i = n(n+1)/2$, which makes the time complexity of naive solutions that use two nested for/while loops $O(n^2)$ or $O(n^3)$. \n\nThere are two types of problems related to subarrays: \\textbf{Range Query} and \\textbf{optimization-based subarray}.  Range query problems include querying the minimum/maximum or the sum of all elements in a given range [i,j] of an array. Range queries can be solved in a more standard way, either by searching or with a segment tree:\n\\paragraph{Range Query}\n\\begin{enumerate}\n    \\item 303. Range Sum Query - Immutable\n    \\item 307. Range Sum Query - Mutable\n    \\item 304. Range Sum Query 2D - Immutable\n\\end{enumerate}\n\n\\paragraph{Optimization-based subarray} \nGiven a single array, we would normally be asked to return either the maximum/minimum value, the maximum/minimum length, or the number of subarrays whose sum/product \\textit{satisfies a certain condition}. The condition decides the difficulty of these problems.\n\nThe questions can be classified into two categories:\n\\begin{enumerate}\n    \\item \\textit{Absolute-conditioned subarray}, where $sum/product=K$, or \n    \\item \\textit{Vague-conditioned subarray}, where the condition is an inequality such as $\\geq$ or $\\leq$.\n\\end{enumerate}\n\n% \\begin{enumerate}\n%     \\item Maximum/minimum subarray with Sum or Product or a pattern; we use \\textbf{math and prefix\\_sum} or sometimes together with hashmap method to tackle. Also, sliding window can be used.\n%     \\item Minimum Subarray with Sum or Product or a pattern; \\textbf{sliding window} can be used to get the minimum length of subarray. \n%     \\item Find subarray that is increasing or decreasing ; \\textbf{Two pointers or sliding window} can be used. \n% \\end{enumerate}\nWith the proposed algorithms, the time complexity of subarray problems can be decreased from the brute force $O(n^3)$ to $O(n)$. The brute force is universal: two nested for loops mark the start and end of the subarray to enumerate all possible subarrays, and another $O(n)$ is spent computing the result needed (the sum or product, or a pattern check such as increasing or decreasing). \n\n% Using prefix sum or kadane's algorithm or hashmap sometimes if we have problems to solve it, a panacea is the Sliding Window Algorithm either with two or three pointers. \n\nAs we have discussed in the algorithm section, \n\\begin{enumerate}\n    \\item \\textbf{stack/queue/monotone stack} can be used to solve subarray problems related to the smaller/larger items on an item's left/right side\n    \\item \\textbf{sliding window} can be used to find subarrays where the sum or product inside the sliding window is ordered (monotonically increasing/decreasing). This normally requires that the array elements are all positive or all negative, so that the sliding window covers the whole search space; otherwise we can't use the sliding window. 
\n    \\item For all problems related to subarray sum/product, whether vague- or absolute-conditioned, we have a universal algorithm: save the prefix sums (sometimes together with indices) in a sorted array, and use binary search to find all possible starting points of the window. \n    \\item Prefix Sum or Kadane's algorithm can be used when we need to get the sum of the subarray. \n\\end{enumerate}\n\n\\begin{enumerate}\n    \\item 53. Maximum Subarray (medium)\n    \\item 325. Maximum Size Subarray Sum Equals k\n    \\item 525. Contiguous Array\n    \\item 560. Subarray Sum Equals K\n    \\item 209. Minimum Size Subarray Sum (medium)\n\\end{enumerate}\nMonotone stack and vague-conditioned subarray problems:\n\\begin{enumerate}\n    \\item 713. Subarray Product Less Than K (all positive)\n    \\item 862. Shortest Subarray with Sum at Least K (with negative)\n    \\item 907. Sum of Subarray Minimums (all positive, but minimum in all subarray and sum) \n\\end{enumerate}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Maximum Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Absolute-conditioned Subarray}\nFor the maximum subarray, we are asked to return one of: \n\\begin{enumerate}\n\\item the maximum sum or product; \\textit{solved using prefix sum or Kadane's algorithm}\n\\item the maximum length of a subarray with sum or product equal to K; \\textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its index}\n\\item the total number of subarrays with sum or product equal to K; \\textit{solved using prefix sum together with a hashmap that saves each previous prefix sum and its count}\n\\end{enumerate}\n\n\\paragraph{Maximum/Minimum sum or product}\n\\begin{examples}[resume]\n\\item \\textbf{53. Maximum Subarray (medium).}\nFind the contiguous subarray within an array (containing at least one number) which has the largest sum.\n\\begin{lstlisting}[numbers=none]\nFor example, given the array [-2,1,-3,4,-1,2,1,-5,4],\n the contiguous subarray [4,-1,2,1] has the largest sum = 6.\n\\end{lstlisting}\nSolution: Brute force is to use two for loops, the first for the start and the second for the end of the subarray, and then take the maximum value. To optimize, we can use divide and conquer, $O(n\\log n)$ versus the brute force $O(n^3)$ (two nested for loops and $n$ for computing the sum). The divide and conquer method was shown in that chapter. A more efficient algorithm uses the prefix sum. Please check Section~\\ref{part4_prefix_sum} for the answer.\n\n
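Since Kadane's algorithm is referred to repeatedly in this chapter, here is a minimal sketch of it (our own illustrative code, independent of any particular problem's input format):\n\\begin{lstlisting}[language = Python]\ndef kadane(nums):\n    # cur: best sum of a subarray ending at the current element\n    cur = best = nums[0]\n    for x in nums[1:]:\n        cur = max(x, cur + x) # either extend the running subarray or restart at x\n        best = max(best, cur) # best: maximum over all positions so far\n    return best\n\\end{lstlisting}\n\n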
Now what is the sliding window solution? The key step in the sliding window is deciding when to move the first pointer of the window (shrinking the window); the window must include the current element $j$. For the maximum subarray, to increase the sum of the window, we need to abandon any previous elements if they have a negative sum.\n\\begin{lstlisting}[language = Python]\nfrom sys import maxsize\nclass Solution:\n    def maxSubArray(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        i, j = 0, 0 #i<=j\n        maxValue = -maxsize\n        window_sum = 0\n        while j < len(nums):\n            window_sum += nums[j]\n            j += 1\n            maxValue = max(maxValue, window_sum)\n            while i<j and window_sum < 0:\n                window_sum -= nums[i]\n                i += 1                           \n        return maxValue\n\\end{lstlisting}\n\\end{examples}\n\\paragraph{Maximum/Minimum length of subarray with sum or product S}\nFor this type of problem, we also need to track the length. \n\n\\begin{examples}[resume]\n\\item \\textbf{325. Maximum Size Subarray Sum Equals k.}\n\nGiven an array nums and a target value k, find the maximum length of a subarray that sums to k. If there isn\u2019t one, return 0 instead. \\textit{Note:\n The sum of the entire nums array is guaranteed to fit within the 32-bit signed integer range.}\n\n\\begin{lstlisting}[numbers=none]\nExample 1:\nGiven nums = [1, -1, 5, -2, 3], k = 3,\n return 4. (because the subarray [1, -1, 5, -2] sums to 3 and is the longest)\n\nExample 2:\nGiven nums = [-2, -1, 2, 1], k = 1,\n return 2. (because the subarray [-1, 2] sums to 1 and is the longest)\n\nFollow Up:\n Can you do it in O(n) time?\n \\end{lstlisting}\n \n\\textbf{Solution: Prefix Sum Saved as Hashmap.} \nAnswer: the brute force solution of this problem is the same as for the maximum subarray. We track the prefix sum $S_{(i,j)} = y_j - y_{i-1}$ and only need a certain value of $S_{(i,j)}$, namely $k$. Then $y_i = y_j - k$, the current prefix sum minus $k$. We use a hashmap to save each prefix sum together with the first index at which it appears, i.e., $(y_i, first\\_index)$, so that $max\\_len = \\max(idx-dict[y_j-k])$.\n% Solution: using prefix\\_sum and hashmap, to just need to reformulate dict[sum\\_i]. For this question, we need to get the maximum size of subarray, so dict[i] =min(idx), the earliest that the value appear. which means every time we just set the $dict[i]=current_index$, only if the key does'nt exist. dict[0] = -1. Here we have sum\\_i, to check if there is a j in front of i that makes sum(j,i)=k, sum(j,i)=sum\\_i-A Value in Dict = k, so the value need to be sum\\_i-k, so that we need to check if it is in the dictionary.\n\\begin{lstlisting}[language = Python]\ndef maxSubArrayLen(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        prefix_sum = 0\n        dict = {0:-1} #this means for index -1, the sum is 0\n        max_len = 0\n        for idx,n in enumerate(nums):\n            prefix_sum += n\n            # save each prefix sum together with the first index at which it appears\n            if prefix_sum not in dict: \n                dict[prefix_sum] = idx\n            # track the maximum length so far\n            if prefix_sum-k in dict:\n                max_len = max(max_len, idx-dict[prefix_sum-k])\n        return max_len\n\\end{lstlisting}\n
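\nAs a quick check of the hashmap bookkeeping on Example 1 (a trace sketch):\n\\begin{lstlisting}[language = Python]\n# nums = [1, -1, 5, -2, 3], k = 3\n# prefix sums: 1, 0, 5, 3, 6; first-seen indices: {0: -1, 1: 0, 5: 2, 3: 3, 6: 4}\n# at idx 3, prefix_sum = 3: 3 - k = 0 is in the map at -1, so max_len = 3 - (-1) = 4\n\\end{lstlisting}\n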
\nAnother example that asks for a pattern but can be converted to the last problem:\n\n\\item \\textbf{525. Contiguous Array.} Given a binary array, find the maximum length of a contiguous subarray with equal number of 0 and 1. \\textit{Note: The length of the given binary array will not exceed 50,000.}\n\\begin{lstlisting}[numbers=none]\nExample 1:\n\nInput: [0,1]\nOutput: 2\nExplanation: [0, 1] is the longest contiguous subarray with equal number of 0 and 1.\n\nExample 2:\n\nInput: [0,1,0]\nOutput: 2\nExplanation: [0, 1] (or [1, 0]) is a longest contiguous subarray with equal number of 0 and 1.\n\\end{lstlisting}\n\nSolution: the problem is equivalent to the maximum size subarray with sum $k=0$ if we treat each 0 as $-1$ and each 1 as $1$.\n\\begin{lstlisting}[language = Python]\ndef findMaxLength(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        nums=[nums[i] if nums[i]==1 else -1 for i in range(len(nums))]\n        \n        max_len=0\n        cur_sum=0\n        mapp={0:-1}\n        \n        for idx,v in enumerate(nums):\n            cur_sum+=v\n            if cur_sum in mapp:\n                max_len=max(max_len,idx-mapp[cur_sum])\n            else:\n                mapp[cur_sum]=idx    \n        \n        return max_len\n\\end{lstlisting}\n\n\\item \\textbf{674. Longest Continuous Increasing Subsequence} Given an unsorted array of integers, find the length of longest continuous increasing subsequence (subarray).\n\\begin{lstlisting}[numbers=none]\nExample 1:\nInput: [1,3,5,4,7]\nOutput: 3\nExplanation: The longest continuous increasing subsequence is [1,3,5], its length is 3. \nEven though [1,3,5,7] is also an increasing subsequence, it's not a continuous one where 5 and 7 are separated by 4.\n\nExample 2:\nInput: [2,2,2,2,2]\nOutput: 1\nExplanation: The longest continuous increasing subsequence is [2], its length is 1.\n\\textit{Note: Length of the array will not exceed 10,000.}\n\\end{lstlisting}\nSolution: The description of this problem should say ``subarray'' instead of ``subsequence''. The brute force solution is, like any subarray problem, $O(n^3)$: two nested for loops enumerate the subarrays, and another $O(n)$ checks whether each is strictly increasing. Using two pointers, we can get $O(n)$ time complexity. We put two pointers $i$ and $j$ at the start of the array and let $j$ scan forward. We specifically restrict the subarray from $i$ to $j$ to be increasing; when this is violated, we reset the starting point of the subarray at the violating position.  \n\\begin{lstlisting}[language = Python]\nclass Solution:\n    def findLengthOfLCIS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums:\n            return 0\n        if len(nums)==1:\n            return 1\n        i,j = 0,0\n        max_length = 0\n        while j < len(nums):\n            j += 1 #slide the window\n            max_length = max(max_length, j-i)\n            # when condition violated, reset the window\n            if j<len(nums) and nums[j-1]>=nums[j]:\n                i = j\n                         \n        return max_length\n\\end{lstlisting}\n\\item \\textbf{209. Minimum Size Subarray Sum (medium)} Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum >= s. 
If there isn't one, return 0 instead.\n\\begin{lstlisting}[numbers=none]\nExample: \n\nInput: s = 7, nums = [2,3,1,2,4,3]\nOutput: 2\nExplanation: the subarray [4,3] has the minimal length under the problem constraint.\n\\end{lstlisting}\n\n\\textbf{Solution 1: Sliding Window, $O(n)$.}\n\\begin{lstlisting}[language=Python]\ndef minSubArrayLen(self, s, nums):\n    ans = float('inf')\n    n = len(nums)\n    i = j = 0\n    acc = 0\n    while j < n:\n        acc += nums[j] # increase the window size\n        while acc >= s: # shrink the window to get the optimal result\n            ans = min(ans, j-i+1)\n            acc -= nums[i]\n            i += 1\n        j +=1\n    return ans if ans != float('inf') else 0\n\\end{lstlisting}\n\n\\textbf{Solution 2: prefix sum and binary search. $O(n\\log n)$.} Assuming the current prefix sum is $p_i$, we need to find the largest $p_j \\leq p_i - s$; this is the rightmost value in the (sorted) prefix sum array that is $\\leq p_i - s$. \n\\begin{lstlisting}[language=Python]\nfrom bisect import bisect_right\nclass Solution(object):\n    def minSubArrayLen(self, s, nums):\n        ans = float('inf')\n        n = len(nums)\n        i = j = 0\n        ps = [0]\n        while j < n:\n            ps.append(nums[j]+ps[-1]) \n            # find a possible left index i\n            if ps[-1]-s >= 0:\n                index = bisect_right(ps, ps[-1]-s)\n                if index > 0:\n                    index -= 1\n                    ans = min(ans, j-index+1)\n            j+=1\n        return ans if ans != float('inf') else 0\n\\end{lstlisting}\n\\end{examples}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Maximum Subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\paragraph{The maximum number of subarrays with sum or product S}\n\\begin{examples}[resume]\n\\item \\textbf{560. Subarray Sum Equals K} Given an array of integers and an integer k, you need to find the total number of continuous subarrays whose sum equals to k.\n\\begin{lstlisting}[numbers=none]\nExample 1:\nInput:nums = [1,1,1], k = 2\nOutput: 2\n\\end{lstlisting}\n\nAnswer: The naive solution is to enumerate all possible subarrays, which is $O(n^2)$, and then compute and check each sum, which is $O(n)$, for $O(n^3)$ in total. However, we can decrease it to $O(n^2)$ if we compute the sum of the array in a different way: we first compute the sum till the current index for each position, with equation $sum(i,j) = sum(0,j)-sum(0,i)$. However the OJ gave us a TLE error. \n\\begin{lstlisting}[language = Python]\ndef subarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        ''' return the number of subarrays that equal to k'''\n        count = 0\n        sums = [0]*(len(nums)+1) # sum till current index\n        for idx, v in enumerate(nums):\n            sums[idx+1] = sums[idx]+v\n        for i in range(len(nums)):\n            for j in range(i, len(nums)):\n                value = sums[j+1]-sums[i]\n                count = count+1 if value==k else count\n        return count\n\\end{lstlisting}\n\nSolution 3: using prefix\\_sum and a hashmap; we just need to reformulate dict[sum\\_i]. For this question, we need to get the total number of subarrays, so $dict[i] = count$, which means every time we just set dict[i] += 1, with dict[0] = 1 initially.\n
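As a quick sanity check on Example 1 (a trace sketch):\n\\begin{lstlisting}[language = Python]\n# nums = [1, 1, 1], k = 2; dict starts as {0: 1}\n# prefix_sum = 1: dict[1-2] = dict[-1] = 0, count = 0; dict = {0: 1, 1: 1}\n# prefix_sum = 2: dict[2-2] = dict[0] = 1, count = 1; dict = {0: 1, 1: 1, 2: 1}\n# prefix_sum = 3: dict[3-2] = dict[1] = 1, count = 2, the expected output\n\\end{lstlisting}\n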
\\begin{lstlisting}[language = Python]\nimport collections\nclass Solution(object):\n    def subarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        \"\"\"\n        ''' return the number of subarrays that equal to k'''\n        dict = collections.defaultdict(int) #the value is the number of the sum occurs\n        dict[0]=1\n        prefix_sum, count=0, 0\n        for v in nums:\n            prefix_sum += v\n            count += dict[prefix_sum-k] # increase the counter of the appearing value k, default is 0\n            dict[prefix_sum] += 1 # update the count of prefix sum, if it is first time, the default value is 0\n        return count\n\\end{lstlisting}\n\n\\item \\textbf{974. Subarray Sums Divisible by K.} Given an array A of integers, return the number of (contiguous, non-empty) subarrays that have a sum divisible by K.\n\\begin{lstlisting}[numbers=none]\nExample 1:\n\nInput: A = [4,5,0,-2,-3,1], K = 5\nOutput: 7\nExplanation: There are 7 subarrays with a sum divisible by K = 5:\n[4, 5, 0, -2, -3, 1], [5], [5, 0], [5, 0, -2, -3], [0], [0, -2, -3], [-2, -3]\n\\end{lstlisting}\n\n\\textbf{Analysis:} for the above array, we can compute the prefix sum as [0,4,9, 9, 7,4,5]. Let P[i+1] = A[0] + A[1] + ... + A[i]. Then, each subarray can be written as P[j] - P[i] (for j > i). For the current index j, we need an i such that (P[j]-P[i])\\% K == 0. Because this means P[j]\\%K = P[i]\\%K, unlike the case sum == K we check not P[j]-K but P[j]\\%K against the hashmap. Therefore, we save each prefix sum modulo K. For the example, we have dict: {0: 2, 4: 4, 2: 1}.\n\\begin{lstlisting}[language=Python]\nfrom collections import defaultdict\nclass Solution:\n    def subarraysDivByK(self, A, K):\n        \"\"\"\n        :type A: List[int]\n        :type K: int\n        :rtype: int\n        \"\"\"\n        a_sum = 0\n        p_dict = defaultdict(int)\n        p_dict[0] = 1 # when it is empty we still has one 0:1 \n        ans = 0\n        for i, v in enumerate(A):\n            a_sum += v\n            a_sum %= K\n            if a_sum in p_dict:\n                ans += p_dict[a_sum]\n            p_dict[a_sum] += 1 # save the remainder instead\n        return ans\n\\end{lstlisting}\n\\textbf{Solution 2: use Combination.}  Then P = [0,4,9,9,7,4,5], and $C_0 = 2, C_2 = 1, C_4 = 4$. With $C_0=2$ (at P[0] and P[6]), it indicates $C_2^2=1$ subarray with sum divisible by K, namely A[0:6]=[4, 5, 0, -2,-3,1]. With $C_4 = 4$ (at P[1], P[2], P[3], P[5]), it indicates $C_4^2=6$ subarrays with sum divisible by K, namely A[1:2], A[1:3], A[1:5], A[2:3], A[2:5], A[3:5].\n\\begin{lstlisting}[language=Python]\nimport collections\ndef subarraysDivByK(self, A, K):\n    P = [0]\n    for x in A:\n        P.append((P[-1] + x) % K)\n\n    count = collections.Counter(P)\n    return sum(v*(v-1)//2 for v in count.values())\n\\end{lstlisting}\n\n\\end{examples}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Vague-conditioned subarray\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Vague-conditioned subarray}\nIn this section, we are asked the same types of questions as in the last section. The only difference is the condition. For example, the following question asks for subarrays with $sum \\geq s$. \n\nBecause of the vagueness of the condition, a hashmap$+$prefix-sum solution will no longer give us $O(n)$ linear time. 
The best we can do, if the array is all positive, is $O(n\log n)$ by combining prefix sums with binary search. However, a carefully designed sliding window can still help us achieve linear $O(n)$ time. For an array with negative numbers, we can utilize the monotonic queue mentioned in Section~\ref{section_mono_stack}, which achieves $O(n)$ in both time and space complexity. \n\n\paragraph{All Positive Array (Sliding Window)}\n\nIf it is an all positive array, it can still be easily solved with a sliding window. For example: \n\begin{examples}[resume]\n\item \textbf{209. Minimum Size Subarray Sum (medium)} Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum >= s. If there isn't one, return 0 instead.\n\begin{lstlisting}[numbers=none]\nExample: \nInput: s = 7, nums = [2,3,1,2,4,3]\nOutput: 2\nExplanation: the subarray [4,3] has the minimal length under the problem constraint.\n\end{lstlisting}\nFollow up: If you have figured out the O(n) solution, try coding another solution of which the time complexity is O(n log n). \n\n\textbf{Analysis.} For this problem, we can still use prefix sums saved in a hashmap. However, since the condition is $sum >= s$, we have to search through the hashmap for every key $\leq$ prefix sum $- s$. With linear search, the time complexity rises to $O(n^2)$, and we receive a TLE error.\n\begin{lstlisting}[language = Python]\n    def minSubArrayLen(self, s, nums):\n        """\n        :type s: int\n        :type nums: List[int]\n        :rtype: int\n        """\n        if not nums:\n            return 0\n        dict = collections.defaultdict(int)\n        dict[0] = -1 # prefix sum 0 with index -1\n        prefixSum = 0\n        minLen = sys.maxsize\n        for idx, n in enumerate(nums):\n            prefixSum += n\n            for key, value in dict.items():\n                if key <= prefixSum - s: \n                    minLen = min(minLen, idx-value)\n            dict[prefixSum] = idx # save the last index\n        return minLen if 1 <= minLen <= len(nums) else 0\n\end{lstlisting}\n\n\textbf{Solution 1: Prefix Sum and Binary Search.} Because the items in the array are all positive, the prefix sum array is increasing. This means that if we save the prefix sums in an array, it is already sorted, and we can use binary search to find the index of the largest value $\leq$ (prefix sum $- s$). With the bisect module, bisect\_right finds the rightmost position at which the current value could be inserted while keeping the array ordered; the index we want is then rr-1. 
\n\\begin{lstlisting}[language=Python]\nimport bisect\ndef minSubArrayLen(self, s, nums):\n    ps = [0]\n    ans = len(nums)+1\n    for i, v in enumerate(nums):\n        ps.append (ps[-1] + v)\n        #find the right most position that <=\n        rr = bisect.bisect_right(ps, ps[i+1] - s)\n        if rr:\n            ans = min(ans, i+1 - (rr-1))\n    return ans if ans <= len(nums) else 0\n\\end{lstlisting}\n\\begin{lstlisting}[language = Python]\n    def minSubArrayLen(self, s, nums):\n        \"\"\"\n        :type s: int\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        def bSearch(nums, i, j, target):\n            while i < j:\n                mid = (i+j) / 2\n                if nums[mid] == target:\n                    return mid\n                elif nums[mid] < target:\n                    i = mid + 1\n                else:\n                    j = mid - 1\n            return i   \n        \n        if not nums:\n            return 0\n        rec = [0] * len(nums)\n        rec[0] = nums[0]\n        if rec[0] >= s:\n            return 1\n        minlen = len(nums)+1\n        for i in range(1, len(nums)):\n            rec[i] = rec[i-1] + nums[i]\n            if rec[i] >= s:\n                index = bSearch(rec, 0, i, rec[i] - s)\n                if rec[index] > rec[i] - s:\n                    index -= 1\n                minlen = min(minlen, i - index)\n        return minlen if minlen != len(nums)+1 else 0\n\\end{lstlisting}\n\n\\textbf{Solution 2: Sliding window in $O(n)$.}  While, using the sliding window, Once the sum in the window satisfy the condition, we keep shrinking the window size (moving the left pointer rightward) untill the condition is no longer hold. This way, we are capable of getting the complexity with $O(n)$. \n\\begin{lstlisting}[language = Python]\ndef minSubArrayLen(self, s, nums):\n    i, j = 0, 0\n    sum_in_window = 0\n    ans = len(nums) + 1\n    while j < len(nums):\n        sum_in_window += nums[j]\n        j += 1\n        # shrink the window if the condition satisfied\n        while i <j and sum_in_window >= s:\n            ans = min(ans, j-i)\n            sum_in_window -= nums[i]\n            i += 1\n    return ans if ans <= len(nums) else 0\n\\end{lstlisting}\n\n\\item \\textbf{713. Subarray Product Less Than K} Your are given an array of positive integers nums.\nCount and print the number of (contiguous) subarrays where the product of all the elements in the subarray is less than k.\n\\begin{lstlisting}[numbers=none]\nExample 1:\nInput: nums = [10, 5, 2, 6], k = 100\nOutput: 8\nExplanation: The 8 subarrays that have product less than 100 are: [10], [5], [2], [6], [10, 5], [5, 2], [2, 6], [5, 2, 6].\n\nNote that [10, 5, 2] is not included as the product of 100 is not strictly less than k.\nNote:\n0 < nums.length <= 50000.\n0 < nums[i] < 1000.\n0 <= k < 10^6.\n\\end{lstlisting}\n\nAnswer: Because we need the subarray less than k, so it is difficult to use prefix sum. 
If we use a sliding window,\n\begin{lstlisting}\ni=0, j=0, 10 10<100, ans += j-i+1 (1) -> [10]\ni=0, j=1, 50 50<100, ans += j-i+1 (3) -> [10],[10,5]\ni=0, j=2, 100 shrink the window, i=1, product = 10, ans += 2 -> [5,2],[2]\ni=1, j=3, 60, ans += 3 -> [6],[2,6],[5,2,6]\n\end{lstlisting}\nThe python code:\n\begin{lstlisting}[language = Python]\nclass Solution:\n    def numSubarrayProductLessThanK(self, nums, k):\n        """\n        :type nums: List[int]\n        :type k: int\n        :rtype: int\n        """\n        if not nums:\n            return 0\n        i, j = 0, 0\n        window_product = 1\n        ans = 0\n        while j < len(nums):\n            window_product *= nums[j]\n            \n            while i < j and window_product >= k:\n                window_product //= nums[i] # exact: all elements are positive integers\n                i += 1\n            if window_product < k:\n                ans += j-i+1 # count all valid subarrays ending at j\n            j += 1\n        return ans\n\end{lstlisting}\n\end{examples}\n\paragraph{Array with Negative Element (Monotonic Queue)}\n\nIn this section, we work through how to handle arrays that contain negative elements and are vague-conditioned. A monotonic queue or stack (Section~\ref{section_mono_stack}) fits this scenario and gives $O(n)$ time and $O(n)$ space complexity. \n\n\begin{examples}[resume]\n\item \textbf{862. Shortest Subarray with Sum at Least K}\nReturn the length of the shortest, non-empty, contiguous subarray of A with sum at least K.\n\nIf there is no non-empty subarray with sum at least K, return -1.\n\begin{lstlisting}[numbers=none]\nExample 1:\nInput: A = [1], K = 1\nOutput: 1\n\nExample 2:\nInput: A = [1,2], K = 4\nOutput: -1\n\nExample 3:\nInput: A = [2,-1,2], K = 3\nOutput: 3\n\end{lstlisting}\nNote: $1 <= A.length <= 50000$, $-10 ^ 5 <= A[i] <= 10 ^ 5$, $1 <= K <= 10 ^ 9$.\n\n\textbf{Analysis:} The only difference from the last problems is the presence of negative values. Because of them, the shrinking method no longer works: when we shrink the window, the sum in the smaller window might even grow if we just cut out a negative value. For instance, with [84,-37,32,40,95] and K=167 the right answer is [32, 40, 95], while the plain sliding window would finish with i=0, j=4. So how do we handle the negative values?\n\n\textbf{Solution 1: prefix sum and binary search in the prefix sum. TLE.}\n\n\begin{lstlisting}[language=Python]\ndef shortestSubarray(self, A, K):\n    def bisect_right(lst, target):\n        l, r = 0, len(lst)-1\n        while l <= r:\n            mid = l + (r-l)//2\n            if lst[mid][0] <= target:\n                l = mid + 1\n            else:\n                r = mid - 1\n        return l\n    acc = 0\n    ans = float('inf')\n    prefixSum = [(0, -1)] # (value, index)\n    for i, n in enumerate(A):\n        acc += n\n        index = bisect_right(prefixSum, acc-K)\n        for j in range(index):\n            ans = min(ans, i-prefixSum[j][1])\n        index = bisect_right(prefixSum, acc)\n        prefixSum.insert(index, (acc, i))\n    return ans if ans != float('inf') else -1\n\end{lstlisting}\n\n% For an all positive array, the prefix sum is perfectly increasing; if we want $S_{(i,j)} = y[j]-y[i-1] >= k$ with the smallest subarray, we want the maximum such $i$. 
% For each $j$, once we find a satisfying $i$, that $i$ never needs to be considered again (just like the sliding window, where once the condition is satisfied we move i forward until the condition is violated or i cannot move any further). \n\nNow, let us analyze a simple example which includes both 0 and a negative number: [2, -1, 2, 0, 1], K=3, with prefix sum P = [0, 2, 1, 3, 3, 4]; the subarrays whose sum is at least three are [2,-1,2], [2,-1,2,0] and [2,0,1]. First, let us draw the prefix sum on an x-y axis. When we encounter a negative number the prefix sum decreases, and when we encounter a zero it stays flat. For the zero case: P[3] = P[4], so as a left boundary index 4 is at least as good as index 3 (same prefix value, shorter subarray), and index 3 is no longer needed. For the negative case: P[1]=2 > P[2]=1 because A[1]<0, and P[2] is always a better left-boundary choice than P[1] (it is smaller, so the condition is more likely to hold, and the distance is shorter). Therefore, we can keep the candidate prefix sums monotonically increasing, just like for an array of all positive numbers, by maintaining a monotonic queue. \n\begin{lstlisting}[language = Python]\nimport collections\nclass Solution:\n    def shortestSubarray(self, A, K):\n\n        P = [0]*(len(A)+1)\n        for idx, x in enumerate(A):\n            P[idx+1] = P[idx]+x\n\n        ans = len(A)+1 # N+1 is impossible\n        monoq = collections.deque() \n        for y, Py in enumerate(P):\n            # a negative or zero element kicks out any previous larger or equal prefix sum\n            while monoq and Py <= P[monoq[-1]]:\n                monoq.pop()\n            # once a left index is used, it never needs to be considered again\n            # (similar to a sliding window moving its first index forward)\n            while monoq and Py - P[monoq[0]] >= K:\n                ans = min(ans, y - monoq.popleft())\n            monoq.append(y)\n\n        return ans if ans < len(A)+1 else -1\n\end{lstlisting}\n\end{examples}\n\n\subsection{LeetCode Problems and Misc}\n\paragraph{Absolute-conditioned Subarray}\n\begin{enumerate}\n    \item 930. Binary Subarrays With Sum\n    \begin{lstlisting}\n    In an array A of 0s and 1s, how many non-empty subarrays have sum S?\nExample 1:\n\nInput: A = [1,0,1,0,1], S = 2\nOutput: 4\nExplanation: \nThe 4 subarrays are bolded below:\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\n[1,0,1,0,1]\nNote:\n\n    A.length <= 30000\n    0 <= S <= A.length\n    A[i] is either 0 or 1.\n\end{lstlisting}\nAnswer: this is exactly the same pattern as counting subarrays with a given sum. We solve it using a prefix sum and a hashmap that saves the count of each prefix value. 
\n\\begin{lstlisting}[language=Python]\nimport collections\nclass Solution:\n    def numSubarraysWithSum(self, A, S):\n        \"\"\"\n        :type A: List[int]\n        :type S: int\n        :rtype: int\n        \"\"\"\n        dict = collections.defaultdict(int) #the value is the number of the sum occurs\n        dict[0]=1 #prefix sum starts from 0 and the number is 1\n        prefix_sum, count=0, 0\n        for v in A:\n            prefix_sum += v\n            count += dict[prefix_sum-S] # increase the counter of the appearing value k, default is 0\n            dict[prefix_sum] += 1 # update the count of prefix sum, if it is first time, the default value is 0\n        return count\n\\end{lstlisting}\nWe can write it as:\n\\begin{lstlisting}[language=Python]\n    def numSubarraysWithSum(self, A, S):\n        \"\"\"\n        :type A: List[int]\n        :type S: int\n        :rtype: int\n        \"\"\"\n        P = [0]\n        for x in A: P.append(P[-1] + x)\n        count = collections.Counter()\n\n        ans = 0\n        for x in P:\n            ans += count[x]\n            count[x + S] += 1\n\n        return ans\n\\end{lstlisting}\nAlso, it can be solved used a modified sliding window algorithm. For sliding window, we have $i,j$ starts from 0, which represents the window. Each iteration j will move one position. For a normal sliding window, only if the sum is larger than the value, then we shrink the window size by one. However, in this case, like in the example $1, 0, 1, 0, 1$, when $j = 5$, $i = 1$, the sum is $2$, but the algorithm would miss the case of $i = 2$, which has the same sum value. To solve this problem, we keep another index $i_hi$, in addition to the moving rule of $i$, it also moves if the sum is satisfied and that value is $0$. This is actually a Three pointer algorithm. \n\\begin{lstlisting}[language=Python]\n    def numSubarraysWithSum(self, A, S):\n        i_lo, i_hi, j = 0, 0, 0 #i_lo <= j\n        sum_lo = sum_hi = 0\n        ans = 0\n        while j < len(A):\n            # Maintain i_lo, sum_lo:\n            # While the sum is too big, i_lo += 1\n            sum_lo += A[j]\n            while i_lo < j and sum_lo > S:\n                sum_lo -= A[i_lo]\n                i_lo += 1\n\n            # Maintain i_hi, sum_hi:\n            # While the sum is too big, or equal and we can move, i_hi += 1\n            sum_hi += A[j]\n            while i_hi < j and (\n                    sum_hi > S or sum_hi == S and not A[i_hi]):\n                sum_hi -= A[i_hi]\n                i_hi += 1\n\n            if sum_lo == S:\n                ans += i_hi - i_lo + 1\n            j += 1\n\n        return ans\n\\end{lstlisting}\n\\item 523. Continuous Subarray Sum\n\\begin{lstlisting}\nGiven a list of non-negative numbers and a target integer k, write a function to check if the array has a continuous subarray of size at least 2 that sums up to the multiple of k, that is, sums up to n*k where n is also an integer.\n\nExample 1:\nInput: [23, 2, 4, 6, 7],  k=6\nOutput: True\nExplanation: Because [2, 4] is a continuous subarray of size 2 and sums up to 6.\n\nExample 2:\nInput: [23, 2, 6, 4, 7],  k=6\nOutput: True\nExplanation: Because [23, 2, 6, 4, 7] is an continuous subarray of size 5 and sums up to 42.\n\nNote:\nThe length of the array won't exceed 10,000.\nYou may assume the sum of all the numbers is in the range of a signed 32-bit integer.\n\\end{lstlisting}\nAnswer: This is a mutant of the subarray with value k. The difference here, we save the prefix sum as the reminder of k. 
if $(a+b)\\%k=0$, then $(a\\%k+b\\%k)/k=1$.\n\\begin{lstlisting}[language=Python]\nclass Solution:\n    def checkSubarraySum(self, nums, k):\n        \"\"\"\n        :type nums: List[int]\n        :type k: int\n        :rtype: bool\n        \"\"\"\n        \n        if not nums:\n            return False\n        k = abs(k)\n        prefixSum = 0\n        dict = collections.defaultdict(int)\n        dict[0]=-1\n        for i, v in enumerate(nums):\n            prefixSum += v\n            if k!=0:\n                prefixSum %= k\n            if prefixSum in dict and (i-dict[prefixSum])>=2:\n                    return True\n            if prefixSum not in dict:\n                dict[prefixSum] = i\n        return False\n\\end{lstlisting}\n\\end{enumerate}\n\n\nFor problems like bounded, or average, minimum in a subarray,\n\\begin{examples}[resume]\n\\item 795.Number of Subarrays with Bounded Maximum (medium)\n\\item 907. Sum of Subarray Minimums (monotone stack)\n\\end{examples}\n\n% \\item 674. Longest Continuous Increasing Subsequence\n\n% Given an unsorted array of integers, find the length of longest continuous increasing subsequence (subarray).\n\n% Example 1:\n% \\begin{lstlisting}\n% Input: [1,3,5,4,7]\n% Output: 3\n% Explanation: The longest continuous increasing subsequence is [1,3,5], its length is 3. \n% Even though [1,3,5,7] is also an increasing subsequence, it's not a continuous one where 5 and 7 are separated by 4.\n% \\end{lstlisting}\n% Example 2:\n% \\begin{lstlisting}\n% Input: [2,2,2,2,2]\n% Output: 1\n% Explanation: The longest continuous increasing subsequence is [2], its length is 1.\n% \\end{lstlisting}\n% \\textit{Note: Length of the array will not exceed 10,000.}\n\n% Solution: The brute force solution is use two for loops with $O(n^2)$. The first loop is the start number, the second loop is the $nums[j]>nums[j-1]$ or else stop. Or we can use two pointers. i,j start from 0,1 respectively.\n% \\begin{lstlisting}[language = Python]\n% class Solution:\n%     def findLengthOfLCIS(self, nums):\n%         \"\"\"\n%         :type nums: List[int]\n%         :rtype: int\n%         \"\"\"\n%         if not nums:\n%             return 0\n%         if len(nums)==1:\n%             return 1\n%         i,j=0,1\n%         max_length = 0\n%         while j<len(nums):\n%             if nums[j-1]<nums[j]:\n%                 j+=1\n%             else:\n%                 if j-i>max_length:\n%                     max_length = j-i\n%                 i=j\n%                 j+=1\n%         if j-i>max_length:\n%             max_length = j-i\n                \n%         return max_length\n% \\end{lstlisting}\n\n\n% section subsequence\n%\\subfile{chapters/mastering/array/subsequence}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% sub sequence\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Subsequence (Medium or Hard)}\nThe difference of the subsequence type of questions with the subarray is that we do not need the elements to be consecutive. Because of this relaxation, the brute force solution of this type of question is exponential$O(2^n)$, because for each element, we have two options: chosen or not chosen. This type of questions would usually be used as a follow-up question to the subarray due to its further difficulty because of nonconsecutive. This type of problems are a typical dynamic programming. 
\section{Subsequence (Medium or Hard)}\nThe difference between the subsequence type of questions and the subarray type is that we do not need the elements to be consecutive. Because of this relaxation, the brute force solution to this type of question is exponential, $O(2^n)$: for each element, we have two options, chosen or not chosen. This type of question is usually a follow-up to a subarray question, with added difficulty because of the non-consecutiveness. These problems are typically solved with dynamic programming. Here we show a list of all related subsequence problems on LeetCode in Fig.~\ref{fig:subsequence_problems}.\n\nA subsequence of a string is a new string which is formed from the original string by deleting some (can be none) of the characters without disturbing the relative positions of the remaining characters. (i.e., "ACE" is a subsequence of "ABCDE" while "AEC" is not). For subsequence problems, we commonly see increasing subsequences and counting distinct subsequences, and they are usually solved with single-sequence dynamic programming. \n\begin{figure}[h]\n    \centering\n    \includegraphics[width=0.8\columnwidth]{fig/subsequence_1.png}\n    \includegraphics[width=0.8\columnwidth]{fig/subsequence_2.png}\n    \caption{Subsequence Problems Listed on LeetCode}\n    \label{fig:subsequence_problems}\n\end{figure}\n940. Distinct Subsequences II (hard)\n\nGiven a string S, count the number of distinct, non-empty subsequences of S. Since the result may be large, return the answer modulo $10^9 + 7$.\n\begin{lstlisting}\nExample 1:\n\nInput: "abc"\nOutput: 7\nExplanation: The 7 distinct subsequences are "a", "b", "c", "ab", "ac", "bc", and "abc".\n\nExample 2:\n\nInput: "aba"\nOutput: 6\nExplanation: The 6 distinct subsequences are "a", "b", "ab", "ba", "aa" and "aba".\n\nExample 3:\n\nInput: "aaa"\nOutput: 3\nExplanation: The 3 distinct subsequences are "a", "aa" and "aaa".\n\end{lstlisting}\n\textbf{Sequence type dynamic programming}. The naive solution for subsequences is to use DFS to generate all subsequences recursively while checking for repetition; the possible number of subsequences is $2^n-1$. Let's try the forward induction method. \n\begin{lstlisting}\n# define the state: dp[i] = number of distinct subsequences ending at index i\nstate:  a   b  c\nans  :  1   2  4 \na: a; dp[0] = 1\nb: b, ab; = dp[0]+1; if this were 'a', length 1 duplicates dp[0], only length 2 is possible\nc: c, ac, bc, abc; = dp[0]+dp[1]+1; if it were 'a': aa, ba, aba, = dp[1]+1\nd: d, ad, bd, abd, cd, acd, bcd, abcd = dp[0]+dp[1]+dp[2]+1\n\end{lstlisting}\nThus the recurrence can be written as Eq.~\ref{eq:distinct_subsequence}.\n\begin{equation}\n\label{eq:distinct_subsequence}\n    dp[i] = 1 + \sum_{j<i,\ S[j] \neq S[i]} dp[j]\n\end{equation}\nThus, we have $O(n^2)$ time complexity, and the following code:\n\begin{lstlisting}[language=Python]\ndef distinctSubseqII(self, S):\n    """\n    :type S: str\n    :rtype: int\n    """\n    MOD = 10**9+7\n    dp = [1]*len(S) # each position contributes at least one new subsequence\n    for i, c in enumerate(S):\n        for j in range(i):\n            if c == S[j]:\n                continue\n            else:\n                dp[i] += dp[j]\n                dp[i] %= MOD\n    return sum(dp) % MOD\n\end{lstlisting}\nHowever, we still get TLE. How can we improve it further? If we use a counter indexed by the 26 letters together with a running total, 
the inner for loop can be replaced by dp[i] = 1 + (total so far - sum of dp over previous occurrences of the same letter). Thus we can lower the complexity further to $O(n)$.\n\begin{lstlisting}[language=Python]\ndef distinctSubseqII(self, S):\n    MOD = 10**9+7\n    dp = [1]*len(S) # each position contributes at least one new subsequence\n    sum_tracker = [0]*26 # sum of dp over previous occurrences of each letter\n    total = 0\n    for i, c in enumerate(S):\n        index = ord(c) - ord('a')\n        dp[i] += total - sum_tracker[index]\n        total += dp[i]\n        sum_tracker[index] += dp[i]\n    return sum(dp) % MOD\n\end{lstlisting}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Sum\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% \subsection{Sum}\n% In this section, to get sum we can choose to use hashmap to save the original list so that for the last element, we only check the hashmap, we can lower the complexity by one power of n. However, a better solution is to use two pointers or three pointers. for three pointers, the first one is to make sure the starting point. Also, we can think about divide and conquer.\n% \begin{lstlisting}\n% [-4,-1,-1,0,1,2]\n% i, l-> ``````<-r\n% \end{lstlisting}\n\n% \begin{enumerate}\n%     \item \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Others\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\subsection{Others}\nFor example, the following questions are often used as follow-ups to \textit{Longest Continuous Increasing Subsequence}.\n\n300. Longest Increasing Subsequence\n
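Problem 300 asks only for the length of the longest increasing subsequence, and no solution is given in these notes, so here is a minimal sketch of the classic patience-sorting idea in $O(n\log n)$ (the function name is ours; the $O(n^2)$ DP over states $f[i]$, as used for 673 below, also works):\n\begin{lstlisting}[language=Python]\nimport bisect\ndef lengthOfLIS(nums):\n    # tails[k] = smallest possible tail value of an increasing\n    # subsequence of length k+1 seen so far; tails stays sorted\n    tails = []\n    for x in nums:\n        i = bisect.bisect_left(tails, x) # leftmost position with tails[i] >= x\n        if i == len(tails):\n            tails.append(x) # x extends the longest subsequence found so far\n        else:\n            tails[i] = x # x is a smaller tail for subsequences of length i+1\n    return len(tails)\n\nprint(lengthOfLIS([1, 3, 5, 4, 7])) # 4, e.g. [1, 3, 4, 7]\n\end{lstlisting}\n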
673. Number of Longest Increasing Subsequence\n\nGiven an unsorted array of integers, find the number of longest increasing subsequences.\n\begin{lstlisting}\nExample 1:\n\nInput: [1,3,5,4,7]\nOutput: 2\nExplanation: The two longest increasing subsequences are [1, 3, 4, 7] and [1, 3, 5, 7].\n\nExample 2:\nInput: [2,2,2,2,2]\nOutput: 5\nExplanation: The length of the longest increasing subsequence is 1, and there are 5 subsequences of length 1, so output 5.\n\textit{Note: Length of the given array will not exceed 2000 and the answer is guaranteed to fit in a 32-bit signed int.}\n\end{lstlisting}\n\nSolution: This is a different problem: counting the number of longest increasing subsequences. A brute force solution enumerates every increasing subsequence with DFS, records each one, and finally counts those of maximum length; this is exponential and useful only for understanding.\n\begin{lstlisting}[language = Python]\nfrom sys import maxsize\nclass Solution:\n    def findNumberOfLIS(self, nums):\n        """\n        :type nums: List[int]\n        :rtype: int\n        """\n        if not nums:\n            return 0\n        rlst = [] # every increasing subsequence reaches exactly one leaf\n        def recursive(idx, tail, res):\n            if idx == len(nums):\n                rlst.append(res)\n                return 0\n            if nums[idx] > tail: # we can either take nums[idx] or skip it\n                addLen = 1 + recursive(idx+1, nums[idx], res+[nums[idx]])\n                notAddLen = recursive(idx+1, tail, res)\n                return max(addLen, notAddLen)\n            else:\n                return recursive(idx+1, tail, res)\n\n        ans = recursive(0, -maxsize, [])\n        count = 0\n        for lst in rlst:\n            if len(lst) == ans:\n                count += 1\n        return count\n\end{lstlisting}\n\nUsing dynamic programming with state $f[i]$ (here \texttt{lengths[i]}), the difference from the plain LIS is that we add a count array.\n\begin{lstlisting}[language = Python]\nclass Solution:\n    def findNumberOfLIS(self, nums):\n        N = len(nums)\n        if N <= 1: return N\n        lengths = [0] * N # lengths[i] = longest LIS ending in nums[i]\n        counts = [1] * N # counts[i] = number of longest ending in nums[i]\n\n        for idx, num in enumerate(nums): # i\n            for i in range(idx): # j\n                if nums[i] < nums[idx]: # nums[idx] can extend this subsequence\n                    if lengths[i] >= lengths[idx]:\n                        lengths[idx] = 1 + lengths[i] # set the biggest length\n                        counts[idx] = counts[i] # reset the count\n                    elif lengths[i] + 1 == lengths[idx]: # a tie\n                        counts[idx] += counts[i] # accumulate the count\n\n        longest = max(lengths)\n        return sum(c for i, c in enumerate(counts) if lengths[i] == longest)\n\end{lstlisting}\n\n128. Longest Consecutive Sequence\n\begin{lstlisting}\nGiven an unsorted array of integers, find the length of the longest consecutive elements sequence.\n\nFor example,\n Given [100, 4, 200, 1, 3, 2],\n The longest consecutive elements sequence is [1, 2, 3, 4]. Return its length: 4.\n \n Your algorithm should run in O(n) complexity.\n \end{lstlisting}\n\nSolution: Ignoring the O(n) requirement, we can use sorting to get [1,2,3,4,100,200] and then use two pointers to find [1,2,3,4].\n\nHow about O(n)? 
We can pop a number from the set, for example 4, then repeatedly check whether first-1 is present to collect every number to the left of 4 (here 3, 2, 1), and use another loop to find all the larger ones, removing these numbers from the set as we go.\n\begin{lstlisting}[language =Python]\ndef longestConsecutive(self, nums):\n        nums = set(nums)\n        maxlen = 0\n        while nums:\n            first = last = nums.pop()\n            while first - 1 in nums: # keep finding the smaller neighbor\n                first -= 1\n                nums.remove(first)\n            while last + 1 in nums: # keep finding the larger neighbor\n                last += 1\n                nums.remove(last)\n            maxlen = max(maxlen, last - first + 1)\n        return maxlen\n\end{lstlisting}\n\n\n% subset\n%\subfile{chapters/mastering/array/subset.tex}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subset\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{Subset (Combination and Permutation)}\n\label{part4_array_subset}\nA subset B of a set A is defined as a set whose elements are all from set A; in other words, the subset B is contained inside the set A, $B \subseteq A$. There are two kinds of subset problems: if the order inside the subset doesn't matter, it is a combination problem; otherwise, it is a permutation problem. To solve the problems in this section, we need to refer to the backtracking in Sec~\ref{sec_combination}. When the subset has a fixed constant length, a hashmap can be used to lower the complexity by one power of n.\n\n\textbf{Subset VS Subsequence}. In a subsequence, the elements keep the original order from the original sequence. In a set, on the other hand, there is no ordering, only a collection of elements. \n\nIn this type of question, we are asked to return subsets of a list. For this type of question, backtracking~\ref{sec:backtrack} can be applied. \n\subsection{Combination}\n\label{part4_array_combine}\nThe solutions of this section are heavily related to Section~\ref{sec_combination}. \n78. Subsets\n\begin{lstlisting}\nGiven a set of distinct integers, nums, return all possible subsets (the power set).\n\nNote: The solution set must not contain duplicate subsets.\n\nExample:\n\nInput: nums = [1,2,3]\nOutput:\n[\n  [3],\n  [1],\n  [2],\n  [1,2,3],\n  [1,3],\n  [2,3],\n  [1,2],\n  []\n]\n\end{lstlisting}\n\textbf{Backtracking}. This is a combination problem, which we have explained in the backtracking section. We directly give the code here. \n\begin{lstlisting}[language = Python]\ndef subsets(self, nums):\n    res, n = [], len(nums)\n    res = self.combine(nums, n, n)\n    return res\n\ndef combine(self, nums, n, k):\n    """\n    :type n: int\n    :type k: int\n    :rtype: List[List[int]]\n    """\n    def C_n_k(d, k, s, curr, ans): # d controls the depth, k is the return level, curr saves the current result, ans collects all results\n        ans.append(curr)\n        if d == k: # the length is satisfied\n            return\n        for i in range(s, n):\n            curr.append(nums[i])\n            C_n_k(d+1, k, i+1, curr[:], ans) # i+1 because no repeat; make sure to use a deep copy curr[:]\n            curr.pop()\n\n    ans = []    \n    C_n_k(0, k, 0, [], ans) \n    return ans\n\end{lstlisting}\n\textbf{Incremental}. Backtracking is not the only way to solve the above problem. There is another way to do it iteratively; observe the following process. 
We can just keep appending elements to the end of previous results. \n\begin{lstlisting}\n[1, 2, 3, 4]\nl = 0, []\nl = 1, for 1, []+[1], -> [1],  get powerset of [1]\nl = 2, for 2, []+[2], [1]+[2], -> [2], [1, 2], get powerset of [1, 2]\nl = 3, for 3, []+[3], [1]+[3], [2]+[3], [1, 2]+[3], -> [3], [1, 3], [2, 3], [1, 2, 3], get powerset of [1, 2, 3]\nl = 4, for 4, []+ [4]; [1]+[4]; [2]+[4], [1, 2] +[4]; [3]+[4], [1,3]+[4],[2,3]+[4], [1,2,3]+[4], get powerset of [1, 2, 3, 4]\n\end{lstlisting}\n\begin{lstlisting}[language=Python]\ndef subsets(self, nums):\n    result = [[]] # two-dimensional; already contains the one element []\n    for num in nums:\n        new_results = []\n        for r in result:\n            new_results.append(r + [num])\n        result += new_results\n        \n    return result\n\end{lstlisting}\n90. Subsets II\n\begin{lstlisting}\nGiven a collection of integers that might contain duplicates, nums, return all possible subsets (the power set).\n\nNote: The solution set must not contain duplicate subsets.\n\nExample:\n\nInput: [1,2,2]\nOutput:\n[\n  [2],\n  [1],\n  [1,2,2],\n  [2,2],\n  [1,2],\n  []\n]\n\end{lstlisting}\nAnalysis: Because of the duplicates, the previous power set algorithm would generate duplicate subsets. For the above example, we would get [1, 2] twice and [2] twice. If we try to modify the previous code, we first need to sort nums, which makes checking for duplicates easier. Then the code goes like this:\n\begin{lstlisting}[language = Python]\n    def subsetsWithDup(self, nums):\n        """\n        :type nums: List[int]\n        :rtype: List[List[int]]\n        """\n        nums.sort()\n        result = [[]] # two-dimensional; already contains the one element []\n        for num in nums:\n            new_results = []\n            for r in result:\n                new_results.append(r + [num])\n            for rst in new_results:\n                if rst not in result: # skip duplicates\n                    result.append(rst)\n            \n        return result\n\end{lstlisting}\nHowever, the above code is extremely inefficient because of the checking process. A better way:\n\begin{lstlisting}\n[1, 2, 2]\nl = 0, []\nl = 1, for 1, []+[1]\nl = 2, for 2, []+[2], [1]+[2]; []+[2, 2], [1]+[2, 2]\n\end{lstlisting}\nSo it is more efficient to first save all the numbers of the array in a dictionary with their counts. For the above case, dic = \{1:1, 2:2\}. Each time we extend the results, we append a key from 1 up to its count times (here 2 is used up to 2 times). In the same way, we can use a dictionary in the backtracking version too. \n\begin{lstlisting}[language=Python]\nclass Solution(object):\n    def subsetsWithDup(self, nums):\n        """\n        :type nums: List[int]\n        :rtype: List[List[int]]\n        """\n        if not nums:\n            return [[]]\n        res = [[]]\n        dic = collections.Counter(nums)\n        for key, val in dic.items():\n            tmp = []\n            for lst in res:\n                for i in range(1, val+1):\n                    tmp.append(lst+[key]*i)\n            res += tmp\n        return res\n\end{lstlisting}\n\n77. Combinations\n\begin{lstlisting}\nGiven two integers n and k, return all possible combinations of k numbers out of 1 ... 
n.\n\nExample:\n\nInput: n = 4, k = 2\nOutput:\n[\n  [2,4],\n  [3,4],\n  [2,3],\n  [1,2],\n  [1,3],\n  [1,4],\n]\n\end{lstlisting}\nAnalysis: In this problem, it is difficult to generate the results iteratively; the only way to reuse the incremental solution is to filter it and keep only the results of the length we want. However, backtracking solves the problem easily, as we mentioned in Section~\ref{sec_combination}.\n\begin{lstlisting}[language=Python]\ndef combine(self, n, k):\n    """\n    :type n: int\n    :type k: int\n    :rtype: List[List[int]]\n    """\n    ans = []\n    def C_n_k(d, k, s, curr):\n        if d == k:\n            ans.append(curr)\n            return\n        for i in range(s, n):\n            # curr.append(i+1)\n            # C_n_k(d+1, k, i+1, curr[:])\n            # curr.pop()\n            C_n_k(d+1, k, i+1, curr+[i+1])\n    C_n_k(0, k, 0, []) \n\n    return ans\n\end{lstlisting}\n%%%%%%%%%%%%%%%%%%% combination sum %%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\subsection{Combination Sum}\n39. Combination Sum\n\nGiven a set of candidate numbers (candidates) \textbf{(without duplicates)} and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.\n\nThe same repeated number may be chosen from candidates an \textbf{unlimited number} of times.\n\begin{lstlisting}\nNote:\n\n    All numbers (including target) will be positive integers.\n    The solution set must not contain duplicate combinations.\n\nExample 1:\n\nInput: candidates = [2,3,6,7], target = 7,\nA solution set is:\n[\n  [7],\n  [2,2,3]\n]\n\nExample 2:\n\nInput: candidates = [2,3,5], target = 8,\nA solution set is:\n[\n  [2,2,2,2],\n  [2,3,3],\n  [3,5]\n]\n\end{lstlisting}\n\textbf{DFS Backtracking}. Analysis: This is still a typical combination problem; the difference is that we return when the running sum of the path exceeds the target, and we collect an answer only when it is equal. Because a number can be used an unlimited number of times, after using one number we do not advance the next start position. \n\begin{lstlisting}[language=Python]\ndef combinationSum(self, candidates, target):\n    """\n    :type candidates: List[int]\n    :type target: int\n    :rtype: List[List[int]]\n    """\n    ans = []\n    candidates.sort()\n    self.combine(candidates, target, 0, [], ans)\n    return ans\n\ndef combine(self, nums, target, s, curr, ans):\n    if target < 0:\n        return  # backtracking\n    if target == 0:\n        ans.append(curr)\n        return \n    for i in range(s, len(nums)):\n        # if nums[i] > target:\n        #     return\n        self.combine(nums, target-nums[i], i, curr+[nums[i]], ans) # use i instead of i+1 because we can reuse the number\n\end{lstlisting}\n40. Combination Sum II\n\nGiven a collection of candidate numbers \textbf{(candidates with duplicates)} and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.\n\nEach number in candidates may only \textbf{be used once} in the combination.\n\begin{lstlisting}\nNote:\n\n    All numbers (including target) will be positive integers.\n    The solution set must not contain duplicate combinations.\n\nExample 1:\n\nInput: candidates = [10,1,2,7,6,1,5], target = 8,\nA solution set is:\n[\n  [1, 7],\n  [1, 2, 5],\n  [2, 6],\n  [1, 1, 6]\n]\n\nExample 2:\n\nInput: candidates = [2,5,2,1,2], target = 5,\nA solution set is:\n[\n  [1,2,2],\n  [5]\n]\n\end{lstlisting}\n\textbf{Backtracking+Counter}. 
For the first example, if we reuse the code from the previous problem, the two 1s in the input give us duplicate combinations such as [1, 7] and [1, 2, 5] appearing twice. To avoid this, we need a dictionary that saves all the unique candidates with their corresponding appearance counts. A certain number will then be used at most its count times. \n\begin{lstlisting}[language=Python]\ndef combinationSum2(self, candidates, target):\n    """\n    :type candidates: List[int]\n    :type target: int\n    :rtype: List[List[int]]\n    """\n        \n    candidates = collections.Counter(candidates)\n    ans = []\n    self.combine(list(candidates.items()), target, 0, [], ans) # convert the Counter to a list of (key, count) tuples\n    return ans\n    \ndef combine(self, nums, target, s, curr, ans):\n    if target < 0:\n        return \n    if target == 0:\n        ans.append(curr)\n        return\n    for idx in range(s, len(nums)):           \n        num, count = nums[idx]\n        for c in range(count):\n            self.combine(nums, target-num*(c+1), idx+1, curr+[num]*(c+1), ans)\n\end{lstlisting}\n377. Combination Sum IV (medium)\n\begin{lstlisting}\n Given an integer array with all positive numbers and no duplicates, find the number of possible combinations that add up to a positive integer target.\n\nExample:\n\nnums = [1, 2, 3]\ntarget = 4\n\nThe possible combination ways are:\n(1, 1, 1, 1)\n(1, 1, 2)\n(1, 2, 1)\n(1, 3)\n(2, 1, 1)\n(2, 2)\n(3, 1)\n\nNote that different sequences are counted as different combinations.\n\nTherefore the output is 7.\n\nFollow up:\nWhat if negative numbers are allowed in the given array?\nHow does it change the problem?\nWhat limitation we need to add to the question to allow negative numbers? \n\end{lstlisting}\n\textbf{DFS + MEMO}. This problem is similar to 39. Combination Sum. For [2, 3, 5], target = 8, the comparison:\n\begin{lstlisting}\n[2, 3, 5], target = 8\n39. Combination Sum: there is ordering (each time the start index is the same or larger than before)\n[\n  [2,2,2,2],\n  [2,3,3],\n  [3,5]\n]\n377. Combination Sum IV: there is no ordering (each time the start index stays the same); try every element.\n[\n  [2,2,2,2],\n  [2,3,3],\n* [3,3,2]\n* [3,2,3]\n  [3,5],\n* [5,3]\n]\n\end{lstlisting}\n\begin{lstlisting}[language=Python]\ndef combinationSum4(self, nums, target):\n    """\n    :type nums: List[int]\n    :type target: int\n    :rtype: int\n    """\n    nums.sort()\n    n = len(nums)\n    def DFS(idx, memo, t):\n        if t < 0:\n            return 0\n        if t == 0:\n            return 1\n        count = 0\n        if t not in memo:\n            for i in range(idx, n):\n                count += DFS(idx, memo, t-nums[i])\n            memo[t] = count\n        return memo[t]\n    return DFS(0, {}, target)\n\end{lstlisting}\nBecause here we do not need to enumerate all the possible solutions, we can also use dynamic programming, which will be shown in Section~\ref{}; a bottom-up sketch follows. 
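Here is a minimal sketch of that bottom-up dynamic programming (our own formulation, equivalent to the memoized DFS above): let dp[t] be the number of ordered sequences drawn from nums that sum to t; every such sequence ends with some num, so dp[t] accumulates dp[t-num] over all nums that fit.\n\begin{lstlisting}[language=Python]\ndef combinationSum4(nums, target):\n    dp = [0] * (target + 1) # dp[t] = number of ordered combinations summing to t\n    dp[0] = 1 # the empty sequence\n    for t in range(1, target + 1):\n        for num in nums:\n            if num <= t:\n                dp[t] += dp[t - num]\n    return dp[target]\n\nprint(combinationSum4([1, 2, 3], 4)) # 7, matching the example above\n\end{lstlisting}\nIterating over targets in the outer loop is what makes different orderings count separately; swapping the two loops would count unordered combinations instead, as in 518. Coin Change 2.\n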
\subsection{K Sum}\nIn this subsection, we are still trying to get subsets that sum up to a target, but the length here is fixed; normally we see 2-, 3-, and 4-sums. Because it is still a combination problem, we can use \textbf{backtracking}. Second, because of the fixed length, we can use \textbf{multiple pointers} to build up the potential subsets of the same length. In some cases, again because the length is fixed, a \textbf{hashmap} can lower the complexity by one power of n. \n\n1. Two Sum\nGiven an array of integers, return \textbf{indices} of the two numbers such that they add up to a specific target.\n\nYou may assume that each input would have \textbf{exactly} one solution, and you may not use the same element twice.\n\begin{lstlisting}\nExample:\n\nGiven nums = [2, 7, 11, 15], target = 9,\n\nBecause nums[0] + nums[1] = 2 + 7 = 9,\nreturn [0, 1].\n\end{lstlisting}\n\textbf{Hashmap}. Using backtracking or brute force gets us $O(n^2)$ time complexity. Instead, we can save nums in a dictionary and then just check target-num against it, which gives $O(n)$ time complexity. We have a two-pass hashmap and a one-pass hashmap.\n\begin{lstlisting}[language=Python]\n# two-pass hashmap\ndef twoSum(self, nums, target):\n    """\n    :type nums: List[int]\n    :type target: int\n    :rtype: List[int]\n    """\n    dict = collections.defaultdict(int)\n    for i, t in enumerate(nums):\n        dict[t] = i\n    for i, t in enumerate(nums):\n        if target - t in dict and i != dict[target-t]:\n            return [i, dict[target-t]]\n# one-pass hashmap\ndef twoSum(self, nums, target):\n    """\n    :type nums: List[int]\n    :type target: int\n    :rtype: List[int]\n    """\n    dict = collections.defaultdict(int)\n    for i, t in enumerate(nums):\n        if target - t in dict:\n            return [dict[target-t], i]\n        dict[t] = i\n\end{lstlisting}\n\n15. 3Sum\n\nGiven an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array which give the sum of zero.\n\nNote: The solution set must not contain duplicate triplets.\n\nFor example, given array S = [-1, 0, 1, 2, -1, -4],\n\begin{lstlisting}\nA solution set is:\n[\n  [-1, 0, 1],\n  [-1, -1, 2]\n]\n\end{lstlisting}\n\nSolution: We should use three pointers, with no extra space. $i$ is the start point, ranging over [0, len-2], and $l, r$ are the other two pointers, with l=i+1, r=len-1 at the beginning. 
The time saving comes entirely from sorting the array first.\n\begin{lstlisting}\n[-4,-1,-1,0,1,2]\ni, l-> ``````<-r\n\end{lstlisting}\nHow do we remove duplicates?\n\begin{lstlisting}[language = Python]\ndef threeSum(self, nums):\n    res = []\n    nums.sort()\n    for i in range(len(nums)-2):\n        if i > 0 and nums[i] == nums[i-1]: # skip a repeated start value\n            continue\n        l, r = i+1, len(nums)-1\n        while l < r:\n            s = nums[i] + nums[l] + nums[r]\n            if s < 0:\n                l += 1 \n            elif s > 0:\n                r -= 1\n            else:\n                res.append((nums[i], nums[l], nums[r]))\n                l += 1\n                r -= 1\n\n                # after recording a triplet, skip duplicate values of l and r\n                while l < r and nums[l] == nums[l-1]:\n                    l += 1\n                while l < r and nums[r] == nums[r+1]:\n                    r -= 1\n    return res\n\end{lstlisting}\nUse a hashmap:\n\begin{lstlisting}[language = Python]\ndef threeSum(self, nums):\n        """\n        :type nums: List[int]\n        :rtype: List[List[int]]\n        """\n        res = []\n        nums = sorted(nums)\n        if not nums:\n            return []\n        if nums[-1] < 0 or nums[0] > 0:\n            return []\n        end_position = len(nums)-2\n        dic_nums = {}\n        for i in range(1, len(nums)):\n            dic_nums[nums[i]] = i # for equal values, save the last index\n        \n        for i in range(end_position):\n            target = 0-nums[i]\n            if i > 0 and nums[i] == nums[i-1]: # this is to avoid repeats \n                continue\n            if target < nums[i]: # if the target is smaller than this, we cannot find it on the right side\n                break\n            for j in range(i+1, len(nums)): # this is to avoid repeats \n                if j > i+1 and nums[j] == nums[j-1]:\n                    continue\n                complement = target - nums[j]\n                if complement < nums[j]: # if the remaining numbers are bigger than the complement, no need to keep searching\n                    break\n                if complement in dic_nums and dic_nums[complement] > j: # make sure the complement is to the right of nums[j]\n                    res.append([nums[i], nums[j], complement])\n        return res\n\end{lstlisting}\nThe following variant takes more time:\n\begin{lstlisting}[language = Python]\nfor i in range(len(nums)-2):\n            if i > 0 and nums[i] == nums[i-1]:\n                continue\n            l, r = i+1, len(nums)-1\n            while l < r:\n                if l-1 >= i+1 and nums[l] == nums[l-1]: # check the front\n                    l += 1\n                    continue\n                if r+1 < len(nums) and nums[r] == nums[r+1]:\n                    r -= 1\n                    continue\n                s = nums[i] + nums[l] + nums[r]\n                if s < 0:\n                    l += 1 \n                elif s > 0:\n                    r -= 1\n                else:\n                    res.append((nums[i], nums[l], nums[r]))\n                    l += 1; r -= 1\n        return res\n\end{lstlisting}\n18. 
4Sum\n\begin{lstlisting}[language = Python]\ndef fourSum(self, nums, target):\n        def findNsum(nums, target, N, result, results):\n            if len(nums) < N or N < 2 or target < nums[0]*N or target > nums[-1]*N:  # early termination\n                return\n            if N == 2: # two pointers solve the sorted 2-sum problem\n                l, r = 0, len(nums)-1\n                while l < r:\n                    s = nums[l] + nums[r]\n                    if s == target:\n                        results.append(result + [nums[l], nums[r]])\n                        l += 1\n                        r -= 1\n                        while l < r and nums[l] == nums[l-1]:\n                            l += 1\n                        while l < r and nums[r] == nums[r+1]:\n                            r -= 1\n                    elif s < target:\n                        l += 1\n                    else:\n                        r -= 1\n            else: # recursively reduce N\n                for i in range(len(nums)-N+1):\n                    if i == 0 or (i > 0 and nums[i-1] != nums[i]):\n                        findNsum(nums[i+1:], target-nums[i], N-1, result+[nums[i]], results) # reduce nums size, reduce target, save result\n\n        results = []\n        findNsum(sorted(nums), target, 4, [], results)\n        return results\n\end{lstlisting}\n\n454. 4Sum II\n\nGiven four lists A, B, C, D of integer values, compute how many tuples (i, j, k, l) there are such that A[i] + B[j] + C[k] + D[l] is zero.\n\nTo make the problem a bit easier, all A, B, C, D have the same length N where $0 \leq N \leq 500$. All integers are in the range of $-2^{28}$ to $2^{28}-1$ and the result is guaranteed to be at most $2^{31}-1$.\n\nExample:\n\begin{lstlisting}\nInput:\nA = [ 1, 2]\nB = [-2,-1]\nC = [-1, 2]\nD = [ 0, 2]\n\nOutput:\n2\n\end{lstlisting}\n\nExplanation:\n\n\begin{lstlisting}\nThe two tuples are:\n1. (0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0\n2. (1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0\n\end{lstlisting}\nSolution: with brute force, using 4 for loops, it is $O(N^4)$. If we use divide and conquer instead, summing over the first two lists and saving the counts in a dictionary (a Counter), the time complexity is $O(N^2)$. In the same way, 6 sum can be reduced to $O(N^3)$ and 8 sum to $O(N^4)$.\n\n\begin{lstlisting}[language = Python]\ndef fourSumCount(self, A, B, C, D):\n    AB = collections.Counter(a+b for a in A for b in B)\n    return sum(AB[-c-d] for c in C for d in D)\n\end{lstlisting}\n\n\n\subsubsection{Summary}\nAs we have seen from the examples in this section, backtracking, shown in Section~\ref{sec_combination}, offers a universal solution to combination problems. There is also an iterative solution which suits the power set purpose, and I would include its code here again:\n\begin{lstlisting}[language = Python]\ndef subsets(self, nums):\n    result = [[]] # two-dimensional; already contains the one element []\n    for num in nums:\n        new_results = []\n        for r in result:\n            new_results.append(r + [num])\n        result += new_results\n        \n    return result\n\end{lstlisting}\nIf we have duplicates, how do we handle them in the backtracking? In the iterative solution, we can replace the array with a dictionary that saves the counts. \n\n\subsection{Permutation}\n46. Permutations\n\begin{lstlisting}\nGiven a collection of distinct numbers, return all possible permutations.\n\nFor example,\n [1,2,3] have the following permutations:\n\n[\n  [1,2,3],\n  [1,3,2],\n  [2,1,3],\n  [2,3,1],\n  [3,1,2],\n  [3,2,1]\n]\n\end{lstlisting}\n\n47. Permutations II\n\nGiven a collection of numbers that might contain duplicates, return all possible unique permutations.\n\nFor example,\n\begin{lstlisting}\n [1,1,2] have the following unique permutations:\n\n[\n  [1,1,2],\n  [1,2,1],\n  [2,1,1]\n]\n\end{lstlisting}\n
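Problems 46 and 47 are stated here without solutions, so below is a minimal backtracking sketch for both (standalone functions; the Counter-based deduplication for 47 is one common choice, not the only one):\n\begin{lstlisting}[language=Python]\nimport collections\n\ndef permute(nums):\n    # 46. all permutations of distinct numbers\n    ans = []\n    def backtrack(curr, remaining):\n        if not remaining:\n            ans.append(curr)\n            return\n        for i in range(len(remaining)):\n            # choose remaining[i] next, recurse on the rest\n            backtrack(curr + [remaining[i]], remaining[:i] + remaining[i+1:])\n    backtrack([], nums)\n    return ans\n\ndef permuteUnique(nums):\n    # 47. unique permutations: branch on distinct values, not positions\n    ans = []\n    counter = collections.Counter(nums)\n    def backtrack(curr):\n        if len(curr) == len(nums):\n            ans.append(curr[:])\n            return\n        for num in counter:\n            if counter[num] > 0:\n                counter[num] -= 1\n                curr.append(num)\n                backtrack(curr)\n                curr.pop()\n                counter[num] += 1\n    backtrack([])\n    return ans\n\nprint(permute([1, 2, 3])) # 6 permutations\nprint(permuteUnique([1, 1, 2])) # [[1, 1, 2], [1, 2, 1], [2, 1, 1]]\n\end{lstlisting}\nBranching on the Counter keys instead of positions is what removes duplicate permutations without any explicit repetition check.\n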
301. Remove Invalid Parentheses\n\nRemove the minimum number of invalid parentheses in order to make the input string valid. Return all possible results.\n\nNote: The input string may contain letters other than the parentheses ( and ).\n\nExamples:\n\begin{lstlisting}\n"()())()" -> ["()()()", "(())()"]\n"(a)())()" -> ["(a)()()", "(a())()"]\n")(" -> [""]\n\end{lstlisting}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Merge List\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\section{Merge and Partition}\n\subsection{Merge Lists}\nWe can use divide and conquer (see merge sort) or a priority queue; a heap-based sketch follows.\n
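As a minimal sketch of the priority queue approach (plain Python lists stand in for linked lists; the function name is ours): keep the current head of every list in a min-heap and repeatedly pop the smallest, giving $O(N\log k)$ time for $N$ total elements across $k$ lists.\n\begin{lstlisting}[language=Python]\nimport heapq\n\ndef merge_k_sorted(lists):\n    # push (value, list index, element index); the index fields\n    # break ties so tuples never compare beyond the integers\n    h = [(lst[0], li, 0) for li, lst in enumerate(lists) if lst]\n    heapq.heapify(h)\n    out = []\n    while h:\n        val, li, ei = heapq.heappop(h)\n        out.append(val)\n        if ei + 1 < len(lists[li]): # refill from the list we popped from\n            heapq.heappush(h, (lists[li][ei + 1], li, ei + 1))\n    return out\n\nprint(merge_k_sorted([[1, 4], [2, 3], [5]])) # [1, 2, 3, 4, 5]\n\end{lstlisting}\n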
\n\\begin{lstlisting}[language = Python]\n# Definition for an interval.\n# class Interval(object):\n#     def __init__(self, s=0, e=0):\n#         self.start = s\n#         self.end = e\n\nfrom collections import defaultdict\nfrom heapq import heappush, heappop\nfrom sys import maxint\nclass Solution(object):\n    def minMeetingRooms(self, intervals):\n        \"\"\"\n        :type intervals: List[Interval]\n        :rtype: int\n        \"\"\"\n        if not intervals:\n            return 0\n        #solution 1, voting, time complexity is O(e1-s1), 71/77 test, TLE\n        votes = defaultdict(int)\n        num_rooms = 0   \n        for interval in intervals:\n            s=interval.start\n            e=interval.end\n            for i in range(s+1,e+1):\n                votes[i]+=1\n                num_rooms = max(num_rooms, votes[i])\n        return num_rooms\n\\end{lstlisting}\n\\subsection{Speedup with Sweep Line}\nNow, let us see how to speed up this process. We can use Sweep Line method. For the sweep line, we have three basic implementations: one-dimensional, min-heap, or map based. \n\\subsubsection{One-dimensional Implementation}\n To get the maximum number of intersection of all the intervals, it is not necessarily to scan all the time slots, how about just scan the key slot: the starts and ends . Thus, what we can do is to open an array and put all the start or end slot into the array, and with $1$ to mark it as start and $0$ to mark it as end. Then we sort this array. Till this point, how to get the maximum intersection? We go through this sorted array, if we get a start our current number of room needed will increase by one, otherwise, if we encounter an end slot, it means one meeting room is freed, thus we decrease the current on-going meeting room by one. We use another global variable to track the maximum number of rooms needed in this whole process. Great, because now our time complexity is decided by the number of slots $2n$, with the sorting algorithm, which makes the whole time complexity $O(nlogn)$ and space complexity $n$. This speeded up algorithm is called Sweep Line algorithm. Before we write our code, we better check the \\textit{special cases}, what if there is one slot that is marked as start in one interval but is the end of another interval. This means we can not increase the counting at first, but we need to decrease, so that the sorting should be based on the first element of the tuple, and followed by the second element of the tuple. For example, the simple case $[[13,15],[1,13]]$, we only need maximum of one meeting room. 
Thus it can be implemented as:\n\begin{figure}[h]\n    \centering\n    \includegraphics[width=0.6\columnwidth]{fig/sweep_line_one_dimension.png}\n    \caption{One-dimensional Sweep Line}\n    \label{fig:one_dim_sl}\n\end{figure}\n\begin{lstlisting}[language=Python]\ndef minMeetingRooms(self, intervals):\n    if not intervals:\n        return 0\n    # solution 2: one-dimensional sweep line\n    slots = []\n    # put the slots onto a one-dimensional axis\n    for i in intervals:\n        slots.append((i.start, 1))\n        slots.append((i.end, 0))\n    # sort the slots on this dimension; ends (0) come before starts (1) at equal times\n    slots.sort()\n    \n    # now execute the counting\n    crt_room, max_room = 0, 0\n    for s in slots:\n        if s[1] == 0: # an interval ends, free a room\n            crt_room -= 1\n        else:\n            crt_room += 1\n        max_room = max(max_room, crt_room)\n    return max_room\n\end{lstlisting}\n\subsubsection{Min-heap Implementation}\n\begin{figure}[h]\n    \centering\n    \includegraphics[width=0.6\columnwidth]{fig/sweep_line_min_heap.png}\n    \caption{Min-heap for Sweep Line}\n    \label{fig:min_heap_sl}\n\end{figure}\nInstead of opening an array to save all the time slots, we can directly sort the intervals in order of start time. As we can see in Fig.~\ref{fig:min_heap_sl}, we go through the intervals and track their end times. The first end time we encounter is $30$; we put it in a min-heap. Then we visit the next interval $[5, 10]$: $5$ is smaller than the previous end time $30$, which means this interval intersects a previous interval, so the number of rooms increases by one; we now need $2$ rooms, and we put $10$ into the min-heap. Next, we visit $[15, 20]$: $15$ is larger than the smallest end time in the min-heap, $10$, which means these two intervals can share one room, as if merged into $[5, 20]$, so we update the end time $10$ to $20$. \n\nThis way, the time complexity is still the same, decided by the sorting algorithm. The space complexity depends on the actual input; it varies from $O(1)$ (no intersections) to $O(n)$ (all the meetings intersect at at least one time slot).  \n\begin{lstlisting}[language=Python]\ndef minMeetingRooms(self, intervals):\n    if not intervals:\n        return 0\n    # solution 3: min-heap of end times\n    intervals.sort(key=lambda x: x.start)\n    h = [intervals[0].end]\n    rooms = 1\n    for i in intervals[1:]:\n        s, e = i.start, i.end\n        e_before = h[0]\n        if s < e_before: # overlap, need one more room\n            heappush(h, i.end)\n            rooms += 1\n        else: # no overlap, reuse a room\n            heappop(h) # kick out 10 in our example\n            heappush(h, e) # replace 10 with 20\n    return rooms\n\end{lstlisting}\n% 2. multiset: sort the intervals by start time, then put each interval's end point into a multiset. If the next interval's start is greater than or equal to some previous end point, delete that end point from the set. At each step, the size of the set is the number of rooms currently needed; we track its maximum. The time complexity is still O(nlogn), but the space complexity becomes output-dependent: at most O(n), at least O(1).\n\subsubsection{Map-based Implementation}\nWe use a map (an ordered dictionary of boundary points) to store the overlapping regions: each interval's start marks the beginning of an overlap and adds $+1$, and each interval's end adds $-1$. Scanning the boundaries in sorted order while accumulating these deltas gives the number of rooms in use at each point, and the running maximum is the answer. Essentially this algorithm is the same as the one-dimensional version, only with a different data structure.\n% Reference (translated from Chinese): https://blog.csdn.net/magicbean2/article/details/74199529\n\begin{lstlisting}[language=C++]\nclass Solution {\npublic:\n    int minMeetingRooms(vector<Interval>& intervals) {\n        map<int, int> mp;\n        for (auto val : intervals) {\n            ++mp[val.start];\n            --mp[val.end];\n        }\n        int max_room = 0, crt_room = 0;\n        for (auto val : mp) {\n            crt_room += val.second;\n            max_room = max(max_room, crt_room);\n        }\n        return max_room;\n    }\n};\n\end{lstlisting}\n
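Since the rest of these notes use Python, here is a sketch of the same map-based idea in Python; the standard library has no sorted map, so we sort the boundary dictionary's keys explicitly, and plain [start, end] pairs stand in for the Interval class.\n\begin{lstlisting}[language=Python]\nfrom collections import defaultdict\n\ndef min_meeting_rooms(intervals):\n    # boundary map: +1 room at each start, -1 at each end\n    mp = defaultdict(int)\n    for start, end in intervals:\n        mp[start] += 1\n        mp[end] -= 1\n    crt_room = max_room = 0\n    for point in sorted(mp): # scan the boundaries in sorted order\n        crt_room += mp[point]\n        max_room = max(max_room, crt_room)\n    return max_room\n\nprint(min_meeting_rooms([[0, 30], [5, 10], [15, 20]])) # 2\n\end{lstlisting}\n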
\n\\begin{lstlisting}[language=Python]\ndef minMeetingRooms(self, intervals):\n        if not intervals:\n            return 0\n        # solution 3: min-heap of end times\n        intervals.sort(key=lambda x: x.start)\n        h = [intervals[0].end]\n        rooms = 1\n        for i in intervals[1:]:\n            s, e = i.start, i.end\n            e_before = h[0]  # earliest end time seen so far\n            if s < e_before:  # overlap: a new room is needed\n                heappush(h, e)\n                rooms += 1\n            else:  # no overlap: reuse the freed room\n                heappop(h)  # kick out 10 in our example\n                heappush(h, e)  # replace 10 with 20\n        return rooms\n\\end{lstlisting}\n% A multiset variant: first sort the intervals by start, then insert each interval's end time into a multiset in turn. If the next interval's start is greater than or equal to some previous end in the set, delete that end from the set; at each step count the current number of rooms needed. The time complexity is still O(nlogn), but the space complexity becomes input-dependent: at most O(n), at least O(1).\n\\subsubsection{Map-based Implementation}\n\nWe can use an ordered map to store the overlap counts: each interval's start marks the beginning of an interval and adds $1$ to the overlap region, and each interval's end marks the end of an interval and subtracts $1$. Scanning the map in key order while accumulating these counts gives the maximum overlap. Essentially this algorithm is very similar to the one-dimensional version; only the data structure differs.\n% Source: Magicbean, CSDN, https://blog.csdn.net/magicbean2/article/details/74199529\n\n\\begin{lstlisting}[language=C++]\nclass Solution {\npublic:\n    int minMeetingRooms(vector<Interval>& intervals) {\n        map<int, int> mp; // time -> +1 for a start, -1 for an end\n        for (auto val : intervals) {\n            ++mp[val.start];\n            --mp[val.end];\n        }\n        int max_room = 0, crt_room = 0;\n        for (auto val : mp) {\n            crt_room += val.second;\n            max_room = max(max_room, crt_room);\n        }\n        return max_room;\n    }\n};\n\\end{lstlisting}\n\n\\subsection{LeetCode Problems}\n\\begin{enumerate}\n \\item \\textbf{986. Interval List Intersections} Given two lists of closed intervals, each list of intervals is pairwise disjoint and in sorted order. 
Return the intersection of these two interval lists.\n\\begin{lstlisting}[numbers=none]\nInput: A = [[0,2],[5,10],[13,23],[24,25]], B = [[1,5],[8,12],[15,24],[25,26]]\nOutput: [[1,2],[5,5],[8,10],[15,23],[24,24],[25,25]]\nReminder: The inputs and the desired output are lists of Interval objects, and not arrays or lists.\n\\end{lstlisting}\n\\end{enumerate}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Intersection\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Intersection}\nFor problems that ask for the intersection of lists, we can use a hashmap, which takes $O(m+n)$ time. Alternatively, we can sort both arrays first and then use two pointers, one starting from the beginning of each array. Examples are shown below:\n\\begin{enumerate}\n    \\item  349. Intersection of Two Arrays (Easy)\n    \n     Given two arrays, write a function to compute their intersection.\n\nExample:\n\\begin{lstlisting}\nGiven nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2].\n\\end{lstlisting}\n\nNote:\n\\begin{itemize}\n    \\item Each element in the result must be unique.\n    \\item The result can be in any order.\n\\end{itemize}\nSolution 1: using a hashmap; here we convert with set, which takes 43 ms. \n\\begin{lstlisting}[language = Python]\ndef intersection(self, nums1, nums2):\n    \"\"\"\n    :type nums1: List[int]\n    :type nums2: List[int]\n    :rtype: List[int]\n    \"\"\"\n    if not nums1 or not nums2:\n        return []\n    if len(nums1) > len(nums2):\n        nums1, nums2 = nums2, nums1\n    ans = set()\n    nums1 = set(nums1)\n    for e in nums2:\n        if e in nums1:\n            ans.add(e)\n    return list(ans)\n\\end{lstlisting}\nSolution 2: sort first, then use two pointers. Takes 46 ms. \n\\begin{lstlisting}[language = Python]\ndef intersection(self, nums1, nums2):\n    \"\"\"\n    :type nums1: List[int]\n    :type nums2: List[int]\n    :rtype: List[int]\n    \"\"\"\n    nums1.sort()\n    nums2.sort()\n    r = set()\n    i, j = 0, 0\n    while i < len(nums1) and j < len(nums2):\n        if nums1[i] < nums2[j]:\n            i += 1\n        elif nums1[i] > nums2[j]:\n            j += 1\n        else:\n            r.add(nums1[i])\n            i += 1\n            j += 1\n    return list(r)\n\\end{lstlisting}\n\\item 350. Intersection of Two Arrays II (Easy)\n\n Given two arrays, write a function to compute their intersection.\n\nExample:\n\\begin{lstlisting}\nGiven nums1 = [1, 2, 2, 1], nums2 = [2, 2], return [2, 2].\n\\end{lstlisting}\n\nNote:\n\\begin{itemize}\n    \\item Each element in the result should appear as many times as it shows in both arrays.\n    \\item The result can be in any order.\n\\end{itemize}\n\nFollow up:\n\\begin{enumerate}\n    \\item  What if the given array is already sorted? How would you optimize your algorithm?\n    \\item What if nums1's size is small compared to nums2's size? Which algorithm is better?\n    \\item What if elements of nums2 are stored on disk, and the memory is limited such that you cannot load all elements into the memory at once?\n\\end{enumerate}\n
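\nA possible solution (our sketch, not from the original notes): because duplicates must be kept, a multiset-style counter replaces the set used in 349. We count the elements of nums1 with collections.Counter and consume one count per match while scanning nums2. For the first follow-up, if both arrays were already sorted, the two-pointer method from 349 would apply directly, appending every match instead of adding it to a set.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter\ndef intersect(self, nums1, nums2):\n    \"\"\"\n    :type nums1: List[int]\n    :type nums2: List[int]\n    :rtype: List[int]\n    \"\"\"\n    counts = Counter(nums1)  # multiset of nums1\n    ans = []\n    for e in nums2:\n        if counts[e] > 0:    # e still available in nums1\n            ans.append(e)\n            counts[e] -= 1   # consume one occurrence\n    return ans\n\\end{lstlisting}\n\n\\end{enumerate}\n\n\\section{Miscellaneous Questions}\n\\begin{examples}[resume]\n\\item \\textbf{283. Move Zeroes. 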
(Easy)} \nGiven an array nums, write a function to move all 0's to the end of it while maintaining the relative order of the non-zero elements.\n\nNote:\n\\begin{enumerate}\n    \\item You must do this in-place without making a copy of the array.\n    \\item Minimize the total number of operations.\n\\end{enumerate}\n\\begin{lstlisting}[language=Python]\nExample:\n\nInput: [0,1,0,3,12]\nOutput: [1,3,12,0,0]\n\\end{lstlisting}\n\\textbf{Solution 1: Find All Zeros Subarray.} If we find the first all-zeros subarray [0, ..., 0] followed by a non-zero element [x], we can swap x forward through that subarray: swap the last 0 with x, then the second-to-last 0 with x, and so on. The cost depends on where the zeros sit: a single 0 at the first index already costs O(n) swaps, and every additional zero adds another pass, so the complexity analysis is a bit tricky; the upper bound is $O(n^2)$. \n\\end{examples}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Exercises\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Exercises}\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% %%%%% Subsequence\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subsequence with (DP)}\n\n\\begin{enumerate}\n    \\item 594. Longest Harmonious Subsequence\n\nWe define a harmonious array as an array where the difference between its maximum value and its minimum value is exactly 1.\n\nNow, given an integer array, you need to find the length of its longest harmonious subsequence among all its possible subsequences.\n\nExample 1:\n\\begin{lstlisting}\nInput: [1,3,2,2,5,2,3,7]\nOutput: 5\nExplanation: The longest harmonious subsequence is [3,2,2,2,3].\n\\end{lstlisting}\n\n\\textit{Note: The length of the input array will not exceed 20,000.}\n\nSolution: first, use a Counter to count the occurrences of every value. Then visit the counter dictionary and, for each key, check key+1; the pair is valid only when count[key+1] is non-zero. Checking key+1 for every key already covers the key-1 pairs, so the commented-out branch is redundant.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter\nclass Solution:\n    def findLHS(self, nums):\n        \"\"\"\n        :type nums: List[int]\n        :rtype: int\n        \"\"\"\n        if not nums or len(nums)<2:\n            return 0\n        count=Counter(nums) # value -> number of occurrences\n        maxLen = 0\n        for key,item in count.items(): # visit each key: count pair\n            if count[key+1]: # key+1 over all keys also covers the key-1 pairs\n                maxLen = max(maxLen,item+count[key+1])\n            \n            # if count[key-1]:\n            #     maxLen=max(maxLen, item+count[key-1])\n        return maxLen\n\\end{lstlisting}\n\n\\item 521. Longest Uncommon Subsequence I\n\nGiven a group of two strings, you need to find the longest uncommon subsequence of this group of two strings. The longest uncommon subsequence is defined as the longest subsequence of one of these strings such that this subsequence is not a subsequence of the other strings.\n\nA subsequence is a sequence that can be derived from one sequence by deleting some characters without changing the order of the remaining elements. Trivially, any string is a subsequence of itself and an empty string is a subsequence of any string.\n\nThe input will be two strings, and the output needs to be the length of the longest uncommon subsequence. 
If the longest uncommon subsequence doesn't exist, return -1.\n\nExample 1:\n\\begin{lstlisting}\nInput: \"aba\", \"cdc\"\nOutput: 3\nExplanation: The longest uncommon subsequence is \"aba\" (or \"cdc\"), \nbecause \"aba\" is a subsequence of \"aba\", \nbut not a subsequence of any other strings in the group of two strings.\n\\end{lstlisting}\n\n\\textit{Note:}\n\n    \\textit{Both strings' lengths will not exceed 100.}\n    \n    \\textit{Only letters from a ~ z will appear in input strings.}\n\nSolution: if we work through more examples, we can find the following rule: two equal strings such as \"aba\" and \"aba\" return -1; otherwise the answer is simply the length of the longer (or, for equal lengths, either) string.\n\\begin{lstlisting}[language = Python]\ndef findLUSlength(self, a, b):\n        \"\"\"\n        :type a: str\n        :type b: str\n        :rtype: int\n        \"\"\"\n        if len(b)!=len(a):\n            return max(len(a),len(b))\n        #length is the same\n        return len(a) if a!=b else -1\n\\end{lstlisting}\n\\item 424. Longest Repeating Character Replacement\n\nGiven a string that consists of only uppercase English letters, you can replace any letter in the string with another letter at most k times. Find the length of the longest substring containing all repeating letters you can get after performing the above operations.\n\n\\textit{Note:}\n\n \\textit{Both the string's length and k will not exceed $10^4$.}\n\nExample 1:\n\\begin{lstlisting}\nInput:\ns = \"ABAB\", k = 2\n\nOutput:\n4\n\\end{lstlisting}\n\nExplanation:\nReplace the two 'A's with two 'B's or vice versa.\n\nExample 2:\n\\begin{lstlisting}\nInput:\ns = \"AABABBA\", k = 1\n\nOutput:\n4\n\\end{lstlisting}\n\nExplanation:\nReplace the one 'A' in the middle with 'B' and form \"AABBBBA\".\nThe substring \"BBBB\" has the longest repeating letters, which is 4.\n\nSolution: the brute-force recursive solution is, for every position whose character differs from a chosen target character, either replace it or choose not to; this gets TLE.\n\\begin{lstlisting}[language = Python]\n# brute force: recursive replacement; maxLen, getLen (length of the\n# longest run of re_char) and chars (set of letters in s) are assumed\n# to be defined in the enclosing scope\n        def replace(news, idx, re_char, k):\n            nonlocal maxLen\n            if k==0 or idx==len(s):\n                maxLen = max(maxLen, getLen(news))\n                return\n\n            if s[idx]!=re_char: #replace\n                news_copy=news[:idx]+re_char+news[idx+1:]\n                replace(news_copy, idx+1, re_char, k-1)\n            replace(news[:], idx+1, re_char,k)\n        \n        # try each char that appears in s as the target\n        # for char1 in chars.keys():\n        #     replace(s[:],0,char1, k)\n\\end{lstlisting}\nTo get the BCR, think about a sliding window. For a window $s[i..j]$, the number of replacements needed to make all its letters equal is $j-i+1-$\\texttt{maxCharCount}, where \\texttt{maxCharCount} is the occurrence count of the most frequent letter ('A' to 'Z') inside the window; the window is valid as long as this quantity is $\\leq k$. So we can use a sliding window to record the max occurrence, and when the constraint is violated, we shrink the window. 
For example, take s = \"BBCABBBAB\" and k=2. When i=0 and j=7, the window length is 8 and maxCharCount is 5 (for 'B'), so 8-5=3>2 and we must shrink. Removing s[0] updates maxCharCount to 4 and i=1, but 7-4=3>2; removing s[1] gives maxCharCount 3 and i=2, but 6-3=3>2; removing s[2] (the 'C') keeps maxCharCount 3 and i=3, and now 5-3=2 satisfies the constraint, so the current window length is 5.\n\\begin{lstlisting}[language = Python]\ndef characterReplacement(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        i,j = 0,0 #sliding window\n        counter=[0]*26\n        ans = 0\n        maxCharCount = 0\n        while j<len(s):\n            counter[ord(s[j])-ord('A')]+=1\n            maxCharCount = max(maxCharCount, counter[ord(s[j])-ord('A')])\n            while j-i+1-maxCharCount>k: #now shrink the window\n                counter[ord(s[i])-ord('A')]-=1\n                i+=1\n                #update max\n                maxCharCount=max(counter)\n            ans=max(ans, j-i+1)\n            j+=1\n                \n        return ans\n\\end{lstlisting}\n\n\\item 395. Longest Substring with At Least K Repeating Characters\n\nFind the length of the longest substring T of a given string (consists of lowercase letters only) such that every character in T appears no less than k times.\n\nExample 1:\n\\begin{lstlisting}\nInput:\ns = \"aaabb\", k = 3\n\nOutput:\n3\n\\end{lstlisting}\n\nThe longest substring is \"aaa\", as 'a' is repeated 3 times.\n\nExample 2:\n\\begin{lstlisting}\nInput:\ns = \"ababbc\", k = 2\n\nOutput:\n5\n\\end{lstlisting}\n\nThe longest substring is \"ababb\", as 'a' is repeated 2 times and 'b' is repeated 3 times.\n\nSolution: use dynamic programming with a memo. The downside: it takes too much space and still gets TLE.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter, defaultdict\nclass Solution:\n    def longestSubstring(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        if not s:\n            return 0\n        if len(s)<k:\n            return 0\n        count = Counter(char for char in s)\n        memo=[[None for col in range(len(s))] for row in range(len(s))]\n\n        def cut(start,end,count):\n            if start>end:\n                return 0\n            if memo[start][end] is None:\n                if any(0<item<k for key,item in count.items()):\n                    # drop one char from the front or from the back\n                    newCounterF=count.copy()\n                    newCounterF[s[start]]-=1\n                    newCounterB=count.copy()\n                    newCounterB[s[end]]-=1\n                    memo[start][end]= max(cut(start+1, end, newCounterF), cut(start, end-1, newCounterB))\n                else:\n                    memo[start][end] = end-start+1\n            return memo[start][end]\n        return cut(0,len(s)-1,count)\n\\end{lstlisting}\n\nNow, use a sliding window: we use a pointer mid that starts from 0. If the whole string satisfies the condition, return len(s). Otherwise, use two while loops to separate the string into three substrings: left, mid, right, where 
the left part satisfies the condition, the mid part does not, and the right part is still unknown.\n\\begin{lstlisting}[language = Python]\nfrom collections import Counter, defaultdict\nclass Solution:\n    def longestSubstring(self, s, k):\n        \"\"\"\n        :type s: str\n        :type k: int\n        :rtype: int\n        \"\"\"\n        if not s:\n            return 0\n        if len(s)<k:\n            return 0\n        count = Counter(char for char in s)\n        mid=0 # on the left side, from 0 to mid, all elements satisfy the condition\n        while mid<len(s) and count[s[mid]]>=k:\n            mid+=1\n        if mid==len(s): return len(s) \n        left = self.longestSubstring(s[:mid],k) # e.g. \"ababb\"\n        # from pre_mid to cur_mid, skip the chars that can't satisfy the condition\n        while mid<len(s) and count[s[mid]]<k:\n            mid+=1\n        # now keep doing the same on the right side\n        right = self.longestSubstring(s[mid:],k)\n        return max(left,right)\n\\end{lstlisting}\n\\end{enumerate}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Subset\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Subset}\n\\label{sec_array_subset}\n\n\n216. Combination Sum III\n\n\nFind all possible combinations of \\textbf{k numbers} that add up to a number n, given that only numbers from 1 to 9 can be used and each combination should be a unique set of numbers.\n\\begin{lstlisting}[numbers=none]\nNote:\n\n    All numbers will be positive integers.\n    The solution set must not contain duplicate combinations.\n\nExample 1:\n\nInput: k = 3, n = 7\nOutput: [[1,2,4]]\n\nExample 2:\n\nInput: k = 3, n = 9\nOutput: [[1,2,6], [1,3,5], [2,3,4]]\n\\end{lstlisting}\n\\begin{lstlisting}[language=Python]\ndef combinationSum3(self, k, n):\n    \"\"\"\n    :type k: int\n    :type n: int\n    :rtype: List[List[int]]\n    \"\"\"\n    # each number is used at most one time; in combine, the last\n    # argument n is the upper bound 9 and t is the remaining target\n    def combine(s, curr, ans, t, d, k, n):\n        if t < 0:\n            return\n        if d == k:\n            if t == 0:\n                ans.append(curr)\n            return\n        for i in range(s, n):\n            num = i+1\n            combine(i+1, curr+[num], ans, t-num, d+1, k, n)\n    ans = []\n    combine(0, [], ans, n, 0, k, 9)\n    return ans\n\\end{lstlisting}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%% Intersection\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Intersection}\n\n160. 
Intersection of Two Linked Lists (Easy)\n\nWrite a program to find the node at which the intersection of two singly linked lists begins.\n\nFor example, the following two linked lists:\n\\begin{lstlisting}[numbers=none]\n\n        \nA:          a1 -> a2\n                  \\\n                     c1 -> c2 -> c3\n                  /            \nB:     b1 -> b2 -> b3\n\\end{lstlisting}\n\nbegin to intersect at node c1.\n\nNotes:\n\\begin{itemize}\n    \\item If the two linked lists have no intersection at all, return null.\n    \\item The linked lists must retain their original structure after the function returns.\n    \\item You may assume there are no cycles anywhere in the entire linked structure.\n    \\item Your code should preferably run in O(n) time and use only O(1) memory.\n\\end{itemize}\n
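\nA sketch of the classic two-pointer solution (our addition, not part of the original notes): walk one pointer through list A and then through list B, and the other through B and then A. Both pointers travel at most $m+n$ steps, after which they either meet at the intersection node or both reach None, which satisfies the $O(n)$ time and $O(1)$ memory requirement.\n\\begin{lstlisting}[language=Python]\n# Definition for singly-linked list.\n# class ListNode(object):\n#     def __init__(self, x):\n#         self.val = x\n#         self.next = None\n\ndef getIntersectionNode(self, headA, headB):\n    \"\"\"\n    :type headA: ListNode\n    :type headB: ListNode\n    :rtype: ListNode\n    \"\"\"\n    pa, pb = headA, headB\n    while pa is not pb:\n        # switching lists equalizes the traveled distances:\n        # each pointer walks at most m + n steps\n        pa = pa.next if pa else headB\n        pb = pb.next if pb else headA\n    return pa  # the intersection node, or None\n\\end{lstlisting}\n\n\n\n\n\\end{document}", "meta": {"hexsha": "81bc86aae3333c160ede1a75bc1056e13aac8cbc", "size": 95054, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_stars_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_stars_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_issues_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_issues_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Easy-Book/chapters/question_3_array_question.tex", "max_forks_repo_name": "stungkit/Algorithms-and-Coding-Interviews", "max_forks_repo_head_hexsha": "131199fea0b082d92c0f272a495c7a56a3242b71", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.8307560137, "max_line_length": 1473, "alphanum_fraction": 0.6252445978, "num_tokens": 25775, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850933, "lm_q2_score": 0.7853085859124002, "lm_q1q2_score": 0.5567988133376065}}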
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Miscellaneous}\n\\label{chap:misc}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Collaborative Filtering}\n\\label{misc:CF}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Information Theory \\& Entropy}\n\\label{misc:info_theory}\n% TODO\n% TODO Entropy, significance of bits\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Kernel Density Estimation (KDE)}\n\\label{misc:KDE}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Trick Questions}\n\\label{misc:trick}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Modulo Operations}\n\\label{misc:modulo}\n\n\\begin{subequations}\\label{eq:misc:modulo}\n\\begin{align}\n\\left(A~\\text{mod}~C\\right)~\\text{mod}~C &= A~\\text{mod}~C, \\label{eq:misc:modulo:basic} \\\\\n\\left(A \\pm B\\right)~\\text{mod}~C &= \\left(A~\\text{mod}~C \\pm B~\\text{mod}~C\\right)~\\text{mod}~C, \\label{eq:misc:modulo:pm} \\\\\n\\left(A \\times B\\right)~\\text{mod}~C &= \\left(A~\\text{mod}~C \\times B~\\text{mod}~C\\right)~\\text{mod}~C, \\label{eq:misc:modulo:multiplication} \\\\\nA^{B}~\\text{mod}~C &= \\left(\\left(A~\\text{mod}~C\\right)^{B}\\right)~\\text{mod}~C. \\label{eq:misc:modulo:exp}\n\\end{align}\n\\end{subequations}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Mathematical Relations}\n\\label{misc:math}\n\n\\begin{subequations}\\label{eq:misc:math}\n\\begin{align}\ne^{x} &= \\sum_{k=0}^{\\infty} \\frac{x^{k}}{k!} \\label{eq:misc:math:ex} \\\\\ne^{x} &= \\lim_{n \\to \\infty} \\left(1 + \\frac{x}{n}\\right)^{n} \\label{eq:misc:math:ex_limit} \\\\\n\\frac{1}{1-r} &= \\sum_{k=0}^{\\infty} r^{k}, \\quad \\abs{r} < 1 \\label{eq:misc:math:geometric} \\\\\ne^{-i x} &= \\cos\\left(x\\right) + i \\sin\\left(x\\right) \\label{eq:misc:math:eulers_formula} \\\\\n2 \\cos\\left(x\\right) &= e^{i x} + e^{-i x} \\label{eq:misc:math:cos_exp} \\\\\n2i \\sin\\left(x\\right) &= e^{i x} - e^{-i x} \\label{eq:misc:math:sin_exp} \\\\\n\\abs{\\vb{u} + \\vb{v}} &\\leq \\abs{\\vb{u}} + \\abs{\\vb{v}} \\label{eq:misc:math:triangle_inequality} \\\\\n\\abs{\\braket{u}{v}}^{2} &\\leq \\braket{u} \\braket{v} \\label{eq:misc:math:cauchy_schwarz_inequality} \\\\\nn^{k} &= N\\left(\\text{Permutations Length $k$, with Repetition}\\right) \\label{eq:misc:math:permutations_with_repetition} \\\\\n\\frac{n!}{\\left(n-k\\right)!} &= P_{k}^{n} = N\\left(\\text{Permutations Length $k$, without Repetition}\\right) \\label{eq:misc:math:permutations_without_repetition} \\\\\n\\frac{n!}{k!\\left(n-k\\right)!} &= \\binom{n}{k} = C_{k}^{n} = N\\left(\\text{Combinations of Length $k$ from $n$}\\right) \\label{eq:misc:math:binomial_coefficient} \\\\\n\\overline{A \\cap B} &= \\overline{A} \\cup \\overline{B},\\quad \\overline{A \\cup B} = \\overline{A} \\cap \\overline{B} \\label{eq:misc:math:de_morgan} \\\\\n\\dv{}{x}\\, a^{x} &= a^{x} \\ln\\left(a\\right), \\quad 0 < a \\label{eq:misc:math:diff_exp_gen} 
\\\\\n\\dv{}{x}\\, \\log_{a}\\left(x\\right) &= \\frac{1}{x \\ln a} \\label{eq:misc:math:diff_log_gen} \\\\\n\\int u \\dd{v} &= uv - \\int v \\dd{u} \\label{eq:misc:math:integration_by_parts} \\\\\n\\sqrt{2 \\pi c^{2}} &= \\int_{-\\infty}^{\\infty} e^{-\\left(x-b\\right)^{2} / 2 c^{2}} \\dd{x} \\label{eq:misc:math:gaussian_integral}\n\\end{align}\n\\end{subequations}\n", "meta": {"hexsha": "774169469787c86161aaf31286b43a8810035256", "size": 3638, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/misc.tex", "max_stars_repo_name": "mepland/data_science_notes", "max_stars_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-05-30T15:15:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-23T01:01:08.000Z", "max_issues_repo_path": "sections/misc.tex", "max_issues_repo_name": "mepland/data_science_notes", "max_issues_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/misc.tex", "max_forks_repo_name": "mepland/data_science_notes", "max_forks_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.2394366197, "max_line_length": 164, "alphanum_fraction": 0.4969763606, "num_tokens": 1221, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7853085758631158, "lm_q1q2_score": 0.5567988062124715}}
{"text": "\\section{Imputation methods and Algorithms} \\label{secMethods}\n\nConsider a dataset $\\bm{Z}$ of dimensionality $n \\times p$, with $n$ observations (rows) and \n$p$ variables (columns).\nAssume that the first $t$ $(t \\leq p)$ variables of $\\bm{Z}$ have missing values.\nThese $t$ variables are part of some substantive model of scientific interest (e.g. a linear \nregression model), and are target of imputation.\nThe subset of $\\bm{Z}$ containing variables $z_1$ to $z_t$ is referred to as the $n \\times t$ matrix $\\bm{T}$.\nThe remaining $n \\times (p-t)$ subset of $\\bm{Z}$ contains variables that are not target of imputation.\nThese variables constitute a pool of possible \\emph{auxiliary} variables that could be used to improve \nthe imputation procedure.\nLet $\\bm{A}$ denote this set of auxiliary variables so that $\\bm{Z} = (\\bm{T}, \\bm{A})$.\nFor a given $z_j$ variable, with $j = (1, ..., p)$, denote its observed and missing components \nby $z_{j, obs}$ and $z_{j, mis}$, respectively.\nLet $\\bm{Z}_{-j} = (z_1, ..., z_{j-1}, z_{j+1}, ..., z_{p})$ be the collection of $p-1$ variables in \n$\\bm{Z}$ excluding $z_j$.\nDenote $\\bm{Z}_{-j, obs}$ and $\\bm{Z}_{-j, mis}$ the components of $\\bm{Z}_{-j}$ corresponding to the\ndata units in $z_{j, obs}$ and $z_{j, mis}$, respectively.\n\n\\subsection{Multiple Imputation by Chained Equations}\n\nAssume that $\\bm{Z}$ is the result of $n$ random samples from a multivariate distribution defined by \nan unknown set of parameters $\\bm{\\theta}$.\nThe chained equations approach obtains the posterior distribution of $\\bm{\\theta}$ by sampling iteratively \nfrom conditional distributions of the form $P(z_{1}|\\bm{Z}_{-1}, \\bm{\\theta}_{1})$ $...$ \n$P(z_{t}|\\bm{Z}_{-t}, \\bm{\\theta}_{t})$, where $\\bm{\\theta}_{1}$ $...$ $\\bm{\\theta}_{t}$ are imputation model \nparameters specific to the conditional densities of each variable with missing values.\n\nMore precisely, the MICE algorithm takes the form of a Gibbs sampler where the $m$th iteration $(m = 1, ..., M)$\nsuccessively draws, for each $j$th target variable ($j = 1, ..., t$), from the following distributions:\n%\n\t\\begin{align}\n\t\\hat{\\bm{\\theta}}_{j}^{(m)} &\\sim \n\t\tp(\\bm{\\theta}_j | z_{j, obs}, \\bm{Z}_{-j, obs}^{(m)}) \n\t\t\\label{eq_pd}\\\\\n\tz_{j, mis}^{(m)} &\\sim \n\t\tp(z_{j, mis} | \\bm{Z}_{j, mis}^{(m)}, \\hat{\\bm{\\theta}}_{j}^{(m)}) \n\t\t\\label{eq_ppd},\n\t\\end{align}\n%\nwhere $\\hat{\\bm{\\theta}}_{j}^{(m)}$ and $z_{j, mis}^{(m)}$ are draws from the parameters full conditional posterior \ndistribution \\eqref{eq_pd} and the missing data posterior predictive distribution \\eqref{eq_ppd}, respectively.\nAfter convergence, $D$ different sets of values sampled from the predictive distribution are kept as imputations \nand $D$ differently imputed data sets are obtained. 
\nAny substantive model can then be fit to each dataset, and estimates can be pooled appropriately using Rubin's rules\n\\citep{rubin:1987}.\n\nGenerally speaking, for each variable $z_j$ target of imputation, a researcher needs to define a set of \nobserved variables that will be included in $\\bm{Z}_{-j}^{(m)}$.\nThe high-dimensional imputation methods compared in this paper and described below follow the general MICE framework,\nbut they differ in the elementary imputation methods they use to define equations \\eqref{eq_pd} and \\eqref{eq_ppd}.\nEach of them has a different way of processing the large number of auxiliary variables provided to the \nimputation algorithm to allow a maximally inclusive strategy while avoiding its usual obstacles.\n\n\\subsubsection{MICE with fixed ridge penalty (bridge)}\n\tThis approach uses as elementary imputation method the Bayesian imputation under the normal linear model \n\tprocedure as presented by \\citeauthor{vanBuuren:2018} (\\citeyear[p. 68, algorithm 3.1]{vanBuuren:2018}).\n\tIn this approach, the sampling of each $\\hat{\\bm{\\theta}}_{j}^{(m)}$ in equation \\eqref{eq_pd} relies on the \n\tinversion of the cross-product of the observed data matrix $\\bm{Z}_{j, obs}^{(m)}$.\n\tBy adding a biasing ridge penalty $\\kappa$, singularity of the cross-product matrix can be circumvented and the \n\tsampling scheme is possible even if $\\bm{Z}_{j, obs}^{(m)}$ is afflicted by high collinearity and $n$ is not \n\tsubstantially larger than $p$.\n\n\tThe value of $\\kappa$ is usually chosen close to zero (e.g. $\\kappa = 0.0001$), as values larger than $0.1$ \n\tmay introduce systematic bias \\citep[p. 68]{vanBuuren:2018}.\n\tHowever, larger values may be necessary to invert the observed data matrix cross-product in certain scenarios.\n\tIn the present work, the value of $\\kappa$ was decided by means of a cross-validation procedure.\n\n\\subsubsection{MICE with Bayesian lasso (blasso)}\n\tA high-dimensional Bayesian lasso imputation algorithm was proposed by \\cite{zhaoLong:2016}, but it was tested only\n\tin a univariate missing data context.\n\tThe method relies on the Bayesian lasso model, a regular Bayesian multiple regression with prior specifications \n\tthat allow one to interpret the mode of the posterior distribution of the regression coefficients as lasso estimates \n\t\\citep{parkCasella:2008, hans:2009}.\n\tGiven data with sample size $n$, consider the dependent variable $y$ and a set of predictors $X$.\n\tThe Bayesian Lasso linear regression specification we used within the blasso imputation algorithm is that specified \n\tby \\cite{hans:2010}:\n%\n\t\\begin{align}\n\t% Likelihood equation\n\t\tp(y|\\beta, \\sigma^2, \\tau) &= \\textrm{N}(y|X\\beta, \\sigma^2I_n) \\label{eqn:dens} \\\\\n\t% Prior for regression coefficients\n\t\tp(\\beta_j|\\tau, \\sigma^2, \\rho) &= \n\t\t\t(1 - \\rho) \\delta_0 \\beta_j +\n\t\t\t\\rho \\left( \\frac{\\tau}{2\\sigma} \\right) \\times \n\t\t\t\\textrm{exp} \\left( \\frac{-\\tau \\norm{\\beta}_1}{\\sigma} \\right) \\label{eqn:bprior} \\\\\n\t% Hyperprior for error variance\n\t\t\\sigma^2 &\\sim \\textrm{Inverse-Gamma}(a, b) \\label{eqn:sigprior} \\\\\n\t% Hyperprior for penalty parameter\n\t\t\\tau &\\sim \\textrm{Gamma}(r, s) \\label{eqn:tauprior} \\\\\n\t% Hyperprior for sparsity parameter\n\t\t\\rho &\\sim \\textrm{Beta}(g, h) \\label{eqn:rhoprior}\n\t\\end{align}\n%\t\n\tThe expression in equation \\eqref{eqn:dens} represents the density function of a multivariate normal random variable \n\twith mean $X\\beta$ 
and covariance matrix $\\sigma^2I_n$, evaluated at $y$.\n\tThe prior expressed in equation \\eqref{eqn:bprior} is an expansion of the \\cite{parkCasella:2008} double exponential \n\tprior developed by \\cite{hans:2010} to accommodate uncertainty regarding the value of the regression coefficients \n\tand the model sparsity.\n\tFinally, equations \\eqref{eqn:sigprior} to \\eqref{eqn:rhoprior} represent hyper priors for the residual variance \n\t$\\sigma^2$, the penalty parameter $\\tau$, and the sparsity parameter $\\rho$.\n\tThe blasso imputation algorithm used here is a standard MI MCMC sampler that replaces equation \\eqref{eq_pd} with the \n\tfull conditional posterior distributions derived by \\citet{hans:2010}, based on the prior specifications in equations\n\t\\eqref{eqn:sigprior} to \\eqref{eqn:rhoprior}, and uses posterior parameter draws to sample plausible values from the \n\tpredictive distributions of the missing data for equation \\eqref{eq_ppd}.\n\n\tThe R code to perform blasso imputation is based on the Bayesian Lasso R Package \\emph{blasso} \\citep{blasso} and can \n\tbe found on the main author's GitHub page.\n\tFor a detailed description of the algorithm for Bayesian Lasso Multiple Imputation in a univariate\n\tmissing data context we recommend reading \\cite{zhaoLong:2016}.\n\n\\subsubsection{Direct Use of Regularized Regression (DURR)}\n\tAs proposed by \\cite{zhaoLong:2016} and \\cite{dengEtAl:2016}, Frequentist Regularized Regression can be \n\tdirectly used in a MICE algorithm to perform multiple imputation of high dimensional data.\n\tAt iteration $m$, for a target variable $\\bm{z}_j$, the DURR algorithm uses as building blocks of the \n\tMICE framework the following two steps:\n\n\t\\begin{itemize}\n\n\t\\item Generate a bootstrap sample $\\bm{Z}^{*(m)}$ by sampling with replacement from $\\bm{Z}$,\n\t\tand train a regularized linear regression model (such as Lasso regression) with\n\t\t$\\bm{z}_{j,obs}^{*(m)}$ as outcome and $\\bm{Z}_{-j,obs}^{*(m)}$ as predictors.\n\t\tThis produces a set of parameter estimates (regression coefficients and error variance)\n\t\t$\\hat{\\bm{\\theta}}_{j}^{(m)}$ that can be considered as a sample from equation \\eqref{eq_pd}.\n\n\t\\item Predict $\\bm{z}_{j,mis}$, based on $\\bm{Z}_{-j, mis}$ and $\\hat{\\bm{\\theta}}_{j}^{(m)}$, \n\t\tto obtain draws from the posterior predictive distribution of the missing data equation \n\t\t\\eqref{eq_ppd}.\n\n\t\\end{itemize}\n\n\\subsubsection{Indirect Use of Regularized Regression (IURR)}\n\tWhile DURR performs model trimming and parameter estimation simultaneously in equation \\eqref{eq_pd}, \n\tanother approach is to use regularized regression exclusively for model trimming, and to follow it \n\twith a standard multiple imputation procedure \\citep{zhaoLong:2016, dengEtAl:2016}.\n\tAt iteration $m$, the IURR algorithm performs the following steps for each target variable:\n%\n\t\\begin{itemize}\n%\n\t\\item Fit a multiple linear regression model using a regularized regression method with $\\bm{z}_{j,obs}$ as \n\t\tdependent variable and $\\bm{Z}_{-j,obs}^{(m)}$ as predictors (compared to DURR, the original data are \n\t\tused, not a bootstrap sample).\n\t\tIn this model, the regression coefficients that are \\emph{not} shrunk to 0 identify the active \n\t\tset of variables that will be used as predictors in the actual imputation model.\n\t\n\t\\item Obtain Maximum Likelihood Estimates of the regression parameters and error variance in the linear\n\t\tregression of $\\bm{z}_{j,obs}$ 
on the active set of predictors in $\\bm{Z}_{-j,obs}^{(m)}$ and\n\t\tdraw a new value of these coefficients by sampling from a multivariate normal distribution\n\t\tcentered around these MLEs\\footnote{The sampling notation is the same used by \\cite{dengEtAl:2016}.}:\n%\n\t\t\\begin{equation}\\label{eq_MLEpd}\n\t\t(\\hat{\\bm{\\theta}}_{j}^{(m)}, \\hat{\\sigma}_{j}^{(m)}) \\sim N(\\hat{\\bm{\\theta}}_{MLE}^{(m)}, \n\t\t\t\\hat{\\bm{\\Sigma}}_{MLE}^{(m)})\n\t\t\\end{equation}\n%\n\t\tso that equation \\eqref{eq_MLEpd} corresponds to equation \\eqref{eq_pd} in the general MICE framework.\n\n\t\\item Impute $\\bm{z}_{j,mis}$ by sampling from the posterior predictive distribution based \n\t\ton $\\bm{Z}_{-j,mis}^{(m)}$ and the parameter posterior draws $(\\hat{\\bm{\\theta}}_{j}^{(m)}, \n\t\t\\hat{\\sigma}_{j}^{(m)})$.\n%\n\t\\end{itemize}\n\n\\subsubsection{MICE with PCA (MI-PCA)}\n\tBy extracting Principal Components (PCs) from the auxiliary variables, it is possible to summarise the information \n\tcontained in this set with just a few components.\n\tA few PCs can then be used as predictors in a standard MICE algorithm in a low dimensional setting.\n\tThe Multiple Imputation with Principal Component Analysis (MI-PCA) imputation procedure can be summarized as follows:\n\n\t\\begin{itemize}\n\n\t\\item Extract the first principal components that cumulatively explain at most 50\\% of the variance \n\t\tin the auxiliary variables $\\bm{A}$, and collect them in a new data matrix $\\bm{A}'$;\n\t\\item Replace the set of auxiliary variables $\\bm{A}$ in $\\bm{Z}$ with $\\bm{A}'$ to obtain \n\t\t$\\bm{Z}' = (\\bm{T}, \\bm{A}')$;\n\t\\item Use the standard MICE algorithm with the Bayesian imputation under the normal linear model \n\t\t\\citep[p. 68, algorithm 3.1]{vanBuuren:2018} as elementary imputation method to obtain multiply \n\t\timputed datasets from the low dimensional $\\bm{Z}'$.\n\t\\end{itemize}\n\n\tNote that if missing values are present in the set of auxiliary variables, one can fill them in with a \n\tstochastic single imputation (SI) algorithm of choice.\n\tMI is preferred to SI because it accounts for the uncertainty regarding the missing values when producing \n\tstandard errors.\n\tAs the extraction of PCs does not require the estimation of standard errors, SI suffices.\n\tThis method is inspired by \\cite{howardEtAl:2015} and the \\emph{PcAux} R-package \\citep{PcAux}, which \n\timplements and develops these ideas.\n\t\n\\subsubsection{MICE with decision trees (MI-CART and MI-RANF)}\n\tThe MI-CART imputation method \\citep{burgetteReiter:2010} is a MICE algorithm that uses classification and regression \n\ttrees (CART) to define the conditional distributions used in the MI Gibbs sampler.\n\tGiven an outcome variable $y$ and a set of predictors $X$, CART is a nonparametric recursive partitioning technique \n\tthat models the relationship between $y$ and $X$ by sequentially splitting observations into subsets of units with \n\trelatively homogeneous $y$ values.\n\tAt every splitting stage, a CART algorithm searches through all predictor variables in $X$ to find the best binary \n\tpartitioning rule to predict $y$.\n\tThe resulting collection of binary splits can be visually represented by a decision tree structure where each terminal \n\tnode (or \\emph{leaf}) represents the conditional distribution of $y$ for units that satisfy the splitting rules.\n\n\tIn MI-CART, at the $m$-th iteration for a target variable $z_j$, a CART model is trained to predict $z_{j, obs}$ based on 
\n\t$\\bm{Z}_{-j, obs}^{(m)}$.\n\tEvery observation with a missing value on $z_j$ belongs to a terminal node of this CART model, depending on its values of\n\t$\\bm{Z}_{-j, mis}^{(m)}$. \n\tSampling from the $z_{j, obs}$ in a terminal node corresponds to sampling from the missing data posterior predictive\n\tdistribution.\n\tThe implementation of MI-CART used in this paper corresponds to the one presented by \\cite{dooveEtAl:2014}\n\t(p. 95, algorithm 1) and the \\emph{impute.mice.cart()} R function from the \\emph{mice} package.\n\n\tIn MI-RANF, at the $m$-th iteration for a target variable $z_j$, $k$ bootstrap samples are drawn from the complete \n\tdataset and $k$ single trees are fitted.\n\tIn each bootstrap sample, only a small random group of input variables is used to find the best split at each node.\n\tAll $k$ trees are used to compose the pool of candidates from which imputations are drawn.\n\tBootstrapping and random input selection introduce model and imputation uncertainty into the imputation procedure,\n\tas required by a proper MI procedure.\n\tFor greater detail on the algorithms, the reader may consult \\cite{dooveEtAl:2014} (algorithm A.1, p. 103).\n\tThe programming of the algorithm was heavily inspired by the \\emph{impute.mice.rf()} function in the \n\tR package \\emph{mice}.\n\n\\subsubsection{MICE optimal model (MI-OP)}\n\tWhen dealing with a large set of possible predictors for the imputation models, a common recommendation in the MI \n\tliterature is to decide which predictors to include by following three criteria \\citep[p. 168]{vanBuuren:2018}:\n\t\\begin{enumerate}\n\n\t\\item include all the variables that are part of the analysis models;\n\t\\item include all the variables that are related to the non-response;\n\t\\item include all the variables that are correlated with the variables target of imputation.\n\n\t\\end{enumerate}\n\n\tIn practice, researchers can never be sure that the second requirement is entirely met, as there is no way to know exactly \n\twhich variables are responsible for missingness.\n\tHowever, if we knew which predictors were essential for the imputation models, we could use this information to specify \n\toptimal imputation models.\n\tWith simulated data, we have perfect knowledge of which variables are involved in the missing data mechanisms.\n\tMI-OP is an ideal specification of the MICE algorithm that uses as elementary imputation strategy a low dimensional \n\tunivariate Bayesian imputation under the normal linear model and uses this knowledge to include only the relevant \n\tpredictors in the imputation models.\n\t\n\\subsection{Single data strategies}\n\n\\subsubsection{missForest}\n\tMost research on high-dimensional data imputation has focused on applications for DNA \n\tgenetics data where the goal is to allow the use of large datasets for high-dimensional\n\tpredictive algorithms, rather than inferential analysis.\n\tFor this reason, a variety of single imputation machine learning algorithms have been proposed\n\tand compared \\citep{deAndradeSilvaHruschka:2009, stekhovenBuhlmann:2011}.\n\n\tIn this study, we consider the missForest imputation method proposed by \\cite{stekhovenBuhlmann:2011},\n\twhich is a popular non-parametric imputation approach that can accommodate large numbers of predictors, \n\tcan handle mixed data types for the missing variables, and has been robustly implemented in a popular \n\tR-package \\citep{missForest}.\n\tThe approach consists of an iterative imputation that first trains a random forest 
on observed values, and then \n\tuses it to impute the missing values by averaging the predictions from its different trees.\n\tAs a single imputation method we do not expect it to perform well for inferential tasks, at least compared to \n\tthe other high dimensional MI methods discussed here.\n\n\\subsubsection{Complete Case Analysis (CC)}\n\tBy default, most data analysis software either ignores the presence of missing values or defaults to listwise \n\tdeletion: only complete cases are used for the analysis \\citep{R:2020, pandas:2020}.\n\tAs a default behaviour of most analysis tools, Complete Case Analysis remains a popular missing data treatment \n\tin the social sciences, despite its known flaws (\\citeauthor{rubin:1987}, \\citeyear{rubin:1987}, p. 8; \n\t\\citeauthor{vanBuuren:2018}, \\citeyear{vanBuuren:2018}, p. 9; \\citeauthor{baraldiEnders:2010}, \n\t\\citeyear{baraldiEnders:2010}).\n\tTherefore, this method was included as a reference point.\n\n\\paragraph{Gold Standard}\n\tFinally, in simulation and resampling studies, the analysis models can be fitted to the fully observed data.\n\tResults obtained in this fashion are referred to here and in the results tables as the Gold Standard method.\n\tThey represent the counterfactual analysis that would have been performed if there had been no missing data.\n", "meta": {"hexsha": "d2396735183007169ea40f735454deeb79a4d3cf", "size": 17641, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manuscript/old/02_section.tex", "max_stars_repo_name": "EdoardoCostantini/imputeHD-comp", "max_stars_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "manuscript/old/02_section.tex", "max_issues_repo_name": "EdoardoCostantini/imputeHD-comp", "max_issues_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manuscript/old/02_section.tex", "max_forks_repo_name": "EdoardoCostantini/imputeHD-comp", "max_forks_repo_head_hexsha": "c6fad9ee406ad85dd9874f336220177ae08ae477", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.337037037, "max_line_length": 124, "alphanum_fraction": 0.7538688283, "num_tokens": 4698, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7090191276365462, "lm_q1q2_score": 0.5567988049465322}}
{"text": "\n\n\\subsection*{Integral Curves }\n\nIf smooth curve $ \\begin{aligned} & \\quad \\\\ \n c:  & I \\to M \\\\  \n c(t) & = p \\end{aligned}$ \\\\\n$\\forall \\, t \\in I ,  \\, c' \\equiv c'(t) \\in T_{c(t)}M \\equiv T_p M$ \\\\\n\nIf $X$ vector field on $M$, \\\\\nintegral curve of $X$ is differentiable $c: I  \\to M$ s.t. \\\\\n\\quad $\\forall \\, t \\in I$, $c'(t) \\equiv \\dot{c} = X_{c(t)} = X_p$  \\\\\n\nEY  \\\\\n\\[\n\\begin{aligned}\n  & c:I \\to M \\\\ \n  & c(t) = p  \\\\\n  & \\varphi c(t) = \\varphi(p) \\Longrightarrow \\begin{aligned} & c^i(t) = x^i(t) \\\\\n    & \\dot{c}^i(t) = \\dot{x}^i(t) \\end{aligned} \\\\\n  X_p = X^i(p) \\frac{ \\partial }{ \\partial x^i } \\equiv X^i \\frac{ \\partial }{ \\partial x^i } = \\dot{c}^i \\frac{ \\partial }{ \\partial x^i } \n\\end{aligned}\n\\]\n\n\\[\n\\begin{gathered}\n  f: M \\to \\mathbb{R} \\\\ \n  X_p f = X_p f(p) = X_pf \\varphi^{-1} \\varphi(p) = X_(f\\varphi^{-1})(x^j) = X^i \\frac{ \\partial f}{ \\partial x^i }(x^j) \\equiv X^i(p)  \\frac{ \\partial f}{ \\partial x^i}(x^j) = \\dot{c}^i \\left. \\frac{ \\partial f}{ \\partial x^i} \\right|_p = \\frac{d}{dt}( f \\circ c)(t)\n\\end{gathered}\n\\]\n\n\n\n\nExample 9.1.  (Integral Curves)\n\n\\begin{enumerate}\n\\item[(a)] Let $X = \\frac{ \\partial }{ \\partial x}$, $(x,y) \\in \\mathbb{R}^2$ \n\n\\[\n\\begin{gathered}\n  c(t) = (x(t), y(t)) = x \\partial_x + y\\partial_y \\\\ \n  c' = \\dot{x} \\partial_x + \\dot{y} \\partial_y = \\partial_y   \\Longrightarrow \\dot{y} = 0 \\quad \\, y = b \\\\\n  \\dot{x} = 1 \\quad \\quad \\, x = a+t \\\\ \n  c = (a+t ,b) \n\\end{gathered}\n\\]\n\n\n\\item[(b)] $X = x\\partial_x - y \\partial_x$.  Comparing the components of these vectors, we see that this is equivalent to \n\n\\[\n\\begin{gathered}\n  \\begin{aligned}\n    \\dot{x} & = -y \\\\\n    \\dot{y}  & = x \\end{aligned} \\Longrightarrow \\begin{aligned}\n    y & = a \\sin{t} + b\\cos{t} \\\\ \n    x & = a\\cos{t} - b\\sin{t} \\end{aligned}\n\\end{gathered}\n\\]\n\n\n\n\\end{enumerate}\n\n\n\\begin{proposition}[9.2] Let smooth vector field $X$ on smooth $M$, \\\\\n$\\forall \\, p \\in M$, $\\exists \\, \\epsilon > 0 $, $\\exists \\, $ smooth $c:(-\\epsilon, \\epsilon) \\to M$ i.e. integral curve of $X$ starting at $p$ \\end{proposition}\n\n\\begin{proof}\n  existence from Thm. D.1\n\\end{proof}\n\n\n\\begin{lemma}[9.3] (Rescaling Lemma) \n$\\widetilde{ c}(t) = c(at)$ integral curve of $aX$, where $\\widetilde{I} = \\lbrace t | at \\in I \\rbrace$\n\\end{lemma}\n\\begin{proof} Let smooth $f$ defined in neighborhood of $\\widetilde{c}(t_0)$ \\\\\ne.g. of rescaling  - $2t = 2\\cdot 1 = 2 \\quad \\, a=2$ \n\n\\[\n\\begin{gathered}\n  \\widetilde{c}(t) = c(at) = c(\\tau) = p \\in M \\\\ \n  \\dot{ \\widetilde{c}}(t) f = \\frac{d}{dt} (f\\circ \\widetilde{c})(t) = \\frac{d}{dt} (f\\varphi^{-1})( \\varphi\\widetilde{c}(t)) = \\frac{d}{dt} (f\\varphi^{-1})(\\varphi c(at) ) = \\frac{d}{dt} (f\\varphi^{-1})(c^i(at)) = \\left. 
\\frac{ \\partial f}{ \\partial x^i} \\right|_p \\frac{dc^i}{ d\\tau}(\\tau)a = a \\frac{d}{d \\tau}(f\\circ c)(\\tau) = aX_p f\n\\end{gathered}\n\\]\n\n\n\\end{proof}\n\n\\begin{lemma}[9.4] (Translation Lemma)\n\\[\n\\begin{aligned}\n  & \\widetilde{I} = \\lbrace t | t + a \\in I \\rbrace \\\\ \n  & \\widetilde{c}(t) = c(t+a)\n\\end{aligned}\n\\]\n\n\n\n\\end{lemma}\n\n\\exercisehead{9.5}\n\\begin{proof}\n\\[  \n\\begin{aligned}\n&  \\widetilde{c}(t) = c(t+a) = c(\\tau) = p \\in M \\\\ \n&  \\dot{ \\widetilde{c}}(t) f = \\frac{d}{dt} (f\\circ \\widetilde{c})(t) = \\frac{d}{dt} f(\\widetilde{c}(t)) = \\frac{d}{dt} f(c(t+a) ) = \\left. \\frac{ \\partial f}{ \\partial x^i } \\right|_p \\frac{d c^i(\\tau) }{ d\\tau } = \\dot{c}^i(\\tau)  \\left. \\frac{ \\partial f}{ \\partial x^i } \\right|_p = X^i_p \\left. \\frac{ \\partial f}{ \\partial x^i} \\right|_p = X_p f\n\\end{aligned}\n\\]\n\\end{proof}\n\n\n\n\n\\begin{proposition}[9.6] (Naturality of Integral curves)\n  Suppose smooth $F: M \\to N$ \\\\\n  \\quad Then $\\begin{aligned} & \\quad \\\\ \n    & X \\in \\mathfrak{X}(M) \\\\\n    & Y \\in \\mathfrak{X}(N) \\end{aligned}$ \\quad \\, $F$-related iff $F$ takes integral curves of $X$ to integral curves of $Y$\n\\end{proposition}\n\n\\begin{proof}\nRecall \n\\[\nX,Y \\, F-\\text{related means } \\, dF(X) =Y\n\\]\n\nLet $\\gamma = Fc$ \n\\[\n\\dot{\\gamma} = \\frac{d}{dt} (F\\circ c)(t) = (dF)(\\dot{c}) = dF(X) = Y\n\\]\n\n$\\Longrightarrow \\gamma $ integral curve of $Y$ \\\\\n\nif $\\gamma = Fc$ integral curve of $Y$, $\\dot{\\gamma} = Y$.  \\quad $q = F(p)$.  $p=c(t)$\n\n\\[\n\\begin{gathered}\n  Yg= Y_qg = Y^j \\left. \\frac{ \\partial g}{ \\partial y^j } \\right|_q = \\dot{\\gamma}^j(t) \\left. \\frac{ \\partial g}{ \\partial y^j } \\right|_{F(p)} = \\frac{d}{dt} (F\\circ c ) \\left. \\frac{ \\partial g}{ \\partial y^j } \\right|_{F(p)} = \\frac{ \\partial y^j}{ \\partial x^k } \\dot{c}^k(t) \\left. \\frac{ \\partial g}{ \\partial y^j } \\right|_q = \\frac{ \\partial y^j}{ \\partial x^k} X^k_p \\left. \\frac{ \\partial g}{ \\partial y^j } \\right|_q = (F_* X)g = dF(X)g \\\\\n\\Longrightarrow dF(X) = Y\n\\end{gathered}\n\\]\n\\end{proof}\n\n\n\\subsection*{Flows }\n\n\nLet $X \\in \\mathfrak{X}(M)$ \\\\\nSuppose $\\forall \\, p \\in M$, $\\exists \\, !$ \\, integral curve starting at $p$, $\\phi^{(p)}: \\mathbb{R} \\to M$ \\\\\n$\\forall \\, t \\in \\mathbb{R}$, define $\\begin{aligned} & \\quad \\\\\n  & \\phi_t: M \\to M \\\\\n  & \\phi_t(p) = \\phi^{(p)}(t) \\end{aligned}$\n\n\\[\n\\phi_0(p) = \\phi^{(p)}(0) = p\n\\]\n\nEY: $\\phi_t$ pushes $p$ to $\\phi^{(p)}(t)$ over time interval $t$ \\\\\n\ntranslation lemma implies $t\\mapsto \\phi^{(p)}(t+s)$ is integral curve of $X$ starting at $q = \\phi^{(p)}(s)$  \\\\\n\nassuming uniqueness of integral curves, $\\phi^{(q)}(t) = \\phi^{(p)}(t+s)$, so \n\n\\[\n\\begin{gathered}\n  \\phi_t \\circ \\phi_s(p) = \\phi_{t+s}(p) \\\\ \n  \\phi_0(p) = \\phi^{(p)}(0) = p \n\\end{gathered}\n\\]\n\n$\\Longrightarrow \\phi : \\mathbb{R}\\times M \\to M$ is an action of additive group $\\mathbb{R}$ on $M$.  \\\\\n\ndefine \\emph{ global flow } on $M$ (1-parameter group action) - cont. left $\\mathbb{R}$-action on $M$, i.e. \\\\\n\\quad cont. $\\phi : \\mathbb{R} \\times M \\to M$ s.t. 
$\\forall \\, s,t \\in \\mathbb{R}$, $\\forall \\, p \\in M$ \n\n\\begin{equation}\n\\begin{aligned}\n& \\phi(t, \\phi(s,p) ) = \\phi(t+s, p ) \\\\\n& \\phi(0,p) = p \n\\end{aligned} \\quad \\quad \\quad \\, (9.2)\n\\end{equation}\n\ngiven global flow $\\phi$ \\\\\n\n$\\forall \\, t \\in \\mathbb{R}$, define cont. $\\begin{aligned} & \\quad \\\\\n  & \\phi_t: M \\to M \\\\\n  & \\phi_t(p) = \\phi(t,p)\n\\end{aligned}$  \n\\[\n\\xrightarrow{ (9.2) } \\begin{gathered}\n  \\phi_t \\cdot \\phi_s = \\phi_{t+s} \\\\ \n  \\phi_0 = 1_M\n\\end{gathered}\n\\]\n\n$\\phi_t : M \\to M$ homeomorphism; if flow smooth, $\\phi_t$ diffeomorphism.   \\\\\n\n$\\forall \\, p \\in M$, define $ \\begin{aligned} & \\quad \\\\ & \\phi^{(p)}:\\mathbb{R} \\to M \\\\ & \\phi^{(p)}(t) = \\phi(t,p) \\end{aligned}$ \\\\\n\n$\\phi^{(p)}$ is orbit of $p$ under group action.  \n\nsmooth global flow $\\phi:\\mathbb{R} \\times M \\to M$ \\\\\n\\quad $\\forall \\, p \\in M$, define $X_p \\in T_pM$\n\\[\nX_p = (\\phi^{(p)})'(0)\n\\]\n\n\n\\begin{proposition}[9.7] Let smooth global flow $\\phi: \\mathbb{R}\\times M \\to M$ on smooth $M$ \\\\\ninfinitesimal generator $X$ of $\\phi$ \\quad $\\begin{aligned} & \\quad \\\\\n  & p \\mapsto X_p \\\\\n  & X_p  = \\dot{\\phi}^{(p)}(0) \\end{aligned}$ \\quad is smooth vector field on $M$, and $\\forall \\, \\phi^{(p)}$, $\\phi^{(p)}$ integral curve of $X$ \n\\end{proposition}\n\n\\begin{proof}\n  Show $X$ smooth.  Use Prop. 8.14,  \\\\\n$f$ smooth on open $U\\subseteq M$, $f:U\\to \\mathbb{R}$ \n\n\\[\nXf(p) = X_pf = \\dot{\\phi}^{(p)}(0) f \\equiv ( \\dot{\\phi}^{(p)}(0) )[f] = \\left. \\frac{d}{dt} ( f\\circ \\phi^{(p)} ) \\right|_{t=0} = \\frac{ \\partial }{ \\partial t} \\left. (f\\circ \\phi(t,p) ) \\right|_{t=0} \n\\]\n\n$f\\circ \\phi(t,p) = f(\\phi(t,p))$ smooth function of $(t,p)$ by composition, so $\\partial_t(f\\circ \\phi)$ smooth. So $Xf$ smooth, so $X$ smooth.   \\\\\n\nLet $q = \\phi^{(p)}(a) = \\phi_a(p)$ \n\n\\begin{equation}\n  \\phi^{(q)}(t) = \\phi_t(q) = \\phi_t( \\phi_a(p)) = \\phi_{t+a}(p) = \\phi^{(p)}(t+a) \\quad \\quad \\quad \\, (9.4)\n\\end{equation}\n\n\\begin{equation}\n  X_qf = \\dot{\\phi}^{(q)}(0)f = \\dot{\\phi}^{(q)}(0)[f] = \\frac{d}{dt} \\left. (f \\circ \\phi^{(q)}(t)) \\right|_{t=0} = \\frac{d}{dt} \\left. (f\\circ \\phi^{(p)}(t+a) ) \\right|_{t=0} = \\dot{\\phi}^{(p)}(a) f = X_{ \\phi^{(p)}(a) } f \\quad \\quad \\quad \\, (9.5)\n\\end{equation}\n\n\\end{proof}\n\nSo by def., $\\phi^{(p)}(t)$ integral curve of $X$\n\n\n\\subsubsection*{ The Fundamental Theorem on Flows }\n\nflow domain for $M$ is open $\\mathcal{D} \\subseteq \\mathbb{R} \\times M$ s.t. $\\forall \\, p \\in M$, $\\mathcal{D}^{(p)} = \\lbrace t \\in \\mathbb{R} | (t,p) \\in \\mathcal{D} \\rbrace$ is an open interval containing $0$.  \\\\\n\n\\textbf{flow} on $M$ is cont. $\\phi : \\mathcal{D} \\to M$ s.t. group laws : \n\\begin{equation}\n  \\forall \\, p \\in M, \\, \\phi(0, p ) = p \\quad \\quad \\quad \\, (9.6)\n\\end{equation}\n\n\\begin{equation}\n  \\begin{aligned} & \\quad \\\\ \n    & \\forall \\, s \\in \\mathcal{D}^{(p)} \\\\ \n    & \\forall \\, t \\in \\mathcal{D}^{ (\\phi(s,p))} \\end{aligned} \\quad \\text{ s.t. 
} s + t \\in \\mathcal{D}^{(p)}, \\quad \\quad \\, \\phi(t, \\phi(s,p) ) = \\phi(t+s, p ) \\quad \\quad \\quad \\, (9.7)\n\\end{equation}\n\n\\begin{proposition}[9.11] If $\\phi: \\mathcal{D} \\to M$ smooth flow, \\\\\nthen infinitesimal generator $X$ of $\\phi$ smooth vector field and $\\forall \\, \\phi^{(p)}$ integral curve of $X$\n\\end{proposition}\n\n\\begin{proof}\nRecall that  \\\\\n\\emph{infinitesimal generator} $X$ of $\\phi$, $\\begin{aligned} & \\quad \\\\ \n  & p \\mapsto X_p \\\\\n  & X_p = \\dot{\\phi}^{(p)}(0) \\end{aligned}$ \\quad now on open $\\mathcal{D}$, $\\mathcal{D} \\subseteq \\mathbb{R} \\times M$ s.t. $\\forall \\, p \\in M$, $\\mathcal{D}^{(p)} = \\lbrace t \\in \\mathbb{R} | (t,p) \\in \\mathcal{D} \\rbrace$ open, \\\\\n\ngiven $\\phi$ smooth flow, \\\\\n\\quad $\\phi(t,q)$ defined and smooth $\\forall \\, (t,q)$ sufficiently close to $(0,p)$ since $\\mathcal{D}$ open.  With $f$ smooth on open $U$ in this open neighborhood of $(0,p)$, \n\n\\[\n\\Longrightarrow Xf(p) = \\left. \\frac{ \\partial }{ \\partial t } (f \\circ \\phi(t,p)) \\right|_{t=0}\n\\]\n\n$f\\phi$ smooth (by composition), so $\\partial_t f\\phi$ smooth, so $X$ smooth itself around $\\forall \\, p \\in M$.   \\\\\n\nSuppose $t\\in \\mathcal{D}^{(p)}$ \\\\\n$\\mathcal{D}^{(p)}$, $\\mathcal{D}^{ (\\phi_t(p))} = \\mathcal{D}^{(q)}$ open (by def.)\n\n$\\phi_{\\Delta t}\\phi_t(p) = \\phi_{ \\Delta t + t }(p)$ by def. of flow. \n\n\n\n\n\n\n\n\n\\end{proof}\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=4.8em, minimum width=2em]\n  {\n\\mathfrak{X}(M) &  \\\\\nM &  \\\\\n\\mathcal{D} &       M \\\\\n\\mathcal{D}^{(p)} & \\\\\n};\n  \\path[->]\n  (m-2-1) edge node [above] {$$} (m-1-1)\n  edge node [above] {$\\phi_t$} (m-3-2)\n  (m-3-1) edge node [auto] {$$} (m-2-1)\n  edge node [below left] {$\\phi$} (m-3-2)\n  edge node [auto] {$$} (m-4-1)\n  (m-4-1) edge node [below] {$\\phi^{(p)}$} (m-3-2)\n  (m-3-2) edge node [right] {$\\left. \\frac{ \\partial }{ \\partial t} \\right|_{t=0} = \\left. \\frac{d}{dt} \\right|_{t=0}$} (m-1-1)\n;\n\\end{tikzpicture} \\quad \\quad \\quad \\, \\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=4.8em, minimum width=2em]\n  {\nX_p &  \\\\\np &  \\\\\n(t,p)       &       \\phi(t,p) = \\phi^{(p)}(t) = \\phi_t(p) \\\\\nt & \\\\\n};\n  \\path[|->]\n  (m-2-1) edge node [above] {$$} (m-1-1)\n  edge node [above] {$\\phi_t$} (m-3-2)\n  (m-3-1) edge node [auto] {$$} (m-2-1)\n  edge node [below left] {$\\phi$} (m-3-2)\n  edge node [auto] {$$} (m-4-1)\n  (m-4-1) edge node [below] {$\\phi^{(p)}$} (m-3-2)\n  (m-3-2) edge node [right] {$\\left. \\frac{ \\partial }{ \\partial t} \\right|_{t=0} = \\left. \\frac{d}{dt} \\right|_{t=0}$} (m-1-1)\n;\n\\end{tikzpicture}\n\n\n\n\n\n\\begin{theorem}[9.12] (Fundamental Theorem on Flows)\n  Let smooth vector field $X$ on smooth manifold $M$.  \\\\\n$\\exists \\, ! \\, \\, $ smooth maximal flow $\\phi : \\mathcal{D} \\to M$ whose infinitesimal generator is $X$ (recall $\\begin{aligned} & \\quad \\\\ \n    & p \\mapsto X_p \\\\\n    & X_p = \\dot{\\phi}^{(p)}(0) \\end{aligned}$) s.t. \n\\begin{enumerate}\n\\item[(a)] $\\forall \\, p \\in M$, curve $\\phi^{(p)} : \\mathcal{D}^{(p)} \\to M$ is unique maximal integral curve of $X$ starting at $p$. 
\n\\item[(b)] If $s\\in \\mathcal{D}^{(p)}$, then $\\mathcal{D}^{(\\phi(s,p))}$ is interval $\\mathcal{D}^{(p)}-s = \\lbrace t - s | t\\in \\mathcal{D}^{(p)} \\rbrace$\n\\item[(c)] $\\forall \\, t \\in \\mathbb{R}$, $M_t = \\lbrace p \\in M | (t,p) \\in \\mathcal{D} \\rbrace$ open in $M$, and $\\phi_t:M_t \\to M_{-t}$ diffeomorphism with inverse $\\phi_{-t}$\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nFrom Proposition 9.2 ($\\forall \\, p \\in M$, $\\exists \\, \\epsilon > 0$, \\, $\\exists \\, $ smooth $c:(-\\epsilon, \\epsilon) \\to M$, i.e. integral curve of $X$ starting at $p$) \\\\\n\nSuppose $c,\\widetilde{c} : I \\to M$ \\, 2 integral curves of $X$, open $I$ s.t. $c(t_0) = \\widetilde{c}(t_0)$ for some $t_0 \\in I$ \\\\\nLet $S = \\lbrace t | t \\in I, \\text{ s.t. } c(t) = \\widetilde{c}(t) \\rbrace$ \\\\\nClearly $S \\neq \\emptyset$ since $c(t_0) = \\widetilde{c}(t_0)$ \\quad (hypothesis) \\\\\n\\quad $S$ closed in $I$ by continuity (of $c, \\widetilde{c}$) \\\\\nSuppose $t_1 \\in S$ \\\\\n\\quad $c(t_1) = \\widetilde{c}(t_1)=p$\nThen in smooth coordinate neighborhood around $p=c(t_1)$, \\, $c, \\widetilde{c}$ both solutions to same ODE with same initial conditions $c(t_1) = \\widetilde{c}(t_1) = p$ \\\\\n\nBy uniqueness part of Thm. D.1, $c\\equiv \\widetilde{c}$ on interval containing $t_1$ \\\\\n\\quad $\\Longrightarrow S $ \\, open in $I$. \\\\\nSince $I$ connected, $S=I$ \\quad ($S$ clopen) \\\\\n$c= \\widetilde{c} \\quad \\, \\forall \\, t \\in I$ \\\\\nThus, any $c , \\widetilde{c}$ that agree at one point agree on their common domain.   \\\\ \n\n$\\forall \\, p \\in M$, let $\\mathcal{D}^{(p)} = \\bigcup_{\\alpha} I_{\\alpha}$, open $I_{\\alpha} \\subseteq \\mathbb{R}$ \n\\quad s.t. $0 \\in I_{\\alpha}$, and integral curve $\\begin{aligned} & \\quad \\\\ \n  & c_{\\alpha} : I_{\\alpha} \\to M \\\\\n  & c_{\\alpha}(0) = p \\end{aligned}$ \\, starting at $p$ is defined.  \\\\\n\ndefine $\\begin{aligned} & \\quad \\\\ \n  & \\phi^{(p)}: \\mathcal{D}^{(p)} \\to M  \\\\\n  & \\phi^{(p)}(t) = c(t) \\end{aligned}$ \\, where $c$ is any integral curve s.t. $c(0) = p$ and $c$ defined on $I_{\\alpha}$ s.t. $0,t \\in I_{\\alpha}$. \\\\\n\nsince all such integral curves agree on their common domains by the argument above, $\\phi^{(p)}$ well-defined \\\\\n\\quad \\quad and is obviously unique maximal integral curve starting at $p$.   \\\\\n\n\nLet $\\mathcal{D} = \\lbrace (t,p) \\in \\mathbb{R} \\times M | t \\in \\mathcal{D}^{(p)} \\rbrace$ \\\\\ndefine $\\begin{aligned} & \\quad \\\\ \n  & \\phi :\\mathcal{D} \\to M \\\\\n  & \\phi{(t,p)} = \\phi^{(p)}{(t)} \\equiv \\phi_t{(p)} \\end{aligned}$ (notation for last statement)  \\\\\n\nBy def. $\\phi$ satisfies (a): $\\forall \\, p \\in M$, $\\exists \\, ! \\, $ maximal integral curve of $X$, $\\phi^{(p)}$, starting at $p$.  \\\\\n\nFix $p\\in M$, $s \\in \\mathcal{D}^{(p)}$ \\\\\nwrite $q = \\phi{ (s,p) } = \\phi^{(p)}(s)$ \\\\\n\ndefine $\\begin{aligned} & \\quad \\\\ \n  & \\widetilde{c}: \\mathcal{D}^{(p)} -s \\to M \\\\\n  & \\widetilde{c}(t) = \\phi^{(p)}(t+s) \\, \\text{ s.t. } \\, \\widetilde{c}{(0)} = \\phi^{(p)}(s) = q \\end{aligned}$ \\\\\n\nBy translation lemma (9.4), $\\begin{aligned} & \\quad \\\\\n  & \\widetilde{c}{(t)} = c{(t+s)} \\\\\n  & \\widetilde{I} = \\lbrace t | t+s \\in I \\rbrace \\end{aligned}$ \\quad \\, e.g. $\\begin{aligned} & \\quad \\\\ \n  & I = (-2,6) , \\, \\, s = 1 \\\\\n  & \\widetilde{I} = (-3,5) \\end{aligned}$ \\quad \\, $\\widetilde{c}(t)$ also integral curve of $X$.  
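\n\nSpelling this out (a step added for clarity; it is implicit above): $\\dot{\\widetilde{c}}(t) = \\frac{d}{dt} \\phi^{(p)}(t+s) = \\dot{\\phi}^{(p)}(t+s) = X_{\\phi^{(p)}(t+s)} = X_{\\widetilde{c}(t)}$, using that $\\phi^{(p)}$ is an integral curve of $X$. 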
\n\nBy uniqueness of ODE solutions, \\\\\n\\quad $\\widetilde{c}$ agrees with $\\phi^{(q)}$ on their common domain, \\\\\n\\quad \\quad equivalent to second group law (9.7) \n\\[\n\\widetilde{c}{(t)} = \\phi^{(p)}{ (t+s)} = \\phi{(t+s,p)} = \\phi^{(q)}{(t)} = \\phi{(t,q)} = \\phi{(t,\\phi{(s,p) }) }\n\\]\n\n\n\n\\end{proof}\n\n\n\\begin{lemma}[9.19] \\textbf{(Escape Lemma)}\nSuppose smooth $M$, $V\\in \\mathfrak{X}(M)$.  \\\\\nIf $\\gamma: J \\to M$ maximal integral curve of $V$ s.t. domain $J$ has finite least upper bound $b$, \\\\\n\\phantom{ \\quad \\, } then $\\forall \\, t_0 \\in J$, $\\gamma([t_0,b))$ not contained in any compact subset of $M$\n\\end{lemma}\n\n\\begin{proof}\nSee Problem 9-6.  Solution there.  \n\\end{proof}\n\n\\subsection*{Flowouts}\n\nSuppose smooth $M$, $S\\subseteq M$ embedded $k$-dim. submanifold. \\\\\nsmooth $V \\in \\mathfrak{X}(M)$ s.t. $V$ nowhere tangent to $S$.  \\\\\nLet $\\theta:\\mathcal{D} \\to M$ be flow of $V$  \\\\\nLet $\\mathcal{O} = (\\mathbb{R}\\times S) \\bigcap \\mathcal{D}$ \\\\\n\\phantom{Let } $\\Phi= \\left. \\theta \\right|_{\\mathcal{O}}$\n\\begin{enumerate}\n\\item[(a)] $\\Phi : \\mathcal{O} \\to M$ immersion \n\\item[(b)] $\\frac{ \\partial }{ \\partial t} \\in \\mathfrak{X}(\\mathcal{O})$ is $\\Phi$-related to $V$\n\\item[(c)] $\\exists \\, $ smooth $\\delta >0$, $\\delta : S \\to \\mathbb{R}$ s.t. \\\\\n$\\left. \\Phi \\right|_{\\mathcal{O}_{\\delta}}$ injective, where $\\mathcal{O}_{\\delta} \\subseteq \\mathcal{O}$ flow domain.  \n\\begin{equation}\n  \\mathcal{O}_{\\delta} = \\lbrace (t,p) \\in \\mathcal{O} | |t| < \\delta(p) \\rbrace \\quad \\quad \\quad \\, (9.9)\n\\end{equation}\nThus $\\Phi(\\mathcal{O}_{\\delta})$ immersed submanifold of $M$ containing $S$. $V$ tangent to $\\Phi(\\mathcal{O}_{\\delta})$\n\\item[(d)] If $S$ codim. $1$, $\\left. \\Phi \\right|_{\\mathcal{O}_{\\delta}}$ diffeomorphism onto open submanifold of $M$\n\\end{enumerate}\n\n\\subsection*{Flows and Flowouts on Manifolds with Boundary }\n\n\\subsection*{Lie Derivatives }\n\n\\begin{equation}\n  D_vW(p) = \\left. \\frac{d}{dt} \\right|_{t=0} W_{p+tv} = \\lim_{t\\to 0} \\frac{ W_{p+tv} - W_p }{t} \\quad \\quad \\quad \\, (9.15)\n\\end{equation}\n\n\\[\nD_vW(p) = D_vW^i(p) \\left. \\frac{ \\partial }{ \\partial x^i } \\right|_p\n\\]\n\n\\textbf{Lie derivative of $W$ with respect to $V$ }\n\n\\begin{equation}\n  (\\mathcal{L}_VW)_p = \\left. 
\\frac{d}{dt} \\right|_{t=0} d(\\theta_{-t})_{\\theta_t(p)}(W_{\\theta_t(p)}) = \\lim_{t \\to 0} \\frac{ d(\\theta_{-t})_{\\theta_t(p)}(W_{\\theta_t(p)}) - W_p }{ t } \\quad \\quad \\quad \\, (9.16)\n\\end{equation}\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=6.8em, minimum width=2em]\n  {\n\\mathfrak{X}(M) & \\mathfrak{X}(M)  \\\\\n\\mathfrak{X}(M) &   \\\\\nM    &       M \\\\\n};\n  \\path[->]\n  (m-3-1) edge node [above] {$$} (m-2-1)\n  edge node [above] {$\\phi_t$} (m-3-2)\n  (m-3-2) edge node [auto] {$$} (m-1-2)\n  edge [bend left] node [auto] {$\\phi_{-t}$} (m-3-1)\n  (m-1-2) edge node [above] {$d(\\phi_{-t})=(\\phi_{-t})_*$} (m-1-1)\n;\n\\end{tikzpicture} \\quad \\quad \\quad \\, \\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=6.8em, minimum width=2em]\n  {\nd(\\phi_{-t})_{\\phi_t(p)}(W_{\\phi_t(p)}) & W_{\\phi_t(p)}  \\\\\nW_p   &    \\\\\np    &       \\phi_t(p) \\\\\n};\n  \\path[|->]\n  (m-3-1) edge node [above] {$$} (m-2-1)\n  edge node [above] {$\\phi_t$} (m-3-2)\n  (m-3-2) edge node [auto] {$$} (m-1-2)\n  edge [bend left] node [auto] {$\\phi_{-t}$} (m-3-1)\n  (m-1-2) edge node [above] {$d(\\phi_{-t})=(\\phi_{-t})_*$} (m-1-1)\n;\n\\end{tikzpicture}\n\n\n\n\n\\begin{lemma}[9.36]\n  Suppose smooth $M$ with or without $\\partial$, \\, $V, W \\in \\mathfrak{X}(M)$ \\\\\nIf $\\partial M \\neq \\emptyset$, assume $V$ tangent to $\\partial M$ \\\\\nThen $(\\mathcal{L}_VW)_p$ exists $\\forall \\, p \\in M$, and $\\mathcal{L}_VW$ is a smooth vector field\n\\end{lemma}\n\n\\begin{proof}\n  $\\forall \\, (t,x ) \\in J_0 \\times U_0$, \\\\\n  the matrix of $d(\\theta_{-t})_{\\theta_t(x)}: T_{\\theta_t(x)}M \\to T_xM$ is\n\\[\n\\left( \\frac{ \\partial \\theta^i}{ \\partial x^j}(-t, \\theta(t,x)) \\right)\n\\]\n\nTherefore,\n\\[\nd(\\theta_{-t})_{\\theta_t(x)}(W_{\\theta_t(x)}) = \\frac{ \\partial \\theta^i}{ \\partial x^j}(-t,\\theta(t,x)) W^j(\\theta(t,x)) \\left. \\frac{ \\partial }{ \\partial x^i} \\right|_x\n\\]\n\n\\end{proof}\n\n\\exercisehead{9.37}\n\nGiven $V = v^i \\frac{ \\partial }{ \\partial x^i}$ with constant coefficients (i.e. $v^i$ constant)\n\n\\[\n\\begin{aligned}\n  & \\dot{\\theta}^{(x)}(t) = V_{\\theta^{(x)}(t)} \\\\ \n  & \\dot{\\theta}^i(t) = v^i  \\\\\n  & \\theta^i(t)= v^it + x^i\n\\end{aligned} \\quad \\quad \\Longrightarrow \\quad \n\\frac{ \\partial \\theta^i}{ \\partial x^j} = \\delta^i_{ \\, \\, j}\n\\]\n\n\\[\n\\begin{gathered}\n\\frac{d}{dt} W^i(\\theta(t,x))  = \\frac{ \\partial W^i}{ \\partial y^j}(v^j) \n\\end{gathered}\n\\]\n\nFrom the proof of Lemma 9.36, \n\\[\n\\begin{gathered}\n  d(\\theta_{-t})_{\\theta_t(x)}(W_{\\theta_t(x)}) = \\frac{ \\partial \\theta^i}{ \\partial x^j}(-t,\\theta(t,x)) W^j(\\theta(t,x)) \\left. \\frac{ \\partial }{ \\partial x^i} \\right|_x = \\\\\n = \\delta^i_{ \\, \\, j} W^j(\\theta(t,x)) \\left. \\frac{ \\partial }{ \\partial x^i} \\right|_x  = W^i(\\theta(t,x)) \\left. \\frac{ \\partial }{ \\partial x^i} \\right|_x\n\\end{gathered}\n\\]\nFrom (9.16)\n\\[\n\\begin{gathered}\n  (\\mathcal{L}_VW)_p = \\left. \\frac{d}{dt} \\right|_{t=0} d(\\theta_{-t})_{\\theta_t(p)}(W_{ \\theta_t(p) } ) = v^j \\frac{ \\partial W^i}{ \\partial x^j} \\left. \\frac{ \\partial }{ \\partial x^i} \\right|_p = D_VW^i(p) \\left. 
\\frac{ \\partial }{ \\partial x^i} \\right|_p = D_VW(p)\n\\end{gathered}\n\\]\n\n\\hrulefill\n\n\\begin{theorem}[9.38] If smooth $M$, and $V,W \\in \\mathfrak{X}(M)$, \n\\[\n\\mathcal{L}_VW = [V,W]\n\\]\n\\end{theorem}\n\n\n\n\\begin{corollary}[9.39]\n  \\begin{enumerate}\n    \\item[(a)]\n    \\item[(b)]\n    \\item[(c)]\n    \\item[(d)]\n    \\item[(e)]\n\\end{enumerate}\n\\end{corollary}\n\n\\exercisehead{9.40}\n\n  \\begin{enumerate}\n    \\item[(a)] \\[\n\\mathcal{L}_VW = [V,W] = -[W,V] = \\mathcal{L}_WV\n\\]\n    \\item[(b)]\n    \\item[(c)]\n    \\item[(d)]\n    \\item[(e)]\n\\end{enumerate}\n\n\n\\hrulefill\n\n\nProp. 9.41 is about derivative of $d(\\theta_{-t})_{\\theta_t(p)}(W_{\\theta_t(p)})$ at other times\n\n\\begin{proposition}[9.41] Suppose smooth $M$ with or without $\\partial$ and $V, W \\in \\mathfrak{X}(M)$ \\\\\nIf $\\partial M \\neq \\emptyset$, assume $V$ tangent to $\\partial M$ \\\\\nLet $\\theta$ flow of $V$.  \\\\\n\n$\\forall \\, (t_0,p)$ in domain of $\\theta$\n\n\\[\n\\left. \\frac{d}{dt} \\right|_{t=t_0} d(\\theta_{-t})_{\\theta_t(p)}(W_{\\theta_t(p)})  = d(\\theta_{-t_0}) ( (\\mathcal{L}_VW)_{\\theta_{t_0}(p)} )\n\\]\n\\end{proposition}\n\n\n\\subsection*{ Commuting Vector Fields }\n\n\n\\subsection*{ Time-Dependent Vector Fields }\n\nLet smooth manifold $M$ \n\n\\textbf{time-dependent vector field on $M$}, $V$ \\\\ \n\\quad cont. $V: J \\times M \\to TM$, interval $J \\subseteq \\mathbb{R}$ \\\\\n\\quad \\quad s.t. \n\\[\nV(t,p) \\in T_p M \\quad \\, \\forall \\, (t,p) \\in J \\times M\n\\]\ni.e. $\\forall \\, t \\in J$, \\\\\n$\\begin{aligned}\n  & V_t : M \\to TM \\\\\n  & V_t(p) = V(t,p) \\end{aligned}$ is a vector field on $M$\n\nEY : 20150226\n\n\\textbf{time-dependent vector field on $M$}, $V$ is \\\\\n\\phantom{ \\quad \\quad \\, }  $\\begin{aligned}\n& \\quad \\\\\n \\text{ cont. } & V : J \\times M \\to TM \\quad \\, \\text{ interval } J \\subseteq \\mathbb{R} \\\\\n  & V(t,p) \\in T_p M \\quad \\, \\forall \\, (t,p) \\in J \\times M \\end{aligned}$\n\ni.e.  $\\forall \\, t \\in J$ \\\\\n$\\begin{aligned}\n& \\quad \\\\\n  & V_t : M \\to TM \\\\\n  & V_t(p) = V(t,p ) \\in \\mathfrak{X}(M) \\end{aligned}$\n\n\n\\textbf{integral curve of $V$ } is diff. $\\begin{aligned} \n& \\quad \\\\\n  & \\gamma : J_0 \\to M \\\\\n  & \\dot{\\gamma}(t) = V(t,\\gamma(t)) \\quad \\, \\forall \\, t \\in J_0 \\end{aligned}$ \\quad \\, where $J_0 \\subset J $ s.t. 
\n\nEvery $X \\in \\mathfrak{X}(M)$ determines a time-dependent vector field $V: \\mathbb{R} \\times M \\to TM$ by \n\\[\nV(t,p) = X_p\n\\]\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3.2em, column sep=4.8em, minimum width=3.2em]\n  {\n& \\mathfrak{X}(\\mathbb{R}\\times M) = T\\mathbb{R}\\oplus TM     & TM   \\\\\nJ \\times M & \\mathbb{R} \\times M &  M\\\\\n& \\mathcal{E} \\subseteq J \\times J \\times M         &       M \\\\\n& \\mathcal{E}^{(t_0,p)}                            & \\\\\n};\n  \\path[->]\n  (m-2-2) edge node [left] {$\\widetilde{V}$} (m-1-2)\n         edge node [auto] {$V$} (m-1-3)\n          edge node [above] {$$} (m-2-3)\n  (m-3-2) edge node [auto] {$$} (m-2-2)\n          edge node [auto] {$$} (m-2-3)\n          edge node [below left] {$\\psi$} (m-3-3)\n          edge node [auto] {$$} (m-4-2)\n          edge node [auto] {$\\widetilde{\\theta}$} (m-2-1)\n  (m-4-2) edge node [below] {$\\psi^{(t_0,p)}$} (m-3-3)\n  (m-2-3) edge node [right] {$\\psi_{tt_0}$} (m-3-3)\n          edge node [right] {$X$} (m-1-3)\n  (m-1-3) edge node [above] {$\\oplus T\\mathbb{R}$} (m-1-2)\n  (m-2-1) edge node [above] {$t=t_0$} (m-2-2)\n          edge node [auto] {$\\left. \\frac{ \\partial }{ \\partial t} \\right|_{t=t_0}$} (m-1-2)\n;\n\\end{tikzpicture} \\quad \\quad \\quad \\, \n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3.2em, column sep=4.8em, minimum width=3.2em]\n  {\n& \\left. \\frac{ \\partial }{ \\partial t} \\right|_{t=t_0} \\widetilde{\\theta} = \\widetilde{V}_{(t_0,p)} = \\left( \\left. \\frac{ \\partial }{ \\partial s} \\right|_{t_0}, V(t_0,p) \\right) & V(t_0,p) = X_p  \\\\\n\\widetilde{\\theta}(t,(t_0,p)) = (\\alpha(t,(t_0,p)), \\beta(t,(t_0,p)))  & (t_0,p)        & p  \\\\\n& (t,t_0,p)       &       \\psi(t,t_0,p) = \\psi^{(t_0,p)}(t)  \\\\\n& t & \\\\\n};\n  \\path[|->]\n (m-2-2) edge node [left] {$\\widetilde{V}$} (m-1-2)\n         edge node [auto] {$V$} (m-1-3)\n          edge node [above] {$$} (m-2-3)\n  (m-3-2) edge node [auto] {$$} (m-2-2)\n          edge node [auto] {$$} (m-2-3)\n          edge node [below left] {$\\psi$} (m-3-3)\n          edge node [auto] {$$} (m-4-2)\n          edge node [auto] {$\\widetilde{\\theta}$} (m-2-1)\n  (m-4-2) edge node [below] {$\\psi^{(t_0,p)}$} (m-3-3)\n  (m-2-3) edge node [right] {$\\psi_{tt_0}$} (m-3-3)\n          edge node [right] {$X$} (m-1-3)\n  (m-1-3) edge node [above] {$+ \\left. \\frac{\\partial}{\\partial s} \\right|_{t_0}$} (m-1-2)\n  (m-2-1) edge node [above] {$t=t_0$} (m-2-2)\n          edge node [auto] {$\\left. \\frac{ \\partial }{ \\partial t} \\right|_{t=t_0}$} (m-1-2)\n; \n\\end{tikzpicture}\n\nwith \\[\n\\begin{gathered}\n\\left. \\frac{ \\partial }{ \\partial t}\\right|_{t=t_0} \\widetilde{\\theta}( t,(t_0,p)) = \\left. \\left( \\frac{ \\partial \\alpha }{\\partial t}(t,(t_0,p)), \\frac{\\partial \\beta}{ \\partial t}(t,(t_0,p))  \\right)\\right|_{t=t_0} = (1, V(t_0,p))\n\\end{gathered}\n\\]\n\nEY : 20150725 I don't like Lee's choice of notation.  
Let me rewrite the above diagrams:\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3.2em, column sep=4.8em, minimum width=3.2em]\n  {\n& \\mathfrak{X}(\\mathbb{R}\\times M) = T\\mathbb{R}\\oplus TM     & TM   \\\\\nJ \\times M & \\mathbb{R} \\times M &  M\\\\\n& \\mathcal{E} \\subseteq J \\times J \\times M         &       M \\\\\n& \\mathcal{E}^{(t,p)}                            & \\\\\n};\n  \\path[->]\n  (m-2-2) edge node [left] {$\\widetilde{V}$} (m-1-2)\n         edge node [auto] {$V$} (m-1-3)\n          edge node [above] {$$} (m-2-3)\n  (m-3-2) edge node [auto] {$$} (m-2-2)\n          edge node [auto] {$$} (m-2-3)\n          edge node [below left] {$\\psi$} (m-3-3)\n          edge node [auto] {$$} (m-4-2)\n          edge node [auto] {$\\widetilde{\\theta}$} (m-2-1)\n  (m-4-2) edge node [below] {$\\psi^{(t,p)}$} (m-3-3)\n  (m-2-3) edge node [right] {$\\psi_{st}$} (m-3-3)\n          edge node [right] {$V_t$} (m-1-3)\n  (m-1-3) edge node [above] {$\\oplus T\\mathbb{R}$} (m-1-2)\n  (m-2-1) edge node [above] {$s=0$} (m-2-2)\n          edge node [auto] {$\\left. \\frac{ \\partial }{ \\partial s} \\right|_{s=0}$} (m-1-2)\n;\n\\end{tikzpicture} \\quad \\quad \\quad \\, \n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=3.2em, column sep=4.8em, minimum width=3.2em]\n  {\n& \\left. \\frac{ \\partial }{ \\partial s} \\right|_{s=0} \\widetilde{\\theta} = \\widetilde{V}_{(t,p)} = \\left( \\left. \\frac{ \\partial }{ \\partial t} \\right|_{s=0}, V(t,p) \\right) & V(t,p) = V_t(p)  \\\\\n\\widetilde{\\theta}(s,(t,p)) = (\\alpha(s,(t,p)), \\beta(s,(t,p)))  & (t,p)        & p  \\\\\n& (s,t,p)       &       \\psi(s,t,p) = \\psi^{(t,p)}(s)  \\\\\n& s & \\\\\n};\n  \\path[|->]\n (m-2-2) edge node [left] {$\\widetilde{V}$} (m-1-2)\n         edge node [auto] {$V$} (m-1-3)\n          edge node [above] {$$} (m-2-3)\n  (m-3-2) edge node [auto] {$$} (m-2-2)\n          edge node [auto] {$$} (m-2-3)\n          edge node [below left] {$\\psi$} (m-3-3)\n          edge node [auto] {$$} (m-4-2)\n          edge node [auto] {$\\widetilde{\\theta}$} (m-2-1)\n  (m-4-2) edge node [below] {$\\psi^{(t,p)}$} (m-3-3)\n  (m-2-3) edge node [right] {$\\psi_{st}$} (m-3-3)\n          edge node [right] {$V_t$} (m-1-3)\n  (m-1-3) edge node [above] {$+ \\left. \\frac{\\partial}{\\partial t} \\right|_{s=0}$} (m-1-2)\n  (m-2-1) edge node [above] {$s=0$} (m-2-2)\n          edge node [auto] {$\\left. \\frac{ \\partial }{ \\partial s} \\right|_{s=0}$} (m-1-2)\n; \n\\end{tikzpicture}\n\nwith \\[\n\\begin{gathered}\n\\left. \\frac{ \\partial }{ \\partial s}\\right|_{s=0} \\widetilde{\\theta}( s,(t,p)) = \\left. \\left( \\frac{ \\partial \\alpha }{\\partial s}(s,(t,p)), \\frac{\\partial \\beta}{ \\partial s}(s,(t,p))  \\right)\\right|_{s=0} = (1, V(t,p))\n\\end{gathered}\n\\]\n\n\n\\begin{theorem}[9.48] \\textbf{(Fundamental Theorem on Time-Dependent Flows)} \\\\\nLet $M$ smooth manifold \\\\\nopen $J \\subseteq \\mathbb{R}$ \\\\\n$V: J \\times M \\to TM$ smooth time-dependent vector field on $M$ \\\\\n$\\exists \\, $ open $\\mathcal{E} \\subseteq J\\times J \\times M$, smooth $\\psi : \\mathcal{E} \\to M$ called time-dependent flow of $V$ s.t. \n\n\\begin{enumerate}\n\\item[(a)] $\\forall \\, t_0 \\in J$, \\, $\\forall \\, p \\in M$, \\\\\nopen $\\mathcal{E}^{(t_0,p)} = \\lbrace t \\in J | (t,t_0, p) \\in \\mathcal{E} \\rbrace$ s.t. 
$t_0 \\in \\mathcal{E}^{(t_0, p)}$ \\\\\nsmooth curve $\\begin{aligned}\n  & \\quad \\\\ \n  & \\psi^{(t_0,p)}: \\mathcal{E}^{(t_0,p)} \\to M \\\\\n  & \\psi^{(t_0,p)}(t) = \\psi(t,t_0,p) \\end{aligned}$ \\\\\nis unique maximal integral curve of $V$ with $\\psi^{(t_0,p)}(t_0) = p$\n\\item[(b)] If $ \\begin{aligned}\n  & \\quad \\\\ \n  & t_1 \\in \\mathcal{E}^{(t_0,p)} \\\\\n  & q = \\psi^{(t_0,p)}(t_1) \\end{aligned}$\n\nthen $\\begin{aligned}\n  & \\quad \\\\\n  & \\mathcal{E}^{(t_1,q)} = \\mathcal{E}^{(t_0,p)} \\text{ and } \\\\\n  & \\psi^{(t_1,q)} = \\psi^{(t_0,p)} \\end{aligned}$\n\\item[(c)] $\\forall \\, (t_1,t_0) \\in J \\times J $ \\\\\n$M_{t_1,t_0} = \\lbrace p \\in M | (t_1,t_0, p) \\in \\mathcal{E} \\rbrace$ open in $M$ and \\\\\n  $\\begin{aligned}\n  & \\quad \\\\\n  & \\psi_{t_1t_0} : M_{t_1t_0} \\to M \\\\\n  & \\psi_{t_1t_0}(p) = \\psi(t_1,t_0,p) \\end{aligned}$\nis a diffeomorphism from $M_{t_1t_0}$ onto $M_{t_0t_1}$ with inverse $\\psi_{t_0t_1}$\n\\item[(d)] If $p \\in M_{t_1t_0} $ and $\\psi_{t_1t_0}(p) \\in M_{t_2t_1}$, \\\\\nthen $p\\in M_{t_2 t_0}$ and \n\\begin{equation}\n\\psi_{t_2t_1} \\psi_{t_1t_0}(p) = \\psi_{t_2t_0}(p) \\quad \\quad \\quad \\, (9.18)\n\\end{equation}\n\\end{enumerate}\n\n\\end{theorem}\n\n\\begin{proof}\nConsider smooth vector field $\\begin{aligned} \n& \\quad  \\\\\n  & \\widetilde{V} \\in \\mathfrak{X}(J \\times M ) \\text{ defined by } \\\\\n  & \\widetilde{V}_{(s,p)} = \\left( \\left. \\frac{ \\partial }{ \\partial s } \\right|_s, V(s,p) \\right) \\end{aligned}$ \\\\\n\nidentify $T_{(s,p)}(J \\times M)$ with $T_sJ \\oplus T_pM$ (Prop. 3.14)\n\nLet $\\begin{aligned}\n& \\quad \\\\ \n  & \\widetilde{\\theta} : \\widetilde{\\mathcal{D}} \\to J \\times M \\\\\n  & \\widetilde{\\theta}(t,(s,p)) = (\\alpha(t,(s,p)) , \\beta(t,(s,p)) ) \\end{aligned}$ \\quad \\, flow of $\\widetilde{V}$\n\nthen $\\begin{aligned} \n & \\quad \\\\\n  & \\alpha: \\widetilde{\\mathcal{D}} \\to J \\\\\n  & \\beta: \\widetilde{\\mathcal{D}} \\to M \\end{aligned}$ s.t. \n\n\\[\n\\begin{aligned}\n  & \\frac{ \\partial \\alpha}{ \\partial t}(t,(s,p) ) = 1 \\quad \\quad \\, & \\alpha(0,(s,p)) = s \\\\ \n  & \\frac{ \\partial \\beta}{ \\partial t}(t,(s,p)) = V(\\alpha(t,(s,p)), \\beta(t,(s,p))) \\quad \\quad \\, & \\beta(0,(s,p)) = p \n\\end{aligned}\n\\]\n$\\Longrightarrow \\alpha(t,(s,p)) = t+s$ so \n\\begin{equation}\n  \\frac{ \\partial \\beta}{ \\partial t}(t,(s,p)) = V(t+s,\\beta(t,(s,p))) \\quad \\quad \\quad \\, (9.19)\n\\end{equation}\n\nLet $\\begin{aligned}\n  & \\quad \\\\\n  & \\mathcal{E} \\subseteq \\mathbb{R} \\times J \\times M \\text{ defined } \\\\\n  & \\mathcal{E} = \\lbrace (t,t_0, p) | (t-t_0, (t_0,p) ) \\in \\widetilde{\\mathcal{D}} \\rbrace \\end{aligned}$\n\n$\\mathcal{E}$ open because $\\widetilde{\\mathcal{D}}$ is. \\\\\nsince $\\alpha: \\widetilde{\\mathcal{D}} \\to J$, if $(t,t_0, p) \\in \\mathcal{E}$, then $t= \\alpha(t-t_0,(t_0,p)) \\in J$, implies $\\mathcal{E} \\subseteq J \\times J \\times M$ \\\\\n$\\mathcal{E}$ open so $M_{t_1t_0} = \\lbrace p \\in M | (t_1,t_0, p) \\in \\mathcal{E} \\rbrace$ open. \n\ndefine $\\begin{aligned} & \\quad \\\\ \n  & \\psi : \\mathcal{E} \\to M \\\\\n  & \\psi(t,t_0,p) = \\beta(t-t_0,(t_0,p))\n\\end{aligned}$\n\\end{proof}\n\nEY : 20150725 Remark: Out of the proof immediately above, there are a number of takeaways that really \\emph{should} be mentioned.  
\n\nLet's collect the facts:\n\\[\n\\begin{aligned}\n  & \\widetilde{\\theta}(s,(t,p)) = (\\alpha(s,(t,p)), \\beta(s,(t,p)) ) := \\widetilde{\\theta}^{(t,p)}(s) \\\\ \n  & \\widetilde{\\theta}(0,(t,p)) = (\\alpha(0,(t,p)), \\beta(0,(t,p)) ) = (t,p) \\\\ \n  & \\frac{ \\partial \\alpha}{ \\partial s}(s,(t,p)) = 1 \\\\\n  & \\frac{ \\partial \\beta }{ \\partial s}(s,(t,p)) = V(\\alpha(s,(t,p)), \\beta(s,(t,p))) \\text{ so } \\\\ \n  & \\alpha(s,(t,p)) = s+t \\\\ \n  & \\frac{ \\partial \\beta}{ \\partial s}(s,(t,p)) = V(s+t,\\beta(s,(t,p)) ) \\\\\n  & \\left. \\frac{ \\partial }{ \\partial s} \\right|_{s=0} \\widetilde{\\theta}(s,(t,p)) = \\left. \\left( \\frac{ \\partial \\alpha }{ \\partial s}(s,(t,p)) , \\frac{ \\partial \\beta}{ \\partial s}(s,(t,p)) \\right) \\right|_{s=0} = (1,V(t,p)) = \\frac{d\\widetilde{\\theta}^{(t,p)}}{ds}(s=0) := \\widetilde{V}_{(t,p)}\n\\end{aligned}\n\\]\n\nAlso, we can write the flow $\\widetilde{\\theta}_s$ as\n\\[\n\\widetilde{\\theta}^{(t,p)}(s) = \\widetilde{\\theta}(s,(t,p)) = \\widetilde{\\theta}_s(t,p) = (\\alpha(s,(t,p)),\\beta(s,(t,p))) = (s+t, \\beta(s,(t,p)))\n\\]\n\nNow consider the Lie derivative:\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=6.8em, minimum width=2em]\n  {\n\\mathfrak{X}(\\mathbb{R} \\times M) & \\mathfrak{X}(\\mathbb{R} \\times M)  \\\\\n\\mathfrak{X}(\\mathbb{R} \\times M) &   \\\\\nJ\\times M    &       J\\times M \\\\\n};\n  \\path[->]\n  (m-3-1) edge node [above] {$$} (m-2-1)\n  edge node [above] {$\\widetilde{\\theta}_s$} (m-3-2)\n  (m-3-2) edge node [auto] {$$} (m-1-2)\n  edge [bend left] node [auto] {$\\widetilde{\\theta}_{-s}$} (m-3-1)\n  (m-1-2) edge node [above] {$d(\\widetilde{\\theta}_{-s})=(\\widetilde{\\theta}_{-s})_*$} (m-1-1)\n;\n\\end{tikzpicture} \\quad \\quad \\quad \\, \\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=6.8em, minimum width=2em]\n  {\nd(\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)}(W_{\\widetilde{\\theta}_s(t,p)}) & W_{\\widetilde{\\theta}_s(t,p)}  \\\\\nW_{(t,p)}   &    \\\\\n(t,p)    &       \\widetilde{\\theta}_s(t,p) \\\\\n};\n  \\path[|->]\n  (m-3-1) edge node [above] {$$} (m-2-1)\n  edge node [above] {$\\widetilde{\\theta}_s$} (m-3-2)\n  (m-3-2) edge node [auto] {$$} (m-1-2)\n  edge [bend left] node [auto] {$\\widetilde{\\theta}_{-s}$} (m-3-1)\n  (m-1-2) edge node [above] {$d(\\widetilde{\\theta}_{-s})=(\\widetilde{\\theta}_{-s})_*$} (m-1-1)\n;\n\\end{tikzpicture}\n\nwith $\\widetilde{\\theta}$ being the flow of $\\widetilde{V}$.  Let's define the Lie derivative:\n\n\\begin{equation}\n\\begin{gathered}\n  (\\mathcal{L}_{\\widetilde{V}}W)_{(t,p)} = \\left. \\frac{d}{ds} \\right|_{s=0}(d\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)}(W_{\\widetilde{\\theta}_s(t,p)})   = \\lim_{s\\to 0} \\frac{ (d\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)}( W_{\\widetilde{\\theta}_s(t,p)}) - W_{(t,p)} }{s}\n\\end{gathered}\n\\end{equation}\n\n\nUse Case 1 of the proof of Lee's Theorem 9.38, for showing $\\mathcal{L}_VW = [V,W]$.  \\\\\nLet open neighborhood $U \\subseteq J \\times M$, with $(t,p) \\in U$.  On open $U$, choose smooth coordinates $(t,u^i)$ on $U$.  By Theorem 9.22, at a regular point $p\\in M$, $\\exists \\, (u^i)$ coordinates s.t. 
$V_p = \\frac{ \\partial }{ \\partial u^1}$, then consider \n\n\\[\n\\widetilde{V} = \\frac{ \\partial }{ \\partial t} + \\frac{ \\partial }{ \\partial u^1} \\in \\mathfrak{X}(\\mathbb{R} \\times M)\n\\]\nwith $V(t) = \\frac{ \\partial }{ \\partial u^1} \\in \\mathfrak{X}(M)$.  (Remember, $V(t)$ is a vector-field that is time-dependent, but is on $M$.  I will use this as a justification for using Thm. 9.22).  \n\nNow the flow $\\widetilde{\\theta}_s$ takes on these forms:\n\\[\n\\begin{gathered}\n  \\widetilde{\\theta}^{(t,p)}(s) = \\widetilde{\\theta}(s,(t,p)) = \\widetilde{\\theta}_s(t,p) = \\\\\n  = (\\alpha(s,(t,p)) , \\beta(s,(t,p))) = (s+t, \\beta(s,(t,p)) )\n\\end{gathered}\n\\]\nGiven these conditions, namely \\\\\n$\\beta(0,(t,p)) = p = (u^1,u^2, \\dots u^n)$ and \n\\[\n\\left. \\frac{ \\partial \\beta}{ \\partial s}(s, (t,p)) \\right|_{s=0} = V(t,p) = \\frac{ \\partial }{ \\partial u^1} = \\left. \\frac{d}{ds} \\beta^{(t,p)}(s) \\right|_{s=0}\n\\]\nthen a $\\beta$ that satisfies these conditions above is \n\\[\n\\beta(s,(t,p)) = \\beta_s(t,p) = (u^1 + s, u^2 \\dots u^n)\n\\]\nso that we can conclude that \n\\[\n\\widetilde{\\theta}_s(t,p) = (t+s, u^1 + s, u^2 , \\dots , u^n)\n\\]\n\nFor fixed $s$, then\n\\[\nd(\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)} =1_{T_{\\widetilde{\\theta}_s(t,p)}(\\mathbb{R}\\times M)}\n\\]\nso that \n\\[\n\\begin{gathered}\n  d(\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)}(W_{\\widetilde{\\theta}_s(t,p)}) = d(\\widetilde{\\theta}_{-s})_{\\widetilde{\\theta}_s(t,p)} \\cdot W^j(t+s,u^1 +s, u^2 \\dots u^n) \\left. \\frac{ \\partial }{ \\partial u^j} \\right|_{\\widetilde{\\theta}_s(t,p)} = W^j(t+s,u^1+s, u^2 \\dots u^n) \\left. \\frac{ \\partial }{ \\partial u^j} \\right|_{(t,p)} \\\\\n\\Longrightarrow \\left. \\frac{d}{ds} \\right|_{s=0} W^j(t+s,u^1+s, u^2 \\dots u^n) \\left. \\frac{ \\partial }{ \\partial u^j} \\right|_{(t,p)} = \\left( \\frac{\\partial }{ \\partial t} W^j(t,u^1 \\dots u^n) + \\frac{ \\partial }{ \\partial u^1 } W^j(t,u^1 \\dots u^n) \\right) \\left. \\frac{ \\partial}{ \\partial u^j} \\right|_{(t,p)}\n\\end{gathered}\n\\]\n\n\nThus, we can conclude that \n\\begin{equation}\n  \\boxed{ \\mathcal{L}_{\\widetilde{V}}W = \\mathcal{L}_{ \\frac{ \\partial }{ \\partial t} +V}W = \\left( \\mathcal{L}_{ \\frac{ \\partial}{ \\partial t} + V } W \\right)_{(t,p)} = \\left( \\left( \\frac{ \\partial }{ \\partial t} + V \\right) W^j \\right) \\left. \\frac{ \\partial }{ \\partial u^j} \\right|_{(t,p)} }\n\\end{equation}\n\n\\subsection*{First-Order Partial Differential Equations }\n\n\n\n\\subsection*{ Problems }\n\n\n\\problemhead{9-21} Note that from wikipedia, \n\nambient isotopy \\\\\nLet $N, M$ manifolds, \\\\\n\\quad $g,h$ embeddings of $N$ in $M$ \n\ncont. map $F: M \\times [0,1] \\to M$ s.t. $F$ carries $g$ to $h$: \\\\\nif $F_0 = 1_M$ \\\\\n\\phantom{ if } $F_t$ homeomorphism, $F_t: M \\to M$ \\\\\n\\phantom{ if } $F_1 \\circ g = h$ \\\\\n\n\\textbf{smooth isotopy} of $M$ is smooth $H: M \\times J \\to M$, \\, $J \\subseteq \\mathbb{R}$ interval s.t. \\\\\n\\quad \\, $\\forall \\, t \\in J$, $\\begin{aligned} & \\quad \\\\\n  & H_t : M \\to M \\\\\n  & H_t(p) = H(p,t) \\end{aligned}$ is a diffeomorphism.  
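\n\nConcrete example (added for intuition, not from the problem statement): on $M = \\mathbb{R}^n$, $H(p,t) = p + tv$ for fixed $v \\in \\mathbb{R}^n$ is a smooth isotopy: each $H_t$ is a translation, hence a diffeomorphism, and\n\\[\n\\psi(t,t_0,p) = H_t \\circ H_{t_0}^{-1}(p) = p + (t - t_0)v , \\quad \\quad \\, \\frac{ \\partial H}{ \\partial t}(H_t^{-1}(x),t) = v\n\\]\nso this isotopy is generated by the (here constant) time-dependent vector field $V(t,x) = v$. 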
\n\nSuppose open interval $J \\subseteq \\mathbb{R}$ \\\\\n\\phantom{Suppose } smooth isotopy $H:M \\times J \\to M$ \\\\\n\nEY : 20140206 \n\nBy definition \\\\\n$\\forall \\, t, \\, \\begin{aligned} & \\quad \\\\\n  & H_t : M \\to M \\\\\n  & H_t(p) = H(p,t) \\end{aligned}$\n\nThen $DH_t = (H_t)_*$ \\\\\n\\[\nDH_t : TM \\to TM\n\\]\n\nI tried, \n\nfor some time $t$, consider $H(x^i(p),t) = y^j(x^i,t)$ \n\\[\n\\frac{ \\partial H(p,t)}{ \\partial t } = \\frac{ \\partial y^j(x^i,t) }{ \\partial t }\n\\]\n\nConsider integral curve $x = x(t)$ s.t. $\\dot{x} = X(t)$ \n\n\\[\n\\begin{gathered}\n  \\dot{y} = \\frac{dy}{dt} = \\frac{d}{dt} y(x(t),t) = \\frac{ \\partial y^j}{ \\partial x^i }\\dot{x}^i + \\frac{ \\partial y^j}{ \\partial t} = (DH_t)X + \\frac{ \\partial H}{ \\partial t} \\\\\n\\begin{aligned}\n  & (H_t)_* : TM \\to TM \\\\ \n  & DH_t : X \\mapsto (DH_t)X = Y\n\\end{aligned}\n\\end{gathered}\n\\]\n\n\n\n$\\psi(t,t_0,p) = H_t \\circ H_{t_0}^{-1}(p)$ domain $J \\times J \\times M$ ??\n\n", "meta": {"hexsha": "fc15b1e4be824bc4f93173056c3ca12147dd9cb6", "size": 38046, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LeeJM/09IntegralCurvesandFlows.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/09IntegralCurvesandFlows.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/09IntegralCurvesandFlows.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 37.8944223108, "max_line_length": 452, "alphanum_fraction": 0.5579561583, "num_tokens": 16119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208003, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.5567234137688352}}
{"text": "\\documentclass[12pt]{article}\r\n\\usepackage{amsfonts}\r\n\\usepackage{amssymb,amsmath}\r\n\\usepackage{ifthen}\r\n\\newboolean{flagoptimfirst}\r\n\r\n\\newenvironment{optim}[2]{\r\n\\setboolean{flagoptimfirst}{true}\r\n\\begin{aligned}\r\n& \\underset{#1}{\\text{minimize}} & & #2 \\\\\r\n& \\text{subject to} & & }\r\n{\\end{aligned}}\r\n\r\n\\newcommand{\\sjt}{\r\n \\ifthenelse{\\boolean{flagoptimfirst}}{\r\n }{\r\n \\\\ & & & \r\n }\r\n\\setboolean{flagoptimfirst}{false}\r\n}\r\n\\begin{document}\r\n\r\n\\section*{Example of MHE in casadi for a mass-spring system}\r\n\r\nIn the MHE example, we consider a simple mass-spring system. Measurements of the position of the mass are available. We would like to solve the following MHE-problem:\r\n\\begin{equation*}\r\n\\begin{optim}{\\textbf{x}(\\cdot),\\textbf{w}(\\cdot)}{ \\int_{\\tau=t-T}^t \\parallel \\textbf{w}(\\tau) \\parallel^2_{Q_\\mathrm{c}} + \\parallel \\textbf{y}(\\tau) - h(\\textbf{x}) \\parallel^2_{R_\\mathrm{c}}d\\tau + \\parallel x(t-T) - \\hat{x}(t-T)  \\parallel^2_{S}}\r\n\\sjt \\dot{\\textbf{x}} = f(\\textbf{x},\\textbf{u},\\textbf{w},\\tau), \\hspace{7mm} \\tau \\in [t-T,t],\r\n\\end{optim}\r\n\\end{equation*}\r\nwhere:\r\n\\begin{itemize}\r\n \\item $\\vec{x}$ is the system state,\r\n \\item $\\vec{y}$ are the measurements,\r\n \\item $\\vec{u}$ is the control input,\r\n \\item $\\vec{w}$ is a noise term introduced to model the model uncertainty (i.e.~the process noise),\r\n \\item $t$ is the current time,\r\n \\item $\\hat{x}(t-T)$ is an estimate for the state at the beginning of the horizon\r\n \\item $Q_\\mathrm{c}$ and $R_\\mathrm{c}$ $S$ are the weighting matrices for the model and measurement noise and for the arrival cost respectively,\r\n \\item $T$ is the considered time horizon,\r\n \\item $f$ is a model of the considered system, and\r\n \\item $h$ is the measurement function that maps the system state to the measurements. \r\n\\end{itemize}\r\n\r\nIn the example, this example is discretized using multiple shooting, which results in the following problem:\r\n\\begin{equation*}\r\n\\begin{optim}{\\textbf{x}_k,\\textbf{w}_k}{ \\sum_{k=i-N}^{i-1} \\parallel \\textbf{w}_k \\parallel^2_{Q_{\\mathrm{d}_k}} + \\sum_{k=i-N}^i \\parallel \\textbf{y}_k - h(\\textbf{x}_k) \\parallel^2_{R_{\\mathrm{d}_k}} +  \\parallel x_{i-N} - \\hat{x}_{i-N}  \\parallel^2_{S}}\r\n\\sjt \\textbf{x}_{k+1} = \\phi(\\textbf{x}_k,\\textbf{u}_k,\\textbf{w}_k), \\; k = i-N,\\ldots,i-1\r\n\\end{optim}\r\n\\end{equation*}\r\nwhere\r\n\\begin{itemize}\r\n \\item $\\phi$ integrates our model $f$, given the state $\\vec{x}_k$, control input $\\vec{u}_k$ and process noise $\\vec{w}_k$, over one sampling interval, \r\n \\item N is the number of measurements in the considered time horizon, \r\n \\item $i$ is the current sampling index, and\r\n \\item $Q_\\mathrm{d}$ and $R_\\mathrm{d}$ are the discretized versions of covariance matrices $Q_\\mathrm{c}$ and $R_\\mathrm{c}$, which might vary over the time horizon.\r\n\\end{itemize}\r\n\r\nThis is the setup of the MHE that is implemented in this example. Note that in this example, Structures are used. 
If you are unfamiliar with this, a tutorial can be found here: http://docs.casadi.org/tutorials/tools/structure.pdf\r\nThe arrival cost is updated using the smoothed EKF update.\r\n\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "4b61d79b47454f3dc383a6ef3b6258dd0760fd3d", "size": 3103, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "crane_controllers/external/casadi-3.4.5/docs/documents/mhe_spring_damper.tex", "max_stars_repo_name": "tingelst/crane", "max_stars_repo_head_hexsha": "e14bca2bd4e2397dce09180029223832aad9b070", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-22T08:50:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-18T03:04:18.000Z", "max_issues_repo_path": "crane_controllers/external/casadi-3.4.5/docs/documents/mhe_spring_damper.tex", "max_issues_repo_name": "tingelst/crane", "max_issues_repo_head_hexsha": "e14bca2bd4e2397dce09180029223832aad9b070", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "crane_controllers/external/casadi-3.4.5/docs/documents/mhe_spring_damper.tex", "max_forks_repo_name": "tingelst/crane", "max_forks_repo_head_hexsha": "e14bca2bd4e2397dce09180029223832aad9b070", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-14T04:28:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T05:29:01.000Z", "avg_line_length": 48.484375, "max_line_length": 259, "alphanum_fraction": 0.691588785, "num_tokens": 986, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461390043208004, "lm_q2_score": 0.7461389873857265, "lm_q1q2_score": 0.5567234011329162}}
{"text": "\n\\subsection{Distributions}\\label{distributions}\n\nLet's imagine we will measure how long it takes to get from Brooklyn\nCollege to Times square. Google maps says this takes about 54 minutes.\nBut, we all know that is an estimate that sometimes be off. Any given\ntrip could be shorter or longer. As a result, if we measured how long\nseveral trips take for different people, we will find different times.\nSo, the population of travel times has variability. We can easily\ndescribe these travel times with distributions. For example, consider\nthe two distributions below.\n\n\\includegraphics{Ttest_files/figure-latex/unnamed-chunk-1-1}\n\nBoth distributions have peaks around 54 minutes, which is the average\ntravel time between Brooklyn College and Times Square by subway. And,\nboth distributions have some variability. Some travel times are shorter\nand some are longer than 54 minutes. The narrow distribution has less\nvariability than the wide distribution. For example, the narrow\ndistribution has a standard deviation of 2 minutes, and the wide\ndistribution has a standard deviation of 20 minutes.\n\nWhat does the variability mean for your travel time? If there is less\nvariability, then more of your trips will be close to the mean of 54\nminutes. And, when the trip is shorter or longer than 54 minutes, it\nwon't be too much shorter or longer, only a few minutes give or take.\nNotice, that certain travel times pretty much never happen in the narrow\ndistribution. For example, it never takes 20 or a 100 minutes. When\nthere is more variability, then more of your trips will be slower or\nfaster than 54 minutes. For example, although the trips will average out\nto 54 minutes, many trips will be much shorter, and much longer than 54\nminutes. For example, you could expect a trip of 75 minutes to happen\nfairly often. But, even when the distribution is wide, some very short\nor long trips still do not happen very often. For example, a trip of 300\nminutes never happens according to the wide distribution.\n\nRandomly sampling a number from a distribution is a lot like taking your\nchances on the subway. You might get to your destination in the average\ntime, or you could have bad luck and get on the train when there are a\nlot of delays. We have a feeling for what the subway can do, it can\nsometimes be fast and sometimes be slow. Similarly, by looking at a\ndistribution, we can get a feeling for what chance can do to the\nmeasurement.\n\nWhenever we take a measurement, we can think of it as taking a random\nsample from a distribution. The distribution shows us that there are\ndifferent probabilities of getting smaller or larger numbers. The mean\nis the most probable number, and in the distributions we are looking at,\nas the numbers get smaller or larger, they also get less and less\nlikely. So, just by looking at the distribution, we can get a feeling\nfor what chance can do. For example, random sampling from the narrow\ndistribution will usually give numbers around 54, plus or minus 2 or\n4ish. And, random sampling from the wide disribution will usually give\nus numbers around 54, plus or minus 20-40ish.\n\n\\subsection{Differences can arise by chance because of\nsampling}\\label{differences-can-arise-by-chance-because-of-sampling}\n\nLet's say you and your friend each take 10 subway trips between Brooklyn\nCollege and Times Square, and each time you use your cell phone to\nrecord how long each trip takes. 
This is the same as taking two samples\nof 10 scores from the travel time distribution. What happens when we do\nthis? Will you and your friend have identical scores? Probably not. Each\ntime, different random factors will cause each of the trips to take\ndifferent amounts of time. We can plot the outcome of these hypothetical\ntrips below in a histogram.\n\n\\includegraphics{Ttest_files/figure-latex/unnamed-chunk-2-1}\n\nThe histogram shows that in each sample, different trips took different\namounts of time. These samples were created by randomly picking numbers\nfrom a normal distribution with mean = 54, and standard deviation = 20.\nSo, we might expect that both of our samples will also have a mean of\n54. But, as you can see, this is not true. The black lines on each of the\nhistograms show the mean travel times, and it is clear they are not\nexactly the same.\n\\emph{The difference between these two sample means was produced by random chance.}\n\n\\subsection{What kind of differences can chance\nproduce?}\\label{what-kind-of-differences-can-chance-produce}\n\nLet's first look at the kind of differences that random sampling can\nproduce in our subway example. Imagine that 20 people each took 10\ntrips between Brooklyn College and Times Square, and all of them\nrecorded their travel times. The data might look like this:\n\n\\includegraphics{Ttest_files/figure-latex/unnamed-chunk-3-1}\n\nIt is easy to see that each person had different sets of travel times, and\nthat the means (black bars) are also moving around. All of the means are\nclose-ish to 54 minutes (which is the true mean), but some means are\nsmaller and some are larger. These sample means are very important, and they\npoint to another distribution, the sampling distribution of the mean.\n\nThe sampling distribution of the mean is a hypothetical idea. Imagine if\ninstead of 20 people taking 10 trips, an infinite number of people each\ntook 10 trips, and then recorded their travel times. Each of these\nsamples would have its own mean. What does this distribution look like?\nWe can use a computer to simulate this distribution below:\n\n\\includegraphics{Ttest_files/figure-latex/unnamed-chunk-4-1}\n\nRemember each of the black lines in the sample histograms that represent\nthe sample means? The above histogram shows means from 10,000 of those\nblack lines (imagining we had 10,000 people each take 10 trips).\n\nWe see that the distribution is centered on 54, which is the true mean\nof the population. We also see that some means get as small as around\n35, and as large as 75. However, sample means hardly ever get smaller\nthan 30, or larger than 80.\n\nThis graph is our window into the things that chance can do, and the\ndifferences that random sampling can produce just by taking measurements\nthat have variability. What is most important is that there are clearly\nhard limits on what chance can do in this situation. We already said\nthat chance alone hardly ever produces a mean larger than 75. We can use\nthis kind of information when we observe means that occur outside of our\nchance window. For example, if one person had a sample mean of 5 minutes\nfor taking 10 trips, what can we infer? Well, we can say that chance has\nan infinitesimally small probability of producing this sample mean. For\nthis reason, we can also confidently rule out chance as an explanation.\nMy guess is that person obviously DID NOT TAKE THE SUBWAY. 
Perhaps they\nflew in a helicopter.\n\nIt is easy to rule out chance when the measurement produces a sample mean\nthat is well outside the chance window (like 5 minutes). It gets harder\nto confidently rule out chance when the sample mean is inside the chance\nwindow, but it can still be done. Researchers set their own criteria\nabout this issue (e.g., alpha value). For example, if you found a sample\nmean of 70, what would you conclude? The histogram shows this sample\nmean occurs with a very low frequency, which means it can occur by\nchance. But, the chances are very low, less than 1\\%. So, if you are\nwilling to accept those chances of being wrong, you might infer that a\nsample mean of 70 was not produced by chance, but perhaps produced by\nlong delays on the subway.\n\n\\subsection{Chance can produce differences between conditions in an\nexperiment}\\label{chance-can-produce-differences-between-conditions-in-an-experiment}\n\nThe reason we are spending so much time on understanding chance is that\nchance can produce differences between conditions in an experiment. This\noccurs for the same reason that chance can produce different sample\nmeans by random sampling alone. Remember, in a simple experiment, we are\ntaking samples of the dependent variable in two conditions. We want to\nknow if there was a difference in the measure between conditions, so we\noften look at the difference in sample means between the conditions.\nAnd, as we have learned, those sample means can be different just\nbecause of random chance.\n\nFortunately, we can use methods called inferential statistics to\nestimate the kinds of differences that chance can produce. Then, we can\nestimate the likelihood that the differences we observe were produced by\nchance. When we find differences that are \\emph{likely not} produced by\nchance, we can be more confident that our observed differences are real,\nand not random.\n\n\\section{2 level designs and t-tests}\\label{level-designs-and-t-tests}\n\nThere are multiple ways to estimate whether chance is responsible for a\ndifference in an experiment. By far the most common approach is to use a\nt-test. The t-test is a statistical method for analyzing the data in two\nconditions to determine the likelihood that any observed difference\ncould have been produced by chance. You can refer to the inferential\nstatistics chapter, your old notes from statistics, discussions of\nt-tests in the lab manual, and google t-tests to learn more about how\nthey work. For now, we will briefly describe the three different kinds\nof t-tests, and give an example of how they are used to analyze data,\nand how the results from a t-test are reported in a journal article.\n\nThe three most common versions of the t-test are: one-sample t-test,\nindependent samples t-test, and the paired samples t-test. The one\nsample t-test is used to test whether a sample mean could have come from\na particular population. The independent samples t-test is used in\nbetween-subjects designs, to test whether the sample mean in one\ncondition is different from the sample mean in another condition. The\npaired samples t-test is used in within-subjects designs, to test\nwhether the sample mean in one condition is different from the sample\nmean in the other condition.\n\nAll t-tests give the same basic information, a t-value, and a p-value.\nSimply, the p-value gives the probability that the observed difference\nbetween means could have been produced by chance alone. 
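\n\nAs a quick illustration (a sketch added here, not part of the original chapter), this is how an independent samples t-test could be computed in Python with scipy, using the study/no-study scores from the example at the end of this section:\n\n\\begin{verbatim}\nfrom scipy import stats\n\n# midterm scores from the studying experiment below\nstudy   = [75, 82, 82, 86, 71, 78, 85, 77, 72, 74]\nnostudy = [75, 79, 85, 75, 79, 73, 76, 75, 78, 67]\n\n# independent samples t-test (between-subjects design)\nt, p = stats.ttest_ind(study, nostudy)\nprint('t =', round(t, 3), 'p =', round(p, 3))\n# a small p (e.g., < .05) suggests the difference between the\n# condition means is unlikely to be produced by chance alone\n\\end{verbatim}\n\n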
If we dive into\nthe details, we will see that the p-value estimate depends on several\nassumptions being met, and also has more nuanced meanings. But for now,\nit gives us what we want, an estimate of the likelihood that chance\ncould have produced the difference we observed. When the p-value is very\nsmall (e.g., less than .05, or 5\\%), many researchers would conclude\nthat a difference is ``statistically significant'', and probably not\nproduced by chance.\n\n\\subsection{An example}\\label{an-example}\n\nImagine a between-subjects experiment on 20 students, asking whether\nstudying or not studying changes test performance on a midterm. The IV\nis studying (Studying vs.~No-Studying), and the DV is test performance\n(percentage on the midterm). Below are some imaginary results from the\nexperiment.\n\n\\begin{longtable}[]{@{}rr@{}}\n\\toprule\nstudy & nostudy\\tabularnewline\n\\midrule\n\\endhead\n75 & 75\\tabularnewline\n82 & 79\\tabularnewline\n82 & 85\\tabularnewline\n86 & 75\\tabularnewline\n71 & 79\\tabularnewline\n78 & 73\\tabularnewline\n85 & 76\\tabularnewline\n77 & 75\\tabularnewline\n72 & 78\\tabularnewline\n74 & 67\\tabularnewline\n\\bottomrule\n\\end{longtable}\n\n\n\n", "meta": {"hexsha": "8142c2fe3ec28caf861d78556a6aa961328c5163", "size": 11333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LatexVersion/ttest.tex", "max_stars_repo_name": "danBurrell/research_methods_with_R", "max_stars_repo_head_hexsha": "74745c4bd69d185f1a36ef38638be8cc55966b06", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2017-12-29T16:39:51.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-24T12:46:59.000Z", "max_issues_repo_path": "LatexVersion/ttest.tex", "max_issues_repo_name": "danBurrell/research_methods_with_R", "max_issues_repo_head_hexsha": "74745c4bd69d185f1a36ef38638be8cc55966b06", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LatexVersion/ttest.tex", "max_forks_repo_name": "danBurrell/research_methods_with_R", "max_forks_repo_head_hexsha": "74745c4bd69d185f1a36ef38638be8cc55966b06", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2017-09-01T14:17:08.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T23:17:38.000Z", "avg_line_length": 51.7488584475, "max_line_length": 85, "alphanum_fraction": 0.7983764228, "num_tokens": 2585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.7461389986757757, "lm_q1q2_score": 0.55672339270897}}
{"text": "\\section{SCC}\n\n%%%%%%%%%%%%%%%\n\\begin{frame}{SCC}\n  \\begin{exampleblock}{Kosaraju SCC algorithm, 1978 \\pno{3.4.7}}\n    \\begin{itemize}\n      \\item 1st DFS $\\Rightarrow^{?}$ BFS\n      \\item 2nd DFS $\\Rightarrow^{?}$ BFS\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    digraph = dag of SCCs\n    \\begin{itemize}\n      \\item (2nd round) Repeat: traversal from $s \\in \\text{\\textcolor{red}{sink} scc}$; delete\n      \\item (1nd round) dag: toposort $\\Rightarrow$ decreasing finish time $\\Rightarrow$ $s \\in \\text{\\textcolor{blue}{source} scc}$\n    \\end{itemize}\n  \\end{block}\n\n  \\begin{alertblock}{Remark.}\n    \\begin{itemize}\n      \\item DFS on $G^{T}$; DFS/BFS on $G$\n      \\item DFS on $G$; DFS/BFS on $G^{T}$\n    \\end{itemize}\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{One-way streets}\n  \\begin{exampleblock}{One-way streets \\pno{3.4.23}}\n    \\begin{itemize}\n      \\item $\\forall u,v: u \\leftrightsquigarrow v$\n      \\item $s: s \\leadsto v \\leadsto s$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item $G$ is an SCC\n      \\item $\\set{v \\mid s \\leadsto v}$ is an SCC\n\t\\begin{itemize}\n\t  \\item compute $\\set{v \\mid s \\leadsto v}$\n\t  \\item compute SCCs\n\t  \\item compare\n\t\\end{itemize}\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Infinite path}\n  \\begin{exampleblock}{Infinite path \\pno{3.4.25}}\n    \\begin{itemize}\n      \\item prove: $\\text{Inf}(p) \\subseteq \\exists \\text{SCC}$\n      \\item an infinite path?\n      \\item $\\ldots \\land$ visiting $\\exists g \\in V_{G}$ infinitely often\n      \\item $\\ldots \\land$ not visiting $\\exists b \\in V_{B}$ infinitely often\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item $\\text{cycle} \\subseteq \\exists \\text{scc}$\n      \\item $\\exists \\text{scc}: s \\leadsto \\text{scc}$\n      \\item $\\exists \\text{scc}: s \\leadsto \\text{scc} \\land \\text{scc} \\cap V_{G} \\neq \\emptyset$\n      \\item delete $V_{B}$; $\\exists \\text{scc}: \\text{scc} \\cap V_{G} \\neq \\emptyset$; in $G$: $s \\leadsto \\text{scc}$\n\t\\begin{itemize}\n\t  \\item wrong: $\\exists \\text{scc}: s \\leadsto \\text{scc} \\land \\text{scc} \\cap V_{G} \\neq \\emptyset \\land \\text{scc} \\cap V_{B} = \\emptyset$\n\t  \\item wrong: delete $V_{B}$; $\\exists \\text{scc}: s \\leadsto \\text{scc} \\land \\text{scc} \\cap V_{G} \\neq \\emptyset$\n\t\\end{itemize}\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Odd cycle in digraph}\n  \\begin{exampleblock}{Odd cycle in digraph \\pno{3.4.15}}\n    Find an odd cycle in a digraph $G$.\n  \\end{exampleblock}\n\n  \\begin{lemma}\n    A digraph $G$ has an odd directed cycle $\\iff$ $\\exists \\text{scc}: $ scc is non-bipartite (when treated undirected).\n  \\end{lemma}\n\n  \\begin{proof}\n    \\begin{itemize}\n      \\item $\\Leftarrow$: undirected $C$; oriented\n\t\\begin{itemize}\n\t  \\item odd directed cycle\n\t  \\item choose a direction $\\forall u \\to v$: $\\text{Len}(v \\leadsto u)$ \n\t\\end{itemize}\n    \\end{itemize}\n  \\end{proof}\n\n  \\begin{alertblock}{Remark.}\n    DFS + Coloring on $G$\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "c2bc375951a93399a10c82d6681cd851e0d30c43", "size": 3118, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/scc.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/scc.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/scc.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 32.1443298969, "max_line_length": 142, "alphanum_fraction": 0.6125721616, "num_tokens": 1109, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.749087201911703, "lm_q2_score": 0.7431680199891789, "lm_q1q2_score": 0.5566976526439545}}
{"text": "%!TEX root = ../../PhD_thesis__Edouard_Leurent.tex\n\\graphicspath{{2-Chapters/5-Chapter/}}\n\n\\chapter{Complements on \\Cref{chapter:5}}\n\\section{Proofs}\n\\label{sec:proofs}\n\\subsection{Proof of \\Cref{prop:bellman-expectation}}\n\\label{sec:proof-bell-expect}\n\\begin{proof}\n\tThanks to the introduction of the augmented spaces $\\ocS, \\ocA$ and dynamics $\\augmentedtransition$, this proof is the same as that in classical \\glspl{MOMDP}.\n\t\\begin{align*}\n\t\\oV^{\\budgetedpolicy}(\\os) &\\eqdef \\expectedvalue\\left[ \\augmentedreturn^{\\budgetedpolicy} \\condbar \\ov{s_0} = \\os\\right] \\\\\n\t&=\\sum_{\\oa\\in\\ocA} \\probability{\\oa_0 = \\oa \\condbar\\ov{s_0} = \\os} \\expectedvalue\\left[ \\augmentedreturn^{\\budgetedpolicy} \\condbar \\ov{s_0} = \\os, \\oa_0 = \\oa\\right]\\\\\n\t&= \\sum_{\\oa\\in\\ocA} \\budgetedpolicy(\\oa | \\os) \\oQ^{\\budgetedpolicy}(\\os,\\oa).\n\t\\end{align*}\n\t\n\t\\begin{align*}\n\t\\oQ^{\\budgetedpolicy}(\\os, \\oa) &\\eqdef \\expectedvalue\\left[\\sum_{t=0}^\\infty \\discountfactor^t \\augmentedreward(\\os_t, \\oa_t)\\condbar \\ov{s_0} = \\os, \\ov{a_0} = \\oa\\right] \\\\\n\t&= \\augmentedreward(\\os, \\oa) + \\sum_{\\os'\\in\\ocS}\\probability{\\os_1 = \\os' \\condbar\\ov{s_0} = \\os, \\ov{a_0} = \\oa}\\cdot \\expectedvalue\\left[\\sum_{t=1}^\\infty \\discountfactor^t \\augmentedreward(\\os_t, \\oa_t)\\condbar \\ov{s_1} = \\os'\\right] \\\\\n\t&= \\augmentedreward(\\os, \\oa) + \\discountfactor\\sum_{\\os'\\in\\ocS}\\augmentedtransition\\left(\\os' \\condbar\\os, \\oa\\right) \\expectedvalue\\left[\\sum_{t=0}^\\infty \\discountfactor^t \\augmentedreward(\\os_t, \\oa_t) \\condbar \\ov{s_0} = \\os'\\right] \\\\\n\t&= \\augmentedreward(\\os, \\oa) + \\discountfactor\\sum_{\\os'\\in\\ocS}\\augmentedtransition\\left(\\os' \\condbar\\os, \\oa\\right) \\oV^{\\budgetedpolicy}(\\os').\n\t\\end{align*}\n\t\n\t\\paragraph{Contraction of $\\abo^{\\budgetedpolicy}$.}\n\tLet $\\budgetedpolicy\\in\\policies, \\oQ_1, \\oQ_2\\in(\\Real^2)^{\\ocS\\ocA}$.\n\t\\begin{align*}\n\t\\forall \\os\\in\\ocS, \\oa\\in\\ocA,\\quad \\left|\\abo^{\\budgetedpolicy} \\oQ_1(\\os,\\oa) - \\abo^{\\budgetedpolicy} \\oQ_2(\\os,\\oa)\\right| &= \\left|\\discountfactor\\expectedvalueover{\\substack{\\os'\\sim\\augmentedtransition(\\os'|\\os,\\oa) \\\\ \\oa'\\sim\\budgetedpolicy(\\oa'|\\os')}} \\oQ_1(\\os',\\oa') - \\oQ_2(\\os',\\oa')\\right|\\\\\n\t&\\leq \\discountfactor\\left\\|\\oQ_1-\\oQ_2\\right\\|_\\infty.\n\t\\end{align*}\n\tHence, $\\left\\|\\abo^{\\budgetedpolicy} \\oQ_1 - \\abo^{\\budgetedpolicy} \\oQ_2 \\right\\|_\\infty \\leq \\discountfactor\\left\\|\\oQ_1-\\oQ_2\\right\\|_\\infty$\n\t\n\tAccording to the Banach fixed point theorem \\citep{Banach1922}, $\\abo^{\\budgetedpolicy}$ admits a unique fixed point.\n\tIt can be easily verified that $\\oQ^{\\budgetedpolicy}$ is indeed this fixed point by combining the two Bellman Expectation equations~\\eqref{eq:bellman_expectation}.\n\t\n\\end{proof}\n\n\\subsection{Proof of \\Cref{thm:bellman-optimality}}\n\\label{sec:proof-bell-optim}\n\n\\begin{proof}\n    Let $\\os, \\oa \\in \\ocA\\times\\ocS$. For this proof, we consider potentially non-stationary policies $\\budgetedpolicy=(\\rho, \\budgetedpolicy')$, with $\\rho\\in\\cM(\\ocA)$, $\\budgetedpolicy'\\in\\cM(\\ocA)^\\Natural$. 
\n\\subsection{Proof of \\Cref{thm:bellman-optimality}}\n\\label{sec:proof-bell-optim}\n\n\\begin{proof}\n    Let $\\os, \\oa \\in \\ocS\\times\\ocA$. For this proof, we consider potentially non-stationary policies $\\budgetedpolicy=(\\rho, \\budgetedpolicy')$, with $\\rho\\in\\cM(\\ocA)$, $\\budgetedpolicy'\\in\\cM(\\ocA)^\\Natural$. The results will apply to the particular case of stationary optimal policies, when they exist.\n    \\begin{align}\n        \\Qr[^\\star](\\os, \\oa) &=  \\max_{\\rho, \\budgetedpolicy'} \\Qr[^{\\rho, \\budgetedpolicy'}](\\os, \\oa) \\label{eq:pthm_def}\\\\\n        &= \\max_{\\rho, \\budgetedpolicy'} \\reward(s, a) + \\discountfactor \\sum_{\\os'\\in\\ocS} \\augmentedtransition(\\os' | \\os, \\oa) \\Vr[^{\\rho, \\budgetedpolicy'}](\\os') \\label{eq:pthm_exp}\\\\\n        &= \\reward(s, a) + \\discountfactor \\sum_{\\os'\\in\\ocS}  \\augmentedtransition(\\os' | \\os, \\oa) \\max_{\\rho, \\budgetedpolicy'} \\sum_{\\oa'\\in\\ocA} \\rho(\\oa' | \\os')\\Qr[^{\\budgetedpolicy'}](\\os', \\oa') \\label{eq:pthm_marg}\\\\\n        &= \\reward(s, a) + \\discountfactor \\sum_{\\os'\\in\\ocS}  \\augmentedtransition(\\os' | \\os, \\oa) \\max_\\rho\\sum_{\\oa'\\in\\ocA}\\rho(\\oa' | \\os')\\max_{\\budgetedpolicy'\\in\\policies_a(\\os')}\\Qr[^{\\budgetedpolicy'}](\\os', \\oa') \\label{eq:pthm_max}\\\\\n        &= \\reward(s, a) + \\discountfactor \\sum_{\\os'\\in\\ocS}  \\augmentedtransition(\\os' | \\os, \\oa) \\max_\\rho\\expectedvalueover{\\oa'\\sim\\rho}\\Qr[^\\star](\\os', \\oa') \\label{eq:pthm_marg_def2}\n    \\end{align}\n    where $\\budgetedpolicy = (\\rho, \\budgetedpolicy')\\in\\policies_a(\\os)$ and $\\budgetedpolicy'\\in\\policies_a(\\os')$.\n\n    This follows from:\n    \\begin{enumerate}\n        \\item[\\eqref{eq:pthm_def}.] Definition of $\\oQ^{\\star}$.\n        \\item[\\eqref{eq:pthm_exp}.] Bellman Expectation expansion from \\Cref{prop:bellman-expectation}.\n        \\item[\\eqref{eq:pthm_marg}.] Marginalisation on $\\oa'$.\n        \\item[\\eqref{eq:pthm_max}.] \\begin{itemize}\n            \\item Trivially $\\max_{\\budgetedpolicy'\\in\\policies_a(\\os')} \\sum_{\\oa'\\in\\ocA} \\cdot \\leq \\sum_{\\oa'\\in\\ocA} \\max_{ \\budgetedpolicy'\\in\\policies_a(\\os')} \\cdot$.\n            \\item Let $\\ov{\\budgetedpolicy}\\in\\argmax_{\\budgetedpolicy'\\in\\policies_a(\\os')} \\Qr[^{\\budgetedpolicy'}](\\os', \\oa')$, then:\n            \\begin{align*}\n                \\sum_{\\oa'\\in\\ocA}\\rho(\\oa'|\\os')\\max_{\\budgetedpolicy'\\in\\policies_a(\\os')}\\Qr[^{\\budgetedpolicy'}](\\os', \\oa') &= \\sum_{\\oa'\\in\\ocA}\\rho(\\oa'|\\os')\\Qr[^{\\ov{\\budgetedpolicy}}](\\os', \\oa') \\\\\n                &\\leq  \\max_{\\budgetedpolicy'\\in\\policies_a(\\os')} \\sum_{\\oa'\\in\\ocA}\\rho(\\oa'|\\os')\\Qr[^{\\budgetedpolicy'}](\\os', \\oa').\n            \\end{align*}\n        \\end{itemize}\n        \\item[\\eqref{eq:pthm_marg_def2}.] 
Definition of $\\oQ^{\\star}$.\n    \\end{enumerate}\n\n    Moreover, the condition $\\budgetedpolicy=(\\rho, \\budgetedpolicy')\\in\\policies_a(\\os)$ gives\n    \\begin{equation*}\n        \\expectedvalueover{\\oa\\sim\\rho} \\Qc[^{\\star}](\\os, \\oa) = \\expectedvalueover{\\oa\\sim\\rho} \\Qc[^{\\budgetedpolicy'}](\\os, \\oa) = \\Vc[^{\\budgetedpolicy}](\\os) \\leq \\budget.\n    \\end{equation*}\n\n    Consequently, $\\budgetedpolicy_\\text{greedy}(\\cdot; \\oQ^{\\star})$ belongs to the $\\argmax$ of \\eqref{eq:pthm_marg_def2}, and in particular:\n    \\begin{equation*}\n        \\Qr[^{\\star}](\\os, \\oa) = r(\\os, \\oa) + \\discountfactor \\sum_{\\os'\\in\\ocS}  P(\\os' | \\os, \\oa) \\expectedvalueover{\\oa'\\sim\\budgetedpolicy_\\text{greedy}(\\os', \\oQ^{\\star})} \\Qr[^{\\star}](\\os', \\oa').\n    \\end{equation*}\n\n    The same reasoning can be applied to $\\Qc[^\\star]$ by replacing $\\max$ operators by $\\min$, and $\\policies_a$ by $\\policies_r$.\n\\end{proof}\n\n\n\\subsection{Proof of \\Cref{prop:greedy_optimal}}\n\\label{sec:proof-greedy-optim}\n\\begin{proof}\n    Notice from the definitions of $\\abo^{\\star}$ and $\\abo^{\\budgetedpolicy}$ in \\eqref{eq:bellman-optimality} and \\eqref{eq:bellman_expectation_operator} that $\\abo^{\\star}$ and $\\abo^{\\budgetedpolicy_\\text{greedy}(\\cdot;\\oQ^{\\star})}$ coincide on $\\oQ^{\\star}$. Moreover, since $\\oQ^{\\star} = \\abo^{\\star}\\oQ^{\\star}$ by \\Cref{thm:bellman-optimality}, we have: $\\abo^{\\budgetedpolicy_\\text{greedy}(\\cdot;\\oQ^{\\star})} \\oQ^{\\star} = \\abo^{\\star} \\oQ^{\\star} = \\oQ^{\\star}$.\n    Hence, $\\oQ^{\\star}$ is a fixed point of $\\abo^{\\budgetedpolicy_\\text{greedy}(\\cdot;\\oQ^{\\star})}$, and by \\Cref{prop:bellman-expectation} it must be equal to $\\oQ^{\\budgetedpolicy_\\text{greedy}(\\cdot;\\oQ^{\\star})}$.\n\n    To show the same result for $\\oV^{\\star}$, notice that\n    \\begin{equation*}\n        \\oV^{\\budgetedpolicy_\\text{greedy}(\\oQ^{\\star})}(\\os) = \\expectedvalueover{\\oa\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^{\\star})}\\oQ^{\\budgetedpolicy_\\text{greedy}(\\oQ^{\\star})}(\\os,\\oa) = \\expectedvalueover{\\oa\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^{\\star})}\\oQ^{\\star}(\\os,\\oa).\n    \\end{equation*}\n    By applying the definitions of $\\oQ^{\\star}$ and $\\budgetedpolicy_\\text{greedy}$, we recover the definition of $\\oV^{\\star}$.\n\\end{proof}\n\n\\subsection{Proof of \\Cref{thm:contraction}}\n\\label{sec:proof-contraction}\n\\begin{proof}\n\tIn the trivial case $|\\cA| = 1$, there exists only one policy $\\budgetedpolicy$ and $\\abo = \\abo^\\budgetedpolicy$, which is a contraction by \\Cref{prop:bellman-expectation}.\n\t\n\tIn the general case $|\\cA| \\geq 2$, we can build the following counter-example.\n\t\n\tLet $(\\cS, \\cA, P, R_r, R_c)$ be a \\gls{BMDP}.\n\tFor any $\\epsilon > 0$, we define $\\oQ_\\epsilon^1$ and $\\oQ_\\epsilon^2$ as\n\t\\begin{align*}\n\t\\oQ_\\epsilon^1(\\os,\\oa) =\n\t\\begin{cases}\n\t(0, 0), & \\text{if } a = a_0 \\\\\n\t\\left(\\frac{1}{\\discount}, \\epsilon\\right), & \\text{if } a \\neq a_0\n\t\\end{cases}\\\\\n\t\\oQ_\\epsilon^2(\\os,\\oa) =\n\t\\begin{cases}\n\t(0, \\epsilon), & \\text{if } a = a_0 \\\\\n\t\\left(\\frac{1}{\\discount}, 2\\epsilon\\right), & \\text{if } a \\neq a_0\n\t\\end{cases}\n\t\\end{align*}\n\tThen, $\\|\\oQ_\\epsilon^1-\\oQ_\\epsilon^2\\|_\\infty = \\epsilon$.\n
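\tBefore looking at the figure, this counter-example can be checked numerically: the Python sketch below (the grid resolution is an arbitrary choice) solves the one-step greedy problem of both functions at budget $\\budgetaction = \\epsilon$, and recovers a gap of $1$ while $\\|\\oQ_\\epsilon^1-\\oQ_\\epsilon^2\\|_\\infty = \\epsilon$.\n\\begin{verbatim}\nimport numpy as np\n\ngamma, eps = 0.9, 1e-3\nbeta = eps                            # budget carried by the augmented state\n# rows = actions a0, a1; columns = (reward value, cost value)\nQ1 = np.array([[0.0, 0.0], [1/gamma, eps]])\nQ2 = np.array([[0.0, eps], [1/gamma, 2*eps]])\n\ndef greedy_value(Q, budget):\n    # best mixture of the two actions: max E[q_r] s.t. E[q_c] <= budget\n    lam = np.linspace(0.0, 1.0, 100001)\n    qr = (1 - lam) * Q[0, 0] + lam * Q[1, 0]\n    qc = (1 - lam) * Q[0, 1] + lam * Q[1, 1]\n    return qr[qc <= budget + 1e-15].max()\n\ngap = gamma * abs(greedy_value(Q1, beta) - greedy_value(Q2, beta))\nprint(gap, np.abs(Q1 - Q2).max())     # ~1.0 versus eps\n\\end{verbatim}\n\t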
$\\oQ_\\epsilon^1$ and $\\oQ_\\epsilon^2$ are represented in \\Cref{fig:concavity_example}.\n\t\n\t\\begin{figure}[tp]\n\t\t\\centering\n\t\t\\includegraphics[width=0.4\\textwidth]{img/concavity_example.pdf}\n\t\t\\caption{Representation of $\\oQ_\\epsilon^1$ (blue) and $\\oQ_\\epsilon^2$ (yellow)}\n\t\t\\label{fig:concavity_example}\n\t\\end{figure}\n\t\n\tBut for $\\oa=(a,\\budgetaction)$ with $\\budgetaction = \\epsilon$, we have\n\t\\begin{align*}\n\t\\|\\abo \\oQ_\\epsilon^1(\\os, \\oa) - \\abo \\oQ_\\epsilon^2(\\os, \\oa)\\|_\\infty &= \\discount\\left\\|\\expectedvalueover{\\os'\\sim\\augmentedtransition(\\os'|\\os,\\oa)} \\expectedvalueover{\\oa'\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^1_\\epsilon)}\\oQ^1_\\epsilon(\\os',\\oa') - \\expectedvalueover{\\oa'\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^2_\\epsilon)}\\oQ^2_\\epsilon(\\os',\\oa')\\right\\|_\\infty \\\\\n\t&= \\discount\\left\\|\\expectedvalueover{\\os'\\sim\\augmentedtransition(\\os'|\\os,\\oa)}\\left(\\frac{1}{\\discount}, \\epsilon\\right) - (0, \\epsilon)\\right\\|_\\infty \\\\\n\t&= \\discount\\frac{1}{\\discount} = 1.\n\t\\end{align*}\n\tHence, \n\t\\begin{align*}\n\t\\|\\abo \\oQ_\\epsilon^1 - \\abo \\oQ_\\epsilon^2\\|_\\infty &\\geq 1 = \\frac{1}{\\epsilon} \\|\\oQ_\\epsilon^1-\\oQ_\\epsilon^2\\|_\\infty.\n\t\\end{align*}\n\t\n\tIn particular, there does not exist $L>0$ such that\n\t$$\\forall \\oQ_1,\\oQ_2\\in(\\Real^2)^{\\ocS\\ocA}, \\|\\abo \\oQ_1 - \\abo \\oQ_2\\|_\\infty \\leq L \\|\\oQ_1 - \\oQ_2\\|_\\infty.$$\n\tIn other words, $\\abo$ is not a contraction for $\\|\\cdot\\|_\\infty$.\n\\end{proof}\n\n\\subsection{Proof of \\Cref{thm:contractivity-smooth}}\n\\label{sec:contraction-with-smooth}\n\n\\begin{remark}\n\t\\begin{leftbar}[remarkbar]\n\tThis proof makes use of insights detailed in the proof of \\Cref{prop:bftq_pi_hull} (\\Cref{sec:proof_pi_hull}), which we recommend the reader to consult first.\n\t\\end{leftbar}\n\\end{remark}\n\n\\begin{proof}\n\tWe now study the contractivity of $\\abo^{\\star}$ when restricted to the functions of $\\cL_{\\discountfactor}$ defined as follows:\n    \\begin{equation}\n    \\cL_{\\discountfactor} = \\left\\{\\begin{array}{cc}\n   \\oQ\\in(\\Real^2)^{\\ocS\\ocA}\\text{ s.t. }\\exists L<\\frac{1}{\\discountfactor}-1: \\forall \\os\\in\\ocS,\\oa_1,\\oa_2\\in\\ocA,   \\\\\n   |\\Qr(\\os,\\oa_1) - \\Qr(\\os,\\oa_2)| \\leq L|\\Qc(\\os,\\oa_1) - \\Qc(\\os,\\oa_2)|\n    \\end{array}\\right\\}.\n    \\end{equation}\n    That is, for every state $\\os$, the set $\\oQ(\\os, \\ocA)$ plotted in the $(\\Qc,\\Qr)$ plane must be the \\emph{graph} of an $L$-Lipschitz function, with $L<1/\\discountfactor-1$.\n\n    We impose such a structure for the following reason: the counter-example presented above prevented contraction because it was a pathological case in which the slope of $\\oQ$ can be arbitrarily large. As a consequence, when solving for $\\Qr[^\\star]$ such that $\\Qc[^\\star]=\\budget$, a vertical slice of a $\\|\\cdot\\|_\\infty$ ball around $\\oQ^1$ (which must contain $\\oQ^2$) can be arbitrarily large as well.\n\n\n    We denote by $\\text{Ball}(\\oQ,R)$ the ball of centre $\\oQ$ and radius $R$ for the $\\|\\cdot\\|_\\infty$-norm:\n    \\begin{equation*}\n        \\text{Ball}(\\oQ,R) = \\{\\oQ'\\in(\\Real^2)^{\\ocS\\ocA}: \\|\\oQ-\\oQ'\\|_\\infty \\leq R\\}.\n    \\end{equation*}\n\n    We give the three main steps required to show that $\\abo^{\\star}$ restricted to $\\cL_{\\discountfactor}$ is a contraction. 
Given $\\oQ^1, \\oQ^2\\in\\cL_{\\discountfactor}$, show that:\n\n    \\begin{enumerate}\n        \\item $\\oQ^2\\in\\text{Ball}(\\oQ^1,R)\\implies\\cF^2\\in\\text{Ball}(\\cF^1, R), \\forall\\os\\in\\ocS$, where $\\cF$ is the top frontier of the convex hull of undominated points, as defined in~\\Cref{sec:proof_pi_hull}.\n        \\item $\\oQ\\in\\cL_{\\discountfactor} \\implies \\cF$ is the graph of an $L$-Lipschitz function, $\\forall\\os\\in\\ocS$.\n        \\item Taking the slice $\\Qc=\\budget$ of a ball $\\text{Ball}(\\cF,R)$ with $\\cF$ $L$-Lipschitz results in an interval on $\\Qr$ of range at most $(L+1)R$.\n    \\end{enumerate}\n\t\n\tThese three steps will allow us to control $\\Qr[^{2\\star}] - \\Qr[^{1\\star}]$ as a function of $R = \\|\\oQ^2-\\oQ^1\\|_\\infty$.\n\n    \\paragraph{Step 1}\n\n    We want to show that if $\\oQ^1$ and $\\oQ^2$ are close, then $\\cF^1$ and $\\cF^2$ are close as well in the following sense:\n    \\begin{align}\n        \\cF^2\\in\\text{Ball}(\\cF^1, R) &\\iff d(\\cF^1, \\cF^2) \\leq R \\iff \\max_{q^2\\in\\cF^2}\\min_{q^1\\in\\cF^1}\\|q^2-q^1\\|_\\infty \\leq R.\n        \\label{eq:ball-set}\n    \\end{align}\n\n    Assume $\\oQ^2\\in\\text{Ball}(\\oQ^1,R)$; we show by contradiction that $\\cF^2 \\in \\text{Ball}(\\cF^1, R)$. Indeed, assume there exists $q^1\\in \\cF^1$ such that $\\cF^2 \\cap \\text{Ball}(q^1, R) = \\emptyset$. Denote by $q^2$ the unique point of $\\cF^2$ such that $q^2_c = q^1_c$. By construction of $q^1$, we know that $\\|q^1-q^2\\|_\\infty > R$. There are two possible cases:\n    \\begin{itemize}\n        \\item $q^2_r > q^1_r$: this also directly implies that $q^2_r > q^1_r + R$. But $q^2\\in\\cF^2$, so there exist $q^2_1, q^2_2\\in \\oQ^2, \\lambda\\in[0,1]$ such that $q^2 = (1-\\lambda)q^2_1 + \\lambda q^2_2$. But since $\\oQ^2\\in \\text{Ball}(\\oQ^1, R)$, there also exist $q_1^1, q^1_2\\in \\oQ^1$ such that $\\|q^1_1-q^2_1\\|_\\infty \\leq R$ and $\\|q^1_2-q^2_2\\|_\\infty \\leq R$, and in particular $q^1_{1r}\\geq q^2_{1r}-R$ and $q^1_{2r}\\geq q^2_{2r}-R$. But then, the point $q^{1'}=(1-\\mu)q^1_1 + \\mu q^1_2$ with $\\mu=(q^2_c-q^1_{1c})/(q^1_{2c}-q^1_{1c})$ verifies $q^{1'}_c = q^1_c$ and $q^{1'}_r \\geq q^2_r - R > q^1_r$, which contradicts the definition of $q^1\\in\\cF^1$ as defined in \\eqref{eq:top-frontier}.\n        \\item $q^2_r < q^1_r$: then the same reasoning can be applied by simply swapping the indexes 1 and 2.\n    \\end{itemize}\n\n    % We start by showing this result for $\\cC^2(Q^{1-})$ and $\\cC^2(Q^{2-})$ as defined in \\Cref{sec:proof_pi_hull}:\n    % Let $\\os\\in\\ocS$ and $q^2\\in\\cC^2(Q^{2-})$, $\\exists\\lambda\\in[0,1], \\oa_1,\\oa_2\\in\\ocA: q^2 = (1-\\lambda)Q^2(\\os,\\oa_1) + \\lambda Q^2(\\os,\\oa_2)$. Define $q^1 = (1-\\lambda)Q^1(\\os,\\oa_1) + \\lambda Q^1(\\os,\\oa_2)$. 
Then\n    % \\begin{align*}\n    %     \\|q^2-q^1\\|_\\infty &= \\|(1-\\lambda)(Q^2(\\os,\\oa_1) - Q^1(\\os,\\oa_1)) + \\lambda (Q^2(\\os,\\oa_2) - Q^1(\\os,\\oa_2))\\|_\\infty\\\\\n    %     &\\leq  (1-\\lambda)\\|Q^2(\\os,\\oa_1) - Q^1(\\os,\\oa_1)\\|_\\infty + \\lambda \\|Q^2(\\os,\\oa_2) - Q^1(\\os,\\oa_2)\\|_\\infty\\\\\n    %     &\\leq (1-\\lambda)R+\\lambda R = R\n    % \\end{align*}\n\n    % It remains to show that when taking the top frontiers of the convex sets $\\cC^2(Q^{1-})$ and $\\cC^2(Q^{2-})$, they remain at a distance of at most $R$.\n\n    We have shown that $\\cF^2 \\in \\text{Ball}(\\cF^1, R)$.\n    This is illustrated in \\Cref{fig:contraction_lips_hull}: given a function $\\oQ^1$, we show the locus $\\text{Ball}(\\oQ^1,R)$ of $\\oQ^2$. We then draw $\\cF^1$, the top frontier of the convex hull of $\\oQ^1$, alongside the locus of all possible $\\cF^2$, which belong to the ball $\\text{Ball}(\\cF^1, R)$.\n\n    \\begin{figure}[ht]\n        \\centering\n        \\includegraphics[trim=7cm 4cm 7cm 4cm, clip, width=0.7\\textwidth]{img/contraction_lipschitz.pdf}\n        \\caption{We represent the range of possible solutions $\\Qr[^{2\\star}]$ for any $\\oQ^2\\in\\text{Ball}(\\oQ^1, R)$, given $\\oQ^1\\in\\cL_{\\discountfactor}$}\n        \\label{fig:contraction_lips_hull}\n    \\end{figure}\n\n    \\paragraph{Step 2}\n\n    We want to show that if $\\oQ\\in\\cL_{\\discountfactor}$, $\\cF$ is the graph of an $L$-Lipschitz function:\n    \\begin{equation}\n        \\label{eq:L-lip-set}\n        \\forall q^1,q^2\\in\\cF, |q_r^2-q_r^1| \\leq L|q_c^2-q_c^1|.\n    \\end{equation}\n\n    Let $\\oQ\\in\\cL_{\\discountfactor}$ and $\\os\\in\\ocS$, $\\cF$ the corresponding top frontier of the convex hull.\n    For all $q^1,q^2\\in\\cF, \\exists \\lambda,\\mu\\in[0,1], q^{11},q^{12},q^{21},q^{22}\\in \\oQ(\\os,\\ocA)$ such that $q^1 = (1-\\lambda)q^{11} + \\lambda q^{12}$ and $q^2 = (1-\\mu)q^{21} + \\mu q^{22}$.\n    Without loss of generality, we can assume $q_c^{11}\\leq q_c^{12}$ and $q_c^{21}\\leq q_c^{22}$. We also consider the worst case in terms of maximum $q_r$ deviation: $q_c^{12} \\leq q_c^{21}$.\n    Then the maximum increment $q_r^2-q_r^{1}$ is:\n    \\begin{align*}\n        |q^2_r-q^{1}_r| &\\leq |q^{12}_r-q^{1}_r| + |q^{21}_r-q^{12}_r| + |q^{2}_r-q^{21}_r| \\\\\n        &= (1-\\lambda)|q^{12}_r-q^{11}_r| + |q^{21}_r-q^{12}_r| + \\mu|q^{22}_r-q^{21}_r| \\\\\n        &\\leq (1-\\lambda)L|q^{12}_c-q^{11}_c| + L|q^{21}_c-q^{12}_c| + \\mu L|q^{22}_c-q^{21}_c| \\\\\n        &= L|q^{12}_c-q^{1}_c| + L|q^{21}_c-q^{12}_c| + L|q^{2}_c-q^{21}_c|\\\\\n        &= L|q^{2}_c-q^{1}_c|.\n    \\end{align*}\n\n    This can also be seen in \\Cref{fig:contraction_lips_hull}: the maximum slope of $\\cF^1$ is no greater than the maximum slope between two points of $\\oQ^1$.\n\n    \\paragraph{Step 3}\n\n    Let $\\cF^1$ be an $L$-Lipschitz set as defined in \\eqref{eq:L-lip-set}, and consider a ball $\\text{Ball}(\\cF^1,R)$ around it as defined in \\eqref{eq:ball-set}.\n\n    We want to bound the optimal reward value $\\Qr[^{2\\star}]$ under the constraint $\\Qc[^{2\\star}] = \\budget$ (the regular case in \\Cref{sec:proof_pi_hull}, where the constraint is saturated), for any $\\cF^2\\in\\text{Ball}(\\cF^1,R)$. 
This quantity is represented as a red double-ended arrow in \\Cref{fig:contraction_lips_hull}.\n\n    Because we are only interested in what happens locally at $\\Qc=\\budget$, we can zoom in on \\Cref{fig:contraction_lips_hull} and only consider a thin $\\epsilon$-section around $\\budget$. In the limit $\\epsilon\\rightarrow 0$, this section becomes the tangent to $\\cF^1$ at $\\Qc[^1]=\\budget$. It is represented in \\Cref{fig:contraction_lips_hull_slope}, from which we derive a geometrical proof:\n    \\begin{figure}[ht]\n        \\centering\n        \\includegraphics[trim=2cm 1cm 2cm 1cm, clip, width=0.7\\textwidth]{img/contraction_lipschitz_slope.pdf}\n        \\caption{We represent a section $[\\budget-\\epsilon, \\budget+\\epsilon]$ of $\\cF^1$ and $\\text{Ball}(\\cF^1, R)$. We want to bound the range of $\\Qr[^{2\\star}].$}\n        \\label{fig:contraction_lips_hull_slope}\n    \\end{figure}\n    \\begin{align*}\n        \\Delta \\Qr[^{2\\star}] &= b + c &\\\\\n        & \\leq La + c & \\text{($\\cF^1$ $L$-Lipschitz)}\\\\\n        &= 2LR+2R = 2R(L+1).\n    \\end{align*}\n    Hence,\n    \\begin{equation*}\n        | \\Qr[^{2\\star}] - \\Qr[^{1\\star}]| \\leq \\frac{\\Delta \\Qr[^{2\\star}]}{2} = R(L+1)\n    \\end{equation*}\n    and $\\Qc[^{1\\star}] = \\Qc[^{2\\star}] = \\budget$.\n    Consequently, $ \\|\\oQ[^{2\\star}] - \\oQ[^{1\\star}]\\|_\\infty \\leq (L+1)R$.\n\n    Finally, consider the edge case in \\Cref{sec:proof_pi_hull}: the constraint is not active, and the optimal value is simply $\\argmax_{q\\in\\cF} q_r$. In particular, since we showed that $\\cF^2\\in \\text{Ball}(\\cF^1, R)$, and since $\\oQ[^{2\\star}]\\in \\cF^2$, there exists $q^1\\in \\cF^1$ such that $\\|\\oQ[^{2\\star}]-q^1\\|_\\infty\\leq R$, and in particular $\\oQ[^{1\\star}]_r \\geq q^1_r \\geq \\oQ[^{2\\star}]_r - R$. Conversely, by the same reasoning, $\\Qr[^{2\\star}] \\geq \\Qr[^{1\\star}] - R$. 
Hence, we have that $| \\Qr[^{2\\star}] - \\Qr[^{1\\star}]| \\leq R \\leq R(L+1).$\n\n    \\paragraph{Wrapping it up}\n\n    We have shown that for any $\\oQ^1,\\oQ^2\\in\\cL_{\\discountfactor}$,\n    and all $\\os\\in\\ocS$, $\\cF^2\\in\\text{Ball}(\\cF^1,\\|\\oQ^2-\\oQ^1\\|_\\infty)$ and $\\cF^1$ is\n    the graph of an $L$-Lipschitz function with $L<1/\\discountfactor - 1$.\n    Moreover, the solutions of $\\budgetedpolicy_\\text{greedy}(\\oQ^1)$ and $\\budgetedpolicy_\\text{greedy}(\\oQ^2)$ at\n    $\\os$ are such that $ \\|\\oQ[^{2\\star}] - \\oQ[^{1\\star}]\\|_\\infty \\leq (L+1)\\|\\oQ^2-\\oQ^1\\|_\\infty$.\n\n    Hence, for all $\\oa$,\n    \\begin{align*}\n        \\|\\abo^{\\star}\\oQ^1(\\os, \\oa) - &\\abo^{\\star}\\oQ^2(\\os, \\oa)\\|_\\infty \\\\\n        &= \\discountfactor\\left\\|\\expectedvalueover{\\os'\\sim\\augmentedtransition(\\os'|\\os,\\oa)}\n        \\expectedvalueover{\\oa'\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^1)}\\oQ^1(\\os',\\oa') -\n        \\expectedvalueover{\\oa'\\sim\\budgetedpolicy_\\text{greedy}(\\oQ^2)}\\oQ^2(\\os',\\oa')\\right\\|_\\infty \\\\\n        &= \\discountfactor\\left\\|\\oQ[^{2\\star}] - \\oQ[^{1\\star}]\\right\\|_\\infty \\\\\n        &\\leq \\discountfactor(L+1)\\|\\oQ^2-\\oQ^1\\|_\\infty.\n    \\end{align*}\n    Taking the sup on $\\ocS\\ocA$,\n    \\begin{equation*}\n        \\|\\abo^{\\star}\\oQ^1 - \\abo^{\\star}\\oQ^2\\|_\\infty \\leq \\discountfactor(L+1)\\|\\oQ^1-\\oQ^2\\|_\\infty\n    \\end{equation*}\n    with $\\discountfactor(L+1) < 1$.\n    In conclusion, $\\abo^{\\star}$ is a $\\discountfactor(L+1)$-contraction on $\\cL_{\\discountfactor}$.\n\\end{proof}\n\n%\\subsection{\\Cref{lemma:concavity}}\n\n%\\begin{proof}. Let $s,s'\\in\\cS, a\\in\\cA$.\n%We first prove those results for $V_r^{\\star}(s', \\cdot)$\n\n%\\textbf{Non-decreasing}\n\n%Consider $\\beta_a^1 \\leq \\beta_a^2 \\in \\cB$.\n%Any policy that satisfies the budget $\\beta_a^1$ in $s'$ also satisfies $\\beta_a^2$, so $\\Pi_c(s', \\beta_a^1) \\subset \\Pi_c(s', \\beta_a^2)$. Hence, by taking the max over policies, $V_r^{\\star}(s', \\beta_a^1) \\leq V_r^{\\star}(s', \\beta_a^2)$.\n%Hence, $V_r^{\\star}(s', \\cdot)$ is non-decreasing.\n\n%\\textbf{Concave}\n\n%By contradiction: assume that $V_r^{\\star}(s', \\cdot)$ is not concave, \\ie there exist $\\beta^1 < \\beta^2\\in \\cB$ and $p\\in(0, 1)$ such that $\\beta^3 = (1-p)\\beta^1 + p\\beta^2$ verifies: $V_r^{\\star}(s', \\beta^3) < (1-p)V_r^{\\star}(s', \\beta^1) + pV_r^{\\star}(s',\\beta^2)$. By definition of $V^{\\star}$, there must be $\\pi_1,\\pi_2\\in\\Pi^{\\star}$ such that $V^{\\star}(s', \\beta^1) = V^{\\pi_1}(s', \\beta^1)$ and $V^{\\star}(s', \\beta^2) = V^{\\pi_2}(s', \\beta^2)$. \n\n%Define $\\pi = (1-p)(\\pi_1(\\cdot, \\beta^1), \\pi_1) + p(\\pi_2(\\cdot, \\beta^2), \\pi_2)$. By linearity of $V^\\pi$ with respect to $\\pi$, we have that $V_c^\\pi(s', \\beta^3) = (1-p)V_c^{\\pi_1}(s', \\beta^1) + pV_c^{\\pi_2}(s', \\beta^2) \\leq (1-p)\\beta^1 + p\\beta^2 = \\beta^3$ since $\\pi_1, \\pi_2\\in\\Pi^{\\star}(s')\\subset\\Pi_a(s')$, so $\\pi$ respects the budget $\\beta^3$. Moreover, we also have $V_r^\\pi(s', \\beta^3) = (1-p)V_r^{\\pi_1}(s', \\beta^1) + pV_r^{\\pi_2}(s', \\beta^2) > V_r^{\\star}(s', \\beta^3)$, which contradicts the definition of $V_r^{\\star}$.\n\n%Consequently, $V_r^{\\star}(s', \\cdot)$ is non-decreasing and concave. 
By \\eqref{eq:bellman_expectation_Q} we see that $Q_r^{\\star}(s,a,\\cdot) = R_r(s,a) + \\discount\\expectedvalueover{s'}V_r^{\\star}(s', \\cdot)$  is too.\n\n\n%\\end{proof}\n\n%\\subsection{\\Cref{lemma:tau_concavity}}\n\n\n%\\subsection{\\Cref{lemma:pi_hull}}\n\n%\\td\n\n%\\begin{proof}\n%If the estimates $q^c_0, q^c_1$ are accurate, then by construction and linearity of the expectation, the returned mixture policy has an expected total cost of $\\expectedvalueover{a, \\beta_a \\sim\\pi_\\text{greedy}}Q_c(s, a, \\beta_a) = \\beta$ as desired in \\eqref{eq:pi_greedy_constraint}. Because the $Q_r(s,a,\\cdot)$ is concave and under its tangents, this mixture must have the largest $Q_r$ possible as required in \\eqref{eq:pi_greedy_reward}. The special case of a tie $q_r^0 = q_r^1$ is considered, where we do minimise $Q_c$ as required in \\eqref{eq:pi_greedy_cost}.\n%\\end{proof}\n\n\n\\subsection{Proof of \\Cref{prop:bftq_pi_hull}}\n\\label{sec:proof_pi_hull}\n\\begin{definition}\n\t\\begin{leftbar}[defnbar]\n\tLet $A$ be a set, and $f$ a function defined on $A$. We define\n\t\n\t\\begin{itemize}\n\t\t\\item the convex hull of $A$: $\\cC(A) = \\{\\sum_{i=1}^p \\lambda_i a_i: a_i\\in A, \\lambda_i\\in\\Real^+, \\sum_{i=1}^p \\lambda_i = 1, p\\in\\Natural\\}$;\n\t\t\\item the convex edges of $A$: $\\cC^2(A) = \\{\\lambda a_1 + (1-\\lambda)a_2: a_1, a_2\\in A, \\lambda\\in[0, 1]\\}$;\n\t\t\\item Dirac distributions of $A$: $\\dirac(A) = \\{\\dirac(a-a_0): a_0\\in A\\}$;\n\t\t\\item the image of $A$ by $f$: $f(A) = \\{f(a): a\\in A\\}$.\n\t\\end{itemize}\n\\end{leftbar}\n\\end{definition}\n\n\\begin{proof}\nLet $\\os=(s,\\budget)\\in\\ocS$ and $\\oQ\\in(\\Real^2)^{\\ocS\\ocA}$. We recall the definition of $\\budgetedpolicy_\\text{greedy}$:\n    \\begin{subequations}\n        \\begin{align}\n            \\budgetedpolicy_\\text{greedy}(\\oa|\\os; \\oQ) &\\in \\argmin_{\\rho\\in\\policies_r^{\\oQ}} \\expectedvalueover{\\oa\\sim\\rho}\\Qc(\\os, \\oa) \\tag{\\ref{eq:pi_greedy_cost}}\\\\\n            \\text{where }\\quad\\policies_r^{\\oQ} = &\\argmax_{\\rho\\in\\cM(\\ocA)} \\expectedvalueover{\\oa\\sim\\rho} \\Qr(\\os, \\oa) \\tag{\\ref{eq:pi_greedy_reward}}\\\\\n            & \\text{ s.t. }  \\expectedvalueover{\\oa\\sim\\rho} \\Qc(\\os, \\oa) \\leq \\budget \\tag{\\ref{eq:pi_greedy_constraint}}\n        \\end{align}\n    \\end{subequations}\n\n    Note that any policy in the $\\argmin$ in \\eqref{eq:pi_greedy_cost} is suitable to compute $\\abo^{\\star}$.\n    We first reduce the set of candidate optimal policies.\n    Consider the problem described in \\eqref{eq:pi_greedy_reward},\\eqref{eq:pi_greedy_constraint}: it can be seen as a single-step \\gls{CMDP} problem with reward $\\reward=\\Qr$ and cost $\\constraint=\\Qc$. By \\citep[Theorem 4.4][]{Beutler1985}, we know that the solutions are mixtures of two deterministic policies. 
Hence, we can replace $\\cM(\\ocA)$ by $\\cC^2(\\dirac(\\ocA))$ in \\eqref{eq:pi_greedy_reward}.\n\n    Moreover, remark that:\n    \\begin{align*}\n        \\{\\expectedvalueover{\\oa\\sim\\rho} \\oQ(\\os,\\oa):& \\rho\\in \\cC^2(\\dirac(\\ocA))\\} \\\\\n        &= \\{\\expectedvalueover{\\oa\\sim\\rho} \\oQ(\\os,\\oa): \\rho=(1-\\lambda)\\dirac(\\oa-\\oa_1)+\\lambda\\dirac(\\oa-\\oa_2), \\oa_1,\\oa_2\\in\\ocA, \\lambda\\in[0,1]\\} \\\\\n        &= \\{(1-\\lambda)\\oQ(\\os, \\oa_1)+\\lambda \\oQ(\\os, \\oa_2), \\oa_1,\\oa_2\\in\\ocA, \\lambda\\in[0,1]\\} \\\\\n        &= \\cC^2(\\oQ(\\os,\\ocA)).\n    \\end{align*}\n\n    Hence, the problem \\eqref{eq:pi_greedy_reward}, \\eqref{eq:pi_greedy_constraint} has become:\n    \\begin{equation*}\n        \\tilde{\\policies}^{\\Qr} = \\argmax_{(q_r, q_c)\\in\\cC^2(\\oQ(\\os, \\ocA))} q_r \\quad\\text{ s.t. }\\quad q_c \\leq \\budget\n    \\end{equation*}\n    and the solution of $\\budgetedpolicy_\\text{greedy}$ is $q^{\\star}=\\argmin_{q\\in\\tilde{\\policies}^{\\Qr}} q_c$.\n\n    The original problem in the space of actions $\\ocA$ is now expressed in the space of values $\\oQ(\\os, \\ocA)$ (which is why we use $=$ instead of $\\in$ before $\\argmin$ here).\n\n    We further restrict the search space of $q^{\\star}$ following two observations:\n    \\begin{enumerate}\n        \\item $q^{\\star}$ belongs to the \\emph{undominated} points $\\cC^2(\\oQ^-)$:\n        \\begin{align}\n            \\label{eq:q_minus_undominated}\n            \\oQ^+ &= \\{(q_c, q_r): q_c > q_c^{\\pm} = \\min_{q^+} q_c^+\\text{ s.t. }q^+\\in\\argmax_{q\\in \\oQ(\\os,\\ocA)} q_r\\}\\\\\n            \\oQ^- &= \\oQ(\\os,\\ocA) \\setminus \\oQ^+.\n        \\end{align}\n        Denote $q^{\\star} = (1-\\lambda) q^1 + \\lambda q^2$, with $q^1, q^2\\in \\oQ(\\os,\\ocA)$. There are three possible cases:\n        \\begin{enumerate}\n            \\item $q^1, q^2 \\not\\in \\oQ^-$. Then $q_c^{\\star} = (1-\\lambda) q^1_c + \\lambda q^2_c > q_c^{\\pm}$. But then $q_c^{\\pm} < q_c^{\\star} \\leq \\budget$, so $q^{\\pm}\\in\\tilde{\\policies}^{\\Qr}$ with a strictly lower $q_c$ than $q^{\\star}$, which contradicts the $\\argmin$.\n            \\item $q^1\\in \\oQ^-, q^2 \\not\\in \\oQ^-$. But then consider the mixture $q^\\top = (1-\\lambda) q^1 + \\lambda q^\\pm$. Since $q_r^{\\pm} \\geq q_r^{2}$ and $q_c^{\\pm} < q_c^{2}$, we also have $q^\\top_r \\geq q_r^{\\star}$ and $q^\\top_c < q_c^{\\star}$, which also contradicts the $\\argmin$.\n            \\item $q^1,q^2\\in \\oQ^-$ is the only remaining possibility.\n        \\end{enumerate}\n        \\item $q^{\\star}$ belongs to the \\emph{top frontier} $\\cF$:\n        \\begin{equation}\n            \\label{eq:top-frontier}\n            \\cF = \\{q\\in \\cC^2(\\oQ^-): \\not\\exists q'\\in \\cC^2(\\oQ^-): q_c=q_c'\\text{ and }q_r<q_r'\\}.\n        \\end{equation}\n        Trivially, otherwise $q'$ would be a better candidate than $q^{\\star}$.\n    \\end{enumerate}\n\n
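    Before characterising this frontier analytically, note that the whole construction - filtering the dominated points, taking the top frontier of the convex hull, and mixing two successive points to meet the budget - lends itself to a direct computational sketch (Python; the sample points at the end are arbitrary illustrative values):\n\\begin{verbatim}\nimport numpy as np\n\ndef top_frontier(points):\n    # points: array of (q_c, q_r) pairs; returns the top frontier F\n    rmax = points[:, 1].max()\n    c_pm = points[points[:, 1] == rmax, 0].min()   # dominant point cost\n    und = points[points[:, 0] <= c_pm]             # undominated set Q^-\n    und = und[np.lexsort((-und[:, 1], und[:, 0]))] # cost asc, reward desc\n    _, first = np.unique(und[:, 0], return_index=True)\n    pts = und[first]                               # best reward per cost\n    hull = []                                      # upper convex hull scan\n    for p in pts:\n        while len(hull) >= 2:\n            d1, d2 = hull[-1] - hull[-2], p - hull[-2]\n            if d1[0] * d2[1] - d1[1] * d2[0] < 0:  # clockwise turn: keep\n                break\n            hull.pop()\n        hull.append(p)\n    return np.array(hull)\n\ndef greedy_mixture(F, budget):\n    # assumes budget >= F[0, 0], i.e. the constraint is feasible\n    if budget >= F[-1, 0]:                         # edge case: take q_pm\n        return F[-1]\n    i = np.searchsorted(F[:, 0], budget, side='right')\n    q1, q2 = F[i - 1], F[i]                        # two successive points\n    lam = (budget - q1[0]) / (q2[0] - q1[0])\n    return (1 - lam) * q1 + lam * q2\n\npts = np.array([[0.0, 0.0], [0.2, 0.5], [0.5, 0.6], [0.4, 0.1], [0.9, 0.8]])\nprint(greedy_mixture(top_frontier(pts), budget=0.35))\n\\end{verbatim}\n\n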
    Let us characterise this frontier $\\cF$. It is both:\n    \\begin{enumerate}\n        \\item the \\emph{graph of a non-decreasing function}: $\\forall q^1, q^2\\in\\cF$, if $q_c^1\\leq q_c^2$ then $q_r^1\\leq q_r^2$.\\\\\n        By contradiction, if we had $q_r^1 > q_r^2$, we could define $q^\\top = (1-\\lambda)q^1 + \\lambda q^\\pm$ where $q^\\pm$ is the dominant point as defined in \\eqref{eq:q_minus_undominated}. By choosing $\\lambda=(q^2_c-q^1_c)/(q^\\pm_c-q^1_c)$ such that $q^\\top_c = q_c^2$, since $q_r^\\pm \\geq q_r^1 > q_r^2$ we also have $q^\\top_r > q_r^2$, which contradicts $q^2\\in\\cF$.\n        \\item the \\emph{graph of a concave function}: $\\forall q^1, q^2, q^3\\in\\cF$ such that $q_c^1\\leq q_c^2 \\leq q_c^3$, with $\\lambda$ such that $q^2_c = (1-\\lambda)q^1_c + \\lambda q^3_c$, then $q_r^2 \\geq (1-\\lambda)q_r^1 + \\lambda q_r^3$.\\\\\n        Trivially, otherwise the point $q^\\top = (1-\\lambda)q^1 + \\lambda q^3$ would verify $q^\\top_c=q^2_c$ and $q^\\top_r > q^2_r$, which would contradict $q^2 \\in\\cF$.\n    \\end{enumerate}\n\n    We denote $\\cF_{\\oQ} = \\cF \\cap \\oQ(\\os,\\ocA)$. Clearly, $q^{\\star}\\in\\cC^2(\\cF_{\\oQ})$: let $q^1, q^2\\in \\oQ^-$ such that $q^{\\star} = (1-\\lambda)q^1 + \\lambda q^2$. First, $q^1, q^2\\in \\oQ^-\\subset\\cC^2(\\oQ^-)$. Then, by contradiction, if there existed $q^{1'}$ or $q^{2'}$ with equal $q_c$ and strictly higher $q_r$, again we could build an admissible mixture $q^{\\top}=(1-\\lambda)q^{1'} + \\lambda q^{2'}$ strictly better than $q^{\\star}$.\n\n    $q^{\\star}$ can be written as $q^{\\star} = (1-\\lambda)q^1 + \\lambda q^2$ with $q^1, q^2\\in\\cF_{\\oQ}$ and, without loss of generality, $q^1_c \\leq q^2_c$.\n\n    \\paragraph{Regular case}\n\n    There exists $q^0\\in\\cF_{\\oQ}$ such that $q^0_c \\geq \\budget$. Then $q^1$ and $q^2$ must flank the budget: $q_c^1 \\leq \\budget \\leq q_c^2$. Indeed, by contradiction, if $q_c^2 \\geq q_c^1 > \\budget$ then $q_c^{\\star} > \\budget$, which contradicts $\\policies_r^{\\oQ}$. Conversely, if $q_c^1 \\leq q_c^2 < \\budget$ then $q^{\\star}_c < \\budget \\leq q^0_c$, which would make $q^{\\star}$ a worse candidate than $q^\\top=(1-\\lambda)q^{\\star} + \\lambda q^0$ when $\\lambda$ is chosen such that $q_c^\\top=\\budget$, and contradict $\\policies_r^{\\oQ}$ again.\n\n    Because $\\cF$ is the graph of a non-decreasing function, $\\lambda$ should be as high as possible, as long as the budget constraint $q^{\\star}_c\\leq\\budget$ is respected. We reach the highest $q_r^{\\star}$ when $q^{\\star}_c=\\budget$, that is: $\\lambda=(\\budget-q_c^1)/(q_c^2-q_c^1)$.\n\n    It remains to show that $q^1$ and $q^2$ are two successive points in $\\cF_{\\oQ}$: $\\not\\exists q\\in\\cF_{\\oQ}\\setminus\\{q^1, q^2\\}: q^1_c \\leq q_c \\leq q^2_c$. Otherwise, as $\\cF$ is the graph of a concave function, we would have $q_r \\geq (1-\\mu)q_r^1 + \\mu q_r^2$. $q_r$ cannot be strictly greater than $(1-\\mu)q_r^1 + \\mu q_r^2$, which would contradict the optimality of $q^{\\star}$, but it can still be equal, which means the three points $q, q^1, q^2$ are aligned. In fact, every point aligned with $q^1$ and $q^2$ can also be used to construct mixtures resulting in $q^{\\star}$, but among these solutions we can still choose $q^1$ and $q^2$ as the two points in $\\cF_{\\oQ}$ closest to $q^{\\star}$.\n\n    \\paragraph{Edge case}\n\n    $\\forall q\\in\\cF_{\\oQ}, q_c < \\budget$. 
Then  $q^{\\star} =  \\argmax_{q\\in\\cF} q_r = q^\\pm =  \\argmax_{q\\in \\oQ^-} q_r$.\n\\end{proof}\n\n\n%\\begin{proof}\n%First, a straightforward proof by induction shows that for all $k\\in\\Natural$, $Q_k$ computed at iteration $k$ of either \\Cref{alg:bvi} or \\Cref{alg:bftq} is concave non-decreasing with respect to $\\beta_a$: the initialisation is trivial from $Q_0 = 0$, and the heredity stems from \\Cref{lemma:tau_concavity}.\n%\\end{proof}\n\n\n% \\subsection{Decomposition Lemma}\n\n% \\begin{lemma}\n%     For any sequence real valued functions $f_1,\\ldots,f_n$ and any real number $c$, we have\n%     \\[\n%         \\begin{array}{lcl}\n%             \\underbrace{\\max\\limits_{\\sum_i x_i \\leq c}\\sum_j f_j(x_j)}_{(a)} & \\quad{}=\\quad{} & \\underbrace{\\max\\limits_{\\sum_i c_i \\leq c}\\left(\\sum_j\\max\\limits_{x\\leq c_j} f_j(x)\\right)}_{(b)}\\\\\n%         \\end{array}\n%     \\]\n% \\end{lemma}\n\n% \\begin{proof}\n%     Let us first show that $(a)\\leq(b)$.\n%     By definition of the maximum on a set, for any $f_j$ and any $c_j$ we have\n%     $\\max\\limits_{x\\leq c_j} f_j(x) \\geq f_j(c_j)$.\n%     Hence, by replacing these terms in $(b)$ we get:\n%       \\[\n%     \\begin{array}{lcl}\n%         \\max\\limits_{\\sum_i c_i \\leq c} \\sum_j f_j(c_j) & \\quad{}\\leq\\quad{} & \\max\\limits_{\\sum_i c_i\\leq c}\\left(\\sum_j \\max\\limits_{x_j\\leq c_j} f_j(x_j)\\right)\\\\\n%     \\end{array}\n%     \\]\n%     The left hand side of this inequality is just a rewriting of $(a)$ with different dummy variables names.\n\n%     Let us show now that $(a) \\geq (b)$.\n%     Let $\\hat{x}_1,\\ldots,\\hat{x}_n, \\hat{c}_1, \\ldots \\hat{c}_n$ be a realisation (argmax) of $(b)$.\n%     By definition of $(b)$'s feasible set, we have $\\sum_i\\hat{c}_i \\leq c$ and for any $i$: $\\hat{x}_i\\leq \\hat{c}_i$.\n%     Because $\\sum_i\\hat{x}_i\\leq \\sum_i\\hat{c}_i \\leq c$, the tuple $(\\hat{x}_1, \\ldots \\hat{x}_n)$ is also a feasible value for $(a)$. 
And, by definition of the maximum on a set: $(a) = \\max\\limits_{\\sum_i x_i \\leq c} \\sum_j f_j(x_j) \\geq \\sum_j f_j(\\hat{x}_j) = (b)$.\n% \\end{proof}\n\n%\\section{Risk-Sensitive Exploration}\n%\\label{sec:risk-sensitive-supp}\n%We recall the Risk-Sensitive Exploration in %\\Cref{alg:risk-sensitive-exploration}:\n%\\input{source/risk-sensitive-explo-pseudo-code.tex}\n", "meta": {"hexsha": "1a6bbd06b4c4a90b81487466b78011f220634d58", "size": 32985, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_stars_repo_name": "eleurent/phd-thesis", "max_stars_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-12-04T13:16:56.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-03T08:19:45.000Z", "max_issues_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_issues_repo_name": "eleurent/phd-thesis", "max_issues_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_forks_repo_name": "eleurent/phd-thesis", "max_forks_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.9756637168, "max_line_length": 705, "alphanum_fraction": 0.6296801576, "num_tokens": 12986, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147275, "lm_q2_score": 0.7431680086124812, "lm_q1q2_score": 0.5566976524475453}}
{"text": "\\section*{Maneouver study}\n\\addcontentsline{toc}{section}{Maneouver study}\nThe higher the velocity is, the bigger the radius - this is, the greater the increase of altitude - given that the wing's flaps cannot deflect enough nor can support as much force to reduce the radius as to the same menauver for a lower velocity.\n\nNote that the centripetal force follows:\n\\[\nF=\\frac{mV^2}{R}\n\\]\nWhere \\textit{m}, \\textit{V} and \\textit{R} are the aircraft's mass (in kg), velocity (in m/s) and radius (in m) respectively. \\\\\n\nThe only angles affected are gamma $\\gamma$ and mu $\\mu$ if performed flawlessly. The former is the angle formed between the \\textit{x} axis of the horizontal set and the \\textit{x} axis of the wind set, while the latter is the one formed between the \\textit{y} axis of the same sets. Note that for the whole maneuver the body set remains consistent with the wind set.\\\\\nFigure \\ref{fig:paramEvolution} shows how these change in the different phases of the Immelmann turn\\\\\n\n\\includegraphics[width=\\linewidth]{../matlab/paramEvolution.jpg}\n\\captionof{figure}{Angles evolution over time}\n\\label{fig:paramEvolution}\n\\vspace{0.5cm}\n\nNote that for $\\gamma$ the evolution is perfectly defined, as the increment of said angle needs to be positive in order to increase the flight altitude. Conversely, for $\\mu$ the pilot could choose to rotate in the opposite direction thus changing $\\mu$ from 0 to $-\\pi$ instead - note that $\\pi$ and $-\\pi$ are equivalent -, and the result would be virtually the same.\\\\\nFurthermore, in order to describe a perfect semicircle the angular speed $\\dot{\\gamma}$ needs to remain constant. This results in the necessary condition of gamma's evolution to be a straight slope. For $\\mu$, however, the pilot could also perform the roll at non-constant rotating speed without this having an impact on the maneuver performance. This would be reflected as a curve on mu's temporal evolution, instead of a straight line.\\\\\n\nAll these\nFor the following computations, some values will be needed in order to numerically solve the Ordinal Differential Equation systems. The ENAERT T-35 Pill\u00e1n, a Chilean military small aircraft and its data \\cite{jane1969jane} have been taken as references.\\\\\n\n\\begin{center}\n\t\\includegraphics[width=0.9\\linewidth]{figures/pillan}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{ENAERT T-35 Pill\u00e1n. Extracted from \\cite{defensa.com_2018}}\\vspace{0.25cm}\n\\end{center}\n\nThe used information is:\n\n\\begin{center}\n\\begin{tabular}{|l|l||l|l|}\\hline\n\t & Value &   & Value\\\\ \\hline \\hline\n\tMass & 1.300 kg& Wingspan & 8.84 m\\\\ \\hline\n\tAirfoil & 63$_3$-414 & Wing area & 13.69 m$^2$ \\\\ \\hline\n\tStall s. & 31.94 m/s &  Cruise s. & 70.84 m/s\\\\ \\hline\n\tThrust &224 kW & Air den. & 1.225 kg/m$^3$ \\\\ \\hline\n\t\\multicolumn{4}{|c|}{$C_L=12\\alpha+0.33$}\\\\ \\hline\t\n\\end{tabular}\n\\end{center}\n\nThe selected initial speed is of 70 m/s as the third phase of the maneuver requires a few less meters per second in order to be performed safely.\n\nFor the totality of the maneuver, the fixated parameters are the gas control lever ($\\pi$) and the elevator ($\\alpha$), which are set as constants at 1.5 kN and 0.3 rad respectively. From the remaining variables, the position coordinates evolution ($\\Dot{x}$ and $\\Dot{z}$) are always left as dependant in order to be solved by the ODE solver. 
\n\\begin{equation} \\label{eq:lift}\n\tL=\\frac{1}{2}S\\rho V(t)^2 C_L(\\alpha)\n\\end{equation}\n\\begin{equation} \\label{eq:drag}\n\tD=\\frac{1}{2}S\\rho V(t)^2 C_D(\\alpha)\n\\end{equation}\n\n\\subsubsection*{Cruise flight}\n\\addcontentsline{toc}{subsection}{Cruise flight}\nFor the first and last phases, both cruise flight, the remaining variable (not counting $\\Dot{x}$ and $\\Dot{z}$) is the velocity (V) alone.\\\\\nAs a fixed thrust and elevator deflection are imposed, there are virtually no temporal changes affecting the velocity. This results in constant speed, i.e. cruise flight, and a simple slope for the x-axis position. When computing both the lift and drag forces, these must be constant too - see equations \\ref{eq:lift} and \\ref{eq:drag} -.\\\\\n\nThe velocity being constant results in both lift and drag being constant too.\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/1/liftdrag.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Lift and drag during cruise flight. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nRegarding the general overview, cruise flight is quite straightforward: both the speed and the vertical position remain constant and, as the speed is constant, the displacement of the aircraft is linear in time.\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/1/1traj.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Trajectory during cruise flight. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nFor the total time, the following integration of the first equation can be done:\n\\begin{align*}\n\tdt =& \\frac{m}{T - D} dV\\\\\n\t\\int dt= & \\int \\frac{m}{T - D} dV\n\t\\intertext{Substituting drag's definition from equation \\ref{eq:drag} and operating for linear aerodynamics,}\n\t= & \\int_{V_0}^{V} \\frac{m}{T - \\frac{1}{2}\\rho S V^2 (C_{D_0}+kC_L^2)} dV\\\\\n\tt_1=& \\frac{m}{T - \\frac{1}{2}\\rho S V^2 (C_{D_0}+kC_L^2)} (V-V_0)\n\\end{align*}\n\n\\subsubsection*{Half loop}\n\\addcontentsline{toc}{subsection}{Half loop}\nIn order to work with the half-loop part of the maneuver, some preliminary analysis of the trajectory geometry is done.\nThe three force configurations during the maneuver correspond to the free body diagrams at stages 1, 2 and 3 (see Figure \\ref{fig:immelmann-overview}). \\vspace{0.5cm}\n\n%\\begin{minipage}{\\textwidth}\n\t\\includegraphics[width=0.3\\linewidth]{figures/free-body-1.pdf} \\hfill\n%\t\\captionof{subfigure}{Angles evolution over time}\n\t\\label{fig:free-body-1}\n\t\t\\includegraphics[width=0.3\\linewidth]{figures/free-body-2.pdf}\\hfill\n%\t\\captionof{subfigure}{Angles evolution over time}\n\t\\label{fig:free-body-2}\n\t\t\\includegraphics[width=0.3\\linewidth]{figures/free-body-3.pdf}\n%\t\\captionof{subfigure}{Angles evolution over time}\n\t\\label{fig:free-body-3}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Free body diagrams during the semiloop. 
Own elaboration.}\\vspace{0.25cm}\n\t\t\\label{fig:3avions}\n%\\end{minipage}\n\nNote that at all times the lift behaves as the centripetal force, thus indicating that both the velocity and radius limitations to perform the Immelmann turn are inherent to the wing design and its maximum and minimum lift generation.\\\\\nAs the motion will be circular, the normal and tangential accelerations will follow:\n\\begin{align*}\n\ta_n=&\\frac{V^2}{R}=V\\dot{\\gamma}=\\dot{\\gamma}^2R&a_t=\\dot{V}\n\\end{align*}\nAs a result, the radius can be written as a function of the angle of attack. By operating with the third equation of system \\ref{eq:semicircle}, which is expressed in the wind reference frame - very convenient, as it is also a polar set of axes:\n\\begin{align*}\n\tL=&m\\left(\\frac{V^2}{R}+g\\cos(\\gamma)\\right)\\\\\n\t\\frac{1}{2}\\rho S V^2 C_L(\\alpha)=&m\\left(\\frac{V^2}{R}+g\\cos(\\gamma)\\right)\\\\\n\tR=&\\frac{2mV^2}{\\rho S V^2 C_L(\\alpha)-2W\\cos(\\gamma)}\n\\end{align*}\nAs the plane must describe a semicircle, the radius must be constant.\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/radius.png}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Radius dependence on $V$ and $\\alpha$. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nFor this phase the evolution of $\\gamma$ will be imposed, using the following conditions:\n\\begin{itemize}\n\t\\item Must begin at 0 and end at $\\pi$ rad\n\t\\item Must have a linear evolution throughout time - in order not to overcomplicate the problem -\n\t\\item Must be compatible with circular accelerations, as the ideal performance of the maneuver would require the half loop to be a semicircle\n\\end{itemize}\n\nFrom these it can clearly be deduced that the basic model for a slope will be required,\n\\[\\gamma(t)=At+B\\]\nand the remaining conditions define the parameters $A$ and $B$ such that the resulting expression is obtained:\n\\begin{align}\n\t\\gamma(t)=&\\frac{V_0}{R}t & &or& \\gamma(t)=&\\frac{V_0}{R}t + \\pi-\\frac{V_0}{R}t_f \n\\end{align}\nwhere the finishing time $t_f$ would be $t_f=\\pi R/V_0$.\\\\\nNote that $V_0$ instead of $V(t)$ has been employed. This is in order to ensure linearity, although a time-dependent velocity could be used too. This would lead to a concatenation of curves for the loop, and not a semicircle.\\\\\n\nThe fixed parameters are therefore again the gas control lever ($\\pi$) and the elevator ($\\alpha$), with the same values as for the previous phase - 1.5 kN and 0.3 rad respectively -, and now also $\\gamma$, so the only variable to study is the velocity. Note that the position coordinates' evolution ($\\Dot{x}$ and $\\Dot{z}$) is explored in the following section: Trajectory analysis.\\\\\n\nFor this phase, the velocity must follow a non-linear evolution, as it is dependent on both trigonometric and squared terms in the equations - see $\\gamma$ and the lift and drag force definitions in equations \\ref{eq:lift} and \\ref{eq:drag}. \\\\\nAs the previous schemes in Figure \\ref{fig:3avions} show, at the vertical-flight point after a quarter of the loop, the velocity slows its rate of increase. This inflection point is the consequence of the weight force switching sides relative to the aircraft's axis at $\\gamma=\\pi/2$, thus forcing a force redistribution in order to maintain equilibrium.\\\\\nAs Figure \\ref{fig:radivelo} shows, it is worth noting too that a higher initial speed solely modifies the placement of the curve in the plot, but has no effect on its characteristics. Although it is not clearly shown in the plot, the time is reduced too. On the contrary, for a fixed velocity, different radii have a more noticeable impact on the performance of the semiloop. As greater radii require more time to get to the quarter-loop point - for the same initial speed -, the local velocity maximum prior to the $\\gamma=\\pi/2$ state is higher the greater the radius. Although both the local maximum and the finishing speed are higher, the greater the radius the longer it takes to complete the half loop.\n
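\nThe half-loop velocity equation can also be integrated numerically with the imposed linear $\\gamma(t)$; the short Python sketch below mirrors the MATLAB computation described in the text (the radius and the drag polar coefficients are illustrative assumptions).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nm, S, rho, g, T = 1300.0, 13.69, 1.225, 9.81, 1500.0\nCL = lambda a: 12 * a + 0.33\nCD0, k = 0.025, 0.05                  # assumed drag polar (illustrative)\nalpha, V0, R = 0.3, 70.0, 400.0       # elevator, initial speed, radius\n\ndef rhs(t, y):\n    V = y[0]\n    gamma = V0 * t / R                # imposed linear evolution of gamma\n    D = 0.5 * rho * S * V ** 2 * (CD0 + k * CL(alpha) ** 2)\n    return [(T - D - m * g * np.sin(gamma)) / m]\n\ntf = np.pi * R / V0                   # gamma sweeps from 0 to pi\nsol = solve_ivp(rhs, (0.0, tf), [V0])\nprint('final speed:', sol.y[0, -1], 'm/s after', tf, 's')\n\\end{verbatim}\n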
\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/radivelo.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Radii and velocity dependencies on the half loop. Own elaboration.}\\vspace{0.25cm}\n\t\\label{fig:radivelo}\n\\end{center}\n\nBoth the lift and drag are affected by this velocity oscillation, but its magnitude is rather small when compared to that of the rest of the parameters.\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/2/liftdrag.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Lift and drag on the half loop. Own elaboration.}\\vspace{0.25cm}\n\t\\label{fig:Ld2}\n\\end{center}\n\nLastly, both position coordinates ($\\Dot{x}$ and $\\Dot{z}$) evolve as expected, that is, highly dependent on $\\gamma$. As this angle ranges from 0 to $\\pi$, both coordinates follow reasonable behaviours. It is also clearly shown that the speed remains fairly constant during this part of the maneuver too.\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/2/trajectory.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Trajectory on the half loop. Own elaboration.}\\vspace{0.25cm}\n\t\\label{fig:2traj}\n\\end{center}\nFor time integration, in this case\n\\begin{align*}\n\t\\begin{cases}\n\t\t\\frac{\\partial V}{\\partial t}=&\\frac{1}{m}\\left(T - D -mg\\sin\\gamma\\right)\\\\\n\t\t\\frac{\\partial \\gamma}{\\partial t}=&\\frac{1}{mV}\\left(L-mg\\cos\\gamma\\right)\n\t\\end{cases}\n\\end{align*}\nAs during this phase the angle $\\gamma$ must evolve from 0 to $\\pi$ rad - for simplification purposes we will work with a linear model - and, as previously stated, the circular motion accelerations have been assumed to apply, we can write:\n\\begin{align*}\n\t\\gamma(t) =& \\frac{V}{R}t & \\frac{\\partial \\gamma(t)}{\\partial t}=\\frac{V}{R}\n\\end{align*}\n\\begin{align*}\n\tdt =& \\frac{m}{T - D -mg\\sin\\gamma} dV\\\\\n\t\\int dt= & \\int \\frac{m}{T - D -mg\\sin\\gamma} dV\n\t\\intertext{Substituting drag's definition from equation \\ref{eq:drag} and operating for linear aerodynamics,}\n\t= & \\int \\frac{m}{T - \\frac{1}{2}\\rho S V^2 (C_{D_0}+kC_L^2) -mg\\sin\\gamma} dV\n\\end{align*}\nwhich, unlike the cruise case, is solved numerically.\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection*{Roll}\n\\addcontentsline{toc}{subsection}{Roll}\nSimilarly to the half loop, for this phase, apart from the known fixed parameters - the gas control lever ($\\pi$) and elevator deflection ($\\alpha$) - more conditioning is needed. In this case, the angle is the roll angle, $\\mu$. 
As previously done for $\\gamma$, the conditions for $\\mu$ are:\n\n\\begin{itemize}\n\t\\item Must begin at $\\pi$ (or -$\\pi$) and end at 0 rad\n\t\\item Must have a linear evolution throughout time - in order not to overcomplicate the problem -\n\\end{itemize}\n\nFrom these it can clearly be deduced that the basic model for a slope will be required,\n\\[\\mu(t)=At+B\\]\nAs we are missing some conditions to determine the rate at which the aircraft should roll, the selection is based on the aircraft's structural capacities. For aerobatic aircraft, full aileron rolls must be performed at approximately 120 knots, with a rotating velocity of 7 m/s at the tip of the wing. With this data and the wingspan we can extract the expressions:\n\\begin{align}\n\t\\mu(t)=&-\\frac{V_{tip}}{b/2}t+\\pi \\\\\n\t\\intertext{or} \n\t\\mu(t)=&-\\frac{V_{tip}}{b/2}t+\\pi-\\frac{V_{tip}}{b/2}t_f\n\\end{align}\nwhere the finishing time $t_f$ would be $t_f=\\pi \\frac{b/2}{V_{tip}}$.\\\\\nIf the evolution of $\\mu$ is considered, though, it can be seen that between $\\pi$ and $\\pi/2$ its cosine is negative. For the equation that yields the velocity - the \\textit{squared} velocity - this is rather problematic.\\\\\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/3/sqrt.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Unsolvable domain of part of the trajectory. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nFor this reason, the computations have been done only for half of the phase, understanding that the remaining quarter of the roll is symmetric.\n\nThe velocity is deeply conditioned by the values that $\\mu$ acquires throughout the roll, even tending to infinity due to the previously mentioned trigonometric properties. In reality, we can assume a small variation of the total lift and drag, which would be oscillating between the lower values of the plots from Figure \\ref{fig:ld3}.\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/3/liftdrag.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Lift and drag during the roll. Own elaboration.}\\vspace{0.25cm}\n\t\\label{fig:ld3}\n\\end{center}\n\nFor the general overview:\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/3/3traj.jpg}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Trajectory output for the roll. Own elaboration.}\\vspace{0.25cm}\n\t\\label{fig:3traj}\n\\end{center}\n\nThe total velocity and $x(t)$ would continue symmetrically; the velocity would rapidly increase again and the position would exit the inflection point to keep increasing again.\\\\\n\nTemporal integration is identical to that of the cruise phase, as the roll does not have an effect on the temporal derivatives.\n\\begin{align*}\n\tdt =& \\frac{m}{T - D} dV\\\\\n\t\\int dt= & \\int \\frac{m}{T - D} dV\n\t\\intertext{Substituting drag's definition from equation \\ref{eq:drag} and operating for linear aerodynamics,}\n\t= & \\int_{V_0}^{V} \\frac{m}{T - \\frac{1}{2}\\rho S V^2 (C_{D_0}+kC_L^2)} dV\\\\\n\tt_1=& \\frac{m}{T - \\frac{1}{2}\\rho S V^2 (C_{D_0}+kC_L^2)} (V-V_0)\n\\end{align*}\n\nNote that this phase of the maneuver is perhaps the most idealised, as a roll is an unbalanced maneuver. Because of the aircraft's stability design, adverse yaw appears from the very beginning. In a nutshell, an aircraft performing an aileron roll will actually fly along a slightly helical path, and a very light, positive g force will be maintained.\n
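\nAs a quick check of the roll timing, the rate and duration implied by the wing-tip speed and the wingspan from the table can be computed directly (Python):\n\\begin{verbatim}\nimport numpy as np\n\nb = 8.84                      # wingspan [m]\nV_tip = 7.0                   # rotating speed at the wing tip [m/s]\nmu_dot = V_tip / (b / 2)      # roll rate [rad/s]\ntf = np.pi / mu_dot           # time for mu to sweep from pi to 0\nprint(mu_dot, 'rad/s;', tf, 's')\n\\end{verbatim}\n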
\n\\subsection*{Trajectory analysis} \n\\addcontentsline{toc}{section}{Trajectory analysis}\nThe expected result of the trajectory is portrayed in the following image:\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/trajectory.png}\n\t\\vspace{0.5cm}\n\t\\captionof{figure}{Expected trajectory output. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nWe can consider the maneuver coordinates:\n\\begin{align*}\n\t\\dot{x}_e=&V\\cos\\gamma&&&\tx_e=&Vt\\cos\\gamma \\\\\n\t\\dot{y}_e=&0& \\rightarrow&&\ty_e=&0\\\\\n\t\\dot{z}_e=&-V\\sin\\gamma&&&z_e=&-Vt\\sin\\gamma\n\\end{align*}\n\nFrom the computational results obtained from the ODE solvers, the following plots represent the trajectory:\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/t1.jpg}\n\t\\vspace{0.25cm}\n\t\\captionof{figure}{Trajectory output for cruise flight. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/t2.jpg}\n\t\\vspace{0.25cm}\n\t\\captionof{figure}{Trajectory output for the semiloop. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\n\\begin{center}\n\t\\includegraphics[width=\\linewidth]{../matlab/t3.jpg}\n\t\\vspace{0.25cm}\n\t\\captionof{figure}{Trajectory output for the roll. Own elaboration.}\\vspace{0.25cm}\n\\end{center}\n\nThe obtained results fit perfectly with the expected output.", "meta": {"hexsha": "e05c0f436e5cb6f02314a1c2c91c24937af84202", "size": 17181, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/maneuver-study.tex", "max_stars_repo_name": "isimo00/immelmann-turn", "max_stars_repo_head_hexsha": "1b3f9b02e575a8e523cdf6c30d2d62c2dbfb1fce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/maneuver-study.tex", "max_issues_repo_name": "isimo00/immelmann-turn", "max_issues_repo_head_hexsha": "1b3f9b02e575a8e523cdf6c30d2d62c2dbfb1fce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/maneuver-study.tex", "max_forks_repo_name": "isimo00/immelmann-turn", "max_forks_repo_head_hexsha": "1b3f9b02e575a8e523cdf6c30d2d62c2dbfb1fce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.3607142857, "max_line_length": 715, "alphanum_fraction": 0.7443687795, "num_tokens": 5009, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872131147276, "lm_q2_score": 0.7431680029241321, "lm_q1q2_score": 0.5566976481864758}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\section{Effects of GW on test masses}\n\n\\marginpar{Monday\\\\ 2021-5-3, \\\\ compiled \\\\ \\today}\n\nConsider two masses, separated by a distance \\(L\\). \nThe distance between them can be measured as \\(L = (c/2) \\Delta t_{P P''}\\), where \\(P P''\\) is the trajectory of a light beam moving from \\(P\\) to \\(Q\\) and back to \\(P\\).\n\nIn flat spacetime, the separation vector between them can then be written as \\(h^{i} = L_0 n^i\\) for some unit vector \\(n^i\\). \n\nIn the presence of a GW, the length will read  \n%\n\\begin{align}\nL^2 \n&= g_{\\mu \\nu } \n\\qty(x^{\\mu }_{q'} - x^{\\mu }_{p'})\n\\qty(x^{\\nu }_{q'} - x^{\\nu }_{p'})   \\\\\n&\\to g_{\\mu \\nu } \n\\qty(x^{i }_{q'} - x^{i }_{p'})\n\\qty(x^{j }_{q'} - x^{j }_{p'})  \\\\\n&\\to g_{\\mu \\nu } \nx^{i }_{q'}\nx^{j }_{q'}    \\\\\n&\\to \\qty(\\delta_{ij} + h_{ij}^{TT}) x^{i}_{q'} x^{j}_{q'} \n= L_0^2 \\qty(\\delta_{ij} + h_{ij}^{TT}) n^i n^j\n\\,,\n\\end{align}\n%\nand so we can compute \n%\n\\begin{align}\n\\frac{ \\delta L}{L_0 } = \\frac{L}{L_0 } - 1 = \\sqrt{1 + h_{ij}^{TT} n^i n^j} - 1 \\approx \\frac{1}{2} h_{ij}^{TT} n^i n^j\n\\,.\n\\end{align}\n\nThis justifies the heuristic formula \\(\\delta L / L_0 \\sim h\\). \n\nA more formal treatment can be given through the \\textbf{geodesic deviation} formula: we can show that if \\(u^{\\mu }\\) is the tangent vector of a family of geodesics and \\(s^{\\mu }\\) is the displacement between geodesics, then\n%\n\\begin{align}\nu^{\\mu } \\nabla_{\\mu } \\qty( u^{\\nu } \\nabla_{\\nu } s^{\\alpha })\n= R^{\\alpha }_{\\lambda \\rho \\sigma } u^{\\lambda } u^{\\rho } s^{\\sigma }\n\\,,\n\\end{align}\n%\nand in the weak field limit the Riemann tensor is in the form \\(\\partial^2 h\\).\n\nIf we plug in everything (as we did in the exercise) we get \n%\n\\begin{align}\n\\dv[2]{s_\\alpha }{t} = R_{\\alpha 00 \\mu } s^{\\mu }\n\\qquad \\text{with} \\qquad\nR_{\\alpha 00 \\mu } = \\frac{1}{2} \\ddot{h}_{\\alpha \\mu }^{TT}\n\\,.\n\\end{align}\n\nThe temporal evolution of the spatial vector then reads \n%\n\\begin{align}\n\\ddot{s}^{i}(t) = \\frac{1}{2} \\ddot{h}^{TT}_{ij} (t) s_0^{j} + \\order{h^2}\n\\,,\n\\end{align}\n%\nso if initially \\(\\dot{s}^{i} (t=0)\\) and \\(s^{i}(t=0) = s^{i}_0\\) we get \n%\n\\begin{align}\ns^{i}(t) = s^{i}_0 + \\frac{1}{2} h^{TT}_{ij} (t) s^{j}_{0}\n= \\qty(\\delta_{ij} + \\frac{1}{2} h^{TT}_{ij}(t)) s^{j}_{0} \n\\,.\n\\end{align}\n\nA ring of particles in the \\(xy\\) plane is deformed by a wave travelling along the \\(z\\) axis: it becomes an ellipse with axes along the \\(x\\) and \\(y\\) direction for the \\(h_+\\) polarization, \n%\n\\begin{align}\n\\frac{ \\delta x^2}{r_0^2 (1 + h_+)^2} +\n\\frac{ \\delta y^2}{r_0^2 (1 - h_+)^2} = 1\n\\,.\n\\end{align}\n\nThe effect of the cross polarization is similar but rotated by \\SI{45}{\\degree}. \n\n\\section{Sources of GW}\n\nWe start from a formal solution of \\(\\square \\overline{h}_{\\mu \\nu }= - 16 \\pi T_{\\mu \\nu }\\): like in electromagnetism, we use Green functions, \n%\n\\begin{align}\n\\overline{h}_{\\mu \\nu } (t, \\vec{x}) = - 16 \\pi \\int G_R (x^{\\alpha } - x^{\\alpha \\prime}) T_{\\mu \\nu } (x^{\\alpha \\prime}) \\dd[4]{x'} \n\\,,\n\\end{align}\n%\nwhere \\(\\square G_R (x) = \\delta^{(4)} (x)\\); explicitly \n%\n\\begin{align}\nG_R(x) = - \\frac{1}{4 \\pi } \\frac{ \\delta (u - t)}{\\abs{\\vec{x}}}\n\\,.\n\\end{align}\n%\nwhere \\(u\\) is the retarded time. 
defined as \\(u = t - \\abs{\\vec{x} - \\vec{x}'}/c\\), the time of emission.\n\nWith this, we get \n%\n\\begin{align}\n\\overline{h}_{\\mu \\nu } (t, \\vec{x}) &= 4 \\int \\frac{T_{\\mu \\nu } (u, \\vec{x}')}{\\abs{\\vec{x} - \\vec{x}'}} \\dd[3]{x'}\n\\,.\n\\end{align}\n\nThe assumptions we can make are the following: a mean-field approach, negligible self-gravity (this means that the quantity \\(2GM / c^2R = R_S / R \\ll 1\\)). \n\nAlso, in order to derive the quadrupole formula, we assume that the distance from us to the source is large compared to the scale of the source and that the velocity of the source is slow compared to \\(c\\). \n\nThe result we will find is that we can compute \n%\n\\begin{align}\n\\overline{h}_{ij} (t, \\vec{x}) = \\frac{4}{r} \\int \\dd[3]{x'} T_{ij} \\qty(t - \\frac{r}{c} + \\frac{\\hat{n} \\cdot \\vec{x}'}{c} , \\vec{x}')\n\\,,\n\\end{align}\n%\nand we can further simplify the integrand by expanding in \\(x' / r\\). \n\n\n\n\\end{document}\n", "meta": {"hexsha": "09f9edc91e7081b311a285a20fc8bd9f4cbad0a4", "size": 4072, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "phd_courses/gravitational_waves/may03.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "phd_courses/gravitational_waves/may03.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "phd_courses/gravitational_waves/may03.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 32.576, "max_line_length": 226, "alphanum_fraction": 0.5997053045, "num_tokens": 1569, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125848754471, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5566204395656345}}
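As a check on the retarded-time argument in the last formula of the record above (this expansion is standard and not part of the notes themselves), for \(\abs{\vec{x}'} \ll r = \abs{\vec{x}}\) and \(\hat{n} = \vec{x}/r\) one has
%
\begin{align}
\abs{\vec{x} - \vec{x}'} = r - \hat{n} \cdot \vec{x}' + \order{\frac{x'^2}{r}}
\qquad \implies \qquad
t - \frac{\abs{\vec{x} - \vec{x}'}}{c} \approx t - \frac{r}{c} + \frac{\hat{n} \cdot \vec{x}'}{c}
\,,
\end{align}
%
which is exactly the argument of \(T_{ij}\) in the integrand.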
{"text": "\\section{Maximum entropy approximation of distributions}\\label{supp_maxent}\n\n(Note: The Python code used for the calculations presented in this section can\nbe found in the\n\\href{https://www.rpgroup.caltech.edu//chann_cap/software/MaxEnt_approx_joint.html}{following\nlink} as an annotated Jupyter notebook)\n\nOn the one hand the solution of chemical master equations like the one in\n\\secref{sec_model} represent a hard mathematical challenge. As presented in\n\\siref{supp_param_inference} Peccoud and Ycart derived a closed-form solution\nfor the two-state promoter at the mRNA level \\cite{Peccoud1995}. In an\nimpressive display of mathematical skills, Shahrezaei and Swain were able to\nderive an approximate solution for the one- (not considered in this work) and\ntwo-state promoter master equation at the protein level \\cite{Shahrezaei2008}.\nNevertheless both of these solutions do not give instantaneous insights about\nthe distributions as they involve complicated terms such as confluent\nhypergeometric functions.\n\nOn the other hand there has been a great deal of work to generate methods that\ncan approximate the solution of these discrete state Markovian models\n\\cite{Ale2013, Andreychenko2017, Frohlich2016, Schnoerr2017, Smadbeck2013}. In\nparticular for master equations like the one that concerns us here whose\nmoments can be easily computed, the moment expansion method provides a simple\nmethod to approximate the full joint distribution of mRNA and protein\n\\cite{Smadbeck2013}. In this section we will explain the principles behind this\nmethod and show the implementation for our particular case study.\n\n\\subsection{The MaxEnt principle}\n\nThe principle of maximum entropy (MaxEnt) first proposed by E. T. Jaynes in\n1957 tackles the question of given limited information what is the least biased\ninference one can make about a particular probability distribution\n\\cite{Jaynes1957}. In particular Jaynes used this principle to show the\ncorrespondence between statistical mechanics and information theory,\ndemonstrating, for example, that the Boltzmann distribution is the probability\ndistribution that maximizes Shannon's entropy subject to a constraint that the\naverage energy of the system is fixed.\n\nTo illustrate the principle let us focus on a univariate distribution $P_X(x)$.\nThe $n^{\\text{th}}$ moment of the distribution for a discrete set of possible\nvalues of $x$ is given by\n\\begin{equation}\n  \\ee{x^n} \\equiv \\sum_x x^n P_X(x).\n  \\label{eq_mom_ref}\n\\end{equation}\n\nNow assume that we have knowledge of the first $m$ moments $\\bb{\\ee{x}}_m = (\n\\ee{x}, \\ee{x^2}, \\ldots, \\ee{x^m} )$. The question is then how can we use this\ninformation to build an estimator $P_H(x \\mid \\bb{\\ee{x}}_m)$ of the\ndistribution\nsuch that\n\\begin{equation}\n  \\lim_{m \\rightarrow \\infty} P_H(x \\mid \\bb{\\ee{x}}_m) \\rightarrow P_X(x),\n\\end{equation}\ni.e. that the more moments we add to our approximation, the more the estimator\ndistribution converges to the real distribution.\n\nThe MaxEnt principle tells us that our best guess for this estimator is to\nbuild it on the base of maximizing the Shannon entropy, constrained by the\ninformation we have about these $m$ moments. The maximization of Shannon's\nentropy guarantees that we are the least committed possible to information that\nwe do not posses. 
The Shannon entropy for a univariate discrete distribution\nis given by \\cite{Shannon1948}\n\\begin{equation}\n  H(x) \\equiv - \\sum_x P_X(x) \\log P_X(x).\n\\end{equation}\n\nFor an optimization problem subject to constraints we make use of the method of\nLagrange multipliers. For this we define the constraint equation\n$\\mathcal{L}(x)$ as\n\\begin{equation}\n  \\mathcal{L}(x) \\equiv H(x) + \\sum_{i=0}^m\n  \\left[ \\lambda_i \\left( \\ee{x^i} - \\sum_x x^i P_X(x) \\right) \\right],\n  \\label{seq_constraint_eq}\n\\end{equation}\nwhere $\\lambda_i$ is the Lagrange multiplier associated with the $i\\th$ moment.\nThe inclusion of the zeroth moment is an additional constraint to guarantee the\nnormalization of the resulting distribution. Since $P_X(x)$ has a finite set of\ndiscrete values, when taking the derivative of the constraint equation with\nrespect to $P_X(x)$, we choose a particular value of $X = x$. Therefore from the\nsum over all possible $x$ values only a single term survives. With this in mind\nwe take the derivative of the constraint equation obtaining\n\\begin{equation}\n  {d\\mathcal{L} \\over d P_X(x)} = -\\log P_X(x) - 1 -\n  \\sum_{i=0}^m \\lambda_i x^i.\n\\end{equation}\n\nEquating this derivative to zero and solving for the distribution (that we now\nstart calling $P_H(x)$, our MaxEnt estimator) gives\n\\begin{equation}\n  P_H(x) = \\exp \\left(- 1 - \\sum_{i=0}^m \\lambda_i x^i \\right)\n         ={1 \\over \\mathcal{Z}}\n         \\exp \\left( - \\sum_{i=1}^m \\lambda_i x^i \\right),\n  \\label{eq_maxEnt}\n\\end{equation}\nwhere $\\mathcal{Z}$ is the normalization constant that can be obtained by\nsubstituting this solution into the normalization constraint. This results in\n\\begin{equation}\n  \\mathcal{Z} \\equiv \\exp\\left( 1 + \\lambda_0 \\right) =\n  \\sum_x \\exp \\left( - \\sum_{i=1}^m \\lambda_i x^i \\right).\n\\end{equation}\n\n\\eref{eq_maxEnt} is the general form of the MaxEnt distribution for a\nunivariate distribution. The computational challenge then consists in finding\nnumerical values for the Lagrange multipliers $\\{ \\lambda_i \\}$ such that\n$P_H(x)$ satisfies our constraints. In other words, the Lagrange multipliers\nweight the contribution of each term in the exponent such that when computing\nany of the moments we recover the value of our constraint. Mathematically what\nthis means is that $P_H(x)$ must satisfy\n\\begin{equation}\n  \\sum_x x^n P_H(x) =\n  \\sum_x {x^n \\over \\mathcal{Z}}\n  \\exp \\left( - \\sum_{i=1}^m \\lambda_i x^i \\right) = \\ee{x^n}.\n\\end{equation}\n\nAs an example of how to apply the MaxEnt principle let us use the classic\nproblem of a six-face die. If we are only told that after a large number of die\nrolls the mean value of the face is $\\ee{x} = 4.5$ (note that a fair die has a\nmean of $3.5$), what would the least biased guess for the distribution look\nlike? The MaxEnt principle tells us that our best guess would be of the form\n\\begin{equation}\n  P_H(x) = {1 \\over \\mathcal{Z}} \\exp \\left( \\lambda x \\right).\n\\end{equation}\nUsing any numerical minimization package we can easily find the value of the\nLagrange multiplier $\\lambda$ that satisfies our constraint.\n\\fref{fig_maxent_die} shows two examples of distributions that satisfy the\nconstraint. Panel (A) shows a distribution consistent with the 4.5 average\nwhere both 4 and 5 are equally likely. Nevertheless, the information we were\ngiven about the die never stated that some of the faces were\nforbidden. 
In that sense the distribution is committing to information about\nthe process that we do not possess. Panel (B), by contrast, shows the MaxEnt\ndistribution that satisfies this constraint. Since this distribution maximizes\nShannon's entropy it is guaranteed to be the least biased distribution given\nthe available information.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS15.pdf}\n\t\\caption{\\textbf{Maximum entropy distribution of six-face die.} (A) Biased\n  distribution consistent with the constraint $\\ee{x} = 4.5$. (B) MaxEnt\n  distribution also consistent with the constraint.}\n  \\label{fig_maxent_die}\n\\end{figure}\n\n\\subsubsection{The mRNA and protein joint distribution}\n\nThe MaxEnt principle can easily be extended to multivariate distributions. For\nour particular case we are interested in the mRNA and protein joint\ndistribution\n$P(m, p)$. The definition of a moment $\\ee{m^x p^y}$ is a natural extension of\n\\eref{eq_mom_ref} of the form\n\\begin{equation}\n  \\ee{m^x p^y} = \\sum_m \\sum_p m^x p^y P(m, p).\n\\end{equation}\n\nAs a consequence, the MaxEnt joint distribution $P_H(m, p)$ is of the form\n\\begin{equation}\n  P_H(m, p) = {1 \\over \\mathcal{Z}}\n              \\exp \\left( - \\sum_{(x,y)} \\lambda_{(x,y)} m^x p^y \\right),\n  \\label{seq_maxEnt_joint}\n\\end{equation}\nwhere $\\lambda_{(x,y)}$ is the Lagrange multiplier associated with the moment\n$\\ee{m^x p^y}$, and again $\\mathcal{Z}$ is the normalization constant given by\n\\begin{equation}\n  \\mathcal{Z} = \\sum_m \\sum_p\n              \\exp \\left( - \\sum_{(x, y)} \\lambda_{(x, y)} m^x p^y \\right).\n\\end{equation}\nNote that the sum in the exponent is taken over all available $(x, y)$ pairs\nthat define the moment constraints for the distribution.\n\n\\subsection{The Bretthorst rescaling algorithm}\n\nThe determination of the Lagrange multipliers suffers from numerical underflow\nand overflow problems due to the difference in magnitude between the\nconstraints. This becomes a problem when higher moments are taken into account.\nThe resulting numerical values for the Lagrange multipliers end up being\nseparated by several orders of magnitude. For routines such as Newton-Raphson,\nor other minimization algorithms that can be used to find these Lagrange\nmultipliers, these different scales become problematic.\n\nTo get around this problem we implemented a variation of the algorithm due to\nG. Larry Bretthorst, E.T. Jaynes' last student. With a very simple argument we\ncan show that if we linearly rescale the constraints, the Lagrange multipliers\nand the ``rules'' for how to compute each of the moments, i.e. each of the\nindividual products that go into the moment calculation, we recover the same\nMaxEnt distribution. In order to see this let's consider again a\nunivariate distribution $P_X(x)$ that we are trying to reconstruct given the\nfirst two moments $\\ee{x}$ and $\\ee{x^2}$. The MaxEnt distribution can be\nwritten as\n\\begin{equation}\n  P_H(x) = {1 \\over \\mathcal{Z}}\n  \\exp \\left(- \\lambda_1 x - \\lambda_2 x^2 \\right) =\n  {1 \\over \\mathcal{Z}}\n  \\exp \\left(- \\lambda_1 x \\right) \\exp \\left( - \\lambda_2 x^2 \\right).\n\\end{equation}\nWe can always rescale the terms in any way and obtain the same result. Let's\nsay that for some reason we want to rescale the quadratic terms by a factor\n$a$. 
We can define a new Lagrange multiplier $\\lambda_2' \\equiv {\\lambda_2\n\\over a}$ that compensates for the rescaling of the terms, obtaining\n\\begin{equation}\n  P_H(x) = {1 \\over \\mathcal{Z}}\n  \\exp \\left(- \\lambda_1 x \\right) \\exp \\left( - \\lambda_2' ax^2 \\right).\n\\end{equation}\nComputationally it might be more efficient to find the numerical value of\n$\\lambda_2'$ rather than $\\lambda_2$, for example because it is of the same\norder of magnitude as $\\lambda_1$. Then we can always multiply $\\lambda_2'$ by\n$a$ to recover the original Lagrange multiplier for our quadratic term. What\nthis means is that we can always rescale the MaxEnt problem to make it\nnumerically more stable, and then rescale back to obtain the value of the\noriginal Lagrange multipliers. The key to the Bretthorst algorithm lies in the\nselection of what rescaling factor to choose in order to make the numerical\ninference more efficient.\n\nBretthorst's algorithm goes even further, transforming the constraints and the\nvariables to make the constraints orthogonal, which makes the computation much\nmore effective. We now explain the implementation of the algorithm for our\njoint distribution of interest $P(m, p)$.\n\n\\subsubsection{Algorithm implementation}\n\nLet the $M \\times N$ matrix $\\bb{A}$ contain all the factors used to compute\nthe moments that serve as constraints, where each entry is of the form\n\\begin{equation}\n  A_{ij} = m_i^{x_j} \\cdot p_i^{y_j}.\n  \\label{seq_maxent_rules}\n\\end{equation}\nIn other words, recall that to obtain any moment $\\ee{m^x p^y}$ we compute\n\\begin{equation}\n  \\ee{m^x p^y} = \\sum_m \\sum_p m^x p^y P(m, p).\n\\end{equation}\nIf we have $M$ possible $(m, p)$ pairs in our truncated sample space (because\nwe can't include the sample space up to infinity) $\\{(m, p)_1, (m, p)_2, \\ldots,\n(m, p)_M \\}$, and we have $N$ exponent pairs $(x, y)$ corresponding to the $N$\nmoments used to constrain the maximum entropy distribution $\\{(x, y)_1, (x,\ny)_2, \\ldots, (x, y)_N \\}$, then matrix $\\bb{A}$ contains all the possible $M$\nby $N$ terms of the form described in \\eref{seq_maxent_rules}. Let also\n$\\bb{v}$ be a vector of length $N$ containing all the constraints with each\nentry of the form\n\\begin{equation}\n  v_j = \\ee{m^{x_j} p^{y_j}},\n\\end{equation}\ni.e. the information that we have about the distribution. That means that the\nconstraint equation $\\mathcal{L}$ to be used for this problem takes the form\n\\begin{equation}\n  \\mathcal{L} = -\\sum_i P_i \\ln P_i + \\lambda_0 \\left( 1 - \\sum_i P_i \\right)\n  + \\sum_{j>0} \\lambda_j \\left( v_j - \\sum_i A_{ij} P_i \\right),\n\\end{equation}\nwhere $\\lambda_0$ is the Lagrange multiplier associated with the normalization\nconstraint, and $\\lambda_j$ is the Lagrange multiplier associated with the\n$j\\th$ constraint. This constraint equation is equivalent to\n\\eref{seq_constraint_eq}, but now all the details of how to compute the moments\nare specified in matrix $\\bb{A}$.\n\nWith this notation in hand we now proceed to rescale the problem. The first\nstep consists of rescaling the terms to compute the entries of matrix $\\bb{A}$.\nAs mentioned before, this is the key feature of the Bretthorst algorithm; the\nparticular choice of rescaling factor used in the algorithm empirically tends\nto bring the rescaled Lagrange multipliers to the same order of\nmagnitude. 
The rescaling takes the form\n\\begin{equation}\n  A_{ij}' = {A_{ij} \\over G_j},\n\\end{equation}\nwhere $G_j$ serves to rescale the moments, providing numerical stability to the\ninference problem. Bretthorst proposes an empirical rescaling that satisfies\n\\begin{equation}\nG_j^2 = \\sum_i A_{ij}^2,\n\\end{equation}\nor in terms of our particular problem\n\\begin{equation}\nG_j^2 = \\sum_m \\sum_p \\left( m^{x_j} p^{y_j} \\right)^2.\n\\end{equation}\nWhat this indicates is that each pair $m_i^{x_j} p_i^{y_j}$ is normalized by\nthe square root of the sum of all pairs of the same form squared.\n\nSince we rescale the factors involved in computing the constraints, the\nconstraints must also be rescaled simply as\n\\begin{equation}\nv_j' = \\ee{m^{x_j} p^{y_j}}' = {\\ee{m^{x_j} p^{y_j}} \\over G_j}.\n\\end{equation}\nThe Lagrange multipliers must compensate for this rescaling, since in the end\nthe probability must add up to the same value. Therefore we rescale the\n$\\lambda_j$ terms as\n\\begin{equation}\n\\lambda_j' = \\lambda_j G_j,\n\\end{equation}\nsuch that $\\lambda_j A_{ij} = \\lambda_j' A_{ij}'$ for all $i, j$. If this\nempirical value for the rescaling factor makes the rescaled Lagrange\nmultipliers $\\lambda_j'$ be of the same order of magnitude, this by itself\nalready improves the algorithm convergence. Bretthorst proposes another linear\ntransformation to make the optimization routine even more efficient. For this\nwe generate orthogonal constraints that make Newton-Raphson and similar\nalgorithms converge faster. The transformation is as follows\n\\begin{equation}\n  A_{ik}'' = \\sum_j {e}_{jk} A_{ij}',\n\\end{equation}\nfor the entries of matrix $\\bb{A}$, and\n\\begin{equation}\n  v_k'' = \\sum_j {e}_{jk} v_j',\n\\end{equation}\nfor entries of the constraint vector $\\bb{v}$, finally\n\\begin{equation}\n  \\lambda_k'' = \\sum_j {e}_{jk} \\lambda_j',\n\\end{equation}\nfor the Lagrange multipliers. Here ${e}_{jk}$ is the $j\\th$ component\nof the $k\\th$ eigenvector of the matrix $\\bb{E}$ with entries\n\\begin{equation}\n  {E}_{kj} = \\sum_i {A}_{ik}' {A}_{ij}'.\n\\end{equation}\nThis transformation guarantees that the matrix $\\bb{A}''$ has the property\n\\begin{equation}\n  \\sum_i A_{ij}'' A_{ik}'' = \\beta_j \\delta_{jk},\n\\end{equation}\nwhere $\\beta_j$ is the $j\\th$ eigenvalue of the matrix $\\bb{E}$ and\n$\\delta_{jk}$ is the Kronecker delta. What this means is that, as\ndesired, the constraints are orthogonal to each other, improving the algorithm\nconvergence speed.\n\n\\subsection{Predicting distributions for simple repression constructs}\n\nHaving explained the theoretical background along with the practical\ndifficulties and a workaround strategy proposed by Bretthorst, we implemented\nthe inference using the moments obtained from averaging over the variability\nalong the cell cycle (see \\siref{supp_multi_gene}). \\fref{fig_pmf_mRNA} and\n\\fref{fig_pmf_protein} present these inferences for both mRNA and protein\nlevels respectively for different values of the repressor-DNA binding energy\nand repressor copy numbers per cell. From these plots we can easily appreciate\nthat although the mean of each distribution changes as the induction level\nchanges, there is a lot of overlap between distributions. 
As a consequence, at the single-cell level cells cannot perfectly resolve\ndifferent inputs.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS16.pdf}\n\t\\caption{\\textbf{Maximum entropy mRNA distributions for simple repression\n\tconstructs.} mRNA distributions for different biophysical parameters. From\n\tleft to right the repressor-DNA affinity decreases as defined by the three\n\tlacI operators O1 ($-15.3 \\; k_BT$), O2 ($-13.9 \\; k_BT$), and O3 ($-9.7 \\;\n\tk_BT$). From top to bottom the mean repressor copy number per cell increases.\n\tThe curves on each plot represent different IPTG concentrations. Each\n\tdistribution was fitted using the first three moments of the mRNA\n\tdistribution.}\n  \\label{fig_pmf_mRNA}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS17.pdf}\n\t\\caption{\\textbf{Maximum entropy protein distributions for simple repression\n\tconstructs.} Protein distributions for different biophysical parameters. From\n\tleft to right the repressor-DNA affinity decreases as defined by the three\n\tlacI operators O1 ($-15.3 \\; k_BT$), O2 ($-13.9 \\; k_BT$), and O3 ($-9.7 \\;\n\tk_BT$). From top to bottom the mean repressor copy number per cell increases.\n\tThe curves on each plot represent different IPTG concentrations. Each\n\tdistribution was fitted using the first six moments of the protein\n\tdistribution.}\n  \\label{fig_pmf_protein}\n\\end{figure}\n\n\\subsection{Comparison with experimental data}\n\nNow that we have reconstructed an approximation of the probability distribution\n$P(m, p)$, we can compare it with our experimental measurements. But, as\ndetailed in \\siref{supp_theory_vs_data_mom}, the single-cell microscopy\nmeasurements are given in arbitrary units of fluorescence. Therefore we cannot\ndirectly compare our predicted protein distributions with these values. To get\naround this issue we use the fact that the fold-change in gene expression,\ndefined as the ratio of the expression level in the presence of the repressor\nto the expression level of a knockout strain, is a non-dimensional quantity.\nTherefore we normalize all of our single-cell measurements by the mean\nfluorescence value of the $\\Delta lacI$ strain, with the proper background\nfluorescence subtracted, as explained in \\siref{supp_theory_vs_data_mom} for\nthe noise measurements. In the case of the theoretical predictions of the\nprotein distribution we also normalize each protein value by the predicted mean\nprotein level $\\ee{p}$, so that both scales are non-dimensional and can be\ndirectly compared. \\fref{sfig_cdf_delta} shows the experimental (color curves)\nand theoretical (dark dashed line) cumulative distribution functions for the\nthree $\\Delta lacI$ strains. As in \\fref{sfig_noise_delta}, we do not expect\ndifferences between the operators, but we explicitly plot them separately to\nmake sure that this is the case. We can see right away that, as expected given\nthe limitations of the model in capturing the noise and skewness of the\ndistribution, the model does not accurately predict the data. Our model\npredicts a narrower distribution compared to what we measured with single-cell\nmicroscopy. \n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS18.pdf}\n\t\\caption{\\textbf{Experiment vs. 
theory comparison for $\\Delta lacI$ strain.}\n  Example fold-change empirical cumulative distribution functions (ECDF) for\n  strains with no repressors and different operators. The color curves\n  represent single-cell microscopy measurements while the dashed black lines\n  represent the theoretical distributions as reconstructed by the maximum\n  entropy principle. The theoretical distributions were fitted using the first\n  six moments of the protein distribution.}\n  \\label{sfig_cdf_delta}\n\\end{figure}\n\nThe same narrower prediction applies to the regulated promoters.\n\\fref{sfig_cdf_reg} shows the theory-experiment comparison of the cumulative\ndistribution functions for different repressor binding sites (different\nfigures), repressor copy numbers (rows), and inducer concentrations (columns).\nIn general, the predictions are systematically narrower than the actual\nexperimental data.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS19.pdf}\n\t\\caption{\\textbf{Experiment vs. theory comparison for regulated promoters.}\n  Example fold-change empirical cumulative distribution functions (ECDF) for\n  regulated strains with the three operators (different colors) as a function\n  of repressor copy numbers (rows) and inducer concentrations (columns). The\n  color curves represent single-cell microscopy measurements while the dashed\n  black lines represent the theoretical distributions as reconstructed by the\n  maximum entropy principle. The theoretical distributions were fitted using\n  the first six moments of the protein distribution.}\n  \\label{sfig_cdf_reg}\n\\end{figure}", "meta": {"hexsha": "3819f4f30038eea9dbe688615c2e101426592709", "size": 21305, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/appendix_maxent.tex", "max_stars_repo_name": "RPGroup-PBoC/chann_cap", "max_stars_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-08-21T04:06:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T07:36:58.000Z", "max_issues_repo_path": "doc/appendix_maxent.tex", "max_issues_repo_name": "RPGroup-PBoC/chann_cap", "max_issues_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/appendix_maxent.tex", "max_forks_repo_name": "RPGroup-PBoC/chann_cap", "max_forks_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-04-29T17:43:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-09T00:20:16.000Z", "avg_line_length": 51.0911270983, "max_line_length": 93, "alphanum_fraction": 0.7691621685, "num_tokens": 5579, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597972, "lm_q2_score": 0.7401743735019594, "lm_q1q2_score": 0.5566204356482539}}
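As a concrete illustration of the univariate MaxEnt machinery in the record above, the following is a minimal, self-contained sketch for the six-face-die example with mean 4.5 (it assumes NumPy and SciPy and is independent of the annotated notebook linked in that appendix; with a single constraint the problem reduces to one-dimensional root finding, so no Bretthorst rescaling is needed):

\begin{verbatim}
# Minimal sketch: MaxEnt distribution of a six-face die with <x> = 4.5.
# Uses the general form P_H(x) = exp(-lambda * x) / Z from the appendix,
# so the single constraint fixes a single Lagrange multiplier.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def mean_for(lam):
    weights = np.exp(-lam * faces)
    return np.sum(faces * weights) / np.sum(weights)

# Solve <x>(lambda) = 4.5; the mean is monotone in lambda, so a
# bracketing root finder suffices.
lam = brentq(lambda l: mean_for(l) - 4.5, -5.0, 5.0)
p_h = np.exp(-lam * faces)
p_h /= p_h.sum()
# lam comes out negative: larger faces must be up-weighted to push
# the mean above the fair-die value of 3.5.
\end{verbatim}

With several moments the same idea becomes a multidimensional optimization over the set of Lagrange multipliers, which is exactly where the Bretthorst rescaling described in the appendix becomes important.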
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n% Master Thesis in Mathematics\n% \"Immersions and Stiefel-Whitney classes of Manifolds\"\n% -- Chapter 2: General Obstructions by Characteristic Classes --\n% \n% Author: Gesina Schwalbe\n% Supervisor: Georgios Raptis\n% University of Regensburg 2018\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n\n\\chapter{Results on the Stiefel-Whitney Classes of Manifolds}\n\\label{chap:massey}\n% Review the theory of Stiefel-Whitney classes and the properties of the\n% Steenrod algebra (as needed). Give a careful and detailed exposition of\n% the proof of Massey\u2019s theorem. Explain that the estimate in Massey\u2019s\n% theorem is the best possible in general.\n% [massey], [immersionconj]\nRecall from Corollary~\\ref{cor:obstruction} that a manifold can\nonly immerse in codimension $k$ if its dual Stiefel-Whitney classes in\ndegrees higher than $k$ are zero.\nThis chapter is dedicated to the proof of a theorem by Massey\n\\cite{massey}, which says that this obstruction vanishes in general for\n$k=n-\\alpha(n)$ (\\autoref{sec:massey}), and that this is the best\npossible general result (\\autoref{sec:bestpossibleresult}), \\idest for\n$k<n-\\alpha(n)$ there are examples of manifolds not immersing into\n$\\R^{n+k}$.\n\nThe preliminary work for Massey's theorem encompasses a review on\nSteenrod squares in \\autoref{sec:steenrodsquares}, and the\ninvestigation of Wu characteristic classes in\n\\autoref{sec:wuclassesmain}.\nIn the course of this a couple of other important results on\nStiefel-Whitney classes of manifolds are discussed.\nMost notably, Wu's formula on their relation to the Wu\ncharacteristic classes is proven in \\autoref{sec:wutheorem}\n\n\n\\section{Steenrod Squares}\\label{sec:steenrodsquares}\nSimilar to the cup and cap product Steenrod squares add further\nstructure to the cohomology ring, thus adding more tools to\ndifferentiate spaces by means of cohomology.\nAs applications, they will serve in constructing two kinds of\ncharacteristic classes, the Stiefel-Whitney and the Wu classes, and\nare a key ingredient for the proof of Massey's theorem on obstructions\nfor normal bundles of manifolds.\n\nThe following definition is due to\nSteenrod and Epstein \\cite[Chap.~I.1, p.~1]{steenrodepstein}.\n\\begin{Def}\\label{def:sq}\n  The \\emph{Steenrod squares} $\\Sq i$ for $i\\in\\Nat$ are each a family\n  of cohomology operations, \\idest families of homomorphisms, of the\n  form\n  \\begin{gather*}\n    \\left(\n      \\Sq i\\colon \\H^n(X, A;\\Zmod2) \\to \\H^{n+i}(X, A;\\Zmod2)\n      \\middlemid\n      n\\in\\Nat\n    \\right)\n  \\end{gather*}\n  that satisfy the following relations for any pair of spaces $(X,A)$, and any map of\n  pairs of spaces $f\\colon (X,A)\\to (Y,B)$:\n  \\begin{description}\n  \\item[(Naturality)]\\label{item:sqnaturality}\n    $\\pb f\\circ\\Sq i = \\Sq i\\circ\\pb f$.\n  \\item[(Stability)]\\label{item:sqstability}\n    $\\susp\\circ\\Sq i = \\Sq i\\circ\\susp$\n    where $\\susp$ is the suspension isomorphism on cohomology.\n  \\item[(Cartan formula)] For any $n\\in\\Nat$, and $x,y\\in \\H^n(X)$ holds\n    \\begin{gather}\\label{tag:cartan}\\tag{Cartan's formula}\n      \\Sq i(x\\cup y) = \\sum_{r+s=i}\\Sq r(x)\\cup\\Sq s(y)\n    \\end{gather}\n  \\item[(Fixed values)] The following values are fixed for $x\\in \\H^n(X,A)$:\n    % \\begin{gather}\\label{eq:sqlowerbound}\n    %   \\Sq i(x) = \\begin{cases}\n    %     0 & n<i \\\\\n    %     x^2 & n=i\n    %   \\end{cases}\n    %   \\qquad\\text{and}\\qquad\n    %   \\Sq 
i = \\begin{cases}\n    %     \\Id & i=0\\\\\n    %     \\beta & i=1\n    %   \\end{cases}\n    % \\end{gather}\n    \\begin{alignat}{4}\n      \\Sq i(x) &= 0     & \\qquad\\text{for }n<i \\label{eq:sqlowerbound}\\\\\n      \\Sq i(x) &= x^2   & \\qquad\\text{for }n=i \\label{eq:sqsquared}\\\\\n      \\Sq 0    &= \\Id   \\label{eq:sqidentity}\n      % \\\\\\notag\n      % \\Sq 1    &= \\beta \\label{eq:sqbockstein}\n    \\end{alignat}\n    % where $\\beta$ denotes the Bockstein homomorphism\n    % \\cite[see \\forexample][Chap.~3.E]{hatcher} of the exact\n    % coefficient sequence\n    % \\begin{center}\n    %   \\begin{tikzcd}\n    %     0 \\ar[r]\n    %     &\\Zmod2 \\ar[r,\"\\incl\"]\n    %     &\\Zmod4 \\ar[r,\"\\proj\"]\n    %     &\\Zmod2 \\ar[r]\n    %     &0\n    %   \\end{tikzcd}\n    % \\end{center}\n  \\item[(Adem relations)] For $\\alpha<2\\beta$ holds\n    \\begin{gather}\\label{tag:adem}\\tag{Adem's formula}\n      \\Sq\\alpha \\circ \\Sq\\beta =\n      \\sum_{j=0}^{\\left\\lfloor \\frac \\alpha 2 \\right\\rfloor}\n      \\binom{\\beta-j-1}{\\alpha-2j}\n      \\Sq{\\alpha+\\beta-j}\\Sq{j}\n    \\end{gather}\n  \\end{description}\n  The formal sum of all Steenrod squares\n  $\\SQ\\coloneqq\\sum_{j\\in\\Nat}\\Sq j$ is called the \\emph{total Steenrod square}.\n  Note that for any degree $n\\in\\Nat$ the total Steenrod square\n  $\\SQ\\colon \\H^n(X)\\to \\H^*(X)$ is well-defined since the sum is\n  finite by \\eqref{eq:sqlowerbound}.\n  Also \\ref{tag:cartan} can be reformulated to\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\SQ(x\\cup y) = \\SQ(x)\\cup\\SQ(y)\n    \\;,\n  \\end{gather*}\n  \\idest $\\SQ$ is a group\n  homomorphism with respect to the cup\\nbd{}product.\n\\end{Def}\n\n\\begin{Thm}\n  The Steenrod squares exist and are uniquely determined by\n  naturality, \\ref{tag:cartan}, and the fixed values\n  %\\eqref{eq:sqlowerbound}, \\eqref{eq:sqsquared}, and\n  %\\eqref{eq:sqidentity}\n  from Definition~\\ref{def:sq}.\n  \\begin{proof}\n    For existence see \\cite[Chapter 2]{mosher},\n    for uniqueness see \\cite[VIII \u00a73]{steenrodepstein}.\n  \\end{proof}\n\\end{Thm}\n\nThe fact that Steenrod squares, or in general cohomology operations,\ncan be added and composed already gives a hint that they might\nform a ring, which was proven by Steenrod. 
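For concreteness, two standard low-degree instances of \\ref{tag:adem},\ncomputed directly from the formula (these worked examples are added here\nfor illustration and are not taken from the cited sources), are\n\\begin{gather*}\n  \\Sq 1 \\circ \\Sq 1 = \\binom{0}{1}\\Sq 2 = 0\n  \\;,\\qquad\n  \\Sq 1 \\circ \\Sq 2 = \\binom{1}{1}\\Sq 3 = \\Sq 3\n  \\;,\\qquad\n  \\Sq 2 \\circ \\Sq 2 = \\binom{1}{2}\\Sq 4 + \\binom{0}{0}\\Sq 3 \\Sq 1 = \\Sq 3 \\Sq 1\n  \\;.\n\\end{gather*}\n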
The following notation and\nfacts are according to \\cite[Chap.~6]{mosher}.\n\\begin{Def}\\label{def:steenrodalgebra}\n  The \\emph{Steenrod algebra} $\\A$ is the quotient\n  of the graded $\\Zmod2$\\nbd{}polynomial algebra\n  $\\Zmod2[\\Sq i\\mid i\\in\\Nat]$ with grading $\\deg \\Sq i\\coloneqq i$\n  by the two-sided ideal generated by the relations of \\ref{tag:adem} and by $\\Sq 0=1$.\n  With the induced grading, multiplication, and the diagonal defined\n  by\n  \\begin{gather*}\n    \\SwapAboveDisplaySkip\n    \\A\\longto \\A\\otimes\\A\n    \\;,\\qquad\n    \\Sq i\\longmapsto\\sum_{r+s=i}\\Sq r\\otimes\\Sq s\n    \\;,\n  \\end{gather*}\n  it is an associative, connected, non-commutative graded Hopf algebra\n  over $\\Zmod2$.\n\\end{Def}\n\\begin{Not}\n  In the following, iterated Steenrod squares\n  $\\Sq{i_1}\\cdot\\Sq{i_2}\\dotsm\\Sq{i_l}$ will have the short form\n  $\\Sq{(i_1,i_2,\\dotsc,i_l)}$,\n  and will be evaluated on an element $x$ of a cohomology ring as\n  $\\Sq{i_1}\\circ\\dotsb\\circ\\Sq{i_l}(x)$ respecting the\n  properties from Definition~\\ref{def:sq}.\n  Furthermore, for a sequence $I=(i_1,\\dotsc,i_l)$,\n  respectively $\\Sq I$, denote by\n  \\begin{compactdescription}[labelindent=1em]\n  \\item[$\\l(I)\\coloneqq l$] the \\emph{length} of $I$,\n  \\item[$\\d(I)\\coloneqq \\sum_{j=1}^{l} i_j$] the \\emph{degree} of $I$,\n    and by\n  \\item[$\\e(I)\\coloneqq 2i_1-\\d(I)=\\sum_{j=1}^{l-1}(i_j-2i_{j+1})+i_l$]\n    the \\emph{excess} of $I$.\n  \\end{compactdescription}\n  The sequence $I$, respectively $\\Sq I$, is called \\emph{admissible},\n  if $\\l(I)=1$ or $i_j\\geq 2i_{j+1}$ for $1\\leq j<\\l(I)$.\n\\end{Not}\n\\begin{Rem}\\label{rem:sq}\n  The following properties will be needed for Massey's Theorem:\n  \\begin{enumerate}\n  \\item The set of iterated Steenrod squares $\\Sq I$ of admissible\n    sequences $I$ forms a basis for $\\A$ as $\\Zmod2$\\nbd{}vector space\n    \\cite[Chap.~6, Theorem~1]{mosher}.\n  \\end{enumerate}\n  Let $I=(i_1,\\dotsc,i_l)$ be a sequence in $\\Nat$.\n  \\begin{enumerate}[resume]\n  \\item $\\deg(\\Sq I(x)) = \\deg(x) + \\d(\\Sq I)$.\n  \\item\\label{item:squpperboundgeneral}\n    $\\Sq I(x) = 0$ for $\\deg(x)<\\e(I)$ if $I$ is admissible.\n  \\end{enumerate}\n  \\begin{proof}\n    The non-obvious part~\\ref{item:squpperboundgeneral} follows\n    from \\eqref{eq:sqlowerbound} by splitting off the first square:\n    The case $\\l(I)=1$ is \\eqref{eq:sqlowerbound}.\n    For $\\l(I)>1$ and $J\\coloneqq(i_2,\\dotsc,i_l)$, the condition\n    $\\deg(x) < \\e(I)=i_1-\\d(J)$\n    implies\n    \\begin{gather*}\n      \\SwapAboveDisplaySkip\n      \\deg(\\Sq J(x))\n      = \\deg(x)+\\d(J) < \\e(I)+\\d(J) = i_1\n    \\end{gather*}\n    So,\n    $\\Sq I(x)=\\Sq{i_1}(\\Sq J(x))=0$ by \\eqref{eq:sqlowerbound}.\n  \\end{proof}\n\n\\end{Rem}\n\nThe subsequent concept of formally inverting a formal sum of elements\nin a graded $\\Zmod2$\\nbd{}algebra will recur for several characteristic\nclasses. This particular definition is needed to define the Wu classes\nin \\autoref{sec:wuclasses}. 
\n\\begin{Def}\\label{def:antipode}\n  The \\emph{antipode} $\\antipode\\colon \\A\\to\\A$ of the Steenrod algebra is a\n  graded homomorphism inductively defined by the relation\n  \\begin{gather*}\n    1 = \\Sq 0\n    = \\SQ \\Sqcup \\antipode(\\SQ)\n    = \\sum_{k\\geq0}\\sum_{r+s=k} \\Sq r \\Sqcup \\antipode(\\Sq s)\n  \\end{gather*}\n\\end{Def}\n\n\\section{Wu Classes and the Wu Formula}\\label{sec:wuclassesmain}\nA crucial part in the proof of Massey's theorem is to cleverly involve\na new type of characteristic class with nice properties that is\ndirectly connected to Stiefel-Whitney classes through Steenrod\nsquares.\nThe Wu classes were developed by Wu~Wen-Ts\u00fcn, and will be introduced in\n\\autoref{sec:wuclasses}.\nThe proof of Wu's theorem on their relation to Stiefel-Whitney\nclasses in \\autoref{sec:wutheorem} utilizes some properties of Thom\nclasses and the Thom isomorphism as reviewed in\n\\autoref{sec:thomclasses}, as well as a certain construction of\nStiefel-Whitney classes as introduced in\n\\autoref{sec:swclsconstruction}.\n\n\n\\subsection{Thom Classes and the Thom Isomorphisms}\\label{sec:thomclasses}\nIn this subsection, Thom classes and the Thom isomorphisms will be\nreviewed. The tools to be recalled in this section are an important\ningredient for the proof of Wu's theorem, both directly as well as for\nconstructing the Stiefel-Whitney characteristic classes from Steenrod\nsquares.\n\nLet $B$ be a paracompact space, \\forexample a manifold,\n$\\xi\\colon E\\xrightarrow{p} B$ a vector bundle over $B$ of rank $\\rkk>0$,\nand let $R$ be a principal ideal domain.\nThe following definition can \\forexample also be found in\n\\cite[p.~441]{hatcher}.\n\\begin{Def}\n  A \\emph{Thom class} of $\\xi$ in $R$\\nbd{}coefficients is a\n  cohomology class $\\u{\\xi}$ in $\\H^{\\rkk}(\\spherepair{E}; R)$,\n  such that for all points $b\\in B$ and fiber inclusions\n  \\begin{gather*}\n    i_b\\colon (\\spherepair{E_b}) \\to (\\thomspacepair{E})\n  \\end{gather*}\n  the restriction $\\u{\\xi}|_{E_b} = \\pb i_b (\\u{\\xi})$ is a\n  free generator of the $R$\\nbd{}module $\\H^{\\rkk}(\\spherepair{E_b}; R)$,\n  \\idest a unit\n  of the ring\n  $\\H^{\\rkk}(\\spherepair{E_b};R)\\cong \\H^{\\rkk}(\\spherepair{\\R^{\\rkk}}; R)\\cong R$.\n\\end{Def}\n\nThe following corollaries will deduce notions of naturality,\nmultiplicativity, and uniqueness for Thom classes quite directly from\ntheir above definition.\n\n\\begin{Cor}\\label{cor:thomclsnatural}\n  The Thom class construction is natural with respect to the pullback\n  of vector bundles over paracompact spaces.\n  \\Idest given any map of paracompact spaces $f\\colon A\\to B$, and a\n  vector bundle $\\xi\\colon E\\to B$ with pullback map\n  $F\\colon\\pb f\\xi\\to\\xi$,\n  then the pullback $\\pb F \\U$ of a Thom class $\\U$ of $\\xi$ will\n  be a Thom class of $\\pb f \\xi$.\n  \\begin{proof}\n    Let $\\U$ be a Thom class of $\\xi$ and $a\\in A$ any point.\n    Consider the restriction $\\pb i_a(\\pb F \\U)$\n    of the pullback of $\\U$ to the fiber over $a$. 
To show that this\n    is a generator of $\\H^{\\rkk}(\\spherepair{E_a};R)$ first use that\n    pullbacks commute with restriction:\n    \\begin{gather*}\n      \\pb i_a(\\pb F \\U)\n      = \\pb {(F\\circ i_a)} \\U\n      = \\pb {(i_{f(a)}\\circ F)} \\U\n      = \\pb F (\\pb i_{f(a)} \\U)\n    \\end{gather*}\n    $\\pb i_{f(a)} \\U$ is a generator by definition of $\\U$.\n    Now the restriction of $F$ to the fiber\n    \\begin{gather*}\n      F\\colon (\\spherepair{(\\pb f E)_a}) \\to (\\spherepair{E_{f(a)}})\n    \\end{gather*}\n    is an isomorphism, and thus\n    $\\pb F\\colon \\H^r(\\spherepair{E_{f(a)}})\n    \\cong \\H^r(\\spherepair{(\\pb f E)_a})$\n    sends generators to generators for all $r\\in\\Nat$.\n  \\end{proof}\n\\end{Cor}\n\n\\begin{Def}\n  Let $\\xi$, $\\eta$ be vector bundles over a space $B$.\n  Define the \\emph{cross-product} as in \\cite[p.~214]{hatcher} to be\n  the map\n    \\begin{align*}\n      \\H^*(\\thomspacepair{\\E\\xi})\n      \\otimes\n      \\H^*(\\thomspacepair{\\E\\eta})\n      &\\longto\n        \\H^*(\\thomspacepair{\\E{(\\xi\\oplus\\eta)}})\\\\\n      x\\otimes y\n      &\\longmapsto\n        \\pb \\pi_\\xi x \\cup \\pb \\pi_\\eta y\n        \\eqqcolon x\\times y\n        \\;.\n    \\end{align*}\n\\end{Def}\n\\begin{Cor}\\label{cor:thomclassmultiplicative}\n  The Thom class construction for coefficients in a field $R$ is\n  multiplicative in the following sense:\n  For vector bundles $\\xi\\colon E\\to B$, $\\eta\\colon E'\\to B$\n  of rank $\\rkk$ respectively $\\rkl$ over a paracompact space $B$, and Thom\n  classes\n  $\\u{\\xi}\\in \\H^{\\rkk}(\\thomspacepair{\\E\\xi}; R)$,\n  $\\u{\\eta}\\in \\H^{\\rkl}(\\thomspacepair{\\E\\eta}; R)$\n  the class\n  \\begin{gather*}\n    \\u{\\xi}\\times\\u{\\eta}\n    \\coloneqq \\pb \\pi_\\xi\\u{\\xi} \\cup \\pb \\pi_\\eta\\u{\\eta}\n    \\in \\H^{\\rkk+\\rkl}(\\thomspacepair{\\E{(\\xi\\oplus\\eta)}})\n  \\end{gather*}\n  is a Thom class of $\\xi\\oplus\\eta$.\n  \\begin{proof}\n    Consider a point $b\\in B$. 
As cup product and pullback commute\n    with restriction, the cross-product also commutes with\n    restriction, \\idest one has to show that\n    \\begin{gather*}\n      \\pb i_b(\\u{\\xi}\\times\\u{\\eta})\n      = \\left(\\pb i_b\\u{\\xi}\\right)\n      \\times \\left(\\pb i_b\\u{\\eta}\\right)\n    \\end{gather*}\n    is a generator of\n    $\\H^{\\rkk+\\rkl}(\\spherepair{\\E{(\\xi\\oplus\\eta)}_b};R)$.\n    % There are homotopies making the following diagram commute\n    % \\begin{center}\n    %   \\begin{tikzcd}\n    %     (\\spherepair{\\E{(\\xi)}_b})\n    %     \\ar[d, dash, \"\\isosymb\"{above,rotate=90}]\n    %     &(\\spherepair{\\E{(\\xi\\oplus\\eta)}_b})\n    %     \\ar[l,\"\\pi_\\xi\"above] \\ar[r,\"\\pi_\\eta\"]\n    %     \\ar[d, dash, \"\\isosymb\"{above,rotate=90}]\n    %     &(\\spherepair{\\E{(\\eta)}_b})\n    %     \\ar[d, dash, \"\\isosymb\"{above,rotate=90}]\\\\\n    %     (\\spherepair{\\R^i})\n    %     &(\\spherepair{\\R^{i+j}})\n    %     \\ar[l,\"\\proj\"above] \\ar[r,\"\\proj\"]\n    %     &(\\spherepair{\\R^{j}})\n    %   \\end{tikzcd}\n    % \\end{center}\n    On fibers one has that\n    \\begin{align*}\n      (\\spherepair{\\E{(\\xi\\oplus\\eta)}_b})\n      &= \\left(\n        \\E{\\xi}_b\\times\\E{\\eta}_b,\n        \\left(\\E{\\xi}_b\\times(\\minuszero{\\E{\\eta}_b})\\right)\n        \\cup\n        \\left((\\minuszero{\\E{\\xi}_b})\\times\\E{\\eta}_b\\right)\n        \\right)\n        \\\\\n      &= (\\spherepair{\\E{\\xi}_b})\\times(\\spherepair{\\E{\\eta}_b})\n    \\end{align*}\n    which makes the relative K\u00fcnneth isomorphism theorem applicable.\n    For the latter see \\forexample \\cite[Theorem~3.18]{hatcher}.\n    By the naturality of that isomorphism there is the\n    following commutative diagram that translates this problem to one\n    on the cohomology of spheres (all cohomology rings with\n    $R$\\nbd{}coefficients),\n    similar to \\cite[proof of 3.19, p.~221]{hatcher}:\n    \\begin{center}\n      \\begin{tikzcd}\n        \\H^*(\\spherepair{\\E\\xi_b})\n        \\otimes \\H^*(\\spherepair{\\E\\eta_b})\n        \\ar[r, \"\\cong\"]\n        \\ar[d, dash, \"\\cong\"{above,rotate=90}]\n        & \\H^*(\\spherepair{\\E{(\\xi\\oplus\\eta)}_b})\n        \\ar[d, dash, \"\\cong\"{below,rotate=90}]\n        \\\\\n        \\H^*(\\spherepair{\\R^{\\rkk}})\n        \\otimes \\H^*(\\spherepair{\\R^{\\rkl}})\n        \\ar[r, \"\\cong\"]\n        \\ar[d, dash, \"\\cong\"{above,rotate=90}]\n        & \\H^*(\\spherepair{\\R^{\\rkk+\\rkl}})\n        \\ar[d, dash, \"\\cong\"{below,rotate=90}]\n        \\\\\n        \\H^*(I^{\\rkk},\\Boundary{I^{\\rkk}})\n        \\otimes \\H^*(I^{\\rkl}, \\Boundary{I^{\\rkl}})\n        \\ar[r, \"\\cong\"]\n        \\ar[d, dash, \"\\cong\"{above,rotate=90}]\n        & \\H^*(I^{\\rkk+\\rkl}, \\Boundary{I^{\\rkk+\\rkl}})\n        \\ar[d, dash, \"\\cong\"{below,rotate=90}]\n        \\\\\n        \\H^*(\\Sphere{\\rkk})\n        \\otimes \\H^*(\\Sphere{\\rkl})\n        \\ar[r, \"\\cong\"]\n        & \\H^*(\\Sphere{\\rkk+\\rkl})\n      \\end{tikzcd}\n    \\end{center}\n    Furthermore, the simple structure of the cohomology of spheres\n    yields for the K\u00fcnneth isomorphism in the desired degree\n    \\begin{align*}\n      \\H^{\\rkk+\\rkl}(\\Sphere{\\rkk+\\rkl};R)\n      &\\cong\n        \\left(\\H^*(\\Sphere{\\rkk}; R)\\otimes \\H^*(\\Sphere{\\rkl}; R)\\right)_{\\rkk+\\rkl}\\\\\n      &\\coloneqq\n        \\bigoplus_{\\mathclap{r+s=\\rkk+\\rkl}}\n        \\H^r(\\Sphere{\\rkk};R)\\otimes \\H^s(\\Sphere{\\rkl};R)\n        =\n
        \\H^{\\rkk}(\\Sphere{\\rkk};R)\\otimes \\H^{\\rkl}(\\Sphere{\\rkl};R)\n    \\end{align*}\n    by leaving out zero-summands for the last equality.\n    Thus, a generator $\\iota_{\\rkk}\\otimes\\iota_{\\rkl}$ of\n    $\\H^{\\rkk}(\\Sphere{\\rkk}; R)\\otimes \\H^{\\rkl}(\\Sphere{\\rkl}; R)$,\n    which is the tensor product of two generators,\n    is mapped to a generator $\\iota_{\\rkk+\\rkl}=\\iota_{\\rkk}\\times\\iota_{\\rkl}$ of\n    $\\H^{\\rkk+\\rkl}(\\Sphere{\\rkk+\\rkl}; R)$.\n    Using the isomorphisms above proves the claim.\n    % % Maybe put into separate Lemma:\n    % Now one can use the fact, that for any two generators\n    % $\\iota_{\\rkk}$ of $\\H^{\\rkk}(\\spherepair{\\R^{\\rkk}})$ and\n    % $\\iota_{\\rkl}$ of $\\H^{\\rkl}(\\spherepair{\\R^{\\rkl}})$,\n    % the cross-product\n    % $\\iota_{\\rkk}\\times\\iota_{\\rkl}$ is a generator of\n    % $\\H^{\\rkk+\\rkl}(\\spherepair{\\R^{\\rkk+\\rkl}})$.\n  \\end{proof}\n\\end{Cor}\n\n\\begin{Cor}\n  Every vector bundle $\\xi$ has a unique Thom class $\\u{\\xi}$ in\n  $\\Zmod2$\\nbd{}coefficients.\n  Furthermore, for any map of paracompact spaces $f\\colon A\\to B$ and\n  vector bundle $\\xi\\colon E\\to B$ holds $\\u{\\pb f \\xi} = \\pb f \\u{\\xi}$.\n  \\begin{proof}[Proof (sketch)]\n    \\begin{description}\n    \\item[Existence:] See \\cite[Theorem~4D.10]{hatcher} or use\n      \\cite[Prop.~17.9.3]{tomdieck}.\n    \\item[Uniqueness:]\n      Using a suitable Mayer-Vietoris sequence for gluing, and an\n      inductive argument starting with the trivial bundle case, one can show:\n      Any two classes in $\\H^{\\rkk}(\\thomspacepair{E};R)$ whose\n      restrictions coincide on all fibers will coincide.\n      However, for $R=\\Zmod2$ there is exactly one possible choice for\n      a unit $\\u{\\xi}|_{E_b}$ of the group\n      $\\H^{\\rkk}(\\spherepair{E_b})^\\times\\cong\\Zmod2^\\times=\\{1\\}$\n      over each point $b$.\n      For details see \\forexample \\cite[Theorem~(17.9.4)]{tomdieck}.\n    \\item[Naturality:]\n      Clear from uniqueness together with\n      Corollary~\\ref{cor:thomclsnatural}.\n    \\end{description}\n  \\end{proof}\n\\end{Cor}\n\n\\begin{Rem}\n  Using paracompactness of $B$ and\n  \\cite[Proposition~17.9.6]{tomdieck}, one concludes that\n  $\\u{\\xi}\\in \\H^{\\rkk}(\\thomspacepair{E};R)$ has to be a unit.\n\\end{Rem}\n\nHaving a good notion of Thom classes by now, we recall the Thom\nisomorphism relating the (co)homology of the total space with that of the base\nspace of a vector bundle.\n\\begin{Thm}\n  For any Thom class $\\u{\\xi}$ of $\\xi$, and any degree $r$ there are\n  the following isomorphisms, called the \\emph{Thom isomorphism} for\n  cohomology respectively homology,\n  that are natural with respect to pullbacks of vector bundles over\n  paracompact spaces:\n  \\begin{align*}\n    \\thomiso\\colon\n    \\H^r(B;R) &\\longrightarrow \\H^{r+\\rkk}(\\thomspacepair{E}; R)\n    & \\thomiso\\colon\n      \\H_{r+\\rkk}(\\thomspacepair{E}; R) &\\longrightarrow \\H_r(B;R)\n    \\\\\n    x &\\longmapsto \\pb p (x) \\cup \\u{\\xi}\n    & \\alpha &\\longmapsto \\pf p (\\u{\\xi} \\cap \\alpha)\n               \\;.\n  \\end{align*}\n  \\begin{proof}\n    Naturality directly follows from the naturality of the Thom class\n    in Corollary~\\ref{cor:thomclsnatural}, and naturality of the cup-\n    respectively cap-product.\n    The cohomology part then is a direct application of Leray's theorem\n    \\cite[Theorem~4D.8]{hatcher}.\n    For the homology part see \\forexample \\cite[Theorem~14.6]{switzer}.\n  
\\end{proof}\n\\end{Thm}\n\\begin{Lem}\\label{lem:thomisoself-adjoint}\n  If $B$ is connected, the Thom isomorphisms are adjoint in the\n  following sense:\n  For $r\\in\\Nat$, $x\\in \\H^r(B)$,\n  and $\\alpha\\in \\H_{r+\\rkk}(\\thomspacepair{E})$  holds \n  \\begin{gather*}\n    \\pf p\\capped{\\thomiso(x)}{\\alpha} = \\capped{x}{\\thomiso(\\alpha)}\n    \\in\\H_0(B)\\cong \\Zmod2\n  \\end{gather*}\n\\end{Lem}\nIn order to prove Lemma~\\ref{lem:thomisoself-adjoint},\nfirst recall the following properties of the cap product.\n\\begin{Rem}\n  For\n  a map of triples of spaces\n  $f\\colon (Y,Y'',Y')\\to (X,X'',X')$,\n  cohomology classes\n  $a\\in \\H^i(X,X')$ and $b\\in \\H^j(X,X')$,\n  homology classes\n  $\\gamma\\in \\H_{i+j}(X, X'\\cup X'')$\n  and\n  $\\beta\\in \\H_j(Y, Y'\\cup Y'')$,\n  and a vector bundle $E\\xrightarrow{p}B$\n  holds\n  \\begin{align}\n    \\label{eq:capprod1}\n    \\capped{a\\cup b}{\\gamma} &= \\capped{b}{a\\cap\\gamma}\n                              \\in \\H_0(X,X'')\\\\\n    \\label{eq:capprod2}\n    \\capped{b}{\\pf f \\beta} &= \\pf f \\capped{\\pb f b}{\\beta}\n                              \\in \\H_0(X,X'')\n                            &&\\text{\n                               %\\cite[Chap.~3.3.2, p.\\,241]{hatcher}\n                               \\cite[Sec.~18.1.1]{tomdieck}}\n  \\end{align}\n\\end{Rem}\n\\begin{proof}[Proof of Lemma~\\ref{lem:thomisoself-adjoint}]\n  With $\\U\\coloneqq\\u{\\xi}$ one calculates\n  \\begin{align*}\n    \\pf p\\capped{\\thomiso(x)}{\\alpha}\n    &= \\pf p\\capped{\\pb p x \\cup \\U}{\\alpha} \\\\\n    &\\equalsby{\\eqref{eq:capprod1}}\n      \\pf p\\capped{\\pb p x}{\\U\\cap\\alpha} \\\\\n    &\\equalsby{\\eqref{eq:capprod2}}\n      \\capped{x}{\\pf p(\\U\\cap\\alpha)}\n      = \\capped{x}{\\thomiso (\\alpha)} \\in\\Zmod2\n      \\qedhere\n  \\end{align*}\n\\end{proof}\n\nNow, we can focus on the specific case of normal bundles of manifolds.\nThe following is a well-known result from intersection theory, which\nlinks the fundamental class of a manifold with that of a submanifold\nusing the Thom isomorphism corresponding to the normal bundle of the\nembedding.\n\\begin{Lem}\\label{lem:thomisofundcl}\n  Let $M^n$ and $W^{n+\\rkk}$ be manifolds, and\n  $\\emb\\colon M\\to W$ be an embedding with\n  normal bundle $\\N{\\emb}$ of rank $\\rkk>0$.\n  The normal bundle gives rise to an embedding\n  $e\\colon \\E{\\N{\\emb}}\\to W$ of its total space\n  as a tubular neighborhood $e(\\E{\\N{\\emb}})$ of $\\emb(M)$ into $W$\n  (see \\cite[Sec.~15.6]{tomdieck}).\n  The quotient map\n  \\begin{gather*}\n    \\collapse\\colon\n     W\n    \\to  W/\\left(  W\\setminus e(\\E{\\N{\\emb}}) \\right)\n    \\cong \\Thomspace{\\N{\\emb}}\n  \\end{gather*}\n  that collapses every point outside of $e(\\E{\\N{\\emb}})$ to the infinity\n  point fulfills\n  \\begin{center}\n    \\begin{tikzcd}[row sep=0pt]\n      \\H_{n+\\rkk}( W) \\ar[r, \"\\pf c\"]\n      & \\H_{n+\\rkk}(\\Thomspace{\\N{\\emb}}) \\ar [r, \"\\pf \\incl\"]\n      & \\relH_{n+\\rkk}(\\Thomspace{\\N{\\emb}})\n      %\\coloneqq \\H_{n+\\rkk}(\\Thomspace{\\N{\\emb}}, \\infty)\n      \\ar [r, \"\\pf t\"{above}, \"\\cong\"{below}]\n      & \\H_{n}(M)% \\cong \\Zmod2\n      \\\\\n      \\fundcl{ W} \\ar[rrr, mapsto]\n      &&&\\thomiso(\\pf \\incl\\pf\\collapse\\fundcl{ W})\n      = \\fundcl M\n    \\end{tikzcd}\n  \\end{center}\n  where\n  $\\incl\\colon\n  (\\Thomspace{\\N{\\emb}},\\emptyset)\n  \\to(\\Thomspace{\\N{\\emb}}, \\{\\infty\\})$\n  is the canonical inclusion of pairs of spaces,\n  
and the isomorphism\n  $\\relH_{n+\\rkk}(\\Thomspace{\\N{\\emb}})\n  \\cong \\H_{n+\\rkk}(\\thomspacepair{\\E{\\N\\emb}})$\n  of the neighborhood deformation retract pair\n  $(\\thomspacepair{\\E{\\N\\emb}})$ was used.\n  This holds for any choice of $\\emb$ and $e$.\n  \\begin{proof}\n    First note that by the long exact sequence of the pair \n    $(\\Thomspace{\\N{\\emb}}, \\{\\infty\\})$, the map\n    $\\pf \\incl\\colon\n    \\H_r(\\Thomspace{\\N{\\emb}})\n    \\to \\relH_r(\\Thomspace{\\N{\\emb}})$\n    is an isomorphism in every degree $r>0$. The proof will be\n    conducted in two steps, first proving the connected case, then\n    the general one.\n    \\begin{description}\n    \\item[Connected case:]\n      Assume that $M$ is connected.\n      Then\n      \\begin{gather*}\n        \\H_{n+k}(\\Thomspace{\\N{\\emb}})\n        \\cong\\relH_{n+k}(\\Thomspace{\\N{\\emb}})\n        \\cong \\H_n(M) \\cong \\Zmod2=\\{\\fundcl M, 0\\}\n      \\end{gather*}\n      by the Thom isomorphism, and since $M$ is connected and closed.\n      Thus, one only has to show that\n      $\\pf\\collapse\\fundcl{ W}$\n      is non-zero.\n      The trick now is to use that manifolds locally look like some\n      Euclidean space, and that this property is mostly preserved by\n      the collapse.\n      Restricted to $e(\\E{\\N{\\emb}})$ the collapse map looks like\n      \\begin{gather*}\n        c|_{e(\\E{\\N{\\emb}})}\\colon\n        e(\\E{\\N{\\emb}})\n        \\overset{e}{\\underset{\\sim}\\longto} \\E{\\N{\\emb}}\n        \\cong \\Discbdl{\\N{\\emb}}\\setminus\\Spherebdl{\\N{\\emb}}\n        \\cong (\\Thomspace{\\N{\\emb}})\\setminus \\{\\infty\\}\n      \\end{gather*}\n      \\idest it is a homeomorphism onto\n      $(\\Thomspace{\\N{\\emb}})\\setminus\\{\\infty\\}$.\n      Therefore, by excision there is for any point\n      $p\\in(\\Thomspace{\\N{\\emb}})\\setminus\\{\\infty\\}$\n      the following commutative diagram on homology\n      \\begin{center}\n        \\begin{tikzcd}\n          \\fundcl{ W}\n          \\ar[d,mapsto]\\ar[r,phantom,\"\\in\"{near start}]\n          &\\H_{n+\\rkk}( W)\n          \\ar[r, \"\\pf\\incl\\pf\\collapse\"]\\ar[d,\"\\pf\\incl\"]\n          &\\relH_{n+\\rkk}(\\Thomspace{\\N{\\emb}})\n          \\ar[d,\"\\pf\\incl\"]\n          \\\\\n          \\left.\\fundcl{ W}\\right|_{\\collapse^{-1}(p)}\n          \\ar[r,phantom,\"\\in\"{near start}]\n          &\\H_{n+k}( W, W\\setminus\\collapse^{-1}(p))\n          \\ar[r,\"\\pf\\collapse\"{above},\"\\cong\"{below}]\n          \\ar[d,dash, \"\\cong\", \"\\text{exc.}\"{left}]\n          &\\H_{n+\\rkk}(\\Thomspace{\\N{\\emb}},\n          (\\Thomspace{\\N{\\emb}})\\setminus p)\n          \\ar[d,dash, \"\\cong\", \"\\text{exc.}\"{left}]\n          \\\\\n          & \\H_{n+\\rkk}(\\R^{n+\\rkk}, \\R^{n+\\rkk}\\setminus\\pt)\n          & \\H_{n+\\rkk}(\\R^{n+\\rkk}, \\R^{n+\\rkk}\\setminus\\pt)\n        \\end{tikzcd}\n      \\end{center}\n      By definition of the fundamental class $\\fundcl{ W}$,\n      the class $\\fundcl{ W}|_{\\collapse^{-1}(p)}$ in the\n      diagram is a generator, and thus also is\n      $\\pf\\collapse\\left(\\fundcl{ W}|_{\\collapse^{-1}(p)}\\right)\n      = \\left(\\pf\\incl\\pf\\collapse\\fundcl{ W}\\right)|_p$.\n      However, then $\\pf\\incl\\pf\\collapse\\fundcl{ W}\\in\n      \\relH_{n+\\rkk}(\\Thomspace{\\N{\\emb}})$\n      cannot be zero as was to be shown.\n    \\item[General case:] In case $M=\\coprod_i M_i$ is the disjoint sum of its connected\n      components $M_i$, $i\\in I$ for some index set 
$I$, note the following:\n      \\begin{itemize}\n      \\item $\\E{\\N{\\emb}} = \\coprod_i\\E{\\N{\\emb_i}}$,\n        where $\\emb_i\\coloneqq\\emb|_{M_i}$.\n        % where $\\N{M_i}\\coloneqq\\N{\\emb}|_{M_i}$.\n      \\item Thus, $\\Thomspace{\\N{\\emb}} = \\bigvee_i\\Thomspace{\\N{\\emb_i}}$ using the\n        collapse maps\n        \\begin{gather*}\n          \\collapse_i\\colon\n           W\n          \\overset{\\collapse}\\longto\n           W/\\left( W\\setminus e(\\E{\\N{\\emb}})\\right)\n          \\overset{\\proj}\\longto\n           W/\\left( W\\setminus e(\\E{\\N{\\emb_i}})\\right)\n        \\end{gather*}\n        for the disjoint parts,\n        and $\\collapse=\\bigvee_i\\collapse_i$.\n      \\item Thus,\n        $\\relH_r(\\Thomspace{\\N{\\emb}})\n        = \\bigoplus_i \\relH_r(\\Thomspace{\\N{\\emb_i}})$ for\n        all degrees $r$, $\\fundcl M = (\\fundcl{M_i})_i$, and\n        $\\pf\\collapse = \\prod_i\\pf{\\collapse_i}$.\n        % \\item Thus, $\\H_{n+\\rkk}(\\Thomspace{\\N{\\emb}})\n        %   \\cong \\H_{n+\\rkk}(M)\\cong\\prod_i \\Zmod2$, and $\\fundcl{M} = (1)_i$.\n      \\end{itemize}\n      With this one sees directly from the definition of the fundamental\n      class of a manifold that\n      $\\pf\\incl\\pf\\collapse\\fundcl{ W} = \\fundcl M$\n      if and only if for all connected component manifolds $M_i$ holds\n      $\\pf\\incl\\pf{\\collapse_i}\\fundcl{ W} = \\fundcl{M_i}$,\n      which is true by the first case.\n      \\qedhere\n    \\end{description}\n  \\end{proof}\n\\end{Lem}\n\n\\subsection{A Construction of Stiefel-Whitney Classes}\n\\label{sec:swclsconstruction}\nNow, the promised construction of the Stiefel-Whitney classes from\nThom classes and Steenrod squares can be presented.\n\n\\begin{Thm}\\label{thm:altdefswclasses}\n  The Stiefel-Whitney classes can be given as\n  \\begin{gather*}\n    \\Sq i(\\u{\\xi}) = \\thomiso(\\w{i}{\\xi}) = \\pb p \\w{i}{\\xi} \\cup \\u{\\xi}\n  \\end{gather*}\n  for any vector bundle $\\xi\\colon E\\to B$ over a paracompact space\n  $B$. 
As the Thom isomorphism is a group homomorphism, one can\n  formulate the above as\n  \\begin{gather*}\n    \\SQ(\\u{\\xi}) = \\thomiso(\\W{\\xi}) = \\pb p \\W{\\xi} \\cup \\u{\\xi}\n  \\end{gather*}\n  \\begin{proof}\n    One has to check naturality of the expression and all further\n    defining properties from Definition~\\ref{def:swclasses}.\n    \\begin{description}\n    \\item[Naturality:] Both $\\Sq i$ and $\\thomiso$ (hence also\n      $\\thomiso^{-1}$) are natural.\n    \\item[$\\ws{0}=1$:]\n      $\\H^0(\\thomspacepair{\\E{\\gamma_0}}) = \\Zmod2$, thus 1 is the only\n      candidate for a Thom class, $\\Sq0(1) = \\Id(1) = 1$, and the Thom\n      isomorphism sends 1 to 1 in this degree.\n    \\item[$\\W{\\gamma_1}=1+x$:]\n      Recall that $\\H^*(\\RP k)=\\Zmod2[x_k]/(x_k^{k+1})$.\n      For $\\w0{\\gamma_1}$ the defining relation\n      \\begin{gather*}\n        \\pb p\\w{0}{\\gamma_1}\\cup\\u{\\gamma_1}\n        = \\Sq 0(\\u{\\gamma_1})\n        \\cequalsby{\\eqref{eq:sqidentity}}\\u{\\gamma_1} \\neq 0\n      \\end{gather*}\n      directly gives\n      $\\pb p\\w{0}{\\gamma_1}=1$, and thus\n      $\\w{0}{\\gamma_1}=1\\in\\H^0(\\RP1)\\cong\\Zmod2$.\n      For $\\w1{\\gamma_1}$ the defining relation looks like\n      \\begin{gather*}\n        \\SwapAboveDisplaySkip\n        \\thomiso(\\w{1}{\\gamma_1})\n        = \\pb p\\w{1}{\\gamma_1}\\cup \\u{\\gamma_1}\n        = \\Sq 1(\\u{\\gamma_1})\n        \\cequalsby{\\eqref{eq:sqsquared}}\\u{\\gamma_1}^2\n        \\;,\n      \\end{gather*}\n      which, if non-zero, directly proves that $\\w{1}{\\gamma_1}$ is\n      the unique non-zero element of degree 1 in\n      $\\H^*(\\RP1)=\\Zmod2[x]/(x^2)$, which is $x$ as was desired.\n      To see that $\\u{\\gamma_1}^2$ is non-zero,\n      observe that the tautological line bundle\n      actually is the normal bundle of the embedding of $\\RP1$ into\n      $\\RP2$ induced by the embedding of the circle as equator into\n      the $2$\\nbd{}sphere. This then yields by excision that\n      \\begin{gather*}\n        \\H^r(\\thomspacepair{\\E{\\gamma_1}})\n        \\cong \\H^r(\\RP2,\\RP2\\setminus\\RP1)\n        \\cong \\H^r(\\RP2, \\pt)\n        = \\relH^r(\\RP2)\n        \\;.\n      \\end{gather*}\n      Let $\\incl\\colon\\RP2\\to(\\RP2,\\pt)$ be the inclusion, which is an\n      isomorphism on cohomology groups in degrees greater than 0.\n      Since\n      $\\u{\\gamma_1}\\in\\relH^1(\\RP2)$ is the unique non-zero element,\n      $\\pb\\incl\\u{\\gamma_1}= x_2$ and\n      $\\pb\\incl(\\u{\\gamma_1}^2) = x_2^2\\neq0$.\n      So $\\u{\\gamma_1}^2\\neq 0$.\n    \\item[Multiplicativity:]\n      Consider vector bundles $\\xi$, $\\eta$ over a paracompact space\n      $B$. 
With $\\u{\\xi\\oplus\\eta}=\\u{\\xi}\\cup\\u{\\eta}$ and the fact that\n      \\begin{gather}\\label{eq:projectionscommute}\n        p_\\xi\\circ\\pi_\\xi = p_{\\xi\\oplus\\eta} = p_\\eta\\circ\\pi_\\eta\n      \\end{gather}\n      one gets\n      \\begin{align*}\n        \\thomiso(\\w{i}{\\xi\\oplus\\eta})\n        &= \\Sq i(\\u{\\xi\\oplus\\eta}) \\\\\n        &\\equalsby{\\ref{cor:thomclassmultiplicative}}\n          \\Sq i(\\pb\\pi_\\xi\\u{\\xi} \\cup \\pb\\pi_\\eta\\u{\\eta})\\\\\n        &\\equalsby{\\ref{tag:cartan}}\n          \\sum_{r+s=i}\n          \\Sq r(\\pb\\pi_\\xi\\u{\\xi}) \\cup \\Sq s(\\pb \\pi_\\eta\\u{\\eta}) \\\\\n        &\\equalsby{Naturality}\n          \\sum_{r+s=i}\n          \\pb\\pi_\\xi\\Sq r(\\u{\\xi}) \\cup \\pb \\pi_\\eta\\Sq s(\\u{\\eta}) \\\\\n        &\\equalsby{Definition}\n          \\sum_{r+s=i}\n          \\pb\\pi_\\xi\\thomiso(\\w{r}{\\xi})\n          \\cup \\pb\\pi_\\eta\\thomiso(\\w{s}{\\eta}) \\\\\n        &\\equalsby{Definition}\n          \\sum_{r+s=i}\n          \\pb\\pi_\\xi \\left(\\pb p_\\xi  \\w{r}{\\xi}  \\cup \\u{\\xi} \\right)\n          \\cup\n          \\pb\\pi_\\eta\\left(\\pb p_\\eta \\w{s}{\\eta} \\cup \\u{\\eta}\\right) \\\\\n        &= \\left(\n          \\sum_{r+s=i}\n          \\pb\\pi_\\xi \\pb p_\\xi \\w{r}{\\xi} \\cup\n          \\pb\\pi_\\eta\\pb p_\\eta\\w{s}{\\eta}\n          \\right)\n          \\cup\n          \\left(\\pb\\pi_\\xi\\u{\\xi} \\cup \\pb\\pi_\\eta\\u{\\eta}\\right) \\\\\n        &\\equalsby{\\eqref{eq:projectionscommute}, Definition}\n          \\left(\\sum_{r+s=i}\n          \\pb p_{\\xi\\oplus\\eta} \\w{r}{\\xi} \\cup \\pb p_{\\xi\\oplus\\eta} \\w{s}{\\eta}\n          \\right)\n          \\cup\n          \\u{\\xi} \\times \\u{\\eta} \\\\\n        &\\equalsby{Group Hom., \\ref{cor:thomclassmultiplicative}}\n          \\pb p_{\\xi\\oplus\\eta}\n          \\left(\\sum_{r+s=i}\\w{r}{\\xi}\\cup\\w{s}{\\eta}\\right)\n          \\cup\n          \\u{\\xi\\oplus\\eta}\\\\\n        &\\equalsby{Definition}\n          \\thomiso\\left(\\sum_{r+s=i}\\w{r}{\\xi}\\cup\\w{s}{\\eta}\\right)\n          \\;.\n      \\end{align*}\n      Applying $\\thomiso^{-1}$ yields the result.\n      \\qedhere\n    \\end{description}\n  \\end{proof}\n\\end{Thm}\n\n\n\\subsection{Equivalent Definitions of Wu Classes}\\label{sec:wuclasses}\nLet $M$ be a compact, $n$\\nbd{}dimensional manifold.\nIn the following, two approaches to define the Wu characteristic\nclasses are pursued: an explicit one specially for normal bundles of\nmanifolds using Poincar\u00e9 duality, and a more general one, which makes\nclear that the Wu classes indeed are characteristic classes of vector\nbundles (\\idest natural).\n\n\\begin{Def}\\label{def:wuclasses}\n  The $i$th \\emph{Wu class} $\\v{i}{M}$ of $M$ for $0\\leq i\\leq n$ is defined\n  as the cohomology class in $\\H^i(M)$ that is uniquely determined by\n  \\begin{center}\n    \\begin{tikzcd}[row sep=0pt, column sep=small]\n      \\H^i(M) \\ar[r, equal, \"\\isosymb\"]\n      & \\H_{n-i}(M) \\ar[r, equal, \"\\isosymb\"]\n      &\\Hom{\\Zmod2}(\\H^{n-i}(M), \\Zmod2) \\ar[r, equal, \"\\isosymb\"]\n      &\\Zmod2\n      \\\\\n      y \\ar[rr, mapsto] &&\\capped{ x\\cup y }{ \\fundcl M }\\\\\n      \\v{i}{M} \\ar[rr, mapsto] &&\\capped{\\Sq i(x)}{\\fundcl M}\n    \\end{tikzcd}\n  \\end{center}\n  where the first isomorphism from the left is Poincar\u00e9 duality\n  and the second is the universal coefficient theorem\n  for the field $\\Zmod2$.\n  Equivalently, for any cohomology class $x\\in \\H^{n-i}(M)$ of 
fixed
  degree $n-i$ holds
  \begin{gather*}
    x\cup \v{i}{M} = \Sq i(x) \in \H^n(M) \cong \Zmod2
  \end{gather*}
  Mind the fixed degree of $x$---the above will not be true for
  cohomology classes of other degrees in general!
\end{Def}

\begin{Rem}
  Some immediate consequences of the definitions of the Wu classes of
  $M$ are
  \begin{itemize}
  \item $\v{0}{M} = 1$
  \item $\v{i}{M} = 0$ for $i>\frac n 2$, because $\Sq i(x) = 0$ if the
    degree of $x$ is lower than $i$.
  \end{itemize}
\end{Rem}

As for the Stiefel-Whitney classes, one has a notion of a total class.
\begin{Def}\label{def:dualwuclasses}
  The \emph{total Wu class} of $M$ is defined as the sum
  $\sum_{i\geq0}\v{i}{M}$. The \emph{total dual Wu class}
  $\dualV{M}\eqqcolon \sum_{i\geq 0}\dualv{i}{M}$
  and the dual Wu classes $\dualv{i}{M}$
  of $M$ are defined by
  \begin{gather*}
    \V{M} \cup \dualV{M} = 1
  \end{gather*}
  or equivalently
  \begin{alignat*}{4}
    \SwapAboveDisplaySkip
    1 &= \dualv{0}{M} \cup \v{0}{M} = \dualv{0}{M}
    \;,\quad
    &&\text{and}\\
    0 &= \sum_{r+s=i}\v{r}{M}\cup\dualv{s}{M}
    &&\text{in degree $0\leq i\leq n$.}
  \end{alignat*}
\end{Def}

The following more general definition of Wu classes
will turn out to be equivalent in a certain sense to the one above in
Definition~\ref{def:wuclasses}.
For the formulation recall the antipode of the Steenrod algebra
(see Definition~\ref{def:antipode}).
\begin{Def}\label{def:altwuclasses}
  Let $\xi\colon E\xrightarrow{p} B$ be a vector bundle over a
  paracompact space $B$.
  The $i$th \emph{Wu class} $\v{i}{\xi}$ of $\xi$ for $0\leq i\leq n$
  is defined as the cohomology class in $\H^i(B)$ that is uniquely
  determined by
  \begin{gather*}
    \antipode(\Sq i)(\u{\xi}) = \thomiso(\v{i}{\xi}) = \pb p \v{i}{\xi} \cup \u{\xi}
  \end{gather*}
  The \emph{total Wu class} of $\xi$ is defined as usual as
  $\V{\xi}\coloneqq\sum_{i\geq0}\v{i}{\xi}$, and satisfies accordingly
  $\antipode(\SQ)(\u{\xi})=\thomiso(\V{\xi})$.
\end{Def}
\begin{Rem}
  Compare this to the possible definition of the Stiefel-Whitney
  classes in Theorem~\ref{thm:altdefswclasses}.
  Similarly, naturality with respect to pullbacks of vector bundles
  follows immediately from the definition, making the Wu classes
  characteristic classes.
\end{Rem}

Recall that the definition of Stiefel-Whitney classes of manifolds
utilizes the canonical tangent bundle structure.
In contrast to that, the Wu classes of manifolds utilize normal
bundles, as will be clear from the promised equivalence below.
\begin{Thm}\label{thm:altdefwuclasses}
  Let $M$ be a compact manifold of dimension $n$, and let
  $\N{\emb}\colon \E{\N{\emb}} \xrightarrow{p} M$ be
  any normal bundle of rank $k$ of an embedding
  $\emb\colon M\immto\R^{n+k}$. 
Then
  \begin{gather*}
    \v{i}{M} = \v{i}{\N{\emb}}\in \H^i(M)
  \end{gather*}
  \begin{proof}
    To prove Theorem~\ref{thm:altdefwuclasses}, the defining property of
    $\v{i}{M}$ will be checked on $\v{i}{\N{\emb}}$, \idest one has to show
    that for any $x\in \H^{n-i}(M)$ holds
    \begin{gather*}
      \capped{x\cup \v{i}{\N{\emb}}}{\fundcl M}
      = \capped{\Sq i(x)}{\fundcl M}
    \end{gather*}
    Note that this is simply the $i$th degree of the equation
    \begin{gather}\label{eq:proofaltdefwuclasses:claim}
      \capped{x\cup \V{\N{\emb}}}{\fundcl M}
      = \capped{\SQ(x)}{\fundcl M}
    \end{gather}
    which will be proven below.
    Beforehand, recall the following:
    \begin{description}
    \item[By \ref{lem:thomisoself-adjoint}:]
      $\pf p\capped{\thomiso(z)}{\alpha} = \capped{z}{\thomiso(\alpha)}$
      for $r\in\Nat$, $z\in \H^r(M)$,
      $\alpha\in \H_{r+\rkk}(\thomspacepair{\E{\N{\emb}}})$.
    \item[By \ref{lem:thomisofundcl}:]
      $\thomiso(\pf\incl\pf\collapse\fundcl{\Sphere{n+\rkk}}) = \fundcl M$
      where $\collapse\colon
      \Sphere{n+\rkk}\to\Discbdl{\N{\emb}}/\Spherebdl{\N{\emb}}$ is
      the collapse map of a tubular embedding of the normal bundle
      $\N M$.
    \item[By \eqref{eq:sqlowerbound} and \eqref{eq:sqidentity}:]
      The total Steenrod square
      $\SQ\colon \H^m(\Sphere m)
      \to \H^*(\Sphere m)$
      is the identity on $\H^m(\Sphere m)$, \idest
      $\Sq i\colon \H^m(\Sphere{m})\to \H^{m+i}(\Sphere{m})$ is zero for
      $i\neq0$.
    \item[By Definition~\ref{def:sq}:] The total Steenrod square
      $\SQ$ is natural and a ring homomorphism.
    \item[By \eqref{eq:capprod2}:]
      For any map of spaces $f\colon X\to Y$ and co-/homology classes
      $a$ and $\beta$ in the corresponding co-/homology groups holds
      $\capped{a}{\pf f \beta} = \pf f \capped{\pb f a}{\beta}$
    \end{description}
    For the proof fix some $i\leq n$, some arbitrary $x\in
    \H^{n-i}(M)$, as well as a collapse map
    $\collapse\colon\Sphere{n+k}\to\Discbdl{\N{\emb}}/\Spherebdl{\N{\emb}}$
    as in Lemma~\ref{lem:thomisofundcl}.
    For simplicity, denote
    $\Vu\coloneqq \V{\N{\emb}}$,
    $\Sph\coloneqq \Sphere{n+\rkk}$,
    $\U\coloneqq \u{\N{\emb}}$ and
    $\pf\collapse\coloneqq \pf{(\incl\circ\collapse)}$
    respectively
    $\pb\collapse\coloneqq \pb{(\incl\circ\collapse)}$.

    First reformulate
    $\capped{x\cup \V{\N{\emb}}}{\fundcl M}$ using the cohomology of the
    $(n+k)$\nbd{}sphere:
    \begin{align}\notag
      \capped{x\cup \Vu}{\fundcl M}
      &\clapequalsby{\ref{lem:thomisofundcl}}
        \capped
        {x\cup\Vu}
        {\thomiso\left(\pf\collapse\fundcl{\Sph}\right)}
      \\\notag
      &\equalsby{\ref{lem:thomisoself-adjoint}}
        \pf p\capped
        {\thomiso\left(x\cup\Vu\right)}
        {\pf\collapse\fundcl{\Sph}}
      \\\notag
      &\equalsby{\eqref{eq:capprod2}}
        \pf p\pf\collapse\capped
        {\pb\collapse\thomiso\left(x\cup\Vu\right)}
        {\fundcl{\Sph}}
      \\\notag
      &\equalsby{\eqref{eq:sqlowerbound} and \eqref{eq:sqidentity}}
        \pf p\pf\collapse\capped
        {\SQ\left(\pb\collapse\thomiso\left(x\cup\Vu\right)\right)}
        
{\\fundcl{\\Sph}}\n      \\\\\\notag\n      &\\equalsby{Naturality}\n        \\pf p\\pf\\collapse\\capped\n        {\\pb\\collapse\\SQ\\left(\\thomiso\\left(x\\cup\\Vu\\right)\\right)}\n        {\\fundcl{\\Sph}}\n      \\\\\\label{eq:proofaltdefwuclasses:eq1}\n      &\\equalsby{\\eqref{eq:capprod2}}\n        \\pf p\\capped\n        {\\SQ\\left(\\thomiso\\left(x\\cup\\Vu\\right)\\right)}\n        {\\pf\\collapse\\fundcl{\\Sph}}\n    \\end{align}\n    Having introduced $\\SQ$ on the left hand side, one observes a\n    certain commutativity of the Thom isomorphism, and the total\n    Steenrod square:\n    \\begin{align}\\notag\n      \\SQ\\left(\\thomiso\\left(x\\cup\\Vu\\right)\\right)\n      &=\n        \\SQ\\left(\\left(\\pb p x\\cup \\pb p\\Vu\\right) \\cup \\U\\right)\n      \\\\\\notag\n      &=\n        \\SQ\\left(\\pb p x\\cup \\left(\\pb p\\Vu \\cup \\U\\right)\\right)\n      \\\\\\notag\n      &\\equalsby{Def. $\\thomiso$}\n       \\SQ\\left(\\pb p x\\cup \\thomiso(\\Vu)\\right)\n      \\\\\\notag\n      &\\equalsby{Naturality, Ring Hom.}\n        \\pb p\\SQ(x) \\cup \\SQ(\\thomiso(\\Vu))\n      \\\\\\notag\n      &\\equalsby{Def. $\\V{M}$}\n        \\pb p\\SQ(x) \\cup \\SQ(\\antipode(\\SQ)(u))\n      \\\\\\notag\n      &\\equalsby{Def. $\\antipode$}\n        \\pb p\\SQ(x) \\cup u\n      \\\\\\label{eq:proofaltdefwuclasses:eq2}\n      &\\equalsby{Def. $\\thomiso$}\n        \\thomiso(\\SQ(x))\n    \\end{align}\n    Inserting \\eqref{eq:proofaltdefwuclasses:eq2} into\n    \\eqref{eq:proofaltdefwuclasses:eq1} from above easily yields the\n    claim in \\eqref{eq:proofaltdefwuclasses:claim} that proves the theorem:\n    \\begin{align*}\n      \\SwapAboveDisplaySkip\n      \\capped{x\\cup \\Vu}{\\fundcl M}\n      &\\clapequalsby{\\eqref{eq:proofaltdefwuclasses:eq1}}\n        \\pf p\\capped\n        {\\SQ\\left(\\thomiso\\left(x\\cup\\Vu\\right)\\right)}\n        {\\pf\\collapse\\fundcl{\\Sph}}\n      \\\\\n      &\\equalsby{\\eqref{eq:proofaltdefwuclasses:eq2}}\n        \\pf p\\capped\n        {\\thomiso(\\SQ(x))}\n        {\\pf\\collapse\\fundcl{\\Sph}}\n      \\\\\n      &\\equalsby{\\ref{lem:thomisoself-adjoint}}\n        \\capped\n        {\\SQ(x)}\n        {\\thomiso\\left(\\pf\\collapse\\fundcl{\\Sph}\\right)}\n      \\\\\n      &\\equalsby{\\ref{lem:thomisofundcl}}\n        \\capped\n        {\\SQ(x)}\n        {\\fundcl M}\n        \\qedhere\n    \\end{align*}\n  \\end{proof}\n\\end{Thm}\n\n\n\\subsection{Wu's Theorem}\\label{sec:wutheorem}\nWu's theorem, the goal of this section, gives a close relation of\nthe Wu classes from above with the Stiefel-Whitney classes using\nSteenrod squares. 
Besides, it also immediately proves that the Wu classes
as defined in Theorem~\ref{thm:altdefwuclasses} are actually characteristic
classes.

\begin{Thm}[Wu]\label{thm:wu}
  A closed manifold gives rise to the following two equalities
  \begin{align}\notag
    \dualW{M} &= \SQ\left(\dualV{M}\right)
    &\text{respectively}&
    &\dualw{k}{M} &= \sum_{i\geq0} \Sq i\left(\dualv{k-i}{M}\right)
    &\text{and}
    \\
    \W{M} &= \SQ\left(\V{M}\right)
    &\text{respectively}&
    &\w{k}{M} &= \sum_{i\geq0} \Sq i\left(\v{k-i}{M}\right)
    \label{tag:wuformula}\tag{Wu's formula}
  \end{align}
  that are equivalent using
  $\dualW{M}\cup\W{M} = 1$ and
  \begin{gather*}
    \SQ(\dualV{M})\cup\SQ(\V{M})
    = \SQ(\dualV{M}\cup\V{M})
    = \SQ(1)
    = 1
    \;.
  \end{gather*}
\end{Thm}
% \begin{Rem}
%   The following is equivalent to Wu's formula
%   \begin{gather*}
%     \dualW{M} = \SQ\left(\dualV{M}\right)
%     \qquad\text{respectively}\qquad
%     \dualw{k}{M} = \sum_{i\geq0} \Sq i\left(\dualv{k-i}{M}\right)
%   \end{gather*}
%   using the inverses $\dualW{M}\cup\W{M} = 1$ and
%   \begin{gather*}
%     \SQ(\dualV{M})\cup\SQ(\V{M})
%     = \SQ(\dualV{M}\cup\V{M})
%     = \SQ(1)
%     = 1
%     \;.
%   \end{gather*}
% \end{Rem}
The proof uses the alternative characterization
\begin{gather*}
  \antipode(\SQ)(\u{\N{\emb}}) = \thomiso(\V{\N{\emb}})
\end{gather*}
of the Wu classes from Theorem~\ref{thm:altdefwuclasses}, and directly
follows from the following lemma.
\begin{Lem}\label{lem:wu}
  For a vector bundle $\xi\colon \E\xi\xrightarrow{p}\B\xi$ over a
  paracompact space holds
  \begin{gather*}
    \SQ(\V{\xi}) \cup \W{\xi}= 1
    \;.
  \end{gather*}
\end{Lem}
\begin{proof}[Proof of \ref{tag:wuformula}]
  Lemma~\ref{lem:wu} states for a closed manifold $M^n$ and any
  embedding $\emb\colon M\to\R^{n+k}$ that
  \begin{gather*}
    \SwapAboveDisplaySkip
    \SQ(\V{M}) \cup \dualW{M}
    = \SQ(\V{\N M}) \cup \W{\N{\emb}}
    \cequalsby{\ref{lem:wu}} 1
    \;.
  \end{gather*}
  Cupping with $\W{M}=\W{\T M}$ on both sides yields the claim
  as $\W{\T M} \cup \W{\N{\emb}} = 1$ by
  Remark~\itemref{rem:propswclasses}{item:wuclassmfdinverse}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:wu}]
  For simplicity use the shortenings
  $\U\coloneqq \u{\xi}$,
  $\Vu\coloneqq \V{\xi}$, and
  $\Ws\coloneqq \W{\xi}$.
  Then calculate
  \begin{align*}
    \SwapAboveDisplaySkip
    \thomiso(1)
    &= \U
    \\
    &\equalsby{Def.~\ref{def:antipode}}
      \SQ\left(\antipode(\SQ)(\U)\right)
    \\
    &\equalsby{Def.~\ref{def:altwuclasses}}
      \SQ\left(\thomiso(\Vu)\right)
    \\
    &\equalsby{Def.~$\thomiso$}
      \SQ\left(\pb p\Vu \cup \U\right)
    \\
    &\equalsby{\ref{tag:cartan}, Naturality}
      \pb p\SQ(\Vu) \cup \SQ(\U)
    \\
    &\equalsby{\ref{thm:altdefswclasses}}
      \pb p\SQ(\Vu) \cup \thomiso(\Ws)
    \\
    &\equalsby{Def.~$\thomiso$}
      \pb p\SQ(\Vu) \cup \left(\pb p \Ws \cup \U\right)
    \\
    &=
      \pb p\left(\SQ(\Vu) \cup \Ws\right) \cup \U
    \\
    &\equalsby{Def.~$\thomiso$}
      \thomiso\left(\SQ(\Vu) \cup \Ws\right)
  \end{align*}
  Applying the inverse of the Thom isomorphism to both sides
  yields the 
equality which was to be shown.
\end{proof}


\section{Massey's Theorem}
\label{sec:massey}
Massey's main theorem on the Stiefel-Whitney classes of manifolds
\cite[Theorem~I.]{massey} gives a concrete statement about the degrees
in which the dual Stiefel-Whitney classes may be non-zero in general.
By Corollary~\ref{cor:obstruction} this is directly related to the
immersion problem, as it yields an important obstruction for it.
\begin{Thm}[Massey]\label{thm:massey}
  Let $M$ be a compact, $n$\nbd{}dimensional manifold.
  Given an integer $q$ with $0<q<n$ such that $\dualw{n-q}{M}\neq0$,
  there is a sequence of integers $h_1\geq\dotsb\geq h_q\geq0$ of
  length $q$ that fulfills
  \begin{gather*}
    \SwapAboveDisplaySkip
    n = \sum_{i=1}^{q} 2^{h_i}
  \end{gather*}
\end{Thm}
Note that the minimal number of powers of two needed to represent $n$
is $\alpha(n)$, the number of ones in the binary representation of $n$.
As an immediate consequence:
\begin{Cor}
  For any compact, $n$\nbd{}dimensional manifold, all its dual
  Stiefel-Whitney classes of degree
  greater than $n-\alpha(n)$ must be zero.
\end{Cor}

The following subsections are dedicated to the proof of Massey's Theorem.
Let $M$ be a compact, $n$\nbd{}dimensional manifold throughout the proof.
The latter consists of several steps:
\begin{steps}
\item\label{tag:masseystep1}
  Show that for any $q$, admissible iterated Steenrod square $\Sq I$, and cohomology
  class $x\in\H^q(M)$ of degree $q$ such that $\Sq I(x)$ is non-trivial there exists
  some representation of the form
  \begin{gather*}
    \deg \left(\Sq I(x)\right)
    = 2^k\cdot
    \left( 2^{k_1}+\dotsb+2^{k_{q-1}} + 1 \right)
    \;.
  \end{gather*}
  (All results during this step hold for any space $X$.)
\item\label{tag:masseystep2}
  Find some iterated Steenrod square candidate to which the above
  degree formula is applicable, \idest which is non-trivial in
  degree
  \begin{gather*}
    \Sq I\colon\H^q(M)\to\H^n(M)
    \;.
  \end{gather*}
\end{steps}
Applying \ref{tag:masseystep1} to the Steenrod square $\Sq I$ from
\ref{tag:masseystep2} and some $x\in\H^q(M)$ with $\Sq I(x)\neq 0$
immediately yields the result because
\begin{gather*}
  n = \deg\left(\Sq I(x)\right) = \underbrace
  {2^{k_1+k}+\dotsb+2^{k_{q-1}+k} + 2^{k}}_{\text{$q$ summands}}
  \;.
\end{gather*}

\subsection[A Degree Formula for Iterated Steenrod Squares]
{Step 1: A Degree Formula for Iterated Steenrod Squares}
For this step $M$ may be any space.
\ref{tag:masseystep1} requires proving the following claim.
\begin{Lem}[\ref{tag:masseystep1}]\label{lem:masseystep1}
  Let $q\geq 0$ be an integer,
  and $I\in\Nat^{\l(I)}$ an admissible sequence of integers.
  Further, let $x\in\H^q(M)$ be a cohomology class of degree $q$
  such that $\Sq I(x)$ is non-trivial.
  Then there exists $k\in\Nat$ and a sequence of integers
  $0\leq k_1\leq\dotsb\leq k_{q-1}$ of length $q-1$ such that the
  degree of $\Sq I(x)$ can be represented as the dissection
  \begin{gather*}
    \deg \Sq I(x)
    = \deg x + \d(I)
    = 2^k\cdot
    \left( 1 + 2^{k_1}+\dotsb+2^{k_{q-1}} \right)
    \;.
  \end{gather*}
  % Mind that the maximum length of such a sequence is $\deg\Sq I(x)$.
\end{Lem}

In order to split the proof into several cases, recall that
$\Sq I(x) = 0$ for $\e(I)>\deg x$ by
Remark~\itemref{rem:sq}{item:squpperboundgeneral}. 
This leaves the two
cases $\e(I)<\deg x$ and $\e(I)=\deg x$.
Inductively applying the following Lemma by Serre restricts the
proof of Lemma~\ref{lem:masseystep1} to the first case where $\e(I)<q$.
\begin{Lem}[Serre]
  \label{lem:serre}
  Every admissible sequence $I\in\Nat^{\l(I)}$ of excess $\e(I)>0$
  admits an admissible sequence $J$ with $\e(J)<\e(I)$,
  together with some $k\in\Nat$
  such that for any cohomology class $x\in\H^{\e(I)}(M)$ holds
  \begin{gather*}
    \Sq I(x) = \left(\Sq J(x)\right) ^{2^k}
    \qquad\text{respectively}\qquad
    \deg\left(\Sq I(x)\right) = 2^k\cdot \deg\left(\Sq J(x)\right)
    \;.
  \end{gather*}
\end{Lem}

Before proving Lemma~\ref{lem:serre} one can finish the argument for
the case $\e(I)<\deg x$.
\begin{proof}[Proof of Lemma~\ref{lem:masseystep1}]
  Let $q\in\Nat$ and $I=(i_1,\dotsc,i_l)$ be an admissible sequence such that
  $\e(I)<q$.
  Assume there is a cohomology class $x\in\H^q(M)$ such that $\Sq I(x)\neq0$.
  Set
  \begin{alignat*}{4}
    \SwapAboveDisplaySkip
    \alpha_0 &= q-1-\e(I)\geq 0 &\qquad&\text{which is non-negative as $\e(I)<q$,} \\
    \alpha_r &= i_{r}-2i_{r+1}  &&\text{for $1\leq r< \l(I)$, and} \\
    \alpha_{\l(I)} &= i_{\l(I)}
    \;.
  \end{alignat*}
  It is an easy exercise that the excess of $I$ can be rewritten as
  $\e(I)=\sum_{r=1}^{\l(I)}\alpha_r$, so
  \begin{gather}\label{eq:alpha0}
    \sum_{r=0}^{\l(I)}\alpha_r = \alpha_0 + \e(I) \cequalsby{Def.} q-1
    \;.
  \end{gather}
  Just as easily one sees
  \begin{align}
    \SwapAboveDisplaySkip
    \label{eq:proofmassey:eq1}
    i_s
    &= \sum_{r=0}^{\l(I)-s}2^r\alpha_{s+r}
  \end{align}
  The above definitions then directly imply the following reformulation
  of $\d(I)$ in terms of $\alpha_i$:
  \begin{align}
    \SwapAboveDisplaySkip
    \notag
    \d(I)
    &\coloneqq \sum_{s=1}^{\l(I)}i_s  \\\notag
    &\equalsby{\eqref{eq:proofmassey:eq1}}
      \sum_{s=1}^{\l(I)}\sum_{r=0}^{\l(I)-s}2^r\alpha_{s+r}\\\notag
    &\equalsby{Reorder} \sum_{j=1}^{\l(I)}
      \left(\sum_{m=0}^{j-1}2^m\right)\alpha_j
      = \sum_{j=1}^{\l(I)}(2^j-1)\alpha_j
      = \sum_{j=1}^{\l(I)}2^j\alpha_j
      - \sum_{j=1}^{\l(I)}\alpha_j\\
    \label{eq:proofmassey:eq2}
    &= \sum_{j=1}^{\l(I)}2^j\alpha_j
      - \e(I)
      \;.
  \end{align}
  All put together yields
  \begin{align*}
    \deg\left(\Sq I(x)\right)
    &= \deg(x) + \d(I)\\
    &= 1 + \deg(x) -1 +\d(I) \\
    &\equalsby{\eqref{eq:proofmassey:eq2}}
      1 + q - 1 - \e(I) + \sum_{j=1}^{\l(I)}2^j\alpha_j \\
    &\equalsby{Def.}
      1 + \alpha_0 + \sum_{j=1}^{\l(I)}2^j\alpha_j \\
    &= 1 + \sum_{j=0}^{\l(I)}2^j\alpha_j
    = 1 + \left(
      \underbrace{2^0 +\dotsb+ 2^0}_{\text{$\alpha_0$ times}}
      + \dotsb
      + \underbrace{2^{\l(I)}+\dotsb+2^{\l(I)}}_{\text{$\alpha_{\l(I)}$ times}}
      \right)
  \end{align*}
  which is one plus a sum of exactly
  $\sum_{j=0}^{\l(I)}\alpha_j\cequalsby{\eqref{eq:alpha0}} q-1$
  powers of two as was to be shown.
  The $k_j$ are in this case
  \begin{gather*}
    \SwapAboveDisplaySkip
    k_j = \begin{cases}
      0 & 0< j\leq \alpha_0\\
      1 & \alpha_0< j\leq \alpha_0+\alpha_1\\
      \vdots\\
      \l(I) & \alpha_0+\dotsb+\alpha_{\l(I)-1}<j\leq \alpha_0+\dotsb+\alpha_{\l(I)}
    \end{cases}
    \qedhere
  
\end{gather*}
\end{proof}

\begin{proof}[Proof of Lemma~\ref{lem:serre}]
  A version of the following proof can be found in
  \cite[p.~159, Lemma~1, converse part]{serre}.
  First note that any admissible sequence $I\coloneqq(i_1,\dotsc,i_l)\in\Nat^l$
  of excess $\e(I)>0$ can be written as
  \begin{gather*}
    I=(2^{k-1}i_k,\dotsc,2i_k,i_k,i_{k+1},\dotsc,i_l)
  \end{gather*}
  with $l>k\geq 1$ chosen maximal, \idest $i_k>2i_{k+1}$. Since $I$ is admissible, the subsequence
  $J\coloneqq(i_{k+1},\dotsc,i_l)$ is admissible as well.
  In order to see that such $J$ and $k$ fulfill the requirements from
  Lemma~\ref{lem:serre} one only needs to show
  \begin{claim}
    For $I$ and $J$ as above, and $x\in\H^{\e(I)}(X)$ a cohomology
    class of a space $X$ holds
    \begin{enumerate}
    \item $\Sq I(x) = \left(\Sq J(x)\right)^{2^k}$, and
    \item $\e(J)<\e(I)$.
    \end{enumerate}
  \end{claim}
  For the first part one simply has to check that
  $\deg(\Sq J(x))=i_k$, as the statement then inductively follows from
  property \eqref{eq:sqsquared} that $\Sq i(y)=y^2$ for any cohomology
  class with $i=\deg(y)$.
  So calculate
  \begin{align*}
    \deg(\Sq J(x))
    &= \d(J) + \deg(x)\\
    &= \d(J) + \e(I)\\
    &= \d(J)
      + \left(\sum_{r=1}^{\l(I)-1}(i_r-2i_{r+1})\right)
      + i_{\l(I)}\\
    &= \d(J)
      + \left(\sum_{r=1}^{k-1}\underbrace{(i_r-2i_{r+1})}_{=0}\right)
      + (i_k-2i_{k+1})
      +\left(\sum_{r=k+1}^{\l(I)-1}(i_r-2i_{r+1})\right) + i_{\l(I)}\\
    &= \d(J)
      + (i_k-2i_{k+1})
      + \e(J) \\
    &= \d(J) + i_k - 2i_{k+1} + 2i_{k+1} - \d(J) \\
    &= i_k
  \end{align*}
  Comparing the excesses yields the second part:
  \begin{gather*}
    \e(I) - \e(J)
    = \left(\sum_{r=1}^{k-1}\underbrace{(i_r-2i_{r+1})}_{=0}\right)
    + \underbrace{(i_k - 2i_{k+1})}_{\text{$>0$ by def. 
of $k$}}
    > 0
    \qedhere
  \end{gather*}
\end{proof}


\subsection[Candidates for the Degree Formula]
{Step 2: Candidates for the Degree Formula}

This section is dedicated to the search for an iterated Steenrod square
$\Sq I$ of degree $n-q$ that is non-trivial in degree
$\Sq I\colon\H^q(M)\to\H^n(M)$ in order to complete
\ref{tag:masseystep2} of the proof of Theorem~\ref{thm:massey}.

The candidate is the multiplication map
\begin{gather*}
  \H^q(M) \longto \H^n(M)
  \qquad
  x\longmapsto x\cup\dualw{n-q}{M}
\end{gather*}
which is non-trivial if and only if $\dualw{n-q}{M}\neq 0$, since the
cup-product is non-degenerate by Poincaré duality
\cite[Proposition~3.38]{hatcher}.
It remains to see that this map actually comes from a sum of
iterated Steenrod squares, one of which then must be non-trivial and
of the correct degree.
But this easily follows from the next main lemma,
the proof of which essentially uses Wu's theorem and the properties of
the Wu classes.

\begin{Lem}\label{lem:masseystep2}
  Let $0<k<n$ and $x\in\H^k(M)$ be a cohomology class.
  Then
  \begin{gather*}
    x\cdot\dualw{n-k}{M} = \sum_{i>0}\Sq i(x)\cdot\dualw{n-k-i}{M}
  \end{gather*}
\end{Lem}
Again, before proving the above lemma, let us see how this contributes
to the final result.
\begin{proof}[Proof of Theorem~\ref{thm:massey}]
  The trick to complete \ref{tag:masseystep2} is to show that
  for $\dualw{n-q}{M}\neq 0$, and $k$, $x$ as in the Lemma above
  \begin{gather*}
    \SwapAboveDisplaySkip
    x\cdot \dualw{n-q}{M} = \sum_{I\in A}\Sq I(x)
    \;,
  \end{gather*}
  where $A$ is some collection of sequences of degree $(n-q)$.
  Because then for at least one $I\in A$, $\Sq I(x)$ is non-trivial,
  as is needed to apply Lemma~\ref{lem:masseystep1} which then yields
  the desired decomposition of $n=\deg(\Sq I(x))$.

  The above lemma serves for a descending induction on the maximum
  degree $n-q-r$ of the dual Stiefel-Whitney classes occurring in
  the sum describing $x\cdot\dualw{n-q}{M}$.
  More precisely, assume that
  \begin{gather*}
    \SwapAboveDisplaySkip
    x\cdot \dualw{n-q}{M}
    = \sum_{I\in A}\Sq I(x)
    + \sum_{j=r}^{n-q-1}\Sq{I_j}(x)\cdot\dualw{n-q-j}{M}
    \;.
  \end{gather*}
  Then Lemma~\ref{lem:masseystep2} reduces the maximum degree $n-q-r$
  of dual Stiefel-Whitney classes with the induction rule
  \begin{gather*}
    \Sq I(x)\cdot\dualw{n-q-j}{M}
    \cequalsby{\ref{lem:masseystep2}}
    \sum_{i>0}\Sq i\circ\Sq I(x)\cdot\dualw{n-q-(i+j)}{M}
  \end{gather*}
  as follows:
  \begin{align*}
    \sum_{j=r}^{n-q-1}\Sq{I_j}(x)\cdot\dualw{n-q-j}{M}
    &= \sum_{j=r}^{n-q-1}\sum_{i>0}
      \Sq i(\Sq{I_j}(x))\cdot \dualw{n-q-j-i}{M}
    \\
    &= \sum_{j=r}^{n-q-1}\sum_{i=1}^{n-q-j}
      \Sq{(i)\concat I_j}(x)\cdot \dualw{n-q-j-i}{M}
    \\
    &= \sum_{\mathrlap{r+1\leq i+j\leq n-q}}
      \Sq{(i)\concat I_j}(x)\cdot \dualw{n-q-j-i}{M}
    \\
    &=\sum_{\mathrlap{i+j=n-q}} \Sq{(i)\concat I_j}(x)\cdot \dualw0{M}
      + \sum_{\mathrlap{r+1\leq i+j\leq n-q-1}}
      \Sq{(i)\concat I_j}(x)\cdot \dualw{n-q-j-i}{M}
    \\
    &=\sum_{\mathrlap{i+j=n-q}} \Sq{(i)\concat I_j}(x)
      + \sum_{\mathrlap{r+1\leq i+j\leq n-q-1}}
      \Sq{(i)\concat I_j}(x)\cdot \dualw{n-q-j-i}{M}
      \;.
  \end{align*}
  Iterating this, \idest going up to $r=n-q$, one obtains that
 
 $x\\cdot\\dualw{n-q}{M}$ is a sum of iterated Steenrod squares\n  evaluated on $x$ as was to be shown.\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:masseystep2}]\n  As above let $0<k<n$ and $x\\in\\H^k(M)$.\n  For simplicity write\n  $\\ws i\\coloneqq\\w i M$, $\\dualws i\\coloneqq\\dualw i M$,\n  $\\vu i = \\v i M$ and $\\dualvu i = \\dualv i M$.\n  In order to translate from Stiefel-Whitney classes to Steenrod\n  squares and back again recall the following results.\n  \\begin{description}\n  \\item[By Theorem~\\ref{thm:wu} (Wu):]\n    $\\sum_{j=0}^{s} \\Sq j(\\dualvu{s-j}) = \\dualws{s-j}$\n    for any $s\\leq n$.\n  \\item[By Definition~\\ref{def:wuclasses} of $\\Vu$:]\n    $\\vu i \\cdot y = \\Sq i(y)$\n    for any $i\\leq n$ and $y\\in\\H^{n-i}(M)$.\n  \\item[By Definition~\\ref{def:dualwuclasses} of $\\dualVu$:]\n    $\\dualvu d = \\sum_{i=1}^{d} \\vu i\\cdot \\dualvu{d-i}$ for $d>0$.\n  \\end{description}\n  The full calculation is then\n  \\begin{align*}\n    \\qquad\n    x\\cdot \\dualws{n-k}\n    &\\clapequalsby{\\ref{thm:wu}}\n      x\\cdot \\sum_{i=0}^{n-k} \\Sq i(\\dualvu{n-k-i}) \\\\\n    &\\equalsby{Def. $\\Sq 0$}\n      x\\cdot\\left(\n      \\dualvu{n-k} + \\sum_{i=1}^{n-k} \\Sq i(\\dualvu{n-k-i})\n      \\right) \\\\\n    &\\equalsby{Def. $\\dualVu$}\n      x\\cdot\\left(\n      \\left(\\sum_{i=1}^{n-k} \\vu{i}\\cdot\\dualvu{n-k-i}\\right)\n      + \\left(\\sum_{i=1}^{n-k} \\Sq i(\\dualvu{n-k-i})\\right)\n      \\right) \\\\\n    &=\n      \\sum_{i=1}^{n-k} \\left(\n      \\vu{i}\\cdot x\\cdot \\dualvu{n-k-i}\n      + x\\cdot \\Sq i(\\dualvu{n-k-i})\n      \\right) \\\\\n    &\\equalsby{Def. $\\Vu$}\n      \\sum_{i=1}^{n-k} \\left(\n      \\Sq i (x\\cdot \\dualvu{n-k-i})\n      + x\\cdot \\Sq i(\\dualvu{n-k-i})\n      \\right) \\\\\n    &\\equalsby{\\ref{tag:cartan}, Def.}\n      \\sum_{i=1}^{n-k} \\left(\n      \\left( \\sum_{r=0}^{i} \\Sq r(x)\\cdot \\Sq{i-r}(\\dualvu{n-k-i}) \\right)\n      + \\Sq 0(x)\\cdot \\Sq i(\\dualvu{n-k-i})\n      \\right) \\\\\n    &=\n      \\sum_{i=1}^{n-k} \\left(\n      \\sum_{r=1}^{i} \\Sq r(x)\\cdot \\Sq{i-r}(\\dualvu{n-k-i})\n      \\right) \\\\\n    &\\equalsby{Reorder}\n      \\sum_{r=1}^{n-k} \\Sq r(x) \\cdot\n      \\left( \\sum_{j=0}^{n-k-r} \\Sq j(\\dualvu{n-k-(j+r)}) \\right) \\\\\n    &\\equalsby{\\ref{thm:wu}}\n      \\sum_{r=1}^{n-k} \\Sq r(x) \\cdot\\dualws{n-k-r}\n      \\qedhere\n  \\end{align*}\n\\end{proof}\nThis finished the proof of Massey's theorem.\n\n\n\\section{Best Possible Result}\\label{sec:bestpossibleresult}\nRecall that a closed $n$\\nbd{}manifold can only immerse into\n$\\R^{n+k}$ if all Stiefel-Whitney classes $\\dualw{i}{\\N M}$ of its\nnormal bundle are zero in degrees greater than $k$.\nMassey's Theorem now states that this condition is met for\n$k=n-\\alpha(n)$, and thus for all manifolds the above\nobstruction to the immersion property vanishes. 
From this arose the
idea for the immersion conjecture.

Naturally, there occurs the question whether Massey's $n-\alpha(n)$ is
the best possible (\idest smallest) result for arbitrary closed
$n$\nbd{}manifolds, making the immersion conjecture the best
guess of a general possible immersion codimension.
The answer is yes, proved by the following counterexamples of manifolds
not immersing into $\R^{2n-(\alpha(n)+1)}$.

\begin{Thm}
  Denote by $\RP{i}$ the $i$th real projective space.
  \begin{enumerate}
  \item For $n=2^i$, the dual Stiefel-Whitney class
    $\dualw{n-\alpha(n)}{\RP{2^i}}$ is not zero.
  \item For $n\in\Nat$ with binary expansion
    $n=\sum_{r=1}^{q}2^{i_r}$, $i_1>\dotsb>i_q$, the dual Stiefel-Whitney class
    $\dualw{n-\alpha(n)}{\prod_{r=1}^{q}\RP{2^{i_r}}}$ is not zero.
  \end{enumerate}
  \begin{proof}
    Compare also \cite[p.~87]{immersionconj}.
    The total Stiefel-Whitney class of $\RP{n}$ for arbitrary
    $n\in\Nat$ is
    \begin{gather*}
      \SwapAboveDisplaySkip
      \W{\RP n} = (1+x_n)^{n+1}\in\H^*(\RP n)\cong \Zmod2[x_n]/(x_n^{n+1})
    \end{gather*}
    (see \forexample \cite[Example~(19.4.1)]{tomdieck}).
    For $n=2^i$ this takes the form
    \begin{align*}
      \W{\RP{2^i}}
      &= (x+1)^{2^i+1}
      = 1 + x + x^{2^i}
        \;,\quad\text{thus} \\
      \dualW{\RP{2^i}}
      &= \sum_{r=0}^{2^i-1} x^r
        \;,\quad\text{especially} \\
      \dualw{n-\alpha(n)}{\RP{2^i}}
      &= x^{2^i-1}\neq 0
        \;,\quad\text{since $\alpha(2^i)=1$,}
      %   \quad\text{, as} \\
      % \W{\RP{2^i}}\cdot\dualW{\RP{2^i}}
      % &= \sum_{r=0}^{2^i-1}x^r
      % + \sum_{r=1}^{2^i}x^r
      % + \sum_{r=0}^{2^i-1}x^{2^i}\cdot x^r \\
      % &= \left( 1 + \sum_{r=1}^{2^i-1}x^r \right)
      %   + \left( \sum_{r=1}^{2^i-1}x^r + x^{2^i} \right)
      %   + x^{2^i}
      %   = 1
    \end{align*}
    which proves the first statement.
    For the second statement where $n=\sum_{r=1}^q 2^{i_r}$ note that
    \begin{gather}
      \SwapAboveDisplaySkip
      \label{eq:bestresultproof1}
      n-\alpha(n)
      = \left( \sum_{r=1}^q 2^{i_r} \right) - q
      = \sum_{r=1}^q \left( 2^{i_r} - 1 \right)
      \;.
    \end{gather}
    Using multiplicativity %(\ref{tag:swclassesmultiplicativity})
    of the Stiefel-Whitney classes one gets
    \begin{align*}
      \dualW{\prod_{r=1}^q \RP{2^{i_r}}}
      &= \prod_{r=1}^q \dualW{\RP{2^{i_r}}}
        = \prod_{r=1}^q \left(\sum_{i=0}^{2^{i_r}-1} x_{2^{i_r}}^{i}\right)
        &\text{and by combinatorics} \\
      \dualw{n-\alpha(n)}{\prod_{r=1}^{q}\RP{2^{i_r}}}
      &= \prod_{r=1}^q x_{2^{i_r}}^{2^{i_r}-1}
        \neq 0
    \end{align*}
    in
    $\Zmod2\left[x_{2^{i_1}},\dotsc,x_{2^{i_q}}\right]/
    \left(x_{2^{i_1}}^{2^{i_1}+1},\dotsc,x_{2^{i_q}}^{2^{i_q}+1}\right)
    \cong \H^*\left(\prod_{r=1}^q \RP{2^{i_r}}\right)$.
  \end{proof}
\end{Thm}
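
% The following short sketch is not part of the proof above; it is a
% mechanical sanity check, over GF(2), of the classes just computed.
The computation of $\dualW{\RP{2^i}}$ above can also be verified mechanically
over $\Zmod2$: the following Python sketch (all names ad hoc) multiplies out
$(1+x)^{2^i+1}$ and inverts it modulo $x^{2^i+1}$.
\begin{verbatim}
def mul_mod2(a, b, deg):
    """Multiply polynomials over GF(2) (coefficient lists), truncated above 'deg'."""
    out = [0] * (deg + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= deg:
                    out[i + j] ^= 1
    return out

def total_sw_class_rp(n):
    """W(RP^n) = (1 + x)^{n+1} in Z/2[x]/(x^{n+1})."""
    w = [1]
    for _ in range(n + 1):
        w = mul_mod2(w, [1, 1], n)
    return w

def dual_class(w, deg):
    """Invert a total class (constant term 1) mod x^{deg+1} over GF(2)."""
    wbar = [0] * (deg + 1)
    wbar[0] = 1
    for d in range(1, deg + 1):
        # coefficient of x^d in w * wbar must vanish
        wbar[d] = sum(w[i] * wbar[d - i] for i in range(1, d + 1) if i < len(w)) % 2
    return wbar

for i in range(1, 5):
    n = 2 ** i
    w = total_sw_class_rp(n)
    assert w == [1, 1] + [0] * (n - 2) + [1]      # 1 + x + x^{2^i}
    wbar = dual_class(w, n)
    assert wbar[:n] == [1] * n and wbar[n] == 0   # sum_{r=0}^{2^i - 1} x^r
    print(f"n = 2^{i}: dual class in top degree n - alpha(n) = {n - 1} is non-zero")
\end{verbatim}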
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis-02.tex", "max_issues_repo_name": "gesina/master_thesis", "max_issues_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis-02.tex", "max_forks_repo_name": "gesina/master_thesis", "max_forks_repo_head_hexsha": "4de043b92e7baa1e13c2ca1b6da7c5cf869580f6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.76147343, "max_line_length": 95, "alphanum_fraction": 0.6102058113, "num_tokens": 23091, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390162, "lm_q2_score": 0.7520125793176222, "lm_q1q2_score": 0.5566204311417361}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[version=3]{mhchem}\n\\usepackage{siunitx}\n\n\\title{Analyzing the Effectiveness of Various Types of Aspirin}\n\\date{9 February 2015}\n\\author{Tarik Onalan}\n\n\\sisetup{ per-mode=fraction }\n\n\\begin{document}\n    \\maketitle\n    \\section{Price}\n        Cost per gram:\n        \\begin{itemize}\n            \\item Baby Aspirin: $\\frac{\\SI{2.99}[\\$]{}}{1~pack}\\cdot\\frac{1~pack}{36~tablets}\\cdot\\frac{1~tablet}{\\SI{81}{\\mg}}\\cdot\\frac{\\SI{1000}{\\mg}}{\\SI{1}{g}}=1.02\\frac{\\$}{\\si{\\g}}$\n            \\item Normal Aspirin: $\\frac{\\SI{11.99}[\\$]{}}{1~pack}\\cdot\\frac{1~pack}{300~tablets}\\cdot\\frac{1~tablet}{\\SI{325}{\\mg}}\\cdot\\frac{\\SI{1000}{\\mg}}{\\SI{1}{g}}=0.12\\frac{\\$}{\\si{\\g}}$\n        \\end{itemize}\n        Cost per tablet:\n        \\begin{itemize}\n            \\item Baby Aspirin: $\\frac{\\SI{2.99}[\\$]{}}{1 pack}\\cdot\\frac{1~pack}{36~tablets}=\\frac{\\SI{0.08}[\\$]{}}{1~tablet}$\n            \\item Normal Aspirin: $\\frac{\\SI{2.99}[\\$]{}}{1 pack}\\cdot\\frac{1~pack}{36~tablets}=\\frac{\\SI{0.04}[\\$]{}}{1~tablet}$\n        \\end{itemize}\n    \\section{Performance}\n        \\begin{enumerate}\n            \\item We used \\SI{4.5}{\\ml} and \\SI{16.5}{\\ml} of base to titrate the baby and normal aspirin, respectively.\n            \\begin{itemize}\n                \\item Baby Aspirin: $\\SI{4.5}{\\ml}\\cdot\\SI{1}{\\g\\per\\ml}\\cdot\\SI{0.001}{\\ml\\per\\l}\\cdot\\SI{0.02}{\\mol\\per\\g}=\\SI{0.00009}{\\mol}$\n                \\item Normal Aspirin: $\\SI{16.5}{\\ml}\\cdot\\SI{1}{\\g\\per\\ml}\\cdot\\SI{0.001}{\\ml\\per\\l}\\cdot\\SI{0.02}{\\mol\\per\\g}=\\SI{0.00033}{\\mol}$\n            \\end{itemize}\n            \\item Baby aspirin neutralized \\SI{0.09}{\\mol} of \\ce{OH-}, while normal aspirin neutralized \\SI{0.33}{\\mol}\n            of \\ce{OH-}. (See above list for calculations)\n            \\item The number of moles of \\ce{H+} is equal to the number of moles of \\ce{OH-} required to titrate the\n            solution. If the number of moles of \\ce{H+} were equal to the number of moles of acid, then the mass of \n            tablets would be:\n            \\begin{itemize}\n                \\item Baby Aspirin: $\\SI{0.00009}{\\mol}\\cdot\\SI{180.157}{\\g\\per\\mol}=\\SI{0.0162}{\\g}=\\SI{16.2}{\\g}$\n                \\item Normal Aspirin: $\\SI{0.00033}{\\mol}\\cdot\\SI{180.157}{\\g\\per\\mol}=\\SI{0.0595}{\\g}=\\SI{59.5}{\\g}$\n            \\end{itemize}\n            \\item The actual masses of the tablets should be \\SI{81}{\\mg} and \\SI{325}{\\mg} for the baby and normal aspirins,\n            respectively. The differences between the actual masses and the calculated masses arises from the fact that\n            the tablet is not purely acetylsalicylic acid. 
There are other compounds in the tablet that contribute to its
            mass.
        \end{enumerate}
    \section{Conclusion}
        \begin{enumerate}
            \item The baby aspirin costs $2$ times as much as normal aspirin per tablet and $8.5$ times as much per gram.
            \item I would buy normal aspirin because it is cheaper both per tablet and per gram.
            \item The biggest source of error in the lab was the fact that there were pieces of tablet that did not dissolve,
            possibly skewing titration results.
            \item As the active ingredient of aspirin is an acid, it can cause gastrointestinal irritation and/or bleeding
            if it concentrates on the stomach wall, etc. However, its blood-thinning effects are useful for people who
            have a high risk of clotting.
        \end{enumerate}
\end{document}
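
% Not part of the original report: a cross-check of the arithmetic above.
% It assumes a 0.02 mol/L base titrant, as the unit factors in the
% Performance section imply; all names are ad hoc.
\begin{verbatim}
MOLAR_MASS_ASA = 180.157  # g/mol, acetylsalicylic acid
BASE_CONC = 0.02          # mol/L (assumed titrant concentration)

def moles_neutralized(volume_ml):
    """Moles of OH- delivered; equals moles of H+ neutralized."""
    return volume_ml / 1000 * BASE_CONC

for name, v_ml, tablet_mg in [("baby", 4.5, 81), ("normal", 16.5, 325)]:
    n = moles_neutralized(v_ml)
    implied_mass_mg = n * MOLAR_MASS_ASA * 1000
    purity = implied_mass_mg / tablet_mg
    print(f"{name}: {n:.2e} mol OH-, implied acid mass {implied_mass_mg:.1f} mg "
          f"({purity:.0%} of the {tablet_mg} mg tablet)")

# Cost comparison from the Price section:
print("per tablet:", 2.99 / 36, "vs", 11.99 / 300)                      # ~0.08 vs ~0.04 $
print("per gram  :", 2.99 / (36 * 0.081), "vs", 11.99 / (300 * 0.325))  # ~1.02 vs ~0.12 $
\end{verbatim}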
{"text": "\\subsection{System Characterisation}\n\\label{sec:theory:characterisation}\n\nA systems complete dynamic behaviour around an operating point of interest can\nbe determined by applying a step function -- $\\epsilon(t)$ -- to its input and\nrecording the system's output. The derivative of the resulting function is the\nsystem's  \\textit{impulse  response}  and  can  be  used  to calculate optimal\ncontroller parameters.\n\nBefore measuring the step response we first have to select the step amplitude.\nIf the step amplitude is too small, the plant behaves in an atypical way. This\nhappens because of small disturbances  which  affect the step response. A good\nexample for this is static friction. On the  other  hand if the step amplitude\nis  too  large,  then  non-linear  effects  will throw off the  accuracy  (and\nvalidity) of the result.\n\nAfter  a system's  step  response  has  been  measured,  it  is  necessary  to\ncharacterise  and  determine  its  properties before it's possible  to  fit  a\ntransfer function to it. The method  of  characterisation used here is to find\nthe point of inflection in the measured step response, through which a tangent\nis placed.  The  tangent's  intersection  points  with  the  lower  and  upper\nhorizontal bounds  are used to determine the dead time $T_u$ and the rise time\n$T_g$.  Further, the amplitude $K_s$ of the step response can  be  determined.\nThis process is illustrated in figure \\ref{fig:tu-tg-example}.\n\n\\begin{figure}[t]\n    \\centering\n    \\includegraphics[width=\\imagewidth]{images/tu_tg_example}\n    \\caption{Example step response of a system and measurement of the angle of inflection for determining $K_s$, $T_u$ and $T_g$. Image taken from Wikipedia\\cite{ref:tu-tg}.}\n    \\label{fig:tu-tg-example}\n\\end{figure}\n\nIt is important to note that this method  of  characterisation  is  valid  for\nsystems with an order of at least $n=2$ and for systems that don't exhibit any\novershoot.\n\n", "meta": {"hexsha": "5ff03fe7db852bb8be2dad09a2b2fdd8adbefae0", "size": 1936, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "versuche/rtGL/labor2/sections/theory/system_characterisation.tex", "max_stars_repo_name": "TheComet93/laborjournal", "max_stars_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "versuche/rtGL/labor2/sections/theory/system_characterisation.tex", "max_issues_repo_name": "TheComet93/laborjournal", "max_issues_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "versuche/rtGL/labor2/sections/theory/system_characterisation.tex", "max_forks_repo_name": "TheComet93/laborjournal", "max_forks_repo_head_hexsha": "5b83c35ec2580a22106d755f466dc6371d7444ee", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.3243243243, "max_line_length": 174, "alphanum_fraction": 0.7660123967, "num_tokens": 490, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760728, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.556620418407699}}
{"text": "\\section{\\texorpdfstring{Randomized computation}{Randomized computation}}\n\\vspace{5mm}\n\\large\n\n% 9th lecture from 51:50\n\n\\begin{definition}[Probabilistic TM (PTM)]\n\tTM with 2 transition functions: $\\delta_1, \\delta_2$.\n\tWhen PTM works on $x$ it at each step selects $\\delta_1$ with probability $\\frac{1}{2}$ (same for $\\delta_2$).\n\tPTM uses a fair cons toss that is independent from the previous results.\n\n\tPTM M returns a random variable denoted $M(x)$, $M(x) = 1$ if $M$ accepts $x$ and $0$ o/w.\n\n\tWe say that $M$ runs in tim $T(n)$ if for every input $x$ M stops after at most $T(|x|)$ steps, regardless of the selections.\n\\end{definition}\n\n\\begin{notation}\n\tFor a language $L \\in \\szo^{\\ast}$ and $x \\in \\szo^{\\ast}$ we define:\n\t\\[ L(x) = 1 \\iff X \\in L \\& L(x) = 0 \\iff x \\notin L \\]\n\\end{notation}\n\n\\begin{definition}[BPP]\n\tPTM M accepts the language $L \\in \\szo^{\\ast}$ in time $T(n)$ if\n\t\\[ \\forall x \\in \\szo^{\\ast}: Pr[M(x) = L(x)] \\geq \\frac{2}{3} \\]\n\tAnd $M$ runs in $T(n)$.\n\n\t$BPTIME(T(n))$ is a class of languages accepted by PTM in time $T(n)$.\n\t\\[ BPP = \\bigcup BPTIME(n^c) \\]\n\tBPP - bounded error probabilistic polynomial time.\n\\end{definition}\n\n\\begin{properties}[BPP]\n\tBPP is invariant to (robust):\n\t\\begin{itemize}\n\t\t\\item Accepting probability: BPP will remain the same if we replace $\\frac{2}{3}$ by any fraction $\\in (\\frac{1}{2}, 1)$.\n\n\t\t\\item Probabilities of picking $\\delta_1$ or $\\delta_2$ can also be changed to $p, 1 - p$ instead of $\\frac{1}{2}$.\n\t\t\\item Running time: $T(n)$ can be replaced by \"expected\" $T(n)$.\n\t\\end{itemize}\n\\end{properties}\n\n\\begin{note}[Open Question]\n\tWe know:\n\t\\[ \\TP \\subseteq BPP \\]\n\tas we can take $\\delta_1 = \\delta_2$.\n\n\tHowever the strict inclusion is an open question\n\t\\[ \\TP \\subsetneq BPP \\]\n\\end{note}\n\n\\begin{note}[BPP and NP]\n\tThe idea is similar, but BPP requires $\\frac{2}{3}$ of accepting computation branches and $\\TNP$ only 1.\n\\end{note}\n\n\\begin{definition}[Probabilistic TM (PTM) Alternative]\n\tRandomness can be presented making PTM accept additional input string, which is random $y \\in \\szo^{T(n)}$.\n\tEach bit tells which transition function to choose.\n\\end{definition}\n\n% todo change to Fact and fix cref\n\\begin{theorem}[Chernoff bound]\\label{chernoff}\n\tLet $x_i, \\ldots, x_n$ be independent r.v. such that $x_i \\in \\szo$ and let $\\mu = \\sum \\E[x_i]$ then\n\t\\[ \\forall c > 0: \\Prob[|\\sum x_i - \\mu| \\geq c \\cdot \\mu ] \\leq 2e^{-\\min\\{ \\frac{c^2}{4}, \\frac{c}{2} \\} \\cdot \\mu } \\]\n\\end{theorem}\n\n\\begin{theorem}[BPP error reduction]\\label{error_reduction}\n\tLet $L \\in \\szo^{\\ast}$ be a language, $c > 0$ constant.\n\tAssume that there $\\exists PTM\\ M$ such that\n\t\\[ \\forall x \\in \\szo^{\\ast}: Pr[M(x) = L(x)] \\geq \\frac{1}{2} + |x|^{-c} \\]\n\tNote that as $x$ grows, error $|x|^{-c} \\to 0$.\n\n\tThen\n\t\\[ \\forall d > 0: \\exists PTM \\ M_e: \\forall x \\in \\szo^{\\ast}:Pr[M_e(x) = L(x)] \\geq 1 - 2^{-|x|^d} \\]\n\n\tAnd if $M$ work in polynomial time so does $M_e$.\n\\end{theorem}\n\\begin{proof}\n\tIdea: $M_e$ runs $M$ $k = 8 \\cdot |x|^{2d + c} + 1$ times.\n\tAssume $k$ is odd.\n\tObtaining results $y_1, y_2, \\ldots y_k$ and $M_e$ accepts iff\n\t\\[ \\sum y_i > \\frac{1}{2} k \\]\n\tMajority decision.\n\n\tDenote $x_i \\in \\szo^{\\ast}$ r.v. 
where
	\[ x_i = 1 \iff y_i = L(x) \]
	As each $y_i$ comes from an independent run (the coin flips are independent), the $x_i$ are independent.
	Then by the assumption on $M$
	\[ \E[x_i] = \Prob[x_i = 1] \geq \frac{1}{2} + |x|^{-c} = p \]

	Define $\delta = \frac{1}{2} |x|^{-c}$ and consider the case
	\[ S = \sum x_i \geq pk - \delta pk = pk (1 - \delta) = k \left(\frac{1}{2} + |x|^{-c}\right)\left(1 - \frac{1}{2} |x|^{-c}\right) = \frac{1}{2}k + k\left(\frac{3}{4} |x|^{-c} - \frac{1}{2} |x|^{-2c}\right) \]
	Then as $\left(\frac{3}{4} |x|^{-c} - \frac{1}{2} |x|^{-2c}\right) > 0$
	\[ S = \frac{1}{2}k + k\left(\frac{3}{4} |x|^{-c} - \frac{1}{2} |x|^{-2c}\right)  > \frac{1}{2} k \]
	We conclude that if $\sum x_i \geq pk(1-\delta)$, i.e. $M$ answers correctly more than $\frac{1}{2}k$ times, then $M_e$ answers correctly (for sure).

	On the contrary, $M_e$ can make a mistake only if
	\[ \sum x_i < pk - \delta pk \]
	So
	\[ \Prob[\sum x_i < pk - \delta pk] \]
	is an upper bound on the probability that $M_e$ makes a mistake.
	Then
	\[ \E[x_i] \geq p \Rightarrow \sum \E[x_i] \geq pk \]
	For some $\varepsilon \geq 0$ denote
	\[ \mu = \sum \E[x_i] = pk + \varepsilon \]
	Finally compute
	\[ P_e = \Prob[ pk - \sum x_i > \delta pk] \leq \Prob[ pk - \sum x_i + (1 - \delta)\varepsilon > \delta pk] = \Prob[pk + \varepsilon - \sum x_i > \delta pk + \delta \varepsilon] = \]
	\[ = \Prob[\mu - \sum x_i > \delta \mu ] = \]
	As the probability is symmetric and by Chernoff \cref{chernoff}
	% todo picture 9th lecture 01:58:50
	\[ = \frac{1}{2} \Prob[|\mu - \sum x_i| > \delta \mu ] \leq \frac{1}{2} 2 e^{-\frac{\delta^2}{4} \mu} \leq e^{-\frac{\delta^2}{4} pk} \]
	To summarize:
	\[ P_e = \Prob[M_e\ \text{makes an error}] \leq e^{-\frac{1}{4}\frac{|x|^{-2c}}{4} (\frac{1}{2} + |x|^{-c}) (8 \cdot |x|^{2d + c} + 1)} = e^{-\frac{8 \cdot |x|^{2c + d}}{32 \cdot |x|^{2c}} + \frac{8 \cdot |x|^{2c + d}}{16 \cdot |x|^{3c}}} \]
	By reducing the exponent (2nd summand) and simplifying the first
	\[ P_e \leq e^{-\frac{1}{4} |x|^d} = (e^{\frac{1}{4}})^{-|x|^d} \leq 2^{-|x|^d} \]
\end{proof}

\begin{lemma}[Fair coin]
	A fair coin can be simulated by a PTM with access to a biased coin with $\Prob[\text{head}]=p\in(0,1)$ in $\bigO(1)$ expected time.
\end{lemma}
\begin{proof}
	In each round the PTM flips the biased coin twice. If the result is $HH$ or $TT$, it flips a new pair; otherwise it outputs the first flip:
	\[ HT \Rightarrow H,\ \Prob[H] = p (1 - p) \qquad TH \Rightarrow T,\ \Prob[T] = (1 - p) p \]
	As
	\[ \Prob[\text{PTM stops in a given round}] = 2 p (1 - p) = c, \quad c \in (0, 1) \]
	the expected \# of rounds before the PTM stops is
	\[ \sum i (1 - c)^{i - 1} c \]
	which converges.
\end{proof}
\begin{lemma}[Biased coin (Von-Neumann)]
	A biased coin with probability $\Prob[\text{head}]=\rho$ can be simulated in $\bigO(1)$ expected time by a ``standard'' PTM (with a fair coin) if the $i$-th bit of $\rho$ can be generated in $\text{poly}(i) = i^c$ time.
\end{lemma}
\begin{proof}
	Let $\rho$ be written in binary as
	\[ \rho = 0.p_1p_2 \ldots \]
	The PTM generates a fair sequence $b_1, b_2, \ldots$.
	If the PTM in the $i$-th step generates $b_i$, then
	\begin{itemize}
		\item $b_i < p_i \Rightarrow H$
		\item $b_i > p_i \Rightarrow T$
		\item $b_i = p_i$ go to step $(i + 1)$.
	\end{itemize}
	$M$ gets to the $i$-th step $\iff$
	\[ 
\\forall 1 \\leq j \\leq i - 1: b_j = p_j \\]\n\twhich occurs with probability $\\frac{1}{2^{i - 1}}$.\n\t% todo did not understand following implication\n\t$p_i = 1 \\Rightarrow$ return $H$ in step $i$ with probability $\\frac{1}{2}$.\\\\\n\t$p_i = 0 \\Rightarrow$ return $H$ in step $i$ with probability $0$.\n\tThen\n\t\\[ \\Prob[H] = \\sum p_i \\frac{1}{2^i} = \\rho \\]\n\n\tThe expected time $\\sum n^i \\frac{1}{2^{i - 1}}$ which converges.\n\\end{proof}\n\n\\begin{definition}[Expected running time]\n    Let $M$ be a PTM and $x$ an input.\n    Then $T_{M,x}$ is a random variable which denotes the time it takes $M$ to halt on $x$.\n    The PTM $M$ runs in expected time $T(n)$ if $\\forall x \\in \\szo^{\\ast}: \\E[T_{N,x}] \\leq T(|x|)$.\n\\end{definition}\n\n\\begin{theorem}[Markov inequality]\\label{markov}\n\tLet X be a non-negative r.v. Then\n\t\\[ \\Prob[ X \\geq k \\E[x]] \\leq \\frac{1}{k} \\]\n\\end{theorem}\n\n\\begin{observation}\n\tLet PTM $M$ have expected time complexity $T(n)$.\n\tIf we simulate $M$ on PTM $M_s$ with time complexity $100 T(n)$ (by rejecting longer computations), then the probability (by Markov \\cref{markov})\n\t\\[ \\Prob[M\\ \\text{runs longer than}\\ 100 T(n)] \\leq \\frac{1}{100} \\]\n\tTherefore, $M_s$ rejects if it stops because of the clock, o/w it does the same as $M$.\n\tThen $\\Prob[M_s\\ \\text{makes an error}]$ is at most $100$ greater than for $M$.\n\n\tWe can reduce the error back to the error of $M$ by \\cref{error_reduction}.\n\\end{observation}\n\n% 10th lecture\n\\subsection{Classes RP, ZPP}\n\n\\begin{definition}[RTIME, RP]\n\t$L\\in\\text{RTIME}(T(n))\\Leftrightarrow\\exists$ PTM $M$ working in (expected) time $T(n)$ such that\n    \\begin{gather*}\n    \tx\\in L\\Rightarrow \\Prob[M(x)=1]\\geq 2/3\\\\\n\tx\\not\\in L\\Rightarrow\\Prob[M(x)=0]=1\n    \\end{gather*}\n\n    $RP=\\bigcup_c\\text{RTIME}(n^c)$ -- Randomized Polynomial time\n\n    Note that we can again prove that class is the same with either expected or exact polynomial time.\n\\end{definition}\n\\begin{definition}[ZTIME, ZPP]\n    $L\\in\\text{ZTIME}(T(n))\\Leftrightarrow\\exists$ PTM $M$ working in (expected) time $T(n)$ such that\n    \\begin{gather*}\n    \tx\\in L\\Rightarrow \\Prob[M(x)=1]=1\\\\\n\tx\\not\\in L\\Rightarrow\\Prob[M(x)=0]=1\n    \\end{gather*}\n\n    $ZPP=\\bigcup_c\\text{ZTIME}(n^c)$ -- Zero-error Probabilistic Polynomial time\n\n    Note that we can again prove that class is the same with either expected or exact polynomial time.\n\\end{definition}\n\\begin{lemma}[On probabilistic classes]\\label{rp_bpp_zpp_prop}\n    \\begin{enumerate}\n        \\item $RP\\subseteq \\TNP$\n        \\item $BPP=co-BPP$\n        \\item $RP\\subseteq BPP, co-RP\\subseteq BPP$\n        \\item $ZPP=RP\\cap co-RP$\n    \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n    \\begin{enumerate}\n\t    \\item\n\t\tFor $\\TNP$ we require at least one accepting computation, by definition of $RP$ we have at least $\\frac{2}{3}$ of them.\n\t\tIf we allow PTM to work in expected polynomial time, then there is shorter accepting branch of computation.\n        \\item Design new PTM which simulates original PTM but flips all answers.\n\t\t\\[ \\Prob[\\overline{M}(x) = \\overline{L}(x)] = \\Prob[1 - M(x) = 1 - L(x)] = \\Prob[M(x) = L(x)] \\geq \\frac{2}{3} \\]\n\t\\item Follows from stricter definition of $RP$. Second condition follows from 2).\n        \\item $ZPP \\subseteq RP\\cap co-RP$\n\t\tFollows from the stricter definition of ZPP.\n\n\t\t$ZPP \\supseteq RP\\cap co-RP$. 
Let $L \\in RP\\cap co-RP$ arbitrary.\n\t\tBy definition, we have 2 PTM: $M_y, M_n$.\n    \t\t\\begin{gather*}\n\t\t\tx\\in L \\Rightarrow \\Prob[M_y(x)=1] \\geq 2/3 \\& \\Prob[M_n(x)=1] = 1\\\\\n\t\t\tx \\notin L\\Rightarrow\\Prob[M_y(x)=0]=1 \\& \\Prob[M_n(x)=1] \\geq 2/3\n    \t\t\\end{gather*}\n\t\tThen, if $M_y(x) = 1 \\Rightarrow x \\in L$ (for sure) and symmetrically $M_n(x) = 1 \\Rightarrow x \\notin L$ (for sure).\n\t\tConstruct new PTM $M$ that simulates both PTM in parallel, output depending on the simulation:\n\t\t\\begin{itemize}\n\t\t\t\\item $M_y(x) = 1 \\Rightarrow M(X) = 1$\n\t\t\t\\item $M_n(x) = 0 \\Rightarrow M(X) = 0$\n\t\t\t\\item $M_y(x) = 0 \\& M_n(x) = 1 \\Rightarrow$ try simulation again\n\t\t\\end{itemize}\n\n\t\tThen\n    \t\t\\begin{gather*}\n\t\t\tx \\in L \\Rightarrow \\Prob[M_y(x) = 0 \\& M_n(x) = 1] \\leq \\frac{1}{3} \\cdot 1 = \\frac{1}{3} \\\\\n\t\t\tx \\notin L \\Rightarrow \\Prob[M_y(x) = 0 \\& M_n(x) = 1] \\leq 1 \\cdot \\frac{1}{3} = \\frac{1}{3}\n    \t\t\\end{gather*}\n\n\t\tExpected \\# of repetitions of the simulation is $\\leq \\sum i \\cdot (\\frac{1}{3})^i$.\n\t\tWhich converges.\n    \\end{enumerate}\n\\end{proof}\n\n\\begin{theorem}[$BPP \\subseteq \\Ppoly$ (Adleman)]\n\t\\[ BPP \\subseteq \\Ppoly \\]\n\\end{theorem}\n\\begin{proof}\n\t$L \\in BPP \\Rightarrow \\exists DTM\\ M$ with 2 inputs $x \\in \\szo^n, r \\in \\szo^m, m = poly(n)$ such that (by \\cref{error_reduction}):\n\t\\[ \\Prob[M(x, r) \\neq L(x)] \\leq \\frac{1}{2^{n^2}} \\leq \\frac{1}{2^{m + 1}} \\]\n\n\t$r \\in \\szo^m$ is \\emph{bad} for $x \\in \\szo^m$ if $M(x, r) \\neq L(x)$.\n\tFor a fixed $x$ there are $\\frac{2^m}{2^{n + 1}}$ strings $r$ that are bad for $x$.\n\tFor all $x$ there exists at $\\leq 2^n \\cdot \\frac{2^m}{2^{n + 1}} = 2^m/2$ bad strings.\n\tTherefore $\\geq \\frac{1}{2} 2^m$ strings are good $\\forall x$.\n\n\tPick some $r_0$ that is good for $x$.\n\tFrom $\\TP \\subseteq \\Ppoly$ \\cref{tp_ppoly} $M$ can be simulated by polynomial size family of circuits $\\{ C_{m + n} \\}$.\n\tNote that value of $r_0$ is hard wired into the circuit.\n\tThen\n\t\\[ \\forall x \\in \\szo^n: L(x) = M(x, r_0) = C_{m + n}(x) \\Rightarrow L \\in \\Ppoly \\]\n\\end{proof}\n\n\\begin{theorem}[Sipser-G\u00e1cs-Lautemann]\n\tBPP$\\subseteq \\Sigma^p_2 \\cap \\Pi_2^p$\n\\end{theorem}\n\\begin{proof}\n\tSince BPP is closed under complements \\cref{rp_bpp_zpp_prop} it is enough to show\n\t\\[ BPP \\subseteq \\Sigma^p_2 \\]\n\n\t$L \\in BPP \\Rightarrow \\exists DTM\\ M$ with 2 inputs, size of random bits is $m$, such that (by \\cref{error_reduction} setting $d = 1$):\n    \t\\begin{gather*}\n\t\tx \\in L \\Rightarrow \\Prob[M(x, r) = 1] \\geq 1 - \\frac{1}{2^n} \\\\\n\t\tx \\notin L \\Rightarrow \\Prob[M(x, r) = 1] \\leq \\frac{1}{2^n}\n    \t\\end{gather*}\n\n\tDefine\n\t\\[ \\forall x \\in \\szo^n: S_x := \\{ r |\\ M(x, r) = 1 \\} \\]\n\tTrivially\n    \t\\begin{gather*}\n\t\tx \\in L \\Rightarrow |S_x| \\geq (1 - \\frac{1}{2^n}) \\cdot 2^m \\\\\n\t\tx \\notin L \\Rightarrow |S_x| \\leq \\frac{1}{2^n} \\cdot 2^m \\\\\n    \t\\end{gather*}\n\tAlso define\n\t\\[ \\forall k, l \\in \\szo^m: k + l = k\\ XOR\\ l = k + l \\mod2 \\]\n\t\\[ \\forall Z \\subseteq \\szo^m, u \\in \\szo^m: Z + u = \\{ z + u |\\ z \\in Z \\} \\]\n\n\t\\begin{lemma}[Sipser-G\u00e1cs-Lautemann 1]\n\t\tLet $Z \\subseteq \\szo^m \\& |Z| \\leq 2^{m - n}$.\n\t\tAlso let $u_1, \\ldots u_k \\in \\szo^m$.\n\t\tThen for $k = \\lceil \\frac{m}{n} + 1 \\rceil$.\n\t\t\\[ \\bigcup (Z + u_i) \\subsetneq \\szo^m 
\\begin{theorem}[Sipser-G\u00e1cs-Lautemann]\n\tBPP$\\subseteq \\Sigma^p_2 \\cap \\Pi_2^p$\n\\end{theorem}\n\\begin{proof}\n\tSince BPP is closed under complement (\\cref{rp_bpp_zpp_prop}), it is enough to show\n\t\\[ BPP \\subseteq \\Sigma^p_2 \\]\n\n\t$L \\in BPP \\Rightarrow \\exists DTM\\ M$ with 2 inputs, where the random string has length $m$, such that (by \\cref{error_reduction} setting $d = 1$):\n    \t\\begin{gather*}\n\t\tx \\in L \\Rightarrow \\Prob[M(x, r) = 1] \\geq 1 - \\frac{1}{2^n} \\\\\n\t\tx \\notin L \\Rightarrow \\Prob[M(x, r) = 1] \\leq \\frac{1}{2^n}\n    \t\\end{gather*}\n\n\tDefine\n\t\\[ \\forall x \\in \\szo^n: S_x := \\{ r |\\ M(x, r) = 1 \\} \\]\n\tTrivially\n    \t\\begin{gather*}\n\t\tx \\in L \\Rightarrow |S_x| \\geq (1 - \\frac{1}{2^n}) \\cdot 2^m \\\\\n\t\tx \\notin L \\Rightarrow |S_x| \\leq \\frac{1}{2^n} \\cdot 2^m\n    \t\\end{gather*}\n\tAlso define bitwise addition modulo $2$ and shifts of sets:\n\t\\[ \\forall k, l \\in \\szo^m: k + l := k\\ XOR\\ l \\]\n\t\\[ \\forall Z \\subseteq \\szo^m, u \\in \\szo^m: Z + u = \\{ z + u |\\ z \\in Z \\} \\]\n\n\t\\begin{lemma}[Sipser-G\u00e1cs-Lautemann 1]\n\t\tLet $Z \\subseteq \\szo^m$ with $|Z| \\leq 2^{m - n}$, and let $u_1, \\ldots, u_k \\in \\szo^m$ with $k = \\lceil \\frac{m}{n} + 1 \\rceil$.\n\t\tThen\n\t\t\\[ \\bigcup_{i=1}^{k} (Z + u_i) \\subsetneq \\szo^m \\]\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tAs in Lagrange's theorem for groups,\n\t\t\\[ \\forall i: |Z + u_i| = |Z| \\]\n\t\tTherefore, even if all $Z + u_i$ are disjoint:\n\t\t\\[ \\left| \\bigcup_{i=1}^{k} (Z + u_i) \\right| \\leq k |Z| \\leq \\left\\lceil \\frac{m}{n} + 1 \\right\\rceil \\cdot \\frac{2^m}{2^n} < 2^m \\]\n\t\tThe last inequality holds for large enough $n$, since $m = poly(n)$ while the denominator $2^n$ is exponential:\n\t\t\\[ \\frac{1}{2^n} \\left\\lceil \\frac{m}{n} + 1 \\right\\rceil < 1 \\]\n\t\\end{proof}\n\n\t\\begin{lemma}[Sipser-G\u00e1cs-Lautemann 2]\n\t\tLet $Z \\subseteq \\szo^m$ with $|Z| \\geq (1 - \\frac{1}{2^n}) 2^m$.\n\t\tThen for $k = \\lceil \\frac{m}{n} + 1 \\rceil$\n\t\t\\[ \\exists u_1, \\ldots u_k \\in \\szo^m: \\bigcup_{i=1}^{k} (Z + u_i) = \\szo^m \\]\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tIt is enough to prove that for a random choice of $u_1, \\ldots u_k$ we get\n\t\t\\[ \\Prob[\\bigcup (Z + u_i) = \\szo^m] > 0 \\iff \\Prob[\\bigcup (Z + u_i) \\subsetneq \\szo^m] < 1, \\]\n\t\twhich is equivalent to\n\t\t\\[ \\Prob[\\exists r \\in \\szo^m : r \\notin \\bigcup (Z + u_i)] < 1 \\]\n\tSay that $r$ is \\emph{bad} for $i \\iff r \\notin Z + u_i \\stackrel{XOR}{\\iff} r + u_i \\notin Z$.\n\tPick $u_1, \\ldots u_k \\in \\szo^m$ uniformly and independently.\n\tSince $r + u_i$ is then uniform,\n\t\\[ \\forall r: \\Prob[r + u_i \\in Z] \\geq 1 - \\frac{1}{2^n} \\Rightarrow \\Prob[r \\ \\text{is bad for}\\ i] \\leq  2^{-n} \\]\n\tBy the independent choice of the $u_i$:\n\t\\[ \\Prob[r \\ \\text{is bad }\\ \\forall i] \\leq 2^{-nk} \\]\n\t\\[ \\Prob[\\exists r \\ \\text{bad }\\ \\forall i] = \\Prob[\\exists r: r \\notin \\bigcup (Z + u_i)] \\leq 2^m \\cdot 2^{-nk} \\stackrel{nk > m}{<} 2^m \\cdot 2^{-m} = 1 \\]\n\t\\end{proof}\n\n\tCombining the two lemmas with $Z = S_x$ (Lemma 2 gives the covering for $x \\in L$, Lemma 1 excludes it for $x \\notin L$):\n\t\\[ x \\in L \\iff \\exists u_1, \\ldots, u_k \\in \\szo^m: \\forall r \\in \\szo^m: r \\in \\bigcup_{i=1}^{k} (S_x + u_i) \\]\n\tEquivalently\n\t\\[ x \\in L \\iff \\exists u_1, \\ldots, u_k \\in \\szo^m: \\forall r \\in \\szo^m: \\bigvee_{i=1}^{k} M(x, r + u_i) = 1 \\]\n\tAs $M$ is deterministic, works in polynomial time, and is used only $k = \\lceil \\frac{m}{n} + 1 \\rceil$ times, the inner predicate $\\bigvee_{i=1}^{k} M(x, r + u_i) = 1$ is decidable in $\\TP$.\n\tThe formula therefore has the form $\\exists \\forall \\TP$, i.e., $L \\in \\Sigma_2^p$.\n\\end{proof}\n", "meta": {"hexsha": "8970966384cbc4841c1f17b38003792c23aab91f", "size": 14820, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/prednasky/08_prednaska.tex", "max_stars_repo_name": "karlov/NTIN063", "max_stars_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/prednasky/08_prednaska.tex", "max_issues_repo_name": "karlov/NTIN063", "max_issues_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/prednasky/08_prednaska.tex", "max_forks_repo_name": "karlov/NTIN063", "max_forks_repo_head_hexsha": "cb69e2889ce8374c64aa6ac9faf64ad1ab01add7", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.371257485, "max_line_length": 240, "alphanum_fraction": 0.6137651822, "num_tokens": 5875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7745833789613196, "lm_q1q2_score": 0.5566112727001907}}
{"text": "\\chapter{Boundaries reconstruction based on the triangulation refinement}\\label{chap:triangulation}\nThe purpose of this chapter is to provide an alternative approach to the $\\alpha$-shapes methods for determining the boundaries $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ of the positive luminance regions in target PS. The idea of the method is based on the triangulation refinement of the source PS explained in Chapter \\ref{chap:PS}. The boundaries $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ are approximated by connecting those vertices of the triangles that follow the path same $\\Pi$. Numerical results are provided for three optical systems: the two-faceted cup, the TIR-collimator and a parabolic reflector.\\\\ \\indent The PS method is compared to both MC and QMC ray tracing. Discussion and results are provided in the last section of this chapter.\n\\section{Reconstruction of the boundaries}\nIn Chapter \\ref{chap:PS} we have seen that, using the triangulation refinement, more rays close to the boundaries are traced selecting increasingly smaller values for the parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$. Once the algorithm stops, only the triangles that are expected to be crossed by at least a boundary $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ are taken into account for the construction of the boundary. From now on, we call these triangles the \\textit{boundary triangles}. Two triangles are neighbors if they have one side in common. For each boundary triangle its neighbor is found so that an ordered sequence of triangles is constructed. Given a path $\\Pi$, the corresponding boundary $\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi)$ on \\set{S}{}{} is approximated connecting the vertices of the boundary triangles which correspond to rays following path $\\Pi$. The edge-ray principle is employed in order to define the corresponding boundaries $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ at the target.\nThus, $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ at the target are given by\n\\begin{equation}\\map{M}{}{}(\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi)):\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi)\\rightarrow\\partial\\mbox{{R}{}{}}(\\Pi),\\end{equation}\nwhere $\\map{M}{}{}$ is defined in (\\ref{eq:map1}) and $\\map{M}{}{}(\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi))$ is the restriction of $\\map{M}{}{}$ to $\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi)$ for every path \n$\\Pi$. \\\\\\indent In this chapter, we develop a criterion to establish the values of the parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$, $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$ that give a good approximation of $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$.\n Similar to the selection of $\\alpha$ in the $\\alpha$-shapes procedure, the triangulation parameters are chosen using \\'{e}tendue conservation, i.e., conservation of area in PS. The core of our approach is the following.\\\\\n\\indent The \\'{e}tendue $U_1$ at the source PS restricted to the rays that arrive at the target is calculated. 
\\\\\\indent In this chapter, we develop a criterion to establish the values of the parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$, $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$ that give a good approximation of $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$.\n Similar to the selection of $\\alpha$ in the $\\alpha$-shapes procedure, the triangulation parameters are chosen using \\'{e}tendue conservation, i.e., conservation of area in PS. The core of our approach is the following.\\\\\n\\indent The \\'{e}tendue $U_1$ at the source PS restricted to the rays that arrive at the target is calculated. If all the rays emitted by the source are received by the target, $U_1$ can be easily determined by (\\ref{eq:etenduesource1}).\nIn case some rays do not arrive at the target, reaching another detector instead, we use (\\ref{eq:etendueintegralsource}) and (\\ref{eq:etenduesumsource}).\n\\\\ \\indent The \\'{e}tendue $U_{\\textrm{t}}$ at the target PS is calculated using (\\ref{eq:etendueintegraltarget}) and (\\ref{eq:etenduesumtarget}).\nTo do so, the triangulation refinement method is applied to the regions $\\mbox{\\set{R}{}{}}(\\Pi)$ for a range of values of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and for a fixed value of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$. The parameters along the $\\variabile{p}$-axis are scaled by \n\\begin{equation}\\label{eq:scaled_parameters}\n\\begin{aligned}\n& \\variabile{w} = \\frac{\\variabile{q}_1^{\\textrm{max}}-\\variabile{q}_1^{\\textrm{min}}}{\\variabile{p}_1^{\\textrm{max}}-\\variabile{p}_1^{\\textrm{min}}},\\\\\n&\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}= \\frac{\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}}{\\variabile{w}}, \\\\\n&\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}= \\frac{\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}}{\\variabile{w}},\n\\end{aligned}\n\\end{equation}\n where \n$\\variabile{p}_1^{\\textrm{min}}$ and $\\variabile{p}_1^{\\textrm{max}}$ are the minimum and the maximum $\\variabile{p}$-coordinate in \\set{S}{}{}, and \n$\\variabile{q}_1^{\\textrm{min}}$ and $\\variabile{q}_1^{\\textrm{max}}$ are the minimum and the maximum $\\variabile{q}$-coordinate in \\set{S}{}{}. Every set of parameters gives a certain triangulation; for each of these, an approximation of the boundaries $\\partial\\mbox{\\set{R}{}{}}(\\Pi)$ is obtained.\nNext, the intersection points $(\\variabile{q}^{\\variabile{i}}( \\Pi, \\variabile{p}),\\variabile{p})_{\\variabile{i} = 1, \\cdots, \\variabile{r}}$ between $\\partial\\mbox{\\set{R}{}{}}(\\Pi)$\nand the horizontal line $\\variabile{p}~=~ \\const{constant}$ are calculated for every path $\\Pi$, and for $\\variabile{p}~\\in~[-1,1]$. Ordering their $\\variabile{q}$-coordinates corresponding to each direction $\\variabile{p}$ in ascending order, the integral in Equation (\\ref{eq:etenduetarg1}) is computed.\nChanging the values of the parameters, different approximations of $\\partial\\mbox{\\set{R}{}{}}(\\Pi)$ are found and, consequently, different values of $U_{\\textrm{t}}$. By construction, $U_{\\textrm{t}}$ is always underestimated ($U_{\\textrm{t}}<U_1$) because the approximated boundaries are found by joining the vertices of the \\textit{boundary triangles} which are \\textit{inside} the regions \\set{R}{}{}$(\\Pi)$.\n\\\\ \\indent To select the parameters that give a good accuracy of the target photometric variables, the difference $\\Delta U = U_1-U_{\\textrm{t}}$ is calculated for every value of $U_{\\textrm{t}}$ found. The values of the parameters that give a small $\\Delta U$ provide a triangulation refinement from which a good approximation of the target photometric variables can be computed; this criterion is summarized in the sketch below.\n
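The following Python fragment summarizes the criterion under our own naming conventions (in particular, \\texttt{boundary\\_intersections} is a hypothetical helper returning the ordered $\\variabile{q}$-coordinates at which the approximated boundaries cross the line $\\variabile{p} = \\const{constant}$; it is not a routine from this thesis). The target \\'{e}tendue is accumulated line by line and the refinement parameter is halved until $\\Delta U$ is small.\n\\begin{verbatim}\nimport numpy as np\n\ndef target_etendue(boundary_intersections, n_lines=1000):\n    # Scan horizontal lines p = const through the target PS and\n    # integrate the total length of the q-intervals covered by\n    # the positive luminance regions. Crossings come in\n    # entry/exit pairs once sorted in ascending order.\n    ps = np.linspace(-1.0, 1.0, n_lines)\n    lengths = []\n    for p in ps:\n        qs = sorted(boundary_intersections(p))\n        lengths.append(sum(q2 - q1\n                           for q1, q2 in zip(qs[0::2], qs[1::2])))\n    return np.trapz(lengths, ps)\n\ndef refine_until(U1, etendue_for_eps, eps, tol=0.1):\n    # Halve the refinement parameter until Delta U = U1 - Ut\n    # drops below the prescribed tolerance.\n    while U1 - etendue_for_eps(eps) > tol:\n        eps = eps / 2\n    return eps\n\\end{verbatim}\n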
\\\\ \\indent A method similar to the one described here is presented by Moore \\cite{moore2013methods}. In Moore's method each ray leaves a point source at the same position while the angle coordinate changes. It starts by considering three sampling rays, whose corresponding paths are taken into account. In case the paths are equal, the rays traced are representative of all the rays inside the polygon that they describe at the target; otherwise an interpolation is required to compute the illumination pattern. This interpolation can affect the efficiency of the method. Our method employs the distribution of the rays at the target PS and avoids using any interpolation. Moreover, a criterion to stop our algorithm is provided in such a way that no more rays than necessary are traced. This makes ray tracing in PS more accurate compared to Moore's procedure. Furthermore, Moore's method is not suitable for systems in which the size and the spatial distribution of the source are important, as it considers only point sources.\\\\ \\indent\nThe triangulation refinement method is tested for several optical systems. The results are presented next.\n\\section{The two-faceted cup}\nIn this section we apply the triangulation refinement in PS to the two-faceted cup described in Chapter \\ref{chap:raytracing} and depicted in Figure \\ref{fig:cup}. \nWe start tracing rays inside the system using PS ray tracing as explained in Chapter \\ref{chap:PS}. To avoid rays parallel to the source and rays emitted from the endpoints, we consider their initial position $\\variabile{q}_1$ and initial direction $\\variabile{p}_1$ such that \n\\begin{equation*}\n\\begin{aligned}\n\\variabile{p}_1&\\in[-1+10^{-6},1-10^{-6}] = [\\variabile{p}_1^{\\textrm{min}}, \\variabile{p}_1^{\\textrm{max}}], \\\\ \n\\variabile{q}_1&\\in[-2+10^{-12}, 2-10^{-12}] = [\\variabile{q}_1^{\\textrm{min}}, \\variabile{q}_1^{\\textrm{max}}].\n\\end{aligned}\n\\end{equation*} \nA stopping criterion for the triangulation is defined using \\'{e}tendue conservation. Since the two-faceted cup is formed by only reflective lines and its target is adjacent to the left and the right reflector (it is located exactly at the top of the system), all the rays emitted by the source arrive at the target. Thus, from (\\ref{eq:etenduesource1}) with $\\n_1\\sin(\\myangle_1^{\\textrm{max}})=\\variabile{p}_1^\\textrm{max}$ and $\\variabile{a}=\\variabile{q}_1^{\\textrm{max}}$ it follows that\n\\begin{equation}U_1 = U \\approx 8. \\end{equation}\n%To establish the number of rays needed to achieve a good accuracy of the target intensity, we compare the approximated $U_{\\textrm{t}}$, obtained from a given number of rays, to the exact \\'{e}tendue $U=U_1$. \nRay tracing in PS is implemented by varying the parameter $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and fixing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$ (we choose $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$), while the other two parameters are given by (\\ref{eq:scaled_parameters}).\nEvery set of parameters gives a different triangulation at the source PS. The approximated boundaries are computed for several triangulations by joining the vertices of the triangles crossed by a boundary. \n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{.45\\textwidth}\n  \\includegraphics[width=\\textwidth]{boundaries_source_triangles2}\n \\caption{The black lines are the boundaries at \\set{S}{}{}. $1500$ rays are traced using the triangulation refinement with $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.1$. }\n  \\label{fig:boundaries_s2}\n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{.45\\textwidth}\n  \\includegraphics[width=\\textwidth]{boundaries_target_triangles2}\n  \\caption{The black lines are the boundaries at \\set{T}{}{}. 
$1500$ rays are traced using the triangulation refinement with $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.1$.} %\n  \\label{fig:boundaries_t2}\n\\end{subfigure} %\n\\hfill\n\\begin{subfigure}{.45\\textwidth}\n  \\includegraphics[width = \\textwidth]{boundaries_source_triangles1}\n  \\caption{The black lines are the boundaries at \\set{S}{}{}. $7500$ rays are traced using the triangulation refinement with $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.025$.}\n  \\label{fig:boundaries_s1}\n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{.45\\textwidth}\n  \\includegraphics[width=\\textwidth]{boundaries_target_triangles1}\n \\caption{The black lines are the boundaries at \\set{T}{}{}. $7500$ rays are traced using the triangulation refinement with $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.025$.} %\n  \\label{fig:boundaries_t1}\n\\end{subfigure}\n\\caption{\\textbf{Boundaries at \\set{S}{}{} and \\set{T}{}{} of the two-faceted cup.} The approximated boundaries are computed using the triangulation refinement with two different values of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$.}\n\\label{fig:boundaries_cup}\n\\end{figure} \n\\\\ \\indent For example, if we consider $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.1$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$ and the corresponding parameters for the $\\variabile{p}$-axis given by (\\ref{eq:scaled_parameters}), a triangulation with around $1500$ rays (vertices of the triangles) is found. The boundaries $\\partial$\\set{R}{$1$}{}$(\\Pi)$ and $\\partial$\\set{R}{}{}$(\\Pi)$ are calculated using this triangulation refinement and are depicted in black in Figures \\ref{fig:boundaries_s2} and \\ref{fig:boundaries_t2}, respectively. For this set of rays we found $\\Delta U \\approx 0.53 $. Next, we can decrease $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ to obtain a more precise approximation of $U_{\\textrm{t}}$. Choosing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}} = 0.025$ and $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$, a triangulation formed by around $7500$ rays is obtained. The approximated boundaries $\\partial$\\set{R}{$1$}{}$(\\Pi)$ and $\\partial$\\set{R}{}{}$(\\Pi)$ are depicted with black lines in Figures \\ref{fig:boundaries_s1} and \\ref{fig:boundaries_t1}, respectively. The approximation of the target \\'{e}tendue gives $\\Delta U \\approx 0.13 $.\n Obviously, the boundary computation obtained using $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}} = 0.025$ is more accurate. \nNote that, by decreasing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$, the number of rays increases. \n\\\\ \\indent In Figure \\ref{fig:etendue_cup} we show with the blue line how the target \\'{e}tendue varies as a function of the parameter $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$. The exact \\'{e}tendue $U=8$ is depicted with the red line. By decreasing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ an increase of $U_{\\textrm{t}}$ is observed. %Furthermore, by construction, $U_{\\textrm{t}}$ is always underestimated because the approximated boundaries are found joining the vertices of the \\textit{boundaries triangles} which are \\textit{inside} the regions \\set{R}{}{}$(\\Pi)$.\n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width= 0.7\\textwidth]{etendue_cup_epsilon}\n  \\caption{\\textbf{\\'{E}tendue for the two-faceted cup.} The total \\'{e}tendue as an area in PS is depicted with the red line. 
The approximated \\'{e}tendue for a range of values of \n$\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ is shown with the blue line.}\n  \\label{fig:etendue_cup}\n\\end{figure}\nIn Figure \\ref{fig:etendue_cup} we show the approximation of $U_{\\textrm{t}}$ for at most around $1.2 \\cdot 10^5$ rays traced using PS ray tracing with parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.8\\cdot 10^{-4}$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$, $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}=\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}= \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2$. We expect that, by further decreasing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$, the value of $U_{\\textrm{t}}$ becomes more precise. \n\\\\ \\indent The PS averaged and normalized intensity $\\hat{I}_{\\textrm{PS}}$ with $1.2 \\cdot 10^5$ rays is calculated from (\\ref{eq:normalized_PS_intensity}) where ${I}_{\\textrm{PS}}$ is given by (\\ref{eta2}). The intensity profile is shown in Figure \\ref{fig:intensity_cup_triangulation} with the red line. In the same graph we show the reference intensity $\\hat{I}_{\\textrm{ref}}$ with the dotted blue line. For the two-faceted cup the reference intensity is actually the exact intensity ($\\hat{I}_{\\textrm{ref}}= \\hat{I}_{\\textrm{exact}}$) computed as explained in Appendix \\ref{app:boundariescup}. \n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width= 0.8\\textwidth]{intensity_cup_triangulation}\n  \\caption{\\textbf{Intensity profile at the target of the two-faceted cup.} The reference intensity is the exact intensity. The PS intensity is computed using the triangulation refinement with $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=0.8\\cdot 10^{-4}$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$, $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}=\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}= \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2$. Around $1.2 \\cdot 10^5$ rays are traced.}\n  \\label{fig:intensity_cup_triangulation}\n\\end{figure}\n\\\\ \\indent \nFinally, we compare PS ray tracing with both MC and QMC ray tracing by computing the error between the approximated intensities $\\hat{I}_{\\textrm{A}}  (\\textrm{A}= \\textrm{MC}, \\textrm{QMC}, \\textrm{PS})$ and the exact intensity $\\hat{I}_{\\textrm{ref}}$ using (\\ref{eq:error}) with $\\nbin = 100$. The results are shown in Figure \\ref{fig:error_cup_triangulation} where the MC, QMC and PS intensities are depicted with the green, blue and red line, respectively.\n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width= 0.8\\textwidth]{error_cup_triangulation}\n  \\caption{\\textbf{Error plot for the two-faceted cup.} The errors between the approximated intensities $\\hat{I}_{\\textrm{A}} (\\textrm{A}= \\textrm{MC}, \\textrm{QMC}, \\textrm{PS})$ and the exact intensity $\\hat{I}_{\\textrm{exact}}$.}\n  \\label{fig:error_cup_triangulation}\n\\end{figure}\nThe graph shows that using PS ray tracing in combination with the triangulation refinement allows tracing far fewer rays compared to MC ray tracing.\nA comparison between QMC ray tracing and PS ray tracing shows that more rays are needed in PS. Indeed, although the shapes of all the regions \\set{R}{}{}$(\\Pi)$ are very smooth, their boundaries at the edge of the target phase space \\set{T}{}{} are difficult to approximate by triangles. 
With the triangulation refinement, the vertical straight lines at the edge of \\set{T}{}{} are always approximated by a broken line. However, the two-faceted cup is a very simple system and QMC ray tracing does not require a large number of rays to obtain the desired accuracy. \\\\ \\indent Nevertheless, PS ray tracing has a big advantage compared to QMC ray tracing. Indeed, as we have seen in Chapter \\ref{chap:raytracing}, MC and QMC ray tracing are binning procedures. Therefore, the MC and QMC intensities are given by the average over every bin and the error also depends on the number of bins. % according to Equations \\ref{} \\ref{}\nPS ray tracing gives a point-wise intensity along all possible directions. In the simulations shown in this thesis we always compute the average PS intensity. This is needed to give a fair comparison of PS ray tracing versus MC and QMC ray tracing. It is very important to observe that no error related to the number of bins is involved in the PS procedure. \\\\ \\indent\nTo investigate in more detail the performance of PS ray tracing, we test the method for more complicated systems. In the next section we present the results for a TIR-collimator. \n\\section{A TIR-collimator}\nIn this section we provide the results of PS ray tracing for a TIR-collimator, using the triangulation refinement to compute the boundaries $\\partial$\\set{R}{}{}$(\\Pi)$ in target PS. In particular, we consider the TIR-collimator depicted in Figure \\ref{fig:analyticlens}. Since this system involves two different media (air and glass), the refraction law also plays a role in the ray tracing procedure. We run PS ray tracing for the TIR-collimator several times, gradually increasing the number of rays, i.e., gradually decreasing the values of the parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}, \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}, \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$ in the triangulation. In order to trace more rays close to the boundaries, we decide to vary only the values of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and $ \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$ while fixing the values of $ \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$ and $ \\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$, as the last two are responsible for the number of rays inside the regions \\set{R}{}{}$(\\Pi)$. Every ray traced has initial position coordinate $\\variabile{q}_1\\in[-\\variabile{a}, \\variabile{a}]$ with $\\variabile{a}=2$ and initial direction coordinate $\\variabile{p}_1\\in[-1,1]$. Therefore, the source PS of the TIR-collimator is the rectangular domain \\set{S}{}{}$= [-2, 2] \\times [-1, 1]$. The parameters $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}$ are scaled as in Equation (\\ref{eq:scaled_parameters}).\n\\\\ \\indent To determine the triangulation refinement that gives a good approximation of the target intensity, we compare $U_1$ (source \\'{e}tendue) to $U_{\\textrm{t}}$ (target \\'{e}tendue) and use \\'{e}tendue conservation. In this case, not all light emitted by the source of the TIR-collimator arrives at the target. Indeed, using PS ray tracing, $\\npath=7$ different paths $(\\Pi_\\lineai)_{\\lineai=1, \\cdots, \\npath}$ are found but only five of them are paths from the source (line $1$) to the target (line $12$) (see also Section \\ref{sec:results-Tir-alpha}). 
Thus, we need to remove from the total area of \\set{S}{}{} those parts occupied by the rays that arrive at some other detectors and not at the target. % white areas in figure...\nIndicating with $A_T$ the area of each of these parts (see Section \\ref{sec:results-Tir-alpha}), the source \\'{e}tendue is given by:\n\\begin{equation}\nU_1 = 8-2A_T\\approx 7.77.\n\\end{equation}\nThe target \\'{e}tendue $U_{\\textrm{t}}$ is obtained from (\\ref{eq:etenduetarg}) for a range of values of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and for $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1$ fixed. The boundaries $\\partial$\\set{R}{}{}$(\\Pi)$ are found for every value of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and $U_{\\textrm{t}}$ is calculated for each of these boundaries. \nThe results shown in Figure \\ref{fig:etendue_tir_triangulation} give the \\'{e}tendue plot as a function of $\\varepsilon_{\\variabile{q}_{1}}^{\\textrm{min}}$.\n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width = 0.7\\textwidth]{etendue_tir_triangulation}\n  \\caption{\\textbf{\\'{E}tendue of the TIR-collimator.} A comparison between $U_1$ and $U_{\\textrm{t}}$ shows that by decreasing the value of $\\varepsilon_{\\variabile{q}_{1}}^{\\textrm{min}}$, $\\Delta U= U_1-U_{\\textrm{t}}$ decreases.}\n  \\label{fig:etendue_tir_triangulation}\n\\end{figure}\n\\\\ \\indent \nThe best approximation of $U_{\\textrm{t}}$ shown in the previous graph is obtained using $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=1.6\\cdot 10^{-3}$, tracing around $1.62 \\cdot 10^5$ rays. The boundaries $\\big(\\partial\\mbox{\\set{R}{$1$}{}}(\\Pi_{\\lineai})\\big)_{\\lineai = 1, \\cdots, 5}$ and $\\big(\\partial\\mbox{\\set{R}{}{}}(\\Pi_{\\lineai})\\big)_{\\lineai = 1, \\cdots, 5}$ of the regions formed by these rays are shown with red lines in Figure \\ref{fig:boundaries_TIR_triangulation} at the left and the right, respectively (see also Figure \\ref{fig:Tir2} for a comparison of the target PS). \n\\begin{figure}[h]\n \\begin{subfigure}[t]{0.5\\textwidth}\n\\centering\n    \\includegraphics[width=\\textwidth]{boundaries_triang_source_tir}\n    \\caption{Boundaries at the source PS.}\n    \\label{fig:boundaries_triang_source_tir}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{0.5\\textwidth}\n\\centering\n    \\includegraphics[width = \\textwidth]{boundaries_triang_targ_tir}\n    \\caption{Boundaries at the target PS.}\n    \\label{fig:boundaries_triang_target_tir}\n\\end{subfigure}\n\\caption{\\textbf{Boundaries at \\set{S}{}{} and \\set{T}{}{} of the TIR-collimator.} The red lines show the boundaries found using $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=1.6\\cdot 10^{-3}, \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=1, \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}=\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2$ and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}} = \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2$.}\n \\label{fig:boundaries_TIR_triangulation}\n\\end{figure}\n \\\\ \\indent \nThe target PS intensity $\\hat{I}_{\\textrm{PS}}$ is computed and compared with a reference intensity $\\hat{I}_{\\textrm{ref}}$, which is given by QMC ray tracing with $10^7$ rays (as the exact intensity for the TIR-collimator is unknown). 
The profiles of the two intensities are given in Figure \\ref{fig:intensity_tir_triangulation}.\n \\begin{figure}[ht]\n  \\center\n  \\includegraphics[width= 0.8\\textwidth]{intensity_tir_triangulation}\n  \\caption{\\textbf{Target intensity for the TIR-collimator.} The PS intensity $\\hat{I}_{\\textrm{PS}}$ is computed using PS ray tracing with around $1.62\\cdot 10^5$ rays. The reference intensity $\\hat{I}_{\\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.}\n  \\label{fig:intensity_tir_triangulation}\n\\end{figure}\n\\\\ \\indent\nTo validate our method, PS ray tracing is compared to both MC and QMC ray tracing. The error between the approximated intensities $\\hat{I}_{\\textrm{A}} (\\textrm{A}=\\textrm{QMC}, \\textrm{MC}, \\textrm{PS})$ and the reference intensity $\\hat{I}_{\\textrm{ref}}$ as a function of the number of rays traced is calculated. The error plot is shown in a logarithmic scale in Figure \\ref{fig:error_tir_triangulation} where the MC, QMC and PS convergences are shown with the green, the blue and the red line, respectively. The black dotted line is a line with slope $-\\frac{1}{2}$, the blue dotted line has slope $-1$. The graph shows that MC ray tracing converges, for $\\nrays \\rightarrow \\infty$, with an order of $\\mathcal{O}\\big(\\frac{1}{\\sqrt{\\nrays}}\\big)$, while both PS and QMC ray tracing have a speed of convergence of the order $\\mathcal{O}\\big(\\frac{1}{\\nrays}\\big)$. \nNote that, to obtain an error of $10^{-4}$, PS ray tracing allows tracing $10^2$ times fewer rays compared to MC ray tracing and almost $10$ times fewer rays compared to QMC ray tracing.\n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width= 0.8\\textwidth]{error_nr_triangulation_tir}\n  \\caption{\\textbf{Error as a function of the number of rays for the TIR-collimator.} The reference intensity $\\hat{I}_{\\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.}\n  \\label{fig:error_tir_triangulation}\n\\end{figure}\n\\begin{figure}[t]\n  \\center\n  \\includegraphics[width= 0.75\\textwidth]{error_tir_triangulation_time}\n  \\caption{\\textbf{Error as a function of the CPU-time for the TIR-collimator.} The reference intensity $\\hat{I}_{\\textrm{ref}}$ is obtained by QMC ray tracing with $10^7$ rays.}\n  \\label{fig:error_tir_triangulation_time}\n\\end{figure}\n\\\\ \\indent Finally, in order to show the advantages of PS ray tracing in terms of the computational time, we provide the error convergence as a function of the CPU-time for all three methods (MC, QMC and PS ray tracing) in Figure \\ref{fig:error_tir_triangulation_time}. The choice of the colors is consistent with Figure \\ref{fig:error_tir_triangulation}. We remark that the averaged normalized intensity in PS $\\hat{I}_{\\textrm{PS}}$ is computed over every bin using the trapezoidal rule, discretizing each bin into $10$ sub-intervals of the same length; a short sketch of this averaging is given below. Because of this, the CPU-time for computing the PS intensity is divided by $10$ to provide the real CPU-time of the PS method when it is not compared to binning procedures such as MC and QMC ray tracing. This is done for \\textit{all} the results presented in this thesis. From Figure \\ref{fig:error_tir_triangulation_time}, we observe that PS ray tracing outperforms both MC and QMC ray tracing. PS ray tracing is approximately $10^2$ times faster than MC ray tracing and $10$ times faster than QMC ray tracing. 
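For completeness, the bin averaging just described amounts to the following short computation (a sketch under our own naming; \\texttt{intensity} stands for the point-wise PS intensity of (\\ref{eta2}) and is not a routine from this thesis):\n\\begin{verbatim}\nimport numpy as np\n\ndef bin_averaged_intensity(intensity, n_bins=100, subdiv=10):\n    # Average the point-wise PS intensity over each of n_bins\n    # bins on [-1, 1], applying the trapezoidal rule on subdiv\n    # equal sub-intervals per bin.\n    edges = np.linspace(-1.0, 1.0, n_bins + 1)\n    averages = []\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        ps = np.linspace(lo, hi, subdiv + 1)\n        vals = [intensity(p) for p in ps]\n        averages.append(np.trapz(vals, ps) / (hi - lo))\n    return np.array(averages)\n\\end{verbatim}\n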
\\\\ \\indent The results shown in Figures \\ref{fig:error_tir_triangulation} and \\ref{fig:error_tir_triangulation_time} are reported in Tables \\ref{tab:ps_error_triangulation} and \\ref{tab:mc_error_triangulation}.\n\\begin{table}[t] \n\\centering\n\\caption{\\bf Errors of the PS intensity for the TIR-collimator}\n\\begin{tabular}{lllll}\n \\hline   $\\varepsilon_{\\variabile{q}}^{\\textrm{min}} $  & $\\nrays$ & \\'{E}tendue & PS error & PS CPU-time (sec.) \\\\\n  \\hline \n $0.05$ & $3\\,547$   & $7.50$   &  $1.75\\cdot10^{-4}$ & $1.98$\\\\\n$0.025$  & $8\\,055$    & $7.61$    & $1.49\\cdot 10^{-4}$ & $4.69$ \\\\\n$0.0125$  & $17\\,300$    & $7.69$  & $8.68\\cdot 10^{-5}$ & $10.61$\\\\\n $6.3 \\cdot 10^{-3}$  & $38\\,300$  & $7.73$   & $4.43\\cdot 10^{-5}$ & $26.56$\\\\\n $3.1 \\cdot 10^{-3}$ & $79\\,600$  & $7.75$    & $2.27\\cdot 10^{-5}$ & $83.21$\\\\\n$1.6 \\cdot 10^{-3}$ & $162\\,300$  & $7.76$    & $1.20\\cdot 10^{-5}$ & $240.53$\\\\\n \\hline\n \\end{tabular}\n\\label{tab:ps_error_triangulation}\n \\end{table}\n\\begin{table}[t]\n\\centering\n\\caption{\\bf Errors of the MC and QMC intensities for the TIR-collimator}\n\\begin{tabular}{lllll}\n \\hline   $\\nrays$ & MC error & MC CPU-time (sec.) & QMC error  & QMC CPU-time (sec.)\\\\\n  \\hline \n $10^3$     & $2.09\\cdot 10^{-3}$ & $2.73$ & $1.43\\cdot10^{-3}$    & $2.63$\\\\\n $10^4$     & $6.42\\cdot 10^{-4}$ & $25.98$ & $3.03\\cdot 10^{-4}$   & $25.84$\\\\\n $10^5$     & $1.92\\cdot 10^{-4}$ & $259.92$ & $5.82\\cdot 10^{-5}$   & $258.28$\\\\\n $10^6$     & $7.45\\cdot 10^{-5}$ & $2585.83$   & $1.28\\cdot 10^{-5}$   & $2482.67$\\\\\n \\hline\n \\end{tabular}\n \\label{tab:mc_error_triangulation}\n \\end{table}\n%\\begin{table}[t] \n%\\centering\n%\\caption{\\bf Errors of the QMC intensity for the TIR-collimator}\n%\\begin{tabular}{lllll}\n% \\hline  $\\nrays$\\;  & QMC error  & CPU-time (sec.)\\\\\n%  \\hline \n% $10^3$   & $1.43\\cdot10^{-3}$    & $2.63$  \\\\\n%$10^4$    & $3.03\\cdot 10^{-4}$   & $25.84$   \\\\\n%$10^5$    & $5.82\\cdot 10^{-5}$   & $258.28$  \\\\\n% $10^6$   & $1.28\\cdot 10^{-5}$   & $2482.67$  \\\\\n% \\hline\n% \\end{tabular}\n% \\label{tab:qmc_error_triangulation}\n% \\end{table}\n%Using PS ray tracing an error equal to $10^{-4}$ is achieved tracing almost $10$ time less compared to QMC ray tracing and $100$ times less rays compared to QMC ray tracing. \nNext, we show the results for a system in which more than $5$ paths are possible. \nIn particular, we present the results for an optical system for which multiple reflections between rays and the mirrors occur.\n\\section{A parabolic reflector}\nIn this section we show an example of a parabolic reflector which is depicted in Figure \\ref{fig:PR}.\n It consists of a source \\point{S} (line $1$), a target \\point{T} (line $4$) parallel to \\point{S} and two reflectors (lines $2$ and $3$) which are arcs of the same parabola. \n  The minimum of the parabola is located at the point with $\\variabile{x}$-coordinate equal to $0$. 
$\\point{S}=[-\\variabile{a}, \\variabile{a}]$ (with $\\variabile{a}=2$) and $\\point{T}=[-\\variabile{b},\\variabile{b}]$ (with $\\variabile{b}=17$) are lines perpendicular to the optical axis (\\variabile{z}-axis) and are located at $\\variabile{z}=0$ and $\\variabile{z}=40$, respectively.\nAll the optical lines are located in air, therefore the index of refraction ${n}=1$ for every line.\nThe optical axis of the system in Figure \\ref{fig:PR} corresponds to the \\variabile{z}-axis.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{parabolic_reflector}\n\\caption{\\textbf{A parabolic reflector.}  Each line of the system is labeled with a number.\n   The source \\point{S}$= [-2,2]$ (line $1$) is located on the $\\variabile{x}$-axis.\n   The target \\point{T}$= [-17, 17]$ (line $4$) is parallel to the source and is located at a height $ \\variabile{z}= 40$.\n   The left and right reflectors (lines $2$ and $3$) are arcs of the same parabola.}\n\\label{fig:PR}\n\\end{figure}\nWe trace rays in PS with source direction coordinates $\\variabile{p}_1 \\in [-1,1]$ and source position coordinates $\\variabile{q}_1\\in[-\\variabile{a}+\\varepsilon,\\variabile{a}-\\varepsilon]$ where $\\varepsilon>0$ is a small number. In particular we take $\\varepsilon = 10^{-12}$.\\\\ \\indent As an example we show the triangulation refinement obtained for the parameters $$\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}} = 0.025, \\; \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}} = 0.5, \\; \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}} = \\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2, \\mbox{ and }  \\varepsilon_{\\variabile{p}_1}^{\\textrm{max}} = \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2, $$ for which around $8300$ rays are traced in PS. Their distribution at \\set{S}{}{} and \\set{T}{}{} is shown in Figures \\ref{fig:source_triang_pr} and \\ref{fig:target_triang_pr}, respectively. \n\\begin{figure}[h]\n \\begin{subfigure}[h]{0.47\\textwidth}\n\\centering\n    \\includegraphics[width=\\textwidth]{pr_source}\n    \\caption{Source PS \\set{S}{}{} $= [-\\variabile{a}+\\varepsilon,\\variabile{a}-\\varepsilon]\\times[-1,1]$}\n    \\label{fig:source_triang_pr}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.47\\textwidth}\n\\centering\n    \\includegraphics[width = \\textwidth]{pr_target}\n    \\caption{Target PS \\set{T}{}{} $= [-\\variabile{b},\\variabile{b}]\\times[-1,1]$}\n    \\label{fig:target_triang_pr}\n\\end{subfigure}\n\\caption{\\textbf{Ray distribution at \\set{S}{}{} and \\set{T}{}{} of the parabolic reflector.} Around $8300$ rays are traced using PS ray tracing with parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}} = 0.025, \\; \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}} = 0.5, \\; \\varepsilon_{\\variabile{p}_1}^{\\textrm{min}} = \\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2, \\mbox{ and }  \\varepsilon_{\\variabile{p}_1}^{\\textrm{max}} = \\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2$. $17$ different paths are found; each of them corresponds to a certain number of reflections.}\n \\label{fig:phase_space_pr}\n\\end{figure} The distribution of the rays in PS gives information about the paths they follow. We note that for the parabolic reflector many paths are found. Every path corresponds to a given number of reflections. Rays can have multiple reflections at lines $2$ and $3$ before arriving at the target. 
The parameters used in the triangulation refinement not only establish the number of rays traced but also determine the number of detected paths. For instance, for the values of the parameters defined above, the triangulation refinement is able to detect $17$ different paths. This means that up to $8$ multiple reflections occur between the rays and the two mirrors. Counting the number of rays that follow a given path $\\Pi$, we can calculate the fraction of rays for every path. For example, tracing around $8300$ rays, the percentage of the rays that have $8$ multiple reflections along one of the two reflectors is around $0.13\\%$. Rays that reflect many times before reaching the target do not give a significant contribution to the target intensity. Decreasing the value of the parameter $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$, more paths can be found. The more reflections occur, the smaller the corresponding area in PS is. In order to find as many paths as possible, the parameter $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$ needs to be decreased as well. Increasing the number of reflections considered, the corresponding regions in PS become smaller and smaller, see Figure \\ref{fig:phase_space_pr}. \\\\ \\indent Like for the optical systems considered in the previous sections, a stopping criterion of the triangulation refinement is determined for the parabolic reflector. \\'{E}tendue conservation is used in order to find the values of the parameters that give a good approximation of the boundaries of the positive luminance regions in PS. For the parabolic reflector in Figure \\ref{fig:PR} all the rays that leave the source arrive at the target. Indeed, lines $2$ and $3$ can only reflect rays (the refraction law is not involved) and the target \\'{e}tendue is equal to the source \\'{e}tendue. From Equation (\\ref{eq:etenduesource1}) we obtain:\n\\begin{equation}\nU = U_1 = 4(\\variabile{a}-\\varepsilon)\\approx 8.\n\\end{equation}\nA range of values of $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$ is considered (the triangulation parameters for the $\\variabile{p}$-axis depend on the $\\variabile{q}$-axis parameters according to (\\ref{eq:scaled_parameters})). For each pair of values, an approximation of the boundaries $\\partial$\\set{R}{}{}$(\\Pi)$ is found for every path $\\Pi$ as explained. $U_{\\textrm{t}}$ is calculated for the approximated boundaries using Equation (\\ref{eq:etenduetarg}). In Table \\ref{tab:etendue_pr} we show how the number of rays traced, the paths found and the value of the target \\'{e}tendue depend on the triangulation parameters.\nWe observe that, by decreasing $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}$ and $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}$, the number of rays traced and the number of paths found both increase. A maximum of $17$ different paths is detected. 
Furthermore, the value of $U_{\\textrm{t}}$ gets closer and closer to the exact \\'{e}tendue $U$.\n\\begin{table}[t] \n\\centering\n\\caption{\\bf Results of the triangulation refinement.}\n\\begin{tabular}{lllll}\n \\hline   $\\varepsilon_{\\variabile{q}}^{\\textrm{min}} $  & $\\varepsilon_{\\variabile{q}}^{\\textrm{max}}$ & $\\nrays$ & $\\npath$ & \\'{E}tendue \\\\\n  \\hline \n $0.2$ & $1$   & $643$   &  $11$ & $5.71$\\\\\n$0.1$ & $1$   & $1\\,573$   &  $15$ & $7.23$\\\\\n$0.025$  & $0.5$    & $8\\,357$  & $17$ & $7.65$\\\\\n $0.025/2$  & $0.5/2$  & $18\\,613$   & $17$ & $7.82$\\\\\n$0.025/4$  & $0.5/4$  & $40\\,465$   & $17$ & $7.82$\\\\\n $0.025/8$ & $0.5/8$  & $86\\,529$    & $17$ & $7.96$\\\\\n$0.025/16$ & $0.5/16$  & $185\\,581$    & $17$ & $7.98$\\\\\n \\hline\n \\end{tabular}\n \\label{tab:etendue_pr}\n \\end{table}\n\\\\ \\indent Using the triangulation refinement that gives the best \\'{e}tendue approximation, we calculate the target averaged and normalized PS intensity $\\hat{I}_{\\textrm{PS}}$ from (\\ref{eq:normalized_PS_intensity}).\nThe intensity profile is shown with the red line in Figure \\ref{fig:intensity_pr}. The PS intensity is compared to a reference intensity $\\hat{I}_{\\textrm{ref}}$, computed using QMC ray tracing with $10^7$ rays. $\\hat{I}_{\\textrm{ref}}$ is depicted in Figure \\ref{fig:intensity_pr} with the dotted blue line. The graph shows that the two intensities coincide.\n \\begin{figure}[t]\n  \\center\n  \\includegraphics[width = 0.7\\textwidth]{intensity_pr}\n  \\caption{\\textbf{Target intensity of the parabolic reflector.} For the PS intensity the parameters $\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}=1.56\\cdot 10^{-3}$, $\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}=3.13\\cdot 10^{-2}$, $\\varepsilon_{\\variabile{p}_1}^{\\textrm{min}}=\\varepsilon_{\\variabile{q}_1}^{\\textrm{min}}/2$, and $\\varepsilon_{\\variabile{p}_1}^{\\textrm{max}}=\\varepsilon_{\\variabile{q}_1}^{\\textrm{max}}/2$ are used. Around $8.15 \\cdot 10^{4}$ rays are traced in PS. For the reference intensity QMC ray tracing with $10^7$ rays is implemented.}\n  \\label{fig:intensity_pr}\n\\end{figure}\n\\begin{table}[t]\n\\centering\n\\caption{\\bf Errors of the PS intensity for the parabolic reflector}\n\\begin{tabular}{lll}\n \\hline   $\\nrays$ & PS error & CPU-time (sec.) \\\\\n  \\hline \n $643$        & $5.18\\cdot10^{-4}$ & $0.24$\\\\\n $1\\,573$       & $8.99\\cdot 10^{-4}$ & $0.48$\\\\\n $8\\,357$     & $2.87\\cdot 10^{-4}$ & $2.40$ \\\\\n $18\\,613$     & $1.38\\cdot 10^{-4}$ & $5.48$\\\\\n $40\\,465$   & $5.80\\cdot 10^{-5}$ & $16.14$\\\\\n $86\\,529$    & $2.90\\cdot 10^{-5}$ & $55.04$\\\\\n $185\\,581$   & $1.66\\cdot 10^{-5}$ & $245.97$\\\\\n \\hline\n \\end{tabular}\n\\label{tab:table_pr_triangulation}\n \\end{table}\n\\\\ \\indent \nNow, our method is compared to both MC and QMC ray tracing. \nThe error between the approximate intensities $\\hat{I}_{\\textrm{A}}(\\textrm{A}=\\textrm{PS}, \\textrm{MC}, \\textrm{QMC})$ and the reference intensity is calculated. In Figure \\ref{fig:error_rays_pr} the error as a function of the number of rays traced is shown for the three methods.\nThe green line represents the MC error, the blue line the QMC error and the red line the PS error. The errors are shown in a logarithmic scale; the dotted black line has slope $-1/2$, the dotted blue line has slope $-1$. Like for the other systems, MC ray tracing converges proportionally to $1/\\sqrt{\\nrays}$. 
QMC convergence is proportional to $1/\\nrays$ for $\\nrays \\rightarrow \\infty$. Fewer rays are needed in PS than in MC ray tracing. In particular, to achieve an error of $10^{-4}$, around $10$ times fewer rays are needed in PS compared to MC, and slightly fewer than with QMC.\n\\begin{figure}[t]\n  \\center\n  \\includegraphics[width = 0.7\\textwidth]{error_pr_100bin_rays}\n  \\caption{\\textbf{Error as a function of the number of rays traced.} Fewer rays are needed using PS ray tracing compared to both MC and QMC ray tracing.}\n  \\label{fig:error_rays_pr}\n\\end{figure} \n\\\\ \\indent\nFinally, the error as a function of the CPU-time is shown in Figure \\ref{fig:error_time_pr}. The MC, QMC and PS errors are depicted with the green, blue and red line, respectively. To obtain an error of the order of $10^{-5}$, PS ray tracing is $10$ times faster than MC ray tracing, while it is about twice as slow as QMC ray tracing. The detailed results of the numerical simulations are reported in Tables \\ref{tab:mc_error_pr_triangulation} and \\ref{tab:QMC_error_pr_triangulation}.\n\\begin{figure}[t]\n  \\center\n  \\includegraphics[width = 0.7\\textwidth]{error_pr_100bin_time}\n  \\caption{\\textbf{Error as a function of the CPU-time.} PS ray tracing has significant advantages in terms of the CPU-time compared to MC ray tracing. For the parabolic reflector the computational time is comparable with QMC ray tracing.}\n  \\label{fig:error_time_pr}\n\\end{figure} \n\\begin{table}[t] \n\\centering\n\\caption{\\bf Errors of the MC intensity for the parabolic reflector}\n\\begin{tabular}{lll}\n \\hline   $\\nrays$ & MC error & CPU-time (sec.) \\\\\n  \\hline \n $10^3$     & $1.18\\cdot10^{-3}$ & $0.39$\\\\\n $10^4$     & $5.74\\cdot 10^{-4}$ & $3.43$ \\\\\n $10^5$     & $1.73\\cdot 10^{-4}$ & $33.13$\\\\\n $10^6$     & $5.79\\cdot 10^{-5}$ & $328.96$\\\\\n $10^7$     & $1.68\\cdot 10^{-5}$ & $3325.39$\\\\\n \\hline\n \\end{tabular}\n \\label{tab:mc_error_pr_triangulation}\n \\end{table}\n\\begin{table}[t] \n\\centering\n\\caption{\\bf Errors of the QMC intensity for the parabolic reflector}\n\\begin{tabular}{lll}\n \\hline   $\\nrays$ & QMC error & CPU-time (sec.) \\\\\n  \\hline \n $10^3$     & $1.36\\cdot10^{-3}$ & $0.53$\\\\\n $10^4$     & $2.05\\cdot 10^{-4}$ & $3.44$ \\\\\n $10^5$     & $2.89\\cdot 10^{-5}$ & $31.22$\\\\\n $10^6$     & $6.96\\cdot 10^{-6}$ & $314.59$\\\\\n \\hline\n \\end{tabular}\n \\label{tab:QMC_error_pr_triangulation}\n \\end{table}\n\\\\ \\indent As explained above, the MC and QMC errors also depend on the number of bins $\\nbin$. In the simulations shown in this chapter we have always considered $\\nbin = 100$. On the contrary, PS ray tracing calculates the intensity point-wise; nevertheless, we considered $\\nbin = 100$ bins at the target and calculated the averaged normalized PS intensity over every bin in order to have a precise comparison with the binning procedures. Because of this, we expect that, by increasing the number of bins, the average PS intensity becomes relatively more accurate. To verify this conjecture, we implemented MC, QMC and PS ray tracing considering a partitioning of the interval $[-1,1]$ into $\\nbin = 150$ bins. The number of rays considered for the reference intensity has to be increased $(1.5)^5$ times. 
The averaged normalized intensities $\\hat{I}_{\\textrm{A}} (\\textrm{A}=\\textrm{MC}, \\textrm{QMC}, \\textrm{PS})$ found considering $\\nbin=150$ bins are compared with the reference intensity $\\hat{I}_{\\textrm{ref}}$ (averaged and normalized), which is computed using QMC ray tracing with $7.6\\cdot10^7$ rays. The error as a function of the number of rays is shown in Figure \\ref{fig:error_pr_150_bin}. The results show that, using PS ray tracing, an error of the order of $10^{-5}$ is obtained tracing around $10^2$ times fewer rays than MC ray tracing and half as many rays as QMC ray tracing. We conclude that, by increasing the number of bins, the PS error decreases, resulting in a better convergence compared to both MC and QMC ray tracing (see also Figure \\ref{fig:error_rays_pr}).\n\\begin{figure}[h]\n  \\center\n  \\includegraphics[width= 0.7\\textwidth]{error_pr_150bins_ray}\n  \\caption{\\textbf{Error plot for the parabolic reflector considering $\\nbin=150$ bins.} Increasing the number of bins, the PS error decreases resulting in a better convergence compared to MC and QMC ray tracing.}\n  \\label{fig:error_pr_150_bin}\n\\end{figure}\n\\section{Discussion and conclusions}\nIn this chapter, we presented a method to calculate the boundaries of the positive luminance regions in PS. This method does not depend on the parameter $\\alpha$ needed for the $\\alpha$-shapes method presented in Chapter \\ref{chap:boundaries_alpha}. Indeed, given a triangulation at the source PS, the boundaries are computed by connecting the vertices of the boundary triangles, i.e., triangles crossed by a boundary, that follow the same path. Employing \\'{e}tendue conservation, a stopping criterion for the triangulation refinement was developed. We applied the method to three different optical systems: the two-faceted cup, a TIR-collimator, and a parabolic reflector. Numerical results show that PS ray tracing is faster and more accurate than MC ray tracing. Compared to QMC ray tracing, we observed accuracy and speed advantages of an order of magnitude with our method for the TIR-collimator. \nFor the two-faceted cup, PS ray tracing has a slower convergence compared to QMC ray tracing. For the parabolic reflector, PS and QMC ray tracing display similar convergence. As an example, for this system we showed that increasing the number of bins decreases the errors. \nTo conclude, we state that QMC ray tracing performs better than PS ray tracing for very simple optical systems, but the PS approach is more suitable for complicated optical systems.\n\\\\ \\indent In order to further improve PS ray tracing, we develop a new method which employs the PS of \\textit{all} the optical lines. This method is explained in the next chapter.\n%We claim that PS ray tracing is also more accurate than the ray tracing procedure proposed by Moore (2013), \\cite{moore2013methods}.\n%The novelty of our approach compared to the method used by Moore, is briefly explained below. First, to compute the output intensity, we employ the phase space of the target. This avoids the use of any interpolation to compute the photometric variables and therefore, more accurate results are obtained.\n%Second, in  all rays that leave the source start at the same position and only a sampling angular range is given. In our approach a rectangular source is considered thus, both the angular and spatial coordinates of each ray change. This extra variable can produce very irregular shapes of the regions at target phase space. 
To overcome this issue, we employ the edge-ray principle and we consider the regions at source phase space where the distribution of the rays is much more regular and the corresponding boundaries are easily computed.\n%As a consequence, our procedure is suitable to compute the output intensity as function of both the angular or the spatial coordinates.\n%Third, using the conservation of \\'{e}tendue, we provided a criterion to stop the triangulation refinement. In this way we can estimate the number of rays required to obtain the desired accuracy and thus, we avoid tracing more rays than necessary.\n\n\n\n\n\n\n\n\n\n \n\n%\\section{The Compound Parabolic Concentrator (CPC)}", "meta": {"hexsha": "4ed7c83aa1e48f012e257de3f3e6faeed6ac7b1a", "size": 45848, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/triangulation.tex", "max_stars_repo_name": "melaniafilosa/ps_raytracing", "max_stars_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/triangulation.tex", "max_issues_repo_name": "melaniafilosa/ps_raytracing", "max_issues_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/triangulation.tex", "max_forks_repo_name": "melaniafilosa/ps_raytracing", "max_forks_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 126.303030303, "max_line_length": 2163, "alphanum_fraction": 0.7291266795, "num_tokens": 14200, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5566112717714281}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\chapter{Hypothesis Testing}\n\\label{chap:hypo}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Hypothesis Test Selection}\n\\label{hypo:test_selection}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{\\texorpdfstring{$Z$}{Z}-Test}\n\\label{hypo:Z_test}\n\nWhen we sample from a distribution known to be Gaussian,\nor can assume the sample means are approximately normally distributed via the central limit theorem (CLT) of \\cref{stats:CLT},\nand know the population standard deviation $\\sigma$,\nwe can perform hypothesis testing on the sample mean with a \\Ztest.\nFor a sample of size $n$ with sample mean $\\expval{x}$ and population standard deviation $\\sigma$\nwe compute the \\Zscore,\n\n\\begin{equation}\\label{eq:hypo:Z}\nZ = \\frac{\\expval{x} - \\mu_{0}}{\\sigma_{\\expval{x}}} = \\frac{\\expval{x} - \\mu_{0}}{\\sigma/\\sqrt{n}},\n\\end{equation}\n\n\\noindent and test the null hypothesis, $H_{0}$, that the mean is $\\mu_{0}$\nby comparing $Z$ to the standard normal distribution.\nDepending on if we wish to perform a one-sided or two-sided\ntest\\footnote{Note we can also perform two-sample and paired {\\Ztest}s, see the analogous \\ttest sections with $\\sigma$ replacing $s$.} we\nfind the probability of observing $P\\left(x < Z\\right)$, $P\\left(Z < x\\right)$, or $P\\left(Z < \\abs{x}\\right)$\nfrom the cumulative distribution function (CDF).\nThis \\pvalue then determines if we accept, $\\pvalue < \\alpha$, or reject, $\\alpha \\leq \\pvalue$,\n$H_{0}$ at a given significance level $\\alpha$, typically \\num{0.05} or lower.\nWe can\n\\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zscore.html}{compute $Z$} with\n\\texttt{scipy.stats.zscore},\nand \\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html}{find the \\pvalue} with\n\\texttt{scipy.stats.norm.cdf} or \\texttt{scipy.stats.norm.sf},\nbeing careful to include the correct side(s).\nFor an illustration see \\cref{fig:two_sided_t_test}, with \\Zscore replacing the \\tstat.\n\nIf we do not know the population standard deviation $\\sigma$, as is often the case,\nor if $n \\lesssim 30$, we should use a \\ttest instead.\nHowever, if $50 \\lesssim n$ we may be able to approximate $\\sigma \\approx s$\nand use a \\Ztest anyway.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Student's \\texorpdfstring{$t$}{t}-Test}\n\\label{hypo:t_test}\n\nStudent's \\ttest can be used to\nstatistically compare the means of two samples via the \\tdist\ngiven a \\tstat and degrees of freedom $\\nu$.\nThe \\ttest is appropriate when each sample is small\\footnote{For\nlarger sample sizes the \\tdist approaches the normal distribution which should be used instead.}, $n \\lesssim 30$,\nand drawn from a larger normally distributed population with an unknown standard deviation.\nThe \\pvalue returned by the test estimates the probability of obtaining the sample means\nassuming $H_{0}$, typically that the samples share the same mean, is true.\nWe can compute two-sided, \\ie the means are not equal, or one-sided, \\ie mean 1 is $>$ or $<$ mean 2,\nforms of the \\ttest in three 
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Student's \\texorpdfstring{$t$}{t}-Test}\n\\label{hypo:t_test}\n\nStudent's \\ttest can be used to\nstatistically compare the means of two samples via the \\tdist\ngiven a \\tstat and degrees of freedom $\\nu$.\nThe \\ttest is appropriate when each sample is small\\footnote{For\nlarger sample sizes the \\tdist approaches the normal distribution, which should be used instead.}, $n \\lesssim 30$,\nand drawn from a larger normally distributed population with an unknown standard deviation.\nThe \\pvalue returned by the test estimates the probability of obtaining the sample means\nassuming $H_{0}$, typically that the samples share the same mean, is true.\nWe can compute two-sided, \\ie the means are not equal, or one-sided, \\ie mean 1 is $>$ or $<$ mean 2,\nforms of the \\ttest in three basic variations.\nSee \\cref{fig:two_sided_t_test} for an illustration of a two-sided test's $H_{0}$ rejection regions.\nThe \\texttt{scipy.stats.ttest\\_*}\n\\href{https://docs.scipy.org/doc/scipy/reference/stats.html#statistical-tests}{family of functions}\nmakes it easy to compute {\\tstat}s and {\\pvalue}s in practice.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\textwidth]{figures/hypo/one_side_t_test_rejection_regions.png}\n\\caption{\nIllustration of a two-sided {\\ttest}'s null hypothesis, $H_{0}$, rejection regions,\nby \\href{https://www.machinelearningplus.com/statistics/t-test-students-understanding-the-math-and-how-it-works/}{Selva Prabhakaran}.\nIn a one-sided \\ttest we would only consider the rejection region on a single side.\n$\\alpha$ is the significance level, typically \\num{0.05} or lower.\n}\n\\label{fig:two_sided_t_test}\n\\end{figure}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{One-Sample}\n\\label{hypo:t_test:one}\n\nIn a one-sample \\ttest we have one sample of size $n$ with mean $\\expval{x}$ and standard deviation $s$,\nand test $H_{0}$ that it belongs to a parent population with mean $\\mu_{0}$.\nIn this case the parent population does not need to be normally distributed, but the distribution of possible $\\expval{x}$ is assumed to be normal.\n\nThe \\tstat \\cref{eq:hypo:t:one} can be used with $\\nu = n-1$ to find a \\pvalue.\n\n\\begin{equation}\\label{eq:hypo:t:one}\nt = \\frac{\\expval{x} - \\mu_{0}}{s / \\sqrt{n}}\n\\end{equation}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Two-Sample: Unpaired}\n\\label{hypo:t_test:two:unpaired}\n\nIn a two-sample unpaired \\ttest we have two independent samples, each of size $n$,\nbut with their own sample means $\\expval{x_{i}}$ and standard deviations $s_{i}$.\nProvided that we can assume that the two parent distributions of $x_{1}$ and $x_{2}$ have the same variance,\nwe can test $H_{0}$ that the two parent distribution means are equal.\n\nThe \\tstat \\cref{eq:hypo:t:two:unpaired} can be used with $\\nu = 2n-2$ to find a \\pvalue.\n\n\\begin{subequations}\\label{eq:hypo:t:two:unpaired}\n\\begin{align}\nt &= \\frac{\\expval{x_{1}} - \\expval{x_{2}}}{s_{p} \\sqrt{2/n}} \\label{eq:hypo:t:two:unpaired:t} \\\\\ns_{p} &= \\sqrt{\\left(s^{2}_{x_{1}} + s^{2}_{x_{2}}\\right)/2} \\label{eq:hypo:t:two:unpaired:s_p}\n\\end{align}\n\\end{subequations}\n\n\\subsubsection{Different Sample Sizes}\n\\label{hypo:t_test:two:unpaired:diff_n}\n\nIf we relax the sample size condition and let $n_{1} \\neq n_{2}$\nwe can still compute a \\tstat,\nprovided the parent distributions' variances are equal\\footnote{A useful guideline is $1/2 < s_{x_{1}} / s_{x_{2}} < 2$.}.\n\nThe \\tstat \\cref{eq:hypo:t:two:unpaired:diff_n} can be used with $\\nu = n_{1} + n_{2} - 2$ to find a \\pvalue.\n\n\\begin{subequations}\\label{eq:hypo:t:two:unpaired:diff_n}\n\\begin{align}\nt &= \\frac{\\expval{x_{1}} - \\expval{x_{2}}}{s_{p} \\sqrt{\\frac{1}{n_{1}} + \\frac{1}{n_{2}}}} \\label{eq:hypo:t:two:unpaired:diff_n:t} \\\\\ns_{p} &= \\sqrt{\\left(\\left(n_{1} - 1\\right)s^{2}_{x_{1}} + \\left(n_{2} - 1\\right)s^{2}_{x_{2}}\\right)/\\left(n_{1} + n_{2} -2\\right)} \\label{eq:hypo:t:two:unpaired:diff_n:s_p}\n\\end{align}\n\\end{subequations}\n
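A quick sketch of the unpaired case with scipy (invented data; \\texttt{equal\\_var=True} selects the pooled-variance form above, while \\texttt{equal\\_var=False} selects Welch's variant of the next subsection):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\na = rng.normal(5.0, 1.0, size=12)   # sample 1\nb = rng.normal(5.5, 1.0, size=15)   # sample 2\n\n# Pooled-variance unpaired t-test (assumes equal variances).\nt, p = stats.ttest_ind(a, b, equal_var=True)\nprint('t =', t, 'p =', p)\n\\end{verbatim}\n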
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Two-Sample: Paired}\n\\label{hypo:t_test:two:paired}\n\nIn a two-sample paired \\ttest we have two dependent samples,\nsuch as two sets of measurements from the same $n$ individuals taken at different times.\nIn this case, we are testing $H_{0}$ that the difference in means of the two dependent samples is $\\mu_{0}$.\nNote, we can set $\\mu_{0} = 0$ if we simply want to test for a statistically significant difference, and not an \\apriori degree of difference.\nDefining the difference between paired observations as $x_{\\Delta}$, we compute the mean difference $\\expval{x_{\\Delta}}$ and standard deviation $s_{\\Delta}$ of the sample.\n\nThe \\tstat \\cref{eq:hypo:t:two:paired} can be used with $\\nu = n-1$ to find a \\pvalue.\n\n\\begin{equation}\\label{eq:hypo:t:two:paired}\nt = \\frac{\\expval{x_{\\Delta}} - \\mu_{0}}{s_{\\Delta} / \\sqrt{n}}\n\\end{equation}\n
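\n% A sketch of the paired test via scipy.stats.ttest_rel, which is equivalent to a\n% one-sample test on the differences; before and after are made-up measurements.\n% import numpy as np\n% import scipy.stats\n% before = np.array([12.1, 11.8, 12.6, 12.3, 11.9])\n% after = np.array([11.7, 11.9, 12.1, 11.8, 11.6])\n% t, p = scipy.stats.ttest_rel(before, after)  # H0: mean difference is mu_0 = 0\n% print(f't = {t:.4f}, p = {p:.4f}')\n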
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{\\texorpdfstring{$\\chi^{2}$-Test}{Chi-Squared Test}}\n\\label{hypo:chi2_test}\n\nPearson's \\chiSqtest can be used to\nstatistically compare a set of observations in\n$n$ variables, $x_{i}$, to prior expectations via the \\chiSqdist.\nThe \\pvalue returned by the test estimates the probability of obtaining the observations\nassuming $H_{0}$, \\ie the expectations, is true.\nThe \\chiSqstat, $X^{2}$, is created with the assumption that\nthe data are normally distributed and independent,\nwhich often is the case due to the CLT.\nIt is constructed by squaring the difference\\footnote{Yates's\ncorrection for continuity $\\left(x^{\\text{obs}}_{j} - x^{\\text{exp}}_{j}\\right)^{2} \\to \\left(\\abs{x^{\\text{obs}}_{j} - x^{\\text{exp}}_{j}}-0.5\\right)^{2}$ may\nalso be applied in some low statistics cases.} between\nan expected value, $x^{\\text{exp}}_{i}$, and its corresponding observation, $x^{\\text{obs}}_{i}$,\nand dividing by the expectation:\n\n\\begin{equation}\\label{eq:hypo:chi2_statistic}\nX^{2} = \\sum_{i=1}^{n} \\frac{\\left(x^{\\text{obs}}_{i} - x^{\\text{exp}}_{i}\\right)^{2}}{x^{\\text{exp}}_{i}}\\,.\n\\end{equation}\n\nIn the limit that each $x^{\\text{obs}}_{i}$ is normally distributed and $n$ is large, $X^{2} \\to \\chi^{2}$.\nWe can then use the \\chiSqdist with $\\nu = n-1$ degrees of freedom to find the \\pvalue as the area to the right of $X^{2}$.\nAn easy way to compute $X^{2}$ and the \\pvalue is to use the \\texttt{scipy.stats.chisquare(f\\_obs, f\\_exp)}\n\\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html}{function}.\n\nThe \\chiSqtest can also be used to test for the data's independence, or homogeneity,\nfor $m$ samples of $n$ variables with $\\nu = \\left(n-1\\right)\\left(m-1\\right)$ and\\footnote{Note this is really the same as \\cref{eq:hypo:chi2_statistic} if we reindex, just with a different $\\nu$.}\n\n\\begin{equation}\\label{eq:hypo:chi2_statistic_ind}\nX^{2} = \\sum_{i=1}^{n} \\sum_{j=1}^{m} \\frac{\\left(x^{\\text{obs}}_{i,j} - x^{\\text{exp}}_{i,j}\\right)^{2}}{x^{\\text{exp}}_{i,j}}\\,.\n\\end{equation}\n
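\n% A sketch of a goodness-of-fit test with scipy; the observed counts are made up,\n% with a uniform expectation as for a fair six-sided die.\n% import numpy as np\n% import scipy.stats\n% f_obs = np.array([18, 22, 16, 14, 15, 35])\n% f_exp = np.full(6, f_obs.sum()/6)\n% X2, p = scipy.stats.chisquare(f_obs, f_exp)  # nu = n-1\n% print(f'X2 = {X2:.4f}, p = {p:.4f}')\n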
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Analysis of Variance (ANOVA)}\n\\label{hypo:ANOVA}\n\nWhen we are tasked with testing for differences between multiple groups\nwe should not use multiple pairwise tests, like two-sample unpaired {\\ttest}s,\nas the family-wise error rate \\cref{eq:hypo:alpha_fw}\nwill compound and become unreasonable, see \\cref{hypo:bonferroni_correction} for more.\nInstead, when dealing with three\\footnote{When there are only two groups a one-factor ANOVA is equivalent to a two-sample unpaired \\ttest with $F=t^{2}$.} or more groups,\nthe appropriate Analysis of Variance (ANOVA) method should be used.\nLike pairwise tests, there are multiple versions of ANOVA available for different situations, including\nfactorial ANOVA for multiple independent variables with interaction terms,\nrepeated measures ANOVA where the same subjects tested in different situations form the different groups,\nMultivariate Analysis of Variance (MANOVA) for multiple dependent variables,\nand further extensions such as Analysis of Covariance (ANCOVA) which incorporate correlations with additional covariate variables.\nAll share similar principles and assumptions,\nbut we will only focus on one-factor ANOVA here.\n\nIn general, ANOVA methods work by ``partitioning the sum of squares'',\n\\ie dividing the variance present in the data into\n``explained variance'', \\ie ``between-group variability'',\nand\n``unexplained variance'', \\ie ``within-group variability'',\ncomponents.\nThe ratio of these components then forms a \\Fstat\nwhich can be compared against the \\Fdist\nwith the appropriate degrees of freedom to produce a \\pvalue.\nThe null hypothesis $H_{0}$ is typically that all of the groups come from populations which share the same mean.\nIncreased between-group variability, or decreased within-group variability,\nwill increase the \\Fstat and make rejecting $H_{0}$ more probable.\nOne downside to ANOVA testing is that if we reject $H_{0}$\nwe will not know which group(s) differ from the others.\nIn this case additional \\posthoc analysis is required,\nsuch as Bonferroni corrected {\\ttest}s or Tukey's honest significance test.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Assumptions}\n\\label{hypo:ANOVA:assumptions}\n\nMathematically ANOVA is a special case of linear regression\nand shares many of the same assumptions:\n\n\\begin{enumerate}[noitemsep]\n  \\item The variances of the different groups are equal (homoscedasticity).\n  \\item The observations $y_{ij}$ within group $i$ are independent and identically distributed (\\iid) normal random variables.\n  \\item The residuals are normally distributed (normality\\footnote{The normality assumption can be bent without severe consequences, but as always review the literature for specifics first.}).\n  \\item The dependent variable is continuous.\n\\end{enumerate}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{One-Factor}\n\\label{hypo:ANOVA:one}\n\nA one-factor ANOVA should be used when comparing $K$ groups,\ndivided by a single independent variable $x$,\nacross a single dependent variable $y$.\nWe partition the sum of squares into\nmean square $MS_{\\text{between-group}}$ and $MS_{\\text{within-group}}$\n\\cref{eq:hypo:ANOVA:one:MS_between,eq:hypo:ANOVA:one:MS_within}, where\n$n_{i}$ represents the number of observations in the $i$th group,\n$N = \\sum_{i} n_{i}$ is the total number of observations,\n$y_{ij}$ is the $j$th observation for group $i$,\n$\\expval{Y_{i}} = \\sum_{j} y_{ij} / n_{i}$ is the mean for group $i$,\nand $\\expval{Y} = \\sum_{i}\\sum_{j} y_{ij} / N$ is the overall mean across all groups.\n\n\\begin{subequations}\\label{eq:hypo:ANOVA:one}\n\\begin{align}\nMS_{\\text{between-group}} &= \\frac{1}{K-1} \\sum_{i=1}^{K} n_{i}\\left(\\expval{Y_{i}} - \\expval{Y}\\right)^{2}, \\label{eq:hypo:ANOVA:one:MS_between} \\\\\nMS_{\\text{within-group}} &= \\frac{1}{N-K} \\sum_{i=1}^{K} \\sum_{j=1}^{n_{i}} \\left( y_{ij} - \\expval{Y_{i}} \\right)^{2}, \\label{eq:hypo:ANOVA:one:MS_within} \\\\\nF &= \\frac{MS_{\\text{between-group}}}{MS_{\\text{within-group}}}, \\label{eq:hypo:ANOVA:one:F} \\\\\nd_{1} &= K-1,\\quad d_{2} = N-K. \\label{eq:hypo:ANOVA:one:dof}\n\\end{align}\n\\end{subequations}\n\nNote that $MS_{\\text{between-group}}$ is comparing the group means to the overall mean,\nwhile $MS_{\\text{within-group}}$ is comparing all of the data points to their respective group means.\nThe \\Fstat is then the ratio of the mean squares \\cref{eq:hypo:ANOVA:one:F},\nwhich can be used with the appropriate degrees of freedom \\cref{eq:hypo:ANOVA:one:dof}\nand the \\Fdist to produce a \\pvalue.\nDepending on the \\pvalue and selected $\\alpha$\nwe can reject or fail to reject $H_{0}$,\n\\ie that all of the groups share one mean.\nThe \\texttt{scipy.stats.f\\_oneway} \\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f_oneway.html#scipy.stats.f_oneway}{function}\nperforms\\footnote{See\n\\href{https://www.analyticsvidhya.com/blog/2020/06/introduction-anova-statistics-data-science-covid-python/}{here} and\n\\href{https://www.reneshbedre.com/blog/anova.html}{here}\nfor example implementations.} the one-factor ANOVA test.\n
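\n% A sketch of a one-factor ANOVA with scipy; the three groups are made up.\n% import numpy as np\n% import scipy.stats\n% rng = np.random.default_rng(0)\n% g1 = rng.normal(10.0, 2.0, size=20)\n% g2 = rng.normal(10.5, 2.0, size=18)\n% g3 = rng.normal(12.0, 2.0, size=22)\n% F, p = scipy.stats.f_oneway(g1, g2, g3)  # F with d1 = K-1, d2 = N-K\n% print(f'F = {F:.4f}, p = {p:.4f}')\n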
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{\\texorpdfstring{$F$}{F}-Test of Equality of Variances}\n\\label{hypo:F_test_var}\n\nWe can test $H_{0}$ that the variances of two normally distributed samples are equal with a \\Ftest.\nGiven two samples of sizes $n_{1}$ and $n_{2}$ with sample variances $s_{2}^{2} \\leq s_{1}^{2}$\nwe construct the \\Fstat as:\n\n\\begin{equation}\\label{eq:hypo:F_test_var}\nF = \\frac{s_{1}^{2}}{s_{2}^{2}}\\,.\n\\end{equation}\n\nThe \\Fdist with degrees of freedom $d_{1} = n_{1}-1$ and $d_{2} = n_{2}-1$\nthen gives an appropriate \\pvalue.\nNote that this \\Ftest is particularly sensitive to the assumption that\nthe underlying samples are truly normally distributed;\nnot merely approximately normal via the CLT.\n
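\n% A sketch of the variance-ratio F-test built from the F distribution directly;\n% a and b are made-up samples, and the p-value below is one-sided.\n% import numpy as np\n% import scipy.stats\n% rng = np.random.default_rng(0)\n% a = rng.normal(0.0, 2.0, size=25)\n% b = rng.normal(0.0, 1.5, size=30)\n% s1, s2 = np.var(a, ddof=1), np.var(b, ddof=1)\n% d1, d2 = a.size - 1, b.size - 1\n% if s1 < s2:  # keep the larger variance in the numerator\n%     s1, s2, d1, d2 = s2, s1, d2, d1\n% F = s1 / s2\n% p = scipy.stats.f.sf(F, d1, d2)\n% print(f'F = {F:.4f}, p = {p:.4f}')\n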
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{\\texorpdfstring{$F$}{F}-Test of Lack-of-Fit Sum of Squares}\n\\label{hypo:F_test_fit}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Binomial Proportion Test}\n\\label{hypo:binomial_test}\n\nWhen dealing with samples of $n$ binary events we can perform hypothesis testing\non the number of observed positive events $k$\nusing test statistics built on the binomial distribution.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Exact Binomial Test}\n\\label{hypo:binomial_test:exact}\n\nFor small $n$ it is possible to compute the \\pvalue\nexplicitly from the binomial distribution \\cref{eq:stats:binomial}.\nWe test $H_{0}$ that the probability of success is $\\pi_{0}$,\nhaving actually observed $k$ successes, \\ie an observed proportion of $\\pi = k/n$.\n\nThe \\pvalue is then the sum\n\n\\begin{equation}\\label{eq:hypo:binomial_test:exact}\n\\pvalue = \\sum_{i \\in \\, \\mathcal{I}} \\binom{n}{i} \\pi_{0}^{i} \\left(1-\\pi_{0}\\right)^{n-i},\n\\end{equation}\n\n\\noindent where $\\mathcal{I}$ depends on the type of test:\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{l|l}\n$\\pi < \\pi_{0}$ & $\\mathcal{I} = \\left\\{0, 1, \\ldots, k\\right\\}$, \\\\\n$\\pi > \\pi_{0}$ & $\\mathcal{I} = \\left\\{k, k+1, \\ldots, n\\right\\}$, \\\\\n$\\pi \\neq \\pi_{0}$ & $\\mathcal{I} = \\left\\{\\forall \\, i: P\\left(x=i\\right) \\leq P\\left(x = k\\right)\\right\\}$, with binomial $P\\left(x\\right)$ \\cref{eq:stats:binomial}.\n\\end{tabular}\n\\end{table}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{One-Sample}\n\\label{hypo:binomial_test:one}\n\nFor large sample sizes the binomial distribution is approximated by the normal distribution\nand we can use a form of the \\Ztest to produce {\\pvalue}s.\nWe require the observations to be independent,\n\\ie we may only sample $< \\SI{10}{\\percent}$ of the parent population,\nthe sampling distribution of $\\pi$ to be approximately normal,\nand that there are $\\geq 10$ successes and $\\geq 10$ failures,\n$n \\pi_{0} \\geq 10$ and $n \\left(1-\\pi_{0}\\right) \\geq 10$, \\ie the success-failure condition.\n\nThe \\Zscore is then:\n\n\\begin{equation}\\label{eq:hypo:binomial_test:one}\nZ = \\frac{\\pi - \\pi_{0}}{\\sqrt{\\pi_{0}\\left(1-\\pi_{0}\\right)/n}} = \\frac{k - n \\pi_{0}}{\\sqrt{n\\pi_{0}\\left(1-\\pi_{0}\\right)}}.\n\\end{equation}\n\nNote that the one-sample test is provided by the\n\\texttt{scipy.stats.binomtest} \\href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binomtest.html}{function}.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Two-Sample}\n\\label{hypo:binomial_test:two}\n\nIn the case of two samples, we can test $H_{0}$\nthat the difference\\footnote{Again, we set $\\pi_{\\Delta} = 0$ if we want to test for any difference.} in the samples' probabilities is $\\pi_{\\Delta}$.\nWe require that the $\\pi$ from the two samples are uncorrelated,\nhave approximately normal sampling distributions,\nand that their difference $\\pi_{1} - \\pi_{2}$ is an unbiased estimator.\n\nThe \\Zscore is then:\n\n\\begin{subequations}\\label{eq:hypo:binomial_test:two}\n\\begin{align}\nZ &= \\frac{\\pi_{1} - \\pi_{2} - \\pi_{\\Delta}}{\\sqrt{\\pi_{p} \\left(1-\\pi_{p}\\right)\\left(1/n_{1} + 1/n_{2}\\right)}} \\label{eq:hypo:binomial_test:two:Z} \\\\\n\\pi_{p} &= \\frac{k_{1} + k_{2}}{n_{1} + n_{2}} \\label{eq:hypo:binomial_test:two:pi_p}\n\\end{align}\n\\end{subequations}\n
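\n% A sketch of the exact test and its normal approximation; k, n, pi_0 are made up\n% and chosen to satisfy the success-failure condition.\n% import scipy.stats\n% k, n, pi_0 = 13, 100, 0.1\n% exact = scipy.stats.binomtest(k, n, pi_0)  # two-sided by default\n% Z = (k - n*pi_0) / (n*pi_0*(1 - pi_0))**0.5\n% p_approx = 2*scipy.stats.norm.sf(abs(Z))\n% print(f'exact p = {exact.pvalue:.4f}, Z = {Z:.4f}, approximate p = {p_approx:.4f}')\n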
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Mann-Whitney \\texorpdfstring{$U$}{U} Test}\n\\label{hypo:mann_whitney_U_test}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Kruskal-Wallis \\texorpdfstring{$H$}{H} Test}\n\\label{hypo:kruskal_wallis_H_test}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Kolmogorov-Smirnov Test}\n\\label{hypo:KS_test}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Hypothesis Test Error Types and Power Analysis}\n\\label{hypo:power}\n\nIn hypothesis testing, like binary classification, we can suffer from two types of errors:\nType I or false positives, and Type II or false negatives.\nThe probabilities of these errors are functions of the experimental design\nand are important to understand before undertaking a study.\nWe label $\\alpha$ as the probability of rejecting a true null hypothesis, \\ie a type I error,\nand $\\beta$ as the probability of failing to reject a false null hypothesis, \\ie a type II error.\nSee \\cref{table:CM} for a graphical representation in the context of binary classification.\n\n$\\alpha$ is the easier parameter to understand and improve,\nas it is just the \\pvalue threshold we select before the study.\nIt is typical to use $\\alpha \\leq \\num{0.05}$.\n$\\beta$ depends on many factors including\n$\\alpha$,\nthe magnitude of the underlying effect,\nthe measurement variance,\nthe model being utilized,\nand the sample size $n$.\nInstead of using $\\beta$ directly we often talk about the statistical power of a hypothesis test, $1-\\beta$,\n\\ie the probability of correctly rejecting a false null hypothesis.\n$\\num{0.8} < 1-\\beta$ is a commonly used target for the power.\nAs experimenters, we can\ntry to improve our methods to reduce the measurement variance,\nselect a more appropriate model\\footnote{Parametric\nmodels tend to have higher powers than the equivalent non-parametric model.\n% https://youtu.be/diRX_NesFkA?t=558\nIn particular, comparing\nthe Mann-Whitney U and unpaired \\ttest gives $\\frac{\\left(1-\\beta\\right)_{\\text{MWU}}}{\\left(1-\\beta\\right)_{t}} = \\frac{3}{\\pi} = \\num{0.955}$,\n%while comparing the Wilcoxon signed-rank and paired \\ttest gives $\\frac{\\left(1-\\beta\\right)_{\\text{WSR}}}{\\left(1-\\beta\\right)_{t}} = \\frac{2}{\\pi} = \\num{0.6437}$, % TODo add if I cover the Wilcoxon signed-rank test in the future\nfor large $n$.},\nor begrudgingly accept some combination of a larger $\\alpha$ or larger minimal detectable difference.\nHowever, the primary lever for improving an experiment's power is by increasing $n$,\nat the cost of additional time and money to complete the study.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{\\texorpdfstring{$Z$}{Z}-Test Power Example}\n\\label{hypo:power:Z_example}\n\nCalculating $\\beta$ can be challenging and is commonly done in software\\footnote{The \\texttt{statsmodels.stats.power}\n\\href{https://www.statsmodels.org/stable/stats.html\\#power-and-sample-size-calculations}{module}\nprovides many useful functions, see \\href{https://machinelearningmastery.com/statistical-power-and-power-analysis-in-python/}{here} for one example implementation.} particular to the model being utilized.\nFor a simple example, we can consider a \\Ztest of a null hypothesis $\\mu_{0}$\nand compute the power of the test at a specific value of the alternative hypothesis $\\mu_{a}$, with $\\mu_{0} < \\mu_{a}$.\nFor the given $\\alpha$ being used we can look up the corresponding critical \\Zscore, $Z_{\\alpha}$.\nThen assuming a standard deviation of $s_{\\text{est}}$ from prior work,\nand knowing $n$, we can estimate the sample mean\nwhich would put us at the critical \\Zscore, $\\expval{x_{c}}$ \\cref{eq:hypo:power_ex:x_c}.\nWe then calculate the \\Zscore again, this time assuming the alternative hypothesis $\\mu_{a}$ is true\nand we have observed $\\expval{x_{c}}$ from our sample, \\ie we are right at the edge of rejecting a false null hypothesis.\nThis \\Zscore, $Z_{a}$ \\cref{eq:hypo:power_ex:Z_a}, can finally be used to find the power $1-\\beta = P\\left(Z_{a} < Z\\right)$.\n\n\\begin{subequations}\\label{eq:hypo:power_ex}\n\\begin{align}\nZ_{\\alpha} &= \\frac{\\expval{x_{c}} - \\mu_{0}}{s_{\\text{est}} / \\sqrt{n}} \\implies\n\\expval{x_{c}} = \\frac{Z_{\\alpha} s_{\\text{est}}}{\\sqrt{n}} + \\mu_{0} \\label{eq:hypo:power_ex:x_c} \\\\\nZ_{a} &= \\frac{\\expval{x_{c}} - \\mu_{a}}{s_{\\text{est}} / \\sqrt{n}}\n= Z_{\\alpha} + \\sqrt{n}\\,\\frac{\\mu_{0} - \\mu_{a}}{s_{\\text{est}}} \\label{eq:hypo:power_ex:Z_a}\n\\end{align}\n\\end{subequations}\n\nNote that as advertised $Z_{a}$ depends on\nthe choice of $\\alpha$ via $Z_{\\alpha}$,\nthe magnitude of the underlying effect $\\mu_{0} - \\mu_{a}$,\nthe measurement standard deviation $s_{\\text{est}}$,\nthe sample size $n$,\nand is particular to this hypothesis test.\nPlugging in numbers, if\n$\\alpha = \\num{0.05} \\to Z_{\\alpha} = \\num{1.645}$,\n$\\mu_{0} = \\num{10}$,\n$s_{\\text{est}} \\approx \\num{2}$,\n$n = \\num{100}$,\nand we want to find the power of the test for an alternative hypothesis of $\\mu_{a} = \\num{10.5}$,\nwe have $Z_{a} = \\num{-0.8551} \\to P\\left(\\num{-0.8551} < Z\\right) = \\num{0.8038}$\nand thus the power is an acceptable $1-\\beta = \\num{0.8038} = \\SI{80.38}{\\percent}$.\nWe could use a similar line of reasoning to estimate\nthe $n$ necessary to obtain a desired $\\alpha$ and $\\beta$ before running the experiment.\n\n% import numpy as np\n% import scipy.stats\n% norm = scipy.stats.norm\n% Z_a = norm.ppf(1-0.05) + np.sqrt(100)*(10-10.5)/2\n% print(f'Z_a = {Z_a:.4f}')\n% print(f'Power = 1-beta = {1-norm.cdf(Z_a):.4f}')\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Lehr's Equation for \\texorpdfstring{$t$}{t}-Tests}\n\\label{hypo:power:lehr}\n\nAs a rough approximation for one-sample (two-sample) {\\ttest}s,\nLehr argues that to have a power of $1-\\beta \\sim \\num{0.8}$ with $\\alpha = \\num{0.05}$,\n$n$ \\cref{eq:hypo:power:lehr} should be set to 8\\footnote{This\nfactor is $8 \\approx \\left(Z_{\\alpha/2} + Z_{\\beta}\\right)^{2}$, for $1-\\beta = \\num{0.8}$, $\\alpha = \\num{0.05}$.} (16)\ntimes the ratio of the estimated population variance, $s^{2}$,\nand the desired detectable difference squared, $\\Delta^{2} = \\left(\\mu_{1} - \\mu_{2}\\right)^{2}$.\nNote that we can also rearrange this approximation to estimate $\\Delta^{2}$ given a particular $n$.\n\n\\begin{subequations}\\label{eq:hypo:power:lehr}\n\\begin{align}\nn &\\approx \\hphantom{1}8 \\frac{s^{2}}{\\Delta^{2}}\\quad \\left(\\text{One-Sample}\\right), \\label{eq:hypo:power:lehr:one} \\\\\nn &\\approx 16 \\frac{s^{2}}{\\Delta^{2}}\\quad \\left(\\text{Two-Sample}\\right). \\label{eq:hypo:power:lehr:two}\n\\end{align}\n\\end{subequations}\n
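\n% A quick numerical check of Lehr's approximation with a made-up s = 2, Delta = 0.5;\n% the two-sample n is per group.\n% s, Delta = 2.0, 0.5\n% print(f'one-sample n ~ {8*s**2/Delta**2:.0f}, two-sample n ~ {16*s**2/Delta**2:.0f}')\n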
\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Bonferroni Correction}\n\\label{hypo:bonferroni_correction}\n\nWhen conducting multiple hypothesis tests on the same set of data\nwe run the risk of underreporting $\\alpha$ for the whole analysis.\nIn particle physics this is known as the look elsewhere effect\\footnote{See the discussion in Appendix C of the dissertation \\cite{mepland_dissertation}.}\n\\cite{Demortier:2007zz,lyons2008,Gross2010,Ranucci:2012ed}.\nFor example, if we set $\\alpha = \\num{0.05}$ for any individual test\non a set of data with many features, but then run $\\num{20}$\ntests on it, by chance we'd expect $\\approx \\num{1}$ test\nto erroneously reject a true $H_{0}$.\nTo quantify this concept, we can construct\nthe family-wise $\\alpha$ across all of the $N$ tests done on a dataset\\footnote{There is\ndisagreement on the best way to treat $\\alpha_{\\text{FW}}$,\n\\eg are we even talking about the right null hypothesis \\cite{Perneger1236},\nand what if the different tests use correlated variables -- then they are not wholly independent tests in the context of $\\alpha_{\\text{FW}}$.\nThe $\\alpha_{\\text{FW}}$ of \\cref{eq:hypo:alpha_fw} is just one simple definition.},\n$\\alpha_{\\text{FW}}$ \\cref{eq:hypo:alpha_fw}.\nIn our earlier example, we would have $\\alpha_{\\text{FW}} = \\num{0.642}$,\nor a \\SI{64.2}{\\percent} chance of at least one test rejecting $H_{0}$ in error.\n\n\\begin{equation}\\label{eq:hypo:alpha_fw}\n\\alpha_{\\text{FW}} = 1 - \\left(1 - \\alpha\\right)^{N}\n\\end{equation}\n\nTo address this issue we can apply the Bonferroni correction,\nand simply divide our nominal $\\alpha$ by $N$\nbefore conducting the tests\\footnote{Or equivalently multiply the observed {\\pvalue}s by $N$.}.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Sequential Bonferroni Correction, \\texorpdfstring{\\ie}{ie} Holm--Bonferroni Correction}\n\\label{hypo:bonferroni_correction:sequential}\n\nWe can control $\\alpha_{\\text{FW}}$ more efficiently in terms of the cost imposed on $\\alpha$, and hence decreased power,\nby using the sequential Bonferroni correction, \\ie the Holm--Bonferroni correction.\nAs the name suggests, in the sequential correction\nwe iterate through the $i \\in \\left\\{0, 1, \\ldots, N-1\\right\\}$ tests\nin order of their {\\pvalue}s, from smallest to largest,\nchecking that the $i$th test's \\pvalue is $< \\alpha / \\left(N-i\\right)$.\nWhen we come to the first test, at index $k$ say, with a $\\pvalue \\geq \\alpha / \\left(N-k\\right)$\nwe stop iterating and say the first $k$ tests reject $H_{0}$,\nwhile the remaining $N-k$ tests fail to reject $H_{0}$.\nIn this way we can constrain $\\alpha_{\\text{FW}} \\leq \\alpha$,\nwhile checking most tests against a less stringent condition\nthan the $\\alpha / N$ of the normal Bonferroni correction.\n
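\n% A sketch of the sequential correction on made-up p-values.\n% import numpy as np\n% alpha = 0.05\n% p_values = np.array([0.001, 0.009, 0.017, 0.021, 0.040])\n% reject = np.zeros(p_values.size, dtype=bool)\n% for i, idx in enumerate(np.argsort(p_values)):\n%     if p_values[idx] >= alpha / (p_values.size - i):\n%         break  # this test and all larger p-values fail to reject H0\n%     reject[idx] = True\n% print(reject)  # here only the first two tests reject H0\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Experiment Design}\n\\label{hypo:experiment}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Sources of Bias}\n\\label{hypo:experiment:bias}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Blocking}\n\\label{hypo:experiment:blocking}\n% 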
TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{Interaction Effects}\n\\label{hypo:experiment:interaction}\n% TODO\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsection{AB Testing Example}\n\\label{hypo:experiment:AB}\n% TODO\n", "meta": {"hexsha": "5b53632b4c61f5519c7e219dc3c1ed48e5eaa7f1", "size": 30382, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/hypo.tex", "max_stars_repo_name": "mepland/data_science_notes", "max_stars_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-05-30T15:15:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-23T01:01:08.000Z", "max_issues_repo_path": "sections/hypo.tex", "max_issues_repo_name": "mepland/data_science_notes", "max_issues_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sections/hypo.tex", "max_forks_repo_name": "mepland/data_science_notes", "max_forks_repo_head_hexsha": "f529a86490110fc6a30d1af6d37c0add2517244f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.8464163823, "max_line_length": 232, "alphanum_fraction": 0.6666118096, "num_tokens": 8770, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7185943865443349, "lm_q1q2_score": 0.556611271771428}}
{"text": "\\section{Group Anomaly Detection (GAD) Techniques} \\label{Sec:D}\n% Brief paragraph \nAfter setting up the group deviation detection  problem and describing the general framework for methods in previous sections, we now  explain specific GAD techniques in further detail. A description of GAD techniques is provided in terms of the four key components from Section   \\ref{Sec:Problem}.  GAD involves comparing multiple groups and identifying groups with significantly different statistical properties.   GAD methods fall into  categories of discriminative methods and generative models. Hypothesis tests are a specific generative models that also described in the following.  \n\n\n\n\\subsection{ GAD Discriminative Methods } \n In GAD, a discriminative method classifies input groups into regular and anomalous behaviors.  \n % Consider a simple discriminative approach where statistical properties of groups (such as mean and/or variance) are estimated then  pointwise anomaly detection methods are applied. Further statistical properties are quantified using parametric or non-parametric measures as described in Table \\ref{Tab:Des}.  The selection of statistical properties for analysis  is crucial as  Guevara et al. \\cite{SMDD} find single quantities of group means may not sufficiently distinguish anomalous group deviations.  Given an appropriate combination of estimated statistical properties of group distributions, pointwise anomaly detection methods are applicable for classifying group behaviors.   The effectiveness of such a simple approach for identifying group anomalies also depends on the ability of metrics to quantify statistical properties.   \nWe  elaborately discuss two state-of-the-art discriminative models for detecting group anomalies.  Firstly One-Class Support Measure Machine (OCSMM)   proposed by Muandet and Sch\\\"olkopf \\cite{OCSMM} %supports unsupervised learning for classifying group behaviors,  OCSMM \nis an unsupervised method that maximises the margins between two classes using a separating hyperplane.  \n Another discriminative model Support Measure Data Description (SMDD)   proposed by Guevara et al. \\cite{SMDD} is similar to OCSMM however uses a supervised approach  based on a minimizing the  volume sets that contains the majority of groups in a training set.  % are contained in boundaries (soft or hard) of a volume set. \n  Both methods can handle continuous and discrete input data where it is assumed that the statistical properties of group deviations can be differentiated based on certain  optimisation criteria. \n%\\subsection{ One-Class Group Anomaly Detection}\n  \n%Firstly SVDD summarises data behavior in a training set based on the optimisation criteria of minimum volume sets.\n\n%Firstly, we introduce the notation to set up the problem for  OCSMM and SMDD.\n In this analysis, each  group   is associated with a probability measure where observations  are assumed to be  independent and identically distributed (iid). \nFormally, %on the  space $(\\Omega,\\mathcal{F})$\ngiven an  outcome space $\\Omega$ and $\\sigma$-algebra $\\mathcal{F}$, we define a set of probability measures $\\mathcal{P}=\\{\\mathbb{P}_1,\\dots,\\mathbb{P}_M\\}$  \nwhere each function is given by $\\mathbb{P}_m: \\, \\Omega \\to [0,1]\n$ for $m=1,\\dots, M$. \n  % The set of all probability measures  $\\mathcal{P}_\\Omega$  is defined on the probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$. 
\nIf groups  ${\\bf G}_1,{\\bf G}_2,\\dots,{\\bf G}_M$ exhibit regular behavior then they have probability distributions specified by $\\mathbb{P}_1,\\dots,\\mathbb{P}_M$.  %Given the group size $N_m$, the empirical sample  $x^{(m)}_k$ denotes the $k$th observed values from the $m$th group for $k=1,\\dots,N_m$ . \n%A comparison of these probability measures based on the empirical samples leads to the detection of an anomalous group.\n%and defined on the set of probabilities $\\mathcal{P}_\\Omega$.   \n %Then the mean embedding function is defined as\n% For , the second component $f_1$ have the same characterisation function. With one-class methods, % initially transform data %into feature representations of each group. %estimate  probability measures of\n %In particular, \n In both OCSMM and SMDD, mean embedding functions are applied to transform groups into points in a reproducing kernel Hilbert space (RKHS). \n Let $\\mathcal{H}$ denote the RKHS of probability measures with kernel $k:\\Omega \\times \\Omega  \\to\\mathbb{R}$. Group behaviors are characterised using mean kernel embedding functions as defined by \n \\begin{align}\n\\mu :  \\mathcal{P}\\to \\mathcal{H}, \\quad\\mathbb{P}_m \\mapsto \\mu_{\\mathbb{P}_m}=E_{\\mathbb{P}_m} [ k({\\bf G}_m,\\cdot) ] =\\int_{\\Omega} k(u,\\cdot)\\, d\\mathbb{P}_m(u),  \\label{KMF}\n \\end{align}\n for $m=1,\\dots,M$. \n \n\n  \n\n\\subsubsection{ One-Class Support Measure Machine (OCSMM) }\n Muandet et al. \\cite{OCSMM} propose OCSMM for discriminating between regular and anomalous group behaviors using a parametrised hyperplane. OCSMM maximises\n the margin between the two classes separated by the hyperplane.  \n{ Since OCSMM is analogous to one-class support vector machines \\cite{OCSVM}, we describe a linear hyperplane for a vector ${\\bf x}$ as } \n\\[ %f_{\\bf w}(x) = \n\\big\\langle {\\bf w}, {\\bf x} \\big \\rangle = \\rho \\]\nwhere the parameters $({\\bf w}, \\rho)$ are respectively the weights and the bias term parametrizing a separating hyperplane. Regular behaviors are further away from the origin than anomalous instances. \n\nOCSMM allows the user to select the expected proportion of group anomalies in the training data as denoted by $\\nu \\in (0,1)$. Since group anomalies are assumed to occur much less frequently than regular groups, OCSMM learns patterns from the one class that exhibits the dominant behavior in a dataset. \nFor more flexible margins in a separating hyperplane, \nslack variables $\\xi_1,\\dots,\\xi_M$ are introduced such that the parameters of a separating hyperplane are estimated by optimizing the following problem %results in the optimisation problem \n%\\vspace{-1cm}\n \\begin{align}\n&\\min_{(  \\rho, {\\bf w},\\boldsymbol \\xi)} \n\\frac{1}{2} \\langle {\\bf w},{\\bf w} \\rangle_{\\small \\mathcal{H}} - \\rho +  \\frac{1}{\\nu M} \\sum_{m=1}^M\n\\xi_m  \\label{minW} \\\\\n\\mbox{with constraints: } &  \\langle {\\bf w} ,  \\mu_{\\mathbb{P}_m } \\rangle_\\mathcal{H} \\ge \\rho - \\xi_m \\mbox{ and } \\xi_m \\ge 0 \\mbox{ for } m=1,\\dots,M   \\nonumber\n\\end{align}\n%and the parameters ${\\bf w},\\,\\rho$ and  $\\boldsymbol\\xi$ are\n The first term in Equation (\\ref{minW}) represents minimizing the error or distance of separating data points from the origin. 
The slack variables offer a more flexible description of a separating hyperplane where penalty term $1/{\\nu M}$ represents the trade-off between the distance of a hyperplane from the origin and the upper bound on expected number of group anomalies in a training set. \n% The parameter $\\nu$ is  a penalty term that represents the trade-off between maximizing the margin and the number of allowable point \nEquation (\\ref{minW}) can be solved by introducing Lagrange multipliers $\\boldsymbol \\alpha$ where the estimated hyperplane is %estimated by %with weights % $\\bf w$ %simplified as \n\\begin{align*}\n f_{\\bf w}(\\mu_{\\mathbb{P}_m } ) = \\big\\langle {\\bf w},\\mu_{\\mathbb{P}_m } \\big \\rangle_\\mathcal{H} \\quad %= \\sum_{l} \\alpha_l \\, \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H} \\\\\n\\mbox{ where }{\\bf w}= \\sum_{m=1}^M \\alpha_m  \\mu_{\\mathbb{P}_m} \\mbox{ and } \\sum_{m=1}^M \\alpha_m=1\n%0 \\le \\alpha_m \\le \\lambda\n\\end{align*} \n \n%$(\\bf w, \\rho)$  is a function  $f_{\\bf w}:  \\mathcal{H}\\to \\mathbb{R}$. Given the embedding function $mu$, a group represented by random variable ${\\bf G}$ and probability measure $\\mathbb{Q}$  is classified with regular behavior if \n%$f_{\\bf w}({\\bf G}) = \\big\\langle {\\bf w}, \\mu_{\\mathbb{Q} }  \\big \\rangle\\ge \\rho $. \nThe following schema describes OCSMM in four key components:   \n \\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})={\\bf w}$: \\\\ The training set of group behaviors contains information about  $\\mu_{\\mathbb{P}_1},\\dots, \\mu_{\\mathbb{P}_M}$.  In particular, the weight function of a separating hyperplane characterises a group training set by\n\\[{\\bf w}= \\sum_{m=1}^M \\alpha_m  \\mu_{\\mathbb{P}_m}\\] \n \\end{enumerate}\n %\n \\begin{enumerate}[2.]\n \\item Characterisation function $f_2({\\bf G}_{test})=\\mu_{\\mathbb{P}_m}$: \\\\ \nThe $m$th group is characterised by the mean embedding function.  Intuitively the value of a group mapped onto the RKHS is a feature representation for the  group. \n\\end{enumerate}\n %\n\\begin{enumerate}[3.]\n\\item Measure $ \\mathcal{D}\\big( f_1({\\bf G}_{train}), f_2({\\bf G}_{test})\\big )$ using a separating hyperplane: \\\\\nThe separating hyperplane compares characterisation functions ${\\bf w}$ and $\\mu_{\\mathbb{P}_m }  $ with\n$ \\big\\langle {\\bf w},\\mu_{\\mathbb{P}_m } \\big \\rangle_\\mathcal{H}%\\sum_{l} \\alpha_l \\, \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H}\n$.  \n% A special reproducing property of mean embedding functions in RKHS is $\\langle \\mu_{\\mathbb{P}_m},f \\rangle = E_\\mathbb{P}[f(X)]$\n%for probability measure $\\mathbb{P}_m \\in \\mathcal{P}$ and function $f \\in \\mathcal{H}$. We introduce a pairwise similarity function for pairwise probability measures as  $K: \\mathcal{P} \\times \\mathcal{P}  \\to \\mathbb{R}$.\n% By applying the reproducing property and  Fubini's theorem, the kernel on  probability measures is defined as\n% \\begin{align*}\n%% K(\\mathbb{P}_m,\\mathbb{P}_j) &= \n%\\langle \\mu_{\\mathbb{P}_m},\\mu_{\\mathbb{P}_j} \\rangle_\\mathcal{H}   &=\\int \\int  k(x,y)  \\, d{\\mathbb{P}_m}(x) d{\\mathbb{P}_j}(y) \n% \\end{align*}\n%for $i,j=1,\\dots,M$. 
Instead of computing this integral through Monte Carlo simulation, \n For a deeper understanding of this measure, consider  \n\\begin{align*}\n \\big\\langle {\\bf w},\\mu_{\\mathbb{P}_m } \\big \\rangle_\\mathcal{H}= \\sum_{l} \\alpha_l \\, \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H} \n\\end{align*} \nwhere the kernel similarity on probability measures based on empirical samples is \n%To ensure the kernel similarity estimated on empirical samples   %K(\\hat{\\mathbb{P}}_m,\\hat{\\mathbb{P}}_l)=\n\\begin{align}\n\\langle \\hat{\\mu}_{\\mathbb{P}_m},\\hat{\\mu}_{\\mathbb{P}_l} \\rangle_\\mathcal{H}  \n=\n\\frac{1}{N_m   N_l } \\sum_{i=1}^{N_m}\n \\sum_{i'=1}^{N_l} k\\Big( X_{mi} , \n X_{li'}  \\Big)  \\label{kernelest}\n \\end{align}\n%and $X_{mi}$ is the $i$th observation in the $m$th vector. \n%The following assumptions are imposed \n For a reasonable approximation using empirical estimates for probability measures, we require that  $||  \\mu_{\\mathbb{P}_m}  - \\hat{\\mu}_{ \\mathbb{P}_m}  || $ is bounded   for $m=1,\\dots,M$.   \n\n  The selection of kernel similarity function $k$ is important in the detection of group anomalies.\nWhen $k$ is chosen as a characteristic kernel such as a  Gaussian   kernel, the representative function $\\mu$ is injective, that is there is a distinct mapping of groups onto the RKHS. The anomalous measure and classification threshold in  OCSMM are also dependent on the selection of kernel function. Table \\ref{Tab:Kernel} provides examples where given a particular choice of reproducing kernel  $k$ in Equation (\\ref{kernelest}), OCSMM characterises different statistical properties of  groups. \n\n % of kernel functions that captures different statistical properties of groups. %For instance, a linear kernel only captures the behavior of the first moment of a distribution while the  Gaussian RBF describes infinite moments.\\\\[2mm]\n\n\\end{enumerate}\n\n \\begin{table}[H]\n \n\\begin{center}\n\\tabcolsep=0.25cm\n \\scalebox{0.9}{\n\\begin{tabular}{p{15mm}ccp{20mm} } \n \\hline\\\\[-2mm]\nType & Reproducing Kernel $k(u,v)$  & Kernel Similarity $K(\\mathbb{P}_i,\\mathbb{P}_j  )$ & Moments  \\\\[1mm]\n\\hline \\\\[-4mm]\n \\hline\\\\[-2mm]\nLinear & $\\langle  u,v \\rangle $  & $0\\, [Av \\mbox{ if } \\mathbb{P}=N(A,1)] $ & First\\\\[2mm]\nQuadratic & $\\langle  u,v \\rangle ^2$   &  $v^2$ & Second \\\\[2mm]\nQuadratic &  $(\\langle  u,v \\rangle +1)^2$ & $v^2+1$ & First \\& Second \\\\[2mm]\nGaussian RBF & $\\displaystyle \\exp \\bigg( {-\\frac{||u-v||^2}{2\\sigma^2} } \\bigg)$ & $ \\displaystyle \\frac{1}{\\sqrt{2}}\\exp \\bigg({-\\frac{||v||^2}{4} } \\bigg)  $ & Infinite\n\\\\[-1mm]\n& & $(\\sigma^2=1)$ & \n \\\\[1mm] \\hline\n\\end{tabular}\n}\n\\end{center}\n%\\vspace{-1cm}\n \\caption{ Examples of different kernels for probability distribution $\\mathbb{P}=N(0,1)$. % and the Gaussian RBF has a bandwidth or tuning parameter  $\\sigma>0$.  \n }\n \\label{Tab:Kernel}\n\\end{table}\n\n \\begin{enumerate}[4.]\n\\item  Threshold $\\epsilon= \\rho$: \\\\  The  threshold term for OCSMM represents a bias parameter for the separating  hyperplane. This threshold is calculated from   groups with probability measures that are mapped closest to the separating hyperplane. 
In fact,   support measures provide a description for the separating hyperplane such that the $m'$th group with $ 0 <\\alpha_{m'} < \\displaystyle \\frac{1}{\\nu M}$ is a support measure  that satisfies \n\\[ \\rho  %=f_{\\bf w}(\\mu_{\\mathbb{P}_m } )\n = \\big\\langle {\\bf w},\\mu_{\\mathbb{P}_{m'} } \\big \\rangle_\\mathcal{H}= \\sum_{l} \\alpha_l \\, \\langle \\mu_{\\mathbb{P}_{m'} }, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H}  \\]\nA threshold for anomalous groups is     \\[\\hat{\\rho}=  \\sum_{l} \\hat{\\alpha}_l \\, \\langle \\hat{\\mu}_{\\mathbb{P}_{m'}}, \\hat{\\mu}_{\\mathbb{P}_l}\\rangle_\\mathcal{H} \\] \nwhere a group anomaly is separated by a parametrised hyperplane with\n$   \\big\\langle \\hat{\\bf w},\\hat{\\mu}_{\\mathbb{P}_m } \\big \\rangle_\\mathcal{H}  < \\hat{\\rho}$. Thus group deviations are closer to the origin than regular group behaviors. \n \\end{enumerate}\n \n \n \n%Solving with the Lagrange multiplers $\\boldsymbol \\alpha$ and $\\boldsymbol \\gamma$, results in the Lagrangian function\n%\\begin{align}\n%L(\\rho,{\\bf w},{\\boldsymbol \\xi},{\\boldsymbol \\alpha},{\\boldsymbol \\gamma})= \\frac{1}{2} || {\\bf w}||^2 + \\frac{1}{\\nu M} \\sum_{i=1}^M\n%\\xi_i +\\sum_{i=1}^M \\alpha_i  \\Big( \\langle {\\bf w} , \\mu_{\\mathbb{P}_i } \\rangle - \\rho +\\xi_i \\Big) + \\sum_{i=1}^M\\gamma_i\\xi_i \\label{Lag2}\n%\\end{align}\n%Minimizing $L$ over $(\\rho,{\\bf w},\\boldsymbol\\xi )$  and then maximizing for arguments $(\\boldsymbol\\alpha,\\boldsymbol\\gamma)$. \n\n\n\n\n% This leads to an equivalent optimisation problem of % to Equation (\\ref{SMDD2} ) where $\\bf c$ is replaced by $\\bf w$ and $\\lambda=\\frac{1}{\\nu M}$. \n%\\begin{align}\n%&\\min_{\\boldsymbol \\alpha}\n%\\sum_{m,l} \\alpha_m\\alpha_l \\, \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H}\n% %K({\\bf x}_i, {\\bf x}_j) \n% \\nonumber \\\\\n%\\mbox{with }\\mbox{constraints: } &\n%\\sum_{m=1}^M \\alpha_m=1, \\quad \n%{\\bf w}= \\sum_{m=1}^M \\alpha_m  \\mu_{\\mathbb{P}_m},  \\quad\n%0 \\le \\alpha_m \\le \\lambda\n%\\label{OCSMMprob} \n%\\end{align}\n%\n%To evaluate whether the behavior of the $m$th group is consistent with other groups, the parameter $\\hat{\\alpha}_1,\\dots,\\hat {\\alpha}_M$ and $\\hat \\rho$  \n%from training data. Following Equation (\\ref{Eqn:class}), an anomalous group is classified by\n%\\[ -  \\sum_{l=1}^M \\hat{\\alpha}_l K( \\hat{\\mu}_{\\mathbb{P}_m } ,\\hat{\\mu}_{\\mathbb{P}_l }  )  > -\\hat \\rho\n%\\]\n\n\n\n\n\n%Each empirical probability $\\hat {\\mathbb{P}}_m=\\frac{1}{N_m} \\sum_{k=1}^{N_m} I \\Big({{\\bf G}_m \\le x^{(i)}_k} \\Big)$ is associated with the true probability measure ${\\mathbb{P}}_m$. It is necessary to have a consistent estimator  for the true probability measure ${\\mathbb{P}}_m$ such that \n%  $ \\hat {\\mathbb{P}}_m \\to  {\\mathbb{P}_m}$ as $N_m \\to \\infty$. %In general, more accurate descriptions of group distributions are obtained for larger group sizes.\n%This results in  good empirical approximations on the RKHS, that is\n%  $||  \\mu_{\\mathbb{P}_m}  - \\mu_{\\hat {\\mathbb{P}}_m } || $ is bounded for $i=1,\\dots,M$.\n\n%It is also assumed that the random variable ${\\bf G}_m \\sim \\mathbb{P}_m$\n \n \n% $f:\\Omega \\to \\mathbb{R}$ \n%We examine the model for a smoothing function that maps data onto the feature space $\\mathcal{H}$ using $\\Phi:\\chi  \\to \\mathcal{H}$, ${\\bf x}_m \\mapsto \\Phi({\\bf x}_i )$. %which is applied as $\\Phi({\\bf x}_n)$ $n=1,\\dots,N$.  
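\n% A small numerical sketch of the empirical kernel similarity above and the resulting\n% OCSMM score; the groups, weights alpha, and threshold rho_hat are made up for\n% illustration, with a Gaussian RBF kernel assumed.\n% import numpy as np\n% def k(u, v, sigma=1.0):  # Gaussian RBF kernel\n%     return np.exp(-np.sum((u - v)**2) / (2*sigma**2))\n% def K_hat(G_m, G_l):  # empirical similarity, averaged over observation pairs\n%     return np.mean([k(x, y) for x in G_m for y in G_l])\n% rng = np.random.default_rng(0)\n% groups = [rng.normal(0, 1, size=(20, 2)) for _ in range(4)]\n% alpha = np.full(4, 1/4)  # illustrative weights; OCSMM estimates these\n% scores = [sum(a*K_hat(G_m, G_l) for a, G_l in zip(alpha, groups)) for G_m in groups]\n% rho_hat = 0.1  # illustrative threshold; a group with score < rho_hat is anomalous\n% print(np.array(scores) < rho_hat)\n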
\n%The spherical function $\\Phi$ subsequently defines a kernel function by the inner product between other mappings as $K({\\bf x}_i,{\\bf x}_j )=\\langle \\Phi({\\bf x}_i),\\Phi({\\bf x}_j)\\rangle$ where $K: \\chi \\times \\chi  \\to \\mathbb{R}$. We examine the class of kernel function such that $K({\\bf x}_i,{\\bf x}_i)=1$. The multivariate group data  ${\\bf G}_1,{\\bf G}_2,\\dots,{\\bf G}_i$ are transformed into the  kernel mean function $\\mu_{\\mathbb{P}_1},\\dots,\\mu_{\\mathbb{P}_i}$ . The mean map $\\mu_{\\mathbb{P}_i}$  is the feature representation of the $m$th group. \n %Other examples are described in Table \\ref{Tab:Kernel}.\n\n\n%\\begin{align}\n%\\mbox{sgn}\\big (\\sum_{l=1}^M \\alpha_l K( \\mu_{\\mathbb{P}_m } ,\\mu_{\\mathbb{P}_l }  )-\\rho ) \\big) =\\left\\{\n%                \\begin{array}{ll}\n%                 +1 , \\qquad \n%%                  & || \\mu_{\\mathbb{P}_t }   -{\\bf c}||^2 \\le R^2, \n%              &    \\mbox{Group exhibits regular behavior. }\\\\\n%                 -1, \\qquad   \n%                 %   & ||\\mu_{\\mathbb{P}_t }   -{\\bf c}||^2 > R^2 \n%                 & \\mbox{ Group exhibits anomalous behavior.}\n%                \\end{array}\n%              \\right.\n%\\end{align} \n\n%\\newpage\n%\n%The optimisation problem in (\\ref{Con1}) is solved by introducing Lagrange multipliers $\\boldsymbol\\alpha$.\n%This  results in\n%\\begin{align}\n%&\\sum_{i}\\alpha_i=1 \\mbox{ and }\n%\\bf c= \\sum_i \\alpha_i  \\mu_{\\mathbb{P}_i}  \\label{Con2} \\\\\n%& \\qquad\\mbox{with } 0 \\le \\alpha_i \\le \\frac{1}{\\nu M}\n%\\end{align}\n%The new constraint  on $\\alpha_i $ leads to the support measures $SM$ which describe the majority of the group behaviors, that is $SM= \\{\\mu_{\\mathbb{P}_i}  | 0 < \\alpha_i <\\lambda\\}$.\n\n%For any support measure $ \\mu_{\\mathbb{P}_k}  \\in SM$, the radius is calculated by\n%\\begin{align}\n%R^2= \\langle \\mu_{\\mathbb{P}_k}  , \\mu_{\\mathbb{P}_k}  \\rangle\n%-2\\sum_{i}\\alpha_i\\langle \\mu_{\\mathbb{P}_i} ,\\mu_{\\mathbb{P}_k}  \\rangle +\n%\\sum_{i,j} \\alpha_i\\alpha_j\\langle  \\mu_{\\mathbb{P}_i} ,\\mu_{\\mathbb{P}_j} \\rangle \\label{SM}\n%\\end{align}\n\n\n\n\n\n%iven that $\\mathbb{P}_i$ is defined on the probability space $(\\Omega,\\mathcal{F},\\mathcal{P})$,    %with $\\Omega$ is the sample space, the set of probabilities$\\mathcal{P} $ \n%%$\\mathcal{F}$.\n%we introduce a training probability $\\mathbb{Q}$  measured on $\\mathcal{F}$.\n% where an $\\alpha \\in (0,1)$ proportion of probability measures are concentrated and is defined as \n%$MV_\\alpha = \\mbox{argmin}_{G \\in \\mathcal{F}} \\Big\n%\\{  \\mathbb{P}(G) : \\, \\mathbb{P} (G) \\ge \\alpha \\Big\\}\n%$. \n%\n%probability measures are concentrated and is defined as \n%$MV_\\alpha = \\mbox{argmin}_{G \\in \\mathcal{F}} \\Big\n%\\{  \\mathbb{P}(G) : \\, \\mathbb{P} (G) \\ge \\alpha \\Big\\}\n  % Thus a group with mean embedding value $ \\mu_{\\mathbb{P}_i}  $ is contained within a hypersphere with the radius $\\sqrt{R^2 +\\xi_i}.$ \n%We quantify the anomalous behavior of a group by\n%In particular, we minimise the volume of an enclosing hypersphere with center $\\bf c$ and radius $R$ that captures an $\\alpha$ proportion of probability measures. 
The estimation of this MV-set is based on   mean embedding functions $\\mu_{\\mathbb{P}_1}, \\mu_{\\mathbb{P}_2}, \\dots,\\mu_{\\mathbb{P}_M}$ where \n%\\[\\hat {MV}({\\bf c},R) = \\big\\{ \\mathbb{P}_m \\in \\mathcal{P}  : \\, || \\mu_{\\mathbb{P}_i}  -{\\bf c}||^2 _\\mathcal{H}\\le R^2 \\big\\} \\]   \n%A strict radius boundary of $R$ means that in a training set, all of the groups are assumed to exhibit regular behavior.\n %The minimum volume set  can be written as\n%\\[\\hat {MV}({\\bf c},R,\\boldsymbol  \\xi) = \\big\\{ \\mathbb{P}_i \\in \\mathcal{P}  : \\, || \\mu_{\\mathbb{P}_i}  -{\\bf c}||^2 _\\mathcal{H}\\le R^2 +\\xi_i \\big\\} \\]   \n \n %\n\\subsubsection{ Support Measure Data Description (SMDD) } \n Guevara et al. \\cite{SMDD} propose SMDD for  distinguishing between regular and anomalous  group behaviors by learning the dominant behavior (one-class) from   training data using minimum volume (MV) sets. {  Since SMDD is  analogous to support  vector data description \\cite{SVDD}, we introduce a common MV-set for a vector ${\\bf x}$ where an enclosing hypersphere with center $\\bf c$ and radius $R$ is described by } \n %The discriminative functions for SMDD is based on fitting on minimum volume set on  training data.      \\[|| \\hat{\\mu}_{\\mathbb{P}_m} -  \\hat{\\bf c} ||^2 \\]\n%describes an minimum volume sphere\n\\[ ||{\\bf x}  -{\\bf c}||^2 \\le R^2 \\]\n Since anomalous groups may be present in a training set, a penalty term is introduced. The penalty parameter $\\lambda >0$ represents the trade-off between the volume of a  hypersphere and \n the expected proportion of anomalous groups in a training set. \n For a more flexible radius boundary, slack variables $\\big\\{\\xi_m \\big\\}_{m=1}^M \\ge 0$ are also introduced where SMDD %a MV-set %$\\bf c$ and $R$\n involves minimizing the objective function \n \\begin{align}\n &\\min_{(R,{\\bf c}, \\boldsymbol\\xi) }   R^2+\\lambda \\sum_{m=1}^M \\xi_m \\label{SMM1}  \\\\\n\\mbox{with constraints: } &||\\mu_{\\mathbb{P}_m}  -{\\bf c}||^2 _\\mathcal{H}\\le R^2  +\\xi_m  \\mbox{ and } \\xi_m  \\ge 0 \\mbox{ for } m=1,\\dots,M  \\nonumber\n\\end{align}\n The first term in Equation (\\ref{SMM1}) accounts for radius of a volume set while the second term accounts for less strict radius boundary for a MV set. \n\n%Estimating the MV-set %$\\bf c$ and $R$\n% involves minimizing over the objective function \n% \\begin{align}\n% &\\min_{(R,{\\bf c}, \\boldsymbol\\xi) }   R^2+\\lambda \\sum_{i=1}^M \\xi_i \\nonumber \\\\\n%\\mbox{with constraints: } &||\\mu_{\\mathbb{P}_i}  -{\\bf c}||^2 \\le R^2  +\\xi_i  \\mbox{ and } \\xi_i  \\ge 0 \\mbox{ for } i=1,\\dots,M  \\label{SMM1}\n%\\end{align}\n\n\n%\\[\\hat {MV}({\\bf c},R,\\boldsymbol \\kappa) = \\big\\{ \\mathbb{P}_i \\in \\mathcal{P}  : \\, \\mathbb{P}_i \\big( || \\mu_{\\mathbb{P}_i}  -{\\bf c}||^2 _\\mathcal{H}\\le R^2 \\big) \\ge 1-\\kappa_i \\big\\} \\]   \n\n\n\n%such the penalty allows a certain number of groups to exceed the boundaries of the $MV$-set.\n\n%The penalty term $C>0$ represents the trade-off between the volume of the hypersphere and classification error in the training set. In other words, the probability that a test point ${\\bf x}_k$ lies outside of $S({\\bf c},R) $ is bounded by the model parameter $C$. 
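\n% A small numerical sketch of the SMDD anomaly score ||mu_m - c||^2, expanded with\n% the kernel similarities as in OCSMM; the kernel matrix K, weights alpha, and\n% squared radius R2_hat are made up for illustration (see the key components below).\n% import numpy as np\n% rng = np.random.default_rng(1)\n% A = rng.random((4, 4)); K = np.eye(4) + 0.1*(A + A.T)/2  # made-up kernel matrix\n% alpha = np.full(4, 1/4)  # illustrative weights from the SMDD optimisation\n% # score_m = K_mm - 2 sum_l alpha_l K_ml + sum_{l,l'} alpha_l alpha_l' K_ll'\n% scores = np.diag(K) - 2*(K @ alpha) + alpha @ K @ alpha\n% R2_hat = 0.5  # illustrative squared radius; scores > R2_hat flag group anomalies\n% print(scores > R2_hat)\n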
\n\n\n\n \n\n%The optimisation problem in (\\ref{SMM1}) is solved by respectively introducing Lagrange multipliers $\\boldsymbol\\alpha$, % and $\\boldsymbol\\gamma$ for the two constraint inequalities.\n%%The Lagrangian function is \n%\\begin{align}\n%L(R^2,{\\bf c},{\\boldsymbol \\xi},{\\boldsymbol \\alpha},{\\boldsymbol \\gamma})= R^2+\\lambda\\sum_{i} \\xi_i -\\sum_i  \\alpha_i \\Big( R^2 +\\xi_i -||\\mu_{ \\mathbb{P}_i } -{\\bf c}||^2  \\Big) +\\sum_i \\gamma_i\\xi_i \n%\\end{align}\n%It is important to note that $L$ is minimised w.r.t. $(R,{\\bf c},\\boldsymbol\\xi )$  and maximised w.r.t. $(\\boldsymbol\\alpha,\\boldsymbol\\gamma).$\n %of the minimisation problem as \n%Since a kernel similarity function satisfies $ K({\\bf x}_i,{\\bf x}_i )=1$, \n%\\begin{align}\n%&\\min_{\\boldsymbol \\alpha}\n%\\sum_{i,j} \\alpha_i\\alpha_j \\, \\langle \\mu_{\\mathbb{P}_i}, \\mu_{\\mathbb{P}_j}\\rangle_\\mathcal{H}\n% %K({\\bf x}_i, {\\bf x}_j) \n% \\nonumber \\\\\n%\\mbox{with }\\mbox{constraints: } &\n%\\sum_{i=1}^M \\alpha_i=1, \\quad \n%{\\bf c}= \\sum_{i=1}^M \\alpha_i  \\mu_{\\mathbb{P}_i},  \\quad\n%0 \\le \\alpha_i \\le \\lambda\n%\\label{SMDD2} \n%\\end{align}\n%Suppose we want to evaluate whether group ${{\\bf G}_m}$ is anomalous. % A mapped probability  $\\mu_{\\mathbb{P}_m}$  is enclosed by \n%A  hypersphere is estimated with parameters $ \\hat{\\bf c}$ and $ \\hat R$ from the training data. \n%\n% In order to detect a group anomaly, we calculate   \nSimilar to OCSMM, we describe the key components of SMDD as follows.\n \\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})= {\\bf c}$: \\\\\n By combining kernel embedding functions  $\\mu_{\\mathbb{P}_1},\\dots, \\mu_{\\mathbb{P}_M}$, the center of an enclosing hypersphere is estimated by  \n\\[{\\bf c}= \\sum_{m=1}^M \\alpha_m  \\mu_{\\mathbb{P}_m}\\] \nThe value $\\bf c$ characterises group information on the training set with weights that are optimised in a different way than in OCSMM. A special case occurs when a spherical normalisation of mean embedding functions is applied with   \n\\[  \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H} \\mapsto  \\frac{ \\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H}} { \\sqrt{\\langle \\mu_{\\mathbb{P}_m}, \\mu_{\\mathbb{P}_m}\\rangle_\\mathcal{H}  \\langle \\mu_{\\mathbb{P}_l}, \\mu_{\\mathbb{P}_l}\\rangle_\\mathcal{H}}} \\]\nFrom Guevara et al. \\cite{SMDD},  SMDD and OCSMM are equivalent under a spherical transformation  that preserves the injectivity  of the Hilbert space mapping.\n \\end{enumerate}\n%\n \\begin{enumerate}[2.]\n\\item Characterisation function $f_2({\\bf G}_{test})=\\mu_{\\mathbb{P}_m}$: \\\\ \nSimilar to OCSMM, the $m$th group is characterised by a mean embedding function. However, even though the group characterisation in SMDD is identical to OCSMM, the weights are optimised based on different criteria. \\end{enumerate}\n %\n\\begin{enumerate}[3.]\n\\item Measure $ \\mathcal{D}\\big( f_1({\\bf G}_{train}), f_2({\\bf G}_{test})\\big )$: \\\\\nThe anomalous score for the $m$th group is calculated by \\[ || \\hat{\\mu}_{\\mathbb{P}_m} -  \\hat{\\bf c} ||^2_\\mathcal{H}  \\]\nwhere it is assumed that  $||  \\mu_{\\mathbb{P}_m}  - \\hat{\\mu}_{ {\\mathbb{P}}_m } || $ is bounded  for   groups $m=1,\\dots,M$. \n\\end{enumerate}\n%\n\\begin{enumerate}[4.]\n\\item Threshold $\\epsilon={R}^2$: \\\\ In SMDD, the estimated radius \n$\\hat{R}^2$   of an enclosing sphere provides a threshold for group deviations.  
Suppose that the $m'$th group has a support measure with $ 0 <\\alpha_{m'} < \\displaystyle \\lambda$ then the radius threshold is estimated as\n\\[ \\hat{R}^2  =  || \\hat{\\mu}_{\\mathbb{P}_ {m'} } -  \\hat{\\bf c} ||_\\mathcal{H}^2  \\]\n\\end{enumerate}\nA group anomaly is detected if it is not enclosed by a MV set with\n$ || \\hat{\\mu}_{\\mathbb{P}_{m} } -  \\hat{\\bf c} ||^2_\\mathcal{H} > \\hat{R}^2$. Thus  group deviations occur  outside of the boundaries of an estimated minimum volume set. \n\n%\n%{\\it Example:}\n%\n%We highlight key differences in SMDD and OCSMM through an example, consider the case where  group have identical probability distributions $\\mathbb{P}_i=\\mathbb{P}_j, \\; \\forall \\; 1 \\le i,j \\le M$. The objective function in this case simplifies to $min_{\\bf \\alpha } \\sum_{i=1}^M \\sum_{j=1}^M \\alpha_i \\alpha_j  $  which results in equal weights $\\alpha_i=1/M$ for $m=1,\\dots,M$ and the threshold $\\rho=1/M$.\n%The score of each group is $\n%\\mathcal{S}({\\bf G}_t)= 1 >  \\rho=1/M$\n% so all groups are classified with normal behavior. Similarly, the constraint in (\\ref{Con1}) reduces to $||\\mu_{\\mathbb{P}_i}  -\\mu_{\\mathbb{P}_j}||^2 \\le R^2  +\\xi_i$ where $R=0$. When all probability measures are identical, the radius is zero and all of the mean embedding maps are also support measures from Equation (\\ref{SM}).\n%\n%We further explore the characteristic kernels of Gaussian radial basis function (RBF) as it provides a general description of a distribution.  When the Gaussian RBF from Table \\ref{Tab:Kernel} is applied as the kernel $k$ in Equation (\\ref{KMF}), the algorithm OCSMM \\cite{OCSMM} has equivalencies to \n%the application of OCSVM. From Muandet et al. \\cite{OCSMM}, for identical bandwidths $\\sigma_i=\\sigma_j$ for $1 \\le i,j \\le N$, OCSMM corresponds to the OCSVM implemented on  spherically transformed data instances using the Gaussian RBF kernel. OCSMM is also equivalent to OCSVM applied on the kernel with variable bandwidth parameters.\n%\n%To discriminate between data behaviors a separating hyperplane is introduced as \\\\\n%\\begin{align}\n%\\mathcal{D} = \\big\\{ \\langle {\\bf c},\\mu_{\\mathbb{Q}}  \n% \\rangle\\ge \\rho  \\big\\} =  \\big\\{ \\mathbb{P} | \\langle\n%\\textstyle\\sum_{i}\\alpha_i\\langle  \\mu_{\\mathbb{P}_i} ,\\mu_{\\mathbb{Q}}  \\rangle\n%\\ge \\rho  \\big\\}\\label{DB}\n%\\end{align}\n\n\n% of statistical properties for group distributions.\n %An anomalous group is discovered when a combination of statistical properties substantially deviates from  the overall pattern of group distributions.\n % that are significantly different from  other groups.   \n % For example,  describes commonly used  statistical properties of distributions quantified by parametric or non-parametric measures  %such as location, scale, skewness, kurtosis and dependence.where a better description of groups is obtained for larger sample sizes.    \t Non-parametric measures are more appropriate when groups are contaminated with outliers or contain noisy values.  \n   %Other characterisations for statistical properties are also possible such as median absolute deviations  for  scale \\cite{MAD},   Kendall's rank correlation \\cite{kendall1938} as well as robust measures for skewness and kurtosis in Kim et al. 
{"text": "\\documentclass{article}\n\\usepackage[margin=0.8in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{hyperref} \n\\usepackage{float}\n\\usepackage{amsmath}\n\\usepackage{fixmath}\n\\usepackage{minted}\n\n\n\\author{Leandro Rizk \\\\ leo.rizk@mail.utoronto.ca \\\\ CTA200 -- University of Toronto}\n\\title{Assignment 2}\n\\date{8 May 2021}\n\n\\begin{document}\n\n\\maketitle\n\n\n\n\\section{Question 1}\n\n\\subsection{Methods}\n\n\\paragraph{}\nI wrote two functions: deriv1 uses method 1 and deriv2 uses method 2. Each function takes a callable (i.e. another function), a value for $x_0$, and a step size ($h$). I used each function to calculate the derivative of $sin(x)$ at $x$ = 0.1 for an array of 99 step sizes ranging from 0.01 to 0.99. The derivative of $sin(x)$ is simply $cos(x)$; so to get the relative error, I performed the following operation:\n\n$$ \\rm{relative\\ error} \\; = \\; \\left| \\frac{\\rm{numerical\\ approximation} \\, - \\, cos(0.1)}{cos(0.1)} \\right|  \\, ,$$\n\nwhere the numerical approximation is the array resulting from calling deriv1 or deriv2. Plotting both methods' relative error arrays vs. the step size array in log scale gives \\textbf{Figure \\ref{fig1}}.\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[scale=0.44]{Figure1.pdf}\n\\caption{Relative error of numerically evaluating the derivative of $sin(x)$ at $x$ = 0.1 in relation to step size \\textbf{\\label{fig1}}}\n\\end{center}\n\\end{figure}\n\n\\subsection{Analysis}\n\n\\paragraph{}\nThe relation between the relative error and the step size is roughly linear in logarithmic scales. This is especially true for method 2. A constant coefficient in log scale corresponds to a constant exponent in linear scale. In a sense, the slope in log scale represents the order of magnitude by which the step size affects the relative error. Since the slope in log scale for method 2 is about 1.99 ($\\approx$ 2), the relation between relative error and step size for this method should be quadratic. As seen in \\textbf{Figure \\ref{fig1}}, method 2 produces a consistently smaller error than method 1 over the range of chosen step sizes.\n\n\n\n\\section{Question 2}\n\n\\subsection{Methods}\n\n\\paragraph{}\nI created two arrays for $x$ and $y$, each containing 299 evenly spaced elements from -2 to 2 (exclusive). I then created a Python dictionary, with the key being every possible complex number $c$ obtained from combinations of $x$ and $y$ elements and the values being a list of the 1000 first iterations of $z$ (starting with 0) corresponding to that complex number $c$. To obtain \\textbf{Figure \\ref{fig2}}, I created an image of the complex plane represented by a 2-D numpy array of the shape (299, 299), initially making all values zero. I wrote a nested for loop that examined the dictionary for every complex number key represented in the 2-D array and that converted the image pixel value from zero to one if np.inf was found in the list of iterations of $z$ for that complex number. \\textbf{Figure \\ref{fig3}} was created in a similar fashion except, instead of converting the image pixel value from zero to one, the pixel value was converted to the index of the first occurrence of np.inf inside the list of iterations of $z$. If the pixel value remained zero, it was understood that $z$ did not diverge for that point in the complex plane. 
\subsection{Analysis}

\paragraph{}
The relation between the relative error and the step size is roughly linear in logarithmic scales, especially for method 2. A constant slope in log--log scale corresponds to a constant exponent in linear scale; in a sense, the slope in log scale represents the order of magnitude by which the step size affects the relative error. Since the slope in log scale for method 2 is about 1.99 ($\approx$ 2), the relation between relative error and step size for this method should be quadratic. As seen in \textbf{Figure \ref{fig1}}, method 2 produces a consistently smaller error than method 1 over the range of chosen step sizes.



\section{Question 2}

\subsection{Methods}

\paragraph{}
I created two arrays for $x$ and $y$, each containing 299 evenly spaced elements from -2 to 2 (exclusive). I then created a Python dictionary whose keys are every possible complex number $c$ obtained from combinations of $x$ and $y$ elements and whose values are lists of the first 1000 iterations of $z$ (starting with 0) corresponding to that complex number $c$. To obtain \textbf{Figure \ref{fig2}}, I created an image of the complex plane represented by a 2-D numpy array of shape (299, 299), initially making all values zero. I wrote a nested for loop that examined the dictionary for every complex number key represented in the 2-D array and converted the image pixel value from zero to one if np.inf was found in the list of iterations of $z$ for that complex number. \textbf{Figure \ref{fig3}} was created in a similar fashion except that, instead of converting the image pixel value from zero to one, the pixel value was converted to the index of the first occurrence of np.inf inside the list of iterations of $z$. If the pixel value remained zero, it was understood that $z$ did not diverge for that point in the complex plane. The resulting colour for zero in the plot was a distinct enough black that it isn't easily confused with a colour corresponding to rapid divergence (represented by purple).

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{Figure2.pdf}
\caption{Locations in the complex plane where $z$ diverges (shown in yellow) \textbf{\label{fig2}}}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.42]{Figure3.pdf}
\caption{Divergence of $z$ in the complex plane \textbf{\label{fig3}}}
\end{center}
\end{figure}

\subsection{Analysis}

\paragraph{}
\textbf{Figures \ref{fig2}} and \textbf{\ref{fig3}} show a Mandelbrot fractal. The edges of the fractal have a $z$ that diverges much more slowly than points farther from the fractal. The plot is symmetric along the real axis, so a complex number will have the same behaviour in $z$ as its complex conjugate.



\section{Question 3}

\subsection{Methods}

\paragraph{}
I wrote a SIRmodel function that returns $\frac{dS}{dt}$, $\frac{dI}{dt}$, and $\frac{dR}{dt}$ (as a vector) for a given vector $[S, I, R]$. This function is meant to be passed to the function ode from the module scipy.integrate along with initial conditions (which I called $SIR_0$) and time elements: initial time ($t_0$), end time ($t_{end}$), and time step ($dt$). I set $SIR_0$ = [999, 1, 0] (thereby setting $N$ = 1000), $t_0$ = 0, $t_{end}$ = 200, and $dt$ = 0.1 and did not later modify these. The parameters of the SIR model, $\beta$ and $\gamma$, were free to be reassigned before using ode. To simplify using ode, I also wrote a function solveode which streamlines the process: it takes the callable (SIRmodel), the initial conditions vector, and the time arguments, performs the necessary steps for using the function ode (with integrator dopri5), and returns the solution ($S(t)$, $I(t)$, $R(t)$) as a 2-D numpy array as well as the time array as a 1-D numpy array. This allowed me to reassign $\beta$ and $\gamma$ and easily solve the differential equations again. In order to make physical sense, $\gamma$ could not exceed 1.

\paragraph{}
In adding $D$, I assumed deaths were only a result of the disease (susceptible and recovered individuals could not die), so $D$ was only populated from $I$:

$$ \frac{dD}{dt} \; = \; \frac{\delta \, I}{N} \, , $$

for a death rate $\delta \geq 0$. This means that a term $- \frac{\delta \, I}{N}$ must be added to the definition of $\frac{dI}{dt}$. I repeated the same steps as above, defining the new SIRDmodel callable and passing it to solveode with a new four-element initial conditions vector ($SIRD_0$ = [999, 1, 0, 0]). In order to make physical sense, $\gamma + \delta$ could not exceed 1.
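\paragraph{}
A minimal sketch of the solver setup described above follows. The report does not reproduce my code, so the right-hand side shown here assumes the usual normalized SIR equations ($dS/dt = -\beta S I / N$, $dI/dt = \beta S I / N - \gamma I$, $dR/dt = \gamma I$); only the structure of solveode (scipy's ode with the dopri5 integrator, and its return values) is taken from the text.

\begin{minted}{python}
import numpy as np
from scipy.integrate import ode

N, beta, gamma = 1000.0, 2.0, 0.1

def SIRmodel(t, SIR):
    # returns [dS/dt, dI/dt, dR/dt] for the state vector [S, I, R]
    S, I, R = SIR
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def solveode(f, y0, t0, t_end, dt):
    # wraps scipy.integrate.ode with the dopri5 integrator and
    # returns (solution as a 2-D array, time as a 1-D array)
    r = ode(f).set_integrator('dopri5')
    r.set_initial_value(y0, t0)
    ts, ys = [t0], [np.asarray(y0)]
    while r.successful() and r.t < t_end:
        r.integrate(r.t + dt)
        ts.append(r.t)
        ys.append(r.y.copy())
    return np.array(ys), np.array(ts)

SIR, t = solveode(SIRmodel, [999.0, 1.0, 0.0], 0.0, 200.0, 0.1)
\end{minted}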
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.43]{Figure4.pdf}
\caption{Disease evolution in a population of 1000 individuals (SIR model) \textbf{\label{fig4}}}
\end{center}
\end{figure}

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.43]{Figure5.pdf}
\caption{Disease evolution in a population of 1000 individuals (SIRD model) \textbf{\label{fig5}}}
\end{center}
\end{figure}

\subsection{Analysis}

\paragraph{}
In \textbf{Figure \ref{fig4}}, a high infection rate and a slow recovery ($\beta$ = 2.0, $\gamma$ = 0.1) result in a very quick conversion of the entire susceptible population to infected and a more gradual conversion from infected to recovered. The same high infection rate but with a much quicker recovery rate ($\beta$ = 2.0, $\gamma$ = 0.8) quickly converts susceptible to infected and infected to recovered. Since infected individuals recover very quickly, this number cannot build up very much and, as a result, a considerable proportion of the population remains uninfected after the end of the course of the disease. A low infection rate and a slow recovery ($\beta$ = 0.2, $\gamma$ = 0.1) produce a similar result in terms of disease evolution, but on a much longer timescale. Finally, a lower infection rate and a rapid recovery ($\beta$ = 1.0, $\gamma$ = 0.8) produce very few active infections from susceptible individuals. Relatively few members of the population contract the disease by the end of its course.

\paragraph{}
In \textbf{Figure \ref{fig5}}, adding the possibility of death simply splits the "removed" individuals into two categories: recovered and deceased. Of note, the death rate ($\delta$) must always be considered in relation to the recovery rate ($\gamma$) and vice versa. For example, in the case of a very small value of $\delta$, a $\gamma$ of zero still means that every infected individual will eventually die of the disease (100\% mortality). For any time $t$, the sum of susceptible, infected, and removed (recovered plus deceased) individuals is always equal to $N$ (1000).



\end{document}
{"text": "\\documentclass{article}\n\n\\usepackage[letterpaper, margin=1in, footskip=0.25in]{geometry}\n\\usepackage{fancyhdr}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{wrapfig}\n\\usepackage{csquotes}\n\\usepackage{subcaption}\n\\usepackage{amssymb}\n\\usepackage{commath}\n\n\\MakeOuterQuote{\"}\n\n\\newcounter{example}% Counter\n\\newcommand{\\ex}{\\stepcounter{example} \\paragraph{Example \\theexample}}\n\n\\title{Classifying One-variable Functions by Integrability}\n\\author{Steven Labalme}\n\\date{\\today}\n\n\\begin{document}\n\n\n\n\n\\pagenumbering{gobble}\n\\maketitle\n\\newpage\n\n\\pagenumbering{roman}\n\\tableofcontents\n\\listoffigures\n\\listoftables\n\\newpage\n\n\\pagenumbering{arabic}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rfoot{Labalme \\thepage}\n\\renewcommand{\\headrulewidth}{0pt}\n\n\\setcounter{secnumdepth}{0}\n\\begin{center}\n\\section{Abstract}\n\\end{center}\nThe purpose of this paper is to separate the various functions explored at the Calculus I-II level by their techniques of integration. Integral calculus is roughly one-half of Calculus, so it is important to understand how to integrate a wide variety of functions, especially since techniques vary from the one-step application of rules of integration to complicated approximations. This report should provide a detailed analysis of various functions that can be used by people new to integration as well as returning players. It could be used as a supplemental reading to reinforce ideas and/or cast them in a different light.\\par\nAn example-solution strategy is taken to convey the differences in integrability. For time's sake, only one function of each type (18 types, total) is explored. The reasons that past methods do not work are often briefly explored at the beginning of each example. However, it is more generally left to the reader to be able to classify a function that they may be faced with into one of the categories listed.\n\\newpage\n\n\n\n\\setcounter{secnumdepth}{3}\n\\section{Algebraically Integrable Functions}\n\\subsection{Functions of \\emph{x}}\n\\subsubsection{Simple}\n\\paragraph{Introduction} It is simplest to integrate functions such as those in this section via antiderivatives. Integration of these functions generally requires rules of integration directly derived from rules of differentiation without a great deal of manipulation beforehand. Therefore, it is a logical starting place for this exploration of the different techniques of integration.\n\\ex This example is concerned with integrating the following function.$$y=3x^2+2x+1$$This can be accomplished with two rules of integration --- the sum and difference rule, and the antipower rule. The sum and difference rule states that the integral of the sum or difference of two functions can be evaluated as the integral of the two functions separately. Symbolically,$$\\int_a^b (f(x)\\pm g(x))\\, dx=\\int_a^b f(x)\\, dx\\pm\\int_a^b g(x)\\, dx$$The antipower rule states that the integral of a coefficient times a variable to a power is the coefficient divided by the original power plus one times the variable to the power of the original power plus one. Symbolically,$$\\int_a^b kx^n\\, dx=\\left[\\frac{kx^{n+1}}{n+1}\\right]_a^b$$\\par\nLet's begin integrating. 
\ex This example is concerned with integrating the following function.$$y=x^{-1}$$This is a deceptively complicated problem. At first glance, it seems like the antipower rule is the simple solution, but employing it (as follows) would require dividing by zero.
\begin{align*}
    y &= \int x^{-1}\, dx\\
      &= \left[\frac{x^{-1+1}}{-1+1}\right]\\
      &= \frac{x^0}{0}\\
      &= \frac{1}{0}
\end{align*}
Instead, the solution requires a new rule of integration --- that the integral of the differential of a variable over that variable is the natural log of the absolute value of that variable. Symbolically,$$\int_a^b \frac{dx}{x}=\left[\ln|x|\right]_a^b$$A short proof follows (note that the absolute value of $x$ is added in at the end to extend the domain of the integral).\par
\begin{align*}
    y &= \ln x\\
    \text{e}^y &= x\\
    \frac{d}{dx}\left(\text{e}^y\right) &= \frac{d}{dx}\left(x\right)\\
    \text{e}^y\frac{dy}{dx} &= 1\\
    \frac{dy}{dx} &= \frac{1}{\text{e}^y}\\
    \frac{dy}{dx} &= \frac{1}{x}\\
    dy &= \frac{1}{x}\, dx\\
    \int dy &= \int \frac{1}{x}\, dx\\
    y &= \int \frac{dx}{x}\\
    \ln x &= \int \frac{dx}{x}\\
    \ln |x| &= \int \frac{dx}{x}
\end{align*}\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int x^{-1}\, dx
\end{equation*}
Employ the definition of negative exponents.
\begin{equation*}
    y=\int \frac{1}{x}\, dx
\end{equation*}
Multiply fractions.
\begin{equation*}
    y=\int \frac{dx}{x}
\end{equation*}
Simplify and add the constant of integration to yield the following, final integrated equation (equation 2).
\begin{equation}
    y=\ln |x|+C
\end{equation}
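The same symbolic check applies here (again assuming sympy is available). sympy works without the absolute value, returning $\ln(x)$ rather than $\ln|x|$, so the two answers agree for $x>0$.
\begin{verbatim}
from sympy import symbols, integrate

x = symbols('x')
# antiderivative of the integrand from Example 2
print(integrate(x**-1, x))  # prints log(x)
\end{verbatim}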
\paragraph{Summary} As Examples 1 and 2 have shown, polynomial functions of $x$ can be integrated via rules directly related to the sum and difference and power rules of derivatives, as well as the fact that $\text{e}^x$ is its own derivative. The functions subsequently explored in Section 1.1.2 cannot be integrated with altered derivative rules and require some algebraic manipulation to transform them into such a state.
\newpage

\subsubsection{Advanced Algebraic Techniques}
\paragraph{Introduction} Though some functions, as seen in the previous section, are readily integrable using rules of integration, others need some algebraic manipulation before such rules can be employed. The functions in this section are deceptively simple when first looking at them, yet they require some inspired techniques to transform them into such a state that one of the rules from the previous section can be used. However, for the purposes of this section, all algebraic manipulation can be fully understood by someone who has not studied calculus.
\ex This example is concerned with integrating the following function.$$y=\csc(x)$$Many trigonometric functions have integrals consisting of a trigonometric expression within the natural log function, i.e.\ they are derived by manipulating the function into the form of derivative over function, or$$\frac{dv}{v}$$For example, to take the integral of $\cot(x)$, transform it into$$\frac{\cos(x)}{\sin(x)}$$which, according to the rule used in Example 2, is equal to$$\ln|\sin(x)|+C$$when integrated if $u=\sin(x)$ and $du=\cos(x)\, dx$.\par
However, some functions, like the cosecant function which is the focus of this example, require more algebraic work than the recognition of an alternate form of the function ($\frac{1}{\sin(x)}$ is no more integrable than $\csc(x)$). The technique necessary, in this case, is multiplying by a clever form of 1. By the definition of inverses, any number divided by itself is 1. This holds for functions as well, i.e.\ any function divided by itself at any point within its domain (where it is nonzero) is equal to 1.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \csc(x)\, dx
\end{equation*}
Multiply the integrand by a form of 1. In this case, the form is the as-of-yet undefined function, $u$, divided by itself.
\begin{equation}
    y=\int \csc(x)\times\frac{u}{u}\, dx
\end{equation}\par
This function, $u$, must have some properties to work in this case. $u$ must be a function that, when multiplied by $\csc(x)$, becomes its derivative. Symbolically,$$\frac{du}{dx}=\csc(x)\times u$$Though it is theoretically possible to solve this differential equation, that would require finding the integral of $\csc(x)$ with respect to $x$, which would put us back where we started. So we need a different technique. Let's look at some trigonometric functions and their derivatives and see if that helps. See Table \ref{tab:1} for such a list.
\begin{table}[h!]
    \vspace{-.7em}
    \centering
    \begin{tabular}{|c|c|c|c|c|c|c|}
        \hline
        \textbf{Function} & $\sin(x)$ & $\cos(x)$ & $\tan(x)$ & $\csc(x)$ & $\sec(x)$ & $\cot(x)$\\
        \hline
        \textbf{Derivative} & $\cos(x)$ & $-\sin(x)$ & $\sec^2(x)$ & $-\csc(x)\cot(x)$ & $\sec(x)\tan(x)$ & $-\csc^2(x)$\\
        \hline
    \end{tabular}
    \caption{Derivatives of trigonometric functions.}
    \label{tab:1}
\end{table}\par
Look for patterns that are concerned with the cosecant function. Since the derivatives of sine, cosine, tangent, and secant have nothing to do with cosecant, throw them out. Between cosecant and cotangent, a pattern does emerge. The opposite of cotangent multiplied by cosecant is the derivative of cosecant. Similarly, the opposite of cosecant multiplied by cosecant is the derivative of cotangent. Concisely, both $-\cot(x)$ and $-\csc(x)$, when multiplied by $\csc(x)$, become the derivative of the other function times $-1$. From this, we can infer that some combination of cosecant and cotangent constitutes $u$; we just need to \emph{distribute} cosecant to the two parts (and the recognition of the need to distribute is key).\par
Let's try defining $u$ as $\csc(x)+\cot(x)$. This expression, multiplied by cosecant and simplified via the distributive property, is $\csc^2(x)+\csc(x)\cot(x)$. Each of these expressions is the opposite of the derivative of one of the initial expressions. Luckily, $-1$ can also be distributed. Unfortunately, $-1\neq1$, so multiplying by it does not preserve equality. Fortunately, another $-1$ can be multiplied, but outside of the integral, to solve this problem ($-1\times-1=1$). Let's substitute the new definition of $u$, account for the $-1$, and continue integrating. Pick up from equation 3.
\begin{align*}
    y &= \int \csc(x)\times\frac{\csc(x)+\cot(x)}{\csc(x)+\cot(x)}\, dx\\
    &= -\int -\csc(x)\times\frac{\csc(x)+\cot(x)}{\csc(x)+\cot(x)}\, dx\\
    &= -\int \frac{-\csc^2(x)-\csc(x)\cot(x)}{\csc(x)+\cot(x)}\, dx
\end{align*}
At this point, it is clear that we have a situation of $\frac{dv}{v}$, so the rule from Example 2 can be invoked. Simplify and add the constant of integration to yield the following, final integrated equation (equation 4).
\begin{equation}
    y=-\ln|\csc(x)+\cot(x)|+C
\end{equation}
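Equation 4 can also be verified by differentiation. Rather than relying on a computer algebra system to reproduce this exact form, the snippet below (assuming sympy) differentiates the claimed antiderivative and confirms that the difference from $\csc(x)$ simplifies to zero.
\begin{verbatim}
from sympy import symbols, csc, cot, log, diff, simplify

x = symbols('x')
F = -log(csc(x) + cot(x))  # the claimed antiderivative from equation 4
print(simplify(diff(F, x) - csc(x)))  # should print 0
\end{verbatim}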
\ex This example is concerned with integrating the following function.$$y=\sqrt{9-x^2}$$This may be recognizable to the reader as the function defining the upper half of a circle of radius 3. Though it may not look very complicated, the second-degree polynomial under the radical without any complementary first-degree polynomial as a coefficient means that the reverse-chain rule cannot be used. Instead, a series of properties of trigonometric functions will be used to simplify this equation to a more integrable form.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \sqrt{9-x^2}\, dx
\end{equation*}
Define $x$ as a function of another variable, $t$. This is allowed for the same reason that you can define a function in terms of $x$, as was done for $u$ in the last example. Think about it this way --- say we define $v$ as $x^2$. Nothing says we can't do that, and $v$ would look like a parabola when graphed and have all the same properties despite the fact that we chose $v$ instead of $y$ to be our dependent variable. In fact, the choice of $y$, itself, is just as arbitrary. Anyway, now that $v=x^2$, according to the rules of algebra, $x=\sqrt{v}$. It is this back-and-forth switching that allows the definition of $x$ as a function of $t$.\par
Regardless, in this case, it is necessary to define $x=3\sin(t)$. Though this may seem arbitrary for now, it will allow us to entirely eliminate the radical. Also note that if $x=3\sin(t)$, $\frac{dx}{dt}=3\cos(t)$ and, therefore, $dx=3\cos(t)\, dt$. Make these two substitutions.
\begin{equation*}
    y=\int \sqrt{9-(3\sin(t))^2} \times 3\cos(t)\, dt
\end{equation*}
Simplify.
\begin{equation*}
    y=\int \sqrt{9-9\sin^2(t)} \times 3\cos(t)\, dt
\end{equation*}
Factor out the 9.
\begin{equation*}
    y=\int \sqrt{9(1-\sin^2(t))} \times 3\cos(t)\, dt
\end{equation*}
Move the 9 outside of the radical by taking its square root and substitute $\cos^2(t)$ for $1-\sin^2(t)$.
\begin{equation*}
    y=\int 3\sqrt{\cos^2(t)} \times 3\cos(t)\, dt
\end{equation*}
Simplify (taking $\cos(t)\geq 0$, which holds on the relevant domain $-\frac{\pi}{2}\leq t\leq\frac{\pi}{2}$).
\begin{align*}
    y &= \int 3\cos(t) \times 3\cos(t)\, dt\\
    &= \int 9\cos^2(t)\, dt
\end{align*}
Move the constant outside of the integral sign.
\begin{equation*}
    y=9\int \cos^2(t)\, dt
\end{equation*}
Now the problem is integrating $\cos^2(t)$.
Fortunately, it is possible to manipulate this expression into a more integrable form, as follows.
\begin{align*}
    \cos^2(t) &= \frac{2\cos^2(t)}{2}\\
    &= \frac{\cos^2(t)+\cos^2(t)}{2}\\
    &= \frac{1-\sin^2(t)+\cos^2(t)}{2}\\
    &= \frac{1+\cos(t)\cos(t)-\sin(t)\sin(t)}{2}\\
    &= \frac{1+\cos(t+t)}{2}\\
    &= \frac{1+\cos(2t)}{2}\\
    &= \frac{1}{2}+\frac{\cos(2t)}{2}\tag{5}
\end{align*}
Substitute equation 5 in where we left off.
\begin{equation*}
    y=9\int \left(\frac{1}{2}+\frac{\cos(2t)}{2}\right)\, dt
\end{equation*}
Employ the sum and difference rule.
\begin{equation*}
    y=9\left(\int \frac{1}{2}\, dt+\int \frac{\cos(2t)}{2}\, dt\right)
\end{equation*}
Move the constants out of the integrand.
\begin{equation*}
    y=9\left(\frac{1}{2}\int dt+\frac{1}{2}\int \cos(2t)\, dt\right)
\end{equation*}
Integrate. Use the rules that $\int dt=t$ and $\int \cos(at)\, dt=\frac{1}{a}\sin(at)$, where $a$ is a constant.
\begin{equation*}
    y=9\left(\frac{1}{2} \times t+\frac{1}{2} \times \frac{1}{2}\sin(2t)\right)
\end{equation*}
Simplify.
\begin{align*}
    y &= 9\left(\frac{1}{2}t+\frac{1}{4}\sin(2t)\right)\\
    &= 9\left(\frac{t}{2}+\frac{\sin(2t)}{4}\right)\\
    &= \frac{9t}{2}+\frac{9\sin(2t)}{4}
\end{align*}
It might be tempting to think that we've reached the final, integrated equation, but the equation above is still in terms of $t$ and not $x$. Since $x=3\sin(t)$, rearrange to $t=\arcsin\left(\frac{x}{3}\right)$. Substitute.
\begin{equation*}
    y=\frac{9\arcsin\left(\frac{x}{3}\right)}{2}+\frac{9\sin\left(2\arcsin\left(\frac{x}{3}\right)\right)}{4}
\end{equation*}
Rewrite the doubled argument as a sum (definition of multiplication).
\begin{equation*}
    y=\frac{9\arcsin\left(\frac{x}{3}\right)}{2}+\frac{9\sin\left(\arcsin\left(\frac{x}{3}\right)+\arcsin\left(\frac{x}{3}\right)\right)}{4}
\end{equation*}
Employ the double-argument property for the sine function.
\begin{equation*}
    y=\frac{9\arcsin\left(\frac{x}{3}\right)}{2}+\frac{9 \times 2\sin\left(\arcsin\left(\frac{x}{3}\right)\right)\cos\left(\arcsin\left(\frac{x}{3}\right)\right)}{4}
\end{equation*}
Simplify.
\begin{align*}
    y &= \frac{9\arcsin\left(\frac{x}{3}\right)}{2}+\frac{9\sin\left(\arcsin\left(\frac{x}{3}\right)\right)\cos\left(\arcsin\left(\frac{x}{3}\right)\right)}{2}\\
    &= \frac{9\arcsin\left(\frac{x}{3}\right)+9\sin\left(\arcsin\left(\frac{x}{3}\right)\right)\cos\left(\arcsin\left(\frac{x}{3}\right)\right)}{2}
\end{align*}
To simplify the nested trigonometric functions, refer to Figure \ref{fig:1}, below, for a picture of the geometric meaning of each nested function, from which new expressions can be written.

\begin{wrapfigure}[11]{r}{0.45\textwidth}
    \centering
    \includegraphics[width=0.35\textwidth]{Blender/Class1Var-NestedTrig-f_0001.png}
    \caption{\label{fig:1}Nested trigonometric functions.}
\end{wrapfigure}

From Figure \ref{fig:1}, simplify the nested trigonometric functions. Use the facts, read off of the triangle, that $\sin\left(\arcsin\left(\frac{x}{3}\right)\right)=\frac{x}{3}$ and $\cos\left(\arcsin\left(\frac{x}{3}\right)\right)=\frac{\sqrt{9-x^2}}{3}$.
\begin{equation*}
    y=\frac{9\arcsin\left(\frac{x}{3}\right)+9\times\frac{x}{3}\times\frac{\sqrt{9-x^2}}{3}}{2}
\end{equation*}
Simplify and add the constant of integration to yield the following, final integrated equation (equation 6).
\begin{align*}
    y &= \frac{9\arcsin\left(\frac{x}{3}\right)+9\times\frac{x\sqrt{9-x^2}}{9}}{2}\\
    &= \frac{9\arcsin\left(\frac{x}{3}\right)+x\sqrt{9-x^2}}{2}+C\tag{6}
\end{align*}
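As with equation 4, equation 6 can be checked by differentiating it (assuming sympy); the difference from the original integrand should simplify to zero.
\begin{verbatim}
from sympy import symbols, sqrt, asin, diff, simplify

x = symbols('x')
F = (9*asin(x/3) + x*sqrt(9 - x**2))/2  # equation 6 without the constant
print(simplify(diff(F, x) - sqrt(9 - x**2)))  # should print 0
\end{verbatim}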
\newpage
\paragraph{Summary} In this section, the cosecant function and the circle function were integrated. Both have deceptively simple expressions for the amount of work required, yet, after having explored them, it should be clear why each and every step is necessary. Some functions, like the cosecant, require guesswork with only the slightest logical foundation. Others, like the circle function, have a process where every step is intertwined and all shoot for the same goal. Every choice is very careful. In the next section, the functions are in such a state that no amount of algebraic manipulation will make them easier, and some calculus techniques must be used for simplification.
\newpage

\subsubsection{Advanced Calculus Techniques}
\paragraph{Introduction} While a great many functions can be integrated directly or after some algebraic manipulation, there are still more functions that no amount of algebraic work will turn into an integrable form. These functions require specialized calculus rules in order to rewrite them in a simpler form. These techniques will be justified, and then a variety of their implementations will be explored.
\ex This example is concerned with integrating the following function.$$y=\frac{2}{3+5x}$$This process should bring into focus some techniques that have been flirted with in previous examples, such as defining variables as functions and functions in terms of variables. Formally, this process is known as integration by substitution, and it is the purpose of this example to explore this technique in more depth.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \frac{2}{3+5x}\, dx
\end{equation*}
Define the function $u$ as $u=3+5x$. Substitute.
\begin{equation*}
    y=\int \frac{2}{u}\, dx
\end{equation*}
Recall from Example 4 the rather dual nature of functions and variables. Even though $u$ is a defined function, it is still possible to integrate with respect to it. Unfortunately, the variable of integration is still $x$. Somehow, there must be a way to define $dx$ in terms of $du$. Luckily, a relation between $du$ and $dx$ can be found by differentiating $u$ (as a function) with respect to $x$, as follows.
\begin{align*}
    u &= 3+5x\\
    \frac{d}{dx}(u) &= \frac{d}{dx}(3+5x)\\
    \frac{du}{dx} &= \frac{d}{dx}(3)+\frac{d}{dx}(5x)\\
    &= 0+5\\
    &= 5\\
    du &= 5\, dx\\
    dx &= \frac{du}{5}
\end{align*}
Substitute the final line above into where we left off with the integrable equation.
\begin{equation*}
    y=\int \frac{2}{u}\times \frac{du}{5}
\end{equation*}
Multiply fractions; factor out the constant.
\begin{equation*}
    y=\frac{2}{5}\int \frac{du}{u}
\end{equation*}
Employ the rule of integration from Example 2.
\begin{equation*}
    y=\frac{2}{5}\ln|u|
\end{equation*}
Return the substitution and add the constant of integration to yield the following, final integrated equation (equation 7).
\setcounter{equation}{6}
\begin{equation}
    y=\frac{2}{5}\ln|3+5x|+C
\end{equation}
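The same differentiation check works for equation 7 (again assuming sympy; \texttt{Rational} keeps the coefficient exact).
\begin{verbatim}
from sympy import symbols, log, Rational, diff, simplify

x = symbols('x')
F = Rational(2, 5)*log(3 + 5*x)  # equation 7 without the constant
print(simplify(diff(F, x) - 2/(3 + 5*x)))  # should print 0
\end{verbatim}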
\ex This example is concerned with integrating the following function.$$y=\frac{1}{4+x^2}$$
The first thought of many might be to define a function, $u$, as $u=4+x^2$ and integrate with respect to $u$, but differentiating $u$ to find $dx$ in terms of constants and $du$ fails because$$\frac{du}{dx}=2x$$so a substitution would still contain the variable, $x$, and we would have a multivariable integrand that cannot be evaluated. Therefore, an alternate solution is necessary.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \frac{1}{4+x^2}\, dx
\end{equation*}
Define $x$ as $x=2\tan(u)$. The logic will soon become apparent for reasons similar to the integration of the circle equation in Example 4. Substitute.
\begin{equation*}
    y=\int \frac{1}{4+(2\tan(u))^2}\, dx
\end{equation*}
Simplify.
\begin{equation*}
    y=\int \frac{1}{4+4\tan^2(u)}\, dx
\end{equation*}
We also need to define $dx$ in terms of $du$. To do this, differentiate this definition of $x$ with respect to $u$. This can be easily done with the derivative rules in Table \ref{tab:1}, but a short proof follows for clarity.
\begin{align*}
    x &= 2\tan(u)\\
    \frac{d}{du}(x) &= \frac{d}{du}(2\tan(u))\\
    \frac{dx}{du} &= 2\times\frac{d}{du}\left(\frac{\sin(u)}{\cos(u)}\right)\\
    &= 2\times\left(\frac{\cos(u)\times\cos(u)-\sin(u)\times -\sin(u)}{\cos^2(u)}\right)\\
    &= 2\times\left(\frac{\cos^2(u)+\sin^2(u)}{\cos^2(u)}\right)\\
    &= 2\times\left(\frac{1}{\cos^2(u)}\right)\\
    &= 2\sec^2(u)\\
    dx &= 2\sec^2(u)\, du
\end{align*}
Substitute the final line above into where we left off with the integrable equation.
\begin{equation*}
    y=\int \frac{1}{4+4\tan^2(u)}\times 2\sec^2(u)\, du
\end{equation*}
Multiply fractions; factor out the constant.
\begin{equation*}
    y=\frac{1}{2}\int \frac{\sec^2(u)}{1+\tan^2(u)}\, du
\end{equation*}
Employ the trigonometric identity that $\tan^2(x)+1=\sec^2(x)$ (note that this can be derived by dividing the Pythagorean trigonometric identity --- $\sin^2(x)+\cos^2(x)=1$ --- by $\cos^2(x)$).
\begin{equation*}
    y=\frac{1}{2}\int \frac{\sec^2(u)}{\sec^2(u)}\, du
\end{equation*}
Simplify.
\begin{equation*}
    y=\frac{1}{2}\int 1\, du
\end{equation*}
Employ the antipower rule.
\begin{equation*}
    y=\frac{1}{2}u
\end{equation*}
Find $u$ in terms of $x$, as follows.
\begin{align*}
    x &= 2\tan(u)\\
    \frac{x}{2} &= \tan(u)\\
    \arctan\left(\frac{x}{2}\right) &= u
\end{align*}
Substitute the final line above into where we left off with the integrable equation and add the constant of integration to yield the following, final integrated equation (equation 8).
\begin{equation}
    y=\frac{1}{2}\arctan\left(\frac{x}{2}\right)+C
\end{equation}
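Once more, equation 8 passes the differentiation check (assuming sympy).
\begin{verbatim}
from sympy import symbols, atan, Rational, diff, simplify

x = symbols('x')
F = Rational(1, 2)*atan(x/2)  # equation 8 without the constant
print(simplify(diff(F, x) - 1/(4 + x**2)))  # should print 0
\end{verbatim}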
\ex This example is concerned with integrating the following function.$$y=x\cos(x)$$
This is another equation that may seem deceptively simple. $x$ and $\cos(x)$ are both fairly straightforward to integrate and have their own, respective rules of integration. However, when they are multiplied, the process becomes more difficult. In the same way that the product rule of derivatives is significantly more complicated than the sum and difference rule of derivatives, integration by parts (which is used to integrate the product of functions) is more complicated than the sum and difference rule of integrals.\par
Let's begin by deriving this rule. This is fairly straightforward from the definition of the product rule, but a short proof follows, regardless ($u$ and $v$ will stand in for two functions that are both differentiable and integrable).
\begin{align*}
    \frac{d}{dx}(uv) &= \frac{d}{dx}(u)\times v+u\times\frac{d}{dx}(v)\\
    d(uv) &= du\times v+u\times dv\\
    \int d(uv) &= \int (v\, du+u\, dv)\\
    uv &= \int v\, du + \int u\, dv\\
    uv-\int v\, du &= \int u\, dv\\
    \int u\, dv &= uv-\int v\, du\tag{9}
\end{align*}
Equation 9 above is the final integration by parts formula. However, it may not be immediately obvious how to apply such a formula. As you will recall, $u$ and $v$ are "dummy" functions that can be set equal to whatever is necessary to answer the problem. Likewise, and this is key, $du$ and $dv$ are \emph{related} dummy functions. To elaborate, $du=u'\, dx$ and $dv=v'\, dx$ where $u'$ and $v'$ are the derivatives of $u$ and $v$. Now it may be coming into focus. If we set one function equal to $u$, and the other function times $dx$ equal to $dv$, we can solve for $du$ and $v$ and rewrite the integrable equation. It may just be that $\int v\, du$ is an easier integral to evaluate, allowing the final antiderivative to be found.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int x\cos(x)\, dx
\end{equation*}
Define $u$ as $u=x$ and $dv$ as $dv=\cos(x)\, dx$. In this case, $du=1\, dx=dx$ and $v=\sin(x)$ by derivative and integral rules. Rewrite the integrable equation as the right side of equation 9.
\begin{equation*}
    y=x\sin(x)-\int \sin(x)\, dx
\end{equation*}
Evaluate the integral.
\begin{equation*}
    y=x\sin(x)-[-\cos(x)]
\end{equation*}
Simplify and add the constant of integration to yield the following, final integrated equation (equation 10).
\stepcounter{equation}
\begin{equation}
    y=x\sin(x)+\cos(x)+C
\end{equation}
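Equation 10 is easy to confirm by hand (differentiate and watch the $\sin(x)$ terms cancel), but the now-familiar sympy check is included below for completeness.
\begin{verbatim}
from sympy import symbols, sin, cos, diff, simplify

x = symbols('x')
F = x*sin(x) + cos(x)  # equation 10 without the constant
print(simplify(diff(F, x) - x*cos(x)))  # should print 0
\end{verbatim}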
\ex This example is concerned with integrating the following function.$$y=\ln(x)$$
This is certainly a simple function, yet the method of integrating it is not immediately clear. Certainly it cannot be simplified much. There's not a clear expression to be substituted. It is, in fact, integration by parts that is necessary in this case.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \ln(x)\, dx
\end{equation*}
To use integration by parts requires the product of two functions, but it appears as if there is only one function present --- $\ln(x)$. However, by the identity property, every function/variable/etc.\ is equal to itself times 1. Generally, this is not shown, but we will show it now.
\begin{equation*}
    y=\int \ln(x)\times 1\, dx
\end{equation*}
Define $u$ as $u=\ln(x)$ and $dv$ as $dv=1\, dx$. In this case, $du=\tfrac{1}{x}\, dx$ and $v=x$ by derivative and integral rules. Rewrite the integrable equation as the right side of equation 9.
\begin{equation*}
    y= \ln(x)\times x-\int x\times \frac{1}{x}\, dx
\end{equation*}
Simplify.
\begin{equation*}
    y= x\ln(x)-\int 1\, dx
\end{equation*}
Evaluate the integral.
\begin{equation*}
    y= x\ln(x)-[x]
\end{equation*}
Simplify and add the constant of integration to yield the following, final integrated equation (equation 11).
\begin{equation}
    y= x\ln(x)-x+C
\end{equation}
\ex This example is concerned with integrating the following function.$$y=\text{e}^x\cos(x)$$
This clearly necessitates integration by parts, as it is the product of two functions. However, integration by parts relies on one function (typically the one being differentiated) getting simpler for the second integral. For example, in Example 7, the derivative of $x$ is 1, so the second integral needed only trigonometric rules of integration.\par
In this case, $\text{e}^x$ is its own derivative and never changes, while $\cos(x)$ will cycle through positive and negative variations of itself and $\sin(x)$. Therefore, rewrites will not reveal anything \emph{directly} fruitful, and will eventually simplify back to the original integrable equation. However, an additional strategy will become apparent after the process starts, so we will begin by integrating as we would any function to be integrated by parts.\par
Let's begin integrating. Write the integrable equation as follows.
\begin{equation*}
    y=\int \text{e}^x\cos(x)\, dx
\end{equation*}
It does not particularly matter what we define as $u$ and what we define as $dv$, so let's arbitrarily define $u$ as $u=\text{e}^x$ and $dv$ as $dv=\cos(x)\, dx$. In this case, $du=\text{e}^x\, dx$ and $v=\sin(x)$ by derivative and integral rules. Rewrite the integrable equation as the right side of equation 9.
\begin{equation*}
    y=\text{e}^x\sin(x)-\int \sin(x)\text{e}^x\, dx
\end{equation*}
It appears as if we've accomplished nothing, but in fact, we are on the right track. $\int \sin(x)\text{e}^x\, dx$ is no easier than $\int \text{e}^x\cos(x)\, dx$, but that does not matter. Let's run integration by parts again. This time, however, we cannot choose arbitrarily because, if we chose the opposite of the following assignments, the equation would simplify directly back to where we started. Define $u$ as $u=\text{e}^x$ and $dv$ as $dv=\sin(x)\, dx$. In this case, $du=\text{e}^x\, dx$ and $v=-\cos(x)$. Rewrite the integral in the above equation as the right side of equation 9 (in parentheses to deal with the subtraction).
\begin{equation*}
    y=\text{e}^x\sin(x)-\left(\text{e}^x(-\cos(x))-\int -\cos(x)\text{e}^x\, dx\right)
\end{equation*}
Simplify.
\begin{align*}
    y &= \text{e}^x\sin(x)-\left(-\text{e}^x\cos(x)+\int \cos(x)\text{e}^x\, dx\right)\\
    &= \text{e}^x\sin(x)+\text{e}^x\cos(x)-\int \text{e}^x\cos(x)\, dx
\end{align*}
It may seem as if we're back where we started with $\int \text{e}^x\cos(x)\, dx$, but, in reality, we now have two other terms that will not cancel first. These two facts (that we have the original integral and that we have other terms) are key. Recall that $y$ does not just equal what it does directly above, but everything else since the original integrable equation as well. This means that we can set any two (or three or all) of the above equations equal to each other. In this case, set the original equation equal to the last one.
\begin{equation*}
    \int \text{e}^x\cos(x)\, dx = \text{e}^x\sin(x)+\text{e}^x\cos(x)-\int \text{e}^x\cos(x)\, dx
\end{equation*}
This is it --- \emph{combine like terms}.
\begin{equation*}
    2\int \text{e}^x\cos(x)\, dx = \text{e}^x\sin(x)+\text{e}^x\cos(x)
\end{equation*}
Divide both sides by 2.
\begin{equation*}
    \int \text{e}^x\cos(x)\, dx = \frac{\text{e}^x\sin(x)+\text{e}^x\cos(x)}{2}
\end{equation*}
Return the substitution of the original integrable equation and add the constant of integration to yield the following, final integrated equation (equation 12).
\begin{equation}
    y = \frac{\text{e}^x\sin(x)+\text{e}^x\cos(x)}{2}+C
\end{equation}
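Equations 11 and 12 both pass the differentiation check as well (assuming sympy).
\begin{verbatim}
from sympy import symbols, exp, log, sin, cos, diff, simplify

x = symbols('x')
print(simplify(diff(x*log(x) - x, x) - log(x)))  # should print 0 (equation 11)
F = (exp(x)*sin(x) + exp(x)*cos(x))/2            # equation 12 without the constant
print(simplify(diff(F, x) - exp(x)*cos(x)))      # should print 0
\end{verbatim}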
\paragraph{Summary} In this section, a number of, again, relatively simple-looking functions were integrated. Some, such as the natural log function, were nothing but a special function and its argument, similar to cosecant in Section 1.1.2, but took ingeniously simple observations to integrate. Others, such as $\text{e}^x\cos(x)$, required persistence and a loose conceptualization of like terms. However, as has been the theme throughout Section 1.1, each initial integrable equation could be simplified to a more workable form where previously mentioned rules could be applied. Furthermore, each equation only had one variable with which to deal, and $y$ was left mostly alone except in Example 9. These patterns will not continue, however, as we move on to Section 1.2.
\newpage
\\paragraph{Summary} In this section, a great deal of, again, relatively simple-looking functions were integrated. Some, such as the natural log function, were nothing but a special function and its argument, similar to cosecant in Section 1.1.2, but took ingeniously simple observations to integrate. Others, such as $\\text{e}^x\\cos(x)$, required persistence and a loose conceptualization of like terms. However, as has been the theme throughout Section 1.1, each initial integrable equation could be simplified to a more workable form where previously mentioned rules could be applied. Furthermore, each equation only had one variable with which to deal and $y$ was left mostly alone except for Example 9. These patterns will not continue, however, as we move on to Section 1.2.\n\\newpage\n\n\n\\subsection{Functions of \\emph{x} and \\emph{y} (Differential Equations)}\n\\subsubsection{Simple}\n\\paragraph{Introduction} In this section, we will explore several functions of $x$ and $y$ that can be integrated through relatively straightforward means, once the mental block surrounding two variables is passed. Furthermore, a method of examining such functions and predicting their integral will be explored.\n\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=\\frac{-x}{y}$$\nThe most notable difference between this function and all others so far is that it is of two variables, $x$ and $y$. Secondly, because of this, its derivative can no longer be shown as a function defined implicitly in $y$, but must be formally shown as $\\tfrac{dy}{dx}$. This means that, since this equation cannot be solved for $y$ without integration, this differential equation$^[$\\footnote{An equation with at least one $\\frac{dy}{dx}$ term of some degree as a component.}$^]$ does not describe a line that is present at some points in the coordinate plane and not others, but \\emph{slopes} at each and every combination of $x$ and $y$. This is often shown as a slope field, such as the two shown below in Figure \\ref{fig:2}.\n\n\\begin{figure}[h!]\n    \\centering\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/SlopeField-10a.png}\n        \\caption{Slope field for $\\tfrac{-x}{y}$.}\n        \\label{fig:2a}\n    \\end{subfigure}\\hspace{3em}\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/SlopeField-10b.png}\n        \\caption{Slope field for $2x$.}\n        \\label{fig:2b}\n    \\end{subfigure}\n    \\caption{Two slope fields.}\n    \\label{fig:2}\n\\end{figure}\n\nThe slope field shown in Figure \\ref{fig:2a} shows the differential equation given in this example at one-unit intervals, while the slope field in Figure \\ref{fig:2b} shows the derivative of the parabola, $x^2$. What is done to generate each is that at every point to be shown, the $x$ and $y$ (if necessary) coordinates are plugged into the differential equation and a short line segment with this slope is placed at that point. For example, at point (1,1) in Figure \\ref{fig:2a}, we can find the slope to be $\\tfrac{-1}{1}=-1$. If we look at (1,1) in Figure \\ref{fig:2a}, we can see that the line segment does appear to have a slope of $-1$.\\par\nSlope fields can also give an estimate of what the shape may look like and what its constant of integration may do to it as it varies. For example, in Figure \\ref{fig:2b}, which we know is the slope field for the parabola, $x^2$, we can sort of make out a series of parabolas stacked vertically. This tells us that if we solve the differential equation,$$\\frac{dy}{dx}=2x$$we can expect to find an equation for a parabola and the constant of integration will make it vary vertically. Likewise, when looking at Figure \\ref{fig:2a}, we can expect to find an equation for a circle that grows and shrinks radially in size as $C$ varies. Clearly, slope fields are a good way to take a peek at where we're going; a short sketch of how such a field can be generated appears below.\\par
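For the curious, the following minimal Python sketch (ours; it assumes NumPy and Matplotlib are available and is not part of the original text) draws a slope field in exactly this plug-in-and-draw manner.\n\\begin{verbatim}\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Slope field for dy/dx = -x/y at one-unit intervals; y = 0 is\n# skipped because the slope is undefined there.\nxs = np.arange(-4.0, 5.0)\nys = np.array([y for y in np.arange(-4.0, 5.0) if y != 0])\nX, Y = np.meshgrid(xs, ys)\n\nS = -X / Y             # the slope at every grid point\nL = np.sqrt(1 + S**2)  # normalizer so all segments share one length\nplt.quiver(X, Y, 1 / L, S / L, headwidth=0, headlength=0,\n           headaxislength=0, pivot='middle')\nplt.gca().set_aspect('equal')\nplt.show()\n\\end{verbatim}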
Let's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=\\frac{-x}{y}\n\\end{equation*}\nThis clearly differs from the initial integrable equations in every other example so far. Whereas the others had a variable ($y$) defined in terms of an integral symbol, an integrand, and a variable of integration, this is still just the initial differential equation. However, as was mentioned before, every function that an example first introduced could have been written after $\\tfrac{dy}{dx}$ (one could then multiply both sides by $dx$ and integrate both sides to get what was called the initial integrable equation, but these steps were skipped for simplicity's sake).\\par\nRegardless, what's important now is to isolate the variables and their respective differentials on either side of the equals sign. Cross multiply to achieve this.\n\\begin{equation*}\n    y\\, dy=-x\\, dx\n\\end{equation*}\nTake the integral of both sides. This is allowed for the same reason that you can add, subtract, multiply, or divide both sides by the same thing --- as long as the integrals have the same (or related) bounds, you can integrate both sides.\n\\begin{equation*}\n    \\int y\\, dy=\\int -x\\, dx\n\\end{equation*}\nEmploy the antipower rule twice.\n\\begin{equation*}\n    \\frac{y^2}{2}=\\frac{-x^2}{2}\n\\end{equation*}\nAdd the constants of integration. It is important that the constant(s) of integration is/are added immediately after integrating for reasons that will be more fully explored in Example 12. For now, it will suffice to say that as the equation is solved for $y$, operations performed on the various terms must also be performed on the constant of integration.\n\\begin{equation*}\n    \\frac{y^2}{2}+C=\\frac{-x^2}{2}+C\n\\end{equation*}\nSince a constant ($C\\in \\mathbb{R}$) plus or minus a constant ($C\\in \\mathbb{R}$) is just a constant ($C\\in \\mathbb{R}$), we can eliminate one of the two constants of integration.\n\\begin{equation*}\n    \\frac{y^2}{2}=\\frac{-x^2}{2}+C\n\\end{equation*}\nMultiply both sides by 2.\n\\begin{equation*}\n    y^2=-x^2+2C\n\\end{equation*}\nTwo times a constant ($C\\in \\mathbb{R}$) is a constant ($C\\in \\mathbb{R}$).\n\\begin{equation*}\n    y^2=-x^2+C\n\\end{equation*}\nMove the variables to the same side to yield the following, final integrated equation (equation 13).\n\\begin{equation}\n    x^2+y^2=C\n\\end{equation}\nThough we could solve this equation for $y$, it should be clear at this point that this is, indeed, the equation for a circle. Since $C$ is where $r^2$ would be in a regular circle equation, we can understand that $C$ controls not the vertical position of the circle, but its radial size, just as the slope field in Figure \\ref{fig:2a} predicted.
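As a quick check of equation 13 (ours, not part of the original derivation), differentiating both sides implicitly with respect to $x$ should recover the differential equation.\n\\begin{align*}\n    2x+2y\\frac{dy}{dx} &= 0\\\\\n    \\frac{dy}{dx} &= \\frac{-x}{y}\n\\end{align*}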
\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=\\frac{-2x-y}{x+2y}$$\nThough this may initially look far harder than the equation discussed in Example 10, it is actually the same level of difficulty with one additional step covered by an aforementioned technique. In other words, while all functions heretofore explored have required differing techniques (or have built on previously used techniques), this is an example of a radically different-looking function that I would classify in the same category as the one in Example 10.\\par\nLet's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=\\frac{-2x-y}{x+2y}\n\\end{equation*}\nCross multiply.\n\\begin{equation*}\n    (x+2y)\\, dy=(-2x-y)\\, dx\n\\end{equation*}\nTake the integral of both sides.\n\\begin{equation*}\n    \\int (x+2y)\\, dy=\\int (-2x-y)\\, dx\n\\end{equation*}\nEmploy the sum and difference rule twice. Note that we could also have chosen to skip the above step and distributed the differentials to the terms in parentheses before taking integrals of both terms at the same time. In other words, one can justify the sum and difference rule by noting that one can think of the integral symbol and differential as variables that are multiplied by the integrand and can be distributed or factored at will.\n\\begin{equation*}\n    \\int x\\, dy+\\int 2y\\, dy=\\int -2x\\, dx-\\int y\\, dx\n\\end{equation*}\nManipulate the equation to isolate the terms whose variable is the same as its corresponding differential and those whose variable is different than its corresponding differential on different sides of the equals sign.\n\\begin{equation*}\n    \\int 2y\\, dy-\\int -2x\\, dx=-\\int y\\, dx-\\int x\\, dy\n\\end{equation*}\nEmploy the antipower rule twice on the terms on the left side of the equals sign; rewrite the right side (factor out the negative, switch terms).\n\\vspace{-1em}\n\\begin{equation*}\n    \\frac{2y^2}{2}-\\frac{-2x^2}{2}=-\\left(\\int x\\, dy+\\int y\\, dx\\right)\n\\end{equation*}\nSimplify the left side.\n\\vspace{-1em}\n\\begin{equation*}\n    y^2+x^2=-\\left(\\int x\\, dy+\\int y\\, dx\\right)\n\\end{equation*}\nOn the right side, the observant reader may notice that $x\\, dy+y\\, dx$ is nothing but an expansion of the product rule, and integrating should yield $xy$. However, this will be proved, instead, using integration by parts. Rewrite the left term on the right side of the equals sign as the right side of equation 9.\n\\begin{equation*}\n    y^2+x^2=-\\left(xy-\\int y\\, dx+\\int y\\, dx\\right)\n\\end{equation*}\nSimplify and add the constant of integration to yield the following, final integrated equation (equation 14).\n\\begin{align*}\n    y^2+x^2 &= -\\left(xy\\right)\\\\\n    y^2+x^2+xy &= 0\\\\\n    x^2+xy+y^2 &= C\\tag{14}\n\\end{align*}\n\\vspace{-2em}\n\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=3y$$\nThis may seem familiar to the reader at first glance --- it describes the rate of exponential growth with a $k$ value of 3. Furthermore, evaluating this differential equation is arguably even simpler than some past examples, but the real implications of this example lie in its treatment of the constant of integration. While both Examples 10 and 11 led to variations of conic sections, this function is best solved for $y$ after integration and will allow us to explore different purposes of $C$ more clearly, as was mentioned in Example 10.\\par\nLet's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=3y\n\\end{equation*}\nMultiply both sides by $dx$; divide both sides by $y$.\n\\begin{equation*}\n    \\frac{dy}{y}=3\\, dx\n\\end{equation*}\nTake the integral of both sides.\n\\begin{equation*}\n    \\int \\frac{dy}{y}=\\int 3\\, dx\n\\end{equation*}\nEmploy the rule of integration from Example 2 and the antipower rule.\n\\begin{equation*}\n    \\ln|y|=3x\n\\end{equation*}\nAdd the constant of integration.\n\\begin{equation*}\n    \\ln|y|=3x+C\n\\end{equation*}\nBy the converse of the definition of the logarithm, if $\\log_a(c)=b$, then $a^b=c$.
Employ this definition to rewrite the above equation.\n\\begin{equation*}\n    |y|=\\text{e}^{3x+C}\n\\end{equation*}\nRewrite with a plus or minus sign to eliminate the absolute value symbol.\n\\begin{equation*}\n    y=\\pm\\text{e}^{3x+C}\n\\end{equation*}\nEmploy the rule of exponents that $a^{m+n}=a^ma^n$.\n\\begin{equation*}\n    y=\\pm\\text{e}^{3x}\\text{e}^C\n\\end{equation*}\nReorder the terms.\n\\begin{equation*}\n    y=\\pm\\text{e}^C\\text{e}^{3x}\n\\end{equation*}\n\n\\begin{figure}[h!]\n    \\centering\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/C-straight.png}\n        \\caption{Variation across $\\mathbb{R}$.}\n        \\label{fig:3a}\n    \\end{subfigure}\\hspace{3em}\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/C-e^x.png}\n        \\caption{Variation across $\\mathbb{R}_{>0}$.}\n        \\label{fig:3b}\n    \\end{subfigure}\n    \\caption{Different ways $C$ can vary.}\n    \\label{fig:3}\n\\end{figure}\n\nAt this point, it should be clear why it is necessary to add in the constant of integration immediately after integrating. Whereas the manipulations in the last two equations did not vary the range of the constant of integration ($C\\in\\mathbb{R}$), these manipulations have. $\\text{e}^C$ does not have a range among the real numbers, but a range among the positive real numbers (excluding zero), as $C$ varies. This is because of the graphical interpretation of $C$, as seen in Figure \\ref{fig:3}. Whereas $C$ varying among the real numbers can be visualized as a straight line (where every real number \\emph{is} within its range), as in Figure \\ref{fig:3a}, $\\text{e}^C$ varies only in the positive numbers as $C$ varies among the real numbers, as in Figure \\ref{fig:3b}. That being said, the plus or minus sign means that $\\pm\\text{e}^C$ really varies among all of the real numbers except zero. This type of constant is expressed as $A$, where $A$ is directly defined as $A=\\pm\\text{e}^C$ and has a range $A\\in(-\\infty,0)\\cup(0,\\infty)$.\\par\nRewrite the equation with which we left off with $A$ to yield the following, final integrated equation (equation 15).\n\\stepcounter{equation}\n\\begin{equation}\n    y=A\\text{e}^{3x}\n\\end{equation}\n\\paragraph{Summary} Throughout this section, several functions of two variables have been integrated. The same techniques that were used in Section 1.1 were employed here, but integration occurred on both sides of the equals sign with respect to not just $x$, but $x$ and $y$, independently.\\par\nFurthermore, the uses of the constant of integration were expanded from simply varying a function vertically (see Figure \\ref{fig:2b}) to radially (see Figure \\ref{fig:2a}) and beyond (see Example 12). In Figure \\ref{fig:2}, some of these variations were explored graphically through slope fields, and in Figure \\ref{fig:3}, direct limits on the bounds of $C$ were discussed.\\par\nNotably, all of these functions involved terms or groups of terms that were multiplied by each other or divided by each other. When unmitigated addition and subtraction are introduced, things can become more complicated, as seen in Section 1.2.2.
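Before moving on, a small symbolic sketch (ours; it assumes SymPy is installed and uses our own variable names) can confirm the two headline results of this section, equations 14 and 15.\n\\begin{verbatim}\nimport sympy as sp\n\nx, A = sp.symbols('x A')\ny = sp.Function('y')\n\n# Equation 14: differentiate x^2 + x*y + y^2 = C implicitly, solve\n# for y'.\nlhs = x**2 + x*y(x) + y(x)**2\nyprime = sp.solve(sp.Eq(sp.diff(lhs, x), 0), sp.diff(y(x), x))[0]\nprint(sp.simplify(yprime))   # expect (-2*x - y(x))/(x + 2*y(x))\n\n# Equation 15: y = A*e^(3x) should satisfy dy/dx = 3y exactly.\nsol = A * sp.exp(3 * x)\nprint(sp.simplify(sp.diff(sol, x) - 3 * sol))   # expect 0\n\\end{verbatim}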
\\newpage\n\n\\subsubsection{Advanced Algebraic Techniques}\n\\paragraph{Introduction} In this section, two functions with addition and subtraction are analyzed. One is a binomial that is rather simply processed with a recurring technique. The other is a trinomial in $y$ which requires a rather ingenious algebraic technique and some advanced processing after integration to solve for $y$.\n\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=1+y$$\nAs was mentioned in the summary of Section 1.2.1, this is clearly a function where the main (and arguably sole) component is addition. It looks rather simple, and, indeed, its integration is a good launching point for more broadly exploring addition and subtraction in two-variable functions.\\par\nLet's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=1+y\n\\end{equation*}\nMultiply both sides by $dx$; divide both sides by $1+y$.\n\\begin{equation*}\n    \\frac{dy}{1+y}=dx\n\\end{equation*}\nTake the integral of both sides.\n\\begin{equation*}\n    \\int\\frac{dy}{1+y}=\\int dx\n\\end{equation*}\nDefine $u$ as $u=1+y$. In this case, $du=dy$ by derivative rules. Make these two substitutions.\n\\begin{equation*}\n    \\int\\frac{du}{u}=\\int dx\n\\end{equation*}\nEmploy the rule of integration from Example 2 and the antipower rule.\n\\begin{equation*}\n    \\ln|u|=x\n\\end{equation*}\nAdd the constant of integration.\n\\begin{equation*}\n    \\ln|u|=x+C\n\\end{equation*}\nReturn the substitution.\n\\begin{equation*}\n    \\ln|1+y|=x+C\n\\end{equation*}\nEmploy the definition of the logarithm to rewrite the above equation.\n\\begin{equation*}\n    |1+y|=\\text{e}^{x+C}\n\\end{equation*}\nRewrite with a plus or minus sign to eliminate the absolute value symbol.\n\\begin{equation*}\n    1+y=\\pm\\text{e}^{x+C}\n\\end{equation*}\nEmploy the rule of exponents that $a^{m+n}=a^ma^n$.\n\\begin{equation*}\n    1+y=\\pm\\text{e}^x\\text{e}^C\n\\end{equation*}\nSubstitute $A$ for $\\pm\\text{e}^C$.\n\\begin{equation*}\n    1+y=A\\text{e}^x\n\\end{equation*}\nSubtract 1 from both sides of the equals sign to yield the following, final integrated equation (equation 16).\n\\begin{equation}\n    y=A\\text{e}^x-1\n\\end{equation}
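As a one-line check of equation 16 (ours, not part of the original derivation), differentiation should immediately return the right side of the differential equation.\n\\begin{equation*}\n    \\frac{dy}{dx}=A\\text{e}^x=\\left(A\\text{e}^x-1\\right)+1=1+y\n\\end{equation*}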
\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=y^2+2y-24$$\nThis function is a progression from both Examples 12 and 13. It is a polynomial in $y$ and will require techniques used in those aforementioned examples as well as a new technique called partial fractions$^[$\\footnote{\\label{fot:1}A method of splitting a single fraction with two or more elements multiplied in the denominator into the sum or difference of two or more resultant fractions, which subsequently often have lower degree terms in the denominator.}$^]$.\\par\nLet's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=y^2+2y-24\n\\end{equation*}\nMultiply both sides by $dx$; divide both sides by $y^2+2y-24$.\n\\begin{equation*}\n    \\frac{dy}{y^2+2y-24}=dx\n\\end{equation*}\nTake the integral of both sides; separate $dy$ from the fraction on the left side of the equals sign.\n\\begin{equation}\n    \\int \\frac{1}{y^2+2y-24}\\, dy=\\int dx\n\\end{equation}\nNow, we run into a roadblock that we avoided in Example 13. The denominator of the $y$ fraction is a second-degree polynomial. This means that if we defined a function, $u$, as $u=y^2+2y-24$, the substitution for $dy$ would be $\\frac{du}{2y+2}$ by differentiation rules. This means that $y$ would still be in the integrand, and we would now be trying to integrate a multivariable ($y$ and $u$) function with only one variable of integration --- an impossible task. Therefore, we need a different technique, i.e. partial fractions.\\par\nAccording to the definition of the method of partial fractions in Footnote \\ref{fot:1}, it may be possible to separate this large fraction into two with first-degree denominators, which could be separated with the sum or difference rule of integrals. The following will guide the reader through this method.\\par\nIsolate the fraction for consideration --- we'll return to the integrable equation later. To begin, factor the denominator. One could solve it algebraically with the quadratic formula, but the keen eye will discern that two factors of -24 are -4 and 6, which sum to 2 (the $b$ coefficient). Therefore, rewrite the denominator with these factors in mind.\n\\begin{equation*}\n    \\frac{1}{(y-4)(y+6)}\n\\end{equation*}\nSet this fraction equal to the sum of two fractions, as shown below.\n\\begin{equation}\n    \\frac{1}{(y-4)(y+6)}=\\frac{A}{y-4}+\\frac{B}{y+6}\n\\end{equation}\n$A$ and $B$ are variable numerators. What this statement claims is that there are two fractions, with denominators $y-4$ and $y+6$ that, with some as-of-yet undiscovered numerators, sum to the original fraction. Now, the logical basis of partial fractions may be coming into focus. When fractions are added, the first step is to find a common denominator. Though an LCD (least common denominator) can be beneficial, one surefire way to find a common denominator is by multiplying the two denominators under question. However, to modify the denominator also requires modifying the numerator, i.e. multiplying each fraction by a clever form of one.\\par\nMultiply each fraction on the right side of the equals sign by the other fraction's denominator divided by that same denominator.\n\\begin{equation*}\n    \\frac{1}{(y-4)(y+6)}=\\frac{A}{y-4}\\times\\frac{y+6}{y+6}+\\frac{B}{y+6}\\times\\frac{y-4}{y-4}\n\\end{equation*}\nSimplify.\n\\begin{equation*}\n    \\frac{1}{(y-4)(y+6)}=\\frac{A(y+6)}{(y-4)(y+6)}+\\frac{B(y-4)}{(y+6)(y-4)}\n\\end{equation*}\nNow, the fractions on the right side have the same denominator, so they can be added (add the numerators over the common denominator).\n\\begin{equation*}\n    \\frac{1}{(y-4)(y+6)}=\\frac{A(y+6)+B(y-4)}{(y-4)(y+6)}\n\\end{equation*}\nMultiply both sides by $(y-4)(y+6)$.\n\\begin{equation*}\n    1=A(y+6)+B(y-4)\n\\end{equation*}\nDistribute. Add $0y$, or zero (this will not disturb equality), to the left side of the equation.\n\\begin{equation*}\n    0y+1=Ay+6A+By-4B\n\\end{equation*}\nNow, we have one equation and two unknown variables...or so it seems. In fact, we actually can write a system of two equations from this one equation. Since the only operators are addition and subtraction, the first-degree terms (those with a $y$) and the zeroeth-degree terms (those without a $y$) must respectively sum to zero and one because of their equality with the left side of the equation. In other words, $Ay+By=0y$ and $6A-4B=1$.
By dividing out the $y$ on both sides of the first equation and pairing it with the second, we get the following system of equations.\n\\begin{align*}\n    A+B &= 0\\\\\n    6A-4B &= 1\n\\end{align*}\nSolve this system as follows.\n\\begin{equation*}\n    A=-B\n\\end{equation*}\n\\begin{align*}\n    6(-B)-4B &= 1\\\\\n    -10B &= 1\\\\\n    B &= -\\frac{1}{10}\\tag{19}\n\\end{align*}\n\\begin{align*}\n    A + -\\frac{1}{10} &= 0\\\\\n    A &= \\frac{1}{10}\\tag{20}\n\\end{align*}\nSubstitute equations 19 and 20 into equation 18.\n\\begin{equation*}\n    \\frac{1}{(y-4)(y+6)}=\\frac{\\frac{1}{10}}{y-4}+\\frac{-\\frac{1}{10}}{y+6}\n\\end{equation*}\nSimplify the right side.\n\\begin{align*}\n    \\frac{1}{(y-4)(y+6)} &= \\frac{1}{10(y-4)}+\\frac{-1}{10(y+6)}\\\\\n    &= \\frac{1}{10y-40}-\\frac{1}{10y+60}\\tag{21}\n\\end{align*}\nSubstitute the right side of equation 21 into equation 17.\n\\begin{equation*}\n    \\int \\left(\\frac{1}{10y-40}-\\frac{1}{10y+60}\\right)\\, dy=\\int dx\n\\end{equation*}\nEmploy the sum and difference rule.\n\\begin{equation*}\n    \\int \\frac{1}{10y-40}\\, dy-\\int \\frac{1}{10y+60}\\, dy=\\int dx\n\\end{equation*}\nDefine $u$ as $u=10y-40$ and $v$ as $v=10y+60$. In the first case, $dy=\\frac{du}{10}$, while in the second case, $dy=\\frac{dv}{10}$. Substitute four times.\n\\begin{equation*}\n    \\int \\frac{1}{u}\\times \\frac{du}{10}-\\int \\frac{1}{v}\\times \\frac{dv}{10}=\\int dx\n\\end{equation*}\nMultiply fractions; factor out the constants.\n\\begin{equation*}\n    \\frac{1}{10}\\int \\frac{du}{u}-\\frac{1}{10}\\int \\frac{dv}{v}=\\int dx\n\\end{equation*}\nEmploy the rule of integration from Example 2 twice and the antipower rule once.\n\\begin{equation*}\n    \\frac{1}{10}\\ln|u|-\\frac{1}{10}\\ln|v|=x\n\\end{equation*}\nAdd the constant of integration.\n\\begin{equation*}\n    \\frac{1}{10}\\ln|u|-\\frac{1}{10}\\ln|v|=x+C\n\\end{equation*}\nReturn the substitutions.\n\\begin{equation*}\n    \\frac{1}{10}\\ln|10y-40|-\\frac{1}{10}\\ln|10y+60|=x+C\n\\end{equation*}\nAt this point, we have successfully integrated the initial differential equation. However, we will go a few steps farther and solve for $y$, as we have in the previous few examples. 
Some steps, especially with regard to manipulating $C$, will be combined for the sake of space and since they have been described in great detail in past examples.\\par\nAnyway, multiply both sides by 10.\n\\begin{equation*}\n    \\ln|10y-40|-\\ln|10y+60|=10x+C\n\\end{equation*}\nEmploy the property of logarithms that $\\log_a(b)-\\log_a(c)=\\log_a\\left(\\frac{b}{c}\\right)$.\n\\begin{equation*}\n    \\ln\\abs{\\frac{10y-40}{10y+60}}=10x+C\n\\end{equation*}\nEmploy the definition of the logarithm to rewrite the above equation.\n\\begin{equation*}\n    \\abs{\\frac{10y-40}{10y+60}}=\\text{e}^{10x+C}\n\\end{equation*}\nManipulate by steps from previous examples.\n\\begin{equation*}\n    \\frac{10y-40}{10y+60}=A\\text{e}^{10x}\n\\end{equation*}\nMultiply both sides by $10y+60$.\n\\begin{equation*}\n    10y-40=A\\text{e}^{10x}(10y+60)\n\\end{equation*}\nDistribute.\n\\begin{equation*}\n    10y-40=10Ay\\text{e}^{10x}+60A\\text{e}^{10x}\n\\end{equation*}\nCombine all terms with a $y$ on the left side of the equals sign and all terms without a $y$ on the right side of the equals sign.\n\\begin{equation*}\n    10y-10Ay\\text{e}^{10x}=60A\\text{e}^{10x}+40\n\\end{equation*}\nFactor out a $y$ from each term on the left side of the equation.\n\\begin{equation*}\n    y\\left(10-10A\\text{e}^{10x}\\right)=60A\\text{e}^{10x}+40\n\\end{equation*}\nDivide both sides by $\\left(10-10A\\text{e}^{10x}\\right)$.\n\\begin{equation*}\n    y=\\frac{60A\\text{e}^{10x}+40}{10-10A\\text{e}^{10x}}\n\\end{equation*}\nDivide both the numerator and denominator of the fraction by 10 to yield the following, final integrated equation (equation 22).\n\\setcounter{equation}{21}\n\\begin{equation}\n    y=\\frac{6A\\text{e}^{10x}+4}{1-A\\text{e}^{10x}}\n\\end{equation}
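A compact symbolic sketch (ours; it assumes SymPy and uses our own variable names) can reproduce the partial-fraction split and confirm equation 22 against the original differential equation.\n\\begin{verbatim}\nimport sympy as sp\n\nx, y, A = sp.symbols('x y A')\n\n# The split used above: 1/(y^2 + 2y - 24) into partial fractions.\nprint(sp.apart(1 / (y**2 + 2*y - 24), y))\n# expect 1/(10*(y - 4)) - 1/(10*(y + 6)), possibly reordered\n\n# Equation 22 should satisfy dy/dx = y^2 + 2y - 24 for any A.\nY = (6*A*sp.exp(10*x) + 4) / (1 - A*sp.exp(10*x))\nprint(sp.simplify(sp.diff(Y, x) - (Y**2 + 2*Y - 24)))   # expect 0\n\\end{verbatim}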
\\paragraph{Summary} As was referenced in the introduction, addition in a simple binomial can be handled through integration by substitution after combining similar variables on the same side of the equals sign. A quadratic polynomial is definitely more complicated. The one that was analyzed in Example 14 above required the complex process of partial fractions followed by a series of algebraic manipulations that culminated in a rather messy fraction --- a far cry from some of the other, cleaner results in this paper.\\par\nIn the next section, only one function will be integrated, but the trick requires a thorough mastery of techniques of integration \\emph{and} differentiation to spot.\n\\newpage\n\n\\subsubsection{Advanced Calculus Techniques}\n\\paragraph{Introduction} In this section, only one function will be analyzed. It has addition and subtraction, like the functions in Section 1.2.2, but requires radically different calculus-based techniques to rewrite it into an integrable form. Unlike Example 14, the answer is almost as simple as the differential equation, but still has its own little gems (explored in the Summary).\\par\nThis function requires the reader to draw deep on calculus knowledge and intimately consider differentiation as well as integration for the first time in this paper. Lastly, it is still not technically a multivariable function, but it does require techniques that, perhaps, most closely resemble a multivariable function of any explored herein.\n\\ex This example is concerned with integrating the following function.$$\\frac{dy}{dx}=x+y$$\nAs was previously alluded to, this is the most complex function that we will be algebraically integrating. It appears extremely simple --- just the addition of the two variables, yet the trick here requires the product rule, of all things, a specialized function, and some rather ingenious algebraic and calculus manipulation. However, the final answer is not nearly as complicated as the one in Example 14.\\par\nLet's begin integrating. Write the integrable equation as follows.\n\\begin{equation*}\n    \\frac{dy}{dx}=x+y\n\\end{equation*}\nThe first step is not necessarily obvious. If we multiplied both sides by $dx$ and employed the sum and difference rule, one integral would work out via the antipower rule. However, the other would require integrating $y$ with respect to $x$, an impossible task. Effectively, it does not appear as if there is a way to pair each variable with its differential, so an alternate method is required.\\par\nLet's get $y$ and its derivative ($\\frac{dy}{dx}$ or $y'$) on the same side of the equation. Subtract $y$ from both sides.\n\\begin{equation*}\n    \\frac{dy}{dx}-y=x\n\\end{equation*}\nWe cannot integrate yet, but we are getting closer. Think back to differentiation. When else does the addition or subtraction of a variable and its derivative come up in calculus? Though it has not been directly dealt with in this paper, Example 11 did reference the product rule when trying to evaluate $\\int x\\, dy + \\int y\\, dx$. As a review, the product rule is the differential counterpart of integration by parts, and can be stated in any of the following ways$^[$\\footnote{The prime (apostrophe) of a function designates its derivative.}$^]$, where $u$ and $v$ are differentiable functions.\n\\begin{align*}\n    \\frac{d}{dx}(uv) &= \\frac{d}{dx}(u)\\times v+u\\times \\frac{d}{dx}(v)\\\\\n    &= \\frac{du}{dx}\\times v+u\\times \\frac{dv}{dx}\\tag{23}\\\\\n    &= u'v+uv'\n\\end{align*}\nWhat appeals about the product rule is that if we can transform the left side of where we left off with the integrable equation into a format that directly resembles the right side of equation 23, then we can eliminate the addition by rewriting as the left side of equation 23. The right side of equation 23 is close to our situation above, but not a perfect match. We have a variable and its derivative separated by an addition/subtraction operator, but we do not have the second variable/derivative pair. The closest we can get right now to transforming the left side of where we left off into product-rule format is by showing the $1$ and $-1$ as \"functions\" multiplied by $y$ and its derivative, as follows.\n\\begin{equation*}\n    1\\times\\frac{dy}{dx}+-1\\times y=x\n\\end{equation*}\nUnfortunately, $1$ and $-1$ do not share a derivative/integral calculus relation, so we cannot simply compress the left side above into the left side of the definition of the product rule. However, we can multiply both sides of the equation by a function, $u$, that can resolve this issue.\\par\nMultiply both sides by $u$, an as-of-yet undefined function.\n\\stepcounter{equation}\n\\begin{equation}\n    u\\times\\frac{dy}{dx}+-u\\times y=ux\n\\end{equation}\nNow we're getting somewhere. The leftmost term, $uy'$, is practically directly related to the product rule. However, $-uy$ does not equal $u'y$, but it needs to, so set these terms equal to each other$^[$\\footnote{This is the key and it cannot be undersold. In trying to establish a link between equation 23 and the integrable equation, we need an exact parallel, or equality.
By setting the two terms that need to be equal, equal to each other, we can solve for a function, $u$, except we use calculus to solve for a function instead of algebra to solve for a variable.}$^]$.\n\\begin{equation*}\n    -uy=u'y\n\\end{equation*}\nDivide both sides by $y$.\n\\begin{equation*}\n    -u=u'\n\\end{equation*}\nRewrite $u'$ in Leibniz's notation.\n\\begin{equation*}\n    \\frac{du}{dx}=-u\n\\end{equation*}\nNow, we have a differential equation which we can solve to find the function, $u$, that we need to suit our needs. Divide both sides by $u$; multiply both sides by $dx$.\n\\begin{equation*}\n    \\frac{du}{u}=-dx\n\\end{equation*}\nTake the integral of both sides.\n\\begin{equation*}\n    \\int \\frac{du}{u}=\\int -dx\n\\end{equation*}\nEmploy the rule of integration from Example 2. Note that the constant of integration may be omitted here; any single function $u$ that establishes the parallel will suffice.\n\\begin{equation*}\n    \\ln|u|=-x\n\\end{equation*}\nEmploy the definition of the logarithm to rewrite the above equation.\n\\begin{equation*}\n    |u|=\\text{e}^{-x}\n\\end{equation*}\nRewrite with a plus or minus sign to eliminate the absolute value symbol.\n\\begin{equation*}\n    u=\\pm\\text{e}^{-x}\n\\end{equation*}\nAt this point, it would seem that there are two potential functions that satisfy the conditions for $u$. This is perhaps strange, but nevertheless true. I'll explain: Since $u$ is multiplied through the entirety of equation 24, if $u$ is positive, all terms in equation 24 will be positive. If $u$ is negative, all terms in equation 24 will be negative, but multiplying by $-1$ makes equation 24 equivalent to the first case. In effect, either definition of $u$ \\emph{does} satisfy the necessary conditions.\\par\nSubstitute the positive definition (for simplicity's sake) of $u$ into equation 24.\n\\begin{equation*}\n    \\text{e}^{-x}\\times\\frac{dy}{dx}+-\\text{e}^{-x}\\times y=\\text{e}^{-x}x\n\\end{equation*}\nBy the definition of $u$, $-\\text{e}^{-x}=\\frac{d}{dx}\\left(\\text{e}^{-x}\\right)$, a fact that can be confirmed by derivative or integral rules. As such, substitute $\\frac{d}{dx}\\left(\\text{e}^{-x}\\right)$ for $-\\text{e}^{-x}$.\n\\begin{equation*}\n    \\text{e}^{-x}\\times\\frac{dy}{dx}+\\frac{d}{dx}\\left(\\text{e}^{-x}\\right)\\times y=\\text{e}^{-x}x\n\\end{equation*}\nAt this point, the left side of the above equation directly resembles equation 23 where $u$ (in equation 23) is related to $\\text{e}^{-x}$ while $v$ is related to $y$. Therefore, the product rule can be used to rewrite the left side of the above equation, as follows.\n\\begin{equation*}\n    \\frac{d}{dx}\\left(y\\text{e}^{-x}\\right)=\\text{e}^{-x}x\n\\end{equation*}\nDefine $w$ as $w=y\\text{e}^{-x}$; this simplifies the differential for the integration step. Make this substitution.\n\\begin{equation*}\n    \\frac{dw}{dx}=\\text{e}^{-x}x\n\\end{equation*}\nNow, it may become obvious how far we've come. As of defining $w$, we have rewritten the addition of two variables as a differential equation equal to the multiplication of two functions of the same variable. Furthermore, there is still a differential of that variable ($dx$) in the equation, so this can be evaluated via integration by parts. When integrated, $dw$ will become $w$, at which point the substitution can be returned and the equation solved for $y$.
This process follows.\\par\nMultiply both sides by $dx$.\n\\begin{equation*}\n    dw=\\text{e}^{-x}x\\, dx\n\\end{equation*}\nTake the integral of both sides.\n\\begin{equation*}\n    \\int dw=\\int\\text{e}^{-x}x\\, dx\n\\end{equation*}\nDefine $u$ as $u=x$ and $dv$ as $dv=\\text{e}^{-x}\\, dx$. In this case, $du=dx$ and $v=-\\text{e}^{-x}$ by derivative and integral rules. Rewrite the right side of the above equation as the right side of equation 9.\n\\begin{equation*}\n    \\int dw=-x\\text{e}^{-x}-\\int -\\text{e}^{-x}\\, dx\n\\end{equation*}\nEmploy the antipower rule and the definition of the earlier function $u$ (that $\\frac{du}{dx}=-\\text{e}^{-x}$ when $u=\\text{e}^{-x}$).\n\\begin{equation*}\n    w=-x\\text{e}^{-x}-\\text{e}^{-x}\n\\end{equation*}\nAdd the constant of integration.\n\\begin{equation*}\n    w=-x\\text{e}^{-x}-\\text{e}^{-x}+C\n\\end{equation*}\nReturn the substitution of $w$.\n\\begin{equation*}\n    y\\text{e}^{-x}=-x\\text{e}^{-x}-\\text{e}^{-x}+C\n\\end{equation*}\nDivide both sides by $\\text{e}^{-x}$ to solve for $y$.\n\\begin{equation*}\n    y=-x-1+\\frac{1}{\\text{e}^{-x}}\\times C\n\\end{equation*}\nEmploy the rule of exponents that $a^{-m}=\\frac{1}{a^m}$.\n\\begin{equation*}\n    y=-x-1+\\frac{1}{\\frac{1}{\\text{e}^{x}}}\\times C\n\\end{equation*}\nMultiply the complex fraction by $1=\\frac{\\text{e}^{x}}{\\text{e}^{x}}$ and simplify. Note that this is, indeed, $C$ out front as opposed to $A$.\n\\begin{equation*}\n    y=-x-1+C\\text{e}^{x}\n\\end{equation*}\nReorder the terms to put the term with a positive coefficient in front to yield the following, final integrated equation (equation 25).\n\\begin{equation}\n    y=C\\text{e}^{x}-x-1\n\\end{equation}
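As an aside (ours; SymPy assumed), a computer algebra system reaches the same result by treating $\\frac{dy}{dx}=x+y$ as a first-order linear equation.\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\ny = sp.Function('y')\n\n# Solve dy/dx = x + y; sympy's first-order linear routine rests on\n# the same integrating-factor idea developed above.\nprint(sp.dsolve(sp.Eq(y(x).diff(x), x + y(x)), y(x)))\n# expect Eq(y(x), C1*exp(x) - x - 1)\n\\end{verbatim}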
\\paragraph{Summary} And so ends the integration of the most interesting non-multivariate algebraically integrable function I have come across. Throughout Section 1.2, the reader may have wondered how integrating a two-variable function can still yield a one-variable function. Furthermore, he or she may note that, if we take Example 15 for instance, the derivative of the final integrated equation does not appear to be equal to the initial integrable equation. However, it is --- there is simply an extra step after differentiation to return to the initial integrable equation. This will be briefly explored below.\\par\nLet's differentiate equation 25. Begin with the initial differentiable equation,$$y=C\\text{e}^{x}-x-1$$and separate terms with the sum and difference rule and differentiate with rules that have already been explored to yield$$\\frac{dy}{dx}=C\\text{e}^x-1$$Note that as $C$ is just a constant, it can be factored out during the differentiation process and thus does not change. This may appear to be completely different from $x+y$, but it is actually equivalent. If $x$ is added to both sides of the initial differentiable equation, the equality becomes$$x+y=C\\text{e}^x-1$$If the substitution is made, the reader will see the initial integrable equation from Example 15 as it should be.\\par\nThrough this exercise, it may also have become clear why a differential equation in $x$ and $y$ can still be termed a one-variable function. A true \\emph{multivariable} function might look like $z=x+y$, but because $\\frac{dy}{dx}$ is so inherently related to $y$, $x$, and planar graphing in general, all functions do stay true to the title and can be classified as functions of one variable.\\par\nSection 2 will continue to explore one-variable functions, but these will not be able to be integrated by any rules discussed as of yet in this paper.\n\\newpage\n\n\n\n\\section{Approximations of Nonintegrable Functions}\n\\subsection{Restricted Estimates}\n\\paragraph{Introduction} In this section, the idea of a nonintegrable function will be explained. Once the reader is convinced that such a function exists, two methods of approximating definite integrals will be explored (for the same function so that the accuracy of the two approaches can be compared).\n\\ex This example is concerned with integrating the following function.$$y=\\sin(x^2)$$\nAs has been referenced above, this function cannot be integrated with any techniques discussed thus far. Every function so far could either be directly integrated using a set of techniques (antipower rule, $\\frac{du}{u}$ rule, etc.) or could be manipulated to such a form that these rules then applied.\\par\nHowever, sticking with the theme of simple functions that require complex techniques, it is the lack of complication of this function that makes it possible to see why any previous techniques will not work. Let's begin with Example 1. Is this a polynomial function that can be integrated via the sum and difference rule and the antipower rule? No. It's got a binomial, but this part of it is nested within a non-polynomial function. How about Example 2? Is there any way to turn this function into a case of $\\frac{du}{u}$? No, there is not. The derivative of this function is $2x\\cos(x^2)$, which cannot be introduced directly through an algebraic rewrite or through a clever form of one \\`a la Example 3, no matter how much Table \\ref{tab:1} is analyzed. Example 4 deals with the very specific case of a difference of perfect squares under a radical, so it is not applicable. Example 5 does not work for the same reason as Examples 2-3. Example 6 does not work for a similar reason to Example 4. Examples 7-9 rely on integration by parts, but $\\sin(x^2)$ is only one function. The Example 8 technique of calling $1$ a function also does not work in this case because $2x^2\\cos(x^2)$ is no easier to integrate than $\\sin(x^2)$ (in fact, rewriting this with integration by parts brings the reader right back to $\\sin(x^2)$).
Obviously, nothing from Examples 10-15 will work because these are differential equations and require rewrites so that Examples 1-9 techniques can be used.\\par\n\n\\begin{figure}[h!]\n    \\centering\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/Rsum4.png}\n        \\caption{Four subintervals ($n=4$).}\n        \\label{fig:4a}\n    \\end{subfigure}\\hspace{3em}\n    \\begin{subfigure}[b]{0.4\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/Rsum12.png}\n        \\caption{Twelve subintervals ($n=12$).}\n        \\label{fig:4b}\n    \\end{subfigure}\n    \\caption{Riemann sums for $x^2$.}\n    \\label{fig:4}\n\\end{figure}\n\n\nSo what is the best course of action to integrate a \"Nonintegrable Function,\" as the title of Section 2 puts it? Of course, there is a hitch --- \"Nonintegrable\" simply means that it cannot be \\emph{perfectly} integrated (integrated via algebraic means) to find another function that exactly describes the area under the initial integrable function at every point. Any function that stays finite over a finite interval will have a finite area between itself, the $x$-axis, and the bounds of the integral (generally, $a$ and $b$). Even if this area cannot be expressed in a \"pretty\" answer in simplest radical form or somesuch, it can be \\emph{approximated}. One person could say an area is roughly 2 (for example). Another person could argue that it's 2.2. A third person could say it's 2.17. And as better and better approximations are found, eventually, an approximation can be found that is accurate enough that it suits any real purposes.\\par\nIn fact, all of integration is based on approximations, and then theorems. Integration begins as a Riemann sum, which imagines many small rectangles under a function (see Figure \\ref{fig:4} for such rectangles under a parabola). Rectangles are easy to find the area for (just width times height). Let's find the area of the rectangles in Figure \\ref{fig:4a}, where there are four rectangles under the parabola, $x^2$. The two vertical, blue-dashed lines are the bounds ($a=0$ and $b=2$ in both subfigures). The width ($w$) of each rectangle can be visually confirmed to be constant and 0.5. Note that the width of one rectangle is also equal to the length of the bounded region divided by the number of rectangles. Mathematically,\n\\begin{equation}\n    w=\\frac{b-a}{n}\n\\end{equation}\nThe height of the rectangles is equal to $f(x)$ (where $f(x)=x^2$) at different points. Now, we have the information necessary to write a Riemann sum (the sum of these rectangles' areas; designated $A_a$ for area of Figure \\ref{fig:4a}).\n\\begin{equation*}\n    A_a = h_1w_1+h_2w_2+h_3w_3+h_4w_4\n\\end{equation*}\nHowever, since all of the widths are the same, factor them out and rewrite as just $w$.\n\\begin{equation*}\n    A_a = w(h_1+h_2+h_3+h_4)\n\\end{equation*}\nSubstitute in numbers and simplify.\n\\begin{align*}\n    A_a &= 0.5(0+0.25+1+2.25)\\\\\n    &= 1.75\n\\end{align*}\nIs this an approximation? Absolutely. However, more accurate ones may be found. If we sum up the area of all of the rectangles in Figure \\ref{fig:4b} (designated $A_b$; this exact process will not be listed, but it's identical to that used to find $A_a$) the area can be found to be about $A_b=2.34$. At this point, the Riemann sum has yielded accuracy to the ones place (note that the exact area is $\\frac{8}{3}\\approx 2.67$, which can be confirmed by evaluating $\\int_0^2 x^2\\, dx$).\\par
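The arithmetic above is easy to script. Below is a minimal left-hand Riemann sum in Python (ours; the function name is our own), reproducing $A_a$ and $A_b$.\n\\begin{verbatim}\n# Left-hand Riemann sum: n rectangles of width w = (b - a)/n\n# (equation 26), each with height f at its left edge.\ndef left_riemann(f, a, b, n):\n    w = (b - a) / n\n    return w * sum(f(a + i * w) for i in range(n))\n\nprint(left_riemann(lambda x: x * x, 0, 2, 4))    # 1.75     (A_a)\nprint(left_riemann(lambda x: x * x, 0, 2, 12))   # 2.342... (A_b)\n\\end{verbatim}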
What can we learn from this? We can learn that integrals can be approximated by finding areas via Riemann sums. As you approach infinitely many, infinitely small rectangles, you will approach the exact area and integral. Nothing about a Riemann sum uses antiderivatives (the technique used in Examples 1-15), and it is not as clean, but it does work.\\par\nTo finish this example, let's apply a Riemann sum of $n=4$ on $\\left[0,\\sqrt{\\pi}\\right]$ to $\\sin(x^2)$, as seen in Figure \\ref{fig:5}.\\par\n\n\\begin{figure}[h!]\n    \\centering\n    \\includegraphics[width=0.4\\linewidth]{Blender/Rsum4-sin.png}\n    \\caption{Riemann sum ($n=4$) for $\\sin(x^2)$.}\n    \\label{fig:5}\n\\end{figure}\n\nNote that the interval was chosen because it reflects the first positive part of the wave after the $y$-axis. $\\sqrt{\\pi}$ can be confirmed to be a zero by using arcsine to solve for $x$ in $\\sin(x^2)=0$. However, this will not be proved here.\\par\nLet's begin summing. Find $w$ by employing equation 26.\n\\begin{align*}\n    w &= \\frac{\\sqrt{\\pi}-0}{4}\\\\\n    &= \\frac{\\sqrt{\\pi}}{4}\n\\end{align*}\nFinding the many values of $h$ has been discussed before, and since the values will not be able to be expressed in simplest radical form due to the nature of the sine function, the summation process will not be shown, but the following, final sum is displayed below (equation 27). Note that both Figures \\ref{fig:4} and \\ref{fig:5} have used left-hand Riemann sums, which means that after finding $h$, the rectangle is imagined stretching to the right. In Figure \\ref{fig:4}, this meant that the rectangles stayed strictly under the function and our estimate was lower. However, since Figure \\ref{fig:5} has a descending portion, some of the area approximated by the rectangles sticks outside of the function. This may lead this estimate to be a little more accurate as the excess area will fill in some or all of the area not accounted for by rectangles closer to $x=0$.\n\\begin{equation}\n    A=0.83\n\\end{equation}
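The same little script as before (ours; Python's math module assumed) reproduces equation 27.\n\\begin{verbatim}\nimport math\n\ndef left_riemann(f, a, b, n):\n    w = (b - a) / n\n    return w * sum(f(a + i * w) for i in range(n))\n\n# Left-hand sum for sin(x^2) on [0, sqrt(pi)] with n = 4.\nprint(left_riemann(lambda x: math.sin(x * x),\n                   0, math.sqrt(math.pi), 4))   # 0.834...\n\\end{verbatim}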
\\ex This example is similarly concerned with integrating the following function.$$y=\\sin(x^2)$$\nThere are other techniques which converge to the actual value far faster than Riemann sums. The technique that will be explored in this example is called a Taylor series$^[$\\footnote{A calculus-derived polynomial of infinite degree that converges to the values of some function across an interval or over the real numbers, $\\mathbb{R}$. The derivation is based on constructing a polynomial to have the same $n^\\text{th}$ derivative as the function in question.}$^]$. Taylor series can be centered$^[$\\footnote{A Taylor series' center is where it most accurately reflects the function in question. The farther away from the center, the less accurate the representation of the function.}$^]$ at different values. For the purposes of this approximation, we'll center at $x=0$, which will also make our Taylor series a Maclaurin series$^[$\\footnote{A Maclaurin series is a Taylor series centered at $x=0$.}$^]$.\\par\nLet's begin deriving a Taylor series. Make a table of the derivatives of $\\sin(x^2)$ and their values at $x=0$ (Table \\ref{tab:2}).\n\n\\begin{table}[h!]\n    \\centering\n    \\begin{tabular}{|c|c|c|}\n        \\hline\n        \\textbf{Derivative Number} & \\textbf{Function} & \\textbf{Value at }$\\mathbf{x=0}$\\\\\n        \\hline\n        0 & $\\sin(x^2)$ & 0\\\\\n        \\hline\n        1 & $2x\\cos(x^2)$ & 0\\\\\n        \\hline\n        2 & $2\\cos(x^2)-4x^2\\sin(x^2)$ & 2\\\\\n        \\hline\n        3 & $-12x\\sin(x^2)-8x^3\\cos(x^2)$ & 0\\\\\n        \\hline\n        4 & $(16x^4-12)\\sin(x^2)-48x^2\\cos(x^2)$ & 0\\\\\n        \\hline\n        5 & $160x^3\\sin(x^2)+(32x^5-120x)\\cos(x^2)$ & 0\\\\\n        \\hline\n        6 & $(720x^2-64x^6)\\sin(x^2)+(480x^4-120)\\cos(x^2)$ & $-120$\\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Derivatives of $\\sin(x^2)$.}\n    \\label{tab:2}\n\\end{table}\n\nThis table could go on forever, but we will curtail it at derivative number 6 since Section 2.1 is all about \\emph{restricted} estimates. By finding more and more derivatives, more and more accuracy for more and more values can be achieved. Before we can begin constructing a polynomial to these specifications, let's investigate some calculus properties of polynomials. This will be done in a theorem-proof$^[$\\footnote{These will not be formal, deductive proofs, but rather examples from which the theorem will be inferred to hold.}$^]$ fashion, and these theorems will be useful when constructing the Taylor series.\\par\n\\subparagraph{Theorem 1} The value of the $n^\\text{th}$ derivative of a polynomial evaluated at $x=0$ is based solely on that polynomial's $n^\\text{th}$ degree term.\n\\subparagraph{Justification} Let's take the polynomial $k_3x^3+k_2x^2+k_1x+k_0$, where all $k$ values are constants. This polynomial could also be written as $$0x^n+0x^{n-1}+\\ldots+0x^4+k_3x^3+k_2x^2+k_1x^1+k_0x^0$$This is the version that we will consider. Let's take the zeroeth derivative, i.e., the function itself evaluated at $x=0$. This can easily be confirmed to be $k_0$ as $0^n$ is zero except when $n=0$, where $0^0$ becomes 1.\\par\nLet's find the first derivative. By the power rule, the first derivative is$$0nx^{n-1}+0(n-1)x^{n-2}+\\ldots+0(4)x^3+3k_3x^2+2k_2x^1+1k_1x^0+0$$Note that $0n$, $0(n-1)$, and $0(4)$ are all equal to 0. Evaluating this first derivative at $x=0$ yields $k_1$.\\par\nLet's find the second derivative. By the power rule, the second derivative is$$0n(n-1)x^{n-2}+0(n-1)(n-2)x^{n-3}+\\ldots+0(4)(3)x^2+3(2)k_3x^1+2(1)k_2x^0+0+0$$Note that, similar to last time, many coefficients simplify. Evaluating this second derivative at $x=0$ yields $2k_2$.\\par\nAt this point, a pattern should be emerging. With each successive derivative, the constant (the only thing that shows up at $x=0$) becomes zero, the $x^1$ term's coefficient becomes the constant, and each term is raised to an exponent 1 less in value than before. Through the progression shown (and an imaginary future progression), it can be seen that the $n^\\text{th}$ derivative at $x=0$ is indeed dependent on only the $x^n$ term.\n\\subparagraph{Theorem 2} The value of the $n^\\text{th}$ derivative of a polynomial evaluated at $x=0$ can be found through analysis of the $n^\\text{th}$ degree term. Furthermore, this value is equal to $n!k$, where $k$ is the coefficient of the $n^\\text{th}$ term in the undifferentiated polynomial.\n\\subparagraph{Justification} Consider a single term in a polynomial.
Every time a derivative is taken, the power rule mandates that\n\\begin{equation*}\n    \\frac{d}{dx}(kx^n)=nkx^{n-1}\n\\end{equation*}\nAs higher derivatives are taken, this process will continue.\n\\begin{equation*}\n    \\frac{d}{dx}(nkx^{n-1})=n(n-1)kx^{n-2}\n\\end{equation*}\nThe process will go on and on until the exponent reaches the value of zero, at which point the coefficient is just a constant which converts to zero at the next higher derivative. However, this constant \\emph{is} predictable based on the initial term. The coefficient, at each higher derivative, will be the initial constant multiplied by $n$, multiplied by the integer one less than $n$, multiplied by the integer two less than $n$, and on and on. If we're looking for the $i^\\text{th}$ derivative, the coefficient of the $n^\\text{th}$ term is given by\n\\begin{equation*}\n    \\frac{n!k}{(n-i)!}\n\\end{equation*}\nBy using factorial notation, we have compressed $n(n-1)(n-2)(\\ldots)$ into a concise expression. However, to find the value of the $n^\\text{th}$ derivative, $i=n$, so the denominator becomes $(n-n)!=0!=1$, and the overall expression for the $n^\\text{th}$ derivative condenses to the following.\n\\begin{equation*}\n    n!k\n\\end{equation*}\n\\subparagraph{Theorem 3} The $x^n$ term in a polynomial whose $n^\\text{th}$ derivative evaluated at $x=0$ is $c$ will have a coefficient of $\\frac{c}{n!}$.\n\\subparagraph{Justification} This follows fairly directly from Theorem 2. Theorem 2 states that if the $n^\\text{th}$ term is $kx^n$, the $n^\\text{th}$ derivative at $x=0$ will be $n!k$. If we need to find $k$ in terms of $c$, set $c$ equal to the known value of the $n^\\text{th}$ derivative and solve.\n\\begin{align*}\n    n!k &= c\\\\\n    k &= \\frac{c}{n!}\n\\end{align*}\\par\n\\bigskip\nThe first two theorems were mainly to build up to Theorem 3, which will be used here to construct the Taylor series. We will also finally need Table \\ref{tab:2}. In the context of Theorem 3, the right column of Table \\ref{tab:2} provides a list of $n^\\text{th}$ derivatives, $c$, for which we must find a $k$ value. Table \\ref{tab:2} goes up to the $6^\\text{th}$ derivative, so this Taylor polynomial will be sixth degree. Note that Taylor series (polynomials) are written with their terms ordered by ascending degree as opposed to descending (because a full Taylor series would have an $x^\\infty$ term).\\par\nWrite a sixth degree polynomial with variable coefficients.\n\\begin{equation*}\n    c_0+c_1x+c_2x^2+c_3x^3+c_4x^4+c_5x^5+c_6x^6\n\\end{equation*}\nBy Theorem 3, $c_n=\\frac{k_n}{n!}$. Substitute.\n\\begin{equation*}\n    \\frac{k_0}{0!}+\\frac{k_1}{1!}x+\\frac{k_2}{2!}x^2+\\frac{k_3}{3!}x^3+\\frac{k_4}{4!}x^4+\\frac{k_5}{5!}x^5+\\frac{k_6}{6!}x^6\n\\end{equation*}\nSimplify.\n\\begin{equation*}\n    k_0+k_1x+\\frac{k_2}{2}x^2+\\frac{k_3}{6}x^3+\\frac{k_4}{24}x^4+\\frac{k_5}{120}x^5+\\frac{k_6}{720}x^6\n\\end{equation*}\nSubstitute in all values from the right column of Table \\ref{tab:2} for their corresponding $k$ value.\n\\begin{equation*}\n    0+0x+\\frac{2}{2}x^2+\\frac{0}{6}x^3+\\frac{0}{24}x^4+\\frac{0}{120}x^5+\\frac{-120}{720}x^6\n\\end{equation*}\nSimplify.\n\\begin{equation}\n    x^2-\\frac{1}{6}x^6\n\\end{equation}
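As a quick cross-check of equation 28 (ours; SymPy assumed), a computer algebra system produces the same polynomial.\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\n# Maclaurin expansion of sin(x^2) through degree 7.\nprint(sp.series(sp.sin(x**2), x, 0, 8))   # x**2 - x**6/6 + O(x**8)\n\\end{verbatim}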
Let's evaluate how close this polynomial is to the original function. Figure \\ref{fig:6} shows three graphs of the function with $y=0$, $y=x^2$, and all of equation 28 overlain.\n\n\\begin{figure}[h!]\n    \\centering\n    \\begin{subfigure}[b]{0.3\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/y-0.png}\n        \\caption{$\\sin(x^2)$ versus $y=0$.}\n        \\label{fig:6a}\n    \\end{subfigure}\\hspace{1.5em}\n    \\begin{subfigure}[b]{0.3\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/y-x2.png}\n        \\caption{$\\sin(x^2)$ versus $y=x^2$.}\n        \\label{fig:6b}\n    \\end{subfigure}\\hspace{1.5em}\n    \\begin{subfigure}[b]{0.3\\linewidth}\n        \\includegraphics[width=\\linewidth]{Blender/y-28.png}\n        \\caption{$\\sin(x^2)$ versus equation 28.}\n        \\label{fig:6c}\n    \\end{subfigure}\n    \\caption{Three levels of accuracy in approximating $\\sin(x^2)$ by Taylor series.}\n    \\label{fig:6}\n\\end{figure}\n\nSimilar to Example 16, a Taylor series is better at finding area (definite integrals) than indefinite integrals, though it is possible to achieve a decent one over a range with a great deal of terms. The key to using a Taylor series for definite integrals is that it is a polynomial which approximates the behavior of the function in question, and a polynomial can be easily integrated via the antipower rule between $a$ and $b$. This integral is the approximation. Finding more accuracy requires finding more terms in the Taylor series, as opposed to using more rectangles as with the Riemann sum. However, finding the indefinite integral of the Taylor series will give an approximation of the indefinite integral of the function, as well.\\par\nLet's integrate the Taylor series on $\\left[0,\\sqrt{\\pi}\\right]$. To be comparable to the Riemann sum in Example 16, I continued the Taylor series to 4 terms. Write the integrable equation as follows.\n\\begin{equation*}\n    A=\\int_0^{\\sqrt{\\pi}} \\left(x^2-\\frac{1}{6}x^6+\\frac{1}{120}x^{10}-\\frac{1}{5040}x^{14}\\right)\\, dx\n\\end{equation*}\nEvaluate this using the techniques discussed in Example 1. This process will not be displayed here; a short numerical script below reproduces it. However, it yields the following, final sum which is displayed below (equation 29). It should be noted that this answer is accurate to two decimal places whereas the Riemann sum was only accurate to one (the exact area between $\\sin(x^2)$ and the $x$-axis on $\\left[0,\\sqrt{\\pi}\\right]$ as an abbreviated decimal is $0.894831\\ldots$).\n\\begin{equation}\n    A=0.89\n\\end{equation}
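The elided evaluation is short enough to script (ours; Python's math module assumed).\n\\begin{verbatim}\nimport math\n\n# Antipower rule applied to all four Taylor terms, evaluated\n# from 0 to sqrt(pi); the lower bound contributes nothing.\nb = math.sqrt(math.pi)\narea = b**3 / 3 - b**7 / 42 + b**11 / 1320 - b**15 / 75600\nprint(round(area, 2))   # 0.89\n\\end{verbatim}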
\\paragraph{Summary} Throughout this section, Riemann sums and Taylor series were explored as methods of approximating definite integrals for functions that are not algebraically integrable. Taylor series converge far faster, but also require more setup and a deeper knowledge of calculus. The next section will explore a way of more quickly generating a Taylor series and coming closer to an \"exact value\" expression as opposed to repeating decimals.\n\\newpage\n\n\n\\subsection{Sigma Notation}\n\\paragraph{Introduction} This section contains only one example. The theory of Sigma notation will be developed, followed by an indefinite and definite evaluation of the integrable equation.\n\\ex This example is concerned with integrating the following function.$$y=\\text{e}^{-x^2}$$\nA Taylor \\emph{series} (as opposed to the Taylor \\emph{polynomials} that have been explored) is nothing more than an infinite sum. The derivatives in Table \\ref{tab:2} became very messy very quickly, but some functions have Taylor polynomials in which a clear pattern emerges$^[$\\footnote{For the purposes of this paper, convergence will be assumed and not proved. Examples are chosen that converge over $\\mathbb{R}$.}$^]$.\\par\nDifferentiating $\\text{e}^{-x^2}$ could become a difficult task very quickly. However, differentiating the parent function ($\\text{e}^x$) could not be simpler. $\\text{e}^x$ is its own derivative, so the $n^\\text{th}$ derivative of $\\text{e}^x$ evaluated at $x=0$ will always be $1$ since $\\text{e}^0=1$.\\par\nLet's begin deriving a Taylor series. To follow a relatable process to Example 17, make a table of the derivatives of $\\text{e}^x$ (not $\\text{e}^{-x^2}$) and their values at $x=0$ (Table \\ref{tab:3}).\n\n\\begin{table}[h!]\n    \\centering\n    \\begin{tabular}{|c|c|c|}\n        \\hline\n        \\textbf{Derivative Number} & \\textbf{Function} & \\textbf{Value at }$\\mathbf{x=0}$\\\\\n        \\hline\n        0 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        1 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        2 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        3 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        4 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        5 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n        6 & $\\text{e}^x$ & $1$\\\\\n        \\hline\n    \\end{tabular}\n    \\caption{Derivatives of $\\text{e}^x$.}\n    \\label{tab:3}\n\\end{table}\n\nWrite a sixth degree polynomial with variable coefficients.\n\\begin{equation*}\n    c_0+c_1x+c_2x^2+c_3x^3+c_4x^4+c_5x^5+c_6x^6\n\\end{equation*}\nBy Theorem 3, $c_n=\\frac{k_n}{n!}$. Substitute.\n\\begin{equation}\n    \\frac{k_0}{0!}+\\frac{k_1}{1!}x+\\frac{k_2}{2!}x^2+\\frac{k_3}{3!}x^3+\\frac{k_4}{4!}x^4+\\frac{k_5}{5!}x^5+\\frac{k_6}{6!}x^6\n\\end{equation}\nSimplify.\n\\begin{equation*}\n    k_0+k_1x+\\frac{k_2}{2}x^2+\\frac{k_3}{6}x^3+\\frac{k_4}{24}x^4+\\frac{k_5}{120}x^5+\\frac{k_6}{720}x^6\n\\end{equation*}\nSubstitute in all values from the right column of Table \\ref{tab:3} for their corresponding $k$ value.\n\\begin{equation*}\n    1+1x+\\frac{1}{2}x^2+\\frac{1}{6}x^3+\\frac{1}{24}x^4+\\frac{1}{120}x^5+\\frac{1}{720}x^6\n\\end{equation*}\nSimplify.\n\\begin{equation}\n    1+x+\\frac{x^2}{2}+\\frac{x^3}{6}+\\frac{x^4}{24}+\\frac{x^5}{120}+\\frac{x^6}{720}\n\\end{equation}\nAt this point, a pattern should be emerging. For starters, the $n^\\text{th}$ term clearly has an $x^n$ term. The coefficient may be a little more elusive, but referring to equation 30 reveals its denominator to be $n!$. With these two facts, it is possible to state that the $n^\\text{th}$ term will be\n\\begin{equation*}\n    \\frac{x^n}{n!}\n\\end{equation*}\nThis expression of a term as a function of $n$ makes it possible to compress equation 31 into Sigma notation$^[$\\footnote{Sigma notation (or summation notation) is a method of writing a finite or infinite sum in a concise manner. The argument is a function of a variable that progresses from $a$ to $b$ among the integers, $\\mathbb{Z}$.}$^]$.\\par\nLet's familiarize ourselves with Sigma notation before tackling equation 31. Consider the following expression.\n\\begin{equation*}\n    \\sum_{n=1}^4 n^2\n\\end{equation*}\nThere is a lot of information to unpack here. The \\textbf{function} is $n^2$. The \\textbf{initial value} is $1$. The \\textbf{final value} is $4$.
To begin expanding this expression, plug in the initial value for $n$.\n\\begin{equation*}\n    \\sum_{n=1}^4 n^2 = 1^2\\ldots\n\\end{equation*}\nThen, plug in the integer following the initial value for $n$. Add this value to the first term.\n\\begin{equation*}\n    \\sum_{n=1}^4 n^2 = 1^2+2^2\\ldots\n\\end{equation*}\nThe reader may now be able to see where this is going. Add the function of $3$ and the function of $4$. At $4$, we are finished since the final value is equal to $4$.\n\\begin{equation*}\n    \\sum_{n=1}^4 n^2 = 1^2+2^2+3^2+4^2\n\\end{equation*}\nIn this case, the right side can be simplified.\n\\begin{equation*}\n    \\sum_{n=1}^4 n^2 = 30\n\\end{equation*}\nReturning to equation 31, substitute in the generalized $n^\\text{th}$ term that follows it, change the initial value to 0, and change the final value to 6. \n\\begin{equation*}\n    \\sum_{n=0}^6 \\frac{x^n}{n!} = 1+x+\\frac{x^2}{2}+\\frac{x^3}{6}+\\frac{x^4}{24}+\\frac{x^5}{120}+\\frac{x^6}{720}\n\\end{equation*}\nThe left side of the above equation can be confirmed to be equal to the right side (this will be left as an exercise to the reader). By using Sigma notation, a sixth degree Taylor polynomial has been compressed into a tidy package. Furthermore, signifying a higher degree Taylor polynomial only requires increasing the final value in the expression. In fact, Sigma notation makes it feasible to convey an infinite series, which is directly equivalent to $\\text{e}^x$.\n\\begin{equation*}\n    \\sum_{n=0}^\\infty \\frac{x^n}{n!} = \\text{e}^x\n\\end{equation*}\\par\n\\bigskip\nNow, how does all of this relate to $\\text{e}^{-x^2}$? The answer lies in Example 4. To reiterate, the choice of $x$ as a variable is arbitrary. As long as it is used consistently, any letter can be chosen. For example, $u$.\n\\begin{equation*}\n    \\sum_{n=0}^\\infty \\frac{u^n}{n!} = \\text{e}^u\n\\end{equation*}\nThe above statement is just as true as the statement two displays above. However, as of right now, $u$ varies over the real numbers, $\\mathbb{R}$, from $-\\infty$ to $\\infty$. If necessary, though, $u$ can vary in a different manner. For example, we could define$$u=-x^2$$As long as this definition is consistently substituted, the statement will remain true. Substitute the above definition of $u$.\n\\begin{equation*}\n    \\sum_{n=0}^\\infty \\frac{(-x^2)^n}{n!} = \\text{e}^{-x^2}\n\\end{equation*}\nAt this point, we have found an acceptable Sigma notation representation of the Taylor series for $\\text{e}^{-x^2}$. However, this is not generally the form in which the statement is displayed, so a few modifications will follow.\\par\nEmploy the rule of exponents that $(ab)^m=a^mb^m$.\n\\begin{equation*}\n    \\sum_{n=0}^\\infty \\frac{(-1)^n(x^2)^n}{n!} = \\text{e}^{-x^2}\n\\end{equation*}\nEmploy the rule of exponents that $(a^m)^n=a^{mn}$.\n\\begin{equation*}\n    \\sum_{n=0}^\\infty \\frac{(-1)^n(x^{2n})}{n!} = \\text{e}^{-x^2}\n\\end{equation*}\nMove the oscillator$^[$\\footnote{\\emph{Oscillator} is another term for $(-1)^n$. This is a common occurrence in Sigma notation for Taylor series because of its ability to oscillate (change) whether the next term is added or subtracted for each successive term.}$^]$ out of the numerator. 
Note that it is not necessary to add parentheses because all parts are multiplied, but it makes the expression visually easier to parse.\n\\begin{equation} \\label{eq:32}\n    \\sum_{n=0}^\\infty \\left((-1)^n\\cdot\\frac{x^{2n}}{n!}\\right) = \\text{e}^{-x^2}\n\\end{equation}\nEquation \\ref{eq:32} is certainly an important result in the process of integrating $\\text{e}^{-x^2}$, but it is not the last step. The left side of equation \\ref{eq:32} is only a model of $\\text{e}^{-x^2}$, not of its integral. Fortunately, by the antipower rule, integration of a polynomial is a very simple process. To reiterate from Example 1,\n\\begin{equation*}\n    \\int x^n \\, dx = \\frac{x^{n+1}}{n+1}\n\\end{equation*}\nIn fact, this statement makes integration directly possible within Sigma notation. To integrate a polynomial in Sigma notation, just integrate the $x$ power term via the antipower rule, treating $n$ as a constant (as done above).\n\\begin{align*}\n    \\text{e}^{-x^2} &= \\sum_{n=0}^\\infty \\left((-1)^n\\cdot\\frac{x^{2n}}{n!}\\right)\\\\\n    &\\rightarrow \\sum_{n=0}^\\infty \\left((-1)^n\\cdot\\frac{x^{2n+1}}{n!(2n+1)}\\right)+C\\tag{33}\n\\end{align*}\nAs it so happens, equation 33 does give an exact representation of the indefinite integral other than simply writing$$\\int \\text{e}^{-x^2}\\, dx$$That being said, Sigma notation also works with definite integrals, as was alluded to earlier. In a strikingly similar fashion to the second part of the Fundamental Theorem of Calculus, the following final statement can be made (equation 34). Logically, this makes sense --- with the variable $x$, the series converges to the antiderivative as a function of $x$, so substituting a single value for $x$ makes it converge to the antiderivative evaluated at that value.\n\\begin{equation}\\tag{34}\n    \\int_a^b \\text{e}^{-x^2}\\, dx = \\sum_{n=0}^\\infty \\left((-1)^n\\cdot\\frac{b^{2n+1}}{n!(2n+1)}\\right)-\\sum_{n=0}^\\infty \\left((-1)^n\\cdot\\frac{a^{2n+1}}{n!(2n+1)}\\right)\n\\end{equation}\n\\paragraph{Summary} Throughout this section, infinite series have been shown to be the solution to every type of one-variable function discussed at the Calculus I-II level. Note that Taylor series would work for any example in this paper, though not every one would compress to Sigma notation as cleanly (or at all).\n\\newpage\n\n\n\n\\setcounter{secnumdepth}{0}\n\\begin{center}\n\\section{Conclusion}\n\\end{center}\nThis paper has explored eighteen wildly different functions of one variable. A great deal of the examples required ingenious rewrites to transform them to a form that could be integrated by a simpler rule. 
However, when that failed, there were approximations of difficult functions (Taylor polynomials) and more exact expressions (Taylor series), but still with their own drawbacks.\\par\nI hope that this paper was enjoyable, informative, and clear.\n\n\\vspace{4em}\n\\noindent All figures are either the designs of the author, created in Blender 3D, or they are exports from Desmos, an online graphing calculator.\n\n\n\n\n\\end{document}", "meta": {"hexsha": "f89704fc83ddea3690b36b199a3ae005fd4c2997", "size": 93343, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Class1Var.tex", "max_stars_repo_name": "shadypuck/Class1Var", "max_stars_repo_head_hexsha": "9357cfc00fbb9e9395dd9a47eb742ac7814feca3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Class1Var.tex", "max_issues_repo_name": "shadypuck/Class1Var", "max_issues_repo_head_hexsha": "9357cfc00fbb9e9395dd9a47eb742ac7814feca3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Class1Var.tex", "max_forks_repo_name": "shadypuck/Class1Var", "max_forks_repo_head_hexsha": "9357cfc00fbb9e9395dd9a47eb742ac7814feca3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 74.3768924303, "max_line_length": 1477, "alphanum_fraction": 0.7211467384, "num_tokens": 27876, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5564917446601975}}
{"text": "\\documentclass[]{BasiliskReportMemo}\n\\usepackage{AVS}\n\n\\newcommand{\\submiterInstitute}{Autonomous Vehicle Simulation (AVS) Laboratory,\\\\ University of Colorado}\n\n\\newcommand{\\ModuleName}{MRP\\_Steering}\n\\newcommand{\\subject}{PRV Steering ADCS Control Module}\n\\newcommand{\\status}{Initial Documentation}\n\\newcommand{\\preparer}{H. Schaub}\n\\newcommand{\\summary}{This module uses the PRV Steering control logic to determine the ADCS control torque vector $\\bm L_{r}$.}\n\n\n\\begin{document}\n\n\n\\makeCover\n\n\n%\n%\tenter the revision documentation here\n%\tto add more lines, copy the table entry and the \\hline, and paste after the current entry.\n%\n\\pagestyle{empty}\n{\\renewcommand{\\arraystretch}{2}\n\\noindent\n\\begin{longtable}{|p{0.5in}|p{4.5in}|p{1.14in}|}\n\\hline\n{\\bfseries Rev}: & {\\bfseries Change Description} & {\\bfseries By} \\\\\n\\hline\nDraft & Initial Documentation Draft & H. Schaub \\\\\n0.1 & Updated the sign definition of $\\bm L_{r}$ & H. Schaub \\\\\n\\hline\n\n\\end{longtable}\n}\n\n\\newpage\n\\setcounter{page}{1}\n\\pagestyle{fancy}\n\n\\tableofcontents\n~\\\\ \\hrule ~\\\\\n\n\n\n\\section{Initialization}\nSimply call the module reset function prior to using this control module.  This will reset the prior function call time variable, and reset the attitude error integral measure.  The control update period $\\Delta t$ is evaluated automatically.  \n\n\n\\section{Steering Law Goals}\nThis technical note develops a new MRP based steering law that drives a body frame \\frameDefinition{B} towards a time varying reference frame \\frameDefinition{R}. The inertial frame is given by \\frameDefinition{N}.   The RW coordinate frame is given by $\\mathcal{W_{i}}:\\{ \\hat{\\bm g}_{s_{i}}, \\hat{\\bm g}_{t_{i}}, \\hat{\\bm g}_{g_{i}} \\}$.  The   Using MRPs, the overall control goal is \n\\begin{equation}\n\t\\label{eq:MS:1}\n\t\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} \\rightarrow 0\n\\end{equation}\nThe reference frame orientation $\\bm \\sigma_{\\mathcal{R}/\\mathcal{N}}$, angular velocity $\\bm\\omega_{\\mathcal{R}/\\mathcal{N}}$ and inertial angular acceleration $\\dot{\\bm \\omega}_{\\mathcal{R}/\\mathcal{N}}$ are assumed to be known. \n\nThe rotational equations of motion of a rigid spacecraft with $N$ Reaction Wheels (RWs) attached are given by\\cite{schaub}\n\\begin{equation}\n\t\\label{eq:MS:2}\n\t[I_{RW}] \\dot{\\bm \\omega} = - [\\tilde{\\bm \\omega}] \\left( \n\t[I_{RW}] \\bm\\omega + [G_{s}] \\bm h_{s} \n\t\\right) - [G_{s}] \\bm u_{s} + \\bm L\n\\end{equation}\nwhere  the inertia tensor $[I_{RW}]$ is defined as\n\\begin{equation}\n\t\\label{eq:MS:3}\n\t[I_{RW}] = [I_{s}] + \\sum_{i=1}^{N} \\left (J_{t_{i}} \\hat{\\bm g}_{t_{i}} \\hat{\\bm g}_{t_{i}}^{T} + J_{g_{i}} \\hat{\\bm g}_{g_{i}} \\hat{\\bm g}_{g_{i}}^{T}\n\t\\right)\n\\end{equation}\nThe spacecraft inertial without the $N$ RWs is $[I_{s}]$, while $J_{s_{i}}$, $J_{t_{i}}$ and $J_{g_{i}}$ are the RW inertias about the body fixed RW axis $\\hat{\\bm g}_{s_{i}}$ (RW spin axis), $\\hat{\\bm g}_{t_{i}}$ and $\\hat{\\bm g}_{g_{i}}$.  
The $3\\times N$ projection matrix $[G_{s}]$ is then defined as\n\\begin{equation}\n\t\\label{eq:MS:4}\n\t[G_{s}] = \\begin{bmatrix}\n\t\t\\cdots \\leftexp{B}{\\hat{\\bm g}}_{s_{i}} \\cdots\n\t\\end{bmatrix}\n\\end{equation}\nThe RW inertial angular momentum vector $\\bm h_{s}$ is defined as\n\\begin{equation}\n\t\\label{eq:MS:5}\n\th_{s_{i}} = J_{s_{i}} (\\omega_{s_{i}} + \\Omega_{i})\n\\end{equation}\nHere $\\Omega_{i}$ is the $i^{\\text{th}}$ RW spin relative to the spacecraft, and the body angular velocity is written in terms of body and RW frame components as\n\\begin{equation}\n\t\\label{eq:MS:6}\n\t\\bm\\omega = \\omega_{1} \\hat{\\bm b}_{1} + \\omega_{2} \\hat{\\bm b}_{2} + \\omega_{3} \\hat{\\bm b}_{3}\n\t= \\omega_{s_{i}} \\hat{\\bm g}_{s_{i}} +  \\omega_{t_{i}} \\hat{\\bm g}_{t_{i}} +  \\omega_{g_{i}} \\hat{\\bm g}_{g_{i}}\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\\section{Steering Law}\n\\subsection{Steering Law Stability Requirement}\nAs is commonly done in robotic applications where the steering laws are of the form $\\dot{\\bm x} = \\bm u$, this section derives a kinematic-based attitude steering law.  Let us consider the simple Lyapunov candidate function\\cite{Tsiotras:1994lr,schaub}\n\\begin{equation}\n\t\\label{eq:MS:7}\n\tV ( \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} ) = 2 \\ln \\left ( 1 + \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} ^{T} \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} \\right)\n\\end{equation}\nin terms of the MRP attitude tracking error $\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}}$.  Using the MRP differential kinematic equations\n\\begin{equation}\n\t\\label{eq:MS:8}\n\t\\dot{\\bm\\sigma}_{\\mathcal{B}/\\mathcal{R}} = \\frac{1}{4}[B(\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}})] \\leftexp{B}{\\bm\\omega}_{\\mathcal{B}/\\mathcal{R}}\n\t= \\frac{1}{4} \\left[\n\t(1-\\sigma_{\\mathcal{B}/\\mathcal{R}}^{2})[I_{3\\times 3}] + 2 [\\tilde{\\bm\\sigma}_{\\mathcal{B}/\\mathcal{R}}] + 2 \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}}^{T}\n\t\\right] \\leftexp{B}{\\bm\\omega}_{\\mathcal{B}/\\mathcal{R}}\n\\end{equation}\nwhere $\\sigma_{\\mathcal{B}/\\mathcal{R}}^{2} = \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}}^{T} \\bm\\sigma_{\\mathcal{B}/\\mathcal{R}}$, the time derivative of $V$ is\n\\begin{equation}\n\t\\label{eq:MS:9}\n\t\\dot V =\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}^{T} \\left(  \\leftexp{B}{ \\bm\\omega}_{\\mathcal{B}/\\mathcal{R}}  \\right)\n\\end{equation}\n\nTo create a kinematic steering law, let ${\\mathcal{B}}^{\\ast}$ be the desired body orientation, and $\\bm\\omega_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}$ be the desired angular velocity vector of this body orientation relative to the reference frame $\\mathcal{R}$.  The steering law requires an algorithm for the desired body rates $\\bm\\omega_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}$ relative to the reference frame that makes $\\dot V$ in Eq.~\\eqref{eq:MS:9} negative definite.  
For this purpose, let us select\n\\begin{equation}\n\t\\label{eq:MS:10}\n\t\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} = - \\bm f(\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}})\n\\end{equation}\nwhere $\\bm f(\\bm\\sigma)$ is an even function such that \n\\begin{equation}\n\t\\label{eq:MS:11}\n\t\\bm \\sigma ^{T} \\bm f(\\bm \\sigma) > 0\n\\end{equation}\nThe Lyapunov rate simplifies to the negative definite expression:\n\\begin{equation}\n\t\\label{eq:MS:12}\n\t\\dot V = -  \\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}^{T} \\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) < 0\n\\end{equation}\n\n\n\n%\\subsection{Saturated  MRP Steering Law}\n%A very simple example would be to set\n%\\begin{equation}\n%\t\\label{eq:MS:13}\n%\t\\bm f (\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) =  K_{1} \\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}\n%\\end{equation}\n%where $K_{1}>0$.  \n%This yields a kinematic control where the desired body rates are proportional to the MRP attitude error measure.  If the rate should saturate, then $\\bm f()$ could be defined as\n%\\begin{equation}\n%\t\\label{eq:MS:14}\n%\t\\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) = \\begin{cases}\n%\t\tK_{1} \\sigma_{i} \t\t&\\text{if } |K_{1} \\sigma_{i}| \\le \\omega_{\\text{max}} \\\\\n%\t\t\\omega_{\\text{max}} \\text{sgn}(\\sigma_{i}) &\\text{if } |K_{1} \\sigma_{i}| > \\omega_{\\text{max}}\n%\t\\end{cases}\n%\\end{equation}\n%where\n%$$\n%\t\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}} = (\\sigma_{1}, \\sigma_{2}, \\sigma_{3})^{T}\n%$$\n%A smoothly saturating function is given by\n%\\begin{equation}\n%\t\\label{eq:MS:15}\n%\t\\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) = \\arctan \\left(\n%\t\t\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}} \\frac{K_{1} \\pi}{2  \\omega_{\\text{max}}}\n%\t\\right) \\frac{2 \\omega_{\\text{max}}}{\\pi}\n%\\end{equation}\n%where\n%\\begin{equation}\n%\t\\label{eq:MS:15.0}\n%\t\\bm f(\\bm\\sigma_{\\mathcal{B}/\\mathcal{R}}) = \\begin{pmatrix}\n%\t\tf(\\sigma_{1})\\\\ f(\\sigma_{2})\\\\ f(\\sigma_{3})\n%\t\t\\end{pmatrix}\n%\\end{equation}\n%Here as $\\sigma_{i} \\rightarrow \\infty$ then the function $f$ smoothly converges to the maximum speed rate $\\pm  \\omega_{\\text{max}}$.   For small $|\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}|$, this function linearizes to\n%\\begin{equation}\n%\t\\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) \\approx K_{1} \\bm \\sigma_{\\mathcal{B}/\\mathcal{R}} + \\text{ H.O.T}\n%\\end{equation}\n%\n%If the MRP shadow set parameters are used to avoid the MRP singularity at 360\\dg, then $|\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}|$ is upper limited by 1.  To control how rapidly the rate commands approach the $\\omega_{\\text{max}}$ limit, Eq.~\\eqref{eq:MS:15} is modified to include a cubic term:\n%\\begin{equation}\n%\t\\label{eq:MS:15.1}\n%\t f( \\sigma_{i}) = \\arctan \\left(\n%\t\t(K_{1} \\sigma_{i} +K_{3} \\sigma_{i}^{3}) \\frac{ \\pi}{2  \\omega_{\\text{max}}}\n%\t\\right) \\frac{2 \\omega_{\\text{max}}}{\\pi}\n%\\end{equation}\n%The order of the polynomial must be odd to keep $\\bm f()$ an even function.  A nice feature of Eq.~\\eqref{eq:MS:15.1} is that the control rate is saturated individually about each axis.  
If the smoothing component is removed to reduce this to a bang-bang rate control, then this would yield a Lyapunov optimal control which minimizes $\\dot V$ subject to the allowable rate constraint $\\omega_{\\text{max}}$.\n%\n%\\begin{figure}[tb]\n%\t\\centering\n%\t\\subfigure[$\\omega_{\\text{max}}$ dependency with $K_{1} = 0.1$, $K_{3} = 1$]\n%\t{\\label{fig:fSigmaOptionsA} \n%\t\\includegraphics[]{Figures/fSigmaOptionsA}}  \n%\t\\subfigure[$K_{1}$ dependency with $\\omega_{\\text{max}}  = 1{\\dg}/s$, $K_{3} = 1$]\n%\t{\\label{fig:fSigmaOptionsB}\n%\t\\includegraphics[]{Figures/fSigmaOptionsB}}\n%\t\\\\\n%\t\\subfigure[$K_{3}$ dependency with $\\omega_{\\text{max}}  = 1{\\dg}/s$, $K_{1} = 0.1$]\n%\t{\\label{fig:fSigmaOptionsC}\n%\t\\includegraphics[]{Figures/fSigmaOptionsC}} \n%\t\\caption{Illustrations of MRP Steering Parameters Influence.}\n%\t\\label{fig:fSigmaOptions}\n%\\end{figure}\n%\n%Figure~\\ref{fig:fSigmaOptions} illustrates how the parameters $\\omega_{\\text{max}}$, $K_{1}$ and $K_{3}$ impact the steering law behavior.  The maximum steering law rate commands are easily set through the $\\omega_{\\text{max}}$ parameters.  The gain $K_{1}$ controls the linear stiffness when the attitude errors have become small, while $K_{3}$ controls how rapidly the steering law approaches the speed command limit.\n%\n%The required velocity servo loop design is aided by knowing the body-frame derivative of $\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}$ to implement feed-forward components.  Using the $\\bm f()$ function definition in Eq.~\\eqref{eq:MS:15.0}, this requires the time derivatives of $f(\\sigma_{i})$.  \n%\\begin{equation}\n%\t\\frac{\\leftexp{B}{\\D (\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} ) }}{\\D t} =\n%\t{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} '\n%\t= - \\frac{\\partial \\bm f}{\\partial \\bm \\sigma_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}} \\dot{\\bm \\sigma}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}\n%\t= - \\begin{pmatrix}\n%\t\t\\frac{\\partial  f}{\\partial  \\sigma_{1}} \\dot{ \\sigma}_{1} \\\\\n%\t\t\\frac{\\partial  f}{\\partial  \\sigma_{2}} \\dot{ \\sigma}_{2} \\\\\n%\t\t\\frac{\\partial  f}{\\partial  \\sigma_{3}} \\dot{ \\sigma}_{3} \n%\t\\end{pmatrix}\n%\\end{equation}\n%where\n%\\begin{equation}\n%\t\\dot{\\bm\\sigma}\t_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} = \n%\t\\begin{pmatrix}\n%\t\t\\dot\\sigma_{1}\\\\\n%\t\t\\dot\\sigma_{2}\\\\\n%\t\t\\dot\\sigma_{3}\n%\t\\end{pmatrix} = \n%\t \\frac{1}{4}[B(\\bm\\sigma_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}})] \n%\t\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}\n%\\end{equation}\n%Using the general $f()$ definition in Eq.~\\eqref{eq:MS:15.1}, its sensitivity with respect to $\\sigma_{i}$ is\n%\\begin{equation}\n%\t\\frac{\n%\t\t\\partial f\n%\t}{\n%\t\t\\partial \\sigma_{i}\n%\t} = \n%\t\\frac{\n%\t(K_{1}  + 3 K_{3} \\sigma_{i}^{2})\n%\t}{\n%\t1+(K_{1}\\sigma_{i} + K_{3} \\sigma_{i}^{3})^{2} \\left(\\frac{\\pi}{2 \\omega_{\\text{max}}}\\right)^{2}\n%\t}\n%\\end{equation}\n%\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Principal Angle Steering Law}\nConsider the following saturation function $\\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}})$ which is collinear with the principal rotation axis  $\\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}$, and the magnitude scales uniformly with the principal rotation angle $\\phi_{\\mathcal{B}/\\mathcal{R}}$:\n\\begin{equation}\n\t\\label{eq:MS:16}\n\t\\bm f (\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}) = \\hat{\\bm 
e}_{\\mathcal{B}/\\mathcal{R}} f(\\phi_{\\mathcal{B}/\\mathcal{R}})\n\\end{equation}\nThe scalar function $f(\\phi_{\\mathcal{B}/\\mathcal{R}})$ is an even function where $f(\\phi_{\\mathcal{B}/\\mathcal{R}})\\phi_{\\mathcal{B}/\\mathcal{R}} \\ge 0$.  Note that $\\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}$ is ill-defined for a zero principal rotation angle.  This saturation function will need a check to avoid numerical issues right at the zero angle condition.\n\nUsing the MRP definition in terms of principal rotation angle and axis\n\\begin{equation}\n\t\\label{eq:MS:17}\n\t\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}} = \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \\tan\\left(\\frac{\\phi_{\\mathcal{B}/\\mathcal{R}}}{4}\\right)\n\\end{equation}\nand substituting into Eq.~\\eqref{eq:MS:12}, the Lyapunov rate for this case is\n\\begin{equation}\n\t\\label{eq:MS:18}\n\t\\dot V =  -  \\bm \\sigma_{\\mathcal{B}/\\mathcal{R}}^{T} \\bm f(\\bm \\sigma_{\\mathcal{B}/\\mathcal{R}})\n\t= -   \\tan\\left(\\frac{\\phi_{\\mathcal{B}/\\mathcal{R}}}{4}\\right) \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}^{T} \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} f(\\phi_{\\mathcal{B}/\\mathcal{R}})\n\t= -   \\tan\\left(\\frac{\\phi_{\\mathcal{B}/\\mathcal{R}}}{4}\\right)f(\\phi_{\\mathcal{B}/\\mathcal{R}}) < 0\n\\end{equation}\nThis $\\dot V$ is negative definite in terms of the attitude error, thus yielding asymptotic convergence.  The saturation function in Eq.~\\eqref{eq:MS:16} has the convenient property that the resulting steering law employs an eigenaxis rotation towards the desired reference orientation. The benefit of a component-wise MRP steering law is that small errors are reduced quickly, thus reducing the overall tracking error more quickly.  The benefit of the eigenaxis approach is that the closed-loop attitude path towards the reference is more predictable.  
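\n\nAs a concrete aside (a minimal sketch for illustration, not the module's flight implementation; the function name and tolerance are assumptions), Eq.~\\eqref{eq:MS:17} can be inverted to recover the principal rotation elements that this steering law requires, including the zero-angle guard mentioned above.\n\\begin{verbatim}\nimport numpy as np\n\n# Recover the principal rotation angle and axis from an MRP vector by\n# inverting sigma = e_hat * tan(phi/4).  The axis is ill-defined at\n# phi = 0, so a guard returns an arbitrary unit axis in that case.\ndef mrp_to_principal(sigma, eps=1.0e-12):\n    s = np.linalg.norm(sigma)\n    phi = 4.0 * np.arctan(s)\n    if s < eps:\n        return phi, np.array([1.0, 0.0, 0.0])\n    return phi, np.asarray(sigma) / s\n\\end{verbatim}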
\n\n\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\subfigure[$\\omega_{\\text{max}}$ dependency with $K_{1} = 0.034$, $K_{3} = 0.034$]\n\t{\\label{fig:fOptionsA} \n\t\\includegraphics[]{Figures/fOptionsA}}  \n\t\\subfigure[$K_{1}$ dependency with $\\omega_{\\text{max}}  = 1{\\dg}/s$, $K_{3} = 0.034$]\n\t{\\label{fig:fOptionsB}\n\t\\includegraphics[]{Figures/fOptionsB}}\n\t\\\\\n\t\\subfigure[$K_{3}$ dependency with $\\omega_{\\text{max}}  = 1{\\dg}/s$, $K_{1} = 0.034$]\n\t{\\label{fig:fOptionsC}\n\t\\includegraphics[]{Figures/fOptionsC}} \n\t\\caption{Illustrations of Principal Angle Steering Parameters Influence.}\n\t\\label{fig:fOptions}\n\\end{figure}\n\nConsider the $f(\\phi_{\\mathcal{B}/\\mathcal{R}})$ function given by:\n\\begin{equation}\n\t\\label{eq:MS:19}\n\tf(\\phi_{\\mathcal{B}/\\mathcal{R}}) = \n\t\\arctan \\left(\n\t\t(K_{1}\\phi_{\\mathcal{B}/\\mathcal{R}} + K_{3} \\phi_{\\mathcal{B}/\\mathcal{R}}^{3}) \\frac{\\pi}{2 \\omega_{\\text{max}}}\n\t\\right) \\frac{2 \\omega_{\\text{max}}}{\\pi}\n\\end{equation}\nThe linear approximation of this function is\n\\begin{equation}\n\tf(\\phi_{\\mathcal{B}/\\mathcal{R}}) \\approx K_{1} \\phi_{\\mathcal{B}/\\mathcal{R}} + \\text{ H.O.T}\n\\end{equation}\n\nThe resulting attitude steering law is of the form:\n\\begin{equation}\n\t\\label{eq:MS:20}\n\t\\bm\\omega_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} = -  \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \n\t\\arctan \\left(\n\t\t(K_{1}\\phi_{\\mathcal{B}/\\mathcal{R}} + K_{3} \\phi_{\\mathcal{B}/\\mathcal{R}}^{3}) \\frac{\\pi}{2 \\omega_{\\text{max}}}\n\t\\right) \\frac{2 \\omega_{\\text{max}}}{\\pi}\n\\end{equation}\nThe impacts of the steering law gains $\\omega_{\\text{max}}$, $K_{1}$ and $K_{3}$ are illustrated in Figure~\\ref{fig:fOptions}.  As the function $f(\\phi)$ returns a value of $\\pm \\omega_{\\text{max}}$ as $\\phi\\rightarrow \\infty$, the gain $\\omega_{\\text{max}}$ determines the maximum rate limit that the steering law will request.  This is illustrated in Figure~\\ref{fig:fOptionsA} where reducing the $\\omega_{\\text{max}}$ value by a factor of 2 results in half of the asymptotic rate command.  \n\nThe parameter $K_{1}$ determines the final pointing stiffness and thus the final exponential convergence of the attitude pointing error, as illustrated in Figure~\\ref{fig:fOptionsB}.  Increasing this value results in faster final convergence once the principal rotation error has reduced past the saturated $f(\\phi)$ function region.  \n\nFinally, the higher order $\\phi$ polynomial is provided to cause the $f(\\phi)$ function to saturate more quickly.  
Setting $K_{3} = 0.034$ in Figure~\\ref{fig:fOptionsC} does not change the initial slope of the rate command, but impacts how quickly the rates saturate on the maximum speed command.\n\n\nBecause $\\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}'= 0$, the body relative time derivative of the steering control is\n\\begin{equation}\n\t\\label{eq:MS:28}\n\t\\frac{\\leftexp{B}{\\D (\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} ) }}{\\D t} =\n\t{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} '\n\t= - \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \\frac{\\partial f(\\phi_{\\mathcal{B}/\\mathcal{R}})}{\\partial \\phi_{\\mathcal{B}/\\mathcal{R}}} \\dot\\phi_{\\mathcal{B}/\\mathcal{R}}\n\\end{equation}\nThe $f()$ function sensitivity is\n\\begin{equation}\n\t\\frac{\n\t\t\\partial f\n\t}{\n\t\t\\partial \\phi_{\\mathcal{B}/\\mathcal{R}}\n\t} = \n\t\\frac{\n\t(K_{1}  + 3 K_{3} \\phi_{\\mathcal{B}/\\mathcal{R}}^{2})\n\t}{\n\t1+(K_{1}\\phi_{\\mathcal{B}/\\mathcal{R}} + K_{3} \\phi_{\\mathcal{B}/\\mathcal{R}}^{3})^{2} \\left(\\frac{\\pi}{2 \\omega_{\\text{max}}}\\right)^{2}\n\t}\n\\end{equation}\nThe principal rotation angle time derivative is given by\\cite{hughes}\n\\begin{equation}\n\t\\label{eq:MS:30}\n\t\\dot\\phi_{\\mathcal{B}/\\mathcal{R}} = \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}^{T} \n\t\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} \n\\end{equation}\nSubstituting Eqs.~\\eqref{eq:MS:10} and \\eqref{eq:MS:30} into Eq.~\\eqref{eq:MS:28} yields\n\\begin{align}\n\t{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}} '\n\t&= - \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \\frac{\\partial f(\\phi_{\\mathcal{B}/\\mathcal{R}})}{\\partial \\phi_{\\mathcal{B}/\\mathcal{R}}} \n\t\\left( \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}^{T} \n\t\\leftexp{B}{\\bm\\omega}_{{\\mathcal{B}}^{\\ast}/\\mathcal{R}}  \\right)\n\t\\nonumber \\\\\n\t&= -\n\t \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \\frac{\\partial f(\\phi_{\\mathcal{B}/\\mathcal{R}})}{\\partial \\phi_{\\mathcal{B}/\\mathcal{R}}} \n\t \\left( \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}}^{T} \n\t (-\\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} f(\\phi_{\\mathcal{B}/\\mathcal{R}}))\n\t\\right)\n\t\\nonumber \\\\\n\t&= \\hat{\\bm e}_{\\mathcal{B}/\\mathcal{R}} \\frac{\\partial f(\\phi_{\\mathcal{B}/\\mathcal{R}})}{\\partial \\phi_{\\mathcal{B}/\\mathcal{R}}} \n\t  f(\\phi_{\\mathcal{B}/\\mathcal{R}})\n\\end{align}\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Angular Velocity Servo Sub-System}\nTo implement the kinematic steering control, a servo sub-system must be included which will produce the required torques to make the actual body rates track the desired body rates.  The angular velocity tracking error vector is defined as\n\\begin{equation}\n\t\\label{eq:MS:32}\n\t\\delta \\bm \\omega = \\bm\\omega_{\\mathcal{B}/\\mathcal{B}^{\\ast}} = \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} - \\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{N}}\n\\end{equation}\nwhere the $\\mathcal{B}^{\\ast}$ frame is the desired body frame from the kinematic steering law.  Note that\n\\begin{equation}\n\t \\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{N}} =  \\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{R}} +  \\bm\\omega_{\\mathcal{R}/\\mathcal{N}}\n\\end{equation}\nwhere $\\bm\\omega_{\\mathcal{R}/\\mathcal{N}}$ is obtained from the attitude navigation solution, and $ \\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{R}}$ is the kinematic steering rate command.  
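\n\nFor reference, the steering-law pieces above can be collected into a brief sketch. The following Python fragment illustrates the saturation function of Eq.~\\eqref{eq:MS:19}, the rate command of Eq.~\\eqref{eq:MS:20}, and the feedforward derivative of Eq.~\\eqref{eq:MS:28}; it is an illustration only, not the module's source code, and the function and argument names are invented here.\n\\begin{verbatim}\nimport numpy as np\n\n# Principal-angle steering command and its feedforward body derivative.\n# e_hat, phi describe the attitude error; K1, K3, omega_max are gains.\ndef steering_command(e_hat, phi, K1, K3, omega_max):\n    c = np.pi / (2.0 * omega_max)\n    u = K1 * phi + K3 * phi**3\n    f = np.arctan(u * c) / c                    # saturation function f(phi)\n    df_dphi = (K1 + 3.0 * K3 * phi**2) / (1.0 + (u * c)**2)\n    omega_cmd = -np.asarray(e_hat) * f          # commanded body rate\n    omega_cmd_prime = np.asarray(e_hat) * df_dphi * f\n    return omega_cmd, omega_cmd_prime\n\\end{verbatim}\n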
To create a rate-servo system that is robust to unmodeled torque biases, the state $\\bm z$ is defined as:\n\\begin{equation}\n\t\\label{eq:MS:34}\n\t\\bm z = \\int_{t_{0}}^{t_{f}} \\leftexp{B}{ \\delta\\bm\\omega}\\ \\D t\n\\end{equation}\n\nThe rate servo Lyapunov function is defined as \n\\begin{equation}\n\t\\label{eq:MS:35}\n\tV_{\\bm\\omega}(\\delta\\bm\\omega, \\bm z) = \\frac{1}{2} \\delta\\bm\\omega ^{T} [I_{\\text{RW}}] \\delta\\bm\\omega + \\frac{1}{2} \\bm z ^{T} [K_{I}] \\bm z\n\\end{equation}\nwhere the vector $\\delta\\bm\\omega$ and tensor $[I_{\\text{RW}}]$ are assumed to be given in body frame components, and $[K_{I}]$ is a symmetric positive definite matrix.  The time derivative of this Lyapunov function is\n\\begin{equation}\n\t\\label{eq:MS:36}\n\t\\dot V_{\\bm\\omega} = \\delta\\bm\\omega^{T} \\left(\n\t\t[I_{\\text{RW}}] \\delta\\bm\\omega' + [K_{I}] \\bm z\n\t\\right)\n\\end{equation}\nUsing the identities ${\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}}' = \\dot{\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}}$ and $ \\bm\\omega_{\\mathcal{R}/\\mathcal{N}}' =  \\dot{\\bm\\omega}_{\\mathcal{R}/\\mathcal{N}} -  {\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}} \\times  \\bm\\omega_{\\mathcal{R}/\\mathcal{N}}$,\\cite{schaub} the body frame derivative of $\\delta \\bm\\omega$ is\n\\begin{equation}\n\t\\label{eq:MS:37}\n\t\\delta\\bm \\omega '= \\dot{\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}} - \\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{R}} ' -  \\dot{\\bm\\omega}_{\\mathcal{R}/\\mathcal{N}} +  {\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}} \\times  \\bm\\omega_{\\mathcal{R}/\\mathcal{N}}\n\\end{equation}\nSubstituting Eqs.~\\eqref{eq:MS:2} and \\eqref{eq:MS:37} into the $\\dot V_{\\bm\\omega}$ expression in Eq.~\\eqref{eq:MS:36} yields\n\\begin{multline}\n\t\\label{eq:MS:38}\n\t\\dot V_{\\bm\\omega} = \\delta\\bm\\omega^{T} \\Big(\n\t\t- [\\tilde{\\bm \\omega}_{\\mathcal{B}/\\mathcal{N}}] \\left( \n\t[I_{RW}] \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} + [G_{s}] \\bm h_{s} \n\t\\right) - [G_{s}] \\bm u_{s} + \\bm L + [K_{I}] \\bm z\n\t\\\\\n\t- [I_{\\text{RW}}](\\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{R}} ' +  \\dot{\\bm\\omega}_{\\mathcal{R}/\\mathcal{N}} - {\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}} \\times  \\bm\\omega_{\\mathcal{R}/\\mathcal{N}})\n\t\\Big)\n\\end{multline}\n\n%\\begin{equation}\n%\t\\label{eq:MS:39}\n%\t\\dot V_{\\bm\\omega} = - \\delta\\bm\\omega^{T} [P]\\delta\\bm\\omega\n%\\end{equation}\nLet $[P]^{T} = [P]$ be a symmetric positive definite rate feedback gain matrix.  
The servo rate feedback control is defined as\n\\begin{multline}\n\t\\label{eq:MS:39}\n\t[G_{s}]\\bm u_{s} = [P]\\delta\\bm\\omega + [K_{I}]\\bm z - [\\tilde{\\bm\\omega}_{\\mathcal{B}^{\\ast}/\\mathcal{N}}] \n\t\\left( [I_{\\text{RW}}] \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} + [G_{s}] \\bm h_{s} \\right)\n\t\\\\\n\t- [I_{\\text{RW}}](\\bm\\omega_{\\mathcal{B}^{\\ast}/\\mathcal{R}} ' +  \\dot{\\bm\\omega}_{\\mathcal{R}/\\mathcal{N}} -  {\\bm\\omega}_{\\mathcal{B}/\\mathcal{N}} \\times  \\bm\\omega_{\\mathcal{R}/\\mathcal{N}}) + \\bm L\n\\end{multline}\nDefining $\\bm L_{r}$ as the negative of the right-hand side, this is rewritten in compact form as\n\\begin{equation}\n\t[G_{s}]\\bm u_{s} = -\\bm L_{r}\n\\end{equation}\nThe array of RW motor torques can then be solved for with the typical minimum-norm inverse\n\\begin{equation}\n\t\\bm u_{s} = [G_{s}]^{T}\\left( [G_{s}][G_{s}]^{T}\\right)^{-1}(- \\bm L_{r})\n\\end{equation}\n\n\nTo analyze the stability of this rate servo control, the $[G_{s}]\\bm u_{s}$ expression in Eq.~\\eqref{eq:MS:39} is substituted into the Lyapunov rate expression in Eq.~\\eqref{eq:MS:38}.\n\\begin{align}\n\t\\label{eq:MS:42}\n\t\\dot V_{\\omega} &= \\delta\\bm\\omega^{T} \\Big(\n\t\t- [P]\\delta\\bm\\omega - [\\tilde{\\bm \\omega}_{\\mathcal{B}/\\mathcal{N}}] \\left( \n\t[I_{RW}] \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} + [G_{s}] \\bm h_{s} \n\t\\right) \n\t+ [\\tilde{\\bm\\omega}_{\\mathcal{B}^{\\ast}/\\mathcal{N}}] \n\t\\left( [I_{\\text{RW}}] \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} + [G_{s}] \\bm h_{s} \\right)\n\t\\Big ) \n\t\\nonumber \\\\\n\t&= \\delta\\bm\\omega^{T} \\Big( - [P]\\delta\\bm\\omega\n\t- [\\widetilde{\\delta\\bm \\omega}] \\left( \n\t[I_{RW}] \\bm\\omega_{\\mathcal{B}/\\mathcal{N}} + [G_{s}] \\bm h_{s} \n\t\\right) \n\t\\Big )\n\t\\nonumber \\\\\n\t&= - \\delta\\bm\\omega ^{T} [P] \\delta\\bm\\omega < 0\n\\end{align}\nThus, in the absence of unmodeled torques, the servo control in Eq.~\\eqref{eq:MS:39} is asymptotically stabilizing in rate tracking error $\\delta\\bm\\omega$.  \n\nNext, the servo robustness to unmodeled external torques is investigated.  Let us assume that the external torque vector $\\bm L$ in Eq.~\\eqref{eq:MS:2} only approximates the true external torque, and the unmodeled component is given by $\\Delta \\bm L$.  Substituting the true equations of motion and the same servo control in Eq.~\\eqref{eq:MS:39} into the Lyapunov rate expression in Eq.~\\eqref{eq:MS:36} leads to\n\\begin{equation}\n\t\\label{eq:MS:43}\n\t\\dot V_{\\omega} = - \\delta\\bm\\omega ^{T} [P] \\delta\\bm\\omega - \\delta\\bm\\omega ^{T} \\Delta \\bm L\n\\end{equation}\nThis $\\dot V_{\\omega}$ is no longer negative definite due to the underdetermined sign of the $\\delta\\bm\\omega ^{T} \\Delta \\bm L$ components.  
Equating the Lyapunov rates in Eqs.~\\eqref{eq:MS:36} and \\eqref{eq:MS:43} yields the following servo closed loop dynamics:\n\\begin{equation}\n\t\\label{eq:MS:44}\n\t[I_{\\text{RW}}]\\delta\\bm\\omega' + [P]\\delta\\bm\\omega + [K_{I}]\\bm z = \\Delta\\bm L\n\\end{equation}\nAssuming that $\\Delta\\bm L$ is either constant as seen by the body frame, or at least varies slowly, taking a body-frame time derivative of Eq.~\\eqref{eq:MS:44} yields\n\\begin{equation}\n\t\\label{eq:MS:45}\n\t[I_{\\text{RW}}]\\delta\\bm\\omega'' + [P]\\delta\\bm\\omega' + [K_{I}]\\delta \\bm \\omega = \\Delta\\bm L' \\approx 0 \t\n\\end{equation}\nAs $[I_{\\text{RW}}]$, $[P]$ and $[K_{I}]$ are all symmetric positive definite matrices, these linear differential equations are stable, and $\\delta\\bm\\omega\\rightarrow0$ given the assumption that $\\Delta\\bm L' \\approx 0$.  \n\n\n\\bibliographystyle{unsrt}   % Number the references.\n\\bibliography{references}   % Use references.bib to resolve the labels.\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "55e0017c9f59b8ce984962fe5a54655372db5712", "size": 24838, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/fswAlgorithms/attControl/PRV_Steering/_Documentation/AVS-Sim-PRV_Steering-2016-0108.tex", "max_stars_repo_name": "ian-cooke/basilisk_mag", "max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/fswAlgorithms/attControl/PRV_Steering/_Documentation/AVS-Sim-PRV_Steering-2016-0108.tex", "max_issues_repo_name": "ian-cooke/basilisk_mag", "max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z", "max_forks_repo_path": "src/fswAlgorithms/attControl/PRV_Steering/_Documentation/AVS-Sim-PRV_Steering-2016-0108.tex", "max_forks_repo_name": "ian-cooke/basilisk_mag", "max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.0766129032, "max_line_length": 525, "alphanum_fraction": 0.6678073919, "num_tokens": 9321, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799252, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5564917318771941}}
{"text": "We have seen that matrices make real-life problems accessible to mathematical analysis. The single-variable functions you know from school are another such link. However, real-life quantities -- for example, the expected profit of a business strategy -- rarely depend on one variable only, and it is therefore important to also study \\emph{functions of several variables}. The basic ideas for doing that are the same as for the single-variable theory, but we will also encounter new concepts, new difficulties, and new potential in this chapter.\n\n\\begin{application}[Stabilisation of mechanical processes]\nAlice is practising archery. She can control the velocity of the arrow and the angle at which she shoots. Note that there are many different ways to hit the target: she could aim a very strong shot directly at the bullseye or she could shoot with less force and aim higher to compensate for the reduced velocity of the arrow. Alice's maths skills are excellent -- for any given angle $\\alpha$, she can compute the velocity $v$ required to hit the bullseye. Of all such choices $(\\alpha,v)$, Alice wants to use the most stable one. That is, the configuration $(\\alpha^*,v^*)$ that hits the bullseye and is the least sensitive to trembling, misjudging of the angle or the velocity, etc., needs to be found.\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/target.pdf}\n\\end{center}\n\\end{application}\n\nFor our modelling, we assume that the tip of the arrow is exactly level with the bullseye when it is loosed, cf. the sketch above. The target is 30 metres away. Then the height of the arrow when it hits the target is\n\\[h(\\alpha,v) = -\\frac{4414.5}{\\cos^2(\\alpha) \\, v^2} + 30 \\, \\tan\\alpha \\:. \\]\nHere, $h=0$ corresponds to hitting the bullseye. There are different possibilities for defining a measurement of ``instability'', and Alice decides to use \n\\[ f(\\alpha,v) = h_{\\alpha}^2 + 3 h_v^2 \\]\nsince she finds controlling the velocity more difficult than controlling the angle. The symbols $h_\\alpha$ and $h_v$ are the partial derivatives of $h$ and will be defined in the next section. After familiarising yourself with that concept, convince yourself that $f$ as defined above can be considered a measure of instability of\\footnote{What does it mean for a derivative to be large at a point? It means that a small change of the variable will have a large effect!} $h$. In this chapter, we will learn how to minimise functions like $f$ (instability) under constraints like $h=0$ (hitting the bullseye). The optimal angle and velocity for Alice's target practice are\n\\[\\alpha^* = 44.70^{\\circ} \\:,\\quad v^* = 17.16 \\tfrac{m}{s} \\:. \\]\n\nThe fact that those numbers do not agree with how archers usually shoot is due to our assumptions and our modelling: wind was neglected, it was assumed that initially the tip of the arrow and the bullseye are exactly level, that the archer can do the maths and control the angle and velocity exactly, etc. -- in the absence of these assumptions, a very strong shot almost directly aimed at the bullseye is the most straightforward option. 
However, understanding the above methods is a first step towards more complex real-life applications such as the design of mechanical machines.\n\n\\section{Multivariate Functions and Partial Derivatives}\n\n\\begin{definition}[Functions of Several Variables]\nA \\emph{function of two variables} is a rule $f$ that assigns to each pair $(x,y)$ in a set $D\\subseteq\\mathbb{R}^2$ a unique real number $f(x,y)$. The set $D$ is called the \\emph{domain} of $f$ and also denoted $D(f)$. The \\emph{range} of $f$ is the set of values it maps to,\n\\[\nR(f) = \\left\\{ z \\in \\mathbb{R} \\: | \\: z = f(x,y)\\text{~for some~}(x,y)\\in D  \\right\\} \\:.\n\\]\nSimilarly, a \\emph{function of $n$ variables} is a rule $f$ that assigns a unique real number $f(x_1,x_2,\\dots,x_n)$ to each $(x_1,x_2,\\dots,x_n) \\in D\\subseteq\\mathbb{R}^n$.\n\\end{definition}\n\n\\begin{remark}\n\\begin{enumerate}[(i)]\n\t\\item Writing $z=f(x,y)$, we call $x$ and $y$ the independent variables, and $z$ the dependent variable. Functions of several variables are also called \\emph{multivariate functions}.\n\t\\item Unless specified otherwise, the domain is always the largest set of points $(x,y)$ at which the expression that defines $f$ can be evaluated.\n\t\\item Functions of two variables can be visualised as surfaces/graphs in $\\mathbb{R}^3$: The domain is a subset of the $xy$-plane and for $(x_0,y_0)$ in that domain, we mark a point of height $f(x_0,y_0)$ over/under $(x_0,y_0)$. That is, the point of the graph with $(x,y)$-coordinates $(x_0,y_0)$ will have $z$-coordinate $z_0=f(x_0,y_0)$.\n\t\\item Restriction to lines in the domain is an important technique for working with multivariate functions, cf. part (iv) of the following set of examples.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{example}\n\\label{expl:first_graphs}\n\\begin{enumerate}[(i)]\n\\item Draw the graph of $f(x,y) = x^2 + y^2$.\\\\\n{\\it Sol.:}\n%\\figbox{Graph of $f$ (paraboloid) with curves from parts (iv) and (v) (3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f301.png}\n\\end{center}\n\\item Find the domain and range of $g(x,y)= 1 + \\sqrt{y-x^2}$.\\\\\n{\\it Sol.:}\nThe function $g$ is defined on the set\n\\[ D(g) = \\left\\{ (x,y) \\in \\mathbb{R}^2 \\: | \\: y \\ge x^2 \\right\\} = \\left\\{ y \\ge x^2 \\right\\} \\:, \\]\nsince we need $ y-x^2 \\ge 0$ to be able to evaluate the square root. On this domain, the argument of the square root takes all non-negative real values $t\\in[0,\\infty)$, producing non-negative real numbers $\\sqrt{t}\\in[0,\\infty)$. Therefore, the range is $R(g) = [1, \\infty).$\n\\item Find the domain and range of $h(x_1,x_2,x_3)= \\sin (x_1x_2+x_3)$.\\\\\n{\\it Sol.:}\nThe domain is $D = \\mathbb{R}^3$ since $h$ can be evaluated at any $(x_1,x_2,x_3)$. The range of $h$ is $R = [-1,1]$.\n\\item Draw the graphs of the following one-variable functions\n\\begin{equation*}\n\\begin{split}\nf_0(x) & = x^2 \\:, \\\\\nf_1(x) & = x^2 + 1 \\:, \\\\\nf_2(x) & = x^2 + 4 \\:.\n\\end{split}\n\\end{equation*}\nWhere do these curves appear in the graph in (i)?\\\\\n{\\it Sol.:}\nThe functions $f_0,f_1,f_2$ are restrictions of $f(x,y)$ in (i) to the lines $y=0,y=1,y=2$. 
This can also be expressed as\n\\begin{equation*}\n\\begin{split}\nf_0(x) & = f(x,0) = x^2 + 0^2 \\:, \\\\\nf_1(x) & = f(x,1) = x^2 + 1^2 \\:, \\\\\nf_2(x) & = f(x,2) = x^2 + 2^2 \\:.\n\\end{split}\n\\end{equation*}\n%\\figbox{Graphs of $f_0,f_1,f_2$ (compare to graph (i); 2D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f302.png}\n\\end{center}\n\\item The domain of the function $f(x,y) = x^2 + y^2$ from (i) is $D(f)=\\mathbb{R}^2$. Describe the subset $S$ of points in $D$ with $f(x,y)=1$.\\\\\n{\\it Sol.:} The equation\n\\[ x^2 + y^2 = 1 \\]\ndefines a circle in the $xy$-plane:\n%\\figbox{Curve $S$ in $xy$-plane (compare to graph (i); 2D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f303.png}\n\\end{center}\n\\end{enumerate}\n\\end{example}\n\n\\begin{definition}[Partial Derivatives]\nFor a function $f$ of two variables, the \\emph{partial derivatives} with respect to $x$, respectively $y$, are the functions \t\n\\begin{equation*}\n\\begin{split}\n\\df{f}{x}(x,y) & = \\lim_{h\\to 0} \\frac{f(x+h,y)-f(x,y)}{h} \\:, \\\\\n\\df{f}{y}(x,y) & = \\lim_{h\\to 0} \\frac{f(x,y+h)-f(x,y)}{h} \\:.\n\\end{split}\n\\end{equation*}\nWe also write $f_x$ for $\\df{f}{x}$ and $f_y$ for $\\df{f}{y}$. Similarly for functions of more than two variables:\n\\begin{equation*}\n\\df{f}{x_j}(x_1,x_2,\\dots,x_n) = \\lim_{h\\to 0}\n\\frac{f(x_1,x_2,\\dots,x_{j-1},x_j+h,x_{j+1},\\dots,x_n)-f(x_1,x_2,\\dots,x_n)}{h} \\:. \\\\\n\\end{equation*}\n\\end{definition}\n\n\\begin{example}\nFind $f_x$ for the function $f(x,y) = 5x^2y^7$.\\\\\n{\\it Sol.:}\n\\begin{equation*}\n\\begin{split}\n\\df{f}{x}(x,y) & = \\lim_{h\\to 0}  \\frac{f(x+h,y)-f(x,y)}{h} \\\\\n& = \\lim_{h\\to 0}  \\frac{5(x+h)^2y^7-5x^2y^7}{h} \\\\\n& = 5y^7 \\lim_{h\\to 0} \\frac{(x+h)^2 - x^2}{h} \\\\\n& = 5y^7 \\lim_{h\\to 0} \\frac{x^2+2hx+h^2 - x^2}{h} \\\\\n& = 5y^7 \\lim_{h\\to 0} (2x+h) = 5(2x)y^7 = 10xy^7 \\:.\n\\end{split}\n\\end{equation*}\nThe factor $2x$ in the last line is just the ordinary single-variable derivative of the factor $x^2$ of $f$. That is, the operator $\\rfrac{\\partial}{\\partial x}$ differentiates the $x$-dependent part of $f$ in the usual way and treats terms that do not depend on $x$ as constants:\n\\begin{equation*}\n\\df{f}{x}(x,y) = \\df{}{x} \\left( 5x^2y^7\\right) = 5 \\, \\df{}{x} \\left( x^2\\right) y^7 = 5(2x)y^7 = 10xy^7 \\:.\n\\end{equation*}\n\\end{example}\n\n\\begin{example}\n\\begin{enumerate}[(i)]\n\t\\item For $f = x^3 + y^5 - 2$, find $f_x$ and $f_y$.\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf_x & = \\df{}{x} \\left( x^3 + y^5 - 2 \\right) = 3x^2+0+0 = 3x^2 \\:, \\\\\n\tf_y & = \\df{}{y} \\left( x^3 + y^5 - 2 \\right) = 0+5y^4+0 = 5y^4 \\:. \n\t\\end{split}\n\t\\end{equation*}\n\t\\item For $f=x^3+x^2y^3 -2 y^2$, find $f_x$ and $f_y$.\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf_x & = \\df{}{x} \\left( x^3 \\right) + \\df{}{x} \\left( x^2 \\right)y^3 - 0 = 3x^2 +2xy^3 \\:, \\\\\n\tf_y & = 0 + x^2 \\df{}{y} \\left( y^3 \\right) - 2 \\df{}{y}\\left( y^2 \\right) = 3x^2y^2 -4y \\:. \\\\\n\t\\end{split}\n\t\\end{equation*}\n\\end{enumerate}\n\\end{example}\n\n\\begin{remark}\n\\begin{enumerate}[(i)]\t\n\\item For a geometric interpretation, we consider the graph of a function $f$ of two variables, i.e. $z = f(x,y)$. Then $\\rfrac{\\partial z}{\\partial x}$ is the slope on the surface ``when moving in the $x$ direction''. That is, $x$ is varied, $y$ is held constant, and we take note of the change of function values. 
Similarly, $\\rfrac{\\partial z}{\\partial y}$ is the slope in the $y$ direction. \n\nAs an intuitive example, consider a hiker climbing a mountain of the shape $z = 4-(x^2+y^2)$. \n%\\figbox{``Steep mountain'' with hiker's position and paths in $x$ and $y$ direction (3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f304.png}\n\\end{center}\nSuppose she checks her GPS and finds that her coordinates are $(x,y) = (1,0)$. The slopes at this point in both directions are\n\\begin{equation*}\n\\begin{split}\nz_x(1,0) & = \\df{}{x} \\left( 4-(x^2+y^2) \\right)_{|(x,y)=(1,0)} \n\t= \\left( -2x \\right)_{|(x,y)=(1,0)} = -2 \\:, \\\\\nz_y(1,0) & = \\df{}{y} \\left( 4-(x^2+y^2) \\right)_{|(x,y)=(1,0)} \n\t= \\left( -2y \\right)_{|(x,y)=(1,0)} = 0 \\:.\n\\end{split}\n\\end{equation*}\nThat is, there is no slope in the $y$ direction. The slope in the $x$ direction is negative, because the height decreases as $x$ increases (from where she stands, increasing $x$ would be moving away from the peak).\n\\item In order to be able to work with a larger set of functions -- not just sums of  products and powers of variables -- we extend the single-variable differentiation rules to the multivariate setting: The $x$-derivative of a product of functions is\n\\[ (fg)_x  = f_xg + fg_x \\:, \\]\nthat is, the product rule extends to partial derivatives without any changes. Similarly for $\\rfrac{\\partial}{\\partial y}$. The quotient rule for a $y$-derivative is\n\\[ \\left( \\frac{f}{g} \\right)_y = \\frac{f_y g-f g_y}{g^2} \\:. \\]\n\\item In order to find partial derivatives of functions like\n\\[ f(x,y) = \\sin(x+y^2) \\:, \\]\nwe also need to combine the chain rule for single-variable functions with partial derivatives. Note that the outer function, $h(t)=\\sin(t)$, is a single-variable function. Let $c$ be a constant and review the following applications of the single-variable chain rule.\n\\begin{equation}\n\\label{eq:single-var_chain}\n\\begin{split}\n\\Df{}{t}\\left(\\sin(t+c)\\right) & = \\cos(t+c) \\Df{}{t} (t+c) = \\cos(t+c) \\:,  \\\\\n\\Df{}{t}\\left(\\sin(c+t^2)\\right) & = \\cos(c+t^2) \\Df{}{t} (c+t^2) = 2t\\cos(c+t^2) \\:.\n\\end{split}\n\\end{equation}\nNow, the term $y^2$ looks like a constant to the operator $\\rfrac{\\partial}{\\partial x}$. Similarly for $x$ and $\\rfrac{\\partial}{\\partial y}$. We can thus apply the single-variable computations \\eqref{eq:single-var_chain} to the two-variable function $f$ (in the first case, just replace $y^2$ with $c$):\n\\begin{equation*}\n\\begin{split}\nf_x & = \\df{}{x}\\left(\\sin(x+y^2)\\right) = \\cos(x+y^2) \\df{}{x} (x+y^2) = \\cos(x+y^2) \\:,  \\\\\nf_y & = \\df{}{y}\\left(\\sin(x+y^2)\\right) = \\cos(x+y^2) \\df{}{y} (x+y^2) = 2y\\cos(x+y^2) \\:.\n\\end{split}\n\\end{equation*}\nThis can be stated as\n\\[ \\df{}{x} h(g(x,y)) = h'(g(x,y)) \\cdot g_x(x,y) \\:, \\]\nwhere $h$ is a single-variable function. Similarly for $\\rfrac{\\partial}{\\partial y}$. 
Note that the $'$ is not ambiguous -- $h$ has only one variable, and $h'$ is the derivative with respect to (abbreviated: ``w.r.t'') that variable.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{example}\n\\begin{enumerate}[(i)]\n\t\\item The $y$-derivative of $f(x,y)=\\e^{x^2+y^3}$ is found easily,\n\t\\[ \\df{}{y} \\left( \\e^{x^2+y^3} \\right) \n\t= \\e^{x^2+y^3} \\cdot \\df{}{y}(x^2+y^3) = 3y^2 \\, \\e^{x^2+y^3} \\:, \\]\n\tand we can verify this result with the following alternative approach.\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{}{y} \\left( \\e^{x^2+y^3} \\right) & = \\df{}{y} \\left( \\e^{x^2}\\e^{y^3} \\right) = \\e^{x^2} \\df{}{y} \\left( \\e^{y^3} \\right) \\\\\n\t& = \\e^{x^2} \\Df{}{y} \\left( \\e^{y^3} \\right) = \\e^{x^2} \\e^{y^3} 3y^2 \n\t= 3y^2 \\, \\e^{x^2+y^3} \\:. \n\t\\end{split}\n\t\\end{equation*}\n\tHere, the ``$\\partial$'' was changed to a ``$\\d$'' to emphasise that in this approach, an ordinary derivative was taken, i.e., a derivative of a single-variable expression.\n\t\\item \n\t\\[\\begin{split}\n\t \\df{}{x}\\left( \\frac{\\sin x \\cos y }{x+y}\\right) \n\t& = \\frac{\\df{}{x}\\Big(\\sin x \\cos y\\Big) (x+y) - (\\sin x \\cos y)\\df{}{x}\\Big(x+y\\Big)}{(x+y)^2} \\\\\n\t& = \\frac{\\cos y \\left[ \\cos x \\cdot (x+y) - \\sin x \\right]}{(x+y)^2} \n\t\\end{split} \\]\n\t\\smallskip\n\\end{enumerate}\n\\end{example}\n\n\\begin{definition}[Higher-Order Partial Derivatives]\nThe \\emph{second-order partial derivatives} with respect to $x$ and $y$ of a function $f$ of two variables are written\n\\begin{equation*}\n\\begin{split}\n\\ddf{f}{x} & = \\df{}{x}\\left( \\df{f}{x}\\right) , \\\\\n\\ddf{f}{y} & = \\df{}{y}\\left( \\df{f}{y}\\right) ,\n\\end{split}\n\\end{equation*}\nand can also be denoted $f_{xx}$ and $f_{yy}$. 
We further have \\emph{mixed} second-order derivatives:\n\\begin{equation*}\n\\begin{split}\n\\dff{f}{x}{y} & = \\df{}{x}\\left( \\df{f}{y}\\right) , \\\\\n\\dff{f}{y}{x} & = \\df{}{y}\\left( \\df{f}{x}\\right) .\n\\end{split}\n\\end{equation*}\nSimilarly for functions of more than two variables and for higher-order derivatives; e.g.,\n\\[ f_{xyyz} =\\frac{\\partial^4 f}{\\partial x \\partial y \\partial y \\partial z} \n\t\t   = \\df{}{x}\\left(\\df{}{y}\\left(\\df{}{y}\\left(\\df{f}{z}\\right)\\right)\\right) \\]\nis a fourth-order derivative of a three-variable function $f=f(x,y,z)$.\n\\end{definition}\n\n\\begin{example}\n\\begin{enumerate}[(i)]\n\t\\item\n\tFor \\[ f(x,y) = 2x^9y^5 \\:, \\] find all partial derivatives of order up to 2.\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf & = 2x^9y^5 \\:, \\\\\n\tf_x & = 2(9x^8)y^5=18x^8y^5 \\:, \\\\\n\tf_y & = 2x^9(5y^4)=10x^9y^4 \\:, \\\\\n\tf_{xx} & = \\df{}{x} \\left( 18x^8y^5 \\right)=18(8x^7)y^5=144x^7y^5 \\:, \\\\\n\tf_{yy} & = \\df{}{y} \\left( 10x^9y^4 \\right)=10x^9(4y^3)=40x^9y^3 \\:, \\\\\n\tf_{xy} & = \\df{}{x} \\left( 10x^9y^4 \\right)=10(9x^8)y^4=90x^8y^4 \\:, \\\\\n\tf_{yx} & = \\df{}{y} \\left( 18x^8y^5 \\right)=18x^8(5y^4)=90x^8y^4 \\:.\n\t\\end{split}\n\t\\end{equation*}\n\t\\item\nFor \\[f(x,y) = x^2 + 3xy - y^2 + 7x -4 y +2 \\:, \\] find \\emph{all} partial derivatives.\\\\\n{\\it Sol.:}\n\\begin{equation*}\n\\begin{split}\nf & = x^2+3xy-y^2+7x-4y+2 \\:, \\\\\nf_x & =  2x+3y+7 \\:, \\\\\nf_y & =  3x-2y-4 \\:, \\\\\nf_{xx} & = 2 \\:, \\\\\nf_{yy} & = -2 \\:, \\\\\nf_{xy} & = 3 \\:, \\\\\nf_{yx} & = 3 \\:.\n\\end{split}\n\\end{equation*}\nSince all the second-order derivatives are constant, all derivatives of higher order are zero,\n\\[ \\frac{\\partial^k f}{\\partial \\dots} = 0 \\qquad \\text{for~}k>2 \\:. \\]\n\t\\item\n\tFor \\[ f(x,y) = x\\, \\cos(xy^2) \\:, \\] find all partial derivatives of order up to 2.\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf & = x \\cos(xy^2) \\:, \\\\\n\tf_x & = \\cos(xy^2) - xy^2 \\sin(xy^2) \\:, \\\\\n\tf_y & = -2x^2y \\sin(xy^2) \\:, \\\\\n\tf_{xx} & = -2y^2 \\sin(xy^2) - xy^4 \\cos(xy^2) \\:, \\\\\n\tf_{yy} & = -2x^2 \\sin(xy^2) - 4x^3y^2 \\cos(xy^2) \\:, \\\\\n\tf_{xy} & = -4xy \\sin(xy^2) - 2x^2y^3 \\cos(xy^2) = f_{yx} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\\end{enumerate}\n\\end{example}\n\n\\begin{remark}\nIn all of the previous examples, we had $f_{xy}=f_{yx}$. The following theorem states that this is not a coincidence -- it is always true as long as certain technical conditions are satisfied. Those conditions are met for the functions considered in this book, hence we can always work with the conclusion of the following theorem. However, be aware that there are functions that do not meet its requirements.\n\\end{remark}\n\n\\begin{theorem}[Schwarz's Theorem or Clairaut's Theorem]\n\\label{thm:schwarz}\nConsider a function $f=f(x,y)$ and a point $(a,b)$ in its domain. If the functions $f_{xy}$ and\n$f_{yx}$ are both continuous at $(a,b)$, then\n\\[f_{xy}(a,b)=f_{yx}(a,b) \\:. \\]\n\\end{theorem}\n\n\\begin{remark}\n\\texttt{Short remark on continuity for multivariate functions.}\n\\end{remark}\n\n\\begin{exercise}\n\\begin{enumerate}[(i)]\n\t\\item Restrict the function $f(x,y)=xy$ to the lines $x=0$, $y=0$, $y=x$, $y=-x$. Hence draw the graph of\\footnote{We have\n\t\\[ f(x,x)=x^2\\:, \\quad f(x,-x) = -x^2 \\:. 
\\]\n\tThat is, there is a parabola ``sitting'' on the line $y=x$ of the $xy$-plane and an upside-down parabola ``hanging'' under the line $y=-x$. The full graph looks like a saddle. You can use software to help visualise such functions; e.g., enter \\texttt{plot x*y} in WolframAlpha. Note that, even though this is a curved surface, you can form it with straight lines (e.g., with mikado sticks) since the restriction of the graph to any $y=c$ is a line.} $f$.\n\t\\item Find $f_x,f_y,f_{xx},f_{xy},f_{yy},f_{yx}$ for the functions\\footnote{The $xy$ derivatives of the three functions are\n\t\\[ 2x+12x^2y^3 \\:, \\quad \\e^x \\:, \\quad -\\sin x \\, \\tfrac{1}{y} \\:, \\]\n\tand we always have $f_{xy}=f_{yx}$. You can use software to check your results; e.g., \\texttt{partial derivatives of \\ldots} in WolframAlpha. Use such commands only to support your studies though -- do not rely on software!}\n\t\\begin{enumerate}[(a)]\n\t\t\\item $f(x,y) = x^2y+x^3y^4 + 7y \\:;$\n\t\t\\item $f(x,y) = y\\,\\e^x \\:;$\n\t\t\\item $f(x,y) = \\cos x \\ln y \\:.$\n\t\\end{enumerate}\n\t\\item Find $f_x,f_y,f_{xx},f_{xy},f_{yy},f_{yx}$ for the functions\\footnote{The $xy$ derivatives of the three functions are\n\t\t\\[ \\cos(xy+y^3) - y (x+3y^2) \\sin(xy+y^3) \\:, \\quad\n\t\t-2xy - 4xy \\ln (xy) \\:, \\quad \n\t\t-\\tfrac{2y(1+x^2+y^2)^2-4xy(1+x^2+y^2)(2x)}{(1+x^2+y^2)^4} \\:. \\]\n\t\t(If you don't quite get those expressions, you may have forgotten to apply the product rule.)}\n\t\\begin{enumerate}[(a)]\n\t\t\\item $f(x,y) = \\sin \\left( xy + y^3 \\right) \\:;$\n\t\t\\item $f(x,y) = \\frac{x^2y^2}{2}-x^2y^2 \\ln (xy) \\:;$\n\t\t\\item $f(x,y) = \\frac{x}{1+x^2+y^2} \\:.$\n\t\\end{enumerate}\n\t\\item Find the domain and the range of\\footnote{You can check your domain with a WolframAlpha command of the form \\texttt{plot ... >= 0}.} $f(x,y) =\\sqrt{ 2 \\sin x \\sin y}$. \n\t\t\\item Let $D\\subseteq\\mathbb{R}^2$ be the open disk of radius $1$, $D = \\{x^2 + y^2 < 1\\}$. Define a function that is defined for $(x,y) \\in D$, and another function that is defined on the \\emph{complement} of $D$,\n\t\\[D^c = \\mathbb{R}^2 \\setminus D = \\{x^2+y^2 \\geq 1 \\} \\]\n\t(read this as ``all points outside of $D$'')\\footnote{For example: $\\ln(1-x^2-y^2)$ and $\\sqrt{x^2+y^2-1}$\\:.}.\n\t\\item Verify that\n\t\\begin{enumerate}[(a)]\n\t\t\\item $y(t,x) = \\e^{-kn^2t}\\sin(nx) \\:$ satisfies $\\: y_t=ky_{xx} \\:;$\n\t\t\\item $r(x,y,z) = \\sqrt{x^2+y^2+z^2} \\:$ satisfies\n\t\t$\\: r_{xx}+r_{yy}+r_{zz} = \\tfrac{2}{r} \\:.$\n\t\\end{enumerate}\n\t\\item Find \\emph{all} partial derivatives of\\footnote{Use Theorem~\\ref{thm:schwarz} to justify that your answer can be given in the form \\[\\frac{\\partial^n}{\\partial y^n} \\frac{\\partial^m f}{\\partial x^m}=\\dots \\:, \\] i.e. it can be assumed that $x$-derivatives are taken first.} $f(x,y) = \\e^{2x+3y}$.\n\t\\item Consider the function $f(x,y)=x^2(1-x^2)-y^2$. Find all points for which both the $x$- and the $y$-derivative are equal to zero. 
That is, find $(x_0,y_0)$ with\\footnote{Setting both partial derivatives equal to zero gives you a (non-linear) system of two equations; solving it should lead to three points.} $f_x(x_0,y_0)=f_y(x_0,y_0)=0$.\n\\end{enumerate}\n\\end{exercise}\n\n\n\\section{Chain Rule and Implicit Differentiation}\n\n\\begin{remark}\n\\label{rem:composition}\n\\emph{Curves} in the $xy$-plane can be parametrised by writing $x$ and $y$ as single-variable functions of a third variable, usually $t$ (think of it as ``time''):\n\\[ (x,y) = (x(t),y(t)) \\:. \\]\nWe denote such curves $\\gamma$,\n\\begin{equation*}\n\\begin{split}\n\\gamma \\: : \\: \\mathbb{R} & \\: \\rightarrow \\: \\mathbb{R}^2 \\\\\nt & \\: \\mapsto \\: \\gamma(t) = \\left(x(t),y(t)\\right) \\:.\n\\end{split} \n\\end{equation*}\nA given function $f=f(x,y)$ can then be restricted to $\\gamma$ -- this is carried out formally via a composition : define the single-variable function $F$ as\n\\[F(t) = \\left(f\\circ\\gamma\\right) (t) = f(\\gamma(t)) = f(x(t),y(t)) \\:. \\]\nFinding the derivative of $F$ is an important task with many applications.\n\nTo simplify notation, the distinction $f \\leftrightarrow F$ is often not made. That is, the single-variable function of $t$ may be called $f$ as well. This reduces the number of function names needed to write out a computation, but it also carries potential for confusion since the symbol ``$f$'' would then be used for two different mathematical objects. \n\\end{remark}\n\n\\begin{example}\n\\label{expl:chainruleI}\n\\begin{enumerate}[(i)]\n\t\\item Draw the curves\n\t\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\\gamma_1 \\: & : \\quad (x(t),y(t)) = (t,t) \\:, \\\\\n\t\t\\gamma_2 \\: & : \\quad (x(t),y(t)) = (3\\cos t,3\\sin t) \\:, \\\\\n\t\t\\gamma_3 \\: & : \\quad (x(t),y(t)) = (t,1-t^2) \\:, \n\t\t\\end{split}\n\t\t\\end{equation*}\n\t\tand highlight the point corresponding to $t=0$ in each of them. \\\\\n\t\t{\\it Sol.:} \n\t\t%\\figbox{The curves $\\gamma_1,\\gamma_2,\\gamma_3$ (2D):}\n\t\t\\begin{center}\n\t\t\t\\includegraphics[width=0.6\\textwidth]{./Figures/f305.png}\n\t\t\\end{center}\n\t\\item Consider the functions\n\t\t\\begin{equation*}\n\t\t\\begin{split}\n\t\tf_1 \\: & : \\quad f_1(x,y) = x-y \\:, \\\\\n\t\tf_2 \\: & : \\quad f_2(x,y) = x^2+y^2 \\:, \\\\\n\t\tf_3 \\: & : \\quad f_3(x,y) = \\ln (3+x^2-y) \\:, \n\t\t\\end{split}\n\t\t\\end{equation*}\n\t\tand find the three single-variable functions $F_i=f_i\\circ\\gamma_i$.\\\\\n\t\t{\\it Sol.:} \n\t\t\\begin{equation}\n\t\t\\label{eq:chainruleI}\n\t\t\\begin{split}\n\t\tF_1(t) & = \\left(f_1\\circ\\gamma_1\\right)(t) = f_1(t,t) = t-t = 0 \\:, \\\\\n\t\tF_2(t) & = \\left(f_2\\circ\\gamma_2\\right)(t) = f_2(3\\cos t, 3\\sin t) \n\t\t= (3\\cos t)^2 +  (3\\sin t)^2 = 9 \\:, \\\\\n\t\tF_3(t) & = \\ln \\left( 3 + t^2 - (1-t^2) \\right) = \\ln \\left( 2 + 2t^2\\right) = \\ln 2 + \\ln\\left(1+t^2\\right).\n\t\t\\end{split}\n\t\t\\end{equation}\n\\end{enumerate}\n\\end{example}\n\n\\begin{theorem}[Chain Rule I]\n\\label{thm:CRI}\nLet $f$ and $F$ be as in Remark~\\ref{rem:composition}. Then\n\\[ \\Df{F}{t} = \\df{f}{x} \\Df{x}{t} + \\df{f}{y} \\Df{y}{t} \\:.
\\]\n\\end{theorem}\n\n\\begin{example}\n\\label{expl:chain_rule_i}\n\\begin{enumerate}[(i)]\n\t\\item The derivatives of the functions $F_i$ in \\ref{expl:chainruleI} are\n\\begin{equation*}\n\\begin{split}\n\\Df{F_1}{t} & = \\df{f_1}{x} \\Df{x}{t} + \\df{f_1}{y} \\Df{y}{t} \n= \\df{(x-y)}{x} \\Df{t}{t} + \\df{(x-y)}{y} \\Df{t}{t} = 1 \\cdot 1 + (-1)\\cdot 1 = 0 \\:, \\\\\n\\Df{F_2}{t} & \n= 2x(t) \\cdot (-3\\sin t) + 2y(t) \\cdot 3\\cos t = -18 \\cos t \\sin t + 18 \\sin t \\cos t = 0 \\:,  \\\\\n\\Df{F_3}{t} & = \\frac{2x}{3+x^2-y} \\cdot 1 + \\frac{-1}{3+x^2-y} \\cdot (-2t) \n= \\frac{4t}{2+2t^2} = \\frac{2t}{1+t^2} \\:.\n\\end{split}\n\\end{equation*}\nAlternatively, these derivatives could have been found by directly differentiating the functions $F_i=F_i(t)$ in \\eqref{eq:chainruleI}. However, the chain rule allowed us to find the derivatives without using the explicit expressions in $t$ on the right-hand sides of \\eqref{eq:chainruleI}, which is often very helpful.\n\t\\item We now present an example in the simplified notation mentioned at the end of Remark~\\ref{rem:composition}: For $f=x^2 y + 3x y^4$, $x=\\sin 2 t$, $y= \\cos t$,  find $\\rfrac{\\d f}{\\d t}$ at $t=0$.\n\t{\\it Sol.:}\n\\[ \\Df{f}{t} = \\df{f}{x} \\Df{x}{t} + \\df{f}{y}\\Df{y}{t} \n= (2xy+ 3y^4)(2\\cos 2t)+(x^2+12 xy^3)(-\\sin t) \\:. \\]\nThe expressions containing the variable $t$ can be evaluated at $t=0$ directly. The other expressions we evaluate at the $x$ and $y$ values at $t=0$ : $x(0)=0$, $y(0)=1$. Therefore, the $t$-derivative of $f$ at $t=0$ is\n\\[ \\Df{f}{t}(0) = 3\\cdot 2 + 0 \\cdot 0 = 6 \\:. \\]\n\nTo give this derivative a geometric interpretation, think of a hiker walking along the path $(x(t),y(t))$ in the ``mountain range'' that is given by the graph of $f$. Then our computations show that at time $t=0$, he will be climbing at a rate of six height units per time unit.\n\\end{enumerate}\n\\end{example}\n\n\\begin{remark}\n\\label{rem:transformations}\n\\emph{Changes of variables} in the $xy$-plane can be realised by writing $x$ and $y$ as functions of \\emph{two} other variables,\n\\[ (x,y) = (x(s,t),y(s,t)) \\:. \\]\nGiven a function $f=f(x,y)$, one can then define a new function $F$ that has the same values as $f$, but is written out in terms of $s$ and $t$ :\n\\[F(s,t) = f(x(s,t),y(s,t)) \\:. \\]\nThese changes of variables are quite useful -- for example, we will use them to solve differential equations in chapter \\ref{ch:de} -- and it is important to be able to understand the relation between the partial derivatives of $f$ and $F$.\n\nAs in Remark~\\ref{rem:composition}, the distinction $f \\leftrightarrow F$ is sometimes not made, and the function of the new variables may be called $f$ as well. Again, this simplifies the notation but also carries potential for confusion. \n\\end{remark}\n\n\\begin{theorem}[Chain Rule II]\n\\label{thm:CRII}\nFor $s,t,x,y,f,F$ as in Remark~\\ref{rem:transformations}, we have\n\\begin{equation*}\n\\begin{split}\n\\df{F}{s} & = \\df{f}{x}\\df{x}{s} + \\df{f}{y}\\df{y}{s} \\:, \\\\\n\\df{F}{t} & = \\df{f}{x}\\df{x}{t} + \\df{f}{y}\\df{y}{t} \\:.\n\\end{split}\n\\end{equation*}\n\\end{theorem}\n\n\\begin{example}\n\\label{expl:chain_rule_ii}\n\\begin{enumerate}[(i)]\n\t\\item Consider the function $f(x,y)=x-y$ and the change of variables \n\t\\[x(s,t)=s+t \\:, \\qquad y(s,t)=s-t \\:. \\]\n\tFor $f$ in the new variables, i.e.
$F(s,t)=f(x(s,t),y(s,t))$, we have partial derivatives \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{F}{s} & = \\df{f}{x}\\df{x}{s} + \\df{f}{y}\\df{y}{s} = 1 \\cdot 1 + (-1) \\cdot 1 = 0 \\:, \\\\\n\t\\df{F}{t} & = \\df{f}{x}\\df{x}{t} + \\df{f}{y}\\df{y}{t} = 1 \\cdot 1 + (-1) \\cdot (-1) = 2 \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tAs in the examples for derivatives along curves, the derivatives can be checked by explicitly writing out the function $F$ in terms of $s$ and $t$ :\n\t\\[ F(s,t) = f(x,y) = x-y = (s+t)-(s-t) = 2t \\:, \\]\n\twhich has the partial derivatives that were obtained above.\n\t\\item Consider the paraboloid $f(x,y)=x^2+y^2$ and the change of variables\n\t\\begin{equation}\n\t\\label{eq:polar_coords}\n\tx(r,\\theta)=r\\cos\\theta, \\qquad y(r,\\theta)=r\\sin\\theta \\:.\n\t\\end{equation}\n\tThe partial derivatives of $F(r,\\theta)=f(x,y)$ are\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{F}{r} & = \\df{f}{x}\\df{x}{r} + \\df{f}{y}\\df{y}{r} \n\t= 2x \\cdot \\cos\\theta + 2y \\cdot \\sin\\theta \n\t= 2r \\left( \\cos^2\\theta + \\sin^2\\theta \\right) = 2r \\:, \\\\\n\t\\df{F}{\\theta} & = \\df{f}{x}\\df{x}{\\theta} + \\df{f}{y}\\df{y}{\\theta} \n\t= 2r\\cos\\theta \\cdot (-r\\sin\\theta) + 2r\\sin\\theta \\cdot r\\cos\\theta = 0 \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tAgain, we check these derivatives by explicitly writing out the function $F$ :\n\t\\[ F(r,\\theta) = f(x,y) = x^2+y^2 = r^2 \\left( \\cos^2\\theta + \\sin^2\\theta \\right) = r^2 \\:, \\]\n\twhich confirms the partial derivatives that were found via chain rule II.\n\t\n\tThe change of variables \\eqref{eq:polar_coords} is the very important change to \\emph{polar coordinates}.\n\t\n\t\\item Let $g(s, t)=f(s^2-t^2, t^2-s^2)$, where $f$ is an arbitrary differentiable function. Show that $g$ satisfies\n\t\\[ t\\df{g}{s} + s \\df{g}{t}=0 \\:. \\]\n\t{\\it Sol.:}\n\tLet $ x=s^2-t^2, y=t^2-s^2$. Then,\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{g}{s} & = \\df{f}{x} \\df{x}{s} + \\df{f}{y} \\df{y}{s}=f_x \\cdot 2s + f_y \\cdot (-2s) \\:, \\\\\n\t\\df{g}{t} & = \\df{f}{x} \\df{x}{t} + \\df{f}{y} \\df{y}{t}=f_x \\cdot (-2t) + f_y \\cdot 2t \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tTherefore\n\t\\[ t\\df{g}{s} + s \\df{g}{t} = \\left( 2st\\,f_x - 2st\\,f_y \\right) + \\left( -2st\\,f_x + 2st\\,f_y \\right) = 0 \\:. \\]\n\t\\item As in the examples \\ref{expl:chain_rule_i} for chain rule I, we now present an example in simplified notation, where the function in the new variables is not given a new name. Consider the expression\n\t\\begin{equation}\n\t\\label{eq:first_pde_trafo}\n\t3\\ddf{u}{x} +\\frac12 \\dff{u}{x}{y} -\\frac12 \\ddf{u}{y} \n\t\\end{equation}\n\tfor the function $u=u(x,y)$, and rewrite it in terms of the new variables $\\xi$ (``xi'') and $\\eta$ (``eta'') defined as \n\t\\[ \\xi=x+3y \\:, \\quad \\eta=x-2y \\:. \\]\n\t{\\it Sol.:} Chain rule II allows us to rewrite the derivatives of $u$ with respect to $x$ and $y$ as derivatives w.r.t.
$\\xi$ and $\\eta$ :\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{u}{x} &= \\df{u}{\\xi} \\df{\\xi}{x} + \\df{u}{\\eta} \\df{\\eta}{x}\n\t = \\df{u}{\\xi} \\cdot 1 + \\df{u}{\\eta} \\cdot 1 = u_{\\xi} + u_{\\eta} \\:, \\\\\n \t\\df{u}{y} &= \\df{u}{\\xi} \\df{\\xi}{y} + \\df{u}{\\eta} \\df{\\eta}{y}\n\t = \\df{u}{\\xi} \\cdot 3 + \\df{u}{\\eta} \\cdot \\left( -2 \\right) \n\t = 3u_{\\xi} - 2 u_{\\eta} \\:.\n\t \\end{split}\n\t\\end{equation*}\n\tThese formulas for the first-order derivatives of $u$ are only intermediate steps in our computation -- we need the second-order derivatives appearing in \\eqref{eq:first_pde_trafo}: \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\ddf{u}{x} & = \\df{}{x}\\left( \\df{u}{x} \\right)\n\t\t= \\df{}{x} \\left( u_{\\xi} + u_{\\eta} \\right)\n\t\t= \\df{u_{\\xi}}{x} + \\df{u_{\\eta}}{x} \\\\\n\t\t& = \\df{u_{\\xi}}{\\xi} \\df{\\xi}{x} + \\df{u_{\\xi}}{\\eta} \\df{\\eta}{x}\n\t\t\\: + \\: \\df{u_{\\eta}}{\\xi} \\df{\\xi}{x} + \\df{u_{\\eta}}{\\eta} \\df{\\eta}{x} \n\t\t = u_{\\xi\\xi} + u_{\\eta\\xi} + u_{\\xi\\eta} + u_{\\eta\\eta} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tRecalling Theorem~\\ref{thm:schwarz}, we obtain\n\t\\[ \\ddf{u}{x} =  u_{\\xi\\xi} + 2 u_{\\xi\\eta} + u_{\\eta\\eta} \\:. \\]\n\tThe second-order derivative w.r.t. $y$ is \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\ddf{u}{y} &= \\df{}{y}\\left( 3u_{\\xi} - 2 u_{\\eta} \\right)\n\t= 3\\df{u_{\\xi}}{y} -2 \\df{u_{\\eta}}{y} \\\\\n\t&= 3 \\left( \\df{u_{\\xi}}{\\xi} \\df{\\xi}{y} + \\df{u_{\\xi}}{\\eta} \\df{\\eta}{y} \\right)\n-2 \\left( \\df{u_{\\eta}}{\\xi} \\df{\\xi}{y} + \\df{u_{\\eta}}{\\eta} \\df{\\eta}{y} \\right) \\\\\n\t&= 3\\df{u_{\\xi}}{\\xi} \\cdot 3 + 3\\df{u_{\\xi}}{\\eta} \\cdot \\left( -2 \\right)\n\t-2 \\df{u_{\\eta}}{\\xi} \\cdot 3\n\t-2 \\df{u_{\\eta}}{\\eta} \\cdot \\left( -2 \\right) \\\\\n\t&= 9 u_{\\xi\\xi} -12 u_{\\xi\\eta} + 4 u_{\\eta\\eta} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tThe mixed second-order derivative,\n\t\\[ u_{xy} = 3 u_{\\xi\\xi} + u_{\\xi\\eta} - 2 u_{\\eta\\eta} \\:, \\]\n\tis obtained similarly. \n\tThis allows us to write out \\eqref{eq:first_pde_trafo} in terms of $\\xi$ and $\\eta$ :\n\t\\begin{equation*}\n\t\\begin{split}\n\t3\\ddf{u}{x} & +\\frac12 \\dff{u}{x}{y} -\\frac12 \\ddf{u}{y} \\\\\n\t&= 3 \\left[ u_{\\xi\\xi} + 2 u_{\\xi\\eta} + u_{\\eta\\eta} \\right] \n\t+\\frac12 \\left[ 3 u_{\\xi\\xi} + u_{\\xi\\eta} - 2 u_{\\eta\\eta} \\right] \n\t-\\frac12 \\left[ 9 u_{\\xi\\xi} - 12 u_{\\xi\\eta} + 4 u_{\\eta\\eta} \\right] \\\\\n\t&= 0 \\cdot u_{\\xi\\xi} + \\left( 6 + \\frac12 + 6 \\right) \\cdot u_{\\xi\\eta}\n\t+ 0 \\cdot u_{\\eta\\eta} = \\frac{25}{2} u_{\\xi\\eta} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tLater, in Section \\ref{sec:pdes}, we will use computations of this kind to solve a certain type of differential equation.\n\\end{enumerate}\n\\end{example}\n\n\\begin{definition}[Level Sets]\nGiven a function $f$ of $n$ variables and a constant $c$ in the range of $f$, the set of points in the domain of $f$ with\n\\[ f = c \\]\nis called a \\emph{level set} of $f$. That is, for a function of two variables, a level set of $f$ is of the form\n\\[  \\left\\{ (x,y)\\in\\mathbb{R}^2 \\: | \\: f(x,y)=c \\right\\}\\:.\\]\n\\end{definition}\n\n\\begin{remark}\n\\begin{enumerate}[(i)]\n\t\\item Level sets usually have one dimension less than the domain of the function. For example, the function $f(x,y) = x^2 + y^2$ is defined on the $xy$-plane, $\\mathbb{R}^2$, which is two-dimensional.
We have seen in Example~\\ref{expl:first_graphs} that the condition\n\t\\[ f(x,y) = x^2 + y^2 = 1 \\]\n\tdescribes a curve, i.e. a one-dimensional object, namely the circle of radius $1$. In fact, $f(x,y)=x^2+y^2=c$ describes a one-dimensional object for any $c>0$. This may not work for all values of $c$ though: the only solution to $f=0$ is the point $(x,y)=(0,0)$, and points are considered zero-dimensional. The equation $f=c$, where $c<0$, has no solutions at all.\n\t\n\tSimilarly, consider the function $f(x,y,z) = x^2 + y^2 + z^2$. It is defined in the three-dimensional space $\\mathbb{R}^3$, and its level sets $f=c$ for $c>0$ are two-dimensional objects: spheres of radius $\\sqrt{c}$ (the surfaces of the balls of radius $\\sqrt{c}$ centred at the origin of the coordinate system).\n\t\\item Considering $x$ and $y$ as independent variables, we have\n\t\\[ \\df{y}{x} = 0 \\]\n\t(the $x$-derivative of the function $g(x,y)=y$ is zero, since $g$ does not depend on $x$). \n\t\n\tNow consider a level set $f(x,y)=c$. Under the constraint $f(x,y)=c$, the variables cannot both move freely any more. Such a level set is one-dimensional by the previous remark, i.e. a curve, and we can use $x$ as a parameter for it. That is, we let $x$ vary freely (independent variable) and then $y$ is determined (not independent any more) by the choice of $x$.\n\t\n\tFor example, requiring $f_1 = 0$ in~\\ref{expl:chainruleI} gives $y(x) = x$, and $f_3=\\ln(2)$ gives $y(x)=1+x^2$.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{theorem}[Implicit Differentiation]\n\\label{thm:impl_diff}\nIf the independent variable $y$ of the $xy$-plane $(x,y)\\in\\mathbb{R}^2$ is turned into a dependent variable $y=y(x)$ via a condition $f(x,y)=c$ (as in (ii) of the previous remark), then\n\\[ \\Df{y}{x} = -\\frac{f_x}{f_y} \\:. \\]\nSimilarly in $\\mathbb{R}^3$: If the independent variable $z$ of $(x,y,z)\\in\\mathbb{R}^3$ is turned into a dependent variable $z=z(x,y)$ via a condition $f(x,y,z)=c$, then\n\\begin{equation*}\n\\begin{split}\n\\df{z}{x} = -\\frac{f_x}{f_z} \\:, \\\\\n\\df{z}{y} = -\\frac{f_y}{f_z} \\:. \n\\end{split}\n\\end{equation*}\n\\end{theorem}\n\n\\begin{proof} Differentiating the condition $f(x,y(x))=c$ with respect to $x$ gives\n\\begin{equation}\n\\label{eq:pid1}\n\\Df{c}{x} = 0\n\\end{equation}\non the right-hand side. On the left-hand side, we use chain rule I to obtain\n\\begin{equation}\n\\label{eq:pid2}\n\\Df{f(x,y(x))}{x} = \\df{f(x,y)}{x}\\Df{x}{x} + \\df{f(x,y)}{y}\\Df{y}{x} \\:.\n\\end{equation}\nNote that here, $f$ appears as a function of one independent variable on the left, and as a two-variable function on the right. Since $\\rfrac{\\d x}{\\d x} = 1$, combining~\\eqref{eq:pid1} and~\\eqref{eq:pid2} leads to the claimed formula for $\\rfrac{\\d y}{\\d x}$.\n\nSimilarly, we differentiate the condition $f(x,y,z(x,y))=c$ with respect to $x$ and $y$,\n\\begin{equation*}\n\\begin{split}\n\\df{f(x,y,z(x,y))}{x} & = \\df{f}{x}\\df{x}{x} + \\df{f}{y}\\df{y}{x} +\\df{f}{z}\\df{z}{x} \n= f_x \\cdot 1 + f_y \\cdot 0 +f_z \\cdot \\df{z}{x} = 0 \\:, \\\\\n\\df{f(x,y,z(x,y))}{y} & = \\df{f}{x}\\df{x}{y} + \\df{f}{y}\\df{y}{y} +\\df{f}{z}\\df{z}{y} \n= f_x \\cdot 0 + f_y \\cdot 1 +f_z \\cdot \\df{z}{y} = 0 \\:,\n\\end{split}\n\\end{equation*}\nto derive the other formulas stated in the theorem. The terms with $\\rfrac{\\partial y}{\\partial x}$ and $\\rfrac{\\partial x}{\\partial y}$ drop out by the previous remark.
Here, $f$ appears as a two-variable function on the left and as a three-variable function in the other expressions.\n\\end{proof}\n\t\n\\begin{example}\n\\label{expl:impl_diff}\n\\begin{enumerate}[(i)]\n\t\\item Find $\\rfrac{\\d y}{\\d x}$ if $ x^3+y^3=6xy$.\\\\\n\t{\\it Sol.:}\n\t\\[ f(x,y)=x^3 + y^3 - 6xy = 0\t\n\t\t\\quad \\Longrightarrow \\quad \\Df{y}{x} = -\\frac{f_x}{f_y}= -\\frac{3x^2-6y}{3y^2-6x} \\:. \\]\n\t\\item Find $\\tfrac{\\partial z}{\\partial x}$, $\\tfrac{\\partial z}{\\partial y}$, $\\tfrac{\\partial^2 z}{\\partial x \\partial y}$, where $z=z(x,y)$ is defined by the equation\n\t\\[ x+y-z=\\e^z \\:. \\]\n\t{\\it Method 1:} For $f(x,y,z)=x+y-z-\\e^z$, we have\n\t\\[f_x = \\df{f}{x} = 1, \\quad f_y= \\df{f}{y} = 1, \\quad\n\tf_z = \\df{f}{z} = -1-\\e^z \\:. \\]\n\tBy Theorem~\\ref{thm:impl_diff}, \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\df{z}{x} & = - \\frac{f_x}{f_z}= \\frac{1}{1+\\e^z} \\:,\\\\\n\t\\df{z}{y} & = - \\frac{f_y}{f_z}= \\frac{1}{1+\\e^z} \\:,\n\t\\end{split}\n\t\\end{equation*}\n\tand then, using the single-variable chain rule,\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\dff{z}{x}{y} & = \\df{}{x} \\left( \\frac{1}{1+\\e^z} \\right) \n\t= - \\frac{1}{(1+\\e^z)^2} \\df{}{x} \\left( \\e^z \\right) \\\\\n\t& = - \\frac{\\e^z}{(1+\\e^z)^2} \\df{}{x} \\left( z \\right)\t= - \\frac{\\e^z}{(1+\\e^z)^3} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\t{\\it Method 2:} Differentiating both sides of $x+y-z=\\e^z$ with respect to $x$ gives\n\t\\[ 1 - \\df{z}{x} = \\e^z \\df{z}{x} \\:. \\]\n\tSolving this for $z_x$, we find the same derivative as before.\n\t\n\tThe two computations are very closely related -- Method~2 was presented to show that a more flexible approach can be used as well.\n\t\\item For $z=u^v$, where $u=\\sin x$ and $v=\\cos x$, find $\\rfrac{\\d z}{\\d x}$. \\\\\n\t{\\it Sol.:} The chain rule for $z=z(x)$ reads\n\t\\[\n\t\\Df{z}{x} =\\df{z}{u} \\Df{u}{x} + \\df{z}{v} \\Df{v}{x}\n\t\\:. \\]\n\tThe partials of $z = z (u,v)$ are\n\t\\[\n\t\\df{z}{u} = v u^{v-1}, \\quad \\df{z}{v}= u^v \\ln u\n\t\\:, \\]\n\tand therefore\n\t\\[\n\t\\Df{z}{x} = v u^{v-1} \\cos x - u^v \\ln u \\cdot \\sin x\n\t=(\\sin x)^{\\cos x -1}  \\cos^2 x -   \\ln (\\sin x) \\cdot (\\sin x)^{\\cos x +1}\n\t\\:. \\]\n\t\\item (Logarithmic Differentiation) Find $\\rfrac{\\partial z}{\\partial x}$, $\\rfrac{\\partial z}{\\partial y}$  for\n\t\\begin{equation}\n\t\\label{eq:symmetric_z}\n\tz=(x^2y+xy^2)^{\\cos(xy)} \\:.\n\t\\end{equation} \n\t{\\it Sol.:}\n\tThe idea for dealing with the function appearing in the exponent on the right is to use the rule $\\ln(a^p)=p\\ln(a)$ for logarithms. If $a=b$, then $\\ln(a)=\\ln(b)$ -- applying this principle to~\\eqref{eq:symmetric_z} gives\n\t\\[\n\t\\ln z = \\ln\\left( (x^2y+xy^2)^{\\cos(xy)} \\right) = \\cos(xy) \\ln(x^2y+xy^2)\n\t\\:. \\]\n\tDifferentiating both sides with respect to $x$, we obtain\n\t\\[\n\t\\frac{1}{z} \\df{z}{x} = -y \\sin(xy) \\ln(x^2y+xy^2)\n\t + \\cos(xy) \\: \\frac{2xy+y^2}{x^2y+xy^2}\n\t\\]\n\tand\n\t\\[\n\t\\frac{\\partial z}{\\partial x} =  (x^2y+xy^2)^{\\cos(xy)} \\left[\n\t-y \\sin (xy) \\ln (x^2y+xy^2)+ \\cos(xy) \\: \\frac{2x+y}{x^2+xy}\n\t\\right]\n\t\\:. \\]\n\tThe $y$-derivative of $z$ will be given as an exercise.
It can be found analogously, but it is much faster to point to the symmetry in~\\eqref{eq:symmetric_z} and hence write down $\\rfrac{\\partial z}{\\partial y}$ immediately.\n\\end{enumerate}\n\\end{example}\n\n\\begin{application}[\\texttt{Trajectories in phase space}]\n\\texttt{Simple and quick example of a 2D phase space, e.g. for 1D motion; define energy on that phase space and find level curves.}\n\\end{application}\n\n\\begin{exercise}\n\t\\begin{enumerate}[(i)]\n\t\t\\item Find the rate of change of the functions\n\t\t\\[ f(x,y) = x + x^2y - 5y^3\\:, \\quad g(x,y) = \\sqrt{1+x^2+y^2} \\]\n\t\talong the curve\n\t\t\\[ \\gamma(t) = (x(t),y(t)) = (t,t^2) \\:. \\]\n\t\tThat is, find the $t$-derivatives of\\footnote{\n\t\t\\[ \\begin{split}\n\t\t\\Df{F}{t} & = \\df{f}{x}\\Df{x}{t} + \\df{f}{y}\\Df{y}{t}\n\t\t= (1+2xy) \\cdot 1 + (x^2-15y^2) \\cdot 2t \\\\\n\t\t& = 1 + 2 t t^2 + (t^2-15t^4)2t = 1 + 4 t^3 - 30 t^5 \\\\\n\t\t\\Df{G}{t} & = \\frac{t+2t^3}{\\sqrt{1+t^2+t^4}}\n\t\t\\end{split} \\]\n\t\t} $F(t)=f(\\gamma(t))$ and $G(t)=g(\\gamma(t))$.\n\t\t\\item Write the function $f(x,y) = x+y^2$ in polar coordinates, i.e. find an expression for $F=F(r,\\theta)=f(x,y)$ in terms of $r$ and $\\theta$. Then differentiate $F$ directly and compare to the $r$ and $\\theta$ derivatives obtained via chain rule II (as in example~\\ref{expl:chain_rule_ii} (ii)).\n\t\t\\item Find $\\rfrac{\\partial z}{\\partial x}$ and $\\rfrac{\\partial z}{\\partial y}$\n\t\tfor $z=z(x,y)$ defined by $z^3-3xyz=4 \\:.$ \n\t\t\\item In Example~\\ref{expl:impl_diff} (iv), the $x$-derivative of $z$ in~\\eqref{eq:symmetric_z} was found by applying $\\rfrac{\\partial}{\\partial x}$ to the logarithm of~\\eqref{eq:symmetric_z}. Now find the derivative with respect to $y$ by repeating those steps. Comparing the two partial derivatives, try to interpret the reference to ``symmetry'' at the end of the example\\footnote{In~\\eqref{eq:symmetric_z}, the variable names $x$ and $y$ can be swapped, can't they? Having that in mind, what is the difference between the given $z_x$ and the $z_y$ you have found (they should be very similar -- if not, you have made a mistake)?}.\n\t\t\\item Consider $u=u(x,y)$ and define the new variables $\\xi=x-y$, $\\eta=x+by$. Find $b$ so that the equation\n\t\t\\[ \\ddf{u}{x} +4\\dff{u}{x}{y} +3\\ddf{u}{y} = 0 \\]\n\t\ttransforms to\\footnote{Carrying out the transformation as in Example~\\ref{expl:chain_rule_ii}~(iv) leads to\n\t\t\\[ 0 = 0 \\cdot \\ddf{u}{\\xi} - 2(1+b) \\cdot \\dff{u}{\\xi}{\\eta} \n\t\t+ (1+4b+3b^2) \\cdot \\ddf{u}{\\eta} \\:. \\]\n\t\tThe equation $u_{\\xi\\eta}=0$ we need to transform to has only a mixed derivative. Hence $b$ needs to be chosen such that $1+4b+3b^2=0$. We choose $b=-\\rfrac13$, since the other solution would also make the mixed derivative disappear. This gives $\\rfrac{-4}{3}u_{\\xi\\eta}=0$; now multiply by $\\rfrac{-3}{4}$.} $u_{\\xi\\eta}=0$.\n\t\t\\item Consider $u=u(x,y)$ and the new variables $s,t$ defined by $x = \\e^s \\cos t$, $y = \\e^s \\sin t$. Show that\\footnote{This requires quite a bit of work. Start with\n\t\t\\[ u_s = \\df{}{s}(u) = \\df{u}{x}\\df{x}{s} + \\df{u}{y}\\df{y}{s} \n\t\t= \\e^s \\left( u_x \\cos t + u_y \\sin t \\right) \\:. \\]\n\tWe need the product rule for the second $s$-derivative:\n\t\\[ u_{ss} = \\df{}{s}(u_s) = \\df{}{s} (\\e^s) \\cdot \\left( u_x \\cos t + u_y \\sin t \\right)\n\t+ \\e^s \\left( \\df{u_x}{s} \\cos t + \\df{u_y}{s} \\sin t \\right).
\\]\n\tNow find the $s$-derivatives of the functions $u_x,u_y$ by applying the chain rule -- as in the first step. Then simplify, repeat for $u_{tt}$, find the sum $u_{ss}+u_{tt}$.}\n\t\t\\[ u_{xx} + u_{yy} = \\e^{-2s}(u_{ss}+u_{tt}) \\:. \\]\n\t\t\\item Find functions $f$ and corresponding constants $c$ such that the level sets $f=c$ are: a straight line of slope $+1$, a circle, a straight line of slope different from $\\pm1$, a parabola, a hyperbola, an ellipse, two concentric circles, etc. Check your examples using software\\footnote{For the hyperbola:\n\t\t\t\\[ y = \\frac{1}{x} \\quad \\longrightarrow \\quad xy = 1 \\quad \\Longrightarrow\n\t\t\t\\quad f(x,y)=xy \\:,\\:c=1 \\:. \\]\n\t\tHere is an example for the WolframAlpha syntax: \\texttt{plot sin(x)*sin(y) = 0.1}~.}.\n\t\t\\item Have the level set\n\t\t\\[x^2(1-x^2) - y^2 = 0\\]\n\t\tplotted and find the values of $x$ for which $\\rfrac{\\d y}{\\d x} = 0$. What is the connection between those $x$-values and the plot of the level set\\footnote{The level set is a curve of the shape of an infinity sign: $\\infty\\:$. The $y$-coordinate of its points cannot be written as a function of $x$ since for most $x$, there are two corresponding $y$-values. Most individual segments of the curve can be written as $y(x)$ though (careful: this is not possible at $x=-1,0,1$), and then $y'(x_0)=0$ has the usual interpretation: a horizontal tangent line.}?\n\t\t\\item Find the first-order partial derivatives of\\footnote{You can \\emph{check} (not ``find''!) your result with \\texttt{differentiate z(x,y) = \\ldots}}\n\t\t$z(x,y) = \\left(\\cos(2x)\\right)^{\\ln y}$.\n\t\t\\item \\texttt{Trajectory in phase space}\n\t\\end{enumerate}\n\\end{exercise}\n\n\n\\section{Directional Derivatives and the Gradient Vector}\n\n\\begin{definition}[Gradient Vectors]\n\tThe \\emph{gradient vector} $\\nabla f$ of a function $f(x,y)$ is the row vector containing its partial derivatives,\n\t\\[ \\nabla f = \\begin{bmatrix} \\df{f}{x} & \\df{f}{y} \\end{bmatrix} \\:. \\]\n\tSimilarly for $f=f(x,y,z)$,\n\t\\[ \\nabla f = \\begin{bmatrix} \\df{f}{x} & \\df{f}{y} & \\df{f}{z} \\end{bmatrix} \\:.
\\]\n\\end{definition}\n\n\\begin{definition}[Directional Derivatives]\nFor a vector $v = \\begin{bmatrix} a & b \\end{bmatrix}^\\top$ and a function $f$ of two variables, the \\emph{directional derivative} of $f$ along $v$ is\n\\[ \\mathrm{D}_v f = a \\df{f}{x} + b \\df{f}{y} \n= \\nabla f \\begin{bmatrix} a \\\\ b \\end{bmatrix} \\:, \\]\nand similarly for three-variable functions and directions $v = \\begin{bmatrix} a & b & c \\end{bmatrix}^\\top$.\n\\end{definition}\n\n\\begin{example} Find the gradient of the function $f(x,y)=x\\cos(y)-\\sin(xy)$, and further its directional derivative along $v = \\tfrac{1}{\\sqrt{2}}\\begin{bmatrix}\n\t\t1 & 1\n\t\\end{bmatrix}^\\top$ at the point $(x_0,y_0)=(\\pi,0)$.\\\\\n{\\it Sol.:}\n\\begin{equation*}\n\\begin{split}\nf(x,y) & = x\\cos(y)-\\sin(xy) \\:, \\\\\n\\nabla f (x,y) & = \n\\begin{bmatrix} \\cos(y)-\\cos(xy)y & -x\\sin(y)-\\cos(xy)x \\end{bmatrix} \\:, \\\\\n\\nabla f (x_0,y_0) & = \n\\begin{bmatrix} \\cos(0)-\\cos(\\pi 0)0 & -\\pi\\sin(0)-\\cos(\\pi 0) \\pi \\end{bmatrix}\n= \\begin{bmatrix} 1 & -\\pi \\end{bmatrix} \\:, \\\\\n\\mathrm{D}_v f (\\pi,0) & = \\nabla f (x_0,y_0) \\cdot v\n= \\frac{1}{\\sqrt{2}}\\begin{bmatrix} 1 & -\\pi \\end{bmatrix} \n\\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix} = \\frac{1-\\pi}{\\sqrt{2}} \\:.\n\\end{split}\n\\end{equation*}\n\\end{example}\n\n\\begin{remark}\n\\label{rem:gradient_and_dd}\n\\begin{enumerate}[(i)]\n\t\\item Consider a function $f=f(x,y)$ and a curve\n\t\\[ \\gamma(t) = \\begin{bmatrix}\n\tx(t) \\\\ y(t)\n\t\\end{bmatrix} \\]\n\tin the $xy$-plane. (In the previous section, we had used the notation ``$(x(t),y(t))$'' for curves. Here, we simply switched from point notation $(x, y)$ for the points of $\\gamma$ to vector notation $\\begin{bmatrix} x & y \\end{bmatrix}^\\top$, cf. the application in Section~\\ref{sec:rma}.) Define the function $F(t)=f(\\gamma(t))=f(x(t),y(t))$. Then, comparing the definition of directional derivatives to chain rule I, we find\n\t\\[ \\Df{F}{t} = \\df{f}{x}\\Df{x}{t} + \\df{f}{y}\\Df{y}{t}\n\t = \\begin{bmatrix} f_x & f_y \\end{bmatrix} \\begin{bmatrix}\tx' \\\\ y' \\end{bmatrix}\n\t = \\mathrm{D}_{\\gamma'}f , \\]\n\twhere prime denotes the $t$-derivative and $\\gamma'=\\begin{bmatrix} x' & y' \\end{bmatrix}^\\top$.\n\t\\item We introduce the following notation for the statement of the next theorem:\n\t\\[ v \\parallel w \\]\n\tmeans that the vectors $v$ and $w$ are \\emph{parallel}, i.e. one of them can be obtained from the other by scalar multiplication. The expression\n\t\\[ v \\perp w \\]\n\tmeans that the two vectors are \\emph{perpendicular} (or \\emph{orthogonal}), i.e. the angle between them is $\\rfrac\\pi2$ ($90^\\circ$). Reviewing section~\\ref{sec:rma}, in particular remark~\\ref{rem:vectors} and the definition before that, you will find:\n\t\\[ v \\perp w \\quad \\Longleftrightarrow \\quad v \\circ w = 0 \\:. \\]\n\t\n\tAn important operation for vectors is to \\emph{normalise} them: Given a vector $v$, we scalar-multiply it by the inverse of its length,\n\t\\[ \\bar{v} = \\frac{1}{||v||} \\cdot v = \\frac{v}{||v||} \\:, \\]\n\tto obtain a vector $\\bar{v}$ that (a) is parallel to $v$, i.e. points in the same direction, and (b) has unit length, $||\\bar{v}||=1$. This is where the factor $\\rfrac{1}{\\sqrt{2}}$ of $v$ in the previous example comes from -- convince yourself that this vector has unit length!
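\tAs a quick worked example of normalising: for $v = \\begin{bmatrix} 3 & 4 \\end{bmatrix}^\\top$ we have $||v|| = \\sqrt{3^2+4^2} = 5$, hence\n\t\\[ \\bar{v} = \\frac{1}{5} \\begin{bmatrix} 3 \\\\ 4 \\end{bmatrix} = \\begin{bmatrix} \\rfrac{3}{5} \\\\ \\rfrac{4}{5} \\end{bmatrix} \\:, \\qquad ||\\bar{v}|| = \\sqrt{\\tfrac{9}{25}+\\tfrac{16}{25}} = 1 \\:.\\]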
\n\\end{enumerate}\n\\end{remark}\n\n\\begin{theorem}[Geometric Interpretation of Directional Derivatives]\n\\label{thm:geom_dd}\nFor vectors $v$ of length $||v||=1$, the absolute value of the directional derivative is maximal if $\\nabla f ^\\top \\parallel v$, and zero if $\\nabla f ^\\top \\perp v.$\n\nIn particular, level sets of a function are always perpendicular to the gradient of the function.\n\\end{theorem}\n\\begin{proof}\nThe absolute value of the directional derivative of $f$ along $v$ at the point $(x_0,y_0)$ is\n\\begin{equation*}\n\\begin{split}\n\\abs{\\mathrm{D}_v f(x_0,y_0) }\n& = \\abs{ \\nabla f (x_0,y_0) \\cdot v} \n  = \\abs{ \\left( \\nabla f (x_0,y_0) \\right)^\\top \\circ v} \\\\\n& \\stackrel{\\text{Remark}~\\ref{rem:vectors}}{=}\n \\abs{ \\cos(\\theta) \\cdot || \\left(\\nabla f (x_0,y_0) \\right)^\\top || \\cdot || v ||} \n  = \\abs{ \\cos(\\theta) } \\cdot || \\left( \\nabla f (x_0,y_0) \\right)^\\top || \\:,\n\\end{split}\n\\end{equation*}\nwhere $\\theta$ is the angle between the vectors $v$ and $\\left( \\nabla f (x_0,y_0) \\right)^\\top$. The length of the latter is fixed, and only the cosine term can be changed by choosing a different vector $v$ of length 1. If the two vectors are parallel, then $\\abs{\\cos \\theta} = 1$; if they are perpendicular, then $\\abs{\\cos \\theta} = 0$; and for all other angles, $\\abs{\\cos \\theta}$ will be between $0$ and $1$. This proves the first part of the statement.\n\nFor the second part, let the curve $\\gamma(t)$ traverse a level set of $f$, consider a fixed $t$-value $t_0$, and write $(x_0,y_0)=\\gamma(t_0)$. Then $F(t)=f(\\gamma(t))$ is constant, and therefore\n\\[ 0 = \\Df{F}{t} (t_0) = \\nabla f (x_0,y_0) \\cdot \\gamma'(t_0) = \\left( \\nabla f (x_0,y_0) \\right)^\\top \\circ \\gamma'(t_0) \\:, \\]\nwhere $ \\begin{bmatrix}\nx'(t_0) & y'(t_0)\n\\end{bmatrix}^\\top = \\gamma'(t_0)$. This dot product being $0$ means that the two vectors are perpendicular. Since $\\gamma'(t_0)$ is the tangent vector of the curve $\\gamma(t)$ at $t_0$, we have completed the proof of the theorem.\n\\end{proof}\n\n\\begin{remark}\n\\begin{enumerate}[(i)]\n\t\\item The same works in three dimensions: Then, a level set $f=c$ is a surface (two-dimensional), and the vector $\\left(\\nabla f\\right)^\\top$ will always be perpendicular to it.\n\t\\item We now introduce a matrix of second-order derivatives that will be useful in the next section. For that, review Theorem~\\ref{thm:schwarz}.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{definition}[Hessian Matrix]\nThe \\emph{Hessian matrix} or \\emph{Hessian} of $f=f(x,y)$ is the matrix\n\\[ \\mathrm{Hess} f = \\begin{bmatrix}\nf_{xx} & f_{xy} \\\\ f_{yx} & f_{yy}\n\\end{bmatrix} \\:. \\]\nSimilarly for functions of three or more variables.
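For instance, for $f=f(x,y,z)$,\n\\[ \\mathrm{Hess} f = \\begin{bmatrix}\nf_{xx} & f_{xy} & f_{xz} \\\\\nf_{yx} & f_{yy} & f_{yz} \\\\\nf_{zx} & f_{zy} & f_{zz}\n\\end{bmatrix} \\:. \\]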
\n\\end{definition}\n\n\\begin{example}\nFind the gradient and the Hessian of $f(x,y,z) = xy^2 + z^3$.\\\\\n{\\it Sol.:}\n\\begin{equation*}\n\\begin{split}\n\\nabla f & = \\begin{bmatrix}\ny^2 & 2xy & 3z^2\n\\end{bmatrix},\\\\\n\\mathrm{Hess} f & = \\begin{bmatrix}\n0 & 2y & 0 \\\\\n2y & 2x & 0 \\\\\n0 & 0 & 6z\n\\end{bmatrix} \\:.\n\\end{split}\n\\end{equation*}\n\n\\end{example}\n\n\\begin{exercise}\n\\begin{enumerate}[(i)]\n\t\\item For $f(x,y) = \\ln(x+y^2)$, find the gradient and the Hessian matrix at\\footnote{\n\t\\[\\nabla f(3,1) = \\frac{1}{4}\\begin{bmatrix} 1 & 2 \\end{bmatrix} \\:, \\quad\n\t \\mathrm{Hess}f(3,1)=\\frac{1}{16}\\begin{bmatrix} -1 & -2 \\\\ -2 & +4 \\end{bmatrix} \\]}\n\t$(x_0,y_0)=(3,1)$.\n\t\\item Find the directional derivatives $D_vf(x_0,y_0)$ for\\footnote{\n\t$D_vf(x_0,y_0)=\\tfrac{63}{52};\\:-\\tfrac{7}{5};\\:\\tfrac{3-\\pi}{2\\sqrt{5}}.$}:\n\t\\begin{enumerate}[(a)]\n\t\t\\item $f(x,y)=y+\\rfrac{x^2}{y}\\:,\\quad (x_0,y_0)=(1,2)\\:,\\quad\n\t\tv = \\begin{bmatrix} \\rfrac{12}{13} & \\rfrac{5}{13}\\end{bmatrix}^\\top \\:;$\n\t\t\\item $f(x,y)=x^2-3xy+2y^2\\:,\\quad (x_0,y_0)=(1,1)\\:,\\quad\n\t\tv = \\begin{bmatrix} \\rfrac{3}{5} & -\\rfrac{4}{5}\\end{bmatrix}^\\top \\:;$\n\t\t\\item $f(x,y)=x\\arctan(\\rfrac{y}{x})\\:,\\quad (x_0,y_0)=(1,-1)\\:,\\quad\n\t\tv = \\begin{bmatrix} \\rfrac{2}{\\sqrt{5}} & \\rfrac{1}{\\sqrt{5}}\\end{bmatrix}^\\top \\:.$\n\t\\end{enumerate}\n\t\\item Find the Hessian of $f(x,y)=\\tfrac16\\left((x-1)^3+(y+2)^3\\right)$ at the point where\\footnote{This is quite a bit of work if you expand $f$, and a quick computation if you work with the given form. Answer: the zero matrix, since $\\mathrm{Hess}f(x,y)=\\begin{bmatrix} x-1 & 0 \\\\ 0 & y+2 \\end{bmatrix}$ and $\\nabla f$ vanishes only at $(1,-2)$.} $\\nabla f=0$.\n\t\\item For $f(x,y) = \\left(\\e^{3x}+\\sin(4y) \\right)^{5}$, find the unit vector $v$ (i.e., $||v||=1$) of steepest descent at\\footnote{The steps for solving this are: Compare to theorem \\ref{thm:geom_dd}, find the gradient of $f$ at the given point, normalise, choose one of the two possible directions.\tAnswer: $ v = \\begin{bmatrix} -0.6 & 0.8 \\end{bmatrix}^{\\top}$.}\n\t$(x,y)=(0,\\rfrac{\\pi}{4})$.\n\t\\item In remark \\ref{rem:gradient_and_dd}, we have written out chain rule I as a matrix multiplication of a row vector and a column vector. Do the same for chain rule II: Find the matrix $M$ such that for the functions $f=f(x,y)$ and $F=F(s,t)$ in Theorem~\\ref{thm:CRII} we have\\footnote{The matrix $M$ contains the derivatives of the transformation $(s,t)\\leadsto(x,y)$, cf. Theorem~\\ref{thm:higher_dim_subst}.}\n\t\\[  \\nabla F = \\nabla f \\cdot M \\:. \\]\n\\end{enumerate}\n\\end{exercise}\n\n\n\\section{Taylor Approximations}\n\n\\begin{remark}\nRecall single-variable Taylor approximations: For $F=F(t)$, we have the approximation\n\\[ F(t) \\approx T_a^{(2)}F(t) = F(a) + F'(a)\\cdot(t-a) + \\frac{1}{2} F''(a)\\cdot(t-a)^2 \\]\nclose to the point $t=a$. In this section, we will derive the corresponding formula for functions of several variables.\n\\end{remark}\n\n\\begin{theorem}[Taylor Approximation]\nLet $f(x,y)$ have continuous partial derivatives of order up to $2$, and let the point $(a,b)$ be in its domain.
Then the \\emph{first- and second-order Taylor approximations} of $f$ at $(a,b)$ are\n\\begin{equation}\n\\label{eq:TA}\n\\begin{split}\nT_{(a,b)}^{(1)} f (x,y) = \nf(a,b) & + \\nabla f (a,b) \\begin{bmatrix} x-a \\\\ y-b \\end{bmatrix} \\:, \\\\\nT_{(a,b)}^{(2)} f (x,y) = \nf(a,b) & + \\nabla f (a,b) \\begin{bmatrix} x-a \\\\ y-b \\end{bmatrix} \\\\\n & + \\frac{1}{2} \\begin{bmatrix} x-a & y-b \\end{bmatrix}\n\\mathrm{Hess} f (a,b) \\begin{bmatrix} x-a \\\\ y-b \\end{bmatrix} \\:.\n\\end{split}\n\\end{equation}\nClose to $(a,b)$, we have\n\\[ T_{(a,b)}^{(1)} f \\approx f, \\qquad T_{(a,b)}^{(2)} f \\approx f \\:, \\]\nwhere the second-order approximation is better in the sense that the error \\[\\abs{T_{(a,b)}^{(2)}f(x,y)-f(x,y)}\\]\ndecays faster than $\\abs{T_{(a,b)}^{(1)}f(x,y)-f(x,y)}$ as $(x,y)\\to(a,b)$. Similarly for $f=f(x,y,z)$.\n\\end{theorem}\n\n\\begin{proof}\nLet $(x,y)$ be close to the point $(a,b)$, so that both\n\\[ h = x-a \\quad \\text{and} \\quad  k = y-b \\]\nare small. Define the curve\n\\[ \\gamma(t) = (a+ht,b+kt) \\:, \\]\nand let $F(t) = f(\\gamma(t))$. Note that $\\gamma(0)=(a,b)$ and $\\gamma(1)=(x,y)$. We now find the second-order Taylor approximation of the single-variable function $F$ at $t=0$:\n\\begin{equation*}\n\\begin{split}\nF(t) & \\approx T_0^{(2)}F(t)\n= F(0) + F'(0)\\cdot t + \\frac{1}{2} F''(0)\\cdot t^2 \\\\\n& = F(0) + t \\cdot \\Df{F}{t}(0) + \\frac{1}{2}t^2 \\cdot \\Ddf{F}{t}(0) \\\\\n& = f(a,b) + t \\cdot \\left[ \\df{f}{x}\\Df{x}{t} + \\df{f}{y}\\Df{y}{t} \\right]_{t=0}\n+ \\frac{1}{2} t^2 \\cdot \\Df{}{t}\\left[ \\df{f}{x}\\Df{x}{t} + \\df{f}{y}\\Df{y}{t} \\right]_{t=0} \\\\\n& = f(a,b) + t \\cdot \\nabla f (a,b) \\begin{bmatrix} h \\\\ k \\end{bmatrix}\n+ \\frac{1}{2} t^2 \\cdot \\Df{}{t}\\left[ f_x \\, h + f_y \\, k \\right]_{t=0} \\:.\n\\end{split}\n\\end{equation*}\nThe remaining $t$-derivative is that of a composition with $\\gamma$ (the evaluation at $t=0$ indicated as $[\\dots]_{t=0}$ does not mean that $f_x$ and $f_y$ are evaluated at $t=0$ -- they are to be evaluated at $\\gamma(0)=(a,b)$), and therefore, chain rule I needs to be applied one more time. Note that $h$ and $k$ do not depend on $t$.\n\\begin{equation*}\n\\begin{split}\n\\Df{}{t} \\left[ f_x \\, h + f_y \\, k \\right]_{t=0} \n& = h \\cdot \\left[ \\df{f_x}{x} \\Df{x}{t} + \\df{f_x}{y} \\Df{y}{t} \\right] \n+ k \\cdot \\left[ \\df{f_y}{x} \\Df{x}{t} + \\df{f_y}{y} \\Df{y}{t} \\right] \\\\\n& = h^2 f_{xx} + hk f_{yx} + kh f_{xy} + k^2 f_{yy} \\:,\n\\end{split}\n\\end{equation*}\nwhere all the partials are evaluated at $(a,b)$. The last expression can be written as a matrix multiplication of the row vector containing $h$ and $k$, a matrix containing the second-order derivatives of $f$, and the column vector containing $h$ and $k$. This gives\n\\[\nF(t) \\approx f(a,b) + t \\, \\nabla f (a,b) \\begin{bmatrix} h \\\\ k \\end{bmatrix} \n+ \\frac{t^2}{2} \\begin{bmatrix} h & k \\end{bmatrix}\n\\mathrm{Hess} f (a,b) \\begin{bmatrix} h \\\\ k \\end{bmatrix} \\:.\n\\]\nEvaluating this approximation at $t=1$, we obtain the stated second-order approximation of $f$,\n\\[\nf(x,y) \\approx f(a,b) + \\nabla f (a,b) \\begin{bmatrix} h \\\\ k \\end{bmatrix} \n+ \\frac{1}{2} \\begin{bmatrix} h & k \\end{bmatrix}\n\\mathrm{Hess} f (a,b) \\begin{bmatrix} h \\\\ k \\end{bmatrix} \\:.\n\\]\nNote that $t=1$ is not small, i.e. that it is not close to the point $t=0$ about which $F$ was developed, and that in general it may be too big for a good approximation. 
However, in the above situation, this can be controlled by requiring $h$ and $k$ to be small, that is, by letting $(x,y)$ be close to the point $(a,b)$ from which the approximation of $f$ is carried out.\n\\end{proof}\n\n\\begin{example}\n\\begin{enumerate}[(i)]\n\t\\item Find the first- and second-order Taylor approximation of $f(x,y)=\\e^x \\ln(1+y)$ at $(a,b)=(0,0)$. \\\\\n\t{\\it Sol.:} We begin by finding all the values, vectors, and matrices at $(a,b)$ that are needed for writing out the Taylor approximations \\eqref{eq:TA}:\n\t\\begin{equation*}\n\t\\begin{split}\n\tf(x,y) & = \\e^x \\ln(1+y) \\:, \\\\\n\tf(0,0) & = 1 \\cdot 0 = 0 \\:, \\\\\n\t\\nabla f(x,y) & = \\begin{bmatrix} \\e^x \\ln(1+y) & \\frac{\\e^x}{1+y} \\end{bmatrix} \\:, \\\\\n\t\\nabla f(0,0) & = \\begin{bmatrix} 0 & 1 \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(x,y) & = \\begin{bmatrix} \\e^x \\ln(1+y) & \\frac{\\e^x}{1+y} \\\\\n\t\t\t\t\t\t\t\t \\frac{\\e^x}{1+y} & -\\e^x\\frac{1}{(1+y)^2} \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(0,0) & = \\begin{bmatrix}\t0 & 1 \\\\ 1 & -1\t\\end{bmatrix} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tThis gives\n\t\\begin{equation*}\n\tT^{(1)}_{(0,0)}f(x,y) = f(0,0)+\\nabla f(0,0) \n\t\\begin{bmatrix}\tx-0 \\\\ y-0 \\end{bmatrix}\n\t= 0 + \\begin{bmatrix} 0 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = y\n\t\\end{equation*}\n\tand\n\t\\begin{equation*}\n\t\\begin{split}\n\tT^{(2)}_{(0,0)}f(x,y) \n\t& = 0 + \\begin{bmatrix} 0 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}\n\t+ \\frac{1}{2} \\begin{bmatrix} x & y \\end{bmatrix}\n\t\\begin{bmatrix}\t0 & 1 \\\\ 1 & -1\t\\end{bmatrix}\n\t\\begin{bmatrix} x \\\\ y \\end{bmatrix} \\\\\n\t& = y + \\frac{1}{2} \\begin{bmatrix} x & y \\end{bmatrix} \n\t\\begin{bmatrix} y \\\\ x - y \\end{bmatrix} \n\t= y + xy - \\frac{y^2}{2} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\t\\item Find the first- and second-order Taylor approximation of $f(x,y)= 2x^2 + y^2$ about the point $(a,b)=(1,1)$. \\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf(x,y) & = 2x^2 + y^2 \\:, \\\\\n\tf(1,1) & = 3 \\:, \\\\\n\t\\nabla f(x,y) & = \\begin{bmatrix} 4x & 2y \\end{bmatrix} \\:, \\\\\n\t\\nabla f(1,1) & = \\begin{bmatrix} 4 & 2 \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(x,y) & = \\begin{bmatrix} 4 & 0 \\\\ 0 & 2 \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(1,1) & = \\begin{bmatrix} 4 & 0 \\\\ 0 & 2 \\end{bmatrix} \\:,\n\t\\end{split}\n\t\\end{equation*}\n\t\\begin{equation*}\n\t\\begin{split} \n\tT^{(2)}_{(1,1)}f(x,y) & = 3 + \n\t\\begin{bmatrix} 4 & 2 \\end{bmatrix}\n\t\\begin{bmatrix} x-1 \\\\ y-1 \\end{bmatrix}\n\t+ \\frac{1}{2} \\begin{bmatrix} x-1 & y-1 \\end{bmatrix}\n\t\\begin{bmatrix} 4 & 0 \\\\ 0 & 2 \\end{bmatrix}\n\t\\begin{bmatrix} x-1 \\\\ y-1 \\end{bmatrix} \\\\\n\t& = -3 + 4x + 2y + \\begin{bmatrix} x-1 & y-1 \\end{bmatrix}\n\t\\begin{bmatrix} 2x-2 \\\\ y-1 \\end{bmatrix} = 2x^2 + y^2 \\:. \n\t\\end{split}\n\t\\end{equation*}\n\tNote that we have obtained $T_{(a,b)}^{(2)}f=f$. This is because $f(x,y)$ is already a very simple function -- an expression of $x$ and $y$ containing terms of order at most $2$. The analogous situation in the corresponding single-variable theory is: The $m$-th-order Taylor approximation of a polynomial of order $d \\leq m$ is the original polynomial itself. \n\t\n\tIt is important not to forget about one of the parts we were asked about -- the first-order approximation! It is contained in the above computation:\n\t\\[ T^{(1)}_{(1,1)}f(x,y) = -3 +4x+2y \\:.
\\]\n\t\\item Find the second-order Taylor approximation of $f(x,y)= \\sin(x)\\sin(y)$ about the point $(a,b)=(\\rfrac{\\pi}{4},\\rfrac{\\pi}{4})$. \\\\\n\t{\\it Sol.:}\n    Note that $\\sin(\\rfrac{\\pi}{4})=\\cos(\\rfrac{\\pi}{4})=\\rfrac{\\sqrt{2}}{2}$. The required quantities at $(a,b)$ are\n\t\\begin{equation*}\n\t\\begin{split}\n\tf(x,y) & = \\sin(x)\\sin(y) \\:, \\\\\n\tf(\\rfrac{\\pi}{4},\\rfrac{\\pi}{4}) & = \\frac{1}{2} \\:, \\\\\n\t\\nabla f(x,y) & = \\begin{bmatrix} \\cos(x)\\sin(y) & \\sin(x)\\cos(y) \\end{bmatrix} \\:, \\\\\n\t\\nabla f(\\rfrac{\\pi}{4},\\rfrac{\\pi}{4}) \n\t& = \\begin{bmatrix} \\frac12 & \\frac12 \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(x,y) & = \\begin{bmatrix}\n\t-\\sin(x)\\sin(y) & \\cos(x)\\cos(y) \\\\ \\cos(x)\\cos(y) & -\\sin(x)\\sin(y) \\end{bmatrix} \\:, \\\\\n\t\\mathrm{Hess}f(\\rfrac{\\pi}{4},\\rfrac{\\pi}{4}) \n\t& = \\frac{1}{2} \\begin{bmatrix} -1 & 1 \\\\ 1 & -1 \\end{bmatrix} \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tFor simplicity, we write out the approximation in terms of $h=x-\\rfrac{\\pi}{4}$ and $k=y-\\rfrac{\\pi}{4}$:\n\t\\begin{equation*}\n\t\\begin{split}\n\tT^{(2)}_{(\\rfrac{\\pi}{4},\\rfrac{\\pi}{4})}f(x,y) \n\t& = \\frac{1}{2} + \\frac{1}{2} \\begin{bmatrix} 1 & 1 \\end{bmatrix}\n\t\\begin{bmatrix} h \\\\ k \\end{bmatrix}\n\t+ \\frac{1}{4} \t\\begin{bmatrix} h & k \\end{bmatrix}\n\t\\begin{bmatrix} -1 & 1 \\\\ 1 & -1\t\\end{bmatrix}\n\t\\begin{bmatrix} h \\\\ k \\end{bmatrix} \\\\\n\t& = \\frac{1}{2} \\left[ 1 + h + k + hk - \\frac{h^2}{2} - \\frac{k^2}{2} \\right] \\:,\n\t\\end{split}\n\t\\end{equation*}\n\twhich can then be translated back to an expression of $x$ and $y$ by replacing $h$ and $k$ with $x-\\rfrac{\\pi}{4}$ and $y-\\rfrac{\\pi}{4}$ respectively.\n\\end{enumerate}\n\\end{example}\n\n\\begin{remark}\n\\label{rem:taylor_appr}\n\\begin{enumerate}[(i)]\n\\item The second-order Taylor approximation of $f(x,y)$ at $(a,b)$ has the following properties (denote it by $T$ for simplicity: $T=T_{(a,b)}^{(2)}f$).\n\\begin{enumerate}[(a)]\n\t\\item $T(a,b) = f(a,b)$ \\hfill (same function value at $(a,b)$)\n\t\\item $\\nabla T(a,b) = \\nabla f(a,b)$ \\hfill (same first-order derivatives at $(a,b)$)\n\t\\item $\\mathrm{Hess}T(a,b) = \\mathrm{Hess}f(a,b)$ \\hfill (same second-order derivatives at $(a,b)$)\n\t\\item $T$ is a second-order expression, i.e., a linear combination of $1,x,y,x^2,xy,y^2$.\n\\end{enumerate}\nThe last point should help you identify mistakes. For example, if you obtain terms like $x^2y$ or $y^5$ or $x \\sin y$ in the approximation, you should review your computation -- forgetting to evaluate the gradient or the Hessian at the given point can lead to such expressions.\n\\item Similarly, $T^{(1)}_{(a,b)}f$ is a first-order expression,\n\\[ T^{(1)}_{(a,b)}f = c_1 + c_2x +c_3y \\:. \\]\nIt is, of course, a function of $x$ and $y$, and we plot such functions in $\\mathbb{R}^3$ by drawing the function values as the $z$-coordinate: \n\\[ z = c_1 + c_2x +c_3y \\:. \\]\nThis is the equation of a plane in $\\mathbb{R}^3$, namely the plane tangent to the graph of $f$ at the point $(x_0,y_0,z_0)=(a,b,f(a,b))$.
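For instance, for $f(x,y)=2x^2+y^2$ and $(a,b)=(1,1)$ as in example (ii) above, the graph of\n\\[ z = T^{(1)}_{(1,1)}f(x,y) = -3+4x+2y \\]\nis the plane tangent to the graph of $f$ at the point $(1,1,3)$.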
This is similar to the corresponding 1D theory: The first-order Taylor approximation gives the equation of the tangent line.\n%\\figbox{Graphs of $f$ and $T^{(1)}_{(a,b)}f$ (3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f306.png}\n\\end{center}\n\\end{enumerate}\n\\end{remark}\n\n\\begin{exercise}\n\\begin{enumerate}[(i)]\n\t\\item Find the second-order Taylor approximation of $f(x,y)=x^2(1-y^3)$ at the point\\footnote{\n\t\\[ T_{(2,1)}^{(2)}f(x,y) = -12y^2-12xy+12x+36y-24 \\]}\n\t$(a,b)=(2,1)$.\n\t\\item Find the second-order Taylor approximation of $f(x,y,z)=xy^2+z^3$ at\\footnote{You can verify your answer by checking the conditions in Remark~\\ref{rem:taylor_appr}. To make sure that your second derivatives are correct, compare to the example at the end of the previous section.} $(a,b,c)=(1,1,1)$.\n\t\\item Find the second-order Taylor approximation of $f(x,y)=\\arctan(x+2y)$ at\\footnote{\n\t\\[ T_{(5,-2)}^{(2)}f(x,y) = -\\frac{x^2}{4}-y^2-xy+x+2y+\\frac{\\pi-3}{4}\\]}\n\t$(a,b)=(5,-2)$.\n\t\\item Re-do example (ii) above, $f(x,y)=2x^2+y^2$, but now at a general point $(a,b)\\in\\mathbb{R}^2$. That is, use parameters $a$ and $b$ for the point at which the approximation is carried out, rather than specific numbers.\n\t\\item Find the intersection of the $xy$-plane with the tangent plane of $f(x,y)=2x^2+y^2$ at\\footnote{$y=x-\\tfrac32.$} $(1,-2)$.\n\\end{enumerate}\n\\end{exercise}\n\t\n\t\n\\section{Local Extrema and Saddle Points}\n\n\\begin{definition}[Local Extrema and Critical Points]\nFor a function $f(x,y)$ and a point $(a,b)$ in its domain (similarly in three variables):\n\\begin{enumerate}[(i)]\n\t\\item $(a,b)$ is a \\emph{local maximum} of $f$ if \n\t\\[ f(x,y) \\leq f(a,b) \\]\n\tin some neighbourhood of $(a,b)$.\n\t\\item $(a,b)$ is a \\emph{local minimum} of $f$ if there exists a $\\delta > 0$ such that\n\t\\[ f(x,y) \\geq f(a,b) \\]\n\tfor all $(x,y)$ with $\\left|\\left|\\begin{bmatrix} x \\\\ y \\end{bmatrix}\n\t\t\t\t\t\t\t   -\\begin{bmatrix} a \\\\ b \\end{bmatrix}\\right|\\right|<\\delta$.\n\t\\item $(a,b)$ is a \\emph{critical point} of $f$ if \n\t\\[ \\nabla f (a,b) = \\begin{bmatrix} 0 & 0 \\end{bmatrix} \\:. \\]\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark}\n\\begin{enumerate}[(i)]\n\t\\item Neighbourhoods are small sets surrounding the point in question. For example, in one dimension: For $t_0 = 0.001$, there exists a small neighbourhood of $t_0$ that contains only positive numbers, e.g. the open interval $(t_0-0.0005,t_0+0.0005)$. For $t_0=0$, however, this is not possible: every neighbourhood -- no matter how small -- contains both positive and negative numbers. In the higher-dimensional domains of multivariate functions, one can work with disks (in 2D) or balls (in 3D). This notion is very important in mathematics and is used informally in part (i) of the definition above. In part (ii), the same condition is written out more formally using a parameter $\\delta$ and the corresponding disk of radius $\\delta$ around the point $(a,b)$. It does not matter how small $\\delta$ is, but $\\delta>0$ is essential.\n\t\\item A local extremum is either a local minimum or a local maximum. Local maxima are not surpassed in some small neighbourhood, while for local minima there exists a neighbourhood in which none of the other function values is smaller.
There is no requirement for surrounding function values to be strictly smaller, respectively strictly larger -- that means, for example, that for a constant function $f(x,y)=c$, every point in its domain is both a local maximum and a local minimum.\n\t\\item For the material in this section, it is useful to recall the corresponding single-variable theory. The following result should look familiar and its proof will be left as an exercise.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{theorem}\n\\label{thm:crit_pts}\nIf $(a,b)$ is a local extremum of $f$, then $(a,b)$ is a critical point of~$f$.\n\\end{theorem}\n\n\\begin{example}\n\\label{expl:crit_points}\n\t\\begin{enumerate}[(i)]\n\t\t\\item Find the critical points of $f(x,y) = x^2y - y + 7$.\\\\\n\t\t{\\it Sol.:}\n\t\tThe gradient of $f$ is \n\t\t\\[  \\nabla f (x,y) = \\begin{bmatrix} 2xy & x^2-1 \\end{bmatrix} \\:. \\]\n\t\tSetting its second component, i.e., the $y$-derivative of $f$, equal to zero, we obtain $x^2-1=0$, which has solutions $x=\\pm1$. The second equation is $2xy=0$, which, since we know $x\\not=0$, has the solution $y=0$. Therefore, the critical points are $(-1,0)$ and $(+1,0)$.\n\t\t\\item Find the critical points of $f(x,y) = xy$.\\\\\n\t\t{\\it Sol.:}\n\t\tSetting the gradient of $f$,\n\t\t\\[  \\nabla f (x,y) = \\begin{bmatrix} y & x \\end{bmatrix} \\:, \\]\n\t\tequal to zero gives the critical point $(a,b)=(0,0)$.\n\t\t\n\t\tNote, however, that this is not a local extremum: The function value at that point is $0$, and there are points with both positive and negative function values arbitrarily close to it. For example, consider the points $(x,y)=(\\varepsilon,\\varepsilon)$ and $(x,y)=(\\varepsilon,-\\varepsilon)$, where $\\varepsilon$ is a small positive constant.\n\t\t\\item Find the critical points of $f(x,y) = x^3 - 3x + 3xy^2$.\\\\\n\t\t{\\it Sol.:}\n\t\tAfter computing the gradient of $f$, we find that $f_x=0$ for all points on the unit circle, and $f_y=0$ for all points that lie on one of the two axes. This gives the critical points $(+1,0)$, $(0,+1)$, $(-1,0)$, $(0,-1)$.\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{theorem}[Second-Derivative Test]\nLet $(a,b)$ be a critical point of $f(x,y)$ and let \n\\[ \\Delta = \\det \\left( \\mathrm{Hess} f (a,b) \\right) \n= \\left( f_{xx}f_{yy}-f_{xy}^2 \\right)_{|(a,b)} \\:. \\]\nThen $(a,b)$ can be classified as follows.\n\\begin{enumerate}[(i)]\n\t\\item If $\\Delta > 0$ and $f_{xx}(a,b)>0$, then $(a,b)$ is a local minimum.\n\t\\item If $\\Delta > 0$ and $f_{xx}(a,b)<0$, then $(a,b)$ is a local maximum.\n\t\\item If $\\Delta < 0$, then $(a,b)$ is not a local extremum.\n\t\\item If $\\Delta = 0$, then the test is inconclusive and $(a,b)$ could be any of the three possibilities above (local maximum, local minimum, or neither).\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{example}\n\\label{expl:classifications}\n\\begin{enumerate}[(i)]\n\t\\item Find and classify all critical points of $f(x,y) = x^2y - y + 7$.\\\\\n\t{\\it Sol.:} In the previous example, we have found the critical points $P_1=(-1,0)$ and $P_2=(+1,0)$. The Hessian of $f$ is\n\t\\[ \\mathrm{Hess}f(x,y) =\n\t\\begin{bmatrix}\t2y & 2x \\\\ 2x & 0 \\end{bmatrix} \\:. \\]\n\tAt the first point, we have\n\t\\[ \\mathrm{Hess}f(P_1) =\n\t\\begin{bmatrix}\t0 & -2 \\\\ -2 & 0 \\end{bmatrix} \\:, \\]\n\twhich has determinant $\\Delta = 0 \\cdot 0 - (-2) \\cdot (-2) = -4 < 0$. Hence, $P_1$ is neither a local maximum nor a local minimum. Points like $P_1$ are called \\emph{saddle points}, cf. 
Remark~\\ref{rem:schmiegquadriken}.\n\tAt the second critical point, we have\n\t\\[ \\mathrm{Hess}f(P_2) =\n\t\\begin{bmatrix}\t0 & 2 \\\\ 2 & 0 \\end{bmatrix} \\:, \\]\n\twhich has determinant $\\Delta = 0 \\cdot 0 - 2 \\cdot 2 = -4 < 0$. Hence, $P_2$ is a saddle point as well and therefore not a local extremum.\n\t\\item Find and classify all critical points of $f(x,y) = x^3 - 3x + 3xy^2$.\\\\\n\t{\\it Sol.:} Earlier we had found the critical points $P_1=(+1,0)$, $P_2=(0,+1)$, $P_3=(-1,0)$, and $P_4=(0,-1)$. The Hessian of $f$ is\n\t\\[ \\mathrm{Hess}f(x,y) =\n\t\\begin{bmatrix}\t6x & 6y \\\\ 6y & 6x \\end{bmatrix} = \n\t6\\begin{bmatrix}\tx & y \\\\ y & x \\end{bmatrix} \\:. \\]\n\tThis gives\n\t\\[ \\mathrm{Hess}f(P_1) = 6 I, \\qquad \\mathrm{Hess}f(P_3) = -6 I \\:, \\]\n\twhich both have determinant $\\Delta = +36 > 0$. Inspecting the signs of the entries in the upper left of those Hessians, we find that $P_1$ is a local minimum and that $P_3$ is a local maximum. At $P_2$ and $P_4$, we have\n\t\\[ \\mathrm{Hess}f(P_{2/4}) =\n\t\\begin{bmatrix}\t 0 & \\pm6 \\\\ \\pm6 & 0 \\end{bmatrix} \\:, \\]\n\twhich both have determinant $\\Delta = -36 < 0$. Therefore, $P_2$ and $P_4$ are not local extrema.\n\t\\item Find all critical points of \n\t$f(x,y) = x - \\ln x + 2 y^2 + xy$\n\tand classify them using the second derivative test.\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\tf(x,y) &= x - \\ln x + 2 y^2 + xy \\:, \\\\\n\t\\nabla f (x,y) & = \\begin{bmatrix} 1 - \\tfrac{1}{x} + y & 4y + x \\end{bmatrix} \\:,\\\\\n\t\\mathrm{Hess}f(x,y) &= \\begin{bmatrix} \\rfrac{1}{x^2} & 1 \\\\ 1 & 4 \\end{bmatrix} \\:,\\\\\n\tP &= (2,-\\tfrac{1}{2}) \\:,\\\\\n\t\\Delta &= 0 \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tTherefore, the second derivative test is inconclusive, and more work is required to classify the critical point $P=(2,-\\tfrac{1}{2})$ of $f$.\n\\end{enumerate}\n\\end{example}\n\n\\begin{remark}\n\\label{rem:schmiegquadriken}\nIt is useful to understand the workings behind the second derivative test. We therefore sketch its derivation:\n\nSince $(a,b)$ is a critical point of $f$, the second-order Taylor approximation at that point is \n\\[ T_{(a,b)}^{(2)} f (x,y) = f(a,b) + 0 \n + \\frac{1}{2} \\begin{bmatrix} x-a & y-b \\end{bmatrix}\n\\mathrm{Hess} f (a,b) \\begin{bmatrix} x-a \\\\ y-b \\end{bmatrix} \\:, \\]\nand it seems plausible that studying the approximation $T_{(a,b)}^{(2)}f$ should allow us to classify $(a,b)$. Now, for the question whether $(a,b)$ is an extremum or not, overall additive constants such as the term $f(a,b)$ above do not matter. Also, we may shift our coordinate system so that $(a,b)\\rightarrow(0,0)$. That is, we let $\\wtd{x}=x-a$, $\\wtd{y}=y-b$, as in the proof of the Taylor approximation, and we then write $\\wtd{x}$ and $\\wtd{y}$ as $x$ and $y$ again to simplify the notation. Thirdly, we denote the matrix $\\tfrac12 \\cdot \\mathrm{Hess}f$ at the critical point by $M$. Note that $M$ is symmetric. With those steps, we have reduced classifying the critical point $(a,b)$ of the original function $f$ to classifying the critical point $(0,0)$ of the auxiliary function\n\\[ h(x,y) = \\begin{bmatrix}\tx & y \\end{bmatrix} M \n\\begin{bmatrix} x \\\\ y \\end{bmatrix} \\:. \\]\n\nMore advanced theory of matrices implies that it suffices to restrict one's attention to the case when $M$ is a diagonal matrix.
The basic idea for this argument is: if $M$ is not diagonal, then there is a change of variables so that $M$ written out in the new variables is diagonal. This change of variables is a rotation of the $xy$-plane, which does not affect the property of being a minimum, maximum, or neither. The function $h$ is now of the form\n\\[ h(x,y) = \\begin{bmatrix}\tx & y \\end{bmatrix}  \n\\begin{bmatrix}\t\\lambda & 0 \\\\ 0 & \\mu \\end{bmatrix}\n\\begin{bmatrix} x \\\\ y \\end{bmatrix} \\:, \\]\nwhere $\\lambda,\\mu\\in\\mathbb{R}$. For example, consider the case $\\lambda = 4$, $\\mu = 1$. Then\n\\[ h(x,y) = 4 x^2 + y^2 = (2x)^2 + y^2 = \\wtd{x}^2+y^2 \\:, \\]\nwhere we rescaled the $x$ axis, $x \\rightarrow \\wtd{x} = 2x$. The expression on the right is a paraboloid, for which $(0,0)$ is a minimum. The rescaling causes a deformation -- imagine a paraboloid, then squeeze it so that its level sets are ellipses rather than circles -- but, again, this does not affect the property of having a minimum, maximum, or neither at the point $(0,0)$. We have therefore found that $h$ with $\\lambda = 4$ and $\\mu = 1$ has a local minimum at $(0,0)$. By this argument, we find that all the cases with $\\lambda > 0$ and $\\mu > 0$ are qualitatively the same and have a minimum. Let us choose the identity matrix as representing those cases,\n\\[ M_1 = I = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix} \\:, \\]\nfor which $h(x,y)=x^2+y^2$ and the graph of $h$ is a paraboloid:\n%\\figbox{Example for case (i) ($h$ induced by $M_1$, paraboloid; 3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f307.png}\n\\end{center}\nThis is case (i) of the second derivative test, a local minimum. Note that the chosen representative $M_1=I$ for this class satisfies the assumptions of (i): $\\det M_1 > 0$ and its upper left entry is positive.\n\nContinuing this line of reasoning, one finds that the other cases to consider are:\n\\begin{equation*}\nM_2 = \\begin{bmatrix} -1 & 0 \\\\ 0 & -1 \\end{bmatrix} \\:,\nM_3 = \\begin{bmatrix} +1 & 0 \\\\ 0 & -1 \\end{bmatrix} \\:,\nM_4 = \\begin{bmatrix} +1 & 0 \\\\ 0 & 0 \\end{bmatrix} \\:, \nM_5 = \\begin{bmatrix} -1 & 0 \\\\ 0 & 0 \\end{bmatrix} \\:, \nM_6 = \\begin{bmatrix} 0 & 0 \\\\ 0 & 0 \\end{bmatrix} \\:.\n\\end{equation*}\nThe matrix $M_2$ generates the upside-down paraboloid $h(x,y)=-(x^2+y^2)$,\n%\\figbox{Example for case (ii) ($h$ induced by $M_2$, upside-down paraboloid; 3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f308.png}\n\\end{center}\nwhich has a maximum at $(0,0)$. Again, note that $M_2$ satisfies the assumptions of (ii) of the second derivative test. The matrix $M_3$ gives\n\\[ h(x,y) = x^2-y^2 \\:, \\]\nwhich does not have an extremum at $(0,0)$: we have $h(0,0)=0$, but both positive and negative function values arbitrarily close (e.g. $h(\\varepsilon,0)>0$ and $h(0,\\varepsilon)<0$). This is case (iii) of the second derivative test, and both the graph of $h(x,y) = x^2-y^2$,\n%\\figbox{Example for case (iii), ($h$ induced by $M_3$, saddle; 3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f309.png}\n\\end{center}\nand the form of $M_3$ agree with the statements in (iii).\n\nFinally, we argue that the cases represented by $M_4,M_5,M_6$ do not allow us to draw a conclusion, i.e. they correspond to case (iv) of the second derivative test. They all do satisfy the assumption $\\det M = 0$. 
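(Indeed, each of $M_4,M_5,M_6$ contains a row of zeros, so, e.g., $\\det M_4 = 1 \\cdot 0 - 0 \\cdot 0 = 0$.)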
The choice $M_5$ gives $h(x,y)=-x^2$, which has the following graph.\n%\\figbox{Example for case (iv) ($h$ induced by $M_5$; 3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f310.png}\n\\end{center}\nThe matrix $M_4$ produces the same graph, but upside-down, and the graph obtained by choosing $M_6$ is the plane $h(x,y)=0$. In each of these cases, there is at least one straight line through the critical point $(0,0)$ along which $h$ is constant. On this line, small contributions of the original function $f$ that are not captured by the second-order Taylor approximation could tip the balance towards different conclusions. Some guidance for understanding this will be provided in the exercises below.\n\nThe purpose of this remark was to explain the workings behind the second derivative test, to outline its proof, and, perhaps most importantly, to help avoid confusing the cases (i) and (ii): for example, suppose you are classifying a critical point, you have obtained a Hessian matrix with positive determinant and positive entry in the upper left corner, but you have forgotten whether this implies a minimum or a maximum. Then think of the representative $M = I$ of this situation -- this gives the function $h(x,y) = x^2 + y^2$, which is the well-known paraboloid and has a \\emph{minimum} at $(0,0)$.\n\\end{remark}\n\n\\begin{application}[\\texttt{Mean squared error for linear regression}]\n\\texttt{Define mean squared error for the linear regression application from the previous chapter; apply theory from the current chapter to basic matrix functions and hence show that linear regression minimises the MSE.}\n\\end{application}\n\n\\begin{exercise}\n\t\\begin{enumerate}[(i)]\n\t\t\\item Fill in the gaps for examples \\ref{expl:crit_points} (iii) and \\ref{expl:classifications} (iii), which were only sketched.\n\t\t\\item Find and classify all critical points of\\footnote{Only one critical point: a saddle point at $(-2,4)$.}\n\t\t$f(x,y) = x^2+xy+2y-1$. \n\t\t\\item Find and classify all critical points of\\footnote{The critical points are\n\t\t\\[ (0,0)\\:,\\quad(2,0)\\:,\\quad(1,1)\\:,\\quad(1,-1), \\]\n\t\tand they are a maximum, minimum, saddle point, saddle point, respectively.}\n\t\t$f(x,y) = x^3+3xy^2-3x^2-3y^2-2$.\n\t\t\\item Convince yourself that Theorem~\\ref{thm:crit_pts} is true\\footnote{You could contrapose that statement and/or look at Theorem~\\ref{thm:geom_dd} for inspiration.}.\n\t\t\\item Give examples of a local minimum, a local maximum, and a critical point that is neither, none of which can be classified with the second derivative test\\footnote{$f(x,y)=x^4+y^4$ covers one of those cases.}.\n\t \t\\item Use graphing software to explore different functions of the form\n\t \t\\[ h(x,y) = \\begin{bmatrix} x & y \\end{bmatrix}\n\t \tM \\begin{bmatrix} x \\\\ y \\end{bmatrix} \\:, \\]\n\t \twhere $M$ is a symmetric $2 \\times 2$ matrix (i.e., $a_{12}=a_{21}$), and compare the different shapes you find to Remark~\\ref{rem:schmiegquadriken}.\n\t \t\\item For \\ref{expl:classifications} (iii), use graphing software to find out what the point $P$ is\\footnote{Classifying $P$ is quite difficult. 
Zooming in by appending \\texttt{~for x from 1.99 to 2.01 and y from -0.51 to -0.49~} to the WolframAlpha plot command provides some insight.}.\n\t\\end{enumerate}\n\\end{exercise}\n\n\n\\section{Extrema under Constraints: Lagrange Multipliers}\n\n\\begin{example}\n\\label{expl:naive_opt}\nFind the maximum of the function $z=f(x,y)=2x+y$ on the circle $S = \\{ (x,y) \\in \\mathbb{R}^2 \\: | \\: x^2+y^2=1 \\}$, as well as the value that $f$ takes at that point.\\\\\n{\\it Sol.:} First note that the graph of $f$ is a plane, and therefore $f$ does not have any extrema at all on its full domain $\\mathbb{R}^2$. However, a maximum does exist when restricting to the circle:\n%\\figbox{Maximum of $f$ on $S$ (3D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f311.png}\n\\end{center}\nTo find this point, we parametrise the circle as\n\\[ (x(t),y(t)) = (\\cos(t),\\sin(t)) \\:, \\]\nand then define the function $F(t)=f(x(t),y(t))$. This gives\n\\begin{equation*}\n\\begin{split}\nF(t) &= 2\\cos(t)+\\sin(t) \\:, \\\\\nF'(t) &= -2\\sin(t)+\\cos(t) \\:.\n\\end{split}\n\\end{equation*}\nSetting $F'(t)$ equal to zero, we obtain\n\\[ \\tan(t_0) = \\frac12 \\:, \\]\nwhich has solutions $t_0 \\approx 0.464 + k\\pi$, $k \\in \\mathbb{Z}$. Taking $k=0$ and $k=1$ leads to the points\n\\begin{equation*}\n\\begin{split}\n(x_1,y_1)&\\approx(\\cos(0.464),\\sin(0.464))\\approx(0.894,0.448) \\:, \\\\\n(x_2,y_2)&\\approx(\\cos(3.606),\\sin(3.606))\\approx(-0.894,-0.448) \\:,\n\\end{split}\n\\end{equation*}\nand all other $k$ will yield one of those two points again. Evaluating, we see that $f$ has a larger value at $(x_1,y_1)$, and hence we obtain the answer: $f$ takes the (approximate) maximum value $2.236$ on the circle $S$ at the point $(x,y)\\approx(0.894,0.448)$.\n\\end{example}\n\n\\begin{remark}\nNow suppose the set $S$ is given as a level set of a function $g$, and is less regular and cannot be parametrised easily. The following figure shows such a set $S$ in the domain of $f$, and further a level set $f(x,y)=c$, where the constant $c=c_1$ is chosen to be larger than any of the values $f$ takes on $S$.\n%\\figbox{The level sets $g=0$ and $f=c$ for large $c$ (2D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f312.png}\n\\end{center}\nNow, choosing a slightly smaller constant $c_2$, the curve $f=c$ will move towards $g=0$. We continue this process until the first contact between the two curves is made, say for $c=c_3$. This contact point is a local maximum of $f$ on $S$ -- convince yourself of that\\footnote{Can any of the other points in $S$ have larger function values?}!\n%\\figbox{The level sets $g=0$ and $f=c$ for smaller values of $c$ (2D):}\n\\begin{center}\n\t\\includegraphics[width=0.6\\textwidth]{./Figures/f313.png}\n\\end{center}\nNote that the level curves $g=0$ and $f=c_3$ have a common tangent line -- convince yourself that this is always the case\\footnote{Try to sketch a \\emph{first-contact} scenario where the tangent lines at the contact point intersect (you will not succeed; be aware that level curves of smooth functions do not have ends -- they are either closed curves or they go out to infinity).}. Recall that gradient vectors are orthogonal to level sets and their tangent lines or planes. It must therefore be the case that the gradients of $f$ and $g$ are parallel:\n\\[ \\nabla f = \\lambda \\nabla g, \\qquad \\text{for some~} \\lambda \\in \\mathbb{R} \\:. 
\\]\n\\end{remark}\n\n\\begin{theorem}[Lagrange Multipliers]\nTo maximise or minimise a function $f(x,y)$ on a set \n$S = \\{ (x,y) \\in \\mathbb{R}^2 \\: | \\: g(x,y) = 0 \\} $,\ndefine the function\n\\[ F(\\lambda,x,y) = f(x,y) - \\lambda g(x,y) \\:, \\]\nand then solve\n\\[ \\nabla F = \\begin{bmatrix} 0 & 0 & 0 \\end{bmatrix} \\:. \\]\nThe function $f$ can then be evaluated at the solutions $(x,y)$ of that system to identify the extreme value. The parameter $\\lambda$ is called the \\emph{Lagrange multiplier}.\n\\end{theorem}\n\n\\begin{example}\n\\label{expl:opt}\n\\begin{enumerate}[(i)]\n\t\\item Re-do example \\ref{expl:naive_opt} using Lagrange multipliers.\\\\\n\t{\\it Sol.:} We have $f(x,y) = 2x + y$ and we let\n\t\\[ g(x,y) = x^2 + y^2 - 1 \\:, \\]\n\tso that\n\t\\[  S = \\{ g=0 \\} \\:. \\]\n\tThen we define\n\t\\[ F(\\lambda,x,y) = f(x,y) - \\lambda g(x,y) = 2x+y - \\lambda (x^2+y^2-1) \\:, \\]\n\twhich has partial derivatives\n\t\\begin{equation*}\n\t\\label{eq:lagrange_system}\n\t\\begin{split}\n\tF_\\lambda(\\lambda,x,y) & = 1-(x^2+y^2) \\:, \\\\\n\tF_x(\\lambda,x,y) & = 2 - 2\\lambda x \\:, \\\\\n\tF_y(\\lambda,x,y) & = 1 - 2\\lambda y \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tSetting $\\nabla F$ equal to $0$ leads to a system of three equations (the equations are not linear, and hence the techniques from chapter~\\ref{ch:vm} cannot be used to solve it). The equations $F_x=0$ and $F_y=0$ lead to\n\t\\[ x = \\frac{1}{\\lambda}, \\qquad y = \\frac{1}{2\\lambda} \\:, \\]\n\twhich we can then substitute into $F_\\lambda=0$:\n\t\\[ 0 = 1 - \\left( \\left( \\frac{1}{\\lambda} \\right)^2 \n\t+ \\left( \\frac{1}{2\\lambda} \\right)^2 \\right) \n\t= 1 -\\frac{5}{4\\lambda^2} \\quad \\rightarrow \\quad \\lambda = \\pm \\frac{\\sqrt{5}}{2} \\:. \\]\n\tUsing the equations for $x$ and $y$ above, we obtain the points \n\t\\begin{equation*}\n\t\\begin{split}\n(x_1,y_1) &= \\left( \\frac{2}{\\sqrt{5}},\\frac{1}{\\sqrt{5}} \\right) \\approx (0.894,0.447) \\:, \\\\\n(x_2,y_2) &= \\left( -\\frac{2}{\\sqrt{5}},-\\frac{1}{\\sqrt{5}} \\right) \\approx (-0.894,-0.447) \\:.\n\t\\end{split}\n\t\\end{equation*}\n\tNow, $f$ has to have both a maximum and a minimum on $S$ -- this is similar to the extreme value theorem for continuous single-variable functions -- and the two points above are the only candidates for those extrema. Evaluating, we find\n\t\\[ f(x_1,y_1) = 2 \\cdot \\frac{2}{\\sqrt{5}} + \\frac{1}{\\sqrt{5}} = \\sqrt{5} \\]\n\tand $f(x_2,y_2)= - \\sqrt{5}$. Hence, the maximum is attained at $(x_1,y_1)$.\n\t\n\tNote that, contrary to the computation in example \\ref{expl:naive_opt}, we did not have to numerically evaluate inverse trigonometric functions, and we were able to obtain exact symbolic expressions for the maximum function value and the coordinates of the point where it is taken. Also, comparing the two different approaches, we have proven that\n\t\\[ \\cos \\left( \\arctan \\left( \\frac{1}{2}\\right)\\right) = \\frac{2}{\\sqrt{5}} \\:. \\]\n\t\\item Find the point on the surface $(x-y)^2-z^2 = 1$ that is closest to the origin $(x,y,z)=(0,0,0)$ of $\\mathbb{R}^3$.\\\\\n\t{\\it Sol.:}\n\tThe function\n\t\\[ d(x,y,z) = \\sqrt{x^2+y^2+z^2} \\:, \\]\n\twhich assigns to every point in $\\mathbb{R}^3$ its distance from the origin, needs to be minimised on the surface\n\t\\[ S = \\{ (x,y,z)\\in \\mathbb{R}^3 \\: | \\: (x-y)^2-z^2 = 1\\} \\subseteq \\mathbb{R}^3 \\:. \\]\n\tSuppose we have found a point $P \\in S$ with minimal $d(P)$. 
Then $(d(P))^2$ will be minimal as well (the formal justification for this step is that $x \\mapsto x^2$ is monotonically increasing for $x \\geq 0$). Hence we can simplify our computations by minimising \n\t\\[ f(x,y,z) = x^2+y^2+z^2 \\]\n\tinstead of $d$. The function $g(x,y,z)=(x-y)^2-z^2-1$ defines $S$ and gives\n\t\\[ F(\\lambda,x,y,z) = x^2+y^2+z^2-\\lambda((x-y)^2-z^2-1) \\:. \\]\n\tSetting the gradient of $F$ equal to zero, we obtain the system\n\t\\begin{equation*}\n\t\t\\begin{cases}\n\t\t\t0 &= 2x - \\lambda \\cdot 2(x-y) \\:, \\\\\n\t\t\t0 &= 2y - \\lambda \\cdot 2(x-y)(-1) \\:, \\\\\n\t\t\t0 &= 2z - \\lambda \\cdot (-2z) \\:, \\\\\n\t\t\t0 &= 1+z^2-(x-y)^2 \\:.\n\t\t\\end{cases}\n\t\\end{equation*}\n\tThere are different ways to solve this, and it is important to gain experience with computations of that kind in order to be able to carry them out efficiently and with confidence. Combining the first two equations (for example, by adding them) yields $y=-x$ and the new system\n\t\\begin{equation*}\n\t\t\\begin{cases}\n\t\t\t0 &= 2x (1 - 2 \\lambda) \\:, \\\\\n\t\t\t0 &= 2z (1 + \\lambda) \\:, \\\\\n\t\t\t0 &= 1 + z^2 - 4x^2 \\:.\n\t\t\\end{cases}\n\t\\end{equation*}\n\tIn the first equation (R1), one of the two factors has to be $0$. Taking $x=0$, R3 becomes $1+z^2=0$, which does not have solutions. Taking $\\lambda=\\tfrac{1}{2}$ in R1, the second equation (R2) forces $z=0$, and R3 then gives $x=\\pm\\tfrac12$, leading to the points\n\t\\[ P_{1/2} \\: : \\: (x,y,z) = \\left( \\pm\\frac12, \\mp\\frac12,0 \\right) \\:. \\]\n\tThe function values at those critical points are $f(P_1)=f(P_2)=\\tfrac12$, corresponding to distances $d(P_1)=d(P_2)=\\tfrac{\\sqrt{2}}{2}$ from the origin. Now, there have to be points of minimal distance to the origin on $S$, and since the two critical points we have found are the only candidates for such extrema, we see that $P_1$ and $P_2$ are minima. Note that $f$ does not have any maxima -- this is possible because $S$ is unbounded.\n\t\\item Bob wants to sell free-range eggs and needs to decide how many chickens to keep (``$x$'') and how much land to lease (``$y$''). In order to be able to label his eggs as free-range, every chicken needs to have at least $4\\,\\mathrm{m}^2$ of space. Leasing land costs $f_2(y)=\\rfrac{y}{10}$ and selling eggs brings in an income $f_1(x)=8\\sqrt{x}$. Help Bob optimise his profits. 
(The numbers in this problem are fictional.)\\\\\n\t{\\it Sol.:}\n\t\\begin{equation*}\n\t\\begin{split}\n\t \\mathrm{Profit}\\: &: \\quad f(x,y) = f_1(x) - f_2(y) \\:, \\\\\n\t \\mathrm{Constraint}\\: &: \\quad g(x,y) = \\frac{y}{x} - 4 = 0 \\:, \\\\\n\t \\mathrm{Function~}F\\: &: \\quad\n\t F(\\lambda,x,y) = 8\\sqrt{x} - \\frac{y}{10} - \\lambda \\left( \\frac{y}{x}-4 \\right) \\:, \\\\\n\t \\mathrm{Answer}\\: &: \\quad x=100, \\quad y=400, \\quad f_{\\text{max}} = 40 \\:.\n\t\\end{split}\n\t\\end{equation*}\n\\end{enumerate}\t\n\\end{example}\n\n\\begin{exercise}\n\\begin{enumerate}[(i)]\n\t\\item Find the maximum of $f(x,y)=x$ on\\footnote{Maximising that function amounts to finding the point of the curve that has the largest $x$-coordinate: $x_{\\text{max}}=1$, taken at $(x,y)=(1,0)$.} $x^3+y^2=1$.\n\t\\item Just to make sure we are not sending Bob down the wrong path: also solve example~\\ref{expl:opt} (iii) with single-variable theory, as in example~\\ref{expl:naive_opt}.\n\t\\item Find the maximum and minimum value of $f(x,y)=x^3y^5$ on the ellipse\\footnote{The maximum and minimum values of $f$ are $\\pm5^{\\rfrac52}$, taken at the points $(\\pm1,\\pm\\sqrt{5})$.} $3x^2+y^2=8.$\n\t\\item Find the point on $S\\,:\\:z= xy$ that is closest to the sphere of radius $1$ centred at\\footnote{A point on $S$ that is closest to the sphere is also closest to its centre; considering different cases helps when solving $\\nabla F = 0$; to check your answer: you should obtain a minimum distance of $2$.} $(x,y,z)=(0,0,5)$.\n\t\\item A single-variable problem similar to example \\ref{expl:opt} (ii): For $c\\in\\{1,-4\\}$, consider the curve $y=x^2+c$ and the distances of its points to the origin $(0,0)$ of the $xy$-plane. Sketch the curves and think about what kind of extrema exist in each case. Then compute all local and global extrema as well as the corresponding distances to the origin\\footnote{You should obtain four extrema with distances $1$, $1.936$ (attained twice), and $4$ to the origin. The minima are both local and global, the maximum is only local. 
Make sure to state the extrema (the points at which the minimal/maximal distances are realised), as they are part of what you were asked for.}.\n\t\\item Find the maximum value of \n\t\\[ f(x,y,z) = \\ln x + \\ln y + 3 \\ln z \\:, \\quad x>0,y>0,z>0 \\]\n\ton the sphere $x^2+y^2+z^2 = 5R^2$.\n\\end{enumerate}\t\n\\end{exercise}", "meta": {"hexsha": "b65332c4164bd9fd6dbd920fa72c732e174fbdaf", "size": 88310, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapters/ch_fsv.tex", "max_stars_repo_name": "pasc85/MathematicalMethods", "max_stars_repo_head_hexsha": "1dd151a72deebbcb0da09955d897bbad1691597b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapters/ch_fsv.tex", "max_issues_repo_name": "pasc85/MathematicalMethods", "max_issues_repo_head_hexsha": "1dd151a72deebbcb0da09955d897bbad1691597b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-11-26T21:39:11.000Z", "max_issues_repo_issues_event_max_datetime": "2020-11-28T11:35:22.000Z", "max_forks_repo_path": "Chapters/ch_fsv.tex", "max_forks_repo_name": "pasc85/MathematicalMethods", "max_forks_repo_head_hexsha": "1dd151a72deebbcb0da09955d897bbad1691597b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.1026722925, "max_line_length": 832, "alphanum_fraction": 0.6444117314, "num_tokens": 32835, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737473266735, "lm_q2_score": 0.8152324848629214, "lm_q1q2_score": 0.5564562921353199}}
{"text": "\\documentclass{article}[12pt]\n\\usepackage{amsmath}\n\\usepackage{gensymb}\n\\usepackage{standalone}\n\\usepackage{verbatim}\n\\usepackage[utf8]{inputenc}\n\\usepackage{setspace}\n\\usepackage[a4paper,margin=1in,footskip=0.25in]{geometry}\n\\usepackage{graphicx}\n\\usepackage{mathptmx}\n\\usepackage{booktabs}\n\\usepackage{cite}\n\\usepackage[english]{babel}\n\\usepackage[utf8]{inputenc}\n\n\\begin{document}\n\\section{Development of Matrices and solution}\n\tUsing input from the user interface, a matrix of dimension $nxm$ with $n$ and $m$ being the number specified in the $x$ and $y$ directions. The spaing and placement of the nodes is done using Python3.7 NumPy module. Nodes of the geometry are broken down into boundary conditions and field conditions. \n\\subsection{Boundary Conditions}\n\tBoundary condition locations are taken form the user interface by means of the terminal geometry widget. In this widget, the geometric location and size of the terminal is entered as well as the potential applied across the terminal. At this point, the terminal is limited to having an uniform distribution across the entire surface. These inputs are passed as arguments to the boundary condition method of Voltage Calculation class in Calculation.py. The specified voltage is used as the boundary condition and is placed assigned to its respective location in the $b$ matrix. The remainder of the boundary conditions are determined by reflecting the adjacent node normal to the boundary condition. For a terminal with known potential on the right side of the geometry, an example is shown in (\\ref{eq:right_boundary}).\n\t\t\\begin{equation}\n\t\t\\frac{\\partial{^2V}}{\\partial{x^2}}+\\frac{\\partial{^2V}}{\\partial{y^2}} \\approx \\frac{\\left(V_{x+1}-V_{xy}\\right)-\\left(V_x-V_{x+1}\\right)}{\\Delta x^2}+\\frac{\\left(V_{y-1}-V_{xy}\\right)-\\left(V_{xy}-V_{y+1}\\right)}{\\Delta y^2}=V_{ap}\n\t\t\\label{eq:right_boundary}\n\t\\end{equation}\nWhere $V_{xy}$ is the boundary node being evaluated, and $\\Delta x$, $\\Delta y$ are the distance between nodes in the $x$ and $y$ directions respectively. (\\ref{eq:right_boundary}) is reduced to have a coefficient of one for the node in question for matrix solving, seen in (\\ref{eq:reduced_boundary}).\n\t\\begin{equation}\n\t\tV_{xy}-\\frac{V_{x+1}\\Delta y^2}{\\Delta x^2+\\Delta y^2}-\\frac{V_{y-1}\\Delta x^2}{\\Delta  x^2+\\Delta y^2}-\\frac{V_{y+1}\\Delta x^2}{\\Delta  x^2+\\Delta y^2}=\\frac{-\\rho \\Delta  x^2 \\Delta y^2}{2\\epsilon\\left(\\Delta x^2+\\Delta y^2\\right)}\n\t\t\\label{eq:reduced_boundary}\n\t\\end{equation}\nThe remainder of the boundary conditions are solved using the same method with their respective nodes. \n\\subsection{Field Conditions}\n\tAfter determination of the boundary conditions is complete, the field of the geometry is calculated. All nodes in the field are determined using (\\ref{eq:field_values}).  The implementation of a space charge term, $\\rho$, can be done in this process. 
\n\t\\begin{equation}\n\t\tV_{xy}-\\frac{V_{x+1}\\Delta y^2}{2\\left(\\Delta x^2+\\Delta y^2\\right)}-\\frac{V_{x-1}\\Delta y^2}{2\\left(\\Delta x^2+\\Delta y^2\\right)}-\\frac{V_{y-1}\\Delta x^2}{2\\left(\\Delta  x^2+\\Delta y^2\\right)}-\\frac{V_{y+1}\\Delta x^2}{2\\left(\\Delta  x^2+\\Delta y^2\\right)}=\\frac{-\\rho \\Delta  x^2 \\Delta y^2}{2\\epsilon\\left(\\Delta x^2+\\Delta y^2\\right)}\n\t\t\\label{eq:field_values}\n\t\\end{equation}\n\\subsection{Calculation}\n\tAfter all nodes in the matrix have been accounted for, the $A$ matrix will have ones on the diagonal with the other coefficients in the same row. The Python 3.7 NumPy module is used to solve for the voltage vector. This vector is returned to the user interface by means of a contour plot.\n\\end{document}\n", "meta": {"hexsha": "7d49e70f997c27eec3cee0a9ca7ea4e1f606ad58", "size": 3603, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Submission_Building/Matrix_Development.tex", "max_stars_repo_name": "alangburl/Finite_Element_Votlage", "max_stars_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Submission_Building/Matrix_Development.tex", "max_issues_repo_name": "alangburl/Finite_Element_Votlage", "max_issues_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Submission_Building/Matrix_Development.tex", "max_forks_repo_name": "alangburl/Finite_Element_Votlage", "max_forks_repo_head_hexsha": "c57abf953b6e6d5342b4224e12254a0649b07817", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 90.075, "max_line_length": 820, "alphanum_fraction": 0.7518734388, "num_tokens": 1053, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8152324803738429, "lm_q2_score": 0.6825737344123242, "lm_q1q2_score": 0.5564562785429957}}
{"text": "\n\\subsection{Pushdown automatons}\n\nIn a finite-state machine the action of the machine depends on the state, and on the input. In a pushdown automaton it also depends on a third input \u2013 stack.\n\nThe stack is a list of symbols. It is the first item on the stack which is used to inform the decision.\n\nThe actions of a machine in finite-state machine were to move from state to state. In a pushdown automaton, the actions can also affect the stack, and therefore future changes in state.\n\nA pushdown automaton uses a Last-in First-out (LiFo) stack.\n\n", "meta": {"hexsha": "2001ed18e1b328f6062d41e4903673cb47a01ac7", "size": 547, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/computer/automatons/01-01-pushdown.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/computer/automatons/01-01-pushdown.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/computer/automatons/01-01-pushdown.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5833333333, "max_line_length": 185, "alphanum_fraction": 0.7787934186, "num_tokens": 125, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.6992544335934766, "lm_q1q2_score": 0.5563674540937226}}
{"text": "\\chapter{Manifolds and surfaces}\n\n\\section{Sides of a surface}\n\n\\begin{defn}\nLet $\\mu$ be an endofuncoid on a set~$U$.\n\\emph{Surface side} of a set~$T\\subseteq\\Ob\\mu$ is a connected component\n(regarding $\\mu$) of the filter $(\\rsupfun{\\mu}T)\\setminus T$.\n\\fxnote{$\\mu$ is used twice in this definition. We may generalize\nfor two different funcoids instead.}\n\\end{defn}\n\nKeep in mind that the above definition may work nicely if~$\\mu$ is\na complete funcoid induced by a topological space.\n\n\\begin{example}\nFor an $\\mathbb{R}^{n-1}$ subspace~$T$ of a~$\\mathbb{R}^n$ ($n\\geq 1$)\neuclidean space and the complete funcoid~$\\mu$ induced by the usual topology:\n\\begin{enumerate}\n\\item $T$~has exactly two surface sides.\n\\item The filter $\\rsupfun{\\mu}@\\{a\\} \\setminus T$ (for every $a\\in T$)\n  has exactly two connected components.\n\\end{enumerate}\n\\end{example}\n\n\\begin{proof}\nWithout loss of generality assume that\n\\[ T=\\setcond{(x_0,x_1,\\dots,x_{n-2},0)}{\nx_0,x_1,\\dots,x_{n-2}\\in\\mathbb{R}};\n\\quad\na = (0,\\dots,0). \\]\n\nWe have\n\\[ \\rsupfun{\\mu}@\\{a\\} =\n\\left(\\uparrow\\setcond{v\\in\\mathbb{R}^n}{v_{n-1}>0}\\sqcap\\rsupfun{\\mu}@\\{a\\}\\right)\n\\sqcup\n\\left(\\uparrow\\setcond{v\\in\\mathbb{R}^n}{v_{n-1}<0}\\sqcap\\rsupfun{\\mu}@\\{a\\}\\right). \\]\n\nLet us prove that\n$\\uparrow\\setcond{v\\in\\mathbb{R}^n}{v_{n-1}>0}\\sqcap\\rsupfun{\\mu}@\\{a\\}$ and\n$\\uparrow\\setcond{v\\in\\mathbb{R}^n}{v_{n-1}<0}\\sqcap\\rsupfun{\\mu}@\\{a\\}$ are\nconnected components.\n\n??\n\\end{proof}\n\n\\subsection{Special points}\n\nWe will start from the example of open $T=\\setcond{(x,y,0)}{x^2+y^2<1}$\nand closed $T=\\setcond{(x,y,0)}{x^2+y^2\\leq 1}$ disks in~$\\mathbb{R}^3$.\n\n\\begin{xca}\nProve that open disk (in a usual 3-dimensional space) has two surface sides\nand closed disk has one surface side.\n\\end{xca}\n\n\\section{Special points}\n\n\\begin{defn}\n\\emph{Surface cardinality} of a point~$a$ (an element of the set $\\Ob\\mu$) is\nthe cardinality of the set of connected components of the filter\n$\\rsupfun{\\mu}\\{a\\}\\setminus T$.\n\\end{defn}\n\n\\begin{defn}\n\\emph{Cardinality regular point} is a point~$a$, which has a neighborhood\n($X\\in\\up\\rsupfun{\\mu}\\{a\\}$) such that all points~$x\\in X\\cap T$\nare of the same surface cardinality as the point~$a$.\n\n\\emph{Cardinality special point} is a point which is not cardinality regular.\n\\end{defn}\n\n\\begin{defn}\n\\emph{Isomorphism regular point} is a point~$a$, which has a neighborhood\n($X\\in\\up\\rsupfun{\\mu}\\{a\\}$) such that for all points~$x\\in X\\cap T$\nthe filter $\\rsupfun{\\mu}\\{a\\}$ is isomorphic\nto~$\\rsupfun{\\mu}\\{x\\}$.\n\n\\emph{Isomorphism special point} is a point which is not isomorphism regular.\n\\end{defn}\n\n\\fxnote{Try to replace isomorphism~$f$ with some kind of filter embedding.}\n\nConsider the dihedral angle~$T$ produced by two half-planes. Are the points of\nintersection of the half-planes isomorphism-special? (They should not\nbe considered special. If they are special, this is a probably flaw in\nthe definition of isomorphism special.)\n\nConsider union~$T$ of two intersecting lines on a plane. The intersection\nmay be considered as a special point, because it has more connected\ncomponents that the rest. We don't want to consider it special, however.\nWe can restrict to consider special only points which have less connected\ncomponents (rather than more) to correct this trouble. 
Also try to define\nit with some kind of morphisms of filters instead of isomorphism as in\nisomorphism-special.\n\n\\begin{xca}\nExcluding special points (either cardinality or isomorphism) from the closed disk\nproduces the open disk.\n\\end{xca}\n\nLet us note that special points of the closed disk have surface cardinality\n$1$ which is less than the surface cardinality ($2$) of regular points.\nSo, it is a conceivable idea to consider special points which have\nsmaller surface cardinality than nearby points.\n\nConsider the following two subsets of a plane (the lines are the\nset~$T$, the small black blob is the point~$a$, and the cyan\nblob symbolizes the filter $(\\rsupfun{\\mu}\\{a\\})\\setminus T$):\n\n\\begin{figure}\n  \\begin{subfigure}[c]{0.3\\textwidth}\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (0,0) [circle,draw,fill=cyan] (a) {};\n        \\fill (0,0) circle [radius=2pt,fill=black];\n        \\draw [ultra thick] (0,0) -- +(0:1cm);\n        \\draw [ultra thick] (0,0) -- +(120:1cm);\n        \\draw [ultra thick] (0,0) -- +(240:1cm);\n    \\end{tikzpicture}\n    \\caption{Three surface sides}\n  \\end{subfigure}\n  \\begin{subfigure}[c]{0.3\\textwidth}\n    \\centering\n    \\begin{tikzpicture}\n        \\node at (0,0) [circle,draw,fill=cyan] (a) {};\n        \\fill (0,0) circle [radius=2pt,fill=black];\n        \\draw [ultra thick] (0,0) -- +(0:1cm);\n        \\draw [ultra thick] (0,0) -- +(180:1cm);\n    \\end{tikzpicture}\n    \\caption{Two surface sides}\n  \\end{subfigure}\n  \\caption{Examples of surface cardinality}\n\\end{figure}\n\nFor one of the sets the surface cardinality of~$a$ is $3$ and for\nthe other it is~$2$.\n\nNow define \\emph{shift special points}.\n\nLet $I$ be an interval on~$\\mathbb{R}$ (containing zero?)\n\nA point~$a$ is \\emph{shift special} if there exists a transformation\n(that is, a continuous function $f:I\\times\\mu\\to\\mu$) such that:\n\\begin{enumerate}\n  \\item $f(0)$ is the identity. \\fxwarning{Is this condition needed?}\n  \\item for every sufficiently small~$\\epsilon>0$ we have $f(\\epsilon,a)\\in T$;\n  \\item there is $\\epsilon>0$ such that for every $0<\\epsilon'<\\epsilon$ we have\n    $f(\\epsilon')$ being not continuous at~$a$ regarding the complete funcoid\n    defined by the function $x\\mapsto\\rsupfun{\\mu}\\{x\\}\\setminus T$.\n\\end{enumerate}\n\nWe may consider additionally requiring that every~$f(\\epsilon)$ is an isomorphism\nof funcoids.\n\n\\begin{example}\n$T$~is the disk $\\setcond{(x,y,0)}{x^2+y^2\\leq 1}$. $f$~is the contraction\n$(\\epsilon,v)\\mapsto\\frac{1}{1+\\epsilon}v$. $a=(1,0,0)$.\n\nIn the usual topology~$f$ is continuous. In\n$x\\mapsto\\rsupfun{\\mu}\\{x\\}\\setminus T$ we have the function\n$\\epsilon\\mapsto f(\\epsilon)$ not continuous at zero.\nSo~$a$ is a shift special point.\n\\end{example}\n\n\\begin{proof}\n$f (0) (v) = v$. Thus $\\langle f (0) \\rangle (\\rsupfun{\\mu} \\{ a\n\\} \\setminus T) = \\rsupfun{\\mu} \\{ a \\} \\setminus T$ intersects\nthe plane $Z = 0$. But $f (0, a)$\n\n??\n\\end{proof}\n\n\\begin{question}\nCan we exclude real numbers from the picture?\n\\end{question}\n\n\\begin{question}\nHow are cardinality special points, isomorphism special points and shift\nspecial points related to each other?\n\\end{question}\n\n\\begin{question}\nHow is the number of surface sides related to the usual notion of sides for\nmanifolds?\n\\url{https://en.wikipedia.org/wiki/Orientability#Orientability_of_manifolds}\n\\end{question}\n\n\\begin{rem}\nManifolds have no special points. 
(Prove!)\n\\end{rem}\n\nProve that the image of a $2$-manifold with special points removed has the same number\nof sides as defined above.\n\nAnother way to define special points: A special point is a point\nsuch that $T\\sqcap\\supfun{\\mu}\\{a\\}$ is not isomorphic to\n$T\\sqcap\\supfun{\\mu}\\{x\\}$ for nearby points~$x$. Consider replacement\nof isomorphism with injection, surjection, etc. here and above.\n\nHow many sides does a plane with one point removed have in $\\mathbb{R}^3$?\n\nAn easy way to spot special points: they are boundary points in the\ntopology (or funcoid) induced on~$T$. Alternatively, we can consider\npoints whose neighborhood in~$T$ is different (non-isomorphic, or\nperhaps non-injective or non-surjective, etc.) from that of nearby\npoints. Thus another way to remove special points: use the interior funcoid.\n\n\\url{https://math.stackexchange.com/q/2836833/4876}\n", "meta": {"hexsha": "f0ef8322664f94798be3eeed9f835f7910949e39", "size": 7452, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-manifold.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-manifold.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-manifold.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.4857142857, "max_line_length": 87, "alphanum_fraction": 0.709742351, "num_tokens": 2426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.6992544273261176, "lm_q1q2_score": 0.5563674491070477}}
{"text": "\\section*{\\href{https://sicp.comp.nus.edu.sg/chapters/3}{Numbers}}\n\nWe use decimal notation for numbers, with an optional decimal dot. ``Scientific notation''\n(multiplying the number with $10^x$) is indicated with the letter \\texttt{e}, followed\nby the exponent $x$.\nExamples for numbers are \\texttt{5432}, \\texttt{-5432.109}, and \\texttt{-43.21e-45}.\n", "meta": {"hexsha": "ebbf827971d41d2041d6a17faa046b96261b19ee", "size": 352, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/specs/source_numbers.tex", "max_stars_repo_name": "Meowzz95/js-slang", "max_stars_repo_head_hexsha": "9bc978d5861bcce41d29c5336c8f3c330fde3cbf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/specs/source_numbers.tex", "max_issues_repo_name": "Meowzz95/js-slang", "max_issues_repo_head_hexsha": "9bc978d5861bcce41d29c5336c8f3c330fde3cbf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2020-03-25T05:46:50.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-02T10:31:34.000Z", "max_forks_repo_path": "docs/specs/source_numbers.tex", "max_forks_repo_name": "Meowzz95/js-slang", "max_forks_repo_head_hexsha": "9bc978d5861bcce41d29c5336c8f3c330fde3cbf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-31T06:16:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-01T01:04:51.000Z", "avg_line_length": 50.2857142857, "max_line_length": 90, "alphanum_fraction": 0.7386363636, "num_tokens": 104, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581000631542, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5563674491070476}}
{"text": "\\subsection{Na\\\"ive set theory}\\label{subsec:naive_set_theory}\n\nNa\\\"ive set theory is traditionally defined informally by only specifying that a set is an unordered collection of objects without repetition. It turns out that this can easily be formalized as a \\hyperref[def:first_order_theory]{first-order theory}, albeit an inconsistent one. Still, this theory is useful for introducing important concepts that can ease the introduction of more elaborate theories like \\hyperref[def:zfc]{\\logic{ZFC}}. The definitions we introduce and the proofs we provide will turn out to be valid in \\logic{ZFC} also. In other words, we will transparently utilize \\hyperref[def:naive_set_theory/unrestricted_comprehension]{unrestricted comprehension} for constructing sets and later in \\fullref{thm:zfc_existence_theorems} we will prove that they exist not only in na\\\"ive set theory, but also in \\logic{ZFC}.\n\n\\begin{remark}\\label{rem:pure_set_theory}\n  What we lose in this formalization are objects which are not sets, usually called \\term{atoms} or \\term{urelements} (because of the German prefix \\enquote{ur}, meaning primordial). It is not necessary for us to add such elements since we can encode everything via sets. Theories without atoms, like our versions of nai\\\"ve set theory and \\hyperref[def:axiom_of_universes]{\\logic{ZFC+U}}, are called \\term{pure set theories}.\n\\end{remark}\n\n\\begin{definition}\\label{def:naive_set_theory}\n  The \\term{language of na\\\"ive set theory} is a \\hyperref[def:first_order_syntax]{first-order language} \\( \\mscrL \\) with only a single \\hyperref[rem:first_order_formula_conventions/infix]{infix} binary relation \\( \\in \\) called \\term{set membership}. If \\( \\xi \\in \\eta \\), we say that \\( \\xi \\) is a \\term{member} or \\term{element} of \\( \\eta \\) and that \\( \\eta \\) \\term{contains} \\( \\xi \\).\n\n  For the sake of simplicity, we will not introduce into the language any other functional or predicate symbols, but will use \\hyperref[rem:predicate_formula]{predicate formulas} when needed mostly for formulating axioms. See the \\( \\ref{eq:def:grothendieck_universe/predicate}[\\tau] \\) predicate for an extreme example.\n\n  \\term{Na\\\"ive set theory} is a \\hyperref[def:first_order_theory]{first-order theory} axiomatized by the following:\n  \\begin{thmenum}\n    \\thmitem{def:naive_set_theory/extensionality}\\mcite[sec. 61.1]{OpenLogicFull} The \\term{axiom of extensionality}, which states that two sets are equal if and only if they have the same members. Symbolically,\n    \\begin{equation}\\label{eq:def:naive_set_theory/extensionality}\n      \\parens[\\Big]{ \\qforall \\xi (\\xi \\in \\tau \\leftrightarrow \\xi \\in \\sigma) } \\rightarrow \\parens[\\Big]{ \\tau \\doteq \\sigma }.\n    \\end{equation}\n\n    As a consequence, a set is only distinguished by what it contains and thus the ordering and repetition of members of a set play no role. This axiom is also important in \\logic{ZFC} --- see \\fullref{def:zfc/extensionality}.\n\n    It is very common when dealing with sets, as in \\eqref{eq:def:naive_set_theory/extensionality}, to use \\hyperref[rem:first_order_formula_conventions/relativization]{relativization of quantifiers} with \\( \\in \\).\n\n    As explained in \\fullref{rem:mathematical_logic_conventions/quantification}, we avoid excessive universal quantification. 
We actually add as an axiom of the theory the \\hyperref[thm:implicit_universal_quantification]{universal closure} of \\eqref{eq:def:naive_set_theory/extensionality}:\n    \\begin{equation}\\label{eq:def:naive_set_theory/extensionality_quantified}\n      \\qforall \\tau \\qforall \\sigma \\parens[\\Bigg]{ \\parens[\\Big]{ \\qforall \\xi (\\xi \\in \\tau \\leftrightarrow \\xi \\in \\sigma) } \\rightarrow \\parens[\\Big]{ \\tau \\doteq \\sigma } }.\n    \\end{equation}\n\n    The \\hyperref[def:material_implication/converse]{converse} of \\eqref{eq:def:naive_set_theory/extensionality} is obvious.\n\n    \\thmitem{def:naive_set_theory/unrestricted_comprehension} The \\term{axiom schema of unrestricted comprehension} states that any formula defines a set. For each formula \\( \\varphi \\) not containing \\( \\tau \\) as a free variable, the following is an axiom:\n    \\begin{equation}\\label{eq:def:naive_set_theory/unrestricted_comprehension}\n      \\qexists \\tau \\qforall \\xi (\\xi \\in \\tau \\leftrightarrow \\varphi).\n    \\end{equation}\n\n    It is important to highlight that \\( \\varphi \\) may have any number of free variables as long as \\( \\tau \\) is not among them. Of course, this axiom is only interesting if \\( \\xi \\in \\boldop{Free}(\\varphi) \\). If \\( \\eta_1, \\ldots, \\eta_n \\) are all the other free variables of \\( \\varphi \\), then the \\hyperref[thm:implicit_universal_quantification]{universal closure} of the corresponding axiom is\n    \\begin{equation}\\label{eq:def:naive_set_theory/unrestricted_comprehension_quantified}\n      \\qforall {\\eta_1} \\cdots \\qforall {\\eta_n} \\qexists \\tau \\qforall \\xi (\\xi \\in \\tau \\leftrightarrow \\varphi).\n    \\end{equation}\n\n    In other words, the set \\( \\tau \\) is not unique in general, but depends on the free variables \\( \\eta_1, \\ldots, \\eta_n \\). For this reason, they are called \\term{parameters} of the axiom.\n\n    Compare this axiom schema to \\hyperref[def:zfc/specification]{restricted comprehension}. In the context of na\\\"ive set theory they are equivalent because each is a special case of the other one.\n\n    Because our goal is for all our constructions to be valid in \\hyperref[def:zfc]{\\logic{ZFC}}, we only use unrestricted comprehension where the existence of the set is justified by other axioms of \\logic{ZFC}.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{remark}\\label{rem:epsilon_and_set_membership}\n  The symbol \\( \\in \\) is derived from \\( \\varepsilon \\). Some older books like \\cite{Kelley1955} even use \\( \\varepsilon \\) for set membership. \\Fullref{thm:epsilon_induction} is named after set membership.\n\\end{remark}\n\n\\begin{definition}\\label{def:set}\n  Assume that we have a fixed \\hyperref[rem:standard_model_of_set_theory]{standard} \\hyperref[rem:transitive_model_of_set_theory]{transitive} model \\( \\mscrV = (V, I) \\) of \\hyperref[def:naive_set_theory]{na\\\"ive set theory} or \\hyperref[def:zfc]{\\logic{ZFC}}, with or without the \\hyperref[def:axiom_of_universes]{axiom of universes}. We will assume \\logic{ZFC+U} by default.\n\n  We say that any member of \\( V \\) is a \\term{set}. If \\( A \\) is a set and \\( x \\in A \\), we say that \\( x \\) is a \\term{member} or \\term{element} of \\( A \\) or, in accordance with \\fullref{def:point}, a \\term{point} in \\( A \\).\n\n  We usually refer to \\( V \\) as our \\term{universe} or \\term{universal set}. 
When working with \\hyperref[def:grothendieck_universe]{Grothendieck universes}, we may wish to further distinguish between the universal set and some Grothendieck universe. Fortunately, we very rarely refer to \\( V \\) itself.\n\\end{definition}\n\n\\begin{remark}\\label{rem:standard_model_of_set_theory}\n  We will say that a model \\( \\mscrV = (V, I) \\) of set theory is \\term{standard} if the interpretation \\( I(\\in) \\) of the membership predicate symbol is precisely the membership relation in the metatheory. We will only consider standard models of set theory. This is immensely important for the following reasons:\n\n  \\begin{itemize}\n    \\thmitem{rem:standard_model_of_set_theory/set_builder_notation} Set-builder notation relies on constructing sets in the metatheory and then using them in the object theory. If the model is not standard, then it does not hold that \\( (\\xi \\in \\eta)\\Bracks{\\xi \\mapsto x, \\eta \\mapsto y} = T \\) if and only if \\( x \\in y \\).\n\n    \\thmitem{rem:standard_model_of_set_theory/skolems_paradox} It is possible that cardinality is incompatible between the object theory and metatheory --- see \\fullref{ex:skolems_paradox}. We want to avoid countable sets in the metatheory being uncountable in the object theory, for example.\n  \\end{itemize}\n\n  Therefore, it is reasonable to assume that all our models of set theory are standard. We also want their domains to be transitive sets -- see \\fullref{rem:transitive_model_of_set_theory}.\n\\end{remark}\n\n\\begin{definition}\\label{def:set_builder_notation}\n  As mentioned in \\fullref{rem:set_definition_recursion}, set theory somewhat blurs the line between logic and metalogic. In particular, some \\hyperref[def:first_order_definability]{definable} subsets of the universe \\( U \\) of the fixed model \\( \\mscrU \\) are themselves sets within the object logic.\n\n  Fix a formula \\( \\varphi \\) whose free variables are \\( \\xi \\) and \\( \\eta_1, \\ldots, \\eta_n \\). In the simplest case, \\( n = 0 \\) and \\( \\xi \\) is the only free variable of \\( \\varphi \\).\n\n  Also fix an \\( n \\)-tuple \\( u_1, \\ldots, u_n \\) of members of \\( U \\), which we will call \\term{parameters}. Denote by \\( A \\) the subset of \\( U \\) consisting of members \\( x \\) of \\( U \\) such that \\( \\varphi\\Bracks{x, u_1, \\ldots, u_n} = T \\).\n\n  We introduce a special convenience notation for \\( A \\) called \\term{set-builder notation}:\n  \\begin{equation*}\n    A \\coloneqq \\set{ x \\given \\varphi\\Bracks{x, u_1, \\ldots, u_n} }.\n  \\end{equation*}\n\n  Since set-builder notation is metalogical, we do not impose strict syntax rules and use prose where it is straightforward to translate it into a logical formula.\n\n  For example, the \\hyperref[def:basic_set_operations/intersection]{intersection} of the sets \\( B \\) and \\( C \\) is given by the formula \\( \\xi \\in \\eta \\wedge \\xi \\in \\zeta \\), where \\( B \\) is a value for the parameter \\( \\eta \\) and \\( C \\) is a value for the parameter \\( \\zeta \\). 
The intersection can thus be written as\n  \\begin{equation*}\n    A \\coloneqq \\set{ x \\given x \\in B \\T{and} x \\in C }.\n  \\end{equation*}\n\n  Note that at this point \\( A \\) is a set within the metatheory and its members are sets within the object logic; however, \\( A \\) may not be a set within the object logic and its members may not be sets within the metatheory.\n\n  Nonetheless, within na\\\"ive set theory, as a consequence of the \\hyperref[def:naive_set_theory/unrestricted_comprehension]{axiom schema of unrestricted comprehension}, \\( A \\) is also a set within the object logic. More precisely, given our choice of parameters \\( u_1, \\ldots, u_n \\), the axiom schema instance \\eqref{eq:def:naive_set_theory/unrestricted_comprehension_quantified} guarantees the existence of a member \\( a \\) of \\( U \\), such that\n  \\begin{equation*}\n    \\parens{ \\xi \\in \\tau }\\Bracks{ \\tau \\mapsto a, \\xi \\mapsto x } = T\n    \\T{if and only if}\n    x \\isinE A,\n  \\end{equation*}\n  where we have denoted set membership within the object logic by \\( \\in \\) and within the metatheory by \\( \\isinE \\). We will not further use this symbol and the two membership relations will be used interchangeably.\n\n  This is where the line between logic and metalogic blurs --- we can speak about roughly the same sets within the object logic and the metatheory.\n\n  Examples such as \\fullref{thm:russels_paradox} show that unrestricted comprehension can easily lead to an inconsistent object logic. In more elaborate set theories like \\hyperref[def:zfc]{\\logic{ZFC}}, we only allow restricted comprehension via the \\hyperref[def:zfc/specification]{axiom schema of specification}. Instead of defining \\( A \\) as a set of all members of \\( U \\) satisfying a certain property, restricted comprehension allows us to define \\( A \\) as a subset not of the universe \\( U \\), but of some well-behaved subset \\( B \\) of \\( U \\). The corresponding notation is\n  \\begin{equation*}\n    \\set{ x \\in B \\given \\varphi\\Bracks{x, u_1, \\ldots, u_n} }.\n  \\end{equation*}\n\n  Of course, we may still use unrestricted comprehension if the result is guaranteed to be a set within the object logic.\n\n  Within \\logic{ZFC}, subsets of \\( U \\) which are not sets in the object logic are called \\term{proper classes}. Sets and proper classes are collectively called \\term{classes}. We avoid referencing proper classes because that can easily lead us to an inconsistent theory. See \\fullref{def:large_and_small_sets} for a clever workaround.\n\n  A bigger problem that may happen is described in \\fullref{rem:transitive_model_of_set_theory}.\n\n  Other liberties regarding set-builder notation include the following:\n  \\begin{itemize}\n    \\item We often place arbitrary terms on the left side rather than only sets. This is simply a convenient metalogical notation; the symbols that are used in these terms are often not part of the object language. 
For example, we write the odd integers as\n    \\begin{equation*}\n      \\set{ 2n + 1 \\given n \\in \\BbbZ }.\n    \\end{equation*}\n\n    \\item Instead of using the delimiter \\( \\given \\), we sometimes also use \\( : \\), especially when dealing with absolute values and norms:\n    \\begin{equation*}\n      \\set{ \\abs{x} : \\abs{x}^2 < 1 }\n    \\end{equation*}\n    can be more readable than\n    \\begin{equation*}\n      \\set{ \\abs{x} \\given \\abs{x}^2 < 1 }.\n    \\end{equation*}\n\n    \\item If a set has only a small finite number of members, we usually prefer to enumerate them as\n    \\begin{equation*}\n      \\set{ 1, 3, 9, 27 }.\n    \\end{equation*}\n\n    Because of the \\hyperref[def:naive_set_theory/extensionality]{axiom of extensionality}, the order and repetition of the objects inside the curly braces are irrelevant. Nevertheless, using any unconventional order does not benefit us in any way.\n\n    \\item We can also place an ellipsis if a certain pattern is obvious:\n    \\begin{equation*}\n      \\set{ 1, 3, 9, 27, \\ldots }.\n    \\end{equation*}\n\n    This works specifically for defining countable sets.\n  \\end{itemize}\n\n  Note that we have used certain numbers, but this was only for illustrative purposes because even the \\hyperref[def:set_of_natural_numbers]{natural numbers} are not yet defined in terms of sets.\n\\end{definition}\n\n\\begin{remark}\\label{rem:multile_set_membership_shorthand}\n  Within the metatheory, we often use the notation \\( x_1, \\ldots, x_n \\in A \\) to mean that \\( x_k \\in A \\) for \\( k = 1, \\ldots, n \\).\n\\end{remark}\n\n\\begin{remark}\\label{rem:singleton_sets}\n  Sets with a single element are usually called \\term{singletons}. It is sometimes convenient, especially in connection with geometry or \\hyperref[def:multi_valued_function]{multi-valued functions} (e.g. when dealing with \\hyperref[def:net_convergence/limit]{limits of nets} or \\hyperref[def:subdifferentials]{subdifferentials}), to not distinguish between singleton sets and their corresponding element.\n\\end{remark}\n\n\\begin{definition}\\label{def:empty_set}\n  A very important set is the \\term{empty set}\n  \\begin{equation*}\n    \\varnothing \\coloneqq \\set{ x \\given \\bot },\n  \\end{equation*}\n  which contains no elements. We will also find useful the \\hyperref[rem:predicate_formula]{predicate formula}\n  \\begin{equation*}\\taglabel[\\op{IsEmpty}]{eq:def:empty_set/predicate}\n    \\ref{eq:def:empty_set/predicate}[\\tau] \\coloneqq \\qforall \\eta \\neg \\eta \\in \\tau.\n  \\end{equation*}\n\n  We will often refer to \\term{nonempty sets}, which are exactly what they sound like --- sets that are not the empty set.\n\\end{definition}\n\n\\begin{theorem}[Russell's paradox]\\label{thm:russels_paradox}\n  \\hyperref[def:naive_set_theory]{Na\\\"ive set theory} is \\hyperref[def:first_order_theory_consistency]{inconsistent}. More precisely, the instance of the \\hyperref[def:naive_set_theory/unrestricted_comprehension]{schema of unrestricted comprehension} with\n  \\begin{equation}\\label{eq:thm:russels_paradox_comprehension_formula}\n    \\varphi = (\\xi \\not\\in \\xi)\n  \\end{equation}\n  allows us to derive \\( \\bot \\) in \\hyperref[def:classical_logic]{classical logic}.\n\n  Thus, the set\n  \\begin{equation}\\label{eq:thm:russels_paradox_set}\n    R \\coloneqq \\set{ x \\given x \\not\\in x }\n  \\end{equation}\n  of all sets that do not contain themselves is not well-defined. 
Indeed, from \\( R \\not\\in R \\) it follows that \\( R \\in R \\) and from \\( R \\in R \\) it follows that \\( R \\not\\in R \\).\n\\end{theorem}\n\\begin{proof}\n  After substituting \\eqref{eq:thm:russels_paradox_comprehension_formula} in \\eqref{eq:def:naive_set_theory/unrestricted_comprehension}, we obtain the following axiom of na\\\"ive set theory:\n  \\begin{equation}\\label{eq:thm:russels_paradox_comprehension_axiom}\n    \\psi \\coloneqq \\qexists \\tau \\qforall \\xi (\\xi \\in \\tau \\leftrightarrow \\neg (\\xi \\in \\xi)).\n  \\end{equation}\n\n  We will show that the negation \\( \\neg\\psi \\) of \\( \\psi \\) is also derivable in this theory. An explicit form of the negation can be obtained by utilizing the equivalences \\fullref{thm:first_order_quantifiers_are_dual} and \\fullref{thm:boolean_equivalences/biconditional_negation}:\n  \\begin{equation*}\n    \\neg\\psi = \\qforall \\tau \\qexists \\xi (\\xi \\in \\tau \\leftrightarrow \\xi \\in \\xi).\n  \\end{equation*}\n\n  This holds when \\( \\xi \\) and \\( \\tau \\) take on the same value, hence it is satisfiable in na\\\"ive set theory. By \\fullref{thm:classical_first_order_logic_is_sound_and_complete}, \\( \\neg\\psi \\) is derivable in the theory.\n\n  Thus, \\( \\psi \\) and \\( \\neg\\psi \\) are both derivable in the same theory, and we can use \\eqref{eq:def:minimal_propositional_natural_deduction_system/neg/elim} to also derive \\( \\bot \\), which shows that na\\\"ive set theory is inconsistent.\n\\end{proof}\n\n\\begin{definition}\\label{def:subset}\n  We say that \\( A \\) is a \\term{subset} of \\( B \\) and write \\( A \\subseteq B \\) if every member of \\( A \\) is a member of \\( B \\). If \\( A \\) is a subset of \\( B \\), we say that \\( B \\) is a \\term{superset} of \\( A \\).\n\n  If \\( A \\subseteq B \\) and \\( A \\neq B \\), we say that \\( A \\) is a \\term{proper subset} of \\( B \\) and write \\( A \\subsetneq B \\).\n\n  The relation \\( \\subseteq \\) is called the inclusion relation, and it gives a partial ordering between sets. See \\fullref{thm:boolean_algebra_of_subsets}. If an entire family of sets is not pairwise comparable, we say that the sets are \\term{disjoint}.\n\n  The following \\hyperref[rem:predicate_formula]{predicate formula}\n  \\begin{equation*}\\taglabel[\\op{IsSubset}]{eq:def:subset/predicate}\n    \\ref{eq:def:subset/predicate}[\\rho, \\tau] \\coloneqq \\qforall \\xi (\\xi \\in \\rho \\rightarrow \\xi \\in \\tau),\n  \\end{equation*}\n  which is valid when \\( \\rho \\) is a subset of \\( \\tau \\), will occasionally be useful for us. We have chosen the letter \\( \\rho \\) because it is the result that would be on the right-hand side if we had a corresponding infix relation in the language.\n\\end{definition}\n\n\\begin{remark}\\label{rem:subset_notation}\n  Some authors, such as \\cite{Kelley1955}, use the notation \\( A \\subset B \\) to mean \\enquote{all elements of \\( A \\) belong to \\( B \\)}, even in the case when \\( A = B \\). To avoid confusion, we use the notations \\( A \\subseteq B \\) and \\( A \\subsetneq B \\) (see \\fullref{def:subset}).\n\\end{remark}\n\n\\begin{remark}\\label{rem:family_of_sets}\n  In a \\hyperref[rem:pure_set_theory]{pure set theory}, everything is encoded as a set. However, it is often the case that we are not interested in how a set's elements are encoded as sets and only in how they behave, e.g. 
when working with \\hyperref[def:set_of_natural_numbers]{natural numbers}, we are interested in the elements of \\( \\BbbN \\) and not in the way every element of \\( \\BbbN \\) is encoded as a set.\n\n  In order to reduce repetitiveness, sets whose elements we consider to be other sets are often called \\term{families of sets}. In particular, if all (different) sets are \\hyperref[def:subset]{disjoint}, we say that the family is a \\term{disjoint family}. It is usually assumed that the sets are nonempty.\n\n  We often consider \\hyperref[def:cartesian_product/indexed_family]{indexed families}, i.e. sets which depend on a parameter, which further highlight our intention to distinguish between a point in a set, the set itself and some family of sets to which the latter belongs.\n\\end{remark}\n\n\\begin{definition}\\label{def:basic_set_operations}\n  We define the following operations:\n\n  \\begin{thmenum}\n    \\thmitem{def:basic_set_operations/union} Dually to \\hyperref[def:basic_set_operations/intersection]{intersections}, the \\term{union} of an arbitrary set \\( \\mscrA \\) is defined as\n    \\begin{equation*}\n      \\bigcup \\mscrA \\coloneqq \\set{ x \\given \\qexists {A \\in \\mscrA} x \\in A }.\n    \\end{equation*}\n\n    We define the \\hyperref[rem:predicate_formula]{predicate formula}\n    \\begin{equation*}\\taglabel[\\op{IsUnion}]{eq:def:basic_set_operations/union/predicate}\n      \\ref{eq:def:basic_set_operations/union/predicate}[\\rho, \\tau] \\coloneqq \\qforall \\xi \\parens[\\Big]{ \\xi \\in \\rho \\leftrightarrow \\qexists {\\eta \\in \\tau} \\xi \\in \\eta }.\n    \\end{equation*}\n\n    In particular, \\( \\bigcup \\varnothing = \\varnothing \\).\n\n    For two sets \\( A \\) and \\( B \\), we define the \\term{binary union} as\n    \\begin{equation*}\n      A \\cup B \\coloneqq \\bigcup \\set{ A, B } = \\set{ x \\given x \\in A \\T{or} x \\in B }.\n    \\end{equation*}\n\n    \\thmitem{def:basic_set_operations/intersection} The \\term{intersection} of a nonempty set \\( \\mscrA \\) is\n    \\begin{equation*}\n      \\bigcap \\mscrA \\coloneqq \\set{ x \\given \\qforall {A \\in \\mscrA} x \\in A }.\n    \\end{equation*}\n\n    We also introduce the \\hyperref[rem:predicate_formula]{predicate formula}\n    \\begin{equation*}\\taglabel[\\op{IsIntersection}]{eq:def:basic_set_operations/intersection/predicate}\n      \\ref{eq:def:basic_set_operations/intersection/predicate}[\\rho, \\tau] \\coloneqq \\qforall \\xi \\parens[\\Big]{ \\xi \\in \\rho \\leftrightarrow \\qforall {\\eta \\in \\tau} \\xi \\in \\eta }.\n    \\end{equation*}\n\n    We leave \\( \\bigcap \\varnothing \\) undefined because it should be a \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{top element} in the \\hyperref[thm:boolean_algebra_of_subsets]{Boolean algebra of all sets}, but the latter is an ambiguous object and does not even exist in \\logic{ZFC} --- see \\fullref{thm:zfc_existence_theorems/universe}. 
Such a top element would nonetheless satisfy \\( \\ref{eq:def:basic_set_operations/intersection/predicate}[\\rho, \\tau] \\) with \\( \\tau = \\varnothing \\), since the condition \\( \\qforall {\\eta \\in \\varnothing} \\xi \\in \\eta \\) holds vacuously for every \\( \\xi \\).\n\n    For two sets \\( A \\) and \\( B \\), we define the \\term{binary intersection} as\n    \\begin{equation*}\n      A \\cap B \\coloneqq \\bigcap \\set{ A, B } = \\set{ x \\given x \\in A \\T{and} x \\in B }.\n    \\end{equation*}\n\n    \\thmitem{def:basic_set_operations/difference} The \\term{difference} of the sets \\( A \\) and \\( B \\) is\n    \\begin{equation*}\n      A \\setminus B \\coloneqq \\set{ x \\in A \\given x \\not\\in B }.\n    \\end{equation*}\n\n    We define the \\hyperref[rem:predicate_formula]{predicate formula}\n    \\begin{equation*}\\taglabel[\\op{IsDifference}]{eq:def:basic_set_operations/difference/predicate}\n      \\ref{eq:def:basic_set_operations/difference/predicate}[\\rho, \\tau, \\sigma] \\coloneqq \\qforall \\xi \\parens[\\Big]{ \\xi \\in \\rho \\leftrightarrow \\xi \\in \\tau \\wedge \\neg(\\xi \\in \\sigma) }.\n    \\end{equation*}\n\n    \\thmitem{def:basic_set_operations/power_set} The \\term{power set} \\( \\pow(A) \\) of \\( A \\) is the family of all subsets of \\( A \\). Symbolically,\n    \\begin{equation*}\n      \\pow(A) \\coloneqq \\set{ B \\given B \\subseteq A }.\n    \\end{equation*}\n\n    The operation \\( \\pow \\) is not technically a function since its domain is supposed to be the set of all sets, whose existence is precluded by \\fullref{thm:russels_paradox}. Nevertheless, this notation makes sense and is justified by \\fullref{rem:unbounded_transfinite_recursion} and \\fullref{ex:unary_functors_in_set}.\n\n    We define the \\hyperref[rem:predicate_formula]{predicate formula}\n    \\begin{equation*}\\taglabel[\\op{IsPowerSet}]{eq:def:basic_set_operations/power_set/predicate}\n      \\ref{eq:def:basic_set_operations/power_set/predicate}[\\rho, \\tau] \\coloneqq \\qforall \\xi \\parens[\\Big]{ \\xi \\in \\rho \\leftrightarrow \\ref{eq:def:subset/predicate}[\\xi, \\tau] }.\n    \\end{equation*}\n\n    See \\fullref{thm:power_set_via_subsets} for a characterization of the power set.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:def:set_difference/properties}\n  \\hyperref[def:basic_set_operations/difference]{Set difference} has the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:set_difference/properties/intersection} If \\( A \\) and \\( B \\) are subsets of \\( C \\), then \\( A \\setminus B = A \\cap (C \\setminus B) \\).\n\n    \\thmitem{thm:def:set_difference/properties/double_difference} If \\( A \\subseteq B \\), then \\( B \\setminus (B \\setminus A) = A \\).\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:def:set_difference/properties/intersection} Since \\( a \\in A \\) implies \\( a \\in C \\), we have\n  \\begin{align*}\n    A \\setminus B\n    &=\n    \\set{ x \\in A \\given x \\not\\in B }\n    = \\\\ &=\n    \\set{ x \\in A \\given x \\in C \\T{and} x \\not\\in B }\n    = \\\\ &=\n    A \\cap (C \\setminus B).\n  \\end{align*}\n\n  \\SubProofOf{thm:def:set_difference/properties/double_difference} By \\hyperref[thm:minimal_propositional_negation_laws/dne]{double negation elimination},\n  \\begin{align*}\n    B \\setminus (B \\setminus A)\n    &=\n    \\set[\\Big]{ x \\in B \\given x \\not\\in \\set{ x \\in B \\given x \\not\\in A } }\n    = \\\\ &=\n    \\set{ x \\in B \\given x \\in A }\n    = \\\\ &=\n    A,\n  \\end{align*}\n  where the last equality holds because \\( A \\subseteq B \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:boolean_algebra_of_subsets}\n  Let \\( X \\) be an arbitrary set.
Then the \\hyperref[def:basic_set_operations/power_set]{power set} \\( \\pow(X) \\) endowed with the \\hyperref[def:subset]{inclusion} partial order \\( \\subseteq \\) is a \\hyperref[def:semilattice/complete]{complete} \\hyperref[def:boolean_algebra]{Boolean algebra}. Explicitly:\n\n  \\begin{thmenum}\n    \\thmitem{thm:boolean_algebra_of_subsets/join} The \\hyperref[def:semilattice/join]{join} of an arbitrary family \\( \\mscrA \\) of subsets of \\( X \\) is simply the \\hyperref[def:basic_set_operations/union]{union} \\( \\bigcup \\mscrA \\).\n\n    \\thmitem{thm:boolean_algebra_of_subsets/top} The \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{top element} is the set \\( X \\) itself.\n\n    \\thmitem{thm:boolean_algebra_of_subsets/meet} The \\hyperref[def:semilattice/meet]{meet} of an arbitrary family \\( \\mscrA \\) of subsets of \\( X \\) is simply the \\hyperref[def:basic_set_operations/intersection]{intersection} \\( \\bigcap \\mscrA \\). Unlike for a general family of sets, we have no problem defining the intersection of the empty family to be the top element \\( X \\).\n\n    \\thmitem{thm:boolean_algebra_of_subsets/bottom} The \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{bottom element} is the empty set.\n\n    \\thmitem{thm:boolean_algebra_of_subsets/complement} The \\hyperref[def:boolean_algebra]{complement} \\( A^\\complement \\) of the subset \\( A \\) is the \\hyperref[def:basic_set_operations/difference]{difference} \\( X \\setminus A \\).\n  \\end{thmenum}\n\n  \\begin{figure}\n    \\hfill\n    \\includegraphics[page=1]{output/thm__boolean_algebra_of_subsets.pdf}\n    \\hfill\\hfill\n    \\caption{The \\hyperref[def:hasse_diagram]{Hasse diagram} of \\( \\pow(\\set{ A, B }) \\) with respect to \\hyperref[def:subset]{set inclusion}}\n    \\label{fig:thm:boolean_algebra_of_subsets}\n  \\end{figure}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:boolean_algebra_of_subsets/join} The union of \\( \\mscrA \\) exists by \\fullref{thm:zfc_existence_theorems/arbitrary_union}, and it is itself a subset of \\( X \\). Every set in \\( \\mscrA \\) is contained in \\( \\bigcup \\mscrA \\), hence it is indeed a join.\n\n  \\SubProofOf{thm:boolean_algebra_of_subsets/top} Clearly \\( X \\) contains every subset of \\( X \\).\n\n  \\SubProofOf{thm:boolean_algebra_of_subsets/meet} The intersection of a nonempty family \\( \\mscrA \\) exists by restricted comprehension, and it is itself a subset of \\( X \\); the intersection of the empty family is the top element \\( X \\) by definition. Every set in \\( \\mscrA \\) contains \\( \\bigcap \\mscrA \\), hence it is indeed a meet.\n\n  \\SubProofOf{thm:boolean_algebra_of_subsets/bottom} The empty set is a subset of every set, in particular of every subset of \\( X \\).\n\n  \\SubProofOf{thm:boolean_algebra_of_subsets/complement} The operation \\( A^\\complement \\) is well-defined for each subset \\( A \\) of \\( X \\) due to \\fullref{thm:zfc_existence_theorems/difference}.\n\n  By definition\n  \\begin{equation*}\n    A \\vee A^\\complement\n    =\n    A \\cup (X \\setminus A)\n    =\n    X\n  \\end{equation*}\n  and\n  \\begin{equation*}\n    A \\wedge A^\\complement\n    =\n    A \\cap (X \\setminus A)\n    =\n    \\varnothing,\n  \\end{equation*}\n  hence \\( A^\\complement \\) is indeed the complement of \\( A \\).\n\n  Therefore, \\( \\pow(X) \\) is a Boolean algebra.\n\\end{proof}\n\n\\begin{remark}\\label{rem:inductive_sets}\n  Induction is an important proof technique that is discussed in detail in the proof of \\fullref{thm:nonzero_natural_numbers_have_predecessors}.
There are more general forms of induction than \\eqref{eq:def:peano_arithmetic/PA3}, such as \\fullref{thm:well_founded_induction}. They do, however, require concepts which in turn depend on the existence of natural numbers within set theory. As a consequence, we cannot prove \\eqref{eq:def:peano_arithmetic/PA3} via \\fullref{thm:well_founded_induction}.\n\n  We will introduce the concept of inductive sets in \\fullref{def:inductive_set} and prove in \\fullref{thm:omega_induction} that a special inductive set \\hyperref[thm:smallest_inductive_set_existence]{\\( \\omega \\)}, which will be the domain of our model of \\( \\BbbN \\), allows performing inductive proofs. The technique that allows us to perform inductive proofs on \\( \\omega \\) can be seen in the proof of \\fullref{thm:omega_is_transitive}. \\Fullref{thm:omega_induction} will allow us to define natural numbers without relying on metalogical induction along the way. See the proof of \\fullref{thm:omega_induction} for a description of natural number induction within set theory and \\fullref{rem:standard_models_of_arithmetic} for a further discussion of the use of natural numbers in the metatheory and in the object logic.\n\n  We also introduce \\term{recursion} in parallel as a technique for constructing objects. See \\fullref{thm:omega_recursion}.\n\\end{remark}\n\n\\begin{definition}\\label{def:ordinal_successor}\n  The \\term{successor} \\( \\op{succ}(A) \\) of a set \\( A \\) is the set\n  \\begin{equation*}\n    \\op{succ}(A) \\coloneqq A \\cup \\set{ A }.\n  \\end{equation*}\n\n  It is also called the \\term{ordinal successor} operation since it is an important concept in the \\hyperref[subsec:ordinals]{theory of ordinals}. See \\fullref{rem:def:ordinal_successor} for an example of how it naturally arises.
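Concretely, iterating the successor operation starting from the empty set gives\n  \\begin{equation*}\n    \\op{succ}(\\varnothing) = \\set{ \\varnothing }, \\quad \\op{succ}(\\set{ \\varnothing }) = \\set{ \\varnothing, \\set{ \\varnothing } }, \\quad \\op{succ}(\\set{ \\varnothing, \\set{ \\varnothing } }) = \\set{ \\varnothing, \\set{ \\varnothing }, \\set{ \\varnothing, \\set{ \\varnothing } } },\n  \\end{equation*}\n  that is, each application of \\( \\op{succ} \\) adds the set itself as a new member.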
It should be distinguished from \\term{successor cardinals} defined in \\fullref{def:successor_and_limit_cardinal}.\n\n  The following \\hyperref[rem:predicate_formula]{predicate formula}\n  \\begin{equation*}\\taglabel[\\op{IsSucc}]{eq:def:ordinal_successor/predicate}\n    \\ref{eq:def:ordinal_successor/predicate}[\\rho, \\tau] \\coloneqq \\qforall \\xi \\parens[\\Big]{ \\xi \\in \\rho \\leftrightarrow (\\xi \\in \\tau \\vee \\xi = \\tau) },\n  \\end{equation*}\n  which states that \\( \\rho \\) is the successor of \\( \\tau \\), will be useful for us when working with \\hyperref[def:inductive_set]{inductive sets}.\n\\end{definition}\n\n\\begin{definition}\\label{def:inductive_set}\n  A set is called \\term{inductive} if it contains the empty set and is closed under the \\hyperref[def:ordinal_successor]{successor operator}.\n\n  We introduce the following \\hyperref[rem:predicate_formula]{predicate formula}\n  \\begin{equation*}\\taglabel[\\op{IsInductive}]{eq:def:inductive_set/predicate}\n    \\ref{eq:def:inductive_set/predicate}[\\tau] \\coloneqq\n      \\parens[\\Big]{ \\qexists {\\xi \\in \\tau} \\ref{eq:def:empty_set/predicate}[\\xi] }\n      \\wedge\n      \\parens[\\Big]{ \\qforall {\\xi \\in \\tau} \\qexists {\\eta \\in \\tau} \\ref{eq:def:ordinal_successor/predicate}[\\eta, \\xi] }.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:smallest_inductive_set_existence}\n  There is a smallest (with respect to set inclusion) \\hyperref[def:inductive_set]{inductive set}, which we denote by \\( \\omega \\).\n\\end{proposition}\n\\begin{proof}\n  We cannot directly define \\( \\omega \\) as the intersection of all inductive sets since we want to avoid unrestricted comprehension. Fortunately, the existence of at least one inductive set \\( A \\) is justified by the \\hyperref[def:zfc/infinity]{axiom of infinity} in \\logic{ZFC} or by taking the entire universe in na\\\"ive set theory.\n\n  Hence, we use restricted comprehension:\n  \\begin{equation*}\n    \\omega \\coloneqq \\set{ x \\in A \\given x \\T{belongs to every inductive set} }.\n  \\end{equation*}\n\n  To see that \\( \\omega \\) is itself inductive, note that \\( \\varnothing \\in \\omega \\) and that if \\( x \\in \\omega \\), then it also belongs to all inductive sets and hence \\( \\op{succ}(x) \\) also belongs to all inductive sets, proving \\( \\op{succ}(x) \\in \\omega \\).\n\\end{proof}\n\n\\begin{theorem}[Induction via inductive sets]\\label{thm:omega_induction}\n  We can perform induction on the \\hyperref[thm:smallest_inductive_set_existence]{smallest inductive set \\( \\omega \\)}. That is, we can prove that some property holds for every element of \\( \\omega \\) by proving the following:\n  \\begin{itemize}\n    \\item The property holds for \\( \\varnothing \\).\n    \\item The property holds for \\( \\op{succ}(n) \\) whenever it holds for a set \\( n \\in \\omega \\).\n  \\end{itemize}\n\n  This is an analog of \\eqref{eq:def:peano_arithmetic/PA3} and is actually used in \\fullref{thm:omega_is_model_of_pa} to prove that \\( \\omega \\) is a model of \\hyperref[def:peano_arithmetic]{\\logic{PA}}. For this theorem, however, a single formula suffices instead of an entire theorem schema.
The more general induction principles that use theorem schemas cannot be proved without natural numbers, which are a model of \\logic{PA} by virtue of this theorem.\n\n  More formally, the following is a theorem of both na\\\"ive set theory and \\hyperref[def:zfc]{\\logic{ZF}}:\n  \\footnotesize\n  \\begin{equation}\\label{eq:thm:omega_induction}\n    \\qexists \\sigma\n    \\qforall \\tau\n    \\parens[\\Bigg]\n      {\n        \\parens[\\Bigg]\n          {\n            \\parens[\\Big]\n            {\n              \\underbrace{ \\qexists {\\xi \\in \\tau} \\ref{eq:def:empty_set/predicate}[\\xi] }_{\\mathclap{\\T{base case}}}\n            }\n            \\wedge\n            \\qforall \\xi \\parens[\\Big]\n              {\n                \\overbrace\n                  {\n                    \\underbrace{ \\xi \\in \\tau }_{\\mathclap{\\substack{\\T{inductive} \\\\ \\T{hypothesis}}}}\n                    \\rightarrow\n                    \\underbrace\n                      {\n                        \\qexists {\\eta \\in \\tau} \\ref{eq:def:ordinal_successor/predicate}[\\eta, \\xi]\n                      }_{\\mathclap{\\substack{\\T{inductive step} \\\\ \\T{conclusion}}}}\n                  }^{\\T{inductive step}}\n              }\n          }\n        \\rightarrow\n        \\underbrace{ \\ref{eq:def:subset/predicate}[\\sigma, \\tau] }_{\\T{conclusion}}\n      }\n  \\end{equation}\n  \\normalsize\n\\end{theorem}\n\\begin{proof}\n  The antecedent of (the inner formula in) \\eqref{eq:thm:omega_induction} is a restatement of the predicate formula \\( \\ref{eq:def:inductive_set/predicate}[\\tau] \\). The situation resembles the \\hyperref[eq:def:zfc/infinity]{axiom of infinity}, but, instead of existence of an inductive set \\( \\tau \\), it states the existence of a set \\( \\sigma \\) such that if \\( \\tau \\) is an inductive set, then \\( \\sigma \\) is a subset of \\( \\tau \\) (if we restrict \\( \\xi \\) to range only over members of \\( \\sigma \\), then we would obtain equality of \\( \\tau \\) and \\( \\sigma \\) instead). In other words, we have reduced the verification of \\eqref{eq:thm:omega_induction} to showing that there exists a minimal inductive set in both na\\\"ive set theory and \\logic{ZF}.\n\n  We have already proved in \\fullref{thm:smallest_inductive_set_existence} that our fixed model \\( \\mscrV = (V, I) \\) of set theory has a minimal inductive set \\( \\omega \\). Thus, for any variable assignment \\( v: \\boldop{Var} \\to V \\), the modified assignment \\( v_{\\sigma \\mapsto \\omega} \\) satisfies \\eqref{eq:thm:omega_induction} with the outer existential quantifier removed. 
Hence, by \\fullref{def:first_order_valuation/formula_valuation}, it follows that the entire formula \\eqref{eq:thm:omega_induction} is satisfied by the assignment \\( v \\).\n\n  Both the assignment \\( v \\) and the model \\( \\mscrV \\) were arbitrary, therefore we can conclude that \\eqref{eq:thm:omega_induction} is a theorem of both na\\\"ive set theory and \\logic{ZF}.\n\\end{proof}\n\n\\begin{definition}\\label{def:transitive_set}\n  A set \\( A \\) is \\term{transitive} if from \\( B \\in A \\) it follows that \\( B \\subseteq A \\).\n\n  See \\fullref{rem:ordinal_definition} for a discussion of the motivation and terminology of transitive sets and \\fullref{rem:transitive_model_of_set_theory} for their importance.\n\n  We introduce the following \\hyperref[rem:predicate_formula]{predicate formula}:\n  \\begin{equation*}\\taglabel[\\op{IsSetTransitive}]{eq:def:transitive_set/predicate}\n    \\ref{eq:def:transitive_set/predicate}[\\tau] \\coloneqq \\qforall {\\xi \\in \\tau} \\qforall {\\eta \\in \\xi} {\\eta \\in \\tau}.\n  \\end{equation*}\n\\end{definition}\n\n\\begin{proposition}\\label{thm:omega_is_transitive}\n  The set \\( \\omega \\) is transitive and every member of \\( \\omega \\) is transitive.\n\n  Its proof demonstrates the usage of \\fullref{thm:omega_induction}.\n\\end{proposition}\n\\begin{proof}\n  \\SubProof{Proof that all members of \\( \\omega \\) are transitive} In order to demonstrate how \\fullref{thm:omega_induction} works in practice, we will use inductive sets directly. Let \\( T \\subseteq \\omega \\) be the subset of all transitive members of \\( \\omega \\). We will show that \\( T \\) is inductive. Since \\( T \\subseteq \\omega \\) and \\( \\omega \\) is the smallest inductive set, this will allow us to conclude that \\( T = \\omega \\).\n\n  Clearly \\( \\varnothing \\in T \\) because every member of \\( \\varnothing \\) is vacuously a subset of \\( \\varnothing \\).\n\n  Now suppose that \\( n \\in T \\) and let \\( m \\in \\op{succ}(n) = n \\cup \\set{ n } \\). If \\( m = n \\), then \\( m = n \\subseteq n \\cup \\set{ n } = \\op{succ}(n) \\). If \\( m \\in n \\), then \\( m \\subseteq n \\) by the inductive hypothesis and hence also \\( m \\subseteq n \\cup \\set{ n } = \\op{succ}(n) \\). Thus, \\( \\op{succ}(n) \\) is also transitive.\n\n  We have shown that \\( T \\) is inductive. Therefore, \\( \\omega = T \\) and every member of \\( \\omega \\) is transitive.\n\n  From now on, we will not be as explicit about the use of induction on \\( \\omega \\).\n\n  \\SubProof{Proof that \\( \\omega \\) is transitive} We will show that for all members \\( n \\) of \\( \\omega \\) we have \\( n \\subseteq \\omega \\).\n\n  The case \\( n = \\varnothing \\) is again trivial.\n\n  Now suppose that \\( n \\subseteq \\omega \\) and let \\( m \\in \\op{succ}(n) \\). We must show that \\( m \\in \\omega \\). If \\( m = n \\), then \\( m \\in \\omega \\) because \\( n \\in \\omega \\).
If \\( m \\in n \\), then \\( m \\in \\omega \\) because \\( n \\subseteq \\omega \\) by the inductive hypothesis. Hence \\( \\op{succ}(n) \\subseteq \\omega \\).\n\n  Therefore, \\( \\omega \\) is transitive.\n\\end{proof}\n\n\\begin{remark}\\label{rem:transitive_model_of_set_theory}\n  As discussed in \\fullref{def:set_builder_notation}, within the \\hyperref[def:naive_set_theory/unrestricted_comprehension]{axiom schema of unrestricted comprehension} it may happen that \\( U \\subseteq V \\) is not a set within the object logic.\n\n  But there is a bigger problem that may happen even for \\hyperref[rem:standard_model_of_set_theory]{standard models}. If \\( A \\in V \\) and \\( x \\in A \\) (in the metatheory), it is possible that \\( x \\) is not in \\( V \\). Therefore, if we have shown that \\( A \\) is a set within the object logic, it is possible that its members within the metatheory are not sets within the object logic. In other words, it is possible for set membership itself to be incompatible between the metatheory and object logic.\n\n  If \\( V \\) is a transitive set, however, we would not have such a problem. That is, if we construct a set \\( A \\) in the metatheory and show that it belongs to some set \\( B \\) in the object logic, then \\( A \\) itself would also be a set in the object logic.\n\n  For this reason, it is very important to consider only transitive models of set theory.\n\\end{remark}\n\n\\begin{lemma}\\label{thm:members_of_omega_do_not_contain_themselves}\n  No element of \\( \\omega \\) is a member of itself.\n\\end{lemma}\n\\begin{proof}\n  We will again use \\fullref{thm:omega_induction}. By the \\hyperref[def:empty_set]{definition of \\( \\varnothing \\)}, we have \\( \\varnothing \\not\\in \\varnothing \\). Now suppose that \\( n \\not\\in n \\) for some \\( n \\in \\omega \\).\n\n  Aiming at a contradiction, suppose that \\( \\op{succ}(n) \\in \\op{succ}(n) = n \\cup \\set{ n } \\), so that either \\( \\op{succ}(n) = n \\) or \\( \\op{succ}(n) \\in n \\). The assumption that \\( \\op{succ}(n) = n \\) implies that \\( n \\in n \\) because \\( n \\in \\op{succ}(n) \\). The assumption that \\( \\op{succ}(n) \\in n \\) implies that \\( \\op{succ}(n) \\subseteq n \\) since \\( n \\) is transitive by \\fullref{thm:omega_is_transitive}, and hence \\( n \\in \\op{succ}(n) \\subseteq n \\). In both cases we have \\( n \\in n \\), which contradicts our inductive hypothesis. This contradiction shows that \\( \\op{succ}(n) \\not\\in \\op{succ}(n) \\).\n\n  \\Fullref{thm:omega_induction} allows us to conclude that no member of \\( \\omega \\) contains itself.\n\\end{proof}\n\n\\begin{theorem}\\label{thm:omega_is_model_of_pa_without_operations}\n  The \\hyperref[thm:smallest_inductive_set_existence]{smallest inductive set \\( \\omega \\)} satisfies the axioms \\eqref{eq:def:peano_arithmetic/PA1}-\\eqref{eq:def:peano_arithmetic/PA3} from \\hyperref[def:peano_arithmetic]{Peano arithmetic} with the following interpretation:\n  \\begin{thmenum}\n    \\thmitem{thm:omega_is_model_of_pa_without_operations/zero} \\hyperref[def:peano_arithmetic/zero]{Zero} is interpreted as \\( \\varnothing \\).\n\n    \\thmitem{thm:omega_is_model_of_pa_without_operations/succ} The \\hyperref[def:peano_arithmetic/succ]{successor} operation \\( s \\) is interpreted as \\( \\op{succ} \\).\n  \\end{thmenum}\n\n  We will generalize this theorem in \\fullref{thm:omega_is_model_of_pa} after we are able to define the arithmetic operations in \\( \\omega \\).\n\\end{theorem}\n\\begin{proof}\n  \\SubProofOf{eq:def:peano_arithmetic/PA1} Let \\( n, m \\in \\omega \\) and suppose that \\( \\op{succ}(n) = \\op{succ}(m) \\). If \\( n = m \\), there is nothing to prove.\n\n  Suppose that \\( n \\neq m \\).
Thus, since\n  \\begin{equation*}\n    n \\cup \\set{ n } = \\op{succ}(n) = \\op{succ}(m) = m \\cup \\set{ m },\n  \\end{equation*}\n  we have both \\( n \\in m \\) and \\( m \\in n \\).\n\n  \\Fullref{thm:omega_is_transitive} implies that \\( n \\) is transitive, so from \\( m \\in n \\) we get \\( m \\subseteq n \\), and then \\( n \\in m \\subseteq n \\) gives \\( n \\in n \\), which contradicts \\fullref{thm:members_of_omega_do_not_contain_themselves}.\n\n  The obtained contradiction shows that \\( n = m \\).\n\n  \\SubProofOf{eq:def:peano_arithmetic/PA2} Suppose that \\( \\varnothing \\) has a predecessor \\( n \\in \\omega \\). Then\n  \\begin{equation*}\n    \\varnothing = \\op{succ}(n) = n \\cup \\set{ n },\n  \\end{equation*}\n  which implies that \\( n \\in \\varnothing \\). But this contradicts the \\hyperref[def:empty_set]{definition of \\( \\varnothing \\)}.\n\n  Therefore, \\( \\varnothing \\) has no predecessor.\n\n  \\SubProofOf{eq:def:peano_arithmetic/PA3} It follows from \\fullref{thm:omega_induction} that \\eqref{eq:thm:omega_induction} is a theorem of \\logic{ZF}. Let \\( \\mscrV = (V, I) \\) be our ambient \\hyperref[rem:standard_model_of_set_theory]{standard} \\hyperref[rem:transitive_model_of_set_theory]{transitive} model of \\logic{ZFC}.\n\n  Fix any variable assignment \\( v: \\boldop{Var} \\to V \\). As in the proof of \\fullref{thm:omega_induction}, we consider the modified assignment \\( v_{\\sigma \\mapsto \\omega} \\) that \\enquote{eliminates} the outer existential quantifier in \\eqref{eq:thm:omega_induction}.\n\n  To show that \\eqref{eq:thm:omega_induction} really corresponds to \\eqref{eq:def:peano_arithmetic/PA3} (and hence that \\( \\omega \\) satisfies \\eqref{eq:def:peano_arithmetic/PA3}), fix some formula \\( \\varphi \\) of \\hyperref[def:peano_arithmetic]{Peano arithmetic} (\\hi{not \\logic{ZFC}!}) and suppose that \\( \\xi, \\zeta_1, \\ldots, \\zeta_n \\) are all of its free variables. Fix also some parameter values \\( u_1, \\ldots, u_n \\in \\omega \\) and, as in \\fullref{def:set_builder_notation}, define the set\n  \\begin{equation*}\n    A \\coloneqq \\set{ x \\in \\omega \\given \\varphi\\Bracks{x, u_1, \\ldots, u_n} }.\n  \\end{equation*}\n\n  Since \\eqref{eq:thm:omega_induction} is satisfied by \\( v \\), the inner formula in \\eqref{eq:thm:omega_induction} (without the quantifiers over \\( \\sigma \\) and \\( \\tau \\)) is satisfied by \\( v_{\\sigma \\mapsto \\omega, \\tau \\mapsto A} \\).\n\n  Since our choice of parameters \\( u_1, \\ldots, u_n \\) was arbitrary, we can conclude that the universal closure \\eqref{eq:def:peano_arithmetic/PA3_quantified} of \\eqref{eq:def:peano_arithmetic/PA3} is satisfied by \\( \\omega \\) for every formula \\( \\varphi \\) of \\logic{PA}.\n\\end{proof}\n\n\\begin{remark}\\label{rem:set_theory_natural_numbers_without_operations}\n  Due to \\fullref{thm:omega_is_model_of_pa_without_operations}, we will henceforth identify the \\hyperref[thm:smallest_inductive_set_existence]{smallest inductive set \\( \\omega \\)} with the set \\( \\BbbN \\) of \\hyperref[def:set_of_natural_numbers]{natural numbers}.\n\n  We are not yet able to add or multiply natural numbers, nor to rely on their well-foundedness; for all other purposes, however, we are able to utilize them.\n\n  Since the ordering in \\fullref{def:natural_number_ordering} is defined via addition, we must define some other ordering. Luckily, as we shall see in \\fullref{subsec:ordinals}, \\( n < m \\) corresponds to \\( n \\in m \\).
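For instance, \\( 1 = \\op{succ}(\\varnothing) = \\set{ \\varnothing } \\) and \\( 2 = \\op{succ}(1) = \\set{ \\varnothing, \\set{ \\varnothing } } \\), and indeed \\( 1 \\in 2 \\), mirroring \\( 1 < 2 \\).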
In particular, the members of \\( m \\) are ordered by \\( \\in \\).\n\n  As a consequence, every natural number \\( n \\) equals the set of all smaller natural numbers by the \\hyperref[def:naive_set_theory/extensionality]{axiom of extensionality}. It is conventional to write \\( \\set{ 0, 1, \\ldots, n - 1 } \\) rather than \\( \\set{ m \\given m \\in n } \\); for example, \\( 3 = \\set{ 0, 1, 2 } \\). The former notation will be fully justified in \\fullref{subsec:ordinals}.\n\n  This is useful, for example, in \\fullref{def:sequence}.\n\\end{remark}
{"text": "\\documentclass{article}%\n\\usepackage[T1]{fontenc}%\n\\usepackage[utf8]{inputenc}%\n\\usepackage{lmodern}%\n\\usepackage{textcomp}%\n\\usepackage{lastpage}%\n%\n\\input{common_symbols_and_format.tex}%\n\\usepackage{tocloft}%\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}%\n%\n\\begin{document}%\n\\normalsize%\n\\logo%\n\\rulename{Static Log Difference Regression}%\n\\tblofcontents\n\\ruledescription{Regresses the one-day price changes against the lagged  difference of research to price for the specified number of days, using coefficients estimated from the start of the data.}%\n\n\\howtotrade{Given default parameter values, if the asset drift is 0.001 and the error is 0.02 (2$\\%$ daily volatility), this rule will take a $0.001 / (0.02)^2  = 2.5$ or 250$\\%$ position (leveraged).}\n\n\\ruleparameters{Kelly Fraction}{0.1}{Amplitude weighting (Kelly Fraction). 1.0 is maximum growth if regression is exact. <1.0 scales down positions taken.}{$\\kellyfraction$}{Regression Length}{100}{This is the number of days used to estimate the regression coefficients.}{$\\lookbacklength$}%\n\\stoptable%\n\n\\section{Equation}\nBelow are the equations which govern how this trading rule determines a trading position.\n\\begin{equation} \\label{eq1}\n   \\regressionprice_\\currenttime =  \\amplitudecoefficient  (ln(\\price_{\\currenttime}) - ln(\\research_{\\currenttime})) + \\constantc,\n\\end{equation}\n\nThe equation ($\\ref{eq1}$) predict the value of the price $\\regressionprice_\\currenttime$ at time $\\currenttime$ using the difference from the logarithm of the price and research values. Using the logarithm function properties, we can rewrite it easily as:\n\n\\begin{equation}\n       \\regressionprice_\\currenttime =  \\amplitudecoefficient  ln\\left( \\frac{\\price_{\\currenttime}}{\\research_{\\currenttime}}\\right)  + \\constantc,\n\\end{equation}\n\nSince we are using a static approach the amplitude coefficient $\\amplitudecoefficient$ remains constant. In order to calculate the resultant fractional portfolio allocation $\\position_{\\currenttime}$ we use the Kelly fraction to obtain the maximum results for the long run. \n\n\n\\begin{equation}\n\\position_\\currenttime = \\kellyfraction \\frac{\\regressionprice_\\currenttime}{ \\rmserror_{\\regressionprice}^{2}}  \n\\label{eq2}\n\\end{equation}\n\nAdditionally, the standard error $\\rmserror_{\\regressionprice}$ is calculated and included in equation (\\ref{eq2}) to normalize the predicted price. 
\n\n\n\\assumptions%\n\\keyterms%\n\\furtherlinks%\n\\end{document}
{"text": "\\section{L'H\\^opital's Rule}\\label{sec:LH}\\label{sec:lhopitals_rule}\r\n\r\nThis section is concerned with a technique of evaluating certain limits that will be useful for determining asymptotes, and also in the following chapters, where integration is discussed.\r\n\r\nOur treatment of limits exposed us to ``$ 0/0 $'', an indeterminate form. If $\\ds \\lim_{x\\to c}f(x)=0$ and $\\ds \\lim_{x\\to c} g(x) =0$, we do not conclude that $\\ds \\lim_{x\\to c} f(x)/g(x)$ is $0/0$; rather, we use $0/0$ as notation to describe the fact that both the numerator and denominator approach $ 0 $. The expression $ 0/0 $ has no numeric value; other work must be done to evaluate the limit.\r\n\r\n\r\nOther indeterminate forms exist; they are: %Limits may seeming evaluate to\r\n $\\infty/\\infty$, $0\\cdot\\infty$, $\\infty-\\infty$, $0^0$, $1^\\infty$ and $\\infty^0$. %, expressions which have no inherent value. \r\n Just as ``$ 0/0 $'' does not mean ``divide $ 0 $ by $ 0 $,'' the expression ``$\\infty/\\infty$'' does not mean ``divide infinity by infinity.'' Instead, it means ``a quantity is growing without bound and is being divided by another quantity that is growing without bound.'' We cannot determine from such a statement what value, if any, results in the limit. Likewise, ``$0\\cdot \\infty$'' does not mean ``multiply zero by infinity.'' Instead, it means ``one quantity is shrinking to zero, and is being multiplied by a quantity that is growing without bound.'' We cannot determine from such a description what the result of such a limit will be.\r\n\r\nThis section introduces l'H\\^opital's Rule, a method of resolving limits that produce the indeterminate forms $ 0/0 $ and $\\infty/\\infty$. We'll also show how algebraic manipulation can be used to convert other indeterminate expressions into one of these two forms so that our new rule can be applied.\r\n\r\n\\begin{theorem}{L'H\\^opital's Rule, Part 1}{LHR}\r\n{Let $\\ds \\lim_{x\\to c}f(x) = 0$ and $\\ds \\lim_{x\\to c}g(x)=0$, where $f$ and $g$ are differentiable functions on an open interval $I$ containing $c$, and $g\\primeskip'(x)\\neq 0$ on $I$ except possibly at $c$. 
Then \\index{limit!L'H\\^opital's Rule}\\index{L'H\\^opital's Rule}\r\n$$ \\lim_{x\\to c} \\frac{f(x)}{g(x)} = \\lim_{x\\to c} \\frac{\\fp(x)}{g\\primeskip'(x)}.$$\r\n}\r\n\\end{theorem}\r\n\r\nWe demonstrate the use of l'H\\^opital's Rule in the following examples; we will often use ``LHR'' as an abbreviation of ``l'H\\^opital's Rule.''\\\\\r\n\r\n\r\n\\begin{example}{Using l'H\\^opital's Rule}{ex_lhr1}\r\n{\r\nEvaluate the following limits, using l'H\\^opital's Rule as needed.\r\n\r\n\\noindent%\r\n\\begin{minipage}[t]{.5\\textwidth}\r\n\\begin{enumerate}\r\n\\item $\\ds \\lim_{x\\to0}\\frac{\\sin x}x$\r\n\\item $\\ds \\lim_{x\\to 1}\\frac{\\sqrt{x+3}-2}{1-x}$\r\n\\end{enumerate}\r\n\\end{minipage}\r\n\\begin{minipage}[t]{.5\\textwidth}\r\n\\begin{enumerate}\\addtocounter{enumi}{2}\r\n\\item $\\ds \\lim_{x\\to0}\\frac{x^2}{1-\\cos x}$\r\n\\item $\\ds \\lim_{x\\to 2}\\frac{x^2+x-6}{x^2-3x+2}$\r\n\\end{enumerate}\r\n\\end{minipage}\r\n}\r\n\r\n\\end{example}\r\n\r\n\r\n\\begin{solution}\r\n{\\begin{enumerate}\r\n\\item We could solve this using the Squeeze Theorem, but l'H\\^opital's Rule is much simpler to apply:\r\n$$\\lim_{x\\to0}\\frac{\\sin x}x \\;\\;\\;\\;\\left(\\to \\frac00\\right)\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to0} \\frac{\\cos x}{1}=1.$$\r\n\r\n\\item \\hfill $\\ds \\lim_{x\\to 1}\\frac{\\sqrt{x+3}-2}{1-x} \\;\\;\\;\\;\\left(\\to \\frac00\\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x \\to 1} \\frac{\\frac12(x+3)^{-1/2}}{-1} =-\\frac 14.$\\hfill\\null \r\n\r\n\\item \\hfill $\\ds \\lim_{x\\to 0}\\frac{x^2}{1-\\cos x} \\;\\;\\;\\;\\left(\\to \\frac00\\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}  \\lim_{x\\to 0} \\frac{2x}{\\sin x}.$ \\hfill\\null\r\n\r\nThis latter limit also evaluates to the $ 0/0 $ indeterminate form. To evaluate it, we apply l'H\\^opital's Rule again.\r\n\r\n\\hfill $\\ds  \\lim_{x\\to 0} \\frac{2x}{\\sin x}  \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to 0} \\frac{2}{\\cos x} = 2 .$ \\hfill\\null\r\n\r\nThus $\\ds \\lim_{x\\to0}\\frac{x^2}{1-\\cos x}=2.$\r\n\r\n\\item We already know how to evaluate this limit; first factor the numerator and denominator. We then have: \r\n$$\\lim_{x\\to 2}\\frac{x^2+x-6}{x^2-3x+2} = \\lim_{x\\to 2}\\frac{(x-2)(x+3)}{(x-2)(x-1)} = \\lim_{x\\to 2}\\frac{x+3}{x-1} = 5.$$\r\nWe now show how to solve this using l'H\\^opital's Rule.\r\n\r\n$$\\lim_{x\\to 2}\\frac{x^2+x-6}{x^2-3x+2} \\;\\;\\;\\;\\left(\\to \\frac00\\right)\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}  \\lim_{x\\to 2}\\frac{2x+1}{2x-3} = 5.$$\r\n\\end{enumerate}\r\n\\vskip-\\baselineskip\r\n}\r\n\\end{solution}\r\n\r\n\r\n%\r\nNote that at each step where l'H\\^opital's Rule was applied, it was \\emph{needed}: the initial limit returned the indeterminate form of ``$0/0$.'' If the initial limit returns, for example, $ 1/2 $, then l'H\\^opital's Rule does not apply.\r\n\r\nThe following theorem extends our initial version of l'H\\^opital's Rule in two ways. It allows the technique to be applied to the indeterminate form $\\infty/\\infty$ and to limits where $x$ approaches $\\pm\\infty$.\r\n\r\n\\begin{theorem}{L'H\\^opital's Rule, Part 2}{thm:LHR2}\r\n{\\begin{enumerate}\r\n\\item Let $\\ds\\lim_{x\\to a}f(x) = \\pm\\infty$ and $\\ds\\lim_{x\\to a}g(x)=\\pm \\infty$, where $f$ and $g$ are differentiable on an open interval $I$ containing $a$.
Then \\index{limit!L'H\\^opital's Rule}\\index{L'H\\^opital's Rule}\r\n$$\\lim_{x\\to a} \\frac{f(x)}{g(x)} = \\lim_{x\\to a}\\frac{\\fp(x)}{g\\primeskip'(x)}.$$\r\n\r\n\\item\t\tLet $f$ and $g$ be differentiable functions on the open interval $(a,\\infty)$ for some value $a$, where $g\\primeskip'(x)\\neq 0$ on $(a,\\infty)$ and $\\ds\\lim_{x\\to\\infty} f(x)/g(x)$ returns either 0/0 or $\\infty/\\infty$. Then\r\n$$\\lim_{x\\to \\infty} \\frac{f(x)}{g(x)} = \\lim_{x\\to \\infty}\\frac{\\fp(x)}{g\\primeskip'(x)}.$$\r\nA similar statement can be made for limits where $x$ approaches $-\\infty$.\r\n\\end{enumerate}\r\n}\r\n\\end{theorem}\r\n\r\n\r\n\\begin{example}{Using l'H\\^opital's Rule with limits involving $\\infty$}{ex_LHR2}\r\n{\r\nEvaluate the following limits.\\\\\r\n\r\n$\\ds 1.\\ \\lim_{x\\to\\infty} \\frac{3x^2-100x+2}{4x^2+5x-1000} \\qquad\\qquad 2. \\ \\lim_{x\\to \\infty}\\frac{e^x}{x^3}.$\r\n}\r\n\r\n\\end{example}\r\n\r\n\r\n\\begin{solution}\r\n{\\begin{enumerate}\r\n\\item\t\tWe can evaluate this limit already using other techniques%Theorem \\ref{thm:lim_rational_fn_at_infty}\r\n; the answer is $ 3/4 $. We apply l'H\\^opital's Rule to demonstrate its applicability.\r\n$$\\lim_{x\\to\\infty} \\frac{3x^2-100x+2}{4x^2+5x-1000} \\;\\;\\;\\;\\left(\\to \\frac\\infty\\infty \\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{6x-100}{8x+5} \\;\\;\\;\\;\\left(\\to \\frac\\infty\\infty \\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac68 = \\frac34.$$\r\n\r\n\\item\t\t$\\ds \\lim_{x\\to \\infty}\\frac{e^x}{x^3} \\;\\;\\;\\;\\left(\\to \\frac\\infty\\infty \\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{e^x}{3x^2} \\;\\;\\;\\;\\left(\\to \\frac\\infty\\infty \\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{e^x}{6x} \\;\\;\\;\\;\\left(\\to \\frac\\infty\\infty \\right) \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{e^x}{6} = \\infty.$\r\n\r\nRecall that this means that the limit does not exist; as $x$ approaches $\\infty$, the expression $e^x/x^3$ grows without bound. We can infer from this that $e^x$ grows ``faster'' than $x^3$; as $x$ gets large, $e^x$ is far larger than $x^3$. (This has important implications in computing when considering efficiency of algorithms.)\r\n\\end{enumerate}\r\n}\r\n\\end{solution}\r\n\r\n\r\n\r\n\\subsection*{Indeterminate Forms $0\\cdot\\infty$ and $\\infty-\\infty$}\r\n\r\n\r\nL'H\\^opital's Rule can only be applied to ratios of functions. When faced with an indeterminate form such as $0\\cdot\\infty$ or $\\infty-\\infty$, we can sometimes apply algebra to rewrite the limit so that l'H\\^opital's Rule can be applied. 
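For instance, a product $f(x)\cdot g(x)$ of the type $0\cdot\infty$ can be rewritten as the quotient $\ds\frac{f(x)}{1/g(x)}$, which has the type $0/0$, or as $\ds\frac{g(x)}{1/f(x)}$, which has the type $\infty/\infty$; a difference of the type $\infty-\infty$ can often be combined into a single quotient or first simplified using logarithm rules.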
We demonstrate the general idea in the next example.\r\n\\index{limit!indeterminate form}\\index{indeterminate form}\\\\\r\n\r\n\r\n\r\n\\begin{example}{L'H\\^opital's Rule}{LHRule3}\r\nCompute the following limits:\\\\\r\n\\noindent\r\n\\begin{minipage}[t]{.5\\textwidth}\r\n\\begin{enumerate}\r\n\\item $\\ds \\lim_{x\\to0^+} x\\cdot e^{1/x}$\r\n\\item $\\ds \\lim_{x\\to0^-} x\\cdot e^{1/x}$\r\n\\item $\\ds\\lim_{x\\to 0^+} x\\ln x$\r\n\\end{enumerate}\r\n\\end{minipage}\r\n\\begin{minipage}[t]{.5\\textwidth}\r\n\\begin{enumerate}\\addtocounter{enumi}{3}\r\n\\item $\\ds \\lim_{x\\to\\infty} \\ln(x+1)-\\ln x$\r\n\\item $\\ds \\lim_{x\\to\\infty} x^2-e^x$\r\n\\end{enumerate}\r\n\\end{minipage}\r\n\\end{example}\r\n\r\n\r\n\\begin{solution}\r\n{\\begin{enumerate}\r\n\\item As $x\\rightarrow 0^+$, $x\\rightarrow 0$ and $e^{1/x}\\rightarrow \\infty$. Thus we have the indeterminate form $0\\cdot\\infty$. We rewrite the expression $x\\cdot e^{1/x}$ as $\\ds\\frac{e^{1/x}}{1/x}$; now, as $x\\rightarrow 0^+$, we get the indeterminate form $\\infty/\\infty$ to which l'H\\^opital's Rule can be applied. \r\n$$ \\lim_{x\\to0^+} x\\cdot e^{1/x} = \\lim_{x\\to 0^+} \\frac{e^{1/x}}{1/x} \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to 0^+}\\frac{(-1/x^2)e^{1/x}}{-1/x^2} =\\lim_{x\\to 0^+}e^{1/x} =\\infty.$$\r\n\r\nInterpretation: $e^{1/x}$ grows ``faster'' than $x$ shrinks to zero, meaning their product grows without bound.\r\n\r\n\\item As $x\\rightarrow 0^-$, $x\\rightarrow 0$ and $e^{1/x}\\rightarrow e^{-\\infty}\\rightarrow 0$. The limit then evaluates to $0\\cdot 0$, which is not an indeterminate form. We conclude then that $$\\lim_{x\\to 0^-}x\\cdot e^{1/x} = 0.$$\r\n\r\n\\item As $x\\to 0^+$, $\\ln x\\to -\\infty$, so the product is indeterminate of the form $0\\cdot\\infty$ (with a negative second factor).\r\n So we can apply L'H\\^opital's Rule after re-writing it in the form $\\frac{\\infty }{\\infty }$:\r\n$$x\\ln x = \\frac{\\ln x}{1/x}=\\frac{\\ln x}{x^{-1}}.$$\r\nNow as $x$ approaches zero, both the numerator and denominator\r\napproach infinity (one $-\\infty$ and one $+\\infty$, but only the size\r\nis important). Using L'H\\^opital's Rule:\r\n$$\\lim_{x\\to 0^+} {\\ln x\\over x^{-1}}\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}\r\n\\lim_{x\\to 0^+} {1/x\\over -x^{-2}} =\\lim_{x\\to 0^+} -x = 0.$$\r\nInterpretation: $x$ approaches zero much faster than $\\ln x$ approaches $-\\infty$.\r\n\\item This limit initially evaluates to the indeterminate form $\\infty-\\infty$.
By applying a logarithmic rule, we can rewrite the limit as \r\n$$ \\lim_{x\\to\\infty} \\ln(x+1)-\\ln x = \\lim_{x\\to \\infty} \\ln \\left(\\frac{x+1}x\\right).$$\r\n\r\nAs $x\\rightarrow \\infty$, the argument of the $\\ln$ term approaches $\\infty/\\infty$, to which we can apply l'H\\^opital's Rule.\r\n$$\\lim_{x\\to\\infty} \\frac{x+1}x \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\frac11=1.$$\r\n\r\nSince $x\\rightarrow \\infty$ implies $\\ds\\frac{x+1}x\\rightarrow 1$, it follows that \r\n$$x\\rightarrow \\infty \\quad \\text{ implies }\\quad \\ln\\left(\\frac{x+1}x\\right)\\rightarrow \\ln 1=0.$$\r\n\r\nThus $$ \\lim_{x\\to\\infty} \\ln(x+1)-\\ln x = \\lim_{x\\to \\infty} \\ln \\left(\\frac{x+1}x\\right)=0.$$\r\nInterpretation: since this limit evaluates to 0, it means that for large $x$, there is essentially no difference between $\\ln (x+1)$ and $\\ln x$; their difference is essentially 0.\r\n\r\n\\item\t\tThe limit $\\ds \\lim_{x\\to\\infty} x^2-e^x$ initially returns the indeterminate form $\\infty-\\infty$. We can rewrite the expression by factoring out $x^2$; $\\ds x^2-e^x = x^2\\left(1-\\frac{e^x}{x^2}\\right).$ We need to evaluate how $e^x/x^2$ behaves as $x\\rightarrow \\infty$:\r\n$$\\lim_{x\\to\\infty}\\frac{e^x}{x^2} \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{e^x}{2x} \\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=} \\lim_{x\\to\\infty} \\frac{e^x}{2} = \\infty.$$\r\n\r\nThus $\\lim_{x\\to\\infty}x^2(1-e^x/x^2)$ evaluates to $\\infty\\cdot(-\\infty)$, which is not an indeterminate form; rather, $\\infty\\cdot(-\\infty)$ evaluates to $-\\infty$. We conclude that \r\n$\\ds \\lim_{x\\to\\infty} x^2-e^x = -\\infty.$\r\n\r\nInterpretation: as $x$ gets large, the difference between $x^2$ and $e^x$ grows very large.\r\n\\end{enumerate}\r\n}\r\n\\end{solution}\r\n\r\n\r\n\r\n\r\n\\subsection*{Indeterminate Forms\\ \\ $0^0$, $1^\\infty$ and $\\infty^0$}\r\n\r\n\r\nWhen faced with an indeterminate form that involves a power, it often helps to employ the natural logarithmic function. The following Key Idea expresses the concept, which is followed by an example that demonstrates its use.\r\n\r\n\r\n\\begin{formulabox}[Limits Involving Indeterminate Powers] %Forms $0^0$, $1^\\infty$ and $\\infty^0$\r\n{\\label{idea:LHRpower}\r\nIf $\\ds \\lim_{x\\to c} \\ln\\big(f(x)\\big) = L$, then \r\n$\\ds \\lim_{x\\to c} f(x) = \\lim_{x\\to c} e^{\\ln(f(x))} = e\\,^L.$ \r\n\\index{limit!indeterminate form}\\index{indeterminate form}\r\n}\r\n\\end{formulabox}\r\n\r\n\r\n\r\n%\r\n\r\n\r\n\\begin{example}{Using l'H\\^opital's Rule with indeterminate forms involving exponents }{ex_LHR4}\r\n{Evaluate the following limits.\r\n\\begin{enumerate}\r\n\\item $\\ds \\lim_{x\\to\\infty} \\left(1+\\frac1x\\right)^x  $ \r\n\\item  $ \\ds \\lim_{x\\to0^+} x^x$\r\n\\item $\\ds{\\lim_{x\\to 1^+}x^{1/(x-1)}}.$\r\n\\end{enumerate}\r\n}\r\n\\end{example}\r\n\r\n\r\n\\begin{solution}\r\n{\\begin{enumerate}\r\n\\item\t\t%This equivalent to a special limit given in Theorem \\ref{thm:lim_continuous}; \r\nThis limit has important applications within mathematics and finance. Note that the exponent approaches $\\infty$ while the base approaches $ 1 $, leading to the indeterminate form $1^\\infty$. Let $f(x) = (1+1/x)^x$; the problem asks to evaluate $\\ds\\lim_{x\\to\\infty}f(x)$. 
Let's first evaluate $\\ds \\lim_{x\\to\\infty}\\ln\\big(f(x)\\big)$.\r\n\\begin{align*}\r\n\\lim_{x\\to\\infty}\\ln\\big(f(x)\\big) & = \\lim_{x\\to\\infty} \\ln \\left(1+\\frac1x\\right)^x \\\\\r\n\t\t\t&= \\lim_{x\\to\\infty} x\\ln\\left(1+\\frac1x\\right)\\\\\r\n\t\t\t&=  \\lim_{x\\to\\infty} \\frac{\\ln\\left(1+\\frac1x\\right)}{1/x}\\\\\r\n\t\t\t\\intertext{This produces the indeterminate form 0/0, so we apply l'H\\^opital's Rule.}\r\n\t\t\t&\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}\t\\lim_{x\\to\\infty} \\frac{\\frac{1}{1+1/x}\\cdot(-1/x^2)}{(-1/x^2)} \\\\\r\n\t\t\t&= \\lim_{x\\to\\infty}\\frac{1}{1+1/x}\\\\\r\n\t\t\t&= 1.\r\n\\end{align*}\r\nThus $\\ds\\lim_{x\\to\\infty} \\ln \\big(f(x)\\big) = 1.$ We return to the original limit and apply Key Idea \\ref{idea:LHRpower}.\r\n\r\n$$\\lim_{x\\to\\infty}\\left(1+\\frac1x\\right)^x = \\lim_{x\\to\\infty} f(x) =  \\lim_{x\\to\\infty}e^{\\ln (f(x))} = e^1 = e.$$\r\n\r\n\r\n\\item\t\tThis limit leads to the indeterminate form $0^0$. Let $f(x) = x^x$ and consider first $\\ds\\lim_{x\\to0^+} \\ln\\big(f(x)\\big)$. \r\n\\begin{align*}\r\n\\lim_{x\\to0^+} \\ln\\big(f(x)\\big) &= \\lim_{x\\to0^+} \\ln\\left(x^x\\right) \\\\\r\n\t\t\t&= \\lim_{x\\to0^+} x\\ln x \\\\\r\n\t\t\t&= \\lim_{x\\to0^+} \\frac{\\ln x}{1/x}.\\\\\r\n\t\t\t\\intertext{This produces the indeterminate form $-\\infty/\\infty$ so we apply l'H\\^opital's Rule.}\r\n\t\t\t&\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}\t\\lim_{x\\to0^+} \\frac{1/x}{-1/x^2} \\\\\r\n\t\t\t&= \\lim_{x\\to0^+} -x \\\\\r\n\t\t\t&= 0.\r\n\\end{align*}\r\nThus $\\ds\\lim_{x\\to0^+} \\ln\\big(f(x)\\big) =0$. We return to the original limit and apply Key Idea \\ref{idea:LHRpower}.\r\n$$\\lim_{x\\to0^+} x^x = \\lim_{x\\to0^+} f(x) = \\lim_{x\\to0^+} e^{\\ln(f(x))} = e^0 = 1.$$\r\nThis result is supported by the graph of $f(x)=x^x$ given in Figure \\ref{fig:LHR4}.\r\n\r\n\\mfigure{.8}{A graph of $f(x)=x^x$ supporting the fact that as $x\\to 0^+$, $f(x)\\to 1$.}{fig:LHR4}{\\begin{tikzpicture}\r\n\\begin{axis}[width=.6\\textwidth,%\r\ntick label style={font=\\scriptsize},axis y line=middle,axis x line=middle,name=myplot,axis on top,%\r\n\t\t\t%x=.37\\marginparwidth,\r\n\t\t\t%y=.37\\marginparwidth,\r\n%\t\t\txtick=\\empty,% \r\n%\t\t\textra x ticks={.5,3},\r\n%\t\t\textra x tick labels={$a$,$b$},\r\n\t\t\tytick={1,2,3,4},\r\n%\t\t\tyticklabels={$-0.002$,$0.002$,$0.004$},\r\n\t\t\t%minor y tick num=1,\r\n%\t\t\textra y ticks={0.001},%\r\n%\t\t\tminor x tick num=4,\r\n\t\t\tymin=-.4,ymax=4.5,%\r\n\t\t\txmin=-.1,xmax=2.2,%\r\n\t\t\tscaled ticks=false\r\n]\r\n\r\n\\addplot [{\\colorone},thick,smooth] coordinates {(0.01,0.955)(0.02,0.9247)(0.03,0.9001)(0.04,0.8792)(0.05,0.8609)(0.06,0.8447)(0.07,0.8302)(0.08,0.817)(0.09,0.8052)(0.1,0.7943)(0.2,0.7248)(0.3,0.6968)(0.4,0.6931)(0.5,0.7071)(0.6,0.736)(0.7,0.7791)(0.8,0.8365)(0.9,0.9095)(1.,1.)(1.1,1.111)(1.2,1.245)(1.3,1.406)(1.4,1.602)(1.5,1.837)(1.6,2.121)(1.7,2.465)(1.8,2.881)(1.9,3.386)(2.,4.)\r\n};\r\n\r\n\\draw (axis cs:1,1) node [below right] {\\scriptsize $f(x)=x^x$};\r\n\r\n\r\n\r\n%\\draw (axis cs:2.4,-0.002) node {\\scriptsize $f(x)$};\r\n\\end{axis}\r\n\r\n\\node [right] at (myplot.right of origin) {\\scriptsize $x$};\r\n\\node [above] at (myplot.above origin) {\\scriptsize $y$};\r\n\\end{tikzpicture}\r\n\\item \r\nThis limit is of the type $\\mathquotes{1^\\infty}$.\r\nTo deal with this type of limit we will again use logarithms.\r\nLet\r\n$$L=\\lim_{x\\to 1^+}x^{1/(x-1)}.$$\r\nNow, take the natural log of both 
sides:\r\n$$\\ln L=\\lim_{x\\to 1^+}\\ln\\left(x^{1/(x-1)}\\right).$$\r\nUsing log properties we have:\r\n$$\\ln L=\\lim_{x\\to 1^+}\\frac{\\ln x}{x-1}.$$\r\nThe right side limit is now of the type $0/0$, therefore, we can apply L'H\\^opital's Rule:\r\n$$\\ln L=\\lim_{x\\to 1^+}\\frac{\\ln x}{x-1}\\stackrel{\\ \\text{ by LHR \\rule[-5pt]{0pt}{3pt}} \\ }{=}\\lim_{x\\to 1^+}\\frac{1/x}{1}=1$$\r\nThus, $\\ln L=1$ and hence, our original limit (denoted by $L$) is: $L=e^1=e$. That is,\r\n$$L=\\lim_{x\\to 1^+}x^{1/(x-1)}=e.$$\r\nIn this case, even though our limit had a type of $\\mathquotes{1^\\infty}$, it actually had a value of $e$.\r\n}\r\n\\end{enumerate}\r\n}\r\n\\end{solution}\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n%Our brief revisit of limits will be rewarded in the next section where we consider \\textit{improper integration.} So far, we have only considered definite integrals where the bounds are finite numbers, such as $\\ds \\int_0^1 f(x)\\ dx$. Improper integration considers integrals where one, or both, of the bounds are ``infinity.'' Such integrals have many uses and applications, in addition to generating ideas that are enlightening.\r\n\r\n\r\n%The following application of derivatives allows us to compute certain limits.\r\n%\r\n%\\begin{definition}{Limits of the Indeterminate Forms $\\frac{0}{0}$ and $\\frac{\\infty}{\\infty}$}{IndeterminateLimits}\r\n%A limit of a quotient $\\lim\\limits_{x\\rightarrow a}\\frac{f\\left( x\\right) }{%\r\n%\tg\\left( x\\right) }$ is said to be an \\textbf{indeterminate form of the type }%\r\n%$\\frac{0}{0}$ if both $f\\left( x\\right) \\rightarrow 0$ and $g\\left( x\\right)\r\n%\\rightarrow 0$ as $x\\rightarrow a.$ Likewise, it is said to be an \\textbf{%\r\n%\tindeterminate form of the type} $\\frac{\\infty }{\\infty }$ if both $f\\left(\r\n%x\\right) \\rightarrow \\pm \\infty $ and $g\\left( x\\right) \\rightarrow \\pm\r\n%\\infty $ as $x\\rightarrow a$ (Here, the two $\\pm $ signs are independent of\r\n%each other).\r\n%\\end{definition}\r\n%\r\n%\\begin{theorem}{L'H\\^opital's Rule}{LHRule}\r\n%For a limit $\\lim\\limits_{x\\rightarrow a}\\frac{f\\left( x\\right) }{g\\left(\r\n%\tx\\right) }$ of the indeterminate form $\\frac{0}{0}$ or $\\frac{\\infty }{%\r\n%\t\\infty },$ $\\lim\\limits_{x\\rightarrow a}\\frac{f\\left( x\\right) }{g\\left(\r\n%\tx\\right) }=\\lim\\limits_{x\\rightarrow a}\\frac{f^{\\prime }\\left( x\\right) }{%\r\n%\tg^{\\prime }\\left( x\\right) }$ if $\\lim\\limits_{x\\rightarrow a}\\frac{%\r\n%\tf^{\\prime }\\left( x\\right) }{g^{\\prime }\\left( x\\right) }$ exists or equals $%\r\n%\\infty $ or $-\\infty$.\r\n%\\end{theorem}\r\n%\r\n%This theorem is somewhat difficult to prove, in part because it\r\n%incorporates so many different possibilities, so we will not prove it\r\n%here.\r\n%\r\n%We should also note that there may be instances where we would need to apply L'H\\^opital's Rule multiple times, but we must confirm that $\\lim_{x\\to a}\\frac{f'(x)}{g'(x)}$ is still indeterminate before we attempt to apply L'H\\^opital's Rule again. 
Finally, we want to mention that L'H\\^opital's rule is also valid for one-sided limits and limits at infinity.\r\n%\r\n%\\begin{example}{L'H\\^opital's Rule}{lhrule0}\r\n%Compute $\\ds\\lim_{x\\to \\pi}\\frac{x^2-\\pi^2}{\\sin x}$.\r\n%\\end{example}\r\n%\r\n%\\begin{solution} \r\n%We use L'H\\^opital's Rule: Since the numerator and denominator\r\n%both approach zero,\r\n%$$\\lim_{x\\to \\pi}\\frac{x^2-\\pi^2}{\\sin x}=\r\n%\\lim_{x\\to \\pi}\\frac{2x}{\\cos x},$$\r\n%provided the latter exists. But in fact this is an easy limit, since\r\n%the denominator now approaches $-1$, so \r\n%$$\\lim_{x\\to \\pi}\\frac{x^2-\\pi^2}{\\sin x}=\\frac{2\\pi}{-1} = -2\\pi.$$\r\n%\\end{solution}\r\n%\r\n%\\begin{example}{L'H\\^opital's Rule}{LHRule1}\r\n%Compute $\\ds\\lim_{x\\to \\infty}{2x^2-3x+7\\over\r\n%x^2+47x+1}$.\r\n%\\end{example}\r\n%\r\n%\\begin{solution} \r\n%As $x$ goes to infinity, both the numerator and denominator go to\r\n%infinity, so we may apply L'H\\^opital's Rule:\r\n%$$\\lim_{x\\to \\infty}\\frac{2x^2-3x+7}{x^2+47x+1}=\r\n%\\lim_{x\\to \\infty}\\frac{4x-3}{2x+47}.$$\r\n%In the second quotient, it is still the case that the numerator and\r\n%denominator both go to infinity, so we are allowed to use\r\n%L'H\\^opital's Rule again:\r\n%$$\\lim_{x\\to \\infty}\\frac{4x-3}{2x+47}=\\lim_{x\\to \\infty}\\frac{4}{2}=2.$$\r\n%So the original limit is 2 as well.\r\n%\\end{solution}\r\n%\r\n%\\begin{example} {L'H\\^opital's Rule}{LHRule2}\r\n%Compute $\\ds\\lim_{x\\to 0}\\frac{\\sec x - 1}{\\sin x}$.\r\n%\\end{example}\r\n%\r\n%\\begin{solution} \r\n%Both the numerator and denominator approach zero, so applying \r\n%L'H\\^opital's Rule:\r\n%$$\\lim_{x\\to 0}\\frac{\\sec x - 1}{\\sin x}=\r\n%\\lim_{x\\to 0}\\frac{\\sec x\\tan x}{\\cos x}=\\frac{1\\cdot 0}{1}=0.$$\r\n%\\end{solution}\r\n%\r\n%L'H\\^{o}pital's rule concerns limits of a quotient that are indeterminate forms. But not all\r\n%functions are given in the form of a quotient. But all the same, nothing\r\n%prevents us from re-writing a given function in the form of a quotient.\r\n%Indeed, some functions whose given form involve either a product $f\\left(\r\n%x\\right) g\\left( x\\right) $ or a power $f\\left( x\\right) ^{g\\left( x\\right)\r\n%} $ carry indeterminacies such as $0\\cdot \\pm \\infty $ and $1^{\\pm \\infty }.$\r\n%Something small times something numerically large (positive or negative)\r\n%could be anything. It depends on how small and how large each piece turns\r\n%out to be. A number close to 1 raised to a numerically large (positive or\r\n%negative) power could be anything. It depends on how close to 1 the base is,\r\n%whether the base is larger than or smaller than 1, and how large the\r\n%exponent is (and its sign). We can use suitable algebraic manipulations to\r\n%relate them to indeterminate quotients. We will illustrate with two\r\n%examples, first a product and then a power.\r\n%\r\n%\\begin{example}{L'H\\^opital's Rule}{LHRule3}\r\n%Compute $\\ds\\lim_{x\\to 0^+} x\\ln x$.\r\n%\\end{example}\r\n%\r\n%\\begin{solution} \r\n%This doesn't appear to be suitable for L'H\\^opital's Rule, but it also\r\n%is not ``obvious''. As $x$ approaches zero, $\\ln x$ goes to $-\\infty$,\r\n%so the product looks like:\r\n%$$(\\hbox{something very small})\\cdot(\\hbox{something very large and negative}).$$\r\n%This could be anything: it depends on {\\it how small} and\r\n%{\\it how large} each piece of the function turns out to be. 
\r\n%As defined earlier, this is a type of $\\pm\\mathquotes{0\\cdot\\infty}$, which is\r\n%indeterminate. So we can in fact apply L'H\\^opital's Rule after re-writing\r\n%it in the form $\\frac{\\infty }{\\infty }$:\r\n%$$x\\ln x = \\frac{\\ln x}{1/x}=\\frac{\\ln x}{x^{-1}}.$$\r\n%Now as $x$ approaches zero, both the numerator and denominator\r\n%approach infinity (one $-\\infty$ and one $+\\infty$, but only the size\r\n%is important). Using  L'H\\^opital's Rule:\r\n%$$\\lim_{x\\to 0^+} {\\ln x\\over x^{-1}}=\r\n%\\lim_{x\\to 0^+} {1/x\\over -x^{-2}} =\\lim_{x\\to 0^+} {1\\over x}(-x^2)=\r\n%\\lim_{x\\to 0^+} -x = 0.$$\r\n%One way to interpret this is that since $\\ds\\lim_{x\\to\r\n%  0^+}x\\ln x = 0$, the $x$ approaches zero much faster than the $\\ln x$\r\n%approaches $-\\infty$.\r\n%\\end{solution}\r\n%\r\n%Finally, we illustrate how a limit of the type $\\mathquotes{1^\\infty}$ can be indeterminate.\r\n%\r\n%\\begin{example} {L'H\\^opital's Rule}{LHRule4}\r\n%Evaluate $\\ds{\\lim_{x\\to 1^+}x^{1/(x-1)}}.$\r\n%\\end{example}\r\n%\r\n%\\begin{solution} \r\n%Plugging in $x=1$ (from the right) gives a limit of the type $\\mathquotes{1^\\infty}$.\r\n%To deal with this type of limit we will use logarithms.\r\n%Let\r\n%$$L=\\lim_{x\\to 1^+}x^{1/(x-1)}.$$\r\n%Now, take the natural log of both sides:\r\n%$$\\ln L=\\lim_{x\\to 1^+}\\ln\\left(x^{1/(x-1)}\\right).$$\r\n%Using log properties we have:\r\n%$$\\ln L=\\lim_{x\\to 1^+}\\frac{\\ln x}{x-1}.$$\r\n%The right side limit is now of the type $0/0$, therefore, we can apply L'H\\^opital's Rule:\r\n%$$\\ln L=\\lim_{x\\to 1^+}\\frac{\\ln x}{x-1}=\\lim_{x\\to 1^+}\\frac{1/x}{1}=1$$\r\n%Thus, $\\ln L=1$ and hence, our original limit (denoted by $L$) is: $L=e^1=e$. That is,\r\n%$$L=\\lim_{x\\to 1^+}x^{1/(x-1)}=e.$$\r\n%In this case, even though our limit had a type of $\\mathquotes{1^\\infty}$, it actually had a value of $e$.\r\n%\\end{solution}\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\Opensolutionfile{solutions}[ex]\r\n\\section*{Exercises for \\ref{sec:LH}}\r\n\r\nCompute the following limits.\r\n\r\n\\begin{multicols}{2}[]\r\n\r\n\\begin{enumialphparenastyle}\r\n\r\n\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to 0} {\\cos x -1\\over \\sin x}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to \\infty} {e^x\\over x^3}$\r\n\\begin{sol}\r\n $\\infty$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to \\infty} {\\ln x\\over x}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to \\infty} {\\ln x\\over \\sqrt{x}}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to0}{\\sqrt{9+x}-3\\over x}$\r\n\\begin{sol}\r\n $1/6$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n$\\ds\\lim_{x\\to2}{2-\\sqrt{x+2}\\over 4-x^2}$\r\n\\begin{sol}\r\n $1/16$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{\\sqrt{x}-1\\over \\sqrt[3]{x}-1}$\r\n\\begin{sol}\r\n $3/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{(1-x)^{1/4}-1\\over x}$\r\n\\begin{sol}\r\n $-1/4$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{t\\to 0}{\\left(t+{1\\over t}\\right)((4-t)^{3/2}-8)}$\r\n\\begin{sol}\r\n $-3$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{t\\to 0^+}\\left({1\\over 
t}+{1\\over\\sqrt{t}}\\right)\r\n(\\sqrt{t+1}-1)$\r\n\\begin{sol}\r\n $1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to 0}{x^2\\over\\sqrt{2x+1}-1}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{u\\to 1}{(u-1)^3\\over (1/u)-u^2+3u-3}$\r\n\\begin{sol}\r\n $-1$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to 0}{2+(1/x)\\over 3-(2/x)}$\r\n\\begin{sol}\r\n $-1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to 0^+}{1+5/\\sqrt{x}\\over 2+1/\\sqrt{x}}$\r\n\\begin{sol}\r\n $5$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to\\pi/2}{\\cos x\\over (\\pi/2)-x}$\r\n\\begin{sol}\r\n $1$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{e^x-1\\over x}$\r\n\\begin{sol}\r\n $1$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{x^2\\over e^x-x-1}$\r\n\\begin{sol}\r\n $2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{\\ln x\\over x-1}$\r\n\\begin{sol}\r\n $1$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{\\ln(x^2+1)\\over x}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{x\\ln x\\over x^2-1}$\r\n\\begin{sol}\r\n $1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{\\sin(2x)\\over\\ln(x+1)}$\r\n\\begin{sol}\r\n $2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{x^{1/4}-1\\over x}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{\\sqrt{x}-1\\over x-1}$\r\n\\begin{sol}\r\n $1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{3x^2+x+2\\over x-4}$\r\n\\begin{sol}\r\n $-1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{\\sqrt{x+1}-1\\over \\sqrt{x+4}-2}$\r\n\\begin{sol}\r\n $2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{\\sqrt{x+1}-1\\over \\sqrt{x+2}-2}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0^+}{\\sqrt{x+1}+1\\over\\sqrt{x+1}-1}$\r\n\\begin{sol}\r\n $\\infty$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to0}{\\sqrt{x^2+1}-1\\over\\sqrt{x+1}-1}$\r\n\\begin{sol}\r\n $0$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to1}{(x+5)\\left({1\\over 2x}+{1\\over x+2}\\right)}$\r\n\\begin{sol}\r\n $5$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n $\\ds\\lim_{x\\to2}{x^3-6x-2\\over x^3+4}$\r\n\\begin{sol}\r\n $-1/2$\r\n\\end{sol}\r\n\\end{ex}\r\n\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\pi} \\frac{\\sin x}{x-\\pi}$}\r\n\r\n\\begin{sol}\r\n  {$-1$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\pi/4} \\frac{\\sin x-\\cos x}{\\cos (2x)}$}\r\n \r\n\\begin{sol}\r\n {$-\\sqrt{2}/2$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to 0} \\frac{\\sin (5x)}{x}$}\r\n\r\n\\begin{sol}\r\n  {$5$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to 0^+} \\frac{e^x-1}{x^2}$}\r\n \r\n\\begin{sol}\r\n {$\\infty$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds 
\\lim_{x\\to 0^+} \\frac{e^x-x-1}{x^2}$}\r\n\r\n\\begin{sol}\r\n  {$1/2$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to 0^+} \\frac{x-\\sin x}{x^3-x^2}$}\r\n\r\n\\begin{sol}\r\n  {$0$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\infty} \\frac{\\Big(\\ln x\\Big)^2}{x}$}\r\n\r\n\\begin{sol}\r\n  {$0$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\infty} x^3-x^2$}\r\n \r\n\\begin{sol}\r\n {$\\infty$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\infty} \\sqrt{x}-\\ln x$}\r\n\r\n\\begin{sol}\r\n {$\\infty$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to -\\infty} xe^x$}\r\n\r\n\\begin{sol}\r\n {$0$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to 0^+} (\\sin x)^{x}$ \\\\ Hint: use the Squeeze Theorem.}\r\n\r\n\\begin{sol}\r\n {$1$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to 1^-} (1-x)^{1-x}$}\r\n \r\n\\begin{sol}\r\n {$1$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\infty} (x)^{1/x}$}\r\n\r\n\\begin{sol}\r\n {$1$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to 1^+} (\\ln x)^{1-x}$}\r\n \r\n\\begin{sol}\r\n {$1$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\infty} (1+x)^{1/x}$}\r\n \r\n\\begin{sol}\r\n{$1$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\infty} (1+x^2)^{1/x}$}\r\n\r\n\\begin{sol}\r\n {$1$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to \\pi/2} \\tan x\\cos x$}\r\n\r\n\\begin{sol}\r\n  {$1$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\pi/2} \\tan x\\sin (2x)$}\r\n \r\n\\begin{sol}\r\n {$2$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to 1^+} \\frac{1}{\\ln x} - \\frac{1}{x-1}$}\r\n\r\n\\begin{sol}\r\n{$1/2$}  \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to 3^+} \\frac{5}{x^2-9} - \\frac{x}{x-3}$}\r\n \r\n\\begin{sol}\r\n{$-\\infty$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\infty} x\\tan (1/x)$}\r\n\r\n\\begin{sol}\r\n{$1$}  \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to \\infty} \\frac{(\\ln x)^3}{x}$}\r\n \r\n\\begin{sol}\r\n{$0$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n {$\\ds \\lim_{x\\to 1} \\frac{x^2+x-2}{\\ln x}$}\r\n\r\n\\begin{sol}\r\n  {$3$}\r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n%%%%%%%%%%\r\n\\begin{ex} \r\n{$\\ds \\lim_{x\\to 0^+} xe^{1/x}$}\r\n\r\n\\begin{sol}\r\n {$\\infty$} \r\n\\end{sol}\r\n\r\n\\end{ex}\r\n\r\n\r\n\r\n\\begin{ex}\r\nDiscuss what happens if we try to use L'H\\^{o}pital's rule to find the limit $\\lim\\limits_{x\\rightarrow \\infty}\\dfrac{x+\\sin x}{x+1}$.\r\n\\end{ex}\r\n\r\n\\end{enumialphparenastyle}\r\n\r\n\\end{multicols}\r\n\\clearpage\r\n", "meta": {"hexsha": "b8f5c155e0b47c9c58b18cde2c815df6a021bca0", "size": 30994, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "5-applications-of-derivatives/5-5-Lhospitals-rule.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": 
["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5-applications-of-derivatives/5-5-Lhospitals-rule.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5-applications-of-derivatives/5-5-Lhospitals-rule.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.0185950413, "max_line_length": 644, "alphanum_fraction": 0.6016648384, "num_tokens": 12088, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5563118927421278}}
{"text": "\n\\documentclass{article}\n\\usepackage{amsmath} \n\\DeclareMathOperator*{\\argmax}{argmax}\n\n\\begin{document}\n\\begin{Large}\n\\begin{center}\nBuilding an Optimal Roster for Rotissarie Baseball\n\\end{center}\n\n\\end{Large}\n\n\\section*{Introduction}\nThree years ago my roommate invited me to participate in his 10-team 5x5 rotissarie baseball league. I played baseball growing up, but never had much interest in the MLB. I accepted the offer to join -- after all, \\$10 is a small price to pay for a reason to watch baseball and an opportunity to glorify the part of baseball that matters the most - the stats. The thing that's intrigued me the most has always been the draft -- we use an auction draft, which adds an extra dimension to the strategy. With no set draft positions, there's no comfort of the ability to take the best player availble. Every pick is fought for, and to draft well requires a coherent strategy. \n\nThat first year, I had a relatively naive strategy: a list of a few players at each position that I wanted and a long list of players I didn't. I didn't put much thought into my budget, so my lists were exhausted midway through the draft. The second year, I didn't improve my strategy much...I think. Try as I might, I cannot remember anything about this draft. Every other draft I remember vividly, but not this one. I was about to graduate college, there was a lot going on -- no room in my head for rotisserie drafts I guess. Last year, with the league expanding to 12 teams, I improved my strategy a bit. I started by coming up with goal values for each category based on previous winning values. I exported Steeamer/Razzball projections into an excel sheet, set up to calculate my team's score for each category and tell me what percentage of my goal I'd reached so far. I knew I needed to step up my strategy for this year.\n\nAt its most basic, we can think of a draft as an optimization problem, albeit with a dizzying number of unknowns. But, if we assume that our Steamer/Razzball projections and valuations are basically correct -- and make a few other simplifications -- we can formulate our draft as a basic maximization problem that we can solve to determine the best overall roster available through the draft. \n\n\\section*{Problem Formulation}\nWe start by formulating a linear equation for our roster's overall score:\n\\begin{align}\nS = &\\sum\\limits_{i \\in Hitters} {H_i^{adj} + R_i^{adj} + HR_i^{adj} + RBI_i^{adj} + AVG_i^{adj}} + \\nonumber \\\\ &\\sum\\limits_{j \\in Pitchers} {W_j^{adj} + K_j^{adj} + SV_j^{adj} + {\\frac{1}{ERA_j}}^{adj} + {\\frac{1}{WHIP_j}}^{adj}}\n\\end{align} \nThere are a few things to note here. Each of these is an adjusted value. Since each category has a different magnitude but each is worth the same number of points, we'll need to adjust them. Because we want each category to have the same scale but maintain each player's relative value in each, we'll need to center their means but preserve their variances. We'll use the sklearn StandardScaler method to do so (specifying to maintain the standard deviation). I've used average, ERA, and WHIP directly in this equation, despite the fact that they're actually calculated overall for the entire team from the base statistics. I experimented with separating them out, but (somewhat suprisingly) found no added value in doing so. For clarity, I've split the pitchers and hitters into two different summations, but we will end up combinining them into one. 
This will be equivalent to equation (1) since hitters do not generate pitching stats and pitchers do not generate hitting stats. \n\\begin{align}\nS = \\sum\\limits_{p \\in Players} {R_p^{adj} + HR_p^{adj} + RBI_p^{adj} + SB_p^{adj} + AVG_p^{adj}} + \\\\ \\nonumber {W_p^{adj} + K_p^{adj} + SV_p^{adj} + {\\frac{1}{ERA_p}}^{adj} + {\\frac{1}{WHIP_p}}^{adj}}\n\\end{align} \nSo, now that we have a single equation to calculate our score, we need to find the group of players that maximizes it within the constraints of the league.\n\n\\begin{align}\nTeam = &\\argmax_{s \\in S} \\sum\\limits_{p \\in s} {R_p^{adj} + HR_p^{adj} + RBI_p^{adj} + SB_p^{adj} + AVG_p^{adj}} + \\\\ \\nonumber &{W_p^{adj} + K_p^{adj} + SV_p^{adj} + {\\frac{1}{ERA_p}}^{adj} + {\\frac{1}{WHIP_p}}^{adj}} \\\\\n\\nonumber s.t. &\\sum\\limits_{p \\in s} c(p) \\leq 260 \\mbox{ where } c(p) \\mbox{ is the cost of player } p\\\\\n\\nonumber &\\{C,1B,2B,3B,SS,1B/3B,2B/SS,1B/2B/SS/3B,6 OF, 8 SP, 3 RP\\} \\subset s \\\\ \n\\nonumber &\\sum\\limits_{p \\in s} N(p) = 25 \\mbox{ where } N(p) = \\begin{cases} 1 & \\exists!n \\in s\\ (n \\mbox{ is the name of player } p) \\\\ 0 & \\mbox{else} \\end{cases}\\\\\n\\nonumber &\\mbox{where } S \\mbox{ is the set of sets of all possible combinations of 25 players}\n\\end{align}\n\\pagebreak\n\\section*{Results \\& Conclusion}\nUsing Gurobi, we can solve equation (3) to find our optimal set of players. \n\n\\begin{table}[ht] \\caption{Hitters}\n\\centering\n\\begin{tabular}{c c c c c c c}\n\\hline\\hline\nName & Position & R & HR & RBI & SB & AVG \\\\\n\\hline\nJorge Polanco & SS & 94.6 & 19.3 & 84.7 & 7.6 & 0.281 \\\\\nJose Abreu & 1B & 84.5 & 31.1 & 100.4 & 2.6 & 0.276 \\\\\nNicholas Castellanos & OF & 87.7 & 26.9 & 88.4 & 3.0 & 0.274 \\\\\nBryan Reynolds & OF & 83.1 & 17.9 & 66.0 & 6.3 & 0.288 \\\\\nJ.T. 
Realmuto & C & 77.8 & 24.0 & 76.4 & 5.8 & 0.270 \\\\\nMatt Chapman & 3B & 94.2 & 34.7 & 98.2 & 2.7 & 0.258 \\\\\nPaul DeJong & SS & 82.0 & 28.4 & 84.9 & 6.8 & 0.253 \\\\\nEric Hosmer & 1B & 79.7 & 23.4 & 80.2 & 3.4 & 0.263 \\\\\nDavid Peralta & OF & 80.6 & 20.6 & 74.4 & 2.5 & 0.279 \\\\\nFranmil Reyes & OF & 82.1 & 37.0 & 101.5 & 1.3 & 0.260 \\\\\nJason Heyward & OF & 72.0 & 15.8 & 72.1 & 6.5 & 0.261 \\\\\nShin-Soo Choo & OF & 83.3 & 21.0 & 61.6 & 9.0 & 0.249 \\\\\nJonathan Schoop & 2B & 67.4 & 26.5 & 77.0 & 2.3 & 0.262 \\\\\nBrandon Lowe & 2B & 70.9 & 23.7 & 71.6 & 6.7 & 0.246 \\\\\n\\hline\nTotal &  & 1139.9 & 350.3 & 1137.4 & 66.5 & 0.267 \\\\\n\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[ht] \\caption{Pitchers}\n\\centering\n\\begin{tabular}{c c c c c c c}\n\\hline\\hline\nName & Position & W & SV & K & ERA & WHIP \\\\\n\\hline\nGerrit Cole & SP & 14.9 & 0.0 & 273.3 & 3.250 & 1.040 \\\\\nTrevor Bauer & SP & 11.6 & 0.0 & 228.4 & 3.920 & 1.240 \\\\\nLuis Castillo & SP & 11.3 & 0.0 & 203.4 & 3.820 & 1.250 \\\\\nLucas Giolito & SP & 10.8 & 0.0 & 198.4 & 4.240 & 1.270 \\\\\nMatthew Boyd & SP & 10.0 & 0.0 & 194.2 & 4.320 & 1.220 \\\\\nRobbie Ray & SP & 9.9 & 0.0 & 197.5 & 4.120 & 1.320 \\\\\nCaleb Smith & SP & 8.2 & 0.0 & 172.4 & 4.580 & 1.320 \\\\\nMike Foltynewicz & SP & 9.7 & 0.0 & 157.5 & 4.700 & 1.350 \\\\\nJoe Jimenez & RP & 2.8 & 20.0 & 69.1 & 4.150 & 1.260  \\\\\nHansel Robles & RP & 3.3 & 20.0 & 70.9 & 4.180 & 1.270 \\\\\nJose Leclerc & RP & 3.5 & 20.0 & 91.9 & 3.390 & 1.210 \\\\\n\\hline\nTotal & & 96.0 & 60.0 & 1857.0 & 4.072 & 1.246 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\newpage\n\\noindent Then, we can use the scores and rankings from last season (with my team removed) to get a score for our team. The category and overall rankings are as follows:\n\n\\begin{table}[ht] \\caption{Category Rankings}\n\\centering\n\\begin{tabular}{c c c c c c c c c c c} \n\\hline\\hline \nRank & R & HR & RBI & SB & AVG & K & W & SV & ERA & WHIP \\\\\n\\hline\n1 & 1241 & 433 & 1198 & 141 & 0.2768 & 1788 & 112 & 115 & 3.493 & 1.102 \\\\\n2 & 1174 & 382 & 1147 & 127 & 0.274 & 1725 & 109 & 105 & 3.665 & 1.131 \\\\\n3 & 1159 & 355 & 1088 & 123 & 0.2735 & 1643 & 99 & 79 & 3.76 & 1.21 \\\\\n4 & 1136 & 353 & 1077 & 121 & 0.271 & 1598 & 98 & 73 & 3.898 & 1.216 \\\\\n5 & 1110 & 352 & 1075 & 113 & 0.2705 & 1531 & 97 & 59 & 3.907 & 1.244 \\\\\n6 & 1067 & 332 & 1030 & 110 & 0.2645 & 1480 & 93 & 59 & 4.107 & 1.247 \\\\\n7 & 1006 & 321 & 1016 & 106 & 0.2642 & 1391 & 85 & 55 & 4.112 & 1.262 \\\\\n8 & 997 & 302 & 1000 & 97 & 0.2641 & 1387 & 84 & 54 & 4.144 & 1.267 \\\\\n9 & 974 & 291 & 955 & 94 & 0.262 & 1336 & 80 & 42 & 4.217 & 1.284 \\\\\n10 & 966 & 284 & 905 & 73 & 0.2601 & 1330 & 74 & 39 & 4.444 & 1.291 \\\\\n11 & 898 & 260 & 897 & 72 & 0.2592 & 1132 & 64 & 3 & 4.616 & 1.339 \\\\ [1ex] \n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[ht] \\caption{Overall Rankings}\n\\centering\n\\begin{tabular}{c c}\n\\hline\\hline\nRank & Points \\\\\n\\hline \n1 & 95.5 \\\\\n2 & 83 \\\\\n3 & 74 \\\\\n4 & 74 \\\\\n5 & 73 \\\\\n6 & 70 \\\\\n7 & 69.5 \\\\\n8 & 57 \\\\\n9 & 51 \\\\\n10 & 45.5 \\\\\n11 & 24 \\\\\n[1ex] \n\\hline\n\\end{tabular}\n\\end{table}\n\nWe can obtain scores for each of our values by comparing them to our table of results for last season. As a refresher, rotisserie baseball leagues are scored by totaling each category for the whole team, comparing all teams in each category, and granting 12 points to the first-place team, 11 points to the second, etc. 
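This scoring rule is straightforward to express in code. The sketch below is illustrative only (made-up totals and three teams instead of twelve); ERA and WHIP are ranked ascending, and ties, which split points in a real league, are ignored here.

\begin{verbatim}
# Minimal sketch of rotisserie category scoring: rank each team's category
# total and grant n_teams points for first place, n_teams - 1 for second, etc.
def rotisserie_points(totals, ascending=False):
    """totals: {team: category total} -> {team: points}; ignores ties."""
    ranked = sorted(totals, key=totals.get, reverse=not ascending)
    return {team: len(totals) - i for i, team in enumerate(ranked)}

print(rotisserie_points({"A": 433, "B": 382, "C": 355}))   # HR-style category
print(rotisserie_points({"A": 3.49, "B": 3.67, "C": 3.76},
                        ascending=True))                   # ERA-style category
\end{verbatim}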
\n\n\\begin{table}[ht] \\caption{Team Score}\n\\centering \n\\begin{tabular}{c c}\n\\hline\\hline\nValue & Points \\\\\n\\hline\nR & 9 \\\\\nHR & 5 \\\\\nRBI & 9 \\\\\nSB & 5 \\\\\nAVG & 7 \\\\\nK & 12 \\\\\nW & 7 \\\\\nSV & 8 \\\\\nERA & 7 \\\\\nWHIP & 7 \\\\\n\\hline\nTotal & 76 \\\\\n[1ex] \n\\hline\n\\end{tabular}\n\\end{table}\n\\newpage\nAll things considered, this is not a bad draft -- if we refer to Table 4, this puts us in 3rd place in the league. However, if we were to accept that the draft is a simple linear optimization problem, we would expect that our team would be in first place. So, why isn't it? We could conclude that simply drafting the perfect team is not good enough to win a rotisserie baseball league -- that drafting perfectly could get you a very good team, but playing the waiver wire is required to come in first place. I think this is probably true, but I think there's something else that it's very important we do not overlook: a rotisserie baseball auction draft cannot actually be represented by a linear optimization problem. Once your team gets to first place in any particular category, adding to that category further gives you no additional value. Nothing is gained by running up the score in any category, but our model doesn't represent this. To truly model the draft, we'd need to optimize a stepwise function, applying the scoring function to our model (and, of course, come up with a new way to score our model -- perhaps by applying it in a series of mock drafts). This will be explored in future work. Despite this limitation, our model does very well at approximating an optimal rotisserie baseball team for our particular league. Another thing that is important to note is that this model depends completely on the accuracy of the Steamer/Razzball predictions. Despite using these predictions for a number of years, I have not analyzed their accuracy.\n\\newpage\n\\appendix\n\\section*{Appendix}\n\\end{document}\n", "meta": {"hexsha": "f5564a52c420cc758528fe1e810dc57dc34957ee", "size": 10443, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writeup.tex", "max_stars_repo_name": "samozm/rotissarie_baseball", "max_stars_repo_head_hexsha": "9338c2cd68bfa628b27ee1fd8756bc71d07ff3e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writeup.tex", "max_issues_repo_name": "samozm/rotissarie_baseball", "max_issues_repo_head_hexsha": "9338c2cd68bfa628b27ee1fd8756bc71d07ff3e5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writeup.tex", "max_forks_repo_name": "samozm/rotissarie_baseball", "max_forks_repo_head_hexsha": "9338c2cd68bfa628b27ee1fd8756bc71d07ff3e5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.0701754386, "max_line_length": 1555, "alphanum_fraction": 0.6748060902, "num_tokens": 3584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737869342623, "lm_q2_score": 0.6926419894793248, "lm_q1q2_score": 0.5563118896797907}}
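For readers who want to reproduce the shape of equation (3) without a Gurobi license, here is a toy sketch using the open-source PuLP solver instead. The player names, scores, costs, budget, and reduced roster slots are all invented for illustration; the real model uses the full 25-man roster, the \$260 budget, and the position constraints above.

\begin{verbatim}
# Toy version of the roster problem as a 0/1 program (PuLP, not Gurobi).
import pulp

players = {  # name: (adjusted score, auction cost, position) -- illustrative
    "A": (9.1, 40, "SP"), "B": (7.5, 25, "SP"), "C": (6.0, 18, "OF"),
    "D": (5.2, 12, "OF"), "E": (4.9, 10, "RP"), "F": (3.1,  4, "RP"),
}
budget, roster_size = 60, 3
slots = {"SP": 1, "OF": 1, "RP": 1}  # stand-in for the full position slots

prob = pulp.LpProblem("roster", pulp.LpMaximize)
pick = {p: pulp.LpVariable(f"pick_{p}", cat="Binary") for p in players}

prob += pulp.lpSum(players[p][0] * pick[p] for p in players)  # total score S
prob += pulp.lpSum(players[p][1] * pick[p] for p in players) <= budget
prob += pulp.lpSum(pick.values()) == roster_size
for pos, k in slots.items():
    prob += pulp.lpSum(pick[p] for p in players if players[p][2] == pos) >= k

prob.solve()
print([p for p in players if pick[p].value() == 1])  # e.g. ['B', 'C', 'E']
\end{verbatim}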
{"text": "\\subsection{Accuracy of Set-Based Inversion}\\label{sec:ch03-set}\n\n%%%%%%%%%%%%%% discretization discussion, software contribution in later section\n\nIn this section, we address how to measure the distance between measures that are defined on different discretizations to ensure that the computations make sense mathematically.\nThere is a correspondence between the objects from measure theory that are involved in the SIP, and the finite-state counterparts in the algorithmic implementation.\nTherefore, we begin by summarizing the relationship between the random samples we generate and the Borel $\\sa$.\n\nThe measures computed from Algorithm~\\ref{alg:inv_density} are defined on a set of samples $S = \\set{\\param^{(j)}}_{j=1}^{\\nsamps}$ which implicitly define a Voronoi-cell partition $\\set{\\VV^{(j)}}_{j=1}^{\\nsamps}$ of the parameter space $\\pspace$.\nWe let $\\BB_{\\pspace, \\nsamps}$ denote the \\emph{computational algebra} generated by $\\set{\\VV^{(j)}}_{j=1}^{\\nsamps}$, i.e., using standard measure theory notation,\n$$\n\t\\BB_{\\pspace,\\nsamps} = \\sigma\\left(\\set{\\VV^{(j)}}_{j=1}^{\\nsamps}\\right).\n$$\n$\\BB_{\\pspace, \\nsamps}\\subset \\BB_\\pspace$ and the events $A\\in B_{\\pspace, \\nsamps}$ represent the $A\\in\\BB_\\pspace$ for which we can compute probabilities and make inferences.\nWhile Algorithm~\\ref{alg:inv_density} ultimately defines a probability measure implicitly on $(\\pspace,\\BB_\\pspace)$, computationally this is almost never done; instead, the measures are only interrogated on the computational algebra associated with the finite set of samples.\n\nDifferent sets $S_k = \\set{\\param^{(i)}}_{i=1}^{\\nsamps_k}$, where the $\\param^{(i)}$'s and $\\nsamps_k$'s may differ for each $k$, will lead to different measures computed from Algorithm~\\ref{alg:inv_density}.\nEach $S_k$ induces a computational algebra which we index using the notation $\\BB_{k}$ for simplicity, where it is understood that $\\BB_k = \\BB_{\\pspace, \\nsamps_k}$.\nThe fact that solutions are defined on different sub-$\\sa$s poses an immediate problem with respect to a computational approach to computing the distances between measures $\\PP_{\\pspace, \\ndiscs, \\nsamps_1}$ and $\\PP_{\\pspace, \\ndiscs, \\nsamps_2}$ even if $\\nsamps_1=\\nsamps_2$.\nSee Figure~\\ref{fig:voronoi_issues} for an example illustration of this scenario.\n\n\\begin{figure}[ht]\n\\centering\n\t\\begin{minipage}{.4875\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{./images/voronoi_diagrams/voronoi_diagram_N25_r0}\n\t\\end{minipage}\n\t\\begin{minipage}{.4875\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{./images/voronoi_diagrams/voronoi_diagram_N25_r10}\n\t\\end{minipage}\n\\caption{\nTwo different Voronoi partitions induced by $\\nsamps_1 = \\nsamps_2 = 25 $ uniform i.i.d.~random samples.\n}\n\\label{fig:voronoi_issues}\n\\end{figure}\n\n\nThe proof of the following Lemma describes how to extend {\\em any} probability measure defined on a computational algebra to the full $\\sa$ $\\pborel$, which we exploit in Algorithm~\\ref{alg:totvar_disc}.\n\n\\begin{lem}\n\\label{lem:measuresets}\nLet $\\mu$ be a measure on $(\\pspace, \\BB_\\pspace)$, $\\set{\\VV^{(j)}}_{j=1}^{\\nsamps}$ be a partition of $\\pspace$, and $\\BB_{\\pspace, \\nsamps}$ the computational algebra generated by $\\set{\\VV^{(j)}}_{j=1}^{\\nsamps}$.\nAssume $\\mu (\\VV^{(j)}) > 0 \\; \\forall \\; j=1,\\hdots, \\nsamps$.\nThen, there exists a probability measure $\\eta$ on $(\\pspace, \\BB_\\pspace)$ such that 
$\\eta(A) = \\eta_\\nsamps(A) \\; \\forall \\; A\\in\\BB_{\\pspace, \\nsamps}$.\n\\end{lem}\nIn the proof below, we use $\\eta_\\nsamps$ and $\\mu$ to construct a type of discrete Radon-Nikodym derivative of $\\eta$.\nThis is motivated by the formal structure of solutions given by Algorithm~\\ref{alg:inv_density}.\nThe proof of Lemma~\\ref{lem:measuresets} can be found in Appendix~\\ref{app:measuresets}, but the two key equations involved are reproduced here for later reference:\n\n\\begin{equation}\\label{eq:finiteradon}\nf_\\nsamps (\\param) = \\sum_{j=1}^{\\nsamps} \\frac{\\eta_\\nsamps (\\VV^{(j)}) }{\\mu (\\VV^{(j)})} \\Chi_{\\VV^{(j)}} (\\param).\n\\end{equation}\n\nThen, for any $A\\in\\BB_\\pspace$, define\n\n\\begin{equation}\\label{eq:approxmeasure}\n\\eta (A) = \\int_A f_\\nsamps (\\param) \\, d\\mu.\n\\end{equation}\n\nWe note that in practice, $\\Chi_{\\VV^{(j)}} (\\param)$ requires the use of nearest-neighbor computations, but otherwise evaluation of Eq.~\\eqref{eq:finiteradon} is straightforward.\n\n\n\\begin{algorithm}\n\\DontPrintSemicolon\n\\caption{Total Variation Discretization}\n\\label{alg:totvar_disc}\nLet $(\\pspace, \\BB_{\\pspace, \\nsamps_1}, \\eta_{\\nsamps_1} )$ and $(\\pspace, \\BB_{\\pspace, \\nsamps_2}, \\eta_{\\nsamps_2} )$ be given.\\\\\n\nConstruct $f_{\\nsamps_1}$ and $f_{\\nsamps_2}$ and corresponding $\\eta_1, \\eta_2$ using Eq.~\\eqref{eq:finiteradon} and Eq.~\\eqref{eq:approxmeasure}, respectively.\n\nUse Monte Carlo sampling to approximate\n$$d_\\text{TV}(\\eta_1, \\eta_2) = \\int_\\pspace \\left| f_{\\nsamps_1}(\\lam) - f_{\\nsamps_2}(\\lam) \\right| \\, d\\mu.$$\n\\end{algorithm}\n\nSince we now have a way to extend probability measures defined on $(\\pspace, \\BB_{\\pspace, \\nsamps})$ to probability measures on $(\\pspace, \\BB_{\\pspace})$, we can use simple Monte Carlo schemes to approximate the Total Variation distance between two probability measures defined on two separate computational algebras.\nThis is demonstrated in Algorithm~\\ref{alg:totvar_disc}.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%\n\\section{Numerical Results and Analysis}\\label{sec:ch03-examples}\nWe first investigate what values of $\\nsamps$ are appropriate provided the goal is to resolve $d(\\PP_{\\pspace, \\ndiscs, \\nsamps}, \\PP_{\\pspace, \\ndiscs})$ to a desired level of accuracy so that\n\\begin{equation}\\label{eq:newobjective}\nd(\\PP_{\\pspace, \\ndiscs, \\nsamps}, \\PP_{\\pspace,\\ndiscs}) < \\tau,\n\\end{equation}\nwhere $\\tau$ is some designated tolerance.\n%\n% Recall that this discussion of error is in reference to some fixed $\\qoi\\in\\qspace$ to which Algorithm~\\ref{alg:inv_density} is applied.\n% The inequality presented in Eq.~\\eqref{eq:set-triangleineq} holds for probability measures induced by any map in $\\qspace$, though we obscure the dependence on $\\qoi$ for the time being.\n% To this end, we introduce notation of the form $\\PP_{\\pspace, \\ndiscs, \\nsamps}^{\\qoi}$ when we want to distinguish between measures constructed from inverting a particular map $\\qoi\\in\\qspace$.\n%\nThe reason for this is that the choice of $\\qoi$ will influence the number of model solutions necessary to accurately solve the SIP as shown in \\cite{BGE+15}.\nDifferent choices for $\\qoi$ may lead to radically different values for $\\nsamps$ in order to achieve the same bound on $d(\\PP_{\\pspace, \\ndiscs, \\nsamps}, \\PP_{\\pspace, \\ndiscs})$ as we see in the following examples.\nBy phrasing the analysis in terms of metrics, we are able to answer more 
broadly generalizable questions about error, including those regarding convergence rates and global accuracy of the estimates.\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\input{ch03/experimental_setup}\n\\input{ch03/rotation_example}\n\\input{ch03/skew_example}\n\\input{ch03/skew_example_3d}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "a560b0c0ba1de42cd0637b4438b50d9effca6da1", "size": 7108, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "set-based/set_accuracy.tex", "max_stars_repo_name": "mathematicalmichael/thesis", "max_stars_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-04-24T08:05:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-28T20:34:29.000Z", "max_issues_repo_path": "set-based/set_accuracy.tex", "max_issues_repo_name": "mathematicalmichael/thesis", "max_issues_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 59, "max_issues_repo_issues_event_min_datetime": "2019-12-27T23:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T17:52:57.000Z", "max_forks_repo_path": "set-based/set_accuracy.tex", "max_forks_repo_name": "mathematicalmichael/thesis", "max_forks_repo_head_hexsha": "2906b10f94960c3e75bdb48e5b8b583f59b9441e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.6862745098, "max_line_length": 321, "alphanum_fraction": 0.7229881823, "num_tokens": 2069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737869342624, "lm_q2_score": 0.6926419831347361, "lm_q1q2_score": 0.5563118845839835}}
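A numerical sketch of the Total Variation Discretization algorithm for $\mu$ uniform on $[0,1]^2$ follows. It is illustrative only: the cell weights standing in for $\eta_\nsamps(\VV^{(j)})$ are set uniformly, the cell measures $\mu(\VV^{(j)})$ are themselves estimated by Monte Carlo, and a nearest-neighbor lookup plays the role of evaluating $\Chi_{\VV^{(j)}}$. Some authors include a factor of $1/2$ in the Total Variation distance; the sketch follows the unnormalized convention above.

\begin{verbatim}
# Sketch of the Total Variation Discretization algorithm, mu uniform on
# [0,1]^2. Cell weights eta_n(V_j) are uniform here purely for illustration;
# in practice they come from the inversion algorithm.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def voronoi_density(samples, cell_probs, n_vol=200_000):
    """Piecewise-constant f_n(theta) = eta_n(V_j) / mu(V_j)."""
    tree = cKDTree(samples)
    probe = rng.uniform(size=(n_vol, samples.shape[1]))
    _, idx = tree.query(probe)
    mu_cell = np.bincount(idx, minlength=len(samples)) / n_vol  # mu(V_j)
    def f(theta):
        _, j = tree.query(theta)        # nearest sample = Voronoi cell
        return cell_probs[j] / mu_cell[j]
    return f

f1 = voronoi_density(rng.uniform(size=(25, 2)), np.full(25, 1 / 25))
f2 = voronoi_density(rng.uniform(size=(25, 2)), np.full(25, 1 / 25))

theta = rng.uniform(size=(100_000, 2))  # Monte Carlo nodes w.r.t. mu
print("d_TV estimate:", np.mean(np.abs(f1(theta) - f2(theta))))
\end{verbatim}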
{"text": "% Created 2018-02-09 vie 13:42\n\\documentclass[letterpaper]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage{khpreamble}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Sampling and anti-aliasing exercise}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs 24.5.1 (Org mode 8.2.10)}}\n\\begin{document}\n\n\\maketitle\n\n\\section*{The Bessel filter}\n\\label{sec-1}\n  A second order Bessel filter\n\\[ H(s) = \\frac{3}{\\big(s/\\omega_0\\big)^2 + 3\\big(s/\\omega_0\\big) + 3}, \\]\nhas been designed to give attenuation of 0.1 at the Nyquist frequency, i.e. \\(|H(i\\omega_N)| = 0.1\\).\n\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{../figures/ps7-bessel-bode}\n\\end{center}\n\n\\begin{enumerate}\n\\item What is the bandwidth of the filter?\n\\item Determine the filter parameter \\(\\omega_0\\) in terms of \\(\\omega_N\\).\n\\item What is the phase shift at the Nyquist frequency?\n\\item Assume that the cross-over frequency of the loop gain is \\(\\omega_c = 0.1\\omega_N\\). What is the phase shift at this frequency?\n\\item What delay $T$  does this correspond to? Approximate the filter as a pure delay \\(H(i\\omega) \\approx \\mexp{-i\\omega T}\\)?\n\\item How large is the delay in terms of the sampling period \\(h = \\frac{\\pi}{\\omega_N}\\)?\n\\end{enumerate}\n\n\n\\section*{Choice of bandwidth of antialiasing filter}\n\\label{sec-2}\n\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{../figures/Astrom-fig73.png}\n\\end{center}\nThe table can be used to find the delay incurred by an antialiasing filter. For instance, if we want to have an attenuation at the Nyquist frequency equal to \\(\\beta = 0.01\\), then a fourth-order Bessel low-pass filter will give a delay of about \\(T_d = 3.2 h\\), that is a delay of more than 3 sampling periods. \n\\begin{enumerate}\n\\item What is the advantage of using an antialiasing filter of higher order than 2?\n\\item If we can only accept a time delay of maximum two sampling periods, what is the attenutation we can obtain at the Nyquist frequency?\n\\item \\(\\omega_B = \\omega_N\\) means that the antialiasing filter is chosen so that the Nyquist frequency is the same as the bandwidth of the filter. The attenuation at \\(\\omega_N\\) is then 0.7=-3dB. If the sampling period is 0.4 seconds. 
what is the time delay due to a sixth-order antialiasing filter?\n\\item What is the attenuation of a fourth-order filter at \\(5\\omega_B\\)?\n\\end{enumerate}\n\n\\clearpage\n\n\n\\section*{Notch filter, preliminary exercise}\n\\label{sec-3}\nSketch the Bode diagram of the (unrealizable) filter\n\\[ F(s) = s^2 + \\omega_1^2 = (s+i\\omega_1)(s-i\\omega_1)\\]\n\\[ |F(i\\omega)| = |(i\\omega + i\\omega_1)||(i\\omega - i\\omega_1)| = |\\omega - (-\\omega_1)||\\omega - \\omega_1|\\]\n\\[ \\arg F(i\\omega) = \\arg i(\\omega + \\omega_1) + \\arg i(\\omega -\\omega_1) = 2\\arg i + 0 + \\arg(\\omega - \\omega_1) \\]\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth]{../figures/bode-w1-empty}\n\\end{center}\n\n\n\\section*{Notch filter}\n\\label{sec-4}\nSketch the Bode diagram (\\(Q=10\\)) of the second-order continuous-time notch filter\n\\[ F(s) = \\frac{s^2 + \\omega_1^2}{s^2 + \\frac{\\omega_1}{Q}s + \\omega_1^2} \\]\n\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth]{../figures/bode-notch-empty}\n\\end{center}\n% Emacs 24.5.1 (Org mode 8.2.10)\n\\end{document}", "meta": {"hexsha": "e6038812c9f11ce4b131bae6aeb4ac5f8e910e1f", "size": 3526, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sampling-and-aliasing/exercises/anti-aliasing-notch-filter-exercise.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "sampling-and-aliasing/exercises/anti-aliasing-notch-filter-exercise.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "sampling-and-aliasing/exercises/anti-aliasing-notch-filter-exercise.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 40.0681818182, "max_line_length": 312, "alphanum_fraction": 0.7240499149, "num_tokens": 1144, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419704455589, "lm_q2_score": 0.8031737869342623, "lm_q1q2_score": 0.556311874392369}}
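A numeric companion to the Bessel-filter exercise, for checking answers: the sketch below solves $|H(i\omega_N)| = 0.1$ for $\omega_0$ and reads off the equivalent pure delay at the cross-over frequency. The normalization $\omega_N = 1$ is an arbitrary choice for illustration.

\begin{verbatim}
# Solve |H(i wN)| = 0.1 for w0, then read off the phase and the equivalent
# pure-delay T = -arg H(i wc) / wc at the cross-over wc = 0.1 wN.
import numpy as np
from scipy.optimize import brentq

wN = 1.0  # normalize the Nyquist frequency

def H(w, w0):
    s = 1j * w / w0
    return 3.0 / (s**2 + 3.0 * s + 3.0)

w0 = brentq(lambda w0: abs(H(wN, w0)) - 0.1, 1e-3, 10.0)
wc = 0.1 * wN
T = -np.angle(H(wc, w0)) / wc
print(f"omega_0 = {w0:.4f} * omega_N")            # approx 0.19 * omega_N
print(f"T = {T:.3f} = {T / (np.pi / wN):.2f} h")  # delay in sampling periods
\end{verbatim}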
{"text": "\\input{../header_function}\r\n\r\n%---------- start document ---------- %\r\n \\section{algorithm -- basic number theoretic algorithms}\\linkedzero{algorithm}\r\n%\r\n  \\subsection{digital\\_method -- univariate polynomial evaluation}\\linkedone{algorithm}{digital\\_method}\r\n   \\func{digital\\_method}\r\n   {%\r\n     \\hiki{coefficients}{list},\\ %\r\n     \\hiki{val}{object},\\ %\r\n     \\hiki{add}{function},\\ %\r\n     \\hiki{mul}{function},\\ %\r\n     \\hiki{act}{function},\\ %\r\n     \\hiki{power}{function},\\ %\r\n     \\hiki{zero}{object},\\ %\r\n     \\hiki{one}{object}\r\n   }{%\r\n     \\out{object}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Evaluate a univariate polynomial corresponding to \\param{coefficients} at \\param{val}.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad  If the polynomial corresponding to \\param{coefficients} is of $R$-coefficients for some ring $R$, \r\n   then \\param{val} should be in an $R$-algebra $D$.\\\\\r\n   \\param{coefficients} should be a descending ordered list of tuples $(d,\\ c)$, \r\n   where $d$ is an integer which expresses the degree and $c$ is an element of $R$ which expresses the coefficient. \r\n   All operations 'add', 'mul', 'act', 'power', 'zero', 'one' should be explicitly given, where:\\\\\r\n      'add' means addition $(D \\times D \\to D)$,\r\n      'mul' multiplication $(D \\times D \\to D)$,\r\n      'act' action of $R$ $(R \\times D \\to D)$,\r\n      'power' powering $(D \\times \\mathbf{Z} \\to D)$,\r\n      'zero' the additive unit (an constant) in $D$ and\r\n      'one', the multiplicative unit (an constant) in $D$.\r\n%\r\n  \\subsection{digital\\_method\\_func -- function of univariate polynomial evaluation}\\linkedone{algorithm}{digital\\_method\\_func}\r\n   \\func{digital\\_method}\r\n   {%\r\n     \\hiki{add}{function},\\ %\r\n     \\hiki{mul}{function},\\ %\r\n     \\hiki{act}{function},\\ %\r\n     \\hiki{power}{function},\\ %\r\n     \\hiki{zero}{object},\\ %\r\n     \\hiki{one}{object}\r\n   }{%\r\n     \\out{function}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a function which evaluates polynomial corresponding to 'coefficients' at 'val' \r\nfrom an iterator 'coefficients' and an object 'val'.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad  All operations 'add', 'mul', 'act', 'power', 'zero', 'one' should be inputted \r\n   in a manner similar to \\linkingone{algorithm}{digital\\_method}.\\\\\r\n%\r\n  \\subsection{rl\\_binary\\_powering -- right-left powering}\\linkedone{algorithm}{rl\\_binary\\_powering}\r\n   \\func{rl\\_binary\\_powering}\r\n   {%\r\n     \\hiki{element}{object},\\ %\r\n     \\hiki{index}{integer},\\ %\r\n     \\hiki{mul}{function},\\ %\r\n     \\hikiopt{square}{function}{None},\\ %\r\n     \\hikiopt{one}{object}{None},\\ %\r\n   }{%\r\n     \\out{object}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return \\param{element} to the \\param{index} power by using right-left binary method.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad  \\param{index} should be a non-negative integer.\r\n   If \\param{square} is None, \\param{square} is defined by using \\param{mul}.\\\\\r\n%\r\n  \\subsection{lr\\_binary\\_powering -- left-right powering}\\linkedone{algorithm}{lr\\_binary\\_powering}\r\n   
\\func{lr\\_binary\\_powering}\r\n   {%\r\n     \\hiki{element}{object},\\ %\r\n     \\hiki{index}{integer},\\ %\r\n     \\hiki{mul}{function},\\ %\r\n     \\hikiopt{square}{function}{None},\\ %\r\n     \\hikiopt{one}{object}{None},\\ %\r\n   }{%\r\n     \\out{object}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return \\param{element} to the \\param{index} power by using left-right binary method.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad  \\param{index} should be a non-negative integer.\r\n   If \\param{square} is None, \\param{square} is defined by using \\param{mul}.\\\\\r\n%\r\n  \\subsection{window\\_powering -- window powering}\\linkedone{algorithm}{window\\_powering}\r\n   \\func{window\\_powering}\r\n   {%\r\n     \\hiki{element}{object},\\ %\r\n     \\hiki{index}{integer},\\ %\r\n     \\hiki{mul}{function},\\ %\r\n     \\hikiopt{square}{function}{None},\\ %\r\n     \\hikiopt{one}{object}{None},\\ %\r\n   }{%\r\n     \\out{object}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return \\param{element} to the \\param{index} power by using small-window method.\\\\\r\n   \\spacing\r\n   % added document\r\n   \\quad The window size is selected by average analytic optimization.\\\\\r\n   \\spacing\r\n   % input, output document\r\n   \\quad  \\param{index} should be a non-negative integer.\r\n   If \\param{square} is None, \\param{square} is defined by using \\param{mul}.\\\\\r\n%\r\n  \\subsection{powering\\_func -- function of powering}\\linkedone{algorithm}{powering\\_func}\r\n   \\func{powering\\_func}\r\n   {%\r\n     \\hiki{mul}{function},\\ %\r\n     \\hikiopt{square}{function}{None},\\ %\r\n     \\hikiopt{one}{object}{None},\\ %\r\n     \\hikiopt{type}{integer}{0}\r\n   }{%\r\n     \\out{function}%\r\n   }\\\\\r\n   \\spacing\r\n   % document of basic document\r\n   \\quad Return a function which computes 'element' to the 'index' power from an object 'element' and an integer 'index'.\\\\\r\n   \\spacing\r\n   % added document\r\n   %\\spacing\r\n   % input, output document\r\n   \\quad If \\param{square} is None, \\param{square} is defined by using \\param{mul}.\r\n   \\param{type} should be an integer which means one of the following:\\\\\r\n   0;\\ \\linkingone{algorithm}{rl\\_binary\\_powering}\\\\\r\n   1;\\ \\linkingone{algorithm}{lr\\_binary\\_powering}\\\\\r\n   2;\\ \\linkingone{algorithm}{window\\_powering}\\\\\r\n\\\\\r\n%\r\n\\begin{ex}\r\n>>> d_func = algorithm.digital_method_func(\r\n... lambda a,b:a+b, lambda a,b:a*b, lambda i,a:i*a, lambda a,i:a**i, \r\n... matrix.zeroMatrix(3,0), matrix.identityMatrix(3,1)\r\n... 
)\r\n>>> coefficients = [(2,1), (1,2), (0,1)] # X^2+2*X+I\r\n>>> A = matrix.SquareMatrix(3, [1,2,3]+[4,5,6]+[7,8,9])\r\n>>> d_func(coefficients, A) # A**2+2*A+I\r\n[33L, 40L, 48L]+[74L, 92L, 108L]+[116L, 142L, 169L]\r\n>>> p_func = algorithm.powering_func(lambda a,b:a*b, type=2)\r\n>>> p_func(A, 10) # A**10 by window method\r\n[132476037840L, 162775103256L, 193074168672L]+[300005963406L, 368621393481L,\r\n 437236823556L]+[467535888972L, 574467683706L, 681399478440L]\r\n\r\n\\end{ex}%Don't indent!(indent causes an error.)\r\n\\C\r\n\r\n%---------- end document ---------- %\r\n\r\n\\input{../footer}\r\n", "meta": {"hexsha": "80d8ec2d1dba5d40ed8e2e7cc59ec65377ed00e4", "size": 6263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/ja/algorithm.tex", "max_stars_repo_name": "turkeydonkey/nzmath3", "max_stars_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-26T19:22:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-26T19:22:17.000Z", "max_issues_repo_path": "manual/ja/algorithm.tex", "max_issues_repo_name": "turkeydonkey/nzmath3", "max_issues_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "manual/ja/algorithm.tex", "max_forks_repo_name": "turkeydonkey/nzmath3", "max_forks_repo_head_hexsha": "a48ae9efcf0d9ad1485c2e9863c948a7f1b20311", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.8411764706, "max_line_length": 129, "alphanum_fraction": 0.6250997924, "num_tokens": 1960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7341195385342972, "lm_q2_score": 0.7577943767446202, "lm_q1q2_score": 0.556311658159646}}
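For readers without NZMATH at hand, here are plain-Python sketches of the two main primitives documented above, specialized to ordinary integers (the NZMATH versions take the ring operations 'add', 'mul', etc. as arguments).

\begin{verbatim}
# Plain-Python sketches of the documented primitives (integers only).
def digital_method(coefficients, val):
    """Sparse Horner evaluation; coefficients is a descending list of
    (degree, coefficient) tuples."""
    result, prev = 0, None
    for d, c in coefficients:
        result = c if prev is None else result * val**(prev - d) + c
        prev = d
    return result * val**prev

def rl_binary_powering(element, index, mul, one=1):
    """element ** index by the right-left binary method."""
    result = one
    while index > 0:
        if index & 1:
            result = mul(result, element)
        element = mul(element, element)  # square
        index >>= 1
    return result

print(digital_method([(2, 1), (1, 2), (0, 1)], 5))    # 5**2 + 2*5 + 1 = 36
print(rl_binary_powering(3, 10, lambda a, b: a * b))  # 3**10 = 59049
\end{verbatim}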
{"text": "\\documentclass{article}\n\n\\usepackage[numbers, compress]{natbib}\n\n\\usepackage[utf8]{inputenc} % allow utf-8 input\n\\usepackage[T1]{fontenc}    % use 8-bit T1 fonts\n\\usepackage{hyperref}       % hyperlinks\n\\usepackage{url}            % simple URL typesetting\n\\usepackage{booktabs}       % professional-quality tables\n\\usepackage{amsfonts}       % blackboard math symbols\n\\usepackage{nicefrac}       % compact symbols for 1/2, etc.\n\\usepackage{microtype}      % microtypography\n\n\\usepackage{amsthm,amsmath,amssymb}\n\\usepackage{macros}\n\\usepackage{subcaption}\n\\usepackage[textfont=small, labelfont=small]{caption}\n\\usepackage{graphicx}\n\\DeclareGraphicsExtensions{.pdf,.png,.jpg,.eps}\n\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\n\\usepackage[color=yellow]{todonotes}\n\\usepackage{booktabs}\n\\usepackage[inline]{enumitem}\n\\usepackage{verbatim}\n\n\\usepackage[margin=1in]{geometry}\n\n\\usepackage{setspace}\n\n\n\\title{Information form filtering and smoothing for Gaussian linear dynamical systems}\n\n\\author{Scott W. Linderman \n  \\and\n  Matthew J. Johnson \n}\n\n\n\\begin{document}\n\n\\singlespacing\n\\maketitle\n\\onehalfspacing\n\nThe \\emph{information form} of the Gaussian density of~$x \\in \\reals^n$ is defined as,\n\\begin{align}\n  p(x \\given J, h) \n  &=  \\exp \\left \\{ -\\frac{1}{2} x^\\trans J x + h^\\trans x - \\log Z \\right \\},\n\\end{align}\nwhere\n\\begin{align}\n  \\log Z = \\frac{1}{2} h^\\trans J^{-1} h - \\frac{1}{2}\\log |J| +\\frac{n}{2} \\log 2\\pi.\n\\end{align}\nThe standard formulation is recovered by the transformations,  $\\Sigma = J^{-1}$, and\n$\\mu = J^{-1} h$. The advantage of working in the information form is that \nit corresponds to the natural parameterization of the Gaussian distribution, and\nmean field variational inference is considerably easier with this form.\n\nIn order to perform Kalman filtering and smoothing, we must be able to perform\ntwo operations: \\emph{conditioning} and \\emph{marginalization}. 
\n\n\\paragraph{Conditioning}\nIf,\n\\begin{align}\n  p(x) &= \\distNormal(x \\given J, h) \\\\\n  p(y \\given x) &\\propto \\distNormal(x \\given J_{\\sf obs}, h_{\\sf obs}) \n\\end{align}\nthen,\n\\begin{align}\n  p(x \\given y) &= \\distNormal(x \\given J + J_{\\sf obs}, h + h_{\\sf obs}).\n\\end{align}\n\n\\paragraph{Marginalization}\nIf,\n\\begin{align}\n  \\begin{bmatrix} x_1 \\\\ x_2  \\end{bmatrix}\n  &\\sim\n  \\distNormal \\left(\n  \\begin{bmatrix} \n    J_{11}        & J_{12} \\\\\n    J_{12}^\\trans & J_{22} \n  \\end{bmatrix},\n  \\begin{bmatrix} h_1 \\\\ h_2 \\end{bmatrix}\n  \\right),\n\\end{align}\nthen,\n\\begin{align}\n  x_2 &\\sim \\distNormal(\n        J_{22} - J_{21}J_{11}^{-1} J_{21}^\\trans, \\;\n        h_2 - J_{21} J_{11}^{-1} h_1)\n\\end{align}\n\n\\begin{proof}\n  We use the following integral identity,\n\\begin{align}\n  \\int \\exp \\left \\{ - \\frac{1}{2} x^\\trans J x + h^\\trans x\\right\\} \\mathrm{d} x\n  &= \\exp \\left\\{ \\frac{1}{2} h^\\trans J^{-1} h - \\frac{1}{2} \\log |J| + \\frac{n}{2} \\log 2 \\pi \\right\\}.\n\\end{align}\nFor~$x = [x_1, x_2]^\\trans$, we marginalize out~$x_1$ via,\n\\begin{align}\n  \\int \\exp \\left \\{ - \\frac{1}{2}\n  \\begin{bmatrix} x_1^\\trans & x_2^\\trans \\end{bmatrix} \n  \\begin{bmatrix} J_{11} & J_{12} \\\\ J_{21} & J_{22} \\end{bmatrix}\n  \\begin{bmatrix} x_1 \\\\  x_2 \\end{bmatrix}\n  + \\begin{bmatrix} h_1^\\trans & h_2^\\trans \\end{bmatrix} \n  \\begin{bmatrix} x_1 \\\\  x_2 \\end{bmatrix}\n  - \\log Z\n  \\right \\}\n  \\mathrm{d} x_1.\n\\end{align}\nRearranging terms, we have,\n\\begin{align}\n  &\\exp \\left \\{ -\\frac{1}{2} x_2^\\trans J_{22} x_2 + h_2^\\trans x_2 - \\log Z \\right \\}\n  \\int \\exp \\left \\{- \\frac{1}{2} x_1^\\trans J_{11} x_1 + (h_1 - J_{12}x_2)^\\trans x_1 \\right \\} \\mathrm{d}x_1 \\\\\n  &= \\exp \\left \\{ -\\frac{1}{2} x_2^\\trans J_{22} x_2 + h_2^\\trans x_2 - \\log Z \\right \\}\n  \\exp \\left \\{ \\frac{1}{2} (h_1 - J_{12}x_2)^\\trans J_{11}^{-1} (h_1 - J_{12} x_2) - \\frac{1}{2} \\log |J_{11}| + \\frac{n}{2} \\log 2 \\pi \\right \\} \\\\\n  &= \\exp \\left \\{ -\\frac{1}{2} x_2^\\trans (J_{22} - J_{21}J_{11}^{-1} J_{12}) x_2\n  + (h_2 - J_{21} J_{11}^{-1} h_1)^\\trans x_2 - \\log Z \\right \\} \\nonumber \\\\\n  &\\qquad \\qquad \\times\n  \\exp \\left \\{ \\frac{1}{2} h_1^\\trans J_{11}^{-1} h_1 - \\frac{1}{2} \\log |J_{11}| + \\frac{n}{2} \\log 2 \\pi \\right \\}.\n\\end{align}\nWe recognize this as a Gaussian potential on~$x_2$ of the form,\n\\begin{align}\n  p(x_2) &= \\exp \\left \\{-\\frac{1}{2} x_2^\\trans \\widetilde{J}_2 x_2 + \\widetilde{h}_2^\\trans x_2 - \\log \\widetilde{Z}_2 \\right \\} \\\\\n  \\widetilde{J}_2 &= J_{22} - J_{21}J_{11}^{-1} J_{12} \\\\\n  \\widetilde{h}_2 &= h_2 - J_{21} J_{11}^{-1} h_1 \\\\\n  \\log \\widetilde{Z}_2 &= \\log Z -\\frac{1}{2} h_1^\\trans J_{11}^{-1} h_1 + \\frac{1}{2} \\log |J_{11}| - \\frac{n}{2} \\log 2 \\pi.\n\\end{align}\n\\end{proof}\n\n\\section*{Filtering, Sampling, and Smoothing}\nBy interleaving these two steps we can filter, sample, and smooth the latent states\nin a linear dynamical system. 
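Before specializing to the dynamical system, the two primitives can be sketched in a few lines of NumPy. This is an illustration only; the random four-dimensional example at the end checks the marginalization formulas against the moment form.

\begin{verbatim}
# NumPy sketch of information-form conditioning and marginalization.
import numpy as np

def condition(J, h, J_obs, h_obs):
    return J + J_obs, h + h_obs          # natural parameters just add

def marginalize(J11, J12, J22, h1, h2):
    """Integrate out x1 (Schur complement in J11)."""
    J21_J11inv = np.linalg.solve(J11.T, J12).T   # = J12^T J11^{-1}
    return J22 - J21_J11inv @ J12, h2 - J21_J11inv @ h1

# Check against the moment form on a random Gaussian with x1, x2 in R^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); J = A @ A.T + 4 * np.eye(4); h = rng.normal(size=4)
Jm, hm = marginalize(J[:2, :2], J[:2, 2:], J[2:, 2:], h[:2], h[2:])
Sigma, mu = np.linalg.inv(J), np.linalg.solve(J, h)
assert np.allclose(np.linalg.inv(Jm), Sigma[2:, 2:])  # marginal covariance
assert np.allclose(np.linalg.solve(Jm, hm), mu[2:])   # marginal mean
\end{verbatim}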
Take the model,\n\\begin{align}\n  x_1 &\\sim \\distNormal(\\mu_1, Q_1) \\\\\n  x_{t+1} &\\sim \\distNormal(A_t x_t + B_t u_t, Q_t) \\\\\n  y_t &\\sim \\distNormal(C_t x_t + D_t u_t, R_t).\n\\end{align}\nIn information form, the initial distribution is,\n\\begin{align}\n  % init\n  x_1 &\\sim \\distNormal(J=Q_1^{-1}, h = Q_1^{-1} \\mu_1).\n\\end{align}\nThe dynamics are given by,\n\\begin{align}\n  % dynamics\n  p(x_{t+1} \\given x_t) & \\propto \n  \\distNormal \\left(\n  \\begin{bmatrix} x_t \\\\ x_{t+1}  \\end{bmatrix}\n  \\, \\bigg| \\,\n  \\begin{bmatrix} \n    J_{11}        & J_{12} \\\\\n    J_{12}^\\trans & J_{22} \n  \\end{bmatrix},\n  \\begin{bmatrix} h_{1} \\\\ h_2 \\end{bmatrix}\n  \\right), \n\\end{align}\nwith,\n\\begin{align}\n  J_{11} &= A_t^\\trans Q_t^{-1} A_t, \\quad \n  J_{12} = -A_t^\\trans Q_t^{-1} \\quad\n  J_{22} = Q_t^{-1} \\quad\n  h_1 = -A_t^\\trans Q_t^{-1} B_t u_t \\quad\n  h_2 = Q_t^{-1} B_t u_t.\n\\end{align}\nFinally, the observations are given by,\n\\begin{align}\n  p(y_t \\given x_t) \n  &\\propto \\distNormal(x_t \\given J_{\\sf obs}, h_{\\sf obs}) \n\\end{align}\nwith\n\\begin{align}\n  J_{\\sf obs} = C_t^\\trans R_t^{-1} C_t \\quad\n  h_{\\sf obs} = C_t^\\trans R_t^{-1} (y_t - D_t u_t)\n\\end{align}\n\n\\subsection*{Filtering}\nWe seek the conditional distribution~$p(x_t \\given y_{1:t})$, which \nwill be Gaussian. We begin with the initial distribution, \n\\begin{align}\n  p(x_1) &= \\distNormal(x_1 \\given J_{1 | 0}, h_{1 | 0}).\n\\end{align}\nAssume, inductively, that~$x_t \\given y_{1:t-1} \\sim \\distNormal(J_{t|t-1}, h_{t|t-1})$. Conditioning on the~$t$-th observation yields,\n\\begin{align}\n  p(x_t \\given y_{1:t}) &= \\distNormal(x_t \\given J_{t|t}, h_{t|t}), \\\\\n  J_{t|t} &= J_{t|t-1} + J_{\\sf obs} \\\\\n  h_{t|t} &= h_{t|t-1} + h_{\\sf obs}.\n\\end{align}\nThen, we predict the next latent state by writing the joint distribution of~$x_t$ and~$x_{t+1}$ and marginalizing out~$x_t$.\n\\begin{align}\n  p(x_{t+1} \\given y_{1:t}) &= \\int p(x_t \\given y_{1:t}) \\, p(x_{t+1} \\given x_t) \\, \\mathrm{d} x_t \\\\\n  &= \\distNormal(x_{t+1} \\given J_{t+1|t}, h_{t+1|t}) \\\\\n  J_{t+1|t} &= J_{22} - J_{21} (J_{t|t} + J_{11})^{-1}J_{21}^\\trans \\\\\n  h_{t+1|t} &= h_2 - J_{21} (J_{t|t} + J_{11})^{-1} (h_{t|t} + h_1).\n\\end{align}\nThis completes one iteration and provides the input to the next. To start the recursion, we initialize,\n\\begin{align}\n  J_{1|0} = \\Sigma_{\\sf init}^{-1}, \\quad\n  h_{1|0} = \\Sigma_{\\sf init}^{-1} \\, \\mu_{\\sf init}.\n\\end{align}\n\n\\subsection*{Marginal Likelihood}\nThis filtering algorithm corresponds to message passing\nin a chain-structured Gaussian graphical model. 
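Before turning to the marginal likelihood, the predict--condition recursion just derived can be written compactly. The sketch below assumes a time-invariant model with no inputs ($B_t = D_t = 0$, so $h_1 = h_2 = 0$); the scalar example at the end is illustrative.

\begin{verbatim}
# Information-form Kalman filter sketch (time-invariant A, Q, C, R; no inputs).
import numpy as np

def info_filter(ys, A, Q, C, R, mu_init, Sigma_init):
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    J11, J12, J22 = A.T @ Qi @ A, -A.T @ Qi, Qi
    J_pred = np.linalg.inv(Sigma_init)
    h_pred = np.linalg.solve(Sigma_init, mu_init)
    out = []
    for y in ys:
        J = J_pred + C.T @ Ri @ C        # condition on y_t
        h = h_pred + C.T @ Ri @ y
        out.append((J, h))
        M = np.linalg.inv(J + J11)       # predict x_{t+1} (J21 = J12^T)
        J_pred = J22 - J12.T @ M @ J12
        h_pred = -J12.T @ M @ h
    return out

# Toy scalar model: x_{t+1} = 0.9 x_t + noise, y_t = x_t + noise.
A, Q = np.array([[0.9]]), np.array([[0.1]])
C, R = np.array([[1.0]]), np.array([[0.5]])
ys = [np.array([1.0]), np.array([0.8]), np.array([1.1])]
for J, h in info_filter(ys, A, Q, C, R, np.zeros(1), np.eye(1)):
    print(np.linalg.solve(J, h))         # filtered means mu_{t|t}
\end{verbatim}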
To compute\nthe marginal likelihood~$p(y_{1:T})$, we observe that it\nis the normalization constant of the graphical model,\n\\begin{align}\n  p(x_{1:T}) &=\n  \\frac{1}{Z} \\prod_{t=1}^T \\psi(x_t) \\prod_{t=1}^{T-1} \\psi(x_t, x_{t+1}), & & \\\\\n  \\psi(x_1) &= p(x_1) \\, p(y_1 \\given x_1)  & & \\\\\n  \\psi(x_t) &= p(y_t \\given x_t)  & \\text{for } t&=2 \\ldots T \\\\\n  \\psi(x_t, x_{t+1}) &= p(x_{t+1} \\given x_t)  & \\text{for } t&=1 \\ldots T-1.\n\\end{align}\nTo compute the normalization constant~$Z$, we recursively eliminate nodes\nvia message passing.\n\n\\paragraph{Base case.} Let us work backward from the final step in which\nwe are left with a graph with a single, unnormalized Gaussian potential.\n\\begin{align}\n  p(x_T) &= \\frac{1}{Z} \\psi(x_T)\n  = \\frac{1}{Z} \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{T} x_T + h_{T}^\\trans x_T - \\log Z_{T} \\right \\}\n\\end{align}\nTo compute the normalizing constant~$Z$, we integrate over~$x_T$ and use\nthe normalizing constant for Gaussian distributions:\n\\begin{align}\n  Z &= \\int \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{T} x_T + h_{T}^\\trans x_T - \\log Z_{T} \\right \\} \\mathrm{d} x_T \\\\\n  &= \\exp \\left \\{ -\\log Z_{T} \\right \\}\n  \\int \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{T} x_T + h_{T}^\\trans x_T \\right \\} \\mathrm{d} x_T \\\\\n  &= \\exp \\left \\{ -\\log Z_{T} +\\frac{1}{2} h_{T}^\\trans J_{T}^{-1} h_{T} - \\frac{1}{2} \\log |J_{T}| +\\frac{n}{2} \\log 2 \\pi \\right \\}\n\\end{align}\n\n\\paragraph{Condition Step.} In the second to last step, we have two potentials,\none from the dynamics induced by the preceding step and one from the observation:\n\\begin{align}\n  p(x_T) &= \\frac{1}{Z} \\psi_{\\sf dyn}(x_T) \\, \\psi_{\\sf obs}(x_T) \\\\\n  &= \\frac{1}{Z} \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{T|T-1} x_T + h_{T|T-1}^\\trans x_T - \\log Z_{T|T-1} \\right \\}\n  \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{\\sf obs} x_T + h_{\\sf obs}^\\trans x_T - \\log Z_{\\sf obs} \\right \\}\n\\end{align}\nWe reduce this to the base case by simply summing the natural parameters and\nlog normalizers,\n\\begin{align}\n  J_{T} &= J_{T|T-1} + J_{\\sf obs} \\\\\n  h_{T} &= h_{T|T-1} + h_{\\sf obs} \\\\\n  \\log Z_{T} &= \\log Z_{T|T-1} + \\log Z_{\\sf obs}.\n\\end{align}\n\n\\paragraph{Predict Step.} Now consider a model with two latent states. 
\n\\begin{align}\n  p(x_{T-1}, x_T) &= \\frac{1}{Z} \\psi(x_{T-1}) \\, \\psi_{\\sf dyn}(x_{T-1}, x_T) \\, \\psi_{\\sf obs}(x_T)\n\\end{align}\nWe will eliminate the variable~$x_{T-1}$ by marginalizing, or integrating it out.\n\\begin{align}\n  p(x_T) &= \\frac{1}{Z} \\psi_{\\sf obs}(x_T) \\int \\psi(x_{T-1}) \\, \\psi_{\\sf dyn}(x_{T-1}, x_T) \\mathrm{d}x_{T-1}\n\\end{align}\nThe integrand is of the form,\n\\begin{align}\n  &\\psi(x_{T-1}) \\, \\psi_{\\sf dyn}(x_{T-1}, x_T)\n  \\\\\n  &= \\exp \\left \\{ -\\frac{1}{2} x_{T-1}^\\trans J_{T-1} x_{T-1} + h_{T-1}^\\trans x_{T-1} - \\log Z_{T-1} \\right \\} \\\\\n  &\\qquad \\times \n  \\exp \\left \\{ - \\frac{1}{2}\n  \\begin{bmatrix} x_{T-1}^\\trans & x_T^\\trans \\end{bmatrix} \n  \\begin{bmatrix} J_{11} & J_{12} \\\\ J_{21} & J_{22} \\end{bmatrix}\n  \\begin{bmatrix} x_{T-1} \\\\  x_T \\end{bmatrix}\n  + \\begin{bmatrix} h_1^\\trans & h_2^\\trans \\end{bmatrix} \n  \\begin{bmatrix} x_{T-1} \\\\  x_T \\end{bmatrix}\n  - \\log Z_{\\sf dyn}\n  \\right \\}\n  \\\\\n  &= \n  \\exp \\left \\{ - \\frac{1}{2}\n  \\begin{bmatrix} x_{T-1}^\\trans & x_T^\\trans \\end{bmatrix} \n  \\begin{bmatrix} J_{11} + J_{T-1} & J_{12} \\\\ J_{21} & J_{22} \\end{bmatrix}\n  \\begin{bmatrix} x_{T-1} \\\\  x_T \\end{bmatrix}\n  + \\begin{bmatrix} (h_1 + h_{T-1})^\\trans & h_2^\\trans \\end{bmatrix} \n  \\begin{bmatrix} x_{T-1} \\\\  x_T \\end{bmatrix}\n  - (\\log Z_{T-1} + \\log Z_{\\sf dyn})\n  \\right \\}\n\\end{align}\nAppealing to the marginalization propositions above, this implies,\n\\begin{align}\n  \\int \\psi(x_{T-1}) \\, \\psi_{\\sf dyn}(x_{T-1}, x_T) \\, \\mathrm{d} x_{T-1}\n  &= \\exp \\left \\{-\\frac{1}{2} x_T^\\trans J_{T|T-1} x_T + h_{T|T-1}^\\trans x_T\n  - \\log Z_{T|T-1} \\right \\},\n\\end{align}\nwhere\n\\begin{multline}\n  \\log Z_{T|T-1} = \\\\\n  \\log Z_{T-1} + \\log Z_{\\sf dyn} \n  -\\frac{1}{2} (h_1 + h_{T-1})^\\trans (J_{11} + J_{T-1})^{-1} (h_1 + h_{T-1}) \n  + \\frac{1}{2} \\log |J_{11} + J_{T-1}| - \\frac{n}{2} \\log 2 \\pi.\n\\end{multline}\nThus, the log normalizer passed into the condition step is an accumulation of\nlog normalizers from previous time steps ($\\log Z_{T-1}$), the log normalizer\nof the dynamics potential ($\\log Z_{\\sf dyn}$), and a term that comes from\nmarginalizing out the local variable~$x_{T-1}$. Once we have computed\n$\\log Z_{T|T-1}$, it is passed into the condition step and then into the final\nintegration to compute the marginal likelihood.\n\nThis process of predicting and conditioning is recursively applied, marginalizing\nout variables one at a time, starting with~$x_1$ and ending with~$x_T$. 
At the\nend of this procedure, we are left with the marginal likelihood of the observations.\n\n\\subsection*{Standard Form Marginal Likelihood}\nThe marginal likelihood of the observed data is given by,\n\\begin{align}\n  \\log p(y_{1:T} \\given u_{1:T})\n  &= \\sum_{t=1}^T \\log p(y_t \\given y_{1:t-1}, u_{1:t}) \\\\\n  &= \\sum_{t=1}^T \\log \\distNormal(y_t \\given C \\mu_{t | t-1} + D u_t,  S_t),\n\\end{align}\nwhere\n\\begin{align}\n  S_t &= C \\Sigma_{t | t-1} C^\\trans + R_t \\\\\n  \\mu_{t | t-1} &= J_{t | t-1}^{-1} h_{t|t-1} \\\\\n  \\Sigma_{t | t-1} &= J_{t|t-1}^{-1}.\n\\end{align}\nExpanding the above, we have,\n\\begin{align}\n  \\log p(y_{1:T} \\given u_{1:T})\n  &= \\sum_{t=1}^T \\left[ -\\frac{N}{2} \\log 2 \\pi - \\frac{1}{2} \\log \\big| S_t  \\big|  \n     - \\frac{1}{2} (y_t - C \\mu_{t | t-1} - D u_t)^\\trans S_t^{-1}\n     (y_t - C \\mu_{t | t-1} - D u_t) \\right]\n\\end{align}\n\n\\subsection*{Backward Sampling}\nHaving computed~$J_{t|t}$ and~$h_{t|t}$, we then proceed backward in time to draw a joint sample of the latent states. \nGiven~$J_{t|t}$,~$h_{t|t}$, and~$x_{t+1}$, we have,\n\\begin{align}\n  p(x_{t} \\given y_{1:t}, x_{t+1}) &\\propto p(x_t \\given y_{1:t}) \\, p(x_{t+1} \\given x_t) \\\\\n  &\\propto \\distNormal(x_t \\given J_{t|t}, h_{t|t})  \\;\n    \\distNormal(x_t \\given J_{11}, \\, h_1 - J_{12} x_{t+1} ) \\\\\n  &\\propto     \\distNormal(x_t \\given J_{t|t} + J_{11}, \\; h_{t|t} + h_1 - J_{12} x_{t+1} )\n\\end{align}\nWe sample~$x_t$ from this conditional, then use it to sample~$x_{t-1}$, and repeat until we reach~$x_1$.\n\n\\subsection*{Rauch-Tung-Striebel Smoothing}\nNext we seek the conditional distribution given all the data,~$p(x_t \\given y_{1:T})$. \nThis will again be Gaussian, and we will call its parameters~$J_{t|T}$ and~$h_{t|T}$. \nAssume, inductively, that we have computed~$J_{t+1|T}$ and~$h_{t+1|T}$. 
We show how to \ncompute the parameters for time~$t$.\n\nFrom the Markov properties of the model and the conditional distribution derived above, we have,\n\\begin{align}\n  p(x_t \\given x_{t+1}, y_{1:T}) \n  &= \\distNormal(x_t \\given J_{t|t} + J_{11}, \\; h_{t|t} + h_1 - J_{12} x_{t+1}).\n\\end{align}\nExpanding, taking care to note that~$x_{t+1}$ appears in the normalizing constant, yields,\n\\begin{multline}\n  p(x_t \\given x_{t+1}, y_{1:T}) \n  = \\exp \\bigg \\{-\\frac{1}{2} x_t^\\trans (J_{t|t} + J_{11}) x_t + (h_{t|t} + h_1)^\\trans x_t - x_{t+1}^\\trans J_{12}^\\trans x_t \n  \\\\ \n-\\frac{1}{2} x_{t+1}^\\trans J_{12}^\\trans (J_{t|t} + J_{11})^{-1} J_{12} x_{t+1} \n+(h_{t|t} + h_1)^\\trans (J_{t|t} + J_{11})^{-1} J_{12} x_{t+1} \\\\\n- \\frac{1}{2} (h_{t|t} + h_1)^\\trans (J_{t|t} + J_{11})^{-1} (h_{t|t} + h_1) \n    \\bigg \\}\n\\end{multline}\n\nNow consider the joint distribution of~$x_t$ and~$x_{t+1}$ given all the data,\n\\begin{align}\n  p(x_t, x_{t+1} \\given y_{1:T}) \n  &= p(x_t \\given x_{t+1}, y_{1:T}) p(x_{t+1} \\given y_{1:T}) \\\\\n  &\\propto \\distNormal \\left(\n  \\begin{bmatrix} x_t \\\\ x_{t+1}  \\end{bmatrix}\n  \\, \\bigg| \\,\n  \\begin{bmatrix} \n    \\widetilde{J}_{11}        & \\widetilde{J}_{12} \\\\\n    \\widetilde{J}_{12}^\\trans & \\widetilde{J}_{22} \n  \\end{bmatrix},\n  \\begin{bmatrix} \n    \\widetilde{h}_1 \\\\ \n    \\widetilde{h}_2 \n  \\end{bmatrix}\n  \\right), \n\\end{align}\nwith,\n\\begin{align}\n  \\widetilde{J}_{11} &= J_{t|t} + J_{11} \\\\ \n  \\widetilde{J}_{12} &= J_{12} \\\\\n  \\widetilde{J}_{22} &= J_{t+1|T} + J_{12}^\\trans (J_{t|t} + J_{11})^{-1} J_{12} \\\\\n  \\widetilde{h}_1 &= h_{t|t} + h_1 \\\\\n  \\widetilde{h}_2 &= h_{t+1|T} + J_{12}^\\trans (J_{t|t} + J_{11})^{-1} (h_{t|t} + h_1).\n\\end{align}\nRecall that,\n\\begin{align}\n  J_{t+1|t} &= J_{22} - J_{21} (J_{t|t} + J_{11})^{-1}J_{21}^\\trans \\\\\n  h_{t+1|t} &= h_2 - J_{21} (J_{t|t} + J_{11})^{-1} (h_{t|t} + h_1).\n\\end{align}\nThus, \n\\begin{align}\n  \\widetilde{J}_{22} &= J_{t+1|T} - J_{t+1|t} + J_{22} \\\\\n  \\widetilde{h}_{2} &= h_{t+1|T} - h_{t+1|t} + h_2.\n\\end{align}\n\n\nFinally, marginalize,\n\\begin{align}\n  p(x_t \\given y_{1:T}) \n  &= \\distNormal(x_t \\given \n    \\widetilde{J}_{11} - \\widetilde{J}_{12} \\widetilde{J}_{22}^{-1} \\widetilde{J}_{12}^\\trans, \\;\n    \\widetilde{h}_{1} - \\widetilde{J}_{12} \\widetilde{J}_{22}^{-1} \\widetilde{h}_2) \\\\\n  &= \\distNormal(x_t \\given J_{t|T}, h_{t|T}).\n\\end{align}\nSubstituting the simplified forms above yields,\n\\begin{align}\n  J_{t|T} &= J_{t|t} + J_{11} - J_{12} (J_{t+1|T} - J_{t+1|t} + J_{22})^{-1} J_{12}^\\trans \\\\\n  h_{t|T} &= h_{t|t} + h_1 - J_{12} (J_{t+1|T} - J_{t+1|t} + J_{22})^{-1} (h_{t+1|T} - h_{t+1|t} +h_2).\n\\end{align}\n\n\\appendix\n\n\\subsection*{Working out marginal likelihood in a simple example}\n\\begin{align}\n  p(x) &= \\frac{1}{Z} \\distNormal(x \\given 0, 1) \\, \\distNormal(1 \\given x, \\sigma^2) \\\\\n  &= \\frac{1}{Z}\n  \\exp \\left \\{ -\\frac{1}{2} x^2 -\\frac{1}{2} \\log 2 \\pi \\right \\}\n  \\exp \\left \\{ -\\frac{1}{2} \\frac{(x-1)^2}{\\sigma^2}  -\\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log \\sigma^2 \\right \\} \\\\\n  &= \\frac{1}{Z} \\exp \\left \\{ -\\frac{1}{2} (x^2 + \\frac{x^2}{\\sigma^2}) -\\frac{1}{2} \\frac{-2x}{\\sigma^2} -\\frac{1}{2}\\frac{1}{\\sigma^2}  - \\log 2 \\pi  -\\frac{1}{2} \\log \\sigma^2 \\right \\} \\\\\n  &= \\frac{1}{Z} \\exp \\left \\{ -\\frac{1}{2} (1 + \\frac{1}{\\sigma^2}) x^2 + \\frac{x}{\\sigma^2} -\\frac{1}{2\\sigma^2} 
 - \\log 2 \\pi  -\\frac{1}{2} \\log \\sigma^2 \\right \\}\n\\end{align}\nThis implies that the normalization constant is,\n\\begin{align}\n  Z &= \\exp \\left \\{  - \\log 2 \\pi -\\frac{1}{2\\sigma^2} + \\frac{1}{2} (\\frac{1}{\\sigma^2}(1+\\frac{1}{\\sigma^2})^{-1} \\frac{1}{\\sigma^2}) - \\frac{1}{2} \\log |1+\\frac{1}{\\sigma^2}| + \\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log \\sigma^2\\right \\} \\\\\n  &= \\exp \\left \\{ -\\frac{1}{2} \\log 2 \\pi - \\frac{1}{2\\sigma^2} + \\frac{1}{2\\sigma^2} \\frac{1}{1+\\sigma^2} - \\frac{1}{2} \\log(1+\\sigma^2) + \\frac{1}{2}\\log \\sigma^2 -\\frac{1}{2} \\log \\sigma^2\\right \\}\\\\\n  &= \\exp \\left \\{ -\\frac{1}{2} \\log 2 \\pi - \\frac{1}{2}  \\frac{1}{1+\\sigma^2} - \\frac{1}{2} \\log(1+\\sigma^2) \\right \\}\n\\end{align}\nFrom the standard marginal likelihood, we know that this is like,\n\\begin{align}\n  \\log p(y=1) &= \\log \\distNormal(1 \\given 0, 1 + \\sigma^2) \\\\\n  &= -\\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log (1+\\sigma^2) - \\frac{1}{2} \\frac{1^2}{1+\\sigma^2} \\\\\n%  &= -\\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log 2 - \\frac{1}{4}\n\\end{align}\nThankfully, all checks out!\n\n\\subsection*{Working out marginal likelihood in an input example}\n\\begin{align}\n  p(x) &= \\frac{1}{Z} \\distNormal(x \\given 0, 1) \\, \\distNormal(1 \\given x + d, 1) \\\\\n  &= \\frac{1}{Z}\n  \\exp \\left \\{ -\\frac{1}{2} x^2 -\\frac{1}{2} \\log 2 \\pi \\right \\}\n  \\exp \\left \\{ -\\frac{1}{2} (x+d-1)^2  -\\frac{1}{2} \\log 2 \\pi \\right \\} \\\\\n  &= \\frac{1}{Z} \\exp \\left \\{ -\\frac{1}{2} (x^2 + x^2) -\\frac{1}{2} 2x(d-1) -\\frac{1}{2}(d-1)^2  - \\log 2 \\pi  \\right \\} \\\\\n  &= \\frac{1}{Z} \\exp \\left \\{ -\\frac{1}{2} 2x^2 + x(1-d) -\\frac{1}{2}(d-1)^2  - \\log 2 \\pi  \\right \\}\n\\end{align}\nThis implies that the normalization constant is,\n\\begin{align}\n  Z &= \\exp \\left \\{  - \\log 2 \\pi -\\frac{1}{2}(d-1)^2  + \\frac{1}{2} \\frac{(1-d)^2}{2} - \\frac{1}{2} \\log |2| + \\frac{1}{2} \\log 2 \\pi \\right \\} \\\\\n    &= \\exp \\left \\{  - \\frac{1}{2}\\log 2 \\pi -\\frac{1}{4}(d-1)^2   - \\frac{1}{2} \\log |2| \\right \\} \n\\end{align}\nFrom the standard marginal likelihood, we know that this is like,\n\\begin{align}\n  \\log p(y=1) &= \\log \\distNormal(1 \\given d, 2) \\\\\n  &= -\\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log 2 - \\frac{1}{2} \\frac{(1-d)^2}{2} \\\\\n%  &= -\\frac{1}{2} \\log 2 \\pi -\\frac{1}{2} \\log 2 - \\frac{1}{4}\n\\end{align}\nThankfully, all checks out!\n\n\n\\end{document}\n", "meta": {"hexsha": "30566c03d242f2681c28d9c6c458a56f6eb814dd", "size": 19289, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/info_form.tex", "max_stars_repo_name": "nitinshyamk/pylds", "max_stars_repo_head_hexsha": "1b0b866181203890d7b8eebe1069a746574d1531", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 68, "max_stars_repo_stars_event_min_datetime": "2015-02-25T07:45:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-18T07:37:12.000Z", "max_issues_repo_path": "notes/info_form.tex", "max_issues_repo_name": "nitinshyamk/pylds", "max_issues_repo_head_hexsha": "1b0b866181203890d7b8eebe1069a746574d1531", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2015-03-17T03:13:56.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-18T03:02:47.000Z", "max_forks_repo_path": "notes/info_form.tex", "max_forks_repo_name": "nitinshyamk/pylds", "max_forks_repo_head_hexsha": "1b0b866181203890d7b8eebe1069a746574d1531", 
"max_forks_repo_licenses": ["MIT"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2015-03-16T23:31:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-05T03:20:38.000Z", "avg_line_length": 41.7510822511, "max_line_length": 242, "alphanum_fraction": 0.6024158847, "num_tokens": 8205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7341195385342971, "lm_q1q2_score": 0.556311654144027}}
{"text": "\\documentclass{article}\n\\usepackage[top=2cm,bottom=2cm,left=2cm,right=2cm]{geometry}\n\\usepackage{amsmath, amssymb, amsthm}\n\\usepackage{tikz}\n\\usepackage{hyperref}\n\n\\usepackage{datetime}\n\\newdateformat{petsa}{\\the\\day\\ \\shortmonthname[\\the\\month] \\the\\year}\n\n\\theoremstyle{plain}\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{conjecture}[theorem]{Conjecture}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{exercise}{Exercise}[section]\n\n\\title{Fractal Tangency}\n\\author{poypoyan}\n\\date{\\petsa\\today}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nFractals are fascinating objects, but are still considered as pathological because most of the time, analytical techniques do not work on those. For example, the well-known ``inscribed square problem\" is still unsolved for most fractal-like curves \\cite{square}. In this short note we attempt to add something else to say to fractals by carefully distinguishing them from `smooth' curves. For now, this study is restricted to the continuum, but we provide some ideas for generalization to other sets.\n\\end{abstract}\n\n\\section{Intuition}\nFor a start, paths to a point are enough to determine fractality. For example, we can say that a surface is `fractal' if a path passing through a point inside that surface is fractal. When restricted to continuous curves in the continuum ($\\mathbb{R}^n$), fractals can be thought of as exactly the nowhere differentiable functions. We somewhat follow that line of thought, but please remember a definition of the derivative: that it is the slope of the \\textit{tangent line}. This means that differentiable curves will eventually look like a straight line when `zoomed in' to a point.\n\nOn the other hand, fractals are often viewed as `self-similar' (although it should not always be \\cite{3b1b}!). Now observe that a straight line looks like itself when zoomed in. Thus, straight line is self-similar. In a sense, the straight line is to fractals as 0 is to positive natural numbers!\n\nSince the derivative already captures \\textit{tangency}, we can modify this to define \\textit{fractal tangency}, and distinguish fractals from `smooth' curves. The following sections just develop a formalization to this idea.\n\n\\section{Definitions}\nLet $C_n$ (for $n \\ge 1$) be the set of all continuous functions $f:[0,1]\\rightarrow\\mathbb{R}^n$ such that $f(0)=(0)^n$ and $f(x) \\ne (0)^n, \\forall x \\ne 0$, where $(0)^n$ is the origin on $\\mathbb{R}^n$. Aside from the Cartesian coordinates, we make use of a variant of \\textit{Hyper-spherical coordinates} (we shorten it as \\textit{HSC} here). Let the set of HSCs be $S_n = \\mathbb{R} \\times [0, \\pi]^{n-1}$ ($S_1 = \\mathbb{R}$).\n\n\\begin{definition}\\label{hsc}\nLet $T_n: S_n \\rightarrow \\mathbb{R}^n$ (again, for $n \\ge 1$) be the transformation from Hyper-spherical coordinates back to Cartesian coordinates such that:\n\\begin{itemize}\n\t\\item $T_1(r) = r$.\n\t\\item $T_{n+1}(r, \\theta_1, \\ldots, \\theta_n) = ( \\sin{\\theta_n} \\cdot T_n(r, \\theta_1, \\ldots, \\theta_{n-1}), |r| \\cos{\\theta_n})$.\n\\end{itemize}\n\\end{definition}\n\\begin{exercise}[aka author is lazy]\nProve that $T_n$ is bijective \\textit{except at origin} either by explicit construction of $T_n^{-1}$, or by showing surjection and injection. 
Please utilize induction if possible.\n\\end{exercise}\n\\begin{definition}\n\\textbf{Quasi-multiplication on HSC} is the binary operation $\\cdot: S_n \\times S_n \\rightarrow S_n$ such that $$(r, \\theta_1, \\ldots, \\theta_{n-1}) \\cdot (s, \\phi_1, \\ldots, \\phi_{n-1}) = (rs, \\pi - |\\pi - (\\theta_1 + \\phi_1)|, \\ldots, \\pi - |\\pi - (\\theta_{n-1} + \\phi_{n-1})|).$$\n\\end{definition}\n\\begin{definition}\n\\textbf{Quasi-division on HSC} is the binary operation $/: S_n \\times S_n \\rightarrow S_n$ such that $$(r, \\theta_1, \\ldots, \\theta_{n-1}) / (s, \\phi_1, \\ldots, \\phi_{n-1}) = (r/s, |\\theta_1 - \\phi_1|, \\ldots, |\\theta_{n-1} - \\phi_{n-1}|).$$\n\\end{definition}\nNotice the operations on angles. The idea is that if the sum/difference of angles is outside of $[0,\\pi]$, we put it back to $[0,\\pi]$ by `remeasuring' it from angle $0$.\n\n\\section{Equivalence Relation}\nNow we can define the main concept to distinguish the rough from the smooth:\n\\begin{definition}\\label{equiv}\nTwo continuous paths to the origin $f, g \\in C_n$ have the `same tangency at origin' (we notate as $f \\sim_n g$) if there exists a `hyper-rotation' $r = (1, \\theta_1, \\ldots, \\theta_{n-1}) \\in S_n$ such that $$\\lim_{x\\rightarrow 0^{+}} \\frac{r \\cdot T_n^{-1}(f(x))}{T_n^{-1}(g(x))} = (1, (0)^{n-1}).$$\n\\end{definition}\n\\begin{theorem}\n$\\sim_n$ is an equivalence relation on $C_n$.\n\\end{theorem}\n\\begin{proof}\nFor reflexivity, the $r$ for $f \\sim_n f$ would be $(1, (0)^{n-1})$. For symmetry, given the $r$ for $f \\sim_n g$, the $r'$ for $g \\sim_n f$ would be $(1, (0)^{n-1})/r$. Lastly for transitivity, given the $r$ for $f \\sim_n g$ and $r'$ for $g \\sim_n h$, the $r''$ for $f \\sim_n h$ would be $r \\cdot r'$.\n\\end{proof}\n\\begin{corollary}\n$\\sim_n$ partitions $C_n$ into the set of equivalence classes $C_n / \\sim_n$.\n\\end{corollary}\nOne such class is the class $L_n$ of `smooth' paths to a point, or more informatively, the class of paths with the \\textit{tangent line} at origin.\n\\begin{definition}\nFor a continuous path to the origin $f \\in C_n$, $f \\in L_n$ iff $f \\sim_n (x, (0)^{n-1})$.\n\\end{definition}\nNow we can finally distinguish fractals, and define the title of this note:\n\\begin{definition}\nA continuous path to the origin $f \\in C_n$  has \\textbf{fractal tangency} at origin iff $f \\not\\in L_n$.\n\\end{definition}\n\n\\section{Other Ideas}\nHere are some other ideas/comments for further study of the concepts outlined in previous sections:\n\\begin{itemize}\n\t\\item The limit in Definition \\ref{equiv} can be simplified to $$\\lim_{x \\rightarrow 0^+}\\frac{T_n^{-1}(f(x))}{T_n^{-1}(g(x))} = (1, \\theta_1, \\ldots, \\theta_{n-1}).$$ This suggests a weakening of the equivalence relation: the limit should now be $(r, \\theta_1, \\ldots, \\theta_{n-1})$ for any $r \\ne 0$. However, at the time of writing, we do not have a good `geometric' interpretation for this.\n\t\\item Of course there are fractals beyond $C_n$ like the well-known Sierpi\u0144ski carpet! The first idea is that paths on \\textit{sets} can be defined as $f:[0,1] \\rightarrow 2$. Fortunately there are no notions of continuity, HSC, and hyper-rotation anymore. Then the next idea is to modify the $\\epsilon$ part of the $\\epsilon$-$\\delta$ definition: instead of $|f(x)-L|<\\epsilon$, we simply have $f(x) \\oplus L = 0$, where $\\oplus$ is the XOR operation. Now the only perfectly `smooth' paths are $f_0(x)=0$ and $f_1(x)=1$, for all $x\\in[0,1]$. 
The last idea is to define a suitable (quasi-)multiplication and (quasi-)division: maybe the XOR operation can also be used for that. Hopefully we can now define the analogue for Definition \\ref{equiv}. These ideas should generalize to other sets aside from $\\mathbb{R}^n$ and $2$.\n\\end{itemize}\n\n\\bibliographystyle{plain}\n\\bibliography{fractal}\n\\end{document}"}
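The quasi-multiplication and quasi-division defined above are easy to prototype numerically. The following Python sketch stores an HSC element $(r, \theta_1, \ldots, \theta_{n-1})$ as a numpy array and implements the angle `folding' back into $[0, \pi]$; the function names are hypothetical.
\begin{verbatim}
import numpy as np

def quasi_mul(a, b):
    # (r, thetas) . (s, phis) = (rs, pi - |pi - (thetas + phis)|)
    return np.concatenate(([a[0] * b[0]], np.pi - np.abs(np.pi - (a[1:] + b[1:]))))

def quasi_div(a, b):
    # (r, thetas) / (s, phis) = (r/s, |thetas - phis|)
    return np.concatenate(([a[0] / b[0]], np.abs(a[1:] - b[1:])))

e = np.array([1.0, 0.0])    # the hyper-rotation (1, 0), identity-like
x = np.array([2.0, 1.2])    # an element of S_2: radius 2, angle 1.2
print(quasi_mul(e, x))      # expected: [2.  1.2]
print(quasi_div(x, x))      # expected: [1.  0.]
\end{verbatim}
The two prints check the identity-like behaviour of $(1, (0)^{n-1})$ that the reflexivity argument in the equivalence-relation proof relies on.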
{"text": "\\documentclass{article}\n\n\\usepackage[hidelinks]{hyperref}\n\n\\title{Math 15 Project - An algorithmic approach to Nonhomogenious equation solving}\n\\date{2021-06-04}\n\\author{Elias Schablowski}\n\n\n\\begin{document}\n\\pagenumbering{gobble}\n\\maketitle\n\\newpage\n\\tableofcontents\n\\pagebreak\n\\pagenumbering{arabic}\n\\section{Abstract}\n\\paragraph{Differential Equations} are a cornerstone of the physical world. Many processes can be measured and modeled most accurately with ODE.\n\\paragraph{Non-homogenious Differential Equations} are a family of equations which is more difficult to solve due to their dependence on two variables. It is also a very broad range of equations, for example, this category includes forced oscilators, double pendulums, and more.\n\\paragraph{Methods to solve} There are multiple methods to solve non-homogenious equations. Each with their own advantages and drawbacks. The most basic is undetermined coefficients.\n\\section{Theory}\n\n\\subsection{Undetermined Coefficients}\n\\paragraph{The Idea} behind undetermined coefficients is to take the sum of all linearly independent differentials. This is so that the differentials can cancel each other out when differentiated and thus form the the final equation.\n$$y_p=\\sum_{n=0}^Na_n f^{(n)}$$ Where N is the number of linearly independent differentials and $a_n$ is a scaling factor.\n\\paragraph{The advantage} of undetermined coefficients is that it is capable of turning a differential equation into a system of equations in O(N) time.\n\\paragraph{The disadvantage} of undetermined coefficients is the requirement for a finite set of linearly independent derivatives. This excludes many common trigonometric functions such as $tan(x)$ and $\\frac{a}{x^n}$.\n\n\\subsection{Variation of Parameters}\n\\paragraph{The Idea} behind variation of parameters is to modify the the complementary solution in such a way as to create the forcing function.\n\\paragraph{The advantage} of variation of parameters is that it can solve a wide array of problems.\n\\paragraph{The disadvantages} of variation of parameters is that it often does not result in a system of equations with a single solution, and often requires the addition of additional constraints, therefore not lending itself to a computerized approach without bias. Furthermore, it is also requires the solutions to the complementary equation to be know, which further adds to the complexity.\n\n\\subsection{The Laplace Transform}\n\\paragraph{The Idea} behind using the laplace transform is replacing a problematic function with one that is simpler to solve with, and can be transformed back without information loss (1 to 1 transformation).\n\\paragraph{Why the Laplace Transform?} The Laplace Transform has multiple properties that make it an ideal candedate to solve ODEs and PDEs. These properties are linearity and uniqeness. 
Furthermore, many properties also resemble those of differentiation.\n\\paragraph{The disadvantage} of the Laplace Transform is that computing the Laplace transform requires solving an integral, which is complex.\n\n\\section{Algorithm}\n\\subsection{Inputs}\n\\paragraph{The Equation} is input as two expression trees, with one representing the ODE/PDE and the other representing the function.\n\\paragraph{The Preferred Method} is an optional parameter which specifies the preferred method to solve the ODE/PDE.\n\\subsection{Selection of Method}\nThis process is one of elimination, namely finding whether the faster methods work.\n\\subsubsection{Undetermined Coefficients}\nThe way to determine whether the ODE can be solved using Undetermined Coefficients is to determine whether the function has a finite number of linearly independent derivatives.\nThe most efficient way to determine this is to check whether the function contains either a function with an infinite number of linearly independent derivatives, division by a variable, or exponentiation with a variable base and exponent,\nas per \\hyperref[sec:derivative_proofs]{Proofs}.\n\\subsubsection{Variation of Parameters}\nThis method is not used, because the proof of applicability is more expensive than the efficiency gains over the Laplace Transform.\n\\subsubsection{The Laplace Transform}\nThis method is used when Undetermined Coefficients cannot be used.\n\\subsection{Undetermined Coefficients}\n\\begin{enumerate}\n    \\item Find all linearly independent derivatives.\n    \\item Sum all linearly independent derivatives together.\n    \\item Remove all constant coefficients and shifts.\n    \\item Inject variables into all critical locations (coefficients of functions and variables as well as shifts).\n    \\item Plug this new function into the ODE.\n    \\item Solve the equation for the values of the injected variables.\\footnote{In my implementation, this was done using a system of equations (which limits this implementation to polynomials).}\n    \\item Substitute the found values in for the injected variables.\n\\end{enumerate}\n\n\\subsection{Laplace Transform}\n\\begin{enumerate}\n    \\item Compute the Laplace Transform of the ODE.\n    \\item Query for the initial conditions.\n    \\item Solve for $X(s)$.\\footnote{I used Wolfram Alpha to do this step due to the complexities of symbolic solving.}\n    \\item Compute the Inverse Laplace Transform.\n\\end{enumerate}\n\\footnotetext{See the implementation (as well as the tex document for this paper on \\hyperlink{https://github.com/eschablowski/}{})}\n\\newpage\n\\section{Proofs}\n\\subsection{Finiteness of Derivatives}\n\\label{sec:derivative_proofs}\n\\paragraph{Note: } I made these proofs, as I didn't find any general method of checking for infinitely many linearly independent derivatives.\n\\subsubsection{Addition}\n\\paragraph {Let} $f$ and $g$ be any functions.\n\\paragraph {Let} $y = f(x) + g(x)$.\n\\paragraph{Let} $N_f$ be the number of linearly independent derivatives of $f$ and $N_g$ be the number of linearly independent derivatives of $g$.\n$$y^{(n)}=f^{(n)}(x)+g^{(n)}(x)$$\n\\paragraph{Let} $L_f = \\sum_{n=0}^{N_f} f^{(n)}$ and $L_g = \\sum_{n=0}^{N_g} g^{(n)}$.\n\\paragraph{Since} $L_f$ and $L_g$ are the sums of linearly independent derivatives of $f$ and $g$ respectively,\nthe sum of linearly independent derivatives of $y$ is $L_f + L_g$.\n\\paragraph{Therefore} the number of linearly independent derivatives of $y$ is finite as long as $N_g \\neq 
\\infty$ and $N_f \\neq \\infty$.\n\n\\subsubsection{Multiplication}\n\\paragraph {Let} $f$ and $g$ be any functions.\n\\paragraph {Let} $y = f(x) * g(x)$.\n\\paragraph{Let} $N_f$ be the number of linearly independent derivatives of $f$, $N_g$ be the number of linearly independent derivatives of $g$, and $N$ be $N_f$ if $N_f > N_g$ or $N_g$ otherwise.\n$$y^{(n)}(x)=\\sum_{i=0}^n {n \\choose i} f^{(i)}(x) \\, g^{(n-i)}(x)$$\n\\paragraph{Therefore,} if $f$ and $g$ have finitely many linearly independent derivatives, $y$ must also have a finite number of linearly independent derivatives.\n\n\\pagebreak\n\\section{Further items for Consideration}\n\\subsection{The Gaver/Stehfest Algorithm}\nThe Gaver/Stehfest Algorithm is a method for approximating the value of the inverse Laplace Transform \\cite{stehfest_1970}.\nThis can be useful for solving ODEs for program internals and/or creating a function from ODEs.\nIt was not used in this program because the goal of the program was to compute $y_p$ for output, not to evaluate $y(x)$ at points of $x$.\n\n\\subsection{Further proofs}\nThe following rules of differentiation still require proofs that finiteness carries over from their arguments:\n\\begin{enumerate}\n    \\item Chain rule\n    \\item Division (probably most difficult)\n    \\item Exponentiation\n\\end{enumerate}\n\\subsection{Improvements of the program}\nThe following improvements can still be made to the program:\n\\begin{enumerate}\n    \\item Add a CLI.\n    \\item Implement a better symbolic equation solver.\n\\end{enumerate}\n\\pagebreak\\nocite{*} % nocite is used since tex is removing the \\cite commands for some reason.\n\\bibliography{paper}\n\\bibliographystyle{ieeetr}\n\\end{document}\n"}
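As an illustration of the Laplace-transform branch of the algorithm (transform the ODE, solve for the transformed unknown, invert), here is a small Python/SymPy sketch. The concrete ODE, the zero initial conditions, and all names are illustrative assumptions, not taken from the implementation.
\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')  # stands in for the transformed unknown X(s)

# Illustrative ODE: y'' + 3y' + 2y = exp(-t), with y(0) = y'(0) = 0,
# so L{y''} = s^2 Y and L{y'} = s Y.
F = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)       # = 1/(s + 1)
Y_sol = sp.solve(sp.Eq(s**2 * Y + 3 * s * Y + 2 * Y, F), Y)[0]

# Invert to recover y(t); SymPy may include Heaviside(t) factors.
y = sp.inverse_laplace_transform(Y_sol, s, t)
print(sp.simplify(y))
\end{verbatim}
This mirrors the four enumerated Laplace steps above, with SymPy standing in for both the transform tables and the algebraic solve.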
{"text": "% Created 2021-09-20 Mon 08:08\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{amssymb}\n\\usepackage{circuitikz}\n\\usepackage{khpreamble}\n\\usepgfplotslibrary{groupplots}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{The DC motor}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={The DC motor},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section{Intro}\n\\label{sec:org1ac59b1}\n\n\\section{The DC motor}\n\\label{sec:org49f77d8}\n\\begin{frame}[label={sec:orgf50b351}]{Force acting on an electric conductor in a magnetic field}\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{../../figures/HD-fig1_14.png}\n\\includegraphics[width=0.53\\linewidth]{../../figures/HD-fig1_15.png}\n\\footnotesize Source: Hughes and Drury ''Electric motors and drives''\n\\end{center}\n\n\\[F=k_mI=(Bl_m)I,\\]\n\\end{frame}\n\n\\begin{frame}[label={sec:org479f6d4}]{Rotating motor}\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{../../figures/HD-fig3_1.png}\n\\includegraphics[width=0.53\\linewidth]{../../figures/HD-fig3_2.png}\n{\\footnotesize Source: Hughes and Drury ''Electric motors and drives''}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org255a431}]{Magnetic force and electro-motive force}\n\\begin{block}{The magnetic force on a current-carrying conductor}\n\\[ F(t) = k_f i(t) \\quad\\Leftrightarrow\\quad T(t) = k_f r i(t) = k_m i(t)\\]\n\\end{block}\n\n\\begin{block}{Voltage generated in a conductor moving in a magnetic field}\n\\[ e(t) = k_v v(t) \\quad\\Leftrightarrow\\quad e(t) = k_v r \\omega(t) = k_e \\omega(t)\\]\n\\(e(t)\\) is called  \\emph{Back electro-motive force (Back e.m.f.)}.\n\n\\alert{In the SI-system units} \\(k_m = k_e = k\\).\n\\end{block}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgf3a2f0f}]{Equivalent circuit}\nConsider a DC motor with separate excitation\n\\begin{center}\n  \\begin{circuitikz}\n    \\draw (4,1) node[elmech](motor){M};\n    \\draw (motor.north) to[R=$R$] (4,4) to[L=$L$] (0,4)\n    to[american voltage source, label=$u$] (0,0) -| (motor.south);\n    \\draw[thick,->>](motor.right)--++(1,0)node[midway,above]{$\\omega$};\n\n    \\node[] at (2, -0.8 cm) {\\(L \\frac{d}{dt}i(t) +  Ri(t) + k\\omega(t) = u\\)};\n\n    \\node[] at (2, 4.5 cm) {Armature};\n\n    \\begin{scope}[xshift=8cm]\n    \\draw (0,1) to (4,1) to[R=$R_f$] (4,3) to[L=$L_f$] (0,3)\n    to[american voltage source, label=$V_f$] (0,1);\n    \\node[] at (2, 4.5 cm) {Field};\n    \\end{scope}\n  \\end{circuitikz}\n\\end{center}\n\n\\begin{center}\nNewton: \\(J\\frac{d}{dt}\\omega(t) = ki(t) - T_l(t)\\)\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org25f647d}]{Modeling the DC motor}\n\\begin{center}\n  \\begin{circuitikz}[yscale = 0.5]\n    \\draw (4,2) node[elmech](motor){M};\n    \\draw (motor.north) to[short] (4,4) to[R=$R$] (2,4) to[L=$L$] (0,4)\n    to[american voltage source, label=$u$] (0,0) -| (motor.south);\n    \\draw[thick,->>](motor.right)--++(1,0)node[midway,above]{$\\omega$};\n\n    \\node[] at (9, 4 cm) {\\(L \\frac{d}{dt}i(t) +  Ri(t) + 
k\\omega(t) = u\\)};\n    \\node[] at (9, 2 cm) {\\(\\frac{d}{dt}i(t) = \\frac{1}{L} \\Big(-Ri(t) - k\\omega(t) + u\\Big)\\)};\n    \\end{circuitikz}\n    \\end{center}\n    \\begin{center}\n    \\begin{circuitikz}[yscale = 1]\n  \\begin{scope}[xshift=8cm, yshift=-1cm,\n  block/.style={rectangle, draw, minimum width=12mm, minimum height=10mm},\n  amp/.style = {regular polygon, regular polygon sides=3,\n        draw, fill=white, text width=1em,\n        inner sep=1pt, outer sep=0mm,\n        shape border rotate=-90},\n\tsumm/.style = {circle, draw, inner sep = 1pt},]\n   \\node[block,] (int) at (0,0) {$\\int$};\n   \\node[amp, left of=int, node distance=30mm] (oneoverL) {$\\frac{1}{L}$}; \n   \\draw[->] (oneoverL) -- node[above] {$\\frac{d}{dt}i(t)$} (int);\n   \\node[summ, left of=oneoverL, node distance=20mm] (sum) {\\small $\\Sigma$};\n   \\node[coordinate, left of=sum, node distance=35mm] (Vin) {};\n   \\draw[->] (Vin) -- node[above, very near start] {$u$} node[coordinate, pos=0.6] (mp) {} (sum);\n   \\node[amp, above of=mp, node distance=15mm] (mkonst) {$-k$};\n   \\draw[->] (int) -- node[coordinate] (fb) {} node[above, near end] {$i(t)$} ++(25mm, 0);\n   \\draw[->] (mkonst) ++(-2cm, 0) -- node[above, near start] {$\\omega(t)$} (mkonst);\n   \\draw[->] (mkonst) -| (sum);\n   \\draw[->] (sum) -- (oneoverL);\n   \\node[amp, below of =int, node distance=16mm, rotate=180] (RR) {\\rotatebox{-180}{$-R$}};\n   \\draw[->] (fb) |- (RR) -| (sum);\n\n   \\end{scope}\n  \\end{circuitikz}\n  \\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org82727b0}]{Transmission}\n\\end{frame}\n\n\\begin{frame}[label={sec:org96d9455}]{Transmission}\n  \\begin{center}\n  \\begin{circuitikz}[xscale = 0.8]\n\\begin{scope}[xshift=8cm, yshift=-1cm,\nblock/.style={rectangle, draw, minimum width=12mm, minimum height=10mm},\namp/.style = {regular polygon, regular polygon sides=3,\n      draw, fill=white, text width=1em,\n      inner sep=1pt, outer sep=0mm,\n      shape border rotate=-90},\n      summ/.style = {circle, draw, inner sep = 1pt},]\n \\node[block,] (int) at (0,0) {$\\int$};\n \\node[amp, left of=int, node distance=25mm] (oneoverL) {$\\frac{1}{L}$}; \n \\draw[->] (oneoverL) -- node[above] {$\\frac{d}{dt}i$} (int);\n \\node[summ, left of=oneoverL, node distance=15mm] (sum) {\\small $\\Sigma$};\n \\node[coordinate, left of=sum, node distance=45mm] (Vin) {};\n \\draw[->] (Vin) -- node[above, very near start] {$u$} node[coordinate, pos=0.8] (mp) {} (sum);\n \\node[amp, above of=mp, node distance=15mm] (mkonst) {$-k$};\n \\node[amp, left of=mkonst, node distance=20mm] (gr) {$g_r$};\n \\node[amp, right of=int, node distance=25mm] (mk2) {$k$};\n \\node[amp, right of=mk2, node distance=20mm] (gr2) {$g_r$};\n \\draw[->] (int) -- node[coordinate, pos=0.5] (meas) {} node[above] {$i$} (mk2);\n \\draw[->] (mk2) -- node[above] {$T_m$} (gr2);\n \\draw[->] (gr2) -- node[above] {$T_e$} ++(15mm, 0);\n \\draw[->] (gr) ++(-2cm, 0) -- node[above, near start] {$\\omega_a$} (gr);\n \\draw[->] (gr) -- node[above, ] {$\\omega_m$} (mkonst);\n \\draw[->] (mkonst) -| (sum);\n \\draw[->] (sum) -- (oneoverL);\n \\node[amp, below of =int, node distance=16mm, rotate=180] (fb) {\\rotatebox{-180}{$-R$}};\n \\draw[->] (meas) |- (fb);\n \\draw[->] (fb) -| (sum);\n \\end{scope}\n\\end{circuitikz}\n\\end{center}\n\nIgnoring losses in the transmission:\n\\[\\underbrace{T_m\\omega_m}_{\\text{Power in}} = \\underbrace{T_e\\omega_a}_{\\text{Power out}}\\]\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org7308dfa}]{DC motor driving a load}\n  
\\begin{center}\n  \\begin{tikzpicture}[scale = 0.5,\nblock/.style={rectangle, draw, minimum width=12mm, minimum height=10mm},\namp/.style = {regular polygon, regular polygon sides=3,\n      draw, fill=white, text width=1em,\n      inner sep=1pt, outer sep=0mm,\n      shape border rotate=-90},\n      summ/.style = {circle, draw, inner sep = 1pt},]\n \\node[block,] (dc) at (0,0) {$\\frac{1/R}{s\\frac{L}{R} + 1}$};\n \\node[summ, left of=dc, node distance=20mm] (sum) {\\small $\\Sigma$};\n \\node[coordinate, left of=sum, node distance=20mm] (Vin) {};\n \\draw[->] (Vin) -- node[above, very near start] {$u$} node[coordinate, pos=0.8] (mp) {} (sum);\n \\node[block, below of=sum, node distance=20mm] (mkonst) {$g_rk$};\n \\node[block, right of=dc, node distance=25mm] (mk2) {$g_rk$};\n \\node[summ, right of=mk2, node distance=20mm] (sumtorque) {\\small $\\Sigma$};\n \\node[block, right of=sumtorque, node distance=20mm] (load) {$\\frac{1}{Js}$};\n %\\node[block, right of=load, node distance=25mm] (int) {$\\frac{1}{s}$};\n \\node[coordinate, right of = load, node distance=25mm] (output) {}; \n \\node[coordinate, above of = sumtorque, node distance=20mm] (tload) {}; \n \\draw[->] (sum) -- (dc);\n \\draw[->] (dc) -- node[coordinate, pos=0.5] (meas) {} node[above] {$i$} (mk2);\n \\draw[->] (mk2) -- node[above] {$T_a$} (sumtorque);\n \\draw[->] (sumtorque) -- node[above] {} (load);\n \\draw[->] (load)  -- node[coordinate,] (meas) node[above,] {$\\omega_a$} (output);\n \\draw[->] (mkonst) -- node[pos=0.9, left,] {$-$} (sum);\n %\\draw[->] (int) --  node[very near end, above] {$\\theta_a$} (output);\n \\draw[->] (meas) |- (mkonst);\n \\draw[->] (tload) -- node[right, very near start] {$T_l$} node [right, pos=0.9] {$-$} (sumtorque);\n\\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\n\n\n\n\\begin{frame}[label={sec:org231c594}]{DC motor driving a load}\nAssuming the inductance to be negligible.\n\n  \\begin{center}\n  \\begin{tikzpicture}[scale = 0.5,\nblock/.style={rectangle, draw, minimum width=12mm, minimum height=10mm},\namp/.style = {regular polygon, regular polygon sides=3,\n      draw, fill=white, text width=1em,\n      inner sep=1pt, outer sep=0mm,\n      shape border rotate=-90},\n      summ/.style = {circle, draw, inner sep = 1pt},]\n \\node[block,] (dc) at (0,0) {$\\frac{1}{R}$};\n \\node[summ, left of=dc, node distance=20mm] (sum) {\\small $\\Sigma$};\n \\node[coordinate, left of=sum, node distance=20mm] (Vin) {};\n \\draw[->] (Vin) -- node[above, very near start] {$u$} node[coordinate, pos=0.8] (mp) {} (sum);\n \\node[block, below of=sum, node distance=20mm] (mkonst) {$g_rk$};\n \\node[block, right of=dc, node distance=25mm] (mk2) {$g_rk$};\n \\node[summ, right of=mk2, node distance=20mm] (sumtorque) {\\small $\\Sigma$};\n \\node[block, right of=sumtorque, node distance=20mm] (load) {$\\frac{1}{Js}$};\n %\\node[block, right of=load, node distance=25mm] (int) {$\\frac{1}{s}$};\n \\node[coordinate, right of = load, node distance=25mm] (output) {}; \n \\node[coordinate, above of = sumtorque, node distance=20mm] (tload) {}; \n \\draw[->] (sum) -- (dc);\n \\draw[->] (dc) -- node[coordinate, pos=0.5] (meas) {} node[above] {$i$} (mk2);\n \\draw[->] (mk2) -- node[above] {$T_a$} (sumtorque);\n \\draw[->] (sumtorque) -- node[above] {} (load);\n \\draw[->] (load)  -- node[coordinate,] (meas) node[above,] {$\\omega_a$} (output);\n \\draw[->] (mkonst) -- node[pos=0.9, left,] {$-$} (sum);\n %\\draw[->] (int) --  node[very near end, above] {$\\theta_a$} (output);\n \\draw[->] (meas) |- (mkonst);\n 
\\draw[->] (tload) -- node[right, very near start] {$T_l$} node [right, pos=0.9] {$-$} (sumtorque);\n\\end{tikzpicture}\n\\end{center}\n\n\n\\alert{Activity} What is the transfer function from the voltage input \\(u(t)\\) to the angular velocity \\(\\omega_a(t)\\)?\n\\end{frame}\n\n\n\n\\begin{frame}[label={sec:orgacbd599}]{DC motor driving a load}\n  \\begin{center}\n  \\begin{tikzpicture}[scale = 0.5,\nblock/.style={rectangle, draw, minimum width=12mm, minimum height=10mm},\namp/.style = {regular polygon, regular polygon sides=3,\n      draw, fill=white, text width=1em,\n      inner sep=1pt, outer sep=0mm,\n      shape border rotate=-90},\n      summ/.style = {circle, draw, inner sep = 1pt},]\n \\node[block,] (dc) at (0,0) {$\\frac{k_1}{s\\tau + 1}$};\n \\node[summ, right of=dc, node distance=20mm] (sum) {\\small $\\Sigma$};\n \\node[block, above of=sum, node distance=20mm] (loadtrf)  {$-\\frac{k_2}{s\\tau + 1}$};\n \\node[coordinate, left of=dc, node distance=20mm] (Vin) {};\n \\node[coordinate, above of=loadtrf, node distance=20mm] (tload) {};\n \\node[coordinate, right of=sum, node distance=30mm] (output) {};\n\n \\draw[->] (Vin) -- node[above, very near start] {$u$} node[coordinate, pos=0.8] (mp) {} (dc);\n \\draw[->] (tload) -- node[right, very near start] {$T_l$} node [right, pos=0.9] {} (loadtrf);\n \\draw[->] (dc) to (sum);\n \\draw[->] (loadtrf) to (sum);\n \\draw[->] (sum) -- node[above, near end,] {$\\omega_a$} (output);\n\n\\end{tikzpicture}\n\\end{center}\n\n\\[ \\tau=\\frac{JR}{(g_rk)^2}, \\quad k_1= \\frac{1}{g_r k}, \\quad k_2 = \\textcolor{white}{\\frac{R}{(g_r k)^2}} \\]\n\\end{frame}\n\\end{document}"}
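To connect the block diagrams to time-domain behaviour, a minimal forward-Euler simulation of the two coupled motor equations, $L\,\frac{d}{dt}i = -Ri - k\omega + u$ and $J\frac{d}{dt}\omega = ki - T_l$, can be sketched in Python as follows (gear ratio taken as $g_r = 1$; all parameter values are made up for illustration):
\begin{verbatim}
import numpy as np

# Hypothetical parameter values, for illustration only.
R, L, k, J = 1.0, 0.5, 0.1, 0.01   # resistance, inductance, motor constant, inertia
u, T_l = 1.0, 0.0                  # step voltage input and load torque
dt, T_end = 1e-4, 2.0

i, w = 0.0, 0.0                    # armature current and angular velocity
for _ in range(int(T_end / dt)):
    di = (-R * i - k * w + u) / L  # L di/dt = -R i - k w + u
    dw = (k * i - T_l) / J         # J dw/dt = k i - T_l
    i += dt * di
    w += dt * dw

print(f"speed after {T_end} s: {w:.3f} rad/s (steady state u/k = {u/k:.1f} for T_l = 0)")
\end{verbatim}
Setting the derivatives to zero in the two equations confirms the steady state: with $T_l = 0$, the current vanishes and $\omega = u/k$.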
{"text": "%================================================\r\n\\section{NIST Digits Dataset}\r\n%================================================\r\n\t%---------------------------------------------\r\n\t\\subsection{Problem and Approach}\r\n\t%----------------------------------------------\r\n\t\\begin{frame}\r\n\t\t\\frametitle{NIST Data Classification}\r\n\t\t\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item Small dataset (1797 points with 64 features, 10 labels, balanced).\r\n\t\t\t\\item 64 features: $8\\times 8$ images with pixels in the range $[0,16]$. \r\n\t\t\t\\item 10 classes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9\r\n\t\t\t\\item Simple and computationally able to run locally, which makes it ideal for proof-of-concept on short timelines where image analysis is needed quickly.\r\n\t\t\t\\item Analsis Tools Considered:\r\n\t\t\t\t\\begin{itemize}\r\n\t\t\t\t\t\\item PCA Clustering (qualitative)\r\n\t\t\t\t\t\\item K Nearest Neighbor Classifier Accuracy (quantitative)\r\n\t\t\t\t\t\\item K Nearest Neighbor Confusion Matrix (quantitative)\r\n\t\t\t\t\t\\item Persistence Diagrams (qualitative)\r\n\t\t\t\t\t\\item Bottleneck Distance Matrix (quantitative)\r\n\t\t\t\t\\end{itemize}\r\n\t\t\\end{itemize}\r\n\t\t\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\r\n\t%----------------------------------------------\r\n\t\\subsection{Results}\r\n\t%----------------------------------------------\r\n\t\t\r\n\t\t\\begin{frame}\r\n\t\t\\frametitle{PCA Clustering}\r\n\t\t\r\n\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.5]{images/digits_PCA.png}\r\n\t\t\t\t\\caption{PCA Clustering of the NIST Digits Dataset. Visually, we can see that there is some clustering, but the data has a pretty high degree of similarity even after PCA.}\r\n\t\t\\end{figure}\r\n\t\t\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\\begin{frame}\r\n\t\t\t\\frametitle{K Nearest Neighbors Classifier}\r\n\t\t\t\r\n\t\t\t\\begin{itemize}\r\n\t\t\t\t\\item Pre-processing:\r\n\t\t\t\t\\begin{itemize}\r\n\t\t\t\t\t\\item We split the data into a 30\\% train and 70\\% test sets and keep the data balanced.\r\n\t\t\t\t\t\\item We also shuffle the data.\r\n\t\t\t\t\t\\item We scale the data such that it is within $(0,1)$ using a min-max scaler.\r\n\t\t\t\t\\end{itemize}\r\n\t\t\t\t\\item The function class we consider is trivial since it is dependent only on the training data given.\r\n\t\t\t\t\\item We can view this as a weighting function that weights the $k$-nearest points to the input with $\\frac{1}{k}$ and all other points $0$.\r\n\t\t\t\t\\item Classification Accuracy: 76 percent\r\n\t\t\t\t\\item Training Accuracy: 83 percent\r\n\t\t\t\\end{itemize}\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\\begin{frame}\r\n\t\t\t\\frametitle{Confusion Matrix for Digits Data}\r\n\t\t\t\r\n\t\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.5]{images/confusion_matrix_digits.png}\r\n\t\t\t\t\\caption{Confusion Matrix of Digit Data. 
We see many interesting artifacts: 0's, 4's, and 6's are classified nicely, 1's have some confusion with 7's (vice versa), 2's with 5's, 3's with 9's, 5's with 8's, 8's with 2's and 5's, and 9's with 1's and 3's.}\r\n\t\t\\end{figure}\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\\begin{frame}\r\n\t\t\\frametitle{Persistence Images of Digits: 0's}\r\n\t\t\r\n\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.5]{images/PI_H0_0.png}\r\n\t\t\t\t\\caption{Persistence Image for 0.}\r\n\t\t\\end{figure}\r\n\t\t\\end{frame}\r\n\r\n\t\t\\begin{frame}\r\n\t\t\\frametitle{Persistence Images of Digits: 5's}\r\n\t\t\r\n\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.5]{images/PI_H0_5.png}\r\n\t\t\t\t\\caption{Persistence Image for 5.}\r\n\t\t\\end{figure}\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\r\n\t\t\t\t\\begin{frame}\r\n\t\t\\frametitle{Persistence Images of Digits: 8's}\r\n\t\t\r\n\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.4]{images/PI_H0_8.png}\r\n\t\t\t\t\\caption{Persistence Image for 8.}\r\n\t\t\\end{figure}\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\t\t\\begin{frame}\r\n\t\t\\frametitle{Persistence Diagram of Digits: 0's, 5's, \\& 8's}\r\n\t\t\r\n\t\t\\begin{figure}\r\n\t\t\t\t\\centering\r\n\t\t\t\t\\includegraphics[scale=0.35]{images/pd058.png}\r\n\t\t\t\t\\caption{Persistence diagram for 0, 5, 8. Notice the birth/death are similar for 5 and 8 versus 0.}\r\n\t\t\\end{figure}\r\n\t\t\\end{frame}\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\t\\begin{frame}\r\n\t\t\\frametitle{Pairwise Bottleneck Distances}\r\n\t\t\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item The bottleneck distances:\r\n\t\t\t\t\\begin{itemize}\r\n\t\t\t\t\t\\item With themselves: 0 (control)\r\n\t\t\t\t\t\\item 0, 5: 2.56941795\r\n\t\t\t\t\t\\item 0, 8: 2.21272278\r\n\t\t\t\t\t\\item 5, 8: 2.11571217 \r\n\t\t\t\t\\end{itemize}\r\n\t\t\\end{itemize}\r\n\t\t\r\n\t\tA downside of the bottleneck distance as a measure is that, for simple data, it can be hard to judge quantitatively what counts as a good distance.\r\n\t\t\\end{frame}"}
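A minimal reconstruction of the classification pipeline described in these slides, using scikit-learn's bundled digits data; the neighbour count $k = 5$ and the random seed are assumptions, since the slides do not state them.
\begin{verbatim}
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = load_digits(return_X_y=True)      # 1797 x 64, pixel values in [0, 16]
# 30% train / 70% test, shuffled and stratified to stay balanced.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.3, shuffle=True, stratify=y, random_state=0)

scaler = MinMaxScaler().fit(X_tr)        # scale features into [0, 1]
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_tr), y_tr)

print("train accuracy:", knn.score(scaler.transform(X_tr), y_tr))
print("test accuracy:", knn.score(scaler.transform(X_te), y_te))
print(confusion_matrix(y_te, knn.predict(scaler.transform(X_te))))
\end{verbatim}
Fitting the scaler on the training split only, then applying it to the test split, matches the pre-processing order implied by the slides.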
{"text": "\\section{Instance Generation}\n\nThe instances are generated randomly \\cite{bib:instances-CVRP}, \\cite{bib:constrained-knapsack}, \\cite{bib:grasp-and-tabu}. For that, first the graph is generated, and then the weight of each vertex is chosen. The knapsack capacity is selected so that, on average, X percent of the vertices fit in it. The following subsections analyze each of those aspects.\n\nConsider the parameters:\n\\begin{enumerate}\n    \\item $n$: number of vertices;\n    \\item $K$: average number of branches;\n    \\item $L$: maximum number of leaf vertices;\n    \\item $H$: the maximum value of an entry of the weight of each vertex;\n    \\item $m$: fraction of the average number of elements that fit in the knapsack;\n\\end{enumerate}\n\n\\subsection{How to Generate the Precedences}\n\nThe process of generating the precedences is specified in \\algref{algorith:find-trees}, which uses \\algref{algorith:generate-precedences}. The \\figref{fig:precedence-generation} has an example of such procedure. The following parameters are used to control the generation:\n\n\\begin{algorithm}[ht!]\n    \\caption{Find-Trees}\n    \\label{algorith:find-trees}\n    \\begin{algorithmic}[1]\n        \\Require{\n            $\\vertices$: vertices in the 2D plane,\n            $K$: average number of branches,\n            $L$: maximum number of leaf vertices\n        }\n        \\State{$k \\gets $ random number from 1 to $K$}\n        \\State{$\\tuple{R, \\mathcal{V}} \\gets $ find $k$ clusters in $V$}\n            \\Comment{$R$: a set of centers}\\\\\n            \\Comment{$\\mathcal{V}$: a set with each element being the set vertices of each cluster}\n        \\State{$\\mathcal{T} \\gets \\emptyset$}\n        \\For{each pair $r \\in R $ and $V' \\in \\mathcal{V}$}\n            \\If{$\\abs{V'} \\leqslant L$}\n                \\State{$T \\gets $ tree with $r$ as the root node and $V'$ as the leaves}\n            \\Else\n                \\State{$T \\gets $ tree with $r$ as the root node of the subtree Find-Trees($V', K, L$)}\n            \\EndIf\n            \\State{$\\mathcal{T} \\gets \\mathcal{T} \\cup \\Set{T}$}\n        \\EndFor\n        \\\\\\Return{$\\mathcal{T}$}\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[ht]\n    \\caption{Generate-Precedences}\n    \\label{algorith:generate-precedences}\n    \\begin{algorithmic}[1]\n        \\Require{\n            $n$: number of vertices,\n            $K$: average number of branches,\n            $L$: maximum number of leaf vertices\n        }\n        \\State{$V \\gets $ generate $n$ points in the 2D plane randomly}\n        \\State{$\\mathcal{T} \\gets $ Find-Trees($V, K, L$)}\n        \\\\\\Return{$\\mathcal{T}$}\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{images/precedence_construction.jpg}\n    \\caption{Precedence generation. The root nodes are the green, red and lemon. Red has four leaf vertices. Green has two branches, the pink with one leaf and the purple with two leaves. 
Lemon has one leaf and one branch with two leaves.}\n    \\label{fig:precedence-generation}\n\\end{figure}\n\n\\subsection{How to Generate the Weights}\n\nGenerate the weights randomly in the interval $\\interval{0}{H}$.\n\n\\subsection{How to Generate the Knapsack Capacity}\n\nGenerate each entry of the knapsack capacity $\\maximumWeight$ randomly in the interval $\\interval{0}{m \\cdot n \\cdot H}$.\n"}
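A compact Python sketch of the generation of points, weights, and the knapsack capacity just described; the weight dimension `dim` is a hypothetical parameter, since the text leaves the number of weight entries implicit.
\begin{verbatim}
import numpy as np

def generate_weights_and_capacity(n, H, m, dim=1, rng=None):
    # dim is a hypothetical weight dimension; the text leaves it implicit.
    if rng is None:
        rng = np.random.default_rng()
    points = rng.random((n, 2))                       # n random points in the 2D plane
    weights = rng.uniform(0.0, H, size=(n, dim))      # each weight entry in [0, H]
    capacity = rng.uniform(0.0, m * n * H, size=dim)  # each capacity entry in [0, m*n*H]
    return points, weights, capacity

points, w, c = generate_weights_and_capacity(n=50, H=10.0, m=0.3, dim=2)
print(w.shape, c)
\end{verbatim}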
{"text": "\\Lecture{Jayalal Sarma}{Sep 9, 2020}{01}{Pigeon Hole Principle and Basic Applications}{Jayalal Sarma}{$\\gamma$}{JS}\n\nWe start course with the simplest but surprising powerful tool in combinatorial arugments which is the pigeon hole principle. Through this principle as an example, we will also quick review the methods of proof. \n\n\\section{Quick Recap on Proof Techniques} \n\nA formal mathematical proof system in our context has axioms about various mathematical objects that we are using, like numbers, graphs which describes them through their properties. Then, there are rules of inferences such as modus ponens, modus tollens, resolution, syllogisms etc which helps us derive new statements from these axioms. \n\nThe peculiarity of these rules of inferences are that they \"conduct truth\"  and forms building blocks for huge \"truth conducting\" structures called mathematical proofs. That is, if for any object\\footnote{a little more formally, the assignment in the propositional logic, and model in general first oder logic}, the premises of the rules of inference are true, then the conclusion is also true for them.   Hence, suppose we derive a statement $\\phi$ starting with the axioms, applying the rules of inferences in various combinations. Since the individual rules of inferences \"conduct truth\", the resulting structure also conducts truth and is called the mathematical proof of the statement $\\phi$ from the axioms. Note that the truth of the statement $\\phi$ for the object under consideration can be stated on relative to the truth of the axioms that we used. However, this is not a concern, since we are intending to use the mathematical proof systems to derive statements about objects which we know would satisfy the axioms (in fact, we wrote down axioms as properties of those objects.\n\\begin{curiosity}\nIt is an amusing question to ask, whether there are other objects, which we did not intend to, which also satisfies the axioms that we wrote, by accident. Say for example, we wrote the axioms for graphs, but \"strings\" also satisfies them. If so, the theorems that we prove for graphs using only those axioms will also be true for strings, automatically !!. Quite interestingly this is true for natural numbers. The mathematical theory of natural numbers is axiomatized by what are called the Peano's axioms. There are numbers that one can define which are different from natural numbers for which any theorem that we prove for natural numbers also are true (because they satisfy the Peano's axioms). Then one might ask, are we not trying to represent exactly natural numbers? So should we not augment Peano's axioms with more properties of natural numbers such that we remove such {\\em unwanted} parallel models from satisfying the axioms we write. Even more interestingly, one can argue that this is not even possible. No matter, what extra formula we write the existence of such \"parallel models\" us inevitable. In fact, not just one \"parallel model\", there will be infintiely many of them. You should read about {\\em L\\\"owenheim\u2013Skolem theorem}.\n\\end{curiosity}\n\nWriting down mathematical proofs explicitly by using rules of inference may seem to be a mechanical way of proving statements. While it avoids any chance of mistakes because of the mathematical precision and rigor it affects quick readability and communication of ideas. 
Hence, one would like to have more \"human readable\" ways of representing these proofs by writing some of the steps in English, while ensuring that we do not lose the mathematical rigor. This brings in some subjectivity about how \"formal\" a proof is - that is, how close it is to the formal mathematical framework of rules of inference in terms of notation, presentation etc.  Sometimes, very rigorous proofs tend to hide the intuitive idea behind the proof, which one tends to (and sometimes needs to) describe separately for easy communication. The more formal your proof is, the smaller the chance of making a logical error in it. It is a good idea to start writing proofs with the mindset of a \"rigor extremist\", and once you are comfortable and can see through the mathematically rigorous steps of a statement, you can rely more on English sentences. This course will mostly do it in the latter way, while ensuring that mathematical rigor is kept intact. The beauty of combinatorial proofs lies in their elegance and the combinatorial insight and intuition. Balancing intuition with rigor in descriptions is part of the art of presentation.\n\nSuppose that we have to prove a statement $\\gamma$ of the form $p \\rightarrow q$.  We quickly recall the different ways of proving statements of this form. \n\n\\begin{description}\n\\item{\\bf Direct Proof:} Assume $p$ and then derive $q$ using the assumption and the axioms by applying the rules of inference. This is considered a proof of the statement $p \\implies q$ since it can be associated with a valid argument form by itself.\n\\item{\\bf Indirect Proof:} Assume $\\lnot q$ and then derive $\\lnot p$. Again, this is also considered a proof of the statement $p \\implies q$ since it can be associated with a valid argument form by itself. This is also called proof by {\\em contrapositive}.\n\\item{\\bf Proof by Contradiction:}\nA proof by contradiction assumes the negation of the statement to be proven (that is, $\\lnot \\gamma$), then defines a statement $r$ (this forms a part of the creativity of the proof), and then derives $r \\land (\\lnot r)$ from the assumption and axioms using the rules of inference. This shows that $\\gamma$ must be true, again by association with a valid argument form.\n\\end{description}\nIn addition, while proving quantified statements, there are a few additional ideas that are used, which we quickly review below:\n\\begin{description}\n\\item{\\bf Proof by Exhaustive Cases:} Suppose we want to derive a statement $\\Gamma$ of the form $\\forall \\alpha P(\\alpha)$ where $\\alpha$ comes from a domain of discourse $\\cal{D}$ (say, for example, $\\alpha$ is a natural number, that is, $\\cal{D} = \\N$).  We can partition $\\calD = \\calD_1 \\cup \\calD_2 \\cup \\ldots \\cup \\calD_k$ into several subdomains and prove the statement $\\forall \\alpha \\in \\calD_i,~P(\\alpha)$ separately. Each part of the proof  $\\forall \\alpha \\in \\calD_i,~P(\\alpha)$ is said to be a \"case\" of the proof. The fact that $\\calD = \\calD_1 \\cup \\calD_2 \\cup \\ldots \\cup \\calD_k$  is what is meant by the statement that the case analysis is {\\em exhaustive}.\n\\item{\\bf Proof by \"Counter Example\":} Suppose we want to disprove statements of the form $\\forall \\alpha P(\\alpha)$. That is, we want to derive $\\lnot\\left(\\forall \\alpha P(\\alpha)\\right)$, which is logically equivalent to $\\exists \\alpha \\lnot P(\\alpha)$. 
Hence it suffices to demonstrate an $\\alpha$ in the domain for which we can show $P(\\alpha)$ is false.\n\\item{\\bf Proof by Mathematical Induction:} This is a technique to prove statements of the form $\\forall \\alpha P(\\alpha)$ where the domain $\\calD$ is countably infinite. That is, the domain $\\calD$ can be put in bijection with the set of natural numbers. The technique forms part of Peano's axioms that define the natural numbers, and hence is a valid proof technique. If $\\phi : \\N \\to \\calD$ is a bijection, in order to prove $\\forall \\alpha P(\\alpha)$, we can equivalently prove $\\forall n \\in \\N, P(\\phi(n))$. In particular, it takes the following form:\n\\textit{If we can prove $P(\\phi(0))$ and the implication $\\left[ \\forall  n \\in \\mathbb{N}, P(\\phi(n)) \\implies P(\\phi(n+1)) \\right]$, then we can conclude  $\\forall n~ P(\\phi(n)) $}.\nThere are versions of this proof technique such as strong induction, structural induction, spiral induction, double induction etc. which are adaptations of the above basic idea.\n\\end{description}\n\nMost of the proofs that we do in this course will follow one of the above frameworks. We will not do examples of these techniques here, since they are already covered in the basic discrete mathematics course. \n\n\\section{The Pigeon Hole Principle (PHP)}\n\nWith the quick recap done in the previous part, we now plunge into the actual business of this lecture. We first prove the following basic version of the Pigeon Hole Principle.\n\\begin{theorem}\nLet $n,k \\in \\N$ be such that $n > k$. If we place $n$ identical balls into $k$ identical bins, then there is a bin that has at least two balls in it.\n\\end{theorem}\n\\begin{proof}\nLet $n, k \\in \\N$ and $n > k$. Assume, for the sake of contradiction, that when we placed the balls into the bins as indicated in the theorem, there was no bin with at least two balls in it. \n\nThe bins are identical, but let us number them from $1$ to $k$ now. Using this notation, let us define $b_i$ to be the number of balls that went into bin number $i$. Clearly $\\forall i, b_i \\ge 0$. Since we did distribute all the balls into the bins, we have :\n$$\\calR : \\sum_{i=1}^k b_i = n $$\nUsing the assumption, we have that $\\forall i, 0 \\le b_i \\le 1$. Summing up over $i$:\n$\\sum_{i=1}^k b_i \\le \\sum_{i=1}^k 1 = k < n$. Hence we have derived the statement :\n$$\\lnot \\calR : \\sum_{i=1}^k b_i \\ne n$$\nHence we have derived $\\calR \\land \\lnot \\calR$. This is a contradiction, so the original assumption that we started out with must be false, and hence there has to exist a bin with at least two balls in it.\n\\end{proof}
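\n\nAlthough the principle needs no computational support, it is instructive to see the quantifiers at work on small instances. The following Python sketch (purely illustrative; the function name is our own) enumerates every placement of $n$ balls into $k$ bins for small parameters and confirms that some bin always receives at least two balls.\n\\begin{verbatim}\nfrom itertools import product\n\ndef php_holds(n, k):\n    # Every placement assigns each of the n balls to one of the k bins.\n    for placement in product(range(k), repeat=n):\n        counts = [placement.count(b) for b in range(k)]\n        if max(counts) < 2:  # such a placement would refute PHP\n            return False\n    return True\n\n# Check the principle for all small parameter pairs with n > k.\nassert all(php_holds(n, k) for k in range(1, 5) for n in range(k + 1, 8))\n\\end{verbatim}\n\n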
\\begin{curiosity}\nThe formal proof of PHP, as simple as it sounds, is still a subject of substantial research in an area called \\textit{proof complexity}. To demonstrate this, let us write the principle itself in more rigorous notation. Let $n > k$, and let $\\{ x_{ij} \\mid  i \\in [n], j \\in [k] \\}$ be propositional variables (which can be called, say, {\\em pigeon hole variables}). Following our original notation, where there are $n$ pigeons and $k$ holes, the basic Pigeon Hole Principle is the following disjunctive normal form formula : \n$$\\textrm{\\sc PHP}_k^n \\defn \\left( \\bigvee_{i \\in [n]} \\bigwedge_{j \\in [k]} \\bar{x_{ij}} \\right) \\lor \\left( \\bigvee_{j \\in [k]} \\bigvee_{r \\ne s \\in [n]} (x_{rj} \\land x_{sj}) \\right) $$\nTo prove this, one possibility is to derive a contradiction from the negation of \n$\\textrm{\\sc PHP}_k^n$. This is an expression in conjunctive normal form, with clauses:\n$$ \\textrm{For $i \\in [n]$ the clauses : } Q_i \\defn \\bigvee_{j=1}^k x_{ij} $$\n$$\\textrm{ and for $s \\ne t \\in [n], j \\in [k]$ the clauses } Q_{s,t,j} \\defn \\bar{x_{sj}} \\lor \\bar{x_{tj}}$$\nIntuitively, these say that there is a function from $[n] \\to [k]$ (represented by $x_{ij}=1$ meaning that the function takes $i$ to $j$) which is well defined (for every $i$, there exists a $j$ such that $x_{ij} = 1$) and also injective (for two different $s$ and $t$, there is no $j$ for which both $x_{sj}$ and $x_{tj}$ are $1$).\nSince $n > k$, there cannot be such an injection, and hence the negation of the conjunction of these clauses, which is $\\textrm{\\sc PHP}_k^n$, must be true.\n\nSuppose we ask: starting from these clauses as axioms, and applying rules of inference (say the resolution principle) alone, how many steps of proof does one need to derive the contradiction ($r \\land \\lnot r$ for some $r$)?\\footnote{Notice that this sounds exactly like computation: how many steps of computation are required in order to do certain tasks, in terms of input parameters.} We measure this in terms of $n$ and $k$, which determine the number of variables in the system. The area which studies the complexity of proofs in the above sense is called {\\em proof complexity theory}. It turns out that the basic PHP itself is one of the tautologies which require exponentially long proofs if we restrict ourselves to resolution. What if we relax this? The area has several interesting open questions related to this, and they have close connections to computational complexity theory too.\n\\end{curiosity}\n\n\\subsection{A Quick Example}\n\nWe will now demonstrate the application of the principle itself by a quick example. This is meant to be a revision of the topic from previous courses, and hence it is very much possible that you have seen the application earlier.\n\n\\begin{theorem}\nIf you consider any five points placed inside the unit square, then there must necessarily exist two points that are at most $0.75$ units away from each other.\n\\end{theorem}\n\\begin{proof}\nFirstly, to make it sound less magical, let us comment that the theorem is actually true with $0.75$ units replaced by about $0.707$ units, which is $\\frac{1}{\\sqrt{2}}$. The application of PHP goes as follows. Divide the unit square into four small squares of side $\\frac{1}{2}$ each, so that the midpoint of the unit square is a corner of each of them. These small squares form the bins, and the five points that we place form the balls. By applying PHP, we conclude that there must be two points which fall into the same small square. Now the argument can be completed by the fact that the maximum distance between any two points in the same small square is the diagonal $\\frac{1}{\\sqrt{2}}$, since the sides of the small squares are $\\frac{1}{2}$ each.\n\\end{proof}\n\\begin{remark}[{\\bf Tightness}]\nIs the above theorem tight? Can it be improved? Improvement can be in terms of two parameters. Firstly, can we make the same claim for 4 points? Secondly, even for 5 points, can we make an improved claim about the minimum distance being, say, 0.7 units? The answer to both these questions is no. For the first, we can demonstrate 4 points in which every pair is at least one unit apart - the four corners themselves serve as a counter example. For the second question, we can demonstrate 5 points which are pairwise at least $\\frac{1}{\\sqrt{2}}$ apart: take the four corners together with the center of the square.\n\\end{remark}
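\n\nFor a quick computational sanity check, the following Python sketch (illustrative only) samples five random points in the unit square many times and verifies that some pair always lies within $\\frac{1}{\\sqrt{2}}$ of each other.\n\\begin{verbatim}\nimport math, random\n\ndef min_pairwise_distance(points):\n    return min(math.dist(p, q)\n               for i, p in enumerate(points)\n               for q in points[i + 1:])\n\nrandom.seed(0)\nfor _ in range(20000):\n    pts = [(random.random(), random.random()) for _ in range(5)]\n    assert min_pairwise_distance(pts) <= 1 / math.sqrt(2) + 1e-12\n\\end{verbatim}\n\n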
\\begin{remark}[{\\bf Glimpse of Extremals in Combinatorics}]\nThe above theorem, while a classical application of the Pigeon Hole Principle, also demonstrates a curious phenomenon. In spirit it says that \\textit{if there is a large number of objects in a collection, then there must be some structure}. The question is: how large? And what is structure? The answers to these vary, and they form the foundations of this area. We will see more of this when we get to Ramsey theory.\n\\end{remark}\n\n\\section{Numbers and Remainders}\n\nIt is customary to do an example of PHP from numbers and division with remainders. We will do a slightly unusual example. \n\n\\begin{theorem}\nConsider the infinite sequence $7, 77, 777, \\ldots ,7777777, \\ldots $ - there must necessarily exist a number in this sequence that is divisible by 2003.\n\\end{theorem}\n\\begin{proof}\nAs weird as it sounds, one might wonder how PHP plays a role here; there does not seem to be any place to apply PHP directly in the statement of the problem. Indeed, the infinitude seems to indicate that we are allowed to take large numbers in the sequence. A usual trick is division, after which we consider the remainders.\n\nAs a start, consider the first 2003 numbers in the sequence. Denote them by $n_1, n_2, \\ldots n_{2003}$. Divide them by 2003 and collect the remainders that we see. Denote them by $a_1, a_2, \\ldots, a_{2003}$. If any of the $a_i$ is 0, then we are done, since that $n_i$ is divisible by 2003. So assume that $1 \\le a_i \\le 2002$ for all $i$. Clearly, the pigeons and holes are now visible. The numbers $n_i$ are the pigeons and the remainders are the holes. There are only 2002 holes but there are 2003 pigeons, and hence by PHP there must exist $1 \\le i < j \\le 2003$ such that $a_i = a_j$. This gives:\n\\begin{eqnarray}\nn_j  \\mod 2003 = n_i \\mod 2003 \\\\\n(n_j - n_i) \\mod 2003 =  0\\\\\n2003 \\textrm{ divides } (n_j - n_i) \\\\\n\\end{eqnarray}\n\nThat is good progress. We managed to show that 2003 divides $(n_j - n_i)$. However, $n_j - n_i$, unfortunately, will not be in the sequence at all. What will this number look like? By the structure of the numbers subtracted, this difference will be a number of $7$s followed by several zeros. More precisely, computing these numbers, we have that:\n$$(n_j - n_i) = n_{j-i} \\cdot 10^{i}$$\nSo we have that $2003$ divides the product of $n_{j-i}$ and $10^{i}$. However, 2003, being an odd number which is not a multiple of $5$, has no common factor with any power of 10. Hence 2003 must necessarily divide $n_{j-i}$, which is in the sequence. This completes the proof.\n\\end{proof}
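\n\nThe argument also suggests an efficient computation: by tracking only remainders modulo 2003, one can locate the first member of the sequence divisible by 2003 without ever forming the huge numbers themselves. A Python sketch of this idea (illustrative; the names are our own):\n\\begin{verbatim}\ndef first_sevens_multiple(m=2003):\n    r, length = 0, 0\n    while True:\n        r = (10 * r + 7) % m  # append another digit 7, modulo m\n        length += 1\n        if r == 0:\n            return length     # number of 7s in the first multiple\n\n# By the theorem, this terminates within the first 2003 terms.\nprint(first_sevens_multiple())\n\\end{verbatim}\n\n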
\\section{Graphs}\n\nOur third application is related to problems that can be modelled using graphs.\n\n\\begin{theorem}\nIn any chess tournament with $n \\ge 2$ participants, at any point of time there must be two participants who have finished the same number of games in the tournament.\n\\end{theorem}\nIt is natural to model this situation as a graph with $n$ vertices, where each vertex represents a participant and we put an edge between vertices $i$ and $j$ if players $i$ and $j$ have played a game with each other. The number of games played by a player is exactly the degree of the corresponding vertex in this graph. Rewriting the above theorem in the new language:\n\\begin{theorem}\nIn any undirected graph $G$ on $n \\ge 2$ vertices, there must be two vertices which have the same degree.\n\\end{theorem}\n\\begin{proof}\nThe proof is by an exhaustive case analysis. We need to argue the above for all graphs. We divide this domain into two, based on whether there is an isolated vertex or not.\n\\begin{description}\n\\item{\\bf Case 1 : $G$ has an isolated vertex} - In this case, there is a vertex of degree $0$, and hence there cannot be a vertex of degree $n-1$. Thus we have $n$ vertices, and only $n-1$ possible degree values $\\{0,1,2,\\ldots n-2\\}$. By the PHP, we must see two vertices which have the same degree.\n\\item{\\bf Case 2: $G$ does not have an isolated vertex} - In this case, there is no vertex of degree $0$, and hence the degree values of vertices can only be in the set $\\{1, 2, \\ldots n-1\\}$. Again we have $n$ vertices whose degrees take only $n-1$ possible values. Again, by PHP, we must see two vertices having the same degree.\n\\end{description}\n\\end{proof}\n\n\\begin{exercise-prob}[See Problem Set 1~(Problem \\ref{mutual-friends})]\n\\begin{show-ps1}{mutual-friends}\nA social network is said to be symmetric if the relation between users that is maintained as a part of the network is symmetric. Consider a symmetric social network, and let the symmetric relation maintained be that of ``user $A$ and $B$ are {\\em friends}'' (like in the case of facebook). A user $C$ is said to be a \\textit{mutual friend} of users $A$ and $B$ if $C$ is a friend of both $A$ and $B$. Prove that for any user $A$ of the network who has at least two friends, there must exist two friends of $A$ who have the same number of mutual friends with $A$. \n\nComment on whether symmetry is critical for your argument. Take the example of {\\em instagram}, where the symmetric relation of {\\em friends} is replaced by {\\em followers}. Generalize the definition of mutual friends to {\\em mutual followers}. Comment on whether a similar statement can be established for followers in this case.\n\\end{show-ps1}\n\\end{exercise-prob}\n\n\\section{Discussion Session}\n\nJust to get started, we considered the following question: \\textit{how many people do we need to choose so that we can be assured that two among the set of people we have chosen will have their birthday on the same day?} The answer to this question is given by the Pigeon Hole Principle immediately, by considering the people to be pigeons and the day of the year on which their birthday falls to be the pigeonhole. Hence, to be guaranteed that out of the 366 holes at least one contains two pigeons (people), and hence two people having the same birthday, we need to choose 367 people. Just to test our understanding, we asked \"is the theorem tight?\" in terms of the number of people. It indeed is, since there is a set of 366 people whom you can choose all of whom have different birthdays. That is, 367 is the smallest number for which the above statement can be proposed. Hence the theorem is tight.\n\nThe first discussion point that was raised was a comparison with the birthday paradox. If we do not choose 367 people, we are not given the guarantee that there are two people in the set with the same birthday. What if we do not insist on this guarantee with certainty, but instead settle for a probabilistic guarantee? To formalize this, one has to imagine an experiment where the people are chosen uniformly at random. More rigorously, the property of the distribution is that for every date of the year, the probability that a chosen person has a birthday on that day is $\\frac{1}{365}$. Let us say, we are talking about a non-leap-year. The question is then of the form: {\\em what is the minimum number of people we need to choose, as per the above experiment, such that we are guaranteed at least a 99.9\\% chance of getting two people with the same birthday in the set?} A natural number to choose is 364, which is slightly less than 365. But what is the minimum? The answer beats our usual intuition and is surprisingly low - we need to choose only 70 people to achieve this! The surprise goes even further: if we ask only for a 50\\% chance, then the number is just 23! Hence this is called \\textit{the birthday paradox}.
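\n\nThe claimed numbers are easy to check directly, since the probability that $n$ uniformly random birthdays are pairwise distinct is $\\prod_{i=1}^{n-1}\\left(1 - \\frac{i}{365}\\right)$. A Python sketch (illustrative) of the computation:\n\\begin{verbatim}\nimport math\n\n# Probability that, among n uniformly random birthdays over 365 days,\n# at least two coincide.\ndef collision_probability(n, days=365):\n    p_distinct = math.prod((days - i) / days for i in range(n))\n    return 1 - p_distinct\n\nprint(round(collision_probability(23), 4))  # about 0.5073\nprint(round(collision_probability(70), 5))  # about 0.99916\n\\end{verbatim}\n\n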
\\subsection{Impossibility of Perfect Lossless Compression}\n\nPHP has a variety of applications. The first application outside a discrete math course that we usually encounter is in automata theory, where we use it to prove the pumping lemma. In fact, this one principle is pivotal in showing that there {\\em cannot be} a finite automaton accepting certain languages.\n\nWe then turned to a practically motivated application of PHP, in the context of file compression. We have all used file compression programs - say zip or tar. They compress our files into smaller sizes, and they usually report a compression ratio too. Here is a question out of curiosity: can we design a compression algorithm that is guaranteed to provide compression for all files? This sounds like a natural requirement. Interestingly, the actual situation is worse: any compression algorithm not only cannot reduce the size of all files, but in fact has to increase the size of some file. The argument uses PHP. \n\nCompression algorithms are nothing but programs which translate files (which are interpreted as strings) to strings. That is, they are functions of the form $C : \\Sigma^* \\to \\Sigma^*$. A compression algorithm is said to be {\\em lossless} if this function is injective. That is, given a compressed string (an element in the range) there is a unique file that we can decompress it to. Indeed, compression algorithms that are not lossless are practically useless, since there cannot exist a decompression algorithm which recovers the original file.\n\n\\begin{theorem}\nFor any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger.\n\\end{theorem}\n\\begin{proof}\nLet $C$ be the compression function from $\\Sigma^*$ to $\\Sigma^*$. Let us fix $\\Sigma = \\{0,1\\}$ without loss of generality.\nSuppose $C$ makes at least one file smaller than its size. In addition, for the sake of contradiction, suppose that the algorithm does not make any file larger than its size. Let $w \\in \\Sigma^*$ be a shortest string (say, $|w| = \\ell$) which the algorithm makes smaller. By this choice and the assumption, every $w' \\in \\Sigma^*$ with $|w'| < \\ell$ satisfies $|C(w')| = |w'|$. Now consider the following set :\n$$ \\Gamma = \\left\\{ u \\in \\Sigma^* \\mid | C(u) |  < \\ell \\right\\} $$\nFrom the above, every string of length less than $\\ell$ is in $\\Gamma$, and so is $w$; hence $|\\Gamma| \\ge (2^\\ell - 1) + 1 = 2^\\ell$. On the other hand, $C$ maps $\\Gamma$ into the set of strings of length at most $\\ell-1$, which has only $2^0 + 2^1 + \\cdots + 2^{\\ell-1} = 2^\\ell - 1$ elements. Hence, by the Pigeon Hole Principle, $\\exists u,u' \\in \\Gamma$ with $u \\ne u'$ such that $C(u) = C(u')$. 
This contradicts the lossless property.\n\\end{proof}\n%\n%\\subsection*{Computation of $M$ - from discussion on Sep 28th}\n%\n%\\paragraph{Harmonic Number Bound:}\n%\n%\\begin{eqnarray*}\n%H_n & = & 1+\\frac{1}{2}+\\frac{1}{3}+\\frac{1}{4} \\ldots +\\frac{1}{n} \\\\\n%& = & \\bigsum_{k=0}^{\\lfloor \\log n \\rfloor} \\sum_{j=0}^{2^k -1} \\left( \\frac{1}{2^k+j} \\right) \\\\\n%& \\le & \\bigsum_{k=0}^{\\lfloor \\log n \\rfloor} \\sum_{j=0}^{2^k -1} \\left( \\frac{1}{2^k} \\right) \\textrm{ since the denominator decreased} \\\\\n%& = &  \\sum_{k=0}^{\\lfloor \\log n \\rfloor} (1) \\le \\log n \n%\\end{eqnarray*}\n%We wanted to compute :\n%\n%\\[ M = \\sum_{d=1}^\\infty \\frac{\\mu(d)}{d^2} \\]\n%\n%By using prime decomposition theorem, if $p_1,p_2, \\ldots $ are list of primes:\n%\n%\\[ M = \\left( 1- \\frac{1}{p_1^2} \\right) \\left( 1- \\frac{1}{p_2^2} \\right)  \\left( 1- \\frac{1}{p_3^2} \\right)  \\ldots \\]\n%\n%The next idea is to compute $\\frac{1}{M}$ as follows: since we have written above as the product -\n%\n%\\begin{eqnarray*}\n%\\frac{1}{M} & = & \\frac{1}{\\left( 1- \\frac{1}{p_1^2} \\right) }  \\frac{1}{\\left( 1- \\frac{1}{p_2^2} \\right) }  \\frac{1}{\\left( 1- \\frac{1}{p_3^2} \\right) } \\ldots \\\\\n%\\end{eqnarray*}\n%Using the infinite series expansion ... $\\left(1-\\frac{1}{x^2}\\right() = 1+x^2+x^4+\\ldots$\n%\\begin{eqnarray*}\n%\\frac{1}{M} & = & \\left( 1+\\frac{1}{p_1^2} + \\frac{1}{p_1^4} \\ldots \\right)\n%\\left( 1+\\frac{1}{p_2^2} + \\frac{1}{p_2^4} \\ldots \\right)\n%\\left( 1+\\frac{1}{p_3^2} + \\frac{1}{p_3^4} \\ldots \\right) \\\\\n%& = & 1+\\frac{1}{2^2}+\\frac{1}{3^2}+ \\ldots \\\\\n%& = & \\frac{\\pi^2}{6} \\textrm{   {\\tt By Euler sum that Gautam mentioned in the session}}\n%\\end{eqnarray*}\n\n\n\\Lecture{Jayalal Sarma}{Sep 9, 2020}{02}{More on PHP}{Jayalal Sarma}{$\\gamma$}{JS}\n\n\\section{Warm up and Generalizations of PHP}\n\nWe start with a usual application of PHP to numbers, to warm up for the lecture.\n\\begin{theorem}\nIn any set of $n+1$ positive integers, each at most $2n$, there must exist two numbers such that one divides the other. \n\\end{theorem}\n\\begin{proof}\nLet $S = \\{a_1, a_2, \\ldots , a_{n+1}\\}$ be the set of $n+1$ positive integers such that $a_i \\le 2n$. Each number can be written in the form $a_i = 2^{k_i}q_i$ where $k_i$ is the maximum power of $2$ that divides $a_i$, and $q_i$ hence is an odd number.\n\nNow consider the numbers $q_1, q_2, \\ldots q_{n+1}$. Can they all be distinct? They all lie in the range $1 \\le q_i \\le 2n$, where there are only $n$ odd numbers. By an application of PHP, we have that there must exist $i, j$ with $1 \\le i \\ne j \\le n+1$ such that $q_i = q_j$. Since $a_i \\ne a_j$ while $q_i = q_j$, we must have $k_i \\ne k_j$. This gives the two exhaustive cases:\n\n\\begin{description}\n\\item{{\\bf Case 1:} $k_i > k_j$:} Then $2^{k_j}$ divides $2^{k_i}$, and since $q_i = q_j$, this gives that $q_j2^{k_j}$ divides $q_i2^{k_i}$. Hence $a_j$ divides $a_i$.\n\\item{{\\bf Case 2:} $k_j > k_i$:} Same as the previous case, just swapping the roles of $i$ and $j$.\n\\end{description}\nIn either case, we have that there exist two numbers in the set where one divides the other. This concludes the proof.\n\\end{proof}
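\n\nFor small $n$ the theorem can be verified exhaustively. A Python sketch (illustrative only):\n\\begin{verbatim}\nfrom itertools import combinations\n\n# Any n+1 numbers chosen from {1, ..., 2n} contain a dividing pair.\ndef has_dividing_pair(subset):\n    return any(b % a == 0 for a, b in combinations(sorted(subset), 2))\n\nfor n in range(1, 8):\n    assert all(has_dividing_pair(s)\n               for s in combinations(range(1, 2 * n + 1), n + 1))\n\\end{verbatim}\n\n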
We now state a usual generalization of PHP as a recap.\n\n\\begin{theorem}[{\\bf Generalized PHP}]\nLet $n,m,r$ be positive integers and let $n > mr$. If we distribute $n$ balls into $m$ bins, then there must be a bin which has at least $r+1$ balls.\n\\end{theorem}\n\nIndeed, the generalization comes in handy when the combinatorial statement that we want to explore is not about a \"conflict\" but about multiple elements getting into the same bag. A simple recap example to demonstrate this is the following question: {\\em how many students do we need in the course so that, at the end of the semester, no matter how the students perform, at least five students get the same letter grade (out of the six grades $S,A,B,C,D,E$)?} This is also an extremal question. Applying generalized PHP, with $r+1 = 5$ and $m=6$, it is sufficient to have 25 students in the class. And with 24 we cannot guarantee this, since there is a way to distribute 4 students to each grade so that no grade has 5 students.\n\n\\section{Example 2 : Erd\\\"os-Szekeres Theorem}\n\nThis is about a pattern that appears in sequences of distinct numbers, first proved by Erd\\\"os and Szekeres in 1935. The theorem itself has a geometric interpretation too. The theorem is a creative use of the Pigeon Hole Principle and is a case of extremal combinatorics.\n\n\\begin{theorem}\nIn any sequence of $n^2+1$ distinct real numbers there must necessarily exist either a strictly increasing subsequence of $n+1$ numbers or a strictly decreasing subsequence of $n+1$ numbers.\n\\end{theorem}\nBefore we begin to prove this, let us play around with an example:\n$8, 11, 9, 1, 4, 6, 12, 10, 5, 7$. Here $n=3$ and there are 10 numbers in the sequence. There must be at least one strictly increasing subsequence of length 4. Indeed, there is - the subsequence $1,4,6,12$. In fact, there are more, $1,4,6,10$ etc. But anyway, there is at least one. In this case, it so happens that there is a strictly decreasing subsequence of length $4$ as well: the subsequence $11, 9, 6, 5$. There are more such subsequences.\n\n\\begin{proof}\nThe proof is an elegant and intuitive one - a perfect example of how such proofs are discovered. Suppose $a_1, a_2, \\ldots a_{n^2+1}$ is the given sequence of numbers.\n\nSuppose we checked for an increasing subsequence of length $n+1$ and for a decreasing subsequence of length $n+1$, but did not find either in the above sequence. How do we formally represent this data? Here is an idea: with each index $k$, let us associate a pair of numbers $(i_k,d_k)$, defined as the length of the longest increasing (for $i_k$, and respectively decreasing for $d_k$) subsequence starting from the number $a_k$ in the given sequence. The fact that we failed to find the subsequences means that these pairs satisfy, for every $k$, $1 \\le i_k \\le n$ and $1 \\le d_k \\le n$.\n\nThus we have $n^2+1$ tuples in hand, where the value of each component can only be between $1$ and $n$. Hence there are only $n^2$ distinct such pairs possible. But now we have a scenario for PHP, which gives that there must exist $s,t \\in [n^2+1]$ with $s \\ne t$ (say, without loss of generality, $t < s$) such that the tuples at both these indices are the same. That is, $i_s = i_t = i$ (say) and $d_s = d_t = d$ (say).\n\nWe know that the numbers in the sequence are distinct. Hence $a_s \\ne a_t$. Thus we have the following two exhaustive cases:
\n\n\\begin{description}\n\\item{{\\sf Case 1:} $a_t < a_s$ :} Let $(u_1, u_2, \\ldots u_i)$ be an increasing subsequence of length $i$ starting from $a_s$, with $u_1 = a_s$. Since $t < s$ and $a_t < a_s$, the sequence $(a_t, u_1, u_2, \\ldots u_i)$ is also increasing, starts with $a_t$, and has length $i+1$. This contradicts the fact that $i_t = i$.\n\\item{{\\sf Case 2:} $a_t > a_s$ :} Same as the above case, where we replace increasing with decreasing: prepending $a_t$ to a longest decreasing subsequence starting from $a_s$ demonstrates a decreasing subsequence of length $d+1$ starting from $a_t$, which contradicts the fact that $d_t = d$.\n\\end{description}\nThis completes the proof.\n\\end{proof}\n\n\\begin{remark}\nThe above theorem can also be generalized, and in fact this is the original form of the Erd\\\"os-Szekeres theorem. For given natural numbers $r$, $s$, they showed that any sequence of distinct real numbers with length at least $(r-1)(s-1)+1$ contains a monotonically increasing subsequence of length $r$ or a monotonically decreasing subsequence of length $s$. \n\\end{remark}
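\n\nThe theorem is easy to stress-test for small $n$. The following Python sketch (illustrative, brute force) checks both the worked example above and random sequences of length $10$, i.e., the case $n = 3$:\n\\begin{verbatim}\nfrom itertools import combinations\nimport random\n\n# Brute-force length of the longest strictly monotone subsequence;\n# adequate for the short sequences checked below.\ndef longest_monotone(seq, increasing=True):\n    best = 1\n    for r in range(2, len(seq) + 1):\n        for sub in combinations(seq, r):\n            pairs = list(zip(sub, sub[1:]))\n            ok = (all(a < b for a, b in pairs) if increasing\n                  else all(a > b for a, b in pairs))\n            if ok:\n                best = max(best, r)\n    return best\n\nexample = [8, 11, 9, 1, 4, 6, 12, 10, 5, 7]\nassert longest_monotone(example, True) >= 4\nassert longest_monotone(example, False) >= 4\n\nrandom.seed(1)\nfor _ in range(200):\n    seq = random.sample(range(1000), 10)\n    assert max(longest_monotone(seq, True),\n               longest_monotone(seq, False)) >= 4\n\\end{verbatim}\n\n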
\\section{Example 3: People at a Party}\n\nIf $6$ people are invited to a party, something interesting happens. Let us say some pairs of them are friends and some pairs of them are strangers to each other. There will always be some set of three people who are pairwise strangers to each other, or there will be a set of three people who are pairwise friends with each other. This phenomenon can easily be mistaken for a sociological or behavioural psychological fact that humans seem to behave this way. However, it turns out that it can be seen as a simple result of combinatorics, and it is a nice application of PHP in disguise. We demonstrate this now.\n\n\\begin{theorem}\nIf $6$ people are invited to a party, then there must exist three of them who are pairwise strangers to each other, or three of them who are pairwise friends with each other. \n\\end{theorem}\nThere are many equivalent ways of formulating this. One can talk about graphs to model the facts stated above. We defer these to a later point, when we get to Ramsey numbers. We now get to the proof of the above in the same language as we discussed.\n\\begin{proof}\nLet $P$ be the set of people who joined the party. Let $\\alpha \\in P$ be one of the attendees. We do the following case analysis based on how many people $\\alpha$ is friends with at the party; note that among the other five attendees, $\\alpha$ has either at least three friends or at least three strangers. We want to demonstrate a set $\\Gamma \\subseteq P$ such that $|\\Gamma| = 3$ and the members of $\\Gamma$ are either pairwise friends or pairwise strangers.\n\\begin{description}\n\\item{{\\bf Case 1:} $\\alpha$ has at least three friends in $P$:} Let $\\beta, \\gamma, \\delta$ be three of the friends. We ask the question: are $\\beta, \\gamma, \\delta$ friends amongst themselves? The answer to this gives the following exhaustive subcases.\n\\begin{description}\n\\item{{\\bf Case 1a:} $\\beta, \\gamma, \\delta$ are pairwise strangers to each other:} In this case  we can simply set $\\Gamma = \\{ \\beta, \\gamma, \\delta \\}$, which has the required property for $\\Gamma$ as desired.\n\\item{{\\bf Case 1b:} there is a pair among $\\beta, \\gamma,$ and $\\delta$ who are friends :} Without loss of generality, let us say $\\beta$ and $\\gamma$ are friends (the other cases are similar). In this case, define $\\Gamma = \\{\\alpha, \\beta,\\gamma\\}$; all three pairs are friends, so it has the desired properties.\n\\end{description}\n\\item{{\\bf Case 2:} $\\alpha$ has at most two friends in $P$:} In this case, there are at least three strangers to $\\alpha$ in $P$; let us name three of them $\\beta$, $\\gamma$ and $\\delta$. We ask the question: are $\\beta, \\gamma, \\delta$ friends amongst themselves? The answer to this gives the following exhaustive subcases.\n\\begin{description}\n\\item{{\\bf Case 2a:} $\\beta, \\gamma, \\delta$ are pairwise friends with each other:} In this case  we can simply set $\\Gamma = \\{ \\beta, \\gamma, \\delta \\}$, which has the required property for $\\Gamma$ as desired.\n\\item{{\\bf Case 2b:} there is a pair among $\\beta, \\gamma,$ and $\\delta$ who are strangers :} Without loss of generality, let us say $\\beta$ and $\\gamma$ are the strangers (the other cases are similar). In this case, define $\\Gamma = \\{\\alpha, \\beta,\\gamma\\}$; all three pairs are strangers, so it has the desired properties.\n\\end{description}\n\\end{description}\nSince we argued both cases, this completes the proof of the theorem.\n\\end{proof}\n\\begin{remark}\nA more elegant way to handle Case 2 is to reduce it to Case 1 itself. Consider the complement of the friends/strangers relation. Note that the result required for the theorem does not change, since we just need $\\Gamma$ to be either pairwise strangers or pairwise friends, and these simply get complemented. Now, if $\\alpha$ has at most two friends in $P$, it has at least three friends in $P$ in the complementary relation, and hence we can reuse Case 1 in this case.\n\\end{remark}\n\n\\begin{exercise}\nIs the above theorem tight? Indeed, one can have 5 people going to a party and associate a friends/strangers relation among them such that there do not exist three people who are pairwise friends with each other and there do not exist three people who are pairwise strangers to each other. The exercise is to explicitly write down this counter example relation.\n\\end{exercise}
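\n\nSince a friends/strangers pattern on six people is just a $2$-colouring of the ${6 \\choose 2} = 15$ pairs, both the theorem and its tightness can be checked by brute force. A Python sketch (illustrative; the $5$-people pattern used below is the standard pentagon construction):\n\\begin{verbatim}\nfrom itertools import combinations, product\n\ndef has_mono_triple(n, colour):\n    # colour maps each pair to 0 (strangers) or 1 (friends)\n    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]\n               for a, b, c in combinations(range(n), 3))\n\n# Every friends/strangers pattern on 6 people has a monochromatic triple.\npairs6 = list(combinations(range(6), 2))\nassert all(has_mono_triple(6, dict(zip(pairs6, bits)))\n           for bits in product([0, 1], repeat=len(pairs6)))\n\n# Tightness on 5 people: make the 5-cycle edges friends and the\n# diagonals strangers; no monochromatic triple appears.\ncycle = {(i, (i + 1) % 5) for i in range(5)}\ncolour5 = {(a, b): 1 if ((a, b) in cycle or (b, a) in cycle) else 0\n           for a, b in combinations(range(5), 2)}\nassert not has_mono_triple(5, colour5)\n\\end{verbatim}\n\n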
\\begin{exercise-prob}[See Problem Set 1~(Problem \\ref{php-square})]\n\\begin{show-ps1}{php-square}\nThe set $M$ consists of nine positive integers, none of which has a\nprime divisor larger than six. Prove that $M$ has two elements whose\nproduct is the square of an integer. Is the bound $9$ in the above statement tight?\n\\end{show-ps1}\n\\end{exercise-prob}\n\n\\Lecture{Jayalal Sarma}{Sep 10, 2020}{03}{PHP for Dirichlet's Approximation Principle}{Jayalal Sarma}{$\\gamma$}{JS}\n\nWe now discuss a very old and rather different application of PHP, which predates the name PHP itself. This is to show the approximation principle for irrational numbers. This application earned the principle the name {\\em Dirichlet's Box Principle}.\n\n\\section{Approximation of irrationals by rationals}\nThe task we have at hand is to approximate irrational numbers by rationals up to a given accuracy. As a concrete example, suppose we want to approximate $\\sqrt{2}$ by $\\frac{p}{q}$ up to a given accuracy $\\epsilon$, such that\n$$ \\left| \\sqrt{2} - \\frac{p}{q} \\right| \\le \\epsilon $$\nThe driving question for us is: how large should $p$ and $q$ be? In fact they are related, and hence we need only ask how large the denominator $q$ should be. The larger $q$ is, the finer the granularity of the representation, and the larger the storage cost for the number to be represented. Hence, for a fixed irrational number and a given $\\epsilon$, we want the value of $q$ to be as small as possible.\n\nIndeed, if we want to do this for an arbitrary irrational number, here is a simple idea: let $q \\in \\N$ (which we will choose later). Divide the number line into intervals of length $\\frac{1}{q}$ each. Consider the number $\\alpha$ and see which interval it belongs to. Choose the nearest endpoint of that interval, which is a rational of the form $\\frac{p}{q}$, as the approximation of $\\alpha$. A quick thought will convince you that the error introduced by this method is at most half of the interval size, which is $\\frac{1}{2q}$. That is, for any $q$ that we choose, if we choose $p$ accordingly as above,\n$$ \\left| \\alpha - \\frac{p}{q} \\right| \\le \\frac{1}{2q} $$\nThus, for a given $\\epsilon$, we should choose $q$ such that $\\frac{1}{2q} < \\epsilon$ to get the required accuracy. In other words, $q$ grows linearly with $\\frac{1}{\\epsilon}$. Just to get a sense of this growth, if $\\epsilon$ is given to be $0.0001$, then we should choose $q$ to be roughly 5000. \n\n\\section{Dirichlet's Approximation Principle}\n\nIndeed, we would have preferred a smaller $q$, due to the above mentioned representation cost. Dirichlet's approximation principle improves exactly the above, and it is a nice application of the pigeon hole principle.\n\n\\begin{theorem}[{\\bf Dirichlet's Approximation Principle}]\nFor every irrational number $\\alpha$, there are $p, q \\in \\Z$ with $q \\ge 1$ such that:\n$$\\left| \\alpha - \\frac{p}{q} \\right| < \\frac{1}{q^2}$$\n\\end{theorem}\n\nA few remarks about the improvement are due. Now $q$ needs to grow only like the square root of $\\frac{1}{\\epsilon}$. To check the numbers, suppose $\\epsilon$ is given to be $0.0001$; then we can afford to choose $q$ to be just 100, as opposed to 5000.  We will now prove the above theorem:\n\n\\begin{proof}\nLet $\\alpha$ be the irrational number that we are interested in approximating.\nThe first observation is that it is sufficient to prove that $\\exists p,q \\in \\Z$ with $q \\ge 1$ such that\n$$\\left| q\\alpha - p \\right| < \\frac{1}{q}$$\nsince dividing by $q$ then gives the theorem. In other words, we need to understand the nearest integer to the quantity $q\\alpha$. Intuitively, the fractional part of $q \\alpha$ plays a role in this, which we will study now.\n\nFix a positive integer $N$ (we will choose this later) and consider the numbers $0, \\alpha, 2\\alpha, \\ldots N\\alpha$. Eventually we will choose $q$ to be at most $N$, so one of these numbers will be $q \\alpha$. Since we are interested in the fractional parts, let us distribute them into intervals as we did in the naive case.\n\nConsider the interval $[0,1)$ divided into subintervals of the form:\n$$ \\left[\\left.0,\\frac{1}{N}\\right.\\right),\\left[\\left.\\frac{1}{N},\\frac{2}{N}\\right.\\right) \\ldots ,\\left[\\left.\\frac{N-1}{N},1\\right.\\right) $$\n\nThere are $N$ intervals in this list. If we distribute the fractional parts of the $N+1$ numbers $0, \\alpha, 2\\alpha, \\ldots N\\alpha$ into this list, by PHP we have that there must be two multiples of $\\alpha$ whose fractional parts fall within the same interval. In other words, there exist $a > b$ in $\\{0,1, \\ldots N\\}$ such that:\n$$ \\left|\\{a\\alpha\\} - \\{b \\alpha\\}\\right| < \\frac{1}{N}$$\nJust to fast forward, the idea is to demonstrate that the choice of $q = a-b$ actually works for our purpose. To do this, we will show that the nearest integer to $a \\alpha  - b \\alpha$ is at most $\\frac{1}{a-b}$ away from it, and that integer will be our $p$. Since $a-b$ is at most $N$, it is sufficient to show that there is an integer within $\\left|\\{a \\alpha \\} - \\{b \\alpha\\}\\right|$ of $a \\alpha  - b \\alpha$. By the above, this quantity is less than $\\frac{1}{N}$, which in turn is at most $\\frac{1}{a-b}$. 
This is in fact a general statement, which we can prove as follows:\n\n\\begin{lemma}\n\\label{lem:a-b}\nLet $A$ and $B$ be two real numbers. There is an integer $p$ close to $|A-B|$, namely one with:\n$$d(p,|A-B|) \\le \\left|\\{A\\} - \\{B\\}\\right| $$\nwhere $d(s,t)$ denotes $|s-t|$.\n\\end{lemma}\n\\begin{proof}\nThe idea is very simple. Let us write\n$A = A_1+A_2 \\textrm{  and } B = B_1+B_2$\nwhere $A_1, B_1$ are the integer parts and $A_2, B_2$ are the fractional parts. If $A_2 \\ge B_2$, then $A - B = (A_1 - B_1) + (A_2 - B_2)$ with $0 \\le A_2 - B_2 = \\left|\\{A\\} - \\{B\\}\\right| < 1$, and hence $|A-B|$ is within $\\left|\\{A\\} - \\{B\\}\\right|$ of the integer $|A_1 -B_1|$. The other case works in a similar way.\n\\end{proof}\n\nApplying Lemma~\\ref{lem:a-b} to the case when $A = a\\alpha$ and $B = b \\alpha$ gives us that there is an integer $p$ such that \n$$d(p,|a\\alpha - b\\alpha|) \\le |\\{a\\alpha\\} - \\{b\\alpha\\}| < \\frac{1}{N} \\le \\frac{1}{a-b} = \\frac{1}{q}$$\nThus, there is a $p$ (as claimed by Lemma~\\ref{lem:a-b}) and a $q$ (which is equal to $a-b$, and which exists as per the PHP application) such that:\n$$|q\\alpha - p| < \\frac{1}{q}$$\nThis completes the proof of the theorem.\n\\end{proof}
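\n\nThe proof is constructive, and it translates directly into a short computation. The following Python sketch (illustrative; it mirrors the pigeonhole step of the proof using floating point arithmetic) finds such a pair $(p, q)$ for $\\alpha = \\sqrt{2}$:\n\\begin{verbatim}\nimport math\n\n# Among 0, alpha, 2*alpha, ..., N*alpha, two multiples must have\n# fractional parts in the same subinterval of width 1/N.\ndef dirichlet_approx(alpha, N):\n    buckets = {}\n    for k in range(N + 1):\n        idx = int((k * alpha) % 1.0 * N)  # which subinterval\n        if idx in buckets:\n            q = k - buckets[idx]\n            p = round(q * alpha)          # nearest integer to q*alpha\n            return p, q\n        buckets[idx] = k\n\np, q = dirichlet_approx(math.sqrt(2), 1000)\nassert abs(math.sqrt(2) - p / q) < 1 / q ** 2\nprint(p, q)\n\\end{verbatim}\n\n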
\\begin{exercise-prob}[See Problem Set 1~(Problem \\ref{simultaneous-approx})]\n\\begin{show-ps1}{simultaneous-approx}\nLet $\\alpha_1, \\alpha_2, \\ldots \\alpha_k$ be $k$ real numbers. Generalizing the Dirichlet's approximation principle argument that we did in class, using PHP again, prove that there must exist integers $p_1, p_2, \\ldots p_k$ and $q$ such that:\n$$\\forall i,~\\left| \\alpha_i - \\frac{p_i}{q} \\right| < \\frac{1}{q^{1+\\frac{1}{k}}}$$\n\\end{show-ps1}\n\\end{exercise-prob}\n\n\\begin{remark}\nThe proof of the theorem proves something stronger. It actually gives a way to get many $q$'s which achieve the error bound. Notice that the choice of $N$ was free in the proof, and the $q$ that we end up choosing is $a-b$, which is at most $N$. Hence, suppose that we already have a $p$ and $q$ in hand. If we run the proof choosing $N$ to be large enough that:\n$$\\frac{1}{N} < |q\\alpha - p|$$\nthen necessarily the new $p$ and $q$ that the proof gives will be different (and the new $q$ needs to be larger). By repeating this, we can produce a new $p$ and $q$, and so on. This gives us a way to produce infinitely many pairs $p$ and $q$ such that:\n$$\\left| \\alpha - \\frac{p}{q} \\right| < \\frac{1}{q^2} $$\n\\end{remark}\n\nWe were clearly motivated to improve the denominator of $2q$ in the naive attempt to $q^2$ in the denominator in the rational approximation principle. Can this be improved further? The following curiosity remark says otherwise.\n\n\\begin{curiosity}[{\\bf Tightness of Dirichlet's Approximation Principle - Roth's Theorem}]\nLet $\\alpha$ be any algebraic irrational number (one which can be expressed as the root of a polynomial with coefficients from $\\mathbb{Q}$). For every $\\epsilon > 0$, the inequality\n$$\\left| \\alpha - \\frac{p}{q} \\right| < \\frac{1}{q^{2+\\epsilon}}$$\ncan hold true only for finitely many co-prime pairs $(p,q)$. This says that Dirichlet's approximation principle cannot be improved (for infinitely many $p$ and $q$) to a larger power of $q$ in the denominator.\n\\end{curiosity}\n\nThe above remark says that we cannot improve the exponent in Dirichlet's approximation principle. Can we improve by having a larger constant multiplier for the $q^2$ in the denominator? Even this has a limit, and it leads to classifying irrational numbers using what are called the \\textit{Lagrange numbers}.\n\n\\begin{curiosity}[{\\bf Hurwitz Theorem and Irrationality Measures}]\nThis is an improvement of the above principle. For every irrational number $\\alpha$, there are infinitely many relatively prime integers $p$ and $q$ such that:\n$$\\left| \\alpha - \\frac{p}{q} \\right| < \\frac{1}{\\sqrt{5}q^2}$$\nThe $\\sqrt{5}$ in the denominator is the best possible. If we make it greater than $\\sqrt{5}$, then there is a counter example - consider the irrational number $\\frac{1+\\sqrt{5}}{2}$ (the golden ratio). It can be shown that for this number only finitely many relatively prime integers $p$ and $q$ satisfy the strengthened inequality (this is done through arguments about continued fraction representations). If we avoid the \\textit{golden ratio} and some similar irrational numbers, then we can improve the denominator to $\\sqrt{8}$. If we further avoid the \\textit{silver ratio} ($1+\\sqrt{2}$) and its associated irrational numbers, then we can improve this to $\\frac{\\sqrt{221}}{5}$. In general, the bound is of the form:\n$$\\left| \\alpha - \\frac{p}{q} \\right| < \\frac{1}{L_nq^2}$$\nwhere the numbers $L_n$ (called the \\textit{Lagrange numbers}) steadily increase as more of these exceptional irrational numbers are excluded. These can also be viewed as measures of \"how irrational the number is\". \n\\end{curiosity}\n\n\\Lecture{Jayalal Sarma}{Sept 14, 2020}{04}{Counting by Bijections and Double Counting Principle}{Jayalal Sarma}{$\\gamma$}{JS}\n\nWe now quickly review the basic tools from counting. Permutations and combinations form the basics from discrete mathematics that we rely upon. We will stress the aspects that are critical for the rest of the course. The first tool that we will demonstrate in detail is the power of counting by using bijections.\n\n\\section{Basic Examples of Counting by Bijections}\n\nThe cardinalities of two sets are said to be the same if there is a bijection between the two. Indeed, for finite sets the notion of cardinality matches that of size, while it can be deceiving for infinite sets\\footnote{For infinite sets, there are notions of countability and uncountability of sets which we will not discuss here.}. We will concentrate on finite sets in this part and use bijections to establish combinatorial counts.\n\nWe start with something that we are all familiar with, in order to bring out the nuances involved in proofs by bijection. Notice that we know how to count this object even otherwise, by other means, but this is just a starting example.\n\n\\begin{proposition}\nThe number of subsets of a set of $n$ elements is exactly $2^n$.\n\\end{proposition}\n\\begin{proof}\nLet $S$ be the given set of $n$ elements. Without loss of generality, let us assume that $S = \\{1,2,\\ldots, n\\}$. The bijection is nothing but the well-known idea of the characteristic vector of a set.\n\nWe establish a bijection between the following two sets:\n$$ \\phi : \\left\\{ A : \\begin{array}{c} A \\textrm{ is a subset of } \\\\ \\textrm{ the set $S$ } \\end{array} \\right\\} \\to \\left\\{ x : \\begin{array}{c} x \\textrm{ is a string of length $n$} \\\\ \\textrm{ over alphabet $\\{0,1\\}$ } \\end{array} \\right\\}$$\nWe first define the function as follows. 
Let $A$ be any subset of $S$, and define the string $w = \\phi(A)$ as the $n$-bit string where for every $1 \\le i \\le n$:\n\\[\nw_i = \n\\begin{cases}\n0 & \\textrm{ if $i \\notin A$ }\\\\\n1 & \\textrm{ if $i \\in A$ }\n\\end{cases}\n\\]\n\nNotice that the function $\\phi$ is well-defined (this may have to be checked explicitly for certain bijections that we define), since we are defining the bit $w_i$ for every $i \\in [n]$. \n\nWe now argue that it is an injection. Suppose that $A, B \\subseteq S$ but $A \\ne B$. Then there is an $i \\in S$ which belongs to exactly one of the two sets; say $i \\in A$ but $i \\notin B$. By the above definition, the $i$-th bit of $\\phi(A)$ will be $1$ while the $i$-th bit of $\\phi(B)$ will be $0$. This implies $\\phi(A) \\ne \\phi(B)$.\n\nWe also show that $\\phi$ is a surjection. Given any $w \\in \\{0,1\\}^n$, we can define a pre-image $A \\subseteq S$ as $A = \\{ i \\mid w_i = 1 \\}$. By definition, $\\phi(A) = w$ and hence $w$ has a pre-image. This shows $\\phi$ is a surjection.\n\n\\end{proof}
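\n\nThe characteristic-vector bijection is easy to exercise in code. A short Python sketch (illustrative) that round-trips all subsets for a small $n$:\n\\begin{verbatim}\nfrom itertools import product\n\n# Subsets of {1, ..., n} <-> n-bit strings, via characteristic vectors.\ndef to_string(A, n):\n    return ''.join('1' if i in A else '0' for i in range(1, n + 1))\n\ndef to_subset(w):\n    return {i + 1 for i, bit in enumerate(w) if bit == '1'}\n\nn = 4\nstrings = [''.join(bits) for bits in product('01', repeat=n)]\n# Round-tripping every string shows the map is a bijection,\n# so the number of subsets equals the number of strings: 2^n.\nassert all(to_string(to_subset(w), n) == w for w in strings)\nassert len(strings) == 2 ** n\n\\end{verbatim}\n\n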
Let us argue a slight variant of the above example now. While there are other ways to establish this, we insist on using the method of counting by bijections.\n\n\\begin{proposition}\nThe number of even sized subsets of $[n]$ is equal to the number of odd sized subsets, and both are equal to $2^{n-1}$.\n\\end{proposition}\n\\begin{proof}\nBy observing that the bijection that we defined in the previous proof has the additional feature that the number of $1$s in $\\phi(A)$ is exactly the cardinality of $A$, we can conclude that it is sufficient to establish a bijection between the following two sets:\n$$ \\psi : \\left\\{ x : \\begin{array}{c} x \\in \\{0,1\\}^n \\textrm{ having } \\\\ \\textrm{even no. of 1s in it}  \\end{array} \\right\\} \\to \\left\\{ w : \\begin{array}{c}  \\textrm{ $w \\in \\{0,1\\}^n$ having} \\\\ \\textrm{odd no. of 1s in it}  \\end{array} \\right\\}$$\nFix any $i \\in [n]$; we define a bijection with respect to $i$ (this says there are actually $n$ bijections between the two sets above, not just one!). Technically, we should be writing $\\psi_i$, but we drop the subscript since it is not critical for the presentation. \n\\begin{description}\n\\item{\\bf Definition:} We define the bijection as follows : let $e_i$ denote the string which has $1$ in the $i$-th position and $0$ elsewhere.\n$$\\psi(x) = x \\oplus e_i $$\nwhere $\\oplus$ denotes bitwise xor, producing an $n$ bit string.\n\\item{\\bf well-defined:} We explicitly check whether the function is well-defined. Indeed, consider any $x \\in \\{0,1\\}^n$ which has an even number of $1$s in it. The operation $x \\oplus e_i$ produces a string $w \\in \\{0,1\\}^n$. Since the $i$-th bit is flipped, $w$ must necessarily have an odd number of 1s in it.\n\\item{\\bf injection:} \nWe show that $\\psi$ is an injection. Consider $x, x' \\in \\{0,1\\}^n$ such that $x \\ne x'$.  There must exist an index $j$ in which they differ. We have two cases:\n\\begin{description}\n\\item{{\\bf Case 1:} $j = i$ :} Indeed, since the $i$-th bit is flipped by the mapping, the images $w = \\psi(x)$ and $w' = \\psi(x')$ must also have their $j$-th bit different. Hence $\\psi(x) \\ne \\psi(x')$.\n\\item{{\\bf Case 2:} $j \\ne i$ :} Since the operation does not change any bit other than the $i$-th one, the images $w = \\psi(x)$ and $w' = \\psi(x')$ must also have their $j$-th bit different. Hence $\\psi(x) \\ne \\psi(x')$.\n\\end{description}\nHence, we conclude that $\\psi$ is injective.\n\\item{\\bf surjection:} Given any $w \\in \\{0,1\\}^n$ which has odd weight, we exhibit $x \\in \\{0,1\\}^n$ such that $\\psi(x) = w$. Indeed, defining $x = w \\oplus e_i$ meets the requirement: $x$ has even weight, and $\\psi(x) = w$. Hence $\\psi$ is surjective.\n\\end{description}\nHence we conclude that $\\psi$ is a bijection and that the two sets must be of the same cardinality. Since the two sets are disjoint and their union is of size $2^n$, it must be that both of them are of size $2^{n-1}$. This concludes the proof.\n\\end{proof}\n\nNote that in the above proof, we wrote down the steps in proving the bijection explicitly. It is somewhat standard to skip over the ones which are obvious from the definitions, but it is a good practice to write these down in a formal proof so that the argument is not prone to errors.\n\n\\subsection{Discussion Session - Counting Cyclic Triplets}\n\nWe considered the following counting question in the discussion session, to demonstrate that simple bijections and the idea of associating combinatorial objects form a very powerful combinatorial technique.\n\n\\begin{problem}\nImagine there are $2n+1$ players in a round-robin tournament. That is, each player plays against every other player. Assume that there are no ties in any match. We say that players $\\{a,b,c\\}$ form a cyclic triplet if $a$ beats $b$, $b$ beats $c$, and $c$ beats $a$ (note that the order does not matter). We want to count the maximum number of cyclic triplets that is possible in the tournament.\n\\end{problem}\n\\begin{proof}\nIt is natural to model the above using graphs, where each vertex is a player and $(i,j)$ is a directed edge if player $i$ beats player $j$. This gives a directed graph with $2n+1$ vertices whose underlying undirected graph is a complete graph on $2n+1$ vertices. Such a graph is also called a tournament. Cyclic triplets can naturally be associated with directed triangles in these graphs. So the question we are addressing is equivalently stated as: {\\em in any tournament on $2n+1$ vertices, what is the maximum number of cyclic triangles?}\n\nThe idea is to look at associated objects called \"corners\" of the triangles. There are ${2n+1 \\choose 3}$ triangles, and hence there are $3 {2n+1 \\choose 3}$ many corners. The corners can be classified into three types.\n\\begin{description}\n\\item{\\bf Type 1:} A corner where both edges are incoming edges.\n\\item{\\bf Type 2:} A corner where both edges are outgoing edges.\n\\item{\\bf Type 3:} A corner where one edge is incoming and the other is outgoing.\n\\end{description}\n\nWhat happens in a cyclic triplet? It forms a directed triangle, hence all three of its corners are of Type 3. A non-cyclic triplet has exactly one corner of each type. This also establishes a bijection between the Type 1 and Type 2 corners, as follows.\n\n$$ \\psi : \\left\\{ \\begin{array}{c} \\textrm{Type 1} \\\\ \\textrm{Corners}  \\end{array} \\right\\} \\to \\left\\{ \\begin{array}{c} \\textrm{Type 2} \\\\ \\textrm{Corners}  \\end{array} \\right\\} $$\n\ndefined as follows: given any corner $c$ of Type 1, define $\\psi(c)$ as the Type 2 corner that appears in the (undirected) triangle that this corner appears in. This is well defined because (1) $c$ cannot appear in a triangle corresponding to a cyclic triplet, and (2) exactly one Type 2 corner appears in a triangle corresponding to a non-cyclic triplet. 
This uniquely assigns a Type 2 corner to $c$ and makes the function well-defined, injective and surjective (since the process can be reversed too). In fact, the same bijection also proves that the number of non-cyclic triplets is exactly the number of Type 1 corners, and also exactly equal to the number of Type 2 corners. Hence it is sufficient to count the number of Type 1 corners. \n\nSo now the strategy is clearer. To obtain an upper bound on the number of cyclic triplets, we express it as ${2n+1 \\choose 3}$ minus the number of non-cyclic triplets. Then it suffices to obtain a lower bound for the number of non-cyclic triplets. Hence it suffices to obtain a lower bound on the number of Type 1 corners (or equivalently, the number of Type 2 corners).\n\nOne natural attempt is to count the Type 1 corners alone. If $d_i$ is the in-degree of the $i$-th vertex, then the number of Type 1 corners is exactly:\n\n$$t_1 = \\bigsum_{i=1}^{2n+1} {d_i \\choose 2}$$\n\nHowever, it is unclear how to get a reasonably tight lower bound for this quantity. \nInstead, the trick is to count the Type 1 and Type 2 corners simultaneously as :\n$$t = t_1 + t_2 = \\bigsum_{i=1}^{2n+1} \\left[ {d_i \\choose 2}+ {2n-d_i \\choose 2} \\right] $$\nsince the out-degree of the $i$-th vertex is $2n - d_i$. Since $t_1 = t_2$, the number of non-cyclic triplets equals $t_1 = \\frac{t}{2}$, which is:\n\n$$\\frac{1}{2}\\bigsum_{i=1}^{2n+1} \\left[ {d_i \\choose 2}+{2n-d_i \\choose 2} \\right] $$\nSince we need a lower bound for this, we can use the fact (by convexity) that each summand is minimised when the two top arguments are equal, that is, when $d_i = n$. Hence the number of non-cyclic triplets is at least:\n\\begin{eqnarray*}\n\\frac{1}{2}\\bigsum_{i=1}^{2n+1} \\left[ {n \\choose 2}+{n \\choose 2} \\right] & = & \\frac{n(n-1)(2n+1)}{2} \\\\\n\\textrm{\\# of Cyclic Triplets} & \\le & {2n+1 \\choose 3}  - \\frac{n(n-1)(2n+1)}{2} \\\\\n& = & \\frac{n(n+1)(2n+1)}{6}\n\\end{eqnarray*}\n\nA natural question is whether this is tight, or whether we had any slackness in the counting. The count will be tight if $d_i = n$ can be achieved for all vertices. This can be done by the following explicit graph, for which the number of cyclic triplets will hence be exactly equal to $\\frac{n(n+1)(2n+1)}{6}$, showing that the above bound is tight.\n\nThe construction is as follows. Consider the vertices $1, 2, \\ldots ,2n+1$. Define $(i,j) \\in E$ if $(i-j) \\mod (2n+1)$ is in $[n]$. This ensures that the in-degree and the out-degree of every vertex is exactly $n$, and hence the graph achieves the above bound.\n\\end{proof}
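\n\nThe construction is concrete enough to check by machine. A Python sketch (illustrative) that builds the rotational tournament above and counts its cyclic triplets against the bound:\n\\begin{verbatim}\nfrom itertools import combinations\n\n# Rotational tournament on 2n+1 players: i beats j when\n# (i - j) mod (2n+1) lies in {1, ..., n}.\ndef cyclic_triplets(n):\n    m = 2 * n + 1\n    beats = lambda i, j: (i - j) % m in range(1, n + 1)\n    count = 0\n    for a, b, c in combinations(range(m), 3):\n        wins = beats(a, b) + beats(b, c) + beats(c, a)\n        # cyclic triplets are exactly those oriented all one way\n        if wins in (0, 3):\n            count += 1\n    return count\n\nfor n in range(1, 7):\n    assert cyclic_triplets(n) == n * (n + 1) * (2 * n + 1) // 6\n\\end{verbatim}\n\n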
\\section{From Bijections to Double Counting}\n\nWe will now introduce a new technique called {\\em double counting}, which has the method of bijections as its backbone. \n\n\\paragraph{Double Counting Method:} The method can be presented as follows. We design one combinatorial (mostly counting) question, which we then answer in two distinct (but provably correct) ways. Since the two answers are answers to the same counting problem, it is logical to equate them, and such an equality gives relations that are otherwise not apparent. \n\nThe whole idea can be viewed as a method of bijection itself. In many situations, the double counting may also reveal an implicit bijection between the two different ways of answering the question. While this is not necessary for the double counting method, it is revealing to think about the underlying bijection.\n\nThis is a very elegant and powerful tool. The creativity in the proof lies in designing the right question. Indeed, \\textit{asking the right question is often more than half way through constructing a mathematical proof}!\nWe demonstrate this by a simple example first.\n\n\\begin{proposition}\nFor any $k \\le n$,\n$${n \\choose k} = {n \\choose n-k}$$\n\\end{proposition}\n\\begin{proof}\nThe combinatorial counting question in this case can be the following:\n\\begin{description}\n\\item{\\bf Q:} In how many ways can we form a committee of size $k$ from a set of $n$ people?\n\\item{\\bf A1:} Directly choose the $k$ committee members from the $n$ people. By definition, there are ${n \\choose k}$ ways of doing this.\n\\item{\\bf  A2:} Choose the $n-k$ non-members of the committee from the $n$ people, and declare the remaining to be the committee members. There are ${n \\choose n-k}$ ways of doing this too.\n\\end{description}\nThis completes the argument. Although not required for the proof, for the curious mind, the underlying bijection revealed here is the complementation of the set.\n\\end{proof}\n\nNote that there is an easy algebraic way of arguing the above identity. But as the expressions get more complicated, this proof technique is more revealing and elegant.\n\n\\begin{proposition}\nFor any $n \\ge 1$ and $1 \\le k \\le n$,\n$${n \\choose k} = {n-1 \\choose k} + {n-1 \\choose k-1}$$\n\\end{proposition}\n\\begin{proof}\nWe can reuse the question itself from the proof of the earlier proposition.\n\\begin{description}\n\\item{\\bf Q:} In how many ways can we form a committee of size $k$ from a set of $n$ people?\n\\item{\\bf A1:} Directly choose the $k$ committee members from the $n$ people. By definition, there are ${n \\choose k}$ ways of doing this.\n\\item{\\bf  A2:} Let the potential members be $\\{1,2, \\ldots n\\}$. Classify the ways of choosing $k$ committee members into two: the ones that include $n$ and the ones that do not include $n$. Since these two kinds of committees are never the same, we can count both types and add them. More formally, this is expressed as: \\textit{condition on whether $n$ is in the committee or not}. If $n$ is in the committee, then only $k-1$ remaining members of the committee need to be chosen from the remaining $n-1$ people available to choose from - which gives ${n-1 \\choose k-1}$ ways of doing it. On the other hand, if $n$ is not in the committee, then there are still $k$ members to be chosen from the $n-1$ potential members to choose from - this gives ${n-1 \\choose k}$ as the number of possible ways. Adding these two gives the RHS as the second answer to the counting question.\n\\end{description}\nThis completes the argument.\n\\end{proof}
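\n\nThese identities, including the two that are proved next, are easy to spot-check numerically before hunting for the right counting question. A Python sketch (illustrative):\n\\begin{verbatim}\nfrom math import comb\n\n# Spot-check the double-counting identities for small n and k.\nfor n in range(1, 12):\n    for k in range(n + 1):\n        assert comb(n, k) == comb(n, n - k)\n        if k >= 1:\n            assert comb(n, k) == comb(n - 1, k) + comb(n - 1, k - 1)\n            assert k * comb(n, k) == n * comb(n - 1, k - 1)\n        # the hockey-stick identity proved below\n        assert sum(comb(m, k) for m in range(k, n + 1)) == comb(n + 1, k + 1)\n\\end{verbatim}\n\n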
\\begin{proposition}\nFor $1 \\le k \\le n$,\n$$ k{n \\choose k} = n {n-1 \\choose k-1}$$\n\\end{proposition}\n\\begin{proof}\nWe can almost reuse the question itself from the proof of the earlier proposition.\n\\begin{description}\n\\item{\\bf Q:} In how many ways can we form a committee of size $k$ from a set of $n$ people, and then choose a chair of the committee (who is also a part of the committee)?\n\\item{\\bf A1:} Directly choose the $k$ committee members from the $n$ people. By definition, there are ${n \\choose k}$ ways of doing this. Then, among the members chosen, choose a chair for the committee, which can be done in $k$ different ways. This gives $k {n \\choose k}$ ways of completing the task, which is equal to the LHS.\n\\item{\\bf  A2:} First choose the chair from the potential members of the committee. This can be done in $n$ ways. And then choose the remaining $k-1$ members of the committee from the remaining $n-1$ potential members. This can be done in ${n-1 \\choose k-1}$ ways.\n\\end{description}\nThis completes the argument.\n\\end{proof}\n\n\\begin{exercise}\nProve the following identities using the double counting method:\n$$\\sum_{k=0}^n k {n \\choose k} = n 2^{n-1} \\textrm{\\hspace{15mm}} {m+n \\choose k} = \\sum_{i=0}^k {m \\choose i}{n \\choose k-i} \\textrm{\\hspace{15mm}} {n \\choose k}{k \\choose m} = {n \\choose m}{n-m \\choose k-m}$$\n\\end{exercise}\n\n\\begin{proposition}\n$$\\sum_{m=k}^n {m \\choose k} = {n+1 \\choose k+1}$$\n\\end{proposition}\n\\begin{proof}\nWe need to modify the question slightly here.\n\\begin{description}\n\\item{\\bf Q:} In how many ways can we choose $k+1$ numbers from the set $\\{1, 2, \\ldots, n+1\\}$?\n\\item{\\bf A2:} The RHS is immediate by definition.\n\\item{\\bf  A1:} Count by conditioning on the largest element chosen in the set. Note that a subset cannot be counted against two different largest elements, since the largest element of a given set is uniquely defined. Now, for a fixed largest element $m+1$, the number of ways of choosing the remaining elements is given by the number of ways of choosing $k$ elements from the set $\\{1, 2, \\ldots m\\}$, since $m+1$ is the largest. This gives ${m \\choose k}$ ways of completing the task when the largest element is $m+1$. Since $m$ has to be at least $k$ and can be at most $n$, this gives the number of ways of choosing a set of $k+1$ numbers from the set to be :\n$$\\sum_{m=k}^n {m \\choose k}$$\nwhich matches the LHS.\n\\end{description}\nThis completes the argument.\n\\end{proof}\nUse a similar argument to do the following:\n\n\n\\begin{exercise-prob}[See Problem Set 1~(Problem \\ref{double-counting})]\n\\begin{show-ps1}{double-counting}\nUse a double counting argument to establish the following identity : \\\\\n$$ \\sum_{m=k}^{n-k} {m \\choose k} {n-m \\choose k} = {n+1 \\choose 2k+1} \\textrm{ ~~~where~~ $0 \\le k \\le \\frac{n}{2}$}\n$$\nGeneralize the idea to prove :\n$$ \\sum_{j=r}^{n+r-k} {j-1 \\choose r-1} {n-j \\choose k-r} = {n \\choose k} \\textrm{ ~~~where~~ $1 \\le r \\le k$}\n$$\n\\end{show-ps1}\n\\end{exercise-prob}", "meta": {"hexsha": "c2091f47ba3b9f7312e2d061ecc2e3e51273bfe6", "size": 62293, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "week01.tex", "max_stars_repo_name": "pot8ohead/theory-toolkit", "max_stars_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week01.tex", "max_issues_repo_name": "pot8ohead/theory-toolkit", "max_issues_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-10-08T07:34:26.000Z", "max_issues_repo_issues_event_max_datetime": "2020-11-30T06:06:12.000Z", "max_forks_repo_path": "week01.tex", "max_forks_repo_name": "pot8ohead/theory-toolkit", "max_forks_repo_head_hexsha": "177249454691f7e264d9ee7d5a354e180a54e095", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2020-09-25T01:35:07.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-28T11:22:06.000Z", "avg_line_length": 108.5243902439, "max_line_length": 1441, "alphanum_fraction": 0.7363909267, "num_tokens": 17119, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5563116372965229}}
{"text": "\\input{includes/lab_preamble}\n\n\\def\\LabCourse{AP Computer Science A}\n\\def\\LabNumber{01}\n\\def\\LabTitle{Calendar Lab}\n\n\\begin{document}\n\t\\begin{coverpages}\n\t\t\\ \\\\[2cm]\n\t\t\\begin{center}\n\t\t\t\\huge\n\t\t\t\\textbf{\\LabTitle}\n\n\t\t\t\\Large\n\t\t\t\\LabCourse\n\t\t\\end{center}\n\n\t\t\\vspace{1.5cm}\n\n\t\t\\begin{center}\n\t\t\t\\includegraphics[scale=0.45]{graphics/logo_black}\n\n\t\t\t\\vspace{2.5cm}\n\n\t\t\t\\Large\n\t\t\tName: \\rule{11.5cm}{0.1pt}\n\t\t\\end{center}\n\t\\end{coverpages}\n\n\t\\thispagestyle{empty}\n\t\\tableofcontents\n\n\t\\pagebreak\n\n\t\\section{Background}\n\t\tIn this lab, you will be creating a method for finding the day of the week for any given date (day, month, year) and will use that method for generating a full calendar display for a given month.\\\\[\\baselineskip]\n\t\tThere have been a number of different algorithms developed to calculate what day of the week a given past or future date falls on. Although many of these methods require fairly complex look-up tables, a formula has been developed that can be used to directly calculate the day of the week without having to store or process these tables. This formula is given below:\n\n\t\t\\[\n\t\t\tw = \\left(d + \\floor{2.6m - 0.2} + y + \\floor{\\dfrac{y}{4}} + \\floor{\\dfrac{c}{4}} - 2c\\right) \\mathbf{mod}\\ 7\n\t\t\\]\n\n\t\tNote the following:\n\t\t\\begin{itemize}\n\t\t\t\\item $\\floor{x}$ is the \\emph{floor} operator, accessible via \\code{Math.floor(x)}\n\t\t\t\\item $\\mathbf{mod}$ is the \\emph{modulus} operator (\\code{\\%})\n\t\t\t\\item $Y$ is the year minus $1$ for January or February\n\t\t\t\\item $y$ is the last $2$ digits of $Y$\n\t\t\t\\item $c$ is the first $2$ digits of $Y$\n\t\t\t\\item $d$ is the day of the month ($1$ to $31$)\n\t\t\t\\item $m$ is the shifted month (March=1,...,February=12)\n\t\t\t\\item $w$ is the day of the week (0=Sunday,...,6=Saturday)\n\t\t\\end{itemize}\n\n\t\tRemarkably, this method will work regardless of whether or not a given year is a leap year. Consider the following examples.\n\n\t\t\\subsection{Examples}\n\t\t\t\\EBox{July 9, 1983}{\n\t\t\t\t\\[\n\t\t\t\t\tw = \\left(9 + \\floor{2.6(5) - 0.2} + 83 + \\floor{\\dfrac{83}{4}} + \\floor{\\dfrac{19}{4}} - 2(19)\\right) \\mathbf{mod}\\ 7 = 6\n\t\t\t\t\\]\n\t\t\t\t\\textbf{Result:} \\emph{Saturday}\n\t\t\t}\n\t\t\t\\ \\\\[18pt]\n\t\t\t\\EBox{April 15, 2016}{\n\t\t\t\t\\[\n\t\t\t\t\tw = \\left(15 + \\floor{2.6(2) - 0.2} + 16 + \\floor{\\dfrac{16}{4}} + \\floor{\\dfrac{20}{4}} - 2(20)\\right) \\mathbf{mod}\\ 7 = 5\n\t\t\t\t\\]\n\t\t\t\t\\textbf{Result:} \\emph{Friday}\n\t\t\t}\n\t\t\t\\ \\\\[18pt]\n\t\t\t\\EBox{January 30, 2000}{\n\t\t\t\t\\[\n\t\t\t\t\tw = \\left(30 + \\floor{2.6(11) - 0.2} + 99 + \\floor{\\dfrac{99}{4}} + \\floor{\\dfrac{11}{4}} - 2(19)\\right) \\mathbf{mod}\\ 7 = 0\n\t\t\t\t\\]\n\t\t\t\t\\textbf{Result:} \\emph{Sunday}\n\t\t\t}\n\n\t\\pagebreak\n\n\t\\section{Applications}\n\t\t\\QBox{Use the formula described in the background to calculate the day of the week for your birthday this year. Then, calculate the day of the week for your original birthday.}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{Explain what happens when you attempt to calculate the day of the week for a non-existent day (February 29, 2017 or July 33, 2020). Is the result consisitent with what you would expect? Explain why or why not.}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{In September, 1752, Britain and its colonies began using the Gregorian calendar. 
\n\n\t\\pagebreak\n\n\t\\section{Applications}\n\t\t\\QBox{Use the formula described in the background to calculate the day of the week for your birthday this year. Then, calculate the day of the week for your original birthday.}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{Explain what happens when you attempt to calculate the day of the week for a non-existent day (February 29, 2017 or July 33, 2020). Is the result consistent with what you would expect? Explain why or why not.}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{In September, 1752, Britain and its colonies began using the Gregorian calendar. This caused Thursday, September 14, 1752 to be preceded by Wednesday, September 2, 1752. Will the formula described in the background take this calendar switch into account? Explain why or why not.}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{What modification, if any, could you make to the formula described in the background to allow for the beginning of the week ($w = 0$) to be Monday?}{4cm} \\pagebreak\n\n\t\\section{Activity \\#1}\n\t\t\\subsection{Introduction}\n\t\t\tIn this activity, you will be creating two methods: one that will use the formula described in the background and return the numeric value of $w$ and one that will return the day of the week as a string. These methods will be important for the second activity.\n\n\t\t\\subsection{Exercises}\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item Implement the \\code{calculateDayOfWeek()} method. This method should take as parameters the \\code{day}, \\code{month}, and \\code{year} for the desired date and return the day of the week for that date as a number between 0 and 6.\n\t\t\t\t\\item Implement the \\code{getDayOfWeek()} method. This method should take as parameters the \\code{day}, \\code{month}, and \\code{year} for the desired date and return the day of the week as a \\code{String} (``Monday'', ``Tuesday'', etc.).\n\n\t\t\t\t\t\t\t{\\small\\textbf{Note:} Your \\code{getDayOfWeek()} method should use the \\code{calculateDayOfWeek()} method previously implemented. A minimal illustrative sketch is shown after the questions below.}\n\t\t\t\\end{enumerate}\n\n\t\t\\subsection{Questions}\n\t\t\t\\QBox{Briefly explain the method you used to calculate $y$ and $c$ from the \\code{year} variable that was passed.}{5cm}\n\t\t\t\\ \\\\[9pt]\n\t\t\t\\QBox{Why do you think it is considered ``better practice'' to use the \\code{calculateDayOfWeek()} method within the implementation of \\code{getDayOfWeek()} rather than just repeating the use of the formula?}{5cm}
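\n\n\t\t\\subsection{Illustrative Sketch}\n\t\t\tThe following is a minimal sketch of one possible implementation (added for illustration only; the structure and variable names are our own and may differ from the official \\code{Calendar.java} template):\n\n\\begin{lstlisting}[basicstyle=\\small\\ttfamily,tabsize=2]\n// Returns the day of the week (0=Sunday,...,6=Saturday) for a date.\npublic static int calculateDayOfWeek(int day, int month, int year) {\n\tint m = (month + 9) % 12 + 1;          // shifted month: March=1,...,February=12\n\tint bigY = (m > 10) ? year - 1 : year; // subtract 1 for January/February\n\tint y = bigY % 100;                    // last two digits of Y\n\tint c = bigY / 100;                    // first two digits of Y\n\tint w = (int) (day + Math.floor(2.6 * m - 0.2) + y + y / 4 + c / 4 - 2 * c) % 7;\n\treturn (w % 7 + 7) % 7;                // normalize in case the sum was negative\n}\n\n// Returns the day of the week as a String, reusing calculateDayOfWeek().\npublic static String getDayOfWeek(int day, int month, int year) {\n\tString[] names = {\"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\",\n\t\t\t\"Thursday\", \"Friday\", \"Saturday\"};\n\treturn names[calculateDayOfWeek(day, month, year)];\n}\n\\end{lstlisting}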
\n\n\t\\pagebreak\n\n\t\\section{Activity \\#2}\n\t\t\\subsection{Introduction}\n\t\t\tIn this activity, you will be creating a method to display a calendar for a given month and year. The output of your method should create a calendar similar to the one shown below.\n\t\t\t\\begin{center}\n\t\t\t\t\\small\n\t\t\t\t\\begin{tabular}{c c c c c c c}\n\t\t\t\t\t\\multicolumn{7}{c}{March, 2018}\\\\\n\t\t\t\t\tMo & Tu & We & Th & Fr & Sa & Su\\\\\n\t\t\t\t\t\\ & \\ & \\ & 01 & 02 & 03 & 04 \\\\\n\t\t\t\t\t05 & 06 & 07 & 08 & 09 & 10 & 11 \\\\\n\t\t\t\t\t12 & 13 & 14 & 15 & 16 & 17 & 18 \\\\\n\t\t\t\t\t19 & 20 & 21 & 22 & 23 & 24 & 25 \\\\\n\t\t\t\t\t26 & 27 & 28 & 29 & 30 & 31\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{center}\n\t\t\tIn particular, note the following:\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item The month and year are displayed above the main calendar.\n\t\t\t\t\\item There are blank spaces for days prior to the start of the month and after the end of the month.\n\t\t\t\t\\item Single digit days are displayed with a leading ``0''.\n\t\t\t\\end{itemize}\n\n\t\t\\subsection{Exercise}\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item Implement the \\code{isLeapYear()} method which will take a year as input and return \\code{true} if the given year is a leap year and \\code{false} otherwise.\n\t\t\t\t\\item Implement the \\code{printCalendar()} method which will take a month and year as input and produce the output described in the introduction to this activity.\\\\\n\t\t\t{\\small\\textbf{Note:} Unlike the formula you used for \\code{calculateDayOfWeek()}, your implementation of \\code{printCalendar()} will have to use \\code{isLeapYear()} in order to handle February's calendar output. A minimal sketch of the leap-year rule is shown after the questions below.}\n\t\t\t\\end{enumerate}\n\n\t\t\\subsection{Questions}\n\t\t\t\\QBox{Briefly explain what considerations you needed to make to take leap-years into account.}{5cm}\n\t\t\t\\ \\\\[9pt]\n\t\t\t\\QBox{Briefly explain how you handled the variable number of days in each month.}{5cm}
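\n\n\t\t\\subsection{Illustrative Sketch}\n\t\t\tA minimal sketch of the leap-year test (added for illustration; it encodes the standard Gregorian rule and is not necessarily how the template expects it to be written):\n\n\\begin{lstlisting}[basicstyle=\\small\\ttfamily,tabsize=2]\n// Gregorian rule: every 4th year is a leap year, except century years\n// not divisible by 400 (e.g., 2000 was a leap year, 1900 was not).\npublic static boolean isLeapYear(int year) {\n\treturn (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;\n}\n\\end{lstlisting}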
\n\n\t\\pagebreak\n\n\t\\section{Final Analysis}\n\t\t\\QBox{Which part of the implementation of either \\code{calculateDayOfWeek()} or \\code{getDayOfWeek()} did you find most challenging? How did you overcome this challenge?}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{Which part of the implementation of \\code{printCalendar()} did you find most challenging? How did you overcome this challenge?}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{Given the knowledge and opportunity, what features would you add to your implemented methods to make them more useful and/or more versatile?}{4cm}\n\t\t\\ \\\\[9pt]\n\t\t\\QBox{What new programming techniques or knowledge did you learn as a result of this lab?}{4cm}\n\n\t\\pagebreak\n\t\\blankpage\n\n\t\\section{Template Class \\& Test Cases}\n\t\t\\lstinputlisting[basicstyle=\\small\\ttfamily,tabsize=2]{files/Calendar.java}\n\n\t\\pagebreak\n\t%Scoring Matrix\n\t\\section*{Scoring Matrix}\n\t\\vspace{0.25cm}\n\n\t\\renewcommand{\\arraystretch}{2}\n\t\\begin{tabular} {*{4}{*{3}{| >{\\bfseries\\centering}p{0.0575\\textwidth}}}|}\n\t\t\\hline\n\t\t\\multicolumn{12}{| c |}{\\bfseries\\Large\\LabTitle}\\\\\n\t\t\\hline\n\t\t\\multicolumn{3}{| c |}{\\bfseries Applications} & \\multicolumn{3}{| c |}{\\bfseries Activity \\#1} & \\multicolumn{3}{| c |}{\\bfseries Activity \\#2} & \\multicolumn{3}{| c |}{\\bfseries Final Analysis}\\\\\n\t\t\\hline\t\t\n\t\tQ01 & 1 & \\ & EX1 & 5 & \\ & EX1 & 3 & \\ & Q09 & 1 & \\ \\tabularnewline\n\t\t\\hline\t\t\n\t\tQ02 & 1 & \\ & EX2 & 4 & \\ & EX2 & 6 & \\ & Q10 & 1 & \\ \\tabularnewline\n\t\t\\hline\n\t\tQ03 & 1 & \\ & Q05 & 1 & \\ & Q07 & 1 & \\ & Q11 & 1 & \\ \\tabularnewline\n\t\t\\hline\n\t\tQ04 & 1 & \\ & Q06 & 1 & \\ & Q08 & 1 & \\ & Q12 & 1 & \\ \\tabularnewline\n\t\t\\hline\n\t\\end{tabular}\n\n\t\\vspace{0.25cm}\n\t\\textbf{Comments:}\n\\end{document}\n", "meta": {"hexsha": "989f1e7e09723c63cace91f1d3ee6034fde3d096", "size": 8329, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "labs/apcsa/01_calendar/lab_calendar.tex", "max_stars_repo_name": "jmscsedu/csedu", "max_stars_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-01-28T21:31:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-24T14:34:16.000Z", "max_issues_repo_path": "labs/apcsa/01_calendar/lab_calendar.tex", "max_issues_repo_name": "jmscsedu/csedu", "max_issues_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "labs/apcsa/01_calendar/lab_calendar.tex", "max_forks_repo_name": "jmscsedu/csedu", "max_forks_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-04-21T09:36:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-05T13:04:43.000Z", "avg_line_length": 45.0216216216, "max_line_length": 368, "alphanum_fraction": 0.6775123064, "num_tokens": 2740, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6442251064863697, "lm_q2_score": 0.8633916082162403, "lm_q1q2_score": 0.5562185507425453}}
{"text": "\n\\subsubsection{Notes}\n\n% https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/\n\nTo generate the required parameters, we use a 2 stage multi-layer perceptron (MLP) $G: X \\mapsto (z, \\beta, \\gamma, s, t)^{M}$ to generate a latent code $z_{i}$ that can be passed through the inverse flow to generate joint angles $\\theta = F^{-1}(z_{i})$, and the remaining components: $\\beta_{i}, \\gamma_{i}$ and orthographic camera parameters $s_{i}, t_{i}$.\n\nOnce these $M$ SMPL parameterizations have been computed, we can then generate the $M$ meshes using the standard SMPL skinning pipeline:\n\n$P_{i} = S((\\theta_{i}, \\beta_{i}, \\gamma_{i})$\n\nWe make use of a standard joint regressor $J: K \\mapsto \\mathbb{R}^{16\\times 2}$ in order to compute the approximate 3D joint positions $K \\in \\mathbb{R}^{16\\times 2}$ associated with the input 2D joints. Orthographic projection $\\pi(s, t): \\mathbb{R}^{16\\times 3} \\mapsto \\mathbb{R}^{16\\times 2}$ is then used to generate the predicted set of 2D points.\nFrom these diverse modes, we apply a loss to ensure they are all consistent with respect to the input set of 2D joints:\n\n\\begin{equation}\n    L_{reproj-input} = \\sum_i{|| \\pi(s_{i}, t_{i})[J(P_{i})] - x||}\n\\end{equation}\n\nWe then apply a 3D joint loss in order to identify the best hypothesis among predicted modes:\n\n% i* = argmin_{i}{{J(Y_{i})} - y}\n\n\\begin{equation}\n    L_{obj} = L_{joint3d} + L_{reproj-gt} + L_{vertex}\n\\end{equation}\n\nWhere\n\nThis, and this and this are the equations.\n\nWe consider the problem of reconstructing $x$ (3D object) form a partial observation $y$ (2D projection).\nWe also assume that observations are noisy, so the observation process is described by a known conditional distribution $p(y|x)$.\nWe assume that $x$ has a ``useful'' parameterization or code $z$.\nThe parameterization is another random variable linked to $x$ via distribution $p(x|z)$.\nWhile this could be learned, we assume a parameterization is given to us (e.g. SMPL).\n\nThe goal is to: 1) learn an ``inverse function'' $q(z|x)$ inverting the parameterization of the data $x$ and 2) learn a ``prediction'' function $q(z|y)$ that recovers $x$ in code space.\nThe assumption is that expressing reconstruction ambiguities is easy in code space, so that $q(z|y)$ is far simple to model than $p(x|y)$; however, we pay the price of having to learn $q(z|x)$.\n\nWe formulate this similar to a conditional variational autoencoder (cVAE):\n\n$$\n\\begin{aligned}\n\\log p(x|y)\n&= \\log \\int p(x,z|y) \\,dz \\\\\n%&= \\log \\int p(x,z|y) \\frac{q(z|x,y)}{q(z|x,y)} \\,dz \\\\\n&= \\log \\int \\underbrace{p(x|z,y) \\frac{p(z|y)}{q(z|x,y)}}_{\\text{$\\approx$ const.}} q(z|x,y) \\,dz \\\\\n&= \\log \\int p(x|z,y) \\frac{p(z|y)}{q(z|x,y)} q(z|x,y) \\,dz \\\\\n&\\geq \\int \\log \\left(p(x|z,y) \\frac{p(z|y)}{q(z|x,y)}\\right) q(z|x,y) \\,dz \\\\\n&=\\int \\log p(x|z,y) ~q(z|x,y) \\,dz \\\\\n&~~~~~- KL\\left(q(z|x,y) \\| p(z|y)\\right)\n\\end{aligned}\n$$\n\nThis bound is tight whenever $q(z|x,y) \\propto p(x|z,y) p(z|y)$, i.e. $q(z|x,y) = p(z|x,y)$.\nNow we make the assumption that $x$ contains a superset on the information of $y$ on $z$, i.e. 
$y \\perp z ~| x$.\nThis translates into the relations $p(x|z,y) = p(x|z)$ and $p(z|x,y) = p(z|x)$.\nThis also means that we can consider $q(z|x) = q(z|x,y)$.\nWith these simplifications, we get:\n\n\\begin{multline}\n  \\log p(x|y) \\geq \\\\\n\\int \\log p(x|z) ~q(z|x) \\,dz - KL\\left(q(z|x) \\| p(z|y)\\right)\n\\end{multline}\n\nThe bound is tight when $q(z|x) = p(z|x)$ as in a standard VAE.\nBy comparison, a standard VAE without conditioning looks like:\n\\begin{multline}\n  \\log p(x) \\geq \\\\\n\\int \\log p(x|z) ~q(z|x) \\,dz - KL\\left(q(z|x) \\| p(z)\\right)\n\\end{multline}\nSo the only difference is $p(z)$ instead of $p(z|y)$ in the KL divergence (and the fact that one bounds $p(x)$ instead of $p(x|y)$.)\n\nThis model combines:\n\\begin{enumerate}\n  \\item The forward parameterization $p(x|z)$ (e.g. SMPL, linear basis).\n  \\item The inverse parameterization $q(z|x)$, to be learned.\n  \\item The reconstruction in code space $p(z|y)$. This distribution models the ambiguity in the reconstruction, must be learned, and is assumed to be simpler to model than $p(x|y)$ directly.\n\\end{enumerate}\n\n\n\\subsection{Older}\n\nA VAE is a model of a distribution $p(x)$ expressed as the marginal of $p(x,u)$, where $u$ is a code.\nThe idea is that $p(x|u)$ can be learned as a neural network and $p(u)$ is given a priori and simple.\nFurthermore, we also learn a conditional distribution $q(u|x)$ that approximates $p(u|x)$ and helps us compute the log-likelihood of the model.\n\nThis is done as follows:\n$$\n\\begin{aligned}\n\\log p(x)\n&= \\log \\int p(x,u) \\,du \\\\\n&= \\log \\int p(x,u) \\frac{q(u|x)}{q(u|x)} \\,du \\\\\n&= \\log \\int p(x|u) \\frac{p(u)}{q(u|x)} q(u|x) \\,du \\\\\n&\\geq \\int \\log \\left(p(x|u) \\frac{p(u)}{q(u|x)}\\right) q(u|x) \\,du \\\\\n&=\\int \\log p(x|u) ~q(u|x) \\,du - KL\\left(q(u|x) \\| p(u)\\right)\n\\end{aligned}\n$$\nNote that the model comprises\n\\begin{itemize}\n\\item the encoder $q(u|x)$ (trainable)\n\\item the decoder $p(x|u)$ (trainable)\n\\item the prior $p(u)$ (fixed)\n\\end{itemize}\nThe log-likelihood of this model is\n\\begin{multline}\nE_{p_{\\text{gt}}(x)}[\n  \\log p(x)\n]\n\\geq \\\\\nE_{p_{\\text{gt}}(x)}\n\\left[\nE_{q(u|x)}[\n  \\log p(x|u)\n]\n-\nKL\\left(q(u|x) \\| p(u)\\right)\n\\right]\n\\end{multline}\nwhere $p_{\\text{gt}}(x)$ is the true distribution of $x$.\n\nIn a standard implementation, we do the following:\n\n\\begin{enumerate}\n\\item \\textbf{Encoder} We set $q(u|x) = \\mathcal{N}(u | \\mu_x, \\Sigma_x)$ where $(\\mu_x,\\Sigma_x) = \\Phi(x)$ are computed by an encoder network $\\Phi$.\n\\item \\textbf{Decoder} We set $p(x|u) = \\mathcal{N}(x | \\mu_u, \\Sigma_u)$ where $(\\mu_u,\\Sigma_u) = \\Psi(u)$ are computed by a decoder network $\\Psi$.\n\\item \\textbf{Prior} We set $p(u)$ to a simple fixed distribution (e.g.~std Gaussian) so that $KL(q(u|x) \\| p(u))$ is trivial to compute.\n\\item We express samples $u = \\mu_x + \\Sigma_x^{\\frac{1}{2}} \\epsilon$ where $\\epsilon$ is a sample from a std Gaussian, also known as the \\emph{reparameterization trick}. We also use the shorthand notation $u = \\Phi(x) \\square \\epsilon$ that captures the fact that $\\mu_x$ and $\\Sigma_x$ are estimated by the encoder $\\Phi$.\n\\end{enumerate}
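\nFor reference (a standard closed form we add here, assuming a diagonal $\\Sigma_x = \\mathrm{diag}(\\sigma_1^2,\\dots,\\sigma_d^2)$ and a standard Gaussian prior $p(u)$), the KL term is:\n$$\nKL\\left(\\mathcal{N}(\\mu_x, \\Sigma_x) \\,\\|\\, \\mathcal{N}(0, I)\\right)\n= \\frac{1}{2} \\sum_{j=1}^d \\left( \\mu_{x,j}^2 + \\sigma_j^2 - 1 - \\log \\sigma_j^2 \\right).\n$$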
\n\nThen, given a dataset of samples $x_1,x_2,\\dots$, we get random noise vectors $\\epsilon_1,\\epsilon_2,\\dots$ and use SGD to follow the gradient:\n$$\n  \\nabla_{\\Phi,\\Psi}\n  \\left[\n    \\log \\mathcal{N} (x_i| \\Psi(\\Phi(x_i) \\square \\epsilon_i))\n    - KL(\\mathcal{N}(u| \\Phi(x_i)) \\| p(u))\n  \\right]\n$$\n\nNote that this quantity can be computed in closed form.\n\n\\subsection{Variant}\n\nNow we consider a variant of the model above in which $p(x|u)$ is given and fixed and $p(u)$ must be learned (which is the opposite of what we had before).\n\nThe derivation of the bound above is the same, but the implementation is a little different. We have:\n\n\\begin{itemize}\n\\item the encoder $q(u|x)$ (trainable)\n\\item the decoder $p(x|u)$ (fixed)\n\\item the prior $p(u)$ (trainable)\n\\end{itemize}\n\n\\paragraph{Example: pose.} $x \\in\\mathbb{R}^{3\\times K}$ are 3D points, $u\\in\\mathbb{R}^d$ a set of angles, and $p(x|u)$ the action of the kinematic tree.\n\nWe may then build the model as follows:\n\\begin{itemize}\n\\item \\textbf{Encoder} We set $q(u|x) = \\mathcal{N}(u | \\mu_x, \\Sigma_x)$ where $(\\mu_x,\\Sigma_x) = \\Phi(x)$ are computed by an encoder network $\\Phi$. Same as before.\n\n\\item \\textbf{Decoder} We set $p(x|u) = \\mathcal{N}(x | \\mu_u, \\Sigma)$ where $\\mu_u = f(u)$ is computed by a known model $f$ and $\\Sigma$ is a small fixed variance.\n\n\\item \\textbf{Prior} $p(u)$ is a representation of a not-so-trivial prior distribution.\n\n\\item We use the reparameterization trick as before, since that affects the encoder.\n\\end{itemize}\nHowever, now the story is slightly different since $p(x|u)$ is fixed and $p(u)$ must be learned.\nFor this, we rewrite the bound as:\n\\begin{multline}\nE_{p_{\\text{gt}}(x)}[\n  \\log p(x)\n]\n\\geq\n\\\\\nE_{p_{\\text{gt}}(x)}\nE_{q(u|x)}\\left[\n\\log p(x|u)\n-\n\\log \\frac{q(u|x)}{p(u)}\n\\right]\n=\n\\\\\nE_{p_{\\text{gt}}(x)}\nE_{q(u|x)}\\left[\n\\log p(x|u)\n-\n\\log q(u|x)\n+\n\\log p(u)\n\\right]\n\\end{multline}\n\n\\paragraph{Interpretation.} In expectation under $q(u|x)$, the term $\\log p(x|u)$ is a cross entropy term that encourages $x$ to be reconstructed well from the sample $u \\sim q(u|x)$. The term $-\\log q(u|x)$ is an entropy term that prevents $q(u|x)$ from collapsing to a deterministic function. 
The term $\\log p(u)$ is a cross-entropy that encourages $p(u)$ to match the required prior on the parameters.\n\nIn terms of SGD, we have something like:\n$$\n  \\nabla_{\\Phi,p(u)}\n  \\left[\n    \\log \\mathcal{N} (x_i| f(\\Phi(x_i) \\square \\epsilon_i))\n    + H(\\mathcal{N}(u|\\Phi(x_i)))\n    + \\log p(u_i)\n  \\right]\n$$\nwhere $H$ is the entropy of the Gaussian distribution $\\mathcal{N}(u|\\mu_{x_i}, \\Sigma_{x_i})$ (also easy to compute in closed form).\n\nThe ``tricky'' bit is to express $p(u_i)$ with an expressive and learnable model.\n\n\\subsection{Conditional VAEs}\n\nWe now want to modify the formulation so as to consider a partial observation $y$ and be able to compute $p(x|y)$ from it.\n\n\\paragraph{Example: pose} We consider the case in which we have $y = \\Pi(x)$ as observation, usually some form of projection or distortion of $x$ (possibly stochastic/noisy).\n\n\\paragraph{CVAE Formulation 1.} This takes the VAE and adds $|y$ everywhere, using the fact $p(x|u,y) = p(x|u)$:\n$$\n\\begin{aligned}\n\\log p(x|y)\n&= \\log \\int p(x,u|y) \\,du \\\\\n&= \\log \\int p(x,u|y) \\frac{q(u|y)}{q(u|y)} \\,du \\\\\n&= \\log \\int p(x|u) \\frac{p(u|y)}{q(u|y)} q(u|y) \\,du \\\\\n&\\geq \\int \\log \\left(p(x|u) \\frac{p(u|y)}{q(u|y)}\\right) q(u|y) \\,du \\\\\n&=\\int \\log p(x|u) ~q(u|y) \\,du - KL\\left(q(u|y) \\| p(u|y)\\right)\n\\end{aligned}\n$$\n\nThe strange thing is that this requires using $p(u|y)$ as the prior rather than $p(u)$.\n\n\\paragraph{CVAE Formulation 2.} This one seems much better: just replace $q(u|x)$ with $q(u|y)$ in the math:\n$$\n\\begin{aligned}\n\\log p(x)\n&= \\log \\int p(x,u) \\,du \\\\\n&= \\log \\int p(x|u) p(u) \\frac{q(u|y)}{q(u|y)}  \\,du \\\\\n&\\geq \\int \\log \\left(\\frac{p(x|u) p(u)}{q(u|y)}\\right) q(u|y) \\,du \\\\\n&=\n\\int \\log p(x|u)~ q(u|y) \\,du +\n\\int \\log \\frac{p(u)}{q(u|y)} q(u|y) \\,du\\\\\n&=\n\\int \\log p(x|u)~ q(u|y) \\,du\n-\nKL(q(u|y) \\| p(u)).\n\\end{aligned}$$\n\n", "meta": {"hexsha": "dbcb198d661043d194bd5d9f952623a56e589999", "size": 10282, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Docs/vae_cvae_notes.tex", "max_stars_repo_name": "benjiebob/phd-thesis-template", "max_stars_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Docs/vae_cvae_notes.tex", "max_issues_repo_name": "benjiebob/phd-thesis-template", "max_issues_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Docs/vae_cvae_notes.tex", "max_forks_repo_name": "benjiebob/phd-thesis-template", "max_forks_repo_head_hexsha": "2fd86bb807b830c06944d9c59962939d9a95ca7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.7967479675, "max_line_length": 404, "alphanum_fraction": 0.6552227193, "num_tokens": 3580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8539127417985636, "lm_q2_score": 0.6513548782017745, "lm_q1q2_score": 0.5562002299291467}}
{"text": "%\n%==> Section: Putting labels in pictures\n%\n\\section{\n  Putting labels in pictures\n}\n\n\\begin{frame}[fragile]\n  \\frametitle{\n    Putting labels in pictures\n}\n\n  When you construct a picture, in 99\\% of cases you also need to put labels. This is easy! Let us start by seeing how we would place some text in a picture.\n\n  \\lstinputlisting{./tex/src/put_text_in_pictures.tex}\n  \n  yields\n\n  \\begin{center}\n    \\input{./tex/src/put_text_in_pictures.tex}\n  \\end{center}\n\n Notice how the ``yes\" is positioned: the center of its ``baseline\" is at $(1,1).$\n\n\\end{frame}\n\n%\n%==> Relative positioning \n%\n\\begin{frame}[fragile]\n  \\frametitle{\n    Relative positioning\n  }\n\n  Sometimes you want a label to be situated relative to a point. Ti$k$Z has neat\n  commands for this. For instance you can write\n\n  \\lstinputlisting{./tex/src/relative_positioning.tex}\n  \n  to get\n\n  \\begin{center}\n    \\input{./tex/src/relative_positioning.tex}\n    \\end{center}\n\n\\end{frame}\n\n%\n%==> More relative positioning\n%\n\\begin{frame}[fragile]\n\n  You are not limited to put things below a point:\n\n  \\lstinputlisting{./tex/src/more_relative_positioning.tex}\n  \n  yields\n  \n  \\begin{center}\n    \\input{./tex/src/more_relative_positioning.tex}\n  \\end{center}\n\n\\end{frame}\n%\n%==> Mix and match positioning\n%\n\\begin{frame}[fragile]\n  \n  And, you can also mix and match\n\n  \\lstinputlisting{./tex/src/mix_and_match_positioning.tex}\n  \n  yields\n\n  \\begin{center}\n    \\input{./tex/src/mix_and_match_positioning.tex}\n  \\end{center}\n\n\\end{frame}\n\n%\n%==> Labeling axes and points\n%\n\\begin{frame}[fragile]\n  \\frametitle{\n    Labeling axes and points\n  }\n\n  \\lstinputlisting{./tex/src/label_axes_and_points.tex}\n  \n  gives us\n  \n  \\begin{center}\n    \\input{./tex/src/label_axes_and_points.tex}\n  \\end{center}\n\n\\end{frame}\n\n%\n%==> Supressing \\node\n%\n\\begin{frame}[fragile]\n\n  You can avoid some typing by mixing nodes in the middle of paths. For instance the last figure could have been written as follows:\n\n  \\lstinputlisting{./tex/src/label_axes_and_points_alt.tex}\n  \n  which would have given exactly the same result. Note that the node is put after the point to which it is attached and that we suppress the $\\backslash$ in\n\n  \\begin{lstlisting}\n    \\node\n  \\end{lstlisting}\n  \n\n\\end{frame}\n\n%\n%==> Fancy nodes\n%\n\\begin{frame}[containsverbatim]\n  \\frametitle{\n    Fancy nodes\n  }\n\n  You may want to put several lines in your ``node\" (this is convenient when drawing time lines for instance). This can be done by using the standard \\LaTeX\\, for indicating a new line but you must tell Ti$k$Z how to align things. 
From\n\n  {\n    \\scriptsize\n    \\lstinputlisting{./tex/src/fancy_nodes.tex}\n  }\n  \n  we obtain\n\n  \\begin{center}\n    \\input{./tex/src/fancy_nodes.tex}\n  \\end{center}\n  \n\\end{frame}\n", "meta": {"hexsha": "27ac38cb46bd52d5efcbf0814def29bc2ef32b34", "size": 2749, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/sections/putting_labels_in_pictures.tex", "max_stars_repo_name": "jlokimlin/tikz_crash_course", "max_stars_repo_head_hexsha": "0b4011d1d57be271d9b67a4ceb44e8448f31c0c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-04-24T18:36:57.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-03T20:09:31.000Z", "max_issues_repo_path": "tex/sections/putting_labels_in_pictures.tex", "max_issues_repo_name": "jlokimlin/tikz_crash_course", "max_issues_repo_head_hexsha": "0b4011d1d57be271d9b67a4ceb44e8448f31c0c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/sections/putting_labels_in_pictures.tex", "max_forks_repo_name": "jlokimlin/tikz_crash_course", "max_forks_repo_head_hexsha": "0b4011d1d57be271d9b67a4ceb44e8448f31c0c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.7769784173, "max_line_length": 235, "alphanum_fraction": 0.7006184067, "num_tokens": 776, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548646660542, "lm_q2_score": 0.8539127473751341, "lm_q1q2_score": 0.556200222003149}}
{"text": "\n\\chapter{Martin-L\\\"of theories and Seely's equality}\n\\label{chap:5}\n\\thispagestyle{empty}\n\n\n\nIn this chapter we continue our categorical presentation of the tip theory with the introduction of Martin L\\\"of's type theory. This type theory, also known as intuitionistic type theory, is initially proposed as a foundation of mathematics. This is really important because it implies a change of mentality, as we do no longer are so focus on formalizing computation systems. \\\\\n\nThis relationship with first-order logic arises by the hand of dependent typing. That is, in Martin L\u00f6f's type theory there is a formal structure for generating types dependent on other types. This is used (without needing to know the formalism) nowadays continuously in programming languages. \\\\\n\nThe most interesting part for us will be to study the new structures that arise in the theory of categories to understand this new typing. In particular, we will study local closed Cartesian category structures through hyperdoctrines. \\\\\n\nThe main sources for this chapter are  \\cite{seely1984locally}, \\cite{martinlof1973intuitionistic}  and \\cite{mac2013categories}.\n\n\n\n\\section{Martin-L\u00f6f type theory}\n\nIn this section we introduce rather succinctly the foundations of the type theory. This presentation follows the one done in \\cite{seely1984locally}, with added insights and remarks form \\cite{martinlof1973intuitionistic} and \\cite{sep-type-theory-intuitionistic}.\n\n\n\n\\subsection{First definition of Martin-L\u00f6f type theory}\n\nAfter having previously introduced the calculations, in this one we will introduce Martin-L\\\"of type theory, ML for short, in a more systematic way, introducing type formation rules, term formation rules, and the equivalence rules. After that, we will introduce the category structure. The main source for this section is \\cite{martinlof1973intuitionistic}, with summaries taken from \\cite{seely1984locally}.\\\\\n\nAs a founding level of mathematics, we are going to define types and terms as before. The change in emphasis comes from the fact that we will manage the notation of the Curry-Howard from the beginning, as it was done while firstly introduced by Martin-L\\\"of in \\cite{martinlof1973intuitionistic}. In this we will always consider \\emph{types as propositions} and \\emph{terms as proofs}, and we will made use of this analogy to explain the properties of the theory. Also, as done in $\\lambda$-calculus, we should regard also \\emph{proofs as objects}, with all the constructions previously done still in place.\\\\\n\nOne last particularity of the Martin-L\\\"of theory is the fact that we can include details of proofs in the statement of propositions. This will be formalized further on, but to explain why we need to delay the introduction of term and type rules until the definition of type-valued function.\n\n\\subsubsection{Terms and types}\n\nThe formal system that Martin-L\\\"of presented consists of a set of rules $a : A$, which, as it could be predicted, mean that a term $a$ is of type $A$. We will always understand this statement as the fact that the proposition $A$ is proven by the proof $a$. We also consider equality between term $a = b$ and equality between types $A = B$. Types will be preferably denoted by $A,B,C...$ and terms by $a,b,c$ characters. 
After defining the abstract concepts of terms and types, and before introducing the full term and type formation rules, we will introduce the concepts of \\emph{variables} and \\emph{function constants}.\\\\\n\nVariables are defined analogously to the case of simply typed lambda calculus. The statement $x : A$ represents an arbitrary object of a given type. As before, a statement $x : A$ where $x$ is a variable is called an \\emph{assumption}. We will also annotate the type of variables as super-indices, $x^A$. As in simply typed lambda calculus, a term $a$ may depend on variables $x_1,...,x_n$, usually noted by $a(x_1,...,x_n)$. Variables will preferably be denoted by the characters $x,y,z$. Further on, each time we talk about any generic type or term dependent on variables, we will always suppose that it satisfies the condition on variables.\\\\\n\nQuite importantly, as types can depend on variables, and each variable should have a type, we need to impose the so-called \\emph{condition on variables}. This states that in the statement $a(x_1,\\ldots,x_n)$ the type of each variable $x_j$ may depend only on the preceding variables $x_1,\\ldots,x_{j-1}$. The result of substituting $x_1:A_1,...,x_n:A_n$ with $a_1:A_1,...,a_n:A_n$ is denoted by $a(a_1,...,a_n)$. Note that in this last expression, $A_n$ can depend on $x_1,...,x_{n-1}$, thus noted $A_n[x_1,...,x_{n-1}]$. \\\\\n\nNote that types can depend on variables, that is, on terms. However, they do not depend on other types directly. This is the key difference with simply typed lambda calculus, in which the terms may depend on variables, but the types cannot.\\\\\n\nA function constant is a function from terms to terms. Each function constant $f$ has an associated index $n$ giving its number of arguments, and also $n$ type symbols $$A_1, A_2[x_1^{A_1}],...,A_n[x_1^{A_1},...,x_{n-1}^{A_{n-1}}].$$ A function $f$ of index $n$ is called an \\emph{$n$-ary function constant}. A 0-ary function is called a \\emph{constant}. Function constants are preferably denoted by the characters $f,g,h$. \\\\\n\nIn addition to function constants, we can consider \\emph{type valued function constants}. These have the same indexing as above, and a 0-ary type valued function constant is called a type constant. In the analogy of terms and types with proofs and propositions discussed previously, type valued function constants are \\emph{properties}. Type valued function constants are preferably denoted by the characters $F,G,H$. By contrast, function constants are also called \\emph{term valued function constants}.\n\n\\begin{example}\\label{example:primeML}\n  Let $\\N$ be the type of all natural numbers and let the type valued function $P(x)$ represent the property ``being a prime number''. We can express the fact that 3 is a natural number as $3: \\N$. The type $P(3)$ is then the proposition ``3 is a prime number''. \n\\end{example}\n\n\\subsubsection{Full typing structure}\nNow we are in a position to describe the formation rules.\n\n\\begin{definition}[Type formation rules, definition 1.1.1 \\cite{seely1984locally}]\n  The following are to be types:\n  \\begin{enumerate}\n  \\item 1 is a type.\n  \\item If $F(x_1,..,x_n)$ is a type-valued function constant, then $F(a_1,...,a_n)$ is a type, with  $x_1:A_1,...,x_n:A_n$ and $a_1:A_1,...,a_n:A_n$.\n  \\item If $a,b$ are terms, then $I(a,b)$ is a type.\n  \\item If $A$ is a type and $B(x^A)$ are types, then $\\Pi x^A. B(x^A)$ and  $\\Sigma x^A. B(x^A)$ are types.\n  \\end{enumerate}\n\\end{definition}
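\n\\begin{example}\n  (Continuing example \\ref{example:primeML}; an illustration we add.) By rule 2, $P(3)$ is a type. By rule 4, $\\Sigma x^{\\N}. P(x)$ and $\\Pi x^{\\N}. P(x)$ are also types: under the propositions-as-types reading, the former states that there exists a prime natural number, and the latter that every natural number is prime.\n\\end{example}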
\n\n\\begin{definition}[Term formation rules, definition 1.1.2 \\cite{seely1984locally}]\\label{1.1.2-seely}\n  The following are to be terms of the indicated type:\n  \\begin{enumerate}\n  \\item $*:1$.\n  \\item For every type $A$, there exist countably many variables $x_i^A : A$, $i\\in \\N$.\n  \\item If $f(x_1,..,x_n)$ is a term-valued function constant, then $f(a_1,...,a_n)$ is a term of appropriate type, with  $x_1:A_1,...,x_n:A_n$ and $a_1:A_1,...,a_n:A_n$.\n  \\item If $t(x^A) : B(x^A)$, then:\n    $$\\lambda x^A.t(x^A) : \\Pi x^A. B(x^A).$$\n\n    Conversely, if $f : \\Pi x^A. B(x^A)$ and $a:A$, then  $f(a) : B(a)$ and $(\\lambda x^A.t(x^A))(a) : B(a)$.\n  \\item If $a:A$ and $b:B(a)$, then $\\langle a, b\\rangle : \\Sigma x^A. B(x)$. Conversely, given $c : \\Sigma x^A. B(x)$, then $\\pi (c) : A$ and $\\pi'(c) : B(\\pi(c))$. \n  \\item If $a:A$, then $r(a): I(a,a)$. If $a,b : A$, $c : I(a,b)$, $d : C(a,a,r(a))$, where $C(x^A,y^A,z^{I(x,y)})$ is a type, then $\\sigma_d[a,b,c] : C(a,b,c)$.  \n  \\end{enumerate}\n\\end{definition}\n\n\\begin{remark}\n  Note that we only define terms to be dependent on the same variables as their types. See \\cite[Section 2.2]{martinlof1973intuitionistic} for more information.\n\\end{remark}\n\nWe can now explain the intuition behind these structures:\n\n\\begin{itemize}\n\\item We can start with the \\emph{Cartesian product of a family of types}, that is denoted by $\\Pi x^A. B(x^A)$. In the propositions-as-types analogy, this represents the statement $\\forall x \\in A : B(x)$. Similarly, a proof of such a statement is a function that takes an arbitrary object $a:A$ and returns a proof for $B(a)$, and thus we define the terms of this type as lambda terms of the form $\\lambda x^A.t(x^A) : \\Pi x^A. B(x^A)$.\\\\\n\n\n\\item We continue with the \\emph{disjoint union of a family of types}, which is denoted by $\\Sigma x^A. B(x^A)$ and represents the statement $\\exists x \\in A : B(x)$. As this kind of statement is proven by providing an object together with a proof of the statement for it, we define the terms of this type as pairs $\\langle x^A, y^{B(x^A)}\\rangle$ of an element and its associated proof. As with every pair type, it has projections $\\pi,\\pi'$. This type is also understood as the type of every element of $A$ such that $B(x)$ holds.\n\\end{itemize}\n\\begin{example}\n  The type of all real numbers is:\n\n  $$\\R := \\Sigma x^{\\N\\to\\QQ}.\\ \\Pi n^\\N.\\ \\Pi m^\\N.\\ \\left(|x_{m+n} - x_m| \\le 2^{-m}\\right),$$\n  \n  where, using the density of $\\QQ$ in $\\R$ (so that every real number arises as a Cauchy sequence of rationals), the property of being a real number is expressed as a type.\n\\end{example}\n\nIn the particular case of $B(x^A)$ being the same type $B$ for every $a:A$, we write:\n$$A\\to B:= \\Pi x^A. B(x^A),\\qquad A\\times B := \\Sigma x^A. B(x^A).$$\n\nFinally, as a major difference with the previously introduced theories, we can consider the identity type $I(a,b)$, which represents the proposition that $a$ and $b$, now regarded as terms, are identical. The term $r(a) : I(a,a)$ introduced in the term formation rules represents a proof of reflexivity of $a$. \n\n\n\\begin{remark}\n  As in the previous chapter, we can consider an extension with the sum type $A+B$, as presented in \\cite[Section 1.6]{martinlof1973intuitionistic}. 
For the sake of brevity, we avoid it, as it is not as interesting from a categorical viewpoint.\n\\end{remark}\n\n\\subsection{Equivalences and universes}\n\\subsubsection{Equivalences}\n\nThe most important difference in this exposition of type theory is that we will not present the separate alpha/beta/eta equivalences, but will present them all together as a single equivalence concept. This is for two reasons. The first is that we consider the objective of explaining the intuition behind the multiple forms of equality to be already satisfied. The second is that our main interest is the relation to category theory, where we always consider all the equivalences together.\n\n\n\\begin{definition}[Equality rules, 1.1.3 \\cite{seely1984locally}] \\label{def:ml-equality}\n  Using the notation from definition \\ref{1.1.2-seely} point by point:\n  \\begin{enumerate}\n  \\item If $t:1$, then $t=*$.\n  \\item There is no equality rule imposed on variables.\n  \\item Every equation imposed on function constants translates to every interpretation of the function constant.\n  \\item $(\\lambda x^A.t(x^A))(a) =  t(a)$. Also, we can consider function constants as terms, with the obvious equality $f = \\lambda x^A.f(x)$.\n  \\item $\\pi(\\langle a,b\\rangle) = a$,  $\\pi'(\\langle a, b\\rangle) = b$, $c=\\langle \\pi(c),\\pi'(c)\\rangle$.\n  \\item We set $\\sigma_d[a,a,r(a)]=d$; if $f(a,b,c) : C(a,b,c)$, then $f(a,b,c)=\\sigma_{f(a,a,r(a))}[a,b,c]$. Lastly, if $a(x), b(x):A(x)$ and $t(x) : I(a(x),b(x))$, then $a(x)=b(x)$ and $t(x)=r(a(x))$.\n  \\end{enumerate}\n\n  Finally, we have the usual internal coherence rules. That is, reflexivity, transitivity and symmetry. In addition, if $a=b : A$ and $c(x^A):B(x^A)$, then $c(a)=c(b)$; and $a:A$, $b:B$, $a=b$ together imply $A=B$. Reflexivity, transitivity and symmetry for the equivalence $I(\\cdot, \\cdot)$ can be derived from the rules \\cite[1.3]{seely1984locally}.\n\\end{definition}\n\n% \\footnote{. We nonetheless included them as a natural part of the definition of equality to favor readability.}.\n\\begin{remark}\n  After having defined the structure, as with typed lambda-calculus in section \\ref{section:pureandimpure}, we point out that each system of terms and types that follows these rules is said to be an ML theory. \n\\end{remark}\n\\subsubsection{Seely's enlargement}\n\nAfter having presented Martin-L\\\"of's theory as originally formulated, in this subsection we will explain the modifications that Seely made to the structure. These modifications appeared in Seely's previous works \\cite{seely1977hyperdoctrines}\\footnote{Unlike the other works cited, Seely's doctoral thesis was not available digitally. We have only found a physical copy in the Cambridge Library and thus we were not able to read it. For its contents we rely on the references made by Seely himself in his other cited works.} and \\cite{seely1983hyperdoctrines}. Seely introduced the following definition:\n\n\\begin{definition}\n  Let $z : \\Sigma x^A.B(x)$ be a variable, and $C(z)$ a type that depends on $z$ but not on $x:A$ nor on $y:B(x^A)$. Then, for every $t(x,y) : C(\\langle x, y\\rangle)$ there exists $\\tilde t (z) : C(z)$, defined by:\n  $$\\tilde t (z) := t(\\pi(z),\\pi'(z)): C(z).$$\n\\end{definition}\n\nThe point of this notion is simple: it gives a more succinct notation for a term that does not depend on a particular pair of variables separately, but only on the pair itself. 
One particular example of this is the properties assigned to satisfiable formulas.\n\nWith this definition, maintaining the notation, we also introduce the following equalities:\n\\begin{itemize}\n\\item $\\tilde t(\\langle x,y\\rangle) = t(x,y)$.\n\\item If $f(z): C(z)$, $t(x,y):C(\\langle x,y\\rangle)$ and $f(\\langle x,y\\rangle) = t(x,y)$ then $f=\\tilde t$.\n\n\\end{itemize}\n\n\n\\subsubsection{Universes}\nNow let us have a word on \\emph{universes}. Let us begin this discussion with an illustrative example, as introduced by Martin-L\u00f6f. Let us imagine that we want to consider the type of finite sequences of natural numbers as \n\n$$\\Sigma x^\\N. F(x),$$\n\nwhere\n\n\\[   \n  \\begin{cases}\n    F(0):= N_1,\\\\\n    F(s(x)):= F(x)\\times \\N.\n  \\end{cases}\n\\]\n\nHere, $N_1$ is a placeholder type for empty sequences. This definition can be done by recursion on the types. But for this we would need some type $V$ which has to be closed under the product. Similarly, we can argue for the same recursion property with closure under $A \\to B$.\\\\ \n\nTo consider this type we will introduce the type $V$, which will be called a universe, whose elements are the types. In addition we will consider a reflection principle that allows us to work with all operations without leaving the universe. The problem begins with the interpretation of Russell's paradox in this context, given by Girard in \\cite{girard1972interpretation}. To avoid this problem, we need to avoid violating the ML analogue of the \\emph{regularity axiom} of ZFC.\\\\\n\n\nWe therefore require that $V$ is not a type contained in $V$. Thus we can, similarly to what we did with categories, consider as \\emph{small} all the types within $V$, while $V$ itself and all types built from it are \\emph{large} types. By iterating this construction we can obtain a sequence:\n\n$$V = V_0 : V_1 : V_2 : \\cdots$$\n\nwhere every $V_i$ is a type from the ``type type'' $V_{i+1}$. \n\n\n\\section{Locally Cartesian closed categories and hyperdoctrines}\n\\subsection{Locally Cartesian closed categories}\nThis new typing system will be related to the categorical structure of ``locally Cartesian closed categories''. Instead of following the classical process of introducing the concept formally, so that only in the course of a technical demonstration one can see why the two are related, we believe that there is extra value in explaining the idea behind the formalism.\\\\\n\n\nWhat is the key reason why ML can be considered a larger structure? In part it is because we can include information about terms in the types, giving real information about what is needed for a proof system that underlies mathematics. With this in mind, we have to ask ourselves how we can modify the Cartesian closed category structure so that it matches this new typing.  The key idea then comes when considering slice categories (see Definition \\ref{def:slice-cat})\\footnote{Historically, the actual process was to relate first-order logic with LCCs, and after that to do the same with ML theories, as done in \\cite{seely1977hyperdoctrines}.}.\\\\\n\nThis allows us to consider arrows (terms) as objects (types) in local contexts, which (a bit roughly) translates to considering objects dependent on arrows. As with many things in category theory, once you have the idea you still have to lay down the structures carefully so that everything relates properly. 
Let us start with the process.\n\n\\begin{definition}\n  A category is said to have \\emph{finite limits} if it has every limit over any finite category, that is, over any category $J$ with both $Ob(J)$ and $Ar(J)$ finite.\n\\end{definition}\n\nFor the next definition, the reader is invited to recall the definition of a slice category (definition \\ref{def:slice-cat}), if needed.\n\\begin{definition}\n  A \\emph{locally Cartesian closed category}, LCC for short, is a category $C$ with finite limits, such that for any object $a$ of $C$, the slice category $C/a $ is Cartesian closed.\n\\end{definition}\n\n\\begin{remark}\n  If a category $C$ has finite limits, then every category $C/a$ with $a\\in Ob(C)$ also has finite limits. This need not happen with exponential objects, which is the main point of LCCs.\n\\end{remark}\nLastly, we can introduce the category associated to this new structure.\n\\begin{definition}\n  We define the category $LCC$ of all locally Cartesian closed categories along with structure-preserving functors.\n\\end{definition}\n\\subsection{Hyperdoctrines}\n\n\nAs a tool for dealing with LCCs we are going to introduce hyperdoctrines. These were introduced by Lawvere as part of his influential work \\cite{lawvere1969adjointness}. The notion of a hyperdoctrine is essentially an axiomatization of the collection of slices of a locally Cartesian closed category \\cite{nlab:hyperdoctrine}.\n\n\\begin{definition}\n  Let $C$ be a category with finite limits. A $C$-indexed category $P$ is a collection that consists of:\n  \\begin{enumerate}\n  \\item For each $a\\in Ob(C)$  a category $P(a)$.\n  \\item For each $f:a\\to b, g:b\\to c\\in Ar(C)$, a functor $P(f) = f^*:P(b) \\to P(a)$, such that $(1_a)^* = 1_{P(a)}$ for every  $a \\in Ob(C)$ and $(gf)^*= f^*g^*$.\n  \\end{enumerate}\n\\end{definition}\n\\begin{remark}\n  Any category $C$ with finite limits is self-indexed, with $C(b)=C/b$ and $C(f:b_1\\to b_2)=f^*: C/b_2 \\to C/b_1$ defined by pullback. That is, given an object $g:a\\to b_2 \\in C/b_2$, by the pullback diagram:\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}\n          a \\times_{b_2} b_1 \\ar[r,\"p_2\"]\\ar[d,\"p_1\"] & a\\ar[d,\"g\"]\\\\\n          b_1\\ar[r,\"f\"] & b_2.\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n\n  we can define $f^*(g) = p_1$, with $p_i$ as in definition \\ref{def:pullback}. This has clear inspirations in topology, as do many things in this area, the notation $f^*$ for the contravariant pullback functor being as in Grothendieck's six operations \\cite{nlab:six_operations}.\\\\\n\\end{remark}\n\nWe can see that $P$ is, in summary, a contravariant functor from $C$ to $Cat$. The properties of $(\\cdot)^*$ derive from the fact that arrows in $Cat$ are functors. Therefore a $C$-indexed category is just an object of $Cat^{C^{op}}$.\\\\
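\n\\begin{example}\n  (A standard instance, included here for intuition.) $Set$ is locally Cartesian closed: each slice $Set/a$ is equivalent to the functor category $Set^a$ (regarding the set $a$ as a discrete category), which is Cartesian closed. Under this identification, the self-indexing of $Set$ is a hyperdoctrine in the sense of the next definition, with $\\Sigma_f$ and $\\Pi_f$ computing, fibrewise, disjoint unions and products.\n\\end{example}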
\n\n\\begin{definition}\n  A $C$-indexed category $P$ is a hyperdoctrine if:\n  \\begin{enumerate}\n  \\item $P(a)$ is Cartesian closed for every $a\\in Ob(C)$.\n  \\item $f^*$ preserves exponential objects for every $f\\in Ar(C)$.\n  \\item $f^*$ has adjoints $\\Sigma_f \\dashv f^* \\dashv \\Pi_f$.\n  \\item $P$ satisfies the Beck condition: if the following diagram is a pullback in $C$, then for any object $\\varphi$ of $P(c)$,  we have $\\Sigma_k h^*(\\varphi) \\cong f^*\\Sigma_g(\\varphi)$.\n    \\[\n      \\begin{tikzpicture}\n        \\node {\\begin{tikzcd}\n            d\\ar[r,\"h\"]\\ar[d,\"k\"] & c\\ar[d,\"g\"]\\\\\n            a\\ar[r,\"f\"] &   b.\n          \\end{tikzcd}};\n      \\end{tikzpicture}\n    \\]\n    \n  \\end{enumerate}\n\\end{definition}\n\\begin{definition}\n  Two $C$-indexed categories $P_1$ and $P_2$ are equivalent if:\n  \\begin{itemize}\n  \\item For each $a\\in Ob(C)$, $P_1(a)\\cong P_2(a)$.\n  \\item For each $f\\in Ar(C)$, $P_1(f)(a)\\cong P_2(f)(a)$.\n  \\end{itemize}\n\\end{definition}\n\nNote that if $P_1\\cong P_2$ as $C$-indexed categories, and $P_1$ is a hyperdoctrine, then $P_2$ is a hyperdoctrine.\n\n\\begin{remark}\n  Similarly to the definition of $f^*$ with the pullback, we can define the functor $\\Sigma_f: C/b_1 \\to C/b_2$ via post-composition. \n\\end{remark}\n\n\nNow we proceed to present and explain the following theorem.\n\n\\begin{theorem}\\label{theo:theox}\n  If $C$ has finite limits, then $C$ is an LCC if, and only if, as a $C$-indexed category $C$ is a hyperdoctrine. \n\\end{theorem}\n\nThis result, as noted in \\cite{seely1984locally}, can be deduced from results shown in \\cite[Section 1.3]{freyd1972aspects}. Nonetheless, as Freyd never talks about hyperdoctrines, the connection is a bit blurry. Thus, we now detail this proof a little, omitting some subproofs but carefully pointing out where to find their statements. During this discussion $C$ is a category, $a,b\\in Ob(C)$ and $f:b_1\\to b_2\\in Ar(C)$.\\\\\n\n\n\\begin{definition}[Section 2, \\cite{nlab:reflective_subcategory}]\n  A full subcategory $i:C\\hookrightarrow D$ is reflective if the inclusion functor $i$ has a left adjoint $T\\dashv i$:\n  \\[\n    \\begin{tikzcd}\n      C\\arrow[hookrightarrow, shift left=.5ex, \"i\"{name=T}]{r}{} &\n      D\\arrow[ shift left=1ex, \"T\"{name=i}]{l}{} \\\\\n    \\end{tikzcd}\n  \\]\n  The left adjoint is sometimes called the reflector.\n\\end{definition}\n\n\\begin{example}\n  $Ab$ is a reflective subcategory of $Grp$, with reflector given by the abelianization of a group.\n\\end{example}\n\n\n\\begin{proposition}[Section 2, \\cite{nlab:reflective_subcategory}]\n  In reflective subcategories the inclusion $i:C\\hookrightarrow D$ creates all limits of $D$ and $C$ has all colimits which $D$ admits.\n\\end{proposition}\n\n\n\\begin{theorem}[Proposition 1.31, \\cite{freyd1972aspects}]\n  Let $A$ be a Cartesian closed category and $A'\\subset A$ a full reflective subcategory with reflector $R$. Then $R$ preserves products iff for all $a,b\\in Ob(A')$, we have that $b^a\\in A'$.\n\\end{theorem}\n\\begin{corollary}\n  Let $C$ be a category. We define the functor $\\sigma_b : C/b \\to C$ as the forgetful functor for every $b\\in Ob(C)$. 
Then $\\sigma_b$ preserves and reflects colimits, equalizers, pullbacks and monomorphisms.\n\\end{corollary}\n\nIf $C$ has finite products, we can define the functor $\\times_b: C \\to C/b$ such that:\n\\begin{itemize}\n\\item For each $a\\in Ob(C)$, $\\times_b(a) := \\pi':a\\times b \\to b$.\n\\item For each $g:a\\to a'\\in Ar(C)$, $\\times_b(g) := g\\times 1_{b}: a\\times b \\to a'\\times b$.\n\\end{itemize}\n\n\\begin{proposition}[Proposition 1.33, \\cite{freyd1972aspects}]\nThere is an adjunction $\\sigma_b \\dashv \\times_b$.\\\\\n\\end{proposition}\n\nNote that $(C/b_2)/(f:b_1\\to b_2)$ is isomorphic to $C/b_1$. With this identification we have $$\\times_{b_1\\to b_2} = f^*,\\qquad \\text{and} \\qquad\\sigma_{(b_1\\to b_2)} = \\Sigma_f.$$\n\nWith this note we have two important things to point out:\n\\begin{itemize}\n\\item We have that each $f^*$ has a left adjoint $\\Sigma_f$. It can be checked that in the context of $C$ having finite limits, they satisfy the Beck condition. \n\\item To prove that $f^*$ has a right adjoint for each $f$ if, and only if, for each $b\\in Ob(C)$ we have that $C/b$ is Cartesian closed, it suffices to prove it for $\\times_b$.\\\\\n\\end{itemize}\n\nOne implication of theorem \\ref{theo:theox} directly derives from the following proposition.\n\\begin{proposition}[Proposition 1.34, \\cite{freyd1972aspects}]\n  For $C$ Cartesian closed, $\\times_b: C\\to C/b$ has a right adjoint.\n\\end{proposition}\n\nThe other implication can be checked by noting that, if $\\times_b$ has a right adjoint $\\#_b$, then we have:\n$$\\hom(\\cdot \\times b, a) = \\hom\\left( \\sigma_b(\\times_b(\\cdot)), a\\right) \\cong^{1.} \\hom\\left( \\times_b(\\cdot), \\times_b(a)\\right)  \\cong^{2.} \\hom\\left( \\cdot, \\#_b(\\times_b(a))\\right), $$\n\nwhere in 1. and 2. we use adjointness. Thus, we can define $a^b:= \\#_b(\\times_b(a))$.\n\n\n\\section{The LCC category of a Martin-L\\\"of theory}\n\\label{InternalLCC}\nIn this section we are going to explain how to create a locally Cartesian closed category. To do that, we define the function $\\CC$ that takes an ML theory and returns a category, and then proceed to prove that this category is an LCC.\n\n\\begin{definition}\n  Let $M$ be a ML theory. We define the category $\\CC(M)$ as the category that has:\n  \\begin{itemize}\n  \\item As objects all closed types of $M$.\n  \\item As arrows $f:a\\to b$  all closed terms of type $a\\to b$.\n  \\end{itemize}\n\\end{definition}\n\nFurther on, we will consider $M$ an arbitrary ML theory.\n\\begin{proposition}\\label{prop:CM1}\n  $\\CC(M)$ is a Cartesian category.\n\\end{proposition}\n\\begin{proof}\n  \\begin{itemize}\n  \\item Operations:\n    \\begin{itemize}\n      \n    \\item \\emph{Identity}: given an object $a\\in \\CC(M)$, define $1_a := \\lambda x^a.x$. \n    \\item \\emph{Composition}: given a pair of arrows $(f:a\\to b) =  \\lambda x^a. \\varphi(x)$ and $ (g:b \\to c) = \\lambda x^b. \\psi(x)$ in $\\CC(M)$, define:\n      $$g\\circ f := \\lambda x^a. \\psi (\\varphi(x)). $$\n    \\end{itemize}\n\n  \\item Properties:\n    \\begin{itemize}\n    \\item \\emph{Associative}: given an extra arrow $h:c\\to d = \\lambda x^c. \\rho(x)$, we have that,\n\\begin{align*}\n  h \\circ (g\\circ f)  = h\\circ \\lambda x^a.\\psi(\\varphi(x)) & = \\lambda x^a.\\rho(\\psi(\\varphi(x))) \\\\ & =  \\left(\\lambda x^b.\\rho(\\psi(x))\\right) \\circ f =  (h\\circ g)\\circ f.\n \\end{align*}\n    \\item \\emph{Unit}: \n      $$f \\circ 1_a = \\lambda x^a. 
\\varphi(x) = f.$$\n      Proceed analogously to check that $1_b \\circ f =f$.\n    \\end{itemize}\n\n\n  \\item Terminal object: Type 1 is the terminal object, due to $t: 1$ implying $t=*$ and thus $\\lambda x^a. *$ being the only term of type $a\\to 1$. \n  \\item Product: the product for types $a$ and $b$ is given by $a\\times b$, and the projections by $\\pi, \\pi'$.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{proposition}\n  Given $f:a\\to b, g:c\\to b$, the pullback\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}\n          a\\times_b c \\ar[d,\"p_1\"]\\ar[r,\"p_2\"] & c\\ar[d,\"g\"]\\\\\n          a\\ar[r,\"f\"] &   b.\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n  is defined by $a\\times_b c := \\Sigma x^a. \\Sigma y^c. I(f(x),g(y))$, with $p_1=\\pi$ and $p_2=\\pi\\circ\\pi'$.\n\\end{proposition}\n\\begin{proof}\n  Given $f',g'$ such that:\n\n  \n  \\[\n    \\begin{tikzcd}\n      q\n      \\arrow[bend left]{drr}{g'}\n      % \\arrow[dashed]{dr}[description]{u}x\n      \\arrow[bend right,swap]{ddr}{f'} & & \\\\\n      & a\\times_{b}c \\arrow{r}{p_2} \\arrow{d}[swap]{p_1} & c \\arrow{d}{g} \\\\\n      & a \\arrow[swap]{r}{f}   & b\n    \\end{tikzcd}\n  \\]\n\n  as $f f' = g g'$, note that there is a term $\\rho(x) = r(f(f'(x))) : I(f(f'(x)), g(g'(x)))$. We can define $$u: q\\to a\\times_b c = \\lambda x^q. \\langle f'(x),\\langle g'(x),\\rho(x)\\rangle\\rangle$$\n  and by the equality rules we can check that $u$ is the only arrow such that $p_1\\circ u = f'$ and $p_2\\circ u = g'$.\n\\end{proof}\n\\begin{remark}\n  Note how the pullback is designed exactly as the fiber product (see example \\ref{fiber-product}), with the construction translated to ML theory.\n\\end{remark}\n\n\\begin{proposition}\\label{prop:CM2}\n  $\\CC(M)$ is Cartesian closed.\n\\end{proposition}\n\\begin{proof}\n  Given an object $b$, we can define the adjunction $F_3^b\\dashv G_3^b$ (see definition \\ref{def:CCC}) by the bijection $\\varphi: \\hom(a\\times b, c) \\to \\hom(a, c^b)$ given by:\n  \n  $$\\varphi(t:a\\times b\\to c) = \\lambda x^a.\\lambda y^b. t(\\langle x,y\\rangle) : a\\to c^b = a\\to (b\\to c),$$\n  with inverse:\n  $$\\varphi^{-1}(s: a\\to c^b) = \\lambda z^{a\\times b}. s(\\pi(z))(\\pi'(z)) : a\\times b\\to c.$$\n\\end{proof}\n\\begin{remark}\n  Note that the evaluation is the classical term  $\\varepsilon  = \\lambda x^{(c^b\\times b)}.\\pi(x)(\\pi'(x))$.\n\\end{remark}\nNow, we have a nice structure for $C=\\CC(M)$ as a Cartesian closed category. Further on, we are going to prove that it is, in fact, locally Cartesian closed.  The strategy will be the following: we are going to define a $C$-indexed hyperdoctrine $P(M)$, and prove that $C$ and $P(M)$ are equivalent as $C$-indexed categories.\n\n\\begin{definition} $P(M)$ is the $C$-indexed category such that:\n  \\begin{itemize}\n  \\item For an object $a\\in C$, $P(M)(a)$ is the category with:\n    \\begin{itemize}\n    \\item objects the types depending only on $x^a$.\n    \\item arrows $f: b\\to c$ all the terms of type $b(x)\\to c(x)$ that only depend on $x^a$.\n    \\end{itemize}\n  \\item for an arrow $f:a\\to b = \\lambda x^a.\\varphi(x)$, $f^*$ is defined by precomposition, i.e. given a type or term $e(x^b)$ in $P(M)(b)$, we set $f^*(e) = e(f(x^a))$.\n  \\end{itemize}\n\\end{definition}
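\n\\begin{example}\n  (An illustration we add, combining with example \\ref{example:primeML}.) Take $a = b = \\N$ and $f = \\lambda x^{\\N}. s(s(x))$, where $s$ denotes the successor. For the type $P(x^{\\N})$ expressing primality, $f^*(P) = P(s(s(x)))$ is the property of $x^{\\N}$ that $x+2$ is prime: $f^*$ acts by substitution.\n\\end{example}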
\nNow, we have a nice structure for $C=\\CC(M)$ as a Cartesian closed category. Further on, we are going to prove that it is, in fact, locally Cartesian closed.  The strategy will be the following:  we are going to define a $C$-indexed hyperdoctrine $P(M)$, and prove that $C$ and $P(M)$ are equivalent as $C$-indexed categories.\n\n\\begin{definition} $P(M)$ is the $C$-indexed category such that:\n  \\begin{itemize}\n  \\item For an object $a\\in C$, $P(M)(a)$ is the category with:\n    \\begin{itemize}\n    \\item As objects, the types depending only on $x^a$.\n    \\item As arrows $f: b\\to c$, all the terms of type $b(x)\\to c(x)$ that only depend on $x^a$.\n    \\end{itemize}\n  \\item For an arrow $f:a\\to b = \\lambda x^a.\\varphi(x)$, $f^*$ is defined by precomposition, i.e. given a type or term $e(x^b)$ in $P(M)(b)$, we set $f^*(e) = e(f(x^a))$.  \n  \\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n  In the context of $P(M)(a)$, the variable $x$ will always denote the ever-available variable $x^a$; thus we will primarily use $y$ as the standard variable symbol.\n\\end{remark}\n\\begin{proposition}\n  $P(M)(a)$ is Cartesian closed.\n\\end{proposition}\n\\begin{sproof}\n  Repeat the proofs of proposition \\ref{prop:CM1} and proposition \\ref{prop:CM2} step by step, considering an extra parameter $x^a$ in each term.\n\\end{sproof}\n\n\n\\begin{proposition}\\label{lemma:hyperdoc}\n  For each $a\\in Ob(C)$, $C/a\\cong P(M)(a)$.\n\\end{proposition}\n\nTo prove this proposition we are going to define two functors $\\Phi$ and $\\Psi$, and check, with the help of two lemmas, that the composition of these functors is naturally isomorphic to the identity.\n\n\\[\n  \\begin{tikzcd}\n    C/a=\\CC(M)/a\\arrow[ shift left=.5ex, \"\\Phi\"{name=T}]{r}{} &\n    P(M)(a)\\arrow[ shift left=1ex, \"\\Psi\"{name=i}]{l}{} \\\\\n  \\end{tikzcd}\n\\]\n\\begin{itemize}\n\\item We define $\\Phi$ as the functor such that:\n\n  \\begin{itemize}\n  \\item Every object $f:b\\to a\\in Ob(C/a)$ is sent to the type\\footnote{Note that we are defining the inverse.}:\n    $$\\Phi(f)=\\Sigma y^b.I(x,f(y)).$$\n  \\item Every arrow $h: (f:b\\to a)\\to (g:c\\to a)$ is sent to the term:\n    $$\\Phi(h) = \\lambda z^{\\Phi(f)}. \\langle h(\\pi(z)), \\rho\\rangle,$$\n    with $\\rho : I(x,g(h(\\pi(z))))$ defined from $$\\pi'(z) : I(x,f(\\pi(z)))\\qquad\\text{and}\\qquad r(f(\\pi(z))) : I (f(\\pi(z)), g(h(\\pi(z)))),$$ and transitivity of $I(\\cdot,\\cdot)$.\n  \\end{itemize}\n\n\\item We define $\\Psi$ as the functor such that:\n  \\begin{itemize}\n  \\item Every object (type) $b(x^a)\\in Ob(P(M)(a))$ is sent to the projection $\\Psi(b(x)) = \\pi:\\Sigma x^a. b(x)\\to a \\in C/a$.\n  \\item Every arrow $t(x^a): b(x^a) \\to c(x^a)$ is sent to the term:\n    $$\\Psi(t) = \\lambda z^{\\Sigma x^a. b(x)}. \\langle \\pi(z), t(\\pi(z))(\\pi'(z))\\rangle.$$\n  \\end{itemize}\n\\end{itemize}\n\\begin{lemma}\\label{lemma:CM3}\n  Let $(f: b\\to a)= \\lambda x^b.f(x)$ be a term in $M$. Then $b$ and $\\Sigma x^a.\\Phi(f)(x)$ are isomorphic in $C$.\n\\end{lemma}\n\\begin{proof}\n  We want to define two morphisms $i:b \\to \\Sigma x^a. \\Sigma y^b. I(x,f(y))$, $j : \\Sigma x^a. \\Sigma y^b. I(x,f(y)) \\to b$ in $C$\\footnote{In the definition we use a notational simplification: $j$ receives a single variable and uses $\\pi$ and $\\pi'$ to access its components.} given by\n  $$i = \\lambda y^b. \\langle f(y),\\langle y, r(f(y))\\rangle \\rangle,$$\n  and\n  $$j = \\lambda \\langle x^a, \\langle y^b, z^{I(x,f(y))} \\rangle\\rangle . y .$$\n  Clearly $j(i(y)) = y$, and $i(j(\\langle x^a, \\langle y^b, z^{I(x,f(y))} \\rangle\\rangle)) = \\langle f(y),\\langle y, r(f(y))\\rangle \\rangle$, whose equality with $\\langle x, \\langle y, z\\rangle\\rangle$ follows from the equality rules of the identity type.\\\\\n\\end{proof}\n\n\\begin{remark}\n  Note that lemma \\ref{lemma:CM3} is based on the intuition of $\\Phi(f)$ being the inverse of $f$. \n\\end{remark}\n\n\\begin{lemma}\\label{lemma:CM4}\n  For $b(x^a)$ a type, the objects $b(x^a)$ and $\\Sigma y^{\\Sigma x^a.b(x)}. I(x,\\pi(y))$ are isomorphic in $P(M)(a)$.\n\\end{lemma}\n\\begin{proof}\n  We want to define two morphisms $i(x^a):b(x) \\to\\Sigma y^{\\Sigma x^a.b(x)}. I(x,\\pi(y))$ and $j(x^a):\\Sigma y^{\\Sigma x^a.b(x)}. I(x,\\pi(y)) \\to b(x)$ in $P(M)(a)$:\n  $$i(x^a) = \\lambda z^{b(x)}. \\langle\\langle x, z\\rangle, r(x) \\rangle,$$\n  and\n  $$j(x^a) = \\lambda \\langle\\langle x_\\#^a, z^{b(x_\\#^a)} \\rangle, v^{I(x,x_\\#)}\\rangle. z .$$\n  Proceed analogously to the previous lemma to check the isomorphism. \n\\end{proof}
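\n\n\\begin{remark}\n  As a simple illustrative instance of lemma \\ref{lemma:CM3}, take $f = 1_a$: then $\\Phi(1_a) = \\Sigma y^a. I(x,y)$, and the lemma states that $a \\cong \\Sigma x^a.\\Sigma y^a. I(x,y)$, i.e. $a$ is isomorphic to the graph of the identity.\n\\end{remark}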
\n\n% A proof for the following result can be found in \\cite[Section 3, sublemma 3.2.3.3]{seely1984locally}.\n% \\begin{lemma}\\label{lemma:CM5}\n%   If $a$ is a closed type, $b(x^a)$ a type in $P(M)(a)$, then there is a bijection between the set of terms $$\n% \\end{lemma}\n\n\n\\begin{proof}[Proof of proposition \\ref{lemma:hyperdoc}]\n  We have to define two natural isomorphisms. We check that $\\Psi\\circ\\Phi(f:b\\to a) = \\pi: \\Sigma x^a. \\Sigma y^b.I(x,f(y)) \\to a$. We define the counit $\\varepsilon$ using lemma \\ref{lemma:CM3}: it provides an isomorphic object $\\tilde \\pi : b \\to a$, with the arrow given by composition with the isomorphism. Using the equality rules we can check that in fact $\\tilde \\pi = f$, which provides the counit. Naturality follows from the definitions.\\\\\n\nFor the unit $\\eta$, proceed similarly from lemma \\ref{lemma:CM4}.\n\n\\end{proof}\n\\begin{theorem}\n  $\\CC(M)$ is locally Cartesian closed.\n\\end{theorem}\n\\begin{proof}\n  We prove that, as $C$-indexed categories, $C\\cong P(M)$. Then, as $P(M)(a)$ is a Cartesian closed category, so is $\\CC(M)/a$ for each $a$.\\\\\n\n  Given proposition \\ref{lemma:hyperdoc}, it is only left to show that $\\Phi$ and $\\Psi$ commute with $f^*$. It suffices to show that:\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}\n          C/a \\ar[d,\"\\Phi\"]\\ar[r,\"f_C^*\"] & C/b\\\\\n          P(M)(a)\\ar[r,swap,\"f_P^*\"] &   P(M)(b)\\ar[u,\"\\Psi\"]\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n  where we use the notation $f_P^*$ and $f_C^*$ to differentiate the induced functors in the two indexed categories.   Let $t:c\\to a \\in Ob(C/a)$. We can see that\n  $$\\Psi\\circ f_P^*\\circ \\Phi(t) = \\pi: \\Sigma x^b. \\Sigma y^c.I(f(x),t(y)) \\to b,$$\n  which is precisely the pullback of $t$ along $f$, and thus equal to $f_C^*(t)$.\n\\end{proof}\n\n\n\\section{Interpretation of Martin-L\\\"of theories}\n\\label{InterpretationLCC}\nIn this last section we proceed to introduce the concept of interpretation of an ML theory in an LCC. We consider this of particular interest, since it shows a view, different from the one presented in the previous chapter, of the equivalences/pairings arising between type theories and categorical theories.\\\\\n\nWe define the key concept of an interpretation as done by Seely. We decided to extend this definition with numerous remarks, to provide another layer of understanding of the concept. Throughout this chapter, $M$ will represent an ML theory, and $C$ will represent an LCC. This chapter is based on \\cite[Section 4]{seely1984locally}.\n\n% \\begin{definition}[Definition]\n%   An interpretation $\\overline{\\cdot}: M\\to C$ consist of:\n%   \\begin{enumerate}\n%   \\item For a type-value function constant, $F$, with arguments of types $t_1,...,t_n,$ a morphism $\\phi: \\overline{F} \\to \\overline{t_n}$ of $C$.\n%   \\item For a term-valued function constant, $f$, with arguments of types $t_1,...,t_n$ and value of types $a$, a morphism $\\oveline{f}:$\n%   \\end{enumerate}\n% \\end{definition}\n\n\\begin{definition}[Interpretation of a Martin-L\\\"of theory]\nAn interpretation $\\overline{\\cdot}:M\\to C$ is a mapping such that:\n  \\begin{itemize}\n  \\item A closed type $b$ corresponds to an object $\\overline b\\in C$.\n  \\item A type $b(x^a)$ will be interpreted as an object $\\beta: \\overline{b} \\to \\overline{a}$ in $C/\\overline{a}$. \n  \\item For a type $b(x^a)$ mapped to $\\beta : \\overline b \\to \\overline a $, a term $t(x^a) : b(x^a)$ maps to a morphism $\\overline t$ such that:\n    \\[\n      \\begin{tikzpicture}\n        \\node {\\begin{tikzcd}\n            \\overline a \\ar[rr,\"\\overline t\"]\\ar[rd,swap, \"1_{\\overline a}\"] &  & \\overline b\\ar[ld, \"\\beta\"] \\\\\n            &   \\overline a &\n          \\end{tikzcd}};\n      \\end{tikzpicture}\n    \\]\n  \\end{itemize}\n\n\\end{definition}
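\n\\begin{remark}\n  For intuition (a sketch in $C=\\mathbf{Set}$): interpreting a type $b(x^a)$ as $\\beta:\\overline b\\to\\overline a$ amounts to viewing it as the family of fibers $(\\beta^{-1}(x))_{x\\in \\overline a}$, and a term $t(x^a):b(x^a)$, being a section of $\\beta$ as in the diagram above, picks one element of each fiber.\n\\end{remark}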
\n\\begin{remark}\n  Note that a type $t$ not depending on any variable is, in fact, a type depending on a variable $x^1$. Remember the original foundation of ML theories as proof theories, and recall the fact that to prove a type is the same as to prove that it is derivable from truth itself. Similar observations can be made for terms. \n\\end{remark}\n\n\\begin{remark}\n  Note that, when mapping a type, the codomain of the arrow is fixed, but neither the domain nor the arrow itself is. Thus, for each type we need to assign two new symbols: a name for the domain and a name for the arrow. This causes a notational problem. We decided to alleviate it by sending a type written with a Latin letter to the equivalent Greek letter for the arrow, and to the overlined Latin letter for the domain. Some of the pairings that will appear in this section are shown in figure \\ref{faketable:mul}.\n  \n  \\begin{figure}[!h]\n    \\begin{center}\n      \\begin{tabular}{|c|c|c|} \n        \\toprule\n        Type & Arrow & Domain \\\\\n        \\midrule\n        $a$ & $\\alpha$ & $\\overline a$\\\\\n        $b$ & $\\beta$ & $\\overline b$\\\\\n        $r$ & $\\rho$ & $\\overline r$\\\\\n        $s$ & $\\sigma$ & $\\overline s$\\\\\n        $t$ & $\\tau$ & $\\overline t$\\\\\n        $x$ & $\\chi$ & $\\overline x$\\\\\n        \\bottomrule\n      \\end{tabular}\n    \\end{center}\n    \\caption{Common pairings.} \\label{faketable:mul}\n  \\end{figure}\n\n\\end{remark}\n\n\\begin{remark}\\label{multiple-def}\n  Let us analyze the considerations that are held whenever considering a type $t_{n+1}(x_1^{t_1},...,x_n^{t_n})$ that satisfies the condition on variables.  A closed type corresponds to an object $\\overline t_1\\in C$. Then we proceed to consider $t_2(x_1^{t_1})$, so that $t_2(x_1^{t_1})$ is interpreted as a morphism $\\tau_2:\\overline t_2\\to \\overline t_1$. The type $t_{3}(x_1^{t_1}, x_2^{t_2})$ is mapped to $\\tau_3: \\overline{t_3}\\to \\overline{t_2}$. 
This also induces a morphism $\\tau_3' = \\langle \\tau_2\\tau_3, \\tau_3\\rangle$.\\\\\n\n  Similarly for terms: any $f:t_3(x_1^{t_1}, x_2^{t_2})$ will end up interpreted as\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}\n          \\overline t_2 \\ar[rr,\"\\overline f\"]\\ar[rd,swap, \"1_{\\overline t_2}\"] &  & \\overline t_3 \\ar[ld, \"\\tau_3\"]\\ar[rd, \"\\tau_2\\tau_3\"]  \\\\\n          &   \\overline t_2\\ar[rr, swap, \"\\tau_2\"] & & \\overline t_1\\\\\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n\n  Thus, a term $f:t$ without any more context will be considered, in maximum generality, as an arrow:\n  \\[\n    \\begin{tikzpicture}\n      \\node {\\begin{tikzcd}\n          \\overline z \\ar[rr,\"\\overline f\"]\\ar[rd,swap, \"\\chi\"] &  & \\overline t \\ar[ld, \"\\tau\"]  \\\\\n          &   \\overline x & \\\\\n        \\end{tikzcd}};\n    \\end{tikzpicture}\n  \\]\n  \n\\end{remark}\n\n\n\\begin{definition}[Interpretation of a Martin-L\\\"of theory - second part]\\label{def:terminando}\n  The following equalities must be true:\n  \\begin{itemize}\n  \\item The substitution of a variable in a type is defined as $\\overline{a(t)} = \\overline t^* \\overline a$, where $a$ is a type depending on a variable $x^b$ and $t : b$ is a term.\n  \\item The following type formation rules hold:\n    \\begin{enumerate}\n    \\item $\\overline 1 = \\top$, the terminal object in $C$.\n    \\item $\\overline{I(x_1^a,x_2^a)} = \\delta_{\\overline{a}}: \\overline{a} \\to \\overline{a}\\times \\overline{a}$, where $\\delta_{\\overline{a}}$ is the diagonal morphism as in definition \\ref{daigon-alley}.\\footnote{In view of substitution, $I(a,b)$ would be the equalizer.}\n  \\item If $a,b(x^a)$ are types interpreted as $\\alpha: \\overline a \\to \\overline x$, $\\beta: \\overline b \\to \\overline a$ then:\n    $$\\overline{\\Pi x^a. b(x)} = \\Pi_\\alpha \\beta, \\qquad \\overline{\\Sigma x^a. b(x)} = \\Sigma_\\alpha \\beta$$\n  \\end{enumerate}\n\\item The following term formation rules hold:\n  \\begin{enumerate}\n  \\item $\\overline * = id_\\top: \\top \\to \\top$.\n  \\item If $t(x^a) : b(x^a)$ then $\\overline{\\lambda x^a. t(x)} = \\Pi_\\alpha \\overline t$.\\\\\n\n    Similarly, let $a$ be a type interpreted as $\\alpha : \\overline a \\to \\overline x$, where $x$ represents any possible variable upon which $a$ depends. Let also $b(y^a)$ be a type interpreted by $\\beta : \\overline b \\to \\overline a$, and let $g: a$, $f:\\Pi y^a.b(y)$ be terms interpreted by $\\overline g: \\chi \\to \\alpha $ and  $\\overline f: \\chi \\to \\Pi_\\alpha \\beta$, for some $ \\chi: \\overline z \\to \\overline x$. We define $\\overline{f(g)}: 1_{\\overline z} \\to \\overline g^* \\beta$ as follows.\\footnote{Without loss of generality we assume that $g$ and $f$ depend on the same variables; if not, add dummy variables.}\n\n Since $g$ is a term of type $a$ we have that, as in remark \\ref{multiple-def},\n      \\[\n        \\begin{tikzpicture}\n          \\node {\\begin{tikzcd}\n              \\overline z \\ar[rr,\"\\overline g\"]\\ar[rd,swap, \"\\chi\"] &  & \\overline a\\ar[ld, \"\\alpha\"] \\\\\n              &   \\overline x &\n            \\end{tikzcd}};\n        \\end{tikzpicture}\n      \\]\n\n      we can check that $\\alpha \\overline g = \\chi$, and so, viewing $\\overline g$ as an object of $C/\\overline a$, we have $\\Sigma_\\alpha \\overline g = \\chi$. 
By the adjunction $\\Sigma_\\alpha \\dashv \\alpha^*$, we can get a morphism $$\\varphi: \\overline g \\to \\alpha^*\\chi$$ in $C/ \\overline a$.\n      We can proceed analogously with the adjunction relative to $\\overline f: \\chi \\to \\Pi_\\alpha \\beta$, obtaining a morphism $\\psi: \\alpha^* \\chi \\to \\beta$. We finish by composing and applying adjointness once more.\n\n    \\item With $a,b(x^a)$ as in the previous point, let $g: a$, $f:b(g)$ be terms interpreted as $\\overline g: \\chi \\to \\alpha $ and  $\\overline f: 1_{\\overline z} \\to \\overline g ^* \\beta$, for some $ \\chi: \\overline z \\to \\overline x$, supposing again that $g,f$ depend on the same variables. Then $\\overline{\\langle g, f\\rangle}$ is defined by the morphism induced by $\\overline f$, seen by adjointness as a morphism $\\Sigma_{\\overline g} 1_{\\overline z}\\to \\beta$. Finish by applying $\\Sigma_\\alpha$ and using again that $\\alpha\\overline g = \\chi.$\\\\\n\n      For $\\Sigma x^a.b(x)$, we define  $\\overline \\pi  = \\beta: \\Sigma_\\alpha \\beta \\to \\alpha$, and $\\overline {\\pi'}$ as the identity.\n    \\item $\\overline {r(x^a)} = 1_{\\overline a}$. Hence, for a type $a$ interpreted as $\\alpha: \\overline a \\to \\overline x$, and a term $g : a$ interpreted as $\\overline g : \\chi \\to \\alpha$ with $\\chi : \\overline z \\to \\overline x$, we have $\\overline {I(g,g)} = (Eq(g,g)\\to \\overline z)$ and $\\overline {r(g)} = 1_{\\overline z}$, with $Eq$ being the equalizer\\cite{nlab:equalizer} of the arrows. For this, working in the context $[x^a, y^a, z^{I(x,y)}]$, we want to define a \\emph{substitution morphism} $b(f)\\times I(f,g) \\to b(g)$, with standard $a,b,g,f$. This is done as follows. If, for some $\\chi: \\overline z \\to \\overline x$, we have $g,f : a$ interpreted as $\\overline g,\\overline f:\\chi \\to \\alpha$, then the substitution morphism is defined by $\\overline g^*\\beta \\times \\overline {\\langle g,f \\rangle} ^* \\overline I.$\n    \\end{enumerate}\n\\end{itemize}\n\\end{definition}
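\n\n\\begin{remark}\n  Continuing the $\\mathbf{Set}$ intuition (an illustrative sketch): reading $\\beta$ as the family of fibers $(\\beta^{-1}(x))_{x\\in\\overline a}$, the rules above interpret $\\Sigma x^a.b(x)$ as the disjoint union of the fibers, $\\Pi x^a. b(x)$ as their product, and the substitution rule $\\overline{a(t)} = \\overline t^*\\overline a$ as the pullback selecting the fiber over the element picked out by $t$.\n\\end{remark}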
\n\nLet us review the progress so far. In this section we have defined the concept of interpretation. We have a way of looking at an ML theory within a category. After that, we have seen how to define the operations proper to an ML theory within a category. So finally it remains for us to check that the equivalences hold.\n\n\\begin{proposition}[Soundness]\nAll equality rules in definition \\ref{def:ml-equality} are valid under an interpretation.\n\\end{proposition}\n\n\nThis is what is known as soundness of interpretation. This result is enunciated in \\cite[proposition 4.5]{seely1984locally} but is not proven in detail. A proof can be found in the equivalent form of the interpretation of LPCE in hyperdoctrines in \\cite{seely1983hyperdoctrines}.\n\n\\begin{remark}\\label{rem:induced-functor} An interesting property can be derived from soundness. Any interpretation $\\overline{\\cdot}: M \\to C$ induces an LCC-preserving functor $F:\\CC (M) \\to C$, as $\\overline{\\cdot}$ sends closed types of $M$ (objects of $\\CC (M)$) to objects of $C$, and similarly with terms. The soundness of the interpretation then provides the preservation of the LCC structure. \n\\end{remark}\n\n\nFinally, we introduce the concept of canonical interpretation.\n\n\\begin{definition}[Canonical interpretation]\n  Let $M$ be a Martin-L\\\"of theory. We define the canonical interpretation $\\tilde \\cdot: M\\to \\CC(M)$ as the interpretation such that:\n  \\begin{itemize}\n  \\item $\\tilde 1 = 1$.\n  \\item A closed type corresponds to itself as seen in $\\CC(M)$.\n  \\item If $a, b(x^a)$ are types, then $b(x)$ is interpreted as the projection  $\\pi$ related to  $\\Sigma x^a.b(x)$\\footnote{This refers to the intuitive idea of identifying $b(x)$ as the subset of $a$ such that property $b$ is satisfied.}.\n  \\item $\\widetilde{I(x,y)} = \\Sigma x^a.\\Sigma y^a. I(x,y) \\to a\\times a$, supposing that $a$ is closed. As they are closed types, $\\Pi x^a.b(x)$ and $\\Sigma x^a.b(x)$ are interpreted as themselves.\n  \\item Each term is interpreted as its associated morphism (remembering that a closed term $t:a$ is equivalent to a function term of type $1\\to a$).\n  \\end{itemize}\n\\end{definition}\n\n\\begin{proposition}\\label{ultima-label}\n  Given any interpretation $\\overline \\cdot$, there is a unique functor $F:\\CC (M) \\to C$ such that the following diagram commutes:\n        \\[\n        \\begin{tikzpicture}\n          \\node {\\begin{tikzcd}\n              M \\ar[rr,\"\\tilde \\cdot \"]\\ar[rd,swap, \"\\overline \\cdot \"] &  & \\CC (M) \\ar[ld, dashed, \"F\"] \\\\\n              &   C &\n            \\end{tikzcd}};\n        \\end{tikzpicture}\n      \\]\n\\end{proposition}\n\\begin{proof}\nDefine $F$ as the functor induced as in remark \\ref{rem:induced-functor}. Commutativity of the diagram follows from the definition of $\\tilde \\cdot$.\n\\end{proof}\n\n\\begin{definition}\nFor any ML theory $M$ and any LCC category $C$, we define $\\Int(M,C)$ as the set\\footnote{We can consider bigger universes if necessary.} of interpretations $M\\to C$.\n\\end{definition}\n\n\\begin{definition}\n  For ML theories $M,M'$,  an interpretation $M\\to M'$ is an interpretation $M\\to \\CC (M')$. Thus we define ML as the category of ML theories along with interpretations.\n\\end{definition}\n\\begin{remark}\n  If $i_1: M \\to M'$ and $i_2: M' \\to M''$ are interpretations, by proposition \\ref{ultima-label} there is a functor $F: \\CC(M') \\to \\CC(M'')$ associated to $i_2$. Then we define $i_2 \\circ i_1 = F \\circ i_1$.\n\\end{remark}\n\n\\begin{proposition}\\label{que-cansaito-que-estoy}\n  \\begin{enumerate}\n  \\item For any ML theory $M$, and any $LCC$ category $C$, we have that\n    $$  \\Int (M,C) \\cong \\hom_{LCC} (\\CC(M), C)$$\n  \\item For ML theories $M,M'$ we have that:\n    $$ \\hom_{ML}(M,M') \\cong \\hom_{LCC}(\\CC(M),\\CC(M'))$$\n  \\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n  \\begin{enumerate}\n  \\item Follows from proposition \\ref{ultima-label}: an interpretation corresponds to its induced functor, and conversely a functor $F$ gives the interpretation $F\\circ\\tilde\\cdot$.\n  \\item Just observe that    $$ \\hom_{ML}(M,M') \\cong \\Int(M, \\CC(M'))\\cong \\hom_{LCC}(\\CC(M),\\CC(M'))$$\n  \\end{enumerate}\n\\end{proof}\n\\begin{remark}[remark 4.8.1 \\cite{seely1984locally}]\n  $\\Int$ is in fact functorial in both variables, and the equalities in proposition \\ref{que-cansaito-que-estoy} are natural in each variable.\n\\end{remark}\n\nFinally, we note that we can repeat the same ideas carried out for interpretations to introduce the concept of \\emph{the theory of a locally Cartesian category}. The idea is to consider the ML theory whose interpretation is the identity. As the ideas are similar to those previously explained, we will avoid most of the steps. 
A somewhat more extended explanation can be found in \\cite[Section 5]{seely1984locally}.\n\n\\begin{definition}\n  Let $C$ be an LCC. We define the ML theory $\\MM (C)$ as the Martin-L\\\"of theory such that:\n  \\begin{itemize}\n  \\item As closed types it has the objects of $C$; a type with a free variable $x^a$ is an object of $C/a$, and successive constructions proceed as in remark \\ref{multiple-def}.\n  \\item Term-valued functions are given by the arrows of $C$.\n  \\item It has type and term formation rules as in definition \\ref{def:terminando}.\n  \\end{itemize}\n\\end{definition}\n\n\\begin{proposition}\n$\\MM (C) $ is an ML theory.\n\\end{proposition}\n\n\\begin{definition}\\label{def:ml-equivalent}\n    \n  Two ML theories $M,M'$ are equivalent if the functors $\\Int(M,\\cdot)$ and $\\Int(M',\\cdot)$ are naturally isomorphic. \n\\end{definition}\n\nFinally, we present the derived equivalences:\n\n\n\\begin{theorem}[Theorem 6.1 \\cite{seely1984locally}]\\label{ml:6.1}\nIf $C$ is an LCC, then $C\\cong \\CC(\\MM(C))$. \n\\end{theorem}\n\\begin{proof}\n  By definition, the closed types and terms of $\\MM(C)$ (objects and arrows of $\\CC(\\MM(C))$) are the objects and arrows of $C$.\n\\end{proof}\n\n\\begin{theorem}[Theorem 6.2 \\cite{seely1984locally}]\\label{ml:6.2}\nIf $M$ is a Martin-L\\\"of theory, then $M\\cong \\MM(\\CC(M))$. \n\\end{theorem}\n\\begin{proof}\nIt follows from theorem \\ref{ml:6.1} and definition \\ref{def:ml-equivalent}.\n\\end{proof}\n\nThus, the categories of ML theories and of LCC categories are equivalent.\n\n", "meta": {"hexsha": "e56d456b59379c6b50ec72aa0f8cb0ff948587c6", "size": 48989, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/Chapters/Chapter5.tex", "max_stars_repo_name": "pedrobn23/Master-thesis", "max_stars_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-02T13:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-20T16:41:28.000Z", "max_issues_repo_path": "thesis/Chapters/Chapter5.tex", "max_issues_repo_name": "pedrobn23/Master-thesis", "max_issues_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/Chapters/Chapter5.tex", "max_forks_repo_name": "pedrobn23/Master-thesis", "max_forks_repo_head_hexsha": "372fad59dfe8b96269d37d7a67767d633daef673", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.0379084967, "max_line_length": 827, "alphanum_fraction": 0.69013452, "num_tokens": 15153, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.7057850340255386, "lm_q1q2_score": 0.5561100527559935}}
{"text": "\\section{Input of Example Plugin for RAVEN}\nIn this section the developer of the ExternalModel plugin should report information\non how to use it in a RAVEN calculation. As mentioned in the previous section, a simple \nModel that performs a summation of exponentials over time (or any monotonic variable) has been developed:\n\\newline\n \\begin{math}\n        Xi(t)=\\sum_{i=1}^{n} coef_i*e^{var_i*t}\n  \\end{math}\n  \\newline\nIn this section, the input of the plugin must be reported. As an example, we report the\ninput of the Example Plugin ``SumOfExponential'' that we use as a template.\n\n\\subsection{Input of SumOfExponential ExternalModel plugin}\n\nThe input of SumOfExponential is an XML file. An example of the input structure is given in Listing \\ref{lst:InputExample}. The following section will discuss the\n different keywords in the input and describe how they are used in the SumOfExponential plugin.\n\n\\begin{lstlisting}[style=XML,morekeywords={anAttribute},caption=SumOfExponential \n  input example., label=lst:InputExample]\n  <ExternalModel name=\"a_name\" subType=\"SumOfExponential\">\n      <variables> Xi, monotonicVariable, var1, var2, var3</variables>\n      <!-- xml portion for this plugin only -->\n      <outputVariable>\n        Xi\n      </outputVariable>\n      <monotonicVariable>\n        time\n      </monotonicVariable>\n      <startMonotonicVariableValue>\n        0.0\n      </startMonotonicVariableValue>\n      <endMonotonicVariableValue>\n        1e6\n      </endMonotonicVariableValue>\n      <numberCalculationPoints>\n        1000000\n      </numberCalculationPoints>\n      <coefficient varName=\"var1\">1.1</coefficient>\n      <coefficient varName=\"var2\">-1.1</coefficient>\n      <coefficient varName=\"var3\">-1.1</coefficient>\n </ExternalModel>\n\\end{lstlisting}\n\nAs one can see, all the specifications of the SumOfExponential plugin are given in the \n\\xmlNode{ExternalModel} block. Inside the \\xmlNode{ExternalModel} block,  the XML\nnodes that belong to this plugin only (and not to the ExternalModel) are:\n\\begin{itemize}\n  \\item  \\xmlNode{outputVariable}, \\xmlDesc{string,\n  required parameter}, the name of the output variable (e.g. $Xi$)\n  \\item  \\xmlNode{monotonicVariable}, \\xmlDesc{string,\n  required parameter},  the name of the monotonic variable (e.g. $time$)\n  \\item  \\xmlNode{startMonotonicVariableValue}, \\xmlDesc{float,\n  required parameter}, the starting value of the monotonic variable (e.g. time)\n  \\item  \\xmlNode{endMonotonicVariableValue}, \\xmlDesc{float,\n  required parameter}, the ending value of the monotonic variable (e.g. time)\n  \\item  \\xmlNode{numberCalculationPoints},\\xmlDesc{int,\n  required parameter}, the number of steps in the calculation (e.g. number of time \n  steps).\n  \\item  \\xmlNode{coefficient}, \\xmlDesc{float,\n  optional parameter}, the $i$-th coefficient for the exponential function ($coef_i$).\n  Default value is $1.0$. The user can input a coefficient for each variable  of the model. \n  The mapping between the $coef_i$ and the associated variable is defined by the \n  attribute $varName$:\n  \\begin{itemize}\n    \\item \\xmlAttr{varName}, \\xmlDesc{required string attribute}, variable this coefficient\n    is linked to.\n  \\end{itemize}\n\\end{itemize}\n
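\nAs a quick orientation for users, the model output can also be reproduced outside of RAVEN with a few lines of Python (an illustrative sketch only; the values sampled for $var_i$ below are hypothetical):\n\n\\begin{lstlisting}[language=Python,caption=Illustrative stand-alone evaluation of the model., label=lst:StandAloneEval]\nimport numpy as np\n\n# coefficients from the example input above; var values are hypothetical samples\ncoefs = np.array([1.1, -1.1, -1.1])\nvariables = np.array([1.0e-6, 2.0e-6, 3.0e-6])\nt = np.linspace(0.0, 1.0e6, 1000000)\n\n# Xi(t) = sum_i coef_i * exp(var_i * t)\nxi = np.sum(coefs[:, None] * np.exp(variables[:, None] * t[None, :]), axis=0)\n\\end{lstlisting}\n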
\n", "meta": {"hexsha": "76225c9b601bd91d1b8054b026785314bdf98142", "size": 3201, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "plugins/ExamplePlugin/doc/include/ExamplePluginDocumentation.tex", "max_stars_repo_name": "rinelson456/raven", "max_stars_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 159, "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_issues_repo_path": "plugins/ExamplePlugin/doc/include/ExamplePluginDocumentation.tex", "max_issues_repo_name": "rinelson456/raven", "max_issues_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1667, "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_forks_repo_path": "plugins/ExamplePlugin/doc/include/ExamplePluginDocumentation.tex", "max_forks_repo_name": "rinelson456/raven", "max_forks_repo_head_hexsha": "1114246136a2f72969e75b5e99a11b35500d4eef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 95, "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "avg_line_length": 45.7285714286, "max_line_length": 162, "alphanum_fraction": 0.7382068104, "num_tokens": 850, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7057850278370111, "lm_q1q2_score": 0.5561100408504699}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\n\\begin{document}\n\n\\chapter{Hierarchical Clustering Model} \\label{hierarchical_clustering_model}\n\n% Quick overview of HCA again\n% Distance metric\n\n% Talk about maintaining \"unsupervised\" nature; build search space now evaluate next\n% Criteria being varied in search space; linkage [4], number of sectors [15; range from 5-19]\n% Visualize search space\n\n\nIn the previous chapter, we evaluated major families of learning methods and identified the Hierarchical Clustering Analysis (hereafter \\textit{HCA}) algorithm to be the method best aligned with the research goals of the project. Here we outline the specifics of our approach to applying HCA to our model input data, and build a search space of candidate universes to be evaluated, fully addressing RG-1.\n\n\\section{HCA Overview}\n\nAs outlined in Section~\\ref{learning_methods_survey:hca}, hierarchical clustering is a greedy learning algorithm which seeks to construct a hierarchy of clusters. The greedy nature of this algorithm results in extremely high computational complexity for any given model fit, but the algorithm is extremely stable in its solution. Furthermore, it is an $\\mathcal{O}(1)$ complexity operation to extract classifications of varying arity due to the persistent hierarchical nature of the algorithm.\n\nOne of the main requirements of our heuristic is the ability to create sector universes with varying numbers of sectors. Because of this, we elected to utilize an Agglomerative approach to clustering. That is, we utilize a \\textit{bottom-up} HCA model, where each company begins in its own sector, with larger clusters derived at each successive step of the tree by merging existing pairs of clusters.\n\nAny given HCA algorithm tree is parameterized by two distinct settings: the distance metric and the linkage method. To best understand the potential candidate universes that may be generated by this HCA-driven classification heuristic, we analyzed each of these model settings in turn:\n\n\\subsection{Distance Metric}\n\nThe distance metric is the measure of the distance between pairs of observations. This setting primarily affects the shape of the clusters. Due to the fact that our model input data is exclusively in monetary units (i.e. United States Dollars), we do not intend to transform the existing metric of wealth reflected by the dollar value measurement. Thus, we chose to use the $\\ell^2$ (i.e. Euclidean) distance metric for our heuristic.\n\n\\begin{gather*}\n    \\text{Let $\\boldsymbol{p}, \\boldsymbol{q}$} = \\text{Cartesian coordinates $\\boldsymbol{p} = (p_1, \\ldots, p_n)$ and $\\boldsymbol{q} = (q_1, \\ldots, q_n)$ where $\\{\\boldsymbol{p}, \\boldsymbol{q}\\} \\in \\mathbb{R}^{n \\times 2}$} \\\\\n    \\text{Let $dist(\\boldsymbol{p},\\boldsymbol{q})$} = \\text{$\\ell^2$ (i.e. Euclidean) distance between points $\\boldsymbol{p}$ and $\\boldsymbol{q}$} \\\\\n    \\\\\n    \\Rightarrow dist(\\boldsymbol{p}, \\boldsymbol{q}) = dist(\\boldsymbol{q}, \\boldsymbol{p})\n    = \\sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + \\cdots + (p_n - q_n)^2}\n    = \\sqrt{\\sum_{i=1}^n (p_i - q_i)^2}\n\\end{gather*}\n\n\\subsection{Linkage Method}\n\nThe second setting governing the behavior of the HCA algorithm is the selection of a linkage method. The linkage is a measure of distance between sets of observations as a function of the pairwise distances between observations. 
There are four major linkage method choices that we evaluate in our HCA model:\n\n\n    $$ \\text{Let $A, B, C, X, Y$} = \\text{Sets (i.e. clusters) of observations} $$\n    $$ \\text{Let $C$} = X \\cup Y $$\n    $$ \\text{Let $N$} = |A| + |X| + |Y|, \\; \\text{where} \\; |\\alpha| = \\text{Cardinality}(\\alpha) $$\n    \n\\hspace{7em} \\textbf{Single Linkage\\citeFormat{\\cite{Sibson1973SLINK:Method}}:}\n        $$ d_\\text{SLINK}(A, B) = \\text{min} \\; dist(\\boldsymbol{a}, \\boldsymbol{b})  \\; \\forall \\; \\{ \\boldsymbol{a}, \\boldsymbol{b} : \\boldsymbol{a} \\in A, \\boldsymbol{b} \\in B \\} $$\n    \n\\hspace{7em} \\textbf{Complete Linkage\\citeFormat{\\cite{Defays1977AnMethod}}:}\n        $$ d_\\text{CLINK}(A, B) = \\text{max} \\; dist(\\boldsymbol{a}, \\boldsymbol{b})  \\; \\forall \\; \\{ \\boldsymbol{a}, \\boldsymbol{b} : \\boldsymbol{a} \\in A, \\boldsymbol{b} \\in B \\} $$\n    \n\\hspace{7em} \\textbf{Average Linkage\\citeFormat{\\cite{Seifoddini1989SingleApplications}}:}\n        $$ d_\\text{ALC}(A, B) = \\frac{1}{|A| \\cdot |B|} \\sum_{\\boldsymbol{a} \\in A} \\sum_{\\boldsymbol{b} \\in B} dist(\\boldsymbol{a}, \\boldsymbol{b}) $$\n    \n\\hspace{7em} \\textbf{Ward (variance minimization) Linkage\\citeFormat{\\cite{Ward1963HierarchicalFunction}}:}\n        $$ d_\\text{WARD}(C, A) = \\sqrt{ \\frac{|A| + |X|}{N} d_\\text{WARD}(A, X)^2 + \\frac{|A| + |Y|}{N} d_\\text{WARD}(A, Y)^2 - \\frac{|A|}{N} d_\\text{WARD}(X, Y)^2 } $$\n\n\\section{Unsupervised Learning Approach}\n\n\\begin{wrapfigure}[14]{r}{0.45\\textwidth}\n    \\centering\n    \\vspace{\\wrapfigadjustment}\n    \\fbox{\n    \\includegraphics[width=.9\\linewidth]{images/tree.png}\n    }\n    \\caption{Dendrogram of a sample hierarchical clustering model result.}\n    \\label{fig:hierarchical_clustering_model:sample_dendogram}\n\\end{wrapfigure}\n\nAs per RG-1 (see Section~\\ref{research_goals:specific_research_goals}), we seek to create an entirely objective classification heuristic. Logically, this implies that we utilize an entirely nonparametric approach when designing the classification heuristic. However, as discussed above, the HCA algorithm is parameterized by both the distance metric, and the linkage method (in addition to a posterior selection of the number of sectors).\n\nTo work around the semi-supervised nature of the learning method, we elected to utilize HCA to build a search space of potential candidate sector universes. Following this, we will address our second research goal, RG-2, to rank these candidate sector universes against each other to determine the optimal learned sector classification.\n\nNote that this search-space generation varies the sector count and linkage method parameters of the HCA model, but not the distance metric. This is due to the fact that we wish to preserve the monotonic and geometric difference of magnitudes of wealth implied by the dollar values of our input data.\n\nFigure~\\ref{fig:hierarchical_clustering_model:sample_dendogram} is a dendrogram of a sample HCA model generated on a subset of the model input data. Despite having a very high time complexity for model training, the HCA model thrives in its ability to extract sector classifications of varying arity from a single fitted model. Its ability to perform this action in constant time complexity greatly enhanced our ability to generate a large search space of candidate learned sector universes.
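\n\nFor orientation, the following is a minimal sketch (not our production pipeline; the data shapes and parameter values are illustrative) of how a single linkage tree can be built once and then cut at several arities using SciPy:\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.cluster.hierarchy import linkage, fcluster\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(50, 4))   # 50 companies, 4 illustrative features\n\n# Build the tree once per linkage method\n# ('single', 'complete', 'average', 'ward')\nZ = linkage(X, method='ward', metric='euclidean')\n\n# Cutting the same tree at different arities is cheap\n# compared to refitting the model\nfor n_sectors in range(5, 20):\n    labels = fcluster(Z, t=n_sectors, criterion='maxclust')\n\\end{verbatim}\n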
\n\n\\pagebreak\n\n\\section{Learned Sector Universe Search Space}\n\n\n\\begin{figure}[h]\n    \\centering\n    \\fbox{\n    \\includegraphics[width=.8\\linewidth]{images/search_space_partial.png}\n    }\n    \\caption{Candidate learned sectors partial search space visualization.}\n    \\label{fig:hierarchical_clustering_model:partial_search_space}\n\\end{figure}\n\nWith the end goal of building a comprehensive search space of candidate learned sector classifications, we generated HCA models parameterized with each of the linkage methods, and then isolated sector classifications for varying numbers of sectors. Specifically, we varied the number of sectors in our search space universes with $N = \\{5, 6, \\ldots, 19 \\}$ for each of the four linkage methods, for a total of 60 candidate learned sector universes. A subset of our search space is visualized in Figure~\\ref{fig:hierarchical_clustering_model:partial_search_space}.\n\nThe HCA models were generated iteratively using the model input data discussed in Section~\\ref{model_data}. We utilized the built-in Hierarchical Clustering Module in \\textit{Scikit-learn}\\citeFormat{\\cite{Pedregosa2011Scikit-learn:Python}} to generate the models, and saved them in specially formatted CSV files (publicly available on the \\textit{reIndexer} website\\citeFormat{\\cite{Weerawarana2019ReIndexerUniverses}}) for later ingestion by the ranking system we developed to identify the optimal learned sector classification universe.\n\n\\end{document}", "meta": {"hexsha": "2357cf38c0d620c9fa574630212880b740f1c452", "size": 8099, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sections/06_hierarchical_clustering_model.tex", "max_stars_repo_name": "rukmal/FE-800-Learned-Sectors-Report", "max_stars_repo_head_hexsha": "f60f09dc689c4adc172b576a261df5ed77518f8f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-14T08:39:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T08:39:24.000Z", "max_issues_repo_path": "sections/06_hierarchical_clustering_model.tex", "max_issues_repo_name": "rukmal/MS-Thesis-Learned-Sectors-Report", "max_issues_repo_head_hexsha": "f60f09dc689c4adc172b576a261df5ed77518f8f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 39, "max_issues_repo_issues_event_min_datetime": "2019-05-13T22:30:23.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-17T03:37:55.000Z", "max_forks_repo_path": "sections/06_hierarchical_clustering_model.tex", "max_forks_repo_name": "rukmal/MS-Thesis-Learned-Sectors-Report", "max_forks_repo_head_hexsha": "f60f09dc689c4adc172b576a261df5ed77518f8f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-06-01T06:32:20.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-01T06:32:20.000Z", "avg_line_length": 82.6428571429, "max_line_length": 564, "alphanum_fraction": 0.7557723176, "num_tokens": 2106, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311856832191, "lm_q2_score": 0.7057850340255386, "lm_q1q2_score": 0.5561100386972138}}
{"text": "\\chapter{A chapter with equations, chemical formula, figures and tables}\n\\label{chap_main}\n\nThe previous chapter \\ref{chap_intro} contained only some text. Here some sample\nequations, chemical formula, figures, and tables\nare introduced.\n\n\\section{Section with equations and chemical formula}\n\\label{sec_eq_fig_tab}\n\nMaterials science is highly interdisciplinary. So you will have to write complex\nequations some condensed-matter physicists use like:\n\n\\begin{equation}\n\\expval{\\hat{O}}{\\Psi_e} = \\int \\dotsc \\int \\Psi_e^{*}(x_1,\\dotsc,x_N) \\hat{O}\n\\Psi_e(x_1,\\dotsc ,x_N)  dx_1 \\dotsi dx_N\n\\quad .\n\\label{main_eq1}\n\\end{equation}\n\nand,\n\n\\begin{equation}\n\\braket{\\Psi_e} = \\int \\dotsc \\int |\\Psi_e (x_1,\\dotsc,x_N)|^2  dx_1\\dotsi dx_N = 1\n\\quad .\n\\label{main_eq2}\n\\end{equation}\n\nSometimes, you would also have to write chemical formula like chemists:\n\n\\begin{equation}\n\\centering\n\\ch{H3O_{aq}+ + 2 H2O <-> H5O2_{(aq)}+ + H2O <-> H7O3_{(aq)}+} \\quad .\n\\label{main_eq3}\n\\end{equation}\n\nWith the appropriate tex packages defined, all this is possible.\n\n\\section{Section with tables and figures}\n\\label{sec_tab_figs}\n\nHere is a sample table and figure. For the table, I've taken data from my\nthesis \\cite{Surendralal2020} and references \\cite{Singh-miller2009, Sakong2018, Kittel1976, Salmeron1983}.\nNotice how in table \\ref{main_tab1}, I've rendered crystallographic directions using\nthe \"miller\" tex package.\n\n\\begin{table}[!tbh]\n\\begin{center}\n  \\begin{tabular}{llc}\n  \\toprule\n       &  a [\\AA{}]  &  $\\Phi\\mathrm{_{Pt\\hkl(111)}}$ [eV]  \\\\\n  \\midrule\n  PBE (this work) &  3.97  &    5.69  \\\\\n  PBE (ref. \\cite{Singh-miller2009}) & 3.99  &    5.69   \\\\\n  RPBE (this work) &  3.99 &   5.50 \\\\\n  RPBE (ref. \\cite{Sakong2018})&  3.99 &   5.51 \\\\\n  Experiments &  3.92 \\cite{Kittel1976} &   6.08 $\\pm$ 0.15 \\cite{Salmeron1983} \\\\\n  \\bottomrule\n  \\end{tabular}\n\\end{center}\n\\caption[Calculated lattice parameter of Pt and work function of the Pt\\hkl(1 1 1)\nsurface.]{The calculated lattice parameter of bulk Pt and the work function of the\nPt\\hkl(1 1 1) surface using different exchange-correlation functionals compared to\nexperimental values.}\n\\label{main_tab1}\n\\end{table}\n\n\\begin{figure}[!tbh]\n\\centering\n\\includegraphics[scale=1., height=8cm]{\\figpath/sinx.pdf}\n\\caption[A sinus curve]{A sinus curve to show how to include figures along with\ncaptions.} %The text in square brackets is what is rendered in the list of figures.\n\\label{main_fig1}\n\\end{figure}\n\n\\section{Section with code}\n\\label{sec_code}\n\nIf you are in a computational field, you might need to include some code in your\nthesis. The \"minted\" TeX package is useful for this. 
For example, some python code\ncan be inserted the following way:\n\n\\begin{minted}{python}\n    import numpy as np\n    import matplotlib.pylab as plt\n\\end{minted}\n\nYou could also insert inline code like so: \\mintinline{python}{numpy}.\n", "meta": {"hexsha": "cc3717ef4e525fd5209375b9b39ec8fa0009a9f6", "size": 2868, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/chapters/chap_2/chap_2.tex", "max_stars_repo_name": "ibrsam/matsci-thesis", "max_stars_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-17T01:21:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-17T02:41:58.000Z", "max_issues_repo_path": "source/chapters/chap_2/chap_2.tex", "max_issues_repo_name": "ibrsam/matsci-thesis", "max_issues_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-05-17T12:53:37.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-17T12:53:37.000Z", "max_forks_repo_path": "source/chapters/chap_2/chap_2.tex", "max_forks_repo_name": "ibrsam/matsci-thesis", "max_forks_repo_head_hexsha": "270b9917a398ae4004e35be1bba6a9c9800fd12f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-16T14:18:40.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-16T14:18:40.000Z", "avg_line_length": 32.2247191011, "max_line_length": 107, "alphanum_fraction": 0.7252440725, "num_tokens": 933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5561100359743362}}
{"text": "\\documentclass[a4paper,11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{algorithmic}\n\\usepackage{algorithm}\n\\usepackage{pst-plot}\n\\usepackage{graphicx}\n\\usepackage{endnotes}\n\\usepackage{graphics}\n\\usepackage{floatflt}\n\\usepackage{wrapfig}\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\usepackage{verbatim}\n\\usepackage{hyperref}\n\\usepackage{multirow}\n\\usepackage{pdflscape}\n\\usepackage{enumitem}\n\\usepackage[normalem]{ulem}\n\n\\hypersetup{pdfborder={0 0 0 0}}\n\n\\pdfpagewidth 210mm\n\\pdfpageheight 297mm \n\\setlength\\topmargin{0mm}\n\\setlength\\headheight{0mm}\n\\setlength\\headsep{0mm}\n\\setlength\\textheight{250mm}\t\n\\setlength\\textwidth{159.2mm}\n\\setlength\\oddsidemargin{0mm}\n\\setlength\\evensidemargin{0mm}\n\\setlength\\parindent{7mm}\n\\setlength\\parskip{0mm}\n\n\\newenvironment{exercise}[3]{\\paragraph{Exercise #1: #2 (#3pt)}\\ \\\\}{\n\\medskip}\n\\newcommand{\\question}[2]{\\setlength\\parindent{0mm}\\ \\\\$\\mathbf{Q_#1:}$ #2\\ \\\\}\n\n\\author{\\large{Ardi Tampuu, Tambet Matiisen, Raul Vicente}}\n\\title{\\huge{Introduction to Computational Neuroscience}\\\\\\LARGE{Practice session on Artificial Neural Networks}}\n\n\\begin{document}\n\\maketitle\n\n\\textbf{A request:} Please track how long it will take to complete this set of exercises. Add this time to your final report.\n\\ \\\\\n\n\\textbf{For Python users:} You can do this task with any neural network package you want (sklearn, Keras, Neon, Caffe). But to save your time - it is really easy to run the code in Matlab/Octave.\\\\\n\\ \\\\\n\\ \\\\\n\\ \\\\\n%\n% Intro\n%\nIn this session we are going to have a brief look at artificial neural networks. We start with the simplest artificial neuron model, called the perceptron. Then we will see how simple feed-forward neural networks can be thought of as universal function approximators and what their limitations are. Finally we will use an artificial neural network to predict a rat's location like we did in the Machine Learning practice session.\n\n%\n% Perceptron\n%\n\\begin{exercise}{1}{Perceptron}{1}\n\n\\begin{wrapfigure}{r}{0.3\\textwidth}\n\t\\centering\n\t\\vspace{-12pt}\n\t\\includegraphics[width=0.22\\textwidth]{perceptron.png}\n\t\\caption{Simple perceptron}\n\t\\label{fig:perceptronexample}\n\t\\vspace{-5pt}\n\\end{wrapfigure}\n\nThe perceptron is the simplest artificial neural network model, invented by Frank Rosenblatt in the late 1950s. He added a learning rule to the McCulloch-Pitts neuron that allows it to learn certain functions from example inputs and outputs.\n\nThe perceptron works on binary data \u2013 both its inputs $x_j$ and output $y$ are ones or zeros. Output 1 or 0 can be thought of as binary classification \u2013 whether the object represented by the given input belongs to a certain class or not. \n\nThe perceptron\u2019s weights $w_j$ can be any real numbers. Its prediction is calculated with the following formula:\n\n$$\ny = \\left\\{\n\t\\begin{array}{l l}\n\t\t1, \\text{if } x_1 w_1 + ... + x_m w_m + b \\geq 0\\\\\n\t\t0, \\text{otherwise}\n\t\\end{array}\n\t\\right.\n$$\n\nHere $b$ is the bias term that is added to the sum. In practice it is easier to just add an additional input, which is always one. Then we don\u2019t have to treat the bias as something special, it is just an additional weight to learn. The learning rule for the perceptron is very simple:\n\n$$\nw_j = w_j + (t_i - y_i) x_{ij}\n$$\n\nIndex $i$ is used to denote the $i$-th example data point. The learning rule must be applied for all data points and for each weight. You will continue updating the weights until all data points are classified correctly.
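\n\nA minimal illustrative sketch of this update loop in Python/NumPy (for orientation only \u2013 the actual exercise uses the provided \\texttt{perceptron.m}):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef train_perceptron(X, t, max_epochs=100):\n    # X: (n, m) binary inputs; t: (n,) binary targets\n    Xb = np.hstack([X, np.ones((len(X), 1))])  # always-one input for the bias\n    w = np.zeros(Xb.shape[1])\n    for _ in range(max_epochs):\n        errors = 0\n        for x_i, t_i in zip(Xb, t):\n            y_i = 1 if x_i @ w >= 0 else 0     # prediction\n            w += (t_i - y_i) * x_i             # perceptron learning rule\n            errors += int(y_i != t_i)\n        if errors == 0:                        # converged\n            break\n    return w\n\\end{verbatim}\n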
\nIt turns out that the perceptron is always able to successfully learn a classification rule for datasets which are \\textit{linearly separable} -- data points with label 1 and label 0 can be separated by a line (in case of two inputs), plane (in case of three inputs) or hyperplane (in case of input of any dimensionality). If the dataset is not linearly separable, the perceptron will never \\textit{converge} (settle to a certain set of weight values). \\newline\n\nExample code for this exercise is in \\texttt{perceptron.m}. Your task is to fill in the perceptron learning rule and decide for four example datasets (replace the dataset number on line 12) if they are linearly separable or not. Of course, in 2D one can do this by just looking at the data, but with higher-dimensional data this would not be the case. In your report you must include the final image with the decision boundary (the black line) for all four datasets. \\footnote{\\textbf{Decision boundary} is a line (plane/hyperplane) that separates the positive and negative datapoints - positives will be one one side and negatives on the other side} For linearly separable datasets also add the approximate number of steps it took for the perceptron to converge (to reach 0 errors).\n\n\\end{exercise}\n\n%\n% Sinusoid\n%\n\\begin{exercise}{2}{Function Approximation}{1.5}\n\nThe Universal Approximation Theorem states that any continuous function can be approximated to any desired precision by a feed-forward network with a single hidden layer containing a finite number of non-linear neurons. In practice this theorem is of no use, because:\n\\begin{itemize}\n  \\item it only states that such functions can be \\uline{represented} by a feed-forward network with one hidden layer. The theorem doesn\u2019t say anything about whether this approximation is \\uline{learnable} - whether our current algorithms can find the necessary weight values;\n  \\item the construction used in the proof uses a huge number of neurons in the hidden layer. This would be unreasonable for any practical application;\n  \\item as the theorem doesn\u2019t consider learnability, it also doesn\u2019t state anything about how well the network generalizes to samples beyond its training data.\n\\end{itemize}\n\nNevertheless the theorem is a nice concept to guide your thinking \u2013 if some problem can be described as a function calculating output based on several inputs, then it probably can be approximated reasonably well with an artificial neural network.\\newline\n\n\\begin{wrapfigure}{r}{0.3\\textwidth}\n\t\\centering\n\t\\vspace{-12pt}\n\t\\includegraphics[width=0.20\\textwidth]{sine.png}\n\t\\label{fig:sineexample}\n\t\\vspace{-5pt}\n\\end{wrapfigure}\n\nIn this task we are going to approximate the sine function using a neural network. This neural network is very simple \u2013 it consists of just one input node (the $x$ value), several hidden nodes and one output node ($y = \\sin(x)$).\n\nHidden nodes use the sigmoid activation function to achieve non-linearity (the nodes apply the sigmoid function to the weighted sum of their inputs). The output node is linear, no activation function is applied. The loss function is simply the squared error:\n\n$$\nL = \\frac{1}{2n}\\sum_{i=1}^{n} (target_i - predicted_i)^2\n$$\n\nIt calculates the average loss over all $n$ data points in the training set. \\newline\n
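\nFor Python users, a rough scikit-learn analogue of this setup (one hidden layer of five sigmoid units; an illustrative sketch, not the course code):\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.neural_network import MLPRegressor\n\nX = np.linspace(-2 * np.pi, 2 * np.pi, 20).reshape(-1, 1)  # 20 training points\ny = np.sin(X).ravel()\n\n# 5 hidden 'logistic' (sigmoid) units, linear output, squared-error loss\nnet = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',\n                   solver='lbfgs', max_iter=5000)\nnet.fit(X, y)\n\\end{verbatim}\n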
\nFor creating neural networks we are making use of the DeepLearnToolbox toolkit by Rasmus Berg Palm. The toolkit is already included with your download in the folder \\texttt{DeepLearnToolbox}. Feel free to explore the source in the \\texttt{DeepLearnToolbox/NN} folder, it is quite clean Matlab code.\\newline\n\n\nFor this exercise you need to run the code in \\texttt{sinusoid.m} with different numbers of nodes and answer the questions.\n\n\\question{1}{First run the \\texttt{sinusoid.m} exactly as it is. The script creates a neural network that tries to approximate the sine function between $-2\\pi$ and $2\\pi$, based on 20 training data points. The neural network has 5 hidden nodes. All the weights and biases are initialized with random values. It would be best if you understand more or less what the code does (read the comments).\nNotice that the script will generate a plot that shows the true shape of the sine, the training data points, and the current approximation made by the network. This plot is updated as we pass over the training data and learn the weights. After the learning has finished another plot appears, summarizing how the loss decreased during training. Add the two plots to the report. Answer the following questions:\n\\begin{itemize}\n\\item Does the neural network represent the function well within the range $-2\\pi$ to $2\\pi$? If not, how many of the curves of the sine does the network manage to capture well? (In some cases (especially in Matlab) it might capture the shape well, but if you run it many times it mostly shouldn't) \n\\end{itemize}\n}\n\\question{2}{Now run the script \\texttt{minimal\\_sinusoid.m}. In this script we also have a network of 5 nodes, but the weights and biases are not initialized randomly, but with useful values given by me (not learned by an algorithm). In this case you should see that the network learns to capture the shape of the sine really well. As you can see - the function can be represented well by a 5-node network, but the learning algorithm we used in Q1 did not allow us to find this approximation. \n\\begin{itemize}\n\\item Hypothesize why the network doesn't always learn a good solution if we start with randomly initialized weights. Here, uncommenting the line  \\texttt{plot\\_sinusoid\\_components(test\\_x,nn)} for the two cases might provide insights.\n\\item How good is the network at extrapolation (predicting values outside the range it had training data for)? Why is that not surprising? \n\\end{itemize}\n}\n\\question{3}{Increase the number of hidden nodes in \\texttt{sinusoid.m}, by replacing the number 5 with the desired number on line 28. \n\\begin{itemize}\n\\item How many hidden nodes do you need so that, starting from randomly initialized parameters, the learning algorithm converges to a good representation of the function (training loss$<0.1$)? Run the code a couple of times for each tested number of nodes and report the result if at least one trial is good. Add the plot with the datapoints and the approximation.\n\\end{itemize}\n}\n\\end{exercise}\n\n%\n% MNIST\n%\n\\begin{exercise}{3}{Rat position decoding}{1.5pt + 1.0pt + some bonus }\n\nFinally we will use an artificial neural network to predict a rat's location based on the recordings from its hippocampal neurons. Remember that in the Machine Learning practice session we worked with this dataset and linear discriminant analysis gave us an accuracy of around 35\\% on the test set. \n\nFor neural networks we are once again making use of the DeepLearnToolbox toolkit.\n\nThis time our network has 71 inputs that are the spike counts in different neurons during a period of time. 
In this first part of the task we divide the space into 16 squares and predict in which region the rat is located (like we did in the ML practice). This means the network has 16 outputs, each of which is a probability corresponding to one of the squares. Between the inputs and outputs we have one or more layers of hidden nodes with hyperbolic tangent nonlinearity. In order to guarantee that the 16 output probabilities sum up to 1, we use a function called softmax (Google it if interested). When teaching the network to predict the correct zone we minimize the cross-entropy loss (see lecture).\n\nAfter you have run the code and got initial results, your goal is to improve the classification accuracy. For that you would need to tune the learning rate, number of hidden nodes, number of epochs (iterations over the full data set) and other parameters. Finding good parameters for neural networks can be quite tedious and frustrating, so I will not ask you to do it alone; instead we will do it together:\n\n\\begin{enumerate}\n  \\item We start by modifying the \\textbf{learning rate}. This parameter decides how big a step we will take in the direction of the gradient during gradient descent. With the learning rate too high, our model will never converge. With LR too low, the model will take too long to reach a good solution and is also prone to getting stuck in a bad local minimum. Try $LR=1$ first and see that the loss graph is not stable - there are major zig-zags. This indicates that the learning rate is too high. Next we try 0.1. Still jumpy, but a little bit better. Finally we try 0.01 - very smooth, but the error goes down a lot slower. Add all plots and the prediction accuracies after 30 epochs (the code prints it out) to the report. Which LR would you choose (there is no absolute truth, so explain your choice)? Compare the results with the baseline (LDA accuracy).\n  \\item \\textbf{Momentum} is a method that allows better and more stable learning by remembering the global trends in weight changes - if all previous datapoints have suggested that a weight should be lowered, we decrease it faster than normal. If datapoints do not agree on how to change a certain weight, we change it only slowly. \\newline Set the value of momentum to 0.95. Run the code with LR=0.1 and LR=0.01 (you can even try 0.001). Which LR do you judge more adequate now (there is no absolute truth, so explain your choice)?\n  \\item With momentum 0.95 and LR 0.01 you should see \\textbf{overfitting}, which means that the network learns to predict well on training examples, but this knowledge doesn't generalize to the test set. You can detect this from the large gap between training and validation misclassification rate. To fight against overfitting we have two tools - the weight penalty and dropout. \nIf we brought in the weight penalty, we would again need to search for a good learning rate. So instead we use dropout (in this case you might need to increase the number of nodes, but that's the next bulletpoint). Dropout means that at every training step we do not consider some nodes (a certain proportion) in our network - we simply act as if they were not there. In every training step the left-out neurons are different. Dropout is so effective because it is similar to adding random noise to the network - it makes it impossible to simply remember the training data and forces the model to generalize. 
It also has similarity with ensembling techniques.\\newline Your task is to try \\textbf{dropout} percentages 0.2, 0.4, 0.6 and see if you can bring the validation error down (making the difference with the training error smaller is not enough - the only thing we care about is the validation accuracy). Add plots to the report.\n    \\item Once you have stabilized the learning, try to increase the \\textbf{number of hidden nodes} and see if the testing error improves. You can also add more layers by simply adding a number in nnsetup - for example \\texttt{$nn\\ =\\ nnsetup([71\\ 100\\ 100\\ 16]);$}  creates two layers of 100 nodes. More layers might sound appealing, but it makes the learning slower to converge, so be cautious with it. Add the plots and accuracies that you achieved with bigger networks.\n  \\item Finally, if the loss for the test set seems to still decrease at the end of training (end of 30 epochs), increase the \\textbf{number of epochs} and see how low the error can go. Usually it plateaus at some point and there is no reason to train further. (no particular task here)\n  \\item \\textbf{Your final task is} to get the validation accuracy above 0.42. There will be bonus points for the ones that get the best accuracy (it's a competition!).\n\\end{enumerate}\n\n-----------------------------------------------------\n\nIn the second part of this exercise we will not predict which of the 16 areas the rat is in, but the XY coordinates directly. This means we have 2 linear output nodes (nodes with no nonlinearity) predicting the coordinates rather than 16 outputs predicting probabilities (using softmax). When training the network we minimize the squared difference between the true coordinates and the predicted coordinates (mean squared error loss).\n\n\\begin{itemize}\n\\item Run the code \\texttt{rat\\_regression.m}. The program outputs the baseline error (how wrong the prediction is on average when using a linear model). After training a network the script will also print out how big the average error made by the neural network is. Report the two values and state which model is better. Add the plot of how the error changed during training. Notice that the error on the graph (the thing we minimize) is the mean squared error (MSE), whereas we finally care about the mean distance.\n\\item Try to improve the result (get a lower error). Make a hypothesis about what could help you - more neurons? higher or lower dropout? a different learning rate? Write down your hypothesis, run the models and report if it worked out or not. It is absolutely not important that you actually improve the result. The task is to explore. HINT: change only one parameter at a time, otherwise you will not know which change caused the benefit/problem. You need to report at least 3 different things you tried.\n\\item Again bonus points for the people who achieve the lowest error.\n\\end{itemize}\n\\end{exercise}\n\n\n\n\n\\ \\\\\n\\ \\\\\n\\ \\\\\n\\ \\\\\n\\ \\\\\nPlease submit a \\texttt{pdf} report with answers to the questions and comments about your solutions. YOU MUST INCLUDE THE CODE. Please mark how long it took to complete this set of exercises. 
Upload a zip/rar/etc with the \\texttt{pdf} and the code to the practice session page on the course website.\n\n\\end{document}\n", "meta": {"hexsha": "a3e581286cd9af6604d5f41a2182158fd19485a7", "size": 16809, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2016/Practices/06 - Artificial Neural Networks/text/cns-ann.tex", "max_stars_repo_name": "kuz/Computational-Neuroscience-Course", "max_stars_repo_head_hexsha": "b5657c8672397fa845dca88c2740277e7206cb5a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2015-01-24T01:14:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-04T20:40:00.000Z", "max_issues_repo_path": "2016/Practices/06 - Artificial Neural Networks/text/cns-ann.tex", "max_issues_repo_name": "NeuroCSUT/Computational-Neuroscience-Course", "max_issues_repo_head_hexsha": "cef9ef2dfc83cbfa91aa9b9ea1f23556aba2e9a2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2016/Practices/06 - Artificial Neural Networks/text/cns-ann.tex", "max_forks_repo_name": "NeuroCSUT/Computational-Neuroscience-Course", "max_forks_repo_head_hexsha": "cef9ef2dfc83cbfa91aa9b9ea1f23556aba2e9a2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2018-02-20T12:20:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-08T20:09:22.000Z", "avg_line_length": 83.6268656716, "max_line_length": 922, "alphanum_fraction": 0.7780950681, "num_tokens": 4018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998714925403, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.5560989847014632}}
{"text": "\\subsection{Totally ordered sets}\\label{subsec:totally_ordered_sets}\n\n\\begin{definition}\\label{def:totally_ordered_set}\n  We say that a partially ordered set is \\term{totally ordered} if either the nonstrict order \\( \\leq \\) is \\hyperref[def:binary_relation/total]{total} or if the strict order \\( < \\) is \\hyperref[def:binary_relation/trichotomic]{trichotomic}.\n\n  The theory, homomorphisms and category \\( \\cat{Tos} \\) are obtained analogously to \\fullref{def:partially_ordered_set}, but with either of these additional axiom sets.\n\n  For a fixed \\hyperref[def:grothendieck_universe]{Grothendieck universe}, the category of \\( \\ucat{Tos} \\) is isomorphic to that of \\( \\mscrU \\)-small thin skeletal connected categories --- see \\fullref{thm:order_category_isomorphism/totally_ordered}.\n\\end{definition}\n\\begin{proof}\n  Equivalence between nonstrict and strict total orders follows directly from the compatibility condition \\eqref{def:partially_ordered_set/compatibility_nonstrict}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:total_order_embedding_iff_strict}\n  Let \\( (P, \\leq_P) \\) and \\( (Q, \\leq_Q) \\) be \\hyperref[def:totally_ordered_set]{totally ordered sets} (more generally, \\( (Q, \\leq_Q) \\) can be any \\hyperref[def:preordered_set]{preordered set}).\n\n  An \\hyperref[def:partially_ordered_set/homomorphism]{order homomorphism} from \\( P \\) to \\( Q \\) is \\hyperref[def:function_invertibility/injective]{injective} if and only if it is strict.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Follows from \\fullref{thm:order_embedding_is_strict}.\n\n  \\NecessitySubProof Let \\( f: P \\to Q \\) be a strict order homomorphism and suppose that \\( f(x) = f(y) \\) for some \\( x \\) and \\( y \\) in \\( P \\). We will use the \\hyperref[def:binary_relation/trichotomic]{trichotomy} of \\( <_P \\).\n  \\begin{itemize}\n    \\item If \\( x <_P y \\), then \\( f(x) <_Q f(y) \\) since \\( f \\) is strictly monotone, which contradicts \\( f(x) = f(y) \\).\n\n    \\item If \\( x >_P y \\), similarly \\( f(x) >_Q f(y) \\) and we again obtain a contradiction.\n\n    \\item It remains for \\( x \\) to be equal to \\( y \\).\n  \\end{itemize}\n\n  Since \\( x \\) and \\( y \\) were arbitrary, we conclude that \\( f \\) is injective.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:totally_ordered_strong_homomorphism}\n  For totally ordered sets, \\hyperref[def:partially_ordered_set/homomorphism]{strict order homomorphisms} are precisely the \\hyperref[rem:first_order_strong_homomorphism]{strong order homomorphisms}.\n\n  Let \\( (P, \\leq_P) \\) and \\( (Q, \\leq_Q) \\) be \\hyperref[def:totally_ordered_set]{totally ordered sets} (more generally, \\( (Q, \\leq_Q) \\) can be any \\hyperref[def:preordered_set]{preordered set}) and let \\( f: P \\to Q \\) be an order homomorphism between them. Then \\( f \\) is a strict order homomorphism if and only if it is a strong order homomorphism.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Suppose that \\( f \\) is a strict order homomorphism. We must show that \\( f(x) \\leq f(y) \\) entails \\( x \\leq y \\).\n\n  If we suppose that \\( x > y \\) for some \\( x \\) and \\( y \\) in \\( P \\), then \\( f(x) > f(y) \\), which contradicts our assumption \\( f(x) \\leq f(y) \\).\n\n  Therefore, \\( f \\) is a strong order homomorphism.\n\n  \\NecessitySubProof Suppose that \\( f \\) is a strong order homomorphism. Let \\( x < y \\). 
It follows that \\( f(x) \\leq f(y) \\).\n\n  Suppose that \\( f(x) = f(y) \\). Then both \\( f(x) \\leq f(y) \\) and \\( f(x) \\geq f(y) \\) hold, which in turn imply that both \\( x \\leq y \\) and \\( x \\geq y \\) hold since \\( f \\) is a strong homomorphism. Thus, we obtain \\( x = y \\), which contradicts our assumption that \\( x < y \\). Hence \\( f(x) < f(y) \\).\n\n  Therefore, \\( f \\) is a strict order homomorphism.\n\\end{proof}\n\n\\begin{corollary}\\label{thm:totally_ordered_strict_isomorphisms}\n  A strict order embedding between totally ordered sets is an isomorphism if and only if it is bijective.\n\\end{corollary}\n\\begin{proof}\n  Follows from \\fullref{thm:totally_ordered_strong_homomorphism} and \\fullref{thm:automorphism_without_predicate_symbols}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:totally_ordered_minimal_element_is_minimum}\n  If \\( (P, \\leq) \\) is a \\hyperref[def:totally_ordered_set]{totally ordered set} and \\( A \\subseteq P \\) is nonempty, then any \\hyperref[def:partially_ordered_set_extremal_points/maximal_and_minimal_element]{minimal element} is a \\hyperref[def:partially_ordered_set_extremal_points/maximum_and_minimum]{minimum}.\n\\end{proposition}\n\\begin{proof}\n  Let \\( x_0 \\) be a minimal element of \\( A \\). If \\( x_0 \\) is the only element of \\( A \\), it is clearly the minimum of \\( A \\). Suppose that \\( A \\) is not a singleton set.\n\n  By definition of total order, for any \\( x \\in A \\) either \\( x \\leq x_0 \\) or \\( x_0 \\leq x \\). If \\( x \\leq x_0 \\), then since \\( x_0 \\) is a minimal element, we have \\( x = x_0 \\).\n\n  Therefore, for any \\( x \\in A \\), either \\( x = x_0 \\) or \\( x_0 < x \\); in both cases \\( x_0 \\leq x \\), proving that \\( x_0 \\) is a minimum of \\( A \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:totally_ordered_segment_isomorphism}\n  Let \\( (P, \\leq) \\) be a \\hyperref[def:totally_ordered_set]{totally ordered set}. Let \\( Q \\) be the set containing the \\hyperref[def:partially_ordered_set_interval/ray]{strict initial segment} \\( P_{<x} \\) for every member \\( x \\) of \\( P \\).\n\n  Then \\( (P, \\leq) \\) is \\hyperref[def:partially_ordered_set/homomorphism]{strictly order-isomorphic} to \\( (Q, \\subseteq) \\).\n\\end{proposition}\n\\begin{proof}\n  Explicitly define the isomorphism\n  \\begin{equation*}\n    \\begin{aligned}\n      &f: P \\to Q \\\\\n      &f(x) \\coloneqq P_{<x} = \\set{ y \\in P \\given y < x }.\n    \\end{aligned}\n  \\end{equation*}\n\n  We will first show that \\( f \\) is \\hyperref[def:partially_ordered_set/homomorphism]{strictly monotone}. If \\( x < y \\), then every \\( z < x \\) also satisfies \\( z < y \\), so \\( P_{<x} \\subseteq P_{<y} \\). Moreover, \\( x \\in P_{<y} \\) but \\( x \\not\\in P_{<x} \\), hence \\( P_{<x} \\) is a strict subset of \\( P_{<y} \\). Thus, \\( f \\) is strictly monotone.\n\n  The function \\( f \\) is injective by \\fullref{thm:total_order_embedding_iff_strict} and surjective by definition. Thus, \\( f \\) is a strict isomorphism between \\( (P, \\leq) \\) and \\( (Q, \\subseteq) \\).\n\\end{proof}\n
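\\begin{example}\\label{ex:thm:totally_ordered_segment_isomorphism}\n  In the set of natural numbers with the usual order, \\( f \\) maps each number \\( n \\) to \\( \\set{ 0, 1, \\ldots, n - 1 } \\), and \\( m \\leq n \\) holds precisely when \\( \\set{ 0, 1, \\ldots, m - 1 } \\subseteq \\set{ 0, 1, \\ldots, n - 1 } \\).\n\\end{example}\n\n\\begin{proposition}\\label{thm:totally_ordered_cofinal_equivalences}\n  Let \\( (P, \\leq) \\) be a totally ordered set that is \\hyperref[def:partially_ordered_set_extremal_points/upper_and_lower_bounds]{unbounded from above} and let \\( A \\subseteq P \\). 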
Then \\( A \\) is \\hyperref[def:cofinal_set]{cofinal} if and only if it is itself unbounded from above.\n\n  This equivalence is useful for \\hyperref[def:regular_cardinal]{regular cardinals} --- for example \\fullref{thm:cardinal_cofinality}.\n\n  Compare this result with \\fullref{thm:partially_ordered_cofinal_equivalences}.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( A \\) be a cofinal set. Suppose that it is bounded from above. Then there must exist some \\( x \\in P \\) such that \\( x \\) is an upper bound of \\( A \\).\n  \\begin{itemize}\n    \\item If \\( x \\not\\in A \\), this contradicts the cofinality of \\( A \\).\n    \\item If \\( x \\in A \\), then it is a maximum. But since \\( P \\) is itself unbounded from above, we can find some \\( y \\in P \\) with \\( x < y \\); then \\( y \\) is again an upper bound of \\( A \\). But this contradicts the cofinality of \\( A \\).\n  \\end{itemize}\n\n  \\NecessitySubProof Let \\( A \\) be unbounded from above. Then no \\( x \\in P \\) is an upper bound of \\( A \\), so for every \\( x \\in P \\) there exists some \\( y \\in A \\) such that \\( x < y \\), and in particular \\( x \\leq y \\). Therefore, \\( A \\) is cofinal.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:total_lexicographic_order_is_total_order}\n  If \\( (P, \\leq_P) \\) and \\( (Q, \\leq_Q) \\) are \\hyperref[def:totally_ordered_set]{totally ordered sets}, then the \\hyperref[eq:def:lexicographic_order]{lexicographic} and \\hyperref[eq:def:lexicographic_order/reverse]{reverse lexicographic} orders on \\( P \\times Q \\) are \\hyperref[def:totally_ordered_set]{strict total order} relations.\n\n  Compare this result to \\fullref{thm:lexicographic_order_is_partial_order} and \\fullref{thm:well_ordered_lexicographic_order_is_well_ordered}.\n\\end{proposition}\n\\begin{proof}\n  We have already shown in \\fullref{thm:lexicographic_order_is_partial_order} that these are partial orders. It only remains to check trichotomy.\n\n  \\SubProofOf[def:binary_relation/trichotomic]{trichotomy} Let \\( \\prec \\) be the lexicographic order on \\( P \\times Q \\). Let \\( (a, b) \\) and \\( (c, d) \\) be pairs in \\( P \\times Q \\). Since \\( <_P \\) and \\( <_Q \\) are strict total orders, we only have the following possibilities:\n  \\begin{itemize}\n    \\item If \\( a = c \\) and \\( b = d \\), then \\( (a, b) = (c, d) \\).\n    \\item If \\( a = c \\) and \\( b <_Q d \\), then \\( (a, b) \\prec (c, d) \\).\n    \\item If \\( a = c \\) and \\( b >_Q d \\), then \\( (a, b) \\succ (c, d) \\).\n    \\item If \\( a <_P c \\), then \\( (a, b) \\prec (c, d) \\).\n    \\item If \\( a >_P c \\), then \\( (a, b) \\succ (c, d) \\).\n  \\end{itemize}\n\n  The proof for the reverse lexicographic order is analogous.\n\\end{proof}\n
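\\begin{example}\\label{ex:thm:total_lexicographic_order_is_total_order}\n  On pairs of natural numbers the lexicographic order gives\n  \\begin{equation*}\n    (0, 5) \\prec (1, 0) \\prec (1, 2) \\prec (2, 0),\n  \\end{equation*}\n  since the first components decide whenever they differ and ties are broken by the second components.\n\\end{example}\n\n\\begin{definition}\\label{def:order_topology}\n  Let \\( P \\) be a \\hyperref[def:totally_ordered_set]{totally ordered set} with more than one element. 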
The \\term{order topology} induced by \\( \\leq \\) is the topology generated by the \\hyperref[def:topological_subbase]{subbase} of open \\hyperref[def:partially_ordered_set_interval/ray]{rays}\n  \\begin{equation*}\n    S \\coloneqq \\set[\\Big]{ (a, \\infty) \\given a \\in P } \\cup \\set[\\Big]{ (-\\infty, b) \\given b \\in P }.\n  \\end{equation*}\n\n  The \\hyperref[def:topological_base]{base} corresponding to this subbase is\n  \\begin{equation*}\n    \\mscrB = S \\cup \\set[\\Big]{ \\varnothing } \\cup \\set[\\Big]{ (a, b) \\given a, b \\in P \\T{and} a < b }.\n  \\end{equation*}\n\n  See the proof of \\hyperref[thm:topological_base_axioms/B1]{B1} for why \\( P \\) must have more than one element.\n\\end{definition}\n\\begin{defproof}\n  \\SubProof{Proof of compatibility of \\( S \\) and \\( \\mscrB \\)} Define\n  \\begin{equation*}\n    \\mscrC = \\set*{ \\bigcap \\mscrF \\given* \\mscrF \\text{ is a nonempty finite subset of } S }.\n  \\end{equation*}\n\n  We will show that \\( \\mscrB = \\mscrC \\).\n\n  Let \\( B \\in \\mscrB \\). The cases \\( B \\in S \\) and \\( B = \\varnothing \\) are trivial. Suppose that \\( B \\not\\in S \\) and \\( B \\neq \\varnothing \\). Then there exist points \\( a < b \\) such that\n  \\begin{equation*}\n    B = (a, b) = (-\\infty, b) \\cap (a, \\infty).\n  \\end{equation*}\n\n  This is an intersection of members of \\( S \\), hence \\( B \\in \\mscrC \\). Therefore, \\( \\mscrB \\subseteq \\mscrC \\).\n\n  Now let \\( C = S_1 \\cap \\cdots \\cap S_n \\), where \\( S_1, \\ldots, S_n \\) are members of \\( S \\). We will show by induction on \\( n > 0 \\) that \\( C \\in \\mscrB \\). The case \\( n = 1 \\) is trivial. Suppose that all \\( n \\)-ary intersections belong to \\( \\mscrB \\) and let\n  \\begin{equation*}\n    C = S_1 \\cap \\cdots \\cap S_n \\cap S_{n+1}.\n  \\end{equation*}\n\n  By the inductive hypothesis we have that \\( D \\coloneqq S_1 \\cap \\cdots \\cap S_n \\) belongs to \\( \\mscrB \\) and thus we have three cases:\n  \\begin{itemize}\n    \\item If either \\( D = (a, \\infty) \\) or \\( D = (-\\infty, b) \\), then \\( D \\in S \\).\n    \\item If \\( D = \\varnothing \\), then \\( D = (-\\infty, a) \\cap (a, \\infty) \\) for some \\( a \\in P \\).\n    \\item If \\( D = (a, b) \\), then \\( D = (-\\infty, b) \\cap (a, \\infty) \\).\n  \\end{itemize}\n\n  In all cases, both \\( D \\) and \\( C = D \\cap S_{n+1} \\) are finite intersections of members of \\( S \\). Therefore, \\( \\mscrC \\subseteq \\mscrB \\). Since we already have the inclusion in the other direction, we conclude that \\( \\mscrC = \\mscrB \\).\n\n  \\SubProof{Proof that \\( \\mscrB \\) is a base} We will show that the axioms in \\fullref{thm:topological_base_axioms/B1} hold.\n\n  \\SubProofOf*[thm:topological_base_axioms/B1]{B1} Let \\( x \\in P \\).\n\n  If \\( x \\) is a \\hyperref[def:partially_ordered_set_extremal_points/maximum_and_minimum]{maximum}, then take any other value \\( y < x \\) and the set \\( (y, \\infty) \\) will contain \\( x \\). We use here that there is more than one element in \\( P \\).\n\n  If \\( x \\) is not a maximum, then \\( x \\) belongs to any interval \\( (-\\infty, y) \\) whenever \\( y > x \\).\n\n  In both cases there exists an interval in \\( S \\) containing \\( x \\). Thus, \\( \\bigcup S = P \\).\n\n  \\SubProofOf*[thm:topological_base_axioms/B2]{B2} Let \\( U \\) and \\( V \\) be members of \\( \\mscrB \\). 
We consider \\( 14 \\) cases:\n  \\begin{itemize}\n    \\item If either \\( U = \\varnothing \\) or \\( V = \\varnothing \\), then \\( U \\cap V = \\varnothing \\).\n    \\item If \\( U = (-\\infty, u) \\) and \\( V = (v, \\infty) \\), then\n    \\begin{itemize}\n      \\item If \\( u \\leq v \\), then \\( U \\cap V = \\varnothing \\).\n      \\item If \\( v < u \\), then \\( U \\cap V = (v, u) \\).\n    \\end{itemize}\n\n    \\item If \\( U = (-\\infty, u) \\) and \\( V = (v_1, v_2) \\), then\n    \\begin{itemize}\n      \\item If \\( u \\leq v_1 \\), then \\( U \\cap V = \\varnothing \\).\n      \\item If \\( v_1 < v_2 \\leq u \\), then \\( U \\cap V = V = (v_1, v_2) \\).\n      \\item If \\( v_1 \\leq u < v_2 \\), then \\( U \\cap V = (v_1, u) \\).\n    \\end{itemize}\n\n    \\item If \\( U = (u_1, u_2) \\) and \\( V = (v, \\infty) \\), then\n    \\begin{itemize}\n      \\item If \\( u_2 \\leq v \\), then \\( U \\cap V = \\varnothing \\).\n      \\item If \\( u_1 \\leq v < u_2 \\), then \\( U \\cap V = (v, u_2) \\).\n      \\item If \\( v \\leq u_1 < u_2 \\), then \\( U \\cap V = U = (u_1, u_2) \\).\n    \\end{itemize}\n\n    \\item If \\( U = (u_1, u_2) \\) and \\( V = (v_1, v_2) \\), then\n    \\begin{itemize}\n      \\item If \\( u_2 < v_1 \\), then \\( U \\cap V = \\varnothing \\).\n      \\item If \\( u_1 < v_1 < u_2 < v_2 \\) then \\( U \\cap V = (v_1, u_2) \\).\n      \\item If \\( u_1 < v_1 < v_2 < u_2 \\) then \\( U \\cap V = V = (v_1, v_2) \\).\n      \\item If \\( v_1 < u_1 < u_2 < v_2 \\) then \\( U \\cap V = U = (u_1, u_2) \\).\n      \\item If \\( v_1 < u_1 < v_2 < u_2 \\) then \\( U \\cap V = (u_1, v_2) \\).\n    \\end{itemize}\n  \\end{itemize}\n\n  In all cases, the intersection \\( U \\cap V \\) belongs to \\( \\mscrB \\).\n\\end{defproof}\n\n\\begin{example}\\label{ex:def:order_topology}\n  Examples of \\hyperref[def:order_topology]{order topologies} include:\n  \\begin{itemize}\n    \\item The order topology on \\( \\BbbR \\), which is equivalent to the \\hyperref[def:metric_topology]{metric topology} as shown in \\fullref{thm:real_metric_and_order_topologies_coincide}.\n\n    \\item All \\hyperref[def:ordinal]{ordinals} greater than one induce topological spaces called the \\hyperref[def:ordinal_space]{ordinal spaces}.\n  \\end{itemize}\n\\end{example}\n\n\\begin{definition}\\label{def:ordinal_space}\n  Let \\( \\alpha \\) be an \\hyperref[def:ordinal]{ordinal}. When regarded as the set of smaller ordinals, as shown valid in \\fullref{thm:ordinal_is_set_of_smaller_ordinals}, \\( \\alpha \\) is a \\hyperref[def:totally_ordered_set]{totally ordered set} and hence we can endow it with the \\hyperref[def:order_topology]{order topology} \\( \\mscrT \\) to obtain a \\hyperref[def:topological_space]{topological space}. We call the space \\( (\\alpha, \\mscrT) \\) an \\term{ordinal space}.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:limit_ordinal_order_topology}\n  Fix an \\hyperref[def:ordinal_space]{ordinal space} \\( (\\alpha, \\mscrT) \\).\n\n  A nonzero ordinal \\( \\beta \\in \\alpha \\) is a \\hyperref[def:successor_and_limit_ordinal]{limit ordinal} if and only if it is the \\hyperref[def:net_convergence/limit]{limit point} of some net of ordinals in the space \\( \\alpha \\).\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( \\beta \\) be a limit ordinal. When regarded as a subset of \\( \\alpha \\), it is itself a topological net because it is totally ordered. 
We will show that \\( \\beta \\), as a member of \\( \\alpha \\), is a limit of the net formed by \\( \\beta \\) as a subset of \\( \\alpha \\).\n\n  By \\fullref{thm:net_convergence_via_subbases} it is enough to show that \\( \\beta \\) as a net is eventually in every set of the local subbase at \\( \\beta \\) of the order topology. This local subbase consists of all initial and final segments of \\( \\alpha \\) that contain \\( \\beta \\).\n\n  The following cases, for all \\( \\gamma \\in \\alpha \\) distinct from \\( \\beta \\), exhaust the local subbase at \\( \\beta \\):\n  \\begin{itemize}\n    \\item If \\( \\gamma > \\beta \\), then \\( \\gamma \\) itself is a neighborhood of \\( \\beta \\) and a member of the subbase as an initial segment of \\( \\alpha \\). Since the entire net \\( \\beta \\) is contained in \\( \\gamma \\), it is eventually in the initial segment \\( \\alpha_{<\\gamma} \\).\n\n    \\item If \\( \\gamma < \\beta \\), then the final segment \\( \\alpha_{>\\gamma} \\) is a member of the local subbase of \\( \\beta \\). Let \\( \\delta \\) be some member of the net \\( \\beta \\).\n    \\begin{itemize}\n      \\item If \\( \\delta > \\gamma \\), it is itself an ordinal such that \\( \\varepsilon > \\delta \\) implies \\( \\varepsilon \\in \\alpha_{>\\gamma} \\).\n      \\item If \\( \\delta \\leq \\gamma \\), then \\( \\varepsilon > \\op{succ}(\\gamma) \\) implies \\( \\varepsilon \\in \\alpha_{>\\gamma} \\). The successor of \\( \\gamma \\) belongs to \\( \\beta \\) because \\( \\beta \\) is a limit ordinal and satisfies \\fullref{def:successor_and_limit_ordinal/smaller_successor}.\n    \\end{itemize}\n\n    Thus, again the net \\( \\beta \\) is eventually in the final segment \\( \\alpha_{>\\gamma} \\).\n  \\end{itemize}\n\n   We have shown that \\( \\beta \\) as a net is eventually in every set in the local subbase of \\( \\beta \\), thus it is a limit of the net.\n\n   \\NecessitySubProof Let \\( \\beta \\) be a limit of the net \\( \\seq{ \\gamma_k }_{k \\in \\mscrK} \\subseteq \\alpha \\).\n\n   Let \\( \\delta \\in \\beta \\). Consider the neighborhood \\( \\alpha_{>\\delta} \\) of \\( \\beta \\). Since \\( \\beta \\) is a limit point, there must exist some index \\( k_0 \\in \\mscrK \\) such that \\( \\gamma_k \\in \\alpha_{>\\delta} \\) for every \\( k \\geq k_0 \\). 
Since \\( \\delta \\) was an arbitrary member of \\( \\beta \\), we conclude that \\( \\beta \\) satisfies \\fullref{def:successor_and_limit_ordinal/smaller_successor} and is thus a limit ordinal.\n\\end{proof}\n", "meta": {"hexsha": "64adc9214f4869b4b9a9f8f86fba3c3b92f0b74d", "size": 17827, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/totally_ordered_sets.tex", "max_stars_repo_name": "v--/anthology", "max_stars_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/totally_ordered_sets.tex", "max_issues_repo_name": "v--/anthology", "max_issues_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/totally_ordered_sets.tex", "max_forks_repo_name": "v--/anthology", "max_forks_repo_head_hexsha": "89a91b5182f187bc1aa37a2054762dd0078a7b56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.3657587549, "max_line_length": 468, "alphanum_fraction": 0.6541762495, "num_tokens": 5881, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154240079185319, "lm_q2_score": 0.7772998560157663, "lm_q1q2_score": 0.5560989783452973}}
{"text": "\\documentclass{beamer}\n\n\\usetheme{metropolis}\n\\setbeamertemplate{bibliography item}{\\insertbiblabel}\n\n\\usepackage{pgfpages}\n\\setbeamertemplate{note page}[plain]\n\\setbeameroption{show notes on second screen=right}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{bbm}\n\n\\DeclareMathOperator\\supp{supp}\n\n%Information to be included in the title page:\n\\title{Snapping Mechanism and Problems of Finite Precision}\n\\author{Christian Covington}\n\\institute{Harvard University Privacy Tools Project}\n\\date{September 30, 2019}\n\n\\begin{document}\n\n% Title Page\n\\frame{\\titlepage}\n\n% Table of Contents\n\\begin{frame}\n    \\frametitle{Overview}\n    \\tableofcontents\n    \\note{Introduce Laplace, that the promises break down when moving to implementation, and Mironov/Snapping}\n\\end{frame}\n\n% Problem Statement\n\\section{Problem Statement}\n\n\\begin{frame}[shrink=10]\n    \\frametitle{What is Differential Privacy and how do we achieve it?}\n    Let $M : \\mathcal{X}^{n} \\rightarrow \\mathcal{R}$ be a randomized algorithm, $D$ and $D'$ be neighboring data sets (differing in one row), and $S \\subseteq \\mathcal{R}$. Then $M$ satisfies $(\\epsilon, \\delta)$ differential privacy if\n    \\[ \\mathbb{P}(M(D) \\in S) \\leq \\exp(\\epsilon) \\cdot \\mathbb{P}(M(D') \\in S) + \\delta \\hspace{5pt} \\cite{DMNS06} \\]\n\n    \\pause\n\n    % One way to construct such a randomized algorithm is the ``additive-noise approach''; adding noise to the function we want to compute\n    % such that the noise is independent of the data.\n    We will focus on the Laplace Mechanism, which satisfies $(\\epsilon, 0)$ differential privacy:\n    \\[ M_{Lap}(\\mathcal{D}, f, \\epsilon) = f(\\mathcal{D}) + Lap \\left(\\frac{\\Delta f}{\\epsilon} \\right) \\hspace{5pt} \\cite{DMNS06} \\]\n    where $f: \\mathcal{D} \\rightarrow \\mathbb{R}$.\n\n    \\pause\n\n    For $(\\epsilon, 0)$-DP, it is necessary (but not sufficient) for $\\supp\\left( M(D) \\right) = \\supp\\left( M(D') \\right)$.\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Moving from Theory to Practice}\n    % Let $N$ be a stand-in for any type of noise we might want to add to produce a randomized algorithm.\n    Consider additive noise $N$. When $\\supp(N) = \\mathbb{R}$, the supports of mechanism outputs on neighboring data sets are equivalent.\n    This is not necessarily true when $\\supp(N) \\neq \\mathbb{R}$.\\footnote{E.g. let $f(D) = 0, f(D') = \\frac{1}{2}$, and $\\supp(N) = \\mathbb{Z}$.}\n\n    % Any software implementation of DP algorithms will necessarily have only finite precision, so $\\supp(N) \\neq \\mathbb{R}$.\n    % In the interest of concreteness, we will consider the IEEE-754 double-precision (binary64) floating point format.\n\n    Throughout the presentation, we will refer to an ``idealized mechanism'' as a mechanism that has access to infinite precision.\n    \\note{We will be considering IEEE-754 double-precision floating point numbers.}\n\\end{frame}\n\n% Floating Point\n\\section{IEEE 754 Floating Point}\n\\begin{frame}\n    \\frametitle{IEEE 754 Floating Point}\n    An IEEE 754 double-precision floating point number (referred to as \\emph{double} or \\emph{binary64}) has 3 components: \\newline\n    sign: 1 bit \\newline\n    significand/mantissa: 53 bits (only 52 are explicitly stored) \\newline\n    exponent: 11 bits\n\n    Let $S$ be the sign bit, $m_{1} \\hdots m_{52}$ be the bits of the mantissa, and $e_{1} \\hdots e_{11}$ be the bits of the exponent. 
Then a double is represented as\n\\[ (-1)^S (1.m_{1} \\hdots m_{52})_{2} \\times 2^{(e_1 \\hdots e_{11})_2 - 1023} \\]\n    Note that doubles ($\\mathbb{D}$) are not uniformly distributed over their range, so arithmetic precision is not constant across $\\mathbb{D}$.\n\\end{frame}\n
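\n\\begin{frame}[fragile]\n    \\frametitle{Sketch: decomposing a double}\n    A minimal Python sketch (ours, not part of \\cite{Mir12}) that recovers the three components and reassembles the value:\n\\begin{verbatim}\nimport struct\n\nbits = struct.unpack('>Q', struct.pack('>d', -6.25))[0]\nsign     = bits >> 63              # 1 sign bit\nexponent = (bits >> 52) & 0x7FF    # 11 exponent bits, bias 1023\nmantissa = bits & ((1 << 52) - 1)  # 52 stored mantissa bits\nvalue = (-1)**sign * (1 + mantissa / 2**52) * 2.0**(exponent - 1023)\nassert value == -6.25              # holds for normal doubles\n\\end{verbatim}\n\\end{frame}\n\n% Laplace Mechanism\n\\section{Problems Implementing the Laplace Mechanism}\n\\begin{frame}\n    \\frametitle{Generating the Laplace: Overview}\n    The most common method of generating Laplace noise is to use inverse transform sampling. Let $Y$ be the random variable representing our Laplace noise with scale parameter $\\lambda$. Then,\n    \\[ Y \\leftarrow F^{-1}(U) = -\\lambda \\ln (1-U) \\]\n    where $F^{-1}$ is the inverse cdf of the magnitude of the Laplace (a uniformly random sign is attached separately, as in the mechanism statement later) and $U \\sim Unif(0,1)$.\n    \\note{We can reduce sampling from the Laplace to thinking about how\n          uniform random number generation and arithmetic operations\n          differ on $\\mathbb{D}$ as opposed to $\\mathbb{R}$.}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Sampling from Uniform}\n    Sampling from $\\mathbb{D} \\cap (0,1)$ is not particularly well-defined or consistent across implementations. Typically, the output of a uniform random sample is confined to a small subset of possible elements of $\\mathbb{D}$. \\cite{Mir12}\n    \\begin{figure}\n        \\includegraphics[height=0.5\\textheight, width=0.8\\textwidth]{support_of_random_doubles.png}\n        \\caption{Uniform random number generation \\cite{Mir12}}\n    \\end{figure}\n    \\note{Already, see that the set of possible draws from Laplace will differ by implementation.}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Natural Logarithm}\n    When implemented on uniform random numbers as normally generated, the natural log produces some values repeatedly and skips over others entirely. \\cite{Mir12}\n    \\begin{figure}\n        \\includegraphics[height=0.3\\textheight, width=0.475\\textwidth]{fp_fig_one.png}\n        \\hfill\n        \\includegraphics[height=0.3\\textheight, width=0.475\\textwidth]{fp_fig_two.png}\n        \\caption{Artefacts of natural logarithm on $\\mathbb{D}$ \\cite{Mir12}}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Attack}\n    Imagine we want to release a private version of the output of a function $f$ with $\\Delta f = 1$ and $\\epsilon = \\frac{1}{3}$.\n    Let $f(D) = 0, f(D') = 1$.\n    \\begin{figure}\n        \\includegraphics[height=0.375\\textheight, width=0.475\\textwidth]{laplace_attack.png}\n        \\hfill\n        \\includegraphics[height=0.375\\textheight, width=0.475\\textwidth]{smoking_gun_probability.png}\n        \\caption{Attack on Laplace Mechanism \\cite{Mir12}}\n    \\end{figure}\n    Mironov performed an attack on PINQ, reconstructing 18K records in fewer\n    than 1000 queries with total $\\epsilon < 10^{-6}$.\n    \\note{Figure shows probability that output is in the support of only one of the data sets\n          after 1 release. \\newline\n\n          This is a lower bound on the probability that the DP guarantee is broken\n          (this is effectively showing that $\\delta \\neq 0$, but it could also be that $\\epsilon$\n          is too low). \\newline\n\n          White circles represent a common sampling method, black circles are from full\n          $\\mathbb{D} \\cap (0,1)$. 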
\\newline\n\n          Note that the attack is still reasonably effective for large $\\lambda$ (low purported privacy loss).\n          This allows an attacker to slowly use their privacy budget and possibly reconstruct an entire\n          database.\n          }\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Inadequate Fixes}\n    \\begin{itemize}\n        \\item Rounding Noise?\n        \\begin{itemize}\n            \\item Consider rounding noise to the nearest integer multiple of $2^{-32}$. Then, if $\\vert f(D) - f(D') \\vert < 2^{-32}$, the supports of the mechanism outputs under the two data sets are completely disjoint.\n        \\end{itemize}\n        \\item Smoothing Noise?\n        \\begin{itemize}\n            \\item If $f(D), f(D')$ are in different bands of precision, the support of the mechanism on one will be a proper subset of the support of the mechanism on the other.\n        \\end{itemize}\n    \\end{itemize}\n\\end{frame}\n\n% Snapping Mechanism\n\\section{Snapping Mechanism}\n\\begin{frame}\n    \\frametitle{Mechanism Statement}\n    The Snapping Mechanism \\cite{Mir12} is defined as follows:\n    \\[ \\tilde{f}(D) \\triangleq clamp_{B} \\left( \\lfloor clamp_{B}(f(D)) \\oplus S \\otimes \\lambda \\otimes LN(U^*) \\rceil_{\\Lambda} \\right) \\]\n    where $clamp_{B}$ restricts output to the range $[-B, B]$, $S \\otimes \\lambda \\otimes LN(U^*)$ is Laplace noise generated with our improved random number generator (more on this later), and $\\lfloor \\cdot \\rceil_{\\Lambda}$ rounds to the nearest $\\Lambda$, where $\\Lambda$ is the smallest power of two at least as large as $\\lambda$.\n\n    The mechanism guarantees $\\left(\\frac{1 + 12B \\eta + 2\\eta\\lambda}{\\lambda}, 0\\right)$-DP, where $\\eta$ is machine epsilon.\n\n    \\note{Explain all the floating point symbols.}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Mechanism Motivation/Explanation}\n    % \\emph{Keep working on this!}\n    \\[ \\tilde{f}(D) \\triangleq clamp_{B} \\left( \\lfloor clamp_{B}(f(D)) \\oplus S \\otimes \\lambda \\otimes LN(U^*) \\rceil_{\\Lambda} \\right) \\]\n    % \\begin{itemize}\n        Let $\\tilde{F}(\\cdot)$ be the idealized version of the snapping mechanism. Then $\\tilde{F}(\\cdot)$ satisfies\n        $(\\epsilon,0)$-DP.\n        For a given $x \\in \\supp(\\tilde{F}(D))$, consider:\n              \\begin{itemize}\n                \\item $[L,R) \\subset (0,1)$ is the set mapped to $x$ by $\\tilde{F}(D)$\n                \\item $[l,r) \\subset \\left( \\mathbb{D} \\cap (0,1) \\right)$ is the set mapped to $x$ by $\\tilde{f}(D)$\n              \\end{itemize}\n              The sampling mechanism, exact rounding, and clamping ensure that\n              $\\vert R - L \\vert \\approx \\vert r - l \\vert$ in terms\n              of relative error, which yields the DP-guarantee of $\\tilde{f}(D)$.\n    % \\end{itemize}\n    \\note{$clamp_B$ is stable: $\\vert x-y \\vert \\leq c \\implies \\vert clamp_B(x) - clamp_B(y) \\vert \\leq c$,\n          so the inner clamping preserves privacy guarantees. \\newline\n\n          Rounding and outer clamping are considered post-processing. 
\\newline\n          }\n\\end{frame}\n\n\\section{Implementation Considerations}\n% \\begin{frame}\n%     \\frametitle{Why the lack of implementation?}\n%     Not actually sure, but probably some combination of:\n%     \\begin{itemize}\n%         \\item Technical differences from other mechanisms\n%             \\begin{itemize}\n%                 \\item Privacy guarantee is a function of $\\epsilon$\n%                 \\item Non-private function estimate is an input to the mechanism\n%             \\end{itemize}\n%         \\item Generally seen as low-order concern\n%         \\item Not immediately clear how to properly implement algorithm\n%             \\begin{itemize}\n%                 % \\item functional $\\epsilon$\n%                 \\item uniform random number generation\n%                 \\item exact calculations\n%             \\end{itemize}\n%         \\item No utility analysis in the paper\n%     \\end{itemize}\n% \\end{frame}\n\n\\begin{frame}\n    \\frametitle{Generating Uniform Random Numbers}\n    Our goal is to sample from $\\mathbb{D} \\cap (0,1)$ while maintaining the properties of $\\mathbb{R}$ as closely as possible. \\newline\n\n    IEEE 754 floating point numbers are of the form\n    \\[ (-1)^S (1.m_{1} \\hdots m_{52})_{2} \\times 2^{-E} \\]\n    Let: \\newline\n    $S = 0$ \\newline\n    $E \\sim Geom(p=0.5)$ \\newline\n    $\\forall i \\in \\{1,\\hdots,52\\}: m_i \\sim Bern(p=0.5)$. \\newline\n\n    \\note{This means that every $d \\in \\mathbb{D} \\cap (0,1)$ has a chance of being represented, and each is represented proportional to its unit of least precision.\n    In order to sample from $\\mathbb{D} \\cap (0,1)$ in this way, we need only be able to generate cryptographically secure random bits.}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Exact Calculations}\n    Multiple points in the algorithm require exact (rather than accurate-faithful) rounding. \\newline\n\n    Arithmetic with the natural logarithm is done with 118 bits of precision as described in \\cite{DLM07}\n    to ensure exact rounding. \\newline\n\n    All rounding is done via direct manipulation of the floating-point representation of the number. \\newline\n    \\note{Consider that for an arbitrary $x \\in \\mathbb{D}$ the natural log of $x$ is not necessarily $\\in \\mathbb{D}$. Let $a < \\ln(x) < b$ where $a,b \\in \\mathbb{D}$ and $\\not\\exists c \\in \\mathbb{D}: a < c < b$. Without loss of generality, assume that $\\vert a-\\ln(x) \\vert < \\vert b - \\ln(x) \\vert$, so that if we had infinite precision in calculating $\\ln(x)$ (but still had to output an element $\\in \\mathbb{D}$), we would output $a$.\n    Many mathematical libraries do what is called \\textit{accurate-faithful} rounding, which means that in the scenario above our algorithm\n    would output $a$ with high probability. In an \\textit{exact rounding} paradigm, the algorithm outputs $a$ with probability 1. \\newline\n}\n\\end{frame}\n
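\n\\begin{frame}[fragile]\n    \\frametitle{Sketch: sampling a uniform double}\n    A minimal Python sketch (ours) of the sampling scheme above, assuming a cryptographically secure bit source:\n\\begin{verbatim}\nimport secrets\n\ndef uniform_double():\n    # E ~ Geom(p=0.5): count fair coin flips until the first 1\n    exponent = 1\n    while secrets.randbits(1) == 0:\n        exponent += 1\n    mantissa = secrets.randbits(52)  # 52 i.i.d. Bern(0.5) bits\n    return (1 + mantissa / 2**52) * 2.0**(-exponent)\n\\end{verbatim}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Ensuring correct functional $\\epsilon$}\n    The mechanism guarantees $\\left(\\epsilon(1 + 12B \\eta) + 2\\eta, 0\\right)$-DP\n    (relative to the nominal $(\\epsilon,0)$-DP if you were to use the Laplace Mechanism). \\newline\n\n    We want the Snapping Mechanism's guarantee to be $(\\epsilon, 0)$-DP. \\newline\n\n    \\pause\n\n    The Laplace noise inside the Snapping Mechanism needs to be generated as if it respected $\\left( \\frac{\\epsilon - 2 \\eta}{1 + 12B \\eta}, 0\\right)$-DP. 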
\\newline\n\n    We will refer to this rescaled $\\epsilon$ as $\\epsilon'$ and rewrite the Laplace random variable as $Y'$.\n\\end{frame}\n\n\\section{Utility Analysis}\n\\begin{frame}\n    \\frametitle{Difficulty of utility analysis}\n    Snapping Mechanism:\n    \\[ \\tilde{f}(D) \\triangleq clamp_{B} \\left( \\lfloor clamp_{B}(f(D)) \\oplus S \\otimes \\lambda' \\otimes LN(U^*) \\rceil_{\\Lambda'} \\right) \\]\n    What can we say about $\\vert f(D) - \\tilde{f}(D) \\vert$?\n    \\begin{itemize}\n        \\item if the user sets $B$ poorly (e.g. $\\vert B \\vert \\ll \\vert f(D) \\vert$), then\n        $\\vert f(D) - \\tilde{f}(D) \\vert$ could be arbitrarily bad\n        \\begin{itemize}\n            \\item We are currently setting $B$ within the mechanism,\n            rather than leaving it to the user\n        \\end{itemize}\n        \\item $\\lfloor \\cdot \\rceil_{\\Lambda'}$ makes the distribution of the noise more difficult to reason about\n        (it becomes dependent on $f(D)$)\n        \\begin{itemize}\n            \\item We will make conservative statements based on the worst case\n        \\end{itemize}\n        \\note{Talk about why setting $B$ automatically helps both for empirical utility and the theoretical analysis.}\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Automatically setting $B$}\n    Every statistic in \\textit{PSI} that uses the Laplace Mechanism asks the user for bounds on the\n    range of their data. We can use these to get a maximum possible value for\n    each statistic.\\footnote{Not immediately clear to me whether or not we would be able to do this in general.}\n\n    The user provides $[D_{min}, D_{max}]$ as upper/lower bounds on the min/max value of $D$.\\footnote{Values\n    outside of this range are clipped.}\n    For a given statistic $T(\\cdot)$ we set $B$ such that\n    \\[ \\forall D \\text{ s.t. } \\min(D) \\geq D_{min} \\text{ and } \\max(D) \\leq D_{max}: B \\geq \\max T(D) \\]\n\n    This prevents the inner clamping bound from binding, so we rewrite the mechanism as\n    \\[ \\tilde{f}(D) \\triangleq clamp_{B} \\left( \\lfloor f(D) \\oplus Y' \\rceil_{\\Lambda'} \\right). \\]\n\\end{frame}\n\n\\begin{frame}[shrink=20]\n    \\frametitle{Error}\n    Assume $\\Delta f = 1$. We define error to be the absolute difference between the true statistic and our mechanism\n    release. 
We want to compare Snapping error vs Laplace error:\n    \\begin{align*}\n        \\big\\vert f(D) - \\lfloor f(D) + Y' \\rceil_{\\Lambda'} \\big\\vert \\nonumber &\\leq \\big\\vert f(D) - (f(D) + Y') \\big\\vert + \\big\\vert f(D) + Y' - \\lfloor f(D) + Y' \\rceil_{\\Lambda'} \\big\\vert \\\\\n                                                                                &\\leq \\vert -Y' \\vert + \\frac{\\Lambda'}{2} \\\\\n                                                                                &= \\vert Y' \\vert + \\frac{\\Lambda'}{2} \\\\\n                                                                                % &\\leq \\vert Y' \\vert + \\lambda' \\\\\n                                                                                % &= \\vert Y' \\vert + \\frac{1}{\\epsilon'}\n    \\end{align*}\n    % where $\\epsilon' = \\frac{\\epsilon - 2\\eta}{1+12B\\eta} < \\epsilon$.\n\n    Noting that $Y' = \\frac{\\epsilon}{\\epsilon'}Y$, we have that, conditional on a privacy loss parameter $\\epsilon$,\n    the Snapping error is at most\n    % $\\left( \\frac{1+12B\\eta}{\\epsilon-2\\eta} \\right)(1 + \\epsilon y)$\n    $\\frac{\\epsilon (1 + 12B \\eta)}{\\epsilon - 2\\eta}y + \\frac{\\Lambda'}{2}$\n    for a given amount of Laplace error $y$.\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Empirical Utility Testing - $\\epsilon = 0.001$}\n    \\begin{figure}\n        \\includegraphics[height=0.8\\textheight, width=1\\textwidth]{accuracy_snapping_vs_laplace_results_epsilon_0_001.png}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Empirical Utility Testing - $\\epsilon = 1$}\n    \\begin{figure}\n        \\includegraphics[height=0.8\\textheight, width=1\\textwidth]{accuracy_snapping_vs_laplace_results_epsilon_1.png}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Accuracy (part 1)}\n    Let $Z = \\vert Y' \\vert + \\frac{\\Lambda'}{2}$ (the Snapping error) and $F_Z$ its CDF.\n    \\begin{align}\n        F_{Z}(z) &= \\mathbb{P}(Z \\leq z) \\nonumber \\\\\n                 &= \\mathbb{P}\\left( \\vert Y' \\vert + \\frac{\\Lambda'}{2} \\leq z \\right) \\nonumber \\\\\n                 &= \\mathbb{P} \\left( \\vert Y' \\vert \\leq z - \\frac{\\Lambda'}{2} \\right) \\nonumber \\\\\n                 &= 1 - \\exp\\left( -\\epsilon'(z - \\frac{\\Lambda'}{2}) \\right) \\nonumber\n                %  &> \\mathbb{P} \\left( \\vert Y' \\vert \\leq z - \\lambda' \\right) \\nonumber \\\\\n                %  &= 1 - \\exp\\left( -\\epsilon'(z - \\lambda') \\right) \\nonumber \\\\\n                %  &= 1 - \\exp\\left( -\\epsilon'z + 1 \\right) \\nonumber \\\\\n                %  &= F_{Z}^{-}(z) \\nonumber\n    \\end{align}\n    % So, $\\forall z: F_{Z}^{-}(z) < F_{Z}(z)$.\n    % \\note{Could have stopped at line 3 (this is what I do in the actual software implementation).\n    %       Moving from $\\Lambda'$ to $\\lambda'$ gives a looser bound, but is useful to put things in terms of\n    %       the original Laplace noise.}\n\\end{frame}\n\\begin{frame}\n    \\frametitle{Accuracy (part 2)}\n    For a given $\\alpha$, let accuracy be the $a$ such that $\\alpha = \\mathbb{P}(Z > a)$.\n    \\begin{align*}\n        \\mathbb{P}(Z > a) &= 1 - \\mathbb{P}(Z \\leq a) \\\\\n                     &= 1 - F_{Z}(a) \\\\\n                     &= \\exp\\left( -\\epsilon'(a - \\frac{\\Lambda'}{2}) \\right)\n\t\t\t\t\t%  &\\leq 1 - F_{Z}^{-}(a) \\\\\n\t\t\t\t\t%  &= 1 - (1 - \\exp(-\\epsilon' a + 1)) \\\\\n\t\t\t\t\t%  &= \\exp(-\\epsilon' a + 1)\n    
\\end{align*}\n    So, we have $a_{Snapping} = \\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon'} + \\frac{\\Lambda'}{2}$,\n    compared to\n    $a_{Laplace} = \\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon}$.\n    % \\footnote{$a_{Snapping}$ is the smallest $a$ for which we can prove the bound holds.}\n\\end{frame}\n\n\\begin{frame}[shrink=20]\n    \\frametitle{Accuracy (part 3)}\n    Recalling that\n$\\epsilon' = \\frac{\\epsilon - 2\\eta}{1 + 12B\\eta}$ we can represent the difference between the accuracy of the\nSnapping and Laplace mechanisms as follows:\n    \\begin{align*}\n        a_{Snapping} &= \\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon'} + \\frac{\\Lambda'}{2} \\\\\n                     &= \\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\left( \\frac{\\epsilon - 2\\eta}{1 + 12B\\eta} \\right)} + \\frac{\\Lambda'}{2} \\\\\n                     &= \\frac{(1 + 12B \\eta) \\left(\\ln \\left( \\frac{1}{\\alpha} \\right) \\right)}{\\epsilon - 2\\eta} + \\frac{\\Lambda'}{2} \\\\\n                     &= \\frac{\\epsilon(1 + 12B \\eta)}{\\epsilon - 2\\eta } \\cdot \\left(\\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon} \\right) + \\frac{\\Lambda'}{2} \\\\\n                     &= \\frac{\\epsilon(1 + 12B \\eta)}{\\epsilon - 2\\eta} \\cdot a_{Laplace} + \\frac{\\Lambda'}{2}\n                     % a_{Snapping} &= \\frac{1 + \\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon'} \\\\\n        %             &= \\frac{1 + \\ln \\left( \\frac{1}{\\alpha} \\right)}{\\left( \\frac{\\epsilon - 2\\eta}{1 + 12B\\eta} \\right)} \\\\\n        %             &= \\frac{(1 + 12B \\eta) \\left( 1 + \\ln \\left( \\frac{1}{\\alpha} \\right) \\right)}{\\epsilon - 2\\eta} \\\\\n        %             &= \\frac{1 + 12B \\eta}{\\epsilon - 2\\eta} \\cdot \\left( 1 + \\ln \\left( \\frac{1}{\\alpha} \\right) \\right) \\\\\n        %             &= \\frac{1 + 12B \\eta}{\\epsilon - 2\\eta} \\cdot \\frac{1 + \\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon} \\cdot \\epsilon \\\\\n        %             &= \\frac{\\epsilon(1 + 12B \\eta)}{\\epsilon - 2\\eta } \\cdot \\left( \\frac{1}{\\epsilon} + \\frac{\\ln \\left( \\frac{1}{\\alpha} \\right)}{\\epsilon} \\right) \\\\\n        %             &= \\frac{1 + 12B \\eta}{\\epsilon - 2\\eta} \\cdot \\left(1 + \\epsilon \\cdot a_{Laplace} \\right)\n    \\end{align*}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Accuracy Testing - $\\epsilon = 0.001$}\n    \\begin{figure}\n        \\includegraphics[height=0.8\\textheight, width=1\\textwidth]{guaranteed_accuracy_snapping_vs_laplace_results_epsilon_0_001.png}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Accuracy Testing - $\\epsilon = 1$}\n    \\begin{figure}\n        \\includegraphics[height=0.8\\textheight, width=1\\textwidth]{guaranteed_accuracy_snapping_vs_laplace_results_epsilon_1.png}\n    \\end{figure}\n\\end{frame}\n\n\\begin{frame}[shrink=30]\n    \\frametitle{Bias}\n    We write the bias of the snapping mechanism as\n    \\[ Bias = \\mathbb{E}(\\tilde{f}(D) - f(D)) = \\mathbb{E}\\left( clamp_B\\left( \\lfloor f(D) + Y' \\rceil_{\\Lambda'} \\right) - f(D) \\right) \\]\n    where $Y' \\sim Laplace(\\lambda')$ and the expectation is over the randomness of the snapping mechanism.\n\n    Now, we define an upper bound on the Bias:\n    \\[ Bias^{+} = \\mathbb{E}\\left( clamp_B\\left( f'(D) + Y^{*} \\right) - \\hat{f}(D) \\right) \\]\n    where $Y^{*} \\sim Laplace(-\\frac{\\Lambda}{2}, \\lambda')$.\n\n    Now, define the following:\n    \\[ 
p_L = F_{Y^*}(-B - \\hat{f}(D)) \\]\n    \\[ p_U = 1 - F_{Y^*}(B - \\hat{f}(D)) \\]\n    where $F_{Y^*}$ is the CDF of $Y^*$ and $p_L, p_U$ are the probabilities that the lower/upper bounds are binding (respectively).\n    Then we can write\n    \\[ Bias^+ = p_L \\cdot (-B - \\hat{f}(D)) + p_U \\cdot (B - \\hat{f}(D)) + (1-p_L-p_U) \\cdot \\int_{-B-\\hat{f}(D)}^{B-\\hat{f}(D)}y^* f_{Y^*}(y^*) dy^* \\]\n    where $f_{Y^*}$ is the PDF of $Y^*$.\n\\end{frame}\n\n\\begin{frame}\n    \\frametitle{Possible Next Steps}\n    \\begin{itemize}\n        \\item Continue integration into PSI\n        \\item More considered choice of $B$\n        \\item Tighter accuracy bounds\n        \\item Extend to mechanisms other than Laplace\n    \\end{itemize}\n\\end{frame}\n\n\\begin{frame}[allowframebreaks]\n    \\frametitle{References}\n    \\bibliographystyle{alpha}\n    \\bibliography{presentation.bib}\n\\end{frame}\n\n\\end{document}\n", "meta": {"hexsha": "58f596a508229270688c06240c93047c56b08301", "size": 22498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "snapping_mechanism/reading_group_presentation/presentation.tex", "max_stars_repo_name": "ctcovington/floating_point", "max_stars_repo_head_hexsha": "53983d2135f46a0d4984f2320c54d482df9e093b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-04-11T06:57:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-13T21:58:25.000Z", "max_issues_repo_path": "snapping_mechanism/reading_group_presentation/presentation.tex", "max_issues_repo_name": "ctcovington/floating_point", "max_issues_repo_head_hexsha": "53983d2135f46a0d4984f2320c54d482df9e093b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-01T16:08:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-05T14:10:55.000Z", "max_forks_repo_path": "snapping_mechanism/reading_group_presentation/presentation.tex", "max_forks_repo_name": "ctcovington/floating_point", "max_forks_repo_head_hexsha": "53983d2135f46a0d4984f2320c54d482df9e093b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-07T14:48:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-07T14:48:55.000Z", "avg_line_length": 51.7195402299, "max_line_length": 428, "alphanum_fraction": 0.633656325, "num_tokens": 6851, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5560989726035366}}
{"text": "\\section{Measurement Procedure}\n\n\\subsection{Natural angular frequency}\n\n\\begin{enumerate}\n\\item Turn the Damping Selection knob to \"0\".\n\\item Rotate the balance wheel to the initial angular position $\\theta_0 \\approx\n  150\\degree$ and release it. Record the time of 10 periods. \n\\item Repeat four times and calculate the natural angular frequency\n  $\\omega_0$. \n\\end{enumerate}\n\n\\subsection{Damping coefficient}\n\n\\begin{enumerate}    \n\\item Turn the Damping Selection knob to \"2\"; the selection should not be\n  changed during this part. \n\\item Rotate the balance wheel to an initial amplitude of approximately\n  $150\\degree$ and release it. Record the amplitude of each period and the time\n  of 10 periods.  \n\\item The solution to the homogeneous equation of motion, with the corresponding\n  initial conditions, is $\\theta(t)=\\theta_0e^{-\\beta t}\\cos(\\omega_f t+\\alpha)$.\n  Hence $\\theta_1=\\theta_0e^{-\\beta T}, \\quad \\theta_2=\\theta_0e^{-\\beta (2T)},\n  \\hdots, \\theta_n=\\theta_0e^{-\\beta (nT)}$. The damping coefficient $\\beta$ can\n  then be calculated as \n\n\\[ \n\\ln\\frac{\\theta_i}{\\theta_j}=\\ln\\frac{\\theta_0e^{-\\beta (iT)}}{\\theta_0e^{-\\beta\n    (jT)}}=(j-i)\\beta T. \n\\]\n\n\\item The value of $T$ should be the average period, and $\\beta$ should be\n  obtained from $\\ln\\frac{\\theta_i}{\\theta_{i+5}}$ by the successive\n  difference method as \n\n\\[\n\\beta=\\frac{1}{5T}\\ln\\frac{\\theta_i}{\\theta_{i+5}}.\n\\]\n\n\\end{enumerate}\n\n
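For instance, with an average period $T = 1.5\\ \\mathrm{s}$ and an amplitude ratio $\\theta_i/\\theta_{i+5} = 1.35$ (illustrative numbers, not measured values), this gives\n\n\\[\n\\beta=\\frac{\\ln 1.35}{5 \\times 1.5\\ \\mathrm{s}} \\approx 0.040\\ \\mathrm{s}^{-1}.\n\\]\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{fig/es4}\n\\caption{The rear panel of the control box}\\label{real}\n\\end{figure}\n\n\\subsection{$\\theta_{st}$ vs. $\\omega$ and $\\varphi$ vs. $\\omega$\n  Characteristics of forced oscillations} \n\n\\begin{enumerate}\n\\item Keep the Damping Selection at \"2\", and set the speed of the motor. Record\n  the amplitude $\\theta_{st}$, the period $T$, and the phase shift $\\varphi$\n  when the oscillation reaches a steady state. \n\\item Repeat the steps above by changing the speed of the motor. It will result\n  in a change of the phase shift $\\varphi$ (referred to as $\\Delta \\varphi$).\n  More data should be collected when $\\varphi$ and $\\theta_{st}$ change rapidly\n  (e.g. near the resonance point). At least 15 data points should be collected\n  for plotting.  \n\\item Choose Damping Selection \"1\" or \"3\". Repeat the above steps.\n\\item Plot the $\\theta_{st} (\\omega)$ characteristics, with $\\omega/\\omega_0$ on\n  the horizontal axis and $\\theta_{st}$ on the vertical axis. Two sets of data\n  should be plotted on the same graph. \nPlot the $\\varphi (\\omega)$ characteristics, with $\\omega/\\omega_0$ on the\nhorizontal axis and $\\varphi$ on the vertical axis. Two sets of data should be\nplotted on the same graph. 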
\n\\end{enumerate}\n\n\n", "meta": {"hexsha": "6c2a61351cabbaeb18fb7767b6d1ffb1cd3a3c21", "size": 2688, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "E5/part/3mp.tex", "max_stars_repo_name": "iamwrm/VP141", "max_stars_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-24T11:28:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-24T11:28:04.000Z", "max_issues_repo_path": "E5/part/3mp.tex", "max_issues_repo_name": "iamwrm/VP141", "max_issues_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "E5/part/3mp.tex", "max_forks_repo_name": "iamwrm/VP141", "max_forks_repo_head_hexsha": "c0a5d1992967b1552d6f7ea0806c9244d58f64ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.4, "max_line_length": 80, "alphanum_fraction": 0.7235863095, "num_tokens": 764, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239836484143, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5560989705525937}}
{"text": "\\section{Link Modelling}\\label{sec:linkmodel}\nIn this section, we propose a method for modelling and computing link \\gls{pathloss} using building footprints\nbetween nodes of a link, on OpenStreetMap map tiles obtained through the Mapbox Maps Service\nAPI~\\cite{website:mapbox}. The model computes the \\gls{pathloss} based on the distance of the link and the\npercentage of that distance that is in a building. Buildings and other environmental obstructions, which are\npart of the shadow fading \\gls{pathloss} from \\cite{paper:linkmodel}, should cause a higher \\gls{pathloss}, as\nit is harder for the radio signal to propagate through buildings. \\medbreak\n\nThe main idea is to generate a map of the area that contains nodes of a link as an image, and when computing\nthe \\gls{pathloss} of that link, count the percentage of all pixels in a straight line between the nodes in a\nlink that are considered buildings on the map. A pseudocode description of this can be seen in\n\\autoref{algo:linkmodel:compute-building-percentage}.\n\n%In this section, our own Linkmodel will be described.\n\n%\\subsection{Computing path loss}\n%To facilitate to the problems discussed in \\autoref{sec:reachi-experiments}, we have devised our own linkmodel\n%to compute path loss. \n%Our model computes path loss based on the distance of the link and the percentage of\n%that distance that is in a building. Building and other obstructions in the environment cause more severe path\n%loss, and as such should be considered to punish the signal more. Our model limits to only buildings. \n\n%The idea\n%is to generate a map of the area that the nodes are located in as an image, then when computing the path loss\n%of a link, look up the colour of all pixels in a straight line between the nodes  of link and count how many\n%is a building. This gives a percentage for how much of the distance is covered by buildings. An pseudo code\n%implementation of the computation can be seen on \\autoref{algo:linkmodel:compute-building-percentage}.\n\n\\begin{algorithm}[ht]\n    \\DontPrintSemicolon\n    \\KwIn{$(x_1, y_1), (x_2, y_2)$}\n    \\KwOut{Percentage building between points.}\n    \\SetKwFunction{FLoSModelCompute}{CompBuildingPct}\n    \\SetKwProg{Fn}{Function}{}{}\n\n    \\Fn{\\FLoSModelCompute{$(x_1, y_1), (x_2, y_2)$}}{\n        %$(n_1,\\ n_2) \\leftarrow \\mathit{nodes}(l)$\\;\n        %$(x_1,\\ y_1) \\leftarrow$ compute position for $n_1$\\;\n        %$(x_2,\\ y_2) \\leftarrow$ compute position for $n_2$\\;\n        %\\;\n        $\\mathit{pixels} \\leftarrow 0$\\;\n        $\\mathit{buildings} \\leftarrow 0$\\;\n        \\While{$\\lambda \\in \\{0 \\dots 1\\}$}{\n            $(x,\\ y) \\leftarrow \\lambda \\cdot (x_1,\\ y_1) + (1 - \\lambda)\\cdot(x_2,y_2)$\\;\n            \\If{position ($x$, $y$) is a building}{\n                $\\mathit{buildings} \\leftarrow \\mathit{buildings} + 1$\\;\n            }\n            $\\mathit{pixels} \\leftarrow \\mathit{pixels} + 1$\\;\n        }\n\n        \\KwRet $\\frac{\\mathit{buildings}}{\\mathit{pixels}}$\\;\n    }\n    \\caption{The CompBuildingPct function.}\n    \\label{algo:linkmodel:compute-building-percentage}\n\\end{algorithm}\n\n
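A direct Python transcription (ours; the \\texttt{is\\_building} lookup is a stand-in for the actual pixel lookup into the rendered map image) of \\autoref{algo:linkmodel:compute-building-percentage} could look as follows:\n\n\\begin{verbatim}\ndef comp_building_pct(p1, p2, is_building, steps=512):\n    # Walk a straight line between two pixel positions and return\n    # the fraction of sampled points that fall on a building pixel.\n    (x1, y1), (x2, y2) = p1, p2\n    buildings = 0\n    for i in range(steps + 1):\n        lam = i / steps  # interpolation parameter in [0, 1]\n        x = lam * x1 + (1 - lam) * x2\n        y = lam * y1 + (1 - lam) * y2\n        if is_building(round(x), round(y)):\n            buildings += 1\n    return buildings / (steps + 1)\n\\end{verbatim}\n\nTo compute the total \\gls{pathloss}, we first define two functions: $\\mathit{cvpl}(l)$ (\\gls{cvpl}) and\n$\\mathit{bopl}(l)$ (\\gls{bopl}). 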
Both functions compute the distance-dependent \\gls{pathloss}, similarly to\nthe $\\mathit{pl}_d$ function from \\autoref{sec:reachi-experiments}, but rather than computing the total\ndistance-based \\gls{pathloss}, we need one function for the part of the distance with a clear view, and\nanother for the part of the distance where buildings obstruct the signal. To do this, we pose an\noptimisation problem, where we want to find the optimal constants for the two functions by minimising the\ndifference between the computed \\gls{rssi} and the measured \\gls{rssi} for a set of links $L$. The\n$\\mathit{compRSSI}(l) = \\mathit{tx}_\\mathit{power} - (\\alpha \\cdot (\\ln(d(l)) / \\ln(\\delta)) + \\beta)$\nfunction denotes the computed \\gls{rssi} for a link with the chosen values for $\\alpha$, $\\beta$, and\n$\\delta$.\n\\medbreak\n\nThe problem is defined as follows:\n\n\\begin{itemize}\n    \\item Input: A set of links $L$.\n    \\item Output: Optimal values for $\\alpha, \\beta, \\delta$.\n    \\item Goal: Minimise the $\\mathit{score}(\\alpha, \\beta, \\delta)$ function:\n\n          $\\mathit{score}(\\alpha, \\beta, \\delta) = \\mathlarger{\\sum}\\limits_{l \\in L} (\\mathit{compRSSI}(l) - \\mathit{measuredRSSI}(l))^2$\n\\end{itemize}\n\n\\subsection{Greedy Approach}\nTo solve the optimisation problem, we have chosen a greedy approach. First, to compute the optimal values for\nthe $\\mathit{cvpl}(l)$ function, we compile a set of links $L$, where the computed building percentage is\nbelow 5 \\%, and for the $\\mathit{bopl}(l)$ function, we compile a set of links $L$ where the computed building\npercentage is above 80 \\%. Ideally, we would like for the building percentage to be close to 100 \\%, but the\nnumber of links in the Marikina log with more than 95 \\% of buildings is very low. With these sets, we attempted\nto find the optimal values for $\\alpha$, $\\beta$, and $\\delta$ by going through $\\alpha, \\beta \\in \\{-100,\n\\dots, 100\\}$ with increments of $0.5$, and $\\delta \\in \\{2, \\dots, 100\\}$ with increments of $1$. This\nresulted in the following values for the $\\mathit{cvpl}(l)$ and $\\mathit{bopl}(l)$ functions:\n%\n\\begin{eq} \n    \\mathit{cvpl}(l) = 48.5 \\cdot (\\ln{(d(l))} / \\ln{(77)}) + 37.5 \n\\end{eq}\n\n\\begin{eq} \n    \\mathit{bopl}(l) = 67 \\cdot (\\ln{(d(l))} / \\ln{(57)}) + 11.5 \n\\end{eq}\n\n\n
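The search itself is straightforward to express in code. The following Python sketch (ours; the link tuples are hypothetical inputs, and the transmit power of 26 dBm matches the plots below) mirrors the brute-force loops, although it is slow at this resolution:\n\n\\begin{verbatim}\nimport math\n\ndef fit_pathloss(links, tx_power=26):\n    # links: iterable of (distance_m, measured_rssi) pairs.\n    # Exhaustively try alpha, beta in {-100..100} (step 0.5) and\n    # delta in {2..100}, keeping the smallest squared error.\n    best, best_score = None, math.inf\n    for a in range(-200, 201):\n        for b in range(-200, 201):\n            alpha, beta = a / 2, b / 2\n            for delta in range(2, 101):\n                score = sum(\n                    (tx_power\n                     - (alpha * math.log(d) / math.log(delta) + beta)\n                     - rssi) ** 2\n                    for d, rssi in links)\n                if score < best_score:\n                    best, best_score = (alpha, beta, delta), score\n    return best\n\\end{verbatim}\n\n%To solve the optimisation problem, brute forcing will be utilised because of time restrictions. The set of\n%links $L$, consisted of links with a computed building percentage below five percent or above 80\\% was\n%collected from the Marikina log into their separate collections. The Marikina log was used because the\n%experiment was conducted in a city, resulting in links with varying building percentages. For both\n%collections the links were further sorted based on distance of the links, with 20 meter intervals i.e. links\n%with distances between 20 meters and 40 meters was sorted together. The average \\gls{rssi} for each\n%separation was then computed. Links with building percentage above 80\\% was used for $\\mathit{bopl}$ and\n%links below 5 percent for $\\mathit{cvpl}$. The parameters $\\alpha,\\ \\beta \\in \\{-100 \\dots 100\\}$ with $0.5$\n%increments and $\\delta\\ \\in \\{2 \\dots 100\\}$ with $1$ increments. 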
\nWith these two functions defined, we can compute the total \\gls{pathloss} for a link. The function\n$p(l)$ denotes the pair of points for link $l$, as required by the input to the CompBuildingPct function.\n%\n\\begin{eq}\\label{eq:pl}\n    \\mathit{pl}(l) = (\\mathit{cvpl}(l) \\cdot (1 - \\text{CompBuildingPct}(p(l)))) + (\\mathit{bopl}(l) \\cdot \\text{CompBuildingPct}(p(l)))\n\\end{eq}\n\nFinally, with the $\\mathit{pl}(l)$ function, we can compute the \\gls{rssi} for the link $l$:\n%\n\\begin{eq}\\label{eq:plrssi}\n    \\mathit{RSSI}_{\\mathit{dBm}}(l) = \\mathit{tx}_\\mathit{power} - \\mathit{pl}(l)\n\\end{eq}\n
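\nIn code, \\autoref{eq:pl} and \\autoref{eq:plrssi} reduce to a weighted blend of the two fitted functions. A minimal C++ sketch, reusing the hypothetical \\texttt{comp\\_building\\_pct} from the earlier sketch:\n\\begin{verbatim}\n#include <cmath>\n\n// See the earlier sketch of CompBuildingPct.\ndouble comp_building_pct(double x1, double y1, double x2, double y2);\n\ndouble cvpl(double d) { return 48.5 * (std::log(d) / std::log(77.0)) + 37.5; }\ndouble bopl(double d) { return 67.0 * (std::log(d) / std::log(57.0)) + 11.5; }\n\n// Total path loss (eq:pl) and RSSI (eq:plrssi) for a link with\n// endpoints (x1, y1), (x2, y2), distance d (meters), and\n// transmission power tx_power (dBm).\ndouble rssi_dbm(double d, double x1, double y1,\n                double x2, double y2, double tx_power) {\n    double pct = comp_building_pct(x1, y1, x2, y2);\n    double pl = cvpl(d) * (1 - pct) + bopl(d) * pct;\n    return tx_power - pl;\n}\n\\end{verbatim}\n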
\n\\subsection{Evaluation}\n\nThe functions $\\mathit{cvpl}(l)$ and $\\mathit{bopl}(l)$ have been plotted\non \\autoref{plot:reachi-experiments:cvpl-vs-bopl}. $\\mathit{bopl}$ does result in greater \\gls{pathloss} at longer distances; however, the plot also reveals that up to 100 meters, $\\mathit{bopl}(l)$ computes a better \\gls{rssi} than $\\mathit{cvpl}(l)$. To further examine this, each function has been plotted with its training set: $\\mathit{cvpl}(l)$ in \\autoref{plot:reachi-experiments:marikina-log-below-5-pct} and $\\mathit{bopl}(l)$ in \\autoref{plot:reachi-experiments:marikina-log-above-80-pct}.\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\begin{axis}[\n                height=10cm, width=0.95\\textwidth,\n                ylabel={RSSI},\n                xlabel={Distance in meters},\n                axis lines*=left,\n                xmin=0, xmax=750,\n                enlargelimits=false,\n                ymajorgrids=true,\n                xmajorgrids=true,\n                grid style=dashed,\n                restrict y to domain=-120:0,\n                samples=700\n            ]\n\n            \\addplot[domain=0:1000, very thick, solid, cyan] {26 - bopl(x)};\n            \\addlegendentry{\\gls{bopl}};\n\n            \\addplot[domain=0:1000, very thick, dashed, red] {26 - cvpl(x)};\n            \\addlegendentry{\\gls{cvpl}};\n        \\end{axis}\n    \\end{tikzpicture}\n    \\caption{Plot showing samples drawn from \\gls{cvpl} and \\gls{bopl}.}\n    \\label{plot:reachi-experiments:cvpl-vs-bopl}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\begin{axis}[\n                title={score: 501.4932, links: 13481, score/link: 0.0372},\n                height=10cm, width=0.95\\textwidth,\n                ylabel={RSSI},\n                xlabel={Distance in meters},\n                axis lines*=left,\n                xmin=0, xmax=750,\n                enlargelimits=false,\n                ymin=-90, ymax=-30,\n                xtick={0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750},\n                ymajorgrids=true,\n                xmajorgrids=true,\n                grid style=dashed,\n                samples=700\n            ]\n\n            \\addplot[very thick, solid, cyan, mark=*] coordinates {(20, -36.01344537815126) (40, -48.361111111111114) (60, -54.93279022403259) (80, -62.40816326530612) (100, -68.14871794871794) (120, -60.85954712362301) (140, -71.69568452380952) (160, -74.36896551724138) (180, -73.93817204301075) (200, -75.09929078014184) (220, -73.38403041825094) (240, -75.43994413407822) (260, -77.69102990033223) (280, -77.31512605042016) (300, -75.7751937984496) (320, -78.60714285714286) (340, -78.38524590163935) (360, -78.52459016393442) (380, -77.34285714285714) (400, -80.96153846153847) (420, -81.03571428571429) (440, -80.41379310344827) (460, -74.18181818181819) (480, -79.9090909090909) (500, -79.75) (520, -77.56521739130434) (540, -81.23076923076923) (560, -78.9) (580, -85.0) (620, -82.5) (640, -82.33333333333333) (660, -82.4) (680, -77.5) (700, -85.4) (740, -77.0)};\n            \\addlegendentry{Marikina field measurements}\n\n            \\addplot[domain=0:740, very thick, solid, red] {26 - cvpl(x)};\n            \\addlegendentry{\\gls{cvpl}};\n        \\end{axis}\n    \\end{tikzpicture}\n    \\caption{Field measurements with building percentage below 5 \\%.}\n    \\label{plot:reachi-experiments:marikina-log-below-5-pct}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\begin{axis}[\n                title={score: 350.6854, links: 377, score/link: 0.9302},\n                height=10cm, width=0.95\\textwidth,\n                ylabel={RSSI},\n                xlabel={Distance in meters},\n                axis lines*=left,\n                xmin=0, xmax=380,\n
                enlargelimits=false,\n                ymin=-90, ymax=-30,\n                ymajorgrids=true,\n                xmajorgrids=true,\n                grid style=dashed,\n                samples=400\n            ]\n\n            \\addplot[very thick, solid, cyan, mark=*] coordinates {(20, -32.56521739130435) (40, -50.607142857142854) (60, -52.15384615384615) (80, -64.85714285714286) (100, -49.5) (120, -65.76623376623377) (140, -69.38888888888889) (160, -72.05714285714286) (180, -69.3125) (200, -78.83333333333333) (220, -76.84) (240, -75.75) (260, -80.91666666666667) (280, -72.88888888888889) (300, -78.95238095238095) (320, -76.44444444444444) (340, -82.75) (380, -87.0)};\n            \\addlegendentry{Marikina field measurements};\n\n            \\addplot[domain=0:380, very thick, solid, red] {26 - bopl(x)};\n            \\addlegendentry{\\gls{bopl}};\n        \\end{axis}\n    \\end{tikzpicture}\n    \\caption{Field measurements with building percentage above 80 \\%.}\n    \\label{plot:reachi-experiments:marikina-log-above-80-pct}\n\\end{figure}\n\nWith the optimal values for the $\\mathit{cvpl}(l)$ function, the final score was 501.4932 for 13481 links with less than 5 \\% buildings, corresponding to an average score per link of 0.0372. \\autoref{plot:reachi-experiments:marikina-log-below-5-pct} shows how well this function fits the measured values. \\smallbreak\n\nFor the $\\mathit{bopl}(l)$ function, we got a final score of 350.6854 for 377 links with more than 80 \\% buildings, an average of 0.9302 per link. This training set is far smaller than the 13481 links available for $\\mathit{cvpl}(l)$, so we may expect lower precision for the $\\mathit{bopl}(l)$ function. Another reason to expect lower precision is that the set does not consist solely of links with 100 \\% building coverage: links above 80 \\% had to be accepted, as there were close to no links with 100 \\% buildings.\n\nFinally, a comparison of the computed \\gls{rssi} values from \\autoref{eq:plrssi} with the measurements from the Marikina log, using the same distance bucketing as above, is shown in \\autoref{plot:reachi-experiments:marikina-log-vs-computed}. 
The plot shows that the computed values are slightly off in the range from about 75 meters to 300 meters, but compared to\n\\autoref{plot:reachi-experiments:measurements-vs-ld}, we do see an improvement.\n\n\\begin{figure}[ht]\n    \\centering\n    \\begin{subfigure}[b]{0.48\\textwidth}\n        \\centering\n        \\qrcode[hyperlink]{https://youtu.be/vVqHzVThW34}\n        \\caption{Field measurements visualised.}\n        \\label{fig:field-measurements-visualised-qr}\n    \\end{subfigure}\n    \\hfill\n    \\begin{subfigure}[b]{0.48\\textwidth}\n        \\centering\n        \\qrcode[hyperlink]{https://youtu.be/tZdhf6zcs2Y}\n        \\caption{Computed \\gls{rssi} values visualised.}\n        \\label{fig:computed-values-visualised-qr}\n    \\end{subfigure}\n    \\caption{Marikina field measurements and computed \\gls{rssi} values.}\n    \\label{fig:building-visualisation}\n\\end{figure}\n\n\\autoref{fig:building-visualisation} contains YouTube links to two visualisations, where\n\\autoref{fig:field-measurements-visualised-qr} visualises the Marikina field measurements, and\n\\autoref{fig:computed-values-visualised-qr} visualises the computed \\gls{rssi} values. Both visualisations\nhighlight the same subset of the links, to make it easier to follow. \\medbreak\n\nThe complete source code for the C++ implementation can be found on GitHub:\n\n\\url{https://github.com/Joklost/sims2}\n\n\\begin{figure}[H]\n    \\centering\n    \\begin{tikzpicture}\n        \\begin{axis}[\n                title={score: 1410.2234, links: 17761, score/link: 0.0794},\n                height=12cm, width=0.95\\textwidth,\n                ylabel={RSSI},\n                xlabel={Distance in meters},\n                axis lines*=left,\n                xmin=0, xmax=750,\n                enlargelimits=false,\n                ymin=-90, ymax=-20,\n                xtick={0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750},\n                ymajorgrids=true,\n                xmajorgrids=true,\n                grid style=dashed,\n            ]\n\n            \\addplot[thick, solid, cyan, mark=*] coordinates {(20, -28.32345013477089) (40, -44.85830258302583) (60, -52.77323717948718) (80, -60.21201657458563) (100, -66.47435897435898) (120, -69.68905472636816) (140, -71.5976496922216) (160, -73.7866473149492) (180, -75.53428571428572) (200, -76.89289392378991) (220, -77.88135593220339) (240, -77.8035019455253) (260, -77.36784140969164) (280, -77.14030612244898) (300, -77.75299760191847) (320, -79.71686746987952) (340, -79.15481171548117) (360, -79.90728476821192) (380, -81.30909090909091) (400, -81.79746835443038) (420, -81.52272727272727) (440, -79.2) (460, -79.42105263157895) (480, -79.4375) (500, -79.0) (520, -77.91666666666667) (540, -83.0) (560, -81.27272727272727) (580, -83.57142857142857) (600, -86.0) (620, -83.4) (640, -86.5) (660, -81.42857142857143) (680, -79.0) (700, -82.71428571428571) (740, -77.0)};\n            \\addlegendentry{Marikina field measurements};\n\n            \\addplot[thick, solid, red, mark=triangle*] coordinates 
{(20,-33.04359925788497)(40,-48.19327731092437)(60,-54.03703703703704)(80,-58.16688567674113)(100,-60.96085858585859)(120,-63.43184421534937)(140,-65.41129831516353)(160,-67.02463054187191)(180,-68.64031007751937)(200,-69.87730061349693)(220,-71.07770961145194)(240,-72.32267441860465)(260,-73.50986842105263)(280,-74.37786259541984)(300,-75.32413793103449)(320,-76.22083333333333)(340,-76.95930232558139)(360,-77.8018018018018)(380,-78.84883720930233)(400,-79.06557377049181)(420,-79.03030303030303)(440,-79.25)(460,-80.21428571428571)(480,-80.8)(500,-82.42857142857143)(520,-83.2)(540,-83.55555555555556)(560,-83.77777777777777)(580,-84.16666666666667)(600,-82.0)(620,-84.0)(640,-83.0)(660,-83.6)(680,-84.0)(700,-84.0)(740,-85.0)};\n            \\addlegendentry{Computed values};\n        \\end{axis}\n    \\end{tikzpicture}\n    \\caption{Field measurements vs. computed values.}\n    \\label{plot:reachi-experiments:marikina-log-vs-computed}\n\\end{figure}\n", "meta": {"hexsha": "4497b4a03a8b51d161123f31836c01e8e83a7020", "size": 19762, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/p10/sections/02-radiophysics/04-linkmodel.tex", "max_stars_repo_name": "Joklost/masters", "max_stars_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/p10/sections/02-radiophysics/04-linkmodel.tex", "max_issues_repo_name": "Joklost/masters", "max_issues_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/p10/sections/02-radiophysics/04-linkmodel.tex", "max_forks_repo_name": "Joklost/masters", "max_forks_repo_head_hexsha": "66bccba28a32ee47b7b874122de41c87f253349e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.6409495549, "max_line_length": 878, "alphanum_fraction": 0.6814087643, "num_tokens": 6253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5560989652218995}}
{"text": "\r\n\\documentclass[12pt]{article}\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%TCIDATA{OutputFilter=LATEX.DLL}\r\n%TCIDATA{Version=5.50.0.2953}\r\n%TCIDATA{<META NAME=\"SaveForMode\" CONTENT=\"1\">}\r\n%TCIDATA{BibliographyScheme=Manual}\r\n%TCIDATA{Created=Wednesday, May 17, 2006 13:09:49}\r\n%TCIDATA{LastRevised=Thursday, March 05, 2009 15:00:11}\r\n%TCIDATA{<META NAME=\"GraphicsSave\" CONTENT=\"32\">}\r\n%TCIDATA{<META NAME=\"DocumentShell\" CONTENT=\"Standard LaTeX\\Blank - Standard LaTeX Article\">}\r\n%TCIDATA{CSTFile=40 LaTeX article.cst}\r\n\r\n\\newtheorem{theorem}{Theorem}\r\n\\newtheorem{acknowledgement}[theorem]{Acknowledgement}\r\n\\newtheorem{algorithm}[theorem]{Algorithm}\r\n\\newtheorem{axiom}[theorem]{Axiom}\r\n\\newtheorem{case}[theorem]{Case}\r\n\\newtheorem{claim}[theorem]{Claim}\r\n\\newtheorem{conclusion}[theorem]{Conclusion}\r\n\\newtheorem{condition}[theorem]{Condition}\r\n\\newtheorem{conjecture}[theorem]{Conjecture}\r\n\\newtheorem{corollary}[theorem]{Corollary}\r\n\\newtheorem{criterion}[theorem]{Criterion}\r\n\\newtheorem{definition}[theorem]{Definition}\r\n\\newtheorem{example}[theorem]{Example}\r\n\\newtheorem{exercise}[theorem]{Exercise}\r\n\\newtheorem{lemma}[theorem]{Lemma}\r\n\\newtheorem{notation}[theorem]{Notation}\r\n\\newtheorem{problem}[theorem]{Problem}\r\n\\newtheorem{proposition}[theorem]{Proposition}\r\n\\newtheorem{remark}[theorem]{Remark}\r\n\\newtheorem{solution}[theorem]{Solution}\r\n\\newtheorem{summary}[theorem]{Summary}\r\n\\newenvironment{proof}[1][Proof]{\\noindent\\textbf{#1.} }{\\ \\rule{0.5em}{0.5em}}\r\n\\input{tcilatex}\r\n\\begin{document}\r\n\r\n\r\n\\section{Model}\r\n\r\n\\[\r\ny_{ig}=X_{i}\\beta +\\left( \\bar{X}_{g}\\beta \\right) \\gamma +\\mu\r\n_{g}+\\varepsilon _{i}+\\xi _{i}\r\n\\]%\r\n\\qquad \r\n\\begin{eqnarray*}\r\n\\mu _{g} &\\mid &\\bar{X}_{g}\\sim N\\left( 0,\\tau ^{2}\\right)  \\\\\r\n\\varepsilon _{i} &\\mid &\\bar{X}_{g},X_{i},\\mu _{g}\\sim N\\left( 0,\\sigma\r\n^{2}\\right)  \\\\\r\n\\xi _{i} &\\mid &\\bar{X}_{g},X_{i},\\mu _{g},\\varepsilon _{i}\\sim N\\left(\r\n0,r_{i}\\right) .\r\n\\end{eqnarray*}%\r\nComments:\r\n\r\n\\begin{itemize}\r\n\\item $\\mu _{g}$ is group random effect\r\n\r\n\\item $\\gamma $ allows for group effects to be correlated with average group\r\ncharacteristics\r\n\r\n\\item $\\varepsilon _{i}$ is individual shock\r\n\r\n\\item $\\xi _{i}$ is measurement error with known variance $r_{i}$ that can\r\nvary by individual.\r\n\\end{itemize}\r\n\r\n\\section{Likelihood}\r\n\r\nThe likelihood function for group $g$ is%\r\n\\[\r\nL_{g}=\\int_{-\\infty }^{\\infty }\\phi \\left( \\frac{\\mu _{g}}{\\tau }\\right)\r\n\\prod_{i=1}^{T_{g}}\\phi \\left( \\frac{y_{ig}-X_{i}\\beta -\\left( \\bar{X}%\r\n_{g}\\beta \\right) \\gamma -\\mu _{g}}{\\sqrt{\\sigma ^{2}+r_{i}}}\\right) d\\mu\r\n_{g}\r\n\\]%\r\nLet%\r\n\\[\r\n\\tilde{y}_{ig}=y_{ig}-X_{i}\\beta -\\left( \\bar{X}_{g}\\beta \\right) \\gamma \r\n\\]%\r\nand let%\r\n\\[\r\n\\tilde{\\sigma}_{i}^{2}=\\sigma ^{2}+r_{i}.\r\n\\]%\r\nThen likelihood can be written as%\r\n\\[\r\n\\ln L_{g}=-\\frac{1}{2}\\left[ \\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}%\r\n_{i}^{2}\\right) +\\ln \\left( 1+\\tau ^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) 
+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{%\r\n\\sigma}_{i}^{2}}-\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}%\r\n_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{2}\\right] \r\n\\]%\r\nfollowing derivation below.\r\n\r\nNote that in the case that $\\gamma =\\eta =0$, the likelihood becomes%\r\n\\[\r\n\\ln L_{g}=-\\frac{1}{2}\\left\\{ \\frac{1}{\\sigma ^{2}}\\left[ \\sum_{i=1}^{T_{g}}%\r\n\\left( y_{ig}-X_{i}\\beta \\right) ^{2}-\\frac{\\tau ^{2}}{T_{g}\\tau ^{2}+\\sigma\r\n^{2}}\\left( \\sum_{i=1}^{T_{g}}\\left( y_{ig}-X_{i}\\beta \\right) \\right) ^{2}%\r\n\\right] +\\ln \\left( T_{g}\\frac{\\tau ^{2}}{\\sigma ^{2}}+1\\right) +T_{g}\\ln\r\n\\left( \\sigma ^{2}\\right) \\right\\} \r\n\\]%\r\nBorrowing from the Stata manual, if we put the constant term back in\\\r\n(irrelevant for maximization \\&\\ derivatives) we have%\r\n\\[\r\n\\ln L_{g}=-\\frac{1}{2}\\left[ \\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}%\r\n_{i}^{2}\\right) +\\ln \\left( 1+\\tau ^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) +\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{%\r\n\\sigma}_{i}^{2}}-\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}%\r\n_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{2}+T_{g}\\ln \\left( 2\\pi \\right) %\r\n\\right] \r\n\\]\r\n\r\n\\section{First Derivatives}\r\n\r\n\\begin{eqnarray*}\r\n\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\beta } &=&%\r\n\\sum_{i=1}^{T_{g}}\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\left(\r\nX_{i}\\beta \\right) }X_{i} \\\\\r\n\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\left( X_{i}\\beta \\right) }\r\n&=&\\left( 1+\\frac{\\gamma }{T_{g}}\\right) \\left( \\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}-\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma%\r\n}_{i}^{2}}\\right) \\left( \\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \r\n\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) ^{-1}\\right)\r\n\\end{eqnarray*}%\r\n\\[\r\n\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\sigma }=-\\sigma \\left[\r\n\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}-\\frac{\\tau ^{2}\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{4}}\\right) }{1+\\tau\r\n^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }%\r\n-\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{4}}+2\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{4}}%\r\n\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{-1}-\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{4}}\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) ^{-2}\\right] \r\n\\]%\r\n\\[\r\n\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\tau }=-\\tau \\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}{\\left( 1+\\tau\r\n^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }+\\frac{1}{%\r\n\\tau ^{3}}\\left( \\frac{1}{\\tau 
^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma%\r\n}_{i}^{2}}\\right) ^{-2}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) ^{2} \r\n\\]%\r\n\\[\r\n\\frac{\\partial \\left( \\ln L_{g}\\right) }{\\partial \\gamma }=\\left( \\bar{X}%\r\n_{g}\\beta \\right) \\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) \\left[ 1-\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1%\r\n}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}\\right] \r\n\\]\r\n\r\n\\section{Derivation of Likelihood}\r\n\r\nSubstituting expression for normal pdf gives:%\r\n\\[\r\nL_{g}=\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1}{%\r\n\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1}{%\r\n2}\\frac{\\mu _{g}^{2}}{\\tau ^{2}}\\right) \\prod_{i=1}^{T_{g}}\\exp \\left( -%\r\n\\frac{1}{2}\\frac{\\left( \\tilde{y}_{ig}-\\mu _{g}\\right) ^{2}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) d\\mu _{g} \r\n\\]%\r\nRewriting:%\r\n\\begin{eqnarray*}\r\nL_{g} &=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1%\r\n}{\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1%\r\n}{2}\\left( \\frac{\\mu _{g}^{2}}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{\\left( \r\n\\tilde{y}_{ig}-\\mu _{g}\\right) ^{2}}{\\tilde{\\sigma}_{i}^{2}}\\right) \\right)\r\nd\\mu _{g} \\\\\r\n&=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1}{%\r\n\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1}{%\r\n2}\\left( \\frac{\\mu _{g}^{2}}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}%\r\n_{ig}^{2}-2\\tilde{y}_{ig}\\mu _{g}+\\mu _{g}^{2}}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) \\right) d\\mu _{g} \\\\\r\n&=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1}{%\r\n\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1}{%\r\n2}\\left( \\frac{\\mu _{g}^{2}}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\mu _{g}^{2}-2\\mu _{g}\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}%\r\n}{\\tilde{\\sigma}_{i}^{2}}+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) \\right) d\\mu _{g} \\\\\r\n&=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1}{%\r\n\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1}{%\r\n2}\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) \\left( \\mu _{g}^{2}-2\\mu _{g}\\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}+\\frac{\\sum_{i=1}^{T_{g}}%\r\n\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right) \\right) d\\mu _{g}\r\n\\end{eqnarray*}%\r\nCompleting the square:%\r\n\\begin{eqnarray*}\r\nL_{g} &=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1%\r\n}{\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1%\r\n}{2}\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) \\left( \\mu _{g}^{2}-2\\mu _{g}\\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau 
^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}+\\left( \\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{%\r\n\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right)\r\n^{2}-\\left( \\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}}{\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}}\\right) ^{2}+\\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{%\r\n\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}}\\right) \\right) d\\mu _{g} \\\\\r\nL_{g} &=&\\frac{1}{2\\pi }\\frac{1}{\\sqrt{\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1%\r\n}{\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\int_{-\\infty }^{\\infty }\\exp \\left( -\\frac{1%\r\n}{2}\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) \\left( \\left( \\mu _{g}^{2}-\\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right) ^{2}+\\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1%\r\n}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}-\\left( \r\n\\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{%\r\n1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right)\r\n^{2}\\right) \\right) d\\mu _{g}\r\n\\end{eqnarray*}%\r\nPulling constants out of integrand and \"building\" a normal pdf:%\r\n\\[\r\nL_{g}=\\frac{1}{2\\pi }\\sqrt{\\frac{2\\pi }{\\left( \\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }}\\frac{1}{\\sqrt{%\r\n\\tau ^{2}}}\\prod_{i=1}^{T_{g}}\\frac{1}{\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\exp\r\n\\left( -\\frac{1}{2}\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}%\r\n_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}%\r\n\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}-\\left( \\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right) ^{2}\\right)\r\n\\right) \\int_{-\\infty }^{\\infty }\\frac{1}{\\sqrt{\\frac{2\\pi }{\\left( \\frac{1}{%\r\n\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }}}\\exp\r\n\\left( -\\frac{1}{2}\\frac{\\left( \\mu _{g}^{2}-\\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right) ^{2}}{\\frac{1}{%\r\n\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}}%\r\n\\right) d\\mu _{g} \r\n\\]%\r\nAll PDFs integrate to one:%\r\n\\[\r\nL_{g}=\\frac{1}{\\sqrt{2\\pi }}\\sqrt{\\frac{1}{\\left( 1+\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tau ^{2}}{\\tilde{\\sigma}_{i}^{2}}\\right) }}\\prod_{i=1}^{T_{g}}\\frac{1}{%\r\n\\sqrt{\\tilde{\\sigma}_{i}^{2}}}\\exp \\left( -\\frac{1}{2}\\left( \\frac{1}{\\tau\r\n^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1%\r\n}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}-\\left( 
\r\n\\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{%\r\n1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right)\r\n^{2}\\right) \\right) \r\n\\]%\r\nTaking logs and dropping constants:%\r\n\\[\r\n\\ln L_{g}=-\\frac{1}{2}\\ln \\left( 1+\\sum_{i=1}^{T_{g}}\\frac{\\tau ^{2}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) -\\frac{1}{2}\\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{%\r\n\\sigma}_{i}^{2}\\right) -\\frac{1}{2}\\left( \\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1%\r\n}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}-\\left( \r\n\\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{%\r\n1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right)\r\n^{2}\\right) \r\n\\]%\r\nMultiply through by $-2$:%\r\n\\[\r\n-2\\ln L_{g}=\\ln \\left( 1+\\sum_{i=1}^{T_{g}}\\frac{\\tau ^{2}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) +\\ln \\tau ^{2}+\\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}%\r\n_{i}^{2}\\right) +\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}%\r\n_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}%\r\n\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}-\\left( \\frac{\\sum_{i=1}^{T_{g}}\\frac{%\r\n\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}}{\\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}\\right) ^{2}\\right) \r\n\\]%\r\nSimplifying:%\r\n\\[\r\n-2\\ln L_{g}=\\ln \\left( 1+\\sum_{i=1}^{T_{g}}\\frac{\\tau ^{2}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) +\\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}_{i}^{2}\\right)\r\n+\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}-%\r\n\\frac{1}{\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{2}\\right) \r\n\\]%\r\n\\[\r\n-2\\ln L_{g}=\\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}_{i}^{2}\\right) +\\ln\r\n\\left( 1+\\tau ^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}-\\left( \r\n\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{2} \r\n\\]\r\n\r\nChecking what happens when measurement error is zero:%\r\n\\begin{eqnarray*}\r\n-2\\ln L_{g} &=&\\ln \\left( 1+\\sum_{i=1}^{T_{g}}\\frac{\\tau ^{2}}{\\sigma ^{2}}%\r\n\\right) +\\sum_{i=1}^{T_{g}}\\ln \\left( \\sigma ^{2}\\right) +\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\sigma ^{2}}-\\frac{1}{\\frac{1}{%\r\n\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\sigma ^{2}}}\\left( \\sum_{i=1}^{T_{g}}%\r\n\\frac{\\tilde{y}_{ig}}{\\sigma ^{2}}\\right) ^{2}\\right) \\\\\r\n&=&\\ln \\left( 1+\\frac{T_{g}\\tau ^{2}}{\\sigma ^{2}}\\right) +T_{g}\\ln \\left(\r\n\\sigma ^{2}\\right) +\\frac{1}{\\sigma ^{2}}\\left( \\sum_{i=1}^{T_{g}}\\tilde{y}%\r\n_{ig}^{2}-\\frac{\\tau ^{2}}{\\sigma ^{2}+T_{g}\\tau ^{2}}\\left(\r\n\\sum_{i=1}^{T_{g}}\\tilde{y}_{ig}\\right) ^{2}\\right)\r\n\\end{eqnarray*}\r\n\r\n\\section{Derivation of Derivatives}\r\n\r\nrecall:%\r\n\\[\r\n-2\\ln L_{g}=\\sum_{i=1}^{T_{g}}\\ln \\left( 
\\tilde{\\sigma}_{i}^{2}\\right) +\\ln\r\n\\left( 1+\\tau ^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}-\\left( \r\n\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{2} \r\n\\]\r\n\r\n\\[\r\n\\tilde{y}_{ig}=y_{ig}-X_{i}\\beta -\\gamma \\left( \\frac{1}{T_{g}}%\r\n\\sum_{i=1}^{T_{g}}X_{i}\\beta \\right) \r\n\\]%\r\n\\[\r\n\\frac{\\partial \\tilde{y}_{ig}}{\\partial \\left( X_{i}\\beta \\right) }=-\\left(\r\n1+\\frac{\\gamma }{T_{g}}\\right) \r\n\\]%\r\n\\[\r\n\\tilde{\\sigma}_{i}^{2}=\\sigma ^{2}+\\eta _{i}^{2}. \r\n\\]\r\n\r\nderive:\r\n\r\n\\begin{eqnarray*}\r\n\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial \\beta }\r\n&=&\\sum_{i=1}^{T_{g}}\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial\r\n\\left( X_{i}\\beta \\right) }X_{i} \\\\\r\n\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial \\left( X_{i}\\beta\r\n\\right) } &=&\\frac{\\partial \\tilde{y}_{ig}}{\\partial \\left( X_{i}\\beta\r\n\\right) }\\left( \\frac{2\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}-\\left( \\frac{1%\r\n}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n^{-1}2\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) \\left( \\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) \\right) \\\\\r\n&=&2\\frac{\\partial \\tilde{y}_{ig}}{\\partial \\left( X_{i}\\beta \\right) }%\r\n\\left( \\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}-\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n\\left( \\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}\\right) \\\\\r\n&=&-2\\left( 1+\\frac{\\gamma }{T_{g}}\\right) \\left( \\frac{\\tilde{y}_{ig}}{%\r\n\\tilde{\\sigma}_{i}^{2}}-\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) \\left( \\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{-1}\\right)\r\n\\end{eqnarray*}\r\n\r\n\\bigskip\r\n\r\n\\[\r\n-2\\ln L_{g}=\\sum_{i=1}^{T_{g}}\\ln \\left( \\tilde{\\sigma}_{i}^{2}\\right) +\\ln\r\n\\left( 1+\\tau ^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n+\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{2}}-\\left( \r\n\\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{2} \r\n\\]%\r\n\\begin{eqnarray*}\r\n\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial \\sigma }\r\n&=&\\sum_{i=1}^{T_{g}}\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\tilde{\\sigma%\r\n}_{i}^{2}}\\frac{\\partial \\tilde{\\sigma}_{i}^{2}}{\\partial \\sigma } \\\\\r\n&=&2\\sigma \\sum_{i=1}^{T_{g}}\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{%\r\n\\tilde{\\sigma}_{i}^{2}} \\\\\r\n&=&2\\sigma \\left[ \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}+\\frac{%\r\n\\tau ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{-1}{\\tilde{\\sigma}_{i}^{4}}\\right) }{%\r\n1+\\tau ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }%\r\n+\\sum_{i=1}^{T_{g}}-\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{4}}-\\left( \r\n\\frac{1}{\\tau 
^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}%\r\n\\right) ^{-1}2\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) \\left( \\sum_{i=1}^{T_{g}}\\frac{-\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{4}}\\right) -\\left( -1\\right) \\left( \\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{-2}\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{-1}{\\tilde{\\sigma}_{i}^{4}}\\right) \\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{2}%\r\n\\right] \\\\\r\n&=&2\\sigma \\left[ \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}-\\frac{%\r\n\\tau ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{4}}\\right) }{%\r\n1+\\tau ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }%\r\n-\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}^{2}}{\\tilde{\\sigma}_{i}^{4}}+2\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{4}}%\r\n\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{2}}\\right) ^{-1}-\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) ^{2}\\left( \\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}%\r\n_{i}^{4}}\\right) \\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{%\r\n\\tilde{\\sigma}_{i}^{2}}\\right) ^{-2}\\right]\r\n\\end{eqnarray*}%\r\n\\[\r\n\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial \\tau }=2\\tau \\frac{%\r\n\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}}{\\left( 1+\\tau\r\n^{2}\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) }-\\frac{2}{%\r\n\\tau ^{3}}\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma%\r\n}_{i}^{2}}\\right) ^{-2}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{%\r\n\\sigma}_{i}^{2}}\\right) ^{2} \r\n\\]\r\n\r\n\\begin{eqnarray*}\r\n\\frac{\\partial \\left( -2\\ln L_{g}\\right) }{\\partial \\gamma } &=&\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{2\\tilde{y}_{ig}\\left( -\\left( \\bar{X}_{g}\\beta\r\n\\right) \\right) }{\\tilde{\\sigma}_{i}^{2}}-\\left( \\frac{1}{\\tau ^{2}}%\r\n+\\sum_{i=1}^{T_{g}}\\frac{1}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}2\\left(\r\n\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right)\r\n\\sum_{i=1}^{T_{g}}\\frac{-\\bar{X}_{g}\\beta }{\\tilde{\\sigma}_{i}^{2}}\\right) \\\\\r\n&=&-2\\sum_{i=1}^{T_{g}}\\frac{\\tilde{y}_{ig}\\left( \\bar{X}_{g}\\beta \\right) }{%\r\n\\tilde{\\sigma}_{i}^{2}}+2\\left( \\frac{1}{\\tau ^{2}}+\\sum_{i=1}^{T_{g}}\\frac{1%\r\n}{\\tilde{\\sigma}_{i}^{2}}\\right) ^{-1}\\left( \\sum_{i=1}^{T_{g}}\\frac{\\tilde{y%\r\n}_{ig}}{\\tilde{\\sigma}_{i}^{2}}\\right) \\sum_{i=1}^{T_{g}}\\frac{\\bar{X}%\r\n_{g}\\beta }{\\tilde{\\sigma}_{i}^{2}}\r\n\\end{eqnarray*}\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "003e361fc9ed792b59f4fb88d3a95d672faa1029", "size": 21337, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "xtreg_ch/docs/xtreg_ch.tex", "max_stars_repo_name": "linhtto/ttecon_stata", "max_stars_repo_head_hexsha": "1cf1d9e5cae30c5bd280b4b8ff0cc0c252601383", "max_stars_repo_licenses": ["MIT", "Unlicense"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2017-12-14T03:40:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-10T01:45:22.000Z", "max_issues_repo_path": "xtreg_ch/docs/xtreg_ch.tex", "max_issues_repo_name": "linhtto/ttecon_stata", 
"max_issues_repo_head_hexsha": "1cf1d9e5cae30c5bd280b4b8ff0cc0c252601383", "max_issues_repo_licenses": ["MIT", "Unlicense"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2017-06-01T21:41:59.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-13T01:18:22.000Z", "max_forks_repo_path": "xtreg_ch/docs/xtreg_ch.tex", "max_forks_repo_name": "linhtto/ttecon_stata", "max_forks_repo_head_hexsha": "1cf1d9e5cae30c5bd280b4b8ff0cc0c252601383", "max_forks_repo_licenses": ["MIT", "Unlicense"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2017-06-01T03:19:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-01T05:58:58.000Z", "avg_line_length": 50.8023809524, "max_line_length": 253, "alphanum_fraction": 0.5383605943, "num_tokens": 10142, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7310585727705126, "lm_q1q2_score": 0.5560801805232173}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{url}\n\n\\author{Niels M\u00f6ller}\n\\title{Notes on ECC formulas}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Weierstrass curve}\n\nConsider only the special case\n\\begin{equation*}\n  y^2 = x^3 - 3x + b \\pmod{p}\n\\end{equation*}\nSee \\url{http://www.hyperelliptic.org/EFD/g1p/auto-shortw.html}.\n\nAffine formulas for duplication, $(x_2, y_2) = 2(x_1, y_1)$:\n\\begin{align*}\n  t &=  (2y)^{-1} 3 (x_1^2 - 1) \\\\\n  x_2 &= t^2 - 2 x_1 \\\\\n  y_2 &= (x_1 - x_2) * t - y_1\n\\end{align*}\nAffine formulas for addition, $(x_3, y_3) = (x_1, y_1) + (x_2,\ny_2)$:\n\\begin{align*}\n  t &= (x_2 - x_1)^{-1} (y_2 - y_1) \\\\\n  x_3 &= t^2 - x_1 - x_2 \\\\\n  y_3 &= (x_1 - x_3) t - y_1\n\\end{align*}\n\n\\section{Montgomery curve}\n\nConsider the special case\n\\begin{equation*}\n  y^2 = x^3 + b x^2 + x  \n\\end{equation*}\nSee \\url{http://www.hyperelliptic.org/EFD/g1p/auto-montgom.html}.\n\nAffine formulas for duplication, $(x_2, y_2) = 2(x_1, y_1)$:\n\\begin{align*}\n  t &= (2 y_1)^{-1} (3 x_1^2 + 2b x_1 + 1) \\\\\n  x_2 &= t^2 - b - 2 x_1 \\\\\n  y_2 &= (3 x_1 + b) t - t^3 - y_1 \\\\\n  &= (3 x_1 + b - t^2) t - y_1 \\\\\n  &= (x_1 - x_2) t - y_1\n\\end{align*}\nSo the computation is very similar to the Weierstra\u00df case, differing\nonly in the formula for $t$, and the $b$ term in $x_2$.\n\nAffine formulas for addition, $(x_3, y_3) = (x_1, y_1) + (x_2,\ny_2)$:\n\\begin{align*}\n  t &= (x_2 - x_1)^{-1} (y_2 - y_1) \\\\\n  x_3 &= t^2 - b - x_1 - x_2 \\\\\n  y_3 &= (2 x_1 + x_2 + b) t - t^3 - y_1 \\\\\n  &= (2 x_1 + x_2 + b - t^2) t - y_1 \\\\\n  &= (x_1 - x_3) t - y_1\n\\end{align*}\nAgain, very similar to the Weierstra\u00df formulas, with only an\nadditional $b$ term in the formula for $x_3$.\n\n\\subsection{Montgomery ladder}\n\nIt's possible to do operations on a Montgomery curve in terms of the\n$x$ coordinate only. Or, with homogeneous coordinates, use $X$ and $Z$\nwith $x = X/Z$.\n\nFor doubling,\n\\begin{align*}\n  x' &= (x^2 - z^2)^2 = (x-z)^2 (x+z)^2 \\\\\n  t &= (x+z)^2 - (x-z)^2 \\\\\n  z' &= 4 xz (x^2 + bzx + z^2) = t \\left((x+z)^2 + b't\\right)\n\\end{align*}\nwith $b' = (b-2)/4$.\n\nAddition is a bit trickier. If we have $x$ and $z$ for points $Q_1$,\n$Q_2$ and $Q_3$, with $Q_3 = Q_1 +  Q_3$, and $x_1, z_1 \\neq 0$, we\nget the coordinates for $Q_2 + Q_3$ as\n\\begin{align*}\n  x' &= 4 (x_2 x_3 - z_2 z_3)^2 z_1 = \\left((x_2 - z_2)(x_3 + z_3) +\n    (x_2 + z_2)(x_3 - z_3)\\right)^2 z_1 \\\\\n  z' &= 4 (x_2 z_3 - z_2 x_3)^2 x_1 = \\left((x_2 - z_2)(x_3 + z_3) -\n    (x_2 + z_2)(x_3 - z_3)\\right)^2 x_1\n\\end{align*}\nNote that the doubling formula is symmetric in $Q_2$ and $Q_3$. Which\nis consistent with negating of $Q_1$, which really is the negatiion of\nthe $y$-coordinate, which doesn't appear in the formula.\n\nThis can be used for a binary ``Montgomery ladder'' to compute $n Q$\nfor any $n$. 
\nIf we have the points $Q$, $n Q$, and $(n+1) Q$, we can\ncompute the three points\n\\begin{align*}\n  (2n) Q &= 2 (nQ) && \\text{doubling} \\\\\n  (2n+1) Q &= (nQ) + (n+1)Q && \\text{addition} \\\\\n  (2n+2) Q &= 2((n+1) Q) && \\text{doubling}\n\\end{align*}\n\nThe following algorithm is suggested by djb (see\n\\url{http://www.ietf.org/mail-archive/web/cfrg/current/msg05004.html}).\n\\begin{verbatim}\n   x2,z2,x3,z3 = 1,0,x1,1\n   for i in reversed(range(255)):\n     bit = 1 & (n >> i)\n     x2,x3 = cswap(x2,x3,bit)\n     z2,z3 = cswap(z2,z3,bit)\n     x3,z3 = ((x2*x3-z2*z3)^2,x1*(x2*z3-z2*x3)^2)\n     x2,z2 = ((x2^2-z2^2)^2,4*x2*z2*(x2^2+A*x2*z2+z2^2))\n     x2,x3 = cswap(x2,x3,bit)\n     z2,z3 = cswap(z2,z3,bit)\n   return x2*z2^(p-2)\n\\end{verbatim}\nIt's not too hard to decipher this. The update for $x_2, z_2$ is the\ndoubling. The update for $x_3, z_3$ is an addition.\n\nIf the bit is zero, we get $x_2', z_2'$ representing $Q_2' = 2 Q_2$,\nand $x_3', z_3'$ representing $Q_3' = Q_2 + Q_3 = 2 Q_2 + Q_1$.\n\nWhat if the bit is set? For the doubling, we get it applied to $Q_3$\ninstead, so we get $x_3', z_3'$ representing $Q_3' = 2 Q_3 = 2 Q_2 + 2\nQ_1$. For the add, the initial swap flips the sign of one of the\nintermediate values, but the end result is the same, so we get $x_2',\nz_2'$ representing $Q_2' = Q_2 + Q_3 = 2 Q_2 + Q_1$, as desired.\n\nNote that the initial conditional swap doesn't have to be a full swap;\nif that's convenient in the implementation, a conditional assignment\nshould be sufficient to get the duplication formula applied to the\nright point. It looks like, in all cases, one will start by computing\n$x_2 \\pm z_2$ and $x_3 \\pm z_3$, so maybe one can apply conditional\nassignment to these values instead.\n\n\\section{Edwards curve}\n\nFor an Edwards curve, we consider the special case\n\\begin{equation*}\n  x^2 + y^2 = 1 + d x^2 y^2\n\\end{equation*}\nSee \\url{http://cr.yp.to/papers.html#newelliptic}. The neutral point is\n$(0, 1)$.\n\nAffine formulas for addition, $(x_3, y_3) = (x_1, y_1) + (x_2,\ny_2)$:\n\\begin{align*}\n  t &= d x_1 x_2 y_1 y_2 \\\\\n  x_3 &= (1 + t)^{-1} (x_1 y_2 + y_1 x_2) \\\\\n  y_3 &= (1 - t)^{-1} (y_1 y_2 - x_1 x_2)\n\\end{align*}\nWith homogeneous coordinates $(X_1, Y_1, Z_1)$ etc., D.~J.~Bernstein\nsuggests the formulas\n\\begin{align*}\n  A &= Z_1 Z_2 \\\\\n  B &= A^2 \\\\\n  C &= X_1 X_2 \\\\\n  D &= Y_1 Y_2 \\\\\n  E &= d C D \\\\\n  F &= B - E \\\\\n  G &= B + E \\\\\n  X_3 &= A F [(X_1 + Y_1)(X_2 + Y_2) - C - D] \\\\\n  Y_3 &= A G (D - C) \\\\\n  Z_3 &= F G\n\\end{align*}\nThis works also for doubling, but a more efficient variant is\n\\begin{align*}\n  B &= (X_1 + Y_1)^2 \\\\\n  C &= X_1^2 \\\\\n  D &= Y_1^2 \\\\\n  E &= C + D \\\\\n  H &= Z_1^2 \\\\\n  J &= E - 2H \\\\\n  X_3 &= (B - E) J \\\\\n  Y_3 &= E (C - D) \\\\\n  Z_3 &= E J\n\\end{align*}\n\n\\section{EdDSA}\n\nThe EdDSA paper (\\url{http://ed25519.cr.yp.to/ed25519-20110926.pdf})\nsuggests using the twisted Edwards curve,\n\\begin{equation*}\n  -x^2 + y^2 = 1 + d' x^2 y^2 \\pmod{p}\n\\end{equation*}\n(For this we use $d' = -d$, with $d = (121665/121666) \\bmod p$, where\n$d$ is the same as in the curve25519 equivalence described below).\nAssuming $-1$ has a square root modulo $p$, a point $(x, y)$ lies on\nthis curve if and only if $(\\sqrt{-1} x, y)$ lies on the non-twisted\nEdwards curve. 
The point addition formulas for the twisted Edwards\ncurve are\n\\begin{align*}\n  t &= d' x_1 x_2 y_1 y_2 \\\\\n  x_3 &= (1 + t)^{-1} (x_1 y_2 + y_1 x_2) \\\\\n  y_3 &= (1 - t)^{-1} (y_1 y_2 + x_1 x_2)\n\\end{align*}\nor in terms of $d$ rather than $d'$, signs are switched as\n\\begin{align*}\n  t &= d x_1 x_2 y_1 y_2 \\\\\n  x_3 &= (1 - t)^{-1} (x_1 y_2 + y_1 x_2) \\\\\n  y_3 &= (1 + t)^{-1} (y_1 y_2 + x_1 x_2)\n\\end{align*}\n\nFor the other formulas, it should be fine to just switch the sign of\nterms involving $x_1 x_2$ or $x_1^2$. The paper suggests further\noptimizations: For precomputed points, use the representation $(x-y,\nx+y, dxy)$. And for temporary points, maintain an additional redundant\ncoordinate $T$, with $Z T = X Y$ (see\n\\url{http://eprint.iacr.org/2008/522.pdf}).\n\nAccording to djb, the formulas in Section 3.1 are the ones to use,\nbecause they are complete. See\n\\url{http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html#addition-add-2008-hwcd-3},\n\\begin{align*}\n  A &= (y_1 - x_1)(y_2 - x_2) \\\\\n  B &= (y_1 + x_1)(y_2 + x_2) \\\\\n  C &= 2 t_1 d' t_2 \\\\\n  D &= 2 z_1 z_2 \\\\\n  E &= B - A \\\\\n  F &= D - C \\\\\n  G &= D + C \\\\\n  H &= B + A \\\\\n  x_3 &= E F \\\\\n  y_3 &= G H \\\\\n  t_3 &= E H \\\\\n  z_3 &= F G\n\\end{align*}\n\nIn our notation $a = -1$, and the $d'$ above is $-d$.\n\n\\subsection{Decompression}\n\nFor EdDSA, points are represented by the $y$ coordinate and only the\nlow bit, or ``sign'' bit, of the $x$ coordinate. Then $x^2$ can be\ncomputed as\n\\begin{align*}\n  x^2 &= (1-y^2) (d y^2 - 1)^{-1} \\\\\n  &= 121666 (1-y^2) (121665 y^2 - 121666)^{-1}\n\\end{align*}\nWe then get $x$ from a square root, and we can use a trick of djb's to\navoid the inversion.\n\n\\section{Curve25519}\n\nCurve25519 is defined as the Montgomery curve\n\\begin{equation*}\n  y^2 = x^3 + b x^2 + x \\pmod p\n\\end{equation*}\nwith $b = 486662$ and $p = 2^{255} -19$. See\n\\url{http://cr.yp.to/ecdh/curve25519-20060209.pdf}. It is equivalent\nto the Edwards curve\n\\begin{equation*}\n  u^2 + v^2 = 1 + d u^2 v^2 \\pmod p\n\\end{equation*}\nwith $d = (121665/121666) \\bmod p$. The equivalence is given by\nmapping $P = (x,y)$ to $P' = (u, v)$, as follows.\n\\begin{itemize}\n\\item $P = \\infty$ corresponds to $P' = (0, 1)$\n\\item $P = (0, 0)$ corresponds to $P' = (0, -1)$\n\\item Otherwise, for all other points on the curve. First note that $x\n  \\neq -1$ (since then the right hand side is not a quadratic\n  residue), and that $y \\neq 0$ (since $y = 0$ and $x \\neq 0$ implies\n  that $x^2 + bx + 1 = 0$, or $(x + b/2)^2 = (b/2)^2 - 1$, which also\n  isn't a quadratic residue). The correspondence is then given by\n  \\begin{align*}\n    u &= \\sqrt{b+2} \\, x / y \\\\\n    v &= (x-1) / (x+1)\n  \\end{align*}\n\\end{itemize}\n\nThe inverse transformation is\n\\begin{align*}\n  x &= (1+v) / (1-v) \\\\\n  y &= \\sqrt{b+2} \\, x / u \n\\end{align*}\nIf the Edwards coordinates are represented using homogeneous\ncoordinates, $u = U/W$ and $v = V/W$, then\n\\begin{align*}\n  x &= \\frac{W+V}{W-V} \\\\\n  y &= \\sqrt{b+2} \\, \\frac{(W+V) W}{(W-V) U} \n\\end{align*}\nso we need to invert the value $(W-V) U$.\n\n\\subsection{Transforms for the twisted Edwards Curve}\n\nIf we use the twisted Edwards curve instead, let $\\alpha = \\sqrt{-1}\n\\pmod{p}$. Then we work with coordinates $(u', v)$, where $u' = \\alpha u$. 
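To check that this lands on the twisted curve, note that $u'^2 = \\alpha^2 u^2 = -u^2$, so if $(u, v)$\nsatisfies $u^2 + v^2 = 1 + d u^2 v^2$, then\n\\begin{equation*}\n  -u'^2 + v^2 = u^2 + v^2 = 1 + d u^2 v^2 = 1 - d u'^2 v^2,\n\\end{equation*}\nwhich is the twisted equation with $d' = -d$. 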
The transform from Montgomery form $(x, y)$ is then\n\\begin{align*}\n  u' &= (\\alpha \\sqrt{b+2}) \\, x / y\\\\\n  v &= (x-1) / (x+1)\n\\end{align*}\nAnd the inverse transform is similarly\n\\begin{align*}\n  x &= (1+v) / (1-v) \\\\\n  y &= (\\alpha \\sqrt{b+2}) \\, x / u' \n\\end{align*}\nso it's just a change of the transform constant, effectively using\n$\\sqrt{-(b+2)}$ instead.\n\n\\subsection{Coordinates outside of the base field}\n\nThe curve25519 function is defined with an input point represented by\nthe $x$-coordinate only, and is specified as allowing any value. The\ncorresponding $y$ coordinate is given by\n\\begin{equation*}\n  y = \\sqrt{x^3 + b x^2 + x} \\pmod p\n\\end{equation*}\nwhenever this square root exists. But what if it doesn't? Then we work\nwith the curve over the extended field $F_{p^2}$. Let $n$ be any\nnon-square; then $(x^3 + b x^2 + x) n$ is a square, and we get\n$y = y' / \\sqrt{n}$ with\n\\begin{equation*}\n  y' = \\sqrt{(x^3 + b x^2 + x) n}\n\\end{equation*}\nIt happens that for all multiples of such a point, this same factor is\ntacked on to all the $y$-coordinates, while all the $x$-coordinates\nremain in the base field $F_p$. It's the ``twist'' curve $y'^2 / n =\nx^3 + bx^2 + x$. On the corresponding Edwards curve, we\nget $u = \\sqrt{n} u'$ with\n\\begin{equation*}\n  u' = \\sqrt{b+2} \\, x / y'\n\\end{equation*}\nand the addition formula\n\\begin{align*}\n  t &= d n u'_1 u'_2 v_1 v_2 \\\\\n  u'_3 &= (1+t)^{-1}(u'_1v_2 + v_1 u'_2) \\\\\n  v_3 &= (1-t)^{-1}(v_1 v_2 - n u'_1 u'_2)\n\\end{align*}\nIt seems a bit tricky to handle both types of point in a single\nfunction without speed penalty, due to the conditional factor of $n$\nin the formula for $v_3$.\n\\end{document}\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: t\n%%% End: \n", "meta": {"hexsha": "bf06d110be2fc0e4db24ba382d551e620b008aac", "size": 10945, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Modules/nettle/misc/ecc-formulas.tex", "max_stars_repo_name": "blondfrogs/ravenwallet-ios", "max_stars_repo_head_hexsha": "ebcaf881edb2b3d1588de200c4d2fff217c3061f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2018-10-31T14:37:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T05:37:43.000Z", "max_issues_repo_path": "Modules/nettle/misc/ecc-formulas.tex", "max_issues_repo_name": "blondfrogs/ravenwallet-ios", "max_issues_repo_head_hexsha": "ebcaf881edb2b3d1588de200c4d2fff217c3061f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 124, "max_issues_repo_issues_event_min_datetime": "2018-11-08T13:22:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-17T16:03:59.000Z", "max_forks_repo_path": "Modules/nettle/misc/ecc-formulas.tex", "max_forks_repo_name": "blondfrogs/ravenwallet-ios", "max_forks_repo_head_hexsha": "ebcaf881edb2b3d1588de200c4d2fff217c3061f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2018-10-31T14:37:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-31T23:54:51.000Z", "avg_line_length": 32.3816568047, "max_line_length": 97, "alphanum_fraction": 0.6178163545, "num_tokens": 4449, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703225, "lm_q2_score": 0.6859494485880928, "lm_q1q2_score": 0.555947560494687}}
{"text": "\\section{Higher group theory}\n\n\\subsection{The category of pointed connected \\texorpdfstring{$1$}{1}-types}\n\n\\begin{prp}\\label{prp:istrunc-precomp-isconn}\n  Consider a $k$-connected map $f:X\\to Y$, and a family $P$ of $(k+n)$-truncated types over $Y$, where $n\\geq 0$. Then the precomposition map\n  \\begin{equation*}\n    \\blank\\circ f : \\Big(\\prd{y:Y}P(y)\\Big)\\to\\Big(\\prd{x:X}P(f(x))\\Big)\n  \\end{equation*}\n  is $(n-2)$-truncated. \n\\end{prp}\n\n\\begin{prp}\nConsider a pointed $(k+1)$-connected type $X$, and a family $Y:X\\to\\UU^{\\le n+k}$ of $(n+k)$-truncated types over $X$. Then the map\n\\begin{equation*}\n\\evpt : \\Big(\\prd{x:X}Y(x)\\Big) \\to Y(\\pt)\n\\end{equation*}\ninduced by the point inclusion $\\unit\\to X$, is an $(n-2)$-truncated map.\n\\end{prp}\n\n\\begin{proof}\nNote that we have a commuting triangle\n\\begin{equation*}\n\\begin{tikzcd}[column sep=-1em]\n& \\Big(\\prd{x:X}Y(x)\\Big) \\arrow[dl,swap,\"\\blank\\circ \\mathsf{const}_\\pt\"] \\arrow[dr,\"\\evpt\"] & \\phantom{\\Big(\\prd{t:\\unit} Y(\\pt)\\Big)} \\\\\n\\Big(\\prd{t:\\unit} Y(\\pt)\\Big) \\arrow[rr,swap,\"\\evpt\",\"\\eqvsym\"'] & & Y(\\pt),\n\\end{tikzcd}\n\\end{equation*}\nso the map on the left is an $(n-2)$-truncated map if and only if the map on the right is. For the map on the left, the claim follows immediately from \\cref{prp:istrunc-precomp-isconn}, since the point inclusion $\\mathsf{const}_\\pt:\\unit\\to X$ is a $k$-connected map by \\cref{cor:ptd_connected}.\n\\end{proof}\n\n\\begin{defn}\n  If $X : \\UU_\\pt$ and $Y : X \\to \\UU_\\pt$, then we introduce the\n  type of \\define{pointed sections},\n\\begin{equation*}\n\\textstyle{\\prod_{(x:X)}^\\ast Y(x)} \\defeq \\sm{s:\\prd{x:X}Y(x)}s(\\pt)=\\pt\n\\end{equation*}\n  This type is itself pointed by the trivial section $\\lam{x}\\pt$.\n\\end{defn}\n\n\\begin{cor}\nConsider a pointed $k$-connected type $X$, and a family $Y:X\\to\\UU_\\pt^{\\le n+k}$ of pointed $(n+k)$-truncated types over $X$. Then the type $\\prod_{(x:X)}^\\ast Y(x)$ is $(n-1)$-truncated.\n\\end{cor}\n\n\\begin{proof}\nNote that we have a pullback square\n\\begin{equation*}\n\\begin{tikzcd}\n\\textstyle{\\prod_{(x:X)}^\\ast Y(x)} \\arrow[r] \\arrow[d] & \\unit \\arrow[d] \\\\\n\\prd{x:X}Y(x) \\arrow[r,swap,\"\\evpt\"] & Y(\\ast),\n\\end{tikzcd}\n\\end{equation*}\nso the claim follows from the fact that $\\evpt$ is an $(n-1)$-truncated map.\n\\end{proof}\n\n\\begin{thm}\n  The type $\\hom_{(n,k)}(G,H)$ is an $n$-type for any $G,H:(n,k)\\GType$.\n\\end{thm}\n\n\\begin{proof}\n  If $X$ is $(k-1)$-connected, and $Y$ is $(n+k)$-truncated, then the type of pointed maps $X \\to_\\pt Y$ is $n$-truncated.\n\\end{proof}\n\n\\begin{cor}\n  The type $(n,k)\\GType$ is $(n+1)$-truncated.\n\\end{cor}\n\\begin{proof}\n  This follows immediately from the preceding corollary, as the type\n  of equivalences $G \\eqvsym H$ is a subtype of the homomorphisms from\n  $G$ to $H$.\n\\end{proof}\n\nIf $k\\ge n+2$ (so we're in the stable range), then $\\hom_{(n,k)}(G,H)$\nbecomes a stably groupal $n$-groupoid. 
This generalizes the\nfact that the homomorphisms between abelian groups form an abelian\ngroup.\n\n\\begin{cor}\nThe automorphism group $\\Aut G$ of a higher group $G:(n,k)\\GType$ is a $1$-groupal $(n+1)$-group, equivalent to the automorphism group of the pointed type $B^kG$.\n\\end{cor}\n\n\\begin{prp}\n  For any two pointed $n$-connected $(n+k+1)$-truncated types $X$ and $Y$, the type of pointed maps\n  \\begin{equation*}\n    X\\to_\\ast Y\n  \\end{equation*}\n  is $k$-truncated. \n\\end{prp}\n\n\\begin{cor}\n  For any two pointed $n$-connected $(n+1)$-truncated types $X$ and $Y$, the type of pointed maps\n  \\begin{equation*}\n    X\\to_\\ast Y\n  \\end{equation*}\n  is a set.\n\\end{cor}\n\n\\begin{thm}\n  The pre-category of $n$-connected $(n+1)$-truncated types in a universe $\\UU$ is Rezk complete.\n\\end{thm}\n\n\\subsection{Equivalences of categories}\n\n\\begin{defn}\n  A \\define{functor} is...\n\\end{defn}\n\n\\begin{defn}\n  A functor $F:\\mathcal{C}\\to\\mathcal{D}$ is an equivalence if ...\n\\end{defn}\n\n\\subsection{The equivalence of groups and pointed connected \\texorpdfstring{$1$}{1}-types}\n\n\\begin{thm}\n  The loop space functor\n  \\begin{equation*}\n    \\pcttype{0}{1} \\to \\Group\n  \\end{equation*}\n  is an equivalence of categories.\n\\end{thm}\n", "meta": {"hexsha": "1be393d43f0ae901c885503070457ce210cb1491", "size": 4072, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Book/higher-group-theory.tex", "max_stars_repo_name": "hemangandhi/HoTT-Intro", "max_stars_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 333, "max_stars_repo_stars_event_min_datetime": "2018-09-26T08:33:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T23:50:15.000Z", "max_issues_repo_path": "Book/higher-group-theory.tex", "max_issues_repo_name": "hemangandhi/HoTT-Intro", "max_issues_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-06-18T04:16:04.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-16T15:27:01.000Z", "max_forks_repo_path": "Book/higher-group-theory.tex", "max_forks_repo_name": "hemangandhi/HoTT-Intro", "max_forks_repo_head_hexsha": "09c710bf9c31ba88be144cc950bd7bc19c22a934", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 30, "max_forks_repo_forks_event_min_datetime": "2018-09-26T09:08:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-16T00:33:50.000Z", "avg_line_length": 33.652892562, "max_line_length": 295, "alphanum_fraction": 0.6652750491, "num_tokens": 1508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.835483553488848, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.555939583694363}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{listings}\n\\usepackage{courier}\n\n\\lstset{basicstyle=\\scriptsize\\ttfamily}\n\\lstset{commentstyle=\\normalfont\\itshape}\n\n\\title{Correctness of KACTL's modmul}\n\\author{Simon Lindholm}\n\\date{2020-05-02}\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Introduction}\n\nWithin computational number theory, and for hashing, there is sometimes a need to compute modular multiplications $a \\cdot b \\pmod{c}$ for relatively large $c$, in particular larger than $2^{32}$. KACTL contains the following algorithm for computing this for $0 \\le a, b \\le c < 7.268\\cdot 10^{18}$: \\footnote{this number equals $r \\cdot 2^{64}$, where $r = (\\sqrt{177} - 7) / 16 \\approx 0.394$ is the positive solution to the equation $8x^2 + 7x = 4$.}\n\n\\begin{lstlisting}[language=C++]\ntypedef int64_t ll;\ntypedef uint64_t ull;\null mod_mul(ull a, ull b, ull c) {\n    ll ret = a * b - c * ull(1.L / c * a * b);\n    return ret + c * (ret < 0) - c * (ret >= (ll)c);\n}\n\\end{lstlisting}\n\n\\noindent\nIt assumes an x86 or x86\\_64 processor, and runs about 2 times faster than the naive expression \\lstinline{(__int128_t)a * b % c}.\nThis paper shows why it works.\n\nOn a historical note, an earlier version of KACTL included the same algorithm but with the slightly modified expression \\texttt{ull((long double)a * b / c)}. This can be shown precise all the way up to $c < 2^{63}$; however, it is slightly longer and slightly slower in the common case of performing several modular multiplications with the same $c$, due to the division.\n\n\\section{The basic idea}\n\n$a \\cdot b \\pmod{c} = ab - \\lfloor ab/c \\rfloor c$. We can compute the value $ab/c$ approximately using floating point numbers -- in this case 80-bit long doubles, as seen by \\texttt{1.0L} in the code. Letting $R \\approx ab/c$, $S = ab - \\lfloor R \\rfloor c$ will be a number that's congruent to $ab$ modulo $c$, while being relatively close to the desired range $[0, c)$. To get it into the target range we simply add $c$ if the result of the computation is negative, or subtract $c$ if it is greater than or equal to $c$. It is fine for the computations $ab$ and $\\lfloor R \\rfloor c$ to overflow -- arithmetic will be performed mod $2^{64}$ and the residue when converted into $[-2^{63}, 2^{63})$ will be the value we reduce in $[0, c)$.\n\nFor the algorithm to work we will need to prove two things:\n\\begin{enumerate}\n    \\item $S$ is in $[-c, 2c)$\n    \\item $S$ is in $[-2^{63}, 2^{63})$\n\\end{enumerate}\n\nThe second one of these will be where the bound comes from.\n\n\\section{$S$ is in $[-c, 2c)$}\n\nWhen performing a basic arithmetic operation (addition, subtraction, multiplication, division) $\\oplus$ on two long doubles $a$, $b$, the resulting long double will be $r(a \\oplus b)$, where $r(x)$ denotes rounding $x$ to the nearest long double, with ties broken in favor of the one with a trailing zero in the bit representation.\n\n80-bit x86 long doubles are represented with a sign bit, a 15-bit exponent, and a 64-bit mantissa, of which the topmost bit is always 1.\nIt can represent the integers $0, 1, \\dots, 2^{64}$ perfectly, but the next representable integer after that is $2^{64} + 2$.\nThus, for $x$ in $[2^{64}, 2^{65})$, the difference between $x$ and $r(x)$ is at most $1$, and this rescales similarly to other powers of two in the exponent range that we will be working with. 
In particular, we have the inequality $|x - r(x)| \\le x \\cdot 2^{-64}$. By abuse of notation, we will write this as $r(x) = x \\cdot (1 \\pm 2^{-64})$, with $\\pm a$ representing any number in the range $[-a, a]$.\n\nNow, let us consider the expression $S = ab - \\lfloor r(r(r(1/c)a)b) \\rfloor c$, which we want to prove is in the range $[-c, 2c)$. Flooring subtracts less than one, so this would be implied by\n\\[ ab - r(r(r(1/c)a)b) c \\in [-c, c] \\]\nwhich we rewrite as\n\\[ |ab/c - r(r(r(1/c)a)b)| \\le 1. \\]\n\nIf $c \\le 2^{62}$, we have $r(r(r(1/c)a)b) = ab/c\\cdot(1 \\pm 2^{-64})^3$, yielding\n\\begin{align*}\n|ab/c - r(r(r(1/c)a)b)| &\\le (3\\cdot 2^{-64} + 3 \\cdot 2^{-128} + 2^{-192}) \\cdot ab/c \\\\\n                      &< 4\\cdot 2^{-64} \\cdot 2^{62} = 1.\n\\end{align*}\n\nOtherwise:\n\\begin{itemize}\n    \\item $2^{-63} < 1/c < 2^{-62}$, so $r(1/c) = 1/c \\pm 2^{-127}$,\n    \\item $r(1/c)a < 1.001 \\cdot a/c < 2$, so $r(r(1/c)a) = r(1/c)a \\pm 2^{-64}$,\n    \\item $r(r(1/c)a)b < 1.001 \\cdot ab/c < 2^{63}$, so $r(r(r(1/c)a)b) = r(r(1/c)a)b \\pm 2^{-2}$,\n\\end{itemize}\nand hence\n\\begin{align*}\n    r(r(r(1/c)a)b) &= ((1/c \\pm 2^{-127})a \\pm 2^{-64}) b \\pm 2^{-2} \\\\\n                   &= ab/c \\pm 2^{-127} ab \\pm 2^{-64} b \\pm 2^{-2},\n\\end{align*}\nyielding a difference from the exact value of at most\n\\[ 2^{-127} c^2 + 2^{-64} c + 2^{-2} \\le 0.313 + 0.395 + 0.25 = 0.958 < 1 \\]\ngiven $c \\le 0.395 \\cdot 2^{64}$.\n\n\\section{$S$ is in $[-2^{63}, 2^{63})$}\n\nSince $c < 2^{63}$, we get the bound $-2^{63} \\le S$ from the previous one. If $c \\le 2^{62}$, we also get the latter one. However, the case where $c > 2^{62}$ requires some care. Let us proceed by contradiction and assume $S \\ge 2^{63}$. If we manage to use this to deduce $c \\ge X$ we will know contrapositively that the bound holds for $c < X$. Expanding $S$, the assumption we have is that\n\n\\[ ab - \\lfloor r(r(r(1/c)a)b)\\rfloor c \\ge 2^{63} \\]\n\nWe can weaken this assumption by making the floor part smaller. $r(x)$ is a monotonically increasing function and all numbers are non-negative, so in particular we can make subexpressions of it smaller.\n\nIf $r(1/c)a \\ge 1$, then we can successively weaken the inequality:\n\n\\begin{align*}\n2^{63}\n  &\\le ab - \\lfloor r(r(1) \\cdot b)\\rfloor c \\\\\n  &= ab - bc \\\\\n  &\\le bc - bc \\\\\n  &= 0\n\\end{align*}\n\nand get a contradiction. Otherwise, $r(1/c)a < 1$, so $r(r(1/c)a) \\ge r(1/c)a - 2^{-65}$.\n\nAs in the previous section, $2^{62} < c < 2^{63}$ implies $r(1/c) \\ge 1/c - 2^{-127}$.\n\nSince $0 \\le r(r(1/c)a)b < 1.0001 c < 2^{64}$, all integers near it are exactly representable, and applying $r$ can't move it past an integer. Thus, $\\lfloor r(r(r(1/c)a)b) \\rfloor > r(r(1/c)a)b - 1$. Combining these inequalities results in\n\n\\begin{align*}\n2^{63}\n  &\\le ab - \\lfloor r(r(r(1/c)a)b)\\rfloor c \\\\\n  &\\le ab - (((1/c - 2^{-127})a - 2^{-65})b - 1) c \\\\\n  &\\le 2^{-127}abc + 2^{-65}bc + c.\n\\end{align*}\n\nSubstituting $a = 2^{64}x, b = 2^{64}y, c = 2^{64}z$, we can rewrite this as $1/2 \\le 2 xyz + 1/2 \\cdot yz + z$ with $0 \\le x, y \\le z$.\nBy using $0 \\le x, y \\le z$ and solving the equation $1/2 = 2z^3 + 1/2 \\cdot z^2 + z$, we see that this implies $z \\ge 0.351$. This is not quite what we were shooting for, though -- our aim is $0.394$.\n\nTo go above this, we need to improve upon the bound for $\\lfloor r(r(r(1/c)a)b) \\rfloor$. Let $k$ be such that $r(r(1/c)a)b$ is in the range $[2^k, 2^{k+1})$. 
Then if its distance to the next larger integer is less than $2^{k-64}$, it will round upwards before being floored. Hence, flooring can only reduce the value by at most $1 - 2^{k-64}$, so we get the bound $\\lfloor r(r(r(1/c)a)b) \\rfloor \\ge r(r(1/c)a)b - 1 + 2^{k-64}$.\n\nDepending on $a, b, c$ we may end up with different values of $k$. The maximal $k$ is achieved by $a = b = c$, where $k = 62$. For this $k$, we get a similar bound to before, except with $1 - 2^{-2}$ instead of $1$: $1/2 \\le 2 xyz + 1/2 \\cdot yz + 3/4 \\cdot z$. Using $x, y \\le z$ and solving for equality yields $z \\ge 0.3962$, which implies the bound we want.\n\nFor $k = 61$, $r(r(1/c)a)b < 2^{62}$, which implies that $ab/c$ is similarly bounded. Loosely, we get $ab/c < 1.0001 \\cdot r(r(1/c)a)b < 1.0001 \\cdot 2^{62}$, and so\n\\begin{align*}\n2^{63}\n  &\\le 2^{-127}abc + 2^{-65}bc + 7/8 \\cdot c \\\\\n  &= 2^{-127}ab/c\\cdot c^2 + 2^{-65}bc + 7/8 \\cdot c \\\\\n  &< 1.0001 \\cdot 2^{62} \\cdot 2^{-127}\\cdot c^2 + 2^{-65}bc + 7/8 \\cdot c \\\\\n  &\\le 1.0001 \\cdot 2^{-64} c^2 + 7/8 \\cdot c\n\\end{align*}\n\nThis solves to around $z = 0.394$, the bound we want. We will get back to this case for a more careful analysis, without the loose $1.0001$ factor.\n\nFor $k = 60$, the same argument gives $2^{63} \\le 0.7501 \\cdot 2^{-64} c^2 + 15/16 \\cdot c$ which solves to $z = 0.403$, more than enough.\n\nFor $k \\le 59$, we get $2^{63} \\le 0.6251 \\cdot 2^{-64} c^2 + c$, which solves to $z = 0.399$, also enough.\n\nIt remains to analyze the $k = 61$ case in more detail, and show that the bound does hold for all $z$ up to the root of $1/2 = z^2 + 7/8z$. If $a/c > 0.999$, $b$ would need to be small for $ab/c$ to be below $2^{62}$, causing the $2^{-65}bc$ term to shrink enough that we get a (much) better bound than $0.394$ even with the $1.0001$ slop factor. Otherwise, $r(1/c)a < 1$, so $r(1/c)a \\le r(r(1/c)a) + 2^{-65}$. Instead of the loose inequality, we write\n\n\\begin{align*}\nab/c\n  &= (1/c)ab \\\\\n  &\\le (r(1/c) + 2^{-127})ab \\\\\n  &= r(1/c)ab + 2^{-127}ab \\\\\n  &\\le (r(r(1/c)a) + 2^{-65})b + 2^{-127}ab \\\\\n  &= r(r(1/c)a)b + 2^{-65}b + 2^{-127}ab \\\\\n  &< 2^{62} + 2^{-65}b + 2^{-127}ab/c\\cdot c \\\\\n  &\\le 2^{62} + 2^{-65}c + 2^{-127} \\cdot 2^{62} \\cdot 1.0001 \\cdot c \\\\\n  &\\le 2^{62} + 0.395\\cdot 2^{64} \\cdot (2^{-65} + 2^{-65}) \\\\\n  &= 2^{62} + 0.395.\n\\end{align*}\n\nHence,\n\n\\begin{align*}\n2^{63}\n  &\\le 2^{-127}ab/c\\cdot c^2 + 2^{-65}bc + 7/8 \\cdot c \\\\\n  &< (2^{62} + 0.395) \\cdot 2^{-127} \\cdot c^2 + 2^{-65}bc + 7/8 \\cdot c \\\\\n  &\\le (2^{-64} + 0.395 \\cdot 2^{-127}) c^2 + 7/8 \\cdot c.\n\\end{align*}\n\nSolving this gives a value of $c$ that floors to the same integer bound (to be exact, 7268172458553106874) as the same equation without the epsilon term, which can thus be removed. This finishes the proof.
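\n\nAs a final sanity check, the bound is easy to exercise empirically. The following Python sketch (assuming an x86 platform where \\texttt{numpy.longdouble} is the 80-bit type analyzed above) mirrors the C++ code, with exact integers emulating the \\texttt{uint64\\_t} wraparound:\n\n\\begin{lstlisting}[language=Python]\nimport random\nimport numpy as np\n\nLIMIT = 7268172458553106874  # the integer bound derived above\n\ndef mod_mul(a, b, c):\n    # ull(1.L / c * a * b): one rounding per operation\n    q = int(np.longdouble(1) / np.longdouble(c)\n            * np.longdouble(a) * np.longdouble(b))\n    ret = (a * b - c * q) % (1 << 64)  # uint64 wraparound\n    if ret >= (1 << 63):               # reinterpret as signed int64\n        ret -= 1 << 64\n    return ret + c * (ret < 0) - c * (ret >= c)\n\nfor _ in range(10**5):\n    c = random.randrange(1, LIMIT + 1)\n    a, b = random.randrange(c + 1), random.randrange(c + 1)\n    assert mod_mul(a, b, c) == a * b % c\n\\end{lstlisting}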
\n\\end{document}\n", "meta": {"hexsha": "04397c29d95addc882c6048714d7a8b3f11757c9", "size": 9542, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/modmul-proof.tex", "max_stars_repo_name": "sarafanshul/KACTL", "max_stars_repo_head_hexsha": "fa14ed34e93cd32d8625ed3729ba2eee55838340", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2021-01-25T12:07:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-26T17:20:31.000Z", "max_issues_repo_path": "doc/modmul-proof.tex", "max_issues_repo_name": "sarafanshul/KACTL", "max_issues_repo_head_hexsha": "fa14ed34e93cd32d8625ed3729ba2eee55838340", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/modmul-proof.tex", "max_forks_repo_name": "sarafanshul/KACTL", "max_forks_repo_head_hexsha": "fa14ed34e93cd32d8625ed3729ba2eee55838340", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-02-28T11:13:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-20T12:56:20.000Z", "avg_line_length": 58.5398773006, "max_line_length": 740, "alphanum_fraction": 0.6289037938, "num_tokens": 3507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835452961425, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.5559395671557003}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{hyperref}\n\\usepackage[pdftex]{graphicx}\n%\\usepackage{physymb}\n\\usepackage{wrapfig}\n\\usepackage{subcaption}\n\\title{EP 222: Assignment 1}\n\n\\author{Manish Goregaokar (120260006)}\n\\date{July 28, 2013}\n\\begin{document}\n\\maketitle\n\\section{Atwood's Machine}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.3]{AtD}\n\\caption{Diagram for Problem 1}\n\n\\label{fig:atd}\n\\end{figure}\n ~\\\\\n\\begin{wrapfigure}{r}{0.4\\textwidth}\n  \\begin{center}\n    \\includegraphics[width=0.38\\textwidth]{atFBDs}\n  \\end{center}\n  \n  \\caption{\\footnotesize Free body diagrams: (a) For leftmost weight. (b) for middle weight. (c) For rightmost weight. (d) For middle pulley.}\n  \\label{fig:atfbds}\n\\end{wrapfigure}\n\nI have defined the variables in Figure~\\ref{fig:atd}.\n\nNow, assuming that the pulleys are massless, and writing Newton's laws of motion for the central pulley, with FBD in Figure~\\ref{fig:atfbds} (d), we get:\n\n$$2T_1-T_2=0\\times a_2=0$$ $$ \\therefore T_2=2T_1$$\n\nNow, if we assume that each block $m_i$ moves $x_i$ (downward positive) in a given time, the net work done by the strings will be $-(T_1x_1 + T_2x_2 + T_1x_3)$. This must be 0, as strings as a whole cannot do work. Differentiating twice with respect to time,\\\\~\\\\~\\\\ $$0=T_1\\ddot x_1 + T_2\\ddot x_2 + T_1\\ddot x_3$$ $$\\therefore T_1a_1 +2T_1a_2+T_1a_3=0$$ $$\\therefore a_1 +2a_2 + a_3 =0$$\n\nNow, applying Newton's second law on the other three FBDs, we get:\n\n\\begin{align}\nT_1-m_1g &= m_1a_1\\notag\\\\\n2T_1-m_2g &= m_2a_2\\notag\\\\\nT_1-m_3g &= m_3a_3\\notag\n\\end{align}\n\nFirstly, let us substitute $a_i'=a_i+g $\n\nThis reduces our equations to \n\\begin{align}\n4g &= a_1' +2a_2' + a_3'\\notag\\\\\nT_1 &= m_1a_1'\\notag\\\\\n2T_1 &= m_2a_2'\\notag\\\\\nT_1 &= m_3a_3'\\notag\n\\end{align}\n\nSubstituting values for $a_1',a_2',a_3'$ into the first equation, we get:\n\n$$4g=\\frac{T_1}{m_1} + 4\\frac{T_1}{m_2} + \\frac{T_1}{m_3}$$\n\n$$\\therefore T_1=\\frac{4g}{\\frac1{m_1}+\\frac4{m_2}+\\frac1{m_3}}=\\frac{4gm_1m_2m_3}{m_1m_2+m_2m_3 + 4m_1m_3}$$\n\nSubstituting these values for $T_1$ back, we get:\n\n\\fbox{\n \\addtolength{\\linewidth}{-2\\fboxsep}%\n \\addtolength{\\linewidth}{-2\\fboxrule}%\n\\begin{minipage}{\\linewidth}\n\\begin{align}\na_1 &=\\frac{4gm_2m_3}{m_1m_2+m_2m_3 + 4m_1m_3} -g\\notag\\\\\na_2 &=2\\frac{4gm_1m_3}{m_1m_2+m_2m_3 + 4m_1m_3} -g\\notag\\\\\na_3 &=\\frac{4gm_1m_2}{m_1m_2+m_2m_3 + 4m_1m_3} -g\\notag\n\\end{align}\n\\end{minipage}\n}\n\\newpage\n\\section{Block system}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.6]{2D}\n\\caption{Diagram for Problem 2}\n\n\\label{fig:2d}\n\\end{figure}\n\n ~\\\\\n\\begin{wrapfigure}{r}{0.4\\textwidth}\n  \\begin{center}\n    \\includegraphics[width=0.38\\textwidth]{2F}\n  \\end{center}\n  \n  \\caption{\\footnotesize Free body diagrams: (a) For large block $M$ (b) For small block $m$}\n  \\label{fig:2f}\n\\end{wrapfigure}\n\nLet $a_x,a_y$ be the components of acceleration of the small block, and let $A$ be the acceleration of the large block.\n\nLet $x$ denote the extension of the vertical string  and $X$ denote the contraction of the horizontal string where marked.\n\nNow, if the block $M$ moves forward by $X$, the horizontal motion will \"contract\" these portions of the string by $2X$ (net change in length). 
This must be compensated for with the expansion of the vertical portion of the string $x$, so $x=2X$ (as the total length of the string does not change, it is just redistributed). Differentiating, $\\ddot x=2\\ddot X\\implies a_y=2A$.\n\nNow, as the system is in motion, $f=\\mu N$. Analyzing the horizontal direction with Newton's laws in Figure \\ref{fig:2f} (a), we get: $$2T-N=MA$$\n\nAnalyzing Figure \\ref{fig:2f} (b), we get: $$N=mA$$ $$mg-f-T=ma_y\\implies mg-\\mu N -T =ma_y$$\n\n\nSubstituting $N=mA$ and $a_y=2A$, we get:\n\n$$2T=(m+M)A$$\n\n$$mg-\\mu mA -T=2mA$$\n\nSolving, we get $$A=\\frac{2g}{5+2\\mu+\\frac Mm}$$\n\nThis implies  $$a_y=\\frac{4g}{5+2\\mu+\\frac Mm}$$\n\nSubstituting $\\frac Mm=10, \\mu=0.5$, we get:\n\n$$A=a_x=\\frac18 g, \\quad a_y=\\frac14 g$$\n\nThus the vector acceleration of $M$ is \\fbox{$\\frac18g \\hat i $}, and the acceleration of $m$ is \\fbox{$\\frac18g(\\hat i -2\\hat j) $}\n\nAs the initial downward velocity of $m$ is 0, the time taken to cover a distance of $d=2.45 ~\\mathrm m$ is $\\sqrt{\\frac{2d}{a_y}}$ (from $s=ut+\\frac12 at^2$), which is $\\sqrt{\\frac{4.9}{9.8\\times \\frac{1}{4}}} = \\sqrt{2}~\\mathrm s$\n\nThus the block takes \\fbox{$1.414$ seconds} to reach the bottom.\n\\newpage\n\\section{Rod on belt}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.6]{3D}\n\\caption{Diagram for Problem 3}\n\n\\label{fig:3d}\n\\end{figure}\n ~\\\\\n\\begin{wrapfigure}{r}{0.3\\textwidth}\n  \\begin{center}\n    \\includegraphics[width=0.38\\textwidth]{3D2}\n  \\end{center}\n  \n  \\caption{\\footnotesize Close up of small element of rope}\n  \\label{fig:3D2}\n\\end{wrapfigure}\n\nLet us analyze a small portion of the belt, as tension on that belt is changing. We assume that friction acts to the right.\n\nNow, the horizontal component of tension is $(T+dT-T)\\cos{\\frac12 d\\theta}=dT$. The vertical component is $(T+T+dT)\\sin{\\frac12 d\\theta}=Td\\theta$.\n\nBalancing forces, we get $N=Td\\theta$, and $f=dT$. Now, $f\\leq \\mu N \\implies dT\\leq \\mu Td\\theta$.\n\nIntegrating, $\\int\\limits_{T_0}^T \\frac{dT}{T}=\\int\\limits_0^\\theta \\mu d\\theta$. From here, we get that \n\n$T(\\theta)\\leq T_0e^{\\mu\\theta}\\implies T_{wall}\\leq T_0e^{\\mu\\theta_0}$\n\nIf we assume that friction acts to the left, we get $T_{wall}\\geq T_0e^{-\\mu\\theta_0}$\n\nNow, the macroscopic variable $T_{wall}$ depends on the microscopic distribution of elongation in the string, which in turn depends on the process used to create this system. This is an unknown, which gives us a range (instead of an absolute answer) for $T_{wall}$. However, Newton's laws allow all values of $T_{wall}$ between $T_0e^{-\\mu\\theta_0}$ and $T_0e^{\\mu\\theta_0}$. Thus, we can claim that a state can be constructed where \\fbox{$T_{wall}=T_0e^{\\mu\\theta_0}$}, and this is the maximum tension possible.\n\n\\section{Leaning ladder}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.6]{4D}\n\\caption{Diagram for Problem 4}\n\n\\label{fig:4d}\n\\end{figure}\n\nFirstly, let us assume that the ladder has a length $2L$ and mass $m$. \n\nNow, note that the smallest angle in equilibrium will be  when the static friction attains its maximum value ($\\mu N$). So, $f=\\mu N$.\n\nNow, first writing translational equilibrium equations:\n\n$$mg=N$$\n$$\\mu N=f = N_w$$\n\nThe net torque on the ladder is $\\tau=N_wL\\sin\\theta+fL\\sin\\theta - NL\\cos\\theta$. The ladder is in rotational equilibrium, so $\\tau=0$. So, $$N_w L\\sin\\theta +fL\\sin\\theta =NL\\cos\\theta$$\n\nMaking substitutions from previously derived equations, we get $$2\\mu NL\\sin\\theta=NL\\cos\\theta$$\n\nor $$\\boxed{\\theta=\\cot^{-1}2\\mu}$$
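\n\nA quick numerical check of this result (a short Python sketch; the value $\\mu=0.4$ is arbitrary):\n\n\\begin{verbatim}\nimport math\n\nmu = 0.4                            # arbitrary friction coefficient\ntheta = math.atan2(1.0, 2.0 * mu)   # theta = arccot(2 mu)\n# torque balance per (N L): 2 mu sin(theta) - cos(theta) = 0\nprint(abs(2.0 * mu * math.sin(theta) - math.cos(theta)) < 1e-12)  # True\n\\end{verbatim}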
\n\n\\section{Cylinders}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.4]{5D}\n\\caption{Diagram for Problem 5}\n\n\\label{fig:5d}\n\\end{figure}\n \n\n\nHere, we assume that there is friction between the small cylinders and the large one, as otherwise the system would not stay in rotational equilibrium. This is easily verified, as the only forces providing torque to the smaller cylinders are the friction from the ground and the friction between them and the larger cylinder.\\\\  As the ground friction is nonzero, there must be friction between the smaller cylinders and the larger one, otherwise the system does not stay at rest. Note that the friction between the two small cylinders must be zero due to the symmetry of the system.\\\\\n\nThe free body diagrams have been labelled taking into account the symmetry of the system. Figure \\ref{fig:5d} indicates the contact forces at various points (direction is not shown as it depends on which body we are considering; refer to Figure \\ref{fig:5f} for details on the direction of the forces).\n\\begin{wrapfigure}{l}{0.5\\textwidth}\n  \\begin{center}\n    \\includegraphics[width=0.38\\textwidth]{5F}\n  \\end{center}\n  \n  \\caption{\\footnotesize Free body diagrams: (a) For small cylinders $(m,R)$ (b) For large cylinder $(4m,2R)$}\n  \\label{fig:5f}\n\\end{wrapfigure} \nNow, we first write the equation for rotational equilibrium for the small cylinders, $\\tau=rf_g-rf=0 \\implies f_g=f$. We can thus conveniently replace all $f_g$ with $f$.\n\nWriting the equations for translational equilibrium of the small cylinders, we get\n\n\\newcommand{\\ccc}{\\cos\\theta}\n\\newcommand{\\sss}{\\sin\\theta}\n\n$$mg + N\\ccc +f\\sss = N_g$$\n\n$$N\\sss =f+f\\ccc$$\n\nWriting equilibrium equations in the vertical direction for the large cylinder:\n\n$$2N\\ccc+2f\\sss=4mg\\implies N\\ccc +f\\sss=2mg$$\n\nNote that $\\theta$ is known, with $\\sss=\\frac13, \\ccc=\\frac{\\sqrt8}3$. Substituting into $N\\sss=f(1+\\ccc)$:\n\n$$\\therefore N=f(3+\\sqrt8)$$\n$$N\\sqrt 8 +f=6mg$$\n$$\\therefore f(3+\\sqrt 8)\\sqrt 8 + f =6mg\\implies f(9+3\\sqrt 8)=6mg\\implies f=\\frac{2mg}{3+\\sqrt 8}=2mg(3-2\\sqrt 2)$$\n\n$$\\therefore N=f(3+\\sqrt 8)=2mg$$\n\nAs $$N_g=mg + N\\ccc +f\\sss $$ and the large-cylinder equation gives $N\\ccc +f\\sss=2mg$, we immediately get\n\\begin{align}\nN_g &= mg + 2mg = 3mg\\notag\n\\end{align}\n(as expected: the total weight $4mg+2mg=6mg$ is shared equally by the two ground contacts).\n\n$$\\therefore \\frac{f_g}{N_g}=\\frac{2mg(3-2\\sqrt 2)}{3mg}=\\frac{2(3-2\\sqrt 2)}{3}$$\n\nNow, $\\mu N_g\\geq f_g$. The minimum $\\mu$ corresponds to equality, so \\fbox{$\\mu=\\frac{f_g}{N_g}= \\frac{2(3-2\\sqrt 2)}{3}\\approx 0.114$}
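\n\nThese values are easy to verify numerically (a short Python check, taking $m = g = 1$):\n\n\\begin{verbatim}\nimport math\n\ns, c = 1/3, math.sqrt(8)/3           # sin(theta), cos(theta)\nf = 2*(3 - 2*math.sqrt(2))           # friction force, in units of m g\nN = f*(3 + math.sqrt(8))             # = 2\nNg = 1 + N*c + f*s                   # = 3\nprint(abs(N*s - f*(1 + c)) < 1e-12,  # small-cylinder horizontal balance\n      abs(N*c + f*s - 2) < 1e-12,    # large-cylinder vertical balance\n      abs(Ng - 3) < 1e-12,\n      f/Ng)                          # minimum mu, approximately 0.114\n\\end{verbatim}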
\n\n\\section{Mass on wedge}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.4]{6D}\n\\caption{Diagram for Problem 6}\n\n\\label{fig:6d}\n\\end{figure}\n\nHere, I am assuming that the values of final velocity are required just before the mass $m$ reaches the table (as to calculate this value at the exact time will require knowledge of the nature of the collision). For the purposes of this problem I shall take the symbols $v_w,v_x,v_y$ to mean the velocities at the final stage. Initial velocities are zero.\n\nNow, we can conserve momentum in the horizontal direction here and overall energy as well. Conserving momentum in the horizontal direction, we get $mv_x=Mv_w \\implies v_x=\\frac Mm v_w$.\n\nConserving energy, $\\frac12(Mv_w^2+mv_x^2+mv_y^2)=mgh$\n\nFinally, in the wedge frame, the mass must move along the wedge surface. Thus, $\\frac{v_y}{v_x+v_w}=\\tan\\theta \\implies v_y=\\tan\\theta (v_x+v_w)$\n\nSubstituting the values for $v_x$ and $v_y$ in the energy conservation equation, we get \n\n$$2mgh= Mv_w^2+ m\\left(\\frac{M}m v_w\\right)^2+m\\tan^2\\theta (v_x+v_w)^2=Mv_w^2+ m\\left(\\frac{M}m v_w\\right)^2+m\\tan^2\\theta v_w^2 \\left(1+\\frac{M}{m}\\right)^2$$\n\n\n$$\\therefore v_w^2 = \\frac{2mgh}{M+\\frac{M^2}{m} + m\\tan^2\\theta \\left(1+\\frac{M}m\\right)^2}$$\n\n$$\\therefore \\boxed{v_w=\\sqrt{\\frac{2mgh}{M+\\frac{M^2}{m} + m\\tan^2\\theta \\left(1+\\frac{M}m\\right)^2}}}$$\n\n\\end{document}", "meta": {"hexsha": "ff398c6b8eca85492cd0c41c5f689c6943515a4b", "size": 10713, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Course material/EP 222 - Classical Mechanics/Assignment 1/assign1.tex", "max_stars_repo_name": "CourseResources/CourseResources", "max_stars_repo_head_hexsha": "4040bfe499609389d1978823e4e2896bf4ce41e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-05-28T05:59:31.000Z", "max_stars_repo_stars_event_max_datetime": "2015-05-28T05:59:31.000Z", "max_issues_repo_path": "Course material/EP 222 - Classical Mechanics/Assignment 1/assign1.tex", "max_issues_repo_name": "CourseResources/CourseResources", "max_issues_repo_head_hexsha": "4040bfe499609389d1978823e4e2896bf4ce41e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Course material/EP 222 - Classical Mechanics/Assignment 1/assign1.tex", "max_forks_repo_name": "CourseResources/CourseResources", "max_forks_repo_head_hexsha": "4040bfe499609389d1978823e4e2896bf4ce41e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.5313653137, "max_line_length": 584, "alphanum_fraction": 0.7074582283, "num_tokens": 3868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764746, "lm_q2_score": 0.8354835350552604, "lm_q1q2_score": 0.5559395547977342}}
{"text": "\\documentclass[12pt,notitlepage]{article}\n\n\\usepackage{hyperref}\n\n\\title{Project Euler: Problem 2\n\\\\\\small{\\url{https://projecteuler.net/problem=1}}\\\\\n\\small{\\url{https://github.com/oltdaniel/euler}}}\n\n\\begin{document}\n  \\maketitle\n\n  \\begin{center}\n    \\textit{\"Each new term in the Fibonocci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:\\\\1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...\\\\By considering the terms in the Fibonocci sequence whose values do not exceed four million, find the sum of the even-valued terms.\"}\n  \\end{center}\n\n  The general idea to solve this, is starting an endless loop, that will break if the current fibonacii number exceeds\n  $4,000,000$. During each iteration the numbers before will be added and stored, as well as moving the last Fibonocci\n  number to the memory for the next iteration. THe current Fibonoccie will be then tested wether they are even or not\n  and added to the sum if they are.\\par\n\n  As we already know, that only even fibonacii numbers will be added to the sum, this routine can be optimized by using\n  some existing formulas.\n\n  \\subsection{Basic Idea}\n  The basic idea is to start an endless loop, that calculates the next Fibonocci number, by adding the two previous\n  ones. Followed by that, the new number, and the last one need to be stored for the next operation. After the current\n  Fibonocci number has been proven to be even, it can be added to the final sum.\\par\n  \\textbf{Complexity: $O(\\log (\\sqrt{5} * n) \\div \\log (\\frac{1 + \\sqrt{5}}{2}))$}\n\n  \\subsection{Memory Optimization}\n  As the above solution requires the use of many memory operations, by moving the numbers around, this step\n  is something that can be optimized with the \\textit{Binet's Formula}. This allows us to calculate the $n$-th\n  Fibonocci number without moving any values in the memory nor remembering any values calculated before. The formula\n  is defined as follows:\n\n  $$F_{n} = \\frac{\\varphi^{n} - \\psi^{n}}{\\sqrt{5}} = \\frac{\n    \\frac{\n      (1 + \\sqrt{5})\n    }\n    {2}^{n} -\n    \\frac{\n      (1 - \\sqrt{5})\n    }\n    {2}^{n}\n  }{\n    \\sqrt{5}\n  }$$\\par\n\n  \\textbf{Complexity: $O(\\log (\\sqrt{5} * n) \\div \\log (\\frac{1 + \\sqrt{5}}{2}))$}\n\n  \\subsection{Reduce Calculation}\n  Based on the optimizations made above, we can now simplify the sequence of numers we will add in a loop, into\n  a geometric series. This will allow us to calculate the sum with the following formula:\n\n  $$S(n) = (F_{n + 2} - 1) \\div 2$$\n\n  However, as we only want even numbers we need to extend this. We know, that a Fibonocci number is only even, if the\n  index of it is a multiple of $3$. 
We also know that $n$ is the number of Fibonacci numbers under the set limit $4,000,000$.\n  In order to calculate $n$, so that we can find $s$ via $S(n \\div 3)$, we can use the following formula based on \\textit{Binet's Formula}, where $L$ denotes the limit ($4,000,000$ here):\n\n  $$n = \\left\\lfloor \\log(\\sqrt{5} \\, L) / \\log(\\frac{1 + \\sqrt{5}}{2}) \\right\\rfloor$$\n\n  It follows that the sum of all even Fibonacci numbers under $4,000,000$ is $s = S(n / 3)$.\n\n  \\textbf{Complexity: $O(1)$}\n\\end{document}\n", "meta": {"hexsha": "531d087f5c2e95e5438da87fbbd86f66e1b95c89", "size": 3110, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/.sources/2.tex", "max_stars_repo_name": "oltdaniel/euler", "max_stars_repo_head_hexsha": "27c9ee81d75e0708365bd51832ffa58c9b37e310", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-09-18T02:40:05.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-18T02:40:05.000Z", "max_issues_repo_path": "notes/.sources/2.tex", "max_issues_repo_name": "oltdaniel/euler", "max_issues_repo_head_hexsha": "27c9ee81d75e0708365bd51832ffa58c9b37e310", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/.sources/2.tex", "max_forks_repo_name": "oltdaniel/euler", "max_forks_repo_head_hexsha": "27c9ee81d75e0708365bd51832ffa58c9b37e310", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.4179104478, "max_line_length": 327, "alphanum_fraction": 0.6980707395, "num_tokens": 944, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802735722128, "lm_q2_score": 0.8311430562234877, "lm_q1q2_score": 0.5559351948244116}}
{"text": "\\vsssub\n\\subsubsection{~$S_{db}$: Battjes and  Janssen 1978} \\label{sec:DB1}\n\\vsssub\n\n\\opthead{DB1 / MLIM}{Pre-WAM}{J. H. G. M. Alves}\n\n\\noindent\nThe implementation in \\ws\\ of depth-induced breaking algorithms is intended to\nextend the applicability of the model to within shallow water environments,\nwhere wave breaking, among other depth-induced transformation processes,\nbecomes important.\n\nFor this reason the approach of \\citet[][henceforth denoted as\nBJ78]{pro:BJ78}, which is based on the assumption that all waves in a random\nfield exceeding a threshold height, defined as a function of bottom topography\nparameters, will break. For a random wave field, the fraction of waves\nsatisfying this criterion is determined by a statistical description of\nsurf-zone wave heights (i.e., a Rayleigh-type distribution, truncated at a\ndepth-dependent wave-height maximum).\n\nThe bulk rate $\\delta$ of spectral energy density dissipation of the fraction\nof breaking waves, as proposed by BJ78, is estimated using an analogy with\ndissipation in turbulent bores as\n\n%-------------------------------%\n% Battjes Janssen surf breaking %\n%-------------------------------%\n% eq:BJ78_base\n\n\\begin{equation}\n\\delta = 0.25 \\: Q_b \\: f_m \\: H_{\\max}^2  , \\label{eq:BJ78_base}\n\\end{equation}\n\n\\noindent\nwhere $Q_b$ is the fraction of breaking waves in the random field, $f_m$ is\nthe mean frequency and $H_{\\max}$ is the maximum individual height a component\nin the random wave field can reach without breaking (conversely, above which\nall waves would break). In BJ78 the maximum wave height $H_{\\max}$ is defined\nusing a Miche-type criterion \\citep{art:Miche44},\n\n% eq:BJ78_Miche\n\n\\begin{equation}\n\\bar{k} H_{\\max} = \\gamma_M \\tanh ( \\bar{k} d )\n , \\label{eq:BJ78_Miche}\n\\end{equation}\n\n\\noindent\nwhere $\\gamma_M$ is a constant factor. This approach also removes energy in\ndeep-water waves exceeding a limiting steepness. This can potentially result\nin double counting of dissipation in deep-water waves. Alternatively,\n$H_{\\max}$ can be defined using a McCowan-type criterion, which consists of\nsimple constant ratio\n\n% eq:BJ78_McC\n\n\\begin{equation}\nH_{\\max} = \\gamma \\: d  , \\label{eq:BJ78_McC}\n\\end{equation}\n\n\\noindent\nwhere $d$ is the local water depth and $\\gamma$ is a constant derived from\nfield and laboratory observation of breaking waves. This approach will\nexclusively represent depth-induced breaking.  Although more general breaking\ncriteria for $H_{\\max}$ as a simple function of local depth exist\n\\citep[e.g.,][]{art:TG83}, it should be noted that the coefficient $\\gamma$\nrefers to the maximum height of an individual breaking wave within the random\nfield. \\cite{art:M1894} calculated the limiting wave-height-to-depth ratio for\na solitary wave propagating on a flat bottom to be 0.78, which is still used\npresently as a conservative criteria in engineering applications. The average\nvalue found by \\cite{pro:BJ78} was $\\gamma = 0.73$. 
\n\nWith the assumption that the total spectral energy dissipation\n$\\delta$ is distributed over the entire spectrum so that it does not change\nthe spectral shape \\citep{art:EB96}, the following depth-induced breaking\ndissipation source function is obtained\n\n% eq:BJ78\n\n\\begin{equation}\n\\cS_{db} (k,\\theta) = - \\alpha \\frac{\\delta}{E} F(k,\\theta)\n       = - 0.25 \\: \\alpha \\: Q_b \\: f_m \\frac{H_{\\max}^2}{E} F(k,\\theta)\n , \\label{eq:BJ78}\n\\end{equation}\n\n\\noindent\nwhere $E$ is the total spectral energy, and $\\alpha = 1.0$ is a tunable\nparameter. The user can select between Eqs.~(\\ref{eq:BJ78_Miche}) and\n(\\ref{eq:BJ78_McC}), and adjust $\\gamma$ and $\\alpha$. Defaults are\nEq.~(\\ref{eq:BJ78_McC}), $\\gamma = 0.73$ and $\\alpha = 1.0$.\n", "meta": {"hexsha": "e99399be2a78126d1dcf575b40c374653954af2d", "size": 4305, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "WW3/manual/eqs/DB1.tex", "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "WW3/manual/eqs/DB1.tex", "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_forks_repo_path": "WW3/manual/eqs/DB1.tex", "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": ["Apache-2.0", "CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "avg_line_length": 40.2336448598, "max_line_length": 78, "alphanum_fraction": 0.7386759582, "num_tokens": 1226, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430562234877, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.5559351893383669}}
{"text": "%% SECTION HEADER /////////////////////////////////////////////////////////////////////////////////////\n\\section{Temperature Effect on the GW Propagation}\n\\label{sec:temp}\n\n%% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////\n\n%% SUBSECTION HEADER //////////////////////////////////////////////////////////////////////////////////\nIn order to carry out the temperature dependent SE simulation and semi-analytical analysis of Lamb wave propagation in the \\acp{hsc}, the elastic modulus of \\acp{hsc} layers are calculated as per the methodology described in \\cite{chamis1983simplified,salamone2009guided}. The calculation is done for a range of temperatures (+50\\(^{\\circ}\\)C to -50\\(^{\\circ}\\)C) generally occurred in practical operating scenarios. The obtained temperature-dependent elastic properties (E11 = E22, E33, G12, G13 = G23) for composite laminate and adhesive (E, G) are presented in Figure 7 and Figure 8, respectively.\n\nIn this model \\cite{salamone2009guided}, reduction of Young\u2019s modulus of the resin \\(E_m\\) with variation in temperature is assumed as:\n\\begin{eqnarray}\n\tE_m(T)=F_m E_{rm},\n\t\\label{eq:factor_temp}\n\\end{eqnarray}\nwhere \\(E_{rm}\\) is the Young\u2019s modulus of resin at the reference temperature and \\(F_m\\) is the temperature degradation factor as proposed in \\cite{chamis1983simplified}:\n\\begin{eqnarray}\nF_m=\\sqrt{\\frac{T_{g0}-T}{T_{g0}-T_r}},\n\\label{eq:em_temp}\n\\end{eqnarray}\nwhere \\(T_{g0}\\) is the glass transition temperature and \\(T_r\\) is the reference temperature. In the study, the value of \\(T_{g0}=215^{\\circ}\\)C and \\(T_r=20^{\\circ}\\)C is selected from Table 2 in \\cite{chamis1983simplified}. \n", "meta": {"hexsha": "358d279ff4c21d536d8845c422dd04479f432eb4", "size": 1687, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:temp.tex", "max_stars_repo_name": "pfiborek/model_hc", "max_stars_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:temp.tex", "max_issues_repo_name": "pfiborek/model_hc", "max_issues_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/proposal/Dissertation/Chapters/Chapter3/sec:temp.tex", "max_forks_repo_name": "pfiborek/model_hc", "max_forks_repo_head_hexsha": "e27cf98e5d8a5a8ad2ba0c5e70520c281dfd384c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.3333333333, "max_line_length": 600, "alphanum_fraction": 0.6437462952, "num_tokens": 411, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.6688802735722128, "lm_q1q2_score": 0.5559351892291137}}
{"text": "% Copyright and Usage Information\n% ===============================\n\n% This file is provided solely for the personal and private use of students\n% taking CSC110 at the University of Toronto St. George campus. All forms of\n% distribution of this code, whether as given or with any changes, are\n% expressly prohibited. For more information on copyright for CSC110 materials,\n% please consult our Course Syllabus.\n\n% This file is Copyright (c) 2021 Mario Badr and Tom Fairgrieve.\n\\documentclass{article}\n\n\\setlength{\\parindent}{0pt}\n\\setlength{\\parskip}{5pt}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\n\\usepackage[margin=1in]{geometry}\n\n\\title{CSC110 Fall 2021: Term Test 2\\\\\n       Question 2 (Analyzing Algorithm Running Time)}\n\\author{TODO: INSERT YOUR NAME HERE}\n\\date{Wednesday December 8, 2021}\n\n% Some useful LaTeX commands. You are free to use these or not, and also add your own.\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\cO}{\\mathcal{O}}\n\\newcommand{\\floor}[1]{\\left\\lfloor #1 \\right\\rfloor}\n\\newcommand{\\ceil}[1]{\\left\\lceil #1 \\right\\rceil}\n\\newcommand{\\C}{\\texttt}\n\n\\begin{document}\n\\maketitle\n\n\\subsection*{Question 2, Part 1}\n\n\\noindent\nWe define the function $g: \\N \\to \\R^{\\geq 0}$ as $g(n) = 7n(n-1)^2$.\nConsider the following statement:\n\n\\[\ng(n) \\in \\cO(n^4)\n\\]\n\n\\begin{enumerate}\n\n\\item[(a)]\nRewrite the statement $g(n) \\in \\cO(n^4)$ by expanding the definition of Big-O.\n\n\\bigskip\n\n\\textbf{Solution}:\n\n$\\exists c, n_0 \\in \\R^+ \\text{ s.t. } \\forall n \\in \\N, n \\ge n_0 \\Rightarrow 7n(n-1)^2 \\le c \\cdot n^4$\n\n\\item[(b)]\nWrite the \\emph{negation} of the statement from (a), using negation rules to simplify the statement as much as possible.\n\n\\bigskip\n\n\\textbf{Solution}:\n\n$\\forall c, n_0 \\in \\R^+, \\exists n \\in \\N \\text{ s.t. } n \\ge n_0 \\land 7n(n-1)^2 > c \\cdot n^4$\n\n\\item[(c)]\nWhich of statements (a) and (b) is true? Provide a complete proof that justifies your choice.\n\nIn your proof, you may not use any properties or theorems about Big-O/Omega/Theta.  Work from the expanded statement from (a) or (b).\n\n\\bigskip\n\n\\textbf{Solution}:\n\nI think statement (a) is true.\n\n\\begin{proof}\nWant to show: $\\exists c, n_0 \\in \\R^+ \\text{ s.t. } \\forall n \\in \\N, n \\ge n_0 \\Rightarrow 7n(n-1)^2 \\le c \\cdot n^4$ \\\\\nProve using Induction. 
\n\n\\subsection*{Question 2, Part 2}\n\n\\noindent\nConsider the function below.\n\n\\begin{verbatim}\ndef f(nums: list[int]) -> list[int]:           # Line 1\n    n = len(nums)                              # Line 2\n    i = 1                                      # Line 3\n    new_list = []                              # Line 4\n    while i < n:                               # Line 5\n        if nums[i] % 2 == 0:                   # Line 6\n            list.append(new_list, i)           # Line 7\n        else:                                  # Line 8\n            new_list = [i * j for j in nums]   # Line 9\n        i = i * 3                              # Line 10\n    return new_list                            # Line 11\n\\end{verbatim}\n\n\\begin{enumerate}\n\n\\item[(a)]\nPerform an \\emph{upper bound analysis} on the worst-case running time of \\texttt{f}.\nThe Big-O expression that you conclude should be \\emph{tight}, meaning that the worst-case running time should be Theta of this expression, but you are not required to show that here.\n\n\\textbf{To simplify your analysis}, you may omit all floors and ceilings in your calculations (if applicable).\nUse ``at most'' or $\\leq$ to be explicit about where a step count expression is an upper bound.\n\n\\textbf{Solution}:\n\nLet $n$ be the length of the input list \\C{nums}.\n\nThere is one loop in the function, which loops through \\C{nums} with $i$ increasing exponentially, and so runs $\\ceil{\\log_3(n)}$ times. Inside the loop, if the number is even, it takes $\\cO(1)$ to append the item at the end of \\C{new\\_list}. If the number is odd, it sets \\C{new\\_list} to a list comprehension which iterates through all numbers in \\C{nums}, performing an $\\cO(1)$ multiplication every iteration; this takes exactly $n$ steps, a larger running time than in the even case. Therefore, the inside of the loop will take at most $n$ steps, with the maximum attained if all numbers \\C{nums[i]} iterated are odd.\n\nSince there are only constant-time operations outside the loop, the worst-case running time would be $\\ceil{\\log_3(n)}$ iterations multiplied by at most $n$ steps per iteration, which is $n\\ceil{\\log_3(n)}$ steps. \n\nSince $n\\ceil{\\log_3(n)} \\in \\cO(n\\ceil{\\log_3(n)})$, we can conclude that $WC_{f}(n) \\in \\cO(n\\ceil{\\log_3(n)})$\n\n\\item[(b)]\nPerform a \\emph{lower bound analysis} on the worst-case running time of \\texttt{f}.\nThe Omega expression you find should match your Big-O expression from part (a).\n\n\\textbf{Hint}: you don't need to try to find an ``exact maximum running-time'' input. 
\\emph{Any} input family whose running time is Omega of (``at least'') the bound you found in part (a) will yield a correct analysis for this part.\n\n\\textbf{Solution}:\n\nLet $n$ be the length of the input list \\C{nums}, and let \\C{nums} be the list of length $n$ in which every number is 1.\n\nIn this case, the if statement inside the loop always runs line 9, which takes $n$ steps, followed by the $i = i * 3$ statement, which is 1 step, for a total of $n + 1$ steps per iteration. The loop still iterates $\\ceil{\\log_3(n)}$ times. Since there are only constant-time operations outside the loop, the total number of steps for this input is $(n + 1)\\ceil{\\log_3(n)} + c$, where $c \\in \\N$ is a constant, and so $WC_{f}(n) \\in \\Omega(n\\ceil{\\log_3(n)})$\n\n\\end{enumerate}\n\n\\begin{center}\n    \\textbf{SUBMIT THIS FILE AND THE GENERATED PDF q2.pdf FOR GRADING}\n\\end{center}\n\\end{document}\n", "meta": {"hexsha": "07a8dbf5bfc3fb5b9ff74f4a68693a50728e43dd", "size": 6448, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TT2/q2.tex", "max_stars_repo_name": "hykilpikonna/CSC110", "max_stars_repo_head_hexsha": "12a4f9361e0c79fe03cafa3c283eb96706359f46", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TT2/q2.tex", "max_issues_repo_name": "hykilpikonna/CSC110", "max_issues_repo_head_hexsha": "12a4f9361e0c79fe03cafa3c283eb96706359f46", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TT2/q2.tex", "max_forks_repo_name": "hykilpikonna/CSC110", "max_forks_repo_head_hexsha": "12a4f9361e0c79fe03cafa3c283eb96706359f46", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.5534591195, "max_line_length": 612, "alphanum_fraction": 0.6577233251, "num_tokens": 1984, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802735722127, "lm_q2_score": 0.8311430436757313, "lm_q1q2_score": 0.5559351864314647}}
{"text": "\\chapter{First Examples}\nWe will now look at a simple example to demonstrate the basics of how Bayesian\nstatistics works. We start with some probabilities at the beginning of the problem (these are\ncalled {\\it prior probabilities}), and how exactly these get updated\nwhen we get more information (these updated probabilities are called\n{\\it posterior probabilities}). To help make things more clear, we will\nuse a table that we will call a {\\it Bayes' Box} to help us calculate the\nposterior probabilities easily.\n\nSuppose there are two balls in a bag. We know in advance\nthat at least one of them is black, but we're not sure whether they're both\nblack, or whether one is black and one is white. These are the only two\npossibilities we will consider.\nTo keep things concise, we\ncan label our two competing hypotheses. We could call them whatever we want,\nbut I will call them \\bb~and \\bw. So, at the beginning of the problem, we know\nthat {\\it one and only one} of the following statements/hypotheses is true:\\\\\n\\begin{framed}\n\\bb: Both balls are black\\\\\n\\bw: One ball is black and the other is white.\n\\end{framed}\nSuppose an experiment is performed to help us determine\nwhich of these two hypotheses is\ntrue. The experimenter reaches into the bag, pulls out one of the balls, and\nobserves its colour. The result of this experiment is (drumroll please!):\n\\begin{framed}\n$D$: The ball that was removed from the bag was black.\n\\end{framed}\nWe will now do a Bayesian analysis of this result.\n\n\\section{The Bayes' Box}\nA Bayesian analysis starts by choosing some values for the prior probabilities.\nWe have our two competing hypotheses \\bb~and \\bw, and we need to choose some\nprobability values to describe how sure we are that each of these is true.\nSince we are talking about two hypotheses, there will be two prior probabilities,\none for \\bb~and one for \\bw.\nFor simplicity, we will assume that we don't have much of an idea which is true,\nand so we will use the following prior probabilities:\n\\begin{eqnarray}\nP(\\bb) &=& 0.5\\\\\nP(\\bw) &=& 0.5.\n\\end{eqnarray}\nPay attention to the notation. The upper case $P$ stands for probability, and if we just\nwrite $P(\\texttt{whatever})$, that means we are talking about the\nprior probability of {\\tt whatever}. We will see the notation for the posterior probability\nshortly. Note also that since the two hypotheses are mutually exclusive\n(they can't both be true) and exhaustive (one of these is true, it can't be\nsome undefined third option).\nWe will almost always consider mutually exclusive and exhaustive hypotheses in\nthis course\\footnote{If this does not appear to be true in a particular problem,\nit is usually possible to redefine the various hypotheses into a set that of\nhypotheses that {\\it are}\nmutually exclusive and exhaustive.}.\n\nThe choice of 0.5 for the two prior probabilities describes the fact that,\nbefore we did the experiment, we were very uncertain about which of the two\nhypotheses was true.\nI will now present a {\\it Bayes' Box}, which lists all the hypotheses (in this\ncase\ntwo) that might be true, and the prior probabilities. 
There are some extra\ncolumns which we haven't discussed yet, and will be needed in order to\nfigure out the posterior probabilities in the final column.\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n{\\bf Hypotheses} & {\\tt prior} & {\\tt likelihood} &\n{\\tt prior $\\times$ likelihood} & {\\tt posterior}\\\\\n\\hline\n\\bb & 0.5 &   &  & \\\\\n\\bw & 0.5 &   &  & \\\\\n\\hline\nTotals: & 1 & & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nThe first column of a Bayes' Box is just the list of hypotheses we are\nconsidering. In this case there are just two. If you need to construct a Bayes' box for a new problem, just think\nabout what the possible answers to the problem are, and list them in the first\ncolumn. The second column lists the prior probabilities for each of the\nhypotheses.\nAbove, before we did the experiment, we decided to say that there was a 50\\%\nprobability that \\bb~is true and a 50\\% probability that \\bw~is true, hence the 0.5 values in this column.\nThe prior column should always sum to 1. Remember, the prior probabilities\nonly describe our initial uncertainty, before taking the data into account. Hopefully the data will help by changing these probabilities to\nsomething a bit more decisive.\n\n\\subsection{Likelihood}\nThe third column is called {\\it likelihood}, and this is a really important\ncolumn where the action happens. The likelihood is a quantity\nthat will be used for calculating the posterior\nprobabilities.\nIn colloquial language, likelihood is synonymous with\nprobability. It means the same thing. However, in statistics, likelihood is a\nvery\nspecific kind of probability. To fill in the third column of the Bayes' Box,\nwe need to calculate two likelihoods, so you can tell from this that the\nlikelihood is something different for each hypothesis. But what is it\nexactly?\n\\begin{framed}\n{\\bf The likelihood for a hypothesis is the probability that you would have\nobserved the data, if that hypothesis were true. The values can be found by\ngoing through each hypothesis in turn, imagining it is true, and asking,\n``What is the probability of getting the data that I observed?''.}\n\\end{framed}\n\nHere is the Bayes' Box with the likelihood column filled in. I will explain\nhow these numbers were calculated in a bit more detail in the next subsection.\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n{\\bf Hypotheses} & {\\tt prior} & {\\tt likelihood} &\n{\\tt h = prior $\\times$ likelihood} & {\\tt posterior}\\\\\n\\hline\n{\\tt BB} & 0.5 & 1 &  & \\\\\n{\\tt BW} & 0.5 & 0.5 &  & \\\\\n\\hline\nTotals: & 1 & & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nIf you have taken STATS 210 and used the maximum likelihood method, where you\nfind the value of a parameter that maximises the likelihood function, that is\nthe same as the likelihood we use in this course! So you have a head start\nin understanding this concept.\n\n\\subsection{Finding the Likelihood Values}\nWe will first calculate the value of the likelihood for the \\bb~hypothesis.\nRemember, the data we are analysing here is that we chose one of the balls in\nthe bag ``at random'', and it was black. The likelihood for the \\bb~hypothesis is\ntherefore the probability that we would get a black ball if \\bb~is true.\n\nImagine that \\bb~is true. That means both balls are black. What is the probability that\nthe experiment would result in a black ball? That's easy -- it's 100\\%! 
So we\nput the number 1 in the Bayes' Box as the likelihood for the \\bb~hypothesis.\n\nNow imagine instead that \\bw~is true. That would mean one ball is black and the\nother is white. If this were the case and we did the experiment, what would be\nthe probability of getting the black ball in the experiment? Since one of the\ntwo balls is black, the chance of choosing this one is 50\\%. Therefore, the\nlikelihood for the \\bw~hypothesis is 0.5, and that's why I put 0.5 in the Bayes'\nBox for the likelihood for \\bw.\n\nIn general, the likelihood is the {\\it probability of the data that you actually\ngot, assuming a particular hypothesis is true}. In this example it was\nfairly easy to get the likelihoods directly by asking ``if this hypothesis\nis true, what is the probability of getting the black ball when we do the\nexperiment?''. Sometimes this is not so easy, and it can be helpful to think about\nALL possible experimental outcomes/data you might have seen -- even though\nultimately, you just need to select the one that actually occurred.\nTable~\\ref{tab:all_data} shows an example of this process.\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\n{\\bf Hypotheses} & {\\bf Possible Data} & {\\bf Probability} \\\\\n\\hline\n{\\tt BB} & {\\color{blue} Black Ball} & {\\color{blue} 1}\\\\\n         & White Ball & 0 \\\\\n\\hline\n{\\tt BW} & {\\color{blue} Black Ball} & {\\color{blue} 0.5} \\\\\n         & White Ball & 0.5 \\\\\n\\hline\n\\end{tabular}\n\\caption{\\it This table demonstrates a method for calculating the likelihood\nvalues, by considering not just the data that actually occurred, but all\ndata that might have occurred. Ultimately, it is only the probability of the\ndata which actually occurred that matters, so this is highlighted in blue.\n\\label{tab:all_data}}\n\\end{center}\n\\end{table}\nThe fact that only the blue probabilities in Table~\\ref{tab:all_data} enter the\nBayes' Box calculation is related to the {\\it likelihood principle}, which we\nwill\ndiscuss in lectures. Note also that in Table~\\ref{tab:all_data}, the\nprobabilities for the different possible data sets add to 1 within each\nhypothesis, but the sum of the blue ``selected'' likelihood values is not 1\n(it is, in fact, meaningless).\n\nWhen we come to parameter estimation in later chapters, we will usually set up\nour problems in this way, by considering what data sets are possible, and\nassigning probabilities to them.\n\n\\subsection{The Mechanical Part}\nThe fourth column of the Bayes' Box is the product of the prior probabilities\nand the likelihoods, calculated by simple multiplication. The result will be\ncalled ``prior times likelihood'', but occasionally we will use the letter $h$\nfor these quantities. This is the {\\it unnormalised} posterior. It does not sum\nto 1 as the posterior probabilities should, but it is at least proportional to\nthe actual posterior probabilities.\n\nTo find the posterior probabilities, we take the {\\tt prior $\\times$ likelihood}\ncolumn and divide it by its sum, producing numbers that do sum to 1. This\ngives us the final\nposterior probabilities, which were the goal all along. The completed Bayes'\nBox is shown below:\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n{\\bf Hypotheses} & {\\tt prior} & {\\tt likelihood} &\n{\\tt h = prior $\\times$ likelihood} & {\\tt posterior}\\\\\n\\hline\n{\\tt BB} & 0.5 & 1 & 0.5 & 0.667\\\\\n{\\tt BW} & 0.5 & 0.5 & 0.25  & 0.333\\\\\n\\hline\nTotals: & 1 & & 0.75 & 1\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}
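\nThese numbers are easy to reproduce computationally; the following short Python sketch (the dictionary names are ours) redoes the arithmetic of the Bayes' Box:\n\n\\begin{verbatim}\npriors      = {'BB': 0.5, 'BW': 0.5}\nlikelihoods = {'BB': 1.0, 'BW': 0.5}   # P(black ball | hypothesis)\n\nh = {k: priors[k] * likelihoods[k] for k in priors}\ntotal = sum(h.values())                # 0.75\nposterior = {k: h[k] / total for k in h}\nprint(posterior)                       # BB: 0.667, BW: 0.333 (3 d.p.)\n\\end{verbatim}\n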
The completed Bayes'\nBox is shown below:\n\n\begin{table}[!ht]\n\begin{center}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n{\bf Hypotheses} & {\tt prior} & {\tt likelihood} &\n{\tt h = prior $\times$ likelihood} & {\tt posterior}\\\n\hline\n{\tt BB} & 0.5 & 1 & 0.5 & 0.667\\\n{\tt BW} & 0.5 & 0.5 & 0.25  & 0.333\\\n\hline\nTotals: & 1 & & 0.75 & 1\\\n\hline\n\end{tabular}\n\end{center}\n\end{table}\nWe can see that the posterior probabilities are not the same as the prior\nprobabilities, because we have more information now! The experimental result\nmade \bb~a little bit more plausible than it was before. Its probability has\nincreased from 1/2 to 2/3.\n\n\subsection{Interpretation}\nThe posterior probabilities of the hypotheses are proportional to the prior probabilities\nand the likelihoods. A high prior probability will help a hypothesis have a high\nposterior probability. A high likelihood value also helps.\nTo understand what this means about reasoning, consider\nthe meanings of the prior and the likelihood. There are two things that can\ncontribute to a hypothesis being plausible:\n\begin{itemize}\n\item If the prior probability is high. That is, the hypothesis was {\it already}\nplausible, before we got the data.\n\item If the hypothesis {\it predicted the data} well. That is, the data was\nwhat we would have expected to occur if the hypothesis had been true.\n\end{itemize}\nI hope you agree that this is all very sensible.\n\nIn class we will also study\nvariations on this problem, considering different assumptions about the prior\nprobabilities and how they affect the results, and also considering what happens\nwhen we get more and/or different data.\n\n\section{Bayes' Rule}\nBayes' rule is an equation from probability theory, shown in\nFigure~\ref{fig:bayes_neon}. The various terms in Bayes' rule are all\nprobabilities, but notice that there are conditional probabilities in there.\nFor example, the left hand side of the equation is $P(A|B)$ and that means\nthe probability of $A$ {\bf given} $B$. That is, it's the probability of $A$\nafter taking into account the information $B$. In other words,\n$P(A|B)$ is a posterior probability, and Bayes' rule tells us how to calculate\nit from other probabilities.\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=0.4]{Figures/bayes_neon.jpg}\n\caption{\it A blue neon sign displaying Bayes' rule.\nYou can use it to calculate the probability of $A$ {\it given} $B$,\nif you know the values of some other probabilities on the right hand side.\nImage credit: Matt Buck. Obtained from Wikimedia Commons.\n\label{fig:bayes_neon}}\n\end{center}\n\end{figure}\nBayes' rule is true for {\it any} statements $A$ and $B$. If you took the\nequation in Figure~\ref{fig:bayes_neon} and replaced $A$ with\n``K\={a}k\={a}p\={o} will survive beyond 2050'' and $B$ with\n``I had coffee this morning'', the\nresulting equation would still be true\footnote{It would still be true, but\nit would not be very interesting,\nbecause\nwhether or not I had coffee doesn't tell you much about the survival prospects\nof endangered New Zealand parrots.}.\n\nIt is helpful to relabel $A$ and $B$ in Bayes' rule to give a clearer\ninterpretation of how the equation is to be used. In this version of Bayes'\nrule (which is one you should commit to memory), $A$ has been replaced by $H$,\nand $B$ has been replaced by $D$. The reason for these letters is that you should\ninterpret $H$ as {\it hypothesis} and $D$ as {\it data}. 
Then you can interpret\nBayes' rule as telling you the probability of a hypothesis given some data, in\nother words, a posterior probability.\n\\begin{eqnarray}\nP(H|D) = \\frac{P(H)P(D|H)}{P(D)}\n\\end{eqnarray}\nIn Bayesian statistics, most of the terms in Bayes' rule have special names.\nSome of them even have more than one name, with different scientific\ncommunities preferring different terminology. Here is a list of the\nvarious terms and the names we will use for them:\n\\begin{itemize}\n\\item $P(H|D)$ is the {\\bf posterior probability}. It describes how certain\nor confident we are that\nhypothesis $H$ is true, given that we have observed data $D$. Calculating\nposterior probabilities is the main goal of Bayesian statistics!\n\\item $P(H)$ is the {\\bf prior probability}, which describes how sure we were\nthat $H$ was true, before we observed the data $D$.\n\\item $P(D|H)$ is the {\\bf likelihood}. If you were to assume that $H$ is true,\nthis is the probability that you would have observed data $D$.\n\\item $P(D)$ is the {\\bf marginal likelihood}. This is the probability that you\nwould have observed data $D$, {\\it whether $H$ is true or not}.\n\\end{itemize}\nSince you may encounter Bayesian methods outside of STATS 331, I have included\nan Appendix called ``Rosetta Stone'' that lists some common alternative\nterminology.\n\nIn the above example, we did some calculations to work out the numbers in the\nBayes' Box, particularly the posterior probabilities, which are the ultimate\ngoal of the calculation. {\\it What we were actually doing in these calculations\nwas applying Bayes' rule}. We actually applied Bayes' rule twice, once to\ncompute $P(\\bb | D)$ and a second time to calculate $P(\\bw | D)$.\n\n\\begin{framed}\n{\\bf When you use a Bayes' Box to calculate posterior probabilities,\nyou are really just applying Bayes' rule a lot of times:\nonce for each hypothesis listed in the first column.}\n\\end{framed}\n\n\\section{Phone Example}\nThis example is based on Question 1 from the 2012 final exam. I got the\nidea for this question from an example in David MacKay's wonderful book\n``Information Theory, Inference and Learning Algorithms''\n(available online as a free PDF download. You're welcome to check it out, but\nit is a large book and only about 20\\% of the content is relevant to this course!).\n\nYou move into a new house which has a phone\ninstalled. You can't remember the phone number, but you suspect it\nmight be {\\tt 555-3226} (some of you may recognise\nthis as being the phone number for Homer Simpson's ``Mr Plow'' business).\nTo test this hypothesis, you carry out an experiment\nby picking up the phone and dialing {\\tt 555-3226}.\n\nIf you are correct about\nthe phone number, you will definitely hear a busy signal because you are calling\nyourself.\nIf you are incorrect, the probability of hearing a busy signal is $1/100$.\nHowever, all of that is only true if you assume the phone is working, and it\nmight be broken! 
If the phone is broken, it will always give a busy signal.\n\nWhen you do the experiment, the outcome (the data) is that you do actually get the busy signal.\nThe question asked us to consider the following four hypotheses, and to calculate\ntheir posterior probabilities:\n\begin{table}[!ht]\n\begin{center}\n\begin{tabular}{|c|c|c|}\n\hline\nHypothesis & Description & Prior Probability\\\n\hline\n$H_1$ & Phone is working and {\tt 555-3226} is correct & 0.4\\\n$H_2$ & Phone is working and {\tt 555-3226} is incorrect & 0.4\\\n$H_3$ & Phone is broken and {\tt 555-3226} is correct & 0.1\\\n$H_4$ & Phone is broken and {\tt 555-3226} is incorrect & 0.1\\\n\hline\n\end{tabular}\n\caption{\it The four hypotheses about the state of the phone and the phone\nnumber. The prior probabilities are also given.\n\label{tab:phone}}\n\end{center}\n\end{table}\nNote that the four hypotheses are mutually exclusive and exhaustive. If you were\nto come up with hypotheses yourself, ``phone is working'' and ``{\tt 555-3226} is correct''\nmight spring to mind. They wouldn't be mutually exclusive, so you couldn't do a\nBayes' Box with just those two, but it is possible to put these together (using\n``{\bf and}'') to make the four mutually exclusive options in the table.\n\n\subsection{Solution}\nWe will go through the solution using a Bayes' Box. The four hypotheses listed\nin Table~\ref{tab:phone} and their prior probabilities are given, so we can fill\nout the first two columns of a Bayes' Box right away:\n\begin{table}[!ht]\n\begin{center}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n{\bf Hypotheses} & {\tt prior} & {\tt likelihood} &\n{\tt prior $\times$ likelihood} & {\tt posterior}\\\n\hline\n$H_1$ & 0.4 &  &  & \\\n$H_2$ & 0.4 &  &  & \\\n$H_3$ & 0.1 &  &  & \\\n$H_4$ & 0.1 &  &  & \\\n\hline\nTotals: & 1 & & & \\\n\hline\n\end{tabular}\n\end{center}\n\end{table}\nThe next thing we need is the likelihoods. The outcome of the experiment (the\ndata) was the busy signal, so we need to work out $P(\textnormal{busy signal} | H)$ for each $H$\nin the problem (there are four of them). Let's start (naturally!) with $H_1$.\n\nIf we assume $H_1$ is true, then the phone is working and {\tt 555-3226} is the\ncorrect phone number. In that case, we would definitely get a busy signal\nbecause we are calling ourselves. Therefore\n$P(\textnormal{busy signal} | H_1) = 1$ is our first likelihood value.\n\nNext, let's imagine that $H_2$ is true, so the phone is working, but\n{\tt 555-3226} is not the right phone number. In this case, it is given in the\nquestion that the probability of getting a busy signal is $1/100$ or 0.01 (in\nreality, this would be based on some other data, or perhaps be a totally\nsubjective judgement).\nTherefore $P(\textnormal{busy signal} | H_2) = 0.01$, and that's our second\nlikelihood value.\n\nThe likelihoods for $H_3$ and $H_4$ are quite straightforward\nbecause they both imply the phone is broken, and that means a busy signal is certain.\nTherefore $P(\textnormal{busy signal} | H_3) = P(\textnormal{busy signal} | H_4) = 1$.\nWe have our four likelihoods, and can proceed to work out everything in the\nBayes' Box, including the main goal -- the posterior probabilities! 
Here it is:\n\begin{table}[!ht]\n\begin{center}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n{\bf Hypotheses} & {\tt prior} & {\tt likelihood} &\n{\tt prior $\times$ likelihood} & {\tt posterior}\\\n\hline\n$H_1$ & 0.4 & 1 &  0.4 & 0.662\\\n$H_2$ & 0.4 & 0.01 & 0.004 & 0.00662\\\n$H_3$ & 0.1 & 1 & 0.1 & 0.166\\\n$H_4$ & 0.1 & 1 & 0.1 & 0.166\\\n\hline\nTotals: & 1 & & 0.604 & 1\\\n\hline\n\end{tabular}\n\end{center}\n\end{table}\n\nTo conclude this phone problem, I should admit that\nI actually calculated the numbers in the Bayes' Box using R. My code is shown\nbelow. A lot of the code we write in labs will look like this. Obviously in the\n2012 exam the students had to use their calculators instead.\n\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\small]{r}\nprior = c(0.4, 0.4, 0.1, 0.1) # Vector of prior probs\nlik = c(1, 0.01, 1, 1)        # Vector of likelihoods\nh = prior*lik\nZ = sum(h)                    # Sum of prior times likelihood\npost = prior*lik/Z            # Normalise to get posterior\n# Look at all the results\nprint(prior)\nprint(lik)\nprint(h)\nprint(Z)\nprint(post)\n\end{minted}\n\nNow let's try to see if this makes sense. There are many things we could think\nabout, but let's just consider the question of whether\nthe phone is working or not. The first two hypotheses correspond to the phone\nbeing in a working state. If you want to calculate the probability of $A$\n{\bf or} $B$, then you can just add the probabilities if they are mutually\nexclusive. The prior probability that the phone is working is\ntherefore:\n\begin{eqnarray}\nP(\textnormal{phone working}) &=& P(H_1 \vee H_2)\\\n&=& P(H_1) + P(H_2)\\\n&=& 0.4 + 0.4\\\n&=& 0.8.\n\end{eqnarray}\nHere, I have introduced the notation $\vee$, meaning ``logical or'': for\nany two propositions $A$, $B$, the proposition $(A \vee B)$ is true if\neither one of $A$ or $B$ is true (or both).\n\nThe posterior probability is worked out in a similar way, but using the posterior\nprobabilities instead of the prior ones:\n\begin{eqnarray}\nP(\textnormal{phone working}|\textnormal{busy signal}) &=& P(H_1 \vee H_2 | \textnormal{busy signal})\\\n&=& P(H_1|\textnormal{busy signal}) + P(H_2|\textnormal{busy signal})\\\n&=& 0.662 + 0.00662\\\n&=& 0.6689.\n\end{eqnarray}\n(The final line is computed from the unrounded posterior values, which is why it is not\nexactly the sum of the two rounded numbers displayed.)\nOur probability that the phone is working has gone down a little bit as a result of this\nevidence! That makes sense to me. A busy signal is what you would expect to\nhappen if the phone was broken. This data doesn't {\it prove} the phone is\nbroken, but it does point in that direction a little bit, and hence the\nprobability that the phone is working has been reduced from 0.8 to 0.6689.\n\n\section{Important Equations}\nPosterior probabilities are calculated using Bayes' rule. For a single\nhypothesis $H$ given data $D$, Bayes' rule is:\n\begin{eqnarray}\nP(H|D) = \frac{P(H)P(D|H)}{P(D)}\label{eq:bayes1}\n\end{eqnarray}\nThis gives the posterior probability $P(H|D)$ in terms of the prior probability\n$P(H)$, the likelihood $P(D|H)$ and the marginal likelihood $P(D)$ in the\ndenominator. To obtain $P(H)$, think about your prior beliefs (which may\nindicate a large amount of uncertainty, or may already be well informed based\non previous data sets). 
To obtain $P(D|H)$, think about what the experiment is\ndoing: if $H$ is true, what data would you expect to see and with what\nprobabilities?\n\nThe denominator is the probability of obtaining the data $D$ but without\nassuming that $H$ is either true or false. This is obtained using the sum rule.\nThere are two ways that the data $D$ could occur, either via the route of $H$\nbeing true (this has probability $P(H)P(D|H)$), or via the route of $H$ being\nfalse (this has probability $P(\bar{H})P(D|\bar{H})$). These two ways are\nmutually exclusive, so we can add their probabilities:\n\begin{eqnarray}\nP(D) = P(H)P(D|H) + P(\bar{H})P(D|\bar{H}).\n\end{eqnarray}\n\nBayes' rule can be applied to a whole set\nof hypotheses (that are mutually exclusive and exhaustive) simultaneously.\nThis is a more common way of using it, and it is the way we use it when we use\na Bayes' Box. If we applied Equation~\ref{eq:bayes1} to $N$ hypotheses\n$H_1, H_2, \ldots, H_N$, given data $D$, we would get the following for the posterior\nprobability of each hypothesis $H_i$ (for $i=1, 2, \ldots, N$):\n\begin{eqnarray}\nP(H_i|D) &=& \frac{P(H_i)P(D|H_i)}{P(D)}\n\end{eqnarray}\nThe denominator $P(D)$ is a single number. It does not depend on the index $i$.\nIt can again be obtained using the sum rule. There are $N$\nmutually exclusive ways that the data $D$ could have occurred: via $H_1$ being\ntrue, or via $H_2$ being true, etc. Adding the probabilities of these gives:\n\begin{eqnarray}\nP(D) &=& \sum_{i=1}^N P(H_i)P(D|H_i),\n\end{eqnarray}\nwhich just happens to be the sum of the prior times likelihood values.\nIf you don't find equations particularly easy to read, just remember that\nfollowing the steps for making a Bayes' Box is equivalent to applying\nBayes' rule in this form! The $P(H_i)$ values are the prior probability column, the\n$P(D|H_i)$ values are the likelihood column, and the denominator is the\nsum of the prior times likelihood column. For example, the posterior probability\nfor $H_1$ (the top right entry in a Bayes' Box) is given by the prior probability\nfor $H_1$ times the likelihood for $H_1$, divided by the sum of prior times\nlikelihood values. That is, $P(H_1|D) = P(H_1)P(D|H_1)/P(D)$.\nThe correspondence between the probabilities that go in a Bayes'\nBox (in general) and the terms in the equations is given in\nTable~\ref{tab:general_bayes_box}.\n\n\begin{table}[!ht]\n\begin{center}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n{\bf Hypotheses} & {\tt prior} & {\tt likelihood} &\n{\tt prior $\times$ likelihood} & {\tt posterior}\\\n\hline\n$H_1$ & $P(H_1)$ & $P(D|H_1)$ & $P(H_1)\times P(D|H_1)$ & $P(H_1|D)$\\\n$H_2$ & $P(H_2)$ & $P(D|H_2)$ & $P(H_2)\times P(D|H_2)$ & $P(H_2|D)$\\\n\ldots & \ldots & \ldots & \ldots & \ldots\\\n\hline\nTotals: & 1 & & $P(D)$ & 1\\\n\hline\n\end{tabular}\n\caption{\it A general Bayes' Box. 
Using Bayes' rule and making a Bayes' Box are\nactually the same thing, and this table can be used to identify the\nterms.\label{tab:general_bayes_box}}\n\end{center}\n\end{table}\n\n", "meta": {"hexsha": "14228050beb6a08c4837ca726ea58f406a2f8b8d", "size": 25265, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "first_example.tex", "max_stars_repo_name": "xulinpan/stat331", "max_stars_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 55, "max_stars_repo_stars_event_min_datetime": "2015-03-09T18:03:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-25T03:36:54.000Z", "max_issues_repo_path": "first_example.tex", "max_issues_repo_name": "xulinpan/stat331", "max_issues_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-07-07T05:00:32.000Z", "max_issues_repo_issues_event_max_datetime": "2015-07-10T08:48:27.000Z", "max_forks_repo_path": "first_example.tex", "max_forks_repo_name": "xulinpan/stat331", "max_forks_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2015-07-29T14:34:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-04T20:04:47.000Z", "avg_line_length": 46.1882998172, "max_line_length": 139, "alphanum_fraction": 0.7399168811, "num_tokens": 7047, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710087, "lm_q2_score": 0.8311430520409023, "lm_q1q2_score": 0.5559351810546735}}
{"text": "% When using TeXShop on the Mac, let it know the root document.\n% The following must be one of the first 20 lines.\n% !TEX root = ../design.tex\n\n\chapter[Linear-chain Conditional Random Field]{Linear-chain Conditional Random Field}\n% Motivation. Why do we want to have this abstract layer?\nA conditional random field (CRF) \cite{DBLP:conf/icml/LaffertyMP01} is a type of discriminative undirected probabilistic graphical model.\nLinear-chain CRFs are special CRFs which assume that the next state depends only on the current state.\nLinear-chain CRFs achieve state-of-the-art accuracy in some real-world natural language processing tasks such\nas information extraction \cite{chen2012optimizing}, part-of-speech tagging (POS) and named entity resolution (NER).\n\n\section{Linear-chain CRF Learning}\n\n\subsection{Mathematical Notations}\n\begin{itemize}\n\item $p(\boldsymbol Y | \boldsymbol X)$: the conditional probability distribution of the label sequence $\boldsymbol Y$ given the input sequence $\boldsymbol X$.\n\item $M$: total number of unique features.\n\item $I$: the position of the last token in a sentence.\n\item $N$: number of sentences in the training data set.\n\item $\lambda$: the coefficients (feature weights).\n\item $\ell_{\lambda}$: log-likelihood summed over all training sentences.\n\item $\nabla \ell_{\lambda}$: gradient vector summed over all training sentences.\n\item $\ell_{\lambda}^\prime$: adjusted log-likelihood to avoid overfitting using a spherical Gaussian weight prior.\n\item $\nabla \ell_{\lambda}^\prime$: adjusted gradient vector to avoid overfitting using a spherical Gaussian weight prior.\n\n\end{itemize}\n\n\subsection{Formulation}\nA linear-chain CRF \cite{DBLP:conf/naacl/ShaP03} is a distribution\n    \[p(\boldsymbol Y | \boldsymbol X) = \frac{\exp{\sum_{m=1}^M \sum_{i=0}^{I} \lambda_m f_m(y_i,y_{i-1},x_i)}}{Z(X)},\]\n\nwhere $Z(X)$ is an instance-specific normalizer\n\[Z(X) = \sum_{y} \exp{\sum_{m=1}^M \sum_{i=0}^{I} \lambda_m f_m(y_i,y_{i-1},x_i)}.\]\n\nWe train a CRF by maximizing the log-likelihood of a given training set $ T=\{(\vec{x^{(k)}},\vec{y^{(k)}})\}_{k=1}^N$\nand seeking the zero of the gradient.\\\n    \[\ell_{\lambda}=\sum_k \log p_\lambda(\vec{y^{(k)}}|\vec{x^{(k)}}) =\sum_k[\lambda F(\vec{y^{(k)}},\vec{x^{(k)}})-\log Z_\lambda(\vec{x^{(k)}})]\]\n    \[\nabla \ell_{\lambda}=\sum_k[F(\vec{y^{(k)}},\vec{x^{(k)}})-E_{p_\lambda(Y|\vec{x^{(k)}})}F(Y,\vec{x^{(k)}})]\]\n\nTo avoid overfitting, we penalize the likelihood with a spherical Gaussian weight prior:\\\n    \[\ell_{\lambda}^\prime=\sum_k[\lambda F(\vec{y^{(k)}},\vec{x^{(k)}})-\log Z_\lambda(\vec{x^{(k)}})]-\frac{\lVert \lambda \rVert^2}{2\sigma ^2}\]\n    \[\nabla \ell_{\lambda}^\prime=\sum_k[F(\vec{y^{(k)}},\vec{x^{(k)}})-E_{p_\lambda(Y|\vec{x^{(k)}})}F(Y,\vec{x^{(k)}})]-\frac{\lambda}{\sigma ^2}\]\nNote: We hard-code $\sigma$ as $100$ in the implementation, following other CRF packages in the literature.\n\n\subsection{Forward-backward Algorithm}\n$E_{p_\lambda(Y|x)}F(Y,x)$ is computed using a variant of the forward-backward algorithm:\n\n    \[E_{p_\lambda(Y|x)}F(Y,x) = \sum_y p_\lambda(y|x)F(y,x) = \sum_i\frac{\alpha_{i-1}(f_i*M_i)\beta_i^T}{Z_\lambda(x)}\]\n    \[Z_\lambda(x) = \alpha_I.1^T\]\n    where $\alpha_i$ and $\beta_i$ are the forward and backward state cost vectors defined by\\\n  \[\alpha_i =\n    \begin{cases}\n    \alpha_{i-1}M_i, & 0 < i \leq I\\\n    1, & i=0\n  \end{cases}\n    ,\n    \beta_i^T =\n    \begin{cases}\n    M_{i+1}\beta_{i+1}^T, & 1 \leq i < I\\\n    1, & i=I\n    \end{cases}\n  \]\n\n\subsection{L-BFGS Convex Solver}\n\nThe limited-memory BFGS (L-BFGS) \cite{DBLP:journals/siamjo/MoralesN00} is the\nlimited-memory variation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm,\nwhich is the state of the art in large-scale unconstrained convex optimization.\nWe translated the in-memory Java implementation to a C++ in-database\nimplementation using Eigen support. Eigen vectors and Eigen matrices are used\ninstead of plain one-dimensional and two-dimensional arrays. The Java in-memory\nimplementation defines many static variables that are shared\nbetween iterations. In the MADlib implementation, however, we define these\nvariables in the state object. Before each iteration of L-BFGS optimization, we\nneed to initialize the L-BFGS instance with the current state object.  At the end of each\niteration, we need to dump the updated variables to the database state for the next\niteration.\n\n\n\subsection{Parallel CRF Training}\n\begin{algorithm}[CRF training$(z_{1:M})$] \label{alg:CRF training}\n\alginput{Observation set $z_{1:M}$,\\\nconvergence criterion $\mathit{Convergence}()$,\\\nstart strategy $\mathit{Start}()$,\\\ninitialization strategy $\mathit{Initialization}()$,\\\ntransition strategy $\mathit{Transition}()$,\\\nfinalization strategy $\mathit{Finalization}()$}\n\algoutput{Coefficients $w \in \mathbb{R}^N$}\n\algprecond{$iteration = 0, diag = \boldsymbol 1$}\n\begin{algorithmic}[1]\n\t\State $w_\text{new} \set \mathit{Start}(z_{1:M})$\n\t\Repeat\n        \State $w_\text{old} \set w_\text{new}$\n        \State $\mathit{state} \set \mathit{Initialization}(w_\text{new})$\n\t\t\For{$m \in 1..M$} \Comment{Single entry in the observation set}\n\t\t\t\State $\mathit{state} \set \mathit{Transition}(\mathit{state}, z_m)$\n                \Comment{Computing gradient and log-likelihood.}\n\t\t\EndFor\n\t\t\State $w_\text{new} \set Finalization(\mathit{state})$ \Comment{Mainly invoke L-BFGS convex solver}\n\t\Until{$Convergence(w_\text{new}, g_\text{new}, \mathit{iteration})$}\n    \State \Return $w_\text{new}$\n\end{algorithmic}\n\end{algorithm}\n\n\paragraph{Programming Model.}\nThe algorithm above presents the parallel CRF training strategy in terms of the programming model supported by MADlib (mainly user-defined aggregates).\n\n\paragraph{Parallelism.}\nThe outer loop is inherently sequential over multiple iterations.\nIteration $n+1$ takes the output of iteration $n$ as input, and so forth, until the stop criterion is satisfied.\nThe inner loop, which calculates the gradient and log-likelihood for each document, is data-parallel.\nSimple model averaging is used to merge two states.\nA merge function is not explicitly added to the pseudocode for simplicity.\nThe finalization function invokes the L-BFGS convex solver to get a new solution.\nL-BFGS itself is sequential, but very fast.\nExperiments show that the speed-up ratio approaches the number of segments configured in the Greenplum database.
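\n\nThe per-document work inside $\mathit{Transition}()$ is exactly the forward-backward computation described above.\nAs a concrete illustration, the following is a minimal sketch in Python/NumPy (ours, not MADlib code; it uses a dense per-position matrix representation of a single feature, whereas the actual implementation scans the sparse feature arrays described later), including a brute-force check of the recursions:\n\begin{lstlisting}[language=Python,gobble=4]\n    import numpy as np\n    from itertools import product\n\n    def forward_backward(M, F):\n        # M[i][y_prev, y]: exp of the summed weighted features at position i\n        # F[i][y_prev, y]: value of a single feature f at position i\n        L, K, _ = M.shape\n        alpha = np.ones((L + 1, K))         # alpha_0 = 1\n        for i in range(L):\n            alpha[i + 1] = alpha[i] @ M[i]  # alpha_i = alpha_{i-1} M_i\n        beta = np.ones((L + 1, K))          # beta_I = 1\n        for i in range(L - 1, -1, -1):\n            beta[i] = M[i] @ beta[i + 1]    # beta_i^T = M_{i+1} beta_{i+1}^T\n        Z = alpha[L].sum()                  # Z = alpha_I . 1^T\n        E = sum(alpha[i] @ (F[i] * M[i]) @ beta[i + 1] for i in range(L)) / Z\n        return Z, E                         # normalizer and E_{p(Y|x)} F(Y,x)\n\n    # Brute-force check over all label paths (y_0, ..., y_L)\n    rng = np.random.default_rng(0)\n    M = np.exp(rng.normal(size=(4, 3, 3)))\n    F = rng.integers(0, 2, size=(4, 3, 3)).astype(float)\n    Z, E = forward_backward(M, F)\n    Zb = Eb = 0.0\n    for ys in product(range(3), repeat=5):\n        w = np.prod([M[i][ys[i], ys[i + 1]] for i in range(4)])\n        Zb += w\n        Eb += w * sum(F[i][ys[i], ys[i + 1]] for i in range(4))\n    assert np.isclose(Z, Zb) and np.isclose(E, Eb / Zb)\n\end{lstlisting}\nThe per-document gradient contribution is then $F(\vec{y^{(k)}},\vec{x^{(k)}})-E_{p_\lambda(Y|\vec{x^{(k)}})}F(Y,\vec{x^{(k)}})$, and the merge step simply sums these contributions across documents.\n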
\n\paragraph{Convergence criterion.}\nUsually, the following conditions are combined by AND, OR, or NOT.\n\begin{enumerate}\n    \item The norm of the gradient divided by the norm of the coefficients drops below a given threshold.\n    \item The maximum number of iterations is reached.\n    \item More criteria could be added.\n\end{enumerate}\n\n\paragraph{Start strategy.}\nIn most cases, zeros are used unless otherwise specified.\n\n\paragraph{Transition strategies.}\nThis function contains the logic of computing the gradient and log-likelihood for each tuple using the forward-backward\nalgorithm. The algorithms will be discussed in the following sections.\n\n\begin{algorithm}[transition-lbfgs$(\mathit{state}, z_m)$] \label{alg:transition-lbfgs}\n\alginput{Transition state $\mathit{state}$,\\\nobservation entry $z_m$,\\\ngradient function $\mathit{Gradient}()$}\n\algoutput{Transition state $\mathit{state}$}\n\begin{algorithmic}[1]\n    \State $\{state.g,state.loglikelihood\}  \set \mathit{Gradient}(\mathit{state}, z_m)$\n        \Comment{using forward-backward algorithm to calculate gradient and loglikelihood}\n    \State $\mathit{state}.num\_rows \set \mathit{state}.num\_rows + 1$\n    \State \Return $\mathit{state}$\n\end{algorithmic}\n\end{algorithm}\n\n\n\paragraph{Merge strategies.}\nThe merge function simply sums the gradient and log-likelihood over all training documents.\n\begin{algorithm}[merge-lbfgs$(\mathit{state_1}, \mathit{state_2})$] \label{alg:merge-lbfgs}\n\alginput{Transition state $\mathit{state_1}$,\\\nTransition state $\mathit{state_2}$}\n\algoutput{Transition state $\mathit{state_{new}}$}\n\begin{algorithmic}[1]\n    \State $\mathit{state_{new}}.g \set \mathit{state_1}.g + \mathit{state_2}.g$\n    \State $\mathit{state_{new}}.loglikelihood \set \mathit{state_1}.loglikelihood + \mathit{state_2}.loglikelihood$\n    \State \Return $\mathit{state_{new}}$\n\end{algorithmic}\n\end{algorithm}\n\n\n\paragraph{Finalization strategy.}\nThe finalization function invokes the L-BFGS convex solver to get a new coefficient vector.\\\n\n\begin{algorithm}[finalization-lbfgs$(state)$] \label{alg:finalization-lbfgs}\n\alginput{Transition state $state$,\\\nLBFGS $\mathit{lbfgs}()$}\n\algoutput{Transition state  $state$}\n\begin{algorithmic}[1]\n        \State $\{state.g,state.loglikelihood\} \set penalty(state.g,state.loglikelihood)$ \Comment{To avoid overfitting, add penalization}\n        \State $\{state.g,state.loglikelihood\}\set-\{state.g,state.loglikelihood\}$ \Comment{negation for maximization}\n        \State LBFGS instance($state)$ \Comment{initialize the L-BFGS instance with previous state}\n        \State instance.$lbfgs()$ \Comment{invoke the L-BFGS convex solver}\n        \State instance.save\_state$(state)$ \Comment{save updated variables to the state for next iteration}\n        \State \Return $state$\n\end{algorithmic}\n\end{algorithm}\n\nFed with the current solution, gradient, and log-likelihood, L-BFGS outputs a new solution.\nTo avoid overfitting, a penalization function is needed; we choose to penalize the log-likelihood with the spherical Gaussian weight prior described earlier.\nAlso, the L-BFGS solver works as a minimizer, so we negate the gradient vector and the log-likelihood: minimizing the negated penalized log-likelihood is equivalent to maximizing it.
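\n\nAs a small, self-contained illustration of this penalize-and-negate pattern (ours, not MADlib code), the sketch below maximizes a toy stand-in for the log-likelihood using an off-the-shelf L-BFGS minimizer; a real implementation would obtain the log-likelihood and gradient from the forward-backward computation over all sentences.\n\begin{lstlisting}[language=Python,gobble=4]\n    import numpy as np\n    from scipy.optimize import minimize\n\n    SIGMA = 100.0  # width of the spherical Gaussian prior, as in the text\n\n    def toy_loglik_and_grad(lam):\n        # Stand-in for the summed CRF log-likelihood and its gradient.\n        target = np.array([1.0, -2.0, 0.5])\n        ll = -0.5 * np.sum((lam - target) ** 2)  # concave, like a log-likelihood\n        return ll, -(lam - target)\n\n    def objective(lam):\n        ll, grad = toy_loglik_and_grad(lam)\n        ll -= lam @ lam / (2 * SIGMA ** 2)       # spherical Gaussian penalty\n        grad -= lam / SIGMA ** 2\n        return -ll, -grad                        # negate: the solver minimizes\n\n    res = minimize(objective, x0=np.zeros(3), jac=True, method="L-BFGS-B")\n    print(res.x)  # close to the maximum of the penalized toy log-likelihood\n\end{lstlisting}\n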
\n\section{Linear-chain CRF Applications}\nLinear-chain CRFs can be used in various applications such as part-of-speech tagging and named entity resolution.\nAll the following sections assume that the application is part-of-speech tagging; it can be adapted to named entity resolution with minimal effort.\n\n\subsection{Part of Speech Tagging}\nPart-of-speech tagging, also called grammatical tagging or word-category disambiguation \cite{DBLP:journals/coling/DeRose88}, is the process of assigning\na part of speech to each word in a sentence. POS tagging has been widely used in information retrieval and text-to-speech. There are two distinct methods for\nthe POS task: rule-based and stochastic.\nIn the rule-based method, a large collection of rules is defined to identify the tag. The stochastic method is based on probabilistic\ngraphical models such as hidden Markov models and conditional random fields. In practice, conditional random fields have proven\nto achieve state-of-the-art accuracy.\n\subsection{Tag Set}\nThere are various tag sets used in the literature. The Pennsylvania Treebank tag set \cite{DBLP:journals/coling/MarcusSM94} is a commonly used tag set and contains\n45 tags. The following table shows some of the tags in the tag set.\n\n\begin {table}[h]\n\caption {Penn Treebank III Tag Set} \label{tab:tagset}\n\[\begin{tabular}{lll||lll}\n  Tag & Description         & Example       & Tag & Description         & Example\\\n  \hline\n  CC  & Coordinating conjunction & and,but,or    & SYM & Symbol              & +,\%,\&\\\n  CD  & Cardinal number     & one,two,three & TO  & 'to'                & to\\\n  DT  & Determiner          & a,the         & UH  & Interjection        & ah,oops\\\n  EX  & Existential         & there         & VB  & Verb,base form      & eat\\\n  ... & ...                 & ...           & ... & ...                 & ...\\\n  RBR & Adverb,comparative  & faster        & .   & Sentence-final      & (.!?)\\\n  RBS & Adverb,superlative  & fastest       & :   & Mid-sentence punc   & (:;...-)\\\n  RP  & Particle            & up,off        &     &                     &\\\n  \hline\n\end{tabular}\]\n\end{table}\n\n\subsection{Regular Expression Table}\n  A regex feature captures the relationship between the morphology of a token and its corresponding tag.\nFor example, a token ending with 's' is most likely to be a plural noun, whereas a token ending with 'ly' is more likely to be an\nadverb. 
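\nAs a small illustration (ours; a hypothetical helper using a few of the patterns from Table~\ref{tab:regex} below), this is how regex features fire for a given token:\n\begin{lstlisting}[language=Python,gobble=4]\n    import re\n\n    # A few of the patterns from the regular expression table\n    PATTERNS = {\n        "InitCapital": r"^[A-Z][a-z]+$",\n        "isAllCapital": r"^[A-Z]+$",\n        "containsDigit": r"^.*[0-9]+.*$",\n        "endsWithIng": r"^.+ing$",\n        "endsWithly": r"^.+ly$",\n        "isDashSeparatedWords": r"^.+-.+$",\n    }\n\n    def regex_features(token):\n        # Names of all regex features that fire for this token\n        return [name for name, pat in PATTERNS.items() if re.match(pat, token)]\n\n    print(regex_features("Running"))      # ['InitCapital', 'endsWithIng']\n    print(regex_features("open-source"))  # ['isDashSeparatedWords']\n\end{lstlisting}\n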
One can define his or her own regular expressions to capture the intrinsic characteristics of the given training data.\n\begin {table}[h]\n\caption {Regular expression table} \label{tab:regex}\n\begin{center}\n\begin{tabular}{ll||ll}\n  pattern & name             & pattern & name\\\n  \hline\n  $\wedge[A-Z][a-z]+\$$    & InitCapital       & $\wedge[A-Z]+\$$  & isAllCapital \\\n  $\wedge.*[0-9]+.*\$$     & containsDigit     & $\wedge.+[.]\$$   & endsWithDot\\\n  $\wedge.+[,]\$$          & endsWithComma     & $\wedge.+er\$$    & endsWithEr\\\n  $\wedge.+est\$$\t   & endsWithEst       & $\wedge.+ed\$$    & endsWithEd\\\n  $\wedge.+s\$$\t           & endsWithS         & $\wedge.+ing\$$   & endsWithIng\\\n  $\wedge.+ly\$$\t   & endsWithly        & $\wedge.+-.+\$$   & isDashSeparatedWords\\\n  $\wedge.*@.*\$$\t   & isEmailId         &           & \\\n  \hline\n\end{tabular}\n\end{center}\n\end{table}\n\n\section{Feature Extraction}\nThe Feature Extraction module provides functionality for basic text-analysis\ntasks such as part-of-speech (POS) tagging and named-entity resolution.\nAt present, six feature types are implemented.\n    \begin{itemize}\n    \item Edge Feature: a transition feature that encodes the transition weight from the current label to the next label.\n    \item Start Feature: fired when the current token is the first token in a sentence.\n    \item End Feature: fired when the current token is the last token in a sentence.\n    \item Word Feature: fired when the current token is observed in the training dictionary.\n    \item Unknown Feature: fired when the current token has been observed in the training dictionary fewer than a certain number of times.\n    \item Regex Feature: fired when the current token can be matched by the regular expression.\n    \end{itemize}\n\nAdvantages of extracting features using SQL statements:\n\begin{itemize}\n\item [$\star$] It decouples feature extraction from the rest of the code.\n\item [$\star$] Compared with a procedural language, SQL is much easier to understand.\n\item [$\star$] Storing all the features in tables avoids recomputing features across iterations, which also boosts performance.\n\item [$\star$] SQL is easily parallelized.\n\end{itemize}\n\n\subsection{Column Names Convention and Table Schema}\n\subsubsection{Column Names Convention}\nThe following column names are commonly used in the tables of the following sections.\n\begin{itemize}\n\item doc\_id: Unique integer identifier of a document.\n\item start\_pos: Position of the token in the document, starting from 0.\n\item seg\_text: The text token itself.\n\item prev\_label: Label of the previous token.\n\item label: Label of the current token.\n\item max\_pos: End position of the document.\n\item weight: Feature weight associated with a certain feature.\n\item f\_index: Unique integer identifier of a feature.\n\item f\_name: Feature name.\n\end{itemize}\n\n\subsubsection{Training and Testing Data Schema}\nThe text data has to be tokenized before it can be stored in the database. 
One of the commonly used tokenization programs for part-of-speech\ntagging is the Treebank tokenization script.\nThe following table depicts how the training/testing data is stored in the database table.\n\n\begin {table}[h]\n\caption {Training or Testing data} \label{tab:trainingdata}\n\begin{center}\n    \footnotesize\tt\n\begin{tabular}{llll||llll}\n  start\_pos & doc\_id & seg\_text & label & start\_pos & doc\_id & seg\_text & label\\\n  \hline\n        0&1&'confidence'&11& 1&1&'in'&5\\\n        2&1&'the'&2& 3&1&'pound'&11\\\n        4&1&'is'&31& 5&1&'widely'&19\\\n        6&1&'expected'&29& 7&1&'to'&24\\\n        8&1&'take'&26& 9&1&'another'&2\\\n        10&1&'sharp'&6& 11&1&'dive'&11\\\n        12&1&'if'&5& 13&1&'trade'&11\\\n        14&1&'figures'&12& 15&1&'for'&5\\\n        16&1&'september'&13& 17&1&','&42\\\n        18&1&'due'&6& 19&1&'for'&5\\\hline\n\end{tabular}\n\end{center}\n\end{table}\n\n\subsection{Design Challenges and Work-arounds}\nAs far as I know, the MADlib C++ abstraction layer does not support arrays of user-defined composite data types or multi-dimensional arrays,\nbut we do need these complex data types in the implementation of this module. For example, the\n$viterbi\_mtbl$ table is in fact a two-dimensional array. Due to the limitations of the current C++ abstraction layer, we have to convert the matrix\nto an array and later index the data with $M[i*n+j]$ instead of the usual $M[i][j]$. Another example is the data type used to represent the\nfeatures. A single feature cannot be represented by a single DOUBLE variable; it is instead a struct-like unit $[prev\_label, label, f\_index, start\_pos, exist]$.\nSince there is no array-of-struct type, we had to represent it with a one-dimensional array,\nand we also have to store the features for a document using an array of doubles instead of an array of structs.\n\n\subsection{Training Data Feature Extraction}\nGiven the training data, SQL statements are written to extract all the features. It turns out that every type of feature mentioned above can be extracted by a single SQL statement, which makes the code succinct. 
We illustrate the training data feature extraction with example SQL statements.\n\n\paragraph{Sample Feature Extraction SQLs for edge features and regex features}\n\begin{itemize}\n\item $SQL_1$:\\\n\begin{lstlisting}[language=SQL,gobble=4]\n    SELECT doc2.start_pos, doc2.doc_id, 'E.', ARRAY[doc1.label, doc2.label]\n    FROM   segmenttbl doc1, segmenttbl doc2\n    WHERE  doc1.doc_id = doc2.doc_id AND doc1.start_pos+1 = doc2.start_pos\n\end{lstlisting}\n\n\item $SQL_2$:\\\n\begin{lstlisting}[language=SQL,gobble=4]\n    SELECT start_pos, doc_id, 'R_' || name, ARRAY[-1, label]\n    FROM  regextbl, segmenttbl\n    WHERE seg_text ~ pattern\n\end{lstlisting}\n\end{itemize}\n\n\paragraph{Build the feature dictionary and assign each feature a unique feature id}\n\begin{itemize}\n\item $SQL_3$\\\n\begin{lstlisting}[language=SQL,gobble=4]\n    INSERT INTO tmp_featureset(f_name, feature)\n    SELECT DISTINCT f_name, feature\n    FROM   tmp1_feature;\n    INSERT INTO featureset(f_index, f_name, feature)\n    SELECT nextval('seq')-1, f_name, feature\n    FROM   tmp_featureset;\n\end{lstlisting}\n\end{itemize}\n\n\paragraph{Generate the sparse\_r table}\n\begin{itemize}\n\item $SQL_4$\\\n\begin{lstlisting}[language=SQL,gobble=4]\n    INSERT INTO rtbl(start_pos,doc_id,feature)\n    SELECT start_pos, doc_id, array_cat(fset.feature,\n\t\t\tARRAY[f_index,start_pos,\n\t\t\tCASE WHEN tmp1_feature.feature = fset.feature THEN 1\n\t\t\tELSE 0 END] )\n    FROM   tmp1_feature, featureset fset\n    WHERE  tmp1_feature.f_name = fset.f_name AND fset.f_name <> 'E.';\n\end{lstlisting}\n\end{itemize}\n\n\n\nThe final input table schema, which contains all the feature data for the CRF learning algorithm, is as follows:\n\begin{center}\n    \begin{tabular}{ | l | l | l | l |}\n    \hline\n    doc\_id & sparse\_r & dense\_m & sparse\_m \\\n    \hline\n    \end{tabular}\n\end{center}\n\n\begin{itemize}\n\n\item\nsparse r feature (single-state feature): (prev\_label, label, f\_index, start\_pos, exist)\n\n\begin{center}\n    \begin{tabular}{ | l | l |}\n    \hline\n    label & Description \\ \hline\n    prev\_label & the label of the previous token; it is always 0 in the r table.\\\n    label       & the label of the single-state feature\\\n    f\_index    & the index of the feature in the feature table\\\n    start\_pos  & the position of the token (starting from 0)\\\n    exist       & indicates whether the token exists in the actual training data set\\\n    \hline\n    \end{tabular}\n\end{center}\n\n\item\ndense m feature:\n(prev\_label, label, f\_index, start\_pos, exist)\n\begin{center}\n    \begin{tabular}{ | l | l |}\n    \hline\n    label & Description \\ \hline\n    prev\_label & the label of the previous token.\\\n    label       & the label of the current token\\\n    f\_index    & the index of the feature in the feature table\\\n    start\_pos  & the position of the token in a sentence (starting from 0)\\\n    exist       & indicates whether the token exists in the actual training data set\\\n    \hline\n    \end{tabular}\n\end{center}\n\n\item\nsparse m feature: (f\_index, prev\_label, label)\n\begin{center}\n    \begin{tabular}{ | l | l |}\n    \hline\n    label & Description \\ \hline\n    f\_index    &  the index of the feature in the feature table\\\n    prev\_label &  the label of the previous token \\\n    label       &  the label of the current token\\\n    \hline\n    
\end{tabular}\n\end{center}\n\n\end{itemize}\nFor performance considerations, we split the m feature into the $dense\_m$ feature and the $sparse\_m$ feature.\\\nThe actual $sparse\_r$ table is the array union of the individual r features, ordered by the start position of the tokens.\nSo the function that computes the gradient vector and log-likelihood can scan the feature arrays from beginning to end.\n\n\subsection{Learned Model}\nThe CRF learning algorithm generates two tables: the feature table and the dictionary table.\nThe feature table stores all the features and their corresponding feature weights.\nThe dictionary contains all the tokens and the number of times they appear in the training data.\\\n\begin {table}[h]\n\caption {Feature table} \label{tab:featuretbl}\n\begin{center}\n    \scriptsize\tt\n    \begin{tabular}{ | l | l | l | l | l || l | l | l | l | l | }\n    \hline\n    f\_index & f\_name & prev\_label & label & weight & f\_index & f\_name & prev\_label & label & weight\\\n    \hline\n    0&'U'&-1&6&2.037& 1&'E.'&2&11&2.746   \\\n    2&'W\_exchequer'&-1&13&1.821& 3&'W\_is'&-1&31&1.802 \\\n    4&'E.'&11&31&2.469& 5&'W\_in'&-1&5&3.252 \\\n    6&'E.'&11&12&1.305& 7&'U'&-1&2&-0.385 \\\n    8&'E.'&31&29&1.958& 9&'U'&-1&29&1.422 \\\n    10&'R\_endsWithIng'&-1&11&1.061&11&'W\_of'&-1&5&3.652 \\\n    12&'S.'&-1&13&1.829& 13&'E.'&24&26&3.282 \\\n    14&'W\_helped'&-1&29&1.214& 15&'E.'&11&24&1.556 \\\n    \hline\n    \end{tabular}\n\end{center}\n\end {table}\n\n\begin {table}[h]\n\caption {Dictionary table} \label{tab:title}\n\begin{center}\n    \scriptsize\tt\n    \begin{tabular}{ | l | l || l | l || l | l || l | l | }\n    \hline\n    token     & total & token   & total & token     & total & token       & total\\\n    'freefall'& 1     & 'policy'& 2     & 'measures'&1      & 'commitment'&1\\\n    'new'&1& 'speech'&1& '''s'&2& 'reckon'&1\\\n    'underlying'&1&'week'&1& 'prevent'&1& 'has'&2\\\n    'failure'&1& 'restated'&1&'announce'&1& 'thursday'&1\\\n    'but'&1& 'lawson'&1& 'last'&1& 'firm'&1\\\n    'exchequer'&1& 'helped'&1& 'sterling'&2& $\ldots$ & $\ldots$\\\n    \hline\n    \end{tabular}\n\end{center}\n\end {table}\n\n\n\subsection{Testing Data Feature Extraction}\n  This component extracts features from the testing data based on the learned models.\n  It will produce two factor tables,\n  $viterbi\_mtbl$ and $viterbi\_rtbl$. 
The $viterbi\\_mtbl$\n  table and a $viterbi\\_rtbl$ table are used to calculate the best label\n  sequence for each sentence.\n\n\\paragraph{Sample Feature Extraction SQLs}\n\\begin{itemize}\n\\item $SQL_1$: Extracting unique tokens:\\\\\n              \\begin{lstlisting}[language=SQL,gobble=4]\n                        INSERT INTO segment_hashtbl\n                        SELECT DISTINCT seg_text\n                        FROM   segmenttbl\n              \\end{lstlisting}\n\n\\item $SQL_2$: Summerize over all single state features with respect to specific tokens and labels :\\\\\n              \\begin{lstlisting}[language=SQL,gobble=4]\n                        INSERT INTO viterbi_rtbl\n                        SELECT seg_text, label, SUM(value)\n                        FROM   rtbl\n                        GROUP BY seg_text,label\n              \\end{lstlisting}\n\\end{itemize}\n\\begin{center}\n    \\begin{tabular}{ | l | l | l | l |}\n    \\hline\n    doc\\_id & $viterbi\\_mtbl$ & $viterbi\\_rtbl$ \\\\\n    \\hline\n    \\end{tabular}\n\\end{center}\n\n  \\begin{itemize}\n  \\item\n  $viterbi\\_mtbl$ table\n  encodes the edge features which are solely dependent on upon current label and\n  previous y value. The m table has three columns which are prev\\_label, label,\n  and value respectively.\n  If the number of labels is $n$, then the m factor table will have $n^2$\n  rows. Each row encodes the transition feature weight value from the previous label\n  to the current label.\n\n  startFeature is considered as a special edge feature which is from the\n  beginning to the first token. Likewise, endFeature can be considered\n  as a special edge feature which is from the last token to the very end.\n  So m table encodes the edgeFeature, startFeature, and endFeature.\n  If the total number of labels in the label space is 45 from 0 to 44,\n  then the m factor array is as follows:\n  \\item\n  $viterbi\\_r$ table\n  is related to specific tokens. It encodes the single state features,\n  e.g., wordFeature, RegexFeature for all tokens. The r table is represented as\n  shown in the table.\\\\\n\n\\begin {table}[h]\n\\caption {viterbi\\_mtbl table} \\label{tab:viterbimtbl}\n \\begin{center}\n%\\tiny\\tt\n  \\begin{tabular}{l*{6}{c}r}\n   token             & 0   & 1   & 2   & 3   & ... & 43 &  44 \\\\\n   \\hline\n  -1                 & 2.1 & 1.1 & 1.0 & 1.1 & 1.1 & 2.1 & 1.1  \\\\\n   0                 & 1.1 & 3.9 & 1.2 & 2.1 & 2.8 & 1.8 & 0.8  \\\\\n   1                 & 0.7 & 1.7 & 2.9 & 3.8 & 0.6 & 3.2 & 0.2  \\\\\n   2                 & 0.2 & 3.2 & 3.8 & 2.9 & 0.2 & 0.1 & 0.2  \\\\\n   3                 & 1.2 & 6.9 & 7.8 & 8.0 & 0.1 & 1.9 & 1.7  \\\\\n   ...               & ... & ... & ... & ... & ... & ... & ...  \\\\\n   44                & 8.2 & 1.8 & 3.7 & 2.1 & 7.2 & 1.3 & 7.2  \\\\\n   45                & 1.8 & 7.8 & 5.6 & 9.8 & 2.3 & 9.4 & 1.1\n  \\end{tabular}\n \\end{center}\n\\end{table}\n\\begin {table}[h]\n\\caption {viterbi\\_rtbl table} \\label{tab:viterbirtbl}\n \\begin{center}\n%\\tiny\\tt\n  \\begin{tabular}{l*{6}{c}r}\n   token             & 0   & 1   & 2   & 3   & ... & 43 &  44 \\\\\n   \\hline\n   madlib            & 0.2 & 4.1 & 0.0 & 2.1 & 0.1 & 2.5 & 1.2  \\\\\n   is                & 1.3 & 3.0 & 0.2 & 3.1 & 0.8 & 1.9 & 0.9  \\\\\n   an                & 0.9 & 1.1 & 1.9 & 3.8 & 0.7 & 3.8 & 0.7  \\\\\n   open-source       & 0.8 & 0.2 & 1.8 & 2.7 & 0.5 & 0.8 & 0.1  \\\\\n   library           & 1.8 & 1.9 & 1.8 & 8.7 & 0.2 & 1.8 & 1.1  \\\\\n   ...               & ... & ... & ... 
& ... & ... & ...  \\\n  \end{tabular}\n \end{center}\n\end{table}\n\end{itemize}\n\n\section{Linear-chain CRF Inference}\nThe Viterbi algorithm \cite{DBLP:journals/scholarpedia/Viterbi09} is a popular algorithm to find the top-k most likely labelings of a document for CRF models.\nFor tasks in the natural language processing domain, it is sufficient to generate only the best label sequence.\nWe chose to use a SQL statement to drive the inference over all documents.\nIn Greenplum, Viterbi can be run in parallel over different subsets of the documents on a multi-core machine.\n\n\subsection{Parallel CRF Inference}\nThe $vcrf\_top1\_label$ function is implemented sequentially, and each function call finishes the labeling of one document.\nThe inference is parallel at the level of documents. We use a SQL statement to drive the inference of all documents,\nso the CRF inference is trivially parallel.\n\begin{lstlisting}[language=SQL,gobble=4]\n        SELECT doc_id, vcrf_top1_label(mfactors.score, rfactors.score)\n        FROM   mfactors,rfactors\n\end{lstlisting}\n\n\subsection{Viterbi Inference Algorithm}\n\[\nV(i,y) =\n\begin{cases}\n \max_{y^\prime}(V(i-1,y^\prime)) + \textstyle \sum_{k=1}^K \lambda_kf_k(y,y^\prime,x_i), & \text{if } i\ge0 \\\n 0, & \text{if } i=-1.\n\end{cases}\n\]\n\n\subsection{Viterbi Inference Output}\nThe final inference output produces the best label sequence for each document and also the conditional probability given the\nobserved input sequence.\n\begin {table}[h]\n\caption {Viterbi inference output}\n\begin{center}\n    \scriptsize\tt\n    \begin{tabular}{ | l | l | l | l | l | }\n    \hline\n    doc\_id & start\_pos & token & label & probability     \\\hline\n    1   & 0    & madlib        & proper noun, singular &0.6\\\n    1   & 1    & is            & verb, base form       &0.6 \\\n    1   & 2    & an            & determiner            &0.6 \\\n    1   & 3    & open-source   & adjective             &0.6 \\\n    1   & 4    & library       & noun                  &0.6 \\\n    1   & 5    & for           & preposition           &0.6 \\\n    1   & 6    & scalable      & adjective             &0.6 \\\n    1   & 7    & in-database   & adverb                &0.6 \\\n    1   & 8    & analytics     & noun, singular        &0.6 \\\n    1   & 9    & .             
& sentence-final punc   &0.6 \\\n    2   & 0    & it            & personal pronoun      &0.4 \\\n    2   & 1    & provides      & verb, base form       &0.4 \\\n    2   & 2    & data-parallel & noun                  &0.4 \\\n    2   & 3    & $\ldots$      & $\ldots$                    &0.4 \\\n    \hline\n    \end{tabular}\n\end{center}\n\end{table}\n", "meta": {"hexsha": "9a1858d05cdced5b966ae637dfdd1d7e9fbe5e95", "size": 28577, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/design/modules/crf.tex", "max_stars_repo_name": "madlib/archived_madlib", "max_stars_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-09-18T07:44:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T19:45:18.000Z", "max_issues_repo_path": "doc/design/modules/crf.tex", "max_issues_repo_name": "madlib/archived_madlib", "max_issues_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/design/modules/crf.tex", "max_forks_repo_name": "madlib/archived_madlib", "max_forks_repo_head_hexsha": "5b964cb50c562f8f8fd4bc47556cd2bbb49d27e4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4700996678, "max_line_length": 271, "alphanum_fraction": 0.6708191903, "num_tokens": 8634, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5559351782570245}}
{"text": "\documentclass[letterpaper]{article}\n\n\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\n\usepackage{enumerate,hyperref}\n\usepackage[margin=1in]{geometry}\n\usepackage[section]{placeins}\n\n\theoremstyle{definition}\n\newtheorem{problem}{Problem}\n\newtheorem*{lemma}{Lemma}\n\newtheorem*{corollary}{Corollary}\n\n\providecommand{\equationref}[1]{Equation \eqref{eq:#1}}\n\providecommand{\needscite}{[\textit{Citation Needed}]}\n\n\setcounter{secnumdepth}{0}\n\n\title{Spell Damage Analysis and Stat Weights}\n\author{Balor - Anathema}\n\date{Status: DRAFT. Last updated \today}\n\n\begin{document}\n\maketitle\nThis analysis was motivated by determining stat weights for a Balance druid casting Starfire. Wherever possible, however, things were kept general so as to be applicable to other spells and classes.\n\section{Assumptions}\nWe use the following assumptions about how damage works.\n\begin{itemize}\n\t\item Spells have a base chance to hit that is purely a function of player level, target level, and Hit gear. Resistance does not affect a spell's chance to hit. For raid bosses, the base spell hit is 83, and thus a spell's percent chance to hit is $(83 + H)$ where $H$ is your total hit bonus from gear or talents.\needscite\n\t\item Whether a spell lands as a critical hit is determined after a spell is known to land. That is, a 10\% chance to crit means that 10\% of all spells \textit{that hit} will be critical hits, not that 10\% of all spells that are cast will crit. \needscite This is in contrast to melee attacks, which use a different system to determine hit and crit chance.\n\t\item Critical hits provide a fixed multiplicative boost to the damage of a spell. This is usually a 1.5 multiplier, but can vary depending on talents. \needscite For Balance Druids, the Vengeance talent gives a 2.0 damage multiplier on critical hits.\n\t\item Spellpower increases the damage of a spell by increasing the damage of a non-resisted, non-critical hit by $c$ times your total spellpower, where $c$ is a fixed constant for a given spell. Usually, this constant is given by the default cast time for that spell divided by 3.5. \needscite\n\end{itemize}\n\n\section{The Damage Formula}\nLet $B$ be the base damage of a spell, $c$ be that spell's corresponding spell coefficient, $H \in [0, 16]$ be a player's current total hit bonus (as a percentage, so +12\% hit is $H = 12$. Note that player hit chance cannot be increased to 100, so only the first 16 are useful \needscite), $P$ be a player's total spellpower that applies to that spell, and $R \in [0, 100]$ be the player's spell crit, also as a percentage. Finally, let $x$ be the crit bonus, that is, the crit multiplier minus one (for example, if spell crits do 1.5 times damage in the default case, $x = 0.5$). Then the expected damage from one spell cast on a raid boss is given by the following.\n\begin{equation}\n\left(0.83 + \frac{H}{100}\right)\left(B + cP\right)\left(1 + x\frac{R}{100}\right)\n\label{eq:damage}\n\end{equation}\nTo get DPS, we can simply divide this by $T$, the total casting time of the spell. There is one complication here for druids, however. The Nature's Grace talent decreases the cast time of your next spell by 0.5 seconds whenever a spell lands a critical hit. Using assumption 2 above, we know that the probability of one spell resulting in a critical hit is $(0.83 + \frac{H}{100})(\frac{R}{100})$. 
Therefore, we can calculate an average cast time for the spell over a sufficiently long encounter as the following. Note that $t$ here is the casting time reduction that a critical hit yields. In the case of having Nature's Grace, $t=0.5$. If one does not have Nature's Grace, then $t=0$.\n\begin{equation}\nT - t\left(0.83 + \frac{H}{100}\right)\frac{R}{100}\n\label{eq:time}\n\end{equation}\n\nNote that this is somewhat inaccurate, as the first spell in a fight is guaranteed to take $T$ time to cast, and so this is truly only the expected cast time for all subsequent spells. Factoring in the additional time from the first cast would require making assumptions on the total encounter length, which we hope to avoid here. Over sufficiently long encounters, these will converge to the same, so the effect of this is ignored in the following analysis.\n\nDividing the expected damage in \equationref{damage} by the expected cast time in \equationref{time} yields our expected total DPS, $D$.\n\n\begin{equation}\nD = d\frac{\left(0.83 + \frac{H}{100}\right)\left(mB + cP\right)\left(1 + x\frac{R}{100}\right)}{T - t\left(0.83 + \frac{H}{100}\right)\frac{R}{100}}\n\end{equation}\n\nFor completeness, we have added in two additional factors, $d$, and $m$. $m$ is any multiplicative modifier on the base damage of a spell that might arise from talents or set bonuses. For example, the Druid talent Moonfury sets $m=1.1$. $d$ is any multiplicative damage modifier on the total damage of the spell, including things like Curse of Shadows and the target's resistance. (TODO: add argument for why we can treat resistance, which really determines a probability distribution of multiplicative damage reductions, as one simple average damage reduction. Also verify that either resistance cannot cause full 100\% damage reductions, or, that if it does, a spell can still be a crit while being 100\% resisted. If this is untrue, resistance will have an effect on Nature's Grace proc rates.).\n\n\section{Stat Weightings}\nTo determine how we should value each stat ($H$, $P$, $R$), we have to examine how DPS varies as you change each stat. To do so, we will use derivatives, which measure the rate of change of the function with respect to a given parameter. The partial derivatives of DPS with respect to $H$, $P$, and $R$ are given below.\n\n\begin{equation}\n\frac{\partial D}{\partial P} = d\frac{c\left(83+H\right)\left(100 + xR\right)}{100^2T - t(83 + H)R}\n\end{equation}\n\n\begin{equation}\n\frac{\partial D}{\partial H} = d\left(mB + cP\right)\left(100+xR\right) \left(\frac{100^2T}{\left(100^2T - t\left(83+H\right)R\right)^2}\right)\n\end{equation}\n\n\begin{equation}\n\frac{\partial D}{\partial R} = d\left(mB+cP\right)\left(83+H\right) \left(\frac{xT + t\left(0.83 + \frac{H}{100}\right)}{\left(100T - t\left(0.83 + \frac{H}{100}\right)R\right)^2}\right)\n\end{equation}\n\n$\frac{\partial D}{\partial P}$ says that, when adding a very small amount of $P$, we expect the function value to change by $\frac{\partial D}{\partial P}$ \textit{per point of $P$ we varied}. It is the limiting value for very small changes of $P$, which gives a sense of how relevant $P$ is to the output function at a given point in the parameter space.\n\nSince we are concerned with stat weights, what we care most about is how these derivatives relate to each other. 
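\n\nBefore taking ratios, it is worth sanity-checking the derivative formulas numerically. The short script below is mine, written in Python with made-up illustrative parameter values (not actual Starfire constants); it compares the closed-form $\frac{\partial D}{\partial P}$ and $\frac{\partial D}{\partial H}$ against finite-difference estimates.\n\begin{verbatim}\nimport math\n\n# Illustrative values only\nB, c, m, d = 480.0, 1.0, 1.1, 1.0  # base damage, coefficient, m, d\nx, t, T = 1.0, 0.5, 3.0            # crit bonus, Nature's Grace, cast time\n\ndef dps(H, P, R):\n    hit = 0.83 + H / 100\n    return d * hit * (m*B + c*P) * (1 + x*R/100) / (T - t*hit*R/100)\n\nH, P, R = 4.0, 400.0, 20.0\neps = 1e-6\nfd_P = (dps(H, P + eps, R) - dps(H, P - eps, R)) / (2*eps)\nfd_H = (dps(H + eps, P, R) - dps(H - eps, P, R)) / (2*eps)\n\ncf_P = d * c*(83 + H)*(100 + x*R) / (100**2 * T - t*(83 + H)*R)\ncf_H = (d * (m*B + c*P)*(100 + x*R)\n          * 100**2 * T / (100**2 * T - t*(83 + H)*R)**2)\n\nassert math.isclose(fd_P, cf_P, rel_tol=1e-6)\nassert math.isclose(fd_H, cf_H, rel_tol=1e-6)\n\end{verbatim}\nWith both checks passing, we can take the ratios below with some confidence.\n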
If we set the value of one spellpower to be 1 by convention, then taking ratios of derivatives will give us values for the other stats, $R$ and $H$. These equations are as follows.\n\n\\begin{equation}\n\\textrm{HitWeight} = \\frac{\\frac{\\partial D}{\\partial H}}{\\frac{\\partial D}{\\partial P}} = \\frac{\\frac{mB}{c} + P}{83 + H} \\left(\\frac{100^2 T}{100^2T - t(83 + H)R}\\right)\n\\end{equation}\n\n\\begin{equation}\n\\textrm{CritWeight} = \\frac{\\frac{\\partial D}{\\partial R}}{\\frac{\\partial D}{\\partial P}} = x\\frac{\\frac{mB}{c} + P}{100+xR} \\left(\\frac{T + \\frac{t}{x}\\left(0.83+\\frac{H}{100}\\right)}{T - t\\left(0.83 + \\frac{H}{100}\\right)\\frac{R}{100}}\\right)\n\\end{equation}\n\n\\subsection{No Nature's Grace}\nTo slightly generalize these to other classes, we can remove Nature's Grace from the equations by setting the casting time reduction from a crit to zero. That is, by setting $t=0$. Note that the equations were already factorized to make the impact of Nature's Grace apparent. Upon doing so, we get the following stat weights, which should be applicable to other classes.\n\n\\begin{equation}\n\\nonumber\n\\textrm{SpellpowerWeight} = 1\n\\end{equation}\n\\begin{equation}\n\\nonumber\n\\textrm{HitWeight} = \\frac{\\frac{mB}{c} + P}{83 + H}\n\\end{equation}\n\\begin{equation}\n\\nonumber\n\\textrm{CritWeight} = x\\frac{\\frac{mB}{c} + P}{100 + xR}\n\\end{equation}\n\n\n\\end{document}", "meta": {"hexsha": "9dd89e7b899168398a25938a30b9ba776b9e0284", "size": 8029, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_stars_repo_name": "kmmiles/libclassic", "max_stars_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_issues_repo_name": "kmmiles/libclassic", "max_issues_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-12-04T20:57:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-27T07:02:52.000Z", "max_forks_repo_path": "contrib/whitepaper/SpellDamage.tex", "max_forks_repo_name": "ultrabis/libclassic", "max_forks_repo_head_hexsha": "d1cfbb110a49677b8cb1cc82231e4931efa02e63", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 79.495049505, "max_line_length": 793, "alphanum_fraction": 0.7430564205, "num_tokens": 2290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430520409023, "lm_q2_score": 0.6688802537704064, "lm_q1q2_score": 0.5559351755686288}}
{"text": " \n\\emph{Mathematics} underpins proof systems such as sagaproofs because it offers a well established framework for rigorous analysis necessary to justify security.\nHere we are referring to the use of mathematics in the \\emph{construction} of proof systems, but other types of mathematics are used in the \\emph{analysis} of proof systems.\nHere we discuss the motivations behind the use of finite algebraic structures (in particular rings and fields) throughout sagaproofs and other proof systems.\nWe also introduce the important algebraic concepts of quotients and extensions.\nThis discussion is informal and formal definitions are clarified elsewhere when appropriate.\n\n\n\\section{Restricting to finite structures}\n\nWhile mathematical structures are many and varied, only those suited for constructing proof systems are of interest for our purposes.\nWhat exactly constitutes suitability for proof systems is not yet fully qualified, but a minimal requirement is efficient representation and manipulation on a computer.\nInfinite structures can be challenging to handle in this sense, especially when enforcing consistency across computers, which is crucial in the case of proof systems involving many parties.\nTherefore, narrowing focus to finite structures seems justifiable.\n\nBy many metrics, the majority of mathematical research targets infinite structures, so narrowing focus to finite structures excludes the majority of possibilities.\nThis may be seen as both a boon and a bane in that while limiting opportunity it also focuses attention.\nThe rich class of structures just beyond finite structures is finitely \\emph{generated} structures, which cannot be entirely ruled out, and as such the choice of finite structures is not fully justified.\nNevertheless, finite structures are the structures of choice for now.\n\n\n\\section{Choice of finite structures}\n\nRestricting to finite structures still leaves an infinite number of possibilities.\nWe must further restrict attention to those structures in which we see opportunity for proof systems.\nIn fact, we must restrict even further to those that seem appropriate for \\emph{practical} proof systems.\n\nDiverging for a moment, the term `practical' for qualifying proof systems, or more generally any solution to a technical problem, can be ambiguous.\nSome solutions are capable of implementation in the near future, say the next decade.\nThese solutions are largely dependent on current technologies and their near-term trajectories.\nOther solutions are only capable of implementation some time in the distant future.\nThese solutions are largely insensitive to the current state of technology.\nThroughout this project we refer to the former types of solutions as `practical' and the latter types as `impractical'.\nIn both cases, academic literature may refer to a solution as `efficient' because it requires bounded resources as a function of the problem size.\nSpecifically, bounds are taken to be polynomial functions in order that efficient solutions may be combined in various ways while remaining efficient.\nDo not confuse `efficient' with `practical' as many solutions are efficient but impractical.\nA solution may be efficient since it meets certain resource bounds, and yet impractical since the resources demanded are more than current technology can provide.\n\n\\begin{remark}\n    While there is some fundamental difference between efficient and inefficient solutions, there is no fundamental difference between practical and impractical 
    The practicality of a solution is dependent on the capability of modern technology, which in turn is dependent on the ever-changing scale and sophistication of human civilization.
    The efficiency of a solution, on the other hand, is fixed and independent of human civilization.
    There is no reason the efficiency and practicality of solutions should exhibit any correlation.

    On a spectrum from the Planck length to the diameter of the observable universe, the size of human civilization sits in a rather arbitrary position roughly in the middle.
    Given our relative size, we are lucky to control enough atoms in this universe to implement many efficient solutions, and yet our limited control leaves so many efficient solutions out of practical reach.
\end{remark}

In our context the solutions we seek are proof systems, and the technologies of concern are computing and networking technologies.
In particular, we are concerned with the affordable speed and volume of computer calculations and network signals.
These metrics are what separate the practical from the impractical.

Academic literature on proof systems can be roughly split into the `theoretical' type and the `applied' type.
Theoretical research is motivated by efficient solutions, which may be impractical.
Applied research is motivated by practical solutions, which are always efficient.
Since theoretical research pursues solutions independent of current technology, any mathematical structure amenable to proof systems suffices.
Since applied research pursues solutions highly dependent on current technology, choices of mathematical structures are more severely limited.

A set of finite structures in popular use in theoretical research but of rare use in applied research is graphs, that is, the set of structures each consisting of a finite number of objects and the connections between them.
For example, expander graphs played a central role in \cite{Din07} to prove the PCP Theorem, likely the most important result in theoretical proof systems.
Graphs are extensively studied, including the expander graphs that may be of use for our purposes.
The use of graphs in proof systems, however, seems to require resources beyond the capacity of modern technology.

On the other hand, a set of finite structures in popular use in both theoretical and applied research is finite fields, that is, a certain set of highly symmetric algebraic structures.
Finite fields are extensively studied and possess many homomorphic properties desirable for our purposes.
Furthermore, many uses of finite fields in proof systems require resources within the capacity of modern computing and networking technologies.

Much of our focus in this project is devoted to finite fields.
Finite fields are the finite objects of study in `field theory', which itself is embedded in `ring theory.'
Our choice of finite structures is finite rings, and much of the time we restrict attention to finite fields.
If this choice of finite structures seems arbitrary, see the section below regarding arithmetic.
Ring theory may be viewed as the natural generalization of arithmetic, and arithmetic is anything but arbitrary.


\section{Representing structures}

Our mathematical structures of choice, rings and in particular finite fields, belong to a larger class of mathematical structures called \emph{algebraic} structures.
An algebraic structure consists of a set of objects, called \emph{elements}, and relationships between those elements.
A defining characteristic of algebraic structures is that relationships between elements are described by the elements themselves.
This is in contrast to other types of structures like graphs, where the connections between objects (i.e.\ edges between vertices) are not described by objects but by truth values.
The circular nature of algebraic relationships may be what gives them the richness that has attracted so much study.

In an algebraic structure, the most basic relationship is the familiar notion of equality, and every element is considered equal to itself.
Equality is a unary relationship as it pertains to every element individually.
Unary relationships can be represented as unary functions between the elements.
In the case of equality the unary function would be the identity function, that which sends every element to itself.

One step beyond unary relationships are binary relationships.
A binary relationship applies to every pair of elements, describing the relationship of each pair by a third element.
Binary relationships can be represented as binary functions between the elements.
Beyond binary relationships are ternary relationships applying to every triple of elements, and one may continue further with relationships of every higher arity.
It turns out, however, that most study does not go beyond binary relationships.
This is likely because binary relationships capture enough complexity for most purposes of study, and relationships of higher arity either yield no additional complexity or yield so much complexity that only special cases can be meaningfully studied.

Since relationships are represented as functions from elements to elements, they are called \emph{operations} in that they operate on input elements to produce an output element.
Formally, an algebraic structure consists of a set of elements together with one or more operations defined on those elements.
Informally, however, an algebraic structure is often identified only by the set of elements, and the operations are left to be determined from context.
Henceforth in our study of structures, all structures are algebraic structures, represented by sets and operations that can be gleaned from context when not specified.

A ring is an algebraic structure with addition and multiplication operations, from which one may derive others such as subtraction and exponentiation.
A finite field is a ring that also allows for a division operation.


\section{Building structures}

Complex algebraic structures can be built from simpler structures by a sequence of steps through intermediate structures.
There may exist multiple paths from one structure to another, and the steps in any path may not be strictly ordered.
A step from one structure to another is a transformation from one to another.
There are three different kinds of transformations.

\begin{itemize}

    \item{Derived definition}
    Derived definitions are different in nature from the quotients and extensions described below.
    Given a structure, we may define a new set of elements in terms of existing elements, and new operations in terms of existing operations.
    An example is matrix algebra, where new elements are blocks of existing elements, and new operations are formulated in terms of existing addition and multiplication operations.
    \item{Quotient}
    A \emph{quotient} is best thought of as a contraction, the dual of an extension (see below).
    The name `contraction' would be most fitting, but we follow the standard name `quotient.'
    Given a structure, we may declare two elements to be equal.
    Doing so yields a new structure with the same operations and fewer elements, thus a contraction of the original structure.

    Such a transformation can naturally be expressed by an equation involving any existing elements.
    When the equation holds in the original structure the transformation is trivial and no contraction occurs.
    When the equation does not hold in the original structure, the transformation is non-trivial and any amount of contraction may occur.

    An example is taking the integers with addition and multiplication and declaring the equation $2=0$.
    This yields a structure with two elements identifiable as the two truth values.
    Addition now corresponds to exclusive disjunction (XOR) and multiplication to conjunction (AND).

    \item{Extension}
    An \emph{extension} is best thought of as an expansion, the dual of a quotient.
    The name `expansion' would be most fitting, but we follow the standard name `extension.'
    Given a structure, we may introduce a new element.
    Doing so yields a new structure with the same operations and more elements, thus an expansion of the original structure.

    Such a transformation can naturally be expressed by an optional equation describing the new element in terms of existing elements.
    When the equation is not given, the new element is called \emph{transcendental} over the original structure, and infinite expansion occurs.
    When the equation is given, the new element is called \emph{algebraic} over the original structure, and any amount of expansion may occur.
    The word `transcendental' reflects how the new element transcends anything expressible in the original structure.
    The word `algebraic' reflects how the new element is algebraically expressible in the original structure.

    An example is taking the real numbers with addition and multiplication and introducing the element $i$ with the equation $i^2+1=0$.
    This yields the complex numbers.

\end{itemize}
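To make the quotient and extension transformations concrete, here is a small Python sketch (ours, not part of the text). It builds the two-element quotient from the example above, whose addition is XOR and whose multiplication is AND, and then algebraically extends it by a new element $w$ satisfying $w^2+w+1=0$; unlike the $i^2+1=0$ example over the reals, this choice keeps the resulting structure finite. The class names are of course arbitrary.

\begin{verbatim}
class ZMod2:
    """The integers after declaring 2 = 0: a two-element quotient ring."""
    def __init__(self, v):
        self.v = v % 2
    def __add__(self, o): return ZMod2(self.v ^ o.v)  # addition is XOR
    def __mul__(self, o): return ZMod2(self.v & o.v)  # multiplication is AND
    def __repr__(self): return str(self.v)

class Ext:
    """Elements a + b*w over ZMod2, with defining relation w^2 = w + 1
    (signs vanish modulo 2, so this is the same as w^2 + w + 1 = 0)."""
    def __init__(self, a, b):
        self.a, self.b = a % 2, b % 2
    def __add__(self, o): return Ext(self.a ^ o.a, self.b ^ o.b)
    def __mul__(self, o):
        # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2; then w^2 -> w + 1
        hi = self.b & o.b
        return Ext((self.a & o.a) ^ hi,
                   (self.a & o.b) ^ (self.b & o.a) ^ hi)
    def __repr__(self): return f"{self.a} + {self.b}w"

print(ZMod2(1) + ZMod2(1))   # 0, i.e. 1 XOR 1
w = Ext(0, 1)
print(w * w)                 # 1 + 1w, confirming w^2 = w + 1
\end{verbatim}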
In the special case of rings and finite fields, we may express quotients and (usually) extensions in special ways, and special notation is used for each.
In both cases, we may replace the equation specifying the new structure with only an expression to act as one side of the equation, with the other side predetermined.
In the case of quotients, the predetermined side is zero, an element present in every ring structure.
In the case of extensions, the predetermined side is the new element.
Isolating the new element on one side of the equation is usually doable using ring operations, namely addition, multiplication, exponentiation, and their inverted versions when they exist (as they do for finite fields).

As for notation, suppose we have an existing ring structure $R$ and expression $E$.
In the case of quotients we may express the new structure as $R\mod E$ or $R/E$.
The former is read as `$R$ modulo $E$' and the latter as `$R$ over $E$.'
In the case of extensions we may express the new structure as $R[E]$, and iterated extensions of the form $R[E_1][E_2]\cdots[E_n]$ may be simplified to $R[E_1,E_2,\dots,E_n]$.
This is read as `adjoining elements $E_1,E_2,\dots,E_n$ to $R$.'
In the quotient and extension examples above, the relevant expressions are $2$ and $i^2+1$ respectively, and the relevant notations are $\Z\mod 2$ and $\R[\sqrt{-1}]$ respectively.

A common sequence of steps worth mentioning is an infinite expansion followed by a contraction.
Beginning with $R$ we first adjoin $x$ for a transcendental extension $R[x]$.
Then we set some expression $f(x)$ to zero for a contraction $R[x]/f(x)$.


\section{Arithmetic and its importance}

To conclude this discussion justifying our choice of rings and finite fields, it may be motivating to examine their relationship to arithmetic.
To clarify, by arithmetic we mean numbers under addition and multiplication.
All rings and finite fields can be obtained by using arithmetic as a starting structure and applying a sequence of the three transformations above, namely derived definitions, quotients, and extensions.
Furthermore, the translation from arithmetic to any ring structure is \emph{natural}, usually only involving a few simple transformations.
Many other structures in mathematics, in contrast, do not flow so naturally from a single source like arithmetic.
If we accept this as justification for rings conditioned on arithmetic as our starting point, we may shift attention to the justification for arithmetic.

Mathematics originates from the study of quantities, specifically via their representations as numbers and the ways in which they can be manipulated.
This original study is known as arithmetic, and our promised justification for arithmetic is tasked with showing the naturality of quantities and their ways of manipulation.

Regarding quantities, represented as numbers, their naturality and ubiquity is clear and taken for granted.
Analysis of nearly any structure in mathematics involves arithmetic simply because quantities of one thing or another are inevitably relevant.
Even the empty structure consisting of nothing, the simplest structure conceivable, involves the quantity zero.
A natural next step beyond a structure with \emph{nothing} is a structure with \emph{something}, from which we may conceive the quantity \emph{one}.
Continuing this process one may derive all natural numbers, repeatedly defining a new structure different from all previous structures, containing something they do not.

Regarding the manipulation of quantities, that is, operations on numbers, we begin with addition and observe that other operations immediately follow.
The most basic operation performed on numbers is addition, and we take this operation for granted.
With zero as the most basic number and addition as the most basic operation, one may ask what numbers add to zero, and doing so one conceives the negative numbers along with the subtraction operation.
Another natural way to create a new operation is to extend an existing operation by repeating it more than once, or any number of times.
Note how the repetition parameter is itself a \emph{number}.
Repeating addition a certain number of times yields the multiplication operation.
In this case both the operator and the operand are of the same type (a number), a property unique to arithmetic; in other structures, where the operands are not numbers, this symmetry cannot exist.
One may ask what numbers multiply to one, and doing so one conceives the rational numbers, along with the division operation.
One may go further, defining exponentiation, logarithms, roots, and all other arithmetic concepts.
Each structure is naturally derived from the previous.
{"text": "\\section{Quick Methodology}\n\n\\begin{itemize}\n    \\item Fix the parameters \\( \\Lambda \\), \\( \\lambda_i^o \\), \\( \\mu_i \\) and \n    \\( C_i \\). \n    \\item \\( \\forall \\hspace{0.2cm} \\hat{c_i} \\in \\{1,2, \\dots, C_A\\} \\) and \n    \\( \\forall \\hspace{0.2cm} \\hat{c_j} \\in \\{1,2, \\dots, C_B\\} \\) \n    \\item Calculate \\( p_A \\) and \\( p_B = 1-p_A \\) s.t. \\( (W_q)_A = (W_q)_B \\). \n    \\item Calculate the probability \\(P((W_q)_i \\leq 4 \\) hours\n    \\item Fill matrix A with \\( U_{\\hat{c_i}, \\hat{c_j}}^A = \n    1 - |0.95 - P((W_q)_A \\leq 4)| \\) and\n    \\item fill matrix B with \\( U_{\\hat{c_i}, \\hat{c_j}}^B = \n    1 - |0.95 - P((W_q)_B \\leq 4)| \\)\n\\end{itemize}\n\n\n\\begin{table}[h]\n    \\centering\n    A = \n    \\begin{tabular}{|l|l|l|l|}\n    \\hline\n    \\( U_{1,1}^A \\) & \\( U_{1,2}^A \\) & \\dots & \\( U_{1,C_B}^A \\) \\\\ \\hline\n    \\( U_{2,1}^A \\) & \\( U_{2,2}^A \\) & \\dots & \\( U_{2,C_B}^A \\) \\\\ \\hline\n    \\vdots & \\vdots & \\( \\ddots \\) & \\vdots \\\\ \\hline\n    \\( U_{C_A,1}^A \\) & \\( U_{C_A,2}^A \\) & \\dots & \\( U_{C_A,C_B}^A \\) \\\\ \\hline\n    \\end{tabular}\n\\end{table}  \n\n\\begin{table}[h]\n    \\centering\n    B = \n    \\begin{tabular}{|l|l|l|l|}\n    \\hline\n    \\( U_{1,1}^B \\) & \\( U_{1,2}^B \\) & \\dots & \\( U_{1,C_B}^B \\) \\\\ \\hline\n    \\( U_{2,1}^B \\) & \\( U_{2,2}^B \\) & \\dots & \\( U_{2,C_B}^B \\) \\\\ \\hline\n    \\vdots & \\vdots & \\( \\ddots \\) & \\vdots \\\\ \\hline\n    \\( U_{C_A,1}^B \\) & \\( U_{C_A,2}^B \\) & \\dots & \\( U_{C_A,C_B}^B \\) \\\\ \\hline\n    \\end{tabular}\n\\end{table}  \n\n\\begin{itemize}\n    \\item Ambulance decides the proportion of people to distribute to each hospital \n    based on optimal patient distribution.\n\\end{itemize}\n", "meta": {"hexsha": "27b4e8c7cc94ffd0f8bac86a389c05e6ce4c43a5", "size": 1620, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/main/Methodology/Quick/main.tex", "max_stars_repo_name": "11michalis11/AmbulanceDecisionGame", "max_stars_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/main/Methodology/Quick/main.tex", "max_issues_repo_name": "11michalis11/AmbulanceDecisionGame", "max_issues_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 20, "max_issues_repo_issues_event_min_datetime": "2020-04-20T09:08:31.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-23T11:09:25.000Z", "max_forks_repo_path": "tex/main/Methodology/Quick/main.tex", "max_forks_repo_name": "11michalis11/AmbulanceDecisionGame", "max_forks_repo_head_hexsha": "45164ba51da0417297f715e41716cb91facc120f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.0, "max_line_length": 84, "alphanum_fraction": 0.4950617284, "num_tokens": 743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117940706733, "lm_q2_score": 0.6723317123102956, "lm_q1q2_score": 0.5558245560946523}}
{"text": "\\section{Application: A formula for the inverse of a matrix}\n\\label{sec:adjugate}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item Find the cofactor matrix and the adjugate of a matrix.\n  \\item Find the inverse of a matrix using the adjugate formula.\n  \\end{enumerate}\n\\end{outcome}\n\nThe determinant of a matrix also provides a way to find the inverse of\na matrix.  Recall the definition of the inverse of a matrix from\nDefinition~\\ref{def:invertible-matrix}. If $A$ is an\n$n\\times n$-matrix, we say that $A^{-1}$ is the inverse of $A$ if\n$AA^{-1} = I$ and $A^{-1}A=I$.\n\nWe now define a new matrix called the \\textbf{cofactor matrix} of $A$.\nThe cofactor matrix of $A$ is the matrix whose $\\ijth$ entry is the\n$\\ijth$ cofactor of $A$.\n\n\\begin{definition}{The cofactor matrix}{cofactor-matrix}\n  Let $A$ be an $n\\times n$-matrix. Then the \\textbf{cofactor matrix\n    of $A$}%\n  \\index{cofactor matrix}%\n  \\index{matrix!cofactor matrix}, denoted $\\cof(A)$, is defined by\n  \\begin{equation*}\n    \\cof(A)\n    ~=~ \\mat{\\cofactor{A}{ij}}\n    ~=~ \\begin{mymatrix}{cccc}\n      \\cofactor{A}{11} & \\cofactor{A}{12} & \\cdots & \\cofactor{A}{1n} \\\\\n      \\cofactor{A}{21} & \\cofactor{A}{22} & \\cdots & \\cofactor{A}{2n} \\\\\n      \\vdots & \\vdots & \\ddots & \\vdots \\\\\n      \\cofactor{A}{n1} & \\cofactor{A}{n2} & \\cdots & \\cofactor{A}{nn} \\\\\n    \\end{mymatrix},\n  \\end{equation*}\n  where $\\cofactor{A}{ij}$ is the $\\ijth$ cofactor of $A$.\n\\end{definition}\n\nWe will use the cofactor matrix to create a formula for the inverse of\n$A$. First, we define the \\textbf{adjugate}%\n\\index{adjugate of a matrix}%\n\\index{matrix!adjugate} of $A$, denoted $\\adj(A)$, to be the transpose\nof the cofactor matrix:\n\\begin{equation*}\n  \\adj(A) = \\cof(A)^T.\n\\end{equation*}\nThe adjugate is also sometimes called the\n\\textbf{classical adjoint}%\n\\index{classical adjoint}%\n\\index{matrix!classical adjoint} of $A$.\n\n\\begin{example}{Cofactor matrix and adjugate}{cofactor-matrix-and-adjugate}\n  Find the cofactor matrix and the adjugate of $A$, where\n  \\begin{equation*}\n    A = \\begin{mymatrix}{rrr}\n      1 & 2 & 3 \\\\\n      3 & 0 & 1 \\\\\n      1 & 2 & 1 \\\\\n    \\end{mymatrix}.\n  \\end{equation*}\n\\end{example}\n\n\\begin{solution}\n  We first find $\\cof(A)$. To do so, we need to compute the cofactors\n  of $A$. 
We have:
  \begin{eqnarray*}
    \cofactor{A}{11} ~=~ +\minor{A}{11}
    &=&
    \begin{absmatrix}{ccc}
      \strikeh{3.2em}{\strikev{3.2em}{1}} & 2 & 3 \\
      3 & 0 & 1 \\
      1 & 2 & 1 \\
    \end{absmatrix}
    ~=~ \begin{absmatrix}{rr}
      0 & 1 \\
      2 & 1 \\
    \end{absmatrix}
    ~=~ -2,
    \\
    \cofactor{A}{12} ~=~ -\minor{A}{12}
    &=&
    -\begin{absmatrix}{ccc}
      \strikeh{3.2em}{1} & \strikev{3.2em}{2} & 3 \\
      3 & 0 & 1 \\
      1 & 2 & 1 \\
    \end{absmatrix}
    ~=~ -\begin{absmatrix}{rr}
      3 & 1 \\
      1 & 1 \\
    \end{absmatrix}
    ~=~ -2,
    \\
    \cofactor{A}{13} ~=~ +\minor{A}{13}
    &=&
    \begin{absmatrix}{ccc}
      \strikeh{3.2em}{1} & 2 & \strikev{3.2em}{3} \\
      3 & 0 & 1 \\
      1 & 2 & 1 \\
    \end{absmatrix}
    ~=~ \begin{absmatrix}{rr}
      3 & 0 \\
      1 & 2 \\
    \end{absmatrix}
    ~=~ 6,
    \\
    \cofactor{A}{21} ~=~ -\minor{A}{21}
    &=&
    -\begin{absmatrix}{ccc}
      \strikev{3.2em}{1} & 2 & 3 \\
      \strikeh{3.2em}{3} & 0 & 1 \\
      1 & 2 & 1 \\
    \end{absmatrix}
    ~=~ -\begin{absmatrix}{rr}
      2 & 3 \\
      2 & 1 \\
    \end{absmatrix}
    ~=~ 4,
  \end{eqnarray*}
  and so on. Continuing in this way, we find the cofactor matrix
  \begin{equation*}
    \cof(A)
    =
    \begin{mymatrix}{rrr}
      -2 & -2 & 6 \\
      4 & -2 & 0 \\
      2 & 8 & -6 \\
    \end{mymatrix}.
  \end{equation*}
  Finally, the adjugate is the transpose of the cofactor matrix:
  \begin{equation*}
    \adj(A) = \cof(A)^T =
    \begin{mymatrix}{rrr}
      -2 & 4 & 2 \\
      -2 & -2 & 8 \\
      6 & 0 & -6 \\
    \end{mymatrix}.
  \end{equation*}
\end{solution}

The following theorem provides a formula for $A^{-1}$ using the
determinant and the adjugate of $A$.

\begin{theorem}{Formula for the inverse}{inverse-and-determinant}
  Let $A$ be an $n\times n$-matrix. Then
  \begin{equation*}
    A \, \adj(A) = \adj(A)\,A = \det(A)\,I.
  \end{equation*}
  Moreover, $A$ is invertible if and only if $\det(A) \neq 0$. In
  this case, we have:
  \begin{equation*}
    A^{-1} = \frac{1}{\det(A)} \adj(A).
  \end{equation*}
  We call this the \textbf{adjugate formula}%
  \index{determinant!formula for matrix inverse}%
  \index{adjugate formula}%
  \index{inverse!of a matrix!adjugate formula}%
  \index{matrix!inverse!adjugate formula} for the matrix inverse.
\end{theorem}

\begin{proof}
  Recall that the $(i,j)$-entry of $\adj(A)$ is equal to
  $\cofactor{A}{ji}$.  Thus the $(i,j)$-entry of $B=A\,\adj(A)$ is:
  \begin{eqnarray*}
    B_{ij}
    &=& a_{i1}\adj(A)_{1j} + a_{i2}\adj(A)_{2j} + \ldots + a_{in}\adj(A)_{nj} \\
    &=& a_{i1}\cofactor{A}{j1} + a_{i2}\cofactor{A}{j2} + \ldots + a_{in}\cofactor{A}{jn}.
  \end{eqnarray*}
  By the cofactor expansion theorem, we see that this expression for
  $B_{ij}$ is equal to the determinant of the matrix obtained from $A$
  by replacing its $j$th row by $\mat{a_{i1}, a_{i2}, \dots, a_{in}}$,
  i.e., by its $i$th row.

  If $i=j$ then this matrix is $A$ itself and therefore
  $B_{ii}=\det(A)$. If on the other hand $i\neq j$, then this matrix
  has its $i$th row equal to its $j$th row, and therefore $B_{ij}=0$
  in this case.
Thus we obtain:
  \begin{equation*}
    A \, \adj(A) = {\det(A)} I.
  \end{equation*}
  By a similar argument (using columns instead of rows), we can verify that:
  \begin{equation*}
    \adj(A)\,A = {\det(A)} I.
  \end{equation*}
  This proves the first part of the theorem. For the second part,
  assume that $A$ is invertible. Then by
  Theorem~\ref{thm:determinant-invertible}, $\det(A)\neq 0$. Dividing the
  formula from the first part of the theorem by $\det(A)$, we obtain
  \begin{equation*}
    A\paren{\frac{1}{\det(A)}\adj(A)} ~=~ \paren{\frac{1}{\det(A)}\adj(A)}A ~=~ I,
  \end{equation*}
  and therefore
  \begin{equation*}
    A^{-1} = \frac{1}{\det(A)} \adj(A).
  \end{equation*}
  This completes the proof.
\end{proof}

\begin{example}{Finding the inverse using a formula}{inverse-and-determinant}
  Use the adjugate formula to find the inverse of the matrix
  \begin{equation*}
    A=\begin{mymatrix}{rrr}
      1 & 2 & 3 \\
      3 & 0 & 1 \\
      1 & 2 & 1 \\
    \end{mymatrix}.
  \end{equation*}
\end{example}

\begin{solution}
  We must compute
  \begin{equation*}
    A^{-1} ~=~ \frac{1}{\det(A)} \adj(A).
  \end{equation*}
  We will start by computing the determinant. We expand along the
  second row:
  \begin{equation*}
    \det(A)
    ~=~ \begin{absmatrix}{rrr}
      1 & 2 & 3 \\
      3 & 0 & 1 \\
      1 & 2 & 1 \\
    \end{absmatrix}
    ~=~ -3 \begin{absmatrix}{rr}
      2 & 3 \\
      2 & 1 \\
    \end{absmatrix}
    - 1 \begin{absmatrix}{rr}
      1 & 2 \\
      1 & 2 \\
    \end{absmatrix}
    ~=~ -3(-4) -1(0) ~=~ 12.
  \end{equation*}
  We have already calculated the adjugate $\adj(A)$ in
  Example~\ref{exa:cofactor-matrix-and-adjugate}:
  \begin{equation*}
    \adj(A) ~=~
    \begin{mymatrix}{rrr}
      -2 & 4 & 2 \\
      -2 & -2 & 8 \\
      6 & 0 & -6 \\
    \end{mymatrix}.
  \end{equation*}
  Therefore, the inverse of $A$ is given by
  \begin{equation*}
    \def\arraystretch{1.4}
    A^{-1}
    ~=~ \frac{1}{\det(A)}\adj(A)
    ~=~ \frac{1}{12}\begin{mymatrix}{rrr}
      -2 & 4 & 2 \\
      -2 & -2 & 8 \\
      6 & 0 & -6 \\
    \end{mymatrix}
    ~=~ \begin{mymatrix}{rrr}
      -\frac{1}{6} & \frac{1}{3} & \frac{1}{6} \\
      -\frac{1}{6} & -\frac{1}{6} & \frac{2}{3} \\
      \frac{1}{2} & 0 & -\frac{1}{2} \\
    \end{mymatrix}.
  \end{equation*}
  Since it is very easy to make a mistake in this calculation, we
  double-check our answer by computing $A^{-1}A$:
  \begin{equation*}
    \def\arraystretch{1.4}
    A^{-1}A ~=~
    \allowbreak \begin{mymatrix}{rrr}
      -\frac{1}{6} & \frac{1}{3} &
      \frac{1}{6} \\
      -\frac{1}{6} & -\frac{1}{6} &
      \frac{2}{3} \\
      \frac{1}{2} & 0 & -\frac{1}{2}
    \end{mymatrix} \begin{mymatrix}{rrr}
      1 & 2 & 3 \\
      3 & 0 & 1 \\
      1 & 2 & 1
    \end{mymatrix} ~=~ \begin{mymatrix}{rrr}
      1 & 0 & 0 \\
      0 & 1 & 0 \\
      0 & 0 & 1
    \end{mymatrix}
    ~=~
    I.
  \end{equation*}
\end{solution}

\begin{example}{Finding the inverse using a formula}{inverse-formula}
  Use the adjugate formula to find the inverse of the matrix
  \begin{equation*}
    A=\begin{mymatrix}{rrr}
      0  &  2 &  1 \\
      -1 &  2 &  2 \\
      2  & -2 & -2 \\
    \end{mymatrix}.
  \end{equation*}
\end{example}

\begin{solution}
  We start by calculating the determinant:
  \begin{equation*}
    \det(A)
    ~=~ \begin{absmatrix}{rrr}
      0  &  2 &  1 \\
      -1 &  2 &  2 \\
      2  & -2 & -2 \\
    \end{absmatrix}
    ~=~ -2 \begin{absmatrix}{rr}
      -1 &  2 \\
      2  & -2 \\
    \end{absmatrix}
    + 1 \begin{absmatrix}{rr}
      -1 &  2 \\
      2  & -2 \\
    \end{absmatrix}
    ~=~ -2\cdot (-2) + 1\cdot(-2) ~=~ 2.
  \end{equation*}
  Next, we compute the cofactor matrix:
  \begin{equation*}
    \cof(A)
    ~=~
    \begin{mymatrix}{ccc}
      ~~~\begin{absmatrix}{rr}
       2 &  2 \\
       -2 & -2 \\
      \end{absmatrix}
      &
      -\begin{absmatrix}{rr}
      -1 &  2 \\
      2  & -2 \\
      \end{absmatrix}
      &
      ~~~\begin{absmatrix}{rr}
      -1 &  2 \\
      2  & -2 \\
      \end{absmatrix}
      \\\\[-1ex]
      -\begin{absmatrix}{rr}
      2 &  1 \\
      -2 & -2 \\
      \end{absmatrix}
      &
      ~~~\begin{absmatrix}{rr}
      0 &  1 \\
      2 & -2 \\
      \end{absmatrix}
      &
      -\begin{absmatrix}{rr}
      0  &  2 \\
      2  & -2 \\
      \end{absmatrix}
      \\\\[-1ex]
      ~~~\begin{absmatrix}{rr}
      2 &  1 \\
      2 &  2 \\
      \end{absmatrix}
      &
      -\begin{absmatrix}{rr}
      0  &  1 \\
      -1 &  2 \\
      \end{absmatrix}
      &
      ~~~\begin{absmatrix}{rr}
      0  &  2 \\
      -1 &  2 \\
      \end{absmatrix}
    \end{mymatrix}
    ~=~ \begin{mymatrix}{rrr}
      0 &  2 & -2 \\
      2 & -2 &  4 \\
      2 & -1 &  2 \\
    \end{mymatrix}.
  \end{equation*}
  The adjugate is the transpose of the cofactor matrix:
  \begin{equation*}
    \adj(A) ~=~ \cof(A)^T
    ~=~ \begin{mymatrix}{rrr}
      0  &  2 &  2 \\
      2  & -2 & -1 \\
      -2 &  4 &  2 \\
    \end{mymatrix}.
  \end{equation*}
  We therefore have
  \begin{equation*}
    \def\arraystretch{1.2}
    A^{-1}
    ~=~
    \frac{1}{\det(A)}\adj(A)
    ~=~
    \frac{1}{2}
    \begin{mymatrix}{rrr}
      0  &  2 &  2 \\
      2  & -2 & -1 \\
      -2 &  4 &  2 \\
    \end{mymatrix}
    ~=~
    \begin{mymatrix}{rrr}
      0  &  1 &  1 \\
      1  & -1 & -\frac{1}{2} \\
      -1 &  2 &  1 \\
    \end{mymatrix}.
  \end{equation*}
  Once again, we double-check our work by computing $A^{-1}A$:
  \begin{equation*}
    \def\arraystretch{1.2}
    A^{-1}A ~=~
    \begin{mymatrix}{rrr}
      0  &  1 &  1 \\
      1  & -1 & -\frac{1}{2} \\
      -1 &  2 &  1 \\
    \end{mymatrix}
    \begin{mymatrix}{rrr}
      0  &  2 &  1 \\
      -1 &  2 &  2 \\
      2  & -2 & -2 \\
    \end{mymatrix}
    ~=~ \begin{mymatrix}{rrr}
      1 & 0 & 0 \\
      0 & 1 & 0 \\
      0 & 0 & 1
    \end{mymatrix}.
  \end{equation*}
\end{solution}

It is always a good idea to double-check your work.  At the end of the
calculation, it is very easy to compute $A^{-1}A$ and check whether it
is equal to $I$. If they are not equal, be sure to go back and
double-check each step. One common mistake is to forget to take the
transpose of the cofactor matrix, so be sure not to forget this step.

In practice, it is usually much faster to compute the inverse by the
method of Section~\ref{ssec:computing-inverses}, because this only
requires solving a single system of equations, rather than computing a
large number of cofactors.
However, there are some situations where
the adjugate formula is useful. One such situation is when the matrix
has complicated entries that are functions rather than numbers. The
following example illustrates this.

\begin{example}{Inverse for non-constant matrix}{inverse-non-constant-matrix}
  Let
  \begin{equation*}
    A(t) =\begin{mymatrix}{ccc}
      e^{t} & 0 & 0 \\
      0 & \cos t & \sin t \\
      0 & -\sin t & \cos t
    \end{mymatrix}.
  \end{equation*}
  Show that $A(t)^{-1}$ exists and find it.
\end{example}

\begin{solution}
  First note that
  \begin{equation*}
    \det(A(t)) ~=~ e^{t}(\cos^2 t + \sin^2 t) ~=~ e^{t}\neq 0.
  \end{equation*}
  Therefore $A(t)^{-1}$ exists for all values of the variable $t$. The
  cofactor matrix is
  \begin{equation*}
    \cof(A(t))
    ~=~ \begin{mymatrix}{ccc}
      1 & 0 & 0 \\
      0 & e^{t}\cos t & e^{t}\sin t \\
      0 & -e^{t}\sin t & e^{t}\cos t
    \end{mymatrix}.
  \end{equation*}
  The adjugate is the transpose of the cofactor matrix, and therefore
  the inverse is
  \begin{equation*}
    A(t)^{-1}
    ~=~ \frac{1}{\det(A(t))}\adj(A(t))
    ~=~
    \frac{1}{e^{t}}\begin{mymatrix}{ccc}
      1 & 0 & 0 \\
      0 & e^{t}\cos t & -e^{t}\sin t \\
      0 & e^{t}\sin t & e^{t}\cos t
    \end{mymatrix}
    ~=~ \begin{mymatrix}{ccc}
      e^{-t} & 0 & 0 \\
      0 & \cos t & -\sin t \\
      0 & \sin t & \cos t
    \end{mymatrix}.
  \end{equation*}
\end{solution}

Another situation where the adjugate formula is useful is the case of a
$2\times 2$-matrix. In this case both the determinant and the adjugate
are especially easy to compute. For a $2\times 2$-matrix
\begin{equation*}
  A ~=~ \begin{mymatrix}{rr}
    a & b \\
    c & d
  \end{mymatrix},
\end{equation*}
we have
\begin{eqnarray*}
  \det(A) &=& ad-bc
  \\
  \adj(A) &=&
  \begin{mymatrix}{rr}
    d & -b \\
    -c & a
  \end{mymatrix}.
\end{eqnarray*}
Therefore, $A$ is invertible if and only if $ad-bc\neq 0$, and in
that case, the inverse is given by
\begin{equation}\label{eqn:inverse-2-by-2}
  A^{-1} ~=~
  \frac{1}{ad-bc}
  \begin{mymatrix}{rr}
    d & -b \\
    -c & a
  \end{mymatrix}.
\end{equation}

\begin{example}{Inverse of a $2\times 2$-matrix}{inverse-2-by-2}
  Find the inverse of
  \begin{equation*}
    A ~=~ \begin{mymatrix}{rr}
      7 & 5 \\
      2 & 2 \\
    \end{mymatrix}.
  \end{equation*}
\end{example}

\begin{solution}
  We use formula {\eqref{eqn:inverse-2-by-2}} to compute the inverse:
  \begin{equation*}
    \def\arraystretch{1.4}
    A^{-1} ~=~
    \frac{1}{7\cdot 2-2\cdot 5}
    \begin{mymatrix}{rr}
      2 & -5 \\
      -2 & 7
    \end{mymatrix}
    ~=~
    \begin{mymatrix}{rr}
      \frac{1}{2} & -\frac{5}{4} \\
      -\frac{1}{2} & \frac{7}{4}
    \end{mymatrix}.
  \end{equation*}
\end{solution}
"baseText/content/Determinants-Application-Inverse.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/content/Determinants-Application-Inverse.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 27.3605947955, "max_line_length": 90, "alphanum_fraction": 0.5470788043, "num_tokens": 5706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.672331699179286, "lm_q2_score": 0.8267117940706734, "lm_q1q2_score": 0.5558245452390919}}
{"text": "Our equations look like:\n\\begin{equation}\n\\frac{\\partial\\Ub}{\\partial t} = \\nabla\\cdot\\Fb + \\Sb_{\\rm react} + \\Sb,\n\\end{equation}\nwhere $\\Fb$ is the flux vector, $\\Sb_{\\rm react}$ are the reaction source terms, and $\\Sb$ are the non-reaction source terms, which includes any user-defined external sources, $\\Sb_{\\rm ext}$.  We use Strang splitting to discretize the advection-reaction equations.  In summary, for each time step, we update the conservative variables, $\\Ub$, by reacting for half a time step, advecting for a full time step (ignoring the reaction terms), and reacting for half a time step.  In summary,\n\\begin{equation}\n\\Ub^n = \\Ub^n + \\frac{\\dt}{2}\\Sb_{\\rm react}^n,\n\\end{equation}\n\\begin{equation}\n\\Ub^{n+1} = \\Ub^n - \\Delta t \\nabla \\cdot\\Fb^\\nph + \\dt\\frac{\\Sb^n + \\Sb^{n+1}}{2},\n\\end{equation}\n\\begin{equation}\n\\Ub^{n+1} = \\Ub^{n+1} + \\frac{\\dt}{2}\\Sb_{\\rm react}^{n+1},\n\\end{equation}\nThe construction of $F$ is purely explicit, and based on an unsplit second-order Godunov method.  We predict the standard primitive variables, as well as $\\rho e$, at time-centered edges and use an approximate Riemann solver construct fluxes.  At the beginning of the time step, we assume that $\\Ub$ and $\\phi$ are defined consistently, i.e., $\\rho^n$ and $\\phi^n$ satisfy equation (\\ref{eq:Self Gravity}).\\\\\n\nCASTRO also supports radiation (Chapter \\ref{Chap:Radiation}) and level sets (Chapter \\ref{Chap:Level Sets}).  We omit the details in this section.  Here is the single-level algorithm:\n\\begin{description}\n\\item[Step 1:] {\\em React $\\Delta t/2$.}\n\nUpdate the solution due to the effect of reactions over half a time step.\n\\begin{eqnarray}\n(\\rho E)^n &=& (\\rho E)^n - \\frac{\\dt}{2}\\sum_k(\\rho q_k\\omegadot_k)^n,\\\\\n(\\rho X_k)^n &=& (\\rho X_k)^n + \\frac{\\dt}{2}(\\rho\\omegadot_k)^n.\n\\end{eqnarray}\n\\item[Step 2:] {\\em Solve for gravity.}\n\nSolve for gravity using:\n\\begin{equation}\n\\gb^n = \\nabla\\phi^n, \\qquad \n\\Delta\\phi^n = -4\\pi G\\rho^n,\n\\end{equation}\nor use one of the simpler gravity types.\n\n\\item[Step 3:] {\\em Compute explicit source terms.}\n\nWe now compute explicit source terms for each variable in $\\Qb$ and $\\Ub$.  The primitive variable source terms will be used to construct time-centered fluxes.  The conserved variable source will be used to advance the solution.  We neglect reaction source terms since they are accounted for in {\\bf Steps 1} and {\\bf 6}.  
The source terms are:
\begin{equation}
\Sb_{\Qb}^n =
\left(\begin{array}{c}
S_\rho \\
\Sb_{\ub} \\
S_p \\
S_{\rho e} \\
S_{A_k} \\
S_{X_k} \\
S_{Y_k}
\end{array}\right)^n
=
\left(\begin{array}{c}
S_{{\rm ext},\rho} \\
\gb + \frac{1}{\rho}\Sb_{{\rm ext},\rho\ub} \\
\frac{1}{\rho}\frac{\partial p}{\partial e}S_{{\rm ext},\rho E} + \frac{\partial p}{\partial\rho}S_{{\rm ext},\rho} \\
\nabla\cdot\kappa\nabla T + S_{{\rm ext},\rho E} \\
\frac{1}{\rho}S_{{\rm ext},\rho A_k} \\
\frac{1}{\rho}S_{{\rm ext},\rho X_k} \\
\frac{1}{\rho}S_{{\rm ext},\rho Y_k}
\end{array}\right)^n,
\end{equation}
\begin{equation}
\Sb_{\Ub}^n =
\left(\begin{array}{c}
\Sb_{\rho\ub} \\
S_{\rho E} \\
S_{\rho A_k} \\
S_{\rho X_k} \\
S_{\rho Y_k}
\end{array}\right)^n
=
\left(\begin{array}{c}
\rho \gb + \Sb_{{\rm ext},\rho\ub} \\
\rho \ub \cdot \gb + \nabla\cdot\kappa\nabla T + S_{{\rm ext},\rho E} \\
S_{{\rm ext},\rho A_k} \\
S_{{\rm ext},\rho X_k} \\
S_{{\rm ext},\rho Y_k}
\end{array}\right)^n.
\end{equation}

\item[Step 4:] {\em Advect $\Delta t$.}

The goal is to advance
\begin{equation}
\Ub^{n+1} = \Ub^n - \dt\nabla\cdot\Fb^\nph + \dt\Sb^n,
\end{equation}
neglecting reaction terms.  Note that since the source term is not time centered, this is not a second-order method.  After the advective update, we correct the solution, effectively time-centering the source term. The advection step is complicated, and more detail is given in Section \ref{Sec:Advection Step}.  Here is the summarized version:
\begin{enumerate}
\item Compute primitive variables.
\item Predict primitive variables to time-centered edges.
\item Solve the Riemann problem.
\item Compute fluxes and update.
\end{enumerate}
\item[Step 5:] {\em Solve for updated gravity.}

Solve for gravity using:
\begin{equation}
\gb^{n+1} = \nabla\phi^{n+1}; \qquad \Delta\phi^{n+1} = -4\pi G\rho^{n+1},
\end{equation}
or use one of the simpler gravity types.
\item[Step 6:] {\em Correct solution with time-centered source terms.}

We need to correct the solution by effectively time-centering the source terms.  These corrections are performed sequentially, since new source term evaluations may depend on previous corrections.

First, we correct the solution with the updated gravity:
\begin{eqnarray}
(\rho\ub)^{n+1} &=& (\rho\ub)^{n+1} + \frac{\dt}{2}\left[(\rho\gb)^{n+1} - (\rho\gb)^n\right], \\
(\rho E)^{n+1} &=& (\rho E)^{n+1} + \frac{\dt}{2}\left[\left(\rho\ub\cdot\gb\right)^{n+1} - \left(\rho\ub\cdot\gb\right)^n\right].
\end{eqnarray}

Next, we correct $\Ub$ with updated external sources.
For example, for the momentum, we correct using
\begin{equation}
(\rho\ub)^{n+1} = (\rho\ub)^{n+1} + \frac{\dt}{2}\left(\Sb_{{\rm ext},\rho\ub}^{n+1} - \Sb_{{\rm ext},\rho\ub}^n\right).
\end{equation}
We correct $\rho E, \rho A_k, \rho X_k$, and $\rho Y_k$ in an analogous manner.

Finally, we correct the solution with updated thermal diffusion using
\begin{equation}
(\rho E)^{n+1} = (\rho E)^{n+1} + \frac{\dt}{2}\left(\nabla\cdot\kappa\nabla T^{n+1} - \nabla\cdot\kappa\nabla T^n\right).
\end{equation}
\item[Step 7:] {\em React $\Delta t/2$.}

Update the solution due to the effect of reactions over half a time step.
\begin{eqnarray}
(\rho E)^{n+1} &=& (\rho E)^{n+1} - \frac{\dt}{2}\sum_k(\rho q_k\omegadot_k)^{n+1},\\
(\rho X_k)^{n+1} &=& (\rho X_k)^{n+1} + \frac{\dt}{2}(\rho\omegadot_k)^{n+1}.
\end{eqnarray}
\item[Step 8:] {\em Modify auxiliary variables.}

This is problem-dependent.  By default we treat the auxiliary variables as
advected quantities, so no additional steps are required.
\end{description}
This concludes the single-level algorithm description.

\subsection{Nyx::advance()}

\begin{verbatim}
if (doGrav)
    define oldGravityVector
end if

AdvanceSolution()

if (doGrav)
    define newGravityVector
    correct solution due to new gravity
end if
\end{verbatim}

\section{Advection Step}\label{Sec:Advection Step}
There are four major steps in the advective update, detailed below.
\subsection{Compute Primitive Variables}\label{Sec:Compute Primitive Variables}
We compute the primitive variables from the conserved variables.
\begin{itemize}
\item $\rho, \rho e$ - directly copy these from the conserved state vector
\item $\ub, A_k, X_k, Y_k$ - copy these from the conserved state vector, dividing by $\rho$
\item $p,T$ - use the EOS.  First, we do the following:
\begin{enumerate}
\item Use the EOS to set $e = e(\rho,T_{\rm small},X_k)$.
\item If $e < 0$, abort the program with an error message.
\end{enumerate}
Now, use the EOS to compute $p,T = p,T(\rho,e,X_k)$.
\end{itemize}
We also compute the flattening coefficient, $\chi\in[0,1]$, used in the edge state prediction to further limit slopes near strong shocks.  We use the same flattening procedure described in the FLASH paper.  A flattening coefficient of 1 indicates that no additional limiting takes place; a flattening coefficient of 0 means we effectively drop order to a first-order Godunov scheme (this convention is opposite of that used in the FLASH paper).  For each cell, we compute the flattening coefficient for each spatial direction, and choose the minimum value over all directions.  As an example, to compute the flattening for the x-direction, here are the steps:
\begin{enumerate}
\item Define $\zeta$:
\begin{equation}
\zeta_i = \frac{p_{i+1}-p_{i-1}}{\max\left(p_{\rm small},|p_{i+2}-p_{i-2}|\right)}.
\end{equation}
\item Define $\tilde\chi$:
\begin{equation}
\tilde\chi_i = \min\left\{1,\max[0,a(\zeta_i - b)]\right\},
\end{equation}
where $a=10$ and $b=0.75$ are tunable parameters.  We are essentially setting $\tilde\chi_i=a(\zeta_i-b)$, and then constraining $\tilde\chi_i$ to lie in the range $[0,1]$.
Then, if either $u_{i+1}-u_{i-1}<0$ or
\begin{equation}
\frac{p_{i+1}-p_{i-1}}{\min(p_{i+1},p_{i-1})} \le c,
\end{equation}
where $c=1/3$ is a tunable parameter, we set $\tilde\chi_i=0$.
\item Define $\chi$:
\begin{equation}
\chi_i =
\begin{cases}
1 - \max(\tilde\chi_i,\tilde\chi_{i-1}) & p_{i+1}-p_{i-1} > 0 \\
1 - \max(\tilde\chi_i,\tilde\chi_{i+1}) & \text{otherwise}
\end{cases}.
\end{equation}
\end{enumerate}
\subsection{Edge State Prediction}
We wish to compute a left and right state of primitive variables at each edge to be used as inputs to the Riemann problem.  We use a version of the Colella and Sekora 2009 PPM algorithm, which has been further modified to eliminate sensitivity due to roundoff error (modifications via personal communication with Colella).  Note that CASTRO also has options for the original PPM algorithm of Colella and Woodward 1984, and the piecewise-linear algorithm described in Saltzman 1994.  We also use characteristic tracing with corner coupling in 3D, as described in Miller and Colella 2002.  We give full details of the PPM algorithm, as it has not appeared before in the literature, and summarize the developments from Miller and Colella 2002.

The PPM algorithm is used to compute time-centered edge states by extrapolating the base-time data in space and time.  The edge states are dual-valued, i.e., at each face, there is a left state and a right state estimate.  The spatial extrapolation is one-dimensional, i.e., transverse derivatives are ignored.  We also use a flattening procedure to further limit the edge state values.  The Miller and Colella 2002 algorithm, which we describe later, incorporates the transverse terms, and also describes the modifications required for equations with additional characteristics besides the fluid velocity.  There are four steps to compute these dual-valued edge states (here, we use $s$ to denote an arbitrary scalar from $\Qb$, and we write the equations in 1D, for simplicity):
\begin{itemize}
\item {\bf Step 1}: Compute $s_{i,+}$ and $s_{i,-}$, which are spatial interpolations of $s$ to the hi and lo side of the face with special limiters, respectively.  Begin by interpolating $s$ to edges using a 4th-order interpolation in space:
\begin{equation}
s_{i+\myhalf} = \frac{7}{12}\left(s_{i+1}+s_i\right) - \frac{1}{12}\left(s_{i+2}+s_{i-1}\right).
\end{equation}
Then, if $(s_{i+\myhalf}-s_i)(s_{i+1}-s_{i+\myhalf}) < 0$, we limit $s_{i+\myhalf}$ using
a nonlinear combination of approximations to the second derivative.  The steps are as follows:
\begin{enumerate}
\item Define:
\begin{eqnarray}
(D^2s)_{i+\myhalf} &=& 3\left(s_{i}-2s_{i+\myhalf}+s_{i+1}\right) \\
(D^2s)_{i+\myhalf,L} &=& s_{i-1}-2s_{i}+s_{i+1} \\
(D^2s)_{i+\myhalf,R} &=& s_{i}-2s_{i+1}+s_{i+2}
\end{eqnarray}
\item Define
\begin{equation}
s = \text{sign}\left[(D^2s)_{i+\myhalf}\right],
\end{equation}
\begin{equation}
(D^2s)_{i+\myhalf,\text{lim}} = s\max\left\{\min\left[Cs\left|(D^2s)_{i+\myhalf,L}\right|,Cs\left|(D^2s)_{i+\myhalf,R}\right|,s\left|(D^2s)_{i+\myhalf}\right|\right],0\right\},
\end{equation}
where $C=1.25$ as used in Colella and Sekora 2009.
The limited value of $s_{i+\myhalf}$ is
\begin{equation}
s_{i+\myhalf} = \frac{1}{2}\left(s_{i}+s_{i+1}\right) - \frac{1}{6}(D^2s)_{i+\myhalf,\text{lim}}.
\end{equation}
\end{enumerate}
Now we describe an updated version of the Colella and Sekora 2009 algorithm which eliminates sensitivity to roundoff.  First we need to detect whether a particular cell corresponds to an ``extremum''.  There are two tests.
\begin{itemize}
\item For the first test, define
\begin{equation}
\alpha_{i,\pm} = s_{i\pm\myhalf} - s_i.
\end{equation}
If $\alpha_{i,+}\alpha_{i,-} \ge 0$, then we are at an extremum.
\item We only apply the second test if $|\alpha_{i,+}| > 2|\alpha_{i,-}|$ or $|\alpha_{i,-}| > 2|\alpha_{i,+}|$.
If so, we define:
\begin{eqnarray}
(Ds)_{i,{\rm face},-} &=& s_{i-\myhalf} - s_{i-\sfrac{3}{2}} \\
(Ds)_{i,{\rm face},+} &=& s_{i+\sfrac{3}{2}} - s_{i+\myhalf}
\end{eqnarray}
\begin{equation}
(Ds)_{i,{\rm face,min}} = \min\left[\left|(Ds)_{i,{\rm face},-}\right|,\left|(Ds)_{i,{\rm face},+}\right|\right].
\end{equation}
\begin{eqnarray}
(Ds)_{i,{\rm cc},-} &=& s_{i} - s_{i-1} \\
(Ds)_{i,{\rm cc},+} &=& s_{i+1} - s_{i}
\end{eqnarray}
\begin{equation}
(Ds)_{i,{\rm cc,min}} = \min\left[\left|(Ds)_{i,{\rm cc},-}\right|,\left|(Ds)_{i,{\rm cc},+}\right|\right].
\end{equation}
If $(Ds)_{i,{\rm face,min}} \ge (Ds)_{i,{\rm cc,min}}$, set
$(Ds)_{i,\pm} = (Ds)_{i,{\rm face},\pm}$.  Otherwise, set
$(Ds)_{i,\pm} = (Ds)_{i,{\rm cc},\pm}$.  Finally, we are at an extremum if
$(Ds)_{i,+}(Ds)_{i,-} \le 0$.
\end{itemize}
This concludes the extremum tests.  The remaining limiters depend on whether we are at an extremum.
\begin{itemize}
\item If we are at an extremum, we modify $\alpha_{i,\pm}$.
First, we define
\begin{eqnarray}
(D^2s)_{i} &=& 6(\alpha_{i,+}+\alpha_{i,-}) \\
(D^2s)_{i,L} &=& s_{i-2}-2s_{i-1}+s_{i} \\
(D^2s)_{i,R} &=& s_{i}-2s_{i+1}+s_{i+2} \\
(D^2s)_{i,C} &=& s_{i-1}-2s_{i}+s_{i+1}
\end{eqnarray}
Then, define
\begin{equation}
s = \text{sign}\left[(D^2s)_{i}\right],
\end{equation}
\begin{equation}
(D^2s)_{i,\text{lim}} = \max\left\{\min\left[s(D^2s)_{i},Cs\left|(D^2s)_{i,L}\right|,Cs\left|(D^2s)_{i,R}\right|,Cs\left|(D^2s)_{i,C}\right|\right],0\right\}.
\end{equation}
Then,
\begin{equation}
\alpha_{i,\pm} = \frac{\alpha_{i,\pm}(D^2s)_{i,\text{lim}}}{\max\left[(D^2s)_{i},1\times 10^{-10}\right]}.
\end{equation}
\item If we are not at an extremum and $|\alpha_{i,\pm}| > 2|\alpha_{i,\mp}|$, then define
\begin{equation}
s = \text{sign}(\alpha_{i,\mp}),
\end{equation}
\begin{equation}
\delta\mathcal{I}_{\text{ext}} = \frac{-\alpha_{i,\pm}^2}{4\left(\alpha_{i,+}+\alpha_{i,-}\right)},
\end{equation}
\begin{equation}
\delta s = s_{i\mp 1} - s_i.
\end{equation}
If $s\delta\mathcal{I}_{\text{ext}} \ge s\delta s$, then we perform the following test.
If $s\delta s - \alpha_{i,\mp} \ge 1\times 10^{-10}$, then
\begin{equation}
\alpha_{i,\pm} =  -2\delta s - 2s\left[(\delta s)^2 - \delta s \alpha_{i,\mp}\right]^{\myhalf};
\end{equation}
otherwise,
\begin{equation}
\alpha_{i,\pm} =  -2\alpha_{i,\mp}.
\end{equation}
\end{itemize}
Finally, $s_{i,\pm} = s_i + \alpha_{i,\pm}$.
\item {\bf Step 2}: Construct a quadratic profile using $s_{i,-},s_i$, and $s_{i,+}$:
\begin{equation}
s_i^I(x) = s_{i,-} + \xi\left[s_{i,+} - s_{i,-} + s_{6,i}(1-\xi)\right],\label{Quadratic Interp}
\end{equation}
\begin{equation}
s_{6,i} = 6s_{i} - 3\left(s_{i,-}+s_{i,+}\right),
\end{equation}
\begin{equation}
\xi = \frac{x - ih}{h}, ~ 0 \le \xi \le 1.
\end{equation}
\item {\bf Step 3:} Integrate quadratic profiles.  We are essentially computing the average value swept out by the quadratic profile across the face, assuming the profile is moving at a speed $\lambda_k$.\\ \\
Define the following integrals, where $\sigma_k = |\lambda_k|\Delta t/h$:
\begin{eqnarray}
\mathcal{I}_{i,+}(\sigma_k) &=& \frac{1}{\sigma_k h}\int_{(i+\myhalf)h-\sigma_k h}^{(i+\myhalf)h}s_i^I(x)dx \\
\mathcal{I}_{i,-}(\sigma_k) &=& \frac{1}{\sigma_k h}\int_{(i-\myhalf)h}^{(i-\myhalf)h+\sigma_k h}s_i^I(x)dx
\end{eqnarray}
Plugging in (\ref{Quadratic Interp}) gives:
\begin{eqnarray}
\mathcal{I}_{i,+}(\sigma_k) &=& s_{i,+} - \frac{\sigma_k}{2}\left[s_{i,+}-s_{i,-}-\left(1-\frac{2}{3}\sigma_k\right)s_{6,i}\right], \\
\mathcal{I}_{i,-}(\sigma_k) &=& s_{i,-} + \frac{\sigma_k}{2}\left[s_{i,+}-s_{i,-}+\left(1-\frac{2}{3}\sigma_k\right)s_{6,i}\right].
\end{eqnarray}
\item {\bf Step 4:} Obtain 1D edge states by performing a 1D extrapolation to get
left and right edge states.  Note that we include an explicit source term contribution.
\begin{eqnarray}
s_{L,i+\myhalf} &=& s_i - \chi_i\sum_{k:\lambda_k \ge 0}\lb_k\cdot\left[s_i-\mathcal{I}_{i,+}(\sigma_k)\right]\rb_k + \frac{\dt}{2}S_i^n, \\
s_{R,i-\myhalf} &=& s_i - \chi_i\sum_{k:\lambda_k < 0}\lb_k\cdot\left[s_i-\mathcal{I}_{i,-}(\sigma_k)\right]\rb_k + \frac{\dt}{2}S_i^n.
\end{eqnarray}
Here, $\rb_k$ is the $k^{\rm th}$ right column eigenvector of $\Rb(\Ab_d)$ and $\lb_k$ is the $k^{\rm th}$ left row eigenvector of $\Lb(\Ab_d)$.
In order to add the transverse terms in an unsplit spatial-operator framework, the details follow exactly as given in Section 4.2.1 of Miller and Colella 2002, except for the details of the Riemann solver, which are given below.\n\\subsection{Riemann Problem}\nInputs from the edge state prediction are $\\rho_{L/R}, u_{L/R}, v_{L/R}, p_{L/R}$, and $(\\rho e)_{L/R}$ ($v$ represents all of the transverse velocity components).  We also compute $\\gamma$ at cell centers and copy these to edges directly to get the left and right states, $\\gamma_{L/R}$.  We also define $c_{\\rm avg}$ as a face-centered value that is the average of the neighboring cell-centered values of $c$.  We have also computed $\\rho_{\\rm small}, p_{\\rm small}$, and $c_{\\rm small}$ using cell-centered data.  \n\nHere are the steps.  First, define $(\\rho c)_{\\rm small} = \\rho_{\\rm small}c_{\\rm small}$. Then, define:\n\\begin{equation}\n(\\rho c)_{L/R} = \\max\\left[(\\rho c)_{\\rm small},\\sqrt{\\left|\\gamma_{L/R}\\,p_{L/R}\\,\\rho_{L/R}\\right|}\\right].\n\\end{equation}\nDefine star states:\n\\begin{equation}\np^* = \\max\\left[p_{\\rm small},\\frac{\\left[(\\rho c)_L p_R + (\\rho c)_R p_L\\right] + (\\rho c)_L(\\rho c)_R(u_L-u_R)}{(\\rho c)_L + (\\rho c)_R}\\right],\n\\end{equation}\n\\begin{equation}\nu^* = \\frac{\\left[(\\rho c)_L u_L + (\\rho c)_R u_R\\right]+ (p_L - p_R)}{(\\rho c)_L + (\\rho c)_R}.\n\\end{equation}\nIf $u^* \\ge 0$ then define $\\rho_0, u_0, p_0, (\\rho e)_0$ and $\\gamma_0$ to be the left state.  Otherwise, define them to be the right state.  Then, set\n\\begin{equation}\n\\rho_0 = \\max(\\rho_{\\rm small},\\rho_0),\n\\end{equation}\nand define\n\\begin{equation}\nc_0 = \\max\\left(c_{\\rm small},\\sqrt{\\frac{\\gamma_0 p_0}{\\rho_0}}\\right),\n\\end{equation}\n\\begin{equation}\n\\rho^* = \\rho_0 + \\frac{p^* - p_0}{c_0^2},\n\\end{equation}\n\\begin{equation}\n(\\rho e)^* = (\\rho e)_0 + (p^* - p_0)\\frac{(\\rho e)_0 + p_0}{\\rho_0 c_0^2},\n\\end{equation}\n\\begin{equation}\nc^* = \\max\\left(c_{\\rm small},\\sqrt{\\left|\\frac{\\gamma_0 p^*}{\\rho^*}\\right|}\\right)\n\\end{equation}\nThen,\n\\begin{eqnarray}\nc_{\\rm out} &=& c_0 - {\\rm sign}(u^*)u_0, \\\\\nc_{\\rm in} &=& c^* - {\\rm sign}(u^*)u^*, \\\\\nc_{\\rm shock} &=& \\frac{c_{\\rm in} + c_{\\rm out}}{2}.\n\\end{eqnarray}\nIf $p^* - p_0 \\ge 0$, then $c_{\\rm in} = c_{\\rm out} = c_{\\rm shock}$.  Then, if $c_{\\rm out} = c_{\\rm in}$, we define $c_{\\rm temp} = \\epsilon c_{\\rm avg}$.  Otherwise, $c_{\\rm temp} = c_{\\rm out} - c_{\\rm in}$.  We define the fraction\n\\begin{equation}\nf = \\half\\left[1 + \\frac{c_{\\rm out} + c_{\\rm in}}{c_{\\rm temp}}\\right],\n\\end{equation}\nand constrain $f$ to lie in the range $f\\in[0,1]$.\n\nTo get the final ``Godunov'' state, for the transverse velocity, we upwind based on $u^*$.\n\\begin{equation}\nv_{\\rm gdnv} =\n\\begin{cases}\nv_L, & u^* \\ge 0 \\\\\nv_R, & {\\rm otherwise}\n\\end{cases}.\n\\end{equation}\nThen, define\n\\begin{eqnarray}\n\\rho_{\\rm gdnv} &=& f\\rho^* + (1-f)\\rho_0, \\\\\nu_{\\rm gdnv} &=& f u^* + (1-f)u_0, \\\\\np_{\\rm gdnv} &=& f p^* + (1-f)p_0, \\\\\n(\\rho e)_{\\rm gdnv} &=& f(\\rho e)^* + (1-f)(\\rho e)_0.\n\\end{eqnarray}\nFinally, if $c_{\\rm out} < 0$, set $\\rho_{\\rm gdnv}=\\rho_0, u_{\\rm gdnv}=u_0, p_{\\rm gdnv}=p_0$, and $(\\rho e)_{\\rm gdnv}=(\\rho e)_0$.  If $c_{\\rm in}\\ge 0$, set $\\rho_{\\rm gdnv}=\\rho^*, u_{\\rm gdnv}=u^*, p_{\\rm gdnv}=p^*$, and $(\\rho e)_{\\rm gdnv}=(\\rho e)^*$.
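\n\nAs a quick sanity check of the star-state formulas (our illustration, not part of the original algorithm), suppose the left and right states coincide: $p_L=p_R=p$ and $u_L=u_R=u$. Then\n\\[\np^* = \\frac{(\\rho c)_L\\,p + (\\rho c)_R\\,p + (\\rho c)_L(\\rho c)_R(u-u)}{(\\rho c)_L+(\\rho c)_R} = p,\\qquad u^* = \\frac{(\\rho c)_L u + (\\rho c)_R u + (p-p)}{(\\rho c)_L+(\\rho c)_R} = u\n\\]\n(assuming $p \\ge p_{\\rm small}$), so $\\rho^*=\\rho_0$ and $(\\rho e)^*=(\\rho e)_0$, and the Godunov state reduces to the common input state regardless of $f$, as it should for a trivial Riemann problem.\n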
\\subsection{Compute Fluxes and Update}\nCompute the fluxes as a function of the primitive variables, and then advance the solution:\n\\begin{equation}\n\\Ub^{n+1} = \\Ub^n - \\dt\\nabla\\cdot\\Fb^\\nph + \\dt\\Sb^n.\n\\end{equation}\nAgain, note that since the source term is not time-centered, this is not a second-order method.  After the advective update, we correct the solution, effectively time-centering the source term.\n", "meta": {"hexsha": "3f9843722a21802a2bcdba7d0ca8b10d265a41de", "size": 19862, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "UsersGuide/FlowChart/FlowChart.tex", "max_stars_repo_name": "Gosenca/axionyx_1.0", "max_stars_repo_head_hexsha": "7e2a723e00e6287717d6d81b23db32bcf6c3521a", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-02-18T09:13:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T21:27:46.000Z", "max_issues_repo_path": "UsersGuide/FlowChart/FlowChart.tex", "max_issues_repo_name": "Gosenca/axionyx_1.0", "max_issues_repo_head_hexsha": "7e2a723e00e6287717d6d81b23db32bcf6c3521a", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-12T08:54:31.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-12T08:54:31.000Z", "max_forks_repo_path": "UsersGuide/FlowChart/FlowChart.tex", "max_forks_repo_name": "Gosenca/axionyx_1.0", "max_forks_repo_head_hexsha": "7e2a723e00e6287717d6d81b23db32bcf6c3521a", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-09-04T10:26:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T23:51:51.000Z", "avg_line_length": 51.8590078329, "max_line_length": 780, "alphanum_fraction": 0.6748565099, "num_tokens": 7245, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8774767810736693, "lm_q2_score": 0.6334102775181399, "lm_q1q2_score": 0.5558028114155971}}
{"text": "\\chapter{Approximate Bayesian Computation}\n\\label{chap:abc}\n\nWe are now in possession of a sampler able to approximate the posterior distribution $p(\\btheta \\mid \\by_{1:T})$ even when the model likelihood is not tractable. As such, the sampler can be used in general non-linear SSMs, as long as the observation model $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ is a well-defined probability density. This requirement can be relaxed by introducing the method of Approximate Bayesian Computation (ABC). The algorithm derived in this chapter utilizes the ABC framework to approximate the SSM likelihood even when the observation model is misspecified or given only as a deterministic mapping $\\bx_t \\mapsto \\by_t$. This allows us to infer $\\btheta$ even when $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ is not given in terms of a probability density function.\n\nWe first motivate the use of ABC methods in our problem in \\autoref{sec:abc-motivation}. Then, in \\autoref{sec:abc-general}, we describe the method in general and discuss some limitations. \\autoref{sec:abc-ssm} introduces ABC to our state-space model framework and addresses some potential issues through kernel functions. Finally, in \\autoref{sec:abcmh}, we summarize how exactly the ABC method is used in our model, and provide an alternative variant of the Metropolis-Hastings algorithm which relies on ABC instead of the particle filter to estimate the likelihood.\n\n\n\\section{Motivation} \\label{sec:abc-motivation}\nIn the previous chapter, we derived a way to bypass the likelihood function evaluation when calculating the Metropolis-Hastings acceptance ratio. The method relies on the particle filter to calculate a set of weights $w_t^{(i)} \\propto \\obs_t(\\by_t \\mid \\bx_t^{(i)}, \\btheta)$, where $\\obs_t(\\by_t \\mid \\bx_t^{(i)}, \\btheta)$ is the observation model defined in \\eqref{eq:factorization}. These weights are used to estimate the likelihood $p(\\by_{1:T} \\mid \\btheta)$ as given in \\eqref{eq:likelihood-estimate}. However, calculating the weights in such a way requires full knowledge of this observation model.\n\nIn practice, one may not have access to a correct observation model in the form of a probability density $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$. Instead, only a model of the process which generates an observation $\\by_t$ from the latent state $\\bx_t$ may be available. This generative process may take the form of a differential equation, chemical reaction, simulation, etc. One is then in possession of a means to generate an observation, but not to evaluate how probable it is. Attempting to fit an arbitrary probability distribution to this generative model necessarily introduces an error. The particle filter weights might then not reflect reality, and would lead to incorrect results when using such a misspecified model for $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$.\n\nAs an alternative way to approximate the likelihood ${p(\\by_{1:T} \\mid \\btheta)}$, we can utilize our knowledge of the generative process $\\bx_t \\mapsto \\by_t$ to simulate a number of pseudo-observations $\\bu_t$. A surrogate for the observation density $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ is then calculated by evaluating the closeness of these pseudo-observations to the true measurement $\\by_t$. Intuitively, if a large number of the simulated observations fall close to $\\by_t$, we would expect the true probability density to be high in that region. 
By bypassing the evaluation of $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$, inference can proceed even without knowing the observation model density. This is exactly the idea behind the approximate Bayesian computation methodology; it is discussed in \\autoref{sec:abc-general}.\n\nUnfortunately, such an approximation comes at a price. In \\autoref{chap:inference}, it has been shown that using the particle filter does not introduce any approximation error, since the likelihood estimate is unbiased and leaves the limiting distribution of the Metropolis-Hastings Markov chain intact. This is not the case when applying ABC methods, see \\autoref{sec:abc-ssm}.\n\n\n\\section{ABC in general} \\label{sec:abc-general}\nBefore describing how to apply ABC to state-space models, we first summarize the underlying ideas. The ABC method is introduced in the context of general Bayesian inference under a misspecified likelihood function. Later on, we build on these foundations when applying ABC to our SSM framework.\n\nOne thing to note is that ABC has traditionally been applied to estimate the posterior $p(\\btheta \\mid \\by)$ for some parameter $\\btheta$ and observation $\\by$. The method is first considered with this application in mind, and in \\autoref{sec:abc-ssm}, we describe how to use it to estimate the SSM likelihood instead.\n\n\\paragraph{Approximate Bayesian Computation}\nThe methodology of ABC dates back to \\cite{abc-old-old}, where a procedure using simulated pseudo-observations to approximate the posterior distribution was first described. Lately, ABC methods have gained popularity in modelling biological processes \\citep{abc-old}. More recent reviews can be found in \\cite{abc-recent, abc-super-recent}.\n\nIn its classical formulation, ABC provides a way to approximate an intractable posterior ${p(\\btheta \\mid \\by) \\propto p(\\by \\mid \\btheta) \\pprior(\\btheta)}$ by introducing an auxiliary variable $\\bu$. The posterior approximation is then constructed by integrating over this variable and considering only values sufficiently close to the true measurement \\citep{jasra-filtering}. It takes the form of\n\\begin{equation} \\label{eq:abc-integral}\np(\\btheta \\mid \\by) \\approx p^\\epsilon(\\btheta \\mid \\by) = \\frac{\\int \\I_{\\A_{\\epsilon, \\by}}(\\bu) p(\\bu \\mid \\btheta) \\pprior(\\btheta) \\; \\dx{\\bu}}{\\int_{\\A_{\\epsilon, \\by}} \\dx{\\bu}},\n\\end{equation}\nwhere $\\I_{\\A_{\\epsilon, \\by}}$ is the indicator function of the set $\\A_{\\epsilon, \\by} = \\left\\{\\bu \\in \\R^{d_y} : \\rho(\\bu, \\by) \\leq \\epsilon \\right\\}$ and $\\rho: \\R^{d_y} \\times \\R^{d_y} \\to \\R$ is a metric, typically the Euclidean distance.\n\nThe motivation behind \\eqref{eq:abc-integral} is that such an integral can be approximated by randomly sampling from the likelihood $p(\\cdot \\mid \\btheta)$ without needing to evaluate it. This way, the likelihood can exist only conceptually, and we are able to simulate samples $\\bu$ from a model reflecting some real-world process, without considering the underlying probability density.\n\nThe hyper-parameter $\\epsilon \\geq 0$ controls how far the auxiliary variable $\\bu$ can be from the true measurement $\\by$ to be considered similar. Clearly, if we set $\\epsilon = 0$, the integral becomes $p(\\by \\mid \\btheta) \\pprior(\\btheta)$, and we recover the true posterior. 
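\n\nTo make \\eqref{eq:abc-integral} concrete, consider a scalar observation $y$ ($d_y = 1$) with $\\rho(u, y) = |u - y|$, a simple special case of our choosing. Then $\\A_{\\epsilon, y} = [y-\\epsilon, y+\\epsilon]$, the normalizing integral equals $2\\epsilon$, and\n\\[\np^\\epsilon(\\btheta \\mid y) \\propto \\pprior(\\btheta)\\,\\frac{1}{2\\epsilon}\\int_{y-\\epsilon}^{y+\\epsilon} p(u \\mid \\btheta)\\;\\dx{u},\n\\]\nso the intractable likelihood is replaced by its average over an $\\epsilon$-window around $y$.\n\n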
In general, the smaller $\\epsilon$, the better the approximation, though at the cost of increased computational complexity.\n\nTo avoid the curse of dimensionality, a summary statistic $\\bm{s}: \\R^{d_y} \\to \\R^p$, $1 \\leq p < d_y$ is often introduced. Instead of comparing $\\rho(\\bu, \\by) \\leq \\epsilon$, one then compares $\\rho(\\bm{s}(\\bu), \\bm{s}(\\by)) \\leq \\epsilon$ (assuming that the metric has been redefined to $\\rho: \\R^p \\times \\R^p \\to \\R$).\n\nIt can be shown that if $\\bm{s}$ is a sufficient statistic for the parameter $\\btheta$, the probability density $p^\\epsilon(\\btheta \\mid \\by)$ converges to $p(\\btheta \\mid \\by)$ as $\\epsilon \\to 0$ \\citep{jasra-time-series}. However, it is typically impossible to find such a statistic outside of the exponential family of distributions. Using a statistic that is not sufficient introduces an additional approximation error.\n\n\\paragraph{Basic version of the ABC simulation}\nWe now give a basic variant of a sampling-based approximation to $p^\\epsilon(\\btheta \\mid \\by)$. In the spirit of \\eqref{eq:abc-integral}, \\autoref{alg:abc-rejection} performs rejection sampling by checking whether a sampled $\\bu$ lies in $\\A_{\\epsilon, \\by}$ or not. After describing the algorithm, we discuss some limitations of this basic approach.\n\\begin{algorithm}[ht]\n    \\caption{ABC Rejection Algorithm}\n    \\label{alg:abc-rejection}\n    \\begin{algorithmic}[1]\n        \\Input $\\text{Number of samples } M, \\text{ observation } \\by, \\text{ metric } \\rho, \\text{ maximum distance } \\epsilon.$\n        \n        \\State $i \\gets 1$\n        \n        \\While{$i \\leq M$}\n        \\State $\\text{Sample } \\btheta^\\prime \\sim \\pprior(\\cdot).$ \\Comment{Sample from the prior.}\n        \\State $\\text{Simulate } \\bu \\text{ from } p(\\cdot \\mid \\btheta^\\prime).$ \\Comment{Simulate a pseudo-observation.}\n        \n        \\If {$\\rho(\\bu, \\by) \\leq \\epsilon$}\n        \\State $\\btheta^{(i)} \\gets \\btheta^\\prime$ \\Comment{Accept the proposed sample.}\n        \\State $i \\gets i + 1$\n        \\EndIf\n        \\EndWhile\n        \n        \\Output $\\text{Accepted samples } \\left\\{ \\btheta^{(1)}, \\ldots, \\btheta^{(M)} \\right\\}.$\n    \\end{algorithmic}\n\\end{algorithm}\n\nABC rejection iteratively samples parameters $\\btheta^\\prime$ from the prior, plugs them into the likelihood $p(\\cdot \\mid \\btheta^\\prime)$, and simulates pseudo-observations $\\bu$. These are then compared to the true measurement $\\by$ using the metric $\\rho$. If the proposed parameter $\\btheta^\\prime$ gave rise to a pseudo-observation similar enough to the true $\\by$ (i.e., $\\bu \\in \\A_{\\epsilon, \\by}$), the parameter is kept under the assumption that the true data are likely under $\\btheta^\\prime$. The ABC approximation is then given in terms of the accepted samples $\\btheta^{(1)}, \\ldots, \\btheta^{(M)}$ as the empirical distribution\n\\begin{equation*}\np(\\btheta \\mid \\by) \\approx \\frac{1}{M} \\sum_{i=1}^M \\delta_{\\btheta^{(i)}}(\\btheta).\n\\end{equation*}\n\nSetting a low value of $\\epsilon$ increases the approximation accuracy, at the cost of an increased rejection rate. On the other hand, setting $\\epsilon$ too large causes the algorithm to accept more often, but admits pseudo-measurements dissimilar to $\\by$ and, in turn, incorrect $\\btheta^{(i)}$. Setting a suitable value of $\\epsilon$ is therefore the main difficulty when using ABC. 
Several approaches are discussed by \\cite{jasra-filtering, jasra-time-series}. One particular way \\citep{dedecius} is used in \\autoref{sec:abc-ssm} in the context of SSMs.\n\nThere are many improvement to the basic ABC of \\autoref{alg:abc-rejection}, discussed for instance by \\cite{abc-recent}. In particular, more sophisticated sampling approaches relying again on MCMC are described. This is not an issue relevant to the SSM framework, as the samples are generated in a different fashion, given in the next section.\n\n\\section{ABC in SSMs} \\label{sec:abc-ssm}\nNext, we describe how exactly is the ABC methodology applied in the context of SSMs.\n\n\\autoref{sec:abc-general} states that the typical use case of ABC arises in cases where we have knowledge about the data-generating process, but are unable to evaluate the probability of such data. In the context of SSMs, this translates into knowing how the observed values $\\by_t$ have been generated from the latent states $\\bx_t$, but being unable to evaluate the density $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$. This prevents us from calculating the importance weights $w_t$ through the particle filter, which relies on the availability of $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$.\n\n\\cite{tina-toni} and \\cite{jasra-filtering} describe how a filter could be constructed using the ABC approximation. Additionally, \\cite{jasra-time-series} applies this filter in the context of SSMs. We first discuss the construction of this filter. Afterwards, we address a particular limitation of this approach through kernel functions.\n\n\\paragraph{Filter construction through ABC}\nIn SSMs, the dimensionality of the observation space is typically low; the observations are often scalar quantities. It is then not necessary to consider any summary statistics.\n\n\\cite{jasra-filtering} consider a modification of the particle filter (\\autoref{alg:particle-filter}) which simulates pseudo-observations according to the observation model, and calculates the importance weights based on their closeness to the true measurements. 
The pseudocode is given in \\autoref{alg:abc-filter}.\n\n\\begin{algorithm}[ht]\n    \\caption{ABC-based filter}\n    \\label{alg:abc-filter}\n    \\begin{algorithmic}[1]\n        \\Input $\\text{Number of particles } N,\\ \\text{current parameter value } \\btheta, \\text{maximum distance } \\epsilon,\\ \\left\\{\\by_1, \\ldots, \\by_T\\right\\}.$\n        \n        \\State $\\text{Sample } \\bx_0^{(i)} \\sim \\sprior(\\bx_0 \\mid \\btheta), \\quad i = 1, \\ldots, N.$ \\Comment{Initialize $N$ particles.}\n        \n        \\State $w_0^{(i)} \\gets \\frac{1}{N}, \\quad i = 1, \\ldots, N.$ \\Comment{Initialize uniform weights.}\n        \n        \\For{$t = 1\\ \\mathbf{to}\\ T$}\n        \\State $\\text{Sample } \\bx_t^{(i)} \\sim \\trans_t(\\bx_t \\mid \\bx_{t-1}^{(i)}, \\btheta), \\quad i = 1, \\ldots, N.$ \\Comment{Sample $N$ new particles.}\n        \n        \\State $\\text{Simulate } \\bu_t^{(i)} \\text{ from } \\obs_t(\\bu_t \\mid \\bx_t^{(i)}, \\btheta), \\quad i = 1, \\ldots, N.$ \\Comment{Simulate $N$ pseudo-observations.}\n        \n        \\State $\\text{Set } w_t^{(i)} \\propto \\I_{\\A_{\\epsilon, \\by_t}}(\\bu_t^{(i)}) w_{t-1}^{(i)}, \\quad i = 1, \\ldots, N.$\n        \n        \\State $\\text{Resample } \\bx_t^{(i)} \\text{ and reset } w_t^{(i)} \\text{ using \\autoref{alg:resampling}}, \\quad i = 1, \\ldots, N.$\n        \\EndFor\n    \\end{algorithmic}\n\\end{algorithm}\n\nThe algorithm proceeds similarly to \\autoref{alg:particle-filter} except for the way the weights are computed. Instead of evaluating the unavailable density $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ at the true observation $\\by_t$, a pseudo-observation $\\bu_t$ is simulated. The weight is then set to a non-zero value if ${\\bu_t \\in \\A_{\\epsilon, \\by_t}}$, and 0 otherwise. It may seem that the weights for the same particle $i$ necessarily collapse to 0 after a number of time steps due to the recursive multiplication in step 6. However, step 7 resets the weights uniformly after resampling, so such collapse does not occur.\n\nAnalogously to \\autoref{sec:particle-filter-estimate}, the weights are then used to approximate the likelihood $p(\\by_{1:T} \\mid \\btheta)$. According to \\cite{jasra-time-series}, the estimate is given by\n\\begin{equation} \\label{eq:abc-likelihood}\n\\widehat{\\aux} = \\prod_{t=1}^T \\frac{1}{N} \\sum_{i=1}^N \\frac{w_t^{(i)}}{\\int_{\\A_{\\epsilon, \\by_t}} \\dx{\\bu}}.\n\\end{equation}\nThe integral in the denominator essentially normalizes the weights $w_t^{(i)}$ to be equal to the probability density of $\\mathcal{U}(\\by_t; \\epsilon)$, the uniform distribution in a sphere centered at $\\by_t$ with radius $\\epsilon$ given in terms of the metric $\\rho$.\n\n\\paragraph{Bias}\nThe use of this ABC filter introduces bias to the parameter inference. Recall that in \\autoref{sec:particle-filter-estimate}, we required $\\widehat{\\aux}$ to be an unbiased estimator of $p(\\by_{1:T} \\mid \\btheta)$. This was the case when the weights were calculated according to $w_t^{(i)} \\propto \\obs_t(\\by_t \\mid \\bx_t) w_{t-1}^{(i)}$. 
This unbiasedness is not preserved here, since $\\widehat{\\aux}$ estimates\n\\begin{equation*}\n\\begin{split}\np^\\epsilon(\\by_{1:T} \\mid \\btheta) &= \\int p^\\epsilon(\\bx_{0:T}, \\by_{1:T} \\mid \\btheta) \\; \\dx{\\bx_{0:T}} \\\\\n&= \\int \\sprior(\\bx_0 \\mid \\btheta) \\prod_{t=1}^T \\trans_t(\\bx_t \\mid \\bx_{t-1}, \\btheta) \\obs_t^\\epsilon(\\by_t \\mid \\bx_t, \\btheta) \\; \\dx{\\bx_{0:T}}\n\\end{split}\n\\end{equation*}\n(compare the inside of the integral with \\eqref{eq:factorization}), as noted by \\cite{jasra-time-series}. Here the approximate observation density is given by the ABC form\n\\begin{equation*}\n\\obs_t^\\epsilon(\\by_t \\mid \\bx_t, \\btheta) = \\frac{\\int \\I_{\\A_{\\epsilon, \\by_t}}(\\bu) \\obs_t(\\bu \\mid \\bx_t, \\btheta) \\; \\dx{\\bu}}{\\int_{\\A_{\\epsilon, \\by_t}} \\dx{\\bu}},\n\\end{equation*}\nsimilarly to \\eqref{eq:abc-integral}. This essentially means that by plugging \\eqref{eq:abc-likelihood} into the Metropolis-Hastings acceptance ratio, the limiting distribution of the underlying Markov chain becomes $p^\\epsilon(\\btheta \\mid \\by_{1:T}) \\propto p^\\epsilon(\\by_{1:T} \\mid \\btheta) \\pprior(\\btheta)$, instead of the correct $p(\\btheta \\mid \\by_{1:T})$. In general, this bias cannot be dealt with, and is a price to pay for using the incorrect observation model.\n\nAn interesting way to address this deficiency has been proposed by \\cite{noisy-abc1} and by \\cite{noisy-abc2}. The authors note that one uses data $\\by_{1:T}$ assumed to have been generated according to \\eqref{eq:factorization}, $p(\\by_{1:T} \\mid \\btheta)$, but fitting the ABC approximation $p^\\epsilon(\\by_{1:T} \\mid \\btheta)$. In an attempt to bring the data closer the model being really fitted, the authors use a sequence of perturbed observations $\\bm{z}_t = \\by_t + \\bm{v}$, $\\bm{v} \\sim \\mathcal{U}(\\bm{0}; \\epsilon)$ which denotes the uniform distribution in a sphere given by $\\rho$, with radius $\\epsilon$ and centered at the origin. It is proved that if $\\btheta$ is estimated according to maximum likelihood, the estimate is consistent when using the perturbed sequence $\\bm{z}_{1:T}$. This approach is called the Noisy ABC.\n\n\n\\paragraph{Use of kernel functions}\nA limitation of \\autoref{alg:abc-rejection} and \\autoref{alg:abc-filter} lies in the use of the indicator function $\\I_{\\A_{\\epsilon, \\by}}$. There are two problems:\n\\begin{enumerate}\n    \\item A practical one; it may happen that no samples are accepted and the output is null in the case of \\autoref{alg:abc-rejection}, or too many weights become zero in the case of \\autoref{alg:abc-filter} and the filter collapses.\n    \\item A more fundamental one, the simulated pseudo-observations $\\bu_t$ are all assigned equal weights, regardless of how far they lie from the true measurement $\\by_t$. Intuitively, a pseudo-observation closer to the true $\\by_t$ should be assigned a higher weight than one which is further away.\n\\end{enumerate}\nBoth issues can be mitigated by considering a general kernel function in place of the indicator $\\I_{\\A_{\\epsilon, \\by}}$. Let a kernel of width $\\epsilon$ centered at $\\by$ and evaluated at $\\bu$ be denoted by $\\kappa(\\bu; \\by, \\epsilon)$. In machine learning, kernel functions are often taken \\emph{proportional} to some symmetric probability density function \\citep{elements}. 
However, as we aim to replace the indicator function $\\I_{\\A_{\\epsilon, \\by}}$ by such kernel, we require the kernel to be properly normalized (integrate to unity) to mirror the normalization in \\eqref{eq:abc-likelihood}.\n\nWith the kernel function, we can write $w_t^{(i)} \\propto \\kappa(\\bu_t; \\by_t, \\epsilon) w_{t-1}^{(i)}$. The likelihood estimate \\eqref{eq:abc-likelihood} becomes\n\\begin{equation} \\label{eq:abc-likelihood-kernel}\n\\widehat{\\aux} = \\prod_{t=1}^T \\frac{1}{N} \\sum_{i=1}^N w_t^{(i)}\n\\end{equation}\ndue to the kernel being normalized. This way, the weights are no longer uniform but reflect the distance of $\\bu_t$ from $\\by_t$. There is also no risk of the filter collapsing due to majority of the weights becoming zero.\n\nIntroducing the kernel function to the weights computation is in principle similar to using importance sampling rather than simple rejection sampling \\citep{information-theory}. Instead of accepting/rejecting the generated samples based on whether they match some criterion, they are all accepted and weighted according to \\emph{how well} they match it.\n\nWith the kernel functions in mind, we describe a procedure for automatic tuning of the kernel width $\\epsilon$, which has been ignored so far. The adopted method has been derived in a one-dimensional setting, i.e., $\\bu, \\by \\in \\R$. We correspondingly denote $\\by$ by $y$ and $\\bu$ by $u$. When considering multivariate observations, the kernels are applied coordinate-wise (assuming independence between the components of $\\by$ or $\\bu$, respectively).\n\nBefore describing the kernel tuning procedure, we give several examples of commonly used kernel functions and comment on their usage. The kernels are shown in \\autoref{fig:kernels}.\n\\begin{enumerate}\n    \\item \\textbf{Gaussian kernel} The Gaussian kernel takes the form\n    \\begin{equation*}\n    \\kappa(u; y, \\epsilon) = \\frac{1}{\\sqrt{2 \\pi \\epsilon^2}} \\exp \\left\\{-\\frac{\\left(u - y\\right)^2}{2 \\epsilon^2}\\right\\}.\n    \\end{equation*}\n    It is one of the most-commonly used kernel functions.\n    \\item \\textbf{Cauchy  kernel} The Cauchy kernel takes the form\n    \\begin{equation*}\n    \\kappa(u; y, \\epsilon) = \\frac{1}{\\pi \\epsilon \\left[ 1 + \\left(\\frac{u - y}{\\epsilon}\\right)^2 \\right]}.\n    \\end{equation*}\n    As opposed to the Gaussian distribution, the Cauchy distribution has heavier tails, which make it suitable for situations with potentially distant observations. This kernel typically assigns non-trivial probability even to distant pseudo-observations, preventing the filter from collapsing under outliers.\n    \n    \\item \\textbf{Uniform kernel} The uniform kernel takes the form\n    \\begin{equation*}\n    \\kappa(u; y, \\epsilon) = \\begin{cases}\n    \\frac{1}{2 \\epsilon}, & y - \\epsilon < u < y + \\epsilon; \\\\\n    0, & \\text{otherwise}.\n    \\end{cases}\n    \\end{equation*}\n    Using this kernel, we recover the standard ABC which accepts $u$ if $u \\in \\A_{\\epsilon, y} = \\left\\{u : \\left| u - y \\right| < \\epsilon \\right\\}$. 
If we are in a situation with multivariate observations $\\by_t$ and apply the uniform kernel coordinate-wise, the set of accepted samples coincides with the standard ABC using $\\rho(\\bu, \\by) = \\max_{i=1}^{d_y} \\left|u_i - y_i\\right|$, the $L_\\infty$ distance.\n\\end{enumerate}\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\linewidth]{kernels}\n    \\caption{The Gaussian, Cauchy and uniform kernel functions centered at $y = 0$ with width $\\epsilon = 3$.}\n    \\label{fig:kernels}\n\\end{figure}\n\n\nThe Noisy ABC procedure described above is then naturally generalized by perturbing the observations by samples from a given kernel, instead of the uniform distribution in a ball.\n\n\n\\paragraph{Kernel width tuning}\nIn this paragraph, we address the issue of tuning the kernel width $\\epsilon$. Since the previous section has shown that the indicator function $\\I_{\\A_{\\epsilon, \\by}}$ is recovered by a particular kernel, the method also applies to the standard ABC formulation.\n\nA careful setting of $\\epsilon$ is necessary. When the kernel width is too low, most of the $N$ generated pseudo-observations are assigned with low probabilities, and the importance weights are close to zero. On the other hand, when the width is too high, even outlying pseudo-observations are assigned a non-trivial probability and shift the filter to incorrect values. In addition, the kernel becomes flat and close to the uniform variant. A manual setting of $\\epsilon$ is thus a non-obvious task without any guidelines in the data. Additionally, the width should somehow reflect the filter evolution over time, since all observations $\\by_t$ are different and may require different kernel widths.\n\nIn this thesis, we adopt a procedure described by \\cite{dedecius}, which is briefly reviewed below. More details can be found in the original paper.\n\nThe method is based on the idea that the true observation model $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ should cover a given number of generated pseudo-observations $\\bu_t^{(i)}, i = 1, \\ldots, N$ by a $100p\\%$ high probability region ($p$-HPR), where $p \\in \\left(0, 1\\right)$ is a given constant. If this is true, the pseudo-observations can be expected to describe the distribution $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ sufficiently-well. As this distribution is not known, it is approximated by the kernel $\\kappa$ evaluated at $\\bu_t^{(i)}, i = 1, \\ldots, N$.\n\nAs given in \\eqref{eq:abc-likelihood-kernel}, the kernel is evaluated at each pseudo-observation $\\bu_t^{(i)}, i = 1, \\ldots, N$ while centered at $\\by_t$. We then need to tune the width at time $t$, denoted $\\epsilon_t$, so that a given fraction $\\frac{\\alpha}{N}$ of the pseudo-observations is covered by the $p$-HPR of the kernel. For the procedure to work, the kernel function $\\kappa$ must be invariant under translation and scaling, i.e., belong to the location-scale family of distributions. 
Many popular kernels including the three discussed above, belong to this family.\n\nThe tuning procedure involves two steps:\n\\begin{enumerate}\n    \\item Identify $u_t^{[\\alpha]}$, the $\\alpha$th closest pseudo-observation to $y_t$.\n    \\item Center the kernel $\\kappa$ at $y_t$ and set the width $\\epsilon_t$ so that\n    \\begin{equation} \\label{eq:kernel-tuning-integral}\n    \\left| \\int_{y_t}^{u_t^{[\\alpha]}} \\kappa(u_t; y_t, \\epsilon_t) \\; \\dx{u_t} \\right| = \\frac{p}{2},\n    \\end{equation}\n    meaning that $u_t^{[\\alpha]}$ lies at the boundary of the $p$-HPR of $\\kappa(\\cdot; y_t, \\epsilon_t)$.\n\\end{enumerate}\nThese two steps are visualized in \\autoref{fig:kernel-tuning}. In the case of multidimensional $\\by_t$ and $\\bu_t$, this procedure is performed coordinate-wise. \n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=\\linewidth]{kernel_tuning}\n    \\caption{Visualization of the kernel tuning procedure. In this picture, ${\\alpha = 2}$, ${p = 0.95}$, and the kernel is Gaussian. Plotted is the kernel along with a number of pseudo-observations $u_t^{(i)}$ and the true measurement $y_t$. Equation \\eqref{eq:kernel-tuning-integral} states that the shaded area has volume $p/2 = 0.475$.}\n    \\label{fig:kernel-tuning}\n\\end{figure}\n\nThe meaning of equation \\eqref{eq:kernel-tuning-integral} is that $u_t^{[\\alpha]}$ is either the $\\frac{1-p}{2}$-quantile or the $\\frac{1+p}{2}$-quantile of $\\kappa(\\cdot; y_t, \\epsilon_t)$, depending on whether $u_t^{[\\alpha]} \\leq y_t$ or $u_t^{[\\alpha]} \\geq y_t$. If we restrict ourselves to symmetric kernels, we may get rid of this case division by exploiting kernel symmetry.\n\nLet $F$ denote the cumulative distribution function of the kernel $\\kappa(\\cdot; 0, 1)$ centered at 0 with width $\\epsilon = 1$. Let its quantile function be denoted by $F^{-1}$. From $\\kappa$ belonging to the location-scale family, we get that the quantile function of a general kernel $\\kappa(\\cdot; y_t, \\epsilon_t)$ is\n\\begin{equation} \\label{eq:location-scale-quantile}\nQ(\\beta) = y_t + \\epsilon_t F^{-1}(\\beta), \\quad \\beta \\in \\left(0, 1\\right).\n\\end{equation}\nAs \\eqref{eq:kernel-tuning-integral} and the assumed kernel symmetry require $\\left| u_t^{[\\alpha]} \\right|$ to be the $\\frac{1+p}{2}$-quantile of $\\kappa(\\cdot; y_t, \\epsilon_t)$, we can substitute $u_t^{[\\alpha]}$ for $Q(\\beta)$ in \\eqref{eq:location-scale-quantile} and solve for $\\epsilon_t$, obtaining\n\\begin{equation} \\label{eq:kernel-tuning}\n\\epsilon_t = \\frac{\\left| u_t^{[\\alpha]} - y_t \\right|}{F^{-1}(\\frac{1+p}{2})},\n\\end{equation}\nwhere the absolute value comes from the kernel being symmetric, so it is irrelevant whether we consider pseudo-observations lower or greater than the true observation $y_t$. The quantile function $F^{-1}$ is uniquely associated with each kernel, and the only free parameters are $\\alpha$ and $p$.\n\n\n\\section{Likelihood estimate through ABC} \\label{sec:abcmh}\nThe main contribution of this thesis is the utilization of the ABC framework to make inference possible even in SSMs with an unknown observation model. We have already derived all the necessary elements needed to state our main result. It remains to put them together by formalizing the entire inference process using ABC as a likelihood estimator.\n\n\\autoref{alg:abc-filter-complete} presents a modification of the particle filter which uses the ABC approximation as a surrogate for the unknown observation model. 
Compared to the particle filter, our formulation is applicable even when the observation model $\\obs_t(\\by_t \\mid \\bx_t, \\btheta)$ is not given as a probability density. Unlike the basic ABC filter described in \\autoref{alg:abc-filter}, our variant employs kernel functions to measure observation similarity, reflecting the fact that the closer a pseudo-measurement is to the true observation, the higher weight it should be assigned. In addition, we account for automatic tuning of the kernel widths by adapting them so that they cover a sufficient number of the simulated pseudo-measurements.\n\n\\begin{algorithm}[ht]\n    \\caption{ABC-based filter with automatic kernel tuning}\n    \\label{alg:abc-filter-complete}\n    \\begin{algorithmic}[1]\n        \\Input $\\text{Number of particles } N,\\ \\text{current parameter value } \\btheta, \\text{ HPR } p, \\newline \\text{number of covered pseudo-observations } \\alpha,\\ \\left\\{\\by_1, \\ldots, \\by_T\\right\\}.$\n        \n        \\State $\\text{Sample } \\bx_0^{(i)} \\sim \\sprior(\\cdot \\mid \\btheta), \\quad i = 1, \\ldots, N.$ \\Comment{Initialize $N$ particles.}\n        \n        \\State $w_0^{(i)} \\gets \\frac{1}{N}, \\quad i = 1, \\ldots, N.$ \\Comment{Initialize uniform weights.}\n        \n        \\For{$t = 1\\ \\mathbf{to}\\ T$}\n        \\State $\\text{Sample } \\bx_t^{(i)} \\sim \\trans_t(\\bx_t \\mid \\bx_{t-1}^{(i)}, \\btheta), \\quad i = 1, \\ldots, N.$ \\Comment{Sample $N$ new particles.}\n        \n        \\State $\\text{Simulate } \\bu_t^{(i)} \\text{ from } \\obs_t(\\cdot \\mid \\bx_t^{(i)}), \\quad i = 1, \\ldots, N.$ \\Comment{Simulate $N$ pseudo-observations.}\n        \n        \\State $\\text{Identify } \\bu_t^{[\\alpha]}.$ \\Comment{Find the $\\alpha$th closest pseudo-observation to $\\by_t$.}\n        \n        \\State $\\epsilon_t \\gets \\frac{\\left| u_t^{[\\alpha]} - y_t \\right|}{F^{-1}(\\frac{1+p}{2})}$ \\Comment{Set the kernel width at time $t$ according to \\eqref{eq:kernel-tuning}.}\n        \n        \\State $\\text{Set } w_t^{(i)} \\propto \\kappa(\\bu_t^{(i)}; \\by_t, \\epsilon_t) w_{t-1}^{(i)}, \\quad i = 1, \\ldots, N.$\n        \n        \\State $\\text{Resample } \\bx_t^{(i)} \\text{ and reset } w_t^{(i)} \\text{ using \\autoref{alg:resampling}}, \\quad i = 1, \\ldots, N.$\n        \\EndFor\n    \\end{algorithmic}\n\\end{algorithm}\n\nFinally, \\autoref{alg:marginal-metropolis-hastings-abc} reformulates the marginal Metropolis-Hastings according to the ABC methodology. Under the new formulation, the estimator $\\widehat{\\aux}$ of $p(\\by_{1:T} \\mid \\btheta)$ is constructed by evaluating \\eqref{eq:abc-likelihood-kernel} on a set of importance weights calculated by \\autoref{alg:abc-filter-complete}. 
The result is used to compute the acceptance probability which again controls whether a proposed $\\btheta^\\prime$ and the corresponding $\\widehat{\\aux}^\\prime$ are accepted or not.\n\n\\begin{algorithm}[ht]\n    \\caption{Marginal Metropolis-Hastings with ABC filter}\n    \\label{alg:marginal-metropolis-hastings-abc}\n    \\begin{algorithmic}[1]\n        \\Input $\\text{Number of samples } M,\\ \\left\\{\\by_1, \\ldots, \\by_T\\right\\}.$\n        \n        \\State $\\text{Initialize } \\btheta^{(0)}.$\n        \\State $\\text{Run \\autoref{alg:abc-filter-complete} with } \\btheta^{(0)} \\text{ to obtain the weights } w_{0,t}^{(i)}, \\quad t = 1, \\ldots, T,\\ i = 1, \\ldots, N.$\n        \\State $\\text{Calculate } \\widehat{\\aux}^{(0)} \\text{ according to \\eqref{eq:abc-likelihood-kernel} using } w_{0,t}^{(i)}.$\n        \n        \\For{$m = 1\\ \\mathbf{to}\\ M$}\n        \\State $\\text{Sample } \\btheta^\\prime \\sim \\prop(\\cdot \\mid \\btheta^{(m-1)}).$\n        \\State $\\text{Run \\autoref{alg:abc-filter-complete} with } \\btheta^\\prime \\text{ to obtain the weights } w_{m,t}^{(i)}, \\quad t = 1, \\ldots, T, \\ i = 1, \\ldots, N.$\n        \\State $\\text{Calculate } \\widehat{\\aux}^\\prime \\text{ according to \\eqref{eq:abc-likelihood-kernel} using } w_{m,t}^{(i)}.$\n        \\State $\\text{Calculate the aceptance probability } $ \\begin{equation*} \\label{eq:acceptance-probability-tractable-abc}\n        \\alpha = \\min \\left\\{1, \\frac{\\widehat{\\aux}^\\prime \\pprior(\\btheta^\\prime)}{\\widehat{\\aux}^{(m-1)} \\pprior(\\btheta^{(m-1)})} \\frac{\\prop(\\btheta^{(m-1)} \\mid \\btheta^\\prime)}{\\prop(\\btheta^\\prime \\mid \\btheta^{(m-1)})} \\right\\}.\n        \\end{equation*}\n        \\State $\\text{Sample } u \\sim \\mathcal{U}(0,1).$\n        \\If {$u \\leq \\alpha$}\n        \\State $\\left( \\btheta^{(m)}, \\widehat{\\aux}^{(m)} \\right) \\gets \\left( \\btheta^\\prime, \\widehat{\\aux}^\\prime \\right)$ \\Comment{With probability $\\alpha$, accept the proposed sample.}\n        \\Else\n        \\State $\\left( \\btheta^{(m)}, \\widehat{\\aux}^{(m)} \\right) \\gets \\left( \\btheta^{(m-1)}, \\widehat{\\aux}^{(m-1)} \\right)$ \\Comment{With probability $1 - \\alpha$, reject the proposed sample.}\n        \\EndIf\n        \\EndFor\n        \n        \\Output $\\left\\{ \\btheta^{(1)}, \\ldots, \\btheta^{(M)} \\right\\}$\n    \\end{algorithmic}\n\\end{algorithm}", "meta": {"hexsha": "226299f0428e4a26a2e5c94ee90c9bcafb8cc61e", "size": 32398, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/tex/chapters/abc.tex", "max_stars_repo_name": "tomaskala/master-thesis", "max_stars_repo_head_hexsha": "746dfd0c0747f4e0a206fe0f975363ca29f52226", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-10-19T10:52:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-20T19:13:34.000Z", "max_issues_repo_path": "thesis/tex/chapters/abc.tex", "max_issues_repo_name": "tomaskala/master-thesis", "max_issues_repo_head_hexsha": "746dfd0c0747f4e0a206fe0f975363ca29f52226", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/tex/chapters/abc.tex", "max_forks_repo_name": "tomaskala/master-thesis", "max_forks_repo_head_hexsha": "746dfd0c0747f4e0a206fe0f975363ca29f52226", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.8850174216, "max_line_length": 837, "alphanum_fraction": 0.7211556269, "num_tokens": 8944, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.763483758172699, "lm_q2_score": 0.727975460709318, "lm_q1q2_score": 0.555797440599852}}
{"text": "%!TEX root = forallxcam.tex\n\\part{Natural deduction for FOL}\n\\label{ch.NDFOL}\n\n\n\\chapter{Basic rules for FOL}\\label{s:BasicFOL}\n\nThe language of FOL uses of all of the connectives of TFL. So proofs in FOL will use all of the basic and derived rules from chapter \\ref{ch.NDTFL}. We shall also use the proof-theoretic notions (particularly, the symbol `$\\proves$') introduced in that chapter. However, we will also need some new basic rules to govern the quantifiers, and to govern the identity sign.\n\n\n\\section{Universal elimination}\n\nFrom the claim that everything is F, you can infer that any particular thing is F. You name it; it's F. So the following should be fine:\n\\begin{proof}\n\t\\hypo{a}{\\forall xRxxd}\n\t\\have{c}{Raad} \\Ae{a}\n\\end{proof}\nWe obtained line 2 by dropping the universal quantifier and replacing every instance of `$x$' with `$a$'. Equally, the following should be allowed:\n\\begin{proof}\n\t\\hypo{a}{\\forall xRxxd}\n\t\\have{c}{Rddd} \\Ae{a}\n\\end{proof}\nWe obtained line 2 here by dropping the universal quantifier and replacing every instance of `$x$' with `$d$'. We could have done the same with any other name we wanted. \n\nThis motivates the universal elimination rule ($\\forall$E):\n\\factoidbox{\n\\begin{proof}\n\t\\have[m]{a}{\\forall \\meta{x}\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{x}\\ldots)}\n\t\\have[\\ ]{c}{\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)} \\Ae{a}\n\\end{proof}}\nThe notation here was introduced in \\S\\ref{s:TruthFOL}. The point is that you can obtain any \\emph{substitution instance} of a universally quantified formula: replace every instance of the quantified variable with any name you like. \n\nI should emphasise that (as with every elimination rule) you can only apply the $\\forall$E rule when the universal quantifier is the main logical operator. So the following is \\emph{banned}:\n\\begin{proof}\n\t\\hypo{a}{\\forall x Bx \\eif Bk}\n\t\\have{c}{Bb \\eif Bk}\\by{naughtily attempting to invoke $\\forall$E}{a}\n\\end{proof}\nThis is illegitimate, since `$\\forall x$' is not the main logical operator in line 1. (If you need a reminder as to why this sort of inference should be banned, reread \\S\\ref{s:MoreMonadic}.)\n\n\\section{Existential introduction}\nFrom the claim that some particular thing is an F, you can infer that something is an F. So we ought to allow:\n\\begin{proof}\n\t\\hypo{a}{Raad}\n\t\\have{b}{\\exists x Raax} \\Ei{a}\n\\end{proof}\nHere, we have replaced the name `$d$' with a variable `$x$', and then existentially quantified over it. Equally, we would have allowed:\n\\begin{proof}\n\t\\hypo{a}{Raad}\n\t\\have{c}{\\exists x Rxxd} \\Ei{a}\n\\end{proof}\nHere we have replaced both instances of the name `$a$' with a variable, and then existentially generalised. But we do not need to replace \\emph{both} instances of a name with a variable: if Narcissus loves himself, then there is someone who loves Narcissus. So we  also allow:\n\\begin{proof}\n\t\\hypo{a}{Raad}\n\t\\have{d}{\\exists x Rxad} \\Ei{a}\n\\end{proof}\nHere we have replaced \\emph{one} instance of the name `$a$' with a variable, and then existentially generalised. These observations motivate our introduction rule, although to explain it, we shall need to introduce some new notation.\n\nWhere $\\meta{A}$ is a sentence containing the name $\\meta{c}$, we can emphasise this by writing `$\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)$'. 
We shall write `$\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{c}\\ldots)$' to indicate any formula obtained by replacing \\emph{some or all} of the instances of the name \\meta{c} with the variable \\meta{x}. Armed with this, our introduction rule is:\n\\factoidbox{\n\\begin{proof}\n\t\\have[m]{a}{\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)}\n\t\\have[\\ ]{c}{\\exists \\meta{x}\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{c}\\ldots)} \\Ei{a}\n\\end{proof}\n\\meta{x} must not occur in $\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)$}\nThe constraint is included to guarantee that any application of the rule yields a  sentence of FOL. Thus the following is allowed:\n\\begin{proof}\n\t\\hypo{a}{Raad}\n\t\\have{d}{\\exists x Rxad} \\Ei{a}\n\t\\have{e}{\\exists y \\exists x Rxyd} \\Ei{d}\n\\end{proof}\nBut this is banned:\n\\begin{proof}\n\t\\hypo{a}{Raad}\n\t\\have{d}{\\exists x Rxad} \\Ei{a}\n\t\\have{e}{\\exists x \\exists x Rxxd}\\by{naughtily attempting to invoke $\\exists$I}{d}\n\\end{proof}\nsince the expression on line 3 contains clashing variables, and so is not a sentence of FOL.\n\n\\section{Empty domains}\nThe following proof combines our two new rules for quantifiers:\n\t\\begin{proof}\n\t\t\\hypo{a}{\\forall x Fx}\n\t\t\\have{in}{Fa}\\Ae{a}\n\t\t\\have{e}{\\exists x Fx}\\Ei{in}\n\t\\end{proof}\nCould this be a bad proof? If anything exists at all, then certainly we can infer that something is F, from the fact that everything is F. But what if \\emph{nothing} exists at all? Then it is surely vacuously true that everything is F; however, it does not following that something is F, for there is nothing to \\emph{be} F. So if we claim that, as a matter of logic alone, `$\\exists x Fx$' follows from `$\\forall x Fx$', then we are claiming that, as a matter of \\emph{logic alone}, there is something rather than nothing. This might strike us as a bit odd.\n\nActually, we are already committed to this oddity. In \\S\\ref{s:FOLBuildingBlocks}, we stipulated that domains in FOL must have at least one member. We then defined a logical truth (of FOL) as a sentence which is true in every interpretation. Since `$\\exists x\\ x=x$' will be true in every interpretation, this \\emph{also} had the effect of stipulating that it is a matter of logic that there is something rather than nothing.\n\nSince it is far from clear that logic should tell us that there must be something rather than nothing, we might well be cheating a bit here. \n\nIf we refuse to cheat, though, then we pay a high cost. Here are three things that we want to hold on to:\n\t\\begin{ebullet}\n\t\t\\item $\\forall x Fx \\proves Fa$: after all, that was $\\forall$E.\n\t\t\\item $Fa \\proves \\exists x Fx$: after all, that was $\\exists$I.\n\t\t\\item the ability to copy-and-paste proofs together: after all, reasoning works by putting lots of little steps together into rather big chains.\n\t\\end{ebullet}\nIf we get what we want on all three counts, then we have to countenance that $\\forall xFx \\proves \\exists x Fx$. So, if we get what we want on all three counts, the proof system alone tells us that there is something rather than nothing. And if we refuse to accept that, then we have to surrender one of the three things that we want to hold on to!\n\nBefore we start thinking about which to surrender, we might want to ask how \\emph{much} of a cheat this is. Granted, it may make it harder to engage in theological debates about why there is something rather than nothing. 
But the rest of the time, we will get along just fine. So maybe we should just regard our proof system (and FOL, more generally) as having a very slightly limited purview. If we ever want to allow for the possibility of \\emph{nothing}, then we shall have to cast around for a more complicated proof system. But for as long as we are content to ignore that possibility, our proof system is perfectly in order. (As, similarly, is the stipulation that every domain must contain at least one object.)\n\n\n\\section{Universal introduction}\nSuppose you had shown of each particular thing that it is F (and that there are no other things to consider). Then you would be justified in claiming that everything is F. This would motivate the following proof rule. If you had established each and every single substitution instance of `$\\forall x Fx$', then you can infer `$\\forall x Fx$'. \n\nUnfortunately, that rule would be utterly unusable. To establish each and every single substitution instance would require proving `$Fa$', `$Fb$', $\\ldots$, `$Fj_2$', $\\ldots$, `$Fr_{79002}$', $\\ldots$, and so on. Indeed, since there are infinitely many names in FOL, this process would never come to an end. So we could never apply that rule. We need to be a bit more cunning in coming up with our rule for introducing universal quantification. \n\nOur cunning thought will be inspired by considering:\n$$\\forall x Fx \\therefore \\forall y Fy$$\nThis argument should \\emph{obviously} be valid. After all, alphabetical variation ought to be a matter of taste, and of no logical consequence. But how might our proof system reflect this? Suppose we begin a proof thus:\n\\begin{proof}\n\t\\hypo{x}{\\forall x Fx} \n\t\\have{a}{Fa} \\Ae{x}\n\\end{proof}\nWe have proved `$Fa$'. And, of course, nothing stops us from using the same justification to prove `$Fb$', `$Fc$', $\\ldots$, `$Fj_2$', $\\ldots$, `$Fr_{79002}, \\ldots$, and so on until we run out of space, time, or patience. But reflecting on this, we see that there is a way to prove $F\\meta{c}$, for any name \\meta{c}. And if we can do it for \\emph{any} thing, we should surely be able to say that `$F$' is true of \\emph{everything}. This therefore justifies us in inferring `$\\forall y Fy$', thus:\n\\begin{proof}\n\t\\hypo{x}{\\forall x Fx}\n\t\\have{a}{Fa} \\Ae{x}\n\t\\have{y}{\\forall y Fy} \\Ai{a}\n\\end{proof}\nThe crucial thought here is that `$a$' was just some \\emph{arbitrary} name. There was nothing special about it---we might have chosen any other name---and still the proof would be fine. And this crucial thought motivates the universal introduction rule ($\\forall$I):\n\\factoidbox{\n\\begin{proof}\n\t\\have[m]{a}{\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)}\n\t\\have[\\ ]{c}{\\forall \\meta{x}\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{x}\\ldots)} \\Ai{a}\n\\end{proof}\n\t\\meta{c} must not occur in any undischarged assumption\\\\\n\t\\meta{x} must not occur in $\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)$}\nA crucial aspect of this rule, though, is bound up in the first constraint. This constraint ensures that we are always reasoning at a sufficiently general level.\\footnote{Recall from \\S\\ref{s:BasicTFL} that we are treating `$\\ered$' as a canonical contradiction. But if it were the canonical contradiction as involving some \\emph{constant}, it might interfere with the constraint mentioned here. 
To avoid such problems, we shall treat `$\\ered$' as a canonical contradiction \\emph{that involves no particular names}.} To see the constraint in action, consider this terrible argument:\n\t\\begin{quote}\n\t\tEveryone loves Kylie Minogue; therefore everyone loves themselves.\n\t\\end{quote}\nWe might symbolise this obviously invalid inference pattern as:\n$$\\forall x Lxk \\therefore \\forall x Lxx$$\nNow, suppose we tried to offer a proof that vindicates this argument:\n\\begin{proof}\n\t\\hypo{x}{\\forall x Lxk}\n\t\\have{a}{Lkk} \\Ae{x}\n\t\\have{y}{\\forall x Lxx} \\by{naughtily attempting to invoke $\\forall$I}{a}\n\\end{proof}\\noindent\nThis is not allowed, because `$k$' occurred already in an undischarged assumption, namely, on line 1. The crucial point is that, if we have made any assumptions about the object we are working with, then we are not reasoning generally enough to license $\\forall$I.\n\nAlthough the name may not occur in any \\emph{undischarged} assumption, it may occur in a \\emph{discharged} assumption. That is, it may occur in a subproof that we have already closed. For example, this is just fine:\n\\begin{proof}\n\t\\open\n\t\t\\hypo{f1}{Gd}\n\t\t\\have{f2}{Gd}\\by{R}{f1}\n\t\\close\n\t\\have{ff}{Gd \\eif Gd}\\ci{f1-f2}\n\t\\have{zz}{\\forall z(Gz \\eif Gz)}\\Ai{ff}\n\\end{proof}\nThis tells us that `$\\forall z (Gz \\eif Gz)$' is a \\emph{theorem}. And that is as it should be.\n\nI should emphasise one last point. As per the conventions of \\S\\ref{s:MainLogicalOperatorQuantifier}, the use of $\\forall$I requires that we are replacing \\emph{every} instance of the name \\meta{c} in $\\meta{A}(\\ldots \\meta{x}\\ldots\\meta{x}\\ldots)$ with the variable \\meta{x}. If we only replace \\emph{some} names and not others, we end up `proving' silly things. For example, consider the argument:\n\t\\begin{quote}\n\tEveryone is as old as themselves; so everyone is as old as Judi Dench\n\t\\end{quote}\nWe might symbolise this as follows:\n$$\\forall x Oxx \\therefore \\forall x Oxd$$\nBut now suppose we tried to \\emph{vindicate} this terrible argument with the following:\n\\begin{proof}\n\t\\hypo{x}{\\forall x Oxx}\n\t\\have{a}{Odd}\\Ae{x}\n\t\\have{y}{\\forall x Oxd}\\by{naughtily attempting to invoke $\\forall$I}{a}\t\n\\end{proof}\nFortunately, our rules do not allow for us to do this: the attempted proof is banned, since it doesn't replace \\emph{every} occurrence of `$d$' in line $2$ with an `$x$'.\n\n\\section{Existential elimination}\nSuppose we know that \\emph{something} is F. The problem is that simply knowing this does not tell us which thing is F. So it would seem that from `$\\exists x Fx$' we cannot immediately conclude `$Fa$', `$Fe_{23}$', or any other substitution instance of the sentence. What can we do?\n\nSuppose we know that something is F, and that everything which is F is G. In (almost) natural English, we might reason thus:\n\t\\begin{quote}\n\t\tSince something is F, there is some particular thing which is an F. We do not know anything about it, other than that it's an F, but for convenience, let's call it `obbie'. So: obbie is F. Since everything which is F is G, it follows that obbie is G. But since obbie is G, it follows that something is G. And nothing depended on which object, exactly, obbie was. 
So, something is G.\n\t\\end{quote}\nWe might try to capture this reasoning pattern in a proof as follows:\n\\begin{proof}\n\t\\hypo{es}{\\exists x Fx}\n\t\\hypo{ast}{\\forall x(Fx \\eif Gx)}\n\t\\open\n\t\t\\hypo{s}{Fo}\n\t\t\\have{st}{Fo \\eif Go}\\Ae{ast}\n\t\t\\have{t}{Go} \\ce{st, s}\n\t\t\\have{et1}{\\exists x Gx}\\Ei{t}\n\t\\close\n\t\\have{et2}{\\exists x Gx}\\Ee{es,s-et1}\n\\end{proof}\\noindent\nBreaking this down: we started by writing down our assumptions. At line 3, we made an additional assumption: `$Fo$'. This was just a substitution instance of `$\\exists x Fx$'. On this assumption, we established `$\\exists x Gx$'. But note that we had made no \\emph{special} assumptions about the object named by `$o$'; we had \\emph{only} assumed that it satisfies `$Fx$'. So nothing depends upon which object it is. And line 1 told us that \\emph{something} satisfies `$Fx$'. So our reasoning pattern was perfectly general. We can discharge the specific assumption `$Fo$', and simply infer `$\\exists x Gx$' on its own.\n\nPutting this together, we obtain the existential elimination rule ($\\exists$E):\n\\factoidbox{\n\\begin{proof}\n\t\\have[m]{a}{\\exists \\meta{x}\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{x}\\ldots)}\n\t\\open\t\n\t\t\\hypo[i]{b}{\\meta{A}(\\ldots \\meta{c} \\ldots \\meta{c}\\ldots)}\n\t\t\\have[j]{c}{\\meta{B}}\n\t\\close\n\t\\have[\\ ]{d}{\\meta{B}} \\Ee{a,b-c}\n\\end{proof}\n\\meta{c} must not occur in any assumption undischarged before line $i$\\\\\n\\meta{c} must not occur in $\\exists \\meta{x}\\meta{A}(\\ldots \\meta{x} \\ldots \\meta{x}\\ldots)$\\\\\n\\meta{c} must not occur in \\meta{B}}\nAs with universal introduction, the constraints are extremely important. To see why, consider the following terrible argument:\n\t\\begin{quote}\n\t\tTim Button is a lecturer. Someone is not a lecturer. So Tim Button is both a lecturer and not a lecturer.\n\t\\end{quote}\nWe might symbolise this obviously invalid inference pattern as follows:\n$$Lb, \\exists x \\enot Lx \\therefore Lb \\eand \\enot Lb$$\nNow, suppose we tried to offer a proof that vindicates this argument:\n\\begin{proof}\n\t\\hypo{f}{Lb}\n\t\\hypo{nf}{\\exists x \\enot Lx}\t\n\t\\open\t\n\t\t\\hypo{na}{\\enot Lb}\n\t\t\\have{con}{Lb \\eand \\enot Lb}\\ae{f, na}\n\t\\close\n\t\\have{econ1}{Lb \\eand \\enot Lb}\\by{naughtily attempting to invoke $\\exists$E }{nf, na-con}\n\\end{proof}\nThe last line of the proof is not allowed. The name that we used in our substitution instance for `$\\exists x \\enot Lx$' on line 3, namely `$b$', occurs in line 4. And this would be no better:\n\\begin{proof}\n\t\\hypo{f}{Lb}\n\t\\hypo{nf}{\\exists x \\enot Lx}\t\n\t\\open\t\n\t\t\\hypo{na}{\\enot Lb}\n\t\t\\have{con}{Lb \\eand \\enot Lb}\\ae{f, na}\n\t\t\\have{con1}{\\exists x (Lx \\eand \\enot Lx)}\\Ei{con}\t\t\n\t\\close\n\t\\have{econ1}{\\exists x (Lx \\eand \\enot Lx)}\\by{naughtily attempting to invoke $\\exists$E }{nf, na-con1}\n\\end{proof}\nThe last line is still not be allowed. For the name that we used in our substitution instance for `$\\exists x \\enot Lx$', namely `$b$', occurs in an undischarged assumption, namely line 1. \n\nThe moral of the story is this. \\emph{If you want to squeeze information out of an existential quantifier, choose a new name for your substitution instance.} That way, you can guarantee that you meet all the constraints on the rule for $\\exists$E.\n\n\\practiceproblems\n\\problempart\nExplain why these two `proofs' are incorrect. 
Also, provide interpretations which invalidate the fallacious argument forms the `proofs' enshrine:\n\\begin{multicols}{2}\n\t\\begin{proof}\n\t\t\\hypo{Rxx}{\\forall x Rxx}\n\t\t\\have{Raa}{Raa}\\Ae{Rxx}\n\t\t\\have{Ray}{\\forall y Ray}\\Ai{Raa}\n\t\t\\have{Rxy}{\\forall x \\forall y Rxy}\\Ai{Ray}\n\t\\end{proof}\n\t\\begin{proof}\n\t\t\\hypo{AE}{\\forall x \\exists y Rxy}\n\t\t\\have{E}{\\exists y Ray}\\Ae{AE}\n\t\t\\open\n\t\t\t\\hypo{ass}{Raa}\n\t\t\t\\have{Ex}{\\exists x Rxx}\\Ei{ass}\n\t\t\\close\n\t\t\\have{con}{\\exists x Rxx}\\Ee{E, ass-Ex}\n\t\\end{proof}\n\\end{multicols}\n\n\\problempart \n\\label{pr.justifyFOLproof}\nThe following three proofs are missing their citations (rule and line numbers). Add them to turn them into bona fide proofs. \n\\begin{proof}\n\\hypo{p1}{\\forall x\\exists y(Rxy \\eor Ryx)}\n\\hypo{p2}{\\forall x\\enot Rmx}\n\\have{3}{\\exists y(Rmy \\eor Rym)}{}\n\t\\open\n\t\t\\hypo{a1}{Rma \\eor Ram}\n\t\t\\have{a2}{\\enot Rma}{}\n\t\t\\have{a3}{Ram}{}\n\t\t\\have{a4}{\\exists x Rxm}{}\n\t\\close\n\\have{n}{\\exists x Rxm} {}\n\\end{proof}\n\\begin{multicols}{2}\n\\begin{proof}\n\\hypo{1}{\\forall x(\\exists yLxy \\eif \\forall zLzx)}\n\\hypo{2}{Lab}\n\\have{3}{\\exists y Lay \\eif \\forall zLza}{}\n\\have{4}{\\exists y Lay} {}\n\\have{5}{\\forall z Lza} {}\n\\have{6}{Lca}{}\n\\have{7}{\\exists y Lcy \\eif \\forall zLzc}{}\n\\have{8}{\\exists y Lcy}{}\n\\have{9}{\\forall z Lzc}{}\n\\have{10}{Lcc}{}\n\\have{11}{\\forall x Lxx}{}\n\\end{proof}\n\\begin{proof}\n\\hypo{a}{\\forall x(Jx \\eif Kx)}\n\\hypo{b}{\\exists x\\forall y Lxy}\n\\hypo{c}{\\forall x Jx}\n\\open\n\t\\hypo{2}{\\forall y Lay}\n\t\\have{3}{Laa}{}\n\t\\have{d}{Ja}{}\n\t\\have{e}{Ja \\eif Ka}{}\n\t\\have{f}{Ka}{}\n\t\\have{4}{Ka \\eand Laa}{}\n\t\\have{5}{\\exists x(Kx \\eand Lxx)}{}\n\\close\n\\have{j}{\\exists x(Kx \\eand Lxx)}{}\n\\end{proof}\n\\end{multicols}\n\n\n\\problempart\n\\label{pr.BarbaraEtc.proof1}\nIn \\S\\ref{s:MoreMonadic} problem part A, we considered fifteen syllogistic figures of Aristotelian logic. Provide proofs for each of the argument forms. NB: You will find it \\emph{much} easier if you symbolise (for example) `No F is G' as `$\\forall x (Fx \\eif \\enot Gx)$'.\n\n\\\n\n\\problempart\n\\label{pr.BarbaraEtc.proof2}\nAristotle and his successors identified other syllogistic forms which depended upon `existential import'. Symbolise each of these argument forms in FOL and offer proofs.\n\\begin{ebullet}\n\t\\item \\textbf{Barbari.} Something is H. All G are F. All H are G. So: Some H is F.\n\t\\item \\textbf{Celaront.} Something is H. No G are F. All H are G. So: Some H is not F.\n\t\\item \\textbf{Cesaro.} Something is H. No F are G. All H are G. So: Some H is not F.\n\t\\item \\textbf{Camestros.} Something is H. All F are G. No H are G. So: Some H is not F.\n\t\\item \\textbf{Felapton.} Something is G. No G are F. All G are H. So: Some H is not F.\n\t\\item \\textbf{Darapti.} Something is G. All G are F. All G are H. So: Some H is F.\n\t\\item \\textbf{Calemos.} Something is H. All F are G. No G are H. So: Some H is not F.\n\t\\item \\textbf{Fesapo.} Something is G. No F is G. All G are H. So: Some H is not F.\n\t\\item \\textbf{Bamalip.} Something is F. All F are G. All G are H. 
So: Some H is F.\n\\end{ebullet}\n\n\\problempart\n\\label{pr.someFOLproofs}\nProvide a proof of each claim.\n\\begin{earg}\n\\item $\\proves \\forall x Fx \\eor \\enot \\forall x Fx$\n\\item $\\proves\\forall z (Pz \\eor \\enot Pz)$\n\\item $\\forall x(Ax\\eif Bx), \\exists x Ax \\proves \\exists x Bx$\n\\item $\\forall x(Mx \\eiff Nx), Ma\\eand\\exists x Rxa\\proves \\exists x Nx$\n\\item $\\forall x \\forall y Gxy\\proves\\exists x Gxx$\n\\item $\\proves\\forall x Rxx\\eif \\exists x \\exists y Rxy$\n\\item $\\proves\\forall y \\exists x (Qy \\eif Qx)$\n\\item $Na \\eif \\forall x(Mx \\eiff Ma), Ma, \\enot Mb\\proves \\enot Na$\n\\item $\\forall x \\forall y (Gxy \\eif Gyx) \\proves \\forall x\\forall y (Gxy \\eiff Gyx)$\n\\item $\\forall x(\\enot Mx \\eor Ljx), \\forall x(Bx\\eif Ljx), \\forall x(Mx\\eor Bx)\\proves \\forall xLjx$\n\\end{earg}\n\n\\problempart\n\\label{pr.likes}\nWrite a symbolisation key for the following argument, symbolise it, and prove it:\n\\begin{quote}\nThere is someone who likes everyone who likes everyone that she likes. Therefore, there is someone who likes herself.\n\\end{quote}\n\n\\problempart\n\\label{pr.FOLequivornot}\nFor each of the following pairs of sentences: If they are provably equivalent, give proofs to show this. If they are not, construct an interpretation to show that they are not logically equivalent.\n\\begin{earg}\n\\item $\\forall x Px \\eif Qc, \\forall x (Px \\eif Qc)$\n\\item $\\forall x\\forall y \\forall z Bxyz, \\forall x Bxxx$\n\\item $\\forall x\\forall y Dxy, \\forall y\\forall x Dxy$\n\\item $\\exists x\\forall y Dxy, \\forall y\\exists x Dxy$\n\\item $\\forall x (Rca \\eiff Rxa), Rca \\eiff \\forall x Rxa$\n\\end{earg}\n\n\\problempart\n\\label{pr.FOLvalidornot}\nFor each of the following arguments: If it is valid in FOL, give a proof. If it is invalid, construct an interpretation to show that it is invalid.\n\\begin{earg}\n\\item $\\exists y\\forall x Rxy \\therefore \\forall x\\exists y Rxy$\n\\item $\\exists x(Px \\eand \\enot Qx) \\therefore \\forall x(Px \\eif \\enot Qx)$\n\\item $\\forall x(Sx \\eif Ta), Sd \\therefore Ta$\n\\item $\\forall x(Ax\\eif Bx), \\forall x(Bx \\eif Cx) \\therefore \\forall x(Ax \\eif Cx)$\n\\item $\\exists x(Dx \\eor Ex), \\forall x(Dx \\eif Fx) \\therefore \\exists x(Dx \\eand Fx)$\n\\item $\\forall x\\forall y(Rxy \\eor Ryx) \\therefore Rjj$\n\\item $\\exists x\\exists y(Rxy \\eor Ryx) \\therefore Rjj$\n\\item $\\forall x Px \\eif \\forall x Qx, \\exists x \\enot Px \\therefore \\exists x \\enot Qx$\n\\end{earg}\n\n\n\\chapter{Conversion of quantifiers}\\label{s:CQ}\n\nIn this section, we shall add some additional rules to the basic rules of the previous section. These govern the interaction of quantifiers and negation.\n \nIn \\S\\ref{s:FOLBuildingBlocks}, we noted that $\\enot\\exists x\\meta{A}$ is logically equivalent to $\\forall x \\enot\\meta{A}$. We shall add some rules to our proof system that govern this. 
In particular, we add:\n\t\\factoidbox{\n\t\\begin{proof}\n\t\t\\have[m]{a}{\\forall \\meta{x} \\enot\\meta{A}}\n\t\t\\have[\\ ]{con}{\\enot \\exists \\meta{x} \\meta{A}}\\cq{a}\n\t\\end{proof}}\nand\n\\factoidbox{\n\t\\begin{proof}\n\t\t\\have[m]{a}{ \\enot \\exists \\meta{x} \\meta{A}}\n\t\t\\have[\\ ]{con}{\\forall  \\meta{x} \\enot \\meta{A}}\\cq{a}\n\t\\end{proof}}\nEqually, we add:\n\\factoidbox{\n\t\\begin{proof}\n\t\t\\have[m]{a}{\\exists \\meta{x}\\enot \\meta{A}}\n\t\t\\have[\\ ]{con}{\\enot \\forall \\meta{x} \\meta{A}}\\cq{a}\n\t\\end{proof}}\nand\n\\factoidbox{\n\t\\begin{proof}\n\t\t\\have[m]{a}{\\enot \\forall \\meta{x} \\meta{A}}\n\t\t\\have[\\ ]{con}{\\exists \\meta{x} \\enot \\meta{A}}\\cq{a}\n\t\\end{proof}}\n\n\\practiceproblems\n\n\\problempart\nShow that the following are jointly contrary:\n\\begin{earg}\n\\item $Sa\\eif Tm, Tm \\eif Sa, Tm \\eand \\enot Sa$\n\\item $\\enot\\exists x Rxa, \\forall x \\forall y Ryx$\n\\item $\\enot\\exists x \\exists y Lxy, Laa$\n\\item $\\forall x(Px \\eif Qx), \\forall z(Pz \\eif Rz), \\forall y Py, \\enot Qa \\eand \\enot Rb$\n\\end{earg}\n\n\\problempart\nShow that each pair of sentences is provably equivalent:\n\\begin{earg}\n\\item $\\forall x (Ax\\eif \\enot Bx), \\enot\\exists x(Ax \\eand Bx)$\n\\item $\\forall x (\\enot Ax\\eif Bd), \\forall x Ax \\eor Bd$\n\\end{earg}\n\n\\problempart\nIn \\S\\ref{s:MoreMonadic}, I considered what happens when we move quantifiers `across' various logical operators. Show that each pair of sentences is provably equivalent:\n\\begin{earg}\n\\item $\\forall x (Fx \\eand Ga), \\forall x Fx \\eand Ga$\n\\item $\\exists x (Fx \\eor Ga), \\exists x Fx \\eor Ga$\n\\item $\\forall x(Ga \\eif Fx), Ga \\eif \\forall x Fx$\n\\item $\\forall x(Fx \\eif Ga), \\exists x Fx \\eif Ga$\n\\item $\\exists x(Ga \\eif Fx), Ga \\eif \\exists x Fx$\n\\item $\\exists x(Fx \\eif Ga), \\forall x Fx \\eif Ga$\n\\end{earg}\nNB: the variable `$x$' does not occur in `$Ga$'. When all the quantifiers occur at the beginning of a sentence, that sentence is said to be in \\emph{prenex normal form}. These equivalences are sometimes called \\emph{prenexing rules}, since they give us a means for putting any sentence into prenex normal form.\n\n\n\\chapter{Rules for identity}\nIn \\S\\ref{s:Interpretations}, I mentioned the philosophically contentious thesis of the \\emph{identity of indiscernibles}. This is the claim that objects which are indiscernible in every way are, in fact, identical to each other. I also mentioned that we will not subscribe to this thesis. It follows that, no matter how much you tell me about two objects, I cannot prove that they are identical. Unless, of course, you tell me that the two objects are, in fact, identical. But then the proof will hardly be very illuminating.\n\nThe general point, though, is that \\emph{no sentences} which do not already contain the identity predicate could justify an inference to `$a=b$'. So our identity introduction rule cannot allow us to infer to an identity claim containing two \\emph{different} names.\n\nHowever, every object is identical to itself. No premises, then, are required in order to conclude that something is identical to itself. So this will be the identity introduction rule:\n\\factoidbox{\n\\begin{proof}\n\t\\have[\\ \\,\\,\\,]{x}{\\meta{c}=\\meta{c}} \\by{=I}{}\n\\end{proof}}\nNotice that this rule does not require referring to any prior lines of the proof. 
For any name \\meta{c}, you can write $\\meta{c}=\\meta{c}$ at any point, with only the {=}I rule as justification. \n\nOur elimination rule is more fun. If you have established `$a=b$', then anything that is true of the object named by `$a$' must also be true of the object named by `$b$'. For any sentence with `$a$' in it, you can replace some or all of the occurrences of `$a$' with `$b$' and produce an equivalent sentence. For example, from `$Raa$' and `$a = b$', you are justified in inferring `$Rab$', `$Rba$' or `$Rbb$'. More generally:\n\\factoidbox{\\begin{proof}\n\t\\have[m]{e}{\\meta{a}=\\meta{b}}\n\t\\have[n]{a}{\\meta{A}(\\ldots \\meta{a} \\ldots \\meta{a}\\ldots)}\n\t\\have[\\ ]{ea1}{\\meta{A}(\\ldots \\meta{b} \\ldots \\meta{a}\\ldots)} \\by{=E}{e,a}\n\\end{proof}}\nThe notation here is as for $\\exists$I. So $\\meta{A}(\\ldots \\meta{a} \\ldots \\meta{a}\\ldots)$ is a formula containing the name $\\meta{a}$, and $\\meta{A}(\\ldots \\meta{b} \\ldots \\meta{a}\\ldots)$ is a formula obtained by replacing one or more instances of the name $\\meta{a}$ with the name $\\meta{b}$. Lines $m$ and $n$ can occur in either order, and do not need to be adjacent, but we always cite the statement of identity first. Symmetrically, we allow:\n\\factoidbox{\\begin{proof}\n\t\\have[m]{e}{\\meta{a}=\\meta{b}}\n\t\\have[n]{a}{\\meta{A}(\\ldots \\meta{b} \\ldots \\meta{b}\\ldots)}\n\t\\have[\\ ]{ea2}{\\meta{A}(\\ldots \\meta{a} \\ldots \\meta{b}\\ldots)} \\by{=E}{e,a}\n\\end{proof}}\nThis rule is sometimes called \\emph{Leibniz's Law}, after Gottfried Leibniz. \n\nTo see the rules in action, we shall prove some quick results. First, we shall prove that identity is \\emph{symmetric}:\n\\begin{proof}\n\t\\open\n\t\t\\hypo{ab}{a = b}\n\t\t\\have{aa}{a = a}\\by{=I}{}\n\t\t\\have{ba}{b = a}\\by{=E}{ab, aa}\n\t\\close\n\t\\have{abba}{a = b \\eif b =a}\\ci{ab-ba}\n\t\\have{ayya}{\\forall y (a = y \\eif y = a)}\\Ai{abba}\n\t\\have{xyyx}{\\forall x \\forall y (x = y \\eif y = x)}\\Ai{ayya}\n\\end{proof}\nWe obtain line 3 by replacing one instance of `$a$' in line 2 with an instance of `$b$'; this is justified given `$a= b$'. \n\nSecond, we shall prove that identity is \\emph{transitive}:\n\\begin{proof}\n\t\\open\n\t\t\\hypo{abc}{a = b \\eand b = c}\n\t\t\\have{ab}{a = b}\\ae{abc}\n\t\t\\have{bc}{b = c}\\ae{abc}\n\t\t\\have{ac}{a = c}\\by{=E}{ab, bc}\n\t\\close\n\t\\have{con}{(a = b \\eand b =c) \\eif a = c}\\ci{abc-ac}\n\t\\have{conz}{\\forall z((a = b \\eand b = z) \\eif a = z)}\\Ai{con}\n\t\\have{cony}{\\forall y\\forall z((a = y \\eand y = z) \\eif a = z)}\\Ai{conz}\n\t\\have{conx}{\\forall x \\forall y \\forall z((x = y \\eand y = z) \\eif x = z)}\\Ai{cony}\n\\end{proof}\nWe obtain line 4 by replacing `$b$' in line 3 with `$a$'; this is justified given `$a= b$'. 
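\n\nAs one final illustration, $=$I and $\\forall$I together show that $\\forall x\\ x = x$ is a \\emph{theorem}. Since the proof involves no undischarged assumptions, the constraint on $\\forall$I is met automatically:\n\\begin{proof}\n\t\\have{aa}{a = a}\\by{=I}{}\n\t\\have{xx}{\\forall x\\ x = x}\\Ai{aa}\n\\end{proof}\n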
\n\n\\practiceproblems\n\\problempart\n\\label{pr.identity}\nProvide a proof of each claim.\n\\begin{earg}\n\\item $Pa \\eor Qb, Qb \\eif b=c, \\enot Pa \\proves Qc$\n\\item $m=n \\eor n=o, An \\proves Am \\eor Ao$\n\\item $\\forall x\\ x=m, Rma\\proves \\exists x Rxx$\n\\item $\\forall x\\forall y(Rxy \\eif x=y)\\proves Rab \\eif Rba$\n\\item $\\enot \\exists x\\enot x = m \\proves \\forall x\\forall y (Px \\eif Py)$\n\\item $\\exists x Jx, \\exists x \\enot Jx\\proves \\exists x \\exists y\\ \\enot x = y$\n\\item $\\forall x(x=n \\eiff Mx), \\forall x(Ox \\eor \\enot Mx)\\proves On$\n\\item $\\exists x Dx, \\forall x(x=p \\eiff Dx)\\proves Dp$\n\\item $\\exists x\\bigl[(Kx \\eand \\forall y(Ky \\eif x=y)) \\eand Bx\\bigr], Kd\\proves Bd$\n\\item $\\proves Pa \\eif \\forall x(Px \\eor \\enot x = a)$\n\\end{earg}\n\n\\problempart\nShow that the following are provably equivalent:\n\\begin{ebullet}\n\\item $\\exists x \\bigl([Fx \\eand \\forall y (Fy \\eif x = y)] \\eand x = n\\bigr)$\n\\item $Fn \\eand \\forall y (Fy \\eif n= y)$\n\\end{ebullet}\nAnd hence that both have a decent claim to symbolise the English sentence `Nick is the F'.\n\n\\\n\n\\problempart\nIn \\S\\ref{sec.identity}, I claimed that the following are logically equivalent symbolisations of the English sentence `there is exactly one F':\n\\begin{ebullet}\n\\item $\\exists x Fx \\eand \\forall x \\forall y \\bigl[(Fx \\eand Fy) \\eif x = y\\bigr]$\n\\item $\\exists x \\bigl[Fx \\eand \\forall y (Fy \\eif x = y)\\bigr]$\n\\item $\\exists x \\forall y (Fy \\eiff x = y)$\n\\end{ebullet}\nShow that they are all provably equivalent. (\\emph{Hint}: to show that three claims are provably equivalent, it suffices to show that the first proves the second, the second proves the third and the third proves the first; think about why.)\n\n\n\\\n\\problempart\nSymbolise the following argument\n\t\\begin{quote}\n\t\tThere is exactly one F. There is exactly one G. Nothing is both F and G. So: there are exactly two things that are either F or G.\n\t\\end{quote}\nAnd offer a proof of it.\n%\\begin{ebullet}\n%\\item  $\\exists x \\bigl[Fx \\eand \\forall y (Fy \\eif x = y)\\bigr], \\exists x \\bigl[Gx \\eand \\forall y ( Gy \\eif x = y)\\bigr], \\forall x (\\enot Fx \\eor \\enot Gx) \\proves \\exists x \\exists y \\bigl[\\enot x = y \\eand \\forall z ((Fz \\eor Gz) \\eif (x = y \\eor x = z))\\bigr]$\n%\\end{ebullet}\n\n\n\n\n\\chapter{Derived rules}\\label{s:DerivedFOL}\nAs in the case of TFL, I first introduced some rules for FOL as basic (in \\S\\ref{s:BasicFOL}), and then added some further rules for conversion of quantifiers (in \\S\\ref{s:CQ}). In fact, the CQ rules should be regarded as \\emph{derived} rules, for they can be derived from the  \\emph{basic} rules of \\S\\ref{s:BasicFOL}. (The point here is as in \\S\\ref{s:Derived}.) Here is a justification for the first CQ rule:\n\\begin{proof}\n\t\\hypo[m]{An}{\\forall x \\enot A x}\n\t\\open\n\t\t\\hypo[k]{E}{\\exists x Ax}\n\t\t\\open\n\t\t\t\\hypo{c}{Ac}%\\by{for $\\exists$E}{}\n\t\t\t\\have{nc}{\\enot Ac}\\Ae{An}\n\t\t\t\\have{red}{\\ered}\\ri{c,nc}\n\t\t\\close\n\t\t\\have{red2}{\\ered}\\Ee{E,c-red}\n\t\\close\n\t\\have{dada}{\\enot \\exists x Ax}\\ni{E-red2}\n\\end{proof}\n%You will note that on line 3 I have written `for $\\exists$E'. This is not technically a part of the proof. It is just a reminder---to me and to you---of why I have bothered to introduce `$\\enot Ac$' out of the blue. You might find it helpful to add similar annotations to assumptions when performing proofs. 
But do not add annotations on lines other than assumptions: every other line of the proof requires its own citation, and your annotations will clutter it.\nHere is a justification of the second CQ rule:\n\\begin{proof}\n\t\\hypo[m]{nEna}{\\exists x  \\enot Ax} \n\t\\open\n\t\t\\hypo[k]{Aa}{\\forall x Ax}\n\t\t\\open\n\t\t\t\\hypo{nac}{\\enot Ac}%\\by{for $\\exists$E}{}\n\t\t\t\\have{a}{Ac}\\Ae{Aa}\n\t\t\t\\have{con}{\\ered}\\ri{a,nac}\n\t\t\\close\n\t\t\\have{con1}{\\ered}\\Ee{nEna, nac-con}\n\t\\close\n\t\\have{dada}{\\enot \\forall x Ax}\\ni{Aa-con1}\n\\end{proof}\nThis explains why the CQ rules can be treated as derived. Similar justifications can be offered for the other two CQ rules. \n\n\\practiceproblems\n\n\\problempart\nOffer proofs which justify the addition of the third and fourth CQ rules as derived rules.\n\n\n\n\\chapter{Proof-theoretic concepts and semantic concepts}\nWe have used two different turnstiles in this book.  This:\n$$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\proves \\meta{C}$$\nmeans that there is some proof which ends with $\\meta{C}$ and whose only undischarged assumptions are among $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$. This is a \\emph{proof-theoretic notion}. By contrast, this:\n$$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\entails \\meta{C}$$\nmeans that  no valuation (or interpretation) makes all of $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ true and $\\meta{C}$ false. This concerns assignments of truth and falsity to sentences. It is a \\emph{semantic notion}.\n\nI cannot emphasise enough that these are different notions. But I can emphasise it a bit more: \\emph{They are different notions.}\n\nAt the risk of repetition:  \\emph{They are different notions.}\n\nOnce you have fully internalised this point, continue reading. \n\nAlthough our semantic and proof-theoretic notions are different, there is a deep connection between them. To explain this connection, I shall start by considering the relationship between logical truths and theorems.\n\nTo show that a sentence is a theorem, you need only perform a proof. Granted, it may be hard to produce a twenty-line proof, but it is not so hard to check each line of the proof and confirm that it is legitimate; and if each line of the proof individually is legitimate, then the whole proof is legitimate. Showing that a sentence is a logical truth, though, requires reasoning about all possible interpretations. Given a choice between showing that a sentence is a theorem and showing that it is a logical truth, it would be easier to show that it is a theorem.\n\nContrariwise, to show that a sentence is \\emph{not} a theorem is hard. We would need to reason about all (possible) proofs. That is very difficult. But to show that a sentence is not a logical truth, you need only construct an interpretation in which the sentence is false. Granted, it may be hard to come up with the interpretation; but once you have done so, it is relatively straightforward to check what truth value it assigns to a sentence. Given a choice between showing that a sentence is not a theorem and showing that it is not a logical truth, it would be easier to show that it is not a logical truth.\n\nFortunately, \\emph{a sentence is a theorem if and only if it is a logical truth}. As a result, if we provide a proof of $\\meta{A}$ on no assumptions, and thus show that $\\meta{A}$ is a theorem, i.e.\\ ${}\\proves \\meta{A}$, we can legitimately infer that $\\meta{A}$ is a logical truth, i.e., $\\entails\\meta{A}$. 
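For instance, our earlier proof of `$\\forall z(Gz \\eif Gz)$' rested on no undischarged assumptions, and so established ${}\\proves \\forall z(Gz \\eif Gz)$; given this result, we may immediately conclude $\\entails \\forall z(Gz \\eif Gz)$, without having to reason about every possible interpretation. 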
Similarly, if we construct a model in which \\meta{A} is false and thus show that it is not a logical truth, i.e.\\ $\\nentails \\meta{A}$, it follows that \\meta{A} is not a theorem, i.e.\\  $\\nproves \\meta{A}$.\n\nMore generally, we have the following powerful result:\n$$\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\proves\\meta{B} \\textbf{ iff }\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\entails\\meta{B}$$\nThis shows that, whilst provability and entailment are \\emph{different} notions, they are extensionally equivalent. As such:\n\t\\begin{ebullet}\n\t\t\\item An argument is \\emph{valid} iff \\emph{the conclusion can be proved from the premises}.\n\t\t\\item Two sentences are \\emph{logically equivalent} iff they are \\emph{provably equivalent}.\n\t\t\\item Sentences are \\emph{jointly consistent} iff they are \\emph{not jointly contrary}.\n\t\\end{ebullet}\nFor this reason, you can pick and choose when to think in terms of proofs and when to think in terms of valuations/interpretations, doing whichever is easier for a given task. The table on the next page summarises which is (usually) easier.\n\nIt is intuitive that provability and semantic entailment should agree. But---let me repeat this---do not be fooled by the similarity of the symbols `$\\entails$' and `$\\proves$'. These two symbols have very different meanings. And the fact that provability and semantic entailment agree is not an easy result to come by. \n\nIn fact, demonstrating that provability and semantic entailment agree is, very decisively, the point at which introductory logic becomes intermediary logic. Agreement, in the case of TFL, is covered in a little sequel to this book, \\texttt{Metatheory}. Agreement, in the case of FOL, is one of the first big results in mathematical logic.\n\n\\begin{sidewaystable}\n\\begin{center}\n\\begin{tabular*}{\\textwidth}{p{.25\\textheight}p{.325\\textheight}p{.325\\textheight}}\n & \\textbf{Yes}  & \\textbf{No}\\\\\n\\\\\nIs \\meta{A} a \\textbf{logical truth}? \n& give a proof which shows $\\proves\\meta{A}$ \n& give an interpretation in which \\meta{A} is false\\\\\n\\\\\nIs \\meta{A} a \\textbf{contradiction}? &\ngive a proof which shows $\\proves\\enot\\meta{A}$ & \ngive an interpretation in which \\meta{A} is true\\\\\n\\\\\n%Is \\meta{A} contingent? & \n%give two interpretations, one in which \\meta{A} is true and another in which \\meta{A} is false & give a proof which either shows $\\proves\\meta{A}$ or $\\proves\\enot\\meta{A}$\\\\\n%\\\\\nAre \\meta{A} and \\meta{B} \\textbf{equivalent}? &\ngive two proofs, one for $\\meta{A}\\proves\\meta{B}$ and one for $\\meta{B}\\proves\\meta{A}$  \n& give an interpretation in which \\meta{A} and \\meta{B} have different truth values\\\\\n\\\\\nAre $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ \\textbf{jointly consistent}? \n& give an interpretation in which all of $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ are true \n& prove a contradiction from assumptions $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$\\\\\n\\\\\nIs $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n \\therefore \\meta{C}$ \\textbf{valid}? 
\n& give a proof with assumptions $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ and concluding with \\meta{C}\n& give an interpretation in which each of $\\meta{A}_1, \\meta{A}_2, \\ldots, \\meta{A}_n$ is true and \\meta{C} is false\\\\\n\\end{tabular*}\n\\end{center}\n\\end{sidewaystable}\n\n
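To see the table in use: to show that `$Fa$' and `$\\enot Fb$' are jointly consistent, it suffices to give a single interpretation making both true---for instance, one whose domain comprises just two objects, where `$a$' names the first, `$b$' names the second, and `$F$' is true of the first object alone. The same interpretation also shows that the argument $Fa \\therefore \\forall x Fx$ is invalid, since it makes the premise true and the conclusion false.\n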
{"text": "\\section{OFE and COFE Constructions}\n\n\\subsection{Trivial Pointwise Lifting}\n\nThe (C)OFE structure on many types can be easily obtained by pointwise lifting of the structure of the components.\nThis is what we do for option $\\maybe\\cofe$, product $(M_i)_{i \\in I}$ (with $I$ some finite index set), sum $\\cofe + \\cofe'$ and finite partial functions $K \\fpfn \\monoid$ (with $K$ infinite countable).\n\n\\subsection{Next (Type-Level Later)}\n\nGiven a OFE $\\cofe$, we define $\\latert\\cofe$ as follows (using a datatype-like notation to define the type):\n\\begin{align*}\n  \\latert\\cofe \\eqdef{}& \\latertinj(x:\\cofe) \\\\\n  \\latertinj(x) \\nequiv{n} \\latertinj(y) \\eqdef{}& n = 0 \\lor x \\nequiv{n-1} y\n\\end{align*}\nNote that in the definition of the carrier $\\latert\\cofe$, $\\latertinj$ is a constructor (like the constructors in Coq), \\ie this is short for $\\setComp{\\latertinj(x)}{x \\in \\cofe}$.\n\n$\\latert(-)$ is a locally \\emph{contractive} functor from $\\OFEs$ to $\\OFEs$.\n\n\n\\subsection{Uniform Predicates}\n\nGiven a camera $\\monoid$, we define the COFE $\\UPred(\\monoid)$ of \\emph{uniform predicates} over $\\monoid$ as follows:\n\\begin{align*}\n\\monoid \\monnra \\SProp \\eqdef{}& \\setComp{\\pred: \\monoid \\nfn \\SProp}\n{\\All n, \\melt, \\meltB. \\melt \\mincl[n] \\meltB \\Ra \\pred(\\melt) \\nincl{n} \\pred(\\meltB)} \\\\\n  \\UPred(\\monoid) \\eqdef{}&  \\faktor{\\monoid \\monnra \\SProp}{\\equiv} \\\\\n  \\pred \\equiv \\predB \\eqdef{}& \\All m, \\melt. m \\in \\mval(\\melt) \\Ra (m \\in \\pred(\\melt) \\iff  m \\in \\predB(\\melt)) \\\\\n  \\pred \\nequiv{n} \\predB \\eqdef{}& \\All m \\le n, \\melt. m \\in \\mval(\\melt) \\Ra (m \\in \\pred(\\melt) \\iff  m \\in \\predB(\\melt))\n\\end{align*}\nYou can think of uniform predicates as monotone, step-indexed predicates over a camera that ``ignore'' invalid elements (as defined by the quotient).\n\n$\\UPred(-)$ is a locally non-expansive functor from $\\CMRAs$ to $\\COFEs$.\n\nIt is worth noting that the above quotient admits canonical\nrepresentatives. More precisely, one can show that every\nequivalence class contains exactly one element $P_0$ such that:\n\\begin{align*}\n  \\All n, \\melt.  (\\mval(\\melt) \\nincl{n} P_0(\\melt)) \\Ra n \\in P_0(\\melt)  \\tagH{UPred-canonical}\n\\end{align*}\nIntuitively, this says that $P_0$ trivially holds whenever the resource is invalid.\nStarting from any element $P$, one can find this canonical\nrepresentative by choosing $P_0(\\melt) := \\setComp{n}{n \\in \\mval(\\melt) \\Ra n \\in P(\\melt)}$.\n\nHence, as an alternative definition of $\\UPred$, we could use the set\nof canonical representatives. This alternative definition would\nsave us from using a quotient. However, the definitions of the various\nconnectives would get more complicated, because we have to make sure\nthey all verify \\ruleref{UPred-canonical}, which sometimes requires some adjustments. 
We\nwould moreover need to prove one more property for every logical\nconnective.\n\n\n\\clearpage\n\\section{RA and Camera Constructions}\n\n\\subsection{Product}\n\\label{sec:prodm}\n\nGiven a family $(M_i)_{i \\in I}$ of cameras ($I$ finite), we construct a camera for the product $\\prod_{i \\in I} M_i$ by lifting everything pointwise.\n\nFrame-preserving updates on the $M_i$ lift to the product:\n\\begin{mathpar}\n  \\inferH{prod-update}\n  {\\melt \\mupd_{M_i} \\meltsB}\n  {\\mapinsert i \\melt f \\mupd \\setComp{ \\mapinsert i \\meltB f}{\\meltB \\in \\meltsB}}\n\\end{mathpar}\n\n\\subsection{Sum}\n\\label{sec:summ}\n\nThe \\emph{sum camera} $\\monoid_1 \\csumm \\monoid_2$ for any cameras $\\monoid_1$ and $\\monoid_2$ is defined as (again, we use a datatype-like notation):\n\\begin{align*}\n  \\monoid_1 \\csumm \\monoid_2 \\eqdef{}& \\cinl(\\melt_1:\\monoid_1) \\mid \\cinr(\\melt_2:\\monoid_2) \\mid \\mundef \\\\\n  \\mval(\\mundef) \\eqdef{}& \\emptyset \\\\\n  \\mval(\\cinl(\\melt)) \\eqdef{}& \\mval_1(\\melt)  \\\\\n  \\cinl(\\melt_1) \\mtimes \\cinl(\\meltB_1) \\eqdef{}& \\cinl(\\melt_1 \\mtimes \\meltB_1)  \\\\\n%  \\munit \\mtimes \\ospending \\eqdef{}& \\ospending \\mtimes \\munit \\eqdef \\ospending \\\\\n%  \\munit \\mtimes \\osshot(\\melt) \\eqdef{}& \\osshot(\\melt) \\mtimes \\munit \\eqdef \\osshot(\\melt) \\\\\n  \\mcore{\\cinl(\\melt_1)} \\eqdef{}& \\begin{cases}\\mnocore & \\text{if $\\mcore{\\melt_1} = \\mnocore$} \\\\ \\cinl({\\mcore{\\melt_1}}) & \\text{otherwise} \\end{cases}\n\\end{align*}\nAbove, $\\mval_1$ refers to the validity of $\\monoid_1$.\nThe validity, composition and core for $\\cinr$ are defined symmetrically.\nThe remaining cases of the composition and core are all $\\mundef$.\n\nNotice that we added the artificial ``invalid'' (or ``undefined'') element $\\mundef$ to this camera just in order to make certain compositions of elements (in this case, $\\cinl$ and $\\cinr$) invalid.\n\nThe step-indexed equivalence is inductively defined as follows:\n\\begin{mathpar}\n  \\infer{x \\nequiv{n} y}{\\cinl(x) \\nequiv{n} \\cinl(y)}\n\n  \\infer{x \\nequiv{n} y}{\\cinr(x) \\nequiv{n} \\cinr(y)}\n\n  \\axiom{\\mundef \\nequiv{n} \\mundef}\n\\end{mathpar}\n\n\nWe obtain the following frame-preserving updates, as well as their symmetric counterparts:\n\\begin{mathpar}\n  \\inferH{sum-update}\n  {\\melt \\mupd_{M_1} \\meltsB}\n  {\\cinl(\\melt) \\mupd \\setComp{ \\cinl(\\meltB)}{\\meltB \\in \\meltsB}}\n\n  \\inferH{sum-swap}\n  {\\All \\melt_\\f \\in M, n. 
n  \\notin \\mval(\\melt \\mtimes \\melt_\\f) \\and \\mvalFull(\\meltB)}\n  {\\cinl(\\melt) \\mupd \\cinr(\\meltB)}\n\\end{mathpar}\nCrucially, the second rule allows us to \\emph{swap} the ``side'' of the sum that the camera is on if $\\melt$ has \\emph{no possible frame}.\n\n\\subsection{Option}\n\nThe definition of the camera/RA axioms already lifted the composition operation on $\\monoid$ to one on $\\maybe\\monoid$.\nWe can easily extend this to a full camera by defining a suitable core, namely\n\\begin{align*}\n  \\mcore{\\mnocore} \\eqdef{}& \\mnocore & \\\\\n  \\mcore{\\maybe\\melt} \\eqdef{}& \\mcore\\melt & \\text{If $\\maybe\\melt \\neq \\mnocore$}\n\\end{align*}\nNotice that this core is total, as the result always lies in $\\maybe\\monoid$ (rather than in $\\maybe{\\mathord{\\maybe\\monoid}}$).\n\n\\subsection{Finite Partial Functions}\n\\label{sec:fpfnm}\n\nGiven some infinite countable $K$ and some camera $\\monoid$, the set of finite partial functions $K \\fpfn \\monoid$ is equipped with a camera structure by lifting everything pointwise.\n\nWe obtain the following frame-preserving updates:\n\\begin{mathpar}\n  \\inferH{fpfn-alloc-strong}\n  {\\text{$G$ infinite} \\and \\mvalFull(\\melt)}\n  {\\emptyset \\mupd \\setComp{\\mapsingleton \\gname \\melt}{\\gname \\in G}}\n\n  \\inferH{fpfn-alloc}\n  {\\mvalFull(\\melt)}\n  {\\emptyset \\mupd \\setComp{\\mapsingleton \\gname \\melt}{\\gname \\in K}}\n\n  \\inferH{fpfn-update}\n  {\\melt \\mupd_\\monoid \\meltsB}\n  {\\mapinsert i \\melt f \\mupd \\setComp{ \\mapinsert i \\meltB f}{\\meltB \\in \\meltsB}}\n\\end{mathpar}\nAbove, $\\mvalFull$ refers to the (full) validity of $\\monoid$.\n\n$K \\fpfn (-)$ is a locally non-expansive functor from $\\CMRAs$ to $\\CMRAs$.\n\n\\subsection{Agreement}\n\nGiven some OFE $\\cofe$, we define the camera $\\agm(\\cofe)$ as follows:\n\\begin{align*}\n  \\agm(\\cofe) \\eqdef{}& \\setComp{\\melt \\in \\finpset\\cofe}{\\melt \\neq \\emptyset} /\\ {\\sim} \\\\[-0.2em]\n  \\melt \\nequiv{n} \\meltB \\eqdef{}& (\\All x \\in \\melt. \\Exists y \\in \\meltB. x \\nequiv{n} y) \\land (\\All y \\in \\meltB. \\Exists x \\in \\melt. x \\nequiv{n} y) \\\\\n  \\textnormal{where }& \\melt \\sim \\meltB \\eqdef{} \\All n. \\melt \\nequiv{n} \\meltB  \\\\\n~\\\\\n%    \\All n \\in {\\melt.V}.\\, \\melt.x \\nequiv{n} \\meltB.x \\\\\n  \\mval(\\melt) \\eqdef{}& \\setComp{n}{ \\All x, y \\in \\melt. 
x \\nequiv{n} y } \\\\\n  \\mcore\\melt \\eqdef{}& \\melt \\\\\n  \\melt \\mtimes \\meltB \\eqdef{}& \\melt \\cup \\meltB\n\\end{align*}\n%Note that the carrier $\\agm(\\cofe)$ is a \\emph{record} consisting of the two fields $c$ and $V$.\n\n$\\agm(-)$ is a locally non-expansive functor from $\\OFEs$ to $\\CMRAs$.\n\nWe define a non-expansive injection $\\aginj$ into $\\agm(\\cofe)$ as follows:\n\\[ \\aginj(x) \\eqdef \\set{x} \\]\nThere are no interesting frame-preserving updates for $\\agm(\\cofe)$, but we can show the following:\n\\begin{mathpar}\n  \\axiomH{ag-val}{\\mvalFull(\\aginj(x))}\n\n  \\axiomH{ag-dup}{\\aginj(x) = \\aginj(x)\\mtimes\\aginj(x)}\n  \n  \\axiomH{ag-agree}{n \\in \\mval(\\aginj(x) \\mtimes \\aginj(y)) \\Ra x \\nequiv{n} y}\n\\end{mathpar}\n\n\n\\subsection{Exclusive Camera}\n\nGiven an OFE $\\cofe$, we define a camera $\\exm(\\cofe)$ such that at most one $x \\in \\cofe$ can be owned:\n\\begin{align*}\n  \\exm(\\cofe) \\eqdef{}& \\exinj(\\cofe) \\mid \\mundef \\\\\n  \\mval(\\melt) \\eqdef{}& \\setComp{n}{\\melt \\notnequiv{n} \\mundef}\n\\end{align*}\nAll cases of composition go to $\\mundef$.\n\\begin{align*}\n  \\mcore{\\exinj(x)} \\eqdef{}& \\mnocore &\n  \\mcore{\\mundef} \\eqdef{}& \\mundef\n\\end{align*}\nRemember that $\\mnocore$ is the ``dummy'' element in $\\maybe\\monoid$ indicating (in this case) that $\\exinj(x)$ has no core.\n\nThe step-indexed equivalence is inductively defined as follows:\n\\begin{mathpar}\n  \\infer{x \\nequiv{n} y}{\\exinj(x) \\nequiv{n} \\exinj(y)}\n\n  \\axiom{\\mundef \\nequiv{n} \\mundef}\n\\end{mathpar}\n$\\exm(-)$ is a locally non-expansive functor from $\\OFEs$ to $\\CMRAs$.\n\nWe obtain the following frame-preserving update:\n\\begin{mathpar}\n  \\inferH{ex-update}{}\n  {\\exinj(x) \\mupd \\exinj(y)}\n\\end{mathpar}\n\n\\subsection{Fractions}\n\nWe define an RA structure on the rational numbers in $(0, 1]$ as follows:\n\\begin{align*}\n  \\fracm \\eqdef{}& \\fracinj(\\mathbb{Q} \\cap (0, 1]) \\mid \\mundef \\\\\n  \\mvalFull(\\melt) \\eqdef{}& \\melt \\neq \\mundef \\\\\n  \\fracinj(q_1) \\mtimes \\fracinj(q_2) \\eqdef{}& \\fracinj(q_1 + q_2) \\quad \\text{if $q_1 + q_2 \\leq 1$} \\\\\n  \\mcore{\\fracinj(x)} \\eqdef{}& \\bot \\\\\n  \\mcore{\\mundef} \\eqdef{}& \\mundef\n\\end{align*}\nAll remaining cases of composition go to $\\mundef$.\nFrequently, we will write just $x$ instead of $\\fracinj(x)$.\n\nThe most important property of this RA is that $1$ has no frame.\nThis is useful in combination with \\ruleref{sum-swap}, and also when used with pairs:\n\\begin{mathpar}\n  \\inferH{pair-frac-change}{}\n  {(1, a) \\mupd (1, b)}\n\\end{mathpar}\n\n%TODO: These need syncing with Coq\n% \\subsection{Finite Powerset Monoid}\n\n% Given an infinite set $X$, we define a monoid $\\textmon{PowFin}$ with carrier $\\mathcal{P}^{\\textrm{fin}}(X)$ as follows:\n% \\[\n% \\melt \\cdot \\meltB \\;\\eqdef\\; \\melt \\cup \\meltB \\quad \\mbox{if } \\melt \\cap \\meltB = \\emptyset\n% \\]\n\n% We obtain:\n% \\begin{mathpar}\n% \t\\inferH{PowFinUpd}{}\n% \t\t{\\emptyset \\mupd \\{ \\{x\\} \\mid x \\in X  \\}}\n% \\end{mathpar}\n\n% \\begin{proof}[Proof of \\ruleref{PowFinUpd}]\n% \tAssume some frame $\\melt_\\f \\sep \\emptyset$. 
Since $\\melt_\\f$ is finite and $X$ is infinite, there exists an $x \\notin \\melt_\\f$.\n% \tPick that for the result.\n% \\end{proof}\n\n% The powerset monoid is cancellative.\n% \\begin{proof}[Proof of cancellativity]\n% \tLet $\\melt_\\f \\mtimes \\melt = \\melt_\\f \\mtimes \\meltB \\neq \\mzero$.\n% \tSo we have $\\melt_\\f \\sep \\melt$ and $\\melt_\\f \\sep \\meltB$, and we have to show $\\melt = \\meltB$.\n% \tAssume $x \\in \\melt$. Hence $x \\in \\melt_\\f \\mtimes \\melt$ and thus $x \\in \\melt_\\f \\mtimes \\meltB$.\n% \tBy disjointness, $x \\notin \\melt_\\f$ and hence $x \\in \\meltB$.\n% \tThe other direction works the same way.\n% \\end{proof}\n\n\n\n\\subsection{Authoritative}\n\\label{sec:auth-camera}\n\nGiven a camera $M$, we construct $\\authm(M)$ modeling someone owning an \\emph{authoritative} element $\\melt$ of $M$, and others potentially owning fragments $\\meltB \\mincl \\melt$ of $\\melt$.\nWe assume that $M$ has a unit $\\munit$, and hence its core is total.\n(If $M$ is an exclusive monoid, the construction is very similar to a half-ownership monoid with two asymmetric halves.)\n\\begin{align*}\n\\authm(M) \\eqdef{}& \\maybe{\\exm(M)} \\times M \\\\\n\\mval( (x, \\meltB ) ) \\eqdef{}& \\setComp{ n }{ (x = \\mnocore \\land n \\in \\mval(\\meltB)) \\lor (\\Exists \\melt. x = \\exinj(\\melt) \\land \\meltB \\mincl_n \\melt \\land n \\in \\mval(\\melt)) } \\\\\n  (x_1, \\meltB_1) \\mtimes (x_2, \\meltB_2) \\eqdef{}& (x_1 \\mtimes x_2, \\meltB_1 \\mtimes \\meltB_2) \\\\\n  \\mcore{(x, \\meltB)} \\eqdef{}& (\\mnocore, \\mcore\\meltB) \\\\\n  (x_1, \\meltB_1) \\nequiv{n} (x_2, \\meltB_2) \\eqdef{}& x_1 \\nequiv{n} x_2 \\land \\meltB_1 \\nequiv{n} \\meltB_2\n\\end{align*}\nNote that $(\\mnocore, \\munit)$ is the unit and asserts no ownership whatsoever, but $(\\exinj(\\munit), \\munit)$ asserts that the authoritative element is $\\munit$.\n\nLet $\\melt, \\meltB \\in M$.\nWe write $\\authfull \\melt$ for full ownership $(\\exinj(\\melt), \\munit)$ and $\\authfrag \\meltB$ for fragmental ownership $(\\mnocore, \\meltB)$ and $\\authfull \\melt , \\authfrag \\meltB$ for combined ownership $(\\exinj(\\melt), \\meltB)$.\n\nThe frame-preserving update involves the notion of a \\emph{local update}:\n\\newcommand\\lupd{\\stackrel{\\mathrm l}{\\mupd}}\n\\begin{defn}\n  It is possible to do a \\emph{local update} from $\\melt_1$ and $\\meltB_1$ to $\\melt_2$ and $\\meltB_2$, written $(\\melt_1, \\meltB_1) \\lupd (\\melt_2, \\meltB_2)$, if\n  \\[ \\All n, \\maybe{\\melt_\\f}. 
n \\in \\mval(\\melt_1) \\land \\melt_1 \\nequiv{n} \\meltB_1 \\mtimes \\maybe{\\melt_\\f} \\Ra n \\in \\mval(\\melt_2) \\land \\melt_2 \\nequiv{n} \\meltB_2 \\mtimes \\maybe{\\melt_\\f} \\]\n\\end{defn}\nIn other words, the idea is that for every possible frame $\\maybe{\\melt_\\f}$ completing $\\meltB_1$ to $\\melt_1$, the same frame also completes $\\meltB_2$ to $\\melt_2$.\n\nWe then obtain\n\\begin{mathpar}\n  \\inferH{auth-update}\n  {(\\melt_1, \\meltB_1) \\lupd (\\melt_2, \\meltB_2)}\n  {\\authfull \\melt_1 , \\authfrag \\meltB_1 \\mupd \\authfull \\melt_2 , \\authfrag \\meltB_2}\n\\end{mathpar}\n\n\\subsection{STS with Tokens}\n\\label{sec:sts-camera}\n\nGiven a state-transition system~(STS, \\ie a directed graph) $(\\STSS, {\\stsstep} \\subseteq \\STSS \\times \\STSS)$, a set of tokens $\\STST$, and a labeling $\\STSL: \\STSS \\ra \\wp(\\STST)$ of \\emph{protocol-owned} tokens for each state, we construct an RA modeling an authoritative current state and permitting transitions given a \\emph{bound} on the current state and a set of \\emph{locally-owned} tokens.\n\nThe construction follows the idea of STSs as described in CaReSL \\cite{caresl}.\nWe first lift the transition relation to $\\STSS \\times \\wp(\\STST)$ (implementing a \\emph{law of token conservation}) and define a stepping relation for the \\emph{frame} of a given token set:\n\\begin{align*}\n (s, T) \\stsstep (s', T') \\eqdef{}& s \\stsstep s' \\land \\STSL(s) \\uplus T = \\STSL(s') \\uplus T' \\\\\n s \\stsfstep{T} s' \\eqdef{}& \\Exists T_1, T_2. T_1 \\disj \\STSL(s) \\cup T \\land (s, T_1) \\stsstep (s', T_2)\n\\end{align*}\n\nWe further define \\emph{closed} sets of states (given a particular set of tokens) as well as the \\emph{closure} of a set:\n\\begin{align*}\n\\STSclsd(S, T) \\eqdef{}& \\All s \\in S. \\STSL(s) \\disj T \\land \\left(\\All s'. s \\stsfstep{T} s' \\Ra s' \\in S\\right) \\\\\n\\upclose(S, T) \\eqdef{}& \\setComp{ s' \\in \\STSS}{\\Exists s \\in S. 
s \\stsftrans{T} s' }\n\\end{align*}\n\nThe STS RA is defined as follows:\n\\begin{align*}\n  \\monoid \\eqdef{}& \\STSauth(s:\\STSS, T:\\wp(\\STST) \\mid \\STSL(s) \\disj T) \\mid{}\\\\& \\STSfrag(S: \\wp(\\STSS), T: \\wp(\\STST) \\mid \\STSclsd(S, T) \\land S \\neq \\emptyset) \\mid \\mundef \\\\\n  \\mvalFull(\\melt) \\eqdef{}& \\melt \\neq \\mundef \\\\\n  \\STSfrag(S_1, T_1) \\mtimes \\STSfrag(S_2, T_2) \\eqdef{}& \\STSfrag(S_1 \\cap S_2, T_1 \\cup T_2) \\qquad\\qquad\\qquad \\text{if $T_1 \\disj T_2$ and $S_1 \\cap S_2 \\neq \\emptyset$} \\\\\n  \\STSfrag(S, T) \\mtimes \\STSauth(s, T') \\eqdef{}& \\STSauth(s, T') \\mtimes \\STSfrag(S, T) \\eqdef \\STSauth(s, T \\cup T') \\qquad \\text{if $T \\disj T'$ and $s \\in S$} \\\\\n  \\mcore{\\STSfrag(S, T)} \\eqdef{}& \\STSfrag(\\upclose(S, \\emptyset), \\emptyset) \\\\\n  \\mcore{\\STSauth(s, T)} \\eqdef{}& \\STSfrag(\\upclose(\\set{s}, \\emptyset), \\emptyset)\n\\end{align*}\nThe remaining cases are all $\\mundef$.\n\nWe will need the following frame-preserving updates:\n\\begin{mathpar}\n  \\inferH{sts-step}{(s, T) \\ststrans (s', T')}\n  {\\STSauth(s, T) \\mupd \\STSauth(s', T')}\n\n  \\inferH{sts-weaken}\n  {\\STSclsd(S_2, T_2) \\and S_1 \\subseteq S_2 \\and T_2 \\subseteq T_1}\n  {\\STSfrag(S_1, T_1) \\mupd \\STSfrag(S_2, T_2)}\n\\end{mathpar}\n\n\\paragraph{The core is not a homomorphism.}\nThe core of the STS construction satisfies the RA axioms only because we do \\emph{not} demand the core to be a homomorphism---all we demand is that the core be monotone with respect to \\ruleref{ra-incl}.\n\nIn other words, the following does \\emph{not} hold for the STS core as defined above:\n\\[ \\mcore\\melt \\mtimes \\mcore\\meltB = \\mcore{\\melt\\mtimes\\meltB} \\]\n\nTo see why, consider the following STS:\n\\newcommand\\st{\\textlog{s}}\n\\newcommand\\tok{\\textmon{t}}\n\\begin{center}\n  \\begin{tikzpicture}[sts]\n    \\node at (0,0)   (s1) {$\\st_1$};\n    \\node at (3,0)  (s2) {$\\st_2$};\n    \\node at (9,0) (s3) {$\\st_3$};\n    \\node at (6,0)  (s4) {$\\st_4$\\\\$[\\tok_1, \\tok_2]$};\n    \n    \\path[sts_arrows] (s2) edge  (s4);\n    \\path[sts_arrows] (s3) edge  (s4);\n  \\end{tikzpicture}\n\\end{center}\nNow consider the following two elements of the STS RA:\n\\[ \\melt \\eqdef \\STSfrag(\\set{\\st_1,\\st_2}, \\set{\\tok_1}) \\qquad\\qquad\n  \\meltB \\eqdef \\STSfrag(\\set{\\st_1,\\st_3}, \\set{\\tok_2}) \\]\n\nWe have:\n\\begin{mathpar}\n  {\\melt\\mtimes\\meltB = \\STSfrag(\\set{\\st_1}, \\set{\\tok_1, \\tok_2})}\n\n  {\\mcore\\melt = \\STSfrag(\\set{\\st_1, \\st_2, \\st_4}, \\emptyset)}\n\n  {\\mcore\\meltB = \\STSfrag(\\set{\\st_1, \\st_3, \\st_4}, \\emptyset)}\n\n  {\\mcore\\melt \\mtimes \\mcore\\meltB = \\STSfrag(\\set{\\st_1, \\st_4}, \\emptyset) \\neq\n    \\mcore{\\melt \\mtimes \\meltB} = \\STSfrag(\\set{\\st_1}, \\emptyset)}\n\\end{mathpar}\n
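\nFor a concrete instance of \\ruleref{sts-step}, consider the STS pictured above once more. Reading the labeling off the picture ($\\STSL(\\st_2) = \\emptyset$ and $\\STSL(\\st_4) = \\set{\\tok_1, \\tok_2}$), the law of token conservation gives $(\\st_2, \\set{\\tok_1, \\tok_2}) \\stsstep (\\st_4, \\emptyset)$, and hence by \\ruleref{sts-step}\n\\begin{mathpar}\n  {\\STSauth(\\st_2, \\set{\\tok_1, \\tok_2}) \\mupd \\STSauth(\\st_4, \\emptyset)}\n\\end{mathpar}\nIn other words, a party owning both tokens may move the authoritative state from $\\st_2$ to $\\st_4$, handing its tokens over to the protocol in the process.\n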
"resource-reasoning/iris-coq", "max_issues_repo_head_hexsha": "f891015e2ab48926cec9618b0eadf0c0fec9ba1b", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-11-01T08:35:12.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-01T08:35:12.000Z", "max_forks_repo_path": "docs/constructions.tex", "max_forks_repo_name": "resource-reasoning/iris-coq", "max_forks_repo_head_hexsha": "f891015e2ab48926cec9618b0eadf0c0fec9ba1b", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-11-01T08:13:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-31T09:26:24.000Z", "avg_line_length": 47.3019390582, "max_line_length": 399, "alphanum_fraction": 0.6741625673, "num_tokens": 6421, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542924, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5557974355058611}}
{"text": "% -*- mode: LaTeX; TeX-PDF-mode: t; -*-\n\\input{./econtexRoot}\\documentclass[PortfolioChoiceWithRiskyHousing]{subfiles}\n\\input{./econtexRoot}\\input{\\LaTeXInputs/econtex_onlyinsubfile}\n\\onlyinsubfile{\\externaldocument{PortfolioChoiceWithRiskyHousing}} % Get xrefs -- esp to appendix -- from main file; only works properly if main file has already been compiled;\n\n\\begin{document}\n\n\\section{The Portfolio Choice Problem for Rental Households}\n\nHouseholds that do not own and instead rent their homes have to decide how much to consume, how much to spend on rent, and how much to save. Their normalized problem can be stated as:\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\wFunc_{t}(\\mRat_{t}) & = \\max_{\\{\\aRat_{t}, \\hRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\wFunc_{t+1}(\\mRat_{t+1}) \\right] \\\\\n\t\t& \\text{s.t.} \\\\\n\t\t\\aRat_{t} & = \\mRat_{t} - \\cRat_{t} - \\hRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n\t\t\\mRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})/\\PGro_{t+1} + \\tShkEmp_{t+1}\n\t\\end{split}\n\\end{equation}\n\nConsider the problem of a consumer that has $\\xRat_{t}$ to spend on consumption and housing. Their problem is\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\utilFunc(\\xRat) & = \\max_{\\{\\cRat, h\\}} \\utilFunc(\\cRat, h) \\\\\n\t\t& \\text{s.t.} \\\\\n\t\t\\xRat & = \\cRat + h\n\t\\end{split}\n\\end{equation}\n\n%The first order condition with respect to $\\cRat$ implies\n%\n%\\begin{equation}\n%\t\\utilFunc^{\\cRat}(\\cRat, h) = \\utilFunc^{h}(\\cRat, h) \n%\\end{equation}\n\nGiven the functional form of utility we are using (CRRA with paramter $\\CRRA$), the well known solution to this simple problem is $\\cRat_{*} = (1-\\alpha)\\xRat$ and $\\hRat_{*} = \\alpha \\xRat$. Restating the problem in terms of $\\xRat$, we obtain:\n\n\\begin{equation}\n\t\\utilFunc(\\xRat) = \\utilFunc(\\cRat_{*}, \\hRat_{*}) = \\frac{(\\cRat_{*}^{1-\\alpha}\\hRat_{*}^{\\alpha})^{1-\\CRRA}}{1-\\CRRA} = \\xFer \\frac{\\xRat^{1-\\CRRA}}{1-\\CRRA}\n\\end{equation}\n\nwhere $\\xFer = \\left( (1-\\alpha)^{1-\\alpha}\\alpha^{\\alpha} \\right)^{1-\\CRRA}$. Because both consumption and housing are non-durable in the case of a rental household, the consumer can first decide how much to spend on both goods ($\\xRat_{t}$) and then decide how much to spend on each of the goods without changing the problem. A further step to simplify the problem is to use iterated expectations to split up the problem into subperiods. 
We can define\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\wOpt_{t}(\\bRat_{t+1}) & = \\Ex_{t}\\left[ \\PGro_{t+1}^{1-\\CRRA} \\wFunc_{t+1}(\\mRat_{t+1}) \\right] \\\\\n\t\t& \\text{where} \\\\\n\t\t\\mRat_{t+1} & = \\bRat_{t+1}/\\PGro_{t+1} + \\tShkEmp_{t+1}\n\t\\end{split}\n\\end{equation}\n\nNow, we can rewrite our original problem as\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\wFunc_{t}(\\mRat_{t}) & = \\max_{\\{\\aRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\xRat_{t}) + \\DiscFac \\Ex_{t} \\left[ \\wOpt_{t}(\\bRat_{t+1}) \\right] \\\\\n\t\t& \\text{s.t.} \\\\\n\t\t\\aRat_{t} & = \\mRat_{t} - \\xRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n\t\t\\bRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})\n\t\\end{split}\n\\end{equation}\n\nwhich embeds the simple subproblem and our defined iterated expectation.\n\nWe can rewrite the problem as\n\n\\begin{equation}\n\t\\wFunc_{t}(\\mRat_{t}) = \\max_{\\{\\aRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\mRat_{t} - \\aRat_{t}) + \\DiscFac \\Ex_{t} \\left[ \\wOpt_{t}\\left( \\aRat_{t}(\\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t}) \\right) \\right] \\\\\n\\end{equation}\n\nThe first order condition with respect to $\\aRat$ provides the Euler equation\n\n\\begin{equation}\n\t\\utilFunc'(\\xRat_{t}) = \\DiscFac \\Ex_{t} \\left[ \\wOpt_{t}'(\\bRat_{t+1})\\Rport_{t+1}(\\riskyshare_{t}) \\right]\n\\end{equation}\n\nand the first order condition with respect to $\\riskyshare_{t}$ is\n\n\\begin{equation}\n\t\\DiscFac \\Ex_{t} \\left[ \\wOpt_{t}'(\\bRat_{t+1})\\aRat_{t}(\\Risky_{t+1}-\\Rfree) \\right] = 0\n\\end{equation}\n\nThe envelope condition is given by\n\n\\begin{equation}\n\t\\wFunc_{t}'(\\mRat_{t}) = \\utilFunc'(\\xRat_{t})\n\\end{equation}\n\nAnd finally,\n\n\\begin{equation}\n\t\\wOpt_{t}'(\\bRat_{t+1}) = \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\wFunc_{t+1}'(\\mRat_{t+1})/\\PGro_{t+1} \\right] = \\Ex_{t} \\left[ \\PGro_{t+1}^{-\\CRRA} \\wFunc_{t+1}'(\\mRat_{t+1}) \\right]\n\\end{equation}\n\n\\section{The portfolio problem of a homeowner with no mortgage}\n\nA homeowner with no mortgage debt is allowed to invest more in their house to increase its size (or they can let it depreciate). In doing so, they choose home investment, consumption, and savings. 
Their problem is summarized as follows:\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t-1}) = \\max_{\\aRat_{t}, \\riskyshare_{t}, \\iRat_{t}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) & + \\DiscFac \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\left( (1-\\timeRate) \\vFunc_{t+1}(\\mRat_{t+1}, \\hRat_{t+1}) + \\timeRate \\wFunc_{t+1}(\\mRat_{t+1}^{\\wFunc}) \\right) \\right] \\\\\n\t\t&\\text{s.t.} \\\\\n\t\t\\hRat_{t} & = (1-\\depr)\\hRat_{t-1} +  \\iRat_{t}/\\Qrisky_0 \\\\\n\t\t\\hRat_{t+1} & = \\hRat_{t}/\\PGro_{t+1} \\\\\n\t\t\\aRat_{t} & = \\mRat_{t} - \\cRat_{t} - \\iRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n\t\t\\mRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})/\\PGro_{t+1} + \\tShkEmp_{t+1} \\\\\n\t\t\\mRat_{t+1}^{\\wFunc} & = \\mRat_{t+1} + \\Qrisky_{t+1}\\hRat_{t+1}\n\t\\end{split}\n\\end{equation}\n\n%OR\n%\n%\\begin{equation}\n%\t\\begin{split}\n%\t\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t}) = \\max_{\\aRat_{t}, \\riskyshare_{t}, \\iRat_{t}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) & + \\DiscFac \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\left( (1-\\timeRate) \\vFunc_{t+1}(\\mRat_{t+1}, \\hRat_{t+1}) + \\timeRate \\wFunc_{t+1}(\\mRat_{t+1}^{\\wFunc}) \\right) \\right] \\\\\n%\t\t&\\text{s.t.} \\\\\n%\t\t\\hRat_{t+1} & = ((1-\\depr)\\hRat_{t-1} + \\Qrisky_0 \\iRat_{t})/\\PGro_{t+1} \\\\\n%\t\t\\aRat_{t} & = \\mRat_{t} - \\cRat_{t} - \\iRat_{t} \\\\\n%\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n%\t\t\\mRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})/\\PGro_{t+1} + \\tShkEmp_{t+1} \\\\\n%\t\t\\mRat_{t+1}^{\\wFunc} & = \\mRat_{t+1} + \\Qrisky_{t+1}\\hRat_{t+1}\n%\t\\end{split}\n%\\end{equation}\n\nTo facilitate the solution method, we can split the above problem into different subperiods.\n\nIn the first subperiod, the household arrives with cash on hand and their previous housing size. They then pick their current size by investing $\\iRat_{t}$ where housing costs are $\\Qrisky_0$. After investing, they are left with net cash on hand after housing costs, and a new housing size.\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t-1}) & = \\max_{\\iRat_{t}} \\vOpt_{t}(\\nRat_{t}, \\hRat_{t}) \\\\\n\t\t& \\text{s.t.} \\\\\n\t\t\\nRat_{t} & = \\mRat_{t} - \\iRat_{t} \\\\\n\t\t\\hRat_{t} & = (1-\\depr)\\hRat_{t-1} + \\iRat_{t}/\\Qrisky_0\n\t\\end{split}\n\\end{equation}\n\nIn the second subperiod, the household arrives with net cash on hand and their current housing size. This subperiod is a standard portfolio choice problem, indexed by their house size. The agent must then choose a level of savings $\\aRat_{t}$ and the proportion of their savings that will go into the risky asset $\\riskyshare_{t}$ versus the safe asset $(1-\\riskyshare_{t})$.\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vOpt_{t}(\\nRat_{t}, \\hRat_{t}) & = \\max_{\\{\\aRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t}\\left[ \\vOptAlt_{t}(\\bRat_{t+1}, \\hRat_{t}) \\right] \\\\\n\t\t\\aRat_{t} & = \\nRat_{t} - \\cRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n\t\t\\bRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})\n\t\\end{split}\n\\end{equation}\n\nFinally in the last subperiod, the household's uncertainty is realized. 
Simultaneously, they observe their permanent and transitory income shocks, whether they will become renters in the next period (function $\\wFunc_{t+1}$ with probability $\\timeRate$), and if they do become renters, the liquidation price of their house per unit of housing.\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vOptAlt_{t}(\\bRat_{t+1}, \\hRat_{t}) & = \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\left( (1-\\timeRate) \\vFunc_{t+1}(\\mRat_{t+1}, \\hRat_{t+1}) + \\timeRate \\wFunc_{t+1}(\\mRat_{t+1}^{\\wFunc}) \\right) \\right] \\\\\n\t\t& \\text{where} \\\\\n\t\t\\hRat_{t+1} & = \\hRat_{t}/\\PGro_{t+1} \\\\\n\t\t\\mRat_{t+1} & = \\bRat_{t+1}/\\PGro_{t+1} + \\tShkEmp_{t+1} \\\\\n\t\t\\mRat_{t+1}^{\\wFunc} & = \\mRat_{t+1} + \\hRat_{t+1} \\Qrisky_{t+1}\n\t\\end{split}\n\\end{equation}\n\n\\subsection{First order conditions: Choosing home investment}\n\nThe problem is\n\n\\begin{equation}\n\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t-1}) = \\max_{\\iRat_{t}} \\vOpt_{t}(\\mRat_{t} - \\iRat_{t}, (1-\\depr)\\hRat_{t-1} + \\iRat_{t}/\\Qrisky_0)\n\\end{equation}\n\nThe first order condition with respect to $\\iRat_{t}$ is\n\n\\begin{equation}\n\t\\vOpt_{t}^{\\nRat}(\\nRat_{t}, \\hRat_{t}) =  \\vOpt_{t}^{\\hRat}(\\nRat_{t}, \\hRat_{t})/\\Qrisky_0\n\\end{equation}\n\nwhich equalizes the marginal benefit of additional net cash-on-hand (cash-on-hand net of home investment) with the marginal cost of a larger house. The envelope conditions are\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vFunc_{t}^{\\mRat}(\\mRat_{t}, \\hRat_{t-1}) & = \\vOpt_{t}^{\\nRat}(\\nRat_{t}, \\hRat_{t}) \\\\\n\t\t\\vFunc_{t}^{\\hRat}(\\mRat_{t}, \\hRat_{t-1}) & = \\vOpt_{t}^{\\hRat}(\\nRat_{t}, \\hRat_{t})(1-\\depr)\n\t\\end{split}\n\\end{equation}\n\n\\subsection{First order conditions: Choosing consumption and portfolio investment}\n\nOnce again, let's reduce the problem to 1 line.\n\n\\begin{equation}\n\t\\vOpt_{t}(\\nRat_{t}, \\hRat_{t}) = \\max_{\\{\\aRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\nRat_{t} - \\aRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t}\\left[ \\vOptAlt_{t}(\\aRat_{t}(\\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t}), \\hRat_{t}) \\right]\n\\end{equation}\n\nNotice that $\\hRat_{t}$ passes through this problem unaltered. 
Indeed, in this subproblem, the house size indexes the portfolio choice (and may affect marginal utility) but requires no further treatment beyond a simple portfolio choice model.\n\nThe first order condition with respect to $\\aRat_{t}$ is\n\n\\begin{equation}\n\t\\utilFunc^{\\cRat}(\\cRat_{t}, \\hRat_{t})  = \\DiscFac \\Ex_{t} \\left[ \\vOptAlt_{t}^{\\bRat}(\\bRat_{t+1}, \\hRat_{t})\\Rport_{t+1}(\\riskyshare_{t}) \\right]\n\\end{equation}\n\nThe first order condition with respect to $\\riskyshare_{t}$ is\n\n\\begin{equation}\n\t\\DiscFac \\Ex_{t}\\left[ \\vOptAlt_{t}^{\\bRat}(\\bRat_{t+1}, \\hRat_{t})\\aRat_{t}(\\Risky_{t+1}-\\Rfree) \\right] = 0\n\\end{equation}\n\nFinally, the envelope conditions are\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vOpt_{t}^{\\nRat}(\\nRat_{t}, \\hRat_{t})  &  = \\utilFunc^{\\cRat}(\\cRat_{t}, \\hRat_{t}) \\\\\n\t\t\\vOpt_{t}^{\\hRat}(\\nRat_{t}, \\hRat_{t}) & = \\utilFunc^{\\hRat}(\\cRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t} \\left[ \\vOptAlt_{t}^{\\hRat}(\\bRat_{t+1}, \\hRat_{t}) \\right]\n\t\\end{split}\n\\end{equation}\n\nThe second envelope condition is due to the nature of the $\\hRat_{t}$ pass-through.\n\n\\subsection{Envelope conditions: Uncertainty is realized}\n\nThe last subperiod is harder to rewrite in one line, but because there is no maximization it is straightforward to calculate the derivatives.\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vOptAlt_{t}^{\\bRat}(\\bRat_{t+1}, \\hRat_{t}) & = \\Ex_{t} \\left[ \\PGro_{t+1}^{-\\CRRA} \\left( (1-\\timeRate)\\vFunc_{t+1}^{\\mRat}(\\mRat_{t+1}, \\hRat_{t+1}) + \\timeRate \\wFunc_{t+1}^{\\mRat}(\\mRat_{t+1}^{\\wFunc}) \\right) \\right] \\\\\n\t\t\\vOptAlt_{t}^{\\hRat}(\\bRat_{t+1}, \\hRat_{t} ) & = \\Ex_{t} \\left[ \\PGro_{t+1}^{-\\CRRA} \\left( (1-\\timeRate)\\vFunc_{t+1}^{\\hRat}(\\mRat_{t+1}, \\hRat_{t+1}) + \\timeRate \\wFunc_{t+1}^{\\mRat}(\\mRat_{t+1}^{\\wFunc})\\Qrisky_{t+1} \\right) \\right]\n\t\\end{split}\n\\end{equation}\n\n\\section{Solving the problem of a homeowner with a mortgage}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t}, \\dRat_{t-1}) & = \\max_{\\aRat_{t}, \\riskyshare_{t}, \\iRat_{t}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t} \\left[ \\PGro_{t+1}^{1-\\CRRA} \\vFunc_{t+1}(\\mRat_{t+1}, \\hRat_{t+1}, \\dRat_{t+1}) \\right] \\\\\n\t\t&\\text{s.t.} \\\\\n\t\t\\dRat_{t} & = \\dRat_{t-1} + (1-\\depr)\\hRat_{t} - \\iRat_{t} \\\\\n\t\t\\aRat_{t} & = \\mRat_{t} - \\cRat_{t} - \\iRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) & = \\Rfree + (\\Risky_{t+1} - \\Rfree)\\riskyshare_{t} \\\\\n\t\t\\mRat_{t+1} & = \\aRat_{t}\\Rport_{t+1}(\\riskyshare_{t})/\\PGro_{t+1} + \\tShkEmp_{t+1} \\\\\n\t\t\\hRat_{t+1} & = \\hRat_{t}/\\PGro_{t+1} \\\\\n\t\t\\mRat_{t+1}^{\\wFunc} & = \\mRat_{t+1} + \\Qrisky_{t+1}\\hRat_{t+1}\n\t\\end{split}\n\\end{equation}\n\nThis problem can also be split up into subparts:\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vFunc_{t}(\\mRat_{t}, \\hRat_{t}, \\dRat_{t-1}) & = \\max_{\\iRat_{t}} \\vOpt_{t}(\\nRat_{t}, \\hRat_{t}, \\dRat_{t}) \\\\\n\t\t\\nRat_{t} & = \\mRat_{t} - \\iRat_{t} \\\\\n\t\t\\dRat_{t} & = \\dRat_{t-1} + (1-\\depr)\\hRat_{t} - \\iRat_{t}\n\t\\end{split}\n\\end{equation}\n\n\\begin{equation}\n\t\\begin{split}\n\t\t\\vOpt_{t}(\\nRat_{t}, \\hRat_{t}, \\dRat_{t}) & = \\max_{\\{\\aRat_{t}, \\riskyshare_{t}\\}} \\utilFunc(\\cRat_{t}, \\hRat_{t}) + \\DiscFac \\Ex_{t}\\left[ \\vOptAlt_{t}(\\bRat_{t+1}, \\hRat_{t}, \\dRat_{t}) \\right] \\\\\n\t\t\\aRat_{t} & = \\nRat_{t} - \\cRat_{t} \\\\\n\t\t\\Rport_{t+1}(\\riskyshare_{t}) 
		\Rport_{t+1}(\riskyshare_{t}) & = \Rfree + (\Risky_{t+1} - \Rfree)\riskyshare_{t} \\
		\bRat_{t+1} & = \aRat_{t}\Rport_{t+1}(\riskyshare_{t})
	\end{split}
\end{equation}

\begin{equation}
	\begin{split}
		\vOptAlt_{t}(\bRat_{t+1}, \hRat_{t}, \dRat_{t}) & = \Ex_{t} \left[ \PGro_{t+1}^{1-\CRRA} \vFunc_{t+1}(\mRat_{t+1}, \hRat_{t+1}, \dRat_{t+1}) \right] \\
		& \text{where} \\
		\hRat_{t+1} & = \hRat_{t}/\PGro_{t+1} \\
		\dRat_{t+1} & = \dRat_{t}\Rfree_{D}/\PGro_{t+1} \\
		\mRat_{t+1} & = \bRat_{t+1}/\PGro_{t+1} + \tShkEmp_{t+1} \\
		\mRat_{t+1}^{\wFunc} & = \mRat_{t+1} + \hRat_{t+1} \Qrisky_{t+1}
	\end{split}
\end{equation}
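
The optimal risky share in the second subperiod solves a first order condition of the same form as in the previous section. Purely as an illustrative sketch of how such a condition could be solved numerically, the snippet below zeroes the expectation by bisection; the CRRA stand-in $\vOptAlt_{t}^{\bRat}(b) = b^{-\rho}$, the lognormal return distribution, and all parameter values are our own assumptions, not part of the model above.

\begin{verbatim}
# Sketch: solve E[ v'(a * Rport(s)) * a * (R - Rf) ] = 0 for the risky share s.
# Assumptions (not from the text): v'(b) = b**(-rho), lognormal R, toy values.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
Rf, rho, a = 1.02, 3.0, 1.0
R = np.exp(rng.normal(0.04, 0.15, 200_000))   # draws of the risky return

def foc(s):
    Rport = Rf + (R - Rf) * s                 # portfolio return at share s
    return np.mean((a * Rport) ** (-rho) * a * (R - Rf))

s_star = brentq(foc, 1e-6, 1 - 1e-6)          # foc(0) > 0 > foc(1) here
\end{verbatim}

With these placeholder values the expectation changes sign on $(0,1)$, so an interior optimal share exists and Brent's method finds it.

\end{document}
\endinput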
{"text": "\\section{Performance}\\label{sec:performance}\n\nWe benchmark the performance of our own variants of the Union-Find decoder, as well as the Union-Find Partitioned-Growth (UFPG) decoder of Algorithm \\ref{algo:ufbb}. Decoding rate $d$ for a given lattice size $L$ and physical error rate is acquired by Monte Carlo simulations. We compare the performance under the independent noise model of only i.i.d. bit-flip errors with chance $p_X$ and the phenomenological noise model, which adds faulty syndrome measurements occurring at chance $p_X$. Code thresholds $p_{\\text{th}}$ are obtained by curve fitting for the crossing point of decoding rates plotted in $(p_X, d)$ space, for a range of values for $L$ and $p_X$ \\cite{wang2003confinement}. We also use the decoding rate at the threshold $d(p_{\\text{th}})= d_{\\text{th}}$ as a metric to compare decoders. \n\n\\subsection{Union-Find decoder variants}\n\nFirst, we show that $p_{\\text{th}}$ can be increased, and the matching weight $\\abs{\\m{C}}$ can be decreased within the Union-Find decoder \\textbf{without} the Partitioned-Growth data structure. We compare distinct variants of our implementation of the Union-Find decoder, that either implements Weighted Growth via bucket sort or no Weighted Growth, and either constructs a forest $F$ of grown clusters post-growth, which is the case in the original Union-Find decoder, or maintains acyclic vertex-trees $\\vset$ during cluster growth. The latter is achieved by applying lines \\ref{algo:dfa}-\\ref{algo:dfb} of Algorithm \\ref{algo:ufbb} in the UF decoder. The labels used for each variant are listed in \\Cref{tab:uftable}. The full descriptions for each of the variants can be found in \\cite{markthesis}.\n\n\\Figure[b](topskip=0pt, botskip=0pt, midskip=0pt){figures/python/comp_matching_weight.pdf}{\n  The matching weight $\\abs{\\m{C}}$ of the Union-Find decoder variants (Table \\ref{tab:uftable}) and the UFPG decoder, normalized to the minimum weight $\\min{\\abs{\\m{C}}}$ of the Minimum-Weight Perfect Matching decoder. All weights are obtained by Monte Carlo simulations on $p_X=0.098$ with a minimum of $100.000$ samples. The x-axis scales linearly with $N = L^2$. \\label{comp_weight}}\n\n\\begin{table}[htb]\n  \\centering\n  \\begin{tabularx}{\\linewidth} { | R{2} || C{.5} | C{.5} | }\n    \\hline\n    & $\\mathbf{F}$ &  $\\pmb{\\vset}$\\\\\n    \\hhline{|=::=:=|}\n    No Weighted Growth & fUF  & vUF \\\\\n    \\hline\n    \\textbf{B}ucket sort Weighted Growth & bfUF & bvUF \\\\\n    \\hline\n  \\end{tabularx}\n  \\caption{Abbreviated names for the variants of the Union-Find decoder.}\\label{tab:uftable}\n\\end{table}\n\n\n\\Figure[bt](topskip=0pt, botskip=0pt, midskip=0pt){figures/python/threshold_ufpg.pdf}{\n  The decoding rates $d$ of \\emph{(a)} the UFPG decoder and \\emph{(c)} the bvUF and MWPM decoders, obtained via Monte Carlo simulations with a minimum of 100.000 samples per lattice size per error rate. \\emph{(a)} The decoding rates of the UFPG decoder do not cross in a single point, such that there is no apparent code threshold. \\emph{(b)} The intersections of threshold curves of subsequent lattice sizes are the so-called threshold coordinates, which follow a trend in $(p_X, d)$ space as the lattice sizes increase. \\emph{(c)} The threshold coordinates of the UFPG decoder occupy the region in $(p_X, d)$ space of the MWPM decoder. 
  \label{threshold_ufpg}}

\begin{table}[htb]
  \centering
  \begin{tabularx}{\linewidth} { | R{1} || C{1} | C{1} | C{1} | C{1} | }
    \hline
    \multirow{2}{*}{} & \multicolumn{2}{c|}{Independent}& \multicolumn{2}{c|}{Phenomenological} \\
    \cline{2-5}
      & $p_{\text{th}}$ & $d_{\text{th}}$ & $p_{\text{th}}$ & $d_{\text{th}}$ \\
    \hhline{|=::=:=:=:=|}
    fUF & $9.72\%$ & $73.34\%$ & $2.53\%$ & $92.39\%$ \\
    \hline
    vUF & $9.79\%$ & $74.32\%$ & $2.56\%$ & $93.64\%$ \\
    \hline
    bfUF & $9.98\%$ & $72.71\%$ & $2.68\%$ & $91.32\%$ \\
    \hline
    bvUF & $10.01\%$ & $72.86\%$ & $2.69\%$ & $92.08\%$ \\
    \hline
    MWPM & $10.35\%$ & $71.58\%$ & $2.97\%$ & $90.24\%$\\
    \hline
  \end{tabularx}
  \caption{Threshold error rates $p_{\text{th}}$ and threshold decoding success rates $d_{\text{th}}$ for the implementations of the Union-Find decoder of \Cref{tab:uftable}.}\label{tab:ufndfwug}
\end{table}


\Figure[t](topskip=0pt, botskip=0pt, midskip=0pt){figures/python/comp_time.pdf}{
  The mean computation time of the UFPG, bvUF, and MWPM decoders near the threshold error rate. All timings are obtained by Monte Carlo simulations at $p_X=0.098$ with a minimum of $100{,}000$ samples. The x-axis scales linearly with $N = L^2$.\label{comp_time}}

The values of $p_{\text{th}}$ and $d_{\text{th}}$ for each variant, including the Minimum-Weight Perfect Matching (MWPM) decoder, are listed in \Cref{tab:ufndfwug}. Weighted Growth has the expected effect of increasing $p_{\text{th}}$. While there is no major increase in $p_{\text{th}}$ from the $\vset$-variants over the $F$-variants, a significant increase in $d_{\text{th}}$ can be observed. We suspect that the acyclic graphs of $\vset$ have shorter branches in between junctions compared to $F$, which leads to a decreased matching weight and an increased $d_{\text{th}}$. We plot the matching weight $\abs{\m{C}}$ of the UF variants, normalized to the minimum weight of the MWPM decoder at $p_X = 0.098$, in \Cref{comp_weight}. Here we see a correlation between a decrease in $\abs{\m{C}}$ and an increase in performance: both Weighted Growth and maintaining $\vset$ during growth increase performance and decrease the matching weight. Furthermore, the matching weight of the $\vset$-variants has a relatively low and constant factor over the minimum weight, which improves upon the $L$-dependent behavior of the $F$-variants.


\subsection{Union-Find Partitioned-Growth decoder}

We benchmark the performance of the Union-Find Partitioned-Growth (UFPG) decoder of Algorithm \ref{algo:ufbb}. The decoding rates are plotted in \Cref{threshold_ufpg}\emph{a} per lattice size. We discover that the decoding-rate curves do not cross in a single point, so there is no clear threshold $p_{\text{th}}$. In fact, the intersections of two curves of subsequent lattice sizes, which we dub threshold coordinates, follow a trend where larger lattice sizes result in a decrease in $p_{\text{th}}$ but an increase in $d_{\text{th}}$. We ascribe the degradation of the threshold coordinate for larger lattices to the Parity Inversion effect. These threshold coordinates $(p_{\text{th}}, d_{\text{th}})$ are plotted in \Cref{threshold_ufpg}\emph{b}.
When these coordinates are plotted together with the decoding rates of the bvUF and MWPM decoders, as in \Cref{threshold_ufpg}\emph{c}, we see that the threshold coordinate for small lattice sizes is similar to the MWPM threshold coordinate. For larger lattice sizes, the UFPG threshold coordinates move towards the bvUF threshold on the $p_X$ axis, but still show increased performance due to the increased $d_{\text{th}}$. Overall, the threshold coordinates occupy a region in $(p_X, d)$ space previously reserved for the MWPM decoder. A direct comparison between the Union-Find, bvUF, UFPG and MWPM decoders is included in \Cref{thres_comp}.

The matching weight $\abs{\m{C}}$ of the UFPG decoder is successfully decreased compared to all UF variants of \Cref{tab:uftable}. For $p_X = 0.098$, an error rate close to $p_{\text{th}}$ for all decoders, the normalized matching weight is halved compared to the bvUF decoder (\Cref{comp_weight}). We also compare the average running time needed to obtain a matching for the UFPG, bvUF, and MWPM decoders. A comparison for a physical error rate near the threshold is included in \Cref{comp_time}. The average running time of UFPG, while not following the worst-case $\m{O}(n \log{n})$ behavior, is also not linear.

Finally, we show in \Cref{comp_lowerror} the performance of the UFPG decoder in the low-error regime with phenomenological noise. The decoding rate of the UFPG decoder improves on the bvUF decoder and behaves similarly to the MWPM decoder. The mean computation time of the decoders in this regime, plotted in \Cref{comp_lowerror_time}, shows that the UFPG decoder performs at about the same speed as the bvUF decoder.\par

The simulator for the surface code, the Union-Find decoder variants, and the Union-Find Partitioned-Growth decoder have all been implemented in Python using our application \cite{qsurface}. The MWPM decoder utilizes the C implementation of BlossomV \cite{kolmogorov2009blossom}, because Python implementations proved substantially slower. Simulations were initially performed on a single 3.20 GHz Intel Core i5 CPU and later parallelized over all 24 threads of 3.60 GHz Intel Xeon E5 CPUs.
{"text": "\\documentclass[a4paper]{article}\n\n\\input{temp}\n\n\\begin{document}\n\n\\title{Methods}\n\n\\maketitle\n\n\\newpage\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Introduction}\n\nMany of the most important equations in mathematical physics are linear. For example the Laplace's equation,\n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2 \\phi\\left(x\\right) = 0\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2 = \\sum_{i=1}^n \\frac{\\partial^2}{\\partial x_i^2}\n\\end{aligned}\n\\end{equation*}\nthe wave equation,\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} = \\nabla^2 \\phi\n\\end{aligned}\n\\end{equation*}\nthe heat equation,\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial \\phi}{\\partial t} = \\kappa \\nabla^2 \\phi\n\\end{aligned}\n\\end{equation*}\nand the Schrodinger's equation:\n\\begin{equation*}\n\\begin{aligned}\ni\\hbar\\frac{\\partial \\phi}{\\partial t} = -\\frac{\\hbar^2}{2m} \\nabla^2\\phi + V\\left(x\\right) \\phi\n\\end{aligned}\n\\end{equation*}\nHere linearity means, if $\\phi_1$ and $\\phi_2$ are each solutions of one of these, then\n\\begin{equation*}\n\\begin{aligned}\n\\lambda_1 \\phi_1 + \\lambda_2 \\phi_2\n\\end{aligned}\n\\end{equation*}\nis also a solution for any constants $\\lambda_1$,$\\lambda_2$.\n\nNow we look at the d'Almbert's solution of the wave equation:\\\\\nIn 1+1 dimensions, the wave equation is \n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{c^2} \\frac{\\partial^2 \\phi}{\\partial t^2} - \\frac{\\partial^2 \\phi}{\\partial x^2}= 0\n\\end{aligned}\n\\end{equation*}\nLet\n\\begin{equation*}\n\\begin{aligned}\nu=x-ct,v=x+ct\n\\end{aligned}\n\\end{equation*}\nThen\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial}{\\partial x}|_t = \\frac{\\partial u}{\\partial x}|_t \\frac{\\partial v}{\\partial x}|_t \\frac{\\partial}{\\partial v}|_u = \\frac{\\partial}{\\partial u}|_v + \\frac{\\partial}{\\partial v}|_u\\\\\n\\frac{1}{c}\\frac{\\partial}{\\partial t}|_x = \\frac{\\partial}{\\partial v}|_u - \\frac{\\partial}{\\partial u}|_v\n\\end{aligned}\n\\end{equation*}\nSo in the $\\left(u,v\\right)$ coordinates, the wave equation becomes\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial^2\\phi}{\\partial u\\partial v} = 0\n\\end{aligned}\n\\end{equation*}\nIntegrating with respect to $v$, we get\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial \\phi}{\\partial u} = F\\left(u\\right)\n\\end{aligned}\n\\end{equation*}\nIntegrating again with respect to $u$, we get\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(u,v\\right) &= G\\left(v\\right) + \\int^u F\\left(u'\\right)du'\\\\\n&= G\\left(x+ct\\right) + H\\left(x-ct\\right)\n\\end{aligned}\n\\end{equation*}\n\nTo fix these arbitrary functions, we need some initial data. 
Suppose we set
\begin{equation*}
\begin{aligned}
\phi\left(x,0\right) = f\left(x\right)
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\partial_t \phi\left(x,0\right)=g\left(x\right)
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
&\phi\left(x,0\right) = G\left(x\right)+H\left(x\right) = f\left(x\right) \implies G'+H'=f'\\
&\partial_t \phi\left(x,0\right) = cG'\left(x\right) - cH'\left(x\right) = g\left(x\right)\\
&G'\left(x\right) = \frac{1}{2}\left(f'+\frac{g}{c}\right) \implies G\left(x\right) = \frac{1}{2}\left[f\left(x\right)-f\left(0\right)\right] + \frac{1}{2c}\int_0^x g\left(y\right)dy\\
&H\left(x\right) = \frac{1}{2} \left[f\left(x\right)+f\left(0\right)\right] - \frac{1}{2c}\int_0^x g\left(y\right)dy
\end{aligned}
\end{equation*}
Therefore our solution obeying both initial conditions is
\begin{equation*}
\begin{aligned}
\phi\left(x,t\right)=\frac{1}{2}\left[f\left(x-ct\right)+f\left(x+ct\right)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g\left(y\right)dy
\end{aligned}
\end{equation*}

\newpage

\section{Vector Spaces}

\subsection{Vector Spaces}

\begin{defi}
A vector space $V$ is a space with an operation $+$ that obeys the following properties:\\
commutativity: $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$;\\
associativity: $\mathbf{u} + \left(\mathbf{v} + \mathbf{w}\right) = \left(\mathbf{u} + \mathbf{v}\right) + \mathbf{w}$;\\
and has an identity $\mathbf{0}$ that satisfies $\mathbf{u} + \mathbf{0} = \mathbf{u}$ $\forall \mathbf{u} \in V$.

We can also multiply vectors by scalars $\lambda \in \R, \C$, with multiplication being distributive over both the vectors and the scalars, i.e.
\begin{equation*}
\begin{aligned}
\lambda \left(\mathbf{u} + \mathbf{v}\right) = \lambda \mathbf{u} + \lambda \mathbf{v}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\left(\lambda+\mu\right) \mathbf{u} = \lambda \mathbf{u} + \mu \mathbf{u}
\end{aligned}
\end{equation*}
\end{defi}

We often give vector spaces an \emph{inner product} $\left(,\right): V\times V \to \C$, obeying:\\
additivity: $\left(\mathbf{u},\mathbf{v}+\mathbf{w}\right) = \left(\mathbf{u},\mathbf{v}\right) + \left(\mathbf{u},\mathbf{w}\right)$;\\
linearity (in the 2nd argument): $\left(\mathbf{u},\lambda \mathbf{v}\right) = \lambda \left(\mathbf{u},\mathbf{v}\right)$;\\
conjugate symmetry: $\left(\mathbf{u},\mathbf{v}\right) = \left(\mathbf{v},\mathbf{u}\right)^*$;\\
positive definiteness: $\left(\mathbf{u},\mathbf{u}\right)\geq 0$ $\forall \mathbf{u}\in V$, with equality holding iff $\mathbf{u} = \mathbf{0}$.

Note that
\begin{equation*}
\begin{aligned}
\left(\lambda\mathbf{u},\mathbf{v}\right) = \left(\mathbf{v},\lambda\mathbf{u}\right)^* = \left[\lambda\left(\mathbf{v},\mathbf{u}\right)\right]^* = \lambda^* \left(\mathbf{u},\mathbf{v}\right)
\end{aligned}
\end{equation*}

A set $\left\{\mathbf{v}_1,...,\mathbf{v}_n\right\}$ of vectors forms a \emph{basis} of $V$ if every $\mathbf{u}\in V$ can be uniquely expressed as a linear combination
\begin{equation*}
\begin{aligned}
\mathbf{u} = \sum_{i=1}^n \lambda_i \mathbf{v}_i
\end{aligned}
\end{equation*}

Note that we can use the inner product to explicitly find the coefficients
$\lambda_i$:
\begin{equation*}
\begin{aligned}
\left(\mathbf{v}_j,\mathbf{u}\right) &= \left(\mathbf{v}_j,\sum_{i=1}^n \lambda_i \mathbf{v}_i\right)\\
&= \sum_{i=1}^n \left(\mathbf{v}_j,\lambda_i \mathbf{v}_i\right)\\
&= \sum_{i=1}^n \lambda_i \left(\mathbf{v}_j,\mathbf{v}_i\right)
\end{aligned}
\end{equation*}
The basis is \emph{orthonormal} if
\begin{equation*}
\begin{aligned}
\left(\mathbf{v}_j,\mathbf{v}_i\right) = \delta_{ij}
\end{aligned}
\end{equation*}
and in this case we have
\begin{equation*}
\begin{aligned}
\left(\mathbf{v}_j,\mathbf{u}\right) = \lambda_j
\end{aligned}
\end{equation*}
So we've found the expression for the coefficients $\lambda_j$ explicitly using the inner product.

\subsection{Functions as infinite dimensional vectors}

A complex valued function $f$ on a domain $\Omega$ is a map $f:\Omega \to \C$.
The set of all these functions is naturally a vector space with the usual addition
\begin{equation*}
\begin{aligned}
\left(f+g\right)\left(x\right) = f\left(x\right)+g\left(x\right)
\end{aligned}
\end{equation*}
and the usual scalar multiplication
\begin{equation*}
\begin{aligned}
\left(\lambda f\right)\left(x\right) = \lambda f\left(x\right)
\end{aligned}
\end{equation*}

Now we want an inner product on this vector space. One possibility is to take
\begin{equation*}
\begin{aligned}
\left(f,g\right) = \int_\Omega f^*\left(x\right)g\left(x\right) d\mu
\end{aligned}
\end{equation*}
where $\mu$ is some \emph{measure}.

An example:
\begin{equation*}
\begin{aligned}
\Omega&=\left[a,b\right],\\
\left(f,g\right) &= \int_a^b f^*\left(x\right)g\left(x\right) dx
\end{aligned}
\end{equation*}

Another example:
\begin{equation*}
\begin{aligned}
\Omega &= \left\{\left(r,\theta\right)\in\R^2 \,|\, r\leq 1\right\}\\
\left(f,g\right) &= \int_\Omega f^*\left(r,\theta\right)g\left(r,\theta\right)\,r\,dr\,d\theta
\end{aligned}
\end{equation*}

For functions on a circle, $f:S^1\to \C$, we can think of these as periodic functions $f\left(\theta+2\pi\right)=f\left(\theta\right)$, with $\theta\in\left[-\pi,\pi\right)$. Fourier found a nice basis.

\newpage

\section{Fourier Series}

\subsection{Fourier Series}

Consider the complex-valued functions $e^{in\theta}$, where $n\in\Z$.
We have
\begin{equation*}
\begin{aligned}
\left(e^{in\theta},e^{im\theta}\right) = \int_{-\pi}^\pi e^{-in\theta} e^{+im\theta} d\theta = \left\{
\begin{array}{ll}
2\pi & \text{ if } n=m\\
0 & \text{ if } n \neq m
\end{array}
\right.\\
\left[\int_{-\pi}^\pi \left[\cos\left(m-n\right)\theta+i\sin\left(m-n\right) \theta\right] d\theta = \left[\frac{\sin\left(m-n\right) \theta - i\cos \left(m-n\right) \theta}{m-n}\right]_{-\pi}^\pi = 0\right]
\end{aligned}
\end{equation*}

So the set $\left\{\frac{e^{in\theta}}{\sqrt{2\pi}}\right\}$ is an orthonormal set of functions on $S^1$.

The $n^{th}$ Fourier component of a general $f:S^1 \to \C$ is
\begin{equation*}
\begin{aligned}
\hat{f_n} = \frac{1}{2\pi} \left(e^{in\theta},f\right) = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta} f\left(\theta\right) d\theta
\end{aligned}
\end{equation*}

The \emph{Fourier series} for $f$ is then
\begin{equation*}
\begin{aligned}
\sum_{n\in\Z} \hat{f_n} e^{in\theta} = f\left(\theta\right)
\end{aligned}
\end{equation*}
However, it is not clear whether this infinite series converges.

For example, let
\begin{equation*}
\begin{aligned}
f\left(\theta\right) = |\theta|
\end{aligned}
\end{equation*}
for $\theta\in\left[-\pi,\pi\right)$. Then for $n \neq 0$,
\begin{equation*}
\begin{aligned}
\hat{f_n} &= \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta} |\theta| d\theta \\&= \frac{1}{2\pi} \int_{-\pi}^\pi |\theta| \cos n\theta d\theta \\&= \frac{1}{\pi} \int_0^\pi \theta \cos n\theta d\theta \\&=
\left\{
\begin{array}{ll}
-\frac{2}{\pi n^2} & \text{ if n is odd}\\
0 & \text{ if n is even}
\end{array}
\right.
\end{aligned}
\end{equation*}
while $\hat{f_0} = \frac{\pi}{2}$.

Another example: let
\begin{equation*}
\begin{aligned}
f\left(\theta\right) = \theta
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
\hat{f_n} &= \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta} \theta d\theta \\&= -\frac{1}{2\pi in}\left[e^{-in\theta} \theta\right]_{-\pi}^\pi + \frac{1}{2\pi in} \int_{-\pi}^\pi e^{-in\theta}d\theta \\&= -\frac{1}{2in}\left[e^{-in\pi} + e^{+in\pi}\right] \\&=-\frac{1}{in}\left(-1\right)^n \\&=\frac{\left(-1\right)^{n+1}}{in}
\end{aligned}
\end{equation*}
if $n\neq 0$, and is $0$ if $n=0$.

Thus $|\theta|$ has Fourier series $\frac{\pi}{2} + \sum_{n\in\Z} \frac{-2}{\pi \left(2n+1\right)^2} e^{i\left(2n+1\right)\theta}$,\\
while $\theta$ has Fourier series $\sum_{n\neq 0} \frac{\left(-1\right)^{n+1}}{in}e^{in\theta}$.\\
The first series converges absolutely, but the second does not ($\sum |\hat{f_n}|$ diverges); $\theta$ is discontinuous as a periodic function (different values at $\pi$ and $-\pi$).

\subsection{Convergence in the norm}
One thing we could mean by `converge' is
\begin{equation*}
\begin{aligned}
\lim_{N\to\infty} \left(S_N f-f,S_N f-f\right) = 0
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
S_N f = \sum_{n=-N}^N \hat{f_n} e^{in\theta}
\end{aligned}
\end{equation*}

Suppose we have convergence in the norm, i.e.
\begin{equation*}
\begin{aligned}
\lim_{N\to \infty} \int_{-\pi}^\pi \left(S_N f-f\right)^* \left(S_N f-f\right) d\theta = \lim_{N\to \infty} \int_{-\pi}^\pi |S_N f\left(\theta\right) - f\left(\theta\right) |^2 d\theta = 0
\end{aligned}
\end{equation*}
The integrand in the second integral is non-negative, so in the limit it must vanish \emph{almost everywhere}. So
\begin{equation*}
\begin{aligned}
\lim_{N\to \infty} S_N f\left(\theta\right) = f\left(\theta\right)
\end{aligned}
\end{equation*}
at \emph{almost all} $\theta \in S^1$. In this case we have
\begin{equation*}
\begin{aligned}
\int_{-\pi}^\pi |S_N f\left(\theta\right) - f\left(\theta\right) |^2 d\theta &= \int_{-\pi}^\pi \left(\sum_{n=-N}^N e^{in\theta} \hat{f_n} - f\left(\theta\right)\right)^* \left(\sum_{m=-N}^N e^{im\theta} \hat{f}_m - f\left(\theta\right)\right) d\theta
\end{aligned}
\end{equation*}
The first (double-sum) term gives
\begin{equation*}
\begin{aligned}
\int_{-\pi}^\pi \left(\sum_{n,m=-N}^N e^{-i\left(n-m\right)\theta} \hat{f_n}^* \hat{f_m}\right) d\theta &= \sum_{n,m=-N}^N \hat{f_n}^* \hat{f_m} \int_{-\pi}^\pi e^{i\left(m-n\right)\theta} d\theta = 2\pi \sum_{n=-N}^N |\hat{f_n}|^2
\end{aligned}
\end{equation*}
since we've shown that the last integral is $0$ unless $m=n$. Also, each cross term gives
\begin{equation*}
\begin{aligned}
-\int_{-\pi}^\pi \sum_{n=-N}^N e^{-in\theta} \hat{f_n}^* f\left(\theta\right) d\theta &= -\sum_{n=-N}^N \hat{f_n}^* \int_{-\pi}^\pi e^{-in\theta} f\left(\theta\right) d\theta \\&= -2\pi \sum_{n=-N}^N |\hat{f_n}|^2
\end{aligned}
\end{equation*}
(and likewise for its conjugate). We thus have
\begin{equation*}
\begin{aligned}
\lim_{N \to \infty} \left[-2\pi \sum_{n=-N}^N |\hat{f_n}|^2 + \int_{-\pi}^\pi f\left(\theta\right)^* f\left(\theta\right)d\theta\right] = 0
\end{aligned}
\end{equation*}
i.e.
\begin{equation*}
\begin{aligned}
\left(f,f\right) = 2\pi \sum_{n\in \Z} |\hat{f_n}|^2
\end{aligned}
\end{equation*}
which is known as \emph{Parseval's theorem}. Note that this is an infinite dimensional analogue of Pythagoras' theorem.

\subsection{Pointwise Convergence}
A stronger notion of convergence is to ask that
\begin{equation*}
\begin{aligned}
\lim_{N\to\infty} S_N f\left(\theta\right) - f\left(\theta\right) = 0
\end{aligned}
\end{equation*}
for \emph{all} $\theta \in S^1$. More precisely, given any $\varepsilon>0$, there exists some $N\left(\varepsilon,\theta\right)$ s.t. $|S_n f\left(\theta\right) - f\left(\theta\right)| <\varepsilon$ for all $n>N\left(\varepsilon,\theta\right)$.

If $N\left(\varepsilon,\theta\right) = N\left(\varepsilon\right)$, i.e. $N$ doesn't depend on $\theta$, then we have \emph{uniform convergence}.
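
As a quick numerical illustration of Parseval's theorem from the previous subsection (a sketch: the truncation level and the use of the sawtooth $f\left(\theta\right)=\theta$, whose coefficients were derived above, are our own choices):

\begin{verbatim}
# Sketch: check Parseval's theorem for f(theta) = theta on [-pi, pi),
# whose coefficients are f_n = (-1)^(n+1)/(i n) for n != 0, f_0 = 0.
import numpy as np

n = np.arange(1, 200_000)
parseval_sum = 2 * np.pi * np.sum(2.0 / n**2)  # 2 pi * sum_{n != 0} |f_n|^2
norm_squared = 2 * np.pi**3 / 3                # (f, f) = integral of theta^2
print(parseval_sum, norm_squared)              # agree to about 4 decimals
\end{verbatim}

For example, let $f\left(\theta\right) = \theta$ for $\theta\in\left[-\pi,\pi\right)$.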
Then
\begin{equation*}
\begin{aligned}
S_N f = \sum_{n=-N, n\neq 0}^N \frac{\left(-1\right)^{n+1}}{in} e^{in\theta} = 2 \sum_{n=1}^N \frac{\left(-1\right)^{n+1}}{n} \sin\left(n\theta\right)
\end{aligned}
\end{equation*}

If $\theta = \left(2k+1\right) \pi$ for some $k\in\Z$, we have
\begin{equation*}
\begin{aligned}
S_N f\left(\left(2k+1\right)\pi\right) = 0
\end{aligned}
\end{equation*}
which is the average of the left and right limits of the original function at that point.

\subsection{Convergence of Fourier Series}
We can establish a simple condition that ensures convergence of a Fourier series.

Suppose for some numbers $\left\{a_n\right\}$, the partial sums
\begin{equation*}
\begin{aligned}
\sum_{n = -N}^N |a_n|
\end{aligned}
\end{equation*}
converge as $N \to \infty$. Then
\begin{equation*}
\begin{aligned}
\sum_{n=-N}^N a_n e^{in\theta}
\end{aligned}
\end{equation*}
converges uniformly to a function $f: S^1 \to \C$, and $f\left(\theta\right)$ is everywhere continuous.

\begin{proof}
Since $\sum_{n=-N}^N |a_n|$ converges, given any $\varepsilon > 0$, $\exists$ $N_0\left(\varepsilon\right)$ s.t.
\begin{equation*}
\begin{aligned}
\sum_{M \leq |n| \leq N} |a_n| < \varepsilon
\end{aligned}
\end{equation*}
for all $N \geq M \geq N_0\left(\varepsilon\right)$ (Cauchy). Therefore
\begin{equation*}
\begin{aligned}
\left|\sum_{M \leq |n| \leq N} a_n e^{in\theta} \right| & \leq \sum_{M \leq |n| \leq N} |e^{in\theta} a_n|\\
& = \sum_{M \leq |n| \leq N} |a_n| \\
&< \varepsilon
\end{aligned}
\end{equation*}
for all $\theta \in S^1$. So
\begin{equation*}
\begin{aligned}
\sum_{n=-N}^N a_n e^{in\theta}
\end{aligned}
\end{equation*}
converges uniformly to some $f\left(\theta\right)$ as $N \to \infty$.\\
The partial sums $\sum_{n=-N}^N a_n e^{in\theta}$ are all continuous functions, and the uniform limit of a sequence of continuous functions is itself continuous (see Analysis).

Also (since the integral is $0$ unless $n=m$, in which case it equals $2\pi$),
\begin{equation*}
\begin{aligned}
a_m &= \sum_{n=-N}^N \left[\frac{1}{2\pi} a_n \int_{-\pi}^\pi e^{i\left(n-m\right)\theta} d\theta\right]\\
&= \frac{1}{2\pi} \int_{-\pi}^\pi \left[\sum_{n=-N}^Na_n e^{in\theta} \right] e^{-im\theta} d\theta\\
&\to \frac{1}{2\pi} \int_{-\pi}^\pi e^{-im\theta} f\left(\theta\right) d\theta \\
&= \hat{f}_m
\end{aligned}
\end{equation*}
as $N \to \infty$, so the $a_m$'s are indeed the Fourier coefficients, and
\begin{equation*}
\begin{aligned}
f\left(\theta\right) = \sum_{n \in \Z} \hat{f}_n e^{in\theta}
\end{aligned}
\end{equation*}
\end{proof}

(Compare with Taylor series: the continuous function $f\left(x\right) = |x|$ has no Taylor expansion around $x=0$, and $e^{-1/x^2}$ is not equal to its Taylor series there.)

\subsection{Integration and Differentiation of Fourier Series}
Integration is a `smoothing' operation. Suppose we have a function $f\left(\theta\right)$ whose Fourier series converges, i.e.
$\left(S_N f\right) \left(\theta\right) = \sum_{n=-N}^N \hat{f}_n e^{in\theta} \to f\left(\theta\right)$ as $N \to \infty$.

Integrating term-by-term, we get
\begin{equation*}
\begin{aligned}
\int_{-\pi}^\theta \left(S_N f\right) \left(\phi\right) d\phi &= \hat{f}_0 \left(\theta-\pi\right) + \sum_{n=-N,n \neq 0}^N \frac{\hat{f}_n}{in}\left[e^{in\theta} - \left(-1\right)^n\right]
\end{aligned}
\end{equation*}
This new series certainly converges, since the original one did by assumption, and each coefficient is suppressed by a further power of $n$ (for $n \neq 0$).

On the other hand, consider the square wave function
\begin{equation*}
\begin{aligned}
f\left(\theta\right) = \left\{
\begin{array}{ll}
-1 & -\pi \leq \theta < 0\\
+1 & 0 < \theta < \pi
\end{array}
\right.
\end{aligned}
\end{equation*}
We have
\begin{equation*}
\begin{aligned}
\hat{f}_n = \frac{1}{2\pi} \int_{-\pi}^\pi e^{-in\theta} f\left(\theta\right) d\theta \sim \frac{1}{n}
\end{aligned}
\end{equation*}
for odd $n$, and zero for even $n$. So (as an exercise)
\begin{equation*}
\begin{aligned}
f\left(\theta\right) \sim \frac{4}{\pi} \sum_{n=1}^\infty \frac{\sin\left(2n-1\right)\theta}{2n-1}
\end{aligned}
\end{equation*}
Now try differentiating term-by-term:
\begin{equation*}
\begin{aligned}
\frac{4}{\pi}\sum_{n=1}^\infty \cos\left(2n-1\right)\theta
\end{aligned}
\end{equation*}
which diverges at $\theta = 0$.

\begin{lemma}
Let $f:S^1 \to \C$ be continuous. If
\begin{equation*}
\begin{aligned}
\sum_{n \in \Z} |n \hat{f}_n|
\end{aligned}
\end{equation*}
converges, then $f$ is actually continuously differentiable, and
\begin{equation*}
\begin{aligned}
\sum_{n=-N}^N in\hat{f}_n e^{in\theta}
\end{aligned}
\end{equation*}
converges uniformly to $f'\left(\theta\right)$ as $N \to \infty$.
\begin{proof}
For $n \neq 0$, $|\hat{f}_n| \leq |n\hat{f}_n|$, so the comparison test tells us $\sum_{n \in \Z} |\hat{f}_n|$ converges. So from before,
\begin{equation*}
\begin{aligned}
\left(S_N f\right) = \sum_{n=-N}^N \hat{f}_n e^{in\theta} \to f\left(\theta\right)
\end{aligned}
\end{equation*}
uniformly as $N \to \infty$. So
\begin{equation*}
\begin{aligned}
\left(S_N f\right)'\left(\theta\right) = \sum_{n=-N}^N in \hat{f}_n e^{in\theta} \to g\left(\theta\right)
\end{aligned}
\end{equation*}
uniformly for some continuous $g\left(\theta\right)$. But since $S_N f\to f$, $\left(S_N f\right)' \to g$ uniformly implies that $g\left(\theta\right) = f'\left(\theta\right)$, and hence $f'\left(\theta\right)$ is continuous.
So $f$ is continuously differentiable.
\end{proof}
\end{lemma}

\begin{lemma}
Let $f:S^1 \to \C$ be $\left(m-1\right)$ times continuously differentiable, and let $f^{\left(m-1\right)}$ itself be differentiable with continuous derivative everywhere except for some finite set of points $\left\{\theta_1,...\theta_r\right\} \in S^1$.\\
(For example, let
\begin{equation*}
\begin{aligned}
f\left(\theta\right) = \frac{|\theta|^3}{3}
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
f''\left(\theta\right)= 2|\theta|
\end{aligned}
\end{equation*}
which is continuous but not differentiable at $\theta = 0$.)\\
If also $|f^{\left(m\right)} \left(\theta\right)| \leq M$ for all $\theta \in S^1 \backslash \left\{\theta_1,...,\theta_r\right\}$, then
\begin{equation*}
\begin{aligned}
\left|\hat{f}_n\right| \leq \frac{M}{|n|^m}
\end{aligned}
\end{equation*}
for all $n \neq 0$.
\begin{proof}
\begin{equation*}
\begin{aligned}
\hat{f}_n &= \frac{1}{2\pi} \int_{-\pi}^\pi e^{-in\theta} f\left(\theta\right) d\theta\\
&= - \frac{1}{2\pi in} \left[e^{-in\theta} f\left(\theta\right) \right]_{-\pi}^\pi + \frac{1}{2\pi in} \int_{-\pi}^\pi e^{-in\theta} f'\left(\theta\right) d\theta\\
&= ...\\
&= \frac{1}{2\pi \left(in\right)^m} \int_{-\pi}^\pi e^{-in\theta} f^{\left(m\right)} \left(\theta\right) d\theta
\end{aligned}
\end{equation*}
where the boundary terms vanish by periodicity. So
\begin{equation*}
\begin{aligned}
\left|\hat{f}_n \right| &\leq \frac{1}{2\pi |n|^m} \int_{-\pi}^\pi \left|e^{-in\theta} f^{\left(m\right)} \left(\theta\right)\right| d\theta\\
&\leq \frac{M}{|n|^m}
\end{aligned}
\end{equation*}
\end{proof}
This is a very intuitive result: the smoother a function is, the better its Fourier series will converge.

For example, for $f\left(\theta\right) = \theta$, $\hat{f}_n \sim \frac{1}{n}$, while for $f\left(\theta\right) = |\theta|$, $\hat{f}_n \sim \frac{1}{n^2}$.
\end{lemma}

\newpage

\section{Sturm-Liouville Theory}

There could be many different sets of `basis' functions in which we could expand any given function. To decide which basis is most appropriate, we'll need to know more about the problem we're trying to solve.

\subsection{Matrices and their adjoints}

Given two vector spaces $V,W$ with $\dim\left(V\right) = n$, $\dim \left(W\right) = m$, we can consider a linear map $M: V\to W$. If we are given a basis $\left\{\mathbf{v}_1,...,\mathbf{v}_n\right\}$ of $V$ and a basis $\left\{\mathbf{w}_1,...,\mathbf{w}_m\right\}$ of $W$, then by linearity, the action of $M$ is determined by its action on the $\left\{\mathbf{v}_i\right\}$. We have
\begin{equation*}
\begin{aligned}
M \mathbf{v}_i = \sum_{j=1}^m M_{ij} \mathbf{w}_j
\end{aligned}
\end{equation*}
for some coefficients $M_{ij} \in \C$.

If $\left\{\mathbf{w}_j\right\}$ is an orthonormal basis, then
\begin{equation*}
\begin{aligned}
\left(\mathbf{w}_k,M \mathbf{v}_i\right) = \sum_{j=1}^m M_{ij} \left(\mathbf{w}_k,\mathbf{w}_j\right) = M_{ik}
\end{aligned}
\end{equation*}
where $\left(,\right)$ is an inner product on $W$.
If $m=n$, then $W \cong V$ and we can treat $M$ as a map from $V$ to itself.

In this case, we define the \emph{eigenvalues} of this map to be the roots $\left\{\lambda_i\right\}$ of the characteristic polynomial
\begin{equation*}
\begin{aligned}
|M-\lambda I| = 0
\end{aligned}
\end{equation*}

The \emph{adjoint} of a map $A:V \to V$ is a map $B:V\to V$ defined by
\begin{equation*}
\begin{aligned}
\left(B\mathbf{u},\mathbf{v}\right) = \left(\mathbf{u},A\mathbf{v}\right)
\end{aligned}
\end{equation*}
for all $\mathbf{u},\mathbf{v} \in V$.

In components, that says
\begin{equation*}
\begin{aligned}
B_{ij} = A_{ji}^*
\end{aligned}
\end{equation*}
(or $B=\left(A^T\right)^* = A^+$).

A map $M$ is \emph{self-adjoint} if and only if $\left(M \mathbf{u},\mathbf{v}\right) = \left(\mathbf{u},M\mathbf{v}\right)$ for all $\mathbf{u},\mathbf{v}\in V$.

\begin{prop}
Self-adjoint matrices have only real eigenvalues.
\begin{proof}
Suppose $M \mathbf{v}_i = \lambda_i \mathbf{v}_i$, so that $\mathbf{v}_i$ is an eigenvector of $M$ with eigenvalue $\lambda_i$. Then
\begin{equation*}
\begin{aligned}
\lambda_i\left(\mathbf{v}_i,\mathbf{v}_i\right) = \left(\mathbf{v}_i,M\mathbf{v}_i\right) = \left(M \mathbf{v}_i,\mathbf{v}_i\right) = \lambda_i^* \left(\mathbf{v}_i,\mathbf{v}_i\right)
\end{aligned}
\end{equation*}
Since $\left(\mathbf{v}_i,\mathbf{v}_i\right) \neq 0$, this gives $\lambda_i = \lambda_i^*$. So the eigenvalues of $M$ are real.
\end{proof}
\end{prop}

\begin{prop}
Eigenvectors of a self-adjoint $M$ with distinct eigenvalues are orthogonal.
\begin{proof}
\begin{equation*}
\begin{aligned}
\lambda_i\left(\mathbf{v}_j,\mathbf{v}_i\right) = \left(\mathbf{v}_j,M\mathbf{v}_i\right) = \left(M\mathbf{v}_j,\mathbf{v}_i\right) = \lambda_j \left(\mathbf{v}_j,\mathbf{v}_i\right)
\end{aligned}
\end{equation*}
using $\lambda_j \in \R$ in the last step. So
\begin{equation*}
\begin{aligned}
\left(\lambda_i-\lambda_j\right)\left(\mathbf{v}_j,\mathbf{v}_i\right) = 0
\end{aligned}
\end{equation*}
\end{proof}
\end{prop}

We can use this orthogonality to solve linear equations. For example, suppose $M\mathbf{a} = \mathbf{f}$ and we wish to find $\mathbf{a}$ ($ = M^{-1}\mathbf{f}$).\\
We let $\left\{\mathbf{v}_1,...,\mathbf{v}_n\right\}$ be an orthonormal basis of eigenvectors for $M$. Then
\begin{equation*}
\begin{aligned}
M \left(\sum_{i=1}^n a_i \mathbf{v}_i\right) &= \sum_{i=1}^n a_i M \mathbf{v}_i\\
&= \sum_{i=1}^n a_i \lambda_i \mathbf{v}_i \\
&= \sum_{i=1}^n f_i \mathbf{v}_i
\end{aligned}
\end{equation*}

Taking the inner product with $\mathbf{v}_j$,
\begin{equation*}
\begin{aligned}
&\sum_{i=1}^n a_i \lambda_i \left(\mathbf{v}_j,\mathbf{v}_i\right) = \sum_{i=1}^n f_i \left(\mathbf{v}_j,\mathbf{v}_i\right)\\
\implies & a_j\lambda_j = f_j \text{ or } a_j = \frac{f_j}{\lambda_j}
\end{aligned}
\end{equation*}

For functions, the analogue of our linear map $M$ is a linear differential operator
\begin{equation*}
\begin{aligned}
\mathcal{L} = A_p\left(x\right) \frac{d^p}{dx^p} + A_{p-1}\left(x\right) \frac{d^{p-1}}{dx^{p-1}} + ... + A_1\left(x\right) \frac{d}{dx} + A_0\left(x\right)
\end{aligned}
\end{equation*}
We call $p$ the \emph{order} of $\mathcal{L}$.
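
Before moving on, here is a minimal numerical illustration of the eigenvector-expansion solve described above (a sketch: the Hermitian matrix and right-hand side are placeholder data, and \texttt{numpy} is assumed to be available):

\begin{verbatim}
# Sketch: solve M a = f by expanding in the orthonormal eigenvector basis of
# a self-adjoint M, i.e. a_j = f_j / lambda_j in that basis.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
M = X + X.T                      # a real symmetric (self-adjoint) matrix
f = rng.normal(size=4)

lam, V = np.linalg.eigh(M)       # columns of V are orthonormal eigenvectors
f_coeffs = V.T @ f               # f_j = (v_j, f)
a = V @ (f_coeffs / lam)         # a = sum_j (f_j / lambda_j) v_j

assert np.allclose(M @ a, f)     # agrees with the direct solve (M invertible)
\end{verbatim}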
We'll typically be interested in second order differential operators:
\begin{equation*}
\begin{aligned}
\mathcal{L} = R\left(x\right)\frac{d^2}{dx^2} + P\left(x\right) \frac{d}{dx} + Q\left(x\right)
\end{aligned}
\end{equation*}
We begin by putting this operator in Sturm-Liouville form. Suppose $R\left(x\right) \neq 0$ for all $x \in \left[a,b\right]$. Then, using an integrating factor, we equivalently have
\begin{equation*}
\begin{aligned}
\frac{d^2}{dx^2} + \frac{P\left(x\right)}{R\left(x\right)}\frac{d}{dx} + \frac{Q\left(x\right)}{R\left(x\right)} = e^{-\int_0^x \frac{P}{R} dt} \frac{d}{dx}\left(e^{\int_0^x \frac{P\left(t\right)}{R\left(t\right)} dt} \frac{d}{dx}\right) + \frac{Q}{R}
\end{aligned}
\end{equation*}
So equivalently, we consider operators of the Sturm-Liouville form
\begin{equation*}
\begin{aligned}
\mathcal{L} = \frac{d}{dx}\left(p\left(x\right)\frac{d}{dx}\right) + q\left(x\right)
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
p\left(x\right) &= \exp\int_0^x \frac{P\left(t\right)}{R\left(t\right)} dt,\\
q\left(x\right) &= \frac{Q\left(x\right) p \left(x\right)}{R\left(x\right)}
\end{aligned}
\end{equation*}

SL operators are self-adjoint with respect to the inner product
\begin{equation*}
\begin{aligned}
\left(f,g\right) = \int_a^b f^* \left(x\right) g\left(x\right) dx
\end{aligned}
\end{equation*}
\emph{provided} $f,g$ obey appropriate boundary conditions:
\begin{equation*}
\begin{aligned}
\left(\mathcal{L}f,g\right) &= \int_a^b \left[\frac{d}{dx}\left(p\frac{df}{dx}\right) + qf\right]^* g\left(x\right) dx \\
&= \int_a^b \left[\frac{d}{dx} \left(p\frac{df^*}{dx}\right) g + q f^* g\right] dx\\
&= \left[p \frac{df^*}{dx} g\right]_a^b - \int_a^b \left(p \frac{df^*}{dx} \frac{dg}{dx} - qf^* g\right) dx\\
&= \left[p\left(\left.f^{*}\right.' g - f^* g'\right) \right]_a^b + \int_a^b f^* \left[\frac{d}{dx} \left(p\frac{dg}{dx}\right) + qg\right] dx \\
&= \left[p\left(\left.f^{*}\right.' g - f^* g'\right) \right]_a^b + \left(f,\mathcal{L}g\right)
\end{aligned}
\end{equation*}
The first term is the boundary term. It vanishes provided we impose conditions such as
\begin{equation*}
\begin{aligned}
b_1 f'\left(a\right) + b_2 f\left(a\right) = 0\\
c_1 f'\left(b\right) + c_2 f\left(b\right) = 0
\end{aligned}
\end{equation*}
for some $b_{1,2}, c_{1,2} \in \C$.\\
If $p\left(a\right) = p\left(b\right)$, then the boundary terms also cancel if $f\left(a\right) = f\left(b\right)$, $f'\left(a\right) = f'\left(b\right)$ and similarly for $g$.

For functions obeying these boundary conditions,
\begin{equation*}
\begin{aligned}
\left(\mathcal{L}f,g\right) = \left(f,\mathcal{L}g\right)
\end{aligned}
\end{equation*}

It follows that:\\
$\bullet$ eigenvalues of an SL operator are real;\\
$\bullet$ eigenfunctions with distinct eigenvalues are orthogonal.

It's often convenient to generalise the eigenfunctions to allow for weight functions.
A function $W:\left[a,b\right] \to \R$ is a \emph{weight function} if $W\left(x\right) \geq 0 $ for all $x \in \left[a,b\right]$, with at most finitely many zeros.

We say $f$ is an eigenfunction of $\mathcal{L}$ with weight $W$ if
\begin{equation*}
\begin{aligned}
\left(\mathcal{L}f\right)\left(x\right) = \lambda W\left(x\right) f\left(x\right)
\end{aligned}
\end{equation*}

We define the inner product of weight $W$ to be:
\begin{equation*}
\begin{aligned}
\left(f,g\right)_W &= \int_a^b f^*\left(x\right) g\left(x\right) W\left(x\right) dx\\
&= \left(f,Wg\right) = \left(Wf,g\right)
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\lambda \left(f,f\right)_W = \left(f,\mathcal{L}f\right) = \left(\mathcal{L}f,f\right) = \lambda^* \left(f,f\right)_W
\end{aligned}
\end{equation*}

Let $f_i$ be an eigenfunction of an S-L operator $\mathcal{L}$, with weight $W\left(x\right)$ and eigenvalue $\lambda_i$. Then
\begin{equation*}
\begin{aligned}
\lambda_i \left(f_i,f_i\right)_W &= \left(f_i,\mathcal{L}f_i\right)\\
&= \left(\mathcal{L}f_i,f_i\right) = \lambda_i^* \left(f_i,f_i\right)_W
\end{aligned}
\end{equation*}
since $\mathcal{L}$ is self-adjoint (given the boundary conditions). So $\lambda_i \in \R$ for a self-adjoint operator. Similarly, for two eigenfunctions $f_i$, $f_j$,
\begin{equation*}
\begin{aligned}
\lambda_i \left(f_j,f_i\right)_W &= \left(f_j,\mathcal{L}f_i\right)\\
&=\left(\mathcal{L} f_j,f_i\right) \\
&=\lambda_j \left(f_j,f_i\right)_W\\
&\implies \left(\lambda_j - \lambda_i\right) \left(f_j,f_i\right)_W = 0
\end{aligned}
\end{equation*}
So eigenfunctions with distinct eigenvalues are orthogonal with respect to $\left(,\right)_W$.

Notice that since the coefficient functions $p\left(x\right),q\left(x\right)$ in the S-L operator $\mathcal{L}$ are real, if $\mathcal{L}f = \lambda Wf$, then
\begin{equation*}
\begin{aligned}
\mathcal{L} \left(f^*\right) = \left(\mathcal{L} f\right)^* = \left(\lambda W f\right)^* = \lambda W f^*
\end{aligned}
\end{equation*}
So $f^*\left(x\right)$ is also an eigenfunction with the same eigenvalue. By taking $\text{Re} f\left(x\right)$ and $\text{Im} f\left(x\right)$, we can always choose our eigenfunctions to be real.

Given any $f\left(x\right)$ on our domain $\Omega$, we can expand it in a basis of eigenfunctions $\left\{y_i\left(x\right)\right\}$ (normalised so that $\left(y_i,y_j\right)_W = \delta_{ij}$) for some SL operator on (the interior of) $\Omega$:
\begin{equation*}
\begin{aligned}
f\left(x\right) = \sum_{i=1}^\infty \hat{f}_i y_i\left(x\right)
\end{aligned}
\end{equation*}
with
\begin{equation*}
\begin{aligned}
\hat{f}_i = \left(y_i,f\right)_W
\end{aligned}
\end{equation*}
just as for Fourier series.

\subsection{Examples of solving equations with SL operators}

\begin{eg}
Choose $\Omega = \left[-L,L\right]$, $p\left(x\right) = -1$, $q\left(x\right) = 0$.
So
\begin{equation*}
\begin{aligned}
\mathcal{L} = \frac{d}{dx} \left(p\left(x\right) \frac{d}{dx}\right) + q\left(x\right) = -\frac{d^2}{dx^2}
\end{aligned}
\end{equation*}
We'll also choose the weight function to be $W\left(x\right) = 1$, and ask that all our functions obey the periodic boundary conditions $f\left(L\right) = f\left(-L\right)$, $f'\left(L\right) = f'\left(-L\right)$.

An eigenfunction obeys
\begin{equation*}
\begin{aligned}
-\frac{d^2 f}{dx^2} = \lambda f
\end{aligned}
\end{equation*}
If $\lambda < 0$, the unique solution obeying the boundary conditions is $f=0$;\\
if $\lambda = \left(\frac{n\pi}{L}\right)^2$ for $n \in \Z$, then we have eigenfunctions $f\left(x\right) = e^{\pm in\pi x/L}$, and we've recovered Fourier series.
\end{eg}

\begin{eg}
Choose $\Omega = \left[-1,1\right]$, $q\left(x\right) = 0$, $p\left(x\right) = -\left(1-x^2\right)$, $W\left(x\right) = 1$. Then
\begin{equation*}
\begin{aligned}
\mathcal{L} f = \frac{d}{dx} \left[-\left(1-x^2\right) \frac{df}{dx}\right] = \lambda f
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\left(f,\mathcal{L} g\right) &= \left(\mathcal{L}f,g\right) + \left[p\left(x\right)\left(\left.f^*\right.' g - f^* g'\right)\right]_{-1}^1
\end{aligned}
\end{equation*}
For $p = -\left(1-x^2\right)$, we have $p\left(\pm 1\right) = 0$. So all we need to ask of $f,g$ is that they remain \emph{regular} on $\partial \Omega$.

We look for a solution of the form
\begin{equation*}
\begin{aligned}
\Theta\left(x\right) = \sum_{n=0}^\infty a_n x^n
\end{aligned}
\end{equation*}
that remains regular throughout $\Omega$. Substituting into the eigenvalue equation, we have
\begin{equation*}
\begin{aligned}
\left(1-x^2\right)\sum_{n=0}^\infty a_n n\left(n-1\right) x^{n-2} - 2\sum_{n=0}^\infty a_n nx^n + \lambda \sum_{n=0}^\infty a_n x^n = 0
\end{aligned}
\end{equation*}
Since this must hold for all $x \in \Omega$, it must hold for each power of $x$ separately. Collecting the coefficient of $x^n$, we have
\begin{equation*}
\begin{aligned}
&a_{n+2}\left(n+2\right)\left(n+1\right) -a_n n \left(n-1\right) - 2a_n n + \lambda a_n = 0\\
\implies &a_{n+2} = \frac{n\left(n+1\right) - \lambda}{\left(n+2\right)\left(n+1\right)} a_n
\end{aligned}
\end{equation*}
as a recurrence relation.

We are free to choose $a_0$ and $a_1$ independently, so we get two linearly independent solutions
\begin{equation*}
\begin{aligned}
&\Theta_0\left(x\right) = a_0\left[1- \frac{\lambda}{2}x^2 + \frac{\left(-\lambda\right)\left(6-\lambda\right)}{4!}x^4 + ...\right],\\
&\Theta_1\left(x\right) = a_1\left[x+\frac{\left(2-\lambda\right)}{3!}x^3 + \frac{\left(2-\lambda\right)\left(12-\lambda\right)}{5!} x^5 + ...\right]
\end{aligned}
\end{equation*}
Note that $\Theta_0$ is an even function while $\Theta_1$ is an odd function.

Now examine the behaviour of the $\Theta_i\left(x\right)$ near the boundary. As $n \to \infty$,
\begin{equation*}
\begin{aligned}
\frac{a_{n+2}}{a_n} \sim 1-\frac{2}{n} + \frac{4-\lambda}{n^2}
\end{aligned}
\end{equation*}
so the series always converges when $|x|<1$.

However, at $x = \pm 1$, Gauss's test tells us that the series in fact \emph{diverges} (see \href{http://mathworld.wolfram.com/GausssTest.html}{Gauss's Test}).

The only way out is to restrict the allowed values of $\lambda$.
If $\lambda = l\left(l+1\right)$ for some $l \in \N$, then the series terminates and we have a polynomial solution $P_l\left(x\right)$, known as the \emph{$l^{th}$ Legendre polynomial}. E.g.,
\begin{equation*}
\begin{aligned}
&P_0\left(x\right) = 1,\\
&P_1\left(x\right) = x,\\
&P_2\left(x\right) = \frac{1}{2}\left(3x^2-1\right),\\
&P_3\left(x\right) = \frac{1}{2}\left(5x^3 - 3x\right)
\end{aligned}
\end{equation*}

In fact, the general formula is
\begin{equation*}
\begin{aligned}
P_l\left(x\right) = \frac{1}{2^l l!} \frac{d^l}{dx^l} \left(x^2-1\right)^l
\end{aligned}
\end{equation*}
(check!) where the normalisation ensures $P_l\left(1\right) = 1$.
\begin{proof}
\begin{equation*}
\begin{aligned}
P_l\left(1\right) &= \frac{1}{2^l l!} \frac{d^l}{dx^l} \left[\left(x-1\right)^l \left(x+1\right)^l\right] |_{x=1}\\
&= \frac{1}{2^l l!} \left[l! \left(x+1\right)^l + \text{ terms involving } \left(x-1\right)\right]_{x=1}\\
&= 1
\end{aligned}
\end{equation*}
\end{proof}

From general SL theory, we know that $\left(P_l,P_m\right) = 0$ when $l\neq m$, but let's see this directly. WLOG assume $m<l$. Then
\begin{equation*}
\begin{aligned}
\left(P_m,P_l\right) &= \frac{1}{2^l l!} \int_{-1}^1 P_m\left(x\right) \frac{d^l}{dx^l} \left(x^2-1\right)^l dx\\
&= \frac{1}{2^l l!}\left[P_m\left(x\right) \frac{d^{l-1}}{dx^{l-1}} \left(x^2-1\right)^l\right]_{-1}^1 - \frac{1}{2^l l!} \int_{-1}^1 \frac{d P_m}{dx} \frac{d^{l-1}}{dx^{l-1}}\left(x^2-1\right)^l dx\\
&= -\frac{1}{2^l l!} \int_{-1}^1 \frac{d P_m}{dx} \frac{d^{l-1}}{dx^{l-1}}\left(x^2-1\right)^l dx\\
&= \frac{\left(-1\right)^l}{2^l l!} \int_{-1}^1 \frac{d^l P_m}{dx^l} \left(x^2-1\right)^l dx
\end{aligned}
\end{equation*}
where the boundary terms vanish because $\left(x^2-1\right)^l$ has zeros of order $l$ at $x=\pm 1$. But $P_m\left(x\right) = \frac{1}{2^m m!} \frac{d^m}{dx^m} \left(x^2-1\right)^m$ is a polynomial of degree $m$, so differentiating $l>m$ times gives $0$.

When $l=m$, we have
\begin{equation*}
\begin{aligned}
\int_{-1}^1 P_l\left(x\right) P_l\left(x\right) dx = \frac{2}{2l+1}
\end{aligned}
\end{equation*}

The Legendre polynomials are the basic orthogonal polynomials on $\left[-1,1\right]$. Any $l^{th}$ order polynomial has $l$ roots (generically $\in \C$), but all the roots of all the $P_l\left(x\right)$'s are real and lie in $\left(-1,1\right)$.

\begin{proof}
Suppose otherwise, that $P_l\left(x\right)$ has only $m<l$ roots in $x \in \left(-1,1\right)$. We consider the degree $m$ polynomial
\begin{equation*}
\begin{aligned}
Q\left(x\right) = \prod_{i=1}^m \left(x-x_i\right)
\end{aligned}
\end{equation*}
where the $x_i$ are the roots of $P_l\left(x\right)$ in $\left(-1,1\right)$.\\
On one hand,
\begin{equation*}
\begin{aligned}
\int_{-1}^1 Q\left(x\right) P_l\left(x\right) dx \neq 0
\end{aligned}
\end{equation*}
since $P_l\left(x\right)$ and $Q\left(x\right)$ change sign at the same places.\\
On the other hand, we can expand
\begin{equation*}
\begin{aligned}
Q\left(x\right) = \sum_{r=0}^m \hat{q}_r P_r\left(x\right)
\end{aligned}
\end{equation*}
in terms of the Legendre polynomials. So
\begin{equation*}
\begin{aligned}
\int_{-1}^1 QP_l\left(x\right) dx = \sum_{r=0}^m \hat{q}_r \int P_r\left(x\right) P_l\left(x\right) dx = 0
\end{aligned}
\end{equation*}
by orthogonality of the $P_i$'s.
Contradiction.
\end{proof}
\end{eg}

\newpage

\section{Laplace's Equation}

\subsection{Laplace's Equation and Uniqueness Theorem}

Laplace's equation in a domain $\Omega \subset \R^d$ is
\begin{equation*}
\begin{aligned}
\nabla^2 \psi = 0
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\nabla^2 = \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}
\end{aligned}
\end{equation*}

There exists a unique solution $\psi$ to Laplace's equation inside $\Omega$ that obeys
\begin{equation*}
\begin{aligned}
\psi\left(x\right) = f\left(x\right) \ \forall x \in \partial \Omega
\end{aligned}
\end{equation*}
This boundary condition is called a \emph{Dirichlet boundary condition} (see Vector Calculus in Part IA). We'll prove the uniqueness theorem again:
\begin{proof}
Suppose $\psi_1$, $\psi_2$ both solve Laplace's equation inside $\Omega$ and $\psi_1 = \psi_2$ on $\partial \Omega$.\\
Let $\delta \psi$ = $\psi_1 - \psi_2$. Then
\begin{equation*}
\begin{aligned}
0 &= \int_\Omega \delta\psi\nabla^2 \left(\delta\psi\right)\\
&= -\int_\Omega \left(\nabla\delta\psi\right)\cdot\left(\nabla\delta\psi\right) + \int_{\partial\Omega}\delta\psi \,\mathbf{n} \cdot \nabla \delta \psi
\end{aligned}
\end{equation*}
where $\mathbf{n}$ is the outward pointing unit normal vector to $\partial \Omega$. But $\delta \psi$ on the boundary is $0$. So
\begin{equation*}
\begin{aligned}
0=-\int_\Omega\left(\nabla\delta\psi\right)\cdot\left(\nabla\delta\psi\right) = -\int_\Omega ||\nabla\delta\psi||^2
\end{aligned}
\end{equation*}
So provided that $\nabla\delta \psi$ is continuous, we must have $\nabla\delta\psi = 0$ everywhere in $\Omega$. Thus $\delta\psi$ is constant throughout $\Omega$. Since $\delta \psi = 0$ on the boundary, $\delta \psi = 0$ throughout $\Omega$. Therefore $\psi_1 = \psi_2$ everywhere in $\Omega$.
\end{proof}

\subsection{Separation of Variables}
Let $\Omega$ be the infinite cuboid
\begin{equation*}
\begin{aligned}
\Omega = \left\{\left(x,y,z\right) \in \R^3 \,|\, 0\leq x\leq a,\ 0\leq y\leq b,\ 0\leq z < \infty\right\}
\end{aligned}
\end{equation*}
Suppose we want to solve
\begin{equation*}
\begin{aligned}
\nabla^2 \psi=0
\end{aligned}
\end{equation*}
inside $\Omega$, subject to the conditions
\begin{equation*}
\begin{aligned}
&\psi\left(0,y,z\right)=\psi\left(a,y,z\right)=0\\
&\psi\left(x,0,z\right)=\psi\left(x,b,z\right)=0\\
&\psi\left(x,y,0\right) = f\left(x,y\right)\\
&\lim_{z\to\infty} \psi = 0
\end{aligned}
\end{equation*}

To get started, we look for a special solution to Laplace's equation of the form
\begin{equation*}
\begin{aligned}
\psi\left(x,y,z\right) = X\left(x\right)Y\left(y\right)Z\left(z\right)
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
0 &= \nabla^2 \psi\\
&= Y\left(y\right)Z\left(z\right)X''\left(x\right) + X\left(x\right)Z\left(z\right)Y''\left(y\right)+X\left(x\right)Y\left(y\right)Z''\left(z\right)
\end{aligned}
\end{equation*}

If $\psi \not\equiv 0$, we can equivalently write
\begin{equation*}
\begin{aligned}
0 &= \frac{1}{\psi} \nabla^2 \psi\\
&= \frac{X''}{X} + \frac{Y''}{Y} + \frac{Z''}{Z}
\end{aligned}
\end{equation*}

Each term on the RHS depends on a \emph{different} variable.
Since we want a solution for all $\left(x,y,z\right) \in \Omega$, each term must be \emph{constant}. That is,
\begin{equation*}
\begin{aligned}
\frac{X''}{X} = -\lambda,\quad \frac{Y''}{Y} = -\mu,\quad \frac{Z''}{Z} = \lambda+\mu
\end{aligned}
\end{equation*}
for some $\lambda,\mu$. That implies
\begin{equation*}
\begin{aligned}
&X\left(x\right) = A\sin\left(\sqrt{\lambda}x\right) + B\cos\left(\sqrt{\lambda}x\right)\\
&Y\left(y\right) = C\sin\left(\sqrt{\mu}y\right) + D\cos\left(\sqrt{\mu}y\right)\\
&Z\left(z\right) = E\exp\left(\sqrt{\lambda+\mu}z\right) + F\exp\left(-\sqrt{\lambda+\mu}z\right)
\end{aligned}
\end{equation*}
We now impose the homogeneous boundary conditions. We find $B=0$, $D=0$, $\lambda = \left(\frac{n\pi}{a}\right)^2$ for $n=1,2,...$, $\mu = \left(\frac{m\pi}{b}\right)^2$ for $m=1,2,...$, and $E=0$.

So
\begin{equation*}
\begin{aligned}
\psi\left(x,y,z\right) = A_{n,m} \sin\left(\frac{n\pi x}{a}\right)\sin \left(\frac{m\pi y}{b}\right) \exp\left(-\sqrt{\frac{n^2 \pi^2}{a^2} + \frac{m^2\pi^2}{b^2}} z\right)
\end{aligned}
\end{equation*}
solves $\nabla^2 \psi=0$ and all the homogeneous boundary conditions for any choice of $n,m \in\left\{1,2,...\right\}$.

Thus the linear combination
\begin{equation*}
\begin{aligned}
\psi\left(x,y,z\right) = \sum_{n,m=1}^\infty A_{nm} \sin\left(\frac{n\pi x}{a}\right) \sin\left(\frac{m\pi y}{b}\right) \exp\left(-\sqrt{\frac{n^2 \pi^2}{a^2} + \frac{m^2\pi^2}{b^2}} z\right)
\end{aligned}
\end{equation*}
does too.

To fix the coefficients $A_{nm}$, we must use the \emph{in}homogeneous boundary condition $\psi\left(x,y,0\right) = f\left(x,y\right)$. We expand $f\left(x,y\right)$ as a double Fourier series
\begin{equation*}
\begin{aligned}
f\left(x,y\right) = \sum_{n,m} \hat{f}_{nm} \sin\left(\frac{n\pi x}{a}\right) \sin\left(\frac{m\pi y}{b}\right)
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
\hat{f}_{nm} = \frac{4}{ab} \int_0^a dx \int_0^b dy \left[\sin\left(\frac{n\pi x}{a}\right)\sin\left(\frac{m\pi y}{b}\right) f\left(x,y\right)\right]
\end{aligned}
\end{equation*}
and choose $A_{nm} = \hat{f}_{nm}$.

\begin{eg}
Suppose $f\left(x,y\right) = 1$.
\\begin{eg}\nSuppose $f\\left(x,y\\right) = 1$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\hat{f}_{nm} &= \\frac{4}{ab} \\int_0^a \\int_0^b \\sin\\left(\\frac{n\\pi x}{a}\\right) \\sin\\left(\\frac{m\\pi y}{b}\\right) dxdy\\\\\n&= \\left\\{\n\\begin{array}{ll}\n\\frac{16}{nm\\pi^2} & n,m \\text{ odd}\\\\\n0 & \\text{ else}\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(x,y,z\\right) = \\frac{16}{\\pi^2}\\sum_{k,l=1}^\\infty \\frac{\\sin\\frac{\\left(2k-1\\right)\\pi x}{a} \\sin \\frac{\\left(2l-1\\right) \\pi y}{b}}{\\left(2k-1\\right)\\left(2l-1\\right)} \\exp \\left(-S_{2k-1,2l-1} z\\right)\n\\end{aligned}\n\\end{equation*}\nwhere $S_{nm} = \\sqrt{\\frac{n^2\\pi^2}{a^2} + \\frac{m^2\\pi^2}{b^2}}$.\n\\end{eg}\n\n\\subsection{Laplace's equation in spherical polar coordinates}\nLet $$\\Omega = \\left\\{\\left(r,\\theta,\\phi\\right) \\in \\R^3 \\mid r \\leq a\\right\\}$$ and suppose $\\psi:\\R^3 \\to \\R$ solves $\\nabla^2 \\psi = 0$ inside $\\Omega$, with $\\psi\\left(r=a,\\theta,\\phi\\right) = f\\left(\\theta,\\phi\\right)$.\n\nIn spherical polar coordinates, we have\n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2 \\psi = \\frac{1}{r^2} \\frac{\\partial}{\\partial r} \\left(r^2 \\frac{\\partial \\psi}{\\partial r}\\right) + \\frac{1}{r^2 \\sin \\theta} \\frac{\\partial}{\\partial \\theta}\\left(\\sin\\theta\\frac{\\partial \\psi}{\\partial\\theta}\\right) + \\frac{1}{r^2} \\frac{1}{\\sin^2 \\theta} \\frac{\\partial^2 \\psi}{\\partial \\phi^2}\n\\end{aligned}\n\\end{equation*}\nFor simplicity, we'll just consider the case $\\psi\\left(r,\\theta,\\phi\\right) = \\psi\\left(r,\\theta\\right)$, $f=f\\left(\\theta\\right)$.\n\nWe seek a solution to $\\nabla^2 \\psi = 0$ of the form $\\psi\\left(r,\\theta\\right) = R\\left(r\\right)\\Theta\\left(\\theta\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\n&\\frac{\\Theta}{r^2}\\frac{d}{dr}\\left(r^2 \\frac{dR}{dr}\\right) + \\frac{R}{r^2 \\sin\\theta}\\frac{d}{d\\theta}\\left(\\sin\\theta \\frac{d\\Theta}{d\\theta} \\right) = 0\\\\\n&\\implies \\frac{1}{R}\\frac{d}{dr}\\left(r^2 \\frac{dR}{dr}\\right) = -\\frac{1}{\\Theta\\sin\\theta}\\frac{d}{d\\theta} \\left(\\sin\\theta\\frac{d\\Theta}{d\\theta}\\right) = \\lambda\n\\end{aligned}\n\\end{equation*}\nIn the second equation, the LHS depends only on $r$ while the RHS depends only on $\\theta$, so both must equal a constant $\\lambda$.\n\nThe angular equation\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{\\sin\\theta} \\frac{d}{d\\theta} \\left(\\sin\\theta\\frac{d\\Theta}{d\\theta}\\right) = -\\lambda\\Theta\n\\end{aligned}\n\\end{equation*}\nwe've met before: let $x = \\cos\\theta$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial}{\\partial \\theta} = \\frac{\\partial x}{\\partial \\theta} \\frac{\\partial}{\\partial x} = -\\sin\\theta\\frac{\\partial}{\\partial x}\n\\end{aligned}\n\\end{equation*}\nSo the angular equation becomes\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dx}\\left(\\left(1-x^2\\right)\\frac{d\\Theta}{dx}\\right) = -\\lambda\\Theta\n\\end{aligned}\n\\end{equation*}\nThis is exactly Legendre's equation, so its regular solutions are $P_l\\left(\\cos\\theta\\right)$ where $\\lambda = l\\left(l+1\\right)$, $l=0,1,...$ (see Section 4.1). 
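\nNumerically, the orthogonality relation $\\int_{-1}^{1} P_l\\left(x\\right)P_m\\left(x\\right)dx = \\frac{2}{2l+1}\\delta_{lm}$ (equivalently $\\int_0^\\pi P_l P_m \\sin\\theta \\, d\\theta$ after the substitution $x = \\cos\\theta$), which is what fixes the coefficients $\\hat{f}_l$ below, can be checked with a short Python sketch (assuming SciPy is available):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import eval_legendre  # eval_legendre(l, x) is P_l(x)\n\nfor l in range(4):\n    for m in range(4):\n        # x = cos(theta) substitution turns the sin(theta) weight into dx\n        val, _ = quad(lambda x: eval_legendre(l, x) * eval_legendre(m, x),\n                      -1, 1)\n        expected = 2.0 / (2 * l + 1) if l == m else 0.0\n        assert abs(val - expected) < 1e-10\nprint("orthogonality verified")\n\\end{verbatim}\n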
With these values of the eigenvalue $\\lambda$, the radial equation becomes\n\\begin{equation*}\n\\begin{aligned}\n\\left(r^2 R'\\right)' = l\\left(l+1\\right)R\n\\end{aligned}\n\\end{equation*}\nWe try a solution of the form $R=r^\\alpha$:\n\\begin{equation*}\n\\begin{aligned}\n\\alpha\\left(\\alpha+1\\right)r^\\alpha = l\\left(l+1\\right) r^\\alpha\\\\\n\\implies \\alpha = l \\ \\text{or} \\ -\\left(l+1\\right)\n\\end{aligned}\n\\end{equation*}\nSuperposing these separable solutions $R\\left(r\\right) \\Theta\\left(\\theta\\right)$, we obtain\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,\\theta\\right) = \\sum_{l=0}^\\infty \\left(a_l r^l + \\frac{b_l}{r^{l+1}}\\right) P_l \\left(\\cos\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nWe want a solution that remains regular at $r=0$. So $b_l=0$ for all $l$. To fix the $a_l$, use the inhomogeneous boundary condition\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(a,\\theta\\right) = f\\left(\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nExpanding\n\\begin{equation*}\n\\begin{aligned}\nf\\left(\\theta\\right) = \\sum_{l=0}^\\infty \\hat{f}_l P_l\\left(\\cos\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\hat{f}_l  = \\frac{2l+1}{2}\\int_0^\\pi f\\left(\\theta\\right) P_l\\left(\\cos\\theta\\right) \\sin\\theta d\\theta\n\\end{aligned}\n\\end{equation*}\nour final answer obeying all the boundary conditions is\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,\\theta\\right) = \\sum_l \\hat{f}_l \\left(\\frac{r}{a}\\right)^l P_l\\left(\\cos\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nThis is the \\emph{unique} solution, by the uniqueness theorem.\n\n\\subsection{Multipole expansions}\nLet $\\mathbf{k}$ be a unit vector. Then $\\frac{1}{|\\mathbf{r}-\\mathbf{k}|}$ solves\n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2 \\frac{1}{|\\mathbf{r}-\\mathbf{k}|} = 0\n\\end{aligned}\n\\end{equation*}\nfor all $\\mathbf{r} \\neq \\mathbf{k}$ in $\\R^3$, and is regular at $\\mathbf{r}=0$. So\n\\begin{equation}\\label{1}\n\\begin{aligned}\n\\frac{1}{|\\mathbf{r}-\\mathbf{k}|} = \\sum_{l=0}^\\infty A_l r^l P_l\\left(\\cos\\theta\\right)\n\\end{aligned}\n\\end{equation}\nSuppose $\\mathbf{r}$ points in the $\\mathbf{k}$-direction. Then\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{|\\mathbf{r}-\\mathbf{k}|} = \\frac{1}{\\sqrt{1+r^2-2r}} = \\frac{1}{1-r} = \\sum_{l=0}^\\infty r^l\n\\end{aligned}\n\\end{equation*}\n(Taylor expansion, converges for $|r|<1$).\n\nSince $P_l\\left(\\cos 0\\right) = P_l\\left(1\\right) = 1$, the multipole expansion \\eqref{1} becomes\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{|\\mathbf{r}-\\mathbf{k}|} = \\sum_{l=0}^\\infty A_l r^l\n\\end{aligned}\n\\end{equation*}\nSo $A_l=1$.\n\nRescaling $\\mathbf{k} \\to \\mathbf{r}'$, for $r < r'$ we obtain\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{|\\mathbf{r}-\\mathbf{r}'|} &= \\frac{1}{r'} \\sum_{l=0}^\\infty \\left(\\frac{r}{r'}\\right)^l P_l\\left(\\hat{\\mathbf{r}} \\cdot \\hat{\\mathbf{r}}'\\right)\\\\\n&= \\frac{1}{r'} + \\frac{r}{r'^2} \\hat{\\mathbf{r}}\\cdot\\hat{\\mathbf{r}}' + ...\\\\\n&= \\frac{1}{r'} + \\frac{\\mathbf{r}\\cdot\\mathbf{r}'}{\\left(r'\\right)^3} + ...\n\\end{aligned}\n\\end{equation*}\nThe first term is the potential experienced at $\\mathbf{r}'$ from a (unit) charge at the origin. It's called the monopole term. The second term, the dipole term, is the potential at $\\mathbf{r}'$ due to two unit charges of opposite sign placed at $\\pm \\mathbf{r}$.\n
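\nA one-line numerical check of the generating-function identity $\\frac{1}{|\\mathbf{r}-\\mathbf{k}|} = \\sum_l r^l P_l\\left(\\cos\\theta\\right)$ (a Python sketch assuming SciPy; the values of $r$ and $\\theta$ are arbitrary, subject to $r<1$):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import eval_legendre\n\nr, theta = 0.4, 0.9  # |r| < 1 so the series converges\n# |r - k|^2 = 1 + r^2 - 2 r cos(theta) for a unit vector k\nlhs = 1.0 / np.sqrt(1 + r**2 - 2 * r * np.cos(theta))\nrhs = sum(r**l * eval_legendre(l, np.cos(theta)) for l in range(50))\nprint(lhs, rhs)  # agree to many digits\n\\end{verbatim}\n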
\\subsection{Laplace's equation in cylindrical coordinates}\nLet $\\Omega = \\left\\{\\left(r,\\theta,z\\right) \\in \\R^3 \\mid r\\leq a, z\\geq 0\\right\\}$, and suppose $\\psi:\\R^3 \\to \\R$ obeys $\\nabla^2 \\psi = 0$ inside $\\Omega$.\n\n\\begin{tikzpicture}\n\\draw (0,0) ellipse (0.4 and 0.7);\n\\draw (0,-0.7) -- (3,-0.7);\n\\draw (0,0.7) -- (3,0.7);\n\\end{tikzpicture}\n\nIn cylindrical coordinates, \n\\begin{equation*}\n\\begin{aligned}\n\\nabla^2 \\psi = \\frac{1}{r} \\frac{\\partial}{\\partial r}\\left(r \\frac{\\partial \\psi}{\\partial r}\\right) + \\frac{1}{r^2} \\frac{\\partial^2 \\psi}{\\partial \\theta^2} + \\frac{\\partial^2 \\psi}{\\partial z^2} = 0\n\\end{aligned}\n\\end{equation*}\nso once again, look for a solution of the form\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,\\theta,z\\right) = R\\left(r\\right)\\Theta\\left(\\theta\\right) Z\\left(z\\right)\\\\\n\\implies \\left(\\frac{R''}{R} + \\frac{1}{r} \\frac{R'}{R}\\right) + \\frac{1}{r^2} \\frac{\\Theta''}{\\Theta} + \\frac{Z''}{Z} = 0\n\\end{aligned}\n\\end{equation*}\nThis implies\n\\begin{equation*}\n\\begin{aligned}\n\\frac{Z''}{Z} = \\mu, \\quad \\frac{\\Theta''}{\\Theta} = -\\lambda, \\quad \\frac{R''}{R} + \\frac{1}{r}\\frac{R'}{R} + \\left(\\mu-\\frac{\\lambda}{r^2}\\right) = 0\n\\end{aligned}\n\\end{equation*}\nFor our solution to be periodic in $\\theta$, we must have $\\lambda = n^2$ for $n \\in \\N$, which implies\n\\begin{equation*}\n\\begin{aligned}\n\\Theta\\left(\\theta\\right) = a_n \\sin\\left(n\\theta\\right) + b_n \\cos\\left(n\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nIf we want our solution to decay as $z \\to \\infty$, we need to choose $\\mu >0$ and pick the solution $Z\\left(z\\right) = \\exp\\left(-\\sqrt{\\mu}z\\right)$.\n\nThe radial equation is \\emph{Bessel's equation}. To put it in SL form, multiply through by $r$ to find\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dr} \\left(r\\frac{dR}{dr}\\right) - \\frac{n^2}{r} R = -\\mu r R\n\\end{aligned}\n\\end{equation*}\nWe can rescale by introducing $x = \\sqrt{\\mu} r$ to obtain\n\\begin{equation*}\n\\begin{aligned}\nx^2 \\frac{d^2R}{dx^2} + x\\frac{dR}{dx} + \\left(x^2 - n^2\\right) R = 0\n\\end{aligned}\n\\end{equation*}\nwhich is independent of the eigenvalue $\\mu$. This second order ODE has 2 independent solutions for each $n=0,1,2,...$, written $J_n\\left(x\\right)$ and $Y_n\\left(x\\right)$ and called \\emph{Bessel functions of the first ($J_n$) and second ($Y_n$) kind}.\n\nThe first kind:\n\n\\begin{tikzpicture}\n\\draw (-.5,0) -- (5,0);\n\\draw (0,-.5) -- (0,1.5);\n\\draw (0,1) parabola[bend at end] (3,-0.5);\n\\draw (3,-0.5) parabola[bend at start] (4,0.3);\n\\end{tikzpicture}\n\nThe second kind:\n\n\\begin{tikzpicture}\n\\draw (-.5,0) -- (2.5,0);\n\\draw (0,-4) -- (0,1);\n\\draw (0.1,-4) parabola[bend at end] (1,0.5);\n\\draw (1,0.5) parabola[bend at start] (1.5,-0.3);\n\\end{tikzpicture}\n\nAll $J_n$ are regular at the origin ($J_0\\left(0\\right) = 1$, $J_n\\left(0\\right) = 0$ for other $n$), while all $Y_n$ are singular at $x=0$.\n
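\nThe positive roots of $J_n$, which will index our eigenvalues below, are tabulated in SciPy; a quick Python sketch:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jn_zeros, jv\n\nfor n in range(3):\n    roots = jn_zeros(n, 4)       # first 4 positive roots of J_n\n    # J_n vanishes at each root (up to rounding)\n    print(n, roots, jv(n, roots))\n\\end{verbatim}\n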
We want $\\nabla^2 \\psi = 0$ inside $\\Omega$, decaying to $0$ as $z \\to +\\infty$, regular at $r=0$ and vanishing at $r=a$. Our product ansatz gives\n\\begin{equation*}\n\\begin{aligned}\n\\psi_{n\\mu}\\left(r,\\theta,z\\right) = \\left(a_n\\sin n\\theta+b_n \\cos n\\theta\\right) e^{-\\sqrt{\\mu}z} \\left(J_n\\left(\\sqrt{\\mu}r\\right) + c_n Y_n \\left(\\sqrt{\\mu}r\\right)\\right)\n\\end{aligned}\n\\end{equation*}\nRegularity at $r=0$ $\\implies$ $c_n=0$;\\\\\n$\\psi|_{r=a}=0$ $\\implies$ $\\sqrt{\\mu} a = k_{in}$, where $k_{in}$ is the $i^{th}$ positive root of $J_n\\left(x\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,\\theta,z\\right) = \\sum_{n=0}^\\infty \\sum_{i=1}^\\infty \\left(a_{ni}\\sin\\left(n\\theta\\right) + b_{ni}\\cos\\left(n\\theta\\right)\\right) e^{-\\frac{k_{in}z}{a}} J_n\\left(k_{in}\\frac{r}{a}\\right)\n\\end{aligned}\n\\end{equation*}\nis our general solution.\n\nIf we also impose the inhomogeneous boundary condition\n\\begin{equation*}\n\\begin{aligned}\n\\psi|_{z=0} = f\\left(r,\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nwe can fix the constants $a_{ni}$, $b_{ni}$. The Bessel functions obey the orthogonality condition\n\\begin{equation*}\n\\begin{aligned}\n\\int_0^a J_n\\left(k_{in}\\frac{r}{a}\\right)J_n\\left(k_{jn}\\frac{r}{a}\\right) r dr = \\delta_{ij}\\frac{a^2}{2}\\left[J'_n\\left(k_{in}\\right)\\right]^2\n\\end{aligned}\n\\end{equation*}\n\nAt $z=0$, our solution is \n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,\\theta,0\\right) = \\sum_{n,i}\\left(a_{ni} \\sin n\\theta + b_{ni} \\cos n\\theta\\right) J_n \\left(k_{in} \\frac{r}{a}\\right) = f\\left(r,\\theta\\right)\n\\end{aligned}\n\\end{equation*}\nMultiplying by $\\frac{1}{\\pi}\\cos\\left(m\\theta\\right)$ and integrating over $\\theta$ gives\n\\begin{equation*}\n\\begin{aligned}\n\\underbrace{\\frac{1}{\\pi}\\int_{-\\pi}^\\pi \\cos  \\left(m\\theta\\right) f\\left(r,\\theta\\right) d\\theta}_{\\hat{f}_m\\left(r\\right)} = \\sum_{n,i} \\underbrace{\\frac{b_{ni}}{\\pi} \\int_{-\\pi}^\\pi \\cos \\left(m\\theta\\right) \\cos\\left(n\\theta\\right) d\\theta}_{b_{ni}\\delta_{nm}} J_n \\left(k_{in} \\frac{r}{a}\\right)\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n&\\hat{f}_m\\left(r\\right) = \\sum_i b_{mi}J_m \\left(\\frac{k_{im} r}{a}\\right)\\\\\n\\implies &\\int_0^a J_m\\left(k_{jm} \\frac{r}{a}\\right) \\hat{f}_m\\left(r\\right) rdr = \\frac{a^2}{2} \\sum_i b_{mi}\\delta_{ij} \\left[J'_m \\left(k_{jm} \\right)\\right]^2 = \\frac{a^2}{2}b_{mj} \\left[J'_m\\left(k_{jm}\\right)\\right]^2\\\\\n\\implies & b_{mj} = \\frac{2}{a^2 \\left[J'_m \\left(k_{jm}\\right)\\right]^2} \\int_0^a J_m\\left(k_{jm}\\frac{r}{a}\\right) \\hat{f}_m \\left(r\\right) r dr.\n\\end{aligned}\n\\end{equation*}\n
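\nAs an illustration, the last formula can be evaluated by quadrature; a Python sketch (assuming SciPy; the radial profile $\\hat{f}_m\\left(r\\right)$ is made up for the example):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import jv, jvp, jn_zeros\n\na, m = 1.0, 0\nf_hat = lambda r: 1.0 - (r / a)**2        # illustrative radial profile\nroots = jn_zeros(m, 5)                    # k_{1m}, ..., k_{5m}\n\nb = []\nfor k in roots:\n    # b_mj = 2 / (a^2 J'_m(k)^2) * int_0^a J_m(k r/a) f_hat(r) r dr\n    num, _ = quad(lambda r: jv(m, k * r / a) * f_hat(r) * r, 0, a)\n    b.append(2.0 * num / (a**2 * jvp(m, k)**2))\nprint(b)\n\\end{verbatim}\n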
\\newpage\n\n\\section{The Heat Equation}\n\n\\subsection{Heat equation}\n\nLet $\\Omega \\subseteq \\R^n$ be a domain in $\\R^n$, thought of as ``space''. The heat equation is\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial \\psi}{\\partial t} = \\kappa \\nabla^2 \\psi\n\\end{aligned}\n\\end{equation*}\nfor a function $\\psi: \\Omega \\times \\left[0,\\infty\\right) \\to \\R$ and a positive constant $\\kappa$, called the \\emph{diffusion constant}.\n\nThe two most important properties of evolution under heat flow are:\\\\\n$\\bullet$ Total heat $\\int_\\Omega \\psi dV$ is \\emph{conserved};\n\n\\begin{proof}\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dt}\\int_\\Omega \\psi dV = \\int_\\Omega \\frac{\\partial\\psi}{\\partial t} dV = \\kappa \\int_\\Omega \\left(\\nabla^2\\psi\\right) dV = \\kappa \\int_{\\partial\\Omega} \\mathbf{n} \\cdot \\left(\\nabla \\psi\\right) dS\n\\end{aligned}\n\\end{equation*}\nSo provided $\\mathbf{n} \\cdot \\nabla \\psi = 0$ on $\\partial\\Omega$, which says heat doesn't flow out of $\\Omega$, $\\frac{d}{dt}\\int_\\Omega \\psi dV = 0$.\n\\end{proof}\n \n$\\bullet$ Heat flow is a strongly \\emph{smoothing} operation.\\\\\nSuppose $\\psi$ is an eigenfunction of the Laplacian on $\\Omega$ with eigenvalue $\\lambda$ (i.e. $\\nabla^2 \\psi = -\\lambda \\psi$), and $\\psi = 0$ or $\\mathbf{n}\\cdot \\nabla\\psi = 0$ on $\\partial \\Omega$. These eigenvalues are necessarily non-negative.\n\n\\begin{proof}\n\\begin{equation*}\n\\begin{aligned}\n&-\\lambda \\int_\\Omega \\psi^* \\psi dV = \\int_\\Omega \\psi^* \\nabla^2 \\psi dV = -\\int_\\Omega \\nabla\\psi^* \\cdot \\nabla\\psi dV = - ||\\nabla\\psi||^2 \\leq 0\\\\\n\\implies & \\lambda = \\frac{\\int_\\Omega \\nabla \\psi^* \\cdot \\nabla \\psi dV}{\\int_\\Omega \\left|\\psi\\right|^2 dV} \\geq 0\n\\end{aligned}\n\\end{equation*}\nwhere we've used the boundary condition above.\\\\\nFormally, the solution to the heat equation can be written as\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(\\mathbf{x},t\\right) = \\left(e^{t\\nabla^2}\\right)\\psi\\left(\\mathbf{x},0\\right)\n\\end{aligned}\n\\end{equation*}\n(setting $\\kappa = 1$). Here we think of $e^{t\\nabla^2}$ as the Taylor expansion:\n\\begin{equation*}\n\\begin{aligned}\ne^{t\\nabla^2} = 1+t\\nabla^2 + \\frac{t^2}{2} \\nabla^2\\nabla^2+...\n\\end{aligned}\n\\end{equation*}\nExpanding our initial data $\\psi\\left(\\mathbf{x},0\\right)$ in terms of a complete set $\\left\\{\\psi_I\\left(\\mathbf{x}\\right)\\right\\}$ of eigenfunctions of $\\nabla^2$ as \n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(\\mathbf{x},0\\right) = \\sum_I c_I \\psi_I \\left(\\mathbf{x}\\right)\n\\end{aligned}\n\\end{equation*}\nwe get\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(\\mathbf{x},t\\right) = e^{t\\nabla^2} \\left(\\sum_I c_I\\psi_I\\left(\\mathbf{x}\\right)\\right) = \\sum_I c_I \\left(e^{t\\nabla^2} \\psi_I\\right) = \\sum_I c_I e^{-\\lambda_I t} \\psi_I\\left(\\mathbf{x}\\right)\n\\end{aligned}\n\\end{equation*}\nWe've seen (for Fourier series -- true for any expansion in eigenfunctions) that the smoothness of the original function is reflected in the decay of the coefficients in the expansion as $|I| \\to \\infty$. Heat flow exponentially suppresses the eigenfunctions with large $\\lambda_I$.\n\\end{proof}\n
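\nA tiny numerical illustration of this smoothing (a Python sketch using a 1-d Fourier sine basis on $\\left[0,\\pi\\right]$ with Dirichlet conditions, where $\\lambda_n = n^2$; NumPy only, and the initial coefficients $c_n = 1/n$ are chosen to mimic rough data):\n\\begin{verbatim}\nimport numpy as np\n\nN = 200\nx = np.linspace(0, np.pi, 1000)\nn = np.arange(1, N + 1)\nc0 = 1.0 / n                       # slowly decaying coefficients\n\ndef psi(t):\n    # each mode sin(nx) decays like exp(-n^2 t)\n    return (c0 * np.exp(-n**2 * t)) @ np.sin(np.outer(n, x))\n\n# high modes are rapidly suppressed, so the profile flattens out\nprint(np.max(np.abs(psi(0.0))), np.max(np.abs(psi(0.1))))\n\\end{verbatim}\n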
The heat equation also has some symmetries:\\\\\nIf $\\phi\\left(x,t\\right)$ obeys $$\\frac{\\partial \\phi}{\\partial t} = \\kappa\\nabla^2 \\phi$$ then so too does:\\\\\n$\\bullet$ $\\xi\\left(x,t\\right) = \\phi\\left(x-x_0,t-t_0\\right)$ -- translation in space/time;\\\\\n$\\bullet$ $\\psi\\left(x,t\\right) = \\lambda^p \\phi\\left(\\sqrt{\\lambda}x,\\lambda t\\right)$ -- rescaling by $\\lambda \\in \\R^+$.\n\nIn particular, we can look for a \\emph{similarity solution}, which is one that is invariant under this scaling (up to an overall factor), i.e.\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = \\lambda^p \\phi\\left(\\sqrt{\\lambda}x,\\lambda t\\right)\n\\end{aligned}\n\\end{equation*}\n\nIf this is so, then since the LHS is independent of $\\lambda$, we can set $\\lambda$ to be anything convenient, e.g. $\\lambda = \\frac{1}{t}$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = t^{-p} \\phi\\left(\\frac{x}{\\sqrt{t}},1\\right) = t^{-p} u\\left(\\eta\\right)\n\\end{aligned}\n\\end{equation*}\nwhere $\\eta = \\frac{x}{\\sqrt{t}}$ is the \\emph{similarity variable}. Then, using $\\frac{\\partial \\eta}{\\partial t} = -\\frac{\\eta}{2t}$,\n\n\\begin{equation*}\n\\begin{aligned}\n&\\partial_t \\phi = -pt^{-p-1} u + t^{-p} \\frac{\\partial \\eta}{\\partial t} u' = t^{-p-1} \\left(-pu-\\frac{\\eta u'}{2}\\right)\\\\\n& \\kappa \\partial_x^2 \\phi = \\kappa t^{-p} \\frac{\\partial}{\\partial x}\\left(\\frac{\\partial \\eta}{\\partial x} u'\\right) = \\kappa t^{-p-1} u''\n\\end{aligned}\n\\end{equation*}\n\nso the heat equation becomes an ODE for $u\\left(\\eta\\right)$:\n\\begin{equation*}\n\\begin{aligned}\n0=t^{-p-1} \\left(-pu-\\frac{\\eta u'}{2} - \\kappa u''\\right)\n\\end{aligned}\n\\end{equation*}\n\nFor example, choosing $p = \\frac{1}{2}$, we get\n\\begin{equation*}\n\\begin{aligned}\n0 = \\kappa u'' + \\frac{1}{2}\\left(\\eta u' + u\\right) = \\left(\\kappa u' + \\frac{\\eta u}{2}\\right)'\n\\end{aligned}\n\\end{equation*}\n\nIf we impose the condition\n\\begin{equation*}\n\\begin{aligned}\nu'\\left(0\\right) = 0\n\\end{aligned}\n\\end{equation*}\nthen, integrating once, $\\kappa u' + \\frac{\\eta u}{2} = 0$, i.e.\n\\begin{equation*}\n\\begin{aligned}\n\\frac{u'}{u} = -\\frac{\\eta}{2\\kappa}\n\\end{aligned}\n\\end{equation*}\nand so\n\\begin{equation*}\n\\begin{aligned}\nu\\left(\\eta\\right) = A e^{-\\eta^2/4\\kappa}\n\\end{aligned}\n\\end{equation*}\nTherefore\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = \\frac{1}{\\sqrt{4\\pi \\kappa t}} e^{-\\frac{x^2}{4\\kappa t}}\n\\end{aligned}\n\\end{equation*}\nwhere we fixed $A$ by $\\int_\\R \\phi\\left(x,t\\right) dx = 1$.\n\nThis is a Gaussian with variance proportional to $t$, and is the \\emph{fundamental solution} to the heat equation.\n\nFor heat flow on $\\R^n \\times \\left[0,\\infty\\right)$, we'd instead get the fundamental solution\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(\\mathbf{x},t\\right) = \\frac{1}{\\left(4\\pi \\kappa t\\right)^{n/2}} e^{-\\frac{|\\mathbf{x}|^2}{4\\kappa t}}\n\\end{aligned}\n\\end{equation*}\n\n\\begin{tikzpicture}\n\\draw (-3,0) -- (3,0);\n\\draw (0,-0.5) -- (0,3);\n\\draw[red] (-3,0.1) .. controls (-0.8,0.3) and (-0.3,1.5) .. (0,1.5);\n\\draw[red] (3,0.1) .. controls (0.8,0.3) and (0.3,1.5) .. (0,1.5);\n\\draw[green] (-3,0.025) .. controls (-0.8,0.075) and (-0.3,0.375) .. (0,0.375);\n\\draw[green] (3,0.025) .. controls (0.8,0.075) and (0.3,0.375) .. (0,0.375);\n\\end{tikzpicture}\n
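\nWe can sanity-check the fundamental solution against the heat equation by finite differences (a Python sketch with $\\kappa = 1$ and arbitrary sample point; NumPy only):\n\\begin{verbatim}\nimport numpy as np\n\nkappa = 1.0\nphi = lambda x, t: (np.exp(-x**2 / (4 * kappa * t))\n                    / np.sqrt(4 * np.pi * kappa * t))\n\nx, t, h = 0.7, 0.5, 1e-4\n# central differences for d(phi)/dt and d^2(phi)/dx^2\ndphi_dt = (phi(x, t + h) - phi(x, t - h)) / (2 * h)\nd2phi_dx2 = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / h**2\nprint(dphi_dt - kappa * d2phi_dx2)   # small (~1e-7): residual of the PDE\n\\end{verbatim}\n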
The fact that heat \\emph{always} flows from hot to cold is apparently in conflict with the time reversibility of Newton's laws.\n\nEinstein realised that this could be explained statistically: suppose a particle is being jostled at random s.t. the probability it moves through distance $y$ in time $\\Delta t$ is $p\\left(y\\right)$. Assume:\\\\\n$\\bullet$ $p\\left(y\\right)$ is independent of $t$;\\\\\n$\\bullet$ $p\\left(y\\right)$ is strongly peaked near $0$;\\\\\n$\\bullet$ $p\\left(-y\\right) = p\\left(y\\right)$ -- no preferred direction.\\\\\nLet $P\\left(x,t\\right)$ be the probability our particle is found at $x$ at time $t$. Then\n\\begin{equation*}\n\\begin{aligned}\nP\\left(x,t + \\Delta t\\right) &= \\int_{-\\infty}^\\infty p\\left(y\\right) P\\left(x-y,t\\right) dy\\\\\n&\\approx \\int_{-\\infty}^\\infty \\sum_{n=0}^\\infty \\frac{p\\left(y\\right)}{n!} \\frac{\\partial^n P}{\\partial x^n} \\left(x,t\\right)\\left(-y\\right)^n dy\\\\\n&= \\sum_{n=0}^\\infty \\frac{\\left(-1\\right)^n}{n!}\\frac{\\partial^n P}{\\partial x^n}  \\left<y^n\\right>\n\\end{aligned}\n\\end{equation*}\nwhere $\\left<y^n\\right> = \\int_{-\\infty}^\\infty p\\left(y\\right) y^n dy$. Since $p$ is even, the odd moments vanish, and the above is approximately\n\\begin{equation*}\n\\begin{aligned}\nP\\left(x,t\\right) + \\frac{\\left<y^2\\right>}{2} \\frac{\\partial^2 P}{\\partial x^2}\n\\end{aligned}\n\\end{equation*}\nwith some small corrections. Then\n\\begin{equation*}\n\\begin{aligned}\n\\frac{P\\left(x,t+\\Delta t\\right) - P\\left(x,t\\right)}{\\Delta t} = \\kappa \\frac{\\partial^2 P}{\\partial x^2}\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\kappa = \\frac{\\left<y^2\\right>}{2\\Delta t}\n\\end{aligned}\n\\end{equation*}\nAs $\\Delta t$ becomes small (but still large compared to collision times) we recover the heat equation.\n\n\\begin{tikzpicture}\n\\begin{axis} [axis lines = middle, xmin = -5,xmax = 5, ymin = -5,ymax = 5]\n\\addplot{x^2};\n\\end{axis}\n\\end{tikzpicture}\n\n\\subsection{Heat flow in a semi-infinite region}\n\nSuppose $\\phi\\left(x,t\\right)$ obeys the heat equation inside $\\Omega \\times\\left[0,\\infty\\right)$ with $\\Omega = \\left\\{x \\geq 0\\right\\}$, subject to the boundary conditions $\\phi\\left(x,t\\right) \\to c$ as $x \\to + \\infty$ and\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(0,t\\right) = \\phi_0 + A\\cos \\left(\\frac{2\\pi t}{t_D}\\right) + B \\cos \\left(\\frac{2\\pi t}{t_Y}\\right)\n\\end{aligned}\n\\end{equation*}\n(e.g. daily and yearly surface temperature variations, with periods $t_D$ and $t_Y$).\n\n\\begin{tikzpicture}\n\\draw[green,very thick] (0,0) -- (3,0);\n\\draw[yellow,very thick] (2,2.0) circle [radius=0.5cm];\n\\draw[yellow,thick,->] (2,1.3) .. controls (2.3,0.8) and (1.7,0.6) .. (2,0.3);\n\\end{tikzpicture}\n
To solve, we look for a simple solution to the heat equation of the form\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = X\\left(x\\right) T\\left(t\\right)\n\\end{aligned}\n\\end{equation*}\nwith\n\\begin{equation*}\n\\begin{aligned}\n\\frac{T'}{T} = \\kappa \\frac{X''}{X} = \\lambda\n\\end{aligned}\n\\end{equation*}\nWe want oscillatory solutions in time, so set $\\lambda = i\\omega$. We get\n\\begin{equation*}\n\\begin{aligned}\n\\phi_\\omega\\left(x,t\\right) = e^{i\\omega t} \\left(a_\\omega e^{-x\\sqrt{\\frac{i\\omega}{\\kappa}}} + b_\\omega e^{+x \\sqrt{\\frac{i\\omega}{\\kappa}}} \\right)\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\sqrt{i\\omega} = \\left\\{\n\\begin{array}{ll}\n\\frac{1+i}{\\sqrt{2}}\\sqrt{|\\omega|} & \\omega > 0\\\\\n\\frac{i-1}{\\sqrt{2}}\\sqrt{|\\omega|} & \\omega < 0\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\n\nNow the boundary data can be written as\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(0,t\\right) = \\phi_0 + \\frac{A}{2}\\left(e^{i\\omega_D t} + e^{-i\\omega_D t}\\right) + \\frac{B}{2}\\left(e^{i\\omega_Y t} + e^{-i\\omega_Y t}\\right)\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n\\omega_{D,Y} = \\frac{2\\pi}{t_{D,Y}}\n\\end{aligned}\n\\end{equation*}\n\nSo we have non-zero contributions only when the separation constant $\\omega = \\pm \\omega_D,\\pm\\omega_Y,0$. Keeping the solutions that remain bounded as $x \\to \\infty$ and matching the boundary data gives\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = \\phi_0 + Ae^{-\\sqrt{\\frac{\\omega_D}{2\\kappa}}x} \\cos\\left(\\omega_D t-\\sqrt{\\frac{\\omega_D}{2\\kappa}} x\\right) + Be^{-\\sqrt{\\frac{\\omega_Y}{2\\kappa}}x} \\cos\\left(\\omega_Y t-\\sqrt{\\frac{\\omega_Y}{2\\kappa}} x\\right)\n\\end{aligned}\n\\end{equation*}\n
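\nNumerically, the penetration depth $\\sqrt{2\\kappa/\\omega}$ of the daily oscillation is a fraction of a metre; a Python sketch with illustrative numbers for rock (NumPy only):\n\\begin{verbatim}\nimport numpy as np\n\nkappa = 1e-6                     # m^2/s, a typical rock diffusivity\nt_D = 24 * 3600.0                # one day, in seconds\nomega = 2 * np.pi / t_D\ndepth = np.sqrt(2 * kappa / omega)\nprint("daily skin depth ~", depth, "m")\n\n# the damped, phase-lagged thermal wave at depth x\nT = lambda x, t: np.exp(-x / depth) * np.cos(omega * t - x / depth)\nprint(T(0.0, 0.0), T(depth, 0.0))   # attenuated by e^{-1}, phase-shifted\n\\end{verbatim}\n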
Let $\\Omega = \\left\\{\\left(r,\\theta,\\phi\\right) \\in \\R^3 \\mid r \\leq a \\right\\}$ and suppose $\\psi:\\Omega \\times \\left[0,\\infty\\right) \\to \\R$ obeys $\\frac{\\partial \\psi}{\\partial t} = \\kappa \\nabla^2 \\psi$. For simplicity, we'll take $\\psi\\left(r,\\theta,\\phi\\right) = \\psi\\left(r\\right)$. We impose\\\\\n$\\bullet$ boundary condition $\\psi|_{\\partial \\Omega \\times \\left(0,\\infty\\right)} = \\psi\\left(a\\right) = 0$;\\\\\n$\\bullet$ initial condition $\\psi|_{\\Omega\\times\\left\\{0\\right\\}} = \\psi_0$ constant.\n\nLook for a solution of the form $\\psi\\left(r,t\\right) = R\\left(r\\right) T\\left(t\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{\\kappa T} \\frac{dT}{dt} = \\frac{1}{Rr^2} \\frac{d}{dr}\\left(r^2 \\frac{dR}{dr}\\right) = -\\lambda^2\n\\end{aligned}\n\\end{equation*}\nfor some constant $\\lambda$.\n\nThe radial equation is\n\\begin{equation*}\n\\begin{aligned}\n\\frac{d}{dr}\\left(r^2 \\frac{dR}{dr}\\right) = -\\lambda^2 r^2 R\n\\end{aligned}\n\\end{equation*}\nand has solutions\n\\begin{equation*}\n\\begin{aligned}\nR\\left(r\\right) = A_\\lambda \\frac{\\sin\\left(\\lambda r\\right)}{r} + B_\\lambda \\frac{\\cos\\left(\\lambda r\\right)}{r}\n\\end{aligned}\n\\end{equation*}\n\nRegularity at $r=0$ forces us to set $B_\\lambda = 0$, while the boundary condition $\\psi|_{r=a} = 0$ fixes $\\lambda = \\frac{n\\pi}{a}$ for some $n=1,2,...$.\n\nSo our separated solution is\n\\begin{equation*}\n\\begin{aligned}\n\\psi_n\\left(r,t\\right) = \\frac{A_n}{r}\\sin\\left(\\frac{n\\pi r}{a}\\right) \\exp \\left(-\\frac{n^2 \\pi^2 \\kappa t}{a^2}\\right)\n\\end{aligned}\n\\end{equation*}\nwhich implies that the general solution obeying the homogeneous boundary condition is\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,t\\right) = \\sum_{n \\in \\Z^+} \\frac{A_n}{r}\\sin\\left(\\frac{n\\pi r}{a}\\right) \\exp\\left(-\\frac{n^2 \\pi^2 \\kappa t}{a^2}\\right)\n\\end{aligned}\n\\end{equation*}\nWe fix the $A_n$ by imposing $\\psi\\left(r,0\\right) = \\psi_0$. So\n\\begin{equation*}\n\\begin{aligned}\n&r\\psi_0 = \\sum_{n \\in \\Z^+} A_n \\sin\\left(\\frac{n\\pi r}{a}\\right)\\\\\n\\implies & A_m = \\frac{2\\psi_0}{a}\\int_0^a r\\sin\\left(\\frac{m\\pi r}{a}\\right) dr = \\frac{2a\\psi_0\\left(-1\\right)^{m+1}}{m\\pi}\n\\end{aligned}\n\\end{equation*}\nTherefore\n\\begin{equation*}\n\\begin{aligned}\n\\psi\\left(r,t\\right) = \\frac{2a\\psi_0}{\\pi r}\\sum_{n \\in \\Z^+} \\frac{\\left(-1\\right)^{n+1}}{n} \\sin\\left(\\frac{n\\pi r}{a}\\right) \\exp\\left(-\\frac{n^2 \\pi^2 \\kappa t}{a^2}\\right)\n\\end{aligned}\n\\end{equation*}\n\nConsider the temperature gradient at the surface:\n\\begin{equation*}\n\\begin{aligned}\n\\left.\\frac{\\partial \\psi}{\\partial r}\\right|_{r=a} &= -\\frac{2\\psi_0}{a} \\sum_{n \\in \\Z^+} \\exp\\left(-\\frac{n^2 \\pi^2 \\kappa t}{a^2}\\right)\\\\\n&\\approx -\\frac{2\\psi_0}{a} \\int_{0}^\\infty \\exp\\left(-\\frac{x^2\\pi^2 \\kappa t}{a^2}\\right) dx\\\\\n&= -\\frac{\\psi_0}{\\sqrt{\\pi \\kappa t}}\n\\end{aligned}\n\\end{equation*}\n\nSo $$t_{now} = \\frac{\\psi_0^2}{\\left(\\frac{\\partial \\psi}{\\partial r}\\right)_{now}^2 \\pi\\kappa }$$\n\nWe have $\\psi_0 \\sim 1000 ^\\circ$C, $\\frac{\\partial \\psi}{\\partial r}|_{now} \\sim 20^\\circ$C$/100$m, and $\\kappa$ the thermal diffusivity of rock. Then we get $t_{now} \\approx 100$ million years. However, we didn't take heat generated by radioactivity into account.\n
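\nThe approximation $\\sum_{n\\geq 1} e^{-n^2 s} \\approx \\frac{1}{2}\\sqrt{\\pi/s}$ used for the surface gradient (here $s = \\pi^2\\kappa t/a^2$, small at times of interest) is easy to check; a Python sketch (NumPy only):\n\\begin{verbatim}\nimport numpy as np\n\ns = 1e-3                                  # s = pi^2 kappa t / a^2, small\nn = np.arange(1, 2000)\nseries = np.exp(-n**2 * s).sum()\nintegral = 0.5 * np.sqrt(np.pi / s)       # half the Gaussian integral\nprint(series, integral)                   # agree to leading order in s\n\\end{verbatim}\n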
\\newpage\n\n\\section{The Wave Equation}\n\n\\subsection{The Wave Equation}\nA function\n\\begin{equation*}\n\\begin{aligned}\n\\phi: \\Omega \\times \\R \\to \\R\n\\end{aligned}\n\\end{equation*}\nsolves the wave equation if \n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} = \\nabla^2 \\phi\n\\end{aligned}\n\\end{equation*}\n\nThere's a unique solution subject to \\\\\n$\\bullet$ initial conditions:\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(\\mathbf{x},0\\right) = f\\left(\\mathbf{x}\\right)\\\\\n\\partial_t \\phi\\left(\\mathbf{x},0\\right) = g\\left(\\mathbf{x}\\right)\n\\end{aligned}\n\\end{equation*}\n$\\bullet$ boundary conditions:\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(\\mathbf{x},t\\right)|_{\\partial \\Omega} = h\\left(\\mathbf{x},t\\right)\n\\end{aligned}\n\\end{equation*}\nwhich is a Dirichlet boundary condition.\n\\begin{proof}\nWe consider the \\emph{energy}\n\\begin{equation*}\n\\begin{aligned}\nE_\\phi = \\frac{1}{2} \\int_\\Omega \\left(\\frac{\\partial \\phi}{\\partial t}\\right)^2 + c^2 \\left(\\nabla \\phi\\right)\\cdot\\left(\\nabla\\phi\\right)dV\n\\end{aligned}\n\\end{equation*}\nDifferentiating under the integral, we have\n\\begin{equation*}\n\\begin{aligned}\n\\frac{dE_\\phi}{dt} &= \\int_\\Omega \\frac{\\partial \\phi}{\\partial t}\\frac{\\partial^2\\phi}{\\partial t^2} + c^2\\left(\\nabla\\phi\\right)\\cdot\\nabla\\left(\\frac{\\partial \\phi}{\\partial t}\\right) dV\\\\\n&= \\int_\\Omega \\frac{\\partial \\phi}{\\partial t}\\left(\\frac{\\partial^2 \\phi}{\\partial t^2} - c^2 \\nabla^2 \\phi\\right) dV + c^2 \\int_{\\partial \\Omega} \\frac{\\partial \\phi}{\\partial t}\\left(\\nabla\\phi\\right)\\cdot\\mathbf{\\hat{n}} dS\n\\end{aligned}\n\\end{equation*}\nwhere $\\mathbf{\\hat{n}}$ is the outward-pointing normal. For a solution of the wave equation the first integral vanishes, so\n\\begin{equation*}\n\\begin{aligned}\n\\frac{dE_\\phi}{dt} = c^2 \\int_{\\partial \\Omega} \\frac{\\partial \\phi}{\\partial t} \\left(\\nabla\\phi\\right)\\cdot\\mathbf{\\hat{n}} dS\n\\end{aligned}\n\\end{equation*}\nSo $E_\\phi$ is constant in time if no energy flows out of the region $\\Omega$ (i.e. if this boundary term vanishes).\n\nNow let $\\phi_1,\\phi_2$ each solve the wave equation with the same boundary conditions and initial conditions. Then $\\delta\\phi = \\phi_2 - \\phi_1$ obeys the wave equation with\n\\begin{equation*}\n\\begin{aligned}\n\\delta\\phi|_{\\partial \\Omega \\times \\R} = 0,\\\\\n\\delta\\phi|_{\\Omega\\times\\left\\{0\\right\\}} = 0,\\\\\n\\partial_t \\left(\\delta\\phi\\right)|_{\\Omega \\times \\left\\{0\\right\\}} = 0\n\\end{aligned}\n\\end{equation*}\nThat implies \n\\begin{equation*}\n\\begin{aligned}\n\\frac{dE_{\\delta\\phi}}{dt} = c^2\\int_{\\partial\\Omega} \\partial_t \\left(\\delta\\phi\\right) \\mathbf{\\hat{n}}\\cdot\\nabla\\left(\\delta\\phi\\right) dS = 0\n\\end{aligned}\n\\end{equation*}\nsince $\\partial_t \\left(\\delta \\phi\\right)|_{\\partial \\Omega} = 0$. 
So\n\\begin{equation*}\n\\begin{aligned}\nE_{\\delta\\phi}\\left(t\\right) = E_{\\delta\\phi}\\left(0\\right) = 0\n\\end{aligned}\n\\end{equation*}\nsince $\\delta\\phi|_{\\Omega\\times\\left\\{0\\right\\}} = \\partial_t \\delta\\phi|_{\\Omega\\times\\left\\{0\\right\\}} = 0$.\\\\\nHowever, $E_{\\delta\\phi}$ is the integral of non-negative quantities, so $E_{\\delta\\phi} = 0$ if and only if $\\frac{\\partial\\delta\\phi}{\\partial t} = 0$ and $\\nabla\\left(\\delta\\phi\\right) = 0$ for all $\\left(\\mathbf{x},t\\right) \\in \\Omega\\times \\R$ (assuming $\\delta \\phi$ is sufficiently smooth).\\\\\nSo $\\delta\\phi = 0$ always, and hence $\\phi_1 = \\phi_2$.\n\\end{proof}\n\n\\begin{eg}\nConsider a string of undisturbed length $L$:\n\n\\begin{tikzpicture}\n\\draw (0,0) .. controls (0.5,1) and (1.5,-1) .. (2,0);\n\\draw (2,0) .. controls (3,1) and (3.5,-0.2) .. (4,0);\n\\end{tikzpicture}\n\nLet $\\phi\\left(x,t\\right)$ be the transverse displacement, and let $T_A$, $T_B$ be the tensions at the two ends of a small element $\\left[x,x+\\delta x\\right]$, pulling tangentially. The string makes no longitudinal displacement. That means\n\\begin{equation*}\n\\begin{aligned}\nT_A \\cos \\theta_A = T_B \\cos \\theta_B = T\n\\end{aligned}\n\\end{equation*}\nNewton's 2nd law for the transverse motion gives\n\\begin{equation*}\n\\begin{aligned}\n\\mu \\delta x \\frac{\\partial^2 \\phi}{\\partial t^2} &= T_B \\sin \\theta_B - T_A \\sin \\theta_A\\\\\n\\implies \\frac{\\mu\\delta x}{T} \\frac{\\partial^2 \\phi}{\\partial t^2} &= \\frac{T_B \\sin \\theta_B}{T_B \\cos\\theta_B} - \\frac{T_A \\sin \\theta_A}{T_A \\cos \\theta_A} \\\\&= \\tan \\theta_B - \\tan \\theta_A\\\\\n&= \\left.\\frac{\\partial \\phi}{\\partial x}\\right|_B - \\left.\\frac{\\partial \\phi}{\\partial x}\\right|_A\\\\\n&\\approx \\frac{\\partial^2 \\phi}{\\partial x^2} \\delta x\n\\end{aligned}\n\\end{equation*}\nSo $\\phi$ obeys\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{c^2}\\frac{\\partial^2\\phi}{\\partial t^2} = \\frac{\\partial^2 \\phi}{\\partial x^2}\n\\end{aligned}\n\\end{equation*}\nwith\n\\begin{equation*}\n\\begin{aligned}\nc^2 = \\frac{T}{\\mu}\n\\end{aligned}\n\\end{equation*}\nas well as initial conditions\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,0\\right) = f\\left(x\\right),\\\\\n\\partial_t \\phi\\left(x,0\\right) = g\\left(x\\right)\n\\end{aligned}\n\\end{equation*}\nand boundary conditions\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(0,t\\right) = \\phi\\left(L,t\\right) = 0\n\\end{aligned}\n\\end{equation*}\n\nWe look for a solution of the form $\\phi\\left(x,t\\right) = X\\left(x\\right)T\\left(t\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\nX'' = -\\lambda^2 X,\\\\\nT'' = -c^2 \\lambda^2 T\n\\end{aligned}\n\\end{equation*}\nFor $X$ we have\n\\begin{equation*}\n\\begin{aligned}\nX\\left(x\\right) = A\\sin\\left(\\lambda x\\right) + B\\cos\\left(\\lambda x\\right)\n\\end{aligned}\n\\end{equation*}\nThe boundary conditions force $B=0$ and $\\lambda = n\\pi/L$. So the separable solutions are\n\\begin{equation*}\n\\begin{aligned}\n\\phi_n\\left(x,t\\right) = \\sin\\left(n\\pi x/L\\right) \\left[A_n \\sin\\left(n\\pi ct/L\\right) + B_n \\cos\\left(n\\pi ct/L\\right)\\right]\n\\end{aligned}\n\\end{equation*}\nThe general solution obeying homogeneous boundary conditions is\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = \\sum_{n=1}^\\infty \\sin\\left(n\\pi x/L\\right) \\left[A_n \\sin\\left(n\\pi ct/L\\right) + B_n \\cos\\left(n\\pi ct/L\\right)\\right]\n\\end{aligned}\n\\end{equation*}\n\nThe coefficients $A_n$, $B_n$ are fixed by the initial conditions on $\\phi$, $\\partial_t \\phi$ respectively. We find\n\\begin{equation*}\n\\begin{aligned}\nB_n = \\frac{2}{L} \\int_0^L f\\left(x\\right) \\sin\\left(n\\pi x/L\\right) dx,\\\\\nA_n = \\frac{2}{n\\pi c}\\int_0^L g\\left(x\\right) \\sin\\left(n\\pi x/L\\right) dx\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n
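\nIn practice these coefficients can be computed by quadrature; a Python sketch (assuming SciPy; the initial data $f$, $g$ are illustrative):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nL, c, N = 1.0, 1.0, 50\nf = lambda x: x * (L - x)          # initial displacement (example)\ng = lambda x: 0.0 * x              # released from rest\n\nBn = [2 / L * quad(lambda x: f(x) * np.sin(n * np.pi * x / L), 0, L)[0]\n      for n in range(1, N + 1)]\nAn = [2 / (n * np.pi * c)\n      * quad(lambda x: g(x) * np.sin(n * np.pi * x / L), 0, L)[0]\n      for n in range(1, N + 1)]\n\ndef phi(x, t):\n    # truncated normal-mode expansion\n    return sum(np.sin(n * np.pi * x / L)\n               * (An[n - 1] * np.sin(n * np.pi * c * t / L)\n                  + Bn[n - 1] * np.cos(n * np.pi * c * t / L))\n               for n in range(1, N + 1))\n\nprint(phi(0.3, 0.0), f(0.3))       # should agree\n\\end{verbatim}\n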
\\begin{eg}\nSuppose we pluck the string so that at $t=0$,\n\\begin{equation*}\n\\begin{aligned}\nf\\left(x\\right) = \\left\\{\\begin{array}{ll}\n\\frac{2hx}{L} & 0\\leq x \\leq \\frac{L}{2}\\\\\n\\frac{2h\\left(L-x\\right)}{L} & \\frac{L}{2} \\leq x \\leq L\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\n\n\\begin{tikzpicture}\n\\draw[very thick,blue] (0,0) -- (3,1);\n\\draw[very thick,blue] (3,1) -- (6,0);\n\\end{tikzpicture}\n\nand release it from rest: $g\\left(x\\right) = 0$.\n\nThen the Fourier coefficients are\n\\begin{equation*}\n\\begin{aligned}\n\\hat{f}_n = \\left\\{ \\begin{array}{ll}\n\\left(-1\\right)^{\\left(n-1\\right)/2} \\frac{8h}{n^2 \\pi^2} & n \\text{ odd}\\\\\n0 & n \\text{ even}\n\\end{array}\n\\right.\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(x,t\\right) = \\frac{8h}{\\pi^2} \\sum_{m=1}^\\infty \\frac{\\left(-1\\right)^{m+1}}{\\left(2m-1\\right)^2} \\sin\\frac{\\left(2m-1\\right)\\pi x}{L} \\cos\\frac{\\left(2m-1\\right) \\pi c t}{L}\n\\end{aligned}\n\\end{equation*}\n\nNote that all the frequencies of the different normal modes $\\phi_n\\left(x,t\\right)$ are integer multiples of the fundamental frequency $\\pi c/L$.\n
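\nA short numerical check (a Python sketch, NumPy only) that the partial sums of this series reproduce the triangular profile at $t=0$:\n\\begin{verbatim}\nimport numpy as np\n\nL, h = 1.0, 0.1\nx = np.array([0.25, 0.5, 0.75])\nf_exact = np.where(x <= L / 2, 2 * h * x / L, 2 * h * (L - x) / L)\n\nM = 200\nm = np.arange(1, M + 1)\n# terms (-1)^{m+1} sin((2m-1) pi x / L) / (2m-1)^2\nterms = ((-1)**(m + 1) / (2 * m - 1)**2)[:, None] * np.sin(\n    np.outer(2 * m - 1, np.pi * x / L))\nprint(8 * h / np.pi**2 * terms.sum(axis=0), f_exact)\n\\end{verbatim}\n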
The kinetic energy of the string (mass per unit length $\\mu$) is\n\\begin{equation*}\n\\begin{aligned}\nKE = \\frac{1}{2}\\mu \\int_0^L \\left(\\frac{\\partial \\phi}{\\partial t}\\right)^2 dx\n\\end{aligned}\n\\end{equation*}\n\nBecause the string is under tension, it also has potential energy. For a small piece of the string, the extension is $\\delta s - \\delta x$, where $\\delta x$ is the original length (difference in the $x$ direction) and $\\delta s$ is the stretched length. We have\n\\begin{equation*}\n\\begin{aligned}\n\\delta s \\approx \\sqrt{\\delta \\phi^2 + \\delta x^2}\n\\end{aligned}\n\\end{equation*}\nso the PE of this piece is\n\\begin{equation*}\n\\begin{aligned}\nT\\left(\\sqrt{\\left(\\frac{\\partial \\phi}{\\partial x}\\right)^2 +1} - 1 \\right) \\delta x \\approx \\frac{1}{2}T \\left(\\frac{\\partial \\phi}{\\partial x}\\right)^2 \\delta x\n\\end{aligned}\n\\end{equation*}\nIntegrating over the length of the string,\n\\begin{equation*}\n\\begin{aligned}\nPE = \\frac{T}{2}\\int_0^L \\left(\\frac{\\partial \\phi}{\\partial x}\\right)^2 dx\n\\end{aligned}\n\\end{equation*}\nso the total energy at time $t$ is\n\\begin{equation*}\n\\begin{aligned}\nE &= KE + PE\\\\\n&= \\frac{\\mu}{2} \\int_0^L \\left(\\frac{\\partial \\phi}{\\partial t}\\right)^2 + c^2 \\left(\\frac{\\partial \\phi}{\\partial x}\\right)^2 dx\n\\end{aligned}\n\\end{equation*}\nas before (up to the overall factor of $\\mu$).\n\nPlugging in our general solution, we obtain\n\\begin{equation*}\n\\begin{aligned}\nKE\\left(t\\right) = \\frac{\\mu \\pi^2 c^2}{4L}\\sum_{n=1}^\\infty n^2 \\left[A_n \\cos\\left(\\frac{n\\pi ct}{L}\\right) - B_n \\sin\\left(\\frac{n\\pi ct}{L}\\right)\\right]^2\n\\end{aligned}\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\nPE\\left(t\\right) = \\frac{\\mu \\pi^2 c^2}{4L}\\sum_{n=1}^\\infty n^2 \\left[A_n \\sin\\left(\\frac{n\\pi ct}{L}\\right) + B_n \\cos\\left(\\frac{n\\pi ct}{L}\\right)\\right]^2\n\\end{aligned}\n\\end{equation*}\nSo\n\\begin{equation*}\n\\begin{aligned}\nE\\left(t\\right) = \\frac{\\mu c^2 \\pi^2}{4L} \\sum_{n=1}^\\infty n^2\\left(A_n^2 + B_n^2\\right)\n\\end{aligned}\n\\end{equation*}\nwhich is independent of time.\n\nThe string looks just like an infinite collection of harmonic oscillators with frequencies $n\\pi c/L$. They behave independently: the solution is a \\emph{sum} of terms for each $n$ separately.\n\\end{eg}\n\n\\subsection{Vibration of a Circular Membrane}\nLet $$\\Omega = \\left\\{ \\left(r,\\theta\\right) \\in \\R^2 \\mid r\\leq 1\\right\\}$$ and suppose\n\\begin{equation*}\n\\begin{aligned}\n\\phi: \\Omega \\times \\left[0,\\infty\\right) \\to \\R\n\\end{aligned}\n\\end{equation*}\nsolves\n\\begin{equation*}\n\\begin{aligned}\n\\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} = \\nabla^2 \\phi\n\\end{aligned}\n\\end{equation*}\ninside $\\Omega$, with initial conditions $\\phi\\left(r,\\theta,0\\right) = f\\left(r,\\theta\\right)$, $\\partial_t\\phi\\left(r,\\theta,0\\right) = g\\left(r,\\theta\\right)$ and boundary condition $\\phi\\left(1,\\theta,t\\right) = 0$.\n\nLet $\\phi\\left(r,\\theta,t\\right) = R\\left(r\\right)\\Theta\\left(\\theta\\right)T\\left(t\\right)$. Then\n\\begin{equation*}\n\\begin{aligned}\nT'' = -c^2 \\lambda T, \\quad \\Theta'' = -\\mu \\Theta, \\quad r\\left(rR'\\right)' + \\left(r^2 \\lambda - \\mu\\right) R = 0\n\\end{aligned}\n\\end{equation*}\n\nWe want the solution to be periodic in $\\theta$, so choose $\\mu = m^2$ for some $m \\in \\N$. The radial equation is then\n\\begin{equation*}\n\\begin{aligned}\nr\\left(rR'\\right)' + \\left(r^2\\lambda - m^2\\right)R = 0\n\\end{aligned}\n\\end{equation*}\nwhich is Bessel's equation of order $m$ (after rescaling). So\n\\begin{equation*}\n\\begin{aligned}\nR\\left(r\\right) = a_m J_m \\left(\\sqrt{\\lambda} r \\right) + b_m Y_m \\left(\\sqrt{\\lambda} r\\right)\n\\end{aligned}\n\\end{equation*}\n\nRegularity at $r=0$ forces $b_m = 0$, and to obey the boundary condition at $r=1$ we must choose $\\sqrt{\\lambda}$ to be a root $k_{mi}$ of the $m^{th}$ order Bessel function. 
So\n\\begin{equation*}\n\\begin{aligned}\nT\\left(t\\right) = A_{mi}\\sin\\left(k_{mi} ct\\right) + C_{mi} \\cos\\left(k_{mi} ct\\right)\n\\end{aligned}\n\\end{equation*}\n\nCombining all the terms, we have a \\emph{horrible} solution:\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(r,\\theta,t\\right) = &\\sum_{i=1}^\\infty \\left[A_{0i} \\sin \\left(k_{0i} ct\\right) + C_{0i} \\cos\\left(k_{0i} ct\\right)\\right] J_0 \\left(k_{0i} r\\right) \n\\\\& +\\sum_{m=1}^\\infty \\sum_{i=1}^\\infty \\left[A_{mi} \\cos\\left(m\\theta\\right) + B_{mi}\\sin \\left(m\\theta\\right)\\right] \\sin\\left(k_{mi} ct\\right) J_m\\left(k_{mi} r\\right) \n\\\\&+\\sum_{m=1}^\\infty \\sum_{i=1}^\\infty \\left[C_{mi} \\cos\\left(m\\theta\\right) + D_{mi} \\sin\\left(m\\theta\\right)\\right] \\cos\\left(k_{mi} ct\\right) J_m \\left(k_{mi} r\\right)\n\\end{aligned}\n\\end{equation*}\nThe coefficients $\\left\\{A,B,C,D\\right\\}_{mi}$ are fixed by the inhomogeneous conditions.\n\n\\begin{eg}\nIf a drum is initially flat ($\\phi|_{t=0} = 0$), but struck in the centre so that $\\partial_t\\phi|_{t=0} = g\\left(r\\right)$, then there is no angular dependence, so only the $m=0$ terms survive, and $C_{0i} = 0$. So\n\\begin{equation*}\n\\begin{aligned}\n\\phi\\left(r,\\theta,t\\right) = \\sum_{i=1}^\\infty A_{0i} \\sin\\left(k_{0i} ct\\right) J_0 \\left(k_{0i} r\\right)\n\\end{aligned}\n\\end{equation*}\nwhere (as an exercise)\n\\begin{equation*}\n\\begin{aligned}\nA_{0i} = \\frac{2}{ck_{0i}} \\frac{1}{\\left[J'_0\\left(k_{0i}\\right)\\right]^2} \\int_0^1 J_0 \\left(k_{0i}r\\right) g\\left(r\\right) r dr\n\\end{aligned}\n\\end{equation*}\n\\end{eg}\n\n\\begin{tikzpicture}\n\\draw (-1,3) .. controls (1,3) .. (1,1) ..\n controls (2,1) .. (2,-1) ..\n  controls (1,-1) .. (1,-3) ..\n   controls (-1,-3) .. (-1,-1) ..\n    controls (-2,-1) .. (-2,1) .. \n    controls (-1,1) .. cycle;\n\\node at (0,0) {$\\Omega$};\n\\end{tikzpicture}\n\nIn general the set of frequencies depends on $\\Omega$ in a complicated way. This raises an interesting question: can we `hear' the shape of a drum? The answer is no (the first counterexample was in dimension $16$, and a lot of examples are now known)!\n\nHowever, for suitably restricted classes of domains (e.g. convex domains with analytic boundary) the answer is known to be yes.\n\nAlso, let $N\\left(\\lambda_0\\right)$ be the number of eigenvalues of $-\\nabla^2|_\\Omega$ that are less than $\\lambda_0$. Then\n\\begin{equation*}\n\\begin{aligned}\nArea\\left(\\Omega\\right) = \\lim_{\\lambda_0\\to\\infty} \\frac{N\\left(\\lambda_0\\right)}{\\lambda_0} 4\\pi\n\\end{aligned}\n\\end{equation*}\n(by Weyl). We can also get bounds on Perimeter($\\Omega$) (cf. Spectral Geometry in Part III).
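\nWeyl's asymptotic formula can be tested on the unit disc, whose Dirichlet eigenvalues are the $k_{mi}^2$ found above; a Python sketch (assuming SciPy; the estimate approaches $\\pi = Area$ from below because of the boundary correction):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jn_zeros\n\nlam0 = 2000.0\neigs = []\nfor m in range(80):\n    ks = jn_zeros(m, 60)\n    mult = 1 if m == 0 else 2          # sin and cos modes for m >= 1\n    eigs += [k**2 for k in ks if k**2 <= lam0] * mult\nN = len(eigs)\nprint(4 * np.pi * N / lam0, np.pi)     # estimate of Area(disc) vs pi\n\\end{verbatim}\n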
\n\n\\end{document}
{"text": "\n\\documentclass[oneside]{report}\n\n\\title{Linear Algebra Study Guide}\n\\author{Orgho A. Neogi}\n\\date{20 September 2017}\n\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\makeatletter\n\\renewcommand*\\env@matrix[1][*\\c@MaxMatrixCols c]{%\n  \\hskip -\\arraycolsep\n  \\let\\@ifnextchar\\new@ifnextchar\n  \\array{#1}}\n\\makeatother\n\n\\usepackage{graphicx}\n\n\\usepackage{float}\n\n\\usepackage[colorlinks=true]{hyperref}\n\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}{Corollary}[theorem]\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{definition}{Definition}[section]\n\n\\begin{document}\n\n\\maketitle\n\n\\tableofcontents\n\n\\chapter{Linear Equations in Linear Algebra}\n\n\n\\section{Systems of Linear Equations}\n\nA system of linear equations has\n\n\\begin{enumerate}\n  \\item no solution, or\n  \\item exactly one solution, or\n  \\item infinitely many solutions.\n\\end{enumerate}\n\nA system of linear equations is said to be consistent if it has either one solution or\ninfinitely many solutions; a system is inconsistent if it has no solution.\n\nGiven an example System of Equations:\n\n\\begin{alignat*}{8}\n  &x_1 - &2x_2 + &x3  &= &0 \\\\\n  &     &2x_2 - &8x_3 &= &8 \\\\\n  &5x_1 &     - &5x_3 &= &10\n\\end{alignat*}\n\nThe matrix of coefficients is:\n\n\\begin{center}\n$\\begin{bmatrix}\n  &1 &2 &1\\\\\n  &0 &2 &-8\\\\\n  &5 &0 &-5\n\\end{bmatrix}$\n\\end{center}\n\nThe augmented matrix is:\n\\begin{center}\n  $\\begin{bmatrix}[ccc|c]\n    1 &2 &1  &0\\\\\n    0 &2 &-8 &8\\\\\n    5 &0 &-5 &10\n  \\end{bmatrix}$\n\\end{center}\n\nA system of linear equations can be solved using elementary row operations.\n\nElementary row operations are:\n\\begin{enumerate}\n  \\item (Replacement) Replace one row by the sum of itself and a multiple of another row.\n  \\item (Interchange) Interchange two rows.\n  \\item (Scaling) Multiply all entries in a row by a nonzero constant.\n\\end{enumerate}\n\nRow operations can be applied to any matrix and are reversible.\n\n\n\\section{Row Reduction and Echelon Forms}\n\n\\begin{definition}[Echelon Form]\nA rectangular matrix is in echelon form (or row echelon form) if it has the\nfollowing three properties:\n\n\\begin{enumerate}\n  \\item All nonzero rows are above any rows of all zeros.\n  \\item Each leading entry of a row is in a column to the right of the leading entry of the row above it.\n  \\item All entries in a column below a leading entry are zeros.\n\n  If a matrix in echelon form satisfies the following additional conditions, then it is in reduced row echelon form.\n\n \\item The leading entry in each nonzero row is 1.\n \\item Each leading 1 is the only nonzero entry in its column.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{theorem}[Uniqueness of the Reduced Echelon Form]\n  Each matrix is row equivalent to one and only one reduced echelon matrix.\n\\end{theorem}\n\n\\begin{definition}[Pivot Position]\n  A pivot position in a matrix A is a location in A that corresponds to a leading 1\n  in the reduced echelon form of A .\n\\end{definition}\n\n\\begin{definition}[Pivot Column]\n  A pivot column is a column of A that contains\n  a pivot position.\n\\end{definition}\n\n\\begin{theorem}[Existence and Uniqueness Theorem]\n  A linear system is consistent if and only if the rightmost column of the augmented\n  matrix is not a pivot column\u2014that is, if and only if an echelon form of the\n  augmented matrix has no row of the form:\n\n  \\begin{center}\n    $\\begin{bmatrix}\n      &0 &\\dots &0  
&b\n    \\end{bmatrix}$\n    With $b$ non-zero\n  \\end{center}\nIf a linear system is consistent, then the solution set contains either (i) a unique\n  solution, when there are no free variables, or (ii) infinitely many solutions, when\n  there is at least one free variable.\n\\end{theorem}\n\nUsing Row Reduction to Solve a Linear System:\n\\begin{enumerate}\n  \\item Write the augmented matrix of the system.\n  \\item Use the row reduction algorithm to obtain an equivalent augmented matrix in echelon form. Decide whether the system is consistent. If there is no solution, stop; otherwise, go to the next step.\n  \\item Write the system of equations corresponding to the matrix obtained in step 3.\n  \\item Rewrite each nonzero equation from step 4 so that its one basic variable is expressed in terms of any free variables appearing in the equation.\n\\end{enumerate}\n\n\\section{Vector Equations}\n\nA matrix with only one column is called a column vector, or simply a vector.\n\nAlgebraic Properties of $\\mathbb{R}^n$:\n\nFor all $\\vec{u}$, $\\vec{v}$, $\\vec{w}$ in $\\mathbb{R}^n$ and all scalars $c$ and $d$:\n\\begin{enumerate}\n  \\item $\\vec{u} + \\vec{v} = \\vec{v} + \\vec{u}$\n  \\item $(\\vec{u} + \\vec{v}) + \\vec{w} = \\vec{u} + (\\vec{v} + \\vec{w})$\n  \\item $\\vec{u} + 0 = 0 + \\vec{u} = 0$\n  \\item $\\vec{u} + (-\\vec{u}) = -\\vec{u} + \\vec{u} = 0$, where $-\\vec{u}$ denotes $-1\\vec{u}$\n  \\item $c(\\vec{u} + \\vec{v}) = c\\vec{u} + c\\vec{v}$\n  \\item $(c + d)\\vec{u} = c\\vec{u} + d\\vec{u}$\n  \\item $c(d\\vec{u}) = (cd)\\vec{u}$\n  \\item $1\\vec{u} = \\vec{u}$\n\\end{enumerate}\n\nA vector equation\n\n\\begin{center}\n  $x_1\\vec{a_1} + x_2\\vec{a_2} + \\dots + x_n\\vec{a_n} = \\vec{b}$\n\\end{center}\n\nhas the same solution set as the linear system whose augmented matrix is\n\n\\begin{center}\n   $\\begin{bmatrix}[cccc|c]\n      \\vec{a_1} &\\vec{a_2} &\\dots &\\vec{a_n} &\\vec{b}\n    \\end{bmatrix}$\n\\end{center}\n\n\\begin{definition}[span]\n  If $\\vec{v_1},\\dots,\\vec{v_p}$ are in $\\mathbb{R}^n$, then the set of all linear combinations of $\\vec{v_1},\\dots,\\vec{v_p}$ is denoted by $Span\\{\\vec{v_1},\\dots,\\vec{v_p}\\}$ and is called the subset of $\\mathbb{R}^n$ spanned by $\\vec{v_1},\\dots,\\vec{v_p}$. That is $Span\\{\\vec{v_1},\\dots,\\vec{v_p}\\}$ is the collection of all vectors that can be written in the form $c_1\\vec{v_1} + c_2\\vec{v_2} + \\dots + c_p\\vec{v_p}$ with $c_1,\\dots,c_p$ scalars.\n\\end{definition}\n\n\\section{The Matrix Equation $A\\vec{x} = \\vec{b}$}\n\n\\begin{theorem}\nIf $A$ is an $m \\times n$ matrix, with colums $\\vec{a_1},\\dots,\\vec{a_n}$, and if $\\vec{b}$ is in $\\mathbb{R}^m$, the matrix equation\n\\begin{center}\n  $A\\vec{x} = \\vec{b}$\n\\end{center}\nhas the same solution set as the vector equation\n\\begin{center}\n  $x_1\\vec{a_1} + x_2\\vec{a_2} + \\dots + x_n\\vec{a_n} = \\vec{b}$\n\\end{center}\nwhich, in turn, has the same solution set as the system of linear equations whose\naugmented matrix is\n\\begin{center}\n   $\\begin{bmatrix}[cccc|c]\n      \\vec{a_1} &\\vec{a_2} &\\dots &\\vec{a_n} &\\vec{b}\n    \\end{bmatrix}$\n\\end{center}\n\\end{theorem}\n\nThe equation $A\\vec{x} = \\vec{b}$ has a solution if and only if $\\vec{b}$ is a linear combination of the columns of A .\n\n\\begin{theorem}\nLet $A$ be an $m \\times n$ matrix.Then the following statements are logically equivalent. 
That is, for a particular $A$ , either they are all true statements or they are all false.\n\\begin{enumerate}\n  \\item For each $\\vec{b}$ in $\\mathbb{R}^m$, the equation $A\\vec{x} = \\vec{b}$ has a solution\n  \\item Each $\\vec{b}$ in $\\mathbb{R}^m$ is a linear combination of the columns of $A$\n  \\item The columns of $A$ span $\\mathbb{R}^m$\n  \\item $A$ has a pivot position in every row\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{theorem}\nIf $A$ is an $m \\times n$ matrix, $\\vec{u}$ and $\\vec{v}$ are vectors in $\\mathbb{R}^n$, and $c$ is a scalar, then:\n\\begin{enumerate}\n  \\item $A(\\vec{u}+\\vec{v}) = A\\vec{u} + A\\vec{v}$\n  \\item $A(c\\vec{u}) = c(A\\vec{u})$\n\\end{enumerate}\n\\end{theorem}\n\n\\section{Solution Sets of Linear Systems}\n\nThe homogeneous equation $A\\vec{x} = \\vec{0}$ has a non trivial solution if and only if the equation has at least one free variable.\n\n\\begin{theorem}\n  Suppose the equation $A\\vec{x} = \\vec{b}$ is consistent for some given $\\vec{b}$, and let $\\vec{p}$ be a solution. Then the solution set of $A\\vec{x} = \\vec{b}$ is the set of all vectors of the form $\\vec{w} = \\vec{p} + \\vec{v_h}$, where $\\vec{v_h}$ is any solution to the homogeneous equation $A\\vec{x} = \\vec{0}$.\n\\end{theorem}\n\nWriting a solution set (of a consistent system) in parametric vector form:\n\\begin{enumerate}\n  \\item Row reduce the augmented matrix to reduced echelon form.\n  \\item Express each basic variable in terms of any free variables appearing in an\nequation.\n  \\item Write a typical solution $\\vec{x}$ as a vector whose entries depend on the free\nvariables, if any.\n  \\item Decompose $\\vec{x}$ into a linear combination of vectors (with numeric entries) using\nthe free variables as parameters.\n\\end{enumerate}\n\n\\section{Linear Independence}\n\n\\begin{definition}[Linearly Independent]\n  An indexed set of vectors $\\{\\vec{v_1}, \\dots , \\vec{v_p}\\}$ in $\\mathbb{R}^n$ is said to be linearly independent if the vector equation\n  \\begin{center}\n    $x_1\\vec{v_1}+x_2\\vec{v_2}+\\dots+x_p\\vec{v_p} = 0$\n  \\end{center}\n  has only the trivial solution. The set $\\{\\vec{v_1}, \\dots , \\vec{v_p}\\}$ is said to be linearly dependent if it is not linearly independent.\n\\end{definition}\n\nThe columns of a matrix A are linearly independent if and only if the equation\n$A\\vec{x} = \\vec{0}$ has only the trivial solution. \\newline\n\nA set of two vectors $\\{\\vec{v_1},\\vec{v_2}\\}$ is linearly dependent if at least one of the vectors is a multiple of the other. The set is linearly independent if and only if neither of the\nvectors is a multiple of the other.\n\n\\begin{theorem}[Characterization of Linearly Dependent Sets]\n  An indexed set $S = \\{\\vec{v_1}, \\dots , \\vec{v_p}\\}$ of two or more vectors is linearly dependent if and only if at least one of the vectors in $S$ is a linear combination of the others. In\n  fact, if $S$ is linearly dependent and $v_1 \\neq 0$, then some $v_j$ (with $j > 1$ ) is a linear\n  combination of the preceding vectors, $v_1, \\dots, v_{j-1}$.\n\\end{theorem}\n\n\\begin{theorem}\n  If a set contains more vectors than there are entries in each vector, then the set\nis linearly dependent. 
That is, any set $S = \\{\\vec{v_1}, \\dots , \\vec{v_p}\\}$ in $\\mathbb{R}^n$ is linearly dependent if $p > n$.\n\\end{theorem}\n\n\\begin{theorem}\n  If a set $S = \\{\\vec{v_1}, \\dots , \\vec{v_p}\\}$ in $\\mathbb{R}^n$ , contains the zero vector, then the set is linearly dependent.\n\\end{theorem}\n\n\\section{Introduction to Linear Transformations}\n\n\\begin{definition}[Linear Transformation]\n  A transformation of mapping $T$ is linear if:\n  \\begin{enumerate}\n    \\item $T(\\vec{u} + \\vec{v}) = T(\\vec{u}) + T(\\vec{v})$ for all $\\vec{u}, \\vec{v}$ in the domain of $T$;\n    \\item $T(c\\vec{u}) = cT(\\vec{u})$ for all scalars $c$ and all $\\vec{u}$ in the domain of $T$;\n  \\end{enumerate}\n\\end{definition}\n\nEvery matrix transformation is a linear transformation.\n\nAll Linear transformations can be written as matrix multiplication.\n\n\\section{The Matrix of a Linear Transformation}\n\n\\begin{theorem}\n  Let $T:\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ be a linear transformation. Then there exists a unique matrix\n$A$ such that:\n\\begin{center}\n  $T(\\vec{x})=A\\vec{x}$ For all $\\vec{x}$ in $\\mathbb{R}^n$\n\\end{center}\n\\end{theorem}\n\n\\chapter{Matrix Algebra}\n\n\\section{Matrix Operations}\n\\begin{theorem}\n  Let $A,B,C$ be matrices of the same size and let $r$ and $s$ be scalars:\n  \\begin{enumerate}\n    \\item $A + B = B + A$\n    \\item $(A + B) + C = A + (B + C)$\n    \\item $A + 0 = A$\n    \\item $r(A + B) = rA + rB$\n    \\item $(r + s)A = rA + sA$\n    \\item $r(sA) = (rs)A$\n  \\end{enumerate}\n\\end{theorem}\n\n\\begin{theorem}\nLet $A$ be a $m \\times n$ matrix, and let $B$ and $C$ have sizes for which the indicated\nsums and products are defined.\n  \\begin{enumerate}\n    \\item $A(BC) = (AB)C$\n    \\item $A(B + C) = AB + AC$\n    \\item $(B + C)A = BA + CA$\n    \\item $r(AB) = (rA)B = A(rB)$\\newline\n    For any scalar $r$\n    \\item $I_m A = A = A I_n$\n  \\end{enumerate}\n\\end{theorem}\n\n\n\\begin{theorem}\n  Let $A$ and $B$ denote matrices whose sizes are appropriate for the following sums\n  and products.\n  \\begin{enumerate}\n    \\item $(A^T)^T = A$\n    \\item $(A + B)^T = A^T + B^T$\n    \\item For any scalar r, \\newline\n    $(rA)^T = rA^T$\n    \\item $(AB)^T = B^TA^T$\n  \\end{enumerate}\n\\end{theorem}\n\n\\section{Matrix Inverse}\n\\begin{theorem}\n  If $A$ is an invertible $n \\times n$ matrix, then for each $\\vec{b}$ in $\\mathbb{R}^n$, the equation $A\\vec{x} = \\vec{b}$ has the unique solution $\\vec{x} = A^{-1} \\vec{b}$\n\\end{theorem}\n\n\\begin{enumerate}\n  \\item $(AB)^{-1} = B^{-1}A^{-1}$\n  \\item $(A^T)^{-1} = (A^{-1})^T$\n\\end{enumerate}\n\nThe product of an $n \\times n$ invertible matrices is invertible, and the inverse is the\nproduct of their inverses in the reverse order.\n\n\\begin{theorem}\n  An $n \\times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I_n$ and in\nthis case, any sequence of elementary row operations that reduces $A$ to $I_n$ also transforms $I_n$ into $A^{-1}$\n\\end{theorem}\n\n\\section{Subspaces of $\\mathbb{R}^n$}\n\n\\begin{definition}[Subspace]\n  A subspace of $\\mathbb{R}^n$ is any set $H$ in $\\mathbb{R}^n$ that has 3 properties:\n  \\begin{enumerate}\n    \\item The zero vector is in $H$\n    \\item For each $\\vec{u}$ and $\\vec{v}$ in $H$, the sum $\\vec{u} + \\vec{v}$ is in $H$\n    \\item For each $\\vec{u}$ and each scalar $c$ in $H$, the vector $c\\vec{u}$ is in $H$\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{definition} [Column Space]\n 
The column space of a matrix A is the set Col A of all linear combinations of the columns of A.\n\\end{definition}\n\n\\begin{definition} [Null Space]\n  The null space of a matrix $A$ is the set Nul A of all solutions of the homogeneous\nequation $A\\vec{x} = \\vec{0}$\n\\end{definition}\n\n\\begin{definition} [Basis]\n  A basis for a subspace $H$ of $\\mathbb{R}^n$ is a linearly independent set in $H$ that spans $H$\n\\end{definition}\n\n\\section{Dimension and Rank}\n\n\\begin{definition}[dimension]\n  The dimension of a nonzero subspace $H$, denoted by dim $H$, is the number of\nvectors in any basis for $H$. The dimension of the zero subspace \\{0\\} is defined to\nbe zero.\n\\end{definition}\n\n\\begin{definition}[rank]\n  The rank of a matrix $A$, denoted by rank $A$, is the dimension of the column space\nof A .\n\\end{definition}\n\n\\begin{theorem}[Rank Theorem]\n  If a matrix $A$ has $n$ columns, then rank $A + dim Nul A = n$\n\\end{theorem}\n\n\\begin{theorem}[Basis Theorem]\n  Let $H$ be a p-dimensional subspace of $\\mathbb{R}^n$: Any linearly independent set of exactly\n$p$ elements in $H$ is automatically a basis for H: Also, any set of $p$ elements of $H$\nthat spans $H$ is automatically a basis for $H$.\n\\end{theorem}\n\n\\chapter{Determinants}\n\n\\section{Introduction to Determinants}\n\n\\begin{definition}[determinant]\n  For $n \\geq 2$ the determinant of an $n \\times n$ matrix $A = [a_{ij}]$ is the sum on $n$ terms of the form $\\pm a_{1j} det A_{1j}$, with plus and minus signs alternating, where the entries $a_{11}, a_{12}, \\dots , a_{1n}$ are from the first row of A.\n  \\begin{center}\n    $$det A = \\sum_{j=1}^{n} (-1)^{1+j} a_{1j}detA_{1j}$$\n  \\end{center}\n\\end{definition}\n\n\\begin{theorem}\n  If $A$ is a triangular matrix, then $det A$ is the product of the entries on the main\ndiagonal of $A$.\n\\end{theorem}\n\n\\section{Properties of Determinants}\n  \\begin{theorem}[Row Operations]\n    Let $A$ be a square matrix:\n    \\begin{enumerate}\n      \\item If a multiple of one row of $A$ is added to another row to produce a matrix $B$,\nthen $det B = det A$.\n      \\item If two rows of $A$ are interchanged to produce $B$, then $det B = - det A$.\n      \\item If one row of $A$ is multiplied by $k$ to produce $B$ , then $det B = k \\times det A$ .\n    \\end{enumerate}\n  \\end{theorem}\n\n  \\begin{theorem}\n    A square matrix $A$ is invertible if and only if $det A \\neq 0$.\n  \\end{theorem}\n\n  \\begin{theorem}\n    If $A$ is an $n\\times n$ matrix, then $det A^T = det A$.\n  \\end{theorem}\n\n  \\begin{theorem}[Multiplicative Property]\n    If $A$ and $B$ are $n\\times n$ matrices, then $det AB = (det A)(det B)$\n  \\end{theorem}\n\n  \\chapter{Eigenvalues and Eigenvectors}\n\n  \\section{Eigenvalues and Eigenvectors}\n\n  \\begin{definition}[Eigenvector and Eigenvalue]\n    An eigenvector of an $n\\times n$ matrix $A$ is a non-zero vector $\\vec{x}$ such that $A\\vec{x}=\\lambda\\vec{x}$ for some scalar $\\lambda$. A scalar $\\lambda$ is called an eigenvalue of $A$ if there is a nontrivial solution $\\vec{x}$ of $A\\vec{x}=\\lambda\\vec{x}$; such an $\\vec{x}$ is called an eigenvector corresponding to $\\lambda$.\n  \\end{definition}\n\n  \\section{The Characteristic equation}\n  \\begin{theorem}[The invertible Matrix theorem]\n    Let $A$ be an $n\\times n$ matrix. 
\n\n\\chapter{Determinants}\n\n\\section{Introduction to Determinants}\n\n\\begin{definition}[determinant]\n  For $n \\geq 2$ the determinant of an $n \\times n$ matrix $A = [a_{ij}]$ is the sum of $n$ terms of the form $\\pm a_{1j} \\det A_{1j}$, with plus and minus signs alternating, where the entries $a_{11}, a_{12}, \\dots , a_{1n}$ are from the first row of $A$:\n  \\begin{center}\n    $\\det A = \\sum_{j=1}^{n} (-1)^{1+j} a_{1j}\\det A_{1j}$\n  \\end{center}\n\\end{definition}\n\n\\begin{theorem}\n  If $A$ is a triangular matrix, then $\\det A$ is the product of the entries on the main\ndiagonal of $A$.\n\\end{theorem}\n\n\\section{Properties of Determinants}\n  \\begin{theorem}[Row Operations]\n    Let $A$ be a square matrix:\n    \\begin{enumerate}\n      \\item If a multiple of one row of $A$ is added to another row to produce a matrix $B$,\nthen $\\det B = \\det A$.\n      \\item If two rows of $A$ are interchanged to produce $B$, then $\\det B = - \\det A$.\n      \\item If one row of $A$ is multiplied by $k$ to produce $B$, then $\\det B = k \\det A$.\n    \\end{enumerate}\n  \\end{theorem}\n\n  \\begin{theorem}\n    A square matrix $A$ is invertible if and only if $\\det A \\neq 0$.\n  \\end{theorem}\n\n  \\begin{theorem}\n    If $A$ is an $n\\times n$ matrix, then $\\det A^T = \\det A$.\n  \\end{theorem}\n\n  \\begin{theorem}[Multiplicative Property]\n    If $A$ and $B$ are $n\\times n$ matrices, then $\\det AB = (\\det A)(\\det B)$.\n  \\end{theorem}\n\n  \\chapter{Eigenvalues and Eigenvectors}\n\n  \\section{Eigenvalues and Eigenvectors}\n\n  \\begin{definition}[Eigenvector and Eigenvalue]\n    An eigenvector of an $n\\times n$ matrix $A$ is a non-zero vector $\\vec{x}$ such that $A\\vec{x}=\\lambda\\vec{x}$ for some scalar $\\lambda$. A scalar $\\lambda$ is called an eigenvalue of $A$ if there is a nontrivial solution $\\vec{x}$ of $A\\vec{x}=\\lambda\\vec{x}$; such an $\\vec{x}$ is called an eigenvector corresponding to $\\lambda$.\n  \\end{definition}\n\n  \\section{The Characteristic Equation}\n  \\begin{theorem}[The Invertible Matrix Theorem]\n    Let $A$ be an $n\\times n$ matrix. Then $A$ is invertible if and only if:\n    \\begin{enumerate}\n      \\item The number $0$ is not an eigenvalue of $A$.\n      \\item The determinant of $A$ is not $0$.\n    \\end{enumerate}\n  \\end{theorem}\n\n  The scalar equation $\\det(A - \\lambda I) = 0$ is called the characteristic equation of $A$.\n\n\\section{Diagonalization}\n\n\\begin{theorem}[The Diagonalization Theorem]\n  An $n \\times n$ matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.\n\\end{theorem}\n\n\\end{document}\n", "meta": {"hexsha": "4efa49f5fd4dee6752997f42c8d67d09989fe1ee", "size": 16705, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "linearAlgebraStudyGuide.tex", "max_stars_repo_name": "OrghoN/mat221", "max_stars_repo_head_hexsha": "7dccf2b15ac8c25caa13f4a3031b7afe3825bb24", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linearAlgebraStudyGuide.tex", "max_issues_repo_name": "OrghoN/mat221", "max_issues_repo_head_hexsha": "7dccf2b15ac8c25caa13f4a3031b7afe3825bb24", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linearAlgebraStudyGuide.tex", "max_forks_repo_name": "OrghoN/mat221", "max_forks_repo_head_hexsha": "7dccf2b15ac8c25caa13f4a3031b7afe3825bb24", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.847639485, "max_line_length": 451, "alphanum_fraction": 0.6815324753, "num_tokens": 5427, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6370307944803831, "lm_q2_score": 0.8723473630627235, "lm_q1q2_score": 0.555712133754714}}
{"text": "\\chapter{Possible Future Work}\n\\label{chap:future}\n\nThe work in this thesis is pertinent to emerging methods in machine learning as well as current problems in structural bioinformatics.\nAs such, extensions of this work can be made in several distict veins, some focused on model formulation and training procedure, others on possible applications outside of interface prediction.\nSeveral of these extensions are documented below, categorized as either extensions of method or extensions of application.\n\n\\section{Extensions of Method}\n\n\\subsection{Double Coupling and Ensemble Approaches}\n\nAs discussed in Chapter \\ref{chap:experiments}, Sum Coupling outperforms Product Coupling on rigid and medium difficulty complexes, whereas Product Coupling is better for difficult complexes.\nTherefore it would be reasonable to incorporate both sum and Product Coupling into a single convolution operation:\n\n\\begin{equation}\nh_i(x | W^\\textsc{c}, W^\\textsc{n}, W^\\textsc{e}, b) = \\sigma \\bigg( W^{\\textsc{c}} x_i + \\frac{1}{|\\mathcal{N}_i|}\\sum_{j \\in \\mathcal{N}_i} (W^{\\textsc{n}} x_j) \\odot (W^{\\textsc{e}} A_{ij}) + W^{\\textsc{n}} x_j + W^{\\textsc{e}} A_{ij} + b \\bigg),\n\\label{eq:double_coupling}\n\\end{equation}\n\n\\noindent\nwhere here the same weight matrices are used in both the Sum Coupling and Product Coupling components to help prevent overfitting. \nAlternately, separate weight matrices could be used to increase the model's expressive power.\n\nRecall that concerning the RFPP metric, the optimal network depth increased with complex difficulty, where more layers were needed for complexes of greater difficulty. \nThis suggests that an ensemble model with multiple networks of varying depth could perform  better than each individual network.\nAlternative ensemble approaches like \\emph{boosting} or \\emph{bagging} may also prove beneficial.\nBoosting trains a sequence of \"weak\" models, with each added model being trained by giving more importance (loss weight) to examples misclassified by the existing set of models.\nFor interface prediction, this weighting can be performed on a per-complex basis, using RFPP or AUC as an indication of performance on each complex, or on a per-residue-pair basis, using the cross entropy loss of that specific training example.\nThe final prediction is a weighted average of the weak models' predictions, with the weights determined by the validation error of each respective model (with higher weight given to models with lower error).\nBagging creates multiple datasets by sampling with replacement from the original data set, and trains a different model on each sampled dataset.\nThis sampling can be performed at the complex level or the residue pair level.\nAgain, the final prediction is a weighted average of the weak models' predictions, weighted according to validation error.\n\n\n\n\\subsection{RFPP Optimization}\n\nPredicting an entire interface is a more difficult task than predicting only a few interacting residues in that problem.\nIt could be that training a classifier in the former may hinder its performance in the latter.\nAn alternative approach would be to just focus on generating a few predictions of high quality (i.e. 
confidence), which means optimizing performance on just the top positive predictions.\nThe existing model uses a loss function which incorporates every training example, not just the top positive prediction, hence the model is optimized for all examples.\nThe new approach implies treating RFPP as the primary metric and optimizing it directly.\nAn alternate loss function which includes the loss from only the highest scoring positive example could be used.\nHence when the loss is minimized, the performance of the highest performing positive example will be maximized.\nHere a sum over all examples has been replaced by a max function, which is not differentiable, but differentiable variants of the loss function could be constructed to enable gradient descent based learning.\nThis method is currently being investigated by another student in Professor Ben-Hur's research group.\n\n\\subsection{Additional Data Sources}\n\nThe success of deep learning methods has been attributed to large volumes of data and deep architectures~\\cite{krizhevsky2012}.\nFor interface prediction, despite the large volume of recorded protein structures, precious few complexes have been labeled in bound and unbound forms for use in model training and testing. \nThis dependence on small curated subsets of the available proteins has potentially limited the full leveraging of deep learning methods, however some opportunities exist for enlarging the training and testing data.\n\nThe Docking Benchmark Dataset was conceived as a method to evaluate docking methods, and correspondingly was carefully constructed to be non-redundant with respect to SCOP families, in order to give a fair evaluation.\nHowever for training purposes, it may be useful to include redundant proteins simply to provide the model with more training data. \nThe Critical Assessment of PRediction of Interactions (CAPRI) is an annual competition aimed at evaluating protein-protein docking methods~\\cite{janin2003}, and could also be used to help train or evaluate partner-specific protein interface prediction methods. \n\n\\subsection{Unsupervised Pretraining}\n\nAnother approach to the problem of limited labeled complexes is to utilize the vast quantity (>125,000) of recorded protein structures via unsupervised pre-training.\nHinton and Salakhutdinov (2006)~\\cite{hinton2006b} proposed a greedy layer-wise training algorithm for a particular form of neural network called an \\emph{autoencoder}.\nGenerally speaking, an autoencoder is simply a network trained to reproduce the input at the output under certain constraints~\\cite{goodfellow2016}.\nThese constraints, such as undercompleteness, regularization, and sparsity, prevent the network from simply learning an identity mapping of the input, and instead promote the learning of a compressed representation of the input (i.e. an \"encoding\").\nThe portion of the network which compresses the input is called the encoder, while the portion which reconstructs the input from the encoding is called the decoder.\nThe encoding half of the autoencoder can be used independently of the decoding half to create representations of the input that are often useful in supervised tasks like classification~\\cite{hinton2006b, bengio2007}.\n\nMasci et al. 
(2011)~\\cite{masci2011} proposed convolutional autoencoders (CAEs) which perform convolution to encode an image and \\emph{deconvolution} to decode the image.\nThere is also a corresponding \\emph{unpooling} operation which upsamples images which were pooled during the encoding step.\nDeconvolution is simply a convolution operation performed on the result of an encoding convolution, with weights being tied between convolution and deconvolution layers.\nFor $k$ filters of size $m \\times m$ and $c$ channels, the corresponding deconvolution would consist of $c$ filters of size $m \\times m$ and $k$ channels, where the weights have been reflected in both spatial dimensions (e.g. the weights for the lower left pixel of the receptive field are now applied to the upper right pixel).\nThe spatial reflection allows a particular weight to carry a specific meaning, namely it characterizes the relationship between a particular pixel in an image and a pixel in its encoding.\nTo illustrate this, Figure \\ref{fig:deconv} shows a deconvolution for a single filter and single channel.\nCAEs are trained in the same greedy, layerwise fashion as conventional autoencoders.\n\n\\begin{figure}\n\t\\includegraphics[width=0.8\\textwidth]{conv_ae.png}\n\t\\caption{Application of convolution, then deconvolution on a single channel input image. The relationship between the purple pixel in the input/reconstruction and the red pixel in the encoding is captured by the appropriate weight in the filters (lower left for convolution and upper right for deconvolution). By reflecting the convolution filter horizontally and vertically for use in deconvolution, the same weight ($W_{31}$) is used for this relationship during both encoding and decoding.\n\t\t\\label{fig:deconv}}\n\\end{figure}\n\n\nThe concept of deconvolution can be applied to graph convolutions as well.\nNote that when reflecting image convolution filters, the center weight remains in the same position.\nIn the same way, graph deconvolutions retain the same weights for the central vertex. \nFurthermore, since all neighbors use the same weights, there is no spatial reflection necessary.\nDeconvolution simply consists of transposing the center and neighbor weight matrices so that channels become filters and filters become channels.\nFor Sum Coupling, deconvolution becomes:\n\n\\begin{equation}\n\\hat{x_i}(h_i | W^\\textsc{c}, W^\\textsc{n}, W^{\\textsc{e}*}, b^*) = \\sigma \\bigg( (W^{\\textsc{c}})' h_i + \\frac{1}{|\\mathcal{N}_i|}\\sum_{j \\in \\mathcal{N}_i} \\big[ (W^{\\textsc{n}})' h_j + W^{\\textsc{e}*} A_{ij} \\big] + b^* \\bigg),\n\\label{eq:sum_deconv}\n\\end{equation}\n\n\\noindent\nwhere $\\hat{x}$ denotes the reconstruction of $x$ after deconvolving, $h_i$ is the convolved representation at vertex $i$, $(W^\\textsc{c})'$ is the transpose of $W^\\textsc{c}$, likewise for $(W^\\textsc{n})'$ and $W^\\textsc{n}$, and both $W^{\\textsc{e}*}$ and $b^*$ are weight matrices with unshared weights. 
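To make the tensor shapes concrete, here is a toy sketch of Sum Coupling followed by its tied-weight deconvolution (an illustration only: the sizes, random data, ReLU nonlinearity, and complete-graph neighbourhood are arbitrary choices of ours, and the biases are omitted):\n\n\\begin{verbatim}\nn_in, n_hid, n_edge, nv = 8, 16, 4, 5      # arbitrary toy sizes\nWc, Wn = randn(n_hid, n_in), randn(n_hid, n_in)  # tied encode/decode weights\nWe     = randn(n_hid, n_edge)              # encoder edge weights\nWeStar = randn(n_in, n_edge)               # untied decoder edge weights\nrelu(z) = max.(z, 0.0)\n\nx = [randn(n_in) for _ in 1:nv]            # toy vertex features\na = randn(n_edge)                          # one shared toy edge feature vector\nnbrs(i) = [j for j in 1:nv if j != i]      # complete graph, for simplicity\n\nenc(i) = relu(Wc*x[i] + sum(Wn*x[j] + We*a for j in nbrs(i)) / (nv - 1))\nh = [enc(i) for i in 1:nv]\n\n# Deconvolution: transposed (tied) weights map back to the input dimension.\nxhat1 = relu(Wc'*h[1] + sum(Wn'*h[j] + WeStar*a for j in nbrs(1)) / (nv - 1))\n\\end{verbatim}\n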
\nNote that graph convolutions generate representations on vertices of the graph, not the edges, so the non-encoded edge information ($A_{ij}$) must be used when deconvolving, and the weight matrix $W^{\\textsc{e}*}$ cannot be tied to the encoding matrix $W^{\\textsc{e}}$.\n\nAlternately, edge representations could be created in a separate convolution operation, where an edge's receptive field is simply the incident vertices:\n\n\\begin{equation}\nh_{ij}(A_{ij} |W^{\\textsc{ee}}, W^{\\textsc{ev}}, b_e) = \\sigma\\bigg( W^{\\textsc{ee}} A_{ij} + \\frac{1}{2}\\big(\nW^{\\textsc{ev}} x_i + W^{\\textsc{ev}} x_j \\big) + b_\\textsc{e} \\bigg),\n\\label{eq:edge_conv}\n\\end{equation}\n\n\\noindent\nwhere $h_{ij}$ is the representation of edge $(i, j)$, $W^{\\textsc{ee}}$ is the weight matrix associated with the edge, $W^{\\textsc{ev}}$ is the weight matrix associated with the incident vertices, and $b_\\textsc{e}$ is the bias.\nIn this case, vertex deconvolution can use the encoded edge representations and associated weights.\nEquation (\\ref{eq:sum_deconv}) then becomes:\n\n\\begin{equation}\n\\hat{x_i}(h_i | W^\\textsc{c}, W^\\textsc{n}, W^\\textsc{ev}, b) = \\sigma \\bigg( (W^{\\textsc{c}})' h_i + \\frac{1}{|\\mathcal{N}_i|}\\sum_{j \\in \\mathcal{N}_i} \\big[ (W^{\\textsc{n}})' h_j + (W^{\\textsc{ev}})' h_{ij} \\big] + b^* \\bigg),\n\\label{eq:sum_deconv2}\n\\end{equation}\n\n\\noindent\nwhere $(W^{\\textsc{ev}})'$ is the transpose of $W^{\\textsc{ev}}$.\nEdges can also be deconvolved using the edge and vertex representations:\n\n\\begin{equation}\n\\hat{A_{ij}}(h_{ij} | W^{\\textsc{ee}}, W^{\\textsc{e}}, b^{*}_\\textsc{e}) = \\sigma \\bigg( (W^{\\textsc{ee}})' h_{ij} + \\frac{1}{2}\\big[\n(W^{\\textsc{e}})' h_i + (W^{\\textsc{e}})' h_j \\big] + b^{*}_\\textsc{e} \\bigg).\n\\label{eq:edge_deconv}\n\\end{equation}\n\n\\noindent\nThis added weight sharing and symmetry between convolution and deconvolution operations may allow training of deeper networks which recognize more sophisticated structures.\n\n\n\n\\subsection{Simplicial Complex Convolution}\n\nGraphs are only one way to model structural relationships between entities.\nA simplicial complex is a generalization of a graph in which higher order simplices (triangles, tetrahedra, etc.), in addition to edges, describe relationships between groups of vertices. \nJust as an edge indicates a relationship between two vertices, a triangle (or 2-simplex) indicates the relationship between three vertices.\nFor example, both the \\v{C}ech and Vietoris-Rips definitions can be used to construct a simplicial complex from a set of points in an underlying metric space:\n\n\\begin{definition}\n\tGiven a point set $X$ in some metric space and a number $\\epsilon>0$, the \\v{C}ech complex $C^{\\textsc{\\v{C}}}_\\epsilon$ is the simplicial complex whose simplices are formed as follows. For each subset $S \\subset X$ of points, form an ($\\epsilon/2$)-ball around each point in $S$, and include $S$ as a simplex (of dimension $|S|$) if there is a common point contained in all of the balls in $S$.\n\\end{definition}\n\n\\begin{definition}\n\tGiven a point set $X$ in some metric space and a number $\\epsilon>0$, the Vietoris-Rips complex $C^{\\textsc{VR}}_\\epsilon$ is the simplicial complex whose simplices are formed as follows. 
For each subset $S \\subset X$ of points, form an ($\\epsilon/2$)-ball around each point in $S$, and include $S$ as a simplex (of dimension $|S|$) if for each pair of balls in $S$ there is a common point contained in both balls.\n\\end{definition}\n\nIn both definitions, simplices are included in the simplicial complex based on some notion of proximity.\nThis means simplicial complexes are naturally equipped to identify clusters of vertices within a neighborhood, something which can be useful for interface prediction, where local regions of residues may constitute portions of an interface.\nThough the number of possible convolution operations on simplicial complexes are many, one example is provided here as an illustration. \nIn this formulation, rather than sum the signal from all neighbors in the receptive field, each simplex $S_k, k \\in \\{1, 2,... K\\}$ (not counting vertices and edges, which are the lowest order simplexes) in the neighborhood is summed individually and the maximum simplex signal is added to the central signal:\n\n\\begin{equation}\n\\begin{split}\nh_i(x |  & W^{\\textsc{c}}, W^{\\textsc{n}}, W^{\\textsc{e1}}, W^{\\textsc{e2}}, b) = \\\\ \n&\\sigma \\bigg( W^{\\textsc{c}} x_i + \\max_{k} \\Big[ \\frac{1}{|S_k|}  \\sum_{j \\in \\mathcal{S}_k} ( W^{\\textsc{n}} x_j + W^{\\textsc{e1}} A_{ij} ) + \\frac{1}{|S_k|^{2}}\\sum_{j, l \\in \\mathcal{S}_k} (W^{\\textsc{e2}} A_{jl}) \\Big] + b \\bigg).\n\\end{split}\n\\label{eq:simplicial_complex_conv}\n\\end{equation}\n\n\\noindent\nThis formulation also includes \\emph{secondary} edges ($A_{jl}$), those between different neighbors in the same simplex.\nIt is possible that neighbors and edges occur in more than one simplex in the neighborhood, but that doesn't pose a problem because of the normalization and max function.\nThe use of simplicial complexes in defining convolution operations is currently being investigated by a student in Professor Ben-Hur's research group.\n\n\n\\section{Extensions of Application}\n\nInterface prediction is not the only active area of protein research, and so the modeling of proteins as graphs and use of graph convolution could potentially be used for other protein problems as well.\nA common characteristic of docking methods is that they create a high number of putative 3D bound structures~\\cite{janin2013}.\nGraph convolutional neural networks could also be used to evaluate putative docking solutions and rank them in order of likelihood.\nFor example, the interface derived from a docking solution could be compared to the interface predicted by a trained pairwise graph convolutional neural network and ranked according to similarity with the network prediction.\nSimilarly, much research has been performed in protein modeling, which seeks to predict a protein's secondary and tertiary structures from sequence alone~\\cite{schwede2013}.\nThough graph convolutions cannot be used to create protein folds \\emph{de novo}, they can be used to evaluate a set of putative folds and likewise rank them.\nIn both of these applications, scoring is being performed not at the individual vertex level, but at the graph level.\nThese could take advantage of a global pooling method like taking the maximum or average across all vertices, or developing a fingerprint similar to work by Duvenaud \\& Maclaurin, et al.~\\cite{duvenaud2015}.\n\nAny application involving structured data that do not fit naturally into a grid is a potential candidate for graph convolutional approaches.\nAside from proteins, other biological and 
chemical data often meet this criterion.\nIn particular, Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property Relationship (QSPR) both attempt to predict chemical behavior and properties of proposed chemical structures in order to identify structures with certain desired properties. \nIdentified structures become candidates for laboratory synthesis and experimentation, a much more laborious and time consuming process. \nThis is the focus of the Fingerprint convolution from Duvenaud \\& Maclaurin et al.~\\cite{duvenaud2015}.\nStudies in QSAR and QSPR have long been considered \"unquestionably of great importance in modern chemistry and biochemistry\"~\\cite{karelson1996}.\nLike proteins, chemical structures can be modeled as a graph and used to train graph convolutional networks to predict the properties of interest.\nThis is made possible by the vast and growing databases of chemical structures and their corresponding structures~\\cite{olah2008, judson2008}.\n\nBeyond the biochemical realm, graph structured data abound.\nKnowledge bases are collections of structured data in the form of triples indicating \\emph{(subject, predicate, object)}, where \\emph{subject} and \\emph{object} are named entities and \\emph{predicate} indicates the relationship between them, and pairs indicating \\emph{(subject, attribute)}, where \\emph{attribute} is a type or property label for \\emph{subject}.\nThese data can easily be thought of graphs, where entities are vertices, each type of predicate is one of many different types of edge features, and each attribute is one of many different types of vertex features.\nKnowledge bases emerge in online information repositories such as YAGO~\\cite{mahdisoltani2014}, DBPedia~\\cite{auer2007}, and Wikidata~\\cite{vrandevcic2014}, and are used in the Google search engine \\cite{singhal2012}.\nDue to their large size, knowledge graphs are often under-annotated, where vertex relationships and properties are missing. \nA graph convolutional neural network can be used to predict properties or relationships on graphs, essentially performing automatic inference on the graph.\nThis is the chief problem of interest for Schichtkrull \\& Kipf~\\cite{schlichtkrull2017}.\n\n\n\n", "meta": {"hexsha": "5f9f90d60d39d7b0fb06415c798f7376c787cfee", "size": 18208, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "future.tex", "max_stars_repo_name": "fouticus/msthesis", "max_stars_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "future.tex", "max_issues_repo_name": "fouticus/msthesis", "max_issues_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "future.tex", "max_forks_repo_name": "fouticus/msthesis", "max_forks_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.3743589744, "max_line_length": 493, "alphanum_fraction": 0.7855887522, "num_tokens": 4322, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8577681158979306, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5556606568948187}}
{"text": "\\section{Option valuation using Fourier inversion method}\n\\label{chap:fourier}\n\nNow we show how to value options using the Fourier inversion method introduced by \\textcite{heston1993closed}. Our approach is based on \\textcite[pp. 222--233]{nawalkabeliaevasoto2007dynamic}.\n\nWe make the following assumptions. The interest rate model is an affine $A_M(N)$ model. Thus\n\t\\begin{align}\n\t\tr(t) = \\delta(t) + \\sum_{m=1}^M X_m(t) + \\sum_{i=1}^{N-M} Y_i(t) .\n\t\\end{align}\nThe correlated Gaussian processes are\n\\begin{align}\n\\dx Y_i(t) &= - k_i Y_i(t) \\dx t + \\nu_i \\dx W_i (t),\n\\end{align}\nwhere $W_i$ is a Wiener process and \n\\begin{align}\n\\dx W_i (t) \\dx W_j (t) &= \\rho_{ij} \\dx t    \n\\end{align}\nfor all $i,j = 1,2, \\ldots, N-M$. Here $-1 < \\rho_{ij} = \\rho_{ji} < 1$ and $\\rho_{ii} = 1$. The $M$ square-root processes are\n\\begin{align}\n\\dx X_m(t) = \\alpha_m ( \\theta_m - X_m(t) ) \\dx t + \\sigma_m \\sqrt{ X_m(t) } \\dx Z_m (t)\n\\end{align}\nwhere $Z_m$ are independent Wiener processes and\n\\begin{align}\n\\dx W_i (t) \\dx Z_m (t) &= 0    \n\\end{align}\nfor all $i = 1,2, \\ldots, N-M$ and $m = 1,2, \\ldots, M$. \n\nWe explicitly assume that $\\delta(t) = 0$ for all $t \\geq 0$ to keep notation simple, and we use the dynamic extension of Section \\ref{sec:dynamicextension} to add $\\delta(t)$ afterwards. Now zero-coupon bond prices are given by\n\t\\begin{align}\n\t\t\\label{fourierbondaffineassumption}\n\t\t\\Bond(t,T) = \\e^{A(\\tau) - B^{\\top} (\\tau) X(t) - C^{\\top} (\\tau) Y(t) } ,\n\t\\end{align} \nwhere $\\tau = T-t$. This is a simplification from \\textcite[p. 433--435]{nawalkabeliaevasoto2007dynamic}.\n\t\nThe price of a call option expiring on $S$ written on a $T$-bond with strike price $K$ is\n\t\\begin{align}\n\t\tc(t) &= \\E_{\\Pm_0} \\left( \\e^{ - \\int_t^S r(s) \\dx s } \\left( \\Bond(S,T) - K \\right)_+ \\ | \\ \\F_t \\right) \\\\\n\t\t\t&= \\E_{\\Pm_0} \\left( \\e^{ - \\int_t^S r(s) \\dx s } \\left( \\Bond(S,T) - K \\right) \\1_{ \\Bond(S,T) - K \\geq 0 } \\ | \\ \\F_t \\right) \\\\\n\t\t\t&= \\Bond(t,T) \\Pi_1 - K \\Bond(t,S) \\Pi_2 ,\n\t\\end{align}\nwhere\n\t\\begin{align}\n\t\t\\Pi_1 &= \\E_{\\Pm_0} \\left( \\frac{\\e^{ - \\int_t^S r(s) \\dx s } \\Bond(S,T) \\1_{ \\Bond(S,T) \\geq K }}{\\Bond(t,T)} \\ | \\ \\F_t \\right) \\\\\n\t\t\\Pi_2 &= \\E_{\\Pm_0} \\left( \\frac{\\e^{ - \\int_t^S r(s) \\dx s } \\1_{ \\Bond(S,T) \\geq K }}{\\Bond(t,S)} \\ | \\ \\F_t \\right) .\n\t\\end{align}\nHere all the expectations are taken under the risk-free measure. We now write $\\Pi_1$ under a different measure. As $\\Bond(S,T) \\geq K$ is equivalent to $\\ln \\Bond(S,T) \\geq  \\ln K$, we change the variable by $y = \\ln \\Bond(S,T)$ and get\n\t\\begin{align}\n\t\t\\label{fourierchangeofvariable}\n\t\t\\Pi_1 &= \\int_{\\ln K}^{\\infty} \\left( \\frac{\\e^{ - \\int_t^S r(s) \\dx s } \\Bond(S,T) }{\\Bond(t,T)} f(y) \\right) \\ \\dx y .\n\t\\end{align}\nWe notice that \n\t\\begin{align}\n\t\t\\xi_1 (t) &= \\frac{\\Bond(S,T) \\Bank(t)}{\\Bond(t,T) \\Bank(S)} \\\\\n\t\t&= \\frac{ \\Bond(S,T) }{\\Bond(t,T)} \\e^{ - \\int_t^S r(s) \\dx s }\n\\end{align}\nis the Radon-Nikod\\'{y}m derivative of the $T$-forward measure with respect to the risk-free measure. Thus\n\t\\begin{align}\n\t\t\\Pi_1 &= \\int_{\\ln K}^{\\infty} \\xi_1(t) f(y) \\ \\dx y \\\\\n\t\t\t&= \\int_{\\ln K}^{\\infty} f_1(y) \\ \\dx y\n\\end{align}\nwhere $f_1$ is the probability density function under the $T$-forward measure. 
Let $g_1$ be the characteristic function of the $T$-forward measure, hence \n\t\\begin{align}\n\t\tg_1 (\\omega) &= \\int_{-\\infty}^{\\infty} \\e^{i\\omega y} f_1(y) \\ \\dx y \\\\\n\t\t\t&= \\int_{-\\infty}^{\\infty} \\left( \\e^{i\\omega y} \\xi_1(t) f(y) \\right) \\ \\dx y \\\\\n\t\t\t&= \\E_{\\Pm_0} \\left( \\e^{i\\omega \\ln \\Bond(S,T)} \\xi_1(t)  \\ | \\ \\F_t \\right) \\text{ and } \\\\\n\t\tg_1 (\\omega) \\Bond(t,T) &= \\E_{\\Pm_0} \\left( \\e^{(1+i\\omega) \\ln \\Bond(S,T)} \\e^{ - \\int_t^S r(s) \\dx s } \\ | \\ \\F_t \\right)\n\t\\end{align}\nas, by Equation \\ref{fourierbondaffineassumption},\n\t\\begin{align}\n\t\ty = \\ln \\Bond(S,T) = A(\\tau) - B^{\\top} (\\tau) X(t) - C^{\\top} (\\tau) Y(t) .\n\t\\end{align}\nBy the Feynman--Kac theorem, this expected value may be represented as the solution of an $N$-dimensional partial differential equation. \\textcite{nawalkabeliaevasoto2007dynamic} shows that a solution is\n\\begin{align}\n\\exp \\left( A_1^*(s) - \\sum_{m=1}^M B_{1m}^*(s) X_m(t) - \\sum_{i=1}^{N-M} C_{1i}^*(s) Y_i(t) \\right) ,\n\\end{align}\nwhere\n\\begin{align}\nA_1^*(0) &= a_1 = A(U) ( 1+\\boldsymbol{i} \\omega ) \\\\ \nB_{1m}^*(0) &= b_{1m} = B_m(U) ( 1+\\boldsymbol{i} \\omega ) \\\\ \nC_{1i}^*(0) &= c_{1i} = C_i(U) ( 1+\\boldsymbol{i} \\omega )\n\\end{align}\nfor $i= 1,2, \\ldots, N-M$ and $m=1,2, \\ldots , M$. Here\n\\begin{align}\nA_1^*(z) &= a_1  +  \\frac{1}{2} \\sum_{i=1}^{N-M} \\sum_{j=1}^{N-M} \\frac{\\nu_i \\nu_j \\rho_{ij}}{k_ik_j} ( z - q_i C_i(z) - q_j C_j(z) \\\\ &+ q_iq_j \\frac{1 - \\e^{ - (k_i+k_j)z } }{k_i+k_j} ) \\\\\n&- 2 \\sum_{m=1}^M \\frac{\\alpha_m \\theta_m}{\\sigma_m^2} \\left( \\beta_{3m} z + \\log ( \\frac{1 - \\beta_{4m} \\e^{ \\beta_{1m} z } }{1 - \\beta_{4m}} )  \\right) \\\\\nB_{1m}^*(z) &= \\frac{2}{\\sigma_m^2} \\left( \\frac{ \\beta_{2m} \\beta_{4m} \\e^{\\beta_{1m} z} - \\beta_{3m} }{ \\beta_{4m} \\e^{\\beta_{1m} z} - 1 } \\right) \\\\\nC_{1i}^* (z) &= \\frac{1-q_i \\e^{-k_iz}}{k_i},\n\\end{align}\nwhere\n\\begin{align}\nq_i &= 1 - k_i c_{1i} \\\\\n\\beta_{1m} &= \\sqrt{\\alpha_m^2 + 2 \\sigma_m^2} \\\\\n\\beta_{2m} &= \\frac{- \\alpha_m + \\beta_{1m}}{2} \\\\\n\\beta_{3m} &= \\frac{- \\alpha_m - \\beta_{1m}}{2} \\\\\n\\beta_{4m} &= \\frac{ - \\alpha_m - \\beta_{1m} - b_{1m} \\sigma_m^2 }{ - \\alpha_m + \\beta_{1m} - b_{1m} \\sigma_m^2 }\n\\end{align}\nfor $i= 1,2, \\ldots, N-M$ and $m=1,2, \\ldots , M$. It should be noted that $a_1, b_{1m}$ and $c_{1i}$ are actually functions of $\\omega$ and therefore $A_1^*, B_{1m}^*$ and $C_{1i}^*$ also depend on $\\omega$.\n\nThus the characteristic function is\n\\begin{align}\ng_1 (\\omega) &= \\frac{ \\exp \\left( A_1^*(s) - \\sum_{m=1}^M B_{1m}^*(s) X_m(t) - \\sum_{i=1}^{N-M} C_{1i}^*(s) Y_i(t)  \\right) }{\\Bond(t,T)} \n\\end{align}\nwhich allows us to calculate the values of the characteristic function. Now\n\t\\begin{align}\n\t\t\\Pi_1 &= \\int_{\\ln K}^{\\infty} f_1(y) \\ \\dx y \\\\\n\t\t\t&= \\frac{1}{2} + \\frac{1}{\\pi} \\int_0^{\\infty} \\Re \\left( \\frac{\\e^{-\\boldsymbol{i}\\omega \\ln K} g_1(\\omega)}{\\boldsymbol{i} \\omega} \\right) \\dx \\omega\n\t\\end{align}\nwhich can be calculated numerically. \\textcite{nawalkabeliaevasoto2007dynamic} note that this computation only requires that the model has analytical bond pricing formulas, so it can be utilized in a variety of models.\n\n
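As a rough sketch of this numerical step (our illustration: it assumes the QuadGK.jl quadrature package, and the characteristic function below is a Gaussian placeholder rather than the model's $g_1$):\n\n\\begin{verbatim}\nusing QuadGK\n\ng1(w) = exp(-0.5 * w^2)      # placeholder characteristic function\nlnK = log(0.95)              # logarithm of the strike\n\nintegrand(w) = real(exp(-im * w * lnK) * g1(w) / (im * w))\nval, _ = quadgk(integrand, 1e-10, 200.0)  # truncated infinite upper limit\nPi1 = 0.5 + val / pi\n\\end{verbatim}\n\nWith an actual $g_1$ assembled from $A_1^*, B_{1m}^*$ and $C_{1i}^*$, the same two lines of quadrature apply unchanged.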
\nWe can solve $\\Pi_2$ similarly. Instead of Equation \\ref{fourierchangeofvariable}, we have\n\t\\begin{align}\n\t\t\\Pi_2 &= \\int_{\\ln K}^{\\infty} \\left( \\frac{\\e^{ - \\int_t^S r(s) \\dx s } }{\\Bond(t,S)} f(y) \\right) \\ \\dx y \n\t\\end{align}\nand similar reasoning shows that\n\t\\begin{align}\n\t\t\\Pi_2 = \\frac{1}{2} + \\frac{1}{\\pi} \\int_0^{\\infty} \\Re \\left( \\frac{\\e^{-\\boldsymbol{i}\\omega \\ln K} g_2(\\omega)}{\\boldsymbol{i} \\omega} \\right) \\dx \\omega .\n\t\\end{align}\nNow\n\t\\begin{align}\n\t\tg_2 (\\omega) &= \\frac{ \\exp \\left( A_2^*(s) - \\sum_{m=1}^M B_{2m}^*(s) X_m(t) - \\sum_{i=1}^{N-M} C_{2i}^*(s) Y_i(t)  \\right) }{\\Bond(t,S)} \n\t\\end{align}\t\nwhere\n\\begin{align}\nA_2^*(0) &= a_2 = A(U) ( \\boldsymbol{i} \\omega ) \\\\ \nB_{2m}^*(0) &= b_{2m} = B_m(U) ( \\boldsymbol{i} \\omega ) \\\\ \nC_{2i}^*(0) &= c_{2i} = C_i(U) ( \\boldsymbol{i} \\omega )\n\\end{align}\nfor $i= 1,2, \\ldots, N-M$ and $m=1,2, \\ldots , M$. Similarly as before \n\\begin{align}\nA_2^*(z) &= a_2  +  \\frac{1}{2} \\sum_{i=1}^{N-M} \\sum_{j=1}^{N-M} \\frac{\\nu_i \\nu_j \\rho_{ij}}{k_ik_j} ( z - q_i C_i(z) - q_j C_j(z) \\\\ &+ q_iq_j \\frac{1 - \\e^{ - (k_i+k_j)z } }{k_i+k_j} ) \\\\\n&- 2 \\sum_{m=1}^M \\frac{\\alpha_m \\theta_m}{\\sigma_m^2} \\left( \\beta_{3m} z + \\log ( \\frac{1 - \\beta_{4m} \\e^{ \\beta_{1m} z } }{1 - \\beta_{4m}} )  \\right) \\\\\nB_{2m}^*(z) &= \\frac{2}{\\sigma_m^2} \\left( \\frac{ \\beta_{2m} \\beta_{4m} \\e^{\\beta_{1m} z} - \\beta_{3m} }{ \\beta_{4m} \\e^{\\beta_{1m} z} - 1 } \\right) \\\\\nC_{2i}^* (z) &= \\frac{1-q_i \\e^{-k_iz}}{k_i},\n\\end{align}\nwhere\n\\begin{align}\nq_i &= 1 - k_i c_{2i} \\\\\n\\beta_{1m} &= \\sqrt{\\alpha_m^2 + 2 \\sigma_m^2} \\\\\n\\beta_{2m} &= \\frac{- \\alpha_m + \\beta_{1m}}{2} \\\\\n\\beta_{3m} &= \\frac{- \\alpha_m - \\beta_{1m}}{2} \\\\\n\\beta_{4m} &= \\frac{ - \\alpha_m - \\beta_{1m} - b_{2m} \\sigma_m^2 }{ - \\alpha_m + \\beta_{1m} - b_{2m} \\sigma_m^2 }\n\\end{align}\nfor $i= 1,2, \\ldots, N-M$ and $m=1,2, \\ldots , M$. Again, it should be noted that $a_2, b_{2m}$ and $c_{2i}$ are actually functions of $\\omega$ and therefore $A_2^*, B_{2m}^*$ and $C_{2i}^*$ also depend on $\\omega$.\n\t\nSince the computation above involves numerical integration, it is computationally costly. However, \\textcite{carrmadan1999optionvaluation} showed that the Fast Fourier Transform (FFT) can be utilized in the computation, which can reduce the complexity significantly. Sadly, this was not attempted in this thesis. 
\n\nAs we explicitly assumed that $\\delta = 0$, we can add the shift to the model by the extension method introduced in Section \\ref{sec:dynamicextension}.\n\n\n", "meta": {"hexsha": "d169f0beed57caa79a0333671a4c0348025ce0c3", "size": 8755, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "fourier.tex", "max_stars_repo_name": "mrytty/gradu-public", "max_stars_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "fourier.tex", "max_issues_repo_name": "mrytty/gradu-public", "max_issues_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "fourier.tex", "max_forks_repo_name": "mrytty/gradu-public", "max_forks_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.9801324503, "max_line_length": 306, "alphanum_fraction": 0.6031981725, "num_tokens": 3732, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438951104066295, "lm_q2_score": 0.6584175072643413, "lm_q1q2_score": 0.5556353149864991}}
{"text": "\\section{Cellular Complexes with GraphBLAS }\\label{graphblas-implementation}\n%=================================================================\n\nWe have implemented in Julia~\\cite{BEKS14}---the novel language for scientific computing---our topological operations over cell complexes, using the package \\href{https://github.com/abhinavmehndiratta/GraphBLAS.jl}{\\texttt{SuiteSparseGraphBLAS.jl}}, which is a Julia wrapper~\\cite{Mehndiratta:2019} for \\texttt{SuiteSparse:GraphBLAS}, i.e., the GraphBLAS standard~\\cite{osti_1208646,DBLP:journals/corr/KepnerABBFGHKLM16,Buluc:7965104} provided within the \\emph{SuiteSparse} library~\\cite{Davis:2018} of sparse matrix software. \n\n\n\\subsection{Geometric / topological sparse matrices}\n%------------------------------------------------\n\nWithin a typical computational pipeline in geometric applications, we may distinguish at least three types of sparse matrices: (a) characteristic matrices of cells as subsets of vertices; (b) boundary representations of edges, faces and solid cells; (c) matrix representation of binary incidence/adjacency relations between cells of different dimension. \n\n\\paragraph{Characteristic matrices} provide the simplest representation of the independent elements (i.e., those that cannot be generated by linear combination of other elements) of a $p$-chain space $C_p$ ($0\\leq p \\leq d$) from a cellular $d$-complex ($2\\leq d \\leq 3$). They are built from arrays of arrays of vertex indices using the so-called ``coords'' method, i.e., starting from $(i,j,x)$ triples.\n\n\\paragraph{Boundary operators} give a simple mathematical representation of the so-called ``B-reps'', normally used for solid models, in our case extended to cells of every dimension. In particular, every column of the $[\\partial_p]$ matrix gives the signed representation of a basis $p$-cell (i.e., an independent $p$-chain) as a ($p$-1)-cycle (i.e., a ($p$-1)-chain without boundary). They are built by multiplication of two characteristic matrices, followed by suitable ``filtering'' of values.\n\n\\paragraph{Incidence relations} are generated by multiplication of the characteristic matrices of the two types of cells (of dimension $p$ and $q$, say) under consideration. The non-zero $(i,j)$ element of the product matrix provides the ``strength'' of the elementary incidence, i.e., the number of vertices shared between the $i$-th $p$-cell and the $j$-th $q$-cell. Their building is done by multiplication of two appropriate instances of (co)boundary matrices, as shown in Table~\\ref{tab:relations}b.
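To make this concrete, here is a toy sketch (ours, independent of the repository code) for a single triangle on vertices $\\{1,2,3\\}$, using plain \\texttt{SparseArrays}:\n\n{\\footnotesize\\begin{lstlisting}\nusing SparseArrays\n\n# Characteristic matrices: rows are cells, columns are vertices.\nM1 = sparse([1,1,2,2,3,3], [1,2,2,3,1,3], ones(Int,6), 3, 3)  # 3 edges\nM2 = sparse([1,1,1], [1,2,3], ones(Int,3), 1, 3)              # 1 face\n\nM1 * M2'   # incidence strength: every edge shares 2 vertices with the face\n(M1 * M2') .÷ sum(M1, dims=2)  # filtered: all three edges bound the face\n\\end{lstlisting}}\n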
\n\n\\subsection{Matrix computation of boundary chain}\n%------------------------------------------------\n\nThe mathematics used to compute the graded chain complex $C_\\bullet = (C_p, \\partial_p)$ starting from sparse binary characteristic matrices $M_p$, with $p$-cells indexing the rows and $0$-cells indexing the columns, is given below.\nThe boundary matrices $\\partial_p$ ($1\\leq p\\leq 3$) between non-oriented chain spaces are computed by \\emph{sparse matrix multiplication} of characteristic matrices, followed by \\emph{matrix filtering}, produced in Julia by broadcasting vectorized integer division, i.e., ``$\\texttt{.}\\!\\div$'', as follows:\n\n{\\footnotesize\\begin{lstlisting}\n$\\partial_1 \\texttt{ = } \\texttt{M}_0 * \\texttt{M}_1' \\texttt{ = } \\texttt{M}_1'$  \n$\\partial_2 \\texttt{ = } (\\texttt{M}_1 * \\texttt{M}_2')\\ \\texttt{.}\\!\\div\\ \\texttt{sum(}\\texttt{M}_1,\\texttt{dims=}2)$\n$\\partial_3 \\texttt{ = } (\\texttt{M}_2 * \\texttt{M}_3')\\ \\texttt{.}\\!\\div\\ \\texttt{sum(}\\texttt{M}_2,\\texttt{dims=}2)$\n\\end{lstlisting}}\n\n\n\\subsection{GraphBLAS computation of boundary chain}\n%------------------------------------------------\n\nOur current open-source implementation of cellular and chain complexes using sparse matrices and \\href{https://github.com/abhinavmehndiratta/GraphBLAS.jl}{\\texttt{SuiteSparseGraphBLAS.jl}} is maintained in \\href{https://github.com/gmgigi96/SparseMM}{\\texttt{https://github.com/gmgigi96/SparseMM}}, where it provides fast and easy-to-use matrix tools for geometric and topological computing, including the input of cellular complexes, the computation of (unsigned) boundary operators, and the answering of single and multiple queries about incidence or adjacency of cells.\n\n", "meta": {"hexsha": "b32fe8ec5fa4c2d63491454d0442852dd4b3a9c5", "size": 4240, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/implementation.tex", "max_stars_repo_name": "cvdlab/Chain-BLAS", "max_stars_repo_head_hexsha": "38a2413ccefd1bc47ae404215e3616d21b16a89e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/implementation.tex", "max_issues_repo_name": "cvdlab/Chain-BLAS", "max_issues_repo_head_hexsha": "38a2413ccefd1bc47ae404215e3616d21b16a89e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/implementation.tex", "max_forks_repo_name": "cvdlab/Chain-BLAS", "max_forks_repo_head_hexsha": "38a2413ccefd1bc47ae404215e3616d21b16a89e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 114.5945945946, "max_line_length": 563, "alphanum_fraction": 0.7297169811, "num_tokens": 1073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189134878876, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5556063126936195}}
{"text": "\\documentclass{article}\n\n%\\documentclass[aps,pra,notitlepage,amsmath,amssymb,letterpaper,12pt]{revtex4-1}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{titling}\n\\usepackage{amsthm}\n\\usepackage{graphicx}\n\\usepackage{epstopdf}\n\n\\epstopdfDeclareGraphicsRule{.gif}{png}{.png}{convert gif:#1 png:\\OutputFile}\n\\AppendGraphicsExtensions{.gif}\n\n%  Below define helpful commands to set up problem environments easily\n\\newenvironment{problem}[2][Problem]{\\begin{trivlist}\n\\item[\\hskip \\labelsep {\\bfseries #1}\\hskip \\labelsep {\\bfseries #2.}]}{\\end{trivlist}}\n\\newenvironment{solution}{\\begin{proof}[Solution]}{\\end{proof}}\n \n% --------------------------------------------------------------\n%                   Document Begins Here\n% --------------------------------------------------------------\n \n\\begin{document}\n \n\\title{Definition of the Derivative of a Function}\n\\author{Jianhua Li}\n\\date{\\today}\n\n\\maketitle\n\n\\begin{abstract}\nCreate a LaTeX file testlatex.tex (using the template in the info repository). In this file write an explanation of what the definition of the derivative $f'(x)$ of a function $f(x)$ means. Include both inline and numbered equations, as well as a proper title, abstract, and section headings. Find a suitable image to illustrate your definition online, and include it as a figure, with proper citation of the source.\n\\end{abstract}\n\n\\section{Definition}\nThe derivative of $f(x)$ is defined as the limit:\n\n\\begin{align}\n  f'(x_0) &= \\lim_{x\\to x_0} \\frac{f(x_0)-f(x)}{x_0-x}\\\\\n  f'(x) &= \\lim_{h\\to 0} \\frac{f(x+h)-f(x)}{h}\n\\end{align}\n% Use align environments for equations. The \\\\ is a newline character. The & is the alignment character.\n% Using align* or \\nonumber on each line removes equation numbers
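For example, for $f(x) = x^{2}$ the definition gives\n\\begin{align}\n  f'(x) = \\lim_{h\\to 0} \\frac{(x+h)^{2}-x^{2}}{h} = \\lim_{h\\to 0} \\frac{2xh+h^{2}}{h} = \\lim_{h\\to 0} (2x+h) = 2x.\n\\end{align}\n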
\nExample Illustration:\n\n\\begin{figure}[h!] % h forces the figure to be placed here, in the text\n  \\includegraphics{250px-Lim-secant.png}  % if pdflatex is used, jpg, pdf, and png are permitted\n  \\caption{A secant approaches the tangent as $h$ approaches $0$. Source: https://en.wikipedia.org/wiki/Derivative}\n  \\label{fig:tangent}\n\\end{figure}\n\nThis text should be below the figure unless \\LaTeX  decides that a different layout works better.\n \n% Repeat as needed\n \n\\end{document}\n\n\n", "meta": {"hexsha": "6f979e521b64a1bf5dec4a606b0929f4323dc4d1", "size": 2772, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "testlatex.tex", "max_stars_repo_name": "chapman-cs510-2017f/cw-01-jetli", "max_stars_repo_head_hexsha": "a13fb236e56a62f180f63c4b43d63620ae301a7d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "testlatex.tex", "max_issues_repo_name": "chapman-cs510-2017f/cw-01-jetli", "max_issues_repo_head_hexsha": "a13fb236e56a62f180f63c4b43d63620ae301a7d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "testlatex.tex", "max_forks_repo_name": "chapman-cs510-2017f/cw-01-jetli", "max_forks_repo_head_hexsha": "a13fb236e56a62f180f63c4b43d63620ae301a7d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.2222222222, "max_line_length": 416, "alphanum_fraction": 0.6699134199, "num_tokens": 885, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757645879592642, "lm_q2_score": 0.8221891392358015, "lm_q1q2_score": 0.5556063049002635}}
{"text": "\\chapter{Schr\\\"{o}dinger Wave Equation}\\label{c4}\nBohr's model explained the spectrum of the Hydrogen atom quite well. But not\ntoo well. When a Hydrogen atom is subjected to an electric field, the spectral\nlines split. This is called the `Stark effect' after the name of its discoverer.\nSimilarly, `Zeeman effect' is the splitting of spectral lines when the atom is\nsubjected to a magnetic field. By splitting we mean that in the place of a \nsingle line in the absence of a field we now see more than one. A spectral line\nindicates the energy level of an electron. A splitting of a spectral line tells\nus that electrons which used to have the same energies in the absence of a \nfield have different energies when the field is turned on. The Bohr model \ncould not explain why this happens. Arnold Sommerfeld tried to extend the Bohr\nmodel by allowing the orbits to be elliptical and by using certain concepts in\nadvanced classical mechanics. However, it was clear that the theory was getting\n\\emph{ad hoc} and that a better explanation was needed.\n\nWerner Heisenberg, a German physicist, first came up with a new theory to\nexplain quantum phenomena \\cite{heisenberg1925quantum}. His paper is quite\nhard to follow. People of Steven Weinberg's \\cite{aitchison2004understanding}\nstature have admitted that while they understand what Heisenberg wrote they \ncannot fathom what inspired him to write it. He also found out that the \nequations of quantum mechanics are best written in terms of matrices. As a \nresult, his formulation of quantum mechanics is called `matrix mechanics'.\n\nA year later, the Austrian physicist Erwin Schr\\\"{o}dinger proposed an \nalternative formalism to solve atomic problems. His `equation of motion' was \nconceived using certain optical analogies and is called the wave equation. A\nfunction satisfying the equation is called the wave function and it describes\nthe matter waves we studied in chapter \\ref{c2}. Schr\\\"{o}dinger's paper\n\\cite{schrodinger1926undulatory} is remarkably lucid and it offers an insight\ninto how he came up with his equation. The Schr\\\"{o}dinger wave equation\nis a linear partial differential equation, and the physicists of \nSchr\\\"{o}dinger's time had expertise and comfort in dealing with such equations. As a \nresult, a first introduction to quantum mechanics almost always uses the\nSchr\\\"{o}dinger equation. Schr\\\"{o}dinger showed \\cite{schrodinger1926relation}\nthat his approach is equivalent to the one proposed by Heisenberg and developed\nby Born and Jordan. The equivalence was put on a firm mathematical footing by\nJohn von Neumann \\cite{von2018mathematical}.\n\nIn this chapter, we will state Schr\\\"{o}dinger's equation, interpret its\nsolution and show how to use it in a few very simple situations. Schr\\\"{o}dinger\nderived \\cite{schrodinger1926undulatory} his equation using variational \nprinciples. However, we shall take it as a starting point of our discussion.\nJust as we accepted Newton's laws as `correct' because they explained a few\nnatural phenomena, we will also accept Schr\\\"{o}dinger's equation because it\nexplains a great many phenomena in the atomic world.\n\n\\section{Schr\\\"{o}dinger equation}\\label{c4s1}\nBefore stating and understanding Schr\\\"{o}dinger's equation that describes\nthe unfamiliar matter waves, let us start in the more familiar world\nof sound waves. We know that sound waves travel through the air as a succession\nof compressions and rarefactions. 
We can detect and measure the compressions and\nrarefactions using a pressure gauge. If $p(x, t)$ is the air pressure at a point\n$x$ and at time $t$, then the equation of sound waves is\n\\begin{equation}\\label{c4s1e1}\n\\frac{\\partial^2 p}{\\partial x^2} = \\frac{1}{c_s^2}\n\\frac{\\partial^2 p}{\\partial t^2}\n\\end{equation}\nwhere $c_s$ is the speed of sound. It is easy to check that any function of \nthe form $p(x \\pm c_st)$ is a solution of \\eqref{c4s1e1}. In particular,\n\\begin{equation}\\label{c4s1e2}\np(x, t) = \\cos(x - c_st)\n\\end{equation}\nis a solution. A solution of a wave equation tells us how pressure varies with\nposition and time. \n\nEquation \\eqref{c4s1e1} describes waves in one dimension. If the pressure varies\nin all three dimensions then the equation becomes\n\\begin{equation}\\label{c4s1e3}\n\\frac{\\partial^2 p}{\\partial x^2} + \\frac{\\partial^2 p}{\\partial y^2} + \n\\frac{\\partial^2 p}{\\partial z^2} = \n\\frac{1}{c_s^2} \\frac{\\partial^2 p}{\\partial t^2}\n\\end{equation}\nWe can check that\n\\begin{equation}\\label{c4s1e4}\np(x, y, z, t) = \\cos(x + y + z - \\sqrt{3} c_st)\n\\end{equation}\nis a solution of \\eqref{c4s1e3}.\n\n\\subsection{Problem set 1}\n\\begin{enumerate}\n\\item Verify that \\eqref{c4s1e2} is a solution of \\eqref{c4s1e1}.\n\\item Verify that \\eqref{c4s1e4} is a solution of \\eqref{c4s1e3}.\n\\item If $p_1(x, t)$ and $p_2(x, t)$ are two solutions of \\eqref{c4s1e1} then\nshow that $\\alpha_1 p_1 + \\alpha_2 p_2$, where $\\alpha_1, \\alpha_2$ are real\nnumbers, is also a solution of \\eqref{c4s1e1}.\nThis fact is called the principle of linear superposition. The expression \n$\\alpha_1 p_1 + \\alpha_2 p_2$ is called a \\emph{linear combination} of $p_1$\nand $p_2$.\n\\end{enumerate}\n\nA sound wave is described by the pressure fluctuations it causes. de Broglie\nwaves are described by the variations of a wave function $\\Psi$. It could be a\ncomplex-valued function and therefore it does not have a physical interpretation\nby itself. However, its squared modulus $|\\Psi|^2 = \\Psi\\Psi^\\ast$ is the \nprobability density of finding a particle at a point $(x, y, z)$ at time $t$.\nIn particular, $|\\Psi|^2 dV$ is the probability of finding a particle in a \nsmall volume element $dV$ around the point $(x,y,z)$ at time $t$. Just as the\nair pressure $p$ satisfies the equation \\eqref{c4s1e3}, the wave function \n$\\Psi$ satisfies \n\\begin{equation}\\label{c4s1e5}\n-\\frac{\\hslash^2}{2m}\\left(\\frac{\\partial^2\\Psi}{\\partial x^2} + \n\\frac{\\partial^2\\Psi}{\\partial y^2} + \\frac{\\partial^2\\Psi}{\\partial z^2}\n\\right) + V(x,y,z,t)\\Psi = i\\hslash\\frac{\\partial\\Psi}{\\partial t}.\n\\end{equation}\nThis is called the Schr\\\"{o}dinger equation. In this equation,\n\\begin{enumerate}\n\\item $\\hslash = h/(2\\pi)$; we first introduced this quantity when we wrote\n\\eqref{c3s3e2}.\n\\item $m$ is the mass of the particle.\n\\item $V(x, y, z, t)$ is the potential function. It describes the force acting\non the particle. We will say more about it a little later.\n\\item $i$ is the imaginary number.\n\\end{enumerate}\nA few remarks are in order.\n\\begin{enumerate}\n\\item A wave equation always has a second order partial derivative in time.\nEquation \\eqref{c4s1e5} has just the first order partial derivative in time.\nTherefore, strictly speaking, \\eqref{c4s1e5} is not a wave equation although\nthe name `wave function' persists for its solution $\\Psi$. 
It is closer to the\nheat equation than it is to a wave equation.\n\\item The coefficient on the right hand side is an imaginary number. Therefore,\nit is not surprising that its solution $\\Psi$ is complex valued.\n\\item The only term in \\eqref{c4s1e5} that is specific to a problem is the\npotential function $V$. The other terms never change.\n\\item We remarked that $|\\Psi|^2$ has an interpretation of a probability\ndensity function. Therefore, if we want to find the probability of finding\na particle in a region of volume $V$, then it is\n\\begin{equation}\\label{c4s1e6}\nP(V) = \\int_V |\\Psi|^2 dV.\n\\end{equation}\nThis statement is accurate only when \n\\begin{equation}\\label{c4s1e7}\n\\int_{-\\infty}^\\infty |\\Psi|^2dV = 1.\n\\end{equation}\nSuch a function is said to be `normalised'. If a wave function is not normalised,\nthat is, if\n\\begin{equation}\\label{c4s1e8}\n\\int_{-\\infty}^\\infty |\\Psi|^2dV = N,\n\\end{equation}\nthen we can always consider the function\n\\begin{equation}\\label{c4s1e9}\n\\Psi_1 = \\frac{1}{\\sqrt{N}}\\Psi.\n\\end{equation}\nWe will show in the next problem set that $\\Psi_1$ is also a solution of \n\\eqref{c4s1e5}. $\\Psi_1$ is called the \\emph{normalised} wave function and\nthe number $1/\\sqrt{N}$ is called the \\emph{normalisation constant}.\n\\end{enumerate}\n\n\\subsection{Problem set 2}\n\\begin{enumerate}\n\\item Show that if $\\Psi_1$ and $\\Psi_2$ are two solutions of \\eqref{c4s1e5}, \nthen $\\alpha_1\\Psi_1 + \\alpha_2\\Psi_2$, where $\\alpha_1$ and $\\alpha_2$ are\ncomplex numbers, is also a solution. Therefore, show that \\eqref{c4s1e9} is\nalso a solution of \\eqref{c4s1e5}.\n\\end{enumerate}\n\n\\section{Some solutions of the Schr\\\"{o}dinger equation}\\label{c4s2}\nPartial differential equations are, in general, much more difficult to solve \nthan ordinary differential equations. The Schr\\\"{o}dinger equation is no exception.\nWe will consider a few situations where an explicit, closed-form solution is\navailable. In the previous section we remarked that the physics of a problem\nis in the choice of the potential function $V$. The situations we alluded to \nare really the different choices of $V$.\n\n\\subsection{The free particle}\nA free particle is one not subject to any forces. Therefore, a convenient\nchoice of $V$ is to set it to zero. The Schr\\\"{o}dinger equation now becomes\n\\begin{equation}\\label{c4s2e1}\n-\\frac{\\hslash^2}{2m}\\left(\\frac{\\partial^2\\Psi}{\\partial x^2} + \n\\frac{\\partial^2\\Psi}{\\partial y^2} + \\frac{\\partial^2\\Psi}{\\partial z^2}\n\\right) = i\\hslash\\frac{\\partial\\Psi}{\\partial t}.\n\\end{equation}\nSince the particle has no force acting on it, its energy does not change. If\n$E$ is its energy then we can write\n\\begin{equation}\\label{c4s2e2}\n\\Psi(\\vec{r}, t) = \\psi(\\vec{r})e^{-iEt/\\hslash}.\n\\end{equation}\nSubstituting it in \\eqref{c4s2e1} we get\n\\begin{equation}\\label{c4s2e3}\n-\\frac{\\hslash^2}{2m}e^{-iEt/\\hslash}\\left(\\frac{\\partial^2\\psi}{\\partial x^2} \n+ \\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2}\n\\right) = E\\psi(\\vec{r})e^{-iEt/\\hslash}.\n\\end{equation}\nCancelling the factor $e^{-iEt/\\hslash}$ and rearranging a bit, we get\n\\begin{equation}\\label{c4s2e4}\n\\frac{\\partial^2\\psi}{\\partial x^2} \n+ \\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2} =\n-\\frac{2mE}{\\hslash^2}\\psi.\n\\end{equation}\nSince we chose $V = 0$, the particle has no potential energy. 
Its energy is \nentirely kinetic. If $E$ is the kinetic energy, then we know that the quantity\n$2mE$ is the square of momentum. The coefficient on the right hand side is\n\\begin{equation}\\label{c4s2e5}\n-\\frac{2mE}{\\hslash^2} = -\\frac{p^2}{\\hslash^2} = -k^2,\n\\end{equation}\nwhere we have used the de Broglie relation,\n\\begin{equation}\\label{c4s2e6}\np = \\frac{h}{\\lambda} = \\hslash k.\n\\end{equation}\nThus, the Schr\\\"{o}dinger equation simplifies to\n\\begin{equation}\\label{c4s2e7}\n\\frac{\\partial^2\\psi}{\\partial x^2} \n+ \\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2} =\n-k^2\\psi.\n\\end{equation}\nIf we define the vector\n\\begin{equation}\\label{c4s2e8}\n\\vec{k} = k_x\\hat{i} + k_y\\hat{j} + k_z\\hat{k},\n\\end{equation}\nand let\n\\begin{equation}\\label{c4s2e9}\n\\psi(\\vec{r}) = e^{i\\vec{k}\\cdot\\vec{r}} = e^{i(k_xx+k_yy+k_zz)},\n\\end{equation}\nthen we can readily verify that $\\psi$ defined in \\eqref{c4s2e9} satisfies\n\\eqref{c4s2e7}, provided $k_x^2 + k_y^2 + k_z^2 = k^2$. Thus, the complete solution of equation \\eqref{c4s2e1} is\n\\begin{equation}\\label{c4s2e10}\n\\Psi(\\vec{r}, t) = e^{i(\\vec{k}\\cdot\\vec{r} - \\omega t)},\n\\end{equation}\nwhere we have used the relation\n\\begin{equation}\\label{c4s2e11}\nE = \\hslash\\omega.\n\\end{equation}\nThe solution \\eqref{c4s2e10} is called a plane wave solution. It can have \nany value for its energy $E$.\n\n\\subsection{Time-independent potentials}\nWhen $V$ is independent of time we can write\n\\begin{equation}\\label{c4s2e12}\n\\Psi(\\vec{r}, t) = \\psi(\\vec{r})\\tau(t),\n\\end{equation}\nwhere $\\tau$ is a function of time alone. Equation \\eqref{c4s1e5} becomes\n\\begin{equation}\\label{c4s2e13}\n\\tau(t)\\left[-\\frac{\\hslash^2}{2m}\\left(\\frac{\\partial^2\\psi}{\\partial x^2} + \n\\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2}\n\\right) + V(x,y,z)\\psi\\right] = i\\psi(\\vec{r})\\hslash\\frac{d\\tau}{dt}\n\\end{equation}\nDivide both sides by $\\Psi = \\psi\\tau$ to get\n\\begin{equation}\\label{c4s2e14}\n\\frac{1}{\\psi(\\vec{r})}\\left[-\\frac{\\hslash^2}{2m}\\left(\n\\frac{\\partial^2\\psi}{\\partial x^2} + \n\\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2}\n\\right) + V(x,y,z)\\psi\\right] = \\frac{i\\hslash}{\\tau(t)}\\frac{d\\tau}{dt}\n\\end{equation}\nThe left hand side of this equation depends on $\\vec{r}$ alone and the \nright hand side on $t$ alone. Therefore, each side must be a constant, say $E$.\nThus, we get two equations\n\\begin{equation}\\label{c4s2e15}\n-\\frac{\\hslash^2}{2m}\\left(\\frac{\\partial^2\\psi}{\\partial x^2} + \n\\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2}\n\\right) + V(x,y,z)\\psi = E\\psi\n\\end{equation}\nand\n\\begin{equation}\\label{c4s2e16}\n\\frac{1}{\\tau}\\frac{d\\tau}{dt} = -i\\frac{E}{\\hslash}.\n\\end{equation}\nSince $E$ is a constant, equation \\eqref{c4s2e16} is particularly easy to\nsolve. We see that\n\\begin{equation}\\label{c4s2e17}\n\\tau(t) = \\alpha e^{-iEt/\\hslash},\n\\end{equation}\nwhere $\\alpha$ is a constant of integration that can be chosen while normalising\nthe wavefunction.\n\nEquation \\eqref{c4s2e15} is called the time-independent Schr\\\"{o}dinger equation. \nIn this course, we will deal only with the situation when $V$ is independent of \n$t$ and therefore we will focus on solving \\eqref{c4s2e15}.
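One consequence is worth recording explicitly: for a product solution of the form \\eqref{c4s2e12} with \\eqref{c4s2e17}, the probability density does not change in time, since\n\\[\n|\\Psi(\\vec{r}, t)|^2 = |\\psi(\\vec{r})|^2 \\, |\\alpha|^2 \\left| e^{-iEt/\\hslash} \\right|^2 = |\\alpha|^2 |\\psi(\\vec{r})|^2 .\n\\]\nFor this reason such solutions are called stationary states.\n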
\n\\subsection{Particle in an infinite potential well}\nThe free particle is rather uninteresting. It does not have any features that\nare peculiar to a quantum system. We now consider a particle in a box. The box \nprevents the particle from escaping and therefore one can consider it to be \nrepresented by an infinitely high potential wall. That is, it takes an infinite\nenergy for the particle to escape it. Let us start with a one-dimensional case\nwhere the potential is defined by\n\\begin{equation}\\label{c4s2e18}\nV(x) = \\begin{cases}\n0 \\text{ if } -\\frac{L}{2} \\le x \\le \\frac{L}{2} \\\\\n\\infty \\text{ otherwise.}\n\\end{cases}\n\\end{equation}\nWithin the confines of the box the particle is free; outside, it cannot get at\nall. Therefore, we must solve the time-independent Schr\\\"{o}dinger equation only\nin the region\n\\[\n-\\frac{L}{2} \\le x \\le \\frac{L}{2}.\n\\]\nSince we are considering a one dimensional box, \\eqref{c4s2e15} becomes\n\\begin{equation}\\label{c4s2e19}\n-\\frac{\\hslash^2}{2m}\\frac{d^2\\psi(x)}{dx^2} = E\\psi(x).\n\\end{equation}\nLet\n\\begin{equation}\\label{c4s2e20}\nk^2 = \\frac{2mE}{\\hslash^2}.\n\\end{equation}\n$k$ is called the wave number and it is related to the momentum by the equation\n\\begin{equation}\\label{c4s2e21}\np = \\hslash k.\n\\end{equation}\nIn terms of $k$, equation \\eqref{c4s2e19} becomes\n\\begin{equation}\\label{c4s2e22}\n\\frac{d^2\\psi(x)}{dx^2} + k^2\\psi(x) = 0.\n\\end{equation}\nOne can check that the general solution of this ordinary differential equation \nis\n\\begin{equation}\\label{c4s2e23}\n\\psi(x) = \\alpha e^{-ikx} + \\beta e^{ikx},\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ are constants of integration. The infinite potential\nbarrier of the box prevents the particle from escaping. Therefore, at the walls,\nthe function $\\psi$ must be zero. That is\n\\begin{eqnarray}\n\\psi\\left(-\\frac{L}{2}\\right) &=& 0 \\label{c4s2e24} \\\\\n\\psi\\left(\\frac{L}{2}\\right) &=& 0 \\label{c4s2e25}\n\\end{eqnarray}\nFrom \\eqref{c4s2e23} and \\eqref{c4s2e24} we have\n\\[\n\\alpha e^{ikL/2} + \\beta e^{-ikL/2} = 0\n\\]\nor that\n\\begin{equation}\\label{c4s2e26}\n\\alpha = -\\beta e^{-ikL}\n\\end{equation}\nand hence equation \\eqref{c4s2e23} becomes\n\\begin{equation}\\label{c4s2e27}\n\\psi(x) = -\\beta e^{-ikL}e^{-ikx} + \\beta e^{ikx}.\n\\end{equation}\nFrom \\eqref{c4s2e25} and \\eqref{c4s2e27},\n\\[\n\\psi\\left(\\frac{L}{2}\\right) = -\\beta e^{-3ikL/2} + \\beta e^{ikL/2} = \n-\\beta e^{-ikL/2}(e^{-ikL} - e^{ikL}) = 0\n\\]\nor\n\\begin{equation}\\label{c4s2e28}\n2i\\beta e^{-ikL/2}\\sin(kL) = 0\n\\end{equation}\nwhich, for $\\beta \\neq 0$, can be true only if\n\\begin{equation}\\label{c4s2e29}\nkL = n\\pi, n = 0, \\pm 1, \\pm 2, \\ldots.\n\\end{equation}\nIf $n = 0$ then $k = 0$ and from \\eqref{c4s2e23} $\\psi(x) = \\alpha + \\beta$. \nSuch a solution will not obey the boundary conditions \\eqref{c4s2e24} and\n\\eqref{c4s2e25} unless $\\alpha = \\beta = 0$. But this is a trivial solution\nindicating that there is no particle. Therefore, the only physically interesting\nvalues of $k$ are\n\\begin{equation}\\label{c4s2e30}\nk_n = \\frac{n\\pi}{L}, n = \\pm 1, \\pm 2, \\ldots.\n\\end{equation}\nAs a result of this condition, we see that the momentum of the particle is\nrestricted to\n\\begin{equation}\\label{c4s2e31}\np_n = \\frac{n\\pi\\hslash}{L}, n = \\pm 1, \\pm 2, \\ldots.\n\\end{equation}\nThus, the particle's momentum is quantised. 
So is energy, as seen from\n\\eqref{c4s2e20},\n\\begin{equation}\\label{c4s2e32}\nE_n = \\frac{n^2\\pi^2\\hslash^2}{2mL^2}, n = \\pm 1, \\pm 2, \\ldots.\n\\end{equation}\nThe different values of $n$ label the energy levels of the particle. If the \nparticle has to be pushed from a lower energy level to a higher one, it must be\nsupplied with an energy equal to the difference between the two levels. On the\nother hand, when a particle makes a transition from a higher energy level to a\nlower one, it emits a photon of frequency $\\nu$ such that $\\Delta E = h\\nu$.\n\nEquations \\eqref{c4s2e31} and \\eqref{c4s2e32} indicate that both $p_n$ and\n$E_n$ increase in magnitude with $|n|$. The lowest energy of the particle is\n\\begin{equation}\\label{c4s2e33}\nE_1 = \\frac{\\pi^2\\hslash^2}{2mL^2}.\n\\end{equation}\nThe lowest energy of a classical particle is zero. It remains at rest in the \nbox. However, a quantum particle cannot be at rest because that would violate\nthe uncertainty principle. \n\nWe still have not written the complete solution of the Schr\\\"{o}dinger equation\n\\eqref{c4s2e22}. From \\eqref{c4s2e27} and \\eqref{c4s2e30}, and using\n$e^{-in\\pi} = (-1)^n$, we have\n\\begin{equation}\\label{c4s2e34}\n\\psi_n(x) = \\beta(e^{in\\pi x/L} - (-1)^n e^{-in\\pi x/L}).\n\\end{equation} \nWe can write it in a simpler form\n\\begin{equation}\\label{c4s2e35}\n\\psi_n(x) = \\begin{cases}\n2\\beta\\cos\\left(\\frac{n\\pi x}{L}\\right) & \\text{ if } n = \\pm 1, \\pm 3, \\ldots\\\\\n2i\\beta\\sin\\left(\\frac{n\\pi x}{L}\\right) & \\text{ if } n = \\pm 2, \\pm 4, \\ldots.\n\\end{cases}\n\\end{equation}\nThe constant $\\beta$ can be found using the normalisation condition\n\\begin{equation}\\label{c4s2e36}\n\\int_{-L/2}^{L/2} |\\psi_n(x)|^2dx = 1.\n\\end{equation}\nThe integration is quite straightforward and we get\n\\begin{equation}\\label{c4s2e37}\n\\beta = \\frac{1}{\\sqrt{2L}}.\n\\end{equation}\nThus, the complete solution is\n\\begin{equation}\\label{c4s2e38}\n\\psi_n(x) = \\begin{cases}\n\\sqrt{\\frac{2}{L}}\\cos\\left(\\frac{n\\pi x}{L}\\right) & \n   \\text{ if } n = \\pm 1, \\pm 3, \\ldots \\\\\n\\sqrt{\\frac{2}{L}}i\\sin\\left(\\frac{n\\pi x}{L}\\right) & \n   \\text{ if } n = \\pm 2, \\pm 4, \\ldots.\n\\end{cases}\n\\end{equation}\n\n\\subsection{Problem set 3}\n\\begin{enumerate}\n\\item Energy is independent of the sign of $n$, but momentum is not. What is \nthe physical significance of this observation?\n\\item Choose any $n$, say $n = 4$, for which\n\\[\n\\psi_4(x) = \\sqrt{\\frac{2}{L}}\\,i\\sin\\left(\\frac{4\\pi x}{L}\\right).\n\\]\nAt what $x$ is $\\psi_4(x) = 0$? (A worked check follows this problem set.)\nPoints at which $\\psi$ vanishes are called \nnodes.  Thus, there are some points at which the particle just cannot be. This \nis another stark difference between the classical and the quantum particles.\n\\item The normalisation condition is defined by \\eqref{c4s1e7}. Why did we \nchange the limits of integration from $\\pm\\infty$ to $\\pm L/2$?\n\\end{enumerate}\n
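\nHere is the worked check promised in the second problem. Apart from an overall\nconstant, $\\psi_4(x) \\propto \\sin(4\\pi x/L)$, which vanishes whenever\n$4\\pi x/L = m\\pi$ for an integer $m$, that is, at $x = mL/4$. Inside the box the\nnodes are\n\\[\nx = -\\frac{L}{4}, \\quad 0, \\quad \\frac{L}{4},\n\\]\nbesides the walls at $x = \\pm L/2$. In general $\\psi_n$ has $|n| - 1$ nodes\nstrictly inside the box.\n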
\n\\section{Operators and eigenvalues}\\label{c4s3}\nThe wavefunction $\\Psi$ has the entire information about the quantum particle.\nIf we know $\\Psi$, we can find the particle's position, momentum, angular \nmomentum and energy, to a certain extent. We have learnt that \n\\begin{itemize}\n\\item At an atomic scale, particles have a wavelike character and therefore\ndo not have a precise position or momentum.\n\\item Therefore, they cannot have precise angular momentum.\n\\item Under some situations, they do have definite values of energy.\n\\end{itemize}\nHow do we extract this information from the wavefunction $\\Psi$? Dynamical \nvariables are represented by operators in quantum mechanics. An operator is\nsomething that maps one function to another. We shall denote operators with\na hat on top of a letter. From now on, we shall not use the hat notation for\nunit vectors. Some examples of operators in\nquantum mechanics are\n\\begin{enumerate}\n\\item The position operator is just $\\hat{r}$. Its action on the wavefunction is\n\\begin{equation}\\label{c4s3e1}\n\\hat{r}(\\Psi) = \\vec{r}\\Psi.\n\\end{equation}\n\\item The momentum operator $\\hat{p}$ is\n\\begin{equation}\\label{c4s3e2}\n\\hat{p}(\\Psi) = (-i\\hslash)\\left(\\vec{i}\\frac{\\partial}{\\partial x} + \n\\vec{j}\\frac{\\partial}{\\partial y} + \\vec{k}\\frac{\\partial}{\\partial z}\n\\right)\\Psi\n= -i\\hslash\\nabla\\Psi.\n\\end{equation}\n\\item The energy operator $\\hat{E}$ is\n\\begin{equation}\\label{c4s3e3}\n\\hat{E}(\\Psi) = i\\hslash\\frac{\\partial\\Psi}{\\partial t}.\n\\end{equation}\n\\end{enumerate}\nWhen the potential energy is a function of $\\vec{r}$ alone then the potential\nenergy operator $\\hat{V}(\\vec{r})$ is\n\\begin{equation}\\label{c4s3e4}\n\\hat{V}(\\vec{r})(\\Psi) = V(\\vec{r})\\Psi.\n\\end{equation}\nThe relation between kinetic energy and momentum is $T = p^2/(2m)$. Therefore,\nwe can infer that the relationship between the operators is\n\\begin{equation}\\label{c4s3e5}\n\\hat{T}(\\Psi) = \\frac{\\hat{p}^2}{2m}(\\Psi).\n\\end{equation}\nUsing \\eqref{c4s3e2} we can easily infer that\n\\begin{equation}\\label{c4s3e6}\n\\hat{T}(\\Psi) = -\\frac{\\hslash^2}{2m}\\left(\\frac{\\partial^2}{\\partial x^2} \n+ \\frac{\\partial^2}{\\partial y^2} + \\frac{\\partial^2}{\\partial z^2}\\right)\n(\\Psi).\n\\end{equation}\nThe fact that total energy is the sum of kinetic and potential energies can be\nexpressed in terms of operators as\n\\begin{equation}\\label{c4s3e7}\n\\hat{E}(\\Psi) = \\hat{T}(\\Psi) + \\hat{V}(\\Psi).\n\\end{equation}\nUsing equations \\eqref{c4s3e3}, \\eqref{c4s3e4} and \\eqref{c4s3e6} we get\n\\begin{equation}\\label{c4s3e8}\ni\\hslash\\frac{\\partial\\Psi}{\\partial t} = -\\frac{\\hslash^2}{2m}\n\\left(\\frac{\\partial^2}{\\partial x^2} + \\frac{\\partial^2}{\\partial y^2} +\n \\frac{\\partial^2}{\\partial z^2}\\right)\\Psi + V(\\vec{r})\\Psi.\n\\end{equation} \nThis is the Schr\\\"{o}dinger equation. It is just an expression of the fact that\nthe total energy is the sum of kinetic and potential energies.\n\nIf $\\hat{O}$ is an operator then an equation of the form\n\\begin{equation}\\label{c4s3e9}\n\\hat{O}\\Psi = \\lambda\\Psi,\n\\end{equation}\nwhere $\\lambda$ is a number, is called an `eigenvalue' equation. The number \n$\\lambda$ is an `eigenvalue' of the operator $\\hat{O}$ and the function $\\Psi$\nis its `eigenfunction'. An example of an eigenvalue equation is\n\\begin{equation}\\label{c4s3e10}\n-i\\hslash \\frac{d}{dx} e^{ikx} = \\hslash k e^{ikx}.\n\\end{equation}\n
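\nThat \\eqref{c4s3e10} really is of the form \\eqref{c4s3e9} follows by carrying\nout the differentiation:\n\\[\n-i\\hslash\\frac{d}{dx}e^{ikx} = -i\\hslash(ik)e^{ikx} = \\hslash k e^{ikx}.\n\\]\nThus the plane wave $e^{ikx}$ is an eigenfunction of the $x$ component of the\nmomentum operator with eigenvalue $\\hslash k$, in agreement with the de Broglie\nrelation $p = \\hslash k$.\n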
\nThe total energy operator is called the Hamiltonian and is denoted by $\\hat{H}$.\nIt is named after the Irish physicist William Rowan Hamilton who gave an \nalternative formulation of classical mechanics. Thus,\n\\begin{equation}\\label{c4s3e11}\n\\hat{H} = \\hat{T} + \\hat{V} = -\\frac{\\hslash^2}{2m}\\nabla^2 + V,\n\\end{equation}\nwhere the symbol $\\nabla^2$ stands for the Laplacian and is defined as\n\\begin{equation}\\label{c4s3e12}\n\\nabla^2 = \\frac{\\partial^2}{\\partial x^2} + \n\\frac{\\partial^2}{\\partial y^2} +  \\frac{\\partial^2}{\\partial z^2}.\n\\end{equation}\n\n\\subsection{Problem set 4}\n\\begin{enumerate}\n\\item Let $\\hat{A}$ and $\\hat{B}$ commute. That is, let $\\hat{A}\\hat{B} =\n\\hat{B}\\hat{A}$. Show that if $f$ is an eigenfunction of $\\hat{A}$ with a\nnon-degenerate eigenvalue then it is also an\neigenfunction of $\\hat{B}$.\n\\item Eigenvalues of an operator are the only possible outcomes of an experiment\nmeasuring the dynamical variable that the operator represents. From this fact \nand the previous problem conclude that if two operators commute then they can\nbe measured simultaneously.\n\\item Find $[\\hat{x}, \\hat{p}_x]$. Can they be measured simultaneously?\n\\end{enumerate}\n\n\\section{Postulates of quantum mechanics}\\label{c4s4}\nNon-relativistic quantum mechanics of a single particle is based on the\nfollowing postulates.\n\\begin{enumerate}\n\\item The state of a quantum mechanical system of one particle is given by a\nwavefunction $\\Psi(\\vec{r}, t)$.\n\\item A dynamical variable of classical mechanics is represented by a \n(Hermitian) operator in quantum mechanics.\n\\item The only possible values obtained in a measurement of a dynamical\nvariable are the eigenvalues of the corresponding operator.\n\\item The state of a quantum mechanical system can be expressed as a linear\ncombination of all the eigenfunctions of an operator. If $\\hat{O}$ is an \noperator with eigenvalue equations\n\\begin{equation}\\label{c4s4e1}\n\\hat{O}\\Psi_i = \\lambda_i\\Psi_i, i = 1, \\ldots, n,\n\\end{equation}\nand if $\\Phi = c_1\\Psi_1 + \\cdots + c_n\\Psi_n$ is an arbitrary normalised state\nthen the probability of an experiment giving an eigenvalue $\\lambda_i$ is\n\\begin{equation}\\label{c4s4e2}\nP(\\lambda_i) = |c_i|^2.\n\\end{equation}\n\\item The wavefunction evolves according to the Schr\\\"{o}dinger equation.\n\\end{enumerate}\n\nNote that we have chosen to restrict our attention to systems with just one\nparticle. For some time to come, we will restrict ourselves only to these \nsystems.\n\n\\subsection{Problem set 5}\n\\begin{enumerate}\n\\item Many dynamical variables have a classical analogue. The operator for such\nvariables is found by expressing these variables in terms of classical position\nand momentum and then replacing them by their operator representations. Using \nthis technique find the operators for the three coordinates of angular momentum.\nRecall that the angular momentum is defined as $\\vec{L}=\\vec{r}\\times\\vec{p}$.\n\\item Using the expressions for $\\hat{L}_x, \\hat{L}_y$ and $\\hat{L}_z$ obtained\nin the previous problem, show that\n\\begin{eqnarray}\n{[}\\hat{L}_x, \\hat{L}_y] &=& i\\hslash \\hat{L}_z \\label{c4s4e3} \\\\\n{[}\\hat{L}_y, \\hat{L}_z] &=& i\\hslash \\hat{L}_x \\label{c4s4e4} \\\\\n{[}\\hat{L}_z, \\hat{L}_x] &=& i\\hslash \\hat{L}_y \\label{c4s4e5}\n\\end{eqnarray}\n\\item The square magnitude of the angular momentum operator is represented by \nthe operator $\\hat{L}^2 = \\hat{L}_x^2 + \\hat{L}_y^2 + \\hat{L}_z^2$. 
Show that\n\\begin{equation}\\label{c4s4e6}\n[\\hat{L}^2, \\hat{L}_z] = 0.\n\\end{equation}\n\\end{enumerate}\n", "meta": {"hexsha": "46938026d48ec35a243cb8b1e76c287fdf31c361", "size": 25902, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "qm/modern-physics/notes/c4.tex", "max_stars_repo_name": "drameyjoshi/physics", "max_stars_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "qm/modern-physics/notes/c4.tex", "max_issues_repo_name": "drameyjoshi/physics", "max_issues_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "qm/modern-physics/notes/c4.tex", "max_forks_repo_name": "drameyjoshi/physics", "max_forks_repo_head_hexsha": "9d3360258bdd12bc3d981ae08c8358dff4b777b3", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.0071047957, "max_line_length": 80, "alphanum_fraction": 0.7291328855, "num_tokens": 8632, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8221891305219504, "lm_q2_score": 0.6757645944891558, "lm_q1q2_score": 0.5556063043805575}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[margin=1in]{geometry}\n\\usepackage{setspace}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\n\n\n\\title{Chapter 9\\\\Partial Differential Equations}\n\\author{solutions by Hikari}\n\\date{September 2021}\n\\begin{document}\n\n\\newcommand{\\pdv}[2]{\\frac{\\partial#1}{\\partial#2}}\n\\newcommand{\\V}{\\mathbf}\n\n\\maketitle\n\n\\section*{9.2 First-Order Equations}\n\n\\paragraph{9.2.1}\nLet $s=x+2y$, $t=2x-y$, then\n\\[\n\\pdv{\\varphi}{x}=\\pdv{\\varphi}{s}+2\\pdv{\\varphi}{t}\n\\]\n\\[\n\\pdv{\\varphi}{y}=2\\pdv{\\varphi}{s}-\\pdv{\\varphi}{t}\n\\]\nso the equation becomes\n\\[\n5\\pdv{\\varphi}{s}+t\\varphi=0\n\\]\n\\[\n\\frac{1}{\\varphi}d\\varphi=-\\frac{t}{5}ds\n\\]\n\\[\n\\ln\\varphi=-\\frac{ts}{5}+C(t)\n\\]\n\\[\n\\varphi=e^{-\\frac{1}{5}(2x^2-2y^2+3xy)}f(2x-y)\n\\]\nOr note that $s+2t=5x$, so the solution can be transformed into\n\\[\n\\varphi=e^{-\\frac{st}{5}}e^{-\\frac{2t^2}{5}}e^{\\frac{2t^2}{5}}f(t)\n=e^{-\\frac{5x\\cdot t}{5}}g(t)\n\\]\n\\[\n=e^{-2x^2+xy}g(2x-y)\n\\]\n\n\\paragraph{9.2.2}\nLet $s=x-2y$, $t=2x+y$, then \n\\[\n\\pdv{\\varphi}{x}=\\pdv{\\varphi}{s}+2\\pdv{\\varphi}{t}\n\\]\n\\[\n\\pdv{\\varphi}{y}=-2\\pdv{\\varphi}{s}+\\pdv{\\varphi}{t}\n\\]\nso the equation becomes \n\\[\n5\\pdv{\\varphi}{s}+\\frac{3t-s}{5}=0\n\\]\n\\[\n\\pdv{\\varphi}{s}=\\frac{s-3t}{25}\n\\]\n\\[\n\\varphi=\\frac{s^2-6st}{50}+f(t)\n\\]\n\\[\n=\\frac{s^2-6st}{50}+\\frac{9t^2}{50}-\\frac{9t^2}{50}+f(t)\n\\]\n\\[\n=\\frac{(s-3t)^2}{50}+g(t)\n\\]\n\\[\n=\\frac{(x+y)^2}{2}+g(2x+y)\n\\]\n\n\\paragraph{9.2.3}\nLet $s=x+y-z,\\;t=x-y,\\;u=x+y+2z$, then\n\\[\n\\pdv{\\varphi}{x}=\\pdv{\\varphi}{s}+\\pdv{\\varphi}{t}+\\pdv{\\varphi}{u}\n\\]\n\\[\n\\pdv{\\varphi}{y}=\\pdv{\\varphi}{s}-\\pdv{\\varphi}{t}+\\pdv{\\varphi}{u}\n\\]\n\\[\n\\pdv{\\varphi}{x}=-\\pdv{\\varphi}{s}+2\\pdv{\\varphi}{u}\n\\]\nso the equation becomes\n\\[\n3\\pdv{\\varphi}{s}=0\n\\]\n\\[\n\\varphi=f(t,u)=f(x-y,x+y+2z)\\]\nor note that $t+u=2(x+z)$, so the equation can be transformed into\n\\[\n\\varphi=f(t,2(x+z)-t)=g(t,x+z)=g(x-y,x+z)\n\\]\n\n\\paragraph{9.2.4}\nLet $s=x+y+z,\\;t=x-y,\\;u=x+y-2z$, then\n\\[\n\\pdv{\\varphi}{x}=\\pdv{\\varphi}{s}+\\pdv{\\varphi}{t}+\\pdv{\\varphi}{u}\n\\]\n\\[\n\\pdv{\\varphi}{y}=\\pdv{\\varphi}{s}-\\pdv{\\varphi}{t}+\\pdv{\\varphi}{u}\n\\]\n\\[\n\\pdv{\\varphi}{z}=\\pdv{\\varphi}{s}-2\\pdv{\\varphi}{u}\n\\]\nso the equation becomes\n\\[\n3\\pdv{\\varphi}{s}=t\n\\]\n\\[\n\\varphi=\\frac{st}{3}+f(t,u)\n\\]\n\\[\n=\\frac{st}{3}-\\frac{ut}{3}+\\frac{ut}{3}+f(t,u)\n\\]\n\\[\n=\\frac{3z\\cdot t}{3}+g(t,u)\n\\]\n\\[\n=(x-y)z+g(x-y,x+y-2z)\n\\]\n\n\\paragraph{9.2.5}\n(a)\n\\[\n\\pdv{\\varphi}{x}=\\pdv{u}{x}\\pdv{\\varphi}{u}+\\pdv{v}{x}\\pdv{\\varphi}{v}=y\\pdv{\\varphi}{u}+2x\\pdv{\\varphi}{v}\n\\]\n\\[\n\\pdv{\\varphi}{y}=\\pdv{u}{y}\\pdv{\\varphi}{u}+\\pdv{v}{y}\\pdv{\\varphi}{v}=x\\pdv{\\varphi}{u}-2y\\pdv{\\varphi}{v}\n\\]\nso the equation becomes\n\\[\n(x^2+y^2)\\pdv{\\varphi}{u}=0\n\\]\n\\[\n\\varphi=f(v)=f(x^2-y^2)\n\\]\n\n(b)\nThe characteristics are $x^2-y^2=constant$, which are hyperbolas centered at $(0,0)$ and with $x=y,\\;x=-y$ as asymptotes. 
\n\n\\paragraph{9.2.6}\nLet $u=x^2-y^2,\\;v=xy$, then\n\\[\n\\pdv{\\varphi}{x}=\\pdv{u}{x}\\pdv{\\varphi}{u}+\\pdv{v}{x}\\pdv{\\varphi}{v}=2x\\pdv{\\varphi}{u}+y\\pdv{\\varphi}{v}\n\\]\n\\[\n\\pdv{\\varphi}{y}=\\pdv{u}{y}\\pdv{\\varphi}{u}+\\pdv{v}{y}\\pdv{\\varphi}{v}=-2y\\pdv{\\varphi}{u}+x\\pdv{\\varphi}{v}\n\\]\nso the equation becomes\n\\[\n2(x^2+y^2)\\pdv{\\varphi}{u}=0\n\\]\n\\[\n\\varphi=f(v)=f(xy)\n\\]\n\n\\section*{9.3 Second-Order Equations}\n\n\\paragraph{9.3.1}\n\\begin{align*}\n    & \\pdv{\\varphi}{x}=\\pdv{\\varphi}{\\xi}\\pdv{\\xi}{x}+\\pdv{\\varphi}{\\eta}\\pdv{\\eta}{x}=c^{\\frac{1}{2}}\\pdv{\\varphi}{\\xi}\\\\\n    & \\pdv{^2\\varphi}{x^2}=c^{\\frac{1}{2}}\\left(\\pdv{^2\\varphi}{\\xi^2}\\pdv{\\xi}{x}+\\pdv{^2\\varphi}{\\xi\\partial\\eta}\\pdv{\\eta}{x}\\right)=c\\pdv{^2\\varphi}{\\xi^2}\\\\\n    & \\pdv{^2\\varphi}{x\\partial y}=c^{\\frac{1}{2}}\\left(\\pdv{^2\\varphi}{\\xi^2}\\pdv{\\xi}{y}+\\pdv{^2\\varphi}{\\xi\\partial\\eta}\\pdv{\\eta}{y} \\right)=-b\\pdv{^2\\varphi}{\\xi^2}+\\pdv{^2\\varphi}{\\xi\\partial\\eta}\\\\\n    & \\pdv{\\varphi}{y}=\\pdv{\\varphi}{\\xi}\\pdv{\\xi}{y}+\\pdv{\\varphi}{\\eta}\\pdv{\\eta}{y}=-c^{-\\frac{1}{2}}b\\pdv{\\varphi}{\\xi}+c^{-\\frac{1}{2}}\\pdv{\\varphi}{\\eta}\\\\\n   & \\pdv{^2\\varphi}{y^2}=-c^{-\\frac{1}{2}}b\\left(\\pdv{^2\\varphi}{\\xi^2}\\pdv{\\xi}{y}+\\pdv{^2\\varphi}{\\xi\\partial\\eta}\\pdv{\\eta}{y} \\right)+c^{-\\frac{1}{2}}\\left(\\pdv{^2\\varphi}{\\xi\\partial\\eta}\\pdv{\\xi}{y}+\\pdv{^2\\varphi}{\\eta^2}\\pdv{\\eta}{y} \\right)=c^{-1}b^2\\pdv{^2\\varphi}{\\xi^2}-2c^{-1}b\\pdv{^2\\varphi}{\\xi\\partial\\eta}+c^{-1}\\pdv{^2\\varphi}{\\eta^2}\n\\end{align*}\nSubstituting, we get\n\\[\n\\mathcal{L}=a\\pdv{^2}{x^2}+2b\\pdv{^2}{x\\partial y}+c\\pdv{^2}{y^2}\n\\]\n\\[\n=(ac-b^2)\\pdv{^2}{\\xi^2}+\\pdv{^2}{\\eta^2}\n\\]\n\n\\section*{9.4 Separation of Variables}\n\n\\paragraph{9.4.1}\n\\[\n(\\nabla^2+k^2)(a_1\\varphi_1+a_2\\varphi_2)=a_1\\nabla^2\\varphi_1+a_1k^2\\varphi_1+a_2\\nabla^2\\varphi_2+a_2k^2\\varphi_2=a_1(\\nabla^2+k^2)\\varphi_1+a_2(\\nabla^2+k^2)\\varphi_2\n\\]\n\n\\paragraph{9.4.2}\nLet $\\psi(\\rho,\\varphi,z)=P(\\rho)\\Phi(\\varphi)Z(z)$ and substitute:\n\\[\n\\frac{\\Phi Z}{\\rho}\\frac{d}{d\\rho}\\left(\\rho\\frac{dP}{d\\rho}\\right)+\\frac{PZ}{\\rho^2}\\frac{d^2\\Phi}{d\\varphi^2}+P\\Phi\\frac{d^2Z}{dz^2}+\\left[k^2+f(\\rho)+\\frac{1}{\\rho^2}g(\\varphi)+h(z) \\right]P\\Phi Z=0\n\\]\nThe equation can be separated into\n\\[\n\\frac{1}{Z}\\frac{d^2Z}{dz^2}+h(z)=l^2\n\\]\n\\[\n\\frac{1}{\\Phi}\\frac{d^2\\Phi}{d\\varphi^2}+g(\\varphi)=-m^2\n\\]\n\\[\n\\frac{\\rho}{P}\\frac{d}{d\\rho}\\left(\\rho\\frac{dP}{d\\rho} \\right)+\\left[f(\\rho)+l^2+k^2 \\right]\\rho^2-m^2=0\n\\]\n\n\\paragraph{9.4.3}\nSubstituting $\\psi(r,\\theta,\\varphi)=R(r)\\Theta(\\theta)\\Phi(\\varphi)$, we have\n\\[\n\\frac{1}{Rr^2}\\frac{d}{dr}\\left(r^2\\frac{dR}{dr}\\right)+\\frac{1}{\\Theta r^2\\sin\\theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)+\\frac{1}{\\Phi r^2\\sin^2\\theta}\\frac{d^2\\Phi}{d\\varphi^2}=-k^2\n\\]\nIt can be rearranged into\n\\[\n\\frac{1}{R}\\frac{d}{dr}\\left(r^2\\frac{dR}{dr} \\right)+r^2k^2=-\\frac{1}{\\Theta\\sin\\theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)-\\frac{1}{\\Phi\\sin^2\\theta}\\frac{d^2\\Phi}{d\\varphi^2}\n\\]\nBy equating each side to $\\lambda$ we can separate $R$. 
The rest of the equation can be rearranged into\n\\[\n\\frac{1}{\\Phi}\\frac{d^2\\Phi}{d\\varphi^2}=-\\frac{\\sin\\theta}{\\Theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)-\\lambda\\sin^2\\theta\n\\]\nBy equating each side to $-m^2$ we can separate $\\Phi$ and $\\Theta$. \n\nSo the equation can be separated into\n\\[\n\\frac{1}{R}\\frac{d}{dr}\\left(r^2\\frac{dR}{dr} \\right)+r^2k^2=\\lambda\n\\]\n\\[\n\\frac{1}{\\Phi}\\frac{d^2\\Phi}{d\\varphi^2}=-m^2\n\\]\n\\[\n-\\frac{\\sin\\theta}{\\Theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)-\\lambda\\sin^2\\theta=-m^2\n\\]\nwhich are the same as equations (9.74), (9.77), and (9.78).\n\n\\paragraph{9.4.4}\nLet $\\psi(r,\\theta,\\varphi)=R(r)\\Theta(\\theta)\\Phi(\\varphi)$. Substituting and dividing by $R\\Theta\\Phi$, we get\n\\[\n\\frac{1}{Rr^2}\\frac{d}{dr}\\left(r^2\\frac{dR}{dr} \\right)+f(r)+\\frac{1}{r^2}\\left[\\frac{1}{\\Theta\\sin\\theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)+g(\\theta) \\right]+\\frac{1}{r^2\\sin^2\\theta}\\left[\\frac{1}{\\Phi}\\frac{d^2\\Phi}{d\\varphi^2}+h(\\varphi) \\right]+k^2=0\n\\]\nIt can be separated into\n\\[\n\\frac{1}{\\Phi}\\frac{d^2\\Phi}{d\\varphi^2}+h(\\varphi)=-m^2\n\\]\n\\[\n\\frac{1}{R}\\frac{d}{dr}\\left(r^2\\frac{dR}{dr} \\right)+r^2f(r)+r^2k^2=\\lambda\n\\]\n\\[\n-\\frac{1}{\\Theta\\sin\\theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta} \\right)-g(\\theta)+\\frac{m^2}{\\sin^2\\theta}=\\lambda\n\\]\n\n\\paragraph{9.4.5}\nSubstituting $\\psi(x,y,z)=X(x)Y(y)Z(z)$, we have\n\\[\nYZ\\frac{d^2X}{dx^2}+XZ\\frac{d^2Y}{dy^2}+XY\\frac{d^2Z}{dz^2}+\\frac{2mE}{\\hbar^2}XYZ=0\n\\]\nIt can be separated into\n\\[\n\\frac{1}{X}\\frac{d^2X}{dx^2}=-l^2\\qquad \\frac{1}{Y}\\frac{d^2Y}{dy^2}=-m^2\\qquad \\frac{1}{Z}\\frac{d^2Z}{dz^2}=-n^2\n\\]\nwhere $l^2+m^2+n^2=\\frac{2mE}{\\hbar^2}$ (the $m$ on the left is a separation constant, not the mass).\n\\medskip\n\nThe solution for $X$ is $X=A\\sin lx+B\\cos lx$. When the boundary conditions $X(0)=X(a)=0$ are applied, we must require $B=0$ and $la=\\lambda\\pi$, where $\\lambda$ is a positive integer. Similarly $mb=\\mu\\pi$, $nc=\\nu\\pi$, where $\\mu,\\nu$ are positive integers. So\n\\[\nE=\\frac{\\hbar^2}{2m}(l^2+m^2+n^2)=\\frac{\\pi^2\\hbar^2}{2m}\\left(\\frac{\\lambda^2}{a^2}+\\frac{\\mu^2}{b^2}+\\frac{\\nu^2}{c^2}\\right)\n\\]\nand the minimum of $E$ is\n\\[\nE_{min}=\\frac{\\pi^2\\hbar^2}{2m}\\left(\\frac{1}{a^2}+\\frac{1}{b^2}+\\frac{1}{c^2}\\right)\n\\]\nin which case $\\lambda=\\mu=\\nu=1$.\n \n\\paragraph{9.4.6}\nFrom Exercise 3.10.32 (c), we have\n\\[\n\\V{L}^2=-\\frac{1}{\\sin\\theta}\\pdv{}{\\theta}\\left(\\sin\\theta\\pdv{}{\\theta}\\right)-\\frac{1}{\\sin^2\\theta}\\pdv{^2}{\\varphi^2}\n\\]\nLet $\\psi(r,\\theta,\\varphi)=R(r)\\Theta(\\theta)\\Phi(\\varphi)$. Substituting into the equation and dividing by $R\\Theta\\Phi$, we have\n\\[\n-\\frac{1}{\\Theta\\sin\\theta}\\pdv{}{\\theta}\\left(\\sin\\theta\\pdv{\\Theta}{\\theta} \\right)-\\frac{1}{\\Phi\\sin^2\\theta}\\pdv{^2\\Phi}{\\varphi^2}=l(l+1)\n\\]\nwhich can be separated into\n\\[\n\\frac{1}{\\Phi}\\pdv{^2\\Phi}{\\varphi^2}=-m^2\n\\]\nand\n\\[\n\\frac{1}{\\sin\\theta}\\pdv{}{\\theta}\\left(\\sin\\theta\\pdv{\\Theta}{\\theta} \\right)-\\frac{m^2}{\\sin^2\\theta}\\Theta+l(l+1)\\Theta=0\n\\]\nWith $t=\\cos\\theta$ and $\\Theta(\\theta)=P(\\cos\\theta)=P(t)$, the $\\Theta$ equation becomes\n\\[\n(1-t^2)P''(t)-2tP'(t)-\\frac{m^2}{1-t^2}P(t)+l(l+1)P(t)=0\n\\]\nwhich is the associated Legendre equation. 
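\n\nThe last change of variable can be made explicit. Since $t=\\cos\\theta$ gives $\\frac{d}{d\\theta}=-\\sin\\theta\\frac{d}{dt}$ and $\\sin^2\\theta=1-t^2$,\n\\[\n\\frac{1}{\\sin\\theta}\\pdv{}{\\theta}\\left(\\sin\\theta\\pdv{\\Theta}{\\theta}\\right)=\\frac{d}{dt}\\left[(1-t^2)\\frac{dP}{dt}\\right]=(1-t^2)P''(t)-2tP'(t),\n\\]\nwhich produces the first two terms of the associated Legendre equation.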
\n\n\\paragraph{9.4.7}\n(a) Multiplying the equation by $-\\frac{2}{\\hbar}\\left(\\frac{m}{k} \\right)^{1/2}$ and using the definitions of $a$ and $\\lambda$, we have\n\\[\n\\frac{1}{a^2}\\frac{d^2\\psi}{dx^2}-a^2x^2\\psi+\\lambda\\psi=0\n\\]\nNoting that $\\frac{d^2\\psi}{dx^2}=\\frac{d^2\\psi}{d\\xi^2}\\left(\\frac{d\\xi}{dx} \\right)^2=a^2\\frac{d^2\\psi}{d\\xi^2}$, and $a^2x^2=\\xi^2$, we have\n\\[\n\\frac{d^2\\psi}{d\\xi^2}+(\\lambda-\\xi^2)\\psi=0\n\\]\n\n(b) \n\\[\n\\frac{d\\psi}{d\\xi}=\\left[y'(\\xi)-\\xi y(\\xi) \\right]e^{-\\frac{\\xi^2}{2}}\n\\]\n\\[\n\\frac{d^2\\psi}{d\\xi^2}=\\left[y''(\\xi)-2\\xi y'(\\xi)+(\\xi^2-1)y(\\xi) \\right]e^{-\\frac{\\xi^2}{2}}\n\\]\nSubstituting and eliminating $e^{-\\frac{\\xi^2}{2}}$, we have\n\\[\ny''(\\xi)-2\\xi y'(\\xi)+(\\lambda-1)y(\\xi)=0\n\\]\nwhich is the Hermite differential equation.\n\n\\section*{9.5 Laplace and Poisson Equations}\n\n\\paragraph{9.5.1}\n(a) Using Equation (3.158),\n\\[\n\\nabla^2\\varphi_1=\\frac{1}{r^2}\\pdv{}{r}\\left(r^2\\pdv{(\\frac{1}{r})}{r} \\right)=\\frac{1}{r^2}\\pdv{}{r}(-1)=0\n\\]\n\n(b)\nSubstitute $r\\cos\\theta$ for $z$, then\n\\[\n\\varphi_2=\\frac{1}{2r}\\ln\\frac{1+\\cos\\theta}{1-\\cos\\theta}\n\\]\n\\[\n\\pdv{\\varphi_2}{r}=-\\frac{1}{2r^2}\\ln\\frac{1+\\cos\\theta}{1-\\cos\\theta}\n\\]\n\\[\n\\pdv{\\varphi_2}{\\theta}=-\\frac{1}{r\\sin\\theta}\n\\]\n\nUsing Equation (3.158),\n\\[\n\\nabla^2\\varphi_2=\\frac{1}{r^2}\\pdv{}{r}\\left(r^2\\pdv{\\varphi_2}{r} \\right)+\\frac{1}{r^2\\sin\\theta}\\pdv{}{\\theta}\\left(\\sin\\theta\\pdv{\\varphi_2}{\\theta} \\right)\n\\]\n\\[\n=\\frac{1}{r^2}\\pdv{}{r}\\left(-\\frac{1}{2}\\ln\\frac{1+\\cos\\theta}{1-\\cos\\theta} \\right)+\\frac{1}{r^2\\sin\\theta}\\pdv{}{\\theta}\\left(-\\frac{1}{r} \\right)=0\n\\]\n\n\\paragraph{9.5.2}\n\\[\n\\nabla^2\\Psi=\\pdv{^2\\Psi}{x^2}+\\pdv{^2\\Psi}{y^2}+\\pdv{^2\\Psi}{z^2}=0\n\\]\nso\n\\[\n\\nabla^2\\left(\\pdv{\\Psi}{z}\\right)=\\pdv{^3\\Psi}{x^2\\partial z}+\\pdv{^3\\Psi}{y^2\\partial z}+\\pdv{^3\\Psi}{z^3}\n\\]\n\\[\n=\\pdv{}{z}\\left(\\pdv{^2\\Psi}{x^2}+\\pdv{^2\\Psi}{y^2}+\\pdv{^2\\Psi}{z^2}\\right)=0\n\\]\nwhich means $\\pdv{\\Psi}{z}$ is also a solution of Laplace's equation.\n\n\\paragraph{9.5.3}\nSuppose $\\psi_1$ and $\\psi_2$ are distinct solutions to the Laplace or Poisson equation for the same Dirichlet boundary conditions; then $\\psi=\\psi_1-\\psi_2$ will also be a solution to the Laplace equation with zero Dirichlet boundary conditions. From Eq. (9.88),\n\\[\n\\int\\displaylimits_S\\psi\\pdv{\\psi}{\\V{n}}dS=\\int\\displaylimits_V\\psi\\nabla^2\\psi\\,d\\tau+\\int\\displaylimits_V   \\nabla\\psi\\cdot\\nabla\\psi\\,d\\tau\n\\]\n$\\int\\displaylimits_S\\psi\\pdv{\\psi}{\\V{n}}dS$ vanishes because $\\psi$ vanishes on the boundary. $\\int\\displaylimits_V\\psi\\nabla^2\\psi\\,d\\tau$ vanishes because $\\psi$ is a solution to the Laplace equation. Therefore $\\int\\displaylimits_V   \\nabla\\psi\\cdot\\nabla\\psi\\,d\\tau$ must vanish, which means $\\nabla\\psi=0$ everywhere, so $\\psi=constant=0$ because it is zero on the boundary. So $\\psi_1=\\psi_2$, which means the solution is unique.\n
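\nThe same identity also settles the Neumann case: if instead $\\pdv{\\psi}{\\V{n}}=0$ on $S$, the surface term\n\\[\n\\int\\displaylimits_S\\psi\\pdv{\\psi}{\\V{n}}dS=0\n\\]\nstill vanishes, so again $\\nabla\\psi=0$ and $\\psi=\\psi_1-\\psi_2$ is a constant; with Neumann conditions the solution is therefore unique only up to an additive constant.\n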
\n\\section*{9.6 Wave Equation }\n\n\\paragraph{9.6.1}\nUsing d\u2019Alembert\u2019s solution:\n\\[\n\\psi(x,t)=\\frac{1}{2}\\left[\\psi(x+ct,0)+\\psi(x-ct,0)\\right]+\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\pdv{\\psi(x,0)}{t}dx\n\\]\n\\[\n=\\frac{1}{2}\\left[\\sin(x+ct)+\\sin(x-ct)\\right]+\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\cos x\\,dx\n\\]\n\\[\n=\\sin x\\cos ct+\\frac{1}{c}\\cos x\\sin ct\n\\]\n\n\\paragraph{9.6.2}\n\\[\n\\psi(x,t)=\\frac{1}{2}\\left[\\psi(x+ct,0)+\\psi(x-ct,0)\\right]+\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\pdv{\\psi(x,0)}{t}dx\n\\]\n\\[\n=\\frac{1}{2}\\left[\\delta(x+ct)+\\delta(x-ct)\\right]\n\\]\n\n\\paragraph{9.6.3}\n\\[\n\\psi(x,t)=\\frac{1}{2}\\left[\\psi(x+ct,0)+\\psi(x-ct,0)\\right]+\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\pdv{\\psi(x,0)}{t}dx\n\\]\n\\[\n=\\frac{1}{2}\\left[\\psi_0(x+ct)+\\psi_0(x-ct)\\right]\n\\]\nwhere $\\psi_0$ is the given square-wave pulse.\n\n\\paragraph{9.6.4}\n\\[\n\\psi(x,t)=\\frac{1}{2}\\left[\\psi(x+ct,0)+\\psi(x-ct,0)\\right]+\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\pdv{\\psi(x,0)}{t}dx\n\\]\n\\[\n=\\frac{1}{2c}\\int_{x-ct}^{x+ct}\\sin x\\,dx\n=\\frac{1}{2c}\\left[\\cos(x-ct)-\\cos(x+ct) \\right]\n=\\frac{1}{c}\\sin x\\sin ct\n\\]\n\n\\section*{9.7 Heat-Flow, or Diffusion PDE}\n\n\\paragraph{9.7.1}\nSubstituting $T(r,t)=R(r)\\tau(t)$ into the equation, we have\n\\[\nR\\pdv{\\tau}{t}=K\\tau\\nabla^2R=K\\tau\\frac{1}{r^2}\\pdv{}{r}\\left(r^2\\pdv{R}{r} \\right)\n\\]\nwhich can be separated into\n\\[\n\\frac{1}{K\\tau}\\pdv{\\tau}{t}=\\frac{1}{Rr^2}\\left(r^2\\pdv{^2R}{r^2}+2r\\pdv{R}{r} \\right)=-\\alpha^2\n\\]\nso the $R$ equation is\n\\[\nr^2\\frac{d^2R}{dr^2}+2r\\frac{dR}{dr}+\\alpha^2r^2R=0\n\\]\nFor $R=\\frac{\\sin\\alpha r}{r}$:\n\\[\nr^2\\pdv{^2R}{r^2}+2r\\pdv{R}{r}+\\alpha^2r^2R\n\\]\n\\[\n=\\frac{d}{dr}\\left(r^2\\frac{dR}{dr}\\right)+\\alpha^2r^2R\n\\]\n\\[\n=\\frac{d}{dr}\\left(\\alpha r\\cos\\alpha r-\\sin\\alpha r\\right)+\\alpha^2r\\sin\\alpha r\n\\]\n\\[\n=\\alpha\\cos\\alpha r-\\alpha^2r\\sin\\alpha r-\\alpha\\cos\\alpha r+\\alpha^2 r\\sin\\alpha r=0\n\\]\nFor $R=\\frac{\\cos\\alpha r}{r}$:\n\\[\nr^2\\pdv{^2R}{r^2}+2r\\pdv{R}{r}+\\alpha^2r^2R\n\\]\n\\[\n=\\frac{d}{dr}\\left(r^2\\frac{dR}{dr}\\right)+\\alpha^2r^2R\n\\]\n\\[\n=\\frac{d}{dr}\\left(-\\alpha r\\sin\\alpha r-\\cos\\alpha r\\right)+\\alpha^2r\\cos\\alpha r\n\\]\n\\[\n=-\\alpha\\sin\\alpha r-\\alpha^2r\\cos\\alpha r+\\alpha\\sin\\alpha r+\\alpha^2r\\cos\\alpha r=0\n\\]\nso $\\frac{\\sin\\alpha r}{r}$ and $\\frac{\\cos\\alpha r}{r}$ are solutions to the equation.\n\n\\paragraph{9.7.2}\nSubstituting $T(\\rho,t)=P(\\rho)\\tau(t)$ into the equation, we have\n\\[\nP\\pdv{\\tau}{t}=K\\tau\\nabla^2P=K\\tau\\frac{1}{\\rho}\\pdv{}{\\rho}\\left(\\rho\\pdv{P}{\\rho}\\right)\n\\]\nwhich can be separated into\n\\[\n\\frac{1}{K\\tau}\\pdv{\\tau}{t}=\\frac{1}{P\\rho}\\left(\\rho\\pdv{^2P}{\\rho^2}+\\pdv{P}{\\rho}\\right)=-\\alpha^2\n\\]\nso\n\\[\n\\frac{d\\tau}{dt}+\\alpha^2K\\tau=0\n\\]\n\\[\n\\rho\\frac{d^2P}{d\\rho^2}+\\frac{dP}{d\\rho}+\\alpha^2\\rho P=0\n\\]\n\n\\paragraph{9.7.3}\nUse Equation 9.114:\n\\[\n\\psi(x,t)=\\frac{1}{\\sqrt{\\pi}}\\int_{-\\infty}^\\infty A\\delta(x-2a\\xi\\sqrt{t})e^{-\\xi^2}d\\xi\n\\]\n\\[\n=\\frac{A}{\\sqrt{\\pi}}\\int_{-\\infty}^{\\infty}\\delta(x-y)e^{-\\frac{y^2}{4a^2t}}\\frac{dy}{2a\\sqrt{t}}\n\\]\n\\[\n=\\frac{A}{2a\\sqrt{\\pi t}}e^{-\\frac{x^2}{4a^2t}}\n\\]\n\n\\paragraph{9.7.4}\nFrom Equation 9.101, the solutions have the form\n\\[\n\\psi(x,t)=(A\\cos\\omega x+B\\sin\\omega x)e^{-\\omega^2a^2t}+C_0'x+C_0\n\\]\nUsing the boundary conditions: 
\n\\begin{alignat*}{2}\n    & \\psi(0,\\infty)=C_0=1\\qquad && C_0=1\\\\\n    & \\psi(L,\\infty)=C_0'L+C_0=0\\qquad && C_0'=-\\frac{1}{L}\\\\\n    & \\psi(0,t)=Ae^{-\\omega^2 a^2t}+1=1\\qquad && A=0\\\\\n    & \\psi(L,t)=B\\sin(\\omega L)\\,e^{-\\omega^2a^2t}=0\\qquad && \\omega L=n\\pi\\qquad\\textit{n is a positive integer}\n\\end{alignat*}\nSo\n\\[\n\\psi(x,t)=\\sum_{n=1}^\\infty a_n\\sin\\left(\\frac{n\\pi x}{L}\\right)e^{-\\frac{n^2\\pi^2a^2}{L^2}t}-\\frac{x}{L}+1\n\\]\nTo determine $a_n$, use $\\psi(x,0)=0$ and the orthogonality $\\int_0^L\\sin\\left(\\frac{n\\pi x}{L}\\right)\\sin\\left(\\frac{m\\pi x}{L}\\right)dx=\\frac{L}{2}\\delta_{nm}$:\n\\[\n\\int_0^L\\psi(x,0)\\sin\\left(\\frac{n\\pi x}{L}\\right)dx=\\frac{L}{2}a_n+\\frac{L}{n\\pi}=0\n\\]\nso $a_n=-\\frac{2}{n\\pi}$, and the overall solution is\n\\[\n\\psi(x,t)=-\\sum_{n=1}^\\infty \\frac{2}{n\\pi}\\sin\\left(\\frac{n\\pi x}{L}\\right)e^{-\\frac{n^2\\pi^2a^2}{L^2}t}-\\frac{x}{L}+1\n\\]\nIt can be verified that the solution satisfies the boundary conditions, the initial condition, and the PDE.\n\n\\end{document}\n", "meta": {"hexsha": "80a6991b4daff90a647075952ff5f0e9f843c78e", "size": 15752, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Mathematical Methods for Physicists/Chapter 09/main.tex", "max_stars_repo_name": "hikarimusic2002/Solutions", "max_stars_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematical Methods for Physicists/Chapter 09/main.tex", "max_issues_repo_name": "hikarimusic2002/Solutions", "max_issues_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematical Methods for Physicists/Chapter 09/main.tex", "max_forks_repo_name": "hikarimusic2002/Solutions", "max_forks_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.586407767, "max_line_length": 437, "alphanum_fraction": 0.6022727273, "num_tokens": 7696, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891218080991, "lm_q2_score": 0.6757645944891559, "lm_q1q2_score": 0.5556062984920453}}
{"text": "\\chapter{\\proj Image Restoration}\n\\label{ch_restore}\n\\index{restoration}\n\\section{Introduction}\n\\subsection{Statistical significance test}\nImages generally contain noise. Hence the wavelet coefficients are \nnoisy too. For filtering, it is necessary to know if a\ncoefficient is due to signal (i.e.\\ it is significant) or to noise. \nWe introduce a statistical significance \ntest for wavelet coefficients. Let $\\cal H_0$ be the hypothesis that the \nimage is locally constant at scale $j$.  \nRejection of hypothesis $\\cal H_0$ depends (for a positive coefficient value)\non:\n\\begin{eqnarray*}\nP = Prob(W_N > w_j(x,y))  \n\\end{eqnarray*}\nand if the coefficient value is negative \n\\begin{eqnarray*}\nP = Prob(W_N < w_j(x,y))  \n\\end{eqnarray*}\nGiven a threshold, $\\epsilon$, if $P > \\epsilon$ the null hypothesis is not\nexcluded.  Although non-null, the value of the coefficient could be due to \nnoise.  On the other hand, if $P < \\epsilon$, the coefficient value cannot be\ndue only to the noise alone, and so the null hypothesis is rejected.  In this\ncase, a significant coefficient has been detected.\n \nOur noise modeling in the wavelet space is based on the assumption that the\nnoise in the data follows a distribution law, which can be: \n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\item a Gaussian distribution \n\\item a Poisson distribution  \n\\item a Poisson + Gaussian  distribution (noise in CCD detectors)\n\\item Poisson noise with few events (galaxy counts, X-ray images, \npoint patterns)\n\\item Speckle noise\n\\item Root Mean Square map: we have a noise standard deviation of each data value.\n\\ei\n \nIf the noise does not  follow any of these distributions, \nwe can derive a noise model\nfrom any of the following assumptions:\n\\bi\n\\item it is stationary, and we have a subimage containing \na realization of the noise,\n\\item it is additive, and non-stationary,\n\\item it is multiplicative and stationary,\n\\item it is multiplicative, but non-stationary,\n\\item it is undefined but stationary,\n\\item it is additive, stationary, and correlated.\n\\ei\n\n\\subsection{Noise modeling}\nWe summarize here the different noise modeling \nstrategies implemented in MR/1.\n\\label{sect_noise}\n\n\\begin{enumerate}\n\\item{Gaussian noise} \\\\\nGiven stationary Gaussian noise, it suffices to compare $w_j(x,y)$ to $k \\sigma_j$.   \n\\begin{eqnarray}\n\\begin{array}{l}\n\\mbox{ if }  \\mid w_j \\mid \\ \\geq \\ k \\sigma_j \\ \\ \\mbox{ then } w_j \\mbox{ is significant } \\\\ \n\\mbox{ if }  \\mid w_j \\mid \\ < \\ k \\sigma_j \\ \\ \\mbox{ then } w_j \\mbox{ is not significant }\n\\end{array}\n\\end{eqnarray}\n\n\\item{Poisson noise} \\\\\nIf the noise in the data $I$ is Poisson, the transform \n\\begin{eqnarray}\nt(I(x,y)) = 2\\sqrt{I(x,y) + \\frac{3}{8}}\n\\end{eqnarray}\nacts as if the data arose from a\nGaussian white noise model (Anscombe, 1948), with $\\sigma = 1$, under the\nassumption that the mean value of $I$ is large.  \nThe image is first transformed, and the same processing is performed\nas in the Gaussian case. This processing works if the number of photons\nper pixel is greater than 30. 
Otherwise the detection levels will be\nover-estimated, and the case ``Poisson noise with few events\" should \ninstead be used.\n\n\n\\item{Poisson noise + Gaussian} \\\\\nThe generalization of the variance-stabilizing transform is:\n\\begin{eqnarray*}\nt(I(x,y)) = \\frac{2}{\\alpha} \\sqrt{\\alpha I(x,y) + \\frac{3}{8} \\alpha^2 + \\sigma^2 -\n\\alpha g}\n \\end{eqnarray*}\nwhere $\\alpha$ is the gain of the detector, and $g$ and $\\sigma$ are the mean and\nthe standard deviation of the read-out noise.  \n\n\\item{Multiplicative noise} \\\\\nThe image is first log-transformed. Then the transformed image is treated \nas an image with Gaussian additive noise.\n\n\\item{Non-stationary additive noise} \\\\\nThe noise is assumed to be locally Gaussian. So we must consider one \nnoise standard deviation per pixel. The Root Mean Square (RMS) map $R_{\\sigma}(x,y)$\ncan be furnished by the user, or automatically calculated by estimating\nfor each pixel the standard deviation in a box around it.\n\nFrom $R_{\\sigma}(x,y)$, we have to compute the noise standard deviation\n$\\sigma_j(x,y)$ for any wavelet coefficient $w_j(x,y)$. $w_j(x,y)$ is obtained by\nthe correlation product between the image $I$ and a function \n$g_j$: $w_j(x,y) = \\sum_k \\sum_l I(x+k,y+l) g_j(k,l)$.\n \nThen we have: $\\sigma_j^2(x,y) = \\sum_k \\sum_l R_{\\sigma}^2(x+k,y+l) g_j^2(k,l)$. \\\\ \nIn the case of the \\`a trous algorithm, the coefficients $g_j(x,y)$\n are not known exactly, but they can easily be computed by taking the\nwavelet transform $w^{\\delta}$ of a Dirac. The map $\\sigma_j^2$ is calculated by \ncorrelating the square of the wavelet scale $j$ of $w^{\\delta}$\n with $R^2_\\sigma(x,y)$.\n \n\\item{Non-stationary multiplicative noise} \\\\\nThe image is first log-transformed. Then the transformed image is treated \nas an image with non-stationary additive noise.\n\n\\item{Undefined stationary noise}\\\\\nA k-sigma clipping is applied at each scale.\n\n\\item{Undefined noise}\\\\\nThe standard deviation is estimated for each wavelet coefficient, \nby considering a box around it, and the calculation of $\\sigma$ is done \nin the same way as for non-stationary additive noise.  The latter \ndetermines a map of variances for the image, and then derives the \nvariances for the wavelet coefficients.  ``Undefined noise'' does not\nassume additivity of the noise, and so calculates the noise from local\nvariance in the resolution scales.  \n\n\\item{Stationary correlated noise} \\\\\nThe noise is stationary, but correlated. This noise modeling requires\na noise map, containing a realization of the noise. The threshold \nat a scale $j$ is found   \nby computing the wavelet transform $S$ of the noise map, and using\nthe histogram of its band $S_j$ to derive the noise probability density \nfunction (pdf) at that scale.       \n \n\\item{Poisson noise with few events} \\\\\nThis case corresponds to noise with a very small number of photons  \nper pixel (this is the case for instance of X-ray images where the \nnumber of photons per pixel is often lower than 1).\nThis special case requires more processing time, due to the fact that \na set of autoconvolutions of the histogram of the wavelet function must be\ncalculated. For faster filtering, the table can be pre-computed using \nthe ``mr\\_abaque\" program, and then the restoration \nis carried out using ``mr\\_pfilter\" \n(see section~\\ref{sect_event}). 
\n\n\\item{Speckle noise} \\\\\nSee section~\\ref{speckle}.\n\\end{enumerate}\n \n \n \n\\subsection{Filtering Methods}\n\\index{filtering}\n\\subsubsection*{Hard and Soft Thresholding}\nMany filtering methods have been proposed in the last ten years.\n{\\em Hard thresholding} consists of setting to 0 all \nwavelet coefficients which have an absolute\nvalue lower than a threshold $T_j$:\n\\begin{eqnarray}  \\tilde w_{j,k} = \n\\left\\{ \\begin{array}{ll} w_{j,k} &  \\mbox{ if } \\mid w_j \\mid \\geq T_j  \\nonumber  \\\\ \n\n0 &  \\mbox{ otherwise}  \\end{array} \\right. \n\\end{eqnarray}\nwhere $w_{j,k}$ is a wavelet coefficient at scale $j$ and at spatial\nposition $k$. \n\n{\\em Soft thresholding} consists of replacing each wavelet coefficient\nby the value $\\tilde w$ where\n\\begin{eqnarray}  \\tilde w_{j,k} = \n\\left\\{ \\begin{array}{ll} sgn(w_{j,k}) ( \\mid w_{j,k} \\mid - T_j)    &  \\mbox{ if } \\mid w_j \\mid \\geq T_j \\nonumber  \\\\ \n0 &  \\mbox{ otherwise}  \\end{array} \\right. \n\\end{eqnarray} \n\nWhen the discrete orthogonal wavelet transform is used, it is interesting \nto note\nthat the hard and soft thresholded estimators are solutions of the following\nminimization problems:\n\\begin{eqnarray*}\n  \\tilde w  =   \\mathrm{arg}_w \\min \\frac{1}{2} \\parallel y - {\\cal W}^{-1} w \\parallel^2_{l^2} + \n \\lambda \\parallel w \\parallel^2_{l^0} & & \\mbox{\\bf   hard threshold} \\nonumber \\\\\n  \\tilde w   =   \\mathrm{arg}_w \\min \\frac{1}{2} \\parallel y - {\\cal W}^{-1} w \\parallel^2_{l^2} + \n \\lambda \\parallel w \\parallel^2_{l^2} & & \\mbox{\\bf   soft threshold}  \n\\end{eqnarray*}\nwhere $y$ is the input data, ${\\cal W}$ the wavelet transform operator, and\n$l^0$ indicates the limit of $l^\\delta$ when $\\delta \\rightarrow 0$. This \ncounts in fact the number of non-zero elements in the sequence.\n\nSeveral approaches have been proposed for deriving the $T_j$ thresholds. \n\n\n\\subsubsection*{k-Sigma Thresholding}\nThe k-Sigma approach consists of deriving $T_j$ from the probability \nof false detection $\\epsilon$  \n\\begin{eqnarray*}\nProb(w > T_j) < \\epsilon \n\\end{eqnarray*}\nGiven stationary Gaussian noise, it suffices to compare $w_{j,k}$ to \n\\index{stationary signal}\n$k \\sigma_j$, where $\\sigma_j$ is the noise standard deviation in band $j$.  \nOften $k $ is chosen as 3, which corresponds approximately \nto $\\epsilon = 0.002$.   \n\n\\subsubsection*{Iterative Filtering}\nWhen a redundant wavelet transform is used, the result after a simple hard \nthresholding can still be improved by iterating. Indeed, we want  the\nwavelet transform of our solution $s$ to reproduce the same significant \nwavelet coefficients (i.e., coefficients larger than $T_j$). This can \nbe expressed in the following way:\n\\begin{eqnarray}\n ({\\cal W} s)_{j,k} = w_{j,k} \\mbox{ if  } \\mid w_{j,k}  \\mid > k \\sigma_j\n\\end{eqnarray}\nwhere $w_{j,k}$ are the wavelet coefficients of the input data $y$. Denoting\n$M$ the multiresolution support (i.e. 
\n\n\\subsubsection*{k-Sigma Thresholding}\nThe k-Sigma approach consists of deriving $T_j$ from the probability \nof false detection $\\epsilon$  \n\\begin{eqnarray*}\nProb(w > T_j) < \\epsilon \n\\end{eqnarray*}\nGiven stationary Gaussian noise, it suffices to compare $w_{j,k}$ to \n\\index{stationary signal}\n$k \\sigma_j$, where $\\sigma_j$ is the noise standard deviation in band $j$.  \nOften $k$ is chosen as 3, which corresponds approximately \nto $\\epsilon = 0.002$.   \n\n\\subsubsection*{Iterative Filtering}\nWhen a redundant wavelet transform is used, the result after a simple hard \nthresholding can still be improved by iterating. Indeed, we want the\nwavelet transform of our solution $s$ to reproduce the same significant \nwavelet coefficients (i.e., coefficients larger than $T_j$). This can \nbe expressed in the following way:\n\\begin{eqnarray}\n ({\\cal W} s)_{j,k} = w_{j,k} \\mbox{ if  } \\mid w_{j,k}  \\mid > k \\sigma_j\n\\end{eqnarray}\nwhere $w_{j,k}$ are the wavelet coefficients of the input data $y$. Denoting\n$M$ the multiresolution support (i.e. $M(j,k) = 1$ if \n$ \\mid w_{j,k}  \\mid > k \\sigma_j$, and 0 otherwise), we want:\n\\begin{eqnarray*}\n M.{\\cal W} s  = M.{\\cal W} y\n\\end{eqnarray*}\nThe solution can be obtained by the following Van Cittert \niteration \\cite{starck:book98}:\n\\begin{eqnarray}\n s^{n+1} & = & s^n +  {\\cal W}^{-1} (M.{\\cal W} y - M.{\\cal W} s^n)  \\nonumber \\\\\n          & = & s^n +  {\\cal W}^{-1} (M.{\\cal W} R^n)\n\\label{eqn_iter_support}\n\\end{eqnarray}\nwhere $R^n = y- s^n$.\nAnother approach consists of minimizing the functional\n\\begin{eqnarray}\n J(s) = \\parallel M.{\\cal W} y - M.{\\cal W} s  \\parallel^2\n\\label{eqn_iter_gradient}\n\\end{eqnarray}\nusing a minimization method such as the fixed-step gradient or the \nconjugate gradient.\n\n\\subsubsection*{Iterative Filtering using Smoothness Constraint}\nThis method consists of adding a smoothness constraint ${\\cal S}$ when \nderiving the solution from the significant coefficients.\nWe solve the following optimization problem \\cite{starck:spie01a}:\n\\begin{equation}\n  \\label{eq:l1-smooth}\n  \\min {\\cal S}(\\tilde{s}), \\quad \\mbox{subject to} \\quad \\tilde{s} \\in C,  \n\\end{equation}\nwhere $C$ is the set of vectors $\\tilde{s}$ \nwhich obey the linear constraints\n\\begin{equation}\n\\label{eq:constraints1}\n\\left\\{  \\begin{array}{ll}\n  \\tilde{s} \\ge 0, \\\\\n  |{\\cal W}s - {\\cal W} \\tilde{s}| \\le e; \n  \\end{array}\n  \\right. \n\\end{equation}\nHere, the second inequality constraint \nonly concerns the set of significant coefficients.\nWe consider two possible functions for ${\\cal S}$:  \n\\begin{eqnarray}\n   {\\cal S}_w (\\tilde{s})& = &   \\min \\|{\\cal W}\\tilde{s}\\|_{\\ell_1} \\nonumber \\\\\n   {\\cal S}_{tv}(\\tilde{s}) & = &   \\min \\|\\tilde{s}\\|_{TV}   \n\\end{eqnarray}\nwhere $\\|\\cdot\\|_{TV}$ is the Total Variation norm, i.e. the discrete\nequivalent of the integral of the Euclidean norm of the gradient.\n\n\\subsubsection*{Universal threshold}\nUniversal thresholding consists of using \na threshold  \\cite{rest:donoho93_1,rest:donoho93_2}\n$T_j =  \\sqrt{2\\log(n)}\\sigma_j$,\nwhere $n$ is the number of pixels in the input data.\nIt ensures with a probability tending to one that all noise is removed.\nThe drawback is that it tends to give an oversmoothed estimator.\n\n\\subsubsection*{SURE Threshold}\nThe SURE threshold minimizes \nStein's unbiased risk estimator \\cite{wave:donoho95}.\n\n\\subsubsection*{MULTI-SURE Thresholding}\nThe SURE method is applied independently on each band of the wavelet transform.\n\n\\subsubsection*{MAD Thresholding}\nThe Median Absolute Deviation (MAD) threshold on a given band $j$ is:\n\\[T_j =  k \\sigma_{j,m} \\] \nwhere $\\sigma_{j,m}$ is the Median\nAbsolute Deviation ($\\sigma_{j,m} = \\mbox{MED}( \\mid w_j \\mid ) / 0.6745$,\nwhere MED is the median function).\nThe MAD method does not require any knowledge about the noise such as the \nnoise standard deviation. It is considered a very good method to \ndenoise data contaminated by correlated noise.\n
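\nThe MAD rule is equally short in code; a sketch (again illustrative only,\nfollowing the formula above):\n\\begin{verbatim}\nimport numpy as np\n\ndef mad_sigma(w):\n    # Robust noise estimate of a band: median of |w|, rescaled so that\n    # the result matches the standard deviation for Gaussian noise.\n    return np.median(np.abs(w)) / 0.6745\n\ndef mad_threshold(w, k=3.0):\n    # k-sigma threshold with sigma estimated from the band itself.\n    return k * mad_sigma(w)\n\nband = np.random.standard_normal(100000)  # a pure Gaussian noise band\nprint(mad_sigma(band))                    # close to 1.0\n\\end{verbatim}\n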
\n\\subsubsection*{Multiscale Wiener Filtering}\nA multiresolution Wiener filtering \\cite{starck:sta94_4} consists of \nmultiplying all coefficients $w_j$ of a given scale $j$ by \n\\begin{eqnarray}\n\\alpha_j = \\frac{S_j}{S_j + N_j}\n\\end{eqnarray}\nwhere $S_j$ and $N_j$ are respectively the variance of the signal \nand of the noise at the scale $j$.\n\n\\subsubsection*{Hierarchical Multiscale Wiener Filtering}\nA hierarchical Wiener filtering \\cite{starck:sta94_4} tries to \nintroduce a prediction \ninto the estimation of $\\tilde w_j$. This prediction is obtained from \nthe coefficient $w_h$ at the same position but at the following scale.\n\\begin{eqnarray}\n\\tilde w_j = \\frac{H_j}{N_j+H_j+Q_j} w_j + \\frac{N_j}{N_j+H_j+Q_j} w_h\n\\end{eqnarray}\nwith:\n\\begin{eqnarray}\nQ_j = \\frac{H_jN_j}{S_j}\n\\end{eqnarray}\nwhere $H_j$ is the variance of the image obtained by taking the difference\nof the scale $j$ and the following one, $j+1$.\n\n\\subsubsection*{Hierarchical thresholding}\nFor each wavelet coefficient $w$, the threshold is derived from the\n{\\em NSigma} parameter $k$ and the signal-to-noise \nratio of the wavelet coefficient at the same position but\nat the next scale. \nThe threshold used here, $T_h$ \\cite{starck:sta94_4}, is equal \nto $T_j=k \\sigma_j$ if $\\mid w_j \\mid \\ \\geq T_j$,\nand $T_h = T_j f(\\mid\\frac{w_h}{S_h}\\mid)$ otherwise\n($S_h$ is the standard deviation of $w_h$). The function $f(a)$\nmust return a value between 0 and 1. The chosen function $f$ is:\n\\begin{itemize}\n\\item $f(a) = 0$ if $a \\geq k$  \n\\item $f(a) = 1 - \\frac{1}{k} a$ if $a < k$  \n\\end{itemize}\n
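\nIn code the Wiener weight of a band is a one-liner; a sketch with made-up band\nvariances (illustrative only):\n\\begin{verbatim}\nimport numpy as np\n\ndef wiener_weight(S_j, N_j):\n    # Shrinkage factor of scale j: signal variance over total variance.\n    return S_j / (S_j + N_j)\n\nw_j = np.array([4.0, -1.5, 0.7])  # coefficients of one scale\nS_j, N_j = 9.0, 1.0               # assumed signal and noise variances\nprint(wiener_weight(S_j, N_j) * w_j)  # -> [ 3.6  -1.35  0.63]\n\\end{verbatim}\n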
\n\\newpage\n\n\\section{Filtering: mr\\_filter}\n\\index{mr\\_filter}\n\\label{sect_filter}\nProgram \n{\\em mr\\_filter} filters an image. Several methods can be used \\cite{starck:sta94_4,starck:sta94_5,starck:sta94_1}. Those using\nmultiresolution need noise modeling. For the case of Poisson noise with\nfew events, the {\\em mr\\_pfilter} program can also \nbe used (see section~\\ref{sect_event}). The case of speckle noise is \ntreated in section~\\ref{speckle}.\n{\\bf\n\\begin{center}\n USAGE: mr\\_filter option image\\_in image\\_out\n\\end{center}}\nwhere options are \n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item {\\bf [-f type\\_of\\_filtering]} \n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\item Multiresolution Hard k-Sigma Thresholding.\n\\item Multiresolution Soft k-Sigma Thresholding.\n\\item Iterative Multiresolution Thresholding. \\\\\n(see equation~\\ref{eqn_iter_support}).\n\\item Adjoint operator applied to the multiresolution support. \\\\\nMinimize a functional from the detected wavelet coefficients\n(see equation~\\ref{eqn_iter_gradient}). It is the \nsame method as the one used by {\\em mr\\_detect} for individual object \nreconstruction.\n\\item Hierarchical Hard Thresholding.\n\\item Hierarchical Wiener filtering.\n\\item Multiresolution Wiener filtering.\n\\item Median filtering \\\\\nStandard median filtering\n\\item Average filtering \\\\\nStandard average filtering\n\\item B-spline filtering \\\\\nConvolve the image with a B-spline\n\\item  Universal Hard Thresholding.\n\\item  Universal Soft Thresholding.\n\\item SURE Hard Thresholding.\n\\item SURE Soft Thresholding.\n\\item MULTI-SURE Hard Thresholding.\n\\item MULTI-SURE Soft Thresholding.\n\\item Median Absolute Deviation (MAD) Hard Thresholding.\n\\item Median Absolute Deviation (MAD) Soft Thresholding. \n\\item Total Variation + Wavelet Constraint \n\\item Wavelet Constraint Iterative Methods \n\\end{enumerate}\nDefault is multiresolution thresholding.\n\\item {\\bf [-t type\\_of\\_multiresolution\\_transform]} \n\\item {\\bf [-T type\\_of\\_filters]}  \n\\item {\\bf [-u]} \n\\item {\\bf [-g sigma]} \n\\item {\\bf [-c gain,sigma,mean]} \n\\item {\\bf [-m type\\_of\\_noise]}\n\\begin{enumerate}\n\\baselineskip=0.4truecm\n\\item Gaussian noise \\\\\nThe standard deviation is either provided by the user using the option ``-g''\nor it is automatically calculated.\n\\item Poisson noise\n\\item Poisson noise + Gaussian noise \\\\\nParameters (gain, readout noise, etc.) can be given by the ``-c'' option.\nThe generalized Anscombe transform is used.\n\\item Multiplicative noise\n\\item Non-stationary additive noise \\\\\nThe standard deviation is estimated in a box around each pixel.\nThe size of the box can be fixed by the ``-S'' option. By default, the\nstandard deviation of the pixel values inside the box is taken as the\nnoise standard deviation, but by using the ``-N'' option, \nthe standard deviation\ncan be calculated by k-sigma clipping. An alternative with this model is to\nuse a pre-defined root mean square map (see option ``-R'').\n\\item Non-stationary multiplicative noise\n\\item Undefined stationary noise\n\\item Undefined noise\n\\item Stationary correlated noise \\\\\nThe noise map must be given using the ``-R'' option.  \nOption ``-E'' allows us to fix the confidence interval\nof the detection (this replaces the ``-s'' option used by other noise models).    \n\\item Poisson noise with few events \\\\\nOption ``-E'' allows us to fix the confidence interval\nof the detection (this replaces the ``-s'' option used by other noise models).\nWith this noise model, only the \\`a trous wavelet transform algorithm can\nbe used.\n\\end{enumerate}\nDefault is Gaussian noise.\n\n\\item {\\bf [-n number\\_of\\_scales]} \n\\item {\\bf [-s NSigma]} \\\\\nOnly used with {\\em type\\_of\\_filtering} in [1,2,3,4,5,17,18].\n\\item {\\bf [-e Epsilon]} \\\\\nConvergence parameter (used only by the filtering method 2). Default is \n1e-5 in the case of Poisson noise with few events, \nand 1e-3 in other cases.\n\\item {\\bf [-i number\\_of\\_iterations]} \\\\\nMaximum number of iterations (used only by iterative filtering \nmethods). Default is 10. \n\\item {\\bf [-w support\\_file\\_name]} \\\\\nIf this option is set, two files are created. The first one (``.mr\") contains\nthe multiresolution support, and the second one contains \n an image which is created from the multiresolution support.\n Default is not to do this. The suffix must not be specified.\n\\item {\\bf [-k]} \\\\\nSuppress isolated pixels in the multiresolution support. 
Default is no\nsuppression.\n\\item {\\bf [-K]} \\\\\nSuppress the last scale. The last scale is not used during the restoration\nprocess. This leads to a \nfiltered image which does not contain a background,\nor extended structures (depending on the number of scales used). \nDefault is no suppression.\n \\item {\\bf [-p]} \\\\\nDetect only positive structures. Wavelet coefficients can be either positive \nor negative. Using this option, only positive coefficients are considered\nas significant. Default is to take both positive and negative coefficients.\n\\item {\\bf [-E Epsilon]} \\\\\nPrecision for computing thresholds, used only in the case of Poisson noise\n with few events. Default is 1e-3, which is equivalent to a $3.1$ sigma \ndetection in the Gaussian case.\n\\item {\\bf [-S SizeBlock]} \\\\\nSize of the blocks used for local variance estimation. Default is 7.\n\\item {\\bf [-N NiterSigmaClip]} \\\\\nIteration number used for local variance estimation. Default is 1.\n\\item {\\bf [-F first\\_detection\\_scale]} \\\\\nIf this option is set, all wavelet coefficients detected at scales lower \nthan {\\em first\\_detection\\_scale} are considered as significant.\n\\item {\\bf [-R RMS\\_Map\\_File\\_Name]} \\\\\nRoot mean square map. If this option is set, the noise model is \nautomatically fixed to: ``Non-stationary additive noise\", \nand the RMS image is used as the local standard deviations image.\n\\item {\\bf [-P]} \\\\\n By default, a positivity constraint is applied to the solution. This means\n that the solution is forced to be positive everywhere. By using this option,\n this constraint is suppressed.\n \\item {\\bf [-b]} \\\\\nAdd the maximum level constraint (maximum = 255). For one-byte images\n(GIF, JPEG, etc.), this constraint limits the range values between 0 and 255.\n\\item {\\bf [-W WindowSize]} \\\\\nWindow size for median and average filtering. Default is 5. This option is\nonly valid if {\\em type\\_of\\_filtering} is equal to 8 or 9 (median or average\nfiltering).\n\\item {\\bf [-v]} \\\\\nVerbose.\n\\end{itemize}\n\n\\noindent\nFor median, average, and B-spline filtering, all options (except ``-W\")\nhave no effect.\n% For transforms 13 to 18, only the thresholding filtering can be used. \n\n\\subsubsection*{Examples:}\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\itemsep=0.1truecm\n\\item mr\\_filter image\\_in.d ima\\_out.d \\\\\nFilters an image by multiresolution thresholding, assuming\nGaussian noise (its standard deviation is automatically estimated).\n\\item mr\\_filter -t24 image\\_in.d ima\\_out.d \\\\\nHard thresholding using the undecimated bi-orthogonal wavelet transform.\n\\item mr\\_filter -t24 -u1 image\\_in.d ima\\_out.d \\\\\nHard thresholding using the partially undecimated bi-orthogonal \nwavelet transform.\n\\item mr\\_filter -f 3 image\\_in.d ima\\_out.d \\\\\nIterative filtering using the \\`a trous algorithm.\n\\item mr\\_filter -m 2 -s 4 -p image\\_in.d ima\\_out.d \\\\\nSame as before, but considering Poisson noise, and the thresholding is done\n at 4 sigma (instead of 3). All negative coefficients are thresholded.\n\\item mr\\_filter -K -n 3 -s 7 ngc2997.fits stars.fits \\\\\nSubtracts from the input image (see Figure~\\ref{fig_ngc}) \nthe noise and the last scale. As only three\nscales were used, all large structures disappear (see \nFigure~\\ref{fig_ngc_clean} top). The difference (see \nFigure~\\ref{fig_ngc_clean} bottom) between the \ninput and the output images shows the galaxy, in which all small-scale\nstructures have been removed. 
\n\\end{itemize}\n\n\\subsubsection*{Filtering Strategy:}\nThe {\\em mr\\_filter} program allows the user to perform filtering by \nmany methods, and using many different transforms. Depending on the\napplication, the optimal filtering method may change. For \nastronomical applications, for instance, methods 3 and 4 lead to very good\nresults, especially for the photometry of the objects. However, some\ngeneral aspects can be noted:\n\\begin{itemize}\n\\baselineskip=0.4truecm\n\\item Directional transforms (i.e., option ``-t'' in [14,21,24]) are well\nadapted to images which contain edges, lines, etc., while isotropic\ntransforms (\\`a trous algorithm, etc.) are better adapted for images\nwhich contain isotropic features.\n\\item Redundant transforms do a better job than non-redundant transforms, \nbecause they respect the translation invariance property.\nSo the undecimated bi-orthogonal wavelet transform (i.e. option -t 24) for \nimages containing edges, and the \\`a trous algorithm (i.e. option -t 2)\nfor images with isotropic features, should always be preferred to decimated\ntransforms.\n\\item If computation time is important, or if the image is very large,\na good trade-off between quality and memory or computation time is to \nlet the first scale remain undecimated, while decimating the others. \nThis is done by using\nthe ``-t 24 -u 1'' option for the bi-orthogonal WT, or the ``-t 19'' \noption for\nthe \\`a trous algorithm.\n\\item The number of scales (i.e., option -n {\\em ScaleNumber}) \nwhich can be used\ndepends on the image size. The larger the image, the \nlarger the number of scales.\nThe user must keep in mind that increasing by 1 the number of scales \ncorresponds\nto adding a new scale in which the number of independent pixels is divided by two.\nIn theory, the number of scales could be $n = \\log_2(N_s)+1$, where $N_s$ is the\nimage size in its smallest direction, but in practice, it is preferable to\nuse a lower value ($\\log_2(N_s) -2$ or $\\log_2(N_s) -3$). \n\\end{itemize}\n\n\n\\begin{figure}[htb]\n\\centerline{\n\\vbox{\n\\hbox{\n\\psfig{figure=fig_ngc_stars.ps,bbllx=1.9cm,bblly=12.6cm,bburx=14.6cm,bbury=25.4cm,width=10cm,height=10cm,clip=}\n}\n\\hbox{\n\\psfig{figure=fig_ngc_bgr.ps,bbllx=1.9cm,bblly=12.6cm,bburx=14.6cm,bbury=25.4cm,width=10cm,height=10cm,clip=}\n}}}\n\\caption{Top, output image produced by {\\em mr\\_filter}. The noise and the\nlarge structures of the galaxy have been removed. Bottom,\ndifference between NGC2997 and the output image.}\n\\label{fig_ngc_clean}\n\\end{figure}\n\n\\clearpage\n\n\n\\section{Image with Speckle Noise}\n\\label{speckle}\n\\subsection{Speckle noise statistics}\n\nSpeckle occurs in all types of coherent imagery such as synthetic aperture radar (SAR) imagery, \nacoustic imagery and laser-illuminated imagery \\cite{rest:dainty75}. \nWhen an object is illuminated by a coherent source of radiation and the object\n has a surface whose roughness is of the order of the wavelength of the incident \n radiation, the wave scattered from the surface consists of contributions \n from many independent scattering areas. The signal is assumed to be \ncomplex Gaussian-distributed noise of zero mean and standard deviation\n $\\sigma$. The value of $\\sigma$ is a function of the backscattering \ncoefficient $\\sigma_0$ of the observed surface. 
The modulus $\rho$ of
the signal gives an image (see Figure \ref{fig:im1}) from which one
can derive some properties of the object surface by measuring the local
value of $\sigma$.
The probability density function (PDF)
of the modulus of a homogeneous scene is a Rayleigh distribution:
\begin{equation}
p(\rho)=\frac{\rho}{\sigma^2}e^{-\frac{\rho^2}{ 2\sigma^2}}
\label{eq:2-9}
\end{equation}
of mean $M_{\rho}$ and standard deviation $\sigma_{\rho}$:
\begin{eqnarray}
M_{\rho} = \sqrt{ \frac{\pi}{2} } \sigma \quad \sigma_{\rho}=\sqrt
{\frac{4-\pi}{2}}\sigma
\end{eqnarray}
The ratio $\sigma_{\rho}/M_{\rho}$ is a constant of
value $\sqrt{\frac{4-\pi}{\pi}}$.
This means that the speckle is multiplicative noise.
For this reason some filtering methods,
called homomorphic techniques \cite{rest:frances95}, use a logarithmic
transform of the modulus to convert the multiplicative noise into additive
noise. The PDF of the log-transformed modulus $\ell = \log \rho$ is:
\begin{eqnarray}
p(\ell) =  \frac{e^{2\ell}}{ \sigma^2} e^{-\frac{e^{2\ell}}{2 \sigma^2} }
\label{eq:2-22}
\end{eqnarray}
The mean $M_{\ell}$ and the standard deviation $\sigma_{\ell}$ are:
\begin{eqnarray}
M_{\ell} = 0.058 + \log(\sigma) \quad \sigma_{\ell}= \sqrt{\frac{\pi^2}{24}} = 0.641
\end{eqnarray}
When using the logarithm, the standard deviation is
independent of the local value of the signal, but the
estimation of $\sigma_0$ from the filtered image has a variance
which is not minimal according to the Cramer-Rao bound.
The minimum variance bound estimator of a Rayleigh
distribution is the energy (i.e.\ the square of the
modulus, $I=\rho^2$), which is known to have an exponential
(i.e.\ Laplace)
distribution of parameter $a=2\sigma^2$ \cite{rest:hoekman91}:
\begin{eqnarray}
p(I)=\frac{1}{a}e^{-\frac{I}{a}}
\end{eqnarray}
With this transformation the noise is still multiplicative,
but it has been shown \cite{rest:bijaoui97} that the ratio $R$
between the energy of the image to be filtered and a spatially
Laplace-distributed image of local mean value $a(k_x,k_y)$ has a
Laplace PDF of mean $1$. In that case, the filtering algorithm
consists of finding the local value of $a$, which is iteratively
obtained by rejecting the non-significant wavelet coefficients
found in the image ratio $R$ according to a simulated Laplace
noise of mean $1$.

\subsection{Speckle filtering: mr\_rfilter}
\index{speckle}
\index{mr\_rfilter}

\subsubsection{General description}
The filtering algorithm can be applied to the modulus, the logarithm of
the modulus, or the square of the modulus of the image.
Visually the results are quite similar, but the use of a quadratic
transform allows more accurate radiometric estimation from the filtered image.
When using the logarithmic transform, the general structure of the algorithm is
the following (for more details see \cite{rest:bijaoui97}):

\begin{enumerate}
\item Simulate a log-Rayleigh noise image model.
\item Compute the wavelet transform of this simulated image by
using the \`a trous algorithm.
\item According to the chosen statistical decision
level $\epsilon$, compute the corresponding positive and
negative thresholds for each scale as shown in Figure \ref{fig:seuil}.
\item Compute the logarithm of the original image.
\item Threshold the wavelet transform of the log-transformed
original image according to the thresholds computed with the noise model.
\item Reconstruct the image from the thresholded wavelet transform.
\end{enumerate}

If the reconstructed image is correct (i.e.\ the
reconstructed image is the filtered image) the residual between
this image and the original one must contain only noise. If not,
we have to repeat steps 5 and 6 to extract the
significant structures from the residual and refine the filtered image. \\

For multiplicative noise (i.e.\ Rayleigh or Laplace noise)
we have to test the significant part of the ratio $R$ between the
original transformed image and a reference image.
This reference image is the filtered image if the wavelet transform
of the ratio contains no significant wavelet coefficients.
The algorithm starts with any approximation of the image to be filtered
as the reference image. Generally, the last smooth image of the
wavelet transform is used to initialize the first reference image.
The reference image is iteratively refined by introducing the significant
part detected from the ratio $R$ until this ratio has a Laplace PDF
of parameter $1$. In the case of Rayleigh distributed noise (i.e.\
no transformation) the test is carried out on the variation of the standard
deviation of the residual between two successive iterations.
For many reasons (precision of the convergence test, estimation of
the scattering parameter $\sigma_0$ from the filtered image) the
Laplace noise model (i.e.\ square-transformed Rayleigh noise) gives
the best results.

\subsubsection{Thresholding function}

The thresholded wavelet coefficients $\bar{w}(i,k_x,k_y)$ are
computed from the original wavelet coefficients ${w}(i,k_x,k_y)$ by
applying the following equation:

\begin{equation}
\bar w(i,k_x,k_y) = S(w(i,k_x,k_y)) w(i,k_x,k_y)
\end{equation}
where $S(w)$ is the thresholding function. For hard thresholding
(see Figure \ref{fig:seuil}-a), $S(w)$ is defined by:
\begin{eqnarray}
S(w) =
\left\lbrace
\begin{array}{l}
 0 \quad \mbox{if \quad} w \in [x_n, x_p]\\
 1 \quad \mbox{elsewhere}
\end{array}
\right.
\end{eqnarray}
For soft thresholding (see Figure \ref{fig:seuil}-b) the thresholding
function is:
\begin{eqnarray}
S(w) =
\left\lbrace
\begin{array}{l}
\hspace*{0.8truecm} 0 \quad  \mbox{\hspace*{0.8truecm}if \quad}  w \in [xn_1, xp_1] \\
\\
{\displaystyle { \frac{\; \; w \; -xp_1} {xp_2 \! - xp_1}}} \quad \mbox{if \quad} w \in [xp_1, xp_2]\\
\\
{\displaystyle { \frac{\; \; w \; -xn_1} {xn_2 \! - xn_1}}} \quad \mbox{if \quad} w \in [xn_2, xn_1]\\
\\
\hspace*{0.8truecm} 1 \quad \mbox{\hspace*{0.8truecm}elsewhere}\\
\end{array}
\right.
\label{eq:fonction_seuillage_ponderee}
\end{eqnarray}

\begin{figure}[htb]
\centerline{
\hbox{\psfig{figure=fig_speckle_1.ps,bbllx=1.5cm,bblly=7cm,bburx=12.5cm,bbury=21cm,height=7cm,width=5.5cm,clip=}}}
\caption{Threshold determination and thresholding functions;
(a) hard thresholding, (b) soft thresholding.}
\label{fig:seuil}
\end{figure}

\subsubsection{Hierarchical thresholding}
With the hierarchical thresholding, the thresholded wavelet
coefficients at a given scale $i$ are such that:
\begin{equation}
\bar w(i,k_x,k_y) = S(w(i,k_x,k_y)) S(w(i+1,k_x,k_y)) w(i,k_x,k_y)
\end{equation}
This kind of thresholding is generally used to reduce the number
of false detections on the first scale.

\subsubsection{mr\_rfilter}
\index{mr\_rfilter}
Program {\em mr\_rfilter} filters an image that contains speckle noise
by using a multiscale \`a trous algorithm.
{\bf
\begin{center}
 USAGE: mr\_rfilter options image\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item {\bf [-t type\_of\_noise]}
\begin{itemize}
\baselineskip=0.4truecm
\item [{\bf 0.} ] {\bf Rayleigh noise:}\\
The filtering algorithm is applied to the modulus of the image.

\item [{\bf 1.} ] {\bf Log Rayleigh:}\\
The filtering algorithm is applied after a logarithmic transform of the image.

\item [{\bf 2.} ] {\bf Laplace:}\\
The filtering algorithm is applied after a quadratic transform of the image.

\end{itemize}

Default is the Laplace noise model.

\item {\bf [-n number\_of\_scales]}

\item {\bf [-i number\_of\_iterations]} \\
Maximum number of iterations. Default is 10.

\item {\bf [-e Epsilon]} \\
Convergence parameter. Default is 0.001.

\item {\bf [-E decision\_level]} \\
Statistical decision level $\epsilon$ used for each scale. If 0, the
value of $\epsilon$ is set interactively for each of the first four
scales. If the maximum number of scales (scale\_max) set by the
parameter -n is greater than 4, the statistical decision levels
from scale 5 to scale\_max are set to the value of the
statistical decision level used for scale 4.\\
Default is 0.001.


\item {\bf [-N number\_of\_images]} \\
To be used if the input image is a multi-look image (i.e.\ the input image
is the average of N single-look images).\\
Default is 1.

\item {\bf [-r Ratio]} \\
Ratio=Epsilon2/Epsilon1 used for soft thresholding. If set, a hierarchical filtering is used for the first scale. The value of Epsilon1 is set with the -E option. If Ratio=1, a standard hard thresholding is used for all scales.
Default is 10.
\end{itemize}

\subsubsection*{Examples}
\begin{itemize}
\baselineskip=0.4truecm
\item mr\_rfilter -t1 -E.001 -i5 image.fits result\\
Filters the image {\it image.fits} by considering an additive log-Rayleigh
noise model (logarithmic transformation) with 5 iterations and a
statistical decision level $\epsilon=0.001$ for all scales.

\item mr\_rfilter -t1 -E.001 -e0.01 image.fits result\\
The same as above, but the iterations are stopped when the difference between the standard deviation of the residual and the theoretical value
(i.e.\ 0.64 for the log-Rayleigh distribution) is less than $0.01$.

\item mr\_rfilter -t2 -E0 -i5 -n8 image.fits result\\
Filters the image {\it image.fits} by considering a multiplicative Laplace noise model (quadratic transformation) with 5 iterations and 8 scales. The program will ask the user to enter the statistical decision level $Epsilon[i]$ at each scale. Typical values are:\\
Epsilon[0] = 0.0001\\
Epsilon[1] = 0.001\\
Epsilon[2] = 0.01\\
Epsilon[3] = 0.01\\
Epsilon[4] = 0.1\\
The values from Epsilon[5] to Epsilon[8] are automatically set to 0.1.


\item mr\_rfilter -t0 -E0 -r10 -i5 -n8 image.fits result\\
The same as above, but a soft thresholding is used with a Ratio=10, and a Rayleigh noise model.

\end{itemize}


\begin{figure}[htb]
\centerline{
\vbox{
\hbox{
\psfig{figure=fig_speckle_2.ps,bbllx=3.3cm,bblly=10.7cm,bburx=18.5cm,bbury=25.9cm,height=8cm,width=8cm,clip=}
}
\hbox{
\psfig{figure=fig_speckle_3.ps,bbllx=3.3cm,bblly=10.7cm,bburx=18.5cm,bbury=25.9cm,height=8cm,width=8cm,clip=}
}}}
\caption{{\small Top, raw synthetic aperture radar (SAR) image from the
ERS1 satellite (CNES image).
Bottom, image produced by {\it mr\_rfilter} -t2 -n6 -i5 -E0 -r10,
with Epsilon[1]=Epsilon[2]=0.0001 and Epsilon[3] to Epsilon[6]=0.001.}}
\label{fig:im1}
\end{figure}

\begin{figure}[htb]
\centerline{
\vbox{
\hbox{
\psfig{figure=fig_speckle_4.ps,bbllx=3.3cm,bblly=10.7cm,bburx=18.5cm,bbury=25.9cm,height=8cm,width=8cm,clip=}
}
\hbox{
\psfig{figure=fig_speckle_5.ps,bbllx=3.3cm,bblly=10.7cm,bburx=18.5cm,bbury=25.9cm,height=8cm,width=8cm,clip=}
}}}
\caption{Top, multi-look image generated by averaging 7 raw SAR images. Bottom, output image produced by {\em mr\_rfilter} -t2 -N7 -n6 -i5 -E0 -r10, with Epsilon[1]=Epsilon[2]=0.0001 and Epsilon[3] to Epsilon[6]=0.001.}
\label{fig:im1f}
\end{figure}

\clearpage
\newpage


\section{Sparse Images: \`a trous Algorithm}

In this section we are concerned with the
restoration of images with few photons or counts by the \`a trous algorithm.


\subsection{Initialization: mr\_abaque}
\index{mr\_abaque}
\label{sect_event}
Program
{\em mr\_abaque} precomputes a table which is used by {\em mr\_psupport} and
{\em mr\_pfilter}. This table contains the threshold levels for a wavelet
coefficient with a confidence interval $\epsilon$. These levels are a
function of the number of events used for
the calculation of the wavelet coefficient. The table is saved by default
in the FITS table format. The levels are calculated from the autoconvolution of
the histogram of the wavelet function. The algorithm first calculates the
histogram, then performs the autoconvolution, derives the probability
distribution
of a wavelet coefficient (depending on the number of events), the
distribution function, and finally derives the thresholds for a confidence
interval equal to {\em Epsilon}.
{\bf
\begin{center}
 USAGE: mr\_abaque option file\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-e Epsilon]} \\
Epsilon = confidence level. Default is 1e-3.
The correspondence between
the confidence level and N$\sigma$ detection in the Gaussian case is as
shown in Table~\ref{corrtable}.

\begin{table}
\begin{center}
\begin{tabular}{cc}\hline \hline
 Epsilon & NSigma \\ \hline
 $1e^{-1}$ & 1.28160 \\
 $1e^{-2}$ & 2.32630 \\
 $1e^{-3}$ & 3.09020 \\
 $1e^{-4}$ & 3.71900 \\
 $1e^{-5}$ & 4.26490 \\
 $1e^{-6}$ & 4.75340 \\
 $1e^{-7}$ & 5.19930 \\
 $1e^{-8}$ & 5.61200 \\
 $1e^{-9}$ & 5.99780 \\ \hline \hline
\end{tabular}
\end{center}
\caption{Correspondence between confidence level and Gaussian detection
level.}
\label{corrtable}
\end{table}

\item {\bf [-n Number]} \\
Number = number of events in the data, given as an integer
power of 2. Default is 25 (i.e.\ $2^{25}$).
\item {\bf [-w]} \\
Write the following files:
\begin{itemize}
\baselineskip=0.4truecm
\item Aba\_histo.fits: contains all histograms \\
                    h(3*i) = histogram values \\
                    h(3*i+1) = reduced coordinates \\
                    h(3*i+2) = histogram values for reduced coordinates \\
% \item Aba\_distrib.fits: density functions \\
%                     F(3*i) = reduced coordinates  \\
%                    F(3*i+1) = function values \\
%                    F(3*i+2) = function values for reduced coordinates \\
%\item Aba\_log\_distrib.fits: log transformation of F \\
%                    L(3*i) = function values \\
%                    L(3*i+1) = real coordinates \\
%                    L(3*i+2) = reduced coordinates \\
%\item Aba\_mean.d: contains the mean real values of the autoconvolved histograms
%\item Aba\_sigma.d: contains the sigma real values of the autoconvolved histograms
\item Aba\_bspline.d: contains the 1D B-spline used
\item Aba\_wavelet.d: contains the 2D wavelet used
\end{itemize}
\item {\bf [-d]}  \\
Use all default parameters.
\end{itemize}
\subsubsection*{Example:}
\begin{itemize}
\item mr\_abaque -d \\
Creates the table (with filename ``Abaque.fits'') with all default options.
The table is saved in an image format with 26 lines and 2 columns (by default).
$Abaque(i,0)$ corresponds to the negative detection level for a wavelet
coefficient calculated from $2^i$ events (photons), and $Abaque(i,1)$ corresponds to the positive threshold.
\item mr\_abaque -e 1e-04 \\
Creates the table for a detection with a confidence interval of 1e-04.
\end{itemize}



\subsection{Support creation: mr\_psupport}
\index{mr\_psupport}
\label{set_psup}
Program
{\em mr\_psupport} applies a wavelet transform using the \`a trous
algorithm to data, assuming that the noise follows a Poisson distribution,
in the case where we have only a few events per pixel
\cite{starck:pie98,starck:sta98_1,rest:slezak93,rest:slezak94,starck:mur98_1}.
Data can either be given by an ASCII table of coordinates in real values, or
by an image. In the first case, the data must begin with a line containing
the size (number of rows and number of columns separated by a space)
of the image in which the events can be inserted,
and all other rows must contain
one position (real coordinates separated by a space). The wavelet
transform is then thresholded and stored in the output multiresolution file
(``.mr''). If the option ``-s'' or ``-t'' is set, an analysis of the detected
structures is performed.
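As an illustration of the ASCII input format just described, the following
sketch (plain Python; the file name {\tt tabevent.ascii} and the three
positions echo the example given with the ``-a'' option below) writes a
valid event table:
\begin{verbatim}
# write three events falling in a 10 x 10 image, in the format
# expected by mr_psupport -a: first line "rows cols", then one
# "x y" position (real coordinates) per line
events = [(2.5, 3.8), (4.0, 4.0), (0.3, 9.2)]
with open('tabevent.ascii', 'w') as f:
    f.write('10 10\n')
    for x, y in events:
        f.write('%g %g\n' % (x, y))
\end{verbatim}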
The results of this structure analysis are stored in a file.
{\bf
\begin{center}
 USAGE: mr\_psupport option mr\_file\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-a ascii\_file\_in]}  \\
Read the input data from an ASCII table. An example of a table with three
events in an image of size 10 $\times$ 10 is: \\
10 10 \\
2.5 3.8 \\
4. 4. \\
0.3 9.2 \\
The first coordinate must have values
between 0 and the number of rows minus $0.5$,
and the second coordinate must have values
between 0 and the number of columns minus $0.5$.
\item {\bf [-I image\_file\_in]}  \\
Read the input data from an image.
\item {\bf [-F first\_detection\_scale]} \\
First scale used for the detection. Default is 1.
\item {\bf [-e minimum\_of\_events]}  \\
Minimum number of events for a detection. Default is 4. If a wavelet
coefficient has been calculated from fewer events than this minimum
value, it is not considered as significant.
\item {\bf [-w]}  \\
Write the following file: \\
xx\_Wavelet.mr: contains the wavelet transform of the image.
\item {\bf [-s SignifStructureAnalysis\_FileName]} \\
Write the segmented scales in xx\_Segment.mr. \\
Analyze the detected wavelet coefficients, and write to the file:
\begin{itemize}
\baselineskip=0.4truecm
\item the number of detected structures per scale,
\item the percentage of significant wavelet coefficients,
\item the mean deviation of shape from sphericity,
\item for each detected structure: its surface area, its perimeter,
its deviation of shape from sphericity,
its angle, and its elongation in both axis directions.
\end{itemize}
\item {\bf [-t SignifStructureAnalysis\_FileName]} \\
Same as the -s option, but results are stored in an ASCII table format.
The table contains: scale number, structure number,
Max\_x, Max\_y, Surface, Perimeter, Morpho, Angle, Sigma\_X, Sigma\_Y.
\item {\bf [-p]}  \\
Detect only positive structures.
\item {\bf [-q abaque\_file]}  \\
Default is Abaque.fits.
\item {\bf [-n number\_of\_scales]}  \\
Number of scales used in the multiresolution transform.
Default is 6.
\end{itemize}
\subsubsection*{Examples:}
\begin{itemize}
\item mr\_psupport -a tabevent.ascii support.mr\\
Reads the table, creates the associated image, applies the wavelet
transform, and thresholds the non-significant wavelet coefficients.
\item mr\_psupport -a tabevent.ascii -s analysis.txt support.mr\\
Same as before, but also applies a segmentation to each scale. Results
of the segmentation are stored in ``xx\_Segment.mr'', and each region is
analyzed. Results of this analysis are saved in ``analysis.txt''.
\end{itemize}


\subsection{Filtering: mr\_pfilter}
\index{mr\_pfilter}
Program {\em mr\_pfilter} filters an image (as does {\em mr\_filter}),
assuming that the noise follows a Poisson distribution, in the case
where we have only a few events per pixel.
Data can either be given by an image or by
an ASCII table of coordinates in real values.
As for {\em mr\_psupport}, the pre-computed table must exist.
By default,
the program searches for a file of name ``Abaque.fits''.
If the ``-c'' option is used, the user gives a second input image file name.
Then the multiresolution support is estimated from the first image
(given by option ``-a'' or ``-I''), and the second image is filtered
using the multiresolution support of the first.
{\bf
\begin{center}
 USAGE: mr\_pfilter option image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item {\bf [-a ascii\_file\_in]}  \\
See Section \ref{set_psup}.
\item {\bf [-I image\_file\_in]}
\item {\bf [-c ImageFileName]}  \\
Image to be filtered using the multiresolution support
derived from the image given by option ``-a'' or ``-I''.
By default, the image to filter is the same as the image
used for the multiresolution support calculation.
\item {\bf [-w]}  \\
Write the following files: \\
xx\_Wavelet.mr: contains the wavelet transform of the image. \\
xx\_Support.mr: contains the thresholded wavelet transform.
\item {\bf [-p]}  \\
Detect only positive structures.
\item {\bf [-F first\_detection\_scale]} \\
First scale used for the detection. Default is 1.
\item {\bf [-q abaque\_file]}  \\
Default is Abaque.fits.
\item {\bf [-n number\_of\_scales]}  \\
Number of scales used in the multiresolution transform.
Default is 6.
\item {\bf [-e minimum\_of\_events]}  \\
Minimum number of events for a detection. Default is 4.
\item {\bf [-f type\_of\_filtering]}
\begin{enumerate}
\baselineskip=0.4truecm
\item Iterative multiresolution thresholding
\item Adjoint operator applied to the multiresolution support
\item Multiresolution hard K-sigma thresholding
\end{enumerate}
Default is iterative multiresolution thresholding.
\item {\bf [-l]} \\
Dilate the support.
\item {\bf [-k]} \\
Suppress isolated pixels in the support.
\item {\bf [-K]} \\
Suppress the last scale.
% \item {\bf [-E Epsilon]} \\
% Convergence parameter. Default is 1e-05.
\item {\bf [-i number\_of\_iterations]}  \\
Maximum number of iterations. Default is 50.
\end{itemize}

\noindent
The ``-a'' and ``-I'' options cannot be used together, and one of them must be set.
\subsubsection*{Examples:}
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item mr\_abaque -e 1e-4 \\
Creates the table (file=Abaque.fits) for a confidence interval of 1e-4.
\item mr\_pfilter -a tabevent.ascii filter\_imag.fits\\
Reads the table, creates the associated image, and applies
the filtering using the multiresolution support.
\item mr\_pfilter -I input\_image.fits -f 2 -i 10 -F 3 -p output\_image.fits \\
Filtering using the second method, with 10 iterations, without taking
negative wavelet coefficients into account, and starting the
detection at the third scale.
\end{itemize}

\newpage
\section{Images with Poisson Noise: Haar Transform}

In this section, we are concerned with the
restoration of images with Poisson noise using the Haar transform.

\subsection{Introduction}
Several authors
\cite{wave:kolac97,wave:kolac99,wave:timmermann99,wave:nowak99,wave:jammal99}
have recently and independently suggested that the
Haar wavelet transform is
very well suited to treating data with Poisson noise.
Indeed,
since a Haar wavelet coefficient
is just the difference between two random variables following a Poisson
distribution, it is easier to derive mathematical tools to remove the
noise than with other wavelet methods.

The way the noise is filtered differs, however, from author to author.
In \cite{wave:nowak99}, a kind of Wiener filter has been implemented.
Timmermann and Nowak \cite{wave:timmermann99} have used
a Bayesian approach with an a priori model on the original signal.
Kolaczyk \cite{wave:kolac97} proposed to use the Haar transform
for gamma-ray burst detection
in a one-dimensional signal, and has extended his
method to images \cite{wave:kolac99}. In his method, the thresholds are
calculated from the PDF of the wavelet coefficients. A similar approach
has been developed by Jammal and Bijaoui \cite{wave:jammal99} for medical
image filtering.

As far back as 1910, Haar described the following function as providing
an orthonormal basis. The analyzing
wavelet of a continuous variable is a step function:

\[\begin{array}{ll}
\psi(x)  = 1               & \mbox{ if } 0 \leq x < \frac{1}{2} \\
\psi(x) = -1               & \mbox{ if } \frac{1}{2} \leq x < 1 \\
\psi(x) = 0                & \mbox{ otherwise}
\end{array}\]

The Haar wavelet constitutes an orthonormal basis. Two Haar wavelets of the
same scale (i.e.\ value of $m$) never overlap, so we have scalar product
$<\psi_{m,n}, \psi_{m,n'}> $ $= \delta_{n,n'} $ (the Kronecker delta,
equal to 1 when the subscripts are the same, otherwise equal to zero).
Overlapping supports are
possible if the two wavelets have different scales, e.g.\
$\psi_{1,1}$ and $\psi_{3,0}$ (see \cite{wave:daube92},
pp.\ 10--11). However, if
$m < m'$, then the support of $\psi_{m,n}$ lies wholly in the region where
$\psi_{m',n'}$ is constant.
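This orthogonality is easy to check numerically. The sketch below (assuming
NumPy; the grid sampling and the convention
$\psi_{m,n}(x) = 2^{-m/2}\,\psi(2^{-m}x - n)$ are our own choices for the
illustration) evaluates a few of these scalar products:
\begin{verbatim}
import numpy as np

def haar(m, n, N=1024):
    # psi_{m,n}(x) = 2^{-m/2} psi(2^{-m} x - n), sampled at x = 0..N-1
    u = np.arange(N) / 2.0**m - n
    psi = np.where((0 <= u) & (u < 0.5), 1.0,
          np.where((0.5 <= u) & (u < 1.0), -1.0, 0.0))
    return 2.0**(-m / 2.0) * psi

print(haar(3, 0) @ haar(3, 0))  # 1: unit norm
print(haar(3, 0) @ haar(3, 1))  # 0: same scale, disjoint supports
print(haar(1, 1) @ haar(3, 0))  # 0: overlapping supports, different scales
\end{verbatim}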
Analytically, it follows that $<\psi_{m,n}, \psi_{m',n'}>$
is proportional to the integral of $\psi_{m,n}$, i.e.\ zero.

\subsection{Poisson noise and Haar wavelet coefficients}
\subsubsection*{Thresholding assuming a uniform background}

Assuming a constant background with a background rate $\lambda$,
Kolaczyk and Dixon \cite{wave:kolac99} proposed to use the
normalized Haar transform (L2-normalization) with the following threshold,
corresponding to a false detection rate of $\alpha$:
\be
t_j = 2^{-(j+1)} [ z^2_{\alpha/2}
+ \sqrt{ z^4_{\alpha/2} + 4\lambda_j z^2_{\alpha/2}}]
\ee
where $j$ is the scale level ($j=1..J$, $J$ being the number of scales),
$\lambda_j = 2^{2j} \lambda$ is the background rate over $n_j=2^{2j}$
pixels, and $z_{\alpha/2}$ is the point under the Gaussian density function
beyond which a mass of $\alpha/2$ falls in the tail.
An upper bound for the threshold, valid also with other
filters, is \cite{wave:kolac99}:
\be
t_j = 2^{-(j+1)} [ \log(n_j)
+ \sqrt{  \log^2(n_j) + 2\lambda_j \log(n_j)}]
\ee
This formula results from substituting $z= \sqrt{2\log(n_j)}$ in the
previous equation.

Jammal and Bijaoui \cite{wave:jammal99} have calculated the
probability density function of an unnormalized wavelet coefficient,
which is given by
\be
p(w_j=\nu) = e^{-2^{2j}\lambda} I_\nu(2^{2j}\lambda)
\ee
where $I_\nu(x)$ is the modified Bessel function of integer order $\nu$.
For a given false detection rate $\alpha$,
the threshold $t_j$ can be derived from this PDF.

\subsubsection*{Thresholding with non-uniform background}
\label{bgr_sect}
In many cases, the background cannot be considered to be constant, and an
estimation of $\lambda_{j,k,l}$ (the background rate at scale $j$ and position
$k,l$) is necessary.
Several approaches can be used to take the background
variation into account:
\begin{itemize}
\item Image model: if a model image $M$ can be provided by the user,
then $\lambda_{j,k,l}$ is easily obtained by integrating $M$ over the
correct surface area.
\item Lower resolution: the filtering must be started from the coarsest scale
$S_{N}$. The solution is refined scale by scale in order to obtain $S_{N-1},
S_{N-2},...,S_{1},S_{0}$. The $\lambda_{j,k,l}$ values are obtained from
the coarser scale ($\lambda_{j,k,l} = \frac{\lambda_{j+1,k/2,l/2}}{4}$).
\item Iterative procedure: a filtering can first be performed assuming a
constant background (with rate equal to the mean of the image), and the
filtered image can be used as a model image. This process can be repeated
several times until convergence.
\end{itemize}

\subsubsection*{Reconstruction}
The Haar transform is known to produce block artifacts. Two approaches
may be used to resolve this problem:
\begin{itemize}
\item Cycle-spinning method \cite{rest:donoho95}.
\item Iterative constraint reconstruction \cite{compress:bobichon97}.
\end{itemize}
The cycle-spinning method applies the restoration algorithm (Haar transform
+ thresholding + reconstruction) to every version of the original
image data obtainable by combinations of left-right and upwards-downwards
translations. The final image is simply the average of all images resulting
from each translation sequence.

The iterative constraint reconstruction consists of considering the
reconstruction as an inverse problem, where the solution must
respect some constraints.
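The cycle-spinning average described above can be sketched in a few lines
(assuming NumPy; {\tt denoise} stands for the whole Haar transform +
thresholding + reconstruction chain and must be supplied by the caller):
\begin{verbatim}
import numpy as np

def cycle_spin(image, denoise, max_shift=8):
    # average the restoration over max_shift x max_shift circular shifts
    out = np.zeros(image.shape)
    for dx in range(max_shift):
        for dy in range(max_shift):
            shifted = np.roll(image, (dx, dy), axis=(0, 1))
            restored = denoise(shifted)
            out += np.roll(restored, (-dx, -dy), axis=(0, 1))
    return out / max_shift**2
\end{verbatim}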
The constraints on the solution are:
\begin{itemize}
\item The positivity.
\item The range of variation of the Haar coefficients.
\item The smoothness at all resolution levels.
\end{itemize}
If $w_{j,k,l}$ and $s_{j,k,l}$ are respectively
a Haar coefficient at scale $j$ and position $k,l$ of the
data and of the solution, then $s_{j,k,l}$ must verify:
\be
\left\{
\begin{array}{ll}
s_{j,k,l} \in [-t_j, 0] & \mbox{ if } w_{j,k,l} \in  [-t_j, 0]  \\
s_{j,k,l} \in [0, t_j] & \mbox{ if } w_{j,k,l} \in  [0, t_j] \\
s_{j,k,l} \in [w_{j,k,l}-t_j/2, w_{j,k,l}+t_j/2] & \mbox{ if } \mid w_{j,k,l} \mid  > t_j
\end{array}
\right.
\ee
The smoothness constraint consists of minimizing the gradient of the solution
$S_j$ at scale $j$ in both vertical and horizontal directions:
\be
C(S) = \parallel D_x S(x,y) \parallel^2 + \parallel D_y S(x,y) \parallel^2
\ee
where $D_x$ and $D_y$ are the gradient operators in the two directions.
A full description of the algorithm can be found in \cite{compress:bobichon97}.

\subsubsection*{MMI model}
The Multiscale Multiplicative Innovations (MMI) model was proposed
in \cite{wave:timmermann99}, and introduces a prior model $f_\Lambda$, which
is a mixture of beta densities of the form:
\be
f(\delta) = \sum_{i=1}^{M} p_i \frac{ (1-\delta^2)^{s_i-1} }{ B (s_i,s_i)2^{2s_i-1}}
\ee
for $-1 \le \delta \le 1$, where $B$ is the Euler beta function, $0 \le p_i \le 1$
is the weight of the $i$-th beta density $\frac{ (1-\delta^2)^{s_i-1}}{ B (s_i,s_i)2^{2s_i-1}}$
with parameter $s_i \ge 1$, and $\sum_{i=1}^{M} p_i = 1$.
The final algorithm consists of multiplying each wavelet
coefficient $w_{j,k,l}$ by a term which is derived from the model.

\subsubsection*{PRESS-optimal filter}
The PRESS-optimal filter shrinks the noisy wavelet coefficients toward
zero according to the estimated signal to signal-plus-noise ratio.
The PRESS-optimal filter is given by:
\be
h_{j,k,l} = \frac{\tilde w_{j,k,l}^2 }{ \tilde w_{j,k,l}^2 + \sigma_{j,k,l}^2}
\ee
where $\sigma_{j,k,l}^2$ and $\tilde w_{j,k,l}^2$ are respectively
the noise and signal power.
The noise power is proportional to an estimate of the local intensity of the image
falling under the support of the Haar coefficient $w_{j,k,l}$. The signal
power can be estimated from a model, or directly from the data by:
\be
\tilde w_{j,k,l}^2 = w_{j,k,l}^2 - \sigma_{j,k,l}^2
\ee

\subsection{Filtering: mr\_hfilter}
\index{mr\_hfilter}
Program {\em mr\_hfilter} filters an image using the undecimated
Haar wavelet transform, assuming that the noise follows a Poisson
distribution.
{\bf
\begin{center}
 USAGE: mr\_hfilter option image\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item {\bf [-n  number\_of\_scales]} \\
Number of scales used in the multiresolution transform. By default,
the number of scales is calculated from the image size by the relation:
\begin{eqnarray}
Nscale = \log_2( MIN(Nl,Nc)) - 2
\end{eqnarray}
\item {\bf [-s Nsigma]} \\
False detection rate. The false detection rate for a detection is given by
\begin{eqnarray}
\epsilon =  \mbox{erfc}( NSigma / \sqrt{2})
\end{eqnarray}
The {\em Nsigma} parameter allows us to express the false detection rate
as if it were Gaussian noise. \\
Default is 3.
\item {\bf [-F first\_detection\_scale]} \\
First scale used for the detection.
Default is 1.
\item {\bf [-M BackgroundModel]}
\begin{enumerate}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item  Flat background
\item  Multiresolution background estimation
\item  Background image model
\item  Iterative background estimation
\end{enumerate}
Default is 2.
\item {\bf [-T ThresholdingMethod]}
\begin{enumerate}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item Kolaczyk-Dixon threshold.
\item Kolaczyk-Dixon upper bound threshold.
\item Jammal-Bijaoui threshold. \\
Using this method, the $\epsilon$ false detection rate must be in
the interval $[10^{-6},10^{-2}]$.
\item PRESS-optimal filter.
\end{enumerate}
Default is 1.
\item {\bf [-h Haar\_FilterBank]}
\begin{enumerate}
\item Haar filter
\item Biorthogonal 2/6 Haar filters
\item Biorthogonal 2/10 Haar filters
\end{enumerate}
Default is the biorthogonal 2/6 Haar filters.
\item {\bf [-B FileName]} \\
Background image model file name. Only valid if
{\em BackgroundModel} is set to 3 (background image model).
% \item {\bf [-i NiterScale]} \\
% Number of iterations for the constraint reconstruction. \\
% Default is 4.
\item {\bf [-I NiterBgr]} \\
Number of iterations for the iterative background estimation. Only valid if
{\em BackgroundModel} is set to 4 (iterative background estimation). \\
Default is 4.
\item {\bf [-S]} \\
Apply a soft thresholding instead of a hard thresholding.
\item {\bf [-L LambdaValue]} \\
Lambda value. Only valid if
{\em BackgroundModel} is set to 1 (flat background). \\
Default value is set to the mean of the input image.
\item {\bf [-P PsfInFile]} \\
Input point spread function.
If set, a deconvolution is performed.
\item {\bf [-G RegulParam]} \\
Regularization parameter for the deconvolution.
\end{itemize}
When using the PRESS-optimal filter, the options ``-s, -F, -M, -B, -I, -S, -L'' have no effect.


\subsubsection*{Examples:}
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item mr\_hfilter imag.fits filter\_imag.fits\\
Image filtering using all default options.
\item mr\_hfilter -T 3  image.fits output\_image.fits \\
Filtering using the Jammal-Bijaoui threshold.
\item mr\_hfilter -T 3 -M 4 image.fits output\_image.fits \\
Filtering using the Jammal-Bijaoui threshold, and an iterative
background estimation.
\end{itemize}

\subsection*{Experiments}
From our experiments on astronomical images, we found that:
\begin{itemize}
\item The \`a trous algorithm is significantly better than any of
the Haar-based methods.
\item The PRESS-optimal filter is not optimal at all!
\item The lower resolution furnishes a good estimation of the background.
Iterating does not significantly improve the results.
\item The Jammal-Bijaoui threshold is a little better than the Kolaczyk
one for compact source detection, and is equivalent for more extended sources.
But the Kolaczyk threshold requires less computation time and is
easier to implement.
\end{itemize}
This study clearly shows that the Haar transform is less efficient for
restoring X-ray astronomical images than the \`a trous algorithm.
However, its simplicity and its lower computation time may make it
attractive in many cases.

\clearpage
\newpage

\section{Image deconvolution}
\label{sect_deconv}
\index{deconvolution}
\subsection{Introduction}
See Chapter~\ref{ch_deconv}.

\subsection{Standard methods: im\_deconv}
\index{im\_deconv}
Program
{\em im\_deconv} deconvolves an image, assuming a space-invariant point spread
function (PSF). The PSF is automatically centered in the image, and
its flux is renormalized to unity. The output resolution can
be limited using the concept of an ICF (Intrinsic Correlation Function),
either by selecting a Gaussian ICF (and fixing its full-width
at half-maximum with the ``-f'' option), or by giving an ICF image file name
(option ``-I''). The main
methods are iterative, and some of them can be optimized
using the ``-O'' option, which activates the automatic convergence parameter
calculation. For iterative methods, there is a stopping criterion,
which can be modified using the ``-e'' option. A fixed number of iterations
can be specified by the options ``-e 0 -i NIter''.
For the MEM methods, the regularization parameter ``-G'' controls the smoothness
of the solution. Increasing ``-G'' produces a smoother image. In order to
avoid divergence for large values, it is recommended to decrease
the convergence parameter at the same time, using the ``-C'' option.

{\bf
\begin{center}
 USAGE: im\_deconv option image\_in psf\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item {\bf [-d type\_of\_deconvolution]}
\begin{enumerate}
\baselineskip=0.4truecm
\item  Deconvolution by Van Cittert's algorithm \\
Standard Van Cittert algorithm.
\item  Deconvolution by the gradient algorithm \\
Minimization of a functional by the fixed-step gradient method.
\item  Deconvolution by division in Fourier space \\
Divides the Fourier transform of the image by the Fourier transform
of the PSF. If the ``-f'' option is set, the solution is convolved with a
Gaussian.
\item  Deconvolution by the Richardson-Lucy algorithm \\
Standard Richardson-Lucy (Shepp-Vardi, maximum likelihood
expectation-maximization, EM) algorithm. This method fails
when the input image contains negative values.
\item  Deconvolution by the CLEAN algorithm \\
Standard CLEAN algorithm. The ``-G'' option fixes the loop gain factor, and
the ``-f'' option
fixes the size of the clean beam (full-width at half-maximum).
By default, the residual map is not added to the clean map, and the user
can get it using the ``-r resi\_filename'' command. The criteria
used for stopping the CLEAN loop are:
\begin{enumerate}
\item the maximum of the residual map is lower than the average
value plus NSigma $\times$ Noise
({\em NSigma} and {\em Noise} can be modified using the ``-s'' and ``-g'' options);
\item the maximum number of iterations is reached.
\end{enumerate}
\item Deconvolution by the MEM method (Frieden entropy)
\item Deconvolution by the MEM method (Gull entropy)
\item Deconvolution using Tikhonov regularization
\item Deconvolution using the MAP method
\item Deconvolution by the gradient algorithm + Markov Random Field regularization
\item Deconvolution by Lucy's algorithm + Markov Random Field regularization
\end{enumerate}
Default is deconvolution by the gradient algorithm.
\item {\bf [-i number\_of\_iterations]} \\
Maximum number of iterations.
Default is 1e6 for CLEAN and 500 for the other
deconvolution methods.
\item {\bf [-P]} \\
Suppress the positivity constraint.
\item {\bf [-G RegulParam]} \\
Regularization parameter. Only used with methods 5 to 8, 10 and 11.
Default is $0.1$.
\item {\bf [-C ConvergParam]} \\
Convergence parameter. \\
Default is 1.
\item {\bf [-f ICF\_Fwhm]} \\
Fwhm = full-width at half-maximum.
\item {\bf [-I ICF\_FileName]} \\
Intrinsic correlation function file.
\item {\bf [-F First\_Guess]} \\
Input solution file name. This image is the starting point of the
iterative methods. This option has no effect with deconvolution methods
3 and 5.
\item {\bf [-O]} \\
Optimization. The iteration parameter is calculated at each iteration in
order to optimize the minimization of the functional. Only used with
deconvolution methods 2, 4, and 8. For method 4, the optimization is done
under the assumption of Poisson noise, and for methods 2 and 8
under the assumption of Gaussian noise.
\item {\bf [-M Model\_Image]} \\
Input model file name for the MEM method. Only used with method 7.
\item {\bf [-r residual\_file\_name]} \\
If this option is set, the residual is written to
the file of name {\em residual\_file\_name}. By default, the
residual is not written. The residual is equal to the difference between
the input image and the solution convolved with the PSF.
\item {\bf [-e epsilon]} \\
Convergence parameter (stopping criterion). Default is $1e-3$.
\item {\bf [-S]} \\
Do not automatically shift the maximum of the PSF to the center.
\item {\bf [-v]} \\
Verbose.
\end{itemize}
\noindent
\subsubsection*{Examples:}
\begin{itemize}
\baselineskip=0.4truecm
\item im\_deconv -d 3 image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by division in Fourier space.
\item im\_deconv -d 3 -f 2. image\_in.d psf\_in.d ima\_out.d \\
Ditto, but limiting the resolution to achieve. The solution will be the
convolution product of a hidden solution with a Gaussian of full-width at
half-maximum equal to 2.
\item im\_deconv -F InputSol.d image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by the gradient method. The program uses the image
given by option ``-F'' as the starting point of the iteration.
\item im\_deconv -d 5 -G 0.1 -r residual.d -f 3 image\_in.d psf\_in.d ima\_out.d \\
   im\_op ima\_out.d + residual.d clean\_map.d \\
Uses the CLEAN algorithm. The loop-gain parameter is set to $0.1$, and the
clean beam has a FWHM equal to 3. The residual is added to the output
in order to form the clean map.
\item im\_deconv -d 6 -G 40 -C 0.1 image\_in.d psf\_in.d ima\_out.d \\
Deconvolution by the MEM method. The regularization parameter is set to 40,
and the iteration parameter to $0.1$.
\item im\_deconv -d 7 -G 1 -C 0.5 -M ModelFileName image\_in.d psf\_in.d ima\_out.d \\
Deconvolution by the MEM method, using the Gull and Skilling entropy function.
The model image (i.e.\ normally close to the background level) is given
using the ``-M'' option.
\item im\_deconv -d 8 -G 0.1 image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by the Tikhonov method.
The regularization parameter is set to 0.1.
\end{itemize}


\subsection{Multiresolution methods: mr\_deconv}
\index{mr\_deconv}
Program
{\em mr\_deconv} deconvolves an image using the multiresolution
support, assuming a space-invariant point spread
function (PSF) \cite{starck:sta94_3,starck:sta94_2,starck:pan96,starck:sta02_2}.
{\bf
\begin{center}
 USAGE: mr\_deconv option image\_in psf\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-d type\_of\_deconvolution]}
\begin{enumerate}
\baselineskip=0.4truecm
\item  Deconvolution by the multiresolution Van Cittert algorithm \\
Regularization of Van Cittert's algorithm using the multiresolution support.
If ``-G'' is set, a Total Variation smoothness constraint is added.
\item  Deconvolution by the multiresolution gradient algorithm \\
Regularization of the fixed-step gradient algorithm using the multiresolution support.
If ``-G'' is set, a Total Variation smoothness constraint is added.
\item  Deconvolution by the multiresolution Richardson-Lucy algorithm \\
Regularization of the Richardson-Lucy algorithm using the multiresolution support.
If ``-G'' is set, a Total Variation smoothness constraint is added.
\item  Deconvolution by the multiresolution MAP algorithm \\
Regularization of the MAP algorithm using the multiresolution support.
\item  Deconvolution by division in Fourier space + wavelet filtering \\
This method is also called the wavelet-vaguelette decomposition. It consists of
first dividing the image by the PSF in Fourier space, and then
applying a spatial filtering using the wavelet transform, taking the
correct noise behavior into account. Indeed, after the division, the noise
is no longer white.
\end{enumerate}
Default is deconvolution by the multiresolution Lucy algorithm (3).

\item {\bf [-t type\_of\_multiresolution\_transform]} \\
See \ref{sect_trans}. Transforms 1 to 24 are available.
\item {\bf [-T type\_of\_filters]}
\item {\bf [-u]}
\item {\bf [-g sigma]}
\item {\bf [-c gain,sigma,mean]}
\item {\bf [-m type\_of\_noise]}
\item {\bf [-n number\_of\_scales]}
\item {\bf [-s NSigma]}
\item {\bf [-i number\_of\_iterations]} \\
Maximum number of iterations. Default is 500.
\item {\bf [-e Epsilon]} \\
Convergence parameter. \\
Default is 1e-3.
\item {\bf [-R RMS\_Map\_File\_Name]}
\item {\bf [-f ICF\_Fwhm]} \\
Intrinsic correlation function. \\
Fwhm = full-width at half-maximum.
\item {\bf [-P]} \\
Suppress the positivity constraint.
\item {\bf [-I ICF\_FileName]} \\
Intrinsic correlation function file.
\item {\bf [-F First\_Guess]} \\
Input solution file name.
\item {\bf [-r residual\_file\_name]} \\
If this option is set, the residual is written to
the file of name {\em residual\_file\_name}. By default, the
residual is not written. The residual is equal to the difference between
the input image and the solution convolved with the PSF.
\item {\bf [-S]} \\
Do not automatically shift the maximum of the PSF to the center.
\item {\bf [-p]} \\
Detect only positive structures. Default is no.
% \item {\bf [-N NiterSigmaClip]}
\item {\bf [-k]} \\
Suppress isolated pixels in the support. Default is no.
\item {\bf [-K]} \\
Suppress the last scale.
This option is available only for deconvolution methods 1 and 2. \\
Default is no.
\item {\bf [-O]} \\
Optimization.
Only used for deconvolution methods 1 to 3. \\
Default is no.
\item {\bf [-G RegulParam]} \\
Regularization parameter (only valid for deconvolution methods 1, 2 and 3).
Default is 0.
\item {\bf [-v]} \\
Verbose.
\end{itemize}
\noindent
\subsubsection*{Examples:}
\begin{itemize}
\baselineskip=0.4truecm
\item mr\_deconv image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by the regularized Richardson-Lucy algorithm, assuming
Gaussian noise (its standard deviation is automatically estimated).
\item mr\_deconv -F image\_in.d image\_in.d psf\_in.d ima\_out.d \\
Ditto, but the data is used as the starting point (by default, a flat
image is used). The solution will contain
the noise present in the raw data, while the solution is smooth in
the previous example.
\item mr\_deconv -i 50 -e 0 image\_in.d psf\_in.d ima\_out.d \\
Ditto, but forcing the number of iterations to be equal to 50.
\item mr\_deconv -K -d 2 -m 2 -r resi.d image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by the regularized gradient method. The deconvolved
image will be background free. Noise is assumed to be Poisson. The residual
is saved in the file ``resi.d''.
\item mr\_deconv -d 1 -s 5 -p -r resi.d image\_in.d psf\_in.d ima\_out.d \\
im\_op ima\_out.d + resi.d clean\_map.d \\
Runs the deconvolution by the regularized Van Cittert method. Only positive
wavelet coefficients are used, and the detection is done at $5\sigma$, assuming
Gaussian noise.
\end{itemize}


\subsection{Wavelet CLEAN: mr\_mrc}
\index{mr\_mrc}
Program {\em mr\_mrc} deconvolves an image using the Wavelet
CLEAN method \cite{starck:book98,starck:book02}.
{\bf
\begin{center}
 USAGE: mr\_mrc option image\_in psf\_in image\_out
\end{center}}
where options are:
\begin{itemize}
\baselineskip=0.4truecm
\item {\bf [-g sigma]}
\item {\bf [-m type\_of\_noise]}
\begin{enumerate}
\baselineskip=0.4truecm
\item Gaussian noise.
\item Undefined stationary noise.
\item Stationary correlated noise.
\end{enumerate}
Default is Gaussian noise.
\item {\bf [-n number\_of\_scales]}
\item {\bf [-s NSigma]}
\item {\bf [-i number\_of\_iterations]} \\
Maximum number of iterations. Default is 500.
\item {\bf [-e Epsilon]} \\
Convergence parameter. \\
Default is 1e-4.
\item {\bf [-R RMS\_Map\_File\_Name]}
\item {\bf [-f ICF\_Fwhm]} \\
Intrinsic correlation function. \\
Fwhm = full-width at half-maximum.
\item {\bf [-G LoopGain]} \\
Loop gain parameter. Default is 0.01.
\item {\bf [-E]} \\
Use the Dirty Map instead of the Energy Dirty Map.
\item {\bf [-L]} \\
Apply CLEAN also on the last scale. Default is Van Cittert.
\item {\bf [-A]} \\
Do not add the residual to the CLEAN map.
\item {\bf [-r residual\_file\_name]} \\
If this option is set, the residual is written to
the file of name {\em residual\_file\_name}. By default, the
residual is not written. The residual is equal to the difference between
the input image and the solution convolved with the PSF.
\item {\bf [-K]} \\
Suppress the last scale.
Default is no.
\item {\bf [-v]} \\
Verbose.
\end{itemize}
\noindent
\subsubsection*{Example:}
\begin{itemize}
\item mr\_mrc image\_in.d psf\_in.d ima\_out.d \\
Deconvolves an image by the Wavelet-CLEAN method, assuming
Gaussian noise (its standard deviation is automatically estimated).
\end{itemize}
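To make the iterative schemes behind these deconvolution programs concrete,
the sketch below (assuming NumPy; plain FFT-based circular convolution with
a centered, unit-flux PSF; an illustration, not the MR/1 implementation)
shows the two basic updates used throughout this chapter: Van Cittert's
additive residual correction and the Richardson-Lucy multiplicative
correction.
\begin{verbatim}
import numpy as np

def convolve(im, psf):
    # circular convolution via the FFT; the PSF is assumed centered
    # in the frame and normalized to unit flux
    return np.real(np.fft.ifft2(np.fft.fft2(im) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def van_cittert(data, psf, niter=50, alpha=1.0):
    x = data.copy()
    for _ in range(niter):
        x = x + alpha * (data - convolve(x, psf))  # additive correction
        x = np.maximum(x, 0)                       # positivity constraint
    return x

def richardson_lucy(data, psf, niter=50):
    x = np.full(data.shape, data.mean())           # flat first guess
    psf_t = psf[::-1, ::-1]                        # flipped PSF (adjoint)
    for _ in range(niter):
        ratio = data / np.maximum(convolve(x, psf), 1e-12)
        x = x * convolve(ratio, psf_t)             # multiplicative EM update
    return x
\end{verbatim}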
{"text": "\n% This LaTeX was auto-generated from an M-file by MATLAB.\n% To make changes, update the M-file and republish this document.\n\n%%% \\documentclass{article}\n%%% \\usepackage{graphicx}\n%%% \\usepackage{color}\n\n%%% \\sloppy\n%%% \\definecolor{lightgray}{gray}{0.5}\n\\setlength{\\parindent}{0pt}\n\n%%% \\begin{document}\n\n    \n    \n\\subsection*{Integral and Differential Non Linearity of ADC}\n\n\\begin{par}\nExample for algorithm INL-DNL\n\\end{par} \\vspace{1em}\n\\begin{par}\nINL-DNL is an algorithm for estimating Integral and Differential Non-Linearity of an ADC. ADC has to sample a pure sine wave. To estimate all transition levels the amplitude of the sine wave should overdrive the full range of the ADC by at least 120\\%. If not so, non estimated transition levels will be assumed to be 0 and the results may be less accurate. As an input ADC codes are required.';\n\\end{par} \\vspace{1em}\n\\begin{par}\nSee also 'Virosztek, T., P\u00e1lfi V., Renczes B., Koll\u00e1r I., Balogh L., S\u00e1rhegyi A., M\u00e1rkus J., Bilau Z. T., ADCTest project site: \\url{http://www.mit.bme.hu/projects/adctest} 2000-2014';\n\\end{par} \\vspace{1em}\n\n\\subsubsection*{Contents}\n\n\\begin{itemize}\n\\setlength{\\itemsep}{-1ex}\n   \\item Generate sample data\n   \\item Call algorithm\n\\end{itemize}\n\n\n\\subsubsection*{Generate sample data}\n\n\\begin{par}\nSuppose a sine wave of nominal frequency 10 Hz and nominal amplitude 1.5 V is sampled by ADC with bit resolution of 4 and full range of 1 V. First quantity \\lstinline{bitres} with number of bits of resolution of the ADC is prepared and put into input data structure \\lstinline{DI}.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDI = [];\nDI.bitres.v = 4;\n\\end{lstlisting}\n\\begin{par}\nWaveform is constructed. Amplitude is selected to overload the ADC.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nt=[0:1/1e4:1-1/1e4];\nAnom = 3.5; fnom = 2; phnom = 0;\nwvfrm = Anom*sin(2*pi*fnom*t + phnom);\n\\end{lstlisting}\n\\begin{par}\nNext ADC code values are calculated. It is simulated by quantization and scaling of the sampled waveform. In real measurement code values can be obtained directly from the ADC. Suppose ADC range is -2..2.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\ncodes = wvfrm;\nrmin = -2; rmax = 2;\nlevels = 2.^DI.bitres.v - 1;\ncodes(codes<rmin) = rmin;\ncodes(codes>rmax) = rmax;\ncodes = round((codes-rmin)./(rmax-rmin).*levels);\n\\end{lstlisting}\n\\begin{par}\nNow lets introduce ADC error. 
Instead of generating code 2 ADC erroneously generates code 3 and instead of 11 it generates 10.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\ncodes(codes==2) = 3;\ncodes(codes==11) = 10;\ncodes = codes + min(codes);\n\\end{lstlisting}\n\\begin{par}\nCreate quantity \\lstinline{codes} and plot a figure with sampled sine wave and codes.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDI.codes.v = codes;\nfigure\nhold on\nstairs(t, codes);\nwvfrm = (wvfrm - rmin)./(rmax-rmin).*levels;\nplot(t, wvfrm, '-r');\nxlabel('t (s)')\nylabel('Codes / Voltage (scaled)');\nlegend('Codes generated by ADC','Original waveform scaled to match codes');\nhold off\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{algs_examples_published/INL-DNL_alg_example_01.pdf}\n\\end{center}\n\n\n\\subsubsection*{Call algorithm}\n\n\\begin{par}\nApply INL algorithm to the input data \\lstinline{DI}.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nDO = qwtb('INL-DNL', DI);\n\\end{lstlisting}\n\n        \\begin{lstlisting}[style=output]\nQWTB: no uncertainty calculation\n\\end{lstlisting} \\color{black}\n    \\begin{par}\nPlot results of integral non-linearity. One can clearly observe defects on codes 3 and 11.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nfigure\nplot(DO.INL.v, '-x');\nxlabel('Transition levels')\nylabel('INL (k)')\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{algs_examples_published/INL-DNL_alg_example_02.pdf}\n\\end{center}\n\\begin{par}\nPlot results of differential non-linearity. One can clearly observe defects on transitions 2-3 and 10-11.\n\\end{par} \\vspace{1em}\n\\begin{lstlisting}[style=mcode]\nfigure\nplot(DO.DNL.v, '-x');\nxlabel('Code bins')\nylabel('DNL (k)')\n\\end{lstlisting}\n\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{algs_examples_published/INL-DNL_alg_example_03.pdf}\n\\end{center}\n\n\n\n%%% \\end{document}\n    \n", "meta": {"hexsha": "ca89ee4661f2ed33a70227cd365e40d1e45b5de6", "size": 4277, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/algs_examples_published/doc_INL-DNL.tex", "max_stars_repo_name": "qwtb/qwtb", "max_stars_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-12-09T13:18:54.000Z", "max_stars_repo_stars_event_max_datetime": "2015-12-09T13:18:54.000Z", "max_issues_repo_path": "doc/algs_examples_published/doc_INL-DNL.tex", "max_issues_repo_name": "qwtb/qwtb", "max_issues_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 18, "max_issues_repo_issues_event_min_datetime": "2015-12-09T13:08:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-13T11:33:41.000Z", "max_forks_repo_path": "doc/algs_examples_published/doc_INL-DNL.tex", "max_forks_repo_name": "qwtb/qwtb", "max_forks_repo_head_hexsha": "f6c79c7dca4065fd85d6f1c05257c1af34e85e34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2016-11-11T02:12:47.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-17T12:59:18.000Z", "avg_line_length": 30.9927536232, "max_line_length": 395, "alphanum_fraction": 0.7329904138, "num_tokens": 1327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5555360506813491}}
{"text": "\\section{Conclusion}\n\nIn the present work an easily implemented algorithm to compute\r\\citet{shaw2018selfattention} relative\rpositional embedding with linear complexity has been presented. An\rimplementation of \\citet{choromanski2021rethinking} prefix sum algorithm that doesn't requires custom CUDA\rcode (while maintaining linear complexity) was also presented.\r\n\nThese two elements allowed to define a kernelized attention function\rwith relative positional encoding, that can be computed with linear\rcomplexity with regards to sequence length. The proposed model presents\rlinear scalability with sequence length, can be implemented out-of-the-box in neural network framework, and is adapted to sequence to sequence\rproblems instead of being restricted to encoders only models.\n\n\\endinput\n", "meta": {"hexsha": "7dc8c5aa16af127655ba49187c225c495ef3a091", "size": 788, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/07-conclusions.tex", "max_stars_repo_name": "ScalableTransformer/Scaleformer", "max_stars_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/sections/07-conclusions.tex", "max_issues_repo_name": "ScalableTransformer/Scaleformer", "max_issues_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/07-conclusions.tex", "max_forks_repo_name": "ScalableTransformer/Scaleformer", "max_forks_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-01T06:24:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T06:24:31.000Z", "avg_line_length": 98.5, "max_line_length": 413, "alphanum_fraction": 0.8375634518, "num_tokens": 155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5555360470388536}}
{"text": "\\section{Results \\& Discussion}\n\nOur theoretical discussion provides an abstract machine to perform quantum computation using a measurement-based paradigm and implement circuit-based quantum algorithms using a measurement-based approach. We provide an implementation of Deutsch's algorithm below.\n\nWith a circuit based-formulation, Deutsch's algorithm takes a binary function \\(f: \\Set{0,1} \\rightarrow \\Set{0,1}\\), and decides if it is constant or balanced. The oracle that implements the said function has the following form.\n\\begin{equation}\n  U_f \\, \\ket{x}\\ket{y} = \\ket{x}\\ket{y \\oplus f(x)}\n\\end{equation} \nThe oracle constructions that we have used are given in the Table \\ref{tab:oracle}.\n\n\\begin{table}[htb]\n  \\center\n  \\begin{tabular}{llr}\n    \\toprule\n    Type & Function & Oracle Unitary  \\\\\n    \\midrule\n    Constant & \\(f(x) = 0\\) & \\(\\symbb{I}\\)\\\\\n    Constant & \\(f(x) = 1\\) & \\(\\sigma_x^1\\) \\\\\n    Balanced & \\(f(x) = x\\) & \\(\\cnot\\) \\\\\n    Balanced & \\(f(x) = \\neg x\\) & \\(\\sigma_x^1\\cnot\\) \\\\\n    \\bottomrule\n  \\end{tabular}\n  \\caption{Deutsch's algorithm unitary oracle implementations}\\label{tab:oracle}\n\\end{table}\n\nThe quantum circuit for the algorithm is given in Figure \\ref{fig:deutsch_circuit}, based on \\cite{Vallone2010}.\n\n\\begin{figure}[hb]\n  \\centering\n  \\begin{quantikz}\n    \\lstick{\\(\\ket+\\)} & \\qw & \\qw & \\gate[wires=2][2cm] {U_f} \\gateinput{$x$} \\gateoutput{$x$} & \\gate{H} & \\meter{} \\\\\n    \\lstick{\\(\\ket+\\)} & \\gate{R_z(\\pi)} &\\gate{H} & \\gateinput{$y$} \\gateoutput{$y \\oplus f(x)$} & \\qw & \\qw \n  \\end{quantikz}\n  \\caption{Quantum cicruit implementing Deutsch's algorithm}\\label{fig:deutsch_circuit}\n\\end{figure}\n\nWe used Paddle Quantum\\cite{Paddlequantum} library for simulating our MBQC circuits. The library includes an early version of an MBQC simulator which allows creating a graph state, measuring its vertex qubits and removing by-products. We used the procedure described in the previous section to convert the quantum circuit gate by gate to a series of measurement patterns and removed by-products on each step. 
We were able to distinguish between constant and balanced functions successfully.\n\n", "meta": {"hexsha": "745cfbfb86873c04a440b34846a3c9e6c0b301d6", "size": 2147, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/final/sections/results.tex", "max_stars_repo_name": "kurabirko/phys400", "max_stars_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "documents/final/sections/results.tex", "max_issues_repo_name": "kurabirko/phys400", "max_issues_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "documents/final/sections/results.tex", "max_forks_repo_name": "kurabirko/phys400", "max_forks_repo_head_hexsha": "1e7608322457c090e4db8c52ff1c7c8c55a612c3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.0512820513, "max_line_length": 490, "alphanum_fraction": 0.7116907313, "num_tokens": 609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5555360470388536}}
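As a cross-check of Table \ref{tab:oracle}, the oracle unitaries can also be verified with plain linear algebra in the circuit picture. The sketch below is independent of the MBQC translation and of Paddle Quantum, and it simplifies the state preparation to $\ket{+}\ket{-}$ directly instead of using the $R_z(\pi)$ and $H$ gates of the figure.

\begin{lstlisting}[language=Python]
import numpy as np

# Circuit-picture sanity check of the oracle table: with input |+>|->,
# measuring the first qubit after a final Hadamard yields 0 for a
# constant oracle and 1 for a balanced one.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
oracles = {"f(x)=0": np.eye(4), "f(x)=1": np.kron(I2, X),
           "f(x)=x": CNOT, "f(x)=not x": np.kron(I2, X) @ CNOT}
plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)
for name, U in oracles.items():
    psi = np.kron(H, I2) @ (U @ np.kron(plus, minus))
    p0 = np.sum(np.abs(psi[:2]) ** 2)   # P(first qubit measured as 0)
    print(name, "->", "constant" if p0 > 0.5 else "balanced")
\end{lstlisting}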
{"text": "\\title{Inference}\n\n{{navbar}}\n\n\\subsubsection{Inference}\n\nWe describe how to perform inference in probabilistic models.\nFor background, see the\n\\href{/tutorials/inference}{Inference tutorial}.\n\nSuppose we have a model $p(\\mathbf{x}, \\mathbf{z}, \\beta)$ of data $\\mathbf{x}_{\\text{train}}$ with latent variables $(\\mathbf{z}, \\beta)$.\nConsider the posterior inference problem,\n\\begin{equation*}\nq(\\mathbf{z}, \\beta)\\approx p(\\mathbf{z}, \\beta\\mid \\mathbf{x}_{\\text{train}}),\n\\end{equation*}\nin which the task is to approximate the posterior\n$p(\\mathbf{z}, \\beta\\mid \\mathbf{x}_{\\text{train}})$\nusing a family of distributions, $q(\\mathbf{z},\\beta; \\lambda)$,\nindexed by parameters $\\lambda$.\n\nIn Edward, let \\texttt{z} and \\texttt{beta} be latent variables in the model,\nwhere we observe the random variable \\texttt{x} with\ndata \\texttt{x\\_train}.\nLet \\texttt{qz} and \\texttt{qbeta} be random variables defined to\napproximate the posterior.\nWe write this problem as follows:\n\n\\begin{lstlisting}[language=Python]\ninference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})\n\\end{lstlisting}\n\n\\texttt{Inference} is an abstract class which takes two inputs.  The\nfirst is a collection of latent random variables \\texttt{beta} and\n\\texttt{z}, along with ``posterior variables'' \\texttt{qbeta} and\n\\texttt{qz}, which are associated to their respective latent\nvariables.  The second is a collection of observed random variables\n\\texttt{x}, which is associated to the data \\texttt{x\\_train}.\n\nInference adjusts parameters of the distribution of \\texttt{qbeta}\nand \\texttt{qz} to be close to the\nposterior $p(\\mathbf{z}, \\beta\\,|\\,\\mathbf{x}_{\\text{train}})$.\n\nRunning inference is as simple as running one method.\n\n\\begin{lstlisting}[language=Python]\ninference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})\ninference.run()\n\\end{lstlisting}\n\nInference also supports fine control of the training procedure.\n\n\\begin{lstlisting}[language=Python]\ninference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})\ninference.initialize()\n\ntf.global_variables_initializer().run()\n\nfor _ in range(inference.n_iter):\n  info_dict = inference.update()\n  inference.print_progress(info_dict)\n\ninference.finalize()\n\\end{lstlisting}\n\n\\texttt{initialize()} builds the algorithm's update rules\n(computational graph) for $\\lambda$;\n\\texttt{tf.global\\_variables\\_initializer().run()} initializes $\\lambda$\n(TensorFlow variables in the graph);\n\\texttt{update()} runs the graph once to update\n$\\lambda$, which is called in a loop until convergence;\n\\texttt{finalize()} runs any computation as the algorithm\nterminates.\n\nThe \\texttt{run()} method is a simple wrapper for this procedure.\n\n\\subsubsection{Other Settings}\n\nWe highlight other settings during inference.\n\n\\textbf{Model parameters}.\nModel parameters are parameters in a model that we will always compute\npoint estimates for and not be uncertain about.\nThey are defined with \\texttt{tf.Variable}s, where the inference\nproblem is\n\\begin{equation*}\n\\hat{\\theta} \\leftarrow^{\\text{optimize}}\np(\\mathbf{x}_{\\text{train}}; \\theta)\n\\end{equation*}\n\n\\begin{lstlisting}[language=Python]\nfrom edward.models import Normal\n\ntheta = tf.Variable(0.0)\nx = Normal(loc=tf.ones(10) * theta, scale=1.0)\n\ninference = ed.Inference({}, {x: x_train})\n\\end{lstlisting}\n\nOnly a subset of inference algorithms support estimation of model\nparameters.\n(Note 
also that this inference example does not have any latent\nvariables. It is only about estimating \\texttt{theta} given that we\nobserve $\\mathbf{x} = \\mathbf{x}_{\\text{train}}$. We can add them so\nthat inference is both posterior inference and parameter estimation.)\n\nFor example, model parameters are useful when applying neural networks\nfrom high-level libraries such as Keras and TensorFlow Slim. See\nthe \\href{/api/model-compositionality}{model compositionality} page\nfor more details.\n\n\\textbf{Conditional inference}.\nIn conditional inference, only a subset of the posterior is inferred\nwhile the rest are fixed using other inferences. The inference\nproblem is\n\\begin{equation*}\nq(\\mathbf{z}\\mid\\beta)q(\\beta)\\approx\np(\\mathbf{z}, \\beta\\mid\\mathbf{x}_{\\text{train}})\n\\end{equation*}\nwhere parameters in $q(\\mathbf{z}\\mid\\beta)$ are estimated and\n$q(\\beta)$ is fixed.\n%\nIn Edward, we enable conditioning by binding random variables to other\nrandom variables in \\texttt{data}.\n\\begin{lstlisting}[language=Python]\ninference = ed.Inference({z: qz}, {x: x_train, beta: qbeta})\n\\end{lstlisting}\n\nIn the \\href{/api/inference-compositionality}{compositionality page},\nwe describe how to construct inference by composing\nmany conditional inference algorithms.\n\n\\textbf{Implicit prior samples}.\nLatent variables can be defined in the model without any posterior\ninference over them. They are implicitly marginalized out with a\nsingle sample. The inference problem is\n\\begin{equation*}\nq(\\beta)\\approx\np(\\beta\\mid\\mathbf{x}_{\\text{train}}, \\mathbf{z}^*)\n\\end{equation*}\nwhere $\\mathbf{z}^*\\sim p(\\mathbf{z}\\mid\\beta)$ is a prior sample.\n\n\\begin{lstlisting}[language=Python]\ninference = ed.Inference({beta: qbeta}, {x: x_train})\n\\end{lstlisting}\n\nFor example, implicit prior samples are useful for generative adversarial\nnetworks. 
Their inference problem does not require any inference over\nthe latent variables; it uses samples from the prior.\n\n\\begin{center}\\rule{3in}{0.4pt}\\end{center}\n\n\\begin{itemize}\n  \\item @{ed.inferences.Inference}\n  \\item @{ed.inferences.VariationalInference}\n    \\begin{itemize}\n    \\item @{ed.inferences.KLqp}\n      \\begin{itemize}\n      \\item @{ed.inferences.ReparameterizationKLqp}\n      \\item @{ed.inferences.ReparameterizationKLKLqp}\n      \\item @{ed.inferences.ReparameterizationEntropyKLqp}\n      \\item @{ed.inferences.ScoreKLqp}\n      \\item @{ed.inferences.ScoreKLKLqp}\n      \\item @{ed.inferences.ScoreEntropyKLqp}\n      \\end{itemize}\n    \\item @{ed.inferences.KLpq}\n    \\item @{ed.inferences.GANInference}\n      \\begin{itemize}\n      \\item @{ed.inferences.BiGANInference}\n      \\item @{ed.inferences.ImplicitKLqp}\n      \\item @{ed.inferences.WGANInference}\n      \\end{itemize}\n    \\item @{ed.inferences.MAP}\n      \\begin{itemize}\n      \\item @{ed.inferences.Laplace}\n      \\end{itemize}\n    \\end{itemize}\n  \\item @{ed.inferences.MonteCarlo}\n    \\begin{itemize}\n    \\item @{ed.inferences.Gibbs}\n    \\item @{ed.inferences.MetropolisHastings}\n    \\item @{ed.inferences.HMC}\n    \\item @{ed.inferences.SGLD}\n    \\item @{ed.inferences.SGHMC}\n    \\end{itemize}\n  \\item @{ed.inferences.complete_conditional}\n\\end{itemize}\n", "meta": {"hexsha": "0fc20f283303a434f93f02cfd679e1f2af902b81", "size": 6504, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/api/inference.tex", "max_stars_repo_name": "zhangyewu/edward", "max_stars_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5200, "max_stars_repo_stars_event_min_datetime": "2016-05-03T04:59:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:32:26.000Z", "max_issues_repo_path": "docs/tex/api/inference.tex", "max_issues_repo_name": "zhangyewu/edward", "max_issues_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 724, "max_issues_repo_issues_event_min_datetime": "2016-05-04T09:04:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-28T02:41:12.000Z", "max_forks_repo_path": "docs/tex/api/inference.tex", "max_forks_repo_name": "zhangyewu/edward", "max_forks_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1004, "max_forks_repo_forks_event_min_datetime": "2016-05-03T22:45:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T00:08:08.000Z", "avg_line_length": 34.7807486631, "max_line_length": 139, "alphanum_fraction": 0.7427736777, "num_tokens": 1785, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799929002541068, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5555360422744935}}
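As a concrete instantiation of the abstract pattern above, here is a small end-to-end example using the \texttt{KLqp} algorithm from the list below; the data, shapes and priors are illustrative choices, not part of the API.

\begin{lstlisting}[language=Python]
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Toy data drawn around an unknown location we want to infer.
x_train = np.random.randn(50).astype(np.float32) + 3.0

beta = Normal(loc=0.0, scale=1.0)               # latent variable
x = Normal(loc=tf.ones(50) * beta, scale=1.0)   # likelihood

# Variational approximation q(beta) with trainable parameters.
qbeta = Normal(loc=tf.Variable(0.0),
               scale=tf.nn.softplus(tf.Variable(1.0)))

inference = ed.KLqp({beta: qbeta}, data={x: x_train})
inference.run(n_iter=1000)
\end{lstlisting}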
{"text": "\\chapter{Functional Methods in Quantum Field Theory}\\label{chap:QFT}\nThis chapter introduces a treatment of quantum field theory using functional methods. The main goal is to become familiar with the physical concepts and the notation used throughout this work and to derive the flow equation for the average effective action, introduced by Christof Wetterich in 1993 \\cite{Wetterich1992}. \nFor the derivation of the flow equation we are following \\cite{Gies2006, PawlowskiNPgaugeLecture}.\n\n\\section{Generating Functionals and Correlation Functions}\nConsider a theory setting of $N$ real scalar fields $\\varphi_a(x), a \\in \\{1,\\dots,N\\}$ in $d$-dimensional Euclidean space. The corresponding partition sum in presence of sources $J_a(x)$ reads\n\\begin{align}\n\tZ[J] = \\frac{1}{\\mathcal{N}} \\int \\D\\varphi \\operatorname{e}^{-\\S + J\\cdot\\varphi}.\n\t\\label{eqn:partition}\n\\end{align}\nThe action $\\mathcal{S}$ is specified together with an ultraviolet cutoff scale $\\Lambda$, later being the momentum scale where we initialize the flow equations and some normalization factor $\\mathcal{N}$.\\\\\nIn this notation, the scalar product sums over field components and integrates over all space,\n\\begin{align}\n\tJ\\cdot\\varphi = \\int_x J_a(x) \\ \\varphi_a(x) = \\int_p \\tilde{J}_a(p) \\ \\tilde{\\varphi}_a(p),\n\\end{align}\nwith\n\\begin{align}\n\\int_x = \\int_{\\mathbb{R}^d} \\dd^d x \\qquad \\text{and} \\qquad \\int_p = \\int_{\\mathbb{R}^d} \\frac{\\dd^d p}{(2\\pi)^d}.\t\n\\end{align}\n\nThe partition sum $Z[J]$ is called a \\textit{generating functional}. It directly allows us to compute field expectation values\n\\begin{align}\n\t\\phi := \\cf{\\varphi} = \\eval{\\frac{1}{Z}\\frac{\\delta Z}{\\delta J}}_{J=0} = \\int \\D\\varphi \\ \\varphi \\ \\operatorname{e}^{-\\S + J\\cdot\\varphi}\n\\end{align}\nand higher order correlation functions\n\\begin{align}\n\\cf{\\varphi(x_1) \\cdots \\varphi(x_n)} := \\cf{\\varphi^n} = \\frac{1}{Z}\\eval{\\frac{\\delta^n Z}{\\delta^n J}}_{J=0} = \\int \\D\\varphi \\ \\overbrace{\\varphi_1 \\cdots \\varphi_n}^{:= \\ \\varphi^n} \\ \\operatorname{e}^{-\\S + J\\cdot\\varphi}\n\\end{align}\nvia functional differentiation. This means, we are basically able to compute all contributing Feynman diagrams for our theory setting, if we have knowledge of its corresponding (grand) canonical partition sum. \\\\\n For a more efficient description of the theory in terms of only the \\textit{connected} correlation functions, we define the \\textit{Schwinger functional} $W[J]$ as the logarithm of $Z[J]$,  \n\\begin{align}\nW[J] = \\ln Z[J].\n\\label{eqn:Schwinger}\n\\end{align}\nIt is the generating functional for the connected correlation functions. The normalization factor $\\mathcal{N}$, introduced in (\\ref{eqn:partition}) enters here as an additive constant, which drops out for all higher order correlation functions, except for the zero-point function. This term is connected to the thermodynamic quantities of the system and becomes important, when external parameters such as temperature, volume or the chemical potential are varied. For the case of quantum gravity, it is linked to the cosmological constant $\\Lambda$. 
Nevertheless, in general we are only interested in correlation functions with $n\\geq 1$ and therefore we drop this term.\\\\\nConsider for example the connected two-point function $G_{ab}(x,y) = G_{\\alpha\\beta}$\\footnote{To save on notation, we introduce collective indices $\\alpha = (x,a)$ or $(q,a)$ in momentum space.}, known as the propagator, correlating the field $\\varphi_a$ at spacetime point $x$ with the field $\\varphi_b$ at $y$,\n\\begin{equation}\n\\begin{aligned}\n\tG_{\\alpha\\beta} &= \\frac{\\delta^2W[J]}{\\delta J_{\\alpha}\\delta J_{\\beta}} = \\frac{\\delta}{\\delta J_{\\alpha}}\\left(\\frac{1}{Z}\\frac{\\delta Z}{\\delta J_{\\beta}}\\right)  \\\\[10pt]\n\t\t\t\t&= \\frac{1}{Z}\\left(\\frac{\\delta^2Z}{\\delta J_{\\alpha}\\delta J_{\\beta}}\\right) - \\frac{1}{Z^2}\\left(\\frac{\\delta Z}{\\delta J_{\\alpha}}\\right)\\left(\\frac{\\delta Z}{\\delta J_{\\beta}}\\right)\\\\[10pt]\n\t\t\t\t&= \\cf{\\varphi_{\\alpha}\\varphi_{\\beta}} - \\phi_{\\alpha}\\phi_{\\beta} = \\cf{\\varphi_{\\alpha}\\varphi_{\\beta}}_{\\text{c}}. \n\\end{aligned}\n\\label{eqn:G_connected}\t\t\t\t\t\t\n\\end{equation}\nThe propagator is the key object in functional approaches to quantum field theory. It depends on the chosen background via $J$. \\\\\nIt is still possible to make our computations even more efficient, because $W[J]$ still contains some redundant information. Connected correlation functions can be separated into so-called one-particle irreducible (1PI) and one-particle reducible ones. The 1PI correlation functions are those, whose corresponding Feynman diagrams can \\textit{not} be separated into two disconnected ones by cutting a single internal line. As an example, contributing 1PI and reducible diagrams to the connected four-point function for Yukawa theory are depicted in figure (\\ref{fig:1PI_Yukawa}). \\\\\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figs/TikZ/1PI_Yukawa}\n\\caption[Contributing one-particle reducible and 1PI diagrams to the four-point-function in Yukawa theory]{Contributing one-particle reducible and 1PI diagrams to the four-point-function in Yukawa theory, inspired by \\cite{FloerchingerWetterichQFT}.}\t\n\\label{fig:1PI_Yukawa}\n\\hrulefill\n\\end{figure}\nThe generating functional for the  1PI correlation functions, the \\textit{effective action} $\\Gamma$, is obtained from the Schwinger functional via a Legendre transformation, \n\\begin{equation}\n\t\\Gamma[\\phi]=\\sup _{J}\\left\\{\\int_{x} J(x) \\phi(x)-W[J]\\right\\}=\\int_{x} J_{\\mathrm{sup}}(x) \\phi(x)-W\\left[J_{\\mathrm{sup}}\\right],\n\\label{eqn:Def_Gamma}\n\\end{equation}\nwhere $J_{\\mathrm{sup}}$ has to be understood as a field-dependent current $J_{\\mathrm{sup}}[\\phi]$. In the following, we will drop the subscript, its meaning is implicitly understood. \nThe quantum equation of motion derived from $\\Gamma$ reads\n\\begin{align}\n\tJ(x) = \\frac{\\delta\\Gamma[\\phi]}{\\delta\\phi(x)}.\n\t\\label{eqn:quantum_eom}\n\\end{align}\nIt allows us to understand the dynamics of field expectation values, taking the effects of all quantum fluctuations into account.\nFrom a physical point of view, the effective action $\\Gamma$ is the quantum analogue of the classical action $\\mathcal{S}$. The performed Legendre transformation leads us to a mean field description of our theory with $\\phi = \\cf{\\varphi}$ on a given background, as introduced before. 
The symmetries of the classical action are in general still present in the effective action.\\\\\nIn terms of the effective action, higher order correlation functions are again obtained by performing functional derivatives, but now w.\\,r.\\,t. the mean field $\\phi$,\n\\begin{align}\n\t\\Gamma^{(n)}\\left(x_{1}, \\ldots, x_{n}\\right)=\\frac{\\delta^{n} \\Gamma}{\\delta \\phi\\left(x_{1}\\right) \\cdots \\delta \\phi\\left(x_{n}\\right)}.\n\\end{align}\nWith the definition of the effective action (\\ref{eqn:Def_Gamma}), we find\n\\begin{equation}\n\t\\operatorname{e}^{-\\Gamma[\\phi]}=\\int_{\\Lambda} \\mathcal{D} \\varphi \\exp \\left(-\\mathcal{S}[\\phi+\\varphi]+\\int_x \\frac{\\delta \\Gamma[\\phi]}{\\delta \\phi(x)} \\varphi(x)\\right).\n\\end{equation}  \nThe solution of such functional integro-differential equations is highly non-trivial. To solve this problem, we want to make use of the Functional Renormalization Group. The general idea of this approach is to introduce a scale-dependent action $\\Gamma_k$, interpolating between the bare, microscopic action $\\mathcal{S}$ and the full quantum effective action $\\Gamma$. A more formal motivation and a derivation of the equation governing this interpolation process are presented in the next section.   \n \\section{Functional Renormalization Group}\nThe Functional Renormalization Group (FRG) is a mathematical tool that allows us to investigate the dynamics of physical systems on different energy (momentum) scales. This idea is based on a continuous version of Leo P. Kadanoff's block spin model on the lattice \\cite{Kadanoff1966} and was developed by Kenneth G. Wilson in 1971 \\cite{Wilson1971}. It aims at solving the theory by successively integrating momentum shell by momentum shell, which is why the path integral approach to quantum field theory provides a suitable framework. The main advantage of the FRG approach is that no separate regularization or renormalization procedure has to be applied; the latter is already implemented systematically, which secures the self-consistency of the approach. As this section is only supposed to introduce the basics of the FRG, we refer the interested reader to more complete reviews, e.\\,g. \\cite{Pawlowski2005, Gies2006}, particularly for applications in different areas of physics.\n\nAs a first step towards an FRG equation we need to introduce an infrared cutoff scale $k$ in our theory, below which the modes are not integrated out. A common way to introduce such a scale is by adding a scale-dependent cutoff term $\\Delta\\mathcal{S}_k$ in the definition of the partition sum (\\ref{eqn:partition}) and therefore automatically also in the definition of the Schwinger functional (\\ref{eqn:Schwinger}):\n\\begin{align}\nW_{k}[J]=\\ln Z_{k}[J]=\\ln \\int \\mathcal{D} \\varphi  \\operatorname{e}^{-\\mathcal{S}[\\varphi]+J \\cdot \\varphi-\\Delta \\mathcal{S}_{k}[\\varphi]}.\n\\label{eqn:Wk}\n\\end{align}\nThe physical scale $k$ introduced here is known as the \\textit{renormalization scale} and has units of inverse length, meaning that large $k$ corresponds to small distances and vice versa. The cutoff term $\\Delta\\mathcal{S}_k$ is a quadratic functional of the field $\\varphi$:\n\\begin{align}\n\t\\Delta \\mathcal{S}_{k}[\\varphi]=\\frac{1}{2} \\varphi \\cdot R_{k} \\cdot \\varphi=\\frac{1}{2} \\int_{x, y} \\varphi_{\\alpha} \\ R_{k, \\alpha\\beta} \\ \\varphi_{\\beta}.\n\\end{align}\nThe function $R_k$ is called the \\textit{regulator}. 
It plays an important role in this formulation of quantum field theory. The regulator is chosen such that only the propagation of momentum modes with $p^2 \\lesssim k^2$ is suppressed. The most important physical limits are summarized in the following:\n\\begin{align}\n\tR_{k}(p^2) \\rightarrow\\left\\{\\begin{array}{ll}{k^{2}} & {\\text { for } p \\rightarrow 0} \\\\ {0} & {\\text { for } p \\rightarrow \\infty} \\\\ {0} & {\\text { for } k \\rightarrow 0} \\\\ {\\infty} & {\\text { for } k \\rightarrow \\Lambda}\\end{array}\\right.\n\\label{eqn:regulator_limits}\n\\end{align}\nA convenient choice of the regulator is given by\n\\begin{align}\n\tR_k(p^2) = p^2 \\cdot r_k(y),\n\\end{align}\nwith $ y := \\frac{p^2}{k^2}$ and a dimensionless regulator shape function $r_k$, which depends only on the dimensionless momentum ratio $y$. There is a plethora of different types of shape functions, but for the computations performed in this work we restrict ourselves to a class of rather simple, so-called Litim-type regulators with shape functions\n\\begin{align}\nr_k(y) = \\left(\\frac{1}{y} - 1\\right)\\theta(1-y),\n\\label{eqn:Litim}\t\n\\end{align}\nwhere $\\theta$ is the Heaviside step function. This class of \\textit{sharp} regulators is a good choice for finding analytic FRG equations in simple approximations. For numerical approaches, exponential regulators such as the one depicted in figure (\\ref{fig:exp_regulator}) are well suited.\\\\\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figs/Plots/regulator_plot}\n\\caption[Shape of a typical exponential regulator function $R(p^2)$ and its derivative w.\\,r.\\,t. the RG time $t$.]{Shape of a typical exponential regulator function $R(p^2)$ and its derivative w.\\,r.\\,t. the RG time $t$. The regulator has a finite value for momenta $p^2$ smaller than $k^2$ and therefore acts as a suppressing mass term. The peak of $\\partial_tR_k$ around $k^2=p^2$ clearly shows the implementation of Wilson's idea of shell-wise momentum integration.}\t\n\\label{fig:exp_regulator}\n\\hrulefill\n\\end{figure}\nAt this point it is quite convenient to introduce the \\textit{RG time} $t$ as\n\\begin{align}\n\tt = \\ln\\left(\\frac{k}{\\Lambda}\\right) \\qquad \\longrightarrow\\qquad \\partial_t = \\frac{\\partial}{\\partial\\ln(k/\\Lambda)} = \\frac{k}{\\Lambda}\\frac{\\partial}{\\partial(k/\\Lambda)} = k \\partial_k,\n\\end{align}\nwhere $\\Lambda$ is a fixed reference scale. Usually one chooses the ultraviolet cutoff scale, at which the flow is initialized.\\\\\nIn this setting, (\\ref{eqn:Wk}) provides a good starting point for solving the theory by successively lowering the cutoff scale $k$ infinitesimally and integrating out all momentum modes $\\varphi_{p\\approx k}$. 
This procedure can be formalized by taking a scale derivative of our scale-dependent functional (\\ref{eqn:Wk}):\n\\begin{equation}\n\\begin{aligned} \\partial_{t} W_{k}[J] &=-\\frac{1}{2} \\int \\mathcal{D} \\varphi \\ \\varphi(-p) \\partial_{t} R_{k}(p) \\varphi(p) \\operatorname{e}^{-\\mathcal{S}[\\varphi]+ J \\cdot\\varphi - \\Delta \\mathcal{S}_{k}[\\varphi]} \\\\ &=-\\frac{1}{2} \\int_p \\partial_{t} R_{k}(p) G_{k}(p)+\\partial_{t} \\Delta S_{k}[\\phi], \n\\label{eqn:dtW}\n\\end{aligned}\n\\end{equation}\nwhere we used the definition of the connected propagator:\n\\begin{equation}\nG_k = \\frac{\\delta^2 W_k[\\phi]}{\\delta\\phi(x)\\delta\\phi(y)}.\n\\label{eqn:Gk}\n\\end{equation}\nThe \\textit{flowing} or \\textit{effective average action} $\\Gamma_k$ is then again defined via a modified Legendre transformation, including the insertion of $\\Delta S_k$:\n\\begin{equation}\n\t\\Gamma_{k}[\\phi]=\\sup _{J}\\left(\\int_x J(x) \\phi(x)-W_{k}[J]\\right)-\\Delta S_{k}[\\phi].\n\\end{equation}\nThis yields the modified, scale-dependent quantum equation of motion:\n\\begin{align}\n\tJ(x) = \\frac{\\delta\\Gamma_k[\\phi]}{\\delta\\phi(x)} + \\left(R_k\\phi\\right)(x).\n\\end{align}\nCompared to the scale-independent version (\\ref{eqn:quantum_eom}), we find an additional, regulator dependent term, but with the properties of the regulator presented in (\\ref{eqn:regulator_limits}) in mind, we see that in the limit $k\\rightarrow 0 $ the initial equation of motion is restored.\nWe find\n\\begin{equation}\n\t\\frac{\\delta J(x)}{\\delta \\phi(y)}=\\frac{\\delta^{2} \\Gamma_{k}[\\phi]}{\\delta \\phi(x) \\delta \\phi(y)}+R_{k}(x, y).\\label{eqn:dJdphi}\n\\end{equation}\nWith the help of these relations we are able to show that \n\\begin{equation}\n\\begin{aligned} \\delta\\left(x-x^{\\prime}\\right) =\\frac{\\delta J(x)}{\\delta J\\left(x^{\\prime}\\right)}&=\\int_y \\frac{\\delta J(x)}{\\delta \\phi(y)} \\frac{\\delta \\phi(y)}{\\delta J\\left(x^{\\prime}\\right)} \\\\[10pt] &=\\int_y\\left(\\Gamma_{k}^{(2)}[\\phi]+R_{k}\\right)(x, y) \\ G_{k}\\left(y-x^{\\prime}\\right).\n\\end{aligned}\n\\end{equation}\nHere, we used (\\ref{eqn:dJdphi}) and the definition of $G_k$ (\\ref{eqn:Gk}). \nThis yields the following important identity:\n\\begin{equation}\n\tG_k = \\left(\\Gamma_k^{(2)} + R_k\\right)^{-1}.\n\t\\label{eqn:inverse_prop_identity}\n\\end{equation}\nAltogether, we arrive at the \\textit{flow equation}, a.\\,k.\\,a. the \\textit{Wetterich equation} for the average effective action:\n\\begin{equation}\n\\begin{aligned}\n\\partial_t \\Gamma_k[\\phi] &\\overset{\\phantom{(\\ref{eqn:dtW})}}{=} -\\partial_t W_k +\\int\\left(\\partial_t J\\right) \\phi - \\partial_t \\Delta S_k[\\phi] = - \\partial_t W_k[J] - \\partial_t \\Delta S_k[\\phi] \\\\[5pt] \n&\\overset{(\\ref{eqn:dtW})}{=} \\frac{1}{2} \\int_p G_{k}(p) \\ \\partial_{t} R_{k}(p)\\\\[5pt]\n&\\overset{(\\ref{eqn:inverse_prop_identity})}{=}\\frac{1}{2} \\operatorname{STr}\\left[\\left(\\Gamma_{k}^{(2)}[\\phi]+R_{k}\\right)^{-1}\\partial_{t} R_{k}\\right].\n\\end{aligned}\n\\label{eqn:Wetterich}\n\\end{equation}\nThe supertrace $\\operatorname{STr}$ sums over all internal indices and integrates over momentum space. For Grassmann fields, it also involves the inclusion of a minus sign. 
We will drop the $\\operatorname{S}$ for the rest of this work; its meaning should be understood implicitly.\nThe flow equation can be represented diagrammatically as a $1$-loop equation:\n\\begin{figure}[H]\n\\centering\n\\begin{gather}\n\\begin{aligned}\n\\includegraphics[scale=1.1]{figs/TikZ/wetterich_equation}\n\\end{aligned}\n\\end{gather}\n\\end{figure}\n\\vspace{-0.7cm}\nThe full propagator $\\left[\\Gamma_k^{(2)} + R_k\\right]^{-1}$ is represented as usual as a single, double, dashed etc. line, depending on the field content.\nThe crossed circle $\\otimes$ denotes the insertion of the respective regulator or, more precisely, of its derivative w.\\,r.\\,t. the RG time $t$. Here $\\partial_{t} R_{k, ij}(p, q)=\\partial_{t} R_{k}(p^2)(2 \\pi)^{d} \\  \\delta_{i j} \\ \\delta(p-q)$ and therefore the trace on the r.\\,h.\\,s. effectively sums over just one index $i$ and integrates over one loop momentum $p$. \\\\\n\\begin{figure}[t]\n\\centering\n\\includegraphics{figs/TikZ/regulator_dependence}\n\\caption[Flow of $\\Gamma_k$ through infinite-dimensional theory space for different regulators.]{Flow of $\\Gamma_k$ through infinite-dimensional theory space for different regulators, inspired by \\cite{Gies2006}. Although the trajectories in theory space, governed by the flow equation (\\ref{eqn:Wetterich}), may be different, they flow towards the same quantum effective action $\\Gamma_{k\\rightarrow 0} \\equiv \\Gamma$.}\t\n\\label{fig:theory_space}\n\\hrulefill\n\\end{figure}\nIt is important to mention that the Wetterich equation is an \\textit{exact} equation: no approximations have been made. The only modification, the implementation of $\\Delta S_k$, vanishes in the limit $k\\rightarrow 0$. Solutions of the flow equation correspond to trajectories in \\textit{theory space}, the space spanned by all (= infinitely many) dimensionless couplings $g_{\\alpha}$. The choice of the regulator has a direct impact on the exact form of the trajectory. This is often referred to as \\textit{scheme dependence}. Nevertheless, for all regulators satisfying the properties (\\ref{eqn:regulator_limits}) it is guaranteed that the flow will lead to the same quantum effective action $\\Gamma$. For a visualization of this idea, have a look at figure (\\ref{fig:theory_space}). In principle this means that $\\lim_{k\\rightarrow 0}\\Gamma_k\\equiv\\Gamma$, but in most practical cases it is unavoidable to employ truncation schemes to be able to solve the flow equation. A plethora of different truncation schemes has been developed recently; details concerning the most important schemes can be found, e.\\,g., in the reviews on the FRG we referred to at the beginning of this section. We want to conclude this chapter with a more formal discussion of the concept of theory space before proceeding to an introduction of the basic concepts of (classical) gravity.\n\\section{Renormalization Group Flow and Theory Space}\nWe want to use this section to formalize the concept of theory space we introduced in the last section and to discuss important characteristics of the renormalization group flow such as the $\\beta$-functions and their zeros, the fixed points of the flow. For this part, we mainly follow \\cite{ReuterSaueressig2012}. \\\\\nThe theory space is defined as the space spanned by all dimensionless couplings of the theory. To be more precise, it consists of all (action) functionals $A:\\Phi \\mapsto A[\\Phi]$ that are compatible with the imposed symmetries of the theory, such as 
diffeomorphism invariance in the case of (quantum) gravity. \\\\\nThe flow equation (\\ref{eqn:Wetterich}) defines a vector field $\\vec{\\beta}$ in theory space whose integral curves are the trajectories $\\Gammak$ parametrized by the scale $k$. Assuming the existence of a complete set of basis functionals $\\left\\{P_{\\alpha}[\\ \\cdot \\ ]\\right\\}$, we can expand $\\Gamma_k$ as follows:\n\\begin{equation}\n\t\\Gamma_{k}[\\Phi, \\bar{\\Phi}]=\\sum_{\\alpha=1}^{\\infty} \\bar{g}_{\\alpha}(k) P_{\\alpha}[\\Phi, \\bar{\\Phi}].\n\\end{equation}\nHere, the expansion coefficients $ \\bar{g}_{\\alpha}(k)$ are given by the generalized couplings. Inserting this ansatz into the flow equation (\\ref{eqn:Wetterich}) yields a set of infinitely many coupled differential equations for the couplings:\n\\begin{equation}\n\tk \\partial_{k} \\bar{g}_{\\alpha}(k)=\\bar{\\beta}_{\\alpha}\\left(\\bar{g}_{1}, \\bar{g}_{2}, \\cdots ; k\\right), \\qquad \\alpha=1,2, \\cdots\n\\end{equation}\nThe \\textit{beta functions} $\\bar{\\beta}_{\\alpha}\\left(\\bar{g}_{1}, \\bar{g}_{2}, \\cdots ; k\\right)$  are the components of the vector field $\\vec{\\beta}$ and arise from an expansion of the trace on the r.\\,h.\\,s. of the flow equation in terms of the functional basis\\footnote{The expansion reads: $\\frac{1}{2}\\tr{\\cdots} = \\sum_{\\alpha=1}^{\\infty} \\bar{\\beta}_{\\alpha}\\left(\\bar{g}_{1}, \\bar{g}_{2}, \\cdots ; k\\right) P_{\\alpha}[\\Phi, \\bar{\\Phi}]$.}. Up to this point, we are still dealing with dimensionful couplings $\\bar{g}$, but, as mentioned earlier, the flow equation is usually expressed in terms of \\textit{dimensionless couplings}\n\\begin{equation}\n\tg_{\\alpha} \\equiv k^{-d_{\\alpha}} \\bar{g}_{\\alpha},\n\\end{equation}\nwhere $d_{\\alpha}$ is the canonical mass dimension of the respective coupling. The \\textit{essential} couplings\\footnote{Essential in this sense means that they cannot be absorbed into the fields via a rescaling.} provide a set of coordinates for the theory space. This allows us to interpret the idea of renormalization theory in a new, geometrical way: We need to construct \\enquote{infinitely long} trajectories $\\Gammak$ that lie \\textit{entirely} in theory space. In this case, the couplings are prevented from diverging and we are able to define a consistent quantum field theory. \\newpage\nA \\textit{fixed point} $g^{*}$ of the flow is a zero of the vector field $\\vec{\\beta}$, i.\\,e.  $\\beta_{\\alpha}(g^{*})\\equiv 0 \\ \\forall \\alpha$. The existence of such fixed points is crucial for our discussion of Asymptotic Safety as an approach to quantum gravity, based on the concepts we introduced here. \\\\\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{figs/TikZ/hypersurface}\n\t\\caption[Visualization of a fixed point with its corresponding UV hypersurface $\\Sigma_{\\mathrm{UV}}$ in theory space.]{Visualization of a fixed point $g^{*}$ (orange dot) with its corresponding UV hypersurface $\\Sigma_{\\mathrm{UV}}$ and trajectories starting at $g^{*}$ (green) in theory space. The flow points towards the IR. Trajectories starting off the surface (red) are pulled towards the FP along the irrelevant direction (here: $g_1$) until the IR repulsive directions $g_2$ and $g_3$ dominate and drive the flow away from $g^{*}$. This figure is inspired by \\cite{Eichhorn2018}.}\\label{fig:hypersurface}\n\t\\hrulefill\n\\end{figure}\nIn general, one distinguishes different classes of fixed points. 
The \\textit{Gaussian} or \\textit{non-interacting} fixed points (GFP) are classified by $g^{*}_{\\alpha}=0 \\ \\forall\\alpha$. This class of fixed points is relevant for perturbation theory, where the limit $k\\rightarrow\\infty$ is taken at such a GFP\\footnote{E.\\,g. in Yang-Mills theory, the concept of \\textit{Asymptotic freedom}, where the couplings tend to zero in the limit $k\\rightarrow\\infty$, is based on the existence of a UV attractive Gaussian fixed point, rendering the theory perturbatively renormalizable \\cite{GrossWilczek1973}.}. If at least one of the couplings $g^{*}_{\\alpha}\\neq 0$, the fixed point is classified as \\textit{Non-Gaussian} or \\textit{interacting} (NGFP). The idea of Asymptotic Safety relies on the existence of such an NGFP, rendering the theory \\enquote{safe} from divergences in the ultraviolet (UV) regime. An important characteristic of a fixed point is its stability, or more precisely whether it is \\textit{attractive} or \\textit{repulsive} for nearby RG trajectories. Additionally, one distinguishes between infrared ($k\\rightarrow0$) and ultraviolet ($k\\rightarrow\\infty$) attractive (repulsive) fixed points. To analyze this behavior, the flow near a fixed point is linearized, i.\\,e.\n\\begin{equation}\n\t\\partial_{t} g_{\\alpha}(k)=\\sum_{j=1}^{\\infty} B_{\\alpha j}\\left(g_{j}-g_{j}^{*}\\right),\n\t\\label{eqn:linearized_flow}\n\\end{equation}\nwhere we defined the \\textit{stability matrix} \\ $\\mathbf{B} = B_{\\alpha j} = \\partial_j\\beta_{\\alpha}(g^{*}_{\\alpha}) $. The solution of the differential equation (\\ref{eqn:linearized_flow}) reads:\n\\begin{equation}\n\tg_{\\alpha}(k)=g_{\\alpha}^{*}+\\sum_{j=1}^{\\infty} C_{j} V_{\\alpha}^{j}\\left(\\frac{k}{k_{0}}\\right)^{\\theta_{j}}.\n\\end{equation}\nHere, the $V^{j}$ are the eigenvectors of the stability matrix with eigenvalues $\\theta_j$ a.\\,k.\\,a. \\textit{critical exponents}. In general, the $\\theta_{j}$ are complex numbers. We use the real part of the critical exponents to classify the coupling as \\textit{relevant} (= attractive) or \\textit{irrelevant} (= repulsive):\n\\begin{align}\n\tg^{*}_{\\alpha} \\ \\text{ is } \\left\\{\\begin{array}{ll}{\\text{relevant }} & {\\text { for } \\ \\mathfrak{Re}\\left(\\theta_j\\right) > 0} \\\\[10pt] {\\text{irrelevant }} & {\\text { for } \\  \\mathfrak{Re}\\left(\\theta_j\\right) < 0} \\\\ \\end{array}\\right..\n\\end{align}\nFixed points with critical exponents $\\theta_{j}=0$ are called \\textit{marginal}. Based on this classification, it follows quite naturally to define a UV (or IR) \\textit{critical hypersurface} $\\Sigma_{\\mathrm{UV}}$ in theory space for an NGFP, consisting of all points that are pulled into the NGFP for increasing $k$. The dimension of $\\Sigma_{\\mathrm{UV}}$ is equal to the number of UV relevant couplings. This means that trajectories lying on such a hypersurface tend to flow towards the fixed point in the UV limit. To visualize this idea, a schematic sketch of such a hypersurface in a $3$-dimensional 
theory space is depicted in figure (\\ref{fig:hypersurface}).", "meta": {"hexsha": "2914e2ad71b80e5cb843246f3c982130fbce8fb3", "size": 25157, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Thesis/content/02_functional_methods.tex", "max_stars_repo_name": "mathieukaltschmidt/BSc-Thesis", "max_stars_repo_head_hexsha": "d930ee60ab526835c904252e68272408f3d6a16f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-22T15:05:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-22T15:05:57.000Z", "max_issues_repo_path": "Thesis/content/02_functional_methods.tex", "max_issues_repo_name": "mathieukaltschmidt/BSc-Thesis", "max_issues_repo_head_hexsha": "d930ee60ab526835c904252e68272408f3d6a16f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Thesis/content/02_functional_methods.tex", "max_forks_repo_name": "mathieukaltschmidt/BSc-Thesis", "max_forks_repo_head_hexsha": "d930ee60ab526835c904252e68272408f3d6a16f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-25T05:06:03.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-25T05:06:03.000Z", "avg_line_length": 116.4675925926, "max_line_length": 1368, "alphanum_fraction": 0.742815121, "num_tokens": 7251, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.6959583187272712, "lm_q1q2_score": 0.5555047278660049}}
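The linearized fixed-point analysis above is easy to carry out numerically. The sketch below uses an invented two-coupling toy system (the beta functions are ours, purely for illustration); only the procedure mirrors the text: locate $g^{*}$, build the stability matrix $B_{\alpha j} = \partial_j\beta_{\alpha}$, and read off the critical exponents $\theta_j$ as its eigenvalues.

\begin{lstlisting}[language=Python]
import numpy as np

# Toy fixed-point analysis; the beta functions are invented for
# illustration and are not taken from this work.
def beta(g):
    g1, g2 = g
    return np.array([-g1 + g1 * g2, -2.0 * g2 + g1 ** 2])

g_star = np.zeros(2)          # Gaussian fixed point of this toy system
eps = 1e-6
# Stability matrix B[i, j] = d(beta_i)/d(g_j) by central differences.
B = np.array([(beta(g_star + eps * e) - beta(g_star - eps * e)) / (2 * eps)
              for e in np.eye(2)]).T
theta = np.linalg.eigvals(B)  # critical exponents of the linearized flow
relevant = theta.real > 0     # classification rule used in the text
print(theta, relevant)
\end{lstlisting}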
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% UMB-CS114-2015F: Introduction to Programming in Java\n% Copyright 2015 Pejman Ghorbanzade <pejman@ghorbanzade.com>\n% Creative Commons Attribution-ShareAlike 4.0 International License\n% More info: https://github.com/ghorbanzade/UMB-CS114-2015F\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\def \\topDirectory {.}\n\\def \\texDirectory {\\topDirectory/src/main/tex}\n\n\\documentclass[12pt,letterpaper,twoside]{article}\n\\usepackage{\\texDirectory/template/style/directives}\n\\usepackage{\\texDirectory/template/style/assignment}\n\\input{\\texDirectory/template/config}\n\n\\begin{document}\n\n\\doc{title}{Solution to Assignment 4}\n\\doc{date-pub}{Nov 04, 2015 at 5:30 PM}\n\\doc{date-due}{Nov 18, 2015 at 5:30 PM}\n\\doc{points}{8}\n\n\\prepare{header}\n\n\\section*{Question 1}\n\nAn $n \\times n$ matrix is called a \\textit{positive Markov matrix} if each element is positive and the sum of the elements in each column is 1.\nWrite a program \\texttt{MarkovMatrix.java} that prompts the user to enter a $3 \\times 3$ matrix of double values.\nUse a method with the following signature to test if the given matrix is a Markov matrix.\n\n\\begin{terminal}\npublic static boolean isMarkovMatrix(double[][] matrix)\n\\end{terminal}\n\nYour program is expected to function as shown in following examples:\n\n\\begin{terminal}\n$ javac MarkovMatrix.java\n$ java MarkovMatrix\nEnter Row 1: 0.15 0.875 0.375\nEnter Row 2: 0.55 0.005 0.225\nEnter Row 3: 0.30 0.12 0.4\nMarkov matrix given.\n\\end{terminal}\n\n\\newpage\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class MarkovMatrix {\n\tpublic static void main(String[] args) {\n\t\tdouble[][] matrix = initMatrix(3, 3);\n\t\tif (isMarkovMatrix(matrix))\n\t\t\tSystem.out.println(\"Markov matrix given.\");\n\t\telse\n\t\t\tSystem.out.println(\"Matrix not Markov.\");\n\t}\n\tpublic static double[][] initMatrix(int row, int col) {\n\t\tScanner input = new Scanner(System.in);\n\t\tdouble[][] matrix = new double[row][col];\n\t\tfor (int i = 0; i < row; i++) {\n\t\t\tSystem.out.printf(\"Enter Row %d: \", i + 1);\n\t\t\tfor (int j = 0; j < col; j++)\n\t\t\t\tmatrix[i][j] = input.nextDouble();\n\t\t}\n\t\tinput.close();\n\t\treturn matrix;\n\t}\n\tpublic static boolean isMarkovMatrix(double[][] matrix) {\n\t\tfor (int i = 0; i < matrix[0].length; i++) {\n\t\t\tdouble sum = 0;\n\t\t\tfor (int j = 0; j < matrix.length; j++) {\n\t\t\t\tif (matrix[j][i] <= 0)\n\t\t\t\t\treturn false;\n\t\t\t\tsum += matrix[j][i];\n\t\t\t}\n\t\t\tif (sum != 1)\n\t\t\t\treturn false;\n\t\t}\n\t\treturn true;\n\t}\n}\n\\end{lstlisting}\n\n\\newpage\n\n\\section*{Question 2}\n\nWrite a program \\texttt{PointAndSphere2.java} that prompts user to enter respectively, coordinates of a point, coordinates of the center of a sphere and radius of the sphere.\nYour program would then determine the location of the point with respect to the sphere.\nThe point should be an instance of class \\texttt{Point} and the sphere should be an instance of class \\texttt{Sphere}.\nFollowing is an example of an accepted output format.\n\n\\begin{terminal}\n$ javac Point.java Sphere.java PointAndSphere2.java\n$ java PointAndSphere2\nCoordinates of Point: 1 1 1\nCoordinates of Sphere: 0 0 0\nRadius of Sphere: 1.7\nThe point is outside the 
sphere.\n\\end{terminal}\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\npublic class Point {\n\tpublic double[] coordinates;\n\tpublic Point(double[] coordinates) {\n\t\tthis.coordinates = coordinates;\n\t}\n\tpublic double getDistance(Point point) {\n\t\tdouble sum = 0;\n\t\tfor (int i = 0; i < 3; i++)\n\t\t\tsum += Math.pow(this.coordinates[i] - point.coordinates[i], 2);\n\t\treturn Math.sqrt(sum);\n\t}\n}\n\\end{lstlisting}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\npublic class Sphere {\n\tpublic Point center;\n\tpublic double radius;\n\tpublic Sphere(Point center, double radius) {\n\t\tthis.center = center;\n\t\tthis.radius = radius;\n\t}\n}\n\\end{lstlisting}\n\n\\newpage\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class PointAndSphere2 {\n\tpublic static void main(String[] args) {\n\t\tdouble[] pointCoordinates = promptCoordinates(\"Point\");\n\t\tdouble[] sphereCoordinates = promptCoordinates(\"Sphere\");\n\t\tdouble radius = promptRadius();\n\t\tPoint point = new Point(pointCoordinates);\n\t\tPoint center = new Point(sphereCoordinates);\n\t\tSphere sphere = new Sphere(center, radius);\n\t\tdouble dist = sphere.center.getDistance(point);\n\t\tif (dist > sphere.radius)\n\t\t\tSystem.out.println(\"The point is outside the sphere.\");\n\t\telse if (dist == sphere.radius)\n\t\t\tSystem.out.println(\"The point is on the sphere.\");\n\t\telse\n\t\t\tSystem.out.println(\"The point is inside the sphere.\");\n\t}\n\tpublic static double[] promptCoordinates(String name) {\n\t\tScanner input = new Scanner(System.in);\n\t\tSystem.out.printf(\"Coordinates of %s: \", name);\n\t\tdouble[] coordinates = new double[3];\n\t\tfor (int i = 0; i < coordinates.length; i++)\n\t\t\tcoordinates[i] = input.nextDouble();\n\t\treturn coordinates;\n\t}\n\tpublic static double promptRadius() {\n\t\tScanner input = new Scanner(System.in);\n\t\tSystem.out.print(\"Radius of Sphere: \");\n\t\tdouble radius = input.nextDouble();\n\t\treturn radius;\n\t}\n}\n\\end{lstlisting}\n\n\\newpage\n\n\\section*{Question 3}\n\nWrite a program \\texttt{MatrixFiller2.java} that prompts user for a number $x$ between 1 to 9 and instantiates an $x \\times x$ matrix from class \\texttt{Matrix.java} whose elements are randomly generated from range 1 to $x^2$.\nFollowing is an expected sample run of your program.\n\n\\begin{terminal}\n$ javac Matrix.java MatrixFiller2.java\n$ java MatrixFiller2\nSize of Matrix: 4\n 05 13 07 16\n 12 02 10 01\n 09 14 14 08\n 02 05 01 14\n\\end{terminal}\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\npublic class Matrix {\n\tint[][] elements;\n\tpublic Matrix(int row, int col) {\n\t\tthis.elements = new int[row][col];\n\t\tfor (int i = 0; i < row; i++)\n\t\t\tfor (int j = 0; j < col; j++) {\n\t\t\t\tint rand = (int) (Math.random() * row * col) + 1;\n\t\t\t\tthis.elements[i][j] = rand;\n\t\t\t}\n\t}\n\tpublic void display() {\n\t\tint row = this.elements.length;\n\t\tint col = this.elements[0].length;\n\t\tfor (int i = 0; i < row; i++) {\n\t\t\tfor (int j = 0; j < col; j++)\n\t\t\t\tSystem.out.printf(\" %02d\", elements[i][j]);\n\t\t\tSystem.out.println();\n\t\t}\n\t}\n}\n\\end{lstlisting}\n\n\\newpage\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class MatrixFiller2 {\n\tpublic static void main(String[] args) {\n\t\tScanner input = new Scanner(System.in);\n\t\tSystem.out.print(\"Size of Matrix: \");\n\t\tint size = 
input.nextInt();\n\t\tinput.close();\n\t\tMatrix matrix = new Matrix(size, size);\n\t\tmatrix.display();\n\t}\n}\n\\end{lstlisting}\n\n\\section*{Question 4}\n\nWrite a class \\texttt{Circle.java} from which we can instantiate a circle by giving its radius and use it as is shown in the following program.\n\n\\lstset{caption=}\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class Circles {\n\tpublic static void main(String[] args) {\n\t\tScanner input = new Scanner(System.in);\n\t\tSystem.out.print(\"Enter radius: \");\n\t\tdouble radius = input.nextDouble();\n\t\tinput.close();\n\t\tCircle myCircle = new Circle(radius);\n\t\tdouble area = myCircle.getArea();\n\t\tdouble perimeter = myCircle.getCircumference();\n\t\tSystem.out.printf(\"Area: %.2f, Perimeter: %.2f\\n\", area, perimeter);\n\t}\n}\n\\end{lstlisting}\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\npublic class Circle {\n\tdouble radius;\n\tpublic Circle(double radius) {\n\t\tthis.radius = radius;\n\t}\n\tpublic double getArea() {\n\t\tdouble area = Math.PI * Math.pow(this.radius, 2);\n\t\treturn area;\n\t}\n\tpublic double getCircumference() {\n\t\tdouble circumference = 2 * Math.PI * this.radius;\n\t\treturn circumference;\n\t}\n}\n\\end{lstlisting}\n\n\\section*{Question 5}\n\nThe code snippet given below is content of a file \\texttt{Kitten.java} found in a public repository.\nUnfortunately, the program cannot be executed because the file \\texttt{Cat.java} which defines the class \\texttt{Cat} is missing.\nYou are expected to develop the class \\texttt{Cat} in a file \\texttt{Cat.java} such that \\texttt{Kitten.java} is successfully executed.\n\n\\lstset{caption=}\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\nimport java.util.Scanner;\npublic class Kitten {\n\tpublic static void main(String[] args) {\n\t\tCat myCat = new Cat(\"Kitty\");\n\t\tdouble[] movement = promptMove(myCat);\n\t\tmyCat.move(movement[0], movement[1]);\n\t\tmyCat.showPosition();\n\t\tmyCat.showDistance();\n\t}\n\tpublic static double[] promptMove(Cat myCat) {\n\t\tScanner input = new Scanner(System.in);\n\t\tchar[] directions = {'X', 'Y'};\n\t\tdouble[] movement = new double[directions.length];\n\t\tfor (int i = 0; i < directions.length; i++) {\n\t\t\tSystem.out.printf(\"Distance to move in %c direction: \", directions[i]);\n\t\t\tmovement[i] = input.nextDouble();\n\t\t}\n\t\tinput.close();\n\t\treturn movement;\n\t}\n}\n\\end{lstlisting}\n\n\\newpage\n\nFollowing is a sample expected run of the program.\n\n\\begin{terminal}\n$ javac Cat.java Kitten.java\n$ java Kitten\nDistance to move in X direction: 3\nDistance to move in Y direction: 4\nKitty is in (3.0, 4.0).\nKitty is 5.00 units away from (0, 0).\n\\end{terminal}\n\n\\subsection*{Solution}\n\n\\lstset{language=Java,tabsize=4}\n\\begin{lstlisting}\npublic class Cat {\n\tpublic String name;\n\tpublic double dirX = 0;\n\tpublic double dirY = 0;\n\tpublic Cat(String name) {\n\t\tthis.name = name;\n\t}\n\tpublic void move(double dirX, double dirY) {\n\t\tthis.dirX += dirX;\n\t\tthis.dirY += dirY;\n\t}\n\tpublic void showPosition() {\n\t\tSystem.out.printf(\"%s is in (%.1f, %.1f).\\n\", this.name, this.dirX, this.dirY);\n\t}\n\tpublic void showDistance() {\n\t\tdouble distance = Math.sqrt(Math.pow(this.dirX, 2) + Math.pow(this.dirY, 2));\n\t\tSystem.out.printf(\"%s is %.2f units away from (0, 0).\\n\", this.name, distance);\n\t}\n}\n\\end{lstlisting}\n\n\\end{document}\n", "meta": {"hexsha": 
"0043c2b9efe94f4045f40f25771e09ee2fb1e5a1", "size": 9725, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/main/tex/assignments/hw04s.tex", "max_stars_repo_name": "ghorbanzade/UMB-CS114-2015F", "max_stars_repo_head_hexsha": "2cb102460bcbfbfff9f1a6b20791d777ef2af159", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:41.000Z", "max_issues_repo_path": "src/main/tex/assignments/hw04s.tex", "max_issues_repo_name": "ghorbanzade/UMB-CS114-2015F", "max_issues_repo_head_hexsha": "2cb102460bcbfbfff9f1a6b20791d777ef2af159", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/main/tex/assignments/hw04s.tex", "max_forks_repo_name": "ghorbanzade/UMB-CS114-2015F", "max_forks_repo_head_hexsha": "2cb102460bcbfbfff9f1a6b20791d777ef2af159", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0259365994, "max_line_length": 226, "alphanum_fraction": 0.6966580977, "num_tokens": 2712, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6261241911813151, "lm_q2_score": 0.8872045847699186, "lm_q1q2_score": 0.5555002530514198}}
{"text": "\\section{Disorder Terms}\n\n\n\\begin{slide}{Structural Disorder and the Pair Distribution Function}\n\n  \\begin{cenpage}{135mm}\n\n  An EXAFS measurement averages billions of {\\BlueEmph{snapshots}} of the\n  local structure:\n\n\\begin{cenpage}{88mm}\n\\begin{itemize}\n\\item Each absorbed x-ray generates 1 photo-electron.\n\\item the photo-electron / core-hole pair lives for about\n  $10^{-15}$ s --  much faster than the thermal vibrations ($10^{-12}$ s).\n\\item An EXAFS measurement samples $10^4$ (dilute fluorescence) to $10^{10}$\n  absorbed x-rays for each energy point.\n\\end{itemize}\n\\end{cenpage}\n\n\\vmm\n \\vmm\n\nSo far, we've put this in the EXAFS Equation as \\hspace{2mm}\n$\\chi \\sim N \\exp({-2k^2\\sigma^2}) $\n\n\\vmm \\hrule \\vmm \\onslide+<2->\n\n\\begin{columns}\n  \\begin{column}{70mm}\n    More generally, EXAFS samples the\n\n    \\vmm\n    {\\RedEmph{Partial Pair Distribution Function}}\n\n    \\vmm\n\n    {\\RedEmph{$g(R)$}} =   probability that an\n    atom is a distance $R$ away from the absorber.\n\n    \\vspace{8mm}\n\n    \\end{column}\n  \\begin{column}{65mm}\n\n    \\includegraphics[width=60mm]{figs/errors/gnxas}\n    \\end{column}\n  \\end{columns}\n\\end{cenpage}\n\\end{slide}\n\n\n\\begin{slide}{EXAFS and The Pair Distribution Function}\n\n\n  \\begin{cenpage}{135mm}\n  To fully account for a highly disordered local structure, we should use\n\n\\[\n    \\chi(k)  = \\Biggl\\langle \\sum_j {\\frac{f_j(k)e^{i2kR_j + \\delta_j(k)}}{kR_j^2}} \\Biggr\\rangle\n\\]\n\nwhere $ \\langle x \\rangle = \\int dR\\, x \\, g(R) / \\int dR\\, g(R) $ --\naveraging over the billions+ of snapshots.\n\n\\vmm\\pause\n$R$ won't change too much, so we'll neglect the changes to $1/R^2$:\n\n\\[\n  \\chi \\approx\n  \\sum_j {f_j(k){\\frac{e^{ i\\delta_j(k)} }{kR_j^2}}}   \\biggl\\langle e^{i2kR_j}  \\biggr\\rangle\n\\]\n\neach path in the sum now has a $g(R)$ with respect to the absorbing atom.\n\n\\vmm \\hrule \\vmm \\onslide+<3->\n\nThe {\\RedEmph{the cumulant expansion}} relates $\\langle e^x\\rangle$ to $\\langle x \\rangle$, the moments of $g(x)$:\n\n\n\\[ \\biggl\\langle e^{i2kR} \\biggr\\rangle\n= \\exp \\bigg[ \\sum_{n=1}^{\\infty} { \\frac{(2ik)^n}{n!}}C_n  \\bigg].\n\\]\n\n\n\n\\vfill\n\\end{cenpage}\n\\end{slide}\n\n%% Slide\n\\begin{slide}{The Cumulants and Moments of a Distribution Function}\n\n  \\begin{cenpage}{135mm}\n  The cumulants $C_n$ of $g(R)$ are related to the moments of $g(R)$:\n  $\\langle r^n \\rangle$,\n\n  with $r= R - R_0$ and $R_0$ is the centroid of the distribution:\n\n  \\vmm\n\n    \\begin{tabular}{lll}\n      $C_1 = \\Delta R$  & {\\BlueEmph{\\tt{deltar}}}    & $ = \\langle r \\rangle    $    \\\\\n      $C_2  = \\sigma^2$ &  {\\BlueEmph{\\tt{sigma2}}} & $ = \\langle r^2 \\rangle - \\langle r \\rangle^2    $   \\\\\n      $ C_3 $ & {\\BlueEmph{\\tt{third}}} & $ = \\langle r^3 \\rangle - 3 \\langle r^2 \\rangle\n      \\langle r \\rangle  + 2 \\langle r \\rangle^3   $    \\\\\n      $ C_4 $ & {\\BlueEmph{\\tt{fourth}}} & $ =  \\langle r^4 \\rangle - 3 \\langle r^2 \\rangle^2\n      - 4\\langle r^3 \\rangle \\langle r \\rangle\n      +12  \\langle r^2 \\rangle  \\langle r \\rangle^2\n      - 6\\langle r \\rangle^4  $   \\\\\n    \\end{tabular}\n\n    \\vmm\n\n    \\begin{postitbox}{80mm}\n    $C_3$ (the {\\RedEmph{third cumulant}}) can be important in many cases.\n  \\end{postitbox}\n\n    \\vmm\\hrule\\vmm\n    \\onslide+<2->\n\n\\begin{columns}\n  \\begin{column}{70mm}\n\n    But: Sometimes, the cumulant expansion isn't good 
enough.  One can also build models by using\n    paths spaced in $R$ (say, at 0.2 {\\AA} steps), and model the amplitude of each Path with a\n    distribution like (following GNXAS):\n\n    \\[    g(R, N, R_0, \\sigma, \\beta) = \\frac{2N [  e^{-\\alpha} \\alpha^{q-1}]}{\\sigma\\beta\\Gamma(q) }\\]\n\n    where  $\\alpha = q + 2(R-R_0)/(\\beta\\sigma)$,  and $q = 4/\\beta^2$\n\n\\vspace{5mm}\n\n    \\end{column}\n  \\begin{column}{65mm}\n    \\onslide+<2->\n      \\includegraphics[width=60mm]{figs/errors/gnxas_histogram}\n    \\end{column}\n  \\end{columns}\n\n  \\end{cenpage}\n\\end{slide}\n", "meta": {"hexsha": "3289c95db28daf1337cbe39c89cca0d07a863cc2", "size": 3816, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/disorder_terms.tex", "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/disorder_terms.tex", "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/disorder_terms.tex", "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.5, "max_line_length": 114, "alphanum_fraction": 0.6294549266, "num_tokens": 1369, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5554634976741145}}
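The moment relations on the cumulants slide are straightforward to check numerically. Below is a minimal sketch that estimates $C_1$ through $C_4$ from samples of a toy, slightly skewed $g(R)$; the distribution parameters are arbitrary choices made only for illustration.

\begin{lstlisting}[language=Python]
import numpy as np

# Numerical check of the cumulant/moment relations for a toy skewed g(R).
rng = np.random.default_rng(0)
R = 2.50 + 0.002 * rng.gamma(shape=4.0, size=200_000)  # toy pair distances
R0 = 2.50                                              # reference distance
r = R - R0
m1, m2, m3, m4 = (np.mean(r ** n) for n in (1, 2, 3, 4))
C1 = m1                                    # deltar
C2 = m2 - m1 ** 2                          # sigma2
C3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3        # third
C4 = m4 - 3 * m2 ** 2 - 4 * m3 * m1 + 12 * m2 * m1 ** 2 - 6 * m1 ** 4
print(C1, C2, C3, C4)   # nonzero C3, C4 signal non-Gaussian disorder
\end{lstlisting}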
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage{alltt,fullpage,graphics,color,epsfig,amsmath, amssymb}\n\\usepackage{hyperref}\n\\usepackage{boxedminipage}\n\\usepackage[ruled,vlined]{algorithm2e}\n\n\\newcommand{\\floor}[1]{\\lfloor #1 \\rfloor}\n\\newcommand{\\ceil}[1]{\\lceil #1 \\rceil}\n\n\\title{CS 510 Assignment 1:A Poisson distribution }\n\\author{Daniel Campos}\n\\date{February 7th,2021}\n\\begin{document}\n\\maketitle\n\\section{Log Likelihood of Poisson distribution}\n\\subsection{Solution}\nFirst we find the likelihood function\n\\begin{equation}\n    L(u) = \\prod_{i = 1}^{n} \\frac{{u^x}{e^{-u}}}{x!}\n\\end{equation}\nNext we find the log likelihood function\n\\begin{equation}\nL(u) = \\sum_{i=1}^{n} (x_i*log(u) - u - log(x_i)!)\n\\end{equation}\n\\begin{equation}\nL(u) = log(u) * \\sum_{i=1}^{n} x_i - n*u - \\sum_{i=1}^{n} log(x_i)!\n\\end{equation}\n\\section{Derivative of the log likelihood}\n\\subsection{Solution}\n\\begin{equation}\n    \\frac{d}{du} = \\frac{\\sum_{i=1}^{n} x_i}{u} - n\n\\end{equation}\n\\section{Solve for MLE of U}\n\\subsection{Solution}\nWe take our prior derivative and set to 0 \n\\begin{equation}\n    \\frac{d}{du} = \\frac{\\sum_{i=1}^{n} x_i}{u} - n = 0 \n\\end{equation}\n\\begin{equation}\n    n= \\frac{\\sum_{i=1}^{n} x_i}{u}\n\\end{equation}\n\\begin{equation}\n    nu= \\frac{\\sum_{i=1}^{n} x_i}{1}\n\\end{equation}\n\\begin{equation}\n    u= \\frac{\\sum_{i=1}^{n} x_i}{n}\n\\end{equation}\n\\begin{equation}\n    u= \\bar{x}\n\\end{equation}\n\\end{document}", "meta": {"hexsha": "c3cb28b5988f8258078520fe104bc64600c77877", "size": 1416, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignments/1.tex", "max_stars_repo_name": "spacemanidol/CS510IR", "max_stars_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/1.tex", "max_issues_repo_name": "spacemanidol/CS510IR", "max_issues_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/1.tex", "max_forks_repo_name": "spacemanidol/CS510IR", "max_forks_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.2307692308, "max_line_length": 67, "alphanum_fraction": 0.677259887, "num_tokens": 548, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5554634976741145}}
{"text": "\\subsection{Simulation}\nIn this section, the synchronisation errors caused by the hardware, the carrier phase and frequency shift are simulated, then algorithms allowing to reduce them are introduced.\n\n\\subsubsection{Synchronisation errors}\n\nOn \\autoref{fig:CFOphi} is shown how the CFO affects the constellation. The linearly increasing phase shift caused by the CFO causes a rotation of the symbols. This effect is first manually corrected to study the impact of ISI only.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/CFOphi}\n    \\caption{Illustration of the effect of the CFO phase shift on a 4-QAM constellation for $\\frac{E_b}{N_0} = 15 \\si{\\deci\\bel}$}\n    \\label{fig:CFOphi}\n\\end{figure}\n\nThe effect of the CFO ISI on the BER curves is shown in \\autoref{fig:CFOisi}. Low values of a frequency mismatch between the emitter and receiver quartz (expressed in parts per million, \\textit{ppm}) do not disturb the BER too much but increasing value prevent correct communication. \n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/cfoISI}\n    \\caption{Illustration of the effect of the CFO ISI on the BER curves for a 4-QAM modulation}\n    \\label{fig:CFOisi}\n\\end{figure}\n\nNext, a sampling time shift is simulated. The impact of this shift is depicted in \\autoref{fig:samplingshift}.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/samplingshift}\n    \\caption{Effect of the sampling shift on the BER for 4-QAM}\n    \\label{fig:samplingshift}\n\\end{figure}\n\n\n\\subsubsection{Correction algorithms}\n\nAs explained later, the sampling time shift is the first unwanted effect that is tackled, thanks to the Gardner algorithm. \\autoref{fig:gardner} shows how the Gardner algorithm is able to correct the sampling time shift.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/gardner}\n    \\caption{Gardner compensation of the sampling shift without CFO for a 4-QAM modulation}\n    \\label{fig:gardner}\n\\end{figure}\n\nThe main parameter of the Gardner algorithm is the factor $\\kappa$. \\autoref{fig:gardnerK} shows how increasing $\\kappa$ allows to have faster convergence, but at the cost of a less stable estimate. All the subsequent Figures have been generated for a 4-QAM modulation.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/gardnerK}\n    \\caption{Influence of $\\kappa$ on the convergence of the Gardner algorithm}\n    \\label{fig:gardnerK}\n\\end{figure}\n\nThe Gardner algorithm is robust to CFO, as shown in \\autoref{fig:gardnerCFO}. Its convergence is nearly unaffected by the presence of realistic CFO.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/gardnerCFO}\n    \\caption{Illustration of the robustness of the Gardner to CFO, $\\kappa = 0.1$}\n    \\label{fig:gardnerCFO}\n\\end{figure}\n\nAfter the correction of the sampling time shift, the CFO and the Time of Arrival (ToA) are estimated by sending a pilot (a known sequence) in between frames of data. The main parameters are the length of the pilot $N$ and the size of the averaging windows $K$. The following Figures depict the depency of the estimation of the ToA and the CFO on these parameters. 
It appears that when $N=40$ and $K=16$, both estimations can be trusted when the SNR is high enough ($> \\SI{5}{\\deci\\bel}$).\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/dftoaN}\n    \\caption{Impact of the pilot length on the CFO estimation for a 4-QAM}\n    \\label{fig:dftoaN}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/stdevt_N}\n    \\caption{Impact of the pilot length on the ToA estimation for a 4-QAM}\n    \\label{fig:shiftoaN}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/stdevf_K}\n    \\caption{Impact of the size of the averaging window $K$ on the CFO estimation for a 4-QAM}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/stdevt_K}\n    \\caption{Impact of the size of the averaging window $K$ on the ToA estimation for a 4-QAM}\n\\end{figure}\n\n\nOn the next two figures, the robustness of the frame acquisition to CFO is clearly visible, as the CFO has nearly no impact on the estimations.\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/stdevt_CFO}\n    \\caption{Impact of the CFO and the sample time shift on the ToA estimation}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\input{fig/stdevf_CFO}\n    \\caption{Impact of the CFO and the sample time shift on the CFO estimation}\n\\end{figure}\n\n\n\n\\subsection{Questions}\n\n\\subsubsection{Questions regarding the simulation}\n\n\\paragraph{Derive analytically the baseband model of the channel including the synchronisation errors.} \\mbox{}\n\nAssuming that the oscillators from the emitter and the receiver differ from each other by a pulsation $\\Delta \\omega$ and a phase shift $\\phi$, the signal at the receiver can be written as a function of the complex envelope of the emitted signal $e_s(t)$ (neglecting the noise):\n\n\\begin{equation*}\n    r(t) = \\Re \\left\\{ e_s(t) \\right\\} \\cos ( \\omega_c t ) + \\Im \\left\\{ e_s(t) \\right\\} \\sin ( \\omega_c t )\n\\end{equation*}\n\nThe complex envelope must be computed to go to the baseband model.\n\n\\begin{equation*}\n    \\begin{split}\n    e_r(t) & = r(t) \\cos \\left[ (\\omega_c + \\Delta \\omega) t + \\phi \\right] + j\\, r(t) \\sin \\left[ (\\omega_c + \\Delta \\omega) t + \\phi \\right] \\\\\n    & =  \\cos \\left[ (\\omega_c + \\Delta \\omega) t + \\phi \\right] \\times \\left[\\Re \\left\\{ e_s(t) \\right\\} \\cos ( \\omega_c t ) + \\Im \\left\\{ e_s(t) \\right\\} \\sin ( \\omega_c t ) \\right]\\\\\n    & +  j \\sin \\left[ (\\omega_c + \\Delta \\omega) t + \\phi \\right] \\times \\left[\\Re \\left\\{ e_s(t) \\right\\} \\cos ( \\omega_c t ) + \\Im \\left\\{ e_s(t) \\right\\} \\sin ( \\omega_c t ) \\right]\n    \\end{split}\n\\end{equation*}\n\nUsing the product-to-sum trigonometric identities and lowpass filtering out the high frequency components, one can find, up to a constant factor,\n\n\\begin{equation*}\n\\begin{split}\n    &e_r(t) =  \\Re \\left\\{ e_s(t) \\right\\} \\cos ( \\Delta \\omega t + \\phi) - \\Im \\left\\{ e_s(t) \\right\\} \\sin ( \\Delta \\omega t + \\phi) \\\\\n    &\\hspace{2cm} + j \\left[ \\Re \\left\\{ e_s(t) \\right\\} \\sin ( \\Delta \\omega t + \\phi) + \\Im \\left\\{ e_s(t) \\right\\} \\cos ( \\Delta \\omega t + \\phi) \\right] \\\\\n    &\\hspace{3cm} \\Rightarrow e_r(t) = e_s(t) \\cdot e^{j(\\Delta \\omega t + \\phi)}\n\\end{split}\n\\end{equation*}\n\nHence, the synchronisation errors can be modelled by a multiplication of the baseband model by $ e^{j(\\Delta \\omega t + \\phi)}$.\n\n\\paragraph{How do you separate the impact of the carrier phase drift and ISI due to the CFO in your simulation?} \\mbox{}\n\nThe linearly increasing phase shift is manually and perfectly compensated after the convolution with the second halfroot filter, taking into account the discarded samples of the convolution.
\n\n\\paragraph{How do you simulate the sampling time shift in practice?} \\mbox{}\n\nFirst, the sampling frequency is significantly increased to improve the accuracy. The sampling time shift is then implemented as a shift in the indexes when downsampling the received signal.\n\n\\paragraph{How do you select the simulated $E_b/N_0$ ratio?} \\mbox{}\n\nThe SNR should be sufficiently high so that the various algorithms in the communication chain converge. However, a noiseless simulation is not realistic. The SNR is thus fixed to typical values ranging from 5 to 10 \\si{\\deci\\bel}.\n\n\\paragraph{How do you select the lengths of the pilot and data sequences?} \\mbox{}\n\nThe length of a data sequence is selected to ensure a correct phase interpolation between two pilot sequences. If it is too long, the phase interpolation may be incorrect since $e^{j (x+ k 2 \\pi)} = e^{j x}$ and the residual CFO may be incorrectly determined. \nThe pilot should be sufficiently long to get a correct estimate of the ToA and the CFO. As shown in the results of the simulations, the remaining CFO at SNR greater than \\SI{5}{\\deci\\bel} is sufficiently low when $N>20$. However, the length and rate of repetitions of the pilot should not be too high because they reduce the channel throughput.\n\n\\subsubsection{Questions regarding the communication system}\n\n\\paragraph{In which order are the synchronisation effects estimated and compensated? Why?} \\mbox{}\n\nThe Gardner algorithm being robust to CFO, it is used first to correct the sampling time shift. Then the CFO and the ToA are handled by the frame acquisition. Between the pilot sequences, the residual CFO is approximated by linear interpolation.\n\n\\paragraph{Explain intuitively how the error is computed in the Gardner algorithm. Why is the Gardner algorithm robust to CFO?} \\mbox{}\n\nAt a given time, the estimation of the time shift is obtained from the previous estimation plus a correction weighted by the factor $\\kappa$. This correction depends on the middle sample between the two time steps. The direction of the correction is given by the sign of the middle value, and the magnitude of the correction is given by its magnitude. The correction will be very low if a zero crossing happens. For example, if the sampling happens too late for a downwards crossing, the correction will be negative.\n\nBy looking at the mathematical form of the feedback implemented in the Gardner algorithm, it appears that for reasonable values of the CFO, the phases will nearly cancel each other thanks to the conjugate operator.\n\n\\paragraph{Explain intuitively why the differential cross-correlator is better suited than the usual cross-correlator. Isn\u2019t it interesting to start the summation at $k$ = 0 (no time shift)?} \\mbox{}\n\nThe usual cross-correlator is nearly optimal according to the ML estimator, but it necessitates a computationally heavy 2D exhaustive search in order to be robust to CFO. Instead, the differential cross-correlator allows for a solution of lower complexity which first estimates the ToA and then the CFO.\n\nIt is not interesting to start the summation at $k=0$ since the term $D_0 \\left[ n \\right]$ does not carry any information other than the power of the window.\n\n\\newpage\n\n\\paragraph{Are the frame and frequency acquisition algorithms optimal? If yes, give the optimisation criterion.} \\mbox{}\n\nThe algorithm is not optimal because the optimisation is not done for both the ToA and the CFO at the same time.
However, it is sufficient when both CFO and the noise have reasonable values, as it is the case in practical applications.\n\nThe optimisation criterion is the maximum likelihood of observing the received symbol $y$ knowing the pilot and the CFO.", "meta": {"hexsha": "4c9a40b3f6b2766d8beb2737537e4c962971dcef", "size": 10057, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/sections/step3.tex", "max_stars_repo_name": "mpetitjean/DVB-S2", "max_stars_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-08T10:12:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T07:46:58.000Z", "max_issues_repo_path": "report/sections/step3.tex", "max_issues_repo_name": "amatepl/DVB-S2", "max_issues_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/sections/step3.tex", "max_forks_repo_name": "amatepl/DVB-S2", "max_forks_repo_head_hexsha": "c63a0617cc679de76166c62727a0c778099f9d62", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2018-10-08T10:48:54.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T09:02:37.000Z", "avg_line_length": 55.2582417582, "max_line_length": 511, "alphanum_fraction": 0.7475390275, "num_tokens": 2610, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7662936484231889, "lm_q1q2_score": 0.5554634885650636}}
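To make the Gardner feedback described in the answers concrete, here is a minimal Python sketch (an added illustration, not code from the report; the BPSK signal model, the triangular pulse transitions, and the parameter values are assumptions): the error term uses the mid-symbol sample and a conjugate product, and the running estimate is the previous one plus the $\kappa$-weighted error.

\begin{verbatim}
import numpy as np

def gardner_ted(y_prev, y_mid, y_curr):
    # Mid-sample times the conjugated symbol difference; the conjugate
    # product is what makes the detector nearly insensitive to CFO.
    return np.real(y_mid * np.conj(y_curr - y_prev))

def gardner_loop(y, kappa=0.1):
    # y: received samples at 2 samples/symbol.  Each step: previous
    # estimate plus the kappa-weighted timing error (toy version,
    # without the interpolator that would act on the estimate).
    eps = [0.0]
    for n in range(2, len(y), 2):
        eps.append(eps[-1] + kappa * gardner_ted(y[n - 2], y[n - 1], y[n]))
    return np.array(eps)

# Toy BPSK at 2 samples/symbol, triangular transitions, mild CFO
rng = np.random.default_rng(2)
sym = rng.choice([-1.0, 1.0], size=1000)
up = np.zeros(2 * len(sym)); up[::2] = sym
y = np.convolve(up, [0.5, 1.0, 0.5], mode="same").astype(complex)
y *= np.exp(2j * np.pi * 1e-4 * np.arange(len(y)))   # CFO rotation
print(gardner_loop(y)[-3:])  # ~0: timing already correct, CFO cancels
\end{verbatim}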
{"text": "\\chapter{A Proof of Theorem \\ref{SubG8}}\\label{AppB}\nThis chapter is devoted to a concrete proof of Theorem \\ref{IrreducibleSubs}. \nWe use simillar methods to the ones we already applied in chapter 3. After \nreducing the lattices we compare the sublattices with the rational ones introduced \nin \\cite{Kunyavski} and \\cite{Nicole1}.\n \\section{ (5,6,3 ) }\nThe group is generated by  \n$$\n\\left[ \\begin {array}{ccccc} -1&0&0&0&0\\\\0&0&0&0&-\n1\\\\0&0&0&-1&0\\\\0&0&-1&0&0\n\\\\0&-1&0&0&0\\end {array} \\right]. \n$$\nThe corresponding lattice is sign permutation. This implies rationality of \nthe corresponding torus.\n \\section{(5,18,28)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&0&1&0&0\n\\\\0&1&0&0&0\\\\0&0&0&0&1\n\\\\0&0&0&1&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} -1&0&0&0&0\\\\0&0&0&0&1\n\\\\0&0&0&1&0\\\\0&0&1&0&0\n\\\\0&1&0&0&0\\end {array} \\right] \n$$\n\nThe corresponding lattice is a sign permutation lattice. Thus it is hereditarily rational.\n\n \\section{(5,19,14)}\nAlgorithm 2 produces the change of basis matrix \n$$\n\\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&0&1&0&0\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n \\left[ \\begin {array}{cccc|c} 0&0&1&-1&0\\\\0&-1&0&-2\n&0\\\\1&0&0&1&0\\\\0&0&0&1&0\n\\\\ \\hline 0&1&0&1&1\\end {array} \\right] \n \\left[ \\begin {array}{cccc|c} 0&0&1&0&0\\\\0&1&0&2&0\n\\\\1&0&0&0&0\\\\0&0&0&-1&0\n\\\\\\hline 0&0&0&-1&1\\end {array} \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe corresponding group to $M$ has GAP ID [4,5,1,10] and also can be generated by\n$$\n \\left[ \\begin {array}{ccc|c} 1&0&-1&0\\\\ 0&1&-1&0\n\\\\ 0&0&-1&0\\\\ \\hline 0&0&0&-1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{ccc|c} 0&1&0&0\\\\ 1&0&0&0\n\\\\ 0&0&1&0\\\\  \\hline 0&0&0&-1\\end {array}\n \\right] \n$$\nSo $M$ decomposes into a direct sum of a rank one sign permutation lattice \n(which is hereditarily rational) and a rank 3 lattice given by a group, $H$, \ngenerated by\n$$\n\\left[ \\begin {array}{cc|c} 1&0&-1\\\\ 0&1&-1\n\\\\ \\hline 0&0&-1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cc|c} 0&1&0\\\\ 1&0&0\n\\\\ \\hline 0&0&1\\end {array}\n \\right] \n$$\nLooking at the generators of $H$ tells us we can form \n$$0 \\longrightarrow \\Z^- \\longrightarrow L_H \\longrightarrow P \\longrightarrow 0$$ \nwhere $P$ is given by the group generated by\n$$\n\\begin{bmatrix}\n1&0\\\\\n0& 1\n\\end{bmatrix}\n\\tand\n\\begin{bmatrix}\n0&1\\\\\n1&0\n\\end{bmatrix}\n.$$\nSince $P$ is a permutation lattice, by Corollary \\ref{permcoker} we conclude \nthat [4,5,1,10] is hereditarily rational. This implies our desired result \nwhich is hereditarily rationality of (5,19,14). 
\n\n\n \\section{(5,22,14)}\nThe group is generated by \n$$\n  \\left[ \\begin {array}{cccc|c} 1&0&0&0&-1\\\\0&1&0&0&1\n\\\\0&0&1&0&1\\\\0&0&0&1&-1\n\\\\ \\hline 0&0&0&0&-1\\end {array} \\right] ,\n \\left[ \\begin {array}{cccc|c} 1&0&0&0&0\\\\0&0&0&1&-1\n\\\\0&0&1&0&0\\\\0&1&0&0&1\n\\\\ \\hline 0&0&0&0&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} 0&0&1&0&1\\\\0&1&0&0&0\n\\\\1&0&0&0&-1\\\\0&0&0&1&0\n\\\\ \\hline 0&0&0&0&1\\end {array} \\right] \n$$\nNow we define $P$ to be the lattice corresponding to, $H$, generated by \n$$\n  \\left[ \\begin {array}{cccc} 1&0&0&0\\\\0&1&0&0\n\\\\0&0&1&0\\\\0&0&0&1\n\\end {array} \\right] ,\n \\left[ \\begin {array}{cccc} 1&0&0&0\\\\0&0&0&1\n\\\\0&0&1&0\\\\0&1&0&0\n\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc} 0&0&1&0\\\\0&1&0&0\n\\\\1&0&0&0\\\\0&0&0&1\n\\end {array} \\right],\n$$\nWe can see the corresponding lattice to (5,22,14), $L$, fits into the following exact sequence\n$$0\\longrightarrow \\Z^- \\longrightarrow L \\longrightarrow P \\longrightarrow 0.$$\nand since $P$ is permutation, by Corollary \\ref{permcoker} we can conclude that $L$ is hereditarily rational.\n \\section{(5,57,8)}\nThe group is generated by \n$$\n\\left[ \\begin {array}{ccccc} -1&0&0&0&0\\\\0&0&0&1&0\n\\\\0&-1&0&0&0\\\\0&0&0&0&1\n\\\\0&0&-1&0&0\\end {array} \\right] \n$$\nand the corresponding lattice is a sign permutation lattice which is hereditarily rational.\n  \\section{(5,81,54)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 1&0&1&0&1\\\\0&1&0&0&0\n\\\\0&0&0&0&-1\\\\0&0&1&1&1\n\\\\0&0&-1&0&0\\end {array} \\right],\n \\left[ \\begin {array}{ccccc} 0&0&1&0&0\\\\-1&0&0&1&0\n\\\\1&0&0&0&0\\\\0&1&1&0&0\n\\\\-1&0&-1&0&-1\\end {array} \\right]\n\\tand\n \\left[ \\begin {array}{ccccc} 0&0&-1&-1&-1\\\\0&1&0&0\n&0\\\\0&0&1&0&0\\\\-1&0&-1&0&-1\n\\\\0&0&0&0&1\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n  \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&1&0&0&0\n\\\\1&0&0&1&-1\\\\0&0&0&1&0\n\\\\1&0&-1&1&0\\end {array} \\right] ,\n \\left[ \\begin {array}{ccccc} 0&-1&1&0&-1\\\\0&0&0&1&0\n\\\\1&0&0&1&-1\\\\0&1&0&0&0\n\\\\0&0&0&0&-1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&0&0&-1&0\\\\0&1&0&0&0\n\\\\-1&0&1&-1&0\\\\-1&0&0&0&0\n\\\\-1&0&0&-1&1\\end {array} \\right] \n$$\nAlgorithm (2) produces the change of basis matrix \n$$\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&-2&1&-1&-1\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n \\left[ \\begin {array}{cccc|c} 1&2&0&2&0\\\\0&-1&0&-2&0\n\\\\0&2&1&2&0\\\\0&0&0&1&0\n\\\\ \\hline 0&1&0&1&1\\end {array} \\right], \n \\left[ \\begin {array}{cccc|c} 0&2&1&0&0\\\\0&-1&0&0&0\n\\\\1&2&0&0&0\\\\0&0&0&-1&0\n\\\\ \\hline 0&1&0&0&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} 1&-2&-2&-2&0\\\\0&2&1&1\n&0\\\\0&-2&-1&-2&0\\\\0&-1&-1&0&0\n\\\\ \\hline 0&-1&-1&-1&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n \\left[ \\begin {array}{cccc} 1&2&0&2\\\\0&-1&0&-2\n\\\\0&2&1&2\\\\0&0&0&1\\end {array}\n \\right], \n \\left[ \\begin {array}{cccc} 0&2&1&0\\\\0&-1&0&0\n\\\\1&2&0&0\\\\0&0&0&-1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} 1&-2&-2&-2\\\\0&2&1&1\n\\\\0&-2&-1&-2\\\\0&-1&-1&0\n\\end {array} \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe generators of [4,13,7,5] (another representative of the corresponding conjugacy class to $M$) are 
\n$$\n\\left[ \\begin {array}{ccc|c} 0&-1&1&0\\\\ 0&-1&0&0\n\\\\ 1&-1&0&0\\\\ \\hline 0&0&0&-1\\end {array}\n \\right], \n \\left[ \\begin {array}{ccc|c} 0&1&-1&0\\\\ 1&0&-1&0\n\\\\ 0&0&-1&0\\\\ \\hline 0&0&0&1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{ccc|c} 0&0&-1&0\\\\ 1&0&-1&0\n\\\\ 0&1&-1&0\\\\ \\hline 0&0&0&1\\end {array}\n \\right] \n$$\nThe generators of rank 3 lattice are \n$$\n \\left[ \\begin {array}{ccc} 0&-1&1\\\\ 0&-1&0\n\\\\ 1&-1&0\\end {array} \\right], \n \\left[ \\begin {array}{ccc} 0&1&-1\\\\ 1&0&-1\n\\\\ 0&0&-1\\end {array} \\right]\n\\tand \n \\left[ \\begin {array}{ccc} 0&0&-1\\\\ 1&0&-1\n\\\\ 0&1&-1\\end {array} \\right] \n$$\nthe CrystCatZClass of the former group is [3,4,6,4] which is rational by \\cite{Kunyavski} and its subgroups are  [ 3, 1, 1, 1 ], [ 3, 2, 1, 2 ], [ 3, 2, 2, 2 ], [ 3, 3, 1, 4 ], [ 3, 3, 2, 4 ], [ 3, 4, 2, 2 ] and  [ 3, 4, 6, 4 ] where all of them are rational. This implies that (5, 81, 54) is hereditarily rational.\n  \\section{(5,98,28)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 0&-1&0&0&0\\\\-1&0&0&0&0\n\\\\-1&0&1&0&-1\\\\0&-1&0&1&1\n\\\\-1&1&0&0&-1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&1&0&0&0\n\\\\0&1&-1&0&0\\\\0&0&1&0&-1\n\\\\0&1&-1&-1&0\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 0&-1&-1&0&-1\\\\-1&0&0&\n-1&1\\\\0&0&1&0&0\\\\0&0&0&1&0\n\\\\0&0&-1&1&-1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&1&1&0&1\n\\\\0&0&-1&1&-1\\\\0&0&0&0&-1\n\\\\0&0&0&-1&0\\end {array} \\right] \n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n \\left[ \\begin {array}{ccccc} 1&0&-1&0&0\\\\ 0&1&-1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 2&-2&-1&1&-2\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n  \\left[ \\begin {array}{cccc|c} -6&-5&0&-2&0\\\\5&4&0&2\n&0\\\\-3&-3&1&0&0\\\\5&5&0&1&0\n\\\\ \\hline 3&2&0&1&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} 5&6&0&0&0\\\\-4&-5&0&0&0\n\\\\1&2&0&-1&0\\\\-3&-4&-1&0&0\n\\\\ \\hline -2&-3&0&0&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n\\left[ \\begin {array}{cccc} -6&-5&0&-2\\\\5&4&0&2\n\\\\-3&-3&1&0\\\\5&5&0&1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} 5&6&0&0\\\\-4&-5&0&0\n\\\\1&2&0&-1\\\\-3&-4&-1&0\n\\end {array} \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe generators of [4,13,3,3] are \n$$\n \\left[ \\begin {array}{ccc|c} 0&0&1&0\\\\ 0&1&0&0\n\\\\ 1&0&0&0\\\\ \\hline 0&0&0&-1\\end {array}\n \\right] \n \\left[ \\begin {array}{ccc|c} 0&0&-1&0\\\\ 1&0&-1&0\n\\\\ 0&1&-1&0\\\\ \\hline 0&0&0&-1\\end {array}\n \\right] \n$$\nThe generators of rank 3 lattice are \n$$\n  \\left[ \\begin {array}{ccc} 0&0&1\\\\ 0&1&0\n\\\\ 1&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccc} 0&0&-1\\\\ 1&0&-1\n\\\\ 0&1&-1\\end {array} \\right] \n$$\nthe GAP ID of the former group is [3,4,6,4] which is hereditarily rational by the argument given in the previous case. 
So (5,98,28) is hereditarily rational.\n  \\section{(5,99,57)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 1&0&1&0&1\\\\0&1&0&0&0\n\\\\0&-1&-1&0&0\\\\0&0&1&1&1\n\\\\0&1&0&0&-1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&1&1&0&0\\\\1&0&0&-1&0\n\\\\0&0&0&1&0\\\\0&0&1&0&0\n\\\\0&0&-1&-1&-1\\end {array} \\right] \n$$\nThe dual group is generated by  \n$$\n\\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&1&-1&0&1\n\\\\1&0&-1&1&0\\\\0&0&0&1&0\n\\\\1&0&0&1&-1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\1&0&0&0&0\n\\\\1&0&0&1&-1\\\\0&-1&1&0&-1\n\\\\0&0&0&0&-1\\end {array} \\right] \n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n\\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&2&-1&-1&1\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n\\left[ \\begin {array}{cccc|c} 1&-2&0&-2&0\\\\-1&0&0&1\n&0\\\\0&2&1&2&0\\\\1&-1&0&-2&0\n\\\\ \\hline 0&1&0&1&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} -2&-2&-1&0&0\\\\1&1&1&0\n&0\\\\1&2&0&0&0\\\\-1&-2&-1&-1&0\n\\\\ \\hline 1&1&0&0&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n\\left[ \\begin {array}{cccc} 1&-2&0&-2\\\\-1&0&0&1\n\\\\0&2&1&2\\\\1&-1&0&-2\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} -2&-2&-1&0\\\\1&1&1&0\n\\\\1&2&0&0\\\\-1&-2&-1&-1\n\\end {array} \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe generators of [4,12,4,7] are \n$$\n \\left[ \\begin {array}{ccc|c} 0&0&1&0\\\\ 0&1&0&0\n\\\\ 1&0&0&0\\\\ \\hline 0&0&0&-1\\end {array}\n \\right] \n \\left[ \\begin {array}{ccc|c} 0&0&-1&0\\\\ 1&0&-1&0\n\\\\ 0&1&-1&0\\\\ \\hline 0&0&0&1\\end {array}\n \\right] \n$$\nThe generators of rank 3 lattice are \n$$\n \\left[ \\begin {array}{ccc} 0&0&1\\\\ 0&1&0\n\\\\ 1&0&0\\end {array} \\right] \n \\left[ \\begin {array}{ccc} 0&0&-1\\\\ 1&0&-1\n\\\\ 0&1&-1\\end {array} \\right]\n$$\nthe GAP ID of the former group is [3,4,6,4] which is hereditarily rational by argument given in the previous case. So (5,99,57) is hereditarily rational.\n  \\section{(5,164,2)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{cc|ccc} -1&1&0&0&0\\\\-1&0&0&0&0\n\\\\ \\hline 0&0&0&-1&0\\\\0&0&0&0&1\n\\\\0&0&-1&0&0\\end {array} \\right] \n$$\nThe corresponding lattice decomposes into a rank 2 lattice which is hereditarily rational and a rank 3 sign permutation lattice which is also hereditarily rational. 
Hence (5,164,2) is hereditarily rational.\n  \\section{(5,174,2)}\nThe group is generated by \n$$\n  \\left[ \\begin {array}{ccccc} 0&0&-1&0&-1\\\\0&0&0&1&\n1\\\\-1&0&0&0&-1\\\\0&1&0&0&-1\n\\\\0&0&0&0&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&0&-1&0&-1\\\\-1&0&1&0\n&0\\\\0&0&-1&1&0\\\\0&0&-1&0&0\n\\\\0&1&1&0&0\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 0&0&-1&0&0\\\\0&0&0&1&0\n\\\\-1&0&0&0&0\\\\0&1&0&0&0\n\\\\-1&1&-1&-1&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&-1&0&0&0\\\\0&0&0&0&1\n\\\\-1&1&-1&-1&1\\\\0&0&1&0&0\n\\\\-1&0&0&0&0\\end {array} \\right] \n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&-1&0&0&-1\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n\\left[ \\begin {array}{cccc|c} 0&-1&1&0&0\\\\-1&0&0&-1\n&0\\\\0&0&0&-1&0\\\\0&0&-1&0&0\n\\\\ \\hline 0&0&-1&-1&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} -1&0&0&-1&0\\\\0&-1&1&0\n&0\\\\0&-1&0&0&0\\\\1&0&0&0&0\n\\\\ \\hline 1&-1&0&0&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n \\left[ \\begin {array}{cccc} 0&-1&1&0\\\\-1&0&0&-1\n\\\\0&0&0&-1\\\\0&0&-1&0\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} -1&0&0&-1\\\\0&-1&1&0\n\\\\0&-1&0&0\\\\1&0&0&0\\end {array}\n \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe generators of [4,17,13] are \n$$\n \\left[ \\begin {array}{cc|cc} -1&-1&0&0\\\\ 0&1&0&0\n\\\\ \\hline 0&0&1&1\\\\  0&0&0&-1\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cc|cc} -1&-1&0&0\\\\ 1&0&0&0\n\\\\ \\hline 0&0&-1&-1\\\\ 0&0&1&0\\end {array}\n \\right] \n$$\nand the lattice decomposes into rank 2 lattices which we know they are hereditarily rational. This implies that (5,174,2) is hereditarily rational.\n  \\section{(5,174,5)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{cc|ccc} -1&0&0&0&0\\\\1&1&0&0&0\n\\\\ \\hline 0&0&0&0&1\\\\0&0&0&1&0\n\\\\0&0&1&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cc|ccc} -1&-1&0&0&0\\\\1&0&0&0&0\n\\\\ \\hline 0&0&0&0&1\\\\0&0&-1&0&0\n\\\\0&0&0&-1&0\\end {array} \\right] \n$$\nand the lattice decomposes into a rank 2 lattice and a rank 3 sign permutation lattice, both of which are hereditarily rational. 
This implies that (5,174,5) is hereditarily rational.\n  \\section{(5,389,4)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 1&0&0&-1&0\\\\0&1&0&-1&0\n\\\\0&0&0&-1&-1\\\\0&0&0&-1&0\n\\\\0&0&-1&1&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 1&0&0&-1&0\\\\1&0&0&0&1\n\\\\1&-1&0&0&0\\\\1&0&-1&0&0\n\\\\-1&0&0&0&0\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\0&1&0&0&0\n\\\\0&0&0&0&-1\\\\-1&-1&-1&-1&1\n\\\\0&0&-1&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 1&1&1&1&-1\\\\0&0&-1&0&0\n\\\\0&0&0&-1&0\\\\-1&0&0&0&0\n\\\\0&1&0&0&0\\end {array} \\right] \n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&0&1&0&-1\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n \\left[ \\begin {array}{cccc|c} 1&0&-1&0&0\\\\0&0&0&-1&0\n\\\\0&0&-1&0&0\\\\0&-1&0&0&0\n\\\\ \\hline 0&0&-1&0&1\\end {array} \\right]\n\\tand\n \\left[ \\begin {array}{cccc|c} 0&0&0&1&0\\\\-1&0&1&0&0\n\\\\0&-1&0&0&0\\\\0&0&-1&0&0\n\\\\ \\hline 0&0&-1&0&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n \\left[ \\begin {array}{cccc} 1&0&-1&0\\\\0&0&0&-1\n\\\\0&0&-1&0\\\\0&-1&0&0\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} 0&0&0&1\\\\-1&0&1&0\n\\\\0&-1&0&0\\\\0&0&-1&0\\end {array}\n \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nthe above group is [4,21,3,2] which has the following subgroups [ 4, 1, 1, 1 ], [ 4, 3, 1, 3 ], [ 4, 5, 1, 1 ], [ 4, 11, 1, 1 ], [ 4, 17, 1, 2 ], [ 4, 17, 1, 3 ], [ 4, 21, 1, 1 ] and [ 4, 21, 3, 2 ] \nwhere all of them are rational.\n\n  \\section{(5,901,3)}\nThe group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\1&0&0&0&0\n\\\\0&0&0&0&-1\\\\0&0&0&1&0\n\\\\0&0&-1&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&0&0&0&-1\\\\1&0&0&0&0\n\\\\0&1&0&0&0\\\\0&0&1&0&0\n\\\\0&0&0&-1&0\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\1&0&0&0&0\n\\\\0&0&0&0&-1\\\\0&0&0&1&0\n\\\\0&0&-1&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&1&0&0&0\\\\0&0&1&0&0\n\\\\0&0&0&1&0\\\\0&0&0&0&-1\n\\\\-1&0&0&0&0\\end {array} \\right] \n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n \\left[ \\begin {array}{ccccc} 1&0&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\\\ 0&0&0&0&1\n\\\\ 1&1&1&1&-1\\end {array} \\right] \n$$\nWith the above transformation we can see the new representative is generated by \n$$\n \\left[ \\begin {array}{cccc|c} -1&0&0&0&0\\\\-1&0&0&-1\n&0\\\\-1&0&1&0&0\\\\1&-1&0&0&0\n\\\\ \\hline 1&0&0&0&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{cccc|c} -1&0&0&-1&0\\\\-1&0&0&0\n&0\\\\-1&1&0&0&0\\\\1&0&-1&0&0\n\\\\ \\hline 1&0&0&0&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n \\left[ \\begin {array}{cccc} -1&0&0&0\\\\-1&0&0&-1\n\\\\-1&0&1&0\\\\1&-1&0&0\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} -1&0&0&-1\\\\-1&0&0&0\n\\\\-1&1&0&0\\\\1&0&-1&0\\end {array}\n \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe lattice $M$ corresponds to [4,27,3,1] with subgroups [ 4, 1, 1, 1 ], [ 4, 3, 1, 3 ], [ 4, 27, 1, 1 ], [ 4, 27, 3, 1 ] where all of them are rational. 
So (5,901,3) is hereditarily rational.\n\n \\section{(5,918,4)}\nThe group is generated by \n$$\n  \\left[ \\begin {array}{ccccc} 0&-1&0&-1&0\\\\ \\noalign{\\medskip}-1&1&0&1\n&1\\\\ -1&0&0&0&0\\\\ 0&-1&1&0&0\n\\\\ 1&0&0&-1&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&1&0&1&1\\\\ 0&-1&-1&-1\n&-1\\\\ 0&0&-1&0&-1\\\\ -1&0&1&1&1\n\\\\ 0&1&0&0&1\\end {array} \\right] \n$$\nThe dual group is generated by \n$$\n\\left[ \\begin {array}{ccccc} 0&-1&-1&0&1\\\\ -1&1&0&-\n1&0\\\\ 0&0&0&1&0\\\\ -1&1&0&0&-1\n\\\\ 0&1&0&0&0\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} 0&0&0&-1&0\\\\ 1&-1&0&0&\n1\\\\ 0&-1&-1&1&0\\\\ 1&-1&0&1&0\n\\\\ 1&-1&-1&1&1\\end {array} \\right]\n$$\nAlgorithm (1) produces the change of basis matrix \n$$\n  \\left[ \\begin {array}{ccccc} 0&1&1&-2&1\\\\ 1&0&0&0&0\n\\\\ 0&1&0&0&0\\\\ 0&0&1&0&0\n\\\\ 0&0&0&1&0\\end {array} \\right]\n$$\nWith the above transformation we can see the new representative is generated by \n$$\n  \\left[ \\begin {array}{ccccc} 1&0&1&1&0\\\\ -1&0&-1&0&0\n\\\\ -2&1&-1&0&0\\\\ 2&0&1&0&0\n\\\\ -1&0&-1&0&1\\end {array} \\right] \n\\tand\n \\left[ \\begin {array}{ccccc} -1&-1&-1&-1&0\\\\ 1&-1&1\n&0&0\\\\ 1&1&2&2&0\\\\ -1&0&-2&-1&0\n\\\\ 1&0&1&1&1\\end {array} \\right] \n$$\nNow by considering $M$ to be the corresponding lattice to \n$$\n\\left[ \\begin {array}{cccc} 1&0&1&1\\\\ -1&0&-1&0\n\\\\ -2&1&-1&0\\\\ 2&0&1&0\\end {array}\n \\right] \n \\tand\n \\left[ \\begin {array}{cccc} -1&-1&-1&-1\\\\ 1&-1&1&0\n\\\\ 1&1&2&2\\\\ -1&0&-2&-1\n\\end {array} \\right] \n$$\nwe can produce\n$$\n\\exactseq{}\n.$$\nThe lattice $M$ corresponds to [4,31,1,2] which is a subgroup of $[4,31,7,1]$. In \\cite{Nicole1} it is shown that $[4,31,7,1]$ is hereditarily rational and so is (5,918,4).\n", "meta": {"hexsha": "f6eb2e9300ffbf88c132115a299843b442cb99da", "size": 17885, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendixb.tex", "max_stars_repo_name": "ajamshid/Algebraic-Tori", "max_stars_repo_head_hexsha": "6ba715a1fb604a7650e9ddf703b6ae55fd0499e1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-15T15:27:25.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-15T15:27:25.000Z", "max_issues_repo_path": "appendixb.tex", "max_issues_repo_name": "armin-jamshidpey/Algebraic-Tori", "max_issues_repo_head_hexsha": "6ba715a1fb604a7650e9ddf703b6ae55fd0499e1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendixb.tex", "max_forks_repo_name": "armin-jamshidpey/Algebraic-Tori", "max_forks_repo_head_hexsha": "6ba715a1fb604a7650e9ddf703b6ae55fd0499e1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.1602023609, "max_line_length": 315, "alphanum_fraction": 0.5847917249, "num_tokens": 9396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7248702821204019, "lm_q1q2_score": 0.5554634815131924}}
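Several of the cases above are settled by recognizing a (sign) permutation lattice, and that property is mechanical to verify. The following Python sketch (an added illustration; the generator is re-typed from the (5,6,3) case above) tests whether a matrix is a signed permutation matrix:

\begin{verbatim}
import numpy as np

def is_signed_permutation(M):
    # True iff every entry is in {-1, 0, 1} and every row and column
    # has exactly one nonzero entry.
    M = np.asarray(M)
    return (np.all(np.isin(M, (-1, 0, 1)))
            and np.all((M != 0).sum(axis=0) == 1)
            and np.all((M != 0).sum(axis=1) == 1))

g = [[-1,  0,  0,  0,  0],    # generator of (5,6,3), re-typed from above
     [ 0,  0,  0,  0, -1],
     [ 0,  0,  0, -1,  0],
     [ 0,  0, -1,  0,  0],
     [ 0, -1,  0,  0,  0]]
print(is_signed_permutation(g))   # True: a sign permutation lattice
\end{verbatim}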
{"text": "\\documentclass[14pt]{extarticle}\r\n\r\n\\usepackage{listings}\r\n\\usepackage{xcolor}\r\n\\usepackage{amsmath,amsfonts}\r\n\\usepackage{hyperref}\r\n\r\n\\title{GEometric Algebra}\r\n\r\n\\author{LAK132}\r\n\r\n\\begin{document}\r\n\r\n\\color{white}\r\n\\pagecolor{black!90}\r\n\r\n\\setlength{\\parindent}{0pt}\r\n\\setlength{\\parskip}{6pt}\r\n\r\n\\maketitle\r\n\r\n\\pagebreak\r\n\\tableofcontents\r\n\r\n%\r\n% Dot product\r\n%\r\n\r\n\\pagebreak\r\n\\section{Dot product}\r\n\\label{sec:dot-product}\r\n\r\nThe dot product (\\(\\cdot\\)) of 2 vectors (\\(a\\) and \\(b\\))\r\ncreates a scalar:\r\n\r\n\\newcommand{\\dotpart}[4]{ #1_#2 #3_#4 }\r\n\r\n\\newcommand{\\expdot}[3][]{\r\n  #1 \\dotpart{#2}{1}{#3}{1} + \\cdots + #1 \\dotpart{#2}{n}{#3}{n}}\r\n\r\n\\( a \\cdot b = \\expdot{a}{b} \\)\r\n\r\nNote:\r\n\r\n\\( a \\cdot b = b \\cdot a \\)\r\n\r\n%\r\n% Outer product\r\n%\r\n\r\n\\pagebreak\r\n\\section{Outer product}\r\n\\label{sec:outer-product}\r\n\r\nThe outer product (\\(\\wedge\\)) of 2 vectors (\\(a\\) and \\(b\\))\r\ncreates a bivector (\\(\\mathbf{A}\\)):\r\n\r\n\\newcommand{\\wedgepart}[4]{(#1_#2 #3_#4 - #1_#4 #3_#2) (e_#2 \\wedge e_#4)}\r\n\r\n\\newcommand{\\expwedge}[2]{\r\n  \\wedgepart{#1}{1}{#2}{2} + \\cdots + \\wedgepart{#1}{n}{#2}{1}}\r\n\r\n\\newcommand{\\expwedgepre}[3]{\r\n  #1 \\wedgepart{#2}{1}{#3}{2} + \\cdots + #1 \\wedgepart{#2}{n}{#3}{1}}\r\n\r\n\\newcommand{\\expwedgepost}[3]{\r\n  \\wedgepart{#1}{1}{#2}{2} #3 + \\cdots + \\wedgepart{#1}{n}{#2}{1} #3}\r\n\r\n\\newcommand{\\expwedgeprepost}[4]{\r\n  #1 \\wedgepart{#2}{1}{#3}{2} #4 + \\cdots + #1 \\wedgepart{#2}{n}{#3}{1} #4}\r\n\r\n\\(\r\n  \\mathbf{A}\r\n  = a \\wedge b \\\\\r\n  = \\expwedge{a}{b}\r\n  % = (a_1 b_2 - a_2 b_1) (e_1 \\wedge e_2)\r\n  % + (a_2 b_3 - a_3 b_2) (e_2 \\wedge e_3)\r\n  % + \\cdots\r\n  % + (a_n b_1 - a_1 b_n) (e_n \\wedge e_1)\r\n\\)\r\n\r\nWhere \\(n\\) is the dimension of \\(a\\) and \\(b\\),\r\nand \\(e_{1..n}\\) are the basis vectors.\r\n\r\nIf we let\r\n\\( A_{e_i e_j} = (a_{e_i} b_{e_j} - a_{e_j} b_{e_i})(e_i \\wedge e_j) \\),\r\nthen this simplifies to:\r\n\r\n\\(\r\n  \\mathbf{A}\r\n  = a \\wedge b \\\\\r\n  = (a_1 e_1 + \\cdots + a_n e_n) \\wedge (b_1 e_1 + \\cdots + b_n e_n) \\\\\r\n  = a_1 b \\\\\r\n  = A_{e_1 e_2} + A_{e_2 e_3} + \\cdots + A_{e_n e_1}\r\n\\)\r\n\r\nChaining:\r\n\r\n\\(\r\n  a_1 \\wedge a_2 \\wedge \\cdots \\wedge a_r\r\n  = \\frac{1}{r!} \\sum\\limits_{\\sigma \\in \\mathfrak{G}_r}\r\n  \\textrm{sgn} (\\sigma) a_{\\sigma (1)} a_{\\sigma (2)} \\cdots a_{\\sigma (r)} ,\r\n\\)\r\n\r\nNote:\r\n\r\n\\( a \\wedge b = - (b \\wedge a) \\)\r\n\r\nIn 2D:\r\n\r\n\\(\r\n  a \\wedge b\r\n  = \\mathbf{A}\r\n  = A_{xy}\r\n  = (a_x b_y - a_y b _x) (x \\wedge y)\r\n\\)\r\n\r\nIn 3D:\r\n\r\n\\(\r\n  a \\wedge b\r\n  = \\mathbf{A}\r\n  = A_{xy} + A_{yz} + A_{zx} \\\\\r\n  = (a_x b_y - a_y b_x) (x \\wedge y)\r\n  + (a_y b_z - a_z b_y) (y \\wedge z)\r\n  + (a_z b_x - a_x b_z) (z \\wedge x)\r\n\\)\r\n\r\n\\pagebreak\r\nIn 3D we also find that the outer product looks very similar to\r\nthe cross product:\r\n\r\n\\(\r\n  a \\wedge b = \\\\\r\n  (a_x b_y - a_y b_x) (x \\wedge y) + \\\\\r\n  (a_z b_x - a_x b_z) (z \\wedge x) + \\\\\r\n  (a_y b_z - a_z b_y) (y \\wedge z)\r\n\\)\r\n\r\n\\(\r\n  a \\times b = \\\\\r\n  (a_x b_y - a_y b_x) z + \\\\\r\n  (a_z b_x - a_x b_z) y + \\\\\r\n  (a_y b_z - a_z b_y) x\r\n\\)\r\n\r\n%\r\n% Geometric product\r\n%\r\n\r\n\\pagebreak\r\n\\section{Geometric product}\r\n\\label{sec:geometric-product}\r\n\r\nThe geometric product of 2 
vectors (\\(a\\) and \\(b\\))\r\ncreates a rotor (\\(\\mathbf{R}\\)):\r\n\r\n\\(\r\n  ab\r\n  = \\mathbf{R}\r\n  = \\frac{1}{2} (ab + ba) + \\frac{1}{2} (ab - ba)\r\n  = (a \\cdot b) + (a \\wedge b)\r\n\\)\r\n\r\nFor perpendicular vectors (\\(c\\) and \\(d\\)) we find that \\( c \\cdot d = 0 \\),\r\nhence:\r\n\r\n\\( cd = (c \\cdot d) + (c \\wedge d) = 0 + (c \\wedge d) = c \\wedge d \\)\r\n\r\nBecause basis vectors are perpendicular,\r\nwe can use this to simplify the outer product equation:\r\n\r\n\\(\r\n  a \\wedge b \\\\\r\n  = (a_1 b_2 - a_2 b_1) e_1 e_2\r\n  + (a_2 b_3 - a_3 b_2) e_2 e_3\r\n  + \\cdots\r\n  + (a_n b_1 - a_1 b_n) e_n e_1 \\\\\r\n  = (a_1 b_2 - a_2 b_1) e_{12}\r\n  + (a_2 b_3 - a_3 b_2) e_{23}\r\n  + \\cdots\r\n  + (a_n b_1 - a_1 b_n) e_{n1}\r\n\\)\r\n\r\nIn 2D:\r\n\r\n\\(\r\n  ab\r\n  = (a_x b_x + a_y b_y)\r\n  + (a_x b_y - a_y b_x) xy\r\n\\)\r\n\r\nIn 3D:\r\n\r\n\\(\r\n  ab \\\\\r\n  = (a_x b_x + a_y b_y + a_z b_z)\r\n  + (a_x b_y - a_y b_x) xy\r\n  + (a_y b_z - a_z b_y) yz\r\n  + (a_z b_x - a_x b_z) zx\r\n\\)\r\n\r\n%\r\n% Reflection\r\n%\r\n\r\n\\pagebreak\r\n\\section{Reflection}\r\n\\label{sec:reflection}\r\n\r\nGiven a vector \\(v\\) and unit vector \\(a\\), \\(v_\\parallel\\) is \\(v\\)\r\nprojected onto \\(a\\):\r\n\r\n\\( v_\\parallel = a (a \\cdot v) \\)\r\n\r\n\\(v_\\perp\\) is the vector perpendicular to \\(a\\) that sums with\r\n\\(v_\\parallel\\) to equal \\(v\\):\r\n\r\n\\( v_\\perp = v - v_\\parallel \\)\r\n\r\nTo reflect \\(v\\) by the plane perpendicular to \\(a\\),\r\nwe define the function \\(R_a(v)\\):\r\n\r\n\\( R_a(v) = -ava \\)\r\n\r\n\\(\r\n  va \\\\\r\n  = (v \\cdot a) + (v \\wedge a) \\\\\r\n  = ((v_\\parallel + v_\\perp) \\cdot a)\r\n  + ((v_\\parallel + v_\\perp) \\wedge a) \\\\\r\n  = (v_\\parallel \\cdot a) + (v_\\perp \\cdot a)\r\n  + (v_\\parallel \\wedge a) + (v_\\perp \\wedge a) \\\\\r\n  = (v_\\parallel \\cdot a) + (0) + (0) + (v_\\perp \\wedge a) \\\\\r\n  = (v_\\parallel \\cdot a) + (v_\\perp \\wedge a) \\\\\r\n  = (v_\\parallel \\cdot a) + (v_\\perp a)\r\n\\)\r\n\r\n\\(\r\n  ava \\\\\r\n  = a (v a) \\\\\r\n  = a ((v_\\parallel \\cdot a) + (v_\\perp a)) \\\\\r\n  = (a (v_\\parallel \\cdot a)) + (a (v_\\perp a)) \\\\\r\n  = (a (v_\\parallel \\cdot a)) + (a v_\\perp a) \\\\\r\n  = v_\\parallel + (a v_\\perp a) \\\\\r\n  = v_\\parallel - (v_\\perp a a) \\\\\r\n  = v_\\parallel - v_\\perp\r\n\\)\r\n\r\n\\(\r\n  -ava \\\\\r\n  = -(v_\\parallel - v_\\perp)\\\\\r\n  = v_\\perp - v_\\parallel \\\\\r\n  = v - 2 a (a \\cdot v)\r\n\\)\r\n\r\n%\r\n% Rotation\r\n%\r\n\r\n\\pagebreak\r\n\\section{Rotation}\r\n\\label{sec:rotation}\r\n\r\n% Basis vectors table:\r\n%\r\n% 2D:\r\n%   2   1\r\n%\r\n% 3D:\r\n%   2   1\r\n% 123   3\r\n%\r\n% 4D:\r\n%   2   1 123\r\n% 123   3   2\r\n% 134 234   4\r\n%\r\n% 5D:\r\n%   2   1 123 124\r\n% 123   3   2 234\r\n% 134 234   4   3\r\n% 145 245 345   5\r\n%\r\n% 6D:\r\n%   2   1 123 124 125\r\n% 123   3   2 234 235\r\n% 134 234   4   3 345\r\n% 145 245 345   5   4\r\n% 156 256 356 456   6\r\n%\r\n% 7D:\r\n%   2   1 123 124 125 126\r\n% 123   3   2 234 235 236\r\n% 134 234   4   3 345 346\r\n% 145 245 345   5   4 456\r\n% 156 256 356 456   6   5\r\n%   6 126 136 146 156   1\r\n\r\n\\subsection{2D}\r\n\\label{subsec:rotation-2D}\r\n\r\nRotor \\( R = R_a + R_{xy} \\mathbf{xy} \\)\r\n\r\nVector \\( v = v_x \\mathbf{x} + v_y \\mathbf{y} \\)\r\n\r\nVector \\( T = R v = R_a v + R_{xy} \\mathbf{xy} v \\)\r\n\r\n\\(\r\n  = R_a (v_x \\mathbf{x} + v_y \\mathbf{y})\r\n  + R_{xy} \\mathbf{xy} (v_x \\mathbf{x} + v_y
\\mathbf{y})\r\n\\)\r\n\r\n\\(\r\n  = R_a v_x \\mathbf{x} + R_a v_y \\mathbf{y}\r\n  + R_{xy} v_x \\mathbf{xyx} + R_{xy} v_y \\mathbf{xyy}\r\n\\)\r\n\r\n\\(\r\n  = R_a v_x \\mathbf{x} + R_a v_y \\mathbf{y}\r\n  - R_{xy} v_x \\mathbf{y} + R_{xy} v_y \\mathbf{x}\r\n\\)\r\n\r\n\\(\r\n  = (R_a v_x + R_{xy} v_y) \\mathbf{x}\r\n  + (R_a v_y - R_{xy} v_x) \\mathbf{y}\r\n\\)\r\n\r\nRotor \\( R' = R_a - R_{xy} \\mathbf{xy} \\)\r\n\r\nVector \\( v' = R v R' = T R' \\)\r\n\r\n\\(\r\n  = ((R_a v_x + R_{xy} v_y) \\mathbf{x} + (R_a v_y - R_{xy} v_x) \\mathbf{y})\r\n  (R_a - R_{xy} \\mathbf{xy})\r\n\\)\r\n\r\n\\(\r\n  = ((R_a v_x + R_{xy} v_y) \\mathbf{x}\r\n     + (R_a v_y - R_{xy} v_x) \\mathbf{y}) R_a \\\\\r\n  - ((R_a v_x + R_{xy} v_y) \\mathbf{x}\r\n     + (R_a v_y - R_{xy} v_x) \\mathbf{y}) R_{xy} \\mathbf{xy}\r\n\\)\r\n\r\n\\(\r\n  = ((R_a v_x + R_{xy} v_y) \\mathbf{x} R_a\r\n     + (R_a v_y - R_{xy} v_x) \\mathbf{y} R_a) \\\\\r\n  - ((R_a v_x + R_{xy} v_y) \\mathbf{x} R_{xy} \\mathbf{xy}\r\n     + (R_a v_y - R_{xy} v_x) \\mathbf{y} R_{xy} \\mathbf{xy})\r\n\\)\r\n\r\n\\(\r\n  = ((R_a R_a v_x + R_a R_{xy} v_y) \\mathbf{x}\r\n     + (R_a R_a v_y - R_a R_{xy} v_x) \\mathbf{y}) \\\\\r\n  - ((R_a R_{xy} v_x + R_{xy} R_{xy} v_y) \\mathbf{xxy}\r\n     + (R_a R_{xy} v_y - R_{xy} R_{xy} v_x) \\mathbf{yxy})\r\n\\)\r\n\r\n\\(\r\n  = ((R_a^2 v_x + R_a R_{xy} v_y) \\mathbf{x}\r\n     + (R_a^2 v_y - R_a R_{xy} v_x) \\mathbf{y}) \\\\\r\n  - ((R_a R_{xy} v_x + R_{xy}^2 v_y) \\mathbf{y}\r\n     - (R_a R_{xy} v_y - R_{xy}^2 v_x) \\mathbf{x})\r\n\\)\r\n\r\n\\(\r\n  = ((R_a^2 v_x + R_a R_{xy} v_y) \\mathbf{x}\r\n     + (R_a^2 v_y - R_a R_{xy} v_x) \\mathbf{y}) \\\\\r\n  + ((R_a R_{xy} v_y - R_{xy}^2 v_x) \\mathbf{x}\r\n     - (R_a R_{xy} v_x + R_{xy}^2 v_y) \\mathbf{y})\r\n\\)\r\n\r\n\\(\r\n  = ((R_a^2 v_x + R_a R_{xy} v_y)\r\n     + (R_a R_{xy} v_y - R_{xy}^2 v_x)) \\mathbf{x} \\\\\r\n  + ((R_a^2 v_y - R_a R_{xy} v_x)\r\n     - (R_a R_{xy} v_x + R_{xy}^2 v_y)) \\mathbf{y}\r\n\\)\r\n\r\n\\(\r\n  = (R_a^2 v_x + R_a R_{xy} v_y + R_a R_{xy} v_y - R_{xy}^2 v_x) \\mathbf{x} \\\\\r\n  + (R_a^2 v_y - R_a R_{xy} v_x - R_a R_{xy} v_x - R_{xy}^2 v_y) \\mathbf{y}\r\n\\)\r\n\r\n\\(\r\n  = (R_a^2 v_x + 2 R_a R_{xy} v_y - R_{xy}^2 v_x) \\mathbf{x}\r\n  + (R_a^2 v_y - 2 R_a R_{xy} v_x - R_{xy}^2 v_y) \\mathbf{y}\r\n\\)\r\n\r\n\\(\r\n  = ((R_a^2 - R_{xy}^2) v_x + 2 R_a R_{xy} v_y) \\mathbf{x}\r\n  + ((R_a^2 - R_{xy}^2) v_y - 2 R_a R_{xy} v_x) \\mathbf{y}\r\n\\)\r\n\r\n\\(\r\n  R = \\mathbf{xy}\r\n  = (\\mathbf{x} \\cdot \\mathbf{y}) + (\\mathbf{x} \\wedge \\mathbf{y})\r\n  = 0 + 1 \\mathbf{xy}\r\n\\)\r\n\r\n\\(\r\n  = ((0^2 - 1^2) v_x + 0 v_y) \\mathbf{x}\r\n  + ((0^2 - 1^2) v_y - 0 v_x) \\mathbf{y}\r\n\\)\r\n\r\n\\(\r\n  = - v_x \\mathbf{x} + - v_y \\mathbf{y}\r\n\\)\r\n\r\n\\end{document}", "meta": {"hexsha": "3613362d70f9619c0d40459fab5722cf9d6105d7", "size": 8601, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "gal.tex", "max_stars_repo_name": "LAK132/gal", "max_stars_repo_head_hexsha": "032a20582191ea0b6c28f34b7b079f113bbb1037", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "gal.tex", "max_issues_repo_name": "LAK132/gal", "max_issues_repo_head_hexsha": "032a20582191ea0b6c28f34b7b079f113bbb1037", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"gal.tex", "max_forks_repo_name": "LAK132/gal", "max_forks_repo_head_hexsha": "032a20582191ea0b6c28f34b7b079f113bbb1037", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.0293398533, "max_line_length": 79, "alphanum_fraction": 0.508545518, "num_tokens": 4002, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702761768248, "lm_q2_score": 0.7662936324115011, "lm_q1q2_score": 0.555463476958667}}
{"text": "\\section{Reinforcement Learning for Automated Trading}\n\\label{sec:application_to_systematic_trading}\n\nMany financial applications can be seen as sequential decision problems which naturally fall in the stochastic optimal control framework introduced above. In this section we discuss how reinforcement learning algorithms can be applied to the asset allocation problem, where an agent invests his capital on various assets available in the market.  \n\n\\subsection{Asset Allocation With Transaction Costs} \nThe asset allocation problem consists of determining how to dynamically invest the available capital in a portfolio of different assets in order to maximize the expected total return or another relevant performance measure. Let us consider a financial market consisting of $I+1$ different stocks that are traded only at discrete times $t \\in \\{0, 1, 2, \\ldots\\}$ and denote by ${Z}_t = {(Z_t^0, Z_t^1, \\ldots, Z_t^I)}^T$ their prices at time $t$. Typically, $Z_t^0$ refers to a riskless asset whose dynamic is given by $Z_t^0 = {(1 + X)}^t$ where $X$ is the deterministic risk-free interest rate. The investment process works as follows: at time $t$, the investor observes the\nstate of the market $S_t$, consisting for example of the past asset prices and other relevant economic variables, and subsequently chooses how to rebalance his portfolio, by specifying the units of each stock ${n}_t = {(n_t^0 , n_t^1 , \\ldots , n_t^I)}^T$ to be held between $t$ and $t+1$. In doing so, he needs to take into account the transaction costs that he has to pay to the broker to change his position.  At time $t+1$, the investor realizes a profit or a loss from his investment due to the stochastic variation of the stock values. The investor\u2019s goal is to maximize a given performance measure. Let $W_t$ denote the wealth of the investor at time $t$. The profit realized between $t$ and $t+1$ is simply given by the difference between the trading results and the transaction costs payed to the broker. More formally\n\\begin{equation*}\n\t\\Delta W_{t+1} = W_{t+1} - W_t = \\text{PNL}_{t+1} - \\text{TC}_{t}\t\n\\end{equation*}\nwhere $\\text{PNL}_{t+1}$ denotes the profit due to the variation of the\nportfolio asset prices between $t$ and $t+1$\n\\begin{equation*}\n\t\\text{PNL}_{t+1} = {n}_t \\cdot \\Delta{Z}_{t+1} = \\sum^{I}_{i=0} \n\tn_t^i (Z_{t+1}^i - Z_t^i) \n\\end{equation*}\nand $\\text{TC}_t$ denotes the fees payed to the broker to change the portfolio\nallocation and on the short positions\n\\begin{equation*}\n\t\\text{TC}_t = \\sum^{I}_{i=0} \\delta_p^i \\left| n_t^i - n_{t-1}^i\\right| Z_t^i \n\t\t\t\t- \\delta_f W_t \\ind{{n}_t \\neq {n}_{t-1}} \n\t\t\t\t- \\sum^{I}_{i=0} \\delta_s^i {(n_t^i)}^{-} Z_t^i\n\\end{equation*}\nThe transaction costs consist of three different components. The first term \nrepresent a transaction cost that is proportional to the change in value of the \nposition in each asset. The second term is a fixed fraction of the total value\nof the portfolio which is payed only if the allocation is changed. The last\nterm represents the fees payed to the broker for the shares borrowed to build a\nshort position. 
The portfolio return between $t$ and $t+1$ is thus given by\n\\begin{equation}\\label{eq:portfolio_return}\n\tX_{t+1} = \\frac{\\Delta W_{t+1}}{W_t} = \\sum^{I}_{i=0} \\left[ a_t^i\n\tX_{t+1}^i - \\delta_i \\left| a_t^i - \\tilde{a}_t^i \\right| - \\delta_s^i\n\t{(a_t^i)}^- \\right] - \\delta_f \\ind{{a}_t \\neq \\tilde{{a}}_{t}}  \n\\end{equation}\nwhere \n\\begin{equation*}\n\tX_{t+1}^i = \\frac{\\Delta Z_{t+1}^i}{Z_t^i}\n\\end{equation*}\nis the return of the $i$-th stock between $t$ and $t+1$, \n\\begin{equation*}\n\ta_t^i = \\frac{n_t^i Z_t^i}{W_t}\n\\end{equation*}\nis the fraction of wealth invested in the $i$-th stock between time $t$ and\n$t+1$, and finally \n\\begin{equation*}\n\t\\tilde{a}_t^i = \\frac{n_{t-1}^i Z_t^i}{W_t} = \\frac{a_{t-1}^i (1+X_t^i)}\n\t{1 + X_t}\n\\end{equation*}\nis the fraction of wealth invested in the $i$-th stock just before the \nreallocation. We assume that the agent invests all his wealth at each step, so \nthat $W_t$ can also be interpreted as the value of his portfolio. This \nassumption leads to the following constraint on the portfolio weights\n\\begin{equation}\n\t\\sum^{I}_{i=0} a_t^i = 1 \\;\\;\\;\\;\\; \\forall t \\in \\{0, 1, 2, \\ldots\\}\n\\end{equation}\nWe notice that we are neglecting the typical margin requirements on the short\npositions, which would reduce the available capital at time $t$. Considering\nmargin requirements would lead to a more complex constraint on the portfolio\nweights which would be difficult to treat in the reinforcement learning\nframework. Plugging this constraint into Eq. (\\ref{eq:portfolio_return}), we\nobtain\n\\begin{equation}\\label{eq:portfolio_return_benchmark}\n\tX_{t+1} = X + \\sum^{I}_{i=1} a_t^i (X_{t+1}^i - X) - \\sum^{I}_{i=0}\n\t\\left[\\delta_i \\left| a_t^i - \\tilde{a}_t^i \\right| + \\delta_s^i\n\t{(a_t^i)}^-\\right] - \\delta_f \\ind{{a}_t \\neq \\tilde{{a}}_{t}}   \n\\end{equation}\nwhich highlights the role of the risk-free asset as a benchmark for the \nportfolio returns. The total profit realized by the investor between $t=0$ and\n$T$ is \n\\begin{equation*}\n\t\\Pi_T = W_T - W_0 = \\sum^{T}_{t=1} \\Delta W_t = \\sum^{T}_{t=1} W_{t-1} X_t  \n\\end{equation*}\nThe portfolio return between $t=0$ and $T$ is given by\n\\begin{equation*}\n\tX_{0,T} = \\frac{W_T}{W_0} - 1 = \\prod_{t=1}^T (1+X_t) - 1\n\\end{equation*}\nIn order to cast the asset allocation problem in the reinforcement learning\nframework, we consider the log-return of the portfolio between $t=0$ and $T$\n\\begin{equation}\n\tR_{0,T} = \\log \\frac{W_T}{W_0} = \\sum^{T}_{t=1} \\log(1+X_t) = \\sum_{t=1}^T\n\tR_t\n\\end{equation}\nwhere $R_{t+1}$ is the log-return of the portfolio between $t$ and $t+1$\n\\begin{equation}\n\tR_{t+1} = \\log \\left\\{ 1 + \\sum^{I}_{i=0} \\left[ a_t^i X_{t+1}^i - \\delta_i\n\t\\left| a_t^i - \\tilde{a}_t^i \\right| - \\delta_s^i {(a_t^i)}^- \\right] -\n\t\\delta_f \\ind{{a}_t \\neq \\tilde{{a}}_{t}}\\right\\}\n\\end{equation}\nThe portfolio log-return can be used as the reward function of an\nRL algorithm, either in an offline or in an online approach.\n\n\\subsection{Reinforcement Learning Application}\nIn the previous section we derived the reward function for the asset allocation problem with transaction costs. In order to apply the policy gradient algorithms discussed in the previous sections we still need to define the state space, the action space and the agent's policy. For simplicity, we limit ourselves to the case of a single risky asset, i.e.
$I = 1$, but the discussion could be generalized to the multi-asset case.\\\\\nWe assume that at each time step the agent considers the $P+1$ past returns of the risky asset, i.e. $\\{X_t, X_{t-1}, \\ldots, X_{t-P}\\}$. In order to properly incorporate the effects of transaction costs into his decision process, the agent must keep track of his current position $\\tilde{a}_t$. The state of the system is thus given by $S_t = \\{X_t, X_{t-1}, \\ldots, X_{t-P}, \\tilde{a}_t\\}$. We might also include some external variables $Y_t$ that may be relevant to the trader, such as the common technical indicators used in practice. Furthermore, these input variables may be used to construct more complex features, for example using some deep learning techniques, such as a deep auto-encoder.\\\\\nThe agent, or trading system, specifies the portfolio weights $a_t = (a_t^0, a_t^1)^T$ according to a long-short strategy, i.e. the agent may be long ($a_t^1 = +1$) or short ($a_t^1 = -1$) on the risky asset, while $a_t^0 = 1 - a_t^1$ since the agent invests all the available capital at each time step. In the GPOMDP framework we assume that the agent selects $a_t^1$ according to a Boltzmann policy, i.e.\n\\begin{equation}\n\t\\pi_\\theta(s, +1) = \\frac{e^{\\theta^T s}}{1 + e^{\\theta^T s}} \\;\\;\\;\\;\\; \\pi_\\theta(s, -1) = \\frac{1}{1 + e^{\\theta^T s}}\n\\end{equation}\nwhere we included a bias term in the parameters and in the state. In the parameter-based formulation, we assume that the agent selects actions according to the binary controller\n\\begin{equation}\n\tF_\\theta(s) = \\sign(\\theta^T s)\n\\end{equation}\nwhere the controller parameters are normally distributed $\\theta \\sim \\calN(\\mu, \\diag(\\sigma))$. Since the formulation of the asset allocation problem given above is non-episodic, we actually applied the online version of the algorithms discussed above. The main considerations made above still hold and we refer to the full thesis for the details. \n", "meta": {"hexsha": "d6f34db2aa5d091c35d31e331c947c84c14ca6b7", "size": 8348, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Pacs/Report/Sections/3_application_to_systematic_trading.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Pacs/Report/Sections/3_application_to_systematic_trading.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pacs/Report/Sections/3_application_to_systematic_trading.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 79.5047619048, "max_line_length": 827, "alphanum_fraction": 0.7255630091, "num_tokens": 2567, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397348, "lm_q2_score": 0.702530051167069, "lm_q1q2_score": 0.5553713121581397}}
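The single-asset reward just defined is straightforward to evaluate directly. The sketch below (an added illustration, not code from the thesis; the cost parameters are placeholder values and costs on the riskless leg are ignored) computes the one-step log-return $R_{t+1}$ for the long-short setting $a_t^1 \in \{-1, +1\}$:

\begin{verbatim}
import numpy as np

def log_return_reward(a, a_tilde, x_risky, x_free=0.0,
                      delta_p=0.002, delta_s=0.001, delta_f=0.0):
    # One-step portfolio log-return for a single risky asset.
    # a, a_tilde: new and pre-rebalancing weights of the risky asset
    # (the riskless weight is 1 - a); x_risky, x_free: simple returns.
    gross = x_free + a * (x_risky - x_free)      # trading result
    tc = (delta_p * abs(a - a_tilde)             # proportional cost
          + delta_s * max(-a, 0.0)               # short-selling fee
          + delta_f * float(a != a_tilde))       # fixed reallocation fee
    return np.log1p(gross - tc)

# A long -> short flip ahead of a -1% move pays off net of costs
print(log_return_reward(a=-1.0, a_tilde=+1.0, x_risky=-0.01))
\end{verbatim}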
{"text": "\\section{The Ratio and Root Tests}\\label{sec:ratioroottest}\n\nDoes the series $\\ds\\sum_{n=0}^\\infty {n^5\\over 5^n}$ converge? It is\npossible, but a bit unpleasant, to approach this with the integral\ntest or the comparison test, but there is an easier way. Consider what\nhappens as we move from one term to the next in this series:\n\n\\[\\cdots+{n^5\\over5^n}+{(n+1)^5\\over 5^{n+1}}+\\cdots\\]\n\nThe denominator goes up by a factor of 5, $\\ds 5^{n+1}=5\\cdot5^n$, but the\nnumerator goes up by much less: $\\ds (n+1)^5=n^5+5n^4+10n^3+10n^2+5n+1$,\nwhich is much less than $\\ds 5n^5$ when $n$ is large, because $\\ds 5n^4$ is\nmuch less than $\\ds n^5$. So we might guess that in the long run it begins\nto look as if each term is $1/5$ of the previous term. We have seen\nseries that behave like this: the geometric series\n\n\\[\\sum_{n=0}^\\infty {1\\over 5^n} = {5\\over4}.\\]\nSo we might try comparing the given series to some\nvariation of this geometric series. This is possible, but a bit\nmessy. We can in effect do the same thing, but bypass most of the\nunpleasant work.\n\nThe key is to notice that\n\\[\n  \\lim_{n\\to\\infty} {a_{n+1}\\over a_n}=\n  \\lim_{n\\to\\infty} {(n+1)^5\\over 5^{n+1}}{5^n\\over n^5}=\n  \\lim_{n\\to\\infty} {(n+1)^5\\over n^5}{1\\over 5}=1\\cdot {1\\over5}\n    ={1\\over 5}.\n\\]\n\nThis is really just what we noticed above, done a bit more formally:\nin the long run, each term is one fifth of the previous term. Now pick\nsome number between $1/5$ and $1$, say $1/2$. Because\n$$\\lim_{n\\to\\infty} {a_{n+1}\\over a_n}={1\\over5},$$\nthen when $n$ is big enough, say $n\\ge N$ for some $N$, \n\n\\[\n  {a_{n+1}\\over a_n}<{1\\over2} \\quad \\hbox{so}\\quad a_{n+1}<{a_n\\over2}.\n\\]\n\nSo $\\ds a_{N+1}< a_N/2$, $\\ds a_{N+2}<a_{N+1}/2<a_N/4$,\n$\\ds a_{N+3}<a_{N+2}/2<a_N/8$, and so on. The general form is\n$\\ds a_{N+k}< a_N/2^k$. So if we look at the series\n\n\\[\n  \\sum_{k=0}^\\infty a_{N+k}=\n  a_N+a_{N+1}+a_{N+2}+a_{N+3}+\\cdots+a_{N+k}+\\cdots,\n\\]\nits terms are less than or equal to the terms of the series\n\n\\[\n  a_N+{a_N\\over2}+{a_N\\over4}+{a_N\\over8}+\\cdots+{a_N\\over2^k}+\\cdots=\n  \\sum_{k=0}^\\infty {a_N\\over 2^k} = 2a_N.\n\\]\n\nSo by the comparison test, $\\ds\\sum_{k=0}^\\infty a_{N+k}$ converges,\nand this means that $\\ds\\sum_{n=0}^\\infty a_{n}$ converges, since\nwe've just added the fixed number $\\ds a_0+a_1+\\cdots+a_{N-1}$.\n\nUnder what circumstances could we do this? What was crucial was that\nthe limit of $\\ds a_{n+1}/a_n$, say $L$, was less than 1 so that we could pick a\nvalue $r$ so that $L<r<1$. The fact that $L<r$ ($1/5<1/2$ in our\nexample) means that we can compare the series $\\sum a_n$ to $\\sum\nr^n$, and the fact that $r<1$ guarantees that $\\sum r^n$\nconverges. That's really all that is required to make the argument\nwork. We also made use of the fact that the terms of the series were\npositive; in general we simply consider the absolute values of the\nterms and we end up testing for absolute convergence.\n\n\\begin{theorem}{The Ratio Test}{RatioTest}\nSuppose that $\\ds\\lim_{n\\to \\infty} |a_{n+1}/a_n|=L$. If $L<1$\nthe series $\\sum a_n$ converges absolutely, \nif $L>1$ the series diverges, and if\n$L=1$ this test gives no information.\n\\end{theorem}\n\\begin{proof}\nThe example above essentially proves the first part of this, if we\nsimply replace $1/5$ by $L$ and $1/2$ by $r$. 
\nSuppose that $L>1$, and pick $r$ so that $1<r<L$.\nThen for $n\\ge N$, for some $N$,\n\\[{|a_{n+1}|\\over |a_n|} > r \\quad \\hbox{and}\\quad |a_{n+1}| > r|a_n|.\\]\n\nThis implies that $\\ds |a_{N+k}|>r^k|a_N|$, but since $r>1$ this means\nthat $\\ds\\lim_{k\\to\\infty}|a_{N+k}|\\not=0$, which means also that\n$\\ds\\lim_{n\\to\\infty}a_n\\not=0$. By the divergence test, the series\ndiverges. \n\nTo see that we get no information when $L=1$, we need to exhibit two\nseries with $L=1$, one that converges and one that diverges. The series\n$\\sum 1/n^2$ and $\\sum 1/n$ provide a simple example.\n\\end{proof}\n\nThe ratio test is particularly useful for series involving\nthe factorial function.\n\n\\begin{example}{}{}\nAnalyze $\\ds\\sum_{n=0}^\\infty  \\frac{5^n}{n!}$.\n\\end{example}\n\\begin{solution}\n$$\n  \\lim_{n\\to\\infty} {5^{n+1}\\over (n+1)!}{n!\\over 5^n}=\n  \\lim_{n\\to\\infty} {5^{n+1}\\over 5^n}{n!\\over (n+1)!}=\n  \\lim_{n\\to\\infty} {5}{1\\over (n+1)}=0.\n$$\nSince $0<1$, the series converges.\n\\end{solution}\n\nA similar argument justifies a similar test\nthat is occasionally easier to apply. \n\n\\begin{theorem}{The Root Test}{RootTest}\nSuppose that $\\ds\\lim_{n\\to \\infty} |a_n|^{1/n}=L$. If $L<1$\nthe series $\\sum a_n$ converges absolutely, \nif $L>1$ the series diverges, and if\n$L=1$ this test gives no information.\n\\end{theorem}\n\nThe proof of the root test is actually easier than that of the ratio\ntest, and is left as an exercise.\n\n\\begin{example}{}{}\nAnalyze $\\ds\\sum_{n=0}^\\infty {5^n\\over n^n}$.\n\\end{example}\n\\begin{solution}\nThe ratio test turns out to be a bit difficult on this series (try\nit). Using the root test:\n$$\n  \\lim_{n\\to\\infty} \\left({5^n\\over n^n}\\right)^{1/n}=\n  \\lim_{n\\to\\infty} {(5^n)^{1/n}\\over (n^n)^{1/n}}=\n  \\lim_{n\\to\\infty} {5\\over n}=0.\n$$\nSince $0<1$, the series converges.\n\\end{solution}\n\nThe root test is frequently useful when $n$ appears as an exponent in\nthe general term of the series.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\Opensolutionfile{solutions}[ex]\n\\section*{Exercises for \\ref{sec:ratioroottest}}\n\n\\begin{enumialphparenastyle}\n\n\\begin{ex}\nCompute $\\ds\\lim_{n\\to\\infty} |a_{n+1}/a_n|$ for the series\n$\\sum 1/n^2$.\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\lim_{n\\to\\infty} |a_{n+1}/a_n|$ for the series\n$\\sum 1/n$.\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\lim_{n\\to\\infty} |a_n|^{1/n}$ for the series\n$\\sum 1/n^2$.\n\\end{ex}\n\n\\begin{ex}\nCompute $\\ds\\lim_{n\\to\\infty} |a_n|^{1/n}$ for the series\n$\\sum 1/n$.\n\\end{ex}\n\n\\begin{ex}\nDetermine whether the series converge.\n\n\\begin{multicols}{2}\n\\begin{enumerate}\n\t\\item $\\ds\\sum_{n=0}^\\infty (-1)^{n}{3^n\\over 5^n}$\n\t\\item $\\ds\\sum_{n=1}^\\infty {n!\\over n^n}$\n\t\\item $\\ds\\sum_{n=1}^\\infty {n^5\\over n^n}$\n\t\\item $\\ds\\sum_{n=1}^\\infty {(n!)^2\\over n^n}$\n\\end{enumerate}\n\\end{multicols}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item converges\n\t\\item converges\n\t\\item converges\n\t\\item diverges\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n\\begin{ex}\nProve Theorem \\ref{thm:RootTest}, the root test.\n\\end{ex}\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "9c318349d1da74c0fdebb3ab1b20ba8bafa62999", "size": 6222, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "9-sequences-and-series/9-7-ratio-root-test.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": 
["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9-sequences-and-series/9-7-ratio-root-test.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9-sequences-and-series/9-7-ratio-root-test.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9206349206, "max_line_length": 80, "alphanum_fraction": 0.6663452266, "num_tokens": 2375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947425132314, "lm_q2_score": 0.8479677602988601, "lm_q1q2_score": 0.5553296280404435}}
{"text": "\\chapter{Relationships are pointfree funcoids}\n\n\\begin{thm}\n  $(\\tofcd, \\torldin )$ are components\n  of a complete pointfree funcoid.\n\\end{thm}\n\n\\begin{proof}\n  For all ultrafilters $x$ and $y$ we have $x\n  \\mathrel{[\\tofcd (f \\sqcap \\torldin \n  g)]} y \\Leftrightarrow x \\times^{\\mathsf{RLD}} y \\nasymp f \\sqcap\n  \\torldin  g \\Leftrightarrow x\n  \\times^{\\mathsf{RLD}} y \\sqsubseteq\n  \\torldin  g \\land x \\times^{\\mathsf{RLD}} y\n  \\nasymp f \\sqcap \\torldin g \\Leftrightarrow x \\times^{\\mathsf{FCD}} y \\in\n  \\atoms g : x \\times^{\\mathsf{RLD}} y \\nasymp f \\sqcap \\torldin g\n  \\Leftrightarrow\n  x \\times^{\\mathsf{FCD}} y \\in\n  \\atoms g : x \\times^{\\mathsf{RLD}} y \\nasymp f\n  \\Leftrightarrow\n  x \\times^{\\mathsf{FCD}} y \\in \\atoms g \\land x\n  \\times^{\\mathsf{FCD}} y \\sqsubseteq \\tofcd f\n  \\Leftrightarrow x \\mathrel{[g \\sqcap \\tofcd f]} y$.\n  \n  Thus $\\tofcd (f \\sqcap \\torldin  g) =\n  g \\sqcap \\tofcd f$. Consequently $f \\sqcap\n  \\torldin  g = \\bot \\Leftrightarrow g \\sqcap\n  \\tofcd f = \\bot$, that is, $g \\nasymp \\tofcd f\n  \\Leftrightarrow f \\nasymp \\torldin  g$.\n  \n  It is complete by theorem~\\bookref{rels-dist}.\n\\end{proof}\n\nWe will also prove in another way that $\\tofcd$ and\n$\\torldin $ are components of pointfree funcoids:\n\n\\begin{thm}\n  $\\torldin $ is a component of a pointfree funcoid\n  (between filters on boolean lattices).\n\\end{thm}\n\n\\begin{proof}\n  Consider the pointfree funcoid $\\mathscr{R}$ defined by the formula\n  $\\left\\langle \\mathscr{R} \\right\\rangle^{\\ast} F =\n  \\torldin  F$ for binary relations $F$ (it obviously\n  exists). Then $\\left\\langle \\mathscr{R} \\right\\rangle f = \\left\\langle\n  \\mathscr{R} \\right\\rangle \\bigsqcap^{\\mathsf{FCD}} \\up^{\\Gamma}\n  f = \\bigsqcap^{\\mathsf{RLD}}_{F \\in \\up^{\\Gamma} f}\n  \\left\\langle \\mathscr{R} \\right\\rangle^{\\ast} F =\n  \\bigsqcap^{\\mathsf{RLD}}_{F \\in \\up^{\\Gamma} f}\n  \\torldin  F = \\torldin \n  \\bigsqcap^{\\mathsf{FCD}}_{F \\in \\up^{\\Gamma} f} F =\n  \\torldin  f$.\n\\end{proof}\n\n\\begin{thm}\n  $\\tofcd$ is a component of a complete pointfree funcoid\n  (between filters on boolean lattices).\n\\end{thm}\n\n\\begin{proof}\n  Consider the pointfree funcoid $\\mathscr{Q}$ defined by the formula\n  $\\left\\langle \\mathscr{Q} \\right\\rangle^{\\ast} F = \\tofcd F$\n  for binary relations $F$ (it obviously exists). 
Then $\\left\\langle\n  \\mathscr{Q} \\right\\rangle f = \\left\\langle \\mathscr{Q} \\right\\rangle\n  \\bigsqcap^{\\mathsf{RLD}} \\up f = \\text{(because $\\up f$\n  is a filter base)} = \\bigsqcap^{\\mathsf{FCD}}_{F \\in \\up f}\n  \\left\\langle \\mathscr{Q} \\right\\rangle^{\\ast} F =\n  \\bigsqcap^{\\mathsf{FCD}}_{F \\in \\up f} \\tofcd F\n  = \\bigsqcap^{\\mathsf{FCD}}_{F \\in \\up f} F =\n  \\bigsqcap^{\\mathsf{FCD}} \\up f = \\tofcd f$.\n\\end{proof}\n\n\\begin{prop}\n$\\tofcd \\bigsqcap S = \\bigsqcap_{f\\in S}\\tofcd f$ if~$S$ is\na filter base of reloids (with the same sources and destinations).\n\\end{prop}\n\n\\begin{proof}\nTheorem~\\bookref{supfun-genbase}.\n\\end{proof}\n\n\\begin{conjecture}\n$\\torldin \\bigsqcap S = \\bigsqcap_{f\\in S}\\torldin f$ if~$S$ is\na filter base of funcoids (with the same sources and destinations).\n\\end{conjecture}\n", "meta": {"hexsha": "fcd3d33e7da1e74ad3e1e30d43a561ca93405c27", "size": 3067, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chap-rels-are-fcd.tex", "max_stars_repo_name": "vporton/algebraic-general-topology", "max_stars_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2018-06-26T00:14:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-03T04:56:16.000Z", "max_issues_repo_path": "chap-rels-are-fcd.tex", "max_issues_repo_name": "vporton/algebraic-general-topology", "max_issues_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-12-30T07:16:23.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-13T02:05:02.000Z", "max_forks_repo_path": "chap-rels-are-fcd.tex", "max_forks_repo_name": "vporton/algebraic-general-topology", "max_forks_repo_head_hexsha": "d1d02a6515a6dabbc5d30b0c00a3e6a9878b36b1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2528735632, "max_line_length": 77, "alphanum_fraction": 0.6680795566, "num_tokens": 1231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8479677737461007, "lm_q2_score": 0.6548947290421275, "lm_q1q2_score": 0.5553296254239087}}
{"text": "\\section{Methodology}\\label{sec:methodology:clustering}\n\nIn order to understand the makeup of \\bslongs{} in \\cplop{}, we chose to investigate how a density-based clustering algorithm might group the \\isols{} we have collected so far and how that might affect a rudimentary \\mst{} technique based on clustering: cluster an unknown \\isol{} along with the \\isols{} in \\cplop{} and classify it as the most plural species of the cluster.\nDensity-based clustering ties in well with our notion of closely-related strains of \\ecoli{} (the relation being the separate \\pearson{} comparison of each \\itsshort{} region we use to compare \\isols{}).\n\\index{\\ecoli{} strain}\nFrom the computer science point of view, a bacterial strain is essentially a cluster of \\ecoli{} \\isol{} representations stored in \\cplop{}.\nOur \\mst{} method thus works as follows:\n\\begin{enumerate}\n    \\item \\textbf{Strain Identification.} Identify bacterial strains in \\cplop{} by clustering\n    all \\cplop{} \\isols{}.\n    \\item \\textbf{MST.} Given an isolate of unknown origin, find the cluster it belongs to.\n    Return the \\spec{} of the plurality of isolates in the cluster.\n\\end{enumerate}\n\nOur clustering algorithm is the density-based clustering algorithm developed by Johnson \\cite{johnson2015density}.\nIt extends \\dbscan{} to the case of two \\compfuncs{} between data points (our isolates are compared based on the two \\itsshort{} regions) and implements an efficient spatial data structure to manage the retrieval of the data points.\n\n\\dbscan{} can easily use an efficient range query technique to find nearby points and speed up clustering time considerably by taking advantage of the \\trieq{} that proper metric spaces have.\nUnfortunately, \\pearson{} does not encode a metric space, because it fails the \\trieq{}\\footnote{$d(x,z)\\leq d(x,y) + d(y,z)$}. \nThis complicates range queries, discussed in \\autoref{sec:background:dbscan}, because spatial indexes tend to rely on the \\trieq{}, usually with \\euclid{}, to argue that certain points can be ignored during a spatial index tree traversal. \n\nTo allow the use of a spatial data structure to store data points during the clustering procedure, we use, instead of the \\pearson{}, the \\clustplop{}, which we derive by recognizing in \\autoref{eq:pearson} that \\pearson{} is made up of \\zscores{} of \\pcveca{} and \\pcvecb{}. \nThe \\zscore{} of \\pcveca{} is:\n\\[\n\\zscoreeq{\\pca}\n\\]\nwhere \\vecavg{\\pca} and \\vecstddev{\\pca} are the mean and standard deviation of the values in a single pyroprint, respectively. 
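As a small illustration of the comparison just described, the following Python sketch computes the $z$-scores of a pyroprint and the Euclidean distance between two $z$-scored pyroprints; the function names are hypothetical, not the CPLOP code.

\begin{verbatim}
import numpy as np

def zscores(p):
    # z-score of a single pyroprint: (value - mean) / standard deviation
    return (p - p.mean()) / p.std()

def eucz(p, q):
    # Euclidean distance between z-scored pyroprints, used in place of
    # the Pearson correlation so that spatial indexes can be applied
    return np.linalg.norm(zscores(p) - zscores(q))
\end{verbatim}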
\nThus, for clustering, we compare \\pyros{} using the Euclidean distance \\euczfunclabel{} of $z$-scores.\n\\begin{equation}\\label{eq:euclidean_zscores}\n\\eucz{\\pca}{\\pcb}\n\\end{equation}\nwhere \\numdims{} is the number of dimensions.\nThis allows us to use spatial indexes and \\bigo{\\log{n}} lookup in \\dbscan{}.\n\nEach \\isol{} is represented in \\cplop{} by a pair of \\pyros{}: one from each of the \\Ssixt{} and \\Sfive{} regions, which complicates the use of \\dbscan{} and the meaning of the $\\alpha$ threshold.\nWe handle this in \\dbscan{} by performing two range queries, one each for \\Ssixt{} and \\Sfive{}, and taking the intersection of the two results.\nWe must, however, pick a suitable \\eps{} for each \\itsshort{} region.\n\n\\cplop{} uses a threshold value of $\\alpha = 0.995$ to compare two \\pyros{}.\n\\Pyros{} with \\pearson{} above $\\alpha$ are considered to represent the same \\dna{} material, while \\pyros{} with a \\pearson{} below $\\alpha$ are considered to represent different \\dna{} material \\cite{Shealy:SeniorProject, soliman2013cplop, SolimanDVMBNWKG12}.\n\nThe number of dispensations \\numdims{} used to build a \\pyro{} differs for the \\Ssixt{} and \\Sfive{} regions.\nBecause $\\numdims{}_{\\Ssixt{}} \\neq \\numdims{}_{\\Sfive{}}$, the original $\\alpha$ under the space defined by \\eqref{eq:euclidean_zscores} no longer applies in the same way to both regions.\nAn alternative formulation of \\eqref{eq:euclidean_zscores}, with respect to the \\pearson{} \\pcfunclabel{}, is:\n\\begin{equation}\\label{eq:euclidean_zscores_alternate}\n\\euczalternate{\\pca}{\\pcb}\n\\end{equation}\nwhere \\numdims{} is the number of dimensions and $\\numdims{}_{\\pcveca{}} = \\numdims{}_{\\pcvecb{}} = \\numdims{}$. \nUsing \\autoref{eq:euclidean_zscores_alternate}, we can convert $\\alpha$ to the values in \\autoref{tab:converted_thresholds}. \nWe use these converted $\\alpha$ values as the \\eps{} for each \\itsshort{} region's \\codefn{RangeQuery}.\n\\begin{table}\n\\centering\n\\caption{Converted $\\alpha$ threshold to fit the new metric space defined by \\eqref{eq:euclidean_zscores}.}\n\\label{tab:converted_thresholds}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{\\itsshort{} Region} & \\Ssixt{}     & \\Sfive{}     \\\\\n                            & $\\alpha$     & $\\alpha$     \\\\ \\hline\n\\pcfunc{\\pca}{\\pcb}         & 0.995        & 0.995        \\\\ \\hline\n\\numdims{}                  & \\Ssixtdims{} & \\Sfivedims{} \\\\ \\hline\n\\euczfunc{\\pca{}}{\\pcb{}}   & 0.9747       & 0.9644       \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nWhen clustering \\cplop{} isolates using our density-based clustering algorithm, we need to set up the two parameters at our disposal: \\minneigh{} and \\eps{}.\nFor \\eps{} we choose the two values shown in Table \\ref{tab:converted_thresholds}, converted from the 0.995 \\pearson{} threshold of pyroprint similarity. 
\nEssentially, we want the \\eps{}-neighborhood of a \\pyro{} to contain only the other \\pyros{} that we consider to represent the same \\dna{} material.\n\nFor the \\minneigh{} parameter, we use \\textit{grid search}, running our clustering with \\minneigh{} set to $1, 2, 3, 4, 5, 6$, and $7$.\nThe \\minneigh{} value adjusts how strict our definition of a cluster is.\nThat is, the higher the value of \\minneigh{}, the more neighbors a core point must have within \\eps{} of it and its neighbors to become a cluster.\nBalancing this value with the coverage of our algorithm is crucial to its success: too low a value may leave no clear plurality in a cluster, while too high a value may miss smaller clusters that would classify our unknown \\isol{} as something other than noise.\n", "meta": {"hexsha": "5fd36373c8bfb747f37c896b688f065c2dd10ce7", "size": 6112, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/clustering/methodology.tex", "max_stars_repo_name": "jmcgover/thesis", "max_stars_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/clustering/methodology.tex", "max_issues_repo_name": "jmcgover/thesis", "max_issues_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/clustering/methodology.tex", "max_forks_repo_name": "jmcgover/thesis", "max_forks_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.7260273973, "max_line_length": 380, "alphanum_fraction": 0.7342931937, "num_tokens": 1668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5553268032119346}}
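The threshold conversion in the preceding record follows from the identity $\|z(\mathbf{a}) - z(\mathbf{b})\|^2 = 2n(1 - r)$, which holds for two $z$-scored vectors of length $n$ with Pearson correlation $r$. Here is a sketch of the conversion in Python (the helper name is hypothetical, not the thesis code):

\begin{verbatim}
import math

def alpha_to_eps(alpha, n_dispensations):
    # Converts a Pearson threshold alpha into the equivalent epsilon
    # radius in the Euclidean space of z-scored pyroprints, using
    # ||z(a) - z(b)||^2 = 2 * n * (1 - r).
    return math.sqrt(2.0 * n_dispensations * (1.0 - alpha))
\end{verbatim}

Plugging each region's dispensation count and $\alpha = 0.995$ into this formula should reproduce the converted thresholds tabulated above.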
{"text": "\\documentclass{article}\n\\usepackage[T2A]{fontenc}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{amsfonts}\n\\usepackage{mathtools}\n\\usepackage{algorithm}\n\\usepackage{algorithmicx}\n\\usepackage{algpseudocode}\n\\usepackage[left=75pt]{geometry}\n\\usepackage{tikz}\n\n\\newcommand{\\score}[1]{\\textrm{score}(#1)}\n\\newcommand{\\bigO}[1]{\\mathcal{O}{(#1)}}\n\\newcommand{\\graph}[1]{\\mathcal{#1}}\n\\newcommand{\\ops}[1]{\\textrm{ops}(#1)}\n\\newcommand{\\expect}[1]{\\mathbb{E} #1}\n\\newcommand{\\argmin}{\\operatornamewithlimits{argmin}}\n\\newcommand{\\argmax}{\\operatornamewithlimits{argmax}}\n\n\\author{Dmitry Lunin, \\\\ \\small supervised by Senko O.V.}\n\\title{Improving speed efficiency of score-based Bayesian network structure learning algorithms}\n\\begin{document}\n\\maketitle\n\\tableofcontents\n\\pagebreak\n\n\\abstract{\n\tThe task of Bayesian network structure learning is NP-hard \\cite{StructureLearningIsNPComplete}, and as such it is usually solved by heuristic algorithms. That makes speed optimization of the structure learning algorithms especially important, since it makes it possible to traverse more structures in a given time, yielding better-quality answers.\n\t\n\tThe proposed method improves the speed efficiency of many structure learning algorithms by optimizing one of the most basic operations in score-based structure learning: the score evaluation and comparison.\n}\n\n\\section{Structure Learning}\n\\subsection{Introduction}\n\\paragraph{The task} The aim of structure learning is to identify the Bayesian network structure (i.e. the graph) using data. \n\n\\paragraph{Applications}\n\nThe first application is knowledge discovery. We can use structure learning to find out (in)dependencies between variables in the data, or test our prior assumptions about them, obtaining a better understanding of the data and the domain of knowledge. \n\nThe second application is off-the-shelf machine learning. In order to apply Bayesian network methods to a problem, firstly one needs to build a Bayesian network, which may require significant effort and domain expertise. On the other hand, many machine learning methods, for example random forests, can be applied immediately, yielding adequate results which can be improved later by fine-tuning parameters. Advanced structure learning methods make it possible to apply such a workflow to the Bayesian network methods.\n\n\\paragraph{Approaches} There are two main approaches to the structure learning task. The first, constraint-based approach, is based on doing statistical tests to obtain conditional independence statements, and building a Bayesian network that satisfies them. The point of the second approach, score-based structure learning, is to introduce a metric of compatibility of a Bayesian network with the data, and then optimize it over the space of all Bayesian network structures. In this work, we focus on the score-based approach.\n\n\\subsection{Score-based approach}\n\\paragraph{Description} The idea of the score-based approach is to associate each Bayesian network graph with a score, and then maximize it. 
Since the number of such graphs is very large, greedy optimization methods are often used.\n\\subsubsection{Decomposable scores}\nA score is decomposable if it can be represented as a sum of scores of graph nodes: \n\\begin{equation}\n\\label{eq:decomposable_score}\n\\score{\\graph{G}} = \\sum_i{\\score{X_i}}\n\\end{equation} \nA decomposable score allows for fast recalculation of the graph score after local operations on the graph. \n\\subsubsection{Local operations}\n\n\\theoremstyle{definition}\n\\newtheorem*{local.operation}{Def}\n\\begin{local.operation}\n\tWe call a graph operation \\textbf{local} if it affects edges pointing to a constant number of nodes.\n\\end{local.operation}\n\n\\theoremstyle{definition}\n\\newtheorem*{operation.score}{Def}\n\\begin{operation.score}\n\tThe \\textbf{score} of a graph operation is defined as the difference between the graph scores after and before applying the operation.\n\t$$ \\score{\\textrm{op}} = \\score{\\graph{G}_{\\textrm{after}}} - \\score{\\graph{G}_{\\textrm{before}}} $$\n\\end{operation.score}\n\nUsually the set of operations includes \\textbf{edge addition}, \\textbf{edge deletion} and \\textbf{edge reversal}. \n\nThe advantage of using local operations in the structure search is that their (decomposable) scores can be computed efficiently, since they affect only a constant number of terms in the sum (\\ref{eq:decomposable_score}). Edge addition and edge deletion affect 1 term and edge reversal affects 2 terms.\n\n\\subsubsection{Speed concerns}\nThe most expensive operation in the local search is the computation of $\\score{X_i}$, especially when the amount of data is large. Hence, we want to minimize the number of such computations.\n\n\\subsection{Graph scores and mutual information}\n\\paragraph{Mutual Information} Mutual information is defined as:\n\\begin{equation}\nI(X; Y) = \\sum_{x \\in X}{\\sum_{y \\in Y}{p(x, y)\\,\\log \\frac{p(x,y)}{p(x)\\,p(y)}}}\n\\end{equation}\nIt can also be expressed as $KL$-divergence between distributions $p(x, y)$ and $p(x)p(y)$:\n\\begin{equation}\nI(X; Y) = D_{KL}(p(x, y)\\,||\\,p(x)p(y))\n\\end{equation}\nNote that when $X$ is independent of $Y$, we have $p(x, y) = p(x)p(y)$ and therefore $I(X; Y) = 0$. \n\n\\paragraph{Application} Computing graph scores for the structure learning problem often requires mutual information estimation; the BIC score is an example. The reason behind this is that in the discrete case, the likelihood of a Bayesian network structure can be computed using mutual information:\n\\begin{equation}\n\\log p(\\graph{G}|X) = N \\sum_{i=1}^M{I(X_i; Pa(X_i))} - N \\sum_{i=1}^M{H(X_i)}  \n\\end{equation}\nwhere $N$ is the number of data points, $M$ is the number of variables.\n\\paragraph{Entropy} \nMutual information, on the other hand, can be computed via entropy:\n\\begin{equation}\n\\label{eq:mi_via_entropy}\nI(X; Y) = H(X) + H(Y) - H(X, Y)\n\\end{equation}\nwhere $H(X)$ is the Shannon entropy:\n\\begin{equation}\nH(X) = -\\sum_{x \\in X}{p(x)\\,\\log p(x)}\n\\end{equation}\nWe use Equation \\ref{eq:mi_via_entropy} for computing mutual information because it allows us to cache $H(X)$ instead of $I(X; Y)$; since it can be used for computing several $I(X; Y)$, this scheme is more efficient.\n\n\\subsection{Greedy Local Search}\n\\subsubsection{Algorithm description}\nAt each step of the algorithm, we choose the local operation with maximal score and apply it. The algorithm terminates when there are no operations that increase the score. 
That means that it has reached a local optimum (relative to the given score and local operations set).\n\n\\subsubsection{Improvements}\n\\paragraph{Storing operation scores in a heap} In order to find an operation with maximal score efficiently, we can use data structures such as a binary heap. That way, we can retrieve an operation with maximum score in $\\bigO{1}$ time.\n\nHowever, when we apply the operation, several problems arise. Firstly, some operations in the heap start violating acyclicity constraints. This problem can be solved by checking for acyclicity when the operation is retrieved from the heap. \n\nSecondly, the score of some operations changes. The number of such operations is $\\bigO{K}$. Hence removing and re-inserting them into the heap would require $\\bigO{K \\log{N_{op}}}$ time. \n\n\\paragraph{Tabu search} In order to avoid local maxima and plateaus, a list of recently applied operations can be stored. Operations from that list are not considered during a step of the greedy search.\n\n\\paragraph{Dataset perturbation} Another method for avoiding plateaus is to resample the dataset after a certain number of iterations.\n\n\\begin{algorithm}[t]\n\t\\caption{Greedy Local Search algorithm}\\label{euclid}\n\t\\begin{algorithmic}[1]\n\t\t\\Procedure{GreedyLocalSearch}{$\\graph{G}_0, \\score{\\cdot}, \\ops{\\cdot}$}\n\t\t\\State $\\graph{G} \\gets \\graph{G}_0$\n\t\t\\While{$\\exists o \\in \\ops{\\graph{G}}: \\score{o} > 0$}\n\t\t\\State $\\displaystyle o \\gets \\argmax_{o \\in \\ops{\\graph{G}}} {\\score{o}}$\n\t\t\\State $\\graph{G} \\gets o(\\graph{G})$\n\t\t\\EndWhile\n\t\t\\State \\textbf{return} $\\graph{G}$\n\t\t\\EndProcedure\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Fast score comparison}\n\\subsection{CPD posterior}\n\\paragraph{CPD estimation} Suppose we have a discrete random variable which takes on values $v_1, v_2, \\ldots, v_m$ with probabilities $p_1, p_2, \\ldots, p_m$ (these are the true probabilities). Now we have a dataset $\\mathcal{D} = (x_1, x_2, \\ldots, x_N)$, and we want to estimate the true probabilities $p_i$.\n\nBy the Bayes theorem,\n\\begin{equation}\np(p_1, \\ldots, p_m | \\mathcal{D}) = \\frac{p(p_1, \\ldots, p_m)}{p(\\mathcal{D})} p(\\mathcal{D} | p_1, \\ldots, p_m) \n\\end{equation}\n\n$p(\\mathcal{D})$ is a constant w.r.t. $p_1, \\ldots, p_m$.\n\\begin{multline}\np(\\mathcal{D}|p_1, \\ldots, p_m) = \\prod_{i=1}^{N}{p(x_i|p_1, \\ldots, p_m)} = \\prod_{i=1}^{N}{p_{x_i}} = \\prod_{i=1}^{N}{p_1^{[x_i = 1]} p_2^{[x_i = 2]} \\ldots p_m^{[x_i = m]}} = \\\\ p_1^{\\sum_{i=1}^N{[x_i = 1]}} p_2^{\\sum_{i=1}^N{[x_i = 2]}} \\ldots p_m^{\\sum_{i=1}^N{[x_i = m]}} = p_1^{n_1} p_2^{n_2} \\ldots p_m^{n_m} = \\prod_{i = 1}^m{p_i^{n_i}}\n\\end{multline}\n\nWe assume that our prior on the probabilities is a Dirichlet distribution, i.e. 
it has the form \n\\begin{equation*}\np(p_1, \\ldots, p_m) \\sim \\textrm{Dirichlet}(\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)\n\\end{equation*}\n\n\\begin{equation}\np(p_1, \\ldots, p_m) = \\frac{1}{B(\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)} \\prod_{i=1}^m{p_i^{\\alpha_i - 1}}\n\\end{equation}\n\nThe uniform prior corresponds to the assignment $\\alpha_i = 1$.\n\nCombining these equations, we have that the posterior over $p_1, p_2, \\ldots, p_m$ is also a Dirichlet distribution\n\\begin{multline}\np(p_1, p_2, \\ldots, p_m|\\mathcal{D}) = \\frac{1}{Z_1}\\,p(p_1, \\ldots, p_m)\\,p(\\mathcal{D}|p_1, \\ldots, p_m) = \\\\ \\frac{1}{Z_2} \\, \\prod_{i=1}^m{p_i^{\\alpha_i - 1}} \\, \\prod_{i=1}^m{p_i^{n_i}} = \\frac{1}{Z_2}{ \\prod_{i=1}^m{p_i^{n_i + \\alpha_i - 1}}}\n\\end{multline}\n\n\\begin{equation*}\np(p_1, p_2, \\ldots, p_m|\\mathcal{D}) \\sim \\textrm{Dirichlet}(\\alpha_1 + n_1, \\alpha_2 + n_2, \\ldots, \\alpha_m + n_m)\n\\end{equation*}\n\n\\subsection{Dirichlet distribution properties}\n\n\\paragraph{Probability density function}\n\\begin{equation}\np(p_1, \\ldots, p_m) = \\frac{1}{B(\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)} \\prod_{i=1}^m{p_i^{\\alpha_i - 1}}\n\\end{equation}\n\n\\paragraph{Support}\n\\begin{equation}\np_1 + p_2 + \\ldots + p_m = 1\n\\end{equation}\n\n\\paragraph{Marginals}\n\\begin{equation*}\n\\alpha_0 = \\sum_{i=1}^m{\\alpha_i}\n\\end{equation*}\n\n\\begin{equation*}\np(p_i) \\sim \\textrm{Beta}(\\alpha_i, \\alpha_0 - \\alpha_i)\n\\end{equation*}\n\n\\begin{equation}\np(p_i) = \\frac{1}{B(\\alpha_i, \\alpha_0 - \\alpha_i)} \\, p_i^{\\alpha_i - 1} (1 - p_i)^{\\alpha_0 - \\alpha_i - 1}\n\\end{equation}\n\n\\paragraph{Mean}\n\\begin{equation}\n\\expect{p_i} = \\frac{\\alpha_i}{\\alpha_0} = \\frac{\\alpha_i}{\\sum\\limits_{j=1}^m{\\alpha_j}}\n\\end{equation}\n\n\\paragraph{Variance}\n\\begin{equation}\n\\mathbb{D}{p_i} = \\frac{\\alpha_i(\\alpha_0 - \\alpha_i)}{\\alpha_0^2(\\alpha_0 + 1)}\n\\end{equation}\n\n\\paragraph{Covariance}\n\\begin{equation}\n\\textrm{cov}(p_i, p_j) = \\frac{-\\alpha_i\\alpha_j}{\\alpha_0^2(\\alpha_0 + 1)}\n\\end{equation}\n\n\\subsection{Theorems}\n\n\\newtheorem{theorem}{Theorem}\n\\begin{theorem}\n\tIf $p_1, p_2, \\ldots, p_m \\sim \\textrm{Dirichlet}(\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)$, $x = p_i$, $\\alpha_x = \\alpha_i$, $y = p_j$, $\\alpha_y = \\alpha_j$, $k_x, k_y$ are arbitrary positive constants and $m_x, m_y$ are nonnegative integers, then \n\t\\begin{multline}\n\t\\mathbb{E}(x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y) = \\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\,  \\sum_{i=0}^{m_y}{ C^i_{m_y} \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x + i}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{i}} \\, B(\\alpha_x + k_x, \\alpha_y + \\alpha_z + k_y)}\n\t\\end{multline}\n\t\n\twhere $\\alpha_z = \\alpha_0 - \\alpha_x - \\alpha_y$.\n\t\\begin{proof}\n\t\tLet $z$ be the combined probability of all values $v_k$ except $v_i$ and $v_j$. 
Then\n\t\t\n\t\t\\begin{equation*}\n\t\tx, y, z \\sim \\textrm{Dirichlet}(\\alpha_x, \\alpha_y, \\alpha_z)\n\t\t\\end{equation*}\n\t\t\n\t\t\\begin{equation}\n\t\tp(x, y, z) = \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} x^{\\alpha_x - 1} y^{\\alpha_y - 1} z^{\\alpha_z - 1}\n\t\t\\end{equation}\n\t\t\n\t\t\n\t\t$x + y + z = 1$, so $z$ is a deterministic function of $x$ and $y$: \n\t\t\n\t\t\\begin{equation}\n\t\tp(x, y) = \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} x^{\\alpha_x - 1} y^{\\alpha_y - 1} (1 - x - y)^{\\alpha_z - 1}\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{multline*}\n\t\t\\mathbb{E}(x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y) = \\int\\limits_{0 < x + y < 1} {p(x, y) \\, x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y \\, dx \\, dy} = \n\t\t\\\\ \\int\\limits_{0 < x + y < 1} {\\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} x^{\\alpha_x - 1} y^{\\alpha_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y \\, dx \\, dy} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\int\\limits_{0 < x + y < 1} { x^{\\alpha_x + k_x - 1} y^{\\alpha_y + k_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, \\log^{m_x} x \\log^{m_y} y \\, dx \\, dy} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, dx \\, \\int\\limits_0^{1-x}{ y^{\\alpha_y + k_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, \\log^{m_y} y \\, dy}} = \n\t\t\\\\  \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, dx \\, \\int\\limits_0^{1-x}{ \\frac{\\partial^{m_y}}{(\\partial \\alpha_y)^{m_y}} (y^{\\alpha_y + k_y - 1} (1 - x - y)^{\\alpha_z - 1}) \\, dy}} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, dx \\, \\frac{\\partial^{m_y}}{(\\partial \\alpha_y)^{m_y}} \\int\\limits_0^{1-x}{  (y^{\\alpha_y + k_y - 1} (1 - x - y)^{\\alpha_z - 1}) \\, dy}} \n\t\t\\end{multline*}\n\t\t\n\t\tNow let's compute $I(x) = \\int\\limits_0^{1-x}{  (y^{\\alpha_y + k_y - 1} (1 - x - y)^{\\alpha_z - 1}) \\, dy} $. 
Let $y = (1 - x)t$, then\n\t\t\n\t\t\\begin{multline*}\n\t\tI(x) = \\int\\limits_0^1{((1 - x)t)^{\\alpha_y + k_y - 1} (1 - x - (1 - x)t)^{\\alpha_z - 1} \\, d((1 - x)t)} = \\\\ \\int\\limits_0^1{(1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, t^{\\alpha_y + k_y - 1} (1 - t)^{\\alpha_z - 1} \\, dt} = \\\\ (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\int\\limits_0^1{t^{\\alpha_y + k_y - 1} (1 - t)^{\\alpha_z - 1} \\, dt} = (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} B(\\alpha_y + k_y, \\alpha_z)\n\t\t\\end{multline*}\n\t\t\n\t\t\\begin{multline*}\n\t\t\\mathbb{E}(x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y) = \\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, \\frac{\\partial^{m_y} (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y}} \\, dx } = \n\t\t\\\\ \\sum_{i=0}^{m_y}{ C^i_{m_y} \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, \\log^i{(1 - x)} dx }} = \n\t\t\\\\ \\sum_{i=0}^{m_y}{ \\frac{C^i_{m_y}}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\int\\limits_0^1 { x^{\\alpha_x + k_x - 1} \\log^{m_x}{x} \\, (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, \\log^i{(1 - x)} dx }} = \n\t\t\\\\ \\sum_{i=0}^{m_y}{ \\frac{C^i_{m_y}}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\int\\limits_0^1 { \\frac{\\partial^{m_x}}{(\\partial \\alpha_x)^{m_x}} (x^{\\alpha_x + k_x - 1} \\, (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, \\log^i{(1 - x)}) \\, dx }} = \n\t\t\\\\ \\sum_{i=0}^{m_y}{ \\frac{C^i_{m_y}}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x}}{(\\partial \\alpha_x)^{m_x}} \\, \\int\\limits_0^1 {x^{\\alpha_x + k_x - 1} \\, (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, \\log^i{(1 - x)} \\, dx }} = \n\t\t\\\\ \\sum_{i=0}^{m_y}{ \\frac{C^i_{m_y}}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x}}{(\\partial \\alpha_x)^{m_x}} \\,\n\t\t\t\\frac{\\partial^{i}}{(\\partial \\alpha_y)^{i}} \\, \\int\\limits_0^1 {x^{\\alpha_x + k_x - 1} \\, (1 - x)^{\\alpha_y + \\alpha_z + k_y - 1} \\, dx }} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\,  \\sum_{i=0}^{m_y}{ C^i_{m_y} \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x + i}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{i}} \\, B(\\alpha_x + k_x, \\alpha_y + \\alpha_z + k_y)}\n\t\t\\end{multline*}\n\t\\end{proof}\n\\end{theorem}\n\n\\begin{theorem}\n\tIf $x \\sim \\textrm{Beta}(\\alpha_x, \\alpha_y)$, $y = 1 - x$, $k_x, k_y$ are arbitrary positive constants and $m_x, m_y$ are nonnegative integers, then \n\t\\begin{equation}\n\t\\mathbb{E}(x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y) = \\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\frac{\\partial^{m_x + m_y} \\, B(\\alpha_x + k_x, \\alpha_y + k_y)}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{m_y}}\n\t\\end{equation}\n\t\\begin{proof}\n\t\t\n\t\t\\begin{multline*}\n\t\t\\mathbb{E}(x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y) = \n\t\t\\int\\limits_0^1{p(x)\\, x^{k_x} y^{k_y} \\log^{m_x} x \\log^{m_y} y \\, dx} =\n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\int\\limits_0^1{x^{\\alpha_x + k_x - 1} (1 - x)^{\\alpha_y + k_y - 1} \\log^{m_x} x \\log^{m_y} (1 - x) \\, dx} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\int\\limits_0^1{ \\frac{\\partial^{m_x}}{(\\partial \\alpha_x)^{m_x}} \\, (x^{\\alpha_x + k_x - 1} (1 - x)^{\\alpha_y + k_y - 1} \\log^{m_y} (1 - x)) \\, dx} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\int\\limits_0^1{ \\frac{\\partial^{m_x + m_y}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{m_y}} \\, (x^{\\alpha_x + k_x - 1} (1 - x)^{\\alpha_y + k_y - 1}) \\, dx} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\frac{\\partial^{m_x + m_y}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{m_y}} \\, \\int\\limits_0^1{  (x^{\\alpha_x + k_x - 1} (1 - x)^{\\alpha_y + k_y - 1}) \\, dx} = \n\t\t\\frac{1}{B(\\alpha_x, \\alpha_y)} \\, \\frac{\\partial^{m_x + m_y} B(\\alpha_x + k_x, \\alpha_y + k_y)}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{m_y}} \n\t\t\\end{multline*}\n\t\\end{proof}\n\\end{theorem}\n\n\\begin{theorem}\n\tIf $p_1, p_2, \\ldots, p_m \\sim \\textrm{Dirichlet}(\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)$, $x = p_i$, $\\alpha_x = \\alpha_i$, $y = p_j$, $\\alpha_y = \\alpha_j$, then\n\t\n\t\\begin{multline*}\n\t\\mathbb{E}(x^{k_x}\\, y^{k_y} \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, \\log(x + y)) = \n\t\\\\ -\\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\sum_{n=1}^{\\infty}{\\frac{1}{n} \\sum_{i=0}^{m_y}{ C^i_{m_y} \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z + n)}{(\\partial \\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x + i}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{i}} \\, B(\\alpha_x + k_x, \\alpha_y + \\alpha_z + n + k_y)}} \n\t\\end{multline*}\t\n\t\n\t\\begin{proof}\n\t\t\n\t\t\\begin{multline}\n\t\t\\mathbb{E}(x^{k_x}\\, y^{k_y} \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, \\log(x + y)) = \n\t\t\\\\ \\int\\limits_{0 < x + y < 1}{\\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, x^{k_x + \\alpha_x - 1} y^{k_y + \\alpha_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, \\log(x + y) \\, \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, dx \\, dy} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\int\\limits_{0 < x + y < 1}{x^{k_x + \\alpha_x - 1} y^{k_y + \\alpha_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, \\log(1 - (1 - x - y)) \\, \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, dx \\, dy} = \n\t\t\\\\ \\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\int\\limits_{0 < x + y < 1}{x^{k_x + \\alpha_x - 1} y^{k_y + \\alpha_y - 1} (1 - x - y)^{\\alpha_z - 1} \\, \\left(-\\sum_{n=1}^{\\infty}{\\frac{(1 - x - y)^n}{n}}\\right) \\, \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, dx \\, dy} = \n\t\t\\\\-\\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\sum_{n=1}^{\\infty}{\\frac{1}{n} \\int\\limits_{0 < x + y < 1}{x^{k_x + \\alpha_x - 1} y^{k_y + \\alpha_y - 1} (1 - x - y)^{n + \\alpha_z - 1} \\, \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, dx \\, dy}}\n\t\t\\end{multline}\n\t\t\n\t\tNote that the integral has the same form as in the proof of Theorem 1, so we have\n\t\t\n\t\t\\begin{multline*}\n\t\t\\mathbb{E}(x^{k_x}\\, y^{k_y} \\log^{m_x}{x} \\, \\log^{m_y}{y} \\, \\log(x + y)) = \n\t\t\\\\ -\\frac{1}{B(\\alpha_x, \\alpha_y, \\alpha_z)} \\, \\sum_{n=1}^{\\infty}{\\frac{1}{n} \\sum_{i=0}^{m_y}{ C^i_{m_y} \\frac{\\partial^{m_y-i} B(\\alpha_y + k_y, \\alpha_z + n)}{(\\partial 
\\alpha_y)^{m_y-i}} \\, \\frac{\\partial^{m_x + i}}{(\\partial \\alpha_x)^{m_x} (\\partial \\alpha_y)^{i}} \\, B(\\alpha_x + k_x, \\alpha_y + \\alpha_z + n + k_y)}} \n\t\t\\end{multline*}\t\n\t\t\n\t\\end{proof}\n\\end{theorem}\n\n\\subsection{Entropy variance}\n\\paragraph{Entropy} Recall that the entropy has the form\n\\begin{equation*}\nH(p_1, p_2, \\ldots, p_m) = -\\sum_{i = 1}^m{p_i \\log p_i}\n\\end{equation*}\n\nWe want to compute the variance of our estimate of the entropy $\\mathbb{D}H(p_1, p_2, \\ldots, p_m)$, where $p_1, p_2, \\ldots, p_m \\sim \\textrm{Dirichlet}(\\alpha_1 + n_1, \\ldots, \\alpha_m + n_m)$, as discussed above. Since $\\mathbb{D}(-Z) = \\mathbb{D}Z$, we can equivalently work with $\\sum_{i=1}^m{p_i \\log p_i}$ below.\n\n\\begin{equation}\n\\mathbb{D}(\\sum_{i = 1}^n{X_i}) = \\sum_{i=1}^n{\\mathbb{D}X_i} + 2 \\sum_{i = 1}^n{\\sum_{j = 1, \\\\ j < i}^n{{\\textrm{cov}(X_i, X_j)}}}\n\\end{equation}\n\nApplying this formula to our task, we have that\n\\begin{equation}\n\\mathbb{D}(\\sum_{i = 1}^n{p_i \\log{p_i}}) = \\sum_{i=1}^n{\\mathbb{D}(p_i \\log{p_i})} + 2 \\sum_{i = 1}^n{\\sum_{j = 1, \\\\ j < i}^n{{\\textrm{cov}(p_i \\log{p_i}, p_j \\log{p_j})}}}\n\\end{equation}\n\nA well-known fact from probability theory states that\n\\begin{equation}\n\\mathbb{D}X = \\mathbb{E}(X - \\mathbb{E}X)^2 = \\mathbb{E}(X^2 - 2X \\mathbb{E}X + (\\mathbb{E}X)^2) = \\mathbb{E}(X^2) - (\\mathbb{E}X)^2\n\\end{equation}\n\n\\begin{multline}\n\\textrm{cov}(X, Y) = \\mathbb{E}(X - \\mathbb{E}X)(Y - \\mathbb{E}Y) = \\mathbb{E}(XY - X \\mathbb{E}Y - Y \\mathbb{E}X + \\mathbb{E}X\\mathbb{E}Y) = \\mathbb{E}XY - \\mathbb{E}X\\mathbb{E}Y\n\\end{multline}\n\nApplying,\n\\begin{equation}\n\\mathbb{D}(p_i \\log{p_i}) = \\mathbb{E}(p_i^2 \\log^2{p_i}) - (\\mathbb{E} \\, p_i \\log{p_i})^2 \n\\end{equation}\n\\begin{equation}\n\\textrm{cov}(p_i \\log{p_i}, p_j \\log{p_j}) = \\mathbb{E}(p_i p_j \\log{p_i} \\log{p_j}) - (\\mathbb{E} \\, p_i \\log{p_i})(\\mathbb{E} \\, p_j \\log{p_j})\n\\end{equation}\n\nNote that all values on the right side can be computed using Theorem 1 or Theorem 2. Hence, we have a closed-form expression for $\\mathbb{D}H(p_1, p_2, \\ldots, p_m)$.\n\n\\subsection{Mutual information variance}\nMutual information can be expressed as  \n\\begin{equation}\nMI(X, Y) = H(X) + H(Y) - H(X, Y)\n\\end{equation}\nwhere $X$ and $Y$ are some sets of random variables. \n\nIn local structure search, we need to estimate\n\\begin{equation}\nMI(X, Pa(X)) = H(X) + H(Pa(X)) - H(X, Pa(X))\n\\end{equation}\n\nWhile doing structure search, we can easily precompute $H(X)$ on the entire dataset; estimating that term on a subset of the data doesn't make much sense. So we consider $H(X)$ to be a fixed constant as an approximation, which is reasonable because its variance is very small (it is an entropy over only one variable that is computed using the entire dataset). \n\n\\begin{equation*}\n\\mathbb{D}(MI(X, Pa(X))) \\approx \\mathbb{D}(H(Pa(X)) - H(X, Pa(X)))\n\\end{equation*}\n\nIn terms of $p_{11}, p_{12}, \\ldots, p_{nm}$ -- probabilities of instantiations of $(X, Pa(X))$:\n\\begin{equation*}\nH(Pa(X)) - H(X, Pa(X)) = -\\sum_j{(\\sum_i{p_{ij}}) \\log(\\sum_i{p_{ij}})} + \\\\ \\sum_{i,j}{p_{ij} \\log p_{ij}}\n\\end{equation*}\n\n\\begin{equation*}\n\\mathbb{D}(H(Pa(X)) - H(X, Pa(X))) = \\mathbb{D}H(Pa(X)) + \\mathbb{D}H(X, Pa(X)) - 2\\,\\mathrm{cov}(H(Pa(X)), H(X, Pa(X)))\n\\end{equation*}\n\nVariances $\\mathbb{D}H(Pa(X))$ and $\\mathbb{D}H(X, Pa(X))$ can be computed as in the previous section. 
Now consider $\\mathrm{cov}(H(Pa(X)), H(X, Pa(X)))$:\n\n\\begin{multline}\n\\mathrm{cov}(H(Pa(X)), H(X, Pa(X))) = \\mathrm{cov}(\\sum_j{(\\sum_i{p_{ij}}) \\log(\\sum_i{p_{ij}})}, \\sum_{i,j}{p_{ij} \\log p_{ij}}) = \n\\\\ \\sum_a \\sum_{b,c} \\mathrm{cov}((\\sum_i{p_{ia}}) \\log(\\sum_i{p_{ia}}), p_{bc} \\log p_{bc})\n\\end{multline}\n\n\\begin{multline}\n\\mathrm{cov}((\\sum_i{p_{ia}}) \\log(\\sum_i{p_{ia}}), p_{bc} \\log p_{bc}) = \\mathbb{E}((\\sum_i{p_{ia}}) \\log(\\sum_i{p_{ia}}) \\, p_{bc} \\log p_{bc}) - \\mathbb{E}((\\sum_i{p_{ia}}) \\log(\\sum_i{p_{ia}})) \\mathbb{E}(p_{bc} \\log p_{bc})\n\\end{multline}\n\nThe first term can be computed by applying Theorem 3 in the case $a = c$:\n\\begin{multline}\n\\mathbb{E}((\\sum_i{p_{ia}}) \\log(\\sum_i{p_{ia}}) \\, p_{ba} \\log p_{ba}) = \\mathbb{E}((p_{ba} + \\sum_{i \\neq b}{p_{ia}}) \\log(p_{ba} + \\sum_{i \\neq b}{p_{ia}}) \\, p_{ba} \\log p_{ba}) = \\\\\n\\mathbb{E}(p_{ba}^2 \\log(p_{ba} + \\sum_{i \\neq b}{p_{ia}}) \\log p_{ba}) + \\mathbb{E}((\\sum_{i \\neq b}{p_{ia}}) \\log(p_{ba} + \\sum_{i \\neq b}{p_{ia}}) \\, p_{ba} \\log p_{ba}), \\,\\,\\, \\mathrm{where} \\,\\,\\, x = p_{ba}, \\, y = \\sum_{i \\neq b}{p_{ia}}\n\\end{multline}\n\nOr by applying Theorem 1 or Theorem 2 otherwise: $x = p_{bc}, y = \\sum_i{p_{ia}}$.\n\nThe second term can also be computed using Theorem 1 or Theorem 2.\n\n\\subsection{Score difference}\n\\paragraph{BIC score} A popular score function choice is the Bayesian Information Criterion (BIC) \\cite{BICScore}:\n\\begin{equation}\n\\textrm{score}_{BIC}(\\graph{G}) = \\mathit{l}(\\graph{G} | \\mathcal{D}) - \\frac{\\log{N}}{2} \\textrm{Dim}[\\graph{G}] = N \\sum_{i=1}^M MI(X_i, \\textrm{Pa}(X_i)) - N \\sum_{i=1}^M H(X_i) - \\frac{\\log{N}}{2} \\textrm{Dim}[\\graph{G}]\n\\end{equation}\n\nThe second term doesn't depend on the network structure, and the last doesn't depend on the data. Hence, we need to consider only the first term (which was done in the previous section).\n\n\\paragraph{Edge addition and deletion} Consider addition of an edge from $X_j$ to $X_i$. 
\n\\begin{multline*}\n\\mathbb{D}(\\,\\score{X_i; \\textrm{Pa}(X_i) \\cup \\{X_j\\}} - \\score{X_i; \\textrm{Pa}(X_i)}) = \n\\\\ \\mathbb{D}(H(X_i) + H(\\textrm{Pa}(X_i), X_j) - H(X_i, \\textrm{Pa}(X_i), X_j) - H(X_i) - H(\\textrm{Pa}(X_i)) + H(X_i, \\textrm{Pa}(X_i))) = \n\\\\ \\mathbb{D}(H(\\textrm{Pa}(X_i), X_j) - H(X_i, \\textrm{Pa}(X_i), X_j) - H(\\textrm{Pa}(X_i)) + H(X_i, \\textrm{Pa}(X_i)))\n\\end{multline*}\n\nIn order to compute this variance, we need to know:\n\\begin{enumerate}\n\t\\item Variances of individual terms (computed in 2.4)\n\t\\item These covariances can be computed as in 2.5 (one set of variables in the entropy is a subset of another): \n\t\\\\ $cov(H(\\textrm{Pa}(X_i), X_j), H(X_i, \\textrm{Pa}(X_i), X_j))$, \n\t\\\\ $cov(H(\\textrm{Pa}(X_i), X_j),  H(\\textrm{Pa}(X_i)))$, \n\t\\\\ $cov(H(X_i, \\textrm{Pa}(X_i), X_j), H(\\textrm{Pa}(X_i)))$, \n\t\\\\ $cov(H(X_i, \\textrm{Pa}(X_i), X_j), H(X_i, \\textrm{Pa}(X_i)))$ \n\t\\\\ $cov(H(\\textrm{Pa}(X_i)), H(X_i, \\textrm{Pa}(X_i)))$\n\t\n\t\\item $cov(H(\\textrm{Pa}(X_i), X_j), H(\\textrm{Pa}(X_i), X_i))$, which can't be computed that way.\n\\end{enumerate}\n\nIn order to get an upper bound on $cov(H(\\textrm{Pa} (X_i), X_j), H(\\textrm{Pa}(X_i), X_i))$, we can use the following statement (via the Cauchy\u2013Schwarz inequality):\n\\begin{equation}\n|cov(X, Y)| \\le \\sqrt{\\mathbb{D}X \\, \\mathbb{D}Y}\n\\end{equation}\n\n\\paragraph{Edge reversal} An upper bound on the edge reversal score variance can be derived in the same way. Also note that edge reversal can be represented as a combination of edge deletion and edge addition.\n\n\\section{Improved Greedy Search}\n\\subsection{Overview}\nWe focus on the Greedy Local Search algorithm for the following reasons:\n\\begin{itemize}\n\t\\item It is conceptually simple, but effective. In a large comparative study \\cite{MinMaxHillClimbing}, its performance (with certain improvements discussed in Section 1) was shown to be on par with other state-of-the-art algorithms.\n\t\\item It is sometimes used as a sub-procedure in more complex structure learning algorithms, such as Min-Max Hill Climbing.\n\t\\item It is very likely that it benefits the most from the proposed improvement. While other algorithms can, for example, employ constraint-based structure learning techniques, Greedy Local Search fully relies on score comparison queries.\n\\end{itemize}\n\nEstimating the variance of the operation score difference allows us to estimate \\\\ $P(\\score{\\textrm{op}_a} < \\score{\\textrm{op}_b} | \\mathcal{D})$ using the Chebyshev inequality.\n\n\\subsection{Finding maximum score}\n\\paragraph{The task} On each step of the Greedy Local Search algorithm, we need to find an operation with the largest score. It can easily be done by linear search in one pass. Now we want to improve the speed of such a procedure, while \n\\begin{itemize}\n\t\\item getting the correct answer with probability $ \\ge 1 - p_0 $.\n\t\\item retaining $\\bigO{N_{op}}$ complexity of the algorithm.\n\\end{itemize}\n\\paragraph{The algorithm} The algorithm, which we call Stochastic Linear Search, proceeds as follows. As in the ordinary linear search, we consider operations one by one, while maintaining the current maximum. The only difference is that instead of direct comparison of scores computed on the entire dataset, the $\\textrm{MaxUntilSuccess}$ sub-procedure is used. 
It either returns the comparison result, or fails if there is not enough data to reliably compare scores.\n\n\\begin{algorithm}[t]\n\t\\caption{Stochastic Linear Search}\\label{euclid}\n\t\\begin{algorithmic}[1]\n\t\t\\Procedure{MaxUntilSuccess}{$\\textrm{op}_1$, $\\textrm{op}_2$, $p_0$, $N_{init}$, $N_{batch}$, $\\mathcal{D}$}\n\t\t\\State $N_{cur} \\gets N_{init}$\n\t\t\\While{$N_{cur} < N$}\n\t\t\\If{$P(\\score{\\textrm{op}_1} > \\score{\\textrm{op}_2} \\, | \\, \\mathcal{D}[:N_{cur}]) > 1 - p_0$}\n\t\t\t\\Return $(\\score{\\textrm{op}_1} > \\score{\\textrm{op}_2})$\n\t\t\\Else \\If{$P(\\score{\\textrm{op}_1} < \\score{\\textrm{op}_2} \\, | \\, \\mathcal{D}[:N_{cur}]) > 1 - p_0$}\n\t\t\\Return $(\\score{\\textrm{op}_1} < \\score{\\textrm{op}_2})$\n\t\t\\EndIf\n\t\t\\EndIf\n\t\t\\State $N_{cur} \\gets N_{cur} + N_{batch}$\n\t\t\\EndWhile \n\t\t\n\t\t\\Return error\n\t\t\\EndProcedure\n\t\t\\\\\n\t\t\n\t\t\\Procedure{StochasticLinearSearch}{$\\textrm{ops}$, $p_0$, $N_{init}$, $N_{batch}$, $\\mathcal{D}$}\n\t\t\\State $m \\gets \\textrm{None}$\n\t\t\\For{$x \\in \\textrm{ops}$}\n\t\t\\State $m \\gets MaxUntilSuccess(m, x, \\frac{p_0}{n}, N_{init}, N_{batch}, \\mathcal{D}) $\n\t\t\\If{$m = \\textrm{error}$} \n\t\t\t\\Return error\n\t\t\\EndIf\n\t\t\n\t\t\\EndFor\n\t\t\n\t\t\\Return $m$\n\t\t\\EndProcedure\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Modified greedy search algorithm}\nThe only change to the Greedy Local Search algorithm is that instead of simply finding an operation with maximal score, the Stochastic Linear Search is used. \n\n\\paragraph{Error probability} As a consequence, on each step of the greedy search, there is a probability of at most $p_0$ of making a suboptimal operation choice. We could try to get an upper bound on the probability of the error of the entire algorithm run; however, it would likely be overestimated. The Chebyshev inequality upper bound of $P(\\score{\\textrm{op}_a} < \\score{\\textrm{op}_b} | \\mathcal{D})$ is also an overestimation: experiments show that the distributions over operation scores are rather close to normal. On the other hand, we usually don't need precise local greedy search run results per se; rather, we need as high a resulting network score as possible, and the fact that it is indeed a local maximum w.r.t. the chosen local operations. \\textbf{Concluding, it makes sense to leave $p_0$ as an algorithm parameter that controls the accuracy-speed tradeoff}.\n\n\\section{Conclusion}\nIn this work, an optimization of Bayesian network score comparison was proposed, which enhances the speed of structure learning algorithms that rely on network score evaluations (i.e. score-based approach methods). This was done by deriving the variance of the plug-in entropy and mutual information estimators (which may be a useful result in itself). The necessary modifications to the Greedy Local Search procedure were described. \n\n\\paragraph{Future work}\n\\begin{itemize}\n\t\\item \\textbf{Empirical comparison of the proposed algorithm to the existing ones}.\n\t\\item \\textbf{Working with heap} One of the common speed optimizations of the Greedy Local Search is storing operations in a heap. 
In order to use it alongside the proposed method, a stochastic version of the heap is required (just as a stochastic version of linear search is used in the current version).\n\t\\item \\textbf{Applying the technique to other score-based algorithms} While for some algorithms, like MMHC, it is as simple as plugging in the modified Greedy Search algorithm, others, like GES, require further research. \n\t\\end{itemize}\n\n\\begin{thebibliography}{9}\n\t\n\t\\bibitem{KollerFriedman}\n\tDaphne Koller, Nir Friedman\n\t\\emph{Probabilistic Graphical Models: Principles and Techniques},\n\tThe MIT Press, Cambridge, Massachusetts,\n\t2009.\n\t\n\t\\bibitem{LocalSearch}\n\tDavid Heckerman, Dan Geiger, David M. Chickering\n\t\\emph{Learning Bayesian Networks: The Combination of Knowledge and Statistical Data}, 1995\n\t\n\t\\bibitem{LocalSearchOrigin}\n\tEdward Herskovits, Gregory Cooper\n\t\\emph{Kutato: An Entropy-Driven System for Construction of Probabilistic Expert Systems from Databases}\n\t\n\t\\bibitem{MDLScore}\n\tW. Lam, F. Bacchus\n\t\\emph{Learning Bayesian belief networks: An approach based on the MDL principle}\n\t\n\t\\bibitem{BICScore}\n\tA. Barron, J. Rissanen, B. Yu\n\t\\emph{The minimum description length principle in coding\n\tand modeling}, 1998\n\t\n\t\\bibitem{StructureLearningIsNPComplete}\n\tDavid M. Chickering. \n\t\\emph{Learning Bayesian networks is NP-complete}\n\t\n\t\\bibitem{AsymptoticStructureLearningIsNPHard}\n\tDavid M. Chickering, Christopher Meek, David Heckerman. \\emph{Large-sample learning of\n\tBayesian networks is NP-hard}\n\t\n\t\\bibitem{MeekConjecture}\n\tDavid M. Chickering\n\t\\emph{Optimal Structure Identification With Greedy Search}, Journal of Machine Learning Research, 2002\n\t\n\t\\bibitem{MinMaxHillClimbing}\n\tIoannis Tsamardinos, Laura E. Brown, Constantin F. Aliferis\n\t\\emph{The max-min hill-climbing Bayesian network\n\tstructure learning algorithm}, 2006\n\t\n\t\\bibitem{GESLargeSampleOptimality}\n\tDavid M. Chickering, Christopher Meek\n\t\\emph{Finding Optimal Bayesian Networks}, UAI, 2002\n\t\n\t\\bibitem{Review}\n\tTimo J. T. Koski, John M. Noble\n\t\\emph{A Review of Bayesian Networks and Structure Learning}\n\t\n\\end{thebibliography}\n\n\\end{document}\n\n", "meta": {"hexsha": "c15f564838ad9598d024df95acb4f65823f3f729", "size": 33612, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/coursework.tex", "max_stars_repo_name": "DLunin/pygraphmodels", "max_stars_repo_head_hexsha": "4ea8ebed74f3a7d5d56af4d5f189a514aab420f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/coursework.tex", "max_issues_repo_name": "DLunin/pygraphmodels", "max_issues_repo_head_hexsha": "4ea8ebed74f3a7d5d56af4d5f189a514aab420f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/coursework.tex", "max_forks_repo_name": "DLunin/pygraphmodels", "max_forks_repo_head_hexsha": "4ea8ebed74f3a7d5d56af4d5f189a514aab420f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.6714801444, "max_line_length": 867, "alphanum_fraction": 0.6682732357, "num_tokens": 12401, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8056321889812553, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5553268032119346}}
{"text": "\\selectlanguage{english}\n\\begin{abstract}\n    \\noindent For the first project of the Computer Simulations (ELTE Physics MSc) course I propose a concept of an N-body simulation, which aims to reproduce the satellite formation inside asteroid belts around a larger stellar object (eg. a gas giant or a star). Due to the N-body simulations' high computational difficulty, my spare objective is to observe at least some form of clustering process inside these type of systems. To achieve these goals, I will use Newton's law of universal gravitation, solving the particles' equation of motion by a simple 4th order Runge-Kutta function. Other numerical methods - described in Sec. III. will be optionally tested.\n\\end{abstract}\n\n\\begin{multicols}{2}\n\\section{Introduction}\nThe problem of gravitational- or electromagnetic attraction of more, than 2 bodies are impossible to solve analitically, except for some finite special cases. To study the motion of the particles in such systems we're ought to rely on numerical simulations and approximations. The computational difficulty of the problem grows non-linearly as we try to simulate more and more particles, and thus these simulations are required to be reinforced by some clever numerical tricks to overcome the barrier of immense computational times. \\par\nIn this project I tried to create a gravitational n-body simulation, which could be used to simulate the motion of relatively small, but still interesting amount of bodies. As I've achieved pretty good values for runtime, it was still too slow, to simulate a system with eg. 1000 bodies on my personal computer for hundreds of years. In this paper I present my current results at the deadline.\n\n\\section{Theoretical background}\nIn the short description I've already covered some of the theoretical background of the gravitational n-body problem. I've already wrote, that the force, acting on the $i$th body could be described by the following sum:\n\n\\begin{equation} \\label{eq:1}\nm_{i} \\boldsymbol{\\ddot{r}}_{i}\n=\n- G \\sum_{i \\neq j} \\frac{m_{i} m_{j}}{\\left| r_{i} - r_{j} \\right|^{2}} \\frac{\\boldsymbol{r}_{i} - \\boldsymbol{r}_{j}}{\\left| r_{i} - r_{j} \\right|}\n\\end{equation}\nUsing this, one can simply describe the acceleration acting on the $i$th body:\n\n\\begin{equation} \\label{eq:2}\n\\boldsymbol{\\ddot{r}}_{i}\n=\n- G \\sum_{i \\neq j} \\frac{m_{j}}{\\left| r_{i} - r_{j} \\right|^{2}} \\frac{\\boldsymbol{r}_{i} - \\boldsymbol{r}_{j}}{\\left| r_{i} - r_{j} \\right|}\n\\end{equation}\nWhich differential equation essentially should be solved numerically for the $x$, $y$ and $z$ components of the $\\boldsymbol{r}$ vector to acquire the coordinates and velocities of the individual particles. \\par\nIn 3D coordinates it is an article-level problem to study the possible 3D trajectories \\citep{3Dtrajectory} and the system could be seriously unstable. After acknowledging these facts, I decided to study the behavior of the setup, where the objects are scattered in a 2D plane around a central object. In my simulations the central object was the Jupiter, and the objects around it symbolized the asteroid belt.\n\n\\section{Preliminary setup of the simulation}\nTo unambiguously define the initial setup of the system, I decided to create an asteroid-generation pipeline, following the same steps in order in the generation of every new objects. \\par\nAt the I. step the size and mass of the central object where chosen, along with number of small bodies as well. \\par\nAt the II. 
the pipeline generated spherical bodies with various sizes but constant density. The masses were then calculated from these two values. Here, only the minimal and maximal radii of the small bodies were given to the pipeline. For the density I chose $\rho_{S} = 2270\ \text{kg}/\text{m}^{3}$ throughout, which is a typical average value for asteroids \citep{krasinsky2002hidden}. \par\nIn step III, the initial coordinates and velocities of the small bodies were generated. According to Kepler's first law, a satellite orbits a central object on an elliptic orbit, where the central object is situated in one of the focal points. When the object is gravitationally bound, its orbit is an ellipse or, in a special case, a circle. The distance of the satellite from the central body can be easily expressed:\n\n\begin{equation} \label{eq:3}\nd\n=\n\frac{r_{p}}{1 + e * \cos{\varphi}}\n\end{equation}\nwhere $r_{p}$ is the semi-latus rectum of the orbit, $e$ is the eccentricity and $\varphi$ is the polar angle. Here, I generated random $r_{p}$ values and $e$ eccentricities from a pre-defined interval. The possible values were $r_{p} = 10 R \pm 1.5 R$, where $R$ is the radius of the central planet, while the maximal eccentricity was chosen to be $e_{max} = 0.4$. Using these two values, the semi-major ($a$) and semi-minor ($b$) axes of the trajectory of every body can be calculated as follows:\n\n\begin{equation} \label{eq:4}\na = \frac{r_{p}}{1 - e^2}\n\end{equation}\n\begin{equation} \label{eq:5}\nb = \frac{r_{p}}{\sqrt{1 - e^2}}\n\end{equation}\nAfter this, I generated a random $\varphi$ azimuth angle and calculated the coordinates using the following forms:\n\n\begin{equation} \label{eq:6}\n\begin{pmatrix}\nx \\\ny\n\end{pmatrix}\n=\n\begin{pmatrix}\na * \cos{\varphi} \\\nb * \sin{\varphi}\n\end{pmatrix}\n\end{equation}\nUsing the vis-viva equation, the length of the velocity vector $\left| \boldsymbol{v} \right|$ can be calculated if the initial setup of every small body is considered as a two-body problem of the asteroid and the central object. Since the masses of the small bodies are very small compared to the central object, it is perfectly fine to use this approximation.\n\n\begin{equation} \label{eq:7}\n\left| \boldsymbol{v} \right|\n=\n\sqrt{GM \left( \frac{2}{d} - \frac{1}{a} \right)}\n\end{equation}\nwhere $G$ is the gravitational constant, $M$ is the mass of the central object, $a$ is the semi-major axis of the currently described small body, and $d$ is the distance of this body from the central object's center of mass. We also know that the velocity vector is always tangent to the satellite's trajectory at every point.
The tangent direction can be calculated by differentiating the ellipse's coordinates with respect to the parameter $\varphi$:\n\n\begin{equation} \label{eq:8}\n\boldsymbol{e}_{t}\n=\n\frac{1}{\left| \frac{d\boldsymbol{r}}{d\varphi} \right|}\n\frac{d\boldsymbol{r}}{d\varphi}\n=\n\frac{1}{\left| \frac{d\boldsymbol{r}}{d\varphi} \right|}\n\frac{d}{d\varphi}\n\begin{pmatrix}\nx \\\ny\n\end{pmatrix}\n=\n\frac{1}{\left| \frac{d\boldsymbol{r}}{d\varphi} \right|}\n\frac{d}{d\varphi}\n\begin{pmatrix}\na * \cos{\varphi} \\\nb * \sin{\varphi}\n\end{pmatrix}\n\end{equation}\n\begin{equation} \label{eq:9}\n\boldsymbol{e}_{t}\n=\n\frac{1}{\left| \frac{d\boldsymbol{r}}{d\varphi} \right|}\n\begin{pmatrix}\n-a * \sin{\varphi} \\\nb * \cos{\varphi}\n\end{pmatrix}\n\end{equation}\nThe actual velocity vector can then be written in the following form, using the result of equation (\ref{eq:7}):\n\n\begin{equation} \label{eq:10}\n\boldsymbol{v}_{t}\n=\n\frac{\left| \boldsymbol{v} \right|}{\left| \frac{d\boldsymbol{r}}{d\varphi} \right|}\n\begin{pmatrix}\n-a * \sin{\varphi} \\\nb * \cos{\varphi}\n\end{pmatrix}\n\end{equation}\nAfter acquiring the velocity vectors and coordinates, the differential equation (\ref{eq:2}) can be solved numerically for every $i$th object.\n\n\section{Technical details}\nSince I used Python (specifically Jupyter Notebook) for my simulation, it was crucial to speed up the process, as Python is not the fastest programming language in terms of runtime. I used two different methods to achieve a speed-up. First, the simulation only calculated the force of the $j$th body acting on the $i$th body if the asteroids indexed by $i$ and $j$ were closer than $2R$ to each other. Second, I used the \texttt{numba} library's \texttt{@jit(nopython=True)} decorator to greatly optimize my code. This decorator compiles the Python functions into machine code before running them, thus greatly enhancing the runtime. Sadly it has strict limitations, so I was forced to write my code accordingly. \par\nThe full source code, the used files and everything related to the project can be found in my GitHub repository for this course\footnote{\url{https://github.com/masterdesky/ELTE\_Comp\_Simulations\_2020}}.\n\n\section{Solving the differential equations}\nTo solve the equations of motion numerically for all bodies, I used a simple (non-adaptive) fourth-order Runge-Kutta method. Writing the system in the form $\dot{y} = f(y)$, this well-known iterative algorithm is the following:\n\n\begin{equation}\nk_{1} = dt * f \left( y_{n} \right)\n\end{equation}\n\begin{equation}\nk_{2} = dt * f \left( y_{n} + 0.5 * k_1 \right)\n\end{equation}\n\begin{equation}\nk_{3} = dt * f \left( y_{n} + 0.5 * k_2 \right)\n\end{equation}\n\begin{equation}\nk_{4} = dt * f \left( y_{n} + k_3 \right)\n\end{equation}\n\begin{equation}\ndy = \frac{k_{1} + 2 * k_{2} + 2 * k_{3} + k_{4}}{6}\n\end{equation}\n\begin{equation}\ny_{n+1} = y_{n} + dy\n\end{equation}\nIn the simulation the state $y$ collected both the $\boldsymbol{r}$ and $\dot{\boldsymbol{r}}$ values of the bodies, so $f$ returned the velocities and accelerations, and positions and velocities were incremented in parallel; a sketch of this update step is shown below.\n\n\section{Results and problems}\nThe results were quite disappointing, and I didn't have enough time to fix a trivial, but crucial problem with my simulation. The code itself works well; at least it doesn't have any obvious logical or physical flaws.
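For reference, here is a minimal NumPy sketch of one update step of the scheme described above; the function names, the 2D array layout and the use of broadcasting are illustrative, and the actual repository code may differ, as it is written under \texttt{numba}'s constraints.\n\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]\n\ndef acceleration(r, m):\n    # Pairwise Newtonian accelerations; r has shape (N, 2), m shape (N,).\n    diff = r[:, None, :] - r[None, :, :]   # r_i - r_j\n    dist = np.linalg.norm(diff, axis=-1)   # |r_i - r_j|\n    np.fill_diagonal(dist, np.inf)         # exclude self-interaction\n    return -G * np.sum(m[None, :, None] * diff / dist[..., None]**3, axis=1)\n\ndef rk4_step(r, v, m, dt):\n    # One 4th-order Runge-Kutta step for the coupled system (r, v).\n    k1r, k1v = v,              acceleration(r, m)\n    k2r, k2v = v + 0.5*dt*k1v, acceleration(r + 0.5*dt*k1r, m)\n    k3r, k3v = v + 0.5*dt*k2v, acceleration(r + 0.5*dt*k2r, m)\n    k4r, k4v = v + dt*k3v,     acceleration(r + dt*k3r, m)\n    r_new = r + dt*(k1r + 2*k2r + 2*k3r + k4r)/6\n    v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6\n    return r_new, v_new\n\end{verbatim}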
The main problem was that, immediately from the beginning, the orbiting bodies' kinetic energy started to grow, which is quite a surprise for an RK4 algorithm, as this phenomenon is much more common when using the simple Euler method. \par\nAfter many unsuccessful fixing attempts, I finally solved the problem - at first glance - by choosing a smaller time step. By trial and error I found that with $dt \geq 0.001$ years the simulation becomes highly unstable. Maybe this could still be an acceptable and working parameter if I changed RK4 to some higher-order Runge-Kutta method, but I had very limited time, as I've mentioned already. Sadly, RK4 still did not conserve the kinetic energy over longer time periods: it started to slightly, but continuously grow. The effect caused by choosing different time steps can be objectively observed by starting the simulation from the same initial conditions with different $dt$ values and letting it run for some time.\n\n\begin{center}\n    \includegraphics[width=0.5\textwidth]{{img_src/kin_E_max_10_y_dt_0.001_y}.pdf}\n    \captionof{figure}{Kinetic energy changes of an arbitrary satellite with time step $dt=0.001$ for $10$ simulated years. A slight growth of the curve can be observed. The local extrema are marked by red and orange 'X' markers.} \label{fig:1}\n\end{center}\nUsing the time step $dt=0.001$ for $10$ simulated years, the final arrangement of the simulated asteroids can be seen in figure (\ref{fig:4}) and figure (\ref{fig:5}). Running the simulation even further, the energy growth of the small objects in the simulated asteroid belt simply blew up the simulation, as after some time they started to move exponentially faster with every iteration.\n\begin{center}\n    \includegraphics[width=0.5\textwidth]{{img_src/kin_E_max_100_y_dt_0.001_y}.pdf}\n    \captionof{figure}{Kinetic energy changes of an arbitrary satellite with time step $dt=0.001$ for $100$ simulated years. A slight growth of the curve can be observed early on; then an exponential runaway blows up the simulation. The local extrema are marked by red and orange 'X' markers. The growth is best seen by observing the slope of the red line.} \label{fig:2}\n\end{center}\nIn contrast, choosing the step size parameter as $dt = 0.0001$, the energy seemed to remain conserved and the simulation remained totally stable.\n\begin{center}\n    \includegraphics[width=0.5\textwidth]{{img_src/kin_E_max_100_y_dt_0.0001_y}.pdf}\n    \captionof{figure}{Kinetic energy changes of an arbitrary satellite with time step $dt=0.0001$ for $100$ simulated years. No relevant energy growth can be seen. The local extrema are marked by red and orange 'X' markers.} \label{fig:3}\n\end{center}\nThe final positions and velocities of the small bodies when running the simulation with these parameters can be seen in Fig. (\ref{fig:6}) and Fig. (\ref{fig:7}). It seems a wise choice to run the simulation with $dt=0.0001$ and let it run for a long time. However, it would still not yield any useful results regarding moon formation or the clustering of bodies in an asteroid belt. The total runtime for $100$ years with this time step was $5.25$ hours, but we should simulate at least a few million years to get any kind of results.
The simulation could perhaps be made \"faster\" by simulating a system where the asteroids orbit a gas giant which itself orbits a star, thus creating Lagrangian points where the small bodies can accumulate more easily.\n\n\section{Discussion}\nIn the end, I successfully built a gravitational N-body simulation which can process time steps relatively fast. Since I could not let the simulation run for a longer time with smaller time steps, I could not detect any kind of gravitational clustering or moon formation in the asteroid belt. However, I proposed some ideas which could be examined further in the future. \par\nI've also made some animations of the working simulation, which can be found on my YouTube channel\footnote{\url{https://www.youtube.com/channel/UCBDSB7PdQ3E9l9WSBsTy7cQ}}, under the \"Planetary Motions\" playlist.\n\n\end{multicols}", "meta": {"hexsha": "adb2ef64250833af8b00b93b6a072c85354044b4", "size": 13491, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Project 1/Documentation/Report/src/text_src/body.tex", "max_stars_repo_name": "masterdesky/ELTE_Comp_Simulations_2020", "max_stars_repo_head_hexsha": "b7689e0ecf9b5deaf1c713b647c1c106b8af7457", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project 1/Documentation/Report/src/text_src/body.tex", "max_issues_repo_name": "masterdesky/ELTE_Comp_Simulations_2020", "max_issues_repo_head_hexsha": "b7689e0ecf9b5deaf1c713b647c1c106b8af7457", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project 1/Documentation/Report/src/text_src/body.tex", "max_forks_repo_name": "masterdesky/ELTE_Comp_Simulations_2020", "max_forks_repo_head_hexsha": "b7689e0ecf9b5deaf1c713b647c1c106b8af7457", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.2777777778, "max_line_length": 754, "alphanum_fraction": 0.7613223631, "num_tokens": 3565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321983146848, "lm_q2_score": 0.6893056167854461, "lm_q1q2_score": 0.5553267993615186}}
{"text": "\\chapter{Hypothesis Testing and Model Selection}\nHypothesis testing (also known as model selection, particularly when it is\ndone using the method in Section~\\ref{sec:marginal_likelihood})\nis a very important topic that is traditionally considered\na different topic from parameter estimation. However, in Bayesian statistics\nwe will see that hypothesis testing is basically the same thing as parameter\nestimation! The one difference, for us, will be that we will sometimes change\nthe prior distribution a little bit.\n\nOne big advantage of Bayesian statistics is that it {\\it unifies}\nparameter estimation and hypothesis\ntesting\\footnote{``Unifies'' is a popular word for physicists. It means that\ntwo seemingly different topics are fundamentally the same, or at least closely\nrelated.}. That's good news, because instead of having to\nunderstand two different topics, we only have to understand one!\n\nTo see why hypothesis testing is fundamentally the same as parameter estimation,\nyou only need to\nunderstand how parameter estimation works from a Bayesian point of view, which\nwe have already studied.\nParameter estimation is nothing more than testing a bunch of hypotheses about\nthe value of the parameter. For example, $\\theta=1$ vs. $\\theta=2$ vs. $\\theta=3$ and\nso on. If we have their posterior probabilities, then we've tested them.\n\n\\section{An Example Hypothesis Test}\nSuppose we were performing a Bayesian parameter estimation analysis using a\nBayes' Box. Here is an example Bayes' Box with made up numbers:\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\tt{possible values} & \\tt{prior} & \\tt{likelihood} & \\tt{prior} $\\times$ \\tt{likelihood} & \\tt{posterior}\\\\\n$\\theta$ & $p(\\theta)$ & $p(x|\\theta)$ & $p(\\theta)p(x|\\theta)$ & $p(\\theta|x)$\\\\\n\\hline\n1.5 & 0.25 & 0.2 & 0.05 & 0.1\\\\\n2.0 & 0.25 & 0.4 & 0.1 & 0.2\\\\\n2.5 & 0.25 & 0.6 & 0.15 & 0.3\\\\\n3.0 & 0.25 & 0.8 & 0.2 & 0.4\\\\\n\\hline\nTotals & 1 & & 0.5 & 1\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nSuppose we wanted to test the following two hypotheses about the parameter $\\theta$.\nThe first hypothesis $H_0$ is a ``null hypothesis'', and the second hypothesis,\n$H_1$, is an ``alternative hypothesis''.\n\\begin{eqnarray}\nH_0: && \\theta = 2\\\\\nH_1: && \\theta \\neq 2\n\\end{eqnarray}\nIn classical statistics, if you saw a question phrased in this way, you would\nneed to come up with a {\\it test statistic}\nand then calculate a {\\it p-value}, which tries to say something about whether\nthe value of the test statistic would be considered extreme, under the\nassumption that $H_0$ is true.\nIn Bayesian statistics, the only thing we need to do is calculate\nthe posterior probability of $H_0$ and the posterior probability of $H_1$.\nThe posterior probability of $H_0$ is given by:\n\\begin{eqnarray}\nP(H_0|x) &=& P(\\theta = 2|x)\\\\\n&=& 0.2\n\\end{eqnarray}\nAll we did here was look up the appropriate number in the Bayes' Box! The\nposterior probability of $H_1$ is only slightly harder (but still easy)\nto calculate: $H_1$ will\nbe true if $\\theta$ takes any value other than 2. 
Therefore, the posterior\nprobability of $H_1$ is\n\\begin{eqnarray}\nP(H_1|x) &=& P(\\theta = 1.5 \\vee \\theta = 2.5 \\vee \\theta = 3|x)\\\\\n&=& P(\\theta = 1.5|x) + P(\\theta = 2.5|x) + P(\\theta = 3|x)\\\\\n&=& 0.1 + 0.3 + 0.4\\\\\n&=& 0.8.\n\\end{eqnarray}\nHere we used the fact that everything in a Bayes' Box is mutually exclusive\n(only one of the hypotheses is true) so we could add the probabilities.\nAlternatively, you could have just noticed that $H_1$ is true if $H_0$ is false.\nSo $P(H_0|x) + P(H_1|x) = 1$, which implies $P(H_1|x) = 1 - P(H_0|x)$.\n\n\\section{The ``Testing'' Prior}\nHere we will study a hypothesis testing example that involves a null and an\nalternative hypothesis. Since the bus example has been used a lot, we will now\nswitch over to a different example.\n\nSuppose it is known that the mean systolic blood pressure\nin the general population is 120 mm Hg, with a standard deviation of\n15 mm Hg (millimetres of mercury is an\nold fashioned unit for pressure, even though it sounds like a unit of length).\nA new drug is developed that may\nbe helpful in reducing blood pressure. A sample of $N=100$ people\n(who can be considered representative of the general population)\nare given the drug, and their systolic blood pressure is measured. This results\nin 100 blood pressure measurements $\\{x_1, x_2, ..., x_N\\}$, which will be\nour data. As a shorthand, I'll sometimes write $\\boldsymbol{x}$\n(a bold vector) to denote the data collectively,\ninstead of $\\{x_1, x_2, ..., x_N\\}$.\n\nWe are interested in whether the drug works. Let $\\mu$ be the mean systolic\nblood pressure\nthat would apply in the general population if everyone was taking the\ndrug. Our goal is to infer the value of $\\mu$ from the data. In classical\nstatistics, this is sometimes phrased as a hypothesis test between the two\ncompeting hypotheses. We will not be concerned with the possibility that the\ndrug has the opposite effect to what is intended.\n\\begin{equation}\n\\begin{array}{ll}\nH_0: & \\mu = 120 \\textnormal{ (the drug does nothing)}\\\\\nH_1: & \\mu < 120 \\textnormal{ (the drug reduces blood pressure)}\n\\end{array}\n\\end{equation}\nSuppose the mean of all the data values was\n\\begin{eqnarray}\n\\bar{x} &=& \\frac{1}{N} \\sum_{i=1}^{100} x_i\\\\\n&=& 115.9.\n\\end{eqnarray} % and (x squared)bar = 13654.82\nDoes this data provide evidence against $H_0$ and in favour of $H_1$? In\nclassical statistics this question would be addressed using a {\\it p-value}.\nThe p-value would be the probability of getting a result this extreme or\na result more extreme than what is observed, assuming that the ``null hypothesis''\nis true. That is,\n\\begin{eqnarray}\n\\textnormal{p-value} = P(\\bar{x} \\leq 115.9 | H_0).\n\\end{eqnarray}\nIn case you're curious, the p-value in this case is 0.0031, which is usually\ntaken to mean that there is fairly strong evidence against $H_0$ and in favour\nof $H_1$. To calculate the p-value I had to assume that the probability distribution\nfor the data values $\\{x_1, x_2, ..., x_{100}\\}$ was a normal distribution\nwith a known standard deviation of $\\sigma=15$, and\nthat they were independent:\n\\begin{eqnarray}\nx_i \\sim \\mathcal{N}(\\mu, \\sigma^2). \\label{eq:normal}\n\\end{eqnarray}\n\nIn Bayesian statistics, p-values are not used. Instead, we should think of this\nas a parameter estimation problem. We can state a set of hypotheses about the\nvalue of $\\mu$, and then choose a prior distribution, update to a posterior\ndistribution, etc. 
Then our result will be the posterior probability of the\nnull hypothesis, $P(H_0 | \\boldsymbol{x}) = P(\\mu = 120 | \\boldsymbol{x})$.\nThis is helpful because the posterior probability of the null hypothesis is\nexactly what we want. It is a description of how plausible the null hypothesis\nis given the data. It is not some other probability that isn't really relevant.\nWe can also get the posterior probability of $H_1$ by summing the posterior\nprobabilities for all other values of $\\mu$ apart from 120, or by using\n$P(H_1 | \\boldsymbol{x}) = 1 - P(H_0 | \\boldsymbol{x})$.\n\nThere is only one minor tweak we need to make to make Bayesian inference an\nappropriate framework for solving this problem. When the null and alternative\nhypotheses are written like we wrote them above, it implies that the value of $\\mu$\nthat we are calling the ``null hypothesis'' is a {\\it special value that is\nespecially plausible}. To take this into account in our Bayesian analysis we\nneed to make sure the prior distribution recognises there is a special\nvalue of the parameter that we think is extra plausible. When we do this, we\nwill call it a {\\it testing prior}. An example of a testing prior and the\nresulting Bayes' Box for\nthe blood pressure problem is given in Table~\\ref{tab:testing_prior}.\nThe R code for calculating these results is given below.\n\n\\begin{minted}[mathescape,\n               numbersep=5pt,\n               gobble=0,\n               frame=single,\n               framesep=2mm, fontsize=\\small]{r}\n# Parameter values\nmu = seq(110, 120)\n\n# Make the testing prior\nprior = rep(0.5/10, 11)\nprior[11] = 0.5\n\n# Compute the likelihood for the 100 data points.\n# The numbers get close to 0, so let's use logs\nlog_lik = rep(0, 11)\n\n# Use a for loop to loop over all data values\n# and multiply the likelihoods\nfor(i in 1:100)\n{\n  log_lik = log_lik + dnorm(x[i], mean=mu, sd=15, log=TRUE)\n}\n\n# Rescale the likelihood for readability\nlik = exp(log_lik - max(log_lik))*1000\n#lik = lik/max(lik)*1000\n\n# Calculate the posterior\nh = prior*lik\npost = h/sum(h)\n\n# The null hypothesis\npost[11]\n\\end{minted}\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\tt{possible values} & \\tt{prior} & \\tt{likelihood} & \\tt{prior} $\\times$ \\tt{likelihood} & \\tt{posterior}\\\\\n$\\mu$ & $p(\\mu)$ & $p(\\boldsymbol{x}|\\mu)$ & $p(\\mu)p(\\boldsymbol{x}|\\mu)$ & $p(\\mu|\\boldsymbol{x})$\\\\\n\\hline\n110 & 0.05 & 0.44\t& 0.02 & 0.0001\\\\\n111 & 0.05 & 4.83\t& 0.24 & 0.0012\\\\\n112 & 0.05 & 34.12\t& 1.71 & 0.0086\\\\\n113 & 0.05 & 154.64\t& 7.73 & 0.0389\\\\\n114 & 0.05 & 449.33\t& 22.47 & 0.1129\\\\\n115 & 0.05 & 837.13\t& 41.86 & 0.2103\\\\\n116 & 0.05 & 1000.00    & 50.00 & 0.2512\\\\\n117 & 0.05 & 756.93\t& 38.30 & 0.1924\\\\\n118 & 0.05 & 376.15\t& 18.81 & 0.0945\\\\\n119 & 0.05 & 118.44\t& 5.92 & 0.0298\\\\\n{\\bf 120} & {\\bf 0.5}   & {\\bf 23.91} & {\\bf 11.96} & {\\bf 0.0601}\\\\\n\\hline\nTotals & 1 & & 199.01 & 1\\\\\n\\hline\n\\end{tabular}\n\\caption{\\it An example of a testing prior for the blood pressure problem. We\ngive more prior probability to the special value $\\mu=120$ because it is\nparticularly plausible. For readability I have rescaled the likelihoods so that\nthe maximum is 1000. Note that the posterior probability of $H_0$ can simply\nbe read off the table.\n\\label{tab:testing_prior}}\n\\end{center}\n\\end{table}\n\nThe conclusion of our Bayesian hypothesis test is that the posterior probability\nof $H_0$ is 0.0601. 
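As a quick arithmetic check (added here for illustration), this number can be read off Table~\ref{tab:testing_prior} directly: the posterior probability of a hypothesis is its {\tt prior} $\times$ {\tt likelihood} entry divided by the total,\n\begin{eqnarray}\nP(H_0 | \boldsymbol{x}) &=& \frac{11.96}{199.01} \approx 0.0601.\n\end{eqnarray}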
Recall that the classical p-value was 0.0031. These numbers\nare very different, and {\it there is no reason why they should be similar}. The\np-value might imply that the evidence is overwhelming (if you are not experienced\nat interpreting p-values), but the posterior probability still says there's a\n6\% chance the drug does nothing.\n\nNote that the calculation of the posterior distribution uses {\it all} of the\ndata values, rather than reducing the whole data set down to a single number\n(the sample mean $\bar{x}$). In this particular example, reducing the whole\ndataset to a single number is harmless\footnote{In this problem, the sample mean\nis a ``sufficient statistic'': deleting all of the data and using just the\nmean has no consequences!}. But in different situations (e.g. if your sampling distribution\nor likelihood was based on the heavy-tailed Cauchy distribution instead of a normal\ndistribution), reducing an entire data set to a single ``test statistic'' can\nbe extremely wasteful!\n\nNote also that there were some fairly arbitrary decisions made in choosing our\ntesting prior. We decided not to allow $\mu > 120$, but the analysis could\nalso have allowed for that. The discrete approximation was fairly coarse. Finally, we\nassumed $\mu$ couldn't be lower than 110, and had a uniform prior for all\n$\mu$ values apart from 120. Some of these assumptions can and should be\nquestioned when applying Bayesian hypothesis testing in practice.\nIn Figure~\ref{fig:testing_priors}, there are three possible ideas for what\nthe prior should be in the blood pressure question. They may all seem somewhat\nreasonable in this situation, but could lead to different conclusions.\n\nPrior 1 is basically the same as the prior in our Bayes' Box, although it goes\ndown to $\mu=100$ and divides the possible $\mu$ values more finely. This prior\nsays the null has a 50\% probability, and if $\mu$ is not equal to 120, then it\ncould be anything.\nPrior 2\nis similar, but has only 30\% of the prior probability on the null hypothesis,\ninstead of 50\%,\nand the shape of the prior is non-uniform for the lower values of $\mu$.\nThis is like saying ``$\mu$ could be precisely 120, and if it's not precisely\n120, then it is probably at least {\it close} to 120''. In a lot of hypothesis\ntesting situations this would be a more accurate description of our prior\nbeliefs than Prior 1.\nPrior 3 isn't really a testing prior at all (it doesn't have a spike), but is\njust a bell-shaped prior. This is like saying ``Alright, I would never believe\n$\mu$ is {\it exactly} 120, but I think there's a reasonable chance it's {\it\nclose} to 120.'' Often, it would be nonsense to think the null hypothesis\nis perfectly true, to an arbitrary level of accuracy. Something like Prior 3\nwould be more appropriate.\nThese three priors would all give different results, and the appropriate choice\ndepends on the individual problem you are solving. \n\n\begin{framed}\n{\bf Remember, if the conclusions depend sensitively on the choice of the\nprior distribution, that is an important finding. You either need to be\nreally careful about choosing your prior, or you need more data.}\n\end{framed}\n\n\begin{figure}[!ht]\n\begin{center}\n\includegraphics[scale=0.6]{Figures/testing_priors.pdf}\n\caption{\it Three possible priors we could use for the blood pressure question.\nAll may seem ``reasonable'', and the choice can affect the results (quite\nsignificantly in some cases). 
Care should be taken when choosing the prior\nin hypothesis testing problems.\n\label{fig:testing_priors}}\n\end{center}\n\end{figure}\n\n\n\section{Some Terminology}\nThere is some alternative terminology that is widely used and is particularly\npopular in Bayesian hypothesis testing (aka model selection) problems. Suppose\nthere were two hypotheses $H_1$ and $H_2$, and some data $x$. Now, $H_1$ and\n$H_2$ might be a null and alternative hypothesis, or they might be two\nparticular values of the parameter, or something else. Writing Bayes' rule once\nfor each of the two hypotheses gives:\n\begin{eqnarray}\nP(H_1 | x) &=& \frac{P(H_1)p(x|H_1)}{p(x)}\\\nP(H_2 | x) &=& \frac{P(H_2)p(x|H_2)}{p(x)}\n\end{eqnarray}\nThese could also be written in words:\n\begin{eqnarray}\n\textnormal{posterior} &=& \frac{\textnormal{prior}\times\textnormal{likelihood}}{\textnormal{marginal likelihood}}\n\end{eqnarray}\n\nDividing these two equations gives the {\it odds form} of Bayes' rule, which\ndeals with ratios of probabilities instead of probabilities themselves.\n\begin{eqnarray}\n\frac{P(H_1 | x)}{P(H_2|x)} &=& \frac{P(H_1)}{P(H_2)} \times\n\frac{p(x|H_1)}{p(x|H_2)}\label{eq:odds_form}\n\end{eqnarray}\nIn words, this can be written as:\n\begin{eqnarray}\n\textnormal{posterior odds} = \textnormal{prior odds}\n\times\n\textnormal{Bayes factor}\n\end{eqnarray}\nSometimes people talk about odds (or odds ratios, which are the same thing)\nand Bayes Factors instead of about prior and\nposterior probabilities. The odds tell us how plausible $H_1$ is compared to\n$H_2$. For example, a posterior odds ratio of 5 means $H_1$ is 5 times as plausible\nas $H_2$. Of course, odds can be greater than 1 even though probabilities\ncannot be. The Bayes Factor is the ratio of the likelihoods. Results are often\nquoted as Bayes Factors because that's the part of the equation where the\ndata is important. If you say ``The Bayes Factor for $H_1$ over $H_2$ was 10'',\nthen whoever you're talking to is free to apply whatever prior odds they like,\nwhereas if you state the posterior odds then people may wonder what your prior\nodds were.\n\n\section{Hypothesis Testing and the Marginal Likelihood}\label{sec:marginal_likelihood}\nThe Bayes Factor in Equation~\ref{eq:odds_form} is the ratio of likelihoods\nfor two hypotheses $H_1$ and $H_2$. If we wanted to calculate the Bayes Factor\nfor $H_0$ and $H_1$ in the blood pressure example, we could easily get the likelihood\nfor $H_0$ (it's right there in the Bayes' Box). But how would we get $p(x|H_1)$,\nwhich needs to be a single number? $H_1$ is the statement $\mu = 110$ {\bf or}\n$\mu = 111$ {\bf or} $\mu = 112$ and so on up to $\mu = 119$.\n\nImagine we had left $\mu=120$ out of our Bayes' Box and just done parameter\nestimation within the context of $H_1$ (i.e. assuming, for argument's sake,\nthat $H_1$ is true). This would involve a {\it reduced} Bayes' Box with one\nless row in it. We would end up getting some marginal likelihood\n$p(x) = \sum p(\theta)p(x|\theta)$.\nThe key is to realise that since we are assuming $H_1$ throughout, the\nmarginal likelihood is really $p(x|H_1) = \sum p(\theta|H_1)p(x|\theta, H_1)$,\nwhich is exactly the thing we need to calculate the Bayes Factor!\n\nAll of this implies\nthere are two mathematically equivalent ways of doing Bayesian hypothesis testing,\nor model selection. One is to make a big model that includes both the null and\nthe alternative hypothesis.
The Bayes' Box with a testing prior accomplishes this.\nIn most cases this is the most convenient way to do the calculations.\n\nThe other way is to do the two analyses separately. First, do parameter estimation\nwithin the context of $H_1$. Then, do parameter estimation within the context\nof $H_2$. Then, use the marginal likelihoods as if they were likelihoods, to\ncompare $H_1$ vs. $H_2$. This second way of calculating Bayes Factors is\nmost useful when the two analyses were actually done separately by different\npeople.\n\n%\\section{Analytic Solution}\n%In this section we'll solve the blood pressure model selection problem\n%analytically. Along the way, we'll learn a useful conjugate prior for when the\n%likelihood is based on a normal distribution.\n\n%Firstly, let's write down the likelihood,\n%which is a product of $N$ univariate normal densities with mean $\\mu$ and\n%standard deviation $\\sigma$:\n%\\begin{eqnarray}\n%p(\\boldsymbol{x} | \\mu) &=&\n%\\prod_{i=1}^N \\frac{1}{\\sigma\\sqrt{2\\pi}}\n%\\exp\n%\\left[-\\frac{1}{2\\sigma^2}\\left(x_i - \\mu\\right)^2\\right].\n%\\end{eqnarray}\n%We can rearrange this a bit, which will be useful later:\n%\\begin{eqnarray}\n%p(\\boldsymbol{x} | \\mu) &=& \\sigma^{-N}(2\\pi)^{-N/2}\n%\\exp\n%\\left[-\\frac{1}{2\\sigma^2}\\sum_{i=1}^N\\left(x_i - \\mu\\right)^2\\right]\\\\\n%&=& \\sigma^{-N}(2\\pi)^{-N/2}\n%\\exp\n%\\left[-\\frac{1}{2\\sigma^2}\\left(N\\bar{x^2} - 2N\\mu\\bar{x} + N\\mu^2\\right)\\right]\\\\\n%&=& \\sigma^{-N}(2\\pi)^{-N/2}\n%\\exp\n%\\left[-\\frac{N}{2\\sigma^2}\\left(\\bar{x^2} + \\mu^2\\right) + \\frac{N\\mu}{\\sigma^2}\\bar{x}\\right].\\label{eq:lik_simplified}\n%%\\sum_{i=1}^N\\left(x_i^2 - 2\\mu x_i + \\mu^2\\right)\\right].\n%\\end{eqnarray}\n%where $\\bar{x}$ is the arithmetic mean of the data and $\\bar{x^2}$ is the mean\n%squared value of the data. Notice that the likelihood only depends on the data\n%through the values of $\\bar{x}$ and $\\bar{x^2}$. In fact, if we're only trying\n%to infer $\\mu$ and not $\\sigma$ then the part that depends on $\\bar{x^2}$\n%can be treated as a ``constant'', and will not affect the posterior for $\\mu$.\n%This is why $\\bar{x}$ is a sufficient statistic. If we were treating $\\sigma$\n%as unknown as well, $\\bar{x}$ and $\\bar{x^2}$ would be sufficient statistics.\n\n%As we saw before, model selection problems can be solved in two different ways\n%(but with equivalent results). We can calculate the posterior for $\\mu$ in\n%a way that includes both $H_0$ and $H_1$ in the calculation from the beginning\n%(perhaps assigning 50\\% prior probability to $H_0$), or we can compute the\n%marginal likelihoods for $H_0$ and $H_1$ separately. In analytical calculations\n%the separate calculation is the most popular method, so we'll use it here.\n\n%The posterior odds ratio for $H_0$ over $H_1$ is\n%\\begin{eqnarray}\n%\\frac{P(H_0 | \\boldsymbol{x})}{P(H_1|\\boldsymbol{x})} &=& \\frac{P(H_0)}{P(H_1)} \\times\n%\\frac{p(\\boldsymbol{x}|H_0)}{p(\\boldsymbol{x}|H_1)}\\label{eq:odds_form2}.\n%\\end{eqnarray}\n%If we're assigning 50\\% prior probability to $H_0$ and to $H_1$, then the\n%prior odds ratio is $0.5 / 0.5 = 1$. Therefore we just need the two likelihoods,\n%$p(\\boldsymbol{x} | H_0)$ and $p(\\boldsymbol{x} | H_1)$. The first of these is fairly straightforward to\n%get, since $H_0$ implies $\\mu = 120$. 
We just calculate the value of the\n%likelihood using Equation~\\ref{eq:lik_simplified}, which we've already done\n%(it was in Table~\\ref{tab:testing_prior}, albeit arbitrarily scaled). Its value\n%is $p(\\boldsymbol{x} | H_0) = 1.638 \\times 10^{-159}$.\n\n%To complete the calculation we need the value of $p(\\boldsymbol{x}|H_1)$, but\n%this is a bit more difficult since $H_1$ doesn't correspond to a particular\n%value of $\\mu$, but instead implies that $\\mu$ is any value other than 120.\n%So what exactly does it mean? In the discrete situation, it's possible to\n%calculate $p(\\boldsymbol{x}|H_1)$ by recognising that $H_1$ is equivalent to\n%$(\\mu = 110) \\vee (\\mu = 111) \\vee... (\\mu = 119)$. Then\n%we can use the sum rule:\n%\\begin{eqnarray}\n%p(\\boldsymbol{x}|H_1) &=& \\sum_{\\mu=110}^{119} p(\\mu | H_1)\n%p(\\boldsymbol{x}|\\mu, H_1).\\label{eq:discrete_evidence}\n%\\end{eqnarray}\n%Since we're working analytically, we'll be replacing this marginal likelihood\n%with a continuous version that allows any possible $\\mu$ value on the real line.\n%The integral for the marginal likelihood will take a form analogous to\n%Equation~\\ref{eq:discrete_evidence}:\n%\\begin{eqnarray}\n%p(\\boldsymbol{x}|H_1) &=& \\int p(\\mu | H_1)\n%p(\\boldsymbol{x}|\\mu, H_1) \\, d\\mu.\\label{eq:continuous_evidence}\n%\\end{eqnarray}\n%The limits for the integration must be whatever range of $\\mu$ is possible.\n%The product of two terms inside the integral of Equation~\\ref{eq:continuous:evidence} is the prior times the likelihood for a parameter estimation problem in\n%which we want to infer $\\mu$ from the data,\n\n%Recall the notion of a ``conjugate prior'' which was discussed briefly earlier\n%(when we used a beta prior together with a binomial sampling distribution).\n%This is a mathematical trick where we choose a particular kind of prior\n%that works together nicely with the mathematical form of the likelihood, so\n%that we can do our calculations and actually get an answer in closed form.\n%If we look at Equation~\\ref{eq:lik_simplified} as a function of $\\mu$, we\n%can see that it is an exponential of a quadratic term, which has the same\n%mathematical form as a gaussian.\n\n%If we make the prior for $\\mu$ (given $H_1$) a normal distribution, then the\n%equation for the density will also be a gaussian function of $\\mu$. Two\n%gaussian functions multiplied together become a single gaussian function, whose\n%integral we can do. This is the kind of thing that Bayesians used to have to\n%do in order to make their calculations tractable. It's okay if you wouldn't\n%have guessed that a normal distribution would be the conjugate prior here.\n\n%A normal prior centered at centered at $m$ with a standard deviation of $s$\n%has the following probability density function:\n%\\begin{eqnarray}\n%p(\\mu | H_1) &=& \\frac{1}{s\\sqrt{2\\pi}}\\exp\n%\\left[-\\frac{1}{2s^2}\\left(\\mu - m\\right)^2\\right]\n%\\end{eqnarray}\n\n%Substituting this and the likelihood into Equation~\\ref{eq:continuous_evidence},\n%and simplifying a bit, we get:\n%\\begin{equation}\n%p(\\boldsymbol{x}|H_1) = s^{-1}\n%\\sigma^{-N}(2\\pi)^{-N/2-1} \\int \n%\\exp\n%\\left[-\\frac{1}{2s^2}\\left(\\mu - m\\right)^2-\\frac{N}{2\\sigma^2}\\left(\\bar{x^2} + \\mu^2\\right) + \\frac{N\\mu}{\\sigma^2}\\bar{x}\\right] \\, d\\mu.\\label{eq:continuous_evidence2}\n%\\end{equation}\n\n%To do the integral, we need to work with the term inside the exponential until\n%it is of the form $-\\frac{1}{2s'^2}(\\mu - m')^2$, i.e. 
what's usually inside\n%the exponential part of a gaussian function. If we could do that, then we\n%could do the integral like so:\n%\\begin{equation}\n%\\int_{-\\infty}^{\\infty} \\exp\\left[C-\\frac{1}{2s'^2}(\\mu - m')^2\\right]\\, d\\mu\n%= s'\\sqrt{2\\pi}\\exp(C)\\label{eq:doable}\n%\\end{equation}\n%which is just the normalisation condition of a normal distribution.\n%Rearranging Equation~\\ref{eq:continuous_evidence2} into this form is a pain\n%(it involves expanding and simplifying various terms and then\n%``completing the square''), and so I'll just state the result. If we set\n%\\begin{eqnarray}\n%m' &=& \\frac{\\frac{m}{s^2} + \\frac{N\\bar{x}}{\\sigma^2}}{\\frac{1}{s^2} + \\frac{N}{\\sigma^2}}\\\\\n%\\frac{1}{s'^2} &=& \\frac{1}{s^2} + \\frac{N}{\\sigma^2}\\\\\n%C &=& \\frac{m^2}{2s^2} + \\frac{m'^2}{2s'^2}\n%\\end{eqnarray}\n%then the integral in Equation~\\ref{eq:continuous_evidence2} matches that in\n%Equation~\\ref{eq:doable}. Therefore the marginal likelihood is\n%\\begin{equation}\n%p(\\boldsymbol{x}|H_1) = \\frac{s'}{s}\n%\\sigma^{-N}(2\\pi)^{-N/2} \n%\\exp\n%\\left[\\frac{m^2}{2s^2} + \\frac{m'^2}{2s'^2}\\right].\n%\\end{equation}\n\n\n", "meta": {"hexsha": "d124c5b5e628ca3304350cae98ccee611d3a3437", "size": 24366, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "hypothesis_testing.tex", "max_stars_repo_name": "xulinpan/stat331", "max_stars_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hypothesis_testing.tex", "max_issues_repo_name": "xulinpan/stat331", "max_issues_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hypothesis_testing.tex", "max_forks_repo_name": "xulinpan/stat331", "max_forks_repo_head_hexsha": "7ca22bf3ce2c43d1a3b5fd8ff22842bdeb87ecf5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.8296593186, "max_line_length": 172, "alphanum_fraction": 0.7295001231, "num_tokens": 7301, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056167854461, "lm_q2_score": 0.8056321983146848, "lm_q1q2_score": 0.5553267993615186}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\begmath 2.12 Finite Laguerre Series\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThis subroutine computes the value of a finite sum of Laguerre polynomials,%\n\\begin{equation*}\ny=\\sum_{j=0}^{\\text{N}}a_jL_j(x)\n\\end{equation*}\nfor a specified summation limit, N, argument, $x$, and sequence of\ncoefficients, $a_j$. The Laguerre polynomials are defined in\n\\cite{ams55:or-poly}.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[INTEGER]  \\ {\\bf N}\n\n\\item[REAL]  \\ {\\bf X, Y, A}(0:$m\\geq $ N)\n\\end{description}\n\nAssign values to X, N, and A(0), A(1), ..., A(N).\n$$\n\\fbox{{\\bf CALL SLASUM (X, N, A, Y)}}\n$$\nThe sum will be stored in Y.\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] Argument of the polynomials.\n\n\\item[N]  \\ [in] Highest degree of polynomials in sum.\n\n\\item[A()]  \\ [in] The coefficients must be given in A(J), J = 0, ..., N.\n\n\\item[Y]  \\ [out] Computed value of the sum.\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nFor double precision usage, change the REAL statement to DOUBLE PRECISION\nand change the subroutine name from SLASUM to DLASUM.\n\n\\subsection{Examples and Remarks}\n\nSee DRSLASUM and ODSLASUM for an example of the usage of SLASUM. DRSLASUM\nevaluates the following identity, the coefficients of which were obtained\nfrom Table~22.10, page~799, of \\cite{ams55:or-poly}.\n\\begin{equation*}\nz=y-w=0,\n\\end{equation*}\nwhere\n\\begin{multline*}\ny = 7.2L_0(x)-3.2L_1(x)+108L_2(x)-144L_3(x)\\\\\n+108L_4(x)-43.2L_5(x)+7.2L_6(x),\n\\end{multline*}\nand\n\\begin{equation*}\nw=0.01x^6.\n\\end{equation*}\n\n\\subsection{Functional Description}\n\nThe sum is evaluated by the following algorithm:%\n\\begin{gather*}\n\\hspace{-10pt}b_{N+2}=0,\\quad b_{N+1}=0, \\\\\n\\hspace{-10pt}b_k=\\frac{2k+1-x}{k+1}b_{k+1}-\\frac{k+1}{k+2}b_{k+2}+a_k,\n\\quad k=\\text{N},...,0,\\\\\n\\hspace{-10pt}y=b_0.\n\\end{gather*}\nFor an error analysis applying to this algorithm see \\cite{Ng:1968:DSS} and\n\\cite{Ng:1971:RAC}. The first four Laguerre polynomials are\n\\begin{gather*}\nL_0(x)=1,\\quad L_1(x)=1-x,\\\\\nL_2(x)=1-2x+0.5x^2,\\\\\nL_3(x)=1-3x+1.5x^2-(1/6)x^3.\n\\end{gather*}\nFor $k \\geq 2$ the Laguerre polynomials satisfy the recurrence\n\\begin{equation*}\nkL_k(x)=(2k-1-x)L_{k-1}(x)-(k-1)L_{k-2}(x).\n\\end{equation*}\nThe Laguerre polynomials are orthogonal relative to integration with the\nweight function $e^{-x}$ over the interval [0, $\\infty )$, thus%\n\\begin{equation*}\n\\int_0^\\infty e^{-x}L_i(x)L_j(x)\\,dx=0\\quad \\text{if }i\\neq j.\n\\end{equation*}\nLaguerre polynomials are normally used only with an argument $x$ satisfying $%\nx \\geq 0.$\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error procedures and Restrictions}\n\nThe subroutine will return Y = 0 if N $< 0$. It is recommended that X\nsatisfy X $\\geq 0.$\n\n\\subsection{Supporting Information}\n\nThe source language is ANSI Fortran~77.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.2in} {\\bf Required Files}\\vspace{2pt} \\\\\nDLASUM & \\hspace{.35in} DLASUM\\rule[-5pt]{0pt}{8pt}\\\\\nSLASUM & \\hspace{.35in} SLASUM\\\\\n\\end{tabular}\n\nBased on a 1974 program by E.W. Ng, JPL. 
\begcodenp\n\n\lstset{language=[77]Fortran,showstringspaces=false}\n\lstset{xleftmargin=.8in}\n\n\centerline{\bf \large DRSLASUM}\vspace{10pt}\n\lstinputlisting{\codeloc{slasum}}\n\n\vspace{30pt}\centerline{\bf \large ODSLASUM}\vspace{10pt}\n\lstset{language={}}\n\lstinputlisting{\outputloc{slasum}}\n\n\end{document}\n", "meta": {"hexsha": "93fcbfab0ac3c84c8cd5f86fb955c4328612f607", "size": 3678, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch02-12.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch02-12.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch02-12.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 27.8636363636, "max_line_length": 98, "alphanum_fraction": 0.7109842306, "num_tokens": 1344, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056040203136, "lm_q2_score": 0.8056321936479701, "lm_q1q2_score": 0.5553267858607243}}
{"text": "\\subsection{First-order satisfiability}\\label{subsec:first_order_satisfiability}\n\n\\begin{definition}\\label{def:first_order_substitution}\n  As in \\hyperref[subsec:propositional_logic]{propositional logic}, we sometimes want to perform substitution, however we have different types of syntactic objects (terms and formulas) which have different substitution rules. The notion of free and bound variables further complicates us --- see for example the problems outlined in \\fullref{rem:first_order_substitution_renaming_justification}. In particular, this means that an analogous to \\fullref{thm:propositional_substitution_equivalence} theorem cannot longer justify substitution as it is done in \\fullref{alg:conjunctive_normal_form_reduction} --- we can have weaker statements as in \\fullref{thm:propositional_substitution_equivalence} that implicitly rely on variable renaming in order to hold. This implies that it is of no practical use to define substitution of a first-order subformula inside another formula as it is done in \\fullref{def:propositional_substitution}. Instead, we concert ourselves with substituting variables --- propositional variables with first-order formulas and first-order variables with first-order terms. Furthermore, since this does not complicate us, we allow substituting arbitrary terms rather than only first-order variables.\n\n  While substituting a propositional variable is the syntactic analog to applying \\hyperref[def:boolean_function]{Boolean functions} to different variables or propositional constants, substituting a first-order variable can express applying \\hyperref[def:function]{arbitrary functions} to different first-order variables or arbitrary constants. For example, in a suitable language, we can apply \\( \\log(x) \\) to the constant \\( e \\) by substituting \\( x \\) with \\( e \\) to obtain the ground term \\( \\log(e) \\).\n\n  As in \\fullref{def:propositional_substitution}, we define different kinds of (single) \\term{substitution} in more generality that in e.g. \\cite[def. 15.25]{OpenLogicFull}. Where applicable, \\term{simultaneous substitution} is defined via the same trick as in \\fullref{def:propositional_substitution}.\n\n  \\begin{thmenum}\n    \\thmitem{def:first_order_substitution/propositional} Let \\( \\varphi \\) be a \\hyperref[def:propositional_syntax/formula]{propositional formula} with variables \\( \\boldop{Var}(\\varphi) = \\set{ P_1, \\ldots, P_n } \\). For brevity, denote \\( V \\coloneqq \\boldop{Var}(\\varphi) \\). Let \\( \\Theta = \\set{ \\theta_1, \\ldots, \\theta_n } \\) be a set of \\hyperref[def:first_order_syntax/formula]{first-order formulas}.\n\n    It does not make sense to replace a single propositional variable by a single formula. Furthermore, a first-order formula \\( \\theta_k \\) cannot possibly contain any of the propositional variables \\( P_1, \\ldots, P_n \\). 
This allows us to introduce a simplification of the simultaneous substitution based on \eqref{eq:def:propositional_substitution/single} as\n    \begin{equation}\label{eq:def:first_order_substitution/propositional}\n      \varphi[V \mapsto \Theta] \coloneqq \begin{cases}\n        \varphi,                                                    &\varphi \in \set{ \top, \bot } \\\n        \theta_k,                                                   &\varphi = P_k \T{for some} k = 1, \ldots, n \\\n        \neg \psi[V \mapsto \Theta],                                &\varphi = \neg \psi \\\n        \psi_1[V \mapsto \Theta] \bincirc \psi_2[V \mapsto \Theta], &\varphi = \psi_1 \bincirc \psi_2, \bincirc \in \Sigma.\n      \end{cases}\n    \end{equation}\n\n    As in \fullref{def:propositional_substitution}, it is not strictly necessary for any of the variables to belong to \( \boldop{Var}(\varphi) \).\n\n    \thmitem{def:first_order_substitution/term_in_term} We define the substitution of the \hyperref[def:first_order_syntax/term]{first-order term} \( \kappa \) with \( \mu \) in the term \( \tau \) as\n    \begin{equation}\label{eq:def:first_order_substitution/term_in_term}\n      \tau[\kappa \mapsto \mu] \coloneqq \begin{cases}\n        \mu,                                                               &\tau = \kappa, \\\n        \tau,                                                              &\tau \neq \kappa \T{and} \tau \in \boldop{Var}, \\\n        f(\tau_1[\kappa \mapsto \mu], \ldots, \tau_n[\kappa \mapsto \mu]), &\tau \neq \kappa \T{and} \tau = f(\tau_1, \ldots, \tau_n).\n      \end{cases}\n    \end{equation}\n\n    It is not strictly necessary for \( \kappa \) to be a \hyperref[def:first_order_syntax/subterm]{subterm} of \( \tau \).\n\n    \thmitem{def:first_order_substitution/term_in_formula} This case is more complicated. 
We define the substitution of the term \( \kappa \) with the term \( \mu \) in the first-order formula \( \varphi \) as\n    \begin{equation}\label{eq:def:first_order_substitution/term_in_formula}\n      \varphi[\kappa \mapsto \mu] \coloneqq \begin{cases}\n        \varphi,                                                           &\varphi \in \set{ \top, \bot }, \\\n        p(\tau_1[\kappa \mapsto \mu], \ldots, \tau_n[\kappa \mapsto \mu]), &\varphi = p(\tau_1, \ldots, \tau_n), \\\n        \tau_1[\kappa \mapsto \mu] \doteq \tau_2[\kappa \mapsto \mu],      &\varphi = \tau_1 \doteq \tau_2, \\\n        \neg \psi[\kappa \mapsto \mu],                                     &\varphi = \neg \psi, \\\n        \psi_1[\kappa \mapsto \mu] \bincirc \psi_2[\kappa \mapsto \mu],    &\varphi = \psi_1 \bincirc \psi_2, \bincirc \in \Sigma, \\\n        (\dagger),                                                         &\varphi = \quantifier{Q}{\xi} \psi, Q \in \set{ \forall, \exists },\n      \end{cases}\n    \end{equation}\n    where\n    \begin{empheq}[left=(\dagger) \coloneqq \empheqlbrace]{align}\n      &\varphi,                                                                        && \xi \in \boldop{Var}(\kappa), \label{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial} \\\n      &\quantifier{Q}{\xi} \parens[\Big]{\psi[\kappa \mapsto \mu]},                    && \xi \not\in \boldop{Var}(\kappa) \cup \boldop{Var}(\mu), \label{eq:def:first_order_substitution/term_in_formula/quantifiers/direct} \\\n      &\quantifier{Q}{\eta} \parens[\Big]{\psi[\xi \mapsto \eta][\kappa \mapsto \mu]}, && \xi \not\in \boldop{Var}(\kappa) \T{and} \xi \in \boldop{Var}(\mu) \T{and} &\label{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming} \\\n                                                                                      &&& \eta \not\in \boldop{Var}(\kappa) \cup \boldop{Var}(\mu) \cup \boldop{Var}(\psi). \nonumber\n    \end{empheq}\n\n    In \eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}, we chose a new variable \( \eta \). We implicitly assume that there exist enough variables in the language so that we can find an \( \eta \) that satisfies the condition. In order to fully avoid nondeterminism in the choice of \( \eta \), we can pick a well-ordering on the set \( \boldop{Var} \) and always choose \( \eta \) to be the smallest variable not present in \( \varphi \), \( \kappa \) or \( \mu \). This rule is called \term{renaming of the bound variable} \( \xi \) to \( \eta \) and is done to mitigate capturing as described in \fullref{rem:first_order_substitution_renaming_justification/capturing}.\n\n    We could avoid the rule for renaming (as it is done in \cite[def. 15.25]{OpenLogicFull}), however renaming both free and bound variables is natural and is often done in practice. For example, consider the \hyperref[def:peano_arithmetic]{Peano arithmetic} formula \enquote{there exists \( n \) such that \( nm \) is even}. Note that the bound variable \( n \) is renamed to \( k \) and the free variable \( m \) to \( n \) in the larger formula \enquote{for every \( n \) there exists \( k \) such that \( kn \) is even}.\n\n    The rule \eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial} may seem redundant, but when doing inductive proofs (e.g. 
the proof of \\fullref{thm:renaming_assignment_compatibility}), we usually need to separately consider the cases where \\( \\xi \\in \\boldop{Var}(\\kappa) \\) and \\( \\xi \\not\\in \\boldop{Var}(\\kappa) \\setminus \\boldop{Var}(\\mu) \\) and the rule \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct} being trivial simplifies the proofs.\n\n    See \\fullref{rem:first_order_substitution_parentheses} regarding the additional parentheses in \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}.\n\n    See \\fullref{ex:first_order_substitution} for examples of applying the different quantifier rules.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{remark}\\label{rem:first_order_substitution_renaming_justification}\n  The renaming rule \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming} is designed to mitigate the following two problems (compared to \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct}):\n\n  \\begin{thmenum}\n    \\thmitem{rem:first_order_substitution_renaming_justification/capturing} Renaming mitigates \\enquote{capturing} free variables as in\n    \\begin{equation*}\n      \\parens[\\Big]{ \\qforall \\eta p(\\xi, \\eta) }[\\xi \\mapsto \\eta] = \\qforall \\eta p(\\eta, \\eta)\n    \\end{equation*}\n    by instead producing, up to a choice of new variables, the formula\n    \\begin{equation*}\n      \\parens[\\Big]{ \\qforall \\eta p(\\xi, \\eta) }[\\xi \\mapsto \\eta] = \\qforall \\zeta p(\\eta, \\zeta).\n    \\end{equation*}\n\n    \\thmitem{rem:first_order_substitution_renaming_justification/colliding} Renaming mitigates \\enquote{colliding} multiple bound variables as in\n    \\begin{equation*}\n      \\parens[\\Big]{ \\qforall \\xi \\qforall \\eta p(\\xi, \\eta) }[\\xi \\mapsto \\eta] = \\qforall \\xi \\qforall \\eta p(\\eta, \\eta)\n    \\end{equation*}\n    by instead producing, up to a choice of new variables, the formula\n    \\begin{equation*}\n      \\parens[\\Big]{ \\qforall \\xi \\qforall \\eta p(\\xi, \\eta) }[\\xi \\mapsto \\eta] = \\qforall \\zeta \\qforall \\sigma p(\\zeta, \\sigma).\n    \\end{equation*}\n  \\end{thmenum}\n\\end{remark}\n\n\\begin{remark}\\label{rem:first_order_substitution_parentheses}\n  When performing \\hyperref[def:propositional_substitution]{substitution}, it is sometimes convenient to add additional parentheses to avoid ambiguity. 
For example, while parentheses around quantifier expressions are not necessary by the syntax of first-order logic, adding such parentheses helps avoid the ambiguity in\n  \\begin{equation*}\n    \\qforall \\xi p(\\xi, \\eta) [\\eta \\mapsto \\zeta].\n  \\end{equation*}\n\n  Instead, we either write\n  \\begin{equation*}\n    \\parens[\\Big]{ \\qforall \\xi p(\\xi, \\eta) } [\\eta \\mapsto \\zeta]\n  \\end{equation*}\n  or\n  \\begin{equation*}\n    \\qforall \\xi \\parens[\\Big]{ p(\\xi, \\eta)[\\eta \\mapsto \\zeta] }.\n  \\end{equation*}\n\n  This convention is only part of the metasyntax and the parentheses are not part of the syntax of the formulas themselves.\n\\end{remark}\n\n\\begin{example}\\label{ex:first_order_substitution}\n  The following term substitutions illustrate the distinct cases in \\eqref{eq:def:first_order_substitution/term_in_formula}:\n  \\begin{thmenum}\n    \\thmitem{ex:first_order_substitution/1} The trivial case without actual substitution:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{\\qforall \\xi p(\\xi, \\eta)}[\\xi \\mapsto \\eta]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial}} = \\\\ &=\n      \\qforall \\xi p(\\xi, \\eta).\n    \\end{align*}\n\n    \\Fullref{ex:first_order_substitution/5} demonstrates that this does not work for nested substitution.\n\n    \\thmitem{ex:first_order_substitution/2} A simple substitution without renaming:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{\\qforall \\xi p(\\xi, \\eta)}[\\eta \\mapsto \\zeta]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct}} = \\\\ &=\n      \\qforall \\xi \\parens[\\Big]{p(\\xi, \\eta)[\\eta \\mapsto \\zeta]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\xi p(\\xi, \\zeta).\n    \\end{align*}\n\n    \\thmitem{ex:first_order_substitution/3} A simple renaming without actual substitution:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{\\qforall \\xi p(\\xi, \\eta)}[\\eta \\mapsto \\xi]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}} = \\\\ &=\n      \\qforall \\zeta \\parens[\\Big]{p(\\xi, \\eta)[\\xi \\mapsto \\zeta][\\eta \\mapsto \\xi]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\zeta p(\\zeta, \\xi).\n    \\end{align*}\n\n    \\thmitem{ex:first_order_substitution/4} \\Fullref{ex:first_order_substitution/3}, but with \\( \\mu \\) in \\eqref{eq:def:first_order_substitution/term_in_formula} containing \\( \\xi \\) indirectly:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{\\qforall \\xi p(\\xi, \\eta)}[\\eta \\mapsto f(\\xi)]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}} = \\\\ &=\n      \\qforall \\zeta \\parens[\\Big]{p(\\xi, \\eta)[\\xi \\mapsto \\zeta][\\eta \\mapsto f(\\xi)]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\zeta p(\\zeta, f(\\xi)).\n    \\end{align*}\n\n    \\thmitem{ex:first_order_substitution/5} Only renaming with multiple quantifiers, which shows the limitations of \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial}:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{\\qforall \\eta
\\qexists \\xi p(\\xi, \\eta)}[\\xi \\mapsto \\eta]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ \\parens[\\Big]{ \\qexists \\xi p(\\xi, \\eta) }[\\eta \\mapsto \\zeta][\\xi \\mapsto \\eta]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ \\parens[\\Big]{ \\qexists \\xi p(\\xi, \\zeta) }[\\xi \\mapsto \\eta]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial}} = \\\\ &=\n      \\qforall \\zeta \\qexists \\xi p(\\xi, \\zeta).\n    \\end{align*}\n\n    \\thmitem{ex:first_order_substitution/6} Both renaming and substitution with multiple quantifiers:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens[\\Big]{ \\qforall \\eta (p(\\xi, \\eta) \\vee \\qexists \\xi p(\\xi, \\eta)) }[\\xi \\mapsto \\eta]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ \\parens[\\Big]{ p(\\xi, \\eta) \\vee \\qexists \\xi p(\\xi, \\eta) }[\\eta \\mapsto \\zeta][\\xi \\mapsto \\eta] }\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ p(\\eta, \\zeta) \\vee \\parens[\\Big]{ \\qexists \\xi p(\\xi, \\eta) }[\\eta \\mapsto \\zeta][\\xi \\mapsto \\eta] }\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ p(\\eta, \\zeta) \\vee \\parens[\\Big]{ \\qexists \\xi p(\\xi, \\zeta) }[\\xi \\mapsto \\eta] }\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial}} = \\\\ &=\n      \\qforall \\zeta \\parens*{ p(\\eta, \\zeta) \\vee \\qexists \\xi p(\\xi, \\zeta) }.\n    \\end{align*}\n\n    \\thmitem{ex:first_order_substitution/7} Substitution of more general terms than variables with renaming of the term's variables:\n    \\begin{align*}\n      &\\phantom{{}={}}\n      \\parens*{\\qforall \\xi p(\\xi, \\eta, f(\\eta))}[f(\\eta) \\mapsto g(\\eta, \\xi)]\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/renaming}} = \\\\ &=\n      \\qforall \\zeta \\parens[\\Big]{p(\\xi, \\eta, f(\\eta))[\\xi \\mapsto \\zeta][f(\\eta) \\mapsto g(\\eta, \\xi)]}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula}} = \\\\ &=\n      \\qforall \\zeta p(\\zeta, \\eta, g(\\eta, \\xi)).\n    \\end{align*}\n  \\end{thmenum}\n\\end{example}\n\n\\begin{proposition}\\label{thm:renaming_assignment_compatibility}\n  We will show how \\hyperref[rem:first_order_substitution_renaming_justification]{syntactic renaming} is compatible with a certain \\enquote{semantic renaming}.\n\n  Fix a \\hyperref[def:first_order_syntax]{first-order language} \\( \\mscrL \\), a \\hyperref[def:first_order_structure]{structure} \\( \\mscrX = (X, I) \\) on \\( \\mscrL \\) and a \\hyperref[def:first_order_valuation/variable_assignment]{variable assignment} \\( v \\) in \\( \\mscrX \\).\n\n  \\begin{thmenum}\n    \\thmitem{thm:renaming_assignment_compatibility/terms} For any term \\( \\tau \\) and any two variables \\( \\xi \\) and \\( \\eta \\), we have\n    \\begin{equation}\\label{eq:thm:renaming_assignment_compatibility/terms}\n      \\tau\\Bracks{v_{\\xi \\mapsto \\eta}}\n      =\n      \\parens[\\Big]{ \\tau[\\xi \\mapsto \\eta] }\\Bracks{v}.\n    \\end{equation}\n
\\thmitem{thm:renaming_assignment_compatibility/formulas} For any formula \\( \\varphi \\), any variable \\( \\xi \\) and any other variable \\( \\eta \\) not in \\( \\boldop{Var}(\\varphi) \\), we have\n    \\begin{equation}\\label{eq:thm:renaming_assignment_compatibility/formulas}\n      \\varphi\\Bracks{v_{\\xi \\mapsto \\eta}}\n      =\n      \\parens[\\Big]{ \\varphi[\\xi \\mapsto \\eta] }\\Bracks{v}.\n    \\end{equation}\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  In both cases, we use structural induction on the definition of the substitution.\n\n  \\SubProofOf{thm:renaming_assignment_compatibility/terms}\n\n  \\begin{itemize}\n    \\item If \\( \\tau = \\xi \\), then\n    \\begin{equation*}\n      \\tau[\\xi \\mapsto \\eta] = \\xi[\\xi \\mapsto \\eta] = \\eta\n    \\end{equation*}\n    and \\eqref{eq:thm:renaming_assignment_compatibility/terms} follows directly.\n\n    \\item If \\( \\tau \\) is a variable and \\( \\tau \\neq \\xi \\), then\n    \\begin{equation*}\n      \\tau[\\xi \\mapsto \\eta] = \\tau\n    \\end{equation*}\n    and \\eqref{eq:thm:renaming_assignment_compatibility/terms} again holds trivially.\n\n    \\item If \\( \\tau = f(\\tau_1, \\ldots, \\tau_n) \\) and if the inductive hypothesis holds for \\( \\tau_1, \\ldots, \\tau_n \\), then\n    \\begin{balign*}\n      \\parens[\\Big]{ \\tau[\\xi \\mapsto \\eta] }\\Bracks{v}\n      &=\n      \\parens[\\Big]{ f(\\tau_1[\\xi \\mapsto \\eta], \\ldots, \\tau_n[\\xi \\mapsto \\eta]) }\\Bracks{v}\n      = \\\\ &=\n      I(f) \\parens[\\Bigg]{ \\parens[\\Big]{ \\tau_1[\\xi \\mapsto \\eta] }\\Bracks{v}, \\ldots, \\parens[\\Big]{ \\tau_n[\\xi \\mapsto \\eta] }\\Bracks{v} }\n      \\reloset {\\T{ind.}} = \\\\ &=\n      I(f) \\parens[\\Big]{ \\tau_1\\Bracks{v_{\\xi \\mapsto \\eta}}, \\ldots, \\tau_n\\Bracks{v_{\\xi \\mapsto \\eta}} }\n      = \\\\ &=\n      \\parens[\\Big]{ f(\\tau_1, \\ldots, \\tau_n) }\\Bracks{v_{\\xi \\mapsto \\eta}}\n      = \\\\ &=\n      \\tau\\Bracks{v_{\\xi \\mapsto \\eta}}.\n    \\end{balign*}\n  \\end{itemize}\n\n  In all cases, \\eqref{eq:thm:renaming_assignment_compatibility/terms} holds.\n\n  \\SubProofOf{thm:renaming_assignment_compatibility/formulas}\n  \\hfill\n  \\begin{itemize}\n    \\item If \\( \\varphi \\in \\set{ \\top, \\bot } \\), then \\( \\varphi \\) has no subterms and thus \\eqref{eq:thm:renaming_assignment_compatibility/formulas} holds vacuously.\n\n    \\item If \\( \\varphi = p(\\tau_1, \\ldots, \\tau_n) \\), then since \\eqref{eq:thm:renaming_assignment_compatibility/terms} holds for all \\( \\tau_k \\), we have\n    \\begin{equation*}\n      \\parens[\\Big]{ \\tau_k[\\xi \\mapsto \\eta] }\\Bracks{v} = \\tau_k\\Bracks{v_{\\xi \\mapsto \\eta}}\n    \\end{equation*}\n    and thus\n    \\begin{equation*}\n      I(p)\\parens[\\Big]{ \\parens[\\Big]{ \\tau_1[\\xi \\mapsto \\eta] }\\Bracks{v}, \\ldots, \\parens[\\Big]{ \\tau_n[\\xi \\mapsto \\eta] }\\Bracks{v} }\n      \\reloset {\\T{ind.}} =\n      I(p)\\parens[\\Big]{ \\tau_1\\Bracks{v_{\\xi \\mapsto \\eta}}, \\ldots, \\tau_n\\Bracks{v_{\\xi \\mapsto \\eta}} }.\n    \\end{equation*}\n\n    Therefore,\n    \\begin{balign*}\n      \\parens[\\Big]{ \\varphi[\\xi \\mapsto \\eta] }\\Bracks{v}\n      &=\n      \\parens[\\Big]{ p(\\tau_1[\\xi \\mapsto \\eta], \\ldots, \\tau_n[\\xi \\mapsto \\eta]) }\\Bracks{v}\n      = \\\\ &=\n      \\parens[\\Big]{ p(\\tau_1, \\ldots, \\tau_n) }\\Bracks{v_{\\xi \\mapsto \\eta}}\n      = \\\\ &=\n      \\varphi\\Bracks{v_{\\xi \\mapsto \\eta}}.\n    \\end{balign*}\n
\\item The case \\( \\varphi = \\tau_1 \\doteq \\tau_2 \\) is proved analogously.\n\n    \\item The cases \\( \\varphi = \\neg \\psi \\) and \\( \\varphi = \\psi_1 \\bincirc \\psi_2 \\) are proved in a straightforward manner.\n\n    \\item Let \\( \\varphi = \\qforall \\zeta \\psi \\), where the inductive hypothesis holds for \\( \\psi \\). We consider two cases.\n    \\begin{itemize}\n      \\item Suppose that \\( \\zeta = \\xi \\). By definition, we have\n      \\begin{equation*}\n        \\varphi[\\xi \\mapsto \\eta]\n        =\n        \\varphi,\n      \\end{equation*}\n      hence \\eqref{eq:thm:renaming_assignment_compatibility/formulas} holds trivially.\n\n      \\item Suppose that \\( \\zeta \\neq \\xi \\). It follows that\n      \\begin{equation*}\n        \\varphi[\\xi \\mapsto \\eta]\n        =\n        \\qforall \\zeta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }.\n      \\end{equation*}\n\n      \\begin{itemize}\n        \\item If \\( \\parens[\\Big]{\\varphi[\\xi \\mapsto \\eta]}\\Bracks{v} = T \\), by definition of \\hyperref[def:first_order_valuation/formula_valuation]{quantifier formula valuation}, for any \\( x \\in X \\) we have\n        \\begin{equation}\\label{eq:thm:renaming_assignment_compatibility/formulas/true_modified_assignment}\n          \\parens[\\Bigg]{\\underbrace{\\qforall \\zeta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }}_{\\varphi[\\xi \\mapsto \\eta]} }\\Bracks{v}\n          =\n          \\parens[\\Big]{\\psi[\\xi \\mapsto \\eta]}\\Bracks{v_{\\zeta \\mapsto x}}\n          =\n          T.\n        \\end{equation}\n\n        On the other hand, by the inductive hypothesis,\n        \\begin{equation*}\n          \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }\\Bracks{v} = \\psi\\Bracks{v_{\\xi \\mapsto \\eta}}\n        \\end{equation*}\n        and, as a special case, for any \\( x \\in X \\),\n        \\begin{equation}\\label{eq:thm:renaming_assignment_compatibility/formulas/ind_hyp_modified_assignment}\n          \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }\\Bracks{v_{\\zeta \\mapsto x}} = \\psi\\Bracks{v_{\\xi \\mapsto \\eta, \\zeta \\mapsto x}}.\n        \\end{equation}\n\n        Combining \\eqref{eq:thm:renaming_assignment_compatibility/formulas/true_modified_assignment} and \\eqref{eq:thm:renaming_assignment_compatibility/formulas/ind_hyp_modified_assignment}, we obtain\n        \\begin{equation*}\n          \\parens[\\Big]{ \\varphi[\\xi \\mapsto \\eta] }\\Bracks{v}\n          =\n          \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }\\Bracks{v_{\\zeta \\mapsto x}}\n          =\n          \\underbrace{\\psi\\Bracks{v_{\\xi \\mapsto \\eta, \\zeta \\mapsto x}}}_{T \\T*{for all} x \\in X}\n          =\n          \\varphi\\Bracks{v_{\\xi \\mapsto \\eta}},\n        \\end{equation*}\n        which proves the case.\n\n        \\item If \\( \\parens[\\Big]{\\varphi[\\xi \\mapsto \\eta]}\\Bracks{v} = F \\), then there exists \\( x \\in X \\) such that\n        \\begin{equation*}\n          \\parens[\\Big]{\\psi[\\xi \\mapsto \\eta]}\\Bracks{v_{\\zeta \\mapsto x}} = F.\n        \\end{equation*}\n\n        Since \\eqref{eq:thm:renaming_assignment_compatibility/formulas/ind_hyp_modified_assignment} holds by the inductive hypothesis, we have\n        \\begin{equation*}\n          \\psi\\Bracks{v_{\\xi \\mapsto \\eta, \\zeta \\mapsto x}} = F\n        \\end{equation*}\n        for the same \\( x \\).\n\n        It follows that \\( \\varphi\\Bracks{v_{\\xi \\mapsto \\eta}} = F \\), which proves the case.\n      \\end{itemize}\n    \\end{itemize}\n
\\item We can prove the case \\( \\varphi = \\qexists \\zeta \\psi \\) using double negation on the previous case.\n  \\end{itemize}\n\n  In all cases, \\eqref{eq:thm:renaming_assignment_compatibility/formulas} holds.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:first_order_substitution_equivalence}\n  Analogously to \\fullref{thm:propositional_substitution_equivalence}, we will show that all types of substitution defined in \\fullref{def:first_order_substitution} preserve the corresponding \\hyperref[def:first_order_semantics]{semantics}.\n\n  By induction, this proposition also holds for \\hyperref[def:propositional_substitution/simultaneous]{simultaneous substitution}.\n\n  Fix a \\hyperref[def:first_order_structure]{structure} \\( \\mscrX = (X, I) \\) and a \\hyperref[def:first_order_valuation/variable_assignment]{variable assignment} \\( v \\).\n\n  \\begin{thmenum}\n    \\thmitem{thm:first_order_substitution_equivalence/propositional} As in \\fullref{def:first_order_substitution/propositional}, let \\( \\varphi \\) be a \\hyperref[def:propositional_syntax/formula]{propositional formula} with variables \\( {V = \\set{ P_1, \\ldots, P_n }} \\) and let \\( \\Theta = \\set{ \\theta_1, \\ldots, \\theta_n } \\) be a set of \\hyperref[def:first_order_syntax/formula]{first-order formulas}.\n\n    Furthermore, let \\( J \\) be a \\hyperref[def:propositional_valuation/interpretation]{propositional interpretation} such that, for all \\( k = 1, \\ldots, n \\),\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/propositional/compatibility}\n      P_k\\Bracks{J} = \\theta_k\\Bracks{v}.\n    \\end{equation}\n\n    Then\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/propositional}\n      \\parens[\\Big]{ \\varphi[V \\mapsto \\Theta] }\\Bracks{v} = \\varphi \\Bracks{J}.\n    \\end{equation}\n\n    In particular, \\( \\vDash \\varphi \\) in the sense of \\fullref{def:propositional_semantics/tautology} implies \\( \\vDash \\varphi[V \\mapsto \\Theta] \\) in the sense of \\fullref{def:first_order_semantics/tautology}.\n\n    \\thmitem{thm:first_order_substitution_equivalence/term_in_term} Let \\( \\tau \\) be a \\hyperref[def:first_order_syntax/term]{first-order term} and let \\( \\kappa \\) be a \\hyperref[def:first_order_syntax/subterm]{subterm} of \\( \\tau \\). Let \\( \\mu \\) be another term such that\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/term_in_term/compatibility}\n      \\mu\\Bracks{v} = \\kappa\\Bracks{v}.\n    \\end{equation}\n\n    Then\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/term_in_term}\n      \\tau[\\kappa \\mapsto \\mu]\\Bracks{v} = \\tau\\Bracks{v}.\n    \\end{equation}\n\n    \\thmitem{thm:first_order_substitution_equivalence/term_in_formula} Let \\( \\varphi \\) be a \\hyperref[def:first_order_syntax/formula]{first-order formula} and let \\( \\kappa \\) be a \\hyperref[def:first_order_syntax/formula_terms]{term of \\( \\varphi \\)}. Let \\( \\mu \\) be another term such that\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/term_in_formula/compatibility}\n      \\mu\\Bracks{v} = \\kappa\\Bracks{v}.\n    \\end{equation}\n\n    Then\n    \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/term_in_formula}\n      \\varphi[\\kappa \\mapsto \\mu]\\Bracks{v} = \\varphi\\Bracks{v}.\n    \\end{equation}\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  In all cases, we use structural induction on the definition of the substitution.
The inductive hypothesis for a formula is that the proposition holds for arbitrary substitutions and valuations.\n\n  \\SubProofOf{thm:first_order_substitution_equivalence/propositional} Let \\( \\varphi \\) be a propositional formula.\n  \\begin{itemize}\n    \\item If \\( \\varphi \\in \\set{ \\top, \\bot } \\), no substitution is performed and thus \\eqref{eq:thm:first_order_substitution_equivalence/propositional} holds trivially.\n\n    \\item If \\( \\varphi = P_k \\) for some \\( k = 1, \\ldots, n \\), then \\eqref{eq:thm:first_order_substitution_equivalence/propositional} follows from \\eqref{eq:thm:first_order_substitution_equivalence/propositional/compatibility}.\n\n    \\item If \\( \\varphi = \\neg \\psi \\) and if the inductive hypothesis holds for \\( \\psi \\), then\n    \\begin{equation*}\n      \\parens[\\Big]{ \\varphi[V \\mapsto \\Theta] }\\Bracks{v}\n      =\n      \\overline{\\parens[\\Big]{ \\psi[V \\mapsto \\Theta] }\\Bracks{v}}\n      \\reloset {\\T{ind.}} =\n      \\overline{\\psi \\Bracks{J}}\n      =\n      \\varphi \\Bracks{J}.\n    \\end{equation*}\n\n    \\item If \\( \\varphi = \\psi_1 \\bincirc \\psi_2, \\bincirc \\in \\Sigma \\) and if the inductive hypothesis holds for both \\( \\psi_1 \\) and \\( \\psi_2 \\), then\n    \\begin{equation*}\n      \\parens[\\Big]{ \\varphi[V \\mapsto \\Theta] }\\Bracks{v}\n      =\n      \\parens[\\Big]{ \\psi_1[V \\mapsto \\Theta] }\\Bracks{v} \\bincirc \\parens[\\Big]{ \\psi_2[V \\mapsto \\Theta] }\\Bracks{v}\n      \\reloset {\\T{ind.}} =\n      \\psi_1 \\Bracks{J} \\bincirc \\psi_2\\Bracks{J}\n      =\n      \\varphi\\Bracks{J}.\n    \\end{equation*}\n  \\end{itemize}\n\n  In all cases, \\eqref{eq:thm:first_order_substitution_equivalence/propositional} holds.\n\n  \\SubProofOf{thm:first_order_substitution_equivalence/term_in_term} The proof is identical to that of \\fullref{thm:renaming_assignment_compatibility/terms}.\n\n  \\SubProofOf{thm:first_order_substitution_equivalence/term_in_formula} The proof is identical to that of \\fullref{thm:renaming_assignment_compatibility/formulas} except for the special cases where \\hyperref[rem:first_order_substitution_renaming_justification]{renaming} occurs, i.e. \\( \\varphi = \\qforall \\xi \\psi \\) and \\( \\varphi = \\qexists \\xi \\psi \\), where\n  \\begin{itemize}\n    \\item \\( \\xi \\in \\boldop{Free}(\\mu) \\).\n    \\item \\( \\eta \\not\\in \\boldop{Var}(\\kappa) \\cup \\boldop{Var}(\\mu) \\cup \\boldop{Var}(\\psi) \\).\n    \\item The inductive hypothesis holds for \\( \\psi \\).\n  \\end{itemize}\n\n  We will only show the case \\( \\varphi = \\qforall \\xi \\psi \\) since the existential case is handled similarly.\n\n  Since \\( \\xi \\in \\boldop{Free}(\\mu) \\), we have\n  \\begin{equation*}\n    \\varphi[\\kappa \\mapsto \\mu]\n    =\n    \\qforall \\eta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta][\\kappa \\mapsto \\mu] },\n  \\end{equation*}\n  which does not allow us to use the inductive hypothesis directly.\n\n  We proceed to prove the statement by nested induction on the number of quantifiers. We have already shown the case of \\( 0 \\) quantifiers.
Suppose that the statement holds for all formulas with strictly less than \\( n \\) quantifiers and suppose that \\( \\varphi \\) has exactly \\( n \\) quantifiers.\n\n  Furthermore, for formulas with \\( n \\) quantifiers with \\( \\forall \\) as the outermost one, the non-renaming cases \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/trivial} and \\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct} hold. Therefore, since \\( \\eta \\not\\in \\boldop{Free}(\\mu) \\),\n  \\begin{equation}\\label{eq:thm:first_order_substitution_equivalence/term_in_formula/nested_induction}\n    \\begin{aligned}\n      &\\phantom{{}={}}\n      \\varphi[\\kappa \\mapsto \\mu]\\Bracks{v}\n      = \\\\ &=\n      \\parens[\\Bigg]{\\qforall \\eta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta][\\kappa \\mapsto \\mu] }}\\Bracks{v}\n      \\reloset {\\eqref{eq:def:first_order_substitution/term_in_formula/quantifiers/direct}} = \\\\ &=\n      \\parens[\\Bigg]{ \\parens*{ \\qforall \\eta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] } }[\\kappa \\mapsto \\mu] }\\Bracks{v}\n      \\reloset {\\T{ind.}} = \\\\ &=\n      \\parens[\\Bigg]{\\qforall \\eta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] } }\\Bracks{v},\n    \\end{aligned}\n  \\end{equation}\n  where we have implicitly used that \\( \\psi \\) has \\( n - 1 \\) quantifiers.\n\n  On the other hand, due to \\fullref{thm:renaming_assignment_compatibility/formulas},\n  \\begin{equation*}\n    \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }\\Bracks{v} = \\psi\\Bracks{v_{\\xi \\mapsto \\eta}}\n  \\end{equation*}\n  and, in particular, for any \\( x \\in X \\),\n  \\begin{equation*}\n    \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] }\\Bracks{v_{\\eta \\mapsto x}}\n    =\n    \\psi\\Bracks{v_{\\xi \\mapsto \\eta,\\eta \\mapsto x}}\n    =\n    \\psi\\Bracks{v_{\\xi \\mapsto x}},\n  \\end{equation*}\n  where the last equality holds because \\( \\eta \\not\\in \\boldop{Var}(\\psi) \\).\n\n  Hence,\n  \\begin{equation*}\n    \\underbrace{ \\parens[\\Bigg]{\\qforall \\eta \\parens[\\Big]{ \\psi[\\xi \\mapsto \\eta] } }\\Bracks{v} }_{\\varphi[\\xi \\mapsto \\eta]\\Bracks{v}}\n    =\n    \\underbrace{ \\parens[\\Big]{\\qforall \\xi \\psi }\\Bracks{v_{\\xi \\mapsto \\eta}} }_{\\varphi\\Bracks{v_{\\xi \\mapsto \\eta}}}.\n  \\end{equation*}\n\n  This proves \\eqref{eq:thm:first_order_substitution_equivalence/term_in_formula}.\n\\end{proof}\n\n\\begin{remark}\\label{rem:predicate_formula}\n  As explained in \\fullref{rem:first_order_formula_conventions/necessary_signature}, we avoid adding to a language more predicates than necessary. For this reason, we sometimes use \\term{predicate formulas}. For example, if \\( \\leq \\) is a \\hyperref[def:partially_ordered_set/theory]{partial order} symbol and we want to have a predicate for whether \\( \\xi \\) is the \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{bottom element}, we can define the formula\n  \\begin{equation}\\taglabel[\\op{IsBottom}]{rem:predicate_formula/bottom}\n    \\ref{rem:predicate_formula/bottom}[\\xi] \\coloneqq \\qforall \\eta (\\xi \\leq \\eta).\n  \\end{equation}\n\n  Note that \\( [\\xi] \\) is only a notational convenience for highlighting which variables are free, the actual formula is named \\( \\op{IsBottom} \\). 
This is consistent with \\fullref{rem:first_order_formula_valuation_without_variable_assignment} which allows us to write \\( \\op{IsBottom}[\\eta] \\) rather than \\( \\op{IsBottom}[\\xi \\mapsto \\eta] \\) to verify if \\( \\eta \\) is a bottom element\n\\end{remark}\n\n\\begin{proposition}\\label{thm:first_order_quantifiers_are_dual}\n  For any formula \\( \\varphi \\) and any variable \\( \\xi \\) over \\( \\mscrL \\), we have the following equivalences:\n  \\begin{align}\n    \\neg \\qforall \\xi \\varphi &\\gleichstark \\qexists \\xi \\neg \\varphi \\label{thm:first_order_quantifiers_are_dual/negation_of_universal} \\\\\n    \\neg \\qexists \\xi \\varphi &\\gleichstark \\qforall \\xi \\neg \\varphi \\label{thm:first_order_quantifiers_are_dual/negation_of_existential}\n  \\end{align}\n\\end{proposition}\n\\begin{proof}\n  The two equivalences are connected using \\hyperref[thm:boolean_equivalences/double_negation]{double negation}. We will only prove \\eqref{thm:first_order_quantifiers_are_dual/negation_of_universal}.\n\n  Let \\( \\mscrX = (X, I) \\) be a structure over \\( \\mscrL \\) and let \\( v \\) be a variable assignment. Then\n  \\begin{align*}\n    (\\neg \\qforall \\xi \\varphi)\\Bracks{v}\n    &=\n    \\overline{(\\qforall \\xi \\varphi)\\Bracks{v}}\n    = \\\\ &=\n    \\overline{\\bigwedge\\set{ \\varphi\\Bracks{v_{\\xi \\mapsto x}} \\given x \\in X }}\n    \\reloset {\\eqref{eq:thm:de_morgans_laws/complement_of_meet}} = \\\\ &=\n    \\bigvee\\set{ \\overline{\\varphi\\Bracks{v_{\\xi \\mapsto x}}} \\given x \\in X }\n    = \\\\ &=\n    \\bigvee\\set{ (\\neg \\varphi)\\Bracks{v_{\\xi \\mapsto x}} \\given x \\in X }\n    = \\\\ &=\n    (\\qexists \\xi \\neg \\varphi)\\Bracks{v}.\n  \\end{align*}\n\\end{proof}\n\n\\begin{proposition}\\label{thm:implicit_universal_quantification}\n  For any formula \\( \\varphi \\) and any variable \\( \\xi \\) over \\( \\mscrL \\), the formulas \\( \\varphi \\) and \\( \\qforall \\xi \\varphi \\) are \\hyperref[def:first_order_semantics/equivalence]{semantically equivalent}.\n\n  This allows us to skip quantifiers when writing formulas without changing their validity. Given a formula \\( \\varphi \\) with free variables \\( \\xi_1, \\ldots, \\xi_n \\), we call\n  \\begin{equation*}\n    \\qforall {\\xi_1} \\cdots \\qforall {\\xi_n} \\varphi\n  \\end{equation*}\n  its \\term{universal closure} and say that \\( \\varphi \\) itself is \\term{implicitly universally quantified}. Universal closures of quantifierless formulas are called \\term{universal formulas}.\n\n  See \\fullref{ex:def:first_order_natural_deduction_system/eigenvariables/invalid_universal_closure} for how this fails for derivability.\n\\end{proposition}\n\\begin{proof}\n  Let \\( \\mscrX = (X, I) \\) be a structure that satisfies \\( \\varphi \\). Let \\( v \\) be a variable assignment in \\( \\mscrX \\). Then for any \\( x \\in X \\), the modified variable assignment \\( v_{\\xi \\mapsto x} \\) also satisfies \\( \\varphi \\), i.e.\n  \\begin{equation*}\n    \\varphi\\Bracks{v} = \\varphi\\Bracks{v_{\\xi \\mapsto x}} = T.\n  \\end{equation*}\n\n  Thus, \\( \\mscrX \\) is also a model for \\( \\qforall \\xi \\varphi \\).\n\n  Conversely, suppose that \\( \\mscrX \\) satisfies \\( \\qforall \\xi \\varphi \\) and \\( v \\) is any variable assignment. Then\n  \\begin{equation*}\n    \\varphi\\Bracks{v_{\\xi \\mapsto x}} = T\n  \\end{equation*}\n  for any \\( x \\), including \\( x \\coloneqq v(\\xi) \\). 
Thus,\n  \\begin{equation*}\n    \\varphi\\Bracks{v_{\\xi \\mapsto v(\\xi)}} = \\varphi\\Bracks{v} = T.\n  \\end{equation*}\n\n  Therefore, \\( \\mscrX \\) is also a model for \\( \\varphi \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:quantifier_satisfiability}\n  Let \\( \\mscrL \\) be a first-order language, \\( \\varphi \\) be a formula, \\( \\xi \\) be any variable and \\( \\tau \\) be a \\hyperref[def:first_order_syntax/ground_term]{ground term} in \\( \\mscrL \\). The following hold:\n\n  \\begin{thmenum}\n    \\thmitem{thm:quantifier_satisfiability/universal} \\( \\qforall \\xi \\varphi \\vDash \\varphi[\\xi \\mapsto \\tau] \\),\n\n    \\thmitem{thm:quantifier_satisfiability/existential} \\( \\varphi[\\xi \\mapsto \\tau] \\vDash \\qexists \\xi \\varphi \\).\n  \\end{thmenum}\n\n  See also \\fullref{def:first_order_natural_deduction_system/terms} for inference rules corresponding to this proposition.\n\\end{proposition}\n\\begin{proof}\n  The proof is very straightforward, but the technical details make it look a bit more complicated.\n\n  First note that if the formulas on the left are unsatisfiable, the proof is trivial. Hence, we will assume that they are satisfiable.\n\n  \\SubProofOf{thm:quantifier_satisfiability/universal} From \\fullref{thm:implicit_universal_quantification} it follows that \\( \\qforall \\xi \\varphi \\vDash \\varphi \\). If \\( \\xi \\) is not free in \\( \\varphi \\), then \\( \\varphi[\\xi \\mapsto \\tau] = \\varphi \\) and the proof is finished. Suppose that \\( \\xi \\) is free in \\( \\varphi \\).\n\n  Let \\( \\mscrX = (X, I) \\) be a model of \\( \\qforall \\xi \\varphi \\). Let \\( v \\) be a variable assignment in \\( \\mscrX \\) and let \\( t \\coloneqq \\tau\\Bracks{v} \\). To avoid the case where \\( \\xi \\in \\boldop{Var}(\\tau) \\), we replace it with a variable \\( \\eta \\) occurring in neither \\( \\tau \\) nor \\( \\varphi \\). Then\n  \\begin{align*}\n    \\varphi[\\xi \\mapsto \\tau]\\Bracks{v}\n    &=\n    \\varphi[\\xi \\mapsto \\eta][\\eta \\mapsto \\tau]\\Bracks{v}\n    = \\\\ &=\n    \\varphi[\\xi \\mapsto \\eta][\\eta \\mapsto \\tau]\\Bracks{v_{\\eta \\mapsto t}}\n    \\reloset {\\ref{thm:first_order_substitution_equivalence/term_in_formula}} = \\\\ &=\n    \\varphi[\\xi \\mapsto \\eta]\\Bracks{v_{\\eta \\mapsto t}}\n    \\reloset {\\ref{thm:renaming_assignment_compatibility/formulas}} = \\\\ &=\n    \\varphi\\Bracks{v_{\\xi \\mapsto \\eta, \\eta \\mapsto t}}\n    = \\\\ &=\n    \\varphi\\Bracks{v_{\\xi \\mapsto t}}.\n  \\end{align*}\n\n  Since \\( (\\qforall \\xi \\varphi)\\Bracks{v} = T \\), by definition of valuation of \\( \\forall \\) we have\n  \\begin{equation*}\n    \\varphi\\Bracks{v_{\\xi \\mapsto t}} = T.\n  \\end{equation*}\n\n  Therefore, \\( \\varphi[\\xi \\mapsto \\tau]\\Bracks{v} = T \\) and, since \\( v \\) was chosen arbitrarily, we have \\( \\mscrX \\vDash \\varphi[\\xi \\mapsto \\tau] \\).\n\n  \\SubProofOf{thm:quantifier_satisfiability/existential} For any model \\( \\mscrX = (X, I) \\) of \\( \\varphi[\\xi \\mapsto \\tau] \\), any assignment \\( v \\) satisfies \\( \\varphi[\\xi \\mapsto \\tau] \\). Thus, \\( \\varphi\\Bracks{v_{\\xi \\mapsto t}} = T \\) for \\( t \\coloneqq \\tau\\Bracks{v} \\) and hence \\( v \\) satisfies \\( \\qexists \\xi \\varphi \\). Therefore, \\( \\mscrX \\) is also a model of \\( \\qexists \\xi \\varphi \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:existential_quantifier_removal}\n  Let \\( \\varphi \\) be a formula in the language \\( \\mscrL \\) and let \\( \\xi \\) be a variable.
If \\( c \\) is a constant that does not occur in \\( \\varphi \\), the formulas \\( \\varphi[\\xi \\mapsto c] \\) and \\( \\qexists \\xi \\varphi \\) are equisatisfiable.\n\n  If there is no such constant in \\( \\mscrL \\), we can instead define the extension \\( \\widetilde \\mscrL \\) of \\( \\mscrL \\) by adjoining a new constant \\( c \\).\n\n  This general procedure of creating a new extension language in order to obtain an equisatisfiable formula without the outer quantifier is called \\term{existential quantifier elimination}.\n\\end{proposition}\n\\begin{proof}\n  \\Fullref{thm:quantifier_satisfiability/existential} gives \\( \\varphi[\\xi \\mapsto c] \\vDash \\qexists \\xi \\varphi \\); in particular, if \\( \\varphi[\\xi \\mapsto c] \\) is satisfiable, then so is \\( \\qexists \\xi \\varphi \\).\n\n  For the other direction, suppose that \\( \\mscrX = (X, I) \\) satisfies \\( \\qexists \\xi \\varphi \\). Fix a variable assignment \\( v \\). Then there exists a value \\( x \\) such that\n  \\begin{equation}\\label{eq:thm:quantifier_satisfiability/existential/existence_modified_assignment}\n    \\varphi\\Bracks{v_{\\xi \\mapsto x}} = T.\n  \\end{equation}\n\n  Define the interpretation\n  \\begin{equation*}\n    \\widetilde I (a) \\coloneqq \\begin{cases}\n      x,    &a = c, \\\\\n      I(a), &a \\in \\boldop{Func} \\setminus \\set{ c }, \\\\\n      I(a), &a \\in \\boldop{Pred}\n    \\end{cases}\n  \\end{equation*}\n  as \\( I \\) modified at \\( c \\), so that \\( c\\Bracks{v} = x \\) in the new structure. Since the new structure \\( (X, \\widetilde I) \\) has the same domain, \\( v \\) is a variable assignment in it as well; we will nevertheless denote it by \\( \\widetilde v \\) in order to distinguish between valuations in the two structures. It remains to show that \\( (X, \\widetilde I) \\) is a model of \\( \\varphi[\\xi \\mapsto c] \\).\n\n  Since \\( \\xi\\Bracks{\\widetilde v_{\\xi \\mapsto x}} = c\\Bracks{\\widetilde v_{\\xi \\mapsto x}} = x \\) and since \\( c \\) does not occur in \\( \\varphi \\), from \\fullref{thm:first_order_substitution_equivalence/term_in_formula} it follows that\n  \\begin{equation*}\n    \\varphi[\\xi \\mapsto c]\\Bracks{\\widetilde v} = \\varphi[\\xi \\mapsto c]\\Bracks{\\widetilde v_{\\xi \\mapsto x}} = \\varphi\\Bracks{\\widetilde v_{\\xi \\mapsto x}} = \\varphi\\Bracks{v_{\\xi \\mapsto x}} \\reloset{\\eqref{eq:thm:quantifier_satisfiability/existential/existence_modified_assignment}} = T,\n  \\end{equation*}\n  where the first equality holds because \\( \\xi \\) is not free in \\( \\varphi[\\xi \\mapsto c] \\) and the third because \\( \\varphi \\) does not contain \\( c \\).\n\n  Since the above holds for any assignment \\( v \\), the structure \\( (X, \\widetilde I) \\) is a model of \\( \\varphi[\\xi \\mapsto c] \\).\n\\end{proof}\n\n\\begin{theorem}[First-order semantic deduction theorem]\\label{thm:semantic_deduction_theorem}\\mcite[thm.
16.29]{OpenLogicFull}\n  Let \\( \\Gamma \\) be a set of formulas over some first-order language, let \\( \\psi \\) be an arbitrary formula and let \\( \\varphi \\) be a \\hyperref[def:first_order_syntax/ground_formula]{closed formula}.\n\n  Then the entailment \\( \\Gamma, \\varphi \\vDash \\psi \\) holds if and only if \\( \\Gamma \\vDash \\varphi \\to \\psi \\) holds.\n\n  See \\fullref{rem:deduction_with_free_variables} for the importance of the condition that \\( \\varphi \\) is a closed formula.\n\n  Due to \\fullref{rem:propositional_logic_as_first_order_logic}, this theorem also holds for propositional formulas.\n\n  Compare this result with \\fullref{thm:syntactic_deduction_theorem}.\n\\end{theorem}\n\\begin{proof}\n  \\SufficiencySubProof Let \\( \\Gamma, \\varphi \\vDash \\psi \\) and let \\( \\mscrX = (X, I) \\) be a model for \\( \\Gamma \\).\n\n  \\begin{itemize}\n    \\item If \\( \\mscrX \\vDash \\varphi \\), then \\( \\mscrX \\vDash \\Gamma \\cup \\set{ \\varphi } \\) and from our assumption we conclude \\( \\mscrX \\vDash \\psi \\). Hence, for any variable assignment \\( v \\) we have\n    \\begin{equation*}\n      (\\varphi \\rightarrow \\psi)\\Bracks{v}\n      =\n      (\\varphi\\Bracks{v} \\rightarrow \\psi\\Bracks{v})\n      =\n      (T \\rightarrow T)\n      =\n      T.\n    \\end{equation*}\n\n    \\item If \\( \\mscrX \\not\\vDash \\varphi \\), then, since \\( \\varphi \\) is closed and the valuation \\( \\varphi\\Bracks{v} \\) does not depend on the variable assignment \\( v \\), for any \\( v \\) we have\n    \\begin{equation}\\label{eq:thm:semantic_deduction_theorem/closedness_condition}\n      (\\varphi \\rightarrow \\psi)\\Bracks{v}\n      =\n      (\\varphi\\Bracks{v} \\rightarrow \\psi\\Bracks{v})\n      =\n      (F \\rightarrow \\psi\\Bracks{v})\n      =\n      T.\n    \\end{equation}\n\n  \\end{itemize}\n\n  In both cases we conclude that \\( \\mscrX \\vDash \\varphi \\rightarrow \\psi \\). Therefore, \\( \\Gamma \\vDash \\varphi \\rightarrow \\psi \\).\n\n  \\NecessitySubProof Let \\( \\Gamma \\vDash \\varphi \\rightarrow \\psi \\) and let \\( \\mscrX = (X, I) \\) be a model for \\( \\Gamma \\cup \\set{ \\varphi } \\). Let \\( v \\) be an arbitrary assignment in \\( \\mscrX \\).\n\n  Obviously \\( \\mscrX \\vDash \\Gamma \\) and \\( \\mscrX \\vDash \\varphi \\). We thus have \\( (\\varphi \\rightarrow \\psi)\\Bracks{v} = T \\) and \\( \\varphi\\Bracks{v} = T \\), which only leaves \\( \\psi\\Bracks{v} = T \\) as a possible option.\n\n  Hence, \\( \\mscrX \\vDash \\psi \\) and, consequently, \\( \\Gamma \\cup \\set{ \\varphi } \\vDash \\psi \\).\n\\end{proof}\n\n\\begin{remark}\\label{rem:deduction_with_free_variables}\n  In order to highlight the importance of closed formulas in certain theorems, we will take a close look at the proof of \\fullref{thm:semantic_deduction_theorem}.\n\n  Note that \\eqref{eq:thm:semantic_deduction_theorem/closedness_condition} only holds because the formula \\( \\varphi \\) is closed. If it were not closed, we could only conclude that there exists a variable assignment \\( v_0 \\) such that \\( \\varphi\\Bracks{v_0} = F \\). Clearly then \\( (\\varphi \\rightarrow \\psi)\\Bracks{v_0} = T \\). 
But there may exist another assignment \\( v \\) such that \\( \\varphi\\Bracks{v} = T \\) and \\( \\psi\\Bracks{v} = F \\), which would imply that \\( \\mscrX \\not\\vDash \\varphi \\rightarrow \\psi \\).\n\\end{remark}\n", "meta": {"hexsha": "a22a8e04fd74e3c3c188a613402614e736e28cd1", "size": 44508, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/first_order_satisfiability.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/first_order_satisfiability.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/first_order_satisfiability.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.3568281938, "max_line_length": 1221, "alphanum_fraction": 0.6710254336, "num_tokens": 14152, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8175744850834648, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5552791805671256}}
{"text": "\\documentclass{article}\n\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{color}\n\\newtheorem{lemma}{Lemma}\n\n\\DeclareMathOperator{\\Ker}{Ker}\n\\let\\Im\\undefined\n\\DeclareMathOperator{\\Im}{Im}\n\\let\\span\\undefined\n\\DeclareMathOperator{\\span}{span}\n\n\\begin{document}\n\nLet us consider an optimisation problem\n\\begin{displaymath}\n\\begin{array}{rcl}\n \\bar{\\gamma} & := & \\arg \\min\\limits_{\\gamma \\in \\Omega} L_{\\Theta}(\\gamma) \\\\\n \\gamma & := & [\\gamma_1, \\dots, \\gamma_K] \\in \\mathbb{R}^{KT}, \\\\\n \\gamma_k & := & [\\gamma_k(1), \\dots, \\gamma_k(T)] \\in \\mathbb{R}^{T} \\\\\n L^{\\epsilon}_{\\Theta}(\\gamma) & := & \\frac{1}{T} b_{\\Theta}^T \\gamma + \\frac{\\epsilon^2}{T} \\gamma^T H \\gamma, \\\\\n \\Omega & := & \\lbrace \\gamma \\in \\mathbb{R}^{KT}: \\gamma \\geq 0 \\wedge \\sum\\limits_{k=1}^K \\gamma(t) = 1, \\forall t = 1,\\dots T \\rbrace\n\\end{array}\n\\end{displaymath}\nand $H \\in \\mathbb{R}^{KT,KT}$ is block-diagonal matrix, whose blocks $L_k \\in \\mathbb{R}^{T,T}$ are formed by Laplace matrix. \\newline\nIn what follows, we proved that this problem has always unique solution.\n\n\\section{Properties}\n\n\\begin{itemize}\n\\item $H$ is SPS (since blocks are SPS) and\n \\begin{equation}\n  \\label{eq:kerH}\n  \\begin{array}{rcl}  \n   \\Ker H & = & \\span \\lbrace [c, {\\bf 0}, \\dots, {\\bf 0}]^T, [{\\bf 0}, c, {\\bf 0} \\dots, {\\bf 0}]^T, \\dots, [{\\bf 0} \\dots, {\\bf 0}, c]^T \\rbrace \\subset \\mathbb{R}^{KT}, \\\\\n   c & := & [ 1, \\dots, 1] \\in \\mathbb{R}^T\n  \\end{array}\n \\end{equation}\n\\item $L^{\\epsilon}_{\\Theta}(\\gamma)$ is continuous (not strictly) convex function $\\mathbb{R}^{KT} \\rightarrow \\mathbb{R}$,\n%\\item gradient is given by $\\nabla_{\\gamma} L^{\\epsilon}_{\\Theta}(\\gamma) = \\frac{1}{T} b_{\\Theta} + \\frac{2\\epsilon^2}{T} H \\gamma$ (\"well known\" - can be proved using Taylor expansion),\n\\item $\\Omega \\subset \\mathbb{R}^{KT}$ is bounded closed convex set {\\color{red}(has to be proved?)},\n\\item the matrix $B = [I, \\dots I] \\in \\mathbb{R}^{T,KT}$ ($I \\in \\mathbb{R}^{T,T}$ denotes identity matrix) forms the equivalent definition of $\\Omega$ given by\n \\begin{displaymath}\n  \\Omega = \\lbrace \\gamma \\in \\mathbb{R}^{KT}: \\gamma \\geq 0 \\wedge B\\gamma = c \\rbrace\n \\end{displaymath}\n\\item $\\Ker H \\cap \\Ker B = \\lbrace 0 \\rbrace$, proof: let $d := [\\alpha_1 c, \\dots, \\alpha_K c] \\neq 0$ be a vector from $\\Ker H$, then\n\\begin{displaymath}\n Bd = \\left[\\sum\\limits_{k=1}^K \\alpha_k, \\dots, \\sum\\limits_{k=1}^K \\alpha_k \\right]^T\n\\end{displaymath}\nand because $d$ is nonzero (not all $\\alpha_k$ is equal zero), then $Bd \\neq 0$ and therefore $d \\notin \\Ker B$.\n\\end{itemize}\n\n\\begin{lemma}\n\\label{th:penalized}\nLet $A \\in \\mathbb{R}^{n \\times n}$ be a SPS matrix, let $B \\in \\mathbb{R}^{m \\times n}$, $\\rho > 0$, and let $\\Ker A \\cap \\Ker B = \\lbrace 0 \\rbrace$.\nThen matrix\n\\begin{displaymath}\nA_{\\rho} = A + \\rho B^T B\n\\end{displaymath}\nis SPD.\n\\end{lemma}\n\n\\begin{proof}\nLet us follow proof by Dost\\'{a}l \\cite{DosBOOK-2009}, Lemma 1.2. 
\\newline\nIf $x \\in \\mathbb{R}^n \\setminus \\lbrace 0 \\rbrace$ and $\\Ker A \\cap \\Ker B = \\lbrace 0 \\rbrace$, then either $Ax \\neq 0$ or $Bx \\neq 0$.\n% (equivalently either $\\Vert Ax \\Vert \\neq 0$ or $\\Vert Bx \\Vert \\neq 0$)\nSince $Ax \\neq 0$ is equivalent to $A^{\\frac{1}{2}}x \\neq 0$,\n% ($\\Vert Ax \\Vert \\neq 0$ is equivalent to $\\Vert A^{\\frac{1}{2}}x \\Vert \\neq 0$)\nwe get for $\\rho > 0$ \n\\begin{displaymath}\n x^T A_{\\rho} x = \n x^T (A + \\rho B^T B) x = x^T A x + \\rho x^T B^T B x =\n \\Vert A^{\\frac{1}{2}}x \\Vert^2 + \\rho \\Vert Bx \\Vert^2 > 0~\\mathrm{.} \n\\end{displaymath}\nThus $A_{\\rho}$ is positive definite.\n\\end{proof}
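\nThe lemma is easy to probe numerically. The following sketch (illustrative only; it uses randomly generated matrices that generically satisfy $\\Ker A \\cap \\Ker B = \\lbrace 0 \\rbrace$, not the specific $H$ and $B$ of our problem) compares the smallest eigenvalues of $A$ and $A_{\\rho}$:\n\\begin{verbatim}\n# Numerical illustration of the lemma on random data (sketch only).\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, m, rho = 5, 3, 0.7\nC = rng.standard_normal((n - 2, n))\nA = C.T @ C                      # SPS with a 2-dimensional kernel\nB = rng.standard_normal((m, n))  # generically Ker A and Ker B share only 0\nA_rho = A + rho * B.T @ B\nprint(np.linalg.eigvalsh(A).min())      # ~0: A is only positive SEMIdefinite\nprint(np.linalg.eigvalsh(A_rho).min())  # strictly positive: A_rho is SPD\n\\end{verbatim}\n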
\\section{\"The\" proof}\n\nLet us define the equivalent penalised optimisation problem\n\\begin{displaymath}\n \\bar{\\gamma} =  \\arg \\min\\limits_{\\gamma \\in \\Omega} L_{\\Theta}^{\\epsilon}(\\gamma) + \\frac{\\epsilon^2 \\rho}{T} \\Vert B \\gamma - c \\Vert^2 ,\n\\end{displaymath}\nwhere $\\rho > 0$ is an arbitrary number (the \\emph{penalisation} parameter).\nObviously, any solution of this problem is also a solution of the original problem and vice versa (the penalisation term is equal to $0$ for all feasible $\\gamma \\in \\Omega$).\nHowever, the objective function of the new problem can be written in the form\n\\begin{displaymath}\n  L_{\\Theta}^{\\epsilon}(\\gamma) + \\frac{\\epsilon^2 \\rho}{T} \\Vert B \\gamma - c \\Vert^2 = \\frac{\\epsilon^2}{T} \\gamma^T (H + \\rho B^T B) \\gamma + \\frac{1}{T} \\gamma^T (b_{\\Theta} - 2 \\epsilon^2 \\rho B^T c) + \\frac{\\epsilon^2 \\rho}{T} c^T c\n\\end{displaymath} \nand the Hessian matrix is given by $(H + \\rho B^T B)$, which is SPD (see Lemma \\ref{th:penalized}); consequently, the objective function is strictly convex (in this case, penalisation works as regularisation).\nSince a strictly convex QP on a closed convex set always has a unique solution (see the additional lemma below), the solution of the original problem is also unique.\n\nTherefore, we can conclude that in problems of this type, {\\it equality constraints regularise the original problem}.\n\n\\section{QP solvability - strictly convex cost function on convex set}\n\nFor completeness, let us review the proof of uniqueness of the solution of a QP with a strictly convex cost function (SPD Hessian matrix) on a closed convex set. \\newline\n\nLet $\\bar{x} \\in \\Omega$ be a solution of the problem \n\\begin{displaymath}\n \\min\\limits_{x \\in \\Omega} f(x), ~~~~ f(x) := \\frac{1}{2} x^T A x - b^T x,\n\\end{displaymath}\nwhere $A \\in \\mathbb{R}^{n,n}$ is an SPD matrix, $b \\in \\mathbb{R}^n$ and $\\Omega$ is a closed convex set. \\newline\nLet us consider an arbitrary $y \\in \\Omega \\setminus \\lbrace \\bar{x} \\rbrace$ (in what follows, we will show that $f(y) > f(\\bar{x})$)\nand let us denote $d := y - \\bar{x} \\neq 0$. Then using the definition of $f$ we can write\n\\begin{displaymath}\n \\begin{array}{rcl}\n f(y) = f(\\bar{x} + d) & = & \\frac{1}{2} (\\bar{x}+d)^T A (\\bar{x}+d) - b^T (\\bar{x} + d) \\\\\n & = & \\frac{1}{2} \\bar{x}^T A \\bar{x} + \\frac{1}{2} \\bar{x}^T A d + \\frac{1}{2} d^T A \\bar{x} + \\frac{1}{2} d^T A d - b^T \\bar{x} - b^T d \\\\\n \\mathit{\"A = A^T\"}~~~ & = & f(\\bar{x}) + d^T( A \\bar{x} - b) + \\frac{1}{2} d^T A d \\\\\n & > & f(\\bar{x}) \n \\end{array}\n\\end{displaymath}\nThe last inequality holds since\n\\begin{itemize}\n \\item $d^T A d > 0$ because $d \\neq 0$ and $A$ is SPD,\n \\item $d^T( A \\bar{x} - b) = d^T \\nabla f(\\bar{x}) \\geq 0$ since $\\bar{x}$ is a minimiser of $f$ and $\\Omega$ is convex (every direction from $\\bar{x}$ into the feasible set is an ascent (i.e. not descent) direction).\n\\end{itemize}\n\nAs a curiosity: in the proof we found that $f(\\bar{x} + d) = f(\\bar{x}) + d^T( A \\bar{x} - b) + \\frac{1}{2} d^T A d$, which is in fact the full Taylor expansion of $f$.\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "cfb9d0197be457c56b0f8c19a960047f1c3f2b3b", "size": 6583, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Documents/other/solvability/QPsolvability.tex", "max_stars_repo_name": "eth-cscs/PASC_inference", "max_stars_repo_head_hexsha": "de66682f07b65dd21c7ada2fda05f21156e8cf6d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2016-11-26T10:54:34.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-24T06:50:04.000Z", "max_issues_repo_path": "Documents/other/solvability/QPsolvability.tex", "max_issues_repo_name": "eth-cscs/PASC_inference", "max_issues_repo_head_hexsha": "de66682f07b65dd21c7ada2fda05f21156e8cf6d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documents/other/solvability/QPsolvability.tex", "max_forks_repo_name": "eth-cscs/PASC_inference", "max_forks_repo_head_hexsha": "de66682f07b65dd21c7ada2fda05f21156e8cf6d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-11-21T16:58:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-25T12:47:19.000Z", "avg_line_length": 50.6384615385, "max_line_length": 241, "alphanum_fraction": 0.6474251861, "num_tokens": 2444, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.721743206297598, "lm_q2_score": 0.7690802423634961, "lm_q1q2_score": 0.5550784400235634}}
{"text": "\\chapter{Compute the diameter}\n\\section{Definition of diameter}\nThe diameter is the maximum distance between two vertices\n\\section{A simple approach}\nIn the general case the best approach is calculate APSP (All Pairs Shortest Path) then find the maximum distance, that has a time complexity of $ O(n^{2.38}) $ where the 2.38 derive from some optimization on matrix calculation(see below) for dense graph while $ O(mn) $ for sparse graph. The problem with this simple approach is that is not usable in real-world graphs because contains millions of nodes and edges.\n\\section{Lower bound for time complexity of diameter computation}\n\\subsection{SETH Hypothesis}\nThe Strong Exponential Time Hypothesis (SETH) says that there not exists an algorithm that solves k-SAT in less than $ O((2-\\epsilon)^n) $ where $ \\epsilon > 0 $\n\\subsection{Reduction between two problems}\nGiven two problems A and B, the relative sets of instances of the problem $ I_A $ and $ I_B $ and the solutions set $ A(x) $ and $ B(x) $ of a instance $ x $ we can say that A is reducible to B if exists two functions \\textit{f} and \\textit{g} where given $ x \\in I_A $, $ x' = f(x) \\in I_B $, $ y' = B(x') $ and $ g(y', x) \\in A(x) $\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{img/problem_reduction}\n\t\\caption{Graphical representation of problem reduction}\n\t\\label{fig:problemreduction}\n\\end{figure}\n\\subsection{K-SAT*}\nA variant of the k-sat problem is the $ K-SAT^* $ where between them change only the input: The $ K-SAT^* $ receive two sets of assignments $ X  $ and $ Y$ of size $ n = 2^{\\frac{m}{2}} $ to respectively the first half and second half of variables where m is the number of variables and a set of clauses $ C $.\\\\\nIn order to find an assignment that satisfy $ C $ we combine each first-half assignment of X to Y, then try to check if applied to the set of clauses returns \\textit{true}.\\\\\nSo the complexity of the algorithm is virtually $ O(n^2) $ on the input size, but if we substitute n in function of m (the number of variables) we obtain $ O((2^{\\frac{m}{2}})^2)  = O(2^m)$. Also this problem cannot have the complexity $ O(n^{2-\\varepsilon}) $ unless SETH is false.\nRemember that SAT clauses are in CNF form that means:\\\\ \\medskip\n\\noindent\n$ (X_{1}\\vee X_{2}\\vee \\cdots \\vee X_{n})\\wedge (Y_{1}\\vee X_{2}\\vee \\cdots \\vee X_{n})\\wedge (X_{1}\\vee Y_{2}\\vee \\cdots \\vee X_{n})\\wedge (Y_{1}\\vee Y_{2}\\vee \\cdots \\vee X_{n})\\wedge \\cdots \\wedge (Y_{1}\\vee Y_{2}\\vee \\cdots \\vee Y_{n}). 
\\subsection{Disjoint set problem}\nGiven a set of sets $C$, the solution is 1 when there exist two sets $ A,B \\in C $ such that $ A \\cap B = \\emptyset $.\nThe complexity of the straightforward algorithm is $ O(|C|^2) $.\n\\subsection{Reduction from K-SAT* to disjoint set}\nGiven the set of all clauses $ C $ and $X,Y$, respectively the assignments to the first and second half of the variables, we define the collection $ S = S_1 \\cup S_2 $ as follows:\n\\begin{center}\n\t$ S_1 := \\{ \\{t_1\\} \\cup \\{ c \\in C : x \\nvDash c \\} \\,:\\, x \\in X \\} $ \\\\ \\medskip\n\t$ S_2 := \\{ \\{t_2\\} \\cup \\{ c \\in C : y \\nvDash c \\} \\,:\\, y \\in Y \\} $\n\\end{center}\n$ t_1 $ and $ t_2 $ are only tokens that prevent an empty intersection between sets coming from the same half.\\\\\nTo summarise, a good assignment exists in $ K-SAT^* $ exactly when the disjoint-sets instance contains an empty intersection between two sets, one in $ S_1 $ and one in $ S_2 $.\\\\\nTo see why the reduction works: a good assignment must satisfy all the clauses, and this requirement is captured in the disjoint-sets instance by the intersection operator. Given $ s_1 \\in S_1 $ and $ s_2 \\in S_2 $, if $ s_1 \\cap s_2 \\ne \\emptyset $ then there exists a clause satisfied neither by the assignment behind $s_1$ nor by the one behind $s_2$, so that clause is false and the combined assignment is not good; conversely, a good combined assignment satisfies all the clauses, so no clause can lie in the intersection.\n\\subsection{From disjoint set to diameter computation}\nGiven in input a set of sets $ C $ over the variables $ X $, we build a clique on $ X $, then add a node for each subset in $ C $ and add an edge between the node $ c_i \\in C $ and $ x_j $ if $ x_j $ appears in the set $ c_i $.\\\\\nThen we can interpret the distance between $ c_i $ and $ c_j $ in the following way:\n\\begin{itemize}\n\t\\item $ d(c_i, c_j) = 2  \\rightarrow $ there exists an intersection between $ c_i $ and $ c_j $\n\t\\item $ d(c_i, c_j) = 3  \\rightarrow $ there is no intersection between $ c_i $ and $ c_j $\n\\end{itemize}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.5\\linewidth]{img/disjoint_set_to_diameter}\n\t\\caption{Graphical representation of reduction from disjoint sets to diameter computation}\n\t\\label{fig:disjointsettodiameter}\n\\end{figure}\n\\subsection{Complexity of diameter computation}\nWith the above reductions we have shown that the complexity of diameter computation cannot be of the form $ O(n^{2 - \\varepsilon}) $ unless SETH is false, so the complexity of diameter computation cannot be lower than $ O(n^2) $.\n\\section{Heuristic for computing the diameter}\n\\subsection{BFS and diameter}\nThe height of a BFS tree is a lower bound for the diameter. We can also use this height as an approximation of the diameter by doing a few BFS runs starting from a random vertex. Below we present two methods to approximate the diameter with BFS.
These approximations work well on various types of graphs (including social networks) but not on others, for example road networks.\n\\subsection{2-SWEEP}\n\\begin{enumerate}\n\t\\item Pick a random vertex \\textit{r}\n\t\\item Do a BFS from \\textit{r}\n\t\\item Pick \\textit{x}, one of the farthest vertices from \\textit{r}\n\t\\item Do a BFS from \\textit{x} and return the height of the BFS tree as the diameter estimate\n\\end{enumerate}\nComplexity: $ 2 \\cdot m $\n\n\\begin{figure}[H]\n\t\\includegraphics[width=0.4\\linewidth]{img/2sweep}\n\t\\caption{2-SWEEP graphical representation}\n\t\\label{fig:2sweep}\n\\end{figure}\n\\subsection{4-SWEEP}\n\\begin{enumerate}\n\t\\item Do a 2-SWEEP\n\t\\item Pick one of the middle vertices of the longest path found by the 2-SWEEP\n\t\\item Do a 2-SWEEP from that vertex\n\\end{enumerate}\nComplexity: $ 4 \\cdot m $
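\nA compact Python sketch of 2-SWEEP (illustration only; the graph is assumed to be a dict mapping each vertex to a list of its neighbours):\n\\begin{verbatim}\n# Sketch of the 2-SWEEP heuristic (illustration only).\nfrom collections import deque\n\ndef bfs_farthest(graph, source):\n    # return (a farthest vertex from source, its distance) via BFS\n    dist = {source: 0}\n    queue = deque([source])\n    far = source\n    while queue:\n        u = queue.popleft()\n        for w in graph[u]:\n            if w not in dist:\n                dist[w] = dist[u] + 1\n                far = w            # vertices are visited level by level\n                queue.append(w)\n    return far, dist[far]\n\ndef two_sweep(graph, r):\n    x, _ = bfs_farthest(graph, r)       # sweep 1: from a random vertex\n    _, height = bfs_farthest(graph, x)  # sweep 2: from the farthest x\n    return height                       # a lower bound for the diameter\n\\end{verbatim}\n4-SWEEP then simply repeats the procedure starting from a middle vertex of the path found by the first 2-SWEEP.\n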
\\section{Exact heuristic of diameter}\n\\subsection{Eccentricity}\nThe eccentricity $ecc(v)$ of a vertex $v$ is the maximum distance between $v$ and any other node. In formula, it is $ecc(v) = \\max_{u\\in V}d(u,v)$.\\\\\nWe can express the diameter in terms of eccentricity: it is the maximum eccentricity among all nodes, or in formula $diameter(G) =  \\max_{v \\in V}ecc(v) $ \n\\subsection{How the heuristic works and its complexity}\nThe intent of this exact heuristic is to find the maximum eccentricity among all nodes in order to return the diameter. The worst case does not change: it is always $ O(nm) $, so the reduction from $ K-SAT^* $ remains valid, but in the average case this heuristic is very effective in terms of time cost. For this purpose we use the concept of eccentricity, some definitions and a proof.\n\\subsection{Some terms}\n\\begin{itemize}\n\t\\item $ F(u)  = \\{v | d(u,v) = ecc(u)\\}$, the set of nodes at maximum distance from \\textit{u}\n\t\\item $ F_i(u)  = \\{v | d(u,v) = i\\}$, the set of nodes at distance \\textit{i} from \\textit{u} (e.g. $ F_{ecc(u)}(u) = F(u) $ and $ F_{1}(u) = Neighbours(u) $)\n\t\\item $ B_i(u) = \\max_{z \\in F_i(u)}ecc(z) $, the maximum eccentricity among nodes at distance \\textit{i} from \\textit{u}\n\\end{itemize}\n\\subsection{To the upper bound}\nLet us prove the following assertion:\nfor any $ 1 \\leq i \\leq ecc(u) $ and $ 1 \\leq k < i $ and for any $ x \\in F_{i-k}(u) $ such that $ ecc(x) > 2(i-1) $, there exists $ y \\in F_j(u) $ such that $ d(x,y) = ecc(x) $ with $ j \\geq i $.\nLet us first prove that $ j \\geq i $.\\\\\nWe begin with two observations:\n\\begin{itemize}\n\t\\item for $ x \\in F_i(u) $ or $ y \\in F_i(u) $ we have $ d(x,y) \\leq B_i(u) $, because $ d(x,y) \\leq \\min\\{ecc(x), ecc(y)\\} \\leq B_i(u) $. Remember that $ B_i(u) = \\max_{x \\in F_i(u)}ecc(x) $\n\t\\item for any $ 1 \\leq i, j \\leq ecc(u) $ and $ \\forall x \\in F_i(u) , y \\in F_j(u) $ we have $ d(x,y) \\leq i + j \\leq 2 \\max\\{i,j\\} $\n\\end{itemize}\nWith these observations we can prove that $ j \\geq i $.\nSuppose, for contradiction, that $ i > j $; let $ x \\in F_{i-k}(u) $ with $ ecc(x) > 2(i-1) $ and let $ y_x \\in F_j(u) $ be at distance $ ecc(x) $ from $ x $. We then have\n\\begin{center}\n\t$  2(i-1)< ecc(x) = d(x,y_x) \\leq 2\\max\\{i-k, j\\} \\leq 2\\max\\{i-k, i-1\\} = 2(i-1), $ \n\\end{center}\nwhich is a contradiction, so $ j \\geq i $.\\\\\nIn words, this means that if a node $x$ above level \\textit{i} has eccentricity more than $ 2(i-1) $, then a node at maximum distance from \\textit{x} lies below level \\textit{i} in the \\textit{BFS tree} of \\textit{u}.\\\\\nNow let us define \\textit{lb} as the maximum eccentricity seen below level \\textit{i}; then we have an upper bound for every $ x \\in F_i(u) $, namely $ ecc(x) \\leq \\max\\{lb, 2(i-1)\\} $. We can distinguish two cases for $ ecc(x) $:\n\\begin{itemize}\n\t\\item $ ecc(x) \\leq 2(i-1)  \\rightarrow $ then trivially $ ecc(x) \\leq \\max\\{lb, 2(i-1)\\} $\n\t\\item $ ecc(x) > 2(i-1)  \\rightarrow $ in this case, by the assertion above, there exists a node \\textit{y} below level \\textit{i} such that $ d(x,y) = ecc(x) $. The eccentricity of $ y $ is at least $ d(x,y) $, so $ ecc(y) \\geq ecc(x) $. In conclusion $ ecc(x)  \\leq ecc(y) \\leq lb $, so the inequality $ ecc(x) \\leq \\max\\{2(i-1), lb\\} $ is respected\n\\end{itemize}\n\\subsection{The algorithm}\nWith the upper bound $ \\max\\{2(i-1), lb\\} $ we can assert that if $ M > 2(i-1) $, then no node above level \\textit{i} has eccentricity greater than $ M $,\nso we can return $ M $.\nWe can schematise the algorithm in the following steps:\n\\begin{enumerate}\n\t\\item Do a BFS from a node \\textit{u}\n\t\\item Set $ i = ecc(u) \\text{ and } M = B_i(u)$ \n\t\\item If $ M > 2(i-1) \\text{ return } M; \\text{ else set } i = i-1  \\text{ and } M = \\max\\{M, B_i(u)\\}$, then repeat this step until a value is returned\n\\end{enumerate}", "meta": {"hexsha": "61b19a0dc5b1a1f5b6c926196573a37a07db05bd", "size": 10303, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/diameter.tex", "max_stars_repo_name": "Michedev/AAGM_resume", "max_stars_repo_head_hexsha": "31bcd6b58a39b19aa03a5aa13f8ad3e4d8f21ff3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/diameter.tex", "max_issues_repo_name": "Michedev/AAGM_resume", "max_issues_repo_head_hexsha": "31bcd6b58a39b19aa03a5aa13f8ad3e4d8f21ff3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/diameter.tex", "max_forks_repo_name": "Michedev/AAGM_resume", "max_forks_repo_head_hexsha": "31bcd6b58a39b19aa03a5aa13f8ad3e4d8f21ff3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.0598290598, "max_line_length": 615, "alphanum_fraction": 0.6983402892, "num_tokens": 3230, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802264851919, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5550784331666033}}
{"text": "SymPy includes several submodules that allow users to solve domain specific\nphysics problems. For example, a comprehensive physics submodule is included\nthat is useful for solving problems in mechanics, optics, and quantum\nmechanics along with support for manipulating physical quantities with units.\n\n\n\\subsection{Classical Mechanics}\nOne of the core domains that SymPy suports is the physics of classical\nmechanics. This is in turn separated into two distinct components:\nvector algebra and mechanics.\n\n\\subsubsection{Vector Algebra}\n% TODO This section requires some citations.\n\nThe \\verb|sympy.physics.vector| submodule provides reference frame-, time-,\nand space-aware vector and dyadic objects that allow for three-dimensional\noperations such as addition, subtraction, scalar multiplication, inner and\nouter products, and cross products. The vector and dyadic objects both can be\nwritten in very compact notation that make it easy to express the vectors and\ndyadics in terms of multiple reference frames with arbitrarily defined\nrelative orientations. The vectors are used to specify the positions,\nvelocities, and accelerations of points; orientations, angular velocities, and\nangular accelerations of reference frames; and forces and torques. The dyadics\nare essentially reference frame-aware $3 \\times 3$\ntensors~\\cite{tai1997generalized}. The vector and dyadic objects can be used\nfor any one-, two-, or three-dimensional vector algebra, and they provide a\nstrong framework for building physics and engineering tools.\n\nThe following Python code demonstrates how a vector is created using\nthe orthogonal unit vectors of three reference frames that are oriented with\nrespect to each other, and the result of expressing the vector in the $A$\nframe. The $B$ frame is oriented with respect to the $A$ frame using Z-X-Z\nEuler Angles of magnitude $\\pi$, $\\frac{\\pi}{2}$, and\n$\\frac{\\pi}{3}$, respectively, whereas the $C$ frame is oriented\nwith respect to the $B$ frame through a simple rotation about the $B$ frame's\n$X$ unit vector through $\\frac{\\pi}{2}$.\n\n\\begin{verbatim}\n>>> from sympy.physics.vector import ReferenceFrame\n>>> A, B, C = symbols('A B C', cls=ReferenceFrame)\n>>> B.orient(A, 'body', (pi, pi/3, pi/4), 'zxz')\n>>> C.orient(B, 'axis', (pi/2, B.x))\n>>> v = 1*A.x + 2*B.z + 3*C.y\n>>> v\nA.x + 2*B.z + 3*C.y\n>>> v.express(A)\nA.x + 5*sqrt(3)/2*A.y + 5/2*A.z\n\\end{verbatim}\n\n\\subsubsection{Mechanics}\n\nThe \\verb|sympy.physics.mechanics| submodule utilizes the \\texttt{sympy.\\allowbreak{}physics.\\allowbreak{}vector} submodule\nto populate time-aware particle and rigid-body objects to fully describe the\nkinematics and kinetics of a rigid multi-body system. These objects store all\nof the information needed to derive the ordinary differential or differential\nalgebraic equations that govern the motion of the system, i.e., the equations\nof motion. These equations of motion abide by Newton's laws of motion and can\nhandle arbitrary kinematic constraints or complex loads. 
The submodule\noffers two automated methods for formulating the equations of motion based on\nLagrangian Dynamics~\\cite{Lagrange1811} and Kane's Method~\\cite{kane1985dynamics}.\nLastly, there are automated linearization routines for constrained dynamical\nsystems~\\cite{Peterson2014}.\n\n\\subsection{Quantum Mechanics}\n\\label{sec:quantum}\n\nThe \\verb|sympy.physics.quantum| submodule has extensive capabilities to\nsolve problems in quantum mechanics, using Python objects to represent the\ndifferent mathematical objects relevant in quantum theory~\\cite{sakurai2011modern}:\nstates (bras and kets), operators (unitary, Hermitian, etc.), and basis sets, as\nwell as operations on these objects such as representations, tensor products,\ninner products, outer products, commutators, and anticommutators. The base\nobjects are designed in the most general way possible to enable any particular\nquantum system to be implemented by subclassing the base operators and defining\nthe relevant class methods to provide system-specific logic.\n\nSymbolic quantum operators and states may be defined, and one can perform\na full range of operations with them.\n\\begin{verbatim}\n>>> from sympy.physics.quantum import Commutator, Dagger, Operator\n>>> from sympy.physics.quantum import Ket, qapply\n>>> A, B, C, D = symbols('A B C D', cls=Operator)\n>>> a = Ket('a')\n>>> comm = Commutator(A, B)\n>>> comm\n[A,B]\n>>> qapply(Dagger(comm*a)).doit()\n-<a|*(Dagger(A)*Dagger(B) - Dagger(B)*Dagger(A))\n\\end{verbatim}\nCommutators can be expanded using common commutator identities:\n\\begin{verbatim}\n>>> Commutator(C+B, A*D).expand(commutator=True)\n-[A,B]*D - [A,C]*D + A*[B,D] + A*[C,D]\n\\end{verbatim}\n\nOn top of this set of base objects, a number of specific quantum systems have\nbeen implemented in a fully symbolic framework. These include:\n\n\\begin{itemize}\n\n\\item Many of the exactly solvable quantum systems, including simple harmonic\noscillator states and raising/lowering operators, infinite square well states,\nand 3D position and momentum operators and states.\n\n\\item Second quantized formalism of non-relativistic many-body quantum\nmechanics~\\cite{fetter2003quantum}.\n\n\\item Quantum angular momentum~\\cite{zare1988angular}. Spin operators and their\neigenstates can be represented in any basis and for any quantum numbers.\nA rotation operator representing the Wigner D-matrix, which may be defined\nsymbolically or numerically, is also implemented to rotate spin eigenstates.\nFunctionality for coupling and uncoupling of arbitrary spin eigenstates is\nprovided, including symbolic representations of Clebsch-Gordon coefficients and\nWigner symbols.\n\n\\item Quantum information and computing~\\cite{nielsen2010quantum}. Multidimensional\nqubit states, and a full set of one- and two-qubit gates are provided and can\nbe represented symbolically or as matrices/vectors. With these building blocks,\nit is possible to implement a number of basic quantum algorithms including the\nquantum Fourier transform, quantum error correction, quantum teleportation,\nGrover's algorithm, dense coding, etc. In addition, any quantum circuit may be\nplotted using the \\verb|circuit_plot| function (Figure~\\ref{fig-circuitplot-qft}).\n\n\n\\end{itemize}\n\nHere are a few short examples of the quantum information and computing capabilities\nin \\verb|sympy.physics.quantum|. 
Start with a simple four-qubit state and flip the second\nqubit from the right using a Pauli-X gate:\n\n\\begin{verbatim}\n>>> from sympy.physics.quantum.qubit import Qubit\n>>> from sympy.physics.quantum.gate import XGate\n>>> q = Qubit('0101')\n>>> q\n|0101>\n>>> X = XGate(1)\n>>> qapply(X*q)\n|0111>\n\\end{verbatim}\nQubit states can also be used in adjoint operations, tensor products, inner/outer\nproducts:\n\\begin{verbatim}\n>>> Dagger(q)\n<0101|\n>>> ip = Dagger(q)*q\n>>> ip\n<0101|0101>\n>>> ip.doit()\n1\n\\end{verbatim}\nQuantum gates (unitary operators) can be applied to transform these states and\nthen classical measurements can be performed on the results:\n\\begin{verbatim}\n>>> from sympy.physics.quantum.qubit import measure_all\n>>> from sympy.physics.quantum.gate import H, X, Y, Z\n>>> c = H(0)*H(1)*Qubit('00')\n>>> c\nH(0)*H(1)*|00>\n>>> q = qapply(c)\n>>> measure_all(q)\n[(|00>, 1/4), (|01>, 1/4), (|10>, 1/4), (|11>, 1/4)]\n\\end{verbatim}\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[scale=0.65]{images/fig1-circuitplot-qft}\n\\caption{The circuit diagram for a three-qubit quantum Fourier transform\ngenerated by SymPy.}\n\\label{fig-circuitplot-qft}\n\\end{center}\n\\end{figure}\nLastly, the following example demonstrates creating a three-qubit quantum Fourier\ntransform, decomposing it into one- and two-qubit gates, and then generating a\ncircuit plot for the sequence of gates (see Figure~\\ref{fig-circuitplot-qft}).\n% This depends on matplotlib. We want to make sure the rest of the paper\n% doesn't depend on it, so skip.\n% no-doctest\n\\begin{verbatim}\n>>> from sympy.physics.quantum.qft import QFT\n>>> from sympy.physics.quantum.circuitplot import circuit_plot\n>>> fourier = QFT(0,3).decompose()\n>>> fourier\nSWAP(0,2)*H(0)*C((0),S(1))*H(1)*C((0),T(2))*C((1),S(2))*H(2)\n>>> c = circuit_plot(fourier, nqubits=3)\n\\end{verbatim}\n", "meta": {"hexsha": "f4c83edddb895399b373e2ec6f0202ae4dcbca18", "size": 8115, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "domain_specific.tex", "max_stars_repo_name": "ProgZone/sympy-paper", "max_stars_repo_head_hexsha": "b3b85809cc92d1fd588971f944abda9fa995a426", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 40, "max_stars_repo_stars_event_min_datetime": "2016-03-27T06:55:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-22T18:42:39.000Z", "max_issues_repo_path": "domain_specific.tex", "max_issues_repo_name": "ProgZone/sympy-paper", "max_issues_repo_head_hexsha": "b3b85809cc92d1fd588971f944abda9fa995a426", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 205, "max_issues_repo_issues_event_min_datetime": "2016-03-17T03:08:15.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-01T17:09:29.000Z", "max_forks_repo_path": "domain_specific.tex", "max_forks_repo_name": "ProgZone/sympy-paper", "max_forks_repo_head_hexsha": "b3b85809cc92d1fd588971f944abda9fa995a426", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 37, "max_forks_repo_forks_event_min_datetime": "2016-03-17T16:02:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-15T15:06:54.000Z", "avg_line_length": 43.6290322581, "max_line_length": 123, "alphanum_fraction": 0.7699322243, "num_tokens": 2085, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707281, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.5550784316004458}}
{"text": "\\documentclass[a4paper]{article}\n\n%  math support\n\\usepackage{mathtools}\n\\usepackage{autobreak}\n\\usepackage{extarrows}\n\\usepackage[thmmarks,amsmath]{ntheorem}\n\\usepackage{amssymb}\n\\usepackage{esint}\n% \\usepackage{amsmath}\n\\usepackage{hyperref}\n\n% theorems\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{defi}[thm]{Definition}\n\\newtheorem{lemma}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{coro}[thm]{Corollary}\n{\n    \\theoremstyle{plain}\n    \\theoremheaderfont{\\sffamily\\bfseries}\n    \\theorembodyfont{\\normalfont}\n    % auto add \\QED\n    \\theoremsymbol{\\mbox{$\\Box$}}\n    \\newtheorem{proof}{Proof}[section]\n}\n{\n    \\theoremstyle{nonumberplain}\n    \\newtheorem{myDef}{Definition}\n}\n% abbr\n\\newcommand*\\Laplace{\\mathop{}\\!\\mathbin\\bigtriangleup}\n\\newcommand\\st{\\quad \\text{s.t.} \\quad}\n\\newcommand\\diff{\\,\\mathrm{d}}\n\\newcommand\\compact{\\subset \\subset}\n\\newcommand\\xiff\\xLongleftrightarrow\n\\newcommand\\ximpliedby\\xLongleftarrow\n\\newcommand\\ximplies\\xLongrightarrow\n\\newcommand\\xeq\\xlongequal\n\\newcommand\\nto\\nrightarrow\n\n% declare \\norm macro\n\\DeclarePairedDelimiter{\\norm}\\lVert\\rVert\n\\DeclarePairedDelimiter{\\set}\\lbrace\\rbrace\n\\DeclarePairedDelimiter{\\abs}\\lvert\\rvert\n% Functional Analysis Symbols\n\\def\\L{\\mathcal{L}}\n\\def\\K{\\mathcal{K}}\n\\def\\R{\\mathbb{R}}\n\\DeclareMathOperator{\\Ima}{Im}\n\\DeclareMathOperator{\\Ker}{Ker}\n\\DeclareMathOperator{\\spann}{span}\n\\DeclareMathOperator{\\tr}{tr}\n\n\\newcommand\\spanset[1]{\\ensuremath\\spann\\{#1\\}}\n% \\DeclareMathOperator{\\dim}{dim}\n% domain range\n\\DeclareMathOperator{\\domain}{D}\n\\DeclareMathOperator{\\range}{R}\n\n\\begin{document}\n\\section{Solutions}\n\\setcounter{proof}{3}\n\\begin{proof}\nFor $y = (a, b)$, set $y_1 = (a, -b)$, $y_2 = (-a, b)$, $y_3 = (-a, -b)$, then\n\\begin{equation*}\nG(x, y) = \\Gamma(x,y) - \\Gamma(x,y_1) - \\Gamma(x,y_2) + \\Gamma(x,y_3)\n\\end{equation*}\n\\end{proof}\n\\begin{proof}\nSet $x = (x_1, \\cdots, x_n)$, $y = (a, 0, \\cdots, 0)$, \nthen we use Poisson integral and maximum principle for harmonic functions to get\n\\begin{equation*}\n\\begin{split}\n& \\int_{\\partial B_1} \\left[ (x_1 - a)^2 + x_2^2 + \\cdots + x_n^2)\\right]^{-n/2} \\diff S_x \\\\\n={} & \\int_{\\partial B_1} \\frac{\\diff S_x}{\\abs{x-y}^n}\n= \\frac{1-\\abs{y}^2}{n\\omega_n}\\int_{\\partial B_1} \\frac{n\\omega_n}{1-\\abs{y}^2}\\frac{1}{\\abs{x-y}^n} \\diff S_x \\\\\n={} & \\frac{n\\omega_n}{1-\\abs{y}^2}\n\\end{split}\n\\end{equation*}\n\\end{proof}\n\\setcounter{proof}{6}\n\\begin{proof}\nWe use the mean value inequalities for $u(x)$, for any ball $B(x,r) \\subset \\subset \\R^n$, we have\n\\begin{align*}\n\\abs{u} &= \\abs{\\frac{1}{\\omega_n r^n}\\int_{B(x,r)} u \\diff x} \\\\\n&\\leq \\frac{1}{\\omega_n r^n}\\int_{B(x,r)} \\abs{u} \\diff x \\\\\n&\\leq (\\omega_n r^n)^{-1/p}\\norm{u}_{L_p(B(x,r))} \\\\\n&\\leq (\\omega_n r^n)^{-1/p}\\norm{u}_{L_p(\\R^n)} \\to 0 \\quad \\text{as} \\quad r \\to \\infty \n\\end{align*}\nHence $u(x) = 0$\n\\end{proof}\n\\setcounter{proof}{8}\n\\begin{proof}\nSet\n\\begin{equation*}\nf(x) = x\\log{x},\n\\end{equation*}\nwe have\n\\begin{equation*}\nf'(x) = \\log{x} + 1 \\leq 0 \\quad \\text{in} \\quad (0, 1/e], \n\\end{equation*}\nhence $f(x)$ is nonincreasing in $[0, 1/e]$. Since $f(x) \\in C^1((1/e, 1])$, we consider $f(x)$ in $[0, 1/e]$. 
\\\\\nSet\n\\begin{equation*}\ng(h) = f( x + h ) - f(x) - f(h),\n\\end{equation*}\nwe have\n\\begin{equation*}\ng'(h) = \\log{\\frac{x+h}{h}} \\geq 0 \\quad \\text{in} \\quad (0, 1-x],\n\\end{equation*}\nsince $g(0)=0$, we have\n\\begin{equation*}\nf(x) - f(x + h) \\leq -f(h).\n\\end{equation*}\nSince $f(x)$ is nonincreasing in $[0, 1/e]$, we have\n\\begin{equation*}\n\\abs{\\frac{f(x)-f(x+h)}{h^\\alpha}} = \\frac{f(x)-f(x+h)}{h^\\alpha} \\leq -\\frac{f(h)}{h^\\alpha} = -h^{1-\\alpha}\\log{h},\n\\end{equation*}\nhence $f(x) \\in C^{0,\\alpha}[0,1]$, where $\\alpha \\in (0,1)$, and $f(x) \\notin C^{0,1}[0,1]$.\n\\end{proof}\n\\begin{proof}\nSince $A,B$ are both positive-definite matrices, we have\n\\begin{equation*}\nA = P\\sqrt{\\bar{A}}\\sqrt{\\bar{A}}P^T, \\qquad B = Q\\bar{B}Q^T.\n\\end{equation*}\nThen\n\\begin{align*}\n\\det{A}\\det{B} &= \\det{(\\sqrt{\\bar{A}}P^TQ\\bar{B}Q^TP\\sqrt{\\bar{A}}^T)} \\\\\n&\\leq \\left( \\frac{\\tr{(\\sqrt{\\bar{A}}P^TQ\\bar{B}Q^TP\\sqrt{\\bar{A}}^T)}}{n} \\right)^n \\\\\n&= \\left( \\frac{\\tr{(AB)}}{n} \\right)^n\n\\end{align*}\nsince $\\sqrt{\\bar{A}}P^TQ\\bar{B}Q^TP\\sqrt{\\bar{A}}^T$ is positive-definite, using the AM-GM inequality.\n\\end{proof}\n\\setcounter{proof}{11}\n\\begin{proof}\nWLOG $\\forall x > 0$, $\\eta \\in (0,1)$, we consider $(x\\eta, x)$, set\n\\begin{equation*}\nu = \\log{x}, \\quad \\bar{u} = \\fint_{x\\eta}^x \\log{x} \\diff x, \\quad \\tilde{u} = \\fint_{x\\eta}^x \\abs{u - \\bar{u}} \\diff x, \n\\quad \\bar{u}_0 = \\fint_{\\eta}^1 \\log{t} \\diff t.\n\\end{equation*}\nCalculate\n\\begin{align*}\n\\bar{u} &= \\fint_{\\eta}^1 \\log{xt} \\diff t = \\bar{u}_0 + \\log{x} \\\\\n\\tilde{u} &= \\fint_{\\eta}^1 \\abs{\\log{t} + \\log{x} - \\bar{u}} \\diff t = \\fint_{\\eta}^1 \\abs{\\log{t} - \\bar{u}_0} \\diff t \\\\\n\\bar{u}_0 &= \\frac{\\eta \\log{\\eta} - \\eta + 1}{\\eta - 1}.\n\\end{align*}\nSince\n\\begin{equation*}\n\\fint_{\\eta}^1 (\\log{t} - \\bar{u}_0) \\diff t = 0\n\\end{equation*}\nthere exists $t_0 \\in (\\eta, 1)$ s.t. $\\log{t_0} = \\bar{u}_0$, hence\n\\begin{equation*}\n\\frac{1}{1 - \\eta}\\int_{t_0}^1 (\\log{t} - \\bar{u}_0) \\diff t \n= -\\frac{1}{1 - \\eta}\\int^{t_0}_{\\eta} (\\log{t} - \\bar{u}_0) \\diff t \\geq 0.\n\\end{equation*}\nThen we have\n\\begin{equation*}\n\\tilde{u} = \\frac{1}{1- \\eta}\\left( \\int_{t_0}^1 - \\int_{\\eta}^{t_0} (\\log{t} - \\bar{u}_0) \\right) \\diff t \n= \\frac{2}{1-\\eta}\\int_{t_0}^1 (\\log{t} - \\bar{u}_0) \\diff t\n\\end{equation*}\nsince\n\\begin{equation*}\n\\log{t} - \\bar{u}_0 \\leq \\log{1} - \\bar{u}_0 \\leq 1\n\\end{equation*}\nwe have\n\\begin{equation*}\n\\tilde{u} \\leq \\frac{2(1-t_0)}{1-\\eta} \\leq 2.\n\\end{equation*}\nHence $\\log{x} \\in BMO(0,\\infty)$.\n\\end{proof}\n\n\\section{G-T Chapter 2 Solutions}\n\\setcounter{proof}{1}\n\\begin{proof}\nFix $x_0 \\in T$, where $T$ is the open, smooth portion of $\\partial \\Omega$, then we can find $r > 0$, s.t. $B_r(x_0)$ is \ndivided into two parts by $T$, then we extend $u$ with zero in $B_r(x_0)-\\Omega$.\n\nNow we prove $u$ is harmonic in $B_r(x_0)$. Since $u$ is harmonic in $B_r(x_0) \\cap \\Omega$ and \n$u \\equiv 0$ in $B_r(x_0)-\\Omega$, all we need to show is that $u$ is harmonic in $B_r(x_0) \\cap T$, \nso we need to prove that every ball $B_R(y) \\subset \\subset \\Omega \\cup B_r(x_0)$ satisfies the mean value property, \nwhere $y \\in B_r(x_0) \\cap T$. 
By the proof of the mean value inequalities, we only need to show\n\\begin{equation}\\label{2.2.1}\n\\int_{\\partial B_R(y)} \\frac{\\partial u}{\\partial \\nu} \\diff s = 0.\n\\end{equation}\n$u = \\partial u/\\partial \\nu = 0$ on the smooth portion $T$ guarantees that $u \\in C^1$ in $T$, \nso the integral \\eqref{2.2.1} is well-defined; since $u \\equiv 0$ in $B_r(x_0) - \\Omega$, we have\n\\begin{equation*}\n\\int_{\\partial B_R(y)} \\frac{\\partial u}{\\partial \\nu} \\diff s = \\int_{B_R(y) \\cap \\Omega} \\Laplace u \\diff x = 0\n\\end{equation*}\nhence $u$ is harmonic in $\\Omega\\cup B_r(x_0)$, and $u \\equiv 0$ in $B_r(x_0) - \\Omega$; by analyticity, $u \\equiv 0$ in $\\Omega$.\n\\end{proof}\n\\begin{proof}\n\\begin{enumerate}\n\\item Fix $x,y \\in \\Omega$, $x \\neq y$, write\n\\begin{equation*}\nv(z) \\coloneqq G(x, z), \\quad w(z) \\coloneqq G(y, z), \\quad z \\in \\Omega,\n\\end{equation*}\nthen\n\\begin{align*}\n\\Laplace v(z) &= 0, \\qquad z \\neq x, \\\\\n\\Laplace w(z) &= 0, \\qquad z \\neq y\n\\end{align*}\nand $w = v = 0$ on $\\partial\\Omega$.\nConsider Green's identity on $\\tilde{\\Omega} \\coloneqq \\Omega\\setminus[B_{\\epsilon}(x)\\cup B_{\\epsilon}(y)]$\n\\begin{align*}\n& \\int_{\\tilde{\\Omega}} (v\\Laplace w - w\\Laplace v) \\diff z\n= \\int_{\\partial\\Omega} \\left(v\\frac{\\partial w}{\\partial\\nu} - w\\frac{\\partial v}{\\partial\\nu}\\right)\\diff s \\\\\n+{} & \\int_{\\partial B_{\\epsilon}(x)} \\left(v\\frac{\\partial w}{\\partial\\nu} - w\\frac{\\partial v}{\\partial\\nu}\\right)\\diff s\n+ \\int_{\\partial B_{\\epsilon}(y)} \\left(v\\frac{\\partial w}{\\partial\\nu} - w\\frac{\\partial v}{\\partial\\nu}\\right)\\diff s,\n\\end{align*}\nsince $v, w$ are harmonic in $\\tilde{\\Omega}$, and vanish on $\\partial\\Omega$, we have\n\\begin{equation*}\n\\int_{\\partial B_{\\epsilon}(x)} \\left(v\\frac{\\partial w}{\\partial\\nu} - w\\frac{\\partial v}{\\partial\\nu}\\right)\\diff s\n+ \\int_{\\partial B_{\\epsilon}(y)} \\left(v\\frac{\\partial w}{\\partial\\nu} - w\\frac{\\partial v}{\\partial\\nu}\\right)\\diff s = 0,\n\\end{equation*}\nsince\n\\begin{equation*}\n\\abs*{\\int_{\\partial B_{\\epsilon}(x)} v\\frac{\\partial w}{\\partial\\nu}\\diff s}\n\\leq C\\int_{\\partial B_{\\epsilon}(x)}\\abs{v}\\diff s \\leq C\\epsilon\n\\end{equation*}\nand\n\\begin{equation*}\n\\lim_{\\epsilon \\to 0} \\int_{\\partial B_{\\epsilon}(x)}\\frac{\\partial v}{\\partial\\nu}w\\diff s\n= \\lim_{\\epsilon \\to 0} \\int_{\\partial B_{\\epsilon}(x)}\\frac{\\partial\\Gamma}{\\partial\\nu}(x-z)w(z)\\diff s = w(x).\n\\end{equation*}\nSimilarly we deduce the other integral to get $w(x) = v(y)$, hence $G(x,y) = G(y,x)$.\n\\item Consider $w(x) = G(x,y) = \\Gamma(x-y) + h$, which is harmonic in $\\Omega\\setminus\\set{y}$; since $h$ is harmonic in $\\Omega$,\nand $\\abs*{\\Omega} < \\infty$, we have $\\abs{h} < \\infty$ in $\\Omega$. But when $x \\to y$, $\\Gamma(x-y) \\to -\\infty$,\nhence there exists $r > 0$ s.t. 
$w<0$ in $\\partial B_r(y)$\n\nBy the strong maximum principle, since $w(x) \\equiv 0$ in $\\partial\\Omega$, we have $w<0$ in $\\Omega\\setminus B_r(y)$.\n\\item Fix $x_0 \\in \\partial\\Omega$, since $f$ is bounded, w.l.o.g, we assume $f(y) \\equiv 1$, then\n\\begin{equation*}\n\\int_{\\Omega} \\abs{G(x,y)}\\diff y = \\int_{\\Omega\\cap B_{2\\epsilon}(x_0)} \\abs{G(x,y)}\\diff y\n+ \\int_{\\Omega\\setminus B_{2\\epsilon}(x_0)} \\abs{G(x,y)}\\diff y \\eqqcolon I + J.\n\\end{equation*}\nTo estimate $I$, since $G(x_0, y) = 0$, for $\\epsilon$ sufficiently small,\nwe have $h(x,y) > 0$ where $x$ in $B_{\\epsilon}(x_0)$, hence $\\abs{G(x,y)} < \\abs{\\Gamma(x,y)}$,\nand $B_{2\\epsilon}(x_0) \\subset B_{3\\epsilon}(x)$, so we have\n\\begin{align*}\nI \\leq \\int_{\\Omega\\cap B_{3\\epsilon}(x_0)} \\abs{G(x,y)}\\diff y\n\\leq \\int_{\\Omega\\cap B_{3\\epsilon}(x_0)} \\abs{\\Gamma(x,y)}\\diff y \\leq C\\epsilon^2,\n\\end{align*}\nthen we have\n\\begin{align*}\n\\lim_{x \\to x_0}I = 0.\n\\end{align*}\nTo estimate $J$, we have\n\\begin{align*}\n\\abs{G(x,y)} \\leq \\abs{\\Gamma(\\epsilon)} \\quad \\forall y \\in \\Omega\\setminus B_{2\\epsilon}(x_0), x \\in B_{\\epsilon}(x_0),\n\\end{align*}\nand for any fixed $y$\n\\begin{align*}\nG(x,y) \\to 0 \\quad \\text{as} \\quad x \\to x_0,\n\\end{align*}\nuse Lebesgue\u2019s dominated convergence theorem, we have\n\\begin{align*}\n\\lim_{x \\to x_0} J = 0,\n\\end{align*}\nhence we completes the proof.\n\\end{enumerate}\n\\end{proof}\n\\begin{proof}\nIt is suffices to show that $U$ is harmonic on $T$, $\\forall x \\in T$.\n\nThere is a ball $B = B(x)$, by the reflection, we have $U$ is continuous on $\\partial B$,\nthus we can find a harmonic functions $v$ in $B$,\nand $v \\equiv U$ on $\\partial B$ by Poisson integral formula.\n\nSince $U$ is defined as odd reflection, use Poisson integral we have $v \\equiv u \\equiv 0$ on $T$,\nuse the maximum principle on $\\Omega^+\\cap B$ and $\\Omega^-\\cap B$, we get $U \\equiv v$ in $B$.\nHence $U$ is harmonic in $\\Omega^+\\cup T\\cup\\Omega^-$.\n\\end{proof}\n\\begin{proof}\nWLOG we set the annular region as $B_{R}(0)\\setminus B_r(0)$, where $R>r>0$.\nAll we need to do is let the Green's function vanishes on $\\partial B_R\\cup\\partial B_r$\nby combine the fundamental solution $\\Gamma$.\n\\begin{enumerate}\n\\item Let it vanishes on $\\partial B_R$, we have\n\\begin{align*}\nh_1 = \\Gamma(x,y) - \\frac{\\abs{y}}{R}\\Gamma(x,\\frac{R^2}{\\abs{y}^2}y),\n\\end{align*}\n\\item let it vanishes on $\\partial B_r$, we have\n\\begin{align*}\nh_2 = h_1 - \\frac{\\abs{y}}{r}\\Gamma(x,\\frac{r^2}{\\abs{y}^2}y) + \\frac{R}{r}\\Gamma(x,\\frac{r^2}{R^2}y),\n\\end{align*}\n\\item let it vanishes on $\\partial B_R$, we have\n\\begin{align*}\nh_3 = h_2 + \\frac{r}{R}\\Gamma(x,\\frac{R^2}{r^2}y) - \\frac{r\\abs{y}}{R^2}\\Gamma(x,\\frac{R^4}{r^2\\abs{y}^2}y),\n\\end{align*}\n\\item let it vanishes on $\\partial B_r$, we have\n\\begin{align*}\nh_4 = h_3 - \\frac{R\\abs{y}}{r^2}\\Gamma(x,\\frac{r^4}{R^2\\abs{y}^2}y) + \\frac{R^2}{r^2}\\Gamma(x,\\frac{r^4}{R^4}y),\n\\end{align*}\n\\item let it vanishes on $\\partial B_R$, we have\n\\begin{align*}\nh_5 = h_4 + \\frac{r^2}{R^2}\\Gamma(x,\\frac{R^4}{r^4}y) - \\frac{r^2\\abs{y}}{R^3}\\Gamma(x,\\frac{R^6}{r^4\\abs{y}^2}y),\n\\end{align*}\n\\item $\\cdots$\n\\end{enumerate}\nSet\n\\begin{align*}\ng_n &= \\left(\\frac{r}{R}\\right)^{n-1}\\Gamma(x,\\left(\\frac{r}{R}\\right)^{2(1-n)}y)\n- \\left(\\frac{r}{R}\\right)^{n-1}\\frac{\\abs{y}}{R}\\Gamma(x, 
\\left(\\left(\\frac{r}{R}\\right)^{n-1}\\frac{\\abs{y}}{R}\\right)^{-2}y) \\\\\n&- \\left(\\frac{R}{r}\\right)^{n-1}\\frac{\\abs{y}}{r}\\Gamma(x, \\left(\\left(\\frac{R}{r}\\right)^{n-1}\\frac{\\abs{y}}{r}\\right)^{-2}y)\n+ \\left(\\frac{R}{r}\\right)^{n}\\Gamma(x,\\left(\\frac{R}{r}\\right)^{-2n}y),\n\\end{align*}\nwe get the Green's function\n\\begin{align*}\nG(x,y) = \\sum_{n=1}^\\infty g_n\n\\end{align*}\nwhere the series on the RHS converges by the Weierstrass M-test.\n\\end{proof}\n\\begin{proof}\nFix $y \\in B_R$; $\\forall x \\in \\partial B_R$\n\\begin{align*}\n\\abs*{\\frac{x-y}{R-\\abs{y}}} \\geq 1 = \\abs*{\\frac{x}{R}},\n\\end{align*}\nhence\n\\begin{align*}\n(R-\\abs{y})^n\\int_{\\partial B_R} \\frac{u\\diff s}{\\abs{x-y}^n} \\leq R^n \\int_{\\partial B_R} \\frac{u\\diff s}{\\abs{x}^n},\n\\end{align*}\nwhich implies that\n\\begin{align*}\nu(y) \\leq \\frac{R^{n-2}(R+\\abs{y})}{(R-\\abs{y})^{n-1}}u(0).\n\\end{align*}\nSimilarly we can get the lower bound.\n\\end{proof}\n\\begin{proof}\n$\\forall z \\in \\partial\\Omega$, w.l.o.g.\\ we assume $z = 0$ and $x_n = 0$ is the tangent hyperplane of $\\partial\\Omega$ at $z$.\nBy the definition of $C^2$ boundary, we can choose $x_n = 0$ as the supporting hyperplane and define a support function $f$\nin $B_{\\epsilon}$ which is $C^2(\\R^{n-1})$ and satisfies\n\\begin{align*}\nf(x') = x_n > 0 \\quad \\text{in} \\quad B_{\\epsilon}\\setminus\\set{0} \\text{, and} \\quad f(0)=0,\n\\end{align*}\nwhere $x' = (x_1, \\cdots, x_{n-1})$, and $(x',x_n) \\in \\partial\\Omega$.\nThen we consider a ball $B_r(x_0)$ where $r \\leq \\epsilon$,\ntangent to the supporting hyperplane $x_n = 0$ at $z$, and we can define the support function\n\\begin{align*}\ng(x') = r - \\sqrt{r^2 - \\abs{x'}^2},\n\\end{align*}\nset $h=g-f$, and $h(0) = 0$; all we need to do is prove that there exists $r$ s.t. $D^2h(0)$ is positive-definite. 
Since\n\\begin{align*}\nh_{ij} = (r^2 - \\abs{x'}^2)^{-3/2}x_ix_j + \\delta_{ij}(r^2 - \\abs{x'}^2)^{-1/2} - \\partial_{ij}f(x'),\n\\end{align*}\nwe have\n\\begin{align*}\n\\left(h_{ij}(0)\\right) = r^{-1}I - (\\partial_{ij}f(0)),\n\\end{align*}\nhence we choose $r$ s.t.\n\\begin{align*}\n\\frac{1}{r} \\geq \\max{\\abs{\\lambda_i}},\n\\end{align*}\nwhere $\\lambda_i$ are eigenvalues of $D^2f(0)$.\n\\end{proof}\n\\setcounter{section}{3}\n\\section{G-T Chapter 4 Solutions}\n\\setcounter{proof}{6}\n\\begin{proof}\n\\begin{equation}\\label{4.7.1}\n\\begin{split}\n\\Laplace_x u(x/\\abs{x}^2)\n&= \\sum_i \\frac{\\partial}{\\partial x_i}\\left(\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{\\partial y_k}{\\partial x_i}\\right)\n= \\sum_i\\sum_l\\frac{\\partial}{\\partial y_l}\\left(\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{\\partial y_k}{\\partial x_i}\\right)\n\\frac{\\partial y_l}{\\partial x_i} \\\\\n&= \\sum_i\\sum_l\\sum_k\\left(\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\\frac{\\partial y_k}{\\partial x_i}\n+\\frac{\\partial^2 y_k}{\\partial x_i\\partial y_l}\\frac{\\partial u}{\\partial y_k}\\right)\\frac{\\partial y_l}{\\partial x_i} \\\\\n&= \\sum_i\\sum_l\\sum_k\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\\frac{\\partial y_k}{\\partial x_i}\n\\frac{\\partial y_l}{\\partial x_i} + \\sum_i\\sum_k\\frac{\\partial^2 y_k}{\\partial x_i^2}\\frac{\\partial u}{\\partial y_k}\n\\end{split}\n\\end{equation}\nsince\n\\begin{align*}\n\\frac{\\partial y_k}{\\partial x_i} &= \\partial_i \\frac{x_k}{\\abs{x}^2}\n= \\frac{\\delta_{ik}}{\\abs{x}^2} - 2\\frac{x_ix_k}{\\abs{x}^4}, \\\\\n\\frac{\\partial^2 y_k}{\\partial x_i^2}\n&= \\frac{-2\\delta_{ik}x_i}{\\abs{x}^4} + 8\\frac{x_i^2x_k}{\\abs{x}^6}\n- 2\\frac{x_k}{\\abs{x}^4} - 2\\frac{x_i\\delta_{ik}}{\\abs{x}^4} \\\\\n&= -4\\frac{\\delta_{ik}x_i}{\\abs{x}^4} + 8\\frac{x_i^2x_k}{\\abs{x}^6} - 2\\frac{x_k}{\\abs{x}^4},\n\\end{align*}\nwe have\n\\begin{align}\n\\eqref{4.7.1} &= \\sum_i\\sum_l\\sum_k\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\n\\left(\\frac{\\delta_{ik}}{\\abs{x}^2} - 2\\frac{x_ix_k}{\\abs{x}^4}\\right)\n\\left(\\frac{\\delta_{il}}{\\abs{x}^2} - 2\\frac{x_ix_l}{\\abs{x}^4}\\right) \\label{4.7.a} \\\\\n&+ \\sum_i\\sum_k\\frac{\\partial u}{\\partial y_k}\\left(-4\\frac{\\delta_{ik}x_i}{\\abs{x}^4}\n+ 8\\frac{x_i^2x_k}{\\abs{x}^6} - 2\\frac{x_k}{\\abs{x}^4}\\right) \\label{4.7.b}.\n\\end{align}\nFor \\eqref{4.7.a} we have\n\\begin{align*}\n\\eqref{4.7.a} &= \\sum_i\\sum_l\\sum_k\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\\frac{\\delta_{ik}\\delta_{il}}{\\abs{x}^4}\n- 4\\sum_l\\sum_k\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\\frac{x_kx_l}{\\abs{x}^6}\n+ 4\\sum_i\\sum_l\\sum_k\\frac{x_i^2x_kx_l}{\\abs{x}^8} \\\\\n&= \\sum_i\\frac{\\partial^2 u}{\\partial y_i^2}\\frac{1}{\\abs{x}^4}\n- 4\\sum_l\\sum_k\\frac{\\partial^2 u}{\\partial y_l\\partial y_k}\\frac{x_kx_l}{\\abs{x}^6}\n+ 4\\sum_i\\frac{x^2_i}{\\abs{x}^2}\\sum_l\\sum_k\\frac{x_kx_l}{\\abs{x}^6} \\\\\n&= \\sum_i\\frac{\\partial^2u}{\\partial y_i^2}\\frac{1}{\\abs{x}^4},\n\\end{align*}\nand for \\eqref{4.7.b} we have\n\\begin{align*}\n\\eqref{4.7.b} &= -4\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^4}\n+ 8\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^4}\n- 2n\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^4} \\\\\n&= 2(2-n)\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^4}.\n\\end{align*}\nConsider\n\\begin{align*}\n\\Laplace v(x) = \\Laplace{u(x)}\\abs{x}^{2-n} + u(x)\\Laplace{\\abs{x}^{2-n}} + 
2\\nabla{u}\\nabla{\\abs{x}^{2-n}},\n\\end{align*}\nsince\n\\begin{align*}\n\\nabla{u}\\nabla{\\abs{x}^{2-n}} &= \\sum_i\\frac{\\partial u}{\\partial x_i}\\frac{\\abs{x}^{2-n}}{x_i}\n= \\sum_i\\left(\\sum_j\\frac{\\partial u}{\\partial y_j}\\frac{\\partial y_j}{\\partial x_i}\\right)(2-n)\\abs{x}^{-n}x_i \\\\\n&= (2-n)\\abs{x}^{-n}\\sum_i\\sum_j\\frac{\\partial u}{\\partial y_j}\n\\left(\\frac{\\delta_{ij}}{\\abs{x}^2} - 2\\frac{x_ix_j}{\\abs{x}^4}\\right)x_i \\\\\n&= (2-n)\\left(\\abs{x}^{-2-n}\\sum_j\\frac{\\partial u}{\\partial y_j}x_j\n-2\\abs{x}^{-2-n}\\sum_j\\frac{\\partial u}{\\partial y_j}x_j\\right) \\\\\n&= -(2-n)\\abs{x}^{-2-n}\\sum_j\\frac{\\partial u}{\\partial y_j}x_j,\n\\end{align*}\nhence\n\\begin{align*}\n\\Laplace v(x) &= \\Laplace_yu\\abs{x}^{-2-n} + 2(2-n)\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^{2+n}}\n- 2(2-n)\\sum_k\\frac{\\partial u}{\\partial y_k}\\frac{x_k}{\\abs{x}^{2+n}} \\\\\n&= \\Laplace_yu\\abs{x}^{-2-n}.\n\\end{align*}\n\\end{proof}\n\\end{document}", "meta": {"hexsha": "0d08d0d6dba43dd37b683fe3a1d0abd26f300634", "size": 18005, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pdehw.tex", "max_stars_repo_name": "focus000/pdehw", "max_stars_repo_head_hexsha": "c999f2ab02adf4c91571ef6dda33175e10c8e3fe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "pdehw.tex", "max_issues_repo_name": "focus000/pdehw", "max_issues_repo_head_hexsha": "c999f2ab02adf4c91571ef6dda33175e10c8e3fe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pdehw.tex", "max_forks_repo_name": "focus000/pdehw", "max_forks_repo_head_hexsha": "c999f2ab02adf4c91571ef6dda33175e10c8e3fe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.0677570093, "max_line_length": 129, "alphanum_fraction": 0.633490697, "num_tokens": 7651, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7690802264851919, "lm_q1q2_score": 0.5550784285635052}}
{"text": "\\section{Densely Connected Convolutional Networks - DenseNet}\n\nA traditional convolutional network consists of L layers and have L connections - one between each layer and its subsequent layer. The DenseNet also known as Densely Connected Convolutional Networks has $L(L+1)/2$ direct connections. The DenseNet has several compelling advantages. For example it alleviates the vanishing-gradient problem, it strengthens the feature propagation and substantially reduce the number of parameters.\n\nA DenseNet does not solely consist of dense blocks. It also consists of convolution, pooling and some classification layer at the end. This can be seen in figure \\ref{fig:architecture}. The convolution and pooling layer are referred to as the transition layers.\n\n\\myFigure{dens.PNG}{A deep densely connected convolutional network where there are three dense blocks. The blocks between the three dense blocks are adjacent blocks also referred to as transition layers\\citep{DENSE}}{fig:architecture}{1}\n\nTo improve the flow of information in the DenseNet, a direct connection is used from any layer to all subsequent layers. The results of this is that the $l^{th}$ layer receives the feature-map of all the past layers. The output of the $l^{th}$ layer is denoted as $X_l$.\n\n\\begin{equation}\nX_l=H_l([x_0,x_1,...,X_{l-1}])\n\\end{equation}\n\n$[x_0,x_1,...,X_{l-1}]$ refers to the concatenation of the feature-maps, which the layers produces. This network architecture is referred to as DenseNet. The direct connection is illustrated in figure \\ref{fig:dense}. The multiple inputs of $H_e(.)$ are concatenated into a single tensor\\footnote{Tensors are geometric objects that describe the linear relation between geometric vectors, scalars and other tensors}.\n\nConvolutional networks's becomes deeper. With the increasing depth new problem emerges. Such as the information of the input or gradient are passed through many layers, it can vanish this is also known as the vanishing-gradient problem. A solution for this is to bypass the signal from one layer to the next. The solution in DenseNet is to connect all layers directly with each other. This connection preserves the feed-forward nature and each layers receives additional inputs from all preceding layers. This is seen in figure \\ref{fig:dense}.\n\n\\myFigure{denselayers.PNG}{A 5 layer dens block \\citep{DENSE}}{fig:dense}{0.7}\n\nIn figure \\ref{fig:dense} a 5 layers dens block is illustrated. 
", "meta": {"hexsha": "748bda4080fb769ab0ba7f2b927b3bdf50ae8649", "size": 2561, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/chapter/DenseNet.tex", "max_stars_repo_name": "Rotvig/cs231n", "max_stars_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-11T12:30:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-11T12:30:50.000Z", "max_issues_repo_path": "Report/chapter/DenseNet.tex", "max_issues_repo_name": "Rotvig/cs231n", "max_issues_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/chapter/DenseNet.tex", "max_forks_repo_name": "Rotvig/cs231n", "max_forks_repo_head_hexsha": "a25aef7f8675eca930fc5cf651409edbcc35c6e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 121.9523809524, "max_line_length": 544, "alphanum_fraction": 0.7973447872, "num_tokens": 578, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.782662489091802, "lm_q2_score": 0.7090191399336401, "lm_q1q2_score": 0.5549226848741914}}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{IB}\n\n\\def\\ntitle{Complex Methods}\n\\def\\nlecturer{R.\\ E.\\ Hunt}\n\n\\def\\nterm{Lent}\n\\def\\nyear{2018}\n\n\\input{header}\n\n\\usepackage[normalem]{ulem}%strikethrough\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Analytic Functions}\n\n\\subsection{The Complex Plane and the Riemann Sphere}\n\nAny \\(z \\in \\C\\) can be written in the form \\(x + iy\\) where \\(x = \\Re z\\) and \\(y = \\Im z\\), \\(x, y \\in \\R\\) or \\(re^{i\\theta}\\) where the \\emph{modulus} \\(|z| = r = \\sqrt{x^2 + y^2}\\) and the argument \\(\\theta = \\arg z\\) satisfies \\(y = x\\tan \\theta\\). The argument is defined only up to multiples of \\(2\\pi\\). The \\emph{principal value of the argument} is the value of \\(\\theta\\) in the range \\((-\\pi, \\pi]\\). Note that the formula \\(\\tan^{-1} \\frac{y}{x}\\) gives the correct value for the principal value of \\(\\theta\\) only if \\(x > 0\\). If \\(x \\leq 0\\) then it might be out by \\(\\pm \\pi\\) (consider \\(1 + i\\) and \\(1 - i\\)).\n\nAn \\emph{open set} \\(D\\) is a subset of \\(\\C\\) which does not include its boundary\\footnote{Hint: this is an applied course.}.\n\nA \\emph{neighbourhood} of a point \\(z\\) is an open set containing \\(z\\).\n\nA \\emph{domain} is an open set that is connected (i.e.\\ cannot be split into wto disjoint open subsets). A \\emph{simply-connected domain} is one with no holes (i.e.\\ any curve lying in the domain can be shrunk continuouly to a point without leaving the domain).\n\nNote that a hole could be caused merely be a particular function under consideration being undefined at a single point, e.g.\\ \\(\\frac{1}{z}\\).\n\nThe \\emph{extended complex plane} is \\(\\C^* = \\C \\cup \\{\\infty\\}\\). We can reach the ``point at infinity'' be going off in any direction in the plane, and all are equivalent. Conceptually we may use the \\emph{Riemann sphere}, which is a sphere resting on the complex plane with its ``south pole'' \\(S\\) at \\(z = 0\\). For any point \\(z \\in \\C\\), drawing a line through the ``north pole'' \\(N\\) of the sphere to \\(z\\) and noting where this line intersects the sphere specifies an equivalent point \\(P\\) on the sphere. Then \\(\\infty\\) is equivalent to the ``north pole'' itself.\n\nTo investigate properties of \\(\\infty\\) we use the substitution \\(\\zeta = \\frac{1}{z}\\). A function \\(f(z)\\) is said to have a particular property at infinity if \\(f(\\frac{1}{z})\\) has the same property at \\(0\\).\n\n\\subsection{Complex Differentiation}\n\nRecall the definition of differentiation for a real function \\(f(x)\\):\n\\[\n  f'(x) = \\lim_{\\delta x \\to 0} \\frac{f(x + \\delta x) - f(x)}{\\delta x}.\n\\]\nIt is implicit that the limit must be the same whichever direction we approach from. Consider \\(|x|\\) at \\(x = 0\\) for example: if we approach from the right (\\(\\delta x \\to 0^+\\)) then the limit is \\(+1\\), whereas from the left (\\(\\delta x \\to 0^-\\)) it is \\(-1\\). Because these limits are different we say that \\(|x|\\) is not differentiable at \\(x = 0\\).\n\nNow extend the defintion to complex function \\(f(z)\\). 
\\(f\\) is \\emph{differentiable} at \\(z\\) if\n\\[\n  f'(z) = \\lim_{\\delta z \\to 0} \\frac{f(z + \\delta z) - f(z)}{\\delta z}\n\\]\nexists (and is therefore independent of direction of approach --- but now there is an infinity of possible directions).\n\nWe say that \\(f\\) is \\emph{analytic} at a point \\(z\\) if there exists a neighbourhood of \\(z\\) throughout which \\(f'\\) exists. The terms \\emph{regular} and \\emph{holomorphic} are also used. A function which is analytic throughout \\(\\C\\) is called \\emph{entire}.\n\nA \\emph{singularity} of \\(f\\) is a point at which it is not analytic, or not even defined.\n\nThe property of analyticity is in fact a surprisingly strong one. For example, two consequences include\n\\begin{enumerate}\n\\item if a function is analytic then it is differentiable infinitely many times (c.f.\\ the existence of real functions which can be differentiated \\(N\\) times but no more, for any given \\(N\\));\n\\item a bounded entire function is constant (c.f.\\ \\(\\tanh x\\) for \\(x \\in \\R\\), which is bounded but not constant).\n\\end{enumerate}\n\n\\subsection{Cauchy-Riemann Equations}\n\nSeparate \\(f\\) and \\(z\\) into real and imaginary parts:\n\\[\n  f(z) = u(x, y) + iv(x, y)\n\\]\nwhere \\(z = x + iy\\) and \\(u, v\\) are real functions. Suppose that \\(f\\) is differentiable at \\(z\\). We may take \\(\\delta z\\) in any direction. First take it to be real, \\(\\delta z = \\delta x\\). Then\n\\begin{align*}\n  f'(z) &= \\lim_{\\delta x \\to 0} \\frac{f(z + \\delta x) - f(z)}{\\delta x} \\\\\n        &= \\lim_{\\delta x \\to 0} \\frac{u(x + \\delta x, y) + iv(x + \\delta x, y) - u(x, y) - iv(x, y)}{\\delta x} \\\\\n        &= \\frac{\\partial u}{\\partial x} + i \\frac{\\partial v}{\\partial x}\n\\end{align*}\nNow take \\(\\delta z\\) to be pure imaginary, \\(\\delta z = i\\delta y\\). Then\n\\begin{align*}\n  f'(z) &= \\lim_{\\delta y \\to 0} \\frac{f(z + i\\delta y) - f(z)}{i\\delta y} \\\\\n        &= \\lim_{\\delta y \\to 0} \\frac{u(x, y + \\delta y) + iv(x, y + \\delta y) - u(x, y) - iv(x, y)}{i\\delta y} \\\\\n        &= -i\\frac{\\p u}{\\p y} + \\frac{\\p v}{\\p y}.\n\\end{align*}\nThe two values for \\(f'(z)\\) are the same since \\(f\\) is differentiable. Thus\n\\begin{align*}\n  \\frac{\\p u}{\\p x} &= \\frac{\\p v}{\\p y} \\\\\n  \\frac{\\p u}{\\p y} &= -\\frac{\\p v}{\\p x}\n\\end{align*}\nThese are known as the \\emph{Cauchy-Riemann equations}. The converse (that a function satisfying the Cauchy-Riemann equations is differentiable) is also true as long as we impose additional requirements, for example that the partial derivatives \\(u_x, u_y, v_x, v_y\\) are continuous functions of \\(x\\) and \\(y\\), in the sense described in IB Analysis II.\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(f(z) = z\\) is entire. Here \\(u = x, v = y\\) and the Cauchy-Riemann equations are satisfied.\n  \\item \\(f(z) = e^z = e^x(\\cos y + i \\sin y)\\) is entire since\n    \\begin{align*}\n      \\frac{\\p u}{\\p x} &= e^x \\cos y = \\frac{\\p v}{\\p y} \\\\\n      \\frac{\\p u}{\\p y} &= -e^x \\sin y = -\\frac{\\p v}{\\p x}\n    \\end{align*}\n    The derivative is\n    \\[\n      f'(z) = e^x \\cos y + ie^x \\sin y = e^z\n    \\]\n    as expected.\n  \\item \\(f(z) = z^n\\), where \\(n\\) is a positive integer, is entire. Writing \\(z = r(\\cos \\theta + i \\sin \\theta)\\) we obtain \\(u = r^n \\cos n\\theta, v = r^n \\sin n\\theta\\). 
We can check the Cauchy-Riemann equations using \\(r = \\sqrt{x^2 + y^2}\\) and \\(\\tan \\theta = \\frac{y}{x}\\). The derivative is \\(nz^{n - 1}\\) as we would expect!\n  \\item Any rational function, i.e.\\ \\(f(z) = \\frac{P(z)}{Q(z)}\\) where \\(P\\) and \\(Q\\) are polynomials, is analytic except at points where \\(Q(z) = 0\\). For instance \\(f(z) = \\frac{z}{z^2 + 1}\\) is analytic except at \\(\\pm i\\).\n  \\item Many standard real functions can be extended naturally to complex functions and obey the usual rules for their derivatives. For example \\(f(z) = \\sin z = \\frac{e^{iz} - e^{-iz}}{2i}\\) has derivative \\(f'(z) = \\cos z\\). We can also write\n    \\begin{align*}\n      \\sin z &= \\sin (x + iy) \\\\\n             &= \\sin x \\cos iy + \\cos x \\sin iy \\\\\n             &= \\sin x \\cosh y + i \\cos x \\sinh y\n    \\end{align*}\n    This applies to other trigonometric functions.\n\n    \\(\\log z = \\log |z| + i \\arg z\\) has derivative \\(\\frac{1}{z}\\).\n\n    The product, quotient and chain rules hold in exactly the same way as for real functions.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(f(z) = \\Re z\\) has \\(u = x, v = 0\\) but \\(\\frac{\\p u}{\\p x} = 1 \\neq 0 = \\frac{\\p v}{\\p y}\\) so \\(\\Re z\\) is nowhere analytic.\n  \\item \\(f(z) = |z|\\) has \\(u = \\sqrt{x^2 + y^2}, v = 0\\) and is nowhere analytic.\n  \\item \\(f(z) = \\conj z = x - iy\\) is nowhere analytic.\n  \\item \\(f(z) = |z|^2 = x^2 + y^2\\). The Cauchy-Riemann equations are satisfied only at the origin, so \\(f\\) is only differentiable at \\(z = 0\\). However it is not analytic there because there is no neighbourhood of \\(0\\) throughout which \\(f\\) is differentiable.\n  \\end{enumerate}\n\\end{eg}\n\n\\subsection{Analytic Continuation*}\n\nIf we are given the values of an analytic function in some restricted region --- which could be rather small, such as a short curve somewhere in the complex plane --- then there is a unique extension of the function to the rest of \\(\\C\\) that is still analytic. (No proof given here.) The extension might have some singularities, and might be multivalued.\n\nThis fact can be useful in extending the domain of definition of a function. We shall see an example in Section 5.2.\n\n\\subsection{Harmonic Functions}\n\nIf \\(f(z) = u + iv\\) is analytic, then\n\\[\n  \\frac{\\p^2 u}{\\p x^2} = \\frac{\\p}{\\p x} \\frac{\\p u}{\\p x} = \\frac{\\p}{\\p x} \\frac{\\p v}{\\p y} = \\frac{\\p}{\\p y} \\frac{\\p v}{\\p x} = -\\frac{\\p^2 u}{\\p y^2}\n\\]\nso \\(u\\) satisfies Laplace's equation in two dimensions, \\(\\gradient^2 u = 0\\). Similarly so does \\(v\\). A function satisfying Laplace's equation in an open set is said to be \\emph{harmonic} there.\n\nFunctions \\(u\\) and \\(v\\) satisfying the Cauchy-Riemann equations are called \\emph{harmonic conjugates}. If we know one then we can find the other, up to a constant. For example, consider \\(u(x, y) = x^2 - y^2\\), which is easily verified to be harmonic. Its harmonic conjugate \\(v\\) satisfies\n\\begin{align*}\n  \\frac{\\p v}{\\p y} &= 2x \\implies v = 2xy + g(x) \\\\\n  \\frac{\\p v}{\\p x} &= 2y \\implies 2y + g'(x) = 2y\n\\end{align*}\nso \\(g'(x) = 0\\), i.e.\\ \\(g = \\alpha\\) for some constant \\(\\alpha\\). The corresponding analytic function whose real part is \\(u\\) is therefore\n\\[\n  f(z) = x^2 - y^2 + 2ixy + i\\alpha = (x + iy)^2 + i\\alpha = z^2 + i\\alpha.\n\\]\n
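This computation is easy to verify symbolically; here is a small Python sketch using the sympy library (the library is an assumption of these notes, not part of the course):\n\\begin{verbatim}\nimport sympy as sp\n\nx, y = sp.symbols('x y', real=True)\nu = x**2 - y**2\n# u is harmonic: u_xx + u_yy = 0\nassert sp.diff(u, x, 2) + sp.diff(u, y, 2) == 0\n# integrate v_y = u_x in y to get v, up to a function g(x)\nv = sp.integrate(sp.diff(u, x), y)        # v = 2*x*y (+ g(x))\n# the other Cauchy-Riemann equation v_x = -u_y forces g'(x) = 0\nassert sp.simplify(sp.diff(v, x) + sp.diff(u, y)) == 0\nprint(sp.factor(u + sp.I*v))  # (x + I*y)**2, i.e. z**2 (recent SymPy)\n\\end{verbatim}\n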
\nIf the domain is not simply-connected then this method might give a solution that is multi-valued. For example, if \\(u = \\frac{1}{2} \\log(x^2 + y^2)\\), which is harmonic in the domain \\(|z| > 0\\), the corresponding \\(f(z)\\) is \\(\\log z\\), which is multi-valued (see \\Cref{sec:multi-valued functions}).\n\nWe end this section with a geometric observation:\n\n\\begin{proposition}\n  Contours of harmonic conjugate functions are perpendicular to each other.\n\\end{proposition}\n\n\\begin{proof}\n  \\(\\gradient u\\) is perpendicular to contours of \\(u\\) (i.e.\\ curves \\(u = \\) constant), using a result from IA Vector Calculus. Similarly \\(\\gradient v\\) is perpendicular to contours of \\(v\\). But\n  \\begin{align*}\n    \\gradient u \\cdot \\gradient v &= \\frac{\\p u}{\\p x}\\frac{\\p v}{\\p x} + \\frac{\\p u}{\\p y}\\frac{\\p v}{\\p y} \\\\\n                                  &= \\frac{\\p u}{\\p x}\\left(-\\frac{\\p u}{\\p y}\\right) + \\frac{\\p u}{\\p y}\\frac{\\p u}{\\p x} \\\\\n                                  &= 0\n  \\end{align*}\n  and the result follows.\n\\end{proof}\n\n\\subsection{Multi-valued functions}\\label{sec:multi-valued functions}\n\nFor \\(z = re^{i\\theta}\\) we define \\(\\log z = \\log r + i\\theta\\). There are thus infinitely many values of \\(\\log z\\), for \\(\\theta\\) may take an infinity of values. For example,\n\\[\n  \\log i = \\frac{\\pi i}{2} \\text{ or } \\frac{5\\pi i}{2} \\text{ or } -\\frac{3\\pi i}{2} \\text{ or } \\dots\n\\]\ndepending on which choice of \\(\\theta\\) we make.\n\nMissed a lecture\n\nNote that a branch cut alone does not specify a branch (compare (b) above, with the principal branch which is a different branch even though it has the same branch cut) nor is a single value of the function sufficient by itself (compare (a) and (c) above).\n\n\\subsubsection{Riemann Surfaces*}\n\nRiemann imagined different branches as separate copies of \\(\\C\\), stacked on top of each other but each one joined to the next at the branch cut. This structure is a \\emph{Riemann surface}.\n\n\\subsubsection{Multiple Branch Cuts}\n\nWhen there is more than one branch point we may need more than one branch cut. For \\(f(z) = (z(z - 1))^{1/3}\\) there are branch points at \\(0\\) and \\(1\\), so we need two branch cuts. A possibility is shown below. Then no curve can wrap round either \\(0\\) or \\(1\\).\n\nFor any \\(z\\) write \\(z = re^{i\\theta}\\) and \\(z - 1 = r_1e^{i\\theta_1}\\) with \\(\\theta \\in (-\\pi, \\pi], \\theta_1 \\in [0, 2\\pi)\\) and define\n\\[\n  f(z) = \\sqrt[3]{rr_1} e^{i(\\theta + \\theta_1)/3}\n\\]\nThis is continuous so long as we don't cross either cut. Sometimes we need fewer branch cuts than we might think. See the worked example.\n\n\\subsection{M\u00f6bius Maps}\n\nThe M\u00f6bius map \\(z \\mapsto w = \\frac{az + b}{cz + d}\\) where \\(ad - bc \\neq 0\\), is analytic except at \\(z = -\\frac{d}{c}\\). It is useful to consider it as a map \\(\\C^* \\to \\C^*\\) with\n\\begin{align*}\n  -\\frac{d}{c} &\\mapsto \\infty \\\\\n  \\infty &\\mapsto \\frac{a}{c}\n\\end{align*}\nIt is then bijective, the inverse being \\(w \\mapsto z = \\frac{-dw + b}{cw - a}\\), another M\u00f6bius map.\n\n\\begin{definition}[Circline]\n  A \\emph{circline} is either a circle or a line.\n\\end{definition}\n\nM\u00f6bius maps take circlines to circlines.\n\n\\begin{proof}\n  Any circline can be expressed as a circle of Apollonius\n  \\[\n    |z - z_1| = \\lambda|z - z_2|\n  \\]\n  where \\(z_1, z_2  \\in \\C, \\lambda \\in \\R_{> 0}\\). This is a result from IA Vectors and Matrices. 
The case \\(\\lambda = 1\\) corresponds to a line and \\(\\lambda \\neq 1\\) to a circle. Apply a M\u00f6bius map,\n  \\begin{align*}\n    \\left| \\frac{-dw + b}{cw - a} - z_1 \\right| &= \\lambda \\left| \\frac{-dw + b}{cw - a} - z_2 \\right| \\\\\n    |(cz_1 + d)w - (az_1 + b)| &= \\lambda|(cz_2 + d)w - (az_2 + b)| \\\\\n    |w - w_1| &= \\lambda \\left| \\frac{cz_2 + d}{cz_1 + d} \\right| |w - w_2|\n  \\end{align*}\n  where \\(w_1 = \\frac{az_1 + b}{cz_1 + d}, w_2 = \\frac{az_2 + b}{cz_2 + d}\\), which is another circle of Apollonius.\n\n  The proof fails if either \\(cz_1 + d = 0\\) or \\(cz_2 + d = 0\\), but in either of these cases the equation trivially defines a circle.\n\\end{proof}\n\nGeometrically it is clear that choosing three distinct points in \\(\\C^*\\) uniquely specifies a circline (if one of the points is \\(\\infty\\) then we have specified the straight line through the other two points).\n\nGiven \\(\\alpha, \\beta, \\gamma, \\alpha', \\beta', \\gamma' \\in \\C^*\\) we can find a M\u00f6bius map which sends \\(\\alpha \\mapsto \\alpha'\\) etc.\n\n\\begin{proof}\n  The map\n  \\[\n    f_1(z) = \\frac{\\beta - \\gamma}{\\beta - \\alpha}\\frac{z - \\alpha}{z - \\gamma}\n  \\]\n  sends \\(\\alpha \\mapsto 0, \\beta \\mapsto 1, \\gamma \\mapsto \\infty\\). Let \\(f_2(z)\\) be defined analogously for the primed version. Then \\(f_2^{-1} \\compose f_1\\) is the required mapping. It is also a M\u00f6bius map as they are closed under composition.\n\\end{proof}\n\nPutting all these results together, we conclude that we can find a M\u00f6bius map taking any given circline to any other.\n\n\\subsection{Conformal Maps}\n\n\\begin{definition}[Conformal map]\n  A \\emph{conformal map} \\(f: U \\to V\\), where \\(U, V\\) are \\emph{open} subsets of \\(\\C\\), is one which is analytic with non-zero derivative in \\(U\\).\n\\end{definition}\n\nThough not part of the definition, it is usual (and helpful) to require that \\(f\\) be one-to-one from \\(U\\) to \\(V\\).\n\nAn alternative definition is that a conformal map is one that preserves the angle (in both magnitude and orientation) between intersecting curves. We shall show that our definition implies this. The converse is also true (proof omitted) so the two definitions are equivalent.\n\nSuppose that \\(z_1(t)\\) is a curve in \\(\\C\\) parameterised by \\(t \\in \\R\\), which passes through a point \\(z_0\\) when \\(t = t_1\\). Suppose further that its tangent there, \\(z_1'(t_1)\\), has a well-defined direction. Then \\(z_1'(t_1) \\neq 0\\) and the curve makes an angle \\(\\phi = \\arg z_1'(t_1)\\) to the \\(x\\)-axis at \\(z_0\\). Consider the image of the curve, \\(Z_1(t) = f(z_1(t))\\). Its tangent direction at \\(t = t_1\\) is\n\\[\n  Z_1'(t_1) = z_1'(t_1)f'(z_1(t_1)) = z_1'(t_1)f'(z_0)\n\\]\nand therefore makes an angle with the \\(x\\)-axis of\n\\[\n  \\arg Z_1'(t_1) = \\phi + \\arg f'(z_0).\n\\]\nNote that \\(\\arg f'(z_0)\\) exists since \\(f\\) is conformal so \\(f'(z_0) \\neq 0\\). In other words, the tangent direction is rotated by \\(\\arg f'(z_0)\\).\n\nNow if \\(z_2(t)\\) is another curve passing through \\(z_0\\) then its tangent direction will also be rotated by \\(\\arg f'(z_0)\\). The result follows.\n\n
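A quick numerical sanity check of this angle-preservation property (plain Python; the setup is illustrative):\n\\begin{verbatim}\nimport cmath\n\ndef tangent_angle(curve, t, h=1e-6):\n    # numerical arg(z'(t)) via a central difference\n    return cmath.phase((curve(t + h) - curve(t - h)) / (2 * h))\n\nf = lambda z: z**2                       # conformal away from z = 0\nz0 = 1 + 1j\nc1 = lambda t: z0 + t * cmath.exp(0.3j)  # line through z0 at angle 0.3\nc2 = lambda t: z0 + t * cmath.exp(1.1j)  # line through z0 at angle 1.1\nbefore = tangent_angle(c2, 0) - tangent_angle(c1, 0)\nafter = (tangent_angle(lambda t: f(c2(t)), 0)\n         - tangent_angle(lambda t: f(c1(t)), 0))\nprint(round(before, 6), round(after, 6))  # both 0.8: angle preserved\n\\end{verbatim}\n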
Sometimes we do not know what \\(V\\), the image set of \\(f\\) acting on \\(U\\), is in advance. Often the easiest way to find it is first to find the image of the boundary \\(\\p U\\), which will form the boundary \\(\\p V\\) of \\(V\\); but, since this does not reveal upon which side of \\(\\p V\\) \\(V\\) lies, to then find the image of a single point of our choice within \\(U\\), which will lie within \\(V\\).\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item The map \\(z \\mapsto az + b\\) where \\(a, b \\in \\C, a \\neq 0\\), rotates by \\(\\arg a\\), enlarges by \\(|a|\\) and translates by \\(b\\) and is conformal everywhere.\n  \\item \\(f(z) = z^2\\) is a conformal map from\n    \\[\n      U = \\{z: 0 < |z| < 1, 0 < \\arg z < \\frac{\\pi}{2}\\}\n    \\]\n    to\n    \\[\n      V = \\{w: 0 < |w| < 1, 0 < \\arg w < \\pi\\}.\n    \\]\n    Note that the right angle between the two boundary curves at \\(z = 1\\) is preserved because \\(f\\) is conformal there. Similarly at \\(z = i\\). But the right angle at \\(z = 0\\) is not preserved because \\(f\\) is not conformal there (\\(f'(0) = 0\\)). Fortunately this does not matter since \\(U\\) is an open set so does not include \\(0\\).\n  \\item How would we map the left-hand half-plane\n    \\[\n      U = \\{z: \\Re z < 0\\}\n    \\]\n    to a wedge\n    \\[\n      V = \\{w: -\\frac{\\pi}{4} < \\arg w < \\frac{\\pi}{4}\\}?\n    \\]\n    We need to halve the angle, so we use \\(z^{1/2}\\), for which we need to choose a branch. The branch cut must not lie in \\(U\\) (since \\(z^{1/2}\\) is not analytic on the branch cut) so choose a cut along the negative imaginary axis: \\(re^{i\\theta} \\mapsto \\sqrt{r} e^{i\\theta/2}\\) where \\(\\theta \\in (-\\pi/2, 3\\pi/2]\\). Having defined this branch, we now apply \\(z^{1/2}\\) to \\(U\\) to produce the wedge \\(\\{z': \\pi/4 < \\arg z' < 3\\pi/4\\}\\); so we just need to rotate through \\(-\\pi/2\\). The final map is \\(f(z) = -iz^{1/2}\\).\n  \\item \\(e^z\\) takes rectangles conformally to sectors of annuli. With an appropriate choice of branch, \\(\\log z\\) does the reverse.\n  \\item M\u00f6bius maps (which are conformal everywhere except at the point that is sent to \\(\\infty\\)) are very useful in taking circles, or parts of them, to straight lines, or vice versa. Consider \\(f(z) = \\frac{z - 1}{z + 1}\\) acting on the unit disc \\(U = \\{z: |z| < 1\\}\\). The boundary of \\(U\\) is a circle; the three points \\(-1, i\\) and \\(1\\) lie on the circle and are mapped to \\(\\infty, i\\) and \\(0\\) respectively. Therefore the image of \\(\\p U\\) is the imaginary axis; since \\(f(0) = -1\\) we see that the image of \\(U\\) is the left-hand half-plane.\n\n    The inverse map, which is \\(z \\mapsto \\frac{1 + z}{1 - z}\\), maps \\(V\\) to \\(U\\) conformally.\n\n    Alternatively, \\(w = \\frac{z - 1}{z + 1}\\) if and only if \\(z = \\frac{1 + w}{1 - w}\\). 
So \\(|z| < 1\\) if and only if \\(|w + 1| < |w - 1|\\), i.e.\\ \\(w\\) is close to \\(-1\\) than ti is to \\(1\\), which describges precisely the left-hand half-plane.\n\n    In fact this particular map can usefully be depoloyed more generally on \\emph{quadrants} of the unit disc or of the complex plane.\n  \\item \\(f(z) = \\frac{1}{z}\\) is a simple M\u00f6bius map useful for acting on vertical or horizontal lines, which maps to circles passing through the origin with centres one of the axes, or for mapping sectors within the unit circle to sectors outside the circle, or vice versa.\n  \\end{enumerate}\n\\end{eg}\n\nIn practice, complicated conformal maps are usually built up from individual building blocks, each a simple conformal map; the required map is the composition of these (note that the composition of conformal maps is conformal, by the chain rule). Seee the worked examples.\n\n\\sout{Hungry. Skipped a lecture.}\n\nHungry. Skipped two lectures.\n\n\\section{Laurent series and Cauchy's theorem}\n\n\\subsection{Taylor and Laurent series}\n\n\\subsection{Zeroes}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(z^3 + iz^2 + z + i = (z - i)(z + i)^2\\) has a simple zero at \\(z = i\\) and a zero of order two at \\(z = -i\\).\n  \\item \\(\\sinh x\\) has zero where\n    \\[\n      \\frac{e^z - e^{-z}}{2} = 0\n    \\]\n    so \\(z \\in \\pi i\\Z\\). The zeros are all simple since \\(\\cosh n\\pi i = \\cos n\\pi \\neq = 0\\).\n  \\item Since \\(\\sinh z\\) has a simple zero at \\(z = \\pi i\\), \\(\\sinh^3 z\\) has a zero of order \\(3\\) there. If needed, we can find its Taylor series about \\(\\pi i\\) by write \\(\\zeta = z - \\pi i\\).\n    \\begin{align*}\n      \\sinh^3 z\n      &= \\sinh^3 (\\zeta + \\pi i) \\\\\n      &= (-\\sinh \\zeta)^3 \\\\\n      &= -(\\zeta + \\frac{1}{3!}\\zeta^3 + \\dots) ^3 \\\\\n      &= -\\zeta^3 - \\frac{1}{2}\\zeta^5 - \\dots \\\\\n      &= -(z - \\pi i)^3 - \\frac{1}{2}(a - \\pi i)^5 + \\dots\n    \\end{align*}\n  \\end{enumerate}\n\\end{eg}\n\n\\subsection{Laurent Series}\n\nIf \\(f\\) has a singularity at \\(z_0\\) then we cannot expect it to have a Taylor series there. Instead, if \\(f\\) is analytic in an annulus \\(R_1 < |z - z_0| < R_2\\) then it has a \\emph{Laurent series} about \\(z_0\\)\n\\[\n  f(z) = \\sum_{n = \\infty}^\\infty a_n(z - z_0)^n\n\\]\nconvergent within the annulus. See the proof in the separate sheet.\n\nIt can be shown that the Laurent series for \\(f\\) about a particular \\(z_0\\) is unique within any given radius. 
Note that Taylor series are just a special case of Laurent series, with \\(a_n = 0\\) for all \\(n < 0\\).\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(\\frac{e^z}{z^3}\\) has a Laurent series about \\(z_0 = 0\\) given by\n    \\[\n      \\frac{e^z}{z^3} = \\sum_{m = 0}^\\infty \\frac{z^{m - 3}}{m!} = \\sum_{n = -3}^\\infty \\frac{z^n}{(n + 3)!}\n    \\]\n    so \\(a_n = \\frac{1}{(n + 3)!}\\) for \\(n \\geq -3\\).\n  \\item \\(e^{1/z}\\) about \\(z_0 = 0\\) has\n    \\[\n      e^{1/z} = 1 + \\frac{1}{z} + \\frac{1}{2!z^2} + \\frac{1}{3!z^3} + \\dots\n    \\]\n    so \\(a_n = \\frac{1}{(-n)!}\\) for \\(n \\leq 0\\).\n  \\item If \\(f(z) = \\frac{1}{z - a}\\) where \\(a \\in \\C\\) then \\(f\\) is analytic in \\(|z| < |a|\\) so it has a Taylor series about \\(z = 0\\) given by\n      \\[\n        \\frac{1}{z - a} = -\\frac{1}{a} \\left( 1 - \\frac{z}{a} \\right)^{-1} = -\\sum_{n = 0}^\\infty a^{-n - 1}z^n.\n      \\]\n      The binomial expansion is valid since \\(\\left| \\frac{z}{a} \\right| < 1\\). In \\(|z| > |a|\\) it has a Laurent series (in the annulus \\(|a| < |z| < \\infty\\)) given by\n      \\[\n        \\frac{1}{z - a} = \\frac{1}{z} \\left( 1 - \\frac{a}{z} \\right)^{-1} = \\sum_{m = 0}^\\infty \\frac{a^m}{z^{m + 1}}.\n      \\]\n  \\end{enumerate}\n\\end{eg}\n\n\n\\end{document}\n", "meta": {"hexsha": "f64fc042ce0fd614f769302457bed0e0d0d70166", "size": 22027, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "IB/complex_methods.tex", "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_issues_repo_path": "IB/complex_methods.tex", "max_issues_repo_name": "b-mehta/tripos", "max_issues_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_forks_repo_path": "IB/complex_methods.tex", "max_forks_repo_name": "b-mehta/tripos", "max_forks_repo_head_hexsha": "8d3037ede28fed3a3cdb82a88dd3a005bf94b310", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "avg_line_length": 60.6804407713, "max_line_length": 629, "alphanum_fraction": 0.6292277659, "num_tokens": 7432, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.5549226824977794}}
{"text": "%#################### 7.1 ####################\n\\section{Simple pie chart}\n\t\\begin{table}[h!]\n\t\\begin{tabular}{c | c}\n\t\\begin{minipage}[m]{0.4\\textwidth}\n\t\\enum{ \n\t\\begin{tikzpicture}[thick,scale=0.6, every node/.style={transform shape}] \n\\pie{22.97/Los Angeles Lakers,\n22.97/Boston,\n8.11/Golden State,\n8.11/Chicago,\n6.76/San ,\n31.07/Other Teams}\n\\end{tikzpicture}}{7.1}\n\t\\end{minipage}\n\t&\n\t\\begin{minipage}[m]{0.55\\textwidth}\n\t\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\t\\begin{lstlisting}[numberstyle=\\zebra{red!15}{yellow!15},numbers=left,basicstyle=\\footnotesize]{tex}\n\\documentclass[border=0.2cm]{standalone} \n\\usepackage{pgf-pie}  \n\n\\begin{document}\n\\begin{tikzpicture}\n\\pie{22.97/Los Angeles Lakers,\n22.97/Boston Celtics,\n8.11/Golden State Warriors,\n8.11/Chicago Bulls,\n6.76/San Antonio Spurs,\n31.07/Other Teams}\n\\end{tikzpicture}\n\\end{document}\n\t\\end{lstlisting}\n\t\\end{minipage}\n\t\\end{tabular}\n\t\\end{table}\n\n\t%#################### 7.2 ####################\n\\section{Circled arrows with text}\n\\begin{table}[h!]\n\\begin{tabular}{c | c}\n\\begin{minipage}[m]{0.4\\textwidth}\n\\enum{ \n\\begin{center}\n\\begin{tikzpicture}[->,scale=.9]\n\\node (i) at (90:1cm)  {$T$};\n\\node (j) at (-30:1cm) {$D$};\n\\node (k) at (210:1cm) {$R$};\n\\draw (70:1cm)  arc (70:-10:1cm) node[midway, right] {{\\footnotesize 1}};\n\\draw (-50:1cm) arc (-50:-130:1cm) node[midway, below] {{\\footnotesize 1}};\n\\draw (190:1cm) arc (190:110:1cm) node[midway, left] {{\\footnotesize -1}};\n\\end{tikzpicture}\\end{center} }{7.2}\n\\end{minipage}\n&\n\\begin{minipage}[m]{0.55\\textwidth}\n\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\\begin{lstlisting}[numberstyle=\\zebra{red!15}{yellow!15},numbers=left,basicstyle=\\scriptsize]{tex}\n\\documentclass{article} \n\\usepackage{tikz}\n\n\\begin{document}\n\\begin{tikzpicture}[->,scale=.7]\n\\node (i) at (90:1cm)  {$T$};\n\\node (j) at (-30:1cm) {$D$};\n\\node (k) at (210:1cm) {$R$};\n\\draw (70:1cm)  arc (70:-10:1cm) node[midway, right] {{\\footnotesize 1}};\n\\draw (-50:1cm) arc (-50:-130:1cm) node[midway, below] {{\\footnotesize 1}};\n\\draw (190:1cm) arc (190:110:1cm) node[midway, left] {{\\footnotesize -1}};\n\\end{tikzpicture}\n\\end{document}\n\\end{lstlisting}\n\\end{minipage}\n\\end{tabular}\n\\end{table}\n\\newpage\n%#################### 7.3 ####################\n\\section{Diamond with text}\n\t\\begin{table}[h!]\n\t\\begin{tabular}{c | c}\n\t\\begin{minipage}[m]{0.4\\textwidth}\n\t\\enum{   \\includegraphics[width=1\\linewidth]{7.3.pdf}   }{7.3}\n\t\\end{minipage}\n\t&\n\t\\begin{minipage}[m]{0.55\\textwidth}\n\t\\renewcommand\\textminus{\\mbox{-}}%<<<<<<<<<<<\n\t\\begin{lstlisting}[numberstyle=\\zebra{red!15}{yellow!15},numbers=left,basicstyle=\\scriptsize]{tex}\n\\documentclass[a4paper,14pt]{extreport}\n\\usepackage[left=1.5cm,right=1.5cm,top=1.5cm,bottom=2cm,bindingoffset=0cm]{geometry}\n\\usepackage{amsmath}\n\\usepackage{tikz}\n\\usetikzlibrary{shapes.geometric}\n \n\\begin{document}\n\\begin{tikzpicture}\n\\node[diamond,font=\\small,\nline width=0.4mm,scale=0.7,\n    draw = cyan, minimum width = 7.5cm, %text = red,\n    minimum height = 9cm] (d) at (0,0) { };\n      \\node [above=0.5cm] (a) at (d.90) {$w = f(x,y)$};\n      \\node [above=0.5cm,right=0.1cm] (b) at (d.45) {$\\dfrac{\\partial w}{\\partial y}$};\n      \\node [above=0.5cm,left=0.1cm] (c) at (d.135) {$\\dfrac{\\partial w}{\\partial x}$};\n      \\node [left=0.1cm] (dd) at (d.180) {$x$};\n      \\node 
[right=0.1cm] (e) at (d.0) {$y$};\n      \\node [below=0.1cm] (f) at (d.270) {$t$};\n      \\node [below=0.9cm,right=-0.3cm] (g) at (d.-30) {$\\dfrac{\\partial y}{\\partial t}$};\n      \\node [below=0.5cm,left=0.1cm] (h) at (d.220) {$\\dfrac{\\partial x}{\\partial t}$};\n      \\node at (d.90) [cyan,circle,fill,inner sep=3pt]{};\n      \\node at (d.180) [cyan,circle,fill,inner sep=3pt]{};\n      \\node at (d.0) [cyan,circle,fill,inner sep=3pt]{};\n      \\node at (d.270) [cyan,circle,fill,inner sep=3pt]{};\n\\end{tikzpicture}\n\\end{document}\n\t\\end{lstlisting}\n\t\\end{minipage}\n\t\\end{tabular}\n\t\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "520a75c17f90fb481217f3c0542f2234adad71a0", "size": 3897, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/C7.tex", "max_stars_repo_name": "AnMnv/eBook", "max_stars_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-12-16T17:18:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-16T17:59:04.000Z", "max_issues_repo_path": "source/C7.tex", "max_issues_repo_name": "AnMnv/eBook", "max_issues_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/C7.tex", "max_forks_repo_name": "AnMnv/eBook", "max_forks_repo_head_hexsha": "768a1914311525e412c0edcb79e858f2b68bd544", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5227272727, "max_line_length": 101, "alphanum_fraction": 0.6248396202, "num_tokens": 1572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7826624840223698, "lm_q1q2_score": 0.5549226764676299}}
{"text": "\\lab{Policy Function Iteration}{Policy Function Iteration}\n\\objective{This section teaches how to improve dynamic programming convergence using policy function iteration.}\n\nNow that we have covered how to solve simple dynamic programming problems by value function iteration, we consider\nthe convergence of the algorithm.  We demonstrate two additional methods known as policy function iteration, or Howard's\nImprovement, and modified policy function iteration.\n\n\\section*{Policy Function Iteration}\nFor infinite horizon dynamic programming problems, it can be shown that value function iteration converges at the\nrate $\\beta$, where $\\beta$ is the discount factor.  In practice, $\\beta$ is usually close to one, which means this\nalgorithm often converges slowly.\n\nIn order to examine the value function iteration algorithm, it is helpful to see which functions take the most runtime.\nWe can perform this analysis using the profiling tools in IPython.\n\\begin{problem}\n\\label{prob:profile}\nImport your function \\li{eatCake} from the Value Function Iteration lab into an IPython environment.\nProfile this function with arguments $\\beta = .9$, $N = 100$, \\li{finite=False}, \\li{plot=False}.\nNow change the arguments $\\beta = .95$ and $N = 1000$.\nHow does your runtime change? What parts of the code take up the most time?\n\nRecall that we can access the profiling tool in IPython as follows:\n\\begin{lstlisting}\n>>> %prun function_call\n\\end{lstlisting}\nwhere \\li{function_call} is a placeholder for whatever function you wish to profile.\n\\end{problem}\n\nIn Problem \\ref{prob:profile} you should have noticed that runtime was significantly longer to run for larger $N$ or\n$\\beta$ closer to 1.  The profiler gives more detailed information than just the overall runtime, however.\nThe results of Problem \\ref{prob:profile} should look something like the following (we have removed a few\ncolumns and several rows from the full output):\n\\begin{lstlisting}\n     503 function calls in 2.101 seconds\n\nOrdered by: internal time\n\nncalls  tottime  filename:lineno(function)\n    1    1.759   <ipython-input-4-1487b3d35ebf>:1(eatCake)\n   59    0.244   {method 'argmax' of 'numpy.ndarray' objects}\n    1    0.033   twodim_base.py:427(triu)\n    1    0.018   {numpy.core.multiarray.where}\n    1    0.009   method 'repeat' of 'numpy.ndarray' objects}\n    1    0.009   {method 'outer' of 'numpy.ufunc' objects}\n    1    0.005   twodim_base.py:349(tri)\n    1    0.004   twodim_base.py:657(mask_indices)\n    1    0.004   {method 'astype' of 'numpy.ndarray' objects}\n   59    0.004   {method 'reduce' of 'numpy.ufunc' objects}\n\\end{lstlisting}\nWe notice that the most time was spent in the maximization step (\\li{method 'argmax'}).\n\nRecall that the value function iteration method for the infinite horizon problem\napproximates the value function $V$ iteratively, producing a sequence of\napproximations $V_1, V_2, V_3, \\ldots$ that converge to $V$.\nAt each step, the algorithm also determines the policy function $\\psi_k$\ncorresponding to the approximated value function $V_k$.\nThis process can also be viewed as applying the approximate policy function $\\psi_k$ once to obtain\nthe approximate value function $V_k$.\nUnfortunately, this gives us a rather crude approximation to the value function,\nresulting in slow convergence.\n\nWhat if instead of iteratively approximating the value function in this manner, we instead iteratively approximate\nthe policy function, and calculate the 
exact value function associated with each new policy function?\nThis is the idea behind the policy function iteration algorithm.\nIn this way we iterate on the policy functions rather than the value functions.\nThe policy function iteration algorithm can be summarized as follows:\n\\begin{enumerate}\n\\item Set an initial policy rule $W' = \\psi_0(W)$ and a tolerance $\\delta$.\n\n\\item \\label{item:step2_PFI} Compute the value function assuming this rule is used forever:\n\\begin{equation*}\nV_k(W) = \\sum_{t=0}^\\infty \\beta^t u(W_t - \\psi_k(W_t)).\n\\end{equation*}\n\n\\item Determine a new policy $\\psi_{k+1}$ so that\n\\begin{equation*}\n\\psi_{k+1}(W) = \\underset{W'}{\\text{argmax}} \\{u(W-W') + \\beta V_k(W')\\}.\n\\end{equation*}\n\n\\item If $\\delta_k = \\|\\psi_{k+1} - \\psi_k\\| < \\delta$, stop, otherwise go back to step \\ref{item:step2_PFI} with subscript $k+1$.\n\\end{enumerate}\nIn order to compute the value function $V_k$ corresponding to a given policy $\\psi_k$, we must solve\n\\begin{equation}\n\\label{Val_Fun}\nV_k(W) = u(W-W') + \\beta V_k(W')\n\\end{equation}\nfor $V_k$.\n\nAs always, we take a discrete approximation to $W$, obtaining a length $N$ vector\n\\[\nw = (w_1, w_2, \\ldots, w_N)\n\\]\ngiving the possible cake quantities.\nThe variable $W'$ becomes $w' = \\psi_k(w)$.\nOnce we have done this, equation \\eqref{Val_Fun} is a linear system which we can rewrite as\n\\begin{equation*}\nV_k(w) = u(w-w') + \\beta QV_k(w)\n\\end{equation*}\nwhere $Q$ is the $N\\times N$ matrix\n\\begin{equation*}\nQ_{ij} = \\left\\{\n     \\begin{array}{ll}\n       1 & \\text{if} \\quad  w_i' = w_j\\\\\n       0 & \\text{otherwise.}\n     \\end{array}\n   \\right.\n\\end{equation*}\nSolving this system of equations, we have $V_k(w) = (I-\\beta Q)^{-1}u(w-w')$.\nAlthough $Q$ may be large, we can take advantage of the fact that it is sparse, containing only $N$ nonzero entries out of $N^2$ total entries.\n\n\\begin{problem}\n\\label{prob:cake_eating_policyfun}\nSolve the infinite horizon cake eating problem from the Value Function Iteration lab again, this time using policy function iteration.\nWrite a function \\li{policyIter} that takes arguments $\\beta$ (discount factor), $N$ (size of discrete approximation), and\n$W_{max}$ (size of original cake, set to default value of 1). Take the function $u$ to be the square root function, as before.\nReturn the converged value function and policy function, both of which should be arrays of length $N$.\n\nAs in the value function iteration, first create your discrete approximation of the $W$ values:\n\\begin{lstlisting}\n>>> import numpy as np\n>>> w = np.linspace(0, Wmax, N)\n\\end{lstlisting}\n\nYou will still need to pre-compute all values of $u(W-W')$, storing these in an $N \\times N$ array. Make sure, as before,\nthat the upper triangular entries are set to large negative values, so that we don't choose to consume more cake than is\navailable. In the code snippets below, we will refer to this array as \\li{U}.\n\nYou may also find it convenient to not track the approximated policy function $\\psi_k$ directly during the iteration, but\nrather to have a length $N$ array \\li{psi_ind}, whose $i$-th entry gives the index of $w_i'$ relative to the discrete approximation\n$w$. 
Thus, rather than initializing a policy function $\\psi_0$ directly, you can initialize it indirectly by setting, for example,\n\\begin{lstlisting}\n>>> psi_ind = np.arange(N)\n\\end{lstlisting}\nwhich corresponds to an initial policy $\\psi_0(W) = W$.\nThe policy function vector can be obtained from \\li{psi_ind} by simply slicing \\li{w} as follows:\n\\begin{lstlisting}\n>>> psi = w[psi_ind]\n\\end{lstlisting}\n\nIn order to take advantage of the sparse matrices $I$ and $Q$, use the following imports from the SciPy \\li{sparse} library:\n\\begin{lstlisting}\n>>> from scipy import sparse\n>>> from scipy.sparse import linalg\n\\end{lstlisting}\nand the following code to initialize $I$ (outside the loop)\n\\begin{lstlisting}\n>>> I = sparse.identity(N, format='csr')\n>>> rows = np.arange(0, N)\n\\end{lstlisting}\nand $Q$ (inside the loop)\n\\begin{lstlisting}\n>>> columns = psi_ind\n>>> data = np.ones(N)\n>>> Q = sparse.coo_matrix((data, (rows,columns)), shape=(N,N))\n>>> Q = Q.tocsr()\n\\end{lstlisting}\nRather than compute $(I-\\beta Q)^{-1}$ directly, use SciPy's sparse solver\n\\begin{lstlisting}\n>>> V = linalg.spsolve(I-beta*Q, u(w-w[psi_ind]))\n\\end{lstlisting}\nwhere \\li{u} is the square root function.\n\nIn each iteration, we update \\li{psi_ind} much as we did in the value function iteration algorithm:\n\\begin{lstlisting}\n>>> psi_ind = np.argmax(U + beta*V, axis=1)\n\\end{lstlisting}\nwhere \\li{V} broadcasts against \\li{U} as a row vector of shape \\li{(1,N)}.\n\nUse the 2-norm when computing $\\|\\psi_{k+1} - \\psi_k\\|$, and set the tolerance to $\\delta = 10^{-9}$.\n\nTake $N = 1000$ and $\\beta = .95$.\nPlot the policy function and compare with your policy function from the Value Function Iteration Lab, using\nthe same inputs. You should obtain the same answer, but in less runtime.\n\\end{problem}\n
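\nFor reference, the pieces above can be assembled into a single routine. The following is a minimal sketch under the stated assumptions (square root utility, upper triangular entries of \\li{U} masked with large negative values); it is one possible arrangement, not the required solution:\n\\begin{lstlisting}\nimport numpy as np\nfrom scipy import sparse\nfrom scipy.sparse import linalg\n\ndef policy_iter_sketch(beta=0.95, N=1000, Wmax=1.0, delta=1e-9):\n    w = np.linspace(0, Wmax, N)\n    u = np.sqrt\n    # Precompute u(W - W'); forbid eating more cake than is available.\n    U = u(np.maximum(w.reshape(N,1) - w.reshape(1,N), 0))\n    U[np.triu_indices(N, k=1)] = -1e10\n    I = sparse.identity(N, format='csr')\n    rows = np.arange(N)\n    psi_ind = np.arange(N)          # initial policy: save all the cake\n    while True:\n        # Policy evaluation: solve (I - beta*Q)V = u(w - w') exactly.\n        data = np.ones(N)\n        Q = sparse.coo_matrix((data, (rows, psi_ind)), shape=(N,N)).tocsr()\n        V = linalg.spsolve(I - beta*Q, u(w - w[psi_ind]))\n        # Policy improvement.\n        psi_new = np.argmax(U + beta*V, axis=1)\n        if np.linalg.norm(w[psi_new] - w[psi_ind]) < delta:\n            return V, w[psi_new]\n        psi_ind = psi_new\n\\end{lstlisting}\n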
\n\\section*{Modified Policy Function Iteration}\nWhile policy function iteration converges in fewer iterations, solving the linear system can be slow,\nespecially for problems with a large state space.  There is an alternative to this called modified policy\nfunction iteration.\n\nIn modified policy function iteration, we don't compute the exact value function corresponding to a policy.\nInstead, at step \\ref{item:step2_PFI} of the policy iteration algorithm we iterate $m$ times on the value function\nequation \\eqref{Val_Fun} to get an approximation of the new value function.  This is faster than solving for the\nexact value function for large state spaces.  There is no strict rule on the value of $m$, the number of value function\niterations.  In practice values such as $m=10$ or $m=15$ often work well.\n\nNote that our methods for solving dynamic programs boil down to some combination of two things: iterating on the value\nfunction and iterating on the policy function.  Modified policy function iteration combines the two, drawing on the\nadvantages of both methods.  Because modified policy iteration takes only slightly more work to code than value\nfunction iteration, it is often preferred in practice.  Whether policy or modified policy iteration will perform better\nmay depend on the problem.\n\n\\begin{problem}\nSolve the infinite horizon cake eating problem again, this time using the modified policy function\niteration method.\n\nWrite a function \\li{modPolicyIter} that takes the same arguments as your \\li{policyIter} function, along\nwith an additional keyword argument $m$ (set to default value $m=15$), which gives the number of iterations used\nto approximate the value function at each step. Return the converged value function and policy function.\n\nMuch of the code in your \\li{policyIter} will be unchanged when implementing this function. The key differences are\nas follows.\nFirst, you do not need to initialize the sparse arrays $I$ and $Q$ in the modified policy function iteration.\nSecondly, rather than solving a linear system of equations to obtain the value function, you will loop\nthe equation\n\\begin{lstlisting}\n>>> V = u(w - w[psi_ind]) + beta*V[psi_ind]\n\\end{lstlisting}\n$m$ times in each iteration. Notice how this line of code corresponds with Equation \\eqref{Val_Fun}, where\n\\li{w} represents $W$, \\li{w[psi_ind]} represents $W'$, and \\li{V[psi_ind]} represents $V(W')$.\n\nThe remainder of the code should be unchanged.\n\\end{problem}\n\n\\begin{problem}\nSolve the cake eating problem with each of the three methods and report how many iterations each takes.  Use $N = 1000$ as the\nnumber of grid points for $W$ and $\\beta = 0.95$.  It is important that you use the same initial guess in each case in order to\nmake the results comparable.  The accuracy of the initial guess greatly affects the number of iterations to convergence.  Take\nyour initial guess as $V = 0$, which corresponds to an initial guess of the policy function with indices $[0,1,2,\\ldots, N-1]$\n(meaning $\\psi(W) = W$, i.e. we eat no cake, leaving it all for the next period).\n\\end{problem}\n\nIn general we should see that value function iteration takes more iterations than modified policy function iteration, which in turn\ntakes more iterations than policy function iteration.  It is important to note that this does not directly say anything about runtime.\nEach iteration of policy iteration may take longer than an iteration of value function iteration.\n
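\nAs a concrete illustration of the approximate evaluation step, the following minimal sketch replaces the sparse solve in \\li{policyIter} with $m$ sweeps of the value update; the names mirror the earlier sketch and this is not the required solution:\n\\begin{lstlisting}\nimport numpy as np\n\ndef evaluate_policy_approx(V, psi_ind, w, beta=0.95, m=15):\n    # Approximate policy evaluation: iterate V <- u(w - w') + beta*V(w')\n    # m times instead of solving the linear system exactly.\n    u = np.sqrt\n    for _ in range(m):\n        V = u(w - w[psi_ind]) + beta*V[psi_ind]\n    return V\n\\end{lstlisting}\n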
\n\\section*{Discrete Choice (Threshold) Problems}\\label{SecDiscrChoice}\n\nOne powerful application of dynamic programming is building\nmodels that have both continuous and discrete state variables. These models are sometimes referred to as\ndiscrete choice problems or optimal stopping problems.  Examples include models of employment that involve both\nthe choice of whether to work and how much to work, models of firm entry and exit that involve the choice of both\nwhether to produce and how much to produce, and models of marriage that involve the choice of whether to date\n(get married or keep dating) and how much to date.\nThis application illustrates the versatility of dynamic programming as a dynamic solution method.\n\nIn this lab, we follow a simple version of a standard job search model.\nAssume that workers live infinitely long.\nWe will split a worker's life into discrete time periods, and in each period the worker is either\nemployed or unemployed, and receives a job offer. The worker must make a choice between discrete actions\n(such as accepting or rejecting a job offer), with the goal of maximizing some utility function (hence,\nthis is a \\emph{discrete choice} problem).\n\nWe can state this problem in terms of dynamic programming by defining an appropriate value function.\nLet the value of entering a period with most recent wage $w$,\ncurrent job offer wage $w'$, and employment status $s$ be given by the following value function,\n\\begin{equation}\\label{EqV}\n   V(w,w',s) = \\begin{cases}\n                  V^E(w)    \\quad&\\text{if}\\quad s = E \\\\\n                  V^U(w,w') \\quad&\\text{if}\\quad s = U \\\\\n               \\end{cases}\n\\end{equation}\nwhere employment status is a binary variable $s\\in\\{E,U\\}$ ($E$ indicates ``employed\" and $U$ indicates ``unemployed\");\na person can be either employed or unemployed.\n\nAs in the cake eating problem, the value function is calculated as the sum of some reward (based on the\ncurrent state) and the discounted value of entering the next period in some particular state.\nThe reward function, denoted (as usual) by $u$, gives the utility of spending available funds.\nAssuming that a worker receives some wage $x$ in a given period and spends all available money in the period,\nthe utility of consumption is given by\n\\[\nu(x).\n\\]\nCalculating the value for the next period depends on the employment status in the current period, so we address\nthis separately for each case.\n\nLet us first consider the case where the individual is unemployed ($s = U$). As is customary, let $s'$ denote the\nemployment status of the worker in the next period.\nIn this unemployed state, the worker receives unemployment benefits equal to a fraction of her most recent wage,\ni.e. $\\alpha w$, where $\\alpha \\in (0, 1)$.\nHence, the utility of consumption in the current unemployed state is given by\n\\[\nu(\\alpha w).\n\\]\n\nThe worker also receives one wage offer ($w'$) per period, and will obtain\nthis wage in the next period provided that she chooses to accept employment, i.e. provided $s' = E$.\nThe worker must decide whether to accept the current wage offer $w'$ or to remain unemployed in the next period,\ni.e. she must decide on the value of $s'$. How\ndoes she make this choice? She must weigh the value of entering the next period as an employed worker with wage\n$w'$ (given by $V^E(w')$) versus the value of entering the next period as an unemployed worker with\nprevious wage $w$ and unknown wage offer $w''$ (given by $V^U(w,w'')$). Because the worker cannot know\nwhat the future wage offer $w''$ will be, it is treated as a random variable with a particular probability\ndistribution. Hence, the worker must actually compute the \\emph{expected} value of entering the next\nperiod unemployed. 
This term is simply\n\\[\n\\mathbb{E}_{w''}V^U(w,w''),\n\\]\nwhere $\\mathbb{E}_{w''}$ denotes the expectation operator with respect to the probability distribution of future\nwage offers $w''$.\nTo sum up, the worker chooses to accept the wage offer $w'$ or remain unemployed in the next period based on\nwhich option gives the greater expected value, and the value of this decision is given by\n\\[\n\\max\\Bigl\\{V^E(w'), \\,\\, \\mathbb{E}_{w''}V^U(w,w'')\\Bigr\\}.\n\\]\n\nThe overall value of the current unemployed state with previous wage $w$ and current wage offer $w'$ is\njust the utility of consumption plus the discounted value of the next period, i.e.\n\\begin{equation}\\label{EqVu}\nV^U(w,w') = u(\\alpha w) + \\beta \\max\\Bigl\\{V^E(w'), \\,\\, \\mathbb{E}_{w''}V^U(w,w'')\\Bigr\\},\n\\end{equation}\nwhere $\\beta$ is the discount factor.\n\nNow we turn to the case where the job status is employed ($s = E$).\nIn this case, the worker receives a wage $w$ in the current period, and so the utility of consumption is\njust\n\\[\nu(w).\n\\]\nIn the next period, the worker will have most recent wage $w$, she will receive wage offer $w''$, and will\nhave employment status $s'$. As in the unemployed case, $w''$ is unknown and treated as a random variable.\nUnlike the unemployed case, however, the worker's future employment status $s'$ is not under her control,\nbut rather is also a random variable. The reason for this is that the worker will remain employed\nuntil she loses the job, a random event that occurs with some fixed probability in each time period.\nHence, we must calculate the expected value of the next period with respect to both $w''$ and $s'$.\nWe may write the entire value function for the employed case as\n\\begin{equation}\\label{EqVe1}\n   V^E(w) = u(w) + \\beta \\mathbb{E}_{w'',s'}V(w,w'',s').\n\\end{equation}\n\nTo calculate the expectation term, we need to know the joint probability distribution over $w''$ and $s'$.\nThis can be characterized in the following way.\nWe assume that $s'$ and $w''$ are independent. Hence, we can split the joint expectation\noperator into the composition of the two individual expectation operators:\n\\[\n\\mathbb{E}_{w'',s'} = \\mathbb{E}_{w''}\\mathbb{E}_{s'}.\n\\]\nLet $\\gamma$ represent the probability that an employed worker becomes unemployed in the next period,\nso that $1-\\gamma$ is the probability of remaining employed in the next period.\nIf the worker stays employed in the next period ($s' = E$), then next period's wage equals the current\nperiod's wage, and the term inside the expectation is\n\\[\nV(w,w'',E) = V^E(w).\n\\]\nWe then have\n\\begin{align*}\n\\mathbb{E}_{s'}V(w,w'',s') &= (1-\\gamma)V(w,w'',E) + \\gamma V(w,w'',U)\\\\\n&= (1-\\gamma)V^E(w) + \\gamma V^U(w,w'').\n\\end{align*}\nNotice that the term $(1-\\gamma)V^E(w)$ is constant with respect to $w''$. Then\n\\begin{align*}\n\\mathbb{E}_{w''}\\mathbb{E}_{s'}V(w,w'',s') &= \\mathbb{E}_{w''}\\left[(1-\\gamma)V^E(w) + \\gamma V^U(w,w'')\\right]\\\\\n&= \\mathbb{E}_{w''}(1-\\gamma)V^E(w) + \\mathbb{E}_{w''}\\gamma V^U(w,w'')\\\\\n&= (1-\\gamma)V^E(w) + \\gamma \\mathbb{E}_{w''}V^U(w,w'').\n\\end{align*}\nHence, we can rewrite \\eqref{EqVe1} as follows:\n\\begin{equation}\\label{EqVe2}\n   V^E(w) = u(w) + \\beta \\Bigl[(1-\\gamma)V^E(w) + \\gamma \\mathbb{E}_{w''}V^U(w,w'')\\Bigr].\n\\end{equation}\n\nWe have now completely described the value function. 
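\nBefore turning to the policy function, the recursion can be made concrete. The following is a minimal sketch of a single update of equations \\eqref{EqVu} and \\eqref{EqVe2} on a wage grid; the arrays and parameter values are placeholders for those defined in the problems below, and the entrywise maximum plays the role of the $\\max$ in \\eqref{EqVu}:\n\\begin{lstlisting}\nimport numpy as np\n\ndef value_update_sketch(VE, VU, w, f, alpha=0.5, beta=0.9, gamma=0.1):\n    # VE: length-N array, VE[j] = V^E(w_j).\n    # VU: N x N array, VU[i,j] = V^U(w_i, w_j').\n    # f:  length-N discrete pdf of wage offers.\n    u = np.sqrt\n    EVU = VU.dot(f)                      # E_{w''} V^U(w_i, w'') for each w_i\n    MVE = np.broadcast_to(VE, VU.shape)  # entry (i,j) holds V^E(w_j)\n    MEVU = EVU.reshape(-1, 1)            # entry (i,j) holds E_{w''}V^U(w_i, .)\n    VU_new = u(alpha*w).reshape(-1, 1) + beta*np.maximum(MVE, MEVU)\n    VE_new = u(w) + beta*((1 - gamma)*VE + gamma*EVU)\n    return VE_new, VU_new\n\\end{lstlisting}\n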
What about the policy function?\nThe policy function for the unemployed worker gives her decision on whether to accept the job $s'=E$\nor to reject the job $s'= U$.\nThis will be a function of both the most recent wage $w$ and the current wage offer $w'$.\nThe employment status $s'$ in the next period is determined by the policy function $\\psi$:\n\\[\ns' = \\psi(w,w').\n\\]\n\nThese discrete choice problems are often called threshold\nproblems because the policy choice depends on whether the state variable is greater than or less than\nsome threshold level. That is, an unemployed worker will accept a job if and only if the offer wage is\nabove some set amount that depends on the most recent wage $w$. In the labor search model,\nthe threshold level is called the ``reservation wage'' $w_R'$. The reservation wage $w_R'$ is defined as\nthe wage offer such that the worker is indifferent between accepting the job $s' = E$ and\nstaying unemployed $s' = U$. Hence, this reservation wage satisfies the equation\n\\begin{equation}\\label{EqWR}\n   V^E(w_R') = \\mathbb{E}_{w''}\\left[V^U(w,w'')\\right].\n\\end{equation}\nThe policy function will then take the form of accepting the job if $w' \\geq w_R'$ or\nrejecting the job offer and remaining unemployed if $w' < w_R'$:\n\\begin{equation}\\label{EqSprime}\n   s' = \\psi(w,w') = \\begin{cases}\n                      E \\quad\\text{if}\\quad w' \\geq w_R' \\\\\n                      U \\quad\\text{if}\\quad w' < w_R'.\n                   \\end{cases}\n\\end{equation}\nFigure \\ref{fig:disc_policy} shows an example of the discrete policy function.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{disc_policy.pdf}\n\\caption{Here is the policy function for fixed $w = 50$.  Numerically we let 0 represent unemployment, $U$,\nand 1 represent employment, $E$.  Thus we see that an individual will choose to take a new job, given\ntheir old wage was 50, at a wage of roughly 35.  Thus for a previous wage of 50, we say the reservation wage is 35.}\n\\label{fig:disc_policy}\n\\end{figure}\n\nIn summary, the labor search discrete choice problem is characterized by the value functions \\eqref{EqV}, \\eqref{EqVu},\nand \\eqref{EqVe2}, the reservation wage \\eqref{EqWR}, and the policy function \\eqref{EqSprime}. Because wage offers\nare distributed according to some given probability distribution (denote the cdf by $F(w')$),\nand because the policy function takes the form of \\eqref{EqSprime},\nthe probability that the unemployed worker receives a wage offer that she will reject is $F(w_R')$ and the probability\nthat she receives a wage offer that she will accept is $1 - F(w_R')$. Just like the continuous-choice cake eating\nproblems, this problem can be solved by value function, policy function, or modified policy function iteration.\n\nThe value function iteration solution method for the equilibrium in the labor search problem is analogous to the\nvalue function iteration from the previous labs. The only difference is that two value functions ($V^E$ and $V^U$)\nmust converge to a fixed point in this problem instead of just one value function converging in the previous problems.\nAlthough there are two value functions to consider, there is only one policy function, since decisions are only made\nin the unemployed state.  Thus, there is only one policy function on which to iterate in the case of policy or modified\npolicy iteration.\n\nIn the following problems, you will solve the job search problem using value function iteration and modified policy\nfunction iteration. 
Assume that the consumption utility function $u$ is given by\n\\[\nu(w) = \\sqrt{w}.\n\\]\nAssume that the probability of becoming unemployed in a given period is $\\gamma = 0.10$, the fraction of wages paid\nin unemployment benefits is $\\alpha = 0.5$, and the discount factor is $\\beta = 0.9$.\n\nAssume that the log of wage offers is distributed normally.  We then say that offers are distributed\nlognormally and write\n\\[\nw'\\sim \\text{LogN}(\\mu,\\sigma).\n\\]\nThis is a convenient choice for the distribution of\nwage offers.  Among other things, it guarantees that wage offers will be positive.\nA mean of $20$ and variance of $200$ are typical parameters of such a wage distribution,\nand we will use these parameters in the following problems.\n\nAs usual when dealing with continuous variables, we form a discrete approximation of\nthe possible wage values. In particular, approximate the wage values by a vector of\nlength $N = 500$ of equally-spaced values from $w_{min} = 0$ to $w_{max} = 100$, inclusive.\nWe then form a corresponding discrete approximation of the probability density function\n$f(w')$ for the lognormal wage offers using code provided in the file \\li{discretelognorm.py},\nas follows, where \\li{w} is the length-$N$ vector of wage values, \\li{m} is the mean, and \\li{v} is\nthe variance as specified above:\n\\begin{lstlisting}\n>>> from discretelognorm import discretelognorm\n>>> f = discretelognorm(w, m, v)\n\\end{lstlisting}\nThe function \\li{discretelognorm} computes the discrete pdf of the specified lognormal distribution in much\nthe same way that you calculate the discrete normal pdf when solving the stochastic cake eating problem.\n\n\n\\begin{problem}\nSolve the job search problem using value function iteration. Return the converged value functions\n$V^E$ and $V^U$, as well as the converged policy function $\\psi$. The following steps provide detailed\ninstructions. Note that there are multiple ways to proceed, and the following is simply one (fairly good)\npossibility.\n\\begin{enumerate}\n\n   \\item As described above, represent the possible wage values by an array \\li{w} of length\n   $N$. Denote this array entrywise by\n   \\[\n   w = (w_1,w_2,\\ldots,w_N).\n   \\]\n   Calculate the corresponding discrete lognormal pdf \\li{f}, exactly as shown above.\n\n   \\item Note that $u(w)$ and $u(\\alpha w)$ are needed when computing the value functions.\n   Since these quantities do not change from one iteration to another, it is smart to\n   compute them once at the outset. Denote $u(w)$ by \\li{uw} and $u(\\alpha w)$ by \\li{uaw}.\n   These are easily calculated as follows:\n\\begin{lstlisting}\n>>> uw = u(w)\n>>> uaw = u(alpha*w).reshape((N,1))\n\\end{lstlisting}\n    where \\li{u} is the square root function. We must reshape \\li{uaw} because of array broadcasting\n    issues that arise in the code snippets below.\n\n   \\item Since $V^E$ is a function of only $w$, it will be represented by a vector of length $N$, where\n   the $i$-th entry gives $V^E(w_i)$. The unemployed value function $V^U$, however,\n   is a function of both $w$ and $w'$, so it will be represented by an $N \\times N$ array,\n   where the $(i,j)$-th entry gives $V^U(w_i,w_j)$. 
Initialize the entries of these arrays to 0:\n\\begin{lstlisting}\n>>> VE = np.zeros(N)        #employed value function\n>>> VU = np.zeros((N,N))    #unemployed value function\n\\end{lstlisting}\n\n   \\item Note that $\\mathbb{E}_{w''}V^U(w,w'')$  is needed to calculate both $V^E$ and $V^U$.\n   This expectation depends on $w$, and so can be represented by a length $N$ array, where the\n   $i$-th entry is $\\mathbb{E}_{w''}V^U(w_i,w'')$.\n   It is convenient to assign a variable to this array to keep track of it throughout the iterations.\n   We denote the expectation by \\li{EVU}, and initialize it to zeros:\n\\begin{lstlisting}\n>>> EVU = np.zeros(N)\n\\end{lstlisting}\n\n   \\item For reasons that will soon become apparent, we will need to create an $N\\times N$ helper array\n   whose rows are equal to \\li{VE} (call this array \\li{MVE}), and an $N \\times N$ helper array whose columns\n   are equal to \\li{EVU} (call this array \\li{MEVU}).\n   At the outset, simply initialize these arrays to zeros.\n\n   \\item Because job status is a binary variable, the policy function returns one of two possible values. It is\n   convenient to represent ``employed\" by $1$ and ``unemployed\" by $0$. Now the policy function depends\n   on $w$ and $w'$, so it will also be represented by an $N\\times N$ array \\li{PSI} of zeros and ones,\n   where the $(i,j)$-th entry gives $\\psi(w_i, w_j)$.\n\n   \\item Now we are ready to begin the iteration.\n   A single iteration involves computing the updated value functions $V^E$ and $V^U$ from\n   equations \\eqref{EqVe2} and \\eqref{EqVu} and then calculating the $2$-norm distance between\n   both pairs of old and updated value functions to test for convergence. If both of these\n   $2$-norm distances are less than $10^{-9}$, terminate the iteration.\n\n   Before calculating the updated value functions, we first update our helper arrays \\li{MVE} and\n   \\li{MEVU}. The rows of \\li{MVE} need to equal \\li{VE}. We can use array broadcasting:\n\\begin{lstlisting}\n>>> MVE[:,:] = VE.reshape((1,N))\n\\end{lstlisting}\n   The columns of \\li{MEVU} need to equal \\li{EVU}, so use a similar technique:\n\\begin{lstlisting}\n>>> MEVU[:,:] = EVU.reshape((N,1))\n\\end{lstlisting}\n\n   Now let us address how to compute the updated $V^U$, which we denote by \\li{VU1}.\n   Equation \\eqref{EqVu} shows that it is the sum of\n   two terms. The first, $u(\\alpha w)$, we have already computed and stored in the variable \\li{uaw}.\n   The second term involves a maximization between two alternatives. One can imagine writing\n   a double for loop ranging over the values of $w'$ and $w$ to compute each individual\n   $\\max\\{V^E(w'), \\mathbb{E}_{w''}V^U(w,w'')\\}$, but we can take advantage of the helper\n   arrays \\li{MVE} and \\li{MEVU} to do this computation in one efficient line of code.\n   Note that the $(i,j)$-th entry of \\li{MVE} is just $V^E(w_j)$ and the $(i,j)$-th\n   entry of \\li{MEVU} is $\\mathbb{E}_{w''}V^U(w_i, w'')$, and\n   \\[\n   V^U(w_i,w_j) = u(\\alpha w_i) + \\beta\\max\\{V^E(w_j), \\mathbb{E}_{w''}V^U(w_i,w'')\\}.\n   \\]\n   Hence, taking the entrywise maximum of the arrays \\li{MVE} and \\li{MEVU} gives us the appropriate\n   max term for $V^U$. To get the entrywise maximum of two arrays, stack the arrays along a new\n   axis using \\li{np.dstack}, and maximize along that axis. 
The computation for \\li{VU}, then, is\n   \\begin{lstlisting}\n>>> VU1 = uaw + beta*np.max(np.dstack([MEVU, MVE]), axis=2)\n   \\end{lstlisting}\n\n   Calculating the updated $V^E$, denoted by \\li{VE1}, is more straightforward.\n   Equation \\eqref{EqVe2} shows that it is just\n   a particular linear combination of the arrays \\li{uw}, \\li{VE}, and \\li{EVU}:\n   \\begin{lstlisting}\n>>> VE1 = uw + beta*((1-gamma)*VE + gamma*EVU)\n   \\end{lstlisting}\n\n   We can now calculate the 2-norm distances between old and updated value functions.\n   It remains to update \\li{VE}, \\li{VU}, and \\li{EVU}.\n   The first two updates are trivial, and calculating \\li{EVU} is equivalent to the\n   matrix-vector multiplication of \\li{VU} with \\li{f}. This is similar to how we computed\n   expectations in previous labs:\n   \\begin{lstlisting}\n>>> EVU = np.dot(VU,f).ravel()\n   \\end{lstlisting}\n   We use the \\li{ravel} function to ensure that \\li{EVU} is a flat array.\n\n   \\item Notice that it is not necessary to iteratively update the policy function, as it is not needed\n   to update the value functions. Thus, we need only compute the policy function once, after convergence\n   of the value functions has been achieved. This is done in a manner similar to calculating\n   $\\max\\{V^E(w'), \\mathbb{E}_{w''}V^U(w,w'')\\}$ as described above, except we need to take the \\emph{argmax}:\n\\begin{lstlisting}\n>>> PSI = np.argmax(np.dstack([MEVU,MVE]), axis=2)\n\\end{lstlisting}\n\n   \\item Compute the reservation wage $w_R'$ as a function of the current wage $w$. It will be represented\n   by a length $N$ array called \\li{wr}. The reservation wage is the\n   value of $w'$ where the policy function changes from zeros to ones (the optimal choice changes from remaining\n   unemployed to accepting the job offer). We can calculate this as follows:\n   \\begin{lstlisting}\n>>> wr_ind = np.argmax(np.diff(PSI), axis = 1)\n>>> wr = w[wr_ind]\n   \\end{lstlisting}\n\n   \\item Plot the equilibrium reservation wage $w_R'$ of the converged problem as a function of the current\n   wage $w$ with the current wage on the $x$-axis and the reservation wage $w_R'$ on the $y$-axis. This is\n   the most common way to plot discrete choice policy functions. The reservation wage represents the wage\n   that makes the unemployed worker indifferent between taking a job offer and rejecting it. So any wage\n   above the reservation wage line represents $s' = E$ and any wage below the reservation wage line represents\n   $s' = U$. Your plot should resemble that in Figure \\ref{fig:res_wage}.\n\n\\end{enumerate}\n\\end{problem}\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{reservation_wage.pdf}\n\\caption{The reservation wage as a function of previous wage for the job search problem.}\n\\label{fig:res_wage}\n\\end{figure}\n\nIn the previous problem, it was necessary to iterate on two value functions.\nConsequently the convergence is relatively slow.\nWe can improve upon this situation by using modified policy function iteration.\n\n\\begin{problem}\nSolve the same problem, this time using modified policy function iteration with $15$ value function iterations\nwithin each policy iteration.  You should be able to re-use much of your code from the previous problem.\n\nStart off by initializing all of the same variables. 
Additionally, initialize your policy function array \\li{PSI},\nsay\n\\begin{lstlisting}\n>>> PSI = 2*np.ones((N,N))\n\\end{lstlisting}\n\nNext comes the iteration.\nEssentially, the iteration will consist of an outer while-loop (which terminates once the 2-norm distance\nbetween successive policy functions passes below $10^{-9}$), and an inner for loop (with $15$ loops).\n\n\nThe first step in the while-loop is to calculate the new policy function \\li{PSI1}, just as in the previous\nproblem. Next, perform the inner for-loop, which consists simply of the value function iteration, but this\ntime using the current policy function. This means the line of code\n\\begin{lstlisting}\n>>> VU = uaw + beta*np.max(np.dstack([MEVU, MVE]), axis=2)\n\\end{lstlisting}\nis no longer valid, as it does not use the policy function. We must instead have\n\\begin{lstlisting}\n>>> VU = uaw + beta*(MVE*PSI1 + MEVU*(1 - PSI1))\n\\end{lstlisting}\nWhy is this code correct?\n\nFinally, after exiting the for loop, calculate the 2-norm distance between the old and the new policy function,\nand then update your old policy function, i.e.\n\\begin{lstlisting}\n>>> PSI = PSI1\n\\end{lstlisting}\n\nAfter convergence is achieved, once again compute the reservation wage array, and plot it as in the previous\nproblem. Then return the converged policy function.\n\\end{problem}\n\n\\begin{problem}\nHow many iterations did the value function iteration method take?\nHow many iterations did the modified policy function iteration method take?\nWhich was faster?\n\\end{problem}\n", "meta": {"hexsha": "a472fe1b6158c338a6d667e67546dfd2adc92aaa", "size": 33460, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Labs/PolicyFunctionIteration/Policy_Function_Iteration.tex", "max_stars_repo_name": "jessicaleete/numerical_computing", "max_stars_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2016-10-18T19:54:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-09T20:12:38.000Z", "max_issues_repo_path": "Labs/PolicyFunctionIteration/Policy_Function_Iteration.tex", "max_issues_repo_name": "jessicaleete/numerical_computing", "max_issues_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Labs/PolicyFunctionIteration/Policy_Function_Iteration.tex", "max_forks_repo_name": "jessicaleete/numerical_computing", "max_forks_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-05-14T16:07:59.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-20T09:05:06.000Z", "avg_line_length": 53.7942122186, "max_line_length": 143, "alphanum_fraction": 0.7378661088, "num_tokens": 8845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7090191460821871, "lm_q2_score": 0.7826624688140726, "lm_q1q2_score": 0.5549226753091302}}
{"text": "\\epigraph{``Since Newton, mankind has come to realise that the law of physics are always expressed in the language of differential equations\"}{Steven Strogatz}\n\n\\section{RNN}\n\\subsection{Introduction}\nWeather forecasting has traditionally been done by physical models of the atmosphere, which are unstable to perturbations, and thus are inaccurate for large periods of time\\cite{why_rnn}. Since machine learning techniques are more robust to perturbations, it would be logical to combine a neural network with a physical model. Weather forecasting is a sequential data problem, therefore, a recurrent neural network is the most suitable option for this task. \n\n\\begin{definition}\nA recurrent neural network is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence.\n\\end{definition}\n\nBefore, we delve into the specific example of using a recurrent neural network to predict the future state of the atmosphere, it is necessary to review what a recurrent neural network is. Recurrent Neural Networks (RNNs) are neural networks that are used in situations where data is presented in a sequence. For example, let's say you want to predict the future position of a fast-moving ball. Without information on the previous position of the ball, it is only possible to make an inaccurate guess. If you had, however, a large number of snapshots of the previous position, you are then able to predict the future position of the ball with some certainty. RNNs excel at modelling sequential data such as this. This is due to sequential memory.\n\nIn order to intuitively understand sequential memory, the prime example would be the alphabet. While it is easy to say the alphabet from A-Z, it is much harder to go from Z-A. There is a logical reason why this is difficult. As a child, you learn the alphabet in a sequence. Sequential memory is a mechanism that makes it easier for your brain to recognise sequence patterns.\n\nIn a traditional neural network, there is an input layer, hidden layer, and an output layer. In a recurrent neural network, a loop is added to pass information forward as seen in the diagram below (provided by Towards Data Science)\\cite{intro_rnn}:\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.2\\linewidth]{Images/rnn.png}\n    \\caption{Visualisation of a Recurrent Neural Network}\n\\end{figure}\n\nThe information that is forwarded is the hidden layer, which is a representation of previous inputs. How this works in practise is that you initialise your network layers and the initial hidden state. The shape and dimension of the hidden state will be dependent on the shape and dimension of your recurrent neural network. Then you loop through your inputs, pass the relevant parameter and hidden state into the RNN. The RNN returns the output and a modified hidden state. Last you pass the output to the output layer, and it returns a prediction. \n\nThere is, however, a major problem known as short-term memory. Short-term memory is caused by something known as the vanishing gradient problem, which is also prevalent in other neural network architectures. As the RNN processes more steps, it has troubles retaining information from previous steps. Short-Term memory and the vanishing gradient is due to the nature of back-propagation. 
\nThere is, however, a major problem known as short-term memory. Short-term memory is caused by something known as the vanishing gradient problem, which is also prevalent in other neural network architectures. As the RNN processes more steps, it has trouble retaining information from previous steps. Short-term memory and the vanishing gradient are due to the nature of back-propagation. This can be comprehended through understanding how a neural network is trained\\cite{intro_rnn}.\n\n\\begin{definition}\nBack-propagation is an algorithm used to train and optimise neural networks.\n\\end{definition}\n\nTo train a recurrent neural network, you use an application of back-propagation called back-propagation through time. Training a neural network has three major steps. First, the relevant data vector is normalised between 0 and 1, the vector is fed into the RNN, and it goes through an activation function. The activation function utilised in the software is the rectified linear activation function\\cite{lstm_rnn}. \n\n\\begin{definition}\nThe rectified linear activation function is a piece-wise linear function that will output the input directly if it is positive; otherwise, it will output zero.\n\\end{definition}\n\nThe function is linear for values greater than zero, meaning it has a lot of the desirable properties of a linear activation function when training a neural network using back-propagation. Yet, it is a nonlinear function as negative values are always output as zero. Since the rectified function is linear for half of the input domain and nonlinear for the other half, it is referred to as a piece-wise linear function\\cite{relu}. This nonlinear element is extremely important if the system has a nonlinear component, for example in predicting the evolution of the future state of the atmosphere.\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.65\\linewidth]{Images/relu.png}\n    \\caption{Sketch of the Rectified Linear Activation Function}\n\\end{figure}\n\nSecond, it outputs the results. Third, it compares the prediction to the ground truth using a loss function.\n\n\\begin{definition}\nA loss function outputs an error value which is an estimate of how poorly the network is performing.\n\\end{definition}\n\nThe loss function that will be utilised in the software is the mean squared error. The reason for choosing this particular function is that it heavily penalises large errors, as it squares the difference between the predicted and actual value. A large error in a weather forecast is highly undesirable, hence, the use of this function. The function is represented below:\n\n\\begin{equation}\n    MSE = \\frac{1}{n}\\sum_{i=1}^n(Y_i-\\hat{Y_i})^2\n\\end{equation}\n\nHere a vector of $n$ predictions is generated from a sample of $n$ data points on all variables, $Y$ is the vector of observed values of the variable being predicted, and $\\hat{Y_i}$ are the predicted values.\n\n\\begin{definition}\nMean squared error is the average squared difference between the estimated values and the actual value.\n\\end{definition}\n\nReturning to the training of the RNN, it uses the error value from the loss function to do back-propagation, which calculates the gradients for each time step in the network. The gradient is the value used to adjust the network's internal weights, allowing the network to learn. The bigger the gradient, the bigger the adjustments, and vice versa. Here is where the problem lies. When doing back-propagation, the gradient of the current time step is calculated with respect to the effects of the gradients in the time step before it. So if the adjustments to the time step before it are small, then adjustments to the current time step will be even smaller. The gradient values therefore shrink exponentially as they propagate back through the time steps; for example, a per-step factor of $0.9$ leaves a gradient of $0.9^{100} \\approx 2.7 \\times 10^{-5}$ after $100$ steps. 
The earlier layers fail to do any learning as the internal weights are barely being adjusted due to extremely small gradients.\n\nBecause of vanishing gradients, the RNN doesn\u2019t learn the long-range dependencies across time steps. Not being able to learn on earlier time steps causes the network to have a short-term memory. In order to combat this, a long short-term memory (LSTM) network is used\\cite{intro_rnn}.\n\n\\subsection{LSTM}\nLSTMs were created as a solution to the short-term memory problem. They have internal mechanisms called gates that can regulate the flow of information. These gates can learn which data in a sequence is important to keep or throw away. By doing that, it can pass relevant information down the long chain of sequences to make predictions. For example, if you were interested in buying a particular product, you might read a review in order to determine if the purchase of the product is a good decision. When you read a review, your brain subconsciously only remembers important keywords. You pick up words like ``amazing\", ``superb\", or ``awful\", you don't remember words such as ``the\", ``as\", or ``because\". This is what an LSTM does, it learns to keep only the relevant information to make predictions.\n\nAn LSTM has a similar control flow to a recurrent neural network. It processes data passing on information as it propagates forward. The differences are the operations within the LSTM\u2019s cells. The core concepts of an LSTM are the cell state and its various gates. The cell state is the method by which information is transferred down the sequence chain. The cell state, in theory, can carry relevant information throughout the processing of the sequence. So even information from the earlier time steps can make its way to later time steps, reducing the effects of short-term memory. As the cell state goes on its journey, information gets added or removed to the cell state via gates\\cite{lstm_rnn}.\n\n\\begin{definition}\nA gate is an electric circuit with an output which depends on the combination of several inputs.\n\\end{definition}\n\nGates contain the sigmoid activation function. The sigmoid activation function squishes values between 0 and 1. That is helpful to update or forget data because any number multiplied by 0 is 0, causing values to disappear or be ``forgotten\". Any number multiplied by 1 is the same value, therefore that value stays the same or is ``kept\".\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=.65\\linewidth]{Images/sigmoid.png}\n    \\caption{Sketch of the Sigmoid Activation Function}\n\\end{figure}\n\nThere are three types of gates utilised within an LSTM: a forget gate, an input gate, and an output gate. A forget gate decides what information should be thrown away or kept. Information from the previous hidden state and information from the current input are passed through the sigmoid function. An input gate is where the previous hidden state and current input are passed into a sigmoid function. The output gate decides what the next hidden state should be. The hidden state is also used for predictions. First, we pass the previous hidden state and the current input into a sigmoid function. Then we pass the newly modified cell state to the rectified linear activation function. We multiply the rectified linear activation function output with the sigmoid output to decide what information the hidden state should carry. The output is the hidden state. The new cell state and the new hidden state are then carried over to the next time step\\cite{lstm_rnn}.\n
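\nThe gate arithmetic can be summarised in a short sketch. The following shows one time step of a standard LSTM cell in NumPy, with biases omitted and hypothetical weight matrices; note that the classic formulation uses $\\tanh$ where the text above substitutes the rectified linear activation:\n\\begin{lstlisting}\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0/(1.0 + np.exp(-z))\n\ndef lstm_step_sketch(x, h, c, W):\n    # One LSTM time step. W['f'], W['i'], W['o'], W['g'] map the\n    # concatenated [hidden state, input] to the four internal vectors.\n    hx = np.concatenate([h, x])\n    f = sigmoid(W['f'] @ hx)   # forget gate: what to erase from the cell state\n    i = sigmoid(W['i'] @ hx)   # input gate: what new information to write\n    o = sigmoid(W['o'] @ hx)   # output gate: what the hidden state carries\n    g = np.tanh(W['g'] @ hx)   # candidate cell values\n    c = f*c + i*g              # updated cell state\n    h = o*np.tanh(c)           # updated hidden state\n    return h, c\n\\end{lstlisting}\n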
\n\\subsection{Principal Component Analysis}\\label{pca_section}\nThe number of features in a dataset is referred to as its dimensionality. Dimensionality reduction refers to techniques that reduce the number of input variables in a dataset. More input features often make a predictive modelling task more challenging to model, and can be computationally expensive. Since dimensionality reduction is a data preparation technique performed on data prior to modelling, it is carried out after data cleaning and data scaling and before training a predictive model\\cite{dimension_reduction}. \n\nFor a given training window, there are 22,032,000 points. Training on such a dataset would be computationally burdensome, hence the attractiveness of dimensionality reduction. Dimensionality reduction can reduce the number of data points without severely impacting accuracy. Fewer input dimensions often mean correspondingly fewer parameters or a simpler structure in the machine learning model, referred to as degrees of freedom. A model with too many degrees of freedom is likely to overfit the training dataset and therefore may not perform well on new data. It is desirable to have simple models that generalise well, and in turn, input data with few input variables. The problem is that high-dimensional functions have the potential to be much more complicated than low-dimensional ones, and that those complications are harder to discern. The only way to beat this problem is to incorporate knowledge about the data that is correct\\cite{dimension_reduction}. A popular choice for dimensionality reduction is principal component analysis, which was the technique selected for this project. \n\n\\begin{definition}\nPrincipal Component Analysis is a dimensionality reduction method that is often used to reduce the dimensionality of large data sets, by transforming a large set of variables into a smaller one that still contains most of the information in the large set.\n\\end{definition}\n\nReducing the number of variables of a data set naturally comes at the expense of accuracy, but the trick in dimensionality reduction is to trade a little accuracy for simplicity, because smaller data sets are easier to explore and visualise, and machine learning algorithms can analyse the data much faster without extraneous variables to process\\cite{pca}. \n\nFirst, the dataset is standardised to the range of the continuous initial variables so that each one of them contributes equally to the analysis. Second, to see if there is any relationship between the variables of the input dataset, the covariance matrix is computed. The covariance matrix is no more than a table that summarises the correlations between all the possible pairs of variables. The eigenvectors and eigenvalues of the covariance matrix are computed in order to identify the principal components. Principal components are new variables that are constructed as linear combinations or mixtures of the initial variables. PCA tries to put maximum possible information in the first component, then maximum remaining information in the second, and so on. Organising information in principal components this way allows for the reduction of dimensionality without losing much information: the components with low information are discarded and the remaining components are treated as the new variables\\cite{pca}. 
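\nAs a concrete illustration, the following minimal sketch performs this standardise-then-reduce step with scikit-learn; the data array and its dimensions are hypothetical placeholders for the flattened training windows:\n\\begin{lstlisting}\nimport numpy as np\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\n# Hypothetical flattened dataset: rows are time steps, columns are features.\nX = np.random.rand(500, 2000)\n\nX_std = StandardScaler().fit_transform(X)  # standardise each feature\npca = PCA(n_components=0.99)               # keep 99% of the variance\nX_reduced = pca.fit_transform(X_std)\nprint(X_reduced.shape[1], 'components retained')\n\\end{lstlisting}\n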
For the purposes of this project, 99\% of the variance was retained in order to balance accuracy with reducing the number of features. \n\n\section{Dataset}\n\subsection{ERA5 Atmospheric Reanalysis Dataset}\label{era5_dataset}\nERA5 provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. The data covers the Earth on a 30km grid and resolves the atmosphere using 137 levels from the surface up to a height of 80km. ERA5 includes information about uncertainties for all variables at reduced spatial and temporal resolutions. Quality-assured monthly updates of ERA5 are published within 3 months of real time, and preliminary daily updates of the dataset are available to users within 5 days of real time. ERA5 combines vast amounts of historical observations into global estimates using advanced modelling and data assimilation systems\cite{era5}.\n\nThe ERA5 reanalysis dataset was used for training, validating and testing the performance of the neural network architecture. Reanalysis datasets provide the best guess of the atmospheric state at any point in time by combining a forecast model with the available observations. The raw data is available hourly for 40 years from 1979 to 2019 on a $0.25^{\circ}$ latitude-longitude grid ($721 \times 1440$ grid points) with 37 vertical levels. Since this raw dataset is quite large, it is necessary to regrid it to a lower resolution and to use a smaller fraction of the available data\cite{rasp2020weatherbench}. The poles were excluded from the dataset in order to avoid a singularity, and the potential negative impact that this could have on the predictive ability of the neural network.\n\nIt was ultimately decided to use a spatial resolution of $3^{\circ}$ ($60 \times 120$ grid points) and a temporal resolution of 2 hours. Regarding pressure surfaces, 17 vertical pressure levels were chosen: 1, 2, 5, 10, 20, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 950, 1000 hPa. Note that it is common to use pressure in hectopascals as a vertical coordinate instead of physical height. The pressure at sea level is approximately 1000 hPa and decreases roughly exponentially with height; 850 hPa lies at around 1.5 km height and 500 hPa at around 5.5 km height. The data is split into yearly NetCDF files for each variable, and the entire dataset at $3^{\circ}$ resolution has a size of 75GB. The available variables were chosen based on meteorological considerations: geopotential, temperature, humidity and wind are prognostic state variables in most physical NWP and climate models\cite{rasp2020weatherbench}. \n\n\subsection{Integrated Forecasting System}\label{ifs_section}\nThe Integrated Forecast System (IFS) is a global numerical weather prediction system developed and maintained by the European Centre for Medium-Range Weather Forecasts (ECMWF). The version of the IFS run at ECMWF is often referred to as the ``ECMWF'' or the ``European model'' in North America, to distinguish it from the American Global Forecast System (GFS). It comprises a spectral atmospheric model with a terrain-following vertical coordinate system coupled to a four-dimensional (4D) variational data assimilation system. 
In 1997 the IFS became the first operational forecasting system to use 4D variational data assimilation.\n\n\begin{definition}\nA four-dimensional variational data assimilation system adjusts a short-range forecast, called the background, in space and time to bring it into closer agreement with meteorological observations.\n\end{definition}\n\nIt is one of the predominant global medium-range models in general use worldwide; its most prominent rivals in the 6\u201310 day medium range include the American Global Forecast System, the Canadian Global Environmental Multiscale Model and the UK Met Office Unified Model. It is regarded as the gold standard of medium-range numerical weather prediction. The current IFS deterministic forecast is computed on a cluster with 11,664 cores, and one 10 day forecast at 10 km resolution takes around 1 hour of real time to compute\cite{rasp2020weatherbench}. The Integrated Forecasting System will be used as a comparison against the neural network.\n\n\begin{figure}[H]\n    \centering\n    \includegraphics[width=.8\linewidth]{Images/ifs.jpg}\n    \caption{ECMWF Integrated Forecast System}\n\end{figure}\n\nTo provide physical baselines more in line with the current resolution of the neural network, it was compared against the IFS model at two coarser horizontal resolutions, T42 (approximately $2.8^{\circ}$) with 62 vertical levels and T63 (approximately $1.9^{\circ}$) with 137 vertical levels. It must be noted that I personally did not generate these forecasts, nor perform the analysis on them; the results were acquired from WeatherBench, a benchmark dataset for data-driven weather forecasting. According to that source, a single forecast takes 270 seconds for the T42 model and 503 seconds for the T63 model on a single XC40 node with 36 cores.\n\n\section{Implementation}\label{implement_rnn}\nThe dataset consists of five features: geopotential, zonal wind, meridional wind, air temperature, and relative humidity. For a single day, there are twelve observations. The goal of this project is to predict the relevant atmospheric parameter in 6 hours' time given the last fifteen days of data. In order to make such predictions, it is necessary to create a window of the last 180 ($15 \times 12$) observations to train the model\cite{time_series}. The neural network was trained on observational data from 2005 to 2016. The remainder of the dataset, 2017 to 2019, was preserved for validation, testing and benchmarking the neural network against physics-based models.\n\nAt the start, a seed is set in order to ensure reproducibility. As mentioned previously, it is important to scale features before training a neural network. Normalisation is a common way of doing this, scaling each feature by subtracting its mean and dividing by its standard deviation; for the most optimal performance, however, the ``MinMaxScaler'' method from the scikit-learn library, which instead rescales each feature to a fixed range, is utilised within the software\cite{scikit-learn}. An LSTM requires a 1-dimensional sequence of feature vectors, whereas the atmosphere is a 3-dimensional system; hence, it is necessary to flatten the 3-dimensional field that represents the state of the atmosphere. This is done in order to avoid the need to run the RNN repeatedly. Batches are then created to split the data into manageable sequences. A large batch size is used in order to place greater emphasis on values at the extremes, as such observations are particularly important in numerical weather prediction, for example when tracking hurricanes and tornadoes. A sketch of this preprocessing pipeline is given below. 
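\nThe following sketch illustrates the preprocessing just described, using toy dimensions; the array shapes, the seed value and the window construction are hypothetical placeholders (assuming 2-hourly data, a 6 hour lead corresponds to 3 steps).\n\n\begin{minted}[mathescape,linenos,frame=lines]{python}\nimport numpy as np\nimport tensorflow as tf\nfrom sklearn.preprocessing import MinMaxScaler\n\n# Seed for reproducibility.\nnp.random.seed(42)\ntf.random.set_seed(42)\n\n# Hypothetical raw field with toy dimensions: (time, level, lat, lon).\nraw = np.random.rand(400, 5, 6, 12).astype('float32')\n\n# Flatten the 3-D state at each time step into a 1-D feature vector.\nflat = raw.reshape(raw.shape[0], -1)\n\n# Rescale every feature to the range [0, 1].\nflat = MinMaxScaler().fit_transform(flat)\n\n# Sliding windows: 180 past observations map to the next 3 steps.\npast_history, future_target = 180, 3\nx, y = [], []\nfor i in range(len(flat) - past_history - future_target):\n    x.append(flat[i : i + past_history])\n    y.append(flat[i + past_history : i + past_history + future_target])\nx, y = np.array(x), np.array(y)\n\end{minted}\n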
The diagram below, provided by TensorFlow, shows how the data is represented after flattening and batching it\cite{time_series}.\n\n\begin{figure}[H]\n    \centering\n    \includegraphics[width=.5\linewidth]{Images/data_rnn.png}\n    \caption{Visualisation of how the data is represented after flattening and batching.}\n\end{figure}\n\nFollowing this process, the data is fed into the RNN. The LSTM model is built using Keras in TensorFlow, a free and open-source software library for machine learning developed by the Google Brain Team\cite{tensorflow}. It is apparent that a multi-step model is needed, as the model needs to learn to predict a range of future values. The source code for the LSTM model developed for the software is shown below:\n\n\begin{minted}[mathescape,linenos,frame=lines]{python}\nfrom tensorflow.keras.layers import Dense, LSTM, RepeatVector, TimeDistributed\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.optimizers import Adam\n\n# Preprocessed historical data, which has been flattened and batched.\nx_data, y_data, x_val, y_val = preprocessing_function(input_data)\n# Preprocessed initial conditions, which have been flattened and batched.\ninitial_conditions = preprocessing_function(input_initialconditions)\n\n# The network is shown data from the last 15 days.\npast_history = 15 * 12\n\n# The network predicts the next 6 hours worth of steps.\nfuture_target = int(0.5 * 12)\n\n# Number of features per flattened observation.\nfeatures = x_data.shape[-1]\n# Number of passes over the training data (user-configurable).\nepochs = 100\n\n# Create and train the model.\n# Optimiser.\nopt = Adam(lr=1e-3, decay=1e-5)\n# Create model.\nmodel = Sequential()\nmodel.add(\n    LSTM(\n        480, activation='relu', input_shape=(past_history, features)\n    )\n)\nmodel.add(RepeatVector(future_target))\nmodel.add(LSTM(480, activation='relu', return_sequences=True))\nmodel.add(TimeDistributed(Dense(features)))\nmodel.compile(\n    optimizer=opt, loss='mse', metrics=['mean_absolute_error']\n)\n\n# Train.\nmodel.fit(\n    x_data, y_data, validation_data=(x_val, y_val), epochs=epochs,\n    batch_size=120\n)\n\n# Predict (this call is iterated until the required forecast length).\nfuture_state = model.predict(initial_conditions)\n# Invert the normalisation and the flattening.\nfuture_state = inverse_preprocessing(future_state)\n\end{minted}\n\nThe model stacks LSTM layers in an encoder-decoder arrangement, which in combination are able to produce a more accurate and reliable prediction than a single LSTM layer. As is evident, the activation function for each LSTM is the rectified linear activation function, which is built into Keras. The number of epochs can be specified by the end user depending on the computational resources they have and what they need. 
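\nThe comment on the final prediction step notes that the call is iterated until the required forecast length is reached. A hypothetical sketch of that loop is shown below: each predicted block is appended to the input window while the oldest observations are dropped, so the model always sees a window of length \texttt{past\_history}. The variable \texttt{n\_iterations} is a placeholder for the number of blocks needed.\n\n\begin{minted}[mathescape,linenos,frame=lines]{python}\nimport numpy as np\n\n# Rolling forecast: window has shape (1, past_history, features);\n# each prediction has shape (1, future_target, features).\nwindow = initial_conditions\nforecast = []\nfor _ in range(n_iterations):\n    step = model.predict(window)\n    forecast.append(step)\n    # Slide the window forward by future_target steps.\n    window = np.concatenate([window[:, step.shape[1]:, :], step], axis=1)\nforecast = np.concatenate(forecast, axis=1)\n\end{minted}\n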
More epochs will generally reduce the training error, although training for too long risks overfitting, so the validation loss should be monitored.\n\n\begin{definition}\nAn epoch is one forward pass and one backward pass of all the training examples.\n\end{definition}\n", "meta": {"hexsha": "656e644dee788fb96c0272a10d992354bf06953e", "size": 23009, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scifest/national/project-book/Chapters/Neural_Network_Architecture.tex", "max_stars_repo_name": "amsimp/papers", "max_stars_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-15T10:06:17.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-15T10:06:17.000Z", "max_issues_repo_path": "scifest/national/project-book/Chapters/Neural_Network_Architecture.tex", "max_issues_repo_name": "amsimp/papers", "max_issues_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scifest/national/project-book/Chapters/Neural_Network_Architecture.tex", "max_forks_repo_name": "amsimp/papers", "max_forks_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 126.4230769231, "max_line_length": 1165, "alphanum_fraction": 0.805163197, "num_tokens": 4808, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7090191276365463, "lm_q1q2_score": 0.5549226680610685}}
{"text": "\\documentclass[11pt]{amsart}\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n%\\geometry{landscape}                % Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{epstopdf}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\n\\title{}\n\\author{}\n%\\date{}                                           % Activate to display a given date or no date\n\n\\begin{document}\n\\maketitle\n%\\section{}\n%\\subsection{}\n\nThe set of equations we are solving are:\n\n\\begin{align}\n\\frac{dV_i}{dt} &= rV_i\\Big(1 - \\frac{V_i}{K}\\Big) - \\frac{\\alpha V_i}{V_i + B}H_i, \\\\\n\\frac{dH_i}{dt} &= H_i\\Big(\\frac{\\alpha \\beta V_i}{V_i + B} - m\\Big) + f_i(H),\n\\end{align}\n\n\\noindent where $V$ and $H$ are the vegetation and herbivore density interacting on $i = 1, 2, ... 100$ discrete nodes of a network, and where $r$ is intrinsic growth rate, $K$ is the carrying capacity of the vegetation, $\\alpha$ is the maximum predation rate, $\\beta$ is the herbivore efficiency, $B$ is the half-saturation constant, $m$ is the mortality rate of the herbivore, and $f$ is a function describing the coupling interaction between different nodes. An example $f$ is given below:\n\n\\begin{equation}\nf_i(H) = \\frac{\\sigma}{2P} \\sum_{k = i - P}^{i + P} (H_k - H_i)\n\\end{equation}\n\n\\noindent where $\\sigma$ controls the coupling strength and $1 \\leq P \\leq N/2$ is the coupling range of the topology. This is linear coupling and is the coupling used in Dutta and Banerjee (2015) which we are trying to reproduce and extend. Our extension will be changing the form of $f$.\n\n\\begin{equation}\nD_n(H)=d_n\\frac{H^\\alpha}{S+H^\\alpha}\n\\end{equation}\nWhere $d_n$ is the maximum dispersal rate per capita, varying this has the physical interpretation of varying the distance between nodes or the ability of animal to traverse that distance. S is the half saturation and $\\alpha$ varies the shape of the distribution. 
We took this function evaluated at each node to obtain the dispersal contribution (effectively limiting the maximum dispersal between nodes, versus the linear case, where the greater the difference, the greater the contribution).\n\\end{document}  \n", "meta": {"hexsha": "f90fdfe0cc25821e424ae27deb8b2a4b632d3310", "size": 2408, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "RMequations.tex", "max_stars_repo_name": "NicholasBermuda/chimeras", "max_stars_repo_head_hexsha": "17322671f6023a3a23670ac0e4596d522facab79", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-03-17T02:58:04.000Z", "max_stars_repo_stars_event_max_datetime": "2018-03-17T02:58:04.000Z", "max_issues_repo_path": "RMequations.tex", "max_issues_repo_name": "NicholasBermuda/chimeras", "max_issues_repo_head_hexsha": "17322671f6023a3a23670ac0e4596d522facab79", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RMequations.tex", "max_forks_repo_name": "NicholasBermuda/chimeras", "max_forks_repo_head_hexsha": "17322671f6023a3a23670ac0e4596d522facab79", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.2, "max_line_length": 492, "alphanum_fraction": 0.7101328904, "num_tokens": 661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7090191214879991, "lm_q1q2_score": 0.5549226632488313}}
{"text": "% Preamble.\n\\documentclass[12pt]{article}\n\\usepackage[margin=1.25in]{geometry}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{amsfonts}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n%% Title macros.\n\\newcommand{\\HOMEWORKNUM}{22}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-06-25}\n\n\\title{\\vspace{-2\\baselineskip}MATH 225 - Homework \\#\\HOMEWORKNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%% Formatting options.\n%\\pagenumbering{gobble}  % Include for single-page document.\n\n\n% Document.\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{For the following matrices, find the eigenvectors and associated\neigenvalues by thinking geometrically about the corresponding matrix\ntransformation.} \\\\[\\baselineskip]\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item $\\begin{pmatrix} 3 & 0 \\\\ 0 & 3 \\end{pmatrix}$. \\\\[\\baselineskip]\n\tThis matrix transformation is equivalent to scaling a vector by $3$:\n\t\\begin{equation*}\n\t\t\\begin{pmatrix} 3 & 0 \\\\ 0 & 3 \\end{pmatrix}\n\t\t\\vec{v}\n\t\t=\n\t\t3 \\vec{v}\n\t\t.\n\t\\end{equation*}\n\tThus, every nonzero vector is an eigenvector of this matrix. Their\n\tcorresponding eigenvalue is \\boxed{3}.\n\t\n\t\\item $\\begin{pmatrix} -2 & 0 \\\\ 0 & 4 \\end{pmatrix}$. \\\\[\\baselineskip]\n\tThis matrix transformation is equivalent to scaling a vector's first\n\tcomponent by $-2$ and its second component by $4$.\n\t\\begin{equation*}\n\t\t\\begin{pmatrix} -2 & 0 \\\\ 0 & 4 \\end{pmatrix}\n\t\t\\begin{pmatrix} x \\\\ y \\end{pmatrix}\n\t\t=\n\t\t\\begin{pmatrix} -2x \\\\ 4y \\end{pmatrix}\n\t\t.\n\t\\end{equation*}\n\tThus, nonzero vectors in the form of\n\t$\\begin{pmatrix} n \\\\ 0 \\end{pmatrix}$ are eigenvectors of this matrix,\n\twith a corresponding eigenvalue of \\boxed{-2}, and nonzero vectors in the\n\tform of $\\begin{pmatrix} 0 \\\\ n \\end{pmatrix}$ are eigenvectors of this\n\tmatrix, with a corresponding eigenvalue of \\boxed{4}.\n\t\n\t\\item \\textit{The identity matrix.} \\\\[\\baselineskip]\n\tThe identity transformation is equivalent to scaling a vector by $1$:\n\t\\begin{equation*}\n\t\tI \\vec{v} = 1 \\vec{v}.\n\t\\end{equation*}\n\tThus, every nonzero vector is an eigenvector of the identity matrix. Their\n\tcorresponding eigenvalue is \\boxed{1}.\n\t\n\t\\item \\textit{A diagonal matrix with distinct diagonal entries.}\n\t\\\\[\\baselineskip]\n\tThis matrix transformation is equivalent to scaling a vector's entries\n\teach by their corresponding diagonal entry. \\\\\n\tThus, such a matrix's eigenvectors are scalar multiples of the standard\n\tbasis vectors. Each eigenvector's corresponding diagonal entry is their\n\tcorresponding eigenvalue.\n\\end{enumerate}\n\n\\section*{2.}\n\\textit{Suppose that $A$ is a $2 \\times 2$ matrix having eigenvectors}\n\\begin{equation*}\n\t\\vec{v}_1 = \\begin{pmatrix} 2 \\\\ 1 \\end{pmatrix}, \\quad\n\t\\vec{v}_2 = \\begin{pmatrix} -1 \\\\ 2 \\end{pmatrix}\n\\end{equation*}\n\\textit{and associated eigenvalues $\\lambda_1 = 2$, $\\lambda_2 = -3$. 
If\n$\\vec{x} = \\begin{pmatrix} 5 \\\\ 0 \\end{pmatrix}$, find the vector\n$A^4 \\vec{x}$.}\n\n\\section*{3.}\n\\textit{Determine whether the following statements are true or false and\nprovide a justification for your response.}\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{The eigenvalues of a diagonal matrix are equal to the\n\tentries on the diagonal.}\n\t\\item \\textit{If $A v = \\lambda v$, then $A^2 v = \\lambda v$ as well.}\n\t\\item \\textit{Every vector is an eigenvector of the identity matrix.}\n\t\\item \\textit{If $\\lambda = 0$ is an eigenvalue of $A$, then $A$ is\n\tinvertible.}\n\t\\item \\textit{For every $n \\times n$ matrix $A$, it is possible to find\n\ta basis of $\\mathbb{R}^n$ consisting of eigenvectors of $A$.}\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "2ea234f07ea3878422c4d57726d673e826430a88", "size": 3629, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/hw22/main.tex", "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "usc-20202-math-225-39425/hw22/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/hw22/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2330097087, "max_line_length": 75, "alphanum_fraction": 0.7147974649, "num_tokens": 1181, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737473266735, "lm_q2_score": 0.8128673178375734, "lm_q1q2_score": 0.5548418912157747}}
{"text": "\\section{Theoretical Foundations}\n\\label{sec:Theoretical_Foundations}\nThis section summarizes the theory and formulas necessary to understand the following experiments on interference and diffraction in section \\ref{sec:Evaluation}.\n\n\\subsection{Wave Interference}\n\\label{subsec:Interference}\nInterference is caused, when two waves superpose to form a wave of greater, lower or the same amplitude. The interfering waves are usually coherent with each other, either because they come from the same source or because they have the same frequency \\cite{diffraction}.\n\n\\subsection{Diffraction Slit / Anti-Slit}\n\\label{subsec:Diffraction_Slit}\nThe minima of the diffraction pattern of a slit or an anti-slit $\\varphi_m$ can be derived by using the Huygens\u2013Fresnel principle. It states that every point on a slit is itself the source of spherical wavelets. These wavelets have the same amplitude and phase \\cite{diffraction}.\n\nThe order $m$, the wavelength $\\lambda$ and the width of the slit $w$ is used to calculate the minima $\\varphi_m$:\n\\begin{equation}\n\\sin\\varphi_m = \\frac{m\\cdot\\lambda}{w} \\qquad \\text{with} \\qquad m = \\pm 1, \\pm 2, ...\n\\label{eq:slit_minima}\n\\end{equation}\nTo obtain the maxima $\\varphi_m$, the following equation is used \\cite{uni_hamburg}:\n\\begin{equation}\n\\sin\\varphi_m =\n\\begin{cases} \n\t0 & \\qquad \\text{for} \\qquad m = 0\\\\\n\t\\left(m + \\frac{1}{2}\\right)\\cdot\\frac{\\lambda}{w} & \\qquad \\text{for} \\qquad m = 1, 2, 3, ...\\\\[6pt]\n\t\\left(m - \\frac{1}{2}\\right)\\cdot\\frac{\\lambda}{w} & \\qquad \\text{for} \\qquad m = -1, -2, -3, ...\n\\end{cases}\n\\label{eq:slit_maxima}\n\\end{equation}\nwhere:\n\\begin{conditions}\n\t\\varphi_m & angle of the interference of order $m$ \\\\\n\tm & order of the interference \\\\\n\t\\lambda & wavelength of the laser \\\\\n\tw & width of the slit or the anti-slit\n\\end{conditions}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1.5]{slit_theory}\n\t\\caption{Diffraction of light on a slit \\cite{diffraction} - partially modified}\n\t\\label{fig:diff_slit}\n\\end{figure}\nFigure \\ref{fig:diff_slit} shows the diffraction of light on a slit. The partial wave 1 (red) and the partial wave 2 (green) differ by $\\lambda/2$ and thus interfere destructively, which results in a minimum. The width of the slit is denoted as $w$ and the angle of interference is denoted as $\\varphi$ \\cite{diffraction}.\n\n\\subsection{Diffraction Circular Aperture}\n\\label{subsec:Diffraction_Circular_Aperture}\nWhen diffraction occurs at a circular aperture, interference rings can be observed. 
The angles at which the minima occur can be derived by using the following equation \cite{diffraction}:\n\\begin{equation}\n\\sin\\varphi_c = \\frac{c_k\\cdot\\lambda}{d} \\qquad \\text{with} \\qquad c_k = \\frac{j_{\\text{1,k}}}{\\pi}\n\\label{eq:circular_aperture}\n\\end{equation}\nThe first five Bessel coefficients $c_k$ are the following (used in this laboratory report):\n\\begin{equation}\nc_k = \\{1.220,~2.233,~3.238,~4.241,~5.243,~...\\}\n\\label{eq:coeffs}\n\\end{equation}\nwhere:\n\\begin{conditions}\n\t\\varphi_c & angle of the interference \\\\\n\tc_k & Bessel coefficient of the $k$th minimum \\\\\n\tj_{\\text{1,k}} & $k$th positive zero of the Bessel function $J_1(x)$ \\\\\n\t\\lambda & wavelength of the laser \\\\\n\td & diameter of the circular aperture\n\\end{conditions}\nThe Bessel coefficients $c_k$ shown above were calculated with MATLAB (see appendix \\ref{sec:MATLAB_Error_Calculation}).\n\n\\subsection{Diffraction Cross-Grid Aperture}\n\\label{subsec:Diffraction_Cross-Grid_Aperture}\nTo calculate the maxima of a cross-grid aperture on either the x-axis or the y-axis, the equation for a line-grid can be used \\cite{diffraction}.\n\\begin{equation}\n\\sin\\varphi_m = \\frac{m\\cdot\\lambda}{g}\n\\label{eq:cross-grid}\n\\end{equation}\nwhere:\n\\begin{conditions}\n\t\\varphi_m & angle of the interference of order $m$ \\\\\n\tm & order of the interference \\\\\n\t\\lambda & wavelength of the laser \\\\\n\tg & grid constant\n\\end{conditions}\nFor the above equation \\ref{eq:cross-grid} to be valid, the distance between the cross-grid aperture and the projection surface ($x$) must be large \\cite{diffraction}.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1.5]{line-grid}\n\t\\caption{Diffraction of light on a line-grid \\cite{diffraction} - partially modified}\n\t\\label{fig:line-grid}\n\\end{figure}\nFigure \\ref{fig:line-grid} shows the diffraction of light on a line-grid, which is a simplified version of the cross-grid. The angle of interference is denoted as $\\varphi$.\n\n\\subsection{Calculation of the Angle $\\varphi$}\n\\label{subsec:Calculation_of_the_Angle}\nThe angle of the interference $\\varphi$ is only measured indirectly by measuring the distance from the aperture to the projection surface ($x$) and the position of the maxima or minima ($y$).\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{angle}\n\t\\caption{Calculation of the angle $\\varphi$}\n\t\\label{fig:angle}\n\\end{figure}\nFigure \\ref{fig:angle} shows the relationship between the angle $\\varphi$ and the distances $x$ and $y$. Combining the diffraction formulas with this geometry also allows the expected positions of the minima on the projection surface to be predicted, as sketched below. 
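\nThe short numerical sketch below predicts the positions $y_m$ of the slit minima from equation \\ref{eq:slit_minima} and the geometry of figure \\ref{fig:angle}; the laser wavelength, slit width and distance $x$ are hypothetical values, not the ones used in the experiment.\n\n\\begin{verbatim}\nimport numpy as np\n\nlam = 632.8e-9  # hypothetical wavelength in metres (HeNe laser)\nw = 0.1e-3      # hypothetical slit width in metres\nx = 1.0         # hypothetical distance to the projection surface\n\nfor m in (1, 2, 3):\n    phi = np.arcsin(m * lam / w)  # sin(phi_m) = m * lambda / w\n    y = x * np.tan(phi)           # position of the m-th minimum\n    print(m, phi, y)\n\\end{verbatim}\n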
To calculate the angle of interference, the following equation \ref{eq:angle} is used:\n\\begin{equation}\n\\varphi = \\arctan\\left(\\frac{y}{x}\\right)\n\\label{eq:angle}\n\\end{equation}\nwhere:\n\\begin{conditions}\n\t\\varphi & angle of the interference \\\\\n\tx & distance between the aperture and the projection surface (see figure \\ref{fig:experimental_arrangement}) \\\\\n\ty & position of the maxima or the minima\n\\end{conditions}\n", "meta": {"hexsha": "9682e1c0905b0d218ab08114b270b86224bfd177", "size": 5353, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "glaL3_O_9_Interference_and_Diffraction/sections/theoretical_foundations.tex", "max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "glaL3_O_9_Interference_and_Diffraction/sections/theoretical_foundations.tex", "max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glaL3_O_9_Interference_and_Diffraction/sections/theoretical_foundations.tex", "max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.5648148148, "max_line_length": 322, "alphanum_fraction": 0.7500467028, "num_tokens": 1557, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8128673269042767, "lm_q1q2_score": 0.5548418869068156}}
{"text": "\\section{Discussion}\nSending a satellite into space involves a good deal of planning. For the scope we have we will come up with a theory for orbital mechanics. Satellites are acted upon by many forces while in orbit. These forces are; \\\\\n1. The gravitation of the Earth, Fe.\\\\\n2. The gravitational force from the Sun, Fs.\\\\\n3. The gravitation of nearby and large Jupiter, Fj.\\\\\n4. The centripetal force, Fc.\\\\\n5. The frictional force, Ff.\\\\\n6. The gravitation of nearby satellites, Fsat.\\\\\n\n\nIt can be shown that forces 2-6 are negligible and the dominant force is that of force 1. This makes up an interesting problem which involves two bodies that attract each other due to gravitation. This problem is called the \u201cTwo Body Problem\u201d in physics.\n\nThe Two-body problem was first described by Sir Isaac Newton in the year . It inquires,  \u201c given the positions and velocities of two bodies that act on each other by gravity, at a certain point in time, find the equations that describe the subsequent velocities and positions of the bodies.\u201d\nThese equations have enabled the advent of mankind into outer space and as a result satellite communications. The satellites that orbit the earth are kept in accurate paths by their engineers. This is only achievable by a good understanding and application of the physics involved. For the scope of this project we will go about solving the two body problem only, this is because an introduction of a third body will complicate the system.\\\\\n\n\\vspace{4cm}\n\\section{Equations that determine satellite location at any given time, t}\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=2.5, width=1\\textwidth]{twobody}\n\t\\caption{Two bodies with masses m1 and m2 seperated by a distance r.}\n\t\\label{fig:two-body problem}\n\\end{figure}\nWe will now model the two body problem. Let us take a rectangular coordinate frame for the two body problem as shown in the figure above.\n\\\\\n\n\nThe mutual distance apart is r;\n\\[ r^2 =  (x_2-x_1)^2 + (y_2-y_1)^2  ... \\big(1)\\]\n\nThe magnitude of gravity;\n\\[ F =  Gm_1m_2 / r^2 ... \\big(2)\\]\n\\\\\nNow the point P1 is attracted to P2 with the force of gravity, F, and can be resolved into the x and y components as;\\\\\n1. along the OX AXIS\n\\[F_X = Gm_1m_2 / r^2\\big(x_2-x_1/r) ... \\big(3)\\] \n\\\\\n2. along the OY AXIS\n\\[F_Y = Gm_1m_2 / r^2\\big(y_2-y_1/r) ... \\big(4)\\]\nWhere, x2-x1/r and y2-y1/r are direct cosines.\n\\\\\nFor point P2 this will be;\\\\\n1. along the OX AXIS\n\\[F_X = Gm_1m_2 / r^2\\big(x_1-x_2/r) ... \\big(5)\\] \n\\\\\n2. along the OY AXIS\n\\[F_Y = Gm_1m_2 / r^2\\big(y_1-y_2/r) ... \\big(6)\\]\nWhere, x1-x2/r and y1-y2/r are direct cosines.\n\\\\\nFrom Newton\u2019s 2nd Law of motion we have;\n\\[F=ma\\]\n\\\\\nWe can further work on equations 3) to 6) to obtain the following;\\\\\n1. for particle 1\n\\[F_X = m_1\\big(d^2x/dt^2)= Gm_1m_2 / r^2\\big(x_2-x_1/r) ... \\big(7)\\] \n\\[F_Y = m_1\\big(d^2y/dt^2)= Gm_1m_2 / r^2\\big(y_2-y_1/r) ... \\big(8)\\]\n2. for particle 2\n\\[F_X = m_2\\big(d^2x/dt^2)= Gm_1m_2 / r^2\\big(x_1-x_2/r) ... \\big(9)\\]\n\\[F_Y = m_2\\big(d^2y/dt^2)= Gm_1m_2 / r^2\\big(y_1-y_2/r) ... \\big(10)\\]\n\\\\\nSimplifying equations 7) to 10) respectively to obtain equations 11) to 14);\\\\\n1. for particle 1\n\\[d^2x_1/dt^2= Gm_2 / r^2\\big(x_2-x_1/r) ... \\big(11)\\] \n\\[d^2y_1/dt^2= Gm_2 / r^2\\big(y_2-y_1/r) ... \\big(12)\\]\n2. for particle 2\n\\[d^2x_1/dt^2= Gm_1 / r^2\\big(x_1-x_2/r) ... 
\\big(13)\\]\n\\[d^2y_1/dt^2= Gm_1 / r^2\\big(y_1-y_2/r) ... \\big(14)\\]\n\\\\\nLet us subtract equation 11) from equation 13);\n\\[\\frac{d^2x_2}{dt^2} - \\frac{d^2x_1}{dt^2}= \\frac{Gm_1} {r^3}\\big(\\frac{x_1-x_2}{r}) - \\frac{Gm_2} {r^3}\\big(\\frac{x_2-x_1}{r}) ... \\big(15)\\]\n\n\\[\\frac{d^2\\small(x_2-x_1)}{dt^2}= -\\frac{G} {r^3}\\bigg[\\big(m_1+m_2)-\\big(x_2-x_1)\\bigg] ... \\big(16)\\]\nLet  \\(x = x_2-x_1 ... \\big(17)\\) and \n\\(\\mu = G(m1+m2) ... \\big(18)\\) \nthen;\n\\[\\frac{d^2x}{dt^2}+\\frac{\\mu x}{r^3}=0 ... \\big(19)\\]\n\\\\\nIn a similar fashion for equations 12) and 14) we arrive at;\n\\[\\frac{d^2y}{dt^2}+\\frac{\\mu y}{r^3}=0 ... \\big(20)\\]\nwhere;\n\\[y = y_2-y_1 ... \\big(21)\\]\n\nThe solution of this is written as;\n\\[r = \\frac{\\frac{h^2}{\\mu}}{1+e\\cos\\theta} ... \\big(22)\\]\n\nwhere \\(\\theta\\) is the true anomaly, h is a constant which is twice the rate of description of area by radius vector, e, is the eccentricity of the orbit.\\\\\n\nTo prove this, consider; \n\\[r^2 = \\big(x_2-x_1)^2+\\big(y_2-y_1)^2 ... \\big(23)\\]\nequation 15) and 16) can be combined as; \n\\[\\frac{d^2r}{dt^2}+\\frac{\\mu\\vec{r}}{r^3}=0 ... \\big(24)\\]\n\\[\\ddot{\\vec{r}}+\\frac{\\mu\\vec{r}}{r^3}=0 ... \\big(25)\\]\n\\[\\ddot{\\vec{r}}=-\\frac{\\mu\\vec{r}}{r^3} ... \\big(26)\\]\\\\\ncrossing both side with \\(\\vec{r}\\) we will have;\n\\[\\vec{r}\\times\\ddot{\\vec{r}}=-\\vec{r}\\times\\frac{\\mu\\vec{r}}{r^3}\\]\n\\[=0 ... \\big(27)\\]\\\\\nthe left hand side can be written as;\n\\[\\vec{r}\\times\\ddot{\\vec{r}}= \\frac{d}{dt}\\big(\\vec{r}\\times\\dot{\\vec{r}}) ... \\big(28)\\]\\\\\nsince the time derivative of \\(\\vec{r}\\times\\dot{\\vec{r}}\\) is equal to zero, the quantity itself must be a constant, i.e.; \\(\\vec{r}\\times\\dot{\\vec{r}}=\\vec{h}=constant\\).\\\\\nThe vector \\(\\vec{h}\\) is the angular momentum per unit mass. It is related to the angular momentum vector \\(\\vec{l}\\) by;\n\\[\\vec{l}= m\\vec{h}\\]\nwhere \\(m\\) is the mass of the particle.\\\\\n\nConsidering by Kepler's second law, then the motion of this particle over a small time step \\(\\Delta t\\), we have;\n\\[\\Delta A = \\frac{1}{2}\\arrowvert r\\times \\dot{r}\\Delta t\\arrowvert ... \\big(29)\\]\n\\[=\\frac{1}{2}\\arrowvert\\vec{h}\\arrowvert\\Delta t ... \\big(30)\\]\n\nTherefore;\n\\[\\vec{h}\\times\\ddot{\\vec{r}}=-\\frac{\\mu\\big(\\vec{h}\\times\\vec{r})}{r^3}  ... \\big(31)\\]\n\\[=-\\frac{\\mu}{r^3}\\big[\\big(\\vec{r}\\times\\dot{\\vec{r}})\\times\\vec{r}  ... \\big(32)\\]\n\\[=-\\frac{\\mu}{r^3}\\big[\\dot{\\vec{r}}\\big(\\vec{r}\\cdot\\vec{r})-\\vec{r}\\big(\\vec{r}\\cdot\\vec{r})]  ... \\big(33)\\]\n\nnow since;\n\\[\\frac{d}{dt}\\bigg(\\frac{\\vec{r}}{r}\\bigg)=\\frac{1}{r}\\dot{r}-\\frac{\\dot{r}}{r^2}\\vec{r}  ... \\big(34)\\]\n\\[=\\frac{1}{r^3}\\big[\\dot{\\vec{r}}\\big(\\vec{r}\\cdot\\vec{r})-\\vec{r}\\big(\\vec{r}\\cdot\\dot{\\vec{r}})]  ... \\big(35)\\] \n\nthen;\n\\[\\vec{h}\\times\\dot{r}=-\\mu\\frac{d}{dt}\\bigg(\\frac{\\vec{r}}{r}\\bigg)  ... \\big(36)\\]\n\nintegrating both sides with respect to time will yield;\n\\[\\vec{h}\\times\\dot{r}= -\\mu \\frac{\\vec{r}}{r}- \\vec{A}  ... \\big(37)\\]\n\nwhere A is determined by the initial position and velocity.\\\\\n\nWe multiply through by \\(\\vec{r}\\);\n\\[\\big(\\vec{h}\\times\\dot{r})\\times\\cdot\\vec{r}= -\\mu \\boldmath{r}- \\vec{A}\\cdot\\vec{r}  ... 
\n\nIf we introduce \(\nu\), the true anomaly, as the angle between \(\vec{A}\) and the position vector \(\vec{r}\), we obtain\n\\[h^2=\\mu r + Ar \\cos \\nu \\qquad (39)\\]\nsince\n\\[\\big(\\vec{a}\\times \\vec{b}\\big)\\cdot \\vec{c} = -\\big(\\vec{c}\\times \\vec{b}\\big)\\cdot \\vec{a} \\qquad (40)\\]\nLetting \(P=h^2/\mu \) and \(e = A/\mu \), then\n\\[r=\\frac{P}{1+e \\cos \\nu} \\qquad (41)\\]\nFor any conic section except a parabola we have\n\\[P = a\\big(1-e^2\\big) \\qquad (42)\\]\n\ntherefore\n\\[r=\\frac{a\\big(1-e^2\\big)}{1+e \\cos \\nu} \\qquad (43)\\]\n\nWe will now use this equation to obtain the position of the satellite in its orbit relative to the central body. The remaining problem can be described as: what is the position of a satellite in its orbit after an elapsed time \(t-t_o\)? It can be refined further as: how long does it take for the satellite to move from one point to another in its orbit?\n\nKepler was able to solve this problem by defining the mean anomaly, the angle about the centre which the satellite sweeps out relative to the perigee. The perigee is the closest point to the centre along the satellite's orbit.\n\nThe change in mean anomaly is defined as\n\\[M-M_o = n(t-t_o) \\qquad (44)\\]\n\nwhere\n\\[n=\\sqrt{\\frac{\\mu}{a^3}} \\qquad (45)\\] \n\nWe define a variable \(E\), called the eccentric anomaly, which satisfies\n\\[\\cos E = \\frac{ae + r \\cos \\nu}{a} \\qquad (46)\\]\n\\[ \\cos E = \\frac{e+\\cos \\nu}{1+e \\cos \\nu} \\qquad (47)\\]\n\nGiven the above, the mean anomaly is\n\\[M= E-e\\sin E \\qquad (48)\\]\nwhich is Kepler's equation.\n\n", "meta": {"hexsha": "01746b08a7d0384e7afeb542c2316ed103fcc365", "size": 7942, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/chapters/chapter03.tex", "max_stars_repo_name": "Sylvance/two-body-problem-simulation", "max_stars_repo_head_hexsha": "b40f0018960891ae59fd2eb970a94427b9319e67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-13T14:29:54.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-13T14:29:54.000Z", "max_issues_repo_path": "thesis/chapters/chapter03.tex", "max_issues_repo_name": "Sylvance/two-body-problem-simulation", "max_issues_repo_head_hexsha": "b40f0018960891ae59fd2eb970a94427b9319e67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/chapters/chapter03.tex", "max_forks_repo_name": "Sylvance/two-body-problem-simulation", "max_forks_repo_head_hexsha": "b40f0018960891ae59fd2eb970a94427b9319e67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-20T07:18:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-18T17:53:48.000Z", "avg_line_length": 50.2658227848, "max_line_length": 441, "alphanum_fraction": 0.6479476202, "num_tokens": 2983, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737214979745, "lm_q2_score": 0.8128673133042217, "lm_q1q2_score": 0.5548418671261226}}
{"text": "\\section{\\texorpdfstring{Chordal graphs}{Chordal graphs}}\n\\vspace{5mm}\n\\large\n\n\\begin{definition}[Chordal graph]\n\tG is chordal if $\\forall k \\geq 4: C_k \\not\\leq_{ind} G$.\n\tSometimes called triangulated graphs.\n\\end{definition}\n\n\\begin{definition}[Simplicial]\n\tA vertex $u \\in V(G)$ is simplicial  if $G[N_G(u)]$ (reduction of graph to neighborhood of $u$) is a complete graph.\n\tDefinition is independent from taking closed (includes $u$) or open neighborhood.\n\\end{definition}\n\n\\begin{lemma}[1]\n\tEvery inclusion-wise minimal vertex cut in a chordal graph induces a clique.\n\\end{lemma}\n\\begin{proof}\n\t$G\\setminus A$ has components $V_1, V_2, \\ldots V_k, k \\geq 2$.\n\tThen\n\t\\[ \\forall i \\forall u \\in A \\exists w \\in V_i: uw \\in E(G) \\]\n\tPick some component $V_i$ and some edge in $A$ then there is an edge between them.\n\tOn the contrary, if there is no edge, $u$ can be removed from $A$.\n\tWhich contradicts with minimality of $A$.\n\n\t% TODO picture\n\tNow we take $u, v \\in A$, by previous observation\n\t\\[ \\exists w_1, w_2 \\in V_i: uw_1, vw_2 \\in E(G) \\]\n\tThen take $P_1$ shortest path between $w_1, w_2$.\n\tSimilarly $w_3, w_4 \\in V_j$ and the shortest path $P_2$ between $w_3, w_4$.\n\n\t$P_1 \\cup P_2$ is an induced cycle unless $uv \\in E(G)$.\n\n\tAlso, there is no edge between $V_i$ and $V_j, i \\ne j$ as otherwise $A$ is not a cut.\n\n\tAs $P_1$ is shortest path $vw_1, uw_2 \\notin E(G)$.\n\n\tTo sum up, $uv \\in E(G)$.\n\tSince $u,v$ were arbitrary, $A$ is a complete subgraph.\n\\end{proof}\n\n\\begin{lemma}[2]\\label{chordal_lemma_2}\n\tA chordal graph is complete or it contains 2 non-adjacent simplicial vertices.\n\\end{lemma}\n\\begin{proof}\n\tBy induction on $|V(G)|$.\n\tThe first step is $G$ is a complete graph.\n\n\tInductive step: $G$ is not complete.\n\tTake a minimal vertex cut $A$.\n\tLet $B$ be a connected component of $G\\setminus A$ and $C = (V(G) \\setminus A) \\setminus B$.\n\t\\[ G_1 = G[B \\cup A] \\]\n\t\\[ G_2 = G[C \\cup A] \\]\n\n\t% TODO picture\n\n\tAs $|V(G_1)| < |V(G)|$ we can apply induction on it.\n\tNote that induced subgraph of chordal graph is also chordal.\n\tBy induction hypothesis $G_1, G_2$ are either complete or have 2 simplicial vertices.\n\n\tOne of the simplicial vertices cannot be in $A$ because $A$ is complete graph and simplicial vertices are not adjacent.\n\tNo edges can connect $B, C$ therefore both of the vertices are simplicial in $G$.\n\\end{proof}\n\n\\begin{corollary}\n\tEvery nonempty chordal graphs has a simplicial vertex.\n\\end{corollary}\n\nSometimes it is easier to proof stronger statement, because we have more power in inductive step.\n\n\\begin{definition}[PES]\n\tPerfect elimination scheme - for graph $G$ is a \\emph{linear ordering} of its vertices.\n\t\\[ V(G) = u_1, \\ldots, u_n\\]\n\tSuch that $\\forall i: u_i$ is simplicial for $G[\\{ u_1, \\ldots u_i\\}]$\n\\end{definition}\n\n\\begin{lemma}[Chordal has PES]\n\tEvery chordal graph allows a PES.\n\\end{lemma}\n\\begin{proof}\n\tTake any simplicial vertices and move it to the right.\n\tThen delete vertex picked in previous step and repeat.\n\n\tFormally: by induction on $n$ using corollary.\n\\end{proof}\n\n\\begin{definition}[Perfect graph]\n\t$G$ is perfect if for every subgraph chromatic number is equal to clique number.\n\t\\[ \\forall H \\leq G: \\chi(H) = \\omega(H) \\]\n\\end{definition}\n\n\\begin{theorem}[Chordal is Perfect]\n\tA chordal graph is perfect.\n\\end{theorem}\n\\begin{proof}\n\t% TODO 
\tTake a PES for the graph and colour the vertices from left to right with colours $\in \{ 1, 2, 3, \ldots \}$ by the \textbf{first-fit method}: each vertex takes the smallest colour not used by its neighbors among the already-coloured vertices.\n\n\tIf we are forced to use colour $k$, then the neighbors of the current vertex among the earlier vertices used $(k - 1)$ distinct colours.\n\tThese neighbors form a clique, since the vertex is simplicial in the prefix, so together with the current vertex they form a complete graph on $k$ vertices; hence $\chi(G) = \omega(G)$. The same argument applies to every induced subgraph, which is again chordal.\n\end{proof}\n\n\begin{definition}[Clique-tree decomposition]\n\tClique-tree decomposition of a graph $G$ is a tree $T$\n\t\[ T = (\Q, F): V(T) = \Q = \{\text{maximal cliques of G} \} \]\n\tand\n\t\[ \forall u \in V(G): T[ \{Q: u \in Q \in \Q \}] \text{ is connected} \]\n\end{definition}\n\n\begin{theorem}[Chordal equivalent statements]\n\tFor any graph $G$ the following are equivalent:\n\t\begin{enumerate}\n\t\t\item $G$ is chordal\n\t\t\item $G$ has a PES\n\t\t\item $G$ allows a Clique-tree decomposition\n\t\t\item $G$ is an intersection graph of subtrees of a tree\n\t\end{enumerate}\n\end{theorem}\n\begin{proof}\n\t$1 \Rightarrow 2$: by induction on the number of vertices, using \cref{chordal_lemma_2}.\n\tPick a simplicial vertex, put it at the end of the PES, remove it from the graph and continue.\n\n\t$2 \Rightarrow 3$: let $\{ u_1, \ldots u_n \}$ be a PES.\n\t$G$ has maximal cliques $\Q = \{ Q_1, \ldots, Q_k \}$, and we construct $T = (\Q, E(T))$ by induction.\n\tRemove the last vertex of the PES from the graph,\n\t\[ G^{'} = G \setminus u_n \]\n\tIt has maximal cliques $\Q^{'} = \{ Q_1^{'}, \ldots, Q_k^{'} \}$, so by the induction hypothesis\n\t\[ \exists T^{'} = (\Q^{'}, E(T^{'})) \]\n\tConsider two cases. If $N_G(u_n)$ is a maximal clique of $G^{'}$, then in $G$ it is replaced by the maximal clique $N_G(u_n) \cup \{ u_n \}$, and $T^{'}$ works with this node relabelled.\n\n\tOtherwise $N_G(u_n)$ is not a maximal clique of $G^{'}$,\n\tso $\exists Q_i^{'} \in \Q^{'}: N_G(u_n) \subsetneq Q_i^{'}$.\n\tWe add the new node $N_G(u_n) \cup \{ u_n \}$ and connect it to $Q_i^{'}$.\n\n\t$3 \Rightarrow 4$: we want to realise $G$ as an intersection graph of subtrees of a tree,\n\t\[ G \simeq IF(\{T_u: u \in V(G) \}) \]\n\tsuch that\n\t\[ V(T_u) = \{ Q_i: u \in Q_i \} \subseteq \Q \]\n\n\tProving the equivalence:\n\t\[ uv \in E(G) \Rightarrow \exists Q_i \in \Q: u,v \in Q_i \Rightarrow Q_i \in V(T_u) \cap V(T_v) \neq \emptyset \]\n\tOn the other hand\n\t\[ V(T_u) \cap V(T_v) \neq \emptyset \Rightarrow \exists Q_i \in V(T_u) \cap V(T_v) \Rightarrow u, v \in Q_i \Rightarrow uv \in E(G) \]\n\n\t$4 \Rightarrow 1$: let $T$ be a tree with a collection of subtrees\n\t\[V_u \subseteq V(T), u \in V(G): T[V_u] \text{ is connected}, \quad \forall u \ne v \in V(G): uv \in E(G) \iff V_u \cap V_v \ne \emptyset \]\n\tAssume by contradiction $G$ is not chordal.\n\tBy definition $\exists k \geq 4 \in \N: C_k \leq_{ind} G$.\n\tLet the cycle be $\{ u_1, u_2, \ldots, u_k \}$; in particular $u_k u_1 \in E(G)$. Take\n\t\[ T_1 = T[V_{u_1}], T_2 = T[V_{u_2}], T_3 = T[V_{u_3}] \]\n\t$T_1$ must cross $T_2$, and $T_3$ must cross $T_2$ but not $T_1$:\n\t\[ V_{u_1} \cap V_{u_2} \ne \emptyset \land V_{u_1} \cap V_{u_3} = \emptyset \]\n\tTherefore\n\t\[ \exists e \in E(T_2): e \notin E(T_1), E(T_3) \]\n\tRemoving one edge disconnects a tree:\n\t% TODO disjoint union\n\t\[ T \setminus e = T_a \mathbin{\dot{\cup}} T_b, V_{u_1} \subseteq V(T_a), V_{u_3} \subseteq V(T_b) \]\n\tProceed by induction to show\n\t\[ \forall j, j = 3 \ldots k: V_{u_j} \subseteq V(T_b) \]\n\twith the inductive step\n\t\[ V_{u_j} \subseteq V(T_b), \; V_{u_{j + 1}} \cap V_{u_j} \neq \emptyset, \; V_{u_{j + 1}} \cap V_{u_2} = \emptyset \Rightarrow V_{u_{j + 1}} \subseteq V(T_b)\]
\n\tTherefore\n\t\[ V_{u_k} \cap V_{u_1} = \emptyset \]\n\twhich contradicts $u_k u_1 \in E(G)$.\n\n\t% TODO next lecture\n\n\end{proof}\n", "meta": {"hexsha": "cda04f6624c2bd38594c872ce91ef7198f10eee5", "size": 6760, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/chord.tex", "max_stars_repo_name": "karlov/NDMI037", "max_stars_repo_head_hexsha": "4f5a6c646af03f13ab6ea0c65ab3579ee5283106", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/chord.tex", "max_issues_repo_name": "karlov/NDMI037", "max_issues_repo_head_hexsha": "4f5a6c646af03f13ab6ea0c65ab3579ee5283106", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/chord.tex", "max_forks_repo_name": "karlov/NDMI037", "max_forks_repo_head_hexsha": "4f5a6c646af03f13ab6ea0c65ab3579ee5283106", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8505747126, "max_line_length": 157, "alphanum_fraction": 0.6674556213, "num_tokens": 2428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737214979745, "lm_q2_score": 0.8128673133042217, "lm_q1q2_score": 0.5548418671261226}}
{"text": "\\subsection{Hash Map}\r\n\\definecolor{inkscapeNavy}{rgb}{0.0, 0.0, 0.5}\r\n\\definecolor{inkscapeGreen}{rgb}{0.0, 0.5, 0.0}\r\n\\definecolor{inkscapeMaroon}{rgb}{0.5, 0.0, 0.0}\r\n\r\n\\begin{frame}{Associative Arrays}{The Hash Map}\r\n  \\textbf{Idea:}\r\n  \\begin{itemize}\r\n    \\item\r\n      Mapping the keys onto indices with a {\\color{MainA}hash function}\r\n    \\item\r\n      Store the values at the calculated indices in a normal array\r\n  \\end{itemize}\r\n  \\textbf{Example:}\r\n  \\begin{itemize}\r\n    \\item\r\n      Key set: $x = \\{3904433, 312692, 5148949\\}$\r\n    \\onslide <2-> \\item\r\n      Hash function:\r\n      {\\color{MainA}$h(x) = x \\;\\mathrm{mod}\\; 5$},\r\n      in the range {\\color{MainA} $[0, \\ldots, 4]$ }\r\n    \\onslide <3-> \\item We need an array {\\color{MainA}T}\r\n      with {\\color{MainA}5} elements.\\\\\r\n      A \\enquote{hash table} with 5 \\enquote{buckets}\r\n    \\onslide <4-> \\item\r\n      The element with the key {\\color{MainA}x}\r\n      is stored in {\\color{MainA}$T[h(x)]$}\r\n  \\end{itemize}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Associative Arrays}{The Hash Map}\r\n  \\textbf{Storage:}\r\n  \\vspace{0.1em}\r\n  \\begin{itemize}\r\n    \\setlength\\itemsep{0.75em}\r\n    \\item<2->\r\n    \\small \\texttt{insert(3904433,\"A\")}:\r\n    $h(3904433) = 3 \\hspace*{-0.1em} \\Rightarrow$\r\n    T[3] = {\\color{inkscapeGreen}(3904433, \"A\")}\r\n    \\item<3->\r\n    \\small \\texttt{insert(312692, \"B\")}:\r\n    $h(312692) = 2 \\hspace*{-0.1em} \\Rightarrow$\r\n    T[2] = {\\color{inkscapeMaroon}(312692, \"B\")}\r\n    \\item<4->\r\n    \\small \\texttt{insert(5148949, \"C\")}:\r\n    $h(5148949) = 4 \\hspace*{-0.1em} \\Rightarrow$\r\n    T[4] = {\\color{inkscapeNavy}(5148949, \"C\")}\r\n  \\end{itemize}\r\n  \\vspace{0.1em}\r\n  \\begin{figure}\r\n  \\caption{Hash table T}\r\n  \\centering\r\n\\only<1-1>{\\includegraphics[width=0.6\\textwidth]{Images/Bucket1.pdf}}%\r\n\\only<2-2>{\\includegraphics[width=0.6\\textwidth]{Images/Bucket2.pdf}}%\r\n\\only<3-3>{\\includegraphics[width=0.6\\textwidth]{Images/Bucket3.pdf}}%\r\n\\only<4->{\\includegraphics[width=0.6\\textwidth]{Images/Bucket4.pdf}}\r\n  \\end{figure}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n%%%%% : TODO: Add the check step!\r\n\\begin{frame}{Associative Arrays}{The Hash Map}\r\n  \\textbf{Searching:}\r\n  \\small\r\n  \\begin{itemize}\r\n     \\setlength\\itemsep{0.75em}\r\n    \\item<1->\r\n      \\texttt{search(3904433)}:\r\n      $h(3904433) = 3 \\hspace*{0.5em} \\Rightarrow$\r\n      T[3] $\\rightarrow$ {\\color{inkscapeGreen}(3904433, \"A\")}\r\n    \\item<2->\r\n      \\texttt{search(123459)}:\r\n      $h(123459) = 4 \\hspace*{0.5em} \\Rightarrow$\r\n      T[4] {}\\\\\r\n      \\vspace{0.75em}\r\n      $\\Rightarrow$ {\\color{red}Value with key 123459 does not exist}\r\n    \\item<3->\r\n      Search time for this example: {\\color{MainA}$\\mathcal{O}(1)$}\r\n  \\end{itemize}\r\n  \\vspace*{-1.0em}\r\n  \\begin{figure}\r\n    \\caption{Hash table T}\r\n    \\centering\r\n    \\includegraphics[width=0.6\\textwidth]{Images/Bucket4.pdf}\r\n  \\end{figure}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Associative Arrays}{Hash Collisions}\r\n  \\textbf{Further inserting:}\r\n  \\begin{itemize}\r\n    \\item<1->\r\n      
\texttt{insert(876543, \"D\")}:\r\n      $\hspace*{0.5em} h(876543) = 3$\\\r\n      \onslide <2->\r\n      $\hspace*{0.5em} \Rightarrow$\r\n      T[3] = (876543, \"D\")\r\n      $\Rightarrow$ {\color{red}{Collision}}\r\n    \item<3->\r\n      This happens more often than expected\r\n      \begin{itemize}\r\n        \item\r\n          \textbf{Birthday problem:}\r\n          with 23 people there is a probability of $50\;\%$ that 2 of\r\n          them have their birthday on the same day\r\n      \end{itemize}\r\n  \end{itemize}\r\n  \vspace*{-1.0em}\r\n  \onslide <1->\r\n  \begin{figure}\r\n    \caption{Hash table T}\r\n    \centering\r\n       \includegraphics[width=0.6\textwidth]{Images/Bucket4.pdf}\r\n  \end{figure}\r\n\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\begin{frame}{Associative Arrays}{Hash Collisions}\r\n  \textbf{Problem:}\r\n  \begin{itemize}\r\n    \item\r\n      Two different keys {\color{MainA} $x \neq y$} are mapped to the same\r\n      hash value {\color{MainA} $h(x) = h(y)$}\r\n  \end{itemize}\r\n  \onslide<2->{\textbf{Easiest Solution:}}\r\n  \begin{itemize}\r\n    \item<3->\r\n      Represent each bucket as a list of key-value pairs\r\n    \item<4->\r\n      Append new values to the end of the list\r\n  \end{itemize}\r\n    \vspace*{-1.0em}\r\n    \begin{figure}\r\n    \onslide <3->\caption{Hash table T}\r\n    \centering\r\n    \only<1-3>{\includegraphics[width=0.6\textwidth]{Images/Bucket4.pdf}}\r\n    \only<4-4>{\includegraphics[width=0.6\textwidth]{Images/Bucket5.pdf}}\r\n  \end{figure}\r\n\end{frame}\r\n\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n%% \begin{frame}{Associative Arrays}{Hash Collisions}\r\n%%   \textbf{Example:}\r\n%%   \begin{itemize}\r\n%%     \item\r\n%%     Key set: $k = \{3904, 3126, 5148, 4522\}$\r\n%%     \item\r\n%%     Hash function:\r\n%%     $h(k) = k \;\mathrm{mod}\; 5 \hspace*{1.0em} \in \{0, \ldots, 4\}$\r\n%%   \end{itemize}\r\n%%   \textbf{Inserting:}\r\n%%   \begin{itemize}\r\n%%     \item\r\n%%       \dots\r\n%%     \item\r\n%%       \lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\small\r\n%%       ]|insert(8769, \"D\")|:\r\n%%       $\hspace*{0.5em} h(8769) = 4$\r\n%%       $\hspace*{0.5em} \Rightarrow$\r\n%%       \lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\small\r\n%%       ]|T[4].append((8769, \"D\"))|\r\n%%     \item\r\n%%       \lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\small\r\n%%       ]|search(8769)|:\r\n%%       $\hspace*{0.5em} h(8769) = 4$\\\r\n%%       $\hspace*{1.5em} \Rightarrow$\r\n%%       \lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\small\r\n%%       ]|T[4]|\r\n%%       $\Rightarrow$\r\n%%       \lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\small\r\n%%       ]|[(3904, \"A\"), (8769, \"D\")]|\r\n%%   \end{itemize}\r\n%%   \vspace*{-1.0em}\r\n%%   \begin{table}[!b]\r\n%%     \caption{Hash table T}\r\n%%     \label{tab:hash_table:example_linked}\r\n%%     \begin{tabularx}{\textwidth}{l|ccccc}\r\n%%       {} & 0 & 1 & 2 & 3 & 4\\\r\n%%       \midrule\r\n%%       Bucket &\r\n%%       \lstinline[\r\n%%         
language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\\small\r\n%%       ]|[]| &\r\n%%       \\lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\\small\r\n%%       ]|[(3126, \"B\")]| &\r\n%%       \\lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\\small\r\n%%       ]|[]| &\r\n%%       \\lstinline[\r\n%%         language=Python,\r\n%%         style={python-idle-code},\r\n%%         basicstyle=\\small\r\n%%       ]|[(5148, \"C\")]| &\r\n%%       \\begin{math}\r\n%%         \\left[\\begin{array}{@{}l@{}}\r\n%%           \\lstinline[\r\n%%             language=Python,\r\n%%             style={python-idle-code},\r\n%%             basicstyle=\\small\r\n%%           ]|(3904, \"A\"),|\\\\\r\n%%           \\lstinline[\r\n%%             language=Python,\r\n%%             style={python-idle-code},\r\n%%             basicstyle=\\small\r\n%%           ]|(8769, \"D\")|\r\n%%         \\end{array}\\right]\r\n%%       \\end{math}\r\n%%     \\end{tabularx}\r\n%%   \\end{table}\r\n%% \\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Associative Arrays}{Expected Runtime}\r\n  \\begin{columns}\r\n    \\begin{column}{0.5\\linewidth}\r\n      {\\color{MainA}Best case}:\r\n      \\begin{itemize}\r\n        \\item\r\n          We have {\\color{MainA}$n$} keys which are equally distributed over {\\color{MainA}$m$} buckets\r\n        \\item\r\n          We have {\\color{MainA}$\\approx \\frac{n}{m}$} pairs per bucket\r\n        \\item\r\n          The runtime for searching is nearly {\\color{MainA}$\\mathcal{O}(1)$}\r\n          if \\textbf{not} {\\color{MainA}$n \\gg m$}\r\n      \\end{itemize}\r\n    \\end{column}\r\n    \\begin{column}{0.5\\linewidth}\r\n      \\begin{center}\r\n        \\textbf{Best case}\\\\\r\n        \\textbf{($m = 5, \\, n = 10$)}\\\\[1em]\r\n        \\includegraphics[height=0.4\\textheight]{Images/hash-uniform.pdf}\r\n%        \\label{tab:hash_table:runtime_best_case}\r\n        %  \\begin{tabularx}{\\textwidth}{c|c}\r\n        %   {} & Bucket\\\\\r\n        %   \\midrule\r\n        %   0 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$)]|\\\\\r\n        %   1 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$)]|\\\\\r\n        %   2 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$)]|\\\\\r\n        %   3 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$)]|\\\\\r\n        %   4 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$)]|\r\n        % \\end{tabularx}\r\n      \\end{center}\r\n    \\end{column}\r\n  
\\end{columns}\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Associative Arrays}{Expected Runtime}\r\n  \\begin{columns}\r\n    \\begin{column}{0.4\\linewidth}\r\n      {\\color{MainA}Worst case}:\r\n      \\begin{itemize}\r\n        \\item\r\n          All {\\color{MainA}$n$} keys are mapped onto the same bucket\r\n        \\item\r\n          The runtime is {\\color{MainA}$\\Theta(n)$} for searching\r\n      \\end{itemize}\r\n    \\end{column}\r\n    \\begin{column}{0.5\\linewidth}\r\n      \\begin{center}\r\n        \\textbf{Worst case}\\\\\r\n        \\textbf{($m = 5, \\, n = 10$)}\\\\[1em]\r\n        \\includegraphics[height=0.4\\textheight]{Images/hash-extreme.pdf}\r\n        \\label{tab:hash_table:runtime_worst_case}\r\n        % \\begin{tabularx}{\\textwidth}{c|c}\r\n        %   {} & Bucket\\\\\r\n        %   \\midrule\r\n        %   0 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small\r\n        %   ]|[]|\\\\\r\n        %   1 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small\r\n        %   ]|[]|\\\\\r\n        %   2 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small,\r\n        %     mathescape\r\n        %   ]|[($\\ldots$), ($\\ldots$), $\\;\\ldots\\;$, ($\\ldots$)]|\\\\\r\n        %   3 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small\r\n        %   ]|[]|\\\\\r\n        %   4 & \\lstinline[\r\n        %     language=Python,\r\n        %     style={python-idle-code},\r\n        %     basicstyle=\\small\r\n        %   ]|[]|\r\n        % \\end{tabularx}\r\n      \\end{center}\r\n    \\end{column}\r\n  \\end{columns}\r\n\\end{frame}\r\n\r\n", "meta": {"hexsha": "f4f7573b005e09b84b37c908dba41b21e967e9c8", "size": 11348, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-5/Chapter/eng/020_HashMap.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-5/Chapter/eng/020_HashMap.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-5/Chapter/eng/020_HashMap.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 31.8764044944, "max_line_length": 104, "alphanum_fraction": 0.4864293268, "num_tokens": 3605, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718434873426302, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.5546424065982797}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\subsubsection{The basic concept of Fully Convolutional Network}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe main purpose of such approach is to automatically perform feature extraction by training a model using full wavefield images, hence, it will learn by itself to recognise the patterns and detect the delamination and localise it.\nIn our work we are using the fully convolutional network (FCN)~\\cite{long2015fully}, which aims to perform pixel-wise segmentation by classifying every pixel of the input image as damaged or not. \n\nThe idea behind FCN is to stack a group of convolutional layers in an encoder-decoder style. \nThe encoder is capable for downsampling the input image through convolutions with strides, consequently, resulting in a compressed feature representation of the input image, and the decoder is capable to upsample the image with compressed features applying techniques like transposed convolution with strides and upsampling with interpolation (e.g. bilinear or nearest).\n\n\nIn order to reduce overfitting in the model, some techniques were applied such as adding dropouts to layers and batch normalization.\n\nFor the output layer, we have applied two activation functions in separate experiments, the first one is softmax and the second one is sigmoid. \nThe softmax function calculates the probability of the damage occurrence and the healthy state for every single pixel, hence, the summation of the two probabilities must equal one. Eq.~(\\ref{softmax}) illustrates the softmax, where \\(P(x)_{i}\\) is the probability of each target class \\(x_{j}\\) over all possible target classes \\(x_{j}\\), C in our case are two classes  (damaged and undamaged).\nTo predict the output label of the detection (\\(y_{pred}\\)) which represent the probability of damaged and undamaged, we applied the \\(\\argmax\\) function to select the maximum probability of the softmax activation function.\n\t\\begin{equation}\n\t\tP(x)_{i} = \\frac{e^{x_{i}}}{\\sum_{j}^{C} e^{x_{j}}}\n\t\t\\label{softmax}\n\t\\end{equation} \n\t\\begin{equation}\n\t\ty_{pred} = \\argmax_{i}\\left( P(x)_{i} \\right)\n\t\t\\label{argmax}\n\t\\end{equation}\n\nWhen using sigmoid in the output layer, it produces a vector of values between (\\(0\\) and \\(1\\)) indicating the damage weight for each pixel. \nLow values indicate low damage probability and high output values indicate high damage probability. 
Eq.~(\\ref{sigmoid}) illustrates the sigmoid function, where \\(z\\) is the summation of adjustable weights \\(\\{w_0,w_1,\\ldots,w_n \\}\\) multiplied by input variables (from the previous layer) \\(\\{x_0,x_1,\\ldots,x_n\\}\\) plus a bias \\(b\\), as shown in Eq.~(\\ref{z}).\t\n\t\\begin{equation}\n\t\t\\sigma(z) = \\frac{1}{1+e^{-z}}\n\t\t\\label{sigmoid}\n\t\\end{equation}\n\t\\begin{equation}\n\t\tz= \\sum_{i=0}^{n}  w_i\\, x_i +b\n\t\t\\label{z}\n\t\\end{equation}\nSelecting the loss function is a crucial task in deep learning since it measures how well the model predicts.\nWe have applied two types of losses based on the function used in the final activation layer: a binary cross-entropy (BCE) loss function applied with a sigmoid activation function in the output layer, and a categorical cross-entropy (CCE) loss function with a softmax activation in the output layer, which is also called the \\enquote{softmax loss function}.\nEq.~(\\ref{BCE}) illustrates the BCE, where \\(\\hat{Y}\\) represents the predicted vector values and \\(Y\\) represents the ground truth vector values; when \\(\\hat{Y} \\approx Y\\), the BCE is almost \\(0\\), meaning that the model was able to predict the output, so the aim is to reduce the loss function to its minimum value.\n\t\\begin{equation}\n\t\tBCE = -\\left[ (1-Y)\\log(1-\\hat{Y})+Y\\log(\\hat{Y}) \\right]\n\t\t\\label{BCE}\n\t\\end{equation}\nEq.~(\\ref{CCE}) illustrates the CCE, where \\( P(x)_{i}\\) is the softmax value of the target class. \n\t\\begin{equation}\n\tCCE = -\\log\\left( P(x)_{i} \\right)\n\t\\label{CCE}\n\t\\end{equation}\n\nMoreover, we have applied intersection over union (IoU) as our accuracy metric. \nIoU measures the overlap between the ground truth values and the predicted values. \nThe IoU metric is defined as:\n\\begin{equation}\nIoU = \\frac{Intersection}{Union} = \\frac{\\hat{Y} \\cap Y}{\\hat{Y} \\cup Y} \n\\label{IoU}\n\\end{equation}\nThe intersection between the predicted and the ground truth values is calculated by multiplying their values elementwise and then summing the resulting values.\nAs mentioned earlier, the ground truth values are either \\(0\\) or \\(1\\), thus only predicted values larger than \\(0\\) that are multiplied by a ground truth label of \\(1\\) are counted; the remaining products equal \\(0\\). \nThe union is calculated by summing all values in both the predicted and the ground truth vectors, then subtracting the intersection from their sum.\nOur main goal is to maximize the IoU accuracy metric, since the higher the IoU, the higher the accuracy of the predicted delamination in terms of location, shape and size.\n\t\nFurthermore, during training our focus is to minimize the loss and maximize the accuracy metric, which turns training into an optimization problem. \nThe optimizer is responsible for updating the learnable parameters of the model, such as filter weights and biases, so that the overall loss is minimized and the accuracy is maximized.\nIn the proposed approach, the Adam optimizer was applied, which can be seen as a combination of RMSprop and stochastic gradient descent (SGD) with momentum~\\cite{Kingma2015}. 
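\n\nAs a minimal illustration of the IoU computation described above (a NumPy sketch with hypothetical array names, not the exact code used in our experiments), for a predicted mask and a binary ground truth mask flattened into vectors:\n\\begin{verbatim}\nimport numpy as np\n\ndef iou(y_pred, y_true):\n    # y_true is binary, so only pixels with ground truth label 1\n    # contribute to the intersection\n    intersection = np.sum(y_pred * y_true)\n    union = np.sum(y_pred) + np.sum(y_true) - intersection\n    return intersection / union\n\\end{verbatim}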
\n\nIn the next subsection, we are going to present an FCN model for pixel-wise semantic segmentation in order to detect and localise delaminations.\n", "meta": {"hexsha": "33beb2c6793a6f981febba7afeab439fc8987131", "size": 5579, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/journal_papers/Paper_Article_R1/section_fully_convolutional_networks_R1.tex", "max_stars_repo_name": "IFFM-PAS-MISD/aidd", "max_stars_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_stars_repo_licenses": ["RSA-MD"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-03T05:36:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-03T05:36:07.000Z", "max_issues_repo_path": "reports/journal_papers/Paper_final/section_fully_convolutional_networks_R1.tex", "max_issues_repo_name": "IFFM-PAS-MISD/aidd", "max_issues_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_issues_repo_licenses": ["RSA-MD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/journal_papers/Paper_final/section_fully_convolutional_networks_R1.tex", "max_forks_repo_name": "IFFM-PAS-MISD/aidd", "max_forks_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_forks_repo_licenses": ["RSA-MD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.8307692308, "max_line_length": 394, "alphanum_fraction": 0.7495967019, "num_tokens": 1365, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079209, "lm_q2_score": 0.6926419767901475, "lm_q1q2_score": 0.55459289772019}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n%\\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}\n%\\institute{Rice University}\n%\\faculty{Faculty of Whatever Sciences}\n%\\department{Department of Mathematics}\n%\\title{Class Notes}\n%\\subtitle{Based on MATH xxx}\n%\\author{\\textit{Author}\\\\Gabriel \\textsc{Gress}}\n%\\supervisor{Linus \\textsc{Torvalds}}\n%\\context{Well, I was bored...}\n%\\date{\\today}\n\n%\\makeindex\n\n\\begin{document}\n\n% \\maketitle\n\n% Notes taken on 06/03/21\n\n\\section{Group Representations and Free Groups}\n\\label{sec:group_representations_and_free_groups}\nWe revisit group representations by introducing some new concepts to incorporate, and see how that allows us to expand our theory.\n\n\\begin{defn}[Commutator]\n\tLet \\(x,y \\in G\\) be elements of a group, and let \\(A,B \\subset G\\) be nonempty subsetf of \\(G\\). The \\textbf{commutator of \\(x\\) and \\(y\\)} is denoted by\n\t\\begin{align*}\n\t\t[x,y] = x ^{-1}y^{-1}xy\n\t\\end{align*}\n\tand the group generated by commutators of elements from \\(A,B\\) is denoted by\n\t\\begin{align*}\n\t\t[A,B] = \\langle [a,b] \\mid a \\in A, b \\in B \\rangle .\n\t\\end{align*}\n\tWe can also define a subgroup of \\(G\\) by the group generated by commutators of elements of \\(G\\):\n\t\\begin{align*}\n\t\tG' = \\langle [x,y] \\mid x,y \\in G \\rangle \n\t\\end{align*}\n\tWe call this the \\textbf{commutator subgroup} of \\(G\\).\n\\end{defn}\nThis terminology arises because the commutator of \\(x,y\\) is 1 if and only if \\(x\\) and \\(y \\) commute.\n\n\\begin{prop}[Properties of commutators]\n\tLet \\(x,y \\in G\\) be elements of a group and let \\(H\\leq G\\). Then\n\t\\begin{itemize}\n\t\t\\item \\(xy = yx[x,y]\\)\n\t\t\\item \\(H \\triangleleft G\\) if and only if \\([H,G] \\leq H\\) \n\t\t\\item \\(\\sigma [x,y] = [\\sigma (x),\\sigma (y)]\\) for any automorphism \\(\\sigma \\) of \\(G\\). Hence, \\(G' \\textrm{char}G\\), and \\(G / G'\\) is abelian.\n\t\t\\item If \\(H \\triangleleft G\\) and \\(G/H\\) is abelian, then \\(G' \\leq H\\). Conversely, if \\(G' \\leq H\\), then \\(H \\triangleleft G\\) and \\(G / H\\) is abelian.\n\t\t\\item If \\(\\varphi :G\\to H\\) is a homomorphism of \\(G\\) into \\(H\\) and \\(H\\) is abelian, then \\(G' \\leq \\textrm{Ker}\\varphi \\) and the following diagram commutes:\n\\begin{center}\n\t\t\t\\begin{tikzpicture}\n  \\matrix (m)\n    [\n      matrix of math nodes,\n      row sep    = 3em,\n      column sep = 4em\n    ]\n    {\n\t    G & G / G' \\\\\n\t     & H            \\\\\n    };\n  \\path\n    (m-1-2) edge [->] node {} (m-2-2)\n    (m-1-1.east |- m-1-2)\n      edge [->] node {} (m-1-2)\n      (m-1-1) edge [->] node [below] {$\\varphi$} (m-2-2);\n\\end{tikzpicture}\n\\end{center}\n\t\\end{itemize}\n\\end{prop}\nThe way to think about this is that by passing to the quotient by the commutator subgroup of \\(G\\), we collapse all commutators to identity. Hence, all elements in the quotient group commute. This is why we have such a strong property in that, if \\(G' \\leq H\\), then \\(G / H\\) must be abelian.\\\\\n\nOne word of caution-- there can be elements of the commutator subgroup that \\textit{cannot} be written as a single commutator \\([x,y]\\) for any \\(x,y\\). In other words, \\(G'\\) is not just the set of single commutators, but is the group generated by elements of that form.\n\n\\begin{prop}\n\tLet \\(H,K \\leq G\\) be subgroups. 
The number of distinct ways of writing each element of the set \\(HK\\) in the form \\(hk\\), for some \\(h \\in H\\), \\(k \\in K\\), is \\(\\left| H \\cap K \\right| \\).\\\\\n\n\tIf \\(H\\cap K = 1\\), then each element of \\(HK\\) can be written uniquely as a product \\(hk\\) for some \\(h \\in H\\), \\(k \\in K\\).\n\\end{prop}\n\n\\begin{thm}\n\tLet \\(H,K\\leq G\\) be subgroups of \\(G\\) such that \\(H,K \\triangleleft G\\) and \\(H\\cap K = 1\\). Then\n\t\\begin{align*}\n\t\tHK \\cong H\\times K\n\t\\end{align*}\n\\end{thm}\n\n\\subsection{Free Groups}\n\\label{sub:free_groups}\n\nThe idea of the free group is to define a group \\(F(S)\\) to be generated by some set \\(S\\) with no relations on any of the elements of \\(S\\). For example, if \\(S = \\left\\{ a,b \\right\\} \\), then some elements of \\(F(S)\\) would be of the form \\(a,aa,ab,abab,bab\\), as well as the inverses of these elements. We call elements of a free group \\textbf{words}. Then we can multiply elements in the free group simply by concatenation. Our goal will be to define this formally and show it indeed satisfies the necessary properties.\n\n\\begin{general}[Construction of Free Groups]\n\tLet \\(S\\) be a set, and let \\(S^{-1}\\) be a set disjoint from \\(S\\) such that there is a bijection from \\(S\\) to \\(S^{-1}\\). We denote the corresponding element for \\(s \\in S\\) to be \\(s\\mapsto s^{-1}\\in S^{-1}\\), and furthermore we denote \\((s^{-1})^{-1} = s\\). Finally, we add a third singleton set disjoint from \\(S,S^{-1}\\) and call it \\(\\left\\{ 1 \\right\\} \\), and define it so \\(1^{-1} = 1\\). We also define that for any \\(x \\in S \\cup S^{-1}\\cup \\left\\{ 1 \\right\\} \\), \\(x^{1} = x\\).\\\\\n\n\tA \\textbf{word} on \\(S\\) is a sequence \\((s_1,s_2,s_3,\\ldots)\\) where \\(s_i \\in S\\cup S^{-1} \\cup \\left\\{ 1 \\right\\} \\), and \\(s_i = 1\\) for all \\(i\\geq N\\) for some \\(N\\) (so that words are formally \"infinite\", but eventually constant). In order to get uniqueness of words, we say a word is \\textbf{reduced} if\n\t\\begin{align*}\n\t\ts_{i+1} \\neq s_{i}^{-1} \\quad \\forall i, s_i \\neq 1\\\\\n\t\ts_k = 1 \\implies s_i = 1 \\; \\forall i \\geq k\n\t\\end{align*}\n\tWe refer to the special word given by\n\t\\begin{align*}\n\t\t(1,1,1,\\ldots)\n\t\\end{align*}\n\tas the \\textbf{empty word} and denote it by \\(1\\). Let \\(F(S)\\) be the set of reduced words on \\(S\\), and embed \\(S\\) into \\(F(S)\\) by\n\t\\begin{align*}\n\t\ts \\mapsto (s,1,1,1,\\ldots)\n\t\\end{align*}\n\tHence we identify \\(S\\) with its image and consider \\(S\\subset F(S)\\). Notice that if \\(S = \\emptyset\\), \\(F(S) = \\left\\{ 1 \\right\\} \\).\\\\\n\n\tNow we simply introduce a binary operation on \\(F(S)\\), so that two words in \\(F(S)\\) are concatenated, then reduced to their reduced word form. We leave the details of defining this binary operation to the reader, but one can check that this operation is well-defined and satisfies all the properties of a group operation.\n\\end{general}\n\n\\begin{thm}\n\t\\(F(S)\\) is a group under the binary operation of word concatenation with reduction.\n\\end{thm}\nFurthermore, free groups satisfy a special kind of universal property.\n\\begin{thm}\n\tLet \\(G\\) be a group, \\(S\\) a set, and \\(\\varphi :S\\to G\\) a set map. 
There is a unique group homomorphism \\(\\Phi :F(S) \\to G\\) such that the following diagram commutes:\n\\begin{center}\n\t\t\t\\begin{tikzpicture}\n  \\matrix (m)\n    [\n      matrix of math nodes,\n      row sep    = 3em,\n      column sep = 4em\n    ]\n    {\n\t    S & F(S) \\\\\n\t     & G            \\\\\n    };\n  \\path\n\t  (m-1-2) edge [->] node [right] {\\(\\Phi \\)} (m-2-2)\n    (m-1-1.east |- m-1-2)\n    edge [->] node [above] {inclusion} (m-1-2)\n      (m-1-1) edge [->] node [below] {$\\varphi$} (m-2-2);\n\\end{tikzpicture}\n\\end{center}\n\n\\end{thm}\nThis further shows that \\(F(S)\\) is unique up to a unique isomorphism, which is the identity map on the set \\(S\\).\n\n\\begin{defn}[Free Group]\n\tThe group \\(F(S)\\) is called the \\textbf{free group} on the set \\(S\\). A group \\(F\\) is a \\textbf{free group} if there is some set \\(S\\) such that \\(F = F(S)\\), in which case we call \\(S\\) a set of \\textbf{free generators} of \\(F\\). The cardinality of \\(S\\) is called the \\textbf{rank} of the free group.\n\\end{defn}\n\n\\begin{thm}\n\tSubgroups of a free group are free.\n\\end{thm}\nFurthermore, if \\(G\\leq F\\) are free and \\([F:G] = m\\), then\n\\begin{align*}\n\t\\textrm{rank}(G) = 1 + m(\\textrm{rank}(F)-1)\n\\end{align*}\nProving this requires a lot of other tools, such as covering spaces.\n\n\\subsection{Presentations}\n\\label{sub:presentations}\n\nNotice that if we take \\(S = G\\), then we can view \\(G\\) as a homomorphic image of the free group \\(F(G)\\). Moreover, if \\(G= \\langle S \\rangle \\), there is a unique surjective homomorphism from  \\(F(S)\\) onto \\(G\\) which is the identity on \\(S\\). This allows us to give a more powerful construction of presentations, generators, and relations.\n\n\\begin{defn}\n\tA subset \\(S\\subset G\\) \\textbf{generates \\(G\\)} by \\(G = \\langle S \\rangle \\) if and only if the map \\(\\pi :F(S) \\to G\\) which extends the identity map of \\(S\\) to \\(G\\) is surjective.\n\\end{defn}\nThis is distinct but equivalent to our earlier notion for subsets generating a group. However, it is more flexible, so we will use this from here on out.\n\n\\begin{defn}[Presentations, Generators, and Relations]\n\tLet \\(S\\subset G\\) be a subset of \\(G\\) such that \\(G = \\langle S \\rangle \\). A \\textbf{presentation} for \\(G\\) is a pair \\((S,R)\\), where \\(R\\) is a set of words in \\(F(S)\\) such that\n\t\\begin{align*}\n\t\t\\textrm{ncl}_{F(S)}(\\langle R \\rangle ) = \\textrm{Ker}(\\pi )\n\t\\end{align*}\n\twhere \\(\\textrm{ncl}\\) denotes the normal closure (the smallest normal subgroup containing \\(\\langle R \\rangle \\)). The elements of \\(S\\) are called \\textbf{generators}, and the elements of \\(R\\) are called \\textbf{relations} of \\(G\\).\\\\\n\n\tWe say \\(G\\) is \\textbf{finitely generated} if there is a presentation \\((S,R)\\) such that \\(S\\) is finite. Furthermore, \\(G\\) is \\textbf{finitely presented} if \\(R\\) is also finite.\n\\end{defn}\nA word of caution-- the kernel of the map \\(F(S) \\to G\\) is \\textit{not} \\(\\langle R \\rangle \\), but instead its normal closure: the subgroup generated by all conjugates of \\(\\langle R \\rangle \\) (including \\(\\langle R \\rangle \\) itself). 
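For instance, \\(\\mathbb{Z}\\) is the free group on one generator, with presentation \\((\\left\\{ a \\right\\} , \\emptyset)\\), while \\(\\mathbb{Z} / n\\mathbb{Z}\\) has presentation \\((\\left\\{ a \\right\\} , \\left\\{ a^{n} \\right\\} )\\): here \\(F(\\left\\{ a \\right\\} ) \\cong \\mathbb{Z}\\) is abelian, so the normal closure of \\(\\langle a^{n} \\rangle \\) is \\(\\langle a^{n} \\rangle \\) itself, and the quotient is \\(\\mathbb{Z} / n\\mathbb{Z}\\). 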
Furthermore, even if \\(S\\) is fixed, a group will have many different presentations.\\\\\n\nFinally, often when writing relations, if we have \\(w_1w_2^{-1} = 1\\), we might instead write \\(w_1=w_2\\), or vice versa.\n\n\\subsection{Applying presentations to find homomorphisms and automorphisms}\n\\label{sub:applying_presentations_to_find_homomorphisms_and_automorphisms}\n\nSuppose \\(G\\) is presented by \\((\\langle a,b \\rangle , \\langle r_1,\\ldots,r_k \\rangle )\\). Then if \\(a',b' \\in H\\) are elements that satisfy \\(r_1,\\ldots,r_k\\), then there is a homomorphism from \\(G\\) into \\(H\\). If \\(\\pi :F(\\left\\{ a,b \\right\\} )\\to G\\) is the presentation homomorphism, we can define\n\\begin{align*}\n\t\\pi ':F(\\left\\{ a,b \\right\\} ) \\to H\\\\\n\t\\pi'(a) = a', \\; \\pi'(b) = b'.\n\\end{align*}\nThis works because \\(\\textrm{Ker}\\pi \\leq \\textrm{Ker}\\pi'\\), and so \\(\\pi '\\) factors through \\(\\textrm{Ker}\\pi \\) and we get\n\\begin{align*}\n\tG \\cong F(\\left\\{ a,b \\right\\} ) / \\textrm{Ker}\\pi \\to H\n\\end{align*}\nMoreover, if \\(\\langle a',b' \\rangle = H = G\\), then this homomorphism is an automorphism of \\(G\\)(!!). In the other direction, any automorphism on a presentation must send a set of generators to another set of generators satisfying the same relations.\n\n\\begin{exmp}[Dihedral presentation]\n\tConsider \\(D_8 = \\langle a,b \\mid a^2 = b^{4} =1, aba = b^{-1} \\rangle \\). Any pair of elements \\(a',b'\\) that are of order 2 and 4 (and \\(a'\\) is noncentral) must satisfy the same relations. There are four noncentral elements of order 2, and two elements of order 4, so \\(D_8\\) has 8 automorphisms.\n\\end{exmp}\nSimilarly, any distinct pair of elements of order \\(4\\) in \\(Q_8\\) that are not inverses of each other necessarily generate \\(Q_8\\) and satisfy its relations. There are \\(24\\) such pairs, so \\(\\left| \\textrm{Aut}(Q_8) \\right| =24\\). As one can see, free groups are an incredibly useful tool to classify these maps.\n\n% \\printindex\n\\end{document}\n", "meta": {"hexsha": "95e561264ea247be8b09dc609689838da0624c17", "size": 11182, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Group Theory/Notes/source/GroupRepresentations2.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.631840796, "max_line_length": 523, "alphanum_fraction": 0.6543552137, "num_tokens": 3705, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.7931059585194573, "lm_q1q2_score": 0.5545828578041345}}
{"text": "\\section{Vector representation of words}", "meta": {"hexsha": "de0ae1c1cf7ed03932238a83ab5cd0f9cc27a387", "size": 40, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "project/checkpoint/vector.tex", "max_stars_repo_name": "tamnguyenthe/CS5890Project", "max_stars_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "project/checkpoint/vector.tex", "max_issues_repo_name": "tamnguyenthe/CS5890Project", "max_issues_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project/checkpoint/vector.tex", "max_forks_repo_name": "tamnguyenthe/CS5890Project", "max_forks_repo_head_hexsha": "74446d536074d369af73bf6d88e38f917c7a3ee6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.0, "max_line_length": 40, "alphanum_fraction": 0.85, "num_tokens": 8, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059609645724, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5545828545432122}}
{"text": "% !TeX spellcheck = en_US\n\\documentclass[12pt,a4paper,DIV=calc]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T2A]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{siunitx}\n\\usepackage{textcomp}\n\\usepackage[english]{babel}\n\\usepackage{microtype}\n\\usepackage{enumitem}\n\\linespread{1.05}\n\\KOMAoptions{DIV=last}\n\\usepackage{mathtools}\n\\usepackage{physics}\n\\usepackage{tikz}\n\\usetikzlibrary{calc}\n\\usepackage[colorlinks,linkcolor=blue,citecolor=blue,urlcolor=blue]{hyperref}\n\\frenchspacing\n\n\\begin{document}\n\nThis document describes the mathematical formulas behind the \\textsc{Quill} PIC code and how the formulas used in the lecture notes \\cite{PukhovLectures} and the paper \\cite{PukhovCERN} by Prof. Pukhov translate to the formulas used here.\n\n\\tableofcontents\n\n\\newpage\n\n\\section{Grid in Quill} \\label{sec:grid}\n\nUnlike the grid described in \\cite{PukhovLectures, PukhovCERN}, where the charge was in the centers of the cells, in Quill, the charge is in the nodes.\n\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}\n    \\def\\len{5};\n    \\def\\ldx{2};\n    \\def\\ldy{1};\n    \\coordinate (A) at (0,0);\n    \\coordinate (B) at (\\len,0);\n    \\coordinate (C) at ({\\len+\\ldx},\\ldy);\n    \\coordinate (D) at (\\ldx,\\ldy);\n    \\coordinate (A1) at (0,\\len);\n    \\coordinate (B1) at (\\len,\\len);\n    \\coordinate (C1) at ({\\len+\\ldx},{\\len+\\ldy});\n    \\coordinate (D1) at (\\ldx,{\\len+\\ldy});\n    \\draw (A) -- (B);\n    \\draw (B) -- (C);\n    \\draw [dashed] (C) -- (D);\n    \\draw [dashed] (A) -- (D);\n    \\draw (A) -- (A1);\n    \\draw (B) -- (B1);\n    \\draw (C) -- (C1);\n    \\draw [dashed] (D) -- (D1);\n    \\draw (A1) -- (B1);\n    \\draw (B1) -- (C1);\n    \\draw (C1) -- (D1);\n    \\draw (A1) -- (D1);\n    \\node at (A) {\\textbullet};\n    \\node[anchor=north east] at (A) {$\\rho^{i,j,k}$};\n    \\node at (C1) {\\textbullet};\n    \\node[anchor=south west] at (C1) {$\\rho^{i+1,j+1,k+1}$};\n    \\coordinate (AB) at ($(A)!0.5!(B)$);\n    \\node at (AB) {\\textbullet};\n    \\node[anchor=north] at (AB) {$E_x^{i,j,k}$};\n    \\coordinate (AC) at ($(A)!0.5!(C)$);\n    \\node at (AC) {\\textbullet};\n    \\node[anchor=west] at (AC) {$B_z^{i,j,k}$};\n    \\coordinate (AD) at ($(A)!0.5!(D)$);\n    \\node at (AD) {\\textbullet};\n    \\node[anchor=south] at (AD) {$E_y^{i,j,k}$};\n    \\coordinate (AA1) at ($(A)!0.5!(A1)$);\n    \\node at (AA1) {\\textbullet};\n    \\node[anchor=east] at (AA1) {$E_z^{i,j,k}$};\n    \\coordinate (AD1) at ($(A)!0.5!(D1)$);\n    \\node at (AD1) {\\textbullet};\n    \\node[anchor=south] at (AD1) {$B_x^{i,j,k}$};\n    \\coordinate (AB1) at ($(A)!0.5!(B1)$);\n    \\node at (AB1) {\\textbullet};\n    \\node[anchor=west] at (AB1) {$B_y^{i,j,k}$};\n\\end{tikzpicture}\n\\caption{Depiction of one cell of the grid used in Quill.}\n\\label{fig:cell}\n\\end{figure}\n\n\\section{Indices used in Quill}\\label{sec:indices}\nThe papers \\cite{PukhovLectures, PukhovCERN} use half-integer indices, which is impossible to do in the code.\nBelow is the table describing how to translate indices from Quill to the indices used in the papers for different fields.\n\n\\begin{tabular}{ l | l l | l l }\n    Field & Quill &  Papers & Papers & Quill\\\\\n    \\hline\n    $\\rho$ & $i,j,k$ & $\\lbrace i,j,k\\rbrace-1/2$ & $\\lbrace i,j,k\\rbrace + 1/2$ & $\\lbrace i,j,k\\rbrace+1$ \\\\\n    &&&&\\\\\n    $j_x$, $E_x$ & $i,j,k$ & $i,j-1/2,k-1/2$ & $i,j+1/2,k+1/2$ & 
$i,j+1,k+1$ \\\\\n    $j_y$, $E_y$ & $i,j,k$ & $i-1/2,j,k-1/2$ & $i+1/2,j,k+1/2$ & $i+1,j,k+1$ \\\\\n    $j_z$, $E_z$ & $i,j,k$ & $i-1/2,j-1/2,k$ & $i+1/2,j+1/2,k$ & $i+1,j+1,k$ \\\\\n    &&&&\\\\\n    $B_x$ & $i,j,k$ & $i-1/2,j,k$ & $i+1/2,j,k$ & $i+1,j,k$\\\\\n    $B_y$ & $i,j,k$ & $i,j-1/2,k$ & $i,j+1/2,k$ & $i,j+1,k$\\\\\n    $B_z$ & $i,j,k$ & $i,j,k-1/2$ & $i,j,k+1/2$ & $i,j,k+1$\n\\end{tabular}\n\n\\section{Maxwell solver}\n\n\\subsection{The NDFX solver}\n\n\\subsubsection{Scheme}\n\nUsing the table from Sec.~\\ref{sec:indices}, we write the formulas for the NDFX solver with indices used in the PIC code.\nAs the time scheme is of little interest here, the indices will be in superscripts. \n\\begin{align}\n    \\frac{\\Delta B_x^{i,j,k}}{\\Delta t} = &-\\frac{1}{\\Delta y}\\big[b_z \\left(E_z^{i,j+1,k} - E_z^{i,j,k} \\right) + \\nonumber \\\\\n    &+a_z \\left(E_z^{i,j+1,k+1} - E_z^{i,j,k+1} + E_z^{i,j+1,k-1} - E_z^{i,j,k-1}\\right)\\big] + \\nonumber \\\\\n    &+\\frac{1}{\\Delta z} \\big[b_y \\left(E_y^{i,j,k+1} - E_y^{i,j,k}\\right) + \\nonumber \\\\\n    &+a_y \\left(E_y^{i,j+1,k+1}-E_y^{i,j+1,k} + E_y^{i,j-1,k+1} - E_y^{i,j-1,k}\\right)\n\\end{align}\n\\begin{align}\n    \\frac{\\Delta B_y^{i,j,k}}{\\Delta t} = &\\frac{1}{\\Delta x}\\big[b_z \\left(E_z^{i+1,j,k} - E_z^{i,j,k} \\right) + \\nonumber \\\\\n    &+a_z \\left(E_z^{i+1,j,k+1} - E_z^{i,j,k+1} + E_z^{i+1,j,k-1} - E_z^{i,j,k-1}\\right) - \\nonumber \\\\\n    &-\\frac{1}{\\Delta z} \\big[ b_x \\left(E_x^{i,j,k+1} - E_x^{i,j,k}\\right) + \\nonumber \\\\\n    &+a_x \\left(E_x^{i+1,j,k+1} - E_x^{i+1,j,k} + E_x^{i-1,j,k+1} - E_x^{i-1,j,k}\\right)\n\\end{align}\n\\begin{align}\n    \\frac{\\Delta B_z^{i,j,k}}{\\Delta t} = &\\frac{1}{\\Delta y}\\big[b_x\\left(E_x^{i,j+1,k} - E_x^{i,j,k} \\right) + \\nonumber \\\\\n    &+a_x \\left(E_x^{i+1,j+1,k} - E_x^{i+1,j,k} + E_x^{i-1,j+1,k} - E_x^{i-1,j,k}\\right) \\big] - \\nonumber \\\\\n    &-\\frac{1}{\\Delta x} \\big[ b_y \\left(E_y^{i+1,j,k} - E_y^{i,j,k}\\right) + \\nonumber \\\\\n    &+a_y \\left(E_y^{i+1,j+1,k} - E_y^{i,j+1,k} + E_y^{i+1,j-1,k} - E_y^{i,j-1,k} \\right) \\big]\n\\end{align}\n\\begin{align}\n    \\frac{\\Delta E_x^{i,j,k}}{\\Delta t} = &\\frac{1}{\\Delta y}\\big[b_z \\left(B_z^{i,j,k} - B_z^{i,j-1,k} \\right) + \\nonumber\\\\\n    &+ a_z \\left(B_z^{i,j,k+1} - B_z^{i,j-1,k+1} + B_z^{i,j,k-1} - B_z^{i,j-1,k-1}\\right) \\big] - \\nonumber\\\\\n    &- \\frac{1}{\\Delta z}\\big[ b_y \\left(B_y^{i,j,k}-B_y^{i,j,k-1}\\right) + \\nonumber\\\\\n    &+ a_y \\left(B_y^{i,j+1,k} - B_y^{i,j+1,k-1} + B_y^{i,j-1,k} - B_y^{i,j-1,k-1}\\right) \\big] - \\nonumber\\\\\n    &- j_x^{i,j,k}\n\\end{align}\n\\begin{align}\n    \\frac{\\Delta E_y^{i,j,k}}{\\Delta t} = &-\\frac{1}{\\Delta x}\\big[b_z\\left(B_z^{i,j,k} - B_z^{i-1,j,k}\\right) + \\nonumber \\\\\n    &+ a_z \\left(B_z^{i,j,k+1} - B_z^{i-1,j,k} + B_z^{i,j,k-1} - B_z^{i-1,j,k-1}\\right)\\big] + \\nonumber \\\\\n    &+ \\frac{1}{\\Delta z}\\big[ b_x \\left(B_x^{i,j,k} - B_x^{i,j,k-1}\\right) + \\nonumber \\\\\n    &+ a_x\\left(B_x^{i+1,j,k} - B_x^{i+1,j,k-1} + B_x^{i-1,j,k} - B_x^{i-1,j,k-1}\\right)\\big] - \\nonumber \\\\\n    &- j_y^{i,j,k}\n\\end{align}\n\\begin{align}\n    \\frac{\\Delta E_z^{i,j,k}}{\\Delta t} = &\\frac{1}{\\Delta x}\\big[b_y\\left(B_y^{i,j,k} - B_y^{i-1,j,k}\\right) + \\nonumber \\\\\n    &+ a_y \\left(B_y^{i,j+1,k} -B_y^{i-1,j+1,k} + B_y^{i,j-1,k} - B_y^{i-1,j-1,k} \\right)\\big] - \\nonumber \\\\\n    &- \\frac{1}{\\Delta y}\\big[ b_x\\left(B_x^{i,j,k} - B_x^{i,j-1,k}\\right) + \\nonumber \\\\\n    &+ a_x 
\\left(B_x^{i+1,j,k} - B_x^{i+1,j-1,k} + B_x^{i-1,j,k} - B_x^{i-1,j-1,k} \\right)\\big] - \\nonumber \\\\\n    &- j_z^{i,j,k}\n\\end{align}\n\nThe coefficients $b_\\alpha$ and $a_\\alpha$ must satisfy the condition \n\\begin{equation}\nb_z + 2 a_z = 1\n\\end{equation}\nfor the scheme to be valid.\n\n\\subsubsection{Dispersion}\n\\label{sec:maxwell:ndfx:dispersion}\n\nIn order to find the dispersion of this scheme, we consider the fields as plane waves,\n\\begin{align}\n    &E_x = E_{x,0} \\exp(i \\frac{k_x \\Delta x}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}),\\\\\n    &E_y = E_{y,0} \\exp(i \\frac{k_y \\Delta y}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}),\\\\\n    &E_z = E_{z,0} \\exp(i \\frac{k_z \\Delta z}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}),\\\\\n    &B_x = B_{x,0} \\exp(i \\frac{-\\omega \\Delta t + k_y \\Delta y + k_z \\Delta z}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}),\\\\\n    &B_y = B_{y,0} \\exp(i \\frac{-\\omega \\Delta t + k_x \\Delta x + k_z \\Delta z}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}),\\\\\n    &B_z = B_{z,0} \\exp(i \\frac{-\\omega \\Delta t + k_x \\Delta x + k_y \\Delta y}{2}) \\exp(-i \\omega t + i \\vb{k} \\vb{r}).\n\\end{align}\nHere, we have taken into account the fact that the field components are displaced both on the grid and in time with respect to each other.\n\nIf we use this plane wave, the scheme becomes\n\\begin{alignat}{2}\n    &-A_t B_{x,0} = &- A_y C_z E_{z,0} &+ A_z C_y E_{y,0},\\\\\n    &-A_t B_{y,0} = & A_x C_z E_{z,0} &- A_z C_x E_{x,0},\\\\\n    &-A_t B_{z,0} = & A_y C_x E_{x,0} &- A_x C_y E_{y,0},\\\\\n    &-A_t E_{x,0} = & A_y C_z B_{z,0} &- A_z C_y B_{y,0},\\\\\n    &-A_t E_{y,0} = & -A_x C_z B_{z,0} &+ A_z C_x B_{x,0},\\\\\n    &-A_t E_{z,0} = & A_x C_y B_{y,0} &- A_y C_x B_{x,0},\n\\end{alignat}\nwhere \n\\begin{align}\n    &A_t = \\frac{1}{\\Delta t} \\sin(\\frac{\\omega \\Delta t}{2}),\\\\\n    &A_\\alpha = \\frac{1}{\\Delta \\alpha} \\sin(\\frac{k_\\alpha \\Delta \\alpha}{2}),\\\\\n    &C_\\alpha = b_\\alpha + 2 a_\\alpha \\cos(k_\\alpha \\Delta \\alpha) = 1 - 4 a_\\alpha \\Delta \\alpha^2 A_\\alpha^2.\n\\end{align}\n\nThe equality of the determinant of the system to zero gives us the dispersion relation\n\\begin{equation}\n    A_t^2 = A_x^2 C_y C_z + A_y^2 C_x C_z + A_z^2 C_x C_y.\n\\end{equation}\n\n\\subsection{The FDTD solver}\n\n\\subsubsection{Scheme}\n\nThe simple FDTD solver is obtained from the NDFX solver by replacing $b_{x,y,z}$ with $1$ and $a_{x,y,z}$ with $0$ and is thus written as\n\n\\begin{align}\n    &\\frac{\\Delta B_x^{i,j,k}}{\\Delta t} =& -\\frac{1}{\\Delta y}\\left(E_z^{i,j+1,k} - E_z^{i,j,k} \\right) +\\frac{1}{\\Delta z} \\left(E_y^{i,j,k+1} - E_y^{i,j,k}\\right) \\\\\n    &\\frac{\\Delta B_y^{i,j,k}}{\\Delta t} =& \\frac{1}{\\Delta x}\\left(E_z^{i+1,j,k} - E_z^{i,j,k} \\right) - \\frac{1}{\\Delta z} \\left(E_x^{i,j,k+1} - E_x^{i,j,k}\\right) \\\\\n    &\\frac{\\Delta B_z^{i,j,k}}{\\Delta t} =& \\frac{1}{\\Delta y}\\left(E_x^{i,j+1,k} - E_x^{i,j,k} \\right) - \\frac{1}{\\Delta x} \\left(E_y^{i+1,j,k} - E_y^{i,j,k}\\right) \\\\\n    &\\frac{\\Delta E_x^{i,j,k}}{\\Delta t} = &\\frac{1}{\\Delta y}\\left(B_z^{i,j,k} - B_z^{i,j-1,k} \\right) - \\frac{1}{\\Delta z}\\left(B_y^{i,j,k}-B_y^{i,j,k-1}\\right) &- j_x^{i,j,k}\\\\\n    &\\frac{\\Delta E_y^{i,j,k}}{\\Delta t} =& -\\frac{1}{\\Delta x}\\left(B_z^{i,j,k} - B_z^{i-1,j,k}\\right) + \\frac{1}{\\Delta z}\\left(B_x^{i,j,k} - B_x^{i,j,k-1}\\right) &- j_y^{i,j,k}\\\\\n    &\\frac{\\Delta E_z^{i,j,k}}{\\Delta t} =& \\frac{1}{\\Delta x}\\left(B_y^{i,j,k} - 
B_y^{i-1,j,k}\\right) - \\frac{1}{\\Delta y}\\left(B_x^{i,j,k} - B_x^{i,j-1,k}\\right) &- j_z^{i,j,k}\n\\end{align}\n\n\\subsubsection{Dispersion}\n\\label{sec:maxwell:fdtd:dispersion}\n\nBy applying the dispersion relation in Sec.~\\ref{sec:maxwell:ndfx:dispersion} to the FDTD scheme, we get\n\\begin{multline}\n    \\frac{1}{\\Delta t^2}\\sin^2\\left(\\frac{\\omega \\Delta t}{2}\\right) \\\\ \n    = \\frac{1}{\\Delta x^2}\\sin^2\\left(\\frac{k_x \\Delta x}{2}\\right) + \\frac{1}{\\Delta y^2}\\sin^2\\left(\\frac{k_y \\Delta y}{2}\\right) + \\frac{1}{\\Delta z^2}\\sin^2\\left(\\frac{k_z \\Delta z}{2}\\right).\n\\end{multline}\nThe FDTD scheme is stable if\n\\begin{equation}\n    \\frac{1}{\\Delta t^2} \\geq \\frac{1}{\\Delta x^2} + \\frac{1}{\\Delta y^2} + \\frac{1}{\\Delta z^2}.\n\\end{equation}\n\n\\section{Interpolation of magnetic field on E-field positions}\n\nThe magnetic field needs to be interpolated to the positions of the electric field to be used in the particle pusher.\nThe simplest interpolation (averaging over the 8 closest points) leads to\n\\begin{multline}\n    \\hat{B}_x^{i,j,k} = \\frac{1}{8} \\big(B_x^{i,j,k} + B_x^{i+1,j,k} + B_x^{i,j-1,k} + B_x^{i,j,k-1} + \\\\\n    + B_x^{i+1,j-1,k} + B_x^{i+1,j,k-1} + B_x^{i,j-1,k-1} + B_x^{i+1,j-1,k-1} \\big),\n\\end{multline}\n\\begin{multline}\n    \\hat{B}_y^{i,j,k} = \\frac{1}{8} \\big(B_y^{i,j,k} + B_y^{i-1,j,k} + B_y^{i,j+1,k} + B_y^{i,j,k-1} + \\\\\n    + B_y^{i-1,j+1,k} + B_y^{i-1,j,k-1} + B_y^{i,j+1,k-1} + B_y^{i-1,j+1,k-1} \\big),\n\\end{multline}\n\\begin{multline}\n    \\hat{B}_z^{i,j,k} = \\frac{1}{8} \\big(B_z^{i,j,k} + B_z^{i-1,j,k} + B_z^{i,j-1,k} + B_z^{i,j,k+1} + \\\\\n    + B_z^{i-1,j-1,k} + B_z^{i-1,j,k+1} + B_z^{i,j-1,k+1} + B_z^{i-1,j-1,k+1}\\big).\n\\end{multline}\n\n\\section{Shape function}\nIn PIC methods, the distribution function $f(\\vb{r},\\vb{p},t)$ is represented as a set of macroparticles\n\\begin{equation}\n    f(\\vb{r},\\vb{p},t) = \\sum_n W_n S(\\vb{r} - \\vb{r}_n) \\delta(\\vb{p} - \\vb{p}_n).\n\\end{equation}\nIn Quill,\n\\begin{align}\n    &S(\\vb{r}) = S_x(x) S_y(y) S_z(z),\\\\\n    &S_\\alpha(\\alpha) = \\begin{cases}\n        1,&\\abs{\\alpha} < \\Delta \\alpha/2,\\\\\n        0,&\\abs{\\alpha} > \\Delta \\alpha/2.\n    \\end{cases}\n\\end{align}\nHere, $\\alpha$ can be $x,y,z$.\n\n\\section{Density deposition}\n\nThe density is calculated as\n\\begin{equation}\n    \\rho(\\vb{r},t) = \\int f(t, \\vb{r}, \\vb{p}) \\dd{\\vb{p}} = \\sum_n W_n S(\\vb{r} - \\vb{r}_n).\n\\end{equation}\nThe interpolation of this density on the grid is made using the volume integration\n\\begin{multline}\n    \\rho^{i,j,k} = \\frac{1}{\\Delta x \\Delta y \\Delta z} \\int_{x^i-\\Delta x/2}^{x^i+\\Delta x/2} \\int_{y^j-\\Delta y/2}^{y^j+\\Delta y/2} \\int_{z^k-\\Delta z/2}^{z^k+\\Delta z/2} \\rho(\\vb{r},t)\\dd{x}\\dd{y}\\dd{z} = \\\\ \n    =\\sum_n W_n S^{\\rho}(\\vb{r}^{i,j,k} - \\vb{r}_n),\n\\end{multline}\nwhere $\\vb{r}^{i,j,k} = (x^i, y^j, z^k) = (i \\Delta x, j \\Delta y, k \\Delta z)$,\n\\begin{align}\n    &S^\\rho(\\vb{r}) = S_x^\\rho(x) S_y^\\rho(y) S_z^\\rho(z),\\\\\n    &S_\\alpha^\\rho(\\alpha) = \\frac{1}{\\Delta \\alpha} \\int_{\\alpha-\\Delta \\alpha/2}^{\\alpha + \\Delta \\alpha/2} S_\\alpha(\\alpha')\\dd{\\alpha'} = \\begin{dcases}\n        1 - \\frac{\\abs{\\alpha}}{\\Delta \\alpha},& \\abs{\\alpha} < \\Delta\\alpha,\\\\\n        0,& \\abs{\\alpha} > \\Delta \\alpha.\n    \\end{dcases}\n\\end{align}\n\nHence, a single particle located in the cell shown in Fig.~\\ref{fig:cell} induces densities at 8 
points\n\\begin{equation}\n\\begin{aligned}\n    &\\rho^{i,j,k} = W b_x b_y b_z, && \\rho^{i,j,k+1} = W b_x b_y a_z,\\\\\n    &\\rho^{i,j+1,k} = W b_x a_y b_z, && \\rho^{i,j+1,k+1} = W b_x a_y a_z,\\\\\n    &\\rho^{i+1,j,k} = W a_x b_y b_z, && \\rho^{i+1,j,k+1} = W a_x b_y a_z,\\\\\n    &\\rho^{i+1,j+1,k} = W a_x a_y b_z, && \\rho^{i+1,j+1,k+1} = W a_x a_y a_z,\n\\end{aligned}\n\\end{equation}\nwhere $a_\\alpha = (\\alpha - \\alpha^{i,j,k}) / \\Delta \\alpha$, $b_\\alpha = 1 - a_\\alpha$.\nE.g., if the particle is located at $\\vb{r}^{i,j,k}$, then all $a_\\alpha = 0$, and the only non-zero density is $\\rho^{i,j,k}$.\n\n\\section{Currents deposition}\n\nCurrents are calculated so that the charge is preserved.\nFor each current position on the grid, a plane perpendicular to the current direction is taken, and the charge flow through a square with sides corresponding to the cell size is calculated.\nFor example, for $j_x$ we have\n\\begin{equation}\n    j_x^{i,j,k} \\delta t \\Delta y \\Delta z = \\int_t^{t+\\delta t} \\dd{t'} \\int_{y^j-\\Delta y/2}^{y^j + \\Delta y /2}\\int_{z^k-\\Delta z/2}^{z^k + \\Delta z /2} \\dd{y}\\dd{z} v_x \\sum_n W_n S(\\vb{r}-\\vb{r}_n),\n\\end{equation}\nwhere $\\vb{r} = (x^i+\\Delta x /2, y, z)$, $\\vb{r}_n = \\vb{r}_n(t')$.\nNow we can calculate the integrals in $y$ and $z$, getting\n\\begin{equation}\n    j_x^{i,j,k} \\delta t = \\sum_n W_n \\int_t^{t+\\delta t} \\dd{t'} v_x S_x(x^i + \\Delta x/2 - x_n) S^\\rho_y(y^j - y_n) S^\\rho_z(z^k - z_n).\n\\end{equation}\nSimilar expressions are obtained for $j_y$ and $j_z$.\n\nUp to this point, the trajectory $\\vb{r}_n(t)$ does not matter.\nFor simplicity, we will consider a single macroparticle which does not leave the $(i,j,k)$ cell.\nIn a typical scheme, the motion of a particle on a timestep is linear, so that $\\vb{r}(t') = \\vb{r}(t) + \\vb{v} (t'-t)$.\nIn this case, the particle induces currents at only 12 points.\nLet us rigorously calculate $j_x^{i,j,k}$, induced by such a particle.\nFirst, \n\\begin{equation}\n    S_x(x^i + \\Delta x/2 - x(t')) = 1,\n\\end{equation}\nbecause the particle never leaves the cell.\nAlso, $y^j - y(t') < 0$, $z^k - z(t') < 0$, so the answer is\n\\begin{multline}\n    j_x^{i,j,k} \\delta t = W \\int_0^{\\delta t} \\dd{t'} v_x \\left(1 - \\frac{y_0+v_y t' - y^j}{\\Delta y}\\right)\\left(1 - \\frac{z_0+v_z t' - z^k}{\\Delta z}\\right) = \\\\\n    = W \\int_0^{\\delta t}\\dd{t'} v_x \\left(b_y - \\frac{v_y (t'-\\delta t/2)}{\\Delta y}\\right)\\left(b_z - \\frac{v_z (t'-\\delta t/2)}{\\Delta z}\\right) = \\\\\n    = W \\delta x \\left( b_y b_z + \\frac{\\delta y\\delta z}{12\\Delta y \\Delta z}\\right),\n\\end{multline}\nwhere \n\\begin{equation}\n    b_y = 1 - \\frac{y_0 + \\delta y / 2 - y^j}{\\Delta y},\\quad b_z = 1 - \\frac{z_0 + \\delta z / 2 - z^k}{\\Delta z}.\n\\end{equation}\n\nNow we can write down all 12 currents generated by a macroparticle\n\\begin{equation}\n\\begin{aligned}\n    j_x^{i,j,k} \\delta t &= W \\delta x (b_y b_z + A_{yz}), & j_x^{i,j+1,k} \\delta t &= W \\delta x (a_y b_z - A_{yz}),\\\\\n    j_x^{i,j,k+1} \\delta t &= W \\delta x(b_y a_z - A_{yz}), & j_x^{i,j+1,k+1} \\delta t &= W \\delta x(a_y a_z + A_{yz}),\\\\\n    j_y^{i,j,k} \\delta t &= W \\delta y (b_x b_z + A_{xz}), & j_y^{i+1,j,k} \\delta t &= W \\delta y (a_x b_z - A_{xz}),\\\\\n    j_y^{i,j,k+1} \\delta t &= W \\delta y (b_x a_z - A_{xz}), & j_y^{i+1,j,k+1} \\delta t &= W \\delta y (a_x a_z + A_{xz}),\\\\\n    j_z^{i,j,k} \\delta t &= W \\delta z (b_x b_y + A_{xy}), & j_z^{i+1,j,k} \\delta t &= W \\delta z (a_x b_y - A_{xy}),\\\\\n    
j_z^{i,j+1,k} \\delta t &= W \\delta z(b_x a_y - A_{xy}), & j_z^{i+1,j+1,k} \\delta t &= W \\delta z(a_x a_y + A_{xy}),\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n    a_\\alpha = \\frac{\\alpha - \\alpha^{i,j,k} + 0.5 \\delta \\alpha}{\\Delta \\alpha},\n    \\quad b_\\alpha = 1 - a_\\alpha,\n    \\quad A_{\\alpha \\beta} = \\frac{\\delta\\alpha \\delta\\beta}{12\\Delta\\alpha\\Delta\\beta}.\n\\end{equation}\nIf a particle leaves the cell, its trajectory should be separated into several pieces.\n\n\\section{Interpolation of fields on particles}\nField values need to be interpolated from the grid to the particle positions.\nIn Quill, the formulas should be\n\\begin{align}\n    &E_x^p = E_x^{i,j+1,k+1} a_y a_z + E_x^{i,j,k+1} b_y a_z + E_x^{i,j+1,k} a_y b_z + E_x^{i,j,k} b_y b_z,\\\\\n    &E_y^p = E_y^{i+1,j,k+1} a_x a_z + E_y^{i,j,k+1} b_x a_z + E_y^{i+1,j,k} a_x b_z + E_y^{i,j,k} b_x b_z,\\\\\n    &E_z^p = E_z^{i+1,j+1,k} a_y a_x + E_z^{i+1,j,k} b_y a_x + E_z^{i,j+1,k} a_y b_x + E_z^{i,j,k} b_y b_x.\n\\end{align}\nHere, $b_{x,y,z} = 1 - a_{x,y,z}$.\n\n\\begin{thebibliography}{9}\n    \\bibitem{PukhovLectures}\n    A. Pukhov, Three-Dimensional Particle-in-Cell Simulations of Relativistic Laser-Plasma Interactions. Lecture Notes. (1999).\n    \n    \\bibitem{PukhovCERN}\n    A. Pukhov, Particle-In-Cell Codes for Plasma-based Particle Acceleration. \\href{https://e-publishing.cern.ch/index.php/CYR/article/view/220}{\\textit{CERN Yellow Rep.} \\textbf{1}, 181 (2016)}.\n\\end{thebibliography}\n\n\\end{document}", "meta": {"hexsha": "1374f0fa48e7be023e7b8746cbaf18be8a23fc60", "size": 17981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old_doc/quill.tex", "max_stars_repo_name": "agolovanov/quill", "max_stars_repo_head_hexsha": "18cf99cc8517f173765d4f56a90a6d53403e90f8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2021-03-30T12:14:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T02:03:06.000Z", "max_issues_repo_path": "old_doc/quill.tex", "max_issues_repo_name": "agolovanov/quill", "max_issues_repo_head_hexsha": "18cf99cc8517f173765d4f56a90a6d53403e90f8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-12-09T11:02:56.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-18T13:52:06.000Z", "max_forks_repo_path": "old_doc/quill.tex", "max_forks_repo_name": "agolovanov/quill", "max_forks_repo_head_hexsha": "18cf99cc8517f173765d4f56a90a6d53403e90f8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2021-02-15T18:27:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-09T10:58:28.000Z", "avg_line_length": 50.0863509749, "max_line_length": 238, "alphanum_fraction": 0.5848951671, "num_tokens": 7668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7931059511841119, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.5545828377628224}}
{"text": "\\documentclass{amsart}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\n\\title{Problem Set 2}\n\\author{Mark Ditsworth}\n\n\\begin{document}\n\t\\maketitle\n\t\\section{Problem 2.1}\n\tGiven graph $G=(V,E,W)$ consider a random walk on V with transition probabilities $M_{ij} = P{X(t+1)=j | X(t)=i} = \\frac{w_{ij}}{\\text{deg}(i)}$.\n\t\\\\\\\\\n\tPartition the vertex set as $V = V_+ \\cup V_- \\cup V_*$. Suppose that every node in $V_*$ is connected to at least one node in either $V_+$ or $V_-$. Given a node $i \\subset V$ let $g(i)$ be the probability that a random walker starting at $i$ reaches a node in $V_+$ before reaching one in $V_-$. If $i \\in V_+$, then $g(i)=1$ and if $i \\in V_-$, then $g(i)=0$. Find $g(i)$ for $i \\in V_*$.\n\t\\\\\\\\\n\t\\textbf{Solution:}\n\tThere are two scenarios in which the random walker will reach a node in $V_+$ before one in $V_-$: Moving from $V_*$ to $V_+$ immediately, or moving from $V_*$ to $V_*$ repeatedly until moving to $V_+$. We essentially view leaving $V_*$ as an absorbing barrier, and want the probability that we end in $V_+$, which is the sum of all probabilities that will end in that scenario:\n\t\\[\n\t\\sum_{j \\in V_+}M_{ij} + \\sum_{j \\in V_*}\\sum_{k \\in V_+}M_{ij}M_{jk} + \\sum_{j \\in V_*}\\sum_{k \\in V_*}\\sum_{l \\in V_+}M_{ij}M_{jk}M_{kl} + \\dots\n\t\\]\n\tDefine $\\mathbf{\\nu_*} \\in \\mathbb{R}^n$, $n=|V|$ where the $j$th element is $1$ if $j \\in V_*$, and $0$ otherwise. Similarly define $\\mathbf{\\nu_+} \\in \\mathbb{R}^n$ for the partition $V_+$. Let $\\Psi_i \\in \\mathbb{R}^n$, $n=|V|$ have elements everywhere equal 0 except for the $i$th node that is the starting node. The probability that a random walker starting at node $i$ is absorbed in the $V_+$ partition at time-step $k$ is expressed as\n\t\\[\n\t\\Psi_i^T \\left[ M\\text{diag}\\left(V_*\\right) \\right]^{(k-1)} MV_+\n\t\\]\n\tThe $\\left[ M\\text{diag}\\left(V_*\\right) \\right]^{(k-1)}$ matrix expresses the probability of starting at node $i$, and ending at node $j$ after $k-1$ steps, discounting the intermediate nodes that would absorb the random walker earlier (nodes belonging to $V_+$ or $V_-$) For convenience, we will denote this matrix as $M_*$, as in only accounting for transitions within partition $V_*$. The resulting matrix is dotted with $M$ for the last transition time step, and then dotted with $V_+$ to attain the vector of probabilities end in partition $V_+$. $\\Psi_i^T$ dotted with this vector selects the probability stemming from starting at node $i$. Thus, summing the probabilities across all $k$ yields\n\t\\[\n\t\\Psi_i^T \\left[\\sum_{k=0}^{\\infty}\\left( M_* \\right)^k\\right]MV_+\n\t\\]\n\tWith each element of $M_*$ less than $0$, and the summation across each row bounded above by $1$, it is clear that $\\lim\\limits_{n \\rightarrow \\infty} (M_*)^n = \\mathbf{0}$. Thus, the infinite sum converges to $(I-M_*)^{-1}$. In conclusion, the probability that a random walker starting at node $i \\in V_*$ will reach a node in $V_+$ before a node in $V_-$ is calculated by\n\t\\[\n\tg(i) = \\Psi_i^T\\left( I - M_* \\right)^{-1}MV_+.\n\t\\]\n\tNote that for graphs with large $|V|$, the inverse operation can be quite expensive. Thus $g(i)$ should be approximated either via pseudo-inverse operations or monte carlo simulations. 
See the attached notebook \\textsf{ps2.ipynb} for justification of this closed-form expression for $g(i)$.\n\t\\\\\\\\\n\t\\section{Problem 2.2}\n\tFor a graph $G$ let $h(G)$ denote its Cheeger constant and $\\lambda_2(\\mathcal{L}_G)$ the second-smallest eigenvalue of its graph Laplacian ($\\mathcal{L}_G = D - W$). The Cheeger inequality guarantees that \n\t\\[\n\t\\frac{1}{2}\\lambda_2(\\mathcal{L}_G) \\le h_G \\le \\sqrt{2\\lambda_2(\\mathcal{L}_G)}\n\t\\]\n\tThis exercise shows that this inequality is tight (at least up to constants).\n\t\n\t1. Construct a family of graphs $\\mathcal{G}$ for which $\\lambda_2(\\mathcal{L}_G) \\rightarrow 0$ and for which there exists a constant $C>0$ for which \n\t\\[\n\t\\forall G \\in \\mathcal{G}, h_G \\le C\\lambda_2(\\mathcal{L}_G)\n\t\\]\n\t\\\\\\\\\n\t\n\t2. Construct a family of graphs $\\mathcal{G}$ for which $\\lambda_2(\\mathcal{L}_G) \\rightarrow 0$ and for which there exists a constant $c>0$ for which \n\t\\[\n\t\\forall G \\in \\mathcal{G}, h_G \\ge c\\sqrt{\\lambda_2(\\mathcal{L}_G)}\n\t\\]\n\t\\\\\\\\\n\t\n\t\\section{Problem 2.3}\n\tGiven a graph $G$ show that the dimension of the nullspace of $\\mathcal{L}_G$ corresponds to the number of connected components of $G$.\n\t\\\\\\\\\n\t\\textbf{Solution:} The nullspace of $\\mathcal{L}_G$ is spanned by a maximal set of linearly independent vectors $\\mathcal{X}$ such that $\\forall \\mathbf{x} \\in \\mathcal{X}$, $\\mathcal{L}_G\\mathbf{x} = \\mathbf{0}$.\n\tBy expanding $\\mathcal{L}_G$ we can see that $D$ and $W$ must transform each such $\\mathbf{x}$ to the same vector.\n\t\\[\n\t\\mathcal{L}_G \\mathbf{x} = (D - W)\\mathbf{x} = D\\mathbf{x} - W\\mathbf{x} = \\mathbf{0}\n\t\\]\n\t\\[\n\tD\\mathbf{x} = W\\mathbf{x}\n\t\\]\n\tConsider that graph $G = (V,E,W),~(|V|=n)$ has $k$ connected components. Let $\\mathbf{x}^i$ be a vector where\n\t\\[\n\tx^i_j = \n\t\\begin{cases}\n\t\t1 & j \\in \\text{ component } i\\\\\n\t\t0 & \\text{otherwise} \n\t\\end{cases}\n\t\\]\n\tClearly, without loss of generality, $D\\mathbf{x}^i$ is equal to a vector containing the degree of each node in $G$ that is in component $i$. $W\\mathbf{x}^i$ is clearly the same vector, since for each node it sums the weights of all edges connecting it to nodes in component $i$. Thus, there will be $k$ linearly independent vectors that are transformed to $\\mathbf{0}$ by $\\mathcal{L}_G$. All other vectors that do this will be linear combinations of $\\mathcal{X} = \\{\\mathbf{x}^1,\\dots ,\\mathbf{x}^k\\}$. 
Therefore, the dimension of the nullspace of $\\mathcal{L}_G$ is equal to the number of connected components in graph $G$.\n\t\\\\\\\\\n\t\\section{Problem 2.4}\n\tGiven a connected unweighted graph $G = (V,E)$, its diameter is equal to \n\t\\[\n\t\\text{diam}(G) = \\max_{u,v\\in V} \\min_{path p, u\\rightarrow v} \\text{len}(p)\n\t\\]\n\tShow that \n\t\\[\n\t\\text{diam}(G) \\ge \\frac{1}{\\text{vol}(G)\\lambda_2(\\mathcal{L}_G)}\n\t\\]\n\t\\\\\\\\\n\t\\section{Problem 2.5}\n\tProve the Courant--Fischer Theorem: Given a symmetric matrix $A\\in \\mathbb{R}^{n\\times n}$ with eigenvalues $\\lambda_1 \\le \\dots \\le \\lambda_n$, for $k \\le n$,\n\t\\[\n\t\\lambda_k(A) = \\min_{U:\\text{dim}(U)=k} \\left[ \\max_{x\\in U} \\frac{x^TAx}{x^Tx} \\right].\n\t\\]\n\t\\\\\n\tAlso show that\n\t\\[\n\t\\lambda_2(A) = \\max_{y\\in\\mathbb{R}^n} \\left[ \\min_{x\\in\\mathbb{R}^n:x\\perp y} \\frac{x^TAx}{x^Tx} \\right]\n\t\\]\n\t\\\\\\\\\n\t\\section{Problem 2.6}\n\tGiven a set of points $x_1,\\dots,x_n \\in \\mathbb{R}^p$ and a partition of them into $k$ clusters $S_1,\\dots,S_k$ recall the k-means objective\n\t\\[\n\t\\min_{S_1,\\dots,S_k}\\min_{\\mu_1,\\dots,\\mu_k}\\sum_{l=1}^{k}\\sum_{i\\in S_l}\\| x_i - \\mu_l \\|^2.\n\t\\]\n\tShow that this is equivalent to \n\t\\[\n\t\\min_{S_1,\\dots,S_k}\\sum_{l=1}^{k}\\frac{1}{|S_l|}\\sum_{i,j\\in S_l}\\| x_i - x_j \\|^2.\n\t\\]\n\t\\\\\n\t\\textbf{Solution:}\n\tFor a fixed partition, the inner minimum over $\\mu_1,\\dots,\\mu_k$ is attained at the centroids $\\mu_l = \\frac{1}{|S_l|}\\sum_{j\\in S_l}x_j$, so the objective becomes\n\t\\[\n\t\\min_{S_1,\\dots,S_k}\\sum_{l=1}^{k}\\sum_{i\\in S_l}\\| x_i - \\mu_l \\|^2.\n\t\\]\n\tExpanding the pairwise squared distances about the centroid,\n\t\\[\n\t\\sum_{i,j\\in S_l}\\| x_i - x_j \\|^2 = \\sum_{i,j\\in S_l}\\left( \\| x_i - \\mu_l \\|^2 + \\| x_j - \\mu_l \\|^2 - 2\\langle x_i - \\mu_l, x_j - \\mu_l \\rangle \\right) = 2|S_l|\\sum_{i\\in S_l}\\| x_i - \\mu_l \\|^2,\n\t\\]\n\twhere the cross term vanishes because $\\sum_{i\\in S_l}(x_i - \\mu_l) = \\mathbf{0}$. Hence\n\t\\[\n\t\\sum_{l=1}^{k}\\frac{1}{|S_l|}\\sum_{i,j\\in S_l}\\| x_i - x_j \\|^2 = 2\\sum_{l=1}^{k}\\sum_{i\\in S_l}\\| x_i - \\mu_l \\|^2,\n\t\\]\n\tso the two objectives differ only by the constant factor $2$ and are minimized by the same partitions.\n\t\n\t\n\\end{document}", "meta": {"hexsha": "ab8531456b9bb246dc6663b409d0eb7867ceabdc", "size": 7325, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignments/PS2/ps2.tex", "max_stars_repo_name": "markditsworth/mds", "max_stars_repo_head_hexsha": "c2fd3e946a4e661606d17e2089a6da351ace2393", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/PS2/ps2.tex", "max_issues_repo_name": "markditsworth/mds", "max_issues_repo_head_hexsha": "c2fd3e946a4e661606d17e2089a6da351ace2393", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/PS2/ps2.tex", "max_forks_repo_name": "markditsworth/mds", "max_forks_repo_head_hexsha": "c2fd3e946a4e661606d17e2089a6da351ace2393", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1349206349, "max_line_length": 702, "alphanum_fraction": 0.662116041, "num_tokens": 2723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544085240401, "lm_q2_score": 0.7931059414036511, "lm_q1q2_score": 0.554582825953112}}
{"text": "\\documentclass[a4paper,11pt]{article}\n%\\documentclass[a4paper,11pt]{scrartcl}\n\n\n\n\\input{../preambles/preamble}\n\\input{../preambles/unicode}\n\n\\setmainlanguage{english}\n\\setotherlanguages{german,greek,russian}\n\n\\input{../preambles/math-single}\n\\input{../preambles/math-brac}\n\\input{../preambles/math-thm}\n\\input{../preambles/phys-chem}\n\n\\setromanfont[Mapping=tex-text]{Linux Libertine O}\n% \\setsansfont[Mapping=tex-text]{DejaVu Sans}\n% \\setmonofont[Mapping=tex-text]{DejaVu Sans Mono}\n\n\\usepackage[%style=authoryear-icomp,\n\t\t\tbackend=biber]{biblatex}\n\\addbibresource{./exp-pot.bib}\n\n\\title{Non-relativistic Particle in an Exponential Potential}\n\\author{Yi-Fan Wang (\u738b\\ \u4e00\u5e06)}\n%\\date{}\n\n\\begin{document}\n\\maketitle\n\nConsider the one-dimensional motion of a non-relativistic particle in an \nexponential potential, the motion of which can be described by the Lagrangian \naction\n\\begin{align}\nS \\coloneqq \\int \\dd t\\,\\cbr{\\frac{m}{2}\\dot{x}^2 - V\\ee^{g x}},\n\\end{align}\nwhere $g$ and $V$ are real quantities. One sees that when $V > 0$ ($< 0$), the \npotential is bounded below (above), and the second case is potentially \nproblematic.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Canonical formalism}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nThe canonical Hamiltonian of the particle reads\n\\begin{align}\nH = \\frac{p^2}{2m} + V\\ee^{g x}.\n\\end{align}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\section{Canonical quantisation}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nUsing the Laplace--Beltrami operator, the Hamiltonian ``operator'' reads\n\\begin{align}\n\\widehat{H} = -\\frac{\\phs^2}{2m}\\partial_x^2 + V\\ee^{g x}.\n\\label{eq:hamilt-optr-0}\n\\end{align}\nNote that the domain of the unbounded operator has not been specified; hence \ncomes the quotation marks. In \\cite[ch.\\ 4]{Gitman2012}, it was suggested \nthat one could use \\emph{operation} instead of ``operator'' to distinguish the \ncase, where only the action of an operator is described, whereas the domain is \nnot.\n\n%%%%%%%%%%%%%%%%\n\\subsection{Spectrum and generalised eigenfunctions of the Hamiltonian}\n\n%%%%%%%%%%%%%%%%\n\nThe eigenvalue equation of the Hamiltonian, or the time-independent Schr\u00f6dinger \nequation, reads\n\\begin{align}\n-\\frac{\\phs^2}{2m}\\partial_x^2 \\rfun{\\psi}{x} + V\\ee^{g x} \\rfun{\\psi}{x} =\nE \\rfun{\\psi}{x}.\n\\label{eq:tise-0}\n\\end{align}\n\nIn order to solve \\cref{eq:tise-0}, define\n\\begin{align}\n\\nu \\coloneqq \\frac{\\sqrt{8m\\vbr{E}}}{g\\phs},\n\\end{align}\nand transform the coordinate\n\\begin{align}\n\\xi \\coloneqq \\frac{\\sqrt{8m\\vbr{V}\\ee^{g x}}}{g\\phs},\n\\label{eq:trsf-x-xi-0}\n\\end{align}\nso that the Hamiltonian ``operator'' reads\n\\begin{align}\n\\widehat{H} = \\frac{g^2\\phs^2}{8m}\\rbr{-\\xi^2\\partial_\\xi^2 - \\xi\\partial_\\xi + \n\\mscrv \\xi^2},\\qquad \\mscrv \\coloneqq \\sgn V,\n\\label{eq:hamilt-optr-1}\n\\end{align}\nand \\cref{eq:tise-0} transforms into the standard Besselian form\n\\begin{align}\n\\xi^2 \\rfun{\\psi''}{\\xi} + \\xi^1 \\rfun{\\psi'}{\\xi} +\n\\rbr{-\\mscrv \\xi^2 + \\mscre \\nu^2} \\rfun{\\psi}{\\xi} = 0,\\qquad\n\\mscre \\coloneqq \\sgn E.\n\\label{eq:tise-1}\n\\end{align}\nThe solutions of \\cref{eq:tise-1} can be classified into four cases \naccording to $\\rbr{\\mscrv, \\mscre}$ and are listed in \\cref{tab:tise-1}. 
\nBecause of the transformation in \\cref{eq:trsf-x-xi-0}, the corresponding \nrepresentation space $\\mbfF_\\xi$ of the state vectors \\cite[ch.\\ \n5.1]{Kiefer2012} consists of the square-integrable functions on $\\rbr{0, \n+\\infty}$ endowed with the inner product\n\\begin{align}\n\\rbr{\\psi, \\phi}_\\xi \\coloneqq \\int_0^{+\\infty}\\frac{\\dd \\xi}{\\xi}\\,\n\\rfun{\\psi^*}{\\xi}\\rfun{\\phi}{\\xi}.\n\\end{align}\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{c||r@{, }l|r@{, }l}\n\\toprule\nSign & \\multicolumn{2}{c|}{Solution 1} & \\multicolumn{2}{c}{Solution 2} \\\\\n\\midrule\n$\\rbr{+,+}$ &\n$\\rfun{\\BesselK_{\\ii \\nu}}{\\xi}$ & D & \n$\\rfun{\\BesselI_{\\ii \\nu}}{\\xi}$ & U \\\\\n$\\rbr{-,-}$ &\n$\\rfun{\\BesselK_{\\nu}}{\\xi}$ & U & \n$\\rfun{\\BesselI_{\\nu}}{\\xi}$ & U \\\\\n$\\rbr{+,-}$ &\n$\\rfun{\\BesselF_{\\ii \\nu}}{\\xi}$ & D & \n$\\rfun{\\BesselG_{\\ii \\nu}}{\\xi}$ & D \\\\\n$\\rbr{-,+}$ &\n$\\rfun{\\BesselJ_{\\nu}}{\\xi}$ & N & \n$\\rfun{\\BesselY_{\\nu}}{\\xi}$ & U \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Local solutions of \\cref{eq:tise-1} and their normalisability, where \nsign means $\\rbr{\\mscrv, \\mscre}$, N denotes normalisable, D \n$\\mupdelta$-normalisable, and U unnormalisable.\n\\label{tab:tise-1}}\n\\end{table}\n\nTransforming\n\\begin{align}\n\\ee^y \\coloneqq \\xi = \\frac{\\sqrt{8m\\vbr{V}\\ee^{g x}}}{g\\phs}\n\\end{align}\nyields the Hamiltonian ``operator'' in terms of an alternative dimensionless \nform\n\\begin{align}\n\\widehat{H} = \\frac{g^2\\phs^2}{8m}\\rbr{-\\partial_y^2 + \\mscrv \\ee^{2y}}.\n\\label{eq:hamilt-optr-2}\n\\end{align}\nThe solutions of the eigenvalue equation for \\cref{eq:hamilt-optr-2} are the \nones listed in \\cref{tab:tise-1} with $\\xi$ replaced by $\\ee^y$. The \nrepresentation space $\\mbfF_y$ consists of the square-integrable functions \non $\\rbr{-\\infty, +\\infty}$ endowed with the inner product\n\\begin{align}\n\\rbr{\\psi, \\phi}_y \\coloneqq \\int_{-\\infty}^{+\\infty}\\dd y\\,\n\\rfun{\\psi^*}{y}\\rfun{\\phi}{y}.\n\\end{align}\n\n%%%%%%%%%%%%%%%%\n\\subsection{Problem of self-adjointness}\n\n%%%%%%%%%%%%%%%%\nOn a Hilbert space $\\mbfH$ endowed with the inner product $\\rbr{\\cdot,\\cdot}$, \nan operator $A$ is characterised by its \\emph{domain} $\\rfun{\\mathrm{Dom}}{A}$ \nand the operation on a vector in $\\rfun{\\mathrm{Dom}}{A}$. Physicists often skip \nthe discussion about the domain, which proves to be problematic in the current \ncase.\n\nTo be more specific, the following definitions are needed. $A$ is called \n\\emph{symmetric} if $\\rbr{f, A g} \\equiv \\rbr{A f, g}$, $\\forall f, g \\in \n\\rfun{\\mathrm{Dom}}{A}$. The \\emph{adjoint} of $A$ is denoted as $A^\\dagger$ and \nsatisfies $\\rbr{A^\\dagger f, g} \\coloneqq \\rbr{f, A g}$, $\\forall f \\in \n\\rfun{\\mathrm{Dom}}{A^\\dagger}$, $g \\in \\rfun{\\mathrm{Dom}}{A}$. Finally, $A$ is \n\\emph{self-adjoint} if $A^\\dagger = A$, \nwhich implies the identical operation $A^\\dagger f \\equiv A f$, $\\forall f \\in \n\\rfun{\\mathrm{Dom}}{A}$ and the identical domain, \n$\\rfun{\\mathrm{Dom}}{A^\\dagger} \\equiv \\rfun{\\mathrm{Dom}}{A}$.\n\nNote that in infinite dimensions, for an unbounded $A$, $\\mbfH \\supsetneq \n\\rfun{\\mathrm{Dom}}{A^\\dagger} \\supseteq \\rfun{\\mathrm{Dom}}{A}$ \\cite[ch.\\ \n9]{Hall2013}, which is the main difference from the case in finite dimensions, \nwhere $\\mbfH = \\rfun{\\mathrm{Dom}}{A^\\dagger} = \\rfun{\\mathrm{Dom}}{A}$.\n\nThe Hamiltonian in \\cref{eq:hamilt-optr-2} is manifestly not self-adjoint for \nsome of the cases. 
If it were always self-adjoint, the (generalised) \neigenfunctions of different eigenvalues would necessarily be orthogonal. This \nis obviously not the case for $\\rbr{-,+}$, where\n\\begin{align}\n\\rbr{J_{\\nu_1}, J_{\\nu_2}}_y = \n\\end{align}\n\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "91937ae9748d68f5292fc63d57c1063bc3f90c11", "size": 6688, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exp-pot/exp-pot.tex", "max_stars_repo_name": "cmp0xff/Notes", "max_stars_repo_head_hexsha": "afd712c1e42275bf781a030d6c5f1b7f4c6ec57b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exp-pot/exp-pot.tex", "max_issues_repo_name": "cmp0xff/Notes", "max_issues_repo_head_hexsha": "afd712c1e42275bf781a030d6c5f1b7f4c6ec57b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exp-pot/exp-pot.tex", "max_forks_repo_name": "cmp0xff/Notes", "max_forks_repo_head_hexsha": "afd712c1e42275bf781a030d6c5f1b7f4c6ec57b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.2974358974, "max_line_length": 81, "alphanum_fraction": 0.6698564593, "num_tokens": 2367, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680199891789, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.5545066464777866}}
{"text": "% !TEX root = ../main.tex\n\nThe protocols we will discuss work by recursively computing and comparing fingerprints of sets.\nThis chapter defines and motivates a specific fingerprinting scheme that admits efficient computation with small overhead for the storage and maintenance of auxiliary data structures. \\Cref{initial-considerations} outlines the solution space and theoretic bounds. \\Cref{group-fingerprints} characterizes a family of functions that admit efficient incremental computation, \\Cref{collisions} proposes members of this family that can be used for fingerprinting, and \\cref{crypto} examines security concerns in the face of malicious parties trying to find fingerprint collisions. We examine randomized solutions which compute with high probability fingerprints in logarithmic time in \\cref{randomization}.\n\n\\section{Initial Considerations}\n\\label{initial-considerations}\n\nOur protocols work by recursively testing fingerprints for equality. For our purposes, we can define a fingerprint or hash function as follows:\n\n\\begin{definition}\nA \\defined{hash function} is a function $\\fun{\\h}{U}{D}$ with a finite codomain such that for randomly chosen $u \\in U$ and $d \\in D$ the probability that $\\h(u) = d$ is roughly\\footnote{To keep the focus on data structure synchronization rather than being sidetracked by cryptography, we will for the most part keep arguments about probabilities qualitative rather than quantitative.} $\\frac{1}{\\abs{D}}$. $\\h(u)$ is called the \\defined{hash of $u$}, \\defined{fingerprint of $u$} or \\defined{digest of $u$}.\n\\end{definition}\n\nGiven a universe $U$ of items, a function $\\enc : U \\rightarrow \\{0, 1\\}^{*}$ for encoding items as binary strings, a linear order $\\preceq$ on $U$, a hash function $\\h: \\{0, 1\\}^{*} \\rightarrow \\{0, 1\\}^{k}$ mapping binary strings to binary strings of length $k$, and some finite $S \\subseteq U$, a natural starting point for defining a fingerprint of the set $S$ is to sort the items according to $\\preceq$, concatenate the encodings, and hash the resulting string.\n\nWhile this is straightforward to specify and implement, it does not suffice for our purposes. To allow for efficient set reconciliation, we need to be able to efficiently compute the new fingerprint after a small modification to the set such as insertion or deletion of a single item. Furthermore, we want to be able to efficiently compute the fingerprints of all subsets defined by a range of the original set.\n\nThe fingerprint based on concatenating encodings does not allow for efficient incremental reevaluation. When an item is added to $S$ which is less than any item previously in $S$, the hash function needs to be run over the whole string of length $\\complexity{\\abs{S} + 1}$ again. Furthermore, for any subrange of the set, a full fingerprint computation needs to be performed as well. Precomputing the fingerprints of all subranges requires a prohibitive amount of space. 
Every subrange corresponds to a substring of the string consisting of all items in $S$ in ascending order, so there are $\\frac{\\abs{S} \\cdot (\\abs{S} + 1)}{2} + 1 \\in \\complexity{\\abs{S}^2}$ subranges in total.\n\nThe go-to approach for efficiently handling small changes to a set of totally ordered items is using a (balanced) search tree; we briefly state some definitions.\n\n\\begin{definition}\nLet $U$ be a set and $\\preceq$ a binary relation on $U$.\nWe call $\\preceq$ a \\defined{linear order on $U$} if it satisfies three properties:\n\n  \\begin{description}\n    \\item[anti-symmetry:] for all $x, y \\in U$: if $x \\preceq y$ and $y \\preceq x$ then $x = y$\n    \\item[transitivity:] for all $x, y, z \\in U$: if $x \\preceq y$ and $y \\preceq z$ then $x \\preceq z$\n    \\item[linearity:] for all $x, y \\in U$: $x \\preceq y$ or $y \\preceq x$\n  \\end{description}\n\nIf $\\preceq$ is a linear order, we write $x \\prec y$ to denote that $x \\preceq y$ and $x \\neq y$.\n\\end{definition}\n\n\\begin{definition}\nLet $U$ be a set, $\\preceq$ a linear order on $U$, and $V \\subseteq U$. Let $T$ be a rooted directed tree with vertex set $V$.\n\nLet $v \\in V$, then $T_v$ denotes the subtree of $T$ with root $v$.\n\n$T$ is a \\defined{binary search tree on $V$} if for all inner vertices $v$ with left child $a$ and right child $b$: $a' \\prec v$ for all $a' \\in \\V(T_a)$ and $v \\prec b'$ for all $b' \\in \\V(T_b)$.\n\n\\end{definition}\n\n\\begin{definition}\nLet $T = (V, E)$ be a binary search tree and $\\epsilon \\in \\mathbb{R}_{> 0}$.\nWe call $T$ \\defined{$\\epsilon$-balanced} if $\\textit{height}(T) \\leq \\ceil*{\\epsilon \\cdot \\log_2(|V|)}$.\nSince the precise choice of $\\epsilon$ will not matter for our complexity analyses, we will usually simply talk about \\defined{balanced} trees.\n\\end{definition}\n\nIn the context of fingerprinting, balanced trees often take the form of Merkle trees~\\cite{merkle1989certified} --- binary trees storing items in their leaves, in which each leaf vertex is labeled with the hash of the associated item, and inner vertices are labeled with the hash of the concatenation of the child labels. The root label serves as a fingerprint for the set of items stored in the leaves.\n\nWhen inserting or removing an item, the number of labels that need updating is proportional to the length of the path from the root to the item, so in a balanced tree of $n$ items it is in $\\complexity{\\log(n)}$. The problem with this approach however is that fingerprints are not unambiguous: there are different balanced search trees storing the same item set, and different tree arrangements result in different root labels.\n\nUnfortunately it does not suffice to specify a particular balancing scheme, since different insertion orders of the same overall set of items can result in different trees, even when using the same balancing scheme. While such order-dependent trees are sufficient for a setting in which only a single machine updates the set and all other machines apply updates in the same order, as assumed e.g. 
in~\\cite{naor2000certificate}, we aim for a less restrictive setting in which the evolution of the local set does not influence synchronization.\n\nAn alternative would be to define exactly one valid tree shape for any set of items, but this precludes logarithmic time complexity of updates, as \\cite{uniquerepresentation} shows that search, insertion and deletion in such trees require $\\Omega(\\sqrt{n})$ time in the worst case.\n\nWe will inspect two options for circumventing this lower bound and achieving logarithmic complexity: letting the fingerprint function abstract over the tree shape, downgrading it to an implementation detail, which we examine in \\cref{group-fingerprints} to \\cref{crypto}, or utilizing randomization to define a unique tree layout which allows logarithmic operations with high probability, which we examine in \\cref{randomization}.\n\n\\section{Incremental Computations}\n\\label{group-fingerprints}\n\nWe now study a family of fingerprinting functions for sets that admit efficient incremental computation based on an auxiliary tree structure, but whose output does not depend on the exact shape of that tree. We first examine a general class of such functions, which reduce a finite set to a single value according to a monoid.\n\n\\begin{definition}\nLet $M$ be a set, $\\groupaddsym: M \\times M \\rightarrow M$, and $\\neutraladd \\in M$.\n\nWe call $(M, \\groupaddsym, \\neutraladd)$ a \\defined{monoid} if it satisfies two properties:\n\n  \\begin{description}\n    \\item[associativity:] for all $x, y, z \\in M$: $\\groupadd{(\\groupadd{x}{y})}{z} = \\groupadd{x}{(\\groupadd{y}{z})}$\n    \\item[neutral element:] for all $x \\in M$: $\\groupadd{\\neutraladd}{x} = x = \\groupadd{x}{\\neutraladd}$.\n  \\end{description}\n\\end{definition}\n\n\\begin{definition}\n\\label{def-lift}\nLet $U$ be a set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, and $\\fun{\\f}{U}{M}$.\n\nWe \\defined{lift $\\f$ to finite sets via $\\mathcal{M}$} to obtain $\\partialfun{\\lift{\\f}{\\mathcal{M}}}{\\powerset{U}}{M}$ with:\n\n\\begin{align*}\n\\lift{\\f}{\\mathcal{M}}(\\emptyset) &\\defeq \\neutraladd\\\\\n\\lift{\\f}{\\mathcal{M}}(S) &\\defeq \\groupadd{\\f(\\min(S))}{\\lift{\\f}{\\mathcal{M}}(S \\setminus \\set{\\min(S)})}\\\\\n\\end{align*}\n\nIn other words, if $S = \\set{s_0, s_1, \\ldots, s_{\\abs{S} - 1}}$ with $s_0 \\prec s_1 \\prec \\ldots \\prec s_{\\abs{S} - 1}$, then $\\lift{\\f}{\\mathcal{M}}(S) = \\groupadd{\\f(s_0)}{\\groupadd{\\f(s_1)}{\\groupadd{\\ldots}{\\f(s_{\\abs{S} - 1})}}}$.\n\\end{definition}\n\nFunctions of the form $\\lift{\\f}{\\mathcal{M}}$ can be incrementally computed by using labeled binary search trees:\n\n\\begin{definition}\nLet $U$ be a set, $S \\subset U$ a finite set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, $\\fun{\\f}{U}{M}$, and let $T$ be a binary search tree on $S$.\n\nWe define a \\defined{labeling function} $\\fun{\\liftlabel{\\f}{\\mathcal{M}}}{S}{M}$:\n\n  \\[\n   \\liftlabel{\\f}{\\mathcal{M}}(v) \\defeq \\begin{cases}\n\\f(v), &  \\text{for leaf $v$} \\\\\n\\groupadd{\\liftlabel{\\f}{\\mathcal{M}}(c_{<})}{\\f(v)} & \\, \\parbox[t]{.6\\textwidth}{$v$ internal vertex with left child $c_{<}$\\\\and no right child}\\\\\n\\groupadd{\\f(v)}{\\liftlabel{\\f}{\\mathcal{M}}(c_{>})} & \\, \\parbox[t]{.6\\textwidth}{$v$ internal vertex with right child $c_{>}$\\\\and no left 
child}\\\\\n\\groupadd{\\liftlabel{\\f}{\\mathcal{M}}(c_{<})}{\\groupadd{\\f(v)}{\\liftlabel{\\f}{\\mathcal{M}}(c_{>})}} & \\, \\parbox[t]{.6\\textwidth}{$v$ internal vertex with left child $c_{<}$\\\\and right child $c_{>}$}\n\\end{cases}\n  \\]\n\nSee \\cref{fig:fp-tree} for an example.\n\\end{definition}\n\n\\begin{figure*}\n\\begin{scaletikzpicturetowidth}{\\textwidth}\n\\begin{tikzpicture}[scale=\\tikzscale]\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\begin{pgfonlayer}{main}\n\t\t%vertices\n\t\t\\node (vroot) at (0, 1) [aux] {\\aux{\\exampled}{\n\\groupadd{\n    (\\groupadd{\\hexamplea}{\\groupadd{\\hexampleb}{\\hexamplec)}}}\n{\\groupadd{\\hexampled}\n{(\\groupadd{\\hexamplee}{\\groupadd{\\hexamplef}{\\hexampleg}})}\n}\n}};\n\n\t\t\\node (v00) at (-4, -1) [aux] {\\aux{\\exampleb}{\\groupadd{\\hexamplea}{\\groupadd{\\hexampleb}{\\hexamplec}}}};\n\t\t\\node (v01) at (4, -1) [aux] {\\aux{\\examplef}{\\groupadd{\\hexamplee}{\\groupadd{\\hexamplef}{\\hexampleg}}}};\n\n                \\node (v10) at (-6, -3) [aux] {\\aux{\\examplea}{\\hexamplea}};\n                \\node (v11) at (-2, -3) [aux] {\\aux{\\examplec}{\\hexamplec}};\n                \\node (v12) at (2, -3) [aux] {\\aux{\\examplee}{\\hexamplee}};\n                \\node (v13) at (6, -3) [aux] {\\aux{\\exampleg}{\\hexampleg}};\n\t\t%edges\n                \\draw (vroot) edge[edge] (v00);\n                \\draw (vroot) edge[edge] (v01);\n\n\t\t\\draw (v00) edge[edge] (v10);\n\t\t\\draw (v00) edge[edge] (v11);\n\t\t\\draw (v01) edge[edge] (v12);\n\t\t\\draw (v01) edge[edge] (v13);\n\t\\end{pgfonlayer}\n\\end{tikzpicture}\n\\end{scaletikzpicturetowidth}\n\n\\caption{\nA balanced search tree labeled by $\\liftlabel{\\h}{(M, \\groupaddsym, \\neutraladd)}$. 
For fingerprinting, $\\h$ could be a hash function and $\\groupaddsym$ the xor operation on fixed-width bitstrings.\n}\n\n\\label{fig:fp-tree}\n\\end{figure*}\n\n\\begin{proposition}\nLet $U$ be a set, $S \\subset U$ a finite set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, $\\fun{\\f}{U}{M}$, and let $T$ be a binary search tree on $S$ with root $r \\in S$.\n\nThen $\\liftlabel{\\f}{\\mathcal{M}}(r) = \\lift{\\f}{\\mathcal{M}}(S)$.\n\n\\begin{proof}\nBy induction on the number of vertices of $T$.\n\n\\textbf{IB:} If $r$ is a leaf, then $\\abs{\\V(T)} = 1$ and thus $\\liftlabel{\\f}{\\mathcal{M}}(r) \\overset{\\text{def}}= \\f(r) \\overset{\\text{def}}= \\lift{\\f}{\\mathcal{M}}(\\V(T)) = \\lift{\\f}{\\mathcal{M}}(S)$.\n\n\\textbf{IH:} Let  $c_{<}$ and $c_{>}$ be vertices for which $\\liftlabel{\\f}{\\mathcal{M}}(c_{<}) = \\lift{\\f}{\\mathcal{M}}(\\V(T_{c_{<}}))$ and $\\liftlabel{\\f}{\\mathcal{M}}(c_{>}) = \\lift{\\f}{\\mathcal{M}}(\\V(T_{c_{>}}))$.\n\n\\textbf{IS:} If $r$ is an internal vertex with left child $c_{<}$ and right child $c_{>}$, then:\n\n\\begin{align*}\n \\liftlabel{\\f}{\\mathcal{M}}(r) &\\overset{\\text{def}}= \\groupadd{\\liftlabel{\\f}{\\mathcal{M}}(c_{<})}{\\groupadd{\\f(r)}{\\liftlabel{\\f}{\\mathcal{M}}(c_{>})}}\\\\\n&\\overset{\\text{IH}}= \\groupadd{\\lift{\\f}{\\mathcal{M}}(\\V(T_{c_{<}}))}{\\groupadd{\\f(r)}{\\lift{\\f}{\\mathcal{M}}(\\V(T_{c_{>}}))}}\\\\\n& \\overset{\\text{def}}= \\lift{\\f}{\\mathcal{M}}(\\V(T))\\\\\n&= \\lift{\\f}{\\mathcal{M}}(S)\\\\\n\\end{align*}\n\nThe cases for internal vertices with exactly one child follow analogously.\n\\end{proof}\n\\end{proposition}\n\nThis correspondence can be used to incrementally compute $\\lift{\\f}{\\mathcal{M}}(S)$: Initially, a labeled search tree storing the items in $S$ is constructed. $\\lift{\\f}{\\mathcal{M}}(S)$ is the root label. When an item is inserted or removed, only the labels on the path from the root to the point of modification require recomputation, so only a logarithmic number of operations is performed if a self-balancing tree is used.\n\nNote that the exact shape of the tree determines the grouping of how to apply $\\groupaddsym$, but by associativity all groupings yield the same result. All trees storing the same set have the same root label.\n\nIf $U$ is small enough that space usage of $\\complexity{\\abs{U}}$ is acceptable, an implicit tree representation such as a binary indexed tree (Fenwick tree)~\\cite{fenwick1994new} can be used. In that case, array positions that correspond to some $u \\in U \\setminus S$ are simply filled with a dummy value whose hash is defined to be $\\neutraladd$.\n\n\\subsection{Subsets}\n\\label{subsets-associative}\n\nIn addition to incremental computation of the fingerprint of a given set, the reconciliation protocol also requires the efficient computation of the fingerprints of arbitrary ranges of the given set. We first fix some terminology and notation:\n\n\\begin{definition}\n\\label{def-range}\nLet $S \\subseteq U$, $\\preceq$ a linear order over $U$, and $x, y \\in U$.\n\nThe \\defined{range from $x$ to $y$ in $S$}, denoted by $\\range{x}{y}{S}$, is the set $\\set{s \\in S \\mid x \\preceq s \\prec y}$ if $x \\prec y$, the set $S \\setminus \\range{y}{x}{S}$ if $y \\prec x$, and defined to be all of $S$ if $x = y$. 
We call $x$ the \\defined{lower boundary} and $y$ the \\defined{upper boundary} of the range (even if $y \\preceq x$).\n\nNote that the upper boundary is excluded from the range; the convention $\\range{x}{x}{S} = S$ lets a range with equal boundaries wrap around the whole set.\n\\end{definition}\n\nIn the remainder of this section, we assume that for all ranges $\\range{x}{y}{S}$ we have $x \\prec y$. If that is not the case, all computations can be performed for the sets $\\set{s \\in S \\mid x \\preceq s}$ and $\\set{s \\in S \\mid s \\prec y}$ which partition $\\range{x}{y}{S}$ if $y \\preceq x$, and the resulting values can be combined via $\\groupaddsym$ to obtain the desired result.\n\nGiven a balanced search tree $T$ with root $r$ for a set $S$ that is labeled by $\\liftlabel{\\f}{\\mathcal{M}}$, we can compute $\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S})$ in logarithmic time. Intuitively, one traces paths in $T$ to both $x$ and $y$, and then the result is the sum over all vertices ``in the area between'' these paths. For every vertex on the traced paths, the label of the ``inner'' child vertex summarizes multiple vertices within the area. Summing over all these children yields the value corresponding to the whole inner area. Since the length of the delimiting paths is logarithmic, overall only a logarithmic number of labels needs to be added up.\n\nWe now give a precise definition of this algorithm, defining it via Haskell~98~\\cite{jones2003haskell} code, not necessarily because it is executable, but mostly to have a typed mathematical notation. For clarity of presentation, only complete binary trees are considered. Arbitrary binary trees can also have inner nodes with exactly one child. These can be handled with almost the same algorithm by acting as if these nodes had a second child which contributes \\texttt{emptyFp} to any application of \\texttt{combine}.\n\nWe start by specifying the types involved in the computation, being slightly more general than currently necessary so that the algorithm can later be reused for certain non-monoidal fingerprinting schemes:\n\n\\begin{minted}[mathescape]{haskell}\n-- $\\mathtt{U}$ is the type of items, $\\mathtt{D}$ the codomain of the hash function.\nhash :: U -> D\n\n-- For now, the set of fingerprints is identical to the set of hashes.\ntype Fingerprint = D\n\n-- The fingerprint of the empty set, for now the neutral element.\nemptyFp :: Fingerprint\nemptyFp = 0\n\n-- Combine three fingerprints into a new fingerprint.\ncombine :: Fingerprint -> Fingerprint -> Fingerprint -> Fingerprint\ncombine a b c = a + b + c\n\n-- Convert a hash to a fingerprint, for now the identity function.\nliftFp :: D -> Fingerprint\nliftFp d = d\n\n-- A node is either a leaf or an inner vertex; the last component is its label.\ndata Node = Leaf U D | Inner Node U Node D\n\n-- Extract the label of a node.\nlabel :: Node -> D\nlabel (Leaf _ fp)      = fp\nlabel (Inner _ _ _ fp) = fp\n\\end{minted}\n\nThe algorithm proceeds by first finding the vertex $v$ with the smallest distance to the root such that $x \\preceq v \\prec y$. 
This might be $r$ itself.\n\n\\begin{minted}[mathescape]{haskell}\n-- Find the node within $\\range{x}{y}{S}$ that is closest to the root,\n-- assuming $x \\prec y$.\nfindInitial :: Node -> U -> U -> Maybe Node\nfindInitial n@(Leaf v _) x y\n    | x <= v && v < y = Just n\n    | otherwise       = Nothing\nfindInitial n@(Inner l v r _) x y\n    | v < x           = findInitial r x y\n    | v >= y          = findInitial l x y\n    | otherwise       = Just n\n\\end{minted}\n\nIf there is no such $v$, then $\\range{x}{y}{S} = \\emptyset$ and thus $\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S}) = \\neutraladd$. If however there is such a $v$, then all vertices $u$ of $T$ which are not vertices in $T_v$ are either greater than or equal to $y$ (if $v \\prec u$) or strictly less than $x$ (if $u \\prec v$); in either case they do not influence $\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S})$.\n\nIf $v$ is a leaf, $\\range{x}{y}{S} = \\set{v}$ and thus $\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S}) = \\f(v)$. Otherwise, let $c_{<}$ be the left child of $v$ and let $c_{>}$ be the right child. Since all vertices in $T_{c_{>}}$ are greater than $v$, they are in particular greater than $x$. Analogously all vertices in $T_{c_{<}}$ are less than $y$.\n\nKeeping in mind that the vertices of $T_{c_{<}}$ contributing to the range are exactly those greater than or equal to $x$, and analogously the contributing vertices of $T_{c_{>}}$ are exactly those strictly less than $y$, we have:\n\n\\begin{align*}\n\\range{x}{y}{S} &= \\disjointunion{\\set{u \\in \\V(T_{c_{<}}) \\mid x \\preceq u}}{\\disjointunion{\\set{v}}{\\set{u \\in \\V(T_{c_{>}}) \\mid u \\prec y}}}\\\\\n&\\mathrm{implying}\\\\\n\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S}) &= \\groupadd{\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_{c_{<}}) \\mid x \\preceq u})}{\\groupadd{\\f(v)}{\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_{c_{>}}) \\mid u \\prec y})}}\\\\\n\\end{align*}\n\n\\begin{minted}[mathescape]{haskell}\n-- Compute the fingerprint over all items stored in $v$\n-- within the range $\\range{x}{y}{S}$, assuming $x \\prec y$.\nrangeFingerprint :: Node -> U -> U -> Fingerprint\nrangeFingerprint v x y = case findInitial v x y of\n    Nothing               -> emptyFp\n    Just (Leaf _ fp)      -> liftFp fp\n    Just (Inner l v' r _) -> combine\n                                (accGeq l x) -- $\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_l) \\mid x \\preceq u})$\n                                (liftFp (hash v')) -- $\\f(v')$\n                                (accLt r y)  -- $\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_r) \\mid u \\prec y})$\n\\end{minted}\n\n$\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_{c_{<}}) \\mid x \\preceq u})$ can be computed by traversing from $c_{<}$ to $x$, summing over those vertices along the path that are greater than or equal to $x$, as well as the right children of those vertices. 
Correctness follows from a rather technical but straightforward induction we omit.\n\n\\begin{minted}[mathescape]{haskell}\n-- Sum up the fingerprints of all items in the given tree\n-- which are greater than or equal to $x$.\naccGeq :: Node -> U -> Fingerprint\naccGeq (Leaf v fp) x\n    | v < x     = emptyFp\n    | otherwise = liftFp fp\naccGeq (Inner l v r _) x\n    | v < x     = accGeq r x\n    | otherwise = combine (accGeq l x) (liftFp (hash v)) (liftFp (label r))\n\\end{minted}\n\nAnalogously $\\lift{\\f}{\\mathcal{M}}(\\set{u \\in \\V(T_{c_{>}}) \\mid u \\prec y})$ can be computed by traversing from $c_{>}$ to $y$, summing over those vertices along the path that are strictly less than $y$, as well as the left children of those vertices.\n\n\\begin{minted}[mathescape]{haskell}\n-- Sum up the fingerprints of all items in the given tree\n-- which are strictly less than $y$.\naccLt :: Node -> U -> Fingerprint\naccLt (Leaf v fp) y\n    | v >= y    = emptyFp\n    | otherwise = liftFp fp\naccLt (Inner l v r _) y\n    | v >= y    = accLt l y\n    | otherwise = combine (liftFp (label l)) (liftFp (hash v)) (accLt r y)\n\\end{minted}\n\nA simplified approach to these computations can be taken if $M$ has efficiently computable inverses with respect to $\\groupaddsym$, i.e. if $(M, \\groupaddsym, \\neutraladd)$ is a group.\n\n\\begin{definition}\nLet $(M, \\groupaddsym, \\neutraladd)$ be a monoid.\nWe call it a \\defined{group} if for all $x \\in M$ there exists $y \\in M$ such that $\\groupadd{x}{y} = \\neutraladd$.\nThis $y$ is necessarily unique and denoted by $\\inverseadd{x}$.\nFor $x, y \\in M$ we write $\\groupsubtract{x}{y}$ as a shorthand for $\\groupadd{x}{\\inverseadd{y}}$.\n\\end{definition}\n\nObserve that $\\range{x}{y}{S} = \\range{\\min(S)}{y}{S} \\setminus \\range{\\min(S)}{x}{S}$, and thus also $\\lift{\\f}{\\mathcal{M}}(\\range{x}{y}{S}) = \\groupadd{\\inverseadd{\\lift{\\f}{\\mathcal{M}}(\\range{\\min(S)}{x}{S})}}{\\lift{\\f}{\\mathcal{M}}(\\range{\\min(S)}{y}{S})}$. We can thus simplify \\texttt{rangeFingerprint}, requiring neither \\texttt{findInitial} nor \\texttt{accGeq}:\n\n\\begin{minted}{haskell}\n-- Negation denotes the group inverse of a fingerprint.\nrangeFingerprintGroup :: Node -> U -> U -> Fingerprint\nrangeFingerprintGroup v x y = -(accLt v x) + (accLt v y)\n\\end{minted}\n
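\nTo make this concrete, the following throwaway reference model instantiates the scheme on plain lists (a sketch with stand-in types: items and digests are both \\texttt{Word64}, and an illustrative multiplier stands in for a real hash function). Since xor is its own inverse, the ``subtraction'' in \\texttt{rangeFingerprintGroup} is again xor:\n\n\\begin{minted}[mathescape]{haskell}\nimport Data.Bits (xor)\nimport Data.List (sort)\nimport Data.Word (Word64)\n\n-- Stand-in hash; a deployment would use a high quality hash function.\nhashW :: Word64 -> Word64\nhashW u = (u + 1) * 0x9E3779B97F4A7C15\n\n-- The lift of \\cref{def-lift} on an explicit list: fold the item hashes\n-- in ascending order (for xor the order is irrelevant anyway).\nsetFp :: [Word64] -> Word64\nsetFp = foldr (xor . hashW) 0 . sort\n\n-- accLt on the list model: fingerprint of all items strictly below z.\naccLtList :: [Word64] -> Word64 -> Word64\naccLtList s z = setFp (filter (< z) s)\n\n-- The group shortcut: $\\range{x}{y}{S}$ as ``prefix minus prefix'',\n-- assuming $x \\prec y$.\nrangeFpList :: [Word64] -> Word64 -> Word64 -> Word64\nrangeFpList s x y = accLtList s x `xor` accLtList s y\n\\end{minted}\n\nAgainst this model, any tree-based implementation over the same items must produce identical fingerprints, which makes the sketch a convenient test oracle.\n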
\n\\subsubsection{Complexity}\n\nThe worst-case running time occurs when the vertex $v$ with the smallest distance to $r$ such that $x \\preceq v \\prec y$ is $r$ itself, since then both \\texttt{accGeq} and \\texttt{accLt} perform a traversal to a leaf of maximal length. Since $T$ is balanced, only a logarithmic number of recursive calls is executed. Assuming $\\f$ can be computed in $\\complexity{1}$, the resulting time complexity is in $\\complexity{\\log(\\abs{S})}$.\n\nNeither \\texttt{accLt} nor \\texttt{accGeq} is tail-recursive, but they can be rewritten into a tail-recursive form via the standard technique of adding an accumulator argument:\n\n\\begin{minted}[mathescape]{haskell}\naccGeq :: Node -> U -> Fingerprint\naccGeq v x = accGeq' emptyFp v x\n\naccGeq' :: Fingerprint -> Node -> U -> Fingerprint\naccGeq' acc (Leaf v fp) x\n    | v < x     = acc\n    | otherwise = combine emptyFp (liftFp fp) acc\naccGeq' acc (Inner l v r _) x\n    | v < x     = accGeq' acc r x\n    | otherwise = accGeq' (\n                      combine (liftFp (hash v)) (liftFp (label r)) acc\n                  ) l x\n\naccLt :: Node -> U -> Fingerprint\naccLt v y = accLt' emptyFp v y\n\naccLt' :: Fingerprint -> Node -> U -> Fingerprint\naccLt' acc (Leaf v fp) y\n    | v >= y    = acc\n    | otherwise = combine acc (liftFp fp) emptyFp\naccLt' acc (Inner l v r _) y\n    | v >= y    = accLt' acc l y\n    | otherwise = accLt' (\n                      combine acc (liftFp (label l)) (liftFp (hash v))\n                  ) r y\n\\end{minted}\n\nThe space complexity of computing the fingerprint for a range is thus $\\complexity{1}$. Note however that the tail-recursive forms require associativity to compute the same fingerprints as the original versions.\n\nAs a closing remark on these computations, we want to point out that associativity not only allows us to abstract over differently shaped binary search trees, but also results in trees of higher degree leading to the same fingerprints. Binary trees simplify the presentation and are theoretically optimal, but on actual hardware traversing pointers is expensive whereas loading sequential memory is cheap, which becomes even more pronounced when the tree resides on secondary storage.\n\nOptimized tree implementations thus have higher branching factors and store multiple items per vertex, a typical example being B-trees~\\cite{bayer2002organization}. Our fingerprinting scheme admits such optimized implementations; in particular, different nodes that want to synchronize data can choose their implementation techniques fully independently and yet maintain compatibility.\n\n\\subsection{Bulk Fingerprint Computation}\n\\label{bulk}\n\nThe synchronization protocol partitions item sets into disjoint ranges. When computing the fingerprints of $k$ such ranges using the previous algorithms, the overall time complexity is in $\\complexity{k \\cdot \\log(\\abs{S})}$. The reason why this can exceed the $\\complexity{\\abs{S}}$ bound suggested by the total number of vertices and edges in the tree is that the independent computations contain redundancy. For example, every single computation touches the root node, roughly half of them touch each of the root's children, etc.\n\nWe can bound the complexity of multiple fingerprint computations for disjoint ranges by $\\complexity{\\abs{S}}$ by performing a more efficient tree traversal which visits any given vertex at most twice. This is achieved by starting at the lower boundary of the first range, then tracing the (unique) path to the upper boundary of the first range, which is also the lower boundary of the second range, so from here the path to the upper boundary of the second range can be traced, and so on.\n\nThus we consider a traversal algorithm that traces the shortest path from some vertex $x$ to another vertex $y$ with $x \\preceq y$. 
We express the traversal as a function that takes as arguments some vertex $y$ to search for and a starting vertex $x$, and that then returns either the next vertex on the shortest path from $x$ to the least vertex $z$ such that $y \\preceq z$, or $\\nil$ if no such $z$ exists or the traversal has finished, i.e. $x = z$.\n\nSince we use the traversal for the computation of fingerprints over successive ranges, the tree is always traversed from a lesser to a strictly greater node. During a traversal, the function can be called with $y \\preceq x$, but only if $x$ lies on the shortest path from some $w \\prec y$ to $z$. The definition we give does not correctly handle inputs for which this does not hold; in particular, it cannot perform backward traversals.\n\nIf the function is to trace the shortest path in $\\complexity{1}$ space, the decision which vertex to move to must be based on local information only. A vertex $x$ however does not store enough information to determine whether the target vertex $y$ can be reached by a downward traversal starting at $x$ or whether one first needs to find a common ancestor of $x$ and $y$. To enable this decision, we store in each vertex a reference to its greatest descendent.\n\nTo allow upward traversal in the tree in $\\complexity{1}$ space, we store in each vertex a reference to its parent, if one exists. As usual, we restrict the presentation of the algorithm to complete binary trees.\n\n\\begin{minted}[mathescape]{haskell}\n-- Second-to-last member of inner nodes is a pointer to the greatest\n-- descendent, last member of all nodes is the parent pointer.\ndata Node = Leaf U D (Maybe Node) | Inner Node U Node D Node (Maybe Node)\n\nvalue :: Node -> U\nvalue (Leaf v _ _)        = v\nvalue (Inner _ v _ _ _ _) = v\n\ngreatestDescendent :: Node -> Node\ngreatestDescendent n@(Leaf _ _ _)        = n\ngreatestDescendent (Inner _ _ _ _ g _) = g\n\n-- Return the next vertex on the shortest path\n-- from $\\mathtt{x}$ to the least vertex $\\mathtt{z}$ with $\\mathtt{y} \\preceq \\mathtt{z}$,\n-- assuming there exists $\\mathtt{w} \\prec \\mathtt{y}$ such that $\\mathtt{x}$ lies\n-- on the shortest path from $\\mathtt{w}$ to $\\mathtt{z}$.\nnext :: Node -> U -> Maybe Node\nnext (Leaf v _ p) y\n    | v < y     = p\n    | otherwise = Nothing\nnext (Inner l x r _ g p) y\n    | (value g) < y = p\n    | y < x && ((value (greatestDescendent l)) >= y)  = Just l\n    | y > x  = Just r\n    | otherwise = Nothing -- traversal done\n\\end{minted}\n\nSince paths in trees are unique, this traversal touches the same vertices as \\texttt{rangeFingerprint}, except those visited during the execution of \\texttt{findInitial}. 
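\nAs a small illustration of how \\texttt{next} drives a traversal (a hypothetical helper, not part of the protocol, assuming the \\texttt{Node}, \\texttt{value} and \\texttt{next} definitions above): the values of the vertices visited on the way toward the least vertex greater than or equal to a target can be collected as follows.\n\n\\begin{minted}{haskell}\n-- Values of the vertices visited from n up to and including\n-- the final vertex of the traversal toward target y.\nvisited :: Node -> U -> [U]\nvisited n y = value n : case next n y of\n    Nothing -> []\n    Just n' -> visited n' y\n\\end{minted}\n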
The fingerprint can thus also be computed during the traversal, by maintaining a single bit of state indicating whether the traversal is in the upward or downward phase:\n\n\\begin{minted}[mathescape]{haskell}\ndata Phase = Up | Down\n\n-- A node is the peak of the traversal if the next step leaves\n-- its subtree upward.\nisPeak :: Node -> U -> Bool\nisPeak src@(Leaf _ _ p) y = (next src y) == p\nisPeak src@(Inner _ _ _ _ _ p) y = (next src y) == p\n\n-- Compute the fingerprint for the range $\\range{x}{y}{S}$,\n-- and also return the last vertex of the traversal.\n-- The accumulator must initially be $\\mathtt{emptyFp}$, the phase $\\mathtt{Up}$,\n-- and the starting vertex of the traversal $\\mathtt{x}$.\ncomputeFingerprint :: D -> Phase -> Node -> U -> U -> (D, Node)\n-- Phase $\\mathtt{Up}$ corresponds to computing $\\mathtt{accGeq}$.\ncomputeFingerprint acc Up src@(Leaf v fp _) x y =\n    case (next src y) of\n        Nothing\n            | v < x -> (acc, src)\n            | otherwise -> (combine acc (liftFp fp) emptyFp, src)\n        Just n\n            | v < x -> computeFingerprint acc Up n x y\n            | otherwise -> computeFingerprint\n                (combine acc (liftFp fp) emptyFp) Up n x y\ncomputeFingerprint acc Up src@(Inner _ v r _ _ _) x y =\n    case (next src y) of\n        Nothing\n            | v < x -> (acc, src)\n            | otherwise -> (combine acc (liftFp (hash v)) emptyFp, src)\n        Just n\n            | isPeak src y -> computeFingerprint\n                (combine acc (liftFp (hash v)) emptyFp) Down n x y\n            | v < x -> computeFingerprint acc Up n x y\n            | otherwise -> computeFingerprint\n                (combine acc (liftFp (hash v)) (label r)) Up n x y\n-- Phase $\\mathtt{Down}$ corresponds to computing $\\mathtt{accLt}$.\ncomputeFingerprint acc Down src@(Leaf v fp _) x y\n    | v >= y    = (acc, src)\n    | otherwise = (combine acc (liftFp fp) emptyFp, src)\ncomputeFingerprint acc Down src@(Inner l v _ _ _ _) x y =\n    case (next src y) of\n        Nothing\n            | v >= y -> (acc, src)\n            | otherwise -> ((combine\n                    acc\n                    (liftFp (label l))\n                    (liftFp (hash v))\n                ), src)\n        Just n\n            | v >= y -> computeFingerprint acc Down n x y\n            | otherwise -> computeFingerprint\n                (combine\n                    acc\n                    (liftFp (label l))\n                    (liftFp (hash v))\n                ) Down n x y\n\\end{minted}\n\nBy computing the fingerprints of successive ranges via this function, using the output vertex as the starting point for the next traversal, the fingerprints are computed while traversing any edge at most twice (once during a downward traversal, and once during an upward traversal). The overall time complexity for $k$ ranges is thus in $\\complexity{\\min(k \\cdot \\log(\\abs{S}), \\abs{S})}$. Since the function is tail recursive, the space overhead is in $\\complexity{1}$.\n\n\\subsection{Differences to Homomorphic Hashing}\n\nThe construction of lifting a function to sets resembles that of multiset homomorphic hash functions, introduced in \\cite{clarke2003incremental}, with further instantiations given in \\cite{cathalo2009comparing} and \\cite{maitin2017elliptic}. 
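\nBefore the formal definitions, a minimal sketch of the flavour of such schemes (hypothetical; addition modulo $2^{64}$ via \\texttt{Word64} overflow, with the same stand-in hash idea as before, not one of the cited constructions, and ignoring for now the security concerns examined in \\cref{crypto}): the hash of a multiset is the sum of the item hashes, so insertions and deletions are constant-time, constant-space updates.\n\n\\begin{minted}{haskell}\nimport Data.Word (Word64)\n\n-- Stand-in hash; a deployment would use a high quality hash function.\nhashW :: Word64 -> Word64\nhashW u = (u + 1) * 0x9E3779B97F4A7C15\n\n-- Hash of a multiset: the sum (mod 2^64) of the item hashes,\n-- counted with multiplicity.\nnewtype MSetHash = MSetHash Word64 deriving (Eq, Show)\n\nemptyMSet :: MSetHash\nemptyMSet = MSetHash 0\n\ninsertItem :: Word64 -> MSetHash -> MSetHash\ninsertItem u (MSetHash h) = MSetHash (h + hashW u)\n\n-- Deletion uses the group inverse, i.e. subtraction mod 2^64.\ndeleteItem :: Word64 -> MSetHash -> MSetHash\ndeleteItem u (MSetHash h) = MSetHash (h - hashW u)\n\\end{minted}\n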
We briefly define multiset homomorphic hashing in order to point out salient differences to our construction.\n\n\\begin{definition}\nLet $\\mathcal{U}_0 \\defeq (U_0, \\groupaddsym_0, \\neutraladd_0)$ and $\\mathcal{U}_1 \\defeq (U_1, \\groupaddsym_1, \\neutraladd_1)$ be monoids and let $\\fun{\\f}{U_0}{U_1}$.\n\nWe call $\\f$ a \\defined{monoid homomorphism from $\\mathcal{U}_0$ to $\\mathcal{U}_1$} if it satisfies two properties:\n\n\\begin{description}\n  \\item[preserves operation:] for all $x, y \\in U_0$: $\\f(x \\groupaddsym_0 y) = \\f(x) \\groupaddsym_1 \\f(y)$\n  \\item[preserves neutral element:] $\\f(\\neutraladd_0) = \\neutraladd_1$\n\\end{description}\n\nNote that the second property has to be required separately for monoids: the first property only yields $\\f(\\neutraladd_0) = \\f(\\neutraladd_0) \\groupaddsym_1 \\f(\\neutraladd_0)$, which forces $\\f(\\neutraladd_0) = \\neutraladd_1$ in a group, but not in a general monoid.\n\\end{definition}\n\n\\begin{definition}\nLet $\\mathcal{S} \\defeq (\\N^U, \\cup, \\emptyset)$ be the monoid of multisets over the universe $U$ under multiset union (where multiplicities add), $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, and $\\fun{\\f}{\\N^U}{M}$.\n\nWe call $\\f$ a \\defined{multiset homomorphic hash function} if $\\f$ is a hash function and a monoid homomorphism from $\\mathcal{S}$ to $\\mathcal{M}$.\n\\end{definition}\n\nA multiset homomorphic hash function makes it possible to incrementally maintain the hash of a multiset under insertions using only $\\complexity{1}$ storage space, by storing the hash of the multiset and combining it with the hash of any inserted item (as in the sketch above). This can be generalized to support deletions as well: by allowing negative multiplicities for multiset items, multisets form a group under union. If the hash function is a group homomorphism, the hash of the multiset can be updated on deletion by subtracting the hash of the deleted item.\n\nThis is more efficient than our approach, which requires $\\complexity{n}$ space for the tree structure. However, this efficiency comes at the cost of flexibility: since multiset union is commutative, the target monoid or group of fingerprints must be commutative as well. Since our protocols require the ability to compute fingerprints over arbitrary ranges and we need the tree of size $\\complexity{n}$ to do so in any case, homomorphic hashing would not yield any space usage benefits, and we thus choose the approach that does not require commutativity.\n\nWe additionally want to explicitly work with regular sets, not multisets. The reason why homomorphic hashes operate on multisets is that they admit more natural homomorphisms under union. For example, taking the size of a multiset is a homomorphism into the additive monoid on the natural numbers, whereas this is not true for regular sets, as e.g. $\\abs{\\set{x, y} \\cup \\set{y, z}} \\neq \\abs{\\set{x, y}} + \\abs{\\set{y, z}}$. Set union is however well behaved with respect to taking the size of the sets if it is a disjoint union, i.e. no item occurs in both sets.\n\nFor an algebraic-flavored characterization of our fingerprint functions, we thus need to account for items occurring on both sides of the union. This alone would still force a commutative monoid, so we must additionally restrict the possible inputs to be sorted. 
This leads to the following definition:\n\n\\begin{definition}\nLet $U$ be a set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, and $\\partialfun{\\f}{\\powerset{U}}{M}$ a partial function mapping all finite subsets of $U$ into $M$.\n\nWe call $\\f$ a \\defined{\\somewhatmorphism} if $\\f(\\emptyset) = \\neutraladd$ and for all finite sets $S_0, S_1 \\in \\powerset{U}$ such that $\\max(S_0) \\prec \\min(S_1)$: $\\f(S_0 \\cup S_1) = \\groupadd{\\f(S_0)}{\\f(S_1)}$.\n\nThe condition $\\f(\\emptyset) = \\neutraladd$ has to be required explicitly: in a general monoid it does not follow from the union property (in a group it would, since there the only idempotent element is the neutral element).\n\\end{definition}\n\nThis definition captures exactly the functions of form $\\lift{\\f}{\\mathcal{M}}$, as shown in the following propositions:\n\n\\begin{proposition}\nLet $U$ be a set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, and $\\fun{\\f}{U}{M}$.\n\nThen $\\lift{\\f}{\\mathcal{M}}$ is a \\somewhatmorphism.\n\n\\begin{proof}\n$\\lift{\\f}{\\mathcal{M}}(\\emptyset) = \\neutraladd$ holds by definition. Let $S_0, S_1 \\in \\powerset{U}$ be finite sets such that $\\max(S_0) \\prec \\min(S_1)$. Then:\n\n\\begin{align*}\n\\lift{\\f}{\\mathcal{M}}(S_0 \\cup S_1) &= \\biggroupadd_{\\substack{s_i \\in S_0 \\cup S_1,\\\\ \\text{ascending}}} \\f(s_i)\\\\\n&= \\biggroupadd_{\\substack{s_i \\in S_0,\\\\ \\text{ascending}}} \\f(s_i) \\groupaddsym \\biggroupadd_{\\substack{s_i \\in S_1,\\\\ \\text{ascending}}} \\f(s_i)\\\\\n&= \\lift{\\f}{\\mathcal{M}}(S_0) \\groupaddsym \\lift{\\f}{\\mathcal{M}}(S_1)\\\\\n\\end{align*}\n\\end{proof}\n\\end{proposition}\n\n\\begin{proposition}\nLet $U$ be a set, $\\preceq$ a linear order on $U$, $\\mathcal{M} \\defeq (M, \\groupaddsym, \\neutraladd)$ a monoid, and $\\partialfun{\\g}{\\powerset{U}}{M}$ a \\somewhatmorphism.\n\nThen there exists $\\fun{\\f}{U}{M}$ such that $\\g = \\lift{\\f}{\\mathcal{M}}$.\n\n\\begin{proof}\nDefine $\\fun{\\f}{U}{M}$ as $\\f(u) \\defeq \\g(\\set{u})$. We show by induction on the size of $S \\subseteq U$ that $\\g(S) = \\lift{\\f}{\\mathcal{M}}(S)$.\n\n\\vspace{10pt}\n\n\\textbf{IB:} If $S = \\emptyset$, then $\\g(S) = \\neutraladd = \\lift{\\f}{\\mathcal{M}}(S)$ directly by the definition of a \\somewhatmorphism.\n\nIf $S = \\set{x}$, then $\\g(S) = \\f(x) = \\lift{\\f}{\\mathcal{M}}(S)$.\n\n\\textbf{IH:} For all sets $T$ with $\\abs{T} = n$ it holds that $\\g(T) = \\lift{\\f}{\\mathcal{M}}(T)$.\n\n\\textbf{IS:} Let $S \\subseteq U$ with $\\abs{S} = n + 1$, then:\n\n\\begin{align*}\n\\g(S) &= \\g(\\set{\\min(S)}) \\groupaddsym \\g(S \\setminus \\set{\\min(S)})\\\\\n&\\overset{\\text{IH}}= \\g(\\set{\\min(S)}) \\groupaddsym \\lift{\\f}{\\mathcal{M}}(S \\setminus \\set{\\min(S)})\\\\\n&= \\f(\\min(S)) \\groupaddsym \\lift{\\f}{\\mathcal{M}}(S \\setminus \\set{\\min(S)})\\\\\n&= \\lift{\\f}{\\mathcal{M}}(S)\\\\\n\\end{align*}\n\n\\vspace{10pt}\nAs $\\g$ is only defined over finite inputs, we thus have $\\g = \\lift{\\f}{\\mathcal{M}}$.\n\\end{proof}\n\\end{proposition}\n\n\\section{Monoidal Fingerprints}\n\\label{collisions}\n\nNow that we have characterized a family of functions that admit efficient recomputation in response to changes to the underlying set as well as efficient computation for ranges within the set, the remaining task is to find such functions which are also suitable fingerprints. 
This consists of deciding on the monoid of fingerprints, and choosing the mapping from items to monoid elements.\n\nAs the fingerprint of a singleton set $\\lift{\\f}{\\mathcal{M}}(\\set{u})$ is equal to $\\f(u)$, $\\f$ must itself already be a hash function. Typical hash functions map values to bit strings of a certain length, i.e. the codomain is $\\set{0, 1}^k$ for some $k \\in \\N$. We will thus consider monoids whose elements can be represented by such bit strings.\n\nA natural choice for the monoid universe is then $\\range{0}{2^k}{\\N}$; simple monoidal operations on this universe include bitwise xor, addition modulo $2^k$, and multiplication modulo $2^k$. In the following, addition and multiplication will always be implicitly taken modulo $2^k$. Note that $\\xor$ and addition also admit inverses, so the slightly simplified computation of fingerprints can be used.\n\nOf these three options, multiplication is the least suitable, because multiplying any number by $0$ yields $0$. Consequently, for every set containing an item $u$ with $\\f(u) = 0$ the fingerprint of the set would also be $0$, which clearly violates the criterion that all possible values for fingerprints occur with equal probability.\n\nAddition and xor however are particularly well-behaved in that regard, as they form finite commutative groups:\n\n\\begin{proposition}\nLet $\\mathcal{G} \\defeq (G, \\groupaddsym, \\neutraladd)$ be a finite commutative group, i.e. a group with a finite universe such that for all $x, y \\in G$: $\\groupadd{x}{y} = \\groupadd{y}{x}$. Let $U$ be a set and let $\\fun{\\f}{U}{G}$ be a hash function.\n\nThen $\\lift{\\f}{\\mathcal{G}}$ is a hash function as well.\n\n\\begin{proof}\nWe first show that for any fixed $x, z \\in G$ and randomly chosen $y \\in G$ the probability that $\\groupadd{x}{y} = z$ is $\\frac{1}{\\abs{G}}$.\n\nFor $x, z \\in G$ there is $y \\in G$ such that $\\groupadd{x}{y} = z$, namely $y \\defeq \\groupsubtract{z}{x}$ (because $\\groupadd{x}{\\groupsubtract{z}{x}} \\overset{\\text{commutativity}}= \\groupadd{\\groupadd{x}{\\inverseadd{x}}}{z} = z$). As $G$ is finite, this $y$ has to be unique, since otherwise there would not be enough elements left that can be added to $x$ to result in all of the $\\abs{G} - 1$ possible remaining $z'$. Thus for any fixed $x, z$ the probability that a randomly chosen $y$ satisfies $\\groupadd{x}{y} = z$ is $\\frac{1}{\\abs{G}}$.\n\nComputing $\\lift{\\f}{\\mathcal{G}}(S)$ consists of repeatedly adding group elements which by themselves are distributed uniformly at random if $S$ was chosen randomly and $\\f$ is a high quality hash function. Thus after every step the accumulated value equals any given $z \\in G$ with probability $\\frac{1}{\\abs{G}}$; in particular this holds for the final result.\n\\end{proof}\n\nA more formal proof for xor specifically is given in section 6.2 of \\cite{maziarz2021hashing}.\n\\end{proposition}\n\nBy the same argument, knowing the fingerprint for some set does not provide any information about the fingerprints for sets that differ in even a single value. In conclusion, high quality fingerprints can be achieved by choosing any associative and commutative operation that forms a finite group, for example xor or addition modulo $2^k$, as long as values are mapped into the group with a high quality hash function.\n\nWhile multiplication does not form a group when performed on the numbers in $\\range{0}{2^k}{\\N}$, there still are groups based on multiplication modulo some number, e.g. 
$\\Z_n^\\ast$, the group yielded by multiplication modulo $n$ on the set $\\set{x \\in \\range{0}{n}{\\N} \\mid \\text{$x$ is coprime to $n$}}$. In the following, when talking about multiplication, we will assume that the universe is chosen such that multiplication forms a group.\n\n\\section{Cryptographically Secure Fingerprints}\n\\label{crypto}\n\nIn the protocols for synchronizing data structures, fingerprints of sets are used for probabilistic equality checking: sets with equal fingerprints are assumed to be equal. Synchronization can thus become faulty if it involves unequal sets with equal fingerprints. If the universe of possible fingerprints is chosen large enough, and the distribution of fingerprints of randomly chosen sets is close to uniform on that universe, the probability for this to occur becomes negligible.\n\nRandom distribution of input sets is however a very strong assumption. What if a malicious party can influence the sets to be fingerprinted, with the goal of causing fingerprint collisions and consequently triggering faulty behavior of the system? Cryptographically secure fingerprints are an answer to this problem, being chosen such that it is computationally infeasible for an adversary to find inputs that lead to faulty synchronization.\n\n\\subsection{General Considerations}\n\\label{crypto-general}\n\nA typical definition of cryptographically secure hash functions is the following~\\cite{menezes2018handbook}:\n\n\\begin{definition}\nA \\defined{secure hash function} is a hash function $\\fun{\\h}{U}{D}$ that satisfies three additional properties:\n\n\\begin{description}\n  \\item[pre-image resistance:] Given $d \\in D$, it is computationally infeasible to find a $u \\in U$ such that $\\h(u) = d$.\n  \\item[second pre-image resistance:] Given $u \\in U$, it is computationally infeasible to find a $u' \\in U, u' \\neq u$ such that $\\h(u) = \\h(u')$.\n  \\item[collision resistance:] It is computationally infeasible to find $u, v \\in U, u \\neq v$ such that $\\h(u) = \\h(v)$.\n\\end{description}\n\\end{definition}\n\nPre-image resistance has no influence on the vulnerability of the protocol to malicious actors, so all of the following discussion will focus on collision resistance only.\n\nSince $\\lift{\\f}{\\mathcal{M}}(\\set{u}) = \\f(u)$, $\\f$ must necessarily be collision resistant if $\\lift{\\f}{\\mathcal{M}}$ is to be collision resistant. This alone is unfortunately not sufficient; we will see a specific counterexample in \\cref{specific-monoids}. Choosing a secure hash function $\\f$ always comes with a performance cost; insecure hash functions usually take less time and less space to compute. If the synchronization protocol is only being run in a trusted environment, an insecure hash function might be preferable.\n\nWhether a hash function is secure is not a binary property, but depends on what is considered ``feasible'' for an adversary. Greater security can usually be obtained at the cost of longer digests and longer computation times. Before presenting options for secure hash functions, we thus examine the impact of hash collisions first.\n\nWe can generally distinguish between malicious actors in two different positions: those who can actively impact the contents of the data structure to be synchronized, and those who passively relay updates and need to search for a collision within the available data. 
Since a set of $n$ items has $2^n$ subsets while there are only $2^k$ fingerprints of bit length $k$, the pigeonhole principle guarantees that any set of at least $k + 1$ items contains two distinct subsets with colliding fingerprints.\n\nAn attack against the fingerprinting scheme by an active adversary can involve computing many fingerprints and adding the required items to the set once a collision has been found. Such an attack is not usable by the passive adversary. We will primarily focus on discussing active adversaries, as they are strictly more powerful than passive ones. Yet it should be kept in mind that passive adversaries can be more common in certain settings, particularly in peer-to-peer systems: if a node is interested in synchronizing a data structure, it probably trusts the source of the data; otherwise it would have little reason to expend resources on synchronization. The data may however be synchronized not with the original source but with completely untrusted nodes. Additionally, an active adversary that does not want to risk detection by adding suspicious items to the data structure is restricted to the operations of a passive adversary.\n\nFingerprint collisions result in parts of the data structure not being synchronized, so information is being withheld from one or both of the synchronizing nodes. When a malicious node synchronizes with an honest one, the malicious node can withhold arbitrary information by simply pretending not to have certain data, which does not require finding collisions at all.\n\nSo the cases in which a malicious node can do actual damage by finding a collision are those where it supplies data to two different, honest nodes such that these two nodes perform faulty synchronization amongst each other. Specifically: let $\\mathcal{M}$ be a malicious node, $\\mathcal{A}$ and $\\mathcal{B}$ be honest nodes, then a successful attack consists of $\\mathcal{M}$ crafting sets $X_A, X_B$ and sending these to $\\mathcal{A}$ and $\\mathcal{B}$ respectively, so that when $\\mathcal{A}$ and $\\mathcal{B}$ then run the synchronization protocol, they end up with different data structures. A passive adversary does not craft $X_A, X_B$ but must find them as subsets of some set $X$ supplied by an honest node. As we assume the underlying hash function $\\f$ to be secure, at least one non-singleton set has to be involved in a collision.\n\nThere are some qualitative arguments that even if an adversary finds a fingerprint collision, the impact is rather low. Let $S_A \\subseteq X_A$ and $S_B \\subseteq X_B$ be nonequal sets with the same fingerprint. To have any impact on the correctness of a particular protocol run, their fingerprints actually need to be compared during that run. For that to happen, they need to be of the form $\\range{x_A}{y_A}{X_A}$ and $\\range{x_B}{y_B}{X_B}$ respectively. The fingerprints of these ranges can then be compared if one of the nodes sends the fingerprint for the range $\\range{\\min(x_A, x_B)}{\\max(y_A, y_B)}{X_i}$. That alone is still not sufficient, as any item within that range that is not part of the sets would change the fingerprint. 
So the two ranges actually need to be of the form $\\range{\\min(x_A, x_B)}{\\max(y_A, y_B)}{X_A}$ and $\\range{\\min(x_A, x_B)}{\\max(y_A, y_B)}{X_B}$, or simplified: there have to be $x, y \\in U$ such that $S_A = \\range{x}{y}{X_A}$ and $S_B = \\range{x}{y}{X_B}$.\n\n\\label{why-random-boundaries}\nIf the adversary has found such sets, this still provides no guarantee that the range $\\range{x}{y}{X_i}$ is being compared during the synchronization session of $\\mathcal{A}$ and $\\mathcal{B}$. For a set containing $n$ items, there are $n^2 - (n - 1) \\in \\complexity{n^2}$ distinct ranges, but only $\\complexity{n}$ are compared in a given protocol run, since the worst-case message complexity is $\\complexity{n}$ (see \\cref{communication-complexity}). These numbers should only be considered as rough guidelines; they gloss over details such as the fact that there are more ranges containing roughly $\\frac{n}{2}$ items than there are ranges containing almost all or almost no items. Yet they demonstrate that finding suitable colliding ranges still does not guarantee that a particular pair of nodes will be affected. In particular, there is no need for $\\mathcal{A}$ and $\\mathcal{B}$ to choose the range boundaries that occur in a protocol run deterministically (see \\cref{random-boundaries}). They can even perform multiple randomized protocol runs in parallel, while keeping track of item transmissions and sending every item at most once across all these protocol runs.\n\nAnother factor mitigating the impact of an adversary finding fingerprint collisions is the communication with other, non-colluding nodes. A fourth party could send some $u \\in U, x \\preceq u \\prec y$ to $\\mathcal{A}$ or $\\mathcal{B}$ before $\\mathcal{A}$ and $\\mathcal{B}$ synchronize, disrupting the collision.\n\nFinally, in systems where nodes repeatedly synchronize with different other nodes, a single fingerprint collision in a single synchronization session would merely delay propagation of information rather than stop it completely (note that this does not hold for collisions of two singleton sets). Since the attack model requires both $\\mathcal{A}$ and $\\mathcal{B}$ to synchronize with more than one node in total, this might apply to many affected settings. Peer-to-peer systems communicating on a random overlay network in particular fall into this category. A malicious actor with enough control over the communication of other nodes to guarantee a tangible benefit from fingerprint collisions can likely disrupt operation of the network more effectively by exercising that control than by sabotaging synchronization.\n\nAll of these arguments are however purely qualitative and should as such be taken into account with caution; they are not a substitute for quantitative cryptographic analysis. A strong attacker might be able to find many pairs of sets of colliding fingerprints, or many sets that all share the same fingerprint, and none of the above arguments considers these cases.\n\nIn a system with a consensus mechanism among all participating nodes, the choice of $\\f$ can periodically be changed, with the frequency being some function of the time it takes to find a viable collision, the cost of rebuilding all auxiliary data structures, and the general level of paranoia among the participating nodes. 
The $\complexity{n}$ cost of rebuilding the data structures is then amortized over the number of synchronization sessions occurring between rebuilds, which is still an improvement over protocols that need to perform $\complexity{n}$ computations per synchronization session.

\subsection{Specific Monoids}
\label{specific-monoids}

Multiplication, addition and xor as ways of combining hashes have been studied in \cite{bellare1997new}, in the context of hashing sequences, of which our ordered sets are a special case. Their setting requires not only associativity but also commutativity. They thoroughly break xor by reducing the problem of finding a collision to that of solving a system of linear equations. We will not restate the full attack, but we will describe a weaker but simpler mechanism based on similar ideas that allows finding subsets whose fingerprint is a specific target value. This can be used as an optimization in a trusted setting (see \cref{subset-checks}).

The main observation is that xor is the additive operation in $\mathds{F}_2$, the finite field on the two elements $\set{0, 1}$. A fingerprint can be interpreted as a vector $\begin{bmatrix}b_1 & b_2 & \ldots & b_k\end{bmatrix} \in \set{0, 1}^{1 \times k}$. The fingerprint of a set $S = \set{s_1, s_2, \ldots, s_n}$ can thus be computed as the sum (within $\mathds{F}_2$, i.e. as the xor) of the vectors corresponding to $s_1, s_2, \ldots, s_n$. The fingerprint of some $S' \subseteq S$ can also be regarded as the sum over all of these vectors, with each vector first being multiplied by a coefficient of $1$ if $s_i \in S'$ and of $0$ if $s_i \notin S'$. In other words, the fingerprint of $S'$ is a linear combination of the hashes of the items in $S$.

This enables us to efficiently find subsets whose fingerprint is a particular vector. Let $S = \set{s_1, s_2, \ldots, s_n}$ be a set of items, and let $b = \begin{bmatrix}b_1 & b_2 & \ldots & b_k\end{bmatrix} \in \set{0, 1}^{1 \times k}$ be the target fingerprint. For $0 < i \leq n$, define $a_i = \begin{bmatrix}a_{i, 1} & a_{i, 2} & \ldots & a_{i, k}\end{bmatrix} \in \set{0, 1}^{1 \times k}$ to be $\f(s_i)$ interpreted as a vector over $\mathds{F}_2$. The coefficients $x_1, x_2, \ldots, x_n$ of the linear combinations of the $a_i$ that equal $b$ are exactly the solutions to the following system of linear equations over $\mathds{F}_2$, whose coefficient matrix holds the $a_i$ as its columns:

\[
\begin{bmatrix}
a_{1, 1} & a_{2, 1} & \ldots & a_{n, 1}\\
a_{1, 2} & a_{2, 2} & \ldots & a_{n, 2}\\
\vdots & \vdots & \ddots & \vdots \\
a_{1, k} & a_{2, k} & \ldots & a_{n, k}\\
\end{bmatrix} \cdot \begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{bmatrix} = \begin{bmatrix}
b_1\\
b_2\\
\vdots\\
b_k
\end{bmatrix}
\]

Solutions can be found by Gaussian elimination in $\complexity{n^3}$ time, an exponential improvement over brute forcing by computing the fingerprints of all subsets.
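The following is a minimal Haskell sketch of this procedure, in the style of the code used elsewhere in this chapter. Rather than eliminating a full matrix, it inserts the hashes one by one into an echelon basis, which is equivalent and keeps the code short. Digests and bookkeeping masks are represented as arbitrary-precision integers; all names (\texttt{Vec}, \texttt{Mask}, \texttt{pivot}, \texttt{reduce}, \texttt{buildBasis}, \texttt{solve}) are our own and not part of any library API:

\begin{minted}{haskell}
import Data.Bits (shiftL, shiftR, testBit, xor)
import Data.List (foldl', insertBy)
import Data.Ord (Down (..), comparing)

-- Hashes and fingerprints: length-k bit strings, stored as Integers.
type Vec = Integer
-- A mask whose i-th bit records whether item i was xor-ed in.
type Mask = Integer

-- Index of the highest set bit of a nonzero vector.
pivot :: Vec -> Int
pivot v = length (takeWhile (/= 0) (iterate (`shiftR` 1) v)) - 1

-- Reduce a vector against an echelon basis (kept in order of strictly
-- decreasing pivots), accumulating the masks of all basis vectors used.
reduce :: [(Int, Vec, Mask)] -> (Vec, Mask) -> (Vec, Mask)
reduce basis vm = foldl' step vm basis
  where
    step (v, m) (p, bv, bm)
      | testBit v p = (v `xor` bv, m `xor` bm)
      | otherwise   = (v, m)

-- Insert the item hashes one by one, building an echelon basis. A hash
-- that reduces to zero is linearly dependent; its mask then already
-- describes a nonempty subset whose fingerprint is the zero vector.
buildBasis :: [Vec] -> [(Int, Vec, Mask)]
buildBasis hashes = foldl' insert [] (zip hashes [0 ..])
  where
    insert basis (a, i) = case reduce basis (a, 1 `shiftL` i) of
      (0, _) -> basis
      (v, m) -> insertBy (comparing (\(p, _, _) -> Down p)) (pivot v, v, m) basis

-- Given the item hashes and a target fingerprint b, return the indices
-- of a subset whose xor equals b, if such a subset exists.
solve :: [Vec] -> Vec -> Maybe [Int]
solve hashes b = case reduce (buildBasis hashes) (b, 0) of
  (0, m) -> Just [i | i <- [0 .. length hashes - 1], testBit m i]
  _      -> Nothing
\end{minted}

Note that a hash reducing to zero during \texttt{buildBasis} directly exposes a nonempty subset whose fingerprint equals that of the empty set, which is the kind of collision the full attack of \cite{bellare1997new} searches for.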
As xor admits polynomial-time attacks by solving linear equations, the authors of \cite{bellare1997new} next consider addition and multiplication. They unify parts of their discussion by relating the hardness of finding collisions to solving the balance problem: in a commutative group $(G, \groupaddsym, \neutraladd)$, given a set of group elements $S = \set{s_1, s_2, \ldots, s_n}$, find disjoint, nonempty subsets $S_0 = \set{s_{0, 0}, s_{0, 1}, \ldots, s_{0, k}} \subseteq S$ and $S_1 = \set{s_{1, 0}, s_{1, 1}, \ldots, s_{1, l}} \subseteq S$ such that $s_{0, 0} \groupaddsym s_{0, 1} \groupaddsym \ldots \groupaddsym s_{0, k} = s_{1, 0} \groupaddsym s_{1, 1} \groupaddsym \ldots \groupaddsym s_{1, l}$. They then reduce the hardness of the balance problem to that of other, better-studied problems.

For addition, the balance problem is as hard as subset sum, which was at the time of publication conjectured to be sufficiently hard. Wagner however showed in \cite{wagner2002generalized} how to solve the balance problem for addition in subexponential time. To give an impression of the impact of this attack, consider \cite{mihajloska2015reviving}, which uses addition for combining SHA-3~\cite{dworkin2015sha} digests, producing fingerprints of $2688$ to $4160$ bits or $6528$ to $16512$ bits to achieve security levels of $128$ or $256$ bits respectively against Wagner's attack. \cite{lyubashevsky2005parity} gives an improvement over Wagner's attack, finding collisions in $\complexity{2^{n^\epsilon}}$ time for arbitrary $\epsilon < 1$, further weakening addition as a choice of monoid operation.

For multiplication, the balance problem is as hard as the discrete logarithm problem in the group. This is a more ``traditional'' hardness assumption than subset sum; there are groups for which no efficient algorithm is known. The main drawback is that multiplication is less efficient to compute than addition. \cite{stanton2010fastad} includes a comparison between the performance of addition and multiplication for incremental hashing; the additive hash outperforms the multiplicative one by two orders of magnitude, even though it uses longer digests to account for Wagner's attack.

For our context, in which fingerprints are frequently sent over the network, longer computation time might still be preferable to longer hashes. Fingerprints based on multiplication nevertheless need larger digests than traditional, non-incremental hash functions. \cite{maitin2017elliptic} suggests fingerprints of $3200$ bits to achieve $128$-bit security and, motivated by this, uses binary elliptic curves as the underlying group to achieve more compact fingerprints: fingerprints of length $2k$ bits give $\complexity{2^k}$ security.

\cite{bellare1997new} also proposes a fourth monoid based on lattices. \cite{lewi2019securing} gives a specific instantiation providing $200$ bits of security with fingerprints of size $16 \cdot 1024 = 16384$ bits.

All of the preceding monoid operations are commutative, which our approach to fingerprint computation does not require. A typical associative but not commutative operation is matrix multiplication. The study of a family of hash functions based on multiplication of invertible matrices was initiated in \cite{zemor1991hash}; the security of these functions is related to solving hard graph problems on the Cayley graph of the multiplication group.
\cite{petit2011rubik} gives a good overview of the general principle and the security aspects of such Cayley hash functions.

While \cite{tillich1994hashing}, an improvement over the originally proposed scheme, has been successfully attacked in \cite{grassl2011cryptanalysis, petit2010preimages}, there are several modifications~\cite{petit2009graph, bromberg2017navigating, sosnovski2016cayley} for which no attacks are currently known, and \cite{mullan2016text} showed random self-reducibility for Cayley hash functions.

Whereas the schemes based on commutative groups operate on two bitstrings at a time, the Cayley hash functions operate on two individual bits at a time. Attacks thus assume that the manipulated bits can be freely chosen, using this to e.g. craft palindromic inputs. This may mean that such attacks are hard to apply to our setting, where the input bits are produced in non-reorderable batches by another, cryptographically secure hash function. After finding a bit sequence that yields a Cayley hash collision, an attacker still needs to find items for which the concatenation of their hashes is the desired bit sequence.

Alternatively, one can forgo the additional hash function and apply the Cayley hash function directly to the encodings of items. This simplifies the overall scheme and removes the reliance on a second hash function. On the other hand, if one expects the additional hash function to be harder to attack than the Cayley hash, then making the attacks against the Cayley hash function less flexible might overall make attacks more difficult. Furthermore, Cayley hash functions are usually slower than typical choices for the hash function from items to bitstrings, so using both can produce significant speedups if items have long encodings.

Aside from Cayley hashes, we are not aware of any non-commutative monoids used for hashing. We would further like to note that any hashes based on groups still have more structure than required for our designs, as we do not need inverse elements to exist. Strictly speaking, our designs do not even require a neutral element: the fingerprint of the empty set never needs to be exchanged in a protocol run, so we could just as well have defined fingerprints for all nonempty sets without relying on the neutral element to do so. The presentation we chose is merely more elegant. Finally, we do not require preimage resistance for our fingerprints. Suitable hash functions could thus be located in a more general design space than is studied in any literature we know of.
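To illustrate the principle behind these constructions, the following is a toy Haskell sketch of a Zémor-style Cayley hash over $2 \times 2$ matrices. All names are our own, and the small modulus renders this instance completely insecure; it merely demonstrates how matrix multiplication provides associativity without commutativity:

\begin{minted}{haskell}
import Data.List (foldl')

-- A small prime modulus, for illustration only; a secure instantiation
-- requires carefully chosen, much larger parameters.
modulus :: Integer
modulus = 1000003

-- 2x2 matrices over the integers modulo 'modulus', stored row-major.
data M = M Integer Integer Integer Integer deriving (Eq, Show)

-- Matrix multiplication: associative, but not commutative.
mmul :: M -> M -> M
mmul (M a b c d) (M e f g h) =
  M ((a * e + b * g) `mod` modulus) ((a * f + b * h) `mod` modulus)
    ((c * e + d * g) `mod` modulus) ((c * f + d * h) `mod` modulus)

identity :: M
identity = M 1 0 0 1

-- One generator matrix per input bit.
gen :: Bool -> M
gen False = M 1 1 0 1
gen True  = M 1 0 1 1

-- The hash of a bit sequence is the product of its generators. Thanks to
-- associativity, hashing distributes over concatenation:
--   cayleyHash (xs ++ ys) == cayleyHash xs `mmul` cayleyHash ys
-- but, lacking commutativity, not over reordering:
--   cayleyHash [False, True] /= cayleyHash [True, False]
cayleyHash :: [Bool] -> M
cayleyHash = foldl' (\acc b -> acc `mmul` gen b) identity
\end{minted}

The distributivity over concatenation is exactly the property our range-based fingerprints need from a monoid.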
\section{Pseudorandom Data Structures}
\label{randomization}

Monoidal fingerprints allow efficient deterministic computation of fingerprints, but at the price of having to rely on some non-standard cryptographic primitives. A more established cryptographic construction is that of Merkle trees~\cite{merkle1989certified} and related data structures, all based on hashing the concatenation of child hashes. There is a significant body of work in the context of authenticated data structures based on these constructions, from rather simple balanced binary trees~\cite{naor2000certificate} to arbitrary directed acyclic graphs~\cite{martel2004general}, and we will be able to refer to such work for proofs of collision resistance.

The Merkle construction is however not associative when using a cryptographically secure hash function $\h$, as $\h(\h(\h(a) \concatenate \h(b)) \concatenate \h(c)) = \h(\h(a) \concatenate \h(\h(b) \concatenate \h(c)))$ would constitute a violation of collision resistance. Different tree representations of the same set thus lead to completely different fingerprints. So in order to use this construction, a unique search tree representation must be defined for every set.

\cite{uniquerepresentation} shows that deterministic search trees with unique shapes require $\complexity{\sqrt{n}}$ time for insertions and deletions in the worst case, making them unsuitable for our use case. A natural extension is to look at randomized data structures that define a unique representation which allows modification and search in $\complexity{\log(n)}$ time with high probability.

This has been thoroughly researched in the field of \defined{history-independent data structures}, intuitively speaking, data structures whose bit-level representations do not leak information about their construction sequence. This requirement has been shown in~\cite{hartline2005characterizing} to be equivalent to having a unique representation. For more background, we refer to the excellent introduction of~\cite{bender2016anti}.

In the following, we present two such data structures for representing sets, a randomized binary tree and the skip list, and define Merkle-like fingerprinting schemes for them such that the fingerprints of arbitrary ranges can be computed in logarithmic time with high probability. As all nodes need to arrive at the same representation, no true randomness can be involved. Instead, all random decisions are based on pseudorandom bits derived from the data itself.

These schemes allow cryptographically secure fingerprints whose collision resistance does not depend on uncommon cryptographic building blocks. The price to pay is that computing fingerprints can take linear time in the worst case. Note however that the communication complexity of any synchronization protocol using these fingerprints is completely unaffected by the randomized computation time.

Unfortunately, adversaries can efficiently create data sets for which the randomized solutions degrade to linear performance, thus enabling denial-of-service attacks, as the cost of computing the fingerprints during a protocol run can be made to dominate the communication cost.
Passive adversaries however cannot effectively attack these fingerprints: while they could technically forward only those parts of the data structure whose performance has degraded relative to the number of items they contain, simply forwarding all data would increase the absolute resource usage of their victims even more.

Proponents of randomized data structures also claim that they are significantly easier to implement than balanced search trees (see e.g.~\cite{seidel1996randomized} or~\cite{pugh1990skip}), making it worthwhile to consider them even in trusted environments, in which their expected logarithmic update complexities suffice.

\subsection{Pseudorandom Treaps}

A \defined{treap}~\cite{seidel1996randomized} is a search tree in which every vertex has an associated \defined{priority} from a totally ordered set, and in which the priorities of the vertices form a heap:

\begin{definition}
Let $U$ be a set, $\prec$ a linear order on $U$, $P$ a set, $\leq$ a linear order on $P$, $\fun{\priority}{U}{P}$, $V \subseteq U$, and $T$ a binary search tree on $V$.

Then we call $T$ a treap if for all $v \in V$ with parent $p$ it holds that $\priority(v) \leq \priority(p)$.
\end{definition}

Of the properties of treaps shown in~\cite{seidel1996randomized}, we will rely on the following:

\begin{itemize}
  \item If the priorities of all vertices are pairwise unequal, then there is exactly one treap on the vertex set, which is equal to the search tree obtained by successively inserting items in order of decreasing priority without any balancing.
  \item If the priorities are distributed uniformly at random, the height of the treap is expected to be logarithmic in the number of vertices.
  \item Inserting or deleting an item while maintaining the treap properties can be done in time proportional to the height of the treap.
\end{itemize}

In the following, we will fix some cryptographically secure hash function $\fun{\p}{U}{\set{0, 1}^k}$, and define $\priority(u) \defeq \p(u)$, using the numeric order on $\set{0, 1}^k$ for comparing priorities. Since $\p$ is collision resistant, we can assume the resulting treaps to be unique, and since the output of a secure hash function is indistinguishable from a random mapping, we can assume the resulting treaps to have expected logarithmic height.

Treaps store items in every vertex, whereas Merkle trees only store items in their leaves. \cite{buldas2002eliminating} proposes the following natural generalization and proves that the labels are collision free if the underlying hash function $\h$ is collision free:

  \[
   \tlabel{\h}(v) \defeq \begin{cases}
\h(v), &  \text{for leaf $v$} \\
\h(\tlabel{\h}(c_{<}) \concatenate \h(v)) & \, \parbox[t]{.6\textwidth}{$v$ internal vertex with left child $c_{<}$\\and no right child}\\
\h(\h(v) \concatenate \tlabel{\h}(c_{>})) & \, \parbox[t]{.6\textwidth}{$v$ internal vertex with right child $c_{>}$\\and no left child}\\
\h(\tlabel{\h}(c_{<}) \concatenate \h(v) \concatenate \tlabel{\h}(c_{>})) & \, \parbox[t]{.6\textwidth}{$v$ internal vertex with left child $c_{<}$\\and right child $c_{>}$}
\end{cases}
  \]

We will call a treap labeled according to this scheme a \defined{Merkle treap}.
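Expressed in the Haskell framework of this chapter, the labeling can be sketched as follows; \texttt{D}, \texttt{hash} and \texttt{concat} are the digest type, hash function and concatenation assumed by the surrounding code, while \texttt{Item} and \texttt{itemHash} are our own (hypothetical) names for the item type and for applying $\h$ to an item:

\begin{minted}{haskell}
-- A binary search tree vertex with optional left and right subtrees.
data Treap = Node (Maybe Treap) Item (Maybe Treap)

-- The label of a vertex, one equation per case of the definition above.
label :: Treap -> D
label (Node Nothing  v Nothing)  = itemHash v
label (Node (Just l) v Nothing)  = hash (concat (label l) (itemHash v))
label (Node Nothing  v (Just r)) = hash (concat (itemHash v) (label r))
label (Node (Just l) v (Just r)) =
  hash (concat (concat (label l) (itemHash v)) (label r))
\end{minted}

The three non-leaf equations reappear as the corresponding cases of the \texttt{combine} function defined below.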
Since Merkle treaps are unique, we can define the fingerprint of some set as the root label of the Merkle treap on that set if it is nonempty, or $\nil$ otherwise.

\subsubsection{Subsets}

Now that we have defined the fingerprint of a set, we need a way of efficiently computing the fingerprint of any range $\range{x}{y}{S}$ given the Merkle treap of the set $S$. The algorithm can be expressed in exactly the same framework as that of \cref{subsets-associative}, changing merely some type definitions and the \texttt{combine} function:

\begin{minted}[mathescape]{haskell}
-- A fingerprint is now either a hash or $\mathtt{Nothing}$.
type Fingerprint = Maybe D

emptyFp :: Fingerprint
emptyFp = Nothing

liftFp :: D -> Fingerprint
liftFp d = Just d

-- Combine three fingerprints into a new fingerprint.
combine :: Fingerprint -> Fingerprint -> Fingerprint -> Fingerprint
combine Nothing Nothing Nothing = Nothing
combine (Just a) Nothing Nothing = Just a
combine Nothing (Just b) Nothing = Just b
combine Nothing Nothing (Just c) = Just c
combine (Just a) (Just b) Nothing = Just (hash (concat a b))
combine (Just a) Nothing (Just c) = Just (hash (concat a c))
combine Nothing (Just b) (Just c) = Just (hash (concat b c))
combine (Just a) (Just b) (Just c) = Just (hash (concat (concat a b) c))
\end{minted}

While the tree traversal and the accumulation of data otherwise stay the same, the nature of the correctness argument changes quite a bit, since we cannot rely on the associativity of \texttt{combine} anymore. Instead, the core idea of the argument is that the computed fingerprint depends only on the relative positioning of the tree vertices within the range, which stays identical no matter how many vertices outside the range are added.

Recall that the shape of a treap is always that of the tree obtained by inserting items sorted by decreasing priority without rebalancing. Now consider the vertex $v$ with the smallest distance to the root $r$ such that $x \preceq v \prec y$, i.e. the vertex computed by \texttt{findInitial}. All items within $\range{x}{y}{S}$ are contained in $\V(T_v)$. The fingerprint computation for the range is clearly local to $T_v$, and while insertions of items outside that tree might change the path from $r$ to $v$, they do not change the shape of $T_v$.

It thus remains to show that \texttt{accGeq} and \texttt{accLt} compute the same result no matter how many items outside the range are contained in the tree they process. We give the full argument for \texttt{accGeq}; the case for \texttt{accLt} follows analogously.

\begin{proof}
Let $v$ be the root of a tree $T_v$ and $x \in U$. We show by induction on $\abs{\V(T_v)}$ that \texttt{accGeq} computes a value equal to the root label of the Merkle treap on $S_{\succeq} \defeq \set{u \in \V(T_v) \mid x \preceq u}$ (or $\nil$ if the set is empty).

\textbf{IB:} If $v$ is a leaf with $v \prec x$, then $\mathtt{(accGeq\ v\ x)} = \nil$ and indeed $S_{\succeq} = \emptyset$. If $v$ is a leaf with $x \preceq v$, then $\mathtt{(accGeq\ v\ x)} = \mathtt{Just}\ \h(v) = \mathtt{Just}\ \tlabel{\h}(v)$.

\textbf{IH:} Let $n \in \N$ and assume that for every tree $T_w$ rooted at some vertex $w$ with $\abs{\V(T_w)} \leq n$,
$\mathtt{(accGeq\ w\ x)}$ equals the root label of the Merkle treap on $\set{u \in \V(T_w) \mid x \preceq u}$, or $\nil$ if that set is empty.

\textbf{IS:} Let $T_v$ be a tree rooted at $v$ with $\abs{\V(T_v)} = n + 1$, let $c_{<}$ be the left child of $v$, and let $c_{>}$ be the right child of $v$.

\begin{caselist}
\case If $v \prec x$, then:

\begin{align*}
\mathtt{(accGeq\ v\ x)} &\overset{def}= \mathtt{(accGeq\ c_{>}\ x)}\\
&\overset{IH}= \tlabel{\h}(c_{>})\\
&\overset{v \prec x}= \tlabel{\h}(v).\\
\end{align*}

\case Otherwise, $x \preceq v$. Neither $v$ nor any vertex in $T_{c_{<}}$ has any influence on the shape of $T_{c_{>}}$. We thus have:

\begin{align*}
\mathtt{(accGeq\ v\ x)} &\overset{def}= \mathtt{(combine\ (accGeq\ c_{<}\ x)\ \h(v)\ \tlabel{\h}(c_{>}))}\\
&\overset{IH}= \mathtt{(combine\ \tlabel{\h}(c_{<})\ \h(v)\ \tlabel{\h}(c_{>}))}\\
&\overset{def}= \h(\tlabel{\h}(c_{<}) \concatenate \h(v) \concatenate \tlabel{\h}(c_{>}))\\
&\overset{def}= \tlabel{\h}(v).\\
\end{align*}
\end{caselist}
\end{proof}

The computation time is proportional to the height of the treap, so expected $\complexity{\log(n)}$. But as the hash function is not associative, the tail-recursive versions of \texttt{accGeq} and \texttt{accLt} do not compute the correct result. Intuitively, the problem is that the computation traverses the tree from top to bottom, but the fingerprints are defined through a computation from the bottom to the top. Accordingly, tail-recursive formulations can be achieved by adding parent pointers to all vertices and performing the computation by first traversing to the range boundary and then accumulating fingerprints while moving back up through the tree. The space complexity thus remains in $\complexity{1}$.

Just as with balanced search trees, a binary treap might be the most natural formulation, but not the most efficient one on real hardware. \cite{golovin2009b} describes B-treaps, which are treaps of higher degree. If such an optimized realization is chosen, the need to map between the actual data layout and the shape of the treap which defines the fingerprints complicates the implementation, in particular compared to monoidal fingerprints, which naturally abstract over tree implementation details.

Beyond treaps there are other randomized search trees. \cite{pugh1989incremental} suggests hash tries for fingerprinting sets, and our mechanism for computing fingerprints of ranges is compatible with their approach. The reason we opted for treaps is that the pseudorandom selection of the tree shape is completely decoupled from the ordering over the items, whereas in a hash trie the ordering follows from the tree shape (or vice versa, depending on the viewpoint). As our protocols can benefit from a non-arbitrary ordering (compare \cref{observations}), treaps are the more flexible solution.

There is also some work on deterministic unique tree representations that stay efficient if the number of items is either a particularly small or a particularly large fraction of the size of the universe; see~\cite{sundar1994unique} for both a specific approach and more related work.
These approaches could also be adapted for our purposes, but since they pose restrictions on the number of items that can be stored, they are less flexible than our other approaches.

\subsubsection{Adversarial Treap Construction}

The main motivation for looking at pseudorandom schemes was to be able to rely on well-known, conventional, non-associative hash functions, implying that resilience against malicious peers is desired. While Merkle treaps are collision resistant, they open up another angle of attack: a malicious party can try to construct unbalanced treaps to make the cost of computing fingerprints super-logarithmic in the number of items, leading to a denial-of-service attack as the cost of computing the fingerprints during a protocol run comes to dominate the communication cost.

There is a fairly straightforward way of creating treaps on $n$ vertices of height $n$ in expected $\complexity{n^2}$ time and $\complexity{1}$ space, making treaps unsuitable for an adversarial environment. A sequence $(u_0, u_1, \ldots, u_{n - 1})$ of items such that the treap on these items has height $n$ can be computed by successively choosing candidate items $p_j$ that are strictly increasing with respect to $\preceq$, and letting $u_i \defeq p_j$ if $\floor{i \cdot \frac{2^k}{n}} \leq \priority(p_j) < \floor{(i + 1) \cdot \frac{2^k}{n}}$ (recall that $k$ is the number of bits of a hash digest). The probability for the priority of an arbitrary item to fall within the desired range is $\frac{1}{n}$, so finding one takes expected $\complexity{n}$ time; finding a sequence of $n$ such items accordingly takes expected $\complexity{n^2}$ time.
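The following sketch makes this construction concrete, under the assumption that candidate items are supplied in increasing order with respect to $\preceq$ and that \texttt{priority} interprets the $k$-bit digest $\p(u)$ as an integer; the function name \texttt{degenerate} is our own:

\begin{minted}{haskell}
-- Pick n items whose priorities fall into n successive equal-width
-- buckets of {0, ..., 2^k - 1}. Together with the increasing item order,
-- this forces the treap on the chosen items to be a path of height n.
degenerate :: Int -> Int -> (item -> Integer) -> [item] -> [item]
degenerate k n priority = go 0
  where
    go i cs
      | i == n    = []
      | otherwise = case cs of
          []        -> []
          (c : cs')
            | inBucket i (priority c) -> c : go (i + 1) cs'
            | otherwise               -> go i cs'
    -- The i-th bucket: floor(i * 2^k / n) <= priority < floor((i+1) * 2^k / n).
    inBucket i p = lower i <= p && p < lower (i + 1)
    lower i = (fromIntegral i * 2 ^ k) `div` fromIntegral n
\end{minted}

Each candidate lands in the sought bucket with probability $\frac{1}{n}$, so the candidate list is consumed lazily using an expected $n^2$ priority evaluations in constant space.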
\subsection{Pseudorandom Skip Lists}

The skip list~\cite{pugh1990skip, pugh1998skip} is a probabilistic data structure for storing sets that does not use a tree representation, but rather a hierarchy of progressively sparser linked lists containing the set items in sorted order. The list at layer $0$ contains every item; the list at layer $i + 1$ contains every item from layer $i$ with probability $\frac{1}{2}$. Intuitively, when inserting an item into the skip list, one performs coin flips until one loses; the item is then added at the correct position to as many layers as one performed coin flips. \Cref{fig:skip-list-example} gives an example.

\newcommand{\skiplistnode}[3]{
\node (n#1#2) at (#1, #2) [skiplistnode] {#3};
}

\newcommand{\skiplisttower}[3]{
\foreach \y in {0,...,#2}
{
    \skiplistnode{#1}{\y}{#3}
}
}

\newcommand{\toweredges}[2]{
\foreach \y in {1,...,#2}
{
    \pgfmathtruncatemacro{\myPrev}{\y-1}
    \draw (n#1\y) edge[edge] (n#1\myPrev);
}
}

\newcommand{\listedges}[2]{
\foreach \y in {0,...,#2}
{
    \pgfmathtruncatemacro{\myPrev}{#1-1}
    \draw (n#1\y) edge[edge] (n\myPrev\y);
}
}

\begin{figure*}
\begin{scaletikzpicturetowidth}{\textwidth}
\begin{tikzpicture}[scale=\tikzscale,font=\footnotesize]
	\pgfdeclarelayer{background}
	\pgfdeclarelayer{foreground}
	\pgfsetlayers{background,main,foreground}

	\begin{pgfonlayer}{main}
\skiplisttower{0}{2}{$\bot$}
\skiplisttower{1}{2}{$\examplea$}
\skiplisttower{2}{0}{$\exampleb$}
\skiplisttower{3}{1}{$\examplec$}
\skiplisttower{4}{0}{$\exampled$}
\skiplisttower{5}{1}{$\examplee$}
\skiplisttower{6}{0}{$\examplef$}
\skiplisttower{7}{0}{$\exampleg$}
\skiplisttower{8}{1}{$\exampleh$}
\skiplisttower{9}{2}{$\top$}

\toweredges{0}{2}
\toweredges{1}{2}
\toweredges{3}{1}
\toweredges{5}{1}
\toweredges{8}{1}
\toweredges{9}{2}

\listedges{1}{2}
\listedges{2}{0}
\listedges{3}{0}
\listedges{4}{0}
\listedges{5}{0}
\listedges{6}{0}
\listedges{7}{0}
\listedges{8}{0}
\listedges{9}{1}

\draw (n31) edge[edge] (n11);
\draw (n51) edge[edge] (n31);
\draw (n81) edge[edge] (n51);
\draw (n92) edge[edge] (n12);
	\end{pgfonlayer}
\end{tikzpicture}
\end{scaletikzpicturetowidth}

\caption{
An example skip list, demonstrating the layers of sorted linked lists. The $\bot$ and $\top$ vertices are the start and end points for all lists.
}

\label{fig:skip-list-example}
\end{figure*}

To define a pseudorandom skip list, we again fix some cryptographically secure hash function $\fun{\p}{U}{\set{0, 1}^k}$, and determine the layers of an item $u$ by counting the number of leading zero bits of $\p(u)$ and adding one.

Toward a fingerprinting scheme, we define a labeling over all items in all layers. Let $u_{l, i}$ denote the $i$-th item in layer $l$, and recall that $\h$ is some cryptographically secure hash function.
Then $\sllabel{\h}(u_{0, i}) \defeq \h(u_{0, i})$ and $\sllabel{\h}(u_{l + 1, i}) \defeq \h(\sllabel{\h}(u_{l, j}) \cdot \h(\sllabel{\h}(u_{l, j + 1}) \cdot \h(\ldots \cdot \sllabel{\h}(u_{l, j + m}))))$, where $(u_{l, j}, \ldots, u_{l, j + m})$ is the longest subsequence of layer $l$ such that $u_{l, j} = u_{l + 1, i}$ and $u_{l, j + m} \prec u_{l + 1, i + 1}$.
\Cref{fig:skip-list-flow} visualizes an example.

\begin{figure*}
\begin{scaletikzpicturetowidth}{\textwidth}
\begin{tikzpicture}[scale=\tikzscale,font=\footnotesize]
	\pgfdeclarelayer{background}
	\pgfdeclarelayer{foreground}
	\pgfsetlayers{background,main,foreground}

	\begin{pgfonlayer}{main}
\skiplisttower{0}{2}{$\bot$}
\skiplisttower{1}{2}{$\examplea$}
\skiplisttower{2}{0}{$\exampleb$}
\skiplisttower{3}{1}{$\examplec$}
\skiplisttower{4}{0}{$\exampled$}
\skiplisttower{5}{1}{$\examplee$}
\skiplisttower{6}{0}{$\examplef$}
\skiplisttower{7}{0}{$\exampleg$}
\skiplisttower{8}{1}{$\exampleh$}
\skiplisttower{9}{2}{$\top$}

\toweredges{0}{2}
\toweredges{1}{2}
\toweredges{3}{1}
\toweredges{5}{1}
\toweredges{8}{1}
\toweredges{9}{2}

\listedges{1}{2}
\listedges{2}{0}
\listedges{3}{0}
\listedges{4}{0}
\listedges{5}{0}
\listedges{6}{0}
\listedges{7}{0}
\listedges{8}{0}
\listedges{9}{1}

\draw (n31) edge[edge] (n11);
\draw (n51) edge[edge] (n31);
\draw (n81) edge[edge] (n51);
\draw (n92) edge[edge] (n12);

\draw[->] (n10) edge[dep] (n11);
\draw[->] (n20) edge[dep] (n11);

\draw[->] (n30) edge[dep] (n31);
\draw[->] (n40) edge[dep] (n31);

\draw[->] (n50) edge[dep] (n51);
\draw[->] (n60) edge[dep] (n51);
\draw[->] (n70) edge[dep] (n51);

\draw[->] (n80) edge[dep] (n81);

\draw[->] (n11) edge[dep] (n12);
\draw[->] (n31) edge[dep] (n12);
\draw[->] (n51) edge[dep] (n12);
\draw[->] (n81) edge[dep] (n12);
	\end{pgfonlayer}
\end{tikzpicture}
\end{scaletikzpicturetowidth}

\caption{
A visualization of which labels influence which other labels. One can see that these dependencies form a tree.
}

\label{fig:skip-list-flow}
\end{figure*}

Every label contributes to the computation of at most one other label, which is located on the next layer. Maintaining the labels when inserting or deleting an item thus takes a number of hash computations bounded by the height of the skip list, which is expected $\complexity{\log(n)}$. The number of label concatenations that need to be performed at each layer is also randomized, but on average there are two, as the probability for an item to also occur at the next layer is $\frac{1}{2}$.

Denote by $\skipdomain(u_{l, i})$ the set of items $u$ such that $u_{l, i} \preceq u \prec u_{l, i + 1}$, i.e. the set of all items whose hash contributes to the label of $u_{l, i}$. Now let $S \subseteq U$ be a finite set. The fingerprint of $S$ is defined as $\nil$ if $S = \emptyset$, and otherwise as $\h(\sllabel{\h}(u_{l_0, i_0}) \cdot \h(\sllabel{\h}(u_{l_1, i_1}) \cdot \h(\ldots \cdot \sllabel{\h}(u_{l_m, i_m}))))$, where $(u_{l_0, i_0}, u_{l_1, i_1}, \ldots, u_{l_m, i_m})$ is the unique shortest sequence of vertices in the pseudorandom skip list on $S$ such that its items are strictly increasing and their $\skipdomain$s partition $S$, choosing the vertex of the highest possible layer if multiple vertices have equal domains.
See \\cref{fig:skip-list-fingerprint} for an example.\n\n\\begin{figure*}\n\\begin{scaletikzpicturetowidth}{\\textwidth}\n\\begin{tikzpicture}[scale=\\tikzscale,font=\\footnotesize]\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\begin{pgfonlayer}{main}\n\\skiplisttower{0}{2}{$\\bot$}\n\\skiplisttower{1}{2}{$\\examplea$}\n\\node (n20) at (2, 0) [fingerprintnode] {$\\exampleb$};\n\\skiplisttower{3}{0}{$\\examplec$}\n\\node (n31) at (3, 1) [fingerprintnode] {$\\examplec$};\n\\skiplisttower{4}{0}{$\\exampled$}\n\\skiplisttower{5}{0}{$\\examplee$}\n\\node (n51) at (5, 1) [fingerprintnode] {$\\examplee$};\n\\node (n60) at (6, 0) [fingerprintnode] {$\\examplef$};\n\\skiplisttower{7}{0}{$\\exampleg$}\n\\skiplisttower{8}{1}{$\\exampleh$}\n\\skiplisttower{9}{2}{$\\top$}\n\n\\toweredges{0}{2}\n\\toweredges{1}{2}\n\\toweredges{3}{1}\n\\toweredges{5}{1}\n\\toweredges{8}{1}\n\\toweredges{9}{2}\n\n\\listedges{1}{2}\n\\listedges{2}{0}\n\\listedges{3}{0}\n\\listedges{4}{0}\n\\listedges{5}{0}\n\\listedges{6}{0}\n\\listedges{7}{0}\n\\listedges{8}{0}\n\\listedges{9}{1}\n\n\\draw (n31) edge[edge] (n11);\n\\draw (n51) edge[edge] (n31);\n\\draw (n81) edge[edge] (n51);\n\\draw (n92) edge[edge] (n12);\n\t\\end{pgfonlayer}\n\\end{tikzpicture}\n\\end{scaletikzpicturetowidth}\n\n\\caption{\nThe skip list nodes contributing to the fingerprint of the range $\\range{\\exampleb}{\\exampleg}{S}$. If the range had been $\\range{\\exampleb}{\\examplef}{S}$, it would have again been the topmost $\\examplee$ vertex whose label would have been hashed.\n}\n\n\\label{fig:skip-list-fingerprint}\n\\end{figure*}\n\nWe next sketch an algorithmic way of determining this sequence for any range $\\range{x}{y}{S}$ given the deterministic skip list for $S$, but refrain from giving correctness proofs as they are rather verbose and convey no particular insight that is not better conveyed in the accompanying figures.\n\nLet $\\bar{u} = (u_0, u_1, \\ldots, u_m)$ be the shortest path in the skip list from its beginning to the vertex $y$ excluding $y$ itself, i.e. the sequence of vertices traversed when looking up $y$ in the skip list. This sequence has expected logarithmic length and can be computed in expected logarithmic time. Obtain a shorter sequence $\\bar{u'} = (u'_0, u'_1, \\ldots, u'_{m'})$ by keeping from all vertices in $\\bar{u}$ that correspond to the same item the one in the greatest layer such that its $\\skipdomain$ does not contain any item greater than or equal to $y$. 
\Cref{fig:skip-list-fingerprint-traversal} provides an example.

\begin{figure*}
\begin{scaletikzpicturetowidth}{\textwidth}
\begin{tikzpicture}[scale=\tikzscale,font=\footnotesize]
	\pgfdeclarelayer{background}
	\pgfdeclarelayer{foreground}
	\pgfsetlayers{background,main,foreground}

	\begin{pgfonlayer}{main}
\skiplisttower{0}{2}{$\bot$}
\skiplisttower{1}{2}{$\examplea$}
\node (n20) at (2, 0) [fingerprintnode] {$\exampleb$};
\skiplisttower{3}{0}{$\examplec$}
\node (n31) at (3, 1) [fingerprintnode] {$\examplec$};
\skiplisttower{4}{0}{$\exampled$}
\skiplisttower{5}{0}{$\examplee$}
\node (n51) at (5, 1) [fingerprintnode] {$\examplee$};
\node (n60) at (6, 0) [fingerprintnode] {$\examplef$};
\skiplisttower{7}{0}{$\exampleg$}
\skiplisttower{8}{1}{$\exampleh$}
\skiplisttower{9}{2}{$\top$}

\toweredges{0}{2}
\toweredges{1}{2}
\toweredges{3}{1}
\toweredges{5}{1}
\toweredges{8}{1}
\toweredges{9}{2}

\listedges{1}{2}
\listedges{2}{0}
\listedges{3}{0}
\listedges{4}{0}
\listedges{5}{0}
\listedges{6}{0}
\listedges{7}{0}
\listedges{8}{0}
\listedges{9}{1}

\draw (n31) edge[edge] (n11);
\draw (n51) edge[edge] (n31);
\draw (n81) edge[edge] (n51);
\draw (n92) edge[edge] (n12);


\begin{scope}[transparency group, opacity=0.3]
			\draw[red,line width=14pt,rounded corners=0.1em,line cap=round] (n02.center) -- (n12.center) -- (n11.center) -- (n31.center) -- (n51.center) -- (n50.center) -- (n60.center) -- (n70.center);
			\draw[red,line width=14pt,rounded corners=1em,line cap=round] (n02.center) -- (n12.center) -- (n11.center) -- (n31.center) -- (n51.center) -- (n50.center) -- (n60.center) -- (n70.center);
		\end{scope}

\begin{scope}[transparency group, opacity=0.3]
			\draw[blue,line width=14pt,rounded corners=0.1em,line cap=round] (n92.center) -- (n91.center) -- (n81.center) -- (n51.center) -- (n31.center) -- (n30.center) -- (n20.center);
			\draw[blue,line width=14pt,rounded corners=1em,line cap=round] (n92.center) -- (n91.center) -- (n81.center) -- (n51.center) -- (n31.center) -- (n30.center) -- (n20.center);
		\end{scope}
	\end{pgfonlayer}
\end{tikzpicture}
\end{scaletikzpicturetowidth}

\caption{
The forward search path to $\exampleg$ (starting at the top-left corner) and the backward search path to $\exampleb$ (starting at the top-right corner) allow reading off which vertices contribute to the fingerprint of $\range{\exampleb}{\exampleg}{S}$.
}

\label{fig:skip-list-fingerprint-traversal}
\end{figure*}

Now let $\bar{v} = (v_0, v_1, \ldots, v_o)$ be the shortest path in the skip list obtained by inverting the order on the items, from its beginning to the vertex $x$. Intuitively, this corresponds to looking up $x$ ``from the back'' in a doubly-linked skip list. Obtain from $\bar{v}$ the corresponding shorter sequence $\bar{v'} = (v'_0, v'_1, \ldots, v'_{o'})$ by keeping from all vertices that correspond to the same item the one of the greatest layer such that its $\skipdomain$ does not contain any item strictly less than $x$.

We claim that $\bar{v'}$ and $\bar{u'}$ always intersect in at least one vertex (if $\range{x}{y}{S} \neq \emptyset$), i.e. there are unique $v'_i \in \bar{v'}$ and $u'_j \in \bar{u'}$ with $v'_i = u'_j$. Then the sequence $(v'_0, v'_1, \ldots, v'_i = u'_j, u'_{j + 1}, \ldots, u'_{m'})$ is the unique shortest sequence of vertices of strictly increasing items whose $\skipdomain$s partition $\range{x}{y}{S}$.
Since $\bar{v'}$ and $\bar{u'}$ can be computed in expected logarithmic time, so can this sequence, and thus fingerprints can be computed in expected logarithmic time.

\subsubsection{Adversarial Skip List Construction}

Pseudorandom skip lists are even less suited for adversarial environments than pseudorandom treaps. Performance degrades if most items have the same maximum layer. A randomly chosen item has a hash beginning with a $1$ bit with probability $\frac{1}{2}$; these items reside only in layer $0$. An adversary can thus create a linear list of exclusively layer-$0$ items of length $n$ in expected $2n \in \complexity{n}$ time.
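For concreteness, here is a minimal Haskell sketch of the layer computation and of this attack; the names \texttt{layers} and \texttt{layerZeroOnly} are our own, and digests are assumed to be given as $k$-bit integers:

\begin{minted}{haskell}
import Data.Bits (testBit)

-- The number of layers an item occupies: one more than the number of
-- leading zero bits of its k-bit digest.
layers :: Int -> Integer -> Int
layers k digest =
  1 + length (takeWhile (not . testBit digest) [k - 1, k - 2 .. 0])

-- The attack: keep only candidates whose digest begins with a 1 bit.
-- Each candidate qualifies with probability 1/2, so n items are found
-- after an expected 2n digest evaluations, and the skip list built from
-- them degenerates into a single sorted linked list.
layerZeroOnly :: Int -> (item -> Integer) -> [item] -> [item]
layerZeroOnly k digest = filter (\u -> testBit (digest u) (k - 1))
\end{minted}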
{"text": "\\chapter{TRANSPOSITION SYSTEMS}\n\n \n\n\\section{MONOPHASE TRANSPOSITIOIN SYSTEMS}\n\n\\subsection{Transp05ition Systems Employing Geometric Designs}\n\nIn part one brief mention was made of the use of geometric designs\nand \ufb01gures other than rectangles in producing transposition ciphers. It\nwas stated that triangles, trapezoids, and polygons of various symmetrical\nshapes can be employed. Figures of these types form connecting links\nbetween the methods that use simple rectangular designs and the more\ncomplicated methods that use \ufb01gures in which transposition takes place\nalong diagonals.\n\n\\subsectionTrapezoidal Designs}\n\n(1-. A trapezoid or, more accurately, a truncated triangle, of pre\u2014\narranged dimensions as regards the number of cells (which in this case\nare rhombs into which it is to be partitioned, is constructed. There will\nbe left on one side of the design a series of small triangles which are not\nto be used for inscribing letters, and are therefore crossed off in the\ndesign, as shown in \ufb01gure 24. Only two agreements are necessary in\norder .to fix the dimensions of the design: a keyword or keyphrase to\ndetermine the number of cells at the base of the design, and an under-\nstanding as to the height of the design expressed in number of cells. The\nsuccessive horizontal rows of cells will decrease by one in number from\nbottom to top of the design. In \ufb01gure 24, the keyphrase NO CANDY\nFOR ISSUE is used as a basis for deriving a numerical key of 15 ele\u2014\nments, and it is assumed that by prearrangement it was agreed that the\nheight of the design should be eight cells. Therefore, the bottom row\nhas 15 cells, the next one upwards, 14, the next, 13, and so on, to the\nlast, with 8 cells. The inscription may follow any route agreed upon; in\nthe example, it follows the normal manner of writing. The transcription\nfollows the numerical key order, yielding this cryptogram:\n\n87\n\n \n\n \n\n \n\n \n\n \n\nREF 1D:A56932\n\nODAIK AEDME HPODV ITEIP NHUET BOBRO\nHDTFS EISNI ETBEF BCBTM ESHGA RTORD\nIRERE AWARR ERTNS IEPVR VASEO FTEDL\nNA\n\nb. Decryptographing is merely the reverse of cryptographing, there\nbeing no dif\ufb01culties provided that the design has been correctly con-\nstructed. For this purpose cross\u2014section paper will be found useful. The\nanalysis of such a cryptogram is somewhat complicated by the presence\nof columns having varying numbers of letters; it may be further com-\nplicated by following complex routes in inscription. It is also possible\n\n \n\n/\u2019Ev71p\u20191y\u2019D}rQ/'E/\" Tim\u201c\nF I v E H U N DA13zmt\nE 'D -D A s H L H A Sari\nB R 0 K E N D o w n.43mm\nT o P I P E R A T I vim,\nE T H A T I T B E R E P A.mh\nI R E D B E F 0 RA\u2018EEI\u00a7EIEEIaEIEWN\nA\ufb02\ufb02\ufb01\ufb02\ufb02\ufb02\ufb02\ufb02' IRSLM\n\n7\u2014\u20149\u2014\u20142-1_-8-3-15--5\u20141o\u201411-\u20146\u201412-13-14.\u20144\n\n'NOOANDYFORISSUE\nFigur224.\n\nto follow a numerical key in the inscription of the plain text in horizontal\nlines; this additional procedure would further complicate and delay\nsolution.\n\n87. Triangular Designs\n\na. The simplest way of drawing up a triangle for cryptographing is to\ntake cross\u2014section paper, draw a square the side of which is equal to the\nlength agreed upon as expressed in the number of cells, and then draw\na diagonal cutting the large square into two equal triangles. This is\nshown in \ufb01gure 25, where the length agreed upon is nine, that is, nine\ncells per side. 
The letters of the plain text are inscribed in accord-\nance with any prearranged route, the one illustrated in \ufb01gure 26 be-\ning a simple method wherein the letters are inscribed in horizontal\nlines in the normal manner. When so inscribed, the letters in the dia-\ngram will form 2n \u2014 1 columns where n is the number of cells\nforming one of the sides of the square from which the triange has been\nconstructed. The total number of letters that can be inscribed within\n\n \n\n \n\nREF ID:A56932\n\n \n\nFigure 25.\n\nthetriangleis the sum ofn + (n \u2014 1) + (n \u2014 2 + (n \u2014\u2014 3) +...\n+ 1. For a triangle based upon a side of 9 cells, the sum is 9 + 8 + 7\n+ 6 + 5 + 4 + 3 + 2 +1: 45. The letters may then be tran-\nscribed to form the cryptogram by following another route, or by follow-\ning a derived numerical key applied to the base of the triangle. A simple\nmethod of deriving a key of Zn \u2014 1 elements from a key of n elements\nor letters is exempli\ufb01ed herewith. Let the key be DIAGONALS, a word\nof nine letters. Extend this key to Zn \u2014 1 places by repetition, and then\nassign numerical values as usual:\n\nI! = 9; 2 n \u2014- 1 = 17\nO N A L S D I A G O N A L\n5-13-2-11-17\u2014\u20146-10\u20143\u20148-16-l4\u2014\u20144-12\n\nThis numerical key is the one that has been employed in enciphering\nthe message in Figure 26.\n\n \n\n \n\n5--9--1--7-15-13--2-1L-17-6-10--3--8-16-14--4-12\nCryptogram:\n\nRICRC OCSGE DOONI UAOOE\nSEYID RTISS DTSNR AUNTN\nPERTR\n\nFigure 26.\n\n8\u20182\n\n \n\n\ufb01ver\u2014'M -\"-\"\" ' ' \" '\n\nREF ID:A56932\n\n \n\n \n\nCryptogram :\n\nUUSOC YNTSO REOYS ONRER\nDRITI DTOGD RANEO RICSN\nCTRNI GENNE ATGSR OSIIR\nSOIET RTUAI POECO TNESS\nDPRCD AURSD\n\nFigure 27.\n\nb. By a slight change in procedure it is possible to encipher a mes\u2014\nsage and produce a text which, for the sake of accuracy in special\ncases, is double the original length, but which is self\u2014checking. Sup\u2014\npose that instead of applying a single numerical key to the base of the\ntriangle, a double\u2014length key is applied to the legs, as shown in\n\ufb01gure 27. Here the key is TRIANGLES, extended to double length\nby simple repetition, as follows:\n\n1\u20142-3-4-\u20145-6-7-8\u20149-10-11-12-13-14-15-16-17-18\n\nKeyword: TRIANGLESTRIANGLES\n\nNumerical key: 17-13-7-1-11-5-9-3-15-18-14\u20148\u20142-12\u20146-10\u20144-16\nThis key is applied to the legs of the triangle beginning at the lower\nleft-hand corner. The transcription then follows key\u2014number order,\nwhich results in doubling the length of the message but the repeated\nletters are scattered throughout the whole message. In decryptographing\nsuch a message the clerk merely omits the second occurrence of a letter\nif it agrees (in identity) with its \ufb01rst appearance in the text.\n\n6. Many variations in inscription and transcription can be employed\nin the case of triangles as well as trapezoids. Some of the variations in\nthe case of triangles are shown in \ufb01gure 28.\n\n88. Diagonal Methods\n\na. A method involving diagonal transposition which is reported to\nhave been employed by the French Army in World War I is now to\nbe described. A numerical key is derived from a fairly long word or\nphrase, and a rectangle is constructed, as in \ufb01gure 29. The text is\ninscribed in this rectangle in normal fashion, nulls being employed, if\nnecessary, to complete the last line of the rectangle.\n\n90\n\n \n\n \n\n \n\nREF ID:A56932\n\nn . -. -' '.| I . 
.\" ' \\..-.w.=-v$whh\ufb02?\ufb02llllll\n\n \n\n \n\n17-13-\u20147--1\u201411\u2014-5--9--3-15-14--8--2\u201412--6-10--I.-16\n\nInscription: Up left side, down right, alternately.\n\nTranscription: (a) In rows from the base line, left to right and right to left,\nalternately, upwards :\n\nPISOS RNATU SIERS Etc.\n(1)) In diagonals from right leg, in key\u2014number order:\nRIEDR OUAYN etc.\n(c) In rows from left leg, in key-number order:\nCTGEO YTCEU etc.\n(d) From columns in key-number order:\nCNROI TUGRU etc.\n\nFigure 28.\n\nMessage: ENEMY BATTERY LOCATED AT WOODS 1,000 YARDS\nSOUTHEAST OF MUMMASBURG HEAVY ARTILLERY\nSTOP THEY ARE FIRING AT RATE OF THREE ROUNDS\nPER MINUTE FOR THE BATTERY X WILLS, MAJ.\n\nKeyphrase: MIDNIGHT RIDE OF PAUL REVERE.\n\nEnciphering diagram:\n\nMIDNIGHTRIDEOFPAULREVERE\nl5\u201411\u20142-16\u201412-9\u201410\u201422\u201419-13\u20143\u20144\u201417\u20148\u201418\u20141\u201423\u201414\u201420\u20145\u201424\u20146\u201421\u20147\n\n \n\nE N|\u00a7| M YB A T T EE\u2019IY Lo clfAJ T E D|\u00a7| TIWI 00\nD [310 N ET H o u SM! DY AR [E] s so 1le HE\n{El ST 0 PM U M M IEISB IEIR H E III VIZ! MEI TI\nL LE R YS T o |E|THE TE E F 1131 NC |_\u2018A_'|T\nR AT E or T [E] R EER ou l\ufb01ln s [E] ElEl MI NIQI\nT er 0 RT |El E B ATT ER TIXIIE I LL [SIM AJ\nCryptogram:\n\nADARR SESAR NUANX YAAPH HAURA UWYFW\nRHEDO TETFS HETBE RTOIL TGIMO EITJO\nYRURB TMSFT AHUTT NSLAE YEFYO RESTE\nAESII EDLRT MNORE OLDYO ECAGR YTUMR\nBDSVE LOHTN ATOMO ETEFS TANM\n\nFigure 29.\n\n91\n\n \n\nREF ID:A56932\n\nb. The correspondents agree beforehand upon several diagonals which\nrun from left to right, and from right to left and which intersect,\nthus cutting up the design quite thoroughly. In \ufb01gure 29 let these selected\ndiagonals be those indicated by the numbers from 1 to 6, inclusive,\nthe odd ones indicating diagonals running from left to right. In the\ntranscription, the letters along the indicated diagonals are \ufb01rst set down\nin groups of \ufb01ve, proceeding in key\u2014number order. Correspondents must\nalso agree beforehand as to whether a letter which lies at the intersection\nof two diagonals will be taken both times it is encountered or taken\nonly once and, if so, whether on its \ufb01rst or second appearance. After\nall these letters have been written down, one then proceeds with the\nremaining letters in the usual columnar manner, omitting the letters\nwhich have already been taken. The cryptographing process will become\nclear upon the study of the example in \ufb01gure 29.\n\n89. Interrupted Keyword Transposition\n\na. This method of transposition is a development of a more simple\nmethod wherein the transposition follows a numerical key. The latter\nmust \ufb01rst be described. A keyword or keyphrase of fair length is selected\nand a numerical key derived from it. Let this key be the phrase UNI-\nFORMITY OF METHOD.\n\nKeyphrase: UNIFORMITYOFME TH OD\nNumerical key: 17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1\nThe plain text is then written out in horizontal lines corresponding\nto the length of the key; then transposition is effected within each row,\naccording to the sequence of numbers applicable, as shown in \ufb01gure 30.\n\nMessage: ADMINISTRATIVE ORDERS MUST BE COMPLETED AND\nREADY TO ACCOMPANY FIELD ORDERS NOT LATER\nTHAN 5:00 RM. 
THIS DATE.\n\nEnciphering diagram :\n\n \n\nI\u201d, 17-10-6-3-11-14--871-5 18-12-4--92-16-5-15-1\nA DMI N IST R A TIVE on DE\nR suu s TBE c o MPLE TE DA\nN DRE A DYT o A ccou PA NY\nF IEL D 0RD E R sno'r LA TE\nR THA N FIV E r MTHI SD Ar\nE\n\nCryptogram:\nEEIIR MTSVD NTDIR OAAAE UPEME BLSSM\nDTG'I'R OYMEC ARTYO DACND OPNAE TLNAE\nDROID STOEL FRTIA TDHVI HTNMA FESRP\nE\n\nFigure 30.\n\n9.2\n\n \n\n \n\nREF ID:A56932\n\nb. In the foregoing case the eneipherment takes place only by trans\u2014\nposition within rows, but it is possible to complicate the method by\ntransposing, in addition, the rows as a whole, employing the same key\nor only a portion of it, as much as is required. Thus, if the message\ncontained 18 rows of 18 letters each, then the transposition of rows\ncould be effected according to key\u2014number order, the last row being\ntaken \ufb01rst (since the number 1 of the numerical key happens in this\ncase to be at the end of the numerical key), the 14th row being taken\nsecond (since the number 2 of the numerical key is the 14th number),\nand so on. Where the message does not contain as many complete rows\nas there are numbers in the key, the transposition takes place in key-\nnumber order nevertheless, the rows being taken in the numerical order\nof the numbers present. Using the same key and message as in the\nforegoing case, the encipherment would be as shown in \ufb01gure 31.\n\nEneiphering diagram:\nl7-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1\n\n \n\nl7: ADMINISTRATIVE OR DE\n10: RSMUSTBECOMPLE TE DA\n6: NDREADYTOACCOM PA NY\n3: FIELDORDERSNOT LA TE\n11: RTHANFIVEPMTHI SD AT\n14: E\n\nCryptogram:\n\nETLNA EDROI DSTOE LFRYM ECART YODAC\nNDOPN AAEUP EMEBL SSMDT CTROT IATDH\nVIHTN MAFES RPEEE IIRMT SVDN'I' DIROA\nA\n\nFigure 31\n\nc. From the preceding method it is but a step to the method of\ninterrupted key transposition now to be described. Instead of writing\nthe text in regular-length groups corresponding to the length of the\nkey, it is written out in irregular groups the lengths of which vary\naccording to some prearranged plan. For example, note the basis of\nthe variable grouping in \ufb01gure 32, which uses the same message and\nkey as in a above.\n\n(1. This method may be combined with that shown in b above, thus\nfurther complicating the system. In decryptographing such a message it\nis best to use cross\u2014section paper, block out the cells to be occupied by\nletters in the deciphering diagram, and indicate the key numbers appli-\ncable to each line. This will facilitate the process materially and help\neliminate errors.\n\n93\n\n' :nli\ufb01\ufb02uli I||\n\n \n\nREF ID:A56932\n\nEnciphering diagram:\n17-10\u20146\u20143- ll-14\u20148\u20147-15-18-12\u20144\u20149\u20142-16\u20145-15\u20141\n\n \n\nA D M I N I S T R A T I V E 0 R D E\nR S M U S T B E C 0 M P L E T E D A\nN D R E A D Y T 0 A C C O M P A N Y\nF I E L D 0 R D E R S N O T L A T E\nR T H A N F I V E P M T H I S D A T\nE\n\nH\n\u20181\n\n-10\u20146\u20145-11-l4\u20148\u20147-15-18-12\u20144\u20149\u20142-16\u20145-13\u20141\n\nN STRATIVEORDE\nS BECOMPLE\n\n \n\n*iH\n\nADYTOACC......\nNYFIELDORDER..\n\n>FJ>CH\n\nT E R T H\nI V E P . . . . . . .\nI S D A T E (L C E P)*.\n\n(\u201cThe four \ufb01nal letters LCEP are nulls, to complete the row.)\n\n=>oamoza=u>\nD-JZP\u2018ZEUL\u2018JUIU\n:qpowwuzs\n\nCryptogram (columnar transposition in key-number sequence):\n\nEEEDI UAEAT IIIPC OERRM MDRPO AFHTE\nTIHTS BYFTP AVLRP DSEDM NLNTN SANEV\nSTMCD CDITD YREDR COEEO EARTN OSTAM\nAOALL\n\nFigure 32.\n\ne. 
Another method of interrupted transposition is that which employs\na rather long sequence of digits to control the interruption. In order\nto avoid the necessity of carrying around such a written sequence, it\nis possible to agree upon a number whose reciprocal when converted\nby actual division into its equivalent decimal number will give a long\nseries of digits. For example, the reciprocal of 7, or 1/7, yields a\nrepeating sequence of six digits: 142857142857 . . .; the reciprocal\nof 49, 1/49, yields a repeating sequence of 42 digits, etc. Zeros, when they\nappear, are omitted from the sequence. Suppose the number 19 is agreed\nupon, the reciprocal of which yields the sequence (0)52631578947368421.\nOn cross-section paper mark o\ufb02' sets of cells corresponding in number\nto the successive digits. Thus:\n\n \n\n \n\n5 2 (i 8 l 5\n||l|l|><|||><|||l||l><||l|><l|><|||l|l\nLet the message be ATTACK HAS BEEN POSTPONED.\nEncipherment :\n5 2 6 3 1 5\n\n \n\nwillIEIsloIXITIll><l1rl$lNITINIDIXIAIBIPI><I\u00a2I><IKIEI0IPITI\nCryptogram:\nAHESO TATSN TNDAB PCKEO PE\n\n94\n\n \n\n:33 L V:\n\nF\u2018 ETQ\"E'\u00a7$'33\u2019E\n\n \n\n \n\nREF ID:A56932\n\n' '--' ...,\u201cw.'. ' ---:'=\u2018--.-.\"..'=-'-._ .....\n\n \n\nf. To decryptograph such a message, the cryptogram is written down\nin a series of cross-section cells, which are then blocked off in sets\naccording to the numerical key:\n\n5 2 6 3 1 5\n\nIAIHIEISIOIXITIAIXITISINITINIDIXIAIBIPIXICIXIKIElolPT\ufb02\n\nTaking the letters in consecutive order out of the successive sets, and\ncrossing them off the series at the same time as they are being written\ndown to construct the plain text, the message is found to begin with\nthe following two words:\n\n5. 2 6 3 1 5\nIAIHIEISIOIXITIAIXITISINITINIDIXIAIBIPIXICIXIKTEIOIPIEI\nATTACK HAS . . .\n\ng. Preparatory to cryptographing, it is necessary to \ufb01nd the length\nof the message to be enciphered and then to mark off as many cells as\nwill be required for encipherment. Nulls are used to \ufb01ll in cells that\nare not occupied after cnciphering the whole message. The secrecy of\nthe method depends, of course, upon the reciprocal selected, but there\nis no reason why any fraction that will yield a long series of digits\ncannot be employed. If the selection of key numbers were restricted\nto reciprocals, the secrecy would be more limited in scope than is actually\nnecessitated by the method itself.\n\n \n\n \n\n90. Permutation Method\n\n0. An old method, known in literature as the aerial telegraphy method,1\nforms the basis of this system. A set of permutations of 3 4, . . .\n9 digits is agreed upon and these permutations are listed in a de\ufb01nite\nseries. As an example, let these permutations be made of the digits 1\nto 5, selecting only four of the possible 120. 
Suppose those selected\nare the following, set down in successive lines of the diagram in\n\ufb01gure 33a:\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nPermutation\n\n2 3 1 5 4 2 3 1 5 4\n\n3 2 5 1 4 3 2 5 1 4\n\n1 5 5 2 4 1 5 3 2 4\n\n4 3 1 5 2 4 3 1 5 2\nFigure 33a.\n\n1 So named because it was \ufb01rst devised and employed in messages transmitted by a system of\n\nsemaphore signaling in practical usage in Europe before the electrical telegraph was invented.\n\n95\n\n \n\n \n\n \n\n \n\nREF ID:A56932\n\nThe letters of the plain text, taken in sets of \ufb01ves, are distributed\nwithin the sections of the diagram in accordance with the permuta-\ntions indicated above the sections and also at the left. Thus, the \ufb01rst\n\ufb01ve letters of the text, supposing them to be the initial letters of. the\nword RECOMMENDATIONS, are inserted in the following positions:\n\nPermutation\n\n \n\n \n\n23154\n\n \n\n \n\n \n\n \n\n \n\n \n\nE C R M 0\n\n \n\nThe next \ufb01ve letters are inscribed in the second line of the diagram\nin the sections indicated by the permutation above and at the- left of\nthe line. Thus:\n\n \n\n \n\n \n\n \n\nPermutation\n' 2 3 1 5 4\n2 3 1 5 4 E C . R M O\n3 2 5 1 4\nl 4\n5 2 5. N E A M D\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nThis process is continued for each line and for as many lines as there\nare permutations indicated at the left. In the foregoing case, after\ntwenty letters have been inserted, one inserts a second set of \ufb01ve\nletters again on the \ufb01rst line, placing the letters of this second set\nimmediately to the right of those of the \ufb01rst set, respectively in key-\nnumber order. The succeeding lines are treated in similar fashion\nuntil the whole message has been enciphered. The following example\nwill illustrate the process:\nMessage: RECOMMENDATIONS FOR LOCATION OF NEW\nBALLOON POSITIONS MUST BE SUBMITTED\nBEFORE 12TH AIRDROME COMPANY CHANGES\nCOMMAND POST TOMORROW.\n\nEnciphering diagram :\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\nPermutation\n2 .3 1 5 4 2 3 1 5 4\nEASEOM CTIDMA RCOTRM MOIECD OITBEN\n3 2 5 1 4 5 2 5 1 4\n_ NOSRPS ESNOMO ANUTNT MNOFOP DF'MEAT\n1 5 3 2 4 1 5 5 2 4\nTESWYO 'SLSTNR OBBLHO IWTECM NAEFAR\n4 3 1 5 2 4 3 1 5 2\nLNIRCB* ROMISC* FLUHGO op'mow OOBAEW\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n' The letters B. G. and D are nulls. to complete the \ufb01gure.\n' ' \" Figure 3317.\n\n\u201896\n\n \n\nFun\u201c- \u00a2.__ ..\n\n \n\nREF ID:A56932\n\nThe letters of the cipher text are taken from the diagram according\nto any prearranged route, the most simple being to transcribe the lines\nof letters in groups of \ufb01ves, thus:\n\nEASEO MCTID MARCO TRMNO IECDO ITBEN\nNOSRP SESNO MOANU TNTMN OFOPD FMEAT\nTESWY OSLST NROBB LHOIW TECMN AEFAR\nLNIRC BROME SCFLU HGOOP TDODO OBAEW\n\nb. The foregoing method when employed in its most simple form\ndoes not yield cryptograms of even a moderate degree of security;\nbut if the method of inscription and transcription is varied and made\nmore. complex, the degree of security may be increased quite notice-\nably. It is possible to use longer permutations, based on sets of 6,\n7, 8, or 9 digits, but in every case the successive permutations must\nbe prearranged as regards both their exact composition and their order\nor arrangement in the diagram. '\n\n91. Transposition Method Using Special Figures\n\n0. 
91. Transposition Method Using Special Figures

a. The method now to be described is useful only in special cases where the correspondence is restricted to brief communications between a very limited number of persons. It is necessary to agree in advance on certain particulars, as will be seen. Let the message to be enciphered be the following:

FOUR TRANSPORTS WILL BE COMPLETED BY END OF APRIL AND SIX MORE BY END OF JULY.

Note the following figures and encipherment:

[Figure 34: the message inscribed along a line of figures of prearranged shape; the hand-drawn figures are illegible in the scan.]

Cryptogram:

ORPSL OFUTA SOTWL BCMRN RIEPE BDPAI
LTDYN OARLN SXEEF IDMRE FYOEY NOJLB
DU

b. It will be noted that it is essential to agree in advance not only upon the nature of the figure but also upon the number of figures per line.

c. The next series is a modification of the preceding. The same message will be employed, with a double-cross figure, five figures per line.

[Figure 35: the same message inscribed along double-cross figures, five per line, with the resulting cryptogram; the figure and most of the cryptogram are illegible in the scan.]

d. Still another series may be formed, as follows:

[Figure 36: the message inscribed along a third series of figures; the figures themselves are illegible in the scan.]

Cryptogram:

FSLLN NOIPP LEEID AUWOM BYTRO RRSRO
EBEPF TTCDA LOOMA DRFXN NEJID EBYUS
YL

e. A figure of different form than the preceding forms the basis of the next type.

[Figure 37: the message inscribed along a fourth form of figure; the figure itself is illegible in the scan.]

Cryptogram:

OOEDR TOYRW PNNLE FPBEU RCBTS
MEAIL DSLTF NROPS BJIXE LOAOD
ADEFR IYULM NY

f. From the foregoing examples, it is obvious that many other figures may be used for effective transpositions of this kind, such as stars of varying numbers of points, polygons of various symmetrical shapes, etc. It is merely necessary to agree upon the figures, the number of figures per line, and the starting points of the inscription and transcription processes.

g. The method lends itself readily to combination with simple monoalphabetic substitution, yielding cryptograms of a rather high degree of security.

Section II. POLYPHASE TRANSPOSITION SYSTEMS

92. Polyphase Transposition Methods in General

a. In paragraph 33, brief mention was made of transposition systems in which two or more processes of rearrangement are involved. It was stated that only a very limited number of such transposition methods are practicable for military use, but that the degree of security afforded by them is considerably greater than that afforded by certain much more complicated substitution methods.
The methods referred to are those which involve two or more successive transpositions, and merely for purposes of brevity in reference they will here be called polyphase transposition methods, to distinguish them from the single, monophase methods thus far described.

b. It is obvious that a polyphase transposition method may involve 2, 3, . . . successive transpositions of the letters of the plain text. To describe these methods in general terms, one may indicate that the letters resulting from a first transposition, designated as the T-1 transposition, form the basis of a second, or T-2, transposition. If the process is continued, there may be T-3, T-4 . . . transpositions, and each may involve the use of a geometric figure or design. For convenience, the design involved in accomplishing the T-1 transposition may be designated as the D-1 design; that involved in accomplishing the T-2 transposition, as the D-2 design; etc. However, it may as well be stated at this point that, so far as military cryptography is concerned, methods which involve more than D-2 and T-2 elements are entirely impractical, and often those which involve no more than D-2 and T-2 elements are also impracticable for such use.

93. True and False Polyphase Transpositions

a. It is possible to perform two or more transpositions with the letters of a text and yet the final cryptogram will be no more difficult to solve than if only a single transposition had been effected. The equivalent of this in the case of substitution ciphers is to encipher a monoalphabetic cryptogram by means of a second single alphabet; the final result is still a monoalphabetic substitution cipher. Likewise, if a message has been enciphered by a simple form of route transposition and a second and similar or approximately similar form of simple route transposition is again applied to the text of the first transposition, the final text is still that of a monophase transposition cipher. Again, two transpositions may be accomplished without really effecting a more thorough scrambling of the letters composing the original text. Examples will serve to clarify the differences between false and true polyphase transposition.

b. Note the following simple columnar transposition cipher prepared according to the method described in paragraph 27:

Message: DELIVER ALL AMMUNITION TO 4TH DIVISION DUMP.

Keyword: SCHEDULE
Derived numerical key: 7-1-5-3-2-8-6-4

Enciphering rectangle:

7 1 5 3 2 8 6 4
D E L I V E R A
L L A M M U N I
T I O N T O F O
U R T H D I V I
S I O N D U M P

Figure 38.

Cryptogram (T-1):

ELIRI VMTDD IMNHN AIOIP LAOTO RNFVU
DLTUS EUOIU
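The derivation of a numerical key from a keyword and the simple columnar transposition of paragraph 27 are easily mechanized. A minimal sketch follows; the helper and function names are the writer's own, and the message spells out FOURTH as in the rectangle above.

def derive_key(keyword):
    # Number the letters in alphabetical order, duplicates left to right:
    # SCHEDULE -> 7-1-5-3-2-8-6-4.
    order = sorted(range(len(keyword)), key=lambda i: (keyword[i], i))
    key = [0] * len(keyword)
    for rank, i in enumerate(order, 1):
        key[i] = rank
    return key

def columnar(message, keyword):
    text = [c for c in message.upper() if c.isalpha()]
    key = derive_key(keyword)
    width = len(key)
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    # read the columns out in key-number order
    return " ".join("".join(row[key.index(r)] for row in rows
                            if key.index(r) < len(row))
                    for r in sorted(key))

print(derive_key("SCHEDULE"))    # [7, 1, 5, 3, 2, 8, 6, 4]
print(columnar("DELIVER ALL AMMUNITION TO FOURTH DIVISION DUMP", "SCHEDULE"))
# ELIRI VMTDD IMNHN AIOIP LAOTO RNFVU DLTUS EUOIU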
In producing the foregoing cryptogram only the columns were transposed. Suppose that by prearrangement, using the keyword BREAK (derived numerical key 2-5-3-1-4), the horizontal lines of the foregoing enciphering rectangle were also to be transposed. For example, let the horizontal lines of the rectangle D-1 be transposed immediately before taking the letters out of the columns of the design (in key-number order) to form the cipher text. Thus:

  7 1 5 3 2 8 6 4
2 D E L I V E R A
5 L L A M M U N I
3 T I O N T O F O
1 U R T H D I V I
4 S I O N D U M P

D-1

Figure 39.

Cryptogram (T-2):

REIIL DVTDM HINNM IAOPI TLOOA VRFMN
UDTSL IEOUU

c. The foregoing, however, is not a case of true polyphase or so-called double transposition. The same final result may be accomplished in a way which will at first glance appear quite different but is in reality one that accomplishes the same two operations by combining them in one operation. Let the message be inscribed as before, but this time with both numerical keys applied to the top and side of the rectangle. Then let another rectangle of the same dimensions, but with numbers in straight sequence instead of key-number sequence, be set alongside it. Thus:

  7 1 5 3 2 8 6 4        1 2 3 4 5 6 7 8
2 D E L I V E R A      1
5 L L A M M U N I      2
3 T I O N T O F O      3
1 U R T H D I V I      4
4 S I O N D U M P      5

        D-1                  D-2

Figure 40.

Each letter of D-1 is now transferred to that cell in D-2 which is indicated by the row and column indicators of the letter in D-1. For example, the first letter, D, of D-1 has the indicators 2-7 and it is placed in the 2-7 cell of D-2; the second letter of D-1, which is E, is placed in the 2-1 cell of D-2, and so on. The final result is as follows:

  7 1 5 3 2 8 6 4        1 2 3 4 5 6 7 8
2 D E L I V E R A      1 R D H I T V U I
5 L L A M M U N I      2 E V I A L R D E
3 T I O N T O F O      3 I T N O O F T O
1 U R T H D I V I      4 I D N P O M S U
4 S I O N D U M P      5 L M M I A N L U

        D-1                  D-2

Figure 41.

It will be seen that if the columns of D-2 are now read downwards in straight order from left to right, the final cryptogram is identical with that obtained in figure 39: REIIL DVTDM, etc.

d. The foregoing cipher, often called the Nihilist Cipher, is referred to in some of the older literature as a double transposition cipher because it involves a transposition of both columns and rows; and indeed, as described in b above, it seems to involve a double process. It is, however, not an example of true double transposition. When the mechanism of this cipher is compared with that now to be described, the great difference in the cryptographic security of the two methods will become apparent.
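The false double transposition can be expressed as a row rearrangement followed by the same columnar reading. A sketch, repeating the derive_key helper from the previous example for completeness:

def derive_key(kw):
    order = sorted(range(len(kw)), key=lambda i: (kw[i], i))
    key = [0] * len(kw)
    for rank, i in enumerate(order, 1):
        key[i] = rank
    return key

def false_double(message, col_kw, row_kw):
    # Nihilist-style transposition: whole rows are rearranged by one key,
    # then whole columns are read off in the order of the other key.
    text = [c for c in message.upper() if c.isalpha()]
    col_key, row_key = derive_key(col_kw), derive_key(row_kw)
    width = len(col_key)
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    rows = [rows[row_key.index(r)] for r in sorted(row_key)]
    return " ".join("".join(row[col_key.index(r)] for row in rows)
                    for r in sorted(col_key))

print(false_double("DELIVER ALL AMMUNITION TO FOURTH DIVISION DUMP",
                   "SCHEDULE", "BREAK"))
# REIIL DVTDM HINNM IAOPI TLOOA VRFMN UDTSL IEOUU

Because only whole rows and whole columns move, each letter's new position is a fixed function of its old row and column, which is why the result is no stronger than a single transposition.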
94. True Double Transposition

In the form of the false double transposition described above, it is only entire columns and entire rows that are transposed. The disarrangement of the letters is after all not very thorough. In true double transposition this is no longer the case, for here the letters of columns and rows become so thoroughly rearranged that the final text presents a complete scrambling, almost as though the letters of the message had been tossed into a hat and then drawn out at random.

Section III. TRUE DOUBLE TRANSPOSITION

95. True Double Transposition of the Columnar Type

a. It is by what is apparently a simple modification of certain of the columnar methods already described that an exceedingly good true double transposition can be effected. Let a numerical key be derived from a keyword in the usual manner and let the message be written out under this key to form a rectangle in the usual manner for columnar transposition. The length of the message itself determines the exact dimensions of the rectangle thus formed, and whether or not it is completely or incompletely filled.

b. In its most effective form the double transposition is based upon an incompletely filled rectangle; that is, one in which one or more cells in the last line remain unfilled. An example of the method now follows. Let the keyword be INTERNATIONAL; the message to be enciphered, as follows:

OUR ATTACK SLOWING UP IN FRONT OF HILL 1000 YARDS SOUTHEAST OF GOLDENVILLE STOP REQUEST PROMPT REENFORCEMENT.

Keyword: INTERNATIONAL
Derived numerical key: 4-7-12-3-11-8-1-13-5-10-9-2-6

[Figure 42a: the message inscribed in the D-1 rectangle under the numerical key, with the first two columns of D-1 (taken in key-number order) transferred into the D-2 rectangle; the printed rectangles are illegible in the scan.]

The first, or D-1, rectangle is inscribed in the usual manner of simple numerical-key columnar transposition. The letters of the T-1 transposition are then inscribed in the second, or D-2, rectangle in the normal manner of writing, that is, from left to right and from the top downwards. This is shown in D-2 of figure 42a for the first two columns of D-1 (in numerical-key order) after transfer of their letters into D-2.
The letters of the remaining columns of D-1 are transferred in the same manner into D-2, yielding the following rectangle:

[Figure 42b: the completed D-2 rectangle; illegible in the scan.]

For the T-2 text the letters are transcribed from the D-2 rectangle, reading down the columns in key-number order, and grouping the letters in fives. The cryptogram is as follows:

PTRUT OGTTI RLOPP DUSVO SOSAU AOREA
CORSH EEDNF WTULC NNEST QOFOY KFFHR
PUORA NTLTE LNLES GLOER OMONA IHIES
ENETN MDIT

c. In paragraph 29 a variation of the simple columnar key method of transposition was described. If the process therein indicated is repeated, double transposition is effected. The following example will serve to illustrate the method, using the same message and key as were used in paragraph 29:

Message: REQUEST IMMEDIATE REENFORCEMENTS
Keyword: PRODUCT
Derived numerical key: 4-5-3-2-7-1-6

Encipherment:

Key:  4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5
Text: R E Q U E S T I M M E D I A T E R E E N F O R C E M E N T S
T-1:  S I N E U E E E Q M R C R I T O T E M E R S T A F N E D E M
T-2:  E R E E E R E F N M T A S E T S E I Q O T M E I R D U C M N

Cryptogram:

EREEE REFNM TASET SEIQO TMEIR DUCMN

d. In some respects this modified method is simpler for the novice to perform correctly than is that employing rectangles. Experience has shown that many inexpert cryptographic clerks fail to perform the two transpositions correctly when D-1 and D-2 rectangles are employed in the work.
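The repeated-key form of paragraph 95c is the easiest to mechanize: one transposition takes all the letters standing under key number 1, then under 2, and so on; applying it twice gives the true double transposition. A sketch, with derive_key as before:

def derive_key(kw):
    order = sorted(range(len(kw)), key=lambda i: (kw[i], i))
    key = [0] * len(kw)
    for rank, i in enumerate(order, 1):
        key[i] = rank
    return key

def single_pass(text, key):
    # Write the text in one long line under the repeating key, then take
    # out all letters under key number 1, then under 2, and so on.
    width = len(key)
    return "".join(ch for rank in range(1, width + 1)
                   for i, ch in enumerate(text) if key[i % width] == rank)

def true_double(message, keyword):
    text = "".join(c for c in message.upper() if c.isalpha())
    key = derive_key(keyword)
    return single_pass(single_pass(text, key), key)

print(single_pass("REQUESTIMMEDIATEREENFORCEMENTS", derive_key("PRODUCT")))
# SINEUEEEQMRCRITOTEMERSTAFNEDEM   (the T-1 line above)
print(true_double("REQUEST IMMEDIATE REENFORCEMENTS", "PRODUCT"))
# EREEEREFNMTASETSEIQOTMEIRDUCMN   (the T-2 line, i.e., the cryptogram)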
96. General Remarks on True Polyphase Transposition

a. The cryptographic security of the true double transposition method deserves discussion. Careful study of a cryptogram enciphered by the double transposition method set forth in paragraph 95 b and c will indicate that an extremely thorough scrambling of the letters is indeed brought about by the method. Basically, its principle is the splitting up of the adjacent or successive letters constituting the plain text by two sets of "cuts", the second of which is in a direction that is perpendicular to the first, with the individual "cuts" of both sets arranged in a variable and irregular order. It is well adapted for a regular and voluminous exchange of cryptograms between correspondents, because even if many messages in the same key are intercepted, so long as no two messages are identical in length, they can only be cryptanalyzed after considerable effort.

b. Triple and quadruple transpositions of the same nature are possible but not practical for serious usage. Theoretically, a continuation or repetition of the transposition process will ultimately bring about a condition wherein the D-n rectangle is identical with the D-1 rectangle; in other words, after a certain number of transpositions the rectangle produced by a repetition of the cryptographing process results finally in decryptographing the message. Exactly how many repetitive transpositions intervene in such cases is extremely variable and depends upon factors lying outside the scope of this text.

c. In the example of cryptographing given in paragraph 95b, the D-1 and D-2 rectangles are identical in dimensions, and identical numerical keys are applied to effect the T-1 and T-2 transpositions. It is obvious, however, that it is not necessary to maintain these identities; D-1 and D-2 rectangles of different dimensions may readily be employed, and even if it is agreed to have the dimensions identical, the numerical keys for the two transpositions may be different. Furthermore, it is possible to add other variable elements. (1) The direction or manner of inscribing the letters in the D-1 rectangle may be varied; (2) the direction of reading off or taking the letters out of the D-1 rectangle in effecting the T-1 transposition, that is, in transferring them into the D-2 rectangle, may be varied; (3) the direction of inscribing these letters in the D-2 rectangle may be varied; (4) the direction of reading off or taking the letters out of the D-2 rectangle in effecting the T-2 transposition may be varied.

d. The solution of cryptograms enciphered upon the double transposition principle is often made possible by the presence of certain plain-text combinations, such as QU and CH (in German). For this reason, careful cryptographers substitute a single letter for such combinations, as decided upon by preagreement. For example, in one case the letter Q was invariably used as a substitute for the compound CH, with good effect.

Section IV. GRILLES AND OTHER TYPES OF MATRICES

97. Types of Cryptographic Grilles

Broadly speaking, cryptographic grilles2 are sheets of paper, cardboard, or thin metal in which perforations have been made for the uncovering of spaces in which letters (or groups of letters, syllables, entire words) may be written on another sheet of paper upon which the grille is superimposed. This latter sheet, usually made also of cross-section paper, will hereafter be designated, for purposes of brevity in reference, as the grille grid, or grid. Its external dimensions are the same as those of the grille. Grilles are of several types depending upon their construction and manner of employment. They will be treated here under the titles of (1) simple grilles, (2) revolving grilles, (3) nonperforated grilles, and (4) "post card" grilles.

2 Also often called "stencils." The general term matrix (plural, matrices) is very useful in referring to a geometric figure or diagram used for transposition purposes. Other terms in common use are cage, frame, bar, etc.

98. Simple Grilles

a. These consist usually of a square in which holes or apertures have been cut in prearranged positions. When the grille is superimposed upon the grid, these apertures disclose cells on the grid, in which cells letters, groups of letters, syllables, or entire words may be inscribed. An example is shown in figure 43. The four sides of the obverse surface of the grille are designated by the figures 1, 2, 3, 4; the four sides of the reverse surface, by the figures 5, 6, 7, 8. These figures are employed to indicate the position of the grille upon the grid in encipherment.
b. (1) In cryptographing a message the grille is placed upon the grid in one of the eight possible positions: obverse surface up, with figure 1, 2, 3, or 4 at the top left; or reverse surface up, with figure 5, 6, 7, or 8 at the top left.

[Figure 43: a 10 by 10 simple grille with its perforations, the sides of the obverse surface numbered 1 to 4 and those of the reverse surface 5 to 8; the printed grille is illegible in the scan.]

The letters of the plain text are then inscribed in the cells disclosed by the apertures, following any prearranged route. In figure 44, the normal manner of writing, from left to right and from the top downwards, has been followed in the inscription, the message being ALL DESTROYERS OUTSIDE.

[Figure 44: the grille of figure 43 superimposed on the grid, with the message ALL DESTROYERS OUTSIDE inscribed in the disclosed cells; illegible in the scan.]

(2) The transcription process now follows. The cipher text is written down, the letters being taken by following any prearranged route, which must be perpendicular to the route of inscription, otherwise the letters will follow in plain-text order. In the following, the route is by columns from left to right.

Cryptogram:

LRTAD TSSER YOIDS ELOEU

(3) If the number of letters of the plain-text message exceeds the number of cells disclosed by one placement of the grille, the letters given by this placement are written down (in cryptographic order), and then the grille is placed in the next position on a fresh grid; the process is continued in this manner until the entire message has been cryptographed. The several sections of the cipher letters resulting from the placements of the grille on successive grids merely follow each other in the final cryptogram. In this manner of employment it is only necessary for the correspondents to agree upon the initial position of the grille and its successive positions or placements.
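A sketch of the simple-grille procedure follows. Since the perforations of figure 43 are not recoverable from the scan, the mask below is hypothetical; only the mechanics (inscription through the apertures by the normal manner of writing, transcription perpendicular to the inscription) follow the text.

# Hypothetical 5x5 simple grille; 1 marks an aperture.
MASK = [[1, 0, 1, 0, 1],
        [0, 1, 0, 1, 0],
        [1, 0, 1, 0, 1],
        [0, 1, 0, 1, 0],
        [1, 0, 1, 0, 1]]

def grille_encipher(message, mask):
    text = [c for c in message.upper() if c.isalpha()]
    n = len(mask)
    grid = [[None] * n for _ in range(n)]
    i = 0
    for r in range(n):              # inscription: normal manner of writing
        for c in range(n):
            if mask[r][c] and i < len(text):
                grid[r][c] = text[i]
                i += 1
    # transcription perpendicular to the inscription: down the columns
    return "".join(grid[r][c] for c in range(n) for r in range(n)
                   if grid[r][c] is not None)

print(grille_encipher("ATTACK AT DAWN", MASK))   # AKWADTANCATT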
c. It is obvious that by the use of a simple grille the letters of a message to be cryptographed may be distributed within an enveloping message consisting mostly of "dummy" text, inserted for the purpose of enabling the message to escape suppression in censorship. For example, suppose the grille shown in figure 43 is employed in position 1 and the message to be conveyed is ALL DESTROYERS OUTSIDE. The letters of this message are inscribed in their proper places on the grid, exactly as shown in figure 44. An "open" or disguising text is now to be composed, the latter serving as an envelope or "cover" for the letters of the secret text, which remain in the positions in which they fall on the grid. The open or disguising text, in other words, is built around or superimposed on the secret text. Note how this is done in figure 45, with an apparently innocent message reading:

I HAVE WORKED VERY WELL ALL DAY, TRYING TO GET EVERYTHING STRAIGHTENED UP BEFORE GOING ON MY NEXT TRIP SOUTH, BUT INSIDE TEN DAYS . . .

I H A V E W O R K E
D V E R Y W E L L A
L L D A Y T R Y I N
G T O G E T E V E R
Y T H I N G S T R A
I G H T E N E D U P
B E F O R E G O I N
G O N M Y N E X T T
R I P S O U T H B U
T I N S I D E T E N

Figure 45.

d. The foregoing method naturally requires the transmission of considerably more text than is actually necessary for conveying the message intended. Where questions of censorship are not involved, the method is therefore impractical. A modification of the method suggests itself in the use of a transparent sheet of paper superimposed upon a square or other figure in which the individual cells are irregularly numbered and the inscription process follows the sequence of numbers. An example is shown in figure 46, using the message ROCK CREEK BRIDGE WILL BE DESTROYED WHEN TAIL HAS CROSSED.
[Figure 46: a figure of 48 cells, irregularly numbered from 1 to 48, with the 48 letters of the message inscribed in the cells in the sequence of the numbers; the printed numbering is illegible in the scan.]

The transcription may now follow any prearranged route. The normal method of reading would produce the cryptogram beginning WCTEH OEERI, etc. It is obvious that the correspondents must possess designs with identically numbered cells.3

3 The system employed by the French Army in 1886 was of the nature here described.

99. Revolving Grilles

a. In this type of grille (see fig. 47a) the apertures are also formed by perforating a sheet of cross-section paper according to prearrangement, but these apertures are so distributed that when the grille is turned four times successively through angles of 90° and set in four grille positions on the grid, all the cells on the grid are disclosed in turn. (The preparation of such grilles is discussed in par. 103.) If letters are inserted in the cells so disclosed, then after a complete revolution of the grille every one of the cells of the grid will contain a letter and thus the grid will be completely filled. For this reason such a grille is also called a self-filling, or an automatic-completion, grille. The secrecy of messages enciphered by its means is dependent upon the distribution or position of the apertures, the sequence of grille positions on the grid (that is, whether in the order 1, 2, 3, 4 clockwise; or 1, 3, 4, 2; etc.), and the route followed in inscribing and transcribing the letters in the cells of the grid. For each position of the grille, one-fourth the total number of letters of the text is inscribed; hence it is convenient to refer to "sections" of the text, it being understood that each section consists of one-fourth the total number of letters.

b. There are two possible procedures so far as the inscription-transcription sequence is concerned. (1) The letters of the plain text may be inscribed in the cells of the grid through the apertures disclosed by the grille and then, when the grid has been completely filled, the grille removed, and the letters transcribed from the grid according to a prearranged route; or, (2) the letters of the plain text may first be inscribed in the cells of the grid according to a prearranged route and then the grille applied to the completely filled grid to give the sequence of letters forming the cipher text of the transcription process. The first method will be described in c below; the second in e below.
c. Taking the simplest manner of inscribing the letters, that is, from left to right and from the top downwards, the letters of the first section of the text are inscribed in the cells disclosed by the apertures, the grille being in the first position. This is shown in b of figure 47. The grille is then given one-fourth turn clockwise, bringing figure 2 to the top left. If the grille has been correctly prepared, none of the cells disclosed in the second grille position on the grid will be occupied by a letter. The letters of the second section are then inscribed, this being shown in c of figure 47. In d and e of figure 47, the results of inscribing the third and fourth sections, respectively, are shown. The letters of the cryptogram are then taken out of the completed grid by following any prearranged route of transcription. The cryptogram below has been transcribed by following down the columns in succession from left to right.

[Figure 47: a 10 by 10 revolving grille (a) and the grid after inscription of the first, second, third, and fourth sections of the text (b to e); the printed diagrams are illegible in the scan.]

Cryptogram:

LHICV YROOT WILHN FSOMT
HURTI TCULO ROEDA TMVUI
ESTEL YFRMU RNSFE FASES
ESEAT OIDTL YNOIN AHEAH
EDFOT NHSHH ETAMI YOSRE

d. To decryptograph such a message, the cipher letters are inscribed columnwise in a grid 10 by 10 (that is, one composed of 100 cells, 10 per side) and then the grille applied to the square in four consecutive positions corresponding to those used in cryptographing. The letters disclosed by each placement of the grille are written down as they appear, section after section.

e. The second manner of employing a revolving grille is merely the reciprocal of the first. The procedure followed in the first method to decryptograph a message is followed in the second method to cryptograph a message; and the procedure followed in the first method to cryptograph is followed in the second method to decryptograph.

100. Grilles of Other Geometric Forms

Grilles are not limited to square-shaped figures. They may be equilateral triangles, pentagons, hexagons, and so on. Any figure which can be pivoted upon a central point and which when revolved upon this pivot can be placed in a succession of homologous positions over a grid corresponding to the grille will serve equally well. A triangle affords three grille positions; a pentagon, five; and so on.

101. Polyphase Transposition by Grilles

One grille may be employed to inscribe the letters of the message on the grid, and a second, and different, grille employed to transcribe them from the grid to form the final text of the cryptogram. This would constitute a real double transposition method of great complexity. Polyphase transposition by a series of grilles is of course possible.
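The first manner of employing a revolving grille (par. 99c) is sketched below with a hypothetical 4 by 4 grille of capacity 16; the actual grille of figure 47 is not recoverable from the scan. The mask perforates exactly one cell of each four-cell rotation orbit, which is the defining property of a correctly prepared grille.

# Hypothetical 4x4 revolving grille of capacity 16.
MASK = [[1, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]

def rotate(mask):
    # quarter turn clockwise
    return [list(row) for row in zip(*mask[::-1])]

def revolving_encipher(message, mask):
    text = [c for c in message.upper() if c.isalpha()]
    n = len(mask)
    assert len(text) == n * n, "pad the text with nulls to the capacity"
    grid = [[None] * n for _ in range(n)]
    i = 0
    for _ in range(4):                 # the four positions of the grille
        for r in range(n):             # inscription: left to right, top down
            for c in range(n):
                if mask[r][c]:
                    grid[r][c] = text[i]
                    i += 1
        mask = rotate(mask)
    # transcription down the columns, from left to right
    return "".join(grid[r][c] for c in range(n) for r in range(n))

print(revolving_encipher("ATTACK POSTPONED X", MASK))   # ANEXTADTTKSPCPOO

Decryptographing simply runs the two routes in the opposite order, as paragraph 99d describes.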
102. Increasing the Security of Revolving Grilles

a. The total number of letters which a grille will exactly encipher is termed its capacity. If the number of letters of a message is always equal to the total capacity of the grille, this information is of great aid in solution by the enemy. For example, a message of 64 letters indicates a grille 8 by 8 with 16 apertures; one of 144 letters, a grille 12 by 12 with 36 apertures; and so on. There are, however, methods of employing a grille so that it will serve to encipher messages the lengths of which are greater or less than the capacity of the grille.

b. When the total number of letters is less than the capacity of the grille, no modification in method of use is necessary. Encipherment of such a message comes to a close when the last plain-text letter has been inscribed. In decryptographing such a message, the recipient must strike out, on the grid upon which he is to inscribe the cipher text, a number of cells corresponding to the difference between the number of letters of the text as received and the total capacity of the grille. The location of the cells to be thus eliminated must be prearranged, and it is best usually to strike them off from the final positions of the grid.

[Figure 48: (a) a small revolving grille of sixteen-letter capacity, and (b) a composite grid of four small grids, with twelve cells struck off from the last column and row and the remaining cells numbered 1 to 52 to show the positions occupied by the letters of the text; the printed diagrams are illegible in the scan.]

c. When the total number of letters is equal to or greater than the capacity of the grille, a grid of greater capacity than that of the grille can be prepared, on which the grille may be positioned several times, thus forming a large or composite grid composed by the juxtaposition of the several small grids. If there are a few cells in excess of the actual number required, these may be struck off from the large grid at prearranged points, for example, from the last column and row, as shown in b of figure 48. The grille is then placed in its first position in turn on each of the component grids, then in its second position, and so on. An example will serve to illustrate. A message of fifty-two letters is to be enciphered with the grille shown in a of figure 48, the capacity of which is sixteen letters. The number of letters of the message being greater than three times sixteen, the composite grid must be composed of four small grids containing a total of sixty-four cells. Therefore, twelve of these cells must be eliminated. These are shown in b of figure 48, together with the numbers indicating the positions occupied by the letters of the text.

103. Construction of Revolving Grilles

a. There are several ways of preparing revolving grilles, of which the one described below is the most simple. All methods make use of cross-section paper.

b. Suppose a revolving grille with a capacity of 100 letters is to be constructed. The cells of a sheet of cross-section paper 10 by 10 are numbered consecutively in bands from the outside to the center, in the manner shown in a of figure 49. It will be noted that in each band, if n is the number of cells forming one side of the band, the highest number assigned to the cells in the band is n - 1.

c. It will be noted also that in each band there is a quadruplication of each digit: the figure 1 appears four times, the figure 2 appears four times, and so on. From each receding band there is to be cut out (n - 1) cells: from the outermost band, therefore, nine cells are to be cut out; from the next band, seven; from the next, five; from the next, three; and from the last, one cell. In determining specifically what cells are to be cut out in each band, the only rules to be observed are these: (1) one and only one cell bearing the figure 1 is to be cut out, one and only one cell bearing the figure 2 is to be cut out, and so on; (2) as random a selection as possible is to be made among the cells available for selection for perforation. In b of figure 49 is shown a sample grille prepared in this way.

d. If the side of the grille is composed of an odd number of cells, the innermost band will consist of but one cell. In such case this central cell must not be perforated.

e. It is obvious that millions of differently perforated grilles may be constructed. Grilles of fixed external dimensions may be designated by indicators, as was done by the German Army in 1915 when this system was employed. For example, the FRITZ grille might indicate a 10 by 10 grille, serving to encipher messages of about 100 letters; the ALBERT grille might indicate a 12 by 12 grille, serving to encipher messages of about 144 letters; and so on. Thus, with a set of grilles of various dimensions, all constructed by a central headquarters and distributed to lower units, systematic use of grilles for messages of varying lengths can be afforded.

f. A system for designating the positions of the perforated cells of a grille may be established between correspondents, so that the necessity for physical transmission of grilles for intercommunication is eliminated. An example of a possible system is that which is based upon the coordinate method of indicating the perforations. The columns from left to right and the rows from bottom to top are designated by the letters A, B, C, . . . Thus, the grille shown in b of figure 49 would have the following formula:

ADG; BBEH; CDJ; DEG; EACH; FFI; GE; HBDHJ; IDG; JABFI.

[Figure 49: (a) the 10 by 10 sheet with its cells numbered in bands from the outside to the center, and (b) a sample grille prepared from it; the printed diagrams are illegible in the scan.]
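Paragraph 103's construction lends itself to a short program: number each band's cells so that the four homologous cells share a number, then perforate exactly one cell of each numbered quadruple at random. The sketch below also checks the self-filling property; the function names are the writer's own.

import random

def rotate(mask):
    return [list(row) for row in zip(*mask[::-1])]

def make_revolving_grille(n, rng=random):
    assert n % 2 == 0, "with an odd side, the central cell is never perforated"
    mask = [[0] * n for _ in range(n)]
    for b in range(n // 2):
        m = n - 2 * b                      # side length of this band
        for j in range(m - 1):             # the numbers 1 .. m-1 of the band
            quad = [(b, b + j),            # the four homologous cells,
                    (b + j, n - 1 - b),    # one per quarter turn
                    (n - 1 - b, n - 1 - b - j),
                    (n - 1 - b - j, b)]
            r, c = rng.choice(quad)        # as random a selection as possible
            mask[r][c] = 1
    return mask

def self_filling(mask):
    # every cell of the grid must be disclosed exactly once per revolution
    n = len(mask)
    seen = [[0] * n for _ in range(n)]
    for _ in range(4):
        for r in range(n):
            for c in range(n):
                seen[r][c] += mask[r][c]
        mask = rotate(mask)
    return all(v == 1 for row in seen for v in row)

grille = make_revolving_grille(10)
print(sum(map(sum, grille)), self_filling(grille))   # 25 True

The aperture counts per band come out as 9, 7, 5, 3, 1, in agreement with paragraph c, for a total of 25, one-fourth of the 100-cell capacity.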
g. Given the formula, the eight corners of the grille can be labeled in various ways by prearrangement; but the simplest method is that shown in connection with b of figure 49. Then the initial position of the grille can be indicated by the number which appears at the upper left-hand corner when the grille is placed on the grid, ready for use. Thus, position 1 indicates that the grille is in position with the figure 1 at the upper left-hand corner; position 3, with the figure 3 at the upper left-hand corner; etc.

h. The direction of revolving the grille can be clockwise or counterclockwise, so that correspondents must make arrangements beforehand as to which direction is to be followed.

i. Revolving grilles can be constructed so that they have two operating faces, an obverse and a reverse face. They may be termed revolving-reversible grilles. The principles of their construction merely involve a modification of those described in connection with ordinary revolving grilles. A revolving-reversible grille will have eight possible placement indicators; usually positions 1 and 5, 2 and 6, and so forth, correspond in this obverse-reverse relationship, as shown in figure 43.

j. The principles of construction described above apply also to grilles of other shapes, such as triangles, pentagons, and so forth.

104. Nonperforated Grilles

a. All the effects of a grille with actual perforations may be obtained by the modified use of a nonperforated grille. Let the cells that would normally be cut out in a grille be indicated merely by crosses thereon, and then on a sheet of cross-section paper let the distribution of letters resulting from each placement of the grille on a grid be indicated by inserting crosses in the appropriate cells, as shown in figure 50.

[Figures 50a and 50b: the grille with its "apertures" marked by crosses, and the rows of cells showing the distribution of crosses for the four grille positions; the printed diagrams are illegible in the scan.]

b. Note should be made of the fact that in figure 50b the distribution of crosses shown in the third row of cells is the reverse of that shown in the first; the distribution shown in the fourth row is the reverse of that shown in the second. This rule is applicable to all revolving grilles and is of importance in solution.

c. If the letters of the text are now inscribed (normal manner of writing) in the cells not eliminated by crosses, and the letters transcribed from columns to form the cryptogram, the results are the same as though a perforated grille had been employed. Thus:

[Figure 50c: the text inscribed in the cells left open by the crosses; the printed diagram is illegible in the scan.]

Cryptogram:

EWCRA EOLDA RDDAT Y

d. It is obvious that a numerical key may be applied to effect a columnar transposition in the foregoing method, giving additional security.

e. The method is applicable to grilles of other shapes, such as triangles, pentagons, hexagons, octagons, etc.

f. In figure 50c it is noted that there are many cells that might be occupied by letters but are not. It is obvious that these may be filled with nulls so that the grid is completely filled with letters. Long messages may be enciphered by the superposition of several diagrams of the same dimensions as figure 50c.

105. Rectangular or "Post Card" Grilles

a. The grille shown in figure 51 differs from the ordinary revolving grille in that (1) the apertures are rectangular in shape, and are greater in width, thus permitting of inscribing several letters in the cells disclosed on the grid by each perforation of the grille; and (2) the grille itself admits of but two positions with its obverse side up and two with its reverse side up.
In figure 51 the apertures are numbered in succession from top to bottom in four series, each applying to one position of the grille; the numbers in parentheses apply to the apertures when the grille is reversed; the numbers at the corners apply to the four positions in which the grille may be placed upon the grid.

[Figure 51: a rectangular "post card" grille with its wide apertures numbered in four series; the printed grille is illegible in the scan.]

b. One of the ways in which such a grille may be used is to write the first letter of the text at the extreme left of the cell disclosed by aperture 1, the second letter at the extreme left of the cell disclosed by aperture 2, and so on. The grille is retained in the same position and the 17th letter is written immediately to the right of the 1st, the 18th immediately to the right of the 2d, and so on. Depending upon the width of the apertures, and thus of the cells disclosed on the grid, 2, 3, 4 . . . letters may be inserted in these cells. When all the cells have been filled, the grille may then be placed in the second position, then the third, and finally the fourth.

c. Another way in which the grille may be used is to change the position of the grille after the 16th letter has been inserted, then after the 32d, 48th, and 64th; the 65th letter is then inserted to the right of the 1st, the 81st to the right of the 17th, and so on until the grid is completed.

d. Whole words may, of course, be inserted in the cells disclosed by the apertures, instead of individual letters, but the security of word transposition is much lower than that of letter transposition.

e. The text of the grid may be transcribed (to form the cryptogram) by following any prearranged route.

f. The successive positions of a post card grille may be prearranged. The order 1, 2, 3, 4 is but one of 24 different sequences in which it may be superimposed upon the grid.

g. A modification of the principles set forth in paragraph 103, dealing with the construction of revolving grilles, is applied in the construction of rectangular or "post card" grilles. Note the manner in which the cells in a of figure 51 are assigned numbers; homologous cells in each band receive the same number. In a of figure 52 there are three bands, numbered from 1 to 8, 9 to 16, and 17 to 24. Then in each band one and only one cell of the same numbered set of four cells is cut out. For example, if cell 1a is selected for perforation from band 1 (as indicated by the check mark in that cell), then a cross is written in the other three homologous cells, 1b, c, and d, to indicate that they are not available for selection for perforation. Then a cell bearing the number 2 in band 1 is selected, for example 2c, and at once 2a, b, and d are crossed off as being ineligible for selection, and so on. In c of figure 52 is shown a grille as finally prepared, the nonshaded cells representing apertures.
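The filling order of paragraph b (one letter to each aperture in turn, each new round written immediately to the right of the last) can be sketched as follows. The slot count and width are illustrative, since figure 51's layout is not recoverable from the scan, and only a single position of the grille is modeled.

# Hypothetical "post card" grille: sixteen wide apertures, four letters each.
SLOTS, PER_SLOT = 16, 4

def postcard_fill(message):
    text = [c for c in message.upper() if c.isalpha()]
    assert len(text) <= SLOTS * PER_SLOT, "one grille position only"
    while len(text) < SLOTS * PER_SLOT:
        text.append("X")                   # nulls to fill every slot
    cells = [[] for _ in range(SLOTS)]
    i = 0
    while i < len(text):
        for cell in cells:                 # one letter to each slot in turn;
            cell.append(text[i])           # each round lands immediately to
            i += 1                         # the right of the previous letter
    return ["".join(cell) for cell in cells]

demo = postcard_fill("".join(chr(65 + i % 26) for i in range(64)))
print(demo[0], demo[1])                    # AQGW BRHX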
h. The grille, c of figure 52, is a "six-column" one; that is, the cells form six columns. It is obvious that grilles with any even number of columns of cells are possible. The number of apertures in each band should be equal, and this number multiplied by the number of bands and then by 4 should equal the capacity of the grille. In the case of the one shown in c of figure 52, the capacity is 8 by 3 by 4, or 96 cells; this is the same as is obtained merely by multiplying the height (in cells) by the number of columns, 16 × 6 = 96. If four letters are inscribed in each rectangle, the capacity of the grille in terms of letters is 384. The grid in this case would, after completion, present 24 columns of letters, to which a numerical key for a second transposition can be applied in transcription to produce the final text of the cryptogram.

[Figure 52: the construction diagram for a rectangular grille of three bands (a and b), and the finished six-column grille with the nonshaded cells representing apertures (c); the printed diagrams are illegible in the scan.]

106. Indefinite or Continuous Grilles

a. In his Manual of Cryptography, Sacco illustrates a type of grille which he has devised and which has elements of practical importance. An example of such a grille is shown in figure 53. This grille contains 20 columns of cells, and each column contains 5 apertures distributed at random in the column. There are therefore 100 apertures in all, and this is the maximum number of letters which may be enciphered in one position of the grille. The plain text is inscribed vertically, from left to right, using only as many columns as may be necessary to inscribe the complete message. A 25-letter message would require but 5 columns. To form the cryptogram the letters are transcribed horizontally from the rows, taking the letters from left to right as they appear in the apertures. If the total number of letters is not a multiple of 5, sufficient nulls are added to make it so. In decryptographing, the total number of letters is divided by 5, this giving the number of columns employed. The cipher text is inscribed from left to right and top downwards in the apertures in the rows of the indicated number of columns, and the plain text then reappears in the apertures in the columns, reading downward and from left to right. (It is, of course, not essential that nulls be added in the encipherment to make the length of the cryptogram an exact multiple of 5, for the matter can readily be handled even if this is not done. In decipherment the total number of letters divided by 5 will give the number of complete columns; the remainder left over from the division will give the number of cells occupied by letters in the last column on the right.)

[Figure 53a: Sacco's indefinite grille of 20 columns with 5 apertures per column, the position-designating letter "A" at the upper left and "B" (inverted) at the lower right; the printed grille is illegible in the scan.]
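A sketch of Sacco's indefinite grille follows. The height of the columns in figure 53 is not recoverable from the scan, so ten cells per column (with five apertures each) is an assumption; the grille is generated at random rather than copied from the figure, and the nulls here are X's rather than the E and A of the book's example.

import random

def make_grille(cols=20, rows=10, per_col=5, rng=random):
    # Each column receives per_col apertures at random heights.
    mask = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        for r in rng.sample(range(rows), per_col):
            mask[r][c] = 1
    return mask

def sacco_encipher(message, mask, per_col=5):
    text = [ch for ch in message.upper() if ch.isalpha()]
    while len(text) % per_col:
        text.append("X")                   # nulls to a multiple of five
    used = len(text) // per_col            # number of columns employed
    grid, i = {}, 0
    for c in range(used):                  # inscribe vertically, left to right
        for r in range(len(mask)):
            if mask[r][c]:
                grid[r, c] = text[i]
                i += 1
    # transcribe horizontally from the rows, left to right
    return "".join(grid[r, c] for r in range(len(mask)) for c in range(used)
                   if (r, c) in grid)

def sacco_decipher(cipher, mask, per_col=5):
    used = len(cipher) // per_col
    cells = [(r, c) for r in range(len(mask)) for c in range(used) if mask[r][c]]
    grid = dict(zip(cells, cipher))        # refill in transcription order
    return "".join(grid[rc] for rc in sorted(cells, key=lambda rc: (rc[1], rc[0])))

random.seed(1)                             # stands in for an agreed grille
g = make_grille()
ct = sacco_encipher("AM RECEIVING HEAVY MACHINE GUN FIRE FROM HILL SIX TWO ZERO", g)
print(sacco_decipher(ct, g))               # AMRECEIVING...ZEROXX

The 48-letter message plus two nulls occupies exactly ten columns, matching the shape of the book's example.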
b. Such a grille can assume four positions, two obverse and two reverse. Arrangements must be made in advance as to the sequence in which the various positions will be employed. That is why the grille shown in figure 53a has the position-designating letter "A" in the upper left-hand corner and the letter "B" (upside down) in the lower right-hand corner. On the reverse side of the grille would be the position-designating letters "C" and "D."

c. Figure 53b shows how a message is enciphered.

Message:

AM RECEIVING HEAVY MACHINE GUN FIRE FROM HILL SIX TWO ZERO.

[Figure 53b: the first ten columns of the grille with the message inscribed vertically in the apertures; illegible in the scan.]

Cryptogram:

EGIIX FNNEA YTHFL RIRMO IOLWE MERVA ERMAH EGSOA ICUEC NVHIZ

(The letters E and A in the 10th column are nulls. Columns 11 to 20 are not used at all, the irregular right-hand edge of the grille merely indicating that this portion of the grille remains vacant.)

Section V. MISCELLANEOUS TRANSPOSITION SYSTEMS

107. Complex Route Transposition

a. In figure 54 a route for inscribing letters within a rectangle is indicated by a sequence of numbers. The initial point may be at any of the four corners of the rectangle, or it may be at any other point, as prearranged. The letters may be inscribed to form the rectangle by following the route indicated and then transcribed from the rectangle to form the cryptogram by following another route; or the letters may be inscribed according to one route and transcribed according to the numerical route indicated.

b. A variation of the foregoing is that illustrated in figure 55, wherein the inscription follows the route shown by the arrows. The initial point of inscription is indicated by the figure 1, and the final point, by the figure 2.

c. In the foregoing case, the route is a succession of the moves made by the king in the game of chess; it forms the so-called "king's tour", in which the playing piece makes a complete or reentrant journey covering all cells of the chessboard, each cell being traversed only once. A route composed of a succession of moves made by the knight, or the so-called "knight's tour", is also possible, but in order to be practical a grid with the cells numbered in succession would have to be prepared for the correspondents, since millions of different reentrant knight's tours can be constructed4 on a chessboard of the usual 64 cells.

[Figures 54 and 55: a numbered inscription route within a rectangle, and a "king's tour" route indicated by arrows; the printed diagrams are illegible in the scan.]

4 See Ball, W. W. R., Mathematical Recreations and Essays, London, 1928.
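A sketch of complex route transposition follows. The particular routes of figures 54 and 55 are not recoverable from the scan, so a simple boustrophedon inscription route (in the spirit of the king's tour) is used here, with transcription straight down the columns; any pair of prearranged routes would serve.

def snake_route(rows, cols):
    # A reentrant inscription route: left to right, then back again.
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield r, c

def route_encipher(message, rows, cols):
    text = [c for c in message.upper() if c.isalpha()]
    assert len(text) == rows * cols, "pad with nulls to fill the rectangle"
    grid = [[None] * cols for _ in range(rows)]
    for (r, c), ch in zip(snake_route(rows, cols), text):
        grid[r][c] = ch
    # transcribe by a different route: straight down the columns
    return "".join(grid[r][c] for c in range(cols) for r in range(rows))

print(route_encipher("ATTACK HAS BEEN POSTPONED XXX", 5, 5))
# ABENETSEODTANPXAHPTXCKOSX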
108. Transposition of Groups of Letters, Syllables, and Words

There is nothing in the previously described methods which precludes the possibility of their application to pairs of letters, sets of three or more letters, or even syllables and whole words. Nor, of course, is their use limited to operations with plain text; they may be applied as secondary steps after a substitutive process has been completed (see sec. I, ch. 10).

109. Disguised Transposition Methods

a. The system often encountered in romances and mystery stories, wherein the message to be conveyed is inserted in a series of nonsignificant words constructed with the purpose of avoiding or evading suspicion, is a species of this form of "open" cryptogram involving transposition. The "open" or enveloping, apparently innocent text may be designated as the external text; the secret or cryptographic text may be designated as the internal text. A complicated example of external or open and internal or secret text is that shown in paragraph 98.

b. Little need be said of the method based upon constructing external text the letters of which, at prearranged positions or intervals, spell out the internal text. For example, it may be prearranged that every fourth letter of the external text forms the series of letters for spelling out the internal text, so that only the 4th, 8th, 12th . . . letters of the external text are significant. The same rule may apply to the complete words of the external text, the n-th, 2n-th, 3n-th . . . words forming the internal text. The preparation of the external text in a suitable form to escape suspicion is not so easy as might be imagined, when efficient, experienced, and vigilant censorship is at work. Often the paragraph or passage containing the secret text is sandwiched in between other paragraphs added to pad the letter as a whole with text suitable to form introductory and closing matter, to help allay suspicion as to the presence of secret, hidden text.

c. A modification of the foregoing method is that in which the 1st, 3d, 5th, . . . words of a secret message are transmitted at one time or by one agency of communication, and the 2d, 4th, 6th, . . . words of the message are transmitted at another time or by another agency of communication. Numerous variations of this scheme will suggest themselves, but they are not to be considered seriously as practical methods of secret intercommunication.

d. Two correspondents may agree upon a specific size of paper and a special diagram drawn upon this sheet, the lines of which pass through the words or letters of the internal text as they appear in the external text. For example, the legs of an equilateral triangle drawn upon the sheet of paper can serve for this purpose. This method is practicable only when messages can be physically conveyed by messenger, by the postal service, or by telephotographic means. Many variations of this basic scheme may perhaps be encountered in censorship work.

110. Cipher Machines for Effecting Transposition

These may be dismissed with the brief statement that if any exist today they are practically unknown.
A few words are devoted to the subject in paragraph 147.
{"text": "\\section*{High  Dimensional Probability Topics}\r\n\r\nVerysin's Book :\r\n\r\nHigh-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. It is the first to integrate theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form the core, and it covers both classical results such as Hoeffding's and Chernoff's inequalities and modern developments such as the matrix Bernstein's inequality. It then introduces the powerful methods based on stochastic processes, including such tools as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.\r\n\r\n- Concentration of sums of independent random variables\r\n- Random vectors in high dimensions\r\n- Random matrices\r\n- Concentration without independence\r\n- Quadratic forms, symmetrization and contraction\r\n- Random processes\r\n- Chaining\r\n- Deviations of random matrices and geometric consequences\r\n- Sparse recovery\r\n- Dvoretzky-Milman's theorem\r\n\r\n%https://blogs.msdn.microsoft.com/ericlippert/2005/06/20/high-dimensional-spaces-are-counterintuitive-part-five/\r\n\r\n%https://stats.stackexchange.com/questions/99171/why-is-euclidean-distance-not-a-good-metric-in-high-dimensions\r\n\r\n%https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf\r\n\r\n%https://bib.dbvis.de/uploadedFiles/155.pdf\r\n\r\n%https://link.springer.com/chapter/10.1007/3-540-44503-X_27\r\n\r\n%https://www.stat.berkeley.edu/~mmahoney/\r\n\r\n%https://www.stat.berkeley.edu/~mmahoney/pubs/laplace-cvpr14.pdf ", "meta": {"hexsha": "03fa06d97dffcc310bc3804bc35eac4d89780793", "size": 2084, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "HDP.tex", "max_stars_repo_name": "brucebcampbell/machine-learning-notes", "max_stars_repo_head_hexsha": "6c5229ef7b943455a4e890f0ec62764adf9a2c40", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HDP.tex", "max_issues_repo_name": "brucebcampbell/machine-learning-notes", "max_issues_repo_head_hexsha": "6c5229ef7b943455a4e890f0ec62764adf9a2c40", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HDP.tex", "max_forks_repo_name": "brucebcampbell/machine-learning-notes", "max_forks_repo_head_hexsha": "6c5229ef7b943455a4e890f0ec62764adf9a2c40", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.4666666667, "max_line_length": 1152, "alphanum_fraction": 0.8051823417, "num_tokens": 450, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7461389817407016, "lm_q2_score": 0.743167997235783, "lm_q1q2_score": 0.5545066127197836}}
{"text": "\\chapter{Schemes}\n\n\\section{BLISS Software Implementation}\n\n\\subsection{Introduction}\nThe emphasis (at this stage) of the library is to create a flexible software library on which various lattice-based cryptographic schemes can be developed and implemented to a level of production quality. It is a requirement that the library is highly portable and used as a research and development tool, therefore highly optimized inline assembly and extracting every cycle of performance are not the goals, but are possible. It is a requirement that confidence in \\textit{libsafecrypto} is achieved by using thorough testing of the software (unit testing, functional testing and known-answer tests) together with a build system that integrates this testing into its build process. Also, where cryptographic primitives are required they will be drawn from well-established and well-respected implementations in the public domain.\n\nThe first algorithm to be implemented with the \\textit{libsafecrypto} library is BLISS, which is a lattice-based digital signature scheme based on the Ring-LWE problem. The BLISS software implementation is originally based on the hilabliss \\url{(https://github.com/mjosaarinen/hilabliss)} and BLZZRD (\\url{https://github.com/mjosaarinen/blzzrd}) implementations created by Markku-Juhani O. Saarinen. This in turn is based on the BLISS-B variant of BLISS \\url{(https://eprint.iacr.org/2014/874}) by L\u00e9o Ducas, derived from the original paper \\url{(https://eprint.iacr.org/2013/383)}.\n\n\\subsection{Key Generation}\n\n\\begin{itemize}\n\\item $f$, $g$ are uniform random polynomials with a specific number of $\\pm 1$ and $\\pm 2$ coefficients\n\\item $g = 2g + 1$\n\\item $a = g / f$, if $f$ is not invertible then \\textbf{Restart}.\n\\end{itemize}\n\nIn summary, the private key $(f, g)$ and public key $a$ are generated. In a practical implementation the following is also performed:\n\n\\begin{itemize}\n\\item To save cycles the public key is stored and distributed in the NTT domain. This saves 1 NTT in the signature and verification schemes. \\textbf{What if someone finds something better than the NTT?}\n\\item The sign of $g$ must be inverted \\textbf{(WHY???)}. From the hilabliss/BLZZRD original this is done by INTT, sign inversion, followed by NTT. The same can be achieved by simply performing the sign inversion in the NTT domain - giving an approx. 25\\% boost in performance by saving a single NTT and INTT. I think this should be regarded as a bug fix rather than an optimisation\n\\item The two private keys are sparse polynomials and are ripe for compression, whereas the public key is randomly distributed across the range of the modulus and lossless compression tends to make it bigger. Note that the $g$ polynomial contains only the coefficients $\\pm 2$ and $\\pm 4$ with the exception of the first coefficient - this is exploited by all of the entropy coders. \n\\end{itemize}\n\n\\subsection{Signatures}\n\n\\begin{itemize}\n\\item Generate $y_1$ and $y_2$ using a Gaussian sampler.\n\\item $u = y_1 * a + y_2 \\bmod 2q$\n\\item $c = H(\\lfloor u \\rceil_d \\bmod p, \\mu)$\n\\item Choose a random bit $b$\n\\item $z_1 = y_1 + (-1)^b s_1 c$\n\\item $z_2 = y_2 + (-1)^b s_2 c$\n\\item \\textbf{Continue} if a randomly chosen value between 0.0 and 1.0 is greater than a rejection threshold computed using a parameter set factor $M$  and the scalar product of the polynomials $s$, $c$, $y_1$ and $y_2$. 
\n\\subsection{Signatures}\n\n\\begin{itemize}\n\\item Generate $y_1$ and $y_2$ using a Gaussian sampler.\n\\item $u = y_1 * a + y_2 \\bmod 2q$\n\\item $c = H(\\lfloor u \\rceil_d \\bmod p, \\mu)$\n\\item Choose a random bit $b$\n\\item $z_1 = y_1 + (-1)^b s_1 c$\n\\item $z_2 = y_2 + (-1)^b s_2 c$\n\\item \\textbf{Continue} if a randomly chosen value between 0.0 and 1.0 is greater than a rejection threshold computed using a parameter set factor $M$ and the scalar product of the polynomials $s$, $c$, $y_1$ and $y_2$. Otherwise \\textbf{Restart}.\n\\item $z_2 = (\\lfloor u \\rceil_d - \\lfloor u - z_2 \\rceil_d) \\bmod p$\n\\item \\textbf{Output} $(z_1, z_2, c)$\n\\end{itemize}\n\nThe main bottlenecks in this operation are the NTT/INTT, the hash function \\textit{H()} and of course the rejection sampling, which causes the signature to retry with an expected repetition rate associated with the parameter set variables (e.g. 1.6 with BLISS-B-IV). To overcome these performance issues the following measures have been taken:\n\n\\subsubsection{Hash Functions}\n\n\\begin{itemize}\n\\item We've provided a choice of hashing functions that offer different performance and security properties (SHA-2 \\cite{sha2_gladman}, SHA-3 \\cite{tiny_sha3}, Whirlpool \\cite{whirlpool} and BLAKE2-B \\cite{blake2}). Currently it is fixed to SHA3-512, but this can be overridden with the relevant flags passed to the \\textit{create()} function. The security of the hash must be commensurate with the security of the signature scheme; therefore there is a possibility that we can use smaller hash blocks with lower security parameter sets rather than the 512-bit message digest that is used by default with all parameter sets.\n\\begin{itemize}\n  \\item BLAKE2-B and SHA2 offer much better performance than SHA3, with Whirlpool not far behind (see Figure \\ref{fig:bliss_b_hash_comparison}).\n\n\\pgfplotsset{compat=1.13,width=14cm,height=12cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=0pt, xmin=0, ytick=data, enlarge y limits=0.08, symbolic y coords={SHA3-512, SHA3-384, SHA3-256, SHA3-224, SHA2-512, SHA2-384, SHA2-256, SHA2-224, BLAKE2-512, BLAKE2-384, BLAKE2-256, BLAKE2-224, WHIRLPOOL-512}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(0.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot\n  coordinates\n    {(5053,SHA3-512) (5608,SHA3-384) (5176,SHA3-256) (5309,SHA3-224)\n     (7517,SHA2-512) (7299,SHA2-384) (6467,SHA2-256) (6431,SHA2-224)\n     (7790,BLAKE2-512) (7743,BLAKE2-384) (7368,BLAKE2-256) (7242,BLAKE2-224)\n     (6815,WHIRLPOOL-512)};\n\n\\addplot\n  coordinates\n    {(14418,SHA3-512) (17687,SHA3-384) (15486,SHA3-256) (16075,SHA3-224)\n     (30569,SHA2-512) (30540,SHA2-384) (23177,SHA2-256) (22621,SHA2-224)\n     (33619,BLAKE2-512) (34045,BLAKE2-384) (29763,BLAKE2-256) (29366,BLAKE2-224)\n     (24878,WHIRLPOOL-512)};\n\\legend{Signatures (per sec), Verifications (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{BLISS-B-IV performance with various hash functions (Intel i7 6700 CPU @ 3.4GHz)}\n\\label{fig:bliss_b_hash_comparison}\n\\end{figure}\n\n  \\item BLAKE2-B lost out to Keccak in the SHA3 NIST competition, but it is specified in RFC 7693.\n\n\\end{itemize}\n\\item A standalone performance analysis of the various hashing functions integrated into \\textit{libsafecrypto} is provided in Figure \\ref{fig:bliss_b_bytes_per_sec}:\n\\end{itemize}\n\n\n\\pgfplotsset{compat=1.13,width=14cm,height=22cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=0pt, xmin=0, xmax=1000000000, ytick=data, enlarge y limits=0.05, symbolic y coords={SHA3-512, SHA3-384, SHA3-256, SHA3-224, SHA2-512, SHA2-384, SHA2-256, SHA2-224, BLAKE2-512, BLAKE2-384, BLAKE2-256, BLAKE2-224, WHIRLPOOL-512}, y tick label style={/pgf/number format/1000 sep=}]\n\\addplot\n  coordinates\n    {(41733600,SHA3-512) (66453490,SHA3-384) (33580860,SHA3-256) (26117498,SHA3-224) (90001193,SHA2-512) (111962015,SHA2-384) (137634227,SHA2-256) 
(133880238,SHA2-224) (318378956,BLAKE2-512) (245563664,BLAKE2-384) (143380969,BLAKE2-256) (341531003,BLAKE2-224) (63398434,WHIRLPOOL-512)};\n\\addplot\n  coordinates\n    {(67311617,SHA3-512) (47430787,SHA3-384) (60777654,SHA3-256) (39262358,SHA3-224) (169106491,SHA2-512) (176718677,SHA2-384) (246871771,SHA2-256) (250791574,SHA2-224) (577727014,BLAKE2-512) (582344548,BLAKE2-384) (589684613,BLAKE2-256) (591718794,BLAKE2-224) (131113251,WHIRLPOOL-512)};\n\\addplot\n  coordinates\n    {(81599679,SHA3-512) (76776751,SHA3-384) (62408720,SHA3-256) (43766790,SHA3-224) (217818471,SHA2-512) (214763352,SHA2-384) (335508930,SHA2-256) (336483310,SHA2-224) (718552357,BLAKE2-512) (729769587,BLAKE2-384) (738456684,BLAKE2-256) (732518714,BLAKE2-224) (149474568,WHIRLPOOL-512)};\n\\addplot\n  coordinates\n    {(87154415,SHA3-512) (81770554,SHA3-384) (36837864,SHA3-256) (44490799,SHA3-224) (231389170,SHA2-512) (230322287,SHA2-384) (372942248,SHA2-256) (373587750,SHA2-224) (771918396,BLAKE2-512) (748316921,BLAKE2-384) (788611685,BLAKE2-256) (783909562,BLAKE2-224) (158717452,WHIRLPOOL-512)};\n\\legend{128 bits, 512 bits, 2048 bits, 8192 bits}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Bytes per second performance of hash functions with varying message lengths [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_bytes_per_sec}\n\\end{figure}\n\n\\begin{itemize}\n\\item Profiling of the functional test executable for BLISS-B in \\textit{valgrind/kcachegrind} shows that the performance bottlenecks are principally the hash function and the NTT, therefore small improvements to the associated functions provide reasonable performance gains.\n\n\\begin{itemize}\n\n  \\item Markku's Keccak/SHA3 has been optimised with some measures to manually perform loop unrolling so that the compiler doesn't have to use high (and potentially unstable) optimisation levels to achieve the same level of loop unrolling. 
However, this will be at the cost of an increased image size.\n\n\\begin{verbatim}\n#ifdef SHA3_UNROLLED\n    bc[0] = st[0] ^ st[0 + 5] ^ st[0 + 10] ^ st[0 + 15] ^ st[0 + 20];\n    bc[1] = st[1] ^ st[1 + 5] ^ st[1 + 10] ^ st[1 + 15] ^ st[1 + 20];\n    bc[2] = st[2] ^ st[2 + 5] ^ st[2 + 10] ^ st[2 + 15] ^ st[2 + 20];\n    bc[3] = st[3] ^ st[3 + 5] ^ st[3 + 10] ^ st[3 + 15] ^ st[3 + 20];\n    bc[4] = st[4] ^ st[4 + 5] ^ st[4 + 10] ^ st[4 + 15] ^ st[4 + 20];\n#else\n    for (i = 0; i < 5; i++) {\n        bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15] ^ st[i + 20];\n    }\n#endif\n\\end{verbatim}\n\n  \\item Small indexing calculations in SHA-3 that are used repeatedly in loops were introducing a large latency, therefore these have been replaced with 5 small LUTs to store the precomputed values.\n\n\\begin{verbatim}\n    int i4mod5[5] = {4, 0, 1, 2, 3};\n    int i1mod5[5] = {1, 2, 3, 4, 0};\n\n    ...\n\n    for (i = 0; i < 5; i++) {\n        t = bc[i4mod5[i]] ^ ROTL64(bc[i1mod5[i]], 1);\n        for (j = 0; j < 25; j += 5) {\n            st[j + i] ^= t;\n        }\n    }\n\\end{verbatim}\n\nAs opposed to the original:\n\n\\begin{verbatim}\n    for (i = 0; i < 5; i++) {\n        t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1);\n        for (j = 0; j < 25; j += 5) {\n            st[j + i] ^= t;\n        }\n    }\n\\end{verbatim}\n\\end{itemize}\n\\end{itemize}\n\n\n\\clearpage\n\\subsubsection{Signature Restart}\n\\begin{itemize}\n\\item To improve the performance of the \\textbf{Restart} if the rejection threshold is not met we have implemented a cheap but effective trick - every other time the signature algorithm is restarted the two random polynomials (sampled from a Gaussian distribution in the previous iteration) are simply swapped rather than re-generated, gaining approx. 10\\%-25\\% more signatures per second as shown in Figure \\ref{fig:bliss_b_gaussian_swapping}. A sketch of the pointer swap is given after this list.\n\n\\pgfplotsset{compat=1.13,width=15cm,height=7cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar, xmin=0, xmax=12500, xtick={0,2000,4000,6000,8000,10000}, ytick=data, enlarge y limits=0.5, symbolic y coords={Without Swapping, With Swapping}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(0.5,-0.15)},anchor=north,legend columns=-1},]\n\\addplot coordinates {(9917,Without Swapping) (10877,With Swapping)};\n\\addplot coordinates {(5654,Without Swapping) (7061,With Swapping)};\n\\addplot coordinates {(8143,Without Swapping) (9746,With Swapping)};\n\\addplot coordinates {(5871,Without Swapping) (6444,With Swapping)};\n\\legend{BLISS-B-I, BLISS-B-II, BLISS-B-III, BLISS-B-IV}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Signatures per second performance of BLISS-B with no entropy coding and Restarts with/without Gaussian swapping [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_gaussian_swapping}\n\\end{figure}\n\n\\begin{itemize}\n  \\item In other schemes with a retrial a similar technique could be used. As BLISS must generate two Gaussian distributions it can quickly swap memory pointers rather than swapping the actual memory contents. Other schemes with only a single Gaussian distribution could employ methods to filter, scramble or partially update the existing distribution rather than create an entirely new polynomial using a Gaussian sampler.\n\\end{itemize}\n\\end{itemize}\n
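\nThe sketch below illustrates how cheap the swap is; the buffer names, \\textit{gaussian\\_sample()} and the rejection test are hypothetical stand-ins, and the body of the loop recomputes $u$, $c$, $z_1$ and $z_2$ as in the signature steps listed earlier.\n\n\\begin{verbatim}\n/* Sketch of the Gaussian swap trick; names are hypothetical. */\nUINT32 iter = 0;\nrestart:\nif (iter++ & 1) {\n    SINT32 *t = y1;            // every other Restart: swap the\n    y1 = y2;                   // pointers, not the memory contents\n    y2 = t;\n}\nelse {\n    gaussian_sample(y1, n);    // otherwise draw two fresh samples\n    gaussian_sample(y2, n);\n}\n/* ... compute u, c, z1 and z2 as in the signature steps ... */\nif (rejected) {\n    goto restart;\n}\n\\end{verbatim}\n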
\n\\subsubsection{Entropy Coding}\n\\begin{itemize}\n\\item The signature is composed of three polynomials, $z_1$, $z_2$ and $c$. $z_1$ is a 12-bit code with a Gaussian distribution which can be exploited with an entropy coder to reduce the signature size. $z_2$ is a 2, 3 or 4 bit sparse polynomial (depending on the parameter set selected) that can also be readily compressed. The third polynomial $c$ contains random indices for an $n$ element polynomial and is short in length (12 to 39 elements) and therefore compression gains are minimal. The performance of BLISS-B signature compression is seen in Figure \\ref{fig:bliss_b_signature_compression}, whilst the performance of the various entropy coders is shown in Figure \\ref{fig:bliss_b_signature_compression_performance}.\n\n\\pgfplotsset{compat=1.13,width=14cm,height=8cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=0pt, xlabel=Compression (\\%), xmin=0, xmax=120, xtick={0,20,40,60,80,100}, ytick=data, enlarge y limits=0.15, symbolic y coords={None, BAC, BAC with RLE, strongSwan Huffman, Huffman}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}]\n\\addplot coordinates {(100,None) (78.709,BAC) (80.606,BAC with RLE) (78.487,strongSwan Huffman) (80.721,Huffman)};\n\\addplot coordinates {(100,None) (78.918,BAC) (85.116,BAC with RLE) (77.555,strongSwan Huffman) (79.869,Huffman)};\n\\addplot coordinates {(100,None) (84.829,BAC) (89.264,BAC with RLE) (78.737,strongSwan Huffman) (80.36,Huffman)};\n\\legend{BLISS-B-I, BLISS-B-III, BLISS-B-IV}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{BLISS-B Signature Compression with various entropy coders}\n\\label{fig:bliss_b_signature_compression}\n\\end{figure}\n\n\\pgfplotsset{compat=1.13,width=14cm,height=8cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=0pt, xlabel=Signatures (per second), xmin=0, xmax=16000, xtick={0,2000,4000,6000,8000,10000,12000}, ytick=data, enlarge y limits=0.15, symbolic y coords={None, BAC, BAC with RLE, strongSwan Huffman, Huffman}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}]\n\\addplot coordinates {(11067,None) (6506,BAC) (6658,BAC with RLE) (10714,strongSwan Huffman) (10369,Huffman)};\n\\addplot coordinates {(9537,None) (5824,BAC) (6054,BAC with RLE) (9211,strongSwan Huffman) (8913,Huffman)};\n\\addplot coordinates {(7010,None) (4599,BAC) (4888,BAC with RLE) (6778,strongSwan Huffman) (6395,Huffman)};\n\\legend{BLISS-B-I, BLISS-B-III, BLISS-B-IV}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance of BLISS-B Signature Compression with various entropy coders [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_signature_compression_performance}\n\\end{figure}\n\n\\item The norms of the signature are currently checked by the signature algorithm; however, this is not required and can be removed for a small performance gain.\n\\end{itemize}\n\n\\clearpage\n\\subsubsection{NTT/INTT}\n\\begin{itemize}\n\n  \\item The NTT loops have been modified to reduce some loop overhead in generating control variables; where possible they've been restructured to encourage the compiler to use loop vectorisation and SIMD instructions.\n  \\item The NTT being used requires bit shuffling. This requires a function to swap each element of the polynomial with the element indexed by the bit reverse of its index (\\textit{see inverse\\_shuffle\\_32() in ntt.c}). 
For example, ARM instruction sets have a bit reverse instruction that can readily obtain the correct index for the swap operation.\n\n\\begin{verbatim}\n\nUINT32 sc_bit_reverse_32(UINT32 x)\n{\n#ifdef __arm__\n    UINT32 y;\n    __asm__(\"rbit %0, %1\\n\" : \"=r\"(y) : \"r\"(x));\n    return y;\n#else\n    x = (((x & 0xaaaaaaaa) >> 1) | ((x & 0x55555555) << 1)); // Swap odd and even\n    x = (((x & 0xcccccccc) >> 2) | ((x & 0x33333333) << 2)); // Swap pairs\n    x = (((x & 0xf0f0f0f0) >> 4) | ((x & 0x0f0f0f0f) << 4)); // Swap nibbles\n    x = (((x & 0xff00ff00) >> 8) | ((x & 0x00ff00ff) << 8)); // Swap bytes\n    return (x >> 16) | (x << 16);                            // Swap pairs of bytes\n#endif\n}\n\n...\n\n// Make use of a bit reversal instruction\nUINT32 bits = 32 - sc_ctz_32(n);\nfor (i = 1; i < n-1; i++) {       // 00..0 and 11..1 remain same\n    UINT32 r = sc_bit_reverse_32(i);\n    r >>= bits;\n    if (i < r) {\n        SINT32 x = v[i];\n        v[i] = v[r];\n        v[r] = x;\n    }\n}\n\\end{verbatim}\n\n  \\item On x86 we can use the gcc \\textit{builtin\\_ctz} function (if available) to provide a more optimal bit reversal that increments the MSB's.\n\n\\begin{verbatim}\n// If a BUILTIN function exists for CTZ then inverse incremental\n// counting can be achieved by incrementing the MSB's\nUINT32 bits = sc_ctz_32(n) - 1;\nj = n >> 1;\nfor (i = 1; i < n - 1;) {       // 00..0 and 11..1 remain same\n    if (i < j) {\n        SINT32 x = v[i];\n        v[i] = v[j];\n        v[j] = x;\n    }\n    UINT32 mask = i++;\n    mask ^= i;\n    UINT32 len = sc_ctz_32(i);\n    mask <<= bits - len;\n    j ^= mask;\n}\n\\end{verbatim}\n\n  \\item A fallback method is provided that increments the MSB's without the help of the \\textit{builtin\\_ctz} function and provides a generic solution. However, this method requires a \\textit{while} loop.\n\n\\begin{verbatim}\n// This is the fallback method that also increments the MSB's\nj = n >> 1;\nfor (i = 1; i < n - 1; i++) {       // 00..0 and 11..1 remain same\n    if (i < j) {\n        SINT32 x = v[i];\n        v[i] = v[j];\n        v[j] = x;\n    }\n    k = n;\n    do {\n        k >>= 1;\n        j ^= k;\n    } while ((j & k) == 0);\n}\n\\end{verbatim}\n\n  \\item As the performance of the loops in the NTT is critical it was seen that reduction scheme specific implementations of the main NTT function were beneficial to performance. So rather than set a function pointer to use the desired multiplication and modular reduction routine, specific NTT functions were added for each reduction scheme that remove the overhead of function calls and in addition permit the compiler to perform loop unrolling and loop vectorisation, leading to some reasonable performance gains; a sketch of one such specialised transform is given below. 
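\n\nThe following sketch (not the actual \\textit{ntt.c} code) shows the shape of such a specialised forward transform, with a hypothetical inline \\textit{barrett\\_q()} reduction modulo $q$ folded directly into the butterfly and \\textit{w} holding the roots of unity in the order the stages consume them:\n\n\\begin{verbatim}\n/* Sketch only: radix-2 forward NTT over bit-reversed input with the\n * Barrett reduction inlined so the compiler can unroll/vectorise. */\nstatic void ntt_fwd_barrett(SINT32 *v, UINT32 n, const SINT32 *w, SINT32 q)\n{\n    for (UINT32 s = 1; s < n; s <<= 1) {\n        for (UINT32 j = 0; j < s; j++) {\n            SINT32 wj = w[s + j];            // root for this group\n            for (UINT32 i = j; i < n; i += s << 1) {\n                SINT32 t = barrett_q((SINT64)wj * v[i + s]);\n                v[i + s] = barrett_q((SINT64)v[i] - t + q);\n                v[i]     = barrett_q((SINT64)v[i] + t);\n            }\n        }\n    }\n}\n\\end{verbatim}\n\nBecause \\textit{barrett\\_q()} is a handful of shifts and multiplies rather than an indirect call, the inner loop is a single straight-line block that the compiler can vectorise.\n\n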
A summary describing the performance gains achieved through simple optimisation of the NTT is shown in Figure \\ref{fig:bliss_b_optimised_ntt}.\n\n\\pgfplotsset{compat=1.13,width=11cm,height=9cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar, xmin=0, xmax=25000, xtick={0,5000,10000,15000,20000,25000}, ytick=data, symbolic y coords={Optimized Loops with Barrett and builtin\\_ctz, Barrett and builtin\\_ctz, Barrett Reduction, Original}, enlarge y limits=0.2, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(0.5,-0.15)},anchor=north,legend columns=-1},]\n\\addplot coordinates {(21846,Optimized Loops with Barrett and builtin\\_ctz) (20124,Barrett and builtin\\_ctz) (19273,Barrett Reduction) (12397,Original)};\n\\addplot coordinates {(6967,Optimized Loops with Barrett and builtin\\_ctz) (6711,Barrett and builtin\\_ctz) (6561,Barrett Reduction) (4837,Original)};\n\\addplot coordinates {(17897,Optimized Loops with Barrett and builtin\\_ctz) (16713,Barrett and builtin\\_ctz) (16964,Barrett Reduction) (5682,Original)};\n\\legend{Verifications (per sec), Signatures (per sec), Key Generation (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance gains from optimised NTT loops [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_optimised_ntt}\n\\end{figure}\n\n  \\item The NTT roots of unity tables are stored using the minimum bit width type required. When these coefficients are used in multiplication routines they are typically cast to an int64\\_t or a double, and within the loop operations of the NTT/INTT they are read infrequently relative to other variables. Therefore using the minimal storage type reduces the memory use within the image whilst having no impact on performance.\n\\end{itemize}\n\n\\subsubsection{CSPRNG}\n\\begin{itemize}\n\n  \\item A range of PRNG's are provided by the SAFEcrypto library: ISAAC \\cite{issac}, KISS \\cite{kiss}, AES-PRNG (utilizes Brian Gladman's AES \\cite{aes_gladman}) and CHACHA20-CSPRNG \\cite{chacha20_csprng}.\n  \\item A functional test of the various CSPRNG options was used to determine their relative performance - the results are shown in Figure \\ref{fig:hash_mbyte_per_sec}.\n\n\\pgfplotsset{compat=1.13,width=10cm,height=9cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[ybar=8pt, xtick=data, enlarge x limits=0.5, symbolic x coords={32-bit, 64-bit}, x tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={vertical}, legend style={at={(.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot coordinates {(32-bit, 66.933) (64-bit, 89.083)};\n\\addplot coordinates {(32-bit, 148.129) (64-bit, 217.928)};\n\\addplot coordinates {(32-bit, 88.041) (64-bit, 87.73)};\n\\addplot coordinates {(32-bit, 45.615) (64-bit, 45.882)};\n\\addplot coordinates {(64-bit, 239.004)};\n\\legend{POSIX random(), ISAAC, AES-PRNG, CHACHA20-CSPRNG, KISS}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance of various CSPRNG schemes in MByte/sec [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:hash_mbyte_per_sec}\n\\end{figure}\n\n\\noindent \\textbf{NOTE: KISS is 64-bit only, we probably need to implement the 32-bit version for comparison purposes.}\n\n\\item The performance of BLISS-B with no entropy coding whilst using a range of CSPRNG's is shown in Figure \\ref{fig:bliss_b_csprng_comparison}. 
As expected there is no real improvement in the verification operation, as it requires no random number generation. The higher performance of KISS and ISAAC translates to higher performance within BLISS-B, with both CSPRNG's offering very similar performance.\n\n\\pgfplotsset{compat=1.13,width=15cm,height=10cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=0pt, ytick=data, enlarge y limits=0.15, symbolic y coords={POSIX random, ISAAC, AES-PRNG, CHACHA20-CSPRNG, KISS}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot coordinates {(22200,POSIX random) (22124,ISAAC) (22233,AES-PRNG) (22355,CHACHA20-CSPRNG) (22123,KISS)};\n\\addplot coordinates {(6981,POSIX random) (7803,ISAAC) (6993,AES-PRNG) (5932,CHACHA20-CSPRNG) (7713,KISS)};\n\\addplot coordinates {(14948,POSIX random) (16175,ISAAC) (15179,AES-PRNG) (13314,CHACHA20-CSPRNG) (16266,KISS)};\n\\legend{Verifications (per sec), Signatures (per sec), Key Generation (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance of various CSPRNG schemes in BLISS-B-IV [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_csprng_comparison}\n\\end{figure}\n\n\\end{itemize}\n\n\n\\clearpage\n\\subsection{Verification}\n\n\\begin{itemize}\n\\item Reject if $||(z_1 | 2^d . z_2)||_2 > B_2$\n\\item Reject if $||(z_1 | 2^d . z_2)||_\\infty > B_\\infty$\n\\item Accept if $c = H(\\lfloor a_1 . z_1 + q . c \\rceil_d + z_2 \\bmod p, \\mu)$\n\\end{itemize}\n\nVerification utilises some of the same functions as signing (NTT, Hashing, checking the signature norms) and so achieves the same gains when these are improved. A sketch of the norm checks is given at the end of this subsection.\n\n\\begin{itemize}\n\\item There are some operations that require a polynomial to be reduced using \\textit{2q} or \\textit{p} rather than \\textit{q}. These have also been accelerated using Barrett and Floating Point reduction techniques, taking care to structure the code such that automatic vectorisation is enabled by the compiler.\n\\end{itemize}\n
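\nA minimal sketch of the two norm checks follows; the function name and the parameters \\textit{b2\\_sq} (the squared $B_2$ bound, so no square root is needed) and \\textit{binf} (the $B_\\infty$ bound) are illustrative, not the library API.\n\n\\begin{verbatim}\n/* Sketch: reject (-1) when a norm of (z1 | 2^d * z2) exceeds its\n * parameter set bound; names and types are hypothetical. */\nstatic SINT32 check_norms(const SINT32 *z1, const SINT32 *z2,\n                          UINT32 n, UINT32 d, SINT64 b2_sq, SINT32 binf)\n{\n    SINT64 l2 = 0;\n    for (UINT32 i = 0; i < n; i++) {\n        SINT64 a = z1[i];\n        SINT64 b = z2[i];\n        if (a < 0) a = -a;         // absolute values\n        if (b < 0) b = -b;\n        b <<= d;                   // scale z2 by 2^d\n        if (a > binf || b > binf) {\n            return -1;             // infinity norm bound exceeded\n        }\n        l2 += a * a + b * b;       // accumulate the squared L2 norm\n    }\n    return (l2 > b2_sq) ? -1 : 0;\n}\n\\end{verbatim}\n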
\n\n\n\\subsection{General Performance Improvements}\n\n\\begin{itemize}\n\\item Intermediate storage is achieved using heap memory allocated when the schemes call the \\textit{create()} functions. This memory is released when the scheme is destroyed. This means that when any cryptographic functions are called there is no dynamic memory allocation or static memory used to store intermediate variables.\n\\item Integer and floating point types of 32-bit or larger are preferred on x86/AMD64 as the compiler can exploit SIMD/AVX auto-vectorization. This compiler optimization is to be investigated further for other microprocessor architectures.\n\\end{itemize}\n\n\n\\subsection{Summary}\n\nThe original hilabliss/BLZZRD implementations of BLISS are reference designs used to show the algorithms and functional operation of the signature scheme; as such they are intended to be non-optimal.\n\nWe have shown that the use of a Binary Arithmetic Coder within BLZZRD is sub-optimal in terms of performance, both as a Gaussian Sampler and an entropy coder. Huffman is significantly faster, whilst as a Gaussian Sampler the BAC is statistically poor.\n\nAlso, hilabliss/BLZZRD and many other lattice-based cryptographic schemes are using SHA-3 as a hash function when implementing a random oracle. It is shown that this is a poor choice in terms of performance if not security. Therefore it is strongly suggested that a more optimal choice of hash would be SHA-2 or BLAKE2-B; both are standardized and strong hash functions in line with SHA-3.\n\nThe performance gains that have been achieved are summarised in Figures \\ref{fig:bliss_b_no_entropy_comparison} and \\ref{fig:bliss_b_entropy_comparison}. In these figures the original algorithm is shown in comparison to (a) a version in which the NTT's and modular reduction have been optimised and (b) a version in which the BLISS-B algorithm has been fully optimised with the addition of ISAAC as a CSPRNG and BLAKE2 as a hash function. In Figure \\ref{fig:bliss_b_no_entropy_comparison} entropy coding is disabled, whilst Figure \\ref{fig:bliss_b_entropy_comparison} uses Huffman compression.\n\n\\noindent \\textbf{I hope to add a multithreaded version in which the CSPRNG and Gaussian Sampler have been placed into worker threads which should be significantly faster again, even if it is cheating!}\n\n\\pgfplotsset{compat=1.13,width=12cm,height=8cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=2pt, xmax=50000, ytick=data, enlarge y limits=0.4, symbolic y coords={Original, Optimized, Optimized with ISAAC and BLAKE2}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot coordinates {(40682,Optimized with ISAAC and BLAKE2) (21696,Optimized) (12386,Original)};\n\\addplot coordinates {(11254,Optimized with ISAAC and BLAKE2) (6824,Optimized) (4824,Original)};\n\\addplot coordinates {(19677,Optimized with ISAAC and BLAKE2) (17384,Optimized) (5655,Original)};\n\\legend{Verifications (per sec), Signatures (per sec), Key Generation (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance of SAFEcrypto BLISS-B-IV with no entropy coding [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_no_entropy_comparison}\n\\end{figure}\n\n\\pgfplotsset{compat=1.13,width=12cm,height=8cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=2pt, xmax=25000, ytick=data, enlarge y limits=0.4, symbolic y coords={Original, Optimized, Optimized with ISAAC and BLAKE2}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot coordinates {(20262,Optimized with ISAAC and BLAKE2) (13648,Optimized) (8848,Original)};\n\\addplot coordinates {(10235,Optimized with ISAAC and BLAKE2) (6401,Optimized) (4526,Original)};\n\\addplot coordinates {(19647,Optimized with ISAAC and BLAKE2) (17701,Optimized) (5677,Original)};\n\\legend{Verifications (per sec), Signatures (per sec), Key Generation (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\n\\textbf{[Put comparisons to other implementations of BLISS-B in here as it is compatible.]}\n\\caption{Performance of SAFEcrypto BLISS-B-IV with Huffman compression [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_entropy_comparison}\n\\end{figure}\n\n\n\\clearpage\nIn Figure \\ref{fig:bliss_b_comparison} the original implementation of BLZZRD \\cite{markku_blzzrd} is compared to a compatible implementation created using the SAFEcrypto library (i.e. SHA-3, Binary Arithmetic Coding and Blinding countermeasures). Verification is shown to be approximately 14\\% slower using the SAFEcrypto library, which can be attributed to the poor performance of the BAC decoder (\\textit{some more effort could be exercised in trying to improve this}). 
However, the signature performance is approximately 25\\% faster whilst key generation is significantly faster, by approximately 330\\%.\n\nA higher performance version of BLZZRD, here called \\textit{BLZZRD+}, has been implemented to address the poor performance associated with the BAC and hash. BLZZRD+ utilises the more efficient BLAKE2 hash function, Ziggurat Gaussian Sampling and Huffman coding, whilst maintaining blinding countermeasures. As such it gains much greater performance for signing and verifying, being approximately 250\\% and 170\\% faster respectively than the original BLZZRD implementation.\n\n\\textit{The paper in \\cite{markku_blzzrd} quotes performance on a 2.5GHz i7 in milliseconds per operation. To scale these figures for use here they have been inverted to convert to operations per second and multiplied by $1.36$ to scale to a 3.4GHz i7. For example, a hypothetical $0.5$ ms/operation corresponds to $2000$ operations per second, which scales to $2000 \\times 1.36 = 2720$ operations per second.}\n\n\\pgfplotsset{compat=1.13,width=12cm,height=8cm}\n\\begin{figure}[ht!]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[xbar=2pt, xmax=30000, ytick=data, enlarge y limits=0.4, symbolic y coords={SAFEcrypto BLZZRD+, SAFEcrypto BLZZRD, Original BLZZRD}, y tick label style={/pgf/number format/1000 sep=}, nodes near coords, nodes near coords align={horizontal}, legend style={at={(.5,-0.15)},anchor=north,legend columns=-1}]\n\\addplot coordinates {(28391,SAFEcrypto BLZZRD+) (9595,SAFEcrypto BLZZRD) (11146,Original BLZZRD)};\n\\addplot coordinates {(5164,SAFEcrypto BLZZRD+) (3704,SAFEcrypto BLZZRD) (2968,Original BLZZRD)};\n\\addplot coordinates {(16097,SAFEcrypto BLZZRD+) (15971,SAFEcrypto BLZZRD) (4770,Original BLZZRD)};\n\\legend{Verifications (per sec), Signatures (per sec), Key Generation (per sec)}\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Performance comparison of SAFEcrypto BLZZRD-IV [Intel i7 6700 CPU @ 3.4GHz]}\n\\label{fig:bliss_b_comparison}\n\\end{figure}\n\n", "meta": {"hexsha": "4e58c34030189070af0bf9604bc7c3fd30be1537", "size": 30014, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/SAD/schemes.tex", "max_stars_repo_name": "simonjj22/libsafecrypto", "max_stars_repo_head_hexsha": "3717bec9d9298f163f45acd5af54d708e03e0b9f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/SAD/schemes.tex", "max_issues_repo_name": "simonjj22/libsafecrypto", "max_issues_repo_head_hexsha": "3717bec9d9298f163f45acd5af54d708e03e0b9f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/SAD/schemes.tex", "max_forks_repo_name": "simonjj22/libsafecrypto", "max_forks_repo_head_hexsha": "3717bec9d9298f163f45acd5af54d708e03e0b9f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.9648351648, "max_line_length": 831, "alphanum_fraction": 0.7436196442, "num_tokens": 9113, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.833324587033253, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.5545029846045049}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage[margin=.75in]{geometry}\n\\usepackage{nopageno}\n\\usepackage{hyperref}\n\\usepackage{multicol}\n\n\n\\usepackage{amsmath,amssymb,amsthm}\n\\newtheorem*{proposition}{Proposition}\n\\newtheorem*{theorem}{Theorem}\n\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\N}{\\mathbb{N}}\n\n\\usepackage{xcolor}\n\\newcommand{\\blue}{\\textcolor{blue}}\n\n\\usepackage{versions}\n\\includeversion{solution}\n\n\\newcommand{\\bs}{\\begin{solution}}\n\\newcommand{\\es}{\\end{solution}}\n\n\\begin{document}\n\\vspace{-1.2in}\n\\begin{center} \\textbf{\\Large{Skill Mastery Quizzes - Question Bank}} \\\\\nCommunicating in Mathematics (MTH 210-02)\\\\\nFall 2017\n\\end{center}\n\n\n\n\\section*{Skills}\n\n\n\\subsection*{Logic}\n\n\\begin{itemize}\n\\item[L1] Identify the hypothesis and conclusion of a conditional statement, determine its truth value, and apply it.\n\t\\begin{enumerate}\n\t\n\t\t\t\\item[L1-1] Consider the following conditional statement: \n\t\\begin{center}\n\tIf $n$ is a prime number then $n^2$ has three positive factors.  \n\t\\end{center}\n\tIdentify the hypothesis and conclusion.  \n\t\n\t\\bs \\blue{ The hypothesis is ``$n$ is a prime number\" and the conclusion is ``$n^2$ has three positive factors\". Notice we leave off the if and the then when writing the hypothesis and conclusion.} \\end{solution}\n\t\n\tAssume the above conditional statement is true. Assuming \\emph{only} the conditional statement and that  $6$ is not prime what can you conclude (if anything)? Explain your answer.\n\t\n\t\\bs \\blue{We are given that the conditional statement is true and that $6$ is not prime. The fact that $6$ is not prime tells us that our hypothesis is false. Given a false hypothesis, we know nothing about the conclusion. So we can not conclude anything. We can also look at this from a truth table perspective. Consider the truth table for the conditional statement $P\\rightarrow Q$: \n }\n \\begin{center}\n \\begin{tabular}{c|c|c}\n$P$ &$Q$ &$P\\rightarrow Q$\\\\\n\\hline\nT &T &T\\\\\nT &F &F\\\\\nF &T &T\\\\\nF &F &T\n\\end{tabular}\n\\end{center} \n\\textcolor{blue}{In the third and fourth rows the hypothesis of the conditional statement is false (that is, $n$ is not a prime number), and the conditional statement is true (that is, if $n$ is a prime number then $n^2$ has $3$ positive factors). Since $Q$ (that is, $n^2$ has $3$ positive factors), is true in the third row and false in the fourth row we can't conclude anything about the truth value of $Q$.} \\end{solution}\n\n\\item[L1-2] Consider the following (true) conditional statement: \n\t\\begin{center}\n\tIf the function $f$ is continuous at $a$, then $\\displaystyle{\\lim_{x\\to a} f(x)}$ exists.\n\\end{center}\n\tIdentify the hypothesis and conclusion of this conditional statement.\n\t\n\t\\textcolor{blue}{The hypothesis is ``the function $f$ is continuous at $a$\" and the conclusion is ``$\\displaystyle{\\lim_{x\\to a} f(x)}$ exists\".}\n\t\nAssume the above conditional statement is true. Assuming \\emph{only} the conditional statement and that a function $f$ is not continuous at $7$, what can you conclude (if anything)?  Explain your answer.\n\n\t\\textcolor{blue}{Given that the function $f$ is not continuous at $7$ we know the hypothesis of the conditional statement is false. Thus we cannot conclude anything, since the statement makes no promises about what happens if a function is not continuous at a given $a$. 
See Quiz 1 solutions for an explanation with a truth table.}\n\t\n\\item[L1-3] Consider the following conditional statement: \n\t\\begin{center}\n\tIf $p\\neq 2$ and $p$ is an even number, then $p$ is not prime.\n\\end{center}\n\tIdentify the hypothesis and conclusion of this conditional statement.\n\t\nAssume the above conditional statement is true. Assuming \\emph{only} the conditional statement and that $7$ is not equal to $2$ and is not even, what can you conclude (if anything)?  Explain your answer.\n\t\n\n\t\\item[L1-4] If $f: \\mathbb{R}\\to\\mathbb{R}$ is a differentiable function at a real number $a$ then $f$ is a continuous function at $a$.  State the hypothesis and conclusion of this conditional statement.  Suppose you know a function $g: \\mathbb{R}\\to\\mathbb{R}$ is continuous at $5$.  What can you conclude (if anything)?  Explain your answer.\n\t\n\n\t\\item[L1-token]  Consider the following conditional statement:\n\t\\begin{center}\n\tIf $p$ is a prime number then $p=2$ or $p$ is an odd number.\n\t\\end{center}\n\tIdentify the hypothesis and conclusion. \n\tAssume the above conditional statement is true. Assuming \\emph{only} the conditional statement and that $7$ is odd, what can you conclude (if anything)?  Explain your answer.\n\t\n\n\n\n\n\t\\end{enumerate}\n\t\n\t\n\t\\newpage\n\t\n\n\\item[L2] State precisely the definition of an even and odd integer and outline the proof of a statement using these terms.\n\t\\begin{enumerate}\n\t\\item[L2-1] State the definition of even integer precisely. \n\t\tAn integer $n$ is even provided that...\n\t\n\t\\bs\\textcolor{blue}{ there exists an integer $q$ such that $n=2q$.}\\end{solution}\n\t\n\tThen outline a proof that if $x$ is even and $y$ is odd then $x+y$ is odd. (Make sure to include key details - like what things are integers.)\n\t\n\t\\bs\\textcolor{blue}{Suppose $x$ is even and $y$ is odd. Then there exist integers $a,b$ such that $x=2a$ and $y=2b+1$. Then $x+y=(2a)+(2b+1)$ by substitution. By the distributive property $x+y=2(a+b)+1$. Note that $a+b$ is an integer by closure of the integers under addition. Let $q=a+b$. Then $x+y=2q+1$ for the integer $q$ and so $x+y$ is an odd integer.}\\end{solution}\n\t\n\t\\item[L2-2] State the definition of odd integer precisely. Then outline a proof that if $x,y\\in \\Z$ are odd integers then $x+y$ is even.\n\t\\item[L2-3] State the definition of even integer precisely. Then outline a proof that if $x,y\\in \\Z$ are even integers then $xy$ is even.\n\t\\item[L2-token] State the definition of odd integer precisely. Then outline a proof that if $x,y\\in \\Z$ are odd integers then $xy$ is odd.\n\t\\end{enumerate}\n\t\n\\newpage\n\n\\item[L3] Construct truth tables for statements that use the logical operators and, or, not, and implies.\n\t\\begin{enumerate}\n\t\\item  Construct a truth table for $(\\neg P\\vee Q) \\rightarrow R$.\n\t\\item Construct a truth table for $P \\implies (Q\\wedge R)$.\n\t\\item Construct a truth table for $P \\implies (Q\\vee R)$.\n\t\\end{enumerate}\n\n\\newpage\n\n\\item[L4] Write sets using set builder notation and interpret sets written in this notation.\n\t\\begin{enumerate}\n\t\t\\item[L4-1] Write the set $\\left\\{ \\sqrt{2}, \\left(\\sqrt{2}\\right)^3, \\left(\\sqrt{2}\\right)^5,\\dots\\right\\}$ in set builder notation.\n\n\t\\item[L4-2] Describe what the set $\\{x\\in\\mathbb{R} \\mid 3\\leq x\\leq 5\\}$ is in words. 
Then write what the set is in roster notation or explain why you cannot.\n\n\t\t\\item[L4-3] Write the set $\\{\\dots, -5,-3,-1,1,3,5,\\dots\\}$ using set builder notation.\n\t\\end{enumerate}\n\n\\newpage\t\n\n\\item[L5]  Negate a statement with and, or, not, implies, exists, and/or for all.\n\t\\begin{enumerate}\n\t\t\\item[L5-1] Write a useful negation of the following statement:\n\t\t\\begin{center}\n\t\tThere exists $x\\in\\mathbb{Z}$ such that if $y\\in \\mathbb{Z}$ then $\\frac{y}{x}\\in\\mathbb{Z}$.\n\t\t\\end{center}\n\t\tUseful negations don't start with ``It is not true that...\" and avoid the word not.\n\t\t\n\t\t\\item[L5-2] Write a useful negation for the following statement:\n\t\t\\begin{center}\n\t\tFor every positive real number $\\epsilon$ there exists a natural number $n$ with $\\frac{1}{n}<\\epsilon$.\n\t\t\\end{center}\n\t\tUseful negations don't start with ``It is not true that...\" and avoid the word not.\n\t\t\n\t\n\t\\item[L5-3] For all integers $n$ and $m$, if $nm$ is even then $n$ is even or $m$ is even.\n\t\t\n\t\\item[L5-token?]  Write a useful negation of the following statement:\n\t\t\\begin{center}\n\t\tFor all $m\\in\\mathbb{Z}$ there exists $n\\in\\mathbb{Z}$ such that $m>n$.\n\t\t\\end{center}\n\t\tUseful negations don't start with ``It is not true that...\" and avoid the word not.\n\t\\end{enumerate}\n\\end{itemize}\n\n\n\\newpage\n\n\\subsection*{Proofs}\n\\begin{itemize}\n\\item[P1]  Given a theorem, correctly state what will be assumed in a direct proof, proof by contradiction, and proof by contrapositive.\n\t\\begin{enumerate}\n\t\\item[P1-1] Consider the following statement:\n\t\t\\begin{center}\n\t\tEvery even integer greater than $2$ can be expressed as the sum of two (not necessarily distinct) prime numbers.\n\t\t\\end{center}\n\tState what you would assume in a direct proof. State what you would assume in a proof by contradiction.\n\t\\item[P1-2]  Consider the following statement:\n\t\t\\begin{center}\n\t\tFor all natural numbers $p$ and $q$, if $p$ and $q$ are twin primes other than $3$ and $5$, then $pq+1$ is a perfect square and $36$ divides $pq+1$.\n\t\t\\end{center}\n\t\tState what you would assume in a direct proof. State what you would assume in a proof by contradiction.\n\t\\item[P1-3]  Consider the following statement\n\t\t\\begin{center}\n\t\tLet $a$ and $b$ be integers with $a\\neq 0$. If $a$ does not divide $b$ then the equation $ax^3+bx+(b+a) = 0$ does not have a solution that is a natural number.\n\t\t\\end{center}\n\t\tState what you would assume in a direct proof. State what you would assume in a proof by contradiction.\n\t\\end{enumerate}\n\t\n\t\n\t\\newpage\n\t\n\t\n\\item[P2]  Identify situations in which it is appropriate to use induction and state the procedure for proving a statement by induction.\n\n\n\\begin{enumerate}\n\t\\item[P2-1] For which of the following situations is it more appropriate to use induction? Explain.\n\t\t\\begin{enumerate}\n\t\t\\item For all $a\\in\\Z$ the equation $ax^3+ax + a = 0$ does not have a solution that is a natural number.\n\t\t\\item Let $a$ and $b$ be integers and $n\\in \\mathbb{N}$. For all $m\\in\\mathbb{N}$ if $a\\equiv b \\pmod{n}$ then $a^m \\equiv b^m \\pmod{n}$.\n\t\t\\end{enumerate}\n\tFor the statement you chose, state what your steps would be in a proof by induction.\n\n\n\n\t\\item[P2-2] For which of the following situations is it more appropriate to use induction? 
Explain.\n\t\t\\begin{enumerate}\n\t\t\\item For all integers $a$ and $b$, $(a+b)^2 \\equiv (a^2 +b^2) \\pmod{2}$\n\t\t\\item For each natural number $n$, $3$ divides $4^n-1$.\n\t\t\\end{enumerate}\n\tFor the statement you chose, state what your steps would be in a proof by induction.\n\n\t\\item[P2-3] For which of the following situations is it more appropriate to use induction? Explain.\n\t\t\\begin{enumerate}\n\t\t\\item For all $a\\in\\Z$ the equation $ax^3+ax + a = 0$ does not have a solution that is a natural number.\n\t\t\\item For all $n\\in\\N$, $1+2+3+\\cdots + n = \\dfrac{n(n+1)}{2}$.\n\t\t\\end{enumerate}\n\tFor the statement you chose, state what your steps would be in a proof by induction.\n\t\n\\end{enumerate}\n\n\\newpage\n\n\n\\item[P3] Clearly and correctly disprove a statement using a counterexample.\n\n\t\\begin{enumerate}\n\t\\item[P3-1] The following statement is incorrect:\n\t\t\\begin{center}\n\t\tIf $n$ is an integer then $n^2\\equiv 1\\pmod{3}$.\n\t\t\\end{center}\n\t\t\tShow the statement is false using a counterexample. You should clearly explain why the counterexample you found shows the statement is false.\n\t\n\t\\item[P3-2] The following statement is incorrect:\n\t\t\\begin{center}\n\t\tThe set of natural numbers is closed under subtraction.\n\t\t\\end{center}\n\t\tShow the statement is false using a counterexample. You should clearly explain why the counterexample you found shows the statement is false.\n\t\t\n\t\t\\item[P3-3] The following statement is incorrect:\n\t\t\\begin{center}\n\t\tFor each integer $n$, $(n^2+1)$ is a prime number.\n\t\t\\end{center}\n\tShow the statement is false using a counterexample. You should clearly explain why the counterexample you found shows the statement is false.\n\t\n\t\n\t\t\n\t\\end{enumerate}\n\t\n\\newpage\n\n\n\\item[P4] Evaluate if a given proof is valid and adheres to our writing guidelines.\n\t\\begin{enumerate}\n\t\\item[P4-1] Consider the following proposition and proof. Is the proof correct? If not, explain why not. If so, does the proof meet our writing guidelines? \n\t\n\t\t\\begin{proposition}\n\t\tIf $m$ is an odd integer then $m+6$ is an odd integer.\n\t\t\\end{proposition}\n\t\t\\begin{proof}\n\t\tFor $m+6$ to be an odd integer there must exist an integer $n$ such that\n\t\t\t\\[m+6 = 2n+1.\\]\n\t\tBy subtracting $6$ from both sides of this equation we obtain\n\t\t\t\\begin{align*}\n\t\t\tm &= 2n-6+1\\\\\n\t\t\t&=2(n-3)+1.\n\t\t\t\\end{align*}\n\t\tBy the closure properties of integers, $n-3$ is an integer, and hence, the last equation implies that $m$ is an odd integer. This proves that if $m$ is an odd integer then $m+6$ is an odd integer.\n\t\t\\end{proof}\n\t\t\n\\item[P4-2] Consider the following proposition and proof. Is the proof correct? If not, explain why not. If so, does the proof meet our writing guidelines?\n\n\t\\begin{theorem}\n\tFor each integer $n$, $3\\mid n^2 + 2$.\n\t\\end{theorem}\n\n\t\\begin{proof}\n\tWe will consider two cases, $n\\equiv 1\\pmod{3}$ and $n\\equiv 2\\pmod{3}$. When $n\\equiv 1 \\pmod{3}$, there exists an integer $k$ such that $3k = n-1$. Then \n\t\t$$n^2+2 = (3k+1)^2 + 2 = 9k^2 + 6k + 1 + 2 = 3(3k^2+2k+1).$$\n\tSince integers are closed under addition and multiplication, $3k^2+2k+1$ is an integer. Thus $3\\mid n^2+2$ in this case.\n\t\n\tWhen $n\\equiv 2 \\pmod{3}$ there exists an integer $k$ such that $3k=n-2$. 
Then\n\t\t$$n^2+2 = (3k+2)^2 +2 =9k^2+12k+4+2 = 3(3k^2+4k+2).$$\n\tSince integers are closed under addition and multiplication, $3k^2+4k+2$ is an integer. Thus $3\\mid n^2+2$ in this case as well.\n\t\n\tSince we've proven that $3\\mid n^2+2$ in all possible cases we have completed the proof.\n\t\\end{proof}\n\t\\end{enumerate}\n\t\n\n\\item[P4-3]  Consider the following proposition and proof. Is the proof correct? If not, explain why not. If so, does the proof meet our writing guidelines?\n\n\t\\begin{theorem}\n\tFor all integers $m$ and $n$, if $mn$ is an even integer, then $m$ is even or $n$ is even.\n\t\\end{theorem}\n\t\\begin{proof}\n\tFor either $m$ or $n$ to be even there exists an integer $k$ such that $m=2k$ or $n=2k$. So if we multiply $m$ and $n$ the product will contain a factor of $2$ and, hence, $mn$ will be even.\n\t\\end{proof}\n\t\n\\end{itemize}\n\n\\newpage\n\n\n\\subsection*{Sets, Functions, and Equivalence Relations}\n\\begin{itemize}\n\\item[S1] Use the symbols $\\in, \\notin, =,\\neq,\\subseteq,\\not\\subseteq,\\subset,\\not\\subset$ correctly.\n\t\\begin{enumerate}\n\t\\item[S1-1] Let $A = \\{ 1,\\{2\\}, \\{3,4\\}, 5\\}$.  Fill in a correct symbol for each of the following:\n\t\t\\begin{itemize}\n\t\t\\item $\\{1\\} \\underline{\\hspace{.25in}} A$\n\t\t\\item $\\{2\\} \\underline{\\hspace{.25in}} A$\n\t\t\\item $\\{1,2,3,4,5\\} \\underline{\\hspace{.25in}}A$\n\t\t\\end{itemize}\n\t\\item[S1-2] Let $A = \\{1,2,4\\}$ and $B = \\{1,2,3,5\\}$.  Fill in a correct symbol for each of the following:\n\t\t\\begin{itemize}\n\t\t\\item $A\\underline{\\hspace{.25in}} B$\n\t\t\\item $\\emptyset \\underline{\\hspace{.25in}} A$\n\t\t\\item $\\{4,2,1\\} \\underline{\\hspace{.25in}} B$\n\t\t\\end{itemize}\n\t\\item[S1-3]  Let $A = \\{0,1,2,3,\\{4\\}\\}$.  Fill in a correct symbol (from $\\in$, $\\subset$, $\\subseteq$, $=$, $\\neq$) for each of the following.\n\t\t\\begin{itemize}\n\t\t\\item $\\{4\\} \\underline{\\hspace{.25in}} A$\n\t\t\\item $\\{2\\} \\underline{\\hspace{.25in}} A$\n\t\t\\item $\\{1,2,3\\} \\underline{\\hspace{.25in}}A$\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\t\n\\newpage\n\n\\item[S2] Given two sets and a universal set identify the union, intersection, complement, and set difference and find the power set of a given set.\n\t\\begin{enumerate}\n\t\\item[S2-1] Let $U = \\{1,2,3,4,5,6,7,8,9,10\\}$ be the universal set. Let $A = \\{3,4,5,6,7\\}$ and $B=\\{1,5,7,9\\}$.\n\t\t\\begin{enumerate}\n\t\t\\item Find $A\\cap B$\n\t\t\\item Find $A \\cup B$\n\t\t\\item Find $A^C$\n\t\t\\item Find $A\\setminus B$ (or $A-B$).\n\t\t\\end{enumerate}\n\t\\item[S2-2]  Let $U = \\Z$. Let $A = \\{x\\in \\Z: x\\ge 7\\}$ and $B = \\{x\\in\\Z: x \\text{ is odd}\\}$.  (Roster method is okay for your answers.)\n\t\t\\begin{enumerate}\n\t\t\\item Find $A\\cap B$\n\t\t\\item Find $A \\cup B$\n\t\t\\item Find $A^C$\n\t\t\\item Find $A\\setminus B$ (or $A-B$).\n\t\t\\end{enumerate}\n\t\\item[S2-3]  Let $U = \\{1,2,3,4,5,6,7,8,9,10\\}$ be the universal set.  
Let $A= \\{2,4,6,8,10\\}$ and $B = \\{1,3\\}$.\n\t\t\\begin{enumerate}\n\t\t\\item Find $A\\cap B$\n\t\t\\item Find $A^C$\n\t\t\\item Find $A-B$\n\t\t\\item Find $A\\cup B$.\n\t\t\\end{enumerate}\n\t\\end{enumerate}\n\t\n\\newpage\n\n\\item[S3] Correctly use function terminology such as domain, codomain, range, dependent variable, independent variable, image, and preimage.\n\t\\begin{enumerate}\n\t\\item[S3-1] Let $f: \\mathbb{R} \\to \\mathbb{R}$ be defined by $f(x) = x^2-2x$.\n\t\t\\begin{itemize}\n\t\t\\item State the domain, codomain, and range of $f$. (Clearly state which one is which.)\n\t\t\\item Find the image(s) of $3$ under $f$.\n\t\t\\item Find the preimage(s) of $0$.\n\t\t\\end{itemize}\n\t\\item[S3-2] Let $\\R^* = \\{x\\in \\R: x\\ge 0\\}$.  Let $f: \\R^* \\to \\R^*$ be defined by $f(x) = x^2$.\n\t\t\\begin{itemize}\n\t\t\\item State the domain and codomain of $f$. (Clearly state which one is which.)\n\t\t\\item Find the image(s) of $3$ under $f$.\n\t\t\\item Find the preimage(s) of $4$.\n\t\t\\end{itemize}\n\t\\item[S3-3]  Let $f: \\Z\\to \\Z$ be defined by $f(m) = 3-m$.\n\t\t\\begin{itemize}\n\t\t\\item State the domain, codomain, and range of $f$. (Clearly state which one is which.)\n\t\t\\item Find the image(s) of $3$ under $f$.\n\t\t\\item Find the preimage(s) of $0$.\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\t\n\t\\newpage\n\t\n\\item[S4] State the definition of injection, surjection, and bijection.\n\t\\begin{enumerate}\n\t\\item[S4-1] Let $A$ and $B$ be sets. Carefully complete the definitions of the following terms: \n\t\t\\begin{itemize}\n\t\t\\item  A function $f: A \\to B$ is injective provided that...\n\t\t\\item A function $f: A\\to B$ is surjective provided that...\n\t\t\\item A function $f: A\\to B$ is bijective provided that...\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\t\n\\newpage\n\n\\item[S5] Prove or disprove that a given relation is reflexive, symmetric, and/or transitive.\n\t\\begin{enumerate}\n\t\\item For all $a,b\\in \\Z$ say $a\\sim b$ if and only if $a\\mid b$.  Is $\\sim$ an equivalence relation? Explain.\n\t\\item For all $a,b \\in \\Z$ say $a\\sim b$ if and only if $a\\leq b$.  Is $\\sim$ an equivalence relation? Explain.\n\t\\item For all $a,b\\in \\R$ say $a\\sim b$ if and only if $|a-b|<5$.  Is $\\sim$ an equivalence relation? Explain.\n\t\\end{enumerate}\n\t\n\n\\newpage\n\n\\item[S6] State the definition of ``$a$ divides $b$'' and ``$a$ is congruent to $b$ modulo $n$'' and correctly apply these definitions in examples.\n\t\\begin{enumerate}\n\t\\item  Carefully state the definition of $a\\mid b$ (for nonzero $a$) and $a \\equiv b\\pmod{n}$ (for nonzero $n$).  Then give an example of integers $a$ and $b$ such that $a\\equiv b \\pmod{15}$ and $b<0$.\n\t\\item Carefully state the definition of $a\\mid b$ (for nonzero $a$) and $a \\equiv b\\pmod{n}$ (for nonzero $n$).  Then give an example of integers $a$ and $b$ such that $a \\nmid b$.\n\t\\item Carefully state the definition of $a\\mid b$ (for nonzero $a$) and $a \\equiv b\\pmod{n}$ (for nonzero $n$).  
Then give an example of integers $a$ and $b$ such that $a\\not\\equiv b \\pmod{3}$.\n\t\\end{enumerate}\n\\end{itemize}\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "442eb8f5b54610dd5bb4deaa9236a2e40daefce6", "size": 18599, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "from LDK/3-Quizzes/InstructionandQuestionBank/SkillMasteryQuestionBank.tex", "max_stars_repo_name": "mkjanssen/discrete", "max_stars_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "from LDK/3-Quizzes/InstructionandQuestionBank/SkillMasteryQuestionBank.tex", "max_issues_repo_name": "mkjanssen/discrete", "max_issues_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "from LDK/3-Quizzes/InstructionandQuestionBank/SkillMasteryQuestionBank.tex", "max_forks_repo_name": "mkjanssen/discrete", "max_forks_repo_head_hexsha": "4038b6d102000f4eeb27adaa8d0fd2bde63c28ac", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.0734597156, "max_line_length": 426, "alphanum_fraction": 0.6936932093, "num_tokens": 6116, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6654105454764747, "lm_q2_score": 0.8333246015211008, "lm_q1q2_score": 0.5545029776571216}}
{"text": "\\documentclass[12pt, letterpaper]{article}\n\\usepackage[english]{babel}\n\\usepackage{amsmath}\n\\usepackage{unicode-math}\n\\usepackage[margin=0.7in]{geometry}\n\\usepackage{graphicx}\n%\\usepackage{mathrsfs}\n\\usepackage{fontspec}\n\\setmainfont{Calibri}\n% must be compiled with XeLaTeX, not pdfLaTeX\n\n  %%%%%%%%%%%%\n  % PREAMBLE %\n  %%%%%%%%%%%%\n\\begin{document}\n\\selectlanguage{english}\n\n\\section*{Postrefinement: The model {\\tt rs}. }\n  \\par This is intended to be the simplest possible model for the reciprocal lattice point (RLP), \n  describing the RLP as a sphere of radius ${r_s}$, which is globally constant over the whole dataset\n  consisting of an ensemble of crystal lattices (or frames).  \n\\subsection*{1 The size of the RLP model}\n\n  \\par The constant value of ${r_s}$ is computed as follows.\n  From model refinement and integration, each crystal has associated with it a list of Miller indices \n  $\\mathbf{h_i}$ and a reciprocal space orientation matrix $\\mathbf{A}$ \n  defined by Rossmann $\\textit{et\\ al.}$ (1979),\n    \\begin{equation}\n    \\mathbf{A} = \n    \\left(\n    \\begin{array}{c c c}\n      a_{x}^{*}   &  b_{x}^{*}   & c_{x}^{*}  \\\\\n      a_{y}^{*}   &  b_{y}^{*}   & c_{y}^{*}  \\\\\n      a_{z}^{*}   &  b_{z}^{*}   & c_{z}^{*}  \\\\\n    \\end{array}\n    \\right)\n    \\text{.}\n    \\label{eqn:RossmannA}\n  \\end{equation}\n  The reciprocal space coordinates of RLP $i$ are computed with \n    \\begin{equation}\n    \\mathbf{q} = \\mathbf{A}\\mathbf{h}\n    \\text{,}\n    \\label{eqn:Ah}\n  \\end{equation}\n  leading to a reciprocal position Q, with a small distance offset\n   ${r_h}$ away from the Ewald sphere that represents the perfect diffracting condition.  \n  The fact that $|{r_h}|$ is non-zreo is indicative that\n  Bragg observations from still shots represent partial reflections.  Note that array index $i$ denoting\n  a specific Miller index is dropped on occasion for clarity.  The geometry is explained in Fig. 1. \n  \n  \\begin{figure}[htb]\n  \\begin{center}\n  \\includegraphics[scale=1.5]{Figure_1.pdf}\n  \\label{fig:1}\n  \\end{center}\n  \\begin{center}\n  {Fig. 1. Ewald sphere construction.}\n  \\end{center}\n  \\end{figure}\n\n  The quantity $|{r_h}|$ is given by \n    \\begin{equation}\n    {r_h} = ||\\mathbf{q}+\\mathbf{s_0}||-||\\mathbf{s_1}|| = ||\\mathbf{q}+\\mathbf{s_0}||-\\dfrac{1}{ \\lambda}\n    \\text{,}\n    \\label{eqn:Rh}\n  \\end{equation}\n\n  where ${s_0}$ and ${s_1}$ are respectively the beam vector and the diffracted ray vector,\n  each of length $1/\\lambda$.  For the model {\\tt rs}, the constant value of ${r_s}$ is taken \n  as the root mean-squared value of\n  ${r_h}$ over all Bragg spots integrated from a given crystal.  It therefore depends on whatever\n  algorithm has been used to predict spots, regardless of whether there is measurable signal\n  in the spots.  \n  \\subsection*{2 The geometry of the RLP model}\n\n  \\par The intention is to create a model of the RLP similar to a hard sphere, so that if any portion \n  of the sphere touches the Ewald sphere there is signal expected, otherwise none.  However, this \n  is a discontinuous model (in terms of the spot partiality expressed as a function of ${r_h}$ and\n  therefore not easily amenable to parameter fitting.  Therefore we relax the requirement for a hard\n  sphere and adopt a radial profile somewhat smoother.  For the Uervirojnangkoorn (2015) paper\n  we used a profile based on a Lorentzian function.  
The derivation is as follows.  \n  \n\\par A suggestion from James Holton defines the Bragg spot partiality as\n  \n  \\begin{equation}\n    p = \\frac{\\text{Area of intersection between the Ewald sphere and } F_{hkl}}\n             {\\text{Area of intersection between the Ewald sphere and } F_{000}}\n    \\text{.}\n    \\label{eqn:part}\n  \\end{equation}\n  \nThe ``areas of intersection'' in question are really spherical caps that represent the Ewald \nsphere's intersection with the reciprocal space ball of radius $r_s$.  However, we're not going to \ninsist on such detail; instead we will simply take a circular area of radius $r_p$ such that we\nhave the right triangle\n\n  \\begin{equation}\n    {r}_p^2 = {r}_s^2 - {r}_h^2\n    \\text{,}\n    \\label{eqn:pyth}\n  \\end{equation}\n\nand then the approximate expression for partiality becomes (model A),\n\n  \\begin{equation}\n    p_A = \\frac{\\pi r_p^2}{\\pi r_s^2} = 1 - \\frac{r_h^2}{r_s^2} \\text{ for }|r_h|<r_s, 0 \\text{ otherwise}\n    \\text{.}\n    \\label{eqn:fexprr}\n  \\end{equation}\n\nPartiality as a function of $r_h$ is a simple inverted parabola with $p_A=1$ at $r_h=0$ and \nroots at $\\pm r_s$ (Fig. 2).  Outside of this domain the partiality is 0. \n\n  \\begin{figure}[htb!]\n  \\begin{center}\n  \\includegraphics[scale=0.65]{Figure_2.pdf}\n  %\\includesvg{Figure_2}\n  \\label{fig:2}\n  \\end{center}\n  \\begin{center}\n  {Fig. 2. Three partiality models:  \n  a simple hard-sphere model ($p_A$, blue), a soft-sphere Lorentzian\n  function ($p_B$, red), and an intermediate model based on a Gaussian function\n  ($p_G$, green).  }\n  \\end{center}\n  \\end{figure}\n\nHowever, having a \nmathematically discontinuous\nexpression ($p_A$) will leave us at a disadvantage for postrefinement.  The postrefinement strategy  \nwill be to express the lack of closure $r_h$ in terms of model parameters such as the unit cell\ndimensions and crystal orientation.  We then optimize a target function $f$, expressed in \nterms of the partiality $p$, attempting to find parameter values that minimize $f$.  It is \ncrucial in this procedure to have an expression for partiality that is smooth and differentiable. \n\nWe will therefore have to modify our simple model of the Bragg spot as a reciprocal space ball.  \nOne functional form that might have the desired properties is the Lorentzian function\n  \\begin{equation}\n    L = \\frac{1}{\\pi}\\frac{\\frac{1}{2}\\Gamma}{x^2 + (\\frac{1}{2}\\Gamma)^2}\n    \\text{,}\n    \\label{eqn:loren}\n  \\end{equation}\nwhere $\\Gamma$ is the full-width at half maximum (FWHM).  
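Note that $L(0) = \\frac{1}{\\pi}\\,\\frac{\\Gamma/2}{(\\Gamma/2)^2} = \\frac{2}{\\pi\\Gamma}$, which motivates the peak-normalizing factor introduced next.  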
\n\nLet's tinker with this expression so that it conforms to our expectations: first, multiply by \na scaling constant to get $L'(0)=1$:\n  \\begin{equation}\n    L^{\\prime} = \\frac{\\pi \\Gamma}{2}L\n    \\text{,}\n    \\label{eqn:lorenA}\n  \\end{equation}\n\nand finally set the FWHM to the value obtained for the hard-sphere model in eqn \\eqref{eqn:fexprr},\n  \\begin{equation}\n    \\Gamma = \\frac{2r_s}{\\sqrt{2}}\n    \\text{,}\n    \\label{eqn:fwhm}\n  \\end{equation}\nso we get a new partiality expression (model B):\n\n  \\begin{equation}\n    p_B = \\frac{r_s^2}{2r_h^2 + r_s^2}\n    \\text{.}\n    \\label{eqn:pb}\n  \\end{equation}\n\nFinally, for postrefinement we'll need the partial derivative of $p_B$ with respect to $r_h$ (use the \nquotient rule):\n  \\begin{equation}\n    \\frac{\\partial{p_B}}{\\partial{r_h}} = \\frac{-4r_s^2r_h}{(2r_h^2 + r_s^2)^2}\n    \\text{.}\n    \\label{eqn:deriv}\n  \\end{equation}\n\n\n  \\subsection*{3 Model parameters and target function}\n  \\par The goal of this work is to refine the parameters of the partiality model so that the \n  observed intensities, corrected to their full spot equivalents, offer the best agreement over\n  repeated measurements of the same asymmetric-unit Miller index.  In practice,\n  the parameters representing each crystal lattice are refined \n  against a set of reference intensities $I_{\\mathrm{ref}}$.  \n  Program {\\it prime} uses simple scaling to create an initial reference, after which repeated \n  cycles of postrefinement are performed, with the reference being created from the corrected, \n  merged intensities from the previous cycle.  In {\\it cxi.merge} the reference is an isomorphous\n  atomic structure, from which intensities $I_{\\mathrm{ref}}$ are calculated, and only one \n  cycle is performed.  The polarization-corrected measurements are denoted $I_{\\mathrm{obs}}$.\n  The parameters refined for each crystal lattice are:\n  \n  \\par $G$: the scale factor.\n  \\par $B$: the Wilson $B$-factor.\n  \\par $\\theta_{x}$: incremental crystal rotation angle about the $x$-axis ($\\perp$ to beam).\n  \\par $\\theta_{y}$: incremental crystal rotation angle about the $y$-axis ($\\perp$ to beam).\n  \\par The least-squares target function used to achieve best agreement between model and \n  observation is \n  \n  \n  \\begin{equation}\n \\mathscr{F} =  \\sum\\limits_{i}\n    ( G  \\exp(\\dfrac{-8B\\sin^2\\theta}{\\lambda^2}) p_B I_{\\mathrm{ref}} - I_{\\mathrm{obs}})^{2}\n    \\text{,}\n\\end{equation}\nwhere $\\theta$ is the Bragg diffraction angle defined in Fig. 1, and $\\lambda$ the wavelength, both \ntreated as constants, and the sum is over all measurements integrated from a single crystal lattice. 
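\n\nAs an illustration only (a sketch under our own naming conventions, not the actual {\\it prime} or {\\it cxi.merge} code), the residual entering this target function can be written in a few lines of Python:\n\\begin{verbatim}\nimport math\n\ndef partiality_B(r_h, r_s):\n    # Lorentzian-profile partiality: p_B = r_s^2 / (2 r_h^2 + r_s^2)\n    return r_s ** 2 / (2.0 * r_h ** 2 + r_s ** 2)\n\ndef residual(G, B, theta, wavelength, r_h, r_s, I_ref, I_obs):\n    # R_i = G exp(-8 B sin^2(theta) / lambda^2) p_B I_ref - I_obs\n    scale = G * math.exp(-8.0 * B * math.sin(theta) ** 2 / wavelength ** 2)\n    return scale * partiality_B(r_h, r_s) * I_ref - I_obs\n\ndef target(measurements):\n    # F = sum of squared residuals over all measurements from one lattice;\n    # each element of measurements is the argument tuple of residual().\n    return sum(residual(*m) ** 2 for m in measurements)\n\\end{verbatim}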
\n\n  \\subsection*{4 Necessary derivatives for parameter refinement}\n\n  \\par Given the least-squares form, derivatives of the target functional with respect to \n  parameter $\\mathscr{P}$ are in general\n  \n    \\begin{equation}\n    \\frac{\\partial\\mathscr{F}}{\\partial\\mathscr{P}} = 2\\sum\\limits_{i}\n    \\mathscr{R}_i\\dfrac{\\partial\\mathscr{R}_i}{\\partial\\mathscr{P}}\n    \\text{,}\n    \\label{eqn:genFP}\n  \\end{equation}\nwhere the residual comparison on each observation is \n\n  \\begin{equation}\n \\mathscr{R}_i =  \n    G  \\exp(\\dfrac{-8B\\sin^2\\theta}{\\lambda^2}) p_B I_{\\mathrm{ref}} - I_{\\mathrm{obs}}\n    \\text{.}\n    \\end{equation}\nThe derivatives in the Jacobian matrix, required for parameter optimization, are more or less\nstraightforward:\n\n    \\begin{equation}\n    \\frac{\\partial\\mathscr{R}_i}{\\partial G} = \n    \\exp(\\dfrac{-8B\\sin^2\\theta}{\\lambda^2}) p_B I_{\\mathrm{ref}}\n    \\text{,}\n    \\label{eqn:dG}\n  \\end{equation}\n\n     \\begin{equation}\n    \\frac{\\partial\\mathscr{R}_i}{\\partial B} = \n    G  \\exp(\\dfrac{-8B\\sin^2\\theta}{\\lambda^2}) p_B I_{\\mathrm{ref}}\n    \\left( \\dfrac{-8\\sin^2\\theta}{\\lambda^2}\\right)\n    \\text{.}\n    \\label{eqn:dB}\n  \\end{equation}\n\n  \\par The derivatives with respect to $\\theta_{x}$ and $\\theta_{y}$ require more work.  All of the\n  dependence on crystal orientation comes through the expression for partiality:\n  \n      \\begin{equation}\n    \\frac{\\partial\\mathscr{R}_i}{\\partial \\theta_{x\\mathrm{|}y}} = \n  \\frac{\\partial\\mathscr{R}_i}{\\partial p_B}\n   \\frac{\\partial p_B} {\\partial \\theta_{x\\mathrm{|}y}}\n    \\text{,}\n    \\label{eqn:dthxy}\n  \\end{equation}\n\nwith\n\n      \\begin{equation} \n  \\frac{\\partial\\mathscr{R}_i}{\\partial p_B} =\n  G  \\exp(\\dfrac{-8B\\sin^2\\theta}{\\lambda^2}) I_{\\mathrm{ref}}\n    \\text{.}\n    \\label{eqn:P_terms}\n  \\end{equation}\n\nAs for the variation of the partiality model $p_B$ defined in \\eqref{eqn:pb}, the {\\tt rs}\nmodel assumes that the sphere radius $r_s$ is fixed, thus the only remaining variable is the\ndistance $r_h$ between RLP and Ewald sphere:\n\n   \\begin{equation}\n   \\frac{\\partial p_B} {\\partial \\theta_{x\\mathrm{|}y}} = \n  \\frac{\\partial p_B}{\\partial r_h}\n   \\frac{\\partial r_h} {\\partial \\theta_{x\\mathrm{|}y}}\n    \\text{.}\n    \\label{eqn:dpb2}\n  \\end{equation}\n\n\n  \\par An expression for $\\dfrac{\\partial p_B}{\\partial r_h}$ has already been \n  given in \\eqref{eqn:deriv}, so it now\n  remains to investigate the derivative $\\dfrac{\\partial r_h} {\\partial \\theta_{x\\mathrm{|}y}}$,\n  based on the definition of $r_h$ given in \\eqref{eqn:Rh}.\n\nIntroduce the vector $\\mathbf{S}$:\n   \\begin{equation}\n   \\mathbf{S} = \\mathbf{q}+\\mathbf{s_0}\n    \\text{,}\n    \\label{eqn:SS}\n  \\end{equation}\n\n   \\begin{equation}\n {r_h} = ||\\mathbf{S}||-\\dfrac{1}{ \\lambda}\n     \\text{,}\n    \\label{eqn:sss}\n  \\end{equation}\n  \n    \\begin{equation}\n   \\dfrac{\\partial r_h} {\\partial \\theta_{x\\mathrm{|}y}} = \n   \\dfrac\n   {\\mathbf{S}\\cdot{\n   \\dfrac{\\partial \\mathbf{S}} {\\partial \\theta_{x\\mathrm{|}y}}\n   }}\n   {||\\mathbf{S}||}\n    \\text{,}\n    \\label{eqn:rhth}\n  \\end{equation}\n\n    \\begin{equation}\n   \\dfrac{\\partial \\mathbf{S}} {\\partial \\theta_{x\\mathrm{|}y}} = \n  {\n   \\dfrac{\\partial \\mathbf{q}} {\\partial \\theta_{x\\mathrm{|}y}}\n   }\n    \\text{.}\n    \\label{eqn:derivsq}\n  \\end{equation}\n\nFinally 
we investigate the derivative of the RLP position $\\mathbf{q}$ with respect to the crystal \nrotations.  The effective orientation matrix $\\mathbf{A}$ may be expressed as the \nreference orientation matrix $\\mathbf{A}_\\mathrm{ref}$ determined during crystal refinement and \nintegration, composed with additional rotational operators $\\mathbb{R}_x$ and \n$\\mathbb{R}_y$ determined by the postrefined angles $\\theta_{x}$ and \n$\\theta_{y}$:\n\n    \\begin{equation}\n   \\mathbf{A} = \n\\mathbb{R}_y( \\mathbb{R}_x ( \\mathbf{A}_\\mathrm{ref} )  )\n    \\text{.}\n    \\label{eqn:compoase}\n  \\end{equation}\n\nThe derivatives of $\\mathbf{q}$ work out as follows:\n\n  \\begin{equation}\n   \\dfrac{\\partial \\mathbf{q}} {\\partial \\theta_{x}} = \n  \\mathbb{R}_y \\dfrac{\\partial\\mathbb{R}_x}{\\partial \\theta_{x}}  \\mathbf{A}_\\mathrm{ref} \\mathbf{h}\n    \\text{,}\n    \\label{eqn:qthx}\n  \\end{equation}\n  \\begin{equation}\n   \\dfrac{\\partial \\mathbf{q}} {\\partial \\theta_{y}} = \n  \\dfrac{\\partial\\mathbb{R}_y}{\\partial \\theta_{y}} \\mathbb{R}_x \\mathbf{A}_\\mathrm{ref} \\mathbf{h}\n    \\text{.}\n    \\label{eqn:qthy}\n  \\end{equation}\n\nThe derivatives of the rotation operator are already encoded in the cctbx library ({\\tt scitbx/matrix/\\_\\_init\\_\\_.py}).  Formulae for the rotation operator and its derivative with \nrespect to angle $\\theta$ are given in the LaTeX documentation included in that directory.\n\n  \\subsection*{5 The model {\\tt rs\\_hybrid}: Additional refinement of the parameter $r_s$ }\nAfter refining the parameters $G$, $B$, $\\theta_{x}$, and $\\theta_{y}$\nwe now add a second minimization round to refine an additional parameter for each crystal lattice:\n$r_s$, the RLP radius, as shown in Fig. 1.\n\n\\par We thus need the derivative of the residual $\\mathscr{R}_i$ with respect to this new parameter:\n        \n   \\begin{equation}\n    \\frac{\\partial\\mathscr{R}_i}{\\partial r_s} = \n  \\frac{\\partial\\mathscr{R}_i}{\\partial p_B}\n   \\frac{\\partial p_B} {\\partial r_s}\n    \\text{,}\n    \\label{eqn:dRrs}\n  \\end{equation}\n\nwhere  $\\dfrac{\\partial\\mathscr{R}_i}{\\partial p_B}$ has already been given by Eqn. \\eqref{eqn:P_terms}. \n\nThe remaining factor is derived from the partiality expression associated with the Lorentzian RLP \nprofile, Eqn. \\eqref{eqn:pb}.  Using the quotient rule, with $N$=numerator and $D$=denominator,\n\n   \\begin{equation}\n    \\frac{\\partial p_B} {\\partial r_s} = \n  \\frac{\n      \\dfrac{\\partial N}{\\partial{r_s}} D-\n      \\dfrac{\\partial D}{\\partial{r_s}} N\n    }\n    {D^2}\n    =\n   \\frac{4{r_s}{r_h}^2}{(2r_h^2 + r_s^2)^2}\n    \\text{.}\n    \\label{eqn:dpb_drs}\n  \\end{equation}\n\n  \\subsection*{6 A Gaussian-shaped radial profile for the RLP }\nOne criticism of the RLP model B and its Lorentzian-shaped radial profile $p_B (r_h)$ is that the\nprofile tails off very gradually.  There is still a significant partiality fraction many radii away from the\nRLP center: from \\eqref{eqn:pb}, $p_B = 1/(2k^2+1)$ at $r_h = k\\,r_s$, which is still about $5\\%$ at\nthree radii.  Perhaps this is unphysical; after all, we are trying to test the hypothesis that negative\nmeasurements are in fact false predictions, too far from the Ewald sphere to contribute\nany diffracted signal.  Therefore, let's choose a model that provides a sharper delineation \nbetween RLP and not-RLP.  
We'll then test if this model fits the data better, presumably by looking \nat the target functional or the correlation coefficient.\nA candidate function giving a sharper cutoff, while still being smoothly\ndifferentiable, is the Gaussian,\n\n   \\begin{equation}\n    g = \n  \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\n   \\exp (- \\dfrac {(x - \\mu)^2}{2\\sigma^2})\n    \\text{,}\n    \\label{eqn:gaussian}\n  \\end{equation}\n\nwith mean $\\mu$ and standard deviation $\\sigma$ (we write $g$ rather than $G$ to avoid a clash with the scale factor $G$ of section 3).\n  \nAs before, we develop a modified function that has an amplitude of 1 at a reciprocal-distance offset $r_h$ of 0:\n\n   \\begin{equation}\n    g^{\\prime} = \n   \\exp (- \\dfrac {r_h^2}{2\\sigma^2})\n    \\text{.}\n    \\label{eqn:gaussprime}\n  \\end{equation}\n\nTo eliminate the variable $\\sigma$ we set the FWHM of function \\eqref{eqn:gaussprime},\n   \\begin{equation}\n   2 \\sqrt{(2 \\ln{2})\\sigma^2}\n    \\text{,}\n    \\label{eqn:fwhmgaussprime}\n  \\end{equation}\n\nto be equal to the FWHM expression worked out for models A and B in eqn \\eqref{eqn:fwhm}. From this \ncondition we can work out the value of the variance\n\n   \\begin{equation}\n   \\sigma^2 = \n    \\dfrac {r_s^2} {4\\ln{2}}\n    \\text{,}\n    \\label{eqn:gsigma}\n  \\end{equation}\n\nand now eliminate $\\sigma$ to arrive at a new expression for the partiality (model G):\n\n   \\begin{equation}\n  p_G = \n    \\exp( \\dfrac {-(2 \\ln{2}) r_h^2} {r_s^2}\n    )\n    \\text{.}\n    \\label{eqn:pg}\n  \\end{equation}\n\nThis function is plotted in Fig. 2 (green dots), illustrating that $p_G$ is a better \napproximation to the \nhard-sphere RLP model (blue) than is $p_B$ (red).  Finally, for parameter refinement we need the \npartial derivatives of $p_G$ with respect to its constituent variables,\n\n  \\begin{equation}\n    \\frac{\\partial{p_G}}{\\partial{r_h}} = \n    - p_G \\frac {(4 \\ln{2}) r_h}{r_s^2}\n    \\text{, }\n    \\label{eqn:dpG_drh}\n  \\end{equation}\nand\n  \\begin{equation}\n    \\frac{\\partial{p_G}}{\\partial{r_s}} = p_G\\frac{(4 \\ln{2})r_h^2}{r_s^3}\n    \\text{.}\n    \\label{eqn:dpG_drs}\n  \\end{equation}\n\n  \n%%%%%%%%%%%%%%%%%%%%%\n\\end{document}\n", "meta": {"hexsha": "a1c816bf4619ec5066905d79eca72d6c33db7e57", "size": 16891, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cctbx/examples/merging/samosa/postrefinement_rs_model.tex", "max_stars_repo_name": "rimmartin/cctbx_project", "max_stars_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 155, "max_stars_repo_stars_event_min_datetime": "2016-11-23T12:52:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T15:35:44.000Z", "max_issues_repo_path": "cctbx/examples/merging/samosa/postrefinement_rs_model.tex", "max_issues_repo_name": "rimmartin/cctbx_project", "max_issues_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 590, "max_issues_repo_issues_event_min_datetime": "2016-12-10T11:31:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T23:10:09.000Z", "max_forks_repo_path": "cctbx/examples/merging/samosa/postrefinement_rs_model.tex", "max_forks_repo_name": "rimmartin/cctbx_project", "max_forks_repo_head_hexsha": "644090f9432d9afc22cfb542fc3ab78ca8e15e5d", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 115, "max_forks_repo_forks_event_min_datetime": "2016-11-15T08:17:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T15:30:14.000Z", 
"avg_line_length": 37.1230769231, "max_line_length": 181, "alphanum_fraction": 0.6825528388, "num_tokens": 5398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289388083214156, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.5544608114131574}}
{"text": "\\documentclass[a4paper]{article}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{xcolor}\n\\usepackage{amsthm}\n\\usepackage[mathcal]{euscript}\n\n\\usepackage{url}\n\n\\newcommand{\\Hcal}{\\mathcal{H}}\n\\newcommand{\\real}{\\mathbb{R}}\n\n\\title{Some notes on Sokal (2010)}\n\\author{Nazarov Ivan}\n\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Uniform Boundedness Principle} % (fold)\n\\label{sec:uniform_boundedness_principle}\n\nIn this section we treat the Hilbert space $\\Hcal$ with the natural inner product\nnorm as a Banach space $(\\Hcal, \\|\\cdot\\|)$. Consider a collection $(l_n)_{n\\geq1}$\nof bounded linear functionals $\\Hcal\\to\\real$ that is bounded at every point:\n$\\sup_{n\\geq1} \\lvert l_n(x)\\rvert < +\\infty$ for any $x \\in \\Hcal$. The following\nsection follows the proof given in Sokal (2010).\\footnotemark\n\\footnotetext{arXiv:1005.1585 -- \\url{https://arxiv.org/abs/1005.1585v2}}\n\n\\paragraph{Lemma} % (fold)\n\\label{par:lemma}\n\nFor any $x\\in \\Hcal$ and $\\delta > 0$ we have $\\delta \\|l\\| \\leq \\sup_{z\\in B(x, \\delta)}\n\\lvert l(z) \\rvert$, where $B(x, \\delta) = \\{z\\in \\Hcal\\colon \\|z-x\\| \\leq \\delta\\}$.\nIndeed, for any $\\|h\\| \\leq 1$ we have \n\\begin{equation*}\n  \\lvert l(h) \\rvert\n    = \\tfrac1\\delta \\lvert l(\\delta h) \\rvert\n    \\leq \\tfrac1{2 \\delta} \\bigl(\n      \\lvert l(x + \\delta h) \\rvert + \\lvert l(x - \\delta h) \\rvert\n    \\bigr)\n    \\leq \\tfrac1{2 \\delta}\n      2 \\sup\\{\\lvert l(z) \\rvert \\colon z\\in B(x, \\delta)\\}\n      \\,,\n\\end{equation*}\nimplying that the upper bound on rhs is not less than the supremum of the lhs.\n\n% paragraph lemma (end)\n\n\\paragraph{Proof} % (fold)\n\\label{par:proof}\n\nWe proceed by contradiction and suppose that $\\sup_{n\\geq1} \\|l_n\\| = \\infty$.\n\nTake a sequence $L_k \\uparrow +\\infty$, and a pair of values $\\eta, q \\in (0, 1)$,\nwhich will be specified and discussed along the proof. There exists $(n_k)_{k\\geq1}\n\\uparrow$ such that $\\|l_{n_k}\\| > L_k$. Let $x_0 = 0 \\in \\Hcal$ and $\\delta_k = q^k$.\nFor any $k\\geq 1$ the norm $\\|l_{n_k} \\|$ is finite, therefore the lemma implies\n\\begin{equation*}\n  \\delta_k \\eta \\|l_{n_k}\\|\n    < \\delta_k \\|l_{n_k}\\|\n    \\leq \\sup\\{\\lvert l(z) \\rvert \\colon z\\in B(x_{k-1}, \\delta_k)\\}\n      \\,,\n\\end{equation*}\nwhich further implies the existence of $x_k \\in B(x_{k-1}, \\delta_k)$ with $\\delta_k\n\\eta \\|l_{n_k}\\| < \\lvert l_{n_k}(x_k) \\rvert$. Thus, we define a sequence $(x_k)_{k\\geq1}\n\\in \\Hcal$ with $\\|x_k - x_{k-1}\\| \\leq \\delta_k$ and the just mentioned bound on\n$\\|l_{n_k}\\|$. The constructed sequence is Cauchy: for any $p \\geq k$\n\\begin{equation*}\n  \\|x_p - x_k \\|\n    \\leq \\sum_{i=k+1}^p \\|x_i - x_{i-1} \\|\n    % \\leq \\sum_{i=1}^{p-k} \\delta_{i+k}\n    \\leq \\sum_{i=1}^{p-k} q^{i+k}\n    % = q^{k+1} \\tfrac{1 - q^{p-k}}{1-q}\n    < \\frac{q \\delta_k}{1-q}\n    \\,.\n    % 1 - q^n = 1 + q^1 + .. + q^{n-1} - (q^1 + q^2 + .. + q^n)\n    %  = (1 - q)(1 + q^1 + .. 
+ q^{n-1})\n\\end{equation*}\nSince $\\Hcal$ is complete, $\\exists x_*\\in \\Hcal$ such that $\\|x_k - x_*\\| \\to 0$.\nFor any $p \\geq k$ we get\n\\begin{equation*}\n  \\|x_k - x_* \\|\n    \\leq \\| x_p - x_*\\| + \\sum_{i=k+1}^p \\|x_i - x_{i-1} \\|\n    \\leq \\| x_p - x_*\\| + \\frac{q \\delta_k}{1-q}\n    \\,,\n\\end{equation*}\nand conclude that $\\|x_k - x_* \\| \\leq \\frac{q \\delta_k}{1-q}$ for any $k\\geq 1$.\nObserve that for all $k\\geq 1$\n\\begin{align*}\n  \\delta_k \\eta \\|l_{n_k} \\|\n    &< \\lvert l_{n_k}(x_k) \\rvert\n      \\leq \\lvert l_{n_k}(x_*) \\rvert\n        + \\lvert l_{n_k}(x_k - x_*) \\rvert\n    \\\\\n    &\\leq \\lvert l_{n_k}(x_*) \\rvert\n      + \\|x_k - x_*\\| \\|l_{n_k}\\|\n    \\\\\n    &\\leq \\lvert l_{n_k}(x_*) \\rvert\n      + \\tfrac{q}{1-q} \\delta_k \\|l_{n_k}\\|\n      \\,,\n\\end{align*}\nwhence $\\bigl(\\eta - \\tfrac{q}{1-q} \\bigr) \\delta_k \\|l_{n_k}\\| \\leq \\lvert l_{n_k}(x_*) \\rvert$\nfor all $k\\geq 1$.\n\nHere we make the necessary specifications, which, despite affecting the particular\nvalues, do not alter the key properties of the constructed sequence. We assume that\n$1 > \\eta > \\tfrac{q}{1-q}$ (this implies $q < \\tfrac\\eta{1 + \\eta}$). Since $0 \\leq\nL_k < \\|l_{n_k}\\|$ we finally get the following lower bound:\n\\begin{equation*}\n  \\tilde{L}_k\n    = \\bigl(\\eta - \\tfrac{q}{1-q} \\bigr) q^k L_k\n    \\leq \\lvert l_{n_k}(x_*) \\rvert\n    \\,.\n\\end{equation*}\nWe are also free to require that the original $L_k \\uparrow \\infty$ be such that\n$\\tilde{L}_k \\uparrow \\infty$. For such a choice we get the following statement: there\nis $(n_k)_{k\\geq1} \\uparrow$ and $x_* \\in \\Hcal$ such that $\\sup_{k\\geq 1} \\lvert\nl_{n_k}(x_*) \\rvert = +\\infty$. This contradicts the original assumption that\n$\\sup_{n\\geq 1}\\lvert l_n(x) \\rvert$ is bounded for any given $x\\in \\Hcal$.\n\nSo is it possible to meet all the requirements outlined in the specifications above?\nIt is, if we pick any $\\alpha, \\beta > 1$ and set\n\\begin{itemize}\n  \\item $q = \\tfrac1{2 (1 + \\alpha)}$ and $\\eta = \\tfrac{\\alpha q}{1-q}$\n  \\item $L_k = \\bigl(\\eta - \\tfrac{q}{1-q}\\bigr)^{-1} q^{-k} \\beta^k$\n\\end{itemize}\nthen $\\eta < 1$, $\\eta - \\tfrac{q}{1-q} = \\tfrac{\\alpha - 1}{1 + 2\\alpha} > 0$,\n$L_k = \\tfrac{1 + 2\\alpha}{\\alpha - 1} (2 (1 + \\alpha) \\beta)^k \\to \\infty$,\n$\\delta_k = q^k \\to 0$, and $\\tilde{L}_k = \\beta^k \\to \\infty$.\n\n% paragraph proof (end)\n\n\\paragraph{Other proofs} % (fold)\n\\label{par:other_proofs}\n\nThere is another proof of this principle, which explicitly uses the continuity of the\nlinear functionals and the Baire category theorem to produce a point that violates\nthe original assumptions.\n\n% paragraph other_proofs (end)\n\n\\paragraph{Notes} % (fold)\n\\label{par:notes}\n\nThe uniform boundedness principle holds for arbitrarily large collections of linear\nfunctionals and can be extended to bounded linear operators.\n\n% paragraph notes (end)\n\n% section* uniform_boundedness_principle (end)\n\n\\end{document}\n", "meta": {"hexsha": "00325ecf3ac4cc10e53096818cc916218980b9b0", "size": 5762, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "scribbles/arxiv-1005-1585.tex", "max_stars_repo_name": "ivannz/general-scribbles", "max_stars_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-12-07T20:41:34.000Z", "max_stars_repo_stars_event_max_datetime":
"2020-01-28T12:47:40.000Z", "max_issues_repo_path": "scribbles/arxiv-1005-1585.tex", "max_issues_repo_name": "ivannz/general-scribbles", "max_issues_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scribbles/arxiv-1005-1585.tex", "max_forks_repo_name": "ivannz/general-scribbles", "max_forks_repo_head_hexsha": "48652c077fa008be5af0db8ab24e7a39f2d03fe1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.2389937107, "max_line_length": 96, "alphanum_fraction": 0.6312044429, "num_tokens": 2255, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710086, "lm_q2_score": 0.8289388125473629, "lm_q1q2_score": 0.5544608087683147}}
{"text": "\\documentclass[12pt]{cdblatex}\n\\usepackage{fancyhdr}\n\\usepackage{footer}\n\n\\begin{document}\n\n% =================================================================================================\n% create checkpoint file\n\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   import cdblib\n   checkpoint_file = 'tests/semantic/output/rnc2gen.json'\n   cdblib.create (checkpoint_file)\n   checkpoint = []\n\\end{cadabra}\n\\egroup\n\n% =================================================================================================\n\\section*{Convert from rnc to generic coordinates}\n\nThe following code is based on the {\\tt gen2rnc.tex} code.\n\nIt is common to do some computations in a local RNC. Doing so makes various parts of the\ncomputations much easier to manage than in the original non-RNC coordinates. One simple example\nis the proof of the second Bianchi identities.\n\nThis code develops the inverse transformation, that is from the local RNC coordinates back to\ngeneric coordinates. The key equation (drawn form {\\tt gen2rnc.tex}) is\n\\begin{align}\n   x^a_j = x^a_i + y^a - \\sum_{k=2}^\\infty\\>\\frac{1}{k!}\\>\\Gamma^{a}_{\\ubk}y^{.\\ubk}\n\\end{align}\nIn {\\tt gen2rnc.tex} this equation was solved for the RNC coordinates $y$ given the generic\ncoordinates $x_j$ and $x_i$. Here we will instead take $x_i$ and $y$ as given and use this\nequation to compute $x_j$. The first change we will make is to replace $x_j$ with $x$ (as the\nsubscript $j$ serves no useful purpose).\n\nThus our job will be to compute\n\\begin{align}\n   \\label{eq:rnc2xyz}\n   x^a = x^a_i + y^a - \\sum_{k=2}^\\infty\\>\\frac{1}{k!}\\>\\Gamma^{a}_{\\ubk}y^{.\\ubk}\n\\end{align}\ngiven $x_i$ and $y$. The generalised connections will be computed recursively by\n\\begin{align}\n   \\label{eq:GenGamma}\n   \\Gamma^{a}{}_{b\\uc d} = \\Gamma^{a}{}_{(b\\uc,d)}\n                   - (n+1) \\Gamma^{a}{}_{p(\\uc}\n                           \\Gamma^{p}{}_{bd)}\n\\end{align}\n\nAs noted in {\\tt gen2rnc.tex}, the generalised connections will scale with the expensions\nparameter $\\eps$ according to\n\\begin{gather*}\n   \\Gamma^{a}{}_{bc} = \\BigO{\\eps}\\>,\\qquad\n   \\Gamma^{a}{}_{bcd} = \\BigO{\\eps^2}\\>,\\qquad\n   \\Gamma^{a}{}_{bcde} = \\BigO{\\eps^3}\\>,\\qquad\n   \\text{etc.}\n\\end{gather*}\n\n\\clearpage\n\n% ===================================================================\n\n\\begin{cadabra}\n   {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w#}::Indices(position=independent).\n\n   D{#}::Derivative.\n   \\nabla{#}::Derivative.\n   \\partial{#}::PartialDerivative.\n\n   g_{a b}::Metric.\n   g^{a b}::InverseMetric.\n   g_{a}^{b}::KroneckerDelta.\n   g^{a}_{b}::KroneckerDelta.\n   \\delta^{a}_{b}::KroneckerDelta.\n   \\delta_{a}^{b}::KroneckerDelta.\n\n   R_{a b c d}::RiemannTensor.\n   R^{a}_{b c d}::RiemannTensor.\n   R_{a b c}^{d}::RiemannTensor.\n\n   A^{a}::Depends(\\partial{#}).\n\n   g_{a b}::Depends(\\partial{#}).\n   R_{a b c d}::Depends(\\partial{#}).\n   R^{a}_{b c d}::Depends(\\partial{#}).\n\n   Q^{a}_{b c}::Depends(\\partial{#}).\n\n   Q^{a}_{b c}::TableauSymmetry(shape={2}, indices={1,2}).\n   Q^{a}_{b c d}::TableauSymmetry(shape={3}, indices={1,2,3}).\n   Q^{a}_{b c d e}::TableauSymmetry(shape={4}, indices={1,2,3,4}).\n   Q^{a}_{b c d e f}::TableauSymmetry(shape={5}, indices={1,2,3,4,5}).\n   Q^{a}_{b c d e f g}::TableauSymmetry(shape={6}, indices={1,2,3,4,5,6}).\n\n   Q^{p}_{a b}::Weight(label=numQ,value=1).\n   Q^{p}_{a b c}::Weight(label=numQ,value=2).\n   Q^{p}_{a b c 
d}::Weight(label=numQ,value=3).\n   Q^{p}_{a b c d e}::Weight(label=numQ,value=4).\n   Q^{p}_{a b c d e f}::Weight(label=numQ,value=5).\n\n   def product_sort (obj):\n       substitute (obj,$ A^{a}                     -> A001^{a}               $)\n       substitute (obj,$ x^{a}                     -> A002^{a}               $)\n       substitute (obj,$ g^{a b}                   -> A003^{a b}             $)\n       substitute (obj,$ Q^{p}_{a b}               -> A004^{p}_{a b}         $)\n       substitute (obj,$ Q^{p}_{a b c}             -> A005^{p}_{a b c}       $)\n       substitute (obj,$ Q^{p}_{a b c d}           -> A006^{p}_{a b c d}     $)\n       substitute (obj,$ Q^{p}_{a b c d e}         -> A007^{p}_{a b c d e}   $)\n       substitute (obj,$ Q^{p}_{a b c d e f}       -> A008^{p}_{a b c d e f} $)\n       sort_product   (obj)\n       rename_dummies (obj)\n       substitute (obj,$ A001^{a}                  -> A^{a}                  $)\n       substitute (obj,$ A002^{a}                  -> x^{a}                  $)\n       substitute (obj,$ A003^{a b}                -> g^{a b}                $)\n       substitute (obj,$ A004^{p}_{a b}            -> Q^{p}_{a b}            $)\n       substitute (obj,$ A005^{p}_{a b c}          -> Q^{p}_{a b c}          $)\n       substitute (obj,$ A006^{p}_{a b c d}        -> Q^{p}_{a b c d}        $)\n       substitute (obj,$ A007^{p}_{a b c d e}      -> Q^{p}_{a b c d e}      $)\n       substitute (obj,$ A008^{p}_{a b c d e f}    -> Q^{p}_{a b c d e f}    $)\n\n       return obj\n\n   def truncateQ (obj,n):\n\n       ans = Ex(0)\n\n       for i in range (0,n+1):\n          foo := @(obj).\n          bah  = Ex(\"numQ = \" + str(i))\n          keep_weight (foo, bah)\n          ans = ans + foo\n\n       return ans\n\n   # A^{a} = dx^a/ds\n\n   Gamma := Q^{d}_{a b} A^{a} A^{b}.\n\n   dAds  := A^{c} \\partial_{c}{A^{d}}-> - @(Gamma).\n\n   # =============================================================\n   eq0 := @(Gamma).                        # cdb (eq0.000,eq0)\n\n   # =============================================================\n   eq1 := A^{c} \\partial_{c}{@(eq0)}.      # cdb (eq1.000,eq1)\n\n   distribute      (eq1)                   # cdb (eq1.001,eq1)\n   unwrap          (eq1)                   # cdb (eq1.002,eq1)\n   product_rule    (eq1)                   # cdb (eq1.003,eq1)\n   distribute      (eq1)                   # cdb (eq1.004,eq1)\n   substitute      (eq1,dAds)              # cdb (eq1.005,eq1)\n   distribute      (eq1)                   # cdb (eq1.006,eq1)\n   eq1 = truncateQ (eq1,5)                 # cdb (eq1.007,eq1)\n   sort_product    (eq1)                   # cdb (eq1.008,eq1)\n   rename_dummies  (eq1)                   # cdb (eq1.009,eq1)\n   canonicalise    (eq1)                   # cdb (eq1.010,eq1)\n\n   # =============================================================\n   eq2 := A^{c} \\partial_{c}{@(eq1)}.      
# cdb (eq2.000,eq2)\n\n   distribute      (eq2)                   # cdb (eq2.001,eq2)\n   unwrap          (eq2)                   # cdb (eq2.002,eq2)\n   product_rule    (eq2)                   # cdb (eq2.003,eq2)\n   distribute      (eq2)                   # cdb (eq2.004,eq2)\n   substitute      (eq2,dAds)              # cdb (eq2.005,eq2)\n   distribute      (eq2)                   # cdb (eq2.006,eq2)\n   eq2 = truncateQ (eq2,5)                 # cdb (eq2.007,eq2)\n   sort_product    (eq2)                   # cdb (eq2.008,eq2)\n   rename_dummies  (eq2)                   # cdb (eq2.009,eq2)\n   canonicalise    (eq2)                   # cdb (eq2.010,eq2)\n\n   # =============================================================\n   eq3 := A^{c} \\partial_{c}{@(eq2)}.      # cdb (eq3.000,eq3)\n\n   distribute      (eq3)                   # cdb (eq3.001,eq3)\n   unwrap          (eq3)                   # cdb (eq3.002,eq3)\n   product_rule    (eq3)                   # cdb (eq3.003,eq3)\n   distribute      (eq3)                   # cdb (eq3.004,eq3)\n   substitute      (eq3,dAds)              # cdb (eq3.005,eq3)\n   distribute      (eq3)                   # cdb (eq3.006,eq3)\n   eq3 = truncateQ (eq3,5)                 # cdb (eq3.007,eq3)\n   sort_product    (eq3)                   # cdb (eq3.008,eq3)\n   rename_dummies  (eq3)                   # cdb (eq3.009,eq3)\n   canonicalise    (eq3)                   # cdb (eq3.010,eq3)\n\n   # =============================================================\n   eq4 := A^{c} \\partial_{c}{@(eq3)}.      # cdb (eq4.000,eq4)\n\n   distribute      (eq4)                   # cdb (eq4.001,eq4)\n   unwrap          (eq4)                   # cdb (eq4.002,eq4)\n   product_rule    (eq4)                   # cdb (eq4.003,eq4)\n   distribute      (eq4)                   # cdb (eq4.004,eq4)\n   substitute      (eq4,dAds)              # cdb (eq4.005,eq4)\n   distribute      (eq4)                   # cdb (eq4.006,eq4)\n   eq4 = truncateQ (eq4,5)                 # cdb (eq4.007,eq4)\n   sort_product    (eq4)                   # cdb (eq4.008,eq4)\n   rename_dummies  (eq4)                   # cdb (eq4.009,eq4)\n   canonicalise    (eq4)                   # cdb (eq4.010,eq4)\n\n\\end{cadabra}\n\n\\clearpage\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{eq0.000} \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{eq1.000} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.001} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.002} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.003} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.004} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.005} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.006} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.007} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.008} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.009} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq1.010} \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{eq2.000} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.001} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.002} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.003} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.004} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.005} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.006} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.007} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.008} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.009} \\end{dmath*}\n   \\begin{dmath*} \\cdb*{eq2.010} 
\\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{eq3.010} \\end{dmath*}\n\\end{dgroup*}\n\n\\clearpage\n\\begin{dgroup*}\n   \\begin{dmath*} \\cdb*{eq4.010} \\end{dmath*}\n\\end{dgroup*}\n\n% -------------------------------------------------------------------\n% reformat\n\n\\clearpage\n\n\\begin{cadabra}\n   def reformat (obj):\n      bah := @(obj).\n      distribute     (bah)\n      bah = product_sort (bah)\n      rename_dummies (bah)\n      canonicalise   (bah)\n      factor_out     (bah,$A^{a?}$)\n      substitute     (bah,$A^{a}->y^{a}$)\n      substitute     (bah,$Q^{a}_{b c}->\\Gamma^{a}_{b c}$)\n      ans := @(bah).\n      return ans\n\n   eq0 = reformat(eq0)  # cdb (eq0.100,eq0)\n   eq1 = reformat(eq1)  # cdb (eq1.100,eq1)\n   eq2 = reformat(eq2)  # cdb (eq2.100,eq2)\n   eq3 = reformat(eq3)  # cdb (eq3.100,eq3)\n   eq4 = reformat(eq4)  # cdb (eq4.100,eq4)\n\n   checkpoint.append (eq0)\n   checkpoint.append (eq1)\n   checkpoint.append (eq2)\n   checkpoint.append (eq3)\n   checkpoint.append (eq4)\n\n\\end{cadabra}\n\n\\clearpage\n\n% =================================================================================================\n\\section*{Convert from local RNC coords (y) to generic (x)}\n\n\\begin{align*}\n   x^a = x^a_i + \\nx{0}^{a} - \\nx{1}^{a} - \\nx{2}^{a} - \\nx{3}^{a} - \\nx{4}^{a} - \\nx{5}^{a}\n\\end{align*}\n\n\\begin{dgroup*}\n   \\begin{dmath*}    \\nx{0}^{a} = y^{a} \\end{dmath*}\n   \\begin{dmath*} 2! \\nx{1}^{a} = \\cdb{eq0.100} \\end{dmath*}\n   \\begin{dmath*} 3! \\nx{2}^{a} = \\cdb{eq1.100} \\end{dmath*}\n   \\begin{dmath*} 4! \\nx{3}^{a} = \\cdb{eq2.100} \\end{dmath*}\n   \\begin{dmath*} 5! \\nx{4}^{a} = \\cdb{eq3.100} \\end{dmath*}\n   \\begin{dmath*} 6! \\nx{5}^{a} = \\cdb{eq4.100} \\end{dmath*}\n\\end{dgroup*}\n\n% =================================================================================================\n% export checkpoints in json format\n\n\\bgroup\n\\CdbSetup{action=hide}\n\\begin{cadabra}\n   for i in range( len(checkpoint) ):\n      cdblib.put ('check{:03d}'.format(i),checkpoint[i],checkpoint_file)\n\\end{cadabra}\n\\egroup\n\n\\end{document}\n", "meta": {"hexsha": "11108684aeb763dc2cf39358b12f85daead03508", "size": 11610, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "source/cadabra/rnc2gen.tex", "max_stars_repo_name": "leo-brewin/riemann-normal-coords", "max_stars_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-20T16:15:58.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-20T16:15:58.000Z", "max_issues_repo_path": "source/cadabra/rnc2gen.tex", "max_issues_repo_name": "leo-brewin/riemann-normal-coords", "max_issues_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "source/cadabra/rnc2gen.tex", "max_forks_repo_name": "leo-brewin/riemann-normal-coords", "max_forks_repo_head_hexsha": "4e6546028229b6f43fcef1c0b83660cddc021716", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.3311897106, "max_line_length": 99, "alphanum_fraction": 0.4823428079, "num_tokens": 4101, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.828938799869521, "lm_q2_score": 0.6688802669716107, "lm_q1q2_score": 0.5544608057598518}}
{"text": "\\documentclass[]{article}\n\\usepackage{amsmath}\n\n\\hbadness=99999\n\\title{Linear Algebra: Week 1 Notes and Exercises}\n\\author{Christopher Aytona}\n\n\\begin{document}\n\n\\begin{center}\n\\Huge{Week 1 Notes and Exercises}\n\\end{center}\n\n\\section{Notes}\n\\textbf{Network Flow Diagrams}\\\\\n$400 + x_2 = x_1$\\\\\n$400 = x_1 - x_2$\\\\\n$x_1 + x_3 - x_4 = 600$\\\\\n$x_4 + x_5 = 100$\\\\\n$x_2 + x_3 + x_5 = 300$\\\\\n\n\\textbf{Solutions to Systems of Linear Equations}\\\\\n\\textsf{A unique solution(Consistent)}\\\\\n$x - 2y = -1$\\\\\n$x = 2y-1$\\\\\n$-2y+1+3y = 3$\\\\\n$y = 2$\\\\\n$x - 4 = -1$\\\\\n$x = 3$\\\\\n$(x, y) = (3, 2)$\\\\\n\\textsf{No solutions(Inconsistent)}\\\\\n$x - 2y = -1$\\\\\n$-x+2y=3$\\\\\n$x = 2y-1$\\\\\n$-2y+1+2y=3$\\\\\n$0y=2$\\\\\n\\textsf{Infinitely many solutions(Consistent)}\\\\\n$x-2y=-1$\\\\\n$-x+2y=1$\\\\\n$x=2y-1$\\\\\n$-2y+1+2y=1$\\\\\n$0y=0$\\\\\n\n\\textbf{Linear Equations}\\\\\n\\textsf{Method 1}\\\\\n1) $x_1 - 2x_2 = -1$\\\\\n2) $-x_1 + 3x_2 = 3$\\\\\nRewrite (1) as:\\\\\n3) $x_1 = -1 + 2x_2$\\\\\nSub (3) into (2):\\\\\n$-(-1+2x_2) + 3x_2 = 3$\\\\\n$1 - 2x_2 + 3x_2 +2 = 3$\\\\\n$1 + x_2 = 3$\\\\\n$x_2 = 3 - 1$\\\\\n4) $x_2 = 2$\\\\\nSub (4) into (1):\\\\\n$x_1 - 2(2) = -1$\\\\\n$x_1 - 4 = -1$\\\\\n$x_1 = -1 + 4$\\\\\n$x_1 = 3$\\\\\n\\textsf{Method 2}\\\\\n1) $x_1 - 2x_2 + x_3 = 0$\\\\\n2) $2x_2 - 8x_3 = 8$\\\\\n3) $-4x_2 + 5x_2 + x_3 = -9$\\\\\nMultiply (1) by 4\\\\\n$4x_1 - 8x_2 + 4x_3 = 0$\\\\\n$-4x_1 + 5x_2 + x_3 = -9$\\\\\nAdd both\\\\\n$-3x_2 + 5x_3 = -9$\\\\\n$2x_2 - 8x_3 = 8$\\\\\nMake the coefficient the same(By multiplication)\\\\\n$-6x_2 + 10x_3 = -18$\\\\\n$6x_2 - 24x_3 = 24$\\\\\nAddition\\\\\n$-14x_3 = 6$\\\\\n$x_3 = \\frac{-6}{14}$\\\\\n\n\\textbf{Exercise}\\\\\n\\textsf{Problem 1}\\\\\n1) $x_1 - 3x_3 = 8$\\\\\n2) $2x_1 + 2x_2 + 9x_3 = 7$\\\\\n3) $x_2 + 5x_3 = -2$\\\\\n$x_1 = 8 + 3x_3$\\\\\n$x_2 = -2 - 5x_3$\\\\\nSub\\\\\n$2(8 + 3x_3) + 2(-2 - 5x_3) + 9x_3 = 7$\\\\\n$16 + 6x_3 -4 -10x_3 + 9x_3 = 7$\\\\\n$6x_3 - 10x_3 + 9x_3 = 7 - 16 + 4$\\\\\n$5x_3 = -5$\\\\\n$x_3 = -1$\\\\\n$x_1 - 3(-1) = 8$\\\\\n$x_1 = 8 - 3$\\\\\n$x_1 = 5$\\\\\n$2x_2 + 18(-1) = -9$\\\\\n$2x_2 = 6$\\\\\n$x_2 = 3$\\\\\n$(x_1, x_2, x_3) = (5, 3, -1)$\\\\\n\\textsf{Problem 2}\\\\\n1) $x_2 + 4x_3 = -5$\\\\\n2) $x_1 + 3x_2 + 5x_3 = -2$\\\\\n3) $3x_1 + 7x_2 + 7x_3 = 4$\\\\\n$x_2 = -5 - 4x_3$\\\\\n$x_1 = -3(-5 - 4x_3) - 5x_3$\\\\\n$x_1 = 15 + 4x_3 - 5x_3$\\\\\n$x_1 = -x_3 + 15$\\\\\n$3(-x_3 + 15) + 7(-5 - 4x_3) + 7x_3 = 4$\\\\\n$-3x_3 + 45 - 35 - 28x_3 + 7x_3 = 4$\\\\\n$-24x_3 = -6$\\\\\n$x_3 = \\frac{1}{4}$\\\\\n$x_2 + 4(\\frac{1}{4}) = -5$\\\\\n$x_2 = -6$\\\\\n$x_1 + 3(-6) + 5(\\frac{1}{4}) = -2$\\\\\n$x_1 = -2 + 18 - \\frac{5}{4}$\\\\\n$x_1 = 15\\frac{1}{4}$\\\\\n$(x_1, x_2, x_3) = (15\\frac{1}{4}, -6, \\frac{1}{4})$\\\\\n\n\\textbf{Row Operations}\\\\\nThere are 3 types of row operations that can be performed on an augmented matrix without changing the solutions.\\\\\n$\\cdot$ Interchanging two rows.\\\\\n$\\cdot$ Scaling a row.\\\\\n$\\cdot$ Adding/Subtracting a row to another.\\\\\n\n\\textbf{Reduced Row Echelon Form}\\\\\n$\\cdot$ All non-zero rows are above any rows of all zeros.\\\\\n$\\cdot$ The leading non-zero coefficient of a non-zero row is a 1 and is strictly to the right of the leading 1 in the row above it.\\\\\n$\\cdot$ Each leading 1 is the only non-zero entry in that column.\\\\\n\n\\textbf{Matrices Example}\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 1 & | & 0 \\\\\n\t0 & 2 & -8 & | & 8 \\\\\n\t-4 & 5 & 9 & | & 
-9\n\\end{bmatrix}\n\\]\\\\\n$4R_1 + R_3$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 1 & | & 0 \\\\\n\t0 & 2 & -8 & | & 8 \\\\\n\t0 & -3 & 13 & | & -9\n\\end{bmatrix}\n\\]\\\\\n$\\frac{1}{2}R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 1 & | & 0 \\\\\n\t0 & 1 & -4 & | & 4 \\\\\n\t0 & -3 & 13 & | & -9\n\\end{bmatrix}\n\\]\\\\\n$3R_2 + R_3$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 1 & | & 0 \\\\\n\t0 & 1 & -4 & | & 4 \\\\\n\t0 & 0 & 1 & | & 3 \\\\\n\\end{bmatrix}\n\\]\n$R_2 + 4R_3$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 1 & | & 0 \\\\\n\t0 & 1 & 0 & | & 16 \\\\\n\t0 & 0 & 1 & | & 3\n\\end{bmatrix}\n\\]\\\\\n$R_1 - R_3$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & -2 & 0 & | & -3 \\\\\n\t0 & 1 & 0 & | & 16 \\\\\n\t0 & 0 & 1 & | & 3\n\\end{bmatrix}\n\\]\\\\\n$R_1 + 2R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 0 & 0 & | & 29 \\\\\n\t0 & 1 & 0 & | & 16 \\\\\n\t0 & 0 & 1 & | & 3\n\\end{bmatrix}\n\\]\\\\\n$x_1 = 29 , x_2 = 16 , x_3 = 3$\\\\\n\\textsl{More examples}\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 0 & 1 & | & 32 \\\\\n\t0 & 1 & 2 & | & 16 \\\\\n\t0 & 0 & 0 & | & 0\n\\end{bmatrix}\n\\]\\\\\nIt is in reduced row echelon form.\\\\\nLet $x_3 = S$\\\\\n$x_1 = 32 - S$\\\\\n$x_2 = 16 - 2S$\\\\\n$x_3 = 0 + S$\\\\\n
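\n\\textbf{Code Sketch (Optional)}\\\\\nAs an illustration only (not part of the course materials, and using our own naming), the three elementary row operations can be automated. This Python sketch reduces an augmented matrix to reduced row echelon form with exact fractions and reproduces the worked example above:\n\\begin{verbatim}\nfrom fractions import Fraction\n\ndef rref(matrix):\n    # Reduce an augmented matrix to reduced row echelon form using the\n    # three elementary row operations: interchange, scale, add a multiple.\n    A = [[Fraction(x) for x in row] for row in matrix]\n    rows, cols = len(A), len(A[0])\n    pivot_row = 0\n    for col in range(cols - 1):  # skip the augmented column\n        pivot = next((r for r in range(pivot_row, rows)\n                      if A[r][col] != 0), None)\n        if pivot is None:\n            continue\n        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]  # interchange\n        A[pivot_row] = [x / A[pivot_row][col]\n                        for x in A[pivot_row]]           # scale to leading 1\n        for r in range(rows):  # clear the rest of the column\n            if r != pivot_row and A[r][col] != 0:\n                f = A[r][col]\n                A[r] = [a - f * b for a, b in zip(A[r], A[pivot_row])]\n        pivot_row += 1\n    return A\n\n# The worked example above: x1 = 29, x2 = 16, x3 = 3.\nprint(rref([[1, -2, 1, 0], [0, 2, -8, 8], [-4, 5, 9, -9]]))\n\\end{verbatim}\n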
\n\\section{Exercise 1.1}\nSolve each system in Exercise 1-4 by using elementary row operations on the equations or on the augmented matrix. Follow the systematic elimination procedure described in this section.\\\\\n\n1) $x_1 + 5x_2 = 7$, $-2x_1 - 7x_2 = -5$\\\\\n$x_1 = -5x_2 + 7$\\\\\n$-2(-5x_2 + 7) - 7x_2 = -5$\\\\\n$10x_2 - 14 - 7x_2 = -5$\\\\\n$3x_2 = 9$\\\\\n$x_2 = 3$\\\\\n$x_1 = -5(3) + 7$\\\\\n$x_1 = -15 + 7$\\\\\n$x_1 = -8$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 5 & | & 7 \\\\\n\t-2 & -7& | & -5\n\\end{bmatrix}\n\\]\\\\\n$R_2 + 2R_1$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 5 & | & 7 \\\\\n\t-2+2(1) & -7+2(5) & | & -5+2(7)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 5 & | & 7 \\\\\n\t0 & 3 & | & 9\n\\end{bmatrix}\n\\]\\\\\n$\\frac{1}{3}R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 5 & | & 7 \\\\\n\t\\frac{1}{3}(0) & \\frac{1}{3}(3) & | & \\frac{1}{3}(9)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 5 & | & 7 \\\\\n\t0 & 1 & | & 3\n\\end{bmatrix}\n\\]\\\\\n$R_1 - 5R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 - 5(0) & 5 - 5(1) & | & 7 - 5(3) \\\\\n\t0 & 1 & | & 3\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 0 & | & -8 \\\\\n\t0 & 1 & | & 3\n\\end{bmatrix}\n\\]\\\\\n$$(x_1,x_2) = (-8,3)$$\\\\\n\n2) $3x_1 + 6x_2 = -3$, $5x_1 + 7x_2 = 10$\\\\\n$3x_1 = -6x_2 - 3$\\\\\n$x_1 = -2x_2 - 1$\\\\\n$5(-2x_2 - 1) + 7x_2 = 10$\\\\\n$-10x_2 - 5 + 7x_2 = 10$\\\\\n$-3x_2 = 15$\\\\\n$x_2 = -5$\\\\\n$x_1 = -2(-5) - 1$\\\\\n$x_1 = 9$\\\\\n\\[\n\\begin{bmatrix}\n\t3 & 6 & | & -3 \\\\\n\t5 & 7 & | & 10\n\\end{bmatrix}\n\\]\\\\\n$\\frac{1}{3}R_1$\n\\[\n\\begin{bmatrix}\n\t\\frac{1}{3}(3) & \\frac{1}{3}(6) &|& \\frac{1}{3}(-3)\\\\\n\t5&7&|&10\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&-1\\\\\n\t5&7&|&10\n\\end{bmatrix}\n\\]\\\\\n$R_2-5R_1$\n\\[\n\\begin{bmatrix}\n\t1&2&|&-1\\\\\n\t5-5(1)&7-5(2)&|&10-5(-1)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&-1\\\\\n\t0&-3&|&15\n\\end{bmatrix}\n\\]\\\\\n$-\\frac{1}{3}R_2$\n\\[\n\\begin{bmatrix}\n\t1&2&|&-1\\\\\n\t-\\frac{1}{3}(0)&-\\frac{1}{3}(-3)&|&-\\frac{1}{3}(15)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&-1\\\\\n\t0&1&|&-5\n\\end{bmatrix}\n\\]\\\\\n$R_1-2R_2$\n\\[\n\\begin{bmatrix}\n\t1-2(0)&2-2(1)&|&-1-2(-5)\\\\\n\t0&1&|&-5\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&0&|&9\\\\\n\t0&1&|&-5\n\\end{bmatrix}\n\\]\\\\\n$$(x_1, x_2) = (9,-5)$$\\\\\n\n3) Find the point $(x_1, x_2)$ that lies on the line $x_1+2x_2=4$ and on the line $x_1-x_2=1$.\\\\\n$x_1 + 2x_2 = 4$\\\\\n$x_1-x_2=1$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 2 & | & 4 \\\\\n\t1 & -1 & | & 1\n\\end{bmatrix}\n\\]\\\\\n$R_2 - R_1$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 2 & | & 4 \\\\\n\t1-1 & -1-2 & | & 1 - 4\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 2 & | & 4\\\\\n\t0 & -3 & | & -3\n\\end{bmatrix}\n\\]\\\\\n$-\\frac{1}{3}R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 & 2 & | & 4\\\\\n\t-\\frac{1}{3}(0) & -\\frac{1}{3}(-3) & | & -\\frac{1}{3}(-3)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 2 & | & 4 \\\\\n\t0 & 1 & | & 1\n\\end{bmatrix}\n\\]\\\\\n$R_1 - 2R_2$\\\\\n\\[\n\\begin{bmatrix}\n\t1 - 2(0) & 2 - 2(1) & | & 4 - 2(1)\\\\\n\t0 & 1 & | & 1\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1 & 0 & | & 2\\\\\n\t0 & 1 & | & 1\n\\end{bmatrix}\n\\]\\\\\n$$(x_1, x_2) = (2, 1)$$\\\\\n\n4) Find the point of intersection of the lines $x_1+2x_2=-13$ and $3x_1-2x_2=1$.\\\\\n$x_1+2x_2=-13$\\\\\n$3x_1-2x_2 = 1$\\\\\n\\[\n\\begin{bmatrix}\n\t1&2&|&-13\\\\\n\t3&-2&|&1\n\\end{bmatrix}\n\\]\\\\\n$R_2-3R_1$\n\\[\n\\begin{bmatrix}\n\t1&2&|&-13\\\\\n\t3-3(1)&-2-3(2)&|&1-3(-13)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&-13\\\\\n\t0&-8&|&40\n\\end{bmatrix}\n\\]\\\\\n$-\\frac{1}{8}R_2$\n\\[\n\\begin{bmatrix}\n\t1&2&|&-13\\\\\n\t-\\frac{1}{8}(0)&-\\frac{1}{8}(-8)&|&-\\frac{1}{8}(40)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&-13\\\\\n\t0&1&|&-5\n\\end{bmatrix}\n\\]\\\\\n$R_1-2R_2$\n\\[\n\\begin{bmatrix}\n\t1-2(0)&2-2(1)&|&-13-2(-5)\\\\\n\t0&1&|&-5\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&0&|&-3\\\\\n\t0&1&|&-5\n\\end{bmatrix}\n\\]\\\\\n$$(x_1, x_2) = (-3, -5)$$\\\\\n\n17) Do the three lines $2x_1+3x_2=-1$, $6x_1+5x_2=0$, and $2x_1-5x_2=7$ have a common point of intersection? Explain.\\\\\n\\[\n\\begin{bmatrix}\n\t2&3&|&-1\\\\\n\t6&5&|&0\\\\\n\t2&-5&|&7\n\\end{bmatrix}\n\\]\n$R_2-3R_1$ and $R_3-R_1$\n\\[\n\\begin{bmatrix}\n\t2&3&|&-1\\\\\n\t6-3(2)&5-3(3)&|&0-3(-1)\\\\\n\t2-2&-5-3&|&7-(-1)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t2&3&|&-1\\\\\n\t0&-4&|&3\\\\\n\t0&-8&|&8\n\\end{bmatrix}\n\\]\\\\\n$R_3-2R_2$\n\\[\n\\begin{bmatrix}\n\t2&3&|&-1\\\\\n\t0&-4&|&3\\\\\n\t0-2(0)&-8-2(-4)&|&8-2(3)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t2&3&|&-1\\\\\n\t0&-4&|&3\\\\\n\t0&0&|&2\n\\end{bmatrix}\n\\]\\\\\nTherefore, the three lines do not have a common point of intersection, because the third row shows the system is inconsistent: $0=2$.\\\\\n\n18) Do the three planes $2x_1+4x_2+4x_3=4$, $x_2-2x_3=-2$, and $2x_1+3x_2=0$ have at least one common point of intersection? 
Explain.\\\\\n\\[\n\\begin{bmatrix}\n\t2&4&4&|&4\\\\\n\t0&1&-2&|&-2\\\\\n\t2&3&0&|&0\n\\end{bmatrix}\t\n\\]\\\\\n$R_3-R_1$\n\\[\n\\begin{bmatrix}\n\t2&4&4&|&4\\\\\n\t0&1&-2&|&-2\\\\\n\t2-2&3-4&0-4&|&0-4\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t2&4&4&|&4\\\\\n\t0&1&-2&|&-2\\\\\n\t0&-1&-4&|&-4\n\\end{bmatrix}\n\\]\\\\\n$R_3+R_2$\n\\[\n\\begin{bmatrix}\n\t2&4&4&|&4\\\\\n\t0&1&-2&|&-2\\\\\n\t0+0&-1+1&-4+(-2)&|&-4+(-2)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t2&4&4&|&4\\\\\n\t0&1&-2&|&-2\\\\\n\t0&0&-6&|&-6\n\\end{bmatrix}\n\\]\\\\\n$\\frac{1}{2}R_1$ and $-\\frac{1}{6}R_3$\n\\[\n\\begin{bmatrix}\n\t\\frac{1}{2}(2)&\\frac{1}{2}(4)&\\frac{1}{2}(4)&|&\\frac{1}{2}(4)\\\\\n\t0&1&-2&|&-2\\\\\n\t-\\frac{1}{6}(0)&-\\frac{1}{6}(0)&-\\frac{1}{6}(-6)&|&-\\frac{1}{6}(-6)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&2&|&2\\\\\n\t0&1&-2&|&-2\\\\\n\t0&0&1&|&1\n\\end{bmatrix}\n\\]\\\\\nTherefore, since the echelon form has a leading entry in every row and no row of the form $[\\,0\\;0\\;0\\;|\\;b\\,]$ with $b\\neq0$, the corresponding system is consistent and has at least one solution.\\\\\n\nIn Exercise 19-22, determine the value(s) of $h$ such that the matrix is the augmented matrix of a consistent linear system.\\\\\n\n19) \\[\n\\begin{bmatrix}\n\t1 & h &|& 4\\\\\n\t3 & 6 &|& 8\n\\end{bmatrix}\n\\]\n$R_2-3R_1$\n\\[\n\\begin{bmatrix}\n\t1&h&|&4\\\\\n\t3-3(1)&6-3(h)&|&8-3(4)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&h&|&4\\\\\n\t0&6-3h&|&-4\n\\end{bmatrix}\n\\]\\\\\nIf $h=2$, then the system has no solution, because $6-3(2)=0$ cannot equal $-4$. Otherwise, if $h\\neq2$, the system has a solution.\\\\\n\n20) \\[\n\\begin{bmatrix}\n\t1&h&|&-5\\\\\n\t2&-8&|&6\n\\end{bmatrix}\n\\]\n$R_2-2R_1$\n\\[\n\\begin{bmatrix}\n\t1&h&|&-5\\\\\n\t2-2(1)&-8-2(h)&|&6-2(-5)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&h&|&-5\\\\\n\t0&-2h-8&|&16\n\\end{bmatrix}\n\\]\\\\\nIf $h=-4$, then the system has no solution, because $-2(-4)-8=0$ cannot equal $16$. Otherwise, if $h\\neq-4$, the system has a solution.\\\\\n\n21) \\[\n\\begin{bmatrix}\n\t1&4&|&-2\\\\\n\t3&h&|&-6\n\\end{bmatrix}\n\\]\n$R_2-3R_1$\n\\[\n\\begin{bmatrix}\n\t1&4&|&-2\\\\\n\t3-3(1)&h-3(4)&|&-6-3(-2)\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&4&|&-2\\\\\n\t0&h-12&|&0\n\\end{bmatrix}\n\\]\\\\\nThe system is consistent for every value of $h$: if $h=12$ the second row is all zeros and there are infinitely many solutions, and if $h\\neq12$ the solution is unique.\\\\\n\n22) \\[\n\\begin{bmatrix}\n\t-4&12&|&h\\\\\n\t2&-6&|&-3\n\\end{bmatrix}\n\\]\n$R_2+\\frac{1}{2}R_1$\n\\[\n\\begin{bmatrix}\n\t-4&12&|&h\\\\\n\t2+\\frac{1}{2}(-4)&-6+\\frac{1}{2}(12)&|&-3+\\frac{h}{2}\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t-4&12&|&h\\\\\n\t0&0&|&-3+\\frac{h}{2}\n\\end{bmatrix}\n\\]\\\\\nThe system is consistent if and only if $h=6$.\\\\\n\n23) \\\\\n\na) Every elementary row operation is reversible.\\\\\nTrue; interchanging is its own inverse, and scaling (by a non-zero constant) and adding/subtracting a multiple of a row are reversible.\\\\\n\nb) A $5\\times6$ matrix has six rows.\\\\\nFalse, a $5\\times6$ matrix has five rows and six columns.\\\\\n\nc) The solution set of a linear system involving variables $x_1, \\cdots, x_n$ is a list of numbers $(s_1,\\cdots,s_n)$ that makes each equation in the system a true statement when the values $s_1,\\cdots,s_n$ are substituted for $x_1,\\cdots, x_n$ respectively.\\\\\nFalse, the solution set is the set of all such lists, not a single list.\\\\\n\nd) Two fundamental questions about a linear system involve existence and uniqueness.\\\\\nTrue, the two fundamental questions are whether a solution exists and, if so, whether it is unique.
\n24) \\\\\n\na) Two matrices are row equivalent if they have the same number of rows.\\\\\nFalse; row equivalence requires a sequence of row operations that transforms one matrix into the other.\\\\\n\nb) Elementary row operations on an augmented matrix never change the solution set of the associated linear system.\\\\\nTrue; each elementary row operation is reversible, so it preserves the solution set.\\\\\n\nc) Two equivalent linear systems can have different solution sets.\\\\\nFalse; equivalent systems have the same solution set by definition.\\\\\n\nd) A consistent system of linear equations has one or more solutions.\\\\\nTrue, a consistent system has at least one solution.\\\\\n\n25) Find an equation involving $g, h,$ and $k$ that makes this augmented matrix correspond to a consistent system:\n\\[\n\\begin{bmatrix}\n\t1&-4&7&|&g\\\\\n\t0&3&-5&|&h\\\\\n\t-2&5&-9&|&k\n\\end{bmatrix}\n\\]\\\\\n$R_3+2R_1$\n\\[\n\\begin{bmatrix}\n\t1&-4&7&|&g\\\\\n\t0&3&-5&|&h\\\\\n\t-2+2(1)&5+2(-4)&-9+2(7)&|&k+2g\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&-4&7&|&g\\\\\n\t0&3&-5&|&h\\\\\n\t0&-3&5&|&k+2g\n\\end{bmatrix}\n\\]\\\\\n$R_3+R_2$\n\\[\n\\begin{bmatrix}\n\t1&-4&7&|&g\\\\\n\t0&3&-5&|&h\\\\\n\t0&-3+3&5+(-5)&|&k+2g+h\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&-4&7&|&g\\\\\n\t0&3&-5&|&h\\\\\n\t0&0&0&|&k+2g+h\n\\end{bmatrix}\n\\]\\\\\nTherefore, this system is consistent if and only if $k+2g+h = 0$.\\\\\n\n26) Suppose the system below is consistent for all possible values of $f$ and $g$. What can you say about the coefficients $c$ and $d$? Justify your answer.\\\\\n$2x_1+4x_2=f$\\\\\n$cx_1+dx_2=g$\\\\\n\\[\n\\begin{bmatrix}\n\t2&4&|&f\\\\\n\tc&d&|&g\n\\end{bmatrix}\n\\]\\\\\n$\\frac{1}{2}R_1$\n\\[\n\\begin{bmatrix}\n\t1&2&|&\\frac{f}{2}\\\\\n\tc&d&|&g\n\\end{bmatrix}\n\\]\\\\\n$R_2 = R_2-cR_1$\n\\[\n\\begin{bmatrix}\n\t1&2&|&\\frac{f}{2}\\\\\n\tc-c(1)&d-c(2)&|&g-c\\frac{f}{2}\n\\end{bmatrix}=\n\\begin{bmatrix}\n\t1&2&|&\\frac{f}{2}\\\\\n\t0&d-2c&|&g-c\\frac{f}{2}\n\\end{bmatrix}\n\\]\\\\\nIf $d-2c=0$, then choosing $g\\neq c\\frac{f}{2}$ makes the system inconsistent, so consistency for all $f$ and $g$ forces $d-2c\\neq0$, i.e. $d\\neq2c$.\\\\\n\nIn Exercises 29-32, find the elementary row operation that transforms the first matrix into the second, and then find the reverse row operation that transforms the second matrix into the first.\\\\\n\n29) $R_1$ swap with $R_3$; the reverse operation is the same swap.\\\\\n\\[\n\\begin{bmatrix}\n0&-2&|&5\\\\\n1&3&|&-5\\\\\n3&-1&|&6\n\\end{bmatrix}\n\\begin{bmatrix}\n3&-1&|&6\\\\\n1&3&|&-5\\\\\n0&-2&|&5\n\\end{bmatrix}\n\\]\\\\\n\n\n30) $-\\frac{1}{5}R_3$; the reverse operation is $-5R_3$.\\\\\n\\[\n\\begin{bmatrix}\n1&3&|&-4\\\\\n0&-2&|&6\\\\\n0&-5&|&10\n\\end{bmatrix}\n\\begin{bmatrix}\n1&3&|&-4\\\\\n0&-2&|&6\\\\\n0&1&|&-2\n\\end{bmatrix}\n\\]\\\\\n\n\\section{Exercise 1.2}\n\nIn Exercises 1 and 2, determine which matrices are in reduced echelon form and which others are only in echelon form.\\\\\n\n1) a) \\[\n\\begin{bmatrix}\n1&0&0&0\\\\\n0&1&0&0\\\\\n0&0&1&1\n\\end{bmatrix}\n\\]\nReduced echelon form because each leading 1 is the only non-zero entry in its column.\\\\\n\nb) \\[\n\\begin{bmatrix}\n1&0&1&0\\\\\n0&1&1&0\\\\\n0&0&0&1\n\\end{bmatrix}\n\\]\nReduced echelon form: each leading 1 is strictly to the right of the leading 1 in the row above it, and each pivot column has zeros in all its other entries (column 3 is not a pivot column).\\\\\n\nc) \\[\n\\begin{bmatrix}\n1&0&0&0\\\\\n0&1&1&0\\\\\n0&0&0&0\\\\\n0&0&0&1\n\\end{bmatrix}\n\\]\nNot echelon form because the zero row is not below all of the non-zero rows.\\\\\n\nd) \\[\n\\begin{bmatrix}\n1&1&0&1&1\\\\\n0&2&0&2&2\\\\\n0&0&0&3&3\\\\\n0&0&0&0&4\n\\end{bmatrix}\n\\]\nEchelon form but not reduced, since the leading entries are not all 1.\\\\\n\\end{document}", "meta": {"hexsha": "2292f3e79e00c9bd84b25f494cbce7cb1598305e", "size": 14365, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/Week1/Week1Notes.tex", 
"max_stars_repo_name": "aytona/LinearAlgebra", "max_stars_repo_head_hexsha": "2a278b2957bc12456eb4bbc3f4d13b3c06d8d8d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/Week1/Week1Notes.tex", "max_issues_repo_name": "aytona/LinearAlgebra", "max_issues_repo_head_hexsha": "2a278b2957bc12456eb4bbc3f4d13b3c06d8d8d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/Week1/Week1Notes.tex", "max_forks_repo_name": "aytona/LinearAlgebra", "max_forks_repo_head_hexsha": "2a278b2957bc12456eb4bbc3f4d13b3c06d8d8d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.7592847318, "max_line_length": 260, "alphanum_fraction": 0.5468847894, "num_tokens": 7584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760727, "lm_q2_score": 0.7490872075132153, "lm_q1q2_score": 0.554455137345938}}
{"text": "% 08VectorFields.tex\n% Fund Science! & Help Ernest finish his Physics Research! : quantum super-A-polynomials - a thesis by Ernest Yeung\n%                                               \n% http://igg.me/at/ernestyalumni2014                                                                             \n%                                                              \n% Facebook     : ernestyalumni  \n% github       : ernestyalumni                                                                     \n% gmail        : ernestyalumni                                                                     \n% google       : ernestyalumni                                                                                   \n% linkedin     : ernestyalumni                                                                             \n% tumblr       : ernestyalumni                                                               \n% twitter      : ernestyalumni                                                             \n% youtube      : ernestyalumni                                                                \n% indiegogo    : ernestyalumni                                                                        \n%\n% Ernest Yeung was supported by Mr. and Mrs. C.W. Yeung, Prof. Robert A. Rosenstone, Michael Drown, Arvid Kingl, Mr. and Mrs. Valerie Cheng, and the Foundation for Polish Sciences, Warsaw University.                  \n% \n% Caltech Honor Code and the spirit of Open Source/Creative Commons \n%\n\n\n\n\n\\exercisehead{4.1} \nConsider $ \\begin{aligned} & \\quad \\\\ \n  & 1 : \\mathbb{R}^n \\to \\mathbb{R}^n \\\\ & x^i(x) = x^i \\end{aligned}$, \\quad \\, smooth structure on $\\mathbb{R}^n$, that's open.  \n\nConsider $\\begin{aligned} & \\quad \\\\ \n  & F: T\\mathbb{R}^n \\to \\mathbb{R}^{2n} \\\\\n& F(x^1 \\dots x^n, v^1 \\dots v^n ) = (x^1 \\dots x^n, v^1 \\dots v^n) \\end{aligned}$ \\\\\n$F= F^{-1}$, so clearly $F = 1_{T\\mathbb{R}^n} $ is cont., bijective, and it's inverse cont. and smooth.  $F$ diffeomorphism.  \n\n\\exercisehead{4.2}  $F:M \\to N$.  Consider (3.6)\n\n\\[\n\\begin{gathered}\n  \\begin{aligned}\n    & (U , \\varphi) \\subset M \\quad \\quad \\, \\varphi = (x^1 \\dots x^m) \\\\ \n    & (V, \\psi ) \\subset N \\quad \\quad \\, \\psi = (y^1 \\dots y^n) \n\\end{aligned}  \\quad \\quad \\quad \\, \\begin{aligned} & X = X^i \\frac{ \\partial }{ \\partial x^i } \\\\ \n    & Y = Y^j \\frac{ \\partial }{ \\partial y^j } \\end{aligned}\n\\end{gathered}\n\\]\n\n\\[\n(F_* X)(f) = Y^j \\frac{ \\partial }{ \\partial y^j} f = X(fF) = X^i \\frac{ \\partial }{ \\partial x^i } fF = X^i \\frac{ \\partial (f\\psi^{-1})}{ \\partial y^j} \\frac{ \\partial }{ \\partial x^i } (\\psi F^j \\varphi^{-1})(\\varphi(p)) = X^i \\frac{ \\partial F^j}{ \\partial x^i }(p) \\frac{ \\partial f}{ \\partial y^j}\n\\]\nwhere\n\\[\nfF = f\\psi^{-1} \\psi F\\varphi^{-1} \\varphi \\Longrightarrow fF(p) = (f\\psi^{-1})(y) (\\psi F\\varphi^{-1})(\\varphi(p))\n\\]\n(a serious case of abuse of notation)\n\nFor $F_*X$, \n\\[\nY^j = X^i \\frac{ \\partial F^j}{ \\partial x^i}\n\\]\n\n\\[\nF_* \\left. \\frac{ \\partial }{ \\partial x^i } \\right|_p = F_* \\frac{ \\partial }{ \\partial x^i} = \\delta_i^{ \\, \\, k } \\frac{ \\partial F^j}{ \\partial x^k} \\frac{ \\partial }{ \\partial y^j} = \\frac{ \\partial F^j}{ \\partial x^i } \\frac{ \\partial }{ \\partial y^j} = \\frac{ \\partial F^j}{ \\partial x^i}(p) \\left. 
\\frac{ \\partial }{ \\partial y^j } \\right|_{F(p)}\n\\]\nwith $X^k = \\delta_i^{\\, \\, k}$\n\n\\[\n\\begin{aligned}\n  &  F_* : TM \\to TN \\\\\n  & F_*( x^1 \\dots x^n, v^1 \\dots v^n) = (y^1(x) \\dots y^n(x), v^i \\frac{ \\partial F^1}{ \\partial x^i} \\dots v^i \\frac{ \\partial F^n}{ \\partial x^i} )\n\\end{aligned}\n\\]\nClearly $F_*$ smooth since $F$ smooth.  \n\n\\begin{lemma}[4.8] Suppose smooth $F: M \\to N$, \\, $\\begin{aligned} & \\quad \\\\ & Y \\in \\tau(M) \\\\ & Z \\in \\tau(N) \\end{aligned}$ \\\\\n$Y,Z$, $F$-related iff $\\forall \\, $ smooth $\\mathbb{R}$-valued $f$ on open $V \\subset N$, \n\\[\nY(fF) = (Zf) F \\quad \\quad \\quad (4.4) \n\\]\n\\end{lemma}\n\n\\begin{proof}\n  $\\forall \\, p \\in M$, $\\forall \\, $ smooth $\\mathbb{R}$-valued $f$, $f$ defined near $F(p)$\n\\[\n\\begin{gathered}\n  Y(fF)(p) = Y_p(fF) = (F_* Y_p)f \\quad \\quad \\quad (F_*Y)f = Y(fF) \\\\ \n  (Zf)F(p) = (Zf)(F(p)) = Z_{F(p)}f  \n\\end{gathered}\n\\]\nComparing the two lines, $(Zf)F(p) = Y(fF)(p)$ holds for all $p$ and all such $f$ iff $Z_{F(p)}f = (F_*Y_p)f$ for all $p$ and $f$, i.e.\n\\[\n(Zf)F = Y(fF) \\Longleftrightarrow Z = F_* Y \\text{ i.e. iff $Y, Z$ \\, $F$-related }\n\\] \n\n\\end{proof}\n\n\n\n\\subsubsection*{ Vector Fields on a Manifold with Boundary}\n\n\\subsection*{ Lie Brackets }\n\n\\begin{lemma}[4.12] Lie bracket of smooth vector fields $V,W$, $\\begin{aligned} & \\quad \\\\\n    & [V, W ] : C^{\\infty} M \\to C^{\\infty} M \\\\ \n    & [V, W ] f = VW f - WV f \\end{aligned}$ \\quad is a smooth vector field.  \n\\end{lemma}\n\n\\begin{proof}\n  By Prop. 4.7.  ($M$ smooth, map $\\mathcal{Y} : C^{\\infty}M \\to C^{\\infty} M$ is a derivation iff $\\mathcal{Y} f = Yf$, $Y$ some smooth vector field $Y \\in \\tau(M)$.)  \\\\\nSuffices to show $[V,W]$ derivation of $C^{\\infty}M$  \n\\[\n\\begin{gathered}\n  [V,W] (fg) = V(W(fg)) -  W(V(fg)) = V(fWg + gWf) - W( fVg  + gVf) = \\\\\n   = VfWg + fVW g + Vg Wf + gVWf - WfVg - fWVg - WgVf - gWVf  = \\\\\n   =fVWg + gVWf - fWV g - gWVf = f[V,W] g + g[V,W]f\n\\end{gathered}\n\\]\n\\end{proof}\n\n\nextremely useful coordinate formula for Lie bracket \n\\begin{lemma}[4.13]\n  Let $\\begin{aligned} & \\quad \\\\\n    & V = V^i \\frac{ \\partial }{ \\partial x^i} \\\\ \n    & W = W^j \\frac{ \\partial }{ \\partial x^j} \\end{aligned}$ \\quad \\quad $\\begin{aligned}\n    & [V,W] = \\left( V^i \\frac{ \\partial W^j}{ \\partial x^i } - W^i \\frac{ \\partial V^j}{ \\partial x^i } \\right) \\frac{ \\partial }{ \\partial x^j } \\quad \\quad \\quad (4.5) \\\\ \n    & [V,W]  = (VW^j - WV^j) \\frac{\\partial }{ \\partial x^j} \\quad \\quad \\quad (4.6) \\end{aligned}$\n\n\\end{lemma}\n\n\\begin{proof}\n  $[V,W]$ smooth vector field already, its values are determined locally $\\left. ([V,W] f ) \\right|_U = [V,W] ( \\left. f \\right|_U )$  \\\\\nIt suffices to compute in a single smooth chart.  \n\\[\n\\begin{gathered}\n[V,W] f = V^i \\frac{ \\partial }{ \\partial x^i } \\left( W^j \\frac{ \\partial f}{ \\partial x^j} \\right) - W^j \\frac{ \\partial }{ \\partial x^j} \\left( V^i \\frac{ \\partial f}{ \\partial x^i }\\right) = V^i \\frac{ \\partial W^j}{ \\partial x^i } \\frac{ \\partial f}{ \\partial x^j} + V^i W^j \\frac{ \\partial^2 f}{ \\partial x^i \\partial x^j } - W^j \\frac{ \\partial V^i }{ \\partial x^j} \\frac{ \\partial f}{ \\partial x^i } - W^j V^i \\frac{ \\partial^2 f}{ \\partial x^j  \\partial x^i } = \\\\\n= \\left( V^i \\frac{ \\partial W^j}{ \\partial x^i } - W^i \\frac{ \\partial V^j}{ \\partial x^i} \\right) \\frac{ \\partial f}{ \\partial x^j}\n\\end{gathered}\n\\]\n\n\\end{proof}\n
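\nAs a quick symbolic sanity check of the coordinate formula (4.5) (not from the text; the component functions below are chosen arbitrarily), one can compare $VWf - WVf$ against the components $VW^j - WV^j$ in SymPy:\n\\begin{verbatim}\nfrom sympy import symbols, Function, diff, simplify\n\nx, y = symbols('x y')\nf = Function('f')(x, y)\n\nV = (x*y, y)     # V = xy d/dx + y d/dy\nW = (x, x + y)   # W = x d/dx + (x+y) d/dy\n\ndef apply_field(X, g):\n    # Xg = X^1 dg/dx + X^2 dg/dy\n    return X[0]*diff(g, x) + X[1]*diff(g, y)\n\n# left side: [V,W]f = VWf - WVf\nlhs = apply_field(V, apply_field(W, f)) - apply_field(W, apply_field(V, f))\n\n# right side: components VW^j - WV^j from the coordinate formula\nbracket = (apply_field(V, W[0]) - apply_field(W, V[0]),\n           apply_field(V, W[1]) - apply_field(W, V[1]))\nrhs = apply_field(bracket, f)\n\nprint(simplify(lhs - rhs))   # 0: the second derivatives of f cancel\n\\end{verbatim}\n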
\n\\exercisehead{4.6}\n\nFor Lemma 4.15 (Properties of the Lie Bracket), part (d), \n\nthe point is to use the derivative properties of the vector fields.  \n\n\\[\n\\begin{gathered}\n  \\left[fV, gW \\right] h = fV( gW h ) - gW (fVh) = (fVg)(Wh) + g(fV(Wh)) - (gWf)(Vh) - f(gW(Vh)) = \\\\\n  = fg(VW)h - fg(WV)h + f(Vg)W h - g(Wf)Vh = fg[V,W]h + f(Vg)Wh - g(Wf)Vh\n\\end{gathered}\n\\]\n\n\\begin{proposition}[4.16] (Naturality of the Lie Bracket)\n  Let smooth $F:M \\to N$, \\quad $\\begin{aligned} & \\quad \\\\ & V_1, V_2 \\in \\tau(M) \\\\ & W_1, W_2 \\in \\tau(N) \\end{aligned}$, \\quad $V_i$ \\, $F$-related to $W_i$, $i=1,2$.  \n\nThen $[V_1, V_2 ]$ $F$-related to $[W_1, W_2]$\n\n\\end{proposition}\n\n\\begin{proof}\nUse Lemma 4.8, and given $V_i$, \\, $F$-related to $W_i$  \n\n\\[\n\\begin{aligned}\n  & V_1 V_2 (fF) = V_1 ((W_2 f) F) = (W_1 W_2 f)F \\\\ \n  & V_2 V_1 (fF) = (W_2 W_1 f) F\n\\end{aligned} \\quad \\quad \\Longrightarrow [V_1, V_2](fF) = ( [W_1, W_2] f)F\n\\]\nSo $[V_1, V_2]$ , \\, $F$-related to $[W_1, W_2]$\n\\end{proof}\n\n\\begin{corollary}[4.17]\n  Suppose $F: M \\to N$ diffeomorphism, $V_1, V_2 \\in \\tau(M)$ \\\\\nThen $F_*[V_1, V_2] = [F_* V_1, F_* V_2 ]$\n\n\\end{corollary}\n\n\\begin{proof} $F$ diffeomorphism.  Then Lemma 4.9, $\\exists \\, $ push-forward (or alternatively, by Prop. 4.16, $W_i = F_* V_i$ i.e. $F$-related).  \n\\[\nF_*[V_1, V_2 ] = [W_1, W_2 ] =  [F_* V_1, F_*V_2]\n\\]\n\\end{proof}\n\n\\subsection*{ The Lie Algebra of a Lie Group }\n\n\\[\nL_g = m i_g \n\\]\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em]\n  {\n%    U_i \\subset \\mathbb{R}^{n+1} - 0 &  \\\\\n%    V_i \\subset \\mathbb{R}P^n  & \\mathbb{R}^n  \\\\ };\nG & G \\times G & G   \\\\  };\n%  \\path[-stealth]\n  \\path[->]\n  (m-1-1) edge node [right] {$i_g$} (m-1-2)\n  (m-1-2) edge node [below] {$m$} (m-1-3);\n\\end{tikzpicture}\n\n$i_g(h) = (g,h)$, $m$ is multiplication, follows $L_g$ smooth.  \n\n$L_g$ diffeomorphism of $G$, since $L_{g^{-1}}$ smooth inverse. \n\n$\\forall \\, $ 2 pts. $g_1, g_2 \\in G$, \\, $\\exists \\, ! \\, L_{g_2 g_1^{-1}}$ s.t. $L_{g_2 g_1^{-1}} g_1 = g_2$; many important properties of Lie groups follow from the $L_{g_2 g_1^{-1}}$ being diffeomorphisms.  \n\nvector field $X$ on $G$ \\emph{left invariant} if \n\\begin{equation}\n  (L_g)_* X_{g'} = X_{gg'} \\quad \\quad \\forall \\, g, g' \\in G \\quad \\quad \\quad (4.8)\n\\end{equation}\n\n$L_g$ diffeomorphism.  \n\\[\n(L_g)_*(aX+ bY) = a(L_g)_*X + b(L_g)_* Y\n\\]\nset of all smooth left-invariant vector fields on $G$ is a linear subspace of $\\tau(M)$, \\emph{ and } closed under Lie bracket.  \n\n\\begin{lemma}[4.18] \nLet $G$ Lie group, suppose $X,Y$ smooth left-invariant vector fields on $G$ \\\\\nThen $[X,Y]$ also left invariant.  
\n\\end{lemma}\n\n\\begin{proof}\n  Given $\\begin{aligned} & \\quad \\\\ & (L_g)_*X = X \\\\ & (L_g)_* Y = Y \\end{aligned}$ by def. of left-invariance.  Then by Corollary 4.17, $(L_g)_*[X,Y] = [(L_g)_*X, (L_g)_*Y] = [X,Y]$, so $[X,Y]$ left invariant.  \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection*{Vector Fields on Manifolds}\n\n\\begin{lemma}[8.6] \\textbf{(Extension Lemma for Vector Fields)} \\\\\n  $M$ smooth manifold with or without boundary \\\\\n$A \\subseteq M$ closed subset. \\\\\nSuppose $X$ smooth vector field along $A$.  \\\\\nGiven open $U \\supset A$, $\\exists \\, $ smooth global vector field $\\widetilde{X}$ on $M$ s.t. $\\left. \\widetilde{X} \\right|_A = X$ and $\\text{supp}{\\widetilde{X}} \\subseteq U$\n\\end{lemma}\n\n\\exercisehead{8.9}\n\n\\begin{enumerate}\n\\item[(a)] $\\forall \\, p \\in M$, \\, $\\begin{aligned} & \\quad \\\\\n  & X_p = X^i(p) \\frac{ \\partial }{ \\partial x^i} \\\\\n  & Y_p = Y^i(p) \\frac{ \\partial }{ \\partial x^i } \\end{aligned}$ \\\\\n$f,g \\in C^{\\infty}{(M)}$\n\n\\[\n\\begin{aligned}\n  & (fX)_p = f(p)X_p = f(p) X^i(p) \\frac{ \\partial }{ \\partial x^i } \\\\ \n  & (gY)_p = g(p)Y_p = g(p)Y^i(p) \\frac{ \\partial }{ \\partial x^i }\n\\end{aligned}\n\\]\n\\[\n(fX + gY)_p  = f(p)X_p + g(p)Y_p = f(p) X^i(p) \\frac{ \\partial}{ \\partial x^i} + g(p)Y^i(p) \\frac{ \\partial }{ \\partial x^i } = (f(p)X^i(p) + g(p) Y^i(p)) \\frac{ \\partial }{ \\partial x^i }\n\\]\n$f(p)X^i(p) + g(p)Y^i(p)$ smooth so $(fX+gY)_p$ smooth.\n\\item[(b)] Let $g=f$ \n\\[\n(fX+fY)_p = f(p)X_p + f(p)Y_p = f(p)X^i(p) \\frac{ \\partial }{ \\partial x^i} + f(p)Y^i(p)\\frac{ \\partial}{\\partial x^i} = f(p)(X^i(p) + Y^i(p))\\frac{ \\partial }{ \\partial x^i } = (f(X+Y))_p\n\\]\nLet $Y=X$ so $\\forall \\, p$, \n\\[\n(fX+gX)_p  = f(p)X_p + g(p)X_p = f(p)X^i(p)\\frac{ \\partial}{ \\partial x^i} +g(p)X^i(p) \\frac{ \\partial }{ \\partial x^i} = (f(p) + g(p))X^i(p)\\frac{ \\partial }{ \\partial x^i } = ((f+g)X)_p\n\\]\n\n\\[\n(g(fX))_p = g(p)(fX)_p= g(p)f(p)X_p = ((gf)X)_p\n\\]\nLet $f=1$, $g=0$, $1X=X$\n\\end{enumerate}\n\n\n\n\n\n\\hrulefill\n\n\\subsubsection*{Local and Global Frames}\n\n\\subsubsection*{Vector Fields as Derivations of $C^{\\infty}(M)$}\n\nif $X \\in \\mathfrak{X}(M)$, smooth $f$ defined on open $U\\subseteq M$, obtain\n\\[\n\\begin{aligned}\n  & Xf : U \\to \\mathbb{R} \\\\  \n  & (Xf)(p) = X_p f\n\\end{aligned}\n\\]\nFrom J. Lee: (Be careful not to confuse the notations $fX$ and $Xf$: the former is the smooth \\emph{vector field} on $U$ obtained by multiplying $X$ by $f$, while the latter is the real-valued \\emph{function} on $U$ obtained by applying the vector field $X$ to the smooth function $f$)\n\n\\begin{proposition}[8.14]\n$X: M \\to TM$ \n\nequivalent\n\\begin{enumerate}\n\\item[(a)] $X$ smooth \n\\item[(b)] $\\forall \\, f \\in C^{\\infty}(M)$, $Xf $ smooth on $M$ \n\\item[(c)] $\\forall \\, $ open $U \\subseteq M$, $\\forall \\, f \\in C^{\\infty}(U)$, $Xf \\in C^{\\infty}(U)$\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n  (a) $\\Longrightarrow $ (b), assume $X$ smooth, \\\\\nlet $f\\in C^{\\infty}(M)$ \\\\\n$M$ manifold, $\\forall \\, p \\in M$, choose smooth $x^i$ on open $U\\ni p$ \\\\\nThen $\\forall \\, x \\in U$, \n\\[\nXf(x) = \\left( \\left. X^i(x) \\frac{ \\partial }{ \\partial x^i} \\right|_x \\right) f = X^i(x) \\frac{ \\partial f}{ \\partial x^i}(x)\n\\]\n$X^i$ smooth on $U$ by Prop. 
8.1, $Xf$ smooth in $U$\n\\end{proof}\n\n\\subsection*{Vector Fields and Smooth Maps}\n\n\\begin{proposition}[8.16]\n  Suppose smooth $F:M\\to N$, \\quad \\, $\\begin{aligned} & \\quad \\\\ \n    & X \\in \\mathfrak{X}(M) \\\\\n    & Y \\in \\mathfrak{X}(N) \\end{aligned}$ \\\\\n\nThen $X,Y$ $F$-related iff $\\forall \\, $ smooth $h$, defined on open $V \\subset N$ \n\\[\nX(hF) = (Yh)F\n\\]\n\n\n\n\\end{proposition}\n\n\\begin{proof}\n$\\forall \\, p \\in M$, $\\forall \\, $ smooth $h$ defined on open $V \\ni F(p)$ \n\\[\n\\begin{aligned}\n  & X(hF)(p) = X_p(hF) = dF_p(X_p)h \\\\ \n  & (Yh)F(p) = Yh(F(p)) = Y_{F(p)}h\n\\end{aligned}\n\\]\n$X(hF) = (Yh)F$ \\quad \\, $\\forall \\, h \\in C^{\\infty}(N)$ iff $dF_p(X_p) = Y_{F(p)}$ \\, $\\forall \\, p$\n\n\n\n\n\\end{proof}\n\n\n\n\n\\begin{proposition}[8.19]\n  smooth $M,N$, diffeomorphism $F:M \\to N$ \\\\\n$\\forall \\, X \\in \\mathfrak{X}(M)$, $\\exists \\, !$ smooth vector field on $N$ $F$-related to $X$\n\\end{proposition}\n\n\\begin{proof}\n$\\forall \\, p \\in M$, $F(p) = q\\in N$  \\\\\n\ndefine $Y$ by \n\\[\n\\begin{aligned}\n  & Y_q = dF_{F^{-1}(q)}(X_{F^{-1}(q)} ) = dF_p(X_p) \\\\ \n  \\Longrightarrow & Y_{F(p)} = dF_p(X_p) \n\\end{aligned}\n\\]\n$Y: N \\to TN$ \n\\[\nY = N \\xrightarrow{F^{-1}} M \\xrightarrow{X} TM \\xrightarrow{dF} TN\n\\]\n$Y = dF \\circ X \\circ F^{-1}$\n\n$dF, X, F^{-1}$ smooth.  $Y$ smooth. \n\\end{proof}\n\\textbf{pushforward} of $X$ by $F$, denote $F_*X$ \n\\begin{equation}\n  (F_*X)_q = dF_{F^{-1}(q)}(X_{F^{-1}(q) } ) \\quad \\quad \\quad \\, (8.7)\n\\end{equation}\n$(F_*X)_q = dF_p(X_p)$\n\n\n\n\\begin{corollary}[8.21]\n  Suppose diffeomorphism $F: M \\to N$, $X \\in \\mathfrak{X}(M)$ \\\\\n  $\\forall \\, h \\in C^{\\infty}(N)$\n\\[\n((F_*X)h) \\circ F = X(h\\circ F)\n\\]\n\\end{corollary}\n\n\n\\subsubsection*{Vector Fields and Submanifolds}\n\n\n\\subsection*{Lie Brackets}\n\n\\begin{proposition}[8.26] \\textbf{(Coordinate Formula for the Lie Bracket)}\n  \\begin{equation}\n    [X,Y] = \\left( X^i \\frac{ \\partial Y^j}{ \\partial x^i } - Y^i \\frac{ \\partial X^j}{ \\partial x^i } \\right) \\frac{\\partial}{ \\partial x^j }  \\quad \\quad \\quad \\, (8.8)\n\\end{equation}\n\\end{proposition}\n\n\\subsection*{The Lie Algebra of a Lie Group}\n\nRecall that $G$ acts smoothly and transitively on itself by left translation:\n\\[\nL_g(h) = gh\n\\]\n\n$X$ on $G$ \\textbf{left-invariant} if \n\\begin{equation}\nd(L_g)_{g'}(X_{g'}) = X_{gg'} \\quad \\, \\forall \\, g, g' \\in G \\quad \\quad \\quad \\, (8.12)\n\\end{equation}\n\n$L_g$ diffeomorphism, so \n\\[\n(L_g)_*X = X \\quad \\quad \\, \\forall \\, g \\in G\n\\]\n\n\n\n\n\n\\textbf{Example 8.36 (Lie Algebras)}\n\n\\begin{enumerate}\n\\item[(a)]\n\\item[(b)]\n\\item[(c)]\n\\item[(d)]\n\\item[(e)]\n\\item[(f)] $\\forall \\, $ vector $V$ becomes Lie algebra if $[,]=0$ \\\\\nsuch a Lie algebra is \\textbf{abelian}\n\\end{enumerate}\n\n$\\text{Lie}{G}$ Lie algebra of all smooth left-invariant vector fields on Lie Group $G$ \\textbf{ Lie algebra of $G$ }\n\n\n\n\\begin{theorem}[8.37] $\\begin{aligned}  & \\quad \\\\\n    & \\epsilon : \\text{Lie}{(G)} \\to T_eG \\\\ \n    & \\epsilon(X) = X_e \\end{aligned}$\n\n$\\epsilon$ vector space isomorphism\n\\end{theorem}\n\n\\begin{proof}\n  If $\\epsilon(X) = X_e = 0$ for some $X \\in \\text{Lie}{(G)}$\n\nleft invariant $d(L_g)_{g'}(X_{g'}) = X_{gg'}$\n\\[\nd(L_g)_e(X_e) = X_g=0 \\quad \\, \\forall \\, g \\in G, \\text{ so } X= 0 \n\\]\n$\\epsilon$ injective\n\nLet $V \\in T_eG$ arbitrary. 
\\\\\n\\quad define (rough) vector field $v^L$ on $G$ by \n\\begin{equation}\n  \\left. v^L \\right|_g = d(L_g)_e(v) \\quad \\quad \\quad \\, (8.13)\n\\end{equation}\n\n\\end{proof}\n\n\n\n\n\n\\textbf{Example 8.40}\n\\begin{enumerate}\n\\item[(a)] $L_b(x) = b + x$  \\quad \\quad \\, $bx = b + x$ \\quad \\quad \\, $y = x + b$ \\\\\n\n$d(L_g) = 1$ \\\\\n$X_x = X^i \\frac{ \\partial }{ \\partial x^i }$ \\\\\n$d(L_b)_x X_x = 1 X_x = X_x = \\widetilde{X}^i \\frac{ \\partial }{ \\partial (x+b)} = \\widetilde{X}^i(x+b) \\frac{ \\partial }{ \\partial x} = X^i(x) \\frac{ \\partial }{ \\partial x^i }$\n\n$X^i$ constants \\\\\n\n$[X,Y]=0$ if $X,Y$ constants. \n\nLie algebra of $\\mathbb{R}^n$ abelian (cf. Example 8.36, (f))\n\\item[(b)]\n\\item[(c)]\n\\end{enumerate}\n\n\\begin{proposition}[8.41] \\textbf{(Lie Algebra of the General Linear Group)}\n  \\begin{equation}\n    \\text{Lie}{(GL(n,\\mathbb{R}))} \\to T_{1_n}GL(n,\\mathbb{R}) \\to \\mathfrak{gl}{(n,\\mathbb{R})} \\quad \\quad \\, (8.14)\n\\end{equation}\nis isomorphism\n\\end{proposition}\n\n\\begin{proof}\nglobal coordinates $X^i_{ \\, \\, j}$ on $GL(n,\\mathbb{R})$\n\nnatural isomorphism\n\\[\n\\begin{gathered}\n  T_1GL(n,\\mathbb{R}) \\longleftrightarrow \\mathfrak{gl}(n,\\mathbb{R}) \\\\\n  A^i_{ \\, \\, j } \\left. \\frac{ \\partial }{ \\partial X^i_{ \\, \\, j }} \\right|_{1_n} \\longleftrightarrow (A^i_{ \\, \\, j})\n\\end{gathered}\n\\]\n\n\nRecall \n\\[\nd(L_g)_{g'}(X_{g'}) = X_{gg'}\n\\]\nRecall Lie algebra of all smooth left invariant vector fields on $G$ \\\\ \nRecall (8.13)\n\\begin{equation}\n\\left. v^L \\right|_g = d(L_g)_e(v) \\quad \\quad \\quad \\, (8.13)\n\\end{equation}\n\n$L_X$ is restriction to $GL(n,\\mathbb{R})$ of linear map $A \\mapsto XA$ on $\\mathfrak{gl}{(n,\\mathbb{R})}$  \n\n\\[\n\\begin{aligned}\n  & L_X g = Xg = X^i_{ \\, \\, k } g^k_{ \\, \\, j } \\\\ \n  & L_X 1 = X1 = X^i_{ \\, \\, k } \\delta^k_{ \\, \\, j } = X^i_{ \\, \\, j } = X\n\\end{aligned}\n\\]\n\n$X^i_{ \\, \\, j}$ global coordinates on $GL(n,\\mathbb{R})$, so\n\\[\n\\left. \\frac{ \\partial }{ \\partial X^i_{\\, \\, j}} \\right|_1 = \\left. \\frac{ \\partial }{ \\partial X^i_{\\, \\, j}} \\right|_X\n\\]\n\n\\[\nDL_X = \\frac{ \\partial}{ \\partial A^k_{ \\, \\, l} } ( XA)^i_{ \\, \\, j} = \\frac{ \\partial }{ \\partial A^k_{ \\, \\, l} } X^i_{ \\, \\, m} A^m_{ \\, \\, j} = X^i_{ \\, \\, m} \\delta^m_{ \\, \\, k } \\delta^l_{ \\, \\, j } = X^i_{ \\, \\, k } \\delta^l_{ \\, \\, j}\n\\]\n\n\\[\n(DL_X)_1(A) = \\left( (DL_X)^{i \\, \\, \\, l }_{ \\, \\, j \\, \\, \\, k } A^k_{\\, \\, l} \\right) \\left. \\frac{ \\partial }{ \\partial X^i_{ \\, \\, j } } \\right|_X = ( X^i_{ \\, \\, k } \\delta^l_{ \\, \\, j} A^k_{ \\, \\, l } ) \\left. \\frac{ \\partial }{ \\partial X^i_{ \\, \\, j } }\\right|_X = (X^i_{ \\, \\, k} A^k_{ \\, \\, j} ) \\left. \\frac{ \\partial }{ \\partial X^i_{ \\, \\, j} } \\right|_X\n\\]\n\n\n\\end{proof}\n\n\n\n\n\\subsection*{Problems}\n\n\\problemhead{8-1}\n\n$\\forall \\, p \\in A$, choose neighborhood $W_p$ of $p$, smooth $\\widetilde{X}:A \\to TM$ s.t. 
$\\widetilde{X} = X$ on $W_p \\bigcap A$\n\nReplace $W_p$ by $W_p \\bigcap U$, so $W_p \\subseteq U$\n\n$\\lbrace W_p | p \\in A \\rbrace \\bigcup \\lbrace M \\backslash A \\rbrace$ open cover of $M$\n\nLet $\\lbrace \\psi_p | p \\in A \\rbrace \\bigcup \\lbrace \\psi_0 \\rbrace$ smooth partition of unity subordinate to this cover, with $\\text{supp}{\\psi_p} \\subseteq W_p$, $\\text{supp}{\\psi_0} \\subseteq M \\backslash A$\n\n$\\forall \\, p \\in A$, $(U_p,x^i)$ smooth coordinate chart\n\n\\[\n\\begin{gathered}\n  X_p = \\left. X^i(p) \\frac{ \\partial }{ \\partial x^i } \\right|_p \\\\ \n  X^i: U_p \\to \\mathbb{R}\n\\end{gathered}\n\\]\n\n$\\psi_p \\widetilde{X}^i(p)$ smooth on $W_p$\n\n$\\psi_p\\widetilde{X}^i(p)$ has smooth extension to all of $M$ if $\\psi_p \\widetilde{X}^i(p) =0$ on $M\\backslash \\text{supp}{\\psi_p}$ \\\\\n\non open $W_p \\backslash \\text{supp}{ \\psi_p}$, they agree\n\ndefine $\\widetilde{X}^i:M \\to \\mathbb{R}$ \n\\[\n\\widetilde{X}^i(x) = \\sum_{p \\in A} \\psi_p(x) \\widetilde{X}^i(p)\n\\]\n\n$\\lbrace \\text{supp}{\\psi_p} \\rbrace$ locally finite, so $\\sum_{p\\in A} \\psi_p(x) \\widetilde{X}^i(p)$ has only finite number of nonzero terms in neighborhood of $\\forall \\, x \\in M$, so $\\widetilde{X}^i(x)$ smooth\n\nIf $x\\in A$, $\\psi_0(x) = 0$, $\\widetilde{X}^i(x) = X^i(x)$ \\quad \\, $\\forall \\, p $ s.t. $\\psi_p(x) \\neq 0$, so \n\\[\n\\widetilde{X}^i(x) = \\sum_{p\\in A} \\psi_p(x) X^i(x) = (\\psi_0(x) + \\sum_{p\\in A} \\psi_p(x)) X^i(x) = X^i(x)\n\\]\nso $\\widetilde{X}^i$ extension of $X$\n\n\n\\hrulefill\n\n\\problemhead{8-2} \\textsc{Euler's Homogeneous Function Theorem} \n$y = \\lambda x$\n\\[\n\\frac{ \\partial f}{ \\partial y^i } \\frac{ \\partial y^i}{ \\partial \\lambda} = x^i \\frac{ \\partial f}{ \\partial y^i} = x^i \\frac{ \\partial f}{ \\partial ( \\lambda x^i)} = \\frac{d}{d\\lambda} f(y) = \\frac{d}{d\\lambda} f(\\lambda x) = \\frac{d}{d\\lambda} ( \\lambda^c f(x)) = c \\lambda^{c-1} f(x)\n\\]\n\n$\\lambda = 1$\n\n\\[\n\\boxed{ x^i \\frac{ \\partial f}{ \\partial x^i } = V_x f(x) = cf(x) }\n\\]\n\n\n\\hrulefill\n\n\n\n\n\\problemhead{8-29}\n\n\\[\n\\begin{aligned}\n  & \\mathfrak{o}{(n)} = \\lbrace A \\in \\mathfrak{gl}(n,\\mathbb{R}) | A^T + A = 0 \\rbrace \\\\ \n  & \\mathfrak{o}{(3)} = \\lbrace A \\in \\mathfrak{gl}(3,\\mathbb{R}) | A^T + A = 0 \\rbrace \n\\end{aligned}\n\\]\n\n\n\\[\n\\begin{aligned}\n& \\mathfrak{su}(n) = \\lbrace A \\in \\mathfrak{gl}(n,\\mathbb{C}) | A^* + A = 0 , \\text{tr}A = 0 \\rbrace \\\\ \n& \\mathfrak{su}(2) = \\lbrace A \\in \\mathfrak{gl}(2,\\mathbb{C}) | A^* + A = 0 , \\text{tr}A = 0 \\rbrace \\\\ \n\\end{aligned}\n\\]\n\n$\\forall \\, A \\in \\mathfrak{su}(2)$, $A$ is of the form, for $a,b,c \\in \\mathbb{R}$, \n\\[\nA = \\left( \\begin{matrix} ia & b + ic \\\\ \n  -b + i c & -ia \\end{matrix} \\right)\n\\]\n$\\forall \\, B \\in \\mathfrak{o}(3)$, $B$ is of the form \n\\[\nB = \\left( \\begin{matrix}  & a & b \\\\ \n  -a & & c \\\\ \n  -b & -c & \\end{matrix} \\right)\n\\]\nThen the following identification $F$ is clearly an isomorphism of vector spaces, as it is linear, one-to-one, and onto:\n\\[\n\\begin{aligned}\n  & F : \\mathfrak{su}(2) \\to \\mathfrak{o}(3) \\\\ \n  & F\\left( \\begin{matrix} ia & b + ic \\\\ \n    -b + ic & -ia \\end{matrix} \\right) = \\left( \\begin{matrix} & a & b \\\\ \n    -a & & c \\\\\n    -b & -c & \\end{matrix} \\right)\n\\end{aligned}\n\\]\n\n", "meta": {"hexsha": "353a6a946f9ae7af6e197563f1ad7dd5c263b41f", "size": 21153, "ext": "tex", "lang": "TeX", "max_stars_repo_path": 
"LeeJM/08VectorFields.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/08VectorFields.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/08VectorFields.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 32.8973561431, "max_line_length": 473, "alphanum_fraction": 0.5510329504, "num_tokens": 8649, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743505760728, "lm_q2_score": 0.7490872075132152, "lm_q1q2_score": 0.554455137345938}}
{"text": "\\documentclass[12pt]{article}\n\n\\usepackage{sbc-template}\n\n\\usepackage{graphicx,url}\n\\usepackage{amsmath}\n\\usepackage{float}\n\\usepackage[brazil]{babel}\n%\\usepackage[latin1]{inputenc}\n\\usepackage[utf8]{inputenc}\n% UTF-8 encoding is recommended by ShareLaTex\n\n\n\\sloppy\n\n\\title{A study on the numerical methods to solve the n-body problem\\\\Symplectic Integrators}\n\n\\author{Gil S. M. Neto}\n\n\n\\address{Instituto de Matem\u00e1tica - Departamento de Matem\u00e1tica Aplicada\\\\ Universidade Federal do Rio de Janeiro\n  (UFRJ)\\\\\n Rio de Janeiro -- RJ -- Brazil\n  \\email{gil.neto@ufrj.br}\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n  This is a study on the numerical solutions for the n-body problem, modeling\n  the solar system and solving Newton's Law of Gravity for the position of the planets we can take a look on how different numerical approaches behaves on this system.\\\\\n\n\\end{abstract}\n\n\\section{Introduction}\nOne might be interested in how a determined particle moves along other particles, how to determine how this system of particles evolve? If we are talking about particles, or bodies, in the macroscopic world we should take a look at Newton's Law of Universal Gravitation\\\\\n\\[\nF = G \\frac{m_1m_2}{r^2}\n\\]\nWhere \\(G = 6.67428 \\cdot 10^{-11} \\frac{m^3}{kg\\cdot s^2}\\) is the Gravitational constant, \\(m_1\\) is the mass of the first body, typically the body that we are interested in the position, \\(m_2\\) is the body interacting with our desired body, and \\(r\\) is the distance between the bodies.\\\\\nBut how can this equation be of any help to determine a body position? Well, combining this with Newton's Second Law of Motion\n\\[\nF = ma\n\\]\nWe get\n\\begin{align*}\nm_1a &= G \\frac{m_1m_2}{r^2}\\\\\na &= G \\frac{m_2}{r^2}\n\\end{align*}\nBut acceleration is just the second derivative of position, so what we truly have is the differential equation\n\\[\n\\ddot{r} = G \\frac{m_2}{\\lVert r \\rVert^2}\n\\]\nSo, by transforming this into an initial value problem we can solve for the body's position.\\\\\nThe N-Body problem is a well know problem, the equations described above only tells us about a system of two bodies. But what about systems with more bodies? How to simulate a galaxy, for example, or how to determine where a GPS satellite must be in Earth's orbit?\\\\\nThis article is a case study of the N-Body problem, where N = 9, so we have the sun and the eight planets of the solar system (sorry Pluto). And we will take a look at how to model Newton's Equation of Motion on the computer, as a system of linear differential equations that we can solve.\n\\section{One Differential Equation to rule them all} \\label{sec:thediffeq}\n(...) One Differential Equation to find them all. And inspired by a Lord of The Rings line, we can use our One Differential Equation to determine all bodies' position.\n\\[\n\\ddot{r} = G \\frac{m_2}{\\lVert r \\rVert^2}\n\\]\nRewriting this as a system of first order linear equations:\n\\begin{align*}\n  \\begin{cases}\n    \\dot{r} = v \\\\\n    \\ddot{v} = G \\frac{m_2}{\\lVert r \\rVert^2}\n  \\end{cases}\n\\end{align*}\nGive us an easy way to determine a body's position. 
But we are dealing with a system of 9 bodies, so the force acting on our main body is the sum of the forces from all the other bodies, and our differential equation for the i-th body is:\n\\[\n\\ddot{r}_i = G\\sum_{j \\neq i}\\frac{m_j (r_j - r_i)}{\\lVert r_j - r_i \\rVert^3}\n\\]\nAnd our system of equations to solve becomes\n\\begin{align*}\n  \\begin{cases}\n    \\dot{r}_i = v_i \\\\\n    \\dot{v}_i = G \\sum_{j \\neq i}\\frac{m_j (r_j - r_i)}{\\lVert r_j - r_i \\rVert^3}\n  \\end{cases}\n\\end{align*}\nSolving this for each planet gives us the positions of all the planets in the solar system.\n\\section{The Physics of the problem}\nGoing through the physical laws of the problem will give us tools to verify that our model is a good model, and that the numerical solutions are, in fact, solving the right model.\n\n\\subsection{Hamiltonian and Angular Momentum}\nThis system under Newton's Law is a conservative system; in other words, it conserves energy, as Professor Alan Sokal from University College London notes, and we can look at the Hamiltonian of the system.\\\\\nHere \\(q\\) denotes the position vector, so \\(\\dot{q}\\) is the velocity vector. \\\\\nFirst of all, the self potential energy\n\\[\nU = G \\displaystyle \\sum_{i < j}^n \\frac{m_i m_j}{\\lVert q_i - q_j \\rVert}\n\\]\nThe linear momentum\n\\[\np_i = m_i \\dot{q_i}\n\\]\nAnd finally the Hamiltonian\n\\[\nH = \\displaystyle \\sum_{i = 1}^{n} \\frac{\\lVert p_i \\rVert ^2}{2m_i} - U\n\\]\nAnd so \\(H\\) is constant.\\\\\nAnother conserved quantity is the Angular Momentum, which is defined as:\n\\[\nL = r \\times p\n\\]\n\\(r\\) being the position vector and \\(p = mv\\) the linear momentum vector.\\\\\nWe can simplify the equation to one that is easy to compute\n\\begin{align*}\n  L &= r \\times p\\\\\n  \\lVert L \\rVert &= \\sqrt{\\lVert r \\rVert^2 \\lVert p \\rVert^2(sin(\\theta))^2} \\\\\n  &= \\sqrt{\\lVert r \\rVert^2 \\lVert p \\rVert^2 - \\lVert r \\rVert^2 \\lVert p \\rVert^2(cos(\\theta))^2} \\\\\n  &= \\sqrt{\\lVert r \\rVert^2 \\lVert p \\rVert^2 - \\langle r, p \\rangle ^2}\n\\end{align*}\nIn the second step we used the trigonometric identity \\(sin^2 (t) + cos^2(t) = 1\\), and in the third step \\(\\langle \\cdot \\rangle\\) stands for the scalar product.\\\\\nWith this in mind, we expect that the numerical method for solving the n-body problem also conserves Energy and Angular Momentum.\n\n\\subsection{Kepler's Laws}\n\\begin{enumerate}\n  \\item Planets move around the Sun in ellipses, with the Sun at one focus\n  \\item The line connecting the Sun to a planet sweeps equal areas in equal times\n  \\item The square of the orbital period of a planet is proportional to the cube of the mean distance from the Sun\n\\end{enumerate}\n\\subsubsection{First Law}\nFor the first Law, we will check the eccentricity \\(\\epsilon \\) of the orbits; if \\(\\lvert \\epsilon \\rvert < 1\\) the orbit is indeed an ellipse. Also we will compare the numerical result of the eccentricity to the data from NASA about the orbits.\\\\\nWorking out some Analytical Geometry, the eccentricity is given by:\n\\[\n\\epsilon = \\frac{ap - pe}{ap + pe}\n\\]\nWhere \\(ap\\) is the Aphelion distance (i.e. the furthest distance to the Sun) and \\(pe\\) is the Perihelion distance (i.e. the closest distance to the Sun).\n
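\nSince the orbit comes out of the simulation only as a time series of positions, a small helper (hypothetical, assuming the heliocentric positions of one planet are stored as an array covering at least one full orbit) is enough to extract \\(ap\\), \\(pe\\) and the eccentricity used in the comparison table later on:\n\\begin{verbatim}\nimport numpy as np\n\ndef eccentricity(positions):\n    # positions: (n_steps, 2) array of heliocentric x, y coordinates\n    dist = np.linalg.norm(positions, axis=1)\n    ap, pe = dist.max(), dist.min()   # aphelion, perihelion distances\n    return (ap - pe) / (ap + pe)\n\\end{verbatim}\n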
\n\\section{The numerical methods}\nGiven the energy-conserving nature of the problem, the Euler method and the Runge-Kutta methods won't give us satisfactory results, because they don't conserve energy: their energy error drifts over time. Methods that preserve the symplectic structure of the flow, and therefore keep the energy error bounded, are called symplectic methods.\nIn this article we will make use of the non-symplectic method:\n\\begin{itemize}\n  \\item Euler (First Order)\n\\end{itemize}\n and the symplectic ones:\n \\begin{itemize}\n   \\item Semi-implicit Euler (Euler-Cromer - Second Order)\n   \\item 3 Step Verlet Integrator (Leapfrog - Second Order)\n   \\item 7 Step Verlet Integrator (Leapfrog - Higher Order)\n \\end{itemize}\n  \\(V(t, x), A(t, x)\\) are the velocity and acceleration at time \\(t\\) and position \\(x(t)\\), \\(x_t\\) is the position at time \\(t\\), \\(v_t\\) is the velocity at time \\(t \\), \\(h\\) is the time step.\n \\subsubsection{Euler}\n \\begin{align*}\n   x_{t+1} &= x_t + h v_t\\\\\n   v_{t+1} &= v_t + h a_t\n \\end{align*}\n \\subsubsection{Euler-Cromer}\n \\begin{align*}\n  v_{t+1} &= v_t + h a_t \\\\\n  x_{t+1} &= x_t + h v_{t+1}\n \\end{align*}\n \\subsubsection{Velocity-Verlet}\n 3 Step Verlet comes in two flavors, Velocity-Verlet and Position-Verlet; they yield the same results, so we will deal only with the velocity version (the position one can be found in the source code).\\\\\n \\begin{align*}\n  v_{t+\\frac{1}{2}} &= v_t + \\frac{1}{2} h a_t \\\\\n  x_{t+1} &= x_t + h v_{t+\\frac{1}{2}}\\\\\n  v_{t+1} &= v_{t+\\frac{1}{2}} + \\frac{1}{2} h A(t+1, x_{t+1})\n \\end{align*}\n\n \\subsubsection{7 Step Verlet - Leapfrog Integrator}\n \\begin{align*}\n  w &= \\sqrt[3]{2}\\\\\n  f &= 2 - w\\\\\n  leap_1 = leap_7 &= \\frac{h}{2f}\\\\\n  leap_2 = leap_6 &= \\frac{h}{f}\\\\\n  leap_3 = leap_5 &= (1-w) \\frac{h}{2f}\\\\\n  leap_4 &= -h \\frac{w}{f}\\\\\n  x_{1} &= x_t + leap_1 \\cdot v_t\\\\\n  v_{2} &= v_t + leap_2 \\cdot A(x_{1})\\\\\n  x_{3} &= x_{1} + leap_3 \\cdot v_{2}\\\\\n  v_{4} &= v_{2} + leap_4 \\cdot A(x_{3})\\\\\n  x_{5} &= x_{3} + leap_5 \\cdot v_{4}\\\\\n  v_{t+1} &= v_{4} + leap_6 \\cdot A(x_{5})\\\\\n  x_{t+1} &= x_{5} + leap_7 \\cdot v_{t+1}\n \\end{align*}\n
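\nA minimal sketch of the 3-step (velocity Verlet) update in code, with \\texttt{accel} standing in for the pairwise gravity sum of Section \\ref{sec:thediffeq} (names here are illustrative, not this paper's source code):\n\\begin{verbatim}\nimport numpy as np\n\ndef velocity_verlet_step(x, v, accel, h):\n    # x, v: (n_bodies, 2) arrays; accel(x) returns the accelerations\n    v_half = v + 0.5 * h * accel(x)\n    x_new = x + h * v_half\n    v_new = v_half + 0.5 * h * accel(x_new)\n    return x_new, v_new\n\n# The 7-step integrator is this same leapfrog pattern applied with\n# the seven weights leap_1 ... leap_7 above, where w = 2**(1/3).\n\\end{verbatim}\n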
\n\\section{Results}\\label{sec:figs}\nNow we will look at some results. The initial values for the simulation were chosen as: initial \\(x\\) coordinate: Aphelion distance; initial \\(y\\) coordinate: 0; initial velocity: data from the NASA Fact Sheet. The time step \\(h\\) is measured in seconds, so \\(h = 3600\\) means a \\(1\\) hour step.\n\n\\begin{figure}[H]\n  \\includegraphics[width=.4\\textwidth, height =.3\\textheight]{euler-3600-orbit.png}\n  \\includegraphics[width=.4\\textwidth, height =.3\\textheight]{eulercromer-3600-orbit.png}\n\n  \\includegraphics[width=.4\\textwidth, height =.3\\textheight]{verlet-3600-orbit.png}\n  \\includegraphics[width=.4\\textwidth, height =.3\\textheight]{leapfrog-3600-orbit.png}\n  \\caption{Orbits for \\(h = 3600\\) and simulated \\(100000\\) times (\\(\\approx 4100\\) days)}\n  \\label{fig:orbits}\n\\end{figure}\nJust as the picture shows, the Euler method drifts off orbit, while the symplectic ones stay in orbit.\nNow let's take a look at the energy.\n\\begin{figure}[H]\n  \\includegraphics[width=.92\\textwidth, height =.5\\textheight]{h_euler.png}\n\n  \\includegraphics[width=.92\\textwidth, height =.5\\textheight]{h_leap.png}\n  \\caption{Mechanical Energy simulated \\(100000\\) times}\n  \\label{fig:energy}\n\\end{figure}\n\\begin{figure}[H]\n  \\includegraphics[width=.92\\textwidth, height =.5\\textheight]{an_euler.png}\n\n  \\includegraphics[width=.92\\textwidth, height =.5\\textheight]{an_leap.png}\n  \\caption{Angular momentum simulated \\(40000\\) times}\n  \\label{fig:angular}\n\\end{figure}\nThe symplectic methods conserve the Hamiltonian even with larger steps, while the Euler method only approximately conserves it for very small steps; moreover, the Hamiltonian under the Euler method decreases steadily instead of merely oscillating, while the leapfrog Hamiltonian stays bounded.\\\\\nOn the angular momentum graphs we can see this same characteristic.\n\n\\begin{table}[H]\n\\centering\n\\caption{Eccentricity values comparison for Kepler's first Law}\n\\label{tab:exTable1}\n\\smallskip\n\\begin{tabular}{|l|c|c|}\n\\hline\n& Leapfrog Value & NASA Data\\\\[0.5ex]\n\\hline\n&&\\\\[-2ex]\nMercury & 0.196 & 0.206\\\\[0.5ex]\n\\hline\n&&\\\\[-2ex]\nVenus & 0.018 & 0.007\\\\[0.5ex]\n\\hline\n&&\\\\[-2ex]\nEarth & 0.0164 & 0.017\\\\[0.5ex]\n\\hline\n&&\\\\[-2ex]\nMars & 0.09 & 0.093\\\\[0.5ex]\n\\hline\n&&\\\\[-2ex]\nJupiter & 0.05 & 0.048\\\\[0.5ex]\n\\hline\n\\end{tabular}\n\\end{table}\nAnd so, our model is quite close to the physical data.\n\\section{Conclusion and links}\nWhile the model and algorithm can be improved, for instance in speed (100000 steps took 25 minutes on an i7 computer), we can clearly see the differences between the symplectic and non-symplectic methods.\\\\\nEach step of the 7-step leapfrog has complexity \\(\\mathcal{O}(n^2)\\) in the number of bodies; there's an algorithm worth mentioning, the Barnes-Hut algorithm, because it reduces this to \\(\\mathcal{O}(n\\log{}n)\\). It's widely used, and its main principle is to group distant bodies into one big body, so you reduce the amount of calculations.\\\\\nA link to the source code of the simulation can be found at \\url{https://github.com/mirandagil/university-courses/tree/master/analise-numerica-edo-2019-1/project}\n\n\\begin{figure}[H]\n  \\includegraphics[width=.7\\textwidth, height =.4\\textheight]{planets.png}\n  \\caption{Leapfrog simulation with \\( h = 7200 \\) simulated \\(100000\\) times}\n  \\label{fig:planets}\n\\end{figure}\n\n\\section{References}\n\\begin{thebibliography}{9}\n\\bibitem{latexcompanion}\nAlan Sokal.\n\\textit{Notes on MATH 0054 - Analytical Dynamics - University College London}.\n\\url{https://www.ucl.ac.uk/~ucahad0/}.\n\n\\bibitem{latexcompanion2}\nPeter Young.\n\\textit{Notes on Computational Physics - University of 
California}.\n\\url{https://young.physics.ucsc.edu/115/}.\n\n\\bibitem{latexcompanion3}\nBrian Tyrrell.\n\\textit{Using numerical methods to solve the Gravitational n-Body Problem \\& represent the result graphically using OpenGL}.\n\\url{https://www.maths.tcd.ie/~btyrrel/nbody.pdf}.\n\n\\bibitem{knuthwebsite}\nPlanetary Fact Sheet - NASA,\n\\\\\\texttt{https://nssdc.gsfc.nasa.gov/planetary/factsheet/index.html}\n\n\\bibitem{knu2thwebsite}\nMITx: 8.01.2x Mechanics: Momentum and Energy\n\\\\\\texttt{https://courses.edx.org/courses/course-v1:MITx+8.01.2x+3T2018/course/}\n\\end{thebibliography}\n\n\n\\end{document}\n", "meta": {"hexsha": "f3a515822e624c896ac38870e616bd0ee574d0da", "size": 12358, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "analise-numerica-edo-2019-1/project/project-tex/sbc-template.tex", "max_stars_repo_name": "mirandagil/university-courses", "max_stars_repo_head_hexsha": "e70ce5262555e84cffb13e53e139e7eec21e8907", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-23T16:39:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-23T16:39:01.000Z", "max_issues_repo_path": "analise-numerica-edo-2019-1/project/project-tex/sbc-template.tex", "max_issues_repo_name": "mirandagil/university-courses", "max_issues_repo_head_hexsha": "e70ce5262555e84cffb13e53e139e7eec21e8907", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "analise-numerica-edo-2019-1/project/project-tex/sbc-template.tex", "max_forks_repo_name": "mirandagil/university-courses", "max_forks_repo_head_hexsha": "e70ce5262555e84cffb13e53e139e7eec21e8907", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.7753623188, "max_line_length": 299, "alphanum_fraction": 0.7171063279, "num_tokens": 3875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6619228891883799, "lm_q2_score": 0.837619961306541, "lm_q1q2_score": 0.5544398248298846}}
{"text": "\\section{Roll decay system identification to determine roll damping}\n\\subsection{PIT approach}\nInvestigate the Parameter Identification Technique (PIT) approach \\parencite{soder_assessment_2019}.\n\\input{../paper/general_content/roll_diff_equation_quadratic}\n", "meta": {"hexsha": "0a6d8f3a12378240ed5e33c23f9e3a4fef0ac674", "size": 258, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "logbook/roll_decay_system_identification.tex", "max_stars_repo_name": "martinlarsalbert/rolldecay", "max_stars_repo_head_hexsha": "bee335c27e6519bd32ed9488751bc13e7f6fa9b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "logbook/roll_decay_system_identification.tex", "max_issues_repo_name": "martinlarsalbert/rolldecay", "max_issues_repo_head_hexsha": "bee335c27e6519bd32ed9488751bc13e7f6fa9b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-02-02T23:07:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-13T03:27:41.000Z", "max_forks_repo_path": "logbook/roll_decay_system_identification.tex", "max_forks_repo_name": "martinlarsalbert/rolldecay", "max_forks_repo_head_hexsha": "bee335c27e6519bd32ed9488751bc13e7f6fa9b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.6, "max_line_length": 100, "alphanum_fraction": 0.8488372093, "num_tokens": 57, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.837619947119304, "lm_q2_score": 0.66192288918838, "lm_q1q2_score": 0.5544398154390278}}
{"text": "\\documentclass[oneside]{article}\n\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{xurl}\n\\usepackage{hyperref}\n\n\\begin{document}\n\\title{How to Register WinRAR for a License File}\n\\author{GreenYun}\n\\maketitle\n\n\\section{Introduction}\nWinRAR is a file archiver developed by Eugene Roshal.\nTrail version of WinRAR can be download and distributed freely.\nSold and supported by win.rar GmbH, WinRAR is allowed to be used in a 40-day test period.\nA license should be purchased to continue using the software.\n\nAfter a person/organization/company pay to win.rar GmbH, one will receive a file named ``RARreg.key'' for registering the software.\nThe file may be sent on a CD or via email. Put the file and the installer within a same directory or copy it to the installation directory and complete the registration.\n\nThe proper way to register for a license file is pay to win.rar GmbH.\nYet this article will discuss the generation process of ``RARreg.key''. (Valid from version 4.x till current.)\n\n\\section{Prerequisites}\nTo generate the license file, one must provide his name or the company name --- referred to as ``Name''. Name containing characters other than ASCII characters is discouraged. (May lead to wrong key.)\n\nAnother information is provided by the receipt after the payment.\n``License Type'' is a string that usually describes how many copies is authorized the buyer to keep.\nFor instance, ``Single PC usage license'', ``1000 PC usage license'', etc.\n\n\\section{A Variant of SHA-1 Algorithm}\\label{SHA-1}\nWinRAR adopted SHA-1 algorithm as message hash method. In general SHA-1 algorithm implementation, the digest $S$ is combined with 5 state values as\n\\[S=S_0S_1S_2S_3S_4\\]\nwhere $S_i$ ($i\\in\\left\\{0,1,2,3,4\\right\\}$) are 32-bit unsigned integers.\n\nThe extra step before generate $S$ is doing byte-wise reverse for each $S_i$. After that concatenate them as $S$, a 160-bit unsigned integer. 
\n\\subsection{Yet Another Variant}\\label{YA-SHA-1}\nThere is another SHA-1 function used by WinRAR, with all initial state values set to zero.\nThe only use found for this function is to generate a secret number from a zero-length string input, and the answer is\n\\[\\mathtt{1050D90D0F27A54653461BD1B4E33C7C0FFD8D43}\\]\n\n\\section{Digital Signature Algorithm}\nThe digital signature algorithm (the DSA) used by WinRAR is a variant of the SM2 digital signature algorithm.\n\n\\subsection{The Composite Field}\nWinRAR chose a composite field of $\\mathbb{F}_{\\left(2^{15}\\right)^{17}}$, described as follows.\n\n\\subsubsection{The Base Field}\nThe base field of $\\mathbb{F}_{2^{15}}$ is generated by the primitive polynomial\n\\[B\\left(x\\right)=x^{15}+x+1\\]\nwhere the coefficients are in $\\mathbb{F}_{2}$.\n\n$\\forall a\\left(x\\right)\\in\\mathbb{F}_{2^{15}}$,\n\\[a\\left(x\\right)=a_{14}x^{14}+a_{13}x^{13}+\\cdots+a_1x+a_0\\]\ncoefficients are combined as a 15-bit series, denoted as\n\\[a=a_{14}a_{13}\\cdots a_1a_0\\]\n\n\\subsubsection{The Extension Field}\nThe extension field of $\\mathbb{F}_{\\left(2^{15}\\right)^{17}}$ is constructed by the primitive polynomial\n\\[E\\left(x\\right)=x^{17}+x^3+1\\]\nwhere the coefficients are in the finite field $\\mathbb{F}_{2^{15}}$.\n\n$\\forall b\\left(x\\right)\\in\\mathbb{F}_{\\left(2^{15}\\right)^{17}}$,\n\\[b\\left(x\\right)=b_{16}x^{16}+b_{15}x^{15}+\\cdots+b_1x+b_0\\]\nand $b$ is denoted as\n\\[b=b_{16}b_{15}\\cdots b_1b_0\\]\n\nNote that, $\\forall b_i\\in\\mathbb{F}_{2^{15}}$, $i\\in\\left\\{0,1,\\cdots,15,16\\right\\}$, which means $b_i$ can be translated into a 15-bit series, and the total length of $b$ is $15\\times 17=255$ bits.\n\n\\subsection{The Elliptic Curve}\\label{curve}\nThe selected curve $C$ is\n\\[y^2+xy=x^3+\\alpha x^2+\\beta\\]\nwhere $x, y, \\alpha, \\beta\\in\\mathbb{F}_{\\left(2^{15}\\right)^{17}}$. WinRAR chose $\\alpha=0$ and $\\beta=161$, and a base point $G\\in C$: (all numbers below are denoted as hexadecimal)\n\\begin{align*}\n      G   & = \\left(x_G, y_G\\right)                                                     \\\\\n      x_G & = \\mathtt{56FDCBC6A27ACEE0CC2996E0096AE74FEB1ACF220A2341B898B549440297B8CC} \\\\\n      y_G & = \\mathtt{20DA32E8AFC90B7CF0E76BDE44496B4D0794054E6EA60F388682463132F931A7}\n\\end{align*}\nAnd the order of $G$:\n\\[\\mu=\\mathtt{1026DD85081B82314691CED9BBEC30547840E4BF72D8B5E0D258442BBCD31}\\]\n\n\\subsection{Key Generation}\\label{key-gen}\nWinRAR uses a string (ASCII) as the seed to generate a private--public key pair.\n\n\\subsubsection{The Private Key}\n\\paragraph{First, generate the hash digest for the input message.}\nIf the length of the input message is not zero, calculate its digest using the method described in section \\ref{SHA-1}, and assign the digest to $g$.\n\nIf a zero-length string is input, directly assign\n\\[g=\\mathtt{CDE43B4C6847B9D5DC5EF4A350265329EB3EB781}\\]\n\nNow we treat $g$ as a 20-byte octet stream (most significant byte first) and concatenate a counter $c$, a 32-bit unsigned integer, after the last byte of $g$.\nWe will use $\\parallel$ to denote ``concatenation'', which means the message $M$ is\n\\[M=g\\parallel c\\]\nwhich is a 25-byte stream.\n\n\\paragraph{The loop starts} by setting the counter $c$ to $1$. 
Send $M$ as the message to the SHA-1 function described in section \\ref{SHA-1}, and store the digest as $S$.\nObviously, $S$ is a 20-byte octet stream, denoted as\n\\[S=S_{19}S_{18}\\cdots S_1S_0\\]\nin network order, where $S_i$ ($i\\in\\left\\{0,1,\\cdots,18,19\\right\\}$) are the bytes.\n\nAssume $k$ is another octet stream, with zero length before the loop starts.\nThe least significant two bytes of $S$ are taken and appended to the left side of $k$:\n\\[k=S_1\\parallel S_0\\parallel k\\]\n\nLoop this process for 15 rounds; after each round, increment $c$ by 1 and update $M$.\n\nIf the digest after the $i$-th round is denoted as $S^i$, the final $k$ after all 15 rounds looks like:\n\\[k=S^{15}_1S^{15}_0S^{14}_1S^{14}_0\\cdots S^2_1S^2_0S^1_1S^1_0\\]\nand this is the private key we generated.\n\n\\paragraph{To verify the key generator,} check that an empty message input generates $k=k_0$, with $k_0$ described as follows:\n\\[k_0=\\mathtt{59FE6ABCCA90BDB95F0105271FA85FB9F11F467450C1AE9044B7FD61D65E}\\]\n
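\nA sketch of this derivation loop, reusing the \\texttt{winrar\\_sha1} sketch from section \\ref{SHA-1} (whether this reproduces real keys byte-for-byte depends on implementation details not re-verified here):\n\\begin{verbatim}\nimport struct\n\ndef derive_private_key(seed: bytes) -> bytes:\n    if len(seed) == 0:\n        g = bytes.fromhex('CDE43B4C6847B9D5DC5EF4A350265329EB3EB781')\n    else:\n        g = winrar_sha1(seed)\n    k = b''\n    for c in range(1, 16):             # 15 rounds, counter starts at 1\n        s = winrar_sha1(g + struct.pack('>I', c))   # M = g || c\n        k = s[-2:] + k                 # prepend S_1, S_0\n    return k                           # 30-byte private key\n\\end{verbatim}\nThe empty-seed case should then return the constant $k_0$ above.\n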
\n\n\\subsubsection{The Public Key}\nAs the base point $G$ on the elliptic curve is known (section \\ref{curve}), the public key is calculated by multiplying the base point by the private key $k$, using elliptic curve arithmetic\n\\[\nP=k\\cdot G\n\\]\n\n\\paragraph{To verify the key generator,} check that $k_0$ (as described before) generates $P=P_0$, with\n\\begin{align*}\n      P_0 & =(x_P,y_P)                                                                 \\\\\n      x_P & =\\mathtt{3861220ED9B36C9753DF09A159DFB148135D495DB3AF8373425EE9A28884BA1A} \\\\\n      y_P & =\\mathtt{12B64E62DB43A56114554B0CBD573379338CEA9124C8443C4F50E6C8B013EC20}\n\\end{align*}\n\n\\paragraph{A public key compression method} is used by the SM2 digital signature algorithm, but WinRAR follows only some simplified steps:\n\\begin{enumerate}\n      \\item Let $P=\\left(x_P,y_P\\right)$ be a point on the elliptic curve. If $x_P=0$, $\\tilde{y}_P=0$; or else $\\tilde{y}_P$ is the least significant (right-most) bit of $e=y_P\\cdot x_P^{-1}$ in the composite field $\\mathbb{F}_{\\left(2^{15}\\right)^{17}}$;\n      \\item Concatenate $x_P$ and $\\tilde{y}_P$ and obtain the bit stream.\n\\end{enumerate}\n\nThe compressed public key is denoted as\n\\[\\tilde{P}=x_P\\parallel\\tilde{y}_P\\]\n\n\\subsection{The Signing Process}\\label{sign}\nA message $M$ is signed using a private key $k$, following these steps:\n\\begin{enumerate}\n      \\item Pick a random number $n$, which satisfies $0<n<\\mu$;\\label{signing-start}\n      \\item Generate a digest $h$ via the algorithm described in section \\ref{SHA-1}, and extend $h$ by pushing the least significant 10 bytes of the secret number (described in \\ref{YA-SHA-1}) to its left side, so that it has 30 bytes in total and looks like\n            \\[h=\\mathtt{1BD1B4E33C7C0FFD8D43}\\parallel\\mathrm{Sha}_1\\left(M\\right)\\]\n      \\item For a point $P=\\left(x_P,y_P\\right)$ on the elliptic curve, let $X\\left(P\\right)=x_P$, then calculate\n            \\[r\\equiv X\\left(n\\cdot G\\right)+h\\mod{\\mu}\\]\n      \\item If either $r=0$ or $r+n=\\mu$, go back to step \\ref{signing-start}; else continue;\n      \\item Calculate\n            \\[s\\equiv n-k\\cdot r\\mod{\\mu}\\]\n      \\item If $s=0$, go back to step \\ref{signing-start}; otherwise obtain the signature $\\left(r,s\\right)$.\n\\end{enumerate}\n\n\\section{The Generation of the License}\nAssume the input messages ``Name'' and ``License Type'' are denoted as $U$ and $L$, and follow the next steps to generate the license:\n\\begin{enumerate}\n      \\item Follow the steps in section \\ref{key-gen} to obtain a private--public key pair with input $U$, and convert to the compressed public key form $\\tilde{P}_U$;\n      \\item Convert $\\tilde{P}_U$ into hexadecimal string form, padded with \\texttt{0} on the left side until the length of the string is 64;\n      \\item Split the string form of $\\tilde{P}_U$ into two parts --- the first 48 characters ($s^+$) and the remainder ($s^-$);\n      \\item Let $D_3$ be a string that is constructed as follows:\n            \\[D_3=\\texttt{\"60\"}\\parallel s^+\\]\n            where text between two double quotation marks is a string literal;\n      \\item Follow the steps in section \\ref{key-gen} to obtain a private--public key pair with input $D_3$, and convert to the compressed public key form $\\tilde{P}_3$;\n      \\item Convert $\\tilde{P}_3$ into hexadecimal string form, padded with \\texttt{0} on the left side until the length of the string is 64, denoted as $D_0$;\n      \\item Let $I$ be a 20-character string that is constructed as follows:\n            \\[I=s^-\\parallel D_{0,0}D_{0,1}D_{0,2}D_{0,3}\\]\n            where $D_{0,i}$ is the $i$th character of the string $D_0$;\\\\\n            (\\emph{Note:} In practice, WinRAR does not check the contents of $I$ at all.)\n      \\item Use the algorithm described in \\ref{sign} with $L$ as the message input and $k_0$ as the private key input, to obtain the signature $\\left(r_L,s_L\\right)$;\n      \\item Convert $r_L$ and $s_L$ into hexadecimal string form, $s_L^+$ and $s_L^-$, padding each with \\texttt{0} on its left side until both lengths are 60; (remain unchanged if the length is greater than 60)\n      \\item Let $D_1$ be a string that is constructed as follows:\n            \\[D_1=\\texttt{\"60\"}\\parallel s_L^-\\parallel s_L^+\\]\n      \\item Construct a message string $M_1$ as\n            \\[M_1=U\\parallel D_0\\]\n            and sign it with private key 
$k_0$, obtain $\\left(r_1,s_1\\right)$;\n      \\item Convert $r_1$ and $s_1$ into hexadecimal string form, $s_1^+$ and $s_1^-$, pad each with \\texttt{0} on its left side until both length are 60; (Remain unchanged if the length is greater than 60)\n      \\item Construct a string $D_2$ as\n            \\[D_2=\\texttt{\"60\"}\\parallel s_1^-\\parallel s_1^+\\]\n      \\item A CRC32 checksum is calculated using the message string\n            \\[L\\parallel U\\parallel D_0\\parallel D_1\\parallel D_2\\parallel D_3\\]\n      \\item Convert the checksum into \\textbf{decimal} string form $s_c$, pad with \\texttt{0} on the left side until the length is 10;\n      \\item Let $l_0$, $l_1$, $l_2$ and $l_3$ are the length of $D_0$, $D_1$, $D_2$ and $D_3$, respectively, and convert $l_i$ into decimal string forms $s_{l,i}$;\n      \\item Let $D$ be a string constructed as follows:\n            \\[D=s_{l,0}\\parallel s_{l,1}\\parallel s_{l,2}\\parallel s_{l,3}\\parallel D_0\\parallel D_1\\parallel D_2\\parallel D_3 \\parallel s_c\\]\n\\end{enumerate}\n\n\\subsection{Output}\n\\textbf{The first line:} a string literal: \\texttt{\"RAR registration data\"}.\\\\\n\\textbf{The second line:} $U$.\\\\\n\\textbf{The third line:} $L$.\\\\\n\\textbf{The fourth line:} combined with a string literal \\texttt{\"UID=\"} and $I$.\\\\\n\\textbf{The following lines:} $D$ separated into 7 lines, while the first 6 lines have 54 characters each.\n\n\\section*{Reference}\n\\begin{enumerate}\n      \\item \\url{https://en.wikipedia.org/wiki/WinRAR}\n      \\item \\url{https://github.com/bitcookies/winrar-keygen}\n      \\item \\url{https://github.com/obaby/winrar-keygen}\n      \\item ``GB/T 32918\u20142016: Information security technology\u2014Public key cryptographic algorithm SM2 based on elliptic curves''\n\\end{enumerate}\n\n\\end{document}", "meta": {"hexsha": "0d93dd6daed9e3165c0703dcf0fdf7f42d015b68", "size": 12185, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/rarreg-howto.tex", "max_stars_repo_name": "GreenYun/rarreg", "max_stars_repo_head_hexsha": "914b3f9c1a1521bc40a3b5f467c7337c17583e31", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/rarreg-howto.tex", "max_issues_repo_name": "GreenYun/rarreg", "max_issues_repo_head_hexsha": "914b3f9c1a1521bc40a3b5f467c7337c17583e31", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/rarreg-howto.tex", "max_forks_repo_name": "GreenYun/rarreg", "max_forks_repo_head_hexsha": "914b3f9c1a1521bc40a3b5f467c7337c17583e31", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.925, "max_line_length": 297, "alphanum_fraction": 0.7078375051, "num_tokens": 3815, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8558511543206819, "lm_q2_score": 0.6477982179521103, "lm_q1q2_score": 0.5544188526011944}}
{"text": "\\documentclass{memoir}\n\\usepackage{notestemplate}\n\n%\\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}\n%\\institute{Rice University}\n%\\faculty{Faculty of Whatever Sciences}\n%\\department{Department of Mathematics}\n%\\title{Class Notes}\n%\\subtitle{Based on MATH xxx}\n%\\author{\\textit{Author}\\\\Gabriel \\textsc{Gress}}\n%\\supervisor{Linus \\textsc{Torvalds}}\n%\\context{Well, I was bored...}\n%\\date{\\today}\n\n\\begin{document}\n\n% \\maketitle\n\n% Notes taken on 02/10/21\n\n\\section{Generating Sets}\n\\label{sec:generating_sets}\n\n\\begin{defn}\n\tTake an \\(R\\)-module \\(\\prescript{}{R}M\\) and a subset \\(X \\subset M\\).\n\t\\begin{enumerate}\n\t\t\\item The \\textbf{\\(R\\)-submodule generated by \\(X\\)} is\n\t\t\t\\begin{align*}\n\t\t\t\tRX = \\left\\{r_1x_1+\\ldots+r_mx_m \\mid r_i \\in R, \\; x_i \\in X, \\; m \\in \\Z_{>0} \\right\\} .\n\t\t\t\\end{align*}\n\t\t\tThe set \\(X\\) is called the \\textbf{generating set} of \\(RX\\).\n\t\t\\item An \\(R\\)-submodule \\(\\prescript{}{R}N\\) of \\(\\prescript{}{R}M\\) is \\textbf{finitely generated} if \\(N = RX\\) for \\(\\left| X \\right| <\\infty\\) and \\textbf{cyclic} if \\(N=RX\\) for \\(\\left| X \\right| =1\\).\n\t\\end{enumerate}\n\\end{defn}\n\n\\subsection{Free Modules}\n\\label{sub:free_modules}\n\n\\begin{defn}[Linear independence by \\(R\\)-modules]\n\tWe say that \\(X = \\left\\{x_1,\\ldots,x_n \\right\\} \\) is \\textbf{\\(R\\)-linearly independent} if\n\t\\begin{align*}\n\t\tr_1x_1+\\ldots+r_nx_n = 0 \\implies r_i = 0 \\quad \\forall i = 1,\\ldots,n\n\t\\end{align*}\n\\end{defn}\n\n\\begin{defn}\n\tWe say that an \\(R\\)-module \\(\\prescript{}{R}M\\) is \\textbf{free on the subset \\(X\\)} of \\(M\\) if\n\t\\begin{align*}\n\t\tM = RX\\\\\n\t\tX \\text{ is \\(R\\)-linearly independent}\n\t\\end{align*}\nIn this case, we call \\(X\\) a \\textbf{basis} of \\(\\prescript{}{R}M\\), and sometimes denote \\(\\prescript{}{R}M\\) by \\(F_R(X)\\).\\\\\n\nIf \\(R\\) is commutative, then we call \\(\\left| X \\right| \\) the \\textbf{rank} of \\(\\prescript{}{R}M\\).\n\\end{defn}\nThis illustrates a key difference between vector spaces and modules: vector spaces are always free, while modules need not be.\n\n\\begin{exmp}[Free and non-free modules]\nMost modules have no basis! A free \\(\\Z\\)-module is also called a \\textbf{free abelian group}; lattices in \\(\\R^2\\) are free abelian groups, while finite, non-zero abelian groups are not free.\n\\end{exmp}\n\n\\begin{defn}[R-Matrix]\n\tLet \\(R\\) be a ring. An \\textbf{\\(R\\)-matrix} is a matrix whose entries are in \\(R\\). An \\textbf{invertible \\(R\\)-matrix} is an \\(R\\)-matrix that has an inverse that is also an \\(R\\)-matrix. The \\(n \\times n\\) invertible \\(R\\)-matrices form a group called the \\textbf{general linear group over \\(R\\)}:\n\t\\begin{align*}\n\t\tGL_n(R) = \\left\\{n\\times n \\text{ invertible \\(R\\)-matrices} \\right\\} .\n\t\\end{align*}\n\tThe \\textbf{determinant} of an \\(R\\)-matrix \\(A = (a_{ij})\\) is defined in the usual way\n\t\\begin{align*}\n\t\t\\textrm{det}(A) = \\sum_{\\sigma} \\mathrm{sgn}(\\sigma)\\, a_{1,\\sigma(1)}\\ldots a_{n, \\sigma(n)},\n\t\\end{align*}\n\twhere the sum runs over all permutations \\(\\sigma\\) of the indices and each term carries the sign of the permutation. Of course, all the usual properties of determinants hold for \\(R\\)-matrices.\n\\end{defn}\n\n\\begin{lemma}\n\tLet \\(R\\) be a non-zero ring. 
Then a square \\(R\\)-matrix \\(A\\) is invertible if and only if it has either a left inverse or a right inverse, which in turn holds if and only if its determinant is a unit of the ring. Furthermore, an invertible \\(R\\)-matrix is square.\n\\end{lemma}\n\n\\begin{prop}[Free modules and \\(R\\)-matrices]\n\tLet \\(R\\) be a non-zero ring. Then the matrix \\(P\\) of a change of basis in a free module is an invertible \\(R\\)-matrix. Furthermore, any two bases of the same free module over \\(R\\) have the same cardinality.\n\\end{prop}\nEvery homomorphism \\(f\\) between two free modules is given by left multiplication by an \\(R\\)-matrix.\n\n\\begin{thm}[Universal Property of Free Modules]\n\tFor any set \\(A\\) there is a free \\(R\\)-module \\(F_R(A)\\) on the set \\(A\\), and \\(F_R(A)\\) satisfies the \\textit{universal property}: if \\(\\prescript{}{R}M\\) is any \\(R\\)-module and \\(\\varphi :A\\to M\\) is any map of sets, then there is a unique \\(R\\)-module homomorphism \\(\\Phi:F_R(A) \\to M\\) such that \\(\\Phi (a) = \\varphi (a)\\) for all \\(a \\in A\\). In other words, the following diagram commutes:\n\\begin{center}\n\t\t\t\\begin{tikzpicture}\n  \\matrix (m)\n    [\n      matrix of math nodes,\n      row sep    = 3em,\n      column sep = 4em\n    ]\n    {\n\t    A & F_R(A) \\\\\n\t     & M            \\\\\n    };\n  \\path\n  (m-1-2) edge [->] node [right] {\\(\\Phi \\)} (m-2-2)\n    (m-1-1.east |- m-1-2)\n    edge [->] node [above] {inclusion} (m-1-2)\n      (m-1-1) edge [->] node [below] {$\\varphi$} (m-2-2);\n\\end{tikzpicture}\n\\end{center}\nFurthermore, if \\(A = \\left\\{ a_1,\\ldots,a_n \\right\\} \\), then\n\\begin{align*}\n\tF_R(A) = Ra_1 \\oplus Ra_2 \\oplus \\ldots \\oplus Ra_n \\stackrel{R}{\\cong} R^{n}\n\\end{align*}\n\\end{thm}\nThis corresponds to the notion of free groups from group theory.\n\n\\begin{hw}\n\tIf \\(F_R,F_R'\\) are free modules on the same set \\(A\\), there is a unique isomorphism between \\(F_R\\) and \\(F_R'\\) which is the identity map on \\(A\\).\\\\\n\n\tIf \\(\\prescript{}{R}F\\) is any free \\(R\\)-module with basis \\(A\\), then \\(\\prescript{}{R}F \\cong F_R(A)\\).\n\\end{hw}\nIf we have a free \\(R\\)-module with a basis \\(A\\), the above statement says that we can define \\(R\\)-module homomorphisms from the free module into other \\(R\\)-modules by simply specifying how the homomorphism acts on elements of \\(A\\).\\\\\n\nThe free module \\(F_\\Z(A)\\) is called the \\textbf{free abelian group on \\(A\\)}. 
If \\(A\\) is finite, then we say it is of \\textbf{rank \\(\\left| A \\right| \\)} and is isomorphic to\n\\begin{align*}\n\t\\underbrace{\\Z \\oplus \\Z \\oplus \\ldots \\oplus \\Z}_{\\left| A \\right| \\text{ copies}}.\n\\end{align*}\n\n%---\n%\n%diagonalization of modules etc\n%\n%---\n\n\\end{document}\n", "meta": {"hexsha": "23aacff21716d2c04a89b04e9c209f76056d0d18", "size": 5617, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture8 - GeneratingSets_FreeModules.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture8 - GeneratingSets_FreeModules.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Abstract Algebra - Introductory/Algebra II/Notes/source/Lecture8 - GeneratingSets_FreeModules.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2076923077, "max_line_length": 398, "alphanum_fraction": 0.6523055012, "num_tokens": 1949, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6477982179521103, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.5544188454558933}}
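As a concrete check of the lemma and proposition above (an added example, not from the lecture): over \(R = \Z\), the sets \(\left\{(1,0),(0,1)\right\}\) and \(\left\{(1,1),(0,1)\right\}\) are both bases of the free module \(\Z^2\), and the change-of-basis matrix between them,
\begin{align*}
	P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad \textrm{det}(P) = 1,
\end{align*}
is an invertible \(\Z\)-matrix with inverse \(\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\); its determinant is a unit of \(\Z\). By contrast, \(\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}\) has determinant \(2\), which is not a unit in \(\Z\), so it is invertible over \(\Q\) but not over \(\Z\).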
{"text": "\\input{Homework-header}\n\\newcommand{\\currentHW}{8}\n\n\\begin{document}\n\n\\homeworkOnATopic{}{Prerequisite Exponential Function Topics \\\\\\standardHomeworkSubtitle}{0}{%\n%DesiredHomeworkName: Prerequisite_Exponential_Function_Topics_Not_Tested\n\\item \\input{../../freecalc/modules/logarithms/homework/derivatives-non-const-exponent}\n\\item \\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-problems-2}\n\\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-problems-2-solutions}\n\\item \\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-and-compound-interest}\n\\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-and-compound-interest-solutions}\n\\item \\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-various-interesting-problems}\n}\n\n\\homeworkOnATopic{}{on Lecture 1 \\\\ \\standardHomeworkSubtitle}{1}{\n%DesiredHomeworkName: Inverse_Trig\n\\item \\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-1}\n\\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-1-solutions}\n\\item \\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-2}\n\\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-2-solutions}\n\\item \\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-3}\n\\input{../../freecalc/modules/inverse-trig/homework/trig-evaluated-on-inverse-trig-3-solutions}\n\\item \\input{../../freecalc/modules/inverse-trig/homework/inverse-trig-derivatives}\n\\item \\input{../../freecalc/modules/inverse-trig/homework/artan-sum-problem}\n\\input{../../freecalc/modules/inverse-trig/homework/artan-sum-problem-solution}\n}\n\n\\homeworkOnATopic{}{on Lecture 3 \\\\\\standardHomeworkSubtitle}{3}{\n%DesiredHomeworkName: Integration_By_Parts\n\\item \\input{../../freecalc/modules/integration-by-parts/homework/integration-by-parts2}\n\\input{../../freecalc/modules/integration-by-parts/homework/integration-by-parts2-solutions}\n\\item \\input{../../freecalc/modules/integration-by-parts/homework/integration-by-parts}\n\\input{../../freecalc/modules/integration-by-parts/homework/integration-by-parts-solutions}\n\\item \\input{../../freecalc/modules/integration-by-parts/homework/integrate-x-power-n-e-power-x}\n\\input{../../freecalc/modules/integration-by-parts/homework/integrate-x-power-n-e-power-x-solution}\n}\n\n\\homeworkOnATopic{}{on Lecture 4 \\\\\\standardHomeworkSubtitle}{4}{\n%DesiredHomeworkName: Integration_Rational_Functions_Building_Blocks\n\\item \\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-blocks}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-blocks-solutions}\n\\item \\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-block-II-and-III-parametric}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-block-II-and-III-parametric-solution}\n\\item \\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-block-III-b-parametric}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-building-block-III-b-parametric-solution}\n}\n\n\\homeworkOnATopic{}{on Lecture 5 \\\\ \\standardHomeworkSubtitle}{5}{\n%DesiredHomeworkName: Integration_Rational_Functions_Partial_Fractions\n\\item 
\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-quadratics-in-denominator-1}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-quadratics-in-denominator-1-solutions}\n\\item \\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-complete-1}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-complete-1-solutions}\n\\item \\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-complete-large-example-1}\n\\input{../../freecalc/modules/partial-fractions/homework/integration-rational-functions-complete-large-example-1-solution}\n}\n\n\\homeworkOnATopic{}{on Lecture 6 \\\\ \\standardHomeworkSubtitle}{6}{\n%DesiredHomeworkName: Trig_Integrals\n\\item \\input{../../modules/trig-integrals/homework/rationalizing-substitution-1}\n\\input{../../modules/trig-integrals/homework/rationalizing-substitution-1-solutions}\n\\item \\input{../../modules/trig-integrals/homework/sinn-cosm-1}\n\\item \\input{../../modules/trig-integrals/homework/sin-cos-1}\n\\item \\input{../../modules/trig-integrals/homework/tan-sec-1}\n\\input{../../modules/trig-integrals/homework/tan-sec-1-solutions}\n}\n\n\\homeworkOnATopic{}{on Lecture 7 \\\\\\standardHomeworkSubtitle}{7}{\n%DesiredHomeworkName: Integration_Radicals_Of_Quadratics\n\\item \\input{../../modules/trig-substitution/homework/prepare-radical-of-quadratic-for-integration-complete-square}\n\\input{../../modules/trig-substitution/homework/prepare-radical-of-quadratic-for-integration-complete-square-solutions}\n\\section{Trig or Euler substitution, solutions use trig sub}\n\\subsection{Case 1: $\\sqrt{x^2+1}$}\n\\item \\input{../../modules/trig-substitution/homework/trig-substitution-tan-sec-1}\n\\input{../../modules/trig-substitution/homework/trig-substitution-tan-sec-1-solutions}\n\\subsection{Case 2: $\\sqrt{1-x^2}$}\n\\item \\input{../../modules/trig-substitution/homework/trig-substitution-sin-cos-1}\n\\input{../../modules/trig-substitution/homework/trig-substitution-sin-cos-1-solutions}\n\\section{Trig or Euler substitution, solutions use Euler sub}\n\\subsection{Case 1: $\\sqrt{x^2+1}$}\n\\item \\input{../../modules/trig-substitution/homework/integration-euler-substitution-case1}\n\\input{../../modules/trig-substitution/homework/integration-euler-substitution-case1-solutions}\n\\item \\textbf{This problem will not be quizzed.}\n\\input{../../modules/trig-substitution/homework/integration-euler-substitution-case1-integral-radical-quadratic}\n\\subsection{Case 2: $\\sqrt{1-x^2}$}\n\\item \\input{../../modules/trig-substitution/homework/integration-euler-substitution-case2}\n\\input{../../modules/trig-substitution/homework/integration-euler-substitution-case2-solutions}\n\\subsection{Case 3: $\\sqrt{x^2-1}$}\n\\item \\input{../../modules/trig-substitution/homework/integration-euler-substitution-case3}\n\\section{Theory through problems (Optional homework, will not be quizzed, will not be tested)}\n\\subsection{Case 1: $\\sqrt{x^2+1}$}\n\\subsubsection{$x=\\cot \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case1-cot}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case1-cot-solution}\n\\item 
\\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-cot}\n\\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-cot-solution}\n\\subsubsection{$x=\\tan \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case1-tan}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case1-tan-solution}\n\\item \\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-tan}\n\\subsection{Case 2: $\\sqrt{1-x^2}$}\n\\subsubsection{$x=\\cos \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-cos}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-cos-solution}\n\\item \\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-cos}\n\\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-cos-solution}\n\\subsubsection{$x=\\sin \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-sin}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-sin-solution}\n\\item \\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-sin}\n\\subsection{Case 3: $\\sqrt{x^2-1}$}\n\\subsubsection{$x=\\sec \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-sec}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-sec-solution}\n\\item \\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-sec}\n\\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-sec-solution}\n\\subsubsection{$x=\\csc \\theta$}\n\\item \\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-csc}\n\\input{../../modules/trig-substitution/homework/trig-and-euler-substitution-theory-case2-csc-solution}\n\\item \\input{../../modules/trig-substitution/homework/euler-substitution-theory-alternative-case1-csc}\n}\n\n\\homeworkOnATopic{}{on Lecture 8 \\\\\\standardHomeworkSubtitle}{8}{\n%DesiredHomeworkName: L'Hospital's_Rule\n\\item \\input{../../freecalc/modules/lhospital/homework/lhospital-rule-coming-from-Maclaurin-series-1}\nProblem \\ref{eqProblemLimlixto0(arcsinx-x-x^3/6)/(sin^5 x)} can be done easily using Maclaurin series, but we challenge the student to try it using L'Hospital's rule.\n\n\\input{../../freecalc/modules/lhospital/homework/lhospital-rule-coming-from-Maclaurin-series-1-solutions}\n\n\\item \n\\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-problems-2}\n\\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-problems-2-solutions}\n\\item \n\\input{../../freecalc/modules/logarithms/homework/e-as-a-limit-problems}\n}\n\n\\homeworkOnATopic{}{on Lecture  9\\\\\\standardHomeworkSubtitle}{9}{\n%DesiredHomeworkName: Improper_Integrals\n\\item \\input{../../modules/improper-integrals/homework/improper-integral-convergent-or-divergent-1}\n\\input{../../modules/improper-integrals/homework/improper-integral-convergent-or-divergent-1-solutions}\n\\item \\input{../../modules/improper-integrals/homework/improper-integral-convergent-or-divergent-2}\n}\n\n\\homeworkOnATopic{}{on Lecture 11\\\\ \\standardHomeworkSubtitle}{11}{\n%DesiredHomeworkName: 
Curve_Basics_Polar_Curves\n\\item \\input{../../modules/parametric-curves/homework/match-x-t-and-y-t-graph-to-x-y-graph-1}\n\\item \\input{../../modules/polar-curves/homework/match-polar-curve-graph-to-formula}\n\\item \\input{../../modules/polar-curves/homework/sketch-the-curve-1}\n\\input{../../modules/polar-curves/homework/sketch-the-curve-1-solutions}\n}\n\n\\homeworkOnATopic{}{on Lecture 12 \\\\ \\standardHomeworkSubtitle}{12}{\n%DesiredHomeworkName: Tangents_Curve_Length\n\\item \\input{../../modules/parametric-curves/homework/find-horizontal-vertical-tangents}\n\\item \\input{../../modules/parametric-curves/homework/show-multiple-tangents}\n\\item \\input{../../modules/parametric-curves/homework/curve-length-1}\n\\input{../../modules/parametric-curves/homework/curve-length-1-solutions}\n\\item \\input{../../modules/parametric-curves/homework/curve-length-2}\n}\n\n\\homeworkOnATopic{}{on Lecture 13\\\\ \\standardHomeworkSubtitle}{13}{\n%DesiredHomeworkName: Area_Locked_By_Curves\n\\item \\input{../../modules/parametric-curves/homework/area-under-curve-1}\n\\item \\input{../../modules/polar-curves/homework/area-locked-by-curve-1}\n\\input{../../modules/polar-curves/homework/area-locked-by-curve-1-solutions}\n\\item \\input{../../modules/polar-curves/homework/area-locked-by-curve-2}\n}\n\n\\homeworkOnATopic{}{on Lecture 15 \\\\\\standardHomeworkSubtitle}{15}{\n%DesiredHomeworkName: Sequences\n\\item \\input{../../modules/sequences/homework/sequence-list-sequence-terms-1}\n\\item \\input{../../modules/sequences/homework/sequence-list-sequence-recursive-terms-1}\n\\item \\input{../../modules/sequences/homework/guess-sequence-formula-1}\n\\item\n\\input{../../modules/sequences/homework/convergent-or-divergent-sequence-1}\n\\input{../../modules/sequences/homework/convergent-or-divergent-sequence-1-solutions}\n}\n\n\\homeworkOnATopic{}{on Lecture 16 \\\\\\standardHomeworkSubtitle}{16}{\n%DesiredHomeworkName: Series\n\\item \\input{../../modules/series/homework/express-infinite-decimal-as-rational-1}\n\\input{../../modules/series/homework/express-infinite-decimal-as-rational-1-solutions}\n\\item  \\input{../../modules/series/homework/geometric-series-sum-1} \n\\input{../../modules/series/homework/geometric-series-sum-1-solutions}\n\\item \\input{../../modules/series/homework/telescoping-series-2}\n\\input{../../modules/series/homework/telescoping-series-2-solutions}\n\\item \\textbf{Problems b, c, d will not appear on the quiz.}\n\\input{../../modules/series/homework/telescoping-series-1}\n\\input{../../modules/series/homework/telescoping-series-1-solutions}\n}\n\n\\homeworkOnATopic{}{on Lecture 17 \\\\\\standardHomeworkSubtitle}{17}{\n%DesiredHomeworkName: Basic_Convergence_Divergence_Tests_Integral_Test\n\\item \\input{../../modules/series/homework/series-basic-tests-1}\n\\input{../../modules/series/homework/series-basic-tests-1-solutions}\n\\input{../../modules/series/homework/series-integral-or-comparison-test-1}\n\\input{../../modules/series/homework/series-integral-or-comparison-test-1-solutions}\n}\n\n\\homeworkOnATopic{}{on Lecture 18 \\\\\\standardHomeworkSubtitle}{18}{\n%DesiredHomeworkName: Absolute_Convergence_Ratio_Root_Test\n\\item \\input{../../modules/series/homework/series-ratio-and-or-root-test-2}\n\\input{../../modules/series/homework/series-ratio-and-or-root-test-2-solutions}\n\\item \\input{../../modules/series/homework/convergence-x-to-the-nth-n-factorial-div-n-to-the-n-th}\n}\n\n\\homeworkOnATopic{}{on Lecture 19 
\\\\\\standardHomeworkSubtitle}{19}{\n%DesiredHomeworkName: Power_Series\n\\item \\input{../../modules/power-series/homework/interval-of-convergence-1}\n\\input{../../modules/power-series/homework/interval-of-convergence-1-solutions}\n\\item \\input{../../modules/power-series/homework/find-maclaurin-series-1}\n\\item \\input{../../modules/power-series/homework/find-maclaurin-series-4}\n\\item \\input{../../modules/power-series/homework/find-maclaurin-series-5}\n\\input{../../modules/power-series/homework/find-maclaurin-series-5-solutions}\n\\item \\input{../../modules/power-series/homework/newton-binomial-integer-generalized}\n\\input{../../modules/power-series/homework/newton-binomial-integer-generalized-solution}\n\\item \n\\input{../../modules/power-series/homework/newton-binomial-generalized}\n\\input{../../modules/power-series/homework/newton-binomial-generalized-solution}\n\\item \n\\input{../../modules/power-series/homework/find-maclaurin-series-6}\n\\input{../../modules/power-series/homework/find-maclaurin-series-6-solutions}\n\\item \\input{../../modules/power-series/homework/find-taylor-series-1}\n\\input{../../modules/power-series/homework/find-taylor-series-1-solutions}\n\\item \\input{../../modules/power-series/homework/find-taylor-series-2}\n\\item \\textbf{(This problem is of higher difficulty, it will not appear on the quiz.) }\n\\input{../../modules/power-series/homework/example-non-zero-function-with-zero-maclaurin-series}\n}\n\n\\homeworkOnATopic{}{on Lecture 20\\\\ \\standardHomeworkSubtitle}{20}{\n%DesiredHomeworkName: A_Bit_Of_Differential_Equations\n\\item\n\\input{../../modules/diff-eq-separable/homework/mixing-problem-1}\n\\input{../../modules/diff-eq-separable/homework/mixing-problem-1-solution}\n\\item \\input{../../modules/diff-eq-separable/homework/separable-DFQs-1}\n\\input{../../modules/diff-eq-separable/homework/separable-DFQs-1-solution}\n}\n\n\\homeworkOnATopic{}{on Lecture 21 \\\\\\standardHomeworkSubtitle}{21}{\n%DesiredHomeworkName: Complex_Numbers\n\\item \\input{../../modules/complex-numbers/homework/plot-complex-number-find-principal-argument-1}\n\\input{../../modules/complex-numbers/homework/plot-complex-number-find-principal-argument-1-solutions}\n\\item \\input{../../modules/complex-numbers/homework/complex-number-arithmetics-1}\n\\input{../../modules/complex-numbers/homework/complex-number-arithmetics-1-solutions}\n\\item \\input{../../modules/complex-numbers/homework/find-nth-root-1}\n\\input{../../modules/complex-numbers/homework/find-nth-root-1-solutions}\n\\item \\input{../../modules/complex-numbers/homework/polar-form-complex-numbers-1}\n\\item \\input{../../modules/complex-numbers/homework/trig-identities-via-Euler-formula-1}\n}\n\n\n\\homeworkOnATopic{}{Review problems for the final\\\\ This is a subset of the Master Problem Sheet}{22}{\n%DesiredHomeworkName: Review_Problems_Final\n\\item Problems that have appeared past final(s):\n\\begin{enumerate}\n\\item Problem \\ref{problemint(3x^2+2x-1)/((x-1)(x^2+1))dx}.\n\\item Problem \\ref{problemint(sqrt(9-x^2)/(x^2)dx)}.\n\\item Problem \\ref{problemConvergencesum_2^infty1/(xlnx)dx} (the problem was formulated slightly differently - as an improper integral).\n\\item Problem \\ref{problemsumn=0^infty(2^n+5^n)/10^n}.\n\\item Problem \\ref{problemsumn=2^inftyln(1-1/n^2)}.\n\\item Problem \\ref{problemConvergencesumn=1^infty(-1)^nlnn}.\n\\item Problem \\ref{problemConvergencesumn=2^infty(-1)^n/lnn}.\n\\item Problem \\ref{problemTaylorlnxarounda=2}.\n\\item Problem 
\\ref{problemIntervalConvergence_sum(x-2)^n/(3sqrt(n+1))}.\n\\item Problem \\ref{problemlengthx=sqrt(t)-2t,y=8/3t^(3/4)}.\n\\item Problem \\ref{problem-Area-swept-byr=sin2theta_outsider=1/2}.\n\\item Problem \\ref{problemy'=y^2(1+x),y(0)=3}. \n\\end{enumerate}\n\\item  \\input{../../modules/partial-fractions/homework/integration-rational-functions-complete-1}\n\\input{../../modules/partial-fractions/homework/integration-rational-functions-complete-1-solutions}\n\\item \\input{../../modules/trig-substitution/homework/trig-substitution-tan-sec-1}\n\\input{../../modules/trig-substitution/homework/trig-substitution-tan-sec-1-solutions}\n\\item \\input{../../modules/trig-substitution/homework/trig-substitution-sin-cos-1}\n\\input{../../modules/trig-substitution/homework/trig-substitution-sin-cos-1-solutions}\n\\item \\input{../../modules/integration-by-parts/homework/integration-by-parts2}\n\\input{../../modules/integration-by-parts/homework/integration-by-parts2-solutions}\n\\item \\input{../../modules/series/homework/series-integral-or-comparison-test-1}\n\\input{../../modules/series/homework/series-integral-or-comparison-test-1-solutions}\n\\item \\input{../../modules/lhospital/homework/lhospital-rule-coming-from-Maclaurin-series-1}\n\\item \\input{../../modules/series/homework/geometric-series-sum-1}\n\\input{../../modules/series/homework/geometric-series-sum-1-solutions}\n\\item \\input{../../modules/series/homework/telescoping-series-2}\n\\input{../../modules/series/homework/telescoping-series-2-solutions}\n\\item \\input{../../modules/series/homework/series-basic-tests-1}\n\\input{../../modules/series/homework/series-basic-tests-1-solutions}\n\\item \\input{../../modules/power-series/homework/find-maclaurin-series-4}\n\\item \\input{../../modules/power-series/homework/find-taylor-series-1}\n\\input{../../modules/power-series/homework/find-taylor-series-1-solutions}\n\\item \n\\input{../../modules/power-series/homework/interval-of-convergence-1}\n\\input{../../modules/power-series/homework/interval-of-convergence-1-solutions}\n\\item \n\\input{../../modules/parametric-curves/homework/curve-length-1}\n\\input{../../modules/parametric-curves/homework/curve-length-1-solutions}\n\\item \\input{../../modules/sequences/homework/convergent-or-divergent-sequence-1}\n\\input{../../modules/sequences/homework/convergent-or-divergent-sequence-1-solutions}\n\\item \n\\input{../../modules/diff-eq-separable/homework/separable-DFQs-1}\n\\input{../../modules/diff-eq-separable/homework/separable-DFQs-1-solution}\n\\item  \\textbf{This problem type will appear on the final as a bonus. 
We have not studied the material for this problem type.}\n\\input{../../modules/polar-curves/homework/area-locked-by-curve-1}\n\\input{../../modules/polar-curves/homework/area-locked-by-curve-1-solutions}\n}\n\\end{document}", "meta": {"hexsha": "e7767eafe15eb3f887d313ddeb04f8bc8ecebf7a", "size": 19533, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX-materials/Curriculum-400-Calculus-II/Homework.tex", "max_stars_repo_name": "tmilev/courses_calculator", "max_stars_repo_head_hexsha": "dd67435dd7a25c59afcefb9ef3b1f004246eca81", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LaTeX-materials/Curriculum-400-Calculus-II/Homework.tex", "max_issues_repo_name": "tmilev/courses_calculator", "max_issues_repo_head_hexsha": "dd67435dd7a25c59afcefb9ef3b1f004246eca81", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LaTeX-materials/Curriculum-400-Calculus-II/Homework.tex", "max_forks_repo_name": "tmilev/courses_calculator", "max_forks_repo_head_hexsha": "dd67435dd7a25c59afcefb9ef3b1f004246eca81", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.0426229508, "max_line_length": 166, "alphanum_fraction": 0.7738186658, "num_tokens": 5629, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.7520125737597972, "lm_q1q2_score": 0.5543522071464393}}
{"text": "%!TEX root = TDT4265-Summary.tex\r\n\\section{Object recognition}\r\nObject recognition is the task of finding a specific object within an image.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{Patterns and pattern classes}\r\nPatterns are arrangements of descriptors (or ``features''). Pattern recognition is assigning observed patterns to a pattern class. Usually patterns are represented as vectors (quantitative), strings, or trees (structural). When using pattern vectors, the choice of descriptors is super important.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection[Decision-theoretic methods]{Recognition based on decision-theoretic methods}\r\nBased on decision functions. For pattern classes $\\omega_1, \\dots, \\omega_W$, we need functions $d_1(\\x), \\dots, d_W(\\x)$, so that if $\\x$ belongs to class $\\omega_i$, then $d_i(\\x)$ has the largest value. The decision boundary between two classes is the set of values of $\\x$ where the two decision functions have the same value.\r\n\r\n\\subsubsection{Matching}\r\n\r\n\\paragraph{Minimum distance matching} This method matches candidate vectors $\\x$ to prototype vectors $\\m$ that represent each class. A candidate belongs to the class with the nearest prototype (``nearness'' measured with some vector norm).\r\n\r\n\\paragraph{Correlation matching}\r\nUsing a template, you can shift it around on an image to find out where it fits best (in the sense of maximum correlation). This can easily be normalized for intensity changes, but not so easily for size and rotation.\r\n\r\n\\subsubsection{Optimum statistical classifiers}\r\nWe can also formulate a classification that is optimal in the sense of minimizing classification errors.\r\n\r\n\\subsubsection{Neural networks}\r\nNeural networks can be used to classify images and image components. More on this somewhere else.\r\n\r\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\\subsection{Feature-based object recognition}\r\nThis method depends on a lower-level method to extract features and interest points from images. These can then be matched between images to determine the location of an object. Usually your input is an image of the object alone, and an image of the object in a scene. The output is the location and orientation of the object within the scene. SIFT in Section \\ref{ssec:sift} is one way to extract feature descriptors.\r\n\r\n\\subsubsection{RANSAC}\r\nRANSAC (random sample consensus) is a method of outlier detection that can be used when matching feature vectors between images, to remove false matches. It goes like this:\r\n\\begin{enumerate}\r\n    \\item Repeat $N$ times:\r\n    \\begin{enumerate}\r\n        \\item Choose a subset $s \\subseteq S$ of points and fit your model to them.\r\n        \\item See how many points in $S$ match the model.\r\n    \\end{enumerate}\r\n    \\item Choose the best fitting model as your result.\r\n\\end{enumerate}\r\nHere, $S$ is the set of all your data points. $s$ is a subset large enough to define your model (2 for a line, 8 for a fundamental matrix, etc.). $N$ is a predetermined number of iterations defined to guarantee a certain probability of finding a good model.\r\n\r\nRANSAC can do robust estimation (accurate estimation despite many outliers). However, it cannot guarantee with finite $N$ that it finds a good model; it can only guarantee it to a specified probability. It needs very many iterations for sets with many outliers, but variations exist to solve this. 
The number of iterations necessary to guarantee a good model with probability $p$, given outlier ratio $\\epsilon$ is\r\n\\begin{equation}\r\n    N = \\frac{\\log(1-p)}{\\log(1-(1-\\epsilon)^s)}.\r\n\\end{equation}\r\n", "meta": {"hexsha": "775f8451415c00413d958ef5086f3233f96f83aa", "size": 3624, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "TDT4265 Computer vision/12-object-recognition.tex", "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_issues_repo_path": "TDT4265 Computer vision/12-object-recognition.tex", "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TDT4265 Computer vision/12-object-recognition.tex", "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.7826086957, "max_line_length": 419, "alphanum_fraction": 0.7284768212, "num_tokens": 763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7520125737597971, "lm_q2_score": 0.7371581568543044, "lm_q1q2_score": 0.5543522028040337}}
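The iteration count from the closing formula is cheap to evaluate. The following small Python sketch (the function name is mine, not from the course notes) reproduces the usual figures:

\begin{verbatim}
import math

def ransac_iterations(p, eps, s):
    """N such that, with probability p, at least one random sample
    of size s is outlier-free, given outlier ratio eps."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** s))

print(ransac_iterations(0.99, 0.3, 2))   # line fitting: 7 iterations
print(ransac_iterations(0.99, 0.3, 8))   # fundamental matrix: 78 iterations
\end{verbatim}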
{"text": "%!TEX root =  ../main.tex\n\n\\subsection{$xy\\theta$}\n\n\\objective{Produce and derive trigonometric function graphs}\n\n\nOn a piece of paper or on the screen of an electronic device, we typically graph only two\nvariables.  3D graphs are usually ``faked'', as in orthographic projections or the like.  The \nUnit Circle is actually three things graphed at once, and so somewhat obscures each:\nthe height (sine), the width (cosine), and the angle turned ($\\theta$).  We will tease apart\nvarious attributes of the Unit Circle and their relationships to $\\theta$, in order to understand\neach more clearly.\n\n\\subsubsection{$y\\theta$}\n\\marginfig[-0in]{\\chapdir/pics/TIsinx}{$y=\\sin(x)$ on the TI-8*}\nIf we graph the height we have ascended to as the dependent variable, and the angle we\nhave turned as the independent variable, this corresponds to $y=\\sin(x)$.  This is a good\nbeginning, because we are graphing the $y$ of the Unit Circle as our $y$, but we have\nchanged the $x$ to $\\theta$.  Enter this equation in your TI-8* and hit ZOOM-TRIG, being\nsure to be in radians mode.\n\nThe graph has not been shifted up or down at all, so its midline or \\textbf{axis} is the line\n$y=0$.  The graph oscillates as much as one up or one down from that axis, so 1\nis the \\textbf{amplitude}.  We know that the pattern of the Unit Circle (relative to\nthe angle turned) takes $360^\\circ$ or $\\tau$ radians to repeat, so that should\nbe the \\textbf{period}.  What has the calculator done to make us get back to the\nsame point on the wave in four tick marks?  Press WINDOW, and you will see that\nthe Xscl is $1.57\\dots$, a.k.a. $\\frac{\\tau}{4}$.\n\nNotice how the graph spends a great deal of time near the origin looking like $y=x$.\nIn other words, its derivative at 0 is 1.  Practice moving your writing hand in a smooth\nwave, tracing the unit circle at a consistent pace, but focusing your attention on your\nheight above and below the $x$-axis.  This should feel the same as tracing $y=\\sin(x)$.\nBecause this graph is so ubiquitous in nature, anything like it is described with the\nadjective \\textbf{sinusoidal}, and anything which moves as it does is said to be in \n\\textbf{simple harmonic motion}.\n\n\\subsubsection{$x\\theta$}\nA cosine graph is very, very similar.  Tracing the Unit Circle while focusing on your\n$x$ position is nearly identical, except the cycle begins at 1.  Change your TI-8* to\n$y=\\cos(x)$ but change nothing else.  Now you are graphing $x$ from the Unit Circle\non $y$, and $\\theta$ from the Unit Circle on $x$.\n\n\\begin{example}{Watermill}\n\\exProblem\nA mill is powered by a water-wheel in a river, which is 12 feet in diameter.  You observe\nthat it takes 20 seconds to complete a rotation and only the bottom 1ft is in the water.\nThere is a flag or marker at the top of the wheel right now.\nCreate an equation to model the water-wheel's behavior, graph it, and determine the\ntime the flag will go into the water, and how long it spends underwater each rotation.\n\n\n\\exSolution\n\nWe begin by reasoning from the Unit Circle to the water-wheel.  Unit Circle graphs have an\namplitude of 1 because they are waves on a circle of radius 1.  The water-wheel has\na radius of 6 ft, so that will be our amplitude.  Graphically, that would make our wave\nsix times taller than normal, so we need a vertical dilation of 6.  So far we have\n$$\ny=6\\sin(x)\n$$\n\\marginfig[-0in]{\\chapdir/pics/6sinx}{$y=6\\sin(x)$}\nThis produces a graph as in Fig.~\\ref{fig:6sinx}.  
The water-wheel must have its axis\ntranslated, since only the bottom 1ft is in the water.  With an amplitude of 6ft, our current\nversion is 5ft too low, so we can revise our equation to $y=6\\sin(x)+5$.  We also note\nthat as a sine wave, our graph begins in the middle and is rising, whereas we need to\nmodel an object beginning at the top.  Therefore, we switch our equation to a cosine \nfunction.\n\nThe normal period of a sinusoidal wave is $\\tau$, but we need the period to be 20.\nHorizontal dilation is accomplished by dividing --- not multiplying, as in vertical\ndilation --- so we must ask what to divide $\\tau$ by in order to get 20.  The answer is\n$\\frac{\\tau}{20}$.  Our final equation is (including the form we must use in the TI-8*):\n$$\ny=6\\cos\\frac{\\tau}{20}(x)+5 = 6\\cos\\frac{\\pi}{10}(x)+5\n$$\n\nAccording to the zero function of the grapher, the height is zero at $x=8.135705$\nand again at $x=11.864295$, meaning the flag will go under just after 8 seconds\nfrom now, and be underwater for a little less than 4 seconds.\n\n\\marginfig[1in]{\\chapdir/pics/waterwheelgraph}{The entire mill setup visualized}\n\\end{example}\n\n", "meta": {"hexsha": "09a0caaf4ce76f08ce519b18537207fd1415d0c1", "size": 4560, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch09/0904.tex", "max_stars_repo_name": "aquatiki/AnalysisTextbook", "max_stars_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-10-08T15:05:17.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-07T12:32:53.000Z", "max_issues_repo_path": "ch09/0904.tex", "max_issues_repo_name": "aquatiki/AnalysisTextbook", "max_issues_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch09/0904.tex", "max_forks_repo_name": "aquatiki/AnalysisTextbook", "max_forks_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.023255814, "max_line_length": 97, "alphanum_fraction": 0.7453947368, "num_tokens": 1257, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.752012562644147, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.554352198952447}}
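As a quick numeric check of the Watermill example (an added sketch, with variable names of my choosing), the grapher's zero-function results can be reproduced in a few lines of Python:

\begin{verbatim}
import math

tau = 2 * math.pi

def height(t):
    """Flag height above the water, in feet, t seconds from now."""
    return 6 * math.cos(tau / 20 * t) + 5

# Solve 6*cos(tau/20 * t) + 5 = 0 on the first descent:
t_in = (20 / tau) * math.acos(-5 / 6)   # ~8.1357 s: flag enters the water
t_out = 20 - t_in                       # ~11.8643 s: flag re-emerges
print(t_in, t_out, t_out - t_in)        # underwater for about 3.73 s
\end{verbatim}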
{"text": "\n\\chapter{Background Research on Rare-Event Simulation}  \n\\label{ch:Background Research on Rare Event Simulation}\nIn this chapter, background research on rare-event simulation is reviewed, including the models and the simulation algorithms. First, a descriptive presentation of the different models is conducted, where both Markovian and non-Markovian examples are given...\n\n\\section{Models, Markovian and Non-Markovian}\n\\label{sec:Models, Markovian and Non-Markovian}\nIn the current work, different types of models are considered, where one of the main classifications is whether or not memory is required. As introduced, a model without a memory requirement is considered Markovian, and conversely one that requires memory is non-Markovian. A Markovian model is a stochastic model that changes based on a random factor, in which the future state depends only on the current one, not on previous states or events leading to it. A non-Markovian model does not fulfill this property; therefore, knowing only the current state is not enough to completely describe the current situation. In a non-Markovian model the path of states followed and the events that led to them are relevant as well.\n\n\\subsection{Markov Chain}\n\\label{sub:Markov Chain}\nA Markov chain model is a static Markovian model, composed of states and transitions, where each state has incoming and outgoing transitions. Each of the outgoing transitions has a probability of being followed, and by definition, this probability only depends on the current state, not on previously visited states. Therefore, if a simulation is run over a Markov chain and at two different points in time the same state is reached, there is no difference between the execution status at those points, and all the following possible paths have the same probability in both scenarios. A simple Markov chain with three states and their transitions is presented in Figure~\\ref{fig:Simple Markov Chain Model}. The values shown over the transitions correspond to their rates, and the probability of a transition is its rate divided by the sum of the rates of all outgoing transitions of the state.\n\n\\begin{figure}[t]\n\\center\n\\includegraphics[width=.25\\textwidth]{Model_Markov_Chain_Simple.pdf}\n\\caption{Simple Markov Chain Model}\n\\label{fig:Simple Markov Chain Model}\n\\end{figure}\n\nThe distribution of the time remaining in a state is memoryless, which is modeled with an exponential distribution: $f(x) = \\lambda e^{-\\lambda x}$, $F(x) = 1 - e^{-\\lambda x} \\ (x \\geq 0)$ \\cite{haas:spn}. The average firing time for a transition with rate $\\lambda$ is $\\lambda^{-1}$. Moreover, in a state with $t$ outgoing transitions the average time spent is:\n$$ Average\\ Time = \\displaystyle\\frac{\\displaystyle 1}{\\displaystyle\\sum\\limits_{i=1}^{t} (rate_i)} $$\n\n\nIn the example presented in Figure~\\ref{fig:Simple Markov Chain Model} the transition matrix is:\n\n\\[ Q = \\left( \\begin{array}{ccc}\n-7 & 4 & 3 \\\\\n3 & -4 & 1 \\\\\n2 & 0 & -2 \\end{array} \\right)\\]\n\n\n\\section{Monte Carlo Simulation Algorithm} \nA Monte Carlo simulation algorithm follows a standard Monte Carlo method \\cite{montecalro}. Whenever a decision needs to be made, a random result is generated based on the probability of each of the possible choices. 
Considering a Markov chain as the underlying model for a simulation executing a Monte Carlo algorithm, given the current state there are some outgoing transitions, each of them with a probability of being executed next...\n\n\\input{sourcecode/Algorithm_Monte_Carlo}\n", "meta": {"hexsha": "7ecbcae8046b05053aae42d36f79541af463cb5e", "size": 3483, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content/Thesis_2_Background.tex", "max_stars_repo_name": "tuiSSE/sse-thesis-template", "max_stars_repo_head_hexsha": "5a081682cd2ea4176effc47d263f47de5d5b3f79", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-04-13T17:01:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T21:05:30.000Z", "max_issues_repo_path": "content/Thesis_2_Background.tex", "max_issues_repo_name": "tuiSSE/sse-thesis-template", "max_issues_repo_head_hexsha": "5a081682cd2ea4176effc47d263f47de5d5b3f79", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2015-12-08T10:32:15.000Z", "max_issues_repo_issues_event_max_datetime": "2017-10-26T13:19:47.000Z", "max_forks_repo_path": "content/Thesis_2_Background.tex", "max_forks_repo_name": "tuiSSE/sse-template-deu", "max_forks_repo_head_hexsha": "5a081682cd2ea4176effc47d263f47de5d5b3f79", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2017-07-29T16:24:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-28T15:49:24.000Z", "avg_line_length": 94.1351351351, "max_line_length": 874, "alphanum_fraction": 0.7881136951, "num_tokens": 799, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.752012562644147, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.554352198952447}}
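To make the sampling step concrete, here is a minimal Python sketch of a Monte Carlo run over the three-state chain of Figure~\ref{fig:Simple Markov Chain Model}, using the off-diagonal rates of the matrix $Q$ above; the data structure and names are illustrative, not taken from the cited algorithm:

\begin{verbatim}
import random

# Outgoing rates per state: the off-diagonal entries of each row of Q.
RATES = {0: {1: 4, 2: 3}, 1: {0: 3, 2: 1}, 2: {0: 2}}

def step(state):
    """One Monte Carlo move: sample the holding time and the successor."""
    out = RATES[state]
    exit_rate = sum(out.values())          # e.g. 7 for state 0
    dt = random.expovariate(exit_rate)     # memoryless sojourn time
    nxt = random.choices(list(out), weights=list(out.values()))[0]
    return nxt, dt

state, clock = 0, 0.0
for _ in range(10):                        # simulate 10 transitions
    state, dt = step(state)
    clock += dt
print(state, clock)
\end{verbatim}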
{"text": "\\section{Fitness Function}\\label{sec:fitness}\nTo determine the fitness of individuals, we use the distance travelled between cities when they are visited in the order of the priorities calculated by the individual's parse tree. To be more specific, for each city the priority is determined using the parse tree of the individual in question; then, after the priorities for all cities have been calculated, the cities are ordered by their priorities to provide a route (or solution) for the TSP. Of course, using this calculation for the fitness of an individual means an individual with a lower fitness value is deemed better than one with a higher fitness value. Algorithm \\ref{alg:fitness} depicts the fitness function in its entirety.\n\n\\begin{algorithm}[H]\\label{alg:fitness}\n\\SetAlgoLined\n \\ForEach{city in cities}{\n   city.priority = individual.decisionTree.calcPriority(city)\\;\n }\n \\BlankLine\n cities.orderBy(city.priority)\\;\n \\BlankLine\n return calcDistance(cities)\\;\n \\caption{Fitness Function}\n\\end{algorithm}\n", "meta": {"hexsha": "35487fd91697f932267c2010b6d34272116e4690", "size": 991, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/report/03_fitness/fitness.tex", "max_stars_repo_name": "marcus-bornman/cos_790_assignment_3", "max_stars_repo_head_hexsha": "662fb0a2ec1b442ac702f195584aa9b913eae7c6", "max_stars_repo_licenses": ["AFL-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/report/03_fitness/fitness.tex", "max_issues_repo_name": "marcus-bornman/cos_790_assignment_3", "max_issues_repo_head_hexsha": "662fb0a2ec1b442ac702f195584aa9b913eae7c6", "max_issues_repo_licenses": ["AFL-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/report/03_fitness/fitness.tex", "max_forks_repo_name": "marcus-bornman/cos_790_assignment_3", "max_forks_repo_head_hexsha": "662fb0a2ec1b442ac702f195584aa9b913eae7c6", "max_forks_repo_licenses": ["AFL-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.0666666667, "max_line_length": 663, "alphanum_fraction": 0.7941473259, "num_tokens": 213, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8244619436290699, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5543118940557804}}
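For readers who prefer code to pseudocode, here is a minimal Python sketch of Algorithm~\ref{alg:fitness}. The \texttt{calc\_priority} method and the \texttt{pos} attribute are hypothetical stand-ins for the individual's \texttt{decisionTree.calcPriority} and for whatever city representation is in use, and the route is scored as an open path, which is one plausible reading of \texttt{calcDistance}:

\begin{verbatim}
from math import dist  # Euclidean distance (Python 3.8+)

def fitness(individual, cities):
    """Lower is better: route length when cities are visited in order
    of the priorities produced by the individual's parse tree."""
    route = sorted(cities, key=lambda c: individual.calc_priority(c))
    return sum(dist(a.pos, b.pos) for a, b in zip(route, route[1:]))
\end{verbatim}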
{"text": "\\documentclass[a4paper, 11pt]{article}\n\\usepackage{fullpage, amssymb, amsmath, graphicx, bm} \n\n\\newcommand{\\mytitle}{}\n\n\\begin{document}\n\\noindent\n\\large\\textbf{\\mytitle} \\\\ \\\\ Tyler Gordon \\\\\n\\normalsize \\today \n\\ \\ \\hrulefill\n\\section{Input coordinates in second dimension are same as data coordinates}\nIn the 1D case, the predictive distribution of a GP at coordinates $\\bm{y^*} = (t^*_1, t^*_2, \\ldots, t^*_p)^T$ has a mean given by \n\\begin{equation}\n\t\\mu_p^* = \\mu_\\theta(t_p^*) + K(t_p^*, \\bm{t})\\bm{z}\n\\end{equation}\nwhere $\\bm{z} = K(\\bm{t}, \\bm{t})^{-1}[\\bm{y} - \\bm{\\mu_\\theta}(\\bm{t})]$. In the 2D case for a GP with covariance in the second dimension \ndefined by the covariance matrix $Q(\\bm{u},\\bm{u})$, the predictive distribution is given by \n\\begin{equation}\n\t\\bm{\\mu_p}^* = \\bm{\\mu_\\theta}(t_p^*) + K(t_p^*, \\bm{t})X\n\\end{equation}\nwhere $X_p = Q(\\bm{u}, \\bm{u})\\bm{z}_{pM:(p+1)M}$ is a vector of length $M$ and \n$\\bm{z} = K_{2D}^{-1}[\\bm{y} - \\bm{\\mu_\\theta}(\\bm{t})]$ where $K_{2D}$ is the full \n2D covariance matrix and $\\bm{y} = [y(t_1, u_1), y(t_1, u_2), \\ldots, y(t_1, u_M), y(t_2, u_1), \\ldots, y(t_N, u_M)]^T$ is the dataset on which \nthe GP is conditioned. The algorithm for computing the predictive distribution is given by equations B26--B30 in FM17 with the \nsubstitution of $X$ for $\\bm{z}$. The result is a matrix $\\mu^* \\in \\mathbb{R}^{N \\times M}$ where $\\mu_{n, m}$ is the predicted mean \nat coordinates $(t_n, u_m)$. \n\\section{Input coordinates in second dimension are different from data coordinates}\nWhen the input coordinates for the second dimension do not coincide with the data coordinates, the matrix $X$ becomes \n\\begin{equation}\n\tX_p = Q(\\bm{u}^*,\\bm{u})\\bm{z}_{pM:(p+1)M},\n\\end{equation}\nfrom which we see that $\\bm{X}_p$ is just the mean of the predictive distribution for a 1D process with kernel $Q$ at input coordinates $\\bm{u}^*$. \nIf the covariance in the second dimension is also defined by a celerite kernel, the cost of computing each $\\bm{X}_p$ is  \n$\\mathcal{O}(mM + rR)$ where $M$ is the number of data coordinates in the second dimension, $R$ is the number of \ninput coordinates, and $m$ and $r$ are some constants. 
Therefore the total cost of computing $X$ is $\\mathcal{O}(mMP + rRP)$, \nand the total cost of the 2D prediction algorithm is $\\mathcal{O}(mMP + rRP + pP + nN) \\sim \\mathcal{O}((mM+rR+p)P + nN)$\n\\end{document}\n", "meta": {"hexsha": "79401561d7298aaa3c7e9184ee686d4751fb1ceb", "size": 2351, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/predict.tex", "max_stars_repo_name": "tagordon/celerite2d.jl", "max_stars_repo_head_hexsha": "7fa487b1b1283f574559c06398449b6ba1dcf3ee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-19T15:43:04.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-19T15:43:04.000Z", "max_issues_repo_path": "notes/predict.tex", "max_issues_repo_name": "tagordon/celerite2d.jl", "max_issues_repo_head_hexsha": "7fa487b1b1283f574559c06398449b6ba1dcf3ee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/predict.tex", "max_forks_repo_name": "tagordon/celerite2d.jl", "max_forks_repo_head_hexsha": "7fa487b1b1283f574559c06398449b6ba1dcf3ee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.8684210526, "max_line_length": 146, "alphanum_fraction": 0.681412165, "num_tokens": 821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971211, "lm_q2_score": 0.6859494614282923, "lm_q1q2_score": 0.554293276275541}}
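As a shape-level illustration of how $X$ is assembled from $\bm{z}$ (a dense NumPy sketch under my reading of the slicing above, not the fast celerite solve whose cost is quoted):

\begin{verbatim}
import numpy as np

def predict_X(Q_star_u, z, M):
    """Stack X_p = Q(u*, u) @ z[p*M:(p+1)*M], one row per p.
    Q_star_u has shape (R, M); z has length N*M."""
    N = len(z) // M
    return np.stack([Q_star_u @ z[p * M:(p + 1) * M] for p in range(N)])
\end{verbatim}

In practice each product would be carried out with the $\mathcal{O}(mM + rR)$ celerite prediction rather than a dense $R \times M$ matrix.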
{"text": "\\chapter{Loops and strings}\n\nComputers are often used to automate repetitive tasks, such as searching for text in documents.\nRepeating tasks without making errors is something that computers do well and people do poorly.\n\nIn this chapter, we'll learn how to use \\java{while} and \\java{for} loops to add repetition to your code.\nWe'll also take a first look at \\java{String} methods and solve some interesting problems.\n\n%We have seen methods, like \\java{countdown} and \\java{factorial}, that use recursion to iterate.\n%Although recursion is elegant and powerful, it takes some getting used to.\n%Java provides language features that make iteration much easier: the \\java{while} and \\java{for} statements.\n\n\n\\section{The while statement}\n\\label{The_while_statement}\n\n\\index{while}\n\\index{loop!while}\n\\index{statement!while}\n\nUsing a \\java{while} statement, we can repeat the same code multiple times:\n\n%slr: 8-16-19\n%\\begin{code}\n%int n = 3;\n%while (n > 0) {\n%    System.out.println(n);\n%    n = n - 1;\n%}\n%System.out.println(\"Blastoff!\");\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [230] {Blastoff.java}\npublic class Blastoff {\n\n    public static void main(String[] args) {\n       int n = 3;\n       while (n > 0) {\n          System.out.println(n);\n          n = n - 1;\n       }\n       System.out.println(\"Blastoff!\");\n    }\n}\n\\end{trinket}\n\nRun the program.  What do you see?\n\nReading the code in English sounds like: ``Start with \\java{n} set to 3.\nWhile \\java{n} is greater than zero, print the value of \\java{n}, and reduce the value of \\java{n} by 1.\nWhen you get to zero, print Blastoff!''\n%So the output is:\n%\n%\\begin{stdout}\n%3\n%2\n%1\n%Blastoff!\n%\\end{stdout}\n%slr: end 8-16-19\n\nThe flow of execution for a \\java{while} statement is:\n\n\\begin{enumerate}\n\n\\item Evaluate the condition in parentheses, yielding \\java{true} or \\java{false}.\n\n\\item If the condition is \\java{false}, skip the following statements in braces.\n\n\\item If the condition is \\java{true}, execute the statements and go back to step 1.\n\n\\end{enumerate}\n\n\\index{loop}\n\nThis type of flow is called a {\\bf loop}, because the last step ``loops back around'' to the first.\nFigure~\\ref{fig.while} shows this idea using a flowchart.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.9]{figs/while.pdf}\n\\caption{Flow of execution for a \\java{while} loop.}\n\\label{fig.while}\n\\end{center}\n\\end{figure}\n\n\\index{loop body}\n\\index{infinite loop}\n\\index{loop!infinite}\n\nThe {\\bf body} of the loop should change the value of one or more variables so that, eventually, the condition becomes \\java{false} and the loop terminates.\nOtherwise the loop will repeat forever, which is called an {\\bf infinite loop}.\n\n\\begin{code}\nint n = 3;\nwhile (n > 0) {\n    System.out.println(n);\n    // n never changes\n}\n\\end{code}\n\nThis example will print the number \\java{3} forever, or at least until you terminate the program.\nAn endless source of amusement for computer scientists is the observation that the directions on shampoo, ``Lather, rinse, repeat,'' are an infinite loop.\n\nIn the first example, we can prove that the loop terminates when \\java{n} is positive.\nBut in general, it is not so easy to tell whether a loop terminates.\nFor example, this loop continues until \\java{n} is 1 (which makes the condition \\java{false}):\n\n%slr: 8-16-19\n%\\begin{code}\n%while (n != 1) {\n%    
System.out.println(n);\n%    if (n % 2 == 0) {         // n is even\n%        n = n / 2;\n%    } else {                  // n is odd\n%        n = 3 * n + 1;\n%    }\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [270] {Collatz.java}\npublic class Collatz {\n\n    public static void main(String[] args) {\n       int n = 3;       \n       while (n != 1) {\n          System.out.println(n);\n          if (n % 2 == 0) {         // n is even\n             n = n / 2;\n          } else {                  // n is odd\n             n = 3 * n + 1;\n          }\n       }\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\nEach time through the loop, the program displays the value of \\java{n} and then checks whether it is even or odd.\nIf it is even, the value of \\java{n} is divided by two.\nIf it is odd, the value is replaced by $3n+1$.\nFor example, if the starting value is 3, the resulting sequence is 3, 10, 5, 16, 8, 4, 2, 1.\n\nSince \\java{n} sometimes increases and sometimes decreases, there is no obvious proof that \\java{n} will ever reach 1 and that the program will ever terminate.\nFor some values of \\java{n}, such as the powers of two, we can prove that it terminates.\nThe previous example ends with such a sequence, starting when \\java{n} is 16 (or $2^4$).\n\nThe hard question is whether this program terminates for {\\em all} values of n.\nSo far, no one has been able to prove it {\\em or} disprove it!\nFor more information, see \\url{https://en.wikipedia.org/wiki/Collatz_conjecture}.\n%The field of computer science is interested in these types of questions, because their answers give insight to the limits of what computers can and cannot do.\n\n\n\\section{Increment and decrement}\n\nHere is another \\java{while} loop example; this one displays the numbers 1 to 5.\n\n%slr 8-16-19\n%\\begin{code}\n%int i = 1;\n%while (i <= 5) {\n%    System.out.println(i);\n%    i++;  // add 1 to i\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [220] {Increment.java}\npublic class Increment {\n\n    public static void main(String[] args) {\n       int i = 1;\n       while (i <= 5) {\n          System.out.println(i);\n          i++;  // add 1 to i\n       }\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\\index{increment}\n\\index{decrement}\n\nAssignments like \\java{i = i + 1} don't often appear in loops, because Java provides a more concise way to add and subtract by one.\nSpecifically, \\java{++} is the {\\bf increment} operator; it has the same effect as \\java{i = i + 1}.\nAnd \\java{--} is the {\\bf decrement} operator; it has the same effect as \\java{i = i - 1}.\n\n%So far in this book we have only used (\\java{=}) to assign values to variables.\n%For convenience, Java provides other assignment operators that increase or decrease the value of a variable.\n\nIf you want to increment or decrement a variable by an amount other than \\java{1}, you can use \\java{+=} and \\java{-=}.\nFor example, \\java{i += 2} increments \\java{i} by \\java{2}.\n\n%slr:  8-16-19\n%\\begin{code}\n%int i = 2;\n%while (i <= 8) {\n%    System.out.print(i + \", \");\n%    i += 2;  // add 2 to i\n%}\n%System.out.println(\"Who do we appreciate?\");\n%\\end{code}\n%\n%And the output is:\n%\n%\\begin{stdout}\n%2, 4, 6, 8, Who do we appreciate?\n%\\end{stdout}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [230] {Increment2.java}\npublic class Increment2 {\n\n    public static void main(String[] args) {\n       int i = 2;\n       while (i <= 8) {\n          System.out.print(i + \", 
\");\n          i += 2;  // add 2 to i (same as i = i + 2;\n       }\n       System.out.println(\"Who do we appreciate?\");\n    }\n}\n\\end{trinket}\n\n\\textbf{Section Exercises}\n\\begin{enumerate}\n\\item See if you can predict the output of the program before runnning it.  Were you correct?  Mixing \\java{print} and \\java{println} statements can be tricky!\n\\item Change the \\java{print} statement on line 6.  Instead, do the equivalent with a \\java{printf} statement.\n\\end{enumerate}\n%slr: end 8-16-19\n\n\\section{The for statement}\n\n\\index{for}\n\\index{loop!for}\n\\index{statement!for}\n\nThe loops we have written so far have several elements in common:\n\\begin{enumerate}\n\\item They start by \\textit{initializing} a variable\n\\item They have a \\textit{condition} that depends on that variable\n\\item Inside the loop they do something to \\textit{update} that variable\n\\end{enumerate}\n\n\\index{iteration}\n\nRunning the same code multiple times is called {\\bf iteration}.\nThis type of loop is so common that there is another statement, the \\java{for} loop, that expresses it more concisely.\nFor example, we can rewrite the 2-4-6-8 loop this way:\n\n%slr:  8-16-19\n%\\begin{code}\n%for (int i = 2; i <= 8; i += 2) {\n%    System.out.print(i + \", \");\n%}\n%System.out.println(\"Who do we appreciate?\");\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [210] {For.java}\npublic class For {\n\n    public static void main(String[] args) {\n       \n       for (int i = 2; i <= 8; i += 2) {\n          System.out.print(i + \", \");\n       }\n       System.out.println(\"Who do we appreciate?\");\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\n\\java{for} loops have these three components in parentheses, separated by semicolons: the \\textbf{initializer}, the \\textbf{condition}, and the \\textbf{update}.\n\n\\begin{enumerate}\n\n\\item The {\\em initializer} runs once at the very beginning of the loop.\n(It is equivalent to the line before the \\java{while} statement.)\n\n\\item The {\\em condition} is checked each time through the loop.\nIf it is \\java{false}, the loop ends.\nOtherwise, the body of the loop is executed (again).\n\n\\item At the end of each iteration, the {\\em update} runs, and we go back to step~2.\n\n\\end{enumerate}\n\nThe \\java{for} loop is often easier to read because it puts all the loop-related statements at the top of the loop.\nDoing so allows you to focus on the statements in the loop body.\nFigure~\\ref{fig.for} illustrates \\java{for} loops with a flowchart.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[scale=0.9]{figs/for.pdf}\n\\caption{Flow of execution for a \\java{for} loop.}\n\\label{fig.for}\n\\end{center}\n\\end{figure}\n\nThere is another difference between \\java{for} loops and \\java{while} loops: if you declare a variable in the initializer, it only exists {\\em inside} the \\java{for} loop.\nFor example:\n\n%slr:  8-16-19\n%\\begin{code}\n%for (int n = 3; n > 0; n--) {\n%    System.out.println(n);\n%}\n%System.out.println(\"n is now \" + n);  // compiler error\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [200] {ForScope.java}\npublic class ForScope {\n\n    public static void main(String[] args) {   \n       for (int n = 3; n > 0; n++) {\n          System.out.println(n);\n       }\n       System.out.println(\"n is now \" + n);\n    }\n}\n\\end{trinket}\n\nLine 7 tries to display \\java{n} (for no reason other than demonstration) but it won't compile - try it and see.  
This is because \\java{n} has \\textbf{local scope}: it is declared within the \\java{for} loop and is not visible outside the \\java{for} loop.\n%slr: end 8-16-19\nIf you need to use a loop variable outside the loop, you have to declare it {\\em outside} the loop, like this:\n\n\\begin{code}\nint n;\nfor (n = 3; n > 0; n--) {\n    System.out.println(n);\n}\nSystem.out.println(\"n is now \" + n);\n\\end{code}\n\nNotice that the \\java{for} statement does not say \\java{int n = 3}.\nRather, it simply initializes the existing variable \\java{n}.\n%slr: 8-16-19\n\n\\textbf{Section Exercises}\n\\begin{enumerate}\n\\item Fix the syntax error in the program as shown above.  Now \\java{n} is visible throughout the \\java{main} method.\n\\end{enumerate}\n%slr: end 8-16-19\n\n\\section{Nested loops}\n\\label{nested}\n\n\\index{loop!nested}\n\\index{nested!loops}\n\nLike conditional statements, loops can be nested one inside the other.\nNested loops allow you to iterate over two variables.\nFor example, we can generate a ``multiplication table'' like this:\n\n%slr:  8-16-19\n%\\begin{code}\n%for (int x = 1; x <= 10; x++) {\n%    for (int y = 1; y <= 10; y++) {\n%        System.out.printf(\"%4d\", x * y);\n%    }\n%    System.out.println();\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [230] {MultTable.java}\npublic class MultTable {\n    public static void main(String[] args) {\n       \n       for (int x = 1; x <= 10; x++) {\n          for (int y = 1; y <= 10; y++) {\n             System.out.printf(\"%4d\", x * y);\n          }\n          System.out.println();\n       }\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\n\\index{loop variable}\n\\index{variable!loop}\n\\index{inner loop}\n\\index{outer loop}\n\nVariables like \\java{x} and \\java{y} are called {\\bf loop variables}, because they control the execution of a loop.\nIn this example, the first loop (\\java{for x}) is known as the ``outer loop'', and the second loop (\\java{for y}) is known as the ``inner loop''.\n\nEach loop repeats its corresponding statements 10 times.\nThe outer loop iterates from 1 to 10 only once, but the inner loop iterates from 1 to 10 each of those 10 times.\nAs a result, the \\java{printf} method is invoked 100 times.\n\n\\index{format specifier}\n\nThe format specifier \\java{\\%4d} displays the value of \\java{x * y} padded with spaces so it's four characters wide.\nDoing so causes the output to align vertically, regardless of how many digits the numbers have:\n\n\\begin{stdout}\n   1   2   3   4   5   6   7   8   9  10\n   2   4   6   8  10  12  14  16  18  20\n   3   6   9  12  15  18  21  24  27  30\n   4   8  12  16  20  24  28  32  36  40\n   5  10  15  20  25  30  35  40  45  50\n   6  12  18  24  30  36  42  48  54  60\n   7  14  21  28  35  42  49  56  63  70\n   8  16  24  32  40  48  56  64  72  80\n   9  18  27  36  45  54  63  72  81  90\n  10  20  30  40  50  60  70  80  90 100\n\\end{stdout}\n\nIt's important to realize that the output is displayed row by row.\nThe inner loop displays a single row of output, followed by a newline.\nThe outer loop iterates over the rows themselves.\nAnother way to read nested loops, like the ones in this example, is ``for each row \\java{x}, and for each column \\java{y}, \\ldots''\n\n\n\\section{Characters}\n\nSome of the most interesting problems in computer science involve searching and manipulating text.\nIn the next few sections, we'll discuss how to apply loops to strings.\nAlthough the examples are short, the techniques work the same whether 
you have one word or one million words.\n\n\\index{charAt}\n\\index{char}\n\\index{type!char}\n\nStrings provide a method named \\java{charAt}.\nIt returns a \\java{char}, a Java primitive data type that stores an individual character (as opposed to strings of them).\n\n\\begin{code}\nString fruit = \"banana\";\nchar letter = fruit.charAt(0);\n\\end{code}\n\nThe argument \\java{0} means that we want the character at {\\bf index} 0.\nString indexes range from 0 to $n-1$, where $n$ is the length of the string.\nSo the character assigned to \\java{letter} is \\java{b}.\n\n\\begin{center}\n\\ttfamily\n\\begin{tabular}{cccccc}\n\\hline\n\\multicolumn{1}{|l|}{b} & \\multicolumn{1}{l|}{a} & \\multicolumn{1}{l|}{n} & \\multicolumn{1}{l|}{a} & \\multicolumn{1}{l|}{n} & \\multicolumn{1}{l|}{a} \\\\ \\hline\n0                       & 1                      & 2                      & 3                      & 4                      & 5\n\\end{tabular}\n\\end{center}\n\n\nCharacters work like the other data types we have seen.\nYou can compare them using relational operators:\n\n\\begin{code}\nif (letter == 'a') {\n    System.out.println('?');\n}\n\\end{code}\n\n\\index{quote mark}\n\\index{escape sequence}\n\nCharacter literals, like \\java{'a'}, appear in single quotes.\nUnlike string literals, which appear in double quotes, character literals can only contain a single character.\nEscape sequences, like \\java{'\\t'}, are legal because they represent a single character.\n\nThe increment and decrement operators also work with characters.\nSo this loop displays the letters of the alphabet:\n\n%slr:  8-16-19\n%\\begin{code}\n%System.out.print(\"Roman alphabet: \");\n%for (char c = 'A'; c <= 'Z'; c++) {\n%    System.out.print(c);\n%}\n%System.out.println();\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [195] {Alphabet.java}\npublic class Alphabet {\n    public static void main(String[] args) {\n       \n       for (char c = 'A'; c <= 'Z'; c++) {\n          System.out.print(c);\n       }\n       System.out.println();\n    }\n}\n\\end{trinket}\n\\index{Unicode}\n\\java{char}s are thinly disguised \\java{int}s.  
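For example, a short sketch (not one of the chapter's numbered examples) that converts in both directions:\n\n\\begin{code}\nchar letter = 'A';\nint code = letter;              // implicit conversion to int: 65\nchar next = (char) (code + 1);  // explicit cast back to char: 'B'\n\\end{code}\n\n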
The integer values are encoded as characters.\n%slr: end 8-16-19\nJava uses {\\bf Unicode} to represent characters, so strings can store text in other alphabets like Cyrillic and Greek, and non-alphabetic languages like Chinese.\nYou can read more about it at \\url{http://unicode.org/}.\n\nIn Unicode, each character is represented by a ``code point'', which you can think of as an integer.\nThe code points for uppercase Greek letters run from 913 to 937, so we can display the Greek alphabet like this:\n\n%slr:  8-16-19\n%\\begin{code}\n%System.out.print(\"Greek alphabet: \");\n%for (int i = 913; i <= 937; i++) {\n%    System.out.print((char) i);\n%}\n%System.out.println();\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [210] {GreekAlphabet.java}\npublic class GreekAlphabet {\n    public static void main(String[] args) {\n       \n       System.out.println(\"Greek alphabet: \");\n       for (int i = 913; i <= 937; i++) {\n          System.out.print((char) i);\n       }\n       System.out.println();\n    }\n}\n\\end{trinket}\n%slr:  end 8-16-19\n\nThis example uses a type cast to convert each integer (in the range) to the corresponding character.\nTry running the code and see what happens.\n%slr:  8-16-19\n\n\\textbf{Section Exercises}\n\\begin{enumerate}\n\\item Try printing out (a portion of) the Thai alphabet:  code points 3585 through 3642.\n\\item Read up on Unicode blocks at \\url{https://en.wikipedia.org/wiki/Unicode_block}.  See if you can print other alphabets.  (You'll have to convert from hex to decimal.)\n\\end{enumerate}\n%slr: end 8-16-19\n\n\\section{String iteration}\n\n\\index{iteration}\n\nThe following loop iterates the characters in \\java{fruit} and displays them, one on each line:\n%slr:  8-16-19\n\n%\\begin{code}\n%for (int i = 0; i < fruit.length(); i++) {\n%    char letter = fruit.charAt(i);\n%    System.out.println(letter);\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [210] {StringIter.java}\npublic class StringIter {\n    public static void main(String[] args) {\n       \n       String fruit = \"banana\";\n       for (int i = 0; i < fruit.length(); i++) {\n          char letter = fruit.charAt(i);\n          System.out.println(letter);\n       }\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\\index{string!length}\n\\index{length!string}\n\nStrings provide a method called \\java{length} that returns the number of characters in the string.\nBecause it is a method, you have to invoke it with the empty argument list, \\java{()}.\nWhen \\java{i} is equal to the length of the string, the condition becomes \\java{false} and the loop terminates.\n\nTo find the last letter of a string, you might be tempted to do something like:\n\n\\begin{code}\nint length = fruit.length();\nchar last = fruit.charAt(length);      // wrong!\n\\end{code}\n\n\\index{StringIndexOutOfBoundsException}\n\\index{exception!StringIndexOutOfBounds}\n\nThis code compiles and runs, but invoking the \\java{charAt} method throws a \\java{StringIndexOutOfBoundsException}.\nThe problem is that there is no character at index 6 in \\java{\"banana\"}.\nSince we started counting at 0, the 6 letters are indexed from 0 to 5.\nTo get the last character, you have to subtract 1 from \\java{length}.\n\n\\begin{code}\nint length = fruit.length();\nchar last = fruit.charAt(length - 1);  // correct\n\\end{code}\n\nMany string algorithms involve reading one string and building another.\nFor example, to reverse a string, we can add one character at a time:\n\n%slr:  
8-16-19\n%\\begin{code}\n%public static String reverse(String s) {\n%    String r = \"\";\n%    for (int i = s.length() - 1; i >= 0; i--) {\n%        r += s.charAt(i);\n%    }\n%    return r;\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [290] {StringReverse.java}\npublic class StringReverse {\n    \n    public static String reverse(String s) {\n       String r = \"\";  // empty string\n       for (int i = s.length() - 1; i >= 0; i--) {\n          r += s.charAt(i);  // same as r = r + s.charAt(i); + concatenates Strings\n       }\n       return r;\n    }    \n    \n    public static void main(String[] args) {\n       String fruit = \"banana\";\n       String tiurf = reverse(fruit);\n       System.out.println(tiurf);\n    }\n}\n\\end{trinket}\n\n\\index{empty string}\n\nThe initial value of \\java{r} is \\java{\"\"}, which is the {\\bf empty string}.\nThe loop iterates the letters of \\java{s} in reverse order.\nEach time through the loop, it creates a new string and assigns it to \\java{r}.\nWhen the loop exits, \\java{r} contains the letters from \\java{s} in reverse order.\nSo the result of \\java{reverse(\"banana\")} is \\java{\"ananab\"}.\n\n\n\\section{The indexOf method}\n\n\\index{indexOf}\n\nTo search for a specific character in a string, you could write a \\java{for} loop and use \\java{charAt} as in the previous section.\nHowever, the \\java{String} class already provides a method for doing just that.\n\n\\begin{code}\nString fruit = \"banana\";\nint index = fruit.indexOf('a');     // returns 1\n\\end{code}\n\nThis example finds the index of \\java{'a'} in the string.\nBut the letter appears three times, so it's not obvious what \\java{indexOf} should do.\nAccording to the documentation, it returns the index of the {\\em first} appearance.\n\nTo find subsequent appearances, you can use another version of \\java{indexOf}, which takes a second argument that indicates where in the string to start looking.\n\n\\begin{code}\nint index = fruit.indexOf('a', 2);  // returns 3\n\\end{code}\n\nTo visualize how \\java{indexOf} and other \\java{String} methods work, it helps to draw a picture like Figure~\\ref{fig.banana}.\nThe previous code starts at index 2 (the first \\java{'n'}) and finds the next \\java{'a'}, which is at index 3.\n\n\\index{memory diagram}\n\\index{diagram!memory}\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics{figs/banana.pdf}\n\\caption{Memory diagram for a \\java{String} of six characters.}\n\\label{fig.banana}\n\\end{center}\n\\end{figure}\n\n%\\begin{center}\n%\\begin{tabular}{c|c|c|c|c|c}\n%%\\hline\n%b & a & n & a & n & a \\\\\n%\\hline\n%0 & 1 & 2 & 3 & 4 & 5 \\\\\n%%\\hline\n%\\end{tabular}\n%\\end{center}\n\nIf the character happens to appear at the starting index, the starting index is the answer.\nSo \\java{fruit.indexOf('a', 5)} returns \\java{5}.\nIf the character does not appear in the string, \\java{indexOf} returns \\java{-1}.\nSince indexes cannot be negative, this value indicates the character was not found.\n\nYou can also use \\java{indexOf} to search for an entire string, not just a single character.\nFor example, the expression \\java{fruit.indexOf(\"nan\")} returns \\java{2}.\n\n\n\\section{String comparison}\n\\label{strcmp}\n\n\\index{equals}\n\\index{string!comparing}\n\nTo compare two strings, it may be tempting to use the \\java{==} and \\java{!=} operators.\n\n\\begin{code}\nString name1 = \"Alan Turing\";\nString name2 = \"Ada Lovelace\";\nif (name1 == name2) {                 // wrong!\n    
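// true only if both variables refer to the same object\n    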
System.out.println(\"The names are the same.\");\n}\n\\end{code}\n\nThis code compiles and runs, and sometimes it gets the answer right.\nBut sometimes it gets the answer wrong.\nIf you give it two different strings that contain the same letters, the condition will be \\java{false}.\n\n%slr 8-16-19\nHere is an example of that situation.  When prompted, enter the string \"foo\" and see what happens:\n\n% height = 130 + 11 * num_lines\n\\begin{trinket} [220] {StringEquals.java}\nimport java.util.Scanner;\npublic class StringEquals {\n   public static void main(String[] args) {\n    \n      Scanner kbd = new Scanner(System.in);\n      String s = \"foo\";\n      System.out.print(\"Enter a string: \");\n      String u = kbd.next();\n      System.out.println(s == u); //don't do this!\n   }\n}\n\\end{trinket}\n\nThe \\java{String}s have the same content but are not the \\textit{same object}.\n%slr:  end 8-16-19\nThe problem is that the \\java{==} operator checks whether the two variables refer to the {\\em same object} by comparing the references.\nWe'll learn more about references in the next chapter.\nThe correct way to compare strings is with the \\java{equals} method, like this:\n\n%slr:  8-16-19\n% height = 130 + 11 * num_lines\n\\begin{trinket} [220] {StringEquals.java}\nimport java.util.Scanner;\npublic class StringEquals {\n    public static void main(String[] args) {\n    \n       Scanner kbd = new Scanner(System.in);\n       String s = \"foo\";\n       System.out.print(\"Enter a string: \");\n       String u = kbd.next();\n       System.out.println(s.equals(u)); //correct\n    }\n}\n\\end{trinket}\n\nNow run it and enter \"foo\" when prompted. The \\java{String}s have the same content and the \\java{equals} method returns \\java{true}.\n%\\begin{code}\n%if (name1.equals(name2)) {\n%    System.out.println(\"The names are the same.\");\n%}\n%\\end{code}\n\n%This example invokes \\java{equals} on \\java{name1} and passes \\java{name2} as an argument.\n%The \\java{equals} method returns \\java{true} if the strings contain the same characters; otherwise it returns \\java{false}.\n%slr: end 8-16-19\n\n\\index{compareTo}\n\nIf the strings differ, we can use \\java{compareTo} to see which comes first in alphabetical order:\n\n%slr:  8-16-19\n%\\begin{code}\n%int diff = name1.compareTo(name2);\n%if (diff == 0) {\n%    System.out.println(\"The names are the same.\");\n%} else if (diff < 0) {\n%    System.out.println(\"name1 comes before name2.\");\n%} else if (diff > 0) {\n%    System.out.println(\"name2 comes before name1.\");\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [260] {StringCompareTo.java}\npublic class StringCompareTo {    \n    public static void main(String[] args) {\n       String name1 = \"Alan Turing\";\n       String name2 = \"Ada Lovelace\";\n       int diff = name1.compareTo(name2);\n       if (diff == 0) {\n          System.out.println(\"The names are the same.\");\n       } else if (diff < 0) {\n          System.out.println(\"name1 comes before name2.\");\n       } else if (diff > 0) {\n          System.out.println(\"name2 comes before name1.\");\n       }\n    }\n}\n\\end{trinket}\n%slr: end 8-16-19\n\nThe return value from \\java{compareTo} is the difference between the first characters in the strings that are not the same.\nIn the preceding code, \\java{compareTo} returns positive 8, because the second letter of \\java{\"Ada\"} comes before the second letter of \\java{\"Alan\"} by 8 letters.\n\nIf the strings are equal, their difference is 
zero.\nIf the first string (the one on which the method is invoked) comes first in the alphabet, the difference is negative.\nOtherwise, the difference is positive.\n\n\\index{case-sensitive}\n\nBoth \\java{equals} and \\java{compareTo} are case-sensitive.\nIn Unicode, uppercase letters come before lowercase letters.\nSo \\java{\"Ada\"} comes before \\java{\"ada\"}.\n%slr:  8-16-19\n\n\\textbf{Section Exercises}\n\\begin{enumerate}\n\\item Change the \\java{StringCompareTo} program above so as to print out the value of \\java{diff}.  Do you get the value 8?\n\\item Change \\java{name1} and \\java{name2}.  Predict what the value of \\java{diff} will be.  Run the program to check.\n\\end{enumerate}\n%slr: end 8-16-19\n\n\n\\section{Substrings}\n\n\\index{substring}\n\nThe \\java{substring} method returns a new string that copies letters from an existing string, starting at the given index.\n\n\\begin{itemize}\n\\item \\java{fruit.substring(0)} returns \\java{\"banana\"}\n\\item \\java{fruit.substring(2)} returns \\java{\"nana\"}\n\\item \\java{fruit.substring(6)} returns \\java{\"\"}\n\\end{itemize}\n\nThe first example returns a copy of the entire string.\nThe second example returns all but the first two characters.\nAs the last example shows, \\java{substring} returns the empty string if the argument is the length of the string.\n\nLike most string methods, \\java{substring} is \\textbf{overloaded}.\nThat is, there are other versions of \\java{substring} that have different parameters.\nIf it's invoked with two arguments, they are treated as a start and end index:\n\n\\begin{itemize}\n\\item \\java{fruit.substring(0, 3)} returns \\java{\"ban\"}\n\\item \\java{fruit.substring(2, 5)} returns \\java{\"nan\"}\n\\item \\java{fruit.substring(6, 6)} returns \\java{\"\"}\n\\end{itemize}\n\nNotice that the character indicated by the end index is {\\em not} included.\nDefining \\java{substring} this way simplifies some common operations.\nFor example, to select a substring with length \\java{len}, starting at index \\java{i}, you could write \\java{fruit.substring(i, i + len)}.\n%So \\java{fruit.substring(2, 2 + 3)} returns \\java{\"nan\"}.\n%slr:  8-16-19\n\n\\textbf{Section Exercises}\n\nUse the following to complete the exercises:\n\n% height = 130 + 11 * num_lines\n\\begin{trinket} [180] {Substring.java}\npublic class Substring {    \n    public static void main(String[] args) {\n       String name = \"Ada Lovelace\";\n       System.out.println(name.substring(4));\n       System.out.println(name.substring(4, 7));\n    }\n}\n\\end{trinket}\n\n\\begin{enumerate}\n\\item Predict the output of the program prior to running it.  Were you correct?\n\\item Play with the parameter passed to \\java{substring} in line 4.  Make certain you understand how this version of \\java{substring} works!\n\\item Play with the parameter\\textbf{s} passed to \\java{substring} in line 5.  
Make certain you understand how this version of \\java{substring} works!\n\\end{enumerate}\n%slr: end 8-16-19\n\n\n\\section{String formatting}\n\n\\index{printf}\n\nIn Section~\\ref{printf}, we learned how to use \\java{System.out.printf} to display formatted output.\nSometimes programs need to create strings that are formatted a certain way, but not display them immediately, or ever.\nFor example, the following program uses a method that returns a time string in 12-hour format:\n\n%slr:  8-16-19\n%\\begin{code}\n%public static String timeString(int hour, int minute) {\n%    String ampm;\n%    if (hour < 12) {\n%        ampm = \"AM\";\n%        if (hour == 0) {\n%            hour = 12;  // midnight\n%        }\n%    } else {\n%        ampm = \"PM\";\n%        hour = hour - 12;\n%    }\n%    return String.format(\"%02d:%02d %s\", hour, minute, ampm);\n%}\n%\\end{code}\n% height = 130 + 11 * num_lines\n\\begin{trinket} [350] {Formatting.java}\npublic class Formatting {    \n\n    public static String timeString(int hour, int minute) {\n       String ampm;\n       if (hour < 12) {\n           ampm = \"AM\";\n           if (hour == 0) {\n               hour = 12;  // midnight\n           }\n       } else {\n           ampm = \"PM\";\n           hour = hour - 12;\n       }\n       return String.format(\"%02d:%02d %s\", hour, minute, ampm);\n    }  \n    \n    public static void main(String[] args) {\n       String formattedTime = timeString(13, 46);\n       System.out.println(formattedTime);\n    }\n}\n\\end{trinket}\n\n\\index{string!format}\n\n\\java{String.format}, a static method of the \\java{String} class, takes the same arguments as \\java{System.out.printf}: a format specifier followed by a sequence of values.\nThe main difference is that \\java{System.out.printf} displays the result on the screen.\n\\java{String.format} creates a new string, but does not display anything.\n\nIn this example, the format specifier \\java{\\%02d} means ``two-digit integer padded with zeros'', so \\java{timeString(19, 5)} returns the string \\java{\"07:05 PM\"}.\nAs an exercise, try writing two nested \\java{for} loops (in \\java{main}) that invoke \\java{timeString} and display all possible times over a 24-hour period.\n\n%slr: 8-16-19\nAt some point today, skim through the documentation for \\java{String} found in the \\href{https://docs.oracle.com/javase/8/docs/api/}{Java API}.\nKnowing what other methods are there will help you avoid reinventing the wheel.\n%The easiest way to find documentation for Java classes is to do a web search for ``Java'' and the name of the class.\n%slr:  end 8-16-19\n\n\n\\section{Vocabulary}\n\n\\begin{description}\n\n\\term{loop}\nA statement that executes a sequence of statements repeatedly.\n\n\\term{loop body}\nThe statements inside the loop.\n\n\\term{infinite loop}\nA loop whose condition is always true.\n\n\\term{increment}\nIncrease the value of a variable.\n\n\\term{decrement}\nDecrease the value of a variable.\n\n\\term{iteration}\nExecuting a sequence of statements repeatedly.\n\n\\term{loop variable}\nA variable that is initialized, tested, and updated in order to control a loop.\n\n\\term{index}\nAn integer variable or value used to indicate a character in a string.\n\n\\term{Unicode}\nAn international standard for representing characters in most of the world's languages.\n\n\\term{empty string}\nThe string \\java{\"\"}, which contains no characters and has a length of zero.\n\n\\end{description}\n\n\n\\section{Exercises}\n\nThe code for this chapter is in the 
{\\tt ch06} directory of {\\tt ThinkJavaCode2}.\nSee page~\\pageref{code} for instructions on how to download the repository.\nBefore you start the exercises, we recommend that you compile and run the examples.\n\nIf you have not already read Appendix~\\ref{debugger}, now might be a good time.\nIt describes the DrJava debugger, which is a useful tool for visualizing the flow of execution through loops.\n\n\n\\begin{exercise}  %%V6 Ex7.1\n\nConsider the following methods:\n\n\\begin{code}\npublic static void main(String[] args) {\n    loop(10);\n}\n\npublic static void loop(int n) {\n    int i = n;\n    while (i > 1) {\n        System.out.println(i);\n        if (i % 2 == 0) {\n            i = i / 2;\n        } else {\n            i = i + 1;\n        }\n    }\n}\n\\end{code}\n\n\\begin{enumerate}\n\n\\item Draw a table that shows the value of the variables \\java{i} and \\java{n} during the execution of \\java{loop}.\nThe table should contain one column for each variable and one line for each iteration.\n\n\\item What is the output of this program?\n\n\\item Can you prove that this loop terminates for any positive value of \\java{n}?\n\n% If i is odd and we increment by 1, the result is even.  So the second\n% branch is always followed by the first branch.\n% If i is even and we divide by 2, the result might be odd.  So in the\n% worst case, we might alternate between the branches.\n% But we can't do more of the second branch than the first.\n% So we divide at least as often as we add.\n\n% If i is 1, we're done.\n% If i is 2, we divide by 2 and we're done.\n% If i is greater than 2, the first branch decreases more than the\n% second branch increases.\n% So if we do one of each, the net effect is a decrease.\n% Therefore, the value of i has to decrease after any two steps.\n\n\\end{enumerate}\n\n\\end{exercise}\n\n\n\\begin{exercise}  %%V6 Ex7.2\n\nLet's say you are given a number, $a$, and you want to find its square root.\nOne way to do that is to start with a rough guess about the answer, $x_0$, and then improve the guess using this formula:\n%\n\\[ x_1 =(x_0 + a/x_0) / 2 \\]\n%\nFor example, if we want to find the square root of 9, and we start with $x_0 = 6$, then $x_1 = (6 + 9/6) / 2 = 3.75$, which is closer.\nWe can repeat the procedure, using $x_1$ to calculate $x_2$, and so on.\nIn this case, $x_2 = 3.075$ and $x_3 = 3.00091$.\nSo it converges quickly on the correct answer.\n\nWrite a method called \\java{squareRoot} that takes a \\java{double} and returns an approximation of the square root of the parameter, using this technique.\nYou should not use \\java{Math.sqrt}.\n\nAs your initial guess, you should use $a/2$.\nYour method should iterate until it gets two consecutive estimates that differ by less than 0.0001.\n%In other words, return when the absolute value of $x_n - x_{n-1}$ is less than 0.0001.\nYou can use \\java{Math.abs} to calculate the absolute value of the difference.\n\n\\end{exercise}\n\n\n\\begin{exercise}  %%V6 Ex7.6\n\nOne way to evaluate $\\exp(-x^2)$ is to use the infinite series expansion:\n%\n\\[ \\exp(-x^2) = 1 - x^2 + x^4/2 - x^6/6 + \\ldots \\]\n%\nThe $i$th term in this series is $(-1)^i x^{2i} / i!$.\nWrite a method named \\java{gauss} that takes \\java{x} and \\java{n} as arguments and returns the sum of the first \\java{n} terms of the series.\nYou should not use \\java{factorial} or \\java{pow}.\n\n\\end{exercise}\n\n\n\\begin{exercise}  %%V6 Ex9.5\n\n\\index{abecedarian}\n\nA word is said to be ``abecedarian'' if the letters in the word appear in 
alphabetical order.\nFor example, the following are all six-letter English abecedarian words:\n\n\\begin{quote}\nabdest, acknow, acorsy, adempt, adipsy, agnosy, befist, behint, %\\\\\nbeknow, bijoux, biopsy, cestuy, chintz, deflux, dehors, dehort, %\\\\\ndeinos, diluvy, dimpsy %\\\\\n\\end{quote}\n\nWrite a method called \\java{isAbecedarian} that takes a \\java{String} and returns a \\java{boolean} indicating whether the word is abecedarian.\n%Your method can be iterative or recursive.\n\n\\end{exercise}\n\n\n\\begin{exercise}  %%V6 Ex9.6\n\\label{doubloon}\n\n\\index{doubloon}\n\nA word is said to be a ``doubloon'' if every letter that appears in the word appears exactly twice.\nHere are some example doubloons found in the dictionary:\n\n\\begin{quote}\nAbba, Anna, appall, appearer, appeases, arraigning, beriberi, bilabial, boob, Caucasus, coco, Dada, deed, Emmett, Hannah, horseshoer, intestines, Isis, mama, Mimi, murmur, noon, Otto, papa, peep, reappear, redder, sees, Shanghaiings, Toto\n\\end{quote}\n\nWrite a method called \\java{isDoubloon} that takes a string and checks whether it is a doubloon.\nTo ignore case, invoke the \\java{toLowerCase} method before checking.\n\\end{exercise}\n\n\n\\begin{exercise}  %%V6 Ex9.8\n\n\\index{Scrabble}\n\nIn Scrabble\\footnote{Scrabble is a registered trademark owned in the USA and Canada by Hasbro Inc., and in the rest of the world by J.\\ W.\\ Spear \\& Sons Limited of Maidenhead, Berkshire, England, a subsidiary of Mattel Inc.} each player has a set of tiles with letters on them.\nThe object of the game is to use those letters to spell words.\nThe scoring system is complex, but longer words are usually worth more than shorter words.\n\nImagine you are given your set of tiles as a string, like \\java{\"quijibo\"}, and you are given another string to test, like \\java{\"jib\"}.\n\nWrite a method called \\java{canSpell} that takes two strings and checks whether the set of tiles can spell the word.\nYou might have more than one tile with the same letter, but you can only use each tile once.\n\n\\end{exercise}\n", "meta": {"hexsha": "6aff45c2c0d302880ad0072f0d4d3831cb3762bb", "size": 37240, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ch06.tex", "max_stars_repo_name": "StevenLRichardson/ThinkJava2Trinket", "max_stars_repo_head_hexsha": "540f35463dbab881cf2557553e93df28b37a4f32", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-06-29T10:05:31.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-29T10:05:31.000Z", "max_issues_repo_path": "ch06.tex", "max_issues_repo_name": "StevenLRichardson/ThinkJava2Trinket", "max_issues_repo_head_hexsha": "540f35463dbab881cf2557553e93df28b37a4f32", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch06.tex", "max_forks_repo_name": "StevenLRichardson/ThinkJava2Trinket", "max_forks_repo_head_hexsha": "540f35463dbab881cf2557553e93df28b37a4f32", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.5798016231, "max_line_length": 278, "alphanum_fraction": 0.6860633727, "num_tokens": 10602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.685949467848392, "lm_q2_score": 0.8080672066194946, "lm_q1q2_score": 0.5542932703663789}}
{"text": "\\hypertarget{implicitmesh}{%\n\\section{ImplicitMesh}\\label{implicitmesh}}\n\nThe \\texttt{implicitmesh} module allows you to build meshes from\nimplicit functions. For example, the unit sphere could be specified\nusing the function \\texttt{x\\^{}2+y\\^{}2+z\\^{}2-1\\ ==\\ 0}.\n\nTo use the module, first import it:\n\n\\begin{lstlisting}\nimport implicitmesh\n\\end{lstlisting}\n\nTo create a sphere, first create an ImplicitMeshBuilder object with the\nimplict function you'd like to use:\n\n\\begin{lstlisting}\nvar impl = ImplicitMeshBuilder(fn (x,y,z) x^2+y^2+z^2-1)\n\\end{lstlisting}\n\nYou can use an existing function (or method) as well as an anonymous\nfunction as above.\n\nThen build the mesh,\n\n\\begin{lstlisting}\nvar mesh = impl.build(stepsize=0.25)\n\\end{lstlisting}\n\nThe \\texttt{build} method takes a number of optional arguments:\n\n\\begin{itemize}\n\n\\item\n  \\texttt{start} - the starting point. If not provided, the value\n  Matrix({[}1,1,1{]}) is used.\n\\item\n  \\texttt{stepsize} - approximate lengthscale to use.\n\\item\n  \\texttt{maxiterations} - maximum number of iterations to use. If this\n  limit is exceeded, a partially built mesh will be returned.\n\\end{itemize}\n", "meta": {"hexsha": "e8664e4ee251946e2731d58ec1a5b2725b6d0540", "size": 1148, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "manual/src/Reference/implicitmesh.tex", "max_stars_repo_name": "mattsep/morpho", "max_stars_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2021-09-18T14:44:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T11:41:50.000Z", "max_issues_repo_path": "manual/src/Reference/implicitmesh.tex", "max_issues_repo_name": "mattsep/morpho", "max_issues_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 79, "max_issues_repo_issues_event_min_datetime": "2021-10-05T17:33:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:06:10.000Z", "max_forks_repo_path": "manual/src/Reference/implicitmesh.tex", "max_forks_repo_name": "mattsep/morpho", "max_forks_repo_head_hexsha": "50bb935653c0675b81e9f2d78573cf117971a147", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-10-05T16:56:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-31T19:55:27.000Z", "avg_line_length": 26.6976744186, "max_line_length": 71, "alphanum_fraction": 0.7465156794, "num_tokens": 331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672135527632, "lm_q2_score": 0.6859494485880927, "lm_q1q2_score": 0.5542932595586345}}
{"text": "\\section{Correlation Estimation}\n\n\\begin{enumerate}[label=\\alph*), leftmargin=*]\n%% a)\n\\item\n%\n\nFigure \\ref{fig:2_1_a} shows the biased (red) and unbiased (blue) estimates of the autocorrelation function (ACF) as well as the correlogram spectral estimates obtained for:\nwhite gaussian noise (WGN), a noisy sinusoidal signal ($\\sigma^{2} = 1$) and filtered white gaussian noise.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/acf-WGN}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/psd-WGN}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/acf-Filtered_WGN}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/psd-Filtered_WGN}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/acf-Noisy_Sinewave}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/a/psd-Noisy_Sinewave}\n    \\end{subfigure}\n    \\caption{ACF and Correlogram: biased and unbiased estimates of various signals.}\n    \\label{fig:2_1_a}\n\\end{figure}\n\nRegarding the ACF estimates, we verify that for small lags, $k \\lessapprox 150$, the two estimates are very similar, but for larger lags \nthe unbiased estimates increase in value while the biased estimates fade out to zero. We now show that the biased and unbiased ACF\nestimates are obtained by windowing the ideal ACF $r_{xx}(k)$ with the Bartlett window, $w_{B}(k)$, and the rectangular window, $w_{R}(k)$, respectively.\n\n\\begin{align}\n    \\E[ \\hat{r}_{biased}(k) ] &= \\sum_{n=k+1}^{N} \\frac{N - k}{N} r_{xx}(k) = w_{B}(k) r_{xx}(k) \\\\\n    \\E[ \\hat{r}_{unbiased}(k) ] &= \\sum_{n=k+1}^{N} \\frac{N - k}{N - K} r_{xx}(k) = w_{R}(k) r_{xx}(k)\n\\end{align}\n\nThus, using the Fourier transform pair ACF-PSD, we obtain the expected values of the correlograms $\\E[\\hat{P}_{biased}]$ and $\\E[\\hat{P}_{unbiased}]$ as the convolution of\nof the true power spectrum with the Fourier transform of the Bartlett and rectangular window, respectively. The Fourier transform of the rectangular window\nis the $sinc$ function, which introduces \\textbf{negative} values, and does not preserve the positive semi-definiteness of the PSD. 
On the other hand, the\nFourier transform of the Bartlett window is strictly non-negative\\footnote{see figure \\ref{fig:1_3_a_2} from Assignment 1}, guaranteeing positive semi-definiteness.\nThe correlograms in figure \\ref{fig:2_1_a} verify our theoretical argument, where the unbiased estimates lead to negative PSD values while this is not the case for\nthe biased estimates.\n\n%% b)\n\\item\n%\n\nFigure \\ref{fig:2_1_b} illustrates the periodogram of 100 realisations of the random process $x(n)$,\nas well as the ensemble mean and standard deviation, where:\n\n\\begin{equation}\n    x(n) = 1.5 \\sin(2 \\pi 0.3 n) + \\sin(2 \\pi 0.6 n) + 2 \\sin(2 \\pi 0.9 n) + w(n) \\quad w \\sim \\mathcal{N}(0, 1)\n\\end{equation}\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/b/psd_mean}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/b/psd_std}\n    \\end{subfigure}\n    \\caption{Correlogram: mean and standard deviation of $x(n)$.}\n    \\label{fig:2_1_b}\n\\end{figure}\n\nWhen the biased ACF estimate is used, the obtained correlogram is an \\textbf{inconsistent estimator} since:\n\n\\begin{equation}\n    \\Var[\\hat{P}_{biased}(f)] = P_{xx}^{2}(f) \\bigg[ 1 + \\bigg(\\frac{\\sin(2 \\pi N f)}{N \\sin(2 \\pi f)}\\bigg)^{2} \\bigg]\n\\end{equation}\n\nand as a result when $N \\rightarrow \\infty$, $\\Var[\\hat{P}_{biased}(f)] \\rightarrow P_{xx}^{2}(f) \\gg 0$. This also explains the increased standard deviation of the periodogram\nat the peak frequencies of the ideal PSD, $P_{xx}(f)$, of the process.\n\n%% c)\n\\item\n%\n\nThe experiments are repeated and the periodograms are expressed in $dB$ this time, as illustrated in figure \\ref{fig:2_1_c}.\nInterestingly, the variance close to the peak frequencies, $f = 0.3,\\ 0.6,\\ 0.9\\ Hz$, is decreased.\nThis is a counter-intuitive result, but it can be explained by the logarithmic function's gradient ($\\frac{1}{x}$).\nFluctuations around zero are significantly amplified, while values greater than $1$ are attenuated (sub-linear trend). 
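This can be made precise with a first-order (delta-method) sketch: linearising $g(x) = 10 \\log_{10} x$ around the mean gives\n\\begin{equation}\n    \\Var[10 \\log_{10} \\hat{P}(f)] \\approx {\\bigg(\\frac{10}{P_{xx}(f) \\ln 10}\\bigg)}^{2} \\Var[\\hat{P}(f)]\n\\end{equation}\nso the scaling factor shrinks wherever $P_{xx}(f)$ is large.\n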
Consequently, since the power at peak frequencies is much greater\nthan $1$, its variance is squeezed, while at all other frequencies power is concentrated around zero, leading to amplified variance.\n\nClearly, this is an advantageous representation, since at frequencies of interest the spread is reduced and the interpretation of the periodograms is easier.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/c/psd_mean}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/c/psd_std}\n    \\end{subfigure}\n    \\caption{Correlogram: mean and standard deviation of $x(n)$ in $dB$.}\n    \\label{fig:2_1_c}\n\\end{figure}\n\n%% d)\n\\item\n%\n\nFigure \\ref{fig:2_1_d} shows the periodograms of complex exponential signals (two sine waves) in noise for different numbers of samples and therefore different frequency resolutions.\nAs expected, for a small number of samples ($n \\leq 40$) the two peaks cannot be distinguished but for larger values ($n \\geq 45$) the two peaks are visible.\nThis observation agrees with theory, since the resolution of the (standard) periodogram is given by $\\Delta f = \\frac{0.89}{N}$ and the two complex signals have\nfrequencies $f_{1} = 0.3\\ Hz$ and $f_{2} = 0.32\\ Hz$, hence discrimination is possible when:\n\n\\begin{equation}\n    \\Delta f \\leq f_{2} - f_{1} \\Longrightarrow N \\geq \\frac{0.89}{f_{2} - f_{1}} = \\frac{0.89}{0.32 - 0.3} = 44.5\n\\end{equation}\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/d/complex_exponentials-1}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/d/complex_exponentials-2}\n    \\end{subfigure}\n    \\caption{Periodogram: frequency resolution and peak identification.}\n    \\label{fig:2_1_d}\n\\end{figure}\n\nHence, despite the simplicity of the signals under investigation (two sine waves), the periodogram fails to adequately estimate their power spectral density when a small number\nof samples is available. Therefore, alternative methods (i.e.\\ subspace methods) must be used in these cases.\n\n%% e)\n\\item\n%\n\nThe MUltiple SIgnal Classification algorithm (MUSIC) estimates the spectral density of a signal, using a \\textbf{subspace} method. It assumes that the signal, $x(n)$,\nconsists of $p$ complex exponentials in the presence of Gaussian white noise. Given an autocorrelation matrix, $\\mathbf{R}_{xx} \\in \\sR^{M \\times M}$,\nif its eigenvalues are sorted in decreasing order, the eigenvectors corresponding to the $p$ largest eigenvalues (i.e. 
directions of largest variability) span the signal subspace, $\\mathbf{R}_{s}$.\nThe remaining $M-p$ eigenvectors span the orthogonal space, $\\mathbf{R}_{n}$, where there is only noise.\nLet the noise eigenvectors $\\vv_{i}$ and the helper vector $\\ve$ be such that:\n\n\\begin{equation}\n    \\vv_{i}, \\quad i = p+1, \\ldots, M \\qquad \\text{and} \\qquad\n    \\ve = \n    \\begin{bmatrix}\n        1 & e^{jw} & e^{j2w} & \\cdots &  e^{j(M-1)w}\n    \\end{bmatrix}^{T}\n\\end{equation}\n\nthen the MUSIC spectral estimate $\\hat{P}_{MU}$ is given by:\n\n\\begin{equation}\n    \\hat{P}_{MU}(e^{jw}) = \\frac{1}{\\sum_{i=p+1}^{M} |\\ve^{H} \\vv_{i}|^{2}}\n\\end{equation}\n\nwhere at signal frequencies $w_{1},\\ \\ldots,\\ w_{k},\\ \\ldots,\\ w_{p}$ the noise eigenvectors $\\vv_{i}$ and the signal eigenvectors $\\ve_{k}$ will be orthogonal (since $\\mathbf{R}_{xx}$ is Hermitian)\nand therefore $\\hat{P}_{MU}$ will have $p$ peaks, as expected for a spectrum estimator of $p$ complex exponentials.\n\nThe modified autocorrelation matrix $\\mathbf{R}_{xx}$ with $M = 14$ is obtained using the MATLAB command:\n\n\\begin{equation*}\n    \\mathtt{[X,Rxx] = corrmtx(x, 14, \"mod\")}\n\\end{equation*}\n\nwhich is used by the MUSIC algorithm for spectral estimation:\n\n\\begin{equation*}\n    \\mathtt{[S,F] = pmusic(Rxx, p, [ ], 1, \"corr\")}\n\\end{equation*}\n\nwhere \\texttt{p} is the signal subspace dimensionality, a hyperparameter that must be tuned/selected.\n\nIn figure \\ref{fig:2_1_e}, an overlay of the MUSIC spectrum estimates for 100 realisations of a signal with two complex exponentials ($f_{1} = 0.3\\ Hz$ and $f_{2} = 0.32\\ Hz$) is provided,\nalong with their standard deviation, for different values of the hyperparameter $p$.\n\n\\begin{figure}[h]\n    \\centering\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_mean-p_1}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_std-p_1}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_mean-p_2}\n    \\end{subfigure}\n    ~ \n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_std-p_2}\n    \\end{subfigure}\n    ~\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_mean-p_3}\n    \\end{subfigure}\n    ~\n    \\begin{subfigure}{0.49\\textwidth}\n        \\centering\n        \\includegraphics[height=1.5in]{report/parametric-and-line-spectra/correlation-estimation/assets/e/MUSIC_std-p_3}\n    \\end{subfigure}\n    \\caption{MUSIC: spectral estimation of two complex exponentials and signal space size $p$.}\n    \\label{fig:2_1_e}\n\\end{figure}\n\nDespite the small number of samples available ($n = 30$) and thus the poor frequency resolution, the algorithm successfully identifies the two\nfrequency components at $f_{1}$ and $f_{2}$, when $p=2$ is selected, unlike the periodogram that requires a larger number of samples.\nSimilar to the periodogram, the standard deviation of the 
estimator increases closer to the peak values, but most importantly, the reliability\nof the estimator deteriorates significantly when $p$ is not matched with the nature of the signal $x(n)$.\n\nTo sum up, the MUSIC method is very powerful when a small number of samples is available but its chief disadvantage is that it requires the number of components $p$ to be known in advance,\nso \\textbf{it cannot be used in more general cases}, when prior knowledge of the signal is not provided. On the other hand, the periodogram needs\na finer frequency resolution to detect closely-spaced sine waves (larger $n$), but it does not require any knowledge about the signal $x(n)$.\n\n%\n\\end{enumerate}", "meta": {"hexsha": "270279fb9888f829d33b685df0ae9c2f9873093d", "size": 12098, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/report/parametric-and-line-spectra/correlation-estimation/index.tex", "max_stars_repo_name": "filangel/ASPMI", "max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z", "max_issues_repo_path": "tex/report/parametric-and-line-spectra/correlation-estimation/index.tex", "max_issues_repo_name": "AmjadHisham/ASPMI", "max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/report/parametric-and-line-spectra/correlation-estimation/index.tex", "max_forks_repo_name": "AmjadHisham/ASPMI", "max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z", "avg_line_length": 48.5863453815, "max_line_length": 198, "alphanum_fraction": 0.7183005455, "num_tokens": 3610, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417500561683, "lm_q2_score": 0.8418256492357358, "lm_q1q2_score": 0.5542727398785092}}
{"text": "\\section{Updates: 04/28/2020}%\n\\label{sec:updates_2020_04_28}\n%\n\\subsection{Additional changes to loss function}%\n\\label{subsec:additional_changes_to_loss_fn}\n%\nInstead of working with the mixed loss function,\n\\(\\ell_{\\lambda_{\\mathcal{Q}}}(\\xip, \\xi, A(\\xip|\\xi))\\) from\nEq.\\ref{eq:ell_lambda}, we can focus exclusively on the second term, which is\ndirectly related to the tunneling rate.\n%\nThe equation then becomes\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% TODO: Include both ell_p and ell_q functions with separate weights in\n% `master` loss function\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{align}\n  \\ell_{\\lambda_{\\Q}}\\left(\\xip, \\xi, A(\\xip|\\xi)\\right) \n  &= - \\frac{\\delta_{\\Q}\\cdot A(\\xi^{\\prime}|\\xi)}{\\lambda_{\\Q}^{2}}\\\\\n  &= - {\\left(\\frac{{\\mathcal{Q}^{\\prime} -\n  \\mathcal{Q}}}{\\lambda_\\mathcal{Q}}\\right)}^{2}\\cdot\n    A(\\xi^{\\prime}|\\xi)\n\\end{align}\n", "meta": {"hexsha": "9a9e7f376e4f437ebb97d2302cef594facd51550", "size": 946, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/updates/updates_2020_04_28.tex", "max_stars_repo_name": "saforem2/l2hmc-qcd", "max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z", "max_issues_repo_path": "doc/updates/updates_2020_04_28.tex", "max_issues_repo_name": "saforem2/l2hmc-qcd", "max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z", "max_forks_repo_path": "doc/updates/updates_2020_04_28.tex", "max_forks_repo_name": "saforem2/l2hmc-qcd", "max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z", "avg_line_length": 37.84, "max_line_length": 77, "alphanum_fraction": 0.5729386892, "num_tokens": 285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8418256472515683, "lm_q2_score": 0.6584174938590246, "lm_q1q2_score": 0.5542727329296289}}
{"text": "\\section{Functions} % (fold)\n\\label{sec:functions}\n\n% This sheet is a little long\n\n\\begin{questions}\n\n\\titledquestion{Hello} % (fold)\n\\label{sub:hello_world}\n\n\\begin{parts}\n    \\part Write a function \\verb\\hello_world\\ that prints \\verb\\'Hello, world!'\\\n    \\part Write a function \\verb\\hello_name(name)\\ that prints \\verb\\'Hello, name!'\\\n    where \\texttt{name} is a string.\n    \\part Explain the difference between the \\texttt{print} and \\texttt{return}\n    keywords.\n    What would change if instead of \\texttt{print} you would use \\texttt{return}?\n\\end{parts}\n\n% titledquestion hello_world (end)\n\n\\titledquestion{Polynomial} % (fold)\n\\label{sub:polynomial}\n\nWrite a function that evaluates the polynomial $3x^2 - x + 2$.\n\n% titledquestion polynomial (end)\n\n\\titledquestion{Maximum} % (fold)\n\\label{sub:maximum}\n\nWrite a function \\verb\\my_max(x,y)\\ that returns the maximum of $x$ and $y$.\nDo not use the \\texttt{max} function, but use \\texttt{if} instead in following two ways:\n\\begin{parts}\n    \\part Use both \\texttt{if} and \\texttt{else}.\n    \\part Use \\texttt{if} but not \\texttt{else} (nor \\texttt{elif}).\n\\end{parts}\n\n% titledquestion maximum (end)\n\n\\titledquestion{Primes} % (fold)\n\\label{sub:primes}\n\n\\begin{parts}\n    \\part Write a function \\verb\\is_prime(n)\\ that returns \\texttt{True} only if $n$ is prime.\n    \\part Note that apart from 2 and 3, all primes are of the form $6k \\pm 1$\n        (though not all numbers of the form $6k \\pm 1$ are prime of course).\n        Using this, we can improve the computation time by a factor $3$.\n        Update your function to use this.\n    \\part Write a function that returns all primes up to $n$.\n    \\part Write a function that returns the first $n$ primes.\n\\end{parts}\n\n% titledquestion primes (end)\n\n\\titledquestion{Root finding} % (fold)\n\\label{sub:root_finding}\n\nSuppose $f$ is a continuous function and $f(a) < 0$ and $f(b) > 0$ for some known $a$ and $b$.\nFor simplicity, assume $a < b$.\nThen, there must exist some $c$ such that $f(c) = 0$.\n\n\\begin{parts}\n    \\part Write a function \\verb|root(f, a, b)| that takes a function \\verb|f| and two floats\n\\verb|a| and \\verb|b| and returns the root \\verb|c|. 
Hint: check the sign at the midpoint of the interval.\n    \\part Remove the assumption that $a < b$, and that $f(a) < 0$ and $f(b) > 0$, if\n    your current code relies on them.\n    \\part Add a check that prints\\\\ \\verb|'function evals have same sign'|\\\\ if $f(a) >0$\n    and $f(b) >0$ or if $f(a) <0 $ and $f(b) < 0$.\n\\end{parts}\n\n\n\n% titledquestion root_finding (end)\n\n\\end{questions}\n\n% section functions (end)\n", "meta": {"hexsha": "d1f83e56dc1299def2a817037eac31daea5ce4a8", "size": 2581, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "exercises/tex/functions.tex", "max_stars_repo_name": "naskoch/python_course", "max_stars_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2015-08-10T17:46:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-18T21:09:03.000Z", "max_issues_repo_path": "exercises/tex/functions.tex", "max_issues_repo_name": "naskoch/python_course", "max_issues_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/tex/functions.tex", "max_forks_repo_name": "naskoch/python_course", "max_forks_repo_head_hexsha": "84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-04-24T03:31:02.000Z", "max_forks_repo_forks_event_max_datetime": "2019-05-13T07:36:06.000Z", "avg_line_length": 32.6708860759, "max_line_length": 106, "alphanum_fraction": 0.6842309182, "num_tokens": 787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.705785040214066, "lm_q2_score": 0.7853085834000791, "lm_q1q2_score": 0.554259050115476}}
{"text": "%===============================================================================\n% COVID-19 Reproduction Numbers in South Africa\n% Kevin Durant\n% May 2020\n%===============================================================================\n\n\\documentclass[12pt,a4paper]{article}\n\n% \\usepackage{booktabs} % better looking tables.\n\\usepackage{mathtools} % also loads amsmath.\n\\usepackage{microtype}\n\\usepackage{iftex}\n\\ifPDFTeX\n  \\usepackage[T1]{fontenc}\n  \\usepackage[utf8]{inputenc}\n  \\usepackage{amssymb}\n\\else\n  % Unicode-math should be loaded after other maths- or font-related packages.\n  % It loads amsmath and fontspec if necessary, and enables a custom version of\n  % the Latin Modern Math font by default.\n  \\usepackage{unicode-math}\n%   \\setmainfont{STIX Two Text}\n%   \\setmathfont{STIX Two Math}\n\\fi\n\\usepackage{biblatex}\n\\usepackage{svg}\n\\usepackage{hyperref}\n\\addbibresource{cvza.bib}\n\n% General maths commands %======================================================\n\n\\DeclarePairedDelimiter\\lr{\\lparen}{\\rparen}  % sized parentheses.\n\\DeclareMathOperator\\Pb{P}                    % probability.\n\\DeclareMathOperator\\B{B}                     % beta distribution.\n\\DeclareMathOperator\\BP{BP}                   % beta prime distribution.\n\\DeclareMathOperator\\NB{NB}                   % negative binomial distribution.\n\n% Title %=======================================================================\n\n\\title{Model Summary}\n\\author{Kevin Durant}\n\\date{}\n\n% Document %====================================================================\n\n\\begin{document}\n\n\\maketitle\n\n\\section{Definitions} %=========================================================\n\n$k_t$: number of new infections on day $t$. 
\\\\\n$\\lambda_t$: underlying daily rate of infection (and expected number of new\ninfections) on day $t$.\n\n\\section{Assumptions} %=========================================================\n\nFirstly, that $k_t$ depends on $\\lambda_t$ via a negative binomial distribution\nwith dispersion parameter $r$:\n\\begin{equation*}\n  \\Pb(k_t \\mid r, \\lambda_t)\n  = \\NB\\lr*{k_t \\Bigm\\vert r, \\frac{\\lambda}{r + \\lambda} = p_t}.\n\\end{equation*}\n\nSecondly, that the posterior on $p_t$ is a beta distribution (which is the\nconjugate prior for the negative binomial with known dispersion):\n\\begin{align*}\n  \\Pb(p_t \\mid k_1, \\dots, k_t) &= \\B(p_t \\mid \\alpha_t, \\beta_t) \\\\\n  \\Leftrightarrow \\Pb\\lr*{\\frac{\\lambda_t}{r} \\Bigm\\vert k_1, \\dots, k_t}\n    &= \\BP\\lr*{\\frac{\\lambda_t}{r} \\Bigm\\vert \\alpha_t, \\beta_t}.\n\\end{align*}\nAs indicated here, this is equivalent to the assumption of a beta prime prior\non $\\lambda_t/r$.\n\nThirdly, we assume that the predictive prior on $p_t$ is related to the\nposterior on $p_{t-1}$ as follows:\n\\begin{align*}\n  \\Pb(p_t \\mid k_1, \\dots, k_{t-1})\n    &= \\B\\lr*{p_t \\Bigm\\vert \\frac{\\alpha_{t-1}}{c}, \\frac{\\beta_{t-1}}{c}} \\\\\n  \\Leftrightarrow \\Pb\\lr*{\\frac{\\lambda_t}{r} \\Bigm\\vert k_1, \\dots, k_{t-1}}\n    &= \\BP\\lr*{\\frac{\\lambda_t}{r} \\Bigm\\vert\n    \\frac{\\alpha_{t-1}}{c}, \\frac{\\beta_{t-1}}{c}}.\n\\end{align*}\n\n\\section{Results} %=============================================================\n\nThe parameters of the posterior distributions satisfy\n\\begin{align*}\n  \\alpha_t &= \\frac{\\alpha_{t-1}}{c} + k_t\n    = \\frac{a_1}{c^{t-1}} + \\sum_{i=0}^{t-1} \\frac{k_{t-i}}{c^i}, \\\\\n  \\beta_t &= \\frac{\\beta_{t-1}}{c} + r\n    = \\frac{b_1}{c^{t-1}} + \\sum_{i=0}^{t-1} \\frac{r}{c_i},\n\\end{align*}\nand the marginal likelihood of the model parameters $r$ and $c$ is\n\\begin{align*}\n  \\Pb(k_1, \\dots, k_t \\mid r, c)\n    &= \\prod_{t=1}^{t} \\Pb(k_t \\mid k_1, \\dots, k_{t-1}, r, c) \\\\\n  &= \\prod_{t=1}^{t} \\frac{1}{k_t\\B(k_t, r)}\n    \\frac{\\B(a_t + k_t, b_t + r)}{\\B(a_t, b_t)} \\quad (\\text{if } k_t > 0).\n\\end{align*}\n\n\\end{document}", "meta": {"hexsha": "5919be4112bc6e5e050a873986027f6be62a701f", "size": 3732, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/summary.tex", "max_stars_repo_name": "kevdur/covid", "max_stars_repo_head_hexsha": "5893b911afb974b9b00dc6af871a7c7129fd7952", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/summary.tex", "max_issues_repo_name": "kevdur/covid", "max_issues_repo_head_hexsha": "5893b911afb974b9b00dc6af871a7c7129fd7952", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-06-11T10:04:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T21:39:26.000Z", "max_forks_repo_path": "doc/summary.tex", "max_forks_repo_name": "kevdur/covid", "max_forks_repo_head_hexsha": "5893b911afb974b9b00dc6af871a7c7129fd7952", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.5882352941, "max_line_length": 80, "alphanum_fraction": 0.5667202572, "num_tokens": 1158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7057850216484838, "lm_q1q2_score": 0.5542590337626063}}
{"text": "\\documentclass{article}\n\n\\title{Neural Network Initialization}\n\\author{Vladimir Feinberg}\n\n\\input{../old/defs}\n\\usepackage{animate}\n\n\\begin{document}\n\n\\maketitle\n\nInitialization is an important aspect of ANN training. It contributes to improvements both in terms of training speed and generalization. Content is mostly from Dr. Goodfellow's \\nurl{http://www.deeplearningbook.org/}{Deep Learning Book}.\n\nParameters should not be initialized uniformly so as to break symmetry between hidden units; otherwise, the parameters will remain the same during training. This motivates random initialization. Biases, however, are typically constants, heuristically chosen.\n\nWeights are typically Gaussian or uniform, but scale is important. Large scale, which breaks symmetry strongly, must be balanced with stability of the learning algorithm. ReLU activations, since they're linear, suggest that that the scale of the output for a basic ReLU MLP with $\\norm{y}\\sim \\norm{\\vx}\\prod_i\\norm{W^{(i)}}$. In general, the scale of weights should be treated as a hyperparameter. Importantly, loss (such as softmax) not scale-invariant; keep an eye out for initialization.\n\nOutput layer bias should be initialized with the marginal statistics of the training data. Certain initialization schemes may be compatible with positive constant bias initialization, which lets gradients start on the linear part of a ReLU, but it is unclear if this yields improvement over zero bias initialization. (\\nurl{https://research.google.com/pubs/pub45473.html}{Jozefowicz et al 2015}). If a unit acts as a control gate for letting other information pass (like a forget gate in an LSTM), then bias should be initialized as positive there.\n\n\\section{Uniform Random Initialization}\n\n(\\nurl{http://proceedings.mlr.press/v9/glorot10a.html}{Glorot and Bengio 2010}). Normed initialization suggests that a fully connected layer initializes an $n\\times m$ weight matrix entrywise with $\\Uniform\\pa{-u, u}$ where $u=\\sqrt{\\frac{6}{m+n}}$. This approach intends to keep gradient variance the same across layers, and it is very commonly used because of its simplicity.\n\n\\section{Orthogonal Matrix Initialization}\n\n(\\nurl{https://arxiv.org/abs/1312.6120}{Saxe et al 2013}, \\nurl{https://arxiv.org/abs/1412.6558}{Susillo 2014}). Orthogonal matrix initialization with a scaling factor dependent on layer activation is motivated by making hidden units track a varied set of functions, resulting in training time independent of depth (if nonlinearities are the identity).\n\n\\subsection{Greedy Pre-training}\n\n(\\nurl{https://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks}{Bengio et al 2007}). Greedy pre-training is a form of initialization where a FFNN is trained by first training a shallow network to completion, then appending new layers after lower layers are trained. This greedy form of optimization isn't optimal from a training loss perspective, but in addition to speeding up convergence it can be viewed as a form of regularization. However, it has fallen out of favor as other modern techniques (ReLU, dropout, BN, unsupervised pre-training) have been developed.\n\n\\subsection{Unsupervised Pre-training}\n\n(\\nurl{http://www.jmlr.org/papers/v11/erhan10a.html}{Erhan et al 2010}, \\nurl{https://arxiv.org/abs/1412.6597}{Paine et al 2014}). Unsupervised approaches have found success in both improving generalization error and optimization time. 
\n\\section{Orthogonal Matrix Initialization}\n\n(\\nurl{https://arxiv.org/abs/1312.6120}{Saxe et al 2013}, \\nurl{https://arxiv.org/abs/1412.6558}{Sussillo 2014}). Orthogonal matrix initialization with a scaling factor dependent on layer activation is motivated by making hidden units track a varied set of functions, resulting in training time independent of depth (if nonlinearities are the identity).\n\n\\subsection{Greedy Pre-training}\n\n(\\nurl{https://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks}{Bengio et al 2007}). Greedy pre-training is a form of initialization where an FFNN is trained by first training a shallow network to completion, then appending new layers after lower layers are trained. This greedy form of optimization is not optimal from a training loss perspective, but in addition to speeding up convergence it can be viewed as a form of regularization. However, it has fallen out of favor as other modern techniques (ReLU, dropout, BN, unsupervised pre-training) have been developed.\n\n\\subsection{Unsupervised Pre-training}\n\n(\\nurl{http://www.jmlr.org/papers/v11/erhan10a.html}{Erhan et al 2010}, \\nurl{https://arxiv.org/abs/1412.6597}{Paine et al 2014}). Unsupervised approaches have found success in both improving generalization error and optimization time. Here, layers from unsupervised learners, such as autoencoders or GANs, might be used (\\nurl{https://arxiv.org/abs/1606.03498}{Salimans et al 2016}). An early technique was to use TICA (\\nurl{http://people.ee.duke.edu/~lcarin/Iulian9.22.06b.pdf}{Hyv\\\"{a}rinen et al 2001}), introduced by \\nurl{https://papers.nips.cc/paper/4136-tiled-convolutional-neural-networks}{Le et al 2010}.\n\n\\subsection{Layer-sequential Unit-variance (LSUV)}\n\n(\\nurl{https://arxiv.org/abs/1511.06422}{Mishkin and Matas 2015}). Tested on FFNNs with results comparable to BN, greedy pre-training, and other initialization schemes, LSUV offers a cheap initialization procedure which effectively does a single round of BN at initialization time. There is not yet much published experience with LSUV in practice, nor with its extensions to RNNs.\n\n\n\\end{document}\n% LocalWords: Dumoulin Visin Boureau Ngiam autoplay Theano Hyv rinen Erhan", "meta": {"hexsha": "0bffa1682ead90d11096b48ca0da1f5f1df366db", "size": 4307, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "deep-learning/initialization.tex", "max_stars_repo_name": "vlad17/shallow-ml-notes", "max_stars_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2017-06-27T18:39:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T18:48:49.000Z", "max_issues_repo_path": "deep-learning/initialization.tex", "max_issues_repo_name": "vlad17/shallow-ml-notes", "max_issues_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deep-learning/initialization.tex", "max_forks_repo_name": "vlad17/shallow-ml-notes", "max_forks_repo_head_hexsha": "6535ae666b22847303a2ec72012b31ccb4144900", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-03-23T10:45:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-07T06:21:43.000Z", "avg_line_length": 100.1627906977, "max_line_length": 615, "alphanum_fraction": 0.7921987462, "num_tokens": 1056, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599563, "lm_q2_score": 0.7853085808877581, "lm_q1q2_score": 0.5542590289027027}}
{"text": "Previously, the idea of using \\textit{optimal control}\\cite{Kirk2004} techniques (also referred to as \\textit{trajectory optimization}) for robot motion planning and control was presented. In this chapter, the optimal control problem is revisited in more detail, including a brief discussion on the use of both \\textit{indirect} and \\textit{direct} methods.\n\n\\notessection{Optimal Control and Trajectory Optimization}\nConsider an optimal control problem (OCP) formulated as the following optimization problem:\n\\begin{equation} \\label{eq:ocp}\n\\begin{split}\n\\underset{\\bm{u},\\x}{\\text{minimize}} \\:\\: & h(\\x(t_f),t_f) + \\int_{t_0}^{t_f} g(\\x(t),\\bm{u}(t),t) dt,\\\\\n\\text{s.t.} \\:\\:& \\dot{\\x}(t) = a(\\x(t),\\bm{u}(t),t), \\\\\n&\\x(t_0) = \\x_0,\n\\end{split}\n\\end{equation}\nwhere $\\x \\in \\R^n$ is the robot state, $\\bm{u} \\in \\R^m$ is the control input, $x_0$ is a known robot initial condition, $a(\\x, \\bm{u}, t)$ is a function describing the robot's dynamics, and the functions $h(\\x(t_f), t_f)$ and $g(\\x(t), \\bm{u}(t), t)$ define the cost function\\footnote{State constraints $\\x(t) \\in \\mathcal{X}$ and control constraints $\\bm{u}(t) \\in \\mathcal{U}$ are also often included in practice, but for simplicity are not included here.}.\nThe goal is to solve the optimal control problem \\eqref{eq:ocp} in order to define an \\textit{optimal} open-loop control law of the form\n\\begin{equation*}\n    \\bm{u}^*(t) = f(\\x(t_0), t).\n\\end{equation*}\n\nUnfortunately, this optimization problem is particularly challenging to solve since it is \\textit{infinite-dimensional}\\footnote{It is referred to as infinite-dimensional because it is an optimization over functions and not just a finite set of parameters.}. Methods for solving \\eqref{eq:ocp} can be categorized as either \\textit{indirect} or \\textit{direct}. Both types of methods (almost always) require some form of discretization, such that the problem can be solved numerically. However, the way in which the problem is discretized is what makes each method unique.\n\n\\begin{enumerate}\n    \\item \\textit{Indirect methods} follow a ``first optimize, then discretize\" approach. These methods first derive conditions for optimality of the original infinite-dimensional problem. A solution is then recovered by discretizing the optimality conditions.\n    \\item \\textit{Direct methods} follow a ``first discretize, then optimize\" approach. These methods first discretize the original problem into a finite-dimensional problem (called a \\textit{nonlinear program}), which is then solved numerically to recover an optimal solution.\n\\end{enumerate}\n\n\n\\subsection{Indirect Methods}\nAs previously mentioned, indirect methods solve the optimal control problem \\eqref{eq:ocp} by deriving \\textit{necessary optimality conditions} (NOC). A numerical procedure is then used to find solutions that satisfy these conditions of optimality, thereby ``indirectly'' solving the original OCP. As a brief example, for unconstrained finite-dimensional optimization problems the classic first-order necessary optimality condition\\footnote{It is important to note that these conditions are called necessary because they are ``necessary'', but they may not be ``sufficient''. In other words there may exist solutions that satisfy the NOCs but do not solve the original problem.} is that the gradient of the function must be zero (e.g. 
minimize $f(x) = x^2$ with $x \\in \\R$ has NOC $\\frac{df}{dx} = 0$).\n\n\\subsubsection{Constrained Finite-Dimensional Optimization} \\label{subsubsec:constfiniteopt}\nBefore discussing techniques to derive necessary optimality conditions for the infinite-dimensional OCP \\eqref{eq:ocp}, it is useful to briefly examine analogous conditions in finite-dimensional optimization\\cite{BoydVandenberghe2004}. Consider the equality-constrained finite-dimensional optimization problem:\n\\begin{equation} \\label{eq:finiteopt}\n\\begin{split}\n\\underset{\\x}{\\text{minimize}} \\:\\: &f(\\x), \\\\\n\\text{s.t.} \\:\\:&h_i(\\x) = 0, \\quad i = 1,\\dots,m \\\\\n\\end{split}\n\\end{equation}\nwith variable $\\x \\in \\R^n$. \\\\\n\nNecessary optimality conditions for \\eqref{eq:finiteopt} are derived by first forming a function called the Lagrangian $L(\\x, \\blam)$, which augments the objective function with a weighted sum of the constraint functions:\n\\begin{equation} \\label{eq:lagrangian}\n    L(\\x,\\blam) = f(\\x) + \\sum_{i=1}^m \\lambda_i h_i(\\x),\n\\end{equation}\nwhere $\\blam \\in \\R^m$ is a vector of \\textit{Lagrange multipliers}. The NOCs are then given as:\n\\begin{equation} \\label{eq:finitenoc}\n\\begin{split}\n\\nabla_{\\x} L(\\x^*,\\blam^*) &= 0, \\\\\n\\nabla_{\\blam} L(\\x^*,\\blam^*) &= 0,\n\\end{split}\n\\end{equation}\nwhich are the gradients of the Lagrangian with respect to the variables $\\x$ and the multipliers $\\blam$. Note that the NOCs \\eqref{eq:finitenoc} are a set of $n+m$ \\textit{algebraic} equations with $n+m$ unknowns.\n
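\nAs a small worked instance (added here purely for illustration): minimizing $f(\\x) = x_1^2 + x_2^2$ subject to $h_1(\\x) = x_1 + x_2 - 1 = 0$ gives the Lagrangian $L = x_1^2 + x_2^2 + \\lambda_1(x_1 + x_2 - 1)$, whose NOCs\n\\begin{equation*}\n2x_1^* + \\lambda_1^* = 0, \\quad 2x_2^* + \\lambda_1^* = 0, \\quad x_1^* + x_2^* - 1 = 0,\n\\end{equation*}\nhave the unique solution $x_1^* = x_2^* = \\frac{1}{2}$, $\\lambda_1^* = -1$.\n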
\nIn contrast, it will be seen next that the NOCs for infinite-dimensional problems are not algebraic, but rather differential.\n\n\\subsubsection{Necessary Optimality Conditions}\nAnalogously to the Lagrangian \\eqref{eq:lagrangian} in finite-dimensional optimization, the first step to defining the NOCs for the infinite-dimensional OCP \\eqref{eq:ocp} is to define a function called the \\textit{Hamiltonian}:\n\\begin{equation} \\label{eq:hamiltonian}\nH(\\x(t), \\bu(t), \\p(t), t) := g(\\x(t), \\bu(t), t) + \\p^\\top (t)a(\\x(t), \\bu(t), t),\n\\end{equation}\nwhere $\\p(t) \\in \\R^n$ is a multiplier referred to as a \\textit{costate}. The NOCs are then given by a set of differential and algebraic equations:\n\\begin{equation} \\label{eq:optnoc}\n\\begin{split}\n\\dot{\\x}^*(t) &= \\frac{\\partial{H}}{\\partial{\\p}}(\\x^*(t), \\bu^*(t), \\p^*(t), t), \\\\\n\\dot{\\p}^*(t) &= -\\frac{\\partial{H}}{\\partial{\\x}}(\\x^*(t), \\bu^*(t), \\p^*(t), t), \\\\\n0 &= \\frac{\\partial{H}}{\\partial{\\bu}}(\\x^*(t), \\bu^*(t), \\p^*(t), t),\n\\end{split}\n\\end{equation}\nwhich must be satisfied for all $t \\in [t_0, t_f]$. These NOCs consist of $2n$ first order differential equations and $m$ algebraic equations. Identifying unique solutions to the $2n$ differential equations requires $2n$ boundary conditions (actually $2n+1$ if the final time $t_f$ is not fixed). The initial condition $\\x^*(t_0) = \\x_0$ specifies $n$ of these conditions, and the remaining conditions are given by\n\\begin{equation} \\label{eq:boundarycond}\n\\begin{split}\n&\\big(\\frac{\\partial{h}}{\\partial{\\x}}(\\x^*(t_f), t_f) - \\p^*(t_f)\\big)^\\top \\delta{\\x_f}\\\\\n& + \\big(H(\\x^*(t_f), \\bu^*(t_f), \\p^*(t_f), t_f) +\\frac{\\partial{h}}{\\partial{t}}(\\x^*(t_f), t_f)\\big)\\delta t_f = 0,\\\\\n\\end{split}\n\\end{equation}\nwhere $\\delta \\x_f$ and $\\delta t_f$ are referred to as \\textit{variations}. If either the final time or final state is fixed in the optimal control problem, the corresponding variation is forced to be zero, which changes the boundary conditions \\eqref{eq:boundarycond}. The resulting boundary conditions for the four possible scenarios are now summarized:\n\n\\paragraph{Fixed Final Time and Fixed Final State:} If both $t_f$ and $\\x(t_f)$ are fixed, both variations $\\delta t_f$ and $\\delta \\x_f$ are set to zero. In this case the boundary conditions \\eqref{eq:boundarycond} are trivially satisfied, and the remaining boundary conditions on the NOCs \\eqref{eq:optnoc} are given by:\n\\begin{equation*}\n\\begin{split}\n&\\x^*(t_0) = \\x_0, \\\\\n&\\x^*(t_f) = \\x_f.\n\\end{split}\n\\end{equation*}\n\n\\paragraph{Fixed Final Time and Free Final State:} If only $t_f$ is fixed, then only the variation $\\delta t_f = 0$. In this case the conditions \\eqref{eq:boundarycond} simplify and the boundary conditions for the NOCs \\eqref{eq:optnoc} are given by:\n\\begin{equation*}\n\\begin{split}\n&\\x^*(t_0) = \\x_0, \\\\\n&\\frac{\\partial h }{\\partial \\x} (\\x^* (t_f), t_f) - \\p^*(t_f) = 0.\n\\end{split}\n\\end{equation*}\n\n\\paragraph{Free Final Time and Fixed Final State:} If only $\\x_f$ is fixed, then only the variation $\\delta \\x_f = 0$. In this case the conditions \\eqref{eq:boundarycond} simplify and the boundary conditions for the NOCs \\eqref{eq:optnoc} are given by:\n\\begin{equation*}\n\\begin{split}\n&\\x^*(t_0) = \\x_0, \\\\\n&\\x^*(t_f) = \\x_f, \\\\\n&H(\\x^*(t_f), \\bu^*(t_f), \\p^*(t_f), t_f) +\\frac{\\partial{h}}{\\partial{t}}(\\x^*(t_f), t_f) = 0.\n\\end{split}\n\\end{equation*}\nNote that in this case since the final time is free an additional boundary condition is added, so there are now $2n + 1$ total conditions.\n\n\\paragraph{Free Final Time and Free Final State:} If neither $t_f$ nor $\\x(t_f)$ is fixed, then the boundary conditions for the NOCs \\eqref{eq:optnoc} are given by:\n\\begin{equation*}\n\\begin{split}\n&\\x^*(t_0) = \\x_0, \\\\\n&\\frac{\\partial h }{\\partial \\x} (\\x^* (t_f), t_f) - \\p^*(t_f) = 0, \\\\\n&H(\\x^*(t_f), \\bu^*(t_f), \\p^*(t_f), t_f) +\\frac{\\partial{h}}{\\partial{t}}(\\x^*(t_f), t_f) = 0.\n\\end{split}\n\\end{equation*}\nAgain, since the final time is free an additional boundary condition is added such that there are $2n+1$ total. Note that the last two conditions are both extracted from \\eqref{eq:boundarycond} because the variations $\\delta \\x_f$ and $\\delta t_f$ are independent.\n\n\\subsubsection{Two-Point Boundary Value Problems}\nFinding solutions that satisfy the necessary optimality conditions \\eqref{eq:optnoc} for the optimal control problem is challenging. In particular, any solution must satisfy a set of $2n$ differential equations with boundary conditions specified at both $t_0$ and $t_f$. The problem of finding solutions to differential equations with boundary conditions specified at two points is called a \\textit{two-point boundary value problem}.\nLuckily, numerical procedures have been developed for solving these types of problems. For example, the \\texttt{scikits.bvp\\_solver} package in Python and the function \\texttt{bvp4c} in Matlab implement schemes for solving these problems.\n\nMost solvers for two-point boundary value problems typically assume the NOCs \\eqref{eq:optnoc} and their boundary conditions are expressed in the standard form:\n\\begin{equation} \\label{eq:standardtpbvp}\n\\dot{\\z} = g(\\z, t), \\quad l(\\z(t_0), \\z(t_f)) = 0.\n\\end{equation}\nHowever, some types of problems may not directly fit into this standard form. For such instances, it is sometimes possible to convert a non-standard form problem into the standard form \\eqref{eq:standardtpbvp} \\cite{AscherRussel1981}.\n\nIn optimal control settings one common case where the two-point boundary value problem cannot directly be expressed in standard form is free final time problems, where $t_f$ needs to be determined but does not have any associated dynamics. A useful trick in this case is to define a new variable $\\tau = \\frac{t}{t_f} \\in [0,1]$ to replace the time variable $t$ (since $t_f$ is not known in advance, whereas $\\tau_f = 1$ is). With this new variable the following changes can be made:\n\\begin{enumerate}\n    \\item Replace all derivatives with respect to $t$ with derivatives with respect to $\\tau$, using $\\frac{d(\\cdot)}{d\\tau} = t_f \\frac{d(\\cdot)}{dt}$ (chain rule).\n    \\item Introduce a ``dummy'' state $r$ that corresponds to $t_f$ with dynamics $\\dot{r} = 0$.\n    \\item Replace $t_f$ with $r$ in all NOCs and in all boundary conditions.\n\\end{enumerate}\nThe ``dummy'' state $r$ can then be included in the vector $\\z$ and the NOCs expressed in the standard form \\eqref{eq:standardtpbvp}. In summary, this approach can be thought of as ``tricking'' the standard-form solver into thinking that the final time is 1 and that $t_f$ is actually a state with dynamics (although the dynamics are $\\dot{t}_f = 0$).\n\n\n\\begin{example}[Free Final Time OCP] \\label{ex:ocp}\n\\theoremstyle{definition}\nConsider a double integrator system\n\\begin{equation*}\n    \\ddot{x} = u,\n\\end{equation*}\nwhere $x \\in \\R$ is the state and $u \\in \\R$ is the control input. The control task is to find a trajectory that minimizes the cost function\n\\begin{equation*}\nJ = \\frac{1}{2}\\alpha t_f^2 + \\int_{0}^{t_f} \\frac{1}{2}\\beta u^2(t) \\mathrm{d}t,\n\\end{equation*}\nand satisfies the boundary conditions\n\\begin{equation*}\nx(0) = 10,\\quad \\dot{x}(0) = 0,\\quad x(t_f) = 0,\\quad \\dot{x}(t_f) = 0.\n\\end{equation*}\nThis problem is a free final time problem with a fixed final state, and the cost is formulated to find a trajectory that minimizes a combination of the time to reach the final state and the amount of control effort required to get there. A trade-off between minimizing final time and minimizing control effort is made by adjusting the weighting parameters\\footnote{What does intuition suggest the optimal behavior would be for $\\alpha = 0$ or for $\\beta=0$?} $\\alpha$ and $\\beta$.\n
From the cost function it is apparent that:\n\\begin{equation*}\n\\begin{split}\nh(\\x(t_f), t_f) &= \\frac{1}{2}\\alpha t_f^2, \\quad g(\\x(t), \\bu(t), t) = \\frac{1}{2}\\beta u^2(t), \n\\end{split}\n\\end{equation*}\nand the dynamics equation can be equivalently expressed as a first-order system of ODEs by setting $x_1 = x$ and $x_2 = \\dot{x}$:\n\\begin{equation*}\n\\begin{split}\n\\dot{x}_1 &= x_2, \\\\\n\\dot{x}_2 &= u, \\\\\n\\end{split}\n\\end{equation*}\nsuch that $\\x = [x_1, \\: x_2]^\\top $ and the boundary conditions become:\n\\begin{equation*}\nx_1(0) = 10, \\quad x_2(0) = 0,\\quad x_1(t_f) = 0,\\quad x_2(t_f) = 0.\n\\end{equation*}\n\nNow that the problem has been introduced, the first step is to derive the Hamiltonian:\n\\begin{equation*}\nH = \\frac{1}{2}\\beta u^2 + p_1x_2 + p_2u,\n\\end{equation*}\nwhere $p_1$ and $p_2$ are the costates. Next, the NOCs \\eqref{eq:optnoc} can be derived by taking the partial derivatives of $H$ with respect to $p$, $x$, and $u$:\n\\begin{equation*}\n\\begin{split}\n\\dot{x}_1^* &= x_2^*, \\\\\n\\dot{x}_2^* &= u^*, \\\\\n\\dot{p}_1^* &= 0, \\\\\n\\dot{p}_2^* &=-p_1^*, \\\\\n0 & = \\beta u^* + p_2^*.\n\\end{split}\n\\end{equation*}\nThe next step is then to determine appropriate boundary conditions for the NOCs. As mentioned before, this problem is a free final time and fixed final state problem. Therefore the boundary conditions are given by\n\\begin{equation*}\n\\begin{split}\n&x_1^*(0) = 10, \\\\\n&x_2^*(0) = 0, \\\\\n&x_1^*(t_f) = 0, \\\\\n&x_2^*(t_f) = 0, \\\\\n&\\frac{1}{2}\\beta u^*(t_f)^2 + p_1^*(t_f)x_2^*(t_f)+p_2^*(t_f)u^*(t_f) + \\alpha t_f = 0.\n\\end{split}\n\\end{equation*}\n\nNow, from the last NOC it can be seen that the optimal control $u^*$ can be solved for in terms of the costate $p_2^*$:\n\\begin{equation*}\n    u^* = -\\frac{1}{\\beta}p_2^*.\n\\end{equation*}\nThis expression can then be substituted into the second NOC and into the boundary conditions. At this point the resulting two-point boundary value problem can be expressed in the standard form \\eqref{eq:standardtpbvp} (by using the free final time trick previously discussed), and solved numerically. However, it also turns out that this problem is simple enough to solve analytically as well.\n\n\\paragraph{Analytical Solution:} \nIntegrating the differential equations for the costates $p_1$ and $p_2$ gives:\n\\begin{equation*}\n\\begin{split}\np_1^* &= C_1, \\\\\np_2^* &= -C_1t + C_2, \\\\\n\\end{split}\n\\end{equation*}\nwhere $C_1$ and $C_2$ are constants. Therefore, the optimal control $u^*$ can be expressed as $u^* = \\frac{C_1}{\\beta}t - \\frac{C_2}{\\beta}$ and the states $x_1$ and $x_2$ can be integrated to yield:\n\\begin{equation*}\n\\begin{split}\nx_2^* &= \\frac{C_1}{2\\beta}t^2 - \\frac{C_2}{\\beta}t + C_3, \\\\\nx_1^* &= \\frac{C_1}{6\\beta}t^3 - \\frac{C_2}{2\\beta}t^2 + C_3t + C_4, \\\\\n\\end{split}\n\\end{equation*}\nwhere $C_3$ and $C_4$ are additional constants. There are now five unknown quantities, $C_1$, $C_2$, $C_3$, $C_4$, and $t_f$, which can be determined by leveraging the five boundary conditions. In particular from the condition $x_1^*(0) = 10$ and $x^*_2(0) = 0$ it is easy to see that $C_3 = 0$ and $C_4 = 10$. 
The remaining boundary conditions can then be used to analytically solve for the remaining constants; in particular,\n\\begin{equation*}\nt_f = \\Big(1800 \\, \\frac{\\beta}{\\alpha}\\Big)^{1/5}.\n\\end{equation*}\n\nFor a couple of interesting insights, note that as $\\beta \\to 0$ the cost function penalizes the final time relatively more, and from the expression for $t_f$ we can see that $t_f \\to 0$. Additionally, as $\\alpha \\to 0$ the cost function penalizes control effort relatively more, and correspondingly it can be seen in the expression for $t_f$ that $t_f \\to \\infty$. Further, note that the optimal control takes the form\n\\begin{equation*}\n    u^*(t) = \\frac{C_1}{\\beta}t - \\frac{C_2}{\\beta}.\n\\end{equation*}\nThus the control input is linear in time and its magnitude is inversely proportional to $\\beta$.\n\\end{example}\n
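\nTo make the preceding discussion concrete, here is a minimal sketch of solving this example numerically with SciPy's \\texttt{solve\\_bvp} (the solver choice, variable names, parameter values, and initial guess are all illustrative, and the guess may need tuning):\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_bvp\n\nalpha, beta = 1.0, 1.0\n\ndef odes(tau, z):\n    # z = [x1, x2, p1, p2, r]; r is the dummy state for t_f, and\n    # d(.)/dtau = r * d(.)/dt on the rescaled interval tau in [0, 1].\n    x1, x2, p1, p2, r = z\n    u = -p2 / beta  # from the stationarity condition 0 = beta*u + p2\n    return np.vstack([r * x2, r * u, np.zeros_like(x1), -r * p1,\n                      np.zeros_like(x1)])\n\ndef bc(za, zb):\n    # x1(0) = 10, x2(0) = 0, x1(1) = 0, x2(1) = 0, plus the free-final-time\n    # condition H(t_f) + alpha*t_f = 0, i.e. -p2(1)^2/(2*beta) + alpha*r = 0.\n    return np.array([za[0] - 10, za[1], zb[0], zb[1],\n                     -zb[3]**2 / (2 * beta) + alpha * zb[4]])\n\ntau = np.linspace(0.0, 1.0, 50)\nz0 = np.zeros((5, tau.size))\nz0[0] = 10 * (1 - tau)  # rough guess for x1\nz0[4] = 5.0             # rough guess for t_f\nsol = solve_bvp(odes, bc, tau, z0)\nprint(sol.y[4, 0], (1800 * beta / alpha) ** 0.2)  # t_f vs. analytic value\n\\end{verbatim}\n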
\n\\subsection{Direct Methods}\nUnlike indirect methods, direct methods do not require a derivation of the necessary optimality conditions. Instead these methods directly discretize the original optimal control problem \\eqref{eq:ocp} to turn it into a finite-dimensional constrained optimization problem called a \\textit{nonlinear programming problem}.\n\nWhile several approaches for discretizing the OCP exist, one simple approach is to just use a forward Euler time discretization. Recall that the forward Euler time discretization method (the simplest of the Runge-Kutta methods) can be used to numerically solve differential equations. In particular, with the choice of a time step $h_i$ the differential equations $\\dot{\\x} = a(\\x, \\bu, t)$ are discretized as:\n\\begin{equation} \\label{eq:hdynamics}\n\\x_{i+1} = \\x_{i} + h_ia(\\x_i, \\bu_i, t_i),\n\\end{equation}\nwhere $\\x_i = \\x(t_i)$, $\\bu_i = \\bu(t_i)$, and $t_{i+1} - t_i = h_i$. With this recursive expression \\eqref{eq:hdynamics}, an initial condition $\\x(t_0)$, and a sequence of inputs $\\bu(t_i)$ for $i \\geq 0$, the states $\\x(t_i)$ can be computed easily.\nSuppose the optimal control problem \\eqref{eq:ocp} was defined over the time interval $[t_0, t_f]$. Applying a forward Euler time discretization essentially partitions this interval into a finite grid of $N+1$ times $\\{t_0, t_1, \\dots, t_N\\}$, where $t_N = t_f$ and the time step between each is $h_i = t_{i+1} - t_i$. The parameters of the optimization problem then simply become the states and controls at these times, $\\x_i = \\x(t_i)$ and $\\bu_i = \\bu(t_i)$ for $i = 0,\\dots,N$.\n\nRewriting the original OCP \\eqref{eq:ocp} as a function of the discrete set of parameters $t_i$, $\\x_i$, and $\\bu_i$ requires modifications to both the constraints and the cost function. First, the recursive formula \\eqref{eq:hdynamics} is used to replace the dynamics constraint $\\dot{\\x} = a(\\x, \\bu, t)$ in the OCP\\footnote{The original dynamics model $\\dot{\\x} = a(\\x, \\bu, t)$ is sometimes called the \\textit{continuous time} model and the recursive formula $\\x_{i+1} = \\x_{i} + ha(\\x_i, \\bu_i, t_i)$ is called the \\textit{discrete time} model.}. Second, updating the cost function requires a numerical approximation of the integral, for example using one of the Newton-Cotes formulas, the simplest of which yields the approximation:\n\\begin{equation*}\n\\int_{t_0}^{t_f} g(\\x(t),\\bm{u}(t),t) dt \\approx \\sum_{i=0}^{N-1} h_i g(\\x_i,\\bm{u}_i,t_i).\n\\end{equation*}\nThe OCP \\eqref{eq:ocp} can now be expressed completely as the finite-dimensional nonlinear program (NLP):\n\\begin{equation} \\label{eq:nlp}\n\\begin{split}\n\\underset{\\bm{u}_i,\\x_i}{\\text{minimize}} \\:\\: & h(\\x_N,t_N) + \\sum_{i=0}^{N-1} h_i g(\\x_i,\\bm{u}_i,t_i),\\\\\n\\text{s.t.} \\:\\:& \\x_{i+1} = \\x_{i} + h_ia(\\x_i, \\bu_i, t_i), \\quad i = 0, \\dots, N-1, \\\\\n&\\x_0 = \\x(t_0).\n\\end{split}\n\\end{equation}\n\n\\subsection{Consistency of Time Discretization}\nThe finite-dimensional problem \\eqref{eq:nlp} is only an \\textit{approximation} of the original problem \\eqref{eq:ocp}, so it is important to justify that this approximation method is \\textit{consistent} with the original problem. This is accomplished by taking a look at the necessary optimality conditions for the NLP \\eqref{eq:nlp} and comparing them to the necessary optimality conditions for the original OCP \\eqref{eq:ocp}.\n\nRecall that the necessary conditions of optimality for equality-constrained finite-dimensional optimization problems have previously been discussed in Section \\ref{subsubsec:constfiniteopt}. In particular, the Lagrangian is first formulated, which for \\eqref{eq:nlp} takes the form:\n\\begin{equation*}\nL = h(\\x_N,t_N) + \\sum_{i=0}^{N-1} h_i g(\\x_i,\\bm{u}_i,t_i) + \\sum_{i=0}^{N-1} \\blam_i^\\top (\\x_{i} + h_ia(\\x_i, \\bu_i, t_i) - \\x_{i+1}).\n\\end{equation*}\nNote that even though the initial condition constraint is included in \\eqref{eq:nlp} it can be ignored in the Lagrangian by simply assuming $\\x_0$ is not actually a decision variable in the optimization problem (since it is fixed).\nThe NOCs are then given by:\n\\begin{equation} \\label{eq:dirnoc}\n\\begin{split}\n\\nabla_{\\x_i} L = h_i \\frac{\\partial g}{\\partial \\x}(\\x_i, \\bu_i) + h_i\\big(\\frac{\\partial a}{\\partial \\x}(\\x_i, \\bu_i)\\big)^\\top  \\blam_i + (\\blam_i - \\blam_{i-1}) &= 0, \\quad i = 1,\\dots,N-1 \\\\\n\\nabla_{\\x_N} L = \\frac{\\partial h}{\\partial \\x}(\\x_N) - \\blam_{N-1} &= 0, \\\\\n\\nabla_{\\bu_i} L = h_i \\frac{\\partial g}{\\partial \\bu}(\\x_i, \\bu_i) + h_i\\big(\\frac{\\partial a}{\\partial \\bu}(\\x_i, \\bu_i)\\big)^\\top  \\blam_i &= 0, \\quad i = 0, \\dots, N-1\\\\\n\\x_{i} + h_ia(\\x_i, \\bu_i, t_i) - \\x_{i+1} &= 0, \\quad i = 0,\\dots,N-1\\\\\n\\end{split}\n\\end{equation}\n\nNow, from the indirect method with equations \\eqref{eq:hamiltonian}, \\eqref{eq:optnoc}, and boundary conditions \\eqref{eq:boundarycond} with fixed final time and free final state, the NOCs for the infinite-dimensional OCP can be written as:\n\\begin{equation} \\label{eq:indinoc}\n\\begin{split}\n\\frac{\\partial g}{\\partial \\x}(\\x(t), \\bu(t)) + \\big(\\frac{\\partial a}{\\partial \\x}(\\x(t), \\bu(t))\\big)^\\top  \\p(t) + \\dot{\\p}(t) &= 0, \\quad t \\in [t_0,t_f] \\\\\n\\frac{\\partial h}{\\partial \\x}(\\x(t_f)) - \\p(t_f) &= 0, \\\\\n\\frac{\\partial g}{\\partial \\bu}(\\x(t), \\bu(t)) + \\big(\\frac{\\partial a}{\\partial \\bu}(\\x(t), \\bu(t))\\big)^\\top  \\p(t) &= 0, \\quad t \\in [t_0, t_f]\\\\\n\\dot{\\x}(t) - a(\\x(t), \\bu(t), t) &= 0, \\quad t\\in[t_0,t_f]\\\\\n\\x_0 - \\x(t_0) &= 0.\n\\end{split}\n\\end{equation}\n\nThe NOCs \\eqref{eq:dirnoc} for the discretized problem and the NOCs for the original OCP \\eqref{eq:indinoc} are remarkably similar.\n
In fact, the NOCs \\eqref{eq:dirnoc} can be seen as themselves simply the discretized versions of \\eqref{eq:indinoc}. To see this, simply perform a forward Euler discretization of the equations in \\eqref{eq:indinoc} with:\n\\begin{equation*}\n\\begin{split}\n\\dot{\\p}(t) = \\frac{\\blam_{i} - \\blam_{i-1}}{h_i}, \\quad \\p(t_i) = \\blam_i, \\quad i = 0, \\dots, N-1, \\\\\n\\dot{\\x}(t) = \\frac{\\x_{i+1} - \\x_{i}}{h_i}, \\quad \\x(t_i) = \\x_i, \\quad \\bu(t_i) = \\bu_i, \\quad i = 0, \\dots, N-1.\n\\end{split}\n\\end{equation*}\nTherefore, as the time step $h_i \\xrightarrow{} 0$ the NOCs for the discretized (direct method) problem converge to the NOCs derived directly for the original infinite-dimensional OCP (indirect method)!\n\n\\subsection{Exercises}\n\\subsubsection{Optimal Control and Trajectory Optimization}\nComplete \\textit{Extra Problem:  Optimal Control and Trajectory Optimization} located in the online repository:\n\n\\vspace{\\baselineskip}\n\n\\url{https://github.com/PrinciplesofRobotAutonomy/AA274A_HW1},\n\n\\vspace{\\baselineskip}\n\nwhere you will compute a dynamically feasible and \\textit{optimal} trajectory for a unicycle robot by using an indirect method to set up the necessary optimality conditions and solve them using a two-point boundary value solver.", "meta": {"hexsha": "533eb065d9e23c93b98d875719308d16891d0483", "size": 23202, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/source/ch04.tex", "max_stars_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_stars_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-03-23T16:03:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T14:15:38.000Z", "max_issues_repo_path": "tex/source/ch04.tex", "max_issues_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_issues_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/source/ch04.tex", "max_forks_repo_name": "StanfordASL/Principles-of-Robot-Autonomy", "max_forks_repo_head_hexsha": "852ce0fd1361d95576f72558d2c29d8610ced652", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.0068965517, "max_line_length": 802, "alphanum_fraction": 0.7162744591, "num_tokens": 7336, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.7341195327172401, "lm_q1q2_score": 0.5541978374005047}}
{"text": "\\documentclass[11pt]{amsart}\n\\usepackage{geometry}                % See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   % ... or a4paper or a5paper or ... \n%\\geometry{landscape}                % Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{epstopdf}\n\\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}\n\n\\title{Category cheat sheet for Haskellers}\n\\author{Mark Hopkins}\n%\\date{}                                           % Activate to display a given date or no date\n\\DeclareMathOperator{\\End}{End}\n\\DeclareMathOperator{\\Coend}{Coend}\n\\DeclareMathOperator{\\Hom}{Hom}\n\n\\newcommand{\\myvec}[1]{\\vec{#1}}\n\\newcommand{\\cat}[1]{\\mathbf{#1}}\n\\newcommand{\\op}[1]{#1^{\\text{op}}}\n\n\\newcommand{\\blank}{{-}}\n\n%\\def\\cat{Cat}\n\\begin{document}\n%\\newcommand{cat}[1]{\\boldmath{#1}}\n    \\maketitle\n\n    \\subsection*{Products}\n\n    $\\cat{C}$ has products if\n    $\n    \\cat{C}(C, A \\times B) \\cong \\cat{C}(C, A) \\times \\cat{C}(C, B)\n    $\n\n    \\subsection*{Coproducts}\n\n    $\\cat{C}$ has coproducts or sums if\n    $\n    \\cat{C}(A + B, C) \\cong \\cat{C}(A,C) \\times \\cat{C}(B,C)\n    $\n\n    \\subsection*{Quotient}\n    A quotient of a set is a way of identifying elements.\n\n    \\subsection*{Enriched categories}\n    This means doing category theory with a category $\\mathcal V$ in place of $\\cat{Set}$.\n    This means there is a $\\mathcal V$-object of object of objects and hom-objects instead of hom-sets.\n\n    $\\mathcal V$ should be a complete, cocomplete, biclosed monoidal category.\n%\\section{}\n%\\subsection{}\n\n    \\subsection*{Complete category}\n    A category is complete if it has all limits.\n\n    \\subsection*{Cocomplete category}\n    A category is cocomplete if it has all colimits.\n\n    \\subsection*{Profunctor}\n    A profunctor on $\\cat C$ is a functor $\\op{\\cat C} \\times {\\cat c} \\to \\cat(Set)$\n\n    \\subsection*{Ends}\n    If $F:\\op{\\cat C} \\times \\cat C \\to \\cat D$\n    then we can form the ``diagonal'' product $\\prod_{c\\in\\cat C} F(c,c)$ in $\\cat D$.\n\n    Given a morphism $f: c \\to c'$ in $\\cat C$, there are two ways we can apply it to factors in this product:\n    we can apply it covariantly\n\n    \\[\n        F(c, f): F(c,c) \\to F(c', c')\n    \\]\n\n    or contravariantly\n\n    \\[\n        F(f, c'): F(c',c') \\to F(c', c').\n    \\]\n\n    $\\End F = \\int_{\\cat C} F = \\int_{c\\in\\cat C} F(c,c)$\n    is the subobject for which these give the same answer for all $f$.\n\n    It comes with projection morphisms\n    $\\int_{c\\in\\cat C} F(c,c) \\to F(c, c)$ for every $c\\in \\cat{C}$\n    satisfying a universal property.\n\n    The universal quantifier lets us approximate the end of a profunctor in Haskell.\n\n    \\begin{verbatim}\n        data End f = End (forall c. 
        data End f = End (forall c. f c c)\n\n        proj :: End f -> f c c\n        proj (End fcc) = fcc\n    \\end{verbatim}\n\n    \\subsection*{Coends}\n    If $F:\\cat C^{\\text{op}} \\times \\cat C \\to \\cat D$\n    then we can form the ``diagonal'' sum $\\sum_{c\\in\\cat C} F(c,c)$ in $\\cat D$.\n\n    Given a morphism $f: c \\to c'$ in $\\cat C$, there are two ways we can map the mixed term $F(c', c)$ into summands:\n    we can apply $f$ covariantly\n\n    \\[\n        F(c', f): F(c',c) \\to F(c', c')\n    \\]\n\n    or contravariantly\n\n    \\[\n        F(f, c): F(c',c) \\to F(c, c).\n    \\]\n\n    $\\Coend F = \\int^{\\cat C} F = \\int^{c\\in\\cat C} F(c,c)$\n    is the quotient of the sum which identifies these two images for all $f$.\n\n    It comes with injection morphisms\n    $F(c, c) \\to \\int^{c\\in\\cat C} F(c,c)$ for every $c\\in \\cat{C}$\n    satisfying a universal property.\n\n    The existential quantifier lets us approximate the coend of a profunctor in Haskell.\n\n    \\begin{verbatim}\n        data Coend f = forall c. Coend (f c c)\n\n        inj :: f c c -> Coend f\n        inj fcc = Coend fcc\n    \\end{verbatim}\n\n    \\subsection*{Presheaf}\n    A presheaf on $\\cat C$ is a functor $\\op{\\cat C} \\to \\cat{Set}$.\n\n    \\subsection*{Representable functor}\n    A covariant representable functor is one isomorphic to $\\cat{C}(c, \\blank)$.\n\n    In Haskell this means a datatype defined using only products (no sums).\n    For instance\n\n    \\begin{verbatim}\n        data Pair a = Pair a a deriving (Eq, Show, Functor)\n    \\end{verbatim}\n\n    \\verb|Pair a| is isomorphic to \\verb|Bool -> a|, so it's representable, and represented by \\verb|Bool|.\n\n    You can think of it as an indexable data structure, with $c$ playing the role of the index.\n    We also call this the reader monad.\n\n    A contravariant representable functor is one isomorphic to $\\cat{C}(\\blank, c)$.\n    It is an example of a presheaf.\n\n    \\subsection*{Yoneda lemma}\n    The Yoneda lemma says that natural transformations \\textit{out of} a representable functor (covariant or\n    contravariant)\n    are tightly controlled.\n\n    \\[\n        [\\op{\\cat{C}}, \\cat{Set}]\\left(\\cat{C}(\\blank, c), G\\right) \\cong G(c)\n    \\]\n\n    In words, if $F$ and $G$ are presheaves on $\\cat C$, and $F$ is represented by $c$, then the natural transformations\n    from $F$ to $G$ line up one-to-one with the elements of the set $G(c)$.\n
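\n    In Haskell, the covariant version of this bijection can be written down directly (a standard folklore encoding; the names here are not from any particular library):\n\n    \\begin{verbatim}\n        {-# LANGUAGE RankNTypes #-}\n\n        -- Natural transformations out of ((->) c) correspond\n        -- exactly to values of g c.\n        toYoneda :: Functor g => g c -> (forall x. (c -> x) -> g x)\n        toYoneda gc f = fmap f gc\n\n        fromYoneda :: (forall x. (c -> x) -> g x) -> g c\n        fromYoneda t = t id\n    \\end{verbatim}\n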
["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.0609756098, "max_line_length": 120, "alphanum_fraction": 0.622289844, "num_tokens": 1633, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.554197836713573}}
{"text": "\\iffalse\nNoether's theorem says that continuous symmetries of physical systems gives rise to conservation laws. In this class we'll see some examples of low dimensional Lie groups and how they give rise to various phenomenon in physics like time dilation and length contraction in special relativity, spin states of electrons.\n\nKeywords: bilinear forms, signature, SO(2), SO(3), Spin, SO(1,3), Minkowski space and relativity, Noether's theorem, Lie groups.\n\nPrereqs: Linear algebra, Group theory\nHomework: Recommended\n\\fi\n\n\n\\input{../preamble}\n\n\n\\begin{document}\n\\title{Rotations}\n\\author{Apurva Nakade}\n\\thispagestyle{fancy}\n\\maketitle\n\n\n\n\\emph{I did not have time to proof-read these notes, these are likely to have more errors than usual :-/}$\\\\\\\\$\n\nLet's start by analyzing the orthogonal group in 2 dimensions $O(2)$.\n\\begin{align}\n\tO(2)\n\t&=\n\t\\left\\{ \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} : \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} \\begin{bmatrix} a & c \\\\ b & d \\end{bmatrix} = I_2\\right\\} \\\\\n\tSO(2)\n\t&=\n\t\\left\\{ \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} : \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} \\begin{bmatrix} a & c \\\\ b & d \\end{bmatrix} = I_2, ad-bc = 1 \\right\\}\n\\end{align}\nBy a direct computation we can show that every element of $O(2)$ is one of the two forms (Exercise \\ref{thm:exO2})\n\\begin{align}\n\t\\label{eq:O2}\n\t\\begin{bmatrix} \\cos \\theta & -\\sin \\theta \\\\ \\sin \\theta & \\cos \\theta \\end{bmatrix}, & \\begin{bmatrix} \\cos \\theta & \\sin \\theta \\\\ \\sin \\theta & -\\cos \\theta \\end{bmatrix}\n\\end{align}\nThese matrices have determinants $1$ and $-1$ respectively and represent rotations and reflections in $\\R^2$. The eigenvalues are of the form $e^{i\\theta}, e^{-i\\theta}$ for the rotation matrices and $\\pm 1$ for the reflection ones and so we get\n\n\\begin{proposition}\n\tEvery matrix in $O(2)$ is either a rotation, in which case it is similar to a matrix of the form $\\begin{bmatrix} e^{i\\theta} & \\\\ & e^{-i\\theta} \\end{bmatrix}$ or a reflection about a line, in which case it is similar to a matrix of the form $\\begin{bmatrix} -1 & \\\\ & 1 \\end{bmatrix}$.\n\\end{proposition}\n\n\n\\section{Orthogonal matrices}\nThis method does not generalize to higher dimensions (or does it?) instead we use eigenvalues to analyze the matrices.\n\n\\begin{thm}[Spectral theorem]\n\tEvery matrix in $O(n)$ and $U(n)$ is diagonalizable over the complex numbers.\n\\end{thm}\nRecall that diagonalizable means that the matrix is similar to a diagonal matrix i.e. it becomes diagonal after doing some base change. Even though $O(n)$ has real entries it's eigenvalues and eigenvectors might be complex i.e. the eigenvectors can be vectors in $\\C^n$ instead of $\\R^n$.\n\nBecause $O(n) \\subseteq U(n)$ it suffices to analyze the eigenvectors of unitary matrices. Let $M \\in U(n)$ be a unitary matrix. By the Spectral theorem there exist $n$ eigenvectors $v_1, \\ldots, v_n \\in \\C^n$ with corresponding eigenvalues $\\lambda_1, \\ldots, \\lambda_n$ i.e. $A v_i = \\lambda_i v_i$. 
Using the definition of unitary matrices we must have\n\\begin{alignat}{4}\n\t               &   & \\innerp{Av_i}{Av_i}                         & = \\innerp{v_i}{v_i} \\\\\n\t\\implies \\quad &   & \\innerp{\\lambda_i v_i}{\\lambda_i v_i}       & = \\innerp{v_i}{v_i} \\\\\n\t\\implies \\quad &   & \\conj \\lambda_i \\lambda_i \\innerp{v_i}{v_i} & = \\innerp{v_i}{v_i} \\\\\n\t\\implies \\quad &   & \\conj \\lambda_i \\lambda_i                   & = 1\n\\end{alignat}\nAs $O(n) \\subseteq U(n)$ the same holds for $O(n)$, so we get the following proposition.\n\n\\begin{proposition}\n\tEvery eigenvalue of a unitary or an orthogonal matrix is a complex number of norm 1 and hence is of the form $e^{i \\theta}$ for some $\\theta$.\n\\end{proposition}\n\n\n\\subsection{Orthogonal matrices in 3 dimensions}\nConsider a matrix $A \\in O(3)$. By the previous section, $A$ has 3 eigenvalues of the form $\\lambda_1 = e^{i \\theta_1}$, $\\lambda_2 = e^{i \\theta_2}$, $\\lambda_3 = e^{i \\theta_3}$ for some $\\theta_1$, $\\theta_2$, $\\theta_3$. But $O(3)$ has real entries and hence the complex eigenvalues of $A$ must come in conjugate pairs. The only way this can happen is if $\\theta_1 = 0$ or $\\pi$ and $\\theta_2 = - \\theta_3$.\n\n\\begin{proposition}\n\tFor any $A \\in O(3)$ the eigenvalues of $A$ are of the form $\\lambda_1 = \\pm 1 , \\lambda_2 = e^{i \\theta}, \\lambda_3 = e^{-i \\theta}$ for some $\\theta$. Further $\\lambda_1 = 1$ iff $A \\in SO(3)$.\n\\end{proposition}\n\nIf $A \\in SO(3)$ then $A$ is similar to\n\\begin{align}\n\t\\label{eq:O3Type1}\n\tA \\sim \\begin{bmatrix} 1 &   &   \\\\ & e^{i \\theta} & \\\\ & & e^{-i\\theta} \\end{bmatrix} \\sim  \\begin{bmatrix} 1 &   &   \\\\ &\\cos \\theta & -\\sin \\theta \\\\ &\\sin \\theta & \\cos \\theta \\end{bmatrix}\n\\end{align}\nThis says that any matrix in $SO(3)$ represents a rotation around an axis.\n\nIf $A \\in O(3) \\setminus SO(3)$ then $A$ is similar to\n\\begin{align}\n\t\\label{eq:O3Type2}\n\tA \\sim \\begin{bmatrix} -1 &   &   \\\\ & e^{i \\theta} & \\\\ & & e^{-i\\theta} \\end{bmatrix} \\sim  \\begin{bmatrix} -1 &   &   \\\\ &\\cos \\theta & -\\sin \\theta \\\\ &\\sin \\theta & \\cos \\theta \\end{bmatrix}\n\\end{align}\nThis says that any matrix in $O(3) \\setminus SO(3)$ represents a rotation around an axis followed by a reflection along the perpendicular plane.\n\n\\begin{proposition}\n\tEvery linear transformation of $\\R^3$ that preserves distances is either a rotation about an axis, or a rotation about an axis followed by a reflection along the perpendicular plane.\n\\end{proposition}\n\n\n\n\n\n\\iffalse\n\\section{$SU(2)$}\nThe arguments above prove that every unitary matrix is similar to a diagonal matrix with entries of the form $e^{i\\theta}$. Since unitary matrices can have complex entries, there are no conditions on the $\\theta$'s.\n\nLet us restrict to $n=2$ and consider the matrices in $SU(2)$\n\n\\begin{align}\n\tSU(2) = \\{ A \\in M_{2 \\times 2}(\\C) : A ^* A = I_2, \\det A = 1\\}\n\\end{align}\n\nAs we did for $O(2)$ we can explicitly write down the matrices in $SU(2)$. Let $\\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix}$ be a matrix in $SU(2)$ then the conditions on $SU(2)$ imply that $d = \\conj a$ and $c = -\\conj b$ and $\\norm{a}^2 + \\norm{b}^2 = 1$ i.e.\n\\begin{align}\n\tSU(2) = \\left\\{ \\begin{bmatrix} a & b \\\\ -\\conj{b} & \\conj{a} \\end{bmatrix} : a, b \\in \\C \\mbox{ and } \\norm{a}^2 + \\norm{b}^2 = 1 \\right\\}\n\\end{align}\nLet $a = x_1 + i y_1$ and $b = x_2 + i y_2$.\n
The the condition $\\norm{a}^2 + \\norm{b}^2 = 1$ is equivalent to $x_1^2 + y_1^2 + x_2^2 + y_2^2 = 1$, but this is exactly the equation of the sphere in $\\R^3$.\n\n\\begin{thm}\n\tAs a (topological) space $SU(2)$ is isomorphic to $S^3$.\n\\end{thm}\nNote that this implies that every point in $S^3$ gives rise to a unitary transformation of $\\C^2$.\n\\fi\n\n\n\n\\section{Quaternions}\nThere is another way to talk about rotations, using quaternions! Recall that \\textbf{quaternions} form a non-abelian group, denoted $\\mathbb{H}$, that is isomorphic as a set to $\\R^4$. Elements of $\\mathbb{H}$ are of the form $a + bi + cj + dk$ and satisfy the relations\n\\begin{align}\n\ti^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j\n\\end{align}\n\n\\iffalse\n\nSimilar to complex numbers we have conjugation and norm on quaternions given by\n\\begin{align}\n\t\\norm{a + bi + cj + dk}^2 & = a^2 + b^2 + c^2 + d^2 \\\\\n\t\\conj{a + bi + cj + dk}   & = a - bi - cj - dk\n\\end{align}\nand we also have the identity\n\\begin{align}\n\t(a + bi + cj + dk).\\conj{(a + bi + cj + dk)} = \\norm{a + bi + cj + dk}^2\n\\end{align}\nIn particular note that if $p$ has norm 1 then $p^-1 = \\conj p$.\n\\fi\n\nA quaternion $p \\in \\mathbb{H}$ defines a linear transformation $\\Phi(p):\\mathbb{H} \\rightarrow \\mathbb{H}$ that sends $v \\mapsto p v p^{-1}$. These transformation turn out to be rotations when restricted to the unit quaternion group!\n\nLet $S\\mathbb{H}$ denote the group of unit quaternions i.e. $\\{ p \\in \\mathbb{H} : \\norm{p} = 1\\}$. We think of $\\R^3$ as the set of \\emph{purely imaginary} quaternions i.e. the vector $(x,y,z)$ represents the quaternion $xi + yj + zk$. It turns out to be the case that when $p \\in S \\mathbb{H}$ the transformation $v \\mapsto p v p^{-1}$ preserves the set of purely imaginary quaternions. In fact a much stronger result holds.\n\n\n\\begin{thm}\n\t\\label{thm:quaternions}\n\tThe map sending $p\\in S\\mathbb{H}$ to $\\Phi(p)$ defines a homomorphism\n\t\\begin{align}\n\t\t\\Phi : S \\mathbb{H} \\rightarrow SO(3)\n\t\\end{align}\n\tThis homomorphism is surjective with kernel $\\Z/2$.\n\\end{thm}\nThe proof of this has several steps and is in Exercises in  \\ref{sec:exQuaternions}.\n\nThe group $S\\mathbb{H}$ shows up in several avatars in various branches of mathematics. It is the spin group in 3 dimensions, denoted $Spin(3)$. Because $SO(3)$ is the group of rotation of $\\R^3$ the above theorem is asserting that there are two quaternions over each rotation of $\\R^3$. In physics this fact becomes relevant because in quantum mechanics certain systems have $S\\mathbb{H}$ as their symmetry groups and for such systems there are is a physical quantity, called \\textbf{spin} which has two possible values for each value of the angular moment.\n\n\n\n\\iffalse\n\\begin{proof}\n\tSuppose $\\norm{p} = 1$ for some $p \\in \\mathbb{H}$. We need to show that for $\\alpha, \\beta \\in \\R^3$ we have $\\innerp{p\\alpha}{p\\beta} = \\innerp{\\alpha}{\\beta}$. We need a good way to manipulate inner products. Let $\\Re(a+bi+cj+dk) = a$ denote the real part of quaternions. Then it is easy to see that $\\innerp{\\alpha}{\\beta} = \\Re(\\alpha \\beta)$. We're reduced to showing thatwhen $\\norm{p} = 1 $ we have\n\t\\begin{align}\n\t\t\\label{eq:random}\n\t\t\\Re(p \\alpha p ^{-1}.p \\beta p ^{-1}) = \\Re{(\\alpha\\beta)}\n\t\\end{align}\n\n\n\tNow we invoke the notion of conjugate quaternions. 
\n\n\n\\iffalse\n\\begin{proof}\n\tSuppose $\\norm{p} = 1$ for some $p \\in \\mathbb{H}$. We need to show that for $\\alpha, \\beta \\in \\R^3$ we have $\\innerp{p\\alpha}{p\\beta} = \\innerp{\\alpha}{\\beta}$. We need a good way to manipulate inner products. Let $\\Re(a+bi+cj+dk) = a$ denote the real part of quaternions. Then it is easy to see that $\\innerp{\\alpha}{\\beta} = \\Re(\\alpha \\beta)$. We're reduced to showing that when $\\norm{p} = 1 $ we have\n\t\\begin{align}\n\t\t\\label{eq:random}\n\t\t\\Re(p \\alpha p ^{-1}.p \\beta p ^{-1}) = \\Re{(\\alpha\\beta)}\n\t\\end{align}\n\n\n\tNow we invoke the notion of conjugate quaternions. Similar to complex numbers we have conjugation and norm on quaternions given by\n\t\\begin{align}\n\t\t\\conj{a + bi + cj + dk}   & = a - bi - cj - dk      \\\\\n\t\t\\norm{a + bi + cj + dk}^2 & = a^2 + b^2 + c^2 + d^2\n\t\\end{align}\n\tand we also have the identity\n\t\\begin{align}\n\t\t(a + bi + cj + dk).\\conj{(a + bi + cj + dk)} = \\norm{a + bi + cj + dk}^2\n\t\\end{align}\n\tIn particular note that if $p$ has norm 1 then $p^{-1} = \\conj p$. Plugging this back in \\eqref{eq:random} gives us the desired result.\n\n\tThe kernel of this homomorphism is exactly the quaternions $p$ such that for all $v$ we have\n\t\\begin{align}\n\t\tp v p ^{-1} = v\n\t\\end{align}\n\tThe only such $p$'s are the purely \\emph{real} quaternions i.e. the ones with no $i,j,k$ components. The only such quaternions of norm 1 are $p = \\pm 1$ and so the kernel of $\\Phi$ is $\\Z/2$.\n\\end{proof}\n\\fi\n\n\n\n\\newpage\n\\section{Exercises}\n\\begin{exercise}\n\t\\label{thm:exO2}\n\tConsider a matrix $A = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} \\in O(2)$.\n\t\\begin{enumerate}\n\t\t\\item Show that for some $\\theta$, $\\phi$ we must have $a = \\cos \\theta$, $b = \\sin \\theta$, and $c = \\cos \\phi$, $d = \\sin \\phi$.\n\t\t\\item Find the relations between $\\theta$ and $\\phi$ and prove that every matrix in $O(2)$ is of the form \\eqref{eq:O2}.\n\t\t\\item Describe the matrices in \\eqref{eq:O2} geometrically and compute their eigenvalues.\n\t\\end{enumerate}\n\\end{exercise}\n\n\n\n\\iffalse\n\\begin{exercise}\n\t\\label{thm:exO(n)}\n\tWhat is wrong with the following proof?\n\tSuppose $A \\in O(n)$ and let $v$ be an eigenvector of $A$ with eigenvalue $\\lambda$ then\n\t\\begin{alignat}{4}\n\t\t         &   & \\innerp{Av}{Av}               & = \\innerp{v}{v} \\\\\n\t\t\\implies &   & \\innerp{\\lambda v}{\\lambda v} & = \\innerp{v}{v} \\\\\n\t\t\\implies &   & \\lambda^2 \\innerp{v}{v}       & = \\innerp{v}{v} \\\\\n\t\t\\implies &   & \\lambda^2                     & =  1\n\t\\end{alignat}\n\tHence every eigenvalue of $A$ is $\\pm 1$.\n\\end{exercise}\n\\fi\n\n\\begin{exercise}\n\tShow that the matrices\n\t\\begin{align}\n\t\t\\begin{bmatrix} 1  &   &   \\\\ &\\cos \\theta & \\sin \\theta \\\\ &\\sin \\theta & -\\cos \\theta \\end{bmatrix}\n\t\t&&\n\t\t\\begin{bmatrix} -1 &   &   \\\\ &\\cos \\theta & \\sin \\theta \\\\ &\\sin \\theta & -\\cos \\theta \\end{bmatrix}\n\t\\end{align}\n\tare in $O(3)$. What do these geometrically represent? Find the matrices of type \\eqref{eq:O3Type1} or \\eqref{eq:O3Type2} to which these are similar.\n\\end{exercise}\n\n\\begin{exercise}\n\tDescribe the matrices in $SO(n)$ geometrically for arbitrary positive integer $n$. Do these matrices still represent rotations? What is the difference between matrices in $SO(2n)$ and matrices in $SO(2n+1)$?\n\\end{exercise}\n\n\\subsection{Quaternions}\n\\label{sec:exQuaternions}\nThe following exercises prove Theorem \\ref{thm:quaternions}.\n\n\\begin{exercise}\n\tThe first step is to figure out how to deal with inner products using quaternions. Let $\\Re(a + bi + cj + dk) = a$ denote the real part of quaternions.\n\t\\begin{enumerate}\n\t\t\\item Show that for two vectors $x, y \\in \\R^3$ the dot product $\\innerp{x}{y}$ is equal to $-\\Re(xy)$.\n\t\t\\item Show that for any quaternion $p$ we have $p \\conj{p} = \\norm{p}^2$, and hence if $p \\in S\\mathbb{H}$ then $p^{-1} = \\conj{p}$.\n\t\t\\item Show that for $p \\in S \\mathbb{H}$ and $v \\in \\mathbb{H}$ we have $\\Re(v) = \\Re(p v p^{-1})$.\n
This implies in particular that $\\Phi(p)$ takes the purely imaginary quaternions to purely imaginary quaternions.\n\t\t\\item Show that for $p \\in S \\mathbb{H}$ and $x, y \\in \\R^3$ we have $ \\innerp{x}{y} = \\innerp{px\\conj{p}}{py\\conj{p}}$.\n\t\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\n\tLet $p \\in S \\mathbb{H}$ be a unit quaternion. The above exercise proves that the transformation $\\Phi(p)$ preserves the dot product.\n\t\\begin{enumerate}\n\t\t\\item Show that for $q \\in S\\mathbb{H}$ we have $\\Phi(pq) = \\Phi(p)\\Phi(q)$ and hence we have a group homomorphism $\\Phi: S\\mathbb{H}\\rightarrow O(3)$.\n\t\t      (It is $O(3)$ and not $O(4)$ because we're looking at the transformations of the space of purely imaginary quaternions.)\n\t\t\\item Argue that because $S\\mathbb{H}$ is connected the image of $\\Phi$ must be contained in $SO(3)$ and hence $\\Phi$ is a homomorphism $S\\mathbb{H} \\rightarrow SO(3)$.\n\t\t\\item Show that for the unit quaternion $p = \\cos (\\theta/2) + \\sin (\\theta/2)(x i+yj+zk)$ the transformation $\\Phi(p)$ fixes the vector $(x,y,z)$. Use this to argue that $\\Phi$ is surjective.\n\t\t\\item Show that the center of $S\\mathbb{H}$ is the set of purely real unit quaternions, i.e. $\\{\\pm 1\\}$. Argue that the kernel of $\\Phi$ is $\\Z/2$.\n\t\\end{enumerate}\n\\end{exercise}\n\n\\begin{exercise}\n\tLet $O_8 \\subseteq \\mathbb{H}$ be the finite quaternion group\n\t\\begin{align}\n\t\tO_8 = \\{ \\pm 1, \\pm i, \\pm j, \\pm k \\}.\n\t\\end{align}\n\tDescribe the image of $O_8$ under the homomorphism $\\Phi$ (defined in Section \\ref{sec:quaternions}).\n\\end{exercise}\n\n\n\\iffalse\n\\begin{exercise}\n\tShow that $O(n)$, $SO(n)$, $U(n)$ and $SU(n)$ are compact subsets of $M_{n \\times n}(\\R \\mbox{ or } \\C)$, but $GL_n(\\R)$, $SL_n(\\R)$, $GL_n(\\C)$ and $SL_n(\\C)$ are not.\n\\end{exercise}\n\n\\begin{exercise}\n\tFind the center of the groups $GL_n(\\R)$ and $SL_n(\\C)$.\n\\end{exercise}\n\n\n\n\\begin{exercise}\n\tA \\textbf{maximal torus} of a matrix group $G$ is a maximal abelian subgroup of $G$ i.e. 
a subgroup $H \\subseteq G$ is called a maximal torus if $H$ is abelian and if an abelian subgroup $H' \\subseteq G$ contains $H$ then $H=H'$.\n\n\tFind a maximal torus of each of the following groups: $U(n)$, $SU(n)$, $SO(2n)$ and $SO(2n+1)$.\n\n\\end{exercise}\n\\fi\n\n\n\\end{document}\n", "meta": {"hexsha": "12f566892291d8faf214f54ce7660d7320dfd896", "size": 15045, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "03 Symmetries of Spaces/02 Rotations.tex", "max_stars_repo_name": "apurvnakade/mc2017", "max_stars_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "03 Symmetries of Spaces/02 Rotations.tex", "max_issues_repo_name": "apurvnakade/mc2017", "max_issues_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "03 Symmetries of Spaces/02 Rotations.tex", "max_forks_repo_name": "apurvnakade/mc2017", "max_forks_repo_head_hexsha": "ebec59bce5ee1979872e0f37208da6abd91dbb75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.1734693878, "max_line_length": 558, "alphanum_fraction": 0.6672648721, "num_tokens": 5076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.734119526900183, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5541978330091212}}
{"text": "\\chapter{Abstract}\n\n{\n% for aesthetic reasons, this page will not have indented paragraphs\n\\setlength{\\parskip}{1ex plus 0.5ex minus 0.2ex}\n\\setlength{\\parindent}{0pt}\n\nIn 1968, Milnor asked if a finitely-generated group could have volume growth that is neither exponential nor polynomial (so-called `intermediate'), and if there is an algebraic classification of groups with polynomial volume growth.\nWe consider the analogous questions for geodesic growth.\n\nWe show that no virtually abelian group can have intermediate geodesic growth.\nIn particular, we completely characterise the geodesic growth for every virtually abelian group.\nWe show that the geodesic growth is either polynomial of an integer degree with rational geodesic growth series, or exponential with holonomic geodesic growth series.\nIn addition, we show that the language of geodesics is blind multicounter.\nThese results hold for each finite weighted monoid generating set of any virtually abelian group.\n\nA direct consequence of Gromov's classification of polynomial volume growth is that if a group has polynomial geodesic growth with respect to some finite generating set, then it is virtually nilpotent. \nUntil now, the only known examples with polynomial geodesic growth were all virtually abelian. \nWe furnish the first example of a virtually 2-step nilpotent group having polynomial geodesic growth with respect to a certain finite generating set.\n\nHolt and R\\\"over proved that finitely-generated bounded automata groups have indexed co-word problems.\nWe sharpen this result to show that their co-word problem is ET0L.\nWe do so using an equivalent machine model known as a cspd automaton.\nThis extends a result of \\citeauthor{ciobanu2018} who showed this for the first Grigorchuk group by explicitly constructing an ET0L grammar.\n\n}\n", "meta": {"hexsha": "d28f647de5891e8d7437b2e6366be430dba2bb67", "size": 1811, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapter/00_Abstract.tex", "max_stars_repo_name": "alexbishop/phd-thesis", "max_stars_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter/00_Abstract.tex", "max_issues_repo_name": "alexbishop/phd-thesis", "max_issues_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter/00_Abstract.tex", "max_forks_repo_name": "alexbishop/phd-thesis", "max_forks_repo_head_hexsha": "06f7d5f3f5fa8e6bdb9aa48796223acd9ba4ae3d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.0740740741, "max_line_length": 232, "alphanum_fraction": 0.8177802319, "num_tokens": 396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7341195269001831, "lm_q2_score": 0.7549149813536518, "lm_q1q2_score": 0.5541978289612034}}
{"text": "%auto-ignore\n\\providecommand{\\MainFolder}{..}\n\\documentclass[\\MainFolder/Text.tex]{subfiles}\n\n\\begin{document}\n\n\\section{Construction of a Hodge propagator for spheres}\n\\allowdisplaybreaks\n\\label{Sec:GreenSphere}\n\nThe standard Riemannian volume form on the round sphere $\\Sph{n}\\subset \\R^{n+1}$ is the restriction of the following closed form on $\\R^{n+1}\\backslash \\{0\\}$:  \n\\[ \\Vol(x) \\coloneqq \\frac{1}{|x|^{n+1}}\\sum_{i=1}^{n+1} (-1)^{i+1} x^i \\Diff x_1 \\dotsm \\widehat{\\Diff x_i} \\dotsm \\Diff x_{n+1}. \\]\nHere $\\widehat{\\Diff x_i}$ means that $\\Diff{x_i}$ is omitted.\nWe denote the Riemannian volume of $\\Sph{n}$ by\n\\[ V\\coloneqq \\int_{\\Sph{n}} \\Vol. \\]\nThe $n$-form $\\HKer$ from Proposition~\\ref{Lemma:HKer} reads\n\\begin{equation*}\n\\HKer =  \\frac{1}{V}\\bigl(\\Pr_1^*\\Vol +  (-1)^{n} \\Pr_2^*\\Vol\\bigr).\n\\end{equation*}\nAccording to Proposition~\\ref{Prop:GKer}, the equation which we want to solve reads\n\\begin{equation} \\label{Eq:GreenKernel}\n\\Dd \\Prpg = \\frac{1}{V}\\bigl((-1)^{n}\\Pr_1^*\\Vol +  \\Pr_2^*\\Vol\\bigr).\n\\end{equation}\nWe denote \n\\[\\tilde{\\Prpg}\\coloneqq V\\Prpg\\quad\\text{and}\\quad \\tilde{\\HKer}\\coloneqq V\\HKer.\\]\n\nThe following lemma will be used to construct a solution to~\\eqref{Eq:GreenKernel}. \n\\begin{Lem}[Relative Poincar\\'e Lemma]\\label{Lem:ChainHtpy}\nLet $M$ be a smooth oriented manifold and $\\psi: [0,1]\\times M \\rightarrow M$ a smooth map. Consider the operator $T: \\DR^*(M) \\rightarrow \\DR^{*-1}(M)$ defined by\n\\[ T(\\eta) \\coloneqq \\FInt{[0,1]} \\psi^*\\eta\\quad\\text{for all }\\eta\\in \\DR(M), \\]\nwhere we integrate along the fiber of the oriented fiber bundle $\\Pr_2: [0,1]\\times M \\rightarrow M$. Then we have\n\\[ \\Dd\\circ T + T\\circ \\Dd = \\psi_1^* - \\psi_0^*. \\]\n\\end{Lem}\n%\n\\begin{proof}\nStokes' formula from Proposition~\\ref{Prop:StokesForm} gives\n\\[ \\Dd \\FInt{[0,1]}\\psi^*\\eta = -\\Bigl( \\FInt{[0,1]} \\Dd \\psi^* \\eta - \\FInt{\\Bdd [0,1]} \\psi^* \\eta \\Bigr) = - \\FInt{[0,1]} \\psi^* \\Dd \\eta + \\psi^*_1 \\eta - \\psi^*_0 \\eta\\]\nfor all $\\eta \\in \\DR(M)$.\n\\end{proof}\n\n\n\\begin{Proposition}[Solution to \\eqref{Eq:GreenKernel}] \\label{Prop:GKerSph}\nFor all $(x,y)\\in (\\Sph{n}\\times \\Sph{n}) \\backslash \\Diag$, let\n\\begin{equation} \\label{Eq:GreenKernelMC1}\n\\Prpg (x,y) \\coloneqq (-1)^{n} \\sum_{k=0}^{n-1} g_k(x,y)\\omega_k(x,y),\n\\end{equation}\nwhere\n\\begin{equation}\\label{Eq:FunctionsGk1}\n g_k(x,y) \\coloneqq \\int_{0}^1 \\frac{t^k(t-1)^{n-1-k}}{(2t(t-1)(1+x\\cdot y) + 1)^{\\frac{n+1}{2}}} \\Diff{t}\n\\end{equation}\nand \n\\begin{equation} \\label{Eq:FormOmega1}\n\\omega_k(x,y) \\coloneqq \\begin{multlined}[t]\\frac{1}{k!} \\frac{1}{(n-1-k)!} \\sum_{\\sigma \\in \\Perm_{n+1}} (-1)^\\sigma x^{\\sigma_1} y^{\\sigma_2} \\Diff{x^{\\sigma_3}} \\dotsm \\Diff{x^{\\sigma_{2+k}}} \\\\ \\Diff{y^{\\sigma_{3+k}}} \\dotsm \\Diff{y^{\\sigma_{n+1}}}.\\end{multlined}\n\\end{equation}\nThe form~\\eqref{Eq:GreenKernelMC1} is a smoooth solution to~\\eqref{Eq:GreenKernel} on $(\\Sph{n}\\times \\Sph{n}) \\backslash \\Diag$.\n\\end{Proposition}\n%\\Red{Can $\\omega$ be written as a product in the sense that in $n=1$ is $\\omega_{00}=x\\cdot Ry$.?}\n%\n\\begin{proof} %Checked on 19.9.17 \nDefine the set\n\\[ N \\coloneqq (\\R^{n+1}_{\\neq 0}\\times\\R^{n+1}_{\\neq 0})\\backslash \\{(x,a x) \\mid x\\in \\R^{n+1}, a>0\\}. 
\\]\nIt is an open thickening of $(\\Sph{n}\\times \\Sph{n})\\backslash \\Diag$ in $\\R^{n+1} \\times \\R^{n+1}\\backslash \\Diag $. Consider the smooth deformation retraction\n\\begin{align*}\n \\psi : [0,1] \\times N & \\longrightarrow N \\\\\n (t,x,y) &\\longmapsto \\psi_t(x,y) \\coloneqq (x,(1-t)y-tx)\n\\end{align*}\nwith \n\\[ \\psi_0(x,y)=(x,y) \\quad \\text{and}\\quad \\psi_1(x,y)= (x,-x)\\quad\\text{for all }(x,y)\\in N. \\]\nThe retraction is depicted in Figure~\\ref{Fig:Retraction}.\n\\begin{figure}\n\\centering\n%\\includegraphics[width=.3\\textwidth,trim=5.1cm 20.5cm 7.6cm 1.3cm,clip]\n\\input{\\GraphicsFolder/contraction.tex}\n\\caption[Retraction of the configuration space $C_2(\\Sph{n})$ to $\\Sph{n}$.]{Retraction $\\psi_t = (\\psi_t^1, \\psi_t^2)$. A point of $\\Sph{n}\\times \\Sph{n}$ is visualized as a pair of points on $\\Sph{n}$.}\\label{Fig:Retraction}\n\\end{figure}\nDenote by $A: \\R^{n+1} \\rightarrow \\R^{n+1}$, $x\\mapsto -x$ the antipodal map. It is easy to see that \n\\[ A^* \\Vol = (-1)^{n+1} \\Vol, \\]\nand hence\n\\begin{equation*}\n\\psi_1^* \\tilde{\\HKer} = \\psi_1^* \\Pr_1^*\\Vol + (-1)^n \\psi_1^*\\Pr_2^* \\Vol  = \\Pr_1^*\\Vol + (-1)^n \\Pr_1^*A^* \\Vol = 0. \n\\end{equation*}\nDefine \n\\begin{equation}  \\label{Eq:GExpr}\n\\Prpg \\coloneqq (-1)^{n+1} \\FInt{[0,1]} \\psi^*\\HKer.\n\\end{equation}\nLet $T: \\DR^*(N)\\rightarrow \\DR^{*-1}(N)$ be the cochain homotopy from Lemma~\\ref{Lem:ChainHtpy} associated to $\\psi$. Because $\\Dd \\HKer=0$, we get\n\\[ \\Dd \\Prpg = (-1)^{n+1} \\Dd T(\\HKer) = (-1)^{n+1}(\\Dd T + T \\Dd)\\HKer = (-1)^{n+1}(\\psi_1^*-\\psi_0^*)\\HKer = (-1)^{n}\\HKer. \\]\nFor every $i=1$,~$\\dotsc$, $n+1$, we have\n\\[ \\psi^*(\\Diff{x^i}) = \\Diff{x^i}\\quad\\text{and}\\quad\\psi^*(\\Diff{y^i}) = (1-t)\\Diff{y^i} - t\\Diff{x^i} - (y^i+x^i)\\Diff{t}. \\]\nWe compute\n%\n\\begin{align*}\n &  \\FInt{[0,1]} \\psi^* \\tilde{\\HKer}\\\\\n & \\quad = (-1)^{n}\\FInt{[0,1]} \\psi^* \\Pr_2^* \\Vol \\\\\n & \\quad = (-1)^{n+1}\\FInt{[0,1]} \\sum_{i=1}^{n+1} (-1)^{i}\\frac{((1-t)y^i - t x^i)}{{\\Abs{(1-t)y-tx}^{n+1}}} \\psi^*(\\Diff{y^1} \\dotsm \\widehat{\\Diff{y^i}} \\dotsm \\Diff{y^{n+1}}) \\\\\n & \\quad = (-1)^{n+1} \\sum_{1\\le i<j \\le n+1} (-1)^{i+j}(x^i y^j - y^i x^j) \\FInt{[0,1]}  \\frac{\\Diff{t} \\psi^*(\\Diff{y^1} \\dotsm \\widehat{\\Diff{y^i}} \\dotsm \\widehat{\\Diff{y^j}} \\dotsm \\Diff{y^{n+1}})}{\\Abs{(1-t)y-tx}^{n+1}} \\\\ \n & \\quad = \\begin{multlined}[t] - \\sum_{k=0}^{n-1} \\Bigl(\\int_0^1 \\frac{t^k(t-1)^{n-1-k}}{\\Abs{(1-t)y-tx}^{n+1}} \\Diff{t}\\Bigr) \\sum_{1\\le i<j\\le n+1} (-1)^{i+j+1} (x^i y^j - y^i x^j) \\\\\n \\sum_{\\mathclap{\\substack{\\sigma: \\{1,\\dotsc,n-1\\}\\rightarrow \\{1,\\dotsc,\\hat{i},\\dotsc,\\hat{j},\\dotsc,n+1\\} \\\\ \\sigma_1<\\dots < \\sigma_k \\\\ \\sigma_{k+1}<\\dots < \\sigma_{n-1}}}} (-1)^\\sigma \\Diff{x^{\\sigma_1}} \\dotsm \\Diff{x^{\\sigma_k}}\\Diff{y^{\\sigma_{k+1}}} \\dotsm \\Diff{y^{\\sigma_{n-1}}}.\\end{multlined}\n\\end{align*}\nThe formulas~\\eqref{Eq:FunctionsGk1} and~\\eqref{Eq:FormOmega1} are obtained from this by writing \n\\[ \\Abs{(1-t)y - tx}^2 = 2t(t-1)(1+x \\cdot y) + 1 \\]\nin the denominator of the integrand and by simple combinatorics in the form part, respectively. 
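For completeness, this identity can be checked directly: since $\\Abs{x} = \\Abs{y} = 1$, we have\n\\begin{align*}\n\\Abs{(1-t)y - tx}^2 &= (1-t)^2 + t^2 - 2t(1-t)\\, x\\cdot y \\\\\n&= 1 - 2t + 2t^2 + 2t(t-1)\\, x\\cdot y = 2t(t-1)(1 + x\\cdot y) + 1.\n\\end{align*}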
Smoothness of~$\\Prpg$ on $(\\Sph{n}\\times \\Sph{n})\\backslash \\Diag$ follows from the expression~\\eqref{Eq:GExpr}.\n\\end{proof}\nNote that $g_k$ are smooth functions on $(\\Sph{n}\\times \\Sph{n})\\backslash \\Diag$.\n%\n\\begin{Example}[$\\Prpg$ for $\\Sph{1}$ and $\\Sph{2}$]\\phantomsection\\label{Example:Circle}% On 15.9.17\n\\begin{ExampleList}\n\\item Let \n\\[\\alpha: (\\Sph{1}\\times\\Sph{1})\\backslash \\Diag \\rightarrow (0,2\\pi)\\]\nbe the smooth function assigning to a pair $(x,y)\\in  (\\Sph{1}\\times\\Sph{1})\\backslash \\Diag$ the counterclockwise angle from $x$ to $y$. Let $\\alpha_1$, $\\alpha_2 \\in [0,2\\pi)$ be such that $x=\\cos(\\alpha_1)\\StdBasis_1 + \\sin(\\alpha_1)\\StdBasis_2$ and $y=\\cos(\\alpha_2)\\StdBasis_1+\\sin(\\alpha_2)\\StdBasis_2$ for the standard Euclidean basis $\\StdBasis_1$, $\\StdBasis_2$ of $\\R^2$. It is easy to see that\n\\[ \\alpha(x,y) = \\begin{cases} \\alpha_2 - \\alpha_1 & \\text{if }\\alpha_1<\\alpha_2, \\\\\n\\alpha_2-\\alpha_1+2\\pi & \\text{if }\\alpha_1>\\alpha_2. \\end{cases} \\]\nTherefore, we get\n\\[ \\Diff{\\alpha} = \\Diff{\\alpha_2} -\\Diff{\\alpha_1} = -2\\pi H \\quad\\text{on } (\\Sph{1}\\times\\Sph{1})\\backslash \\Diag. \\]\nOn the other hand, we can compute $\\Prpg$ from~\\eqref{Eq:GreenKernelMC1} as follows. Using the substitution $u=2t-1$, we get for all $x$, $y\\in \\Sph{1}$ with $x\\neq \\pm y$ the following:\n%\n\\begin{align*}\n g_{0}(x,y) &= \\int_0^1 \\frac{\\Diff{t}}{2t(t-1)(1+x\\cdot y) +1} \\\\\n & =  \\frac{1}{1-x\\cdot y}\\int_{-1}^1 \\frac{\\Diff{u}}{\\frac{1+x\\cdot y}{1- x\\cdot y}u^2 + 1} \\\\\n & = \\frac{2}{\\sqrt{1-(x\\cdot y)^2}}\\arctan\\Bigl(\\sqrt{\\frac{1+x\\cdot y}{1-x\\cdot y}}\\Bigr) \\\\\n & = \\frac{\\pi - \\arccos(x\\cdot y)}{\\sqrt{1-(x\\cdot y)^2}} \\\\ \n & = \\frac{\\pi - \\arccos(x\\cdot y)}{ \\Abs{x^1 y^2 - x^2 y^1}} \\\\\n & = \\frac{\\pi - \\alpha(x,y)}{x^1 y^2 - x^2 y^1}.\n\\end{align*}\nThe third from last equality can be obtained by trigonometric considerations and the second from last equality by an algebraic manipulation with the denominator. We will explain the last equality. Consider the matrix \n\\[ R=\\begin{pmatrix}\n0 & -1 \\\\ 1 & 0\n\\end{pmatrix} \\]\nrepresenting the counterclockwise rotation by $\\frac{\\pi}{2}$. The function $\\arccos: (-1,1) \\rightarrow (0,\\pi)$ satisfies\n\\[ \\arccos(x\\cdot y) = \\begin{cases}\n                        \\alpha(x,y) & \\text{if }y\\cdot Rx > 0, \\\\\n                        2\\pi - \\alpha(x,y) & \\text{if }y\\cdot Rx<0. \n                       \\end{cases}\\]\nThe last equality becomes clear when we notice that $x^1 y^2 - x^2 y^1 = y\\cdot Rx$. 
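The trigonometric step can likewise be checked directly: writing $\\beta \\coloneqq \\arccos(x\\cdot y) \\in (0,\\pi)$, we have $\\sqrt{\\tfrac{1+x\\cdot y}{1-x\\cdot y}} = \\cot(\\beta/2) > 0$ and $\\arctan(\\cot(\\beta/2)) = \\tfrac{\\pi}{2} - \\tfrac{\\beta}{2}$, hence\n\\[ 2\\arctan\\Bigl(\\sqrt{\\frac{1+x\\cdot y}{1-x\\cdot y}}\\Bigr) = \\pi - \\arccos(x\\cdot y). \\]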
\n\nFinally, we have $\\omega_{0}(x,y) = x^1 y^2 - x^2 y^1$, and hence\n\\begin{align*}\n2\\pi \\Prpg(x,y) &=  - g_{0}(x,y) \\omega_{0}(x,y) \\\\\n& = \\alpha(x,y) - \\pi \\\\\n& = \\pi - \\alpha(y,x).\n\\end{align*}\n\\item For $n=2$, we get the formulas\n\\allowdisplaybreaks\n\\begin{align*}\ng_0(x,y) & = - g_1(x,y) = \\frac{1}{x\\cdot y - 1}\\quad\\text{and} \\\\\n\\omega_0(x,y) &=(x^2 y^3 - x^3 y^2) \\Diff{y^1} +(x^3 y^1 - x^1 y^3) \\Diff{y^2} + (x^1 y^2 - x^2 y^1) \\Diff{y^3} \\\\ &=\\sum_{i=1}^3 (x\\times y)^i \\Diff{y^i}.\n\\end{align*}\nThe formula for $\\omega_1(x,y)$ is obtained from the formula for $\\omega_0(x,y)$ by replacing~$\\Diff{y}$ with~$\\Diff{x}$.\\qedhere\n\\end{ExampleList}\n\\end{Example}\n\nConsider the diagonal action of the orthogonal group $O(n+1)$ on $\\R^{n+1}\\times\\R^{n+1}$ by matrix multiplication.\n%\\[ R(x,y)\\coloneqq (Rx,Ry)\\quad \\text{for all }(x,y)\\in \\R^{n+1}\\times\\R^{n+1}, R\\in O(n+1), \\] \n%where $Rx$ denotes the standard action on $\\R^{n+1}$.\n\\begin{Proposition}[Symmetries of $\\Prpg$]\\label{Prop:SymmetryOfG}\nConsider $\\Prpg$ from Proposition~\\ref{Prop:GKerSph}. For all $R\\in O(n+1)$, we have\n\\[ R^* \\Prpg = (-1)^R \\Prpg, \\]\nwhere $(-1)^R = \\det(R)$. Moreover, if $\\tau$ denotes the twist map, then\n\\[\\tau^*\\Prpg = (-1)^{n}\\Prpg. \\] \n\\end{Proposition}\n%\n\\begin{proof} % Check on 13.9.17\nWe will use the thickening $N$, the antipodal map $A$ and the expression~\\eqref{Eq:GExpr} for~$\\Prpg$ from the proof of Proposition~\\ref{Prop:GKerSph}.\n\nIt is easy to check that both $\\tau$ and $R$ preserve $N$. Let~$\\tilde{\\tau}$ and $\\tilde{R}$ be the isomorphisms of the fiber bundle $\\Pr_2: [0,1]\\times N \\rightarrow N$ given by \n\\[ \\tilde{\\tau}(t,x,y) \\coloneqq (1-t,y,x)\\quad \\text{and}\\quad \\tilde{R}(t,x,y) \\coloneqq (t,Rx,Ry) \\]\nfor all $(t,x,y)\\in [0,1]\\times N$. Then $\\tilde{\\tau}$ covers $\\tau$ and $\\tilde{R}$ covers $R$. A simple computation directly from Definition~\\ref{Def:FibInt} shows that the fiberwise integration commutes with the pullback along a bundle morphism if the bundle map and the base map are both either orientation preserving or reversing. In our case, we have \n\\[ (-1)^{\\tau + \\tilde{\\tau}} = -1\\quad \\text{and}\\quad (-1)^{R+\\tilde{R}} = 1. \\]\nUsing this and the equation\n\\[ \\Pr_2 \\circ \\psi \\circ \\tilde{\\tau} = A\\circ \\Pr_2 \\circ \\psi, \\]\nwe get firstly\n\\allowdisplaybreaks\n\\begin{align*}\n\\tau^* \\FInt{[0,1]} \\psi^*\\tilde{\\HKer} &=  - \\FInt{[0,1]} \\tilde{\\tau}^* \\psi^*\\Pr_2^*\\Vol \\\\ &= - \\FInt{[0,1]} \\psi^*\\Pr_2^*A^* \\Vol \\\\ &= (-1)^n \\FInt{[0,1]} \\psi^*\\Pr_2^*\\Vol \\\\ & = (-1)^n \\FInt{[0,1]} \\psi^*\\tilde{\\HKer}\n\\end{align*}\nand secondly\n\\[ R^* \\FInt{[0,1]} \\psi^* \\HKer = \\FInt{[0,1]} \\tilde{R}^* \\psi^* \\HKer = \\FInt{[0,1]} \\psi^* R^*\\HKer = (-1)^{n+1} \\FInt{[0,1]} \\psi^* \\HKer. \\]\nThis proves the proposition.\n\\end{proof}\nBoth diffeomorphisms $R$ and $\\tau$ preserve $\\Delta$, and hence they extend to diffeomorphisms of $\\Bl_\\Diag(\\Sph{n}\\times\\Sph{n})$. If also $\\Prpg$ extends, then the statement of Proposition~\\ref{Prop:SymmetryOfG} holds for $\\Prpg$ on $\\Bl_\\Diag(\\Sph{n}\\times\\Sph{n})$.\n\nIn the rest of the section, we will be proving that $\\Prpg$ extends smoothly to $\\Bl_{\\Diag}(\\Sph{n}\\times \\Sph{n})$. This is a local problem at the boundary, where we introduce the following radial coordinates. 
Define the set\n\\[ X \\coloneqq \\{(r,\\eta,x)\\in [0,\\infty)\\times \\Sph{n}\\times\\Sph{n} \\mid \\eta\\cdot x = 0\\}, \\]\nand let $\\kappa: X \\longrightarrow \\Bl_\\Diag(\\Sph{n}\\times \\Sph{n})$ be the map defined by\n\\begin{align*}\n\\kappa(r,\\eta,x) &\\coloneqq \\begin{cases} \n\\Bigl(x,\\dfrac{x+r\\eta}{\\Abs{x+r\\eta}}\\Bigr)\\in (\\Sph{n}\\times \\Sph{n})\\backslash \\Diag & \\text{for }r>0, \\\\[2ex]\n[(-\\eta,\\eta)]\\in P^+ N_{(x,x)}\\Diag & \\text{for }r=0.\n\\end{cases}\n\\end{align*}\nRecall that the oriented projectivization $P^+$ was defined in Definition~\\ref{Def:SphBlow}.\nFor the upcoming computations, it is convenient to define the map $\\gamma: \\R \\rightarrow (-1,1)$ by\n\\[ \\gamma(r) \\coloneqq \\frac{r}{\\sqrt{1+r^2}+1} \\quad\\text{for all }r\\in \\R. \\]\nIt is a diffeomorphism with inverse $r = \\frac{2 \\gamma}{1-\\gamma^2}$.\n\n\\begin{Lem}[Parametrization of collar neighborhood] \\label{Lem:NewBlowupParam}\nThe subset $X\\subset \\R\\times \\R^{n+1}\\times\\R^{n+1}$ is a submanifold with boundary, and the map $\\kappa: X \\longrightarrow \\Bl_\\Diag(\\Sph{n}\\times \\Sph{n})$ is an embedding onto a neighborhood of $\\Bdd \\Bl_\\Diag(\\Sph{n}\\times \\Sph{n})$.\n\\end{Lem}\n%\n\\begin{proof}\nThe set $X$ is a Cartesian product of $[0,\\infty)$ and a regular level set; therefore, it is a submanifold with boundary. The inclusion $\\Sph{n}\\times \\Sph{n} \\subset \\R^{n+1}\\times \\R^{n+1}$ induces an embedding of manifolds with boundary $\\Bl_\\Diag(\\Sph{n}\\times\\Sph{n})\\subset \\Bl_\\Diag(\\R^{n+1}\\times \\R^{n+1})$.  Consider the global chart $\\tilde{\\Id}: \\Bl_\\Diag(\\R^{n+1}\\times \\R^{n+1}) \\rightarrow [0,\\infty) \\times \\Sph{n} \\times \\R^{n+1}$ from~\\eqref{Eq:BlowUpChart} induced by the identity. We have\n\\[ \\begin{aligned}Y &\\coloneqq \\tilde{\\Id}(\\Bl_\\Diag(\\Sph{n}\\times \\Sph{n})) \\\\ &= \\{(\\tilde{r},w,u)\\in [0,\\infty) \\times \\Sph{n} \\times \\R^{n+1} \\mid \\Abs{u}^2+\\tilde{r}^2=1,\\ w\\cdot u = 0\\}, \\end{aligned}\\] \nwhere we denote $r$ on $Y$ by $\\tilde{r}$ in order to distinguish it from $r$ on $X$. It suffices to prove the claim for the map $\\mu\\coloneqq \\tilde{\\Id}\\circ\\kappa: X \\rightarrow Y$. For $(r,\\eta,x)\\in X$, we compute\n\\[ \\mu(r,\\eta,x) = \\biggl( \\frac{\\gamma}{\\sqrt{1+\\gamma^2}}, \\frac{1}{\\sqrt{1+\\gamma^2}}(\\gamma x - \\eta), \\frac{1}{1+\\gamma^2}(x+\\gamma \\eta) \\biggr). \\]\nThis formula defines a smooth map of $\\R\\times \\R^{n+1}\\times\\R^{n+1}$.\n%Let $\\eta, x \\in \\Sph{n}$ be such that $\\eta\\cdot x = 0$ and let $r\\in \\R$. Equations\n%\\[ x = u + R \\omega \\quad\\text{and}\\quad \\frac{x+r\\eta}{\\Abs{x+r\\eta}} = u-R\\omega \\]\n%are solved for\n%\\begin{equation} \\label{Eq:Extension}\n%R = \\frac{\\gamma}{\\sqrt{1+\\gamma^2}}, \\quad\n% \\omega = \\frac{1}{\\sqrt{1+\\gamma^2}}(\\gamma x - \\eta), \\quad u  = \\frac{1}{1+\\gamma^2}(x+\\gamma \\eta). \n% \\end{equation}\n%We used $\\Abs{x+r\\eta} = \\sqrt{1+r^2} = \\frac{1+\\gamma^2}{1-\\gamma^2}$.\n%The solution satisfies $\\Abs{\\omega}=1$, $\\Abs{u}^2 + R^2 = 1$ and $\\omega \\cdot u = 0$ which implies $(R,\\omega,u)\\in Y$. Therefore, it defines a smooth extension of $\\mu$ \n%\\[ \\begin{aligned} \\tilde{\\mu}: \\R\\times \\R^{n+1} \\times \\R^{n+1} &\\longrightarrow \\R\\times \\R^{n+1}\\times \\R^{n+1} \\\\\n%(r,\\eta,x) &\\longmapsto (R,\\omega,u). 
\\end{aligned} \\]\nIt is a local diffeomorphism because its Jacobian is non-vanishing:\n\\[ |\\Jac{\\mu}| = \\frac{\\partial \\tilde{r}}{\\partial r} \\Bigl(\\frac{\\partial w}{\\partial \\eta}\\frac{\\partial u}{\\partial x}  - \\frac{\\partial w}{\\partial x}\\frac{\\partial u}{\\partial \\eta}\\Bigr)^{n+1} = (-1)^{n+1}(1+\\gamma^2)^{-\\frac{n+4}{2}} \\frac{\\partial \\gamma}{\\partial r}. \\]\nMoreover, the map $\\mu$ is injective, maps $X$ into $Y$ and $\\Bdd X$ onto $\\Bdd Y$. The claim follows.\n\\end{proof}\n%\nConsider the action of $O(n+1)$ on $X$ defined by\n\\[ R\\cdot(r,\\eta,x) \\coloneqq (r,R\\eta,Rx)\\quad\\text{for all }(r,\\eta,x)\\in X\\text{ and }R\\in O(n+1). \\]\nVia $\\kappa$, this agrees with the diagonal action of $O(n+1)$ on $\\Bl_\\Diag(\\Sph{n}\\times\\Sph{n})$. Denote\n\\[ \\Prpg'\\coloneqq \\kappa^* \\Prpg \\in \\Omega^{n-1}(\\Int(X)). \\]\nFrom Proposition~\\ref{Prop:SymmetryOfG} we get\n\\begin{equation} \\label{Eq:SymmetryOfGPrime}\nR^* \\Prpg' = (-1)^R \\Prpg'\\quad \\text{for all }R\\in O(n+1).\n\\end{equation}\n%\nConsider the smooth curve (see Figure~\\ref{Fig:CurveOnSphere})\n\\begin{equation*}\\label{Eq:CurveZetaDef}\n \\begin{aligned} \\zeta': [0,\\infty) &\\longrightarrow  X \\\\\n                    r &\\longmapsto (r,e_n,e_{n+1}). \\end{aligned}\n\\end{equation*}\nWe have the following lemma.\n%\n\\begin{figure}[t]\\centering\n\\input{\\GraphicsFolder/zeta.tex}\n\\caption[A curve approaching the diagonal in the configuration space $C_2(\\Sph{n})$.]{The curve $\\zeta\\coloneqq \\kappa \\circ \\zeta'$ is given by $\\zeta(r)=\\bigl(e_{n+1},\\frac{e_{n+1}+r e_n}{\\Abs{e_{n+1}+r e_n}}\\bigr)$ for $r>0$.}\\label{Fig:CurveOnSphere}\n\\end{figure}\n%\n\\begin{Lem}[Smooth extension along curve] \\label{Lem:ExtAlongCurve}\nThe form $\\Prpg'$ extends smoothly to $X$ if and only if the map $\\Prpg'\\circ \\zeta' : (0,\\infty)\\rightarrow \\Lambda^{n-1}T^* X$ extends smoothly to the interval $[0,\\infty)$. \n\\end{Lem}\n%\n\\begin{proof}\nAs for the non-trivial implication, let $(0,\\eta_0,x_0) \\in X$ be a boundary point. Pick vectors $v_1$,~$\\dotsc$, $v_{n-1}\\in \\R^{n+1}$ so that the vectors $v_1$,~$\\dotsc$, $v_{n-1}$, $\\eta_0$, $x_0$ are linearly independent, and define the set \n\\[ U\\coloneqq\\{(r,\\eta,x)\\in X \\mid v_1,\\,\\dotsc,\\,v_{n-1},\\,\\eta,\\,x \\text{ are linearly independent}\\}. \\]\nIt is an open neighborhood of $(0,\\eta_0,x_0)$ in $X$. Applying the Gram-Schmidt orthogonalization to $v_1$,~$\\dotsc$, $v_{n-1}$, $\\eta$, $x$, we find a smooth map $R: U \\rightarrow O(n+1)$ such that \n\\[ R(r,\\eta,x)\\cdot (r,\\eta,x) = (r,e_n,e_{n+1}) \\quad \\text{for all }(r,\\eta,x)\\in U. \\]\nThe equation~\\eqref{Eq:SymmetryOfGPrime} implies\n\\[ \\Prpg'(r,\\eta,x) =(-1)^R R(r,\\eta,x)^*\\bigl(\\Prpg'(r,e_n,e_{n+1})\\bigr)\\quad\\text{for all }(r,\\eta,x)\\in \\Int(U), \\]\nwhere $R(r,\\eta,x)^*: \\Lambda^* T^* X \\rightarrow \\Lambda^* T^* X$ is the smooth cotangential map which is induced by the diffeomorphism $R(r,\\eta,x): X\\rightarrow X$, and which maps the fiber over $z\\in X$ to the fiber over $R(r,\\eta,x)^{-1} z$. By the assumption, all maps in the composition are smooth in their arguments. 
The lemma follows.\n\\end{proof}\n\n\\begin{Lem}[Local expression at boundary] \\label{Lem:FormulaAlongCurve}\nOn the interval $(0,\\infty)$, we have\n\\begin{equation*}\\label{Eq:FormulaAlongCurve}\n\\tilde{\\Prpg}'\\circ\\zeta' = (-1)^{n+1}(1+\\gamma^2)^{-\\frac{n-1}{2}} \\sum_{k=0}^{n-1} \\gamma^{n-k} (h_{k}\\circ\\gamma)(\\nu_{k}\\circ\\zeta'), \n\\end{equation*}\nwhere the functions $h_{k}: (0,1)\\rightarrow \\R$ are defined by\n\\begin{equation*}\n  h_{k}(\\gamma)\\coloneqq \\int_{-1}^1 \\frac{(u+\\gamma^2)^{k}(u-1)^{n-1-k}}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}} \\Diff{u} \\quad \\text{for all }\\gamma\\in (0,1)\n\\end{equation*}\nand the forms $\\nu_{k}\\in \\Omega(X)$ are defined by\n\\begin{align*}\n \\nu_{k}(r,x,\\eta) &\\coloneqq \\frac{1}{k!(n-1-k)!}\\sum_{\\sigma\\in \\Perm_{n-1}} (-1)^\\sigma \\Diff{x^{\\sigma_1}}\\dotsm\n\\Diff{x^{\\sigma_k}} \\Diff{\\eta^{\\sigma_{k+1}}} \\dotsm \\Diff{\\eta^{\\sigma_{n-1}}}.\n\\end{align*}\n\\end{Lem}\n\n\\begin{proof}\nWe start with the following formula from the proof of Proposition~\\ref{Prop:GKerSph}:\n\\[ \\tilde{\\Prpg} =  \\sum_{1\\le i<j \\le n+1} (-1)^{i+j}(x^i y^j - y^i x^j) \\FInt{[0,1]}  \\frac{\\Diff{t} \\psi^*(\\Diff{y^1} \\dotsm \\widehat{\\Diff{y^i}} \\dotsm \\widehat{\\Diff{y^j}} \\dotsm \\Diff{y^{n+1}})}{\\Abs{(1-t)y-tx}^{n+1}}. \\]\nWe restrict to the points $(x,y)=\\kappa(r,e_n,e_{n+1})$ with $r>0$. There, we have\n\\begin{align*}\n&x^1= \\dotsb =x^n=0,\\ x^{n+1}=1, \\\\\n&y^1 = \\dotsb = y^{n-1} =0,\\ y^n= \\frac{2\\gamma}{1+\\gamma^2},\\ y^{n+1} = \\frac{1-\\gamma^2}{1+\\gamma^2}.\n% &\\eta^1 = \\dotsb = \\eta^{n-1}=\\eta^{n+1}=0,\\ \\eta^{n}=1.\n% &\\Diff{x^{n+1}} = 0,\\ \\Diff{\\eta^n}=0,\\\\Diff{\\eta^{n+1}}=-\\Diff{x^n}.\n\\end{align*}\nUnder the substitution $u = 2t-1$, we get\n\\[\\Abs{(1-t)y-t x}^2 = \\frac{4 t (t-1)}{1+\\gamma^2}+1=\\frac{u^2+\\gamma^2}{1+\\gamma^2}. \\]\nWe make the following preliminary computations:\n\\[ \\begin{aligned}\nx^i y^j - y^i x^j & = 0\\quad\\text{for }1\\le i \\le n-1\\text{ and }i<j\\le n+1, \\\\\nx^n y^{n+1} - y^n x^{n+1} & = -\\frac{2\\gamma}{1+\\gamma^2}, \\\\\n\\kappa^*(\\Diff{y^i}) & = \\frac{1}{1+\\gamma^2}\\bigl((1-\\gamma^2)\\Diff{x^i}+ 2\\gamma \\Diff{\\eta^i}\\bigr)\\quad\\text{for } 1\\le i \\le n-1. 
\\end{aligned} \\]\nWe plug these in the formula for $\\tilde{\\Prpg}$ and get\n% \\Red{//There might by $\\pm$ issue when exchanging $\\varphi^*$ and $\\int^{[0,1]}$ as $\\varphi$ is not orientation preserving in every dimension.// There is no issue since we extend $\\varphi$ from the base space $X$ to $X\\times [0,1]$ and the extension preserves the total space orientation if and only if the base map preserves total space orientation} \n%\n\\begin{align*}\n \\tilde{\\Prpg}'(\\zeta'(r)) &= 2\\gamma(1+\\gamma^2)^{\\frac{n-1}{2}} \\FInt{[0,1]}\\Diff{t}\\frac{\\prod_{i=1}^{n-1} \\bigl((1-t)\\kappa^*(\\Diff{y^i}) - t\\Diff{x^i}\\bigr)}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}}  \\\\ \n&=(-1)^{n+1}\\gamma(1+\\gamma^2)^{-\\frac{n-1}{2}}\\FInt{[-1,1]} \\Diff{u} \\frac{\\prod_{i=1}^{n-1}\\bigl((u+\\gamma^2)\\Diff{x^i} + \\gamma(u-1)\\Diff{\\eta^i}\\bigr)}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}}  \\\\\n& =(-1)^{n+1}(1+\\gamma^2)^{-\\frac{n-1}{2}}\\sum_{k=0}^{n-1} \\gamma^{n-k}\\Bigl( \\int_{-1}^1 \\frac{(u+\\gamma^2)^{k}(u-1)^{n-1-k}}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}} \\Diff{u}\\Bigr) \\nu_{k}.\n\\end{align*}\nThe lemma follows.\n% Checked on 28.9.\n\\end{proof}\n%\n%Because the overall coefficient $(1+\\gamma^2)^{-\\frac{n-1}{2}}$ is smooth and non-zero on $[0,\\infty)$, and the forms $\\nu_{k,n-1-k}$, $k=0,\\ldots,n-1$ are linearly independent smooth forms on $X$, the map $G\\circ \\gamma$ extends smoothly to $[0,\\infty)$ if and only if the functions $\\gamma^{n-k}h_{k,n-1-k}$ for all $k=0,\\ldots,n-1$ do.\n%\n\\begin{Lem}[Integrals depending on parameter] \\label{Lem:GeneralIntegralExtension}\nLet $n\\in \\N$, and let $l\\in\\{ 0, 1,~\\dotsc, n-1\\}$. The function $F_{n,l}: (0,\\infty)\\rightarrow \\R$ defined by\n\\begin{equation}\\label{Eq:GeneralIntegral}\nF_{n,l}(t) \\coloneqq \\int_{-1}^1 \\frac{t^{n-l} u^l}{(u^2+t^2)^{\\frac{n+1}{2}}} \\Diff{u}\\quad\\text{for all }t\\in(0,\\infty)\n\\end{equation}\nextends smoothly to $[0,\\infty)$.\n\\end{Lem} \n\\begin{proof}\nWe have\n\\[ F_{1,0}(t) = 2 \\arctan\\Bigl(\\frac{1}{t}\\Bigr)=\\pi - 2 \\arctan(t) \\quad\\text{for all }t\\in (0,\\infty). 
\\]\nThe right-hand side is a smooth function on $\\R$.\n\nFor $n\\ge 2$, we deduce the recursive formula\n\\[ F_{n,0}(t) = \\frac{1}{n-1}\\Bigl((n-2)F_{n-2,0}(t)+\\frac{2 t^{n-2}}{(1+t^2)^{\\frac{n-1}{2}}}\\Bigr).\\]\n%We see that $F_{n,0}$ extends smoothly to $\\R$ for all $n$ by induction.\n%by defining\n%\\[ F_{n,0}(0)\\coloneqq \\frac{(n-2)!!}{(n-1)!!}\\begin{cases} \\pi & \\text{for }n \\text{ odd},\\\\ 2 & \\text{for }n \\text{ even}.\n%\\end{cases}\\] \nIf $l$ is odd, then $F_{n,l}\\equiv 0$ for all $n$ because the integrand of \\eqref{Eq:GeneralIntegral} is odd as a function of $u$.\n\nFor $n\\ge 3$ and even $2\\le l \\le n-1$, we deduce yet another recursive formula\n\\[  F_{n,l}(t) = \\frac{1}{n-l}\\Bigl((l-1) F_{n,l-2}(t) -  \\frac{2 t^{n-l}}{(1+t^2)^{\\frac{n-1}{2}}} \\Bigr).\n\\]\n%by defining\n%\\[ F_{n,l}(0)\\coloneqq \\begin{cases} \\frac{(l-1)!!(n-l-2)!!}{(n-2)!!} F_{n,0} & \\text{for }l\\text{ even} \\\\ 0 & \\text{for }l\\text{ odd}\\end{cases} \\]\nThe claim for all $F_{n,l}$ follows by induction.\n\\end{proof}\n\n\\begin{Proposition}[Smooth extension to boundary]\\label{Prop:GKerBdd}\nThe form $\\Prpg$ from~\\eqref{Eq:GreenKernelMC1} extends smoothly to $\\Bl_\\Diag(\\Sph{n}\\times\\Sph{n})$.\n\\end{Proposition}\n%\n\\begin{proof}\nAccording to Lemmas~\\ref{Lem:NewBlowupParam} and~\\ref{Lem:ExtAlongCurve}, it suffices to show that the curve $\\Prpg'\\circ \\zeta': (0,\\infty)\\rightarrow \\Lambda^{n-1}T^* X$ extends smoothly to $[0,\\infty)$. Lemma~\\ref{Lem:FormulaAlongCurve} gives an expression for $\\Prpg'\\circ \\zeta'$ as a linear combination of smooth forms $\\nu_{k}\\in \\Omega^{n-1}(X)$ with coefficients $\\gamma^{n-k}(h_{k}\\circ \\gamma)$ for $k=0$,~$\\dotsc$, $n-1$ multiplied by the overall non-vanishing smooth coefficient $(-1)^{n+1}V^{-1}(1+\\gamma^2)^{-\\frac{n-1}{2}}$ (recall that $\\tilde{\\Prpg} = V\\Prpg$). We expand\n\\begin{equation*}\n\\gamma^{n-k}(h_{k}\\circ\\gamma) = \\sum_{a=0}^k\\sum_{b=0}^{n-1-k}(-1)^{n-1-k-b} \\binom{k}{a} \\binom{n-1-k}{b} \\int_{-1}^1 \\frac{\\gamma^{n+k-2a}u^{a+b}}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}} \\Diff{u}\n\\end{equation*}\nand notice that we can write\n\\begin{equation*}\n \\int_{-1}^1 \\frac{\\gamma^{n+k-2a}u^{a+b}}{(u^2+\\gamma^2)^{\\frac{n+1}{2}}} \\Diff{u} = \\gamma^{k-a+b}(F_{n,a+b}\\circ \\gamma)\n\\end{equation*}\nfor the function $F_{n,l}$ from~\\eqref{Eq:GeneralIntegral} with $l\\coloneqq a+b$. Because $0\\le l \\le n-1$, Lemma~\\ref{Lem:GeneralIntegralExtension} asserts that $F_{n,l}$ extends smoothly to $[0,\\infty)$. Because $k-a+b\\ge 0$, the entire coefficient at $\\nu_{k}$ extends smoothly to $[0,\\infty)$ for every $k=0$,~$\\dotsc$, $n-1$. The proposition follows.\n\\end{proof}\n\n%\\begin{Example} \n%\\[\n%\\Prpg'(\\zeta'(r)) = \\begin{cases}\n% \\dfrac{1}{2\\pi}\\bigl(\\pi - \\arctan(r)\\bigr) & \\text{for }n=1, \\\\[2ex]\n%\\dfrac{1}{4\\pi}\\Bigl(\\Bigl(1+\\dfrac{1}{\\sqrt{1+r^2}}\\Bigr) \\Diff{\\eta^1} - \\dfrac{r}{\\sqrt{1+r^2}} \\Diff{x^1}\\Bigr)& \\text{for }n=2, \\\\[2ex]\n% \\begin{gathered}[b]\\frac{1}{2\\pi^2}\\Bigl(\\frac{1}{2}\\Bigl(\\pi - \\arctan(r)+\\frac{r}{1+r^2}\\Bigr)\\Diff{\\eta^1}\\Diff{\\eta^2} - \\frac{r^2}{2(1+r^2)}\\SplitEq (\\Diff{x^1}\\Diff{\\eta^2} -\\Diff{x^2}\\Diff{\\eta^1}) + \\Bigl(\\pi - \\arctan(r) - \\frac{r}{2(1+r^2)}\\Bigr)\\Diff{x^1}\\Diff{x^2}\\Bigr) \\end{gathered} & \\text{for }n=3.\n%\\end{cases}\n%\\]\n%%We see that the right hand side indeed smoothly extends $G'\\circ \\zeta'$ to $[0,\\infty)$. 
\n%\\end{Example}\n\nWe summarize our results in the following proposition:\n\n\\begin{Proposition}[Hodge propagator for $\\Sph{n}$]\\label{Proposition:GreenKernel}\nThe form $\\Prpg$ from~\\eqref{Eq:GreenKernelMC1} defines a Hodge propagator for $\\Sph{n}$ satisfying Definition~\\ref{Def:GreenKernel}. Moreover, we have the symmetries\n\\begin{align*}\nR^* \\Prpg &= (-1)^R \\Prpg\\quad \\text{for all }R\\in O(n+1)\\text{ and} \\\\\n\\tau^* \\Prpg & = (-1)^n \\Prpg.\n\\end{align*}\n\\end{Proposition}\n\\begin{proof}\nThe proposition is a summary of Propositions~\\ref{Prop:GKerSph},~\\ref{Prop:SymmetryOfG} and~\\ref{Prop:GKerBdd}.\n\\end{proof}\n%\\begin{proof}\n%The proposition is proved in Section~\\ref{Section:Proof1}: Lemma~\\ref{Lemma:GKer} transforms~\\eqref{Eq:CochainHomotopy} into the differential equation~\\eqref{Eq:GreenKernel} for $\\Prpg$, Lemma~\\ref{Prop:GKerSph} provides a solution, in Lemma~\\ref{Prop:GKerBdd} we show that the solution extends smoothly to the blow-up, and finally in Lemma~\\ref{Prop:SymmetryOfG} we show that $\\Prpg$ satisfies the symmetry property~\\eqref{Eq:SymProp}.\n%\\end{proof}\n\n\\begin{Remark}[Better notation due to R. Bryant, see  \\cite{MO291535}] \\label{Remark:Bryant}\nPick an oriented basis $e_1$,~$ \\dotsc$, $e_{n+1}$ of $\\R^{n+1}$ as generators of the exterior algebra $\\Lambda^*(\\R^{n+1})$, and view $x$, $y$, $\\Diff{x}$, $\\Diff{y}$ as $\\Lambda^*(\\R^{n+1})$-valued forms on~$\\R^{n+1}$. For example, we view $x$ as the map $x\\in \\R^{n+1} \\mapsto \\sum_{i=1}^{n+1} x^i e_i \\in \\Lambda^1(\\R^{n+1})$ and $\\Diff{x}$ as the map $x\\in \\R^{n+1} \\mapsto \\sum_{i=1}^{n+1} (\\Diff{x}_i)_x e_i \\in \\Lambda^1(\\R^{n+1})$. There is a natural wedge product on the space of $\\Lambda^*(\\R^{n+1})$-valued forms. If $\\omega$ is a top-form, we denote by $[\\omega]$ the coefficient of $\\omega$ at $e_1 \\wedge \\dotsm \\wedge e_{n+1}$. Then it holds\n\\[ \\omega_k(x,y) = \\frac{1}{k!}\\frac{1}{(n-1-k)!}[x\\wedge y \\wedge (\\Diff{x})^{k} \\wedge (\\Diff{y})^{n-1-k}]. \\]\n\nNote that if we view $e_i$ as odd variables, then $[\\cdot]$ corresponds to the odd integration~$\\int \\mathrm{D}e(\\cdot)$. 
It would be interesting to know whether this notation simplifies some proofs, especially if Lemma~\\ref{Lemma:ABVanishing} can be deduced from abstract algebraic facts or rules valid for odd integration.\n%and\n%\\[ \\Vol(x) = \\frac{1}{n!} [x \\wedge (\\Diff{x})^n].\\]\n\\end{Remark}\n\\end{document}\n", "meta": {"hexsha": "f90cc206036f04063d6f1bb02aefc4b7a8cb6f7a", "size": 27539, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Subfiles/Comp_SnG.tex", "max_stars_repo_name": "p135246/phd-thesis", "max_stars_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Subfiles/Comp_SnG.tex", "max_issues_repo_name": "p135246/phd-thesis", "max_issues_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Subfiles/Comp_SnG.tex", "max_forks_repo_name": "p135246/phd-thesis", "max_forks_repo_head_hexsha": "0e124466a3d0ff988c012225400fadb0b170aa9e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.7161458333, "max_line_length": 657, "alphanum_fraction": 0.6288173136, "num_tokens": 11382, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7549149868676283, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.5541978286177376}}
{"text": "\\chapter{Poisson denoising}\n\\label{ch_denoising}\n\n\\markright{Poisson denoising}\n\n\\section{MS-VST + IUWT}\n\nUnder the hypothesis of homogeneous Poisson intensity, the stabilized wavelet coefficients $d_j$ behave like centered Gaussian variables of standard deviation $\\sigma_{(j)}$. We can detect significant coefficients with binary hypothesis testing as in Gaussian denoising.\n\nUnder the null hypothesis $\\mathcal{H}_0$ of homogeneous Poisson intensity, the distribution of the stabilized wavelet coefficient $d_j[k]$ at scale $j$ and location index $k$ can be written as:\n\\begin{equation}\n\\label{ }\np(d_j[k]) = \\frac{1}{\\sqrt{2\\pi}\\sigma_j}\\exp(-d_j[k]^2 / 2 \\sigma_j^2) .\n\\end{equation}\n\nThe rejection of the hypothesis $\\mathcal{H}_0$ depends on the double-sided p-value:\n\\begin{equation}\n\\label{ }\np_j[k] = 2 \\frac{1}{\\sqrt{2\\pi}\\sigma_j}\\int_{|d_j[k]|}^{+\\infty} \\exp(-x^2 / 2 \\sigma_j^2) dx .\n\\end{equation}\n\nConsequently, to accept or reject $\\mathcal{H}_0$, we compare each $|d_j[k]|$ with a critical threshold $\\kappa \\sigma_j$, $\\kappa= 3,4 \\text{ or } 5$ corresponding respectively to significance levels. This amounts to deciding that:\n\\begin{itemize}\n  \\item if $|d_j[k]| \\geqslant  \\kappa \\sigma_j$, $d_j[k]$ is significant.\n  \\item if $|d_j[k]| < \\kappa \\sigma_j$, $d_j[k]$ is not significant.\n\\end{itemize}\n\nThen we have to invert the MS-VSTS scheme to reconstruct the estimate. However, although the direct inversion is possible (Eq. (\\ref{eq30})), it can not guarantee a positive intensity estimate, while the Poisson intensity is always nonnegative. A positivity projection can be applied, but important structures could be lost in the estimate. To tackle this problem, we reformulate the reconstruction as a convex optimisation problem and solve it iteratively with an algorithm based on Hybrid Steepest Descent (HSD)~\\citep{wave:yamada01}.\n\nWe define the multiresolution support $\\mathcal{M}$, which is determined by the set of detected significant coefficients after hypothesis testing:\n\\begin{equation}\n\\label{eq33}\n\\mathcal{M} := \\{ (j,k) | \\text{if } d_j[k] \\text{ is declared significant} \\} .\n\\end{equation}\n\nWe formulate the reconstruction problem as a convex constrained minimization problem:\n\n\\begin{equation}\n\\label{eq34}\n\\begin{split}\n\\text{Arg} \\min_{\\mathbf{X}} \\| \\mathbf{ \\Phi}^{T}\\mathbf{X}\\|_1,\n\\text{s.t.} \\\\ \\: \\left\\{\\begin{array}{c}\\mathbf{X} \\geqslant 0 , \\\\\\forall (j,k)\\in \\mathcal{M},      (\\mathbf{ \\Phi}^{T}\\mathbf{X})_j[k]=(\\mathbf{ \\Phi}^{T}\\mathbf{Y})_j[k] , \\end{array}\\right.\n\\end{split}\n\\end{equation}\nwhere $\\mathbf{\\Phi}$ denotes the IUWT synthesis operator.\n\nThis problem is solved with the following iterative scheme: the image is initialised by $\\mathbf{X}^{(0)} = 0$, and the iteration scheme is, for $n=0$ to $N_{\\max}-1$:\n\n\n\\begin{eqnarray}\n\\tilde{\\mathbf{X}} &=& P_{+}[\\mathbf{ X}^{(n)} + \\mathbf{ \\Phi} P_{\\mathcal{M}} \\mathbf{ \\Phi}^{T} (\\mathbf{ Y} - \\mathbf{ X}^{(n)})] \\\\\n\\mathbf{X}^{(n+1)} &=& \\mathbf{ \\Phi}\\text{ST}_{\\lambda_n}[\\mathbf{ \\Phi}^{T}\\tilde{\\mathbf{X}}]\n\\end{eqnarray}\nwhere $P_{+}$ denotes the projection on the positive orthant, $P_{\\mathcal{M}}$ denotes the projection on the multiresolution support $\\mathcal{M}$:\n\\begin{equation}\nP_{\\mathcal{M}}d_j[k] = \\left\\{\\begin{array}{cc} d_j[k] & \\text{if} \\  (j,k) \\in \\mathcal{M} , \\\\0 & \\text{otherwise} \\end{array} . 
\\right.\n\\end{equation}\nand $\\text{ST}_{\\lambda_n}$ the soft-thresholding with threshold $\\lambda_n$:\n\\begin{equation}\n\\text{ST}_{\\lambda_n} [d] = \\left\\{\\begin{array}{cc} \\mathrm{sign}(d)(|d| - \\lambda_n) & \\text{if} \\ |d| \\geqslant \\lambda_n , \\\\0 & \\text{otherwise} \\end{array} . \\right.\n\\end{equation}\nWe chose a decreasing threshold $\\lambda_n = \\frac{N_{\\max} - n}{N_{\\max} - 1},n=1,2,\\cdots,N_{\\max}$.\n\nThe final estimate of the Poisson intensity is: $\\hat{\\mathbf{\\Lambda}} = \\mathbf{X}^{(N_{\\max})}$. Algorithm~\\ref{alg1} summarizes the main steps of the MS-VSTS + IUWT denoising algorithm.\n\n\n\n\\begin{algorithm}[!h]\n\\caption{MS-VSTS + IUWT Denoising}\n\\label{alg1}\n\\begin{algorithmic}[1]\n\\REQUIRE $\\quad$ data $a_0:=\\mathbf{Y}$, number of iterations $N_{\\max}$, threshold $\\kappa$ \\\\\n\\underline{\\emph{\\textbf{Detection}}} \\\\\n\\FOR{$j=1$ to $J$}\n\\STATE Compute $a_j$ and $d_j$ using (\\ref{eq27}).\n\\STATE Hard threshold $|d_j[k]|$ with threshold $\\kappa \\sigma_j$ and update $\\mathcal{M}$.\n\\ENDFOR \\\\\n\\underline{\\emph{\\textbf{Estimation}}} \\\\\n\\STATE Initialize $\\mathbf{X}^{(0)}=0$, $\\lambda_0 = 1$.\n\\FOR{$n=0$ to $N_{\\max}-1$}\n\\STATE $\\tilde{\\mathbf{X}}= P_{+}[\\mathbf{ X}^{(n)} + \\mathbf{ \\Phi} P_{\\mathcal{M}} \\mathbf{ \\Phi}^{T} (\\mathbf{ Y} - \\mathbf{ X}^{(n)})]$.\n\\STATE $\\mathbf{X}^{(n+1)} = \\mathbf{ \\Phi}\\text{ST}_{\\lambda_n}[\\mathbf{ \\Phi}^{T}\\tilde{\\mathbf{X}}]$.\n\\STATE $\\lambda_{n+1} = \\frac{N_{\\max} - (n+1)}{N_{\\max} - 1}$.\n\\ENDFOR\n\\STATE Get the estimate $\\hat{\\mathbf{\\Lambda}} = \\mathbf{X}^{(N_{\\max})}$.\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Multi-resolution support adaptation}\n\nWhen two sources are too close, the less intense source may not be detected because of the negative wavelet coefficients of the brightest source. To avoid such a drawback, we may update the multi-resolution support at each iteration. The idea is to withdraw the detected sources and to make a detection on the remaining residual, so as to detect the sources which may have been missed at the first detection.\n\n\nAt each iteration $n$, we compute the MS-VSTS of $\\mathbf{X}^{(n)}$. We denote $d^{(n)}_j[k]$ the stabilised coefficients of $\\mathbf{X}^{(n)}$. We make a hard thresholding on $(d_j[k]-d^{(n)}_j[k])$ with the same thresholds as in the detection step. 
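In symbols, this support update reads\n\\begin{equation}\n\\mathcal{M} \\leftarrow \\mathcal{M} \\cup \\bigl\\{ (j,k) \\; \\big| \\; |d_j[k]-d^{(n)}_j[k]| \\geqslant \\kappa \\sigma_j \\bigr\\} .\n\\end{equation}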
The coefficients found significant in this test are added to the multiresolution support $\\mathcal{M}$.\n\n\\begin{algorithm}\n\\caption{MS-VSTS + IUWT Denoising + Multiresolution Support Adaptation}\n\\label{alg4}\n\\begin{algorithmic}[1]\n\\REQUIRE $\\quad$ data $a_0:=\\mathbf{Y}$, number of iterations $N_{\\max}$, threshold $\\kappa$ \\\\\n\\underline{\\emph{\\textbf{Detection}}} \\\\\n\\FOR{$j=1$ to $J$}\n\\STATE Compute $a_j$ and $d_j$ using (\\ref{eq27}).\n\\STATE Hard threshold $|d_j[k]|$ with threshold $\\kappa \\sigma_j$ and update $\\mathcal{M}$.\n\\ENDFOR \\\\\n\\underline{\\emph{\\textbf{Estimation}}} \\\\\n\\STATE Initialize $\\mathbf{X}^{(0)}=0$, $\\lambda_0 = 1$.\n\\FOR{$n=0$ to $N_{\\max}-1$}\n\\STATE $\\tilde{\\mathbf{X}}= P_{+}[\\mathbf{ X}^{(n)} + \\mathbf{ \\Phi} P_{\\mathcal{M}} \\mathbf{ \\Phi}^{T} (\\mathbf{ Y} - \\mathbf{ X}^{(n)})]$.\n\\STATE $\\mathbf{X}^{(n+1)} = \\mathbf{ \\Phi}\\text{ST}_{\\lambda_n}[\\mathbf{ \\Phi}^{T}\\tilde{\\mathbf{X}}]$.\n\\STATE Compute the MS-VSTS on $\\mathbf{X}^{(n)}$ to get the stabilised coefficients $d^{(n)}_j$.\n\\STATE Hard threshold $|d_j[k]-d^{(n)}_j[k]|$ and update $\\mathcal{M}$.\n\\STATE $\\lambda_{n+1} = \\frac{N_{\\max} - (n+1)}{N_{\\max} - 1}$.\n\\ENDFOR\n\\STATE Get the estimate $\\hat{\\mathbf{\\Lambda}} = \\mathbf{X}^{(N_{\\max})}$.\n\n\\end{algorithmic}\n\\end{algorithm}\n\nThe main steps are summarized in Algorithm~\\ref{alg4}, which we use instead of Algorithm~\\ref{alg1} in our experiments.\n\n\\section{MS-VST + Curvelets}\n\nInsignificant coefficients are zeroed by using the same hypothesis testing framework as for the wavelet coefficients. At each wavelet scale $j$ and ridgelet band $k$, we make a hard thresholding on curvelet coefficients with threshold $\\kappa \\sigma_{j,k}$, $\\kappa= 3,4 \\text{ or } 5$. Finally, a direct reconstruction can be performed by first inverting the local ridgelet transforms and then inverting the MS-VST + IUWT~(Equation~(\\ref{eq30})). An iterative reconstruction may also be performed.\n\nAlgorithm~\\ref{algcurv} summarizes the main steps of the MS-VSTS + Curvelets denoising algorithm.\n\n\\begin{algorithm}\n\\caption{MS-VSTS + Curvelets Denoising}\n\\label{algcurv}\n\\begin{algorithmic}[1]\n\\STATE Apply the MS-VST + IUWT with $J$ scales to get the stabilized wavelet subbands $d_j$.\n\\STATE Set $B_1 = B_{\\min}$.\n\\FOR{$j=1$ to $J$}\n\\STATE Partition the subband $d_j$ with blocks of side-length $B_j$ and apply the digital ridgelet transform to each block to obtain the stabilized curvelet coefficients.\n\\IF {$j$ modulo $2=1$}\n\\STATE $B_{j+1} = 2 B_j$\n\\ELSE\n\\STATE $B_{j+1} =  B_j$\n\\ENDIF \\\\\n\\STATE Hard threshold the stabilized curvelet coefficients.\n\\ENDFOR \\\\\n\\STATE Invert the ridgelet transform in each block before inverting the MS-VST + IUWT.\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Experiments}\n\nThe method was tested on simulated Fermi data. The simulated data are the sum of a Milky Way diffuse background model and 1000 gamma ray point sources. We based our Galactic diffuse emission model intensity on the model $gll\\_iem\\_v02$ obtained at the Fermi Science Support Center~\\citep{Models}. This model results from a fit of the LAT photons with various gas templates as well as inverse Compton in several energy bands. We used a realistic point-spread function for the sources, based on Monte Carlo simulations of the LAT and accelerator tests, that scales approximately as $0.8(E/1GeV)^{-0.8}$ degrees. 
The positions of the 205 brightest sources were taken from the Fermi 3-month source list~\\citep{Abdo}. The positions of the 795 remaining sources follow the source distribution of the LAT 1-year Point Source Catalog~\\citep{Catalog}: each simulated source was randomly placed in a box of $\\Delta l=5^{\\circ}$ and $\\Delta b=1^{\\circ}$ around a LAT 1-year catalog source. We simulated each source assuming a power-law dependence with its spectral index given by the 3-month source list and the first year catalog. We used an exposure of $3\\times 10^{10}\\ \\mathrm{s\\,cm^2}$ corresponding approximately to one year of Fermi all-sky survey around 1 GeV. The simulated counts maps shown here correspond to photon energies from 150 MeV to 20 GeV.\n\n\nFig.~\\ref{rechsd} compares the result of denoising with MS-VST + IUWT (Algorithm~\\ref{alg1}), MS-VST + curvelets (Algorithm~\\ref{algcurv}) and Anscombe VST + wavelet shrinkage on a simulated Fermi map. Fig.~\\ref{recface} shows one HEALPix face of the results. \nAs expected from theory, the Anscombe method produces poor results when denoising Fermi data, because the underlying intensity is too weak. \nBoth wavelet and curvelet denoising on the sphere perform much better. \nFor this application, wavelets are slightly better than curvelets ($SNR_{wavelets} = 65.8 dB$, $SNR_{curvelets} = 37.3 dB$, $SNR (dB) = 20 \\log (\\sigma_{signal} / \\sigma_{noise})$). As this image contains many point sources, this result is expected. Indeed, wavelets are better than curvelets at representing isotropic objects.\n\n\\begin{figure}[htb]\n\\centering{\n\\hbox{\n\\includegraphics[width=3in,height=2.4in]{13822fg10.pdf}  \n\\includegraphics[width=3in,height=2.4in]{13822fg11.pdf}} \n\\hbox{\n\\includegraphics[width=3in,height=2.4in]{13822fg12.pdf} \n\\includegraphics[width=3in,height=2.4in]{13822fg13.pdf}}\n\\hbox{\n\\includegraphics[width=3in,height=2.4in]{13822fg14.pdf} \n\\includegraphics[width=3in,height=2.4in]{13822fg15.pdf}}\n\n\\caption{\\emph{Top Left}: Fermi simulated map without noise.\n\\emph{Top Right}: Fermi simulated map with Poisson noise.\n\\emph{Middle Left}: Fermi simulated map denoised with Anscombe VST + wavelet shrinkage.\n\\emph{Middle Right}: Fermi simulated map denoised with MS-VSTS + curvelets (Algorithm~\\ref{algcurv}).\n\\emph{Bottom Left}: Fermi simulated map denoised with MS-VSTS + IUWT (Algorithm~\\ref{alg1}) with threshold $5\\sigma_j$.\n\\emph{Bottom Right}: Fermi simulated map denoised with MS-VSTS + IUWT (Algorithm~\\ref{alg1}) with threshold $3\\sigma_j$.\nPictures are in logarithmic scale.}\n\\label{rechsd}\n}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=2.5in]{13822fg16.pdf} \\hfill\n\\includegraphics[width=2.5in]{13822fg17.pdf} \\hfill\n\\includegraphics[width=2.5in]{13822fg18.pdf} \\hfill\n\\includegraphics[width=2.5in]{13822fg19.pdf} \\hfill\n\\includegraphics[width=2.5in]{13822fg20.pdf} \\hfill\n\\includegraphics[width=2.5in]{13822fg21.pdf} \\hfill\n\\caption{View of a single HEALPix face from the results of Figure~\\ref{rechsd}.\n\\emph{Top Left}: Fermi simulated map without noise.\n\\emph{Top Right}: Fermi simulated map with Poisson noise.\n\\emph{Middle Left}: Fermi simulated map denoised with Anscombe VST + wavelet shrinkage.\n\\emph{Middle Right}: Fermi simulated map denoised with MS-VSTS + curvelets (Algorithm~\\ref{algcurv}).\n\\emph{Bottom Left}: Fermi simulated map denoised with MS-VSTS + IUWT (Algorithm~\\ref{alg1}) with threshold $5\\sigma_j$.\n\\emph{Bottom Right}: Fermi simulated map denoised with MS-VSTS + IUWT 
(Algorithm~\\ref{alg1}) with threshold $3\\sigma_j$.\nPictures are in logarithmic scale.\n}\n\\label{recface}\n\\end{center}\n\\end{figure*}\n", "meta": {"hexsha": "b8827697d92ea92d085598c3a78b9770d1194d69", "size": 12535, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/doc/doc_isap/msvst_denoising.tex", "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/doc/doc_isap/msvst_denoising.tex", "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/doc/doc_isap/msvst_denoising.tex", "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.8495145631, "max_line_length": 536, "alphanum_fraction": 0.7207818109, "num_tokens": 4028, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7154240079185318, "lm_q1q2_score": 0.5541555491663692}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% Written By Michael Brodskiy\n% Class: Analytic Geometry & Calculus III (Math-292)\n% Professor: V. Cherkassky\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\documentclass[12pt]{article} \n\\usepackage{alphalph}\n\\usepackage[utf8]{inputenc}\n\\usepackage[russian,english]{babel}\n\\usepackage{titling}\n\\usepackage{amsmath}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{amssymb}\n\\usepackage[super]{nth}\n\\usepackage{everysel}\n\\usepackage{ragged2e}\n\\usepackage{geometry}\n\\usepackage{fancyhdr}\n\\geometry{top=1.0in,bottom=1.0in,left=1.0in,right=1.0in}\n\\newcommand{\\subtitle}[1]{%\n  \\posttitle{%\n    \\par\\end{center}\n    \\begin{center}\\large#1\\end{center}\n    \\vskip0.5em}%\n\n}\n\\usepackage{hyperref}\n\\hypersetup{\ncolorlinks=true,\nlinkcolor=blue,\nfilecolor=magenta,      \nurlcolor=blue,\ncitecolor=blue,\n}\n\n\\urlstyle{same}\n\n\n\\title{Lecture IV Notes}\n\\date{\\today}\n\\author{Michael Brodskiy\\\\ \\small Professor: V. Cherkassky}\n\n\n\\begin{document}\n\n\\maketitle\n\n\\subsection{The Distance From a Point to a Plane}\n\nThe distance from any point to a plane may be found using the formula $\\frac{|ax+by+cz+d|}{\\sqrt{a^2+b^2+c^2}}$, where $a$-$d$ are the coefficients of the equation of the plane, and $x$-$z$ are the coordinates of the point.\n\n\\subsection{The Distance From a Plane to a Plane}\n\nFind one point on either plane, and repeat the process to find a distance from a point to a plane.\n\n\\section{Cylinders and Quadric Surfaces}\n\nThe equation of a quadric surface is given by the equation: $Ax^2+By^2+Cz^2+Dxy+Eyz+Fxz+Gx+Hy+Iz+J$, where $A$-$J$ are constants.\\\\\n\nThere are six different figures that should be known:\n\n\\begin{tabular}{|p{.45\\textwidth}||p{.45\\textwidth}|}\n\n\\hline\n  Figure & Equation \\\\\n\\hline\nEllipsoid: A Figure in Which All Traces are Ellipses & $\\frac{x^2}{a^2}+\\frac{y^2}{b^2}+\\frac{z^2}{c^2}=1$\\\\\n\\hline\n  Cone: A Figure in Which Horizontal Traces are Ellipses and Vertical Traces in $x$ and $y$ are Hyperbolas & $\\frac{x^2}{a^2}+\\frac{y^2}{b^2}=\\frac{z^2}{c^2}$\\\\\n\\hline\nElliptic Paraboloid: Horizontal Traces are Ellipses and Vertical Traces are Parabolas & $\\frac{x^2}{a^2}+\\frac{y^2}{b^2}=\\frac{z}{c}$\\\\ \n\\hline\nHyperboloid of One Sheet: Horizontal Traces are Ellipses and Vertical Traces are Hyperbolas & $\\frac{x^2}{a^2}+\\frac{y^2}{b^2}-\\frac{z^2}{c^2}=1$\\\\ \n\\hline\nHyperbolic Paraboloid: Horizontal Traces are Hyperbolas and Vertical Traces are Parabolas & $\\frac{x^2}{a^2}-\\frac{y^2}{b^2}=\\frac{z}{c}$\\\\\n\\hline\nHyperboloid of Two Sheets: Horizontal Traces are Ellipses in $z$ and Vertical Traces are Hyperbolas & $-\\frac{x^2}{a^2}-\\frac{y^2}{b^2}+\\frac{z^2}{c^2}=1$\\\\\n\\hline\n\n\\end{tabular}\n\n\\end{document}\n", "meta": {"hexsha": "e9bd63e08200c32ec67a1962da7182567ae9839d", "size": 2957, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture Notes/Lecture4.tex", "max_stars_repo_name": "MDBrodskiy/Vector_Calculus", "max_stars_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": 
"2021-01-15T15:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T15:51:52.000Z", "max_issues_repo_path": "Lecture Notes/Lecture4.tex", "max_issues_repo_name": "MDBrodskiy/Vector_Calculus", "max_issues_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notes/Lecture4.tex", "max_forks_repo_name": "MDBrodskiy/Vector_Calculus", "max_forks_repo_head_hexsha": "d4820f31c0c585ae65e6d61249d8c725077005eb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7882352941, "max_line_length": 223, "alphanum_fraction": 0.6246195468, "num_tokens": 937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649232, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.5541555397667545}}
{"text": "\\chapter{Introduction} \\label{sec:Introduction}\nOptimizing predictive models on datasets obtained from citizen-science projects can be computationally expensive as these datasets grow in size. Consequently, running models based on Multi-layered Neural Networks, Integer Programming, and other optimization routines can become more computationally difficult as the number of parameters increase, despite using the faster Central Processing Units (CPUs) in the market. Incidentally, it becomes difficult for citizen-science projects, which often deal with large datasets, to scale if the organizers do not employ special processing units to run neural networks as optimization models, which require extensive tensor operations. One such special processing unit is the Graphical Processing Unit (GPU), which offers numerous cores to parallelize computation. GPUs can often outperform CPUs in computing such predictive models if these models \\textit{heavily} rely on large-scale tensor operations \\cite{ParallelNVIDIA, cuDNNPaper}. By using GPUs over CPUs to accelerate computation on a citizen-science project, the model could achieve better optimization in less time, enabling the project to scale.\n\n\\section{Avicaching} \\label{sec:Avicaching}\nPart of the eBird project, which aims to ``maximize the utility and accessibility of the vast numbers of bird observations made each year by recreational and professional bird watchers'' \\cite{EBird}, Avicaching is an incentive-driven game trying to homogenize the spatial distribution of citizens' (agents') observations \\cite{Xue2016Avi1, Xue2016Avi2}. Since the dataset of agents' observations in eBird is geographically heterogeneous (concentrated in some places like cities and sparse in others), Avicaching homogenizes the observation set by placing rewards at and attracting agents to under-sampled locations \\cite{Xue2016Avi1}. For the agents, collecting rewards increases their `utility' (excitement, fun etc.), while for the organizers, a more homogeneous observation dataset means better sampling and higher confidence in using it for other models. \n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=\\textwidth]{avicaching_change}\n    \\caption[Avicaching 2014-15 Results in Tompkins and Cortland Counties]{Avicaching 2014-15 Results in Tompkins and Cortland Counties, NY: With the previous model \\cite{Xue2016Avi1, Xue2016Avi2, EBird}, Avicaching was able to attract `eBirders' to under-sampled locations by distributing rewards over the location set.}\n    \\label{fig:Avicaching 2014-15 Results in Tompkins and Cortland Counties}\n\\end{figure}\n\nTo accomplish this task of specifying rewards at different locations based on the historical records of observations, Avicaching would learn how agents change their behavior when a certain sample of rewards was applied to the set of locations, and then redistribute rewards across the locations based on those learned parameters \\cite{Xue2016Avi2}. This requirement naturally translates into a  optimization problem, which we implement using multi-layered neural networks and linear programming.\n\n\\section{Important Questions} \\label{sec:Important Questions}\nAlthough the previously devised solutions to Avicaching were conceptually effective \\cite{Xue2016Avi1, Xue2016Avi2}, using CPUs to solve Mixed Integer Programming and (even) shallow neural networks made the models impractical to scale. 
Solving the problems faster would have also allowed organizers to find better (more optimized) results. These concerns form the pivots of our research study.\n\n\\subsection{Solving Faster} \\label{sec:Important Questions - Solving Faster}\nWe were interested in using GPUs to run our optimization models because of their capability to accelerate problems based on large tensor operations \\cite{ParallelNVIDIA, cuDNNPaper}. Newer generation NVIDIA GPUs, equipped with thousands of CUDA (NVIDIA's parallel computing API) cores \\cite{NVIDIA}, could have empowered Avicaching's organizers to scale the game, if the underlying models were computed using simple arithmetic operations on tensors rather than using conditional logic (see~\\Cref{sec:Avoiding Specific Operations in GPUs}). Since even the faster CPUs --- in the range of Intel Core i7 chipsets --- are sequential in processing and do not provide parallel processing\\footnote{CPUs often have multiple cores nowadays, but very few compared to what many GPUs have.} comparable to GPUs, we sought to solve the problem much faster using GPUs. But \\textbf{how fast could we do it?}\n\n\\subsection{Better Results} \\label{sec:Important Questions - Better Results}\nFor learning parameters in agents' change of behavior on a fixed set of rewards, the previously devised sub-model delivered predictions that differed by $26\\%$ from the Ground Truth \\cite[Table~1]{Xue2016Avi2}. This model was then used to redistribute rewards in a budget. If we could get closer to the Ground Truth, i.e., better learn the parameters for the change, we could redistribute rewards with superior predictive accuracy. Since the organizers require the \\textit{best} distribution of rewards, we will need a set of learned parameters that is closer to the Ground Truth (in terms of Normalized Mean Squared Error \\cite[Section~4.2]{Xue2016Avi2}). In short, we aimed to \\textbf{learn the parameters more suitably}, and \\textbf{find the best allocation of rewards.}\n\n\\subsection{Adjusting the Model's Features} \\label{sec:Important Questions - Adjusting the Model's Features}\nOnce the model starts delivering better results than the previously devised models, one wonders whether some characteristics\\footnote{Tinkering with hyper-parameters like learning rate or adding a weight regularizer.} of the model can be changed to get more preferable results (one could also build a better model). While the goal of ``getting better results'' is an unending struggle, there is a trade-off with practicality as these adjustments take time and computation power to test --- and we did not have unlimited resources. Therefore, we asked if one could \\textbf{reasonably adjust the model's features to improve performance and optimization.}\n\n\\section{Computation Using GPUs} \\label{sec:Computation Using GPUs}\nThe use of GPUs has changed drastically in the last decade --- from rendering superior graphics to parallelizing floating point operations. Companies like NVIDIA are now providing General Purpose GPUs (GPGPUs) that are capable of executing parallel algorithms using thousands of cores and threads \\cite{NVIDIA}. Furthermore, by working with newly-developed parallel programming APIs like CUDA, one can handle a GPU's threads more efficiently and optimize a task's datapath \\cite{CUDADocs}. In the next sections, we briefly describe the NVIDIA GPU architecture, its instruction set, and best practices for optimization. 
Although we do not implement the CUDA back-end manually and instead use PyTorch's implementation \\cite{PTDocs}, understanding the basics of the processor was helpful for designing our models.\n\n\\subsection{GPU Architecture} \\label{sec:GPU Architecture}\nWe describe the structure and abstractions of an NVIDIA GPU with the Pascal architecture\\footnote{We use an NVIDIA Quadro P4000 (Pascal architecture) for our tests.}. These devices are multi-threaded multi-processors for executing ``fine-grained threads efficiently'' \\cite[Appendix~B.4]{PattersonARM}. Unlike Personal Computer CPUs, which currently comprise 1-8 cores with fast and low-latency datapaths, GPUs provide scalable parallel processing power with high throughput but also high latency\\footnote{High latency is undesirable.} \\cite{PattersonARM, DemystifyingGPU}. Informally, GPUs are often referred to as a collection of many `dumb' cores, unlike the few `smart' cores in a CPU. Nonetheless, GPUs are efficient in executing simple instructions in parallel, using hierarchies of cores, threads, and memory.\n\n\\subsubsection{Core Organization}\nIn most architectures by NVIDIA, many Scalar Processors (SPs or CUDA cores) are clustered in Streaming Multiprocessors (SMs), which are in turn organized into Graphics Processing Clusters (GPCs). A GPU can have several GPCs, as shown in \\Cref{fig:Organization of Cores in NVIDIA Pascal GP100 GPU} \\cite{CUDADocs, ParallelNVIDIA, DemystifyingGPU}.\n\\begin{figure}\n    \\centering\n    \\begin{subfigure}{\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{gpu_pascal}\n        \\caption[Organization of Cores in NVIDIA Pascal GP100 GPU]{Organization of Cores in NVIDIA Pascal GP100 GPU: In 6 GPCs and 60 SMs, 3840 SP or CUDA Cores (depicted by green blocks) are arranged \\cite{PascalWhitepaper, ParallelNVIDIA}.}\n        \\label{fig:Organization of Cores in NVIDIA Pascal GP100 GPU}\n    \\end{subfigure}\\vspace*{1em}\n    \\begin{subfigure}{\\textwidth}\n        \\centering\n        \\includegraphics[width=.7\\textwidth]{gpu_pascal_sm}\n        \\caption[Constituents of a Streaming Multiprocessor]{Constituents of a Streaming Multiprocessor \\cite{PascalWhitepaper, ParallelNVIDIA}.}\n        \\label{fig:Constituents of a Streaming Multiprocessor}\n    \\end{subfigure}\n    \\caption[The NVIDIA Pascal Architecture]{The NVIDIA Pascal Architecture: A GPU, like a typical CPU, contains multiple hierarchies of memory. In addition, NVIDIA also builds a hierarchy of core organization.}\n    \\label{fig:The NVIDIA Pascal Architecture}\n\\end{figure}\n\nThe Streaming Multiprocessor (\\Cref{fig:Constituents of a Streaming Multiprocessor}), which is the primary multi-threaded multi-processor, houses a shared memory unit for all SPs along with registers and Special Function Units (SFUs), which can calculate specific mathematical functions in fewer clock cycles than the regular cores would need. The SM is responsible for relaying instructions to threads (in SPs) and maintaining synchronized parallel computation at barriers \\cite{PascalWhitepaper, DemystifyingGPU}. GPCs and other hierarchies provide abstractions to memory and program access.\n\n\\subsubsection{Memory Organization}\nThe memory units, too, are organized into hierarchies, like the multi-level caches in CPUs (\\Cref{fig:The NVIDIA Pascal Architecture}). 
In an SM, there are dedicated local memory units for threads, register files shared by several SPs, and a shared memory unit for all SP cores in the SM \\cite{PascalWhitepaper, ParallelNVIDIA}.\n\nThe global internal memory of the GPU (like the RAM of a computer), which is housed separately around the clusters of SMs, stores the datasets for a program. Data is then distributed in the shared and local memory units once threads start executing an instruction \\cite[Appendix~B]{PattersonARM}. Moreover, similar to the multi-level cache structure in CPUs, GPUs also have an on-chip cache hierarchy (\\Cref{fig:The NVIDIA Pascal Architecture}), which enables threads to quickly access working data, thus increasing performance. On a side note, GPUs' caches are often not big enough to hold full datasets, as programs often require large datasets, leading to higher miss rates than those of caches in CPUs \\cite[Appendix~B]{PattersonARM}. \n\n\\subsubsection{Thread Organization}\nAccording to NVIDIA, the parallel structure is obtained by organizing threads into a hierarchy (``grids'' of ``blocks'' of ``warps'' of ``threads''), where threads of a warp (an abstraction) execute in lockstep per clock cycle \\cite{CUDADocs, DemystifyingGPU} \\cite[Appendix~B]{PattersonARM}. In other words, warps are the basic units of execution in a GPU, which receive single instructions from the control unit and forward them to their constituent threads for execution \\cite{CUDADocs, DemystifyingGPU}. This abstraction allows the GPU to execute independent sub-tasks in parallel on a large scale (\\Cref{fig:GPU - Thread Organization in a GPU}), reducing the total runtime of the program.\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=.55\\textwidth]{gpu_threads_blocks}\n    \\caption[Thread Organization in a GPU]{Thread Organization in a GPU: Warps are only an abstraction for identifying threads executing a single instruction; however, threads are physically organized into blocks and grids \\cite{CUDADocs,ParallelNVIDIA}}\n    \\label{fig:GPU - Thread Organization in a GPU}\n\\end{figure}\n\nNVIDIA calls this setup ``Single-Instruction Multiple-Thread (SIMT)'' \\cite{CUDADocs} \\cite[Appendix~B.4]{PattersonARM}, where a single instruction is executed in lockstep by multiple threads in a warp (though these are allowed to branch and diverge). The differences between SIMT and Single-Instruction Multiple-Data (SIMD), a common feature in CPUs, are very subtle. While SIMD distributes data sequences into vectors to be executed by a single core/thread of the CPU (data-level parallelism), SIMT requires a particular thread to only operate on a scalar element of the dataset \\cite{PattersonARM}. In this way, having thousands of threads running in lockstep provides better throughput than doing vector operations per clock cycle. SIMT also enables programmers to write a program meant for a single, independent thread instead of managing vectors in their code, which promotes ease of use and programming \\cite[Chapter~4]{CUDADocs}.\n\n\\subsubsection{Instruction Set}\nThe Instruction Set of a GPU is limited, which causes complex instructions to take several clock cycles. Even though a Special Function Unit (SFU) computes complex functions (sine, square root, reciprocal etc.) in fewer clock cycles, SFUs are few in number compared to the commonplace threads \\cite[Appendix~B]{PattersonARM}. 
The instruction set for the Pascal architecture contains basic operations to load and store memory, and to perform basic floating point, integer (add, subtract, multiply, divide, min, max, left-shift etc.), binary logical (or, and, xor etc.), and branch-jump operations \\cite{CUDABinUtils, DemystifyingGPU}. This comprises a very basic set of instructions, unlike those in Intel CPUs, for example, which have coalesced and dedicated datapaths for complex instructions\\footnote{Intel x86 CPUs have a Complex Instruction Set Computer (CISC) design.} \\cite{PattersonARM}.\n\n\\subsection{Avoiding Specific Types of Operations in GPUs} \\label{sec:Avoiding Specific Operations in GPUs}\nContrary to popular perception, GPUs are not better than CPUs at every kind of operation. With parallel processing come concerns about synchronization delays, blocking instructions, program correctness etc. Moreover, since GPUs are separate processing units, usually connected to CPUs by PCIe lanes, back-and-forth memory transfers can slow down overall performance. Even a GPU's low-level caches are not big enough to deliver hit rates comparable to a CPU's. There often exists a tradeoff when programming with GPUs, and one should take care to avoid such delays.\n\n\\subsubsection{Branch and Diverge}\nSince NVIDIA GPUs deal with program correctness through intra-warp synchronization (keeping warp threads in lockstep), threads must remain in sync even when conditional blocks force different datapaths for different threads. This ultimately reduces performance as some threads become inactive, which can aggregate and slow down the whole program.\n\nTo avoid such delays and maximize throughput, one should avoid extensive branching in the program. Using conditional checks can change a thread's datapath, which can possibly stall other threads in the warp from proceeding to the next instruction in the program. This behavior is referred to as `branch and diverge' (\\Cref{fig:Decrease in GPU Throughput due to Branch and Diverge}), which occurs when some threads in a warp follow a different datapath due to conditional branching, `diverging' from the group. Now when the warp's threads are forced to converge, those diverging threads take longer to complete the instruction, causing the non-diverging threads to wait. This radically decreases the throughput of the warp, which in turn lags behind the rest of the block, adding up the latency \\cite[Appendix~B]{PattersonARM}\\cite{DemystifyingGPU}.\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=\\textwidth]{gpu_branch_diverge}\n    \\caption[Decrease in GPU Throughput due to Branch and Diverge]{Decrease in GPU Throughput due to Branch and Diverge: In this case, while some threads execute \\texttt{A, B}, others execute \\texttt{X, Y}. If those sets of operations are all executed by threads of the same warp, the threads could diverge from the group and decrease the throughput \\cite{ParallelNVIDIA}.}\n    \\label{fig:Decrease in GPU Throughput due to Branch and Diverge}\n\\end{figure}\n\nTo ensure program correctness, the diverging threads in a warp are stalled while the other non-diverging threads are executed and vice-versa, according to NVIDIA \\cite{CUDADocs,ParallelNVIDIA}. 
For example, in \\Cref{fig:Decrease in GPU Throughput due to Branch and Diverge}, the 4 threads with thread-indexes $=\\{0,1,2,3\\}$ will have to wait while the other threads in the warp execute instructions \\texttt{X} and \\texttt{Y}, and the other threads will be stalled while the 4 threads execute instructions \\texttt{A} and \\texttt{B}. This leads to a $50\\%$ decrease in GPU performance as instructions get `serialized' \\cite{ParallelNVIDIA}, though it ensures program correctness. The overhead can slow down the run drastically if the program is extensively branched\\footnote{On a side note, functions like \\texttt{min, max, abs} would not add overhead due to branch and diverge since they have dedicated datapaths and operation codes in the GPU's instruction set.}. This overhead can be decreased either by reducing the number of conditional statements or by intentionally directing the different conditional paths to threads in different warps of the block, which execute independently \\cite{ParallelNVIDIA}. The latter, at most, affects full-program synchronization only if the operations in the conditional blocks have unequal runtimes. Furthermore, the resulting performance is usually much better than the decrease otherwise caused by diverging intra-warp threads.\n\n\\subsubsection{Memory Limitations}\nPCIe lanes between a GPU's internal memory and the system's main memory are not as fast as on-chip cache access \\cite{CUDADocs, ParallelNVIDIA}. This means that the cost of transferring datasets back and forth between GPUs and CPUs can add up considerably. Often the limits of a GPU's internal memory, or operations that must run on a particular device, can force the user to transfer datasets multiple times. Even so, one should look for optimizations that can reduce the number of transfers whenever possible, especially when large datasets are involved.\n\nThis suggestion also applies when moving datasets within the GPU (between caches). Miss rates in a GPU's local caches are often high due to their limited size, and data requests from higher-level caches can often be expensive.\n\n\\subsection{GPUs' Strengths} \\label{sec:Gpu's Strengths}\nInferring from GPUs' weaknesses and architecture, we can say that GPUs are good at handling independent sub-tasks with few diverging sections. These independent sub-tasks can usually be clubbed into matrices and tensors, which the threads operate on in parallel. The less conditional branching we have in our algorithms, the better.\n\nTherefore, GPUs are particularly good at doing linear algebra \\cite{PattersonARM, ParallelNVIDIA}, since most arithmetic operations on matrix elements can be executed independently without branching. Avoiding branching often requires extensive dataset preprocessing; consequently, one should meticulously design the structure of the models' datasets. 
Tensor-based datasets are often required in machine learning problems, graphics rendering routines, graph-based algorithms etc., which GPUs can potentially accelerate \\cite{PattersonARM}.", "meta": {"hexsha": "8d8ef7cf149e418f0fefb52142377718d00c65c9", "size": 19681, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/introduction.tex", "max_stars_repo_name": "anmolkabra/avicaching-summ17", "max_stars_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/introduction.tex", "max_issues_repo_name": "anmolkabra/avicaching-summ17", "max_issues_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/introduction.tex", "max_forks_repo_name": "anmolkabra/avicaching-summ17", "max_forks_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 200.8265306122, "max_line_length": 1445, "alphanum_fraction": 0.8075301052, "num_tokens": 4270, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7154239897159439, "lm_q1q2_score": 0.5541555276213812}}
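As a concrete illustration of sidestepping branch and diverge, the following minimal sketch (our illustration, not code from this project) replaces a per-element conditional with element-wise tensor operations in PyTorch, the framework used in this work; torch.where and torch.clamp run as single element-wise kernels instead of divergent control flow:

\begin{verbatim}
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1_000_000, device=device)

# Branchy formulation (conceptual): y[i] = x[i] if x[i] > 0 else 0.1 * x[i]
# Branchless formulation: one element-wise kernel, no divergent control flow
y = torch.where(x > 0, x, 0.1 * x)

# A range check written as clamping instead of an if/else per element
z = torch.clamp(x, min=-1.0, max=1.0)
\end{verbatim}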
{"text": "\n\\section{The \\counti algorithm}\n\\Label{sec:counti}\n\nThe \\counti algorithm in the \\cxx Standard Library \\cite[\\S\n28.5.9]{cxx-17-draft} counts\nthe frequency of occurrences for a particular element in\na sequence.\nFor our purposes we have modified\nthe generic implementation\nto that of arrays of type \\valuetype.\nThe signature now reads:\n\n\\begin{lstlisting}[style = acsl-block]\n\n  size_type\n  count(const value_type* a, size_type n, value_type v);\n\\end{lstlisting}\n\nInformally, the function returns the number of occurrences of\n\\inl{v} in the array \\inl{a}.\n\n\\subsection{The logic function \\Count}\n\nWhen trying to specify \\counti we are faced with the situation that\n\\acsl does not provide a definition of counting a value in an array.\\footnote{\n  This statement is not quite true because the \\acsl documentation \n  lists \\inl{numof} as one of several \\emph{higher order logic constructions} \\cite[\\S 2.6.7]{ACSLSpec}.\n  However, these \\emph{extended quantifiers} are mentioned only as experimental features.\n}\nWe therefore start with an axiomatic definition of \\emph{logic function} \\Count\nthat captures the basic intuitive features of counting on an array section.\nThe expression \\inl{Count(a,m,n,v)} returns the number of\noccurrences of \\inl{v} in \\inl{a[m],...,a[n-1]}.\n\nThe specification of \\counti will then be fairly short because it employs\nour \\emph{logic function}\n\\Count whose (considerably) longer definition is given in the \nListings~\\ref{logic:Count-1} and~\\ref{logic:Count-2}.\\footnote{\n This definition of \\Count is a generalization of\n the \\emph{logic function} \\inl{nb_occ} of the \\acsl specification \\cite{ACSLSpec}.\n}\n\n\n\\begin{itemize}\n\\item\nThe \\acsl keyword \\inl{axiomatic} \nis used to structure the specification and gather the logic function \\Count and related lemmas.\nNote that the interval bounds \\inl{m} and \\inl{n} and the return value for \\Count are of type \\inl{integer}.\n\n\\item\nThe logic functions \\Count is recursively defined.\nIt consist of two checks: whether the range is empty and for the value of\nthe \"current\" element in the array. The recursion goes down on the range length.\nWe also provide an overloaded version of \\Count that accepts only\nthe length of an array, thus relieving the use the supply the argument $m=0$ for the\ncase of a complete array.\n\n\\item\nLemma \\logicref{CountEmpty} covers the cases of empty ranges.\n\n\\item\nLemmas \\logicref{CountHit} and \n\\logicref{CountMiss} reduce\ncounting of a range of length~$n-m$ to a range of length~$n-m-1$.\n\n\\item\nLemmas \\logicref{CountOne} and \\logicref{CountSingle} built on on top of \\CountHit\nand \\CountMiss. 
\nUsing them simplifies several \\coq proofs.\nThey also slightly change the induction scheme from $n-1 \\rightarrow n$\nto $n \\rightarrow n+1$.\n\\end{itemize}\n\n\n\\begin{logic}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={1-45}, style=acsl-block, frame=single]{Source/Count.acsl}\n\\end{minipage}\n\\caption{\\label{logic:Count-1}The logic function \\Count (1)}\n\\input{Listings/Count.acsl.labels.tex}\n\\input{Listings/Count.acsl.index.tex}\n\\end{logic}\n\n\\FloatBarrier\n\n\\begin{itemize}\n\n\\item\nThe logic function \\Count depends only on the set \\inl{a[m..n-1]} of memory locations.\nLemma \\logicref{CountUnchanged} makes this claim explicit by ensuring that\n\\Count produces the same result\nif the values \\inl{a[0..n-1]} do not change between two program states indicated\nby the labels~\\inl{K} and~\\inl{L}.\nWe use the predicate \\logicref{Unchanged} here to express the premise.\n\n\\item \nLemma \\logicref{CountEqual} is a generalization of lemma \\CountUnchanged to\nthe case of comparing \\Count on two arrays.\n\n\\item\nLemmas \\logicref{CountUnion} and \\logicref{CountCut} \nallow us to deal with partitions of arrays.\n\\end{itemize}\n\n\\begin{logic}[hbt]\n\\begin{minipage}{\\textwidth}\n\\lstinputlisting[linerange={46-90}, style=acsl-block, frame=single]{Source/Count.acsl}\n\\end{minipage}\n\\caption{\\label{logic:Count-2}The logic function \\Count (2)}\n\\end{logic}\n\n\\FloatBarrier\n\n\\begin{itemize}\n\\item \nLemmas \\logicref{CountSingleBounds} and \\logicref{CountBounds}\nexpress lower and upper bounds of \\Count.\nLemma \\logicref{CountIncreasing} states that \\Count is monotonically increasing.\n\n\\item\nFinally, lemmas \\logicref{CountSingleShift} and \\logicref{CountShift}\nstate that \\Count is invariant under array shifts.\n\\end{itemize}\n\nWe also mention here lemma \\logicref{CountSomeEqual},\nwhich brings together properties of \\logicref{Count} and \\logicref{Find}.\n\n\\input{Listings/CountFind.acsl.tex}\n\n\\clearpage \n\n\\subsection{Formal specification of \\counti}\n\nIn the contract of \\specref{counti} we use the logic function\n\\logicref{Count}.\nNote that our specification also states that the result of \\counti is non-negative\nand less than or equal to the size of the array.\n\n\\input{Listings/count.h.tex}\n\n\\subsection{Implementation of \\counti}\n\nThe following listing shows a possible implementation of \\implref{counti}.\nNote that we refer to the logic function \\Count in one of the loop invariants.\n\n\\input{Listings/count.c.tex}\n\n\\clearpage\n\n", "meta": {"hexsha": "5e7e4a871844b3b4c1e5a0e85bfb011fe1a5a1b7", "size": 5007, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/nonmutating/count.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/nonmutating/count.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/nonmutating/count.tex", "max_forks_repo_name":
"fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 33.1589403974, "max_line_length": 108, "alphanum_fraction": 0.771919313, "num_tokens": 1397, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833737577158, "lm_q2_score": 0.7154239836484143, "lm_q1q2_score": 0.5541555229215737}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage[utf8x]{luainputenc}\n\\usepackage{aeguill}\n%\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{mathrsfs}\n\\usepackage{fullpage}\n\\usepackage{fancyhdr}\n\\setlength{\\headheight}{12pt}\n\\pagestyle{fancy}\n\\chead{Linear Algebra}\n\\lhead{October 12, 2015}\n\\rhead{Jon Allen}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n%\\renewcommand{\\labelenumii}{\\alph{enumii}.}\n%\\renewcommand{\\labelenumiii}{\\alph{enumiii}.}\n%\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\section*{3.1}\n%6, 8, 9(b,c), 11, 15, 16.\n\\begin{enumerate}\n\\setcounter{enumi}{5}\n\\item\n  \\begin{enumerate}\n  \\item\n  Let $U$ and $V$ be subspaces of $\\mathbb{R}^n$. Define the \\emph{intersection} of $U$ and $V$ to be\n  \\[U\\cap V=\\{\\mathbf{x}\\in \\mathbb{R}^n:\\mathbf{x}\\in U\\text{ and }\\mathbf{x}\\in V\\}\\]\n  Show that $U\\cap V$ is a subspace of $\\mathbb{R}^n$. Give two examples.\n\n  We know that $\\mathbf{0}\\in U$ and $\\mathbf{0}\\in V$. And so $\\mathbf{0}\\in U\\cap V$. Now if we take any $\\mathbf{u},\\mathbf{v}\\in U\\cap V$ then $\\mathbf{u},\\mathbf{v}\\in U$ and $\\mathbf{u},\\mathbf{v}\\in V$ and so $\\mathbf{u}+\\mathbf{v}\\in U$ and $\\mathbf{u}+\\mathbf{v}\\in V$. Thus $\\mathbf{u}+\\mathbf{v}\\in  U\\cap V$. Similarly $\\alpha\\mathbf{u}\\in U$ and $\\alpha\\mathbf{u}\\in V$ for any $\\alpha\\in \\mathbb{R}$. Thus $\\alpha\\mathbf{u}\\in U\\cap V$.\n  \\item\n  Is $U\\cup V=\\{\\mathbf{x}\\in \\mathbb{R}^n:\\mathbf{x}\\in U\\text{ or }\\mathbf{x}\\in V\\}$ always a subspace of $\\mathbb{R}^n$? Give a proof or counterexample.\n\n  Let $V=\\{3n:\\forall n\\in \\mathbb{Z}\\}$ and $U=\\{2n:\\forall n\\in \\mathbb{Z}\\}$. Then $3\\in V$ and $2\\in U$. Therefore $3,2\\in U\\cup V$. But $3+2=5\\not\\in V$ and $5\\not\\in U$. Therefore $5\\not\\in U\\cup V$ and so we have a counterexample.\n  \\end{enumerate}\n\\setcounter{enumi}{7}\n\\item\nLet $\\mathbf{v}_1,\\dots,\\mathbf{v}_k\\in \\mathbb{R}^n$ and let $\\mathbf{v}\\in \\mathbb{R}^n$. Prove that $\\text{Span}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)=\\text{Span}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k,\\mathbf{v})$ if and only if $\\mathbf{v}\\in \\text{Span}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)$.\n\nDefine $A=\\text{Span}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)$ and $B=\\text{Span}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k,\\mathbf{v})$.\n\nWe know that $\\mathbf{v}\\in B$ and so if we assume that $A=B$ then $\\mathbf{v}\\in A$ follows immediately. Now lets assume that $\\mathbf{v}\\in A$. 
Then for any $\\mathbf{x}\\in A$ we have $\\mathbf{x}=\\alpha_1\\mathbf{v}_1+\\dots+\\alpha_k\\mathbf{v}_k+0\\mathbf{v}\\in B$, so $A\\subseteq B$.\nNow, because $\\mathbf{v}=\\alpha_1\\mathbf{v}_1+\\dots+\\alpha_k\\mathbf{v}_k$ for some scalars $\\alpha_i$, for any element $\\mathbf{x}\\in B$ we have $\\mathbf{x}=\\beta_1\\mathbf{v}_1+\\dots+\\beta_k\\mathbf{v}_k+\\beta\\mathbf{v}=\\beta_1\\mathbf{v}_1+\\dots+\\beta_k\\mathbf{v}_k+\\beta(\\alpha_1\\mathbf{v}_1+\\dots+\\alpha_k\\mathbf{v}_k)=(\\beta_1+\\beta\\alpha_1)\\mathbf{v}_1+\\dots+(\\beta_k+\\beta\\alpha_k)\\mathbf{v}_k\\in A$, so $B\\subseteq A$.\n$\\Box$\n\\item\nDetermine the intersection of the subspaces $\\mathcal{P}_1$ and $\\mathcal{P}_2$ in each case:\n  \\begin{enumerate}\n  \\setcounter{enumii}{1}\n  \\item\n    $\\mathcal{P}_1=\\text{Span}\\left((1,2,2),(0,1,1)\\right),\\mathcal{P}_2=\\text{Span}\\left((2,1,1),(1,0,0)\\right)$\n    \\begin{align*}\n      a\\left(\\begin{array}{r}1\\\\2\\\\2\\end{array}\\right)\n      +b\\left(\\begin{array}{r}0\\\\1\\\\1\\end{array}\\right)\n      -c\\left(\\begin{array}{r}2\\\\1\\\\1\\end{array}\\right)\n      -d\\left(\\begin{array}{r}1\\\\0\\\\0\\end{array}\\right)\n      =0\\Rightarrow\n      \\left[\\begin{array}{rrrr}\n      1&0&2&1\\\\\n      2&1&1&0\\\\\n      2&1&1&0\n      \\end{array}\\right]\\Rightarrow\n      \\left[\\begin{array}{rrrr}\n      1&0&2&1\\\\\n      0&1&-3&-2\\\\\n      0&0&0&0\n      \\end{array}\\right]\n    \\end{align*}\n    And so we have two free variables, which means that given any element of $\\mathcal{P}_1$ we can find it in $\\mathcal{P}_2$ and vice versa. That is, $\\mathcal{P}_1=\\mathcal{P}_2$.\n  \\item\n    $\\mathcal{P}_1=\\text{Span}\\left((1,0,-1),(1,2,3)\\right),\\mathcal{P}_2=\\{\\mathbf{x}:x_1-x_2+x_3=0\\}$.\n    Converting $\\mathcal{P}_2$ to standard form we have $\\mathbf{x}=\\left(\\begin{array}{c}x_1\\\\x_1+x_3\\\\x_3\\end{array}\\right)$ or $\\mathcal{P}_2=\\text{Span}\\{(1,1,0),(0,1,1)\\}$. And so we put everything in an array as above\n    \\[\n    \\left[\\begin{array}{rrrr}\n    1&1&-1&0\\\\\n    0&2&-1&-1\\\\\n    -1&3&0&-1\n    \\end{array}\\right]\n    \\left[\\begin{array}{rrrr}\n    1&1&-1&0\\\\\n    0&2&-1&-1\\\\\n    0&4&-1&-1\n    \\end{array}\\right]\n    \\left[\\begin{array}{rrrr}\n    1&1&-1&0\\\\\n    0&2&0&0\\\\\n    0&2&-1&-1\n    \\end{array}\\right]\n    \\left[\\begin{array}{rrrr}\n    1&1&-1&0\\\\\n    0&1&0&0\\\\\n    0&0&1&1\n    \\end{array}\\right]\n    \\left[\\begin{array}{rrrr}\n    1&0&0&1\\\\\n    0&1&0&0\\\\\n    0&0&1&1\n    \\end{array}\\right]\n    \\]\n    This gives us one free variable, and so the two planes intersect in a line. 
We notice that $(1,0,-1)=(1,1,0)-(0,1,1)$, so $(1,0,-1)$ lies in both $\\mathcal{P}_1$ and $\\mathcal{P}_2$. Hence every element of $\\text{Span}((1,0,-1))$ is in the intersection, and thus we have found our line.\n  \\end{enumerate}\n\\setcounter{enumi}{10}\n\\item\nSuppose $V$ and $W$ are orthogonal subspaces of $\\mathbb{R}^n$, i.e., $\\mathbf{v}\\cdot\\mathbf{w}=0$ for every $\\mathbf{v}\\in V$ and every $\\mathbf{w}\\in W$.\nProve that $V\\cap W=\\{\\mathbf{0}\\}$.\n\nStart by assuming there exists some element $\\mathbf{u}\\in V\\cap W$ such that $\\mathbf{u}\\ne \\mathbf{0}$.\nThen because $\\mathbf{u}\\in V$ and $\\mathbf{u}\\in W$ we know that $\\mathbf{u}\\cdot\\mathbf{u}=0$.\nAnd of course $\\mathbf{u}\\cdot\\mathbf{u}=||\\mathbf{u}||^2$.\nBut $\\mathbf{u}\\ne \\mathbf{0}$ and so $||\\mathbf{u}||^2>0$ and $\\mathbf{u}\\cdot\\mathbf{u}\\ne 0$, which leaves us with a contradiction.\nWe conclude that the intersection of these sets contains only $\\mathbf{0}$.\n\\setcounter{enumi}{14}\n\\item\nLet $A$ be an $m\\times n$ matrix. Let $V\\subset \\mathbb{R}^n$ and $W\\subset \\mathbb{R}^m$ be subspaces.\n  \\begin{enumerate}\n  \\item\n  Show that $E=\\{\\mathbf{x}\\in \\mathbb{R}^n:A\\mathbf{x}\\in W\\}$ is a subspace of $\\mathbb{R}^n$.\n\n  Of course $A\\mathbf{0}=\\mathbf{0}\\in W$. Let us choose $\\mathbf{x},\\mathbf{y}\\in E$. Then we have $A\\mathbf{x}\\in W$ and $A\\mathbf{y}\\in W$ and so $A\\mathbf{x}+A\\mathbf{y}\\in W$. But $A\\mathbf{x}+A\\mathbf{y}=A(\\mathbf{x}+\\mathbf{y})$ and so $\\mathbf{x}+\\mathbf{y}\\in E$. And if $A\\mathbf{x}\\in W$ then $\\alpha A\\mathbf{x}\\in W$ for any $\\alpha\\in \\mathbb{R}$. So because $\\alpha A\\mathbf{x}=A(\\alpha\\mathbf{x})$, we have $\\alpha\\mathbf{x}\\in E$. And so $E$ is a subspace.\n  \\item\n  Show that $F=\\{\\mathbf{y}\\in \\mathbb{R}^m:\\mathbf{y}=A\\mathbf{x}\\text{ for some }\\mathbf{x}\\in V\\}$ is a subspace of $\\mathbb{R}^m$.\n\n  Observe that $\\mathbf{0}=A\\mathbf{0}$ and $\\mathbf{0}\\in V$. And so $\\mathbf{0}\\in F$. Now, let us take $\\mathbf{u},\\mathbf{v}\\in F$. Then $\\mathbf{u}=A\\mathbf{x}_1$ and $\\mathbf{v}=A\\mathbf{x}_2$ where $\\mathbf{x}_1,\\mathbf{x}_2\\in V$. Now $\\mathbf{u}+\\mathbf{v}=A\\mathbf{x}_1+A\\mathbf{x}_2=A(\\mathbf{x}_1+\\mathbf{x}_2)$. But $\\mathbf{x}_1+\\mathbf{x}_2\\in V$ and so $\\mathbf{u}+\\mathbf{v}\\in F$. And if we choose $\\alpha\\in \\mathbb{R}$ then $\\alpha\\mathbf{u}=\\alpha A\\mathbf{x}=A(\\alpha\\mathbf{x})$ for some $\\mathbf{x}\\in V$. And since $\\mathbf{x}\\in V$ then $\\alpha\\mathbf{x}\\in V$ and so $\\alpha\\mathbf{u}\\in F$. Thus $F$ is a subspace.\n  \\end{enumerate}\n\\item\nSuppose $A$ is a symmetric $n\\times n$ matrix. Let $V\\subset \\mathbb{R}^n$ be a subspace with the property that $A\\mathbf{x}\\in V$ for every $\\mathbf{x}\\in V$. Show that $A\\mathbf{y}\\in V^\\perp$ for all $\\mathbf{y}\\in V^\\perp$.\n\nWe choose some $\\mathbf{y}\\in V^\\perp$ and an arbitrary $\\mathbf{x}\\in V$. Then $\\mathbf{x}\\cdot\\mathbf{y}=0$. Now if $\\mathbf{x}\\cdot(A\\mathbf{y})=0$ for every such $\\mathbf{x}$, then $A\\mathbf{y}\\in V^\\perp$. Of course $\\mathbf{x}\\cdot(A\\mathbf{y})=\\mathbf{x}^TA\\mathbf{y}=(A^T\\mathbf{x})^T\\mathbf{y}=(A\\mathbf{x})^T\\mathbf{y}=(A\\mathbf{x})\\cdot\\mathbf{y}$, using the symmetry $A^T=A$. 
But $A\\mathbf{x}\\in V$ and so $(A\\mathbf{x})\\cdot\\mathbf{y}=0$; therefore $A\\mathbf{y}\\in V^\\perp$.\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "da5a0645a157bcbffd8e9c75abeb9817c1fff35b", "size": 7853, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "linear/linear-hw-2015-10-12.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear/linear-hw-2015-10-12.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear/linear-hw-2015-10-12.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.1703703704, "max_line_length": 642, "alphanum_fraction": 0.6355532917, "num_tokens": 3339, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8198933381139645, "lm_q1q2_score": 0.5540548945087582}}
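As a quick mechanical cross-check of 9(c) (not part of the original homework), the intersection can also be computed with SymPy: stack the spanning vectors into the matrix $[\,\mathbf{v}_1\ \mathbf{v}_2\ {-\mathbf{w}_1}\ {-\mathbf{w}_2}\,]$ and read the intersection directions off its null space.

\begin{verbatim}
import sympy as sp

v1, v2 = sp.Matrix([1, 0, -1]), sp.Matrix([1, 2, 3])   # spanning P1
w1, w2 = sp.Matrix([1, 1, 0]), sp.Matrix([0, 1, 1])    # spanning P2

# a*v1 + b*v2 = c*w1 + d*w2  <=>  [v1 v2 -w1 -w2] (a, b, c, d)^T = 0
M = sp.Matrix.hstack(v1, v2, -w1, -w2)
for ns in M.nullspace():
    a, b = ns[0], ns[1]
    print((a * v1 + b * v2).T)   # direction vector of the intersection line
\end{verbatim}

Running this prints a multiple of $(1, 0, -1)$, in agreement with the hand computation above.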
{"text": "\\section{Adding Mass to CLAuDE}\n\\subsection{The Momentum Equations}\nThe momentum equations are a set of equations that describe the flow of a fluid on the surface of a rotating body. For our model we will use the f-plane approximation. The equations corresponding\nto the f-plane approximation are given in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} \\cite{momentumeqs}. Note that we are ignoring vertical moevement, as this does not have a significant\neffect on the whole flow. All the symbols in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} mean:\n\n\\begin{itemize}\n    \\item $u$: The east to west velocity ($ms^{-1}$).\n    \\item $t$: The time ($s$).\n    \\item $f$: The coriolis parameter as in \\autoref{eq:coriolis}.\n    \\item $v$: The north to south velocity ($ms^{-1}$).\n    \\item $\\rho$: The density of the atmosphere ($kgm^{-3}$).\n    \\item $p$: The atmospheric pressure ($Pa$).\n    \\item $x$: The local longitude coordinate ($m$).\n    \\item $y$: The local latitude coordinate ($m$).\n\\end{itemize}\n\nIf we then define a vector $\\bar{u}$ as $(u, v, 0)$, we can rewrite both \\autoref{eq:x momentum} as \\autoref{eq:x momentum laplace}. Here $\\nabla u$ is the gradient of \n$u$ in both $x$ and $y$ directions. Then if we write out $\\nabla u$ we get \\autoref{eq:x momentum final}. Similarly, if we want to get $\\delta v$ instead of $\\delta u$ we rewrite \n\\autoref{eq:y momentum} to get \\autoref{eq:y momentum laplace} and \\autoref{eq:y momentum final}.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\frac{Du}{Dt} - fv = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{Dv}{Dt} - fu = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta u}{\\delta t} + \\bar{u} \\cdot \\nabla u - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum laplace}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta v}{\\delta t} + \\bar{u} \\cdot \\nabla v - fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum laplace}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta u}{\\delta t} + u\\frac{\\delta u}{\\delta x} + v\\frac{\\delta u}{\\delta y} - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum final}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta v}{\\delta t} + u\\frac{\\delta v}{\\delta x} + v\\frac{\\delta v}{\\delta y} - fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum final}\n    \\end{equation}\n\\end{subequations}\n\nNow that we have the momentum equations sorted out, we need to define a method to do the gradient calculations for us. 
Therefore we define two functions \\autoref{alg:gradient x} and \n\\autoref{alg:gradient y} that calculate the $x$ and $y$ gradients respectively.\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{Matrix (double array) $a$, first index $i$, second index $j$}\n    \\Output{Gradient in the $x$ direction}\n    $grad \\leftarrow \\frac{a[i, (j + 1)\\text{ mod } nlon] - a[i, (j - 1) \\text{ mod } nlon]}{\\delta x[i]}$ \\;\n    \\Return{$grad$} \\;\n    \\caption{Calculating the gradient in the $x$ direction}\n    \\label{alg:gradient x}\n\\end{algorithm}\n\n\\begin{algorithm}[hbt]\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{Matrix (double array) $a$, first index $i$, second index $j$}\n    \\Output{Gradient in the $y$ direction}\n    \\eIf{$i == 0$ or $i == nlat - 1$}{\n        $grad \\leftarrow 0$ \\;\n    }{\n        $grad \\leftarrow \\frac{a[i + 1, j] - a[i - 1, j]}{\\delta y}$ \\;\n    }\n    \\Return $grad$ \\;\n    \\caption{Calculating the gradient in the $y$ direction}\n    \\label{alg:gradient y}\n\\end{algorithm}\n\nWith the gradient functions defined, we can move on to the main code for the momentum equations. The main loop is shown in \\autoref{alg:stream3}. Note that this loop replaces the one\nin \\autoref{alg:stream2v2}: both calculate the same thing, but the new algorithm does it better. The gradients are evaluated per grid cell inside the loops, and only the pressure gradient term is divided by $\\rho$, matching \\autoref{eq:x momentum final} and \\autoref{eq:y momentum final}.\n\n\\begin{algorithm}\n    \\While{\\texttt{TRUE}}{\n        \\For{$lat \\in [1, nlat - 1]$}{\n            \\For{$lon \\in [0, nlon]$}{\n                $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon)$ \\;\n                $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon)$ \\;\n                $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon)$ \\;\n                $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon)$ \\;\n                $S_{px} \\leftarrow \\texttt{gradient\\_x}(p, lat, lon)$ \\;\n                $S_{py} \\leftarrow \\texttt{gradient\\_y}(p, lat, lon)$ \\;\n                $u[lat, lon] \\leftarrow u[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xu} - v[lat, lon]S_{yu} + f[lat]v[lat, lon] - \\frac{S_{px}}{\\rho}\\right)$ \\;\n                $v[lat, lon] \\leftarrow v[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xv} - v[lat, lon]S_{yv} - f[lat]u[lat, lon] - \\frac{S_{py}}{\\rho}\\right)$ \\;\n            }\n        }\n    }\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:stream3}\n\\end{algorithm}\n\n\\subsection{Thermal Diffusion}\nAt this point, what you will notice if you run the model is that the winds only get stronger and stronger (and the model hence blows up). This is because there is no link yet between the \nvelocities of the atmosphere and the temperature. Currently, air movement does not affect the temperature in the atmosphere of our model, while it does in reality. So we need to change some \ncalculations to account for that. Thermal diffusion helps with spreading out the temperatures and tempering the winds a bit.\n\nThe diffusion equation, as written in \\autoref{eq:diffusion}, describes how the temperature spreads out over time \\cite{diffusion}. The symbols in the equation represent:\n\n\\begin{itemize}\n    \\item $u$: A vector consisting of 4 elements: $x, y, z, t$. 
$x, y, z$ are the local coordinates and $t$ is time.\n    \\item $\\alpha$: The thermal diffusivity constant.\n    \\item $\\nabla^2$: The Laplace operator, more information in \\autoref{sec:laplace}.\n    \\item $\\bar{u}$: The time derivative of $u$, or in symbols $\\frac{\\delta u}{\\delta t}$.\n\\end{itemize}\n\n\\begin{equation}\n    \\bar{u} = \\alpha \\nabla^2 u\n    \\label{eq:diffusion}\n\\end{equation}\n\nNow to get this into code we need the algorithms \\autoref{alg:laplacian} and \\autoref{alg:diffusion}. \\autoref{alg:laplacian} implements the Laplacian operator, whereas \n\\autoref{alg:diffusion} implements the diffusion calculations. $\\Delta_x$ and $\\Delta_y$ in \\autoref{alg:laplacian} represent the calls to \\autoref{alg:gradient x} and \\autoref{alg:gradient y} \nrespectively. $\\nabla^2$ in \\autoref{alg:diffusion} represents the call to \\autoref{alg:laplacian}.\n\n\\begin{algorithm}\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{A matrix (double array) $a$}\n    \\Output{A matrix (double array) with the result of the Laplacian operator for each element}\n    \\For{$lat \\in [1, nlat - 1]$}{\n        \\For{$lon \\in [0, nlon]$}{\n            $output[lat, lon] \\leftarrow \\frac{\\Delta_x(a, lat, (lon + 1) \\text{ mod } nlon) - \\Delta_x(a, lat, (lon - 1) \\text{ mod } nlon)}{\\delta x[lat]} + \\frac{\\Delta_y(a, lat + 1, lon) - \n            \\Delta_y(a, lat - 1, lon)}{\\delta y}$\\;\n        }\n    }\n    \\Return{$output$} \\;\n    \\caption{Calculate the Laplacian operator over a matrix $a$}\n    \\label{alg:laplacian}\n\\end{algorithm}\n\n\\begin{algorithm}\n    $\\alpha_a \\leftarrow 2 \\cdot 10^{-5}$ \\;\n    $\\alpha_p \\leftarrow 1.5 \\cdot 10^{-6}$ \\;\n    \\While{\\texttt{TRUE}}{\n        $T_a \\leftarrow T_a + \\delta t \\alpha_a \\nabla^2(T_a)$ \\;\n        $T_p \\leftarrow T_p + \\delta t \\alpha_p \\nabla^2(T_p)$ \\;\n    }\n    \\caption{The main loop for calculating the effects of diffusion}\n    \\label{alg:diffusion}\n\\end{algorithm}\n\n\\subsection{Advection}\nWith thermal diffusion in place, the temperature will spread out a bit; however, air is not transported yet. This means that the winds we simulate are not actually moving any air. Advection is\ngoing to change that. Advection is a fluid flow transporting something with it as it flows. This can be temperature, gas, solids or other fluids. In our case we will be looking at temperature.\nThe advection equation is shown in \\autoref{eq:advection}. The symbols are:\n\n\\begin{itemize}\n    \\item $\\psi$: What is carried along (in our case temperature, $K$).\n    \\item $t$: The time ($s$).\n    \\item $u$: The fluid velocity vector ($ms^{-1}$).\n    \\item $\\nabla$: The divergence operator (as explained in \\autoref{sec:laplace}).\n\\end{itemize}\n\n\\begin{equation}\n    \\frac{\\delta \\psi}{\\delta t} + \\nabla \\cdot (\\psi u) = 0\n    \\label{eq:advection}\n\\end{equation}\n\nAs we expect to use the divergence operator more often throughout our model, let us define a separate function for it in \\autoref{alg:divergence}. $\\Delta_x$ and $\\Delta_y$ in \n\\autoref{alg:divergence} represent the calls to \\autoref{alg:gradient x} and \\autoref{alg:gradient y} respectively. 
We do the multiplication with the velocity vector here already, as we expect \nthat we will use it in combination with the divergence operator frequently.\n\n\\begin{algorithm}\n    \\SetKwInOut{Input}{Input}\n    \\SetKwInOut{Output}{Output}\n    \\Input{A matrix (double array) $a$}\n    \\Output{A matrix (double array) containing the result of the divergence operator taken over each element}\n    $dim_1 \\leftarrow \\text{ Length of } a \\text{ in the first dimension}$ \\;\n    \\For{$i \\in [0, dim_1]$}{\n        $dim_2 \\leftarrow \\text{ Length of } a \\text{ in the second dimension (i.e. the length of the array stored at index } i)$ \\;\n        \\For{$j \\in [0, dim_2]$}{\n            $output[i, j] \\leftarrow \\Delta_x(au, i, j) + \\Delta_y(av, i, j)$ \\;\n        }\n    }\n    \\Return{$output$} \\;\n    \\caption{Calculate the result of the divergence operator on a vector}\n    \\label{alg:divergence}\n\\end{algorithm}\n\nWith the divergence function defined, we now need to adjust \\autoref{alg:diffusion} to incorporate this effect. The resulting algorithm can be found in \\autoref{alg:advection}. Here $\\nabla$\nrepresents the function call to \\autoref{alg:divergence}. Note that the increment is accumulated in $T_{add}$ and then added to $T_a$, with the advection term entering with a minus sign as in \\autoref{eq:advection}.\n\n\\begin{algorithm}\n    $\\alpha_a \\leftarrow 2 \\cdot 10^{-5}$ \\;\n    $\\alpha_p \\leftarrow 1.5 \\cdot 10^{-6}$ \\;\n    \\While{\\texttt{TRUE}}{\n        $T_{add} \\leftarrow \\delta t \\left(\\alpha_a \\nabla^2(T_a) - \\nabla(T_a)\\right)$ \\;\n        $T_a \\leftarrow T_a + T_{add}[5:-5, :] \\text{ //Only add } T_{add} \\text{ to } T_a \\text{ for indices in the interval } [-nlat + 5, nlat - 5]$. \\;\n        $T_p \\leftarrow T_p + \\delta t \\alpha_p \\nabla^2(T_p)$ \\;\n    }\n    \\caption{The main loop for calculating the effects of advection}\n    \\label{alg:advection}\n\\end{algorithm}\n\nNow that we have the air moving, we also need to account for the transport of density. This is because moving air into a certain place will change the air density at that place if the air at that \nplace does not move away at the same rate. Say we are moving air towards $x$ at $y \\ ms^{-1}$. If air at $x$ moves away at a rate $z \\ ms^{-1}$ and $z \\neq y$ then the air density at $x$ will change.\nThe equation we will need for that is the mass continuity equation as shown in \\autoref{eq:mass continuity} \\cite{masscontinue}.\n\n\\begin{equation}\n    \\frac{\\delta \\rho}{\\delta t} + \\nabla \\cdot (\\rho v) = 0\n    \\label{eq:mass continuity}\n\\end{equation}\n\nUsing this equation means that we will no longer assume that the atmosphere is incompressible. Therefore we need to change a few things in the code. First we need to change the $\\rho$ in \n\\autoref{alg:stream3}. Since $\\rho$ is no longer constant we need to access the right value of $\\rho$ by specifying the indices. So $\\rho$ will change to $\\rho[lat, lon]$. Furthermore we need\nto calculate $\\rho$ after the movement of air has taken place, so we need to change \\autoref{alg:advection} as well to include the calculations for $\\rho$. The new version can be found in \n\\autoref{alg:advectionv2}. 
Again the $\\nabla$ represents the call to \\autoref{alg:divergence}; following \\autoref{eq:mass continuity}, the density update carries a minus sign.\n\n\n\\begin{algorithm}\n    $\\alpha_a \\leftarrow 2 \\cdot 10^{-5}$ \\;\n    $\\alpha_p \\leftarrow 1.5 \\cdot 10^{-6}$ \\;\n    \\While{\\texttt{TRUE}}{\n        $T_{add} \\leftarrow \\delta t \\left(\\alpha_a \\nabla^2(T_a) - \\nabla(T_a)\\right)$ \\;\n        $T_a \\leftarrow T_a + T_{add}[5:-5, :] \\text{ //Only add } T_{add} \\text{ to } T_a \\text{ for indices in the interval } [-nlat + 5, nlat - 5]$. \\;\n        $\\rho \\leftarrow \\rho - \\delta t \\nabla(\\rho)$ \\;\n        $T_p \\leftarrow T_p + \\delta t \\alpha_p \\nabla^2(T_p)$ \\;\n    }\n    \\caption{The main loop for calculating the effects of advection}\n    \\label{alg:advectionv2}\n\\end{algorithm}\n\nNow that we have a varying density, we need to account for it in the temperature equations. So let us do that. The density belongs in the denominator, as it has a direct effect on the \nheat capacity of the atmosphere. The changes are reflected in \\autoref{alg:temperature with density}. (The time counter $t$ is now advanced once per sweep over the grid, after the two loops.)\n\n\\begin{algorithm}[hbt]\n    \\SetAlgoLined\n\n    \\While{\\texttt{TRUE}}{\n        \\For{$lat \\in [-nlat, nlat]$}{\n            \\For{$lon \\in [0, nlon]$}{\n                $T_p[lat, lon] \\leftarrow T_p[lat, lon] + \\frac{\\delta t ((1 - a[lat, lon])S + 4\\epsilon \\sigma (T_a[lat, lon])^4 - 4\\sigma (T_p[lat, lon])^4)}{\\rho[lat, lon]C_p[lat, lon]}$ \\;\n                $T_a[lat, lon] \\leftarrow T_a[lat, lon] + \\frac{\\delta t (\\sigma (T_p[lat, lon])^4 - 2\\epsilon\\sigma (T_a[lat, lon])^4)}{\\rho[lat, lon]C_a}$ \\;\n            }\n        }\n        $t \\leftarrow t + \\delta t$ \\;\n    }\n    \\caption{The main loop of the temperature calculations}\n    \\label{alg:temperature with density}\n\\end{algorithm}\n\n\\subsection{Improving the Coriolis Parameter}\nAnother change introduced is in the Coriolis parameter. Up until now it has been a constant; however, we know that it varies with latitude. So let us make it vary with latitude. Recall \n\\autoref{eq:coriolis}, where $\\Theta$ is the latitude. 
The Coriolis parameter ($f$) is currently defined in \\autoref{alg:gradient}, so let us incorporate the changes, which are shown in \\autoref{alg:coriolis}.\n\n\\begin{algorithm}\n    \\SetAlgoLined\n    $C \\leftarrow 2\\pi R$ \\;\n    $\\delta y \\leftarrow \\frac{C}{nlat}$ \\;\n    $\\Omega \\leftarrow 7.2921 \\cdot 10^{-5}$ \\;\n\n    \\For{$lat \\in [-nlat, nlat]$}{\n        $\\delta x[lat] \\leftarrow \\delta y \\cos(lat \\cdot \\frac{\\pi}{180})$ \\;\n        $f[lat] \\leftarrow 2\\Omega \\sin(lat \\cdot \\frac{\\pi}{180})$ \\;\n    }\n    \\caption{Calculating the gradient $\\delta x$ and the Coriolis parameter $f$}\n    \\label{alg:coriolis}\n\\end{algorithm}", "meta": {"hexsha": "b29ab135892adc71f1f03736a108cf56d423e257", "size": 14363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex-docs/streams/Stream3.tex", "max_stars_repo_name": "balintf/claude", "max_stars_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex-docs/streams/Stream3.tex", "max_issues_repo_name": "balintf/claude", "max_issues_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex-docs/streams/Stream3.tex", "max_forks_repo_name": "balintf/claude", "max_forks_repo_head_hexsha": "a3ebf0605ca26c4aadd0273f6b70813bdf931c9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.9962406015, "max_line_length": 195, "alphanum_fraction": 0.6606558518, "num_tokens": 4517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424373085146, "lm_q2_score": 0.6548947425132315, "lm_q1q2_score": 0.5540032546621751}}
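To make the gradient and Laplacian pseudocode above concrete, here is a minimal NumPy sketch (ours, not the CLAuDE source) of the gradient helpers and one explicit diffusion step on an nlat-by-nlon grid, with periodic wrap-around in longitude; the grid sizes, the time step and the diffusivity value are illustrative assumptions:

\begin{verbatim}
import numpy as np

nlat, nlon = 91, 180
dy = 2 * np.pi * 6.4e6 / nlat                  # circumference / nlat, in metres
lats = np.linspace(-90, 90, nlat)
dx = np.maximum(dy * np.cos(np.radians(lats)), 1.0)   # guard against 0 at poles

def gradient_x(a, i, j):
    # central difference with periodic wrap-around in longitude
    return (a[i, (j + 1) % nlon] - a[i, (j - 1) % nlon]) / dx[i]

def gradient_y(a, i, j):
    # zero gradient at the polar boundary rows
    if i == 0 or i == nlat - 1:
        return 0.0
    return (a[i + 1, j] - a[i - 1, j]) / dy

def laplacian(a):
    out = np.zeros_like(a)
    for i in range(1, nlat - 1):
        for j in range(nlon):
            out[i, j] = ((gradient_x(a, i, (j + 1) % nlon)
                          - gradient_x(a, i, (j - 1) % nlon)) / dx[i]
                         + (gradient_y(a, i + 1, j)
                            - gradient_y(a, i - 1, j)) / dy)
    return out

# one explicit diffusion step for the atmospheric temperature field
T_a = 270.0 + 30.0 * np.random.rand(nlat, nlon)
dt, alpha_a = 540.0, 2e-5
T_a += dt * alpha_a * laplacian(T_a)
\end{verbatim}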
{"text": "\\section{Fitting strategy}\n\\label{sec:fit}\n\n\\paragraph{} The estimation of CP-mixing parameter $\\tilde{d}$ uses a maximum-likelihood fit performed on $m_{\\gamma\\gamma}$ distribution simultaneously in all 6 OO bins. The likelihood function could be constructed as:\n\n\\begin{center}\n\\begin{math}\n\\mathcal{L} = \\mathcal{L}(\\boldsymbol{x} |\\boldsymbol{\\theta}) = \\prod_{bin}\\prod_{j=1}^{N}f_{bin}(m_{\\gamma\\gamma})G(\\theta)\n\\end{math}\n\\end{center}\n\n\nWhere $\\boldsymbol x$ represents dataset, $\\boldsymbol \\theta$ is nuisance parameter, $f_{bin}(m_{\\gamma\\gamma})$ is the diphoton invariant mass distribution model for each bins, $G(\\boldsymbol \\theta)$ is constrain function for systematic uncertainties. The parameter of interest(POI), $\\tilde{d}$, is embedded in $m_{\\gamma\\gamma}$ model, so there is no analytic relation between likelihood value and POI. A set of signal templates corresponding to different value of CP-mixing parameter $\\tilde{d}$ is created by reweighting the SM VBF $H\\to\\gamma\\gamma$ to build the CP-mixing models, and then a template fit could be performed with data and model with different $\\tilde{d}$ value (background model keeps consistent) to evaluate the likelihood function. For the fit in each $\\tilde{d}$ model, VBF signal strength, continuum background yield and parameters describing the background model are float, which means the effect of background mis-modelling is considered and any constrain from possible model-dependent cross section information is not exploited. Other nuisance parameters are fixed to their best-fit values $\\hat{\\boldsymbol \\theta}$. \n\n\n\\paragraph{} A negative log-likelihood (NLL) curve can be constructed by calculating the NLL value for each $\\tilde{d}$ hypothesis. Best-estimated $\\tilde{d}$ as well as its central confidence interval at 68\\% (95\\%) confidence level (CL) can be determined with the minimum value of NLL, $NLL_{min}$ and the points at which $\\Delta NLL = NLL-NLL_{min} = 0.5(1.96)$. An Asimov dataset is used to get expected sensitivity from this method and is shown in Figure ~\\ref{fig:NLLcurve}. \n\n\n\\paragraph{} Since the Optimal Observable is the main sensitive variable for CP test in this analysis, another 2D model for OO and $m_\\gamma\\gamma$ was considered to construct the likelihood function. In some preliminary study this 2D model did not show significant improvement with baseline method, and had some difficulty in modelling, it was obsoleted in this analysis. Appendix ~\\ref{appendix:2Dmodel} includes some results based on this. 
\n\n", "meta": {"hexsha": "42ab51984315e0180efb4ea0ac42a4a67994d35c", "size": 2526, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/Fitting.tex", "max_stars_repo_name": "phreborn/vbfcp_INT", "max_stars_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/Fitting.tex", "max_issues_repo_name": "phreborn/vbfcp_INT", "max_issues_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/Fitting.tex", "max_forks_repo_name": "phreborn/vbfcp_INT", "max_forks_repo_head_hexsha": "904cd56e96a3489887bb9e808d28f6dae4d7f058", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.2857142857, "max_line_length": 1149, "alphanum_fraction": 0.7731591449, "num_tokens": 629, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.84594244507642, "lm_q2_score": 0.6548947357776795, "lm_q1q2_score": 0.5540032540514462}}
{"text": "\\section{Three-state promoter model for simple repression}\\label{supp_model}\n\nIn order to tackle the question of how much information the simple repression\nmotif can process we require the joint probability distribution of mRNA and\nprotein $P(m, p; t)$. To obtain this distribution we use the chemical master\nequation formalism as described in \\secref{sec_model}. Specifically, we assume\na three-state model where the promoter can be found 1) in a transcriptionally\nactive state  ($A$ state), 2) in a transcriptionally inactive state without the\nrepressor bound ($I$ state) and 3) with the repressor bound ($R$ state). (See\n\\fref{fig2_minimal_model}(A)). These three states generate a system of coupled\ndifferential equations for each of the three state distributions $P_A(m, p)$,\n$P_I(m, p)$ and $P_R(m, p)$. Given the rates shown in\n\\fref{fig2_minimal_model}(A) let us define the system of ODEs. For the\ntranscriptionally active state we have\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{P_A(m, p)} &=\n    - \\overbrace{\\kpoff P_A(m, p)}^{A \\rightarrow I} % A -> I\n    + \\overbrace{\\kpon P_I(m, p)}^{I \\rightarrow A}\\\\ % I -> A\n    &+ \\overbrace{r_m P_A(m-1, p)}^{m-1 \\rightarrow m} % m-1 -> m\n    - \\overbrace{r_m P_A(m, p)}^{m \\rightarrow m+1}% m -> m+1\n    + \\overbrace{\\gm (m + 1) P_A(m+1 , p)}^{m+1 \\rightarrow m} % m+1 -> m\n    - \\overbrace{\\gm m P_A(m , p)}^{m \\rightarrow m-1}\\\\ % m -> m-1\n    &+ \\overbrace{r_p m P_A(m, p - 1)}^{p-1 \\rightarrow p} % p-1 -> p\n    - \\overbrace{r_p m P_A(m, p)}^{p \\rightarrow p+1} % p -> p+1\n    + \\overbrace{\\gp (p + 1) P_A(m, p + 1)}^{p + 1 \\rightarrow p} % p+1 -> p\n    - \\overbrace{\\gp p P_A(m, p)}^{p \\rightarrow p-1}. % p -> p-1\n  \\end{aligned}\n\\end{equation}\nFor the inactive promoter state $I$ we have\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{P_I(m, p)} &=\n    \\overbrace{\\kpoff P_A(m, p)}^{A \\rightarrow I} % A -> I\n    - \\overbrace{\\kpon P_I(m, p)}^{I \\rightarrow A} % I -> A\n    + \\overbrace{\\kroff P_R(m, p)}^{R \\rightarrow I} % R -> I\n    - \\overbrace{\\kron P_I(m, p)}^{I \\rightarrow R}\\\\ % I -> R\n    &+ \\overbrace{\\gm (m + 1) P_I(m+1 , p)}^{m+1 \\rightarrow m} % m+1 -> m\n    - \\overbrace{\\gm m P_I(m , p)}^{m \\rightarrow m-1}\\\\ % m -> m-1\n    &+ \\overbrace{r_p m P_I(m, p - 1)}^{p-1 \\rightarrow p} % p-1 -> p\n    - \\overbrace{r_p m P_I(m, p)}^{p \\rightarrow p+1} % p -> p+1\n    + \\overbrace{\\gp (p + 1) P_I(m, p + 1)}^{p + 1 \\rightarrow p} % p+1 -> p\n    - \\overbrace{\\gp p P_I(m, p)}^{p \\rightarrow p-1}. % p -> p-1\n  \\end{aligned}\n\\end{equation}\nAnd finally for the repressor bound state $R$ we have\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{P_R(m, p)} &=\n    - \\overbrace{\\kroff P_R(m, p)}^{R \\rightarrow I} % R -> I\n    + \\overbrace{\\kron P_I(m, p)}^{I \\rightarrow R}\\\\ % I -> R\n    &+ \\overbrace{\\gm (m + 1) P_R(m+1 , p)}^{m+1 \\rightarrow m} % m+1 -> m\n    - \\overbrace{\\gm m P_R(m , p)}^{m \\rightarrow m-1}\\\\ % m -> m-1\n    &+ \\overbrace{r_p m P_R(m, p - 1)}^{p-1 \\rightarrow p} % p-1 -> p\n    - \\overbrace{r_p m P_R(m, p)}^{p \\rightarrow p+1} % p -> p+1\n    + \\overbrace{\\gp (p + 1) P_R(m, p + 1)}^{p + 1 \\rightarrow p} % p+1 -> p\n    - \\overbrace{\\gp p P_R(m, p)}^{p \\rightarrow p-1}. % p -> p-1\n  \\end{aligned}\n\\end{equation}\n\nFor an unregulated promoter, i.e. 
a promoter in a cell that has no repressors\npresent, and therefore constitutively expresses the gene, we use a two-state\nmodel in which the state $R$ is not allowed. All the terms in the system of ODEs\ncontaining $\\kron$ or $\\kroff$ are then set to zero.\n\nAs detailed in \\secref{sec_model} it is convenient to express this system using\nmatrix notation \\cite{Sanchez2013}. For this we define $\\PP(m, p) = (P_A(m, p),\nP_I(m, p), P_R(m, p))^T$. Then the system of ODEs can be expressed as\n\\begin{equation}\n  \\begin{aligned}\n    \\dt{\\PP(m, p)} &= \\Km \\PP(m, p)\n    - \\Rm \\PP(m, p) + \\Rm \\PP(m-1, p)\n    - m \\Gm \\PP(m, p) + (m + 1) \\Gm \\PP(m + 1, p)\\\\\n    &- m \\Rp \\PP(m, p) + m \\Rp \\PP(m, p - 1)\n    - p \\Gp \\PP(m, p) + (p + 1) \\Gp \\PP(m, p + 1),\n  \\end{aligned}\n\\end{equation}\nwhere we defined matrices representing the promoter state transition $\\Km$,\n\\begin{align}\n  \\Km \\equiv\n  \\begin{bmatrix}\n    -\\kpoff   & \\kpon         & 0\\\\\n    \\kpoff    & -\\kpon -\\kron  & \\kroff\\\\\n    0         & \\kron         & -\\kroff\n  \\end{bmatrix},\n\\end{align}\nmRNA production, $\\Rm$, and degradation, $\\Gm$, as\n\\begin{equation}\n  \\Rm \\equiv\n  \\begin{bmatrix}\n    r_m   & 0 & 0\\\\\n    0     & 0 & 0\\\\\n    0     & 0 & 0\\\\\n  \\end{bmatrix},\n\\end{equation}\nand\n\\begin{equation}\n  \\Gm \\equiv\n  \\begin{bmatrix}\n    \\gm   & 0   & 0\\\\\n    0     & \\gm & 0\\\\\n    0     & 0   & \\gm\\\\\n  \\end{bmatrix}.\n\\end{equation}\nFor the protein we also define production $\\Rp$ and degradation $\\Gp$ matrices\nas\n\\begin{equation}\n  \\Rp \\equiv\n  \\begin{bmatrix}\n    r_p   & 0   & 0\\\\\n    0     & r_p & 0\\\\\n    0     & 0   & r_p\\\\\n  \\end{bmatrix}\n\\end{equation}\nand\n\\begin{equation}\n  \\Gp \\equiv\n  \\begin{bmatrix}\n    \\gp   & 0   & 0\\\\\n    0     & \\gp & 0\\\\\n    0     & 0   & \\gp\\\\\n  \\end{bmatrix}.\n\\end{equation}\n\nThe corresponding equation for the unregulated two-state promoter takes the\nexact same form with the definition of the matrices following the same scheme\nwithout including the third row and third column, and setting $\\kron$ and\n$\\kroff$ to zero.\n\nA closed-form solution for this master equation might not even exist. The\napproximate solution of chemical master equations of this kind is an active\narea of research. As we will see in \\siref{supp_param_inference} the two-state\npromoter master equation has been analytically solved for the mRNA\n\\cite{Peccoud1995} and protein distributions \\cite{Shahrezaei2008}. For our\npurposes, in \\siref{supp_maxent} we will detail how to use the Maximum Entropy\nprinciple to approximate the full distribution for the two- and three-state\npromoter.\n\n\\section{Parameter inference}\\label{supp_param_inference}\n\n(Note: The Python code used for the calculations presented in this section can\nbe found in the\n\\href{https://www.rpgroup.caltech.edu//chann_cap/software/chemical_master_mRNA_FISH_mcmc.html}{following\nlink} as an annotated Jupyter notebook)\n\nWith the objective of generating falsifiable predictions with meaningful\nparameters, we infer the kinetic rates for this three-state promoter model\nusing different data sets generated in our lab over the last decade concerning\ndifferent aspects of the regulation of the simple repression motif. 
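As a concrete companion to the matrix notation defined above, the following minimal sketch (the rate values are placeholders, not inferred quantities) assembles the promoter state transition matrix $\\Km$ and checks that its columns sum to zero, as required of a rate matrix:\n\\begin{verbatim}\nimport numpy as np\n\n# placeholder rates in units of gamma_m; the actual values are inferred below\nkp_on, kp_off = 1.0, 2.0\nkr_on, kr_off = 3.0, 0.5\n\n# transition matrix K for the promoter states (A, I, R)\nK = np.array([[-kp_off,          kp_on,     0.0],\n              [ kp_off, -kp_on - kr_on,  kr_off],\n              [    0.0,          kr_on, -kr_off]])\n\nprint(K.sum(axis=0))  # [0. 0. 0.]: probability flux is conserved\n\\end{verbatim}\n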
For\nexample, for the unregulated promoter transition rates $\\kpon$ and $\\kpoff$ and\nthe mRNA production rate $r_m$, we use single-molecule mRNA FISH counts from an\nunregulated promoter \\cite{Jones2014a}. Once these parameters are fixed, we use\nthe values to constrain the repressor rates $\\kron$ and $\\kroff$. These\nrepressor rates are obtained using information from mean gene expression\nmeasurements from bulk LacZ colorimetric assays \\cite{Garcia2011c}. We also\nexpand our model to include the allosteric nature of the repressor protein,\ntaking advantage of video microscopy measurements done in the context of\nmultiple promoter copies \\cite{Brewster2014} and flow-cytometry measurements of\nthe mean response of the system to different levels of induction\n\\cite{Razo-Mejia2018}. In the remainder of this section we detail the steps\ntaken to infer the parameter values. At each step the values of the parameters\ninferred in previous steps constrain the values of the parameters that are not\nyet determined, building in this way a self-consistent model informed by work\nthat spans several experimental techniques.\n\n\\subsection{Unregulated promoter rates}\n\nWe begin our parameter inference problem with the promoter on and off rates\n$\\kpon$ and $\\kpoff$, as well as the mRNA production rate $r_m$. In this case\nthere are only two states available to the promoter -- the inactive state $I$\nand the transcriptionally active state $A$. That means that the third ODE for\n$P_R(m, p)$ is removed from the system. The mRNA steady state distribution for\nthis particular two-state promoter model was solved analytically by Peccoud and\nYcart \\cite{Peccoud1995}. This distribution $P(m) \\equiv P_I(m) + P_A(m)$ is of\nthe form\n\\begin{equation}\n  \\small\n  P(m \\mid \\kpon, \\kpoff, r_m, \\gm) =\n  {\\Gamma \\left( \\frac{\\kpon}{\\gm} + m \\right) \\over\n  \\Gamma(m + 1) \\Gamma\\left( \\frac{\\kpoff+\\kpon}{\\gm} + m \\right)}\n  {\\Gamma\\left( \\frac{\\kpoff+\\kpon}{\\gm} \\right) \\over\n  \\Gamma\\left( \\frac{\\kpon}{\\gm} \\right)}\n  \\left( {r_m \\over \\gm} \\right)^m\n  F_1^1 \\left( {\\kpon \\over \\gm} + m,\n  {\\kpoff + \\kpon \\over \\gm} + m,\n  -{r_m \\over \\gm} \\right),\n  \\label{seq_two_state_mRNA}\n\\end{equation}\nwhere $\\Gamma(\\cdot)$ is the gamma function, and $F_1^1$ is the confluent\nhypergeometric function of the first kind. This rather complicated expression\nwill aid us in finding parameter values for the rates. The inferred rates $\\kpon$,\n$\\kpoff$ and $r_m$ will be expressed in units of the mRNA degradation rate\n$\\gm$. This is because the model in \\eref{seq_two_state_mRNA} is homogeneous in\ntime, meaning that dividing all rates by a constant is equivalent\nto multiplying the characteristic time scale by the same constant. As we will\ndiscuss in the next section, \\eref{seq_two_state_mRNA} has degeneracy in the\nparameter values. What this means is that a change in one of the parameters,\nspecifically $r_m$, can be compensated by a change in another parameter,\nspecifically $\\kpoff$, to obtain the exact same distribution. To work around\nthis intrinsic limitation of the model we will include in our inference prior\ninformation from what we know from equilibrium-based models.\n\n\\subsubsection*{Bayesian parameter inference of RNAP rates}\n\nIn order to make progress in inferring the unregulated promoter state\ntransition rates, we make use of the single-molecule mRNA FISH data from Jones\net al. \\cite{Jones2014a}. 
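Since \\eref{seq_two_state_mRNA} is the likelihood that we will evaluate repeatedly in what follows, a minimal numerical sketch of it in log space may be useful (assuming \\texttt{scipy} is available; the rate values used here are the modes inferred later in this section and are quoted purely for illustration):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import gammaln, hyp1f1\n\ndef two_state_log_pmf(m, kp_on, kp_off, r_m):\n    # log of the two-state (Peccoud-Ycart) mRNA distribution,\n    # with all rates expressed in units of gamma_m = 1\n    return (gammaln(kp_on + m) - gammaln(m + 1)\n            - gammaln(kp_on + kp_off + m)\n            + gammaln(kp_on + kp_off) - gammaln(kp_on)\n            + m * np.log(r_m)\n            + np.log(hyp1f1(kp_on + m, kp_on + kp_off + m, -r_m)))\n\nm = np.arange(0, 200)\npmf = np.exp(two_state_log_pmf(m, 4.3, 18.8, 103.8))\nprint(pmf.sum())  # should be close to 1\n\\end{verbatim}\n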
\\fref{sfig_lacUV5_FISH} shows the distribution of mRNA per cell\nfor the \\textit{lacUV5} promoter used for our inference. This\npromoter, being very strong, has a mean copy number of $\\ee{m} \\approx 18$\nmRNA/cell.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS01.pdf}\n\t\\caption{\\textbf{\\textit{lacUV5} mRNA per cell distribution.} Data from\n\t\\cite{Jones2014a} of the unregulated \\textit{lacUV5} promoter as inferred\n\tfrom single molecule mRNA FISH.}\n  \\label{sfig_lacUV5_FISH}\n\\end{figure}\n\nHaving this data in hand we now turn to Bayesian parameter inference. Writing\nBayes theorem we have\n\\begin{equation}\n  P(\\kpon, \\kpoff, r_m \\mid D) = {P(D \\mid \\kpon, \\kpoff, r_m)\n  P(\\kpon, \\kpoff, r_m) \\over P(D)},\n  \\label{seq_bayes_rnap_rates}\n\\end{equation}\nwhere $D$ represents the data. For this case the data consists of single-cell\nmRNA counts $D = \\{ m_1, m_2, \\ldots, m_N \\}$, where $N$ is the number of\ncells. We assume that each cell's measurement is independent of the others such\nthat we can rewrite \\eref{seq_bayes_rnap_rates} as\n\\begin{equation}\n  P(\\kpon, \\kpoff, r_m \\mid \\{m_i\\}) \\propto\n  \\left[\\prod_{i=1}^N P(m_i \\mid \\kpon, \\kpoff, r_m) \\right]\n  P(\\kpon, \\kpoff, r_m),\n  \\label{seq_bayes_sample}\n\\end{equation}\nwhere we ignore the normalization constant $P(D)$. The likelihood term $P(m_i\n\\mid \\kpon, \\kpoff, r_m)$ is exactly given by \\eref{seq_two_state_mRNA} with\n$\\gm = 1$. Given that we have this functional form for the distribution, we can\nuse Markov Chain Monte Carlo (MCMC) sampling to explore the 3D parameter space\nin order to fit \\eref{seq_two_state_mRNA} to the mRNA-FISH data.\n\n\\subsubsection*{Constraining the rates given prior thermodynamic knowledge.}\n\nOne of the strengths of the Bayesian approach is that we can include all the\nprior knowledge on the parameters when performing an inference\n\\cite{MacKay2003}. Basic features such as the fact that the rates have to be\nstrictly positive constrain the values that these parameters can take. For\nthe specific rates analyzed in this section we know more than the simple\nconstraint of non-negative values. The expression of an unregulated promoter has\nbeen studied from a thermodynamic perspective \\cite{Brewster2012}. Given the\nunderlying assumptions of these equilibrium models, in which the probability of\nfinding the RNAP bound to the promoter is proportional to the transcription\nrate \\cite{Bintu2005a}, they can only make statements about the mean expression\nlevel. Nevertheless if both the thermodynamic and the kinetic model describe\nthe same process, the predictions for the mean gene expression level must\nagree. That means that we can use what we know about the mean gene expression,\nand how this is related to parameters such as molecule copy numbers and binding\naffinities, to constrain the values that the rates in question can take.\n\nIn the case of this two-state promoter it can be shown that the mean number of\nmRNA is given by \\cite{Sanchez2013} (See \\siref{supp_moments} for moment\ncomputation)\n\\begin{equation}\n  \\ee{m} = {r_m \\over \\gm} {\\kpon \\over \\kpon + \\kpoff}.\n  \\label{seq_mean_kinetic}\n\\end{equation}\nAnother way of expressing this is as ${r_m \\over \\gm} \\times\np_{\\text{active}}^{(p)}$, where $p_{\\text{active}}^{(p)}$ is the probability of\nthe promoter being in the transcriptionally active state. 
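As a quick numerical sanity check of \\eref{seq_mean_kinetic} (this snippet assumes the \\texttt{two\\_state\\_log\\_pmf} function from the sketch above is in scope):\n\\begin{verbatim}\nimport numpy as np\n\nm = np.arange(0, 200)\npmf = np.exp(two_state_log_pmf(m, 4.3, 18.8, 103.8))\nmean_numeric = (m * pmf).sum()\n# (r_m / gamma_m) * k_on / (k_on + k_off)\nmean_formula = 103.8 * 4.3 / (4.3 + 18.8)\nprint(mean_numeric, mean_formula)  # both approximately 19.3\n\\end{verbatim}\n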
The thermodynamic\npicture has an equivalent result where the mean number of mRNA is given by\n\\cite{Brewster2012, Bintu2005a}\n\\begin{equation}\n  \\left\\langle m \\right\\rangle = {r_m \\over \\gm}\n  {{P \\over \\Nns} e^{-\\beta\\eP}\n  \\over 1 + {P \\over \\Nns} e^{-\\beta\\eP}},\n  \\label{seq_mean_thermo}\n\\end{equation}\nwhere $P$ is the number of RNAP per cell, $\\Nns$ is the number of non-specific\nbinding sites, $\\eP$ is the RNAP binding energy in $k_BT$ units and $\\beta\\equiv\n{(k_BT)}^{-1}$. Using \\eref{seq_mean_kinetic} and \\eref{seq_mean_thermo} we can\neasily see that if these frameworks are to be equivalent, then it must be true\nthat\n\\begin{equation}\n  {\\kpon \\over \\kpoff} = {P \\over \\Nns} e^{-\\beta\\eP},\n\\end{equation}\nor equivalently\n\\begin{equation}\n  \\ln \\left({\\kpon \\over \\kpoff}\\right) =\n  -\\beta\\eP + \\ln P - \\ln \\Nns.\n\\end{equation}\nTo put numerical values into these variables we can use information from the\nliterature. The RNAP copy number is of order $P \\approx 1000-3000$ RNAP/cell for a\n1 hour doubling time \\cite{Klumpp2008}. As for the number of non-specific\nbinding sites and the binding energy, we have that $\\Nns = 4.6\\times 10^6$\n\\cite{Bintu2005a} and $-\\beta\\eP \\approx 5 - 7$ \\cite{Brewster2012}. Given\nthese values we define a Gaussian prior for the log ratio of these two\nquantities of the form\n\\begin{equation}\n  P\\left(\\ln \\left({\\kpon \\over \\kpoff}\\right) \\right) \\propto\n  \\exp \\left\\{ - {\\left(\\ln \\left({\\kpon \\over \\kpoff}\\right) -\n  \\left(-\\beta\\eP + \\ln P - \\ln \\Nns \\right) \\right)^2\n  \\over 2 \\sigma^2} \\right\\},\n  \\label{seq_prior_single}\n\\end{equation}\nwhere $\\sigma^2$ is the variance that accounts for the uncertainty in these\nparameters. We include this prior as part of the prior term $P(\\kpon, \\kpoff,\nr_m)$ of \\eref{seq_bayes_sample}. We then use MCMC to sample out of the\nposterior distribution given by \\eref{seq_bayes_sample}. \\fref{sfig_mcmc_rnap}\nshows the MCMC samples of the posterior distribution. For the case of the\n$\\kpon$ parameter there is a single symmetric peak. $\\kpoff$ and $r_m$ have\nrather long tails towards large values. In fact, the 2D projection of $\\kpoff$ vs\n$r_m$ shows that the model is sloppy, meaning that the two parameters are highly\ncorrelated. This feature is a common problem for many non-linear systems used in\nbiophysics and systems biology \\cite{Transtrum2015}. What this implies is that\nwe can change the value of $\\kpoff$, and then compensate by a change in $r_m$ in\norder to maintain the shape of the mRNA distribution. Therefore it is impossible\nfrom the data and the model themselves to narrow down a single value for the\nparameters. Nevertheless since we included the prior information on the rates as\ngiven by the analogous form between the equilibrium and non-equilibrium\nexpressions for the mean mRNA level, we obtained more constrained values\nfor the RNAP rates and the transcription rate, which we take as the\nmodes of these long-tailed distributions.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS02.pdf}\n\t\\caption{\\textbf{MCMC posterior distribution.} Sampling out of\n\t\\eref{seq_bayes_sample} the plot shows 2D and 1D projections of the 3D\n\tparameter space. 
The parameter values are (in units of the mRNA degradation\n\trate $\\gm$) $\\kpon = 4.3^{+1}_{-0.3}$, $\\kpoff = 18.8^{+120}_{-10}$ and $r_m =\n\t103.8^{+423}_{-37}$ which are the modes of their respective distributions,\n\twhere the superscripts and subscripts represent the upper and lower bounds of\n\tthe 95$^\\text{th}$ percentile of the parameter value distributions.}\n  \\label{sfig_mcmc_rnap}\n\\end{figure}\n\nThe inferred values $\\kpon = 4.3^{+1}_{-0.3}$, $\\kpoff = 18.8^{+120}_{-10}$\nand $r_m = 103.8^{+423}_{-37}$ are given in units of the mRNA degradation\nrate $\\gm$. Given the asymmetry of the parameter distributions we report the\nupper and lower bound of the 95$^\\text{th}$ percentile of the posterior\ndistributions. Assuming a mean lifetime for mRNA of $\\approx$ 3 min (from\nthis\n\\href{http://bionumbers.hms.harvard.edu/bionumber.aspx?&id=107514&ver=1&trm=mRNA%20mean%20lifetime}{link})\nwe have an mRNA degradation rate of $\\gm \\approx 2.84 \\times 10^{-3} s^{-1}$.\nUsing this value, the inferred rates become: $\\kpon =\n0.024_{-0.002}^{+0.005} s^{-1}$, $\\kpoff = {0.11}_{-0.05}^{+0.66} s^{-1}$, and\n$r_m = 0.3_{-0.2}^{+2.3} s^{-1}$.\n\n\\fref{sfig_lacUV5_theory_data} compares the experimental data from\n\\fref{sfig_lacUV5_FISH} with the resulting distribution obtained by substituting\nthe most likely parameter values into \\eref{seq_two_state_mRNA}. As we can see\nthis two-state model fits the data adequately.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics[width=0.5\\columnwidth]\n  {../fig/si/figS03.pdf}\n\t\\caption{\\textbf{Experimental vs. theoretical distribution of mRNA per cell\n  using parameters from Bayesian inference.} Dotted line shows the result of\n  using \\eref{seq_two_state_mRNA} along with the parameters inferred for the\n  rates. Blue bars are the same data as \\fref{sfig_lacUV5_FISH} obtained from\n  \\cite{Jones2014a}.}\n  \\label{sfig_lacUV5_theory_data}\n\\end{figure}\n\n\\subsection{Accounting for variability in the number of promoters}\n\nAs discussed in ref. \\cite{Jones2014a} and further expanded in\n\\cite{Peterson2015}, an important source of cell-to-cell variability in gene\nexpression in bacteria is the fact that, depending on the growth rate and the\nposition relative to the chromosome replication origin, cells can have multiple\ncopies of any given gene. Genes closer to the replication origin have on\naverage higher gene copy number compared to genes at the opposite end. For the\nlocus in which our reporter construct is located (\\textit{galK}) and the\ndoubling time of the mRNA FISH experiments we expect to have $\\approx$ 1.66\ncopies of the gene \\cite{Jones2014a, Bremer1996}. This implies that the cells\nspend 2/3 of the cell cycle with two copies of the promoter and the rest with a\nsingle copy.\n\nTo account for this variability in gene copy number we extend the model assuming that\nwhen cells have two copies of the promoter the mRNA production rate is $2 r_m$\ncompared to the rate $r_m$ for a single promoter copy. The probability of\nobserving a certain mRNA copy number $m$ is therefore given by\n\\begin{equation}\n  P(m) = P(m \\mid \\text{one promoter}) \\cdot P(\\text{one promoter}) +\n  P(m \\mid \\text{two promoters}) \\cdot P(\\text{two promoters}).\n  \\label{seq_prob_multipromoter}\n\\end{equation}\nBoth terms $P(m \\mid \\text{promoter copy})$ are given by\n\\eref{seq_two_state_mRNA} with the only difference being the rate $r_m$. 
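A minimal sketch of \\eref{seq_prob_multipromoter} (again assuming the \\texttt{two\\_state\\_log\\_pmf} helper from above is in scope; the mixture weight and the rate values are placeholders that the inference below refines) is:\n\\begin{verbatim}\nimport numpy as np\n\n# rates in units of gamma_m; values are placeholders\nkp_on, kp_off, r_m = 6.4, 132.0, 257.0\np_one = 0.5  # placeholder probability of sampling a one-promoter cell\n\nm = np.arange(0, 400)\npmf_one = np.exp(two_state_log_pmf(m, kp_on, kp_off, r_m))\npmf_two = np.exp(two_state_log_pmf(m, kp_on, kp_off, 2.0 * r_m))\npmf_mix = p_one * pmf_one + (1.0 - p_one) * pmf_two\nprint(pmf_mix.sum())  # should be close to 1\n\\end{verbatim}\n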
It is\nimportant to acknowledge that \\eref{seq_prob_multipromoter} assumes that once\nthe gene is replicated the time scale in which the mRNA count relaxes to the\nnew steady state is much shorter than the time that the cells spend in the\ntwo-promoter state. This approximation should be valid for a short-lived\nmRNA molecule, but the assumption is not applicable for proteins whose\ndegradation rate is comparable to the cell cycle length as explored in\n\\secref{sec_cell_cycle}.\n\nIn order to repeat the Bayesian inference including this variability in gene\ncopy number we must split the mRNA count data into two sets -- cells with a\nsingle copy of the promoter and cells with two copies of the promoter. For the\nsingle molecule mRNA FISH data there is no labeling of the locus, making it\nimpossible to determine the number of copies of the promoter for any given\ncell. We therefore follow Jones et al. \\cite{Jones2014a} in using the cell area\nas a proxy for stage in the cell cycle. In their approach they sorted cells by\narea, considering cells below the 33$\\th$ percentile as having a single promoter\ncopy and the rest as having two copies. This approach ignores that cells are\nnot uniformly distributed along the cell cycle. As first derived in\n\\cite{Powell1956}, populations of cells in log-phase are exponentially\ndistributed along the cell cycle. This distribution is of the form\n\\begin{equation}\nP(a) = (\\ln 2) \\cdot 2^{1 - a},\n\\label{seq_cell_cycle_dist}\n\\end{equation}\nwhere $a \\in [0, 1]$ is the stage of the cell cycle, with $a = 0$ being the\nstart of the cycle and $a = 1$ being cell division (See\n\\siref{supp_cell_age_dist} for a derivation of \\eref{seq_cell_cycle_dist}).\n\\fref{sfig_cell_area} shows the separation of the two groups based on area\nwhere \\eref{seq_cell_cycle_dist} was used to weight the distribution along the\ncell cycle.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS04.pdf}\n\t\\caption{\\textbf{Separation of cells based on cell size.} Using the area as\n  a proxy for position in the cell cycle, cells can be sorted into two groups --\n  small cells (with one promoter copy) and large cells (with two promoter\n  copies). The vertical black line delimits the threshold that divides both\n  groups as weighted by \\eref{seq_cell_cycle_dist}.}\n  \\label{sfig_cell_area}\n\\end{figure}\n\nA subtle but important consequence of \\eref{seq_cell_cycle_dist} is that\ncomputing any quantity for a single cell is not equivalent to computing the same\nquantity for a population of cells. For example, let us assume that we want to\ncompute the mean mRNA copy number $\\ee{m}$. For a single cell this would be of\nthe form\n\\begin{equation}\n  \\ee{m}_{\\text{cell}} = \\ee{m}_1 \\cdot f + \\ee{m}_2 \\cdot (1 - f),\n\\end{equation}\nwhere $\\ee{m}_i$ is the mean mRNA copy number with $i$ promoter copies in the\ncell, and $f$ is the fraction of the cell cycle that cells spend with a single\ncopy of the promoter. For a single cell the probability of having a single\npromoter copy is equivalent to this fraction $f$. But \\eref{seq_cell_cycle_dist}\ntells us that if we sample unsynchronized cells we are not sampling uniformly\nacross the cell cycle. 
Therefore for a population of cells the mean mRNA is\ngiven by\n\\begin{equation}\n  \\ee{m}_{\\text{population}} = \\ee{m}_1 \\cdot \\phi + \\ee{m}_2 \\cdot (1 - \\phi),\n  \\label{seq_mean_m_pop}\n\\end{equation}\nwhere the probability of sampling a cell with one promoter $\\phi$ is given by\n\\begin{equation}\n  \\phi = \\int_0^f P(a) da,\n\\end{equation}\nwhere $P(a)$ is given by \\eref{seq_cell_cycle_dist}. What this equation computes is\nthe probability of sampling a cell during a stage of the cell cycle $< f$ where\nthe reporter gene has not been replicated yet. \\fref{sfig_mRNA_by_size} shows the\ndistribution of both groups. As expected, larger cells have a higher mRNA copy\nnumber on average.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS05.pdf}\n\t\\caption{\\textbf{mRNA distribution for small and large cells.} (A) histogram\n\tand (B) cumulative distribution function of the small and large cells as\n\tdetermined in \\fref{sfig_cell_area}. The triangles above histograms in (A)\n\tindicate the mean mRNA copy number for each group.}\n  \\label{sfig_mRNA_by_size}\n\\end{figure}\n\nWe modify \\eref{seq_bayes_sample} to account for the two separate groups of\ncells. Let $N_s$ be the number of cells in the small size group and $N_l$ the\nnumber of cells in the large size group. Then the posterior distribution for the\nparameters is of the form\n\\begin{equation}\n  \\small\nP(\\kpon, \\kpoff, r_m \\mid \\{m_i\\}) \\propto\n  \\left[\\prod_{i=1}^{N_s} P(m_i \\mid \\kpon, \\kpoff, r_m)\\right]\n  \\left[\\prod_{j=1}^{N_l} P(m_j \\mid \\kpon, \\kpoff, 2 r_m)\\right]\n  P(\\kpon, \\kpoff, r_m),\n  \\label{seq_bayes_sample_double}\n\\end{equation}\nwhere we have split the product over small and large cells.\n\nFor the two-promoter model the prior shown in \\eref{seq_prior_single} requires a\nsmall modification. \\eref{seq_mean_m_pop} gives the mean mRNA copy number of a\npopulation of asynchronous cells growing at steady state. Given that we assume\nthat the only difference between having one vs. two promoter copies is the\nchange in transcription rate from $r_m$ in the single-promoter case to $2 r_m$\nin the two-promoter case, we can\nwrite \\eref{seq_mean_m_pop} as\n\\begin{equation}\n  \\ee{m} = \\phi \\cdot {r_m \\over \\gm} {\\kpon \\over \\kpon + \\kpoff} +\n      (1 -\\phi) \\cdot {2 r_m \\over \\gm} {\\kpon \\over \\kpon + \\kpoff}.\n\\end{equation}\nThis can be simplified to\n\\begin{equation}\n  \\ee{m} = (2 - \\phi) {r_m \\over \\gm} {\\kpon \\over \\kpon + \\kpoff}.\n  \\label{seq_mean_m_double_rates}\n\\end{equation}\n\nEquating \\eref{seq_mean_m_double_rates} and \\eref{seq_mean_thermo} to again\nrequire self-consistent predictions of the mean mRNA from the equilibrium and\nkinetic models gives\n\\begin{equation}\n  (2 - \\phi) {\\kpon \\over \\kpon + \\kpoff} =\n  {{P \\over \\Nns} e^{-\\beta\\eP}\n  \\over 1 + {P \\over \\Nns} e^{-\\beta\\eP}}.\n\\end{equation}\nSolving for $\\kpon \\over \\kpoff$ results in\n\\begin{equation}\n  \\left({\\kpon \\over \\kpoff}\\right) =\n  {\\rho \\over \\left[ (1 + \\rho)(2 - \\phi) - \\rho \\right]},\n  \\label{seq_kinetic_thermo_equiv}\n\\end{equation}\nwhere we define $\\rho \\equiv {P \\over \\Nns} e^{-\\beta\\eP}$. To simplify things\nfurther we notice that for the specified values of $P = 1000 - 3000$ per cell,\n$\\Nns = 4.6 \\times 10^6$ bp, and $-\\beta\\eP = 5 - 7$, we can safely assume that\n$\\rho \\ll 1$. This simplifying assumption has been previously called the weak\npromoter approximation \\cite{Garcia2011c}. 
Given this we can simplify\n\\eref{seq_kinetic_thermo_equiv} as\n\\begin{equation}\n  {\\kpon \\over \\kpoff} = {1 \\over 2 - \\phi} {P \\over \\Nns} e^{-\\beta\\eP}.\n\\end{equation}\nTaking the log of both sides gives\n\\begin{equation}\n  \\ln\\left({\\kpon \\over \\kpoff}\\right) = -\\ln (2 - \\phi) + \\ln P - \\ln\\Nns\n  - \\beta\\eP.\n\\end{equation}\nWith this we can set as before a Gaussian prior to constrain the ratio of the\nRNAP rates as\n\\begin{equation}\n  P\\left(\\ln \\left({\\kpon \\over \\kpoff}\\right) \\right) \\propto\n  \\exp \\left\\{ - {\\left(\\ln \\left({\\kpon \\over \\kpoff}\\right) -\n  \\left[-\\ln(2 - \\phi) -\\beta\\eP + \\ln P - \\ln \\Nns \\right] \\right)^2\n  \\over 2 \\sigma^2} \\right\\}.\n  \\label{seq_prior_double}\n\\end{equation}\n\n\\fref{sfig_mcmc_rnap_double} shows the result of sampling out of\n\\eref{seq_bayes_sample_double}. Again we see that the model is highly sloppy\nwith large credible regions obtained for $\\kpoff$ and $r_m$. Nevertheless,\nagain the use of the prior information allows us to obtain parameter values\nconsistent with the equilibrium picture.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS06.pdf}\n\t\\caption{\\textbf{MCMC posterior distribution for a multi-promoter model.}\n\tSampling out of \\eref{seq_bayes_sample_double} the plot shows 2D and 1D\n\tprojections of the 3D parameter space. The parameter values are (in units of\n\tthe mRNA degradation rate $\\gm$) $\\kpon = 6.4^{+0.8}_{-0.4}$, $\\kpoff =\n\t132^{+737}_{-75}$ and $r_m = 257^{+1307}_{-132}$ which are the modes of\n\ttheir respective distributions, where the superscripts and subscripts\n\trepresent the upper and lower bounds of the 95$\\th$ percentile of the\n\tparameter value distributions. The sampling was bounded to values $<$ 1000 for\n  numerical stability when computing the confluent hypergeometric function.}\n  \\label{sfig_mcmc_rnap_double}\n\\end{figure}\n\nUsing again an mRNA mean lifetime of $\\approx 3$ min gives the following\nvalues for the parameters: $\\kpon = {0.03}_{-0.002}^{+0.004} s^{-1}$, $\\kpoff =\n{0.7}_{-0.4}^{+4.1} s^{-1}$, and $r_m = {1.4}_{-0.7}^{+7.3} s^{-1}$.\n\\fref{sfig_lacUV5_theory_data_double} shows the result of applying\n\\eref{seq_prob_multipromoter} using these parameter values. Specifically\n\\fref{sfig_lacUV5_theory_data_double}(A) shows the global distribution\nincluding cells with one and two promoters and\n\\fref{sfig_lacUV5_theory_data_double}(B) splits the distributions within the\ntwo populations. Given that the model adequately describes both populations\nindependently and pooled together, we confirm that using the cell area as a\nproxy for stage in the cell cycle and the doubling of the transcription rate\nonce cells have two promoters are reasonable approximations.\n\n\\begin{figure}[h!]\n\t\\centering \\includegraphics\n  {../fig/si/figS07.pdf}\n  \\caption{\\textbf{Experimental vs. theoretical distribution of mRNA per cell\n  using parameters for multi-promoter model.} (A) Solid line shows the result\n  of using \\eref{seq_prob_multipromoter} with the parameters inferred by\n  sampling \\eref{seq_bayes_sample_double}. Blue bars are the same data as\n  \\fref{sfig_lacUV5_FISH} from \\cite{Jones2014a}. 
(B) Split distributions of\n  small cells (light blue bars) and large cells (dark blue) with the\n  corresponding theoretical predictions with transcription rate $r_m$ (light\n  blue line) and transcription rate $2 r_m$ (dark blue line).}\n\t\\label{sfig_lacUV5_theory_data_double}\n\\end{figure}\n\nIt is hard to make comparisons with literature-reported values because these\nkinetic rates are effective parameters hiding a lot of the complexity of\ntranscription initiation \\cite{Browning2004}. Also the non-identifiability of\nthe parameters restricts our explicit comparison of the actual numerical values\nof the inferred rates. Nevertheless from the model we can see that the mean\nburst size for each transcription event is given by $r_m / \\kpoff$. From our\ninferred values we then obtain a mean burst size of $\\approx 1.9$ transcripts.\nThis is similar to the reported burst size of 1.15 in a similar\nsystem in \\textit{E. coli} \\cite{Yu2006}.\n\n\\subsection{Repressor rates from the three-state regulated promoter}\n\nHaving determined the unregulated promoter transition rates we now proceed to\ndetermine the repressor rates $\\kron$ and $\\kroff$. The values of these rates\nare constrained again by the correspondence between our kinetic picture and\nwhat we know from equilibrium models \\cite{Phillips2015}. For this analysis we\nagain exploit the feature that, at the mean, both the kinetic language and the\nthermodynamic language should have equivalent predictions. Over the last decade\nthere has been great effort in developing equilibrium models for gene\nexpression regulation \\cite{Buchler2003, Vilar2011, Bintu2005a}. In particular\nour group has extensively characterized the simple repression motif using this\nformalism \\cite{Garcia2011c, Brewster2014, Razo-Mejia2018}.\n\nThe dialogue between theory and experiments has led to simplified expressions\nthat capture the phenomenology of the gene expression response as a function of\nnatural variables such as molecule count and affinities between molecular\nplayers. A particularly interesting quantity for the simple repression motif\nused by Garcia \\& Phillips \\cite{Garcia2011c} is the fold-change in gene\nexpression, defined as\n\\begin{equation}\n  \\foldchange = {\\ee{\\text{gene expression}(R \\neq 0)} \\over\n                 \\ee{\\text{gene expression}(R = 0)}},\n\\end{equation}\nwhere $R$ is the number of repressors per cell and $\\ee{\\cdot}$ is the\npopulation average. The fold-change is simply the mean expression level in the\npresence of the repressor relative to the mean expression level in the absence\nof regulation. In the language of statistical mechanics this quantity takes the\nform\n\\begin{equation}\n  \\foldchange = \\left( 1 + {R \\over \\Nns} e^{-\\beta\\eR} \\right)^{-1},\n  \\label{seq_fc_thermo}\n\\end{equation}\nwhere $\\eR$ is the repressor-DNA binding energy, and as before $\\Nns$ is the\nnumber of non-specific binding sites where the repressor can bind\n\\cite{Garcia2011c}.\n\nTo compute the fold-change in the chemical master equation language we compute\nthe first moment of the steady state mRNA distribution $\\ee{m}$ for both the\nthree-state promoter ($R \\neq 0$) and the two-state promoter case ($R=0$) (See\n\\siref{supp_moments} for moment derivation). The unregulated (two-state)\npromoter mean mRNA copy number is given by \\eref{seq_mean_m_double_rates}. 
For\nthe regulated (three-state) promoter we have an equivalent expression of the\nform\n\\begin{equation}\n  \\ee{m (R \\neq 0)} = (2 - \\phi){r_m \\over \\gm} {\\kroff\\kpon\n  \\over \\kpoff\\kroff + \\kpoff\\kron + \\kroff\\kpon}.\n\\end{equation}\nComputing the fold-change then gives\n\\begin{equation}\n  \\foldchange = {\\ee{m (R \\neq 0)} \\over \\ee{m (R = 0)}} =\n  {\\kroff \\left( \\kpoff + \\kpon \\right) \\over\n  \\kpoff\\kron + \\kroff \\left( \\kpoff + \\kpon \\right)},\n  \\label{seq_fold_change_cme}\n\\end{equation}\nwhere the factor $(2 - \\phi)$ due to the multiple promoter copies, the\ntranscription rate $r_m$ and the mRNA degradation rate $\\gm$ cancel out.\n\nGiven that the number of repressors per cell $R$ is an experimental variable\nthat we can control, we assume that the rate at which the promoter transitions\nfrom the transcriptionally inactive state to the repressor-bound state,\n$\\kron$, is given by the concentration of repressors $[R]$ times a diffusion-limited\non rate $k_o$. For the diffusion-limited constant $k_o$ we use the\nvalue used by Jones et al. \\cite{Jones2014a}. With this in hand we can rewrite\n\\eref{seq_fold_change_cme} as\n\\begin{equation}\n  \\foldchange = \\left( 1 + {k_o [R] \\over \\kroff}\n                {\\kpoff \\over \\kpon + \\kpoff} \\right)^{-1}.\n  \\label{seq_fc_kinetic}\n\\end{equation}\n\nWe note that both \\eref{seq_fc_thermo} and \\eref{seq_fc_kinetic} have the same\nfunctional form. Therefore if both languages predict the same output for the\nmean gene expression level, it must be true that\n\\begin{equation}\n  {k_o [R] \\over \\kroff}{\\kpoff \\over \\kpon + \\kpoff} =\n  {R \\over \\Nns} e^{-\\beta\\eR}.\n\\end{equation}\nSolving for $\\kroff$ gives\n\\begin{equation}\n  \\kroff = {k_o [R] \\Nns e^{\\beta\\eR} \\over R}{\\kpoff \\over \\kpon + \\kpoff}.\n  \\label{seq_kroff_complete}\n\\end{equation}\n\nSince the reported value of $k_o$ is given in units of nM$^{-1}$s$^{-1}$, in\norder for the units to cancel properly the repressor concentration has to be\ngiven in nM rather than as an absolute count. The repressor\nconcentration is equal to\n\\begin{equation}\n[R] = \\frac{R}{V_{cell}}\\cdot \\frac{1}{N_A},\n\\end{equation}\nwhere $R$ is the absolute repressor copy number per cell, $V_{cell}$ is the cell\nvolume and $N_A$ is Avogadro's number. The \\textit{E. coli} cell volume is 2.1\nfL \\cite{Radzikowski2016}, and Avogadro's number is $6.022 \\times 10^{23}$. If\nwe further include the conversion factor to turn M into nM we find that\n\\begin{equation}\n[R] = {R \\over 2.1 \\times 10^{-15} L} \\cdot {1 \\over 6.022 \\times 10^{23}}\n\\cdot {10^9 \\text{ nmol} \\over 1 \\text{ mol}} \\approx 0.8 \\times R.\n\\end{equation}\nUsing this we simplify \\eref{seq_kroff_complete} as\n\\begin{equation}\n  \\kroff \\approx 0.8 \\cdot k_o \\cdot \\Nns e^{\\beta\\eR}\n   \\cdot {\\kpoff \\over \\kpon + \\kpoff}.\n  \\label{seq_kroff}\n\\end{equation}\nWhat \\eref{seq_kroff} shows is the direct relationship that must be satisfied if\nthe equilibrium model is to be consistent with the non-equilibrium kinetic\npicture. \\tref{stab_koff} summarizes the values obtained for the three operator\nsequences used throughout this work. To compute these numbers, the number of\nnon-specific binding sites $\\Nns$ was taken to be $4.6 \\times 10^6$ bp, i.e. the\nsize of the {\\it E. 
coli} K12 genome.\n\n\\begin{table}[]\n  \\caption{\\textbf{Binding sites and corresponding parameters.}}\n\\begin{tabular}{|c|c|c|}\n\\hline\n Operator & $\\eR\\; (k_BT)$ & $\\kroff \\; (s^{-1})$  \\\\ \\hline\n O1 & -15.3 & 0.002  \\\\ \\hline\n O2 & -13.9 & 0.008  \\\\ \\hline\n O3 & -9.7  & 0.55   \\\\ \\hline\n\\end{tabular}\n\\label{stab_koff}\n\\end{table}\n\n{\\it In-vivo} measurements of the Lac repressor off rate have been done with\nsingle-molecule resolution \\cite{Hammar2014}. The authors report a mean\nresidence time of $5.3 \\pm 0.2$ minutes for the repressor on an O1 operator. The\ncorresponding rate is $\\kroff \\approx 0.003$ $(s^{-1})$, a value very similar to\nthe one we inferred from our model. In this same reference the authors determined\nthat on average the repressor takes $30.9 \\pm 0.5$ seconds to bind to the\noperator \\cite{Hammar2014}. Given the kinetic model presented in\n\\fref{fig2_minimal_model}(A) this time can be converted to the probability of\nnot being in the repressor-bound state $P_{\\text{not }R}$. This is computed as\n\\begin{equation}\n  P_{\\text{not }R} = {\\tau_{\\text{not }R} \\over\n                      \\tau_{\\text{not }R} + \\tau_{R}},\n\\end{equation}\nwhere $\\tau_{\\text{not }R}$ is the average time that the operator is not\noccupied by the repressor and $\\tau_{R}$ is the average time that the repressor\nspends bound to the operator. Substituting the numbers from \\cite{Hammar2014}\ngives $P_{\\text{not }R} \\approx 0.088$. From our model we can compute the zeroth\nmoment $\\ee{m^0 p^0}$ for each of the three promoter states. This moment is\nequivalent to the probability of being in each of the promoter states. Upon\nsubstitution of our inferred rate parameters we can compute $P_{\\text{not }R}$\nas\n\\begin{equation}\n  P_{\\text{not }R} = 1 - P_R \\approx 0.046,\n\\end{equation}\nwhere $P_R$ is the probability of the promoter being bound by the repressor. The\nvalue we obtained is within a factor of two of the one reported in\n\\cite{Hammar2014}.\n", "meta": {"hexsha": "f8a538ff67d1735e70ed2b1328ba8f80095d69e6", "size": 37711, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/appendix_model.tex", "max_stars_repo_name": "RPGroup-PBoC/chann_cap", "max_stars_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-08-21T04:06:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-09T07:36:58.000Z", "max_issues_repo_path": "doc/appendix_model.tex", "max_issues_repo_name": "RPGroup-PBoC/chann_cap", "max_issues_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/appendix_model.tex", "max_forks_repo_name": "RPGroup-PBoC/chann_cap", "max_forks_repo_head_hexsha": "f2a826166fc2d47c424951c616c46d497ed74b39", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-04-29T17:43:28.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-09T00:20:16.000Z", "avg_line_length": 50.2813333333, "max_line_length": 106, "alphanum_fraction": 0.7316167696, "num_tokens": 11390, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8459424373085145, "lm_q2_score": 0.6548947357776796, "lm_q1q2_score": 0.5540032489642859}}
{"text": "\\chapter{Introduction}\nIn Nashian game theory, we ask ourselves, if a rational player knew the strategies of all opponent players, what strategy would she choose.\nIt would naturally be the strategy that maximizes her own payoff.\nA state in which no player would be motivated to change strategy even after knowing the opponents' strategies is called a Nash equilibrium.\n\nNon-Nashian game theory offers a different perspective.\nIt assumes that each player can perfectly predict the other players' strategies.\nLet us think about what this means for an individual player (let us call her Alice).\nWhen thinking about this assumption for the first time, it might seem evident that Alice should choose whatever strategy yields the best payoff for her, considering all the opponents' strategies that she correctly predicted, i.e., the Nashian best response.\nHowever, since the other players can also predict perfectly, they would know about this and adapt their strategies accordingly.\nIf Alice acts rationally, she needs to take this into consideration (and the other players need to take her consideration into theirs, and she needs to take theirs into hers, and so on).\n\nFor example, consider the famous Prisonner's dilemma (see \\autoref{tab:prisoners-dilemma}).\nSuppose that Alice was playing against Bob and predicted that Bob was going to cooperate.\nHer Nahian best response would then be to defect.\nIf she did choose to defect, though, Bob would predict it, and he would defect as well.\nThis would lead to a Nash equilibrium (defect, defect).\nHowever, it is not a PTE because Alice knew about this from the beginning, and she must have also considered the other case---cooperating.\nIf Alice cooperated, we can see by the same argument (since the game is symmetric) that Bob would not be motivated to switch from cooperating to defecting.\nThis is because if he defected, Alice would defect as well, and (cooperate, cooperate) has a strictly better payoff for Bob than (defect, defect).\nThus, both Alice and Bob know that cooperating leads to the best possible payoff for each, and (cooperate, cooperate) indeed is a PTE.\n\n\\begin{table}\n\t\\caption{Prisoner's dilemma---a classic example of a game in normal form.\n\tTwo suspects (row- and column-player) are placed in solitary confinement, with no means of communicating with each other.\n\tEach suspect has two options: either cooperate with the other by remaining silent or defect by testifying against the other.\n\tIf both suspects remain silent, each of them will serve one year in prison.\n\tIf one defects while the other remains silent, the other will serve three years.\n\tIf both defect, each of them will serve two years.\n\t}\n\t\\label{tab:prisoners-dilemma}\n\t\\centering\n\t\\begin{tabular}{|c|c|c|}\n\t  \\hline\n\t\t\t\t& cooperate & defect \\\\\n\t  \\hline\n\t  cooperate & -1, -1    & -3, 0  \\\\\n\t  \\hline\n\t  defect    & 0, -3     & -2, -2 \\\\\n\t  \\hline\n\t\\end{tabular}\n  \\end{table}\n\nNote that Alice and Bob actually never needed to use their predicting ability at all.\nJust believing that the other player can predict their strategy was enough to choose an optimal strategy.\nThis is, in fact, true for any game, not just for our example \\cite{Fourny20}.\n\nIn general, games can have multiple PTEs.\nHowever, games without ties (sometimes also called games in general position), if a PTE exists, then it is unique, and Pareto-optimal \\cite{Fourny20}.\n\nWe provide a formal definition of PTE in \\autoref{chap:background}, but 
this should be enough to illustrate why it is an interesting concept.\nSince game theory has many practical applications, it is interesting to research how or under what conditions players might be motivated to play in a way that leads to everyone reaching as good a payoff as possible.\n\nThe goal of this work is twofold:\nFirst, it is to develop a REST API-based backend for playing games that will teach human players to think similarly to Alice and Bob in our example.\nThat is, it will motivate them to play towards a PTE (or at least to something that is, in some sense, close to a PTE).\nWe develop a backend that supports playing 2-player randomly generated games against a computer.\nThe backend is in the form of a web-based (HTTP) API server.\nAny client can connect to the server and request a new game, which the server generates randomly and sends back to the client.\nSubsequently, the client can choose a strategy and send it to the server, which responds with the computer's strategy and the resulting payoff.\nThe human player is supposed to think that the computer chooses its strategy independently, not knowing the opponent's strategy, even though in reality it does know the opponent's strategy and uses it to choose its own---this is to simulate Perfect Prediction.\nAll games played through the server are stored in a database together with the results to be analyzed later.\n\nThe second goal is to define new notions of best responses that might ultimately lead to a PTE.\nThe idea here is that even though the computer knows its opponent's strategy and it could just play the Nashian best response to maximize its own utility, it should instead choose some potentially suboptimal strategy, such that a player who plays towards a PTE is better off than one who does not.\nThis way, it might be possible to train people into non-Nashian thinking.\nTo this end, we define a perfectly transparent best response, perfectly transparent $i$-best profile, and perfectly transparent $i$-optimal profile.\nThese three best responses induce three new equilibria: perfectly transparent best response equilibrium, perfectly transparent best profile equilibrium, and perfectly transparent optimal profile equilibrium.\nFurthermore, we analyze large datasets of randomly generated games to find out how these new equilibria relate to each other and to some other equilibria---most importantly to the PTE.\n", "meta": {"hexsha": "96a755aed2e1987652496de6110c28693ed0ee44", "size": 5859, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/chapters/introduction.tex", "max_stars_repo_name": "KuceraMartin/perfect-prediction-game", "max_stars_repo_head_hexsha": "61296cdae8fd33f902d73a819d39535c7945501a", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/chapters/introduction.tex", "max_issues_repo_name": "KuceraMartin/perfect-prediction-game", "max_issues_repo_head_hexsha": "61296cdae8fd33f902d73a819d39535c7945501a", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/chapters/introduction.tex", "max_forks_repo_name": "KuceraMartin/perfect-prediction-game", "max_forks_repo_head_hexsha": "61296cdae8fd33f902d73a819d39535c7945501a", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.7, "max_line_length": 297, "alphanum_fraction": 0.7898958867, "num_tokens": 1267, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943767446202, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5539920843970226}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                                                                 %\n%   KUIP  - Reference Manual -- LaTeX Source                      %\n%                                                                 %\n%   Chapter 6: KUIP Programming example                           %\n%                                                                 %\n%   External EPS files referenced: none                           %\n%                                                                 %\n%   Editor: Michel Goossens / CN-AS                               %\n%   Last Mod.:  6 Dec  1991   mg                                  %\n%                                                                 %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter{KUIP Programming example}\n\nAs an example of how to implement a user interface with KUIP the code\nfor a simple Reverse Polish Notation pocket calculator is presented.\n%\n%---------------------------------------------------------------------------\n%\n\\section{The Command Definition File}\n\\begin{XMPt}{The CDF for the RPN calculator}\n>Name RPNDEF\n \n>Menu RPN\n>Guidance\nReverse Polish Notation sub-pocket calculator\nusing KUIP for the user interface.\n \n>Command ENTER\n>Parameters\n+\nNUM 'Enter a number' R D=0.\n>Guidance\nPush the number(s) given as parameter(s) into the stack.\nIf none push a zero.\n>Action RPENT\n \n>Command ADD\n>Guidance\nPush the number(s) given as parameter(s) into the stack.\nAdd the two upper-most numbers of the stack and shift it up.\n>Action RPOPER\n \n>Command SUBTRACT\n>Guidance\nPush the number(s) given as parameter(s) into the stack.\nSubtract the two upper-most numbers of the stack and shift it up.\n>Action RPOPER\n \n>Command MULTIPLY\n>Guidance\nPush the number(s) given as parameter(s) into the stack.        
\nMultiply the two upper-most numbers of the stack and shift it up.\n>Action RPOPER\n\\condbreak{5\\baselineskip} \n>Command DIVIDE\n>Guidance\nPush the number(s) given as parameter(s) into the stack.\nDivide the two upper-most numbers of the stack and shift it up.\n>Action RPOPER\n \n>Command PRINT\n>Guidance\nPrint the content of the stack.\n>Action RPRINT\n \n>Command CLEAR\n>Guidance\nClear the stack.\n>Action RPCLR\n\\end{XMPt}\n%\n%---------------------------------------------------------------------------\n%\n\\section{The application program}\n\n\\begin{XMPt}{FORTRAN code for the RPN calculator application program}\n      PROGRAM RPN\n      COMMON/PAWC/PAW(50000)\n      COMMON/RPNSTK/ISTACK,STACK(100)\n*---> Initialize ZEBRA and the store /PAWC/\n      CALL MZEBRA(-3)\n      CALL MZPAW(50000,' ')\n*---> Initialize KUIP with 5000 words as minimum division size\n      CALL KUINIT(5000)\n*---> Create user command structure from definition file (command\n*---> definition routine name RPNDEF defined in CDF \n*---> with '>Name RPNDEF')\n      CALL RPNDEF\n*---> Change the prompt by executing the command SET/PROMPT\n      CALL KUEXEC('SET/PROMPT ''RPN >''')\n*---> Initialize the stack in /RPNSTK/\n      CALL RPCLR\n*---> Give control to KUIP (with no 'STYLE G', call KUWHAG \n*---> to get 'STYLE G')\n      CALL KUWHAT\n*---> Typing 'QUIT' or 'EXIT' we return here\n      END\n \n      SUBROUTINE RPCLR\n      COMMON/RPNSTK/ISTACK,STACK(100)\n      DO 10 I=1,100\n        STACK(I)=0.\n10    CONTINUE\n      ISTACK=50\n      CALL RPRINT\n      END\n \n      SUBROUTINE RPENT\n      COMMON/RPNSTK/ISTACK,STACK(100)\n      CHARACTER*32 CMD\n      CALL KUPATL(CMD,NPAR)\n      IF(CMD.EQ.'ENTER' .AND. NPAR.EQ.0) NPAR=1\n      DO 10 I=1,NPAR\n        CALL KUGETR(R)\n        ISTACK=ISTACK+1\n        IF(ISTACK.GT.100) THEN\n          PRINT *,'ERROR: Stack overflow'\n          GOTO 20\n        ENDIF\n        STACK(ISTACK)=R\n10    CONTINUE\n20    CONTINUE\n      IF(CMD.EQ.'ENTER') CALL RPRINT\n      END\n \n      SUBROUTINE RPOPER\n      COMMON/RPNSTK/ISTACK,STACK(100)\n      CHARACTER*32 CMD\n      CALL KUPATL(CMD,NPAR)\n      CALL RPENT\n      IF(ISTACK.LT.2) THEN\n        PRINT *,'ERROR: Stack underflow'\n        GOTO 10\n      ENDIF\n      IF(CMD.EQ.'ADD') THEN\n        STACK(ISTACK-1)=STACK(ISTACK)+STACK(ISTACK-1)\n      ELSEIF(CMD.EQ.'SUBTRACT') THEN\n        STACK(ISTACK-1)=STACK(ISTACK)-STACK(ISTACK-1)\n      ELSEIF(CMD.EQ.'MULTIPLY') THEN\n        STACK(ISTACK-1)=STACK(ISTACK)*STACK(ISTACK-1)\n      ELSEIF(CMD.EQ.'DIVIDE') THEN\n        IF(STACK(ISTACK).EQ.0) THEN\n          PRINT *,'ERROR: Divide by zero'\n        ELSE\n          STACK(ISTACK-1)=STACK(ISTACK-1)/STACK(ISTACK)\n        ENDIF\n      ENDIF\n      ISTACK=ISTACK-1\n10    CONTINUE\n      CALL RPRINT\n      END\n\n      SUBROUTINE RPRINT\n      COMMON/RPNSTK/ISTACK,STACK(100)\n      PRINT *,(STACK(I),I=ISTACK,ISTACK-3,-1)\n      IF(STACK(ISTACK).GT.0) THEN\n        PRINT *,'********'\n      ELSE\\condbreak{4\\baselineskip} \n        PRINT *,'*********'\n      ENDIF\n      END\n\\end{XMPt}\n\\Rind[KUPATL]{}\n\\Rind[KUGETR]{}\n%\n%---------------------------------------------------------------------------\n%\n%\\newpage\n\\section{Example of a run}\n\n\\begin{XMPt}{Example of a session using the RPN calculator}\n$ rpn\n 0.0000000 0.0000000 0.0000000 0.0000000\n *********\n RPN > \\underline{style an}\n \n From  /...\n \n  1:   KUIP          Command Processor commands.\n  2:   MACRO         Macro Processor commands.\n  3:   VECTOR        
Vector Processor commands.\n  4:   RPN           Reverse Polish Notation sub-pocket calculator\n \n Enter a number ('Q'=command mode): \\underline{4}\n \n From  /RPN/...\n \n  1: * ENTER         Push the number(s) given as parameter(s) into the stack.\n  2: * ADD           Push the number(s) given as parameter(s) into the stack.\n  3: * SUBTRACT      Push the number(s) given as parameter(s) into the stack.\n  4: * MULTIPLY      Push the number(s) given as parameter(s) into the stack.\n  5: * DIVIDE        Push the number(s) given as parameter(s) into the stack.\n  6: * PRINT         Print the content of the stack.\n  7: * CLEAR         Clear the stack.\n \n Enter a number ('\\'=one level back, 'Q'=command mode): \\underline{1}\n \n  * /RPN/ENTER  [ NUM ]\n \n Add parameters or just <CR> to the command line\n ('#'=cancel execution, '?'=help) :\n\n /RPN/ENTER \\underline{1 2 3}\n 3.000000 2.000000 1.000000 0.0000000\n ********\n \n From  /RPN/...\n \n  1: * ENTER         Push the number(s) given as parameter(s) into the stack.\n  2: * ADD           Push the number(s) given as parameter(s) into the stack.\n  3: * SUBTRACT      Push the number(s) given as parameter(s) into the stack.\n  4: * MULTIPLY      Push the number(s) given as parameter(s) into the stack.\n  5: * DIVIDE        Push the number(s) given as parameter(s) into the stack.\n  6: * PRINT         Print the content of the stack.\n  7: * CLEAR         Clear the stack.\n \n Enter a number ('\\'=one level back, 'Q'=command mode): \\underline{2}\n\n  * /RPN/ADD\n \n Add parameters or just <CR> to the command line\n ('#'=cancel execution, '?'=help) :\n\n /RPN/ADD\n 5.000000 1.000000 0.0000000 0.0000000\n ********\n From  /RPN/...\n \n  1: * ENTER         Push the number(s) given as parameter(s) into the stack.\n  2: * ADD           Push the number(s) given as parameter(s) into the stack.\n  3: * SUBTRACT      Push the number(s) given as parameter(s) into the stack.\n  4: * MULTIPLY      Push the number(s) given as parameter(s) into the stack.\n  5: * DIVIDE        Push the number(s) given as parameter(s) into the stack.\n  6: * PRINT         Print the content of the stack.\n  7: * CLEAR         Clear the stack.\n \n Enter a number ('\\'=one level back, 'Q'=command mode): \\underline{q}\\vspace{7pt}\n RPN > \\underline{print}\n 5.000000 1.000000 0.0000000 0.0000000\n ********\n RPN > \\underline{enter 1 2 3}\n 3.000000 2.000000 1.000000 5.000000\n ********\n RPN > \\underline{add}\n 5.000000 1.000000 5.000000 1.000000\n ********\n RPN > \\underline{add}\n 6.000000 5.000000 1.000000 0.0000000\n ********\n RPN > \\underline{add}\n 11.00000 1.000000 0.0000000 0.0000000\n ********\n RPN > \\underline{add 5}\n 16.00000 1.000000 0.0000000 0.0000000\n ********\n RPN > \\underline{add 10 20}\n 30.00000 16.00000 1.000000 0.0000000\n ********\n RPN > \\underline{usage rpn/}\n \n  * RPN/ENTER  [ NUM ]\n  * RPN/ADD\n  * RPN/SUBTRACT\n  * RPN/MULTIPLY\n  * RPN/DIVIDE\n  * RPN/PRINT\n  * RPN/CLEAR\n \n RPN > \\underline{help clear}\n \n  * /RPN/CLEAR\n \n    Clear the stack.\n \n RPN > \\underline{c}\n *** Ambiguous command. 
Possible commands are :\n \n /KUIP/ALIAS/CREATE\n /KUIP/SET_SHOW/COMMAND\n /KUIP/SET_SHOW/COLUMNS\n /MACRO/SYNTAX/Branching/CASE\n /VECTOR/CREATE\n /VECTOR/COPY\n /RPN/CLEAR\n \n RPN > \\underline{r/c}\n 0.0000000 0.0000000 0.0000000 0.0000000\n *********\n RPN > \\underline{help rpn/print}\n \n  * /RPN/PRINT\n \n    Print the content of the stack.\n\n RPN > \\underline{help en}\n \n  * /RPN/ENTER  [ NUM ]\n \n    NUM        R 'Enter a number' D=0\n \n    Push the number(s) given as parameter(s) into the stack.\n    If none push a zero.\n \n RPN > \\underline{q}\n$\n\\end{XMPt}\n", "meta": {"hexsha": "9dc1f7821242d4da072913dba18221ae61d15fb9", "size": 8818, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "kuip/kuipap1.tex", "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_issues_repo_path": "kuip/kuipap1.tex", "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "kuip/kuipap1.tex", "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.5372168285, "max_line_length": 81, "alphanum_fraction": 0.5772283965, "num_tokens": 2633, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.757794360334681, "lm_q1q2_score": 0.5539920679601056}}
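The operand order in \texttt{RPOPER} above is worth spelling out: for a binary operator, the second-from-top stack element is the left operand and the top element is the right one, so \texttt{ENTER 1 2} followed by \texttt{SUBTRACT} leaves $-1$ on the stack. The following Python sketch (not part of KUIP; the class and method names are made up for illustration) mirrors the stack discipline of \texttt{RPENT}/\texttt{RPOPER}:

\begin{verbatim}
# Illustrative sketch of the RPN stack discipline; not KUIP code.
class RPNStack:
    def __init__(self):
        self.stack = []

    def enter(self, *numbers):
        # Push the number(s) given as parameter(s) onto the stack.
        self.stack.extend(float(x) for x in numbers)

    def oper(self, op):
        # Second-from-top is the left operand, top is the right one.
        if len(self.stack) < 2:
            raise IndexError("Stack underflow")
        right = self.stack.pop()
        left = self.stack.pop()
        if op == "/" and right == 0.0:
            raise ZeroDivisionError("Divide by zero")
        self.stack.append({"+": left + right, "-": left - right,
                           "*": left * right, "/": left / right}[op])

s = RPNStack()
s.enter(1, 2, 3)
s.oper("+")        # 2 + 3
print(s.stack)     # [1.0, 5.0]
\end{verbatim}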
{"text": "\n\\section{Axioms for propositional logic}\n\n", "meta": {"hexsha": "00cf28d0701c58d7275c6059eb87c01770b2dcc0", "size": 43, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/logic/propositionalLogicAxioms/01-00-Axioms_for_propositional_logic.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/logic/propositionalLogicAxioms/01-00-Axioms_for_propositional_logic.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/logic/propositionalLogicAxioms/01-00-Axioms_for_propositional_logic.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 10.75, "max_line_length": 40, "alphanum_fraction": 0.7906976744, "num_tokens": 12, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7577943712746406, "lm_q2_score": 0.7310585669110202, "lm_q1q2_score": 0.5539920670772763}}
{"text": "\\lab{NumPy Visual Guide}{NumPy Visual Guide}\n\\label{appendix:numpy-visual-guide}\n\\objective{NumPy operations can be difficult to visualize, but the concepts are straightforward.\nThis appendix provides visual demonstrations of how NumPy arrays are used with slicing syntax, stacking, broadcasting, and axis-specific operations.\nThough these visualizations are for 1- or 2-dimensional arrays, the concepts can be extended to $n$-dimensional arrays.\n% See Lab \\ref{lab:NumPy} for an introduction to NumPy operations and synatx.\n}\n\n\\section*{Data Access} % ======================================================\n\nThe entries of a 2-D array are the rows of the matrix (as 1-D arrays).\nTo access a single entry, enter the row index, a comma, and the column index.\nRemember that indexing begins with $0$.\n\n\\begin{align*}\n\\text{\\li{A[0]}} = \\left[\\begin{array}{rrrrr}\n\\rowcolor{red!20}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\end{array}\\right]\n&&\n\\text{\\li{A[2,1]}} = \\left[\\begin{array}{rrrrr}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\cellcolor{red!20} \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\n\\end{array}\\right]\n\\end{align*}\n\n\\section*{Slicing} % ==========================================================\n\nA lone colon extracts an entire row or column from a 2-D array.\nThe syntax \\li{[a:b]} can be read as ``the $a$th entry up to (but not including) the $b$th entry.''\nSimilarly, \\li{[a:]} means ``the $a$th entry to the end'' and \\li{[:b]} means ``everything up to (but not including) the $b$th entry.''\n\n\\begin{align*}\n\\text{\\li{A[1]}} = \\text{\\li{A[1,:]}} = \\left[\\begin{array}{rrrrr}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\rowcolor{red!20}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\end{array}\\right]\n&&\n\\text{\\li{A[:,2]}} = \\left[\\begin{array}{rr>{\\columncolor{red!20}}rrr}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{A[1:,:2]}} = \\left[\\begin{array}{>{\\columncolor{red!20}}r>{\\columncolor{red!20}}rrrr}\n\\cellcolor{white}\\times & \\cellcolor{white}\\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\\\\\n\\times & \\times & \\times & \\times & \\times\n\\end{array}\\right]\n&&\n\\text{\\li{A[1:-1,1:-1]}} = \\left[\\begin{array}{rrrrr}\n\\times & \\times & \\times & \\times & \\times\\\\\n\\rowcolor{red!20}\n\\cellcolor{white}\\times & \\times & \\times & \\times & \\cellcolor{white}\\times\\\\\n\\rowcolor{red!20}\n\\cellcolor{white}\\times & \\times & \\times & \\times & \\cellcolor{white}\\times\\\\\n\\times & \\times & \\times & \\times & \\times\\end{array}\\right]\n\\end{align*}\n\n\\section*{Stacking} % =========================================================\n\n\\li{np.hstack()} stacks sequence of arrays horizontally and \\li{np.vstack()} stacks a sequence of arrays 
vertically.\n\n\\begin{align*}\n\\text{\\li{A}} = \\left[\\begin{array}{ccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n&&\n\\text{\\li{B}} = \\left[\\begin{array}{ccc}\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*} \\\\\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*} \\\\\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*}\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{np.hstack((A,B,A))}} =\n\\left[\\begin{array}{ccccccccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*}&\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*}&\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*}&\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{np.vstack((A,B,A))}} =\n\\left[\\begin{array}{ccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*} \\\\\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*} \\\\\n\\textcolor{red}{*} & \\textcolor{red}{*} & \\textcolor{red}{*} \\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n\\end{align*}\nBecause 1-D arrays are flat, \\li{np.hstack()} concatenates 1-D arrays and \\li{np.vstack()} stacks them vertically.\nTo make several 1-D arrays into the columns of a 2-D array, use \\li{np.column_stack()}.\n\n\\begin{align*}\n\\text{\\li{x}} = \\left[\\begin{array}{cccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n&&\n\\text{\\li{y}} = \\left[\\begin{array}{cccc}\n\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{np.hstack((x,y,x))}} =\n\\left[\\begin{array}{cccccccccccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\n\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}&\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{np.vstack((x,y,x))}} 
=\n\\left[\\begin{array}{cccc}\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}&\\textcolor{red}{*}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n&&\n\\text{\\li{np.column_stack((x,y,x))}} =\n\\left[\\begin{array}{ccc}\n\\textcolor{blue}{\\times}&\\textcolor{red}{*}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{red}{*}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{red}{*}&\\textcolor{blue}{\\times}\\\\\n\\textcolor{blue}{\\times}&\\textcolor{red}{*}&\\textcolor{blue}{\\times}\n\\end{array}\\right]\n\\end{align*}\nThe functions \\li{np.concatenate()} and \\li{np.stack()} are more general versions of \\li{np.hstack()} and \\li{np.vstack()}, and \\li{np.row_stack()} is an alias for \\li{np.vstack()}.\n\n\\section*{Broadcasting} % =====================================================\n\nNumPy automatically aligns arrays for component-wise operations whenever possible.\n% The default behavior adds the first element to the first element of each row, the second element to the second element of each row, and so on.\nSee \\url{http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html} for more in-depth examples and broadcasting rules.\n\n\\begin{align*}\n\\text{\\li{A}} = \\left[\\begin{array}{ccc}\n1 & 2 & 3\\\\\n1 & 2 & 3\\\\\n1 & 2 & 3\\\\\n\\end{array}\\right]\n&&\n\\text{\\li{x}} = \\left[\\begin{array}{ccc}\n10 & 20 & 30\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{A + x}}\n&= \\begin{blockarray}{ccc}\n\\begin{block}{[ccc]}\n1 & 2 & 3\\\\\n1 & 2 & 3\\\\\n1 & 2 & 3\\\\\n\\end{block}\n  & + &  \\\\\n\\begin{block}{[ccc]}\n10 & 20 & 30\\\\\n\\end{block}\n\\end{blockarray}\n&= \\left[\\begin{array}{ccc}\n11 & 22 & 33\\\\\n11 & 22 & 33\\\\\n11 & 22 & 33\n\\end{array}\\right]\n\\\\ \\\\\n\\text{\\li{A + x.reshape((-1,1))}}\n&= \\left[\\begin{array}{ccc}\n1 & 2 & 3 \\\\\n1 & 2 & 3 \\\\\n1 & 2 & 3 \\\\\n\\end{array}\\right]\n+ \\left[\\begin{array}{c}\n10 \\\\ 20 \\\\ 30\\\\\n\\end{array}\\right]\n&= \\left[\\begin{array}{ccc}\n11 & 12 & 13\\\\\n21 & 22 & 23\\\\\n31 & 32 & 33\\\\\n\\end{array}\\right]\n\\end{align*}\n\n\\section*{Operations along an Axis} % =========================================\n\nMost array methods have an \\li{axis} argument that allows an operation to be done along a given axis.\nTo compute the sum of each column, use \\li{axis=0}; to compute the sum of each row, use \\li{axis=1}.\n\n\\begin{align*}\nA = \\left[\\begin{array}{cccc}\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\n\\end{array}\\right]\n\\end{align*}\n\n\\begin{align*}\n\\text{\\li{A.<<sum>>(axis=0)}} &= %\\text{\\li{np.array([sum(A[:,j]) for j in range(A.shape[1])])}} =\n\\left[\\begin{array}{>{\\columncolor{red!20}}c|>{\\columncolor{blue!20}}c|>{\\columncolor{yellow!20}}c|>{\\columncolor{green!20}}c}\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\\\\\n1 & 2 & 3 & 4\n\\end{array}\\right]\n= \\left[\\begin{array}{cccc} \\cellcolor{red!20}4 & \\cellcolor{blue!20}8 & \\cellcolor{yellow!20}12 & \\cellcolor{green!20}16 \\end{array}\\right]\n\\\\ \\\\\n\\text{\\li{A.<<sum>>(axis=1)}} &= %\\text{\\li{np.array([sum(A[i,:]) for i in range(A.shape[0])])}} =\n\\left[\\begin{array}{cccc}\n\\rowcolor{red!20} 1 & 2 & 3 & 4\\\\ \\hline\n\\rowcolor{blue!20} 1 & 2 & 3 & 4\\\\ \\hline\n\\rowcolor{yellow!20} 1 & 
2 & 3 & 4\\\\ \\hline\n\\rowcolor{green!20} 1 & 2 & 3 & 4\\\\\n\\end{array}\\right]\n= \\left[\\begin{array}{cccc} \\cellcolor{red!20}10 & \\cellcolor{blue!20}10 & \\cellcolor{yellow!20}10 & \\cellcolor{green!20}10 \\end{array}\\right]\n\\end{align*}\n", "meta": {"hexsha": "d258a133ebea8a77ad263ef2732fefb8d0f4c3a3", "size": 9763, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Appendices/NumpyVisualGuide/NumpyVisualGuide.tex", "max_stars_repo_name": "chrismmuir/Labs-1", "max_stars_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Appendices/NumpyVisualGuide/NumpyVisualGuide.tex", "max_issues_repo_name": "chrismmuir/Labs-1", "max_issues_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Appendices/NumpyVisualGuide/NumpyVisualGuide.tex", "max_forks_repo_name": "chrismmuir/Labs-1", "max_forks_repo_head_hexsha": "13c23611b90d73b0c2c7d275bce9808f829009f2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.012295082, "max_line_length": 181, "alphanum_fraction": 0.6287001946, "num_tokens": 3561, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.6442251064863695, "lm_q2_score": 0.8596637577007394, "lm_q1q2_score": 0.5538169758472314}}
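The diagrams above translate directly into a few lines of NumPy. The following sketch (array contents chosen arbitrarily) reproduces each picture and can be pasted into an interpreter:

\begin{verbatim}
import numpy as np

A = np.arange(20).reshape(4, 5)  # 4x5 array for the slicing diagrams
A[0]                             # first row (a 1-D array of length 5)
A[2, 1]                          # single entry: row 2, column 1
A[:, 2]                          # third column
A[1:, :2]                        # rows 1 onward, first two columns
A[1:-1, 1:-1]                    # interior block

x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
np.hstack((x, y, x))             # shape (12,): flat concatenation
np.vstack((x, y, x))             # shape (3, 4): one row per array
np.column_stack((x, y, x))       # shape (4, 3): one column per array

B = np.tile([1, 2, 3], (3, 1))   # rows [1, 2, 3], as in the example
v = np.array([10, 20, 30])
B + v                            # broadcast across rows -> 11 22 33
B + v.reshape((-1, 1))           # broadcast down columns

C = np.tile([1, 2, 3, 4], (4, 1))
C.sum(axis=0)                    # column sums: [ 4  8 12 16]
C.sum(axis=1)                    # row sums:    [10 10 10 10]
\end{verbatim}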
{"text": " \\documentclass [12pt]{article} \n\n\\usepackage {amsmath}\n\\usepackage {amsthm}\n\\usepackage {amssymb}\n\\usepackage {graphicx} \n\\usepackage {float}\n\\usepackage {multirow}\n\\usepackage {xcolor}\n\\usepackage {algorithmic}\n\\usepackage [ruled,vlined,commentsnumbered,titlenotnumbered]{algorithm2e} \\usepackage {array} \n\\usepackage {booktabs} \n\\usepackage {url} \n\\usepackage {parskip} \n\\usepackage [margin=1in]{geometry} \n\\usepackage [T1]{fontenc} \n\\usepackage {cmbright} \n\\usepackage [many]{tcolorbox} \n\\usepackage [colorlinks = true,\n            linkcolor = blue,\n            urlcolor  = blue,\n            citecolor = blue,\n            anchorcolor = blue]{hyperref} \n\\usepackage {enumitem} \n\\usepackage {xparse} \n\\usepackage {verbatim}\n\\usepackage{listings}\n\\usepackage{xcolor}\n\\lstset { %\n    language=C++,\n    backgroundcolor=\\color{black!5}, % set backgroundcolor\n    basicstyle=\\footnotesize,% basic font setting\n}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{remark}{Remark}\n\n\n\n\\DeclareTColorBox {Solution}{}{breakable, title={Solution}} \\DeclareTColorBox {Solution*}{}{breakable, title={Solution (provided)}} \\DeclareTColorBox {Instruction}{}{boxrule=0pt, boxsep=0pt, left=0.5em, right=0.5em, top=0.5em, bottom=0.5em, arc=0pt, toprule=1pt, bottomrule=1pt} \\DeclareDocumentCommand {\\Expecting }{+m}{\\textbf {[We are expecting:} #1\\textbf {]}} \\DeclareDocumentCommand {\\Points }{m}{\\textbf {(#1 pt.)}} \n\n\\begin {document} \n\n{\\LARGE \\textbf {COMP 285 (NC A\\&T, Spr `22)}\\hfill \\textbf {Lecture 7} } \n\\vspace {1em} \n\\begin {Instruction} \n\nAdapted From Virginia Williams' lecture notes. Additional credits: J. Su, W. Yang, Gregory Valiant, Mary Wootters, Aviad Rubinstein, Sami Alsheikh.\n\\end {Instruction} \n\n\\begin{centering}\n\\section*{k-Select Problem \\& Median Selection}\n\\end{centering}\n\n\\section{Introduction}\nIn the last lecture, we wrapped up with the subtitution method for solving recurrences and introduced the selection problem. In this lecture, we find an $O(n)$ algorithm to solve the selection problem.\n\n\n\\section{Selection Problem}\nThe selection problem is to find the $k$-th smallest number in an array $A$.\n\n\\textbf{Input}: array $A$ of $n$ numbers, and an integer $k \\in \\{1, \\cdot , n\\}$.\n\\textbf{Output}: the $k$-th smallest number in $A$.\n\nOne approach is to sort the numbers in ascending order, and then return the $k$th number in the sorted list. This takes $O(n \\log n)$ time, since it takes $O(n \\log n)$ time for the sort (e.g. by \\texttt{MergeSort}) and $O(1)$ time to return $k$th number.\n\n\\subsection{Minimum Element}\nAs always, we ask if we can do better (i.e. faster in big-O terms). In the special case where $k = 1$, selection is the problem of finding the minimum element. We can do this in $O(n)$ time by scanning through the array and keeping track of the minimum element so far. 
If the current element is smaller than the minimum so far, we update the minimum.\n\n\\begin{algorithm}\n\\caption{SelectMin(A)}\\label{alg:min}\n\\begin{algorithmic}\n\\STATE $m \\gets \\infty$\n\\STATE $n \\gets $ length($A$)\n\\FOR{$i=1$ to $n$}\n    \\IF{$A[i] < m$}\n        \\STATE $m \\gets A[i]$\n    \\ENDIF\n\\ENDFOR\n\\RETURN $m$\n\\end{algorithmic}\n\\end{algorithm}\n\nIn fact, this is the best running time we could hope for.\n\n\\textbf{Definition.} A deterministic algorithm is one which, given a fixed input, always performs the same operations (as opposed to an algorithm which uses randomness).\n\n\\textbf{Claim.} \\textit{Any deterministic algorithm for finding the minimum has runtime $\\Omega(n)$.}\n\n\\begin{proof}\nIntuitively, the claim holds because any algorithm for the minimum must\nlook at all the elements, each of which could be the minimum. Suppose a correct deterministic algorithm does not look at $A[i]$ for some $i$. Then the output cannot depend on $A[i]$, so the algorithm returns the same value whether $A[i]$ is the minimum element or the maximum element. Therefore the algorithm is not always correct, which is a contradiction. So there is no sublinear deterministic algorithm for finding the minimum.\n\\end{proof}\n\nSo for $k = 1$, we have an algorithm which achieves the best running time possible. By similar reasoning, this lower bound of $\\Omega(n)$ applies to the general selection problem. So ideally we would like to have a linear-time selection algorithm in the general case.\n\n\\section{Linear-Time Selection}\nIn fact, a linear-time selection algorithm does exist. Before showing the linear time selection algorithm, it's helpful to build some intuition on how to approach the problem. The high-level idea will be to mimic binary search over an unsorted input. At each step, we hope to divide the input into two parts, the subset of smaller elements of $A$, and the subset of larger elements of $A$. We will then determine whether the $k$-th smallest element lies in the first part (with the ``smaller'' elements) or the part with larger elements, and recurse on exactly one of\nthose two parts.\n\nHow do we decide how to partition the array into these two pieces? Suppose we have a\nblack-box algorithm \\texttt{ChoosePivot} that chooses some element in the array $A$, and we use this pivot to define the two sets: any $A[i]$ less than the pivot is in the set of ``smaller'' values, and any $A[i]$ greater than the pivot is in the other part. We will figure out precisely how to specify this subroutine \\texttt{ChoosePivot} a bit later, after specifying the high-level algorithm structure. The algorithm \\texttt{ChoosePivot} does not affect the \\textit{correctness} of the algorithm, as we will see in Algorithm~\\ref{alg:select}. Rather, it only affects the runtime.\n\nFor clarity we'll assume all elements are distinct from now on, but the idea generalizes easily. 
Let $n$ be the size of the array and assume we are trying to find the $k$-th element.\n\n\\begin{algorithm}\n\\caption{Select(A, n, k)}\\label{alg:select}\n\\begin{algorithmic}\n\\IF {$n=1$}\n    \\RETURN $A[1]$\n\\ENDIF\n\\STATE $p \\gets \\texttt{ChoosePivot}(A, n)$\n\\STATE $A_< \\gets \\{A[i] \\mid A[i] < p \\}$\n\\STATE $A_> \\gets \\{A[i] \\mid A[i] > p \\}$\n\\IF {$|A_<| = k - 1$}\n    \\RETURN $p$\n\\ELSIF {$|A_<| > k - 1$}\n    \\RETURN \\texttt{Select}$(A_<, |A_<|, k)$\n\\ELSIF {$|A_<| < k - 1$}\n    \\RETURN \\texttt{Select}$(A_>, |A_>|, k - |A_<| - 1)$\n\\ENDIF\n\\end{algorithmic}\n\\end{algorithm}\n\nAt each iteration, we use the element $p$ to partition the array into two parts: all elements smaller than the pivot and all elements larger than the pivot, which we denote $A_<$ and $A_>$, respectively.\n\nDepending on the sizes of the resulting sub-arrays, the runtime can differ. For example, if one of these sub-arrays is of size $n - 1$, at each iteration we only decrease the size of the problem by $1$, resulting in total running time $O(n^2)$. If the array is split into two equal parts, then the size of the problem reduces by half at each iteration, resulting in a linear time solution. (We assume \\texttt{ChoosePivot} runs in $O(n)$ time.)\n\n\\textbf{Proposition.} \\textit{ If the pivot $p$ is chosen to be the minimum or maximum element, then Select runs in $\\Theta(n^2)$ time.}\n\n\\textit{Proof.} At each iteration, the number of elements decreases by $1$. Since running \\texttt{ChoosePivot} and creating $A_<$ and $A_>$ takes linear time, the recurrence for the runtime is $T(n) = T(n - 1) + \\Theta(n)$. Expanding this,\n$$\nT(n) \\leq c_1n + c_1(n -  1) + c_1(n -  2) + ... + c_1 = c_1n(n + 1)/2\n$$\nand\n$$\nT(n) \\geq c_2n + c_2(n -  1) + c_2(n -  2) + ... + c_2 = c_2n(n + 1)/2.\n$$\nWe conclude that $T(n) = \\Theta(n^2)$.\n\n\\textbf{Proposition}. \\textit{If the pivot $p$ is chosen to be the median element, then Select runs in $O(n)$ time.}\n\n\\textit{Proof}. Intuitively, the running time is linear since we remove half of the elements from consideration each iteration. Formally, each recursive call is made on inputs of half the size, namely, $T(n) \\leq T(n/2)+cn$. Expanding this, the runtime is $T(n) \\leq cn+cn/2+cn/4+...+c \\leq 2cn$, which is $O(n)$. So how do we design a \\texttt{ChoosePivot} subroutine that chooses a pivot in linear time? In the following, we describe three ideas.\n\n\\subsection{Idea \\#1: Choose a random pivot}\n\nAs we saw earlier, depending on the pivot chosen, the worst-case runtime can be $O(n^2)$ if we are unlucky in the choice of the pivot at every iteration. As you might expect, it is extremely unlikely to be this unlucky, and one can prove that the expected runtime is $O(n)$ provided the pivot is chosen uniformly at random from the set of elements of $A$. In practice, this randomized algorithm is what is implemented, and the hidden constant in the $O(n)$ runtime is very small.\n\n\\subsection{Idea \\#2: Choose a pivot that creates the most ``balanced'' split}\n\nConsider a \\texttt{ChoosePivot} that returns the pivot that creates the most ``balanced'' split, which would be the median of the array. However, this is exactly the selection problem we are trying to solve, with $k = n/2$! 
As long as we do not know how to find the median in linear time, we cannot use this procedure as \\texttt{ChoosePivot}.\n\n\\end{document}", "meta": {"hexsha": "a8b9480270a23b9455964605696b1f442c87c242", "size": 8962, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assets/lectures/lecture7.tex", "max_stars_repo_name": "facebookEIR/algorithms-course", "max_stars_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-16T02:47:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-16T02:47:46.000Z", "max_issues_repo_path": "assets/lectures/lecture7.tex", "max_issues_repo_name": "facebookEIR/algorithms-course", "max_issues_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/lectures/lecture7.tex", "max_forks_repo_name": "facebookEIR/algorithms-course", "max_forks_repo_head_hexsha": "f0893b43aaf3b321eb134c82512bd7b9271fdea6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-20T21:52:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-21T03:00:16.000Z", "avg_line_length": 59.3509933775, "max_line_length": 573, "alphanum_fraction": 0.7252845347, "num_tokens": 2566, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6442251064863697, "lm_q2_score": 0.8596637433190939, "lm_q1q2_score": 0.5538169665822145}}
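Idea \#1 is short enough to state as runnable code. The following Python sketch of \texttt{Select} with a uniformly random \texttt{ChoosePivot} follows Algorithm~\ref{alg:select} line by line (assuming distinct elements; expected $O(n)$ time):

\begin{verbatim}
import random

def select(A, k):
    # Return the k-th smallest element of A (k = 1 is the minimum).
    if len(A) == 1:
        return A[0]
    p = random.choice(A)             # ChoosePivot: uniform at random
    less = [x for x in A if x < p]   # A_<
    more = [x for x in A if x > p]   # A_>
    if len(less) == k - 1:
        return p
    elif len(less) > k - 1:
        return select(less, k)
    else:
        return select(more, k - len(less) - 1)

assert select([5, 1, 4, 2, 3], 3) == 3   # the median of 1..5
\end{verbatim}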
{"text": "\\chapter{Efficient Retrieval}\n\n\\begin{multicols*}{2}\n\\section{Dimension Reduction}\n\n\\noindent To reduce sparsity problem, we use dimensionality reduction technique, such as truncated singular value decomposition, to discard some small / unimportant singular values. \n\n\\section{Efficient Cosine Ranking}\n\\noindent To compute $K$ highest-scoring document for a query, we need to compute cosine similarity for all documents. To improve efficiency, we can consider only $K$ documents that likely to be among $K$ highest scoring documents. \n\n\\subsection{Index Elimination}\n\\noindent Rule 1: only consider documents that contain at least one query term \\\\\n\\noindent Rule 2: only consider high-IDF query terms. Low-IDF terms are likely to be stopwords and contribute little to the scores. \\\\\n\\noindent Rule 3: only consider documents that contain many query terms. This can be done during posting traversal. \n\n\\subsection{Champion Lists}\n\\noindent Precompute, for each term $t$ in the dictionary, the set of the $r$ documents with the highest weights for $t$; the value of $r$ is chosen in advance. We call this set of $r$ documents the champion list for term $t$.\\\\\n\n\\noindent At query time, we only compute scores for documents in the champion list for some query terms. \n\n\\subsection{Static Quality Scores}\n\\noindent We want top-ranking documents to be both relevant and authoritative. Let the query-independent quality score as $g(d)$, the net score:\n$$\\text{netscore}(\\vec{q},\\vec{d})=g(d) + \\text{cosine} (\\vec{q},\\vec{d})$$\n\n\\noindent We get the top $K$ documents by net score\n\n\\subsection{High and Low Lists}\n\\noindent We maintain two posting lists called high and low. We first traverse high lists first to get top $K$ documents. If we do not get enough document, then only we traverse low lists. \n\n\\subsection{Cluster Pruning}\n\\noindent We first pick $\\sqrt{N}$ documents at random (leaders), then pre-compute the nearest leader for all documents. \\\\\n\\noindent We process query by finding its nearest leader $L$ and find $K$ nearest document from $L$\u2019s followers\n\n\\section{Parametric and Zone Indexes}\n\\noindent There could be some metadata / fields for each documents. We want to search by fields. \\\\\n\n\\noindent A zone is a region of documents that can contain an arbitrary amount of text. We build inverted indexes on zones to permit querying.\n\n\\begin{center}\n\\includegraphics[width=8cm]{zone-index}\n\\end{center}\n\n\\section{Tiered Indexes}\n\\noindent Break postings up into a hierarchy of lists. Inverted index thus broken up into tiers of decreasing importance.\\\\\n\n\\noindent We process query by using top tier unless it fails to yield $K$ documents. 
\n\n\\section{Aggregate Scores}\n\n$$\\text{score} = \\alpha \\times \\text{cosine}(\\vec{q},\\vec{d}) + \\beta \\times g(d) + \\gamma \\times \\text{proximity}(\\vec{q},\\vec{d})$$\n\n\\end{multicols*}\n", "meta": {"hexsha": "c7e943f75077304240a3af24347807bdd44c6925", "size": 2836, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "efficient.tex", "max_stars_repo_name": "Andyccs/CZ4034-information-retrieval-summary", "max_stars_repo_head_hexsha": "1636bbebc0fd7864e3d6234a57e0e978fbf7d5a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-04-23T05:00:21.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-05T07:10:54.000Z", "max_issues_repo_path": "efficient.tex", "max_issues_repo_name": "Andyccs/CZ4034-information-retrieval-summary", "max_issues_repo_head_hexsha": "1636bbebc0fd7864e3d6234a57e0e978fbf7d5a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "efficient.tex", "max_forks_repo_name": "Andyccs/CZ4034-information-retrieval-summary", "max_forks_repo_head_hexsha": "1636bbebc0fd7864e3d6234a57e0e978fbf7d5a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.5094339623, "max_line_length": 232, "alphanum_fraction": 0.7630465444, "num_tokens": 713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5537448903812773}}
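Champion lists, index elimination and the net score combine naturally into one retrieval loop. The sketch below is illustrative only (the helper names \texttt{cosine} and \texttt{g} stand for the similarity and static quality functions, which are assumed given):

\begin{verbatim}
import heapq

def top_k(query_terms, champion_lists, cosine, g, K=10):
    # Index elimination: only documents on the champion list of
    # some query term are scored at all.
    candidates = set()
    for t in query_terms:
        candidates.update(champion_lists.get(t, ()))
    # Net score = static quality + cosine similarity.
    scored = {d: g(d) + cosine(query_terms, d) for d in candidates}
    return heapq.nlargest(K, scored.items(), key=lambda kv: kv[1])

# Toy usage with made-up champion lists and scores:
champs = {"apple": ["d1", "d2"], "pie": ["d2", "d3"]}
print(top_k(["apple", "pie"], champs,
            cosine=lambda q, d: 0.5 if d == "d2" else 0.1,
            g=lambda d: 0.2, K=2))       # d2 ranks first
\end{verbatim}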
{"text": "\\chapter{ Group Change Detection (GCD) } \n \\label{sec:dynamicGCD}\n%\\section{ Group Change Detection (GCD) Techniques  } \\label{Sec:GCD}\nMethods for detecting group anomalies were  explored in previous chapters however this Chapter thoroughly describes state-of-the-art Group Change Detection (GCD) techniques.   GCD methods detect  significant changes in statistical properties of a group stochastic process (GSP) which consists of a collection of time series.   \n GCD research focuses on a single group over time however it is possible to analyse multiple groups over time which combines the static GAD and dynamic GCD  problem. \n \n \nWe formulate  the GCD problem with a slightly different problem definition to Section \\ref{Sec:Problem} and  explicitly write groups as a function of time. % and formulate  the GCD problem as follows.\n  Consider a collection of $N'$ stochastic processes  of $V$ dimensions observed over a sequence of discrete times  $\\mathcal{T}=\\{1,2,\\dots,T\\}$.  Stochastic processes $n= 1,2,\\dots,N'$ are represented by $   \\{ (\\,{ X}_{nv}(t) \\,)_{v=1}^V  : \\, t \\in \\mathcal{T}  \\}$.  \nA group distribution at a fixed time $t$ is associated with the matrix of random variables \n\\begin{align}\n{\\bf G}(t)=\\big( { X}_{nv}(t)\\big)  \\in \\mathbb{R}^{N' \\times V} \\label{Gdist}\n\\end{align}\nsuch that  a GSP  is represented by $\\{ {{\\bf G}}(t) \\}_{t \\in\\mathcal{T}}$  \nand the total number of observations is $N= T N^{'} $.    \n  GCD techniques  are explained  by descriptive components from Section \\ref{Sec:Problem}  in  categories of discriminative methods, generative models and hypothesis tests. \n\n\n\n\n\\section{ Discriminative: Group Level Events in Time Series (GLETS) }\nWe refer to the GCD technique proposed by \nChen et al. \\cite{GLETS} as Group Level Events in Time Series (GLETS). GLETS is a discriminative method that detects a significant change in a group of time series where group memberships are previously unknown. A group of time series with numerical (discrete or continuous) values is clustered based on a dissimilarity measure given  observations over a reference period. Since GLETS employs a sequential clustering approach, it detects two type of temporal changes;\n\\begin{enumerate}[(a)]\n\\item Group formation:  a collection of dissimilar time series exhibit more cohesive structure over a period of time. % with a higher correlation.\n\\item Group disbanding:  a group of correlated time series splits into multiple subgroups. % behaviours after a certain time period. \n\\end{enumerate}\n%A group of correlated time series is clustered based on Pearson's correlation over a training interval. \nThe four key components of GLETS are explained as follows. \n\\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})$: \\\\ \n%A training set is assumed to contain information about regular  behaviour of a GSP.\n Suppose training data consists of ${\\bf G}_{train}= \\{ {\\bf G}(t)\\}_{t \\in [\\tau,\\tau+a]} $ % contains group observations in the interval $[\\tau,\\tau+a]$\n where ${\\bf G}(t) = \\big( X_{nv}(t) \\big) \\in \\mathbb{R}^{N' \\times V}$ with $N'$ time series of $V$ dimensions and  $a$ is an appropriate window size chosen by a domain expert. Clustering is based on a dissimilarity metric $d$  over a training interval where   $d({\\bf X}_{n}(t),{\\bf X}_{n'}(t),[\\tau,\\tau+a])$ is calculated for all pairs $n,n'\\in \\{1,2,\\dots,N' \\}$.  In Chen et al. 
\\cite{GLETS}, the dissimilarity metric $d$ is selected as Pearson's correlation coefficient for financial data whereas Euclidean distance is applied for a remote sensing application. For example, the Euclidean distance between the $n$th and $n'$th series in a time window $[\\tau,\\tau+a]$ is calculated by \n\\[d \\Big({\\bf X}_{n}(t),{\\bf X}_{n'}(t),[\\tau,\\tau+a] \\Big) = \n\\sqrt{ \\displaystyle \\sum_{t=\\tau}^{\\tau+a} \\big\\| {\\bf X}_{n}(t)-{\\bf X}_{n'}(t) \\big\\|^2 }\n \\]\nA GSP over interval $[\\tau,\\tau+a]$ is then characterised by the entropy score  \n\\[\\displaystyle H({\\bf G}_{train}) = -\\frac{1}{N}\\sum_{n=1}^{N'} \\ln \\Big(  \\frac{1}{N}\\sum_{n'=1}^{N'}  \\exp\\big[-d( {\\bf X}_{n}(t),{\\bf X}_{n'}(t),[\\tau,\\tau+a])\\,\\big ] \\Big)  \\]\nwhere $d({\\bf X}_{n}(t),{\\bf X}_{n'}(t),[\\tau,\\tau+a])$ is a measure of dissimilarity between time series ${\\bf X}_{n}(t)$ and ${\\bf X}_{n'}(t)$ for $t \\in [\\tau,\\tau+a]$. \n%\n\\item Characterisation function $f_2({\\bf G}_{test})$: \\\\\nThe function $f_2$ captures the behaviour of a GSP over a test period of group observations. Group formation and group disbanding are respectively detected by examining time windows before and after the given training interval.   A group of correlated time series over training interval $[\\tau,\\tau+a]$ is compared with the test set % of group observations  on test periods \n   \\[ \\displaystyle {\\bf G}_{test} = \\left\\{ \\begin{array}{ll} \\{ {\\bf G}(t) \\}_{t \\in [\\tau-a,\\tau] } &  \\Longleftrightarrow \\;  \\mbox{Group formation} \\\\ \n  \\{ {\\bf G}(t) \\}_{t \\in [\\tau+a,\\tau+2a] } &  \\Longleftrightarrow \\; \n  \\mbox{Group disbanding} \\end{array} \\right. \\] \n%where $a$ is the window size of training or test sets.\n For example, a group formation is analysed using a test characterisation function over $[\\tau-a,\\tau]$ with\n\\[ \\displaystyle  H({\\bf G}_{test})    = -\\frac{1}{N}\\sum_{n=1}^{N'} \\ln \\Big(  \\frac{1}{N}\\sum_{n'=1}^{N'}  \\exp\\big[-d({\\bf X}_{n}(t),{\\bf X}_{n'}(t),[\\tau-a,\\tau])\\,\\big ] \\Big)  \\]  Similarly, group disbanding is detected by examining the test period $[\\tau+a,\\tau+2a]$.  \n\n% The training set is characterised by  $ H({\\bf G}_{train}) $.  \n\\end{enumerate}\n\\begin{enumerate}[3.] \n \\item Measure $ \\mathcal{D}\\big(f_1 ({\\bf G}_{train}) , f_2({\\bf G}_{test} )\\big )$: \\\\\n A group change is quantified by the relative difference in the characterised behaviour of training and test intervals with  \n\\[ \\displaystyle \\frac{ H({\\bf G}_{test})}{ H({\\bf G}_{train}) }   \\] \n  where values close to $1$ indicate no significant group deviation in a GSP.   \n\\end{enumerate}\n\\begin{enumerate}[4.]\n\\item Threshold $\\epsilon= th$: \\\\ \nA threshold $th$ is selected by domain experts in Chen et al. \\cite{GLETS} to identify test periods that are considerably different from a training set.   A significant temporal change in a GSP is detected when the relative change score is greater than a suitable threshold $th$. \n\\end{enumerate}\n%Poor results are obtained when a chosen threshold is not suitable for a particular dataset.\n\n\n\n\\section{ Generative: Dynamic Group Latent Anomaly Detection (DGLAD) }\nYu et al. \\cite{GLAD} extend the static GLAD model (described in subsection \\ref{subs:GLAD}) to the dynamic setting in a method called Dynamic-GLAD or DGLAD. The DGLAD model provides a solution for both GAD and GCD problems by handling multiple group stochastic processes. 
When group memberships are not previously known, DGLAD aggregates data instances based on similarity of features as well as pairwise connections.   In Yu et al. \\cite{GLAD}, DGLAD detects significant changes in a GSP with a lower false positive rate than other compared methods.   \n A drawback of the DGLAD model is the requirement for observations recorded over a period of time, which are not readily available, especially for network data. \n\nIn DGLAD, structural values such as the expected number of topics and the number of groups are initially selected by information criteria as previously described for generative models in Section \\ref{Sec:G}. To alleviate the computational burden, DGLAD assumes that structural values remain constant throughout the time period of interest. This may be violated in practice as the emergence and dissolution of topics or groups naturally occur over time.  It is also difficult to interpret inferred groups as DGLAD allows individuals to switch groups over time.  Thus an appropriate selection of initial structural values and a careful interpretation are recommended when applying the DGLAD model.\n\nDGLAD is explained using four key components as follows.  %introduced in Section \\ref{Sec:Problem}. \n\\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})= \\theta_m(t-1)$: \\\\ \nStatistical properties of inferred groups are characterised by topic mixtures.  Latent topic variables are inferred for each time step where the $m$th group at time $t-1$ is characterised by topic mixtures ${\\theta}_{m}(t-1)$. \n\\end{enumerate}\n\n \n\n  \n\\begin{enumerate}[2.]\n\\item Characterisation function $f_2({\\bf G}_{test})=\\theta_m(t)$: \\\\ \n%The $m$th group at a specific time point is characterised by the topic proportions $\\theta_m(t)$. \nTo determine if a significant temporal change occurs from time step $t-1$ to $t$, the $m$th group at time $t$ is characterised by topic mixtures ${\\theta}_{m}(t)$.  \n  Since multiple time steps are analysed, topic mixtures %$\\{ \\boldsymbol\\theta(1),\\dots,  \\boldsymbol\\theta(T) \\} $\n are inferred using an additional particle filtering technique as discussed in Doucet and Johansen \\cite{doucet2009tutorial}. This particle filter utilises sequential importance sampling where the transition for topic mixtures over consecutive time steps follows a multivariate Gaussian distribution with  \n\\begin{equation}\n\\boldsymbol\\theta(t) \\,| \\, \\boldsymbol\\theta(t-1) \\sim \\mathcal{N} \\Big(\\boldsymbol\\theta(t-1), \\sigma^2 {\\bf I} \\Big)  \\label{Eqn:Transition}\n\\end{equation}\nwhere $\\sigma^2$ is the variance of topic mixtures over groups at the previous time step.  Since the variance-covariance matrix in Equation (\\ref{Eqn:Transition}) has off-diagonals with zero entries, any two different groups conditioned on previous observations are assumed to be independent. \n\\end{enumerate}\n \n\\begin{enumerate}[3.] \n\\item Measure $ \\mathcal{D}\\big(f_1({\\bf G}_{train}), f_2({\\bf G}_{test})\\big )= || \\theta_m(t) -\\theta_m(t-1) ||$: \\\\\nInstead of examining likelihood scores, a difference between topic mixtures at consecutive time steps is computed to detect significant group deviations.   A measure of temporal change in the $m$th group transitioning from time $t-1$ to $t$ is given by \n\\[ || \\theta_m(t) -\\theta_m(t-1) ||  \\] \nwhere $|| \\cdot ||$ is Euclidean distance. 
\n\\end{enumerate}\n \n\n\\begin{enumerate}[4.]\n\\item   Threshold $\\epsilon $ is selected by domain experts: \\\\  \nSimilar to other generative models, a threshold is arbitrarily selected based on a small proportion of groups with the highest scores.   \n Larger differences between topic mixtures at consecutive time steps indicate a relatively greater temporal change in a GSP.   \n\\end{enumerate} \n\n\n\n\n\\section{Hypothesis Test: ``What's Strange About Recent Events'' (WSARE)}\n Wong et al. \\cite{WSARE} propose WSARE, an unsupervised method specifically formulated for analysing an emergency department database.    \nNumerical and categorical features (such as age and gender) are recorded for each patient admitted to the emergency department. Rules are established from combinations of features to uncover anomalous patterns in different demographic groups.  A significant temporal change in a demographic group represents a potential disease outbreak, where deviating patterns are characterised by higher proportions of daily counts for a demographic group as compared to expected (historical) proportions.  %Solely examining daily counts is not effectively in detecting an outbreak for specific diseases. \n\n\n\\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})=O_{past}$: \\\\  \nFirstly, a demographic group is identified based on particular rules consisting of one or two features. \n%The counts of a demographic group at the current date is compared to past observations  $O_{past}$. \n The regular behaviour is inferred from a training interval over five to eight weeks prior to the current date. The aggregated historical count is denoted by $O_{past}$. WSARE evaluates hypotheses based on \n\\begin{align}\n&\\mbox{ $H_0:$ Date and rules are independent   \\, versus \\,  $H_1:$ Date and  rules are not independent  }\n\\label{Hyp:WSARE} \n\\end{align}\n\\end{enumerate}\n\\begin{enumerate}[2.]\n\\item Characterisation function $f_2({\\bf G}_{test})=O_{today}$: \\\\  The WSARE algorithm detects significant temporal changes in demographic groups using specific rules. Table  \\ref{Ex:WSARE} provides an example of a single-feature rule, $\\mathtt{Gender} = \\mathtt{Male}$, and a two-feature rule, $\\mathtt{Gender} = \\mathtt{Male}$ \\& $\\mathtt{Age} > 70$. \n%  WSARE  initially searches for rules with the most significant $p$-values.  \n  Hypothesis tests in WSARE are based on counts of emergency department admissions for a particular demographic group, where an observed count for the current date is denoted by $O_{today}$. Given specific rules, the counts $O_{today}$ characterise the behaviour of a GSP at the current test date. 
\n  \n \n\n\\begin{table}[h]%\n\\hspace{-5mm}\n\\tabcolsep=0.2cm\n\t\\renewcommand{\\arraystretch}{2.4}\n\\begin{subtable}{.5\\linewidth}\n\\centering\n\\scalebox{0.88}{\n\\begin{tabular}{|c|c|c|}\n\\hline \n  Rule   &    $O_{today}$ & $O_{past}$\n  \\\\ \\hline %\\\\[-2mm] \n $\\mathtt{Gender} = \\mathtt{Male}$ & \n 66 & 503  \\\\ \\hline% \\\\%[-2mm]   \n  $\\mathtt{Gender} = \\mathtt{Female}$ & \n  34 & 497  \\\\%[2mm]   \n\\hline\n \\end{tabular}\n }\n%\\end{center}\n \\smallskip\n\\caption{ A single-feature rule.}\n   \\label{Ex:WSARE1}\n\\end{subtable}\n\t\\renewcommand{\\arraystretch}{2.44}\n\t\\begin{subtable}{.5\\linewidth}\n\t\\centering\n\\scalebox{0.88}{\n\\begin{tabular}{|c|c|c|}\n\\hline%\\\\[-2mm]\n  Rule   &    $O_{today}$ & $O_{past}$\n  \\\\ \\hline%\\\\%[-2mm] \n $\\mathtt{Gender}   = \\mathtt{Male } \\; \\&  \\;  \\mathtt{Age} > 70 $ & \n 27 & 129   \\\\ \\hline %[-2mm]   \n  $\\mathtt{Gender} = \\mathtt{Female} \\;\\& \\; \\mathtt{Age} > 70 $ & \n  3 & 71  \\\\%[2mm]   \n\\hline\n \\end{tabular}\n }\n \\smallskip\n\\caption{ A two-feature rule.}\n  \\label{Ex:WSARE2}\n  \\end{subtable}\n  %\\vspace{-10mm}\n  \\caption{Examples of counts observed for demographic groups based on single- or two-feature rules.  }\n  \\label{Ex:WSARE}\n\\end{table}  \n \n\\end{enumerate}\n\\begin{enumerate}[3.] \n\\item Measure $ \\mathcal{D}\\big(f_1 ({\\bf G}_{train}) , f_2({\\bf G}_{test} )\\big )$: \\\\\n%To evaluate the hypothesis   (\\ref{Hyp:WSARE}), \nWSARE utilises Fisher's exact test, where the counts under the null hypothesis follow a hypergeometric distribution with parameters $a,b,c,d$, \n\\begin{align}\n P(a,b,c,d) = \\displaystyle \\frac{ {a+b \\choose{a} } { c+d \\choose{c} } } { {n' \\choose{a+c} } } \\label{Eqn:hyperG}\n\\end{align}\nwhere the total count is $n' = a+b+c+d$. \n A $p$-value measures how plausible the observed counts are under independence of date and rule for the demographic group of interest, such that extremely low $p$-values indicate that the current counts are significantly different from previous observations. Using the hypergeometric probability in Equation (\\ref{Eqn:hyperG}), the examples in Table \\ref{Ex:WSARE} (a) and (b) are associated with significant $p$-values respectively calculated as    \\[P(66,503,34,497) \\approx  0.0032 \\mbox{\\, and \\,}\nP(27,129,3,71) \\approx  0.0056\n \\]\n  A Chi-square test for independence can also be applied to Table  \\ref{Ex:WSARE} (a) with a resulting $p$-value of 0.00093. \nHowever the example in Table  \\ref{Ex:WSARE} (b) does not satisfy the condition that all counts are $> 5$ for a Chi-square test, so Fisher's exact test is more appropriate.  \n \n\nWSARE considers all possible rules involving a single feature and then determines if an additional component is significant. In Table \\ref{Ex:WSARE}, WSARE initially detects a significant temporal change for %in the demographic group \n $\\mathtt{Gender} = \\mathtt{Male}$ but also detects a change in the group with an additional component $\\mathtt{Gender}   = \\mathtt{Male } \\; \\&  \\;  \\mathtt{Age} > 70 $. Since thousands of hypothesis tests are conducted, %the emergency department database contains thousands of rules, \n   $p$-values are adjusted using a randomisation test based on bootstrap sampling of dates in past records. % A null distribution is generated in a similar way to the bootstrap sampling  approach in ATD from Section \\ref{Sec:ATD}. \n   The $p$-value for observed counts and the $b$th bootstrap sample are respectively denoted by $\\tilde{p}$ and $\\tilde{p}_b$. 
A compensated $p$-value for the $i$th rule is calculated by    \n \\begin{equation}\n  p_i^* = \\displaystyle  \\frac{1}{B} \\sum_{b=1}^ B I ( \\tilde{p}_b > \\tilde{p} )  \n  \\label{Eqn:CompensatedP}\n\\end{equation}    \n where $B$ is the number of bootstrap simulations for the randomisation test.  This reduces potential overfitting with fewer false discoveries based on estimated $p$-values. \n\\end{enumerate}\n\\begin{enumerate}[4.]\n\\item Threshold $\\epsilon=\\alpha_{FDR}$: \\\\ \nFor a specific current date, the compensated $p$-value from Equation (\\ref{Eqn:CompensatedP}) is evaluated at a significance level $\\alpha$.  \nWhen monitoring demographic groups over multiple time steps, the threshold is adjusted to control for the false discovery rate (FDR). A threshold $\\alpha_{FDR}$ is calculated by the FDR procedure in Benjamini and Hochberg \\cite{MultiTest}. Thus a demographic group experiences a significant change if the compensated $p$-value is less than the adjusted significance level $\\alpha_{FDR}$. \n\\end{enumerate}\n%A demographic group has a significantly higher count compared to past events if the null hypothesis is rejected at a predetermined significance level. \n\n\\section{Hypothesis Test: Multiple Sensor Sequential Detection (MSSD)}\nWe refer to the method proposed by Xie and Siegmund \\cite{xie2013} as Multiple Sensor Sequential Detection (MSSD). MSSD is a sequential technique for detecting change-points in a proportion of sensors in a GSP. MSSD is formulated for continuous real-valued input data and examines the statistical properties of groups based on location. Since the true times of significant changes in a GSP are unknown, hypotheses are evaluated at each time step. The goal is to detect group changes as soon as possible while also reducing the number of false positives. \n\n\n\\begin{enumerate}[1.]\n\\item Characterisation function $f_1({\\bf G}_{train})=\\{\\mu_n(T)\\}_{n=1}^{N'}$: \\\\ \nObservations from each stochastic process in a GSP are assumed to be mutually independent and normally distributed with zero mean and unit variance before a change-point occurs. MSSD then tests whether there is a significant change in the mean of the $n$th series at time $\\tau$, \n\\[ H_0: \\mu_n(\\tau)= 0  \\mbox{ for all } n=1,2,\\dots, N' \\mbox{ \\, versus \\,  } H_1: \\mu_n(\\tau) > 0 \\mbox{ for } n \\in \\mathsf{S}  \\]  \n where $\\mathsf{S}$ denotes the set of time series affected by a change-point at time $\\tau$.  \n%  The function $f_1$ characterises \n   Group observations ${\\bf G}_{train} =\\{ {\\bf G} ({t}) \\}_{t=1}^T$ over the training interval $[1,T] $ are characterised by the set of means $\\{\\mu_n(T)\\}_{n=1}^{N'}$ where   \n% A training sequence is then characterised by the where     \nthe $n$th mean is estimated by  \n$   \\mu_n(T) =\\frac{1}{T} \\sum_{t=1}^T   X_{n} (t) $. \n %A cumulative statistic for the $n$th series is calculated by   \\[ R_{T n} = T\\mu_n(T) %\\sum_{t=1}^T   X_{t n}\n%  \\]   \n% for $n=1,\\dots, N$\n% The statistical property  of the GSP over all of the observed stochastic processes is characterised by $\\{ R_{T n} \\}_{n=1}^N$.\n\\end{enumerate}\n%\n\\begin{enumerate}[2.]\n\\item Characterisation function $f_2({\\bf G}_{test}) = \\{\\mu_n(\\tau) \\}_{n=1}^{N'}$: \\\\ \nSince the true times of significant change are usually unknown, a variety of time values $\\tau$ are analysed. 
In a sequential fashion, the test set ${\\bf G}_{test} =\\{ {\\bf G} ({t}) \\}_{t=1}^\\tau$ is examined for time steps $\\tau=1,2,\\dots,T$. The function $f_2$ characterises the behaviour of a GSP over $[1,\\tau]$ by the set of means $\\{\\mu_n(\\tau)\\}_{n=1}^{N'}$ where the $n$th mean is  \n$  \\mu_n(\\tau) =\\frac{1}{\\tau} \\sum_{t=1}^\\tau   X_{n}(t) $. \n% The GSP over a test period is characterised by $\\{ R_{\\tau n} \\}_{n=1}^N$.\n\\end{enumerate}\n%\n\\begin{enumerate}[3.] \n\\item Measure $ \\mathcal{D}\\big(f_1 ({\\bf G}_{train}) , f_2({\\bf G}_{test} )\\big )$: \\\\\nXie and Siegmund \\cite{xie2013} propose multiple ways to measure a significant group deviation at time $\\tau$. %MMSD compares  two sets of observations in a similar way to CUSUM approachs that are explained in  Polunchenko  and Tartakovsky \\cite{Polunchenko2012}. \nOne particular metric $ (T-\\tau)^{-1/2}  \\big( T \\mu_n(T) - \\tau \\mu_n(\\tau) \\big) $ directly measures the change in the mean value of the $n$th time series between the training period $[1,T]$ and the test interval $[1,\\tau]$. \n% that compares means of training and test period for the $n$th time series %$\\{\\mu_n(T) \\}_{n=1}^N$ and  test set $\\{ \\mu_n(\\tau) \\}_{n=1}^N$ \nIt is assumed that a significant change occurs in a proportion $p_0 = |\\mathsf{S}|/N' \\in (0,1]$ of time series in a GSP.   \n To incorporate this proportion, MSSD calculates a global log-likelihood for measuring a group deviation in a GSP with\n\\begin{align}\n  Z_{\\tau,T}=\\sum_{n=1}^{N'} g\\Big( \\frac{   T \\mu_n(T) - \\tau \\mu_n(\\tau)  } {(T-\\tau)^{1/2}}\n \\, , \\, p_0  \\Big) \\label{Eqn:Z} %\\\\ & \\mbox{ \\nonumber \n\\end{align}\nwhere  \n$ g(u,p_0)= \\ln \\big(1-p_0+p_0 \\exp[(u^+)^2/2] \\,\\big)$ and $u^+=\\max(0,u)$. The initial selection of the proportion $p_0$ also influences subsequent results.\nIf a significant change is known to only affect a small number of stochastic processes in a GSP then, instead of the measure in Equation (\\ref{Eqn:Z}), a maximum statistic is more appropriate with \n\\[ \\Big( (T-\\tau)^{-1/2}  \\max_{1 \\le n \\le N'} \n\\big( T \\mu_n(T) - \\tau \\mu_n(\\tau) \\big)^+ \\Big)^2/2  \n\\]\n%We are also able to consider extreme behaviours of the proportion of stochastic processes in a GSP. \n% % the probability function in Equation (\\ref{Eqn:Z}) reduces to $ g(u,p_0)=  (u^+)^2/2 $. On the other hand if an identical change occurs in a large proportion of sensors with $p_0 \\approx 1  $ then $ g(u,p_0)=  (u^+)^2/2 $.\n\\end{enumerate}\n\\begin{enumerate}[4.]\n\\item Threshold $\\epsilon = b $: \\\\ \n% a significant group deviation  in a GSP\nFirstly, the average run length is the expected duration before a false detection when no real change occurs, whereas the expected detection delay is the time lag before a true change-point is identified. \nMSSD calculates a threshold by minimising the expected detection delay whilst constraining the average run length.  The threshold value $b$ is selected to satisfy optimisation criteria as further described in Xie and Siegmund \\cite{xie2013}. Thus a significant change at time $\\tau $ occurs in a proportion $p_0$ of time series in a GSP if the test statistic from Equation (\\ref{Eqn:Z}) is greater than an estimated threshold with $Z_{\\tau,T}>b$. \n%A threshold is selected in order to identify test groups that are considerably different from the behaviours from a training set. 
\n%$E [Z_{0,T}]$  \n\\end{enumerate}\n \n ", "meta": {"hexsha": "c8870697618ed612804c6df5eeabc407bd88eecf", "size": 22778, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ARXIV_DAD_Survey/sections/Comparison2.tex", "max_stars_repo_name": "raghavchalapathy/Deep-Learning-for-Anomaly-Detection-A-Survey", "max_stars_repo_head_hexsha": "aa775990a4b23306885979c4ef8e8cb3ed00441b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 107, "max_stars_repo_stars_event_min_datetime": "2019-01-11T12:06:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T12:03:57.000Z", "max_issues_repo_path": "ARXIV_DAD_Survey/sections/Comparison2.tex", "max_issues_repo_name": "raghavchalapathy/Deep-Learning-for-Anomaly-Detection-A-Survey_Arxiv_WorkingDocument", "max_issues_repo_head_hexsha": "aa775990a4b23306885979c4ef8e8cb3ed00441b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ARXIV_DAD_Survey/sections/Comparison2.tex", "max_forks_repo_name": "raghavchalapathy/Deep-Learning-for-Anomaly-Detection-A-Survey_Arxiv_WorkingDocument", "max_forks_repo_head_hexsha": "aa775990a4b23306885979c4ef8e8cb3ed00441b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 27, "max_forks_repo_forks_event_min_datetime": "2019-01-15T02:42:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-06T07:59:29.000Z", "avg_line_length": 84.0516605166, "max_line_length": 690, "alphanum_fraction": 0.7174466591, "num_tokens": 6546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956581097540519, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5537448853637129}}
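The GLETS entropy score from the start of this chapter is easy to compute numerically. The sketch below is my own illustration, using Euclidean distance as the dissimilarity $d$ and normalising by the number of series $N'$ (which may differ from the paper's normalisation); it shows that a cohesive window scores lower than a scattered one:

\begin{verbatim}
import numpy as np

def glets_entropy(G):
    # G has shape (N', T): one univariate series per row of the window.
    diff = G[:, None, :] - G[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))   # pairwise distances, (N', N')
    inner = np.exp(-d).mean(axis=1)        # (1/N') sum_{n'} exp(-d)
    return -np.log(inner).mean()           # -(1/N') sum_n ln(...)

rng = np.random.default_rng(0)
base = rng.normal(size=50)
cohesive = np.tile(base, (5, 1)) + 0.1 * rng.normal(size=(5, 50))
scattered = rng.normal(size=(5, 50))
print(glets_entropy(cohesive) < glets_entropy(scattered))   # True
\end{verbatim}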
{"text": "\\section{DAG}\n\n%%%%%%%%%%%%%%%\n\\begin{frame}{DAG}\n  \\begin{center}\n    no back edge $\\iff$ DAG $\\iff$ $\\exists$ topo. ordering\n  \\end{center}\n\n  \\begin{block}{Topo. sorting algorithm by Tarjan(probably), 1976}\n    DFS on digraph, $u \\to v$:\n    \\begin{itemize}\n      \\item \\textcolor{red}{no} back edge: $\\text{f}[u] < \\text{f}[v]$\n      \\item others: $\\text{f}[u] > \\text{f}[v]$\n    \\end{itemize}\n\n    \\[ u \\to v \\Rightarrow \\text{f}[u] > \\text{f}[v] \\]\n    \\[ u \\to v \\Rightarrow u \\prec v \\] \n\n    \\begin{center}\n      Topo. sorting: sort vertices in \\textcolor{blue}{\\emph{decreasing}} order of their \\textcolor{blue}{\\emph{finish}} times.\n    \\end{center}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Digraph as DAG}\n  \\begin{exampleblock}{Digraph as DAG \\pno{3.4.6}}\n    \\begin{theorem}\n      Every digraph is a dag of its SCCs.\n    \\end{theorem}\n  \\end{exampleblock}\n\n  \\begin{alertblock}{Remark.}\n    \\begin{itemize}\n      \\item SCC algorithm\n      \\item SCC: reachability/connectivity equivalence class\n      \\item two tiered structure of digraphs\n    \\end{itemize}\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Kahn's toposort algorithm}\n  \\begin{exampleblock}{Kahn's toposort algorithm (1962) \\pno{3.4.19}}\n    \\begin{itemize}\n      \\item queue for source vertices ($\\text{in}[v] = 0$)\n      \\item repeat: dequeue $v$, delete it, output it\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{lemma}\n      Every DAG has at least one source and at least one sink vertex.\n    \\end{lemma}\n  \\end{block}\n\n  \\begin{alertblock}{Remark}\n    DFS on DAG:\n    \\begin{itemize}\n      \\item $\\argmax_{v} f(v)$ $\\Rightarrow$ source (used in SCC algorithm)\n      \\item $\\argmin_{v} f(v)$ $\\Rightarrow$ sink\n    \\end{itemize}\n  \\end{alertblock}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Hamiltonian path in DAG}\n  \\begin{exampleblock}{Hamiltonian path in DAG \\pno{3.4.16}}\n    \\begin{itemize}\n      \\item DAG $G$\n      \\item path visiting each vertex once\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item general digraph: NP-hard\n      \\item dag: $\\exists$ HP $\\iff$ $\\exists!$ topo. ordering\n\t\\begin{itemize}\n\t  \\item $\\Leftarrow$: By contridiction. $\\exists u \\sim v: u \\nrightarrow v$; swap\n\t\\end{itemize}\n    \\end{itemize}\n\n    Algorithm:\n    \\begin{itemize}\n      \\item toposort, check edges\n      \\item the Kahn toposort algorithm\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Semi-connected DAG}\n  \\begin{definition}{Semi-connected digraph}\n    $\\forall u,v: u \\leadsto v \\lor v \\leadsto u$\n  \\end{definition}\n\n  \\begin{exampleblock}{Semi-connected DAG \\pno{3.4.21 (c) + (d)}}\n    To test whether a DAG is semi-connected.\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{center}\n      dag: $\\exists$ HP $\\iff$ $\\exists!$ topo. 
ordering $\\iff$ semi-connected\n    \\end{center}\n\n    \\begin{proof}\n      \\begin{itemize}\n\t\\item $\\Leftarrow$: by contradiction; total order ($\\forall u,v: u \\prec v \\lor v \\prec u$)\n\t\\item $\\Rightarrow$: $\\exists$ HP\n      \\end{itemize}\n    \\end{proof}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Minimum cost reachable}\n  \\begin{exampleblock}{Minimum cost reachable \\pno{3.4.22}}\n    Compute $\\text{cost}[u] = \\min \\set{\\text{cost}[v] \\mid u \\leadsto v}$.\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item dag: reverse topo. ordering\n\t\\begin{itemize}\n\t  \\item backtracking: $\\text{cost}[u] = \\min_{u \\to v} \\set{\\text{cost}[v]}$\n\t\\end{itemize}\n      \\item digraph: dag of SCCs\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{Line up}\n  \\begin{exampleblock}{Line up \\pno{3.4.29}}\n    \\begin{itemize}\n      \\item $i$ hates $j$: $i \\prec j$\n      \\item $i$ hates $j$: $\\# i < \\# j$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    $i$ hates $j$: $i \\to j$;\n    \\begin{itemize}\n      \\item DAG?\n      \\item longest path; critical path\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n\\begin{frame}{One-to-all reachability}\n  \\begin{exampleblock}{One-to-all reachability \\pno{3.4.28}}\n    \\begin{itemize}\n      \\item given $v: v \\leadsto^{?} \\forall u$\n      \\item $\\exists? v: v \\leadsto \\forall u$\n    \\end{itemize}\n  \\end{exampleblock}\n\n  \\begin{block}{Solution.}\n    \\begin{itemize}\n      \\item DFS/BFS\n      \\item SCC; $\\exists!$ source vertex $v$ $\\iff$ $v \\leadsto \\forall u$\n\t\\begin{proof}\n\t  \\begin{itemize}\n\t    \\item $\\Rightarrow$: By contradiction. $\\exists u: v \\nrightarrow u \\land \\text{in}[u] > 0 \\Rightarrow \\exists u' \\to u \\land v \\nrightarrow u'$. Cycle. 
\n\t    \\item $\\Leftarrow$: (1) source (2) $\\exists !$\n\t  \\end{itemize}\n\t\\end{proof}\n    \\end{itemize}\n  \\end{block}\n\\end{frame}\n%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "0b644d37e7449d2da9c15fcb3da5f91a099c89f5", "size": 4786, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/dag.tex", "max_stars_repo_name": "hengxin/algorithm-ta-tutorial", "max_stars_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 45, "max_stars_repo_stars_event_min_datetime": "2017-03-29T08:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T15:12:15.000Z", "max_issues_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/dag.tex", "max_issues_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_issues_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alg-ta-by-years/alg-ta-2016/algorithm-tutorial-graph-decomposition-2016-05-19/sections/dag.tex", "max_forks_repo_name": "courses-at-nju-by-hfwei/algorithm-ta-tutorial", "max_forks_repo_head_hexsha": "0bb0376d96f388671597903fc833f68d7946020e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-10-10T08:47:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-18T07:58:43.000Z", "avg_line_length": 28.6586826347, "max_line_length": 158, "alphanum_fraction": 0.6076055161, "num_tokens": 1619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583250334526, "lm_q2_score": 0.7956581000631541, "lm_q1q2_score": 0.5537448786192519}}
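As a concrete companion to the Kahn's-toposort slides above, here is a minimal Python sketch of the algorithm (queue of in-degree-zero sources; dequeue, delete, output); the function and variable names are illustrative, not taken from the course materials:

\begin{verbatim}
from collections import deque

def kahn_toposort(n, edges):
    """Kahn (1962): repeatedly remove and output source vertices."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)  # all sources
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:          # "delete" v: drop its out-edges
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) < n:            # leftover vertices lie on a cycle
        raise ValueError("input digraph is not a DAG")
    return order
\end{verbatim}

The same routine doubles as the Hamiltonian-path test from the slides: a DAG has a Hamiltonian path iff the queue never holds more than one source, i.e. iff the topological ordering is unique.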
{"text": "\\section{Air Velocity}\nDid you ever feel the wind blow? Most probably. That's what we will be calculating here. How hard the wind will blow. This is noted as velocity, how fast something moves. \n\n\\subsection{Equation of State and the Incompressible Atmosphere}\nThe equation of state relates one or more variables in a dynamical system (like the atmosphere) to another. The most common equation of state in the atmosphere is the ideal gas equation as \ndescribed by \\autoref{eq:ideal gas} \\cite{idealGas}. The symbols in that equation represent:\n\n\\begin{itemize}\n    \\item $p$: The gas pressure (\\si{Pa}).\n    \\item $V$: The volume of the gas (\\si{m^3}).\n    \\item $n$: The amount of moles in the gas (\\si{mol}).\n    \\item $R$: The Gas constant as defined in \\autoref{sec:gas constant} (\\si{JK^{-1}mol^{-1}}) \\cite{idealGas}.\n    \\item $T$: The temperature opf the gas ($K$).\n\\end{itemize}\n\nIf we divide everything in \\autoref{eq:ideal gas} by $V$ and set it to be unit (in this case, set it to be exactly $1$ \\si{m^3}) we can add in the molar mass in both the top and bottom parts of \nthe division as show in \\autoref{eq:gas unit}. We can then replace $\\frac{nm}{V}$ by $\\rho$ the density of the gas (\\si{kgm^{-3}}) and $\\frac{R}{m}$ by $R_s$ the specific gas constant (gas \nconstant that varies per gas in \\si{JK^{-1}mol^{-1}}) as shown in \\autoref{eq:state gas}. The resulting equation is the equation of state that you get that most atmospheric physicists use when \ntalking about the atmosphere \\cite{simon}.\n\n\\begin{subequations}\n    \\begin{equation}\n        pV = nRT\n        \\label{eq:ideal gas}\n    \\end{equation}\n    \\begin{equation}\n        p = \\frac{nR}{V}T = \\frac{nmR}{Vm}T\n        \\label{eq:gas unit}\n    \\end{equation}\n    \\begin{equation}\n        p = \\rho R_sT\n        \\label{eq:state gas}\n    \\end{equation}\n\\end{subequations}\n\nThe pressure is quite important, as air moves from a high pressure point to a low pressure point. So if we know the density and the temperature, then we know the pressure and we can work out \nwhere the air will be moving to (i.e. how the wind will blow). In our current model, we know the atmospheric temperature but we do not know the density. For simplicities sake, we will now assume\nthat the atmosphere is Incompressible, meaning that we have a constant density. Obviously we know that air can be compressed and hence our atmosphere can be compressed too but that is not \nimportant enough to account for yet, especially considering the current complexity of our model.\n\nThe code that corresponds to this is quite simple, the only change that we need to make in \\autoref{eq:state gas} is that we need to replace $T$ by $T_a$, the temperature of the atmosphere. As\n$T_a$ is a matrix (known to programmers as a double array), $p$ will be a matrix as well. Now we only need to fill in some values. $\\rho = 1.2$\\cite{densityAir}, $R_s = 287$\\cite{specificGasConstantAir}.\n\n\\subsection{The Momentum Equations} \\label{sec:momentum}\nThe momentum equations are a set of equations that describe the flow of a fluid on the surface of a rotating body. For our model we will use the f-plane approximation. The equations corresponding\nto the f-plane approximation are given in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} \\cite{momentumeqs}. Note that we are ignoring vertical movement, as this does not have a significant\neffect on the whole flow. 
All the symbols in \\autoref{eq:x momentum} and \\autoref{eq:y momentum} mean:\n\n\\begin{itemize}\n    \\item $u$: The east to west velocity (\\si{ms^{-1}}).\n    \\item $t$: The time (\\si{s}).\n    \\item $f$: The Coriolis parameter as in \\autoref{eq:coriolis}.\n    \\item $v$: The north to south velocity (\\si{ms^{-1}}).\n    \\item $\\rho$: The density of the atmosphere (\\si{kgm^{-3}}).\n    \\item $p$: The atmospheric pressure (\\si{Pa}).\n    \\item $x$: The local longitude coordinate (\\si{m}).\n    \\item $y$: The local latitude coordinate (\\si{m}).\n\\end{itemize}\n\nIf we then define a vector $\\bar{u}$ as $(u, v, 0)$, we can rewrite \\autoref{eq:x momentum} as \\autoref{eq:x momentum laplace}. Here $\\nabla u$ is the gradient of $u$ in both $x$ and $y$ \ndirections. Then, if we write out $\\nabla u$, we get \\autoref{eq:x momentum final}. Similarly, if we want to get $\\delta v$ instead of $\\delta u$, we rewrite \\autoref{eq:y momentum} to get \n\\autoref{eq:y momentum laplace} and \\autoref{eq:y momentum final}.\n\n\\begin{subequations}\n    \\begin{equation}\n        \\frac{Du}{Dt} - fv = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{Dv}{Dt} + fu = -\\frac{1}{\\rho} \\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta u}{\\delta t} + \\bar{u} \\cdot \\nabla u - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum laplace}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta v}{\\delta t} + \\bar{u} \\cdot \\nabla v + fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum laplace}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta u}{\\delta t} + u\\frac{\\delta u}{\\delta x} + v\\frac{\\delta u}{\\delta y} - fv = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta x}\n        \\label{eq:x momentum final}\n    \\end{equation}\n    \\begin{equation}\n        \\frac{\\delta v}{\\delta t} + u\\frac{\\delta v}{\\delta x} + v\\frac{\\delta v}{\\delta y} + fu = -\\frac{1}{\\rho}\\frac{\\delta p}{\\delta y}\n        \\label{eq:y momentum final}\n    \\end{equation}\n\\end{subequations}\n\nWith the gradient functions defined in \\autoref{alg:gradient x} and \\autoref{alg:gradient y}, we can move on to the main code for the momentum equations. The main loop is shown in \n\\autoref{alg:stream3}. 
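The gradient functions themselves are defined in \autoref{alg:gradient x} and \autoref{alg:gradient y}; purely as an assumed illustration of what they can look like (not the document's exact implementation), central-difference versions in NumPy might be:

\begin{verbatim}
import numpy as np

def gradient_x(field, dx):
    """Central difference in longitude, periodic (wraps around the globe)."""
    return (np.roll(field, -1, axis=1) - np.roll(field, 1, axis=1)) / (2.0 * dx)

def gradient_y(field, dy):
    """Central difference in latitude, one-sided at the poles."""
    return np.gradient(field, dy, axis=0)
\end{verbatim}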
Do note that this loop replaces the one in \\autoref{alg:stream2v2}, as these calculate the same thing, but the new algorithm does it better.\n\n\\begin{algorithm}\n    $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon)$ \\;\n    $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon)$ \\;\n    $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon)$ \\;\n    $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon)$ \\;\n    $S_{px} \\leftarrow \\texttt{gradient\\_x}(p, lat, lon)$ \\;\n    $S_{py} \\leftarrow \\texttt{gradient\\_y}(p, lat, lon)$ \\;\n    \\For{$lat \\leftarrow 1$ \\KwTo $nlat - 1$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            $u[lat, lon] \\leftarrow u[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xu} - v[lat, lon]S_{yu} + f[lat]v[lat, lon] - \\frac{S_{px}}{\\rho}\\right)$ \\;\n            $v[lat, lon] \\leftarrow v[lat, lon] + \\delta t \\left(-u[lat, lon]S_{xv} - v[lat, lon]S_{yv} - f[lat]u[lat, lon] - \\frac{S_{py}}{\\rho}\\right)$ \\;\n        }\n    }\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:stream3}\n\\end{algorithm}\n\n\\subsection{Improving the Coriolis Parameter}\nAnother change introduced is in the Coriolis parameter. Up until now it has been a constant; however, we know that it varies with latitude, so let's make it vary over the latitude. Recall \n\\autoref{eq:coriolis}, where $\\Theta$ is the latitude. Coriolis ($f$) is currently defined in \\autoref{alg:gradient}, so let's replace it with \\autoref{alg:coriolis}.\n\n\\begin{algorithm}\n    \\SetAlgoLined\n    $\\Omega \\leftarrow 7.2921 \\cdot 10^{-5}$ \\;\n\n    \\For{$lat \\leftarrow -nlat$ \\KwTo $nlat$}{\n        $f[lat] \\leftarrow 2\\Omega \\sin(lat \\frac{\\pi}{180})$ \\;\n    }\n    \\caption{Calculating the Coriolis parameter}\n    \\label{alg:coriolis}\n\\end{algorithm}\n\n\\subsection{Adding Friction}\nIn order to simulate friction, we multiply the speeds $u$ and $v$ by $0.99$. Of course there are equations for friction, but that gets complicated very fast, so instead we just assume that we\nhave a constant friction factor. This multiplication is done directly after \\autoref{alg:stream3} in \\autoref{alg:stream4v1}.\n\n\\subsection{Adding in Layers}\nWith atmospheric layers added in, we need to add vertical winds, or in other words add the $w$ component of the velocity vectors. We do that by editing \\autoref{alg:stream3}. We change it to \n\\autoref{alg:velocity}. Here we use gravity ($g$) instead of the Coriolis force ($f$) and calculate the change in pressure. Therefore we need to store a copy of the pressure before we do any \ncalculations. This needs to be a copy due to aliasing \\footnote{Aliasing is assigning a different name to a variable, while it remains the same variable. Take for instance that we declare a \nvariable $x$ and set it to be $4$. Then we say $y \\leftarrow x$, which you might think is the same as saying $y \\leftarrow 4$, but behind the scenes $y$ is pointing to $x$. 
So if $x$ changes, \nthen so does $y$.}\n\n\\begin{algorithm}\n    $S_{xu} \\leftarrow \\texttt{gradient\\_x}(u, lat, lon)$ \\;\n    $S_{yu} \\leftarrow \\texttt{gradient\\_y}(u, lat, lon)$ \\;\n    $S_{xv} \\leftarrow \\texttt{gradient\\_x}(v, lat, lon)$ \\;\n    $S_{yv} \\leftarrow \\texttt{gradient\\_y}(v, lat, lon)$ \\;\n    $S_{px} \\leftarrow \\texttt{gradient\\_x}(p, lat, lon)$ \\;\n    $S_{py} \\leftarrow \\texttt{gradient\\_y}(p, lat, lon)$ \\;\n    \\For{$lat \\leftarrow 1$ \\KwTo $nlat - 1$}{\n        \\For{$lon \\leftarrow 0$ \\KwTo $nlon$}{\n            \\For{$layer \\leftarrow 0$ \\KwTo $nlevels$}{\n                $u[lat, lon, layer] \\leftarrow u[lat, lon, layer] + \\delta t \\left(-u[lat, lon, layer]S_{xu} - v[lat, lon, layer]S_{yu} + f[lat]v[lat, lon, layer] - \\frac{S_{px}}{\\rho}\\right)$ \\;\n                $v[lat, lon, layer] \\leftarrow v[lat, lon, layer] + \\delta t \\left(-u[lat, lon, layer]S_{xv} - v[lat, lon, layer]S_{yv} - f[lat]u[lat, lon, layer] - \\frac{S_{py}}{\\rho}\\right)$ \\;\n                $w[lat, lon, layer] \\leftarrow w[lat, lon, layer] - \\frac{p[lat, lon, layer] - p_o[lat, lon, layer]}{\\delta t\\rho[lat, lon, layer]g}$ \\;\n            }\n        }\n    }\n\n    $p_o \\leftarrow copy(p)$ \\;\n    \\caption{Calculating the flow of the atmosphere (wind)}\n    \\label{alg:velocity}\n\\end{algorithm}", "meta": {"hexsha": "4b390e6f11feb00cd1217d0101b425cefe138d7b", "size": 9823, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex-docs/topics/velocity.tex", "max_stars_repo_name": "RolfHut/claude", "max_stars_repo_head_hexsha": "56ecbc1807e2c74e9edfdfcafce539e5ebc0cd33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex-docs/topics/velocity.tex", "max_issues_repo_name": "RolfHut/claude", "max_issues_repo_head_hexsha": "56ecbc1807e2c74e9edfdfcafce539e5ebc0cd33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex-docs/topics/velocity.tex", "max_forks_repo_name": "RolfHut/claude", "max_forks_repo_head_hexsha": "56ecbc1807e2c74e9edfdfcafce539e5ebc0cd33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.5668789809, "max_line_length": 203, "alphanum_fraction": 0.670161865, "num_tokens": 3096, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580903722561, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.553744876892355}}
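For readers who prefer runnable code over pseudocode, here is a minimal, self-contained NumPy sketch of one time step of the horizontal momentum update, including the constant friction factor from the friction subsection; the helper names and grid handling are assumptions, not the project's actual code:

\begin{verbatim}
import numpy as np

def gradient_x(a, dx):  # periodic central difference in longitude
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2.0 * dx)

def gradient_y(a, dy):  # central difference in latitude
    return np.gradient(a, dy, axis=0)

def step_velocity(u, v, p, f, dt, rho, dx, dy):
    """One explicit step of the horizontal momentum equations, vectorised
    over the whole lat-lon grid; f is a 1-D array of Coriolis parameters,
    one per latitude row, broadcast across longitudes."""
    du = (-u * gradient_x(u, dx) - v * gradient_y(u, dy)
          + f[:, None] * v - gradient_x(p, dx) / rho)
    dv = (-u * gradient_x(v, dx) - v * gradient_y(v, dy)
          - f[:, None] * u - gradient_y(p, dy) / rho)
    # apply the constant friction factor after the update
    return 0.99 * (u + dt * du), 0.99 * (v + dt * dv)
\end{verbatim}

Note that only the pressure-gradient term is divided by $\rho$, matching the form of the momentum equations above.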
{"text": "\\chapter{Aerodynamic Characteristics}\n\n\\section{Aerodynamic Characteristics Approximation}\n\n\\subsection{Lift Coefficient Approximation}\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=120mm]{eps/approx_cz.eps}\n  \\caption{Lift coefficient approximation}\n\\end{figure}\n\nFollowing approximation is used to get lift coefficient for the full range of angle of attack. \\cite{NASA-TM-102267}\n\nLift coefficient is given by the following expressions:\n\\begin{multline}\n  C_L =\n  -\n  \\frac{\n    \\left( \\alpha + \\alpha_{L2} \\right)\n    \\left( \\alpha + \\frac{\\pi}{2} \\right) \n  }\n  {\n    \\left( \\alpha_{L1} - \\alpha_{L2} \\right)\n    \\left( \\alpha_{L1} - \\frac{\\pi}{2} \\right) \n  } C_{L1}\n  -\n  \\frac{\n    \\left( \\alpha + \\alpha_{L1} \\right)\n    \\left( \\alpha + \\frac{\\pi}{2} \\right)\n  }\n  {\n    \\left( \\alpha_{L2} - \\alpha_{L1} \\right)\n    \\left( \\alpha_{L2} - \\frac{\\pi}{2} \\right)\n  } C_{L2} \\\\\n  \\mathrm{~for~} -\\frac{\\pi}{2} \\leq \\alpha \\leq - \\alpha_{L1}\n\\end{multline}\n\\begin{align}\n  C_L &=\n  \\frac{ C_{L1} - C_{Ls} }{ \\alpha_{L1} - \\alpha_{Ls} }\n  \\left( \\alpha + \\alpha_{Ls} \\right) - C_{Ls}\n  \\mathrm{~for~} - \\alpha_{L1} < \\alpha \\leq - \\alpha_{Ls} \\\\\n  C_L &= \\frac{C_{Ls}}{\\alpha_{Ls}} \\alpha\n  \\mathrm{~for~} - \\alpha_{Ls} < \\alpha < \\alpha_{Ls} \\\\\n  C_L &=\n  \\frac{ C_{L1} - C_{Ls} }{ \\alpha_{L1} - \\alpha_{Ls} }\n  \\left( \\alpha - \\alpha_{Ls} \\right) + C_{Ls}\n  \\mathrm{~for~} \\alpha_{Ls} \\leq \\alpha < \\alpha_{L1}\n\\end{align}\n\\begin{multline}\n  C_L =\n  \\frac{\n    \\left( \\alpha - \\alpha_{L2} \\right)\n    \\left( \\alpha - \\frac{\\pi}{2} \\right) \n  }\n  {\n    \\left( \\alpha_{L1} - \\alpha_{L2} \\right)\n    \\left( \\alpha_{L1} - \\frac{\\pi}{2} \\right) \n  } C_{L1}\n  +\n  \\frac{\n    \\left( \\alpha - \\alpha_{L1} \\right)\n    \\left( \\alpha - \\frac{\\pi}{2} \\right)\n  }\n  {\n    \\left( \\alpha_{L2} - \\alpha_{L1} \\right)\n    \\left( \\alpha_{L2} - \\frac{\\pi}{2} \\right)\n  } C_{L2} \\\\\n  \\mathrm{~for~} \\alpha_{L1} \\leq \\alpha \\leq \\frac{\\pi}{2}\n\\end{multline}\n\n\\subsection{Drag Coefficient Approximation}\n\nFollowing approximation is used to get drag coefficient for the full range of angle of attack. 
\\cite{NASA-TM-102267} The drag coefficient is assumed to be symmetric.\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[width=120mm]{eps/approx_cx.eps}\n  \\caption{Drag coefficient approximation}\n\\end{figure}\n\nThe drag coefficient is given by the following expressions:\n\\begin{multline}\n  C_D =\n  \\frac{\n    \\left( \\alpha^2 - \\alpha_{D2}^2 \\right)\n    \\left( \\alpha^2 - \\alpha_{D1}^2 \\right)\n  }\n  {\n    \\alpha_{D1}^2 \\alpha_{D2}^2\n  } C_{D0}\n  +\n  \\frac{\n    \\left( \\alpha^2 - \\alpha_{D2}^2 \\right) \\alpha^2\n  }\n  {\n    \\left( \\alpha_{D1}^2 - \\alpha_{D2}^2 \\right) \\alpha_{D1}^2\n  } C_{D1}\n  +\n  \\frac{\n    \\left( \\alpha^2 - \\alpha_{D1}^2 \\right) \\alpha^2\n  }\n  {\n    \\left( \\alpha_{D2}^2 - \\alpha_{D1}^2 \\right) \\alpha_{D2}^2\n  } C_{D2} \\\\\n  \\mathrm{~for~} -\\alpha_{D2} \\leq \\alpha \\leq \\alpha_{D2}\n\\end{multline}\n\\begin{multline}\n  C_D =\n  \\frac{\n    \\left( \\alpha - \\alpha_{D3} \\right)\n    \\left( \\alpha - \\alpha_{D4} \\right)\n    \\left( \\alpha - \\frac{\\pi}{2} \\right)\n  }\n  {\n    \\left( \\alpha_{D2} - \\alpha_{D3} \\right)\n    \\left( \\alpha_{D2} - \\alpha_{D4} \\right)\n    \\left( \\alpha_{D2} - \\frac{\\pi}{2} \\right)\n  } C_{D2}\n  \\\\\n  +\n  \\frac{\n    \\left( \\alpha - \\alpha_{D2} \\right)\n    \\left( \\alpha - \\alpha_{D4} \\right)\n    \\left( \\alpha - \\frac{\\pi}{2} \\right)\n  }\n  {\n    \\left( \\alpha_{D3} - \\alpha_{D2} \\right)\n    \\left( \\alpha_{D3} - \\alpha_{D4} \\right)\n    \\left( \\alpha_{D3} - \\frac{\\pi}{2} \\right)\n  } C_{D3}\n  +\n  \\frac{\n    \\left( \\alpha - \\alpha_{D2} \\right)\n    \\left( \\alpha - \\alpha_{D3} \\right)\n    \\left( \\alpha - \\frac{\\pi}{2} \\right)\n  }\n  {\n    \\left( \\alpha_{D4} - \\alpha_{D2} \\right)\n    \\left( \\alpha_{D4} - \\alpha_{D3} \\right)\n    \\left( \\alpha_{D4} - \\frac{\\pi}{2} \\right)\n  } C_{D4}\n  \\\\\n  +\n  \\frac{\n    \\left( \\alpha - \\alpha_{D2} \\right)\n    \\left( \\alpha - \\alpha_{D3} \\right)\n    \\left( \\alpha - \\alpha_{D4} \\right)\n  }\n  {\n    \\left( \\frac{\\pi}{2} - \\alpha_{D2} \\right)\n    \\left( \\frac{\\pi}{2} - \\alpha_{D3} \\right)\n    \\left( \\frac{\\pi}{2} - \\alpha_{D4} \\right)\n  } C_{D5}\n  \\\\\n  \\mathrm{~for~} -\\frac{\\pi}{2} \\leq \\alpha < -\\alpha_{D2}\n  \\mathrm{~and~} \\alpha_{D2} < \\alpha \\leq \\frac{\\pi}{2}\n\\end{multline}\n\nData available in \\cite{NACA-TN-3361} and \\cite{SheldahiKlimas1981} can be used to approximate aerodynamic characteristics outside the linear range of the lift.\n\n\\section{Finite Wing Aerodynamic Characteristics}\n\nAerodynamic characteristics of the finite wing can be estimated within the linear range of the lift.\n\nThe lift curve slope is given as follows: \\cite{Corke2003}\n\\begin{equation}\n  \\frac{dC_L}{d\\alpha}\n  =\n  \\frac{\n    2\\pi A\n  }\n  {\n    2\n    +\n    \\sqrt{ 4 + A^2 \\left( 1 + \\tan^2 \\Lambda_{t/c} \\right) }\n  }\n\\end{equation}\n\nThe finite wing lift coefficient within the linear range is given by the following formula: \\cite{Corke2003}\n\\begin{equation}\n  C_L = C_{L \\alpha = 0} + \\frac{dC_L}{d \\alpha} \\alpha\n\\end{equation}\n\nThe finite wing maximum lift coefficient is given by: \\cite{Raymer1992}\n\\begin{equation}\n  C_{L,max} = 0.9 C_{L,max \\infty} \\cos \\Lambda_{t/c}\n\\end{equation}\n\nThe finite wing drag coefficient is given as follows. 
\\cite{Corke2003}\n\\begin{equation}\n  C_D = C_{D0} + \\frac{C_L^2}{\\pi A e}\n\\end{equation}\n\n\\section{Horizontal Tail Incidence}\n\nThe equilibrium of moments acting on an aircraft is given by the following equation:\n\\begin{equation}\n  \\label{eq-aero-equilibrium-moments}\n  r_{CG} mg + \\frac{1}{2} \\rho V^2 S \\hat c C_m\n  =\n  l_h \\frac{1}{2} \\rho V^2 S_h C_{L,h}\n\\end{equation}\n\nThe horizontal stabilizer lift coefficient is given as follows:\n\\begin{equation}\n  \\label{eq-aero-lift-coef-stab-h}\n  C_{L,h}\n  =\n  \\left(\n    \\alpha + i_h - \\alpha \\frac{\\partial \\epsilon}{\\partial \\alpha}\n  \\right)\n  \\frac{d C_{L,h}}{d \\alpha}\n\\end{equation}\n\nwhere the downwash derivative is given as: \\cite{Paturski09}\n\\begin{equation}\n  \\frac{\\partial \\epsilon}{\\partial \\alpha}\n  =\n  \\frac{2a}{\\pi A}\n\\end{equation}\n\nSubstituting equation (\\ref{eq-aero-lift-coef-stab-h}) into (\\ref{eq-aero-equilibrium-moments}) gives:\n\\begin{equation}\n  r_{CG} mg + \\frac{1}{2} \\rho V^2 S \\hat c C_m\n  =\n  l_h \\frac{1}{2} \\rho V^2 S_h\n  \\left(\n  \\alpha + i_h - \\alpha \\frac{\\partial \\epsilon}{\\partial \\alpha}\n  \\right)\n  \\frac{d C_{L,h}}{d \\alpha}\n\\end{equation}\n\nSolving this equation for the horizontal tail incidence angle gives:\n\\begin{equation}\n  \\label{eq-aero-incidence-angle}\n  i_h = \n  \\frac{ 2 r_{CG} mg + \\rho V^2 S \\hat c C_m }\n  { l_h \\rho V^2 S_h \\cfrac{d C_{L,h}}{d \\alpha} }\n  -\n  \\alpha \\left( 1 - \\frac{\\partial \\epsilon}{\\partial \\alpha} \\right)\n\\end{equation}\n\nThe equilibrium of forces acting on an aircraft in level flight is given by the following equation:\n\\begin{equation}\n  \\label{eq-aero-equilibrium-forces}\n  mg = \\frac{1}{2} \\rho V^2 S C_L\n\\end{equation}\n\nThe aircraft lift coefficient is given as follows:\n\\begin{equation}\n  \\label{eq-aero-lift-coef}\n  C_L = C_{L0} + \\alpha \\frac{d C_L}{d \\alpha}\n\\end{equation}\n\nSubstituting equation (\\ref{eq-aero-lift-coef}) into (\\ref{eq-aero-equilibrium-forces}) gives:\n\\begin{equation}\n  mg = \\frac{1}{2} \\rho V^2 S\n  \\left( C_{L0} + \\alpha \\frac{d C_L}{d \\alpha} \\right)\n\\end{equation}\n\nSolving this equation for the angle of attack gives:\n\\begin{equation}\n  \\label{eq-aero-angle-of-attack}\n  \\alpha = \n  \\frac{ 2mg - \\rho V^2 S C_{L0} }{ \\rho V^2 S \\cfrac{d C_L}{d \\alpha} }\n\\end{equation}\n\nSubstituting equation (\\ref{eq-aero-angle-of-attack}) into (\\ref{eq-aero-incidence-angle}) gives:\n\\begin{equation}\n  i_h =\n  \\frac{ 2 r_{CG} mg + \\rho V^2 S \\hat c C_m }\n  { l_h \\rho V^2 S_h \\cfrac{d C_{L,h}}{d \\alpha} }\n  -\n  \\frac{ 2mg - \\rho V^2 S C_{L0} }\n  { \\rho V^2 S \\cfrac{d C_L}{d \\alpha} }\n  \\left( 1 - \\frac{\\partial \\epsilon}{\\partial \\alpha} \\right)\n\\end{equation}\n\n\\section{Critical Angle of Attack}\n\nThe equilibrium of forces acting on an aircraft in level flight is given by the following equation:\n\\begin{equation}\n  mg = \\frac{1}{2} \\rho V^2 \\left( S C_L + S_h C_{L,h} \\right)\n\\end{equation}\n\nAs conventional-configuration airplanes have a negative horizontal stabilizer incidence angle, it is assumed that the horizontal stabilizer is within its linear lift range when the maximum lift coefficient is reached.\n\\begin{equation}\n  mg = \\frac{1}{2} \\rho V^2\n  \\left[\n    S C_L\n    +\n    S_h \\left(\n      \\alpha_{cr} + i_h\n      -\n      \\alpha_{cr} \\frac{\\partial \\epsilon}{\\partial \\alpha}\n    \\right)\n    \\frac{d C_{L,h}}{d \\alpha}\n  \\right]\n\\end{equation}\n\nSolving this equation for 
the maximum lift coefficient gives:\n\\begin{equation}\n  \\label{eq-aero-lift-coef-max-1}\n  C_{L,max} = \n  \\frac{\n    mg - \\cfrac{1}{2} \\rho V_{stall}^2 S_h\n    \\left(\n      \\alpha_{cr} + i_h\n      -\n      \\alpha_{cr} \\cfrac{\\partial \\epsilon}{\\partial \\alpha}\n    \\right)\n    \\cfrac{d C_{L,h}}{d \\alpha}\n  }\n  {\n    \\cfrac{1}{2} \\rho V_{stall}^2 S\n  }\n\\end{equation}\n\nAssuming that the maximum lift coefficient is within the linear range:\n\\begin{equation}\n  \\label{eq-aero-lift-coef-max-2}\n  C_{L,max} =  C_{L0} + \\alpha_{cr} \\frac{d C_L}{d \\alpha}\n\\end{equation}\n\nSubstituting equation (\\ref{eq-aero-lift-coef-max-2}) into (\\ref{eq-aero-lift-coef-max-1}) gives:\n\\begin{equation}\n  C_{L0} + \\alpha_{cr} \\frac{d C_L}{d \\alpha} =\n  \\frac{\n    mg - \\cfrac{1}{2} \\rho V_{stall}^2 S_h\n    \\left(\n      \\alpha_{cr} + i_h\n      -\n      \\alpha_{cr} \\cfrac{\\partial \\epsilon}{\\partial \\alpha}\n    \\right)\n    \\cfrac{d C_{L,h}}{d \\alpha}\n  }\n  {\n    \\cfrac{1}{2} \\rho V_{stall}^2 S\n  }\n\\end{equation}\n\nSolving this equation for the critical angle of attack gives:\n\\begin{equation}\n  \\label{eq-aero-alpha-critical}\n  \\alpha_{cr} =\n  \\frac{\n    2mg - \\rho V_{stall}^2\n    \\left(\n      S_h i_h \\cfrac{d C_{L,h}}{d \\alpha}\n      +\n      S C_{L0}\n    \\right)\n  }\n  {\n    \\rho V_{stall}^2\n    \\left[\n      S \\cfrac{d C_L}{d \\alpha}\n      +\n      S_h \\left( 1 - \\cfrac{\\partial \\epsilon}{\\partial \\alpha} \\right)\n      \\cfrac{d C_{L,h}}{d \\alpha}\n    \\right]\n  }\n\\end{equation}\n", "meta": {"hexsha": "28479605ecb08de8d12f94ee98b64514a780d6ec", "size": 9904, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/data_2.tex", "max_stars_repo_name": "marek-cel/mscsim-docs", "max_stars_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2019-12-01T02:27:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-09T07:02:20.000Z", "max_issues_repo_path": "tex/data_2.tex", "max_issues_repo_name": "marek-cel/mscsim-docs", "max_issues_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/data_2.tex", "max_forks_repo_name": "marek-cel/mscsim-docs", "max_forks_repo_head_hexsha": "9984f33c84787c4420f11f2834bb35e040e1f36f", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-01T10:56:23.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-01T19:41:05.000Z", "avg_line_length": 27.1342465753, "max_line_length": 207, "alphanum_fraction": 0.6179321486, "num_tokens": 3797, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7956580903722561, "lm_q2_score": 0.6959583250334526, "lm_q1q2_score": 0.5537448718747908}}
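As a worked illustration of the lift-coefficient approximation above, the following Python sketch evaluates the piecewise $C_L(\alpha)$ by exploiting the odd symmetry of the curve; the parameters ($C_{Ls}$, $\alpha_{Ls}$, $C_{L1}$, $\alpha_{L1}$, $C_{L2}$, $\alpha_{L2}$) are inputs to be taken from data such as \cite{NACA-TN-3361}, and the function itself is only a sketch, not part of the reference implementation:

\begin{verbatim}
import math

def lift_coefficient(alpha, cl_s, a_s, cl_1, a_1, cl_2, a_2):
    """Piecewise C_L(alpha) for -pi/2 <= alpha <= pi/2 (alpha in radians).
    cl_s, a_s: end of the linear range; cl_1, a_1 and cl_2, a_2: the two
    post-stall anchor points of the quadratic interpolation."""
    a = abs(alpha)
    if a < a_s:                       # linear (pre-stall) range
        cl = cl_s / a_s * a
    elif a < a_1:                     # stall region, linear blend
        cl = (cl_1 - cl_s) / (a_1 - a_s) * (a - a_s) + cl_s
    else:                             # post-stall quadratic through (pi/2, 0)
        cl = (cl_1 * (a - a_2) * (a - math.pi / 2)
              / ((a_1 - a_2) * (a_1 - math.pi / 2))
              + cl_2 * (a - a_1) * (a - math.pi / 2)
              / ((a_2 - a_1) * (a_2 - math.pi / 2)))
    return cl if alpha >= 0 else -cl
\end{verbatim}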
{"text": "\\title{\\bf Light I}\n\n\\section{Basics \\& Nomenclature}\n\nIn astrophysical parlance, the {\\it specific intensity} $I_\\nu$ is the\nflux of electromagnetic energy across a surface from a particular\ndirection, per unit area, per unit solid angle, per unit time, per\nunit frequency. The specific intensity can also be expressed per unit\nwavelength, and denoted in this case $I_\\lambda$. For example, units\nmay be erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ arcsec$^{-2}$ or erg s$^{-1}$\ncm$^{-2}$ \\AA$^{-1}$ arcsec$^{-2}$.  The term {\\it surface brightness}\nis often also used to denote the specific intensity.\n\nThe quantities $I_\\nu$ and $I_\\lambda$ are related:\n\\begin{eqnarray}\nI_\\nu(\\nu) |{\\rm d}\\nu| &=& I_\\lambda(\\lambda = c/\\nu) |{\\rm\n  d}\\lambda| \\cr\nI_\\nu(\\nu) &=& \\left|\\frac{{\\rm d}\\lambda}{{\\rm d}\\nu}\\right|\nI_\\lambda = \\frac{c}{\\nu^2} I_\\lambda(\\lambda = c/\\nu) \\cr\nI_\\lambda(\\lambda) &=& \\left|\\frac{{\\rm d}\\nu}{{\\rm d}\\lambda}\\right|\nI_\\nu = \\frac{c}{\\lambda^2} I_\\nu(\\nu = c/\\lambda).\n\\end{eqnarray}\n\nA {\\it flux density} $f_\\nu$ or $f_\\lambda$ is the integral of the\nspecific intensity through the surface integrated over a range of\ndirections:\n\\begin{equation}\nf_\\nu = \\int {\\rm d}\\Omega I_\\nu \\cos\\theta. \n\\end{equation}\nA special unit for flux density (mostly in use in radio astronomy) is\nthe Jansky, which is $10^{-23}$ erg s$^{-1}$ cm$^{-2}$\nHz$^{-1}$. Sometimes the flux density is also called the {\\it spectral\n  energy distribution}.\n\nThe electromagnetic wave can also be expressed in terms of\nphotons. The energy of photons correspond to a wavelength of light:\n\\begin{equation}\nE_\\nu = h\\nu\n\\end{equation}\n\nIf an emitting object is moving along the line-of-sight to the\nobserver, the photon's wavelengths are shifted by a factor $(1+z)$\nwhere $z$ is the redshift:\n\\begin{eqnarray}\n\\lambda_{\\rm obs}  &=& (1+z) \\lambda_{\\rm emit} \\cr\n\\nu_{\\rm obs}  &=& \\frac{\\nu_{\\rm emit}}{1+z}\n\\end{eqnarray}\nThe specific intensity and the flux is also altered by this\nmotion. Under both special and general relativity, the photon density\nin phase space is conserved. From this it can be calculated that:\n\\begin{equation}\n  \\label{eq:sb_dimming}\n  I_{\\lambda, {\\rm obs}} \\dd\\lambda_{\\rm obs} = \n  \\frac{I_{\\lambda, {\\rm emit}} \\dd\\lambda_{\\rm emit}}{(1+z)^4}\n\\end{equation}\nAt small velocities, $z \\approx v/c$, but at higher velocities\nrelativistic corrections are important.\n\nA {\\it luminosity} is defined as the total energy emitted from some\nsource per unit time per unit frequency (or wavelength). It has\ntypical units of erg s$^{-1}$ Hz$^{-1}$ or erg s$^{-1}$\n\\AA$^{-1}$. For isotropically emitting sources, the total luminosity\nis related to the observed flux as:\n\\begin{equation}\n\\label{eq:luminosity_distance}\nf_\\nu = \\frac{L_\\nu}{4\\pi D^2}\n\\end{equation}\nwhere $D$ is the distance to the source.\n\nIn a cosmological context (relevant at 100s of Mpc distance or so\ndepending on the required precision), there is not a single\nwell-defined distance to a source. The directly observable quantity is\nthe redshift $z$, associated with the Universe's expansion. In this\ncase $D$ must be the {\\it luminosity distance}, sometimes denoted\n$D_L$, defined to satisfy Equation \\ref{eq:luminosity_distance}; its\nrelation to redshift depends on the cosmological parameters\n(see \\citealt{hogg99cosm}). 
The flux dependence on distance in\nstandard, general relativity-based cosmology is a combination of the\ndependence of the angular size on distance and the surface\nbrightness dimming effect.\n\nAn analogous quantity, the {\\it angular diameter distance} $D_A(z)$,\ncan be defined that satisfies the relation for small angles:\n\\begin{equation}\n\\theta = \\frac{s}{D_A}\n\\end{equation}\nwhere $s$ is a physical size of an object and $\\theta$ is the angular\nsize of that object when observed at redshift $z$.\n\nThe flux density or specific intensity can only be measured at some\nfinite resolution in wavelength or frequency. In optical astronomical\nparlance, a common expression of resolution is:\n\\begin{equation}\nR = \\frac{\\lambda}{\\Delta \\lambda}\n\\end{equation}\nwhere $\\Delta \\lambda$ is the full-width half maximum (FWHM) of the\nline spread function (the Green's function response of the system to a\npoint source).\n\nOptical and infrared instruments that measure the wavelength\ndependence of the flux density through dispersal of the light (with\ndiffraction gratings or prisms in the optical) are typically known as\nspectrographs. Existing instruments range in resolution from $R\\sim\n20$ to $R>100,000$.  At higher energies, in addition to dispersal\ntechniques, X-ray detectors and position sensitive proportional\ncounters also often have intrinsic energy sensitivity.  Radio\nreceivers separate signals into frequencies electronically.\n\nImaging instruments with wavelength-dependent sensitivity (either due\nto the detector or through a band pass filter) can also measure the\nwavelength dependence of the flux density or specific\nintensity. Typically, this is performed more coarsely at about $R\\sim\n5$, though narrow band systems reach $R\\sim 50$ or more (typically\nthough not quite always requiring one exposure per filter).\n\nIn the optical and infrared the use of such coarse band pass filters\nmeans that interpreting the measurements at high precision\nrequires knowing the band pass very well. A full interpretation of\nthe measurement also involves a model of the flux density. The\nobservations are usually interpreted as the ratio of the signal\nreceived from the object to the signal that would have been received\nby some standard source.\n\nThis quantity is termed a {\\it maggie} and depends on the band\npass. It can be expressed in terms of a model for $f_\\nu$ or\n$f_\\lambda$ of the object and $g_\\nu$ or $g_\\lambda$ of the standard\nsource:\n\\begin{eqnarray}\n  \\mu_b &=&\n  \\frac{\\int {\\rm d}\\nu (f_\\nu(\\nu)/\\nu) R_b(\\nu)}\n       {\\int {\\rm d}\\nu (g_\\nu(\\nu)/\\nu) R_b(\\nu)} \\cr\n       &=&\n  \\frac{\\int {\\rm d}\\lambda f_\\lambda(\\lambda) \\lambda R_b(\\lambda) }\n       {\\int {\\rm d}\\lambda g_\\lambda(\\lambda) \\lambda R_b(\\lambda)} \n\\end{eqnarray}\nwhere $R(\\lambda)$ or $R(\\nu)$ is the contribution of a photon\nentering Earth's atmosphere (or the aperture of a space telescope) to\nthe output signal of the instrument in band $b$. 
This formula is the\nsame for photon-counting and energy-counting devices, though note that\nfor the latter instrumentalists typically report a definition of\n$R(\\lambda)$ or $R(\\nu)$ which is the contribution of an amount of\nenergy to the signal rather than that of a photon; therefore the\n$R(\\lambda)$ or $R(\\nu)$ so defined needs to be transformed into\nper-photon units.\n\nTypical choices of $g_\\nu$ are the spectrum of Vega (which is only\nknown to a few percent accuracy in the optical and near-infrared) and\nthe AB system of $g_\\nu = 3631$ Jy (the flux density of Vega near 5500\n\\AA).  All absolute measurements are based on standards whose flux\ndensities are thought to be known, but only rarely by comparing\ndirectly to Vega (which is too bright) and never to an AB source (they\ndon't exist). Relative measurements can be calibrated fairly precisely\nindependently of absolute calibration, though for broad band passes\nthe colors of the sources need to be accounted for.\n\nThe classical astronomical {\\it magnitude} is defined as:\n\\begin{equation}\nm = -2.5 \\log_{10} \\mu\n\\end{equation}\nThe bright end of this system corresponds roughly to the original\nmagnitude system developed by astronomers in the ancient world.\n\nThe luminosity is often expressed as the {\\it absolute magnitude},\nwhich is defined as the magnitude the object would have were it at\nrest with respect to us, at 10 pc distance. For non-cosmologically\ndistant objects (within a few 10s of Mpc) this can be expressed as:\n\\begin{equation}\nM = m - 5 \\log_{10} \\left(\\frac{D}{10 {\\rm ~pc}}\\right) = m - {\\rm DM}\n\\end{equation}\nwhere DM is defined as the distance modulus. \n\nAt cosmological distances, two effects are important. First, $D$ must\nbe the luminosity distance. Second, the observed band pass corresponds\nto a different part of the rest frame spectrum of the object for each\ndifferent redshift $z$. This effect is known as the $K$-correction,\ndefined as:\n\\begin{equation}\nM = m - {\\rm DM} - K(z; f_\\lambda)\n\\end{equation}\nBasically the $K$-correction accounts for the ratio of the observed to\nrest frame flux given the shape of the flux density\n$f_\\lambda(\\lambda)$ (it does not depend on the amplitude). It is\ncommon at low redshifts to $K$-correct from the observed bandpass to\nthe same band pass in the rest frame. However, at higher redshift it\ncan be more stable to $K$-correct to a bandpass with a closer\neffective wavelength to that observed. \\citet{hogg02c} describes the\nmathematical definition of the $K$-correction in all such cases. \n\n\\section{Commentary}\n\nWhile the magnitude system was originally used for convenience in\ncalibration over a large dynamic range in brightness, it retains its\nusefulness because of its encapsulation of the response function\n$R(\\lambda)$.  Sometimes measurements of $\\mu$ are expressed in terms\nof $f_{\\lambda, {\\rm eff}}$ at some choice of $\\lambda_{\\rm eff}$\n(e.g. by just multiplying an AB maggie by 3631 Jy and using some\naverage effective wavelength of the filter), but for broad band passes\neither $f_{\\lambda, {\\rm eff}}$ or $\\lambda_{\\rm eff}$ is a strong\nfunction of the actual, often unknown $f_\\lambda(\\lambda)$. $\\mu$ and\n$m$ do not suffer from this dependence (though interpreting them\nphysically still can only be done by accounting for the filter curve\nand a model for $f_\\lambda(\\lambda)$).\n\n$K$-corrections depend on knowing the flux density itself, or at least\nits wavelength dependence. 
To estimate that wavelength dependence\nrequires having a measurement or model. Thus, the $K$-correction is\nnaturally an uncertain quantity.\n\n\\section{Important numbers}\n\n\\begin{itemize}\n\\item $h\\nu \\sim 1$ eV for $\\lambda \\sim 1.2$ $\\mu$m.\n\\item $\\nu \\sim 300$ THz for $\\lambda \\sim 1$ $\\mu$m.\n\\item $\\nu \\sim 30$ GHz for $\\lambda \\sim 1$ cm.\n\\item $\\nu \\sim 1$ GHz for $\\lambda \\sim 1$ m.\n\\item $m_{V, {\\rm Vega}} \\sim 0$ mag.\n\\item $M_{V, \\odot} \\sim 5$ mag.\n\\end{itemize}\n\n\\section{Key References}\n\n\\begin{itemize}\n  \\item\n    \\href{http://adsabs.harvard.edu/abs/2000asqu.book.....C}{\n    {\\it Allen's Astrophysical Quantities},\n      \\citet{cox00a}}, Chapter 5\n  \\item\n    \\href{http://adsabs.harvard.edu/abs/1999astro.ph..5116H}{\n    {\\it Distance measures in Cosmology},\n      \\citet{hogg99cosm}}\n  \\item\n    \\href{http://adsabs.harvard.edu/abs/2002astro.ph.10394H}{\n    {\\it The $K$-correction},\n      \\citet{hogg02c}}\n\\end{itemize}\n\n\\section{Order-of-magnitude Exercises}\n\n\\begin{enumerate} \n\\item If Vega is about 8 parsecs away and has $m_V \\sim 0$, how far\n  away would it be visible through the deepest images from the Hubble\n  Space Telescope ($m_V\\sim 30$)?\n\n\\begin{answer}\n  The magnitude difference of 30\n    corresponds to a factor of $10^{12}$ in luminosity. Using Equation\n    \\ref{eq:luminosity_distance}, this translates to $10^6$ in\n    distance. So Vega-like stars are visible (in principle) to about 8\n    Mpc. In practice, if such stars were in another galaxy, it is very\n    difficult to resolve them from the other stars in the galaxy; if\n    they were free-floating between galaxies there is still a high\n    density of 30th magnitude galaxies causing confusion in the image.\n\\end{answer}\n\n\\item Estimate the number of photons that enter your eye\n    per second in visible light (4000--7000 \\AA) from a star with\n  magnitude $\\sim 6$ (about the faintest visible at a dark\n  site). Assume a nighttime pupil diameter of 5 mm.\n\n\\begin{answer}\nIn this wavelength range, Vega and AB magnitudes are about the same at\nthe precision necessary here, so we don't have to worry about which\nversion we are dealing with. So:\n\\begin{equation}\nf_\\nu \\sim (3631 \\mathrm{~Jy}) 10^{-0.4 m} \\sim 14 \\mathrm{~Jy} = 1.4 \\times\n10^{-22} \\mathrm{~erg} \\mathrm{~cm}^{-2} \\mathrm{~s}^{-1} \\mathrm{~Hz}^{-1}\n\\end{equation}\nThe flux in the visible should be $f_\\nu \\Delta\\nu$ where:\n\\begin{equation}\n\\Delta\\nu = c \\left(\\frac{1}{4000 \\mathrm{~\\AA}} -\n\\frac{1}{7000 \\mathrm{~\\AA}}\\right) \\sim 320 \\mathrm{~THz},\n\\end{equation}\nand thus $f \\sim 4\\times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$. Each photon\nhas an energy (assuming $\\lambda = 5500$ \\AA):\n\\begin{equation}\nE = h\\nu = (6.62\\times\n10^{-27} \\mathrm{~erg~Hz}^{-1})( \\mathrm{550~THz}) \\sim 4 \\times\n10^{-12} \\mathrm{~erg}.\n\\end{equation}\nSo the flux of photons is:\n\\begin{equation}\n\\frac{\\dot N}{A} = \\frac{f_\\nu \\Delta\\nu}{h\\nu} \\sim \n10^4 \\mathrm{~s}^{-1} \\mathrm{~cm}^{-2}.\n\\end{equation}\nIf $A \\sim \\pi r^2 \\sim 0.2$ cm$^2$ then $\\dot N \\sim 2000$ s$^{-1}$. \n\\end{answer}\n\n\\item How much does surface brightness dimming change the magnitudes\n  per square arcsecond for a galaxy at redshift $z\\sim 1$?\n\n\\begin{answer}\nThe specific intensity integrated over all wavelengths is reduced by\n$(1+z)^4$. 
In magnitudes this is:\n\\begin{equation}\n\\Delta m = 2.5 \\log_{10} (1+z)^4 = 10 \\log_{10} (1+z) \\sim\n3 \\mathrm{~mag}\n\\end{equation}\n\\end{answer}\n\\end{enumerate}   \n\n\\section{Analytic Exercises}\n\n\\begin{enumerate}\n\\item For a Gaussian line spread function with a standard deviation\n  $\\sigma$, what is the FWHM?\n\n\\begin{answer}\nThe FWHM is twice the distance from the peak to the point halfway down\nthe peak, so it is determined by:\n\\begin{equation}\n\\exp\\left(- ({\\rm FWHM} / 2)^2 / 2 \\sigma^2\\right) = 0.5.\n\\end{equation}\nSolving for the FWHM yields: \n\\begin{equation}\n{\\rm FWHM} = 2 \\sqrt{2\\ln 2} \\sigma \\approx 2.35 \\sigma\n\\end{equation}\n\\end{answer}\n\\item Prove Equation \\ref{eq:sb_dimming}, based on the fact that\n  photon density in phase space is conserved.\n\n\\begin{answer}[Author: Trey Jensen]\nBeginning with the photon particle distribution, $f(x,p,t)$, we know\nthat subject only to cosmic expansion, the number of photons will be\nconserved,\n\\begin{equation}\n\\dd N =f(x,p,t)\\dd V \\dd^3 p.\n\\end{equation}\nIf we rewrite the right hand side in the suggestive manner,\n\\begin{equation}\nf(x,p,t)p^2\\dd x \\dd A \\dd p \\dd \\Omega,\n\\end{equation}\nwhere $\\dd A$ is the area element, $\\dd \\Omega$ the solid angle\nelement of momenta (direction of propagation), then we see this can be\nwritten with the specific intensity in mind. The specific intensity is\nthe flux of energy through a surface per area, per solid angle, per\ntime, per frequency. We can get an infinitesimal number of photons\nvia,\n\\begin{equation}\n \\dd N =\\frac{I_\\nu}{E} \\dd A \\dd \\Omega  \\dd t \\dd \\nu.\n\\end{equation}\nEquating these two equations of $\\dd N$, and because we have massless\nparticles, $p=E=h\\nu$, then using the fact that $\\dd x / \\dd t = c$:\n\\begin{equation}\n    I_\\nu=f(x,p,t)h^{4} \\nu^3 c\n\\end{equation}\nIn the FRW metric, frequency $\\nu$ scales inversely with $a(t)$ (this\ncan also be derived from the geodesic equation of a photon). Thus,\ninspecting the above equation,\n\\begin{equation}\n    I_\\nu\\propto a(t)^{3} = \\frac{1}{(1+z)^3}.\n\\end{equation}\nThen if we consider the integral of $I_\\nu$:\n\\begin{equation}\n    I_{\\nu, \\rm obs} \\dd\\nu_{\\rm obs}  =\n    \\frac{1}{(1+z)^4} I_{\\nu, \\rm emit} \\dd\\nu_{\\rm emit}\n\\end{equation}\nand similar for $I_\\lambda$.\n\\end{answer}\n\n\\item Based on Equation \\ref{eq:sb_dimming}, how is the angular\n  diameter distance related to the luminosity distance? \n\n\\begin{answer}\nThe angular diameter distance must satisfy:\n\\begin{equation}\n\\label{eq:da}\nD_A = \\frac{s}{\\theta}\n\\end{equation}\nTake a uniform surface brightness sphere of radius $s$. The flux\ndensity must satisfy two equalities:\n\\begin{equation}\nf_\\nu = \\frac{L_\\nu}{4\\pi D_L^2} = I_\\nu \\pi \\theta^2.\n\\end{equation}\nUsing Equation \\ref{eq:sb_dimming} and \\ref{eq:da}, the second equality\nimplies:\n\\begin{equation}\n\\frac{L_\\nu}{4\\pi D_L^2} = I_\\nu \\pi \\theta^2 = \\frac{I_{\\nu, 0} \\pi\ns^2}{(1+z)^4 D_A^2}.\n\\end{equation}\nThen using $L=4\\pi^2 s^2 I_{\\nu, 0}$ we find:\n\\begin{equation}\nD_L = (1+z)^2 D_A\n\\end{equation}\n\\end{answer}\n\\end{enumerate}\n\n\\section{Numerics and Data Exercises}\n\n\\begin{enumerate}\n\\item Retrieve a spectrum of a star, a quasar, and a galaxy from the\n  Sloan Digital Sky Survey. Plot each of them. These spectra are given\n  in $f_\\lambda$ (per-\\AA) units. Convert one of them to $f_\\nu$\n  (per-Hertz) and plot it. 
Smooth one of them in $f_\\lambda$ with a\n  Gaussian corresponding to $R\\sim 1000$ and plot it.\n\\item Plot $D_L$ versus $z$ based on the equations found\n  in \\citet{hogg99cosm}, for a flat $\\Lambda$CDM cosmology with\n  $\\Omega_m = 0.3$ and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. Determine\n  where the difference in inferred luminosity of an object, relative\n  to assuming $D_L = cz / H_0$, would reach 1\\%. \n\\item Download the filter curve for the SDSS $g$ and $r$\n  bands. Calculate the observed $g$ and $r$ band magnitudes\n  corresponding to a galaxy spectrum (say for some galaxy with $z \\sim\n  0.15$ or greater). Note that this won't necessarily be the same as\n  the magnitudes measured from the images, since the spectra are taken\n  through 2- or 3-arcsec diameter fibers. Calculate the rest-frame\n  $g-r$ color, and also what the $K$-correction would be for galaxies\n  with this SED in the $r$-band between about $z\\sim 0$ and $z\\sim\n  0.25$. Download the photometric data for a sample of galaxies\n  between about $z\\sim 0$ and\n  $0.25$. Plot their $g-r$ colors versus redshift, together with the\n  predicted colors of the galaxy you have a spectrum of.\n\\item The \\href{http://www.mpia.de/THINGS}{THINGS survey} maps a\n  number of nearby galaxies in 21-cm radio emission, a\n  hyperfine transition in hydrogen. Download the data on the galaxy\n  NGC 2403 and estimate its rotation velocity at the furthest\n  measurable points (it is all right to do this by eyeballing rather\n  than a detailed fit).\n\\item Using the ROSAT all sky survey (for example\n  at \\href{http://www.xray.mpe.mpg.de/cgi-bin/rosat/rosat-survey}{this\n  web site}), find an X-ray image of the center of the galaxy NGC\n  1068. Compare this to the X-ray image of the center of the galaxy\n  NGC 3992. Use the hardest (highest energy) band available from the\n  Position Sensitive Proportional Counter (PSPC) images.\n\\end{enumerate}\n\n\\bibliographystyle{apj}\n\\bibliography{exex}  \n", "meta": {"hexsha": "505d3aa113dbc01b37ec765caab6fd504aa91661", "size": 18164, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/light-1-text.tex", "max_stars_repo_name": "blanton144/exex", "max_stars_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tex/light-1-text.tex", "max_issues_repo_name": "blanton144/exex", "max_issues_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/light-1-text.tex", "max_forks_repo_name": "blanton144/exex", "max_forks_repo_head_hexsha": "b4d9d52b4fe8af761783f49b2c197a109d94cfdf", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.4392523364, "max_line_length": 76, "alphanum_fraction": 0.7288592821, "num_tokens": 5370, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289836, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5537350082431414}}
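The second order-of-magnitude exercise translates directly into a few lines of Python; the constants mirror the worked answer (AB-like zero point of 3631 Jy, a 5 mm pupil), and the script is only a sketch of that estimate:

\begin{verbatim}
import math

h = 6.62e-27                 # Planck constant, erg s
c = 3.0e10                   # speed of light, cm/s

m = 6.0                                        # apparent magnitude
f_nu = 3631e-23 * 10 ** (-0.4 * m)             # erg cm^-2 s^-1 Hz^-1
dnu = c * (1 / 4000e-8 - 1 / 7000e-8)          # 4000-7000 A bandwidth, Hz
E_photon = h * c / 5500e-8                     # photon energy at 5500 A, erg

photon_flux = f_nu * dnu / E_photon            # ~1e4 photons cm^-2 s^-1
area = math.pi * 0.25 ** 2                     # 5 mm pupil -> r = 0.25 cm
print(photon_flux * area)                      # ~2e3 photons per second
\end{verbatim}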
{"text": "\\documentclass[11pt,a4paper]{article}\n\\usepackage[vmargin=15mm,hmargin=20mm]{geometry}     % margins\n\\usepackage{natbib}\n\\usepackage{booktabs}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\\usepackage[colorlinks=true,allcolors=blue]{hyperref}\n\\setlength{\\parskip}{\\baselineskip}\n\\DeclareMathOperator{\\sech}{sech}\n\n\\title{\\texttt{mw\\_poisson}: Mathematical Background}\n\\author{A. P. Naik}\n\\date{July 2020}\n\n\\begin{document}\n\n\\maketitle\n\n\nThis document details some of the mathematics underpinning the \\texttt{mw\\_poisson} package, and provides further references. \\texttt{mw\\_poisson} is a python-3 package providing a Poisson solver for the (axisymmetric) Milky Way potential, essentially a vectorised, python alternative to \\href{https://github.com/PaulMcMillan-Astro/GalPot}{GalPot}.\n\nThe Poisson solver is based on the method described in \\citet{Dehnen1998}, designed for the solution of Poisson's equation in the context of an axisymmetric discoid mass distribution.\n\nThe parameterisation of the Milky Way density profile is the empirical model of \\citet{McMillan2017}. As well as the best-fitting model of \\citet{McMillan2017}, other parameter values for the various components can also be adopted, including some 'meta-parameters' such as the slope of the dark matter halo.\n\n\n\\section{Spheroid-Disc Decomposition}\n\nGiven some model for the density distribution $\\rho(\\bm{x})$ of our Galaxy, our goal is to calculate the corresponding gravitational potential $\\Phi(\\bm{x})$ by solving Poisson's equation\n\\begin{equation}\n    \\nabla^2 \\Phi = 4\\pi G \\rho.\n\\end{equation}\nOne way to go about this would be to perform a direct spherical harmonic expansion. However, such methods can be slow to converge if a component is strongly confined to the disc plane. Fortunately, \\citet{Dehnen1998} describe an alternative method that can be employed in such circumstances, provided the density distribution is axisymmetric. In a nutshell, the potential is decomposed into an analytic disc-plane component, plus another component that needs to be calculated numerically. This latter component is less strongly confined to the disc plane, and so the spherical harmonic expansion method is more effective.\n\nFor the method to work, we require that $\\rho(\\bm{x})$ can be written as a linear combination of various discoid and spheroid components, i.e.\n\\begin{equation}\n    \\rho = \\sum_i \\rho_{d,i} + \\sum_j \\rho_{s,j},\n\\end{equation}\nwhere the spheroids $\\rho_{s,j}$ need not be spherically symmetric, just not too strongly confined to the disc plane. The discoid components $\\rho_{d,i}$ must be separable, i.e. it can be written as\n\\begin{equation}\n\\label{E:rho_disc}\n    \\rho_{d,i}(R,z) = \\Sigma_i(R)\\zeta_i(z),\n\\end{equation}\nwhere the normalisation is such that\n\\begin{equation}\n\\label{E:zeta_norm}\n    \\int_{-\\infty}^{-\\infty} \\zeta(z) dz = 1.\n\\end{equation}\n\nThen, the potential can be written as\n\\begin{equation}\n    \\Phi = \\Phi_\\mathrm{ME} + \\Phi_\\mathrm{disc},\n\\end{equation}\nwhere $\\Phi_\\mathrm{disc}$ can be written analytically as\n\\begin{equation}\n    \\Phi_\\mathrm{disc} = 4\\pi G \\sum_i \\Sigma_i(r)H_i(z),\n\\end{equation}\nand $H(z)$ is a function that satisfies $H''(z) = \\zeta(z)$ and $H'(0)=H(0)=0$. 
Note that the argument of $\\Sigma$ here is the spherical radius $r$ rather than the cylindrical radius $R$.\n\nMeanwhile, the other component $\\Phi_\\mathrm{ME}$ solves a new Poisson equation\n\\begin{equation}\n\\label{E:PoissonME}\n    \\nabla^2 \\Phi_\\mathrm{ME} = 4\\pi G \\rho_\\mathrm{ME},\n\\end{equation}\nwhere the `density' is given by\n\\begin{equation}\n\\label{E:rho_ME}\n    \\rho_\\mathrm{ME} \\equiv \\sum_i \\left[\\left(\\Sigma_i(R)-\\Sigma_i(r)\\right)\\zeta_i(z) - \\Sigma_i''(r)H_i(z) - \\frac{2}{r}\\Sigma_i'(r)\\left(H_i(z) + zH_i'(z)\\right)\\right] + \\sum_j \\rho_{s,j}.\n\\end{equation}\nAs required, this is less strongly confined to the disc-plane than the true density (e.g., $\\rho_\\mathrm{ME}=0$ in the disc-plane). Thus, a spherical harmonic expansion will converge quickly on a solution for Eq. (\\ref{E:PoissonME}). This spherical harmonic method is the subject of the next section.\n\n\\section{Spherical Harmonic Solver}\n\nAccording to \\citet[][Eq. 2.95]{Binney2008}, a gravitational potential $\\Phi$ can be expanded in terms of spherical harmonics $Y_l^m$ as\n\\begin{equation}\n\\label{E:phi_sh_full}\n    \\Phi(r, \\theta, \\phi) = - 4\\pi G \\sum_{l=0}^\\infty \\sum_{m=-l}^l \\frac{Y_l^m(\\theta,\\phi)}{2l+1}\\left(r^{-l-1}\\int_0^r dr' r'^{l+2} \\rho_{lm}(r')+ r^l\\int_r^\\infty dr' r'^{1-l} \\rho_{lm}(r') \\right),\n\\end{equation}\nwhere the density coefficient $\\rho_{lm}(r)$ relates to a given density distribution $\\rho(\\bm{x})$ via\n\\begin{equation}\n\\label{E:rholm}\n    \\rho_{lm}(r) \\equiv \\int_0^\\pi d\\theta \\sin(\\theta) \\int_0^{2\\pi} d\\phi Y_l^{m*}(\\theta,\\phi)\\rho(r, \\theta, \\phi).\n\\end{equation}\nNote that $Y_l^{m*}$ is the complex conjugate of $Y_l^{m}$.\n\nBecause $Y_l^{m*} \\propto e^{-i m \\phi}$, if an axisymmetric density distribution is assumed then the $\\phi$ integral in Eq. (\\ref{E:rholm}) vanishes for $m \\neq 0$. The expression for the gravitational potential (\\ref{E:phi_sh_full}) then simplifies to\n\\begin{equation}\n    \\Phi(r, \\theta) = - 4\\pi G \\sum_{l=0}^\\infty \\frac{Y_l^0(\\theta)}{2l+1}\\left(r^{-l-1}\\int_0^r dr' r'^{l+2} \\rho_{l0}(r')+ r^l\\int_r^\\infty dr' r'^{1-l} \\rho_{l0}(r') \\right).\n\\end{equation}\n\nFiner resolution is required in the central regions of the galaxy than at larger distances, so it is convenient to instead use $q \\equiv \\ln r$ as the coordinate in the radial integration. Then,\n\\begin{equation}\n    \\Phi(r, \\theta) = - 4\\pi G \\sum_{l=0}^\\infty \\frac{Y_l^0(\\theta)}{2l+1}\\left(r^{-l-1}\\int_0^{\\ln r} dq e^{(l+3)q} \\rho_{l0}(q)+ r^l\\int_{\\ln r}^\\infty dq e^{(2-l)q} \\rho_{l0}(r') \\right).\n\\end{equation}\n\nTo calculate $\\Phi(r, \\theta)$ numerically, one can construct a regular coordinate grid in $q$ and $\\theta$, i.e., using \\texttt{python}-indexing: $q \\rightarrow \\{q_0, q_1, q_2, ..., q_{M-1}\\}$ and $\\theta \\rightarrow \\{\\theta_0, \\theta_1, \\theta_2, ..., \\theta_{N-1}\\}$, respectively with constant grid spacings $h_q$ and $h_\\theta$. The $\\theta$ grid spans $0$ to $\\pi$, while $q$ is bounded by some minimum and maximum radius chosen by hand, ensuring the full dynamic range of the galaxy is captured. 
Converting the integrals to discrete sums, one then obtains\n\\begin{equation}\n    \\Phi(r_i, \\theta_j) = - 8\\pi^2 G h_q h_\\theta \\sum_{l=0}^\\infty \\frac{Y_l^0(\\theta_j)}{2l+1} C_i^l,\n\\end{equation}\n\\begin{equation}\n    C_i^l \\equiv r_i^{-l-1}\\sum_{a=0}^{i}\\sum_b e^{(l+3)q_a} Y_l^{0*}(\\theta_b) \\sin(\\theta_b) \\rho(r_a, \\theta_b) + r_i^l\\sum_{a=i+1}^{M-1}\\sum_b e^{(2-l)q_a} Y_l^{0*}(\\theta_b) \\sin(\\theta_b) \\rho(r_a, \\theta_b).\n\\end{equation}\nOf course in practice, one must truncate the sum over $l$ at some value.\n\n\n\n\n\n\n\n\\section{Parametric Models}\n\n\\subsection{Spheroid}\n\nFor the spheroidal components of the galaxy (e.g. bulge, halo), the program assumes the following axisymmetric form for the density profile:\n\\begin{equation}\n\\label{E:rho_spheroid}\n    \\rho_s(R,z) = \\frac{\\rho_0}{\\left(\\frac{r'}{r_0}\\right)^\\beta\\left(1+\\frac{r'}{r_0}\\right)^\\alpha}e^{-\\left(\\frac{r'}{r_\\mathrm{cut}}\\right)^2},\n\\end{equation}\nwhere $r' \\equiv \\sqrt{R^2 + (z/q)^2}$. The profile is thus specified by six parameters: the normalisation $\\rho_0$, the outer slope $\\alpha$, the inner slope $\\beta$, the scale radius $r_0$, the cutoff radius $r_\\mathrm{cut}$, and the flattening $q$.\n\nAn arbitrary number of such spheroids, with different parameter combinations can be included in the Poisson solver. Various commonly-used density profiles can be recovered from Eq. (\\ref{E:rho_spheroid}) after some parameter specification. For instance, the NFW profile corresponds to $\\beta=1, \\alpha=2, r_\\mathrm{cut}=\\infty, q=1$.\n\n\\citet{McMillan2017} use an axisymmetric Bissantz-Gerhard model for the Milky Way bulge, and an NFW profile for the dark matter halo. The best-fitting parameter values of these are reproduced in the table below.\n\n\\begin{center}\n    \\begin{tabular}{l c c c c c c}\\toprule[1.5pt]\n    Component & \\multicolumn{6}{c}{Parameter} \\\\\n              & $\\rho_0$                & $\\alpha$ & $\\beta$ & $r_0$   & $r_\\mathrm{cut}$ & $q$ \\\\\n              & $M_\\odot/\\mathrm{pc}^3$ & -        & -       & kpc     & kpc              & -   \\\\ \\midrule[0.5pt]\n    Bulge     & 98.351                  & 1.8      & 0       & 0.075   & 2.1              & 0.5 \\\\\n    Halo      & 0.00853702              & 2        & 1       & 19.5725 & $\\infty$         & 1   \\\\ \\bottomrule[1.5pt]\n    \\end{tabular}\n\\end{center}\n\n\\subsection{Disc}\n\nAccording to Eq. (\\ref{E:rho_disc}), the disc density profile is specified by two functions, the radial surface density $\\Sigma(R)$ and the vertical profile $\\zeta(z)$. For $\\Sigma(R)$, the program assumes a `holed' exponential disc, i.e.\n\\begin{equation}\n    \\Sigma(R) = \\Sigma_0 e^{-x},\n\\end{equation}\nwhere $x \\equiv R_h/R+ R/R_0$, $\\Sigma_0$ is the density normalisation, and $R_0$ and $R_h$ are respectively the scale radius and radius of the central hole. Equation \\ref{E:rho_ME} requires expressions for the first and second derivatives of $\\Sigma$. These are given by\n\\begin{align}\n    \\Sigma'(R)  & = -\\frac{\\Sigma_0}{R_0} e^{-x} \\left(1 - \\frac{R_h R_0}{R^2}\\right) ,\\\\\n    \\Sigma''(R) & = \\frac{\\Sigma_0}{R_0^2} e^{-x} \\left[\\left(1 - \\frac{R_h R_0}{R^2}\\right)^2 - \\frac{2R_h R_0^2}{R^3}\\right] .\n\\end{align}\n\nMeanwhile, for the vertical profile $\\zeta(z)$, the program can optionally adopt either an exponential or a sech\\textsuperscript{2} profile. 
In the exponential case,\n\\begin{equation}\n    \\zeta(z) = \\frac{1}{2z_0}e^{-\\frac{|z|}{z_0}},\n\\end{equation}\nwhere $z_0$ is the scale height of the disc. Note that the factor $1/2z_0$ ensures that Eq. (\\ref{E:zeta_norm}) is satisfied, i.e. $\\zeta$ is properly normalised. The corresponding function $H(z)$ and its first derivative $H'(z)$ are given by\n\\begin{align}\n    H(z)  & = \\frac{z_0}{2} \\left(e^{-\\frac{|z|}{z_0}} - 1 + \\frac{|z|}{z_0}\\right), \\\\\n    H'(z) & = \\frac{z}{2|z|} \\left(1 - e^{-\\frac{|z|}{z_0}}\\right).\n\\end{align}\nIt can be verified that $H''(z) = \\zeta(z)$ and $H'(0)=H(0)=0$.\n\nFor the sech\\textsuperscript{2} profile, these functions are instead given by the following expressions, which also satisfy the various requirements:\n\\begin{align}\n    \\zeta(z) & = \\frac{1}{4z_0}\\sech^2\\left(\\frac{z}{2z_0}\\right),\\\\\n    H(z)     & = z_0 \\ln\\left(\\cosh\\left(\\frac{z}{2z_0}\\right)\\right), \\\\\n    H'(z)    & = \\frac{1}{2} \\tanh\\left(\\frac{z}{2z_0}\\right).\n\\end{align}\n\nThus, the disc density is specified by the choice of vertical profile and four parameters: $\\Sigma_0, R_0, R_h$, and $z_0$. As with the spheroids above, an arbitrary number of such discs with different parameter combinations can be included in the calculation.\n\nThe Milky Way model of \\citet{McMillan2017} incorporates four disc components: neutral and molecular hydrogen discs, and thin and thick stellar discs. The best-fitting parameters (and vertical shapes) of these discs are reproduced in the table below.\n\n\\begin{center}\n    \\begin{tabular}{l l c c c c}\\toprule[1.5pt]\n    Component          & Vertical Profile & \\multicolumn{4}{c}{Parameter} \\\\\n                       &                  & $\\Sigma_0$              & $R_0$   & $R_h$ & $z_0$ \\\\\n                       &                  & $M_\\odot/\\mathrm{pc}^2$ & kpc     & kpc   & pc    \\\\ \\midrule[0.5pt]\n    Thin               & Exponential      & 895.679                 & 2.49955 & 0     & 300   \\\\\n    Thick              & Exponential      & 183.444                 & 3.02134 & 0     & 900   \\\\\n    HI                 & $\\sech^2$        & 53.1319                 & 7       & 4     & 85    \\\\\n    H\\textsubscript{2} & $\\sech^2$        & 2179.95                 & 1.5     & 12    & 45    \\\\ \\bottomrule[1.5pt]\n    \\end{tabular}\n\\end{center}\n\n\n\\bibliographystyle{mnras}\n\\bibliography{library}\n\n\n\\end{document}\n", "meta": {"hexsha": "9a0324c407089216d2d3fefd52d09545a9a7055f", "size": 11860, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/theory.tex", "max_stars_repo_name": "aneeshnaik/mw_poisson", "max_stars_repo_head_hexsha": "8d2aa141086a7646ff0fe846aa22e654edd1cd6b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory/theory.tex", "max_issues_repo_name": "aneeshnaik/mw_poisson", "max_issues_repo_head_hexsha": "8d2aa141086a7646ff0fe846aa22e654edd1cd6b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/theory.tex", "max_forks_repo_name": "aneeshnaik/mw_poisson", "max_forks_repo_head_hexsha": "8d2aa141086a7646ff0fe846aa22e654edd1cd6b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.085106383, "max_line_length": 621, "alphanum_fraction": 0.6731028668, "num_tokens": 3865, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7606506526772883, "lm_q1q2_score": 0.5537350003433736}}
{"text": "\\section{Homogenization Approach}\n\nThe main objective of up-scaling DEM simulations is to be able to describe the  behavior of the discontinuous medium in terms of a more computationally efficient continuum model. The homogenization algorithms used herein to determine the average stress-strain behaviour, $\\left<\\boldsymbol{\\sigma}\\right>$-$\\left<\\boldsymbol{\\epsilon}\\right>$, of the REV from the microscale displacements $\\mathbf{u}^m$, strain $\\boldsymbol{\\epsilon}^m$, and stresses $\\boldsymbol{\\sigma}^m$ are based on the methods developed by D'Addetta et al. \\citet{daddetta_particle_2004} and Wellmann et al. \\citet{wellmann_homogenization_2008}. In this homogenization process, the resultant inter-block contact forces and block displacement from the DEM simulations are converted to average stresses and strains.\n\nFor the homogenization procedure to yield meaningful results, it should be applied to a Representative Elementary Volume (REV). The exact size of the REV depends on the geometry and mechanical properties of the DEM model. For the homogenization approach to hold, the REV of size $d$ within a system with a characteristic length $D$ and consisting of blocks with a characteristic diameter $\\delta$, must satisfy scale separation: $D\\gg d\\gg\\delta$ \\citep{wellmann_homogenization_2008}. \n\nIn the following sections, all deformations are assumed to be small, such that there is no need to differentiate between the deformed and undeformed configurations.  \n\nWe begin by defining a $L\\times L$ square DEM simulation domain over which mixed-boundary conditions will be applied. The REV is taken as a circular domain of radius $R$, $2R<L$.  The REV is taken to be a subdomain of the actual DEM simulation domain to eliminate any boundary effects.  As will be seen below, it is convenient to take the boundary of the domain used for homogenization as a slightly large domain encompassing the REV boundary. The boundary of the homogenization domain, denoted as $\\Gamma_h$, is defined by the outer edges of the deformable blocks, i.e., the cohesive/contact surfaces between deformable blocks, which intersect a circle of radius $R$ located in the center of the DEM simulation domain. Let the homogenization domain, the domain bounded by $\\Gamma_h$, be denoted by $\\Omega_h$. These definitions are illustrated in Figure \\ref{fig:homoarea} for a $10$m $\\times$ $10$m DEM domain, where the deformable blocks are defined through a Voronoi tessellation. The radius of the REV domain is $2.5$m.  It can be seen that the actual domain used for homogenization is non-circular and larger than the REV domain. 
\n\n\n", "meta": {"hexsha": "8cd6321312bc13b2c5d7dd7caa967e89e74869f1", "size": 2618, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "section_homogenizationApproach.tex", "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "section_homogenizationApproach.tex", "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "section_homogenizationApproach.tex", "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "avg_line_length": 218.1666666667, "max_line_length": 1136, "alphanum_fraction": 0.7964094729, "num_tokens": 595, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5537350003433736}}
{"text": "\\documentclass[main.tex]{subfiles}\n\\begin{document}\n\n\\marginpar{Tuesday\\\\ 2020-8-18, \\\\ compiled \\\\ \\today}\n\nThis parametric equation is expressed with respect to the principal axes of the ellipse; if we want to write in a different coordinate system which is obtained from a rotation of the principal axes we must use the rotation matrix \n%\n\\begin{align}\n\\left[\\begin{array}{c}\nx \\\\ \ny\n\\end{array}\\right]\n=\n\\left[\\begin{array}{cc}\n\\cos \\chi  & -\\sin \\chi   \\\\ \n\\sin \\chi  &  \\cos \\chi \n\\end{array}\\right]\n\\left[\\begin{array}{c}\nx' \\\\ \ny'\n\\end{array}\\right]\n\\,;\n\\end{align}\n%\nif we make \\(x'\\) and \\(y'\\) explicit we can find \n%\n\\begin{align}\nx &= A \\cos \\beta \\cos \\chi \\cos \\omega t + A \\sin \\beta \\sin \\chi \\sin \\omega t \\\\\ny &= A \\cos \\beta \\sin \\chi \\cos \\omega t - A \\sin \\beta \\cos \\chi \\sin \\omega t \n\\,.\n\\end{align}\n\nThe expressions for \\(x\\) and \\(y\\) and the ones for \\(E_x\\) and \\(E_y\\) are quite similar: we must identify \n%\n\\begin{align}\n\\xi_1 \\cos \\phi_1 &= A \\cos \\beta \\cos \\chi \\\\\n\\xi_1 \\sin \\phi_1 &= A \\sin \\beta \\sin \\chi \\\\\n\\xi_2 \\cos \\phi_2 &= A \\cos \\beta \\sin \\chi \\\\\n\\xi_2 \\sin \\phi_2 &= -A \\sin \\beta \\cos \\chi \n\\,,\n\\end{align}\n%\nwhich means that the electric field rotates in the shape of an ellipse, which is tilted about the \\(x\\), \\(y\\) axes by an angle \\(\\chi \\). \n\n\\subsubsection{Stokes parameters}\n\nFrom these four equations, we find that (after working through the simplifications of the trigonometric functions)\n%\n\\begin{align}\n\\xi_1^2 + \\xi_2^2 &= A^2 \\overset{\\text{def}}{=} I \\\\\n\\xi_1^2 - \\xi_2^2 &= A^2 \\cos 2 \\beta \\cos 2 \\chi  \\overset{\\text{def}}{=} Q \\\\\n2 \\xi_1 \\xi_2 \\cos(\\phi_1 - \\phi_2 ) &= A^2 \\sin 2 \\chi \\cos 2 \\beta \\overset{\\text{def}}{=} U \\\\\n2 \\xi_1 \\xi_2 \\sin(\\phi_1 - \\phi_2 ) &= A^2 \\sin 2 \\beta \\overset{\\text{def}}{=} V\n\\,,\n\\end{align}\n%\nwhere we defined the \\textbf{Stokes parameters} \\(I\\), \\(Q\\), \\(U\\) and \\(V\\).\\footnote{They can also be expressed as the difference of the square modulus of the electric field with respect to three bases (corresponding to the three Pauli matrices): one Cartesian (for \\(Q\\), \\(\\sigma _z\\)) one Cartesian rotated by \\SI{45}{\\degree} (for \\(U\\), \\(\\sigma _x\\)) and one circular (for \\(V\\), \\(\\sigma _y\\)).} They completely describe the state of an elliptically polarized monochromatic wave. \n\nWe can reverse their definitions as \n%\n\\begin{align}\nA &= \\sqrt{I}  \\\\\n\\sin 2 \\beta &= \\frac{V}{I} \\\\\n\\tan 2 \\chi &= \\frac{U}{Q}\n\\,.\n\\end{align}\n\nNow let us discuss the meaning of these parameters. 
Since \\(A = \\xi_0 \\), \\(I = \\xi_0^2\\), while the energy flux is given by \n%\n\\begin{align}\nS = \\frac{c}{4 \\pi } \\xi_0^2 = \\frac{c}{4 \\pi } I\n\\,.\n\\end{align}\n\nSo, the parameter \\(I\\) is directly proportional to the energy flux.\nThe major axes of the polarization ellipse are \\(2A \\cos \\beta \\) and \\(2 A \\sin \\beta \\) respectively; therefore their ratio \\(\\tan \\beta \\) measures the eccentricity of the ellipse, and the ratio \\(V/I\\) can be used to recover \\(\\beta \\).\n\nThe ratio \\(U/Q\\) describes the orientation  of the ellipse with respect to the \\(x\\), \\(y\\) axes.\nIf the wave is linearly polarized then  \\(V =0 \\) (this is a degenerate case for our description); if \\(V > 0\\) the polarization  is left-handed, while if \\(V< 0\\) the polarization is right-handed.\n\nIf we have \\(U = Q = 0\\), this corresponds to circular polarization. \n\nThe ellipse is fully determined by three numbers (\\(\\beta \\), \\(\\chi \\), \\(A\\)): why are there 4 Stokes parameters? They cannot all be independent if this is the case\\dots  \n\nIn fact, they are connected by the relation \n%\n\\begin{align}\nI^2= Q^2 + V^2+ U^2\n\\,.\n\\end{align}\n\nThe reason we use four parameters instead of just three is that this line of reasoning only holds for perfectly polarized monochromatic waves: in the general case, for partially polarized radiation, we have \n%\n\\begin{align}\nI^2 \\geq Q^2 + V^2 + U^2\n\\,.\n\\end{align}\n\nFor completely \\emph{un}polarized light we have \\(Q = V = U = 0\\). \n\nA useful property is that these parameters are \\textbf{additive}: the superposition of waves with different Stokes parameters can be the described by the sum of the Stokes parameters, as \\(I _{\\text{total}} = \\sum _{k} I_k\\) and so on. \n\nThis suggests that we write them in a vector: \n%\n\\begin{align}\n\\vec{S} = \\left[\\begin{array}{c}\nI \\\\ \nQ \\\\ \nU \\\\ \nV\n\\end{array}\\right]\n\\,.\n\\end{align}\n\nIn general, we can decompose radiation which is partially polarized into a completely unpolarized part and a completely polarized part: \n%\n\\begin{align}\n\\vec{S} = \\underbrace{\\left[\\begin{array}{c}\nI - \\sqrt{Q^2+V^2+U^2} \\\\ \n0 \\\\ \n0 \\\\ \n0\n\\end{array}\\right]}_{\\text{unpolarized}}\n+ \n\\underbrace{\\left[\\begin{array}{c}\n\\sqrt{Q^2+V^2+U^2} \\\\ \nQ \\\\ \nU \\\\ \nV\n\\end{array}\\right]}_{\\text{polarized}}\n\\,.\n\\end{align}\n\nThis allows us to define the \\textbf{polarization degree}: the fraction of the radiation which is polarized, calculated as \n%\n\\begin{align}\n\\Pi_{L} = \\frac{I _{\\text{polarized}}}{I} = \\frac{\\sqrt{Q^2+U^2+V^2}}{I}\n\\,,\n\\end{align}\n%\nwhich is measurable with an instrument.\n\n\\subsubsection{Electromagnetic potentials}\n\nWe can express the electric and magnetic fields through the scalar and vector potentials \\(\\phi \\) and \\(\\vec{A}\\): \n%\n\\begin{align}\n\\vec{B} = \\vec{\\nabla} \\times \\vec{A} \n\\qquad \\text{and} \\qquad\n\\vec{E} = - \\vec{\\nabla} \\phi - \\frac{1}{c} \\pdv{\\vec{A}}{t}\n\\,.\n\\end{align}\n\nIn terms of these potentials Maxwell's equations read \n%\n\\begin{align}\n\\nabla^2 \\phi - \\frac{1}{c^2} \\pdv[2]{\\phi }{t} &= - 4 \\pi \\rho \\\\\n\\nabla^2 \\vec{A} - \\frac{1}{c^2} \\pdv[2]{\\vec{A}}{t} &= - \\frac{4 \\pi }{c} \\vec{j} \n\\,.\n\\end{align}\n\nThe potentials are redundant, we can impose some conditions on them while still being able to describe any physical system. 
One condition we can impose is the \\emph{Lorentz gauge}: \n%\n\\begin{align}\n\\vec{\\nabla} \\cdot \\vec{A} + \\frac{1}{c} \\pdv{\\phi }{t} = 0\n\\,,\n\\end{align}\n%\nunder which we can get a formal solution for the potentials in terms of the charge and current densities: \n%\n\\begin{align}\n\\phi (\\vec{r}, t) &= \\int \\frac{[ \\rho ]}{\\abs{\\vec{r} - \\vec{r}'}} \\dd[3]{r'} \\\\\n\\vec{A} (\\vec{r}, t) &= \\frac{1}{c} \\int \\frac{[ \\vec{j} ]}{\\abs{\\vec{r} - \\vec{r}'}} \\dd[3]{r'}\n\\,.\n\\end{align}\n\nThe square brackets are a notation meaning that the densities must be evaluated at the \\emph{retarded time} \\(t - \\abs{\\vec{r} - \\vec{r}'} / c\\). \nThis accounts for the finite speed of the propagation of light. \n\n\\subsubsection{Radiation from moving charges}\n\nConsider a particle of mass \\(m\\) and charge \\(q\\) moving along a path described by the vector \\(\\vec{r}_0 (t)\\). The charge and current densities will then be \n%\n\\begin{align}\n\\rho (\\vec{r}, t) &= q \\delta (\\vec{r} - \\vec{r}_0(t)) \\\\\n\\vec{j} (\\vec{r}, t) &= q \\underbrace{\\dv{\\vec{r}_0}{t}}_{\\vec{u}(t)} \\delta (\\vec{r} - \\vec{r}_0(t))\n\\,.\n\\end{align}\n\nFrom these expressions we can calculate the potentials generated by the moving charge. Let us introduce the relative radius \\(\\vec{R} = \\vec{r} - \\vec{r}_0\\) and its corresponding unit vector \\(\\vec{n} = \\vec{R} / \\abs{\\vec{R}}\\). \n\nThe potentials will then be \n%\n\\begin{align}\n\\phi = \\qty[\\frac{q}{kR}] \\qquad \\text{and} \\qquad\n\\vec{A} = \\qty[\\frac{q \\vec{n}}{ckR}]\n\\,,\n\\end{align}\n%\nwhere \\(k = 1 - \\vec{n} \\cdot \\vec{u} / c\\), while \\(R = \\abs{\\vec{R}}\\).\nThese are called the \\textbf{Li\u00e9nard-Wiechert} potentials. \nFrom them we can perform the differentiation in order to recover the fields, which will be called the Li\u00e9nard-Wiechert fields. \nThe final expressions for the fields are \n%\n\\begin{align}\n\\vec{E} (\\vec{r}, t) &= q \\qty[\\frac{(\\vec{n} - \\vec{\\beta}) (1- \\beta^2)}{ k^3 R^2}] + \\frac{q}{c} \\qty[\\frac{\\vec{n}}{k^3R} \\times \\qty((\\vec{n} - \\vec{\\beta}) \\times \\dot{\\vec{\\beta}})] \\\\\n\\vec{B}(\\vec{r}, t) &= \\qty[\\vec{n} \\times \\vec{E}]\n\\,,\n\\end{align}\n%\nwhere \\(\\vec{\\beta} = \\vec{u} / c\\).\nThe important thing to notice here is that there are two contributions to the electric field: one goes as \\(R^{-2}\\) and is called the \\textbf{Coulomb field} \\(\\vec{E}_c\\), and the other goes as \\(R^{-1}\\) and is called the \\textbf{radiation field} \\(\\vec{E}_r\\).\n\nFrom now on we will mainly discuss the radiation field. Since the magnetic field can be found by taking a cross product with the electric field, the same decomposition applies there.\n\nThe radiation field \\(\\vec{E}_r\\) must be perpendicular to \\(\\vec{n}\\), and so must \\(\\vec{B}_r\\); together with \\(\\vec{n}\\) they form an orthogonal triple. \n\nLet us consider some simple cases. \n\n\\subsubsection{Nonrelativistic charge}\n\nLet us first suppose that the particle is moving slowly, \\(\\abs{u}/ c = \\abs{\\beta } \\ll 1 \\): this means that \\(\\abs{\\vec{n} - \\beta } \\sim 1\\) and also \\(k \\sim 1\\). \n\nWith these simplifications we find that the ratio of the magnitudes of the two components of the electric field is  \n%\n\\begin{align}\n\\frac{E_r}{E_c} \\sim \\frac{R \\dot{u}}{c^2}\n\\,.\n\\end{align}\n\nIf the characteristic time across which the velocity of the particle changes is \\(\\tau \\), then we have \\(\\dot{u} \\sim u / \\tau\\). 
\nThis time \\(\\tau \\) will be associated with a characteristic frequency \\(\\nu = 1 / \\tau \\) and a characteristic wavelength \\(\\lambda = c / \\nu \\). \n\\todo[inline]{\\dots and this is the wavelength of the emitted radiation? is this always the case? it does not seem that obvious}\n\nThen we will get \n%\n\\begin{align}\n\\frac{E_r}{E_c} \\sim \\frac{R u \\nu }{c^2} \\sim \\frac{R u}{\\lambda c}\n\\,,\n\\end{align}\n%\nso we can see that if \n%\n\\begin{align}\n\\frac{R}{\\lambda } \\lesssim 1\n\\,\n\\end{align}\n%\nthen we have \\(E_r / E_c \\lesssim u /c\\). This defines an inner region around the emitter for which the Coulomb field dominates; outside it the radiation field dominates. \n\nTypically, we are far enough from the emitters of radiation so that the radiation field dominates. \nThen, under the assumption \\(\\beta \\ll 1\\) we have  \n%\n\\begin{align}\n\\vec{E}_r = \\frac{q}{Rc^2} \\qty[\\vec{n} \\times \\qty(\\vec{n} \\times \\dot{\\vec{u}})]\n\\qquad \\text{and} \\qquad\n\\vec{B}_r = \\qty[\\vec{n} \\times \\vec{E}_r]\n\\,.\n\\end{align}\n\n\\end{document}\n", "meta": {"hexsha": "a1137f9e3d63757b3cc799b7be3e29b4e285cae0", "size": 9926, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ap_second_semester/radiative_processes/mar25.tex", "max_stars_repo_name": "jacopok/notes", "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_issues_repo_path": "ap_second_semester/radiative_processes/mar25.tex", "max_issues_repo_name": "jacopok/notes", "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ap_second_semester/radiative_processes/mar25.tex", "max_forks_repo_name": "jacopok/notes", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "avg_line_length": 37.8854961832, "max_line_length": 490, "alphanum_fraction": 0.6595808987, "num_tokens": 3348, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7606506526772883, "lm_q1q2_score": 0.5537350003433736}}
{"text": "Variational autoencoders \\citep[VAEs; ][]{Kingma2014,Rezende2014} are a popular class of deep latent-variable models. The VAE assumes that observations $\\x$ are generated by first sampling a latent vector $\\z$ from some tractable prior $\\p(\\z)$, and then sampling $\\x$ from some tractable distribution $\\p(\\x \\given \\decoder_\\genparam(\\z))$.\nFor example, $\\decoder_\\genparam(z)$ could be a neural network with weights $\\genparam$ and $\\p(\\x \\given \\decoder_\\genparam(\\z))$ might be a Gaussian with mean $\\decoder_\\genparam(\\z)$.\n\nVAEs, like other unsupervised latent-variable models \\citep[e.g.; ][]{Tipping1999,Blei2003}, can uncover latent structure in datasets.\nIn particular, one might hope that high-level characteristics of the data are encoded more directly in the geometry of the latent space $\\z$ than they are in the data space $\\x$. For example, when modeling faces one might hope that one latent dimension corresponds to pose, another to hair length, another to gender, etc. \n\nWhat kind of latent structure will the VAE actually discover?\n\\citet{Hoffman2016} observe that the ELBO encourages the model to make the statistics of the population of encoded $\\z$ vectors resemble those of the prior, so that $\\p(\\z)\\approx \\E_\\mathrm{population}[\\p(\\z \\given \\x)]$.\nThe prior $\\p(\\z)$ therefore plays an important role in shaping the geometry of the latent space.\nFor example, if we use the ``default'' prior $\\p(\\z)=\\N(\\z; 0, I)$, then we are asking the model to explain the data in terms of smoothly varying, completely independent factors \\citep{Burgess2018}. These constraints may sometimes be reasonable---for example, geometric factors such as pose or lighting angle may be nearly independent and rotationally symmetric. But some natural factors exhibit dependence structure (for example, facial hair length and gender are strongly correlated), and others may have nonsmooth structure (for example, handwritten characters naturally cluster into discrete groups).\n\nIn this paper, we propose using a more opinionated prior on the VAE's latent vectors: the time-marginalized coalescent \\citep[TMC; ][]{Boyles2012}.\nThe TMC is a powerful, interpretable Bayesian nonparametric hierarchical clustering model that can encode rich discrete and continuous structure.\nCombining the TMC with the VAE combines the strengths of Bayesian nonparametrics (interpretable, discrete structure learning) and deep generative modeling (freedom from restrictive distributional assumptions).\n\n\n% We can consider the models from an information-theoretic perspective \\citep{alemi18brokenelbo}:\n% a classical autoencoder will try to make $z$ encode as much information about $x$ as possible, subject to the constraints imposed by the training procedure and function class, while\n% the evidence lower bound (ELBO) used to train VAEs adds a regularization term that penalizes the model for encoding too much information in $z$.\n% But this says nothing about the geometry of the latent space, which is critical for downstream applications. 
For example, if we want to use the inferred $z$ vectors for a classification task we may want them to organize into well-separated clusters.\n\n\n% If there are multiple factors of variation in the data, we claim that the model will choose to model those whose statistics\n\n% For example, \\citet{burgess2018understanding} observe that the common normal prior $\\p(z) = \\N(z; 0, I)$ encourages the model to learn \\emph{disentangled} representations---this \n\n\n% This implies that the prior $\\p(z)$ plays an important role in determining what kind of latent structure the VAE discovers.\n\n\n% Approximate maximum-likelihood training of VAEs can encourage the latent vector $z$ to have high mutual information with the observations $x$ \\citep{alemi18brokenelbo}, while encouraging the model to make the statistics of the encoded $z$ vectors resemble those of the prior so that $\\p(z)\\approx\\mathbb{E}_x[\\p(z\\mid x)]$ \\citep{hoffman2016elbo}.\n\n\n% If all goes well, the posterior $\\p(z\\mid x)$ will encode useful information about $x$.\n\n% find an encoding scheme such that the aggregate posterior $\\tilde \\p(z)\\triangleq \\frac{1}{N}\\sum_\\n $\n\n% encode information about $x$ in $z$ in a way that conforms to the (usually simple) \n\n% %One advantage of VAEs over other powerful deep density estimators based on autoregressive models \\citep[e.g.; ][]{salimans2017pixelcnn++,oord2016wavenet,papamakarios2017maf} is that the latent vectors $z$ may encode \n\n% - Representation learning with VAEs, latent space encodes semantics\n\n% - Often times people use generic marginals (N(0, I) beta VAE, (factors may not be independent hair length, beard length)\n\n% - Alternatively use a powerful prior (MAF), but encode smoothness assumptions but aren't appropriate if data has natural latent discrete structure\n\nOur contributions are:\n\\begin{itemize}\n    \\item We propose a deep Bayesian nonparametric model that can discover hierarchical cluster structure in complex, high-dimensional datasets.\n    \\item We develop a minibatch-friendly inference procedure for fitting TMCs based on an inducing-point approximation, which scales to arbitrarily large datasets.\n    \\item We show that our model's learned latent representations consistently outperform those learned by other variational (and classical) autoencoders when evaluated on downstream classification and retrieval tasks.\n\\end{itemize}\n\n\\section{Background}\n\n\\subsection{Bayesian priors for hierarchical clustering}\n\\label{sec:bnhc}\n\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\includegraphics[frame, width=\\textwidth]{img/loracs/tmc/tmc-small-cropped}\n\\end{subfigure}\n~\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\reflectbox{\\includegraphics[frame, width=\\textwidth]{img/loracs/tmc/tmc-big-1-cropped}}\n\\end{subfigure}\n\\caption{Independent samples from a time-marginalized coalescent (TMC) prior and two-dimensional Gaussian random walk likelihood model (10 and 300 leaves respectively). \nContours in the plots correspond to posterior predictive density $r(\\z_{\\numdata + 1} \\given \\dataset, \\tree)$.}\n\\label{fig:tmc-samples}\n\\end{figure*}\n\nHierarchical clustering is a flexible\ntool in exploratory data analysis\nas trees offer visual, interpretable\nsummaries of data. 
Typically,\nalgorithms for hierarchical clustering are\neither agglomerative\n(where data are recursively, greedily merged to form\na tree from the bottom-up)\nor divisive (where data\nare recursively partitioned, forming a tree from the\ntop-down). Bayesian nonparametric hierarchical clustering (BNHC) \nadditionally incorporates uncertainty over tree\nstructure by introducing\na prior distribution over trees $r(\\tree)$ and\na likelihood model for data $r(\\latentdataset \\given \\tree)$,\nwith the goal of sampling\nthe posterior distribution $r(\\tree \\given \\latentdataset)$.\\footnote{We use $r$ to denote probability distributions\nrelating to the TMC and distinguish from $p$ and $q$\ndistributions used later in the paper.}\n\nIn this paper, we focus on\nrooted binary trees with $\\numdata$ labeled leaves \nadorned with branch lengths,\ncalled \\emph{phylogenies}.\nPrior distributions over phylogenies\noften take the form of a stochastic generative\nprocess in which a tree is built\nwith random merges, as in the Kingman coalescent \\citep{Kingman1982},\nor random splits, as in the Dirichlet diffusion tree \\citep{Neal2003}.\nThese nonparametric distributions have\nhelpful properties, such as exchangeability,\nwhich enable efficient Bayesian inference.\nIn this paper, we focus on the time-marginalized\ncoalescent \\citep[TMC; ][]{Boyles2012}, which decouples the\ndistribution over tree structure and branch length,\na property that helps simplify inference down the line.\n\n\\textbf{Time-marginalized coalescent (TMC):}\nThe time-marginalized coalescent defines a prior distribution over phylogenies.\nA phylogeny $\\tree = (V, E, T)$ is a directed rooted full binary tree, with vertex set $V$ and edges $E$, together with time labels $T: V \\to [0, \\, 1]$ where we denote $t_v = T(v)$.\nThe vertex set $V$ is partitioned into $\\numdata$ leaf vertices $V_\\text{leaf}$ and $\\numdata-1$ internal vertices $V_\\text{int}$, so that $V = V_\\text{int} \\cup V_\\text{leaf}$, and we take $V_\\text{leaf} = \\{1, 2, \\ldots, \\numdata\\}$ to simplify notation for identifying leaves with $\\numdata$ data points.\nThe directed edges of the tree are encoded in the edge set $E \\subset V_\\text{int} \\times V$, where we denote the root vertex as $v_\\text{root}$ and for $v \\in V \\setminus \\{v_\\text{root}\\}$ we denote the parent of $v$ as $\\pi(v) = w$ where $(w, v) \\in E$.\n\nThe TMC samples a random tree structure $(V, E)$ by a stochastic process in which the $\\numdata$ leaves are recursively merged uniformly at random until only one vertex is left.\nThis process yields the probability mass function on valid $(V, E)$ pairs given by\n\\begin{equation}\n    r(V, E) = \\frac{(\\numdata - 1)!}{\\prod_{v \\in V_\\text{int}} c(v)} \\prod_{i = 1}^{\\numdata-1} {i+1 \\choose 2}^{-1},\n\\end{equation}\nwhere $c(v)$ denotes the number of internal vertices in the subtree rooted at $v$.\nGiven the tree structure, time labels are generated via the stick-breaking process\n\\begin{equation}\n    t_v = \\begin{cases} 0 & v = v_\\text{root}, \\\\ 1 & v \\in V_\\text{leaf}, \\\\ t_{\\pi(v)} + \\beta_v (1 - t_{\\pi(v)}) & v \\in V_\\text{int} \\setminus \\{v_\\text{root}\\}, \\end{cases}\n\\end{equation}\nwhere $\\beta_v \\iid \\sim \\mathrm{Beta}(a, b)$ for $v \\in V_\\text{int} \\setminus \\{v_\\text{root}\\}$. These time labels encode a branch length $t_v - t_{\\pi(v)}$ for each edge $e = (\\pi(v), v) \\in E$. 
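\n\nTo make this generative process concrete, the following is a minimal \\texttt{python} sketch that draws a tree structure by uniform random merges and then assigns time labels by the stick-breaking process (assuming \\texttt{numpy}; the node bookkeeping is simplified and is not the notation of \\citet{Boyles2012}):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef sample_tmc(n_leaves, a=2.0, b=2.0):\n    # Uniform random merges: leaves are 0..n_leaves-1, internal\n    # vertices are appended afterwards.\n    children, active, next_id = {}, list(range(n_leaves)), n_leaves\n    while len(active) > 1:\n        i, j = rng.choice(len(active), size=2, replace=False)\n        left, right = active[i], active[j]\n        children[next_id] = (left, right)\n        active = [v for v in active if v not in (left, right)] + [next_id]\n        next_id += 1\n    root = active[0]\n\n    # Stick-breaking times: t(root) = 0, t(leaf) = 1, and\n    # t(v) = t(parent) + Beta(a, b) * (1 - t(parent)) for internal v.\n    times = {root: 0.0}\n    def assign(v):\n        for c in children.get(v, ()):\n            if c < n_leaves:\n                times[c] = 1.0\n            else:\n                times[c] = times[v] + rng.beta(a, b) * (1.0 - times[v])\n                assign(c)\n    assign(root)\n    return children, times, root\n\\end{verbatim}\nBranch lengths are then read directly off the sampled times as $t_v - t_{\\pi(v)}$.\n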
We denote the overall density on phylogenies with $\\numdata$ leaves as $\\mathrm{TMC}_\\numdata(\\tree; a, b)$.\n\nFinally, to connect the TMC prior to data in $\\R^\\d$, we define a likelihood model $r(\\latentdataset \\given \\tree)$ on $\\numdata$ data points, with $\\z_\\n$ corresponding to the leaf vertex $\\n \\in V_\\text{leaf}$.\nWe use a Gaussian random walk (GRW), where for each vertex $v \\in V$ a location $\\z_v \\given \\z_{\\pi(v)}$ is sampled according to a Gaussian distribution centered at its parent's location with variance equal to the branch length,\n\\begin{equation}\n    \\z_v \\given \\z_{\\pi(v)} \\sim \\N(\\z_{\\pi(v)}, (t_v - t_{\\pi(v)})I), \\quad v \\in V \\setminus \\{v_\\text{root}\\},\n    \\notag\n\\end{equation}\nand we take $\\z_{v_\\text{root}} \\sim \\N(0, I)$.\nAs a result of this choice, we can exploit the Gaussian graphical model structure to efficiently marginalize out the internal locations $\\z_v$ associated with internal vertices $v \\in V_\\text{int}$ and evaluate the resulting marginal density $r(\\latentdataset \\given \\tree)$. For details\nabout this marginalization, please refer to \\autoref{sec:algorithm-details}.\nThe final overall density is written as\n\\begin{equation}\n    r(\\latentdataset, \\tree) = \\mathrm{TMC}_\\numdata(\\tree; a, b)r(\\latentdataset \\given \\tree).\n\\end{equation}\nFor further details and derivations related to the TMC,\nplease refer to \\citet{Boyles2012}.\n\n\\textbf{TMC posterior predictive density:}\nThe TMC with $\\numdata$ leaves and a GRW likelihood model can be a prior on a set of $\\numdata$ hierarchically-structured data, i.e. data that correspond to nodes with small tree distance should have similar location values.\nIn addition, it also acts as a density from which we can sample new data.\nThe posterior predictive density $r(\\z_{\\numdata + 1} \\given \\latentdataset, \\tree)$ is easy to sample thanks to the exchangeability of the TMC.\n\nTo sample a new data point $\\z_{\\numdata + 1}$, we select a branch (edge) and a time to attach a new leaf node.\nThe probability $r(e_{\\numdata+1} \\given V, E)$ of selecting branch $e_{\\numdata+1}$ is proportional to the probability under the TMC prior of the tree with a new leaf attached to branch $e_{\\numdata+1}$.\nThe density $r(t_{\\numdata+1} \\given e_{\\numdata+1}, V, E)$ for a time label $t_{\\numdata+1}$ is determined by the stick-breaking process (see \\autoref{sec:algorithm-details} for details).\n% given by a shifted-and-scaled $\\mathrm{Beta}(a, b)$ distribution and inserting it into the stick-breaking process, creating a new time $t_{N+1}$ for the node attached to branch $b_{N+1}$.\nBoth of these probabilities are easy to calculate and sample due to the exchangeability of the TMC.\n\nThe new location $\\z_{\\numdata + 1}$ can be sampled from  $r(\\z_{\\numdata + 1} \\given e_{\\numdata + 1}, t_{\\numdata + 1}, \\tree)$, which is\nthe Gaussian distribution that comes out of the GRW likelihood model.\nPictured in \\autoref{fig:tmc-samples} are samples from a TMC prior and GRW likelihood, where contours correspond to $r(\\z_{\\numdata + 1} \\given \\latentdataset, \\tree)$.\nIn addition to modeling hierarchical structure, the TMC is a flexible nonparametric density estimator.\n\n\\textbf{TMC inference:}\nThe posterior distribution $r(\\tree \\given \\latentdataset)$\nis analytically intractable due to the normalization constant\n$r(\\latentdataset)$ involving a sum over all tree structures,\nbut it can be approximately sampled via Markov chain 
Monte-Carlo (MCMC) methods.\n% In MCMC, we construct a proposal distribution\n% $T(\\tree' \\given \\tree)$\n% a Markov chain with transition dynamics $T$\n% has joint distribution $r(\\tree, z_{1:N})$.\nWe utilize the Metropolis-Hastings\nalgorithm with a subtree-prune-and-regraft (SPR)\nproposal distribution \\citep{Neal2003}. An SPR proposal\npicks a subtree uniformly at random from $\\tree$\nand detaches it.\nIt is then attached back on the tree\nto a branch and time picked uniformly at random.\n% The new tree $\\tree'$ is accepted as the next\n% state in the Markov chain\n% with probability $\\min\\left(1, \\frac{r(\\tree', z_{1:N})T(\\tree \\given \\tree')}{r(\\tree, z_{1:N})T(\\tree' \\given \\tree)}\\right)$.\nThe Metropolis-Hastings acceptance probability is efficient to compute because the joint density $r(\\tree, \\latentdataset)$ can be evaluated using belief propagation to marginalize the latent values at internal nodes of $\\tree$, and many of the messages can be cached.\nSee \\autoref{sec:algorithm-details} for details.\n\n\\subsection{Variational autoencoder}\nThe variational autoencoder (VAE) is a generative model\nfor a dataset $\\dataset$\nwherein latent vectors $\\latentdataset$ are sampled\nfrom a prior distribution\nand then individually passed into\na neural network observation model with parameters $\\genparam$,\n\\begin{equation}\n    \\begin{split}\n        \\latentdataset \\sim \\p(\\latentdataset),\n        \\qquad\n        \\x_\\n \\given \\z_\\n \\sim \\p_\\genparam(\\x_\\n \\given \\z_\\n),\n    \\end{split}\n\\end{equation}\nWe are interested in the posterior distribution\n$\\p(\\z_\\n \\given \\x_\\n)$, which is not analytically tractable\nbut can be approximated with a variational distribution\n$\\q_\\recparam(\\z_\\n \\given \\x_\\n)$, typically a neural network\nthat outputs parameters of a Gaussian distribution.\nThe weights of the approximate posterior can be learned\nby optimizing the evidence-lower bound (ELBO),\n\\begin{equation}\n    \\begin{split}\n    \\L[\\q] &\\triangleq \\E_\\q \\left[\\log \\frac{\\p_\\genparam(\\dataset, \\latentdataset)}{\\prod_\\n \\q_\\recparam(\\z_\\n \\given \\x_\\n)}\\right] \\\\\n          %&= \\left(\\sum_{n = 1}^N \\E_\\q\\left[\\log \\p(\\x_\\n \\given \\z_\\n)\\right]\\right) - \\KL{q_\\recparam(z_{1:N} \\given x_{1:N})}{\\p(z_{1:N})}.\n    \\end{split}\n\\end{equation}\nThe parameters of the model, $\\genparam$ and $\\recparam$,\nare learned via stochastic gradient ascent on\nthe ELBO, using the reparametrization trick\nfor lower variance gradients \\citep{Kingma2014, Rezende2014}.\n\n\\section{The TMC-VAE}\n\\label{sec:tmc-vae}\n\nThe choice of prior distribution in the VAE significantly affects the autoencoder and resulting latent space.\nThe default standard normal prior, which takes $\\z_\\n \\iid\\sim \\N(0, I)$, acts as a regularizer on an otherwise unconstrained autoencoder, but can be restrictive and result in overpruning \\citep{Burda2015}.\nExtremely flexible, learnable distributions like masked autoregressive flow (MAF) priors \\citep{Papamakarios2017} enable very rich latent spaces, but don't encode any interpretable bias for organizing the latent space (except perhaps smoothness).\n\nIn this paper, we explore the TMC prior for the VAE, which could potentially strike a sweet spot between restrictive and flexible priors.\nWe generate the latent values $\\latentdataset$ of a VAE according to the TMC prior, then generate observations $\\dataset$ using a neural network observation 
model,\n\\begin{gather}\n        \\tree \\sim \\mathrm{TMC}_\\numdata(\\tree; a, b), \\\\\n        \\latentdataset \\given \\tree \\sim r(\\latentdataset \\given \\tree),\n        \\qquad\n        \\x_\\n \\given \\z_\\n \\sim \\p_\\genparam(\\x_\\n \\given \\z_\\n).\n\\end{gather}\nThe TMC-VAE is a coherent\ngenerative process that captures\ndiscrete, interpretable structure in the latent space.\nA phylogeny not only has an intuitive inductive bias,\nbut can be useful for\nexploratory data analysis\nand introspecting the latent space itself.\n\nConsider doing inference in this model:\nfirst assume variational distributions \n$\\q_\\recparam(\\z_\\n \\given \\x_\\n)$ (as in the VAE)\nand $\\q(\\tree)$, which results in the\nELBO\n\\begin{equation}\n    \\begin{split}\n    \\L[\\q] &= \\E_\\q\\left[\\log \\frac{\\p(\\tree, \\latentdataset, \\dataset)}{\\q(\\tree)\\prod_\\n \\q(\\z_\\n \\given \\x_\\n)}\\right].\n    \\end{split}\n\\end{equation}\nFor fixed $\\q_\\recparam(\\z_\\n \\given \\x_\\n)$, we can\nsample the optimal $\\q^*(\\tree)$,\n\\begin{equation}\n    \\begin{split}\n    \\q^*(\\tree) &\\propto \\exp{\\E_\\q\\left[\\log \\p(\\tree, \\latentdataset, \\dataset)\\right]} \\\\\n              &\\propto \\exp{\\E_\\q\\left[\\log \\p(\\tree)\\p(\\latentdataset \\given \\tree)\\right]} \\\\\n              &\\propto \\exp{\\log \\p(\\tree) + \\E_\\q\\left[\\log \\p(\\latentdataset \\given \\tree)\\right]}.\n    \\end{split}\n\\end{equation}\nBecause $\\p(\\latentdataset \\given \\tree)$ is jointly Gaussian (factorizing\naccording to tree structure) and $\\q_\\recparam(\\z_\\n \\given \\x_\\n)$ is Gaussian, \nexpectations with respect to $\\latentdataset$\ncan move into $\\log \\p(\\tree, \\latentdataset, \\dataset)$. 
This\nenables sampling the expected joint likelihood $\\E_\\q\\left[\\log \\p(\\tree, \\latentdataset)\\right]$\nusing SPR Metropolis-Hastings.\nHowever, optimizing this ELBO is problematic.\n$\\p(\\latentdataset \\given \\tree)$\ndoes not factorize across data points, so unbiased\ngradient estimates cannot be computed from minibatches;\nevery gradient evaluation requires the entire dataset.\nFurthermore, the TMC is limiting\nfrom a computational perspective.\nSince a phylogeny has as many leaves\nas points in the dataset, \nbelief propagation over internal nodes\nof the tree slows down linearly as \nthe size of the dataset grows.\nIn addition,\n% the search space of trees\n% grows exponentially with the size of the dataset,\n% rendering MCMC unusable with extremely\nSPR proposals mix very slowly for large trees.\nWe found these limitations \nmake the model impractical for datasets\nof more than 1000 examples.\n\nIn the next section, we address these computational\nissues, while retaining\nthe interesting properties of the TMC-VAE.\n\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}[t]{0.3\\textwidth}\n    \\centering\n    \\includegraphics{tikz/loracs/ltmc.tikz}\n    \\caption{TMC-VAE graphical model}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.34\\textwidth}\n    \\centering\n    \\includegraphics{tikz/loracs/iltmc.tikz}\n    \\caption{\\acronym-VAE graphical model}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.34\\textwidth}\n    \\centering\n    \\includegraphics{tikz/loracs/iltmc-v.tikz}\n    \\caption{\\acronym-VAE variational factors}\n\\end{subfigure}\n\\caption{Graphical models and variational\napproximations for TMC models described in the paper}\n\\label{fig:graphical-models}\n\\end{figure*}\n\n\\section{\\acronym\\;prior for VAEs}\n\nIn this section, we introduce\na novel approximation to the TMC\nprior, which preserves many \ndesirable properties\nlike structure and interpretability,\nwhile being computationally viable.\nOur key idea is to use a set of\nlearned \\emph{inducing points}\nas the leaves of the tree in the latent\nspace, analogous to inducing-input\napproximations for Gaussian processes \\citep{Snelson2006}. 
\nIn this model, latent vectors $\\latentdataset$\nare not directly hierarchically clustered,\nbut are rather independent samples\nfrom the induced posterior predictive density\nof a TMC.\nWe call this the \\acronymexplanation\\;(\\emph{\\acronym}, pronounced ``\\acronympronunciation'') prior.\n\nTo define the \\acronym\\;prior $\\p(\\tree, \\dataset)$, we first define an auxiliary TMC distribution $r(\\tree, \\inducingpoints)$ with $M$ leaf locations $\\inducingpoints$.\nWe treat $\\inducingpoints$ as a set of learnable free parameters, and define the conditional $r(\\tree \\given \\inducingpoints)$ as the \\acronym\\;prior on phylogenies $\\tree$:\n\\begin{equation}\n%   \\tree \\sim \\p(\\tree ; \\inducingpoints) \\triangleq r(\\tree \\given \\inducingpoints).\n  \\p(\\tree ; \\inducingpoints) \\triangleq r(\\tree \\given \\inducingpoints).\n\\end{equation}\nThat is, we choose the prior on phylogenies $\\tree$ to be the posterior distribution of a TMC with pseudo-observations $\\inducingpoints$.\nNext, we define the \\acronym\\;prior on locations $\\z_\\n\\given\\tree$ as a conditionally independent draw from the predictive distribution $r(s_{M+1} \\given \\tree, \\inducingpoints)$, writing the sampled attachment branch and time as $e_\\n$ and $t_\\n$, respectively:\n\\begin{gather}\n    % b_\\n, t_\\n \\given \\tree \\sim \\p(b_\\n, t_\\n \\given \\tree)\n    \\p(e_\\n, t_\\n \\given \\tree)\n    \\triangleq r(e_{M+1} = e_\\n, t_{M+1} = t_\\n \\given \\tree),\n    \\notag\n    \\\\\n    % \\z_\\n \\given b_\\n, t_\\n \\tree \\sim \\p(\\z_\\n \\given b_\\n, t_\\n, \\tree)\n    \\p(\\z_\\n \\given e_\\n, t_\\n, \\tree)\n    \\triangleq r(\\inducingpoint_{M+1} = \\z_\\n \\given e_\\n, t_\\n, \\tree, \\inducingpoints).\n\\end{gather}\nTo complete the model, we use an observation likelihood parameterized by a neural network, writing\n\\begin{equation}\n    \\x_\\n \\given \\z_\\n \\sim \\p_\\genparam(\\x_\\n \\given \\z_\\n).\n\\end{equation}\n%\n% We introduce a set of \n% $M$ learnable inducing points $\\inducingpoints$\n% that are leaves of a phylogeny $\\tree$,\n% and adopt the following model:\n%\n% \\todo{simplify this model description}\n% \\begin{equation}\n%     \\begin{split}\n%         \\tree &\\sim \\p(\\tree ; \\inducingpoints) \\triangleq r(\\tree \\given \\inducingpoints) \\\\\n%         \\z_\\n \\given \\tree &\\sim \\p(\\z_\\n \\given \\tree; \\inducingpoints) \\triangleq r(s_{M + 1} = \\z_\\n \\given \\tree, \\inducingpoints) \\\\\n%         \\x_\\n \\given \\z_\\n &\\sim p_\\genparam(\\x_\\n \\given \\z_\\n)\n%     \\end{split}\n% \\end{equation}\n% We choose our prior\n% on phylogenies to be the posterior TMC\n% with leaves $\\inducingpoints$.\n% The prior on latent vectors $\\p(\\z_\\n \\given \\tree; \\inducingpoints)$\n% is the posterior predictive density for $\\tree$ and $\\inducingpoints$\n% and the observation model is parametrized by a neural network,\n% as in the VAE. Effectively, we have put a flexible,\n% nonparametric density estimator as our VAE prior\n% with the added benefit of learned discrete structure.\n%\n%By limiting the size of the tree to $M$ leaves,\n%we've circumvented some computational issues\n%issues that come with larger trees.\n%This comes at the cost of the\n%expressiveness and interpretability\n%of this prior distribution, but\n%this can be tuned by making $M$\n%larger. 
However, some additional\n%approximations will need to be made for tractable inference.\n%\n% For the purpose of inference, \n% we work with a more explicit version\n% of the model, where the branch and time of the\n% posterior predictive distribution are\n% are sampled explicitly.\n% \\todo{use this model for induced tmc}\n% \\begin{equation}\n%     \\begin{split}\n%         \\tree &\\sim \\p(\\tree ; \\inducingpoints) \\\\\n%         b_\\n, t_\\n &\\sim \\p(b_\\n, t_\\n \\given \\tree) \\\\\n%         \\z_\\n \\given b_\\n, t_\\n, \\tree &\\sim \\p(\\z_\\n \\given b_\\n, t_\\n, \\tree; \\inducingpoints) \\triangleq r(s_{M + 1} = \\z_\\n \\given b_\\n, t_\\n, \\tree) \\\\\n%         \\x_\\n \\given \\z_\\n &\\sim p_\\genparam(\\x_\\n \\given \\z_\\n)\n%     \\end{split}\n% \\end{equation}\n%\nBy using the learned inducing points $\\inducingpoints$, we avoid the main difficulty of inference in the TMC-VAE of \\autoref{sec:tmc-vae}, namely the need to do inference over all $\\numdata$ points in the dataset.\nInstead, dependence between datapoints is mediated by the set of inducing points $\\inducingpoints$, which has a size independent of $\\numdata$.\nAs a result, with the \\acronym\\;prior, minibatch-based learning becomes tractable even for very large datasets.\nThe quality of the approximation to the TMC-VAE can be tuned by adjusting the size of $M$.\n\nHowever, this technique presents its own inference challenges.\nSampling the optimal variational factor $\\q^*(\\tree)$\nis no longer an option as it was in the TMC-VAE:\n\\begin{equation}\n\\begin{split}\n    \\q^*(\\tree; \\inducingpoints) \n    &\\textstyle\\propto \\exp{\\E_\\q\\left[\\log \\p(\\tree, \\latentdataset, \\dataset)\\right]} \\\\\n    &\\textstyle\\propto \\exp{\\log \\p(\\tree ; \\inducingpoints) + \\sum_\\n \\E_\\q\\left[\\p(\\z_\\n \\given e_\\n, t_\\n, \\tree)\\right]} \\\\\n    &\\textstyle\\propto \\exp{\\log \\mathrm{TMC}_M(\\tree; a, b)\n    \\\\ &\\textstyle\\qquad\\quad + \\sum_{m = 1}^M \\log r(\\inducingpoint_m \\given \\inducingpoint_{1:m - 1}, \\tree)\n    \\\\ &\\textstyle\\qquad\\quad + \\sum_\\n \\E_\\q\\left[\\log \\p(\\z_\\n \\given e_\\n, t_\\n, \\tree)\\right]}.\n\\end{split}\n\\end{equation}\nThis term has a sum over $\\numdata$ expectations; therefore computing this likelihood\nfor the purpose of MCMC\nwould involve passing the entire dataset through a neural network.\nFurthermore, the normalizer for this likelihood is intractable,\nbut necessary for computing gradients w.r.t $\\inducingpoints$.\nWe therefore avoid using the optimal $\\q^*(\\tree; \\inducingpoints)$\nand set\n$\\q(\\tree; \\inducingpoints)$ to the prior.\nThis has the additional computational advantage of cancelling out\nthe $\\E_\\q[\\log \\p(\\tree)]$ term in the ELBO, which also has an intractable normalizing constant.\nIf the inducing points are chosen so that they contain most of the information about the hierarchical organization of the dataset, then the approximation $\\p(\\tree \\given \\z)\\approx r(\\tree \\given \\inducingpoints)=\\p(\\tree)$ will be reasonable.\n% Although this choice adds some approximation error, we still fit a distribution over tree structures by optimizing the inducing points $\\inducingpoints$ which parameterize the prior.\n\nWe also fit the variational factors\n$\\q(e_\\n)$,\n$\\q_{\\xi}(t_\\n \\given e_\\n, \\z_\\n; \\inducingpoints)$,\nand\n$\\q_\\recparam(\\z_\\n \\given \\x_\\n)$.\nThe factor for attachment times,\n$\\q_{\\xi}(t_\\n \\given e_\\n, \\z_\\n; \\inducingpoints)$, is a\nrecognition 
network that outputs\na posterior over attachment times for a particular branch.\nSince the $\\q(\\tree; \\inducingpoints)$ and $\\p(\\tree; \\inducingpoints)$ terms\ncancel out, we obtain the following ELBO (some notation suppressed for simplicity):\n\\begin{equation}\n    \\begin{split}\n    \\L[\\q] &\\triangleq \\E_\\q \\left[\\log \\frac{\\prod_\\n \\p(e_\\n, t_\\n \\given \\tree) \\p(\\z_\\n \\given e_\\n, t_\\n, \\tree)\\p(\\x_\\n \\given \\z_\\n)}{\\prod_\\n \\q(e_\\n)\\q(t_\\n \\given e_\\n, \\z_\\n)\\q(\\z_\\n \\given \\x_\\n)}\\right].\n    \\end{split}\n\\end{equation}\nThis ELBO can be optimized by first computing\n\\begin{equation}\n    \\q^*(e_\\n) \\propto \\exp{\\E_\\q\\left[\\log \\p(e_\\n \\given t_\\n, \\z_\\n, \\tree; \\inducingpoints)\\right]}\n\\end{equation}\nand then computing gradients with respect to $\\genparam$, $\\inducingpoints$,\n$\\recparam$, and $\\xi$ using a Monte-Carlo estimate of the ELBO\nbased on samples from $\\q(\\tree; \\inducingpoints)$, $q^*(e_\\n)$, $\\q_\\recparam(\\z_\\n \\given \\x_\\n)$, and $q_{\\xi}(t_\\n \\given e_\\n, \\z_\\n; \\inducingpoints)$.\nThe factor $\\q(\\tree; \\inducingpoints)$ can\nbe sampled using vanilla SPR Metropolis-Hastings. \nThe detailed inference procedure can be found in \\autoref{sec:algorithm-details}.\n\n\\section{Related work}\nAs mentioned above, \\acronym\\;connects various ideas in the literature, including Bayesian nonparametrics \\citep{Boyles2012}, inducing-point approximations \\citep[e.g.; ][]{Snelson2006,Tomczak2017},\nand amortized inference \\citep{Kingma2014, Rezende2014}.\n\nAlso relevant is a recent thread of efforts to endow VAEs with the interpretability of graphical models \\citep[e.g.; ][]{Johnson2016,Lin2018}.\nIn this vein, \\citet{Goyal2017} propose using a different Bayesian nonparametric tree prior, the nested Chinese restaurant process (CRP) \\citep{Blei2010}, in a VAE.\nWe chose to base \\acronym\\;on the TMC instead, as the posterior predictive distribution of an nCRP is a finite mixture, whereas the TMC's posterior predictive distribution has more complex continuous structure.\nAnother distinction is that \\citet{Goyal2017} only consider learning from pretrained image features, whereas our approach is completely unsupervised.\n\n\\section{Results}\nIn this section, we \nanalyze properties of the\n\\acronym\\;prior,\nfocusing on\nqualitative aspects, like exploratory data analysis and interpretability,\nand quantitative aspects, like few-shot classification and information retrieval.\n\n\\textbf{Experimental setup} We evaluated\nthe \\acronym\\;prior on three separate\ndatasets: dynamically binarized MNIST \\citep{Lecun1998}, Omniglot \\citep{Lake2015},\nand CelebA \\citep{Liu2015}. 
\nFor all three experiments,\nwe utilized convolutional/deconvolutional encoders/decoders\nand a 40-dimensional\nlatent space (detailed architectures can be found\nin \\autoref{sec:implementation-details}).\nWe used 200, 1000, and 500 inducing points for MNIST, Omniglot, and CelebA respectively \nwith TMC parameters $a = b = 2$.\n$\\q_{\\xi}(t_\\n \\given e_\\n, \\z_\\n; \\inducingpoints)$ was a\ntwo-layer 500-wide neural network\nwith ReLU activations that output parameters of a\nlogistic-normal distribution over stick size\nand all parameters were optimized with Adam \\citep{Kingma2014adam}.\nOther implementation details can be found in \\autoref{sec:implementation-details}.\n\n\\subsection{Qualitative results}\n\nA hierarchical clustering in the latent space\noffers a unique opportunity for interpretability\nand exploratory data analysis,\nespecially when the data are images.\nHere are some methods for users to obtain\nuseful data summaries and explore a dataset.\n\n\\textbf{Visualizing inducing points}\nWe first inspect the learned inducing points $\\inducingpoints$\nby passing them through the decoder.\nVisualized in \\autoref{fig:mnist-inducing}\nare the 200 learned inducing points for MNIST.\nThe inducing points are all unique\nand are cleaner than pseudo-input reconstructions from VampPrior (shown in\n\\autoref{fig:mnist-vamp-inducing-outputs}).\nInducing points can help summarize a\ndataset, as visualizations of the latent space\nindicate they spread out and cover the data\n(see \\autoref{fig:mnist-tsne-tree}). Inducing points\nare also visually unique and sensible\nin Omniglot and CelebA (see\n\\autoref{fig:omniglot-inducing-points} and \\ref{fig:celeba-inducing-points}).\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{img/loracs/mnist/mnist-inducing-points.png}\n\\caption{Learned inducing points for a \\acronym(200) prior on MNIST.}\n\\label{fig:mnist-inducing}\n\\end{figure}\n\n\\textbf{Hierarchical clustering}\nWe can sample\n$\\q(\\tree ; \\inducingpoints)$ to obtain\nphylogenies\nover the inducing points,\nand can visualize these clusterings\nusing the decoded inducing points;\nsubtrees from a sample in each dataset\nare visualized in \\autoref{fig:subtrees}.\nIn MNIST, we find large subtrees\ncorrespond to the discrete classes\nin the dataset. In Omniglot,\nsubtrees sometimes correspond to language groups\nand letter shapes. 
In CelebA,\nwe find subtrees sometimes correspond\nto pose or hair color and style.\n\nWe can further use the time at each internal node\nto summarize the data at many levels of granularity.\nConsider ``slicing'' the hierarchy\nat a particular time $t$ by\ntaking every branch $(\\pi(v), v) \\in E$ with $t_{\\pi(v)} \\leq t < t_v$\nand computing the corresponding expected Gaussian random walk value at time $t$.\nAt times closer to zero, we slice\nfewer branches and are closer to the root\nof the hierarchy, so the value at the slice\nlooks more like the mean of the data.\nIn \\autoref{fig:celeba-evolution}, \nwe visualize this process over a subset\nof the inducing points of CelebA.\nVisualizing the dataset in this way\nreveals cluster structure at \ndifferent granularities\nand offers an evolutionary interpretation\nof the data, as leaves that coalesce \nmore ``recently'' are likely to be closer in \nthe latent space.\n\nAlthough the hierarchical clustering is only\nover inducing points, we can still visualize\nwhere real data belong on the hierarchy\nby computing $\\q^*(e_\\n)$ and\nattaching the data to the tree.\nBy doing this for many data points, and removing\nthe inducing points from the tree,\nwe obtain an induced hierarchical clustering.\n%We visualize some example induced hierarchical clusterings\n%in \\autoref{fig:induced-clusters}.\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{img/loracs/mnist/subtree1.png}\n\\caption{MNIST}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{img/loracs/omniglot/subtree1.png}\n\\caption{Omniglot}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{img/loracs/celeba/subtree1.png}\n\\caption{CelebA}\n\\end{subfigure}\n\\caption{An example learned subtree from a sample of $\\q(\\tree; \\inducingpoints)$ for each dataset. \nLeaves are visualized by passing inducing points through the decoder.}\n\\label{fig:subtrees}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{img/loracs/celeba/celeba-evolution.png}\n\\caption{The evolution of the CelebA hierarchy over a subset of inducing points. We create\nthis visualization by taking slices of the tree at particular times\nand looking at the latent distribution \nat each of the sliced branches.}\n\\label{fig:celeba-evolution}\n\\end{figure}\n\n\\textbf{Generating samples} \nHaving fit a generative model to our data,\nwe can visualize samples from the model.\nAlthough we do not expect the samples\nto have fidelity and sharpness comparable\nto those from GANs or state-of-the-art\ndecoding networks \\citep{Radford2015, Salimans2017},\nsampling with the\n\\acronym\\;prior can help us understand the latent space.\nTo draw a sample from a TMC's posterior predictive density,\nwe first sample a branch and time, assigning\nthe sample a place in the tree.\nThis provides each generated sample with a \\emph{context},\ni.e., the branch and subtree it was generated from.\nHowever, learning a \\acronym\\;prior allows us to\nconditionally sample in a novel way. By restricting\nsamples to a subtree, we can generate samples\nfrom the support of the posterior predictive density\nlimited to that subtree. This enables\nconditional sampling at many levels of\nthe hierarchy. 
We visualize examples\nof this in \\autoref{fig:subtree-samples}\nand \\autoref{fig:celeba-samples}.\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[t]{0.8\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{img/loracs/mnist/mnist-samples.png}\n\\caption{MNIST}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.8\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{img/loracs/omniglot/omniglot-samples.png}\n\\caption{Omniglot}\n\\end{subfigure}\n\\caption{Conditional samples from subtrees.}\n\\label{fig:subtree-samples}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[t]{0.8\\textwidth}\n\\centering\n\\includegraphics[frame, width=\\textwidth]{img/loracs/celeba/celeba-samples-1.png}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.8\\textwidth}\n\\centering\n\\includegraphics[frame, width=\\textwidth]{img/loracs/celeba/celeba-samples-2.png}\n\\end{subfigure}\n\\caption{Samples from subtrees of CelebA.}\n\\label{fig:celeba-samples}\n\\end{figure}\n\n\\subsection{Quantitative results}\n\nWe ran experiments designed to\nevaluate the usefulness of the \\acronym's\nlearned latent space for downstream tasks. We \ncompare the \\acronym\\;prior\nagainst a set of baseline priors\non three different tasks:\nfew-shot classification,\ninformation retrieval,\nand generative modeling.\nOur datasets are\ndynamically binarized MNIST and Omniglot \n(split by instance) and\nour baselines are representations\nlearned with the same encoder-decoder\narchitecture and latent dimensionality\\footnote{Following the defaults\nin the author's reference implementation, we evaluated\nDVAE\\# on statically binarized MNIST with smaller neural networks, but\nwith a higher-dimensional latent space.} but substituting\nthe following prior distributions over $z$:\n\\begin{itemize}\n    \\setlength\\itemsep{0.2em}\n    \\item No prior\n    \\item Standard normal prior\n    \\item VampPrior \\citep{Tomczak2017} - 500 pseudo-inputs for MNIST, 1000 for Omniglot\n    \\item DVAE$\\sharp$ \\citep{Vahdat2018dvaesharp} - latent vectors are 400-dimensional, formed from concatenating binary latents, encoder and decoder are two-layer feed-forward networks with ReLU nonlinearities\n    \\item Masked autoregressive flow \\citep[MAF; ][]{Papamakarios2017} - two layer, 512 wide MADE\n\\end{itemize}\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{img/loracs/mnist/cf-mnist.png}\n    \\caption{MNIST}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{img/loracs/omniglot/cf-omniglot.png}\n    \\caption{Omniglot}\n\\end{subfigure}\n\\caption{Few-shot classification results}\n\\vspace{-0.5cm}\n\\label{fig:fewshot}\n\\end{figure}\n\n\\textbf{Few-shot classification}\nIn this task,\nwe train a\nclassifier with\nvarying numbers of labels and measure\ntest accuracy. We pick\nequal numbers of labels per class\nto avoid imbalance and we use\na logistic regression classifier\ntrained to convergence to avoid\nadding unnecessary degrees of freedom to the experiment.\nWe replicated the experiment across\n20 randomly chosen label sets for MNIST\nand 5 for Omniglot.\nThe test accuracy on\nthese datasets is visualized in \n\\autoref{fig:fewshot}. 
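\n\nConcretely, the evaluation protocol just described can be sketched as follows (assuming \\texttt{numpy} and \\texttt{scikit-learn}; \\texttt{z\\_train}, \\texttt{y\\_train}, \\texttt{z\\_test} and \\texttt{y\\_test} are hypothetical arrays of encoded latents and labels):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef few_shot_accuracy(z_train, y_train, z_test, y_test,\n                      n_per_class, rng):\n    # Pick an equal number of labelled examples per class.\n    idx = []\n    for c in np.unique(y_train):\n        cls = np.flatnonzero(y_train == c)\n        idx.extend(rng.choice(cls, size=n_per_class, replace=False))\n    # Logistic regression trained to convergence on the latents.\n    clf = LogisticRegression(max_iter=10000)\n    clf.fit(z_train[idx], y_train[idx])\n    return clf.score(z_test, y_test)\n\\end{verbatim}\n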
For MNIST, we \nalso manually labeled inducing points and\nfound that training a classifier on 200 and 500 inducing points\nachieved significantly better test accuracy than \nrandomly chosen labeled points, hinting that\nthe \\acronym\\;prior has utility in an active learning setting.\n\nThe representations\nlearned with the \\acronym\\;prior\nconsistently achieve better accuracy,\nthough in MNIST, the \\acronym\\;prior\nand MAF reach very similar test\naccuracy at 100 labels per class.\nThe advantage of the \\acronym\\;prior is especially clear\nin Omniglot\n(\\autoref{tab:semisupervised-mnist}\nand \\autoref{tab:semisupervised-omniglot} contain the exact numbers).\nWe believe our advantage in this task\ncomes from the ability of the \\acronym\\;prior\nto model discrete structure.\nTSNE visualizations\nin \\autoref{fig:tsne-tmc-normal} and \\autoref{fig:tsne}\nindicate that\nclusters are more concentrated and separated\nwith the \\acronym\\;prior\nthan with other priors,\nthough TSNE visualizations should be\ntaken with a grain of salt.\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{img/loracs/mnist/tsne/mnist2-tsne-normal.png}\n    \\caption{Normal prior}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.4\\textwidth}\n    \\centering\n    \\includegraphics[width=\\textwidth]{img/loracs/mnist/tsne/mnist2-tsne-tmc.png}\n    \\caption{\\acronym(200) prior}\n\\end{subfigure}\n\\caption{TSNE visualizations of the latent space of the MNIST test set with different priors,\ncolor-coded according to class. The \\acronym\\;prior appears to learn a space with more\nseparated, concentrated clusters.}\n\\label{fig:tsne-tmc-normal}\n\\end{figure}\n\n\\textbf{Information retrieval}\nWe evaluated the meaningfulness of Euclidean distances\nin the learned latent space\n% Information retrieval can\n% help inform whether distance measured\n% in the latent space is informative.\n% We evaluate this\nby measuring precision-recall\nwhen querying the test set.\nWe take each element of the test set\nand sort all other members according to\ntheir $L_2$ distance in the latent space.\nFrom this ranking, we produce\na precision-recall curve for each \nquery and \nplot the average precision-recall\nover the entire test set in \\autoref{fig:prec-rec}.\nWe also report the area-under-the-curve (AUC)\nmeasure for each of these curves in \\autoref{tab:auc}.\n\nAUC numbers for Omniglot are low across the board\nbecause of the large number of classes\nand low number of instances per class.\nHowever,\nin both datasets the \\acronym\\;prior consistently\nachieves the highest AUC,\nespecially with MNIST. 
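For concreteness, the per-query computation in this protocol can be sketched as follows (exact details, such as excluding the query itself from the ranking, are our assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef precision_recall(z, labels, query):\n    # Rank all other test points by L2 distance to the query.\n    d = np.linalg.norm(z - z[query], axis=1)\n    order = np.argsort(d)[1:]  # drop the query itself\n    rel = labels[order] == labels[query]\n    tp = np.cumsum(rel)\n    precision = tp / np.arange(1, len(rel) + 1)\n    recall = tp / rel.sum()\n    return precision, recall\n\\end{verbatim}\nAveraging these per-query curves over the test set yields the curves reported above.\n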
\nThe \\acronym\\;prior encourages\ntree-distance to correspond to\nsquared Euclidean distance, as branch\nlengths in the tree are \nvariances of Gaussian likelihoods.\nWe thus expect distances in a \\acronym\\;prior\nlatent space to be more informative\nand better for information retrieval.\n\n\\textbf{Held-out log-likelihood}\n% Held-out log-likelihood estimates\n% the capacity of a generative model.\nWe estimate held-out log-likelihoods for the four VAEs we\ntrained with comparable architectures and different priors.\n(We exclude DVAE$\\sharp$ since its architecture is substantially\ndifferent, and the classical autoencoder since it lacks generative\nsemantics.)\nWe use 1000 importance-weighted samples \\citep{Burda2015}\nto estimate held-out log-likelihood,\nand report the results in \\autoref{tab:holl}.\nWe find that, although \\acronym\\;outperforms the other\npriors on downstream tasks, it only achieves\nmiddling likelihood numbers.\nThis result is consistent with the findings of \\citet{Chang2009} that held-out log-likelihood is not necessarily correlated with interpretability or usefulness for downstream tasks.\n\n\\begin{table}\n\\centering\n\\caption{Averaged precision-recall AUC on MNIST/Omniglot test datasets}\n\\input{results/loracs/ir.tex}\n\\label{tab:auc}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{MNIST/Omniglot test log-likelihoods}\n\\begin{tabular}{r|cc}\n\\toprule\nPrior & MNIST & Omniglot\\\\ \\midrule\n%No prior  & N/A & N/A\\\\\nNormal    & -83.789 & -89.722\\\\\nMAF       & \\textbf{-80.121} & \\textbf{-86.298}\\\\\nVamp & -83.0135 & -87.604\\\\\n\\acronym & -83.401 & -87.105\\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab:holl}\n\\end{table}\n\n\\section{Discussion}\nLearning discrete, hierarchical structure in a latent space\nopens a new opportunity:\ninteractive deep unsupervised learning.\nUser-provided constraints have been used\nin both flat and\nhierarchical clustering \\citep{Wagstaff2000,Awasthi2010}, so an interesting\nfollow-up to this work would be incorporating\nconstraints into the \\acronym\\; prior, as\nin \\citet{Vikram2016}, which could potentially\nenable user-guided representation learning.\n\n\\loracsack\n", "meta": {"hexsha": "4ee7cd3682a07270212d17a2ea5ec8c2bf968689", "size": 42372, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "writeup/content/structured-representation/loracs.tex", "max_stars_repo_name": "sharadmv/thesis", "max_stars_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-30T01:28:54.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-30T01:28:54.000Z", "max_issues_repo_path": "writeup/content/structured-representation/loracs.tex", "max_issues_repo_name": "sharadmv/thesis", "max_issues_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writeup/content/structured-representation/loracs.tex", "max_forks_repo_name": "sharadmv/thesis", "max_forks_repo_head_hexsha": "5fbf70c0645e44b2992f3cb4d7c2fbbbf7592d7f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.0416666667, "max_line_length": 604, "alphanum_fraction": 0.7595345983, "num_tokens": 
11658, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289835, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5537349992649223}}
{"text": "\\subsection{Three dimensional uniform dataset}\n\nThe three dimensional uniform dataset (that corresponds to the RGB color cube) is an interesting low dimensional case that requires a dimensionality reduction (from dimension 3 to 2). Since we used a uniform distribution this means the dataset is a dense three dimensional manifold that needs to be mapped to a two dimensional manifold which is known not to have an optimal solution. However, this difficulty can be partially alleviated using a loose topology in the RSOM. This is made possible by using a 2-neighbours induced topology as shown in figure \\ref{fig:3D-uniform:results}A. This weak topology possesses several disconnected subgraphs that relax the constraints on the neighborhood of the BMU (see figure \\ref{fig:topology-influence} in the supplementary section for the influence of neighborhood on the self-organization). This is clearly illustrated in figure \\ref{fig:3D-uniform:results}B where the Voronoi cell of a neuron has been painted with the color of its codeword. We can observe an apparent structure of the RGB spectrum with some localized ruptures. To test for the completeness of the representation, we represented the position of six fundamental colors (C - white (1,1,1), D - black (0,0,0), E - yellow (1,1,0), F - red (1,0,0), G - green (0,1,0) and H - blue (0,0,1)) along with their associated distance maps after learning.\n\nFurthermore, we performed the persistent homology to identify important topological features in the\ninput space and investigate how well the SOM and RSOM captured those features. Figures\n\\ref{fig:3D-uniform:analysis}A, B, and C show the persistent barcodes for the input space, SOM, and \nRSOM, respectively. We can see how the RSOM (panel C) captures more $H1$- and $H2$-homological properties\n(since there are more persistent line segments, orange and green lines). The SOM (panel B) seems to capture\nsome of those features as well but they do not persist as long as they in the case of RSOM. The persistence \ndiagrams of input, SOM and RSOM are shown in figures~\\ref{fig:3D-uniform:analysis} D, E, and F, respectively.\nThese figures indicate that the RSOM has more persistent features (orange and green dots away from the diagonal\nline) than the regular SOM.\nThe Bottleneck distance between the persistence diagrams of input space and those of SOM and RSOM reveals\nthat the SOM's persistence diagram is slightly closer to the input space's one for both the $H0$ (SOM: $0.00035$,\nRSOM: $0.0007$), $H1$ (SOM: $0.006$, RSOM: $0.007$), and $H2$ (SOM: $0.0062$, RSOM: $0.0057$). Despite the \nfact that the bottleneck distances show that \nregular SOM's persistent diagram is closer to input space's one, the barcodes diagrams indicate that the RSOM \ncaptures more persistent topological features suggesting that RSOM preserves in a better way the topology of the\ninput space. Furthermore, the RSOM seems to capture better the higher dimensional topological features since the \nBottleneck distances of $H2$-homological features are smaller for the RSOM than for the SOM.\n\n% For the three-dimensional experiment we draw 50,000 three-dimensional points from a uniform distribution $\\mathcal{U}([0, 1]\\times [0, 1]\\times [0, 1])$ (the cloud points form a cube in $\\mathbb{R}^3$). The map is a two-dimensional manifold and thus the required task it to map the three-dimensional vectors  to the two-dimensional neural space. 
Once again we place the neurons on the two-dimensional neural space based on the sampling of a blue noise distribution, and the induced topology is illustrated in Figure~\\ref{fig:3D-uniform:results}A. The learning algorithm runs for $25000$ epochs over the three-dimensional vectors and after convergence we obtain the map with the prototypes shown in Figure~\\ref{fig:3D-uniform:results}B. Each color represents a different face of the three-dimensional cube (a cube has six faces so we obtain mainly six colors) and we observe that the SOM has mapped the three-dimensional cube on a two-dimensional neural space. The continuity and grouping of the colors indicate that the receptive fields of the neurons have been established properly. We illustrate this phenomenon in panels~\\ref{fig:3D-uniform:results}C-H, where we show the response of six different neurons (see the annotation in panel~\\ref{fig:3D-uniform:results}B). The dark blue color indicates values close to zero and the yellow color represents values close to one.\n\n\n\\begin{figure}\n  \\includegraphics[width=\\columnwidth]{experiment-3D-uniform.pdf}\n  \\vspace{2mm}\n  \\centering\n  \\includegraphics[width=.975\\columnwidth]{figures/colormap.pdf}\n  %\n  \\caption{%\n  %\n  {\\bfseries \\sffamily Three dimensional uniform dataset (results)}\n  %\n  Randomized SOM made of $4096$ neurons with a $3$-nearest neighbors induced topology. Model has been trained for $25,000$ epochs on three-dimensional points drawn from a uniform distribution on the unit cube. \\textbf{A} Map topology in neural space. \\textbf{B} Map codeword in neural space. Each neural voronoi cell is painted with the color of the codeword. \\textbf{C to H} Normalized distance map for six samples, respectively (1,1,1), (0,0,0), (1,1,0), (1,0,0), (0,1,0) and (0,0,1) in RGB notations. Normalization has been performed for each sample in order to enhance contrast but this prevents comparison between maps.\n  %\n  }\n  \\label{fig:3D-uniform:results}\n\\end{figure}\n\n\\begin{figure}\n     \\centering\n     \\includegraphics[width=\\textwidth]{figures/experiment-3D-uniform-analysis.pdf}\n     \\caption{ {\\bfseries \\sffamily Three dimensional uniform dataset (analysis)}\n     Persistent Barcodes of \\textbf{A} input space, \\textbf{B} SOM, and \\textbf{C} RSOM.\n  The blue, orange, and green line segments represent the $H0$-, $H1$-, and $H2$-homology, respectively.\n  This means that blue color represents connected segments within the space and orange color reflects the holes\n  within the space and the green one the voids. The longer the line segment the more important the \n  corresponding topological feature. \\textbf{D} illustrates the persistence diagram for the input space.\n  \\textbf{E} and \\textbf{F} depict the persistence diagrams for SOM and RSOM, respectively. 
Again blue dots\n  indicate $H0$-homology features, orange dots represent $H1$-homological features, and green dots the \n  $H2$-homological features.}\n \\label{fig:3D-uniform:analysis}\n\\end{figure}\n", "meta": {"hexsha": "206eb5a3ac65fc53162ddbaddf66824c638f16df", "size": 6376, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "article-overleaf/03-results-B.tex", "max_stars_repo_name": "rougier/VSOM", "max_stars_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-11-20T06:27:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-11T22:20:28.000Z", "max_issues_repo_path": "article-overleaf/03-results-B.tex", "max_issues_repo_name": "rougier/VSOM", "max_issues_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "article-overleaf/03-results-B.tex", "max_forks_repo_name": "rougier/VSOM", "max_forks_repo_head_hexsha": "78e6eb924b5f89a0e6f42eb6bbe7971473a9abaa", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-03T04:41:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T04:41:57.000Z", "avg_line_length": 113.8571428571, "max_line_length": 1369, "alphanum_fraction": 0.778544542, "num_tokens": 1589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059775, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5537349963934899}}
{"text": "\\documentclass[11pt, a4paper]{article}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage[margin=0.6in]{geometry}\n\\usepackage{listings}\n\\usepackage{float}\n\n\\title{EE2703: Assignment 3} % Title\n\\author{Akilesh Kannan (EE18B122)} % Author name\n\\date{\\today} % Date for the report\n\n\\begin{document}\n    \\maketitle % Insert the title, author and date\n\n    \\section{Abstract}\n        In this assignment we aim to :\n        \\begin{itemize}\n            \\item Observe the error in fitting the \\textit{Least Error Fit} function to a given set of data.\n            \\item Find the relation between the error observed and the noise in the data.\n        \\end{itemize}\n    \\section{Introduction}\n        From linear algebra, we can condense any parameter estimation problem to a simple matrix equation of the form:\n        \\begin{equation}\n            \\left(\\begin{matrix}\n            F_1(t_1) & F_2(t_1) & ... & F_n(t_1)\\\\\n            F_1(t_2) & F_2(t_2) & ... & F_n(t_2)\\\\\n            ... & ... & ... & ...\\\\\n            F_1(t_m) & F_2(t_m) & ... & F_n(t_m)\\\\\n            \\end{matrix}\\right)\n            \\left(\\begin{matrix}\n            p_1\\\\\n            p_2\\\\\n            ...\\\\\n            p_n\\\\\n            \\end{matrix}\\right)\n            =\n            \\left(\\begin{matrix}\n            a_1\\\\\n            a_2\\\\\n            ...\\\\\n            a_m\\\\\n            \\end{matrix}\\right)\n            \\label{eq0}\n        \\end{equation}\n        where,\n        \\begin{equation}\n            f(t;p_1, p_2,..,p_n) = \\sum_{i=0}^{n}p_iF_i(t)\n        \\end{equation}\n        is the function to be estimated, $F_i(t)$ are arbitrary functions of the variable $t$ and $p_i$ are constant parameters.\\\\\n\n         Since we only have to ``fit'' the real-time data we have to a function, $F_i(t)$ are functions of our choice. Usually we make a few educated guesses for these functions, looking at the plots of these real-time data.\\\\\n\n         Equation \\eqref{eq0} can be written as:\n        \\begin{equation}\n            F.\\vec{p} = \\vec{a_0}\n            \\label{eqwithoutnoise}\n        \\end{equation}\n\n        But, in any real world situation, there will always be noise associated with the data. 
To account for that, we have to slightly modify \\eqref{eqwithoutnoise} to:\n        \\begin{equation}\n            F.\\vec{p} = \\vec{a_0}+\\vec{n} = \\vec{a}\n            \\label{truenoiseeqn}\n        \\end{equation}\n        where $\\vec{n}$ accounts for the noise in the data.\\\\\n\n        However, the above equation cannot always be satisfied exactly (as we have only \\textbf{\\textit{N}} unknown parameters, but \\textbf{\\textit{M}} equations, one per measurement).\\\\\n\n        So, we make a few assumptions about the noise in the data and then try to ``best guess'' the solution, i.e., the error between the ideal solution (in the absence of noise) and the one obtained has to be as small as possible.\\\\\n\n        The assumptions we make about the noise are that it has zero mean and a standard deviation of $\\sigma$.\\\\\n\n        The error function $\\epsilon$ is then given by:\n        \\begin{equation}\n            \\epsilon = F.\\vec{p} - \\vec{a}\n        \\end{equation}\n\n        The norm of the error is:\n        \\begin{equation}\n            \\|\\epsilon\\|^2 = \\sum_{i}\\epsilon_i^2\n        \\end{equation}\n\n        or in other words,\\\\\n        \\begin{equation}\n            \\|\\epsilon\\|^2 = \\epsilon^T\\epsilon = ((F.\\vec{p}-\\vec{a})^T(F.\\vec{p}-\\vec{a})) = \\sum_{i}\\epsilon_i^2\n        \\end{equation}\n\n        On expanding the above equation and using the condition that the gradient at the minimum is 0, we get:\\\\\n        \\begin{equation}\n            \\begin{aligned}\n                2(F^TF)\\vec{p_0} - 2F^T\\vec{a} = 0\\\\\n                \\implies \\vec{p_0} = (F^TF)^{-1}F^T\\vec{a}\n            \\end{aligned}\n        \\end{equation}\n\n        The above result is called the \\textbf{\\textit{Least Squares Estimate of $\\vec{p_0}$ }}.\\\\\n\n        However, the above result holds true only if the functions $F_i(t)$ are independent and the noise is the same for different measurements. This is because we have to \\textit{give lesser importance (\\textbf{weight}) to those values which have more noise}, as they are more unreliable.\n\n    \\section{Procedure}\n        The function to be fitted is:\n        \\begin{equation}\n            f(t) = 1.05J_2(t)-0.105t\n        \\end{equation}\n        where $J_2(t)$ is the \\textit{Bessel Function of the first kind of Order 2}. The true data used for fitting is obtained using this equation.\n\n        \\subsection{Creating noisy data}\n            To create the noisy data, we add random noise to $f(t)$. This random noise, denoted by $n(t)$, is given by the normal probability distribution:\n            \\begin{equation}\n                P(n(t)|\\sigma)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-\\frac{n(t)^2}{2\\sigma^2}} \\label{eq1}\n            \\end{equation}\n            The resulting noisy data will be of the form:\n            \\begin{equation}\n                f(t) = 1.05J_2(t)-0.105t+n_{\\sigma_i}(t)\n            \\end{equation}\n            where, $n_{\\sigma_i}(t)$ is the noisy data function with $\\sigma = \\sigma_{i}$ in \\eqref{eq1}. Thus for 9 different values of sigma (in a log scale from 0.001 to 0.1), the noisy data is created and stored in the \\texttt{\\textbf{fitting.dat}} file.\n\n        \\subsection{Analyzing the noisy data}\n            The data is read and plotted using PyPlot. 
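For reference, a condensed sketch of this analysis in Python follows (the column layout of \\texttt{fitting.dat} and the exact calls are assumptions based on the description; the \\texttt{lstsq} solve itself is discussed further below):\n            \\begin{verbatim}\nimport numpy as np\nimport scipy.special as sp\nfrom scipy.linalg import lstsq\n\n# Assumed layout: column 0 holds t, columns 1-9 the noisy data\ndata = np.loadtxt("fitting.dat")\nt, D = data[:, 0], data[:, 1]\n\n# Basis functions J2(t) and t, as in g(t; A, B) = A*J2(t) + B*t\nM = np.c_[sp.jv(2, t), t]\np, *_ = lstsq(M, D)  # least squares estimate of (A, B)\nprint("A_fit, B_fit =", p)\n            \\end{verbatim}\n            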
The output result looks as follows:\n            \\begin{figure}[H]\n                \\centering\n                \\includegraphics[scale=0.5]{Fig 0.png}\n                \\caption{Noisy Data with True Data}\n                \\label{fig:noisyAndTrue}\n            \\end{figure}\n\n            As we can see, the ``noisiness'' of the data increases with increasing value of $\\sigma$. Another view of how the noise affects the data can be seen below:\n            \\begin{figure}[H]\n                \\centering\n                \\includegraphics[scale=0.5]{Fig 1.png}\n                \\caption{Noisy Data with Errorbar}\n                \\label{fig:noiseError}\n            \\end{figure}\n\n            The blue lines (\\textit{error bar}) indicate the standard deviation of the noisy data from the original data, at that value of $t$. It is plotted at every $5^\\text{th}$ point to make the plot readable.\n\n        \\subsection{Finding the best approximation for the noisy data}\n            From the data, we can conclude that the data can be fitted to a function of the form:\n            \\begin{equation}\n                g(t, A, B) = AJ_2(t)+Bt\n            \\end{equation}\n            where $A$ and $B$ are constants that we need to find.\\\\\n\n            To find the coefficients $A$ and $B$, we first try to find the mean square error between the function and the data for a range of values of \\textit{A} and \\textit{B}, which is given by:\n            \\begin{equation}\n                \\epsilon_{ij} = \\frac{1}{101}\\sum_{k=0}^{100}(f(t_k) - g(t_k, A_i, B_j))^2\n            \\end{equation}\n            where $\\epsilon_{ij}$ is the error for $(A_i,B_j)$. The contour plot of the error is shown below:\n            \\begin{figure}[H]\n                \\centering\n                \\includegraphics[scale=0.5]{Fig 2.png}  % Mention the image name within the curly braces. Image should be in the same folder as the tex file.\n                \\caption{Contour Plot of $\\epsilon_{ij}$}\n                \\label{fig:contourPlot}\n            \\end{figure}\n\n            We can see the location of the minimum to be approximately at the original function coefficients.\\\\\n\n            Using the \\texttt{lstsq} function in the \\texttt{scipy} package, we solve for:\n            \\begin{equation}\n            M.p = D \\label{eq5}\n            \\end{equation}\n            where\n            \\begin{equation}\n            M=\\left[\\begin{matrix}\n            J_2(t_1)&t_1\\\\\n            ...&...\\\\\n            J_2(t_m)&t_m\n            \\end{matrix}\\right]\\text{, }p=\\left[\\begin{matrix}\n            A_{fit}\\\\B_{fit}\n            \\end{matrix}\\right]\\ \\text{and }D=\\left[\\begin{matrix}f(t_1)\\\\...\\\\f(t_m)\\end{matrix}\\right]\n            \\end{equation}\\\\\n\n            Thus, we solve for $p$ and then find the mean square error between the values of $A_{fit}$ and $B_{fit}$ found using \\texttt{lstsq} and the original values $(1.05,\\ -0.105)$.\n\n        \\subsection{Finding out the variation of $\\epsilon$ with $\\sigma_n$}\n            We solve \\eqref{eq5} for different values of $\\sigma_n$, by changing the matrix $D$ to different columns of \\texttt{\\textbf{fitting.dat}}. We find that the variation of the mean squared error of values $A_{fit}$ and $B_{fit}$ is as follows:\n            \\begin{figure}[H]\n                \\centering\n                \\includegraphics[scale=0.5]{Fig 3.png}  % Mention the image name within the curly braces. 
Image should be in the same folder as the tex file.\n                \\caption{Mean Squared Error vs Standard Deviation}\n                \\label{fig:errorSTD}\n            \\end{figure}\n\n            This plot does not reveal much about the relation between $\\sigma_n$ and $\\epsilon$, but when we make the \\texttt{loglog} plot as below:\n            \\begin{figure}[H]\n                \\centering\n                \\includegraphics[scale=0.5]{Fig 4.png}  % Mention the image name within the curly braces. Image should be in the same folder as the tex file.\n                \\caption{Error vs Standard Deviation \\texttt{loglog} Plot}\n                \\label{fig:errorSTDloglog}\n            \\end{figure}\n\n            We can see an approximately linear relation between $\\sigma_n$ and $\\epsilon$. This is the required result.\n\n    \\section{Conclusion}\n        From the above procedure, we were able to determine that \\textbf{the logarithm of the standard deviation of the noise} \\textit{\\textbf{linearly affects}} \\textbf{the logarithm of the error} in the calculation of the least error fit for given data.\n\\end{document}\n", "meta": {"hexsha": "ac384322b393865bb72e2a8efc6537c4bba33bac", "size": 9820, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Week 3/report.tex", "max_stars_repo_name": "aklsh/EE2703", "max_stars_repo_head_hexsha": "546b70c9adac4a4de294d83affbb74e480c2f65d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Week 3/report.tex", "max_issues_repo_name": "aklsh/EE2703", "max_issues_repo_head_hexsha": "546b70c9adac4a4de294d83affbb74e480c2f65d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week 3/report.tex", "max_forks_repo_name": "aklsh/EE2703", "max_forks_repo_head_hexsha": "546b70c9adac4a4de294d83affbb74e480c2f65d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-07-15T08:02:05.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-07T06:50:07.000Z", "avg_line_length": 50.1020408163, "max_line_length": 300, "alphanum_fraction": 0.5964358452, "num_tokens": 2663, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7606506472514406, "lm_q1q2_score": 0.5537349963934898}}
{"text": "\\documentclass{article}\n\\usepackage{mathrsfs}\n\\usepackage{amsmath}\n\\usepackage{mathtools}\n\\usepackage{graphicx}\n\\DeclarePairedDelimiter{\\ceil}{\\lceil}{\\rceil}\n\n\n\\begin{document}\n\\begin{center}\n\\textbf{\\huge{Week 6}}\n\\end{center}\n\n\\section{Random variable source coding}\n\nLet $\\underbar{c}(x)$ be the codeword assigned to $x \\in \\mathcal{X}$.\n\n$l(x)$ be the length of codeword assigned to $x$.\n\nHere we are coding a single random variable and all codewords are binary strings. (Fixed-variable length source coding)\n$$ \\mathscr{C}= \\{ \\underbar{c}(x): x \\in \\mathcal{X}\\}$$\n\n$$ L_{\\mathscr{C}}= \\sum_{x \\in \\mathcal{X}}p(x)l(x)$$\n\nOur goal is to design a code which has minimum $L_{\\mathscr{C}}$, $L^{*}$\n$$L^{*} = \\text{min}_{\\mathscr{C}} L_{\\mathscr{C}}$$\n\n\\subsection{Kraft inequality}\n\nLet $\\mathscr{C}$ be any prefix-free (binary) code. Then,\n$$ \\sum_{x \\in \\mathcal{X}} 2^{-l(x)} \\leq 1$$\n\nProof: We know that any prefix-free code can be represented using a binary tree which has `leaves' as the codeword. In the binary tree corresponding to the code, the depth of the tree would be the length of the largest codeword in the code. ($l_{\\text{max}}=max_{x \\in \\mathcal{X}} l(x)$)\\\\\n\nSuppose there is a codeword $\\underbar{c}$ of length $l$ represented by a node at depth l($l \\leq l_{\\text{max}}$).\\\\\n\nThen $\\underbar{c}$ has $2^{l_{\\text{max}}-l}$ successors at level $l_{\\text{max}}$. Also, none of these are codewords as the code is prefix-free. Now,\n\n\\begin{align}\n    \\sum_{\\underbar{x} \\in \\mathcal{X}} 2^{l_{\\text{max}}-l(\\underbar{x})} \\leq 2^{l_{\\text{max}}}\n\\end{align}\nThis is true if distinct codewords $\\underbar{c1}$, $\\underbar{c2}$ don't have common successors at $l_{\\text{max}}$ level. Any successor of $\\underbar{c1}$ at $l_{\\text{max}}$ has $\\underbar{c1}$ as a prefix. Similarly in the other case as well.\n\nSuppose $l(\\underbar{c1}) \\leq l(\\underbar{c2})$, so there can be a common successor $\\underbar{v}$ at $l_{\\text{max}}$.\n\nThen, $\\underbar{v}$ has first $l(\\underbar{c1})$ places as $\\underbar{c1}$ and first $l(\\underbar{c2})$ places as $\\underbar{c2}$.\\\\\n\n$\\Rightarrow$ $\\underbar{c1}$ should be a prefix of $\\underbar{c2}$ which isn't true as $\\mathscr{C}$ is a prefix-free code. Hence, no pair of codewords in $\\mathscr{C}$ have any common successors.\n\nHence $(1)$ is true. 
\\subsection{Lemma}\n$$ L^{*} \\geq H(X)$$\nAny prefix-free code for $X$ has average length of at least $H(X)$.\n\nProof:\n$$ L - H(X) =  \\sum_{x \\in \\mathcal{X}}p(x)l(x) - \\sum_{x \\in \\mathcal{X}} p(x) \\log \\frac{1}{p(x)}$$\nWe know,\n $$ D(p||q)= \\sum_{x \\in supp(P_X)}p(x)\\log \\frac{p(x)}{q(x)}$$\n Let\n$$ q(x):= \\frac{2^{-l(x)}}{\\sum_{x \\in \\mathcal{X}} 2^{-l(x)}}   $$\n\n\\begin{align*}\n    D(p_X || q_X) &= \\sum_{x \\in supp(P_X)}p(x)\\log \\frac{p(x)}{\\frac{2^{-l(x)}}{\\sum_{x' \\in \\mathcal{X}} 2^{-l(x')}}} \\\\\n    &= - \\sum p(x) \\log \\frac{1}{p(x)} + \\sum p(x) \\log \\frac{1}{\\frac{2^{-l(x)}}{\\sum_{x' \\in \\mathcal{X}} 2^{-l(x')}}} \\\\\n    &= -H(X) + \\sum_{x \\in supp(p_x)} p(x) \\log 2^{l(x)} + \\sum_{x \\in supp(p_x)} p(x) \\log \\sum_{x'} 2^{-l(x')}  \\\\\n    &= -H(X)+ L - \\epsilon \\qquad (\\text{from Kraft's})\\\\\n    & \\leq - H(X)+L\n\\end{align*}\nHere $\\epsilon := -\\log \\sum_{x'} 2^{-l(x')} \\geq 0$ by the Kraft inequality, so\n$$ \\Rightarrow L_{\\mathscr{C}} - H(X) \\geq 0$$\n $$ L^{*} \\geq H(X)$$\n\n Note: Equality holds iff $p_x = q_x$ and $\\epsilon$ is zero.\n %30/6\n \\subsection{Lemma}\n Suppose we have a random variable $X \\in \\{ x_1, x_2, \\cdots, x_k\\}$ and positive integers $l_1, \\cdots, l_k$ such that $\\sum_{i=1}^{k} 2^{-l_i} \\leq 1$.\n\n Then there exists a prefix-free code for $X$ with codeword lengths $l_1 , l_2 , \\cdots, l_k$.\\\\\n\n Proof: We can construct a binary tree with leaves at depths $l_1, l_2, \\cdots, l_k$ such that none of these nodes are successors of each other, i.e. they are leaves of some binary tree (valid p-f code).\n\n Assume, without loss of generality, that $l_1 \\leq l_2 \\leq \\cdots \\leq l_k$. Then, for any $i \\leq k$,\n \\begin{equation}\n     \\sum_{j=1}^{i-1} 2^{-l_j} <1\n \\end{equation}\n\nImagine we take the full binary tree up to level $l_k$. At each step of the algorithm we intend to pick one available (undeleted) node from the above tree at level $l_i$ and delete all its successors from the tree. Repeating this process for $i=1, \\cdots , k$, we will then have a prefix-free tree.\n\nWe have to show that at each step $i=1, \\cdots, k$, there is at least one node left undeleted at depth $l_i$. We shall use observation $(2)$.\n\nClearly at step 1, there is a node at depth $l_1$ (since $l_1 \\geq 1$).\n\nAfter $i-1$ steps, assume that we have picked nodes at levels $l_1, \\cdots, l_{i-1}$ and appropriately deleted. We want to show that there is a node at level $l_i$.\n\nTotal nodes at level $l_k$ which aren't in the tree after the $(i-1)^{th}$ step\n$$ = \\sum_{j=1}^{i-1} 2^{l_k -l_j}$$\n\nHence, number of nodes remaining at level $l_k$\n$$ = 2^{l_k}(1- \\sum_{j=1}^{i-1}2^{-l_j})$$\n\n$\\Rightarrow$ By observation $(2)$, the number of nodes remaining at $l_k$ in the tree after the $(i-1)^{th}$ step is $> 0$, so at least one node survives there.\n\n$\\Rightarrow$ The ancestor of a surviving node at level $l_i$ must also be undeleted, so we can pick a node for the $i^{th}$ step from level $l_i$. This completes the proof.\n\nRemark: We construct the tree from the smallest length to the largest length codewords. Contrast this with optimal source coding; we shall see Huffman codes later.\n\n
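One concrete instantiation of this construction is the canonical code: process the lengths in nondecreasing order and give each codeword the next free node at its depth. The following Python sketch is one such instantiation (it may pick different codewords than the worked example below):\n\\begin{verbatim}\ndef prefix_code(lengths):\n    # Counter c tracks the next free node at the current depth.\n    assert sum(2 ** -l for l in lengths) <= 1, "Kraft violated"\n    code, c, prev = {}, 0, 0\n    for i, l in sorted(enumerate(lengths), key=lambda e: e[1]):\n        c <<= l - prev  # descend to depth l\n        code[i] = format(c, "0%db" % l)\n        c, prev = c + 1, l\n    return code\n\nprint(prefix_code([2, 1, 4, 3]))\n# {1: '0', 0: '10', 3: '110', 2: '1110'}\n\\end{verbatim}\n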
\\subsection{Example}\n\\includegraphics[width=\\textwidth]{p-feg.png}\n\n\\subsection{Choosing codeword lengths}\n\nNow suppose that the source random variable is $X \\sim P_X$. If we can obtain a collection of integers $l_1, \\cdots, l_k$ such that the Kraft inequality\n$$ \\sum_{i=1}^{k} 2^{-l_i} \\leq 1$$\nis satisfied, then we know how to get the code for $X$.\n\\begin{itemize}\n    \\item Suppose all codewords are of the same length, $$ k 2^{-l} \\leq 1 \\Rightarrow l \\geq \\log_2 k$$\n\n    Hence we can pick $l = \\ceil{\\log_2 k}$ (ceil function). But there is no guarantee that this code is `good', it may not have small average length.\n\n    \\item We know average length $= \\sum p_i l_i$.\n\n    Then we will choose small $l_i$ for larger $p_i$ (while taking care that the Kraft inequality is satisfied).\n\\end{itemize}\n\n\\section{Shannon-Fano code}\n\nWe fix,\n\\begin{equation}\n    l_i = \\ceil{ \\log_2 \\frac{1}{p_i}}\n\\end{equation}\n\n where $p_i$ is the probability of $X$ taking the $i^{th}$ value in $\\mathcal{X}$.\n\nClearly $l_i \\geq 1$.\n\n\\begin{align*}\n    \\sum_{i=1}^k 2^{-l_i} &= \\sum_{i=1}^k 2^{-\\ceil{\\log_2 \\frac{1}{p_i}}}  \\leq \\sum_{i=1}^{k} 2^{- \\log_2 \\frac{1}{p_i}} \\\\\n    &= \\sum_{i=1}^k p_i = 1\n\\end{align*}\n\nThe lengths given by $(3)$ satisfy the Kraft inequality. We can use the tree-pruning algorithm (see sec 1.3) to get a prefix-free code for $X$.\n\nThe code so obtained is called the Shannon-Fano code.\n\nNow,\n\n\\begin{align*}\n    L_{\\text{Shannon-Fano}} &= \\sum_{i=1}^{k} p_i \\ceil{\\log_2 \\frac{1}{p_i}} \\\\\n    &< \\sum_{i=1}^{k} p_i \\left( \\log_2 \\frac{1}{p_i} +1 \\right) \\\\\n    &= H(X)+1\n\\end{align*}\n\nBut the S-F code is not always an optimal-length prefix-free code.\n\n\\subsection{Example for Shannon-Fano code}\n$$ X \\in \\mathcal{X}= \\{ x_1,x_2,x_3,x_4 \\}$$\n\\begin{equation*}\n        P_X (x_i)=\n        \\begin{cases}\n          1/4, & \\text{if}\\ i=1 \\\\\n          1/2, & \\text{if}\\ i=2 \\\\\n          1/9, & \\text{if}\\ i=3 \\\\\n          5/36, & \\text{if}\\ i=4\n        \\end{cases}\n    \\end{equation*}\n\nLengths satisfying Kraft's inequality for the Shannon-Fano code\n$$ l_i= \\ceil{\\log_2 \\frac{1}{P_X (x_i)}}= \\begin{cases}\n  2, & \\text{if}\\ i=1 \\\\\n  1, & \\text{if}\\ i=2 \\\\\n  4, & \\text{if}\\ i=3 \\\\\n  3, & \\text{if}\\ i=4\n\\end{cases} $$\n\n$$ \\bar{L}= \\sum_{i=1}^{4} p_i l_i = \\frac{67}{36}$$\n\nObtaining the Shannon-Fano code:\n\\begin{itemize}\n    \\item Arrange lengths in ascending order: $l_2 \\leq l_1 \\leq l_4 \\leq l_3$. 
Hence we pick a node at depth 1 (for $x_2$), then at depths 2, 3 and 4, and delete all the successors at each step.\n\n    \\item Now we can choose the codewords from the binary tree, say\n    $$ x_1 \\to 01 \\qquad x_2 \\to 1 \\qquad x_3 \\to 0010 \\qquad x_4 \\to 000$$\n    Hence, the prefix-free Shannon-Fano code is $\\{ 01,1,0010,000\\} $.\n\\end{itemize}\n\n\\section{Huffman coding}\nWe will show an optimal code construction whose average length is no larger than that of the Shannon-Fano code.\n\n\\subsection{Lemmas}\n\nLet us see some intuitive lemmas about an optimal code for a random variable $X$:\n\nAssume that $X \\in \\mathcal{X}= \\{ x_1, \\cdots, x_k\\}$.\n\n$P_X(x_i)= p_i, i=1, \\cdots , k$, and without loss of generality, let $p_1 \\geq \\cdots \\geq p_k$.\n\\begin{enumerate}\n    \\item Consider that $l_1, \\cdots, l_k$ are the lengths of the codewords associated to the messages $x_i, i=1,\\cdots, k$ respectively in any optimal code for $X$.\n\n    Then,\n    \\begin{equation}\n        l_1 \\leq \\cdots \\leq l_k\n    \\end{equation}\n\n    Proof: Suppose $\\mathscr{C}$ is an optimal code in which $\\exists $ some distinct $i,j \\in \\{1,\\cdots, k \\}$ such that $p_i > p_j$ but $l_i > l_j$. (Assumption of the contrary)\n\n    We shall show that there is another code $\\mathscr{C}'$ which has smaller average length $L_{\\mathscr{C'}}$ than $L_{\\mathscr{C}}$.\n\n    Consider the code $\\mathscr{C}'$ in which the codewords for $x_i$ \\& $x_j$ are swapped relative to those in $\\mathscr{C}$.\n    $$ \\underbar{c}_{\\mathscr{C'}}(x_j)=\\underbar{c}_{\\mathscr{C}}(x_i) \\qquad \\underbar{c}_{\\mathscr{C'}}(x_i)=\\underbar{c}_{\\mathscr{C}}(x_j)$$\n\n    Writing $l_{i, \\mathscr{C}}$ for the length of the codeword of $x_i$ in $\\mathscr{C}$, this gives\n\n    $$l_{i, \\mathscr{C'}} = l_j \\qquad l_{j, \\mathscr{C'}} = l_i$$\n\nAlso, $\\mathscr{C'}$ has the same codewords as $\\mathscr{C}$ and hence is also prefix-free.\n$$ \\bar{L}_\\mathscr{C'}= \\sum_{m \\in \\{ 1, \\cdots, k\\} \\setminus \\{ i,j\\}}  p_m l_m + p_i l_j + p_j l_i$$\n$$ \\bar{L}_\\mathscr{C}= \\sum_{m=1}^{k} p_m l_m$$\n$$ \\bar{L}_\\mathscr{C'} - \\bar{L}_\\mathscr{C} = p_i(l_j -l_i)+ p_j(l_i -l_j) =(p_i-p_j)(l_j -l_i) $$\n\nWe have $p_i > p_j$, but $l_i > l_j$,\n$$ \\Rightarrow  \\bar{L}_\\mathscr{C'} - \\bar{L}_\\mathscr{C} < 0 \\Rightarrow  \\bar{L}_\\mathscr{C'} < \\bar{L}_\\mathscr{C}$$\n$\\Rightarrow$ $\\mathscr{C}$ is not optimal, which is a contradiction. Hence $(4)$ must hold for an optimal code.\n\n\\item Consider the tree representation of an optimal code. In such a tree, each node either is a codeword or has at least 2 successors which are codewords, i.e. there are no unused leaves in the tree of an optimal code.\n%3/7\n\nProof: Let $\\mathscr{C}$ be the optimal code. Let us consider the tree corresponding to this, and suppose it has an unused leaf. Because the code is optimal, these unused leaves must be at maximum depth in the tree. Then, for at least one value $x_i$ of $\\mathscr{C}$, we have the situation\n\n\\includegraphics[width=\\textwidth]{huffman.png}\n\nIn either case, we can delete the last digit of the codeword for $x_i$ (without changing the other codewords) and still have a prefix-free code. But the new code has smaller $\\bar{L}$ and thus the original code could not have been optimal.\n\n\\item There is an optimal prefix-free code for random variable $X$ such that the codewords associated to the two smallest probability symbols are siblings (share the same parent), i.e. the two least likely codewords differ only in their last digit.\n\nProof: Let $\\mathscr{C}$ be some optimal code for the random variable $X$. Suppose $\\mathscr{C}$ already satisfies the property specified above; then we are done.\n\nSuppose $\\mathscr{C}$ doesn't satisfy the property. We know:\n\n$$ l_{k-1}\\leq l_k \\qquad \\text{as} \\qquad p_{k-1} \\geq p_k$$\n\n\\includegraphics[width=\\textwidth]{lemma3.png}\n\nNote that $l_i = l_k$ and $i \\neq k-1$.\n\n$$ \\Rightarrow l_i \\leq l_{k-1} \\leq l_k = l_i  \\qquad \\text{from Lemma-1}$$\n\n$$ \\Rightarrow l_i = l_{k-1}= l_k$$\n\nInterchange the codewords for the $i^{th}$ and $(k-1)^{th}$ symbols and get a new code $\\mathscr{C'}$.\n$$ \\Rightarrow \\bar{L}_\\mathscr{C} = \\bar{L}_\\mathscr{C'}$$\n\nNow $\\mathscr{C'}$ satisfies the property in Lemma 3.\n\n\\end{enumerate}\n\n\\subsection{Huffman coding}\n\nSay we have $p_X$, and assume that $p_1 \\geq \\cdots \\geq p_k$. Unlike the Shannon-Fano code, here we are building the tree in reverse. We pick $p_{k-1}$ and $p_k$ as they have the least 2 probabilities.\n\nThese are assigned to siblings in the tree, and their parent now has probability $p_{k-1}+ p_k$.\n\nNow we can form a new distribution using these $k-1$ probabilities (symbols) i.e. $p_1, \\cdots, p_{k-2}, p_{k-1}+ p_k$. Now, if $\\{ p_{k-2}, p_{k-1}+ p_k \\}$ are the 2 least probabilities, these are in turn made siblings under a new parent node.\nWe repeat the process until all codewords are assigned.\n\n
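A minimal Python sketch of this merging procedure, tracking only the codeword lengths (shown on the distribution from the Shannon-Fano example above):\n\\begin{verbatim}\nimport heapq\n\ndef huffman_lengths(probs):\n    # Repeatedly merge the two least likely nodes; every merge\n    # lengthens the codewords of all symbols under the new parent.\n    heap = [(p, [i]) for i, p in enumerate(probs)]\n    heapq.heapify(heap)\n    lengths = [0] * len(probs)\n    while len(heap) > 1:\n        p1, s1 = heapq.heappop(heap)\n        p2, s2 = heapq.heappop(heap)\n        for i in s1 + s2:\n            lengths[i] += 1\n        heapq.heappush(heap, (p1 + p2, s1 + s2))\n    return lengths\n\nprint(huffman_lengths([1/4, 1/2, 1/9, 5/36]))  # e.g. [2, 1, 3, 3]\n\\end{verbatim}\nWith these lengths the average is $\\bar{L} = \\frac{63}{36}$, smaller than the $\\frac{67}{36}$ achieved by the Shannon-Fano code on the same distribution.\n\n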
\\subsection{Example}\n\n\\includegraphics[width=\\textwidth]{huffmaneg.png}\n\n\\end{document}\n", "meta": {"hexsha": "03bad1ac7745cebd28a16c6a1d4a5672baf87752", "size": 12517, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Source/Notes_week6.tex", "max_stars_repo_name": "thundermage117/Information-Comm.-Notes", "max_stars_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Source/Notes_week6.tex", "max_issues_repo_name": "thundermage117/Information-Comm.-Notes", "max_issues_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Source/Notes_week6.tex", "max_forks_repo_name": "thundermage117/Information-Comm.-Notes", "max_forks_repo_head_hexsha": "dfffa27d7216bd231b0e0e5743d7105c64ecf7fc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3592592593, "max_line_length": 295, "alphanum_fraction": 0.6596628585, "num_tokens": 4410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772883, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.5537349958542643}}
{"text": "\\section{Applications of spectral theory}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item Use diagonalization to find a high power of a matrix.\n  \\item Use diagonalization to solve dynamical systems.\n  \\end{enumerate}\n\\end{outcome}\n", "meta": {"hexsha": "9252cea728240dc3caaf552db483319be6261f07", "size": 229, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "old/content/spectraltheoryApplicationsDiagonalization.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "old/content/spectraltheoryApplicationsDiagonalization.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "old/content/spectraltheoryApplicationsDiagonalization.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 25.4444444444, "max_line_length": 61, "alphanum_fraction": 0.7641921397, "num_tokens": 57, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255928, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.5537349924436059}}
{"text": "\\section{Conclusion}\n\nImproving renewable energy forecasting is important for grid-planning\nand unit commitment, especially as the share of variable renewable resources\nincreases, challenging grid stability. We first\ndemonstrated that our implementation of the \\gls{esn} algorithm is consistent\nwith the literature. Then, we used it to predict\ntotal demand, solar energy, and wind energy. Finally, we evaluated the\ninfluence of several meteorological factors on prediction accuracy. Our results \nshow that researchers must carefully choose each additional training input to\navoid increasing the system complexity. Our results also indicate that properly\nchosen training inputs can achieve comparable accuracy to more complex\nalgorithms. The conventional \\gls{esn} used here did not demonstrate an\nimprovement over the state-of-the-art, nor was it accurate enough to improve\ngrid-scale energy economy. Future work will explore other applications of\n\\glspl{esn} and pursue improvements to the model algorithm.\n", "meta": {"hexsha": "89a5e2d8d39ea255b25cfe27fcfa5ec8cace5cfd", "size": 1008, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "publications/forecasting-paper/conclusion.tex", "max_stars_repo_name": "arfc/cairo", "max_stars_repo_head_hexsha": "f2e38eadd6c786b2853defd97bc49c585bb6e513", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-09-11T18:27:30.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-11T18:27:30.000Z", "max_issues_repo_path": "publications/forecasting-paper/conclusion.tex", "max_issues_repo_name": "arfc/cairo", "max_issues_repo_head_hexsha": "f2e38eadd6c786b2853defd97bc49c585bb6e513", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 68, "max_issues_repo_issues_event_min_datetime": "2019-09-19T19:40:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-25T20:03:06.000Z", "max_forks_repo_path": "publications/forecasting-paper/conclusion.tex", "max_forks_repo_name": "arfc/cairo", "max_forks_repo_head_hexsha": "f2e38eadd6c786b2853defd97bc49c585bb6e513", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-10-01T18:27:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:37:44.000Z", "avg_line_length": 59.2941176471, "max_line_length": 80, "alphanum_fraction": 0.8303571429, "num_tokens": 204, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506418255927, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.5537349924436057}}
{"text": "\\documentclass[11pt, letterpaper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\n\\newcommand{\\e}{\\epsilon}\n\\newcommand{\\dl}{\\delta}\n\\newcommand{\\dij}{\\delta_{ij}}\n\\newcommand{\\mdij}{\\delta^i_j}\n\\newcommand{\\1}{\\bm{1}}\n\\newcommand{\\gd}{\\dot \\gamma}\n\\newcommand{\\pd}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\uu}[1]{\\underline{\\underline{#1}}}\n\\newcommand{\\vect}[1]{\\underline{#1}} %\\vect can be changed to make vectors appear bold or underlined or with arrow on top\n\\newcommand{\\p}[1]{\\grave{#1}} %\\p is for primed variables. The accent on top can be changed to \\hat \\tilde etc.\n\n\n\\title{Lecture 7: Beyond Cartesian Tensors}\n\\begin{document}\n\\maketitle\n\nIn general the components of a vector maybe measured parallel to the axes or perpendicular to it. This distinction leads to two equivalent representations of a vector. If the components are measured parallel to the axes, the vector is said to be contravariant. Contravariant vectors are denoted by a superscript index as $x^i$. If the components are perpendicular to the axes, it's called a covariant vector. These are written with a subscript index $x_i$. The crucial difference between these two types of vectors (two representations of a vector) lies in the transformation rules for these vectors. These can be seen as follows.\n\n\\section{Transformation rules}\nAssume that we want to transform a certain vector from an old coordinate system $(x_1, x_2, x_3)$ to a new coordinate system $(\\p x_1, \\p x_2, \\p x_3)$. The new coordinates can be expressed as some functions of the old ones. Hence we have\n\\begin{align*}\n\\p x_1 &\\equiv \\p x_1(x_1, x_2, x_3)\\\\\n\\p x_2 &\\equiv \\p x_2(x_1, x_2, x_3)\\\\\n\\p x_3 &\\equiv \\p x_3(x_1, x_2, x_3)\n\\end{align*}\n\nThen, using chain rule, we can write the components of vector $\\vect{d\\p x}$ as\n\n\\begin{align*}\nd\\p x^1 &= \\pd{\\p x^1}{x^1}dx^1+\\pd{\\p x^1}{x^2}dx^2+\\pd{\\p x^1}{x^3}dx^3\\\\\nd\\p x^2 &= \\pd{\\p x^2}{x^1}dx^1+\\pd{\\p x^2}{x^2}dx^2+\\pd{\\p x^2}{x^3}dx^3\\\\\nd\\p x^3 &= \\pd{\\p x^3}{x^1}dx^1+\\pd{\\p x^3}{x^2}dx^2+\\pd{\\p x^3}{x^3}dx^3\n\\end{align*}\n\nor, as per Einstein's summation convention, we can condense these equations as\n$$\nd\\p x^i=\\pd{\\p x^i}{x^j}dx^j\n$$\nTherefore, in general, for any contravariant vector we get\n$$\n\\p a^i=\\pd{\\p x^i}{x^j}a^j\n$$\nwhich is the transformation rule for contravariant vectors (and hence the superscript indices). More precisely, we say that vectors which transform this way are called contravariant vectors. This can be generalized to contravariant tensors as well\n$$\n\\p A^{ij}=\\pd{\\p x^i}{x^k}\\pd{\\p x^j}{x^l}A^{kl}\n$$\n\nThe key idea is this: Let there be a vector denoted as $\\vect{a}$ in the old coordinate system and $\\vect{\\p a}$ in the new coordinate system. Then, for contravariant tensors, going from old coordinate representation to the new one ($\\vect{a} \\rightarrow \\vect{\\p a}$), requires \\textit{differentiating the new coordinates with respect to the old ones} ($\\p x$ with respect to $x$).\n\n\nThere is another species of vectors which behave differently from the contravariant vectors. 
This difference is because the differentials of coordinates ($dx_1$, $dx_2$, $dx_3$) appear in the numerator in vectors like $\\vect{dx}$, velocity vector $\\vect{u}$ or acceleration vector $\\vect{a}$, but they appear in the denominator for the gradient vector $\\nabla \\phi = (\\pd{\\phi}{x_1},\\pd{\\phi}{x_2},\\pd{\\phi}{x_3})$ for some scalar $\\phi$. How does such a vector transform when we go from an old (unprimed) to a new (primed) coordinate system? Recall that\n\\begin{align*}\n\\p x_1 &\\equiv \\p x_1(x_1, x_2, x_3)\\\\\n\\p x_2 &\\equiv \\p x_2(x_1, x_2, x_3)\\\\\n\\p x_3 &\\equiv \\p x_3(x_1, x_2, x_3)\n\\end{align*}\nThen, using chain rule,\n\\begin{align*}\n\\pd{\\phi}{\\p x^1} &= \\pd{\\phi}{x^1}\\pd{x^1}{\\p x^1}+\\pd{\\phi}{x^2}\\pd{x^2}{\\p x^1}+\\pd{\\phi}{x^3}\\pd{x^3}{\\p x^1}\\\\\n\\pd{\\phi}{\\p x^2} &= \\pd{\\phi}{x^1}\\pd{x^1}{\\p x^2}+\\pd{\\phi}{x^2}\\pd{x^2}{\\p x^2}+\\pd{\\phi}{x^3}\\pd{x^3}{\\p x^2}\\\\\n\\pd{\\phi}{\\p x^3} &= \\pd{\\phi}{x^1}\\pd{x^1}{\\p x^3}+\\pd{\\phi}{x^2}\\pd{x^2}{\\p x^3}+\\pd{\\phi}{x^3}\\pd{x^3}{\\p x^3}\n\\end{align*}\nor, in index notation,\n$$\n\\pd{\\phi}{\\p x^i} = \\pd{x^j}{\\p x^i}\\pd{\\phi}{x^j}\n$$\n\nThen, instead of $\\pd{\\phi}{x^j}$, if there was a general covariant vector $a$, we would get\n$$\n\\p a_j = \\pd{x^i}{\\p x^j} a_i\n$$\nThis can be generalized for a second order tensor \n$$\n\\p a_{ij} = \\pd{x^k}{\\p x^i}\\pd{x^l}{\\p x^j} a_{kl}\n$$\nIn essence, for a covariant vector $\\vect{a}$, going from the old to the new coordinate system requires \\textit{differentiation of the old coordinates with respect to the new ones} and therein lies the key difference. Finally, we can also note the transformation rule for mixed tensors\n$$\n\\grave{a}^{i}_j = \\pd{\\grave{x}^i}{x^k}\\pd{x^l}{\\grave{x}^j} a^{k}_l\n$$\n\n\\section{Index contraction and the metric tensor}\nKnowing how the vectors transform under coordinate transformations, we can show an important result: \\textit{Contraction of indices always occurs between a contravariant and a covariant index}. Two contravariant or two covariant indices cannot contract amongst each other.\n\nTo show this, let there be a scalar $S=a^ib_i$. Being a scalar, it will be invariant to coordinate transformations. Hence we must also have $S=\\p a^i\\p b_i$. Let's assume that $\\p a^i\\p b_i=\\p S$ for some $\\p S$. Then, \n$$\n\\p S=\\p a^i\\p b_i=\\pd{\\p x^i}{x^k}a^k\\pd{x^l}{\\p x^i}b_l=\\pd{x^l}{x^k}a^k b_l=\\dl^l_k a^k b_l = a^kb_k = S\n$$\nNote that the key step in this process is\n$$\n\\pd{\\p x^i}{x^k}a^k\\pd{x^l}{\\p x^i}b_l=\\pd{x^l}{x^k}a^k b_l\n$$\nThis is only possible because one of the contracting vectors is contravariant and the other is covariant, so that one of the primed coordinates can cancel the other. Hence, contractions can only occur between two different types of vectors. In general, this idea can be used to lower the indices of higher order tensors as we will soon see. Another thing to note is that $\\mdij$ is, in fact, a mixed second order tensor: it has one contravariant and one covariant index.\n\nThis idea allows us to define the metric tensor. To begin with, consider the expression for a short length $ds$ in Cartesian coordinate system\n$$\nds^2= d{x_1}^2+d{x_2}^2+d{x_3}^2 \n$$\nFor cylindrical coordinates $(x_1=r, x_2=\\phi, x_3=z)$, the expression for $ds^2$ is given by\n$$\nds^2= d{r}^2+r^2d{\\phi}^2+d{z}^2\n$$\n\nHere $r$ is the \\textit{metric factor}. 
It converts the coordinate differential $d\\phi$ to a small distance $r d\\phi$ in cylindrical coordinates. More generally, in an arbitrary coordinate system, we can write\n$$\nds^2= h_1^2d{x_1}^2 + h_2^2d{x_2}^2 + h_3^2d{x_3}^2 \n$$\nwhere $h_1,h_2,h_3$ are the metric factors corresponding to each of the coordinates. Yet, this is not the most general form for $ds^2$. This is the most general form for the metric \\textit{in an orthogonal coordinate system}. If the coordinate system is non-orthogonal, the form for the metric is given by a more general expression\n$$\nds^2 = g_{ij}dx^idx^j\n$$\n\nFor the Cartesian coordinates, $g_{ij} = \\dij$. For cylindrical coordinates, $g_{ij}$ is such that $\\{g_{11} = 1(=h_1^2),g_{22} = r^2(=h_2^2),g_{33} = 1(=h_3^2)\\}$ and the rest of the terms are $0$. The fact that $g_{ij}$ is diagonal is an artifact of the orthogonality of the coordinate system. If the coordinate system is non-orthogonal, then the off-diagonal terms can also be non-zero. How does $g_{ij}$ transform upon change of coordinates? Let there be a metric $ds^2$ defined in the $\\p x$ coordinate system as\n$$\nds^2 = \\p g_{ij} d\\p x^i d \\p x^j\n$$\nNow, in order to migrate to the old coordinates, we impose the transformation rules for contravariant vectors ($ds^2$, being a scalar, remains invariant to coordinate transformations)\n\\begin{align*}\nds^2 &= \\p g_{ij}\\pd{\\p x^i}{x^k}dx^k \\pd{\\p x^j}{x^l}dx^l\\\\\n\\Rightarrow ds^2 &=\\p g_{ij}\\pd{\\p x^i}{x^k}\\pd{\\p x^j}{x^l} dx^k dx^l\n\\end{align*}\nComparing this equation with the definition of metric in the old coordinate system $ ds^2 = g_{kl} dx^k dx^l $ we get the transformation rule for $g$ as\n$$\ng_{kl} = \\pd{\\p x^i}{x^k}\\pd{\\p x^j}{x^l} \\p g_{ij}\n$$\nSimilarly we can obtain the inverse transformation as\n$$\n\\p g_{ij} = \\pd{x^k}{\\p x^i}\\pd{x^l}{\\p x^j}g_{kl}\n$$\nThis allows us to find the metric tensor for the new space from the knowledge of the metric tensor for the old space. Specifically, if the old coordinate system is Cartesian, then $g_{kl} = \\dl_{kl}$. Let's denote the Cartesian coordinates with $y_i$'s  and the new coordinate system with $x_i$'s. Then, the transformation becomes\n\\begin{align*}\ng_{ij} &= \\pd{y^k}{x^i}\\pd{y^l}{x^j}\\dl_{kl} \\\\\n&=\\pd{y^k}{x^i}\\pd{y^k}{x^j}\n\\end{align*}\nThis will allow us to obtain the metric tensor when we go from Cartesian to a general coordinate system. \n\n\nLet us also define an inverse contravariant metric tensor $g^{ij}$ such that\n$$\ng_{ik}g^{kj}=\\mdij\n$$\n\nThis will allow us to perform operations like lowering and raising of indices i.e. to transform a covariant vector into its contravariant representation and vice versa.\n\\begin{align*}\na_i &= g_{ij}a^j\\\\\na^i &= g^{ij}a_j\n\\end{align*}\n\nWith this formalism, we can consider a simple application.\n\n\\section{The Convection-Diffusion equation}\n\nLet's say we have simple shear flow in a channel. Let's define two coordinate systems: the Cartesian coordinates which are fixed in space ($y_i$'s) and the convected coordinates which deform with the flow ($x_i$'s). Then, we have\n\\begin{align*}\nu_1 &= \\gd y_2\\\\\nu_2 &= 0 \\\\\n\\Rightarrow y_1 &= \\gd y_2 t + c_1 \\\\\ny_2 &= c_2 \\\\\n\\Rightarrow c_1 &= y_1 - \\gd t y_2 \\\\\nc_2 &= y_2\n\\end{align*}\n$c_1$ and $c_2$ are in fact the convected coordinates. At $t=0$, $c_1=y_1, c_2=y_2$. For later times, $c_2$ remains $y_2$ while $c_1$ convects with the flow. 
Therefore, our convected coordinates become\n\\begin{align*}\nx_1 &= y_1 - \\gd t y_2 \\\\\nx_2 &= y_2\n\\end{align*}\n\nAdditionally, we can also find the new basis vectors ($\\vect g_1,\\vect g_2$) using the transformation law for covariant vectors\n\\begin{align*}\n\\p a_j &= \\pd{x^i}{\\p x^j} a_i\\\\\n\\Rightarrow g_i &= \\pd{y^j}{x^i} \\vect{1}_j\\\\\n\\Rightarrow g_1 &= \\vect{1}_1\\\\\n g_2 &= \\gd t\\vect{1}_1+\\vect{1}_2\n\\end{align*}\n\nUsing chain rule, we can express the derivatives in the new coordinates\n\\begin{align*}\n\\pd{}{y_1} &= \\pd{}{x^1}\\\\\n\\pd{}{y_2} &= -\\gd t \\pd{}{x_1} + \\pd{}{x_2} \n\\end{align*}\n\nNow, the convection-diffusion equation, in Cartesian coordinates, is \n$$\n\\pd{T}{t} + u_\\alpha\\pd{T}{y_\\alpha} = \\kappa \\pd{^2T}{y_\\alpha y_\\alpha} \n$$\nUsing the expressions for shear-flow velocity profile\n$$\n\\pd{T}{t} + \\gd y_2 \\pd{T}{y_1} = \\kappa \\bigg(\\pd{^2T}{y_1^2}+\\pd{^2T}{y_2^2}\\bigg) \n$$\nExpressing the derivatives in the new coordinates, we finally get the convection-diffusion equation in convected coordinates\n\\begin{align*}\n\\pd{T}{t} &= \\kappa \\bigg((1+(\\gd t)^2)\\pd{^2T}{x_1^2}+\\pd{^2T}{x_2^2} - 2\\gd t \\pd{^2T}{x^1 \\partial x^2} \\bigg) \\\\\n\\end{align*}\n\nWhat does this equation tell us? Notice that there is no convection term anymore. We only have the first time derivative proportional to the second order spatial derivatives which is a telltale sign of pure diffusion. Therefore, in the convected coordinates, the convection-diffusion equation appears as just the diffusion equation. But there is one important difference. The last term includes partial derivatives with respect to both $x_1$ and $x_2$. Presence of cross-derivatives is an artifact of non-orthogonality in the coordinate system. As time increases, the coefficient of this term increases in magnitude, and so does the non-orthogonality of the coordinate system. \n\nIt also makes intuitive sense. Since the coordinates move with the flow, an observer moving with the coordinates will not perceive any flow. \n\n\\subsection{Method II}\n\nIn the previous section we arrived at the result using just the coordinate transformation. Now we arrive at the same result using the isotropy of the metric tensor. The heat equation is given by $\\pd{T}{t} = -\\pd{q^i}{x^i}$. Using Fourier's law ($q^i=-\\kappa g^{ij} \\pd {T} {x^j}$), it becomes\n\\begin{align*}\n\\pd{T}{t} &= \\kappa \\pd{}{x^i}\\big(g^{ij} \\pd{T}{x^j}\\big)\n\\end{align*}\n\nNote that heat flux is a contravariant vector. This is because its contraction with a covariant vector (the gradient operator), $\\nabla \\cdot q$, is a scalar. Since gradient vector $\\pd{T}{x^j}$ is obviously a covariant vector, this implies that we need an isotropic second order contravariant tensor to carry out the transformation. Hence, $g^{ij}$ is a contravariant tensor. Now all we need to do is find $g^{ij}$. \n\nFirstly, we find the elements of the metric tensor $g_{ij}$\n\\begin{align*}\ng_{ij} &= \\sum^{2}_{k=1} \\pd{y^k}{x^i}\\pd{y^k}{x^j}\\\\\n\\Rightarrow g_{11} &= 1\\\\\ng_{22} &= 1 + (\\gd t)^2\\\\\ng_{12} &= g_{21} = \\gd t\n\\end{align*}\nUsing $g_{ik}g^{kj}=\\mdij$, we can find the contravariant metric tensor $g^{ij}$. 
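As a quick symbolic check of this inversion (writing $s$ for $\\gd t$), one might use SymPy:\n\\begin{verbatim}\nimport sympy as sym\n\ns = sym.symbols('s')  # s stands for (gamma_dot * t)\ng = sym.Matrix([[1, s], [s, 1 + s**2]])  # covariant metric g_ij\nprint(g.inv())  # det(g) = 1, so the inverse is exact\n\\end{verbatim}\n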
Using $g_{ik}g^{kj}=\\mdij$, we can find the contravariant metric tensor $g^{ij}$. Its elements turn out to be\n\\begin{align*}\ng^{11} &= 1 + (\\gd t)^2\\\\\ng^{22} &= 1\\\\\ng^{12} &= g^{21} = -\\gd t\n\\end{align*}\n\nSubstituting these values in the heat equation, we get exactly the same equation as before\n\\begin{align*}\n\\pd{T}{t} &= \\kappa \\bigg((1+(\\gd t)^2)\\pd{^2T}{x_1^2}+\\pd{^2T}{x_2^2} - 2\\gd t \\pd{^2T}{x_1 \\partial x_2} \\bigg)\n\\end{align*}\n\nHowever, there is a subtlety in this process. In order to carry out the outer differentiation in the term $\\kappa \\pd{}{x^i}(g^{ij} \\pd{T}{x^j})$, we need the covariant derivative. A covariant derivative is a form of differentiation that, besides considering the change in the components of a vector, also accounts for the change in the direction of the basis vectors as we traverse a small distance in the space. It does not matter in the above example because the metric tensor $g^{ij}$ does not vary in space; thus, at any given time, the direction of the basis vectors is the same at all points in the space. But for a general metric, we must use the covariant derivative to carry out the differentiation in Fourier's law. \n\nThis differentiation requires us to differentiate a contravariant tensor and a covariant vector. Once we incorporate the rules for the covariant derivative of a contravariant tensor and of a covariant vector, we can write down the heat equation in a general, non-orthogonal coordinate system.\n\n\n\\end{document}\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "6fb13f1c85899d904c4f0de9ebecd5ba3c253cb1", "size": 14177, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex_files/lecture07.tex", "max_stars_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_stars_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-16T04:19:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-16T04:19:07.000Z", "max_issues_repo_path": "tex_files/lecture07.tex", "max_issues_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_issues_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex_files/lecture07.tex", "max_forks_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_forks_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.8653061224, "max_line_length": 712, "alphanum_fraction": 0.704168724, "num_tokens": 4755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506526772884, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.5537349913651549}}
{"text": "\\documentclass[letterpaper, twoside, 12pt]{book}\n\\usepackage{packet}\n\n\n\\begin{document}\n\n\\setcounter{chapter}{3}\n\n\\chapter{Packet 4.1: Sections 16.1-16.4}\n\n\\setcounter{chapter}{16}\n\\setcounter{section}{0}\n\n\\section{Vector Fields} %16.1\n\n\\begin{definition}\n  A \\textbf{vector field} assigns a vector to each point in 2D or 3D space.\n    \\[\n      \\vect{F}=\n      \\vect{F}(\\vect{r})=\n      \\vect{F}(x,y)=\n      \\<P(x,y),Q(x,y)\\>=\n      \\<P(\\vect{r}),Q(\\vect{r})\\>=\n      \\<P,Q\\>\n    \\]\n    \\[\n      \\vect{F}=\n      \\vect{F}(\\vect{r})=\n      \\vect{F}(x,y,z)=\n      \\<P(x,y,z),Q(x,y,z),R(x,y,z)\\>=\n      \\<P(\\vect{r}),Q(\\vect{r}),R(\\vect{r})\\>=\n      \\<P,Q,R\\>\n    \\]\n\\end{definition}\n\n          \\begin{problem}\n            Sketch the vector field $\\vect{F}=\\<x+y,2y\\>$ for\n            all $x\\in\\{0,1,2\\}$ and $y\\in\\{0,1,2\\}$.\n          \\end{problem}\n\n          \\begin{solution}\n            See\n            \\url{http://kevinmehall.net/p/equationexplorer/vectorfield.html#(x+y)i+2yj%7C%5B-1,4,-1,4%5D}\n          \\end{solution}\n\n\\begin{remark}\n  The gradient vector function\n    \\[\n      \\nabla f (x,y)\n        =\n      \\<f_x(x,y),f_y(x,y)\\>\n    \\]\n    \\[\n      \\nabla f (x,y,z)\n        =\n      \\<f_x(x,y,z),f_y(x,y,z),f_z(x,y,z)\\>\n    \\]\n  is a vector field which yields normal vectors\n  to the level surfaces of the function $f$.\n\\end{remark}\n\n          \\begin{problem}\n            Compute $\\nabla f$ for the function\n            $f(x,y)=x^2-2xy+y$, and then\n            sketch the vector field $\\nabla f$\n            all $x\\in\\{0,1,2\\}$ and $y\\in\\{0,1,2\\}$.\n          \\end{problem}\n\n          \\begin{solution}\n            \\(\n              \\nabla f\n                =\n              \\<\n                \\frac{\\partial}{\\partial x}[x^2-2xy+y],\n                \\frac{\\partial}{\\partial y}[x^2-2xy+y]\n              \\>\n                =\n              \\<\n                2x-2y,\n                -2x+1\n              \\>\n            \\)\n\n            See\n            \\url{http://kevinmehall.net/p/equationexplorer/vectorfield.html#(x+y)i+2yj%7C%5B-1,4,-1,4%5D}\n          \\end{solution}\n\n\n\\section{Line Integrals} %16.2\n\n\\begin{theorem}\n  Some vector functions which parameterize curves follow.\n  \\begin{itemize}\n    \\item\n    A line segment beginning at $P_0$ and ending at $P_1$:\n      \\[\n        \\vect{r}(t) = \\vect{P_0} + t\\vect{P_0P_1}, 0\\leq t\\leq 1\n      \\]\n    \\item\n    A circle centered at the origin with radius $a$:\n      \\[\n        \\vect{r}(t) = \\<a\\cos t,a\\sin t\\>, 0\\leq t\\leq 2\\pi\n        \\text{ (full counter-clockwise rotation)}\n      \\]\n      \\[\n        \\vect{r}(t) = \\<a\\sin t,a\\cos t\\>, 0\\leq t\\leq 2\\pi\n        \\text{ (full clockwise rotation)}\n      \\]\n    \\item\n    A planar curve given by $y=f(x)$ from $(x_0,y_0)$ to $(x_1,y_1)$\n      \\[\n        \\vect{r}(t) = \\<t,f(t)\\>, x_0\\leq t\\leq x_1\n        \\text{ (left-to-right)}\n      \\]\n      \\[\n        \\vect{r}(t) = \\<-t,f(-t)\\>, -x_0\\leq t\\leq -x_1\n        \\text{ (right-to-left)}\n      \\]\n    \\end{itemize}\n\\end{theorem}\n\n          \\begin{problem}\n            Give a vector function which parameterizes the line segment\n            from the point $(0,3,-2)$ to the point $(4,-1,0)$.\n          \\end{problem}\n\n          \\begin{solution}\n            Using the above theorem:\n            \\[\n              \\vect{r}(t) 
\n\\section{Line Integrals} %16.2\n\n\\begin{theorem}\n  Some vector functions which parameterize curves follow.\n  \\begin{itemize}\n    \\item\n    A line segment beginning at $P_0$ and ending at $P_1$:\n      \\[\n        \\vect{r}(t) = \\vect{P_0} + t\\vect{P_0P_1}, 0\\leq t\\leq 1\n      \\]\n    \\item\n    A circle centered at the origin with radius $a$:\n      \\[\n        \\vect{r}(t) = \\<a\\cos t,a\\sin t\\>, 0\\leq t\\leq 2\\pi\n        \\text{ (full counter-clockwise rotation)}\n      \\]\n      \\[\n        \\vect{r}(t) = \\<a\\sin t,a\\cos t\\>, 0\\leq t\\leq 2\\pi\n        \\text{ (full clockwise rotation)}\n      \\]\n    \\item\n    A planar curve given by $y=f(x)$ from $(x_0,y_0)$ to $(x_1,y_1)$\n      \\[\n        \\vect{r}(t) = \\<t,f(t)\\>, x_0\\leq t\\leq x_1\n        \\text{ (left-to-right)}\n      \\]\n      \\[\n        \\vect{r}(t) = \\<-t,f(-t)\\>, -x_0\\leq t\\leq -x_1\n        \\text{ (right-to-left)}\n      \\]\n    \\end{itemize}\n\\end{theorem}\n\n          \\begin{problem}\n            Give a vector function which parameterizes the line segment\n            from the point $(0,3,-2)$ to the point $(4,-1,0)$.\n          \\end{problem}\n\n          \\begin{solution}\n            Using the above theorem:\n            \\[\n              \\vect{r}(t) = \\vect{P_0} + t\\vect{P_0P_1}, 0\\leq t\\leq 1\n            \\]\n            \\[\n              \\vect{r}(t) = \\<0,3,-2\\> + t\\<4-0,-1-3,0-(-2)\\>, 0\\leq t\\leq 1\n            \\]\n            \\[\n              \\vect{r}(t) = \\<4t,3-4t,-2+2t\\>, 0\\leq t\\leq 1\n            \\]\n          \\end{solution}\n\n          \\begin{problem}\n            Give a vector function which parameterizes the curve\n            $y=x^3-2x$ from the point $(1,-1)$ to the point $(-1,1)$.\n          \\end{problem}\n\n          \\begin{solution}\n            Using the above theorem (noting that \((1,-1)\) is to\n            the right of \((-1,1)\)):\n              \[\n                \vect{r}(t) = \<-t,f(-t)\>, -x_0\leq t\leq -x_1\n              \]\n              \[\n                \vect{r}(t) = \<-t,(-t)^3-2(-t)\>, -1\leq t\leq 1\n              \]\n              \[\n                \vect{r}(t) = \<-t,-t^3+2t\>, -1\leq t\leq 1\n              \]\n          \end{solution}\n\n          \begin{problem}\n            Give a vector function which parameterizes the curve\n            $x^2+y^2=9$ from the point $(3,0)$ clockwise to the point $(0,-3)$.\n          \end{problem}\n\n          \begin{solution}\n            Using the above theorem, we may get a full clockwise rotation\n            by using:\n              \[\n                \vect{r}(t) = \<a\sin t,a\cos t\>, 0\leq t\leq 2\pi\n              \]\n              \[\n                \vect{r}(t) = \<3\sin t,3\cos t\>, 0\leq t\leq 2\pi\n              \]\n\n            To obtain the portion from \((3,0)\) to \((0,-3)\), we note\n            that we may plug in \(\pi/2\) and \(\pi\) respectively to\n            get those points.\n            Therefore the vector function is:\n              \[\n                \vect{r}(t) = \<3\sin t,3\cos t\>, \frac{\pi}{2}\leq t\leq \pi\n              \]\n          \end{solution}\n\n\begin{definition}\nThe \textbf{line integral with respect to arclength} of a function of many\nvariables $f(\vect{r})$ along a curve $C$ is given by\n  \[\n    \int_C f(\vect{r})\dvar{s} =\n    \lim_{n\to\infty}\sum_{i=1}^n f(\vect{r}_{n,i})\Delta s_{n,i}\n  \]\nwhere for each positive integer $n$ we've defined a way to partition $C$\ninto $n$ pieces\n  \[\n    \Delta C_{n,1},\Delta C_{n,2},\dots,\Delta C_{n,n}\n  \]\nwhere $\Delta C_{n,i}$ has length $\Delta s_{n,i}$, contains the position\nvector $\vect{r}_{n,i}$, and\n  \[\n    \lim_{n\to\infty} \max(\Delta s_{n,i}) = 0\n  \]\n\end{definition}\n\n\begin{theorem}\nIf $\vect{r}(t)$ is a parametrization of $C$ for $a \leq t \leq b$, then\n  \[\n    \int_C f(\vect{r})\dvar{s}\n    =\int_{t=a}^{t=b} f(\vect{r}(t))\frac{ds}{dt}\dvar{t}\n  \]\n\end{theorem}\n\n
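\nThis formula is also easy to evaluate numerically; as a quick illustration (mine, not the packet's), the arclength of the parabola $y=x^2$ from $(0,0)$ to $(1,1)$ corresponds to the case $f=1$:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\n# r(t) = <t, t^2>, so ds/dt = |r'(t)| = sqrt(1 + 4t^2)\nintegrand = lambda t: 1.0 * np.sqrt(1 + 4 * t**2)\ns, err = quad(integrand, 0, 1)\nprint(s)   # about 1.4789\n\end{verbatim}\n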
          \\begin{problem}\n            Evaluate $\\int_C z + 2xy\\dvar{s}$ where $C$ is the line segment\n            from $(0,-1,3)$ to $(2,2,-3)$.\n          \\end{problem}\n\n          \\begin{solution}\n            We'll use the parametrization\n              \\[\n                \\vect{r}(t)\n                  =\n                \\<2t,-1+3t,3-6t\\>,\n                0\\leq t\\leq 1\n              \\]\n            for which\n              \\[\n                \\frac{d\\vect{r}}{dt}\n                  =\n                \\<2,3,-6\\>\n              \\]\n              \\[\n                \\frac{ds}{dt}\n                  =\n                \\left|\\frac{d\\vect{r}}{dt}\\right|\n                  =\n                \\sqrt{2^2+3^2+(-6)^2}\n                  =\n                \\sqrt{4+9+36}\n                  =\n                7\n              \\]\n\n            Therefore\n              \\[\n                \\int_C z + 2xy\\dvar{s}\n                  =\n                \\int_0^1 (z + 2xy)\\frac{ds}{dt}\\dvar{t}\n                  =\n                \\int_0^1 [(3-6t) + 2(2t)(-1+3t)](7)\\dvar{t}\n              \\]\n              \\[\n                  =\n                \\int_0^1 [3-10t+12t^2](7)\\dvar{t}\n                  =\n                (7)[3t-5t^2+4t^3]_0^1\n                  =\n                (7)(2)\n                  =\n                14\n              \\]\n          \\end{solution}\n\n          \\begin{problem}\n            Prove that $\\int_C xy\\dvar{s}=\\int_0^1 t^3\\sqrt{1+4t^2}\\dvar{t}$\n            where $C$ is the parabolic arc\n            on $y=x^2$ from $(0,0)$ to $(1,1)$.\n          \\end{problem}\n\n          \\begin{solution}\n            We'll use the parametrization\n              \\[\n                \\vect{r}(t)\n                  =\n                \\<t,t^2\\>,\n                0\\leq t\\leq 1\n              \\]\n            for which\n              \\[\n                \\frac{d\\vect{r}}{dt}\n                  =\n                \\<1,2t\\>\n              \\]\n              \\[\n                \\frac{ds}{dt}\n                  =\n                \\left|\\frac{d\\vect{r}}{dt}\\right|\n                  =\n                \\sqrt{1^2+(2t)^2}\n                  =\n                \\sqrt{1+4t^2}\n              \\]\n\n            Therefore\n              \\[\n                \\int_C xy\\dvar{s}\n                  =\n                \\int_0^1 xy\\frac{ds}{dt}\\dvar{t}\n                  =\n                \\int_0^1 (t)(t^2)\\sqrt{1+4t^2}\\dvar{t}\n                  =\n                \\int_0^1 t^3\\sqrt{1+4t^2}\\dvar{t}\n              \\]\n          \\end{solution}\n\n\\begin{definition}\nThe \\textbf{line integral of a vector field} $\\vect F$\nover the curve $C$ is given by\n  \\[\n    \\int_C \\vect F\\cdot\\dvar{\\vect r} =\n    \\lim_{n\\to\\infty}\\sum_{i=1}^n\n    \\vect F(\\vect{r}_{n,i})\\cdot\\Delta\\vect{C}_{n,i}\n  \\]\nwhere for each positive integer $n$ we've defined a way to approximate $C$\nwith $n$ vectors\n  \\[\n    \\Delta \\vect{C}_{n,1},\\Delta \\vect{C}_{n,2},\\dots,\\Delta \\vect{C}_{n,n}\n  \\]\nwhere $\\vect{r}_{n,i}+\\Delta \\vect{C}_{n,i}=\\vect{r}_{n,i+1}$\nand\n  \\[\n    \\lim_{n\\to\\infty} \\max(|\\Delta \\vect{C}_{n,i}|) = 0\n  \\]\n\\end{definition}\n\n\\begin{definition}\nThe line integral of a vector field $\\vect F$ over the curve $C$\nmay be computed by\n    \\[\n      \\int_C \\vect{F}\\cdot\\dvar{\\vect r}\n        =\n      \\int_C \\vect{F}\\cdot\\vect{T}\\dvar{s}\n    \\]\nwhere $\\vect T$ yields the unit tangent vectors to the curve $C$.\n\\end{definition}\n\n\\begin{definition}\nIf $\\vect{r}(t)$ is a parametrization of $C$ for $a \\leq t \\leq b$, then\n    \\[\n      \\int_C \\vect{F}\\cdot\\dvar{\\vect r}\n        =\n      \\int_{t=a}^{t=b} \\vect{F}\\cdot\\frac{d\\vect{r}}{dt}\\dvar{t}\n    \\]\n\\end{definition}\n\n
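\nA quick numerical check of this last formula (an aside of mine, not from the packet), using the field $\\<-y,x\\>$ on the unit circle:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\n# r(t) = <cos t, sin t>, F = <-y, x>, so\n# F(r(t)) . r'(t) = sin^2 t + cos^2 t = 1\nintegrand = lambda t: np.sin(t)**2 + np.cos(t)**2\nI, err = quad(integrand, 0, 2 * np.pi)\nprint(I)   # 2*pi, about 6.2832\n\\end{verbatim}\n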
          \\begin{problem}\n            Prove that\n            $\\int_C \\<2x,y-x\\>\\cdot\\dvar{\\vect{r}}\n              =\n            \\int_0^1 23t-7 \\dvar{t}$\n            where $C$ is the line segment given by the vector equation\n            $\\vect{r}(t)=\\<1-2t,3t\\>$ for $0\\leq t\\leq 1$.\n          \\end{problem}\n\n          \\begin{solution}\n            We'll use the parametrization\n              \\[\n                \\vect{r}(t)\n                  =\n                \\<1-2t,3t\\>,\n                0\\leq t\\leq 1\n              \\]\n            for which\n              \\[\n                \\frac{d\\vect{r}}{dt}\n                  =\n                \\<-2,3\\>\n              \\]\n\n            Therefore\n              \\[\n                \\int_C \\<2x,y-x\\>\\cdot\\dvar{\\vect{r}}\n                  =\n                \\int_0^1 \\<2x,y-x\\>\\cdot\\frac{d\\vect{r}}{dt}\\dvar{t}\n                  =\n                \\int_0^1 \\<2(1-2t),3t-(1-2t)\\>\\cdot\\<-2,3\\>\\dvar{t}\n              \\]\n              \\[\n                  =\n                \\int_0^1 \\<2-4t,5t-1\\>\\cdot\\<-2,3\\>\\dvar{t}\n                  =\n                \\int_0^1 (-4+8t)+(15t-3)\\dvar{t}\n                  =\n                \\int_0^1 23t-7\\dvar{t}\n              \\]\n          \\end{solution}\n\n\\begin{remark}\n  The work done by a force vector field $\\vect{F}$ over the curve $C$\n  is given by $\\int_C\\vect{F}\\cdot\\dvar{\\vect{r}}$.\n\\end{remark}\n\n          \\begin{problem}\n            Find the work done by the force vector field\n            $\\<-3y,3x\\>$ moving a particle one rotation counter-clockwise\n            around the unit circle $x^2+y^2=1$.\n          \\end{problem}\n\n          \\begin{solution}\n            We'll use the parametrization\n              \\[\n                \\vect{r}(t)\n                  =\n                \\<\\cos t,\\sin t\\>,\n                0\\leq t\\leq 2\\pi\n              \\]\n            for which\n              \\[\n                \\frac{d\\vect{r}}{dt}\n                  =\n                \\<-\\sin t,\\cos t\\>\n              \\]\n\n            Therefore\n              \\[\n                W\n                  =\n                \\int_C \\<-3y,3x\\>\\cdot\\dvar{\\vect{r}}\n                  =\n                \\int_0^{2\\pi} \\<-3y,3x\\>\\cdot\\frac{d\\vect{r}}{dt}\\dvar{t}\n                  =\n                \\int_0^{2\\pi}\n                  \\<-3\\sin t,3\\cos t\\>\\cdot\\<-\\sin t,\\cos t\\>\n                \\dvar{t}\n              \\]\n              \\[\n                  =\n                \\int_0^{2\\pi}\n                  3\\sin^2 t + 3\\cos^2 t\n                \\dvar{t}\n                  =\n                \\int_0^{2\\pi}\n                  3\n                \\dvar{t}\n                  =\n                6\\pi\n              \\]\n          \\end{solution}\n\n\\begin{theorem}\n  If $C$ may be split into two curves $C_1$ and $C_2$, then\n  \\[\n    \\int_C f\\dvar{s}\n      =\n    \\int_{C_1} f\\dvar{s}\n      +\n    \\int_{C_2} f\\dvar{s}\n  \\]\n  and\n  \\[\n    \\int_C \\vect F\\cdot\\dvar{\\vect r}\n      =\n    \\int_{C_1} \\vect F\\cdot\\dvar{\\vect r}\n      +\n    \\int_{C_2} \\vect F\\cdot\\dvar{\\vect r}\n  \\]\n\\end{theorem}\n\n\\begin{theorem}\n  If $-C$ is the curve $C$ oriented in the opposite direction, then\n  \\[\n    \\int_C f\\dvar{s}\n      =\n    \\int_{-C} f\\dvar{s}\n  \\]\n  and\n  \\[\n    \\int_C \\vect F\\cdot\\dvar{\\vect r}\n      =\n    - \\int_{-C} \\vect F\\cdot\\dvar{\\vect r}\n  \\]\n\\end{theorem}\n\n          \\begin{problem}\n            Write a paragraph explaining why a negative appears in the\n            previous theorem for the\n            line integral of a vector field but not for an arclength\n            line integral.\n          \\end{problem}\n\n          \\begin{solution}\n            \\(\\int_C f\\dvar{s}\\) measures the area of a ribbon\n            of height \\(f\\) at each 
point above \(C\), so the orientation\n            of \(C\) is irrelevant.\n\n            But \(\int_C \vect F\cdot\dvar{\vect r}\) measures the work\n            done by \(\vect F\) moving through the curve \(C\), so\n            if the orientation of \(C\) is reversed, then the motion is\n            in the opposite direction as before, so work is negated.\n          \end{solution}\n\n\n\section{The Fundamental Theorem for Line Integrals} %16.3\n\n\begin{definition}\n  If $\nabla f=\vect{F}$, then $f$ is a \textbf{potential function}\n  for the \textbf{conservative field} $\vect{F}$.\n\end{definition}\n\n          \begin{problem} %x^2-3yz\n            Prove that $\<2x,-3z,-3y\>$ is a conservative field by\n            finding a potential function $f$ for it. Hint: such an $f$\n            must satisfy that $f=x^2+\Phi_1(y,z)$, $f=-3yz+\Phi_2(x,z)$,\n            and $f=-3yz+\Phi_3(x,y)$ for some functions $\Phi_i$. (Why?)\n          \end{problem}\n\n          \begin{solution}\n            We want to solve the system\n              \[\n                f_x = 2x\n              \]\n              \[\n                f_y = -3z\n              \]\n              \[\n                f_z = -3y\n              \]\n            for a potential function \(f\). Note \(f\) must satisfy\n            each of the anti-partial-derivatives:\n              \[\n                f = x^2+\Phi_1(y,z)\n              \]\n              \[\n                f = -3yz+\Phi_2(x,z)\n              \]\n              \[\n                f = -3yz+\Phi_3(x,y)\n              \]\n\n            So \(f=x^2-3yz\) satisfies all three. Therefore since\n            \(\nabla f=\vect{F}\), \(\vect F\) is conservative.\n          \end{solution}\n\n\begin{theorem}\n  The Fundamental Theorem for Line Integrals:\n  If $C$ is any smooth curve beginning at the point $A$ and ending at the\n  point $B$, then\n  \[\n    \int_C \nabla f\cdot \dvar{\vect{r}} = \left[f\right]_A^B = f(B)-f(A)\n  \]\n\end{theorem}\n\n          \begin{problem}\n            Prove that if $C$ is any smooth \textbf{closed curve}\n            (beginning and ending at the same point), then\n            \[\n              \int_C \nabla f\cdot \dvar{\vect{r}} = 0\n            \]\n          \end{problem}\n\n          \begin{solution}\n            Let \(C\) begin and end at the point \(A\).\n            \[\n              \int_C \nabla f\cdot \dvar{\vect{r}}\n                =\n              [f]_A^A\n                =\n              f(A)-f(A)\n                =\n              0\n            \]\n          \end{solution}\n\n
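\nThe potential-function search from the problem earlier in this section can also be automated; a short sympy sketch (an aside of mine, not part of the packet):\n\begin{verbatim}\nimport sympy as sp\n\nx, y, z = sp.symbols('x y z')\n\n# candidate potential for F = <2x, -3z, -3y>\nf = x**2 - 3*y*z\n\nprint([sp.diff(f, v) for v in (x, y, z)])   # [2*x, -3*z, -3*y]\n\end{verbatim}\n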
          \begin{problem}\n            Compute $\int_C\<4,z^2,2yz\>\cdot\dvar{\vect r}$ where\n            $C$ is the curve given by\n            $\vect{r}(t)=\<2^t,\sin (\pi t),4t^2\>$ for $0\leq t\leq 1$.\n            Then compute $\int_{C'}\<4,z^2,2yz\>\cdot\dvar{\vect r}$ where\n            $C'$ is the line segment starting at $(1,0,0)$ and ending\n            at $(2,0,4)$.\n          \end{problem}\n\n          \begin{solution}\n            By plugging in \(0\) and \(1\) into \(\vect{r}(t)\),\n            we see that both curves begin at \((1,0,0)\) and end at\n            \((2,0,4)\).\n\n            It follows that \(f=4x+2yz^2\) satisfies\n              \[\n                f_x=4\n              \]\n              \[\n                f_y=z^2\n              \]\n              \[\n                f_z=2yz\n              \]\n            so for both curves,\n              \[\n                \int_C\<4,z^2,2yz\>\cdot\dvar{\vect r}\n                  =\n                \int_{C'}\<4,z^2,2yz\>\cdot\dvar{\vect r}\n                  =\n                [4x+2yz^2]_{(1,0,0)}^{(2,0,4)}\n                  =\n                [8+0]-[4+0]\n                  =\n                4\n              \]\n          \end{solution}\n\n          \begin{problem}\n            Prove that if $f$ is a potential function for the vector field\n            $\<P,Q,R\>$, then\n            $P_y=Q_x$, $P_z=R_x$, and $Q_z=R_y$. (Hint: use the mixed derivative\n            theorem.)\n          \end{problem}\n\n          \begin{solution}\n            Note that \(P=f_x\), \(Q=f_y\), and \(R=f_z\). So it follows\n            by the mixed derivative theorem that:\n              \[\n                P_y=f_{xy}=f_{yx}=Q_x\n              \]\n              \[\n                P_z=f_{xz}=f_{zx}=R_x\n              \]\n              \[\n                Q_z=f_{yz}=f_{zy}=R_y\n              \]\n          \end{solution}\n\n\begin{theorem}\n  $\vect{F}=\<P,Q,R\>$ is a conservative vector field if and only if\n  $P_y=Q_x$, $P_z=R_x$, and $Q_z=R_y$.\n\end{theorem}\n\n          \begin{problem}\n            Prove that\n            $\int_C\<ye^{xy+z},xe^{xy+z},e^{xy+z}\>\cdot\dvar{\vect r}=0$\n            where $C$ is the curve given by\n            $\vect{r}(t)=\<\frac{1}{1+t^2},\cos t,e^{1-t^2}\>$\n            for $-1 \leq t \leq 1$.\n          \end{problem}\n\n          \begin{solution}\n            By the previous theorem, we can show that\n            \(\<ye^{xy+z},xe^{xy+z},e^{xy+z}\>\) is conservative by:\n              \[\n                P_y=e^{xy+z}+xye^{xy+z}=Q_x\n              \]\n              \[\n                P_z=ye^{xy+z}=R_x\n              \]\n              \[\n                Q_z=xe^{xy+z}=R_y\n              \]\n\n            Since \(C\) starts at \(\vect{r}(-1)=\<\frac{1}{2},\cos 1,1\>\)\n            and ends at the same point\n            \(\vect{r}(1)=\<\frac{1}{2},\cos 1,1\>\), the result of\n            Problem 25 shows that\n              \[\n                \int_C\<ye^{xy+z},xe^{xy+z},e^{xy+z}\>\cdot\dvar{\vect r}=0\n              \]\n          \end{solution}\n\n
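\nThe mixed-partials check in this solution is likewise mechanical with a computer algebra system; a sympy sketch (an aside of mine, not part of the packet):\n\begin{verbatim}\nimport sympy as sp\n\nx, y, z = sp.symbols('x y z')\ne = sp.exp(x*y + z)\nP, Q, R = y*e, x*e, e\n\n# the conservative-field criterion from the theorem above\nprint([sp.simplify(sp.diff(P, y) - sp.diff(Q, x)),\n       sp.simplify(sp.diff(P, z) - sp.diff(R, x)),\n       sp.simplify(sp.diff(Q, z) - sp.diff(R, y))])   # [0, 0, 0]\n\end{verbatim}\n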
\n\section{Green's Theorem} %16.4\n\n\begin{theorem}\n  Let $C$ be the boundary of the region $R$ in the $xy$ plane oriented\n  counter-clockwise, and let $\vect{F}$ be a two-dimensional vector field. Then\n  \[\n    \int_C \vect{F}\cdot\dvar{\vect{r}}\n      =\n    \iint_R \left(\frac{\p Q}{\p x}-\frac{\p P}{\p y}\right)\dvar{A}\n  \]\n\end{theorem}\n\n          \begin{problem}\n            Evaluate $\int_C\<x^2+y,x+y\>\cdot\dvar{\vect r}$ where\n            $C$ is the boundary of the unit square oriented counter-clockwise.\n          \end{problem}\n\n          \begin{solution}\n            Green's Theorem tells us that\n            \[\n              \int_C \<x^2+y,x+y\>\cdot\dvar{\vect{r}}\n                =\n              \iint_R \left(1-1\right)\dvar{A}\n                =\n              \iint_R 0\dvar{A}\n                =\n              0\n            \]\n\n            (This also works because \(C\) is a closed curve and\n            \(\<x^2+y,x+y\>\) is conservative.)\n          \end{solution}\n\n          \begin{problem}\n            Find the work done by a force vector field $\<y,2x\>$ moving an\n            object around the\n            boundary of the triangle with vertices $(1,2)$, $(-1,-2)$, and\n            $(3,-2)$ oriented clockwise.\n          \end{problem}\n\n          \begin{solution}\n            Since \(C\) is oriented clockwise, we apply Green's Theorem to\n            \(-C\), the counter-clockwise orientation of the boundary:\n            \[\n              \int_C \<y,2x\>\cdot\dvar{\vect{r}}\n                =\n              -\int_{-C} \<y,2x\>\cdot\dvar{\vect{r}}\n                =\n              -\iint_R \left(2-1\right)\dvar{A}\n                =\n              -\iint_R 1\dvar{A}\n            \]\n\n            Since \(R\) is a triangle with base \(4\) and height \(4\),\n            we conclude that\n            \[\n              -\iint_R 1\dvar{A}\n                =\n              -\frac{1}{2}(4)(4)\n                =\n              -8\n            \]\n            (or we could set it up like a Type II double integral\n            if we like doing extra work).\n          \end{solution}\n\n\n\\end{document}", "meta": {"hexsha": "22e3c8954b1552ba53b78d8b063e8c42e639598a", "size": 20467, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packet4_1_solutions.tex", "max_stars_repo_name": "StevenClontz/teaching-2015-spring", "max_stars_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packet4_1_solutions.tex", "max_issues_repo_name": "StevenClontz/teaching-2015-spring", "max_issues_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packet4_1_solutions.tex", "max_forks_repo_name": "StevenClontz/teaching-2015-spring", "max_forks_repo_head_hexsha": "f0f09d6cc9420d643f8ea446e57cb09dd6512843", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4263888889, "max_line_length": 105, "alphanum_fraction": 0.4187228221, "num_tokens": 6496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754371026367, "lm_q2_score": 0.7606506526772884, "lm_q1q2_score": 0.5537349913651549}}
{"text": "\\documentclass[11pt, oneside]{article}   \t% use \"amsart\" instead of \"article\" for AMSLaTeX format\n\\usepackage{geometry}                \t\t% See geometry.pdf to learn the layout options. There are lots.\n\\geometry{letterpaper}                   \t\t% ... or a4paper or a5paper or ... \n%\\geometry{landscape}                \t\t% Activate for for rotated page geometry\n%\\usepackage[parfill]{parskip}    \t\t% Activate to begin paragraphs with an empty line rather than an indent\n\\usepackage{graphicx}\t\t\t\t% Use pdf, png, jpg, or eps\u00a7 with pdflatex; use eps in DVI mode\n\t\t\t\t\t\t\t\t% TeX will automatically convert eps --> pdf in pdflatex\t\t\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\n\\usepackage{url}\n\\usepackage{hyperref}\n\n\\title{Notes on continuous- and discrete-time chemical master equation kinetics}\n\\author{Vincent Voelz}\n\\date{May 4, 2018}\t\t\t\t\t\t\t% Activate to display a given date or no date\n\n\\begin{document}\n\\maketitle\n\n\n\\section*{The Chemical Master Equation}\n\n\nThe \\textbf{chemical master equation} describes the time evolution of the populations of multiple chemical species, based on their rates of interconversion:\n\\[\n\\frac{d}{dt} \\mathbf{p} = \\mathbf{K} \\mathbf{p}\n\\]\nHere, $\\mathbf{p} = (p_1, p_2, ..., p_n)$ is a column vector of state populations (i.e. chemical species), and $\\mathbf{K}$ is a \\textit{rate matrix}.  The solution of this equation gives the time evolution of the populations $\\mathbf{p}(t)$  in terms of the initial populations $\\mathbf{p}(0)$, and some properties of the rate matrix $\\mathbf{K}$.\n\n\\subsection*{A two-state master equation}\n\nAs a simple example, let's first consider the case when $n=2$. Here, there are two chemical species, $A$ and $B$, and a reaction $A \\rightarrow B$ with forward rate $k_f$ (s$^{-1}$) and backward rate $k_b$ (s$^{-1}$).\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=0.3\\textwidth]{two-state}\n%\\caption{\\label{fig:your-figure}Caption goes here.}\n\\end{center}\t\n\\end{figure}\n\nThe population flux of the species is described by the differential equations\n\\[\n\\frac{d}{dt} p_A = -k_f p_A + k_b p_B \n\\]\n\\[\n\\frac{d}{dt} p_B = +k_f p_A - k_b p_B\n\\]\nOr, more consisely,\n\\[\n\\frac{d}{dt} \\begin{pmatrix}\n\tp_A \\\\\n\tp_B \\end{pmatrix} = \\begin{pmatrix}\n\t-k_f & k_b \\\\\n\tk_f  & -k_b \\end{pmatrix} \\begin{pmatrix}\n\tp_A \\\\\n\tp_B \\end{pmatrix} \n\\]\nThis is now in the form of a master equation with $\\mathbf{p} = (p_A, p_B)$ and $\\mathbf{K} = \\begin{pmatrix}\n\t-k_f & k_b \\\\\n\tk_f  & -k_b \\end{pmatrix} $:\n\\[\n\\frac{d}{dt} \\mathbf{p} = \\mathbf{K} \\mathbf{p}\n\\]\n\nNote that the rate matrix $\\mathbf{K}$ is \\textit{singular} (i.e. \\textit{degenerate}), meaning it's not invertible because one row of this matrix is a linear combination of the other.  This means that the determinant of this matrix will be zero, and there must be at least one eigenvalue of $K$ that is zero.\n\nThis fact has physical significance.  Consider the eigenvector $\\psi$ whose corresponding eigenvalue is zero.  For this eigenvector, \n\\[\n\\frac{d}{dt} \\mathbf{p} = \\mathbf{K} \\psi = 0.\n\\]\nThus, when the populations are $(p_A, p_B) = \\psi$, the dynamics have reached a \\textit{steady-state} condition; in other words: $\\psi$ are the steady-state populations.\n\nUnder the stronger condition where \\textit{detailed balance} is obeyed (i.e. 
$p_A k_f = p_B k_b$, as must be the case for all processes at \\textit{equilibrium}), then $\\psi = \\mathbf{p}_{eq}$ must be the \\textit{equilibrium} populations.\n\n\\paragraph{Exercise 1.} Use the equilibrium constant $K_{eq} = k_f/k_b$ along with the constraint $p_A + p_B = 1$ to solve for the equilibrium populations $\\mathbf{p}_{eq}$.\n\n\\paragraph{Exercise 2.} Use the master equation to show that $\\mathbf{p}_{eq}$ is the dynamical steady-state. Show that $\\mathbf{p}_{eq}$ obeys detailed balance, and therefore also gives the equilibrium populations.\n\n\\paragraph{Exercise 3.} Find the equilibrium populations $\\mathbf{p}_{eq}$ a different way, this time by finding the eigenvalues and eigenvectors of $\\mathbf{K}$ ($\\mathbf{p}_{eq}$ will be the eigenvector whose corresponding eigenvalue is zero).\n\n\n\\subsection*{A many-state master equation}\n  \nNow consider the case where we have $n$ states, whose populations change according to simple first-order kinetic rates.  We define $k_{ji}$ to be the rate at which species $i$ converts to species $j$.\\footnote{NOTE: This nomenclature ($k_{ji}$ used for $k_{i \\rightarrow j}$) can be confusing; the reason for this is so that the elements of the rate matrix $K_{ij}$ correspond to row $i$ and column $j$.   Be aware that other authors define the rate matrix using the opposite convention, and then define the master equation in terms of $d\\mathbf{p}/dt = \\mathbf{pK}$ where $\\mathbf{p}$ is a \\textit{row} vector.}\n\nThe differential equations describing the population flux are:\n\n$$\\frac{dp_1}{dt} = -(k_{21}+k_{31}+ ... +k_{n1})p_1 + k_{12} p_2 +k_{13} p_3 + ... +  k_{1n} p_n $$\n$$\\frac{dp_2}{dt} = k_{21} p_1 - (k_{12}+k_{32}+ ... +k_{n2})p_2 + k_{23} p_3 + ... +  k_{2n} p_n $$\n$$ ... $$\n$$\\frac{dp_n}{dt} = k_{n1} p_1 + k_{n2} p_2 + ...   -(k_{1n}+k_{2n}+ ... +k_{(n-1)n})p_n$$\n\nThis can be expressed succinctly as the master equation  $\\frac{d}{dt} \\mathbf{p} = \\mathbf{K} \\mathbf{p}$ where $\\mathbf{K}$ is the rate matrix\n$$\\mathbf{K} =\n \\begin{pmatrix}\n  -\\sum_{i \\neq 1} k_{i1} & k_{12} & \\cdots & k_{1n} \\\\\n  k_{21} & -\\sum_{i \\neq 2}k_{i2} & \\cdots & k_{2n} \\\\\n  \\vdots  & \\vdots  & \\ddots & \\vdots  \\\\\n  k_{n1} & k_{n2} & \\cdots & -\\sum_{i \\neq n}k_{in}\n \\end{pmatrix}\n$$\n
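\nIn code, such a rate matrix is conveniently assembled from the pairwise rates; a small numpy sketch (mine, not part of these notes), using for concreteness the three-state rates that appear later in these notes:\n\\begin{verbatim}\nimport numpy as np\n\nn = 3\nk = np.zeros((n, n))   # k[j, i] = rate of i -> j\nk[1, 0] = 100.0        # k_21\nk[0, 1] = 200.0        # k_12\nk[2, 1] = 10.0         # k_32\nk[1, 2] = 1.0          # k_23\n\nK = k.copy()\nK[np.diag_indices(n)] = -k.sum(axis=0)   # each column sums to zero\nprint(K)\n\\end{verbatim}\n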
\n\\subsection*{Solving the chemical master equation}\n\nRecall that for a one-dimensional differential equation $dp/dt = \\lambda p$, the solution is:\n\\[\np(t) = p_0 \\exp(\\lambda t)\n\\]\nwhere $p_0$ is the initial value of $p$ at time $t=0$.   The strategy for solving the multi-state master equation will be to transform to a new coordinate system in which the rate matrix is diagonal.  In this new (``primed'') coordinate system, the master equation will read\n\\[\n\\frac{d}{dt} \\mathbf{p}' =  \\begin{pmatrix}\n  \\lambda_1 & 0 & \\cdots & 0 \\\\\n 0 & \\lambda_2 & \\cdots & 0 \\\\\n  \\vdots  & \\vdots  & \\ddots & \\vdots  \\\\\n  0 & 0 & \\cdots & \\lambda_n\n \\end{pmatrix} \\mathbf{p}' \n\\]\nThis equation describes $n$ independent first-order equations whose solutions are\n$$p_i'(t) = p_i'(0) \\exp (\\lambda_i t)$$\nwhere $i = 1, ... n$.\n \nTo perform this coordinate transformation, we must find the eigenbasis of $\\mathbf{K}$.  Let $\\mathbf{V}$ be the matrix whose columns are (right) eigenvectors of $\\mathbf{K}$. Then\n\\[\n\\mathbf{K} \\mathbf{V} = \\mathbf{V} \\mathbf{\\Lambda}\n\\]\nwhere $\\mathbf{\\Lambda}$ is the diagonal matrix containing the eigenvalues of $\\mathbf{K}$.  Let $\\mathbf{p}'(t)$ be the populations in the new coordinate system, such that\n$$\\mathbf{V}\\mathbf{p}' = \\mathbf{p}$$\nso that the master equation reads:\n\n$$\\frac{d}{dt} \\mathbf{V}\\mathbf{p}' = \\mathbf{K} \\mathbf{V}\\mathbf{p}'$$\n\nApplying $\\mathbf{V}^{-1}$ yields\n$$\\mathbf{V}^{-1} \\frac{d}{dt} \\mathbf{V}\\mathbf{p}' = \\mathbf{V}^{-1} \\mathbf{K} \\mathbf{V}\\mathbf{p}'$$\n$$\\frac{d}{dt} \\mathbf{p}'=  \\mathbf{\\Lambda} \\mathbf{p}'$$\n\nWe now have $n$ independent first-order equations whose solutions are\n$$p_i'(t) = p_i'(0) \\exp (\\lambda_i t)$$\n\nTransforming back to the original coordinate system (see Appendix for how this works), we get:\n$$\\mathbf{p}(t) = \\sum_{i=1}^n [\\psi^L_i \\cdot  \\mathbf{p}(0)] \\psi^R_i \\exp (\\lambda_i t)$$\n\nwhere $\\psi_i^R$ and $\\psi_i^L$ are the right and left eigenvectors (respectively) of $\\mathbf{K}$.  \n\n\\subsection*{The time-evolution of state populations is a superposition of relaxation eigenmodes}\n\nIn the previous section (and Appendix), we found that the complete solution is the sum\n\\[\n\\boxed{\n\\mathbf{p}(t) = \\sum_{i=1}^n [\\psi^L_i \\cdot  \\mathbf{p}(0)] \\psi^R_i \\exp (\\lambda_i t)\n}\n\\]\nThe interpretation of this solution is a \\textbf{superposition of relaxation eigenmodes}. The right eigenvectors of $\\mathbf{K}$ give the shape of each eigenmode; the projection of the initial populations onto each \\textit{left} eigenvector, $[\\psi^L_i \\cdot  \\mathbf{p}(0)]$, gives the \\textbf{amplitude} of each relaxation; and the eigenvalues $\\lambda_i$ give the \\textbf{rate} of each relaxation.\n\nWe have already seen in the two-state example that one of the eigenvalues of $\\mathbf{K}$ is $\\lambda_1 = 0$.  It can be proven (not here) that the remaining eigenvalues of $\\mathbf{K}$ are all \\textit{less} than zero, and can be ordered \n\\[\n\\lambda_1 > \\lambda_2 > ... > \\lambda_n.\n\\]\nThis means that in the limit of $t \\rightarrow \\infty$, all the relaxation eigenmodes decay to zero, except for the first eigenmode:\n\\[\n\\lim_{t \\rightarrow \\infty} \\mathbf{p}(t) =  [\\psi^L_1 \\cdot  \\mathbf{p}(0)] \\psi^R_1 \n\\]\nThe first eigenmode $\\psi_1^R$ is thus proportional to the equilibrium populations, $\\mathbf{p}_{eq}$.  In fact, it is typical to define $\\psi^R_1 = \\mathbf{p}_{eq}$, in which case we also define $\\psi^L_1 = (1,1,...,1)$ to maintain the normalization $\\psi^L_1 \\cdot \\psi^R_1 = 1$. \n\nWhat do the other eigenmodes look like?  
Let's consider a specific three-state example:\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{three-state}\n\\caption{\\label{fig:three-state}A simple three-state kinetic model}\n\\end{center}\n\\end{figure}\n\nThis model is represented by the following non-zero rates: $k_{21}$ = 100 s$^{-1}$, $k_{12}$ = 200 s$^{-1}$, $k_{32}$ = 10 s$^{-1}$, $k_{23}$ = 1 s$^{-1}$, and a resulting rate matrix of \n\\[\n\\mathbf{K} =\n \\begin{pmatrix}\n  -\\sum_{i \\neq 1} k_{i1} & k_{12} & k_{13} \\\\\n  k_{21} & -\\sum_{i \\neq 2}k_{i2} & k_{23} \\\\\n  k_{31} & k_{32} & -\\sum_{i \\neq 3}k_{i3}\n \\end{pmatrix} =    \\begin{pmatrix}\n  -100 & 200 & 0 \\\\\n 100 & -210 & 1 \\\\\n  0 & 10 & -1\n \\end{pmatrix}\\]\n\nUsing a numerical eigensolver, we find that the eigenvalues of this model are\n\\[\n\\lambda_1 = 0 \\text{ (s}^{-1}),  \\lambda_2 = -4.238 \\text{ (s}^{-1}),  \\lambda_3 = -306.8 \\text{ (s}^{-1}).  \n\\]\nThere is one \\textit{stationary} eigenmode (the equilibrium state), and two \\textit{relaxation} eigenmodes that relax on timescales of $\\tau_2 = -1/\\lambda_2$ = 0.236 s and $\\tau_3 = -1/\\lambda_3$ = 3.26 ms.\n\nThe eigenmodes can be characterized by their shape, $\\psi_i^R$, and their amplitude $[\\psi^L_i \\cdot  \\mathbf{p}(0)]$.   The eigensolver gives the right eigenvectors as:\n\n\\[\n\\psi_1^R  = \\begin{pmatrix} 0.19518  \\\\ 0.09759 \\\\ 0.97590 \\end{pmatrix}, \n\\psi_2^R  = \\begin{pmatrix} -0.54104  \\\\ -0.25906 \\\\ 0.80010 \\end{pmatrix}, \n\\psi_3^R  = \\begin{pmatrix} -0.69506  \\\\ 0.71856 \\\\ -0.02350 \\end{pmatrix}, \n\\]\nBut we have some choice in normalizing the left and right eigenvectors so that $(\\psi^L_i)^T\\psi_j^R = \\delta_{ij}$; in particular it's useful to normalize the elements of $\\psi_1^R$ so it's equal to the equilibrium populations $\\mathbf{p}_{eq}$ (making $\\psi_1^L = (1,1,1)$). We also have some choice to flip the \\textit{sign} of each pair of right and left eigenvectors, so that the amplitude $[\\psi^L_i \\cdot  \\mathbf{p}(0)]$ will always be \\textit{positive}.   That way, we can think of each eigenmode $\\psi^R_i$ as having an initial positive amplitude, and decaying away over time.  \n\nLet's assume the initial populations at time $t=0$ are given by $\\mathbf{p}(0) = (1,0,0)$, i.e. all the population is in state 1. In this case, we inspect the left eigenvectors given by the eigensolver, and flip the sign of both $\\psi_i^L$ and $\\psi_i^R$ if any of the amplitudes are negative.  We additionally scale the $\\psi_i^L$ so that  $(\\psi^L_i)^T\\psi_j^R = \\delta_{ij}$.\n\nAfter performing these modifications, we get a new set of eigenmodes and amplitudes:\n\n\\[\n\\psi_1^R  = \\begin{pmatrix} 0.1538  \\\\ 0.0769  \\\\ 0.7692  \\end{pmatrix}, \n\\psi_2^R  = \\begin{pmatrix} 0.54104  \\\\ 0.25906 \\\\ -0.80010 \\end{pmatrix}, \n\\psi_3^R  = \\begin{pmatrix} 0.69506  \\\\ -0.71856 \\\\ 0.02350 \\end{pmatrix}, \n\\]\n\n\\[\n[\\psi^L_1 \\cdot  \\mathbf{p}(0)] = 1, \n[\\psi^L_2 \\cdot  \\mathbf{p}(0)] = 0.9749, \n[\\psi^L_3 \\cdot  \\mathbf{p}(0)] = 0.4585, \n\\]\n\nThis information is visually represented in the plot below:\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{relaxation-eigenmodes}\n\\caption{\\label{fig:relaxation-eigenmodes}Relaxation eigenmodes of the three-state kinetic model.}\n\\end{center}\n\\end{figure}\n\nWith this choice of normalization for the left and right eigenvectors, the eigenmodes are more physically meaningful.  
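\nThese numbers are easy to reproduce; a numpy sketch (mine, not part of these notes):\n\\begin{verbatim}\nimport numpy as np\n\nK = np.array([[-100.0,  200.0,  0.0],\n              [ 100.0, -210.0,  1.0],\n              [   0.0,   10.0, -1.0]])\n\nevals, V = np.linalg.eig(K)\norder = np.argsort(evals)[::-1]     # sort so lambda_1 = 0 comes first\nevals, V = evals[order], V[:, order]\nprint(evals)                        # about [0, -4.238, -306.8]\n\np_eq = V[:, 0] / V[:, 0].sum()      # scale psi_1^R to populations\nprint(p_eq)                         # about [0.154, 0.077, 0.769]\n\\end{verbatim}\n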
The stationary eigenmode, $\\psi_1^R  = (0.1538, 0.0769, 0.7692)$, gives the equilibrium populations.  At equilibrium, about 77\\% of the population is in state 3.   For the non-stationary modes, the \\textit{sign structure} of the eigenmode now shows the \\textit{direction} that population is flowing over time, and over what timescale.  For example, from an inspection of the kinetic model in Figure \\ref{fig:three-state}, one would predict that if all the population was in state 1 at $t=0$, there would be a fast \\textit{pre-equilibration} between states 1 and 2.  Indeed, the fastest relaxation eigenmode ($\\tau_3 \\sim 3.26$ ms) is dominated by components 1 and 2. The positive components (blue) show states for which population is leaving on this timescale, and the negative components show states where the population is entering.\n\n\\section*{Reconciling experimental and theoretical kinetic models}\n\n\\paragraph{Separation of timescales.} A feature evident from Figure \\ref{fig:relaxation-eigenmodes} is the clear separation in timescales corresponding to each relaxation eigenmode.  An experiment with limited time-resolution may be able to detect the relaxation on timescale $\\tau_2 \\sim 0.236$ s, but not the fast relaxation at $\\tau_3 \\sim 3.26$ ms.   In that case, the experimentalist would conclude a two-state kinetic mechanism, when in reality the situation is more complicated.    This reasoning is often invoked when comparing protein folding experiments to molecular simulations of folding dynamics.  Whereas simulations may identify many microscopic intermediates, experiments may only detect two-state folding behavior.  Figure \\ref{fig:ww-eigenmodes} shows the eigenmodes and relaxation time scales for a 200-state kinetic model of WW domain protein folding dynamics. \n\n\\begin{figure}[htbp]\n\\includegraphics[width=0.8\\textwidth]{ww-eigenmodes}\n\\caption{\\label{fig:ww-eigenmodes}A 200-state kinetic model of WW domain protein folding dynamics, constructed from 200 $\\mu$s of trajectory data generated on the Anton supercomputer. In order to visualize the relaxation eigenmodes, states are projected to an RMSD-to-native reaction coordinate.  Folding kinetics experiments have shown that WW domain has a two-state folding mechanism, while the simulation-based kinetic model predicts additional fast relaxations ($\\sim$ 500 ns) due to the solvent exposure/burial of a tryptophan residue. Figure taken from:  Lane, T. J., et al. (2011). \\textit{Journal of the American Chemical Society}, 133(45), 18413--18419. \\url{http://doi.org/10.1021/ja207470h} }\n\\end{figure}\n\n\\paragraph{Experimental detection of eigenmode relaxations.} Depending on the situation, there may be problems in experimentally measuring the molecular kinetics that give rise to each eigenmode relaxation.  Even if the amplitude $[\\psi^L_i \\cdot  \\mathbf{p}(0)]$ of eigenmode $\\psi_i^R$ is significant, it may be that a given spectroscopic observable is very insensitive to population changes captured by $\\psi_i^R$.  Consider a vector $\\chi$ of experimental observables (e.g. solvent-exposure of a fluorophore) whose elements capture the value of the observable for each of the $n$ kinetic states.  In order to measure relaxation of the $i^{th}$ eigenmode, $\\chi \\cdot \\psi_i^R$ must be sufficiently large to be detected.   \n\nAnother case of experimentally ``invisible'' relaxation eigenmodes occurs when the amplitude $[\\psi^L_i \\cdot  \\mathbf{p}(0)]$ of the relaxation is small.   
The amplitude depends on the initial (out-of-equilibrium) populations, which can be very difficult to control exactly.   Protein folding kinetic experiments often use a temperature-jump to perturb the equilibrium populations, so that the relaxation back to equilibrium can be observed.  The initial $\\mathbf{p}(0)$ in this case is very similar to $\\mathbf{p}_{eq}$, which may result in low amplitudes for some relaxations.  Figure \\ref{fig:beauchamp} shows a calculation of these amplitudes for eigenmodes of a number of small ``two-state'' proteins.  While many eigenmodes may be present, only a few of them have significant amplitude to be detected experimentally.\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=0.8\\textwidth]{beauchamp}\n\\caption{\\label{fig:beauchamp} Amplitudes of eigenmodes for kinetic models of a number of small mini-proteins, constructed from  trajectory data generated on the Anton supercomputer by D. E. Shaw Research.  Figure taken from:  Beauchamp, K. A. et al. (2012). \\textit{Proceedings of the National Academy of Sciences of the United States of America}, 109(44), 17807--17813. \\url{http://doi.org/10.1073/pnas.1201810109}   }\n\\end{figure}\n\n\n\\newpage\n\n\\subsection*{Discrete-time Markov models}\n\n\nA Markov model describes population changes in discrete time intervals, using a \\textit{transition matrix} $\\mathbf{T}^{(\\tau)}$, whose elements $T_{ji}$ are the probability of transitioning from state $i$ to state $j$ in some time $\\tau$.  The time evolution of $\\mathbf{p}(t)$ can thus be obtained by iterated multiplication:\n\n$$\\mathbf{p}(t+\\tau) = \\mathbf{T}^{(\\tau)}\\mathbf{p}(t)$$\n\nFrom a practical standpoint, transition probabilities $T_{ji}$ are much more easily estimated from molecular simulations than rates. This is because with enough simulation trajectories that start from some state $i$, one can simply \\textit{count} how many trajectories have reached state $j$ after some time $\\tau$.\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{markov}\n\\caption{\\label{fig:markov}An example of a discrete-time Markov model}\n\\end{center}\n\\end{figure}\n\n\n\\subsection*{What is the relationship between $\\mathbf{T}^{(\\tau)}$ and $\\mathbf{K}$?}\n\nConsider the time-evolution of populations according to the master equation $\\frac{d}{dt}\\mathbf{p}(t) = \\mathbf{K} \\mathbf{p}$. 
After some small time interval $\\delta t \\ll \\tau$ has elapsed, the populations have changed by\n$$\\delta \\mathbf{p} = \\mathbf{K}\\mathbf{p}(t) \\delta t$$\nso that \n$$\\mathbf{p}(t+\\delta t) = \\mathbf{p}(t) + \\delta \\mathbf{p} = (\\mathbf{1} + \\mathbf{K} \\delta t)\\mathbf{p}(t)$$\n\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{tau-chopped}\n\\caption{\\label{fig:tau-chopped}}\n\\end{center}\n\\end{figure}\n\nIf the time window $\\tau$ is divided up into $N$ small intervals (such that $N \\delta t = \\tau$), iterating the above yields\n\n$$\\mathbf{p}(t+N\\delta t) = (\\mathbf{1} + \\mathbf{K} \\delta t)^N\\mathbf{p}(t)$$\n\nTaking the limit where $N \\rightarrow \\infty$ and $\\delta t \\rightarrow 0$, we get\n\n$$\\mathbf{p}(t+\\tau) = \\lim_{N \\rightarrow \\infty} (\\mathbf{1} + \\mathbf{K} \\frac{\\tau}{N})^N\\mathbf{p}(t) = \\exp(\\mathbf{K}\\tau)\\mathbf{p}(t)$$\n\nTherefore,\n$$\\mathbf{T}^{(\\tau)} = \\exp(\\mathbf{K}\\tau)$$\n\n\n\n\\subsection*{$\\mathbf{T}^{(\\tau)}$ and $\\mathbf{K}$ share eigenvectors }\n\nIf we write the matrix exponential as a power series, we can easily see that $\\mathbf{T}^{(\\tau)}$ and $\\mathbf{K}$ share eigenvectors.  \n\n$$\\mathbf{T}^{(\\tau)}\\psi_i^R = \\exp(\\mathbf{K}\\tau) \\psi_i^R$$\n$$= [1 + \\mathbf{K}\\tau + \\frac{1}{2!}\\mathbf{K}^2\\tau^2 + \\frac{1}{3!}\\mathbf{K}^3\\tau^3 + ...] \\psi_i^R$$\n$$= [1 + \\lambda_i\\tau + \\frac{1}{2!}\\lambda_i^2\\tau^2 + \\frac{1}{3!}\\lambda_i^3\\tau^3 + ...]  \\psi_i^R$$\n$$ \\mathbf{T}^{(\\tau)} \\psi_i^R = \\exp(\\lambda_i \\tau) \\psi_i^R$$\nEvidently, the eigenvalues of $ \\mathbf{T}^{(\\tau)}$, $\\mu_i$, are $\\exp(\\lambda_i \\tau)$.  \\textbf{This means we can get information about the timescales of the continuous-time dynamics (i.e. $\\lambda_i$) directly through the discrete-time transition matrix}:\n$$ \\lambda_i = \\log(\\mu_i)/\\tau$$\n\nUseful quantities to define are the so-called \\textbf{implied timescales} of each eigenmode relaxation, $\\tau_i = -1/\\lambda_i$.  We can compute this directly from the discrete-time transition matrix, using\n\n\\begin{center}\n$\\boxed{ \\tau_i = \\frac{\\tau}{-\\log(\\mu_i)}  }$\n\\end{center}\n
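\nA short numerical illustration (mine, not part of these notes) of the relation $\\mathbf{T}^{(\\tau)} = \\exp(\\mathbf{K}\\tau)$ and the implied timescales, reusing the three-state rate matrix from above with an assumed lag time $\\tau = 1$ ms:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nK = np.array([[-100.0,  200.0,  0.0],\n              [ 100.0, -210.0,  1.0],\n              [   0.0,   10.0, -1.0]])\ntau = 0.001                     # lag time in seconds (assumed)\n\nT = expm(K * tau)               # discrete-time transition matrix\nmu = np.sort(np.linalg.eigvals(T))[::-1]   # mu_1 = 1 first\n\n# implied timescales tau_i = tau / (-log mu_i) of the relaxations\nprint(-tau / np.log(mu[1:]))    # about [0.236 s, 0.00326 s]\n\\end{verbatim}\n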
\n\n\\section*{Appendix: Left and Right Eigenvectors}\n\nThe eigenvectors $\\psi$ referred to in linear algebra are usually \\textit{right} eigenvectors:\n\\[\n\\mathbf{K} \\psi^R = \\lambda \\psi^R\n\\]\nThe \\textit{left} eigenvectors are defined by\n\\[\n\\psi_i^L \\mathbf{K} = \\lambda_i \\psi_i^L\n\\]\nIn the above expression they multiply the matrix as row vectors.   The left and the right eigenvectors are related through the transpose of $\\mathbf{K}$, as you can verify in the exercise below:\n\n\\paragraph{Exercise.}  Show that the left eigenvectors are the right eigenvectors of $\\mathbf{K}^T$ and vice versa.  \\textit{Hint:} Take the transpose of both sides of the equation $\\mathbf{K} \\psi^R = \\lambda \\psi^R$.  Recall that by the definition of matrix multiplication,  $(\\mathbf{ABC})^T = \\mathbf{C}^T\\mathbf{B}^T\\mathbf{A}^T.$ \n\n\\vskip 0.6 cm\n\nNow, consider $\\mathbf{V}$ as the matrix whose columns are right eigenvectors $\\psi^R_i$ of $\\mathbf{K}$.  Since $\\mathbf{V}^{-1}\\mathbf{KV} = \\mathbf{\\Lambda}$ is a diagonal matrix, it must be equal to its own transpose:\n\\[\n\\mathbf{V}^{-1}\\mathbf{KV} = (\\mathbf{V}^{-1}\\mathbf{KV})^T = \\mathbf{V}^T\\mathbf{K}^T (\\mathbf{V}^{-1})^T\n\\]\nThe last term implies that $(\\mathbf{V}^{-1})^T$  has the \\textit{left} eigenvectors $\\psi_i^L$ as its column vectors.  In other words, $\\mathbf{V}^{-1}$ is a matrix whose \\textit{rows} are the left eigenvectors $\\psi_i^L$.   \n\nSince $\\mathbf{V}^{-1} \\mathbf{V} = \\mathbf{1}$, this also means that left and right eigenvectors must obey $\\psi_i^L \\cdot \\psi_j^R = \\delta_{ij}$.  Each set of $\\psi_i^L$ or $\\psi_i^R$ is not in general orthonormal, but through normalization, the products of left and right eigenvectors can always be made to equal 1 (when $i=j$) or 0 ($i \\neq j$). In the case that the matrix $\\mathbf{K}$ is symmetric (i.e. $\\mathbf{K}^T = \\mathbf{K}$), the left and right eigenvectors are equal to each other, and there is a single set of orthonormal eigenvectors.\n\n\\subsection*{Using left and right eigenvectors in the master equation solution}\n\nRecall that in the eigenbasis coordinate system, the solution to the master equation is a column vector $\\mathbf{p}'(t)$ whose elements are\n\\[\np_i'(t) = p_i'(0) \\exp (\\lambda_i t)\n\\]\nUsing $\\mathbf{Vp}' = \\mathbf{p}$, we can transform back to the original coordinate system:\n\\[\n\\mathbf{p}(t) = \\mathbf{Vp}'(t) = \\sum_i \\psi_i^R p_i'(0) \\exp (\\lambda_i t)\n\\]\nNext, using $\\mathbf{p}' = \\mathbf{V}^{-1}\\mathbf{p}$, we can get $p_i'(0)$ in terms of the original coordinate system.   Since $\\mathbf{V}^{-1}$ is a matrix whose rows are the left eigenvectors, $\\psi^L_i$, the $i^{th}$ element of $\\mathbf{V}^{-1}\\mathbf{p}(0)$ is\n\\[\np_i'(0) = (\\psi^L_i)^T\\mathbf{p}(0) =  \\psi^L_i \\cdot \\mathbf{p}(0) \n\\]\nThus, the complete solution of the master equation is\n\\[\n\\mathbf{p}(t) = \\sum_i [ \\psi^L_i \\cdot \\mathbf{p}(0)  ] \\psi_i^R \\exp (\\lambda_i t)\n\\]\n\n\n\\end{document}  ", "meta": {"hexsha": "b65749c2a38d09d1ae2b838e4f4eca4fe126e438", "size": 23052, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "images_for_wiki/math_background/notes-on-chemical-master-eq/Notes_on_master_equation_kinetics.tex", "max_stars_repo_name": "vvoelz/msm-best-practices", "max_stars_repo_head_hexsha": "38e689aeb32164673f27082982b95094252446d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-11T17:39:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-11T17:39:19.000Z", "max_issues_repo_path": "images_for_wiki/math_background/notes-on-chemical-master-eq/Notes_on_master_equation_kinetics.tex", "max_issues_repo_name": "vvoelz/msm-best-practices", "max_issues_repo_head_hexsha": "38e689aeb32164673f27082982b95094252446d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "images_for_wiki/math_background/notes-on-chemical-master-eq/Notes_on_master_equation_kinetics.tex", "max_forks_repo_name": "vvoelz/msm-best-practices", "max_forks_repo_head_hexsha": "38e689aeb32164673f27082982b95094252446d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.8559556787, "max_line_length": 954, "alphanum_fraction": 0.7043206663, "num_tokens": 7399, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.815232489352, "lm_q1q2_score": 0.5536885416436088}}
{"text": "\\section{Exercise 02}\n\\subsection{}\n\n\\begin{frame}\n\\frametitleTC{Problem}\n\\framesubtitleTC{This one we solve together}\n\\myPause\n Take the same system as in exercise 01, i.e.,\n \\begin{displaymath}\n  A = \\begin{bmatrix} 0.4 & 0.4 \\\\ 0.05 & 0.3 \\end{bmatrix}, \\quad\n  b = \\begin{bmatrix} 1 \\\\ 0.5 \\end{bmatrix}, \\quad\n  c = \\begin{bmatrix} 2 & -1 \\end{bmatrix}, \\quad\n  d = 0,\n \\end{displaymath}\n and turn it into a block diagram.\\\\ \\myPause\n \\vspace{5mm}Important hint: since $z$ is the one-step advance operator, $z^{-1}$ is the one-step\\\\\n \\emph{delay} operator, i.e., \n \\begin{displaymath}\n  z^{-1} v(k) = v(k-1).\n \\end{displaymath}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Solution}\n\\framesubtitleTC{}\n\\myPause\n \\only<2 >{One delay element per state variable:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig01.pdf}\n           \\end{center}}\n \\only<3 >{If the inputs are the $x_i(k+1)$, then the outputs are the $x_i(k)$:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig02.pdf}\n           \\end{center}}\n \\only<4 >{Now place the elements of \\textcolor{magenta}{$A$} and \\textcolor{blue}{$b$}:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig03.pdf}\n           \\end{center}}\n \\only<5 >{Read the state equation and wire how $x(k+1)$ depends on $x(k)$:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig04.pdf}\n           \\end{center}}\n \\only<6 >{Continue reading and wire how $x(k+1)$ depends on $u(k)$:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig05.pdf}\n           \\end{center}}\n \\only<7 >{Now place the elements of \\textcolor{green!80!black}{$c$}:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig06.pdf}\n           \\end{center}}\n \\only<8->{Read the output equation and complete the block diagram:\\\\\n           \\begin{center}\n            \\includegraphics[width=0.85\\columnwidth]{./Unit-03/img/PS01-ex02-fig07.pdf}\n           \\end{center}}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Proposed exercise 02}\n\\framesubtitleTC{Try this at home, ask questions next time if needed}\n\\myPause\n Take the same system as in the previous proposed exercise, i.e.,\n \\begin{displaymath}\n  A = \\begin{bmatrix} 0.5 & 0 \\\\ 2 & -0.3 \\end{bmatrix}, \\quad\n  b = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}, \\quad\n  c = \\begin{bmatrix} 0 & 4 \\end{bmatrix}, \\quad\n  d = 1,\n \\end{displaymath}\n and turn it into a block diagram.\\\\ \\myPause\n \\vspace{5mm}Remark: in this case you will see a \\TC{direct feedthrough} from $u$ to $y$,\\\\\n because $d \\neq 0$. 
\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Takeaways}\n\\framesubtitleTC{from exercise 02 (and the proposed one)}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Generalise the situation with compact matrix notation (thick lines are vectors):\n       \\begin{center}\n        \\includegraphics[width=0.65\\columnwidth]{./Unit-03/img/PS01-ex02-takeaway-FBscheme.pdf}\n       \\end{center}\n \\item Do you see the \\textcolor{blue!80!black}{feedback loop} inherently inside the system?\n \\item Feedback is not ``an invention for control''; it is an integral part\\\\\n       of how nature works!\n \\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Takeaways}\n\\framesubtitleTC{from exercise 02 (and the proposed one)}\n\\myPause\n \\begin{itemize}[<+-| alert@+>]\n \\item Get acquainted with the $(k,k-1)$ and the $(k+1,k)$ ways of writing a DT system,\\\\\n       and always pay attention to which symbol is what.\n \\item In particular, the $u$ in the state equation is NOT at the same instant as that\\\\\n       in the output equation, no matter how the system is written.\n \\item \\vspace{4mm}As feedback is a powerful weapon, ``slight'' approximations as taking\\\\\n       the previous input of a block and not the current, or \\emph{vice versa},\\\\\n       can in fact be devastating...\n \\item ...and without a system-theoretical analysis, this is impossible\\\\\n       to figure out before the system is run --- i.e., possibly, broken.\n \\end{itemize}\n\\end{frame}\n\n\n\n", "meta": {"hexsha": "63f63844338ba1af7335f14ee7704d764814c2f9", "size": 4214, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/Unit-03/sections/02-PS01-ex02.tex", "max_stars_repo_name": "albertoleva/PID4CSE", "max_stars_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-19T16:38:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-19T16:38:10.000Z", "max_issues_repo_path": "slides/Unit-03/sections/02-PS01-ex02.tex", "max_issues_repo_name": "albertoleva/PID4CSE", "max_issues_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/Unit-03/sections/02-PS01-ex02.tex", "max_forks_repo_name": "albertoleva/PID4CSE", "max_forks_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3831775701, "max_line_length": 99, "alphanum_fraction": 0.6580446132, "num_tokens": 1334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.679178699175393, "lm_q2_score": 0.8152324871074607, "lm_q1q2_score": 0.5536885401191655}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{graphicx}\n\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{0.5em}\n\n\\title{CPM Vehicle Parameter Identification and Model Predictive Control}\n\\author{Janis Maczijewski}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section{Vehicle Dynamics Model}\n\n\nThis is an end-to-end, grey-box model for the vehicle dynamics. The model parameters are not measured directly, but optimized to best fit the vehicle behavior.\n\n\\begin{align*}\n\\boldsymbol{x} &= [p_x, p_y, \\psi, v] \\\\\n\\boldsymbol{u} &= [f, \\delta, V] \\\\ \n\\end{align*}\n\n\n\n\\begin{center}\n\\begin{tabular}{ r | l }\n $p_x$ & IPS x-position  \\\\ \n $p_y$ & IPS y-position   \\\\ \n $\\psi$ & IPS yaw angle  \\\\ \n $v$ & Odometer Speed  \\\\ \n $f$ & Dimensionless motor command  \\\\ \n $\\delta$ & Dimensionless steering command  \\\\ \n $V$ & Battery voltage  \\\\ \n\\end{tabular}\n\\end{center}\n\n\n\\begin{align}\n\\dot{p}_x &= p_1 \\cdot v \\cdot (1+p_2 \\cdot (\\delta + p_{9})^2) \\cdot \\cos(\\psi + p_3 \\cdot (\\delta + p_{9}) + p_{10}) \\\\\n\\dot{p}_y &= p_1 \\cdot v \\cdot (1+p_2 \\cdot (\\delta + p_{9})^2) \\cdot \\sin(\\psi + p_3 \\cdot (\\delta + p_{9}) + p_{10}) \\\\\n\\dot{\\psi} &= p_4 \\cdot v \\cdot (\\delta + p_{9}) \\\\\n\\dot{v} &= p_5 \\cdot v + (p_6 + p_7 \\cdot V) \\cdot \\text{sign}(f) \\cdot |f|^{p_8}\n\\end{align}\n\nThis is a kinematic bicycle model with some added terms to account for various errors.\n\n\\begin{itemize}\n\\item $p_1$: Compensate calibration error between IPS speed and odometer speed. \n\\item $(1+p_2 \\cdot (\\delta + p_{9})^2)$: Compensate for speed differences due to different reference points between the IPS and odometer. The formulation is simplified with a second-order Taylor approximation.\n\\item $p_3$: Side slip angle (Schwimmwinkel) due to steering.\n\\item $p_{10}$: IPS Yaw calibration error.\n\\item $p_{4}$: Unit conversion for the steering state.\n\\item $p_{5}$: Speed low pass (PT1).\n\\item $p_{6}, p_{7}$: Motor strength depends on the battery voltage.\n\\item $p_{8}$: Compensate non-linear steady-state speed.\n\\item $p_{9}$: Steering misalignment correction.\n\\end{itemize}\n\n\n\\section{Model Discretization}\n\nThe model is discretized with the explicit Euler method, as follows:\n\n\\begin{align}\n\\boldsymbol{x}_{k+1} = \\boldsymbol{x}_k + \\Delta t \\cdot f(\\boldsymbol{x}_k,  \\boldsymbol{u}_k, \\boldsymbol{p}) \n\\end{align}\n\nThis discretization is chosen for its simplicity and computational efficiency.\n\nTODO justification inaccuracy: small timestep, and discretization is included in identification.\n\n\\section{Parameter Identification}\n\nOptimal parameter estimation problem for the vehicle dynamics. 
The optimization tries to find a set of model parameters that best explain/reproduce the experiment data.\n\n\begin{align}\n\underset{\boldsymbol{x}_k^j, \boldsymbol{p}}{\text{minimize}} && \sum_{j=1}^{n_{experiments}} \sum_{k=1}^{n_{timesteps}} E(\boldsymbol{x}_k^j - \hat{\boldsymbol{x}}_k^j) \\\n\text{subject to} &&  \boldsymbol{x}_{k+1}^j = \boldsymbol{x}_k^j + \Delta t \cdot f(\boldsymbol{x}_k^j,  \hat{\boldsymbol{u}}_k^j, \boldsymbol{p}) \\\n&& \quad k=1..(n_{timesteps}-1) \\\n&& \quad j=1..n_{experiments} \\\n\end{align} \n\n\n\n\begin{center}\n\begin{tabular}{ r | l }\n $\hat{\boldsymbol{x}}_k^j$ & Measured States  \\ \n $\hat{\boldsymbol{u}}_k^j$ & Measured Inputs   \\ \n $f$ & Vehicle dynamics model  \\ \n $\boldsymbol{p}$ & Model parameters  \\ \n $\Delta t$ & Constant timestep $0.02s$ \\ \n $E$ & Error penalty function \\ \n\end{tabular}\n\end{center}\n\n\textbf{Error penalty $E$}: Weighted quadratic error with model-specific extensions. The yaw error function has a period of $2\pi$, so that a full rotation does not count as an error. This is done using $\sin(\Delta\psi/2)^2$.\n\n\n\textbf{Delays}: This kind of optimization problem is not well suited for identifying the delay times (Totzeiten). The delays are instead determined in an outer loop. The delay is guessed/assumed and the measurement data is modified by appropriately shifting it in the time index $k$. This optimization problem is solved many times for combinations of delay times. The delays that create the lowest objective value are taken as the solution.\n\n\n\section{Model Predictive Control}\n\nThe MPC uses the identified model to calculate control inputs in real time. The MPC must run in an environment with narrow computational constraints. It must run on a Raspberry Pi Zero W, in less than 10ms per time step. This results in a computational budget of roughly 100000 to 500000 double precision floating point operations per time step.\n\n\n\subsection{Optimization Problem and Optimizer}\n\nThe MPC is formulated as a (mostly) unconstrained minimization problem, with a box constraint:\n\n\begin{align}\n\underset{z \in \mathbb{R}^n}{\text{minimize}} \quad & J(z) \\\n\text{subject to} \quad & -1 \leq z_i  \leq 1 \n\end{align}\n\n\nThe optimization problem is solved using a simple and lightweight method, the gradient descent method with momentum.\nThe method works by iterating the following two equations:\n\n\begin{align*}\nm^{(j)} &:= \beta  m^{(j-1)} - \nabla J(z^{(j-1)}) \\\nz^{(j)} &:= \text{clip}(z^{(j-1)} + \alpha  m^{(j)})\n\end{align*}\n\nwhere\n$z^{(j)}$ is the approximate solution, which is improved in each iteration,\n$m^{(j)}$ is the momentum term,\n$\alpha = 0.4$ and $\beta = 0.6$ are constants,\n$\text{clip}(\cdot)$ applies the function $x_i \mapsto \min(1, \max(-1, x_i))$ element-wise, and\n$\nabla J(z^{(j-1)})$ is the gradient of the objective.\n\nA constant number of iterations $j = 1 ... N$ is executed. 
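\nFor illustration only, the iteration can be sketched as follows (a minimal NumPy sketch, not the CasADi-generated code used on the vehicle; the quadratic objective and its gradient are made-up placeholders):\n\n\begin{verbatim}\nimport numpy as np\n\ndef minimize_box(grad_J, z0, alpha=0.4, beta=0.6, N=30):\n    # Gradient descent with momentum on the box -1 <= z_i <= 1.\n    # grad_J returns the gradient of the objective at z; the fixed\n    # iteration count N gives a constant computation time.\n    z = np.clip(np.asarray(z0, dtype=float), -1.0, 1.0)\n    m = np.zeros_like(z)                       # momentum term m^(0)\n    for _ in range(N):\n        m = beta * m - grad_J(z)               # m^(j) = beta m^(j-1) - grad J\n        z = np.clip(z + alpha * m, -1.0, 1.0)  # z^(j) = clip(z^(j-1) + alpha m^(j))\n    return z                                   # approximate solution z^(N)\n\n# Placeholder objective J(z) = |z - z_star|^2; the third component of the\n# unconstrained minimizer lies outside the box and ends up clipped.\nz_star = np.array([0.3, -0.8, 2.0])\nz = minimize_box(lambda z: 2.0 * (z - z_star), np.zeros(3))\n# z is approximately [0.3, -0.8, 1.0]\n\end{verbatim}\n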
\nExecuting a fixed number of iterations guarantees a constant computation time, but not convergence to the solution.\nThe approximate solution $z^{(N)}$ is, however, sufficient in practice.\n\n\subsection{Trajectory Tracking Problem}\n\nThe goal of the MPC is to make the vehicle follow a given reference trajectory.\nThe reference trajectory is given as a Cubic Hermite spline function $r(t) = [p_{x,ref}, p_{y,ref}]$.\nThis function is evaluated on an appropriate time grid $t_k$ to give the discrete reference trajectory $[p_{x,ref,k}, p_{y,ref,k}]$ where $k = 1 ... H_p$.\nThe MPC objective is defined as follows:\n\n\begin{align*}\nJ &= \sum_{k=1}^{H_p} \left[ (p_{x,k} - p_{x,ref,k})^2 + (p_{y,k} - p_{y,ref,k})^2 \right] \\\n  &+ 0.5 \sum_{k=1}^{H_u}  (f_k - f_{k-1})^2 \\\n  &+ 0.01 \sum_{m=1}^{H_u}  (\delta_{m} - \delta_{m-1})^2\n\end{align*}\n\nThe predicted states are calculated explicitly and recursively. This is known as a single shooting method.\n\n\begin{align*}\n\begin{bmatrix}\np_{x,k+1}  \\\np_{y,k+1}  \\\n\psi_{k+1}  \\\nv_{k+1}  \n\end{bmatrix}\n= \n\begin{bmatrix}\np_{x,k}  \\\np_{y,k}  \\\n\psi_{k}  \\\nv_{k}  \n\end{bmatrix}\n+\n\Delta t_{MPC} \cdot f\left(\n\begin{bmatrix}\np_{x,k}  \\\np_{y,k}  \\\n\psi_{k}  \\\nv_{k}  \n\end{bmatrix}\n,  \n\begin{bmatrix}\nf_{\boldsymbol{m}_k}  \\\n\delta_{\boldsymbol{m}_k}  \\\nV_{\boldsymbol{m}_k} \n\end{bmatrix}\n, \boldsymbol{p}\right), \quad k = 0 ... (H_p-1)\n\end{align*}\n\nwhere $f$ is the identified model from section (ref).\n\n$\boldsymbol{m}$ is an index vector that allows reuse of input vectors. In the current implementation, $\boldsymbol{m} = [1, 1, 2, 2, 3, 3]$.\n\nThe horizon lengths are $H_p = 6$ and $H_u = 3$.\n\nThe MPC uses a longer discretization time step $\Delta t_{MPC} = 0.05s$. \n\nThe inputs for the trajectory tracking problem are:\n\n\begin{itemize}\n\item $[p_{x,0}, p_{y,0}, \psi_{0}, v_{0}]$: The measured state with delay compensation\n\item $[f_{0}, \delta_{0}]$: The previous command\n\item $[p_{x,ref,k}, p_{y,ref,k}]$: The discrete reference trajectory\n\item $V_m$: The measured battery voltage\n\item $\boldsymbol{p}$: The model parameters\n\end{itemize}\n\n\nThe variables for the trajectory tracking problem are: $z = [f_1, ..., f_{H_u}, \delta_1, ..., \delta_{H_u}]$.\n\n\n\subsection{Delay Compensation}\n\nThe process of measuring the state, running the MPC controller, and applying the new command introduces a delay in the control loop. This delay is compensated by performing a short simulation after measuring the state and before running the MPC. Thus, the MPC will optimize the inputs for a future state, which matches the time at which the new commands take effect.\nThis simulation requires some command inputs. 
These are the MPC commands from the previous timesteps.\nThe simulation runs for 3 samples, or 60ms.\n\n\n\\subsection{CasADi and Code Generation}\n\nThe trajectory tracking problem and the optimizer are implemented symbolically with Matlab and CasADi.\nCasADi's code generator for C is used to obtain an efficient implementation for the Raspberry Pi.\n\n\\subsection{Finetuning}\n\nThe controller implementation collects statistical data about the trajectory tracking errors in real time.\n\nTODO play with all the MPC hyperparameters to minimize the tracking error.\n\n\n\\end{document}\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "00c63e12b8e2837758cbe6c5248141ec1a33b900", "size": 8663, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tools/vehicle_dynamics_identification_and_mpc/documentation/main.tex", "max_stars_repo_name": "Durrrr95/cpm_lab", "max_stars_repo_head_hexsha": "e2e6f4ace4ebc01e8ddd87e2f4acf13e6ffdcc67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2020-06-24T11:22:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:14:13.000Z", "max_issues_repo_path": "tools/vehicle_dynamics_identification_and_mpc/documentation/main.tex", "max_issues_repo_name": "Durrrr95/cpm_lab", "max_issues_repo_head_hexsha": "e2e6f4ace4ebc01e8ddd87e2f4acf13e6ffdcc67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-10T13:48:04.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-10T13:48:04.000Z", "max_forks_repo_path": "tools/vehicle_dynamics_identification_and_mpc/documentation/main.tex", "max_forks_repo_name": "Durrrr95/cpm_lab", "max_forks_repo_head_hexsha": "e2e6f4ace4ebc01e8ddd87e2f4acf13e6ffdcc67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-11-08T11:59:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T13:50:54.000Z", "avg_line_length": 36.7076271186, "max_line_length": 429, "alphanum_fraction": 0.7065681635, "num_tokens": 2614, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5536322258313998}}
{"text": "\\subsection{ Preparation of \\py{Dex_Mw} kDa Dextran-FITC stock solution}\n\n\\begin{enumerate}\n\\item The final concentration should be \\py{final_concentration_DexFITC} \\py{final_concentration_units_DexFITC}\n\\item We have weighted \\py{masse_weighted_DexFITC} \\py{masse_weighted_DexFITC_units} \n\\item We add \\py{final_volume_DexFITC} \\py{final_volume_DexFITC_units} of medium\n\\item Wrap the 15 mL falcon tube with aluminum foil\n\\end{enumerate}\n\n\\subsection{Working solution preparation}\n\n\\begin{pycode}\n\nFinal_volume_ = int(nb_puits * volume_per_well)\nFinal_volume_lost = int(Final_volume_ * 1.1)\nVi_ = int(working_concentration * Final_volume_lost / final_concentration_DexFITC)\ndelta_Volume = int(Final_volume_lost - Vi_)    \n\\end{pycode}\n\nWe have \\py{nb_puits} wells to test, in each we will add \\py{volume_per_well} $\\mu L$ per well. We need \\py{Final_volume_} $\\mu L$. We will prepare \\py{Final_volume_lost} $\\mu L$ of solution at \\py{working_concentration} \\py{working_concentration_units} of \\py{Dex_Mw} kDa Dextran-FITC in medium. \n\n\\begin{enumerate}\n\\item We take \\py{delta_Volume} $\\mu L$ of medium\n\\item Then we add \\py{Vi_} $\\mu L$ of the \\py{Dex_Mw} kDa Dextran-FITC at \\py{final_concentration_DexFITC} \\py{final_concentration_units_DexFITC}\n\\item Protect the falcon from light abd keep it in fridge\n\\end{enumerate}", "meta": {"hexsha": "93915cb20da79da632604493262c21b8d7916ef4", "size": 1323, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "BioTex/protocole/english/Solution_stock_DexFITC.tex", "max_stars_repo_name": "Hatoris/BioTex", "max_stars_repo_head_hexsha": "dee83a6aa939190f971cfcf64b0cfff01b48055f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "BioTex/protocole/english/Solution_stock_DexFITC.tex", "max_issues_repo_name": "Hatoris/BioTex", "max_issues_repo_head_hexsha": "dee83a6aa939190f971cfcf64b0cfff01b48055f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BioTex/protocole/english/Solution_stock_DexFITC.tex", "max_forks_repo_name": "Hatoris/BioTex", "max_forks_repo_head_hexsha": "dee83a6aa939190f971cfcf64b0cfff01b48055f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.8846153846, "max_line_length": 297, "alphanum_fraction": 0.7876039305, "num_tokens": 418, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.803173791645582, "lm_q2_score": 0.6893056295505783, "lm_q1q2_score": 0.5536322160887829}}
{"text": "\\section{\\textbf{ArraySumTest}}\n\n\\subsection{Particular Case (problem)}\nThe problem we want to solve is that of summing up all the elements of\nan array; but of course, we want to do it in parallel.\n\n\\subsection{Solution}\nThe solution uses the divide and conquer approach, creating a tree of\ntasks (\\C{Future} from java concurrency library); which represent\nthe processing we do while splitting the arrays in two halves and summing up\neach half recursively. Base case are arrays of a single element. Below\nthe heart of the solution, the recursive function which create new tasks\nfor subarrays: \\\\\n\n\\begin{lstlisting}[style=numbers]\n    public Integer call() throws InterruptedException, ExecutionException {\n      if (size == 1) {\n        return a[start];\n      } else {\n        int lhsSize = size / 2;\n        int rhsStart = lhsSize + 1;\n        int rhsSize = size - lhsSize;\n        Future<Integer> lhs = exec.submit(new SumTask(a, start, lhsSize));\n        Future<Integer> rhs = exec.submit(new SumTask(a, rhsStart, rhsSize));\n        return rhs.get() + lhs.get();\n      }\n    }\n\\end{lstlisting}\n\\hfill\n\n\\subsection{Experiment Description}\nThe test merely consists in trying the algorithm with a couple of\narrays of consecutive positive integers, of respective sizes 3 and\n4. The test validates that the expected sum is 6 and 10 respectively.\n\n\\subsection{Observations and Interpretations}\nThe test presents failures like the one below, where the array of 3\nelements gave as sum 7 instead of 6: \\\\\n\n\\begin{verbatim}\n[oraadm@gdlaa008 orig]$ junit steal.ArraySumTest\n.F\nTime: 0.009\nThere was 1 failure:\n1) testRun(steal.ArraySumTest)junit.framework.AssertionFailedError:\n                expected:<6> but was:<7>\nat steal.ArraySumTest.testRun(ArraySumTest.java:31)\nat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat\n                \nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\nat\n                \nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\nFAILURES!!!\nTests run: 1,  Failures: 1,  Errors: 0\n\\end{verbatim}\n\\hfill\n\nThe problem lied on the program \\C{ArraySum.java}, on the assignment\nof variable \\C{rhsStart}, which represents the starting point of the\nright subarray. While the original program had $lhsSize + 1$\nas assignment, the proper expression was $lhsSize + start$ (so we take\ninto account both the relative start of the initial subarray we got,\nas well as the length of the left sub-subarray). 
After this fix, the test\nworked fine on both 2-core and 24-core machines.\n\n\n\n", "meta": {"hexsha": "0bc8cd6a27c3f765f5a6d19c812a6d2e3fd3583c", "size": 2567, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/sections/ArraySumTest.tex", "max_stars_repo_name": "rzavalet/multiprocessor", "max_stars_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Report/sections/ArraySumTest.tex", "max_issues_repo_name": "rzavalet/multiprocessor", "max_issues_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/sections/ArraySumTest.tex", "max_forks_repo_name": "rzavalet/multiprocessor", "max_forks_repo_head_hexsha": "f27d4dd6f44172bb6c910552e50107838d653f2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.1549295775, "max_line_length": 85, "alphanum_fraction": 0.7409427347, "num_tokens": 627, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.689305616785446, "lm_q2_score": 0.8031737987125612, "lm_q1q2_score": 0.5536322107074717}}
{"text": "\\section{Correctness proof of Algorithms~\\ref{alg:findisointersection}, \\ref{alg:getautomorphisms} and \\ref{alg:getsingleisomorphism}}\n\n\\todo[inline]{rewrite \\& check if all references to algorithm contents are still correct}\n\nHere, we prove that \\findisomorphismsetintersection~is correct, for which we need the following remark and lemma.\n\n\\begin{remark}\n    \\label{remark:correctness-isomorphism-intersection}\n    If $I$ is an \\isomorphismset, then denote by $\\pi_I$ the single isomorphism and by $G_I$ the \\automorphismgenerators~object it contains.\n    Furthermore, denote by $\\langle G\\rangle$ the automorphism group generated by $G$.\n    Upon input $I_A, I_B$ and output $I$, the correctness criterion of the function \\findisomorphismsetintersection~is equivalent to \n    \\[\n        \\pi_{I} \\cdot \\langle G \\rangle = \\left( \\pi_{I_A} \\cdot \\langle G_{I_A} \\rangle \\right) \\cap \\left( \\pi_{I_B} \\cdot \\langle G_{I_B} \\rangle \\right)\n    \\]\n    where we have denoted $\\pi \\cdot G = \\{\\pi \\cdot g \\mid g \\in G\\}$.\n\\end{remark}\n\n\n\\begin{lemma}\n    \\label{thm:coset-intersection-nonempty}\n    Let $G$ be a group with subgroups $G_A$ and $G_B$, and let $\\pi_A, \\pi_B \\in G$.\n    The intersection $(\\pi_A \\cdot G_A)\\cap (\\pi_B \\cdot G_B)$ is nonempty if and only if $\\pi_A^{-1} \\pi_B$ can be written as $\\pi_A^{-1}\\pi_B=g_A \\cdot g_B$, for some $g_A \\in G_A$ and $g_B \\in G_B$.\n\\end{lemma}\n\\begin{proof}\n    \\todo[inline]{fill (with proof that Lieuwe wrote before)}\n\\end{proof}\n\nNow we prove that the algorithm \\findisomorphismsetintersection~is correct.\n\n\\begin{lemma}\n    The function \\findisomorphismsetintersection~satisfies its correctness criterion given in \n    \\cref{remark:correctness-isomorphism-intersection}.\n\\end{lemma}\n\\begin{proof}\n    First, we start by using the well-known fact that the intersection of two cosets $\\pi_{I_A} \\cdot \\langle G_{I_A} \\rangle$ and $\\pi_{I_B} \\cdot \\langle G_{I_B} \\rangle$ is either empty, or it is a coset of $\\langle G_{I_A} \\rangle \\cap \\langle G_{I_B} \\rangle$.\n    Hence, the algorithm is correct if \n    \\begin{enumerate}[(a)]\n        \\item the \\automorphismgenerators~object it outputs is $\\langle G_{I_A} \\rangle \\cap \\langle G_{I_B} \\rangle$;\n        \\item the single isormorphism it outputs is an element of $\\left( \\pi_{I_A} \\cdot \\langle G_{I_A} \\rangle \\right) \\cap \\left( \\pi_{I_B} \\cdot \\langle G_{I_B} \\rangle \\right)$, or \\none~if this intersection is empty.\n    \\end{enumerate}\n\n    We first prove (a).\n    The central insight is that the removal of elements containing $X$ or $Y$ from $H_B^{\\textnormal{echelon}}$ in step 7 does not alter the intersection we aim for. 
That is:\n    \begin{equation}\n        \label{eq:noXY}\n    \langle H_A \rangle \cap \langle H_B^{\textnormal{echelon}} \rangle = \n    \langle H_A \rangle \cap \langle H^{\textnormal{noXY}}_B \rangle\n        .\n    \end{equation}\n    The reason why this holds is that elements of $H_A$ do not contain $X$ or $Y$, while for each $k \in \{1, 2, \dots n\}$, there is at most a single operator in $H_B^{\textnormal{echelon}}$ which contains a pivot at position $k$ which is either $X$ or $Y$; therefore, no element in $\langle H_A\rangle \cap \langle H_B^{\textnormal{echelon}}\rangle$ will have a decomposition in $H_B^{\textnormal{echelon}}$ containing operators containing $X$ or $Y$.\n    \todo[inline]{argument is simpler than I manage to put into words; add picture to clarify the argument?}\n\n    Let us now prove (a).\n    We use the notation $U^{\dagger} A U = \{U^{\dagger} a U | a \in A\}$ where $A$ is an arbitrary set.\n    Following the notation from the algorithm,\n    \begin{eqnarray*}\n        \langle G\rangle\n        &=&\n        U^{\dagger} \left(\n        \langle H_A \rangle \cap \langle H_B^{\textnormal{noXY}}\rangle\n        \right) U\n        \\\n        &\stackrel{\textnormal{eq.~\eqref{eq:noXY}}}{=}&\n        U^{\dagger} \left(\n        \langle H_A \rangle \cap \langle H_B^{\textnormal{echelon}}\rangle\n        \right) U\n        \\\n        &=&\n        U^{\dagger} \left(\n        \langle H_A \rangle \cap \langle H_B\rangle\n        \right) U\n        \\\n        &=&\n        U^{\dagger} \left(\n        \langle U G_A U^{\dagger} \rangle \cap \langle U G_B U^{\dagger}\rangle\n        \right) U\n        \\\n        &\stackrel{*}{=}&\n        U^{\dagger} U \left(\n        \langle G_A \rangle \cap \langle G_B\rangle\n        \right) U^{\dagger} U\n        \\\n        &=&\n        \langle G_A \rangle \cap \langle G_B\rangle\n    \end{eqnarray*}\n    where $\stackrel{*}{=}$ holds because the map $x \mapsto Ux U^{\dagger}$ is a group isomorphism.\n\n    We prove (b) by showing the following two statements: (b1) if the single isomorphism that is outputted by the algorithm is not \none, then it is an element of the intersection of $\pi_{I_A} \cdot \langle G_{I_A} \rangle$ and $\pi_{I_B} \cdot \langle G_{I_B} \rangle$, and (b2) if the outputted isomorphism is $\none$, then this intersection is empty.\n\n    Proving (b1) is straightforward: from step 8 of the algorithm we see that \n    \begin{equation}\n        \label{eq:tau-prime-1}\n   \tau' = \prod_{j} h_j^A \cdot \prod_{k} h_k^B\n    \end{equation}\n    where $h_j^A \in H_A$ and $h_k^B\in H_{B}^{\textnormal{noXY}}$,\n    while from steps 1--5 we observe that $\tau'$ can also be written as\n    \begin{equation}\n        \label{eq:tau-prime-2}\n        \tau' = U\pi_A^{-1} U^{\dagger} U \pi_B U^{\dagger} \cdot \prod_{\ell} h_{\ell}^{\textnormal{echelon}}\n    \end{equation}\n for $h_{\ell}^{\textnormal{echelon}} \in \langle H_B^{\textnormal{echelon}} \rangle = \langle H_B \rangle$.\n    Combining eqs.~\eqref{eq:tau-prime-1} and~\eqref{eq:tau-prime-2} and reshuffling yields\n    \begin{equation}\n        \label{eq:tau-prime-3}\n        U \pi_B U^{\dagger} \cdot \prod_{\ell} h_{\ell}^{\textnormal{echelon}}  \cdot \prod_{k} \left(h_k^B\right)^{-1}\n        =\n        U\pi_A U^{\dagger} \cdot \prod_{j} h_j^A\n    \end{equation}\n    Applying the group isomorphism $x \mapsto 
U^{\dagger} x U$ to both sides of eq.~\eqref{eq:tau-prime-3}, we find\n    \[\n        \pi_B \cdot \prod_{\ell} U^{\dagger} h_{\ell}^{\textnormal{echelon}} U  \cdot \prod_{k} U^{\dagger} \left(h_k^B\right)^{-1} U\n        =\n        \pi_A \cdot \prod_{j} U^{\dagger} h_j^A U\n    \]\n    Since $H_A = \{UgU^{\dagger} | g \in G_A\}$ and similarly for $H_B$, we infer that\n    \[\n        \pi_A \cdot \prod_{j} U^{\dagger} h_j^A U\n    \]\n    is an element of both $\pi_{I_A} \cdot \langle G_{I_A} \rangle$ and $\pi_{I_B} \cdot \langle G_{I_B} \rangle$, which proves (b1).\n\n    For proving (b2), we note that the algorithm outputs \none~precisely if $\tau'$ cannot be written as $h_A \cdot h_B$ where $h_{\Box} \in \langle H_{\Box} \rangle$.\n    To see this, consider the two steps in the algorithm where \none~is returned:\n    \begin{itemize}\n        \item in step 6: $\tau'$ contains at least one $X$ or $Y$ which has not been eliminated in step 5, while $H_A$ only contains $Z$ and $\unit_2$, so $\tau'\notin \langle H_A \rangle \cdot \langle H_B \rangle$.\todo[inline]{maybe needs more rigorous proof}\n        \item in step 8, where the Zassenhaus algorithm finds a decomposition of $\tau'$ of the form $h_A \cdot h_B$ only if it exists.\n    \end{itemize}\nIt follows from the fact that $\tau'$ is not an element of $\langle H_A \rangle \cdot \langle H_B \rangle$ that $\tau$ is neither, and using the fact that $x\mapsto UxU^{\dagger}$ is a group isomorphism, we find by definition of $\tau$ that $\pi_A^{-1} \pi_B \notin \langle G_A \rangle \cdot \langle G_B\rangle$.\n    It now follows from Lemma~\ref{thm:coset-intersection-nonempty} that the intersection between $\pi_A\cdot \langle G_A \rangle$ and $\pi_B\cdot \langle G_B \rangle$ is empty, which concludes the proof for (b2).\n\end{proof}\n\n\n\n\n\n\begin{lemma}[Conditional correctness of function \getautomorphisms]\n    \label{lemma:condition-correctness-1}\n    The function \getautomorphisms~satisfies its correctness criteria for single-qubit states.\n    Moreover, it also does so for $n$-qubit states for $n>1$, conditioned on the correctness of the function \getsingleisomorphism~for quantum states on strictly fewer than $n$ qubits.\n\end{lemma}\n\begin{proof}\n    If two states are given as input, then the correctness of the algorithm is an immediate consequence of the correctness of the single-state-input case and the correctness of the algorithm \findisomorphismsetintersection.\n    So we only need to prove the case where a single state is given as input.\n    We use induction on the number of qubits $n$.\n    The case $n=1$ is a brute-force search over the (six) single-qubit stabilizer states and is thus correct.\n    For the case $n>1$, we show the following two statements:\n\n    (i) each operator that is outputted by \getautomorphisms~is an automorphism of $\ket{\phi}$;\n\n    (ii) each automorphism of $\ket{\phi}$ can be written as a product of operators outputted by \getautomorphisms.\n\n    Showing (i) is straightforward, assuming that \getsingleisomorphism~and \getautomorphisms~are both correct for $(n-1)$-qubit states.\n    For example, for the operators of the form $\unit_2\otimes g$ with $g$ an automorphism of both $\ket{\phi_0}$ and $\ket{\phi_1}$, we write\n    \begin{eqnarray*}\n        \unit_2 \otimes g \ket{\phi}\n        =\n        \alpha_0 \ket{0} \otimes g\ket{\phi_0} + 
\alpha_1 \ket{1} \otimes g\ket{\phi_1}\n        =\n        \alpha_0 \ket{0} \otimes \ket{\phi_0} + \alpha_1 \ket{1} \otimes \ket{\phi_1}\n        = \ket{\phi}\n        .\n    \end{eqnarray*}\n    The other three cases are similar.\n\n    Regarding (ii), we first note that if $\unit_2 \otimes g$ is an automorphism of $\ket{\phi}$, then $\n    \alpha_0 \ket{\phi_0} =\n    \left(\bra{0}\otimes \unit_{2^{n-1}}\right) \left(\unit_2 \otimes g \ket{\phi}\right) =\n    g (\alpha_0 \ket{\phi_0})\n    $\n    and hence $g$ must be an automorphism of $\ket{\phi_0}$, while following a similar argument $g$ is also an automorphism of $\ket{\phi_1}$.\n    This shows that all automorphisms of the form $\unit_2 \otimes g$ are in the output of \getautomorphisms.\n    Regarding automorphisms of the form $X\otimes a$ with $a$ an $(n-1)$-qubit Pauli operator, it is not hard to see that $a$ is an isomorphism mapping $\alpha_0 \ket{\phi_0}$ to $\alpha_1 \ket{\phi_1}$ and vice versa.\n    Now note that $X\otimes a = (X\otimes s)(\unit_2 \otimes s a)$ where $s$ is as in the algorithm.\n    Since $s$ maps $\alpha_0 \ket{\phi_0}$ to $\alpha_1 \ket{\phi_1}$, we see that $sa$ is an automorphism of both $\ket{\phi_0}$ and $\ket{\phi_1}$, so that $X\otimes a$ can be written as a product of elements outputted by \getautomorphisms.\n    The arguments for the remaining two cases, i.e. automorphisms of the form $Z\otimes a$ and $Y\otimes a$, are similar.\n    This finishes the proof for (ii).\n\end{proof}\n\n\n\begin{lemma}[Conditional correctness of function \getsingleisomorphism]\n    \label{lemma:condition-correctness-2}\n    The function \getsingleisomorphism~satisfies its correctness criteria for $0$-qubit states.\n    Moreover, it also does so for $n$-qubit states for $n>0$, conditioned on the correctness of the function \getautomorphisms~for quantum states on strictly fewer than $n$ qubits.\n    \todo[inline]{Readers might find $0$-qubit states confusing... The reason we chose this is that starting the recursion at $0$ qubits gives a more concise formulation than starting it at $1$ qubit}\n\end{lemma}\n\begin{proof}\n    If the inputs are $0$-qubit states, then correctness is immediate, both for single- and two-tuple input.\n    For the case of $n$-qubit input states with $n>0$, correctness of the two-tuple case follows immediately from the correctness of the functions \getautomorphisms~(on $n$ qubits) and \findisomorphismsetintersection.\n    The case $n>0$ for the single-tuple input case is also correct since the algorithm is a brute-force search over the four single-qubit Pauli operators on the most-significant qubit.\n\end{proof}\n\nBy induction on the number of qubits, the correctness of the algorithm \getsingleisomorphism~now follows straightforwardly from Lemmas~\ref{lemma:condition-correctness-1} and~\ref{lemma:condition-correctness-2}.\n\n\begin{corollary}[Unconditional correctness of function \getsingleisomorphism]\n    The function \getsingleisomorphism~satisfies its correctness criteria.\n\end{corollary}\n\n\n\n\section{Correctness proof of \textsc{GetCanonicalLabels}}\n\label{sec:proof-getcanonicallabels}\n\n\todo[inline]{TODO}\n\n\n\begin{lemma}\n    \label{lemma:automorphism-phase}\n    If $\lambda P$ is an element of a Pauli automorphism group, then $\lambda \in \{\pm 1\}$.\n\end{lemma}\n\begin{proof}\n    Let $\lambda P$ be an element of the automorphism group of a state $\ket{\phi}$.\n    Then $\lambda^2 \ket{\phi} = 
(\\lambda P)^2 \\ket{\\phi} = \\lambda P (\\lambda P \\ket{\\phi}) = \\lambda P \\ket{\\phi} = \\ket{\\phi}$, so $\\lambda^2 = 1$, so $\\lambda \\in \\{\\pm 1\\}$.\n\\end{proof}\n\n\\begin{lemma}\n    Let $G_0, G_1$ be Pauli automorphism groups and $G = \\langle G_0 \\cup G_1 \\rangle$.\n    If $g\\in G$, then $\\pm i g \\notin G$.\n\\end{lemma}\n\\begin{proof}\n    To reach a contradiction, assume there exists a $g\\in G$ for which $\\pm i g \\in G$ also.\n    Since Pauli LIMs commute or anticommute, we can decompose both as $g = (-1)^x g_0 g_1$ and $\\pm i g = (-1)^y h_0 h_1$ for some $x, y \\in \\{0, 1\\}$ and $g_0, h_0\\in G_0$ and $g_1, h_1 \\in G_1$.\n    Combining yields $\\pm i (-1)^x g_0 g_1 = (-1)^y h_0 h_1$, which we rewrite as $\\pm i (-1)^{x+y} \\underbrace{g_1 h_1^{-1}}_{\\in G_1} = \\underbrace{g_0^{-1} h_0}_{\\in G_0}$. \n    Squaring both sides yields the contradiction $-1 \\cdot \\id[2] = \\id[2]$ where we used that automorphisms square to $\\id[2]$ (corollary of \\autoref{lemma:automorphism-phase}).\n\\end{proof}\n\n\\begin{corollary}\n    Let $G_0, G_1$ be Pauli automorphism groups and $G = \\langle G_0 \\cup G_1 \\rangle$.\n    If $\\lambda P \\in G$ and $\\mu P \\in G$, then $\\lambda = \\pm \\mu$.\n\\end{corollary}\n", "meta": {"hexsha": "b035702d9b7657995994362b7f1205cf34b883d9", "size": 13902, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Src/CS/sections/correctness_proofs_algorithms.tex", "max_stars_repo_name": "Katafotic/latex_parsing", "max_stars_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Src/CS/sections/correctness_proofs_algorithms.tex", "max_issues_repo_name": "Katafotic/latex_parsing", "max_issues_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Src/CS/sections/correctness_proofs_algorithms.tex", "max_forks_repo_name": "Katafotic/latex_parsing", "max_forks_repo_head_hexsha": "f00a9547b2034f4592e732a382cdbd34e11e13db", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 59.4102564103, "max_line_length": 453, "alphanum_fraction": 0.6752985182, "num_tokens": 4454, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8031738057795402, "lm_q2_score": 0.6893056104028797, "lm_q1q2_score": 0.5536322104524699}}
{"text": "\\paragraph{}\nThis section details the proposed error indicators for adaptive scaled boundary finite element analysis.\nIn comparison with the works in \\citep{NME:NME439}, no stress recovery is required.\nThis reduces the computational effort and leads to an efficient adaptive analysis.\nThe error estimation is directly invoked from the scaled boundary finite element solutions.\n\n%   ----    %\n\\subsection{Mesh Size}\n\\paragraph{}\nThe area of a subdomain will apparently influences the accuracy of the result.\nGenerally speaking, a larger subdomain tends to lead to a higher error.\nThe area of any polygon in Fig.~\\ref{adap_fig:ei_polygon} can be calculated as \n%\n\\begin{equation}\n    S = \\frac{1}{2}\n        \\sum_{k=1}\n        \\left(\n            x_k y_{k+1} - x_{k+1} y_k\n        \\right)\n\\end{equation}\n%\n\\begin{figure}[h!]\n    \\centering\n    \\scalebox{1}{\n        \\includegraphics{adaptivity/images/adap_ei_polygon.eps}\n    }\n    \\caption{A polygon with $n$ vertexes}\n    \\label{adap_fig:ei_polygon}\n\\end{figure}\n%\n%   ----    %\n\\subsection{Mesh Quality}\n\\paragraph{}\nMesh quality is another important factor that influences the accuracy.\nIn SBFEM, the mesh quality is highly related to the minimal angle formed by the intersecting lines connected to the scaling center and the adjacent polygon vertexes.\nAn extremely smaller angel as shown in Fig.~\\ref{adap_fig:ei_mesh_quality} may raises numerical stability issue and hence decreases the accuracy of the result.\n\n\\begin{figure}[h!]\n    \\begin{subfigure}[b]{0.5\\linewidth}\n        \\centering\n        \\scalebox{1}{\n            \\includegraphics{adaptivity/images/adap_ei_mesh_quality_good.eps}\n        }\n        \\caption{Acceptable}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{0.5\\linewidth}\n        \\centering\n        \\scalebox{1}{\n            \\includegraphics{adaptivity/images/adap_ei_mesh_quality_poor.eps}\n        }\n        \\caption{Unacceptable}\n    \\end{subfigure}\n    \\caption[Mesh quality in SBFEM]{Scaling center (\n        \\tikz[baseline=-0.5ex]\\draw[black,fill=black] (0,0) circle (0.7ex);\n    ), minimal angle $\\alpha$}\n    \\label{adap_fig:ei_mesh_quality}\n\\end{figure}\n%\n%   ----    %\n%\n\\subsection{Eigenvalue in SBFEM}\n\\paragraph{}\nFrom Eq.~\\ref{lr_eq:sbfem_displacement_field} ,the displacement SBFEM can be calculated as\n\\begin{equation}\n    u(\\xi) = c_1 \\xi ^{-\\lambda_1} \\phi_u^1\n            +c_2 \\xi ^{-\\lambda_2} \\phi_u^2\n            +\\dots\n\\label{adap_eq:ei_terms}\n\\end{equation}\nwhere $\\phi_u^i$ stands for the eigenvector corresponding to the $i$th eigenvalue in the eigenvalue matrix $\\Lambda^{n}$ in Eq.~\\ref{lr_eq:sbfem_eigen_decomp}.\nThe contribution of each mode is represented by every individual term in Eq.~\\ref{adap_eq:ei_terms} \\citep{Deeks2002}.\nOn the other hand, after the displacement solution is calculated on the boundary nodes by Eq.~\\ref{lr_eq:sbfem_general_sol_disp} with $\\xi=1$, displacement solution in circumferential direction then is interpolated by the help of the $p$-th order shape function $R(\\eta)$ in Eq.~\\ref{lr_eq:sbfem_displacement_field}.\nAs a result, terms corresponding to the eigenvalue $\\lambda_i \\leq p$ can be exactly interpolated by the shape function.\nThese terms can be regarded as exact.\nHowever, therms corresponding to the eigenvalue $\\lambda_i > p$ indicates the shape function is not capable to interpolate the displacement exactly and thus shall be taken as error 
terms.\nConsequently, the displacement on the boundary $u(\xi=1)$ can be expressed with exact terms and approximation terms as follows:\n\begin{equation}\n    u_b = u_e (\xi=1) + u_a(\xi=1)\n\end{equation}\nwhere the exact terms for the displacement on the boundary can be expressed as\n\begin{equation}\n    u_e (\xi=1) = \sum c_i \phi_u^i \text{   for all  } \lambda_i \leq p\n\end{equation}\nand similarly, the approximation terms can be written as\n\begin{equation}\n    u_a (\xi=1) = \sum c_i \phi_u^i \text{   for all  } \lambda_i > p\n\end{equation}\nFollowing the same logic, nodal forces on the boundary can be expressed as\n\begin{equation}\n    \begin{aligned}\n        q_b &= q_e(\xi=1) + q_a(\xi=1) \\\n        q_e(\xi=1) &= \sum c_i \phi_q^i \text{   for all  } \lambda_i \leq p \\\n        q_a(\xi=1) &= \sum c_j \phi_q^j \text{   for all  } \lambda_j > p \\\n    \end{aligned}\n\end{equation}\n% energy\nThe energy can then be calculated as\n\begin{equation}\n    \begin{aligned}\n        U &= \frac{1}{2} u_b^T q_b = U_e + U_a \\\n        U_e &= \frac{1}{2} u_e^T q_e\\\n        U_a &= \frac{1}{2} \left( u_e^T q_a + u_a^T q_e + u_a^T q_a \right)\n    \end{aligned}\n\end{equation}\nThe relative error for the energy can be calculated as $\frac{U_a}{U}$; the same logic applies for displacement and stress. \n\paragraph{}\nIn order to obtain satisfactory solutions within a certain accuracy, the contribution of $U_a$ towards $U$ should be minimized and $U_a$ should also be distributed evenly to all cells. This is, in fact, similar to the relative energy norm error used in the literature \citep{NME:NME1620280411}.\n", "meta": {"hexsha": "dfda987c097cff5b4eab8b85650a1a164ff35c60", "size": 4924, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "adaptivity/error_indicator.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "adaptivity/error_indicator.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "adaptivity/error_indicator.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5925925926, "max_line_length": 316, "alphanum_fraction": 0.7073517465, "num_tokens": 1392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7122321781307374, "lm_q1q2_score": 0.5536179658364656}}
{"text": "% 01smoothmanifolds.tex\n% Fund Science! & Help Ernest finish his Physics Research! : quantum super-A-polynomials - a thesis by Ernest Yeung\n%                                               \n% http://igg.me/at/ernestyalumni2014                                                                             \n%                                                              \n% Facebook     : ernestyalumni  \n% github       : ernestyalumni                                                                     \n% gmail        : ernestyalumni                                                                     \n% google       : ernestyalumni                                                                                   \n% linkedin     : ernestyalumni                                                                             \n% tumblr       : ernestyalumni                                                               \n% twitter      : ernestyalumni                                                             \n% youtube      : ernestyalumni                                                                \n% indiegogo    : ernestyalumni                                                                        \n%\n% Ernest Yeung was supported by Mr. and Mrs. C.W. Yeung, Prof. Robert A. Rosenstone, Michael Drown, Arvid Kingl, Mr. and Mrs. Valerie Cheng, and the Foundation for Polish Sciences, Warsaw University.                  \n \n\n\n\n\n\n\n\\subsection*{Topological Manifolds }\n\n$M$ topological manifold of $\\text{dim}{n}$, or topological $n$-manifold \n\\begin{itemize}\n\\item locally Euclidean, $\\text{dim}{n}$ - $\\forall \\, p \\in M$, $\\exists \\, $ neighborhood $U \\equiv U_p$ s.t. $U_p \\approx^{\\text{homeo}} \\text{ open } V \\subset \\mathbb{R}^n$ \n\\end{itemize}\n\n\n\\exercisehead{1.1} Recall, $M$ locally Euclidean $\\dim{n}$ $\\forall \\, p \\in M$, $\\exists \\,$ neighborhood homeomorphic to open subset. \\\\\nopen subset $\\mathcal{O} \\subseteq \\mathbb{R}^n$ homeomorphic to open ball and $\\mathcal{O}$ homeomorphic is $\\mathbb{R}^n$ since $\\mathbb{R}^n$ homeomorphic to open ball.  \n\nTo see this explicitly, that open ball $B_{\\epsilon}(x_0) \\subseteq \\mathbb{R}^n $ homeomorphic to $\\mathbb{R}^n$\n\nConsider $\\begin{aligned} & \\quad \\\\ \n  & T: B_{\\epsilon}(x_0) \\to \\mathbb{R}^n \\\\ \n  & T(B_{\\epsilon}(x_0)) = B_{\\epsilon}(0) \\\\ \n  & T(x) = x- x_0 \\end{aligned}$  \\\\\n$T^{-1}(x) = x+x_0$.  Clearly $T$ homeomorphism.  \n\nConsider $\\begin{aligned} & \\quad \\\\\n& \\lambda : \\mathbb{R}^n \\to \\mathbb{R}^n \\\\\n  & \\lambda(x) = \\lambda x \\\\\n  & \\lambda^{-1}(x) = \\frac{1}{ \\lambda } x \\end{aligned}$ for $\\lambda >0$.  Clearly $\\lambda$ homeomorphism.  \n\nConsider $B \\equiv B_1(0)$.  \n\nConsider $\\begin{aligned} & \\quad \\\\ \n  & g: \\mathbb{R}^n \\to \\mathbb{R}^n \\\\\n  & g(x) = \\frac{x}{ 1 + |x| } \\end{aligned}$ \n\n$g$ cont.  \n\nLet $\\begin{aligned} & \\quad \\\\\n  & f:B \\to \\mathbb{R}^n \\\\\n  & f(x) = \\frac{x}{ 1 - |x| } \\end{aligned} $ \n\nHow was $f$ guessed at?\n\n$|g(x) | = \\left| \\frac{x}{1 + |x| } \\right| = \\frac{r}{ 1 + r}$ .  
Note $0 \leq |g(x) | <1$ \\\nSo $g(\mathbb{R}^n) = B$\n\nFor $|g(x)| = |y|$, $y \in B$, $|y|(1+r) = r$, \, $r = \frac{ |y| }{ 1 - |y| }$\n\nThis is well-defined, since $0 \leq |y| < 1$ and $0 < 1 - |y| \leq 1$\n\[\n\begin{aligned}\n  & gf(x) =\frac{  \frac{ x}{ 1 - |x| } }{ 1 + \frac{|x|}{ 1 - |x| } } = x \\ \n  & fg(x) = \frac{ \frac{x}{ 1 + |x| } }{ 1 - \frac{|x| }{ 1 + |x| } } = x\n\end{aligned}\n\]\n$f$ homeomorphism between $B$ and $\mathbb{R}^n$.  $B$ and $\mathbb{R}^n$ homeomorphic.  So an open ball in $\mathbf{R}^n$ is homeomorphic to $\mathbb{R}^n$\n\n\hrulefill\n\nIn practice, both the Hausdorff and second countability properties are usually easy to check, especially for spaces that are built out of other manifolds, because both properties are inherited by subspaces and products (Lemmas A.5 and A.8).  In particular, it follows easily that any open subset of a topological $n$-manifold is itself a topological $n$-manifold (with the subspace topology, of course).  \n\n\subsubsection*{Coordinate Charts}\n\nchart on $M$, $(U, \varphi)$ where open $U \subset M$ and \\\nhomeomorphism $\varphi : U \to \mathbb{R}^n$, $\varphi(U)$ open.  \n\n\n\n\n\n\subsubsection*{Examples of Topological Manifolds}\n\nExample 1.3. (Graphs of Continuous Functions) \\\nLet open $U \subset \mathbb{R}^n$ \\\nLet $F: U \to \mathbb{R}^k$ cont. \\\ngraph of $F$: $\Gamma(F) = \lbrace (x,y) \in \mathbb{R}^n \times \mathbb{R}^k | x \in U , y = F(x) \rbrace$ with subspace topology. \\\n$\pi_1 : \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n$ projection onto first factor. \\\n$\varphi_F:\Gamma(F) \to U$ restriction of $\pi_1$ to $\Gamma(F)$ \\\n$\varphi_F(x,y) = x$, $(x,y) \in \Gamma(F)$ \\\n\n\textbf{Example 1.4 (Spheres)} $S^n = \lbrace x \in \mathbb{R}^{n+1} | |x| = 1 \rbrace$ \\\nHausdorff and second countable because it's a topological subspace of $\mathbb{R}^{n+1}$   \\\n\n\n\textbf{Example 1.5 (Projective Spaces)}\n\n$U_i \subset \mathbb{R}^{n+1} - 0$ where $x^i \neq 0$  \\\n$V_i = \pi(U_i)$  \n\nLet $a\in U_i$. \n\[\n|x-a|^2 = (x^1 - a^1)^2 + \dots + (x^i - a^i)^2 + \dots + (x^{n+1} - a^{n+1})^2 < \frac{ (n+1) \epsilon^2 }{ n + 1 } = \epsilon^2\n\]\n$\forall \, a^i \in \mathbb{R}$, $\exists \,  x^i$, s.t. $(x^i - a^i)^2 < \frac{ \epsilon^2}{ n +1}$, by choice of $0< x^i < a^i + \frac{ \epsilon}{ \sqrt{ n+ 1} }$ with $0<x^i$ for the $i$th index.  \\\n$U_i$ indeed open set, \emph{saturated open set}.  \n\nopen $U_i \subset \mathbb{R}^{n+1} - 0$, $x^i \neq 0$\n \nFrom Lemma A.10, recall\n(d) restriction of $\pi$ to any saturated open or closed subset of $X$ is a quotient map.  \n\nnatural map $\pi: \mathbb{R}^{n+1} - 0 \to \mathbb{R}P^n$ given quotient topology.  \n\nBy Tu, Prop. 7.14, $\sim $ on $\mathbb{R}^{n+1} - 0$ is an open equivalence relation.  \\\n$\Longrightarrow  \left. \pi \right|_{U_i}(U_i) = V_i$ open.  
\n\n\\[\n\\begin{aligned}\n   & \\varphi_i : V_i \\to \\mathbb{R}^n \\\\ \n& \\varphi_i [ x^1 \\dots x^{n+1} ] = \\left( \\frac{ x^1 }{ x^i } \\dots \\frac{ x^{i-1}}{ x^i } , \\frac{ x^{i+1}}{ x^i} \\dots \\frac{x^{n+1}}{ x^i} \\right)\n\\end{aligned}\n\\]\n\n$\\varphi_i$ well-defined since \n\\[\n\\varphi_i[tx^1 \\dots tx^{n+1}] = \\left( \\frac{ tx^1 }{ tx^i } \\dots \\frac{ t\\widehat{x}^i }{ tx^i } \\dots \\frac{ tx^{n+1}}{ tx^i } \\right)= \\left( \\frac{x^1}{ x^i } \\dots \\frac{ \\widehat{x}^i }{ x^i } \\dots \\frac{ x^{n+1}}{ x^i } \\right) = \\varphi_i[x^1 \\dots x^{n+1} ]\n\\]\n\n$\\varphi_i$ cont. since $\\varphi_i \\pi$ cont.  \n\n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em]\n  {\n    U_i \\subset \\mathbb{R}^{n+1} - 0 &  \\\\\n    V_i \\subset \\mathbb{R}P^n  & \\mathbb{R}^n  \\\\ };\n  \\path[-stealth]\n  (m-1-1) edge node [right] {$\\varphi_i \\pi$} (m-2-2)\n  edge node [left] { $\\pi$} (m-2-1)\n  (m-2-1) edge node [below] {$\\varphi$} (m-2-2);\n\\end{tikzpicture}\n\n\\[\n\\begin{gathered}\n  \\begin{aligned}\n    & \\varphi_i : U_i \\subset \\mathbb{R}P^n \\to \\mathbb{R}^n \\\\ \n    & \\varphi_i[x^1 \\dots x^{n+1} ] = \\left( \\frac{x^1}{x^i} \\dots \\frac{ \\widehat{x}^i }{ x^i } \\dots \\frac{x^{n+1}}{ x^i } \\right) \\\\\n    & \\varphi^{-1}_i(u^1 \\dots u^n) = [ u^1 \\dots u^{i-1}, 1 , u^i \\dots u^n ]\n\\end{aligned} \\\\ \n  \\varphi_i^{-1} \\varphi_i[x^1 \\dots x^{n+1} ] = \\left[ \\frac{x^1 }{ x^i } \\dots \\frac{x^{i-1}}{ x^i } , 1 , \\frac{x^{i+1}}{ x^i} \\dots \\frac{x^{n+1}}{ x^i } \\right] = [x^1 \\dots x^{i-1}, x^i , x^{i+1} \\dots x^{n+1} ] \\\\\n  \\varphi_i \\varphi^{-1}_i(u^1 \\dots u^n) = (u^1 \\dots u^{i-1} , u^i  \\dots u^n )\n\\end{gathered}\n\\]\ncont. $\\varphi_i$ bijective, $\\varphi_i^{-1}$ cont. $\\varphi_i$ homeomorphism.  \n\nFrom a previous edition: \n\n\\exercisehead{1.2} \n\n\nLet $ \\begin{aligned} & \\quad \\\\ \n  & \\phi_t : \\mathbb{R}^{n+1}\\backslash \\lbrace  0 \\rbrace \\to \\mathbb{R}^{n+1} \\backslash \\lbrace 0 \\rbrace \\\\ \n  & \\phi_t(x) = tx \\end{aligned}$ \\\\\n$\\phi_t$ invertible, $\\phi_t^{-1} = \\phi_{\\frac{1}{t}}$ \\\\\n$\\phi_t$, $\\phi_t^{-1}$ \\, $C^1$ (and $C^{\\infty}$), $\\phi_t$ homeomorphism.  \\\\\n\nLet $U$ open in $\\mathbb{R}^{n+1}\\backslash \\lbrace 0 \\rbrace$.  Then $\\phi_t(U)$ open in $\\mathbb{R}^{n+1}\\backslash \\lbrace 0 \\rbrace$.  \\\\\nThus $\\pi^{-1}([U]) = \\bigcup_{t \\in \\mathbb{R}} \\phi_t(U)$ open in $\\mathbb{R}^{n+1}\\backslash \\lbrace 0 \\rbrace$.  \\\\\nThus $[U]$ open in $\\mathbb{R}P^n$.  $\\sim$ open.  \\\\\n\nNote $\\begin{aligned} & \\quad  \\\\\n  & \\pi : \\mathbb{R}^{n+1} \\backslash \\lbrace 0 \\rbrace \\to \\mathbb{R}P^n \\\\\n  & \\pi(x) = \\frac{x}{ \\| x\\| } \\end{aligned}$ \\\\\n$\\mathbb{R}^n$ 2nd. countable, $\\mathbb{R}P^n$ 2nd. countable.  \n\n\n\\exercisehead{1.3}\n\n$S^n$ compact. \\\\\n$\\pi :\\mathbb{R}^{n+1}\\backslash \\lbrace 0 \\rbrace \\to \\mathbb{R}P^n$ \\\\\n$\\pi(x) = [ \\frac{x}{ \\| x \\| } ]$ \\\\\nLet $x \\in \\mathbb{R}P^n$ \\\\\n\\phantom{Let } $y = \\frac{x}{ \\| x \\| } \\in S^n$ and $ \\left. \\pi \\right|_{S^n}(y) = [x]$ \\\\\n$\\left. \\pi \\right|_{S^n}$ surjective.  \n\n\n\n\\exercisehead{1.6}\n\nFirst, note that $\\sim $ on $\\mathbb{R}^{n+1} -0$ in the definition of $\\mathbb{R}P^n$ is an open $\\sim$ i.e. open equivalence relation.  
\n\nThis is because of the following:\n$\\forall \\, U \\subset \\mathbb{R}^{n+1}-0$, \\\\\n$\\pi^{-1}(\\pi(U)) = \\bigcup_{t\\in \\mathbb{R}} tU$, set of all pts. equivalent to some pt. of $U$.  \\\\\nmultiplication by $t\\in \\mathbb{R}$ homeomorphism of $\\mathbb{R}^{n+1} - 0$, so $tU$ open $\\forall \\, t \\in \\mathbb{R}$.  \\\\\n$\\pi^{-1}(\\pi(U))$ open i.e. $\\pi(U)$ open (for $\\pi$ is cont.).  \n\n\nLet $X = \\mathbb{R}^{n+1} -0$.  \\\\\nConsider $R = \\lbrace ( x,y) \\in X \\times X | x\\sim y \\text{ or } y = tx \\text{ for some } t \\in \\mathbb{R} \\rbrace$ \n\n$y = tx$ means $y_i = tx_i$ \\, $\\forall \\, i = 0\\dots n$.  Then $\\frac{ x_i}{y_i} = \\frac{ x_j}{ y_j}$ \\, $\\forall \\, i,j = 0 \\dots n$.  Hence $x_i y_j - y_i x_j = 0$ \\, $\\forall \\, i , j$.  \n\nLet $\\begin{aligned} & \\quad \\\\ \n  & f : X \\times X \\to \\mathbb{R} \\\\\n  & f(x,y) = \\sum_{ i \\neq j} (x_i y_j - y_i x_j)^2 \\end{aligned}$  \n\n\\[\n\\begin{aligned}\n  & \\frac{ \\partial f}{ \\partial x_i} = \\sum_{i \\neq j } 2 (x_i y_j - y_i x_j) ( y_j - y_j) = 0 \\\\\n  & \\frac{ \\partial f}{ \\partial y_j} = \\sum_{j \\neq i } 2 (x_i y_j - y_i x_j) ( x_i - x_i) = 0 \n\\end{aligned}\n\\]\n\nNevertheless, $f$ is $C^1$ so $f$ cont.  \n\nSo $f^{-1}(0) = R$. \n\n$0$ closed, so $f^{-1}(0) = R$ closed.  By theorem, since $\\sim$ open, $\\mathbb{R}P^n = \\mathbb{R}^{n+1} - 0 / \\sim$ Hausdorff.  \n\ncf. \\url{http://math.stackexchange.com/questions/336272/the-real-projective-space-rpn-is-second-countable}\n\ntopological space is second countable if its topology has countable basis.  \\\\\n\\quad $\\mathbb{R}^n$ second countable since $\\mathcal{B} = \\lbrace B_r{ ( q)} | r,q \\in \\mathbb{Q} \\rbrace$ is a countable basis.  $\\forall \\, x \\in \\mathbb{R}^n$ \\\\\n\nIf $X$ is second countable, with countable basis $\\mathcal{B}$, \n\\begin{enumerate}\n\\item If $Y \\subseteq X$, $Y$ also second countable with countable basis $\\lbrace B | B \\in \\mathcal{B}, \\, Y \\bigcap B \\neq \\emptyset \\rbrace$\n\\item If $Z == X/\\sim $, $\\lbrace \\lbrace [x] | x \\in B \\rbrace | B \\in \\mathcal{B} \\rbrace$ is a countable basis for $Z$ since $\\mathcal{B}$ countable.  \\\\\n\nIt is a basis since \n\n\\begin{tikzpicture}\n  \\matrix (m) [matrix of math nodes, row sep=2em, column sep=3em, minimum width=1em]\n  {\nX = \\bigcup_{B \\in \\mathcal{B}} B    \\\\\nZ = X/ \\sim = \\pi{ \\left( \\bigcup_{B \\in \\mathcal{B}} B \\right) } = \\bigcup_{B \\in \\mathcal{B}} \\pi{ (B)} = \\bigcup_{ B \\in \\mathcal{B}} \\lbrace [x] | x  \\in B \\rbrace \\\\\n  };\n%  \\path[-stealth]\n  \\path[->]\n  (m-1-1) edge node [right] {$\\pi$} (m-2-1);\n\\end{tikzpicture}\n\n\n\\end{enumerate}\n\nNow let $Y = \\mathbb{R}^n-0$ and \\\\\n\\phantom{Now let } $Z = \\mathbb{R}P^n = \\mathbb{R}^n-0 / \\sim$\n\n\\exercisehead{1.7}\n\n$S^n$ compact so $S^n/\\lbrace \\pm \\rbrace$ compact by Theorem, as $\\pi_S(S^n) = S^n/ \\lbrace \\pm \\rbrace$, as $\\pi_S$ cont. surjective ($\\forall \\, [x] \\in S^n/\\lbrace \\pm \\rbrace$, $\\exists \\, x \\in S^n$ s.t. $\\pi_S(S^n) = [x]$) \\\\\n$g$ cont. bijective as defined above so since $g(S^n/\\lbrace \\pm \\rbrace) = \\mathbb{R}P^n$, $\\mathbb{R}P^n$ compact.  
\n\n\n\\textbf{Example 1.8 (Product Manifolds)}\n\\[\n\\begin{aligned}\n  & M_1 = \\bigcup_{\\alpha \\in \\mathfrak{A}_1} U_{\\alpha}^{(1)} \\\\ \n  & M_i = \\bigcup_{\\alpha \\in \\mathfrak{A}_i} U_{\\alpha}^{(i)} \n\\end{aligned} \\quad \\quad \\quad \\, \nM_1 \\times \\dots \\times M_n = \\bigcup_{ \\begin{aligned} & \\alpha_1 \\in \\mathfrak{A}_1 \\\\ \n& \\vdots \\\\ \n    & \\alpha_n \\in \\mathfrak{A}_n \\end{aligned} }  U_{\\alpha_1} \\times \\dots \\times U_{\\alpha_n} \\quad \\quad \\, (\\text{by def.})\n\\]\n\n$\\forall \\, p = (p_1 \\dots p_n) \\in M_1 \\times \\dots M_n$, consider $p_i \\in M_i$.  Choose coordinate chart $(U_{j_i}, \\varphi_{j_i})$, $\\varphi_i(U_i) \\subset \\mathbb{R}^{n_i}$.  Then,  \\\\\n\nConsider $\\varphi : U_1 \\times \\dots \\times U_n \\to \\mathbb{R}^{m_1} \\times \\dots \\times \\mathbb{R}^{m_n} = \\mathbb{R}^{ m_1 + \\dots + m_n}$, $(U_1 \\times \\dots \\times U_n, \\varphi_1 \\times \\dots \\times \\varphi_n)$ \\\\\n$\\varphi = \\varphi_1 \\times \\dots \\times \\varphi_n$\\\\\n$\\varphi$ also a homeomorphism.  $\\begin{gathered} \\varphi \\psi^{-1} \\\\ (\\varphi_1 \\times \\dots \\times \\varphi_n) \\circ(\\psi_1 \\times \\dots \\times \\psi_n)^{-1} \\end{gathered}$ also a diffeomorphism, cont. bijective and $C^{\\infty}$ \\\\\n$\\lbrace (U, \\varphi)  = (U_1 \\times \\dots \\times U_n, \\varphi_1 \\times \\dots \\times \\varphi_n) | (U_i, \\varphi_i) \\in \\lbrace (U_i, \\varphi_i) | U_i \\in M_i \\rbrace \\rbrace$ also an atlas.  \n\n\n\n\\subsubsection*{Topological Properties of Manifolds}\n\n\\begin{lemma}[1.10]\n$\\forall \\, $ topological $M$, $M$ has countable basis of precompact coordinate balls\n\\end{lemma}\n\n\\begin{proof}\n  First consider $M$ can be covered by single chart.  \\\\\nSuppose $\\varphi : M \\to \\widehat{U} \\subseteq \\mathbb{R}^n$ global coordinate map. \\\\\nLet $\\mathcal{B} = \\lbrace B_r(x) | \\text{ open } B_r(x) \\subseteq \\mathbb{R}^n \\text{ s.t. } r\\in \\mathbb{Q}, \\, x \\in \\mathbb{Q}, \\, \\text{ i.e. $x$ rational coordinates }, B_{r'}(x) \\subseteq \\widehat{U}, \\text{ for some } r' > r \\rbrace$ \\\\\nClearly, $\\forall \\, B_r(x)$ precompact in $\\widehat{U}$ \\\\\n\\quad \\, $\\mathcal{B}$ countable basis for topology of $\\widehat{U}$\n\n$\\varphi$ homeomorphism, it follows $\\lbrace \\varphi^{-1}(B) | B \\in \\mathcal{B} \\rbrace$ countable basis for $M$\n\nLet $M$ arbitrary, \\\\\n\\quad By def., $\\forall \\, p \\in M$, $p \\in $ domain $U$ of a chart\n\nProp. A.16, $\\forall \\, $ open cover of second-countable space has countable subcover. \\\\\n\\quad $M$ covered by countably many charts $\\lbrace (U_i, \\varphi_i) \\rbrace$ \\\\\n$\\forall \\, U_i$, $U_i$ has countable basis of coordinate balls precompact in $U_i$ \\\\\nunion of all these coordinates bases is countable basis for $M$.   \\\\\n\nIf $V \\subseteq U_i$ one of these balls,  \\\\\n\\quad then $\\overline{V}$ compact in $U_i$.  $M$ Hausdorff, so $\\overline{V}$ closed.  \\\\\n\\quad $\\overline{V}$ in $M$ is same as $\\overline{V}$ in $U_i$, so $V$ precompact in $M$.   \n\\end{proof}\n\n\\subsubsection*{Connectivity}\n\n\\subsubsection*{Local Compactness and Paracompactness}\n\n\n\n\\subsubsection*{Fundamental Groups of Manifolds}\n\n\n\n\n\n\\subsection*{Smooth Structures}\n\nIf open $U \\subset \\mathbb{R}^n$, $V \\subset \\mathbb{R}^m$, \\\\\n$F:U \\to V$ smooth (or $C^{\\infty}$) if $\\forall \\, $ cont. partial derivative of all orders exists.  
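\\\nE.g. every polynomial map is smooth; $x \mapsto x^3$ smooth and bijective on $\mathbb{R}$, but inverse $y \mapsto y^{1/3}$ not differentiable at $0$, so a smooth bijection need not have a smooth inverse.  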
\\\\\n\n$F$ diffeomorphism if $F$ \\emph{smooth}, bijective, and has a \\emph{smooth} inverse.  \\\\\nDiffeomorphism is a homeomorphism.  \\\\\n\n$(U, \\varphi), (V,\\psi)$ smoothly compatible if $UV = \\emptyset$ or \n\\[\n\\psi \\varphi^{-1}: \\varphi(UV) \\to \\psi(UV) \\quad \\quad \\, \\text{ diffeomorphism }\n\\]\n\natlas $\\mathcal{A} \\equiv \\lbrace (U, \\varphi ) \\rbrace$ s.t. $\\bigcup U = M$.  Smooth atlas if $\\forall \\, (U,\\varphi), (V, \\psi) \\in \\mathcal{A}$, $(U,\\varphi), (V,\\psi)$ smoothly compatible.  \\\\\n\nSmooth structure on topological $n$-manifold $M$ is a maximal smooth atlas.   \\\\\n\nSmooth manifold $(M, \\mathcal{A})$ where $M$ topological manifold, $\\mathcal{A}$ smooth structure on $M$.   \\\\\n\n\n\\begin{proposition}[1.17] Let $M$ topological manifold. \n\\begin{enumerate}\n\\item[(a)] $\\forall \\, $ smooth atlas for $M$ is contained in ! maximal smooth atlas.  \n\\item[(b)] 2 smooth atlases for $M$ determine the same maximal smooth atlas iff union is smooth atlas.  \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nLet $\\mathcal{A}$ smooth atlas for $M$   \\\\\n$\\overline{ \\mathcal{A} } \\equiv $ set of all charts that are smoothly compatible with every chart in $\\mathcal{A}$ \n\n\\textbf{Want}: $\\overline{ \\mathcal{A}}$ smooth atlas, i.e. $\\forall \\, (U, \\varphi), (V \\psi) \\in \\overline{\\mathcal{A}}$, $\\psi \\varphi^{-1} : \\varphi(UV) \\to \\psi(UV)$ smooth.  \n\nLet $x = \\varphi(p) \\in \\varphi(UV)$ \\\\\n$p\\in M$, so $\\exists \\,$ some chart $(W, \\theta) \\in \\mathcal{A}$ s.t. $p \\in W$.  \\\\\nBy given, $\\theta \\varphi^{-1}, \\psi \\theta^{-1}$ smooth where they're defined.   \\\\\n$p \\in UVW$, so $\\psi \\varphi^{-1} = \\psi \\theta^{-1} \\theta \\varphi^{-1}$ smooth on $x$.  \\\\\nThus $\\psi \\varphi^{-1}$ smooth in a neighborhood of each pt. in $\\varphi(UV)$.  Thus $\\overline{\\mathcal{A}}$ smooth atlas.  \n\nTo check maximal, \n\n\n\\end{proof}\n\n\n\n\n\\subsection*{Local Coordinate Representations}\n\n\\begin{proposition}[1.19]\n  $\\forall \\, $ smooth $M$ has countable basis of regular coordinate balls\n\\end{proposition}\n\n\\exercisehead{1.20}\n\nsmooth manifold $M$ has smooth structure \\\\\n\\quad Suppose single smooth chart $\\varphi$ has entire $M$ as domain\n\\[\n\\varphi: M \\to \\widehat{U} \\subseteq \\mathbb{R}^n\n\\]\n\nLet $\\widehat{B} = \\lbrace \\widehat{B}_r(x) \\subseteq \\mathbb{R}^n | r \\in \\mathbb{Q}, \\, x \\in \\mathbb{Q}, \\, \\widehat{B}_{r'}(x) \\subseteq \\widehat{U} \\text{ for some } r' > r \\rbrace$ \\\\\n\n\\quad $\\forall \\, \\widehat{B}_r(x)$ precompact in $\\widehat{U}$ \\\\\n\\quad $\\widehat{\\mathcal{B}}$ countable basis for topology of $\\widehat{U}$\n\n$\\varphi$ homeomorphism, \n\n\\quad Let $\\begin{aligned}\n  & \\quad \\\\\n  & \\varphi^{-1}( \\widehat{B}_r{(0)}) = B \\\\\n  & \\varphi^{-1}{(\\widehat{B}_{r'}{(0)} )} = B' \\end{aligned}$\n\n$\\varphi$ homeomorphism and since $\\widehat{B}$ countable basis, $\\lbrace B \\rbrace$ countable basis of regular coordinate basis. \\\\\n\nSuppose arbitrary smooth structure.  \\\\\n\\quad By def., $\\forall \\, p \\in M$, $p$ in some chart domain \\\\\n\\quad Prop. A.16., $\\forall \\, $ open cover of second countable space has countable subcover \\\\\n\\quad $M$ covered by countably many charts $\\lbrace (U_i, \\varphi_i) \\rbrace$ \\\\\n\n$\\forall \\, U_i , \\, U_i$ has countable basis of coordinate balls precompact in $U_i$ \\\\\nunion of all these coordinate charts is countable basis for $M$.  
\n\nIf $V \\subseteq U_i$, 1 of these balls,  \\\\\n$\\begin{aligned}\n  & \\quad \\\\ \n  & \\varphi(V) = B_r(0) \\\\ \n  & \\varphi{(\\overline{V})} = \\overline{B}_r(0)\n\\end{aligned}$\n\nand $\\varphi(B') = B_{r'}{(0)}$, $r' > r$ for countable basis for $U_i$ \\\\\nSo $V$ regular coordinate ball.\n\n\n\\hrulefill\n\n\n\n\\subsection*{Examples of Smooth Manifolds}\n\n\n\n\n\n\n\\subsubsection*{More Examples}\n\n\n\n\\textbf{Example 1.25 (Spaces of Matrices) } Let $M(m\\times n,\\mathbb{R}) \\equiv $ set of $m\\times n$ matrices with real entries.  \n\n\\textbf{Example 1.26 (Open Submanifolds)}\n\n$\\forall \\, $ open subset $U \\subseteq M$ is itself a $\\text{dim}{M}$ manifold. \\\\\nEY : \\emph{ $\\forall \\, $ open subset $U \\subseteq M$ is itself a $\\text{dim}{M}$ manifold. }\n\n\n\\textbf{Example 1.27 (The General Linear Group) }\n\ngeneral linear group $GL(n,\\mathbb{R}) = \\lbrace A | \\text{det}{A} \\neq 0 \\rbrace$ \\\\\n\\quad $\\text{det}:A \\to \\mathbb{R}$ is cont. (by def. of $\\text{det}{A} = \\epsilon^{ i_1 \\dots i_n}a_{1 i_1} \\dots a_{n i_n}$ \\\\\n\\quad $\\text{det}^{-1}{ ( \\mathbb{R} - 0 )}$ is open since $\\mathbb{R}-0$ open so $GL(n,\\mathbb{R})$ open \\\\\n\\quad $GL(n,\\mathbb{R}) \\subseteq M(n,\\mathbb{R})$, $M(n,\\mathbb{R})$ $n^2$-dim. vector space. \\\\\n\\quad \\quad so $GL(n,\\mathbb{R})$ smooth $n^2$-dim. manifold.  \n\n\\textbf{Example 1.28 (Matrices of Full Rank)}\n\nSuppose $m <n$ \\\\\n\nLet $M_n(m\\times n, \\mathbb{R}) \\subseteq M(m\\times n, \\mathbb{R})$ with matrices of rank $m$ \\\\\nif $A \\in M_m(m\\times n, \\mathbb{R})$, \\\\\n\\quad $\\text{rank}{A}=m$ \n\nmeans that $A$ has some nonsingular $m \\times m$ submatrix.  (EY 20140205 ???)\n\n\n\n\n\\textbf{Example 1.31 (Spheres)} \n\n\\[\n\\begin{aligned}\n  & \\varphi_i^{\\pm} : S^n \\to B_1^n(0) \\subset \\mathbb{R}^n \\quad \\, (B_1^n(0) \\text{ disk of radius $1$}) \\\\\n  & \\varphi_i^{\\pm}(x_1 \\dots x_{n+1}) = (x_1 \\dots \\widehat{x}_i \\dots x_{n+1} )= (y_1 \\dots y_n)\n\\end{aligned}\n\\]\nNote $x_1^2 + \\dots + x_i^2 + \\dots + x_{n+1}^2 = 1$.  \\quad \\, $x_i = \\pm \\sqrt{ 1 - (x_1^2 + \\dots + \\widehat{x}_i^2 + \\dots + x_{n+1}^2 ) }$\n\n\\[\n\\begin{aligned}\n  & (\\varphi_i^{\\pm})^{-1}(y_1 \\dots y_n) = (y_1 \\dots \\pm \\sqrt{ 1 - (y_1^2 + \\dots + y_n^2) } \\dots y_n) = (y_1 \\dots y_{i-1}, \\pm \\sqrt{ 1-  |y|^2 }, y_i\\dots y_n ) \\\\ \n & \\varphi_i^{\\pm} (\\varphi_j^{\\pm})^{-1}(y_1 \\dots y_n ) = (y_1 \\dots \\widehat{y}_i \\dots y_{j-1} , \\pm \\sqrt{ 1 - |y|^2 }, y_j \\dots y_n) \\\\ \n & \\varphi_i^{\\pm} (\\varphi_j^{\\mp})^{-1}(y_1 \\dots y_n ) = (y_1 \\dots \\widehat{y}_i \\dots y_{j-1} , \\mp \\sqrt{ 1 - |y|^2 }, y_j \\dots y_n) \\\\ \n\\end{aligned}\n\\]\n\\[\n\\varphi_j^{\\pm}(\\varphi_i^{\\mp})^{-1} \\varphi_i^{\\pm} (\\varphi_j^{\\pm})^{-1}(y_1 \\dots y_n) = \\varphi_j^{\\pm}(y_1 \\dots \\pm y_i \\dots y_{j-1}, \\pm \\sqrt{ 1 - |y|^2 } \\dots y_n ) = (y_1 \\dots \\pm y_i \\dots y_j \\dots y_n)\n\\]\nThis is symmetrical in $i,j$ and so true if $i,j$ reverse.  \\\\\n\nSo $\\varphi_i^{\\pm}(\\varphi_j^{\\pm})^{-1}$ diff. and bijective.  Likewise for $\\varphi_i^{\\pm}(\\varphi_j^{\\mp})^{-1}$.  \\\\\nSo $\\begin{aligned} & \\varphi_i^{\\pm} (\\varphi_j^{\\pm})^{-1} \\\\ \n  & \\varphi_i^{\\pm} (\\varphi_j^{\\mp})^{-1} \\end{aligned}$ diffeomorphisms.  
\n\n\\begin{lemma}[1.35] \\textbf{(Smooth Manifold Chart Lemma)}\nLet $M$ be a set, suppose given $\\lbrace U_{\\alpha} | U_{\\alpha} \\subset M \\rbrace$, given maps $\\varphi_{\\alpha}: U_{\\alpha} \\to \\mathbb{R}^n$ s.t.\n\\begin{enumerate}\n\\item[(i)] $\\forall \\, \\alpha$, $\\varphi_{\\alpha}$ bijection between $U_{\\alpha}$ and open $\\varphi_{\\alpha}(U_{\\alpha}) \\subseteq \\mathbb{R}^n$\n\\item[(ii)] $\\forall \\, \\alpha, \\beta$, $\\varphi_{\\alpha}(U_{\\alpha} \\bigcap U_{\\beta})$, $\\varphi_{\\beta}(U_{\\alpha} \\bigcap U_{\\beta})$ open in $\\mathbb{R}^n$\n\\end{enumerate}\n\\end{lemma}\n\n\n\\textbf{Example 1.36 (Grassmann Manifolds)}\n\n%$G_k(V) = $ set of all $k$-dim. linear subspaces of $V$, $\\text{dim}{V} = n \\geq k $\n\n%Let $P,Q$ complementary subspaces of $V$, $\\begin{aligned} & \\quad \\\\ & \\text{dim}{P} = k \\\\ & \\text{dim}{Q} = n- k \\end{aligned}$\n\n%direct sum decomposition $V = P \\oplus Q$  \n\n%graph of any linear $X: P \\to Q$\n\n%\\[\n%\\Gamma(X) = \\lbrace v + X v | v \\in P \\rbrace \\subset V\n%\\]\n%$k$-dim. subspace\n\n%Now $\\Gamma(X) Q = \\emptyset$ since $\\Gamma(A) \\subset P \\oplus Q$ and $P$ and $Q$ are complementary.  \n\n%If $KQ = \\emptyset$, then $K \\subset P \\oplus Q$, $P\\neq 0$.  \n\n$G_k(V) = \\lbrace S | S \\subseteq V \\rbrace$ \\quad \\, $S$ $k$-dim. linear subspace of $V$ \\\\\n$\\text{dim}V = n$, $V$ vector space  \\\\\n\nif $V = \\mathbb{R}^n$, $G_k(\\mathbb{R}^n) \\equiv G_{k,n} \\equiv G(k,n)$ (notation) \\quad \\, $G_1(\\mathbb{R}^{n+1}) = \\mathbb{R}P^n$\n\nLet $V = P \\oplus Q$, \\, $\\begin{aligned}\n  & \\text{dim}P =k \\\\\n  & \\text{dim}Q = n-k \\end{aligned}$ \\\\\n\nlinear $X: P \\to Q$ \n\n\\[\n\\Gamma(X) = \\lbrace v + Xv | v \\in P \\rbrace , \\quad \\, \\Gamma(X) \\subseteq V, \\, \\text{dim}\\Gamma(X) = k\n\\]\n$\\Gamma(X) \\bigcap Q = 0$ since $\\forall \\, w \\in \\Gamma(X)$, $w$ has a $P$ piece, and $Q$ complementary to $P$ \\\\\n\nConverse: $\\forall \\, $ subspace $S \\subseteq V$, s.t. $S\\bigcap Q = 0$ \\\\\n\\quad \\, let $\\begin{aligned}\n  & \\quad \\\\ \n  & \\pi_P : V \\to P \\\\ \n  & \\pi_Q: V \\to Q\n\\end{aligned}$ \\quad \\, projections by direct sum decomposition $V = P\\oplus Q$ \\\\\n\n$\\left. \\pi_P \\right|_S : S \\to P$ isomorphism  \\\\\n$\\Longrightarrow X = ( \\left. \\pi_Q \\right|_S ) \\cdot ( \\left. \\pi_P \\right|_S)^{-1}$, \\, $X: P \\to Q$ \\\\\n\nLet $v\\in P$.  $v+ Xv = v +  \\left. \\pi_Q \\right|_S (  \\left. \\pi_P \\right|_S)^{-1} v$.  Let $v  \\in \\left. \\pi_P \\right|_S (S)$ \\quad \\, $\\Gamma(X) = S$ \\\\\n\n\nLet $L(P;Q) = \\lbrace f | \\text{ linear } f : P \\to Q \\rbrace$, \\, $L(P;Q)$ vector space \\\\\n$U_Q \\subseteq G_k(V)$, \\, $U_Q = \\lbrace S | \\text{dim}S =k, \\, S \\text{ subspace }, \\, S \\bigcap Q = 0 \\rbrace$ \\\\\n$\\begin{aligned}\n  & \\Gamma : L(P;Q) \\to U_Q \\\\\n  & X \\mapsto \\Gamma(X) \\end{aligned}$ \\\\\n\n$\\Gamma$ bijection by above \\\\\n$\\varphi = \\Gamma^{-1} : U_Q \\to L(P;Q)$  \\\\\n\nBy choosing bases for $P,Q$, identify $L(P;Q)$ with $M((n-k)k; \\mathbb{R})$ and hence with $\\mathbb{R}^{k(n-k)}$ \\\\\n\nthink of $(U_Q, \\varphi)$ as coordinate chart. 
\\\\\n$\\varphi(U_Q) = L(P;Q)$\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection*{Problems}\n\n\\problemhead{1.7} (This was Problem 1.5 in previous editions)\n\n\\[\nS^n = \\lbrace ( x_1 \\dots x_{n+1} \\rbrace \\in \\mathbb{R}^{n+1} | \\sum_{i=1}^{n+1} x_i^2 = 1 \\rbrace \\subset \\mathbb{R}^{n+1}\n\\]\n\nLet $\\begin{aligned} & \\quad \\\\\n  & N = (0 \\dots 0, 1) \\\\ \n  & S = (0 \\dots 0, -1) \\end{aligned}$ \\quad \\, $x\\in S^n$. \n\\begin{enumerate}\n\\item[(a)]\n Consider $t(x-N) + N = tx + (1-t)N$ when $x_{n+1} =0$ \n\\[\n\\begin{gathered}\n  tx_{n+1} + (1-t) = 0 \\text{ or } tx_{n+1} + -1 + t = 0 \\\\ \n  \\Longrightarrow \\frac{ 1}{ 1- x_{n+1} } = t \\quad \\left( \\text{ or } \\frac{1}{ 1 + x_{n+1} } \\right)\n\\end{gathered}\n\\]\n\n\\[\n\\begin{aligned}\n  & \\pi_1: S^n - N \\to \\mathbb{R}^n \\\\ \n  & \\pi_1(x_1 \\dots x_{n+1}) = \\left( \\frac{x_1}{ 1 - x_{n+1} } \\dots \\frac{x_n}{ 1 - x_{n+1} } , 0 \\right) \\\\ \n  & \\pi_2 : S^n -S \\to \\mathbb{R}^n \\\\ \n  & \\pi_2(x_1 \\dots x_{n+1}) = \\left( \\frac{x_1}{ 1 + x_{n+1}} \\dots \\frac{x_n}{ 1 + x_{n+1} }, 0 \\right)\n\\end{aligned}\n\\]\nNote that $-\\pi_2(-x) = \\pi_1$ and $\\pi_1 \\equiv \\sigma$, $\\pi_2 \\equiv \\widetilde{\\sigma}$ in Massey's notation.  \n\\item[(b)] Note, for $y_i = \\frac{x_i}{ 1 - x_{n+1}}$\n\\[\n\\begin{gathered}\n  y_1^2 + \\dots +y_n^2 = |y|^2 = \\frac{1-  x_{n+1}^2}{ (1-x_{n+1})^2 } = \\frac{1+ x_{n+1}}{ 1- x_{n+1}} \\text{ or } x_{n+1}  = \\frac{ |y|^2 - 1 }{ |y|^2 + 1 }  \\\\\n  x_i = y _i (1- x_{n+1}) = \\frac{2y_i}{ 1 + |y|^2 }\n\\end{gathered}\n\\]\n\n\\[\n\\begin{aligned}\n  & \\pi_1^{-1}: \\mathbb{R}^n \\to S^n - N \\\\ \n  & \\pi_1^{-1}(y_1 \\dots y_n) = \\left( \\frac{2y_1}{ 1 + |y|^2 } \\dots \\frac{2y_n}{1+ |y|^2 } , \\frac{ |y|^2 - 1 }{ |y|^2 + 1 } \\right) \\\\ \n  & \\pi_2^{-1}(y_1 \\dots y_n) = \\left( \\frac{2y_1}{ 1 + |y|^2 } \\dots \\frac{2y_n}{1+ |y|^2 } , \\frac{ 1 - |y|^2  }{ |y|^2 + 1 } \\right)\n\\end{aligned}\n\\]\n\n$\\pi_1,\\pi_2$ diff., bijective, and $(S^n - N ) \\bigcup (S^n-S) = S^n$ \\\\\n\n\\item[(c)] Computing the transition maps for the stereographic projections.\n\n\nConsider $(S^n - N)(S^n-S) = S^n - N \\bigcup S$ \n\\[\n\\begin{aligned}\n  & \\pi_1 \\pi_2^{-1}(y_1 \\dots y_n) = \\left( \\frac{y_1 }{ |y|^2} \\dots \\frac{y_n}{|y|^2} , 0 \\right) \\\\ \n  & \\pi_2 \\pi_1^{-1}(y_1 \\dots y_n) = \\left( \\frac{y_1 }{ |y|^2} \\dots \\frac{y_n}{|y|^2} , 0 \\right) \n\\end{aligned}\n\\]\nsince, for example, \n\\[\n\\begin{gathered}\n\\frac{   \\frac{2y_i }{ 1 + |y|^2} }{ 1 - \\frac{ 1 - |y|^2}{ 1 + |y|^2 }} = \\frac{y_i }{ |y|^2 }\n\\end{gathered}\n\\]\n\n\n\n$\\pi_1 \\pi_2^{-1}$ bijective and $C^{\\infty}$, $\\pi_1 \\pi_2^{-1}$ diffeomorphism.   \\\\\n\n$\\lbrace (S^n-N, \\pi_1), (S^n - S, \\pi_2) \\rbrace$ \\, $C^{\\infty}$ atlas or differentiable structure.  \n\\[\n\\begin{aligned}\n  & \\partial_j \\frac{y_i}{ |y|^2} = \\frac{ - 2y_i y_j }{ (y_1^2 + \\dots + y_n^2 )^2 } \\\\ \n  & \\partial_j \\frac{ y_j}{ |y|^2} = \\frac{ (y_1^2 + \\dots + y_n^2) - 2y_j^2 }{ |y|^4} = \\frac{ y_1^2 + \\dots + \\widehat{y}_j^2 + \\dots + y_n^2 - y_j^2 }{ |y|^4}\n\\end{aligned}\n\\]\n\\[\n\\text{det}{ (\\partial_j \\pi_1 \\pi_2^{-1}(y) ) } = \\sum_{ \\sigma \\in S_n} \\text{sgn}{ (\\sigma)} \\partial_{\\sigma_1} \\frac{y_1}{ |y|^2} \\dots \\partial_{\\sigma_n} \\frac{y_n}{|y|^2} = 0 \\text{ only if $y=0$. 
But that's excluded }\n\\]\n\\item[(d)] Consider $\\begin{gathered}\n  \\lbrace ( S^n\\backslash, \\pi_1), (S^n \\backslash S, \\pi_2) \\rbrace \\\\ \n  \\mathcal{A} = \\lbrace (U_i^{\\pm}, \\varphi_i^{\\pm}) \\rbrace \\end{gathered}$  \\\\\n\nNow \n\\[\n\\begin{aligned}\n  & S^n \\backslash N \\bigcap U_i^+ = \\begin{cases} U_i^+ & \\text{ if } i \\neq n + 1 \\\\ U_{n+1}^+ \\backslash N & \\text{ if } i = n+1 \\end{cases} \\\\ \n  & S^n \\backslash N \\bigcap U_i^- = U_i^-\n\\end{aligned}\n\\]\n\n\\[\n\\pi_1(\\varphi_i^{\\pm})^{-1}(y_1 \\dots y_n) = \\pi_1(y_1 \\dots y_{i-1}, \\pm \\sqrt{ 1 - |y|^2 }, y_i \\dots y_n) = \\left( \\frac{y_1 }{ 1 - y_n} \\dots \\frac{y_{i-1} }{ 1 - y_n}, \\frac{ \\pm \\sqrt{ 1 - |y|^2 } }{ 1 - y_n } , \\frac{y_i}{ 1 - y_n} \\dots \\frac{y_{n-1}}{ 1 - y_n }, 0 \\right)\n\\]\n\nNote $-1< y_n <1$ on $\\varphi_i^{\\pm}(S^n\\backslash \\bigcap U_i^{\\pm})$\n\\[\n\\varphi_i^{\\pm} \\pi_1^{-1}(y_1 \\dots y_n) = \\varphi_i^{\\pm}\\left( \\frac{ 2y_1}{ 1 + |y|^2} \\dots \\frac{2y_n}{ 1 + |y|^2} , \\frac{|y|^2 - 1 }{ |y|^2 + 1 } \\right) = \\left( \\frac{2y_1 }{ 1 + |y|^2} \\dots \\frac{ 2 \\widehat{y}_i }{ 1 + |y|^2 } \\dots \\frac{ 2y_n }{ 1 + |y|^2 }, \\frac{|y|^2 -1}{ |y|^2 + 1 } \\right)\n\\]\n$\\pi_{1,2}(\\varphi_i^{\\pm})^{-1}, \\varphi_i^{\\pm}\\pi_{1,2}^{-1}$ are diffeomorphisms (bijective and differentiable).  \n\nSo $\\lbrace(S^n\\backslash N, \\pi_1), (S^n\\backslash S , \\pi_2) \\rbrace \\bigcup \\mathcal{A}$ also a $C^{\\infty}$ atlas.  \\\\\nSo $\\lbrace(S^n\\backslash N, \\pi_1), (S^n\\backslash S , \\pi_2) \\rbrace$ ,  $\\mathcal{A}$ equivalent.  \n\n\n\n\\end{enumerate}\n\n", "meta": {"hexsha": "535f6816b7a73d3227d2b32e72ad69835ffb9d66", "size": 28821, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LeeJM/01smoothmanifolds.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LeeJM/01smoothmanifolds.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LeeJM/01smoothmanifolds.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 42.3838235294, "max_line_length": 405, "alphanum_fraction": 0.5696193748, "num_tokens": 11631, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7025300698514777, "lm_q2_score": 0.787931185683219, "lm_q1q2_score": 0.5535453509161895}}
{"text": "\\chapter{Infinite Series}\r\nWe previously discussed some special types of series and formulas for their nth term.\r\nNow, we'll look at sums of infinitely many terms and eventually see how summing variables rather than just numbers allows us to approximate functions.\r\n\r\n\\input{./infinite_series/power_series.tex}\r\n\\input{./infinite_series/taylor_series.tex}\r\n\\input{./infinite_series/error.tex}\r\n\\input{./infinite_series/convergence.tex}\r\n\\input{./infinite_series/radius_of_convergence.tex}", "meta": {"hexsha": "8d566a56fccee67db32736f26830b2803c95e8b7", "size": 488, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "calc/infinite_series/infinite_series.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "calc/infinite_series/infinite_series.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "calc/infinite_series/infinite_series.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 54.2222222222, "max_line_length": 151, "alphanum_fraction": 0.8053278689, "num_tokens": 108, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879312056025699, "lm_q2_score": 0.702530051167069, "lm_q1q2_score": 0.5535453501881038}}
{"text": "%%==========================================\n%% Section 2.02: $\\lambda$-DIFFERENTIABILITY\n%%==========================================\n\\documentclass[../dissertation]{subfiles}\n\n\n\\begin{document}\n\n\\section{$\\lambda$-Differentiability}\\label{sec2:diff}\n\nWe begin this section by proving a useful variant of Young's inequality\n(Technical Lemma \\ref{tlma2:1}) that we use in a number of proofs in this \ndissertation. We then prove that that the operators $T_{\\star, \\lambda, u}$ are\ncontinuous in the spectral parameter $\\lambda$ (Proposition \n\\ref{prop2:Tlamcont}). The remainder of this section the focuses on proving the \n$\\lambda$-differentiability of $G_{\\star}^+$. We direclty use the results \nfrom all four propositions in this section in proving that the Jost solution \nboundary $M_1^+$ is differentiable in $\\lambda$, and hence has a \nlinearization in $\\lambda$. The $\\lambda$ linearization of $M_1^+$ is ultimately \nused in the proof that the direct scattering map $\\mathscr D$ is Lipschitz \ncontinuous.\n\nSince the proof of the $\\lambda$-continuity of $T_{\\star, \\lambda, u}$ calls\nTechnical Lemma \\ref{tlma2:1}, we start with the proof of that lemma.\n\n\\begin{tlma}\\label{tlma2:1}\n\tFor $f \\in \\inn{\\dotarg}^s L^1(\\mathbb R)$ and \n\t$\\inn{\\dotarg}^s g \\in L^\\infty(\\mathbb R)$ the inequality \n\t\\begin{align} \\label{eq3:tlma}\n\t\t\\| f * g\\|_{\\inn{\\dotarg}^s L^\\infty}\n\t\t\t&\\leq \\|f\\|_{\\inn{\\dotarg}^s L^1} \\|\\inn{\\dotarg}^s g\\|_{L^\\infty}\n\t\\end{align}\n\tholds for $s\\geq0$. Alternatively, if $\\inn{\\dotarg}^s f \\in L^1(\\mathbb R)$\n\tand $g \\in \\inn{\\dotarg}^s L^\\infty(\\mathbb R)$ the estimate\n\t\\begin{align}\\label{eq3:tlma2}\n\t\t\\| f * g\\|_{\\inn{\\dotarg}^s L^\\infty}\n\t\t\t&\\leq \\nm{\\inn{\\dotarg}^s f}_{L^1}  \\nm{g}_{\\inn{\\dotarg}^s L^\\infty}\n\t\\end{align}\n\tholds instead for $s \\geq 0$. \n\\end{tlma}\n\\begin{proof}\n\tIt is straightforward to show that\n\t\\begin{align*}\n\t\t\\frac{\\inn{x'}}{\\inn{x-x'}\\inn{x}} \\leq 1,\n\t\\end{align*}\n\tfor all $x, x' \\in \\mathbb R$, as\n\t\\begin{align*}\n\t\t\\left(\\frac{\\inn{x'}}{\\inn{x-x'}\\inn{x}}\\right)^2\n\t\t\t= \\frac{1+(x')^2}{1+(x-x')^2 + x^2 + x^2(x-x')^2}\n\t\t\t\\leq \\frac{1+(x')^2}{1+(x')^2}.\n\t\\end{align*}\n\tAs such, we find for $s> 0$ \n\t\\begin{align*}\n\t\t\\| f * g\\|_{\\inn{\\dotarg}^s L^\\infty}\n\t\t\t&= \\sup_{x\\in\\mathbb R} \\inn{x}^{-s} \n\t\t\t\t\\int_{\\mathbb R}\n\t\t\t\t\t\\big| \n\t\t\t\t\t\tf(x') \\, g(x-x')\n\t\t\t\t\t\\big|\n\t\t\t\t\\, \\mathrm{d}x'\n\t\t\t\t\\\\\n\t\t\t&= \\sup_{x\\in\\mathbb R}\n\t\t\t\t\\int_{\\mathbb R}\n\t\t\t\t\t\\Big| \n\t\t\t\t\t\t\\big[\\inn{x'}^{-s} f(x')\\big]\n\t\t\t\t\t\t\\big[\\inn{x-x'} g(x-x') \\big]\n\t\t\t\t\t\\Big|\n\t\t\t\t\t\t\\left(\n\t\t\t\t\t\t\t\\frac{\\inn{x'}}{\\inn{x-x'}\\inn{x}}\n\t\t\t\t\t\t\\right)^s\n\t\t\t\t\\, \\mathrm{d}x'\n\t\t\t\t\\\\\n\t\t\t&\\leq \n\t\t\t\t\\left\\|\n\t\t\t\t\t\t\\big[\\inn{\\dotarg}^{-s} f \\big]\n\t\t\t\t\t\t*\n\t\t\t\t\t\t\\big[\\inn{\\dotarg}^{s} g \\big]\n\t\t\t\t\\right\\|_{L^\\infty} \\\\\n\t\t\t&\\leq \n\t\t\t\t\\|f\\|_{\\inn{\\dotarg}^s L^1} \\|\\inn{\\dotarg}^s g\\|_{L^\\infty}\n\t\\end{align*}\n\tby Minkowski's integral inequality \\cite[Theorem 1.2.10]{Grafakos}. \n\tIf $s=0$, then \\eqref{eq3:tlma} automatically holds by \n\t\\cite[Theorem 1.2.10]{Grafakos}. 
An analogous argument also verifies \n\t\\eqref{eq3:tlma2}.\n\\end{proof}\n\n\n\\begin{prop}\\label{prop2:Tlamcont}\n\tFor $u \\in X \\cap \\inn{\\dotarg}^{-2}L^\\infty(\\mathbb R)$, the operator \n\t$T_{\\star, \\lambda, u} : \\inn{\\dotarg}L^\\infty(\\mathbb R) \\to \n\t\\inn{\\dotarg}L^\\infty(\\mathbb R)$ given by \n\t\\begin{align*}\n\t\tT_{\\star, \\lambda, u} : f\\mapsto \\big[G_\\star^+(\\cdot; \\lambda)\\big]*(u\\,f)\n\t\\end{align*}\n\tis continuous in the parameter $\\lambda \\in \\mathbb R$ in the sense the limit \n\t\\[\n\t\t\\lim_{h\\to0} \\| T_{\\star, \\lambda+h, u} - T_{\\star, \\lambda, u}\n\t\t\t \\|_{\\inn{\\dotarg}L^\\infty\\toitself}\n\t\t\t=0\n\t\\]\n\tholds pointwise for each fixed $\\lambda \\in \\mathbb R$.\n\\end{prop}\n\\begin{proof}\n\tTo simplify notation, let $T_{\\star, \\lambda, u}$ be denoted by \n\t$T_\\lambda$. Then, we see from Technical Lemma \\ref{tlma2:1}\n\tthat\n\t\\begin{align}\\label{eq2:Tcont}\n\t\t\\|(T_{\\lambda+h} - T_\\lambda)f\\|_{\\inn{\\dotarg}L^\\infty}\n\t\t\t&= \n\t\t\t\t\\left\\| \n\t\t\t\t\t\\big[ \n\t\t\t\t\t\tG_\\star^+(\\dotarg, \\lambda+h) \n\t\t\t\t\t\t- G_\\star^+(\\dotarg, \\lambda)\n\t\t\t\t\t\\big]\n\t\t\t\t\t*uf\n\t\t\t\t\\right\\|_{\\inn{\\dotarg}L^\\infty}  \\\\\n\t\t\t&\\leq \n\t\t\t\t\\left\\| \n\t\t\t\t\t\tG_\\star^+(\\dotarg, \\lambda+h) \n\t\t\t\t\t\t- G_\\star^+(\\dotarg, \\lambda)\n\t\t\t\t\\right\\|_{\\inn{\\dotarg}L^\\infty}\n\t\t\t\t\\left\\|\n\t\t\t\t\t\\inn{\\dotarg} u f\n\t\t\t\t\\right\\|_{L^\\infty}\n\t\t\t\t\\nonumber\n\t\\end{align}\n\tfor all $f \\in \\inn{\\dotarg}L^\\infty(\\mathbb R)$ as \n\t$u \\in X \\cap \\inn{\\dotarg}^{-2}L^\\infty(\\mathbb R)$ implies \n\t$\\inn{\\dotarg} u\\, f\\in \\inn{\\dotarg}L^\\infty(\\mathbb R)$. \n\tNoting that the argument in the proof of Proposition \n\t\\ref{prop2:ldervG} (which does not depend on this Proposition) implies\n\t\\begin{align}\\label{eq2:Glamcont}\n\t\t\\lim_{h\\to\\infty} \n\t\t\t\\left\\| \n\t\t\t\t\tG_\\star^+(\\dotarg, \\lambda+h) \n\t\t\t\t\t- G_\\star^+(\\dotarg, \\lambda)\n\t\t\t\\right\\|_{\\inn{\\dotarg}L^\\infty}\n\t\t\t=0,\n\t\\end{align}\n\tProposition \\ref{prop2:Tlamcont} follows from \\eqref{eq2:Tcont}\n\tand \\eqref{eq2:Glamcont}.\n\\end{proof}\n\n\n\\begin{rmk}\\label{rmk2:notation}\n\tThe notation in proofs of Propositions \\ref{prop2:ldervG} through \n\t\\ref{prop2:ghconv} can get unnecessarily complicated. To avoid this, \n\tin these proof we use the notation $G(x, \\lambda)$ and $G(\\lambda)$ \n\tas stand-ins for $G_\\star^+(x; \\lambda)$. \n\\end{rmk}\n\n\\begin{prop}\\label{prop2:ldervG}\n\tFor each fixed $x\\ne 0$ and $\\lambda \\in \\mathbb R$, the Green's function \n\tboundary $G_\\star^+$ ($\\star = L \\text{, or } R$) is differentiable in \n\tthe spectral parameter $\\lambda$, and \n\t\\begin{align*}\n\t\t\\frac{\\partial}{\\partial \\lambda} G_\\star^+(x; \\lambda)\n\t\t\t= \\frac{1}{2\\pi} \n\t\t\t\t\\int_{{\\Gamma_\\star}} e^{ix\\xi} \n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{\\partial}{\\partial \\lambda} \\frac{1}{p(\\xi; \\lambda)}\n\t\t\t\t\t\\right)\n\t\t\t\t\\, \\mathrm{d}\\xi.\n\t\\end{align*}\n\\end{prop}\n\\begin{rmk}\n\tSince\n\t\\[\n\t\t\\frac{\\partial}{\\partial \\lambda} \\frac{1}{p(\\xi; \\lambda)}\n\t\t\t= \n\t\t\t\t\\frac{\\zeta'(\\lambda)}\n\t\t\t\t{\\Big(\\xi - \\zeta(\\lambda)\\big(1-e^{-2\\xi}\\big)\\Big)^2},\n\t\\]\n\tthe function $\\frac{\\partial}{\\partial \\lambda} \\frac{1}{p(\\xi; \\lambda)}$ \n\tdecays exponentially to zero as $\\xi\\to -\\infty$, and, for $\\xi > 0$, decays\n\tlike $1/\\xi^2$. 
As such, the integral\n\t\\begin{align*}\n\t\t\\int_{\\Gamma_\\star}\n\t\t\t\\left(\n\t\t\t\t\\frac{\\partial}{\\partial \\lambda} \\frac{1}{p(\\xi; \\lambda)}\n\t\t\t\\right)\n\t\t\\, \\mathrm{d}\\xi.\n\t\\end{align*}\n\tconverges absolutely.\n\\end{rmk}\n\\begin{proof}[Proof of Proposition \\ref{prop2:ldervG}]\n\tIn accordance with Remark \\ref{rmk2:notation}, we write $G_\\star^+(x; \\lambda)$ \n\tas either $G(x, \\lambda)$ or as $G(\\lambda)$. We further define $G_h$ to be the \n\tdifference quotient\n\t\\[\n\t\tG_h:= \\frac{G(\\lambda + h)- G(\\lambda)}{h}.\n\t\\]\n\t\\label{sym2:Gh}\n\tand seek to prove that $\\lim_{h\\to0} G_h$ converges. For $|h|>0$ sufficiently \n\tsmall, we may assume by analyticity that $G(\\lambda)$ and $G(\\lambda+h)$ \n\tshare the same contour of integration $\\Gamma$. If $\\lambda = 0$, the\n\tcontour $\\Gamma$ runs along with a single small semi-circular detour below the \n\treal axis to avoid passing through $\\xi = 0$ and $\\xi=h$. In which case\n\t\\begin{align*}\n\t\tG_h(\\lambda) = \n\t\t\t\\frac{1}{2\\pi}\\int_\\Gamma e^{ix\\xi}\\,\n\t\t\t\t\\frac{1}{h}\\,\n\t\t\t\t\\left[\n\t\t\t\t\t\\frac{1}{p(\\xi; \\lambda +h)}\n\t\t\t\t\t-\\frac{1}{p(\\xi; \\lambda)}\n\t\t\t\t\\right]\n\t\t\t\\, \\mathrm{d}\\xi\n\t\t\t= \n\t\t\t\t\\int_\\Gamma e^{ix\\xi} \n\t\t\t\t\t\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_{h}\n\t\t\t\t\\, \\mathrm{d}\\xi,\n\t\\end{align*}\n\twhere we define\n\t\\begin{align*}\n\t\t\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_{h}\n\t\t\t:= \\frac{1}{h}\\,\n\t\t\t\t\\left[\n\t\t\t\t\t\\frac{1}{p(\\xi; \\lambda +h)}\n\t\t\t\t\t-\\frac{1}{p(\\xi; \\lambda)}\n\t\t\t\t\\right]\n\t\\end{align*}\n\t\\label{sym2:pdiffquot}\n\tas the difference quotient of $1/p$ with respect to $\\lambda$.\n\tSince \n\t\\[\n\t\t\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_{h}\n\t\t\t= \\frac{1}{h}\n\t\t\t\t\\frac{\n\t\t\t\t\t\\big[\n\t\t\t\t\t\t\\zeta(\\lambda + h) - \\zeta(h)\n\t\t\t\t\t\\big]\n\t\t\t\t\t\\big(1-e^{-2\\xi}\\big)\n\t\t\t\t}\n\t\t\t\t{p(\\xi; \\lambda+h)\\,p(\\xi;\\lambda)}\n\t\\]\n\tdecays exponentially as $\\xi \\to -\\infty$ and decays as \n\t$1/\\xi^2$ for positive $\\xi$, \n\t\\[\n\t\t\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_{h} \\in L^1(\\Gamma)\n\t\\]\n\tfor each fixed $h\\ne0$. Further, using the continuity of $\\zeta$ and the reverse \n\ttriangle inequality, we also have\n\t\\[\n\t\t\\left|\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_{h}\\right| \n\t\t\t\\lesssim_\\lambda \n\t\t\t\t\\frac{\n\t\t\t\t\t\\big|1-e^{-2\\xi}\\big|\n\t\t\t\t}{\n\t\t\t\t\t|\\xi|\\big| \\xi - \\zeta(\\lambda) \\big(1-e^{-2\\xi}\\big)\\big|\n\t\t\t\t}\n\t\t\t=: \\iota(\\xi).\n\t\\]\n\tSince $\\iota$ is continuous in $\\xi$ on $\\Gamma$ and $\\iota(\\xi) \n\t= \\mathcal O(1/\\xi^2)$ for large $|\\xi|$, one application of the \n\tDominated Convergence Theorem completes this proof.\n\\end{proof}\n\n\\begin{prop}\\label{prop2:lderivGspace}\n\tThe partial derivative $\\frac{\\partial}{\\partial \\lambda} G_\\star^+$\n\t($\\star = L \\text{, or } R$)\n\tof the Green's function boundary value $G_\\star^+$ lies in $\\inn{x}^s L_x^1(\\mathbb R)$\n\tfor $s> 3$ and all $\\lambda \\in \\mathbb R$. 
\n\tIf $\\lambda \\in \\mathbb R$ and $\\lambda \\ne 0$, then \n\t$\\frac{\\partial}{\\partial \\lambda} G_\\star^+ \n\t\\in \\inn{x}^s L_x^1(\\mathbb R)$ for $s > 2$.\n\\end{prop}\n\\begin{proof}\n\tDefine\n\t\\[\n\t\tg(\\xi; \\lambda) \n\t\t\t:= e^{ix\\xi}\\, \\frac{\\partial}{\\partial \\lambda} \\frac{1}{p},\n\t\\]\n\tThrough direct computation, one can show \n\t\\begin{align}\\label{eq2:gzerores}\n\t\t\\Res_{\\xi=0} g\n\t\t\t= \n\t\t\t\t\\frac{\n\t\t\t\t\t2 e^{2\\lambda}\\big(e^{2\\lambda}-2\\lambda-1\\big)\n\t\t\t\t}{\n\t\t\t\t\t\\big(\n\t\t\t\t\t\t1 - e^{2\\lambda}+ 2\\lambda e^{2\\lambda}\n\t\t\t\t\t\\big)^2\n\t\t\t\t},\n\t\\end{align}\n\t\\begin{align}\\label{eq3:glamres}\n\t\t\\Res_{\\xi=\\lambda} g\n\t\t\t= \n\t\t\t\t\\frac{\n\t\t\t\t\t2 e^{2\\lambda} - 2 - 4 \\lambda e^{2\\lambda}\n\t\t\t\t\t+ i \n\t\t\t\t\t\\big[\n\t\t\t\t\t\tx -\n\t\t\t\t\t\t2 x e^{2\\lambda} + x e^{4\\lambda} + 2x\\lambda\n\t\t\t\t\t\t- 2 x \\lambda e^{2\\lambda}\n\t\t\t\t\t\\big]\n\t\t\t\t}\n\t\t\t\t{\\big(e^{2\\lambda} - 2 \\lambda -1\\big)^2}\n\t\t\t\t\\, e^{i x \\lambda},\n\t\\end{align}\n\tand \n\t\\begin{align}\\label{eq2:gcollapsingpoles}\n\t\t\\lim_{\\lambda\\to0} \n\t\t\t\t\\big[\n\t\t\t\t\t\\Res_{\\xi=0} g + \\Res_{\\xi=\\lambda} g\n\t\t\t\t\\big]\n\t\t\t= -\\frac{1}{2} \\, x^2 + i \\, \\frac{1}{3} \\, x.\n\t\\end{align}\n\tIn particular, equations \\eqref{eq2:gzerores}, \\eqref{eq3:glamres},\n\tand \\eqref{eq2:gcollapsingpoles} imply\n\t\\begin{align}\\label{eq3:gsummary}\n\t\t\\begin{aligned}\n\t\t\t\\big|\\Res_{\\xi=0} g \\big| \\lesssim_\\lambda 1, \\qquad\n\t\t\\big|\\Res_{\\xi=\\lambda} g \\big| \\lesssim_\\lambda |x|, \\\\\n\t\t\\lim_{\\lambda\\to0} \n\t\t\t\t\\big[\n\t\t\t\t\t\\Res_{\\xi=0} g + \\Res_{\\xi=\\lambda} g\n\t\t\t\t\\big]\n\t\t\t= \\mathcal O(x^2)\n\t\t\\end{aligned}\n\t\\end{align}\n\n\tFurther, on our work in the proof of Lemma \\ref{prop2:ldervG}\n\talso implies that\n\t\\begin{align}\\label{eq3:pldervspace}\n\t\t\\frac{\\partial}{\\partial \\lambda} \\frac{1}{p}\n\t\t \\in L_\\xi^1(\\mathbb R \\pm i \\pi).\n\t\\end{align}\n\tAs such, after applying the contour shift demonstrated in Figure\n\t\\ref{fig1:GammaContour} to the contour of integration for\n\t$\\frac{\\partial}{\\partial \\lambda} G_L^+$, \n\tLemma \\ref{prop2:lderivGspace} is an immediate \n\tconsequence of \\eqref{eq3:gsummary} and \\eqref{eq3:pldervspace}.\n\\end{proof}\n\n\\begin{prop}\\label{prop2:ghconv}\n\tThe difference quotient\n\t\\[\n\t\tG_h(x; \\lambda) := \\frac{G_L^+(x; \\lambda+h) - G_L^+(x; \\lambda)}{h}\n\t\\]\n\tconverges to $\\frac{\\partial}{\\partial \\lambda} G_L^+$ in \n\t$\\inn{x}^s L_x^1(\\mathbb R)$ for $s > 3$ and all real $\\lambda$. 
\n\tIf $\\lambda$ is real and non-zero, then \n\tthis convergence happens in $\\inn{x}^s L_x^1(\\mathbb R)$ for \n\t$s > 2$.\n\\end{prop}\n\\begin{proof}\n\tDirect computation yields the following results\n\t\\begin{align*}\n\t\t\\Res_{\\xi=0}\n\t\t\t\t\\left[\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\t\\right)_h\n\t\t\t\t\\right]\n\t\t\t&= \n\t\t\t\t\\frac{\n\t\t\t\t\t2 e^{2\\lambda}\n\t\t\t\t\t\\big(\n\t\t\t\t\t\th \\, e^{2(\\lambda+h)} \n\t\t\t\t\t\t+ \\lambda \n\t\t\t\t\t\t- e^{2h}(\\lambda + h)\n\t\t\t\t\t\\big)\n\t\t\t\t}{\n\t\t\t\t\th \n\t\t\t\t\t\\big(\n\t\t\t\t\t\t1 + e^{2\\lambda}(2\\lambda - 1)\n\t\t\t\t\t\\big)\n\t\t\t\t\t\\big(\n\t\t\t\t\t\t1 + e^{2(\\lambda+h)(2\\lambda + 2h - 1)}\n\t\t\t\t\t\\big)\n\t\t\t\t} \\\\\n\t\t\\Res_{\\xi=\\lambda}\n\t\t\t\t\\left[\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\t\\right)_h\n\t\t\t\t\\right]\n\t\t\t&= - \\frac{1}{h}\\frac{e^{2\\lambda}-1}{e^{2\\lambda}-2\\lambda-1}\n\t\t\t\t\\, e^{ix\\lambda} \\\\\n\t\t\\Res_{\\xi=\\lambda+h}\n\t\t\t\t\\left[\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\t\\right)_h\n\t\t\t\t\\right]\n\t\t\t&= \n\t\t\t\t \\frac{1}{h}\n\t\t\t\t \\frac{e^{2(\\lambda+h)}-1}{e^{2(\\lambda+h)}-2(\\lambda+h)-1}\n\t\t\t\t\\, e^{ix(\\lambda+h)}\n\t\\end{align*}\n\tAs such, through further direct computation, we find the\n\tlimits\n\t\\begin{align*}\n\t\t\\lim_{h\\to0}\n\t\t\t\\Res_{\\xi=0}\n\t\t\t\t\\left\\{ \n\t\t\t\t\\left(\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\frac{\\partial}{\\partial \\lambda}\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\\right)\n\t\t\t\t-\n\t\t\t\t\\Res_{\\xi=0}\n\t\t\t\t\\left[\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\t\\right)_h\n\t\t\t\t\\right]\n\t\t\t\t\\right\\}\n\t\t\t=0,\n\t\\end{align*}\n\tand\n\t\\begin{align*}\n\t\t\\lim_{h\\to0}\n\t\t\t\\Res_{\\xi=\\lambda}\n\t\t\t\t\\left\\{ \n\t\t\t\t\\left(\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\frac{\\partial}{\\partial \\lambda}\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\\right)\n\t\t\t\t-\n\t\t\t\t\\sum_{k \\in \\{\\lambda, \\lambda+h\\}}\n\t\t\t\t\\Res_{\\xi=k}\n\t\t\t\t\\left[\n\t\t\t\t\te^{ix\\xi}\n\t\t\t\t\t\\left(\n\t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t\t\t\t\t\\right)_h\n\t\t\t\t\\right]\n\t\t\t\t\\right\\}\n\t\t\t=0\n\t\\end{align*}\n\thold pointwise for each $x \\in \\mathbb R$. 
Thus, since\n\t$\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_h \\in L_\\xi^1(\\mathbb R \\pm i\\pi)$\n\timplies by Fourier theory that $\\int_{\\mathbb R\\pm i\\pi} e^{ix\\xi} \n\t\\left(\\frac{1}{p_\\lambda(\\xi)}\\right)_h \\, \\mathrm{d}\\xi$ is continuous and bounded \n\t(in $x \\in \\mathbb R$), we may complete this proof by doing a contour shift\n\tand then applying Dominated Convergence to \n\t$\\int_{\\mathbb R} \\inn{x}^{-s} \\, G_h (x) \\, \\mathrm{d}x$.\n\t% \\begin{align}\n\t% \t\\Res_{\\xi=\\lambda}\n\t% \t\t\t\\left[\n\t% \t\t\t\te^{ix\\xi}\n\t% \t\t\t\t\\left(\n\t% \t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t\t\\right)_h\n\t% \t\t\t\\right]\n\t% \t\t= - \\frac{1}{h}\\frac{e^{2\\lambda}-1}{e^{2\\lambda}-2\\lambda-1}\n\t% \t\t\t\\, e^{ix\\lambda}\n\t% \\end{align}\n\t% \\begin{align}\n\t% \t\\Res_{\\xi=\\lambda+h}\n\t% \t\t\t\\left[\n\t% \t\t\t\te^{ix\\xi}\n\t% \t\t\t\t\\left(\n\t% \t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t\t\\right)_h\n\t% \t\t\t\\right]\n\t% \t\t= \n\t% \t\t\t \\frac{1}{h}\n\t% \t\t\t \\frac{e^{2(\\lambda+h)}-1}{e^{2(\\lambda+h)}-2(\\lambda+h)-1}\n\t% \t\t\t\\, e^{ix(\\lambda+h)}\n\t% \\end{align}\n\t% \\begin{align}\n\t% \t&\\lim_{\\lambda\\to0}\n\t% \t\t\\left\\{\n\t% \t\t\t\\sum_{k \\in \\{0, \\,\\lambda,\\, \\lambda+h\\}}\n\t% \t\t\t\\Res_{\\xi=k}\n\t% \t\t\t\\left[\n\t% \t\t\t\te^{ix\\xi}\n\t% \t\t\t\t\\left(\n\t% \t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t\t\\right)_h\n\t% \t\t\t\\right]\n\t% \t\t\t% +\\Res_{\\xi=\\lambda}\n\t% \t\t\t% \\left[\n\t% \t\t\t% \te^{ix\\xi}\n\t% \t\t\t% \t\\left(\n\t% \t\t\t% \t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t% \t\\right)_h\n\t% \t\t\t% \\right]\n\t% \t\t\t% +\\Res_{\\xi=\\lambda+h}\n\t% \t\t\t% \\left[\n\t% \t\t\t% \te^{ix\\xi}\n\t% \t\t\t% \t\\left(\n\t% \t\t\t% \t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t% \t\\right)_h\n\t% \t\t\t% \\right]\n\t% \t\t\\right\\} \\\\\n\t% \t&\\qquad =\n\t% \t\t\\frac{1}{h}\n\t% \t\t\\left[\n\t% \t\t\t\\frac{e^{2h}-1}{e^{2h}-2h-1}\\,e^{ihx}\n\t% \t\t\t-\n\t% \t\t\t\\frac{2\\big(e^{2h}(h+1)-1\\big)}\n\t% \t\t\t\t{e^{2h}(6h-3)+3}\n\t% \t\t\t- \\frac{1}{3}\n\t% \t\t\t- ix\n\t% \t\t\\right]\n\t% \t\t\\nonumber\n\t% \\end{align}\n\t% \\begin{align}\n\t% \t&\\lim_{h\\to0}\n\t% \t\t\\lim_{\\lambda\\to0}\n\t% \t\t\t\\left\\{\n\t% \t\t\t\t\\sum_{k \\in \\{0, \\,\\lambda,\\, \\lambda+h\\}}\n\t% \t\t\t\t\\Res_{\\xi=k}\n\t% \t\t\t\t\\left[\n\t% \t\t\t\t\te^{ix\\xi}\n\t% \t\t\t\t\t\\left(\n\t% \t\t\t\t\t\t\\frac{1}{p_\\lambda(\\xi)}\n\t% \t\t\t\t\t\\right)_h\n\t% \t\t\t\t\\right]\n\t% \t\t\t\\right\\} \\\\\n\t% \t&\\qquad=\n\t% \t\t- \\frac{1}{2} x^2 + i \\frac{1}{3} x \\nonumber\n\t% \\end{align}\n\\end{proof}\n\n\\end{document}", "meta": {"hexsha": "7fc635ffae9bbe02a0b1d140e235cfef05a7c58f", "size": 14943, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter2-GFmapping/2.2-diff.tex", "max_stars_repo_name": "ADGC/ilw-dsm-dissertation", "max_stars_repo_head_hexsha": "de0f27b6389ee55c24d155ff482743acbe6a35a1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter2-GFmapping/2.2-diff.tex", "max_issues_repo_name": "ADGC/ilw-dsm-dissertation", "max_issues_repo_head_hexsha": "de0f27b6389ee55c24d155ff482743acbe6a35a1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter2-GFmapping/2.2-diff.tex", 
"max_forks_repo_name": "ADGC/ilw-dsm-dissertation", "max_forks_repo_head_hexsha": "de0f27b6389ee55c24d155ff482743acbe6a35a1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4087452471, "max_line_length": 88, "alphanum_fraction": 0.5630730108, "num_tokens": 6271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.7025300636233416, "lm_q1q2_score": 0.5535453460088468}}
{"text": "\\newcommand{\\definedas}{\\stackrel{\\triangle}{=}}\n\\newcommand{\\spc}{\\;\\;\\;\\;\\;}\n\\newcommand{\\mspc}{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}\n\\newcommand{\\lspc}{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}\n\n\\documentstyle[fleqn,11pt]{article}\n\n\\renewcommand{\\baselinestretch}{1.1}\n\\newcommand{\\eg}[1]{\\begin{quote}{\\tt #1} \\end{quote}}\n\\setlength{\\textwidth}{15cm}\n\\addtolength{\\oddsidemargin}{-1cm}\n\n\\title{                    CHANGEVR,  \\\\\n                         A REDUCE Facility   \\\\\n                              to   \\\\\n                    Perform Change of Independent Variable(s)\\\\\n                               in \\\\\n                    Differential Equations\\\\[2cm]\n        }\n\\author{\n          G. \\\"{U}\\c{c}oluk\n\t     \\thanks{Email address: UCOLUK@TRMETU.BITNET}\n                     \\\\\n                               Department of Physics \\\\\n                               Middle East Technical University \\\\\n                               Ankara, Turkey\n       }\n\\date{October 1989}\n\n\\begin{document}\n\\maketitle\n\\newpage\n\n\\section{Introduction}\nThe mathematics behind the change of independent variable(s) in differential\nequations is quite straightforward. It is basically the application of the\nchain rule. If the dependent variable of the differential equation is $F$,\nthe independent variables are $x_{i}$ and the new independent variables are\n$u_{i}$ (where ${\\scriptstyle i=1\\ldots n}$) then the first derivatives are:\n\\[\n    \\frac{\\partial F}{\\partial x_{i}} = \\frac{\\partial F}{\\partial u_{j}}\n                                        \\frac{\\partial u_{j}}{\\partial x_{i}}\n\\]\nWe assumed Einstein's summation convention. Here the problem is to\ncalculate the $\\partial u_{j}/\\partial x_{i}$ terms if the change of variables\nis given by\n\\[\n    x_{i} = f_{i}(u_{1},\\ldots,u_{n})\n\\]\nThe first thought might be solving the above given equations for $u_{j}$ and\nthen differentiating them with respect to $x_{i}$, then again making use of the\nequations above, substituting new variables for the old ones in the calculated\nderivatives. This is not always a  preferable way to proceed. Mainly because\nthe functions $f_{i}$ may not always be easily invertible. Another approach\nthat makes use of the Jacobian is better. Consider the above given equations\nwhich relate the old variables to the new ones. Let us differentiate them:\n\\begin{eqnarray*}\n  \\frac{\\partial x_{j}}{\\partial x_{i}} & = &\n        \\frac{\\partial f_{j}}{\\partial x_{i}}   \\\\\n  \\delta_{ij} & = &\n        \\frac{\\partial f_{j}}{\\partial u_{k}}\n        \\frac{\\partial u_{k}}{\\partial x_{i}}\n\\end{eqnarray*}\nThe first derivative is nothing but the $(j,k)$ th entry of the Jacobian matrix.\n\nSo if we speak in matrix language\n\\[ {\\bf 1 = J \\cdot D} \\]\nwhere we defined the Jacobian\n\\[ {\\bf J}_{ij} \\definedas  \\frac{\\partial f_{i}}{\\partial u_{j}} \\]\nand the matrix of the derivatives we wanted to obtain as\n\\[ {\\bf D}_{ij} \\definedas  \\frac{\\partial u_{i}}{\\partial x_{j}}. 
\\]\nIf the Jacobian has a non-vanishing determinant then it is invertible and\nwe are able to write from the matrix equation above:\n\\[ {\\bf  D = J^{-1}} \\]\nso finally we have what we want\n\\[\n   \\frac{\\partial u_{i}}{\\partial x_{j}} = \\left[{\\bf J^{-1}}\\right]_{ij}\n\\]\n\nThe higher derivatives are obtained by the successive application of the chain\nrule and using the definitions of the old variables in terms of the new ones. It\n\ncan be easily verified that the only derivatives that are needed to be\ncalculated are the first order ones which are obtained above.\n\n\\section{How to Use CHANGEVR}\n{\\bf This facility requires the matrix package to be present in the session}.\nSo if it is not autoloaded in your REDUCE implementation, say\n\\eg{LOAD\\_PACKAGE MATRIX;}\nin the REDUCE environment. Then load {\\tt CHANGEVR} by the statement:\n\\eg{LOAD\\_PACKAGE CHANGEVR\\$}\nNow the REDUCE function {\\tt CHANGEVAR} is ready to use. {\\bf Note:  The\npackage is named CHANGEVR, but the function has the name CHANGEVAR}.  The\nfunction {\\tt CHANGEVAR} has (at least) four different arguments.  Here we\ngive a list them:\n\\begin{itemize}\n\\item {\\bf FIRST ARGUMENT} \\\\\n     Is a list of the dependent variables of the differential equation.\n     They shall be enclosed in a pair of curly braces and separated by commas.\n     If there is only one dependent variable there is no need for the curly\n     braces.\n\\item {\\bf SECOND ARGUMENT}  \\\\\n     Is a list of the {\\bf new} independent variables. Similar to what is said\n     for the first argument, these shall also be separated by commas,\n     enclosed in curly braces and the curly braces can be omitted if there is\n     only one new variable.\n\\item {\\bf THIRD ARGUMENT}  \\\\\n     Is a list of equations separated by commas, where each of the equation\n     is of the form\n      \\eg{{\\em old variable} = {\\em a function in new variables}}\n     The left hand side cannot be a non-kernel structure. In this argument\n     the functions which give the old variables in terms of the new ones are\n     introduced. It is possible to omit totally the curly braces which enclose\n     the list. {\\bf Please note that only for this argument it is allowed to\n     omit the curly braces even if the list has \\underline{more than one}\n     items}.\n\\item {\\bf LAST ARGUMENT}  \\\\\n     Is a list of algebraic expressions which evaluates to  differential\n     equations, separated by commas, enclosed in curly braces.\n     So, variables in which differential equations are already stored may be\n     used freely. Again it is possible to omit the curly braces if there is\n     only {\\bf one} differential equation.\n\\end{itemize}\n\nIf the last argument is a list then the result of {\\tt CHANGEVAR} is also a\nlist.\n\nIt is possible to display the entries of the inverse Jacobian, explained\nin the introduction.  To do so, turn {\\tt ON} the flag {DISPJACOBIAN} by a\nstatement: \\eg{ON DISPJACOBIAN;}\n\n\\section{AN EXAMPLE\\ldots\\ldots The 2-dim. 
Laplace Equation}\nThe 2-dimensional Laplace equation in cartesian coordinates is:\n\\[\n   \\frac{\\partial^{2} u}{\\partial x^{2}} +\n   \\frac{\\partial^{2} u}{\\partial y^{2}} = 0\n\\]\nNow assume we want to obtain the polar coordinate form of Laplace equation.\nThe change of variables is:\n\\[\n   x = r \\cos \\theta, \\mspc  y = r \\sin \\theta\n\\]\nThe solution using {\\tt CHANGEVAR}  (of course after it is properly loaded)\nis as follows\n\\eg{CHANGEVAR(\\{u\\},\\{r,theta\\},\\{x=r*cos theta,y=r*sin theta\\}, \\\\\n    \\hspace*{2cm}     \\{df(u(x,y),x,2)+df(u(x,y),y,2)\\} )}\nHere we could omit the curly braces in the first and last arguments (because\nthose lists have only one member) and the curly braces in the third argument\n(because they are optional), but you cannot leave off the curly braces in the\nsecond argument. So one could equivalently write\n\\eg{CHANGEVAR(u,\\{r,theta\\},x=r*cos theta,y=r*sin theta,        \\\\\n    \\hspace*{2cm}     df(u(x,y),x,2)+df(u(x,y),y,2) )}\n\nIf you have tried out the above example, you will notice that the denominator\ncontains a $\\cos^{2} \\theta + \\sin^{2} \\theta$ which is actually equal to $1$.\nThis has of course nothing to do with the {\\tt CHANGEVAR} facility introduced\nhere.  One has to be overcome these pattern matching problems by the\nconventional methods REDUCE provides (a {\\tt LET} statement, for example,\nwill fix it).\n\nSecondly you will notice that your {\\tt u(x,y)} operator has changed to\n{\\tt u(r,theta)} in the result. Nothing magical  about this. That is just what\nwe do with pencil and paper. {\\tt u(r,theta)} represents the  the transformed\ndependent variable.\n\n\\section{ANOTHER EXAMPLE\\ldots\\ldots An Euler Equation}\nConsider a differential equation which is of Euler type, for instance:\n\\[\n   x^{3}y''' - 3 x^{2}y'' + 6 x y' - 6 y = 0\n\\]\nWhere prime denotes differentiation with respect to $x$. As is well known,\nEuler type of equations are solved by a change of variable:\n\\[\n   x = e^{u}\n\\]\nSo our  {\\tt CHANGEVAR} call reads as follows:\n\\eg{CHANGEVAR(y, u, x=e**u, x**3*df(y(x),x,3)-   \\\\\n    \\hspace*{2cm}   3*x**2*df(y(x),x,2)+6*x*df(y(x),x)-6*y(x))}\n\n\\end{document}\n\n\n\n", "meta": {"hexsha": "1eb6ca66c2a5576da251ba503d238436a0f26309", "size": 7971, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "packages/misc/changevr.tex", "max_stars_repo_name": "arthurcnorman/general", "max_stars_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "packages/misc/changevr.tex", "max_issues_repo_name": "arthurcnorman/general", "max_issues_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "packages/misc/changevr.tex", "max_forks_repo_name": "arthurcnorman/general", "max_forks_repo_head_hexsha": "5e8fef0cc7999fa8ab75d8fdf79ad5488047282b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.3206521739, "max_line_length": 80, "alphanum_fraction": 0.6779575963, "num_tokens": 2152, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7879311906630568, "lm_q2_score": 0.7025300449389326, "lm_q1q2_score": 0.553545334785304}}
{"text": "\\input{docs/preamble}\n\\title{Photon Statistics}\n\\author{Max Bigras and David Frawley}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nWe explored the statistics for photons arriving from a constant intensity and a randomly scattered light source. We measured the number of photons arriving per a unit time. By constructing probability distributions and calculating $\\chi^{2}$ values we found the distribution for constant intensity light to agree with a Poisson distribution and the distribution for randomly scattered light to agree with a Bose-Einstein distribution.\n\\end{abstract}\n\n\\section{Introduction}\nWe used a photomultiplier tube (PMT) and a photon counter to directly count the number of photons arriving per a unit time from two different light sources. By overlaying our experimental distribution over a theoretical distribution based on statistics we illustrated the probabilistic nature of the photodetection process itself \\cite{koc}.\n\n\n\n\\section{Theory}\nThe probability for ejection of an electron from the PMT photocathode depends only on intensity light \\cite{manual}. We are interested in the probability for obtaining $n$ counts in some large time interval $T$, for a constant intensity light source and a randomly scattered light source. \n\nFor a constant intensity light source and from statistics the probability of detecting $n$ photons in time $T$ is\n\n\\begin{equation}\nP(n,T) = \\frac{n_{\\mathrm{av}}^{n}}{n!}e^{-n_{\\mathrm{av}}}\n\\label{poisson}\n\\end{equation}\nwhere $n_{\\mathrm{av}}$ is the average number of photons counted in time $T$.\n\nBut this is the Poisson distribution. So we expect a constant intensity light source to cause electrons to be ejected from the cathod of the PMT with a Poisson distribution.\n\nWhat if the intensity of light varies in space randomly? This is the case for randomly scattered light. If one can determine the intensity of the light at any spatial location,  and perform a weighted average then the probability for detecting $n$ photons in time $T$ is\n\n\\begin{equation}\nP(n,T) = \\frac{n_{\\mathrm{av}}^{n}}{(n_{\\mathrm{av}}+1)^{n+1}}\n\\label{bose}\n\\end{equation}\n\nBut this is the Bose-Einstein distribution. So we expect a randomly scattered light source to cause electrons to be ejected from the cathod of the PMT with a Bose-Einstein distribution.\n\n\\section{Apparatus}\nOur apparatus is shown in Figure \\ref{apparatus}. We used photons produced by an HeNe laser, which emits around $10^{18}$ photons per a second. We are interested in counting a few thousand photons per a second, so we are required to reduce the intensity of the laser. We attenuate the laser in three ways, first we polarize the laser beam, then we spread it out with a short focal length lens and finally we pass the light through two pinholes ($<$100 $\\mu$m diameter). Any surviving photons are detected by the the PMT and finally counted by the photon counter. The photon counter detects pulses above a discriminator level and counts the number of pulses per a unit time.\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.5\\textwidth]{figs/apparatus}\n  \\caption{Diagram for apparatus}\n  \\label{apparatus}\n\\end{figure}\n\nIt's important to count pulses coming from photons and not from noise in the PMT. We measured how the count rate varies with increasing discriminator level. 
Initially there is a fast change as the noise gets filtered out, shown in Figure \\ref{dlevel}, then the curve flattens out and again dips down as we began to also filter out the pulses from the photons. Because we're interested in detecting photons away from the noise we chose our discriminator level to be in the flat region.\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/dlevel}\n  \\caption{Finding the optimum discriminator level}\n  \\label{dlevel}\n\\end{figure}\n\n\\section{Analysis}\nFigures \\ref{ci1000}, \\ref{ci3000} and \\ref{ci10000} show our probability distributions for constant intensity light. As shown by the $\\chi^{2}$ values and visually our experimental distributions agree with the theoretical Poisson distribution.\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/ci1000}\n  \\caption{Theoretical and experimental Poisson distribution for constant intensity light with an average photon count = 1.09 photons/ms  and $\\chi^{2}$ = 0.254}\n  \\label{ci1000}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/ci3000}\n  \\caption{Theoretical and experimental Poisson distribution for constant intensity light with an average photon count = 3.004 photons/ms and $\\chi^{2}$ = 0.933}\n  \\label{ci3000}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/ci10000}\n  \\caption{Theoretical and experimental Poisson distribution for constant intensity light with average photon count = 9.987 photons/ms and $\\chi^{2}$ = 3.487 and $\\chi^{2}$ = 0.843}\n  \\label{ci10000}\n\\end{figure}\n\nFigures \\ref{pt1000}, \\ref{pt3000} and \\ref{pt10k} show our probability distributions for randomly scattered light. As shown by the $\\chi^{2}$ values and visually our experimental distributions agree with the theoretical Bose-Einstein distribution. One observation that can be made is the probability where 0 events were observed is always lower than predicated. Further, for small counts, just away from 0, the probability is always higher than predicated. We suspect that our resolution for detecting dark may not be fine enough, in which case dark points would be counted as light. Also the HeNe laser that we used is known to emit some blue light in addition to the constant intensity light that we expect, this would also cause dark points to be counted as light. In either or both cases we'd expect to see a sagging at the origin and an uplifting just away from the origin, the exact behavior that our data is exhibiting.\n\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/pt1000}\n  \\caption{Theoretical and experimental Bose-Einstein distribution for randomly scattered light with average photon count = 1.045 and $\\chi^{2}$ = 7.864}\n  \\label{pt1000}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/pt3000}\n  \\caption{Theoretical and experimental Bose-Einstein distribution for randomly scattered light with average photon count = 3.743 and $\\chi^{2}$ = 8.078}\n  \\label{pt3000}\n\\end{figure}\n\n\\begin{figure}[H]\n  \\includegraphics[totalheight=0.6\\textwidth]{figs/pt10k}\n  \\caption{Theoretical and experimental Bose-Einstein distribution for randomly scattered light with average photon count = 10.443 and $\\chi^{2}$ = 3.487}\n  \\label{pt10k}\n\\end{figure}\n\n\\section{Conclusion}\nWe explored the statistics for photons arriving from a constant intensity and a randomly scattered light source. 
By constructing probability distributions and calculating $\\chi^{2}$ values we found the distribution for constant intensity light to agree with a Poisson distribution and the distribution for randomly scattered light to agree with a Bose-Einstein distribution. We also provided explanation for the slight sagging and uplifting behavior for small counts.\n\n\n\n\\begin{thebibliography}{99}\n\n\\bibitem{manual} Physics Dept., ``Photon Counting and the Statistics of Light'', Quantum Lab, California Polytechnic University, (2013).\n\n\\bibitem{koc} P. Koczyk, P. Wiewior, and C. Radzewicz, ``Photon counting statistics-Undergraduate experiment'', Institute of Experimental Physics, Warsaw University, Poland, 1995.\n\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "2ac01d95c0fe0158ab6b733f1f9937178c5c586e", "size": 7481, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "photon_counting_report/lab_report.tex", "max_stars_repo_name": "mbigras/physics_projects", "max_stars_repo_head_hexsha": "7dd29707b3ac8adea7ed8b63786245e34097345e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-12-05T23:34:30.000Z", "max_stars_repo_stars_event_max_datetime": "2016-12-11T19:45:07.000Z", "max_issues_repo_path": "photon_counting_report/lab_report.tex", "max_issues_repo_name": "mbigras/physics_projects", "max_issues_repo_head_hexsha": "7dd29707b3ac8adea7ed8b63786245e34097345e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "photon_counting_report/lab_report.tex", "max_forks_repo_name": "mbigras/physics_projects", "max_forks_repo_head_hexsha": "7dd29707b3ac8adea7ed8b63786245e34097345e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.6330275229, "max_line_length": 927, "alphanum_fraction": 0.7835850822, "num_tokens": 1813, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6261241772283034, "lm_q2_score": 0.8840392756357326, "lm_q1q2_score": 0.5535183640949284}}
{"text": "\\documentclass{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{indentfirst}\n\\title{CS131 B1: HW 5}\n\\author{Duy Nguyen}\n\\date{4 November 2016}\n\n\\begin{document}\n\n\\maketitle\n\n\\subsection*{1. Prove that $1^2 - 2^2 + 3^2 - ... + (-1)^{n-1} n^2 = (-1)^{n-1} n(n+1)/2$, whenever $n$ is a positive integer.}\n\n\\textbf{Base case:}\n$$1^2 = (-1)^0 1(1+1)/2$$\n$$1^2 = \\frac{2}{2} = 1$$\n\n\\textbf{Induction hypothesis:} Assume that\n$$1^2 - 2^2 + 3^2 - ... + (-1)^{k-1} k^2 = (-1)^{k-1} \\frac{k(k+1)}{2}$$\n\n\\textbf{Induction goal:} \n$$1^2 - 2^2 + 3^2 - ... + (-1)^{k-1} k^2 + (-1)^{k} (k+1)^2 = (-1)^{k} \\frac{(k+1)(k+2)}{2}$$\n\n\\textbf{Proof:}\n$$1^2 - 2^2 + 3^2 - ... + (-1)^{k-1} k^2 + (-1)^{k} (k+1)^2 = (-1)^{k-1} \\frac{k(k+1)}{2} + (-1)^{k} (k+1)^2$$\nWe look at the right side of the equation:\n$$ = (-1)^{k-1} \\frac{k(k+1)}{2} + (-1)^{k} (k+1)^2$$\n$$ = (-1)^{k} (k+1) ((-1)^{-1}\\frac{k}{2} + (k+1))$$\n$$ = (-1)^{k} (k+1) (\\frac{-k}{2} + \\frac{2k}{2} + \\frac{2}{2})$$\n$$ = (-1)^{k} (k+1) \\frac{-k + 2k +2}{2}$$\n$$ = (-1)^{k} \\frac{(k+1) (k+2)}{2}$$\n\nWe have proved the hypothesis through induction.\n\n\\subsection*{2. Prove that $n!<n^n$, where $n$ is an integer greater than 1.}\n\n\\textbf{Base case:}\nPick the next integer greater than 1. With $n=2$, we have $2<2^2 = 4$ so $2<4$.\n\n\\textbf{Induction hypothesis:} Assume that \n$$k! < k^k$$\n\n\\textbf{Induction goal:}\n$$(k+1)! < (k+1)^{k+1}$$\n\n\\textbf{Proof:}\n$$(k+1)!< (k+1)^{k+1}$$\n$$(k+1)*k!<(k+1)*(k+1)^k$$\nCancel $k+1$ from each side of the inequality, we have\n$$k! < (k+1)^k$$\nCompare this to what we have assumed ($k! < k^k$) we note that this inequality is true, since $k$ is always positive and thus $k<k+1$ so $k^k<(k+1)^k$ thus $k! < (k+1)^k$ is true.\n\n\\subsection*{3. Prove that 3 divides $n^3 +2n$ whenever n is a positive integer.}\n\n\\textbf{Base case:} \n$$1^3 + 2\\cdot 3 = \\frac{6}{3} = 2$$\n\n\\textbf{Inductive hypothesis:} Assume that \n$k^3 + 2n$ is divisible by $3$.\n\n\\textbf{Inductive goal:} $(k+1)^3 + 2(k+1)$ is divisible by $3$.\n\n\\textbf{Proof:}\nWe have \n$$(k+1)^3 + 2(k+1)$$\n$$k^3 + 3k^2 + 3k + 1 + 2k + 2$$\n$$(k^3 + 2k) + 3(k^2+k +1)$$\n\nNotice that $k^3 + 2k$ is divisible by $3$ as we have assumed and $3(k^2+k +1)$ is also divisible by $3$. Thus the hypothesis is proven through induction.\n\n\\subsection*{4. Show that $n$ cents payment for $n \\geq 8$ can be formed using just 3-cents and 5-cents stamps.}\n\n\\textbf{Base case:} We notes the result of $P(n)$ for $n = 8, 9, 10$\n\n$$8= 5+3$$\n$$9 = 3 + 3+ 3$$\n$$10=5+5$$\n\n\\textbf{Inductive hypothesis:} We note our result is true $\\forall k: 8\\leq k\\leq n$ where $n = 10$. \n\n\\textbf{Inductive goal:} $\\forall n >  10$ our result is true.\n\n\\textbf{Proof:}\nWe note that for $n>10$, $P(n)$ is just either $P(8), P(9),$ or $P(10)$ adding with the appropriate amount of 3-stamps. In another word, for $P(n)$ for $n>10$, we can write $n$ as $x + 3^k$, where $x = 8,9, $or$ 10$ and $k$ is an positive integer $>0$. 
In this way, since $P(8), P(9),$ and $P(10)$ are true, we can reduce any $P(n)$ back to any of these three cases.\n\nThus the hypothesis is true through induction.\n\n\\end{document}\n", "meta": {"hexsha": "6c376a21aef2a6665e18bbc9329f32b5595fc800", "size": 3086, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "unnatural_rubber/hw/CS131/hw5.tex", "max_stars_repo_name": "zuik/stuff", "max_stars_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "unnatural_rubber/hw/CS131/hw5.tex", "max_issues_repo_name": "zuik/stuff", "max_issues_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "unnatural_rubber/hw/CS131/hw5.tex", "max_forks_repo_name": "zuik/stuff", "max_forks_repo_head_hexsha": "4bae095f8a857c884b409356a61f56a49b768611", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.6741573034, "max_line_length": 366, "alphanum_fraction": 0.569669475, "num_tokens": 1376, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6261241632752915, "lm_q2_score": 0.8840392939666335, "lm_q1q2_score": 0.5535183632373378}}
{"text": "\\chapter{Conclusion and Future Work}\n\\label{chap:conclusion-future-work}\n\nIn this dissertation,\nwe contributed to the research needs highlighted in~\\cref{chap:introduction}.\n\nThe applicability of SSAT to the analysis of VLSI systems was examined.\nWe formulated a framework for the property evaluation of probabilistic design.\nThe average-case and worst-case analyses are encoded\nas random-exist and exist-random quantified SSAT formulas, respectively.\n\nMotivated by the emerging VLSI applications,\nwe further devised novel algorithms for random-exist and exist-random quantified SSAT formulas.\nThe proposed algorithms leverage the success from SAT/QBF-solving and model-counting communities\nand advance the state-of-the-art of SSAT solving beyond the conventional DPLL-based search.\nFor random-exist quantified SSAT formulas,\nwe used minterm generalization and weighted model counting as subroutines\nand employed SAT solvers and weighted model counters as plug-in engines.\nFor exist-random quantified SSAT formulas,\nwe proposed clause-containment learning,\nwhich was inspired by clause selection from QBF solving.\nUnder the framework of clause-containment learning,\nwe explored three heuristics to strengthen learnt clauses.\nMoreover, unlike previous exact approaches,\nthe proposed algorithms can solve approximate SSAT by deriving upper and lower bounds of satisfying probabilities.\nOur evaluation showed the benefits of the proposed solvers over a wide range of formula instances.\nFurthermore, our implementations and the benchmark suite of SSAT instances are open-source\nfor other researchers to base their work on top of our results.\n\nTo generalize SSAT beyond the PSPACE-complete complexity class for more complex problems,\nwe extended DQBF to its stochastic variant DSSAT and proved its NEXPTIME-completeness.\nCompared to the PSPACE-complete SSAT,\nDSSAT is more powerful to succinctly model NEXPTIME-complete decision problems with uncertainty.\nWe demonstrated the DSSAT formulation of the analysis to probabilistic/approximate partial design\nand gave a polynomial-time reduction from the NEXPTIME-complete Dec-POMDP to DSSAT.\n\nWe highlight several directions for future investigation.\nFirst, in order to improve the scalability of probabilistic property evaluation,\nwe are interested in approximate approaches based on simulation techniques.\nIn addition to the conventional Monte Carlo method,\ncircuit simulation based on symbolic sampling~\\cite{KravetsDAC19ECOSampling} may have much potential.\nAnother line of on-going work is to develop solvers for arbitrarily quantified SSAT and DSSAT formulas.\nRecently, clause selection has been adapted to solve random-exist quantified SSAT formulas\nand combined with the clause-containment learning in a recursive manner to solve general SSAT~\\cite{Chen2021}.\nIt is also extended to DQBF~\\cite{Tentrup2019},\nwhich might provide a promising framework for DSSAT solving.\nFrom the view of practical implementation,\nSSAT solvers will benefit from a tight integration with model-counting components.\nMore advanced data structures, e.g., d-DNNF~\\cite{Darwiche2001,Darwiche2002dDNNF}, could be also integrated.\nIn particular, \\textit{incremental} model counting might be a key step to boost the performance of SSAT solvers\nif the computational efforts among different counting queries can be effectively shared.\nMotivated by approximate model counting,\nwe also hope to pursue a similar formulation for SSAT solving.\nSpecifically, we envisage 
a unified SSAT framework that allows users to control the solution precision,\nin order to trade inexactness for better scalability.\nFinally, we would like to bring SSAT to different research fields, especially to machine-learning applications.\nThe SSAT solvers developed in this dissertation have been applied to verify the fairness of supervised-learning algorithms~\\cite{Ghosh2021}.\nAccording to the reported data~\\cite{Ghosh2021},\nusing the proposed SSAT solvers achieves several orders of magnitude improvement over the state-of-the-art tools.\nThis success shows the great potential and benefit of SSAT solving.\n\n\\iffalse\n    We have proposed a formal framework to probabilistic property\n    evaluation, under the worst-case and average-case scenarios.\n    Connections between probabilistic property evaluation and existing\n    solving techniques have been established. A novel BDD-based SSAT solver is proposed. A comparative experimental study has been performed to assess the capabilities\n    of different methods. Among the considered solutions, the proposed BDD-based SSAT solver, which makes use of circuit structures to construct BDD, currently tends to be the most robust in our experiments. Nevertheless, there are cases solvable only by approximate weighted model counting, but not by other methods. As the BDD-based method has its memory explosion problem, SSAT and model\n    counting approaches based on CNF formula might be more viable than the BDD one if their efficiency would be improved in the future. Our results may benefit the synthesis of probabilistic design, perhaps not only for silicon but also for genetic circuits, which are intrinsically stochastic. For future investigation, Monte-Carlo simulation may be incorporated to our proposed formal methods.\n\n    In this paper, we focused on solving random-exist quantified SSAT formulas.\n    In contrast to the previous DPLL-based algorithms, we proposed a novel algorithm using SAT solver and weighted model counter as underlying engines to improve computational efficiency.\n    Leveraging the great success of modern SAT solving techniques, the proposed algorithm outperforms the state-of-the-art method in the experiment on random $k$-CNF and strategic companies formulas.\n    Moreover, unlike previous exact SSAT methods, the proposed algorithm can be easily modified to solve approximate SSAT by deriving upper and lower bounds of satisfying probability.\n    We demonstrated the applicability of our SSAT solver to VLSI circuit analysis.\n    While the state-of-the-art solver fails to compute the exact satisfying probability, the proposed method succeeded in finding bounds of the formulas.\n    In several cases, the derived bounds are very close to, or even match the exact satisfying probability.\n    This approximation flexibility of our method can be helpful when SSAT is applied to real-world applications.\n    %This work might shed light on the application of solving approximate SSAT to real-world problems.\n    For future work, we intend to extend the proposed algorithm to arbitrary quantified SSAT formulas.\n\n    We developed a new approach to solving E-MAJSAT formulas. In contrast to prior methods based on DPLL search or knowledge compilation, we proposed the clause containment learning technique, inspired by clause selection recently developed in QBF evaluation, and design a novel algorithm to solve E-MAJSAT efficiently. 
Under the framework of clause containment learning, three enhancement techniques were proposed to improve the computational efficiency.\n    Experiment results show the benefit of our method.\n    %that our method achieves significant performance gains and memory savings over prior SSAT methods, and also provides useful lower bound information for cases where no information can be given by prior methods.\n    For future work, we intend to solve SSAT with general prefix structure.\n\n    In this paper, we extended DQBF to its stochastic variant DSSAT and proved its NEXPTIME-completeness.\n    Compared to the PSPACE-complete SSAT, DSSAT is more powerful to succinctly model NEXPTIME-complete decision problems with uncertainty.\n    The new formalism can be useful in applications such as artificial intelligence and system design.\n    Specifically, we demonstrated the DSSAT formulation of the analysis to probabilistic/approximate partial design, and gave a polynomial-time reduction from the NEXPTIME-complete Dec-POMDP to DSSAT.\n    We envisage the potential broad applications of DSSAT and plan solver development for future work.\n    %\\textcolor{blue}{\n    We note that recent developments of \\textit{clausal abstraction} for QBF~\\cite{JanotaM15,RabeT15} and DQBF~\\cite{Tentrup19} might provide a promising framework for DSSAT solving.\n    Clausal abstraction has been lifted to SSAT~\\cite{ChenHJ21}, and we are investigating its feasibility for DSSAT.\n    %}\n\\fi", "meta": {"hexsha": "b83c8130ccd1aec2a429a59bc4dd85bc5c0d2c5b", "size": 8373, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/conclusion-future-work.tex", "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_issues_repo_path": "paper/conclusion-future-work.tex", "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/conclusion-future-work.tex", "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.1368421053, "max_line_length": 455, "alphanum_fraction": 0.8198972889, "num_tokens": 1689, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8688267830311354, "lm_q2_score": 0.6370307944803832, "lm_q1q2_score": 0.5534694158601597}}
{"text": "\\chapter{Assignment: Overfitting}\n\\label{hw:overfitting}\n\n\\newthought{Overfitting is something we try to avoid at all times.} But overfitting comes in many shapes and sizes. For this exercise we will use a \\emph{blood-loneliness} data set with the File widget. This data set relates gene expressions in blood with a measure of loneliness obtained from a sample of elderly persons. Let's try to model loneliness with logistic regression and see how well the model performs.\n\n\\marginnote{To load the blood loneliness data set copy and paste the below URL to the URL field of the File widget.\n\\break\\break\n\\url{http://file.biolab.si/datasets/blood-loneliness-GDS3898.tab}}\n\n\\begin{figure}[h]\n  \\centering\n  \\includegraphics[scale=0.7]{overfitting1.png}%\n  \\caption{$\\;$}\n  \\label{fig:wf1}\n\\end{figure}\n\n\\begin{enumerate}\n    \\item Train the Logistic Regression model on the data and observe its performance. What is the result?\n    \\item We have many features in our data. What if we select only the most relevant ones, the genes that actually matter? Use Rank to select the top 20 best performing features.\n    \n    \\begin{figure}[h]\n        \\centering\n        \\includegraphics[scale=0.7]{overfitting2.png}%\n        \\caption{$\\;$}\n        \\label{fig:wf2}\n      \\end{figure}\n\n    \\item How are the results now? What happened? Is there a better way of performing feature selection?\n\\end{enumerate}\n", "meta": {"hexsha": "05b3380f8d98b0b1a032fe3b0ab2a7c157bd71ec", "size": 1394, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignments/030-overfitting/overfitting.tex", "max_stars_repo_name": "PrimozGodec/orange-lecture-notes", "max_stars_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-10-13T14:31:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:47:06.000Z", "max_issues_repo_path": "assignments/030-overfitting/overfitting.tex", "max_issues_repo_name": "PrimozGodec/orange-lecture-notes", "max_issues_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-02-26T13:33:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-25T19:15:34.000Z", "max_forks_repo_path": "assignments/030-overfitting/overfitting.tex", "max_forks_repo_name": "PrimozGodec/orange-lecture-notes", "max_forks_repo_head_hexsha": "5072afa3e29cec77e1a7f6c0d1fd044e737fe378", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-01-19T16:55:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-21T20:35:41.000Z", "avg_line_length": 46.4666666667, "max_line_length": 414, "alphanum_fraction": 0.7410329986, "num_tokens": 359, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702880639791, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5534266995207279}}
{"text": "\\chapter{State of the Art}\n\\label{chapter:sota}\n\nThis chapter provides the necessary background of machine learning and deep learning to familiarize the reader with the techniques used throughout this work, as well as introduce the state-of-the-art results in skin lesion classification.\n\n\\section{Supervised Learning}\n\nA supervised learning problem is a machine learning problem in which the data has the correct expected output for every input.\n\nMore formally, it is a problem in which there are $m$ labeled pairs $(x^{(i)}, y^{(i)})$, also denoted as samples, where $x^{(i)} \\in \\mathbb{R}^n$ (\\textit{i.e.}, the input vectors, commonly referred to as features) and $y^{(i)} \\in \\mathbb{R}^n$ (\\textit{i.e.}, the corresponding output vector, usually called the target or label) which have some relationship expressed by some function.\n\nThe goal of supervised learning is to find a function $h_{\\theta}(x)$ (sometimes called the hypothesis) that usefully approximates the true function in the underlying relationship between the input $x$ and associated output $y$, parameterized by $\\theta$.\n\nIn supervised learning, the data samples are commonly split into three separate sets with different intents and used exclusively for that purpose:\n\n\\begin{itemize}\n    \\item Training Set: samples used for actually fitting the model's parameters (\\textit{e.g.}, weights and biases of a neural network);\n    \\item Validation Set: samples used to select the best model by testing different sets of hyperparameters or architectures;\n    \\item Test Set: samples used to assess the final performance of the final model.\n\\end{itemize}\n\nIt is important to follow this taxonomy carefully and not use the different sets for the different tasks interchangeably. As \\citeauthor{crossvalidationbias} point out in their \\citeyear{crossvalidationbias} paper, if, for example, we use the test set to simultaneously tune hyperparameters and assess its generalization performance we risk optimistically biasing the model. As such, if we use any one set to tune hyperparameters, we must use a different set for evaluation to get an unbiased assessment of the model's generalization performance.\n\nHowever, this technique (typically referred to as the holdout method) has its downsides:\n\n\\begin{itemize}\n    \\item using an arbitrary fixed split of the dataset means that we are completely setting aside data for validation and testing which will not be used for fitting the model and vice versa, thus wasting data in some sense\n    \\item without averaging over multiple runs (\\textit{i.e.}, different splits) with different initial conditions, results may be biased and misleading\n\\end{itemize}\n\n$K$-fold cross-validation is a common cross-validation technique that randomly partitions the data into $K$ equally sized subsamples, of which a single subsample is used for testing and the remaining $K-1$ subsamples for fitting the model, a process which is repeated $K$ times to yield $K$ performance estimations that can be averaged over to produce one estimation that can be referred to. 
In this way, all samples are used for training and testing, which is an advantage over repeated random sub-sampling, which does not systematically use all samples for training and testing.\n\nNested cross-validation techniques are required when, in addition to estimating a model's generalization performance (\\textit{i.e.}, performance on the test set), it is also necessary to select the best model among many (\\textit{e.g.}, with different hyperparameters) \\cite{crossvalidationbias}. A truly nested variant will use an inner loop of $L$ folds to tune the hyperparameters and select the best model, and an outer loop of $K$ folds to evaluate the models selected by the inner loop. A simpler (albeit not really nested) scheme is one that is similar to the $K$-fold cross-validation we described above, except that, of the $K$ equally sized subsamples, we take one subsample for validation, another subsample for testing, and the remaining $K-2$ subsamples are used for training. However, in practice, nested cross-validation when selecting classifiers is overzealous and too expensive for most applications \\cite{nestedcvoverzealous}.\n\n\\section{Bias-Variance Tradeoff}\n\nThe goal of supervised learning is to find a function $\\hat{f}(x)$ that approximates the true function $f(x)$ in the underlying relationship between the data $x$ and associated labels $y$. For this approximation to be useful, it should simultaneously\n\n\\begin{itemize}\n    \\item accurately capture the training data;\n    \\item generalize to new data.\n\\end{itemize}\n\nIt turns out this is very difficult to do simultaneously. Decomposing the expected error on an unseen sample $x$ yields\n\n\\begin{equation}\n\\operatorname{E} = \\Big(\\operatorname{Bias}\\big[\\hat{f}(x)\\big]\\Big)^{2} + \\operatorname{Var}\\big[\\hat{f}(x)\\big] + \\sigma^{2}\n\\end{equation}\n\nwhere\n\n\\begin{itemize}\n    \\item the bias of the approximation is the error caused by the simplifying assumptions inherent to the approximation;\n    \\item the variance of the approximation is the error caused by how much the approximation $\\hat{f}(x)$ will try to fit the data $x$ exactly;\n    \\item the error $\\sigma^2$ is the variance of the noise within the data, which forms a lower bound on the expected error since all other equated terms are necessarily non-negative and the error $\\sigma^2$ is irreducible.\n\\end{itemize}\n\nThis formulation causes a tradeoff that models must make between bias and variance among the following possible scenarios:\n\n\\begin{itemize}\n    \\item high-variance low-bias models represent the training data well but risk overfitting to noise;\n    \\item low-variance high-bias models are simpler but risk underfitting the training data, failing to capture the underlying signal.\n\\end{itemize}\n\nAn underfitting (high bias) problem can be identified when the training error is high and the validation error is roughly equally high, which can be fixed by\n\n\\begin{itemize}\n    \\item training with more features;\n    \\item decreasing regularization.\n\\end{itemize}\n\nOn the other hand, an overfitting (high variance) problem can be identified when the training error is low but the validation error is comparatively much higher, which can be fixed by\n\n\\begin{itemize}\n    \\item training with more data;\n    \\item training with fewer features;\n    \\item increasing regularization.\n\\end{itemize}\n\n\\section{Feature Normalization}\n\nSome 
machine learning algorithms rely on the distance between the features in a feature vector $x$ and assume the values are on the same scale. Min-max normalization rescales $x$ to $x'$ to be on the interval $[0, 1]$, \\textit{i.e.},\n\n\\begin{equation}\nx' = \\frac{x - \\min{(x)}}{\\max{(x)} - \\min{(x)}}\n\\end{equation}\n\nOther machine learning algorithms assume that the feature vector $x$ follows a Gaussian distribution (\\textit{i.e.}, zero mean and unit variance). Gaussian normalization is the process of rescaling $x$ (where $\\mu_x$ is the mean of each feature vector $x$ and $\\sigma_x$ its standard deviation) to $x'$ to follow a standard Gaussian distribution, \\textit{i.e.},\n\n\\begin{equation}\nx' = \\frac{x - \\mu_x}{\\sigma_x}\n\\end{equation}\n\nIn practice, gradient descent converges much faster with features that follow a normal distribution, as seen in the work of \\citeauthor{batchnormalization} on batch normalization \\cite{batchnormalization}.\n\nPerforming feature normalization on the entire dataset and only afterwards splitting it (into train, validation, and test sets) will leak information (\\textit{i.e.}, the mean and variance) about the validation and test sets into the train set itself, which will bias performance metrics on the validation and test sets. Therefore, it is crucial to\n\n\\begin{enumerate}\n    \\item Split the dataset;\n    \\item Perform feature normalization on the train set and save the mean $\\mu_{train}$ and standard deviation $\\sigma_{train}$;\n    \\item Perform feature normalization on the validation set using the mean $\\mu_{train}$ and standard deviation $\\sigma_{train}$;\n    \\item Perform feature normalization on the test set using the mean $\\mu_{train}$ and standard deviation $\\sigma_{train}$.\n\\end{enumerate}\n\n\n\\section{Binary Classification}\n\nIn general, let\n\n\\begin{itemize}\n    \\item $L \\colon \\mathbb{R}^n \\to \\mathbb{R}$ be a function, called the loss function, which quantifies the quality of a particular set of parameters $\\theta$ relative to a single data sample $(x^{(i)}, y^{(i)})$;\n    \\item $J \\colon \\mathbb{R}^n \\to \\mathbb{R}$ be a function, called the cost function, which quantifies the quality of a particular set of parameters $\\theta$ relative to all the $m$ samples in the training set.\n\n    \\begin{equation}\n    J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m} L(h_{\\theta}(x^{(i)}), y^{(i)})\n    \\end{equation}\n\\end{itemize}\n\nIn particular, the linear regression cost function can be easily understood as the (halved) mean of the squared differences between the predictions and the ground-truth labels:\n\n\\begin{equation}\nJ(\\theta) = \\frac{1}{2m} \\sum_{i=1}^{m} (h_{\\theta}(x^{(i)}) - y^{(i)})^2\n\\end{equation}\n\nIn a binary classification problem specifically, we have $y^{(i)} \\in \\{0, 1\\}$. 
Unfortunately, as it turns out, the linear regression cost function is unsuitable for binary classification because, combined with the classification hypothesis introduced below, it would be a non-convex function with many local minima, which in turn would make optimization very difficult.\n\nEssentially, binary classification requires a cost function that penalizes very confident wrong predictions while also rewarding confident right predictions, \\textit{i.e.},\n\n\\begin{itemize}\n    \\item if the prediction is $h_{\\theta}(x) = 0$ and the ground-truth label is $y = 0$, $J(\\theta)$ is low;\n    \\item if the prediction is $h_{\\theta}(x) = 0$ and the ground-truth label is $y = 1$, $J(\\theta)$ is high;\n    \\item if the prediction is $h_{\\theta}(x) = 1$ and the ground-truth label is $y = 0$, $J(\\theta)$ is high;\n    \\item if the prediction is $h_{\\theta}(x) = 1$ and the ground-truth label is $y = 1$, $J(\\theta)$ is low.\n\\end{itemize}\n\nWhereas in linear regression the hypothesis is of the form $h_{\\theta}(x) = \\theta^{\\top}x$, in binary classification (or logistic regression) the hypothesis is of the form $h_{\\theta}(x) = \\sigma(\\theta^{\\top}x)$ to squash predictions to the interval $[0, 1]$, hence the name logistic regression, because it uses the sigmoid-shaped logistic function $\\sigma(z) = \\frac{1}{1 + e^{-z}}$.\n\nA useful loss function (commonly called binary cross-entropy) for logistic regression that takes these requirements into consideration is\n\n\\begin{equation}\nL(h_{\\theta}(x^{(i)}), y^{(i)}) =\n\\begin{dcases}\n    -\\log(h_{\\theta}(x^{(i)})),& \\text{if } y^{(i)} = 1\\\\\n    -\\log(1 - h_{\\theta}(x^{(i)})),& \\text{if } y^{(i)} = 0\n\\end{dcases}\n\\end{equation}\n\nor, cleverly arranging things algebraically,\n\n\\begin{equation}\nJ(\\theta) = -\\frac{1}{m} \\sum_{i=1}^{m} ( y^{(i)}\\log(h_{\\theta}(x^{(i)})) + (1 - y^{(i)})\\log(1 - h_{\\theta}(x^{(i)})) )\n\\end{equation}\n\n\\subsection{Evaluation}\n\nTo evaluate a binary classifier means to compare the model's predicted conditions against the true conditions, where one must consider:\n\n\\begin{itemize}\n    \\item \\ac{TP}, cases correctly classified as positive, \\textit{i.e.}, $h(x^{(i)}) = 1$ matches $y^{(i)} = 1$;\n    \\item \\ac{TN}, cases correctly classified as negative, \\textit{i.e.}, $h(x^{(i)}) = 0$ matches $y^{(i)} = 0$;\n    \\item \\ac{FP}, cases incorrectly classified as positive, \\textit{i.e.}, $h(x^{(i)}) = 1$ but $y^{(i)} = 0$;\n    \\item \\ac{FN}, cases incorrectly classified as negative, \\textit{i.e.}, $h(x^{(i)}) = 0$ but $y^{(i)} = 1$.\n\\end{itemize}\n\nFrom these basic measurements one can derive useful metrics:\n\n\\begin{itemize}\n    \\item \\ac{TPR}, also known as sensitivity or recall, is the percentage of actual positives that are correctly identified as positive;\n        \\begin{equation}\n        TPR = \\frac{TP}{TP + FN}\n        \\end{equation}\n    \\item \\ac{TNR}, also known as specificity, is the percentage of actual negatives that are correctly identified as negative;\n        \\begin{equation}\n        TNR = \\frac{TN}{FP + TN}\n        \\end{equation}\n    \\item \\ac{PPV}, also known as precision,\n        \\begin{equation}\n        PPV = \\frac{TP}{TP + FP}\n        \\end{equation}\n    \\item \\ac{NPV}\n        \\begin{equation}\n        NPV = \\frac{TN}{TN + FN}\n        \\end{equation}\n    \\item \\ac{FPR} is the percentage of actual negatives that are incorrectly identified as positive;\n        \\begin{equation}\n        FPR = 1 - TNR\n        
\\end{equation}\n\\end{itemize}\n\nAs a summary statistic one can derive the accuracy $A$ as the percentage of total items classified correctly\n\n\\begin{equation}\nA = \\frac{TP + TN}{TP + FP + TN + FN}\n\\end{equation}\n\nHowever, accuracy is misleading in cases where there is a class imbalance, because a model can always predict the value of the majority class and still report a high accuracy value despite missing all predictions in the minority class. Instead, the F1 score should be preferred. The F1 score is the harmonic mean of precision and recall, given by\n\n\\begin{equation}\nF_1 = 2 \\times \\frac{PPV \\times TPR}{PPV + TPR}\n\\end{equation}\n\nLastly, \\ac{AUC} is the area under the \\ac{ROC} curve, \\textit{i.e.}, the area under the plot of \\ac{TPR} against \\ac{FPR} at different classification thresholds in the interval $[0, 1]$.\n\n\\subsection{Class Imbalance}\n\nIn binary classification problems, it is said that a dataset of $m$ samples has class imbalance when there is a majority class with $|S_{maj}|$ samples and a minority class with $|S_{min}|$ samples where $|S_{maj}| > |S_{min}|$, in other words, when there is a significant difference between the number of samples of one class compared to the other. It is very common to see datasets in which classes are highly imbalanced, simply because this reflects the real-world distribution (\\textit{e.g.}, there are more benign moles than actual melanoma cases). Many machine learning algorithms do not perform well under this imbalance. When a dataset is imbalanced, machine learning algorithms will typically over-classify the majority group due to its increased prior probability. Thus, the samples in the minority group are misclassified more often than those of the majority group \\cite{Johnson2019}.\n\n\\citeauthor{haibo2009} provide a comprehensive survey of class balancing algorithms \\cite{haibo2009}, of which oversampling of the minority class is the simplest and most natural. In practice, the minority class must be oversampled by a factor (\\textit{i.e.}, number of transformations applied to each original sample) of $\\frac{|S_{maj}|}{|S_{min}|}$. Sometimes, perhaps because of an overfitting problem, it is also necessary to augment the total number of samples $m$ to a new total number of samples $m'$, in which case\n\n\\begin{itemize}\n    \\item the minority class must be oversampled by augmenting its samples by a factor of $\\frac{\\frac{m'}{2}}{|S_{min}|}$;\n    \\item and the majority class must also be augmented by a factor of $\\frac{\\frac{m'}{2}}{|S_{maj}|}$.\n\\end{itemize}\n\n\\section{Optimization}\n\nGiven a cost function $J$ and initial parameters $\\theta$, the goal is to find the optimal $\\theta^*$ that minimizes the cost function, \\textit{i.e.}, the optimization objective is\n\n\\begin{equation}\n\\theta^{*} = \\argmin_{\\theta} J(\\theta)\n\\end{equation}\n\nIn the deep learning high-dimensional optimization landscape, the objective function is highly non-convex w.r.t. the parameters, so one cannot use any of the tools from the convex optimization literature. Using calculus to find the minimum analytically is only feasible for trivial, low-dimensional, convex functions.\n\nA naive first approach is to systematically and exhaustively enumerate and evaluate many candidate solutions while keeping track of the best one. 
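For a one-dimensional $\\theta$, a toy Python sketch of this exhaustive enumeration (the grid bounds and resolution are arbitrary assumptions):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef brute_force(J, low=-10.0, high=10.0, steps=10001):\n    # Evaluate J on a grid of candidate values and keep the best one\n    candidates = np.linspace(low, high, steps)\n    return min(candidates, key=J)\n\nbest = brute_force(lambda t: (t - 3.0) ** 2)  # finds theta near 3\n\\end{verbatim}\n\n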
This simple brute-force search solution, arguably the simplest metaheuristic, is infeasible in the context of deep neural networks due to the curse of dimensionality.\n\nRather than try all solutions exhaustively, another approach is to try random values of $\\theta$ in a loop and record the running best until some stopping criterion is met. Random local search is a slightly better strategy that starts with a random $\\theta$ and iteratively refines the best-found solution (hence the local in local search) until some stopping criterion is met.\n\nRandom local search forms a good basis for optimization, but there is no need to move randomly in search-space. The negative of the gradient of the cost function w.r.t. the parameters $\\theta$ over all $m$ training samples gives the direction of steepest descent (\\textit{i.e.}, the direction in which the cost function decreases fastest):\n\n\\begin{equation}\n\\nabla_{\\theta} J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m} \\nabla_{\\theta} L(h_{\\theta}(x^{(i)}), y^{(i)})\n\\end{equation}\n\nA parameter $\\eta \\in \\mathbb{R}^{+}$, called the learning rate, controls the size of the step to take in the direction of the negative gradient. The learning rate $\\eta$ must be set carefully to ensure that the cost function converges. A small enough value will ensure convergence, but requires a larger number of updates to get there, whereas bigger values will likely result in divergence. The learning rate can also be set dynamically to change during training. One such heuristic is to reduce it by a factor $f$ (\\textit{e.g.}, $f=0.1$) when the loss has not changed significantly within some threshold $\\epsilon$ (\\textit{e.g.}, $\\epsilon=0.0001$) for $p$ consecutive (patient) epochs (\\textit{e.g.}, $p=30$) \\cite{learningrateschedules}.\n\nThus emerges a very simple and natural update rule (called batch gradient descent) for the parameters:\n\n\\begin{equation}\n\\theta^{t+1} = \\theta^t - \\eta \\nabla_{\\theta} J(\\theta)\n\\end{equation}\n\nComputing the gradient $\\nabla J$ over all of the original $m$ training samples is computationally and memory intensive for very big $m$. 
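As an illustration, a minimal NumPy sketch of this batch update rule, where grad_J is a hypothetical function computing $\\nabla_{\\theta} J$ over all $m$ samples:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef batch_gradient_descent(theta, grad_J, eta=0.01, epochs=1000):\n    # Repeatedly step against the gradient of the cost function\n    for _ in range(epochs):\n        theta = theta - eta * grad_J(theta)  # theta := theta - eta * grad J\n    return theta\n\\end{verbatim}\n\n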
But, as it turns out, through an idea interchangeably referred to as mini-batch gradient descent or \\ac{SGD}, we can get a good estimate of the true gradient $\\nabla J$ by averaging over a smaller random subset of $m'$ samples, \\textit{i.e.},\n\n\\begin{equation}\n\\nabla J \\approx \\frac{\\sum_{i=1}^{m'} \\nabla J_i}{m'} \\approx \\frac{\\sum_{i=1}^m \\nabla J_{i}}{m}\n\\end{equation}\n\nA variation of \\ac{SGD} called \\ac{SGD} with momentum \\cite{ruder2016} keeps track of the past parameter updates in a vector $v$ (with $\\dim v = \\dim \\theta$) which initially, at iteration $t = 0$, is the zero vector\n\n\\begin{equation}\nv^0 = 0\n\\end{equation}\n\nand $v^{t+1}$ holds a fraction $\\gamma \\in [0,1]$ (usually $\\gamma = 0.9$) of the updates up until iteration $t$, \\textit{i.e.},\n\n\\begin{equation}\nv^{t+1} = -\\eta \\nabla_{\\theta} J(\\theta) + \\gamma v^{t}\n\\end{equation}\n\nand determines that the parameter update at (the next) iteration $t+1$ is a linear combination of the gradient and the previous parameter updates $v^{t}$:\n\n\\begin{align*}\n    \\theta^{t+1} &= \\theta^{t} + v^{t+1} \\\\\n                 &= \\theta^{t} - \\eta \\nabla_{\\theta} J(\\theta) + \\gamma v^{t}\n\\end{align*}\n\n\\ac{SGD} with momentum effectively helps accelerate \\ac{SGD} in the relevant direction and dampen oscillations in the parameter values \\cite{ruder2016}.\n\nMore recently, adaptive gradient descent methods have been further developed to find individual learning rates per parameter and use ideas similar to momentum, of which \\citeauthor{ruder2016} presents an extensive review \\cite{ruder2016}. Despite the popularity of these adaptive methods (\\textit{e.g.}, Adam), they sometimes offer only marginal value over the classic \\ac{SGD}. Wilson et al. \\cite{wilson2017} show that the solutions found by adaptive methods usually generalize worse when compared to \\ac{SGD}, which suggests that adaptive methods like Adam should not be used like a magical black box but cross-validated among other optimizers.\n\n\\section{Initialization}\n\nAn optimization algorithm like gradient descent is inherently iterative in its update rule, for which we need to initialize the parameter vector $\\theta$ before the first update. This initialization determines how quickly optimization converges or whether it converges at all, which is why it is so important.\n\nA reasonable first approach would be to initialize all values of $\\theta$ to zero. However, \\textit{e.g.}, in a neural network this does not have any symmetry-breaking properties, because if every weight is initialized to zero then every neuron will compute the same activation (assuming all neurons use the same activation function), thus the gradients computed by backpropagation will also be the same and result in the exact same updates during gradient descent. Therefore parameters must be chosen in such a way that they break this symmetry, which is what motivates the use of random initialization. 
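As a small illustration, a hedged NumPy sketch of breaking this symmetry by drawing a layer's weights at random (the layer sizes are arbitrary):\n\n\\begin{verbatim}\nimport numpy as np\n\nn_in, n_out = 128, 64\ntheta_zero = np.zeros((n_in, n_out))       # symmetric: all neurons identical\ntheta_rand = np.random.randn(n_in, n_out)  # N(0, 1) draws break the symmetry\n\\end{verbatim}\n\n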
However, simply drawing random parameters (\\textit{e.g.}, from a Gaussian distribution with mean 0 and standard deviation 1) might result in parameter values that are too small or too large and very wide in range, which originates a vanishing or exploding gradient problem where different layers effectively learn at different speeds.\n\nModern initialization techniques are heuristic-based and designed around the activation functions themselves to counter these problems by, essentially, guaranteeing the activations' mean to be $0$ and the variance to be the same across different layers. Under these conditions the backpropagated gradient will not be multiplied by very small or very big values (which would lead to vanishing or exploding gradient problems, respectively). Specifically,\n\n\\begin{itemize}\n    \\item for tanh activation functions, \\citeauthor{xavierinit} \\cite{xavierinit} recommend drawing parameters from either $\\mathcal{N}(0, \\frac{1}{n_{in}})$ or $\\mathcal{N}(0, \\frac{2}{n_{in}+n_{out}})$ where $n_{in}$ is the number of input neurons and $n_{out}$ the number of output neurons in that layer;\n    \\item for ReLU activation functions, He et al. \\cite{heinit} suggest drawing from $\\mathcal{N}(0, \\frac{2}{n_{in}})$.\n\\end{itemize}\n\nMore recently, initializing a network with parameters transferred from other networks (typically trained on natural image datasets whose lower-layers' parameters generalize well to other tasks) has emerged as a very useful and pragmatic initialization strategy which in practice works very well for most tasks \\cite{howtransferable}, giving birth to what is now known as transfer learning.\n\n\\section{Regularization}\n\nIn the optimization literature, regularization is classically seen as a modification $J_R(\\theta)$ to the original objective function $J(\\theta)$ that penalizes large parameter values $\\theta$ using some criterion $R(\\theta)$ such that\n\n\\begin{equation}\nJ_R(\\theta) = J(\\theta) + R(\\theta)\n\\end{equation}\n\nThis regularization penalty can be seen as a form of Occam's razor that prefers simpler functions. In a sense, this redefines the optimization goal to simultaneously\n\n\\begin{itemize}\n    \\item fit the training data well (\\textit{i.e.}, the $J(\\theta)$ term);\n    \\item keep the parameters small (\\textit{i.e.}, the $R(\\theta)$ term).\n\\end{itemize}\n\nIn the case of L1 regularization, also known as Lasso Regression,\n\n\\begin{equation}\nR(\\theta) = \\lambda \\sum_{j=1}^{n} |\\theta_j|\n\\end{equation}\n\nwhich in essence\n\n\\begin{itemize}\n    \\item penalizes the absolute value of parameters, which tends to shrink parameters to be close to zero;\n    \\item effectively does feature selection because some of the parameters can be zero;\n    \\item consequently produces sparse parameter vectors, which can be desirable;\n    \\item generally produces simpler and interpretable models.\n\\end{itemize}\n\nWhereas in the case of L2 regularization, also known as Ridge Regression,\n\n\\begin{equation}\nR(\\theta) = \\lambda \\sum_{j=1}^{n} \\theta_j^2\n\\end{equation}\n\nwhich basically\n\n\\begin{itemize}\n    \\item penalizes the sum of squared parameters, which tends to shrink parameters to be close to zero;\n    \\item does not do feature selection and does not produce sparse parameter vectors;\n    \\item generally produces more complex models.\n\\end{itemize}\n\nIn practice, the regularization strength is controlled via the $\\lambda \\in \\mathbb{R}^{+}$ parameter. 
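A minimal sketch of an L2-regularized cost in NumPy, where J_data is a hypothetical unregularized cost function:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef J_regularized(theta, J_data, lam):\n    # J_R(theta) = J(theta) + lambda * sum_j theta_j^2\n    return J_data(theta) + lam * np.sum(theta ** 2)\n\\end{verbatim}\n\n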
Put simply, in a high variance scenario (overfitting) one should increase $\\lambda$ and in a high bias situation (underfitting) decrease $\\lambda$.\n\n\\citeauthor{regularizationsurvey} present a much broader and more modern view of regularization in machine learning in their \\citeyear{regularizationsurvey} paper \\cite{regularizationsurvey}, where they define regularization more abstractly as any technique that allows the model to generalize better (\\textit{i.e.}, perform better on the test set). They go over implicit regularization via data augmentation, network architecture, and the optimization algorithm itself as well as various explicit regularization techniques that penalize large weights (\\textit{e.g.}, L1 and L2 regularization).\n\nMore recently, Dropout \\cite{dropout} is another regularization technique designed specifically to prevent neural networks from overfitting by randomly dropping neurons (and their connections) from the network during training which prevents complex co-adaptations on training data.\n\nIterative optimization algorithms like \\ac{SGD} work by improving the model's fit to the training set every epoch which, up to some point, also improves its generalization performance (\\textit{i.e.}, performance on data outside of the training set). However, past that point, continuing to improve the model's fit to the training set will negatively impact its generalization performance. Early stopping emerges as a form of regularization applicable to such optimization algorithms that stops training according to some stopping criterion \\cite{earlystopping} \\cite{deeplearning}. For example, most implementations\\footnote{\\url{https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping}} stop training after $p$ patient epochs of no significant improvement within a threshold $\\epsilon$ (\\textit{e.g.}, $\\epsilon = 0.0001$) of some monitored metric (\\textit{e.g.}, error on a validation set) and in the end either uses the weights from the epoch with the best value of the monitored metric (which requires extra memory to maintain the best weights) or the weights obtained at the last epoch of training after early stopping (which is faster since it just needs to seek the weights of the current model).\n\n\\section{Feedforward Neural Networks}\n\nFeedforward \\ac{ANN} define a family of parameterized non-linear functions which, according to the universal approximation theorem \\cite{uat}, can approximate any continuous function. A feedforward \\ac{ANN} with a single hidden layer of finite width is such a universal approximator, but may require a large number of neurons in the hidden layer (its width) \\cite{uat}. 
This has since motivated the construction of width-bounded deeper networks, sparking what is now known as deep learning.\n\n\\subsection{Neuron}\n\nIn general, a neuron is a function $\\hat{f} \\colon \\mathbb{R}^n \\to \\mathbb{R}$\n\n\\begin{itemize}\n    \\item accepting an input $x \\in \\mathbb{R}^{n+1}$, where an extra component $x_0 = 1$ is prepended by convention, enabling fast matrix-multiplication implementations of the forward pass;\n    \\item parameterized by a vector $\\theta \\in \\mathbb{R}^{n+1}$;\n    \\item characterized by a (usually non-linear) activation function $g \\colon \\mathbb{R} \\to \\mathbb{R}$:\n\\end{itemize}\n\n\\begin{equation}\n\\hat{f}(x) = g(\\theta_0 x_0 + \\sum_{i=1}^{n}{\\theta_i x_i}) = g(\\theta^{\\top} x)\n\\end{equation}\n\nWhen the activation function is\n\n\\begin{itemize}\n    \\item the Heaviside step function, the neuron is reduced to the Rosenblatt perceptron \\cite{perceptron}, which does not output a continuous value nor distinguish data that is not linearly separable;\n    \\item any linear function (\\textit{e.g.}, $g(x) = x$), the neuron is reduced to a linear regression model;\n    \\item the logistic function $g(x) = \\frac{1}{1 + e^{-x}}$, the neuron is reduced to a logistic regression model;\n    \\item the ReLU activation function $g(x) = \\max(0, x)$, a network of such neurons is able to overcome numerical computation issues (exploding and vanishing gradients, typically associated with activation functions like the logistic function), which, in practice, allows faster convergence \\cite{alexnet}.\n\\end{itemize}\n\n\\subsection{Fully-Connected Neural Networks}\n\nNeurons can be arbitrarily composed (as in function composition) in so-called hidden layers to form a neural network. For example, a neural network with two hidden layers (each with only one neuron) and an output layer of one neuron is parameterized by $\\theta^1_1$ in the first hidden layer (the first function composition), $\\theta^2_1$ in the second hidden layer (the second function composition), and $\\theta^3_1$ in the output layer (the third function composition), \\textit{i.e.}, the neural network computes\n\n\\begin{equation}\n\\hat{f_1}(x) = g({\\theta^3_1}^{\\top} g({\\theta^2_1}^{\\top} g({\\theta^1_1}^{\\top} x)))\n\\end{equation}\n\n\nAs another example, a neural network with 1 hidden layer of 4 neurons and an output layer of 1 neuron computes\n\n\\begin{equation}\n\\hat{f_2}(x) = g({\\theta^2_1}^{\\top} [g({\\theta^1_1}^{\\top} x), g({\\theta^1_2}^{\\top} x), g({\\theta^1_3}^{\\top} x), g({\\theta^1_4}^{\\top} x)]^{\\top})\n\\end{equation}\n\nwhere $\\theta^2_1 \\in \\mathbb{R}^{4}$ weighs the vector of the four hidden activations.\n\nIn a way, these neural networks can be seen as just one big function composition of linear functions activated by some non-linear function. 
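For instance, a minimal NumPy sketch of the forward pass of $\\hat{f_2}$ with the logistic activation (random weights purely for illustration, and ignoring the bias component for brevity):\n\n\\begin{verbatim}\nimport numpy as np\n\ng = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic activation\n\nx = np.random.randn(5)          # input vector (5 features)\ntheta1 = np.random.randn(5, 4)  # hidden layer: 4 neurons\ntheta2 = np.random.randn(4)     # output layer: 1 neuron\n\nhidden = g(x @ theta1)          # activations of the 4 hidden neurons\noutput = g(hidden @ theta2)     # scalar prediction in (0, 1)\n\\end{verbatim}\n\n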
However networks more complex than the trivial $\\hat{f_1}$ and $\\hat{f_2}$ are cumbersome to express algebraically, so it is more convenient to visualize them in terms of layers of parameters (figures \\ref{fig:fcnn1} and \\ref{fig:fcnn2}) which hides away the notational complexity of the algebraic representation of highly composite functions.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/fcnn1.png}\n    \\caption{Graphical representation of the $\\hat{f_1}$ neural network in vertical layers where the input layer is $x \\in \\mathbb{R}^{5}$ and other nodes represent a neuron in a given layer.}\n    \\label{fig:fcnn1}\n\\end{figure}\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/fcnn2.png}\n    \\caption{Graphical representation of the $\\hat{f_2}$ neural network in vertical layers where the input layer is $x \\in \\mathbb{R}^{5}$ and other nodes represent a neuron in a given layer.}\n    \\label{fig:fcnn2}\n\\end{figure}\n\nNetworks such as $\\hat{f_1}$ and $\\hat{f_2}$ are commonly called \\ac{MLP}, but strictly speaking an \\ac{MLP} (as the name suggests) is composed of perceptrons which modern neural networks certainly do not use. Hereinafter these networks will be referred to as \\ac{FCNN}. An \\ac{FCNN} composes neurons in one-dimensional layers (known as fully-connected layers, illustrated in figure \\ref{fig:1dlayers}) where each neuron is fully-connected to every other neuron in the previous layer, \\textit{i.e.}, the output of every neuron in a given layer is an input to every neuron in the next layer and vice versa.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/1dlayers.png}\n    \\caption{Composition of neurons in 1D fully-connected layers in a typical fully-connected neural network with 2 hidden layers \\cite{cs231n}.}\n    \\label{fig:1dlayers}\n\\end{figure}\n\n\\ac{FCNN} are relatively simple (when compared to other architectures that require setting lots of hyperparameters) and adequate for a variety of problems in various domains but, in computer vision problems where the input images are big, these networks quickly amount to a rather large number of parameters (which makes them particularly hard to train since this is a situation that is easily prone to overfitting). For example, images on the CIFAR-10 \\cite{cifar10} dataset are $32 \\times 32 \\times 3$ (\\textit{i.e.}, $n = 32 \\times 32 \\times 3 = 3072$ features) for which a vector of parameters $\\theta \\in \\mathbb{R}^{3072}$ is needed to parameterize a single neuron fully-connected to the input. It is easy to see that this is unsustainable when\n\n\\begin{itemize}\n    \\item the input dimensions are more typically on the order of $224 \\times 224 \\times 3$ or $n = 224 \\times 224 \\times 3 = 150528$ features which would require 150528 weights for a single neuron;\n    \\item fully-connected layers are often composed of hundreds or thousands of neurons, which would require anywhere between $10^7$ and $10^8$ (or more) weights for a single fully-connected layer of neurons;\n    \\item \\ac{FCNN} can be arbitrarily deep in the number of layers.\n\\end{itemize}\n\n\\subsection{Convolutional Neural Networks}\n\n\\ac{CNN} are another class of feedforward \\ac{ANN} (originally designed for computer vision problems) whose architecture takes advantage of the assumption that the input is an image. 
Unlike \\ac{FCNN} which compose neurons in one-dimensional layers, \\ac{CNN} compose neurons in three-dimensional layers (or volumes, as illustrated in figure \\ref{fig:3dvolumes}), namely convolutional layers and pooling layers. Nonetheless, most \\ac{CNN} architectures still use fully-connected layers (typically in the rightmost layers).\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/3dvolumes.png}\n    \\caption{Composition of neurons in 3D volumes in convolutional neural networks \\cite{cs231n}.}\n    \\label{fig:3dvolumes}\n\\end{figure}\n\nIn particular, the convolutional layer is the building block of \\ac{CNN} \\cite{cs231n} \\cite{deeplearning}. Essentially a convolutional layer accepts an input volume of size $W_1 \\times H_1 \\times D_1$ (which may eventually be zero-padded around the edges in a $P$-pixels wide border) and is parameterized by a set of $K$ learnable filters of size $F \\times F \\times D_1$ which in the forward pass slide (with some stride $S$) over the input volume (locally in the width and height, but fully in depth of the input volume) to compute dot products between the filter and the selected part of the input (just like a regular neuron), producing $K$ two-dimensional activation maps (one for each of the $K$ filters) stacked in depth to form the output volume.\n\nWith the assumption that the input to a \\ac{CNN} is a two-dimensional image, it stands to reason that most useful features are spatial (\\textit{e.g.}, edges, lines, shapes) and not tied to one particular location. As such, if computing some local feature (local in width and height) is useful at one spatial position of the input, it is also useful at any other spatial position. It turns out this can be taken advantage of by further relaxing the architecture and introducing parameter sharing, thus allowing neurons at the same depth slice of a convolutional layer to share the same weights and biases, which\n\n\\begin{itemize}\n    \\item significantly reduces the number of free parameters to only $F F D_1$ weights and $1$ bias per filter, or a total of $(F F D_1 + 1) K$ parameters;\n    \\item reduces the memory footprint of keeping said parameters;\n    \\item contributes to the translation equivariance of the \\ac{CNN} architecture.\n\\end{itemize}\n\nIn a \\ac{FCNN} all neurons are fully-connected to all the neurons in the previous layer which creates a highly complex model capable of modeling very complex relationships in the input features. However, for spatial data like images, this complexity is pointless since most features are localized anyway, \\textit{i.e.}, the face of a person is likely completely unrelated to the dog in the background of the photo. Instead, convolutional layers exploit the spatial locality of images by connecting neurons only to local regions of the input volume (illustrated in figure \\ref{fig:localconnectivity}) which, intuitively, ensures that the learnt filters activate strongly only in the presence of a spatially local input pattern. 
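As a worked example of the parameter count above (a sketch, assuming a first convolutional layer with $F = 3$, $D_1 = 3$ color channels, and $K = 64$ filters):\n\n\\begin{verbatim}\nF, D1, K = 3, 3, 64\nparams = (F * F * D1 + 1) * K  # (weights per filter + 1 bias) * K filters\nprint(params)                  # 1792 parameters, regardless of image size\n\\end{verbatim}\n\n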
By stacking many such layers the network first creates representations of small regions of the input and progressively assembles representations of larger regions of the input (as if the filters were bigger or global rather than local).\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/localconnectivity.png}\n    \\caption{Local connectivity of neurons in a convolutional layer (in blue) which are connected only to a local area of the input volume (in red) \\cite{cs231n}.}\n    \\label{fig:localconnectivity}\n\\end{figure}\n\nOn the other hand, the exact location of these features is not so important when compared to their location relative to other features. The pooling layer is an approach to providing translation invariance by reducing the spatial size (or down-sampling) of every depth slice of an input volume, most commonly by sliding a filter of size $2 \\times 2$ with a stride $S = 2$ and computing the maximum (\\textit{i.e.}, max pooling, illustrated in figure \\ref{fig:maxpooling}) of the selected part of the input. Effectively this progressively reduces the spatial size of the representation (while introducing no new parameters, since max pooling is a fixed function of the input) and with it the memory footprint and computation in the network.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=0.5\\textwidth]{figs/maxpooling.png}\n    \\caption{Max pooling with a $2 \\times 2$ filter and stride $S = 2$ \\cite{cs231n}.}\n    \\label{fig:maxpooling}\n\\end{figure}\n\nIn summary \\ac{CNN} are a great architecture for computer vision due to:\n\n\\begin{itemize}\n    \\item Composition of neurons in 3D volumes;\n    \\item Parameter sharing reducing the number of parameters;\n    \\item Local connectivity exploiting spatial locality of images;\n    \\item Translation invariance contributing to generalization of features.\n\\end{itemize}\n\nLeNet \\cite{lenet}, in figure \\ref{fig:lenet}, is widely considered to be the first successful application of \\ac{CNN}, when \\citeauthor{lenet} used backpropagation to automatically learn the kernels of convolutions rather than relying on the laborious hand-designed systems that came before it.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figs/lenet.png}\n    \\caption{Architecture of the LeNet convolutional neural network \\cite{lenet}.}\n    \\label{fig:lenet}\n\\end{figure}\n\nAlexNet \\cite{alexnet} won the \\ac{ILSVRC} \\cite{imagenet} in 2012 by a very large margin, introducing a breakthrough in computer vision which further proved that learned features could be better than hand-designed features chosen heuristically. The network, pictured in figure \\ref{fig:alexnet}, employed an 8-layer \\ac{CNN} that stacked many convolutional layers (immediately followed by pooling layers) producing progressively smaller convolutional windows, topped off by two fully-connected layers with 4096 outputs and finally a softmax layer for classification in ImageNet. 
In its design they also used Dropout regularization and the ReLU activation function, which today are very popular and essential tools in deep learning.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figs/alexnet.png}\n    \\caption{Architecture of the AlexNet convolutional neural network \\cite{alexnet}.}\n    \\label{fig:alexnet}\n\\end{figure}\n\nLater, the \\ac{VGG} designed an architecture (illustrated in figure \\ref{fig:vgg16}) that used repeated blocks of layers to build progressively more abstract features, shifting thinking from individual layers to blocks. This basic building block consists of a convolutional layer with padding to maintain resolution, a nonlinearity (\\textit{i.e.}, ReLU), and pooling for downsampling the features. In the original VGG16 paper \\cite{vgg16} the authors employed $3 \\times 3$ convolutions and $2 \\times 2$ max pooling with stride $S = 2$, essentially halving the resolution after each one of these blocks.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figs/vgg16.jpg}\n    \\caption{Architecture of the VGG16 convolutional neural network \\cite{vgg16}.}\n    \\label{fig:vgg16}\n\\end{figure}\n\nGoogLeNet \\cite{inceptionv1} (also known as Inception V1) won the \\ac{ILSVRC} \\cite{imagenet} in 2014, introducing the Inception block, which establishes four parallel paths that use convolutional layers with different window sizes as well as max pooling layers. Furthermore, it exhibited lower computational complexity when compared to other models with similar generalization performance. This influenced many later versions of Inception \\cite{inceptionv2_3}\\cite{inceptionv4}, namely Inception V3 as illustrated in figure \\ref{fig:inceptionv3}.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figs/inceptionv1.png}\n    \\caption{Architecture of the GoogLeNet convolutional neural network \\cite{inceptionv1}.}\n    \\label{fig:inceptionv3}\n\\end{figure}\n\nMore recently, \\citeauthor{resnet} have pushed the state-of-the-art by introducing residual neural networks \\cite{resnet}. Motivated by avoiding the problem of vanishing gradients, residual networks allow the use of skip connections (or short cuts, as pictured in figure \\ref{fig:resnet50}) to arbitrarily jump over layers (effectively reusing activations from previous layers in the forward pass), which significantly speeds up learning by reducing the impact of vanishing gradients since there are fewer layers to backpropagate through. An ensemble of these networks achieved 3.57\\% error on ImageNet, a result which won 1st place on \\ac{ILSVRC} 2015.\n\n\\begin{figure}[ht]\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{figs/resnet50.png}\n    \\caption{Comparison between residual networks and more traditional architectures like VGG19, illustrating how residual networks avoid the vanishing gradient problem \\cite{resnet}.}\n    \\label{fig:resnet50}\n\\end{figure}\n\n\\subsection{Computational Graphs}\n\nIn some sense, a neural network is just an arbitrarily composite function\n\n\\begin{equation}\n\\hat{f}(x) = g(h( ... j(x) ... ))\n\\end{equation}\n\nSuch a function can easily be represented in a directed acyclic graph, which the machine learning literature calls a computational graph. 
Computational graphs expose a programming model where the big idea is to express a numeric computation as a graph such that\n\n\\begin{itemize}\n    \\item graph nodes represent operations (which have any number of inputs and outputs);\n    \\item graph edges represent the flow (input and output) of data (most generically tensors, hence TensorFlow \\cite{tensorflow}) between nodes.\n\\end{itemize}\n\nOn a computational graph we have essentially two modes of computation \\cite{deeplearning}:\n\n\\begin{itemize}\n    \\item Forward propagation (or inference) refers to the calculation (and storage, for later use in backpropagation) of the outputs of all operations (including the final output) in the graph, in order from the innermost operation to the outermost operation.\n    \\item Backward propagation (or backpropagation) refers to the calculation of the gradient of an objective function (the cost function $J$) w.r.t. every parameter $\\theta_{ij}$ by traversing the graph in reverse order relative to the forward pass (which in turn might require computing intermediate gradients according to the chain rule of calculus), giving $\\frac{\\partial J}{\\partial \\theta_{ij}}$, which can finally be used to run gradient descent to optimize the parameters \\cite{deeplearning}.\n\\end{itemize}\n\nForward and backward propagation are highly interdependent. Forward propagation sequentially computes and stores all intermediate operations within the computational graph that is traversed from input to output. In turn, backpropagation sequentially (but in reverse order) computes and stores the gradients of intermediate operations as needed according to the chain rule. Moreover, in order to invoke the chain rule, these intermediate values need to be retained until backpropagation terminates, which is why it is so memory intensive \\cite{deeplearning}.\n\n\\section{Ensemble Learning}\n\nEnsemble learning is a machine learning paradigm that designs models by combining other models into a single more complex and presumably more accurate model. In ensemble learning theory we refer to these building block models as weak learners, because they usually suffer from high bias or high variance, and they are combined in such a way as to reduce bias and variance, yielding a final ensemble model called a strong learner.\n\nThere are three major ensemble learning methods: bagging, boosting, and stacking.\n\n\\subsection{Bagging}\n\nBagging, short for bootstrap aggregation, focuses on reducing variance and relies on $L$ approximately i.i.d. 
\n\n\\section{Ensemble Learning}\n\nEnsemble learning is a machine learning paradigm that designs models by combining other models into a single, more complex, and presumably more accurate model. In ensemble learning theory we refer to these building block models as weak learners, because they usually suffer from high bias or high variance; they are combined in such a way as to reduce bias and variance, yielding a final ensemble model called a strong learner.\n\nThere are three major ensemble learning methods: bagging, boosting, and stacking.\n\n\\subsection{Bagging}\n\nBagging, short for bootstrap aggregation, focuses on reducing variance. It relies on $L$ approximately i.i.d. subsamples of the data (called bootstrap samples) to fit $L$ weak learners $h_1(x), h_2(x), \\ldots, h_L(x)$ and aggregate or average them through some function $H(x)$, which, \\textit{e.g.},\n\n\\begin{enumerate}\n    \\item in a regression problem can literally be the average of the predictions of the weak learners, sometimes referred to as soft voting, \\textit{i.e.}, $H(x) = \\frac{1}{L} \\sum_{i=1}^{L} h_i(x)$;\n    \\item in a classification problem can be to just take the mode of the predictions of the weak learners, often called hard voting, \\textit{i.e.}, $H(x) = \\operatorname{mode}(h_1(x), h_2(x), \\ldots, h_L(x))$.\n\\end{enumerate}\n\nPerhaps the biggest advantage of bagging is that the $L$ weak learners can be fit independently and in parallel, considerably speeding up research iterations.
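\n\nAs a concrete illustration of the two aggregation schemes, consider the following sketch in plain Python with NumPy (the prediction arrays are hypothetical):\n\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical predictions of L = 3 weak learners on 4 samples.\nregression_preds = np.array([[2.0, 1.5, 3.0, 0.5],\n                             [2.2, 1.4, 2.8, 0.7],\n                             [1.8, 1.6, 3.2, 0.6]])\nclass_preds = np.array([[0, 1, 1, 2],\n                        [0, 1, 2, 2],\n                        [1, 1, 2, 2]])\n\n# Soft voting: H(x) = (1/L) * sum of h_i(x).\nsoft = regression_preds.mean(axis=0)     # [2.0, 1.5, 3.0, 0.6]\n\n# Hard voting: H(x) = mode(h_1(x), ..., h_L(x)).\nhard = np.apply_along_axis(lambda col: np.bincount(col).argmax(),\n                           0, class_preds)  # [0, 1, 2, 2]\n\\end{verbatim}\n\nBecause each weak learner only needs its own bootstrap sample, the loop that would produce these predictions can be run fully in parallel.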
\n\n\\subsection{Boosting}\n\nBoosting fits multiple models sequentially, such that the model being fit at a given iteration gives more importance to samples that were previously predicted with high error, thus resulting in a strong learner with lower bias than its weak learners.\n\nAdaptive boosting (also known as AdaBoost) \\cite{adaboost} combines the output of its $T$ weak learners into a weighted sum that represents the final output and is of the form\n\n\\begin{equation}\nH_T(x) = \\sum_{t=1}^{T} \\alpha_t h_t(x)\n\\end{equation}\n\nRather than optimizing for the optimal set of weights $\\alpha_t$ and weak learners $h_t$ all at once, AdaBoost takes a greedy iterative approach. Essentially, at each iteration $t$, a weak learner $h_t$ is chosen and weighted by $\\alpha_t$ so as to minimize the training error $E_t$ of the resulting $t$-stage boosted classifier over the training samples $x_i$\n\n\\begin{equation}\nE_t = \\sum_{i} E[H_{t-1}(x_i) + \\alpha_t h_t(x_i)]\n\\end{equation}\n\nwhere $E$ denotes some per-sample error function and $H_{t-1}$ is the boosted classifier built in the previous iterations.\n\n\\subsection{Stacking}\n\nRather than combining the weak learners using some arbitrary pre-determined scheme, stacking learns this combination by training a meta-model on the weak learners' predictions, which can even be done in multiple layers.\n\nStacking often works well with heterogeneous weak learners, \\textit{i.e.}, different algorithms. For example, the weak learners could consist of \\ac{KNN}, \\ac{SVM}, and decision tree models. Then, the outputs of the weak learners could be taken as inputs for an \\ac{ANN} that learns the meta-model based on their predictions.\n\nThe data used to train the weak learners should not be reused to train the meta-model, because the weak learners' predictions on their own training data are optimistically biased. Thus we need to split the data in two folds: one for training the weak learners and another for training the meta-model. An obvious immediate drawback is that we only have a fraction of the data available for training the meta-model and vice versa.
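\n\nA minimal sketch of this heterogeneous setup, assuming scikit-learn, synthetic data, and the two-fold split described above (fold A trains the weak learners, fold B trains the meta-model):\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.model_selection import train_test_split\n\nX = np.random.rand(200, 8)\ny = np.random.randint(0, 2, 200)\nX_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5)\n\n# Heterogeneous weak learners, fit on fold A only.\nweak_learners = [KNeighborsClassifier(), SVC(probability=True),\n                 DecisionTreeClassifier()]\nfor h in weak_learners:\n    h.fit(X_a, y_a)\n\n# Meta-features: each weak learner's predicted probabilities on fold B.\nmeta_X = np.hstack([h.predict_proba(X_b) for h in weak_learners])\nmeta_model = MLPClassifier(max_iter=1000).fit(meta_X, y_b)\n\\end{verbatim}\n\nNewer versions of scikit-learn automate this pattern (using cross-validation instead of a single split, which alleviates the data-fraction drawback) in \\verb|sklearn.ensemble.StackingClassifier|.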
\n\n\\section{Transfer Learning}\n\\label{section:transferlearning}\n\nIn practice, supervised training of deep neural networks requires\n\n\\begin{itemize}\n    \\item vast amounts of labeled data, which is very expensive and time consuming to obtain in itself;\n    \\item setting and reasoning about many different hyperparameters and cross-validating them;\n    \\item immense computational and time resources, which are often equally or more expensive.\n\\end{itemize}\n\nTransfer learning emerges as a technique that can be used to reduce the costs or time constraints of training these deep neural networks.\n\nTransfer learning is a machine learning technique that seeks to leverage (or transfer) the knowledge gained from solving one problem to another (ideally related) problem \\cite{deeptransferlearning}. In the context of deep neural networks it means to transfer the weights of a model trained on a very large dataset (\\textit{e.g.}, ImageNet \\cite{imagenet}) and re-purpose them for another model.\n\nIn image classification problems, \\ac{CNN} models build progressively more specific feature maps layer by layer, where lower layers represent abstract, problem independent features (\\textit{e.g.}, squares, circles) and higher layers represent specific, problem dependent features (\\textit{e.g.}, car, dog). Transfer learning seeks to exploit and take advantage of this construct. In computer vision problems, the most common transfer learning techniques boil down to:\n\n\\begin{enumerate}\n    \\item choosing up to which layer parameters should be extracted from the pre-trained model;\n    \\item deciding up to which layer parameters should be trained (\\textit{i.e.}, updated during gradient descent) and which should remain frozen (\\textit{i.e.}, not updated during gradient descent);\n    \\item connecting a classifier (\\textit{e.g.}, fully-connected layers) on top.\n\\end{enumerate}\n\n\\subsection{Transfer Learning by Total Parameter Extraction without Fine Tuning}\n\nIn the simplest scenario:\n\n\\begin{enumerate}\n    \\item extract and freeze all the layers of the pre-trained model, \\textit{i.e.}, total parameter extraction;\n    \\item build a classifier on top of the extracted parameters and only train the classifier layers, \\textit{i.e.}, without fine tuning.\n\\end{enumerate}\n\nIntuitively this is unlikely to be the best performing solution because only the classifier's parameters are being updated, leaving the pre-trained model's parameters untouched (and thus too specific to the ImageNet domain).\n\n\\subsection{Transfer Learning by Total Parameter Extraction with Fine Tuning}\n\nThe parameters of the extracted layers do not need to remain frozen. Given sufficient computational and time resources, further training the parameters of the extracted layers can yield even higher generalization performance because we are, in some sense, fine-tuning the extracted parameters to better fit the target dataset.\n\nFor this fine-tuning one should be very careful to use a very slow (\\textit{i.e.}, low) learning rate, so as to not suddenly disrupt the learned parameters of the different layers, because we are actually updating parameters transferred from another model.
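\n\nBoth total-extraction strategies are straightforward to express in a high-level API. The following is a minimal Keras sketch under stated assumptions (a binary classification head and a hypothetical \\verb|train_ds| dataset), not the exact setup of any of the works reviewed later:\n\n\\begin{verbatim}\nimport tensorflow as tf\n\n# Total parameter extraction: a pre-trained base without its classifier.\nbase = tf.keras.applications.VGG16(weights='imagenet', include_top=False,\n                                   pooling='avg', input_shape=(224, 224, 3))\nbase.trainable = False  # freeze all extracted layers (no fine tuning)\n\nmodel = tf.keras.Sequential([\n    base,\n    tf.keras.layers.Dense(256, activation='relu'),\n    tf.keras.layers.Dense(1, activation='sigmoid'),  # classifier head\n])\nmodel.compile(optimizer='adam', loss='binary_crossentropy')\n# model.fit(train_ds, epochs=10)  # train only the classifier head\n\n# With fine tuning: unfreeze the base and use a very low learning rate\n# so the transferred parameters are not suddenly disrupted.\nbase.trainable = True\nmodel.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),\n              loss='binary_crossentropy')\n# model.fit(train_ds, epochs=10)\n\\end{verbatim}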
\n\n\\subsection{Transfer Learning by Partial Parameter Extraction}\n\nWhen the source and target domains are not very similar, extracting the higher layers will yield features that are too specific to the source domain. For example, if the source domain is dogs but the target domain is cars, then the higher layers likely represent features very specific to dogs (\\textit{e.g.}, dog eyes, dog tails, dog legs), whereas some arbitrary middle layer might represent more abstract shape-like features that can still be used as a solid starting point for the cars domain.\n\nFor this reason it is often counter-productive to extract all the parameters. Instead, parameters can be extracted up to an arbitrary middle layer which likely represents more useful features that are still relevant for the target domain. The precise point up to which parameters should be extracted needs to be studied and tested empirically for each problem, and chosen based on whichever yields the highest generalization performance.\n\nUnder this strategy, the parameters of earlier extracted layers can still be fine tuned.
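\n\nA minimal sketch of partial parameter extraction under the same assumptions as before (Keras; the cut-off layer name \\verb|block3_pool| is one of VGG16's published layer names, chosen arbitrarily for illustration):\n\n\\begin{verbatim}\nimport tensorflow as tf\n\nfull = tf.keras.applications.VGG16(weights='imagenet', include_top=False)\n\n# Partial parameter extraction: keep layers only up to an arbitrary\n# middle layer, discarding the higher, too source-specific layers.\nbase = tf.keras.Model(inputs=full.input,\n                      outputs=full.get_layer('block3_pool').output)\nbase.trainable = False  # the extracted layers can later be unfrozen too\n\nmodel = tf.keras.Sequential([\n    base,\n    tf.keras.layers.GlobalAveragePooling2D(),\n    tf.keras.layers.Dense(1, activation='sigmoid'),\n])\n\\end{verbatim}\n\nThe cut-off point (\\verb|block3_pool| in this sketch) is exactly the hyperparameter that needs to be cross-validated empirically.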
\n\n\\section{Deep Learning Hardware and Software}\n\nDeep learning is very computationally intensive in itself, and even more so when we want to run multiple experiments with cross-validated hyperparameters or completely different architectures.\n\n\\ac{CNN}, the core of most state-of-the-art deep learning applied to computer vision, are computationally complex but embarrassingly parallel \\cite{chang2017}. The architecture of general purpose \\ac{GPU} is appropriate for this kind of workload \\cite{gpu}, and libraries like cuDNN \\cite{cudnn} were developed to further leverage the characteristics of \\ac{GPU} into even bigger performance improvements.\n\nThere is a growing demand for domain-specific hardware designed specifically for the computations necessary in neural network training and inference, like Google's TPU custom ASIC \\cite{tpu}. Such hardware can naturally achieve major improvements in cost-energy-performance when compared to general purpose hardware like \\ac{GPU}, which was originally designed for the demands of computer graphics and coincidentally also serves deep learning reasonably well. Nonetheless, \\ac{GPU} remain the most cost-effective commodity hardware for this type of computation, especially when not working at the scale of companies like Google and Facebook. In a typical deep learning application, the CPU does little useful computation; most of the computation is delegated to the GPU.\n\nTensorFlow\\footnote{\\url{https://www.tensorflow.org/}} is an open source software library for numerical computation using data flow graphs (or computational graphs). The graph nodes represent mathematical operations, while the graph edges represent the tensors that flow between them. This flexible architecture enables deploying computation to one or more CPU or GPU in a desktop, server, or mobile device without rewriting code \\cite{tensorflow}.\n\nKeras\\footnote{\\url{https://keras.io/}} is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on fast experimentation, enabling easy and fast prototyping through user friendliness, modularity, and extensibility. This API was more recently adopted by TensorFlow itself in the \\verb|tf.keras| namespace, which is a central part of TensorFlow 2.0 following the criticism of TensorFlow's convoluted API, and the original Keras project is likely to be dissolved eventually.
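\n\nAs a tiny illustration of this programming model (a sketch, not production code), TensorFlow 2 records the operations executed during the forward pass on a tape and traverses them in reverse to obtain gradients, while \\verb|tf.function| traces the Python function into a data flow graph:\n\n\\begin{verbatim}\nimport tensorflow as tf\n\n@tf.function  # trace the computation into a data flow graph\ndef loss_and_grad(w, x, y):\n    with tf.GradientTape() as tape:\n        loss = tf.reduce_mean((tf.matmul(x, w) - y) ** 2)\n    return loss, tape.gradient(loss, w)\n\nw = tf.Variable(tf.zeros((3, 1)))\nx = tf.random.normal((8, 3))\ny = tf.random.normal((8, 1))\nloss, grad = loss_and_grad(w, x, y)  # grad has the shape of w\n\\end{verbatim}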
\n\n\\section{Skin Lesion Classification}\n\nThis section reviews current state-of-the-art results in skin lesion classification and medical image analysis in general to get an overview of the work related to this dissertation.\n\n\\subsection{Transfer Learning Approaches}\n\n\\citeauthor{Brinker2018} \\cite{Brinker2018} present the first systematic review of the cutting edge in lesion classification with deep convolutional neural networks, which remains the fundamental technique in state-of-the-art results. They review 13 papers that use \\ac{CNN}, most of which transfer weights from networks trained on ImageNet to the target task, which speeds up training and reduces costs by leveraging previous knowledge. In conclusion they note that \\ac{CNN} are currently the state-of-the-art in skin lesion classification and that transfer learning is a very effective approach, but that results were rather difficult to compare given the heterogeneity of the datasets used (some of which are not public), which also makes reproducibility difficult. This motivated the initiative by the \\ac{ISIC} Archive to collect and standardize data as well as organize challenges to push new results.\n\nIn \\citeyear{nature2017} \\citeauthor{nature2017} \\cite{nature2017} achieved perhaps the most famous result in skin lesion classification using deep learning, published in the scientific journal Nature. Their dataset combines biopsy-proven data from the \\ac{ISIC} Archive, the Edinburgh Dermofit Library, and the Stanford Hospital, totalling an astonishing 129450 samples (after data augmentation with random flips, rotations, and crops), which remains one of the biggest efforts in data collection in the area. They build an undirected graph connecting images that were deemed similar and make sure that the connected components of this graph are kept separate when distributing images between the train, validation, and test sets, in order to create a more effective and diverse split of the data. The authors follow a transfer learning approach by leveraging the weights of the InceptionV3 network trained on ImageNet, on top of which they build their own classifier, carefully fine-tuning the previous layers using the RMSProp optimizer. They evaluate the performance of the network by pitting it against 21 board-certified dermatologists on biopsy-proven, medically-relevant cases of keratinocyte carcinomas versus benign seborrheic keratoses and malignant melanomas versus benign nevi, attaining performance on par with the experts.\n\n\\citeauthor{menegola2017} \\cite{menegola2017} went back to the \\ac{ISIC} 2016 lesion classification challenge to try to obtain better results. They employed a transfer learning approach that leverages the weights of VGG-16 and VGG-M networks originally trained on ImageNet and fine-tunes them on a dataset comprised of data from the \\ac{ISIC} 2016 challenge and the Interactive Atlas of Dermoscopy (augmented by randomly scaling, rotating, or flipping the samples). They train for 60 epochs using \\ac{SGD} with momentum and L2 regularization to obtain a set of features, on top of which they train an SVM classifier. They report results on the test set of the \\ac{ISIC} 2016 challenge, achieving 0.807 AUC with the deeper VGG-16 model. Their results further show that in really difficult cases their model is not very confident in its predictions, which suggests that, for the time being, current technology is better used as a reference to support and explain human doctors' diagnoses rather than as a complete diagnosis framework.\n\nIn part 3 of the \\ac{ISIC} 2017 challenge \\cite{isic2017} participants were asked to develop two binary classifiers to distinguish between:\n\n\\begin{itemize}\n    \\item melanoma vs. nevus and seborrheic keratosis;\n    \\item seborrheic keratosis vs. nevus and melanoma.\n\\end{itemize}\n\nParticipants were given a training set of 2000 images (374 melanoma, 254 seborrheic keratosis, and the remaining 1372 benign nevi), a validation set of 150 images, and a test set of 600 images. The images were of questionable quality and required a lot of preprocessing effort, and participants could complement the training data by gathering their own. Participants were ranked and awarded based only on AUC, but other metrics were reported for scientific completeness.\n\n% first place\nFirst place was a joint effort between Casio and Shinshu University presented by Kazuhisa Matsunaga et al. \\cite{isic2017first}. In their work they adopted ensembles of their own variant of ResNet-50, trained presumably end-to-end (using RMSProp \\cite{rmsprop} and AdaGrad \\cite{adagrad}) on data from the challenge as well as data they gathered independently. The data was normalized in a way that exploits color constancy, and multiple geometric transformations of each image were input in parallel to the networks.\n\nThe metadata available in the training set showed that melanoma and seborrheic keratosis were both uncommon at young ages. From this observation they implemented a simple thresholding by age, which improved cross-validation performance for seborrheic keratosis classification from 0.957 to 0.960 AUC, but not for melanoma classification. They noted that a more careful thresholding implementation would be necessary from a clinical point of view. In the end, the mean performance of the two classifiers on the validation set was 0.958 AUC and, after the paper was published, 0.911 on the test set.\n\n% second place\nIv\u00e1n from the Universidad Carlos III de Madrid \\cite{isic2017second} got second place by designing a very complete automatic diagnosis system where a dermoscopic image goes through:\n\n\\begin{enumerate}\n    \\item A segmentation network based on \\ac{FCN} \\cite{fcn} to generate a binary mask outlining the area actually occupied by the lesion;\n    \\item A data augmentation module to generate random label-invariant views of the original dermoscopic image through rotations and crops;\n    \\item A structure segmentation network that produces binary masks for 8 heuristically designed structures (dots, reticular patterns and pigmented networks, homogeneous areas, regression areas, blue-white veil, streaks, vascular structures, and other unspecific patterns) that expert dermatologists find important for a diagnosis;\n    \\item A classification network based on transfer learning from ResNet-50 \\cite{resnet} that takes into consideration the structures identified by the structure segmentation network as well as the original dermoscopic image.\n\\end{enumerate}\n\nThis effort resulted in an AUC score of 0.910 on the test set, thus earning second place.\n\n% third place\nThird place was an effort by Afonso Menegola et al. \\cite{isic2017third} from the RECOD Lab, who collected several datasets which they cleaned and filtered, resulting in two sets of 9640 and 7544 images with differing performances on the two binary classification tasks, both of which they kept in consideration throughout their experiments. They adopted a transfer learning approach and decided to focus on ResNet-101 and Inception-V4 models trained on ImageNet, on top of which they experimented with:\n\n\\begin{itemize}\n    \\item A curriculum learning scheme in which they scheduled the samples during training such that easier samples are batched first and harder samples are batched later; in practice, however, this was worse than a traditional learning scheme.\n    \\item Training data and testing data augmentation by applying label-invariant random transformations such as crops, flips, etc., which ended up significantly improving performance (something the authors already knew from experience).\n    \\item A meta-learning scheme that takes into consideration the decisions of multiple models (even by simply averaging the output probabilities), which gave the best results on the official validation AUC, even when compared to the single best model.\n    \\item Normalizing the inputs to Inception networks by subtracting the average pixel value, which significantly improved performance; further dividing by the standard deviation, however, gave worse results than the baseline.\n\\end{itemize}\n\nTheir final submission was a meta-model that combined seven models based on Inception and ResNet networks trained on distinct datasets, which were then stacked in a meta-learning layer based on SVM. In the end they placed third by getting an AUC score of 0.908 on the test set.
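\n\nThe test-time augmentation with probability averaging mentioned above can be sketched in a few lines of Python (the \\verb|model| object and the list of transforms are hypothetical placeholders):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef predict_with_tta(model, image, transforms):\n    # transforms: label-invariant functions (flips, crops, ...).\n    views = np.stack([t(image) for t in transforms])\n    probs = model.predict(views)   # one probability vector per view\n    return probs.mean(axis=0)      # average the output probabilities\n\n# Hypothetical transforms: identity and a horizontal flip.\ntransforms = [lambda im: im, lambda im: im[:, ::-1, :]]\n\\end{verbatim}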
\n\n% others\nAlso worth noting are\n\n\\begin{itemize}\n    \\item \\citeauthor{isic2017li} \\cite{isic2017li}, who present novel multi-scale fully convolutional residual networks (based on FCRN-88 \\cite{fcrn}) trained on differently augmented datasets (which empirically proved to offer better performance). The networks' outputs are interpolated back to the original scale and summed up to yield what the authors call possibility maps, which are further refined by taking into consideration a distance map representing the importance of each pixel. For comparison they also ran experiments with AlexNet, VGG16, ResNet-50, ResNet-101, and Inception-v3, but their custom network outperformed them all with an AUC score of 0.912 as evaluated on the \\ac{ISIC} 2017 dataset;\n    \\item \\citeauthor{yang2017} \\cite{yang2017}, who propose a multi-task learning scheme where lesion segmentation and classification are solved simultaneously: a model builds feature maps using a \\ac{CNN} and then branches out into 3 parallel paths whose outputs individually perform segmentation, melanoma binary classification, and seborrheic keratosis binary classification, while training is performed as if it were a single network, optimizing parameters as usual. They achieve an AUC score of 0.926.\n\\end{itemize}\n\nIn part 3 of the \\ac{ISIC} 2018 challenge participants were asked to develop a classifier to distinguish between:\n\n\\begin{itemize}\n    \\item Melanoma\n    \\item Melanocytic nevus\n    \\item Basal cell carcinoma\n    \\item Actinic keratosis\n    \\item Benign keratosis\n    \\item Dermatofibroma\n    \\item Vascular lesion\n\\end{itemize}\n\nThe provided training data comes from the \\ac{HAM10000} dataset \\cite{ham10000}, which was acquired with many different dermatoscopes, covering most anatomic sites, from a historical sample of patients who presented for skin cancer screening at many different institutions, with the proper approvals. Diagnosis ground truth labels were established by:\n\n\\begin{itemize}\n    \\item Histopathology\n    \\item Reflectance confocal microscopy\n    \\item Absence of change during digital dermatoscopic follow-up over two years with at least three images\n    \\item Consensus of at least three expert dermatologists from a single image\n    \\item Histopathology confirmation in cases of malignancy\n\\end{itemize}\n\nJust like in the 2017 edition, participants could, of course, complement the training data by gathering their own, but this time they were ranked based on the normalized multi-class accuracy metric.\n\nIn the 2018 edition \\citeauthor{isic2018first} \\cite{isic2018first} won first, second, and third place, attaining, respectively, a balanced multiclass accuracy of 0.885, 0.882, and 0.871 (which still beat fourth place by a margin of 2\\%). In their work they combined data from the competition's \\ac{HAM10000} dataset, the \\ac{ISIC} Archive, and other proprietary data, all of which were preprocessed using the Shades of Gray method to normalize images from different sources and contexts. Further along their data pipeline they augment the training data by performing random horizontal flips, random rotations of $\\{0, 90, 180, 270\\}$ degrees, and random changes of brightness, saturation, and contrast by a factor in the range $[0.9, 1.1]$. Their models were based on transfer learning from models trained on ImageNet, like InceptionV3, ResNet-50, SqueezeNet, DenseNet, and others, of which they picked the best-performing ones and ensembled them in a stacking scheme.
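\n\nThe Shades of Gray method mentioned above is simple enough to sketch directly. It estimates the scene illuminant of each color channel with a Minkowski $p$-norm mean and rescales the channels so that the estimated illuminant becomes gray; the sketch below assumes a float RGB image in $[0, 1]$ and uses $p = 6$, a common choice in this literature:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef shades_of_gray(img, p=6):\n    # img: float RGB image in [0, 1] with shape (H, W, 3).\n    # Per-channel illuminant estimate via the Minkowski p-norm mean.\n    illum = np.power((img ** p).mean(axis=(0, 1)), 1.0 / p)\n    # Rescale channels so the estimated illuminant maps to neutral gray.\n    scale = illum.mean() / illum\n    return np.clip(img * scale, 0.0, 1.0)\n\\end{verbatim}\n\nWith $p = 1$ this reduces to the Gray World assumption, and as $p \\to \\infty$ it approaches the White Patch (max-RGB) method.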
\n\nThe authors note that a large ensemble of models like theirs is not practical in production because of the high computational cost associated with inferring the prediction of a given input for all models in the ensemble. For this reason, they suggest that constraints on memory usage or \\ac{FLOPS} be considered in future challenges.\n\nNoteworthy are also the submissions by\n\n\\begin{itemize}\n    \\item \\citeauthor{isic2018milton} \\cite{isic2018milton}, who also followed an approach based on transfer learning from PNASNet-5-Large, InceptionResNetV2, SENet154, and InceptionV4 models trained on ImageNet. Interestingly, they noted that in the first few epochs the gradient is very erratic, and thus refrained from fine-tuning the weights during the first 2 epochs to avoid updating them in the wrong direction, in the end achieving a score of 0.76 on the validation set;\n    \\item \\citeauthor{isic2018bissoto} \\cite{isic2018bissoto} (who won 3rd place in the 2017 edition), who transferred knowledge from InceptionV4, ResNet-152, and DenseNet-161 models trained on ImageNet, training with online data augmentation (\\textit{e.g.}, random crops, flips, rotations, shears, color transformations) and \\ac{SGD} with the learning rate decreased by a factor of 10 whenever the validation loss did not improve for 10 epochs, eventually building an average of 15 models trained only with the challenge data that attained a score of 0.803.\n\\end{itemize}\n\n\\subsection{Hybrid Learning Techniques}\n\nClearly the large majority of the work in skin lesion classification follows a transfer learning approach, presumably because of how cost-effective it is (especially for smaller research teams). Nonetheless, there has been some research that explores other techniques in addition to transfer learning.\n\n\\citeauthor{hybrid2} \\cite{hybrid2} present an approach that extracts features from\n\n\\begin{itemize}\n    \\item a \\ac{VGG} network originally trained on ImageNet;\n    \\item unsupervised feature learning using sparse coding;\n\\end{itemize}\n\nand respectively trains two non-linear \\ac{SVM} (using a histogram intersection kernel and sigmoid feature normalization) whose outputs are mapped to probabilities at a 50\\% threshold and fused together using unweighted score averaging. They compare this against a more classical ensemble approach using only hand-coded low-level features, which used to be the state-of-the-art but is now significantly less performant at 0.715 accuracy on their test set (versus the 0.739 accuracy of the hybrid approach).\n\nThe work by \\citeauthor{hybrid1} \\cite{hybrid1} combines different techniques into a similar (but more extensive) classification framework that basically:\n\n\\begin{enumerate}\n    \\item extracts various features across two input scales (an area cropped around the segmented lesion, and the entire original dermoscopic image):\n    \\begin{itemize}\n        \\item hand-engineered rule-based features like color histograms, edge histograms, and color local binary patterns;\n        \\item unsupervised learning features from a sparse coded representation;\n        \\item features extracted from two deep residual networks trained on ImageNet and fine-tuned on the target dataset;\n        \\item the segmentation produced by their U-Net segmentation network, which is also used as a shape descriptor feature;\n    \\end{itemize}\n    \\item trains a non-linear \\ac{SVM} classifier for each extracted set of features;\n    \\item averages the outputs of all classifiers in an ensemble to produce a final classification.\n\\end{enumerate}\n\nWhen compared to an average of 8 dermatologists' predictions on 100 test images, they produce a higher accuracy and specificity evaluated at an equivalent sensitivity.\n", "meta": {"hexsha": "7b2708d16f5098f3d393b90064f47e23f489313a", "size": 69341, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/chapters/sota.tex", "max_stars_repo_name": "fabiomaia/msc", "max_stars_repo_head_hexsha": "43e3a8e17786c2c3246768f29be60d6c93f52527", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-03-27T16:38:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-28T22:57:14.000Z", "max_issues_repo_path": "doc/chapters/sota.tex", "max_issues_repo_name": "fabiomaia/msc", "max_issues_repo_head_hexsha": "43e3a8e17786c2c3246768f29be60d6c93f52527", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/chapters/sota.tex", "max_forks_repo_name": "fabiomaia/msc", "max_forks_repo_head_hexsha": "43e3a8e17786c2c3246768f29be60d6c93f52527", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.0863213811, "max_line_length": 1325, "alphanum_fraction": 0.7778226446, "num_tokens": 16252, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174789, "lm_q2_score": 0.7248702702332476, "lm_q1q2_score": 0.5534266898082107}}
{"text": "\\vfill \\eject\n\\section{{\\tt allInOne.c} -- A Serial $QR$ Driver Program}\n\\label{section:QR-serial-driver}\n\n\\begin{verbatim}\n/*  QRallInOne.c  */\n\n#include \"../../misc.h\"\n#include \"../../FrontMtx.h\"\n#include \"../../SymbFac.h\"\n\n/*--------------------------------------------------------------------*/\nint\nmain ( int argc, char *argv[] ) {\n/*\n   --------------------------------------------------\n   QR all-in-one program\n   (1) read in matrix entries and form InpMtx object\n       of A and A^TA\n   (2) form Graph object of A^TA\n   (3) order matrix and form front tree\n   (4) get the permutation, permute the matrix and \n       front tree and get the symbolic factorization\n   (5) compute the numeric factorization\n   (6) read in right hand side entries\n   (7) compute the solution\n\n   created -- 98jun11, cca\n   --------------------------------------------------\n*/\n/*--------------------------------------------------------------------*/\nchar            *matrixFileName, *rhsFileName ;\nChvManager      *chvmanager ;\nDenseMtx        *mtxB, *mtxX ;\ndouble          facops, imag, real, value ;\ndouble          cpus[10] ;\nETree           *frontETree ;\nFILE            *inputFile, *msgFile ;\nFrontMtx        *frontmtx ;\nGraph           *graph ;\nint             ient, irow, jcol, jrhs, jrow, msglvl, neqns,\n                nedges, nent, nrhs, nrow, seed, type ;\nInpMtx          *mtxA ;\nIV              *newToOldIV, *oldToNewIV ;\nIVL             *adjIVL, *symbfacIVL ;\nSubMtxManager   *mtxmanager ;\n/*--------------------------------------------------------------------*/\n/*\n   --------------------\n   get input parameters\n   --------------------\n*/\nif ( argc != 7 ) {\n   fprintf(stdout, \n      \"\\n usage: %s msglvl msgFile type matrixFileName rhsFileName seed\"\n      \"\\n    msglvl -- message level\"\n      \"\\n    msgFile -- message file\"\n      \"\\n    type    -- type of entries\"\n      \"\\n      1 (SPOOLES_REAL)    -- real entries\"\n      \"\\n      2 (SPOOLES_COMPLEX) -- complex entries\"\n      \"\\n    matrixFileName -- matrix file name, format\"\n      \"\\n       nrow ncol nent\"\n      \"\\n       irow jcol entry\"\n      \"\\n        ...\"\n      \"\\n        note: indices are zero based\"\n      \"\\n    rhsFileName -- right hand side file name, format\"\n      \"\\n       nrow \"\n      \"\\n       entry[0]\"\n      \"\\n       ...\"\n      \"\\n       entry[nrow-1]\"\n      \"\\n    seed -- random number seed, used for ordering\"\n      \"\\n\", argv[0]) ;\n   return(0) ;\n}\nmsglvl = atoi(argv[1]) ;\nif ( strcmp(argv[2], \"stdout\") == 0 ) {\n   msgFile = stdout ;\n} else if ( (msgFile = fopen(argv[2], \"a\")) == NULL ) {\n   fprintf(stderr, \"\\n fatal error in %s\"\n           \"\\n unable to open file %s\\n\",\n           argv[0], argv[2]) ;\n   return(-1) ;\n}\ntype           = atoi(argv[3]) ;\nmatrixFileName = argv[4] ;\nrhsFileName    = argv[5] ;\nseed           = atoi(argv[6]) ;\n/*--------------------------------------------------------------------*/\n/*\n   --------------------------------------------\n   STEP 1: read the entries from the input file \n   and create the InpMtx object of A\n   --------------------------------------------\n*/\ninputFile = fopen(matrixFileName, \"r\") ;\nfscanf(inputFile, \"%d %d %d\", &nrow, &neqns, &nent) ;\nmtxA = InpMtx_new() ;\nInpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, 0) ;\nif ( type == SPOOLES_REAL ) {\n   for ( ient = 0 ; ient < nent ; ient++ ) {\n      fscanf(inputFile, \"%d 
%d %le\", &irow, &jcol, &value) ;\n      InpMtx_inputRealEntry(mtxA, irow, jcol, value) ;\n   }\n} else {\n   for ( ient = 0 ; ient < nent ; ient++ ) {\n      fscanf(inputFile, \"%d %d %le %le\", &irow, &jcol, &real, &imag) ;\n      InpMtx_inputComplexEntry(mtxA, irow, jcol, real, imag) ;\n   }\n}\nfclose(inputFile) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n input matrix\") ;\n   InpMtx_writeForHumanEye(mtxA, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   ----------------------------------------\n   STEP 2: read the right hand side entries\n   ----------------------------------------\n*/\ninputFile = fopen(rhsFileName, \"r\") ;\nfscanf(inputFile, \"%d %d\", &nrow, &nrhs) ;\nmtxB = DenseMtx_new() ;\nDenseMtx_init(mtxB, type, 0, 0, nrow, nrhs, 1, nrow) ;\nDenseMtx_zero(mtxB) ;\nif ( type == SPOOLES_REAL ) {\n   for ( irow = 0 ; irow < nrow ; irow++ ) {\n      fscanf(inputFile, \"%d\", &jrow) ;\n      for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) {\n         fscanf(inputFile, \"%le\", &value) ;\n         DenseMtx_setRealEntry(mtxB, jrow, jrhs, value) ;\n      }\n   }\n} else {\n   for ( irow = 0 ; irow < nrow ; irow++ ) {\n      fscanf(inputFile, \"%d\", &jrow) ;\n      for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) {\n         fscanf(inputFile, \"%le %le\", &real, &imag) ;\n         DenseMtx_setComplexEntry(mtxB, jrow, jrhs, real, imag) ;\n      }\n   }\n}\nfclose(inputFile) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n rhs matrix in original ordering\") ;\n   DenseMtx_writeForHumanEye(mtxB, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   -------------------------------------------------\n   STEP 3 : find a low-fill ordering\n   (1) create the Graph object for A^TA or A^HA\n   (2) order the graph using multiple minimum degree\n   -------------------------------------------------\n*/\ngraph = Graph_new() ;\nadjIVL = InpMtx_adjForATA(mtxA) ;\nnedges = IVL_tsize(adjIVL) ;\nGraph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,\n            NULL, NULL) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n graph of A^T A\") ;\n   Graph_writeForHumanEye(graph, msgFile) ;\n   fflush(msgFile) ;\n}\nfrontETree = orderViaMMD(graph, seed, msglvl, msgFile) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n front tree from ordering\") ;\n   ETree_writeForHumanEye(frontETree, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   -----------------------------------------------------\n   STEP 4: get the permutation, permute the matrix and \n           front tree and get the symbolic factorization\n   -----------------------------------------------------\n*/\noldToNewIV = ETree_oldToNewVtxPerm(frontETree) ;\nnewToOldIV = ETree_newToOldVtxPerm(frontETree) ;\nInpMtx_permute(mtxA, NULL, IV_entries(oldToNewIV)) ;\nInpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ;\nsymbfacIVL = SymbFac_initFromGraph(frontETree, graph) ;\nIVL_overwrite(symbfacIVL, oldToNewIV) ;\nIVL_sortUp(symbfacIVL) ;\nETree_permuteVertices(frontETree, oldToNewIV) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n old-to-new permutation vector\") ;\n   IV_writeForHumanEye(oldToNewIV, msgFile) ;\n   fprintf(msgFile, \"\\n\\n new-to-old permutation vector\") ;\n   IV_writeForHumanEye(newToOldIV, msgFile) ;\n   fprintf(msgFile, \"\\n\\n front tree after permutation\") ;\n   ETree_writeForHumanEye(frontETree, msgFile) ;\n   
fprintf(msgFile, \"\\n\\n input matrix after permutation\") ;\n   InpMtx_writeForHumanEye(mtxA, msgFile) ;\n   fprintf(msgFile, \"\\n\\n symbolic factorization\") ;\n   IVL_writeForHumanEye(symbfacIVL, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   ------------------------------------------\n   STEP 5: initialize the front matrix object\n   ------------------------------------------\n*/\nfrontmtx = FrontMtx_new() ;\nmtxmanager = SubMtxManager_new() ;\nSubMtxManager_init(mtxmanager, NO_LOCK, 0) ;\nif ( type == SPOOLES_REAL ) {\n   FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, \n                 SPOOLES_SYMMETRIC, FRONTMTX_DENSE_FRONTS, \n                 SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,\n                 mtxmanager, msglvl, msgFile) ;\n} else {\n   FrontMtx_init(frontmtx, frontETree, symbfacIVL, type, \n                 SPOOLES_HERMITIAN, FRONTMTX_DENSE_FRONTS, \n                 SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,\n                 mtxmanager, msglvl, msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   -----------------------------------------\n   STEP 6: compute the numeric factorization\n   -----------------------------------------\n*/\nchvmanager = ChvManager_new() ;\nChvManager_init(chvmanager, NO_LOCK, 1) ;\nDVzero(10, cpus) ;\nfacops = 0.0 ;\nFrontMtx_QR_factor(frontmtx, mtxA, chvmanager, \n                   cpus, &facops, msglvl, msgFile) ;\nChvManager_free(chvmanager) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n factor matrix\") ;\n   fprintf(msgFile, \"\\n facops = %9.2f\", facops) ;\n   FrontMtx_writeForHumanEye(frontmtx, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   --------------------------------------\n   STEP 7: post-process the factorization\n   --------------------------------------\n*/\nFrontMtx_postProcess(frontmtx, msglvl, msgFile) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n factor matrix after post-processing\") ;\n   FrontMtx_writeForHumanEye(frontmtx, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   -------------------------------\n   STEP 8: solve the linear system\n   -------------------------------\n*/\nmtxX = DenseMtx_new() ;\nDenseMtx_init(mtxX, type, 0, 0, neqns, nrhs, 1, neqns) ;\nFrontMtx_QR_solve(frontmtx, mtxA, mtxX, mtxB, mtxmanager,\n                  cpus, msglvl, msgFile) ;\nif ( msglvl > 1 ) {\n   fprintf(msgFile, \"\\n\\n solution matrix in new ordering\") ;\n   DenseMtx_writeForHumanEye(mtxX, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   -------------------------------------------------------\n   STEP 9: permute the solution into the original ordering\n   -------------------------------------------------------\n*/\nDenseMtx_permuteRows(mtxX, newToOldIV) ;\nif ( msglvl > 0 ) {\n   fprintf(msgFile, \"\\n\\n solution matrix in original ordering\") ;\n   DenseMtx_writeForHumanEye(mtxX, msgFile) ;\n   fflush(msgFile) ;\n}\n/*--------------------------------------------------------------------*/\n/*\n   ------------------------\n   free the working storage\n   ------------------------\n*/\nInpMtx_free(mtxA) ;\nFrontMtx_free(frontmtx) ;\nGraph_free(graph) ;\nDenseMtx_free(mtxX) ;\nDenseMtx_free(mtxB) ;\nETree_free(frontETree) ;\nIV_free(newToOldIV) ;\nIV_free(oldToNewIV) ;\nIVL_free(symbfacIVL) 
;\nSubMtxManager_free(mtxmanager) ;\n/*--------------------------------------------------------------------*/\nreturn(1) ; }\n/*--------------------------------------------------------------------*/\n\\end{verbatim}\n", "meta": {"hexsha": "f7962abab35545307efde9af6be3f269b22b258e", "size": 10403, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex", "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex", "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex", "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "avg_line_length": 34.6766666667, "max_line_length": 72, "alphanum_fraction": 0.4974526579, "num_tokens": 2796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.7248702702332475, "lm_q1q2_score": 0.5534266898082106}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath,amssymb}\n\\usepackage{bm}\n\n\\newcommand{\\uveci}{{\\bm{\\hat{\\textnormal{\\bfseries\\i}}}}}\n\\newcommand{\\uvecj}{{\\bm{\\hat{\\textnormal{\\bfseries\\j}}}}}\n\\newcommand{\\uveck}{{\\bm{\\hat{\\textnormal{\\bfseries\\k}}}}}\n\n\\title{\\bf{Order of Magnitude Analysis of Energy Flux into the SoV}}\n\\author{Nicholas Malaya and Robert D. Moser\\\\ Institute for Computational Engineering and Sciences \\\\ University of Texas at Austin} \\date{}\n\n\\begin{document}\n\\maketitle\n\nThis document details an attempt to provide a very rough estimate of the\ntotal energy flowing into the Solar Vortex apparatus. This information\nis desired to provide a context to the energy fluxes measured through\nthe top of the device. \n\nAt present, we consider only the energy flowing into the device due to\nthe ambient conditions, in particular, the incoming wind and heat\nthrough the front hemisphere of the circular device. We consider a large\n(3m radius) device incoming freestream velocity of 5 m/s. The surface\ntemperature is 343 Kelvin, with a specified inflow boundary layer\nbridging the ground temperature to the ambient air conditions of 313\nKelvin. These numbers were chosen based on information provided by the\nfield team in Arizona during the summer of 2014. \n\nThere are two forms of energy to consider: kinetic and enthalpy. We\nbegin by considering the kinetic energy flux through the front of the\napparatus. \n\n\\section*{Kinetic Energy Flux}\n\nFrom the first law of thermodynamics we can express the kinetic energy\nflux as a surface integral over the upstream face of the device, \n\\begin{equation*}\n\\int_{CS} \\frac{\\vec V^2}{2} \\rho \\vec V \\cdot \\hat n dA.\n\\end{equation*}\n%\n% could cite fluid dynamics book here\n% pg. 239\n%\n\nWhere, $\\vec V = u \\uveci + v \\uvecj + w \\uvecj.$ \nWe assume our freestream velocity has no components in y and z, \n$\\vec V = u(z) \\uveci + 0 \\uvecj + 0 \\uvecj.$ Note that the \nstreamwise velocity, u, is shown as having variation in z. \nThis variation in magnitude is entirely due to the boundary layer \nand will be expanded upon in this discussion shortly.\nThe surface of interest, S,\nis a perfect cylinder, and therefore has a surface element of, \n\\begin{equation*}\ndS = Rd\\theta dz. \n\\end{equation*}\nAn outward pointing vector from this surface has the form, \n\\begin{equation*}\nx \\uveci + y \\uvecj. \n\\end{equation*}\nThe normal vector is then,\n\\begin{equation*}\n\\hat n = \\frac{x \\uveci + y \\uvecj }{||x \\uveci + y \\uvecj||} =\n \\frac{r\\text{ cos}\\theta \\uveci + r\\text{ sin}\\theta \\uvecj}{r} =\n \\text{cos}\\theta \\uveci + \\text{sin}\\theta \\uvecj. \n\\end{equation*}\nOur integral now has the form, \n\\begin{align*}\n\\int_{CS} \\frac{\\vec V^2}{2} \\rho \\vec V \\cdot \\hat n dA & = R \\rho \\int\n \\int \\frac{(u(z) \\uveci)^2}{2} (u(z) \\uveci) \\cdot\n (\\text{cos}\\theta \\uveci + \\text{sin}\\theta \\uvecj) d\\theta dz \\\\\n & = \\frac{1}{2} R \\rho \\int^{z_\\text{max}}_0\n \\int^{\\frac{3\\pi}{2}}_{\\frac{\\pi}{2}}  u(z)^3 \\text{cos}\\theta d\\theta dz.\n\\end{align*}\nThis is seperable, \n\\begin{align*}\n & = \\frac{1}{2} R \\rho \\left( \\int^{z_\\text{max}}_0 u(z)^3 dz \\right)\n \\left(\\int^{\\frac{3\\pi}{2}}_{\\frac{\\pi}{2}} \\text{cos}\\theta d\\theta\n \\right) \\\\\n & = -R \\rho \\int^{z_\\text{max}}_0 u(z)^3 dz. 
\n\\end{align*}\n\nWe assume that the variation in z for the streamwise velocity is only on account \nof the thin boundary layer near the ground. We functionally approximate this behavior \nusing the common 7th power function for a turbulent boundary layer, \n\\begin{equation*}\n  u(z) = U \\text{ min }\\left(\\left(\\frac{z}{\\delta}\\right)^7,1\\right)\n\\end{equation*}\nwhere U is the constant freestream velocity and $\\delta$ the assumed boundary \nlayer thickness. Our integral now becomes, \n\\begin{align*}\n & = -R \\rho \\left[ \\int^{z_\\text{max}}_\\delta U^3 dz\n + \\int^{\\delta}_0 \\left(U\\left(\\frac{z}{\\delta}\\right)^7\n \\right)^3 dz \\right] \\\\\n & = -R \\rho \\left[ \\int^{z_\\text{max}}_\\delta U^3 dz\n + \\int^{\\delta}_0 U^3 \\left(\\frac{z}{\\delta} \\right)^{10} dz \\right] \\\\\n & = -R \\rho U^3 \\left[ z \\bigg|^{z_{\\text{max}}}_{\\delta}\n + \\int^{\\delta}_0 \\left(\\frac{z}{\\delta} \\right)^{10} dz \\right] \\\\\n& = -R \\rho U^3 \\left[ z_{\\text{max}} - \\frac{10}{11}\\delta.\n\\right]\n\\end{align*}\n\n%%  & = \\frac{1}{2} R \\rho u^3 z_\\text{max}\n%%  \\int^{\\frac{3\\pi}{2}}_{\\frac{\\pi}{2}} \\text{cos}\\theta d\\theta \\\\\n%%  & = -R \\rho u^3 z_\\text{max}. \n%% \\end{align*}\n\nThe negative sign here indicates that the kinetic energy is flowing into\nthe surface, in opposition to the outward facing unit normal, $\\hat\nn$. Characteristic values for this analysis are, $u = 5$ m/s, $\\rho =\n1.225$ Kg/$m^3$, $R = 3$ m, and $z_{\\text{max}} = 2.5$ m. The boundary\nlayer thickness is $\\approx 10$ cm. This provides\nan estimate of 1144.26 Watts as the incoming kinetic energy flux across\nthe SoV vanes. Or, approximately 1.14 kW. \n\nNote that this is only an accounting of the incoming flow energy. It is certainly\nimpossible to extract all this energy from the flow, where Betz-like considerations \nwould impose a maximum in the possible energy extraction. We only present an idealized\nestimate of the total energy available in the flow, to provide a very rough estimate.\n\n\\section*{Gravitational Potential Energy Flux}\n\n\nNow we estimate the gravitational potential energy flux by integrating \nthe boussinesq term by the height of the vanes, \n\\begin{align*}\n  \\text{Potential Energy Flux} & = \\int_{-h}^0 u(z) \\Delta \\rho g z dz. \\\\\n  & = \\int_{-h}^0 u(z) \\rho' g z^2 dz. \n\\end{align*}\nWhere the substitution, $\\Delta \\rho = \\rho' z$ was made. At \nthis point we again separate the integral into two components, \nfor the boundary layer and the constant freestream velocity region. \n\\begin{align*}\n  & = \\rho' g \\left[ \\int_{-\\delta}^{0} U \\left( \\frac{z}{\\delta} \\right)^7 z^2 dz \n      + \\int_{-h}^{-\\delta} U z^2 dz \\right] \\\\\n  & = -\\rho' g U \\left[ \\frac{h^3}{3} - \\frac{7}{30} \\delta^3 \\right].\n\\end{align*}\nFurthermore, we\nnote that $\\rho' = -\\beta \\rho_0 \\Delta T$, resulting in, \n%\n% cite monin-yaglom page 59\n%\n\\begin{equation}\n \\text{Power } = U \\beta \\rho_0 \\Delta T g \\left[ \\frac{h^3}{3} - \\frac{7}{30} \\delta^3 \\right].\n\\end{equation}\n\nUsing $\\rho_0 = 1.225$ Kg/$m^3$, $T_{\\text{ref}}=313$m Kelvin, $\\beta_T = 0.003194$\n(This is just 1/$T_{\\text{ref}}$), $g=9.81$ m/$s^2$, and a freesteam\nvelocity of five meters per second results in an\nestimate of 30 Watts for the gravitational potential energy. \n\n\\section*{Discussion}\n\nThese two quantities do not scale identically. 
In particular, \nthe kinetic energy scales linearly in height, while the gravitational\npotential energy holds a cubic power scaling. The dust devils observed\nin the field are often quite tall, so a reasonable question is if the\nheights are such that this occurs in a regime where the kinetic energy\nor the gravititaional potential energy (or both, equally) is dominant.  \n\n%\n% old discussion\n%\n% The thermal energy flux is essentially the specific enthalpy. Again \n% examining our 1st law energy balance rate equation, \n% \\begin{equation*}\n% \\frac{dE}{dt} = \\dot Q - \\dot W + \\dot m_{in}\\left( h_{in} +\n% \\frac{V_{in}^2}{2} + gz_{out} \\right) \n% - \\dot m_{out}\\left( h_{out} + \\frac{V_{out}^2}{2} + gz_{out} \\right).\n% \\end{equation*}\n% Now we make several convenient simplifying assumptions. \n% Namely, that the system is steady state ($\\frac{dE}{dt} = 0$), \n% there is no heat transfer with the surroundings ($\\dot Q = 0$), \n% and there are no appreciable contributions from gravitational \n% potential energy ($z_{out}-z_{in} = 0$). Furthermore, we \n% previously provided an estimate for the kinetic energy contribution. \n% Finally, we assume that the mass flux is approximately the same \n% between the inlet and outlet. \n\n% Our resulting equation is then, \n% \\begin{equation*}\n%   \\dot W_{\\text{thermal}} = \\dot m \\left(h_{in}-h_{out}\\right).\n% \\end{equation*}\n% At the very moderate temperatures considered, \n% for the working fluid (air), it is reasonable to model \n% the system as an ideal gas. Under these conditions, \n% the specific enthalpy will only depend on the temperature. \n\n% Our expression then becomes, \n% \\begin{equation*}\n%   \\dot W_{\\text{thermal}} = \\dot m \\left(h(T_{in})-h(T_{out})\\right).\n% \\end{equation*}\n% We now further approximate this system by making the \n% assumption that the specific heats are constant, or that, \n% \\begin{equation*}\n%   h(T_{in})-h(T_{out}) = c_p(T_{in}-T_{out})\n% \\end{equation*}\n% Here, $T_{out}$ is the ambient temperature. For $T_{in}$ we \n% approximate the varying vertical gradient by a constant value equal to the\n% mean value of the function over that interval, e.g.\n% \\begin{equation*}\n% \\bar f = \\frac{1}{b-a} \\int^b_a f(x) dx. \n% \\end{equation*}\n% With a gradient of $2/3$ Kelvin/meter this results in a delta T of\n% approximately 2 degrees Kelvin. $c_p$ of air at this temperature is\n% approximately 1006.5 kJ/(kg K). The resulting estimate of the thermal\n% energy is substantial: 69 kW, or 60x larger than the kinetic energy of\n% the wind. 
\n\n\\end{document}\n", "meta": {"hexsha": "c4a4ca26c555c6bf198961e1b764c563f292c9d2", "size": 8958, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "disputatio/causa/pe.tex", "max_stars_repo_name": "nicholasmalaya/paleologos", "max_stars_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-04T17:49:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-04T17:49:42.000Z", "max_issues_repo_path": "disputatio/causa/pe.tex", "max_issues_repo_name": "nicholasmalaya/paleologos", "max_issues_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "disputatio/causa/pe.tex", "max_forks_repo_name": "nicholasmalaya/paleologos", "max_forks_repo_head_hexsha": "11959056caa80d3c910759b714a0f8e42f986f0f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-01-04T16:08:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-16T19:34:24.000Z", "avg_line_length": 42.6571428571, "max_line_length": 140, "alphanum_fraction": 0.6997097566, "num_tokens": 2845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.72487026428967, "lm_q1q2_score": 0.5534266852703856}}
{"text": "% Created 2021-08-30 Mon 12:43\n% Intended LaTeX compiler: pdflatex\n\\documentclass[letterpaper]{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\usepackage[top=10mm, bottom=10mm]{geometry}\n\\author{Kjartan Halvorsen}\n\\date{}\n\\title{Pole placement, root locus and lead-lag}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Pole placement, root locus and lead-lag},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\\section*{The inverted pendulum}\n\\label{sec:orga27804c}\nThe friction-less, inverted pendulum has two poles symmetric about the imaginary axis.\n\n\\subsection*{PD-control}\n\\label{sec:org2447a44}\n\\def\\omegazero{1}\n     \\begin{center}\n       \\small\n       \\begin{tikzpicture}[scale = 0.8, node distance=20mm, block/.style={rectangle, draw, minimum width=12mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n       \n       \\node[coordinate] (refinput) {};\n       \\node[sumnode, right of=refinput, node distance=20mm] (sumerr) {\\tiny $\\sum$};\n       \\node[block, right of=sumerr] (controller) {$K(2s + a)$};\n       \\node[above of=controller, node distance=6mm] {controller};\n       \\node[block, right of=controller, node distance=24mm] (plant) {$\\frac{1}{s^2 - a^2}$};\n       \\node[above of=plant, node distance=6mm] {plant};\n       \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n       \\draw[->] (refinput) -- node[above, pos=0.3] {$y_{ref}(t)$} (sumerr);\n       \\draw[->] (sumerr) -- node[above] {$e(t)$} (controller);\n       \\draw[->] (controller) -- node[above] {$u(t)$} (plant);\n       \\draw[->] (plant) -- node[coordinate] (measure) {} node[above, pos=0.8] {$y(t)$} (output);\n       \\draw[->] (measure) -- ++(0,-14mm) -| node[right, pos=0.95] {$-$} (sumerr);\n\n    \\begin{axis} [\n        xshift = 12cm,\n        yshift = -3cm,\n        width=12cm,\n        height=8cm,\n        axis lines=middle,\n        axis line style={->},\n        xtick={-1, 1},\n        ytick={-1, 1},\n        xticklabels={$-a$, $a$},\n        xmin=-6,\n        xmax=2,\n        ymin=-3,\n        ymax=3,\n        ytick=\\empty,\n        xlabel=Re,\n        ylabel=Im,\n        ]\n        \n        \\addplot [ thick,black, mark=x, mark size=6pt, only marks] coordinates { (-\\omegazero,0) (\\omegazero,0) }; \n        %\\addplot [ thick,black, mark=o, mark size=6pt] coordinates { (-0.5,0) }; \n        \n      %\\node[coordinate, pin={[pin distance=3cm] 135:{3 startpunkter}}] at (axis cs:0,0) {};\n      %\\node[coordinate, pin={[pin distance=2.5cm] -135:{2 \u00e4ndpunkter}}] at (axis cs:-0.5,0) {};\n    \\end{axis}\n\n       \\end{tikzpicture}\n\n\n      \n     \\end{center}\n\n\\subsection*{Modified PD-control}\n\\label{sec:orgdc47f6d}\n \\def\\omegazero{1}\n \\begin{center}\n   \\small\n   \\begin{tikzpicture}[scale = 0.8, node distance=20mm, block/.style={rectangle, draw, minimum width=12mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n   \\node[coordinate] (refinput) {};\n   \\node[sumnode, right of=refinput, node distance=20mm] (sumerr) {\\tiny $\\sum$};\n   \\node[block, right of=sumerr] (controller) {$K\\frac{s + 0.5a}{s+5a}$};\n   
\\node[above of=controller, node distance=6mm] {controller};\n   \\node[block, right of=controller, node distance=24mm] (plant) {$\\frac{1}{s^2 - a^2}$};\n   \\node[above of=plant, node distance=6mm] {plant};\n   \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n   \\draw[->] (refinput) -- node[above, pos=0.3] {$y_{ref}(t)$} (sumerr);\n   \\draw[->] (sumerr) -- node[above] {$e(t)$} (controller);\n   \\draw[->] (controller) -- node[above] {$u(t)$} (plant);\n   \\draw[->] (plant) -- node[coordinate] (measure) {} node[above, pos=0.8] {$y(t)$} (output);\n   \\draw[->] (measure) -- ++(0,-14mm) -| node[right, pos=0.95] {$-$} (sumerr);\n\n\\begin{axis} [\n    xshift = 12cm,\n    yshift = -3cm,\n    width=12cm,\n    height=8cm,\n    axis lines=middle,\n    axis line style={->},\n    xtick={-1, 1},\n    ytick={-1, 1},\n    xticklabels={$-a$, $a$},\n    xmin=-7,\n    xmax=2,\n    ymin=-3,\n    ymax=3,\n    ytick=\\empty,\n    xlabel=Re,\n    ylabel=Im,\n    ]\n\n    \\addplot [ thick,black, mark=x, mark size=6pt, only marks] coordinates { (-\\omegazero,0) (\\omegazero,0) }; \n    %\\addplot [ thick,black, mark=o, mark size=6pt] coordinates { (-0.5,0) }; \n\n  %\\node[coordinate, pin={[pin distance=3cm] 135:{3 startpunkter}}] at (axis cs:0,0) {};\n  %\\node[coordinate, pin={[pin distance=2.5cm] -135:{2 \u00e4ndpunkter}}] at (axis cs:-0.5,0) {};\n\\end{axis}\n\n   \\end{tikzpicture}\n\n\n\n \\end{center}\n\n\n\n\n\n\\section*{The DC-motor}\n\\label{sec:org5250fe6}\nThe dynamics of the velocity of the DC motor is a first-order system when the input is armature voltage. By integrating once, we get the shaft angle as the output.\n\n\\subsection*{PD-control}\n\\label{sec:org5073a75}\n\\def\\omegazero{1}\n     \\begin{center}\n       \\small\n       \\begin{tikzpicture}[scale = 0.6, node distance=20mm, block/.style={rectangle, draw, minimum width=12mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n       \n       \\node[coordinate] (refinput) {};\n       \\node[sumnode, right of=refinput, node distance=20mm] (sumerr) {\\tiny $\\sum$};\n       \\node[block, right of=sumerr] (controller) {$K(s + 2a)$};\n       \\node[above of=controller, node distance=6mm] {controller};\n       \\node[block, right of=controller, node distance=24mm] (plant) {$\\frac{k}{s+a}$};\n       \\node[block, right of=plant, node distance=20mm] (int) {$\\frac{1}{s}$};\n       %\\node[above of=plant, node distance=6mm] {plant};\n       \\node[coordinate, right of=int, node distance=20mm] (output) {};\n\n       \\draw[->] (refinput) -- node[above, pos=0.3] {$y_{ref}(t)$} (sumerr);\n       \\draw[->] (sumerr) -- node[above] {$e(t)$} (controller);\n       \\draw[->] (controller) -- node[above] {$u(t)$} (plant);\n       \\draw[->] (plant) -- node[above] {} (int);\n       \\draw[->] (int) -- node[coordinate] (measure) {} node[above, pos=0.8] {$y(t)$} (output);\n       \\draw[->] (measure) -- ++(0,-14mm) -| node[right, pos=0.95] {$-$} (sumerr);\n\n    \\begin{axis} [\n        xshift = 18.5cm,\n        yshift = -3cm,\n        width=12cm,\n        height=8cm,\n        axis lines=middle,\n        axis line style={->},\n        xtick={-1, 0},\n        ytick={-1, 0},\n        xticklabels={$-a$, $0$},\n        xmin=-6,\n        xmax=2,\n        ymin=-3,\n        ymax=3,\n        ytick=\\empty,\n        xlabel=Re,\n        ylabel=Im,\n        ]\n        \n        \\addplot [ thick,black, mark=x, mark size=6pt, only marks] coordinates { (-\\omegazero,0) (0,0) }; \n        %\\addplot [ thick,black, mark=o, mark size=6pt] 
coordinates { (-0.5,0) }; \n        \n      %\\node[coordinate, pin={[pin distance=3cm] 135:{3 startpunkter}}] at (axis cs:0,0) {};\n      %\\node[coordinate, pin={[pin distance=2.5cm] -135:{2 \u00e4ndpunkter}}] at (axis cs:-0.5,0) {};\n    \\end{axis}\n\n       \\end{tikzpicture}\n\n\n      \n     \\end{center}\n\n\\subsection*{Lag-compensator}\n\\label{sec:orgef421f0}\nA lag compensator is a controller with one pole and one zero, where the pole is closer to the origin than the zero. Such a compensator increases the gain at low frequencies, at the expense of a more oscillatory response.\n\n \\def\\omegazero{2}\n \\begin{center}\n   \\small\n   \\begin{tikzpicture}[scale = 0.6, node distance=20mm, block/.style={rectangle, draw, minimum width=12mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n   \\node[coordinate] (refinput) {};\n   \\node[sumnode, right of=refinput, node distance=20mm] (sumerr) {\\tiny $\\sum$};\n   \\node[block, right of=sumerr] (controller) {$K\\frac{s + a}{4s+a}$};\n   \\node[above of=controller, node distance=6mm] {controller};\n   \\node[block, right of=controller, node distance=24mm] (plant) {$\\frac{k}{s+a}$};\n   \\node[block, right of=plant, node distance=20mm] (int) {$\\frac{1}{s}$};\n   %\\node[above of=plant, node distance=6mm] {plant};\n   \\node[coordinate, right of=int, node distance=20mm] (output) {};\n\n   \\draw[->] (refinput) -- node[above, pos=0.3] {$y_{ref}(t)$} (sumerr);\n   \\draw[->] (sumerr) -- node[above] {$e(t)$} (controller);\n   \\draw[->] (controller) -- node[above] {$u(t)$} (plant);\n   \\draw[->] (plant) -- node[above] {} (int);\n   \\draw[->] (int) -- node[coordinate] (measure) {} node[above, pos=0.8] {$y(t)$} (output);\n   \\draw[->] (measure) -- ++(0,-14mm) -| node[right, pos=0.95] {$-$} (sumerr);\n\n\\begin{axis} [\n    xshift = 18.5cm,\n    yshift = -3cm,\n    width=12cm,\n    height=8cm,\n    axis lines=middle,\n    axis line style={->},\n    xtick={-1, 0},\n    ytick={-1, 0},\n    xticklabels={$-a$, $0$},\n    xmin=-6,\n    xmax=2,\n    ymin=-3,\n    ymax=3,\n    ytick=\\empty,\n    xlabel=Re,\n    ylabel=Im,\n    ]\n\n    \\addplot [ thick,black, mark=x, mark size=6pt, only marks] coordinates { (-\\omegazero,0) (0,0) }; \n    %\\addplot [ thick,black, mark=o, mark size=6pt] coordinates { (-0.5,0) }; \n\n  %\\node[coordinate, pin={[pin distance=3cm] 135:{3 startpunkter}}] at (axis cs:0,0) {};\n  %\\node[coordinate, pin={[pin distance=2.5cm] -135:{2 \u00e4ndpunkter}}] at (axis cs:-0.5,0) {};\n\\end{axis}\n\n   \\end{tikzpicture}\n\n\n \\end{center}\n\\end{document}", "meta": {"hexsha": "3088f21732842501bd469cd7cb5a6cffea88ece4", "size": 9273, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classic-control/exercises/root-locus-exercise.tex", "max_stars_repo_name": "kjartan-at-tec/mr2025", "max_stars_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classic-control/exercises/root-locus-exercise.tex", "max_issues_repo_name": "kjartan-at-tec/mr2025", "max_issues_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classic-control/exercises/root-locus-exercise.tex", 
"max_forks_repo_name": "kjartan-at-tec/mr2025", "max_forks_repo_head_hexsha": "88c28aa76e84890c25d252167e5bbcd25318463e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.393129771, "max_line_length": 222, "alphanum_fraction": 0.5986196484, "num_tokens": 3221, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8104789109591832, "lm_q1q2_score": 0.5532116169158433}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{September 17, 2014}\n\\maketitle\n\\section*{2.7a}\nshow that $(a_n)=\\left(\\frac{n\\cos^n(n)}{\\sqrt{n^2+2n}}\\right)_{n=1}^\\infty$ has a convergent sequence\n\n$n/\\sqrt{n^2+2n}<1$ and $\\left\\lvert\\cos^nn\\right\\rvert\\le1$\n\\section*{2.7b}\n\\begin{align*}\n  n+\\cos(n\\pi)\\sqrt{n^+1}\n\\end{align*}\nbounded below because even terms are increasing, odd terms bounded by 0 and $1-\\sqrt{2}$. odd subsequence is bounded, so there is a convergent subsequence\n\n\\section*{2.8}\nif a sequence is convergent to $L$ then $\\forall\\varepsilon>0\\exists N\\in\\mathbb{N}$ such that $\\left\\lvert a_n-L\\right\\rvert<\\varepsilon$ if $n\\ge N$\n\nwhere cauchy sequence is $\\forall\\varepsilon>0\\exists N\\in\\mathbb{N}$ such that $\\left\\lvert a_n-a_m\\right\\rvert<\\varepsilon\\forall n,m\\ge N$\n\n\\subsection*{theorem}\nevery convergent sequence is cauchy\n\nevery cauchy sequence of reals converges to a real\n\nnot every cauchy sequence of rationals converges to a rational, although it will converge to a real.\n\ndefinition:\n\na subset $S$ is said to be complete if every Cauchy sequence $(a_n)$ in $S$ (that is $a_n\\in S$) converges to a point in $S$\n\nreals are complete, rationals are not\n\n\\section*{completeness theorem}\nevery cauchy sequence of real numbers converges. so $\\mathbb{R}$ is complete\n\n\\subsubsection*{proof}\nlet $(a_n)$ be cauchy. then $\\forall\\varepsilon>0\\exists N$ such that if $n,m\\ge N$ then $\\left\\lvert a_n-a_m\\right\\rvert<\\varepsilon$.\n\nstep 1\n\n$(a_n)$ is bounded.\n\ngiven $\\varepsilon=1, \\exists N_1$ succh that $\\left\\lvert a_n-a_m\\right\\rvert<1\\forall n,m\\ge  N_1$. 
in particulare $\\left\\lvert a_n-a_{N_1}\\right\\rvert<\\forall n\\ge N_1$ so $-1+a_{N_1}<a_n<1+a_{N_1}$ and $a_1,\\dots a_{N_1}$ is finite so it is bounded.\n\nstep 2\n\nBW saays there is a subsequence $(a_{n_k})$ converging to $L\\in\\mathbb{R}$.\n\nstep 3\n\nthe whole sequence onverges to some $L$\n\n$\\left\\lvert a_n-L\\right\\rvert=\\left\\lvert a_n-a_{n_k}+a_{n_k}-L\\right\\rvert\\le\\left\\lvert a_n-a_{n_k}\\right\\rvert+\\left\\lvert a_{n_k}-L\\right\\rvert<2\\varepsilon$\n\n\\section*{example}\ncontractive sequences $\\left\\lvert a_{n+1}-a_n\\right\\rvert\\le\\lambda\\left\\lvert a_n-a_{n-1}\\right\\rvert,\\lambda\\in(0,1)$\n\nlimit of $\\lambda^n$ as n aproaches infinity is 0.\n\nlimit of $\\left\\lvert a_{n+1}-a_n\\right\\rvert=0$, telescoping $|a_n-a_n|=|a_n-a_{m-1}+a_{m-1-\\dots-a_{n+1}a_{n+1}-a_n}\\le$\n\\end{document}\n", "meta": {"hexsha": "b1043f5ac9f03efb90a68951fd79f43c09b79b1f", "size": 2583, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-09-17.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-09-17.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-09-17.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3835616438, "max_line_length": 253, "alphanum_fraction": 0.7251258227, "num_tokens": 956, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6825737344123242, "lm_q2_score": 0.8104789018037399, "lm_q1q2_score": 0.5532116106665781}}
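A concrete instance of the last two remarks (an added example, not in the original notes; it relies only on the telescoping bound above): take $a_1 = 2$ and $a_{n+1} = a_n/2 + 1/a_n$. Every $a_n$ is rational, and on the invariant interval $[\sqrt{2}, 2]$ the map $f(x) = x/2 + 1/x$ satisfies $|f'(x)| = |1/2 - 1/x^2| \le 1/4$, so
\[
\abs{a_{n+1}-a_n}\le\tfrac{1}{4}\abs{a_n-a_{n-1}},
\]
a contractive sequence with $\lambda = 1/4$. It is therefore cauchy, but its limit is $\sqrt{2}\notin\mathbb{Q}$: a cauchy sequence of rationals need not converge to a rational.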
{"text": "% Created 2021-07-20 Tue 10:14\n% Intended LaTeX compiler: pdflatex\n\\documentclass[presentation,aspectratio=169]{beamer}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{graphicx}\n\\usepackage{grffile}\n\\usepackage{longtable}\n\\usepackage{wrapfig}\n\\usepackage{rotating}\n\\usepackage[normalem]{ulem}\n\\usepackage{amsmath}\n\\usepackage{textcomp}\n\\usepackage{amssymb}\n\\usepackage{capt-of}\n\\usepackage{hyperref}\n\\usepackage{khpreamble}\n\\usepackage{amssymb}\n\\DeclareMathOperator{\\shift}{q}\n\\DeclareMathOperator{\\diff}{p}\n\\usetheme{default}\n\\author{Kjartan Halvorsen}\n\\date{\\today}\n\\title{Polynomial pole placement - part 2}\n\\hypersetup{\n pdfauthor={Kjartan Halvorsen},\n pdftitle={Polynomial pole placement - part 2},\n pdfkeywords={},\n pdfsubject={},\n pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, \n pdflang={English}}\n\\begin{document}\n\n\\maketitle\n\n\n\\section{Intro}\n\\label{sec:org8bb9cb2}\n\n\\begin{frame}[label={sec:orgef2154c}]{Goal of today's lecture}\nUnderstand the design procedure of polynomial pole placement\n\\end{frame}\n\n\n\\section{2-dof controller}\n\\label{sec:orga4e419a}\n\n\\begin{frame}[label={sec:org6dfc545}]{Two-degree-of-freedom controller}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/2dof-block-explicit}\n\\end{center}\n\n\\begin{align*}\nY(z)     &= \\frac{F_f(z)H(z)}{1 + z^{-d}F_b(z)H(z)}U_c(z) + \\overbrace{\\frac{1}{1 + z^{-d}F_b(z)H(z)}}^{S_s(z)}V(z)  - \\overbrace{\\frac{z^{-d}F_b(z)H(z)}{1 + z^{-d}F_b(z)H(z)}}^{T_s(z)}N(z)\\\\\n\\end{align*}\n\n\\alert{Evidently} \\(S_s(z) + T_s(z) = 1\\) \\alert{Conclusion:} One must find a balance between disturbance rejection and noise attenuation.\n\\end{frame}\n\n\n\\section{Sensitivity, revisited}\n\\label{sec:orgaa3ab2a}\n\\begin{frame}[label={sec:org523ac99}]{The sensitivity function}\n\\[S_s(z) = \\frac{1}{1 + z^{-d}F_b(z)H(z)} = \\frac{1}{1 + G_o(z)}= \\frac{1}{G_o(z) - (-1)}\\]\n\n\n\\begin{columns}\n\\begin{column}{0.45\\columnwidth}\n\\[|S_s(\\mathrm{e}^{i\\omega h})| = |S_s(i\\omega)| = \\frac{1}{| G_o(i\\omega) - (-1)|}\\]\n\n\\alert{The magnitude of the sensitivity function is inverse proportional to the distance of the Nyquist curve to the critical point -1}\n\\end{column}\n\n\\begin{column}{0.65\\columnwidth}\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{../../figures/implane-nyquist-margins}\n\\end{center}\n\\end{column}\n\\end{columns}\n\\end{frame}\n\n\n\n\\section{RST}\n\\label{sec:orgb0db44e}\n\n\\begin{frame}[label={sec:org73a5512}]{The design procedure}\n\\end{frame}\n\\begin{frame}[label={sec:org07f51fc}]{The design procedure}\nGiven plant model \\(H(z)=\\frac{B(z)}{A(z)}\\) and specifications on the desired closed-loop poles \\(A_{cl}(z)\\)\n\\begin{enumerate}\n\\item Find polynomials \\(R(z)\\) and \\(S(z)\\) with \\(n_R \\ge n_S\\) such that \n\\[ A(z)R(z)z^{d} + B(z)S(z) = A_{cl}(z) \\]\n\\item Factor the closed-loop polynomials as \\(A_{cl}(z) = A_c(z)A_o(z)\\), where \\(n_{A_o} \\le n_R\\). Choose\n\\[T(z) = t_0 A_o(z),\\] where \\(t_0 = \\frac{A_c(1)}{B(1)}\\).\n\\end{enumerate}\n\nThe control law is then\n\\[ R(q) u(k) = T(q)u_c(k) - S(q)y(k). \\]\nThe closed-loop response to the command signal is given by\n\\[ A_c(q)y(k) = t_0 B(q) u_c(k). 
\\]\n\\end{frame}\n\\begin{frame}[label={sec:org1a53433}]{Determining the order of the controller}\nWith the Diophantine equation \n   \\[ A(z)R(z)z^{d} + B(z)S(z) = A_{cl}(z) \\qquad (*) \\]\nand feedback controller\n\\[F_b(z) = \\frac{S(z)}{R(z)} = \\frac{s_0z^n + s_1z^{n-1} + \\cdots + s_n}{z^n + r_1 z^{n-1} + \\cdots + r_n}\\]\n\\alert{How should we choose the order of the controller?} Note:\n\\begin{itemize}\n\\item the controller has \\(n+n+1 = 2\\deg R + 1\\) unknown parameters\n\\item the LHS of \\((*)\\) has degree \\(\\deg \\big(A(z)R(z)z^d + B(z)S(z)\\big) = \\deg A + \\deg R + d\\)\n\\item The Diophantine equation gives as many (nontrivial) equations as the degree of the polynomials on each side when we set the coefficients equal.\n\n\\alert{\\(\\Rightarrow\\;\\)Choose \\(\\deg R\\) so that \\(2\\deg R + 1 = \\deg A + \\deg R + d\\)}\n\\end{itemize}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:org2ebbb1d}]{Determining the order of the controller - Exercise}\nWith the plant model \\[H(z) = \\frac{B(z)}{A(z)} = \\frac{b}{z + a}\\] and \\(d=0\\) (no delay), what is the appropriate degree of the controller \n\\[F_b(z) = \\frac{S(z)}{R(z)} = \\frac{s_0z^n + s_1z^{n-1} + \\cdots + s_n}{z^n + r_1 z^{n-1} + \\cdots + r_n}\\]\nso that all parameters can be determined from the Diophantine equation\n\\[ A(z)R(z) + B(z)S(z) = A_c(z)A_o(z)?\\]\n\\begin{center}\n\\begin{tabular}{ll}\n1. \\(n = 0\\) & 2. \\(n = 1\\)\\\\\n3. \\(n=2\\) & 4. \\(n=3\\)\\\\\n\\end{tabular}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgefa8757}]{Determining the order of the controller - Exercise - Solution}\nWith the plant model \\[H(z) = \\frac{B(z)}{A(z)} = \\frac{b}{z + a}\\] and \\(d=0\\) (no delay), what is the appropriate degree of the controller \\[F_b(z) = \\frac{S(z)}{R(z)} = \\frac{s_0z^n + s_1z^{n-1} + \\cdots + s_n}{z^n + r_1 z^{n-1} + \\cdots + r_n}\\]\nso that all parameters can be determined from the Diophantine equation\n\\[ A(z)R(z) + B(z)S(z) = A_c(z)A_o(z)?\\]\n\\begin{center}\n\\begin{tabular}{rr}\n1. \\(n = 0\\) & 2.\\\\\n3. & 4.\\\\\n\\end{tabular}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgcbd8cc2}]{Two-degree-of-freedom controller, the importance of the observer poles}\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{../../figures/2dof-block-explicit}\n\\end{center}\n\\begin{align*}\nY(z) &= \\frac{t_0B(z)z^d}{A_c(z)}U_c(z) + \\frac{A(z)R(z)z^d}{A_c(z)A_o(z)}V(z)- \\frac{S(z)B(z)}{A_c(z)A_o(z)}N(z)\n\\end{align*}\n\\alert{Conclusions} 1) There is a partial separation between designing for reference tracking and designing for disturbance rejection. 
2) The observer poles (the roots of \\(A_o(z)\\)) can be used to determine a balance between disturbance rejection and noise attenuation.\n\\end{frame}\n\n\n\n\n\\section{Example}\n\\label{sec:orgabd7f07}\n\\begin{frame}[label={sec:org2ecba8c}]{Example - Level control of a dam}\n\\begin{center}\n\\includegraphics[width=0.5\\linewidth]{../../figures/kraftverk}\n\\end{center}\n\n\\alert{Objective} Design a control system to maintain the water level under the influence of disturbances.\n\\end{frame}\n\n\\begin{frame}[label={sec:org3c2cd7f}]{Example - Level control of a dam}\n\\begin{center}\n\\includegraphics[width=0.3\\linewidth]{../../figures/kraftverk}\n\\end{center}\n\n\\alert{The process dynamics}\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\node at (0,0) {$y(k) = y(k-1) -v(k-1) + u(k-2)$};\n    \\node[coordinate, pin=140:{Change in the water level}] at (-2.6,0.2) {};\n    \\node[coordinate, pin=-140:{Change in uncontrolled flows}] at (0.8,-0.2) {};\n    \\node[coordinate, pin=60:{Change in controlled flow}] at (2,0.2) {};\n\\end{tikzpicture}\n\\end{center}\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input, node distance=20mm] (delay)  {$z^{-1}$};\n    \\node[sumnode, right of=delay, node distance=16mm] (sum) {\\tiny $\\Sigma$};\n    \\node[block, right of=sum, node distance=20mm] (plant)  {$H_p(z)$};\n    \\node[coordinate, above of=sum, node distance=12mm] (disturbance) {};\n    \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (delay);\n    \\draw[->] (sum) -- node[above] {} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n    \\draw[->] (disturbance) -- node[right, pos=0.2] {$v(k)$} node[left, pos=0.8] {$-$} (sum);\n    \\draw[->] (delay) -- (sum);\n  \\end{tikzpicture}\n\\end{center}\n\\end{frame}\n\n\\begin{frame}[label={sec:org61013f5}]{Example - Level control of a dam}\n\\alert{The process dynamics}\n\n\\begin{center}\n  \\begin{tikzpicture}\n    \\node at (0,0) {$y(k) = y(k-1) -v(k-1) + u(k-2)$};\n    \\node[coordinate, pin=140:{Change in the water level}] at (-2.6,0.2) {};\n    \\node[coordinate, pin=-140:{Change in uncontrolled flows}] at (0.8,-0.2) {};\n    \\node[coordinate, pin=60:{Change in controlled flow}] at (2,0.2) {};\n\\end{tikzpicture}\n\\end{center}\n\\begin{center}\n  \\begin{tikzpicture}[node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}]\n\n    \\node[coordinate] (input) {};\n    \\node[block, right of=input, node distance=20mm] (delay)  {$z^{-1}$};\n    \\node[sumnode, right of=delay, node distance=16mm] (sum) {\\tiny $\\Sigma$};\n    \\node[block, right of=sum, node distance=20mm] (plant)  {$H_p(z)$};\n    \\node[coordinate, above of=sum, node distance=12mm] (disturbance) {};\n    \\node[coordinate, right of=plant, node distance=20mm] (output) {};\n\n    \\draw[->] (input) -- node[above, pos=0.3] {$u(k)$} (delay);\n    \\draw[->] (sum) -- node[above] {} (plant);\n    \\draw[->] (plant) -- node[above, near end] {$y(k)$} (output);\n    \\draw[->] (disturbance) -- node[right, pos=0.2] {$v(k)$} node[left, pos=0.8] {$-$} (sum);\n    \\draw[->] (delay) -- (sum);\n  \\end{tikzpicture}\n\\end{center}\n\\alert{Activity} What is the transfer function from \\(u(k)\\) to 
\\(y(k)\\)?\n\n\\begin{center}\n\\begin{tabular}{lll}\n1: \\(H(z) = \\frac{z}{z-1}\\) & 2: \\(H(z)=\\frac{1}{z-1}\\) & 3: \\(H(z)=\\frac{1}{z(z-1)}\\)\\\\\n\\end{tabular}\n\\end{center}\n\\end{frame}\n\n\n\\begin{frame}[label={sec:orgd0cf498}]{Example - Level control of a dam}\nGiven process \\(H(z) = \\frac{B(z)}{A(z)} = \\frac{1}{z(z-1)}\\) and desired poles at \\(z=0.9\\).\n\n\\begin{enumerate}\n\\item The Diophantine equation \\(A(z)R(z)z^d + B(z)S(z) = A_{cl}(z)\\)\n\\[ z(z-1)R(z) + S(z) = A_{cl}(z)\\]\nThe order of the controller is\n\\[\\deg R = \\deg A + d - 1 = 2-1 = 1, \\quad \\Rightarrow \\quad F_b(z)=\\frac{S(z)}{R(z)} = \\frac{s_0z + s_1}{z + r_1}\\]\n\\item Resulting Diophantine equation\n\\[ z(z-1)(z+r_1) + s_0z + s_1 = A_{cl}(z)\\]\nThe degree of \\(A_{cl}(z)\\) is 3. Choose \\(A_o(z) = z\\),  ( \\(\\deg A_o = \\deg R\\)) \n\\[ A_{cl}(z) = A_o(z) A_c(z) = z(z-0.9)^2\\]\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}[label={sec:orgc75df91}]{Example - Level control of a dam}\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\item From the Diophantine equation \\[ z(z-1)(z+r_1) + s_0z + s_1 = z(z-0.9)^2\\]\n\\[ z^3 + (r_1-1)z^2 - r_1z + s_0z + s_1 = z^3 -1.8z^2 + 0.81z\\]\nwe obtain the equations\n\\begin{align*}\n\\begin{cases} z^2 &: \\quad r_1-1 = -1.8\\\\\nz^1 &: \\quad -r_1 + s_0 = 0.81\\\\\nz^0 &: \\quad s_1 = 0\n\\end{cases}\n\\quad \\Rightarrow \\quad \n\\begin{cases} r_1 &= -0.8\\\\ s_0 &= 0.01\\\\ s_1 &=0 \\end{cases}\n\\end{align*}\n\\[F_b(z) = \\frac{0.01z}{z - 0.8}\\]\n\\end{enumerate}\n\\end{frame}\n\n\\begin{frame}[label={sec:org2dcdae7}]{Example - Level control of a dam}\n\\begin{enumerate}\n\\setcounter{enumi}{3}\n\\item We have \\(A_o(z) = z\\), so \n\\[T(z) = t_0A_o(z) = t_0z\\]\n\\[G_c(z) = \\frac{T(z)B(z)}{A_o(z)A_c(z)} = \\frac{t_0 B(z)}{A_c(z)}, \\quad \\text{we want}\\, G_c(1)=1\\]\n\\[ t_0 = \\frac{A_c(1)}{B(1)} = \\frac{(1-0.9)^2}{1} = 0.01\\]\n\\end{enumerate}\n\n\\alert{Control law}\n\\[R(\\shift) u(kh) = T(\\shift)u_c(kh) - S(\\shift)y(kh)\\]\n\\[ (\\shift - 0.8)u(kh) = 0.01\\shift u_c(kh) - 0.01\\shift y(kh)\\]\n\\[ u(kh+h) = 0.8u(kh) + 0.01 u_c(kh+h) - 0.01y(kh+h)\\]\n\\end{frame}\n\\end{document}", "meta": {"hexsha": "af2c642a40365fa451fb9349d724b6c65276a140", "size": 10882, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "polynomial-design/slides/lecture-polynomial-design-2.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": 
"2021-03-14T03:55:27.000Z", "avg_line_length": 37.9163763066, "max_line_length": 271, "alphanum_fraction": 0.6404153648, "num_tokens": 4208, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936537604181, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5530672432169654}}
{"text": "\\documentclass{article}\n\\usepackage{fullpage}\n\\usepackage{nopageno} \n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{March 7, 2014}\n\\maketitle\n\\section*{worksheet}\n\\subsection*{2}\nmultinomial theorem\n\\[(x_1+x_2+\\dots+x_t)^n=\\sum\\limits{\\binom{n}{r_1}\\binom{n-r_1}{r_z}\\dots\\binom{n-r_1-\\dots-r_{t-1}}{r_t}{x_1}^{r_1}{x_2}^{r_2}\\dots{x_t}^{r_t}}\\]\ncombinatorial proof sketch\n\n\\emph{``would be sufficient for an exam''}\n\\begin{align*}\n  (x_1+\\dots+x_t)(x_1+\\dots+x_t)\\dots(x_1+\\dots+x_t)\n\\end{align*}\nwhen you multiply out, each term is a word in the letters $x_1,\\dots,x_t$\n\\begin{align*}\n  \\underbrace{x_2x_5x_2x_1x_3x_1}_n\n\\end{align*}\nso the coefficient of ${x_1}^{r_1}{x_2}^{r_2}\\dots{x_t}^{r_t}$ counts \\# words of length $n$ with $\\underbrace{r_ix_i}_{\\text{letter distribution}}$'s\n\\section*{example 39}\nfind the coefficient of ${x_1}^3x_2{x_3}^4{x_5}^2$ in $(x_1+\\dots+x_5)^{10}$. $\\frac{10!}{3!1!4!2!}$\n\\section*{other example}\nwhat is $\\sum\\limits_{n_1+n_2+n_3=n}{\\binom{n}{n_1,n_2,n_3}(-1)^{n_2}}$. use multinomial thm, plug in 1,-1,1 for x's\n\\section*{other example}\nwhat is $\\sum\\limits_{n_1+n_2+n_3=n}{\\binom{n}{n_1,n_2,n_3,n_4}(-1)^{n_2+n_4}}=(1-1+1-1)^n=0$\n\\section*{chapter 6}\n\\subsection*{principle of inclusion-exclusion}\n$\\abs{A_1\\cup A_2\\cup A_3\\dots\\cup A_n}=\\abs{\\cup_{i=1}^n{A_i}}=\\sum\\limits_{i=1}^n{\\abs{A_1}}-\\sum\\limits_{i\\ne j}{\\abs{A_i\\cap A_j}}+\\sum\\limits{\\abs{A_i\\cap A_j\\cap A_k}}-\\dots+(-1)^{n+1}\\abs{A_1\\cap\\dots\\cap A_n}$\n\\subsection*{application 1}\nfind \\# integers between 1 and 1000. that are not divisible by 5,6, or 8.\n$A_i=$set of integers between 1-1000 divisible by $i$. consider $A_5,A_6,A_8$\n\nwe are looking for $A_5\\cup A_6\\cup A_8$\n\n$\\left\\lvert A_5\\right\\rvert+\\left\\lvert A_6\\right\\rvert+\\left\\lvert A_8\\right\\rvert-\\left\\lvert A_5\\cap A_6\\right\\rvert-\\left\\lvert A_5\\cap A_8\\right\\rvert-\\left\\lvert A_6\\cap A_8\\right\\rvert+\\left\\lvert A_5\\cap A_6\\cap A_8\\right\\rvert$\n\\end{document}\n", "meta": {"hexsha": "eb12ab5ae9ed7165f89c098c846339035b42ef93", "size": 2037, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "combinatorics/combinatorics-notes-2014-03-07.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "combinatorics/combinatorics-notes-2014-03-07.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "combinatorics/combinatorics-notes-2014-03-07.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.2826086957, "max_line_length": 237, "alphanum_fraction": 0.7029945999, "num_tokens": 905, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.7217432182679956, "lm_q2_score": 0.7662936430859598, "lm_q1q2_score": 0.5530672400991674}}
{"text": "\\documentclass[10pt]{article}\n\\author{Alex Peyrard}\n\\title{Digital Image Processing}\n\\usepackage{graphicx}\n\n\\begin{document}\n\\maketitle\n\\section{Introduction}\nAll of the programs are written in python 3. The shebangs are included and thus they should work if called using the \"./program.py\" notation. In case this causes a problem, please try to call them using \"python3 program.py\" notation.\\\\\\\\\nI will provide further help on how to call each program.\n\\subsection{libraries}\nI used numpy in all of the programs and scipy in some of them. I will detail when scipy is used.\n\\section{Exercise 1}\nThe program called ex1.py computes the histogram of a greyscale image, and enhances the image using histogram equalization. it displays the enhanced image, the histograms of the default and enhanced images, and the transformation function.\\\\\nIn this exercise, the matplotlib library is used to plot hitograms and functions.\n\\subsection{Examples}\n\\subsubsection{Fig1.jpg}\nAll of the results are obtained using the program with the call \"./ex1.py Fig1.jpg\"\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig1.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig1_hist.png}\n\t\\caption{Original image's histogram}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig1_cdf.png}\n\t\\caption{Enhancement function}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig1_enh_hist.png}\n\t\\caption{Enhanced image histogram}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig1_enh.jpg}\n\t\\caption{Enhanced image}\n\\end{figure}\n\\clearpage\n\n\\subsubsection{Fig2.jpg}\nAll of the results are obtained using the program with the call \"./ex1.py Fig2.jpg\"\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig2.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig2_hist.png}\n\t\\caption{Original image's histogram}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig2_cdf.png}\n\t\\caption{Enhancement function}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig2_enh_hist.png}\n\t\\caption{Enhanced image histogram}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex1/Fig2_enh.jpg}\n\t\\caption{Enhanced image}\n\\end{figure}\n\\clearpage\n\n\\section{Exercise 2}\nThe program called ex2.py performs several spatial enhancement techniques on a given greyscale image.\n\\subsection{Example on skeleton\\_orig.tif}\nAll of the results are obtained using the program with the call \"./ex2.py skeleton\\_orig.tif\"\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_orig.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_lap.jpg}\n\t\\caption{Rescaled laplacian of image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_lap_plus_orig.jpg}\n\t\\caption{Sum of laplacian and original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_sobel.jpg}\n\t\\caption{Sobel gradient of original 
image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_smmoth_sobel.jpg}\n\t\\caption{Smoothed Sobel gradient of original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_product.jpg}\n\t\\caption{Product of smoothed Sobel gradient and of the previous sum of laplacian and original}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_final.jpg}\n\t\\caption{Sum of original image and previous image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex2/skeleton_final.jpg}\n\t\\caption{Power law of original image, gamma = 0.5, c = 1}\n\\end{figure}\n\n\\clearpage\n\n\\section{Exercise 3}\nThe program called ex3.py performs several frequency domain enhancement techniques on a given greyscale image.\n\\subsection{Examples}\n\\subsubsection{Ideal filter}\nThose images are obtained using the call \"./ex3 --ideal --highpass cutoff image\" for highpass images, and \"./ex3 --ideal --lowpass cutoff image\" for lowpass images.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_low_10.jpg}\n\t\\caption{Cutoff 10 Ideal lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_low_30.jpg}\n\t\\caption{Cutoff 30 Ideal lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_low_100.jpg}\n\t\\caption{Cutoff 100 Ideal lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_high_10.jpg}\n\t\\caption{Cutoff 10 Ideal highpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_high_30.jpg}\n\t\\caption{Cutoff 30 Ideal highpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_ideal_high_100.jpg}\n\t\\caption{Cutoff 100 Ideal highpass filter}\n\\end{figure}\n\\clearpage\n\\subsubsection{Gaussian filter}\nThose images are obtained using the call \"./ex3 --gaussian --highpass cutoff image\" for highpass images, and \"./ex3 --gaussian --lowpass cutoff image\" for lowpass images.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_low_10.jpg}\n\t\\caption{Cutoff 10 Gaussian lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_low_30.jpg}\n\t\\caption{Cutoff 30 Gaussian lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_low_100.jpg}\n\t\\caption{Cutoff 100 Gaussian lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_high_10.jpg}\n\t\\caption{Cutoff 10 Gaussian highpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_high_30.jpg}\n\t\\caption{Cutoff 30 gaussian highpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_gauss_high_100.jpg}\n\t\\caption{Cutoff 100 Gaussian highpass filter}\n\\end{figure}\n\\clearpage\n\\subsubsection{Butterworth filter}\nThose images are obtained using the call \"./ex3 --butterworth --highpass --order order cutoff image\" for highpass images, and \"./ex3 --butterworth --lowpass --order order cutoff image\" for lowpass 
images.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_butter_low_10.jpg}\n\t\\caption{Cutoff 10 order 2 Butterworth lowpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_butter_low_30.jpg}\n\t\\caption{Cutoff 30 order 2 Butterworth lowpass filter}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_butter_high_10.jpg}\n\t\\caption{Cutoff 10 order 2 Butterworth highpass filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex3/ch_butter_high_30.jpg}\n\t\\caption{Cutoff 30 order 2 Butterworth highpass filter}\n\\end{figure}\n\n\\clearpage\n\n\\section{Exercise 4}\nFor exercise 4, the program gaussNoise.py adds gaussian noise of desired mean and variance to an image.\nThe program uniNoise.py adds uniform noise of desired lower and upper bound to an image.\nThe program filtering.py uses different means of filtering an image.\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/Circuit.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussC.jpg}\n\t\\caption{Original image with gaussian noise of mean 0 and variance 260}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/uniC.jpg}\n\t\\caption{Original image with uniform noise in the [-100, 100] range}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussarith.jpg}\n\t\\caption{Image with gaussian noise enhanced with arithmetic mean}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/uniarith.jpg}\n\t\\caption{Image with uniform noise enhanced with arithmetic mean}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussgeo.jpg}\n\t\\caption{Image with gaussian noise enhanced with geometric mean}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unigeo.jpg}\n\t\\caption{Image with uniform noise enhanced with geometric mean}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussharmo.jpg}\n\t\\caption{Image with gaussian noise enhanced with harmonic filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/uniharmo.jpg}\n\t\\caption{Image with uniform noise enhanced with harmonic filter}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussch15.jpg}\n\t\\caption{Image with gaussian noise enhanced with contra harmonic filter of order 1.5}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unich15.jpg}\n\t\\caption{Image with uniform noise enhanced with contra harmonic filter of order 1.5}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussmedian.jpg}\n\t\\caption{Image with gaussian noise enhanced with median filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unimedian.jpg}\n\t\\caption{Image with uniform noise enhanced with median filter}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussmax.jpg}\n\t\\caption{Image with gaussian noise enhanced with max 
filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unimax.jpg}\n\t\\caption{Image with uniform noise enhanced with max filter}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussmin.jpg}\n\t\\caption{Image with gaussian noise enhanced with min filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unimin.jpg}\n\t\\caption{Image with uniform noise enhanced with min filter}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussmid.jpg}\n\t\\caption{Image with gaussian noise enhanced with midpoint filter}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unimid.jpg}\n\t\\caption{Image with uniform noise enhanced with midpoint filter}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/gaussalpha2.jpg}\n\t\\caption{Image with gaussian noise enhanced with alpha trimmed filter of order 2}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex4/unialpha2.jpg}\n\t\\caption{Image with uniform noise enhanced with alpha trimmed filter of order 2}\n\\end{figure}\n\n\n\\clearpage\n\\section{Exercise 5}\nIn exercise 5 we analyze blurring degradation.\\\\\nUsing \"./ex5 blur image output\", the program blurs and saves an image.\\\\\nBy using \"./ex5 filter image output\", the program enhances a blurred image using the inverse filter.\n\\subsection{Example}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex5/book_cover.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex5/blurredBook.jpg}\n\t\\caption{Blurred image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex5/blurredGaussBook.jpg}\n\t\\caption{Blurred image with gaussian noise of mean 0 variance 650}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex5/filteredBook.jpg}\n\t\\caption{Enhanced image of the blurred book}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex5/filteredGaussBook.jpg}\n\t\\caption{Enhanced image of the blurred book with noise}\n\\end{figure}\n\\clearpage\n\nWe can see that the enhancement with inverse filtering isn't perfect, and that it is useless with noise.\\\\\n\nI apologize for not being able to do the rest of the exercise.\n\n\\section{Exercise 6}\nThis program is capable of performing rotations, translations and rescaling on an image, using either the nearest neighbor or bilinear interpolation.\\\\\n\nUsage :\\\\\n--neighbor for nearest neighbor\\\\\n--bilinear for bilinear interpolation\\\\\n\n--rotate, --rescale, --translate to choose transform\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/ray_trace_bottle.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/rotate26n.jpg}\n\t\\caption{Image rotated by 26 degrees with nearest neighbor. Command line : ./ex6 --neighbor --rotate image 26}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/rotate155b.jpg}\n\t\\caption{Image rotated by 155 degrees with bilinear interpolation. 
Command line : ./ex6 --bilinear --rotate image 155}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/rescale05.jpg}\n\t\\caption{Image rescaled by 0.5 with nearest neighbor. Command line : ./ex6 --neighbor --rescale image 0.5}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/rescale2.jpg}\n\t\\caption{Image rescaled by 2 with bilinear interpolation. Command line : ./ex6 --bilinear --rescale image 2}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/translate1.jpg}\n\t\\caption{Image translated by 166.5 -455.3 with nearest neighbor. Command line : ./ex6 --neighbor --translate image 166.5 -455.3}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex6/translate2.jpg}\n\t\\caption{Image translated by -25.12 65 with bilinear interpolation. Command line : ./ex6 --bilinear --translate image -25.12 65}\n\\end{figure}\n\\clearpage\n\\section{Exercise 7}\nThe exercise 7 program can compress an image using the discrete cosine transform and a mask to discard coefficients.\\\\\n\nUsage :\\\\\n--dct to perform the dct\\\\\n--showdiff to show the differences between the original image and the compressed image\\\\\n\n--threshold to use the threshold mask instead of the zonal mask.\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex7/lenna.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex7/zonal.jpg}\n\t\\caption{Image compressed using the zonal mask}\n\\end{figure}\n\\clearpage\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex7/zonaldiff.jpg}\n\t\\caption{Differences between original and zonal compressed images}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex7/threshold.jpg}\n\t\\caption{Image compressed using the zonal mask}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex7/thresholddiff.jpg}\n\t\\caption{Differences between original and zonal compressed images}\n\\end{figure}\n\n\\clearpage\n\nI wasn't able to do the part of the exercise with wavelets.\n\n\\section{Exercise 8}\nThe exercise 8 program can perform morphological processing on an image.\\\\\n\nUsage :\\\\ \n--dilate to use dilation\\\\\n--erode to use erosion\\\\\n--open to use opening\\\\\n--close to use closing\\\\\n\n--boundary to use boundary extraction\\\\\n--filling to use hole filling\\\\\n--extraction for feature extraction\n\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/noisy_fingerprint.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/noisy_dilated.jpg}\n\t\\caption{Dilated image. Command ./ex8 --dilate image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/noisy_eroded.jpg}\n\t\\caption{Eroded image. Command ./ex8 --erode image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/noisy_opened.jpg}\n\t\\caption{Opened image. Command ./ex8 --open image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/noisy_closed.jpg}\n\t\\caption{Closed image. 
Command ./ex8 --close image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/lincoln.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/lincoln_boundary.jpg}\n\t\\caption{Boundary of image. Command ./ex8 --boundary image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/region.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex8/region_filled.jpg}\n\t\\caption{Image filled at position 50 50. Command ./ex8 --filling image 50 50}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=150pt]{./ex8/chicken.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=150pt]{./ex8/chicken_extracted.jpg}\n\t\\caption{Image thresholded, eroded, and then extracted at position 100 356. Command ./ex8 --extraction image 100 356}\n\\end{figure}\n\\clearpage\nThe program also gives the number of pixels of the extracted component. In this case : 1356 pixels.\n\\clearpage\n\\section{Exercise 9}\nThe exercise 9 program can perform image segmentation on an image using edge detectors or thresholding segmentation methods.\\\\\n\nUsage : \\\\\n--roberts to use the Roberts edge detector\\\\\n--prewitt to use the Prewitt edge detector\\\\\n--sobel to use the Sobel edge detector\\\\\n--mh to use the Marr-Hildreth edge detector\\\\\n--canny to use the Canny edge detector\\\\\n\n--otsu to use Otsu's threshold segmentation.\\\\\n--threshold to use the global thresholding method\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building_roberts.jpg}\n\t\\caption{Image obtained using Roberts edge detector. Command : ./ex9 --roberts image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building_prewitt.jpg}\n\t\\caption{Image obtained using Prewitt edge detector. Command : ./ex9 --prewitt image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building_sobel.jpg}\n\t\\caption{Image obtained using Sobel edge detector. Command : ./ex9 --sobel image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building_mh.jpg}\n\t\\caption{Image obtained using Marr-Hildreth edge detector. Command : ./ex9 --mh image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/building_canny.jpg}\n\t\\caption{Image obtained using Canny edge detector. Command : ./ex9 --canny image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/polymersomes.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/polymersomes_otsu.jpg}\n\t\\caption{Image obtained using Otsu's method. Command : ./ex9 --otsu image. Obtained $\\eta$ : 0.48}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex9/polymersomes_global.jpg}\n\t\\caption{Image obtained using global thresholding method. Command : ./ex9 --threshold image.}\n\\end{figure}\n\\clearpage\nWe can see that Otsu's method is better at determining the best threshold value.\n\\section{Exercise 10}\nThe exercise 10 program can perform boundary following and resampling, and can compute the chain code and the difference chain code of a grayscale image.\\\\\n\nThe pca program can use principal component analysis to compress an image and reconstruct it from fewer components.\n\nUsage of ex10: \\\\\n--boundary to use boundary following\\\\\n--resampling to use boundary following and resampling\\\\\n--linking to link the resampled points\\\\\n--chain to compute the chain code\\\\\n--chaindiff to compute the difference chain code\\\\\n\nUsage of pca:\nCalling pca automatically reconstructs the 6 images of Washington DC from 2 components and also prints the difference images.\n\\subsection{Examples}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/noisy_stroke.jpg}\n\t\\caption{Original image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/noisy_boundary.jpg}\n\t\\caption{Boundary of the image. Command : ./ex10 --boundary image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/noisy_resample.jpg}\n\t\\caption{Resampling of the image's boundary. Command : ./ex10 --resampling image}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/noisy_link.jpg}\n\t\\caption{Linking of the previous image. Command : ./ex10 --linking image}\n\\end{figure}\n\\clearpage\nThe chain code, obtained with ./ex10 --chain image, is 0000606666666646442444242222202202\\\\\n\nThe first difference chain code, obtained with ./ex10 --chaindiff image, is \n000626000000062606200626000062062\\\\\n\nImages used for pca :\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band1.jpg}\n\t\\caption{Original image 1}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band2.jpg}\n\t\\caption{Original image 2}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band3.jpg}\n\t\\caption{Original image 3}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band4.jpg}\n\t\\caption{Original image 4}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band5.jpg}\n\t\\caption{Original image 5}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/WashingtonDC_Band6.jpg}\n\t\\caption{Original image 6}\n\\end{figure}\n\\clearpage\nImages obtained with 2 remaining components :\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash1.jpg}\n\t\\caption{Image 1}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash2.jpg}\n\t\\caption{Image 2}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash3.jpg}\n\t\\caption{Image 3}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash4.jpg}\n\t\\caption{Image 4}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash5.jpg}\n\t\\caption{Image 5}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash6.jpg}\n\t\\caption{Image 
6}\n\\end{figure}\n\\clearpage\nStretched difference images:\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash1diff.jpg}\n\t\\caption{Difference image 1}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash2diff.jpg}\n\t\\caption{Difference image 2}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash3diff.jpg}\n\t\\caption{Difference image 3}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash4diff.jpg}\n\t\\caption{Difference image 4}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash5diff.jpg}\n\t\\caption{Difference image 5}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[height=200pt]{./ex10/Wash6diff.jpg}\n\t\\caption{Difference image 6}\n\\end{figure}\n\\end{document}", "meta": {"hexsha": "baa9d1705f9e9ff9f36ffcdcf09a224eda8a3960", "size": 23519, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "DIP/exercises/report.tex", "max_stars_repo_name": "apeyrard/sjtu-work", "max_stars_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-26T10:04:05.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T10:04:05.000Z", "max_issues_repo_path": "DIP/exercises/report.tex", "max_issues_repo_name": "apeyrard/sjtu-work", "max_issues_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DIP/exercises/report.tex", "max_forks_repo_name": "apeyrard/sjtu-work", "max_forks_repo_head_hexsha": "ca98fec3c83b81ed9091bdc968cb5ad8a74d1d6a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-26T10:04:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-26T10:04:06.000Z", "avg_line_length": 32.2178082192, "max_line_length": 241, "alphanum_fraction": 0.760831668, "num_tokens": 7248, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5530672386305454}}
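For reference, the reconstruction-from-fewer-components step that the pca program performs can be sketched in a few lines of numpy. The function and variable names below are illustrative assumptions, not the actual program:

\begin{verbatim}
import numpy as np

def pca_reconstruct(bands, n_components=2):
    # bands: (n_bands, H, W) stack of co-registered band images
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)   # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    C = Xc @ Xc.T / Xc.shape[1]              # band-to-band covariance
    _, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
    A = vecs[:, ::-1][:, :n_components]      # leading principal directions
    Y = A.T @ Xc                             # component images
    return (A @ Y + mean).reshape(n, h, w)   # rank-limited reconstruction
\end{verbatim}

The difference images shown above would then be the (contrast-stretched) gap between each original band and its reconstruction.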
{"text": "\\section{Results}\n\nIn this section, we compare the results from Moltres for each CNRS Benchmark\nstep to the results in the benchmark paper \\cite{tiberga_results_2020}.\nThe software packages from \\gls{CNRS} and \\gls{TUD}\neach report two sets of results arising from different angular discretizations\nin their neutronics models for Steps 0.2, 1.1, 1.2, 1.3, 1.4, and 2.1. These\nsets of results are labeled as CNRS-$SP_1$ and\nCNRS-$SP_3$; and TUD-$S_2$ and TUD-$S_6$, respectively. The\nauthors performed code-to-code verification by sampling observable values at\n201 equidistant points along the centerlines AA' and BB' and reporting the\ndiscrepancy $\\epsilon_c$ of each observable from each software\n(indexed by $c$) for each measured observable $Q_c$ (not to be confused with\nfission heat source $Q_f$), relative to the average of\nthat same observable $Q_{avg}$ from all participating software. Variables\n$\\epsilon_c$ and $Q_{avg}$ are calculated as:\n%\n\\begin{align}\n    \\epsilon_c =& \\sqrt{\\frac{\\sum^{N_p}_{i=1}\\left[Q_c(\\vec{r_i}) - Q_{avg}\n    (\\vec{r_i})\\right]^2}{\\sum^{N_p}_{i=1} Q^2_{avg}(\\vec{r_i})}}\n    \\shortintertext{and}\n    Q_{avg}(\\vec{r_i}) =& \\frac{1}{N_c} \\sum^{N_c}_{c=1} Q_c(\\vec{r_i})\n    \\shortintertext{where}\n    Q_c(\\vec{r_i}) =&\n    \\mbox{ value of observable $Q$ at location $\\vec{r_i}$ from software $c$,}\n    \\nonumber \\\\\n    N_p =& \\mbox{ number of sampling points of quantity $Q$} = 201,\n    \\nonumber \\\\\n    N_c =& \\mbox{ number of participating software packages.} \\nonumber\n\\end{align}\n\nThe average discrepancy $\\epsilon$ over all software is calculated as:\n%\n\\begin{align}\n    \\epsilon =& \\frac{1}{N_c}\\sum^{N_c}_{c=1} \\epsilon_c\n\\end{align}\n\nWe adopted the averaged values $\\epsilon$ and $Q_{avg}$ directly from the\nreference work \\cite{tiberga_results_2020} without including our results\nin the calculations. We note that the benchmark does not provide a reference\nsolution and a significantly erroneous value from one of the software packages\ncould heavily skew the discrepancy values. Nevertheless, the benchmark paper\nreports good agreement among their software packages.\n\nFor observables measured along the centerlines AA' and/or BB', Tables\n\\ref{table:disc0} and \\ref{table:disc1} report the discrepancy $\\epsilon_c$ of\neach observable from Moltres relative to the average of the benchmark\nparticipants $Q_{avg}$ alongside the average discrepancy $\\epsilon$ of\nthe benchmark participants. We also reproduce corresponding plots\nin the benchmark paper for every observable along AA' or BB' in Figures\n\\ref{fig:0.1}, \\ref{fig:0.2}, \\ref{fig:0.3}, \\ref{fig:1.1}, \\ref{fig:1.2},\n\\ref{fig:1.3}, and \\ref{fig:2.1} for a qualitative comparison of the results\nfrom Moltres and the benchmark participants. Given the significant overlap in\nthe plot curves, these figures omit results from CNRS-$SP_1$ and TUD-$S_2$ to\nreduce cluttering. Readers may also refer to\n\\ref{appendix:tables} for tables of observable values at nine equidistant\npoints along AA' and BB' from Moltres and the benchmark participants. We\nprovide these tables for ease of review and a direct comparison to\ncorresponding data tables from \\cite{tiberga_results_2020}. The full dataset\nof all observable results used in this results analysis is\navailable at \\cite{park_results_2021}. 
Lastly, Table\n\\ref{table:rho} reports all reactivity and change in reactivity results from\nSteps 0.2, 1.1, 1.2, and 1.3.\n\n\\begin{figure}[h]\n\t\\centering\n    \\includegraphics[width=.8\\columnwidth]{0-1-vel-plot}\n\t\\caption{Step 0.1 \\textemdash\\ Horizontal velocity component along BB'.}\n\t\\label{fig:0.1}\n\\end{figure}\n%\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=.8\\columnwidth]{0-2-fiss-plot}\n\t\\caption{Step 0.2 \\textemdash\\ Fission rate density along AA'.}\n\t\\label{fig:0.2}\n\t\\includegraphics[width=.8\\columnwidth]{0-3-temp-plot}\n\t\\caption{Step 0.3 \\textemdash\\ Temperature distribution along BB'.}\n\t\\label{fig:0.3}\n\\end{figure}\n%\n\\FloatBarrier\n%\n\\begin{table}[htb]\n\t\\caption{Discrepancy values from Moltres alongside the average and standard\n\tdeviation of the discrepancy values of the benchmark participants for Phase\n\t0.}\n\t\\centering\n\t\\small\n\t\\begin{tabular}{l l c S S S}\n\t\t\\toprule\n\t\t\\multirow{2}{*}{\\textbf{Step}} & \\multirow{2}{*}{\\textbf{Observable}} & \\multirow{2}{*}{\\textbf{Centerline}} & {\\multirow{2}{*}{\\textbf{Moltres [\\%]}}} & \\multicolumn{2}{c}{\\textbf{Benchmark [\\%]}} \\\\\n\t\t& & & & {Average} & {SD} \\\\\n\t\t\\midrule\n\t\t\\multirow{4}{*}{0.1} &\n\t\t\\multirow{2}{*}{$u_x$} & AA' & 0.247 & 0.253 & 0.150 \\\\\n\t\t& & BB' & 0.266 & 0.318 & 0.102 \\\\\n\t\t\\cmidrule{2-6}\n\t\t& \\multirow{2}{*}{$u_y$} & AA' & 0.540 & 0.598 & 0.266 \\\\\n\t\t& & BB' & 0.468 & 0.795 & 0.421 \\\\\n\t\t\\midrule\n\t\t{0.2} &\n\t\t{$\\sum^6_g \\Sigma_{f,g} \\phi_g(\\vec{r})$} & AA' & 0.313 & 0.285 & 0.153\n\t\t\\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{0.3} &\n\t\t\\multirow{2}{*}{$T$} & AA' & 0.090 & 0.085 & 0.031 \\\\\n\t\t& & BB' & 0.164 & 0.083 & 0.027\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:disc0}\n\\end{table}\n\n\\subsection{Phase 0 results \\& discussion}\n\nFigures \\ref{fig:0.1}, \\ref{fig:0.2}, and \\ref{fig:0.3} show that Moltres\naccurately reproduced all three sets of results in Phase 0 for the velocity\nfield, fission rate density, and temperature. Table\n\\ref{table:disc0} reports the discrepancy values from Moltres for Phase 0 and\nthe corresponding average and \\gls{SD} of the discrepancy values from\nthe benchmark participants\n\\cite{tiberga_results_2020}. Moltres performs very well as most discrepancy\nvalues are either lower than or fall within one \\gls{SD} of the benchmark\naverage discrepancies. The discrepancy value for $T$ along centerline BB' in\nStep 0.3 is the only exception with its value of 0.164\\% being larger than\nthe benchmark average by 3 \\gls{SD}.\n\nWe\nnote in Figure \\ref{fig:0.3} that the $T$ distribution from Moltres is almost\nidentical to the corresponding distributions from CNRS-$SP_3$ and TUD-$S_6$\nalong most of centerline BB'. However, Figure \\ref{fig:0.3-zoom} shows\nsignificant spread in the $T$ distributions along BB' from all software\npackages near the top boundary. At $y = 2.0$ m, Moltres underpredicts the\ntemperature at 912.3 K compared to the benchmark participants' values which\nrange between 930.3 K and 948.1 K (Refer to Table \\ref{table:0.3} for the\nnumerical values). 
This point on the top boundary lies directly downstream of\nthe velocity boundary condition discontinuity at the top-left corner.\nCorner singularities are generally difficult to approximate with\ncontinuous Galerkin methods \\cite{kuhlmann_lid-driven_2018}.\nThe \\gls{SUPG} stabilization scheme dampens numerical oscillations by\nintroducing pointwise artificial thermal diffusivity, which depends strongly\non the inverse of the local velocity magnitude \\cite{peterson_overview_2018}.\nTherefore, while the \\gls{SUPG} scheme is very effective in eliminating\nspurious numerical oscillations everywhere else, it provides little damping\nalong the top boundary due to the relatively large non-zero velocity boundary\ncondition. On the other hand, the temperature values in the rest of the domain\nand the average discrepancies of the other variables show that Moltres can\nstill accurately reproduce the expected results, and that the temperature\ndeviations along the top boundary do not impact the overall integrity of our\nresults.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=.8\\columnwidth]{0-3-temp-plot-zoom}\n\t\\caption{Step 0.3 \\textemdash\\ Temperature distribution along BB' for y = 1.94 m to\n\ty = 2.00 m.}\n\t\\label{fig:0.3-zoom}\n\\end{figure}\n\nLastly, we observe in Table \\ref{table:rho} that the reactivity $\\rho$ value of\n465.6 pcm from Moltres falls well within the range of benchmark $\\rho$ values,\nwhich span 353.7 pcm to 578.1 pcm. Given that Moltres\nadopts the neutron diffusion model, our $\\rho$ value agrees most closely with\nthe results from the software packages that also adopt the neutron diffusion\nmodel or theoretically equivalent models such as the $SP_1$ and $S_2$ neutron\ntransport models, namely CNRS-$SP_1$, PoliMi, PSI, and TUD-$S_2$.\n\n\\begin{table}[htb]\n    \\caption{Reactivity $\\rho$ and change in reactivity\n    $\\left(\\rho_a - \\rho_b\\right)$ values from Steps 0.2, 1.1,\n    1.2, and 1.3. All units are in pcm.}\n    \\centering\n    \\small\n    \\setlength\\tabcolsep{2pt}\n    \\begin{tabular}{l S S S S}\n        \\toprule\n        \\multirow{2}{*}{\\textbf{Software}} & {\\textbf{Step 0.2}} &\n        {\\textbf{Step 1.1}} & {\\textbf{Step 1.2}} & {\\textbf{Step 1.3}} \\\\\n        & {$\\rho_{s_{0.2}}$}\n        & {$\\rho_{s_{1.1}} - \\rho_{s_{0.2}}$}\n        & {$\\rho_{s_{1.2}} - \\rho_{s_{1.1}}$}\n        & {$\\rho_{s_{1.3}} - \\rho_{s_{0.2}}$} \\\\\n        \\midrule\n        Moltres     & 465.6 & -62.7 & -1142.2 & -1207.7 \\\\\n        CNRS-$SP_1$ & 411.3 & -62.5 & -1152.0 & -1220.5 \\\\\n        CNRS-$SP_3$ & 353.7 & -62.6 & -1152.7 & -1220.7 \\\\\n        PoliMi      & 421.2 & -62.0 & -1161.0 & -1227.0 \\\\\n        PSI         & 411.7 & -63.0 & -1154.8 & -1219.6 \\\\\n        TUD-$S_2$   & 482.6 & -62.0 & -1145.2 & -1208.5 \\\\\n        TUD-$S_6$   & 578.1 & -60.7 & -1122.0 & -1184.4 \\\\\n        \\bottomrule\n    \\end{tabular}\n    \\label{table:rho}\n\\end{table}\n\n\\FloatBarrier\n\n\\subsection{Phase 1 results \\& discussion}\n\nTable \\ref{table:disc1} shows the discrepancy values from Moltres relative to\nthe average and \\gls{SD} of the benchmark participants for Steps 1.1, 1.2, and\n1.3, and the corresponding average discrepancy values from the benchmark\n\\cite{tiberga_results_2020}. 
The subsequent subsections discuss the results\nfor each benchmark step in Phase 1.\n%\n\\begin{table*}[htb]\n\t\\caption{Discrepancy values from Moltres alongside the average and standard\n\tdeviation of the discrepancy values of the benchmark participants for Phase\n\t1.}\n\t\\centering\n\t\\small\n\t\\begin{tabular}{l l c S S S}\n\t\t\\toprule\n\t\t\\multirow{2}{*}{\\textbf{Step}} & \\multirow{2}{*}{\\textbf{Observable}} & \\multirow{2}{*}{\\textbf{Centerline}} & {\\multirow{2}{*}{\\textbf{Moltres [\\%]}}} & \\multicolumn{2}{c}{\\textbf{Benchmark [\\%]}} \\\\\n\t\t& & & & {Average} & {SD} \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{1.1} &\n\t\t\\multirow{2}{*}{$\\sum_i \\lambda_i C_i$} & AA' & 0.603 & 0.346 & 0.166\n\t\t\\\\\n\t\t& & BB' & 0.327 & 0.294 & 0.153 \\\\\n\t\t\\midrule\n\t\t\\multirow{4}{*}{1.2} &\n\t\t\\multirow{2}{*}{$T$} & AA' & 0.076 & 0.095 & 0.015 \\\\\n\t\t& & BB' & 0.179 & 0.089 & 0.012 \\\\\n\t\t\\cmidrule{2-6}\n\t\t& \\multirow{2}{*}{\\footnotesize $\\Delta\\left[\\sum^6_g \\Sigma_{f,g} \\phi_g(\\vec{r})\n\t\t\\right]_{s_{1.2}-s_{0.2}}$} & AA' & 1.110 & 1.576 & 0.564 \\\\\n\t\t& & BB' & 1.089 & 1.133 & 0.392 \\\\\n\t\t\\midrule\n\t\t\\multirow{7}{*}{1.3} &\n\t\t{$u_x$} & AA' & 0.123 & 0.691 & 0.566 \\\\\n\t\t\\cmidrule{2-6}\n\t\t& \\multirow{2}{*}{$u_y$} & AA' & 0.237 & 0.329 & 0.131 \\\\\n\t\t& & BB' & 0.238 & 0.356 & 0.217 \\\\\n\t\t\\cmidrule{2-6}\n\t\t& \\multirow{2}{*}{$T$} & AA' & 0.064 & 0.057 & 0.023 \\\\\n\t\t& & BB' & 0.070 & 0.080 & 0.024 \\\\\n\t\t\\cmidrule{2-6}\n\t\t& \\multirow{2}{*}{$\\sum_i \\lambda_i C_i$} & AA' & 1.043 & 0.460 & 0.190\n\t\t\\\\\n\t\t& & BB' & 0.462 & 1.194 & 0.178 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:disc1}\n\\end{table*}\n\n\\subsubsection{Step 1.1: Circulating fuel}\n\nFigure \\ref{fig:1.1} shows good qualitative agreement in the delayed neutron\nsource distribution along BB' among Moltres and the benchmark participants.\nFrom Table \\ref{table:disc1}, Moltres reports discrepancies of 0.603\\% and\n0.327\\% along the centerlines AA' and BB', respectively. Both values are\nwithin two and one \\gls{SD}, respectively, of the average discrepancies of the\nbenchmark participants (0.346\\% and 0.294\\%).\nIn Table \\ref{table:rho}, we observe that the change in\n$\\rho$ relative to Step 0.2 is $-62.7$ pcm for Moltres; this value is\nconsistent with the $-63.0$ to $-62.0$ pcm range in which most of the\nbenchmark participants' values fall.\n%\n\\begin{figure}[h!]\n\t\\centering\n    \\includegraphics[width=.8\\columnwidth]{1-1-dnp-x-plot}\n    \\includegraphics[width=.8\\columnwidth]{1-1-dnp-y-plot}\n\t\\caption{Step 1.1 \\textemdash\\ Delayed neutron source along AA' (top) and BB'\n\t(bottom).}\n\t\\label{fig:1.1}\n\\end{figure}\n%\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=.8\\columnwidth]{1-2-temp-plot}\n\t\\includegraphics[width=.8\\columnwidth]{1-2-fiss-plot}\n\t\\caption{Step 1.2 \\textemdash\\ Temperature distribution and change in fission rate\n\tdensity along AA'.}\n\t\\label{fig:1.2}\n\\end{figure*}\n\n\\FloatBarrier\n\n\\subsubsection{Step 1.2: Power coupling}\n\nFigure \\ref{fig:1.2} shows the temperature distribution and the change in\nfission rate density along AA' from Step 1.2. Similar to Step 0.3, the\ntemperature distribution from Moltres agrees most closely with CNRS-$SP_3$ and\nTUD-$S_2$. 
Table \\ref{table:disc1} reports the same trends we observed in Phase\n0; the average discrepancy in temperature along BB' from Moltres for Step 1.2\nis marginally higher than the benchmark average, while the average discrepancy\nin the fission rate density is within one \\gls{SD} of the benchmark average.\nFrom Table \\ref{table:rho}, Moltres also reports a change in $\\rho$\nrelative to Step 1.1 of $-1142.2$ pcm, which\nfalls within the range of reported benchmark participants' values.\n\n\\subsubsection{Step 1.3: Buoyancy}\n\nFigure \\ref{fig:1.3} shows the vertical velocity component, temperature, and\ndelayed neutron source distributions along AA'.\nMoltres reproduces the correct trend in all three physical\nobservables. Step 1.3 has a relatively slow buoyancy-driven flow profile with\nno forced flow from the top boundary. Consequently, there are no temperature\ndeviations near the top boundary, and we observe in Table \\ref{table:disc1}\nthat the average discrepancy value of 0.070\\% for the temperature distribution\nalong BB' is in much closer agreement with the benchmark value of 0.080\\% than\nthe corresponding temperature discrepancy values for Steps 0.3 and 1.2.\n\nIn Table \\ref{table:rho}, we observe that the change in $\\rho$ from\nMoltres ($-1207.7$ pcm) falls within the range of reported benchmark values and\nmatches the TUD-$S_2$ value ($-1208.5$ pcm) most closely. This is likely\ndue to the similar solvers (diffusion and $S_2$ neutronics models) and\nmethods of solution (finite element method on uniform meshes) employed by the\ntwo software packages.\n%\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=.6\\columnwidth]{1-3-vel-plot}\n\t\\includegraphics[width=.6\\columnwidth]{1-3-temp-plot}\n\t\\includegraphics[width=.6\\columnwidth]{1-3-dnp-plot}\n\t\\caption{Step 1.3 \\textemdash\\ Vertical velocity component, temperature distribution,\n\tand delayed neutron source along AA'.}\n\t\\label{fig:1.3}\n\\end{figure*}\n\n\\FloatBarrier\n\n\\subsubsection{Step 1.4: Full coupling}\n\nFigure \\ref{fig:color} shows the 2D temperature distribution and velocity\nstreamlines from Moltres for Step 1.4 with $U_{lid} = 0.5$ m$\\cdot$s$^{-1}$ and\n$P = 1$ GW. Table \\ref{table:full} shows the change in $\\rho$ under the various\n$U_{lid}$ and $P$ values. We refer readers to Tiberga et al.'s paper\n\\cite{tiberga_results_2020} for the benchmark participants' corresponding\nvalues. The change in $\\rho$ values from Moltres fall within the range of\nbenchmark values\nfor all cases. Furthermore, the $\\Delta\\rho$ values are all within 1.1 pcm of\nthe corresponding values from the TUD-S$_2$ model in the benchmark paper. Given\nthat the $S_2$ discrete ordinates method is theoretically equivalent to the\nmultigroup neutron diffusion method, this indicates that Moltres is largely\nconsistent with the benchmark participants outside of differences from the\nneutronics models.\n\n\\begin{figure}[htb]\n  \\centering\n  \\includegraphics[width=\\columnwidth]{full-coupled}\n  \\caption{Temperature distribution from Moltres for the fully coupled\n  system (Step 1.4) with buoyancy effects, $P = 1$ GW, and $U_{lid} = 0.5$\n  m$\\cdot$s$^{-1}$. 
The lines correspond to the streamlines of the velocity\n  field.}\n  \\label{fig:color}\n\\end{figure}\n%\n\\begin{table}[htb]\n\t\\caption{Reactivity change in Step 1.4, relative to Step 0.2 under various\n\t$U_{lid}$ and $P$ values.}\n\t\\centering\n\t\\small\n\t\\setlength\\tabcolsep{1.5pt}\n\t\\begin{tabular}{c c c c c c}\n\t\t\\toprule\n\t\t& \\multicolumn{5}{c}{$\\rho_{s1.4} - \\rho_{s0.2}$ [pcm]} \\\\\n\t\t\\midrule\n\t\t{\\backslashbox{$U_{lid}$ [m$\\cdot$s$^{-1}$]}{$P$ [GW]}} & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\\\\n\t\t\\midrule\n\t\t0.0 & -263.7 & -498.3 & -730.9 & -966.7 & -1207.7 \\\\\n\t\t0.1 & -265.9 & -498.7 & -730.6 & -966.0 & -1206.7 \\\\\n\t\t0.2 & -268.1 & -498.8 & -729.4 & -963.7 & -1203.6 \\\\\n\t\t0.3 & -269.9 & -498.5 & -727.8 & -960.8 & -1199.5 \\\\\n\t\t0.4 & -271.9 & -498.5 & -726.5 & -958.3 & -1195.7 \\\\\n\t\t0.5 & -274.2 & -498.7 & -725.6 & -956.4 & -1192.7 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:full}\n\\end{table}\n\n\\FloatBarrier\n\n\\begin{table}[htb]\n\t\\caption{Discrepancy values from Moltres alongside the average and standard\n\tdeviation of the discrepancy values of the benchmark participants for Step\n\t2.1.}\n\t\\centering\n\t\\small\n\t\\begin{tabular}{l l S S S}\n\t\t\\toprule\n\t\t\\multirow{2}{*}{\\textbf{Step}} & \\multirow{2}{*}{\\textbf{Observable}} & {\\multirow{2}{*}{\\textbf{Moltres [\\%]}}} & \\multicolumn{2}{c}{\\textbf{Benchmark [\\%]}} \\\\\n\t\t& & & {Average} & {SD} \\\\\n\t\t\\midrule\n\t\t\\multirow{2}{*}{2.1} & Gain & 0.493 & 0.587 & 0.244 \\\\\n\t\t\\cmidrule{2-5}\n\t\t& Phase shift & 1.741 & 2.176 & 0.554 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:disc2}\n\\end{table}\n%\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=.8\\columnwidth]{2-1-gain-plot}\n\t\\includegraphics[width=.8\\columnwidth]{2-1-phase-plot}\n\t\\caption{Step 2.1 \\textemdash\\ Bode gain and phase plots of the frequency response of\n\tthe fully coupled system.}\n\t\\label{fig:2.1}\n\\end{figure*}\n\n\\subsection{Phase 2 results \\& discussion}\n\nLastly, the following subsection discusses the results for the transient cases\nin Step 2.1, which involve measuring the response in power output to periodic\nperturbations in the heat transfer coefficient.\n\n\\subsubsection{Step 2.1: Forced convection transient}\n\nFigure \\ref{fig:2.1} shows the Bode gain and phase shift plots of the response\nin power output in the fully coupled system. Along with the average discrepancy\nvalues from Table \\ref{table:disc2}, the results show that Moltres is\nconsistent with the benchmark. The gain data points from all \\gls{MSR} software\nagree closely with one another. Moltres reports an average discrepancy value of\n0.493\\%, slightly lower than the benchmark average of 0.587\\%. On the other\nhand, the phase shift data points show greater spread over the various driving\nfrequencies. We note that the different timestepping schemes and timestep\nsizes among the software packages are likely responsible for\nthe variations in the phase shift. 
Even with a precision of\n$\\pm0.9^\\circ$ for each phase shift value, Moltres accurately reproduces the\ncorrect trend with a lower average discrepancy (1.741\\%) than the benchmark\nparticipants' average (2.176\\%).\n\n\\FloatBarrier\n", "meta": {"hexsha": "0dc1818c27d1afe5813b91442c25a12a51aba7a6", "size": 18689, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "results.tex", "max_stars_repo_name": "smpark7/2021-park-moltres-benchmark", "max_stars_repo_head_hexsha": "efaaaa70e0db4781a0dddad51151640aa820486f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-06-17T16:40:04.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-07T22:56:12.000Z", "max_issues_repo_path": "results.tex", "max_issues_repo_name": "smpark7/2021-park-moltres-benchmark", "max_issues_repo_head_hexsha": "efaaaa70e0db4781a0dddad51151640aa820486f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-02-25T14:45:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-08T19:14:44.000Z", "max_forks_repo_path": "results.tex", "max_forks_repo_name": "arfc/2021-park-moltres-benchmark", "max_forks_repo_head_hexsha": "efaaaa70e0db4781a0dddad51151640aa820486f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-17T18:11:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-17T17:55:22.000Z", "avg_line_length": 43.9741176471, "max_line_length": 202, "alphanum_fraction": 0.7092942373, "num_tokens": 6212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7217432182679956, "lm_q1q2_score": 0.5530672323949491}}
{"text": "%% Document type\n\\documentclass[]{amsbook}\n\n%% Packages\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{exercise}\n\\usepackage{hyperref}\n\\usepackage{tikz-cd}\n\\usepackage{bbm}\n\n%% User-defined commands\n\\newcommand{\\q}{\\quad}\n\\newcommand{\\qq}{\\qquad}\n\\newcommand{\\catname}[1]{\\mathbf{#1}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\0}{\\mathbf{0}}\n\\newcommand{\\1}{\\mathbf{1}}\n\\newcommand{\\oldemptyset}{\\emptyset}\n\\renewcommand{\\emptyset}{\\varnothing}\n\\newcommand{\\dps}{\\displaystyle}\n\\newcommand{\\List}{\\texttt{List}}\n\\newcommand{\\mbb}[1]{\\mathbb{#1}}\n\\newcommand{\\mc}[1]{\\mathcal{#1}}\n\\newcommand{\\opn}[1]{\\operatorname{#1}}\n\n\\newtheorem{prop}{Proposition}\n\\newenvironment{solution}\n    {\\begin{proof}[Solution]}{\\end{proof}}\n\n\n\\begin{document}\n\\title{Introduction to Categories and Categorical Logic}\n\\author{Vishal Lama}\n\\date{\\today}\n\\maketitle\n\\tableofcontents\n\n\\chapter{Introduction to Categories and Categorical Logic}\n\\section{Introduction}\nWe say that a function $f: X \\to Y$ is:\\\\~\\\\\n\\begin{tabular}{ l l }\n    \\emph{injective} & if $\\forall x, x' \\in X. f(x) = f(x') \\implies x = x'$, \\\\\n    \\emph{surjective} & if $\\forall y \\in Y. \\exists x \\in X. f(x) = y$, \\\\~\\\\\n    \\emph{monic} & if $\\forall g, h. f \\circ g = f \\circ h \\implies g = h$\n    \\q ($f$ is left cancellative),\\\\\n    \\emph{epic} & if $\\forall g, h. g \\circ f = h \\circ f \\implies g = h$\n    \\q ($f$ is right cancellative).\n\\end{tabular}\\\\\n\n\\begin{prop}\n    Let $f : X \\to Y$. Then,\n    \\begin{enumerate}\n        \\item $f$ is injective $\\iff f$ is monic.\n        \\item $f$ is surjective $\\iff f$ is epic.\n    \\end{enumerate}\n\\end{prop}\n\\begin{proof}\n    We first show $(1)$.\\\\\n    ($\\impliedby$) Suppose $f$ is monic. Fix a one-element set\n    $\\boldsymbol{1} = \\{ \\bullet \\}$. Then, note that elements $x \\in X$ are in\n    1-1 correspondence with functions $\\bar{x}: \\boldsymbol{1} \\to X$, defined\n    by $\\bar{x}(\\bullet) := x$. Then, for all $x, x' \\in X$, we have\\\\~\\\\\n    \\begin{tabular} { l l }\n        & $f(x) = f(x')$\\\\\n        $\\implies$ & $f(\\bar{x}(\\bullet)) = f(\\bar{x'}(\\bullet))$\\\\\n        $\\implies$ & $(f \\circ \\bar{x})(\\bullet) = (f \\circ \\bar{x'})(\\bullet)$\\\\\n        $\\implies$ & $f \\circ \\bar{x} = f \\circ \\bar{x'}$\\\\\n        $\\implies$ & $\\bar{x} = \\bar{x'}$ \\q (since $f$ is monic) \\\\\n        $\\implies$ & $\\bar{x}(\\bullet) = \\bar{x'}(\\bullet)$\\\\\n        $\\implies$ & $x = x'$\n    \\end{tabular}\\\\\n    This shows that $f$ is injective.\\\\\n\n    ($\\implies$) Suppose $f$ is injective. Let $f \\circ g = f \\circ h$ for all\n    $g, h: A \\to X$. Then, for all $a \\in A$,\\\\\n    \\begin{tabular} { l l }\n        & $(f \\circ g)(a) = (f \\circ h)(a)$\\\\\n        $\\implies$ & $f(g(a)) = f(h(a))$\\\\\n        $\\implies$ & $g(a) = h(a)$ \\q (since $f$ is injective)\\\\\n        $\\implies$ & $g = h$\n    \\end{tabular}\\\\\n    This establishes that $f$ is monic. And, we are done.\n\\end{proof}\n\n\\setcounter{Exercise}{1}\n\\begin{Exercise}\n    Show that $f: X \\to Y$ is surjective iff it is epic.\n\\end{Exercise}\n\\begin{solution}\n    ($\\implies$) Suppose $f: X \\to Y$ is epic. And, assume, for the sake of\n    contradiction, $f$ is \\emph{not} surjective. 
Then, there exists some $y_0\n    \\in Y$, such that, for all $x \\in X$, $f(x) \\ne y_0$. Define mappings\n    $g, h: Y \\to Y \\cup \\{ Y \\}$ by:\n    \\begin{center}\n        $g(y) := y$\\\\~\\\\\n        $h(y) :=\n        \\begin{cases}\n        y & \\text{if } y \\ne y_0 \\\\\n        Y & \\text{if } y = y_0\n        \\end{cases}$\n    \\end{center}\n    Note that $g \\ne h$.\\\\\n    Then, for all $x \\in X$, $(g \\circ f)(x) = g(f(x)) = h(f(x)) = (h \\circ f)\n    (x)$.\n    This implies $g \\circ f = h \\circ f$, which implies $g = h$, since $f$ is\n    epic. The last conclusion contradicts the fact that $g \\ne h$. Thus, we\n    conclude $f$ is surjective.\\\\~\\\\\n    ($\\implies$) Suppose $f: X \\to Y$ is surjective. Then, for any $y \\in Y$,\n    there exists an $x \\in X$, such that $f(x) = y$. Now, let $g, h: Y \\to Z$\n    be such that $g \\circ f = h \\circ f$. Then, for all $y \\in Y$, choosing\n    $x \\in X$ with $f(x) = y$, we have\n    $g(y) = g(f(x)) = (g \\circ f)(x) = (h \\circ f)(x) = h(f(x)) = h(y)$, which\n    implies $g = h$, showing that $f$ is epic.\\\\\n    And, this completes our proof.\n\\end{solution}\n\n\\setcounter{Exercise}{4}\n\\begin{Exercise}\n    Suppose $G$ and $H$ are groups (and hence monoids), and that $h: G \\to H$\n    is a monoid homomorphism. Prove that $h$ is a group homomorphism.\n\\end{Exercise}\n\\begin{solution}\n    We need only show that $h$ preserves inverses. To that end,\n    suppose $g^{-1}$ is the inverse of $g \\in G$. Then, $h(g) h(g^{-1}) =\n    h(g g^{-1}) = h(1_G) = 1_H = h(1_G) = h(g^{-1} g) = h(g^{-1}) h(g)$.\n    This establishes $h$ preserves inverses, and we are done.\n\\end{solution}\n\n\\begin{Exercise}\n    Check that $\\catname{Mon}, \\catname{Vect}_k, \\catname{Pos}$, and\n    $\\catname{Top}$ are indeed categories.\n\\end{Exercise}\n\\begin{solution}\n    ($\\catname{Mon}$) The objects are monoids $(M, \\cdot, 1_M)$, and morphisms\n    are monoid homomorphisms. Given monoid homomorphisms $f: (M, \\cdot, 1_M)\n    \\to (N, \\cdot, 1_N)$ and $g: (N, \\cdot, 1_N) \\to (P, \\cdot, 1_P)$, the\n    function $g \\circ f: (M, \\cdot, 1_M) \\to (P, \\cdot, 1_P)$ is also a monoid\n    homomorphism, because for all $m, m' \\in M$, we have $(g \\circ f)(m m') =\n    g(f(m m')) = g(f(m) f(m')) = (g(f(m)))(g(f(m'))) = ((g \\circ f)(m))\n    ((g \\circ f)(m'))$. Also, for each monoid, the identity morphism is the\n    identity function. It is also easy to check that for all monoid\n    homomorphisms $f, g$ and $h$ with the appropriate domains and codomains,\n    $h \\circ (g \\circ f) = (h \\circ g) \\circ f$. This establishes that\n    $\\catname{Mon}$ is indeed a category.\n\n    ($\\catname{Vect}_k$) The objects are vector spaces over a field $k$, and\n    morphisms are linear maps between vector spaces. Suppose $f: U \\to V$ and\n    $g: V \\to W$ are linear maps. Then, for all $x, y \\in U$, we have\n    $(g \\circ f)(x + y) = g(f(x + y)) = g(f(x) + f(y)) = g(f(x)) + g (f(y)) =\n    (g \\circ f)(x) + (g \\circ f)(y)$. Also, for all $\\alpha \\in k$, we have\n    $(g \\circ f)(\\alpha x) = g(f(\\alpha x)) = g(\\alpha f(x)) = \\alpha g(f(x))\n    = \\alpha (g \\circ f)(x)$. This establishes $g \\circ f: U \\to W$ is a\n    linear map as well. The identity map $1_U$ for any vector space $U$ is the\n    identity morphism. The associativity of linear maps and the identity axiom\n    follow from the property of functions. 
This shows that $\\catname{Vect}_k$\n    is also a category.\n\n    ($\\catname{Pos}$) The objects are partially ordered sets, and morphisms\n    are monotone functions between these sets. Suppose $h: P \\to Q$ and\n    $g: Q \\to R$ are monotone functions. Then, for all $x, y \\in P$,\n    $x \\le y \\implies h(x) \\le h(y) \\implies g(h(x)) \\le g(h(y)) \\implies\n    (g \\circ h)(x) \\le (g \\circ h)(y)$, which shows $g \\circ h: P \\to R$ is\n    a monotone function. The identity map is the identity morphism, and the\n    associativity and identity axioms are satisfied by the property of\n    functions. This establishes $\\catname{Pos}$ is a category.\n\n    ($\\catname{Top}$) The objects are topological spaces, and morphisms are\n    continuous maps between these spaces. Given continuous maps $f: (X, T_X)\n    \\to (Y, T_Y)$ and $g: (Y, T_Y) \\to (Z, T_Z)$, we can show that $g \\circ f:\n    (X, T_X) \\to (Z, T_Z)$ is also a continuous map. First, note that for any\n    $T \\subset Z$, $x \\in (g \\circ f)^{-1}(T)$ iff $(g \\circ f)(x) \\in T$ iff\n    $g(f(x)) \\in T$ iff $f(x) \\in g^{-1}(T)$ iff $x \\in f^{-1}(g^{-1}(T))$.\n    Thus,\n    \\begin{center}\n        for all $T \\subset Z, (g \\circ f)^{-1}(T) = f^{-1}(g^{-1}(T))$.\n    \\end{center}\n    Therefore, for any open set $T \\in T_Z$, we have $g^{-1}(T) \\in T_Y$, which\n    implies $f^{-1}(g^{-1}(T)) \\in T_X$, which implies $(g \\circ f)^{-1}(T)\n    \\in T_X$ (by the result above). Hence, $g \\circ f: (X, T_X) \\to\n    (Z, T_Z)$ is a continuous map. The associativity and identity axioms follow\n    from the associativity and identity laws for functions. This establishes\n    $\\catname{Top}$ is a category.\n\\end{solution}\n\n\\begin{Exercise}\n    Check carefully that monoids correspond exactly to one-object categories.\n    Make sure you understand the difference between such a category and\n    $\\catname{Mon}$. (For example: how many objects does $\\catname{Mon}$\n    have?)\n\\end{Exercise}\n\\begin{solution}\n    (\\href{https://ncatlab.org/nlab/show/monoid#as_a_oneobject_category}\n    {Monoid as a one-object category}) Given a monoid $(M, \\cdot, 1)$,\n    we can construct its corresponding category as follows. We write\n    $\\catname{B}M$ for the corresponding category with a single object\n    $\\bullet$, where $\\catname{Hom}_{\\catname{B}M}(\\bullet, \\bullet) := M$.\n    We note then that the composition map in $\\catname{B}M$ is reflected in\n    the binary operation $\\_ \\cdot \\_ : M \\times M \\to M$, where\n    $\\mathbf{id}_{\\bullet} := 1$. Then, the associative and identity laws for\n    the category $\\catname{B}M$ follow directly from the associative and\n    identity laws, respectively, satisfied by the monoid $(M, \\cdot, 1)$.\n    This shows any monoid can be seen or interpreted as a one-object category.\n    Conversely, any one-object category $\\catname{C}$ with object $\\bullet$\n    determines the monoid $(\\catname{C}(\\bullet, \\bullet), \\circ, 1_{\\bullet})$,\n    and the two constructions are mutually inverse.\n\\end{solution}\n\n\\begin{Exercise}\n    Check carefully that preorders correspond exactly to categories in which\n    each homset has at most one element. Make sure you understand the\n    difference between such a category and $\\catname{Pos}$. (For example: how\n    big can homsets in $\\catname{Pos}$ be?)\n\\end{Exercise}\n\\begin{solution}\n    Let $(P, \\le)$ be a preorder. Then, we define the corresponding category\n    $\\catname{C}$ as follows. The objects of $\\catname{C}$ are the elements of\n    the set $P$, and for all $x, y \\in P$, we define a morphism $x \\to y$ iff\n    $x \\le y$. 
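(For instance, for the preorder $(\\N, {\\mid})$ of natural numbers ordered by\n    divisibility, there is exactly one morphism $2 \\to 6$, since $2 \\mid 6$,\n    and no morphism $6 \\to 2$.) 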
Then, for every object $x \\in \\catname{C}$, the identity\n    morphism $1_x: x \\to x$ corresponds exactly to the reflexive property\n    $x \\le x$ for all $x \\in P$. Note that each homset in $\\catname{C}$ has at\n    most one element. Also, for every $x \\to y$ and $y \\to z$ in $\\catname{C}$,\n    $x \\to z$ follows from the fact that $x \\le y$ and $y \\le z$ and the\n    transitivity of the $\\le$ relation on $P$. This defines a composition map\n    for morphisms in $\\catname{C}$. In addition, for all morphisms $x \\to y$,\n    $y \\to z$, and $z \\to w$, their associativity follows immediately from the\n    fact that each homset has at most one element, so any two parallel\n    morphisms are equal. Lastly, the unit laws hold for the same reason.\n    Therefore, we conclude that every preorder\n    corresponds precisely to a category in which each homset has at most one\n    element.\n\\end{solution}\n\n\\setcounter{Exercise}{9}\n\\begin{Exercise}\n    Show that the inverse, if it exists, is unique.\n\\end{Exercise}\n\\begin{solution}\n    Suppose $i: A \\to B$ is an isomorphism, with inverse $j: B \\to A$, in a\n    category $\\catname{C}$. Suppose $j': B \\to A$ is also an inverse of $i$.\n    Then, $j = 1_A \\circ j = (j' \\circ i) \\circ j = j' \\circ (i \\circ j) =\n    j' \\circ 1_B = j'$, and we are done.\n\\end{solution}\n\n\\begin{Exercise}\n    Show that $\\cong$ is an equivalence relation on the objects of a category.\n\\end{Exercise}\n\\begin{solution}\n    Let $\\catname{C}$ be some category.\\\\\n    (\\emph{Reflexivity}) For any object $X \\in \\catname{C}$, $X \\cong X$\n    follows from the fact that the identity morphism $1_X: X \\to X$ is an\n    isomorphism.\\\\\n    (\\emph{Symmetry}) If $X \\cong Y$, then there exists an isomorphism $i: X\n    \\to Y$. But, the inverse, $i^{-1}: Y \\to X$, of $i$ is also an isomorphism.\n    Hence, $Y \\cong X$.\\\\\n    (\\emph{Transitivity}) Suppose $X \\cong Y$ and $Y \\cong Z$. Then, there\n    exist isomorphisms $i: X \\to Y$ and $j: Y \\to Z$. Then, we claim that\n    $j \\circ i: X \\to Z$ is also an isomorphism. Indeed, it is trivial to show\n    that its inverse is the morphism $i^{-1} \\circ j^{-1}: Z \\to X$. This\n    implies $X \\cong Z$.\\\\\n    We thus conclude that $\\cong$ is an equivalence relation on the objects of\n    a category.\n\\end{solution}\n\n\\begin{Exercise}\n    Verify the claims that isomorphisms in $\\catname{Set}$ correspond exactly\n    to bijections, in $\\catname{Grp}$ to group isomorphisms, in $\\catname{Top}$\n    to homeomorphisms, and in $\\catname{Pos}$ to isomorphisms.\n\\end{Exercise}\n\\begin{solution}\n    ($\\catname{Set}$) We claim the following:\n    \\begin{enumerate}\n        \\item For nonempty $X$, $f: X \\to Y$ is injective iff $f$ has a left\n        inverse.\n        \\item $f: X \\to Y$ is surjective iff $f$ has a right inverse.\n    \\end{enumerate}\n    We first show (1).\n\n    ($\\impliedby$) Suppose $f: X \\to Y$ has a left inverse, $g: Y \\to X$, say.\n    Then, $g \\circ f = 1_X$. Assume for any $x, x' \\in X, f(x) = f(x')$. Then,\n    $x = 1_X(x) = (g \\circ f)(x) = g(f(x)) = g(f(x')) = (g \\circ f)(x') =\n    1_X(x') = x'$, which implies $f$ is injective.\n\n    ($\\implies$) Suppose $f: X \\to Y$ is injective, with $X$ nonempty. (The\n    nonemptiness assumption is needed: if $X$ is empty and $Y$ is not, then\n    $f$ is injective, yet there is no function $Y \\to X$ at all.) Choose some\n    $x_0 \\in X$. 
Define $g: Y \\to X$ by\n    \\begin{center}\n        $g(y) :=\n        \\begin{cases}\n        x_0 & \\text{if } y \\in Y \\setminus \\mathbf{Im}(f) \\\\\n        f^{-1}(y) & \\text{if } y \\in \\mathbf{Im}(f)\n        \\end{cases}$\n    \\end{center}\n    Then, for all $x \\in X, (g \\circ f)(x) = g(f(x)) = x = 1_X(x)$, which\n    implies $g \\circ f = 1_X$, thus showing that $g$ is a left inverse of $f$.\n\n    We now show (2).\n\n    ($\\impliedby$) Suppose $f: X \\to Y$ has a right inverse, $g: Y \\to X$, say.\n    Then, $f \\circ g = 1_Y$. Therefore, for all $y \\in Y$, $y = 1_Y(y) =\n    (f \\circ g)(y) = f(g(y)) = f(x)$, where $x = g(y)$. This shows $f$ is\n    surjective.\n\n    ($\\implies$) Suppose $f: X \\to Y$ is surjective. Now, consider an indexed\n    family of nonempty sets $\\{ f^{-1}(y)\\}_{y \\in Y}$. Then, using the axiom\n    of choice, we conclude there exists a function $g: Y \\to X$, such that $g(y)\n    \\in f^{-1}(y)$ for all $y \\in Y$. Then, for all $y \\in Y$, $(f \\circ g)(y) =\n    f(g(y)) = y = 1_Y(y)$, which implies $f \\circ g = 1_Y$, thus proving $f$ has\n    a right inverse.\n\n    Since in $\\catname{Set}$ a bijection is a function which is both injective\n    and surjective, using (1) and (2), we immediately conclude that bijections\n    in $\\catname{Set}$ correspond exactly to isomorphisms. (The case of empty\n    $X$ is immediate: a bijection with empty domain forces $Y = \\emptyset$,\n    and the empty function is its own inverse.) And, we are done.\n\n    In addition, in any category $\\catname{C}$, if $f: X \\to Y$ has both a left\n    inverse, $g: Y \\to X$, say, and a right inverse, $h: Y \\to X$, say, then\n    $g = h$. Indeed, $g = g \\circ 1_Y = g \\circ (f \\circ h) = (g \\circ f) \\circ\n    h = 1_X \\circ h = h$, and we are done.\\\\\n\n    % TODO\n    ($\\catname{Grp}$)\n\n    % TODO\n    ($\\catname{Top}$)\n\n    % TODO\n    ($\\catname{Pos}$)\n\\end{solution}\n\n\\subsection*{Opposite Categories and Duality}\nGiven a category $\\catname{C}$, the opposite category $\\catname{C}^{\\mathbf{op}}$\nis given by taking the same objects as $\\catname{C}$, and\n\\begin{center}\n    $\\catname{C}^{\\mathbf{op}}(A, B) = \\catname{C}(B, A)$.\n\\end{center}\n\nComposition and identities are inherited from $\\catname{C}$.\\\\\nIf we have\n\\begin{center}\n    $A \\xrightarrow{f} B \\xrightarrow{g} C$\n\\end{center}\nin $\\catname{C}^{\\mathbf{op}}$, this means\n\\begin{center}\n    $A \\xleftarrow{f} B \\xleftarrow{g} C$\n\\end{center}\nin $\\catname{C}$. Therefore, the composition $g \\circ f$ in\n$\\catname{C}^{\\mathbf{op}}$ is defined as $f \\circ g$ in $\\catname{C}$. This\nleads to the \\emph{\\textbf{principle of duality}}: a statement $S$ is true\nabout a category $\\catname{C}$ iff its dual (\\emph{i.e.} the one obtained\nfrom $S$ by reversing all the arrows) is true about $\\catname{C}^{\\mathbf{op}}$.\nFor example, a morphism $f$ is monic in $\\catname{C}^{\\mathbf{op}}$ iff it is\nepic in $\\catname{C}$. We say monic and epic are \\emph{dual notions}.\n\n\\setcounter{Exercise}{13}\n\\begin{Exercise}\n    If $P$ is a preorder, for example $(\\R, \\le)$, describe $P^{\\mathbf{op}}$\n    explicitly.\n\\end{Exercise}\n\\begin{solution}\n    An arrow $a \\le_{P^{\\mathbf{op}}} b$ in $P^{\\mathbf{op}}$ is precisely the\n    arrow $b \\le_P a$ in $P$. When $P = (\\R, \\le)$, $P^{\\mathbf{op}}$ describes\n    the ``greater than or equal'' preorder relation on $\\R$.\n\\end{solution}\n\n\\subsection*{Subcategories}\nLet $\\catname{C}$ be a category. 
Suppose we are given the collections\n\\begin{center}\n    $\\mathbf{Ob}(\\catname{D}) \\subseteq \\mathbf{Ob}(\\catname{C})$,\\\\\n    $\\forall A, B \\in \\mathbf{Ob}(\\catname{D}).\n    \\catname{D}(A, B) \\subseteq \\catname{C}(A, B)$.\n\\end{center}\nWe say $\\catname{D}$ is a \\emph{\\textbf{subcategory}} of $\\catname{C}$ if\nit is itself a category. In particular, $\\catname{D}$ is:\n\\begin{itemize}\n    \\item A \\emph{\\textbf{full}} subcategory of $\\catname{C}$ if for any\n    $A, B \\in \\mathbf{Ob}(\\catname{D})$, $\\catname{D}(A, B) = \\catname{C}(A, B)$.\n    \\item  A \\emph{\\textbf{lluf}} subcategory of $\\catname{C}$ if\n    $\\mathbf{Ob}(\\catname{D}) = \\mathbf{Ob}(\\catname{C})$.\n\\end{itemize}\nFor example, $\\catname{Grp}$ is a full subcategory of $\\catname{Mon}$, and\n$\\catname{Set}$ is a lluf subcategory of $\\catname{Rel}$.\n\n\\setcounter{Exercise}{15}\n\\begin{Exercise}\n    How many categories $\\catname{C}$ with $\\mathbf{Ob}(\\catname{C}) =\n    \\{ \\bullet \\}$ are there? (Hint: what do such categories correspond to?)\n\\end{Exercise}\n\\begin{solution}\n    Each such category corresponds to a monoid. So, there are as many such\n    categories as there are monoids.\n\\end{solution}\n\n\\subsection*{Exercises}\n\\begin{enumerate}\n    \\item Consider the following properties of an arrow $f$ in a category\n    $\\catname{C}$.\n    \\begin{itemize}\n        \\item $f$ is \\emph{split monic} if for some $g$, $g \\circ f$ is an\n        identity arrow.\n        \\item $f$ is \\emph{split epic} if for some $g$, $f \\circ g$ is an\n        identity arrow.\n    \\end{itemize}\n    \\begin{itemize}\n        \\item[a.] Prove that if $f$ and $g$ are arrows such that $g \\circ f$ is\n        monic, then $f$ is monic.\n        \\item[b.] Prove that if $f$ is split epic then it is epic.\n        \\item[c.] Prove that if $f$ and $g \\circ f$ are iso then $g$ is iso.\n        \\item[d.] Prove that if $f$ is monic and split epic then it is iso.\n        \\item[e.] In the category $\\catname{Mon}$ of monoids and monoid\n        homomorphisms, consider the inclusion map\n        \\begin{center}\n            $i: (\\N, +, 0) \\to (\\Z, +, 0)$\n        \\end{center}\n        of natural numbers into the integers. Show that this arrow is both monic\n        and epic. Is it an iso?\n    \\end{itemize}\n    The \\textbf{Axiom of Choice} in Set Theory states that if\n    $\\{ X_i \\}_{i \\in I}$ is a family of nonempty sets, we can form a set\n    $X = \\{ x_i \\mid i \\in I \\}$, where $x_i \\in X_i$ for all $i \\in I$.\n    \\begin{itemize}\n        \\item[f.] Show that in $\\catname{Set}$ an arrow which is epic is split\n        epic. Explain why this needs the Axiom of Choice.\n        \\item[g.] Is it always the case that an arrow which is epic is split\n        epic? Either prove that it is, or give a counterexample.\n    \\end{itemize}\n\n    \\item Give a description of partial orders as categories of a special kind.\n\\end{enumerate}\n\\begin{solution}\n    \\leavevmode\n    \\begin{enumerate}\n        \\item \\leavevmode\n        \\begin{itemize}\n            \\item[a.] Suppose $f: A \\to B$ and $g: B \\to C$ are such that\n            $g \\circ f$ is monic. Let $i, j: Z \\to A$ be such that\n            $f \\circ i = f \\circ j$. Then, $(g \\circ f) \\circ i = g \\circ\n            (f \\circ i) = g \\circ (f \\circ j) = (g \\circ f) \\circ j$, which\n            implies $i = j$, since $g \\circ f$ is monic. 
This implies $f$ is\n            monic, and we are done.\n            \\item[b.] Suppose $f: A \\to B$ is split epic. Then, there exists a\n            $g: B \\to A$ such that $f \\circ g = 1_B$. Let $i, j: B \\to C$ be\n            such that $i \\circ f = j \\circ f$. Then, $i = i \\circ 1_B = i \\circ\n            (f \\circ g) = (i \\circ f) \\circ g = (j \\circ f) \\circ g = j \\circ\n            (f \\circ g) = j \\circ 1_B = j$, which shows $f$ is epic.\n            \\item[c.] Suppose $f: A \\to B$ and $g: B \\to C$ are such that $f$ and\n            $g \\circ f$ are iso. We claim that the inverse of $g$ is\n            $f \\circ (g \\circ f)^{-1}: C \\to B$. Indeed, $g \\circ (f \\circ\n            (g \\circ f)^{-1}) = (g \\circ f) \\circ (g \\circ f)^{-1} = 1_C$, and\n            $(f \\circ (g \\circ f)^{-1}) \\circ g = f \\circ (g \\circ f)^{-1} \\circ\n            (g \\circ f) \\circ f^{-1} = f \\circ f^{-1} = 1_B$, which establishes\n            $g$ is also an iso.\n            \\item[d.] Suppose $f: A \\to B$ is monic and split epic. The latter\n            implies $f$ has a right inverse, $g: B \\to A$, say, where $f \\circ g\n            = 1_B$. Note that $g \\circ f: A \\to A$ and $1_A: A \\to A$. Now,\n            $f \\circ (g \\circ f) = (f \\circ g) \\circ f = 1_B \\circ f = f = f\n            \\circ 1_A$, which implies $g \\circ f = 1_A$, since $f$ is monic\n            (left cancellative). Thus, $g$ is also a left inverse of $f$, and\n            hence, $f$ is iso.\n            \\item[e.] It is easy to prove the inclusion map $\\N \\hookrightarrow\n            \\Z$ is really a monoid homomorphism. Indeed, $i(0) = 0$, and, for all\n            $n_1, n_2 \\in \\N$, $i(n_1 + n_2) = n_1 + n_2 = i(n_1) + i(n_2)$.\\\\\n            Next, we show that $i$ is monic. Let $g, h: X \\to \\N$ be monoid\n            homomorphisms such that $i \\circ g = i \\circ h$. Then, for all $x \\in X$,\n            $(i \\circ g)(x) = (i \\circ h) (x)$, which implies $i(g(x)) = i(h(x))$,\n            which implies $g(x) = h(x)$, which implies $g = h$. This shows the\n            inclusion map is monic.\\\\\n            We now show the inclusion map is epic. First, let\n            $g, h: (\\Z, +, 0) \\to (X, \\star, 1_X)$ be monoid homomorphisms such\n            that $g \\circ i = h \\circ i$. Then, for all $n \\in \\N$, $(g \\circ i)(n)\n            = (h \\circ i)(n)$, which\n            implies $g(i(n)) = h(i(n))$, which implies $g(n) = h(n)$. We now\n            claim that for all $n \\ge 1$, $g(-n) = h(-n)$. To that end, we use\n            induction on $n$. Note that $g(-1) = g(-1) \\star 1_X = g(-1) \\star\n            h(0) = g(-1) \\star h(1 + (-1)) = g(-1) \\star h(1) \\star h(-1) = g(-1)\n            \\star g(1) \\star h(-1) = g(-1 + 1) \\star h(-1) = g(0) \\star h(-1) = 1_X\n            \\star h(-1) = h(-1)$. Now, assume the proposition holds for some\n            $n \\ge 1$. Then, $g(-(n + 1)) = g(-n + (-1)) = g(-n) \\star g(-1) =\n            h(-n) \\star h(-1) = h(-n + (-1)) = h(-(n + 1))$. Hence, by induction,\n            $g(-n) = h(-n)$ for all $n \\ge 1$. Combining the results from above,\n            we thus conclude $g(z) = h(z)$ for all $z \\in \\Z$. In other words,\n            $g = h$, which implies $i$ is epic.\\\\\n            Clearly, the inclusion map $\\N \\hookrightarrow \\Z$ is not iso,\n            since it is not surjective.\n            \\item[f.] Suppose $f: X \\to Y$ is epic in $\\catname{Set}$. Then,\n            from an earlier result about $\\catname{Set}$, we conclude $f$ is\n            surjective. 
Now, consider the family of nonempty sets\n            $\\{ f^{-1}(y) \\}_{y \\in Y}$. Each of the sets in the family is\n            nonempty, because $f$ is surjective. Therefore, using the Axiom of\n            Choice, we can choose some element from each set in the\n            family to construct a function $g: Y \\to X$, given by $g(y) := x_y$,\n            where $x_y$ is the chosen element of $f^{-1}(y)$. In addition, for\n            all $y \\in Y$, $(f \\circ g)(y)\n            = f(g(y)) = y = 1_Y(y)$, which implies $f \\circ g = 1_Y$. This shows\n            $f$ has a right inverse, thus proving $f$ is split epic.\n            \\item[g.] It isn't always the case that an arrow which is epic is\n            split epic. For example, in the category $\\catname{Mon}$, the\n            inclusion map $\\N \\hookrightarrow \\Z$ is epic (as shown in (e)\n            above). Now, if we assume that it is also split epic, then there\n            exists a monoid homomorphism $g: \\Z \\to \\N$, such that $i \\circ g =\n            1_{\\Z}$. This implies $(i \\circ g)(-1) = 1_{\\Z}(-1)$, which implies\n            $i(g(-1)) = -1$, which implies $g(-1) = -1$, which implies $-1 \\in\n            \\N$, which is absurd. We thus conclude the aforesaid inclusion map\n            is \\emph{not} split epic, even though it is epic. And this proves\n            our original claim.\n        \\end{itemize}\n    \\item Suppose $(P, \\le)$ is a poset. Then, its corresponding category\n    $\\catname{C}$ is defined as follows. The objects of $\\catname{C}$ are the\n    elements of $P$, and for all $x, y \\in P$, $x \\to y$ iff $x \\le y$. The\n    reflexivity of $\\le$ corresponds to the identity arrows, and transitivity\n    to arrow composition. Note that there is at most one arrow for every pair\n    of objects in the category. Anti-symmetry of $\\le$ corresponds to the fact\n    that the only isomorphisms in $\\catname{C}$ are the identity arrows.\n    \\end{enumerate}\n\\end{solution}\n\n\\section{Some Basic Constructions}\n\\subsection*{Initial and Terminal Objects}\nAn object $I$ in a category $\\catname{C}$ is \\emph{\\textbf{initial}} if, for\nevery object $A$, \\emph{there exists a unique arrow} $I \\to A$, which we write\n$\\iota_{A}: I \\to A$.\n\nAn object $T$ in a category $\\catname{C}$ is \\emph{\\textbf{terminal}} if, for\nevery object $A$, \\emph{there exists a unique arrow} $A \\to T$, which we write\n$\\tau_{A}: A \\to T$.\n\nNote that initial and terminal objects are dual notions: $T$ is terminal in\n$\\catname{C}$ iff it is initial in $\\catname{C}^{\\mathbf{op}}$. We sometimes\nwrite $\\1$ for the terminal object and $\\0$ for the initial object.\n\n\\setcounter{Exercise}{17}\n\\begin{Exercise}\n    Verify the following claims. 
In each case, identify the canonical arrows.\n    \\begin{enumerate}\n        \\item In $\\catname{Set}$, the empty set is an initial object while any\n        one-element set $\\{ \\bullet \\}$ is terminal.\n        \\item In $\\catname{Pos}$, the poset $(\\emptyset, \\emptyset)$ is an\n        initial object while $(\\{ \\bullet \\}, \\{ (\\bullet, \\bullet) \\})$ is\n        terminal.\n        \\item In $\\catname{Top}$,  the space $(\\emptyset, \\{ \\emptyset \\})$ is\n        an initial object while $(\\{ \\bullet \\}, \\{ \\emptyset, \\{ \\bullet \\} \\})$\n        is terminal.\n        \\item In $\\catname{Vect}_k$, the one-element space $\\{ 0 \\}$ is both\n        initial and terminal.\n        \\item In a poset, seen as a category, an initial object is a least\n        element, while a terminal object is a greatest element.\n    \\end{enumerate}\n\\end{Exercise}\n\\begin{solution}\n    \\leavevmode\n    \\begin{enumerate}\n       \\item In $\\catname{Set}$, for any set (object) $A$, the function\n       $(\\emptyset, A, \\emptyset)$ is the unique function (arrow) from\n       $\\emptyset$ to $A$. Therefore, the empty set is (the) initial object in\n       $\\catname{Set}$. And, for every set $A$, the function $A \\to\n       \\{ \\bullet \\}$ that maps every element of $A$ to $\\bullet$ is the unique\n       function from $A$ to $\\{ \\bullet \\}$. This establishes that any\n       one-element set is terminal in $\\catname{Set}$.\n       \\item For any poset $(P, \\le)$, there exists a unique (empty) monotone\n       function $(\\emptyset, \\emptyset) \\xrightarrow{(\\emptyset, P, \\emptyset)}\n       (P, \\le)$. Hence, the poset $(\\emptyset, \\emptyset)$ is an initial object\n       in $\\catname{Pos}$. And, for any poset $(P, \\le)$, there exists a unique\n       monotone function $(P, \\le) \\to (\\{ \\bullet \\}, \\{ (\\bullet, \\bullet) \\})$,\n       defined by $x \\mapsto \\bullet$ for all $x \\in P$. Hence, $(\\{ \\bullet \\},\n       \\{ (\\bullet, \\bullet) \\})$ is terminal in $\\catname{Pos}$.\n       \\item For any topological space $(X, T_X)$, the unique empty function\n       \\begin{center}\n           $(\\emptyset, \\{ \\emptyset \\}) \\xrightarrow{(\\emptyset, X, \\emptyset)}\n           (X, T_X)$\n       \\end{center}\n       is continuous, since for every open set $T \\in T_X$, its preimage under\n       the aforesaid function is the empty set, which is open. Hence,\n       $(\\emptyset, \\{ \\emptyset \\})$ is initial in $\\catname{Top}$.\\\\\n       And, for any topological space $(X, T_X)$, the unique function\n       $(X, T_X) \\to (\\{ \\bullet \\}, \\{ \\emptyset, \\{ \\bullet \\} \\})$, defined by\n       $x \\mapsto \\bullet$ for all $x \\in X$, is continuous, since the preimage\n       of $\\emptyset$ under the aforesaid function is $\\emptyset$, which is open,\n       and the preimage of $\\{ \\bullet \\}$ is $X$, which is also open. Hence,\n       $(\\{ \\bullet \\}, \\{ \\emptyset, \\{ \\bullet \\} \\})$ is terminal in\n       $\\catname{Top}$.\n       \\item Assuming the ground field is $k$, for any vector space $V$, the\n       unique linear map $\\{ 0 \\} \\to V$, defined by $0 \\mapsto 0_V$ is a unique\n       arrow from $\\{ 0 \\}$ to $V$ in $\\catname{Vect}_k$. Also, the unique linear\n       map $V \\to \\{ 0\\}$, defined by $v \\mapsto 0$ for all $v \\in V$, is a\n       unique arrow from $V$ to $\\{ 0 \\}$ in $\\catname{Vect}_k$. 
This shows that\n       $\\{ 0 \\}$ is both initial and terminal in $\\catname{Vect}_k$.\n       \\item In a poset $(P, \\le)$, seen as a category, if $\\bot$ is an initial\n       object, then there exists a unique arrow $\\bot \\to p$ for all $p \\in P$.\n       This implies $\\bot \\le p$ for all $p \\in P$, when seen as a set. Hence,\n       an initial object in the category corresponding to $(P, \\le)$ is a least\n       element in $P$. Arguing similarly, we conclude that a terminal object in\n       the category corresponding to $(P, \\le)$ is a greatest element in $P$.\n   \\end{enumerate}\n\\end{solution}\n\n\\begin{Exercise}\n    Identify the initial and terminal objects in $\\catname{Rel}$.\n\\end{Exercise}\n\\begin{solution}\n    In $\\catname{Rel}$, the empty set $\\emptyset$ is both the initial object and\n    the terminal object. Indeed, for any set $A$, the empty relation $\\emptyset$\n    ($\\subseteq \\emptyset \\times A$) is a unique relation from $\\emptyset$ to\n    $A$, and the empty relation $\\emptyset$ ($\\subseteq A \\times \\emptyset$) is\n    also a unique relation from $A$ to $\\emptyset$.\n\\end{solution}\n\n\\begin{Exercise}\n    Suppose a monoid, viewed as a category, has either an initial or a terminal\n    object. What must the monoid be?\n\\end{Exercise}\n\\begin{solution}\n    The category corresponding to a monoid $(M, \\cdot, 1_M)$ contains just a\n    single object. If this object is initial, then all morphisms must be the\n    identity morphism on this initial object, which implies $M = \\{ 1_M \\}$.\n    The argument is similar if the aforesaid object is terminal, which would\n    again imply $M = \\{ 1_M \\}$. Thus, in either case, the monoid must be the\n    trivial monoid.\n\\end{solution}\n\nA fundamental fact about initial and terminal objects is that they are\n\\emph{unique up to (unique) isomorphism}. This is characteristic of all such\n``universal\" definitions. Hence, if initial objects exist in a category, we\ncan speak of \\emph{the} initial object. Similarly for terminal objects.\n\n\\setcounter{Exercise}{21}\n\\begin{Exercise}\n    Let $\\catname{C}$ be a category with an initial object $\\0$. For any object\n    $A$, show the following:\n    \\begin{enumerate}\n        \\item If $A \\cong \\0$, then $A$ is an initial object.\n        \\item If there exists a monomorphism $f: A \\to \\0$, then $f$ is an iso,\n        and hence $A$ is initial.\n    \\end{enumerate}\n\\end{Exercise}\n\\begin{solution}\n    \\leavevmode\n    \\begin{enumerate}\n        \\item Suppose $A \\cong \\0$. Then, there exists an isomorphism $f: A\n        \\xrightarrow{\\sim} \\0$. For any object $B$, there exists a unique\n        morphism $\\iota_B: \\0 \\to B$, and hence, $\\iota_B \\circ f: A \\to B$.\n        This proves the existence of a morphism $A \\to B$ for any object $B$\n        in the category. We now show that such a morphism is indeed unique.\n        Let $g, h: A \\to B$ be a pair of morphisms for any object $B$ in the\n        category $\\catname{C}$. 
Then, we have\n        \\begin{center}\n            $\\0 \\xrightarrow{f^{-1}} A \\xrightarrow{g} B$\n        \\end{center}\n        and\n        \\begin{center}\n            $\\0 \\xrightarrow{f^{-1}} A \\xrightarrow{h} B$.\n        \\end{center}\n        Since $\\0$ is initial, we must have $g \\circ f^{-1} = h \\circ f^{-1}$.\n        Therefore, $g = g \\circ 1_A = g \\circ (f^{-1} \\circ f) = (g \\circ f^{-1})\n        \\circ f = (h \\circ f^{-1}) \\circ f = h \\circ (f^{-1} \\circ f) = h \\circ\n        1_A = h$. This proves uniqueness, and we are done.\n        \\item Suppose $A \\xrightarrow{f} \\0$ is a monomorphism. We claim\n        the unique arrow $\\0 \\xrightarrow{\\iota_A} A$ is the inverse of $f$. To\n        that end, we show $\\iota_A$ is both a left and a right inverse of $f$.\n        Indeed, $f \\circ \\iota_A : \\0 \\to \\0$, and since $\\0$ is initial, we\n        must have $f \\circ \\iota_A = 1_{\\0}$, which implies $\\iota_A$ is a\n        right inverse of $f$. Now, note $\\iota_A \\circ f: A \\to A$ and\n        $1_A: A \\to A$. Also, $f \\circ (\\iota_A \\circ f) = (f \\circ \\iota_A)\n        \\circ f = 1_{\\0} \\circ f = f = f \\circ 1_A$, and since $f$ is left\n        cancellative, we have $\\iota_A \\circ f = 1_A$, which shows $\\iota_A$ is\n        a left inverse of $f$. Thus, $f$ has both a left inverse and a right\n        inverse, implying it is iso, and hence, using the result obtained in (1)\n        above, we conclude $A$ is an initial object in $\\catname{C}$.\n    \\end{enumerate}\n\\end{solution}\n\n\\subsection*{Products and Coproducts}\nWe can express a general notion of product that is meaningful in any category,\nsuch that, if a product exists, it is characterized uniquely up to unique\nisomorphism. Given a particular mathematical context (\\emph{i.e.} a category),\nwe can then verify if a product exists in that category. The concrete\nconstruction appropriate to the context will enter only into the proof of\n\\emph{existence}; all of the useful \\emph{properties} of a product follow from\nthe general definition.\n\n\\setcounter{Exercise}{23}\n\\begin{Exercise}\n    Verify $\\catname{Pair}(A, B)$ is a category, where $A$ and $B$ are arbitrary\n    objects in some category.\n\\end{Exercise}\n\\begin{solution}\n    Let $A$ and $B$ be some arbitrary objects in some category $\\catname{C}$.\n    Now, given morphisms $f: (P, p_1, p_2) \\to (Q, q_1, q_2)$ and $g: (Q, q_1,\n    q_2) \\to (R, r_1, r_2)$ in $\\catname{Pair}(A, B)$, it is easy to check that\n    $g \\circ f: P \\to R$ in $\\catname{C}$. Also, we have\n    \\begin{center}\n        $q_1 \\circ f = p_1$, $q_2 \\circ f = p_2$\n    \\end{center}\n    and\n    \\begin{center}\n        $r_1 \\circ g = q_1$, $r_2 \\circ g = q_2$\n    \\end{center}\n    So, $r_1 \\circ (g \\circ f) = (r_1 \\circ g) \\circ f = q_1 \\circ f = p_1$, and,\n    $r_2 \\circ (g \\circ f) = (r_2 \\circ g) \\circ f = q_2 \\circ f = p_2$, which\n    implies $g \\circ f: (P, p_1, p_2) \\to (R, r_1, r_2)$ in $\\catname{Pair}(A,B)$.\n    Associativity of morphisms in $\\catname{Pair}(A, B)$ follows directly from\n    the associativity of morphisms in $\\catname{C}$. Finally, for all\n    $(P, p_1, p_2)$ in $\\catname{Pair}(A, B)$, the identity morphism $1_P: P \\to\n    P$ is the identity morphism for $(P, p_1, p_2)$, since $p_1 \\circ 1_P = p_1$\n    and $p_2 \\circ 1_P = p_2$. 
And, this proves that $\\catname{Pair}(A, B)$ is\n    indeed a category.\n\\end{solution}\n\nWe say $(A \\times B, \\pi_1, \\pi_2)$ is a \\emph{\\textbf{product}} of $A$ and $B$\nif it is \\emph{terminal} in $\\catname{Pair}(A, B)$. Products are specified by\ntriples $A \\xleftarrow{\\pi_1} A \\times B \\xrightarrow{\\pi_2} B$, where the\n$\\pi_i$'s are called \\emph{projections}. For economy (and if projections are\nobvious), we say $A \\times B$ is the product of $A$ and $B$. We say a category\n$\\catname{C}$ has \\emph{\\textbf{(binary) products}} if each pair of objects\n$A, B$ has a product in $\\catname{C}$. Since products are terminal objects,\nthey are unique up to (unique) isomorphism.\n\nUnpacking the uniqueness condition from $\\catname{Pair}(A, B)$ back to\n$\\catname{C}$, we obtain the following more concise definition of products that\nwe use in practice.\\\\\n\n(\\textbf{Equivalent definition of product}) Let $A, B$ be objects in a category\n$\\catname{C}$. A product of $A$ and $B$ is an object $A \\times B$ together with\na pair of arrows $A \\xleftarrow{\\pi_1} A \\times B \\xrightarrow{\\pi_2} B$ such\nthat for every triple $A \\xleftarrow{f} C \\xrightarrow{g} B$, there exists a\n\\emph{unique} morphism\n\\begin{center}\n    $\\langle f, g \\rangle : C \\to A \\times B$\n\\end{center}\nsuch that the corresponding diagram commutes. That is,\n\\begin{center}\n    $\\pi_1 \\circ \\langle f, g \\rangle = f$\\\\\n    $\\pi_2 \\circ \\langle f, g \\rangle = g$\n\\end{center}\nWe call $\\langle f, g \\rangle$ the \\emph{pairing} of $f$ and $g$.\n\n\\setcounter{Exercise}{25}\n\\begin{Exercise}\n    Verify the following claims.\n    \\begin{enumerate}\n        \\item In $\\catname{Set}$, products are the usual cartesian products.\n        \\item In $\\catname{Pos}$, products are cartesian products with the\n        pointwise order.\n        \\item In $\\catname{Top}$, products are cartesian products with the\n        product topology.\n        \\item In $\\catname{Vect}_k$, products are direct sums.\n        \\item In a poset, seen as a category, products are \\emph{greatest lower\n        bounds}.\n    \\end{enumerate}\n\\end{Exercise}\n\\begin{solution}\n    \\leavevmode\n    \\begin{enumerate}\n        \\item Let $A, B$ be arbitrary sets in $\\catname{Set}$. We claim that\n        the cartesian product $A \\times B$, equipped with the canonical\n        projection functions, is the product of $A$ and $B$. 
Indeed, given any $A \\xleftarrow{f} C\n        \\xrightarrow{g} B$, we show $\\langle f, g \\rangle : C \\to A \\times B$,\n        defined by\n        \\begin{center}\n            $c \\mapsto (f(c), g(c))$,\n        \\end{center}\n        is the unique function that makes the following diagram commute:\n        \\[\n        \\begin{tikzcd}\n            & C \\arrow[ddl, swap, \"f\"]\n                \\arrow[dd, dashrightarrow, \"{\\langle f, g \\rangle}\" description]\n                \\arrow[ddr, \"g\"] & \\\\\n            & & \\\\\n            A & A \\times B \\arrow[l, swap, \"\\pi_1\"] \\arrow[r, \"\\pi_2\"] & B\n        \\end{tikzcd}\n        \\]\n        (\\emph{Existence}) It is easy to check that $\\langle f, g \\rangle$ is\n        indeed a function from $C$ to $A \\times B$.\\\\\n        (\\emph{Commutativity}) For all $c \\in C$, $(\\pi_1 \\circ \\langle f, g\n        \\rangle)(c) = f(c)$ and $(\\pi_2 \\circ \\langle f, g \\rangle)(c) = g(c)$,\n        which imply the above diagram commutes.\\\\\n        (\\emph{Uniqueness}) Suppose $h: C \\to A \\times B$ is such that $\\pi_1 \\circ\n        h = f$ and $\\pi_2 \\circ h = g$. Then, for all $c \\in C$, $(\\pi_1 \\circ\n        h)(c) = f(c)$ and $(\\pi_2 \\circ h)(c) = g(c)$, which imply $\\pi_1(h(c)) =\n        f(c)$ and $\\pi_2(h(c)) = g(c)$, which imply $h(c) = (f(c), g(c)) =\n        \\langle f, g \\rangle (c)$, thus proving $h = \\langle f, g \\rangle$, and\n        thereby, showing the uniqueness of $\\langle f, g \\rangle$.\\\\\n        Hence, $A \\xleftarrow{\\pi_1} A \\times B \\xrightarrow{\\pi_2} B$ is the\n        product of $A$ and $B$ in $\\catname{Set}$.\n        \\item Let $(P, \\le)$ and $(Q, \\le)$ be posets. Let $(P \\times Q, \\le)$ be\n        the cartesian product of $P$ and $Q$ with the pointwise order. That is,\n        for all $a, c \\in P$ and $b, d \\in Q$, $(a, b) \\le (c, d)$ iff $a \\le c$\n        and $b \\le d$. We claim $(P \\times Q, \\le)$ equipped with the canonical\n        projection functions (which are monotone) is the product of $(P,\n        \\le)$ and $(Q, \\le)$ in $\\catname{Pos}$. Given any $(P, \\le) \\xleftarrow{f} (R, \\le)\n        \\xrightarrow{g} (Q, \\le)$, where $f, g$ are monotone functions, the\n        function $\\langle f, g \\rangle : (R, \\le) \\to (P \\times Q, \\le)$, defined\n        by\n        \\begin{center}\n            $r \\mapsto (f(r), g(r))$\n        \\end{center}\n        is the unique monotone function that makes the following diagram commute:\n        \\[\n        \\begin{tikzcd}\n            & (R, \\le) \\arrow[ddl, swap, \"f\"]\n                \\arrow[dd, dashrightarrow, \"{\\langle f, g \\rangle}\" description]\n                \\arrow[ddr, \"g\"] & \\\\\n            & & \\\\\n            (P, \\le) & (P \\times Q, \\le) \\arrow[l, swap, \"\\pi_1\"]\n            \\arrow[r, \"\\pi_2\"] & (Q, \\le)\n        \\end{tikzcd}\n        \\]\n        (\\emph{Existence}) It is easy to check that $\\langle f, g \\rangle$ is\n        indeed a set function from $R$ to $P \\times Q$. 
And, for all $r_1, r_2\n        \\in R$, if $r_1 \\le r_2$, then $f(r_1) \\le f(r_2)$ and $g(r_1) \\le\n        g(r_2)$ (since $f, g$ are monotone), which implies $(f(r_1), g(r_1)) \\le\n        (f(r_2), g(r_2))$, which implies $\\langle f, g \\rangle (r_1) \\le\n        \\langle f, g \\rangle (r_2)$, which implies $\\langle f, g \\rangle$ is\n        monotone.\\\\\n        (\\emph{Commutativity}) For all $r \\in R$, we have\n        \\begin{center}\n            $(\\pi_1 \\circ \\langle f, g \\rangle)(r) = f(r)$,\\\\\n            $(\\pi_2 \\circ \\langle f, g \\rangle)(r) = g(r).$\n        \\end{center}\n        The above implies that the above diagram does commute.\\\\\n        (\\emph{Uniqueness}) Suppose $h: (R, \\le) \\to (P \\times Q, \\le)$ is a\n        monotone function such that $\\pi_1 \\circ h = f$ and $\\pi_2 \\circ h = g$.\n        Then, for all $r \\in R$, $\\pi_1 \\circ h (r)= f(r)$ and $\\pi_2 \\circ h\n        (r)= g(r)$, which imply $\\pi_1(h(r)) = f(r)$ and $\\pi_2(h(r)) = g(r)$,\n        which imply $h(r) = (f(r), g(r))$, which implies $h(r) = \\langle f, g\n        \\rangle (r)$, which implies $h = \\langle f, g \\rangle$, thus showing\n        that $\\langle f, g \\rangle$ with the commutativity property is indeed\n        unique.\\\\\n        Hence, we conclude the cartesian product $(P \\times Q, \\le)$ with the\n        pointwise order is the product of any posets $(P, \\le)$ and $(Q, \\le)$.\n        \\item % TODO\n        \\item % TODO\n        \\item In a poset $(P, \\le)$, seen as a category, the product $a \\times b$\n        of two elements $a, b \\in P$ is an element in $P$ satisfying $a \\times b\n        \\le a$ and $a \\times b \\le b$, such that for all elements $c \\in P$, if\n        $c \\le a$ and $c \\le b$, then $c \\le a \\times b$. This is precisely the\n        definition of the \\emph{greatest lower bound} of any two elements $a, b\n        \\in P$, seen as a set. Therefore, products are greatest lower bounds in\n        posets.\n    \\end{enumerate}\n\\end{solution}\n\nThe following proposition shows that the uniqueness of the pairing arrow can be\nspecified purely equationally by the equation:\n\\begin{center}\n    $\\forall h: C \\to A \\times B.\\, h = \\langle \\pi_1 \\circ h, \\pi_2 \\circ h\n    \\rangle$\n\\end{center}\n\n\\setcounter{prop}{26}\n\\begin{prop}\n    For any triple $A \\xleftarrow{\\pi_1} A \\times B \\xrightarrow{\\pi_2} B$, the\n    following statements are equivalent:\n    \\begin{itemize}\n        \\item[(I)] For any triple $A \\xleftarrow{f} C \\xrightarrow{g} B$, there\n        exists a unique morphism $\\langle f, g \\rangle : C \\to A \\times B$ such\n        that $\\pi_1 \\circ \\langle f, g \\rangle = f$ and $\\pi_2 \\circ \\langle f,\n        g \\rangle = g$.\n        \\item[(II)] For any triple $A \\xleftarrow{f} C \\xrightarrow{g} B$, there\n        exists a morphism $\\langle f, g \\rangle : C \\to A \\times B$ such that\n        $\\pi_1 \\circ \\langle f, g \\rangle = f$ and $\\pi_2 \\circ \\langle f, g\n        \\rangle = g$, and moreover, for any $h: C \\to A \\times B$,  $h =\n        \\langle \\pi_1 \\circ h, \\pi_2 \\circ h \\rangle$.\n    \\end{itemize}\n\\end{prop}\n\\begin{proof}\n    ((I) $\\implies$ (II)) Suppose (I) holds. Assume $A \\xleftarrow{f} C\n    \\xrightarrow{g} B$. Then, by (I), there exists a (unique) morphism $\\langle\n    f, g \\rangle : C \\to A \\times B$ such that $\\pi_1 \\circ \\langle f, g \\rangle\n    = f$ and $\\pi_2 \\circ \\langle f, g \\rangle = g$. 
Now, let $h: C \to A
    \times B$. Note $A \xleftarrow{\pi_1 \circ h} C \xrightarrow{\pi_2 \circ h}
    B$. Thus, by (I), there exists a unique morphism $\langle \pi_1 \circ h,
    \pi_2 \circ h \rangle : C \to A \times B$ such that
    \begin{center}
        $\pi_1 \circ \langle \pi_1 \circ h, \pi_2 \circ h \rangle =
        \pi_1 \circ h$,\\
        $\pi_2 \circ \langle \pi_1 \circ h, \pi_2 \circ h \rangle =
        \pi_2 \circ h$.
    \end{center}
    Since $h$ also satisfies these two equations, uniqueness implies $h =
    \langle \pi_1 \circ h, \pi_2 \circ h \rangle$. This proves (II).

    ((II) $\implies$ (I)) Suppose (II) holds. Assume $A \xleftarrow{f} C
    \xrightarrow{g} B$. Then, by (II), there exists a morphism $\langle f, g
    \rangle : C \to A \times B$ such that $\pi_1 \circ \langle f, g \rangle =
    f$ and $\pi_2 \circ \langle f, g \rangle = g$. We claim such a morphism is
    unique. So, suppose $h: C \to A \times B$ such that $\pi_1 \circ h = f$ and
    $\pi_2 \circ h = g$. Then, by (II), we have $h = \langle \pi_1 \circ h,
    \pi_2 \circ h \rangle$, which implies $h = \langle f, g \rangle$. This
    proves (I), and our proof is complete.
\end{proof}

\textbf{Cartesian product of morphisms.} Given $f_1 : A_1 \to B_1$ and $f_2 :
A_2 \to B_2$, we define the \emph{cartesian product of morphisms} $f_1$ and
$f_2$ by
\begin{center}
    $f_1 \times f_2 := \langle f_1 \circ \pi_1, f_2 \circ \pi_2 \rangle :
    A_1 \times A_2 \to B_1 \times B_2$.
\end{center}

The following proposition provides some useful properties of products.

\begin{prop}
    For any $f: A \to B$, $g: A \to C$, $h: A' \to A$, and any $p: B \to B'$,
    $q: C \to C'$, the following hold:
    \begin{enumerate}
        \item $\langle f, g \rangle \circ h = \langle f \circ h, g \circ h
        \rangle$
        \item $(p \times q) \circ \langle f, g \rangle = \langle p \circ f,
        q \circ g \rangle$.
    \end{enumerate}
\end{prop}
\begin{proof}
\leavevmode
    \begin{enumerate}
        \item Note $\langle f, g \rangle \circ h : A' \to B \times C$.
        Therefore, by (II) of Proposition 27, $\langle f, g \rangle \circ h =
        \langle \pi_1 \circ (\langle f, g \rangle \circ h), \pi_2 \circ
        (\langle f, g \rangle \circ h) \rangle = \langle f \circ h, g \circ h
        \rangle$.
        \item $(p \times q) \circ \langle f, g \rangle = \langle p \circ \pi_1,
        q \circ \pi_2 \rangle \circ \langle f, g \rangle = \langle p \circ
        \pi_1 \circ \langle f, g \rangle, q \circ \pi_2 \circ \langle f, g
        \rangle \rangle = \langle p \circ f, q \circ g \rangle$, where the
        middle step uses property 1.
    \end{enumerate}
\end{proof}
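To make these definitions concrete, here is a pointwise sketch in
$\catname{Set}$ (our own unpacking; nothing below depends on it):
$\langle f, g \rangle(a) = (f(a), g(a))$ and $(p \times q)(b, c) =
(p(b), q(c))$, so both identities of the proposition can be checked
elementwise, \emph{e.g.}
\begin{center}
    $((p \times q) \circ \langle f, g \rangle)(a) = (p(f(a)), q(g(a))) =
    \langle p \circ f, q \circ g \rangle(a)$.
\end{center}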
\textbf{General Products.} The notion of products can be generalized to
arbitrary arities as follows. In a category $\catname{C}$, a product for a
family of objects $\{ A_i \}_{i \in I}$ is an object $P$ and morphisms
\begin{center}
    $p_i: P \to A_i$ ($i \in I$)
\end{center}
such that, for all objects $B$ and arrows
\begin{center}
    $f_i: B \to A_i$ ($i \in I$)
\end{center}
there is a \emph{unique} arrow $g: B \to P$ such that, for all $i \in I$, the
following diagram commutes
\[
\begin{tikzcd}
    B \arrow[rr, dashrightarrow, "g"] \arrow[ddr, swap, "f_i"]
      & & P \arrow[ddl, "p_i"] \\
    & & \\
    & A_i &
\end{tikzcd}
\]
Again, if such a product exists, it is unique up to (unique) isomorphism. We
write $P = \prod_{i \in I} A_i$ for the product object, and $g = \langle f_i
\mid i \in I \rangle$ for the unique morphism in the definition.

\setcounter{Exercise}{28}
\begin{Exercise}
    What is the product of the empty family?
\end{Exercise}
\begin{solution}
    The product of the empty family is an object $T$, such that for every
    object with arrows to (non-existent) members of the empty family, there is
    a unique arrow from that object to $T$ making the corresponding diagram
    commute. Since there are no commutation conditions to satisfy, this means
    there is a unique arrow from every object to $T$, and this is precisely
    the definition of a terminal object. Hence, the product of the empty
    family is a terminal object.
\end{solution}

\begin{Exercise}
    Show that if a category has binary and nullary products, then it has all
    finite products.
\end{Exercise}
\begin{solution}
    Suppose $\catname{C}$ is a category with binary and nullary products. We
    claim, for all $n \in \N$, the product $P_n = \prod_{i=1}^{n} A_i$ with the
    corresponding projection arrows $p_i: P_n \to A_i$, where the $A_i$ are
    objects in $\catname{C}$, exists. We use induction on $n$ to prove our
    claim. (\emph{Base case}) For $n = 0$, $P_0$ is the nullary product, which
    exists by assumption. (\emph{Inductive case}) Now, suppose a product
    $P_n$ exists for some $n \ge 0$. Then, $P_{n+1} = \prod_{i=1}^{n+1} A_i =
    \left( \prod_{i=1}^{n} A_i \right) \times A_{n+1} = P_n \times A_{n+1}$,
    which is a binary product of objects, and this exists due to the fact that
    $\catname{C}$ has binary products and that $P_n$ exists (from our
    inductive hypothesis.) Hence, by induction, $P_n$ exists for all
    $n \in \N$.
\end{solution}
~\\
\emph{\textbf{Coproducts}}. The notion dual to products is that of coproducts.
Formally, coproducts in a category $\catname{C}$ are just products in
$\catname{C}^{\mathbf{op}}$, interpreted back in $\catname{C}$.

Let $A, B$ be objects in a category $\catname{C}$. A \emph{coproduct} of $A$
and $B$ is an object $A + B$ together with a pair of arrows $A \xrightarrow{i_A}
A + B \xleftarrow{i_B} B$, such that for every triple $A \xrightarrow{f} C
\xleftarrow{g} B$, there exists a unique morphism
\begin{center}
    $[f, g]: A + B \to C$
\end{center}
such that the following diagram commutes.
\[
\begin{tikzcd}
    A \arrow[r, "i_A"] \arrow[ddr, swap, "f"] &
        A + B \arrow[dd, dashed, "{[f, g]}" description] &
        B \arrow[l, swap, "i_B"] \arrow[ddl, "g"] \\
    & & \\
    & C &
\end{tikzcd}
\]
We call $i_A$ and $i_B$ \emph{injections} and $[f, g]$ the \emph{copairing} of
$f$ and $g$.
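For intuition (an illustrative aside of ours): copairing is definition by
cases. In $\catname{Set}$, taking $A = B = \1$ to be a one-element set, the
coproduct $\1 + \1$ is a two-element set of `tags', and for points
$a: \1 \to C$ and $b: \1 \to C$ the copairing $[a, b]: \1 + \1 \to C$ is
exactly an `if--then--else', selecting $a$ or $b$ according to the tag.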
As with pairings, the uniqueness of copairings can be specified
by an equation:
\begin{center}
    $\forall h: A + B \to C.\, h = [h \circ i_A, h \circ i_B]$
\end{center}

\setcounter{Exercise}{31}
\begin{Exercise}
    A coproduct in $\catname{Set}$ is given by \emph{disjoint union} of sets,
    which can be defined concretely, \emph{e.g.} by
    \begin{center}
        $X + Y := (\{ 1 \} \times X) \cup (\{ 2 \} \times Y)$
    \end{center}
    We can define \emph{injections}
    \begin{center}
        $X \xrightarrow{i_X} X + Y \xleftarrow{i_Y} Y$\\
        $i_X(x) := (1, x), \q i_Y(y) := (2, y)$.
    \end{center}
    Also, given functions $f: X \to Z$ and $g: Y \to Z$, we can define
    \begin{center}
        $[f, g]: X + Y \to Z$\\
        $[f, g](1, x) := f(x), \q [f, g](2, y) := g(y)$.
    \end{center}
    Check that the above construction does yield coproducts in $\catname{Set}$.
\end{Exercise}
\begin{solution}
    For all $x \in X$, $([f, g] \circ i_X)(x) = [f, g](i_X(x)) = [f, g](1, x) =
    f(x)$, and, for all $y \in Y$, $([f, g] \circ i_Y)(y) = [f, g](i_Y(y)) =
    [f, g](2, y) = g(y)$. Therefore, $[f, g] \circ i_X = f$ and $[f, g] \circ
    i_Y = g$, proving that the corresponding diagram is indeed commutative. Let
    $h: X + Y \to Z$ be such that $h \circ i_X = f$ and $h \circ i_Y = g$. Then,
    for all $x \in X$ and $y \in Y$, $(h \circ i_X)(x) = f(x)$ and $(h \circ
    i_Y)(y) = g(y)$, which imply $h(1, x) = [f, g](1, x)$ and $h(2, y) = [f, g]
    (2, y)$, which imply $h = [f, g]$, thus showing $[f, g]$ is indeed unique.
    This shows $X \xrightarrow{i_X} X + Y \xleftarrow{i_Y} Y$ as defined is a
    coproduct of $X$ and $Y$, for any two sets $X, Y$ in $\catname{Set}$.
\end{solution}

\begin{Exercise}
    Verify the following claims:
    \begin{enumerate}
        \item In $\catname{Pos}$, disjoint unions (with the inherited orders)
        are coproducts.
        \item In $\catname{Top}$, topological disjoint unions are coproducts.
        \item In $\catname{Vect}_k$, direct sums are coproducts.
        \item In a poset, \emph{least upper bounds} are coproducts.
    \end{enumerate}
\end{Exercise}
\begin{solution}
\leavevmode
    \begin{enumerate}
        \item % TODO
        \item % TODO
        \item % TODO
        \item In a poset $(P, \le)$, for any two elements $p, q \in P$, the
        coproduct $p + q$ is an element satisfying $p \le p + q$ and
        $q \le p + q$, such that for any element $r \in P$, if $p \le r$
        and $q \le r$ then $p + q \le r$. Thus, $p + q$ satisfies
        precisely the definition of the least upper bound of $p$ and $q$. Hence,
        least upper bounds in a poset are coproducts.
    \end{enumerate}
\end{solution}

\begin{Exercise}
    Dually to products, express coproducts as initial objects of a category
    $\catname{Copair}(A, B)$ of $A, B$-copairings.
\end{Exercise}
\begin{solution}
Let $A, B$ be objects in a category $\catname{C}$. An $A, B$-copairing is a
triple $A \xrightarrow{p_1} P \xleftarrow{p_2} B$, where $P$ is an object in
$\catname{C}$.
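(For instance, when a coproduct $A + B$ exists, the triple $A \xrightarrow{i_A}
A + B \xleftarrow{i_B} B$ is itself an $A, B$-copairing; it is the copairing
this exercise asks us to exhibit as initial.)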
A morphism of $A, B$-copairings $f: (P, p_1, p_2) \to
(Q, q_1, q_2)$ is a morphism $f: P \to Q$ in $\catname{C}$ such that the
following diagram commutes
\[
\begin{tikzcd}
    & Q & \\
    & & \\
    A \arrow[uur, "q_1"]
      \arrow[r, swap, "p_1"] &
    P \arrow[uu, swap, "f"] &
    B \arrow[uul, swap, "q_2"]
      \arrow[l, "p_2"]
\end{tikzcd}
\]
Then, it is easy to check that the $A, B$-copairings and their morphisms form
a category $\catname{Copair}(A, B)$.

We say $(A + B, i_A, i_B)$ is a \textbf{coproduct} of $A$ and $B$ if it is
\emph{initial} in $\catname{Copair}(A, B)$.
\end{solution}

\subsection*{Pullbacks and Equalisers}
We consider two further constructions of interest: \emph{pullbacks} and
\emph{equalisers}.\\

\textbf{Pullbacks}. Consider a pair of morphisms $A \xrightarrow{f} C
\xleftarrow{g} B$. The \emph{\textbf{pullback}} of $f$ along $g$ is a pair
$A \xleftarrow{p} D \xrightarrow{q} B$ such that $f \circ p = g \circ q$, and,
for any pair $A \xleftarrow{p'} D' \xrightarrow{q'} B$ such that $f \circ p' =
g \circ q'$, there exists a unique $h: D' \to D$ such that the following
diagram commutes.
\[
\begin{tikzcd}
    D' \arrow[drr, bend left, "q'"]
       \arrow[ddr, bend right, "p'"]
       \arrow[dr, dashed, "h"] & & \\
    & D \arrow[r, "q"]
        \arrow[d, swap, "p"]
    & B \arrow[d, "g"] \\
    & A \arrow[r, swap, "f"] & C
\end{tikzcd}
\]

\emph{\textbf{Examples of pullbacks}}:
\begin{itemize}
    \item In $\catname{Set}$, the pullback of $A \xrightarrow{f} C
    \xleftarrow{g} B$ is defined as a \emph{subset of the cartesian product}:
    \begin{center}
        $A \times_C B = \{ (a, b) \in A \times B \mid f(a) = g(b) \}$.
    \end{center}
    For example, consider a category $\catname{C}$ with
    \begin{center}
        $\text{Ar}(\catname{C}) \xrightarrow{\text{dom}} \text{Ob}(\catname{C})
        \xleftarrow{\text{cod}} \text{Ar}(\catname{C})$.
    \end{center}
    Then, the pullback of \textbf{dom} along \textbf{cod} is the set of
    \emph{composable morphisms}, \emph{i.e.} pairs of morphisms $(f, g)$ in
    $\catname{C}$ such that $f \circ g$ is well-defined.
    \item In $\catname{Set}$ again, subsets (\emph{i.e.} inclusion maps) pull
    back to subsets:
    \[
    \begin{tikzcd}
        f^{-1}(U)
            \arrow[r]
            \arrow[dd, hook] &
        U
            \arrow[dd, hook] \\
          & \\
        X
            \arrow[r, swap, "f"] &
        Y
    \end{tikzcd}
    \]
\end{itemize}

\setcounter{Exercise}{36}
\begin{Exercise}
    Let $\catname{C}$ be a category with a terminal object $\1$. Show that, for
    any $A, B \in \mathbf{Ob}(\catname{C})$, the pullback of $A
    \xrightarrow{\tau_A} \1 \xleftarrow{\tau_B} B$ is the product of $A$ and $B$,
    if it exists.
\end{Exercise}
\begin{solution}
    Suppose $\catname{C}$ is a category with a terminal object $\1$. Assume, for
    any $A, B \in \mathbf{Ob}(\catname{C})$, their product $A \xleftarrow{\pi_1}
    A \times B \xrightarrow{\pi_2} B$ exists. We show that this product is the
    pullback of $A \xrightarrow{\tau_A} \1 \xleftarrow{\tau_B} B$. First, note
    $\tau_A \circ \pi_1 : A \times B \to \1$ and $\tau_B \circ \pi_2 : A \times
    B \to \1$, but since $\1$ is terminal, we have $\tau_A \circ \pi_1 = \tau_B
    \circ \pi_2$.
That is, the following diagram commutes.\n    \\[\n    \\begin{tikzcd}\n        A \\times B\n        \\arrow[r, \"\\pi_2\"]\n        \\arrow[d, swap, \"\\pi_1\"] &\n        B\n        \\arrow[d, \"\\tau_B\"] \\\\\n        A\n        \\arrow[r, swap, \"\\tau_A\"] &\n        \\1\n    \\end{tikzcd}\n    \\]\n    Now, for any pair $A \\xleftarrow{f} C \\xrightarrow{g} B$, we again have\n    $\\tau_A \\circ f = \\tau_B \\circ g$, since $\\1$ is terminal. Also, there\n    exists a unique morphism $\\langle f, g \\rangle : C \\to A \\times B$ such\n    that $\\pi_1 \\circ \\langle f, g \\rangle = f$ and  $\\pi_2 \\circ \\langle f, g\n    \\rangle = g$, which implies the following diagram commutes.\n    \\[\n    \\begin{tikzcd}\n        C\n        \\arrow[ddr, swap, bend right, \"f\"]\n        \\arrow[drr, bend left, \"g\"]\n        \\arrow[dr, dashed, \"{\\langle f, g \\rangle}\" description] & & \\\\\n        & A \\times B\n        \\arrow[r, \"\\pi_2\"]\n        \\arrow[d, swap, \"\\pi_1\"] &\n        B\n        \\arrow[d, \"\\tau_B\"] \\\\\n        & A\n        \\arrow[r, swap, \"\\tau_A\"] &\n        \\1\n    \\end{tikzcd}\n    \\]\n    And, this completes our proof.\n\\end{solution}\n\nJust as for products, pullbacks can equivalently be described as terminal\nobjects in suitable categories. Given a pair of morphisms $A \\xrightarrow{f} C\n\\xleftarrow{g} B$, we define an $(f, g)$-\\emph{cone} to be a triple $(D, p, q)$\nsuch that the following diagram commutes.\n\\[\n\\begin{tikzcd}\n    D\n    \\arrow[r, \"q\"]\n    \\arrow[d, swap, \"p\"]\n    &\n    B\n    \\arrow[d, \"g\"] \\\\\n    A\n    \\arrow[r, swap, \"f\"]\n    &\n    C\n\\end{tikzcd}\n\\]\nA morphism of $(f, g)$-cones $h: (D_1, p_1, q_1) \\to (D_2, p_2, q_2)$ is a\nmorphism $h: D_1 \\to D_2$ such that the following diagram commutes.\n\\[\n\\begin{tikzcd}\n    &\n    D_1\n    \\arrow[dl, swap, \"p_1\"]\n    \\arrow[d, \"h\"]\n    \\arrow[dr, \"q_1\"]\n    & \\\\\n    A\n    &\n    D_2\n    \\arrow[l, \"p_2\"]\n    \\arrow[r, \"q_2\"]\n    &\n    B\n\\end{tikzcd}\n\\]\nWe can thus form a category $\\catname{Cone}(f, g)$. A pullback of $f$ along $g$,\nif it exists, is exactly a terminal object in $\\catname{Cone}(f, g)$. This also\nshows the uniqueness of pullbacks up to unique isomorphism.\n~\\\\\n\n\\textbf{Equalisers.} Consider a pair of parallel arrows\n\\begin{tikzcd}\n    A \\arrow[r, shift left, \"f\"]\n      \\arrow[r, swap, shift right, \"g\"]\n      & B\n\\end{tikzcd}.\nAn \\emph{\\textbf{equaliser}} of $(f, g)$ is an arrow $e: E \\to A$ such that\n$f \\circ e = g \\circ e$, and for any arrow $h: D \\to A$ such that $f \\circ h =\ng \\circ h$, there is a unique arrow $\\hat{h}: D \\to E$ so that $h = e \\circ\n\\hat{h}$. That is, the following diagram commutes.\n\\[\n\\begin{tikzcd}\n    E \\arrow[r, \"e\"]\n      & A \\arrow[r, shift left, \"f\"]\n          \\arrow[r, swap, shift right, \"g\"]\n          & B \\\\\n    D \\arrow[u, dashed, \"\\hat{h}\"]\n      \\arrow[ur, swap, \"h\"]\n\\end{tikzcd}\n\\]\n\nAs for products, the uniqueness of the arrow from $D$ to $E$ can be expressed\nequationally:\n\\begin{center}\n    $\\forall k : D \\to E. \\,\\, \\widehat{e \\circ k} = k$.\n\\end{center}\n\n\\setcounter{Exercise}{38}\n\\begin{Exercise}\nWhy is $\\widehat{e \\circ k}$ well-defined for any $k: D \\to E$? Prove that\nthe above equation is equivalent to the uniqueness requirement.\n\\end{Exercise}\n\\begin{solution}\n~\\\\\n(Uniqueness requirement $\\implies$ equation) Suppose the uniqueness requirement\nis satisfied. 
Assume $k: D \to E$. Then, $e \circ k : D \to A$, and $f \circ (e \circ k) =
(f \circ e) \circ k = (g \circ e) \circ k = g \circ (e \circ k)$. Therefore,
by the uniqueness requirement, there exists a unique $l : D \to E$ such that
$e \circ l = e \circ k$. Now, note that $l = k$ and $l = \widehat{e \circ k}$
(the latter by the defining property of $\widehat{(-)}$) both satisfy this
equation, which implies $k = \widehat{e \circ k}$.\\
(Equation $\implies$ uniqueness requirement) Suppose $\forall k : D \to E,
\widehat{e \circ k} = k$. Let $h : D \to A$ such that $f \circ h = g \circ h$,
and assume $\hat{h} : D \to E$ such that $e \circ \hat{h} = h$. We show such an
arrow $\hat{h}$ is unique. To that end, suppose $l : D \to E$ such that $e \circ
l = h$. Then, $\hat{h} = \widehat{e \circ l} = l$, which is what we set out to
prove, and we are done.
\end{solution}

\emph{\textbf{Example of equaliser}}: In $\catname{Set}$, the equaliser of $f,
g$ is given by the inclusion
\begin{center}
    $\{ x \in A \mid f(x) = g(x) \} \hookrightarrow A$.
\end{center}
This allows \emph{equationally defined subsets} to be defined as equalisers.
For example, consider the pair of maps
\begin{tikzcd}
    \R^2 \arrow[r, shift left, "f"]
         \arrow[r, shift right, swap, "g"]
         & \R,
\end{tikzcd}
where
\begin{center}
    $f : (x, y) \mapsto x^2 + y^2, \qq g : (x, y) \mapsto 1$.
\end{center}
Then, the equaliser is the unit circle as a subset of $\R^2$.\\
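Another standard instance, included here for contrast (a sketch, assuming the
usual linear algebra facts): in $\catname{Vect}_k$, the equaliser of a pair of
linear maps $f, g: V \to W$ is the inclusion of the subspace
\begin{center}
    $\ker(f - g) = \{ v \in V \mid f(v) = g(v) \} \hookrightarrow V$,
\end{center}
so equalisers in $\catname{Vect}_k$ are kernels in disguise.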
\subsection*{Limits and Colimits}
The notions introduced so far are all special cases of a general notion of
\emph{limits} in categories, as well as the dual notion of \emph{colimits}.
\begin{table}[h!]
\centering
\begin{tabular}{| l | l |}
    \hline
    \textbf{Limits} & \textbf{Colimits} \\
    \hline \hline
    Terminal Objects & Initial Objects \\
    Products & Coproducts \\
    Pullbacks & Pushouts \\
    Equalisers & Coequalisers \\
    \hline
\end{tabular}
\caption{Examples of Limits and Colimits}
\label{table: 1}
\end{table}
An important aspect of studying any kind of mathematical structure is to
determine what limits and colimits the category of such structures has.

\subsection*{Exercises}
\leavevmode
\begin{enumerate}
    \item Give an example of a category where some pair of objects lacks a
    product or coproduct.

    \item (\emph{Pullback lemma}) Consider the following commutative diagram.
    \[
    \begin{tikzcd}
        A \arrow[r, "f"]
          \arrow[d, swap, "u"]
          & B \arrow[r, "g"]
              \arrow[d, "v"]
          & C \arrow[d, "w"] \\
        D \arrow[r, swap, "h"]
          & E \arrow[r, swap, "i"]
          & F
    \end{tikzcd}
    \]
    Given that the right hand square $BCEF$ and the outer square $ACDF$ are
    pullbacks, prove that the left hand square $ABDE$ is a pullback.

    \item Consider $A \xrightarrow{f} C \xleftarrow{g} B$ with pullback
    $A \xleftarrow{p} D \xrightarrow{q} B$. For each $A \xleftarrow{p'} D'
    \xrightarrow{q'} B$ with $f \circ p' = g \circ q'$, let $\phi(p', q'): D'
    \to D$ be the arrow dictated by the pullback condition. Express uniqueness
    of $\phi(p', q')$ equationally.
\end{enumerate}
\begin{solution}
\leavevmode
\begin{enumerate}
    \item In a discrete category with the naturals as objects, say, any pair
    of \emph{distinct} natural numbers (objects) lacks both a product and a
    coproduct, since there are no arrows at all between distinct objects.

    \item Suppose the following diagram is commutative.
    \[
    \begin{tikzcd}
    A \arrow[r, "f"]
      \arrow[d, swap, "u"]
      & B \arrow[r, "g"]
          \arrow[d, "v"]
      & C \arrow[d, "w"] \\
    D \arrow[r, swap, "h"]
      & E \arrow[r, swap, "i"]
      & F
    \end{tikzcd}
    \]
    And, suppose the right hand square $BCEF$ and the outer square $ACDF$ are
    pullbacks. We claim the left hand square $ABDE$ is a pullback. To that end,
    let $D \xleftarrow{k} A' \xrightarrow{l} B$ such that $v \circ l = h \circ
    k \, (\ast)$. Note $D \xleftarrow{k} A' \xrightarrow{g \circ l} C$ and that
    $(i \circ h) \circ k = i \circ (v \circ l) = (w \circ g) \circ l = w \circ
    (g \circ l)$. And, since $ACDF$ is a pullback, there exists a unique
    morphism $p : A' \to A$ making the following diagram commute.
    \[
    \begin{tikzcd}
        A' \arrow[ddr, swap, bend right, "k"]
           \arrow[drrr, bend left, "g \circ l"]
           \arrow[dr, dashed, "p"]
           & & & \\
        & A \arrow[r, "f"]
            \arrow[d, swap, "u"]
        & B \arrow[r, "g"]
            \arrow[d, "v"]
        & C \arrow[d, "w"] \\
        & D \arrow[r, swap, "h"]
        & E \arrow[r, swap, "i"]
        & F
    \end{tikzcd}
    \]
    Now, we need only show $f \circ p = l$.\\
    Note $E \xleftarrow{h \circ k} A' \xrightarrow{g \circ l} C$, and $i \circ
    (h \circ k) = (i \circ h) \circ k = (i \circ h) \circ (u \circ p) = (i
    \circ h \circ u) \circ p = (w \circ g \circ f) \circ p = w \circ (g \circ
    f \circ p) = w \circ (g \circ l)$. Since $BCEF$ is a pullback, there exists
    a unique morphism $q : A' \to B$ such that
    \begin{align*}
        g \circ q &= g \circ l \\
        v \circ q &= h \circ k
    \end{align*}
    Using $(\ast)$ and the commutativity of the above diagram, we see $q = l$
    and $q = f \circ p$ both satisfy the above set of equations, which implies
    $f \circ p = l$. Let $p' : A' \to A$ where $f \circ p' = l$ and $u \circ p'
    = k$. Then, this implies the following diagram commutes.
    \[
    \begin{tikzcd}
        A' \arrow[ddr, swap, bend right, "k"]
        \arrow[drrr, bend left, "g \circ l"]
        \arrow[dr, "p'"]
        & & & \\
        & A \arrow[r, "f"]
        \arrow[d, swap, "u"]
        & B \arrow[r, "g"]
        \arrow[d, "v"]
        & C \arrow[d, "w"] \\
        & D \arrow[r, swap, "h"]
        & E \arrow[r, swap, "i"]
        & F
    \end{tikzcd}
    \]
    Since $ACDF$ is a pullback, it implies $p' = p$, from which we conclude
    $ABDE$ is indeed a pullback.

    \item The uniqueness of $\phi(p', q')$ can be stated equationally as
    follows:
    \begin{center}
        $\forall h: D' \to D. \,\, h = \phi(p \circ h, q \circ h)$.
    \end{center}
\end{enumerate}
\end{solution}
\section{Functors}
Part of the ``categorical philosophy'' is:
\begin{center}
    \emph{Don't just look at the objects; take the morphisms into account too}.
\end{center}

\subsection*{Basics}
A ``morphism of categories'' is a \emph{functor}.\\

A \emph{\textbf{functor}} $F: \catname{C} \to \catname{D}$ is given by:
\begin{itemize}
    \item An object-map, assigning an object $FA$ of $\catname{D}$ to every
    object $A$ of $\catname{C}$.
    \item An arrow-map, assigning an arrow $Ff : FA \to FB$ of $\catname{D}$ to
    every arrow $f: A \to B$ in such a way that composition and identities are
    preserved:
    \begin{center}
        $F(g \circ f) = F(g) \circ F(f), \qq F1_A = 1_{FA}$.
    \end{center}
\end{itemize}

We use the same symbol to denote object- and arrow-maps; in practice, this
never causes confusion. Since functors preserve domains and codomains of arrows,
for each pair of objects $A, B$ of $\catname{C}$, there is a well-defined map
\begin{center}
    $F_{A, B} : \catname{C}(A, B) \to \catname{D}(FA, FB)$.
\end{center}

The conditions expressing preservation of composition and identities are called
\emph{functoriality}.\\

\textbf{Examples of functors.}
\begin{enumerate}
    \item Let $(P, \le)$ and $(Q, \le)$ be preorders (seen as categories.) A
    functor $F: (P, \le) \to (Q, \le)$ is specified by an object-map, $F: P \to
    Q$, say, and an appropriate arrow-map. The arrow-map corresponds to the
    condition:
    \begin{center}
        $\forall p_1, p_2 \in P. \,\, p_1 \le p_2 \implies F(p_1) \le F(p_2)$,
    \end{center}
    \emph{i.e.} to monotonicity of $F$. Moreover, the functoriality conditions
    are trivial, since in the codomain $(Q, \le)$, all homsets contain at most
    one element.\\
    Hence, a functor between preorders is just a monotone map.

    \item Let $(M, \cdot, 1)$ and $(N, \cdot, 1)$ be monoids. A functor $F:
    (M, \cdot, 1) \to (N, \cdot, 1)$ is specified by a trivial object-map (since
    monoids are categories with a single object) and an arrow-map, $F: M \to N$,
    say. The functoriality conditions correspond to
    \begin{center}
        $\forall m_1, m_2 \in M. \,\, F(m_1 \cdot m_2) = F(m_1) \cdot F(m_2),
        \qq F(1) = 1$,
    \end{center}
    \emph{i.e.} to $F$ being a monoid homomorphism.\\
    Hence, a functor between monoids is just a monoid homomorphism.
\end{enumerate}

\textbf{Other examples of functors.}
\begin{itemize}
    \item Inclusion of a subcategory, $\catname{C} \hookrightarrow \catname{D}$,
    is a functor (by taking the identity map for object- and arrow-map.)

    \item The \emph{covariant} powerset functor $\mc{P}: \catname{Set} \to
    \catname{Set}$:
    \begin{center}
        $X \mapsto \mc{P}X, \q (f: X \to Y) \mapsto \mc{P}(f) :=
        S \mapsto \{ f(x) \mid x \in S \}$.
    \end{center}

    \item $U: \catname{Mon} \to \catname{Set}$ is the `forgetful' or
    `underlying' functor which sends a monoid to its set of elements,
    `forgetting' the algebraic structure, and sends a homomorphism to the
    corresponding function between sets.
There are similar forgetful functors
    for other categories of structured sets.

    \item (\emph{Group theory examples}) The assignment of the commutator
    subgroup of a group extends to a functor from $\catname{Grp}$ to
    $\catname{Grp}$; and the assignment of the quotient by this normal
    subgroup extends to a functor from $\catname{Grp}$ to $\catname{Ab}$ (the
    category of abelian groups and group homomorphisms.)

    \item (\emph{Homology}) The basic idea of algebraic topology is that there
    are functorial assignments of algebraic objects (\emph{e.g.} groups) to
    topological spaces, and variants of this idea (`(co)homology theories') are
    pervasive throughout modern pure mathematics.
\end{itemize}
~\\

\textbf{Functors `of several variables'}

We can generalize the notion of a functor to a mapping from several domain
categories to a codomain category. For this, we need the following definition.\\

(\emph{\textbf{Product category}}) For categories $\catname{C}, \catname{D}$,
define the \emph{\textbf{product category}} $\catname{C} \times \catname{D}$ as
follows. An object in $\catname{C} \times \catname{D}$ is a pair of objects
from $\catname{C}$ and $\catname{D}$, and an arrow in $\catname{C} \times
\catname{D}$ is a pair of arrows from $\catname{C}$ and $\catname{D}$.
Identities and arrow composition are defined componentwise:
\begin{center}
    $1_{(A, B)} := (1_A, 1_B), \qq (f, g) \circ (f', g') := (f \circ f',
    g \circ g')$.
\end{center}
~\\
A functor `of two variables', with domains $\catname{C}$ and $\catname{D}$, to
$\catname{E}$ is simply a functor
\begin{center}
    $F: \catname{C} \times \catname{D} \to \catname{E}$.
\end{center}
~\\
For example, there are evident projection functors
\begin{center}
    $\catname{C} \leftarrow \catname{C} \times \catname{D} \to \catname{D}$.
\end{center}
~\\

\subsection*{Further Examples}
\textbf{Set-valued functors.}

Many important constructions arise as functors $F: \catname{C} \to
\catname{Set}$. Here are some examples:
\begin{itemize}
    \item If $G$ is a group, a functor $F: G \to \catname{Set}$ is \emph{an
    action of $G$ on a set}.

    \item If $P$ is a poset representing time, a functor $F: P \to
    \catname{Set}$ is a notion of \emph{set varying through time}. This is
    related to \emph{Kripke semantics}, and to \emph{forcing arguments} in set
    theory.

    \item Recall $\mathbbm{2}_{\rightrightarrows}$ is the category
    \begin{tikzcd}
        \bullet \arrow[r, bend left]
                 \arrow[r, bend right]
                 & \bullet
    \end{tikzcd}.
    Then, functors $F: \mathbbm{2}_{\rightrightarrows} \to \catname{Set}$
    correspond to \emph{directed graphs, i.e.} structures $(V, E, s, t)$, where
    $V$ is a set of vertices, $E$ is a set of edges, and $s, t: E \to V$
    specify the source and target vertices of each edge.
\end{itemize}

In the first example above, we note that for a group $(G, \cdot, 1)$, a functor
$F: G \to \catname{Set}$ maps the unique object in the category corresponding
to $G$ to a set $X$, and each element $g \in G$ to an endofunction on $X$,
$g \bullet \_ : X \to X$. Then, functoriality amounts to the
conditions
\begin{center}
    $\forall g, h \in G.
\,\, F(g \cdot h) = F(g) \circ F(h), \qq F(1) = 1_X$.
\end{center}
That is, for all $g, h \in G$ and all $x \in X$,
\begin{center}
    $(g \cdot h) \bullet x = g \bullet (h \bullet x), \qq 1 \bullet x = x$.
\end{center}
We therefore see $F$ defines a (\emph{left}) \emph{group action} of $G$ on $X$.

\setcounter{Exercise}{44}
\begin{Exercise}
Verify that functors $F: \mathbbm{2}_{\rightrightarrows} \to \catname{Set}$
correspond to directed graphs.
\end{Exercise}
\begin{solution}
Let $\mathbbm{2}_{\rightrightarrows} := $
\begin{tikzcd}
    A \arrow[r, bend left, "f"]
    \arrow[r, swap, bend right, "g"]
    & B
\end{tikzcd}
A functor
\begin{center}
    $F: \mathbbm{2}_{\rightrightarrows} \to \catname{Set}$
\end{center}
is determined by a set $E := FA$ (the set of edges), a set $V := FB$ (the set
of vertices), and two functions $s := Ff$ and $t := Fg$ from $E$ to $V$ (the
source and target functions.) Functoriality imposes no further conditions,
since the only composites in $\mathbbm{2}_{\rightrightarrows}$ involve
identities. Thus, each such $F$ corresponds precisely to a directed graph
$(V, E, s, t)$, and we are done.
\end{solution}

\textbf{Example: Lists}\\
Data-type constructors are functors. As a basic example, we consider lists.
There is a functor
\begin{center}
    $\List : \catname{Set} \to \catname{Set}$
\end{center}
which takes a set $X$ to the set of all finite lists (sequences) of elements of
$X$. $\List$ is functorial: its action on morphisms (\emph{i.e.} functional
programs) is given by \emph{maplist}:\\
\begin{center}
    $\dps \frac{f: X \to Y}{\List(f): \List(X) \to \List(Y)}$\\~\\
    $\List(f)[x_1, \ldots, x_n] := [f(x_1), \ldots, f(x_n)]$.
\end{center}
~\\
We can upgrade $\List$ to a functor $\texttt{MList}: \catname{Set} \to
\catname{Mon}$ by mapping each set $X$ to the monoid $(\List(X), \ast,
\epsilon)$ and $f: X \to Y$ to $\List(f)$, as above. The monoid operation
$\ast: \List(X) \times \List(X) \to \List(X)$ is list concatenation, and
$\epsilon$ is the empty list. We call $\texttt{MList}(X)$ the
\emph{\textbf{free monoid}} over $X$.\\

\textbf{Products as functors}\\
If a category $\catname{C}$ has binary products, then there is automatically a
functor
\begin{center}
    $\_ \times \_ : \catname{C} \times \catname{C} \to \catname{C}$
\end{center}
which takes each pair $(A, B)$ to the product $A \times B$, and each $(f, g)$ to
\begin{center}
    $f \times g := \langle f \circ \pi_1, g \circ \pi_2 \rangle$.
\end{center}
Functoriality is demonstrated as follows:
\begin{align*}
    (f \times g) \circ (f' \times g')
    &= (f \times g) \circ \langle f' \circ \pi_1, g' \circ \pi_2 \rangle \\
    &= \langle f \circ f' \circ \pi_1, g \circ g' \circ \pi_2 \rangle \\
    &= (f \circ f') \times (g \circ g'), \\
    1_A \times 1_B
    &= \langle 1_A \circ \pi_1, 1_B \circ \pi_2 \rangle \\
    &= \langle \pi_1 \circ 1_{A \times B}, \pi_2 \circ 1_{A \times B} \rangle \\
    &= 1_{A \times B}.
\end{align*}
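In $\catname{Set}$, this computation is just the pointwise description (our
own unpacking): $(f \times g)(a, b) = (f(a), g(b))$, so the first equation
reads $(f(f'(a)), g(g'(b))) = ((f \circ f')(a), (g \circ g')(b))$ and the
second reads $(1_A(a), 1_B(b)) = (a, b)$, both of which are evident.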
~\\

\textbf{The category of categories}\\
There is a category $\catname{Cat}$ whose objects are categories, and whose
arrows are functors. Identities in $\catname{Cat}$ are given by identity
functors:
\begin{center}
    $\text{Id}_{\mc{C}}: \mc{C} \to \mc{C} := A \mapsto A, \, f \mapsto f$.
\end{center}

Composition of functors is defined in the evident way. Note that if $F: \mc{C}
\to \mc{D}$ and $G: \mc{D} \to \mc{E}$, then, for $f: A \to B$ in $\mc{C}$,
\begin{center}
    $G \circ F (f) := G(F(f)) : G(F(A)) \to G(F(B))$.
\end{center}
One usually makes some size restriction on the categories, so that
$\catname{Cat}$ does not contain itself.

Note that product categories are products in $\catname{Cat}$. For any pair of
categories $\mc{C}, \mc{D}$, set
\begin{center}
    $\mc{C} \xleftarrow{\pi_1} \mc{C} \times \mc{D} \xrightarrow{\pi_2} \mc{D}$
\end{center}
where $\mc{C} \times \mc{D}$ is the product category (defined previously) and
the $\pi_i$'s are the obvious projection functors. For any pair of functors
$\mc{C} \xleftarrow{F} \mc{E} \xrightarrow{G} \mc{D}$, set
\begin{center}
    $\langle F, G \rangle : \mc{E} \to \mc{C} \times \mc{D} :=
    A \mapsto (FA, GA), \q f \mapsto (Ff, Gf)$.
\end{center}
It is easy to check that $\langle F, G \rangle$ is indeed a functor. Moreover,
satisfaction of the product diagram and uniqueness are shown exactly as in
$\catname{Set}$.
~\\

\subsection*{Contravariance}
By definition, the arrow-map of a functor $F$ is \emph{covariant}: it preserves
the direction of arrows. That is, if $f: A \to B$, then $Ff: FA \to FB$. A
\emph{contravariant} functor $G$ does exactly the opposite: it reverses the
direction of arrows. That is, if $f: A \to B$, then $Gf: GB \to GA$. The
following is a concise way to express contravariance.

Let $\mc{C}, \mc{D}$ be categories. A \emph{\textbf{contravariant}} functor
$G$ from $\mc{C}$ to $\mc{D}$ is a functor $G: \mc{C}^{\text{op}} \to \mc{D}$.
(Equivalently, a functor $G: \mc{C} \to \mc{D}^{\text{op}}$.)

More explicitly, a contravariant functor $G$ is given by an assignment of:
\begin{itemize}
    \item An object $GA$ in $\mc{D}$ to every object $A$ in $\mc{C}$.
    \item An arrow $Gf: GB \to GA$ in $\mc{D}$ to every arrow $f: A \to B$ in
    $\catname{C}$, such that
    \begin{center}
        $G(g \circ f) = G(f) \circ G(g), \qq G(1_A) = 1_{GA}$.
    \end{center}
\end{itemize}
Note that functors of several variables can be covariant in some variables and
contravariant in others, \emph{e.g.}
\begin{center}
    $F: \mc{C}^{\text{op}} \times \mc{D} \to \mc{E}$.
\end{center}
~\\

\textbf{Examples of Contravariant Functors}
\begin{itemize}
    \item The contravariant powerset functor, $\mc{P}^{\text{op}}:
    \catname{Set}^{\text{op}} \to \catname{Set}$, is given by:
    \begin{center}
        $\mc{P}^{\text{op}}(X) := \mc{P}(X)$,\\
        $\mc{P}^{\text{op}}(f: X \to Y): \mc{P}(Y) \to \mc{P}(X) := T \mapsto
        \{ x \in X \mid f(x) \in T \}$.
    \end{center}

    \item The dual space functor on vector spaces:
    \begin{center}
        $(\_)^* : \mathbf{Vect}_k^{\text{op}} \to \mathbf{Vect}_k :=
        V \mapsto V^*$
    \end{center}
\end{itemize}

The above are both examples of the following idea: Send an object $A$ into
functions from $A$ into some fixed object.
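As a quick sanity check of contravariant functoriality (our own computation,
for an arbitrary fixed set $V$, writing $V^X := \catname{Set}(X, V)$ and
$V^f := g \mapsto g \circ f$ for $f: X \to Y$): given $X \xrightarrow{f} Y
\xrightarrow{f'} Z$ and $g \in V^Z$,
\begin{center}
    $V^{f' \circ f}(g) = g \circ (f' \circ f) = (g \circ f') \circ f =
    (V^f \circ V^{f'})(g)$,
\end{center}
so composition is reversed, exactly as contravariance demands.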
For example, the powerset can be
written as $\mc{P}(X) = 2^X$, where we think of a subset in terms of its
characteristic function.\\

\textbf{Hom-functors}\\
We now consider some fundamental examples of $\catname{Set}$-valued functors.
Given a category $\mc{C}$ and an object $A$ of $\mc{C}$, two functors to
$\catname{Set}$ can be defined:
\begin{itemize}
    \item The covariant Hom-functor at $A$,
    \begin{center}
        $\mc{C}(A, \_) : \mc{C} \to \catname{Set}$,
    \end{center}
    which is given by (recall that each $\mc{C}(A, B)$ is a set):
    \begin{center}
        $\mc{C}(A, \_)(B) := \mc{C}(A, B), \qq
        \mc{C}(A, \_)(f: B \to C) := g \mapsto f \circ g$.
    \end{center}
~\\
    We usually write $\mc{C}(A, \_)(f)$ as $\mc{C}(A, f)$. Functoriality
    reduces directly to the basic category axioms: associativity of composition
    and the unit laws for the identity.

    \item There is also a contravariant Hom-functor at $A$,
    \begin{center}
        $\mc{C}(\_, A) : \mc{C}^{\text{op}} \to \catname{Set}$,
    \end{center}
    given by:
    \begin{center}
        $\mc{C}(\_, A)(B) := \mc{C}(B, A), \qq
        \mc{C}(\_, A)(h: C \to B) := g \mapsto g \circ h$.
    \end{center}
\end{itemize}
~\\
Generalizing both of the above, we obtain a \emph{\textbf{bivariant}}
Hom-functor,
\begin{center}
    $\mc{C}(\_, \_) : \mc{C}^{\text{op}} \times \mc{C} \to \catname{Set}$.
\end{center}

\setcounter{Exercise}{46}
\begin{Exercise}
    Spell out the definition of $\mc{C}(\_, \_): \mc{C}^{\text{op}} \times
    \mc{C} \to \catname{Set}$. Verify carefully that it is a functor.
\end{Exercise}
\begin{solution}
    We define $\mc{C}(\_, \_): \mc{C}^{\text{op}} \times \mc{C} \to
    \catname{Set}$ as follows:
    \begin{center}
        For all $(A, B)$ in $\mc{C}^{\text{op}} \times \mc{C}$, $\mc{C}(\_,\_)
        (A, B) := \mc{C}(A, B)$, and \\
        for all $f: A' \to A$ and $g: B \to B'$, $\mc{C}(\_, \_)(f, g) :=
        h \mapsto g \circ h \circ f$, where $h \in \mc{C}(A, B)$.
    \end{center}
    Now, we show the above assignment does satisfy the functoriality
    conditions. Indeed, for all $(A, B), (A', B'), (A'', B'')$ in
    $\mc{C}^{\text{op}} \times \mc{C}$, if $f: A' \to A, f': A'' \to A',
    g: B \to B', g': B' \to B''$, then $\mc{C}(\_,\_)(f', g')
    \circ \mc{C}(\_,\_)(f, g) = (h' \mapsto g' \circ h' \circ f') \circ
    (h \mapsto g \circ h \circ f) = h \mapsto g' \circ (g \circ h \circ f)
    \circ f' = h \mapsto (g' \circ g) \circ h \circ (f \circ f') = \mc{C}(\_,
    \_)(f \circ f', g' \circ g)$. And, for any $(A, B)$ in $\mc{C}^{\text{op}}
    \times \mc{C}$, we have $\mc{C}(\_, \_)(1_{(A, B)}) = \mc{C}(\_, \_)
    (1_A, 1_B) = h \mapsto 1_B \circ h \circ 1_A = h \mapsto h = 1_{\mc{C}
    (A, B)}$.
And this establishes that $\\mc{C}(\\_, \\_)$ as defined above is\n    indeed a functor.\n\\end{solution}\n\n\\subsection*{Properties of Functors}\nA functor $F: \\mc{C} \\to \\mc{D}$ is said to be:\n\\begin{itemize}\n    \\item \\emph{\\textbf{faithful}} if each map $F_{A, B}: \\mc{C}(A, B) \\to\n    \\mc{D}(FA, FB)$ is injective;\n\n    \\item \\emph{\\textbf{full}} if each map $F_{A, B}: \\mc{C}(A, B) \\to\n    \\mc{D}(FA, FB)$ is surjective;\n\n    \\item an \\emph{\\textbf{embedding}} if $F$ is full, faithful, and injective\n    on objects;\n\n    \\item an \\emph{\\textbf{equivalence}} if $F$ is full, faithful, and\n    \\emph{essentially surjective, i.e.} for every object $B$ of $\\mc{D}$, there\n    is an object $A$ of $\\mc{C}$ such that $F(A) \\cong B$;\n\n    \\item an \\emph{\\textbf{isomorphism}} if there is a functor $G: \\mc{D} \\to\n    \\mc{C}$ such that\n    \\begin{center}\n        $G \\circ F = 1_{\\mc{C}}, \\qq F \\circ G = 1_{\\mc{D}}$.\n    \\end{center}\n\\end{itemize}\n\nWe say categories $\\mc{C}$ and $\\mc{D}$ are isomorphic, $\\mc{C} \\cong\n\\mc{D}$, if there is an isomorphism between them. This is just the usual notion\nof isomorphism applied to $\\catname{Cat}$.\n\nSome examples of functors with various properties are as follows:\n\\begin{itemize}\n    \\item The forgetful functor $U: \\catname{Mon} \\to \\catname{Set}$ is\n    faithful, but not full. Note not all functions $f: M \\to N$ yield an arrow\n    $f: (M, \\cdot, 1) \\to (N, \\cdot, 1)$. Similar properties hold for other\n    forgetful functors.\n\n    \\item The free monoid functor $\\texttt{MList}: \\catname{Set} \\to\n    \\catname{Mon}$ is faithful, but not full.\n\n    \\item The product functor $- \\times - : \\mc{C} \\times \\mc{C} \\to \\mc{C}$\n    is generally neither faithful nor full.\n\n    \\item There is an equivalence between $\\catname{FDVect}_k$, the category\n    of finite dimensional vector spaces over the field $k$, and\n    $\\catname{Mat}_k$, the category of matrices with entries in $k$. These\n    categories are very far from being isomorphic.\n\\end{itemize}\n~\\\\\n\\textbf{Preservation and Reflection}\\\\\nLet $P$ be a property of arrows. We say a functor $F: \\mc{C} \\to \\mc{D}$\n\\emph{preserves} $P$ if whenever $f$ satisfies $P$, so does $F(f)$. We say $F$\n\\emph{reflects} $P$ if whenever $F(f)$ satisfies $P$, so does $f$.\n\n\\setcounter{Exercise}{48}\n\\begin{Exercise}\nProve the following:\n\\begin{enumerate}\n    \\item All functors preserve isomorphisms, split monics, and split epics.\n    \\item Faithful functors reflect monics and epics.\n    \\item Full and faithful functors reflect isomorphisms.\n    \\item Equivalences preserve monics and epics.\n    \\item The forgetful functor $U: \\catname{Mon} \\to \\catname{Set}$ preserves\n    products.\n\\end{enumerate}\n\\end{Exercise}\n\\begin{solution}\n\\leavevmode\n\\begin{enumerate}\n    \\item (Isomorphism) Suppose $f: A \\to B$ in $\\mc{C}$ is an isomorphism.\n    Then, $F(f) \\circ F(f^{-1}) = F(f \\circ f^{-1}) = F(1_B) = 1_{FB}$. And,\n    $F(f^{-1}) \\circ F(f) = F(f^{-1} \\circ f) = F(1_A) = 1_{FA}$. Thus, $F(f)$\n    is an isomorphism, and we are done.\\\\\n    (Split monic) Suppose $f: A \\to B$ in $\\mc{C}$ is split monic. Then, $f$\n    has a left inverse $g: B \\to A$ such that $g \\circ f = 1_A$. 
Therefore,
    $F(g) \circ F(f) = F(g \circ f) = F(1_A) = 1_{FA}$, which implies $F(f)$
    has a left inverse, from which we conclude $F(f)$ is split monic, and we
    are done.\\
    (Split epic) Suppose $f: A \to B$ in $\mc{C}$ is split epic. Then, $f$
    has a right inverse $g: B \to A$ such that $f \circ g = 1_B$. Therefore,
    $F(f) \circ F(g) = F(f \circ g) = F(1_B) = 1_{FB}$, which implies $F(f)$
    has a right inverse, from which we conclude $F(f)$ is split epic, and we
    are done.

    \item (Monic) Suppose $F: \mc{C} \to \mc{D}$ is a faithful functor, and let
    $f: A \to B$ in $\mc{C}$ such that $F(f)$ is monic. Assume $g, h: C \to A$
    such that $f \circ g = f \circ h$. Then, $F(f \circ g) = F(f \circ h)$,
    which implies $F(f) \circ F(g) = F(f) \circ F(h)$, which implies $F(g) =
    F(h)$, since $F(f)$ is left-cancellative, and since $F$ is faithful, we
    have $g = h$, showing $f$ is monic. Hence, we conclude faithful functors
    reflect monics.\\
    (Epic) Suppose $F: \mc{C} \to \mc{D}$ is a faithful functor, and let
    $f: A \to B$ in $\mc{C}$ such that $F(f)$ is epic. Assume $g, h: B \to C$
    such that $g \circ f = h \circ f$. Then, $F(g \circ f) = F(h \circ f)$,
    which implies $F(g) \circ F(f) = F(h) \circ F(f)$, which implies $F(g) =
    F(h)$, since $F(f)$ is right-cancellative, and since $F$ is faithful, we
    have $g = h$, showing $f$ is epic. Hence, we conclude faithful functors
    reflect epics.

    \item Suppose $F: \mc{C} \to \mc{D}$ is a full and faithful functor, and
    let $f: A \to B$ in $\mc{C}$ such that $F(f)$ is an isomorphism. Since $F$
    is full, there exists some $g: B \to A$ such that $F(g) = F(f)^{-1}$.
    Therefore, $F(f \circ g) = F(f) \circ F(g) = F(f) \circ F(f)^{-1} = 1_{FB}
    = F(1_B)$, which implies $f \circ g = 1_B$, since $F$ is faithful.
    Similarly, it is easy to check $F(g \circ f) = F(1_A)$, which implies
    $g \circ f = 1_A$. Hence, $f$ is an isomorphism, and we are done.

    \item (Monic) Suppose $F: \mc{C} \to \mc{D}$ is an equivalence. Then, $F$
    is full, faithful, and essentially surjective. Let $f: A \to B$ in $\mc{C}$
    be monic. Let $g', h': C' \to FA$ such that $F(f) \circ g' = F(f) \circ h'$.
    That is, the following diagram commutes:
    \[
    \begin{tikzcd}
        C' \arrow[r, shift left, "g'"]
           \arrow[r, shift right, swap, "h'"]
           & FA \arrow[r, "F(f)"]
                & FB
    \end{tikzcd}
    \]
    We claim $F(f)$ is monic, \emph{i.e.} $F(f)$ is left-cancellative. To that
    end, since $F$ is essentially surjective, there exists some $C_0$ in
    $\mc{C}$ such that $F(C_0) \cong C'$. In other words, there exists an
    isomorphism $k: F(C_0) \xrightarrow{\sim} C'$. In addition, since $F$ is
    full, there exist $g, h: C_0 \to A$ in $\mc{C}$ such that $F(g) = g' \circ
    k$ and $F(h) = h' \circ k$. That is, the following diagram commutes:
    \[
    \begin{tikzcd}
    F(C_0) \arrow[r, shift left, "F(g)"]
    \arrow[r, shift right, swap, "F(h)"]
    & FA \arrow[r, "F(f)"]
    & FB
    \end{tikzcd}
    \]
    Therefore, $F(f) \circ F(g) = F(f) \circ F(h)$, which implies $F(f \circ g)
    = F(f \circ h)$, which implies $f \circ g = f \circ h$ (since $F$ is
    faithful), which implies $g = h$, since $f$ is monic.
Thus, $g' = g' \\circ\n    1_{C'} = g' \\circ (k \\circ k^{-1}) = (g' \\circ k) \\circ k^{-1} = F(g) \\circ\n    k^{-1} = F(h) \\circ k^{-1} = (h' \\circ k) \\circ k^{-1} = h' \\circ (k \\circ\n    k^{-1}) = h' \\circ 1_{C'} = h'$, which shows $F(f)$ is indeed\n    left-cancellative, and we are done.\\\\\n    (Epic) Suppose $F: \\mc{C} \\to \\mc{D}$ is an equivalence. Then, $F$ is full,\n    faithful, and essentially surjective. Let $f: A \\to B$ in $\\mc{C}$ be epic.\n    We claim $F(f)$ is epic, \\emph{i.e.} $F(f)$ is right-cancellative. To that\n    end, let $g', h': FB \\to C'$ such that $g' \\circ F(f) = h' \\circ F(f)$,\n    \\emph{i.e} the following diagram commutes:\n    \\[\n    \\begin{tikzcd}\n        FA \\arrow[r, \"F(f)\"]\n           & FB \\arrow[r, shift left, \"g'\"]\n                \\arrow[r, shift right, swap, \"h'\"]\n                & C'\n    \\end{tikzcd}\n    \\]\n    Since $F$ is essentially surjective, there exists some $C_0$ in $\\mc{C}$\n    such that $C' \\cong F(C_0)$. In other words, there exists an isomorphism\n    $k: C' \\to F(C_0)$. And, since $F$ is full, there exist $g, h: B \\to C_0$\n    such that $F(g) = k \\circ g'$ and $F(h) = k \\circ h'$. That is, the\n    following diagram commutes:\n    \\[\n    \\begin{tikzcd}\n    FA \\arrow[r, \"F(f)\"]\n       & FB \\arrow[r, shift left, \"F(g)\"]\n            \\arrow[r, shift right, swap, \"F(h)\"]\n            & F(C_0)\n    \\end{tikzcd}\n    \\]\n    Therefore, $F(g) \\circ F(f) = F(h) \\circ F(f)$, which implies $F(g \\circ f)\n    = F(h \\circ f)$, which implies $g \\circ f = h \\circ f$ (since $F$ is\n    faithful), which implies $g = h$, since $f$ is epic. Thus, $g' = 1_{C'}\n    \\circ g' = (k^{-1} \\circ k) \\circ g' = k^{-1} \\circ (k \\circ g') = k^{-1}\n    \\circ F(g) = k^{-1} \\circ F(h) = k^{-1} \\circ (k \\circ h') = (k^{-1} \\circ\n    k) \\circ h' = 1_{C'} \\circ h' = h'$, which shows $F(f)$ is indeed\n    right-cancellative, and we are done.\n\n    \\item % TODO: Exercise 49 - Part (5)\n\\end{enumerate}\n\\end{solution}\n\n\\begin{Exercise}\nShow the following:\n\\begin{enumerate}\n    \\item Functors do not, in general, reflect monics or epics.\n    \\item Faithful functors do not, in general, reflect isomorphisms.\n    \\item Full and faithful functors do not, in general, preserve monics or\n    epics.\n\\end{enumerate}\n\\end{Exercise}\n\\begin{solution}\n    % TODO: Exercise 50 - Parts (1), (2), and (3)\n\\end{solution}\n\n\\subsection*{Exercises}\n\\leavevmode\n\\begin{enumerate}\n    \\item Consider the category $\\catname{FDVect}_{\\R}$ of finite dimensional\n    vector spaces over $\\R$, and $\\catname{Mat}_{\\R}$ of matrices over $\\R$.\n    Concretely, $\\catname{Mat}_{\\R}$ is defined as follows:\n    \\begin{center}\n        $Ob(\\catname{Mat}_{\\R}) := \\N$,\\\\\n        $\\catname{Mat}_{\\R}(n, m) := \\{ M \\mid M \\text{ is an } n \\times m\n        \\text{ matrix with entries in } \\R \\}$.\n    \\end{center}\n    Thus, objects are natural numbers, and arrows $n \\to m$ are $n \\times m$\n    real matrices. 
Composition is matrix multiplication, and the identity on\n    $n$ is the $n \\times n$ identity matrix.\\\\\n    Now, let $F: \\catname{Mat}_{\\R} \\to \\catname{FDVect}_{\\R}$ be the functor\n    taking each $n$ to the vector space $\\R^n$ and each $M: n \\to m$ to the\n    linear function\n    \\begin{center}\n        $FM: \\R^n \\to \\R^m := (x_1, \\ldots, x_n) \\mapsto [x_1, \\ldots, x_n] M$\n    \\end{center}\n    with the $1 \\times m$ matrix $[x_1, \\ldots, x_n] M$ considered as a vector\n    in $\\R^m$. Show that $F$ is full, faithful, and essentially surjective, and\n    hence, $\\catname{FDVect}_{\\R}$ and $\\catname{Mat}_{\\R}$ are equivalent\n    categories. Are they isomorphic?\n\n    \\item Let $\\mc{C}$ be a category with binary products such that, for each\n    pair of objects $A, B$,\n    \\begin{center}\n        $\\mc{C}(A, B) \\neq \\emptyset \\qq (*)$\n    \\end{center}\n    Show that the product functor $F: \\mc{C} \\times \\mc{C} \\to \\mc{C}$ is\n    faithful.\\\\\n    Would $F$ still be faithful in the absence of condition $(*)$?\n\\end{enumerate}\n\\begin{solution}\n\\leavevmode\n\\begin{enumerate}\n    \\item %TODO: Exercises subsection 1.3.5 - (1)\n    \\item %TODO: Exercises subsection 1.3.5 - (2)\n\\end{enumerate}\n\\end{solution}\n\n\\section{Natural Transformations}\nMorphisms between functors are \\emph{natural transformations}, just as functors\nare morphisms between categories.\n\n\\subsection*{Basics}\nLet us define natural transformations.\n\nLet $F, G: \\mc{C} \\to \\mc{D}$ be functors. A \\emph{\\textbf{natural\ntransformation}}\n\\begin{center}\n    $t: F \\to G$\n\\end{center}\nis a family of morphisms in $\\mc{D}$ indexed by objects $A$ of $\\mc{C}$,\n\\begin{center}\n    $\\{ t_A : FA \\to GA \\}_{A \\in Ob(\\mc{C})}$\n\\end{center}\nsuch that for all $f: A \\to B$, the following diagram commutes:\n\\[\n\\begin{tikzcd}\n    FA \\arrow[r, \"Ff\"]\n       \\arrow[d, swap, \"t_A\"]\n       & FB \\arrow[d, \"t_B\"]\\\\\n    GA \\arrow[r, swap, \"Gf\"]\n       & GB\n\\end{tikzcd}\n\\]\nThis condition is known as \\emph{naturality}.\\\\\nIf each $t_A$ is an isomorphism, we say $t$ is a \\emph{\\textbf{natural\nisomorphism}}:\n\\begin{center}\n    $t: F \\xrightarrow{\\cong} G$.\n\\end{center}\n~\\\\\n\\textbf{Examples of natural transformations:}\n\\begin{itemize}\n    \\item Let $\\opn{Id}$ be the identity functor on $\\catname{Set}$, and\n    $\\times \\circ \\langle \\opn{Id}, \\opn{Id} \\rangle$ be the functor taking\n    each set $X$ to $X \\times X$ and each function $f$ to $f \\times f$. Then,\n    there is a natural transformation $\\Delta: \\opn{Id} \\to \\times \\circ\n    \\langle \\opn{Id}, \\opn{Id} \\rangle$ given by:\n    \\begin{center}\n        $\\Delta_X: X \\to X \\times X := x \\mapsto (x, x)$.\n    \\end{center}\n    Naturality amounts to asserting that, for any function $f: X \\to Y$, the\n    following diagram commutes:\n    \\[\n    \\begin{tikzcd}\n        X \\arrow[r, \"f\"]\n          \\arrow[d, swap, \"\\Delta_X\"]\n          & Y \\arrow[d, \"\\Delta_Y\"]\\\\\n        X \\times X \\arrow[r, swap, \"f \\times f\"]\n                   & Y \\times Y\n    \\end{tikzcd}\n    \\]\n    We call $\\Delta$ the \\emph{diagonal} transformation on $\\catname{Set}$. 
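    Concretely, the square above can be verified by a one-line pointwise
    computation (our own check, anticipating Exercise 52): for any $x \in X$,
    \begin{center}
        $((f \times f) \circ \Delta_X)(x) = (f \times f)(x, x) =
        (f(x), f(x)) = (\Delta_Y \circ f)(x)$.
    \end{center}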
    In fact, $\Delta$ is the \emph{only} natural transformation between these
    functors.

    \item The diagonal transformation can be defined for any category $\mc{C}$
    with binary products by setting, for each object $A$ in $\mc{C}$,
    \begin{center}
        $\Delta_A: A \to A \times A := \langle 1_A, 1_A \rangle$.
    \end{center}
    Projections also yield natural transformations. For example, the arrows
    \begin{center}
        $(\pi_1)_{(A, B)}: A \times B \to A$
    \end{center}
    specify a natural transformation $\pi_1: - \times - \to \pi_1$. Note that
    $- \times -$, $\pi_1: \mc{C} \times \mc{C} \to \mc{C}$ are the functors for
    product and first projection, respectively.

    \item Let $\mc{C}$ be a category with terminal object $T$, and let $K_T:
    \mc{C} \to \mc{C}$ be the functor mapping all objects to $T$ and all
    arrows to $1_T$. Then, the canonical arrows
    \begin{center}
        $\tau_A : A \to T$
    \end{center}
    specify a natural transformation $\tau: \opn{Id} \to K_T$ (where $\opn{Id}$
    is the identity functor on $\mc{C}$).

    \item Recall the functor $\List: \catname{Set} \to \catname{Set}$ that
    takes a set $X$ to the set of finite lists with elements in $X$. We can
    define the following natural transformations,
    \begin{align*}
        \texttt{reverse} &: \List \to \List,\\
        \texttt{unit} &: \opn{Id} \to \List,\\
        \texttt{flatten} &: \List \circ \List \to \List,
    \end{align*}
    by setting, for each set $X$,
    \begin{align*}
        \texttt{reverse}_X &: \List(X) \to \List(X) :=
        [x_1, \ldots, x_n] \mapsto [x_n, \ldots, x_1],\\
        \texttt{unit}_X &: X \to \List(X) := x \mapsto [x],\\
        \texttt{flatten}_X &: \List(\List(X)) \to \List(X)\\
        &:= [[x_1^1, \ldots, x_{n_1}^1], \ldots, [x_1^k, \ldots, x_{n_k}^k]]
        \mapsto [x_1^1, \ldots, x_{n_1}^1, \ldots, x_1^k, \ldots, x_{n_k}^k].
    \end{align*}

    \item Consider the functor $P := \times \circ \langle U, U \rangle$ with
    $U: \catname{Mon} \to \catname{Set}$ being the forgetful functor. That is,
    \begin{center}
        $P: \catname{Mon} \to \catname{Set} := (M, \cdot, 1) \mapsto M \times M,
        \, f \mapsto f \times f$.
    \end{center}
    Then, the monoid operation yields a natural transformation $t: P \to U$
    defined by:
    \begin{center}
        $t_{(M, \cdot, 1)}: M \times M \to M := (m, m') \mapsto m \cdot m'$.
    \end{center}
    Naturality corresponds to asserting that, for any $f: (M, \cdot, 1) \to
    (N, \cdot, 1)$, the following diagram commutes:
    \[
    \begin{tikzcd}
        M \times M \arrow[r, "f \times f"]
                   \arrow[d, swap, "t_M"]
                   & N \times N \arrow[d, "t_N"] \\
        M \arrow[r, swap, "f"]
          & N
    \end{tikzcd}
    \]
    That is, for any $m_1, m_2 \in M$, $f(m_1) \cdot f(m_2) = f(m_1 \cdot m_2)$.

    \item If $V$ is a finite dimensional vector space, then $V$ is isomorphic
    to both its first dual $V^*$ and to its second dual $V^{**}$.\\
    However, while it is naturally isomorphic to its second dual, there is no
    natural isomorphism to the first dual.
This was the original example that\n    motivated Samuel Eilenberg and Saunders Mac Lane to define the concept of\n    natural transformation; in this case, naturality captures \\emph{basis\n    independence}.\n\\end{itemize}\n\n\\setcounter{Exercise}{51}\n\\begin{Exercise}\n    Verify naturality of diagonal transformations, projections, and terminals,\n    for a category $\\mc{C}$ with finite products.\n\\end{Exercise}\n\\begin{solution}\n    Suppose $\\mc{C}$ is a category with finite products.\n\n    (Naturality of diagonal transformation) We show that for all objects $A, B$\n    in $\\mc{C}$ and all arrows $f: A \\to B$, the following diagram commutes:\n    \\[\n    \\begin{tikzcd}\n        A \\arrow[r, \"f\"]\n          \\arrow[d, swap, \"\\Delta_A\"]\n        & B \\arrow[d, \"\\Delta_B\"]\\\\\n        A \\times A \\arrow[r, swap, \"f \\times f\"]\n                   & B \\times B\n    \\end{tikzcd}\n    \\]\n    Note that $\\Delta_A : A \\to A \\times A := \\langle 1_A, 1_A \\rangle$. Now,\n    $\\Delta_B \\circ f = \\langle 1_B, 1_B \\rangle \\circ f = \\langle 1_B \\circ f,\n    1_B \\circ f \\rangle = \\langle f, f \\rangle$. And, $(f \\times f) \\circ\n    \\Delta_A = (f \\times f) \\circ \\langle 1_A, 1_A \\rangle = \\langle f \\circ\n    1_A, f \\circ 1_A \\rangle = \\langle f, f \\rangle$, which shows $\\Delta_B\n    \\circ f = (f \\times f) \\circ \\Delta_A$, thus proving the above diagram\n    commutes.\n\n    (Naturality of projection) We claim that for all $f: A \\to B$ and $g: C\n    \\to D$ in $\\mc{C}$, the following diagram commutes.\n    \\[\n    \\begin{tikzcd}\n        A \\times C \\arrow[r, \"f \\times g\"]\n                   \\arrow[d, swap, \"\\pi_1\"]\n                   & B \\times D \\arrow[d, \"\\pi_1\"]\\\\\n        A \\arrow[r, swap, \"f\"]\n          & B\n    \\end{tikzcd}\n    \\]\n    Indeed, $\\pi_1 \\circ (f \\times g) = \\pi_1 \\circ \\langle f \\circ \\pi_1,\n    g \\circ \\pi_2 \\rangle = f \\circ \\pi_1$, thus establishing our claim.\n\n    (Naturality of terminal projection) Let $T$ be a terminal object in\n    $\\mc{C}$. The fact that the canonical arrows $\\tau_A: A \\to T$ specify a\n    natural transformation is demonstrated by showing the following diagram\n    commutes:\n    \\[\n    \\begin{tikzcd}\n        A \\arrow[r, \"f\"]\n          \\arrow[d, swap, \"\\tau_A\"]\n          & B \\arrow[d, \"\\tau_B\"]\\\\\n        T \\arrow[r, swap, \"1_T\"]\n          & T\n    \\end{tikzcd}\n    \\]\n    Indeed, note $\\tau_B \\circ f: A \\to T$ and $1_T \\circ \\tau_A: A \\to T$, and\n    since $T$ is a terminal object, we must have $\\tau_B \\circ f = 1_T \\circ\n    \\tau_A$, thereby proving the above diagram commutes.\n\\end{solution}\n\n\\begin{Exercise}\n    Prove that the diagonal is the only natural transformation $\\opn{Id} \\to\n    \\times \\circ \\langle \\opn{Id}, \\opn{Id} \\rangle$ on $\\catname{Set}$.\n    Similarly, prove that the first projection is the only natural\n    transformation $\\times \\to \\pi_1$ on $\\catname{Set}$.\n\\end{Exercise}\n\\begin{solution}\n    % TODO: Exercise (53)\n\\end{solution}\n~\\\\\n\n\\subsection*{Further Examples}\nNatural isomorphisms for products.\n\nLet $\\mc{C}$ be a category with finite products, \\emph{i.e.} binary products\nand a terminal object $\\1$. 
Then, we have the following canonical natural\nisomorphisms.\n\\begin{align*}\n    a_{A,B,C} &: A \\times (B \\times C) \\xrightarrow{\\sim}\n        (A \\times B) \\times C,\\\\\n    s_{A,B} &: A \\times B \\xrightarrow{\\sim} B \\times A\\\\\n    l_A &: \\1 \\times A \\xrightarrow{\\sim} A\\\\\n    r_A &: A \\times \\1 \\xrightarrow{\\sim} A.\n\\end{align*}\nThe first two isomorphisms assert the product is \\emph{associative} and\n\\emph{symmetric}, and the last two that $\\1$ is its unit. These conditions\nform part of the definition of \\emph{symmetric monoidal categories}.\n\nThe above natural isomorphisms are defined explicitly by:\n\\begin{align*}\n    a_{A,B,C} &:= \\langle \\langle \\pi_1, \\pi_1 \\circ \\pi_2 \\rangle,\n        \\pi_2 \\circ \\pi_2 \\rangle,\\\\\n    s_{A,B} &:= \\langle \\pi_2, \\pi_1 \\rangle,\\\\\n    l_A &:= \\pi_2\\\\\n    r_A &:= \\pi_1.\n\\end{align*}\nSince natural isomorphisms are a \\emph{self dual} notion, similar natural\nisomorphisms can be defined if $\\mc{C}$ has binary coproducts and an initial\nobject.\n\n\\begin{Exercise}\n    Verify that the above families of arrows are natural isomorphisms.\n\\end{Exercise}\n\n\\end{document}\n", "meta": {"hexsha": "22b36accb9d75025edf086a3a34af5f87af5bf5a", "size": 100065, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "vishallama/categories-and-categorical-logic", "max_stars_repo_head_hexsha": "2524d6e0afd92336603cca78084869d64242d8c2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.tex", "max_issues_repo_name": "vishallama/categories-and-categorical-logic", "max_issues_repo_head_hexsha": "2524d6e0afd92336603cca78084869d64242d8c2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "vishallama/categories-and-categorical-logic", "max_forks_repo_head_hexsha": "2524d6e0afd92336603cca78084869d64242d8c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.1009255178, "max_line_length": 82, "alphanum_fraction": 0.6030180383, "num_tokens": 34413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5530672309263277}}
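As a worked instance of the closing exercise, here is a minimal naturality check for the symmetry family $s_{A,B}$ (the other families are handled the same way), using only the product equations $\pi_i \circ \langle h_1, h_2 \rangle = h_i$ and $\langle h_1, h_2 \rangle \circ k = \langle h_1 \circ k, h_2 \circ k \rangle$. For arrows $f: A \to B$ and $g: C \to D$:
\begin{align*}
    s_{B,D} \circ (f \times g)
    &= \langle \pi_2, \pi_1 \rangle \circ \langle f \circ \pi_1, g \circ \pi_2 \rangle
     = \langle g \circ \pi_2, f \circ \pi_1 \rangle,\\
    (g \times f) \circ s_{A,C}
    &= \langle g \circ \pi_1, f \circ \pi_2 \rangle \circ \langle \pi_2, \pi_1 \rangle
     = \langle g \circ \pi_2, f \circ \pi_1 \rangle,
\end{align*}
so the naturality square for $s$ commutes. That each component is an isomorphism follows from $s_{B,A} \circ s_{A,B} = \langle \pi_1, \pi_2 \rangle = 1_{A \times B}$.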
{"text": "\\section{DT LTI -- state space}\n\n\\subsection{}\n\n\\begin{frame}\n\\frametitleTC{The DT LTI class}\n\\framesubtitleTC{Preliminaries}\n\\begin{itemize}[<+-| alert@+>]\n\\item For the LTI class, that we see here almost exclusively in the DT context, there exists a very strong theory.\n\\item The same is not true for more general (e.g., nonlinear) system classes.\n\\item Therefore, control problems of the type addressed here, are cast in the LTI framework wherever possible.\n\\item \\vfill This motivates the importance of the DT LTI class, which we now\\\\\n      come to examine.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{State-space representation of a DT LTI system}\n\\framesubtitleTC{Definition --- we stick to the SISO case for simplicity}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item The \\TC{State-Space (SS)} representation of a DT LTI dynamic system is the quadruplet $(A,b,c,d)$ in\n      \\begin{displaymath}\n       \\left\\{\n        \\begin{array}{rlll}\n         x(k) &= A x(k-1) + b u(k-1) & & \\text{\\TC{(state equation)}}  \\\\\n         y(k) &= c x(k)   + d u(k)   & & \\text{\\TC{(output equation)}}\n        \\end{array}\n       \\right.\n      \\end{displaymath}    \n\\item $u(k)$ and $y(k)$ are real scalars;\n\\item $x(k) \\in \\Re^n$, where $n$ is the system's order;\n\\item the real \\TC{dynamic matrix} $A$ is $n \\times n$;\n\\item $b$ is a real column vector ($n \\times 1$);\n\\item $c$ is a real row vector ($1 \\times n$);\n\\item $d$ is a real scalar.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Motion}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item Given $x(0)$ and $u(k)$, $k \\geq 0$, we get\n      {\\small\n      \\begin{displaymath}\n       \\begin{array}{rclcl}\n        x(1) &=& Ax(0)+bu(0) \\\\\n        x(2) &=& Ax(1)+bu(1) &=& A^2x(0)+Abu(0)+bu(1) \\\\\n        x(3) &=& Ax(2)+bu(2) &=& A^3x(0)+A^2bu(0)+Abu(1)+bu(2) \\\\\n             & & \\cdots\\\\\n       \\end{array}\n      \\end{displaymath}\n      }\n\\item This readily generalises to the \\TC{Lagrange state formula}\n      \\begin{displaymath}\n       x(k) = A^k x(0) +\\sum\\limits_{h=0}^{k-1} A^{k-h-1}bu(h).\n      \\end{displaymath}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Free and induced state motion}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item In LTI systems, the state motion $x(k)$ is the \\TC{sum} of a \\TC{free motion} $x_F(k)$ and an\n      \\TC{induced motion} $x_I(k)$, i.e.,\n      \\begin{displaymath}\n       x(k) = x_F(k) + x_I(k),\n      \\end{displaymath} \\myPause\n      where\n      \\begin{displaymath}\n       x_F(k) = A^k x(0), \\quad\n       x_I(k) = \\sum\\limits_{h=0}^{k-1} A^{k-h-1}bu(h).\n      \\end{displaymath} \\myPause\n\\item $x_F(k)$ depends linearly on $x(0)$ and not on $u$,\n\\item while $x_I(k)$ depends linearly only on $u(k)$ and not on $x(0)$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Free and induced output motion}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item In LTI systems, also the state motion $y(k)$ is the \\TC{sum} of a \\TC{free motion} $y_F(k)$ and an\n      \\TC{induced motion} $y_I(k)$, i.e.,\n      \\begin{displaymath}\n       y(k) = y_F(k) + y_I(k),\n      \\end{displaymath} \\myPause\n      where\n      \\begin{displaymath}\n       y_F(k) = cA^k x(0), \\quad\n       y_I(k) = c\\sum\\limits_{h=0}^{k-1} A^{k-h-1}bu(h) +du(k).\n      \\end{displaymath} \\myPause\n\\item again, 
$y_F(k)$ depends linearly on $x(0)$ and not on $u$,\n\\item while $y_I(k)$ depends linearly on the input $u$ and not on $x(0)$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Equilibrium}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item For an LTI system, equilibrium states are found by solving\n      \\begin{displaymath}\n       \\overline{x} = A\\overline{x}+b\\overline{u}.\n      \\end{displaymath}\n\\item Thus, if $A$ has no unity eigenvalues, there exists exactly one equilibrium\n      \\begin{displaymath}\n       \\overline{x} = (I-A)^{-1}b\\overline{u},\n      \\end{displaymath}\n\\item while in the opposite case, either there is no equilibrium, or there\\\\\n      are infinitely many.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Equilibrium}\n\\framesubtitleTC{Peculiarities of L(TI) systems}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item Contrary to nonlinear systems, they cannot have a finite number of equilibria other than zero or one.\n\\item If some $\\overline{u}$ produces zero, one or infinitely many equilibria, the same is true for any other $\\overline{u}$,\n      contrary again to the nonlinear case.\n\\item For each equilibrium state, there surely exists exactly one equilibrium output\n      \\begin{displaymath}\n       \\overline{y}=c\\overline{x}+d\\overline{u}.\n      \\end{displaymath}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Stability of an equilibrium}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item Let $(\\overline{x},\\overline{u})$ be an equilibrium for an LTI system.\n\\item The Lagrange state formula leads one to write\n      \\begin{displaymath}\n       \\overline{x} = A^k \\overline{x} +\\sum\\limits_{h=0}^{k-1} A^{k-h-1}b\\overline{u}.\n      \\end{displaymath}\n\\item Consider now the perturbed motion $x_{\\Delta}(k)$ produced by $\\overline{u}$ and\n      $x(0)=\\overline{x}+\\Delta\\overline{x}$; the same formula yields\n      \\begin{displaymath}\n       x_{\\Delta}(k) = A^k (\\overline{x}+\\Delta\\overline{x}) +\\sum\\limits_{h=0}^{k-1} A^{k-h-1}b\\overline{u}.\n      \\end{displaymath}\n\\item Subtracting, therefore,\n      \\begin{displaymath}\n       x_{\\Delta}(k)-\\overline{x} = A^k\\Delta\\overline{x}\n      \\end{displaymath}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Stability of an equilibrium}\n\\framesubtitleTC{and in the L(TI) case, of a system}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item The way $x_{\\Delta}(k)$ moves with respect to $\\overline{x}$ does not depend on $\\overline{x}$.\n\\item That is, contrary to the nonlinear case, there cannot be equilibria with different stability characteristics.\n\\item In the L(TI) class, stability is a property of the \\emph{system}, not of the individual equilibria.\n\\item Moreover, \n      \\begin{itemize}[<+-| alert@+>]\n      \\item for $k\\rightarrow\\infty$, $||x_{\\Delta}(k)-\\overline{x}|| \\rightarrow 0 \\, \\forall x(0)$  iff $A^k$\n            converges to the zero matrix,\n      \\item the same norm generally diverges if at least one element of $A^k$ does,\n      \\item and if $A^k$ neither converges to zero nor diverges, the same happens\\\\\n            to $||x_{\\Delta}(k)-\\overline{x}||$.\n      \\end{itemize}\n\\item Thus, in the LTI case, the stability of a system only depends on its\\\\\n      dynamic matrix $A$.\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[label={pag:stab-eivals}]\n\\frametitleTC{Stability and eigenvalues of 
$A$}\n\\framesubtitleTC{}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item The following can be proven.\n      \\begin{itemize}[<+-| alert@+>]\n      \\item An LTI system is asymptotically stable iff all the eigenvalues of $A$ have magnitude less  than one\n            (or, equivalently, lie in the open \\TC{unit circle} of the complex plane).\n      \\item The same system is unstable if (but not only if) at least one eigenvalue of $A$ has magnitude greater\n            than one.\n      \\item If all the eigenvalues of $A$ have magnitude less than or equal to one, and there exists at least one\n            with unity magnitude, the system can be either unstable\\\\\n            or stable, but not asymptotically.\n      \\end{itemize}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}\n\\frametitleTC{Properties of asymptotically stable systems}\n\\framesubtitleTC{also in a view to control}\n\\myPause\n\\begin{itemize}[<+-| alert@+>]\n\\item An asymptotically stable system has one and only one equilibrium for each constant input.\n\\item The state and output free motions of an asymptotically stable system converge to zero (norm)\n      for $k\\rightarrow\\infty$.\n\\item As a consequence, asymptotically stable systems ``forget their initial condition''...\n\\item ...which is a definitely desired property for a \\TC{controlled} system.\n\\end{itemize}\n\\end{frame}\n\n", "meta": {"hexsha": "2532070a82dfd1c26bc0d43a80a040fd9e6bb28a", "size": 7980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/Unit-02/sections/03-DTLTI-SS.tex", "max_stars_repo_name": "albertoleva/PID4CSE", "max_stars_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-19T16:38:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-19T16:38:10.000Z", "max_issues_repo_path": "slides/Unit-02/sections/03-DTLTI-SS.tex", "max_issues_repo_name": "albertoleva/PID4CSE", "max_issues_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/Unit-02/sections/03-DTLTI-SS.tex", "max_forks_repo_name": "albertoleva/PID4CSE", "max_forks_repo_head_hexsha": "66ec14c204e16c97a5792c2e240b2daed4b39e83", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.0, "max_line_length": 118, "alphanum_fraction": 0.654887218, "num_tokens": 2615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217431943271999, "lm_q2_score": 0.766293653760418, "lm_q1q2_score": 0.5530672294577055}}
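The Lagrange state formula and the eigenvalue stability test above are easy to sanity-check numerically. The following numpy sketch is not part of the original slides, and the system matrices are invented purely for illustration; it splits the motion into its free and induced parts, tests asymptotic stability from the eigenvalues of $A$, and computes the unique equilibrium when $A$ has no unity eigenvalues.

\begin{lstlisting}[language=Python]
import numpy as np

# A toy asymptotically stable DT LTI system (illustrative values only)
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
b = np.array([1.0, 0.5])
c = np.array([1.0, 0.0])
d = 0.0

def motion(x0, u):
    """Lagrange state formula: x(k) = A^k x(0) + sum_h A^(k-h-1) b u(h)."""
    k = len(u)
    x_free = np.linalg.matrix_power(A, k) @ x0
    x_ind = sum(np.linalg.matrix_power(A, k - h - 1) @ (b * u[h]) for h in range(k))
    return x_free, x_ind

x_free, x_ind = motion(np.array([1.0, -1.0]), np.ones(20))
print("x(20) =", x_free + x_ind)

# Asymptotic stability iff all eigenvalues of A lie in the open unit circle
print("asymptotically stable:", np.all(np.abs(np.linalg.eigvals(A)) < 1))

# No unity eigenvalues -> exactly one equilibrium for each constant input
u_bar = 1.0
x_bar = np.linalg.solve(np.eye(2) - A, b * u_bar)
print("x_bar =", x_bar, " y_bar =", c @ x_bar + d * u_bar)
\end{lstlisting}

For these matrices, feeding a long constant input into motion() reproduces x_bar to numerical precision, which also illustrates that an asymptotically stable system "forgets its initial condition".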
{"text": "\\section{XAFS Fourier transforms}\n\n\\begin{frame} \\frametitle{XAFS Fourier Transforms}\n\n\\begin{cenpage}{130mm}\nFourier Transforms are an important part of XAFS Analysis:\n\n\\begin{postitbox}{75mm}  $\\displaystyle{ \\chi(R) = {\\rm FT}[\\chi(k)] =\n  \\frac{1}{\\sqrt{2\\pi}}\n  \\int_{-\\infty}^{\\infty} { dk \\, e^{i2kR} \\, k^w \\, \\chi(k) \\, \\Omega(k) }} $\n\\end{postitbox}\n\n\\begin{itemize}\n\\item $\\Omega(k)$ is the Window Function\n\\item $w$ is the $k$-weighting\n\\end{itemize}\n\n\\pause\n\\vmm\n We really use a discrete version and Fast Fourier Transform\n\n\\[\n\\chi(R_m) = \\frac{i \\delta k}{\\sqrt{\\pi N_{\\rm fft}}}\n\\sum_{n=1}^{N_{\\rm fft}}\ne^{2\\pi i n m/N_{\\rm fft}} \\,  k_n^w \\, \\chi(k_n)\\, \\Omega(k_n)\n\\]\n\n\\pause\n\\begin{itemize}\n\\item {\\chik} is put on a uniform $k$-grid with spacing of  $\\delta k=0.05\n  {\\rm\\,  \\AA}^{-1}$.\n\\item {\\chik} is filled with zeros past the real data range.\n\n\\item $N_{\\rm fft} = 2048$: {\\chik} can go to $102.4 {\\rm\\, \\AA}^{-1}$\n  ($\\sim 40\\rm \\, keV$) past the edge.\n\n\\item {\\chir} is on a $R$-grid with spacing $\\sim0.031\\,\\rm\\AA$, and can\n  go to $31.4 \\,\\rm\\AA$.\n\\end{itemize}\n\\end{cenpage}\n\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transforms: Basic Properties}\n\n  \\begin{cenpage}{130mm}\nFourier Transform of a sine wave:\n\n  \\begin{tabular}{ccc}\n    \\begin{minipage}{55mm}\n      \\includegraphics[width=55mm]{figs/reduction/sine_k0}\n    \\end{minipage}\n    & $\\Rightarrow $ &\n    \\begin{minipage}{55mm}\n      \\includegraphics[width=55mm]{figs/reduction/sine_r0}\n    \\end{minipage}\\\\\n    \\begin{minipage}{55mm}\n      \\includegraphics<2->[width=55mm]{figs/reduction/sine_k}\n    \\end{minipage}\n    & {\\onslide+<2-> $\\Rightarrow $}  {\\onslide<1> {\\vspace{40mm}}}  &\n    \\begin{minipage}{55mm}\n      \\includegraphics<2->[width=55mm]{figs/reduction/sine_r}\n    \\end{minipage}\\\\\n  \\end{tabular}\n\\end{cenpage}\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transforms: Basic Properties(2)}\n\n  \\begin{cenpage}{130mm}\nFourier Transforms are complex:\n\n  \\begin{tabular}{ccc}\n    \\begin{minipage}{55mm}\n      \\includegraphics[width=55mm]{figs/reduction/sine_k}\n    \\end{minipage}\n    & $\\Rightarrow $ &\n    \\begin{minipage}{55mm}\n      \\includegraphics[width=55mm]{figs/reduction/sine_r2}\n    \\end{minipage}\\\\\n    \\noalign{\\smallskip}\n    \\multicolumn{3}{l}{\\onslide+<2->Waves with slightly different frequencies can cancel\n      each other out, causing ``beats''  }  \\\\\n    \\noalign{\\smallskip}\n\n    \\begin{minipage}{55mm}\n      \\includegraphics<2->[width=55mm]{figs/reduction/beat_k}\n    \\end{minipage}\n    & {\\onslide+<2-> $\\Rightarrow $}  {\\onslide<1> {\\vspace{40mm}}}  &\n    \\begin{minipage}{55mm}\n      \\includegraphics<2->[width=55mm]{figs/reduction/beat_r}\n    \\end{minipage}\\\\\n\n  \\end{tabular}\n\\end{cenpage}\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transform Window Types}\n\\begin{cenpage}{130mm}\n  \\begin{tabular}{ll}\n    \\begin{minipage}{80mm}\n      \\includegraphics[width=80mm]{figs/reduction/ftwin_zoo}\n    \\end{minipage}\n    &\n    \\begin{minipage}{50mm} \\setlength{\\baselineskip}{10pt}\n      \\hspace{-3mm}{\\Red{Typical Window Functions}}\n      \\vspace{0.5mm}\n\n      A Window Function:\n      \\begin{itemize}\n      \\item goes from 0 to 1 and back to 0\n      \\item $dk$ gives the width of the Window ``sill''\n      \\end{itemize}\n\n      \\vmm\n     
 {\\Red{Most important rule:}}\n\n      \\vmm\n\n      Pick a window type and stick with it.\n\n      \\vmm\n\n      Kaiser-Bessel and Hanning are the most commonly used, and recommended.\n\n      \\vmm\n\n    \\end{minipage}\n  \\end{tabular}\n\n\\vmm Personal Recommendation: \\vmm\n\nKaiser-Bessel Window, $dk=4$.\n\\end{cenpage}\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transform Window Types}\n\\begin{cenpage}{130mm}\n\n  \\begin{columns}\n\n    \\begin{column}{70mm}\n      \\includegraphics[width=60mm]{figs/reduction/ftwin_anat}\n\n      \\vmm\n      {\\onslide+<2-> {\n          \\includegraphics[width=60mm]{figs/reduction/ftwin_sills}\n        }}\n\n    \\end{column}\n\n    \\begin{column}{55mm}\n\n      \\hspace{-3mm}{\\Red{Fourier Window Function}}  \\vspace{0.5mm}\n\n      The meaning of $k_{\\rm min}$, $k_{\\rm max}$, and $dk$.\n\n      \\vspace{10mm}\n\n\n      {\\onslide+<2-> {\n\n      \\hspace{-3mm}{\\Red{Parzen, Hanning, Welch}}\n\n      \\vspace{0.5mm}\n\n      Details of the different Window ``sills''.\n\n      all with $k_{\\rm  min}=2\\rm\\,\\AA^{-1}$  and $dk=3\\rm\\,\\AA^{-1} $.\n\n      \\vspace{3mm}\n    }}\n    \\end{column}\n  \\end{columns}\n\n\\end{cenpage}\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transform Window and real data}\n\n  \\begin{cenpage}{130mm}\nThe effect of $dk$ (for Hanning Window) and different Window Function:\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{60mm}\n      \\includegraphics[width=55mm]{figs/reduction/ftwin_kdk}\n    \\end{minipage}\n    &\n    \\begin{minipage}{60mm}\n      \\includegraphics[width=55mm]{figs/reduction/ftwin_rdk}\n    \\end{minipage}\\\\\n    \\begin{minipage}{60mm}\n      \\includegraphics<2->[width=55mm]{figs/reduction/ftwin_wins}\n    \\end{minipage}\n    &\n    {\\onslide<1> {\\vspace{40mm}}}     {\\onslide+<2->\n      \\begin{minipage}{60mm}\n        Changing  $dk$ and  Window functions\n        gives relatively small changes to {\\chir}, most noticeable in the\n       ``ringing'' of small peaks.\n      \\end{minipage}\n    }\n  \\end{tabular}\n\\end{cenpage}\n\\end{frame}\n\n\\begin{frame} \\frametitle{Fourier Transform Window and $k$-weight }\n\n\\begin{cenpage}{130mm}\n $\\displaystyle{ \\chi(R) =   \\frac{1}{\\sqrt{2\\pi}}\n  \\int_{-\\infty}^{\\infty} { dk \\, e^{i2kR} \\, k^{w} \\, \\chi(k) \\, \\Omega(k) }} $\n\n  \\vmm\n\nChanging $w$, the $k$-weighting has a significant impact:\n\n  \\vmm\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{65mm}\n      \\includegraphics[width=65mm]{figs/reduction/ftwin_kw}\n    \\end{minipage}\n    &\n    \\begin{minipage}{42mm}\n\n      Fe-Fe scattering dominates with higher $w$.\n\n      \\vmm\n\n      \\vmm low $w$ emphasizes low-$k$, and low-Z scatterers.\n\n      \\vmm high $w$ emphasizes high-$k$, and  high-Z scatterers.\n\n      \\vmm\n    \\end{minipage}\n  \\end{tabular}\n\n  \\vmm This is important when trying to determine the $Z$ of a scatterer.\n\n  \\vmm Again, $w=2$ and $w=3$ are most common, and recommended.\n\\end{cenpage}\n\\end{frame}\n\n\n\\begin{frame} \\frametitle{Fourier Transform Window and $k_{\\rm min}$ }\n\n  \\begin{cenpage}{130mm}\n  $k_{\\rm min}$ and\n  $k_{\\rm max}$ are important too.\n\n  \\begin{itemize}\n  \\item $k_{\\rm max}$ should be the end of useful data.\n  \\item With $k$-weight = 2, 3, it is not too important to avoid ``very low $k$''.\n  \\end{itemize}\n\n  \\vmm\n\n  \\begin{tabular}{ll}\n    \\begin{minipage}{65mm}\n      \\includegraphics[width=65mm]{figs/reduction/ftwin_kmin}\n    \\end{minipage}\n    
&\n    \\begin{minipage}{42mm}\n\n      Conventional wisdom:  keep $k_{\\rm min} > 2\\rm\\,\\AA^{-1}$\n\n      \\vmm\n      But: don't make it too big.\n\n      \\vmm\n\n\n    \\end{minipage}\n  \\end{tabular}\n\n\\begin{center}\n  \\begin{postitbox}{85mm}\n    Use Kaiser-Bessel with $dk=4,k_{\\rm min}=2 \\rm\\,\\AA^{-1}$\n\n    \\vmm\n\n    Use $k$-weight=2, or 3. \\hspace{10mm} Don't obsess too much.\n  \\end{postitbox}\n\\end{center}\n\\end{cenpage}\n\\end{frame}\n", "meta": {"hexsha": "d98aa5af1efd420607ad115be07a84b4e86ab36f", "size": 6989, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "slides/fourier.tex", "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/fourier.tex", "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/fourier.tex", "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.3519163763, "max_line_length": 88, "alphanum_fraction": 0.6294176563, "num_tokens": 2510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5530672278085295}}
{"text": "\\title{Model Criticism}\n\n\\subsection{Model Criticism}\n\nWe can never validate whether a model is true. In practice, ``all\nmodels are wrong'' \\citep{box1976science}. However, we can try to\nuncover where the model goes wrong. Model criticism helps justify the\nmodel as an approximation or point to good directions for revising the\nmodel.\n\nModel criticism typically analyzes the posterior predictive\ndistribution,\n\\begin{align*}\n  p(\\mathbf{x}_\\text{new} \\mid \\mathbf{x})\n  &=\n  \\int\n  p(\\mathbf{x}_\\text{new} \\mid \\mathbf{z})\n  p(\\mathbf{z} \\mid \\mathbf{x})\n  \\text{d} \\mathbf{z}.\n\\end{align*}\nThe model's posterior predictive can be used to generate new data\ngiven past observations and can also make predictions on new data\ngiven past observations.\nIt is formed by calculating the likelihood of the new data, averaged\nover every set of latent variables according to the posterior\ndistribution.\n\nA helpful utility function to form the posterior predictive is\n\\texttt{copy}. For example, assume the model defines a likelihood\n\\texttt{x} connected to a prior\n\\texttt{z}. The posterior predictive distribution is\n\\begin{lstlisting}[language=Python]\nx_post = ed.copy(x, {z: qz})\n\\end{lstlisting}\nHere, we copy the likelihood node \\texttt{x} in the graph and replace dependence\non the prior \\texttt{z} with dependence on the inferred posterior \\texttt{qz}.\nWe describe several techniques for model criticism.\n\n\\subsubsection{Point Evaluation}\n\nA point evaluation is a scalar-valued metric for assessing\ntrained models \\citep{winkler1994evaluating,gneiting2007strictly}.\nFor example, we can assess models for classification\nby predicting the label for each observation in the data and comparing\nit to their true labels. Edward implements a variety of metrics, such\nas classification error and mean absolute error.\n\nThe \\texttt{ed.evaluate()} method takes as input a set of metrics to\nevaluate, and a data dictionary. As with inference, the data dictionary binds the\nobserved random variables in the model to realizations: in this case,\nit is the posterior predictive random variable of outputs \\texttt{y\\_post} to\n\\texttt{y\\_train} and a placeholder for inputs \\texttt{x} to\n\\texttt{x\\_train}.\n\\begin{lstlisting}[language=Python]\ned.evaluate('categorical_accuracy', data={y_post: y_train, x: x_train})\ned.evaluate('mean_absolute_error', data={y_post: y_train, x: x_train})\n\\end{lstlisting}\n\nPoint evaluation also applies to unsupervised tasks. For example, we\ncan evaluate the likelihood of observing the data.\n\\begin{lstlisting}[language=Python]\ned.evaluate('log_likelihood', data={x_post: x_train})\n\\end{lstlisting}\n\nIt is common practice to criticize models with data held-out from\ntraining. To do this, we must first perform inference over any local\nlatent variables of the held-out data, fixing the global variables; we\ndemonstrate this below. 
Then we make predictions on the held-out data.\n\n\\begin{lstlisting}[language=Python]\nfrom edward.models import Categorical\n\n# create local posterior factors for test data, assuming test data\n# has N_test many data points\nqz_test = Categorical(logits=tf.Variable(tf.zeros([N_test, K])))\n\n# run local inference conditional on global factors\ninference_test = ed.Inference({z: qz_test}, data={x: x_test, beta: qbeta})\ninference_test.run()\n\n# build posterior predictive on test data\nx_post = ed.copy(x, {z: qz_test, beta: qbeta})\ned.evaluate('log_likelihood', data={x_post: x_test})\n\\end{lstlisting}\n\nPoint evaluations are formally known as scoring rules\nin decision theory. Scoring rules are useful for model comparison, model\nselection, and model averaging.\n\nSee the \\href{/api/criticism}{criticism API} for further details.\nAn example of point evaluation is in the\n\\href{/tutorials/supervised-regression}{supervised learning\n(regression)} tutorial.\n\n\\subsubsection{Posterior predictive checks}\n\nPosterior predictive checks (PPCs)\nanalyze the degree to which data generated from the model deviate from\ndata generated from the true distribution. They can be used either\nnumerically to quantify this degree, or graphically to visualize this\ndegree. PPCs can be thought of as a probabilistic generalization of\npoint evaluation\n\\citep{box1980sampling,rubin1984bayesianly,meng1994posterior,gelman1996posterior}.\n\nThe simplest PPC works by applying a test statistic on new data\ngenerated from the posterior predictive, such as\n$T(\\mathbf{x}_\\text{new}) = \\max(\\mathbf{x}_\\text{new})$.  Applying\n$T(\\mathbf{x}_\\text{new})$ to new data over many data replications\ninduces a distribution. We compare this distribution to the test\nstatistic on the real data $T(\\mathbf{x})$.\n\n\\includegraphics{/images/ppc.png}\n\nIn the figure, $T(\\mathbf{x})$ falls in a low probability region of\nthis reference distribution: if the model were true, the probability\nof observing the test statistic is very low. This indicates that the\nmodel fits the data poorly according to this check; this suggests an\narea of improvement for the model.\n\nMore generally, the test statistic can be a function of the\nmodel's latent variables $T(\\mathbf{x}, \\mathbf{z})$, known as a\ndiscrepancy function.  Examples of discrepancy functions are the\nmetrics used for point evaluation. We can now interpret the\npoint evaluation as a special case of PPCs: it simply calculates\n$T(\\mathbf{x}, \\mathbf{z})$ over the real data and without a reference\ndistribution in mind. A reference distribution allows us to make\nprobabilistic statements about the point, in reference to an overall\ndistribution.\n\nThe \\texttt{ed.ppc()} method provides a scaffold for studying various\ndiscrepancy functions. 
It takes as input a user-defined discrepancy\nfunction, and a data dictionary.\n\\begin{lstlisting}[language=Python]\ned.ppc(lambda xs, zs: tf.reduce_mean(xs[x_post]), data={x_post: x_train})\n\\end{lstlisting}\nThe discrepancy can also take latent variables as input, which we pass\ninto the PPC.\n\\begin{lstlisting}[language=Python]\ned.ppc(lambda xs, zs: tf.reduce_max(zs[z]),\n       data={y_post: y_train, x_ph: x_train},\n       latent_vars={z: qz, beta: qbeta})\n\\end{lstlisting}\n\nSee the \\href{/api/criticism}{criticism API} for further details.\n\nPPCs are an excellent tool for revising models---simplifying or\nexpanding the current model as one examines its fit to data.\nThey are inspired by classical hypothesis testing; these methods\ncriticize models under the frequentist perspective of large sample\nassessment.\n\nPPCs can also be applied to tasks such as hypothesis testing, model\ncomparison, model selection, and model averaging.  It's important to\nnote that while PPCs can be applied as a form of Bayesian hypothesis\ntesting, hypothesis testing is generally not recommended: binary\ndecision making from a single test is not as common a use case as one\nmight believe. We recommend performing many PPCs to get a holistic\nunderstanding of the model fit.\n\n\\subsubsection{References}\\label{references}\n", "meta": {"hexsha": "0dcfa3e2a24509bfd64f17cb89a40440a70c372b", "size": 6871, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/tutorials/criticism.tex", "max_stars_repo_name": "zhangyewu/edward", "max_stars_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5200, "max_stars_repo_stars_event_min_datetime": "2016-05-03T04:59:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T03:32:26.000Z", "max_issues_repo_path": "docs/tex/tutorials/criticism.tex", "max_issues_repo_name": "zhangyewu/edward", "max_issues_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 724, "max_issues_repo_issues_event_min_datetime": "2016-05-04T09:04:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-28T02:41:12.000Z", "max_forks_repo_path": "docs/tex/tutorials/criticism.tex", "max_forks_repo_name": "zhangyewu/edward", "max_forks_repo_head_hexsha": "8ec452eb0a3801df8bda984796034a9e945faec7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1004, "max_forks_repo_forks_event_min_datetime": "2016-05-03T22:45:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T00:08:08.000Z", "avg_line_length": 42.6770186335, "max_line_length": 82, "alphanum_fraction": 0.7879493524, "num_tokens": 1670, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.5530672270742186}}
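The PPC recipe itself is independent of Edward's API, and a plain numpy version makes the mechanics concrete. In the sketch below the data are synthetic, and a textbook conjugate posterior for the mean of a unit-variance Gaussian stands in for whatever inference would normally produce; it replicates data from the posterior predictive, applies $T(\mathbf{x}_\text{new}) = \max(\mathbf{x}_\text{new})$, and locates $T(\mathbf{x})$ in the reference distribution.

\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)      # "observed" data (synthetic here)

# Stand-in posterior: with a flat prior and unit-variance Gaussian model,
# the mean satisfies mu | x ~ N(mean(x), 1/n).
post_mu = rng.normal(x.mean(), 1/np.sqrt(len(x)), size=1000)

# Replicate data from the posterior predictive; apply T = max
T_rep = np.array([rng.normal(mu, 1.0, size=x.size).max() for mu in post_mu])
T_obs = x.max()

# Posterior predictive p-value: where T(x) falls in the reference distribution
ppp = np.mean(T_rep >= T_obs)
print(f"T(x) = {T_obs:.2f}, PPC p-value = {ppp:.2f}")
\end{lstlisting}

A p-value near 0 or 1 corresponds to the figure's low-probability region and flags a misfit under this statistic; as the tutorial stresses, one would run several such checks rather than base a binary decision on a single one.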
{"text": "\\chapter{Fourier Transform}\n\\begin{enumerate}\n\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tF(k)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} f(x) e^{i k x} d x\\\\\n\t\t\\because g(x)&=f(x+a)\\\\\n\t\t\\therefore G(k)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} f(x+a) e^{i k x} d x\\\\\n\t\t\\text{Let }x+a&=y, d x=d y\\\\\n\t\t\\therefore G(k)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} f(y) e^{i k(y-a)} d y=e^{-i k a} \\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} f(x) e^{i k x} d x=e^{-i a k} F(k)\n\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (c)}\n\t\\end{answer}\n\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t f_{c}(\\omega) &=\\sqrt{\\frac{2}{\\pi}} \\int_{0}^{\\infty} f(x) \\cos \\omega x d x \\\\ &=\\sqrt{\\frac{2}{\\pi}} \\int_{0}^{1} \\cos \\omega x d x+\\sqrt{\\frac{2}{\\pi}} \\int_{1}^{2}-\\cos \\omega x d x \\\\ &=\\sqrt{\\frac{2}{\\pi}}\\left[\\left(\\frac{\\sin \\omega x}{\\omega}\\right)_{0}^{1}-\\left(\\frac{\\sin \\omega x}{\\omega}\\right)_{1}^{2}\\right] \\\\ &=\\sqrt{\\frac{2}{\\pi}}\\left(\\frac{2 \\sin \\omega-\\sin 2 \\omega}{\\omega}\\right) \n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (b)}\n\t\\end{answer}\n\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\mathrm{n}: g(k)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} \\frac{n}{\\sqrt{\\pi}} e^{-n^{2} x^{2}} e^{i k x} d x=\\frac{n}{\\sqrt{2} \\cdot \\pi} \\int_{-\\infty}^{+\\infty} e^{-n^{2} x^{2}+i i x} d x=\\frac{n}{\\sqrt{2} \\cdot \\pi} e^{-k^{2} / 4 n^{2}} \\int_{-\\infty}^{+\\infty} e^{-n^{2}\\left(x-\\frac{i k}{2 n^{2}}\\right)^{2}} d x\\\\\n\t\t&=\\frac{n}{\\sqrt{2} \\cdot \\pi} e^{-k^{2} / 4 n^{2}} \\times 2 \\times \\frac{1}{2} \\cdot \\frac{\\sqrt{\\pi}}{n}=\\frac{1}{\\sqrt{2 \\pi}} e^{-k^{2} / 4 n^{2}}\n\t\t\\intertext{ Now inverse fourier transform of $g(k)$ is}\n\t\t\\delta_{n}(x)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{-\\infty}^{+\\infty} \\frac{e^{-k^{2} / 4 n^{2}}}{\\sqrt{2 \\pi}} \\cdot e^{-i k x} d k\\\\\n\t\t\\delta(x)&=\\lim _{n \\rightarrow \\infty} \\delta_{n}(x)=\\lim _{n \\rightarrow \\infty} \\frac{1}{2 \\pi} \\int_{-\\infty}^{+\\infty} e^{-k^{2} / 4 n^{2}} \\cdot e^{-i k x} d k=\\frac{1}{2 \\pi} \\int_{-\\infty}^{+\\infty} e^{-i k x} d k\\\\\n\t\t\\therefore \\delta(x)&=\\frac{1}{2 \\pi} \\int_{-\\infty}^{+\\infty} e^{-i k x} d k\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (c)}\n\t\\end{answer}\n\t\\item  $\\left. \\right. $\t\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{(i) }g_{c}(\\omega)=\\sqrt{\\frac{2}{\\pi}} \\int_{0}^{\\omega} \\delta(t-x) \\cos \\omega t d t=\\sqrt{\\frac{2}{\\pi}} \\cos \\omega x\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (a)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. $\t\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tf(t)&=\\delta(t-x)=\\sqrt{\\frac{2}{\\pi}} \\int_{0}^{+\\infty} \\sqrt{\\frac{2}{\\pi}} \\cos \\omega x \\cos \\omega t d \\omega\\\\\n\t\t\\Rightarrow \\delta(t-x)&=\\frac{2}{\\pi} \\int_{0}^{\\infty} \\cos \\omega t \\cos \\omega x d \\omega\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (b)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. 
$\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{The Fourier transform of $f(t)$ is}\n\t\tg(\\omega)&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{0}^{\\infty} e^{-\\Gamma t / 2 \\hbar} \\cdot e^{-i E_{0} t / \\hbar} \\cdot e^{i \\omega t} d t=\\frac{1}{\\sqrt{2 \\pi}} \\int_{0}^{\\infty} \\exp \\left\\{\\frac{-i t}{\\hbar}\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)\\right\\} d t\\\\\n\t\t&=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{e^{\\left\\{\\frac{-i t}{\\hbar}\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)\\right\\}}}{\\frac{-i}{\\hbar}\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)}\\right]_{0}^{\\infty}=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{0-1}{\\frac{-i}{\\hbar}\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)}\\right]\\\\\n\t\t&=\\frac{\\hbar}{\\sqrt{2 \\pi} \\, i\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)}\n\t\t\\intertext{The inverse transform of $g(\\omega)$ is}\n\t\tf(t)&=\\frac{1}{2 \\pi} \\int_{-\\infty}^{+\\infty} \\frac{\\hbar e^{-i \\omega t}}{i\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)} d \\omega=\\frac{\\hbar}{2 \\pi i} \\int_{-\\infty}^{+\\infty} \\frac{e^{-i \\omega t} d \\omega}{\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)}\\\\\n\t\t\\therefore &\\frac{\\hbar}{2 \\pi i} \\int_{-\\infty}^{+\\infty} \\frac{e^{-i \\omega t} d \\omega}{\\left(E_{0}-i \\Gamma / 2-\\hbar \\omega\\right)}=f(t)= \\begin{cases}\\exp (-\\Gamma t / 2 \\hbar) \\exp \\left(\\frac{-i E_{0} t}{\\hbar}\\right), & t>0 \\\\ 0 & t<0\\end{cases}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (c)}\n\t\\end{answer}\n\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tF\\left\\{t e^{-t^{2}}\\right\\}&=-\\frac{1}{2} F\\left\\{\\frac{d\\left(e^{-t^{2}}\\right)}{d t}\\right\\}=-\\frac{1}{2}(j 2 \\pi f) F\\left\\{e^{-t^{2}}\\right\\}=-j \\pi f e^{-\\pi^{2} f^{2}}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (b)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{This is a sine function}\n\t\tf(x)&=A \\sin k_{0} x \\Rightarrow F(k)=j \\frac{A}{2}\\left[\\delta\\left(k-k_{0}\\right)-\\delta\\left(k+k_{0}\\right)\\right]\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (a)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. $\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tH(f)&=\\int_{-\\infty}^{+\\infty} h(t) e^{-j 2 \\pi f t} d t=\\int_{0}^{+\\infty} \\beta e^{-\\alpha t} e^{-j 2 \\pi f t} d t=\\beta \\int_{0}^{+\\infty} e^{-(\\alpha+j 2 \\pi f) t} d t\\\\\n\t\t\\Rightarrow H(f)&=\\left.\\frac{-\\beta}{\\alpha+j 2 \\pi f} e^{-(\\alpha+j 2 \\pi f) t}\\right|_{0} ^{\\infty}=\\frac{\\beta}{\\alpha+j 2 \\pi f}=\\frac{\\beta \\alpha}{\\alpha^{2}+(2 \\pi f)^{2}}-j \\frac{2 \\pi f \\beta}{\\alpha^{2}+(2 \\pi f)^{2}}\\\\\n\t\t\\Rightarrow H(f)&=\\frac{\\beta}{\\sqrt{\\alpha^{2}+(2 \\pi f)^{2}}} e^{j \\tan ^{-1}[-2 \\pi f / \\alpha]}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (d)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. 
$\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tf(x)&=\\left\\{\\begin{array}{rl}-1 & -1<x<0 \\\\ 1 & 0<x<1 \\\\ 0 & \\text { otherwise }\\end{array}\\right.\\\\\n\t\tF[f(x)]&=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\int_{-1}^{0}-e^{-i \\omega x} d x+\\int_{0}^{1} e^{-i \\omega x} d x\\right]\\\\\n\t\t&=\\frac{1}{\\sqrt{2 \\pi}}\\left[-\\left(\\frac{e^{-i \\omega x}}{-i \\omega}\\right)_{-1}^{0}+\\left(\\frac{e^{-i \\omega x}}{-i \\omega}\\right)_{0}^{1}\\right]=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{-1+e^{i \\omega}}{-i \\omega}+\\frac{e^{-i \\omega}-1}{-i \\omega}\\right]\\\\\n\t\t&=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{2 \\cos \\omega-2}{-i \\omega}\\right]=\\frac{i}{\\omega} \\sqrt{\\frac{2}{\\pi}}(\\cos \\omega-1)\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (b)}\n\t\\end{answer}\n\t\t\\item  $\\left. \\right. $\n\t\t\\begin{answer}\n\t\t\t\\begin{align*}\n\t\t\tf(x)&= \\begin{cases}x, & 0<x<a \\\\ 0, & \\text { otherwise }\\end{cases}\\\\\n\t\t\tF[f(x)]&=\\frac{1}{\\sqrt{2 \\pi}} \\int_{0}^{a} x e^{-i \\omega x} d x=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\left(\\frac{x e^{-i \\omega x}}{-i \\omega}\\right)_{0}^{a}-\\left(\\frac{e^{-i \\omega x}}{(-i \\omega)^{2}}\\right)_{0}^{a}\\right]\\\\&=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{a e^{-i \\omega a}}{-i \\omega}-\\frac{e^{-i \\omega a}-1}{(-i \\omega)^{2}}\\right]=\\frac{1}{\\sqrt{2 \\pi}}\\left[\\frac{i a \\omega e^{-i \\omega a}}{\\omega^{2}}+\\frac{e^{-i \\omega a}-1}{\\omega^{2}}\\right]\\\\\n\t\t\t&=\\frac{1}{\\sqrt{2 \\pi} \\cdot \\omega^{2}}\\left[e^{-i \\omega a}(1+i a \\omega)-1\\right]\n\t\t\t\\end{align*}\n\t\t\tSo the correct answer is \\textbf{Option (a)}\n\t\t\\end{answer}\n\\end{enumerate}", "meta": {"hexsha": "a515a1421783cb0ee1b2cf71e915137bdf820383", "size": 6981, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Fourier Transform-solutions.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Fourier Transform-solutions.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSIR- Mathematical Physics/chapter/Assignments/Assignment-Fourier Transform-solutions.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.125, "max_line_length": 443, "alphanum_fraction": 0.5524996419, "num_tokens": 3199, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115011, "lm_q2_score": 0.7217431943271998, "lm_q1q2_score": 0.5530672140492698}}
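Closed forms like these are easy to check numerically. As an illustration, the numpy snippet below verifies the cosine-transform result of problem 2 against direct quadrature; the grid sizes and tolerance are our choices, not part of the assignment.

\begin{lstlisting}[language=Python]
import numpy as np

# f(x) = 1 on (0,1), -1 on (1,2), 0 elsewhere (the function from problem 2)
def f(x):
    return np.where((x > 0) & (x < 1), 1.0,
                    np.where((x > 1) & (x < 2), -1.0, 0.0))

omega = np.linspace(0.1, 10, 50)
x = np.linspace(0, 2, 20001)
# numerical cosine transform: sqrt(2/pi) * integral of f(x) cos(w x) dx
numeric = np.array([np.trapz(f(x) * np.cos(w * x), x) for w in omega]) \
          * np.sqrt(2/np.pi)
closed = np.sqrt(2/np.pi) * (2*np.sin(omega) - np.sin(2*omega)) / omega
# agreement to ~1e-4, limited by the jump discontinuities of f
print(np.max(np.abs(numeric - closed)))
\end{lstlisting}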
{"text": "\\section{Application of isogeometric SBFEM to linear elastic fracture mechanics}\n\\label{iso_section:fracture}\nAn attractive feature of the SBFEM is that no a priori knowledge of the asymptotic solution is required to accurately handle the stress singularity at a crack tip as shown in Fig.~\\ref{lr_fig:sbfem_intro}.\nWhen modeling a cracked structure, a subdomain surrounding the crack tip is selected and the scaling center is placed at the crack tip.\nThe boundary of the subdomain is divided into line elements.\nWhen the scaling center is placed at the crack tip, the solution for the stress field in Eq.\\ref{lr_eq:sbfem_stress_field} is expressed, by using Eq.\\ref{lr_eq:sbfem_transform}, as\n    \\begin{equation}\n        \\sigma(r,\\eta) = \\sum_i^n\n            c_i r^{-(\\lambda_i+1)} \\left(\n                r_\\eta^{\\lambda_i+1} (\\eta)\n                \\boldsymbol{\\psi}_{\\sigma_i}(\\eta)\n            \\right)\n        \\label{iso_eq:isosbfem_fracture_stress_field}\n    \\end{equation}\nwhere $\\psi_{\\sigma_i}(\\eta)$ is the $i$th stress mode, i.e. the $i$th column of the matrix $\\psi_\\sigma(\\eta)$.\nLike the well-known William expansion \\citep{Williams1957109}, Eq.~\\ref{iso_eq:isosbfem_fracture_stress_field} is a power series of the radial coordinate $r$.\nThe radial variation of each term of the series is expressed analytically by the power function $r^{-\\lambda_i + 1}$.\nAt discrete points along the boundary, the angular coordinates (see Eq.~\\ref{lr_eq:sbfem_transform}) are arranged as a vector $\\theta(\\eta)$ and the stress modes $\\psi_{\\sigma_i}(\\eta)$ are computed.\n$\\psi_{\\sigma_i}(\\eta)$ and $\\theta(\\eta)$ from a parametric equation of the angular variation of stresses.\nThe singular stress and the T-stress terms can be easily identified by the value of the exponent $-(\\lambda_i+1)$.\nWhen the real part of the exponent $-(\\lambda_i+1)$ of a term is negative, the stresses of this term at the crack tip, i.e. 
$\\xi=0$, tend to infinity.\nWhen the exponent $-(\\lambda_i+1)$ of a term is equal to 0, the stresses of this term are constant and contribute to the T-stress.\n\\paragraph{}\nIn the case of a crack in a homogeneous material or on a material interface, two singular terms exist in the solution.\nDenoting the singular stress modes as $\\mathrm{I}$ and $\\mathrm{II}$, the singular stresses $\\sigma^{s}(r,\\eta)$ (superscript s for singular stresses) are obtained from Eq.~\\ref{iso_eq:isosbfem_fracture_stress_field}:\n\\begin{equation}\n    \\sigma^s(r,\\eta)\n    =\\sum_{i,\\mathrm{I},\\mathrm{II}}\n    c_i r^{-(\\lambda_i + 1)}\n    (\n        r_\\eta^{(\\lambda_i+1)}\n        \\psi_{\\sigma_i}(\\eta)\n    )\n\\label{iso_eq:isosbfem_singular_stress}\n\\end{equation}\nNote that the singular stress terms are separated from other terms and the stress singularity is represented analytically.\nThis allows the evaluation of the stress intensity factors by directly matching their definition with the singular stress.\nFor convenience, the point on the boundary along the crack front $\\theta=0$ is considered.\nThe distance from the crack tip on the boundary is denoted as $L_0 = r_\\eta(\\theta=0)$.\nFrom Eq.~\\ref{iso_eq:isosbfem_singular_stress}, the values of the singular stresses at this point are equal to:\n\\begin{equation}\n    \\sigma^s (L_0, \\theta = 0)\n    =\\sum_{i,\\mathrm{I},\\mathrm{II}}\n    c_i \\psi_{\\sigma_i}(\\theta=0)\n    \\label{iso_eq:isosbfem_singular_stress_2}\n\\end{equation}\nwhere $\\psi_{\\sigma_i}(\\theta=0)$ is the value of the stress modes at $\\theta=0$.\nIt is obtained by interpolating $\\psi_{\\sigma_i}(\\eta)$ at the discrete points of $\\theta(\\eta)$.\nThe stress intensity factors can be computed directly from their definition using the stresses in Eq.~\\ref{iso_eq:isosbfem_singular_stress_2}.\nFor a crack in a homogeneous medium, the classical definitions of the stress intensity factors $K_{\\mathrm{I}}$ and $K_{\\mathrm{II}}$ for modes $\\mathrm{I}$ and $\\mathrm{II}$ are expressed as:\n\\begin{equation}\n    \\begin{Bmatrix}\n        K_{\\mathrm{I}}\\\\\n        K_{\\mathrm{II}}\n    \\end{Bmatrix}\n    =\\sqrt{2\\pi r}\n    \\begin{Bmatrix}\n        \\sigma_{\\theta\\theta}^s(r,\\theta=0) \\\\\n        \\tau_{r\\theta}^s(r,\\theta=0)\n    \\end{Bmatrix}\n    \\label{iso_eq:isosbfem_crack_homo}\n\\end{equation}\nFormulating Eq.~\\ref{iso_eq:isosbfem_crack_homo} at $r=L_0$ results in\n\\begin{equation}\n    \\begin{Bmatrix}\n        K_{\\mathrm{I}}\\\\\n        K_{\\mathrm{II}}\n    \\end{Bmatrix}\n    =\\sqrt{2\\pi L_0}\n    \\begin{Bmatrix}\n        \\sigma_{\\theta\\theta}^s(L_0,\\theta=0) \\\\\n        \\tau_{r\\theta}^s(L_0,\\theta=0)\n    \\end{Bmatrix}\n\\label{iso_eq:isosbfem_crack_homo_at_L}\n\\end{equation}\nThe stress intensity factors are then determined by substituting the stress components $\\sigma_{\\theta\\theta}^s(L_0,\\theta=0)$ and $\\tau_{r\\theta}^s(L_0,\\theta=0)$ obtained from Eq.~\\ref{iso_eq:isosbfem_singular_stress_2} into Eq.~\\ref{iso_eq:isosbfem_crack_homo_at_L}.\nFor a subdomain containing a crack tip, two of the eigenvalues are equal to 1.\nThey represent the T-stress term and the rotational rigid body motion term, which does not contribute to the stresses.\nThey are separated from other terms in Eq.~\\ref{iso_eq:isosbfem_singular_stress_2} and expressed as (superscript T for the T-stress)\n\\begin{equation}\n    \\sigma^T(\\eta) = \n    \\sum_{ i=T_{ \\mathrm{I} }, T_{ \\mathrm{II} } }\n    c_i \\psi_{\\sigma_i} 
(\\eta)\n\\end{equation}\nThe T-stress along the crack front ($\\theta=0$) is determined by interpolating the angular variation of the two stress modes $(\\theta(\\eta), \\psi_{\\sigma_i}(\\eta))$.", "meta": {"hexsha": "c74f10d6ad21598c490375c240f9987b3b98f822", "size": 5460, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "isogeometric_sbfem/sbfem_linear_fracture.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "isogeometric_sbfem/sbfem_linear_fracture.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "isogeometric_sbfem/sbfem_linear_fracture.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.2352941176, "max_line_length": 269, "alphanum_fraction": 0.7194139194, "num_tokens": 1609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430645886583, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.5530513654917032}}
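Once the integration constants $c_i$ and the stress modes interpolated at $\theta=0$ are in hand, the SIF extraction described above reduces to a couple of array operations. The numpy sketch below uses entirely hypothetical values for $c_i$, $\psi_{\sigma_i}(\theta=0)$ and $L_0$; it only illustrates the evaluation of $\sigma^s(L_0,\theta=0)$ and the definition of $K_{\mathrm{I}}$, $K_{\mathrm{II}}$ at $r=L_0$.

\begin{lstlisting}[language=Python]
import numpy as np

# Hypothetical data: integration constants of the two singular modes, and
# the stress-mode components at theta = 0, ordered [sigma_tt, tau_rt].
c = np.array([2.1, 0.4])               # c_I, c_II
psi_theta0 = np.array([[1.00, 0.00],   # mode I  at theta = 0
                       [0.02, 0.95]])  # mode II at theta = 0
L0 = 0.5                               # boundary distance at theta = 0

# Singular stresses at (L0, theta = 0): sum_i c_i * psi_i(theta = 0)
sigma_s = c @ psi_theta0               # [sigma_tt, tau_rt]

# SIFs from their definition evaluated at r = L0
K_I, K_II = np.sqrt(2*np.pi*L0) * sigma_s
print(K_I, K_II)
\end{lstlisting}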
{"text": "\\documentclass{report}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\usepackage{graphicx}\n\\usepackage{float}\n\\usepackage{listings}\n\\begin{document}\n\\section{LinearRegression without regularization}\n\\subsection{Abstract}\nHello, everyone. I'm btobab.\\\\\nWhen I learned NLP, I found I completely can't understand formula derivation, so I decided to learn the theoretical derivation of machine learning from the beginning.\\\\\nThis chapter is mainly divided into two parts:\n\\begin{itemize}\n\t\\item[] The first part will derive the closed-form solution of the linear regression(least squares method) from three perspectives of matrix, geometry, probability, and provide reference code.\n\t\\item[] The second part will derive the closed-formula solution of least squares method with regularization from two perspectives of matrix and probability, and construct a complete linear regressioin class, and implement the closed-solution method and gradient descent method with code at the same time.\n\\end{itemize}\n\\subsection{Introduction}\nLinear regression model is a model that use linear function to fit the relationship between one or more independent variables and the dependent variable ($y$).\\\\\\\\\nThe target variable ($y$) is a continuous numerical type, such as: housing price, number of people and rainfall. The regression model is to find a mapping function between input variables and output variables.\\\\\\\\\nThe learning of regression task equals to function fitting: use a function curve to make it fit the known data well and predict unknown data.\\\\\\\\\nRegression task is divided into two processes: model learning and prediction. Construct a model based on given training dataset and predict corresponding output based on new input data.\n\\subsection{Algorithm}\n\\subsubsection{Matrix perspective}\nNote:in general, the vectors we are discussing are column vectors. Therefore, in order to ensure the shape of the matrix during the derivation process, a large number of transposition characters are used\\\\\\\\\ngiven dataset $\\mathcal{D}=\\{(x_1,y_1),(x_2,y_2)...(x_n, y_n)\\}$\\\\\\\\\nin which $x_i\\in \\mathcal{R}^p, y_i\\in \\mathcal{R}, i=1,2,...,n$\n\n$$\nX=(x_1,x_2,...,x_n)^T=\\begin{pmatrix}\nx_{11}&x_{12}&...&X_{1p}\\\\\nx_{21}&x_{22}&...&x_{2p}\\\\\n.&.&.&.&\\\\\n.&.&.&.&\\\\\n.&.&.&.&\\\\\nx_{n1}&x_{n2}&...&x_{np}\\\\\n\\end{pmatrix}_{np}\n$$\n$$\nY=(y1,y2,...,y_n)_{n1}^T\n$$\nIt is the model we constrct: $f(w)=w^Tx+w_0 x_0.$\\\\\\\\\nGenerally set $x_0=1$,and $b=w_0 x_0$, $b$ is bias, $w$ is weight. 
Below, for convenience of derivation, we merge $w_0$ into $w$ and $x_0$ into $x$.\\\\\\\\\nso the model is updated to $f(w)=w^Tx$.\nThe loss function of the least squares method is:\n$$\nL(w)=\\sum_{i=1}^{n}\\|y_i-w^T x_i\\|_2^2\n$$\n$$\n=\\begin{pmatrix}\ny_1-w^Tx_1&y_2-w^Tx_2&...&y_n-w^Tx_n\n\\end{pmatrix}\n\\begin{pmatrix}\ny_1-w^Tx_1\\\\\ny_2-w^Tx_2\\\\\n.\\\\\n.\\\\\n.\\\\\ny_n-w^Tx_n\\\\\n\\end{pmatrix}\n$$\n$$\n=(Y^T-w^TX^T)(Y^T-w^TX^T)^T\n$$\n$$\n=(Y^T-w^TX^T)(Y-Xw)\n$$\n$$\n=Y^TY-w^TX^TY-Y^TXw+w^TX^TXw\n$$\nNote the second and third terms are transposes of each other, and observe their matrix shape: $(1,p)(p,n)(n,1)=(1,1)$\\\\\nKnowing that these two terms are scalars, and the transpose of a scalar is itself, the two can be combined to get:\n$$\nL(w)=Y^TY-2w^TX^TY+w^TX^TXw\n$$\nso $\\hat{w}=argmin(L(w))$.\\\\\nBelow, to find the minimum of $L(w)$, we need to differentiate $L(w)$.\\\\\\\\\nNote there are three terms in the formula. The first term has nothing to do with $w$ and can be removed. The remaining two terms involve matrix derivatives.\\\\\\\\\nRegarding matrix derivatives, the author recommends three articles by a blogger (more detailed and rigorous than most textbooks; every formula is proved):\n\\begin{itemize}\n\t\\item \\href{https://zhuanlan.zhihu.com/p/263777564}{essence}\n\t\\item \\href{https://zhuanlan.zhihu.com/p/273729929}{basics}\n\t\\item \\href{https://zhuanlan.zhihu.com/p/288541909}{advanced}\n\\end{itemize}\nThe following is the derivation of the derivatives of the two terms.\\\\\\\\\nBecause $X,Y$ are constant matrices, the derivative of the second term can be obtained directly. However, since it is a derivative with respect to $w$, the result must be transposed.\n$$\n\\frac{d(2w^TX^TY)}{dw}=2X^TY\n$$\nBelow, let's solve the third term.\n$$\nd(w^TX^TXw)=d(w^T)X^TXw+w^TX^TXd(w)=(dw)^TX^TXw+w^TX^TXdw\n$$\nBoth terms are scalars, and since the transpose of a scalar is itself, $(dw)^TX^TXw=w^TX^TXdw$, hence\n$$\nd(w^TX^TXw)=2w^TX^TXdw\n$$\nso\n\n$$\n\\frac{d(w^TX^TXw)}{dw}=2X^TXw\n$$\nso $\\frac{dL(w)}{dw}=2X^TXw-2X^TY$\\\\\\\\\nset the derivative equal to 0 to get the closed-form solution of the least squares method:\n$$\n\\hat{w}=(X^TX)^{-1}X^TY\n$$\n\\subsubsection{Geometry perspective}\n$$\nX=(x_1,x_2,...,x_n)^T=\\begin{pmatrix}\nx_{11}&x_{12}&...&x_{1p}\\\\\nx_{21}&x_{22}&...&x_{2p}\\\\\n.&.&.&.&\\\\\n.&.&.&.&\\\\\n.&.&.&.&\\\\\nx_{n1}&x_{n2}&...&x_{np}\\\\\n\\end{pmatrix}_{np}\n$$\n$$\nY=(y_1,y_2,...,y_n)_{n1}^T\n$$\nFrom the geometric perspective, we regard $X$ as $p$ column vectors: the first is $(x_{11},x_{21},...,x_{n1})^T$ and the $p$-th is $(x_{1p},x_{2p},...,x_{np})^T$, while $Y$ is regarded as a single vector.\\\\\\\\\nNow we assume $p=2$ because it is easier to draw. The diagram is as follows (it really took a long time to draw; please consider leaving a star).\n\\begin{figure}\n\\includegraphics[width=0.7 \\textwidth]{/Users/btobab/TeX-Projects/figures/1}\n\\caption{figure 1}\n\\end{figure}\nChange the model to $f(w)=Xw$, which means scaling the columns of $X$ by the weights $w$.\\\\\\\\\nThe geometric meaning of the least squares method is to find a $w$ such that the distance between the vector $Y$ and the column space of $X$, i.e. $\\|Y-Xw\\|$, is the smallest. 
Of course, the distance is smallest when $Y-Xw$ is perpendicular to the column space of $X$.\\\\\\\\\nso we get the normal equation: $X^T(Y-Xw)=0$\\\\\\\\\nfrom which we solve for $w$: \n$$\nX^TXw=X^TY\n$$\n$$\n\\hat{w}=(X^TX)^{-1}X^TY\n$$\nwe can see that the solved $w$ is the same as the result from the matrix perspective.\n\\subsubsection{Probability perspective}\nAs we know, real data can rarely be fitted exactly by a straight line; true data always carries some randomness, that is, noise.\\\\\\\\\nso we assume noise $\\epsilon\\backsim N(0,\\sigma^2)$\\\\\nso $y=f(w)+\\epsilon=w^Tx+\\epsilon$\\\\\nso $y|x;w\\backsim N(w^Tx,\\sigma^2)$\\\\\nBring it into the probability density function of the gaussian distribution:\n$$\np(y|x;w)=\\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{(y-w^Tx)^2}{2\\sigma^2}}\n$$\nThen use $MLE$ (maximum likelihood estimation).\\\\\\\\\nNote: the idea of MLE is that, with a large number of samples, relative frequencies approximate the underlying probabilities.\\\\\\\\\nLet's define a function: $\\zeta(w)=\\log{p(Y|X;w)}$\n\nSince the $n$ data points are independent, we can change the probability to a product form:\n\n$\\zeta(w)=\\log{\\Pi_{i=1}^np(y_i|x_i;w)}=\\Sigma_{i=1}^n \\log{p(y_i|x_i;w)}$\n\nbring the probability density function of the gaussian distribution into the formula:\n\n$\\zeta(w)=\\Sigma_{i=1}^n(\\log{\\frac{1}{\\sqrt{2\\pi}\\sigma}}-\\frac{(y_i-w^Tx_i)^2}{2\\sigma^2})$\n\nsince the former term has nothing to do with $w$, it can be ignored.\n\nso:\n$$\n\\hat{w}=argmax{\\zeta(w)}\n$$\n$$\n=argmax{ \\Sigma_{i=1}^n -\\frac{(y_i-w^Tx_i)^2}{2\\sigma^2}}\n$$\n$$\n=argmin{ \\Sigma_{i=1}^n (y_i-w^Tx_i)^2}\n$$\nThe conclusion obtained by maximum likelihood estimation is exactly the definition of the least squares method.\\\\\\\\\nThis also shows that the least squares method hides an assumption: the noise is gaussian.\n\\subsection{Implement}\n\\begin{lstlisting}[language={python}]\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# num of samples\nn = 1000\n# noise scale\nepsilon = 1\nX = np.expand_dims(np.linspace(0,100,1000), axis=-1)\nw = np.asarray([5.2])\nY = X.dot(w)\n# apply noise to X\nX += np.random.normal(scale=epsilon, size=(X.shape))\nX_T = X.transpose()\n# closed-form solution: w_hat = (X^T X)^{-1} X^T Y\nw_hat = np.matmul(np.linalg.pinv((np.matmul(X_T, X))), np.matmul(X_T, Y))\nprint(w_hat)\nplt.scatter(X, Y, s=3, c=\"y\")\nY_hat = X.dot(w_hat)\nplt.plot(X, Y_hat)\nplt.show()\n\\end{lstlisting}\n\\newpage\n\\section{LinearRegression with regularization}\n\\subsection{Algorithm}\n\\subsubsection{Matrix perspective}\nFirstly, given a new loss function with regularization:\n$$\n\\zeta(w)=\\Sigma_{i=1}^{n}||y_i-w^T  x_i||^2 + \\lambda  ||w||^2\n$$\nthen, referencing the derivation of the unregularized loss function from the matrix perspective:\n$$\n\\zeta(w)=Y^TY-2w^TX^TY+w^TX^TX w+\\lambda  ||w||^2\n$$\nso $\\hat{w}=argmin(\\zeta(w))$.\\\\\\\\\ndifferentiate $\\zeta(w)$:\n$$\n\\frac{\\partial \\zeta(w)}{\\partial w}=2X^TXw-2X^T Y+2\\lambda  w \n$$\nset the derivative equal to 0 to get the closed-form solution of the least squares method with regularization:\n$$\n\\hat{w}=(X^TX+\\lambda  I)^{-1} X^TY\n$$\nwhere $I$ is the identity matrix.\n\\subsubsection{Probability perspective}\nassume noise $\\epsilon \\backsim N(0,\\sigma_1^2)$ and prior $w \\backsim N(0,\\sigma_2^2)$\n\nsince $y=w^T x + \\epsilon$\n\nwe get $y|w \\backsim N(w^T x,\\sigma_1^2)$\n\\\\Next we use MAP (maximum a posteriori estimation):\n\naccording to Bayes' theorem:\n$$\nP(w|Y)=\\frac{P(Y|w) P(w)}{P(Y)}\n$$\n$P(w)$ is the prior probability, $P(Y|w)$ is the likelihood probability, $P(Y)$ is the normalizing 
\\subsubsection{Probability perspective}\nAssume noise $\\epsilon \\sim N(0,\\sigma_1^2)$ and a prior $w \\sim N(0,\\sigma_2^2)$.\n\nSince $y=w^T x + \\epsilon$,\n\nwe get $y|w \\sim N(w^T x,\\sigma_1^2)$\n\\\\Next we use MAP (maximum a posteriori estimation):\n\nAccording to Bayes' theorem:\n$$\nP(w|Y)=\\frac{P(Y|w) P(w)}{P(Y)}\n$$\n$P(w)$ is the prior probability, $P(Y|w)$ is the likelihood, and $P(Y)$ is the normalizing probability: the prior is multiplied by the likelihood and normalized to obtain the posterior probability $P(w|Y)$.\nActually $P(Y)$ is a constant, so:\n$$\n\\hat{w}=argmax(P(w|Y))=argmax(P(Y|w) P(w))=argmax(log(P(Y|w) P(w)))\n$$\nSince the samples are independent, the likelihood factorizes into a product:\n$$\n=argmax(log(P(w)\\prod_{i=1}^n P(y_i|w)))=argmax(\\sum_{i=1}^n log(P(y_i|w))+ log(P(w)))\n$$\nBringing in the probability density function of the Gaussian distribution, we get:\n$$\n\\hat{w}=argmax(\\sum_{i=1}^n(log(\\frac{1}{\\sqrt{2\\pi} \\sigma_1})-\\frac{(y_i-w^T x_i)^2}{2\\sigma_1^2})+log(\\frac{1}{\\sqrt{2 \\pi} \\sigma_2})-\\frac{w^2}{2\\sigma_2^2})\n$$\nSince the terms involving only the hyperparameters $\\sigma_1$ and $\\sigma_2$ are constants in $w$, they can be omitted.\n\nso:\n$$\n\\hat{w}=argmin(\\sum_{i=1}^n \\frac{(y_i-w^T x_i)^2}{2\\sigma_1^2}+\\frac{w^2}{2\\sigma_2^2})\n$$\n$$\n=argmin(\\sum_{i=1}^n (y_i-w^T x_i)^2+\\frac{\\sigma_1^2}{\\sigma_2^2} w^2)\n$$\nWe can see that the objective derived via $MAP$ is exactly the definition of the least squares method with regularization.\n\\subsection{Implement}\n\\begin{lstlisting}[language={python}]\nimport os\nos.chdir(\"../\")\nfrom models.linear_models import LinearRegression\nimport numpy as np\nimport matplotlib.pyplot as plt\nX_ = np.expand_dims(np.linspace(0, 10, 1000), axis=-1)\nX = np.c_[X_, np.ones(1000)]\nw = np.asarray([5.2, 1])\nY = X.dot(w)\n# append a few outliers to show the effect of regularization\nX = np.r_[X, np.asarray([[11, 1], [12, 1], [13, 1]])]\nY = np.r_[Y, np.asarray([100, 110, 120])]\n\nmodel = LinearRegression(l2_ratio=1e1, epoch_num=1000, lr=1e-2, batch_size=100, if_standard=False)\nmodel.fit(X[:, :-1], Y)\nprint(model.get_params())\nmodel.draw(X[:, :-1], Y)\n\\end{lstlisting}\n\\end{document}
{"text": "\\documentclass[a4paper]{article}\n\n\\def\\npart{IB}\n\n\\def\\ntitle{Linear Algebra}\n\\def\\nlecturer{A.\\ M.\\ Keating}\n\n\\def\\nterm{Michaelmas}\n\\def\\nyear{2017}\n\n\\input{header}\n\n\\newcommand*{\\M}{\\matrixring}\n\\newcommand*{\\spans}{\\generation}\n\n\\newcommand*{\\ann}{\\circ}\n\n\\newcommand*{\\basis}{\\mathcal}\n\n\\newcommand*{\\ip}{\\innerproduct}\n\n\\theoremstyle{definition}\n\\newtheorem*{caution}{Caution}\n\n\\begin{document}\n\n\\input{titlepage}\n\n\\tableofcontents\n\n\\section{Vector Space}\n\n\\begin{convention}\n  Throughout this course, $\\mathbb{F}$ denotes a general field. If you wish, think of it as $\\mathbb{R}$ or $\\mathbb{C}$.\n\\end{convention}\n\n\\subsection{Definitions}\n\n\\begin{definition}[Vector space]\\index{vector space}\n  An $\\mathbb{F}$-\\emph{vector space} (or a vector space over $\\mathbb{F}$) is an abelian group $(V, +)$ equipped with a function, called \\emph{scalar multiplication}:\n  \\begin{align*}\n    \\mathbb{F}\\times V &\\to V \\\\\n    (\\lambda, v) &\\mapsto \\lambda\\cdot v\n  \\end{align*}\n  satisfying the axioms\n  \\begin{itemize}\n  \\item distributive over vectors: $\\lambda(v_1+v_2) = \\lambda(v_1+v_2)$,\n  \\item distributive over scalars: $(\\lambda_1+\\lambda_2)v= \\lambda_1 v+\\lambda_2 v$,\n  \\item $\\lambda(\\mu v) = \\lambda \\mu v$,\n  \\item $1\\cdot v = v$.\n  \\end{itemize}\n\\end{definition}\n\nThe additive unit of $V$ is denoted by $\\V 0$.\n\n\\begin{eg}\\leavevmode\n  \\label{eg:matrix as V}\n  \\begin{enumerate}\n  \\item $\\forall n \\in \\mathbb{N}, \\mathbb{F}^n$ is the space of column vectors of length $n$ with entries in $\\mathbb{F}$. It is an vector space by entry-wise addition and entry-wise scalar multiplication.\n  \\item $\\M_{m,n}(\\mathbb{F})$, the set of $m\\times n$ matrices with entries in $\\mathbb{F}$, with the operation defined as entry-wise addition.\n    \\item For any set $X$, $\\mathbb{R}^X = \\{f: X \\to \\mathbb{R}\\}$, the set of $\\mathbb{R}$-valued functions on $X$, with addition and scalar multiplication defined pointwise. For instance, $(f_1+f_2)(x) = f_1(x)+f_2(x)$.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{ex}\\leavevmode\n  \\begin{enumerate}\n  \\item Check the above examples satisfy the axioms.\n    \\item $0\\cdot v = \\V 0$ and $(-1)\\cdot v = -v$ for all $v \\in V$.\n  \\end{enumerate}\n\\end{ex}\n\n\\subsection{Vector Subspace}\n\n\\begin{definition}[Vector subspace]\\index{vector space!subspace}\n  Let $V$ be an $\\mathbb{F}$-vector space. A subset $U \\subseteq V$ is a \\emph{subspace}, denoted $U \\leq V$, if\n  \\begin{itemize}\n  \\item $\\V 0 \\in U$,\n  \\item $U$ is closed under addition: $\\forall u_1, u_2 \\in U, u_1+u_2 \\in U$,\n    \\item $U$ is closed under scalar multiplication: $\\forall u \\in U, \\forall \\lambda \\in \\mathbb{F}, \\lambda u \\in U$.\n  \\end{itemize}\n\\end{definition}\n\n\\begin{ex}\n  If $U$ is a subspace of $V$, then $U$ is also an $\\mathbb{F}$-vector space.\n\\end{ex}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item $V = \\mathbb{R}^{\\mathbb{R}}$, the set all functions from $\\mathbb{R}$ to itself, has a (proper) subspace $C(\\mathbb{R})$, the space of continuous functions on $\\mathbb{R}$ as continuous functions are closed under addition and scalar multiplication. 
$C(\\mathbb{R})$ in turn has a proper subspace $P(\\mathbb{R})$, the set of all polynomials with real coefficients.\n    \\item $\\{(x_1,x_2,x_3) \\in \\mathbb{R}^3: x_1+x_2+x_3 = t\\}$ where $t$ is some fixed constant is a subspace of $\\mathbb{R}^3$ if and only if $t = 0$.\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{proposition}\n  Let $V$ be an $\\mathbb{F}$-vector space, $U, W \\leq V$. Then $U \\cap W \\leq V$.\n\\end{proposition}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item $\\V 0 \\in U, \\V 0 \\in W$ so $\\V 0 \\in U \\cap W$.\n    \\item Suppose $u, w \\in U \\cap W$. Fix $\\lambda, \\mu \\in \\mathbb{F}$. As $U \\leq V$, $\\lambda u + \\mu w \\in U$. As $W \\leq V$, $\\lambda u +\\mu w \\in W$ so $\\lambda u + \\mu w \\in U \\cap W$. Take $\\lambda = \\mu = 1$ for vector addition and $\\mu = 0$ for scalar multiplication.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{eg}\n  $V = \\mathbb{R}^3, U = \\{(x,y,z): x=0\\}, W=\\{(x,y,z):y=0\\}$, then $U\\cap W=\\{(x,y,z):x=y=0\\}$.\n\\end{eg}\n\n\\begin{note}\nThe union of a family of subspaces is \\emph{almost never} a subspace. For example, take $V = \\mathbb{R}^2$ and let $U, W$ be the $x$- and $y$-axes: $U \\cup W$ is not closed under addition.\n\\end{note}\n\n\\begin{definition}[Sum of vector spaces]\\index{vector space!sum}\n  Let $V$ be an $\\mathbb{F}$-vector space, $U, W \\leq V$, the \\emph{sum} of $U$ and $W$ is the set\n  \\[\n    U + W = \\{u+w: u\\in U, w\\in W\\}\n  \\]\n\\end{definition}\n\n\\begin{eg}\n  With the definitions from the previous example, $U+W=V$.\n\\end{eg}\n\n\\begin{proposition}\n  $U+W \\leq V$.\n\\end{proposition}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item $\\V 0 = \\V 0 + \\V 0 \\in U+W$,\n  \\item for $u_1,u_2\\in U, w_1,w_2\\in W$, $(u_1+w_1) + (u_2+w_2) = (u_1+u_2)+(w_1+w_2) \\in U+W$,\n    \\item similar for scalar multiplication. Left as an exercise.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{note}\n  $U+W$ is the smallest subspace containing both $U$ and $W$: any subspace containing $U$ and $W$ must contain all elements of the form $u+w$ by closure under addition.\n\\end{note}\n\n\\begin{definition}[Quotient vector space]\\index{vector space!quotient}\n  Let $V$ be an $\\mathbb{F}$-vector space, $U \\leq V$. The \\emph{quotient space} $V/U$ is the abelian group $V/U$ equipped with scalar multiplication\n  \\begin{align*}\n    \\mathbb{F} \\times V/U &\\to V/U \\\\\n    (\\lambda, v+U) &\\mapsto \\lambda v+U\n  \\end{align*}\n\\end{definition}\n\n\\begin{proposition}\n  This is well-defined and $V/U$ is an $\\mathbb{F}$-vector space.\n\\end{proposition}\n\n\\begin{proof}\n  First check it is well-defined. Suppose $v_1+U= v_2+U \\in V/U$. Then $v_1-v_2\\in U$. Now use closure under scalar multiplication and distributivity: $\\lambda v_1 - \\lambda v_2 = \\lambda(v_1-v_2)\\in U$ so $\\lambda v_1 + U = \\lambda v_2 +U\\in V/U$.\n  Now check the vector space axioms of $V/U$, which follow from the axioms for $V$:\n  \\begin{itemize}\n  \\item $\\lambda(\\mu(v+U)) = \\lambda(\\mu v+U) = \\lambda(\\mu v)+U = (\\lambda\\mu) v+U = \\lambda\\mu(v+U)$,\n  \\item other axioms are left as an exercise.\n  \\end{itemize}\n\\end{proof}
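\nA small concrete example (added for illustration):\n\\begin{eg}\n  Let \\(V=\\R^2\\) and \\(U = \\{(x,0): x\\in \\R\\}\\). Then \\((x_1,y_1)+U = (x_2,y_2)+U\\) if and only if \\(y_1 = y_2\\), so each coset is determined by its second coordinate, and \\((x,y)+U \\mapsto y\\) defines an isomorphism \\(V/U \\cong \\R\\).\n\\end{eg}\n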
\\subsection{Span, Linear Independence \\& Basis}\n\n\\begin{definition}[Span]\\index{span}\n  Let $V$ be a $\\F$-vector space, $S \\subseteq V$ be a subset. The \\emph{span} of $S$\n  \\[\n    \\spans S = \\Big\\{\\sum_{s\\in S} \\lambda_s s : \\lambda_s \\in \\F \\Big\\}\n  \\]\n  is the set of all finite linear combinations of elements of $S$ (i.e.\\ all but finitely many of the $\\lambda_s$ are zero).\n\\end{definition}\n\n\\begin{remark}\n  $\\spans S$ is the smallest subspace of $V$ containing all elements of $S$.\n\\end{remark}\n\n\\begin{convention}\n  $\\spans \\emptyset = \\{\\V 0\\}$\n\\end{convention}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n \\item $V=\\R^3$, $S = \\{(1,0,0),(0,1,2),(3,-2,-4)\\}$, $\\spans S = \\{(a,b,2b): a,b\\in \\R \\}$\n \\item For any set $X$, $\\R^X$ is a vector space. For $x \\in X$, define $\\delta_x: X \\to \\R, \\delta_x(x) = 1, \\delta_x(y) = 0 \\: \\forall y \\neq x$, then\n   \\[\n     \\spans{\\delta_x: x\\in X} = \\{f\\in \\R^X: f \\text{ has finite support} \\}\n   \\]\n  \\end{enumerate} \n\\end{eg}\n\n\\begin{definition}[Span]\n  $S$ spans $V$ if $\\spans S = V$.\n\\end{definition}\n\n\\begin{definition}[Finite-dimensional]\\index{finite-dimensional}\n  $V$ is \\emph{finite-dimensional} over $\\F$ if it is spanned by a finite set.\n\\end{definition}\n\n\\begin{definition}[Linear independence]\\index{linear independence}\n  The vectors $v_1,\\ldots, v_n$ are \\emph{linearly independent} over $\\F$ if\n  \\[\n    \\sum_{i=1}^n \\lambda_i v_i = 0 \\Rightarrow \\lambda_i = 0 \\: \\forall i\n  \\]\n  A subset $S \\subseteq V$ is \\emph{linearly independent} if every finite subset of $S$ is linearly independent.\n\n  A subset is \\emph{linearly dependent} if it is not linearly independent.\n\\end{definition}\n\n\\begin{eg}\n  In the first example above, the three vectors are not linearly independent.\n\\end{eg}\n\n\\begin{ex}\n  The set $\\{\\delta_x: x \\in X\\}$ is linearly independent.\n\\end{ex}\n\n\\begin{definition}[Basis]\\index{basis}\n  $S$ is a \\emph{basis} of $V$ if it is linearly independent and spans $V$.\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item $\\F^n$ has standard basis $\\{e_1,e_2,\\ldots,e_n\\}$ where $e_i$ is the column vector with $1$ in the $i$th entry and $0$ elsewhere.\n  \\item $V=\\C$ over $\\C$ has natural basis $\\{1\\}$, but over $\\R$ it has natural basis $\\{1, i\\}$.\n  \\item $V=P(\\R)$, the space of real polynomials, has natural basis\n    \\[\n      \\{1, x, x^2, \\dots \\}.\n    \\]\n    It is an exercise to check this carefully.\n    \\end{enumerate}\n\\end{eg}\n\n\\begin{lemma}\n  Let $V$ be a $\\F$-vector space. The vectors $v_1,\\ldots,v_n$ form a basis of $V$ if and only if each vector $v\\in V$ has a unique expression\n  \\[\n    v = \\sum_{i=1}^n \\lambda_i v_i, \\lambda_i \\in \\F.\n  \\]\n  \n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item $\\Rightarrow:$ Fix $v\\in V$. The $v_i$ span $V$, so there exist $\\lambda_i \\in \\F$ such that $v = \\sum \\lambda_i v_i$. Suppose also $v = \\sum \\mu_i v_i$ for some $\\mu_i \\in \\F$. Then the difference\n  \\[\n    \\sum (\\mu_i - \\lambda_i) v_i = \\V 0.\n  \\]\n  Since the $v_i$ are linearly independent, $\\mu_i-\\lambda_i = 0$ for all $i$.\n\\item $\\Leftarrow:$ The $v_i$ span $V$ by assumption. Suppose $\\sum_{i=1}^n \\lambda_i v_i = \\V 0$. Note that $\\V 0 = \\sum_{i=1}^n 0 \\cdot v_i$. 
By applying uniqueness to $\\V 0$, $\\lambda_i = 0$ for all $i$.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{lemma}\n  If $v_1,\\ldots, v_n$ spans $V$ over $\\F$, then some subset of $v_1,\\ldots,v_n$ is a basis of $V$ over $\\F$.\n\\end{lemma}\n\n\\begin{proof}\n  If $v_1,\\ldots, v_n$ is linearly independent then done. Otherwise for some $\\ell$, there exist $\\alpha_1, \\ldots, \\alpha_{\\ell-1} \\in \\F$ such that\n  \\[\n    v_\\ell = \\sum_{i=1}^{\\ell-1} \\alpha_i v_i.\n  \\]\n  (If $\\sum \\lambda_i v_i = 0$ with not all $\\lambda_i$ zero, take $\\ell$ maximal with $\\lambda_\\ell \\neq 0$; then $\\alpha_i = -\\frac{\\lambda_i}{\\lambda_\\ell}$.)\n\n  Now $v_1,\\ldots,v_{\\ell-1},v_{\\ell+1},\\ldots,v_n$ still span $V$. Continue iteratively until we have linear independence.\n\\end{proof}\n\n\\begin{theorem}[Steinitz Exchange Lemma]\n  Let $V$ be a finite-dimensional vector space over $\\F$. Take $v_1,\\ldots,v_m$ to be linearly independent, $w_1,\\ldots,w_n$ to span $V$. Then\n  \\begin{itemize}\n  \\item $m \\leq n$, and\n    \\item reordering the $w_i$ if needed, $v_1,\\ldots, v_m, w_{m+1},\\ldots,w_n$ spans $V$.\n  \\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\n  Proceed by induction. Suppose that we have replaced $\\ell \\geq 0$ of the $w_i$. Reordering the $w_i$ if needed, $v_1,\\ldots,v_\\ell,w_{\\ell+1},\\ldots,w_n$ spans $V$.\n  \\begin{itemize}\n  \\item If $\\ell = m$, done.\n  \\item If $\\ell < m$, then $v_{\\ell+1} = \\sum_{i=1}^\\ell \\alpha_i v_i + \\sum_{i> \\ell} \\beta_i w_i$. As the $v_i$ are linearly independent, $\\beta_i \\neq 0$ for some $i$. After reordering, $\\beta_{\\ell+1} \\neq 0$, so\n    \\[\n      w_{\\ell+1} = \\frac{1}{\\beta_{\\ell+1}} (v_{\\ell+1}-\\sum_{i\\leq \\ell} \\alpha_i v_i - \\sum_{i>\\ell+1} \\beta_i w_i).\n    \\]\n    Thus $v_1,\\ldots, v_\\ell, v_{\\ell+1},w_{\\ell+2},\\ldots, w_n$ also spans $V$. After $m$ steps, we will have replaced $m$ of the $w_i$ by $v_i$. Thus $m \\leq n$.\n  \\end{itemize}\n\\end{proof}\n\n\\subsection{Dimension}\n\n\\begin{theorem}\n  If $V$ is a finite-dimensional vector space over $\\F$, then any two bases for $V$ have the same cardinality, which is called the \\emph{dimension} of $V$, denoted $\\dim_\\F V$.\n\\end{theorem}\n\n\\begin{proof}\n  If $v_1,\\ldots, v_n$ and $w_1,\\ldots,w_m$ are both bases, then $\\{v_i\\}$ is linearly independent and $\\{w_i\\}$ spans $V$ so $n \\leq m$. Similarly $m \\leq n$.\n\\end{proof}\n\n\\begin{eg}\n  $\\dim_\\C \\C = 1$, but $\\dim_\\R \\C = 2$.\n\\end{eg}\n\n\\begin{lemma}\n  Let \\(V\\) be a finite-dimensional \\(\\F\\)-vector space. If \\(w_1,\\ldots,w_\\ell\\) is a linearly independent set of vectors, we can extend it to a basis \\(w_1,\\ldots,w_\\ell,w_{\\ell+1},\\ldots,w_n\\).\n\\end{lemma}\n\n\\begin{proof}\n  Apply the Steinitz exchange lemma to \\(w_1,\\ldots, w_\\ell\\) and any basis \\(v_1,\\ldots, v_n\\).\n\n  Or more directly: if \\(V=\\langle w_1,\\ldots, w_\\ell \\rangle\\), done. Otherwise take \\(w_{\\ell+1} \\in V\\setminus\\langle w_1,\\ldots, w_\\ell\\rangle\\). Now \\(w_1,\\ldots, w_\\ell,w_{\\ell+1}\\) is linearly independent. Iterate.\n\\end{proof}\n\n\\begin{corollary}\n  Let \\(V\\) be a finite-dimensional vector space of dimension \\(n\\). 
Then\n  \\begin{enumerate}\n  \\item Any linearly independent set of vectors has at most \\(n\\) elements, with equality if and only if the set is a basis.\n  \\item Any spanning set of vectors has at least \\(n\\) elements, with equality if and only if the set is a basis.\n  \\end{enumerate}\n\\end{corollary}\n\n\\begin{slogan}\n  Choose the best basis for the job.\n\\end{slogan}\n\n\\begin{theorem}\n  Let \\(U, W\\) be subspaces of \\(V\\). If \\(U\\) and \\(W\\) are finite-dimensional, so is \\(U+W\\) and\n  \\[\n\\dim(U+W) = \\dim U + \\dim W - \\dim(U\\cap W).\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  Pick a basis \\(v_1,\\ldots, v_\\ell\\) of \\(U\\cap W\\). Extend it to a basis \\(v_1,\\ldots,v_\\ell,u_1,\\ldots,u_m\\) of \\(U\\) and a basis \\(v_1,\\ldots,v_\\ell,w_1,\\ldots,w_n\\) of \\(W\\). Claim \\(v_1,\\ldots, v_\\ell,u_1,\\ldots,u_m,w_1,\\ldots,w_n\\) is a basis for \\(U+W\\):\n  \\begin{itemize}\n  \\item spanning: if \\(u\\in U\\), then \\(u= \\sum \\alpha_iv_i + \\sum \\beta_iu_i\\) and if \\(w\\in W\\), \\(w = \\sum_{}^{}\\gamma_iv_i + \\sum_{}^{}\\delta_iw_i\\), so \\(u+w = \\sum_{}^{}(\\alpha_i + \\gamma_i)v_i + \\sum_{}^{}\\beta_iu_i + \\sum_{}^{}\\delta_iw_i\\).\n  \\item linear independence: assume \\(\\sum \\alpha_iv_i + \\sum \\beta_iu_i+ \\sum \\gamma_iw_i=0\\). Rearranging, \\(\\sum\\alpha_iv_i + \\sum\\beta_iu_i = -\\sum\\gamma_iw_i \\in U\\cap W\\), so it equals \\(\\sum\\delta_iv_i\\) for some \\(\\delta_i\\in \\F\\) because the \\(v_i\\) form a basis for \\(U\\cap W\\). As the \\(v_i\\) and \\(w_i\\) together are linearly independent, \\(\\gamma_i=\\delta_i=0\\) for all \\(i\\). Thus \\(\\sum\\alpha_iv_i + \\sum\\beta_iu_i=0\\), so \\(\\alpha_i=\\beta_i=0\\) since the \\(v_i\\) and \\(u_i\\) form a basis for \\(U\\).\n  \\end{itemize}\n\\end{proof}\n\n\\begin{theorem}\n  Let \\(V\\) be a finite-dimensional vector space over \\(\\F\\) and \\(U \\leq V\\), then \\(U\\) and \\(V/U\\) are also finite-dimensional and\n  \\[\n\\dim V = \\dim U + \\dim V/U.\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  Left as an exercise. Outline: first show \\(U\\) is finite-dimensional, then let \\(u_1,\\ldots,u_\\ell\\) be a basis for \\(U\\). Extend it to a basis for \\(V\\), say \\(u_1,\\ldots,u_\\ell,w_{\\ell+1},\\ldots,w_n\\) of \\(V\\). Check \\(w_{\\ell+1}+U,\\ldots,w_n+U\\) form a basis for \\(V/U\\).\n\\end{proof}\n\n\\begin{corollary}\n  If \\(U\\) is a proper subspace of \\(V\\), which is finite-dimensional, then \\(\\dim U < \\dim V\\).\n\\end{corollary}\n\n\\begin{proof}\n  \\(V/U \\neq 0\\) so \\(\\dim V/U > 0\\).\n\\end{proof}\n\n\\subsection{Direct Sum}\n\n\\begin{definition}[Direct sum]\\index{vector space!sum!direct}\n  Let \\(V\\) be a vector space over \\(\\F\\), \\(U, W\\leq V\\). Then\n  \\[\n    V = U \\oplus W\n  \\]\n  if every element of \\(V\\) can be written as \\(v=u+w\\) for some unique \\(u\\in U, w\\in W\\). This is called the \\emph{internal direct sum}. \\(W\\) is a \\emph{direct complement} of \\(U\\) in \\(V\\).\n\\end{definition}\n\n\\begin{lemma}\n  Suppose \\(U,W\\leq V\\), TFAE:\n  \\begin{enumerate}\n  \\item \\(V = U \\oplus W\\),\n  \\item \\(V=U+W\\) and \\(U\\cap W = 0\\),\n  \\item Given \\(\\basis B_1\\) any basis of \\(U\\) and \\(\\basis B_2\\) any basis of \\(W\\), \\(\\basis B = \\basis B_1\\cup \\basis B_2\\) is a basis of \\(V\\).\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item \\(2 \\Rightarrow 1\\): any \\(v\\in V\\) is \\(u+w\\) for some \\(u\\in U, w\\in W\\). Suppose \\(u_1+w_1=u_2+w_2\\), then \\(u_1-u_2 = w_2-w_1 \\in U\\cap W = 0\\). Thus \\(u_1=u_2,w_1=w_2\\).\n  \\item \\(1 \\Rightarrow 3\\): \\(\\basis B\\) spans, as any \\(v\\in V\\) is \\(u+w\\). Write \\(u\\) in terms of \\(\\basis B_1\\) and \\(w\\) in terms of \\(\\basis B_2\\). Then \\(u+w\\) is a linear combination of elements of \\(\\basis B\\). To show \\(\\basis B\\) is linearly independent, suppose \\(\\sum_{v\\in \\basis B} \\lambda_v v = \\V 0 = \\V 0_U + \\V 0_W\\). Write the LHS as \\(\\sum_{v\\in \\basis B_1} \\lambda_vv + \\sum_{w\\in \\basis B_2}\\lambda_ww\\). By uniqueness of the expression, \\(\\sum_{v\\in \\basis B_1}\\lambda_vv=\\V 0_U\\) and \\(\\sum_{w\\in \\basis B_2}\\lambda_ww=\\V 0_W\\). As \\(\\basis B_1, \\basis B_2\\) are bases, all of the \\(\\lambda_v, \\lambda_w\\) are zero.\n  \\item \\(3 \\Rightarrow 2\\): if \\(v\\in V\\), \\(v=\\sum_{x\\in \\basis B}\\lambda_xx = \\sum_{u\\in \\basis B_1}\\lambda_uu + \\sum_{w\\in \\basis B_2}\\lambda_ww\\) so \\(v\\in U+W\\). Conversely, if \\(v\\in U\\cap W\\), \\(v = \\sum_{u\\in \\basis B_1}\\lambda_uu=\\sum_{w\\in \\basis B_2}\\lambda_ww\\), so all the \\(\\lambda_u, \\lambda_w\\) are zero since \\(\\basis B_1\\cup \\basis B_2\\) is linearly independent, giving \\(v = \\V 0\\).\n  \\end{itemize}\n\\end{proof}\n\n\\begin{lemma}\n  Let \\(V\\) be a finite-dimensional vector space over \\(\\F\\) and \\(U\\leq V\\). Then there exists a direct complement to \\(U\\) in \\(V\\).\n\\end{lemma}\n\n\\begin{proof}\n  Let \\(u_1,\\ldots, u_\\ell\\) be a basis for \\(U\\). Extend this to a basis \\(u_1,\\ldots, u_\\ell,w_{\\ell+1},\\ldots,w_n\\) for \\(V\\). Then \\(\\spans{w_{\\ell+1},\\ldots,w_n}\\) is a direct complement of \\(U\\).\n\\end{proof}\n\n\\begin{caution}\n  Direct complements are \\emph{not} unique.\n\\end{caution}
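\n\\begin{eg}\n  (Added for illustration.) In \\(V = \\R^2\\), let \\(U\\) be the \\(x\\)-axis. For any \\(c \\neq 0\\), the line \\(W_c = \\{(t, ct): t\\in \\R\\}\\), and also the \\(y\\)-axis, is a direct complement of \\(U\\): in each case \\(U\\cap W = 0\\) and \\(U + W = V\\). So \\(U\\) has infinitely many direct complements.\n\\end{eg}\n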
\\begin{definition}[Direct sum]\\index{vector space!direct sum}\n  Suppose \\(V_1,\\ldots, V_\\ell \\leq V\\), then the sum\n  \\[\n    \\sum_i V_i = V_1+\\cdots+V_\\ell = \\{v_1+\\cdots+v_\\ell: v_i\\in V_i\\}\n  \\]\n  is \\emph{direct} if\n  \\[\n    v_1+\\cdots+v_\\ell = v_1'+\\cdots+ v_\\ell' \\Rightarrow v_i = v_i' \\text{ for all } i.\n  \\]\n  In which case it is denoted\n  \\[\n    V = \\bigoplus_{i=1}^\\ell V_i.\n  \\]\n\\end{definition}\n\n\\begin{ex}\n  \\(V_1,\\ldots, V_\\ell \\leq V\\), TFAE:\n  \\begin{enumerate}\n  \\item The sum \\(\\sum_i V_i\\) is direct,\n  \\item \\(V_i \\cap \\sum_{j\\neq i}V_j = 0\\) for all \\(i\\),\n  \\item For any basis \\(B_i\\) of \\(V_i\\), the union \\(B=\\bigcup_{i=1}^\\ell B_i\\) is a basis for \\(\\sum_i V_i\\).\n  \\end{enumerate}\n\\end{ex}\n\n\\begin{definition}[Direct sum]\\index{vector space!direct sum}\n  Let \\(U, W\\) be vector spaces over \\(\\F\\). 
The \\emph{external direct sum} is\n  \\[\n    U\\oplus W = \\{(u,w): u\\in U, w\\in W\\}\n  \\]\n  with pointwise addition and scalar multiplication.\n\\end{definition}\n\n\\section{Linear Map}\n\n\\subsection{Definitions}\n\n\\begin{definition}[Linear map]\\index{linear map}\n  Let \\(V, W\\) be two \\(\\F\\)-vector spaces. A map \\(\\alpha: V\\to W\\) is \\emph{linear} if\n  \\begin{itemize}\n  \\item \\(\\alpha(v_1 + v_2) = \\alpha(v_1) + \\alpha(v_2)\\),\n  \\item \\(\\alpha(\\lambda v) = \\lambda \\alpha(v)\\).\n  \\end{itemize}\n  This is equivalent to\n  \\[\n\\alpha(\\lambda_1v_1+ \\lambda_2v_2) = \\lambda_1\\alpha(v_1) + \\lambda_2\\alpha(v_2).\n  \\]\n\\end{definition}\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item Given an \\(n\\times m\\) matrix \\(A\\) with coefficients in \\(\\F\\), the map \\(\\alpha: \\F^m\\to \\F^n, v\\mapsto Av\\).\n  \\item Differentiation \\(D: P(\\R) \\to P(\\R), f\\mapsto \\frac{df}{dx}\\).\n  \\item Integration \\(I: C[0,1] \\to C[0,1], f \\mapsto I(f)\\) where \\(I(f)(x) = \\int_0^x f(t)dt\\).\n  \\item Fix \\(x\\in [0,1]\\), the map \\(C[0,1]\\to \\R, f\\mapsto f(x)\\).\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{note}[Category of \\(\\mathbf{Vect}_\\F\\)]\n  Suppose \\(U, V, W\\) are \\(\\F\\)-vector spaces, then\n  \\begin{enumerate}\n  \\item \\(\\id: V\\to V\\) is linear.\n  \\item Given \\(U \\stackrel{\\alpha}{\\to} V \\stackrel{\\beta}{\\to} W\\), if \\(\\alpha, \\beta\\) are linear then so is \\(\\beta \\compose \\alpha\\).\n  \\end{enumerate}\n\\end{note}\n\n\\begin{lemma}[Free functor \\(\\mathbf{Set} \\to \\mathbf{Vect}_\\F\\)]\n  Suppose \\(V, W\\) are \\(\\F\\)-vector spaces and \\(\\basis B\\) is a basis for \\(V\\). If \\(\\alpha_0: \\basis B\\to W\\) is \\emph{any} map, then there is a \\emph{unique} linear map \\(\\alpha: V\\to W\\) extending \\(\\alpha_0\\).\n\\end{lemma}\n\n\\begin{proof}\n  Let \\(v\\in V\\). Write \\(v = \\sum \\lambda_iv_i\\) (with \\(v_i \\in \\basis B\\)) in a unique way. By linearity, \\(\\alpha(v) = \\alpha(\\sum \\lambda_iv_i) = \\sum \\lambda_i \\alpha(v_i) = \\sum \\lambda_i \\alpha_0(v_i)\\), so \\(\\alpha\\) is determined by \\(\\alpha_0\\); conversely this formula defines a linear extension. Uniqueness follows.\n\\end{proof}\n\n\\begin{note}\\leavevmode\n  \\begin{itemize}\n  \\item This is true for infinite-dimensional vector spaces as well.\n  \\item Very often, to define a linear map, we define it on a basis and extend it linearly to the whole vector space.\n  \\item Two linear maps \\(\\alpha_1,\\alpha_2: V\\to W\\) are equal if and only if they agree on a basis.\n  \\end{itemize}\n\\end{note}
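\nFor instance (a small example added for illustration): take \\(V = W = \\R^2\\) with standard basis \\(\\basis B = \\{e_1, e_2\\}\\), and define \\(\\alpha_0(e_1) = e_1 + e_2\\), \\(\\alpha_0(e_2) = 2e_1\\). The unique linear extension is\n\\[\n  \\alpha(x e_1 + y e_2) = x(e_1+e_2) + y\\cdot 2e_1 = (x+2y)e_1 + x e_2.\n\\]\n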
\\subsection{Isomorphism of Vector Spaces}\n\n\\begin{definition}[Isomorphism]\\index{isomorphism}\n  Given \\(V, W\\) two \\(\\F\\)-vector spaces, the map \\(\\alpha:V\\to W\\) is an \\emph{isomorphism} if it is linear and bijective. If such a map exists we write \\(V \\cong W\\).\n\\end{definition}\n\n\\begin{lemma}\n  \\(\\cong\\) is an equivalence relation on the class of all \\(\\F\\)-vector spaces.\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item reflexive: the identity map is an isomorphism.\n  \\item symmetric: the inverse of a linear bijection is again linear (exercise), hence an isomorphism.\n  \\item transitive: the composition of two isomorphisms is an isomorphism.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{theorem}\n  If \\(V\\) is a finite-dimensional \\(\\F\\)-vector space, then \\(V \\cong \\F^n\\) for some \\(n\\).\n\\end{theorem}\n\n\\begin{proof}\n  Choose a basis for \\(V\\), say \\(v_1,\\ldots, v_n\\). Define a map\n  \\begin{align*}\n    V &\\to \\F^n \\\\\n    \\sum_{i}^{ }\\lambda_iv_i &\\mapsto (\\lambda_1,\\ldots,\\lambda_n)\n  \\end{align*}\nwhich is an isomorphism.\n\\end{proof}\n\n\\begin{remark}\n  Choosing an isomorphism \\(V \\cong \\F^n\\) is equivalent to choosing a basis for \\(V\\), i.e.\\ there is a bijection\n  \\[\n    \\{\\alpha\\in\\Hom(V,\\F^n), \\alpha\\text{ bijective}\\} \\leftrightarrow \\{\\text{bases of } V\\}.\n  \\]\n\\end{remark}\n\n\\begin{theorem}\n  Given two finite-dimensional \\(\\F\\)-vector spaces \\(V, W\\), they are isomorphic if and only if they have the same dimension.\n\\end{theorem}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item \\(\\Leftarrow\\): \\(V \\cong \\F^{\\dim V} = \\F^{\\dim W} \\cong W\\).\n  \\item \\(\\Rightarrow\\): let \\(\\alpha:V\\to W\\) be an isomorphism and \\(\\basis B\\) be a basis for \\(V\\). Claim \\(\\alpha(\\basis B)\\) is a basis for \\(W\\): \\(\\alpha(\\basis B)\\) spans \\(W\\) due to surjectivity and \\(\\alpha(\\basis B)\\) is linearly independent due to injectivity.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{definition}[Kernel \\& Image]\\index{linear map!kernel}\\index{linear map!image}\n  Given \\(\\alpha: V\\to W\\),\n  \\begin{itemize}\n  \\item \\(N(\\alpha) = \\ker \\alpha = \\{v\\in V: \\alpha(v) = 0\\} \\leq V\\),\n  \\item \\(\\im \\alpha = \\{w\\in W: \\exists v\\in V, \\alpha(v) = w \\} \\leq W\\).\n  \\end{itemize}\n\\end{definition}\n\n\\begin{proposition}\\leavevmode\n  \\begin{itemize}\n  \\item \\(\\alpha\\) is injective if and only if \\(N(\\alpha) = 0\\),\n  \\item \\(\\alpha\\) is surjective if and only if \\(\\im \\alpha = W\\).\n  \\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\n  Easy.\n\\end{proof}\n\n\\begin{eg}\n  Let \\(\\alpha: C^\\infty(\\R) \\to C^\\infty(\\R), \\alpha(f)(t) = f''(t)+2f'(t)+5f(t)\\). \\(\\ker \\alpha = \\{f:f''+2f'+5f=0\\}\\) and \\(g\\in \\im \\alpha\\) if and only if there exists an \\(f\\) such that \\(f''+2f'+5f=g\\).\n\\end{eg}\n\n\\begin{theorem}[First Isomorphism Theorem]\\index{isomorphism}\n  Let \\(\\alpha: V\\to W\\) be a linear map. It induces an isomorphism\n  \\begin{align*}\n    \\bar \\alpha: V/\\ker \\alpha &\\to \\im \\alpha \\\\\n    v + \\ker \\alpha &\\mapsto \\alpha(v)\n  \\end{align*}\n\\end{theorem}\n\n\\begin{proof}\n  Check the following:\n  \\begin{itemize}\n  \\item \\(\\bar \\alpha\\) is well-defined,\n  \\item \\(\\bar \\alpha\\) is linear: immediate from linearity of \\(\\alpha\\),\n  \\item \\(\\bar \\alpha\\) is surjective,\n  \\item \\(\\bar \\alpha\\) is injective: if \\(\\bar\\alpha(v+\\ker\\alpha) = 0\\) then \\(\\alpha(v) = 0\\), i.e.\\ \\(v + \\ker\\alpha = \\ker\\alpha\\) is the zero coset.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{definition}[Rank \\& Nullity]\\index{linear map!rank}\\index{linear map!nullity}\\leavevmode\n  \\begin{itemize}\n  \\item \\(r(\\alpha) = rk(\\alpha) = \\dim( \\im \\alpha)\\) is the \\emph{rank} of \\(\\alpha\\),\n  \\item \\(n(\\alpha) = \\dim N(\\alpha)\\) is the \\emph{nullity} of \\(\\alpha\\).\n  \\end{itemize}\n\\end{definition}\n\n\\begin{theorem}[Rank-nullity]\n  Let \\(U, V\\) be \\(\\F\\)-vector spaces, \\(\\dim U < \\infty\\). Let \\(\\alpha:U\\to V\\) be a linear map. Then\n  \\[\n\\dim U = r(\\alpha) + n(\\alpha).\n  \\]\n\\end{theorem}\n\n\\begin{proof}\n  \\(U/\\ker \\alpha \\cong \\im \\alpha\\) so \\(\\dim U - \\dim (\\ker \\alpha) = \\dim (\\im \\alpha)\\). Rearrange.\n\\end{proof}
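\nFor instance (added for illustration): let \\(\\alpha: \\R^3 \\to \\R^2\\), \\((x,y,z) \\mapsto (x,y)\\). Then \\(N(\\alpha) = \\{(0,0,z): z\\in\\R\\}\\), so \\(n(\\alpha) = 1\\), and \\(\\im \\alpha = \\R^2\\), so \\(r(\\alpha) = 2\\); indeed \\(3 = 2 + 1\\).\n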
\\begin{lemma}\n  Let \\(V, W\\) be \\(\\F\\)-vector spaces of equal, finite dimension. Let \\(\\alpha:V\\to W\\) be linear, then TFAE:\n  \\begin{enumerate}\n  \\item \\(\\alpha\\) is injective,\n  \\item \\(\\alpha\\) is surjective,\n  \\item \\(\\alpha\\) is an isomorphism.\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n  Rank-nullity theorem.\n\\end{proof}\n\n\\subsection{Linear Maps as Vector Space}\n\nSuppose \\(V\\) and \\(W\\) are \\(\\F\\)-vector spaces. Let \\(L(V,W) = \\{\\alpha:V\\to W, \\alpha \\text{ linear}\\}\\).\n\n\\begin{proposition}\n  \\(L(V,W)\\) is an \\(\\F\\)-vector space, under the operations\n  \\begin{align*}\n    (\\alpha_1+\\alpha_2)(v) &= \\alpha_1(v) + \\alpha_2(v) \\\\\n    (\\lambda\\alpha)(v) &= \\lambda(\\alpha(v))\n  \\end{align*}\n\\end{proposition}\n\n\\begin{proof}\n  \\(\\alpha_1+\\alpha_2, \\lambda\\alpha\\) as above are well-defined linear maps. The vector space axioms can be easily checked.\n\\end{proof}\n\n\\begin{proposition}\n  \\label{prop:dimension of linear map space}\n  If both \\(V\\) and \\(W\\) are finite-dimensional over \\(\\F\\) then so is \\(L(V,W)\\), and \\(\\dim L(V,W) = \\dim V \\cdot \\dim W\\).\n\\end{proposition}\n\n\\begin{proof}\n  See Corollary~\\ref{cor:dim of hom}.\n\\end{proof}\n\n\\subsubsection{Matrices, an Interlude}\n\n\\begin{definition}[Matrix]\\index{matrix}\n  An \\emph{\\(m\\times n\\) matrix} over \\(\\F\\) is an array with \\(m\\) rows and \\(n\\) columns with entries in \\(\\F\\). We write\n  \\[\nA = (a_{ij}), a_{ij}\\in\\F, 1\\leq i \\leq m, 1\\leq j \\leq n.\n  \\]\n\\end{definition}\n\n\\begin{definition}\n  \\(\\M_{m,n}(\\F)\\) is the set of all such \\(m\\times n\\) matrices.\n\\end{definition}\n\n\\begin{proposition}\n  \\(\\M_{m,n}(\\F)\\) is an \\(\\F\\)-vector space and \\(\\dim \\M_{m,n}(\\F) = m\\cdot n\\).\n\\end{proposition}\n\n\\begin{proof}\n  See this \\hyperref[eg:matrix as V]{example} for the proof of the vector space axioms. For the dimension claim, a standard basis for \\(\\M_{m,n}(\\F)\\) is\n  \\[\n    E_{ij}=\n    \\begin{pmatrix}\n      0 & \\dots & 0 \\\\\n      \\vdots & \\ddots & \\vdots \\\\\n      0 & 1 & 0 \\\\\n      \\vdots & \\ddots & \\vdots \\\\\n      0 & \\dots & 0 \n    \\end{pmatrix}\n  \\]\n  with \\(1\\) in the \\((i,j)\\)th entry, so that \\(A = \\sum_{i,j}^{} a_{ij}E_{ij}\\), from which span and linear independence follow. The basis has cardinality \\(m\\cdot n\\).\n\\end{proof}\n\n\\subsubsection{Representation of Linear Maps by Matrices}\n\nLet \\(V\\) and \\(W\\) be finite-dimensional \\(\\F\\)-vector spaces, \\(\\alpha: V\\to W\\) linear. Let \\(\\basis B = \\{v_1,\\ldots,v_n\\}\\) be a basis for \\(V\\), \\(\\basis C = \\{w_1,\\ldots,w_m\\}\\) be a basis for \\(W\\). If \\(v=\\sum_{i}\\lambda_iv_i \\in V\\), write\n\\[\n[v]_{\\basis B} =\n\\begin{pmatrix}\n  \\lambda_1 \\\\\n  \\vdots \\\\\n  \\lambda_n\n\\end{pmatrix}\n\\in \\F^n\n\\]\nwhich is called the \\emph{coordinate vector of \\(v\\) with respect to \\(\\basis B\\)}. 
Similarly \\([w]_{\\basis C}\\in \\F^m\\).\n\n\\begin{definition}[Matrix representation]\\index{linear map!matrix representation}\n  \\([\\alpha]_{\\basis B, \\basis C}\\) is the matrix representation of \\(\\alpha\\) with respect to \\(\\basis B\\) and \\(\\basis C\\) with\n  \\begin{align*}\n    [\\alpha]_{\\basis B, \\basis C} &= \\Big( [\\alpha(v_1)]_{\\basis C} \\: \\Big| \\: [\\alpha(v_2)]_{\\basis C} \\: \\Big | \\: \\cdots \\: \\Big| \\: [\\alpha(v_n)]_{\\basis C} \\Big) \\\\\n                                  &= (a_{ij})\n  \\end{align*}\n\\end{definition}\n\nThe matrix says\n\\[\n  \\alpha(v_j) = \\sum_{i}^{ }a_{ij}w_i.\n\\]\n\n\\begin{lemma}\n  For any \\(v\\in V\\),\n  \\[\n[\\alpha(v)]_{\\basis C} = [\\alpha]_{\\basis B, \\basis C}\\cdot [v]_{\\basis B}\n  \\]\n  where \\(\\cdot\\) is matrix multiplication.\n\\end{lemma}\n\n\\begin{proof}\n  Fix \\(v =\\sum_{j=1}^{n}\\lambda_jv_j \\in V\\), so\n  \\[\n[v]_{\\basis B} =\n\\begin{pmatrix}\n  \\lambda_1 \\\\\n  \\vdots \\\\\n  \\lambda_n\n\\end{pmatrix}\n\\]\n\\begin{align*}\n  \\alpha(v) &= \\alpha\\left( \\sum_{j}^{ }\\lambda_jv_j \\right) \\\\\n            &= \\sum_{j}^{ }\\lambda_j\\alpha(v_j) \\\\\n            &= \\sum_{j}^{ }\\lambda_j\\left( \\sum_{i}^{ }\\alpha_{ij}w_i \\right) \\\\\n            &= \\sum_{i}^{ }\\left( \\sum_{j}^{} a_{ij}\\lambda_j \\right) w_i\n\\end{align*}\nso the \\(i\\)th entry of \\(\\alpha(v)\\) is the \\(i\\)th entry of \\([\\alpha]_{\\basis B, \\basis C} \\cdot [v]_{\\basis B}\\).\n\\end{proof}\n\n\\begin{lemma}\n  Suppose \\(U \\stackrel{\\beta}{\\to} V \\stackrel{\\alpha}{\\to} W\\) with \\(\\alpha, \\beta\\) linear, with \\(\\alpha \\compose \\beta: U\\to W\\). Let \\(\\basis A, \\basis B, \\basis C\\) be bases for \\(U,V,W\\) respectively. Then\n  \\[\n    [\\alpha\\compose\\beta]_{\\basis A, \\basis C} = [\\alpha]_{\\basis B,\\basis C}\\cdot[\\beta]_{\\basis A, \\basis B}.\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  \\begin{align*}\n    (\\alpha\\compose\\beta)(u_\\ell) &= \\alpha(\\beta(u_\\ell)), \\: u_\\ell\\in A\\\\\n                                  &= \\alpha\\Big( \\sum_{j}^{ }b_{jl}v_j \\Big), \\: v_j\\in B \\\\\n                                  &= \\sum_{j}^{ }b_{jl}\\alpha(v_j) \\\\\n                                  &= \\sum_{j}^{ }b_{jl}\\sum_{i}^{ }a_{ij}w_i, \\: w_i\\in W \\\\\n                                  &= \\sum_{i}^{ }\\left( \\sum_{j}^{ }a_{ij}b_{jl} \\right)w_i\n  \\end{align*}\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(V\\) and \\(W\\) be \\(\\F\\)-vector spaces with \\(\\dim V = n, \\dim W = m\\), then\n  \\[\nL(V,W) \\cong \\M_{m,n}(\\F).\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Fix bases \\(B=\\{v_1\\ldots,v_n\\}, C=\\{w_1,\\ldots,w_m\\}\\) for \\(V\\) and \\(W\\) respectively. Claim\n  \\begin{align*}\n    \\theta: L(V,W) &\\to \\M_{m,n}(\\F) \\\\\n    \\alpha &\\mapsto [\\alpha]_{\\basis B, \\basis C}\n  \\end{align*}\n  is an isomorphism:\n  \\begin{itemize}\n  \\item linearity: \\([\\lambda_1\\alpha_1 + \\lambda_2\\alpha_2]_{\\basis B, \\basis C} = \\lambda_1[\\alpha_1]_{\\basis B, \\basis C} + \\lambda_2[\\alpha_2]_{\\basis B, \\basis C}\\).\n  \\item surjectivity: given \\(A = (a_{ij})\\), let \\(\\alpha:v_j\\mapsto \\sum_{i=1}^{m}a_{ij}w_i \\) and extend linearly. 
It follows that \\(\\alpha\\in L(V,W)\\) and \\(\\theta(\\alpha) = A\\).\n  \\item injectivity: \\([\\alpha]_{\\basis B, \\basis C} = \\V 0\\) implies that \\(\\alpha\\) is the zero map.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{corollary}\n  \\label{cor:dim of hom}\n  \\[\n\\dim L(V,W) = \\dim V \\cdot \\dim W.\n  \\]\n\\end{corollary}\n\n\\begin{eg}\n  Suppose \\(\\alpha:V\\to W\\), \\(Y\\leq V, Z\\leq W\\) with \\(\\alpha(Y)\\leq Z\\). Let \\(\\basis B'=\\{v_1,\\ldots,v_k\\}\\) be a basis of \\(Y\\) and extend to \\(\\basis B=\\{v_1,\\dots,v_k,v_{k+1}, \\dots, v_n\\}\\) a basis for \\(V\\). Similarly \\(\\basis C' = \\{w_1,\\dots,w_l\\}\\) and \\(\\basis C\\) for \\(Z\\) and \\(W\\).\n  \\begin{itemize}\n  \\item \\([\\alpha]_{\\basis B, \\basis C} =\n\\begin{pmatrix}\n  A & B \\\\\n  0 & C\n\\end{pmatrix}\n\\) for some \\(A, B, C\\) because for \\(1\\leq j \\leq k\\), \\(\\alpha(v_j)\\) is a linear combination of \\(w_i\\) where \\(1\\leq i \\leq l\\).\n  \\item \\([\\alpha|_Y]_{B',C'} = A. \\)\n  \\item \\(\\alpha\\) induces a map\n    \\begin{align*}\n      \\bar\\alpha: V/Y &\\to W/Z \\\\\n      v+ Y &\\mapsto \\alpha(v) + Z\n    \\end{align*}\n    This is well-defined. Linearity follows from that of \\(\\alpha\\). A basis for \\(V/Y\\) is \\(\\basis B''=\\{v_{k+1}+Y,\\ldots,v_n+Y\\}\\) and similarly for \\(W/Z\\). It is an exercise to show \\([\\bar\\alpha]_{\\basis B'', \\basis C''} = C \\).\n  \\end{itemize}\n\\end{eg}\n\n\\subsubsection{Change of Bases}\n\nThroughout this section, let \\(V\\) and \\(W\\) be \\(\\F\\)-vector spaces and suppose they have the following bases:\n\\begin{table}[htbp]\n  \\centering\n  \\begin{tabular}{|c||c|c|}\n    \\hline\n    Vector space & \\(V\\) & \\(W\\) \\\\ \\hline\n    Basis 1 & \\(\\basis B = \\{v_1,\\dots,v_n\\}\\) & \\(\\basis C = \\{w_1,\\dots,w_m\\}\\) \\\\ \\hline\n    Basis 2 & \\(\\basis B' = \\{v_1',\\dots,v_n'\\}\\) & \\(\\basis C' = \\{w_1',\\dots,w_m'\\}\\) \\\\ \\hline\n  \\end{tabular}\n\\end{table}\n\n\\begin{definition}[Change-of-basis matrix]\\index{matrix!change-of-basis}\n  The \\emph{change-of-basis matrix} from \\(\\basis B'\\) to \\(\\basis B\\) is \\(P = (p_{ij})\\) given by\n  \\begin{align*}\n    v_j' &= \\sum_{i}^{ }p_{ij}v_i \\\\\n    P &= \\Big( [v_1']_{\\basis B} \\: \\Big| \\: [v_2']_{\\basis B} \\: \\Big| \\: \\dots \\Big| \\: [v_n']_{\\basis B} \\Big) = [\\id]_{\\basis B', \\basis B}\n  \\end{align*}\n\\end{definition}\n\n\\begin{lemma}\n  \\[\n    [v]_{\\basis B} = P[v]_{\\basis B'}.\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  \\[\n    P[v]_{\\basis B'} = [\\id]_{\\basis B', \\basis B}[v]_{\\basis B'} = [v]_{\\basis B}.\n  \\]\n\\end{proof}\n\n\\begin{lemma}\n  \\(P\\) is an invertible \\(n\\times n\\) matrix and \\(P^{-1}\\) is the change-of-basis matrix from \\(\\basis B\\) to \\(\\basis B'\\).\n\\end{lemma}\n\n\\begin{proof}\n  \\begin{align*}\n    [\\id]_{\\basis B, \\basis B'}[\\id]_{\\basis B', \\basis B} &= [\\id]_{\\basis B', \\basis B'} = I_n \\\\\n    [\\id]_{\\basis B', \\basis B}[\\id]_{\\basis B, \\basis B'} &= [\\id]_{\\basis B, \\basis B} = I_n\n  \\end{align*}\n\\end{proof}\n\nLet \\(Q\\) be the change-of-basis matrix from \\(\\basis C'\\) to \\(\\basis C\\). 
Then \\(Q\\) is an invertible \\(m\\times m\\) matrix.\n\n\\begin{proposition}\n  Let \\(\\alpha: V\\to W\\) be a linear map, \\(A = [\\alpha]_{\\basis B,\\basis C}\\), \\(A' = [\\alpha]_{\\basis B',\\basis C'}\\), then\n  \\[\n    A' = Q^{-1}AP.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  \\[\n    \\underbrace{[\\id]_{\\basis C,\\basis C'}}_{Q^{-1}} [\\alpha]_{\\basis B,\\basis C} \\underbrace{[\\id]_{\\basis B',\\basis B}}_P = \\underbrace{[\\id\\compose\\alpha\\compose\\id]_{\\basis B',\\basis C'}}_{A'}\n  \\]\n\\end{proof}\n\n\\begin{definition}[Equivalence of matrices]\\index{matrix!equivalence}\n  \\(A, A' \\in \\M_{m,n}(\\F)\\) are \\emph{equivalent} if\n  \\[\n    A' = Q^{-1}AP\n  \\]\n  for some invertible \\(P\\in \\M_{n,n}(\\F)\\) and \\(Q\\in \\M_{m,m}(\\F)\\).\n\\end{definition}\n\n\\begin{note}\n  This defines an equivalence relation on \\(\\M_{m,n}(\\F)\\).\n\\end{note}\n\n\\begin{proposition}\n  Let \\(V, W\\) be \\(\\F\\)-vector spaces of dimension \\(n\\) and \\(m\\) respectively. Let \\(\\alpha:V\\to W\\) be a linear map. Then there exist bases \\(\\basis B\\) of \\(V\\), \\(\\basis C\\) of \\(W\\), and some \\(r\\leq m,n\\) such that\n  \\[\n    [\\alpha]_{\\basis B,\\basis C} =\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0 \n    \\end{pmatrix}\n  \\]\n  where \\(I_r\\) is the \\(r\\times r\\) identity matrix.\n\\end{proposition}\n\n\\begin{note}\n  \\(r = rk(\\alpha) = r(\\alpha)\\).\n\\end{note}\n\n\\begin{proof}\n  Fix \\(r\\) such that \\(\\dim N(\\alpha) = n-r\\). Fix a basis for \\(N(\\alpha)\\), say \\(v_{r+1},\\dots,v_n\\). Extend this to a basis \\(\\basis B\\) for \\(V\\), say \\(v_1,\\dots,v_r,v_{r+1},\\dots,v_n\\). Now \\(\\alpha(v_1),\\dots,\\alpha(v_r)\\) is a basis for \\(\\im(\\alpha)\\):\n  \\begin{itemize}\n  \\item span: \\(\\alpha(v_1),\\dots, \\alpha(v_n)\\) certainly span \\(\\im(\\alpha)\\). Since \\(v_{r+1}, \\dots ,v_n \\in \\ker \\alpha\\), \\(\\alpha(v_{r+1})=\\dots=\\alpha(v_n) = 0\\), so we can remove them from the spanning set.\n  \\item linear independence: assume \\(\\sum_{i=1}^{r}\\lambda_i \\alpha(v_i) =\\V 0 \\). Then \\(\\alpha \\big(\\sum_{i=1}^r\\lambda_iv_i\\big) =\\V0\\), so \\(\\sum_{i=1}^r\\lambda_iv_i \\in N(\\alpha)\\). This implies that\n    \\[\n      \\sum_{i=1}^{r}\\lambda_iv_i = \\sum_{j=r+1}^{n}\\mu_jv_j\n    \\]\n    for some \\(\\mu_j\\). As \\(v_1,\\dots v_n\\) are linearly independent, \\(\\lambda_i=\\mu_j=0\\) for all \\(i,j\\).\n  \\end{itemize}\n  Extend \\(\\alpha(v_1),\\dots,\\alpha(v_r)\\) to a basis for \\(W\\), say \\(\\basis C\\). By construction,\n  \\[\n    [\\alpha]_{\\basis B,\\basis C} =\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n  \\]\n\\end{proof}\n\n\\begin{remark}\n  In the proof above we didn't need to assume that \\(r = r(\\alpha)\\). 
This gives us another way to prove the Rank-nullity Theorem.\n\\end{remark}\n\n\\begin{corollary}\n  \\label{thm:upper corner matrix}\n  Any \\(m\\times n\\) matrix is equivalent to\n  \\[\n  \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n  \\]\n  for some \\(r\\).\n\\end{corollary}\n\n\\begin{definition}[Row and column rank]\\index{matrix!rank}\n  Let \\(A\\in \\M_{m,n}(\\F)\\).\n  \\begin{itemize}\n  \\item The \\emph{column rank} of \\(A\\), \\(r(A)\\), is the dimension of the subspace of \\(\\F^m\\) spanned by the columns of \\(A\\).\n  \\item The \\emph{row rank} of \\(A\\) is the column rank of \\(A^T\\).\n  \\end{itemize}\n\\end{definition}\n\n\\begin{note}\n  If \\(\\alpha\\) is a linear map represented by \\(A\\) with respect to any choice of bases, then \\(r(\\alpha) = r(A)\\).\n\\end{note}\n\n\\begin{proposition}\n  Two \\(m\\times n\\) matrices \\(A, A'\\) are equivalent if and only if\n  \\[\n    r(A) = r(A').\n  \\]\n\\end{proposition}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item \\(\\Leftarrow\\): Both \\(A\\) and \\(A'\\) are equivalent to \\(\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n    \\) and matrix equivalence is transitive.\n  \\item \\(\\Rightarrow\\): Let \\(\\alpha:\\F^n\\to \\F^m\\) be the linear map represented by \\(A\\) with respect to, say, the standard bases. Since \\(A'=Q^{-1}AP\\) for some invertible \\(P\\) and \\(Q\\), \\(A'\\) represents the same \\(\\alpha\\) with respect to another pair of bases. \\(r(\\alpha)\\) is defined in a basis-invariant way, so \\(r(A) = r(\\alpha) = r(A')\\).\n  \\end{itemize}\n\\end{proof}\n\n\\begin{theorem}\n  \\[\n    r(A) = r(A^T).\n  \\]\n\\end{theorem}\n\n\\begin{proof} \\(\n    Q^{-1}AP =\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}_{m, n}\n  \\) where \\(P\\) and \\(Q\\) are invertible. Take the transpose of the whole equation:\n  \\[\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}_{n,m}\n    =(Q^{-1}AP)^T = P^TA^T(Q^T)^{-1}\n  \\]\n  so \\(A^T\\) is equivalent to\n  \\[\n      \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n  \\]\n  and hence \\(r(A^T) = r = r(A)\\).\n\\end{proof}
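\nA small check (added for illustration): for\n\\[\n  A =\n  \\begin{pmatrix}\n    1 & 2 & 3 \\\\\n    2 & 4 & 6\n  \\end{pmatrix},\n\\]\nevery column is a multiple of \\((1,2)^T\\), so the column rank is \\(1\\); every row is a multiple of \\((1,2,3)\\), so the row rank is also \\(1\\).\n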
Note a special case of change of basis: \\(V = W\\), \\(\\basis C = \\basis B\\) and \\(\\basis C' = \\basis B'\\). If \\(P\\) is the change-of-basis matrix from \\(\\basis B'\\) to \\(\\basis B\\), then for any \\(\\alpha \\in L(V,V)\\),\n\\[\n  [\\alpha]_{\\basis B',\\basis B'} = P^{-1}[\\alpha]_{\\basis B,\\basis B}P.\n\\]\n\n\\begin{definition}[Similar matrices]\\index{matrix!similar}\\index{matrix!conjugacy}\n  Given \\(A, A' \\in \\M_{n,n}(\\F)\\), \\(A\\) and \\(A'\\) are \\emph{similar}, or \\emph{conjugate}, if\n  \\[\n    A' = P^{-1}AP\n  \\]\n  for some invertible \\(P\\).\n\\end{definition}\n\n\\subsubsection{Elementary Matrices and Operations}\n\n\\begin{definition}[Elementary column operation]\\index{elementary operation}\n  An \\emph{elementary column operation} on an \\(m\\times n\\) matrix \\(A\\) is one of the following operations:\n  \\begin{enumerate}\n  \\item swap columns \\(i\\) and \\(j\\) (wlog \\(i\\neq j\\)),\n  \\item scale column \\(i\\) by \\(\\lambda\\)  (\\(\\lambda\\neq0\\)),\n  \\item add \\(\\lambda\\) times column \\(i\\) to column \\(j\\) (\\(i\\neq j,\\lambda\\neq 0\\)).\n  \\end{enumerate}\n\\end{definition}\n\n\\begin{definition}[Elementary row operation]\n  Defined analogously, replacing ``column'' by ``row''.\n\\end{definition}\n\n\\begin{note}\n  All of these operations are invertible.\n\\end{note}\n\n\\begin{definition}[Elementary matrix]\\index{matrix!elementary}\nThe elementary column (row, respectively) operations have corresponding elementary matrices, which are the results of performing these column (row, respectively) operations on \\(I_n\\) (\\(I_m\\), respectively):\n\\begin{enumerate}\n\\item\n  \\[\n    \\begin{pmatrix}\n      1 & 0 & \\cdots & & & 0 \\\\\n      \\vdots & \\ddots & & & & \\vdots \\\\\n      & & 0 & & 1 & 0 \\\\\n      0 & \\cdots & 0 & \\ddots & 0 & 0 \\\\\n      & & 1 & 0 & 0 & & \\\\\n      \\vdots & & &  \\ddots & \\\\\n      0 & & \\cdots & & \\cdots & 0\n    \\end{pmatrix}\n  \\]\n\\item\n  \\[\n    \\begin{pmatrix}\n      1 & 0 & \\cdots & & 0 \\\\\n       & \\ddots & & & \\\\\n      \\vdots & & \\lambda & & \\vdots \\\\\n       & & & \\ddots & \\\\\n      0 & \\cdots & & 0 & 1\n    \\end{pmatrix}\n  \\]\n\\item \\(I_n+\\lambda E_{ij}\\) where \\(E_{ij}\\) is the matrix with \\(1\\) in the \\(ij\\)th entry and \\(0\\) elsewhere.\n\\end{enumerate}\n\\end{definition}\n\nAn elementary column (row, respectively) operation on \\(A\\in \\M_{m,n}(\\F)\\) can be performed by multiplying \\(A\\) by the corresponding elementary matrix on the right (left, respectively).\n\n\\begin{eg}\n  \\[\n    \\begin{pmatrix}\n      1 & 2 \\\\ 3 & 4\n    \\end{pmatrix}\n    \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} = \\begin{pmatrix} 2 & 1 \\\\ 4 & 3 \\end{pmatrix}\n  \\]\n\\end{eg}\n\nGiven the elementary matrices, we can give a constructive proof that any \\(m\\times n\\) matrix is equivalent to \\(\\begin{pmatrix} I_r & 0 \\\\ 0 & 0 \\end{pmatrix}\\) for some \\(r\\):\n\n\\begin{proof}[Constructive proof of Corollary~\\ref{thm:upper corner matrix}]\n  Start with \\(A\\). If all entries of \\(A\\) are zero then done. If not then some \\(a_{ij} = \\lambda \\neq 0\\). Perform the following:\n  \\begin{enumerate}\n  \\item swap rows \\(1\\) and \\(i\\), swap columns \\(1\\) and \\(j\\) so that \\(\\lambda\\) is in position \\((1,1)\\),\n  \\item multiply column \\(1\\) by \\(1/\\lambda\\) to get \\(1\\) in position \\((1,1)\\),\n  \\item add \\((-a_{12})\\) times column \\(1\\) to column \\(2\\). Do so for the other entries in row \\(1\\). Also use row operations to clear out all other entries in column \\(1\\). 
Now the matrix is in the form\n    \\[\n      \\begin{pmatrix}\n        1 & 0 \\\\\n        0 & A'\n      \\end{pmatrix}\n    \\]\n  \\item iterate for \\(A'\\). Stop when the new \\(A' = 0\\).\n  \\end{enumerate}\n  The result of these operations is\n  \\[\n    \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n    =\\underbrace{E'_{\\ell}E'_{\\ell-1}\\dots E'_1}_{Q^{-1}} A \\underbrace{E_1E_2\\dots E_{\\ell-1}E_\\ell}_{P}.\n  \\]\n  As elementary operations are invertible, the elementary matrices are invertible so\n  \\[\n    Q^{-1}AP = \n     \\begin{pmatrix}\n      I_r & 0 \\\\\n      0 & 0\n    \\end{pmatrix}\n  \\]\n\\end{proof}\n\nIf you only use elementary row operations, we can get the \\emph{row echelon form} of a matrix:\n\\[\n  \\begin{pmatrix}\n    a & b & \\dots & c \\\\\n    0 & d & \\dots & e \\\\\n    \\vdots & & \\ddots & \\vdots \\\\\n    0 & 0 & \\dots & f\n  \\end{pmatrix}\n\\]\n\n\\begin{lemma}\n  If \\(A\\) is an \\(n\\times n\\) invertible matrix then we can obtain \\(I_n\\) by using only elementary row/column operations.\n\\end{lemma}\n\n\\begin{proof}\n  We prove the column operation case. Use induction on \\(n\\), the number of rows. Suppose we have got \\(\\begin{pmatrix} I_k & 0 \\\\ \\star & \\ast \\end{pmatrix}\\) for some \\(k\\geq 0\\). There exists \\(j>k\\) such that \\(a_{k+1,j}\\neq 0\\), (i.e.\\ in the \\(\\ast\\) block) as otherwise \\((0,\\dots,1,\\dots, 0)\\) with \\(1\\) in \\((k+1)\\)th position would not be in the span of the column vectors, contradicting the invertiblity. Next we carry out the following operations:\n  \\begin{enumerate}\n  \\item swap column \\(k+1\\) and \\(j\\),\n  \\item divide column \\(k+1\\) by \\(a_{k + 1, k + 1}\\) so have \\(1\\) in \\((k+1,k+1)\\) position,\n  \\item use column operation to clear other entries of \\((k+1)\\)th row.\n  \\end{enumerate}\n  Proceed inductively.\n\\end{proof}\n\nNote that the equality\n\\[\n  AE_1E_2\\dots E_c = I_n\n\\]\ngives\n\\[\n  A^{-1} = E_1E_2\\dots E_c,\n\\]\none way to compute inverses.\n\n\\begin{proposition}\n  Any invertible matrix can be written as a product of elementary ones.\n\\end{proposition}\n\n\\section{Dual Space \\& Dual Map}\n\n\\subsection{Definitions}\n\n\\begin{definition}[Dual space]\\index{vector space!dual}\n  Let \\(V\\) be an \\(\\F\\)-vector space. The \\emph{dual space} of \\(V\\) is defined to be\n  \\[\n    V^* = L(V,\\F) = \\{\\alpha:V\\to \\F, \\alpha \\text{ linear}\\}.\n  \\]\n\\end{definition}\n\n\\(V^*\\) is itself an \\(\\F\\)-vector space. Its elements are sometimes called \\emph{linear functionals}.\n\n\\begin{eg}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(\\R^3 \\to \\R, (a,b,c)\\mapsto a-c\\) is an element of \\(V^*\\).\n  \\item \\(\\tr: \\M_{n,n}(\\F)\\to \\F, A\\mapsto \\sum_i A_{ii}\\) is an element of \\(\\M_{n,n}(\\F)^*\\).\n  \\end{enumerate}\n\\end{eg}\n\n\\begin{lemma}[Dual basis]\n  Let \\(V\\) be a finite-dimensional \\(\\F\\)-vector space with basis \\(\\basis B = \\{e_1,\\dots,e_n\\}\\). 
Then there is a basis for \\(V^*\\), given by\n  \\[\n    \\basis B^* = \\{\\varepsilon_1,\\dots, \\varepsilon_n\\}\n  \\]\n  where\n  \\[\n    \\varepsilon_j \\Big( \\sum_{i=1}^{n} a_i e_i \\Big) = a_j\n  \\]\n  for \\(1\\leq j\\leq m\\).\n\n  \\(\\basis B^*\\) is called the \\emph{dual basis} to \\(\\basis B\\).\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item linear independence: suppose\n    \\[\n      \\sum_{j=1}^{n}\\lambda_j\\varepsilon_j = 0.\n    \\]\n    Apply the relation to basis vectors,\n    \\[\n      0 = \\Big( \\sum_{j=1}^n \\lambda_j\\varepsilon_j \\Big) e_i = \\sum_{j=1}^n \\lambda_j\\varepsilon_j(e_i)\n      \\]\n      The last expression is \n      \\[\n        \\varepsilon_j(e_i) = \n      \\begin{cases}\n        0 & \\text{ if } i \\neq j \\\\\n        1 & \\text{ if } i = j\n      \\end{cases}\n    \\]\n    so \\(\\lambda_i=0\\) for all \\(1 \\leq i \\leq n\\).\n  \\item span: if \\(\\alpha \\in V^*\\), then\n    \\[\n      \\alpha = \\sum_{i=1}^{n}\\alpha(e_i)\\varepsilon_i\n    \\]\n    since linear maps are uniquely determined by the action on basis.\n  \\end{itemize}\n\\end{proof}\n\n\\begin{corollary}\n  If \\(V\\) is a finite-dimensional \\(\\F\\)-vector space then\n  \\[\n    \\dim V = \\dim V^*.\n  \\]\n\\end{corollary}\n\n\\begin{remark}\n  Sometimes it is useful to think about \\((\\F^n)^*\\) as the space of row vectors of length \\(n\\) over \\(\\F\\).\n\\end{remark}\n\n\\subsection{Dual Map}\n\nIt turns out dual spaces have maps between them. Before studying them in detail, we introduce this concept to add richness to the theory of dual map:\n\n\\begin{definition}[Annihilator]\\index{annihilator}\n  If \\(U \\subseteq V\\), the \\emph{annihilator} of \\(U\\) is\n  \\[\n    U^\\ann = \\{\\alpha\\in V^*: \\forall u \\in U,\\,\\alpha(u) = 0 \\}.\n  \\]\n\\end{definition}\n\n\\begin{lemma}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(U^\\ann \\leq V^*\\),\n  \\item If \\(U \\leq V\\) and \\(\\dim V = n < \\infty\\) then\n    \\[\n      \\dim V = \\dim U + \\dim U^\\ann.\n    \\]\n  \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{enumerate}\n  \\item \\(0 \\in U^\\ann\\). If \\(\\alpha\\) and \\(\\alpha'\\) are in \\(U^\\ann\\) then\n    \\[\n      (\\alpha+ \\alpha')(u) = \\alpha(u) + \\alpha'(u) = 0+0 = 0\n    \\]\n    for all \\(u\\in U\\). Similarly \\(\\lambda\\alpha\\in U^\\ann\\) for any \\(\\lambda \\in \\F\\).\n  \\item Let \\(\\basis B = \\{e_1,\\dots, e_k\\}\\) be a basis for \\(U\\) and extend it to a basis for \\(V\\), say \\(e_1,\\dots,e_k,e_{k+1},\\dots,e_n\\). Let \\(\\basis B^*=\\{\\varepsilon_1,\\dots,\\varepsilon_n\\}\\) be its dual basis. Claim \\(\\varepsilon_{k+1},\\dots,\\varepsilon_n \\) is a basis for \\(U^\\ann\\):\n    \\begin{itemize}\n    \\item If \\(i>k,j\\leq k\\) then \\(\\varepsilon_i(e_j) = 0 \\) so \\(\\varepsilon_i\\in U^\\ann\\).\n    \\item Linear independence comes from the fact that \\(B^*\\) is a basis.\n    \\item If \\(\\alpha\\in U^\\ann\\), \\(\\alpha = \\sum_{i=1}^na_i\\varepsilon_i\\) for some \\(\\alpha_i\\in \\F\\). Then for any \\(j\\leq k\\),\n      \\[\n        \\Big( \\sum_{i=1}^{n}a_i\\varepsilon_i \\Big) (e_j) = 0\n      \\]\n      so \\(a_j=0\\). It follows that \\(\\alpha \\in \\langle \\varepsilon_{k+1},\\dots,\\varepsilon_n \\rangle\\).\n    \\end{itemize}\n  \\end{enumerate}\n\\end{proof}\n\n\\begin{lemma}[Dual space as a contravariant functor]\n  Let \\(V\\) and \\(W\\) be \\(\\F\\)-vector spaces. Let \\(\\alpha \\in L(V,W)\\). 
Then the map\n  \\begin{align*}\n    \\alpha^*: W^* &\\to V^* \\\\\n    \\varepsilon &\\mapsto \\varepsilon \\compose \\alpha\n  \\end{align*}\n  is linear. \\(\\alpha^*\\) is called the \\emph{dual} of \\(\\alpha\\).\n\\end{lemma}\n\n\\begin{proof}\\leavevmode\n  \\begin{itemize}\n  \\item \\(\\varepsilon \\compose \\alpha \\in V^*\\) since composition preserves linearity.\n  \\item Fix \\(\\theta_1,\\theta_2\\in W^*\\),\n    \\begin{align*}\n      \\alpha^*(\\theta_1+\\theta_2) &= (\\theta_1+\\theta_2)\\compose \\alpha \\\\\n                                  &= \\theta_1\\compose \\alpha + \\theta_2 \\compose \\alpha \\\\\n      &= \\alpha^*\\theta_1 + \\alpha^*\\theta_2\n    \\end{align*}\n  \\item Similarly \\(\\alpha^*(\\lambda\\theta) = \\lambda\\alpha^*(\\theta)\\).\n  \\end{itemize}\n\\end{proof}\n\n\\begin{proposition}\n  Let \\(V\\) and \\(W\\) be \\(\\F\\)-vector spaces with bases \\(\\basis B\\) and \\(\\basis C\\) respectively. Let \\(\\basis B^*\\) and \\(\\basis C^*\\) be the dual bases. Consider \\(\\alpha\\in L(V,W)\\) with dual \\(\\alpha^*\\), then\n  \\[\n    [\\alpha^*]_{\\basis C^*,\\basis B^*} = [\\alpha]^T_{\\basis B,\\basis C}.\n  \\]\n\\end{proposition}\n\n\\begin{proof}\n  Say \\(\\basis B = \\{b_1,\\dots,b_n\\}\\), \\(\\basis B^* = \\{\\beta_1,\\dots,\\beta_n\\}\\), \\(\\basis C = \\{c_1,\\dots,c_m\\}\\) and \\(\\basis C^* = \\{\\gamma_1,\\dots,\\gamma_m\\}\\). Further let \\([\\alpha]_{\\basis B,\\basis C} = (a_{ij})\\), an \\(m\\times n\\) matrix.\n  \\begin{align*}\n    \\alpha^*(\\gamma_r)(b_s) &= \\gamma_r \\compose \\alpha(b_s) \\\\\n                            &= \\gamma_r(\\alpha(b_s)) \\\\\n                            &= \\gamma_r \\Big( \\sum_{t}^{ }a_{ts}c_t \\Big) \\\\\n                            &= \\sum_{t}^{ } a_{ts} \\gamma_r(c_t) \\\\\n                            &= a_{rs} \\\\\n                            &= \\Big( \\sum_{i}^{ } a_{ri}\\beta_i \\Big) (b_s)\n  \\end{align*}\n  Thus\n  \\[\n    \\alpha^*(\\gamma_r) = \\sum_{i}^{ } a_{ri}\\beta_i\n  \\]\n  so\n  \\[\n    [\\alpha^*]_{\\basis C^*,\\basis B^*} = [\\alpha]^T_{\\basis B,\\basis C}.\n  \\]\n\\end{proof}\n\nIt follows that\n\\begin{lemma}\n  Let \\(V\\) be a finite-dimensional \\(\\F\\)-vector space with bases \\(\\basis E = \\{e_1,\\dots,e_n\\}\\) and \\(\\basis F = \\{f_1,\\dots,f_n\\}\\), with corresponding dual bases \\(\\basis E^* = \\{\\varepsilon_1,\\dots, \\varepsilon_n\\}\\) and \\(\\basis F^*\\). If the change-of-basis matrix from \\(\\basis F\\) to \\(\\basis E\\) is \\(P\\), then the change-of-basis matrix from \\(\\basis F^*\\) to \\(\\basis E^*\\) is\n  \\[\n    (P^{-1})^T.\n  \\]\n\\end{lemma}\n\n\\begin{proof}\n  \\[\n    [\\id]_{\\basis F^*,\\basis E^*} = [\\id]_{\\basis E, \\basis F}^T = ([\\id]_{\\basis F, \\basis E}^{-1})^T.\n  \\]\n\\end{proof}\n\n\\begin{caution}\n  \\(V \\cong V^*\\) only if \\(V\\) is finite-dimensional. Let \\(V = P\\), the space of all real polynomials. It has basis \\(\\{p_j\\}_{j\\in \\N}\\) where \\(p_j(t) = t^j\\). In example sheet 2 we will see\n\\begin{align*}\n  P^* &\\cong \\R^\\N \\\\\n  \\varepsilon &\\mapsto (\\varepsilon(p_0), \\varepsilon(p_1), \\dots)\n\\end{align*}\nand on example sheet 1 we prove\n\\[\n  P \\ncong \\R^\\N\n\\]\nas the latter does not have a countable basis.\n\\end{caution}\n\nNow we move on to more discussion about the annihilator.
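\nAs a warm-up, a tiny example (added for illustration):\n\\begin{eg}\n  In \\(V = \\R^2\\) with standard basis \\(\\{e_1, e_2\\}\\) and dual basis \\(\\{\\varepsilon_1, \\varepsilon_2\\}\\), let \\(U = \\spans{e_1 + e_2}\\). Then \\(\\alpha = a\\varepsilon_1 + b\\varepsilon_2\\) lies in \\(U^\\ann\\) if and only if \\(a + b = 0\\), so \\(U^\\ann = \\spans{\\varepsilon_1 - \\varepsilon_2}\\) and \\(\\dim U^\\ann = 1 = \\dim V - \\dim U\\), as the lemma above predicts.\n\\end{eg}\n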
Then
  \begin{itemize}
  \item \(N(\alpha^*) = (\im \alpha)^\ann\), so \(\alpha^*\) is injective if and only if \(\alpha\) is surjective.
  \item \(\im \alpha^* \leq N(\alpha)^\ann\), with equality if \(V\) and \(W\) are both finite-dimensional, in which case \(\alpha^*\) is surjective if and only if \(\alpha\) is injective.
  \end{itemize}
\end{lemma}

\begin{proof}\leavevmode
  \begin{itemize}
  \item Let \(\varepsilon \in W^*\), then
  \begin{align*}
    & \varepsilon \in N(\alpha^*) \\
    \Leftrightarrow & \alpha^* (\varepsilon) = 0 \\
    \Leftrightarrow & \varepsilon \compose \alpha = 0 \\
    \Leftrightarrow & \varepsilon(u) = 0 \, \forall u \in \im \alpha \\
    \Leftrightarrow & \varepsilon \in (\im \alpha)^\ann
  \end{align*}

\item Let \(\varepsilon \in \im \alpha^*\). Then \(\varepsilon = \alpha^* (\phi)\) for some \(\phi \in W^*\). For any \(u \in N(\alpha)\),
  \[
    \varepsilon(u) = (\alpha^*(\phi))(u) = (\phi\compose \alpha)(u) = \phi(\alpha(u)) = \phi(0) = 0
  \]
  so \(\varepsilon \in N(\alpha)^\ann\).

  Now use the fact that \(V\) and \(W\) are finite-dimensional:
  \[
    \dim \im(\alpha^*) = r(\alpha^*) = r(\alpha)
  \]
  as \(r(A) = r(A^T)\). On the other hand,
  \[
    r(\alpha) = \dim V - \dim N(\alpha) = \dim (N(\alpha))^\ann
  \]
  Thus they are equal.
\end{itemize}
\end{proof}

\subsection{Double Dual}

Let \(V\) be an \(\F\)-vector space. Then \(V^* = L(V,\F)\) is its dual space. The natural next step is

\begin{definition}[Double dual]\index{vector space!double dual}
  The \emph{double dual} of \(V\) is
  \[
    V^{**} = (V^*)^* = L(V^*, \F).
  \]
\end{definition}

\begin{theorem}[Natural homomorphism of double dual]
  If \(V\) is an \(\F\)-vector space, then the map
  \begin{align*}
    \hat \cdot: V &\to V^{**} \\
    v &\mapsto \hat v
  \end{align*}
  where \(\hat v(\varepsilon) = \varepsilon(v)\), is a natural homomorphism. In particular when \(V\) is finite-dimensional this is a natural isomorphism.
\end{theorem}

\begin{proof}\leavevmode
  \begin{itemize}
  \item For \(v\in V\), the map \(\hat v: V^* \to \F\) is linear so \(\hat \cdot\) does give a map from \(V\) to \(V^{**}\).
  \item Linearity: for \(v_1, v_2 \in V\), \(\lambda_1, \lambda_2 \in \F\), \(\varepsilon \in V^*\), then
    \[
      \reallywidehat{\lambda_1v_1 + \lambda_2v_2}(\varepsilon) = \varepsilon(\lambda_1v_1 + \lambda_2v_2) = \lambda_1 \varepsilon(v_1) + \lambda_2 \varepsilon(v_2) = \lambda_1 \hat v_1(\varepsilon) + \lambda_2 \hat v_2(\varepsilon)
    \]
  \item Injectivity: let \(e \in V\setminus\{0\}\). Extend it to a basis of \(V\), say \(e, e_2,\dots, e_n\). Let \(\varepsilon, \varepsilon_2,\dots, \varepsilon_n\) be its corresponding dual basis. Now
    \[
      \hat e(\varepsilon) = \varepsilon(e) = 1 
    \]
    so \(\hat e\) is non-zero. \(\hat \cdot\) is injective.
  \item Finally, if \(V\) is finite-dimensional, \(\dim V = \dim V^* = \dim V^{**}\) so \(\hat \cdot\) is an isomorphism.
  \end{itemize}
\end{proof}
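To make the evaluation map concrete, here is a small illustration (using the row-vector picture from earlier):

\begin{eg}
  Identify \((\F^n)^*\) with row vectors, so that a functional \(u\) acts on a column vector \(v\) by \(v \mapsto uv\). Then for a fixed column vector \(v\), the element \(\hat v \in (\F^n)^{**}\) is the map \(u \mapsto uv\); the assignment \(v \mapsto \hat v\) involves no choice of basis, which is what ``natural'' is alluding to.
\end{eg}

\begin{lemma}
  Let \(V\) be an \(\F\)-vector space and \(U \leq V\). 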
Then
  \[
    \hat U \leq U^{\ann \ann}.
  \]
  If \(V\) is finite-dimensional then \(\hat U = U^{\ann \ann}\) so \(U \cong U^{\ann \ann}\).
\end{lemma}

\begin{proof}\leavevmode
  \begin{itemize}
  \item First show \(\hat U \leq U^{\ann \ann}\): given \(u \in U\), for all \(\varepsilon \in U^\ann\), \(\varepsilon(u) = 0\) so \(\hat u(\varepsilon) = 0\). Thus \(\hat u \in (U^\ann)^\ann = U^{\ann \ann}\).
  \item If \(V\) is finite-dimensional then
    \[
      \dim U^{\ann \ann} = \dim V^* - \dim U^\ann = \dim V - \dim U^\ann = \dim U
    \] so \(\hat U = U^{\ann \ann}\).
  \end{itemize}
\end{proof}

\begin{lemma}
  Let \(V\) be a finite-dimensional \(\F\)-vector space and \(U_1, U_2 \leq V\). Then
  \begin{itemize}
  \item \((U_1 + U_2)^\ann = U_1^\ann \cap U_2^\ann\),
  \item \((U_1 \cap U_2)^\ann = U_1^\ann + U_2^\ann\).
  \end{itemize}
\end{lemma}

\begin{proof}\leavevmode
  \begin{itemize}
  \item Let \(\theta \in V^*\). Then \(\theta \in (U_1+U_2)^\ann\) if and only if \(\theta(u_1+u_2) = 0\) for all \(u_1\in U_1, u_2 \in U_2\), if and only if \(\theta(u) = 0\) for all \(u \in U_1 \cup U_2\) (take \(u_2 = 0\), respectively \(u_1 = 0\)), if and only if \(\theta\in U_1^\ann \cap U_2^\ann\).
  \item Apply \(^\ann\) to the first result and use the previous lemma.
  \end{itemize}
\end{proof}

\section{Bilinear Form I}

\begin{definition}[Bilinear form]\index{bilinear form}
  Let \(U\) and \(V\) be \(\F\)-vector spaces. A map \(\varphi: U \times V \to \F\) is \emph{bilinear} if it is linear in both arguments, i.e.
  \begin{align*}
    \forall u \in U, \, \varphi(u, -) &\in V^* \\
    \forall v \in V, \, \varphi(-, v) &\in U^*
  \end{align*}
\end{definition}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item \(V \times V^* \to \F, (v, \theta) \mapsto \theta(v)\).
  \item \(U = V = \R^n, \varphi(x, y) = \sum_{i=1}^{n}x_iy_i\).
  \item \(A \in \M_{m,n}(\F), \varphi: \F^m \times \F^n \to \F, (u, v) \mapsto u^TAv\).
  \item \(U = V = C([0, 1], \R), (f, g) \mapsto \int_{0}^{1} f(t)g(t) \,dt \)
  \end{enumerate}
\end{eg}

\begin{definition}[Matrix of bilinear form]\index{matrix!bilinear form, of}
  Let \(\basis B = \{e_1,\dots,e_m\}\) be a basis for \(U\) and \(\basis C = \{f_1,\dots,f_n\}\) be a basis for \(V\). 
Given a bilinear map \(\varphi: U \times V \to \F\), the \emph{matrix of \(\varphi\)} with respect to \(\basis B\) and \(\basis C\) is
  \[
    [\varphi]_{\basis B, \basis C} = \left( \varphi(e_i, f_j) \right)_{m \times n}.
  \]
\end{definition}

\begin{lemma}
  \[
    \varphi(u, v) = [u]_{\basis B}^T [\varphi]_{\basis B, \basis C} [v]_{\basis C}.
  \]
\end{lemma}

\begin{proof}
  Let \(u = \sum_{i}^{ }\lambda_ie_i, v = \sum_{j}^{} \mu_jf_j\), then
  \begin{align*}
    \varphi(u, v) &= \varphi\left( \sum_{i}^{ }\lambda_ie_i, \sum_{j}^{} \mu_jf_j \right) \\
                  &= \sum_{i}^{ }\lambda_i \varphi\left(e_i, \sum_{j}^{} \mu_jf_j \right)\\
                  &= \sum_{i, j}^{ }\lambda_i \varphi(e_i, f_j) \mu_j \\
                  &= [u]_{\basis B}^T [\varphi]_{\basis B, \basis C} [v]_{\basis C}
  \end{align*}
\end{proof}

\begin{note}\leavevmode
  \begin{enumerate}
  \item \([\varphi]_{\basis B, \basis C}\) is the unique matrix with this property.
  \item A bilinear form \(\varphi: U \times V \to \F\) induces linear maps
    \begin{align*}
      \varphi_L: U &\to V^* \\
      u &\mapsto \varphi(u, -) \\
      \varphi_R: V &\to U^* \\
      v &\mapsto \varphi(-, v)
    \end{align*}
  \end{enumerate}
\end{note}

\begin{lemma}
  Let \(\basis B = \{e_1,\dots, e_m\}\) be a basis for \(U\), \(\basis B^* = \{\varepsilon_1,\dots, \varepsilon_m\}\) a basis for \(U^*\), \(\basis C = \{f_1,\dots, f_n\}, \basis C^* = \{\eta_1,\dots, \eta_n\}\) for \(V\) and \(V^*\). If \([\varphi]_{\basis B, \basis C} = A\) then
  \begin{align*}
    [\varphi_L]_{\basis B, \basis C^*} &= A^T, \\
    [\varphi_R]_{\basis C, \basis B^*} &= A. \\
  \end{align*}
\end{lemma}

\begin{proof}
  \begin{align*}
    \varphi_L(e_i)(f_j) = A_{ij} &\Longrightarrow \varphi_L(e_i) = \sum_{j}^{ }A_{ij}\eta_j \\
    \varphi_R(f_j)(e_i) = A_{ij} &\Longrightarrow \varphi_R(f_j) = \sum_{i}^{ }A_{ij}\varepsilon_i
  \end{align*}
\end{proof}

\begin{definition}[Left and right kernel]\index{bilinear!kernel}
  The \emph{left (right, respectively) kernel} of \(\varphi\) is \(\ker \varphi_L\) (\(\ker \varphi_R\), respectively).
\end{definition}

\begin{definition}[Degeneracy]\index{bilinear!degeneracy}
  \(\varphi\) is \emph{non-degenerate} if \(\ker \varphi_L = 0\) and \(\ker \varphi_R = 0\). Otherwise, \(\varphi\) is \emph{degenerate}.
\end{definition}
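For instance, on \(\F^2 \times \F^2\) with \(\varphi(u, v) = u^T A v\):

\begin{eg}
  If \(A = I\) then \(\varphi\) is non-degenerate. If instead
  \(A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\)
  then \(\varphi(u, e_2) = 0\) for every \(u\), so \(e_2\) lies in the right kernel and \(\varphi\) is degenerate.
\end{eg}

\begin{lemma}
  Let \(U, V\) have bases as before, and \(\varphi, A\) as before. 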
Then \(\varphi\) is non-degenerate if and only if \(A\) is invertible.
\end{lemma}

\begin{proof}
  \begin{align*}
    & \varphi \text{ is non-degenerate} \\
    \Leftrightarrow & \ker \varphi_L = 0 \text{ and } \ker \varphi_R = 0 \\
    \Leftrightarrow & n(A^T) = n(A) = 0 \\
    \Leftrightarrow & r(A^T) = \dim U \text{ and } r(A) = \dim V \\
    \Leftrightarrow & A \text{ is invertible}
  \end{align*}
  where the last step uses \(r(A) = r(A^T)\).
\end{proof}

\begin{corollary}
  If \(\varphi\) is non-degenerate and \(U\) and \(V\) are finite-dimensional then \(\dim U = \dim V\).
\end{corollary}

\begin{corollary}
  When \(U\) and \(V\) are finite-dimensional, choosing a non-degenerate bilinear form \(\varphi: U \times V \to \F\) is equivalent to picking an isomorphism \(\varphi_L: U \to V^*\).
\end{corollary}

\begin{definition}
  For \(T \subseteq U, S \subseteq V\),
  \begin{align*}
    T^\perp &= \{ v\in V: \varphi(t, v) = 0 \, \forall t\in T \} \leq V \\
    \prescript{\perp}{}{S} &= \{ u\in U: \varphi(u, s) = 0 \, \forall s\in S \} \leq U
  \end{align*}
\end{definition}
These are generalisations of annihilators.

\begin{proposition}
  \label{prop:change of basis of bilinear form}
  Suppose \(U\) has bases \(\basis B, \basis B'\) and \(V\) has bases \(\basis C, \basis C'\), \(P = [\id]_{\basis B', \basis B}, Q = [\id]_{\basis C',\basis C}\). Let \(\varphi: U \times V \to \F\) be a bilinear form. Then
  \[
    [\varphi]_{\basis B',\basis C'} = P^T[\varphi]_{\basis B,\basis C}Q.
  \]
\end{proposition}

\begin{proof}
  \begin{align*}
    \varphi(u, v) &= [u]_{\basis B}^T [\varphi]_{\basis B, \basis C} [v]_{\basis C} \\
                  &= (P[u]_{\basis B'})^T [\varphi]_{\basis B, \basis C} (Q[v]_{\basis C'}) \\
                  &= [u]_{\basis B'}^T P^T[\varphi]_{\basis B, \basis C}Q[v]_{\basis C'} 
  \end{align*}
\end{proof}

\begin{definition}[Rank of bilinear form]\index{rank}
  The \emph{rank} of \(\varphi\), \(r(\varphi)\), is the rank of its matrix representation (which is well-defined by the previous proposition).
\end{definition}

\begin{note}
  \[
    r(\varphi) = r(\varphi_L) = r(\varphi_R).
  \]
\end{note}

\section{Determinant \& Trace}

% TODO: add exposition about eigenvalues and basis-independent values.

\subsection{Trace}

\begin{definition}[Trace]\index{trace}
  For \(A \in \M_n(\F) = \M_{n,n}(\F)\), the \emph{trace} of \(A\) is
  \[
    \tr(A) = \sum_{i=1}^{n}A_{ii}.
  \]
\end{definition}

\begin{lemma}
  For \(A, B\in \M_n(\F)\),
  \[
    \tr(AB) = \tr(BA).
  \]
\end{lemma}

\begin{proof}
  \[
    \tr(AB) = \sum_{i}^{ }\sum_{j}^{ }a_{ij}b_{ji} = \sum_{j}^{ }\sum_{i}^{ }b_{ji}a_{ij} = \tr(BA).
  \]
\end{proof}

\begin{lemma}
  Similar (or conjugate) matrices have the same trace.
\end{lemma}

\begin{proof}
  Suppose \(A\) and \(B\) are conjugate; then there exists \(P\) such that \(B = P^{-1}AP\) so
  \[
    \tr(B) = \tr(P^{-1}AP) = \tr(APP^{-1}) = \tr(A).
  \]
\end{proof}
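For a quick numerical check of \(\tr(AB) = \tr(BA)\), with an arbitrary pair of matrices:

\begin{eg}
  Let \(A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\) and \(B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\). Then
  \(AB = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}\) and \(BA = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}\), both of trace \(5\).
\end{eg}

\begin{definition}[Trace]\index{trace}
  Let \(\alpha: V \to V\) be a linear map. The \emph{trace} of \(\alpha\) is
  \[
    \tr \alpha = \tr[\alpha]_{\basis B} = \tr [\alpha]_{\basis B, \basis B}
  \]
  with respect to a basis \(\basis B\). 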
This is well-defined by the previous lemma.
\end{definition}

\begin{lemma}
  Let \(\alpha: V \to V\) be linear and \(\alpha^*: V^* \to V^*\) be its dual. Then
  \[
    \tr \alpha = \tr \alpha^*.
  \]
\end{lemma}

\begin{proof}
  \[
    \tr \alpha = \tr [\alpha]_{\basis B} = \tr [\alpha]_{\basis B}^T = \tr[\alpha^*]_{\basis B^*} = \tr \alpha^*.
  \]
\end{proof}

\subsection{Determinant}

Recall some results from IA Groups: let \(S_n\) be the permutation group of the set \(\{1, 2, \dots, n\}\) and \(\varepsilon: S_n \to \{1, -1\}\) be the signature of a permutation, i.e.
\[
  \varepsilon(\sigma) =
  \begin{cases}
    1 &\text{if \(\sigma\) is a product of an even number of transpositions}\\
    -1 &\text{otherwise}
  \end{cases}
\]

\begin{definition}[Determinant]\index{determinant}
  Suppose \(A \in \M_n(\F)\), \(A = (a_{ij})\), the \emph{determinant} of \(A\) is
  \[
    \det A = \sum_{\sigma \in S_n}^{ } \varepsilon(\sigma) a_{\sigma(1), 1} a_{\sigma(2), 2} \cdots a_{\sigma(n), n}.
  \]
\end{definition}

There are \(n!\) terms in this summation and each is the signed product of \(n\) elements (one from each row and each column).

\begin{eg}
  For \(n = 2\),
  \[
    \det
    \begin{pmatrix}
      a_{11} & a_{12} \\
      a_{21} & a_{22}
    \end{pmatrix}
    =a_{11}a_{22} - a_{21}a_{12}
  \]
\end{eg}

\begin{lemma}
  If \(A = (a_{ij})\) is an upper-triangular matrix (i.e.\ \(a_{ij} = 0\) for all \(i > j\)) then
  \[
    \det A = a_{11}a_{22}\dots a_{nn}.
  \]
  Similarly for lower-triangular matrices.
\end{lemma}

\begin{proof}
  In the summation
  \[
    \det A = \sum_{\sigma \in S_n}^{ }\varepsilon(\sigma) a_{\sigma(1), 1}\dots a_{\sigma(n),n},
  \]
  for a summand to be non-zero, we need \(\sigma(j) \leq j\) for all \(j\). 
Thus \(\sigma(1) = 1\), then \(\sigma(2) = 2\) and so on, i.e.\ \(\sigma = \id\).
\end{proof}

\begin{lemma}
  \[
    \det A = \det A^T
  \]
\end{lemma}

\begin{proof}
  \begin{align*}
    \det A &= \sum_{\sigma \in S_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{\sigma(i), i} \\
           &= \sum_{\sigma \in S_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{i, \sigma^{-1}(i)} \\
           &= \sum_{\sigma \in S_n}^{ } \varepsilon(\sigma^{-1}) \prod_{i = 1}^n a_{i, \sigma^{-1}(i)} \\
           &= \sum_{\tau \in S_n}^{ } \varepsilon(\tau) \prod_{i = 1}^n a_{i, \tau(i)} \text{ where \(\tau = \sigma^{-1}\)} \\
           &= \det A^T
  \end{align*}
\end{proof}

\begin{definition}[Volume form]\index{volume form}
  A \emph{volume form} on \(\F^n\) is a function
  \[
    d: \underbrace{\F^n \times \F^n \times \dots \times \F^n}_{\text{\(n\) copies}} \to \F
  \]
  which is:
  \begin{itemize}
  \item multilinear: for any \(i\) and \(v_1, \dots, v_{i-1}, v_{i + 1}, \dots, v_n \in \F^n\),
    \[
      d(v_1, \dots, v_{i-1}, -, v_{i+1}, \dots, v_n) \in (\F^n)^*.
    \]
  \item alternating: if \(v_i = v_j\) for \(i \neq j\), \(d(v_1, \dots, v_n) = 0\).
  \end{itemize}
\end{definition}

\begin{notation}
  Given \(A = (a_{ij})\), write \(A\) in column form
  \[
    \left( A^{(1)} | \cdots | A^{(n)} \right).
  \]
\end{notation}
For example, if \(\{e_i\}\) is a standard basis for \(\F^n\) then
\[
  I = \left(e_1 | \cdots | e_n \right).
\]

\begin{lemma}
  \begin{align*}
    \det: \F^n \times \dots \times \F^n &\to \F \\
    (A^{(1)}, \dots, A^{(n)}) &\mapsto \det A
  \end{align*}
  is a volume form.
\end{lemma}

\begin{proof}\leavevmode
  \begin{itemize}
  \item Multilinear: for any fixed \(\sigma \in S_n\), the summand \(\varepsilon(\sigma)\prod_{i = 1}^n a_{\sigma(i), i}\) contains exactly one entry from each column, so it is multilinear in the columns; a sum of multilinear functions is again multilinear.
  \item Alternating: suppose \(A^{(k)} = A^{(l)}\) for some \(l \neq k\). Let \(\tau = (kl)\). Then \(a_{ij} = a_{i \tau(j)}\) for all \(i, j\). Also \(S_n\) can be expressed as a union of two disjoint cosets \(A_n\) and \(\tau A_n\) so
    \begin{align*}
      \det A &= \sum_{\sigma \in A_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{i, \sigma(i)} - \sum_{\sigma \in A_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{i, \tau\sigma(i)} \\
      \intertext{since \(\varepsilon\) is a homomorphism}
             &= \sum_{\sigma \in A_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{i, \sigma(i)} - \sum_{\sigma \in A_n}^{ } \varepsilon(\sigma) \prod_{i = 1}^n a_{i, \sigma(i)} \\
             &= 0
    \end{align*}
  \end{itemize}
\end{proof}
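For instance, on \(\F^2\), \(d(u, v) = u_1v_2 - u_2v_1\) is a volume form (it is \(\det(u|v)\)), and so is any scalar multiple such as \(2(u_1v_2 - u_2v_1)\); the next results show that, up to such scaling, there are no others.

In the rest of the section we are going to prove that the converse is also true, i.e.\ every volume form is a scalar multiple of the determinant.

\begin{lemma}
  Let \(d\) be a volume form. 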
Then swapping two arguments changes the sign:
  \[
    d(v_1, \dots, v_i, \dots, v_j, \dots, v_n) = - d(v_1, \dots, v_j, \dots, v_i, \dots, v_n).
  \]
\end{lemma}

\begin{proof}
  \begin{align*}
    0 &= d(v_1, \dots, v_{i-1}, v_i + v_j, v_{i+1}, \dots, v_{j-1}, v_i + v_j, v_{j + 1}, \dots, v_n) \\
      &= \underbrace{d(v_1, \dots, v_i, \dots, v_i, \dots, v_n)}_{ = 0} + d(v_1, \dots, v_j, \dots, v_i, \dots, v_n) \\
      &+ d(v_1, \dots, v_i, \dots, v_j, \dots, v_n) + \underbrace{d(v_1, \dots, v_j, \dots, v_j, \dots, v_n)}_{ = 0} \\
  \end{align*}
  Rearrange.
\end{proof}

\begin{corollary}
  If \(\sigma \in S_n\),
  \[
    d(v_{\sigma(1)}, \dots, v_{\sigma(n)}) = \varepsilon(\sigma) d(v_1, \dots, v_n).
  \]
\end{corollary}

\begin{theorem}
  Let \(d\) be a volume form on \(\F^n\), \(A = (A^{(1)}|\dots | A^{(n)})\), then
  \[
    d(A^{(1)}, \dots, A^{(n)}) = \det A \cdot d(e_1, \dots, e_n).
  \]
\end{theorem}

\begin{proof}
  \begin{align*}
    d(A^{(1)}, \dots, A^{(n)}) &= d \left( \sum_{i=1}^{n} a_{i1}e_i, A^{(2)}, \dots, A^{(n)} \right) \\
                               &= \sum_{i = 1}^{n}a_{i1} d(e_i, A^{(2)}, \dots, A^{(n)}) \\
                               &= \sum_{i}^{} \sum_{j}^{ } a_{i1} a_{j2} d(e_i, e_j, \dots, A^{(n)}) \\
                               &= \sum_{i_1, i_2, \dots, i_n}^{ } \prod_{k = 1}^{n} a_{{i_k},k} d(e_{i_1}, \dots, e_{i_n})
\end{align*}
The last term is \(0\) unless all of the \(i_k\) are distinct, i.e.\ there exists \(\sigma \in S_n\) such that \(i_k = \sigma(k)\). Thus
\begin{align*}
  d(A^{(1)}, \dots, A^{(n)}) = \sum_{\sigma \in S_n}^{ } \prod_{k = 1}^{n} a_{\sigma(k), k} \underbrace{d(e_{\sigma(1)}, \dots, e_{\sigma(n)})}_{= \varepsilon(\sigma) d(e_1, \dots, e_n)}
\end{align*}
\end{proof}

\begin{corollary}
  \(\det\) is the unique volume form \(d\) such that \(d(e_1, \dots, e_n) = 1\).
\end{corollary}

\begin{proposition}
  Suppose \(A, B \in \M_n(\F)\), then
  \[
    \det AB =\det A \det B.
  \]
\end{proposition}

\begin{proof}
  Let
  \begin{align*}
    d_A: \F^n \times \dots \times \F^n &\to \F \\
    (v_1, \dots, v_n) &\mapsto \det(Av_1|\cdots|Av_n)
  \end{align*}
  then \(d_A\) is a volume form:
  \begin{itemize}
  \item multilinear: \(v_i \mapsto Av_i\) is linear and \(\det\) is multilinear.
  \item alternating: \(v_i = v_j\) implies \(Av_i = Av_j\) and \(\det\) is alternating.
  \end{itemize}
  It follows that
  \[
    \det AB = \det (ABe_1| \cdots | ABe_n) = d_A(Be_1, \dots, Be_n) = \det B \cdot d_A(e_1, \dots, e_n) = \det B \det A.
  \]
\end{proof}
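As an arbitrary numerical check of multiplicativity:

\begin{eg}
  With \(A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\) and \(B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\), we have \(\det A = -2\), \(\det B = -1\) and
  \(AB = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}\) with \(\det AB = 2 = (-2)(-1)\).
\end{eg}

\begin{definition}[Singular]\index{matrix!singular}
  \(A \in \M_n(\F)\) is \emph{singular} if \(\det A = 0\). 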
Otherwise it is \emph{non-singular}.
\end{definition}

\begin{lemma}
  If \(A\) is invertible then it is non-singular and
  \[
    \det A^{-1} = \frac{1}{\det A}.
  \]
\end{lemma}

\begin{proof}
  \[
    1 = \det I_n = \det(AA^{-1}) = \det A \det A^{-1}
  \]
\end{proof}

\begin{theorem}
  Suppose \(A \in \M_n(\F)\). Then TFAE:
  \begin{enumerate}
  \item \(A\) is invertible,
  \item \(A\) is non-singular,
  \item \(r(A) = n\).
  \end{enumerate}
\end{theorem}

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(1 \Rightarrow 2\): done.
  \item \(2 \Rightarrow 3\): we show the contrapositive. Suppose that \(r(A) < n\). By rank-nullity \(n(A) > 0\) so \(\exists \lambda \in \F^n \setminus \{0\}\) such that \(A \lambda = 0\). Say \(\lambda = (\lambda_i)\) and \(\lambda_k \neq 0\). Then \(\sum_{i = 1}^{n}A^{(i)}\lambda_i = 0 \). Let
    \[
      B= (e_1|e_2|\cdots|e_{k-1}|\lambda|e_{k+1}|\cdots|e_n)
    \]
    It follows that \(AB\) has \(k\)th column zero, while by multilinearity \(\det B = \sum_i \lambda_i \det(e_1|\cdots|e_i|\cdots|e_n) = \lambda_k\) (every other term has a repeated column), so
    \[
      0 = \det AB = \det A \det B = \lambda_k \det A.
    \]
    So \(\det A = 0\).
  \item \(3 \Rightarrow 1\): by rank-nullity.
  \end{itemize}
\end{proof}

\subsection{Determinant of Linear Maps}

\begin{lemma}
  Conjugate matrices have the same determinant.
\end{lemma}

\begin{proof}
  Let \(B = P^{-1}AP\). Then
  \[
    \det B = \det (P^{-1}AP) = \det P^{-1} \det A \det P = \det (P^{-1}P) \det A = \det A.
  \]
\end{proof}

\begin{definition}[Determinant]\index{determinant}
  Let \(\alpha: V \to V\) where \(V\) is a finite-dimensional vector space. The \emph{determinant} of \(\alpha\) is
  \[
    \det \alpha = \det [\alpha]_{\basis B, \basis B}
  \]
  where \(\basis B\) is any basis for \(V\). 
\end{definition}
This is well-defined by the previous lemma.

\begin{theorem}
  \(\det: L(V, V) \to \F\) satisfies
  \begin{enumerate}
  \item \(\det \id = 1\),
  \item \(\det \alpha \compose \beta = \det \alpha \det \beta\),
  \item \(\det \alpha \neq 0\) if and only if \(\alpha\) is invertible and in this case \(\det (\alpha^{-1}) = \frac{1}{\det \alpha}\).
  \end{enumerate}
\end{theorem}

\begin{proof}
  Restatement of previous results.
\end{proof}

\subsection{Determinant of Block-triangular Matrices}

\begin{lemma}
  Suppose \(A \in \M_k(\F), B \in \M_\ell(\F)\) and \(C \in \M_{k, \ell}(\F)\), then
  \[
    \det
    \begin{pmatrix}
      A & C \\
      0 & B
    \end{pmatrix}
    = \det A \det B.
  \]
\end{lemma}

\begin{proof}
  Let \(n = k + \ell\) and call the block matrix \(X = (x_{ij})\), which is an element of \(\M_n(\F)\). Then
  \begin{align*}
    \det X &= \sum_{\sigma \in S_n}^{ }\varepsilon(\sigma) \prod_{i = 1}^{n} x_{\sigma(i), i} \\
    \intertext{Note that \(x_{\sigma(i), i} = 0\) if \(i \leq k\) and \(\sigma(i) > k\). 
Thus the sum runs only over those \(\sigma\) that permute \(\{1,\dots, k\}\) and \(\{k+1, \dots, n\}\) separately, i.e.\ \(\sigma = \sigma_1\sigma_2\) where \(\sigma_1 \in \sym_{\{1,\dots, k\}}\) and \(\sigma_2 \in \sym_{\{k+1, \dots, n\}}\), in which case \(\varepsilon(\sigma) = \varepsilon(\sigma_1)\varepsilon(\sigma_2)\).}
           &= \sum_{\sigma_1 \in \sym_{\{1, \dots, k\}}}^{ } \varepsilon(\sigma_1) \prod_{j = 1}^{k} x_{\sigma_1(j),j} \\
           &\times \sum_{\sigma_2 \in \sym_{\{k+1, \dots, n\}}}^{ } \varepsilon(\sigma_2) \prod_{j = k + 1}^{n} x_{\sigma_2(j),j} \\
           &= \det A \det B
  \end{align*}
\end{proof}

\begin{corollary}
  For a sequence of matrices \(A_1, \dots, A_k\),
  \[
    \det
    \begin{pmatrix}
      A_1 & & & & \\
      & A_2 & & * & \\
      & & A_3 & & \\
      & 0 & & \ddots & \\
      & & & & A_k
    \end{pmatrix}
    = \prod_{i = 1}^{k} \det A_i
  \]
\end{corollary}

\begin{proof}
  Apply the previous lemma inductively.
\end{proof}

\begin{caution}
  In general,
  \[
    \det
    \begin{pmatrix}
      A & B \\
      C & D
    \end{pmatrix}
    \neq \det A \det D - \det B \det C.
  \]
\end{caution}

\subsection{Volume Interpretation of Determinant}

In \(\R^2\), the determinant of a matrix
\[
  \det
  \begin{pmatrix}
    a_{11} & a_{12} \\
    a_{21} & a_{22} 
  \end{pmatrix}
\]
can be interpreted as the signed area of the parallelogram spanned by the column vectors \(\binom{a_{11}}{a_{21}}\) and \(\binom{a_{12}}{a_{22}}\) of the matrix.

Similarly in \(\R^3\) the determinant of a matrix is the signed volume of the parallelepiped spanned by the column vectors of the matrix.

For higher dimensions, although difficult to visualise, the same interpretation still works: consider the hypercube \(H = [0, 1]^n \subseteq \R^n\). Then a map \(A \in \M_n(\R)\) sends
\begin{align*}
  H &\to A(H) \\
  \sum_{i = 1}^{n} t_ie_i &\mapsto \sum_{i = 1}^{n}t_i A^{(i)}
\end{align*}
and the generalised signed volume of the image \(A(H)\) is \(\det A\).

\subsection{Determinant of Elementary Operation}

Consider the determinants of elementary column operation matrices:
\begin{itemize}
\item \(E_1\) swaps two columns so \(\det E_1 = -1\),
\item \(E_2\) multiplies a column by \(\lambda \neq 0\) so \(\det E_2 = \lambda\),
\item \(E_3\) adds \(\lambda\) times of a column to another column so \(\det E_3 = 1\).
\end{itemize}

One could prove properties of \(\det\) by decomposing any matrix into elementary matrices.
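For instance, subtracting twice the first column from the second (an \(E_3\) operation, determinant \(1\)):
\[
  \det
  \begin{pmatrix}
    1 & 2 \\
    3 & 4
  \end{pmatrix}
  =
  \det
  \begin{pmatrix}
    1 & 0 \\
    3 & -2
  \end{pmatrix}
  = -2.
\]

\subsection{Column Expansion \& Adjugate Matrices}

\begin{lemma}
  Suppose \(A \in \M_n(\F)\), \(A = (a_{ij})\). Define \(A_{\widehat{ij}} \in \M_{n - 1}(\F)\) by deleting row \(i\) and column \(j\) from \(A\). 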
Then \(\det A\) can be calculated by
  \begin{enumerate}
  \item expansion in column \(j\): for a fixed \(j\),
    \[
      \det A = \sum_{i = 1}^{n} (-1)^{i + j} a_{ij} \det A_{\widehat{ij}}.
    \]
  \item expansion in row \(i\): for a fixed \(i\),
    \[
      \det A = \sum_{j = 1}^{n} (-1)^{i + j} a_{ij} \det A_{\widehat{ij}}.
    \]
  \end{enumerate}
\end{lemma}

\begin{remark}
  It is possible to use one of the expressions above to define the determinant recursively, with base case \(\det (a) = a\) for \(n = 1\).
\end{remark}

\begin{eg}
  \[
    \begin{vmatrix}
      a & b & c \\
      d & e & f \\
      g & h & i
    \end{vmatrix}
    =
    a
    \begin{vmatrix}
      e & f \\
      h & i
    \end{vmatrix}
    -b
    \begin{vmatrix}
      d & f \\
      g & i
    \end{vmatrix}
    +c
    \begin{vmatrix}
      d & e \\
      g & h
    \end{vmatrix}
  \]
\end{eg}

\begin{proof}
  We prove \(1\):

  \begin{align*}
    \det A &= \det \left( A^{(1)} | \cdots | \sum_{i = 1}^{n} a_{ij}e_i | \cdots | A^{(n)} \right) \\
           &= \sum_{i = 1}^{n} a_{ij} \det( A^{(1)} | \cdots | e_i | \cdots | A^{(n)} ) \\
    \intertext{use row and column operations to move the entry to the top left corner,}
           &= \sum_{i = 1}^{n} a_{ij} (-1)^{(i - 1) + (j - 1)} \det
             \begin{pmatrix}
               1 & 0 \\
               0 & A_{\widehat{ij}}
             \end{pmatrix} \\
           &= \sum_{i = 1}^{n} a_{ij} (-1)^{i + j} \det A_{\widehat{ij}}
  \end{align*}
\end{proof}

\begin{definition}[Adjugate]\index{matrix!adjugate}
  Let \(A \in \M_n(\F)\). The \emph{adjugate matrix} of \(A\), \(\adj A\), is the \(n \times n\) matrix
  \[
    (\adj A)_{ij} = (-1)^{i + j} \det A_{\widehat{ji}}.
  \]
\end{definition}

Notice the transposition of indices.
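For \(n = 2\) this gives the familiar formula:

\begin{eg}
  \[
    \adj
    \begin{pmatrix}
      a & b \\
      c & d
    \end{pmatrix}
    =
    \begin{pmatrix}
      d & -b \\
      -c & a
    \end{pmatrix},
  \]
  and one checks directly that \((\adj A) A = (ad - bc) I = \det A \cdot I\), as the next theorem asserts in general.
\end{eg}

\begin{theorem}\leavevmode
  \begin{enumerate}
  \item \((\adj A ) A = \det A \cdot I\),
  \item If \(A\) is invertible then
    \[
      A^{-1} = \frac{\adj A}{\det A}.
    \]
  \end{enumerate}
\end{theorem}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item For a fixed \(j\), \(\det A = \sum_i (\adj A)_{ji} a_{ij} = (\adj A \cdot A)_{jj}\). 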
For \(j \neq k\), replace the \(j\)th column with the \(k\)th:
    \begin{align*}
      0 &= \det(A^{(1)} | \cdots | A^{(k)} | \cdots | A^{(k)} | \cdots | A^{(n)}) \\
        &= \sum_{i}^{ } (\adj A)_{ji} a_{ik} \\
        &= (\adj A \cdot A)_{jk}
    \end{align*}
  \item If \(A\) is invertible then \(\det A \neq 0\) so
    \[
      I = \frac{\adj A}{\det A} A.
    \]
  \end{enumerate}
\end{proof}

\subsection{Application: System of Linear Equations}

A system of linear equations can be written as
\[
  A \V x = \V b
\]
where \(A \in \M_{m, n}(\F)\) and \(\V b \in \M_{m, 1}(\F)\) are known and \(\V x \in \M_{n, 1}(\F)\) is unknown.

The system has a solution if and only if \(r(A) = r(A|\V b)\), where the matrix on the RHS is the \emph{augmented matrix} obtained by adding \(\V b\) as a column to \(A\); this holds since \(r(A) = r(A|\V b)\) if and only if \(\V b\) is a linear combination of the columns of \(A\).

The solution is unique if and only if \(r(A) = n\).

In particular, if \(m = n\) and \(A\) is non-singular then there is a unique solution
\[
  \V x = A^{-1} \V b.
\]

Although in theory we could invert the matrix to solve the system of equations, it is terribly inefficient. Instead, we use

\begin{proposition}[Cramer's rule]
  If \(A \in \M_n(\F)\) is invertible then the system
  \[
    A \V x = \V b
  \]
  has unique solution \(\V x = (x_i)\) where
  \[
    x_i = \frac{\det (A_{\hat{i} \V b})}{\det A}
  \]
  where \(A_{\hat{i} \V b}\) is obtained from \(A\) by replacing the \(i\)th column with \(\V b\).
\end{proposition}

\begin{proof}
  Assume \(\V x\) is a solution of the system.
  \begin{align*}
    \det (A_{\hat i \V b}) &= \det(A^{(1)} | \cdots | \V b | \cdots | A^{(n)}) \\
                           &= \det(A^{(1)} | \cdots | A \V x | \cdots | A^{(n)}) \\
                           &= \sum_{j = 1}^{n} x_j \det(A^{(1)} | \cdots | A^{(j)} | \cdots | A^{(n)}) \\
    \intertext{for \(j \neq i\) the matrix has a repeated column, so only the \(j = i\) term survives:}
                           &= x_i \det A
  \end{align*}
\end{proof}

\begin{corollary}
  If \(A \in \M_n(\Z)\) with \(\det A = \pm 1\), then
  \begin{enumerate}
  \item \(A^{-1} \in \M_n(\Z)\).
  \item Given \(\V b \in \Z^n\), \(A \V x = \V b\) has an integer solution.
  \end{enumerate}
\end{corollary}
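For instance (an arbitrarily chosen integer matrix):

\begin{eg}
  \(A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\) has \(\det A = 1\), and indeed
  \(A^{-1} = \adj A = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} \in \M_2(\Z)\).
\end{eg}

\section{Endomorphism}

\subsection{Definitions}

Let \(V\) be an \(\F\)-vector space with \(\dim V = n < \infty\). Let \(\basis B =\{v_1, \dots, v_n\}\) be a basis and \(\alpha \in L(V) = L(V, V)\). The general problem studied in this chapter is to choose a basis \(\basis B\) such that \([\alpha]_{\basis B}\) has a ``nice form'', for example one amenable to \(\det\) and \(\tr\).

Suppose there is another basis \(\basis B'\) with change-of-basis matrix \(P\). Recall that
\[
  [\alpha]_{\basis B} = P^{-1}[\alpha]_{\basis B'}P.
\]
The above problem is thus equivalent to the following: given \(A \in \M_n(\F)\), find \(A'\) conjugate to \(A\) and in a ``nice form''.

What are the nice forms that we desire? 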
The best we can have is

\begin{definition}[Diagonalisable]\index{diagonalisable}
  \(\alpha \in L(V)\) is \emph{diagonalisable} if there exists \(\basis B\) such that \([\alpha]_{\basis B}\) is diagonal.
\end{definition}

A slightly weaker, albeit still ``nice'' enough form is

\begin{definition}[Triangulable]\index{triangulable}
  \(\alpha \in L(V)\) is \emph{triangulable} if there exists \(\basis B\) such that \([\alpha]_{\basis B}\) is upper triangular.
\end{definition}

Equivalently, rephrasing in the language of matrices, \(A \in \M_n(\F)\) is diagonalisable (triangulable, respectively) if it is conjugate to a diagonal (upper triangular, respectively) matrix.

\begin{definition}[Eigenvalue, eigenvector, eigenspace]\index{eigenvalue}\leavevmode
  \begin{enumerate}
  \item \(\lambda \in \F\) is an \emph{eigenvalue} of \(\alpha\) if there exists some \(v \in V\setminus\{0\}\) such that \(\alpha(v) = \lambda v\).
  \item \(v \in V\) is an \emph{eigenvector} of \(\alpha\) if \(\alpha(v) = \lambda v\) for some eigenvalue \(\lambda\).
  \item \(V_\lambda = \{v \in V: \alpha(v) = \lambda v\}\) is the \emph{\(\lambda\)-eigenspace} of \(\alpha\).
  \end{enumerate}
\end{definition}

\begin{remark}
  It is easy to check that \(V_\lambda \leq V\).
\end{remark}

\begin{remark}\leavevmode
  \begin{enumerate}
  \item \(V_\lambda = \ker(\alpha - \lambda \iota)\) and
    \begin{align*}
      & \lambda \text{ is an eigenvalue} \\
      \Leftrightarrow & \alpha - \lambda \iota \text{ is singular} \\
      \Leftrightarrow & \det (\alpha - \lambda \iota) = 0
    \end{align*}
  \item If \(\alpha(v_j) = \lambda v_j\) then the \(j\)th column of \([\alpha]_{\basis B}\) is \((0, \dots, \lambda, \dots, 0)^T\), with \(\lambda\) in the \(j\)th entry.
  \item \([\alpha]_{\basis B}\) is diagonal if and only if \(\basis B\) consists of eigenvectors. \([\alpha]_{\basis B}\) is upper triangular if and only if \(\alpha(v_j) \in \spans{v_1, \dots, v_j}\) for all \(j\). In particular, \(v_1\) is an eigenvector.
  \end{enumerate}
\end{remark}

\subsection{Polynomial Ring, an Aside}

Before discussing polynomials associated with a linear map, we need some background knowledge about the ambient polynomial space that we will be working with. The following results should be self-evident and proofs are omitted. Most of them will be studied in detail in IB Groups, Rings and Modules, and a proof of the Fundamental Theorem of Algebra can be found in IB Complex Analysis.

Let
\[
  \F[t] = \{\text{polynomials with coefficients in } \F\}
\]
and \(\deg f\) be the degree of \(f\) in \(\F[t]\). In addition, for the convenience of stating the following properties, we set \(\deg 0 = -\infty\). We have the following properties:
\begin{enumerate}
\item \(\deg (f + g) \leq \max(\deg f, \deg g), \deg(f g) = \deg f + \deg g\).
\item If \(\lambda \in \F\) is a root of some \(f \in \F[t]\), i.e.\ \(f(\lambda) = 0\), then \((t - \lambda) \divides f\). 
In other words, \(f(t) = (t - \lambda) g(t)\) for some \(g(t) \in \F[t]\) and \(\deg g = \deg f - 1\).
\item We say \(\lambda\) is a root of \(f \in \F[t]\) with \emph{multiplicity} \(e \in \N\) if \((t - \lambda)^e \divides f\) but \((t - \lambda)^{e + 1} \ndivides f\).
\item A polynomial of degree \(n\) has at most \(n\) roots, counted with multiplicity.
\item Fundamental Theorem of Algebra: any \(f \in \C[t]\) of positive degree has a root (hence \(\deg f\) roots).
\end{enumerate}

\subsection{Characteristic Polynomial of Endomorphism}

\begin{definition}[Characteristic polynomial]\index{polynomial!characteristic}
   The \emph{characteristic polynomial} of \(\alpha \in L(V)\) is
  \[
    \chi_\alpha(t) = \det (\alpha - t \iota).
  \]
  The \emph{characteristic polynomial} of \(A \in \M_n(\F)\) is
  \[
    \chi_A(t) = \det (A - t I).
  \]
\end{definition}

Conjugate matrices have the same characteristic polynomial.

\begin{theorem}
  A linear map \(\alpha\) is triangulable if and only if \(\chi_\alpha(t)\) can be written as a product of linear factors over \(\F\).
\end{theorem}

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(\Rightarrow\): suppose \(\alpha\) is triangulable and is represented by
    \[
      \begin{pmatrix}
        a_1 & \cdots & * \\
        & \ddots & \vdots \\
        0 & & a_n
      \end{pmatrix}
    \]
    with respect to some basis. Then
    \[
      \chi_\alpha(t) = \det
      \begin{pmatrix}
        a_1 - t & \cdots & * \\
        & \ddots & \vdots \\
        0 & & a_n -t
      \end{pmatrix}
      = \prod_{i = 1}^{n} (a_i - t)
    \]
  \item \(\Leftarrow\): induction on \(n = \dim V\): if \(n = 1\) then done. Suppose \(n > 1\) and the theorem holds for all endomorphisms of spaces of smaller dimensions. By hypothesis \(\chi_\alpha(t)\) has a root in \(\F\), say \(\lambda\). Let \(U = V_\lambda \neq 0\), then \(\alpha(U) \leq U\) so \(\alpha\) induces \(\overline \alpha: V/U \to V/U\). Pick basis \(v_1, \dots, v_k\) for \(U\) and extend it to a basis \(\basis B = \{v_1, \dots, v_n\}\) for \(V\). With respect to \(\basis B\), \(\alpha\) has representation
    \[
      \begin{pmatrix}
        \lambda I_k & * \\
        0 & C
      \end{pmatrix}
    \]
    so
    \[
      \chi_\alpha(t) = \det(\alpha - t \iota) = (\lambda - t)^k \chi_{\overline \alpha}(t).
    \]
    Thus \(\chi_{\overline \alpha}(t)\) is also a product of linear factors. Since \(\overline \alpha\) is an endomorphism of a space of strictly smaller dimension, by the induction hypothesis there is a basis \(w_{k + 1} + U, \dots, w_n + U\) for \(V/U\) with respect to which \(\overline \alpha\) has an upper-triangular matrix representation, say \(T\). Then with respect to the basis \(v_1, \dots, v_k, w_{k + 1}, \dots, w_n\), \(\alpha\) has matrix representation
    \[
      \begin{pmatrix}
        \lambda I_k & * \\
        0 & T
      \end{pmatrix}
    \]
  \end{itemize}
\end{proof}
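For instance, over \(\R\) the matrix \(\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\) has \(\chi(t) = (1 - t)^2\), a product of linear factors, and is indeed (already) upper triangular; it is however not diagonalisable, since its only eigenspace \(V_1 = \langle e_1 \rangle\) is one-dimensional. By contrast, a generic rotation is not even triangulable:

\begin{eg}
  \label{eg:rotation}
  Let \(\F = \R\), \(V = \R^2\) and \(\alpha\) be a rotation. 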
Then with respect to the standard basis \(\alpha\) has representation
  \[
    \begin{pmatrix}
      \cos \theta & \sin \theta \\
      -\sin \theta & \cos \theta
    \end{pmatrix}
  \]
  and thus \(\chi_\alpha(t) = t^2 - 2 \cos \theta t + 1\), which has no real roots (hence is irreducible over \(\R\)) unless \(\theta\) is a multiple of \(\pi\). Thus in general \(\alpha\) is not triangulable over \(\R\).
\end{eg}

\begin{lemma}
  \label{lem:determinant from characteristic}
  Let \(V\) be an \(n\)-dimensional \(\F\)-vector space and \(\alpha \in L(V)\) with \(\chi_\alpha(t) = (-1)^n t^n + c_{n - 1} t^{n - 1} + \dots + c_0\). Then
  \begin{itemize}
  \item \(c_0 = \det \alpha\),
  \item \(c_{n - 1} = (-1)^{n - 1} \tr \alpha\) for \(\F = \R\) or \(\C\).
  \end{itemize}
\end{lemma}

\begin{proof}\leavevmode
  \label{proof:determinant from characteristic}
  \begin{itemize}
  \item \(c_0 = \chi_\alpha(0) = \det (\alpha - 0) = \det \alpha\).
  \item If \(\F = \R\) we may regard a real matrix as a complex one via the extension of scalars \(\M_n(\R) \embed \M_n(\C)\) induced by \(\R \embed \C\) (i.e.\ complexification); neither \(\chi_\alpha\) nor \(\tr \alpha\) changes. For \(\F = \C\), use the Fundamental Theorem of Algebra to triangulate and write
    \[
      \chi_\alpha(t) = \det
      \begin{pmatrix}
        a_1 - t & \cdots & * \\
        & \ddots & \vdots \\
        0 & & a_n - t
      \end{pmatrix}
      = \prod_{i = 1}^{n} (a_i - t)
    \]
    where \(\sum_{i = 1}^n a_i = \tr \alpha\); expanding the product, the coefficient of \(t^{n - 1}\) is \((-1)^{n - 1}\sum_i a_i\).
  \end{itemize}
\end{proof}

\begin{notation}
  Let \(p(t)\) be a polynomial over \(\F\),
  \[
    p(t) = a_nt^n + \dots + a_0 \in \F[t].
  \]
  For \(A \in \M_n(\F)\), define
  \[
    p(A) = a_nA^n + \dots + a_0I \in \M_n(\F).
  \]
  For \(\alpha \in L(V)\), define
  \[
    p(\alpha) = a_n\alpha^n + \dots + a_0\iota \in L(V).
  \]
\end{notation}

\begin{theorem}
  Let \(V\) be a finite-dimensional \(\F\)-vector space. Let \(\alpha \in L(V)\). Then \(\alpha\) is diagonalisable if and only if \(p(\alpha) = 0\) for some \(p \in \F[t]\) which is a product of distinct linear factors.
\end{theorem}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item \(\Rightarrow\): Suppose \(\alpha\) is diagonalisable with distinct eigenvalues \(\lambda_1, \dots, \lambda_k\). Let
    \[
      p(t) = (t - \lambda_1) \cdots (t - \lambda_k).
    \]
    Let \(\basis B\) be a basis of eigenvectors. For \(v \in \basis B\), \(\alpha(v) = \lambda_i v\) for some \(i\). Thus
    \[
      0 = (\alpha - \lambda_i \iota) v \Rightarrow p(\alpha)(v) = 0.
    \]
    As this holds for all \(v \in \basis B\), \(p(\alpha) = 0\).
  \item \(\Leftarrow\): Suppose \(p(\alpha) = 0\) for this \(p\), which is monic wlog. Claim that
    \[
      V = \bigoplus_{i = 1}^k V_{\lambda_i}
    \]
    \begin{proof}
      For \(j = 1, \dots, k\), let
      \[
        q_j(t) = \prod_{i = 1, i \neq j}^k \frac{t - \lambda_i}{\lambda_j - \lambda_i}
      \]
      and \(q(t) = \sum_{j = 1}^k q_j(t)\). \(q(t)\) has degree at most \(k - 1\) and \(q(\lambda_i) = 1\) for all \(i = 1, \dots, k\); since \(q - 1\) has degree at most \(k - 1\) but \(k\) roots, \(q(t) = 1\).

      Let \(\pi_j = q_j(\alpha): V \to V\). 
By construction
      \[
        \sum_{j = 1}^{k} \pi_j = q(\alpha) = \iota \in L(V)
      \]
      so given \(v \in V\),
      \[
        v = q(\alpha) (v) = \sum_{j = 1}^{k} \pi_j(v).
      \]
      Also
      \[
        (\alpha - \lambda_j \iota)(\pi_j(v)) = (\alpha - \lambda_j\iota)(q_j(\alpha)(v)) = \frac{1}{\prod_{i \neq j}^{ } (\lambda_j - \lambda_i)} \underbrace{p(\alpha)}_{= 0}(v) = 0.
      \]
      We have thus shown that
      \[
        \im \pi_j \leq \ker(\alpha - \lambda_j \iota) = V_{\lambda_j}.
      \]
      Thus \(V = \sum_{j = 1}^{k} V_{\lambda_j}\).

      To prove the sum is direct, suppose \(v \in V_{\lambda_j} \cap (\sum_{i \neq j}^k V_{\lambda_i})\) and apply \(\pi_j\) to \(v\):
      \begin{align*}
        v \in V_{\lambda_j} &\Rightarrow \pi_j(v) = \prod_{i = 1, i \neq j}^{k} \frac{\lambda_j - \lambda_i}{\lambda_j - \lambda_i} v = v \\
        v \in \sum_{i \neq j}^{k}V_{\lambda_i} &\Rightarrow \pi_j(v) = 0
      \end{align*}
        so \(v = 0\) and the sum is direct.

        Now take the union of bases for \(V_{\lambda_i}\) as a basis for \(V\).
    \end{proof}
  \end{enumerate}
\end{proof}

\begin{remark}\leavevmode
  \begin{enumerate}
  \item Morally speaking, \(\pi_j\) ``projects'' \(V\) to \(V_{\lambda_j}\).
  \item The proof shows that for \(k\) distinct eigenvalues \(\lambda_1, \dots \lambda_k\) of \(\alpha\), the sum \(\sum_j V_{\lambda_j}\) is direct. The only way for diagonalisation to fail is if \(\sum_j V_{\lambda_j} \lneq V\).
  \end{enumerate}
\end{remark}

\begin{corollary}
  Suppose \(A \in \M_n(\C)\) has finite order. Then \(A\) is diagonalisable. 
\end{corollary}

\begin{proof}
  \(p(A) = 0\) for \(p(t) = t^m - 1\) where \(m\) is the order of \(A\). This factorises as \(\prod_{i = 0}^{m - 1} (t - \xi^i)\) where \(\xi\) is a primitive \(m\)th root of unity.
\end{proof}

\begin{theorem}[Simultaneous diagonalisation]
  Let \(\alpha, \beta \in L(V)\) be diagonalisable. Then \(\alpha\) and \(\beta\) are \emph{simultaneously diagonalisable} (there exists a basis with respect to which they are both diagonal) if and only if \(\alpha\) and \(\beta\) commute.
\end{theorem}

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(\Rightarrow\): Suppose there is a basis \(\basis B\) such that \(A = [\alpha]_{\basis B}\) and \(B = [\beta]_{\basis B}\) are diagonal. Any two diagonal matrices commute so \(AB = BA\), \(\alpha\beta = \beta\alpha\).
  \item \(\Leftarrow\): Suppose \(\alpha\) and \(\beta\) commute and both are diagonalisable. We have
    \[
      V = V_1 \oplus \cdots \oplus V_k
    \]
    where \(V_i = \ker(\alpha - \lambda_i\iota)\). Claim that \(\beta(V_j) \leq V_j\): suppose \(v \in V_j\),
    \[
      \alpha\beta (v) = \beta\alpha (v) = \beta(\lambda_j v) = \lambda_j \beta(v).
    \]

    As \(\beta\) is diagonalisable, there is a polynomial \(p\) with distinct linear factors such that \(p(\beta) = 0\). Now
    \[
      p(\beta|_{V_i}) = p(\beta)|_{V_i} = 0
    \]
    so \(\beta|_{V_i} \in L(V_i)\) is diagonalisable. Pick a basis \(\basis B_i\) of \(V_i\) consisting of eigenvectors of \(\beta\). By construction these are also eigenvectors for \(\alpha\). 
With respect to \(\basis B = \bigcup_i \basis B_i\) both \(\alpha\) and \(\beta\) are diagonal.
  \end{itemize}
\end{proof}

\begin{lemma}[{\(\F[t]\)} as a Euclidean domain]
  Given \(a, b \in \F[t]\) with \(b \neq 0\), there exist \(q, r \in \F[t]\) with \(\deg r < \deg b\) and \(a = qb + r\).
\end{lemma}

\begin{proof}
  IB Groups, Rings and Modules.
\end{proof}

\begin{definition}[Minimal polynomial]\index{polynomial!minimal}
  Suppose \(\alpha \in L(V)\) and \(V\) is finite-dimensional. The \emph{minimal polynomial} of \(\alpha\), \(m_\alpha\), is the monic non-zero polynomial of smallest degree such that
  \[
    m_\alpha(\alpha) = 0.
  \]
\end{definition}

\begin{remark}
  Let \(\dim V = n < \infty\), \(\dim L(V) = n^2\) so
  \[
    \iota, \alpha, \alpha^2, \dots, \alpha^{n^2} \in L(V)
  \]
  must be linearly dependent, so there is a non-trivial relation. Thus a minimal polynomial exists.
\end{remark}

\begin{lemma}
  Let \(\alpha \in L(V)\), \(p \in \F[t]\). Then \(p(\alpha) = 0\) if and only if \(m_\alpha(t) \divides p(t)\).
\end{lemma}

\begin{proof}
  By the Euclidean algorithm there exist \(q, r \in \F[t]\) such that
  \[
    p(t) = m_\alpha(t) q(t) + r(t)
  \]
  where \(\deg r < \deg m_\alpha\). Then
  \[
    0 = p(\alpha) = m_\alpha(\alpha) q(\alpha) + r(\alpha)
  \]
  so \(r(\alpha) = 0\). By the minimality of the degree of \(m_\alpha\), \(r = 0\).
\end{proof}

\begin{corollary}
  \(m_\alpha\) is uniquely defined.
\end{corollary}

\begin{proof}
  Suppose \(m_1\) and \(m_2\) are both minimal polynomials of \(\alpha\). Then by the previous lemma \(m_1 \divides m_2\) and vice versa. By assumption both of them are monic so \(m_1 = m_2\).
\end{proof}

\begin{theorem}[Cayley-Hamilton Theorem]
  Let \(V\) be a finite-dimensional \(\F\)-vector space. Let \(\alpha \in L(V)\). Then
  \[
    \chi_\alpha(\alpha) = 0.
  \]
\end{theorem}

First we give a proof for \(\F = \C\):

\begin{proof}
  Since \(\F = \C\), \(\chi_\alpha\) is a product of linear factors, so \(\alpha\) is triangulable: for some basis \(\basis B = \{v_1, \dots, v_n\}\), \(\alpha\) has matrix representation
  \[
    [\alpha]_{\basis B} =
    \begin{pmatrix}
      a_1 & \cdots & * \\
      & \ddots & \vdots \\
      0 & & a_n
    \end{pmatrix}
  \]
  Let \(U_j = \spans{v_1, \dots, v_j}\). Then \((\alpha - a_j \iota) U_j \leq U_{j - 1}\) so
  \[
    \underbrace{(\alpha - a_1 \iota) (\alpha - a_2 \iota) \cdots \underbrace{(\alpha - a_{n - 1} \iota) \underbrace{(\alpha - a_n \iota) V}_{\leq U_{n - 1}}}_{\leq U_{n - 2}}}_{\leq (\alpha - a_1\iota) U_1 = 0} = 0
  \]
  so
  \[
    \chi_\alpha(\alpha) = 0.
  \]
\end{proof}

However, this proof is unsatisfactory in that it relies on \(\C\) being algebraically closed, which is, at least in part, an analytic rather than an algebraic property\footnote{While being algebraically closed is an algebraic property, the construction of \(\C\) from \(\Q\) via \(\R\) is not. The point here is that Cayley-Hamilton holds for all fields, not just algebraically closed ones.}. In actuality, Cayley-Hamilton is a general result that applies to all fields. 
Thus we give an alternative algebraic proof:

\begin{proof}(Non-examinable)
  Let \(A \in \M_n(\F)\), then
  \[
    \chi_A(t) \cdot (-1)^n = t^n + a_{n - 1}t^{n - 1} + \dots + a_0 = \det(tI - A).
  \]
  For any matrix \(B\), we have
  \[
    B \adj B = \det B \cdot I.
  \]
  Let \(B = tI - A\). Then \(\adj B\) is a matrix whose entries are polynomials in \(t\) of degree smaller than \(n\), so we may regard \(\adj B\) as a polynomial in \(t\) with coefficients in \(\M_n(\F)\).

  Thus
  \[
    (tI - A) \underbrace{(B_{n - 1}t^{n - 1} + \cdots + B_1 t + B_0)}_{\adj B} = \underbrace{(t^n + a_{n - 1}t^{n - 1} + \cdots + a_0)}_{\det B} I
  \]
  Equating the coefficients of each power of \(t\),
  \begin{align*}
    I &= B_{n - 1} \\
    a_{n - 1} I &= B_{n - 2} - AB_{n - 1} \\
    & \vdots \\
    a_0 I &= -AB_0
  \end{align*}
  multiply the \(i\)th row by \(A^{n - i + 1}\)
  \begin{align*}
    A^n &= A^n B_{n - 1} \\
    a_{n - 1} A^{n - 1} &= A^{n - 1} B_{n - 2} - A^n B_{n - 1} \\
    & \vdots \\
    a_0 I &= -AB_0
  \end{align*}
  and add them up; the right-hand sides telescope to \(0\), giving
  \[
    A^n + a_{n - 1}A^{n - 1} + \dots + a_1 A + a_0 I = 0.
  \]
\end{proof}

\begin{definition}[Algebraic multiplicity]\index{multiplicity!algebraic}
  Let \(\lambda\) be an eigenvalue of \(\alpha \in L(V)\) where \(V\) is a finite-dimensional \(\F\)-vector space. Write
  \[
    \chi_\alpha(t) = (t - \lambda)^{a_\lambda} q(t)
  \]
  for some \(q(t) \in \F[t]\) and \((t - \lambda) \ndivides q(t)\). \(a_\lambda\) is the \emph{algebraic multiplicity} of \(\lambda\) as an eigenvalue of \(\alpha\).
\end{definition}

\begin{definition}[Geometric multiplicity]\index{multiplicity!geometric}
  \(g_\lambda = n(\alpha - \lambda \iota)\) is the \emph{geometric multiplicity} of \(\lambda\) as an eigenvalue of \(\alpha\).
\end{definition}

\begin{lemma}
  If \(\lambda\) is an eigenvalue then
  \[
    1 \leq g_\lambda \leq a_\lambda.
  \]
\end{lemma}

\begin{proof}
  \(1 \leq g_\lambda\) since \(\alpha - \lambda \iota\) is singular.

  Let \(g = g_\lambda\) and let \(\basis B = \{v_1, \dots, v_n\}\) be a basis of \(V\) such that \(\{v_1, \dots, v_g\}\) is a basis of \(\ker(\alpha - \lambda \iota)\). Then
  \[
    [\alpha]_{\basis B} =
    \begin{pmatrix}
      \lambda I_g & * \\
      0 & A_1
    \end{pmatrix}
  \]
  where \(A_1 \in \M_{n - g}(\F)\). Thus
  \[
    \chi_\alpha(t) = (\lambda - t)^g \chi_{A_1}(t)
  \]
  and \(g_\lambda \leq a_\lambda\).
\end{proof}
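For instance, \(A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\) has the single eigenvalue \(2\) with \(a_2 = 2\) but \(g_2 = n(A - 2I) = 1\), so the inequality can be strict.

\begin{lemma}
  Let \(\lambda\) be an eigenvalue. Let \(c_\lambda\) be the multiplicity of \(\lambda\) as a root of \(m_\alpha\). Then
  \[
    1 \leq c_\lambda \leq a_\lambda.
  \]
\end{lemma}

\begin{proof}
  As \(m_\alpha \divides \chi_\alpha\), \(c_\lambda \leq a_\lambda\).

  As \(\lambda\) is an eigenvalue, \(\alpha(v) = \lambda v\) for some \(v \neq 0\). Now given \(p \in \F[t]\), \(p(\alpha)(v) = p(\lambda)v\). 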
Applying this to \(m_\alpha\),
  \[
    0 = m_\alpha(\alpha)(v) = m_\alpha(\lambda)v
  \]
  and since \(v \neq 0\), \(m_\alpha(\lambda) = 0\), so \(c_\lambda \geq 1\).
\end{proof}

\begin{eg}\leavevmode
  \begin{enumerate}
  \item Let
  \[
    A =
    \begin{pmatrix}
      1 & 0 & -2 \\
      0 & 1 & 1 \\
      0 & 0 & 2
    \end{pmatrix}
  \]
  then
  \[
    \chi_A(t) = \det(A - tI) = (2 - t)(1 - t)^2.
  \]
  There are two candidates for the minimal polynomial:
  \begin{itemize}
  \item \((t - 2)(t - 1)^2\),
  \item \((t - 2)(t - 1)\).
  \end{itemize}
  We can check that \((A - I)(A - 2I) = 0\) so the second one is the minimal polynomial. It follows that \(A\) is diagonalisable.
\item Let
  \[
    A =
    \begin{pmatrix}
      \lambda & 1 & & \\
      & \lambda & 1 &  \\
      & & \ddots & 1 \\
      & & & \lambda
    \end{pmatrix}
  \]
  which has \(g_\lambda = 1, a_\lambda = n, c_\lambda = n\).
  \end{enumerate}
\end{eg}

\begin{lemma}
  Let \(\alpha \in L(V)\), then TFAE:
  \begin{enumerate}
  \item \(\alpha\) is diagonalisable,
  \item \(a_\lambda = g_\lambda\) for all eigenvalues \(\lambda\),
  \item if \(\F = \C\), \(c_\lambda = 1\) for all eigenvalues \(\lambda\).
  \end{enumerate}
\end{lemma}

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(1 \Leftrightarrow 2\): Let \(\lambda_1, \dots, \lambda_k\) be eigenvalues of \(\alpha\). Then \(\alpha\) is diagonalisable if and only if \(V = \bigoplus_i V_{\lambda_i}\). Take dimensions of both sides,
    \begin{align*}
      \dim V &= n = \deg \chi_\alpha = a_1 + \dots + a_k \\
      \dim \bigoplus_i V_{\lambda_i} &= g_1 + \dots + g_k
    \end{align*}
    But \(g_i \leq a_i\) for all \(i\) so \(\alpha\) is diagonalisable if and only if \(g_i = a_i\) for all \(i\).
  \item \(1 \Leftrightarrow 3\): By the Fundamental Theorem of Algebra, \(m_\alpha\) is a product of linear factors. \(\alpha\) is diagonalisable if and only if these are all distinct, i.e. 
\(c_\lambda = 1\) for all eigenvalues \(\lambda\).
  \end{itemize}
\end{proof}

\begin{remark}
  Over \(\C\),
  \begin{align*}
    \chi_\alpha(t) &= (\lambda_1 - t)^{a_1} \cdots (\lambda_k - t)^{a_k} \\
    m_\alpha(t) &= (t - \lambda_1)^{c_1} \cdots (t - \lambda_k)^{c_k}
  \end{align*}
  with \(1 \leq c_i \leq a_i\).
\end{remark}

\begin{definition}[Jordan normal form]\index{Jordan normal form}
  \(A \in \M_n(\F)\) is in \emph{Jordan normal form} if it is a block diagonal matrix
  \[
    A =
    \begin{pmatrix}
      J_{n_1}(\lambda_1) & & & \\
      & J_{n_2}(\lambda_2) & & \\
      & & \ddots & \\
      & & & J_{n_k}(\lambda_k)
    \end{pmatrix}
  \]
  where \(k \geq 1\), \(n_1, \dots, n_k \in \N\) with \(\sum_i n_i = n\), \(\lambda_i \in \F\) not necessarily distinct and
  \[
    J_m(\lambda) =
    \begin{pmatrix}
      \lambda & 1 & & \\
      & \lambda & 1 & \\
      & & \ddots & 1 \\
      & & & \lambda
    \end{pmatrix}
    \in M_m(\F)
  \]
  is a \emph{Jordan block}.
\end{definition}

\begin{theorem}
  \label{thm:jordan normal form}
  Every \(A \in \M_n(\C)\) is similar to a matrix in Jordan normal form, unique up to reordering the Jordan blocks.
\end{theorem}

\begin{proof}(Non-examinable)
  It is a consequence of a main theorem on modules in IB Groups, Rings and Modules.
\end{proof}

In the rest of this section assume \(\F = \C\) unless stated otherwise.

\begin{eg}\leavevmode
  \begin{enumerate}
  \item Classification of Jordan normal forms for \(\M_2(\C)\) (here the \(\lambda_i\) are distinct), with the corresponding minimal polynomial listed below each form:
  \[
    \begin{array}[h]{c|c|c}
      \begin{psmallmatrix}
        \lambda_1 & \\
        & \lambda_2
      \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda & \\
            & \lambda
          \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda & 1 \\
            & \lambda
          \end{psmallmatrix}
      \\ \hline
      (t - \lambda_1)(t - \lambda_2) & t - \lambda & (t - \lambda)^2
    \end{array}
  \]
\item Classification of Jordan normal forms for \(\M_3(\C)\), in the same format:
  \[
    \begin{array}[h]{c|c|c}
      \begin{psmallmatrix}
        \lambda_1 & & \\
        & \lambda_2 & \\
        & & \lambda_3
      \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda_1 & & \\
            & \lambda_2 & \\
            & & \lambda_2
          \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda & & \\
            & \lambda & \\
            & & \lambda
          \end{psmallmatrix}
      \\ \hline
      (t - \lambda_1)(t - \lambda_2)(t - \lambda_3) & (t - \lambda_1)(t - \lambda_2) & t - \lambda \\ \hline \hline
      & & \\
      \begin{psmallmatrix}
        \lambda_1 & & \\
        & \lambda_2 & 1 \\
        & & \lambda_2
      \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda & & \\
            & \lambda & 1 \\
            & & \lambda
          \end{psmallmatrix}
        &
          \begin{psmallmatrix}
            \lambda & 1 & \\
            & \lambda & 1 \\
            & & \lambda
          \end{psmallmatrix}
      \\ \hline
      (t - \lambda_1)(t - \lambda_2)^2 & (t - \lambda)^2 & (t - \lambda)^3
    \end{array}
  \]
\end{enumerate}
\end{eg}

\begin{theorem}[Generalised Eigenspace Decomposition]
  \label{thm:generalised eigenspace decomposition}
  Let \(V\) be a finite-dimensional \(\C\)-vector space and \(\alpha \in L(V)\). Suppose that
  \[
    m_\alpha(t) = (t - \lambda_1)^{c_1} \cdots (t - \lambda_k)^{c_k}
  \]
  where the \(\lambda_i\)'s are distinct. Then
  \[
    V = \bigoplus_j V_j
  \]
  where
  \[
    V_j = N\big((\alpha - \lambda_j \iota)^{c_j}\big)
  \]
  is the \emph{generalised eigenspace}.
\end{theorem}

\begin{proof}[Sketch of proof]
  Let
  \[
    p_j(t) = \prod_{i \neq j}^{ } (t - \lambda_i)^{c_i}.
  \]
  The \(p_j\) have no common factor so by the Euclidean algorithm we can find \(q_1, \dots, q_k \in \C[t]\) such that
  \[
    \sum_{j}^{ } p_j(t)q_j(t) = 1.
  \]
  Let \(\pi_j = q_j(\alpha)p_j(\alpha) \in L(V)\). Note \(\sum_{j = 1}^{k} \pi_j = \iota\).
  \begin{enumerate}
  \item As \(m_\alpha(\alpha) = 0\), \((\alpha - \lambda_j \iota)^{c_j} \pi_j = 0\) so \(\im \pi_j \leq V_j\).
  \item Suppose \(v \in V\), \(v = \iota(v) = \sum \pi_j(v)\) so \(V = \sum_j V_j\).
  \item To show the sum is direct, note \(\pi_i \pi_j = 0\) for \(i \neq j\) (since \(m_\alpha \divides p_ip_j\)), so
    \[
      \pi_i = \pi_i \left( \sum_{j = 1}^{k} \pi_j \right) = \pi_i^2
    \]
    i.e.\ \(\pi_i\) is a projection. Then
    \[
      \pi_i |_{V_j} =
      \begin{cases}
        \id & i = j \\
        0 & i \neq j
      \end{cases}
    \]
    Directness follows.
  \end{enumerate}
\end{proof}

\begin{remark}\leavevmode
  \begin{enumerate}
  \item We can use \nameref{thm:generalised eigenspace decomposition} to reduce the proof of \Cref{thm:jordan normal form} to a single eigenvalue.
  \item Considering \(\alpha - \lambda \iota\) reduces further to the case of eigenvalue \(0\).
  \end{enumerate}
\end{remark}
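For instance (a small hand-picked illustration):

\begin{eg}
  Let
  \[
    A =
    \begin{pmatrix}
      1 & 1 & 0 \\
      0 & 1 & 0 \\
      0 & 0 & 2
    \end{pmatrix}
  \]
  so \(m_A(t) = (t - 1)^2(t - 2)\). Then \(V_1 = N((A - I)^2) = \langle e_1, e_2 \rangle\) and \(V_2 = N(A - 2I) = \langle e_3 \rangle\), and indeed \(\C^3 = V_1 \oplus V_2\).
\end{eg}

\begin{lemma}
  Let \(\alpha \in L(V)\) with Jordan normal form \(A \in \M_n(\C)\). 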
\begin{eg}
  Let
  \[
    A =
    \begin{pmatrix}
      0 & -1 \\
      1 & 2
    \end{pmatrix}
  \]
  We want to find a basis \(\basis B = \{v_1, v_2\}\) with respect to which \(A\) is in Jordan normal form.
  \begin{enumerate}
  \item
    \[
      \chi_A(t) =
      \begin{vmatrix}
        -t & -1 \\
        1 & 2 - t
      \end{vmatrix}
      = t^2 - 2t + 1 = (t - 1)^2
    \]
    There are two possibilities:
    \begin{enumerate}
    \item \(m_A(t) = t - 1\). Then the Jordan normal form is
      \[
        \begin{pmatrix}
          1 & 0 \\
          0 & 1
        \end{pmatrix}
      \]
    \item \(m_A = (t - 1)^2\). Then the Jordan normal form is
      \[
        \begin{pmatrix}
          1 & 1 \\
          0 & 1
        \end{pmatrix}
      \]
    \end{enumerate}
    A trick here is to note that if \(A\) is conjugate to \(I\) then \(A = I\); since \(A \neq I\), (b) holds.
  \item The eigenspace is spanned by \(v_1 = \binom{1}{-1}\).
  \item \(v_2\) satisfies \((A - I)v_2 = v_1\) so
    \[
      \begin{pmatrix}
        -1 & -1 \\
        1 & 1
      \end{pmatrix}
      v_2 =
      \begin{pmatrix}
        1 \\
        -1
      \end{pmatrix}
    \]
    so \(v_2 = \binom{-1}{0}\).
    Note that \(v_2\) is not unique.
  \item Finally,
    \[
      A =
      \begin{pmatrix}
        1 & -1 \\
        -1 & 0
      \end{pmatrix}
      \begin{pmatrix}
        1 & 1 \\
        0 & 1
      \end{pmatrix}
      \begin{pmatrix}
        1 & -1 \\
        -1 & 0
      \end{pmatrix}
      ^{-1}
    \]
  \end{enumerate}
\end{eg}

We can use the Jordan normal form to calculate powers of matrices: writing \(A = PJP^{-1}\) as above,
\[
  A^n = (PJP^{-1})^n = PJ^nP^{-1} = P
  \begin{pmatrix}
    1 & n \\
    0 & 1
  \end{pmatrix}
  P^{-1}
\]
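For the matrix in the example this comes out explicitly as follows (a quick check, not in the original notes; since \((A - I)^2 = 0\) it can be verified independently via \(A^n = (I + (A - I))^n = I + n(A - I)\)):
\[
  A^n =
  \begin{pmatrix}
    1 & -1 \\
    -1 & 0
  \end{pmatrix}
  \begin{pmatrix}
    1 & n \\
    0 & 1
  \end{pmatrix}
  \begin{pmatrix}
    1 & -1 \\
    -1 & 0
  \end{pmatrix}^{-1}
  =
  \begin{pmatrix}
    1 - n & -n \\
    n & 1 + n
  \end{pmatrix}.
\]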
\begin{remark}
  In Jordan normal form,
  \begin{itemize}
  \item \(a_\lambda\) is the total number of times that \(\lambda\) appears in the diagonal.
  \item \(g_\lambda\) is the number of \(\lambda\)-Jordan blocks.
  \item \(c_\lambda\) is the size of the largest \(\lambda\)-Jordan block.
  \end{itemize}
\end{remark}

\section{Bilinear Form II}

\subsection{Symmetric Bilinear Forms}

In this chapter we are going to study a special bilinear form (and variants thereof) in detail. Let \(\varphi: V \times V \to \F\) be a bilinear form on \(V\) and assume we take the same basis for both factors of \(V\), say \(\basis B\). Therefore if \(\dim V < \infty\), \(\varphi\) has matrix representation
\[
  [\varphi]_{\basis B} = [\varphi]_{\basis B, \basis B}.
\]

Recall that
\begin{lemma}
  Let \(\varphi: V \times V \to \F\) be a bilinear form with \(\dim V < \infty\), and let \(\basis B\) and \(\basis B'\) be two bases for \(V\). Let \(P = [\id]_{\basis B', \basis B}\). Then
  \[
    [\varphi]_{\basis B'} = P^T[\varphi]_{\basis B}P.
  \]
\end{lemma}

\begin{proof}
  Special case of \Cref{prop:change of basis of bilinear form}.
\end{proof}

This motivates us to define a relation on \(\M_n(\F)\)

\begin{definition}[Congruency]\index{matrix!congruency}
  \(A, B \in \M_n(\F)\) are \emph{congruent} if
  \[
    A = P^TBP
  \]
  for some invertible \(P\).
\end{definition}

\begin{note}
  This is an equivalence relation.
\end{note}

Naturally, we want to find nice forms to which a general bilinear form is congruent. Certainly the nicest form we can have is a diagonal matrix. It turns out that the property we require for a bilinear form to be ``diagonalisable'' is the following:

\begin{definition}[Symmetric]\index{bilinear!symmetric}
  A bilinear form \(\varphi\) on \(V\) is \emph{symmetric} if
  \[
    \varphi(u, v) = \varphi(v, u)
  \]
  for all \(u, v \in V\).
\end{definition}

\begin{note}\leavevmode
  \begin{itemize}
  \item \(A \in \M_n(\F)\) is symmetric if \(A = A^T\). Then \(\varphi\) is symmetric if and only if \([\varphi]_{\basis B}\) is symmetric for any basis \(\basis B\), if and only if \([\varphi]_{\basis B}\) is symmetric for one \(\basis B\).
  \item To be able to be represented by a diagonal matrix, \(\varphi\) needs to be symmetric:
    \[
      [\varphi]_{\basis B} = P^TAP = D \Rightarrow D^T = D = P^TA^TP \Rightarrow A = A^T
    \]
  \end{itemize}
\end{note}

\begin{definition}[Quadratic form]\index{quadratic form}
  A map \(Q: V \to \F\) is a \emph{quadratic form} if there is a bilinear form \(\varphi\) on \(V\) such that
  \[
    Q(v) = \varphi(v, v)
  \]
  for all \(v \in V\).
\end{definition}

\begin{eg}
  Let \(V = \R^2\). A general quadratic form is
  \[
    \begin{pmatrix}
      x \\
      y
    \end{pmatrix}
    \mapsto
    \begin{pmatrix}
      x & y
    \end{pmatrix}
    %
    \begin{pmatrix}
      a & b \\
      c & d
    \end{pmatrix}
    %
    \begin{pmatrix}
      x \\
      y
    \end{pmatrix}
    = ax^2 + (b + c) xy + dy^2
  \]
\end{eg}

\begin{remark}
  A quadratic form does not change under \(A \mapsto \frac{1}{2}(A + A^T)\) where \(A\) is a representation of the inducing bilinear form.
\end{remark}

\begin{proposition}
  Assume \(\ch \F \neq 2\). If \(Q: V \to \F\) is a quadratic form then there exists a unique symmetric bilinear form \(\varphi\) on \(V\) such that
  \[
    Q(v) = \varphi(v, v)
  \]
  for all \(v \in V\).
\end{proposition}

\begin{proof}
  First we prove the existence. Let \(\psi\) be a bilinear form on \(V\) such that \(Q(v) = \psi(v, v)\) for all \(v \in V\). We want to construct a symmetric bilinear form by adding \(\psi\) and its transpose. Let
  \[
    \varphi(u, v) = \frac{1}{2}(\psi(u, v) + \psi(v, u)).
  \]
  (this is where we require \(\ch \F \neq 2\)) then it is bilinear and symmetric and
  \[
    \varphi(v, v) = \psi(v, v) = Q(v).
  \]

  To show the uniqueness, suppose \(\varphi\) is such a symmetric bilinear form. Consider
  \begin{align*}
    Q(u + v) &= \varphi(u + v, u + v) \\
             &= \varphi(u, u) + \varphi(u, v) + \varphi(v, u) + \varphi(v, v) \\
             &= Q(u) + 2\varphi(u, v) + Q(v)
  \end{align*}
  Rearranging,
  \[
    \varphi(u, v) = \frac{1}{2}(Q(u + v) - Q(u) - Q(v))
  \]
  which is uniquely determined.
\end{proof}

\begin{remark}
  The last identity
  \[
    \varphi(u, v) = \frac{1}{2}(Q(u + v) - Q(u) - Q(v))
  \]
  is called the \emph{polarisation identity} and will appear later.
\end{remark}

The note after the definition of symmetric bilinear form shows that being symmetric is a necessary condition for a bilinear form to be ``diagonalisable''. The following theorem says that it is also sufficient:

\begin{theorem}
  Let \(\varphi\) be a symmetric bilinear form on \(V\), an \(\F\)-vector space, and assume \(\ch \F \neq 2\) and \(\dim V < \infty\). Then there is a basis \(\basis B\) of \(V\) such that \([\varphi]_{\basis B}\) is diagonal.
\end{theorem}

\begin{proof}
  Induction on \(n = \dim V\). If \(n = 0\) or \(1\) then obviously true.

  Suppose the theorem holds for all spaces of dimension smaller than \(n\). There are two cases to consider:
  \begin{enumerate}
  \item if \(\varphi(u, u) = 0\) for all \(u\) then by the polarisation identity \(\varphi = 0\) so diagonal.
  \item otherwise choose \(e_1 \in V\) such that \(\varphi(e_1, e_1) \neq 0\). Let
    \[
      U = \spans{e_1}^\perp = \{u \in V: \varphi(e_1, u) = 0\} = \ker (\varphi(e_1, -): V \to \F)
    \]
    which has dimension \(n - 1\) by rank-nullity. Moreover, \(V = \spans{e_1} \oplus U\) since \(\spans{e_1} \cap U = 0\) and \(\dim(\spans{e_1} \oplus U) = n\). Consider \(\varphi|_U: U \times U \to \F\) which is also symmetric bilinear. By the induction hypothesis there is a basis \(e_2, \dots, e_n\) of \(U\) with respect to which \(\varphi|_U\) is diagonal.
    Now \(\varphi\) is diagonal with respect to \(e_1, \dots, e_n\).
  \end{enumerate}
\end{proof}

\begin{notation}
  In \(V = \R^n\) with standard basis \(e_1, \dots, e_n\), write
  \[
    Q(x_1, x_2, \dots, x_n) = Q \left(\sum_{i = 1}^n x_i e_i \right).
  \]
\end{notation}

\begin{eg}
  Let \(V = \R^3\) with standard basis \(e_1, e_2, e_3\) and
  \[
    Q(x_1, x_2, x_3) = x_1^2 + x_2^2 + 2x_3^2 + 2x_1x_2 + 2x_1x_3 -2x_2x_3.
  \]
  We want a basis \(f_1, f_2, f_3\) of \(\R^3\) such that
  \[
    Q(af_1 + bf_2 + cf_3) = \lambda a^2 + \mu b^2 + \nu c^2
  \]
  for some \(\lambda, \mu, \nu \in \R\), which are the diagonal entries.

  The matrix representation of \(Q\) with respect to \(e_1, e_2, e_3\) is
  \[
    A =
    \begin{pmatrix}
      1 & 1 & 1 \\
      1 & 1 & -1 \\
      1 & -1 & 2
    \end{pmatrix}
  \]

  We could use the algorithm as outlined in the induction proof above but choose to do it differently by completing the square:
    \begin{align*}
      Q(x_1, x_2, x_3) &= \underbrace{(x_1 + x_2 + x_3)^2}_{\text{used all terms in } x_1} + x_3^2 - 4x_2x_3 \\
                       &= (x_1 + x_2 + x_3)^2 + \underbrace{(x_3 - 2x_2)^2}_{\text{used all terms in } x_3} - (2x_2)^2
    \end{align*}
    From here we can read off the diagonal matrix and the basis: for some \(P\),
    \[
      P^TAP =
      \begin{pmatrix}
        1 & 0 & 0 \\
        0 & 1 & 0 \\
        0 & 0 & -1
      \end{pmatrix}
    \]
    To find \(P\), note that
    \[
      \begin{pmatrix}
        x_1' \\
        x_2' \\
        x_3'
      \end{pmatrix}
      =
      \underbrace{
      \begin{pmatrix}
        1 & 1 & 1 \\
        0 & -2 & 1 \\
        0 & 2 & 0 
      \end{pmatrix}
    }_{P^{-1}}
    %
    \begin{pmatrix}
      x_1 \\
      x_2 \\
      x_3
    \end{pmatrix}
    \]
\end{eg}

\begin{corollary}
  Let \(\varphi\) be a symmetric bilinear form on \(V\), a finite-dimensional \(\C\)-vector space. Then there is a basis \(\basis B = \{v_1, \dots, v_n\}\) of \(V\) such that
  \[
    [\varphi]_{\basis B} =
    \begin{pmatrix}
      I_r & 0 \\
      0 & 0
    \end{pmatrix}
  \]
  where \(r = r(\varphi)\).
\end{corollary}

\begin{proof}
  Pick a basis \(\basis E = \{e_1, \dots, e_n\}\) such that
  \[
    [\varphi]_{\basis E} =
    \begin{pmatrix}
      a_1 & & \\
      & \ddots & \\
      & & a_n
    \end{pmatrix}
  \]
  Reorder the \(e_i\)'s such that
  \[
    \begin{cases}
      a_i \neq 0 & 1 \leq i \leq r \\
      a_i = 0 & i > r
    \end{cases}
  \]
  For \(i \leq r\), pick a complex square root of \(a_i\), say \(\sqrt{a_i}\).
  Now let
  \[
    v_i =
    \begin{cases}
      \frac{e_i}{\sqrt{a_i}} & 1 \leq i \leq r \\
      e_i & i > r
    \end{cases}
  \]
\end{proof}

\begin{corollary}
  Every symmetric matrix \(A \in \M_n(\C)\) is congruent to a unique matrix of the form \(\begin{psmallmatrix} I_r & 0 \\ 0 & 0 \end{psmallmatrix}\).
\end{corollary}

Equivalently,
\[
  Q \left( \sum_{i = 1}^{n} \lambda_i v_i \right) = \sum_{i = 1}^{r} \lambda_i^2.
\]

We have derived a corollary for our favourite field \(\C\), and there is another one corresponding to our second favourite \(\R\):

\begin{corollary}
  Let \(\varphi\) be a symmetric bilinear form on \(V\), a finite-dimensional \(\R\)-vector space. There is a basis \(\basis B = \{v_1, \dots, v_n\}\) such that
  \[
    [\varphi]_{\basis B} =
    \begin{pmatrix}
      I_p & & \\
      & -I_q & \\
      & & 0
    \end{pmatrix}
  \]
  where \(p, q \geq 0\) and \(p + q = r(\varphi)\).
\end{corollary}

\begin{proof}
  The proof is the same as for \(\C\) up to the point of exhibiting a basis with respect to which \(\varphi\) is diagonal. Note that we \emph{cannot} choose a square root for all the entries. Instead, reorder the indices such that
  \[
    \begin{cases}
      a_i > 0 & 1 \leq i \leq p \\
      a_i < 0 & p + 1 \leq i \leq p + q \\
      a_i = 0 & i > p + q
    \end{cases}
  \]
  and let
  \[
    v_i =
    \begin{cases}
      \frac{e_i}{\sqrt{a_i}} & 1 \leq i \leq p \\
      \frac{e_i}{\sqrt{-a_i}} & p + 1 \leq i \leq p + q \\
      e_i & i > p + q
    \end{cases}
  \]
\end{proof}

Equivalently,
\[
  Q \left( \sum_{i = 1}^{n} \lambda_i v_i \right) = \sum_{i = 1}^{p} \lambda_i^2 - \sum_{i = p + 1}^{p + q} \lambda_i^2.
\]

\begin{definition}[Positive/Negative (semi-)definiteness]\index{matrix!positive definite}
  A symmetric bilinear form \(\varphi\) on a real vector space \(V\) is
  \begin{itemize}
  \item \emph{positive definite} if \(\varphi(u, u) > 0\) for all \(u \in V\setminus \{0\}\).
  \item \emph{positive semi-definite} if \(\varphi(u, u) \geq 0\) for all \(u \in V\setminus \{0\}\).
  \item \emph{negative definite} if \(\varphi(u, u) < 0\) for all \(u \in V\setminus \{0\}\).
  \item \emph{negative semi-definite} if \(\varphi(u, u) \leq 0\) for all \(u \in V\setminus \{0\}\).
  \item \emph{indefinite} if none of the above.
  \end{itemize}
\end{definition}

The same terminologies apply to quadratic forms.

\begin{eg}
  A bilinear form on \(\R^n\) represented by \(\begin{psmallmatrix} I_p & 0 \\ 0 & 0 \end{psmallmatrix} \in \M_n(\R)\) is positive definite if \(p = n\) and positive semi-definite (but not definite) if \(p < n\).
\end{eg}

\begin{definition}[Signature]\index{bilinear!signature}
  The \emph{signature} of a real symmetric bilinear form \(\varphi\) is
  \[
    s(\varphi) = p - q.
  \]
\end{definition}

Again, this applies to quadratic forms as well.
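For instance, for the quadratic form on \(\R^3\) diagonalised by completing the square earlier, the diagonal form was \(\operatorname{diag}(1, 1, -1)\), so \(p = 2\), \(q = 1\) and the signature is \(2 - 1 = 1\).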
However, we have not even checked whether this is well-defined. Thus we need

\begin{theorem}[Sylvester's Law of Inertia]
  If a real symmetric bilinear form \(\varphi\) has, with respect to bases \(\basis B\) and \(\basis B'\),
  \[
    [\varphi]_{\basis B} =
    \begin{pmatrix}
      I_p & & \\
      & -I_q & \\
      & & 0
    \end{pmatrix}
    \quad
    [\varphi]_{\basis B'} =
    \begin{pmatrix}
      I_{p'} & & \\
      & -I_{q'} & \\
      & & 0
    \end{pmatrix}
  \]
  then
  \begin{align*}
    p &= p', \\
    q &= q'.
  \end{align*}
\end{theorem}

It is then immediate that

\begin{corollary}
  Signature is well-defined.
\end{corollary}

\begin{proof}
  For uniqueness of \(p\), show that \(p\) is the largest dimension of a subspace on which \(\varphi\) is positive definite. This suffices as it is a basis-invariant characterisation.

  Let \(\basis B = \{v_1, \dots, v_n\}\). Let \(X = \spans{v_1, \dots, v_p}\) and \(Y = \spans{v_{p + 1}, \dots, v_n}\). \(\dim X = p\) and \(\varphi\) is positive definite on \(X\):
  \[
    Q(v) = Q \left( \sum_{i = 1}^{p} \lambda_i v_i \right) = \sum_{i = 1}^{p} \lambda_i^2 > 0
  \]
  for all \(v \neq 0\). Similarly \(\varphi\) is negative semi-definite on \(Y\).

  Suppose \(\varphi\) is positive definite on some other subspace \(X'\). Then \(X' \cap Y = 0\) since \(Q\) is positive definite on \(X'\) and negative semi-definite on \(Y\). Therefore
  \[
    \dim (Y + X') = \dim(Y \oplus X') = \dim Y + \dim X' \leq n
  \]
  but since \(\dim Y = n - p\) we have \(\dim X' \leq p\).

  For \(q\), we can either run the same argument with negative definite spaces, or use the fact that \(q = r(\varphi) - p\) is invariant.
\end{proof}

The zero diagonal block is not very interesting but it does get a special name:

\begin{definition}[Kernel of symmetric bilinear form]\index{bilinear!kernel}
  The \emph{kernel} of a symmetric bilinear form is
  \[
    K = \{ v \in V: \varphi(u, v) = 0 \text{ for all } u \in V\}.
  \]
\end{definition}

\begin{note}
  \[
    \dim K = n - r(\varphi).
  \]
\end{note}

In our previous notation, the kernel is simply
\[
  K = \spans{v_{p + q + 1}, \dots, v_n}.
\]

\begin{caution}
  There is a subspace \(T\) of dimension \(n - (p + q) + \min(p, q)\) such that \(\varphi|_T = 0\): say \(p \geq q\),
  \[
    T = \spans{v_1 + v_{p + 1}, \dots, v_q + v_{p + q}, v_{p + q + 1}, \dots, v_n}.
  \]
\end{caution}

\begin{ex}
  Check that \(T\) above is the largest possible such space.
\end{ex}

\subsection{Sesquilinear Form}

Let \(\F = \C\) throughout this section.

The dot product on a real vector space comes naturally as a bilinear form. However, its generalisation to a complex vector space, the standard inner product defined by
\[
  \ip{x, y} = \sum_{i = 1}^{n} x_i \conj y_i
\]
is not bilinear: the second coordinate transforms by ``conjugate-linearity'' instead of linearity. Among many other examples of the same spirit, this serves as a motivation to modify the definition of bilinear forms for \(\C\)-vector spaces:

\begin{definition}[Sesquilinear form]\index{sesquilinear form}
  Let \(V\) and \(W\) be \(\C\)-vector spaces.
  A \emph{sesquilinear form} is a function \(\varphi: V \times W \to \C\) such that
  \begin{align*}
    \varphi(\lambda_1v_1 + \lambda_2v_2, w) &= \lambda_1 \varphi(v_1, w) + \lambda_2 \varphi(v_2, w) \\
    \varphi(v, \mu_1w_1 + \mu_2w_2) &= \conj \mu_1 \varphi(v, w_1) + \conj \mu_2 \varphi(v, w_2)
  \end{align*}
  for all \(\lambda_1, \lambda_2, \mu_1, \mu_2 \in \C\) and \(v, v_1, v_2 \in V\), \(w, w_1, w_2 \in W\).
\end{definition}

Naturally we would expect a sesquilinear form, just like a bilinear form, to have a matrix representation which behaves and transforms accordingly under change-of-basis:

\begin{definition}[Matrix of sesquilinear form]\index{matrix!sesquilinear form, of}
  Same notation as above. Let \(\basis B = \{v_1, \dots, v_m\}\) be a basis for \(V\) and \(\basis C = \{w_1, \dots, w_n\}\) be a basis for \(W\). Then the \emph{matrix} of \(\varphi\) with respect to \(\basis B\) and \(\basis C\) is
  \[
    [\varphi]_{\basis B, \basis C} = \left(\varphi(v_i, w_j)\right)_{i,j}
  \]
\end{definition}

\begin{lemma}
  \[
    \varphi(u, v) = [u]_{\basis B}^T [\varphi]_{\basis B, \basis C} \conj{[v]}_{\basis C}.
  \]
\end{lemma}

\begin{proof}
  Easy.
\end{proof}

\begin{lemma}
  Let \(\basis B, \basis B'\) be bases for \(V\), \(P = [\id]_{\basis B', \basis B}\) and \(\basis C, \basis C'\) be bases for \(W\), \(Q = [\id]_{\basis C', \basis C}\). Then
  \[
    [\varphi]_{\basis B', \basis C'} = P^T [\varphi]_{\basis B, \basis C} \conj Q.
  \]
\end{lemma}

\begin{proof}
  Ditto.
\end{proof}

\subsection{Hermitian Form}

We have special bilinear forms that are symmetric. The analogue for sesquilinear forms is

\begin{definition}[Hermitian form]\index{Hermitian form}
  A sesquilinear form \(\varphi: V \times V \to \C\) is \emph{Hermitian} if
  \[
    \varphi(u, v) = \conj{\varphi(v,u)}.
  \]
\end{definition}

\begin{note}\leavevmode
  \begin{enumerate}
  \item For \(\varphi\) Hermitian, \(\varphi(u, u) \in \R\) and \(\varphi(\lambda u, \lambda u) = |\lambda|^2 \varphi(u,u)\) so we can still talk about positive/negative (semi-)definite Hermitian forms.
  \item For a Hermitian form \(\varphi: V \times V \to \C\), let \(\basis B\) be a basis for \(V\). Then we write
    \[
      [\varphi]_{\basis B} = [\varphi]_{\basis B, \basis B}.
    \]
  \end{enumerate}
\end{note}

\begin{lemma}
  A sesquilinear form \(\varphi: V \times V \to \C\) is Hermitian if and only if for any basis \(\basis B\),
  \[
    [\varphi]_{\basis B} = \conj{[\varphi]}^T_{\basis B}.
  \]
\end{lemma}

As before, if and only if this holds for one basis.

\begin{proof}\leavevmode
  \begin{itemize}
  \item \(\Rightarrow\): Let \(\basis B = \{v_1, \dots, v_n\}\) and \(A = [\varphi]_{\basis B} = (a_{ij})\).
    Then
    \begin{align*}
      a_{ij} &= \varphi(v_i, v_j) \\
             &= \conj{\varphi(v_j, v_i)} \\
      &= \conj{a_{ji}}
    \end{align*}
  \item \(\Leftarrow\):
    \begin{align*}
      \varphi \left( \sum \lambda_iv_i, \sum \mu_jv_j \right) &= \lambda^T A \conj \mu \\
                                                              &= \lambda^T \conj A^T \conj \mu \\
                                                              &= \conj \mu^T \conj A \lambda \text{ taking transpose of a scalar} \\
                                                              &= \conj{\mu^T A \conj \lambda} \\
                                                              &= \conj{\varphi \left( \sum \mu_jv_j, \sum \lambda_iv_i \right)}
    \end{align*}
  \end{itemize}
\end{proof}

Similarly we have a polarisation identity for sesquilinear forms: a Hermitian form \(\varphi\) on a \(\C\)-vector space \(V\) is determined by
\begin{align*}
  Q: V &\to \R \\
  v &\mapsto \varphi(v, v)
\end{align*}
via the formula
\[
  \varphi(u, v) = \frac{1}{4} \left( Q(u + v) - Q(u - v) + i Q(u + iv) - i Q(u - iv) \right).
\]

\begin{proof}
  Exercise.
\end{proof}
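As a hint for the exercise (a sketch only, using the conjugate-linearity in the second argument fixed above): expanding, for example,
\[
  Q(u + iv) = Q(u) + Q(v) + \varphi(u, iv) + \varphi(iv, u) = Q(u) + Q(v) - i\varphi(u, v) + i\varphi(v, u),
\]
and treating the other three terms similarly, everything cancels in the sum except \(4\varphi(u, v)\).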
Lastly,
\begin{theorem}[Diagonalisation of Hermitian Form and Sylvester's Law]\index{Hermitian!diagonalisation}\index{Sylvester's Law}
  Let \(V\) be a finite-dimensional \(\C\)-vector space and \(\varphi: V \times V \to \C\) be a Hermitian form. There is a basis \(\basis B = \{v_1, \dots, v_n\}\) of \(V\) with respect to which
  \[
    [\varphi]_{\basis B} =
    \begin{pmatrix}
      I_p & & \\
      & -I_q & \\
      & & 0
    \end{pmatrix}
  \]
  where \(p\) and \(q\) are invariants of \(\varphi\).
\end{theorem}

\begin{proof}
  This is nearly identical to the symmetric case, so we only give a sketch here.

  For existence, if \(\varphi(u, u) = 0\) for all \(u\) then by the polarisation identity \(\varphi(u, v) = 0\) so done. Assume not. There exists \(e_1\) such that \(\varphi(e_1, e_1) \neq 0\). Rescale to have
  \[
    v_1 = \frac{e_1}{\sqrt{|\varphi(e_1, e_1)|}}
  \]
  so \(\varphi(v_1, v_1) = \pm 1\). Note that we used the fact that \(\varphi(e_1, e_1) \in \R\).

  Consider the complementary space
  \[
    W = \spans{v_1}^\perp = \left\{ w \in V: \varphi(v_1, w) = 0 \right\}
  \]
  Check that \(V = \spans{v_1} \oplus W\). Now proceed by induction on \(\dim W\).

  For the uniqueness part (Sylvester's law), note \(p\) is the maximal dimension of a subspace of \(V\) on which \(\varphi\) is positive definite.
\end{proof}

\subsection{Alternating Form}

\begin{definition}[Alternating form]\index{alternating}\index{skew-symmetric}
  A bilinear form \(\varphi: V \times V \to \F\) is \emph{alternating} or \emph{skew-symmetric} if
  \[
    \varphi(u, v) = - \varphi(v, u)
  \]
  for all \(u, v \in V\).
\end{definition}

As a consequence (for \(\ch \F \neq 2\)) \(\varphi(u, u) = 0\) for all \(u \in V\), and for any basis \(\basis B\), \([\varphi]_{\basis B} = - [\varphi]_{\basis B}^T\).

\begin{remark}
  Alternating forms are useful since for any \(A \in \M_n(\F)\) with \(\ch \F \neq 2\),
  \[
    A = \underbrace{\frac{1}{2}(A + A^T)}_{\text{symmetric}} + \underbrace{\frac{1}{2}(A - A^T)}_{\text{skew-symmetric}}.
  \]
\end{remark}

\begin{theorem}
  If \(\varphi\) is skew-symmetric, there exists a basis
  \[
    \basis B = \{v_1, w_1, v_2, w_2, \dots, v_m, w_m, v_{2m + 1}, \dots, v_n\}
  \]
  such that
  \[
    [\varphi]_{\basis B} =
    \begin{pmatrix}
      0 & 1 & & & & & & & & \\
      -1 & 0 & & & & & & & & \\
      & & 0 & 1 & & & & & & \\
      & & -1 & 0 & & & & & & \\
      & & & & \ddots & & & & & \\
      & & & & & 0 & 1 & & & \\
      & & & & & -1 & 0 & & & \\
      & & & & & & & 0 & &\\
      & & & & & & & & \ddots & \\
      & & & & & & & & & 0 \\
    \end{pmatrix}
  \]
  where there are \(m\) blocks of \(\begin{psmallmatrix} 0 & 1 \\ -1 & 0 \end{psmallmatrix}\).
\end{theorem}

\begin{remark}
  By reordering the basis, with respect to
  \[
    \{v_1, \dots, v_m, w_1, \dots, w_m, v_{2m + 1}, \dots, v_n\}
  \]
  it has matrix representation
  \[
    \begin{pmatrix}
      0 & I_m & \\
      -I_m & 0 & \\
      & & 0
    \end{pmatrix}
  \]
\end{remark}

\begin{remark}
  Skew-symmetric matrices have even rank.
\end{remark}

\begin{proof}[Sketch of proof]
  Induction on \(\dim V\): If \(\varphi = 0\) then done. Assume not. Then there exist \(v_1, w_1\) such that \(\varphi(v_1, w_1) \neq 0\). In particular \(v_1\) and \(w_1\) are linearly independent. Scale \(v_1\) to get \(\varphi(v_1, w_1) = 1 = -\varphi(w_1, v_1)\) and let
  \begin{align*}
    U &= \spans{v_1, w_1} \\
    W &= \prescript{\perp}{}{U} = \{ v\in V: \varphi(v, v_1) = \varphi(v, w_1) = 0 \}
  \end{align*}
  Check \(V = U \oplus W\) by a dimension argument. Now apply the induction hypothesis to \(\varphi|_W\).
\end{proof}

\section{Inner Product Space}

\subsection{Definitions}

Let \(\F = \R\) or \(\C\) in this chapter.

\begin{definition}[Inner product]\index{inner product}
  Let \(V\) be a vector space over \(\R\) (\(\C\), respectively). An \emph{inner product} on \(V\) is a positive definite symmetric bilinear form (Hermitian form, respectively) \(\varphi\) on \(V\).

  \(V\) is called a real (complex, respectively) \emph{inner product space}, or a \emph{Euclidean} (\emph{unitary}, respectively) space.
\end{definition}

\begin{notation}
  Write \(\ip{u, v}\) for \(\varphi(u, v)\).
  Note that it is the same as our notation for span, so we will spell out ``span'' whenever we use it in this chapter.
\end{notation}

\begin{eg}\leavevmode
  \begin{itemize}
  \item Dot product on \(\R^n\) or \(\C^n\).
  \item \(V = C([0, 1], \C)\), \(\ip{f, g} = \int_0^1 f(t) \conj{g(t)} dt\).
  \item This can be generalised. Given \(w: [0, 1] \to \R_{> 0}\) continuous, thought of as a weight function, we can define an inner product
    \[
      \ip{f, g} = \int_{0}^{1} w(t)f(t)\conj{g(t)} dt.
    \]
  \end{itemize}
\end{eg}

\begin{remark}
  An inner product induces a distance function, i.e.\ a norm on \(V\) by
  \[
    \norm v = \sqrt{\ip{v, v}}
  \]
  whose axioms will be checked later.
  
  Conversely, \(\norm \cdot\) determines the inner product because of the polarisation identity.
\end{remark}

\begin{lemma}[Cauchy-Schwarz Inequality]\index{Cauchy-Schwarz}
  \[
    |\ip{u, v}| \leq \norm u \cdot \norm v
  \]
  for all \(u, v \in V\).
\end{lemma}

\begin{proof}
  Wlog \(u \neq 0\). For all \(t \in \F\),
  \begin{align*}
    0 &\leq \norm{tu - v}^2 \\
      &= \ip{tu - v, tu - v} \\
      &= \norm{tu}^2 - t \ip{u, v} - \conj t \conj{\ip{u, v}} + \norm v^2 \\
    \intertext{by setting \(t = \conj{\ip{u, v}}/\norm u^2\),}
      &= - \frac{|\ip{u, v}|^2}{\norm u^2} + \norm v^2
  \end{align*}
  Rearrange.
\end{proof}

\begin{note}
  We only used the inner product axioms and did not assume any of the norm properties of \(\norm \cdot\), which we will prove now.
\end{note}

\begin{corollary}[Triangle Inequality]
  \[
    \norm{u + v} \leq \norm u + \norm v
  \]
  for all \(u, v \in V\).
\end{corollary}

\begin{proof}
  \begin{align*}
    \norm{u + v}^2 &= \norm u^2 + \ip{u, v} + \conj{\ip{u, v}} + \norm v^2 \\
                   &\leq \norm u^2 + 2 \norm u \cdot \norm v + \norm v^2 \\
                   &= (\norm u + \norm v)^2
  \end{align*}
\end{proof}

\begin{corollary}
  \(\norm \cdot\) is a norm.
\end{corollary}

\begin{remark}
  For \(\F = \R\), the angle \(\theta\) between two non-zero vectors \(u\) and \(v\) satisfies (or rather, is defined by)
  \[
    \cos \theta = \frac{\ip{u, v}}{\norm u \cdot \norm v}.
  \]
\end{remark}

\subsection{Orthonormal Basis}

\begin{definition}[Orthogonality]\index{orthogonal}
  A set \(\{e_1, \dots, e_k\}\) of vectors in \(V\) is \emph{orthogonal} if
  \[
    \ip{e_i, e_j} = 0
  \]
  for \(i \neq j\).
\end{definition}

\begin{definition}[Orthonormality]\index{orthonormal}
  A set \(\{e_1, \dots, e_k\}\) of vectors in \(V\) is \emph{orthonormal} if
  \[
    \ip{e_i, e_j} = \delta_{ij}
  \]
  for all \(i, j\).
\end{definition}

\begin{lemma}
  If \(\{e_1, \dots, e_k\}\) is orthogonal and the \(e_i\) are non-zero then they are linearly independent.

  Moreover if \(v = \sum_{j = 1}^k \lambda_j e_j\),
  \[
    \lambda_j = \frac{\ip{v, e_j}}{\ip{e_j, e_j}}.
  \]
\end{lemma}

\begin{proof}
  \[
    \ip{v, e_j} = \ip*{\sum_{i = 1}^{k} \lambda_ie_i, e_j} = \lambda_j \ip{e_j, e_j}
  \]
  and the results follow.
\end{proof}

\begin{lemma}[Parseval's Identity]\index{Parseval}
  Let \(V\) be a finite-dimensional inner product space with an orthonormal basis \(e_1, \dots, e_n\). Then
  \[
    \ip{u, v} = \sum_{i = 1}^{n}\ip{u, e_i} \conj{\ip{v, e_i}}.
  \]
\end{lemma}

\begin{proof}
  Follows immediately from the orthonormal basis expansion formula in the previous lemma:
  \[
    \ip{u, v} = \ip*{\sum_{i = 1}^n \ip{u, e_i}e_i, \sum_{j = 1}^n \ip{v,e_j}e_j}.
  \]
\end{proof}
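In particular, taking \(u = v\) gives the norm form of Parseval's identity,
\[
  \norm{u}^2 = \sum_{i = 1}^{n} |\ip{u, e_i}|^2,
\]
a special case worth recording separately.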
\begin{theorem}[Gram-Schmidt Orthonormalisation Process]\index{Gram-Schmidt}
  Let \(V\) be an inner product space and \(\{v_1, v_2, \dots \}\) be a countable set of linearly independent vectors in \(V\). Then there exists an orthonormal sequence \(e_1, e_2, \dots\) such that
  \[
    \text{span} \{v_1, \dots, v_k\} = \text{span} \{e_1, \dots, e_k\}
  \]
  for all \(k\).
\end{theorem}

\begin{proof}
  We see the word ``countable'' and instinctively use induction on \(k\). For \(k = 1\), take \(e_1 = v_1 / \norm{v_1}\). Suppose we have found \(e_1, \dots, e_k\). Inspired by the orthonormal basis expansion formula, let
  \[
    e'_{k + 1} = v_{k + 1} - \underbrace{\sum_{i = 1}^k \ip{v_{k + 1}, e_i} e_i}_{\text{linear combination of } v_1, \dots, v_k}
  \]
  which is non-zero by linear independence of \(\{v_1, \dots, v_{k + 1}\}\). Also \(\ip{e'_{k + 1}, e_i} = 0\) for \(1 \leq i \leq k\) by construction. Then
\[
  \text{span} \{v_1, \dots, v_k, v_{k + 1}\} = \text{span} \{e_1, \dots, e_k, e'_{k + 1}\}.
\]
Finally normalise:
\[
  e_{k + 1} = \frac{e'_{k + 1}}{\norm{e'_{k + 1}}}.
\]
\end{proof}
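A quick illustration of the process (the vectors are chosen for the example):

\begin{eg}
  In \(\R^3\) with the dot product, take \(v_1 = (1, 1, 0)\) and \(v_2 = (1, 0, 1)\). Then
  \[
    e_1 = \frac{1}{\sqrt 2}(1, 1, 0), \quad
    e_2' = v_2 - \ip{v_2, e_1}e_1 = (1, 0, 1) - \tfrac{1}{2}(1, 1, 0) = \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right)
  \]
  and normalising gives \(e_2 = \frac{1}{\sqrt 6}(1, -1, 2)\).
\end{eg}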
\begin{corollary}
  Let \(V\) be a finite-dimensional inner product space. Any orthonormal set of vectors can be extended to an orthonormal basis.
\end{corollary}

\begin{proof}
  Say \(\{e_1, \dots, e_k\}\) are orthonormal. They are linearly independent so we can extend to a basis \(\{e_1, \dots, e_k, v_{k + 1}, \dots, v_n\}\) of \(V\).

  Now apply Gram-Schmidt to this set. As the first \(k\) vectors are already orthonormal it has no effect on them.
\end{proof}

\begin{note}
  \(A \in \M_{m, n}(\R)\) has orthonormal columns if and only if \(A^TA = I\), and \(A \in \M_{m, n}(\C)\) has orthonormal columns if and only if \(A^T \conj A = I\).
\end{note}

We give special names to them:

\begin{definition}[Orthogonal matrix]\index{matrix!orthogonal}
  \(A \in \M_n(\R)\) is \emph{orthogonal} if \(A^T A = I\).
\end{definition}

Equivalently \(A^{-1} = A^T\).

\begin{definition}[Unitary matrix]\index{matrix!unitary}
  \(A \in \M_n(\C)\) is \emph{unitary} if \(A^T \conj A = I\).
\end{definition}

Equivalently \(A^{-1} = \conj A^T\).

Given these terminologies, Gram-Schmidt may be equivalently formulated as follows:

\begin{proposition}
  \(A \in \M_n(\R)\) (\(\M_n(\C)\), respectively) non-singular can be written as \(A = RT\) where
  \begin{itemize}
  \item \(T\) is upper triangular,
  \item \(R\) is orthogonal (unitary, respectively).
  \end{itemize}
\end{proposition}

\begin{proof}
  Apply Gram-Schmidt to the columns of \(A\). The details are left as an exercise.
\end{proof}

\subsection{Orthogonal Complements \& Projections}

\begin{definition}[Orthogonal direct sum]\index{orthogonal direct sum}
  Let \(V\) be an inner product space and \(V_1, V_2 \leq V\). \(V\) is the \emph{orthogonal direct sum} of \(V_1\) and \(V_2\) if
  \begin{enumerate}
  \item \(V = V_1 \oplus V_2\),
  \item \(\ip{v_1, v_2} = 0\) for all \(v_1 \in V_1, v_2 \in V_2\).
  \end{enumerate}
  Write \(V = V_1 \perp V_2\).
\end{definition}

\begin{note}
  Part of the first condition is redundant: by positive definiteness of the inner product, the second condition already forces \(V_1 \cap V_2 = 0\), so it suffices to check \(V = V_1 + V_2\).
\end{note}

\begin{definition}[Orthogonal complement]\index{orthogonal complement}
  Let \(W \leq V\). The \emph{orthogonal complement} of \(W\) in \(V\) is
  \[
    W^\perp = \{v \in V: \ip{v, w} = 0 \text{ for all } w \in W \}.
  \]
\end{definition}

\begin{lemma}
  Let \(V\) be a finite-dimensional inner product space and \(W \leq V\). Then
  \[
    V =  W \perp W^\perp.
  \]
\end{lemma}

\begin{proof}
  If \(w \in W, u \in W^\perp\) then \(\ip{w, u} = 0\) so it remains to show that \(V = W + W^\perp\).

  Let \(\{e_1, \dots, e_k\}\) be an orthonormal basis for \(W\). By a previous lemma we can extend it to an orthonormal basis \(\{e_1, \dots, e_n\}\) of \(V\). Note that \(e_{k + 1}, \dots, e_n \in W^\perp\).
\end{proof}

\begin{note}
  A complementary space is, in general, not unique but the orthogonal complement is.
\end{note}

A concept closely related to orthogonal complement is

\begin{definition}[Projection]\index{projection}
  Suppose \(V = U \oplus W\), a \emph{projection} from \(V\) to \(W\) is a map
  \begin{align*}
    \pi: V &\to W \\
    u + w &\mapsto w
  \end{align*}
  where \(u \in U, w \in W\). This is a well-defined linear map and is idempotent, i.e.\ \(\pi^2 = \pi\).
\end{definition}

If we take \(U = W^\perp\), however, we get a canonical choice of projection:

\begin{definition}[Orthogonal projection]
  If \(U = W^\perp\) above then \(\pi\) is the \emph{orthogonal projection} from \(V\) to \(W\).
\end{definition}

\begin{note}
  \(\pi' = \iota - \pi\) is the orthogonal projection from \(V\) to \(W^\perp\).
\end{note}

So far we have discussed the orthogonal complement only at an abstract level and we don't know yet how to find one, although you should have a good intuition of how. The following lemma tells us how it works:

\begin{lemma}
  Let \(V\) be an inner product space, \(W \leq V\) with orthonormal basis \(e_1, \dots, e_k\), and let \(\pi\) be the orthogonal projection onto \(W\). Then
  \begin{enumerate}
  \item For all \(v \in V\),
    \[
      \pi(v) = \sum_{i = 1}^k \ip{v, e_i} e_i
    \]
  \item \(\norm{v - \pi(v)} \leq \norm{v - w}\) for all \(v \in V\), \(w \in W\) with equality if and only if \(\pi(v) = w\).
    Equivalently, \(\pi(v)\) is the closest point in \(W\) to \(v\).
  \end{enumerate}
\end{lemma}

\begin{proof}\leavevmode
  \begin{enumerate}
  \item We need
    \[
      v - \sum_{i = 1}^k \ip{v, e_i}e_i \in W^\perp.
    \]
    But
    \[
      \ip*{v - \sum_{i = 1}^k \ip{v, e_i}e_i, e_j} = \ip{v, e_j} - \ip{v, e_j} = 0.
    \]
  \item
    \begin{align*}
      \norm{v - w}^2 &= \norm{\underbrace{v - \pi(v)}_{\in W^\perp} + \underbrace{\pi(v) - w}_{\in W}}^2 \\
                     &= \norm{v - \pi(v)}^2 + \norm{\pi(v) - w}^2 \\
                     &\geq \norm{v - \pi(v)}^2
    \end{align*}
    with equality if and only if \(\pi(v) = w\).
  \end{enumerate}
\end{proof}
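As a trivial sanity check of both statements: in \(V = \R^3\) with \(W\) spanned by the first two standard basis vectors, the lemma gives \(\pi(v) = (v_1, v_2, 0)\), and the distance from \(v\) to \(W\) is \(|v_3|\), attained exactly at \(\pi(v)\).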
\begin{remark}
  We met internal and external direct sum before. There is an analogous distinction for orthogonal direct sum.

  Given \(V_1, V_2\) two inner product spaces over \(\F\), we can define the \emph{external orthogonal} direct sum \(V_1 \oplus V_2\) by
  \[
    \ip{(u_1, u_2), (v_1, v_2)} = \ip{u_1, v_1} + \ip{u_2, v_2}.
  \]
  In practice we often suppress the distinction between internal and external (orthogonal) direct sums.
\end{remark}

\subsection{Adjoints}

\begin{proposition}
  Let \(V\) and \(W\) be finite-dimensional inner product spaces and \(\alpha \in L(V, W)\). Then there exists a unique linear map \(\alpha^*: W \to V\) such that for all \(v \in V, w \in W\),
  \[
    \ip{\alpha v, w} = \ip{v, \alpha^*w}.
  \]

  If \(\basis B\) is an orthonormal basis for \(V\) and \(\basis C\) is an orthonormal basis for \(W\),
  \[
    [\alpha^*]_{\basis C, \basis B} = \conj{[\alpha]}^T_{\basis B, \basis C}.
  \]
\end{proposition}

\begin{definition}[Adjoint]\index{adjoint}
  \(\alpha^*\) as above is the \emph{adjoint} of \(\alpha\).
\end{definition}

\begin{proof}
  Let \(\basis B = \{v_1, \dots, v_n\}, \basis C = \{w_1, \dots, w_m\}\) and as usual, let
  \begin{align*}
    A &= [\alpha]_{\basis B, \basis C} = (a_{ij}) \\
    \conj A^T &= C = (c_{ij})
  \end{align*}
  where \(c_{ij} = \conj{a_{ji}}\).

  Since the formula for the adjoint map is given, we might as well just verify it. Consider the linear map \(\beta\) such that \([\beta]_{\basis C, \basis B} = C\). Then
  \begin{align*}
    \ip*{\alpha \left( \sum_i \lambda_iv_i \right), \sum_j \mu_jw_j} &= \ip*{\sum_{i, k}\lambda_ia_{ki}w_k, \sum_j \mu_jw_j} = \sum_{i, j} \lambda_ia_{ji}\conj{\mu_j} \\
    \intertext{while on the other hand}
    \ip*{\sum_i \lambda_iv_i, \beta \left( \sum_j \mu_jw_j \right)} &= \ip*{\sum_i\lambda_iv_i, \sum_{j, k}\mu_jc_{kj}v_k}  = \sum_{i, j}\lambda_i \conj{c_{ij}}\conj{\mu_j} \\
  \end{align*}
  and we see that they are equal since \(c_{ij} = \conj{a_{ji}}\). Thus we have proved the existence of the adjoint.

  By specialising the above calculation to basis elements uniqueness follows.
\end{proof}

\begin{notation}
  Denote the Hermitian conjugate by
  \[
    A^\dag = \conj A^T.
  \]
\end{notation}

\begin{caution}
  We use the same notation \(\alpha^*\) for the adjoint and the dual of \(\alpha\). Hopefully the context should make clear which one is our concern.
\end{caution}

\begin{remark}
  This usage of notation is not entirely coincidental. In fact, let \(V\) and \(W\) be finite-dimensional real inner product spaces and \(\alpha \in L(V, W)\). In finite dimension there is an isomorphism
  \begin{align*}
    \psi_{R, V}: V &\to V^* \\
    v &\mapsto \ip{-, v}
  \end{align*}
  and similarly \(\psi_{R, W}: W \to W^*\) such that the following diagram commutes:
  \[
    \begin{tikzcd}
      W \ar[r, "\text{adjoint of } \alpha"] \ar[d, "\psi_{R, W}"']  & V \ar[d, "\psi_{R, V}"] \\
      W^* \ar[r, "\text{dual of } \alpha"'] & V^*
    \end{tikzcd}
  \]
  In the fancy language of category theory, this essentially says that adjoint and dual are naturally isomorphic as contravariant functors, with components \(\psi_{R, -}\).
\end{remark}

\subsection{Self-adjoint Maps \& Isometries}

Let \(V = W\) throughout this section.

\begin{definition}[Self-adjoint]\index{self-adjoint}
  Let \(V\) be an inner product space and \(\alpha \in L(V)\). Let \(\alpha^*\) be the adjoint of \(\alpha\). \(\alpha\) is \emph{self-adjoint} if it satisfies one of the equivalent properties below:
  \begin{itemize}
  \item For all \(u, v \in V\), \(\ip{\alpha u, v} = \ip{u, \alpha v}\),
  \item \(\alpha = \alpha^*\).
  \end{itemize}

  \(\alpha\) is said to be symmetric (Hermitian, respectively) if the vector space is real (complex, respectively).
\end{definition}

\begin{definition}[Isometry]\index{isometry}
  Let \(V\) be an inner product space and \(\alpha \in L(V)\). Let \(\alpha^*\) be the adjoint of \(\alpha\). \(\alpha\) is an \emph{isometry} if it satisfies one of the equivalent properties below:
  \begin{itemize}
  \item For all \(u, v \in V\), \(\ip{\alpha u, \alpha v} = \ip{u, v}\),
  \item \(\alpha^{-1} = \alpha^*\).
  \end{itemize}

  \(\alpha\) is said to be orthogonal (unitary, respectively) if the vector space is real (complex, respectively).
\end{definition}

The equivalences should be quite obvious, and in case you don't find them so,

\begin{proof}[Proof of 2nd equivalence]\leavevmode
  \begin{itemize}
  \item \(\Rightarrow\): \(\norm{\alpha v}^2 = \norm{v}^2\) so \(\alpha\) is injective and hence (in finite dimensions) \(\alpha^{-1}\) exists. For all \(u, v \in V\),
    \[
      \ip{u, \alpha^*v} = \ip{\alpha u, v} = \ip{u, \alpha^{-1}v}
    \]
    so \(\alpha^{-1} = \alpha^*\).
  \item \(\Leftarrow\):
    \[
      \ip{\alpha u, \alpha v} = \ip{u, \alpha^*\alpha v} = \ip{u, v}
    \]
    for all \(u, v \in V\).
  \end{itemize}
\end{proof}

\begin{remark}
  By the polarisation identity there is yet another equivalent definition of isometry: \(\alpha\) is an isometry if
  \[
    \norm{\alpha v} = \norm{v}
  \]
  for all \(v \in V\), which might be closer to the intuitive definition of an ``isometry''.
\end{remark}

\begin{lemma}
  Let \(V\) be a finite-dimensional real (complex, respectively) inner product space and \(\alpha \in L(V)\). Then
  \begin{itemize}
  \item \(\alpha\) is self-adjoint if and only if for every \emph{orthonormal basis} \(\basis B\), \([\alpha]_{\basis B}\) is symmetric (Hermitian, respectively).
  \item \(\alpha\) is an isometry if and only if for every \emph{orthonormal basis} \(\basis B\), \([\alpha]_{\basis B}\) is orthogonal (unitary, respectively).
  \end{itemize}
\end{lemma}

\begin{proof}
  There is very little to do actually.
  For any orthonormal basis \(\basis B\),
  \[
    [\alpha^*]_{\basis B} = \conj{[\alpha]}^T_{\basis B}
  \]
  and the two cases follow.
\end{proof}

It turns out all the isometries on an inner product space form a group:

\begin{definition}[Orthogonal/Unitary group]\index{orthogonal group}\index{unitary group}\leavevmode
  \begin{itemize}
  \item If \(\F = \R\), the \emph{orthogonal group} of \(V\) is
    \[
      O(V) = \{\alpha \in L(V): \alpha \text{ isometry}\}.
    \]
  \item If \(\F = \C\), the \emph{unitary group} of \(V\) is
    \[
      U(V) = \{\alpha \in L(V): \alpha \text{ isometry}\}.
    \]
  \end{itemize}
\end{definition}

\begin{lemma}
  Let \(V\) be an inner product space with orthonormal basis \(e_1, \dots, e_n\). Then
  \begin{itemize}
  \item if \(\F = \R\), there is a correspondence
    \begin{align*}
      O(V) &\leftrightarrow \{\text{orthonormal bases of } V\} \\
      \alpha &\leftrightarrow (\alpha(e_1), \dots, \alpha(e_n))
    \end{align*}
  \item if \(\F = \C\), there is a correspondence
    \begin{align*}
      U(V) &\leftrightarrow \{\text{orthonormal bases of } V\} \\
      \alpha &\leftrightarrow (\alpha(e_1), \dots, \alpha(e_n))
    \end{align*}
  \end{itemize}
\end{lemma}

\subsubsection{Spectral Theory for Self-adjoint Maps}

Spectral theory is the study of eigenvalues and eigenvectors of linear operators, particularly those on infinite-dimensional spaces. It has enormous importance in many areas of mathematics and physics, including for example functional analysis, harmonic analysis and quantum mechanics. In this course, ``spectral'' simply refers to the collection of all eigenvalues of an endomorphism on a finite-dimensional vector space.

\begin{lemma}
  Let \(V\) be an inner product space. If \(\alpha \in L(V)\) is self-adjoint then
  \begin{enumerate}
  \item \(\alpha\) has real eigenvalues.
  \item eigenvectors of \(\alpha\) for different eigenvalues are orthogonal.
  \end{enumerate}
\end{lemma}

Note that this is true for any inner product space, regardless of dimension.

\begin{proof}\leavevmode
  \begin{enumerate}
  \item Suppose \(\alpha v = \lambda v\) for some non-zero \(v \in V\) and \(\lambda \in \C\). Then
    \[
      \lambda \ip{v, v} = \ip{\lambda v, v} = \ip{\alpha v, v} = \ip{v, \alpha v} = \ip{v, \lambda v} = \conj \lambda \ip{v, v}
    \]
    so \(\lambda = \conj \lambda \in \R\).
  \item Suppose \(\alpha v = \lambda v, \alpha w = \mu w\) where \(\lambda \neq \mu \in \R\). Using a similar idea,
    \[
      \lambda \ip{v, w} = \ip{\lambda v, w} = \ip{\alpha v, w} = \ip{v, \alpha w} = \mu \ip{v, w}
    \]
    so \(\ip{v, w} = 0\).
  \end{enumerate}
\end{proof}

For an infinite-dimensional space, we may not have any eigenvalues (in which case the above is vacuously true). However, in the finite-dimensional case we have

\begin{theorem}
  Let \(V\) be a finite-dimensional inner product space and \(\alpha \in L(V)\) be self-adjoint. Then \(V\) has an orthonormal basis of eigenvectors, whose eigenvalues are real by the previous lemma.
\end{theorem}

\begin{proof}
  Let \(\F = \R\) or \(\C\). Induction on \(n = \dim V\). If \(n = 0\) this is vacuously true. If \(n = 1\) it is also true by the previous lemma. Suppose \(n > 1\).
  Say
  \[
    [\alpha]_{\basis B} = A
  \]
  where \(\basis B\) is an orthonormal basis. Pass to \(\C\) in the same way we did in the \hyperref[proof:determinant from characteristic]{proof} of \Cref{lem:determinant from characteristic}. By the Fundamental Theorem of Algebra, \(\chi_A(t) \in \C[t]\) has a root so \(A \in \M_n(\C)\) has an eigenvalue. Note that the argument so far applies to all maps, not just self-adjoint operators.

  Now using self-adjointness, this eigenvalue is real. So \(\chi_A(t)\) has a (real) root. Thus for both fields \(\alpha\) has a real eigenvalue, say \(\lambda\). Pick \(v_1 \in V \setminus \{0\}\) such that \(\alpha v_1 = \lambda v_1\). Now use the old trick of passing to a space of strictly smaller dimension, in this case the orthogonal complement
  \[
    U = \spans{v_1}^\perp \leq V
  \]
  where \(\spans{v_1}\) is the subspace spanned by \(v_1\). Check the conditions for induction: if \(u \in U\),
  \[
    \ip{\alpha u, v_1} = \ip{u, \alpha v_1} = \ip{u, \lambda v_1} = \lambda \ip{u, v_1} = 0
  \]
  so \(\alpha\) is \(U\)-stable. \(\alpha|_U \in L(U)\) is obviously self-adjoint so by the induction hypothesis there is an orthonormal basis \(v_2, \dots, v_n\) of \(U\) consisting of eigenvectors for \(\alpha|_U\). Adjoining \(\frac{v_1}{\norm{v_1}}\) gives an orthonormal basis of eigenvectors of \(\alpha\).
\end{proof}
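A small concrete instance of the theorem (a standard example, not from the lectures):

\begin{eg}
  For the real symmetric matrix
  \[
    A =
    \begin{pmatrix}
      2 & 1 \\
      1 & 2
    \end{pmatrix}
  \]
  we have \(\chi_A(t) = (t - 1)(t - 3)\), and the eigenvectors \(\frac{1}{\sqrt 2}\binom{1}{1}\) (for \(3\)) and \(\frac{1}{\sqrt 2}\binom{1}{-1}\) (for \(1\)) are orthogonal, as the lemma predicts, and form an orthonormal basis of \(\R^2\).
\end{eg}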
\begin{corollary}
  Let \(V\) be a finite-dimensional inner product space. If \(\alpha \in L(V)\) is self-adjoint, \(V\) is the orthogonal direct sum of all the eigenspaces of \(\alpha\).
\end{corollary}

\begin{proof}
  Immediate.
\end{proof}

One reason self-adjoint operators are important is that many physical systems can be described by self-adjoint operators. By the theorem we can decompose such a space into an orthogonal direct sum of eigenspaces, and with respect to an orthonormal basis of eigenvectors, the action of a self-adjoint operator is simply ``scaling''.

\subsubsection{Spectral Theory for Unitary Maps}

The other important class of maps is the isometries. Let \(\F = \C\) throughout this subsection.

\begin{lemma}
  Let \(V\) be a complex inner product space and \(\alpha \in L(V)\) unitary. Then
  \begin{enumerate}
  \item all eigenvalues lie on the unit circle.
  \item eigenvectors corresponding to different eigenvalues are orthogonal.
  \end{enumerate}
\end{lemma}

\begin{proof}
  This involves similar ideas as the lemma in the last subsection.
  \begin{enumerate}
  \item Suppose \(\alpha v = \lambda v\) for non-zero \(v \in V\). In addition \(\lambda \neq 0\) as \(\alpha\) is invertible. Then
    \[
      \lambda \ip{v, v} = \ip{\lambda v, v} = \ip{\alpha v, v} = \ip{v, \alpha^{-1} v} = \ip{v, \lambda^{-1} v} = \conj \lambda^{-1} \ip{v, v}
    \]
    so \(\lambda = \conj \lambda^{-1}\), \(|\lambda|^2 = 1\).
  \item Suppose \(\alpha v = \lambda v, \alpha w = \mu w\) where \(\lambda \neq \mu\). Then
    \[
      \lambda \ip{v, w} = \ip{\alpha v, w} = \ip{v, \alpha^{-1} w} = \conj \mu^{-1} \ip{v, w} = \mu \ip{v, w}
    \]
    so \(\ip{v, w} = 0\).
  \end{enumerate}
\end{proof}

\begin{theorem}
  Let \(V\) be a finite-dimensional complex inner product space and \(\alpha \in L(V)\) be unitary. Then \(V\) has an orthonormal basis of eigenvectors.
\end{theorem}

\begin{proof}
  By the Fundamental Theorem of Algebra, \(\alpha\) has an eigenvalue, say \(\lambda \in \C\). Fix non-zero \(v_1 \in V\) such that \(\alpha v_1 = \lambda v_1\) and further assume \(\norm{v_1} = 1\). Let
  \[
    U = \spans{v_1}^\perp \leq V.
  \]
  For all \(u \in U\),
  \[
    \ip{\alpha u, v_1} = \ip{u, \alpha^{-1} v_1} = \conj \lambda^{-1} \ip{u, v_1} = 0
  \]
  so \(\alpha\) is \(U\)-stable. By induction on dimension, \(U\) has an orthonormal basis of eigenvectors of \(\alpha|_U\), say \(v_2, \dots, v_n\). Thus \(v_1, \dots, v_n\) is an orthonormal basis of eigenvectors of \(\alpha\).
\end{proof}
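An illustration of the theorem (a standard computation, added here as an example):

\begin{eg}
  The rotation
  \[
    R_\theta =
    \begin{pmatrix}
      \cos\theta & -\sin\theta \\
      \sin\theta & \cos\theta
    \end{pmatrix}
  \]
  is unitary as a map on \(\C^2\), and \(\frac{1}{\sqrt 2}\binom{1}{-i}\) and \(\frac{1}{\sqrt 2}\binom{1}{i}\) form an orthonormal basis of eigenvectors, with eigenvalues \(e^{i\theta}\) and \(e^{-i\theta}\) on the unit circle.
\end{eg}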
\begin{remark}\leavevmode
  \begin{enumerate}
  \item Self-adjoint operators and isometries have different physical properties. However spectral theory says that they have similar properties, and the proofs are essentially identical. This is because they are both examples of a more general type of operators called \emph{normal} maps, which are defined to be those satisfying
    \[
      \alpha \alpha^* = \alpha^* \alpha.
    \]
    Other examples of normal maps include skew-Hermitian maps. We will meet more on the example sheet.
  \item Note that unlike the previous subsection, we only discuss unitary operators (i.e.\ complex isometries). An orthogonal matrix \(A \in \M_n(\R)\) cannot, in general, be diagonalised over \(\R\). For example, rotation of \(\R^2\). See the example on page~\pageref{eg:rotation}.

    However, we can still get an orthonormal basis with respect to which it is \emph{block-diagonal} with each block of size 1 or 2. See also the example sheet. In a sense, this is the ``worst'' that can happen to an isometry.
  \end{enumerate}
\end{remark}

\subsubsection{Application to Bilinear Forms}

Recall the spectral theorems for self-adjoint maps and isometries.

\begin{corollary}
  Let \(A \in \M_n(\R)\) (\(\M_n(\C)\) respectively) be a symmetric (Hermitian respectively) matrix. Then there is an orthogonal (unitary respectively) matrix \(P\) such that \(P^\dag AP\) is diagonal with real entries.
\end{corollary}

\begin{proof}
  Let \(\F = \R\) or \(\C\) and equip \(\F^n\) with the standard inner product. Then \(A \in L(\F^n)\) is self-adjoint. Thus there is an orthonormal basis of \(\F^n\) of eigenvectors of \(A\) (with real eigenvalues), say \(v_1, \dots, v_n\). Let
  \[
    P = (v_1 \:|\: \cdots \:|\: v_n)
  \]
  then \(P^{-1}AP\) is diagonal with real entries. Now use \(P^{-1} = P^\dag\).
\end{proof}

\begin{note}
  For an orthogonal change-of-basis matrix \(P\),
  \[
    P^{-1}AP = P^\dag AP
  \]
  where the LHS is change-of-basis of \(A\) as a linear map while the RHS is change-of-basis of \(A\) as a bilinear (sesquilinear respectively) form. This interpretation can be exploited to tell us more about its structure.
\end{note}

\begin{corollary}
  Let \(V\) be a finite-dimensional real (complex respectively) inner product space and \(\varphi: V \times V \to \F\) be a symmetric bilinear (Hermitian respectively) form. Then there is an orthonormal basis of \(V\) with respect to which \(\varphi\) is represented by a diagonal matrix with real entries.
\end{corollary}

Recall that previously we have shown that in general a symmetric bilinear (Hermitian respectively) form is diagonalisable using perpendicular spaces. However, if we equip the same space with an inner product, this corollary not only tells us that the bilinear form is diagonalisable, but also gives us an orthonormal basis with respect to which this holds.

\begin{proof}
  Let \(\basis B = \{v_1, \dots, v_n\}\) be any orthonormal basis and \(A = [\varphi]_{\basis B}\). \(A = A^\dag\) and there is an orthogonal (unitary respectively) matrix \(P\) such that \(D = P^\dag AP\) is diagonal. Let \(w_i\) be the \(i\)th column of \(P\); then \(\basis B' = \{w_1, \dots, w_n\}\) is an orthonormal basis of \(V\) and \([\varphi]_{\basis B'} = D\).
\end{proof}

\begin{remark}
  The diagonal entries of \(P^\dag AP\) are the eigenvalues of \(A\) and thus the signature could be equivalently defined as
  \[
    s(\varphi) = \# \text{positive eigenvalues of } A - \# \text{negative eigenvalues of } A.
  \]
\end{remark}

\begin{corollary}[Simultaneous diagonalisation of bilinear forms]
  Let \(V\) be a finite-dimensional real (complex respectively) vector space. Let \(\varphi, \psi\) be symmetric (Hermitian respectively) bilinear forms on \(V\). Assume \(\varphi\) is positive definite. There is a basis \(\basis B = \{v_1, \dots, v_n\}\) of \(V\) such that \([\varphi]_{\basis B}\) and \([\psi]_{\basis B}\) are both diagonal.
\end{corollary}

\begin{proof}
  First note that \(V\) equipped with \(\varphi\) is an inner product space. Thus there exists an orthonormal (with respect to this inner product) basis with respect to which \(\psi\) is represented by a diagonal matrix. With respect to any basis that is orthonormal for this inner product, \(\varphi\) is itself represented by the identity matrix, which is diagonal. The result follows.
\end{proof}

\begin{caution}
  The positive definite assumption is necessary. See the example sheet for counterexamples when this assumption is not satisfied.
\end{caution}

And we have a version corresponding to matrices:

\begin{corollary}
  Let \(A, B \in \M_n(\R)\) (\(\M_n(\C)\) respectively) be symmetric (Hermitian respectively). Suppose \(\conj x^T Ax > 0\) for all non-zero \(x\).
  Then there exists \(Q \in \M_n(\R)\) (\(\M_n(\C)\) respectively) invertible such that \(Q^TAQ\) and \(Q^TBQ\) (\(Q^TA\conj Q\) and \(Q^TB\conj Q\) respectively) are both diagonal.
\end{corollary}

\begin{proof}
  Easy.
\end{proof}

\printindex
\end{document}
{"text": "\\chapter{Proving}\n\\graphicspath{ {./images/} }\n\nIn this and next chapter I will discuss how proving is done on a computer machine. Similarity of mathematical type structures and a programming language type system is to be understood thus explaining the theoretical correctness of afore mentioned equivalence.\n\n\\section{Hilbert-style deduction systems }\nThis is a kind of logic system \\cite{hilbert_system}. The idea of a formal deduction says that there are mathematical formulaes or functions (for Input-Output generalization) which are either the base cases often called Axioms or are derived from some axioms by applying some hypotheses. Let $\\Gamma$ be a set of pressumed contions and base cases (Axioms) and $\\phi$ be deduction made using $\\Gamma$ then we can write this as $\\Gamma\\vdash\\phi$ meaning $\\phi$ can be duduced and hence can be proved from $\\Gamma$. \n\n\n\\begin{figure}[!htb]\n\\centering\n  \\includegraphics[scale=0.4]{hilbert}\n  \\caption{A Basic Deduction System (source : Wikipedia)}\n\\end{figure}\n\nThe Deduction system is well characterized by use of numerous Logical Axioms called an \\textbf{Axiom Scheme}. It gives us a iterative and inductive nature of axiom deduction so that it can be extended as many step as one wish. The terms called \\textit{Combinators} (like $\\rightarrow$) that interweave the results of a set of a given Axiom Scheme and \\textit{Quantifiers} (like $\\forall$ ) generalize the results of axiom schemes using some quanitfiable data like the initial value of a mathematical function. \n\nUsing Combinators and Quantifiers one may easily reduce the deductions into less number of steps hence forming metatheorems. A metatheorem inmathematics is the one that can be achieved using some other very basic mathematical theorems. Thus the this gives us idea of correctness of untyped $\\lambda$ -calculus. We can thus extend this to other objects also. Below I have tried to explain formation of a Hilbert-style deduction system for Abstract Binding Tree.\\\\     \n\n\nProfessor \\textbf{Robert Harper} in his book \\textit{Practical Foundations of Programming Languages}\\cite{pfpl} defines the term \\textit{Judgement} 'J' defined on a set of rules 'R' defined for a type that satisfies a property 'P' (hence forming an Abstract Binding Tree ABT) and it follows an inductive definition \n\n\\[ \\dfrac{J_1 J_2 J_3\\cdots J_k}{J}\\]\n\nwhere each J$_i$ is some sort of pre-defined abstract judgements or base cases. The set R is a collection of all such J$_i$s. R is also referred to as an \\textit{Axiom} if k=0 i.e it is just first judgement to be made. for example :\n\n\\[\\dfrac{}{zero\\quad nat} \\quad \\quad and \\quad \\dfrac{a\\quad nat}{succ(a)\\quad nat}  \\]\n\nHe explains \\textit{Derivabilty} Judjement J$_1$...J$_k$$\\vdash$$_R$ K for a given set of rules 'R', with J$_i$ and K as basic judgements to show that we may derive K from the expansion of R$\\cup$\\{J$_1$...J$_k$\\} of the rules with axioms \n\n\\[ \\dfrac{}{J_1} \\quad \\dfrac{}{J_2} \\quad  \\dfrac{}{J_3} \\quad \\cdots \\quad \\dfrac{}{J_k}   \\]\n\n$\\Gamma\\vdash$$_R$K means that a Judgement K is derivable from rules R$\\cup\\Gamma$.\n\nHe defines \\textit{Admissibility} Judgement written as $\\Gamma\\models_R$J, that if there exists any judgement like $\\vdash$$_R$$\\Gamma$ then it implies $\\vdash$$_R$ J . 
It means that if all the assumptions (base cases) are derivable from R, then J is derivable from R.\n\nThe mathematical formalization of the static and dynamic parts of a typed programming language is expressed in the form of judgements.\n\nThe static part of a language imposes constraints on the formation of phrases that are sensitive to the context in which they occur. It follows an inductive definition like this\n\n\\[\\chi | \\Gamma\\vdash e:\\tau\\]\n\nwhere $\\chi$ is a finite set of variables and $\\Gamma$ is a typing context consisting of hypotheses of the form x:$\\tau$, one for each $x\\in\\chi$. The dynamic part describes the transition of states, expressed in the form of a \\textit{transition} judgement having a transition $s\\longmapsto^n s'$ (for an n$^{th}$-order transition) as\n\n\\[\\dfrac{}{s\\longmapsto^n s'} \\quad \\quad and \\quad \\dfrac{{s\\longmapsto s'} \\quad {s'\\longmapsto^{*} s\"}}{s\\longmapsto^{*} s\"}  \\]\n\nIt is important to note that all these operations are performed on presumed abstract mathematical objects, Abstract Binding Trees (ABTs), which are also the most fundamental part of any type system. This theoretical equivalence thus allows us to construct the static and dynamic rules and some derived judgements or abstract functions (with type correctness) for a programming language. Very good hands-on experience can be found in the books Software Foundations \\cite{Softwarefoundations} and Certified Programming with Dependent Types \\cite{cpdt}.\n\n\\section{Curry-Howard Correspondence }\nIn programming language theory the \\textbf{Curry\u2013Howard correspondence}\\cite{curry_howard} (also known as the Curry\u2013Howard isomorphism ) says that mathematical proofs and computer programs are equivalent. This equivalence depends on \\textit{axiom schemes} with a type as a combinator. It also follows the \\textit{combinatory logic system}, which allows removing a mentioned variable using certain 'ways'. These ways, also called functions, use combinators to define a result from given arguments. It then assumes the correctness of the typed $\\lambda$-calculus. \n\nLet us assume a function $F$ that has a type defined by a set of rules $R$ and some set of initial conditions J$_i$ (i=1$\\cdots n$); then we can write:\n\n\\[F \\quad \\equiv \\quad J_{1 \\cdots n}\\vdash_R \\lambda \\quad (\\lambda \\mbox{ is the result of the function, which also has a Type defined by R.}) \\]\n\nA computer program is a set of such functions, and a computer-aided proof is a computer program. Thus it can be said that \"\\textit{a computer proof is a program whose type is the thesis being proved}\". This forms the basis of proof assistants and functional programming. \n\n\\section{Hoare Logic}\nHoare logic \\cite{hoare_logic} is a logic system of formal rules for rigorously reasoning about the correctness of computer programs. The central feature of Hoare logic is the \\textit{Hoare triple}. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form\n\n   \\[\\mbox{  \\{ P \\} C \\{ Q \\} }\\]\n\nwhere P and Q are some 'quantifiable' assertions and C is a command. P is named the \\textit{precondition} and Q is called the \\textit{postcondition}. It follows then that, when the precondition is met, executing the command should establish the postcondition. 
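\n\nTo make the triple concrete, here is a minimal sketch in Python (the function name and the property being checked are my own illustrative assumptions, not part of Hoare's calculus): the precondition is asserted before the command runs and the postcondition is asserted afterwards.\n\n\\begin{verbatim}\n# Sketch of the Hoare triple { x >= 0 } x := x + 1 { x >= 1 }.\ndef increment_with_contract(x):\n    assert x >= 0   # precondition P\n    x = x + 1       # command C\n    assert x >= 1   # postcondition Q\n    return x\n\nprint(increment_with_contract(0))  # prints 1; the triple holds here\n\\end{verbatim}\n\nNote that runtime assertions like these only test the triple on particular inputs, whereas Hoare logic proves it for all states satisfying P.\n\n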
Hoare logic's basic structure is developed on the basis of axiom and inference rules that are able to reason about concurrency, procedures, jumps and pointers.\n\n\\section{Separation Logic}\nThis logic system \\cite{separation_logic} is an extension of the Hoare system and describes states (as an analogue of the basic axioms in the Hoare system) as the store or heap, roughly corresponding to the local variables and stack-allocated objects. A state is valid if an actual physical address (a valid register) exists for it. The various functions map the different states onto one another. The correctness of such transformations is studied extensively in this logic system. All data structures are treated in the formal system using separation logic. \n", "meta": {"hexsha": "366b36c5ac8651789f11c258edf196f573b33af6", "size": 7344, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "files/proving.tex", "max_stars_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_stars_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/proving.tex", "max_issues_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_issues_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/proving.tex", "max_forks_repo_name": "SatyendraBanjare/Type-Theory-notes", "max_forks_repo_head_hexsha": "2228194ff7debdf414f78d7e190279908c797025", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 109.6119402985, "max_line_length": 587, "alphanum_fraction": 0.7745098039, "num_tokens": 1823, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7799928951399098, "lm_q2_score": 0.7090191214879992, "lm_q1q2_score": 0.5530298772789799}}
{"text": "\\section{Magnus expansion for $A$, $B$ constant and deterministic}\n\\subsection{Values of Moments}\n\t\\input{C:/Users/kevin/Documents/MATLAB/PhD/Magnus/Final/Correction/AB_const_moment/Pdf/temp/moment_Delta0.0001_M1000.tex}\n", "meta": {"hexsha": "e0e5b68f49613d21562f2c88e7704af3d1b907b1", "size": 221, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Data_in_Paper/Moments/valuesOfMoments/AB_const_T1_d2_N10000_M1000/input.tex", "max_stars_repo_name": "kevinkamm/StochasticMagnusExpansion", "max_stars_repo_head_hexsha": "634d3f069703c8eb1348b4f5743ac6b9e52ef049", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Data_in_Paper/Moments/valuesOfMoments/AB_const_T1_d2_N10000_M1000/input.tex", "max_issues_repo_name": "kevinkamm/StochasticMagnusExpansion", "max_issues_repo_head_hexsha": "634d3f069703c8eb1348b4f5743ac6b9e52ef049", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Data_in_Paper/Moments/valuesOfMoments/AB_const_T1_d2_N10000_M1000/input.tex", "max_forks_repo_name": "kevinkamm/StochasticMagnusExpansion", "max_forks_repo_head_hexsha": "634d3f069703c8eb1348b4f5743ac6b9e52ef049", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.25, "max_line_length": 122, "alphanum_fraction": 0.8099547511, "num_tokens": 68, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117855317474, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.5529711943582566}}
{"text": "\\newpage\n\\section{Thermal Analysis MATLAB Code} \\label{sec:appJ}\n\\subsection{Convection MATLAB Code}\n\n\\lstinputlisting[language=Matlab]{appendix/code/thermal/convecTUBULAR_2.m}\n%\\inputminted{matlab}{appendix/code/convecTUBULAR_1.m}\n\nThe resulting $h-ratio$ is then applied to the value of K in the main script written below:\n\n\\subsection{Main Thermal MATLAB Code}\n\n\n\\lstinputlisting[language=Matlab]{appendix/code/thermal/TUBULAR_v_1_5.m}\n%\\inputminted{matlab}{appendix/code/TUBULAR_v_1_4.m}\n\n%The code will be uppdated but nice you have a way of get it into the SED looks good ^^\n%Shall make it look pretty during the weekend only ;) ", "meta": {"hexsha": "4f53c51a75182b0751fb1be9674b9cd9912725c2", "size": 635, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "appendix/appendix-thermal-analysis-matlab-code.tex", "max_stars_repo_name": "georgeslabreche/tubular-bexus-sed", "max_stars_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-01-17T10:38:07.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-17T10:38:07.000Z", "max_issues_repo_path": "appendix/appendix-thermal-analysis-matlab-code.tex", "max_issues_repo_name": "georgeslabreche/tubular-bexus-sed", "max_issues_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "appendix/appendix-thermal-analysis-matlab-code.tex", "max_forks_repo_name": "georgeslabreche/tubular-bexus-sed", "max_forks_repo_head_hexsha": "c0db957167dfc90c25743af64c514fce837c1405", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.3529411765, "max_line_length": 91, "alphanum_fraction": 0.7921259843, "num_tokens": 186, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8267117983401363, "lm_q2_score": 0.6688802471698041, "lm_q1q2_score": 0.5529711920119437}}
{"text": "\\input{PreambleCommon}\n\\input{../WeekTitles}\n\n\\begin{document}\n\\setfont\n\\pagestyle{fancy}\n\\renewcommand{\\Week}{7 }\n\\renewcommand{\\WeekTitle}{\\WeekTitleSeven }\n\n\\fancyhead[LE,RO]{Week \\Week}  % default, usually only for first page\n\\fancyfoot{}\n\\sectionbox{Week \\#\\Week: \\WeekTitle}\n\n\\vspace{5mm}\n\\noindent\n\n\\goals\n\\begin{itemize}\n\\item Determine how to calculate the area described by a function. WITH AN EDIT\n\\item Define the {\\bf definite integral}.\n\\item Explore the relationship between the definite integral and area.\n\\item Explore ways to estimate the definite integral.\n\\end{itemize}\n\n\\newpage\n\n\\topic{Integration} \n\\subsection*{Integration} \nIf we had to summarize the first half of\nthe course, we would have to say that the focus was on {\\bf\n  differentiation}. \n\nAll differentiation problems ask the same basic question: {\\bf At what\n  rate does a process change}, and how does that rate of change relate\nto other characteristics of the process?\n\nThe key observation was that {\\bf at small scales, rates of change look linear}.\n\n\\newpage\n\nIn the remainder of the course we will study {\\bf integration}.\nAgain, the analysis will be made possible by the observation that on a very\nsmall scale all processes look linear.  This time, though, we will use\nthis fact to see how regarding a process as an {\\bf accumulation} of\ninfinitely many small linear steps allows us to calculate the\naccumulated total even when the rate of accumulation is far from\nlinear.  {\\bf Integration is always in some way about finding the\n  total at the end of a process of accumulation.}\n\n\\newpage\n\n\n \\topic{Distance and Velocity}\n \\subsection*{Distance and Velocity}\n\n Recall that if we measure distance $x$ as a function of time $t$, the\n velocity is determined by differentiating $x(t)$: \n \\begin{center}\n{\\bf Velocity} is the {\\bf slope} on the {\\bf position graph}.\n \\end{center}\n\n But now suppose we begin with a {\\bf graph of the velocity} with\n respect to time.  How can we determine what {\\bf distance} will be\n traveled?  Does distance also ``appear'' in the velocity graph\n somehow?\n\n\\newpage\n\n\\problem \n Consider the graph for the velocity of a particle shown below. 
\n\\begin{center}\n\\includegraphics[width=0.25\\linewidth]{graphics/w07_const_velocity}\n\\end{center}\nHow far did the particle travel between $t=0$ and $t=5$ seconds?\n\\begin{enumerate}[(a)]\n\\item 5 m\\\\[1ex]\n\\item 10 m\\\\[1ex]\n\\item 15 m\\\\[1ex]\n\\item 20 m\n\\end{enumerate}\n\n\\newpage\n\\problem \n~\n\\begin{center}\n\\includegraphics[width=0.25\\linewidth]{graphics/w07_const_velocity}\n\\end{center}\n\nWhere does the distance travelled between $t=0$ and $t=5$ ``appear''\non this velocity graph?\n\\begin{enumerate}[(a)]\n\\item The distance travelled is the {\\bf slope} between $t=0$ and $t=5$  \\\\[1ex]\n\\item The distance travelled is the {\\bf average height} of the graph between $t=0$ and $t=5$  \\\\[1ex]\n\\item The distance travelled is the {\\bf area} under the graph between $t=0$\n  and $t=5$.\n\\end{enumerate}\n\n\\newpage\n\nWhen the velocity is {\\bf constant}, we have the equivalency:\n\\vspace{0.2in}\n\\begin{center}\ndist = vel $\\times$ time ~~~~ $\\Longleftrightarrow$ dist = area under the velocity graph\n\\end{center}\n\\vspace{0.2in}\n\n\\problem \nWhat about when the velocity is {\\bf not} constant though?\n\n\\begin{center}\n\\includegraphics[width=0.35\\linewidth]{graphics/w07_variable_velocity}\n\\end{center}\n\nDo the units of the ``area'' under this graph still make sense as a\ndistance value?\n\n\n\\vfill\n\\newpage\n \\topic{Calculating Areas}\n\\subsection*{Calculating Areas}\n\nIt appears that the distance traveled is the area under the graph of\nvelocity, even when the velocity is changing. We'll see exactly why\nthis is true very soon.\n\nIf we are simply interested in the area under a graph, without any\nphysical interpretation, we can already do so if the graph creates a\nshape that we recognize.\n\\newpage\n\n\\problem Calculate the area between the $x$-axis and the graph of $y = x+1$ from $x=-1$ to $x=1$.\n\n\\hfill \\includegraphics[width=0.25\\linewidth]{graphics/w07_linear_area}\n\\newpage\n\n\\problem Calculate the area between the $x$-axis \nand the graph of $y = \\sqrt{1 - x^{2}}$ from $x = -1$ to $x = 1$.\n\n\\hfill \\includegraphics[width=0.3\\linewidth]{graphics/w07_semi_circle_area}\n\n\\vfill\n\n\\problem What shapes do you know, right now, for which you can\ncalculate the exact area?\n\n\\vfill\n\n\\newpage\n\n\\topic{Estimating Areas}\n\\subsection*{Estimating Areas}\n\nUnfortunately, most arbitrary regions are essentially impossible\nto find the area of when the shape isn't a simple composition of\ntriangles, rectangles, or circles.\n\nIn these cases, we must use less direct methods.  We will start by\nmaking an {\\em estimate} of the area under the graph using shapes\nwhose area is easier to calculate.\n\n\\newpage\n\n\\problem Suppose we are trying to find the area underneath the graph of the\nfunction $f(x)$ given below between $x = 1$ and $x=4$.  Shade in that\nregion, and call that area $A$.\n\n\\begin{center}\n\\includegraphics[width=4in]{graphics/w07_graph06}\n\\end{center}\n\n\n\\newpage\nWe can make a rough estimation of the area by drawing a rectangle that\ncompletely contains the area, or a rectangle that is completely\ncontained by the area.\n\n\n\\problem \n{Calculate this overestimate and underestimate for the\n  area $A$.}\n  \n\\hfill \\includegraphics[width=4in]{graphics/w07_graph06}\n\n\n\\newpage The next step is to use smaller rectangles to improve our\nestimate of the area.  
We can divide the interval from $x = 1$ to $x = 4$\ninto 3 intervals of width 1, and use different sized rectangles on each\ninterval.\n\n\n\n\\problem Estimate the area $A$ by using 3 rectangles of width 1.\n  Use the function value at the {\\em left} edge of the interval as the\n  height of each rectangle.\n\n\\hfill \\includegraphics[width=0.43\\linewidth]{graphics/w07_graph06}\n\n\\newpage\n\nWe can repeat this process for any number of rectangles, and we expect\nthat our estimation of the area will get better the more rectangles we\nuse.  The method we used above, choosing for the height of the\nrectangles the function value at the left edge, is called the {\\bf left hand\n  sum}, and is denoted {\\em LEFT(n)} if we use $n$ rectangles.\n\n\\newpage\n\n\\topic{Generalizing Area Calculation with Summation Notation}\n\\subsection*{Generalizing the Area Calculation}\n\nSuppose we are trying to estimate the area under the function $f(x)$\nfrom $x = a$ to $x = b$ via the left hand sum with $n$ rectangles.\n\\begin{itemize}\n\\item the {\\bf width} of each rectangle will be $\\Delta x =\n\\displaystyle\\frac{b-a}{n}$.  \n\\item If we label the endpoints of the\nintervals to be $a = x_{0} < x_{1} < \\cdots < x_{n-1} < x_{n} = b$,\nthen the {\\bf heights} of the rectangles will be $f(x_{i})$'s, and\n\\item the formula for the left hand sum/area will be\n\\end{itemize}\n\n\\begin{align*}\n  LEFT(n) = & f(x_{0}) \\Delta x+ \n f(x_{1}) \\Delta x + \\ldots + \n f(x_{n-1}) \\Delta x \n\\\\  = & \\displaystyle\\sum_{i=1}^{n} f(x_{i-1})\\Delta x.\n\\end{align*}\n\n\\newpage\n\n\\subsection*{Aside: Summation Notation}\n\nThe capital Greek letter sigma, $\\sum$, is used as a shortform notation for long sums.\nE.g.\n$$\\ds \\sum_{i=1}^n x_i$$\n\n\\newpage\n\\problem \nTranslate the following into more traditional sums:\n\n$\\ds \\sum_{i=1}^{100} i= $ \\vfill \n\n$\\ds \\sum_{n=1}^{10} n^2 + n= $  \\vfill\n\n$\\ds \\sum_{i=1}^{n}  f(x_{i-1}) ~\\Delta x= $\n\\vfill\n\n\\newpage\n\\topic{Back to Areas!} \n\\subsection*{Back to Areas!} \n\nThinking back to our LEFT sum to estimate areas, we have a similar\ndefinition for the {\\bf right hand sum}, or {\\em RIGHT(n)}. This is\ncalculated by taking the height of each rectangle to be the height of\nthe function at the {\\bf right hand endpoint} of the interval.\n\n\\begin{align*}\nRIGHT(n) = & f(x_{1}) \\Delta x+ \nf(x_{2}) \\Delta x + \\ldots + \n f(x_{n}) \\Delta x \\\\ \n= &  \\displaystyle\\sum_{i=1}^{n} f(x_{i})\\Delta x.\n\\end{align*}\n\n\\bigskip\n\n\\newpage \n\n{Calculate LEFT(6) and RIGHT(6) for the function shown,\n  between $x=1$ and $x=4$.  You will need to estimate some rectangle\n  heights from the graph.}\n\n\\includegraphics[width=0.4\\linewidth]{graphics/w07_graph06}\n\\hfill \\includegraphics[width=0.4\\linewidth]{graphics/w07_graph06}\n\\newpage\n\n%{In general, when will LEFT(n) be greater than RIGHT(n)? }\n%\\vfill \n%\n%{ When will LEFT(n) be an overestimate for the\n%  area?}  \\vfill \n%\n%{ When will LEFT(n) be an underestimate?}\n%\n%\\vfill\n%\n%\\newpage\n\n\\topic{Riemann Sums}\n\\subsection*{Riemann Sums}\n\nArea estimations like $LEFT(n)$ and $RIGHT(n)$ are often called {\\bf\n  Riemann sums}, after the mathematician Bernhard Riemann (1826-1866)\nwho formalized many of the techniques of calculus.  
The general form\nfor a Riemann Sum is\n\n\\begin{align*}\n f(x_{1}^{*}) \\Delta x+ f(x_{2}^{*}) \\Delta x + \\ldots + f(x_{n}^{*}) \\Delta x \n \\\\ = \\displaystyle\\sum_{i=1}^{n} f(x_{i}^{*})\\Delta x\n\\end{align*}\n\nwhere each $x_{i}^{*}$ is some point in the interval $[x_{i-1},\nx_{i}]$.  For $LEFT(n)$, we choose the left hand endpoint of the\ninterval, so $x_{i}^* = x_{i-1}$; for $RIGHT(n)$, we choose the right\nhand endpoint, so $x_{i}^* = x_{i}$.  \\newpage \n\nThe common property of all these\napproximations is that they involve\n\\begin{itemize}\n\\item a sum of rectangular areas, with \\\\[1ex]\n\\item widths ($\\Delta x$), and \\\\[1ex]\n\\item heights ($f(x_i^*)$)\n\\end{itemize}\n\n\n\nThere are other Riemann Sums that give slightly better estimates of\nthe area underneath a graph, but they often require extra computation.\nWe will examine some of these other calculations a little later.\n\n\\newpage\n\n\\topic{The Definite Integral}\n\\subsection*{The Definite Integral}\n\nWe observed that as we increase the number of rectangles used to\napproximate the area under a curve, our estimate of the area under the\ngraph becomes more accurate.  This implies that to obtain the {\\bf\n  exact area}, we should use a {\\bf limit} on our Riemann sums.\n\n\\medskip\n\\begin{center}\nThe area underneath the graph of $f(x)$ between $x=a$ and $x=b$ is equal to $\\displaystyle\\lim_{n \\to \\infty} LEFT(n) = \n\\displaystyle\\lim_{n \\to \\infty} \\displaystyle\\sum_{i=1}^{n} f(x_{i-1})\\Delta x$, ~~~where $\\Delta x = \\displaystyle\\frac{b-a}{n}$.\n\\end{center}\n\nThis limit is called the {\\em definite integral} of $f(x)$ from $a$ to $b$, and is equal to the area under the curve whenever $f(x)$ is a non-negative continuous function.  The definite integral is written with some special notation.\n\n\\newpage\n\n  {\\bf{Notation for the Definite Integral}} \n  \n  The definite integral of $f(x)$ between $x=a$ and\n  $x=b$ is denoted by the symbol $$\\int_{a}^{b} f(x) \\,dx$$\n  We call $a$ and $b$ the {\\bf{limits of integration}} and $f(x)$ the\n  {\\bf{integrand}}.  The $dx$ denotes which variable we are using; this will become important for using some techniques for calculating definite integrals.\nNote that this notation shares the same basic structure as Riemann sums: \n\\begin{itemize}\n\\item a sum ($\\ds \\int $ sign) \\\\[1ex]\n\\item widths ($dx$), and \\\\[1ex]\n\\item heights ($f(x)$)\n\\end{itemize}\n\n\\newpage \n\n\\problem Write the definite integral representing the area underneath\n  the graph of $f(x) = x + \\cos x$ between $x=2$ and $x=4$.\n\n\\vfill\n\n\\newpage\n\n\\problem \n  Use a right hand sum with $n=4$ to estimate $\\displaystyle\n  \\int_{0}^{2}x^2\\,dx$.  
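\n\nAs a quick check of this kind of arithmetic, here is a small Python sketch (my own illustration; the helper names are not from the course materials) that computes left and right hand sums for $\\int_{0}^{2}x^2\\,dx$:\n\n\\begin{verbatim}\n# Left and right hand sums for the integral of f over [a, b].\ndef left_sum(f, a, b, n):\n    dx = (b - a) / n\n    return sum(f(a + i * dx) for i in range(n)) * dx\n\ndef right_sum(f, a, b, n):\n    dx = (b - a) / n\n    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx\n\nf = lambda x: x ** 2\nprint(left_sum(f, 0, 2, 4))       # 1.75, an underestimate (f increasing)\nprint(right_sum(f, 0, 2, 4))      # 3.75, an overestimate\nprint(right_sum(f, 0, 2, 10000))  # ~2.667, closing in on the limit 8/3\n\\end{verbatim}\n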
\n\n\\newpage\n\n\\vfill Sketch how the $n=4$ right hand sum calculation would be\nrepresented graphically, and how it differs from the exact value of\n$\\ds \\int_0^2 x^2~dx$.\n\n\\includegraphics[width=0.3\\linewidth]{graphics/w07_parabola_for_areas}\n\\hfill\n\\includegraphics[width=0.3\\linewidth]{graphics/w07_parabola_for_areas}\n\n\\newpage\n\n% You can get MuPAD to draw the diagram associated with the midpoint\n% rule for this integral by first creating the plot object for the graph:\n\n% [\\, {\\bf GRPH := plot::Function2d(x$\\wedge$2, x=0..2)}\n\n% and then use this graph to produce the plot object for the midpoint rule with 4 intervals:\n\n% [\\, {\\bf RECTS := plot::Integral(GRPH, IntMethod=RiemannMiddle, Nodes=[4])}\n\n% You can then show them together by typing\n\n% [\\, {\\bf plot(GRPH, RECTS)}\n\n% The values of the Riemann sum and the integral are automatically printed on the diagram.\n\n%\\newpage\n\n\\topic{Negative Integral Values}\n\\subsection*{Negative Integral Values}\n\nSo far we have only dealt with the areas under/integrals of {\\bf\n  positive} functions.  Will the definite integral still be equal to\nthe area underneath the graph if $f(x)$ is always negative?  What\nhappens if $f(x)$ crosses the $x$-axis several times?\n\n\n\n\\problem Suppose that $f(x)$ has the graph shown below, and that A, B,\n  C, D, and E are the areas of the regions shown.\n\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{graphics/w07_negative_area_example}\n\\end{center}\n\n\\newpage\n\\begin{center}\n\\includegraphics[width=0.4\\linewidth]{graphics/w07_negative_area_example}\n\\end{center}\n\nIf we were to partition $[a,b]$ into small subintervals and construct a corresponding Riemann sum, then the first few terms in the Riemann sum would correspond to the region with area A, the next few to B, etc. \n\n\\problem \nWhich of these sets of terms have positive values?\n\n\\vfill\n\nWhich of these sets have negative values?\n\\vfill\n\n\\newpage\n\n\\begin{center}\n\\includegraphics[width=0.3\\linewidth]{graphics/w07_negative_area_example}\n\\end{center}\n\\problem \n{Express the integral $\\displaystyle \\int_a^b f(x) dx$ in\n  terms of the (positive) areas A, B, C, D, and E.  }\n\n\\vfill\n\\vfill\n\n{If $f$ were to represent velocity over time, what would the ``negative\n  areas'' represent?}\n\n\\vfill\n\\newpage\n\n\n%\\newpage\n% \\problem \n% The answer we found in the preceding question is\n% \\begin{enumerate}[A.]\n% \\item An underestimate because the first rectangle is very small.\n% \\item An overestimate because this is an increasing function.\n% \\item An underestimate because the graph of $f$ is concave upward.\n% \\item An overestimate because we used the midpoints.\n% \\end{enumerate}\n% \n\n\\newpage\n\\begin{boxnote}\n{\\bf The Role of Riemann sums}\n\\begin{enumerate}\n\\item They are needed to say what we mean by an integral. \\\\[2ex]\n\\item They enable us to decide which integral is appropriate in a word\nproblem. \\\\[2ex]\n\\item They can also be used to give an approximate value of the\nintegral. \n\\end{enumerate}\n\\end{boxnote}\n\n\\vspace{0.5in}\n\nWe will explore how to avoid needing Riemann sums for calculations next week, through the Fundamental Theorem of Calculus! 
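\n\nAs a closing numerical illustration of the signed-area idea above (my own sketch, not part of the original notes; it takes midpoints as the $x_i^*$ in a general Riemann sum), note how regions below the axis cancel regions above it:\n\n\\begin{verbatim}\n# Riemann sum with midpoints for f(x) = x on [-1, 1].\n# The region below the x-axis cancels the region above it.\ndef riemann_mid(f, a, b, n):\n    dx = (b - a) / n\n    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx\n\nprint(riemann_mid(lambda x: x, -1, 1, 1000))  # ~0.0: signed areas cancel\n\\end{verbatim}\n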
\n\n\\end{document}\n\n", "meta": {"hexsha": "fbf71e9cb32e0a9b321764835850017bce7f9c3b", "size": 14221, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/notes07_bak.tex", "max_stars_repo_name": "aableson/MNTCP01", "max_stars_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-11-27T16:10:35.000Z", "max_stars_repo_stars_event_max_datetime": "2015-11-27T16:10:35.000Z", "max_issues_repo_path": "Notes/notes07_bak.tex", "max_issues_repo_name": "aableson/MNTCP01", "max_issues_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/notes07_bak.tex", "max_forks_repo_name": "aableson/MNTCP01", "max_forks_repo_head_hexsha": "1845fe6ac290b008b070f00f1b68856bdbdfc584", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.8760504202, "max_line_length": 229, "alphanum_fraction": 0.7279375571, "num_tokens": 4103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417500561683, "lm_q2_score": 0.83973396967765, "lm_q1q2_score": 0.5528955414518985}}
{"text": "w\\subsection{Logical theories}\\label{subsec:logical_theories}\n\n\\begin{definition}\\label{def:first_order_theory}\\mcite[def. 17.1]{OpenLogicFull}\n  The \\term{closure} of the set \\( \\Gamma \\) of formulas in the \\hyperref[def:first_order_syntax]{first-order language} \\( \\mscrL \\) is the set\n  \\begin{equation*}\n    \\cl(\\Gamma) \\coloneqq \\set{ \\varphi \\in \\boldop{Form} \\given \\Gamma \\vDash \\varphi }.\n  \\end{equation*}\n\n  Due to \\fullref{rem:propositional_logic_as_first_order_logic}, this definition also holds for formulas in propositional logic.\n\n  The set \\( \\Gamma \\) is called \\term{closed} if it equals its own closure. A closed set of closed formulas is also sometimes called a \\term{theory} in e.g. \\cite[def. 33.1]{OpenLogicFull}, but we will not put this restriction because it is not clear what are the axioms of an arbitrary closed set of formulas. Instead, we will use the term \\enquote{theory} to mean the set of axioms itself.\n\n  As discussed in \\fullref{rem:deduction_with_free_variables}, we are only interested in closed formulas. If a formula is not closed, we instead consider its \\hyperref[thm:implicit_universal_quantification]{universal closure}.\n\n  If it is necessary to distinguish between derivability and entailment, we can use the syntactic closure \\( \\cl^\\vdash \\) and semantic closure \\( \\cl^\\vDash \\).\n\n  \\begin{thmenum}\n    \\thmitem{def:first_order_theory/axiomatized}\\mcite[def. 17.1]{OpenLogicFull} We say that the set \\( \\Gamma \\) is \\term{axiomatized} by \\( \\Delta \\) if \\( \\Gamma = \\cl(\\Delta) \\).\n\n    \\medskip\n\n    \\thmitem{def:first_order_theory/complete}\\mcite[def. 33.7]{OpenLogicFull} The set \\( \\Gamma \\) of formulas is said to be \\term{complete} if every for every formula in \\( \\varphi \\), either \\( \\Gamma \\vDash \\varphi \\) or \\( \\Gamma \\vDash \\neg \\varphi \\).\n\n    This is not to be confused with \\fullref{def:derivability_and_satisfiability/completeness}, which defines completeness of a deductive system via how it relates to semantics.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{definition}\\label{def:first_order_theory_consistency}\\mcite[def. 22.6]{OpenLogicFull}\n  Given a deductive system, the set \\( \\Gamma \\) of formulas is said to be \\term[bg=\u043f\u0440\u043e\u0442\u0438\u0432\u043e\u0440\u0435\u0447\u0438\u0432\u0430,ru=\u043f\u0440\u043e\u0442\u0438\u0432\u043e\u0440\u0435\u0447\u0438\u0432\u0430\u044f]{inconsistent} if \\( \\Gamma \\vdash \\bot \\) and \\term{consistent} otherwise.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:formulas_unsatisfiable_iff_inconsistent}\n   Assuming \\hyperref[def:classical_logic]{classical logic}, a theory \\( \\Gamma \\) is \\hyperref[def:propositional_semantics/satisfiability]{unsatisfiable} if and only if it is semantically \\hyperref[def:first_order_theory_consistency]{inconsistent}.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof Assume first that \\( \\Gamma \\) is unsatisfiable. Then for all zero models of \\( \\Gamma \\) any formula is satisfied vacuously, in particular that any model of \\( \\Gamma \\) satisfies \\( \\bot \\). Thus, \\( \\Gamma \\vDash \\bot \\) and, by \\fullref{thm:classical_first_order_logic_is_sound_and_complete}, \\( \\Gamma \\vdash \\bot \\). Therefore, \\( \\Gamma \\) is inconsistent.\n\n  \\NecessitySubProof Let \\( \\Gamma \\) be inconsistent and suppose that \\( \\mscrX = (X, I) \\) is a model of \\( \\Gamma \\). 
Fix any valuation \\( v \\) in \\( \\mscrX \\). Since \\( \\Gamma \\vdash \\bot \\), \\fullref{thm:classical_first_order_logic_is_sound_and_complete} implies that \\( \\bot\\Bracks{v} = T \\).\n\n  By the \\hyperref[def:first_order_valuation/formula_valuation]{definition of formula valuation}, however, we have \\( \\bot\\Bracks{v} = F \\). The obtained contradiction shows that \\( \\mscrX \\) cannot be a model of \\( \\Gamma \\) and since this structure was chosen arbitrarily, we conclude that \\( \\Gamma \\) is unsatisfiable.\n\\end{proof}\n\n\\begin{theorem}[First-order compactness theorem]\\label{thm:first_order_compactness_theorem}\\mcite[thm. 23.21]{OpenLogicFull}\n  Assuming \\hyperref[def:classical_logic]{classical logic}, for a theory \\( \\Gamma \\) we have \\( \\Gamma \\vDash \\varphi \\) for the closed formula \\( \\varphi \\) if and only if there exists a finite subset \\( \\Delta \\subseteq \\Gamma \\) such that \\( \\Delta \\vDash \\varphi \\).\n\\end{theorem}\n\\begin{proof}\n  \\SufficiencySubProof Suppose that \\( \\Gamma \\vDash \\varphi \\). By \\fullref{thm:classical_first_order_logic_is_sound_and_complete}, \\( \\Gamma \\vdash \\varphi \\). Every \\hyperref[def:proof_tree]{proof tree} is finite, and thus \\( \\varphi \\) can be derived from only the finitely many premises \\( \\Delta \\) of the proof. Then \\( \\Delta \\vdash \\varphi \\) and, by the other direction of \\fullref{thm:classical_first_order_logic_is_sound_and_complete}, we have \\( \\Delta \\vDash \\varphi \\).\n\n  \\NecessitySubProof If \\( \\Delta \\vDash \\varphi \\) for some finite \\( \\Delta \\subseteq \\Gamma \\), then necessarily \\( \\Gamma \\vDash \\varphi \\).\n\\end{proof}\n\n\\begin{corollary}\\label{thm:first_order_compactness_theorem_satisfiability}\\mcite[thm. 23.21]{OpenLogicFull}\n  If every finite subset of \\( \\Gamma \\) is satisfiable, then \\( \\Gamma \\) itself is satisfiable.\n\n  This is also called \\enquote{the compactness theorem}.\n\\end{corollary}\n\\begin{proof}\n  We can restate the theorem using its contrapositive: If \\( \\Gamma \\) is unsatisfiable, then there exists a finite subset \\( \\Delta \\subseteq \\Gamma \\) that is also unsatisfiable. Due to \\fullref{thm:formulas_unsatisfiable_iff_inconsistent}, this can also be restated as: If \\( \\Gamma \\) is inconsistent, then there exists a finite subset \\( \\Delta \\subseteq \\Gamma \\) that is also inconsistent.\n\n  But the last formulation is a special case of \\fullref{thm:first_order_compactness_theorem} with \\( \\varphi = \\bot \\).\n\\end{proof}\n\n\\begin{theorem}[Downward L\\\"owenheim-Skolem theorem]\\label{thm:downward_lowenheim_skolem_theorem}\\mcite{nLab:lowenheim_skolem_theorem}\n  Let \\( \\Gamma \\) be a \\hyperref[def:first_order_theory]{first-order theory} over some arbitrary language. Suppose that \\( \\Gamma \\) has a \\hyperref[def:first_order_semantics/satisfiability]{model} of \\hyperref[def:set_finiteness]{infinite} \\hyperref[thm:cardinality_existence]{cardinality}.\n\n  Then \\( \\Gamma \\) also has a \\hyperref[def:set_countability]{countable} model.\n\\end{theorem}\n\n\\begin{example}[Skolem's paradox]\\label{ex:skolems_paradox}\\mcite[ex. 23.32]{OpenLogicFull}\n  From \\fullref{thm:downward_lowenheim_skolem_theorem} it follows that \\hyperref[def:zfc]{\\logic{ZFC}}, if it is consistent, has a model that is at most countably infinite. The \\hyperref[def:zfc/infinity]{axiom of infinity} states that no model of \\logic{ZFC} is finite. 
Therefore, there exists a model of \\logic{ZFC} that is countably infinite. But we can construct uncountable sets in \\logic{ZFC}.\n\n  Therefore, either:\n  \\begin{itemize}\n    \\item Uncountable sets within the metatheory are fundamentally different from those within the object theory.\n    \\item Uncountable sets are paradoxical and \\logic{ZFC} must disallow them in order to be consistent.\n    \\item \\logic{ZFC} is inconsistent even when restricted to countable sets.\n  \\end{itemize}\n\\end{example}\n\n\\begin{theorem}[Upward L\\\"owenheim-Skolem theorem]\\label{thm:upward_lowenheim_skolem_theorem}\\mcite{nLab:lowenheim_skolem_theorem}\n  Let \\( \\Gamma \\) be a \\hyperref[def:first_order_theory]{first-order theory} over some arbitrary language. Suppose that \\( \\Gamma \\) has a \\hyperref[def:first_order_semantics/satisfiability]{model} of \\hyperref[def:set_finiteness]{infinite} \\hyperref[thm:cardinality_existence]{cardinality} \\( \\kappa \\).\n\n  Then for every \\hyperref[def:set_finiteness]{infinite} \\hyperref[def:cardinal]{cardinal} \\( \\mu \\geq \\card(\\Gamma) \\), the theory \\( \\Gamma \\) also has a model of cardinality \\( \\mu \\).\n\\end{theorem}\n\n\\begin{definition}\\label{def:category_of_small_first_order_models}\n  Let \\( \\mscrL \\) be a first-order language and let \\( \\Gamma \\) be a nonempty set of formulas. We describe the \\term{category of small models} for \\( \\Gamma \\) as a \\hyperref[def:concrete_category]{concrete category}. See \\fullref{rem:positive_formulas_in_category_of_models} for why it is a reasonable restriction for \\( \\Gamma \\) to be a set of \\hyperref[def:positive_formula]{positive formulas}.\n\n  Suppose that we are given a \\hyperref[def:grothendieck_universe]{Grothendieck universe} \\( \\mscrU \\), which we may safely assume to be the smallest suitable one, as explained in \\fullref{def:large_and_small_sets}.\n\n  \\begin{itemize}\n    \\item The \\hyperref[def:category/objects]{set of objects} is the set of all \\( \\mscrU \\)-small models of \\( \\Gamma \\).\n\n    \\item The \\hyperref[def:category/morphisms]{set of morphisms} between two models is the set of all \\hyperref[def:first_order_homomorphism]{structure homomorphisms} between them.\n\n    \\item \\hyperref[def:category/composition]{Composition of morphisms} is the usual \\hyperref[def:multi_valued_function/composition]{function composition}.\n\n    \\item The \\hyperref[def:category/identity]{identity morphism} on a model is the \\hyperref[def:multi_valued_function/identity]{identity function} on its universe.\n  \\end{itemize}\n\\end{definition}\n\n\\begin{remark}\\label{rem:positive_formulas_in_category_of_models}\n  We usually want \\( \\Gamma \\) to be a set of \\hyperref[def:positive_formula]{positive formulas} because, in general, homomorphisms are not injective and the homomorphic image of a model may fail to be a model. 
\\Fullref{thm:positive_formulas_preserved_under_homomorphism} shows that if all formulas in \\( \\Gamma \\) are positive, the image of any homomorphism is again a model and thus an object in the category.\n\\end{remark}\n\n\\begin{example}\\label{ex:def:category_of_small_first_order_models}\n  This is an incomplete list of categories corresponding to \\hyperref[def:first_order_theory]{first-order theories} that can be found in this document:\n  \\begin{itemize}\n    \\item The categories \\hyperref[def:pointed_set/category]{\\( \\cat{Set_*} \\)}, \\hyperref[def:set_with_involution/category]{\\( \\cat{Inv} \\)}, \\hyperref[def:magma/category]{\\( \\cat{Mag} \\)}, \\hyperref[def:unital_magma/category]{\\( \\cat{Mag}_* \\)}, \\hyperref[def:unital_magma/monoid]{\\( \\cat{Mon} \\)}, \\hyperref[def:group/category]{\\( \\cat{Grp} \\)} and \\hyperref[def:abelian_group]{\\( \\cat{Ab} \\)}, whose relations are crucial for the definition and properties of groups.\n\n    \\item The categories \\hyperref[def:semiring/ring]{\\( \\cat{Ring} \\)} of rings, \\hyperref[def:semiring/ring]{\\( \\cat{Mod}_\\mscrR \\)} of modules and \\hyperref[def:vector_space]{\\( \\cat{Vect}_\\BbbK \\)} of vector spaces, all of which are based on abelian groups.\n\n    \\item The categories \\hyperref[def:category_of_topological_groups]{\\( \\cat{TopGrp} \\)} and \\hyperref[def:category_of_topological_groups]{\\( \\cat{TopVect}_\\BbbK \\)} of topological groups and vector spaces, respectively.\n\n    \\item The categories \\hyperref[def:partially_ordered_set/category]{\\( \\cat{Pos} \\)} in order theory and the related \\hyperref[def:semilattice/category]{\\( \\cat{Lat} \\)}, \\hyperref[def:heyting_algebra/category]{\\( \\cat{Heyt} \\)} and \\hyperref[def:boolean_algebra/category]{\\( \\cat{Bool} \\)} in lattice theory.\n  \\end{itemize}\n\n  In contrast:\n  \\begin{itemize}\n    \\item We define the category \\hyperref[def:category_of_small_topological_spaces]{\\( \\cat{Top} \\)}  of topological spaces and all of its related categories within set theory without a corresponding first-order theory.\n\n    \\item The category \\hyperref[def:category_of_small_sets]{\\( \\cat{Set} \\)} of sets with respect to either \\logic{ZFC}, \\logic{ZFC+U} or na\\\"ive set theory is not the same as the category of \\( \\mscrU \\)-small models of set theory. Instead, it is a category within a fixed set theory. Within the metatheory of this document, we work within a fixed model of \\logic{ZFC+U} with respect to \\hyperref[def:classical_logic]{classical logic}.\n\n    \\item Similarly, we do not care about models of \\hyperref[def:peano_arithmetic]{Peano arithmetic} enough to study its category of \\( \\mscrU \\)-small models. Instead, we only use a single model and denote it by \\( \\BbbN \\).\n  \\end{itemize}\n\\end{example}\n\n\\begin{definition}\\label{def:lindenbaum_tarski_algebra}\\mcite{nLab:lindenbaum_tarski_algebra}\n  Assume some fixed \\hyperref[def:deductive_system]{deductive system} in propositional or first-order logic. Let \\( \\Gamma \\) be a \\hyperref[def:first_order_theory]{closed set} of \\hyperref[def:first_order_syntax/ground_formula]{closed formulas} within the corresponding logic.\n\n  Then \\( (\\Gamma, \\vdash) \\) is a \\hyperref[def:preordered_set]{preordered set}. 
The \\term{Lindenbaum-Tarski algebra} of the theory \\( \\Gamma \\) is the partially ordered set obtained from \\( (\\Gamma, \\vdash) \\) using \\fullref{thm:preorder_to_partial_order}.\n\n  More concretely, the Lindenbaum-Tarski algebra of \\( \\Gamma \\) is a quotient set of \\( \\Gamma \\) by the relation \\hyperref[def:proof_derivability]{interderivability} and endowed with the partial order\n  \\begin{equation}\\label{eq:def:lindenbaum_tarski_algebra/order}\n    [\\varphi] \\leq [\\psi] \\T{if and only if} \\varphi \\vdash \\psi.\n  \\end{equation}\n\n  Of course, we can define the algebra using entailment rather than derivability, but in the cases we consider, the two are equivalent and derivability is simpler to work with.\n\\end{definition}\n\\begin{proof}\n  The correctness of \\eqref{eq:def:lindenbaum_tarski_algebra/order}, i.e. the fact that the relation \\( \\leq \\) does not depend on the choice of representatives from the quotient sets, follows from \\fullref{thm:preorder_to_partial_order}.\n\n  We must only demonstrate that \\( (\\Gamma, \\vdash) \\) is indeed a preordered set. Reflexivity of \\( \\vdash \\) follows from \\fullref{def:axiomatic_deductive_system} and transitivity follows from \\fullref{thm:deductive_system_transitivity}.\n\\end{proof}\n\n\\begin{proposition}\\label{thm:intuitionistic_lindenbaum_tarski_algebra}\n  Assume that we are working in the \\hyperref[def:intuitionistic_propositional_deductive_systems]{intuitionistic propositional natural deduction system}. The \\hyperref[def:lindenbaum_tarski_algebra]{Lindenbaum-Tarski algebra} of a closed set of closed formulas \\( \\Gamma \\) then is a \\hyperref[def:heyting_algebra]{Heyting algebra}.\n\n  In the \\hyperref[def:classical_propositional_deductive_systems]{classical derivation system}, the algebra is instead a \\hyperref[def:boolean_algebra]{Boolean algebra}.\n\n  In the \\hyperref[def:minimal_propositional_axiomatic_deductive_system]{minimal derivation system}, we have an unbounded lattice with only a top element, but no bottom. 
Consequently, conditionals and pseudocomplements may fail to exist.\n\n  Explicitly:\n  \\begin{thmenum}\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/join} The \\hyperref[def:semilattice/join]{join} of the equivalence classes \\( [\\psi_1] \\) and \\( [\\psi_2] \\) is the class \\( [\\psi_1 \\vee \\psi_2] \\) of their \\hyperref[def:propositional_language/connectives/disjunction]{disjunction}.\n\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/top} The class of \\hyperref[def:propositional_semantics/tautology]{tautologies} \\( [\\top] \\) is the \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{top element}.\n\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/meet} Similarly to joins, the \\hyperref[def:semilattice/meet]{meet} of \\( [\\psi_1] \\) and \\( [\\psi_2] \\) is the equivalence class \\( [\\psi_1 \\wedge \\psi_2] \\) of their \\hyperref[def:propositional_language/connectives/conjunction]{conjunction}.\n\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/bottom} The class of \\hyperref[def:propositional_semantics/contradiction]{contradictions} \\( [\\bot] \\) is the \\hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{bottom element}.\n\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/conditional} The \\hyperref[eq:def:heyting_algebra/conditional]{conditional} of \\( [\\psi_1] \\) and \\( [\\psi_2] \\) is the equivalence class \\( [\\psi_1 \\rightarrow \\psi_2] \\).\n\n    \\thmitem{thm:intuitionistic_lindenbaum_tarski_algebra/complement} The \\hyperref[eq:def:heyting_algebra/pseudocomplement]{pseudocomplement} \\( \\widetilde{[\\psi]} \\) of \\( [\\psi] \\) equals \\( [\\neg \\psi] \\).\n\n    In the classical derivation system this pseudocomplement is a complement, i.e. it satisfies \\eqref{def:bounded_lattice_complement/join} and \\eqref{def:bounded_lattice_complement/meet}.\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/join} We will show that \\( [\\psi_1 \\vee \\psi_2] \\) is the supremum of \\( [\\psi_1] \\) and \\( [\\psi_2] \\).\n\n  The inference rule \\eqref{eq:def:minimal_propositional_natural_deduction_system/or/intro_left} implies that \\( \\psi_1 \\vdash \\psi_1 \\vee \\psi_2 \\) and \\eqref{eq:def:minimal_propositional_natural_deduction_system/or/intro_right} implies that \\( \\psi_2 \\vdash \\psi_1 \\vee \\psi_2 \\). Thus, \\( \\psi_1 \\vee \\psi_2 \\) is an upper bound for both \\( \\psi_1 \\) and \\( \\psi_2 \\) under the ordering \\( \\vdash \\).\n\n  Let \\( \\varphi \\) be any formula in \\( \\Gamma \\) such that \\( \\psi_1 \\vdash \\varphi \\), \\( \\psi_2 \\vdash \\varphi \\) and \\( \\varphi \\vdash (\\psi_1 \\vee \\psi_2) \\). Then the following instance of \\eqref{eq:def:minimal_propositional_natural_deduction_system/or/elim}\n  \\begin{equation*}\n    \\begin{prooftree}\n      \\hypo{ [\\psi_1 \\vee \\psi_2] }\n      \\hypo{ [\\psi_1]^1 }\n      \\ellipsis {} { \\varphi }\n      \\hypo{ [\\psi_2]^1 }\n      \\ellipsis {} { \\varphi }\n      \\infer[left label=\\( 1 \\)]3[\\ref{eq:def:minimal_propositional_natural_deduction_system/or/elim}]{ \\varphi }\n    \\end{prooftree}\n  \\end{equation*}\n  demonstrates that \\( \\psi_1, \\psi_2, (\\psi_1 \\vee \\psi_2) \\vdash \\varphi \\). 
Hence, from \\fullref{thm:deductive_system_transitivity} it follows that \\( (\\psi_1 \\vee \\psi_2) \\vdash \\varphi \\).\n\n  Obviously interderivability is not affected by the choice of representatives from \\( [\\psi_1] \\) and \\( [\\psi_2] \\), hence \\( [\\varphi] = [\\psi_1 \\vee \\psi_2] \\) and this is indeed the supremum of \\( [\\psi_1] \\) and \\( [\\psi_2] \\).\n\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/top} The inference rule \\eqref{eq:def:minimal_propositional_natural_deduction_system/top/intro} shows that \\( [\\varphi] \\leq [\\top] \\) for any formula \\( \\varphi \\) and \\( [\\top] \\) is the top element.\n\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/meet} Let \\( \\psi_1 \\) and \\( \\psi_2 \\) be any formulas in \\( \\Gamma \\). The inference rule \\eqref{eq:def:minimal_propositional_natural_deduction_system/and/elim_left} implies that \\( \\psi_1 \\wedge \\psi_2 \\vdash \\psi_2 \\) and \\eqref{eq:def:minimal_propositional_natural_deduction_system/and/elim_right} implies that \\( \\psi_1 \\wedge \\psi_2 \\vdash \\psi_1 \\). Thus, \\( \\psi_1 \\wedge \\psi_2 \\) is a lower bound for both \\( \\psi_1 \\) and \\( \\psi_2 \\) under the ordering \\( \\vdash \\).\n\n  We must show that \\( \\psi_1 \\wedge \\psi_2 \\) is interderivable with the greatest lower bound. Let \\( \\varphi \\) be a formula in \\( \\Gamma \\) such that \\( \\varphi \\vdash \\psi_1 \\), \\( \\varphi \\vdash \\psi_2 \\) and \\( (\\psi_1 \\wedge \\psi_2) \\vdash \\varphi \\). We will show that \\( \\varphi \\vdash (\\psi_1 \\wedge \\psi_2) \\).\n\n  The rule \\eqref{eq:def:minimal_propositional_natural_deduction_system/and/intro} implies that\n  \\begin{equation*}\n    \\psi_1, \\psi_2 \\vdash \\psi_1 \\wedge \\psi_2.\n  \\end{equation*}\n\n  But \\( \\varphi \\) derives both \\( \\psi_1 \\) and \\( \\psi_2 \\), hence due to \\fullref{thm:deductive_system_transitivity},\n  \\begin{equation*}\n    \\varphi \\vdash (\\psi_1 \\wedge \\psi_2).\n  \\end{equation*}\n\n  Analogously to the proof of \\fullref{thm:intuitionistic_lindenbaum_tarski_algebra/join}, we conclude that the choice of representatives of the equivalence classes is irrelevant and that \\( [\\varphi] = [\\psi_1 \\wedge \\psi_2] \\) is the infimum of \\( [\\psi_1] \\) and \\( [\\psi_2] \\).\n\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/bottom} That \\( [\\bot] \\) is the bottom is a restatement of \\eqref{eq:thm:minimal_propositional_negation_laws/efq}.\n\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/conditional} Restating \\eqref{eq:def:heyting_algebra/conditional}, we must prove that \\( [\\psi_1 \\rightarrow \\psi_2] \\) equals\n  \\begin{equation*}\n    ([\\psi_1] \\rightarrow [\\psi_2]) = \\sup\\underbrace{\\set[\\Big]{ [\\varphi] \\given* \\varphi \\in \\Gamma \\T{and} (\\varphi \\wedge \\psi_1) \\vdash \\psi_2 }}_{\\Phi}.\n  \\end{equation*}\n\n  An equivalent condition for \\( \\varphi \\) to be in \\( \\Phi \\) is, due to \\fullref{thm:conjunction_of_premises},\n  \\begin{equation*}\n    \\varphi, \\psi_1 \\vdash \\psi_2.\n  \\end{equation*}\n\n  Then by the deduction theorem,\n  \\begin{equation*}\n    \\varphi \\vdash (\\psi_1 \\rightarrow \\psi_2).\n  \\end{equation*}\n\n  Thus, \\( [\\psi_1 \\rightarrow \\psi_2] \\) is an upper bound of \\( \\Phi \\).\n\n  It remains to show that \\( [\\psi_1 \\rightarrow \\psi_2] \\in \\Phi \\). 
Both \\( \\psi_1 \\) and \\( (\\psi_1 \\rightarrow \\psi_2) \\) follow from the formula \\( \\parens[\\Big]{ (\\psi_1 \\rightarrow \\psi_2) \\wedge \\psi_1 } \\) and by applying \\eqref{eq:def:def:axiomatic_deductive_system/mp}, we obtain\n  \\begin{equation*}\n    \\psi_1, (\\psi_1 \\rightarrow \\psi_2) \\vdash \\psi_2.\n  \\end{equation*}\n\n  From \\fullref{thm:conjunction_of_premises},\n  \\begin{equation*}\n    \\parens[\\Big]{ (\\psi_1 \\rightarrow \\psi_2) \\wedge \\psi_1 } \\vdash \\psi_2.\n  \\end{equation*}\n\n  Therefore, \\( [\\psi_1 \\rightarrow \\psi_2] \\in \\Phi \\) and it is indeed the supremum of \\( \\Phi \\).\n\n  \\SubProofOf{thm:intuitionistic_lindenbaum_tarski_algebra/complement} The pseudocomplement \\( \\widetilde{[\\psi]} \\) is, by definition,\n  \\begin{equation*}\n    \\widetilde{[\\psi]}\n    =\n    [\\psi] \\rightarrow [\\bot].\n  \\end{equation*}\n\n  From what we have already proved, we can conclude that \\( \\widetilde{[\\psi]} = [\\psi \\rightarrow \\bot] \\). From \\fullref{def:minimal_propositional_axiomatic_deductive_system/negation} it follows that the formulas \\( \\psi \\rightarrow \\bot \\) and \\( \\neg \\psi \\) are interderivable, thus \\( \\widetilde{[\\psi]} = [\\neg \\psi] \\).\n\n  If we are working in classical logic where \\eqref{eq:thm:minimal_propositional_negation_laws/lem} holds, then\n  \\begin{equation*}\n    \\sup\\set{ [\\psi], \\widetilde{[\\psi]} }\n    \\reloset {\\ref{thm:intuitionistic_lindenbaum_tarski_algebra/join}} =\n    [\\psi \\vee \\neg \\psi]\n    \\reloset {\\eqref{eq:thm:minimal_propositional_negation_laws/lem}} =\n    [\\top],\n  \\end{equation*}\n  which proves \\eqref{def:bounded_lattice_complement/join}.\n\n  The dual law \\eqref{def:bounded_lattice_complement/meet} follows similarly:\n  \\begin{equation*}\n    \\inf\\set{ [\\psi], \\widetilde{[\\psi]} }\n    \\reloset {\\ref{thm:intuitionistic_lindenbaum_tarski_algebra/meet}} =\n    [\\psi \\wedge \\neg \\psi]\n    \\reloset {\\eqref{eq:thm:minimal_propositional_negation_laws/lnc}} =\n    [\\neg \\top]\n    =\n    \\widetilde{[\\top]}\n    \\reloset {\\eqref{eq:def:heyting_algebra/pseudocomplement}} =\n    [\\bot].\n  \\end{equation*}\n\\end{proof}\n\n\\begin{remark}\\label{rem:thm:intuitionistic_lindenbaum_tarski_algebra/syntactic_proof}\n  Notice that the proof of \\fullref{thm:intuitionistic_lindenbaum_tarski_algebra} relies entirely on the derivation system. 
On the other hand, we define \\hyperref[def:propositional_heyting_algebra_semantics]{Heyting semantics} for intuitionistic formulas.\n\n  Thus, we have shown that Heyting algebras arise naturally from the intuitionistic natural deduction system and that their role as a semantic framework is completely justified.\n\\end{remark}\n", "meta": {"hexsha": "7d7e266edee25327a55b5f383e1f75f94cee8c16", "size": 23248, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/logical_theories.tex", "max_stars_repo_name": "v--/notebook", "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/logical_theories.tex", "max_issues_repo_name": "v--/notebook", "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/logical_theories.tex", "max_forks_repo_name": "v--/notebook", "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.9277978339, "max_line_length": 538, "alphanum_fraction": 0.7405368204, "num_tokens": 7060, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.7981867801399695, "lm_q1q2_score": 0.5528576844364115}}
{"text": "%%#fixtex% for html/pdf   -*-latex-*-\n\n\\section{Fourier Transforms in {\\ifeffit} } \\label{App:Fourier}\n\nThis appendix describes and lists the conventions used for Fourier\ntransforms in {\\ifeffit}.\n\n\n\\subsection{Fourier transform Conventions}\\label{App:Fourier:overview}\n\nMany of {\\ifeffit}'s command use Fourier transforms (FT) to perform\ntheir tasks.  In addition to {\\cmnd{fftf}} and {\\cmnd{fftr}}, which\nare principly designed to do Fourier transforms, the commands\n{\\cmnd{chi\\_noise}}, {\\cmnd{feffit}}, and {\\cmnd{spline}} all do (or\ncan do) Fourier transforms as part of their data processing.  The form\nof the Fourier transform done by all these commands is the same, and\nis really an XAFS-specific Fourier transform that converts {\\chik}\ninto {\\chir} in the forward direction and {\\chir} into {\\chiq} in the\nreverse direction.  The XAFS-specific FT done by these commands will\nbe described in detail shortly.  For now, an important point to\nemphasize is that all these commands share many arguments and grogram\nvariables describing the Fourier transforms.  These shared command\narguments and program variables are the topic of this section.\n\nThe forward XAFS Fourier transform, done with {\\cmnd{fftf}}, transforms\n{\\chik} to {\\chir}.  To do this, the {\\chik} data is first multiplied by a\n$k$-weighting factor of the form $k^w$ and a window function before the\nactual Fast Fourier transform is performed.  The $k$-weighting factor $w$\nis used to ``even out'' the decaying {\\chik} function and to emphasize\ndifferent $k$-regions of the EXAFS in the resulting {\\chir}.  Popular\nchoics for $w$ are 1, 2, and 3.  \n\nFormally, the XAFS Fourier transform can be written as\n\\begin{equation}\n\\tilde\\chi(R) =  {{1}\\over{\\sqrt{2\\pi}}} \\int_{-\\infty}^{\\infty} {\n         dk e^{i2kR} k^w \\tilde\\chi(k) \\Omega(k) }      \n\\end{equation}\n\\noindent\nwhere $\\Omega(k)$ is the window function, and $w$ is the $k$-weighting\nfactor.  The window function can take a variety of functional forms, all of\nwhich rise from a small value (possible zero) at low-$k$, rise up to one,\nand then fall back towards zero at high-$k$.  The window is intended to\nsmooth out any ringing in the resulting FT amplitude while maintaining as\nmuch resolution as possible, and will be discussed in more detail in the\nnext section.\n\n\nA discrete form of the above formula is actually used so that the Fast\nFourier Transform algorithm can be exploited.  The key point here is that\nthe data is sampled on a finite and {\\emph{uniform}} grid in $k$ (or $R$\nfor the back-transform).  The $k$-space grid used throughout {\\ifeffit} is\n$\\delta k=0.05 {\\rm\\,  \\AA}^{-1}$.  The array sizes for {\\chik} and {\\chir}\nare $N_{\\rm fft} = 2048$, and the data is {\\emph{zero-padded}} out to\nhigh-$k$ (or high-$R$).  The zero-padding for {\\chik} will smooth the data\npoints in $R$-space, and the zero-padding of {\\chir} will smooth the data\nin backtransformed-$k$-space.  
The grid in $R$-space is $\\delta R = \\pi /\n(N_{\\rm fft}\\,\\delta k)$, which is then $\\sim0.0307\\,\\rm\\AA$.\n\nFor the discrete Fourier transforms, we write $k_n =n \\delta k$ and $R_m = m\n\\,\\delta R$, and have\n\\begin{equation}\n\\tilde\\chi(R_m) = {{i \\delta\n    k}\\over{\\sqrt{\\pi N_{\\rm fft}}}}\\, \\sum_{n=1}^{N_{\\rm fft}} \\chi(k_n)\n\\, \\Omega(k_n) \\, k_n^w e^{2\\pi i n m/N_{\\rm fft}}\n\\end{equation}\n\\noindent\nfor the forward transform and\n\\begin{equation}\n\\tilde\\chi(k_n) = {{2\n    i \\delta R}\\over{\\sqrt{\\pi N_{\\rm fft}}}}\\, \\sum_{m=1}^{N_{\\rm fft}}\n\\tilde\\chi(R_m) \\, \\Omega(R_m) \\, e^{-2\\pi i n m/N_{\\rm fft}}\n\\end{equation}\n\\noindent\nfor the back transform.  These normalizations preserve the symmetry\nproperties of the Fourier Transforms with conjugate variables $k$ and $2R$.\n\n\nThere are a few slight complications with these formulas.  The first arises\nfrom the fact that, because the classic EXAFS equation has a term of the\nform ${e^{i2kR}}$ or $\\sin(2kR)$, it is customary to use $k$ and $2R$ as the\nFourier conjugate variables while still desiring the $R$-space function to\nbe a function of $R$.  This changes the normalization factors in front of\nthe integral to those above.  \n\n\nThe other minor complication is that the ``measured'' {\\chik} is derived\nfrom {\\muE} and is a strictly real function, while the Fourier transform\ninherently treats {\\chik} as a complex function (signified by the $\\tilde{ }\n$ above the $\\chi$).  There is an ambiguity about how to construct the\ncomplex $\\tilde\\chi(k)$.  In many formal treatments, the measured XAFS is\nwritten as the imaginary part of some function, so that constructing\n$\\tilde\\chi(k)$ as $(0, \\chi_{\\rm measured}(k))$ might seem a natural\nchoice.  For historical reasons, {\\ifeffit} uses the opposite convention,\nconstructing $\\tilde\\chi(k)$ as $(\\chi_{\\rm measured}(k),0)$.  You can\neasily override this default, however, and do transforms assuming {\\chik} is\nthe imaginary part of $\\tilde\\chi(k)$.  Normally, one does a forward\ntransform with\n\\begin{verbatim}\n  Iff> fftf(real = data.chi)\n\\end{verbatim}\n\\noindent\nwhich sets $\\tilde\\chi(k)$ as $(\\chi_{\\rm data}(k),0)$. You can use\n\\begin{verbatim}\n  Iff> fftf(imag = data.chi)\n\\end{verbatim}\n\\noindent\nto  construct $\\tilde\\chi(k)$ as $(0,\\chi_{\\rm data}(k))$.\n\nThe Fourier transform requires that the $\\chi(k)$ data begin at $k=0$.\nMore to the point, the {\\cmnd{fftf}} command {\\emph{assumes}} that the\nsupplied array for $\\chi$ starts at $k=0$ unless told otherwise.  It is\nimportant to include the corresponding $k$-array as well.  If not given, the\n$\\chi$ array will be assumed to have its first point be $\\chi(k=0)$, and\nthen to be input on an even $k$-grid with spacing $\\rm0.05\\,\\AA^{-1}$.\n\n\n \n\\subsection{Fourier transform window functions}\\label{App:Fourier:windows}\n\nThere are seven optional forms for the Fourier transform window\n$\\Omega(k)$.  There is quite a bit of literature on the different windows,\nand generally more opinion than justified reason for selecting one window\nfunction over others.  I believe that all the window functions in\n{\\ifeffit} are appropriate and useful for EXAFS analysis.  My\nrecommendation is to pick one function and stick with it.  If you're unsure\nabout which one to pick, my favorites are the Hanning window (the default\nin {\\ifeffit}, largely for historical reasons) and the Kaiser-Bessel\nwindow.  
Again, I have not seen any objective rationale for preferring any\nother windows, and the choice is really a matter of taste.\n\nThe available window functions are described below, first in the table\ngiving a brief description, then with an equation for the window function,\nand finally with a representative plot.  For simplicity, all are written as\nfunctions of $k$.  The $R$-space windows are exactly analogous with\n{\\tt{k}} replaced by {\\tt{r}}.\n\n\\begin{table}\n  \\begin{center}\n    \\caption[a]{Table of Fourier Transform Window Functions.  The first\n      four windows listed ramp up from 0 to 1 over a $k$-range defined by the\n      {\\tt{dk}} parameter, stay at 1 for some $k$-range, and then drop back\n      down to zero.  The final three window functions apply a continuous\n      function that may never go to zero over the entire $k$-range of the\n      window.  For each window type, the Key in the second column gives the\n      value to use for the {\\tt{kwindow}} parameter of {\\cmnd{fftf}}, or\n      the {\\tt{rwindow}} parameter of {\\cmnd{fftr}}.  \\smallskip }\n    {\\label{Table:FTwins}}\n    \\begin{tabular}{lll}\n    \\noalign{\\smallskip}%\n    {Window Name} & {Key} & Description  \\\\ \n    \\noalign{\\smallskip}    \\hline    \\noalign{\\smallskip}    \n    Hanning            & {\\tt{hanning}} &  ramps up and down as $\\cos^2(k)$ \\\\\n    Parzen             & {\\tt{parzen}}  &  ramps up and down linearly with $k$\\\\\n    Welch              & {\\tt{welch}}   &  ramps up and down quadratically with $k$\\\\\n    \\noalign{\\smallskip}\n    Sine               & {\\tt{sine}}    &  a Sine function over the full $k$-range\\\\\n    Gaussian           & {\\tt{gaussian}}&  a Gaussian function over the full  $k$-range\\\\\n    Kaiser-Bessel      & {\\tt{kaiser}}  &  a modified Bessel function over the full $k$-range\\\\\n    \\noalign{\\smallskip}    \\hline\n    \\end{tabular}\n  \\end{center}\n\\end{table}\n  \nAn alternative window {\\bf{array}} can be named with the\n   {\\tt{altwindow}} keyword, overriding the `normal' window.  The array\n   specified must be created before invoking {\\tt{feffit}}. 
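To make the window ``sills'' concrete, here is a hedged Python sketch of the Hanning window, with $\sin^2$/$\cos^2$ sills of widths {\tt dk1} and {\tt dk2} centered on {\tt kmin} and {\tt kmax}, following the legacy formulas reproduced in the comments at the end of this appendix. Treat it as illustrative rather than as {\ifeffit}'s exact implementation.

\begin{verbatim}
import numpy as np

def hanning_window(k, kmin, kmax, dk1, dk2):
    """Hanning window: sin^2 ramp of width dk1 centered on kmin,
    flat top at 1, cos^2 ramp of width dk2 centered on kmax."""
    w = np.zeros_like(k, dtype=float)
    lo1, hi1 = kmin - dk1 / 2, kmin + dk1 / 2
    lo2, hi2 = kmax - dk2 / 2, kmax + dk2 / 2
    up = (k >= lo1) & (k < hi1)        # rising sill
    flat = (k >= hi1) & (k <= lo2)     # flat top
    down = (k > lo2) & (k <= hi2)      # falling sill
    w[up] = np.sin(np.pi * (k[up] - lo1) / (2 * dk1)) ** 2
    w[flat] = 1.0
    w[down] = np.cos(np.pi * (k[down] - lo2) / (2 * dk2)) ** 2
    return w
\end{verbatim}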
\n\n\n\n%%#GraphicsFile%    win_hanning  WinHanning\n% Anatomy of the Hanning Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_hanning.ps}\n  \\caption{ Anatomy of the Hanning Window.}\\label{Fig:WinHanning}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%%#GraphicsFile%    win_parzen  WinParzen\n% Anatomy of the Parzen Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_parzen.ps}\n  \\caption{ Anatomy of the Parzen Window.}\\label{Fig:WinParzen}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%%#GraphicsFile%    win_welch  WinWelch\n% Anatomy of the Welch Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_welch.ps}\n  \\caption{ Anatomy of the Welch Window.}\\label{Fig:WinWelch}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%%#GraphicsFile%    win_gauss  WinGauss\n% Anatomy of the Gauss Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_gauss.ps}\n  \\caption{ Anatomy of the Gauss Window.}\\label{Fig:WinGauss}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%%#GraphicsFile%    win_kaiser  WinKaiser\n% Anatomy of the Kaiser Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_kaiser.ps}\n  \\caption{ Anatomy of the Kaiser Window.}\\label{Fig:WinKaiser}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%%#GraphicsFile%    win_sine  WinSine\n% Anatomy of the Sine Window.\n\\begin{figure}[tb] \\begin{center}\n  \\includegraphics[width=2.75in,angle=-90]{figs/win_sine.ps}\n  \\caption{ Anatomy of the Sine Window.}\\label{Fig:WinSine}\n\\end{center} \\end{figure}\n%%#EndGraphics%\n\n%% {{\\bigskip} \\halign{\\tolerance=8000 \\hfuzz=5pt\n%% \\vtop{\\parindent=0.25truein\\hsize=1.00truein\\tt\\strut{\\hfil#\\hfil}\\strut}&%\n%% \\vtop{\\parindent=0.00truein\\hsize=5.25truein\\rm\\strut{#}\\strut}\\cr\n%% Ikwindo & Window Type and functional form  \\cr\n%% \\noalign{\\smallskip}\n%% %%\n%%   0  & Hanning Window Sills:  The Default Window Type.  \\cr%%\n%%      & {\\vskip-30truept { $${\n%%   { {\\rm W}(k) =  \\cases{\n%% \\displaystyle\\sin^2\\bigg({{\\pi \\, ({k - {\\tt Kmin} + {\\tt Dk1}/2})}\\over\n%%                          {2 \\, \\, {\\tt Dk1}}} \\bigg) ,\n%%       & ${\\tt Kmin} - {\\tt Dk1}/2 \\le k  <  {\\tt Kmin} + {\\tt Dk1}/2$\\cr\n%%  1.0, & ${\\tt Kmin} + {\\tt Dk1}/2 \\le k \\le {\\tt Kmax} - {\\tt Dk2}/2$\\cr\n%% \\displaystyle\\cos^2\\bigg({{\\pi \\, ({k - {\\tt Kmax} + {\\tt Dk2}/2})}\\over\n%%                          {2 \\, \\, {\\tt Dk2}}} \\bigg) ,\n%%    & ${\\tt Kmax} - {\\tt Dk2}/2  <  k \\le {\\tt Kmax} + {\\tt Dk2}/2$\\cr}\n%% }  \\hskip 3truept plus 5pt minus 2pt \\hfill  }$$}\\vskip-16truept}\\cr\n%% %%\n%%   1  & Hanning Window Fraction:  {\\tt Dk1} is the fraction of the \n%%            window range that is not held at 1.00. 
In the formula \n%%            below, $\\gamma =  {\\tt Dk1}({\\tt Kmax}-{\\tt Kmin})/2$ \\cr\n%%      & {\\vskip-30truept { $${\n%% {\\rm W}(k) =  \\cases{\n%% \\displaystyle\\sin^2\\bigg({{\\pi \\, ({k - {\\tt Kmin} + {\\tt Dk1}/2})}\\over\n%%                          {2 \\, \\, {\\tt Dk1}}} \\bigg) ,\n%%   & ${\\tt Kmin} \\le k  <  {\\tt Kmin} + \\gamma $ \\cr\n%% {\\vbox{\\vskip4truept }}\n%%  1.0 \n%%    & ${\\tt Kmin} + \\gamma  \\le k \\le {\\tt Kmax} - \\gamma $\\cr\n%% {\\vbox{\\vskip4truept }}\n%% \\displaystyle\\cos^2\\bigg({{\\pi \\, ({k - {\\tt Kmax} + {\\tt Dk1}/2})}\\over\n%%                          {2 \\, \\, {\\tt Dk1}}} \\bigg) ,\n%%    & ${\\tt Kmax} - \\gamma <  k \\le {\\tt Kmax} $\n%% \\cr}  \\hskip 45truept plus 10truept minus 20truept \\hfill \n%% }$$}\\vskip-16truept}\\cr\n%% \\noalign{\\goodbreak}\n%%  2  &  Gaussian Window:  Note that ${\\rm W}(k)$ never goes to zero.\n%%        {\\tt Iwindo} = 7 gives an alternate form for the Gaussian window.\\cr\n%%      & {\\vskip-30truept { $${\n%% {\\rm W}(k) =\n%%      \\exp\\bigg( -{\\tt Dk1}\\, \\bigl\\{{ {2k - {\\tt Kmax} - {\\tt Kmin} }\n%%                                      \\over{ {\\tt Kmax} + {\\tt Kmin} } \n%%                                       }\\bigr\\}^2\\bigg) \n%%  \\hskip 163truept plus 10truept minus 20truept \\hfill }$$}\\vskip-16truept}\\cr\n%% %%\\noalign{\\smallskip}\n%% \\noalign{\\medskip \\goodbreak}\n%%  3   & Lorentzian Window: Note that ${\\rm W}(k)$ never goes to zero.\\cr\n%%      & {\\vskip-30truept { $${\n%% {\\rm W}(k) =\n%%   \\bigg( 1.0 + {\\tt Dk1}\\, \\bigl\\{{ {2k - {\\tt Kmax} - {\\tt Kmin} }\n%%                                    \\over{ {\\tt Kmax} + {\\tt Kmin} } \n%%                                       }\\bigr\\}^2\\bigg)^{-1} \n%%  \\hskip 156truept plus 10truept minus 20truept \\hfill }$$}\\vskip-16truept}\\cr\n%%  %%\\noalign{\\smallskip}\n%% \\noalign{\\goodbreak}\n%%   4  & Parzen Window: This window has linear ``sills''.\\cr\n%%      & {\\vskip-30truept { $${\n%% {\\rm W}(k) =  \\cases{%\n%% \\displaystyle{ { k - {\\tt Kmin} + {\\tt Dk1}/2 }\\over\n%%                          {\\tt Dk1}}  ,\n%%    & ${\\tt Kmin} - {\\tt Dk1}/2 \\le k  <  {\\tt Kmin} + {\\tt Dk1}/2$\\cr\n%% {\\vbox{\\vskip4truept }}\n%% 1.0   & ${\\tt Kmin} + {\\tt Dk1}/2 \\le k \\le {\\tt Kmax} - {\\tt Dk2}/2$\\cr\n%% {\\vbox{\\vskip4truept  }}\n%% \\displaystyle{1.0 - {{ k - {\\tt Kmax} + {\\tt Dk2}/2 } \\over\n%%                          {\\tt Dk2}} } ,\n%%    & ${\\tt Kmax} - {\\tt Dk2}/2  <  k \\le {\\tt Kmax} + {\\tt Dk2}/2 $\n%% \\cr}  \\hskip 25truept plus 10truept minus 20truept \\hfill \n%% }$$}\\vskip-16truept}\\cr\n%% \\noalign{\\goodbreak}\n%%   5  & Welch Window: This window has quadratic ``sills''. 
\\cr\n%%      & {\\vskip-30truept { $${\n%% {\\rm W}(k) =  \\cases{%\n%% \\displaystyle{ \\big\\{{{ k - {\\tt Kmin} + {\\tt Dk1}/2} \\over\n%%                          {\\tt Dk1}}\\big\\}^2 },\n%%    & ${\\tt Kmin} - {\\tt Dk1}/2 \\le k  <  {\\tt Kmin} + {\\tt Dk1}/2$\\cr\n%% {\\vbox{\\vskip4truept }}\n%% 1.0   & ${\\tt Kmin} + {\\tt Dk1}/2 \\le k \\le {\\tt Kmax} - {\\tt Dk2}/2$\\cr\n%% {\\vbox{\\vskip4truept }}\n%% \\displaystyle{1.0 - \\big\\{{{k - {\\tt Kmax} + {\\tt Dk2}/2}\\over\n%%                          {\\tt Dk2}}\\big\\}^2 } ,\n%%   & ${\\tt Kmax} - {\\tt Dk2}/2  <  k \\le {\\tt Kmax} + {\\tt Dk2}/2$\\cr}\n%%   \\hskip 8truept plus 10truept minus 20truept \\hfill \n%% }$$}\\vskip-16truept}\\cr\n%% \\noalign{\\goodbreak}\n%%    6  &  Sine Window: This gives a half-period over the window range. \\cr\n%%       & {\\vskip-30truept { $${{\\rm W}(k) =\n%%             \\sin \\bigg({ { \\pi\\,({\\tt Kmax} + {\\tt Dk2} - k)}\\over\n%%              { {\\tt Kmax} + {\\tt Dk2} - {\\tt Kmin} + {\\tt Dk1}}}\\bigg)\n%%             \\hskip 180truept plus 10truept minus 20truept \\hfill }$$}\n%% \\vskip-16truept}\\cr\n%%    7  &  Gaussian Window: An alternate version of the Gaussian window.\\cr\n%%       & {\\vskip-30truept { $${ {\\rm W}(k) =\n%%              \\exp\\bigg( -{\\tt Dk1}\\, \\bigl({k - {\\tt Dk2} }\\bigr)^2\\bigg)\n%%           \\hskip 216truept plus 10truept minus 20truept \\hfill }$$}\n%% \\vskip-16truept}\\cr   }}   \\goodbreak\n", "meta": {"hexsha": "bf99580e7830d2472d7df10737be8dc2b9bf6698", "size": 14952, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/RefMan/fourier.tex", "max_stars_repo_name": "keechul/ifeffit", "max_stars_repo_head_hexsha": "306444e500cb3ecb1795fcbde9219369b003f1fa", "max_stars_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-09-16T12:41:43.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-08T05:17:16.000Z", "max_issues_repo_path": "doc/RefMan/fourier.tex", "max_issues_repo_name": "bruceravel/ifeffit", "max_issues_repo_head_hexsha": "97f6458584e237a6a9e3681bb9b604c9d1ec9743", "max_issues_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-07-20T01:15:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-20T02:09:08.000Z", "max_forks_repo_path": "doc/RefMan/fourier.tex", "max_forks_repo_name": "bruceravel/ifeffit", "max_forks_repo_head_hexsha": "97f6458584e237a6a9e3681bb9b604c9d1ec9743", "max_forks_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-03-22T19:27:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-23T07:47:31.000Z", "avg_line_length": 47.6178343949, "max_line_length": 95, "alphanum_fraction": 0.6346308186, "num_tokens": 5018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8221891305219504, "lm_q2_score": 0.672331705744791, "lm_q1q2_score": 0.5527838205686495}}
{"text": "\\myframe{proving things about \\tla\\ specs: \\hfill correctness}{\n\nOne may want to prove something like \\\\\n\\vspace{5pt}\n\\hfill the following about the Quicksort specification: \n\\vspace{15pt}\n\\begin{example}[correctness]\n\\[\n\\begin{array}{r c l}\nCorrectness & \\triangleq & (U = \\{\\})\\  \\Rightarrow\\  A \\in PermsOf(A_0) \\ \\wedge \\ IsSorted(A)\\\\\n\\\\\n\\end{array}\n\\]\n\n\\end{example}\n\n\\vspace{75pt}\n\n}\n\n\n\\input{tla-pic.tex}\n\n\\subsection{The Basics}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm}{\n\n{\\color{Maroon} Idea:} \\\\\n\\vspace{15pt}\nUser writes something fairly close to\\\\ \n\\hspace{50pt} a Hilbert-style mathematical proof \\\\\n\\hspace{45pt} (or a Fitch-style natural deduction proof); \\\\\n\\vspace{13pt}\n\\hfill backend provers take care of the annoying details. \n\\vspace{75pt}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm}{\n\n{\\color{Maroon} Idea:} \\\\\n\\vspace{15pt}\nUser writes something {\\color{Maroon}fairly close} to\\\\ \n\\hspace{50pt} a Hilbert-style mathematical proof \\\\\n\\hspace{45pt} (or a Fitch-style natural deduction proof); \\\\\n\\vspace{15pt}\n\\hfill backend provers take care of the annoying details.\n\\vspace{25pt}\n\n% reasonably light-weight \n{\\color{Maroon} some guidance using keywords like\\\\\n\\hfill \\kw{assume-prove}, \\kw{pick}, \\kw{have}, \\kw{witness}, \\kw{case}, etc }\n\n\\vspace{26pt}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm}{\n\n{\\color{Maroon} Idea:} \\\\\n\\vspace{15pt}\nUser writes something fairly close to\\\\ \n\\hspace{50pt} a Hilbert-style mathematical proof \\\\\n\\hspace{45pt} (or a Fitch-style natural deduction proof); \\\\\n\\vspace{15pt}\n\\hfill backend provers {\\color{Maroon}take care of the annoying details}. \n\\vspace{25pt}\n\n\\hfill {\\color{Maroon} we wish... 
\\smiley }\n\\vspace{38pt}\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} The simplest \\tlapm\\ proof  is a one-liner:}\n\n\\begin{example}[reflexivity of $PermsOf$]\n\\vspace{-10pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_refl & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T] \\\\\n& & \\kw{prove}\\ \\    X \\in PermsOf(X) \\\\ \\\\\n&  \\kw{by} &   \\kw{def}\\ \\ PermsOf,\\ Automorphisms,\\ **\n\\end{array}\n\\]\n\n\\end{example}               \n\\vspace{80pt}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} The simplest \\tlapm\\ proof  is a one-liner:}\n\n\\begin{example}[reflexivity of $PermsOf$]\n\\vspace{-10pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_refl & \\triangleq & \n{\\color{Maroon}\\textbf{\\kw{assume}}}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T] \\\\\n& &{\\color{Maroon} \\textbf{\\kw{prove}}}\\ \\    X \\in PermsOf(X) \\\\ \\\\\n&  \\kw{by} &  \\kw{def}\\ \\ PermsOf,\\ Automorphisms,\\ **\n\\end{array}\n\\]\n\n\\end{example}               \n\n{\\color{Maroon}\n\\underline{\\kw{assume-prove}} \\\\ \\vspace{10pt}\nequivalent to: \\hfill \n$\\forall S,\\ \\forall T,\\ \\forall X \\in [S \\rightarrow T] .\\ X \\in PermsOf(X) $\\\\ \\vspace{10pt}\nactually: \\hfill $X \\in [S \\rightarrow T]\\ \\vdash\\ X \\in PermsOf(X)$ \n}\n\\vspace{20pt}\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} The simplest \\tlapm\\ proof  is a one-liner:}\n\n\\begin{example}[reflexivity of $PermsOf$]\n\\vspace{-10pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_refl & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T] \\\\\n& & \\kw{prove}\\ \\    X \\in PermsOf(X) \\\\ \\\\\n&{\\color{Maroon} \\textbf{\\kw{by}}} &{\\color{Maroon} \\textbf{\\kw{def}\\ \\ PermsOf,\\ Automorphisms,\\ **}}\n\\end{array}\n\\]\n\n\\end{example}               \n\\vspace{80pt}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} The simplest \\tlapm\\ proof  is a one-liner:}\n\n\\begin{example}[reflexivity of $PermsOf$]\n\\vspace{-10pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_refl & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T] \\\\\n& & \\kw{prove}\\ \\    X \\in PermsOf(X) \\\\ \\\\\n&{\\color{Maroon} \\textbf{\\kw{by}}} &{\\color{Maroon} Isa \\ \\ \\textbf{\\kw{def}\\ \\ PermsOf,\\ Automorphisms,\\ **}}\n\\end{array}\n\\]\n\n\\end{example}               \n{\\color{Maroon} \n\\underline{Tactics} \\\\ \\vspace{10pt}\n\\hfill \\kw{SMT, (CVC, Z3, Yices), \\ Isa, (Auto, Blast, Force), Zenon, TPL} \\\\ \\vspace{10pt}\ndefault: \\hfill \\kw{SMT, Zenon, Isa}\n} \n\n\\vspace{22.5pt}\n}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% PERMS OF TRANS \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of $PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, 
\\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& & \\kw{new}\\ Y \\in PermsOf(X), \\kw{new}\\ Z \\in PermsOf(Y) \\\\\n& & \\kw{prove}\\ \\    Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n& \\langle1\\rangle 1 & Y \\in [S \\rightarrow  T] \\\\\n&  & \\kw{by}\\  PermsOf\\_type \\\\\n& \\langle1\\rangle 2 & \\kw{pick}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & \\kw{by}\\  \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 3 & \\kw{pick}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & \\kw{by}\\  \\langle1\\rangle 1\\ \\ \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 4 & Z = X **\\, (f **\\, g) \\\\\n&  & \\kw{by}\\   \\langle1\\rangle 2, \\langle1\\rangle 3\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n& \\langle1\\rangle\\ \\  & \\kw{qed} \\\\\n&  & \\kw{by}\\   Automorphisms\\_trans, \\langle1\\rangle 4\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of $PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& &  {\\color{Maroon} \\textbf{\\kw{new}}\\ Y \\in PermsOf(X), \\textbf{\\kw{new}}\\ Z \\in PermsOf(Y)} \\\\\n& & \\color{Maroon} \\textbf{\\kw{prove}}\\ \\ Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n& \\langle1\\rangle 1 & Y \\in [S \\rightarrow  T] \\\\\n&  & \\kw{by}\\  PermsOf\\_type \\\\\n& \\langle1\\rangle 2 & \\kw{pick}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & \\kw{by}\\  \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 3 & \\kw{pick}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & \\kw{by}\\  \\langle1\\rangle 1\\ \\ \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 4 & Z = X **\\, (f **\\, g) \\\\\n&  & \\kw{by}\\   \\langle1\\rangle 2, \\langle1\\rangle 3\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n& \\langle1\\rangle\\ \\  & \\kw{qed} \\\\\n&  & \\kw{by}\\   Automorphisms\\_trans, \\langle1\\rangle 4\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of $PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& & \\kw{new}\\ Y \\in PermsOf(X), \\kw{new}\\ Z \\in PermsOf(Y) \\\\\n& & \\kw{prove}\\ \\    Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n&\\color{Maroon} \\langle1\\rangle 1 & Y \\in [S \\rightarrow  T] \\\\\n&  & {\\color{Maroon}\\textbf{\\kw{by}}}\\  PermsOf\\_type \\\\\n&\\color{Maroon} \\langle1\\rangle 2 & \\kw{pick}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & {\\color{Maroon}\\textbf{\\kw{by}}}\\  \\kw{def}\\  PermsOf \\\\\n&\\color{Maroon} \\langle1\\rangle 3 & \\kw{pick}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & {\\color{Maroon}\\textbf{\\kw{by}}}\\  \\langle1\\rangle 1\\ \\ \\kw{def}\\  PermsOf 
\\\\\n&\\color{Maroon} \\langle1\\rangle 4 & Z = X **\\, (f **\\, g) \\\\\n&  & {\\color{Maroon}\\textbf{\\kw{by}}}\\   \\langle1\\rangle 2, \\langle1\\rangle 3\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n&\\color{Maroon} \\langle1\\rangle\\ \\  & \\color{Maroon} \\textbf{\\kw{qed}} \\\\\n&  & {\\color{Maroon}\\textbf{\\kw{by}}}\\   Automorphisms\\_trans, \\langle1\\rangle 4\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of $PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& & \\kw{new}\\ Y \\in PermsOf(X), \\kw{new}\\ Z \\in PermsOf(Y) \\\\\n& & \\kw{prove}\\ \\    Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n& \\langle1\\rangle 1 & Y \\in [S \\rightarrow  T] \\\\\n&  & \\kw{by}\\ \\color{Maroon} PermsOf\\_type \\\\\n& \\langle1\\rangle 2 & \\kw{pick}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & \\kw{by}\\  \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 3 & \\kw{pick}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & \\kw{by}\\ {\\color{Maroon} \\langle1\\rangle 1}\\ \\ \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 4 & Z = X **\\, (f **\\, g) \\\\\n&  & \\kw{by}\\ {\\color{Maroon} \\langle1\\rangle 2, \\langle1\\rangle 3}\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n& \\langle1\\rangle\\ \\  & \\kw{qed} \\\\\n&  & \\kw{by}\\ {\\color{Maroon} Automorphisms\\_trans, \\langle1\\rangle 4}\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of $PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& & \\kw{new}\\ Y \\in PermsOf(X), \\kw{new}\\ Z \\in PermsOf(Y) \\\\\n& & \\kw{prove}\\ \\    Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n& \\langle1\\rangle 1 & \\color{Maroon} \\bf Y \\in [S \\rightarrow  T] \\\\\n&  & \\kw{by}\\  PermsOf\\_type \\\\\n& \\langle1\\rangle 2 & \\kw{pick}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & \\kw{by}\\  \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 3 & \\kw{pick}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & \\kw{by}\\  \\langle1\\rangle 1\\ \\ \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 4 & \\color{Maroon} \\bf Z = X **\\, (f **\\, g) \\\\\n&  & \\kw{by}\\   \\langle1\\rangle 2, \\langle1\\rangle 3\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n& \\langle1\\rangle\\ \\  & \\kw{qed} \\\\\n&  & \\kw{by}\\   Automorphisms\\_trans, \\langle1\\rangle 4\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n \\hspace{-18pt} {\\color{Maroon} Some \\tla\\ proofs are more complicated (`hierarchical'):}\n\n\\footnotesize\n\\begin{example}[transitivity of 
$PermsOf$]\n\\footnotesize\n\\vspace{-7pt}\n\\[\n\\begin{array}{r c l }\n\\kw{lemma}\\ PermsOf\\_trans & \\triangleq & \n\\kw{assume}\\ \\kw{new}\\ S, \\kw{new}\\ T, \\kw{new}\\ X \\in [S \\rightarrow T], \\\\\n& & \\kw{new}\\ Y \\in PermsOf(X), \\kw{new}\\ Z \\in PermsOf(Y) \\\\\n& & \\kw{prove}\\ \\    Z \\in PermsOf(X) \\\\ \\\\\n\\end{array} \n\\] \\vspace{-15pt}\n%\\scriptsize\n\\[\n\\begin{array}{r c l}\n& \\langle1\\rangle 1 & Y \\in [S \\rightarrow  T] \\\\\n&  & \\kw{by}\\  PermsOf\\_type \\\\\n& \\langle1\\rangle 2 & {\\color{Maroon} \\text{\\bf{\\kw{pick}}}}\\ f \\in Automorphisms(S) : Y = X **\\, f \\\\\n&  & \\kw{by}\\  \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 3 & {\\color{Maroon} \\text{\\bf{\\kw{pick}}}}\\ g \\in Automorphisms(S) : Z = Y **\\, g \\\\\n&  & \\kw{by}\\  \\langle1\\rangle 1\\ \\ \\kw{def}\\  PermsOf \\\\\n& \\langle1\\rangle 4 & Z = X **\\, (f **\\, g) \\\\\n&  & \\kw{by}\\   \\langle1\\rangle 2, \\langle1\\rangle 3\\ \\  \\kw{def}\\  Automorphisms, ** \\\\\n& \\langle1\\rangle\\ \\  & \\kw{qed} \\\\\n&  & \\kw{by}\\   Automorphisms\\_trans, \\langle1\\rangle 4\\ \\  \\kw{def}\\  PermsOf \\\\\n\\end{array}\n\\]\n\n\\end{example}               \n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill \\tlapm\\ proofs}{\n\n\\begin{center}\n\\href{run:examples/videos/PermsOf.mp4}{\\textbf{PermsOf lemmas demo}}\n\\end{center}\n\n}\n\n\\subsection{Safety Properties: Proving Boxes}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill safety proofs}\n{\nFor some safety property $P$,  the thing to prove is generally:\n\\vspace{25pt}\n\\begin{theorem}[Safety]\n\\begin{center}\n$Spec\\ \\Rightarrow\\  \\Box\\, P$\n\\end{center}\n\\end{theorem}\n\n\\vspace{75pt}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill safety proofs}\n{\nFor some safety property $P$,  the thing to prove is generally:\n\\vspace{25pt}\n\\begin{theorem}[Safety]\n\\begin{center}\n$Init \\wedge \\Box\\, [Next]_V \\  \\Rightarrow \\ \\Box\\, P$\n\\end{center}\n\\end{theorem}\n\n\\vspace{75pt}\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill safety proofs}\n{\nFor some safety property $P$,  the thing to prove is generally:\n\\vspace{25pt}\n\\begin{theorem}[Safety]\n\\begin{center}\n$Init \\wedge \\Box\\, [Next]_V \\  \\Rightarrow \\ \\Box\\, P$\n\\end{center}\n\\end{theorem}\n\\vspace{20pt}\n{\\color{Maroon} Question:} \\\\\n\\hfill How does one prove a $\\Box$ ?\n\\vspace{27pt}\n}\n\n\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill proving $\\Box$}\n{\n\\hspace{-18pt} {\\color{Maroon}There are two rules that `make' a $\\Box$:}\n\\vspace{15pt}       \n\\begin{rl}[$\\Box$-Induction -- standard LTL]\n\\begin{prooftree}\n         \\AxiomC{$Init \\seq P$}\n         \\AxiomC{$P\\ ,\\ [Next]_V \\seq  P'$}\n         \\RightLabel{\\rlable{\\Box-Ind}}    \n         \\BinaryInfC{$Init\\ ,\\ \\Box\\, [Next]_V  \\seq \\Box\\, P$}   \n\\end{prooftree}\n\\end{rl}\n\n\\vspace{30pt}\n\n\\begin{rl}[Necessitation]\n\\begin{prooftree}\n        \\AxiomC{$\\seq P$}\n         \\RightLabel{\\rlable{Nec}}    \n        \\UnaryInfC{$\\hspace{8pt} \\seq \\Box\\, P$}\n\\end{prooftree}\n\\end{rl}\n\\vspace{45pt}\n\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill proving $\\Box$}\n{\n\\hspace{-18pt} {\\color{Maroon}There are two rules that `make' a $\\Box$:}\n\\vspace{15pt}       \n\\begin{rl}[$\\Box$-Induction -- standard LTL]\n\\begin{prooftree}\n         \\AxiomC{$Init \\seq P$}\n         \\AxiomC{$P\\ ,\\ [Next]_V \\seq  P'$}\n         
\\RightLabel{\\rlable{\\Box-Ind}}    \n         \\BinaryInfC{$Init\\ ,\\ \\Box\\, [Next]_V  \\seq \\Box\\, P$}       \n\\end{prooftree}\n\\end{rl}\n\n\\vspace{30pt}\n\n\\begin{rl}[Necessitation]\n\\begin{prooftree}\n        \\AxiomC{$\\Box\\, F_1, \\ldots, \\Box\\, F_n \\seq P$}\n         \\RightLabel{\\rlable{Nec}}    \n        \\UnaryInfC{$\\hspace{8pt} \\Box\\, F_1, \\ldots, \\Box\\, F_n \\seq \\Box\\, P$}\n\\end{prooftree}\n\\end{rl}\n\\vspace{45pt}\n\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill proving $\\Box$}\n{\n\\hspace{-18pt} {\\color{Maroon}There are two rules that `make' a $\\Box$:}\n\\vspace{15pt}       \n\\begin{rl}[$\\Box$-Induction -- standard LTL]\n\\begin{prooftree}\n         \\AxiomC{$Init \\seq P$}\n         \\AxiomC{$P\\ ,\\ [Next]_V \\seq  P'$}\n         \\RightLabel{\\rlable{\\Box-Ind}}    \n         \\BinaryInfC{$Init\\ ,\\ \\Box\\, [Next]_V  \\seq \\Box\\, P$}       \n\\end{prooftree}\n\\end{rl}\n\n\\vspace{30pt}\n\n\\begin{rl}[Necessitation -- careful about weakening !]\n\\begin{prooftree}\n        \\AxiomC{$\\Box\\, F_1, \\ldots, \\Box\\, F_n \\hspace{10pt} \\seq P$}\n        \\RightLabel{\\rlable{WK}}\n         \\UnaryInfC{$\\hspace{1pt} \\Box\\, F_1, \\ldots, \\Box\\, F_n, G \\seq  P$}\n        \\RightLabel{\\color{Maroon} \\Large $\\times$}\n        \\UnaryInfC{$\\hspace{9pt} \\Box\\, F_1, \\ldots, \\Box\\, F_n , G \\seq \\Box\\, P$}\n\\end{prooftree}\n\\end{rl}\n\\vspace{45pt}\n\n}\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% CORRECTNESS FOR QUICKSORT\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\myframe{proving things about \\tla\\ specs: \\hfill correctness of Quicksort}{\n\\hspace{-18pt} {\\color{Maroon}Correctness of Quicksort is a safety property:}\n\\vspace{10pt}\n\\begin{example}[correctness of quicksort]\n\\vspace{-12pt}\n\\[\n\\begin{array}{r c l}\nCorrectness & \\triangleq & (U = \\{\\})\\  \\Rightarrow\\  A \\in PermsOf(A_0) \\ \\wedge \\ IsSorted(A)\\\\ \\\\\n\\kw{theorem} & & Spec \\Rightarrow \\Box \\, Correctness\n\\\\\n\\end{array}\n\\]\n\n\\end{example}\n\n\\vspace{15pt}\n\nUsing $\\Box$-Induction to prove this directly is not a good idea, \\\\\n\\hfil because $Correctness$ is vacuously true as long as $U \\neq \\{\\}$.\n\n\\vspace{15pt}\n\\hfill {\\color{Maroon} We need an inductive invariant... }\n\\vspace{18.5pt}\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill correctness of Quicksort}{\n\\hspace{-18pt} {\\color{Maroon}Correctness of Quicksort is a safety property:}\n\\vspace{10pt}\n\\begin{example}[correctness of quicksort]\n\\[\n\\begin{array}{r c l}\nCorrectness & \\triangleq & (U = \\{\\})\\  \\Rightarrow\\  A \\in PermsOf(A_0) \\ \\wedge \\ IsSorted(A)\n\\\\ \\\\\n% \\kw{theorem} & & Spec \\Rightarrow \\Box \\, Correctness\n% \\\\ \\\\\nUnsortedIsCovered & \\triangleq &  \\forall\\ i, j \\in 1\\,..\\,N .\\  i < j \\ \\wedge \\ A[j] < A[i] \\\\\n& & \\Rightarrow \\exists\\ u \\in U. 
\\  \\{i,j\\} \\subseteq u[1]\\,..\\, u[2]\\\\\n\\\\\nNoOverlap & \\triangleq & \\forall\\ u,v \\in U .\\ (u[1]\\, ..\\, u[2]) \\cap (v[1]\\, ..\\, v[2]) \\neq \\{\\} \\\\\n& &  \\Rightarrow u = v \n\\\\ \\\\\nInv & \\triangleq & UnsortedIsCovered \\wedge NoOverlap \n\\\\\n\\end{array}\n\\]\n\n\\end{example}\n}\n\n\\myframe{proving things about \\tla\\ specs: \\hfill correctness of Quicksort}{\n\\hspace{-18pt} {\\color{Maroon}The proof will look like this:}\n\\vspace{10pt}\n\\begin{example}[correctness of quicksort]\n\\[\n\\begin{array}{r c l r}\n\\kw{theorem} & & Spec \\Rightarrow \\Box \\, Correctness \n\\\\ \\\\\n& \\langle 1 \\rangle 1 & Spec \\Rightarrow \\Box \\, TypeOK & \\mathsf{\\Box-Ind}\n\\\\\n& \\langle 1 \\rangle 2 & Spec \\Rightarrow \\Box \\, Inv & \\mathsf{\\Box-Ind}\n\\\\ \n& \\langle 1 \\rangle 3 & TypeOK \\wedge Inv \\Rightarrow Correctness\n\\\\\n& \\langle 1 \\rangle q & \\kw{qed} \\\\\n& & \\kw{by}\\ \\langle1\\rangle1, \\langle1\\rangle2, \\langle1\\rangle3, \\kw{PTL} & \\mathsf{Nec}\n\\\\\n\\end{array}\n\\]\n\n\\end{example}\n\n\\vspace{36.5pt}\n}\n\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill correctness of Quicksort}{\n\\hspace{-18pt} {\\color{Maroon}The problem with weakening manifests itself here:}\n\\vspace{10pt}\n\\begin{example}[correctness of quicksort]\n\\[\n\\begin{array}{r c l r}\n\\kw{theorem} & & Spec \\Rightarrow \\Box \\, Correctness \n\\\\ \\\\\n& \\langle 1 \\rangle 1 & Spec \\Rightarrow \\Box \\, TypeOK & \\mathsf{\\Box-Ind}\n\\\\\n& \\langle 1 \\rangle 2 & Spec \\Rightarrow \\Box \\, Inv & \\mathsf{\\Box-Ind}\n\\\\ \n& \\langle 1 \\rangle 3 & TypeOK \\wedge Inv {\\color{Maroon} \\wedge\\  Init} \\Rightarrow Correctness\n\\\\\n& \\langle 1 \\rangle q & \\kw{qed} \\\\\n& & \\kw{by}\\ \\langle1\\rangle1, \\langle1\\rangle2, \\langle1\\rangle3, \\kw{PTL} & \\color{Maroon} \\Large{ \\mathbf{\\times}}\n\\\\\n\\end{array}\n\\]\n\n\\end{example}\n\n\\vspace{36.5pt}\n}\n\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill correctness of Quicksort}{\n\n\\begin{center}\n\\href{run:examples/videos/QuicksortCor.mp4}{\\textbf{Quicksort correctness demo}}\n\\end{center}\n% 1. dummy proof at bottom \n% 2. add init show failure\n% 3. show proof with lemmata, show both lemmata\n\n}\n\n\n\\myframe{proving things about \\tla\\ specs: \\hfill so far so good ...}{\n\\footnotesize\n\\hspace{-18pt} {\\color{Maroon} There are several nice aspects of writing proofs in \\tla:}\n\n\\begin{itemize}\n        \\item little overhead over the pen-and-paper version ;\n        \\item proofs are quite robust with respect to changes in definitions, etc. \n\\href{run:examples/videos/NoOverlaps.mp4}{{\\color{Maroon}(\\textbf{demo})}} ; \n        \\item interface facilitates asynchronous working style ; % {\\color{Maroon} ...}\n        \\item module system ;\n        \\item show me another ITP that supports modal reasoning.\n\\end{itemize}\n\n\\vspace{20pt}\n\n\\hspace{-18pt} {\\color{Maroon} There are also several not-so-nice aspects of writing proofs in \\tla:}\n\n\\begin{itemize}\n        \\item controlling the exact obligations is subtle ; \n% this is partly due to design decisions taken to minimize the explicit modal reasoning... 
\n        \\item modal reasoning is subtle, can look strange, and is limited {\\color{Maroon} (at present)} ; \n        \\item no certificates {\\color{Maroon} (at present)}.\n\\end{itemize}\n\\vspace{10pt}\n\n}\n\n", "meta": {"hexsha": "8bfc90e827101176bdb24237c08cab327cdd41da", "size": 20044, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/presentations/2014-QM/tlapm.tex", "max_stars_repo_name": "damiendoligez/tlapm", "max_stars_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2016-08-16T14:58:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-19T18:38:07.000Z", "max_issues_repo_path": "doc/presentations/2014-QM/tlapm.tex", "max_issues_repo_name": "damiendoligez/tlapm", "max_issues_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 49, "max_issues_repo_issues_event_min_datetime": "2020-03-04T18:13:13.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-07T17:43:24.000Z", "max_forks_repo_path": "doc/presentations/2014-QM/tlapm.tex", "max_forks_repo_name": "damiendoligez/tlapm", "max_forks_repo_head_hexsha": "13a1993263642092a521ac046c11e3cb5fbcbc8b", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2020-02-26T19:58:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-12T22:18:25.000Z", "avg_line_length": 30.4157814871, "max_line_length": 117, "alphanum_fraction": 0.6079125923, "num_tokens": 7481, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.6723317057447908, "lm_q1q2_score": 0.5527838147100508}}
{"text": "For the most part, the abstract semantics applies existing semantic operators of boxed Intervals \\cite{COU98}.\n\nFirst of all, $\\mathbb{I}$ defines the semantics of arithmetic expressions on a single hypercube by applying the well-known arithmetic operators on intervals.\n\nThis semantics is used to define the abstract semantics $\\mathbb{B}$ of Boolean comparisons. Given a hypercube and a Boolean comparison $E_1 <bop> E_2$ where $<bop> \\in \\{\\geq, >, \\leq, <, \\neq \\}$, this semantics returns an \\emph{abstract value of the boolean domain} (namely, $\\mathit{true}$, $\\mathit{false}$, or $\\top$) comparing the intervals obtained from $E_1$ and $E_2$ through $\\mathbb{I}$. Therefore, given a Boolean condition and a set of hypercubes, we partition this set into the hypercubes for which (i) the condition surely holds, (ii) the conditions surely does not hold, and (iii) the condition may or may not hold. In this way, we can discard all the hypercubes for which a given Boolean condition surely holds or does not hold.\n\nNote that in this way we lose some precision. For instance, imagine that in a given hypercube we know that $\\statement{x} \\in [0..5]$ (because the fixed width of intervals associated to \\statement{x} is 5), and we check if $\\statement{x} \\leq 3$. The answer of $\\mathbb{B}$ will be $\\top$ and, if the condition was inside an \\statement{if} statement, this hypercube will be used to compute the semantics of both the branches. Indeed, we would know that $\\statement{x} \\in [0..3]$ in the \\statement{then} branch, and $\\statement{x} \\in [4..5]$ in the else branch. Nevertheless, we cannot and do not want to track this information in our hypercube, since the width of the interval associated to \\statement{x} is 5, and fixed widths are a key feature in order to obtain an efficient analysis. We will present (Section \\ref{sec:widths}) how we can recursively modify the widths of the analysis to improve precision in these cases.\n\nThe semantics of the logical operators $not$, $and$, $or$ is defined in the standard way.\n\n$\\mathbb{I}$ is used to define the semantics $\\mathbb{S}$ of variable assignment as well. The standard semantics of \\statement{x=exp} is to (i) obtain the interval representing the right part ($\\mathbb{I}\\llbracket\\statement{x=exp}, \\sigma\\rrbracket=[m..M]$), and (ii) assign it in the current state. This approach does not necessarily produce a single hypercube, since we could assign an interval which width is greater than the fixed width of the assigned variable, or that covers several other intervals belonging to different partitions. In this case, we build up several hypercubes that cover the interval $[m..M]$. This can be formalized by $assign(h, V_i, [a..b])=\\{h[i\\mapsto m] : [m\\times w_i .. (m+1)\\times w_i]\\cap [a..b] \\neq \\emptyset\\}$, where $h$ is a hypercube, $V_i$ is the assigned variable, and $[a..b]$ is the interval we are assigning (which depends on the hypercube $h$, since we use its variables values to compute the result of the expression). 
We repeat this process for each hypercube $h$ in the abstract state by using it as input for the computation of $assign$.\nIn this way, we are able to over-approximate the assignment keeping the fixed widths of the intervals, which is a key feature of our domain in order to obtain an efficient analysis.\n\n%\\todogiulia{Tino: come dicevo questa sezione si dovrebbe ridurre all'osso: riferimenti a Intervalli, Disjuctinve domains trough Powerset, Prodotto Cartesiano, Conditional Partitioning}\n%\n%In this Section, we define the abstract semantics of Hypercubes. In particular, the semantics we will define are the following ones:\n%\\begin{itemize}\n%\\item $\\mathbb{I}$, the abstract semantics of arithmetic expressions, which receives an expression and a single hypercube in input and returns an \\emph{interval of real values} resulting from the execution of that expression when the variable values belong to that hypercube.\n%\\item $\\mathbb{B}$, the abstract semantics of boolean conditions, which receives a single hypercube and two expressions in input and returns an \\emph{abstract value of the boolean domain} (namely, $true$, $false$, or $\\top$) obtained by comparing the two expressions (through $\\geq, >, \\leq, <$ or $\\neq$) when the variable values belong to that hypercube. Boolean conditions can be combined through logical operators (and, or, not) in the usual way (i.e., exploiting the abstract semantics of the boolean abstract domain). \n%\\item $\\mathbb{S}$, the abstract semantics of statements, which receives in input a set of hypercubes (the current abstract state) and returns a new set of hypercubes (the new abstract state after the execution of the statement).\n%\\end{itemize}\n%\n%\\subsection{The abstract semantics of arithmetic expressions, $\\mathbb{I}$}\n%\n%\\subsubsection{Constants}\n%\\label{sec:constants}\n%We define a constant as a variable which gets assigned only once with a constant value, or a numerical value which appears in some statements (without being assigned to a specific variable). To simplify the treatment of constants, we execute a preprocessing on the program with constant propagation, to remove constant variables and replace their uses with their numerical value. \n%\n%\n%The abstract semantics of an expression made up by a constant numeric value is, simply, an interval of zero width: the extremes of the interval are the same and they are equal to the value of the constant. \n%Then, the abstract semantics of a constant is:\n%\n%$$\\mathbb{I}\\llbracket c \\rrbracket \\; h = [c,c]$$\n%\n%Note that the value of the hypercube in input ($h$) is ignored because it is not needed to compute the result.\n%\n%\\subsubsection{Intervals}\n%The abstract semantic of an expression made up by an interval of real values is immediate: it is exactly that interval, without modifications. We ignore the hypercube passed in input.\n%\n%$$\\mathbb{I}\\llbracket (m,M) \\rrbracket \\; h = [m,M]$$\n%\n%\\subsubsection{Variables}\n%When the expression is made up by a variable, we must consider the abstract value of that variable in the hypercube passed in input. Let $h_i$ be the integer index associated to the $i$-th dimension of the hypercube, $V_i$ the $i$-th variable defined in the program and $w_i$ the width associated in the analysis to such variable. Then, the abstract semantics of a variable  is $\\mathbb{I}\\llbracket V_i\\rrbracket \\; h = [h_i*w_i,(h_i+1)*w_i]$.\n%\n%\n%\\subsubsection{Arithmetic operations}\n%We considered only the most used arithmetic operators. 
In particular, we considered sum ($+$), subtraction ($-$), product ($\\times$) and division ($\\div$). These operators should suffice for most physics simulations (for example, our case study requires only sum and product - the change of sign being a multiplication for $-1$). Anyway, our framework can be easily extended to support other operations (for example modulus), by simply defining their abstract semantics when working on intervals of values.\n%\n%The semantics of these operators over intervals has been already deeply studied, and we refer to \\cite{COU98} for their definitions.\n%\n%\n%\\subsection{The abstract semantics of Boolean conditions, $\\mathbb{B}$}\n%\\label{sec:boolcondition}\n%We now define the semantics of integer comparison using the operators $>$, $<$, $\\leq$, $\\geq$, and $\\neq$. \n%\n%The abstract semantics separately computes the semantics on the left and right arithmetical expressions of the statement, obtaining two intervals. Let us call them $i_1 = [a,b]$ and $i_2 = [c,d]$. The abstract comparison between them depends on the specific comparison operator present in the statement:\n%\\begin{itemize}\n%\\item $i_1 \\neq i_2$ returns true if $b < c \\vee a > d$, false if $b=c=a=d$, and $\\top$ otherwise.\n%\\item $i_1 < i_2$ returns true if $b < c$, false if $a > d$, top otherwise.\n%\\item $i_1 > i_2$ returns true if $a > d$, false if $b < c$, top otherwise.\n%\\item $i_1 \\leq i_2$ returns true if $b \\leq c$, false if $a \\geq d$, top otherwise.\n%\\item $i_1 \\geq i_2$ returns true if $a \\geq d$, false if $b \\leq c$, top otherwise.\n%\\end{itemize}\n%\n%$\\mathbb{B}$ will be used by the statement semantics $\\mathbb{S}$ when computing the semantics of \\statement{if} and \\statement{while} statements to discard the hypercubes that surely do not satisfy the condition. Note that in this way we lose some precision. For instance, imagine that in a given hypercube we know that $\\statement{x} \\in [0..5]$ (because the fixed width of intervals associated to \\statement{x} is 5), and we check if $\\statement{x} \\leq 3$ when computing the semantics of an \\statement{if} statements. The answer of $\\mathbb{B}$ will be $\\top$, and so this hypercube will be used to compute the semantics of both the branches. Indeed, we would know that $\\statement{x} \\in [0..3]$ in the \\statement{then} branch, and $\\statement{x} \\in [4..5]$ in the else branch. Nevertheless, we cannot track this information in our hypercube, since the width of the interval associated to \\statement{x} is 5. Anyway, we will present (Section \\ref{sec:widths}) how we can recursively modify the widths of the analysis to improve precision in these cases.\n%\n%\n%\\subsection{The abstract semantics of statements, $\\mathbb{S}$}\n%\n%\\subsubsection{Assignment}\n%Usually, with non-disjunctive domains, the abstract semantics of an assignment is straightforward: you have to compute the abstract semantics of the expression and update the abstract value of the variable with the result. In our domain, though, we track a different kind of information: we represent possible values of all variables together (through hypercubes) and we consider disjunctive information (a set of valid hypercubes instead of a single hypercube). Therefore, we must devise a specific abstract semantics to deal with the assignment statement. \n%\n%Let the assignment be $V_i = e$, where $e$ is an arithmetic expression. 
Our approach can then be sketched as follows: \n%\\begin{itemize}\n%\\item we consider, separately, each hypercube $h$ of the current state;\n%\\item we compute the abstract semantics of the arithmetic expression $e$ passing to it the hypercube $h$.\n%\\item we create a new hypercube (or \\emph{some} new hypercubes, depending on the width of the resulting interval), where the abstract value of $V_i$ is the abstraction of the interval resulting from $e$.\n%\\end{itemize}\n%\n%If the interval obtained from $e$ overlaps over several partitions established for variable $V_i$, we have to build up several hypercubes as result of the assignments semantics over a single hypercube. \n%\n%\n%On the other hand, if many hypercubes of the initial state map to the same hypercube in the resulting state, it could also happen that the cardinality of \\adomain\\ decreases (or remains unmodified) after the execution of an assignment.\n%\n%Formally: \n%\n%$$\\mathbb{S}\\llbracket \\statement{V_i = e}\\rrbracket \\; H = \\bigcup_{h \\in H} {assign(h, V_i, \\mathbb{I}\\llbracket e\\rrbracket \\; h)}$$\n%\n%where $assign(h, V_i, [a,b])=\\{h[i\\mapsto m] : [m\\times w_i .. (m+1)\\times w_i]\\cap [a..b] \\neq \\emptyset\\}$. The output of this function is the set of hypercubes covering all the intervals that overlaps with the interval assigned to the given variable.\n%\n%\n%\\subsubsection{If-then-else semantics}\n%\n%To precisely deal with branches of \\statement{if} statements, we partition the abstract state \\adomain\\ with respect to the evaluation of the branching condition. In particular, we compute the abstract semantics $\\mathbb{B}$ of the boolean condition on each hypercube of \\adomain\\ and we assign each hypercube to a specific partition, based on the result of the condition semantics. Therefore, we obtain three partitions: the hypercubes for which the condition evaluates to true ($p_t$), the ones for which the condition evaluates to false ($p_f$), and the ones for which we do not have a definitive answer ($p_\\top$).\n%\n%Once obtained these three partitions, we can compute selectively the abstract semantics of the two branches, and in particular the \\statement{then} branch with $p_t \\cup p_\\top$, and the \\statement{else} branch with $p_f \\cup p_\\top$. \n%\n%Formally, the semantics of the \\statement{if} statements follows:\n%\n%$$\\mathbb{S}\\llbracket \\statement{if}(B)\\ \\statement{then}\\ P_1\\ \\statement{else}\\ P_2\\rrbracket \\; H = ( \\mathbb{S}\\llbracket P_1\\rrbracket \\; (p_t \\cup p_\\top) ) \\cup ( \\mathbb{S}\\llbracket P_2\\rrbracket \\; (p_f \\cup p_\\top) )$$\n%\n%where $p_t = \\{ h \\in H : \\mathbb{B}\\llbracket \\ael{if}\\rrbracket \\; h = true \\}$, $p_f = \\{ h \\in H : \\mathbb{B}\\llbracket \\ael{if}\\rrbracket \\; h = false \\}$, and $p_\\top = \\{ h \\in H : \\mathbb{B}\\llbracket \\ael{if}\\rrbracket \\; h = \\top \\}$.\n%\n%\\subsubsection{Concatenation of statements}\n%The abstract semantics of the concatenation of two statements is straightforward: it executes the abstract semantics of the first statement, it takes the result and it passes it as input to the abstract semantics of the second statement. Formally:\n%\n%$$\\mathbb{S}\\llbracket P_1;P_2\\rrbracket \\; H = \\mathbb{S}\\llbracket P_2\\rrbracket \\; (\\mathbb{S}\\llbracket P_1\\rrbracket \\; H)$$\n%\n%\\todogiulia{Non sono per niente convinto che sia questa big-step semantics quello di cui abbiamo bisogno. 
Ad esempio, la semantics di \\statement{while(true) P_1; P_2} ritorna $\\emptyset$ stando a questa definizione. A occhio abbiamo bisogno di qualcos'altro, e.g., uno stato astratto per ogni punto di programma di un cfg? In questo caso, non dobbiamo lavorare a questo livello con \\statement{if} e \\statement{while} ma piu' a basso livello.}\n%\n%\\subsection{While loop}\n%To do. \n\n", "meta": {"hexsha": "d8947187064e47a70a5232a275e7aa224b758ec8", "size": 13541, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "12. Hypercubes Domain/HC/Sections/versione SAS2013/3b_abstract_semantics.tex", "max_stars_repo_name": "vs-team/Papers", "max_stars_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2016-04-06T08:46:02.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-19T07:16:23.000Z", "max_issues_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/3b_abstract_semantics.tex", "max_issues_repo_name": "vs-team/Papers", "max_issues_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "12. Hypercubes Domain/Sections/versione SAS2013/3b_abstract_semantics.tex", "max_forks_repo_name": "vs-team/Papers", "max_forks_repo_head_hexsha": "58fa4a3b4c8185ad30bf9a142002d87ceca756e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 118.7807017544, "max_line_length": 1090, "alphanum_fraction": 0.7535632523, "num_tokens": 3529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.822189121808099, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.5527838093119641}}
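Returning to the Boolean semantics $\mathbb{B}$ described at the start of this section, a hedged Python sketch of the interval comparison and of the partitioning of an abstract state into $p_t$, $p_f$, and $p_\top$ is given below (all names hypothetical). The comparison is deliberately conservative, answering $\top$ whenever the two intervals overlap.

\begin{verbatim}
TOP = "top"   # the "don't know" answer of the Boolean domain

def leq(i1, i2):
    """Abstract test of E1 <= E2 on intervals i1 = [a..b] and
    i2 = [c..d] computed by the expression semantics."""
    (a, b), (c, d) = i1, i2
    if b <= c:
        return True    # surely holds
    if a > d:
        return False   # surely does not hold
    return TOP         # may or may not hold

def partition(hypercubes, cond):
    """Split an abstract state by the abstract truth value of cond;
    the then-branch is analyzed on p_t | p_top and the else-branch
    on p_f | p_top."""
    p_t = {h for h in hypercubes if cond(h) is True}
    p_f = {h for h in hypercubes if cond(h) is False}
    p_top = {h for h in hypercubes if cond(h) == TOP}
    return p_t, p_f, p_top
\end{verbatim}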
{"text": "%File: algorithm.tex\n%Date: Sun Nov 17 23:43:49 2013 +0800\n%Author: Yuxin Wu <ppwwyyxxc@gmail.com>\n\n\\section{Algorithm and Implementation}\n\nWe presented a prototype system based on MFCC as acoustic features, and\nGMM as our recognition model.\n\n\\subsection{MFCC}\nMFCC (Mel-frequency Cepstral Coefficient) is a representation of the short-term power spectrum of a sound,\nbased on a linear cosine transform of a log\npower spectrum on a nonlinear mel-scale of frequency \\cite{mfcc} .\nMFCC is the mostly widely used features in Automatic Speech Recognition(ASR), and it can also be\napplied to Speaker Recognition task.\n\nThe process to extract MFCC feature is as followed:\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=\\textwidth]{res/MFCC.png}\n\\end{figure}\n\nFirst, the input speech should be divided into successive short-time frames of length $L$,\nneighboring frames shall have overlap $R$. In our implementation, We choose $L = 20ms  $ ans $ R = 10 ms$.\nThose frames are then windowed by Hamming Window, as shown in \\figref{frames}\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.7\\textwidth]{res/frames.png}\n  \\caption{Framing and Windowing \\label{fig:framming}}\n\\end{figure}\n\nThen, We perform Discrete Fourier Transform (DFT) on windowed signals to compute their spectrums.\nFor each of $N$ discrete frequency bands we get a complex number $X[k]$ representing\nmagnitude and phase of that frequency component in the original signal.\n\nConsidering the fact that human hearing is not equally sensitive to all frequency bands, and especially, it has lower resolution at higher frequencies.\nScaling methods like Mel-scale and Bark-scale are aimed at scaling the frequency domain to fit human auditory perception better.\nThey are approximately linear below 1 kHz and logarithmic above 1 kHz.\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.6\\textwidth]{res/mel-scale.png}\n\\end{figure}\n\nIn MFCC, Mel-scale is applied on the spectrums of the signals. The expression of Mel-scale warpping is as followed:\n\\[ M(f) = 2595 \\log_{10}(1 + \\dfrac{f}{700}) \\]\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.5\\textwidth]{res/bank.png}\n  \\caption{Filter Banks (6 filters) \\label{fig:bank}}\n\\end{figure}\nThen,  we appply the bank of filters according to Mel-scale on the spectrum,\ncalculate the logarithm of energy under each bank by $E_i[m] = \\log (\\sum_{k=0}^{N-1}{X_i[k]^2 H_m[k]}) $ and apply Discrete\nCosine Transform (DCT) on $E_i[m](m = 1, 2, \\cdots M) $ to get an array $c_i $:\n\\[ c_i[n] = \\sum_{m=0}^{M-1}{E_i[m]\\cos(\\dfrac{\\pi n}{M}(m - \\dfrac{1}{2}))} \\]\n\nUsually, the first 13 terms in $c_i $ is used as features for future training.\n\n\\subsection{GMM}\n\nWe use Gaussian Mixture Model (GMM) to model all features from one person.\nFor implementation, we use the GMM model training and predicting routine from the famous python\nmachine learning package scikit-learn \\cite{sklearn}.\nSince the last step of MFCC is DCT, different dimensions of the features are strongly independent, so we\nuse GMM with diagonal covariance matrix. 
The number of components in GMM is chosen as 32 in our implementation.\n\nAfter building models for each person, it can be used to calculate the probability that the input signal is generated by this model.\nThe model with maximum probability is picked out as the result, and the corresponding person is recognized.\n\n\\pysrcpart{res/test.py}{45}{50}\n", "meta": {"hexsha": "4f9f2c5fa3d4e66f985ebecbc140e1ac24602725", "size": 3401, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/Progress-Report/algorithm.tex", "max_stars_repo_name": "juliia5m/knu_voice", "max_stars_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-01-23T21:40:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-12T02:07:25.000Z", "max_issues_repo_path": "doc/Progress-Report/algorithm.tex", "max_issues_repo_name": "juliia5m/knu_voice", "max_issues_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/Progress-Report/algorithm.tex", "max_forks_repo_name": "juliia5m/knu_voice", "max_forks_repo_head_hexsha": "1f5d150ded23af4c152b8d20f1ab4ecec77b40e1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-06-14T09:31:13.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-23T07:52:16.000Z", "avg_line_length": 47.9014084507, "max_line_length": 151, "alphanum_fraction": 0.7621287857, "num_tokens": 913, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397348, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5527818321669798}}
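The scoring procedure described in the preceding section is compact enough to sketch in full. The snippet below is a minimal illustration of the idea (one diagonal-covariance, 32-component GMM per speaker; the speaker with the highest average log-likelihood wins). It is written against the current scikit-learn \\texttt{GaussianMixture} API rather than whatever interface the original \\texttt{test.py} used, and the variables \\texttt{train\\_feats} and \\texttt{test\\_feats} are hypothetical placeholders for MFCC feature matrices.\n\\begin{verbatim}\nfrom sklearn.mixture import GaussianMixture\n\ndef train_speaker_models(train_feats):\n    # train_feats: dict mapping speaker name -> (num_frames, num_ceps) array\n    models = {}\n    for name, feats in train_feats.items():\n        gmm = GaussianMixture(n_components=32, covariance_type='diag')\n        gmm.fit(feats)\n        models[name] = gmm\n    return models\n\ndef recognize(models, test_feats):\n    # score() returns the average log-likelihood of the test frames;\n    # the model that explains the frames best identifies the speaker.\n    return max(models, key=lambda name: models[name].score(test_feats))\n\\end{verbatim}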
{"text": "\n\\section{Dam break on wet areas}\n\nThe dam break problem on wet areas was solved analytically by \nStoker~\\cite{Stoker1948, Stoker1957}.\nThe analytical solution exhibits a rarefaction and involves a shock. Generally this problem is easier to solve numerically than the dry dam break (the dam break on a dry area).\n\nThe initial condition is\n\\begin{equation} \\label{eq:dbp_init_wet}\nu(x,0)=0, ~~v(x,y)=0, ~~\\textrm{and}~~\nh(x,0) = \\left\\{ \\begin{array}{ll}\nh_1 & \\textrm{if $x < 0$}\\\\\nh_0 & \\textrm{if $x > 0$}\\\\\n\\end{array} \\right.\n\\end{equation}\nwhere $h_1>h_0>0$. The topography is a horizontal flat bed.\n\nThe analytical solution~\\cite{Stoker1948, Stoker1957}\nto the wet dam break problem is\n\\begin{equation}\nh(x) = \\left\\{ \\begin{array}{ll}\nh_1 & \\textrm{if $x \\leq -t \\sqrt{gh_1}$}\\\\\nh_3=\\frac{4}{9g}(\\sqrt{gh_1}-\\frac{x}{2t})^2 & \\textrm{if $-t \\sqrt{gh_1} <x \\leq t(u_2-\\sqrt{gh_2}$})\\\\\nh_2=\\frac{h_0}{2}\\bigg(\\sqrt{1+\\frac{8\\dot{\\xi}^2}{gh_0}}-1\\bigg) & \\textrm{if $ t(u_2-\\sqrt{gh_2}) <x < t\\dot{\\xi}$}\\\\\nh_0 & \\textrm{if $x \\geq t\\dot{\\xi}$}\\\\\n\\end{array} \\right.\n\\end{equation}\nand\n\\begin{equation}\nu(x) = \\left\\{ \\begin{array}{ll}\n0 & \\textrm{if $x \\leq -t \\sqrt{gh_1}$}\\\\\nu_3=\\frac{2}{3}(\\sqrt{gh_1}+\\frac{x}{t}) & \\textrm{if $-t \\sqrt{gh_1} <x \\leq t(u_2-\\sqrt{gh_2}$})\\\\\nu_2=\\dot{\\xi}-\\frac{gh_0}{4\\dot{\\xi}}\\bigg(1+\\sqrt{1+\\frac{8\\dot{\\xi}^2}{gh_0}} \\bigg) & \\textrm{if $ t(u_2-\\sqrt{gh_2}) <x < t\\dot{\\xi}$}\\\\\n0 & \\textrm{if $x \\geq t\\dot{\\xi}$}\\\\\n\\end{array} \\right.\n\\end{equation}\nat any time $t>0$, where $\\dot{\\xi}$ is the shock speed constant given by \n\\begin{equation} \\label{eq:shock}\n\\dot{\\xi}=2\\sqrt{gh_1}+\\frac{gh_0}{4\\dot{\\xi}}\\bigg( 1+\\sqrt{1+\\frac{8\\dot{\\xi}^2}{gh_0}}\\bigg)-\\bigg( 2gh_0 \\sqrt{1+\\frac{8\\dot{\\xi}^2}{gh_0}}-2gh_0\\bigg)^\\frac{1}{2}.\n\\end{equation}\n\n\n\\subsection{Results}\n\nFor our test, we consider $h_1=10$ and $h_0=1$ in (\\ref{eq:dbp_init_wet}).\nThe following figures show the stage, $x$-momentum, and $x$-velocity at several instants of time.\nWe should see excellent agreement between the analytical and numerical solutions. 
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{stage_plot.png}\n\\end{center}\n\\caption{Stage results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xmom_plot.png}\n\\end{center}\n\\caption{Xmomentum results}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{xvel_plot.png}\n\\end{center}\n\\caption{Xvelocity results}\n\\end{figure}\n\n\n\\endinput\n", "meta": {"hexsha": "ded7533703ddafe97bab1e0d6765f3aef382d3a6", "size": 2495, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "validation_tests/analytical_exact/dam_break_wet/results.tex", "max_stars_repo_name": "samcom12/anuga_core", "max_stars_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_stars_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_stars_count": 136, "max_stars_repo_stars_event_min_datetime": "2015-05-07T05:47:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T03:07:40.000Z", "max_issues_repo_path": "validation_tests/analytical_exact/dam_break_wet/results.tex", "max_issues_repo_name": "samcom12/anuga_core", "max_issues_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_issues_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_issues_count": 184, "max_issues_repo_issues_event_min_datetime": "2015-05-03T09:27:54.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-20T04:22:48.000Z", "max_forks_repo_path": "validation_tests/analytical_exact/dam_break_wet/results.tex", "max_forks_repo_name": "samcom12/anuga_core", "max_forks_repo_head_hexsha": "f4378114dbf02d666fe6423de45798add5c42806", "max_forks_repo_licenses": ["Python-2.0", "OLDAP-2.7"], "max_forks_count": 70, "max_forks_repo_forks_event_min_datetime": "2015-03-18T07:35:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T07:07:29.000Z", "avg_line_length": 33.7162162162, "max_line_length": 176, "alphanum_fraction": 0.675751503, "num_tokens": 1032, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303285397348, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.5527818321669798}}
{"text": "\\documentclass[11pt]{article}\n\n%%% The preamble loads packages, theorem styles, and macros %%%%%%%%%%%%%%\n\\input{CourseNotesPreamble}\n\n\\title{Math 626: High Dimensional Probability}\n\\author{Taught by Mark Rudelson \\\\ Lecture notes by Sayantan Khan}\n%\\email{schommerpries.chris.math@gmail.com}\n%\\address{Department of Mathematics \\\\\n%Harvard University \\\\\n%1 Oxford St. \\\\\n%Cambridge, MA 02138}\n\\date{\\today}\n\n\\begin{document}\n\n\n\\maketitle\n\\tableofcontents\n\n\\subsection*{About the notes and the course}\n\nThese notes only cover the first half of the course, which focused on measure concentration. The second half of the course focused on suprema of random processes, and closely followed chapters 7 and 8 of Roman Vershynin's \\emph{High Dimensional Probability} \\cite{vershynin2018high}.\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nHigh dimensional probability is the study of random variables taking values in $\\RR^n$ for large but fixed values of $n$.\nWhile this area has always been studied by classical probability theorists, it has also attracted attention from computer scientists, especially since the design and analysis of fast probabilistic algorithms requires tools and theorems from this field.\nA classical result that arises from pure mathematics, but has several real life applications is Dvoretzky's theorem, whose statement does not involve any probability at all, and yet the proof uses concentration inequalities.\n\\begin{theorem}[Dvoretzky's theorem \\cite{dvoredsky1961some}]\n  Let $X$ be an $n$-dimensional normed vector space. Then for any $\\varepsilon > 0$, there exists a constant $c(\\varepsilon) > 0$, and a linear subspace $E$ of $X$ such that, $\\dim(E) \\geq c(\\varepsilon) \\log(n)$, and for all $v \\in E$, the following inequality relating the ambient norm and the Euclidean norm on $E$.\n  \\begin{align*}\n    (1-\\varepsilon) \\norm{v}_2 \\leq \\frac{\\norm{v}_X}{M(X)} \\leq (1+\\varepsilon)\\norm{x}_2\n  \\end{align*}\n  Where $M(X)$ is a scaling factor that depends only on $X$, and $\\norm{\\cdot}_X$ and $\\norm{\\cdot}_2$ are the ambient and Euclidean norm on $E$.\n\\end{theorem}\n\n\\begin{proof}[Idea of proof]\n  Pick a random subspace, and then show that with very high probability, the given inequality holds.\n  We will prove the result in full detail later in the course.\n\\end{proof}\n\n\\begin{remark}\n  \\label{rem1}\n  When the norm on $X$ is the $\\ell^1$ norm, the lower bound on the dimension of $E$ can be improved to be linear in $c(\\varepsilon) n$.\n\\end{remark}\n\n\\paragraph{A computer science application of Dvoretzky's theorem}\n\nConsider a subset $S$ of $\\ell_2^N$, given by inequalities involving the norms of elements in $\\ell_2^N$.\nSuppose that we are required to optimize a linear function $f$ on the set $S$.\nSince $S$ is given by inequalities involve the $\\ell^2$-norm, it will be and intersection of interiors of ellipsoids, and consequently, optimizing $f$ will be computationally expensive.\nBut we can get around the computational expense by embedding $\\ell_2^N$ into $\\ell_1^M$, where $M$ is $O(N)$, by Remark \\ref{rem1}.\nSince this embedding does not distort distances too much, we can replace $S$ with a nearby polytope, given by inequalities involving the $\\ell^1$ norm instead.\nOptimizing a linear function on a polytope is computationally much easier, thanks to linear programming.\n\n\\paragraph{Empirical covariance estimation}\n\nLet $X$ be an $\\RR^n$-valued random variable, and 
let $\\E(X) = 0$.\nThe covariance of $X$ is $\\E(X^{\\top}X)$, and will be denoted by $\\Sigma$.\nLet $\\left\\{ X_1, X_2, \\ldots, X_m \\right\\}$ be i.i.d. samples of $X$.\nWe define the sample covariance $\\Sigma_m$ in the following manner.\n\\begin{align*}\n  \\Sigma_m = \\frac{\\sum_{i=1}^m X_i^{\\top}X_i}{m}\n\\end{align*}\nAs $m$ tends to infinity, the sample covariance $\\Sigma_m$ will approach the true covariance, as the law of large numbers predicts.\nA harder and more interesting question is how many samples we need to take to be within some threshold of the true covariance with high probability.\n\nJust like in the scalar setting, one answers the question by proving appropriate concentration inequalities for matrix valued random variables.\nHere is the most general set up: Let $\\Sigma_m$ and $\\Sigma$ be matrices mapping some normed space $X$ to some other normed space $Y$.\nWe define the distance between $\\Sigma_m$ and $\\Sigma$ using the operator norm.\n\\begin{align*}\n  \\norm{\\Sigma_m - \\Sigma}_{\\mathrm{op}} &= \\max_{\\norm{v}_X \\leq 1} \\norm{(\\Sigma_m-\\Sigma)v}_Y \\\\\n                               &= \\max_{\\norm{v}_X \\leq 1} \\max_{\\norm{w}_{Y^{\\ast}} \\leq 1} w\\left( (\\Sigma_m - \\Sigma)v \\right) \\\\\n                               &= \\max_{\\substack{(v, w) \\in X \\times Y^{\\ast} \\\\ \\norm{v} \\leq 1 \\\\ \\norm{w} \\leq 1}} w((\\Sigma_m - \\Sigma)v)\n\\end{align*}\nWe can consider $w((\\Sigma_m -\\Sigma)v)$ to be a family of scalar random variables parameterized by points in $X \\times Y^{\\ast}$, i.e. a random process $V_u$ parameterized by $u \\in X \\times Y^{\\ast}$.\nThis leads to the following two questions.\n\\begin{enumerate}[(i)]\n\\item How to bound $\\E \\max V_u$.\n\\item How to bound $\\P(\\left| \\max V_u - \\E \\max V_u \\right| \\geq t)$.\n\\end{enumerate}\nIt turns out one can often answer (ii) without answering (i), which may seem surprising given that the most elementary concentration inequalities involve moment bounds (i.e. Markov's inequality).\n
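As a quick numerical illustration of the covariance estimation question (a sketch for intuition only; the dimension, the sample sizes, and the ground truth $\\Sigma = \\mathrm{Id}$ are arbitrary choices, not anything from the course), one can watch $\\norm{\\Sigma_m - \\Sigma}_{\\mathrm{op}}$ shrink as $m$ grows.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 50                      # ambient dimension\nSigma = np.eye(n)           # true covariance of X ~ N(0, Id)\n\nfor m in [100, 1000, 10000]:\n    X = rng.standard_normal((m, n))   # rows are samples\n    Sigma_m = X.T @ X / m             # sample covariance\n    err = np.linalg.norm(Sigma_m - Sigma, ord=2)  # operator norm\n    print(m, err)\n\\end{verbatim}\nFor Gaussian samples the error decays roughly like $\\sqrt{n/m}$, which is the kind of rate the concentration results below are designed to capture.\n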
The starting point in answering (ii) is understanding \\emph{concentration of measure}.\n\n\\section{Concentration of measure}\n\\label{sec:conc-meas}\n\nConcentration of measure was originally observed by L\u00e9vy, but first used by Milman in the early 1970s.\nRoughly speaking, concentration of measure is the following phenomenon: suppose $(X, d, \\P)$ is a metric space endowed with a probability measure and $f: X \\to \\RR$ is a ``nice'' function.\nThen the value of $f$ is essentially constant, i.e. there exists some constant $M(f)$ such that for any fixed $\\varepsilon > 0$, deviations of size $\\varepsilon$ are unlikely.\n\\begin{align*}\n  \\P(|f(x) - M(f)| > \\varepsilon) \\ll 1\n\\end{align*}\nUsually by nice, we will mean $1$-Lipschitz, although similar results hold for a more general class of functions like convex or quasi-convex functions.\nWe begin with the simplest version of measure concentration.\n\n\\subsection{Concentration for linear functions}\n\\label{sec:conc-line-funct}\n\nLet the metric space $X$ in this setting be $\\R^n$ for some fixed $n$, and let $\\{X_1, \\ldots, X_n \\}$ be i.i.d scalar random variables.\nLet $f: \\R^n \\to \\R$ be a linear function, given by taking the inner product with some vector $\\mathbf{a}$.\nWe define $Y$ to be $\\sum_{i=1}^n a_i X_i$.\nWe will prove concentration inequalities for $Y$ by imposing conditions on $X_i$: one such condition is requiring $X_i$ to be \\emph{subgaussian}.\n\n\\subsubsection{Subgaussian random variables}\n\\label{sec:subg-rand-vari}\n\n\\begin{definition}[Subgaussian decay]\n  A random variable $X$ is said to be $\\sigma$-subgaussian if there exists a constant $C > 0$, such that for all $t > 0$, the following inequality holds.\n  \\begin{align*}\n    \\P(|X| > t) \\leq C \\exp\\left( -\\frac{1}{2} \\left( \\frac{t}{\\sigma} \\right)^2 \\right)\n  \\end{align*}\n\\end{definition}\n\\begin{remark}\n  The constant $\\sigma$ is often referred to as the variance proxy of the subgaussian random variable.\n\\end{remark}\n\n\\begin{example}\n  The following random variables are examples of subgaussian random variables.\n  \\begin{enumerate}[(i)]\n  \\item Normal random variables\n  \\item Bounded random variables\n  \\end{enumerate}\n\\end{example}\n\n\\begin{lemma}\n  Let $X$ be a random variable. 
Then the following are equivalent:\n  \\begin{enumerate}[(i)]\n  \\item For all $t> 0$, $\\P(|X| > t) \\leq C \\exp\\left( -\\frac{1}{2} \\left( \\frac{t}{\\sigma} \\right)^2 \\right)$ (definition of subgaussian random variables).\n  \\item There exists $a > 0$ such that $\\E(\\exp(aX^2)) < \\infty$ ($\\psi_2$ condition).\n  \\item There exist $C^{\\prime}$ and $b$ such that for all $\\lambda \\in \\R$, $\\E(\\exp(\\lambda X)) \\leq C^{\\prime} \\exp(b \\lambda^2)$ (Laplace transform condition).\n  \\item There exists $K$ such that for all $p \\geq 1$, $\\E(|X|^p)^{\\frac{1}{p}} \\leq K \\sqrt{p}$ (moment condition).\n  \\end{enumerate}\n  Moreover, if $\\E(X) = 0$, then the constant $C^{\\prime}$ in the Laplace transform condition can be taken to be equal to $1$.\n\\end{lemma}\n\n\\begin{proof}\n  \\begin{description}\n  \\item[$(i) \\implies (ii)$] Using Fubini's theorem and a change of variables, we can express $\\E(\\exp(aX^2))$ as an integral involving tail bounds.\n    \\begin{align*}\n      \\E(\\exp(aX^2)) &= 1 + \\int_0^{\\infty} 2 a t e^{at^2} \\cdot \\P\\left( |X| > t \\right) \\dd t \\\\\n      &\\leq 1 + \\int_0^\\infty 2Cat\\exp\\left( at^2 - \\frac{1}{2}\\left( \\frac{t}{\\sigma}  \\right)^2 \\right) \\dd t\n    \\end{align*}\n    Clearly, picking a value of $a$ smaller than $\\frac{1}{2\\sigma^2}$ will make the integral converge.\n  \\item[$(ii) \\implies (iii)$] Since we want to estimate the expectation of $\\exp(\\lambda X)$, we multiply and divide by $\\exp(aX^2)$ and complete the square.\n    \\begin{align*}\n      \\exp(\\lambda X) = \\exp\\left(aX^2\\right) \\cdot \\exp\\left( \\frac{\\lambda^2}{4a} \\right) \\cdot \\exp\\left( - \\left( \\sqrt{a}X - \\frac{\\lambda}{2\\sqrt{a}} \\right)^2 \\right)\n    \\end{align*}\n    Note that the third term in the product is always less than $1$. We now take the expectation of the right hand side.\n    \\begin{align*}\n      \\E(\\exp(\\lambda X)) &= \\E\\left( \\exp\\left(aX^2\\right) \\cdot \\exp\\left( \\frac{\\lambda^2}{4a} \\right) \\cdot \\exp\\left( - \\left( \\sqrt{a}X - \\frac{\\lambda}{2\\sqrt{a}} \\right)^2 \\right) \\right) \\\\\n      &\\leq \\exp\\left( \\frac{\\lambda^2}{4a} \\right) \\E\\left( \\exp(aX^2) \\right)\n    \\end{align*}\n    Setting $b= \\frac{1}{4a}$ and $C^{\\prime} = \\E(\\exp(aX^2))$, we get the result.\n  \\item[$(iii) \\implies (iv)$] We begin by getting a crude estimate for $\\E(|X|^p)$ using the infinite series for $\\exp(\\lambda |X|)$, for $\\lambda > 0$.\n    \\begin{align*}\n      \\E(|X|^p) &\\leq \\frac{p!}{\\lambda^p} \\E(\\exp(\\lambda |X|)) \\\\\n              &\\leq \\frac{2C^{\\prime}p! \\exp(b\\lambda^2)}{\\lambda^p}\n    \\end{align*}\n    where the second step used $\\E(\\exp(\\lambda |X|)) \\leq \\E(\\exp(\\lambda X)) + \\E(\\exp(-\\lambda X)) \\leq 2C^{\\prime} \\exp(b\\lambda^2)$.\n    
Note that this inequality works for all values of $\\lambda > 0$, but to get the best inequality, we minimize the right hand side by varying $\\lambda$.\n    The minimum is attained when $\\lambda = \\sqrt{\\frac{p}{2b}}$: plugging that into the right hand side, and taking $p$\\textsuperscript{th} roots gives us the following.\n    \\begin{align*}\n      \\E(|X|^p)^{\\frac{1}{p}} \\leq C^{\\prime \\prime} \\frac{(p!)^{\\frac{1}{p}}}{\\sqrt{p}}\n    \\end{align*}\n    Here, we have absorbed all the constants into $C^{\\prime \\prime}$.\n    Using Stirling's approximation, the numerator is bounded above by $p$, giving us the inequality we want.\n  \\item[$(iv) \\implies (i)$] We rewrite the event $|X| > t$ in the following manner.\n    \\begin{align*}\n      \\P(|X| > t) &= \\P(\\exp(\\lambda X^2) > \\exp(\\lambda t^2)) \\\\\n                  &\\leq \\exp(-\\lambda t^2) \\E(\\exp(\\lambda X^2))\n    \\end{align*}\n    Here, $\\lambda$ is a positive real number that we will specify later, and the inequality comes from Markov's inequality.\n    Of course, we do not a priori know that $\\E(\\exp(\\lambda X^2))$ is finite, but we will pick a $\\lambda$ small enough such that it is.\n    Using the Taylor series of the exponential and Fubini's theorem, we can express $\\E(\\exp(\\lambda X^2))$ in the following manner.\n    \\begin{align*}\n      \\E(\\exp(\\lambda X^2)) = 1 + \\frac{\\lambda \\E(X^2)}{1!} + \\frac{\\lambda^2 \\E(X^4)}{2!} + \\cdots\n    \\end{align*}\n    Using the bound on the moments of $X$ and Stirling's approximation, we get the following inequality.\n    \\begin{align*}\n      \\E(\\exp(\\lambda X^2)) &\\leq 1 + \\sum_{p=1}^{\\infty} \\frac{(2\\lambda K^2p)^p}{p!} \\\\\n                            &\\leq 1 + \\sum_{p=1}^{\\infty} \\frac{(2\\lambda e K^2 p)^p}{\\sqrt{2\\pi p} p ^p}\n    \\end{align*}\n    If we pick $\\lambda$ to be small enough that $2e \\lambda K^2$ is much smaller than $1$, then the infinite sum converges, and the expectation is finite.\n    Setting $\\frac{1}{2\\sigma^2}$ to be equal to $\\lambda$ gives us the result.\n  \\end{description}\n  We now show that the constant in the Laplace transform condition can be set to be $1$ when $\\E(X) = 0$.\n  To do so, we recall the $\\psi_2$ and the Laplace transform condition, i.e. 
there exist constants $a$, $C^{\\prime}$ and $b$ such that the following two inequalities hold for all $\\lambda \\in \\R$ (we may assume $C^{\\prime} \\geq 1$, since otherwise there is nothing to prove).\n  \\begin{align}\n    \\label{eq:1}\n    \\E(\\exp(aX^2)) &< \\infty \\\\\n    \\E(\\exp(\\lambda X)) & \\leq C^{\\prime} \\exp(b\\lambda^2)\n  \\end{align}\n  Suppose first that $\\lambda^2 \\geq 2a$.\n  By the Laplace transform condition, we have the following inequality.\n  \\begin{align*}\n    \\E(\\exp(\\lambda X)) &\\leq C^{\\prime} \\exp(b\\lambda^2) \\\\\n                  &= \\exp\\left(b\\lambda^2 + \\log(C^{\\prime})\\right) \\\\\n                  &\\leq \\exp(b^{\\prime}\\lambda^2)\n  \\end{align*}\n  Where $b^{\\prime}$ is $b + \\frac{\\log(C^{\\prime})}{2a}$: since $\\lambda^2 \\geq 2a$, we have $\\log(C^{\\prime}) \\leq \\frac{\\log(C^{\\prime})}{2a} \\lambda^2$.\n\n  Now suppose that $\\lambda^2 < 2a$.\n  We begin by considering the special case where $X$ is a symmetric random variable.\n  By symmetry of $X$, we have the following.\n  \\begin{align*}\n    \\E(\\exp(\\lambda X)) &= \\E\\left(\\frac{\\exp(\\lambda X) + \\exp(-\\lambda X)}{2}\\right) \\\\\n            &\\leq \\E\\left(\\exp\\left( \\frac{\\lambda^2 X^2}{2} \\right)\\right)\n  \\end{align*}\n  where the inequality uses the pointwise bound $\\cosh(y) \\leq \\exp\\left( \\frac{y^2}{2} \\right)$.\n  Since $\\lambda^2 < 2a$, $\\frac{2a}{\\lambda^2}$ is greater than $1$, and we can use H\u00f6lder's inequality to bound the right hand term.\n  \\begin{align*}\n    \\E\\left(\\exp\\left( \\frac{\\lambda^2}{2} X^2 \\right)\\right) \\leq\n    \\left( \\E\\left(\\exp\\left(   aX^2   \\right) \\right) \\right)^{\\frac{\\lambda^2}{2a}}\n  \\end{align*}\n  Since $\\E(\\exp(aX^2))$ is a finite constant, the right hand side is $\\exp(b^{\\prime \\prime} \\lambda^2)$ for some constant $b^{\\prime \\prime}$.\n\n  Now suppose $X$ is not symmetric.\n  Let $X^{\\prime}$ be an independent copy of $X$.\n  Since $\\E(X^{\\prime})$ is $0$, we have the following equality.\n  \\begin{align*}\n    \\E(\\exp(\\lambda X)) = \\E(\\exp(\\lambda (X - \\E(X^{\\prime}))))\n  \\end{align*}\n  Since $\\exp$ is a convex function, we can pull the inner expectation out using Jensen's inequality.\n  \\begin{align*}\n    \\E(\\exp(\\lambda X)) \\leq \\E(\\exp(\\lambda (X - X^{\\prime})))\n  \\end{align*}\n  Since $X - X^{\\prime}$ is symmetric, the result follows from the previous part, and the proof is complete.\n\\end{proof}\nWe now explain why we care so much about the constant in the Laplace transform condition.\n\n\\begin{theorem}[Hoeffding-Chernoff-Azuma inequality]\n  \\label{thm:hoeffding}\n  Let $\\{X_1, \\ldots, X_n\\}$ be i.i.d. 
subgaussian random variables with mean $0$.\n  Then for any $(a_1, \\ldots , a_n) \\in \\R^n$ and any $t > 0$, the following probability bound holds.\n  \\begin{align*}\n    \\P\\left( \\left| \\sum_{i=1}^n a_i X_i \\right| > t \\right) \\leq C \\exp\\left( -c \\frac{t^2}{\\norm{\\mathbf{a}}_2^2} \\right)\n  \\end{align*}\n  Where $C$ and $c$ are some absolute constants.\n\\end{theorem}\n\\begin{proof}\n  Without loss of generality, we can assume $\\norm{\\mathbf{a}}_2 = 1$.\n  It will suffice to show that $Y = \\sum_{i=1}^n a_iX_i$ is subgaussian, with constants that do not depend on $n$.\n  We will show this by verifying the Laplace transform condition.\n  Let $\\lambda \\in \\R$.\n  We compute $\\E(\\exp(\\lambda Y))$, using the Laplace transform condition (with constant $1$) for each $X_i$.\n  \\begin{align*}\n    \\E(\\exp(\\lambda Y)) &= \\prod_{i=1}^n \\E(\\exp(\\lambda a_i X_i)) \\\\\n                        &\\leq \\prod_{i=1}^n \\exp(b \\lambda^2 a_i^2) \\\\\n                        &= \\exp\\left(b\\lambda^2 \\norm{\\mathbf{a}}_2^2\\right) = \\exp(b\\lambda^2)\n  \\end{align*}\n  Markov's inequality applied to $\\exp(\\lambda Y)$, followed by optimizing over $\\lambda$ as before, proves the result. Note that having the Laplace transform coefficient equal to $1$ helped, because if the coefficient $C$ was greater than $1$, then we would pick up a constant of $C^n$, which would be very large for large values of $n$.\n\\end{proof}\n\nTo see how this inequality is used in practice, consider the simplest possible example of a subgaussian random variable: a Rademacher random variable, which takes the values $1$ and $-1$ with probability $\\frac{1}{2}$ each.\n
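Concretely, for a Rademacher random variable $\\varepsilon$, the Laplace transform condition can be checked by hand, with the constant equal to $1$:\n\\begin{align*}\n  \\E(\\exp(\\lambda \\varepsilon)) = \\frac{e^{\\lambda} + e^{-\\lambda}}{2} = \\sum_{k=0}^{\\infty} \\frac{\\lambda^{2k}}{(2k)!} \\leq \\sum_{k=0}^{\\infty} \\frac{(\\lambda^2/2)^k}{k!} = \\exp\\left( \\frac{\\lambda^2}{2} \\right),\n\\end{align*}\nwhere the middle inequality uses $(2k)! \\geq 2^k k!$. So the condition holds with $b = \\frac{1}{2}$.\n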
\nLet's recall some elementary facts about Fourier analysis before stating the example.\nLet $L^2([0,1])$ be the space of complex $L^2$ functions on $[0,1]$ and let $\\{e_n\\}_{n \\in \\Z}$ be the standard Fourier basis, i.e. $e_n(t) = \\exp(2\\pi i nt)$.\nSince $\\{e_n\\}$ forms an orthonormal basis, any function $f \\in L^2([0,1])$ can be decomposed into its Fourier series.\n\\begin{align*}\n  f = \\sum_{n \\in \\Z} \\widehat{f}(n) e_n\n\\end{align*}\nThe Fourier coefficient $\\widehat{f}(n)$ is given by $\\int_0^1 f(t) \\overline{e_n(t)} \\dd t$.\nThe map sending $f$ to its Fourier coefficients is a linear isometry from $L^2([0,1])$ to $\\ell^2(\\Z)$.\nFor most sequences $\\{\\widehat{f}(n)\\}$ in $\\ell^2(\\Z)$, the associated function $f$ will not be continuous, but we will show that under reasonably mild conditions on $\\{\\widehat{f}(n)\\}$, a random choice of signs for the coefficients yields a continuous function almost surely.\n\nLet $\\varepsilon_n \\in \\{-1, 1\\}$ for $n \\in \\Z$, and let $\\varepsilon = \\{\\varepsilon_n\\}_{n \\in \\Z}$.\nFor any $f \\in L^2([0,1])$ and any such $\\varepsilon$, define $f_{\\varepsilon}$ to be the following function.\n\\begin{align*}\n  f_{\\varepsilon} = \\sum_{n \\in \\Z} \\varepsilon_n \\widehat{f}(n) e_n\n\\end{align*}\nWe then have the following theorem.\n\\begin{theorem}\n  \\label{thm:continuous-l2-function}\n  Let $f$ be a function in $L^2([0,1])$ whose Fourier coefficients satisfy the following inequality.\n  \\begin{align*}\n    % \\label{eq:convergence-condition-example}\n    \\sum_{n \\in \\Z} \\left( \\log(|n| + 1) \\right)^3 \\left| \\widehat{f}(n) \\right|^2 < \\infty\n  \\end{align*}\n  Let $\\{\\varepsilon_n\\}$ be a sequence of i.i.d random variables taking the values $1$ and $-1$ with probability $\\frac{1}{2}$ each.\n  Then $f_{\\varepsilon}$ is a continuous function with probability $1$.\n\\end{theorem}\n\nTo prove the theorem, we will need several lemmas.\n\n\\begin{lemma}\n  \\label{lem:main-technical-lemma}\n  Let $N \\in \\N$.\n  Then for any $\\{a_n\\}_{n=-N}^N$, the following probability bound holds.\n  \\begin{align*}\n    \\P\\left( \\norm{ \\sum_{n=-N}^N \\varepsilon_n a_n e_n }_{\\infty} >  C \\sqrt{\\log(N)} \\left( \\sum_{n=-N}^N |a_n|^2 \\right)^{\\frac{1}{2}} \\right) \\leq \\frac{1}{N^2}\n  \\end{align*}\n  Here the constant $C$ is absolute, i.e. 
independent of $N$.\n\\end{lemma}\n\\begin{proof}\n  The first step is to consider the maximum, not over all of $[0,1]$, but over the points $\\left\\{ \\frac{j}{N^2} \\right\\}_{j = 0}^{N^2}$.\n  It will suffice to bound the probability of the corresponding deviation at any given point by $\\frac{1}{N^4}$, and then use the union bound to get the desired inequality.\n  Since the $\\varepsilon_n$ are subgaussian random variables with mean $0$, the Hoeffding-Chernoff-Azuma inequality (applied to the real and imaginary parts of the sum) gives the following bound at any fixed $t \\in [0,1]$.\n  \\begin{align*}\n    \\P\\left( \\left| \\sum_{n=-N}^N \\varepsilon_n a_n e_n(t) \\right| >  C \\sqrt{\\log(N)} \\left( \\sum_{n=-N}^N |a_n|^2 \\right)^{\\frac{1}{2}} \\right) \\leq C^{\\prime} \\exp(-cC^2 \\log(N))\n  \\end{align*}\n  We can pick a $C$ large enough so that the right hand side is bounded above by $\\frac{1}{N^4}$, and that concludes the first step after we use the union bound.\n  Note that in order to do this, we really needed something stronger than Chebyshev's inequality, since we needed an upper bound that can be made smaller than $\\frac{1}{N^4}$.\n\n  The next step is to extend the argument to all of $[0,1]$.\n  The key trick here will be to estimate the maximum value of trigonometric polynomials away from points of the form $\\frac{j}{N^2}$ using the maximum we derived in step $1$.\n  Bernstein's inequality for trigonometric polynomials helps in this regard: given a trigonometric polynomial $p$ of degree $n$, the following inequality relates the $\\norm{\\cdot}_{\\infty}$-norms of $p^{\\prime}$ and $p$.\n  \\begin{align*}\n    \\norm{p^{\\prime}}_{\\infty} \\leq n \\norm{p}_{\\infty}\n  \\end{align*}\n  Let $V$ be the maximum value the trigonometric polynomial $p = \\sum_{n=-N}^N \\varepsilon_n a_n e_n$ achieves on points of the form $\\frac{j}{N^2}$, and let $W$ be the maximum value over all of $[0,1]$. Say the maximum is achieved at some point $t$, and let $s$ be the closest point of the form $\\frac{j}{N^2}$.\n  Then, by the mean value theorem, we get the following relation between $V$ and $W$.\n  \\begin{align*}\n    W &\\leq V + \\norm{p^{\\prime}}_{\\infty} |t-s| \\\\\n      &\\leq V + \\frac{N \\norm{p}_{\\infty}}{N^2} \\\\\n      &= V + \\frac{W}{N}\n  \\end{align*}\n  This means for $N > 1$, $W \\leq 2V$, and this proves the lemma.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:continuous-l2-function}]\n  For $M \\in \\N$, define the function $f_{M, \\varepsilon}$ in the following manner.\n  \\begin{align*}\n    f_{M, \\varepsilon} = \\sum_{2^M \\leq |n| < 2^{M+1}} \\varepsilon_n \\widehat{f}(n) e_n\n  \\end{align*}\n  We now use Lemma \\ref{lem:main-technical-lemma} with $N = 2^{M+1}$, $a_n = 0$ for $|n| < 2^M$, and $a_n = \\widehat{f}(n)$ for $2^M \\leq |n| < 2^{M+1}$.\n  \\begin{align*}\n    \\P\\left( \\norm{f_{M,\\varepsilon}}_{\\infty} > C \\sqrt{M} \\norm{f_{M,\\varepsilon}}_2 \\right) < \\frac{1}{2^{2(M+1)}}\n  \\end{align*}\n  % \\confused{It seems something is going wrong here because the sum of probability upper bound converges too easily, which means one might be making a mistake, or strengthen the result significantly.}\n  By the Borel-Cantelli lemma, for almost every $\\varepsilon$, $\\norm{f_{M, \\varepsilon}}_{\\infty}$ eventually becomes smaller than $C\\sqrt{M} \\norm{f_{M,\\varepsilon}}_2$.\n\n  Pick $\\varepsilon$ to be one of the instances where this happens.\n  We have that $f_{\\varepsilon}$ is an infinite sum of continuous functions.\n  \\begin{align*}\n    f_{\\varepsilon} = \\varepsilon_0 \\widehat{f}(0) e_0 + \\sum_{M=0}^{\\infty} f_{M, \\varepsilon}\n  \\end{align*}\n  
This will converge to a continuous function if the sequence of partial sums is uniformly Cauchy.\n  To see that is indeed the case, pick $K_1 < K_2$ larger than the threshold past which $\\norm{f_{M, \\varepsilon}}_{\\infty} < C\\sqrt{M} \\norm{f_{M,\\varepsilon}}_2$, and bound the corresponding tail of the series.\n  \\begin{align*}\n    \\norm{ \\sum_{M=K_1}^{K_2} f_{M, \\varepsilon} }_{\\infty} &\\leq \\sum_{M=K_1}^{K_2} \\norm{f_{M, \\varepsilon}}_{\\infty} \\\\\n                                                                &\\leq \\sum_{M=K_1}^\\infty C \\sqrt{M} \\norm{f_{M, \\varepsilon}}_2 \\\\\n                                                                &\\leq C \\left( \\sum_{M=K_1}^{\\infty} \\frac{1}{M^2} \\right)^{\\frac{1}{2}} \\left( \\sum_{M=K_1}^{\\infty} M^3 \\norm{f_{M, \\varepsilon}}_2^2 \\right)^{\\frac{1}{2}} \\\\\n                                                                &\\leq \\frac{C^{\\prime}}{\\sqrt{K_1}} \\left( \\sum_{n \\in \\Z} (C^{\\prime \\prime} \\log(|n| + 1))^3 |\\widehat{f}(n)|^2 \\right)^{\\frac{1}{2}} \\\\\n    &\\leq \\frac{C^{\\prime\\prime\\prime}}{\\sqrt{K_1}}\n  \\end{align*}\n  The upper bound goes to $0$ as $K_1$ goes to $\\infty$, which shows the sequence is uniformly Cauchy, and thus the limit is a continuous function.\n\\end{proof}\n\nWe now prove a moment bound for sums of subgaussian random variables.\n\\begin{theorem}[Khintchine's inequality \\cite{khintchine1923dyadische}]\n  Let $\\{X_1, \\ldots, X_n\\}$ be i.i.d subgaussian random variables with $\\E(X_i) = 0$ and $\\E(X_i^2) = 1$.\n  Then for any $p \\in [1, \\infty)$, there exist constants $A_p$ and $B_p$ greater than $0$ such that for any vector $\\mathbf{a} \\in \\R^n$, the following moment bound holds.\n  \\begin{align*}\n    A_p \\norm{\\mathbf{a}}_2 \\leq \\left( \\E \\left| \\sum_{i=1}^n a_i X_i \\right|^p \\right)^{\\frac{1}{p}} \\leq  B_p \\norm{\\mathbf{a}}_2\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  Consider first the case where $p > 2$.\n  Then, using Jensen's inequality for the convex function $x \\mapsto x^{\\frac{p}{2}}$, we get the following.\n  \\begin{align*}\n\\left( \\E \\left| \\sum_{i=1}^n a_i X_i \\right|^2 \\right)^{\\frac{1}{2}} \\leq \\left( \\E \\left| \\sum_{i=1}^n a_i X_i \\right|^p \\right)^{\\frac{1}{p}}\n  \\end{align*}\n  Since $\\E \\left| \\sum_{i=1}^n a_i X_i \\right|^2 = \\norm{\\mathbf{a}}_2^2$, this shows that for $p > 2$, we can set $A_p = 1$.\n  To get $B_p$, we use the moment condition on subgaussian random variables.\n  Since the sum of subgaussian random variables is subgaussian, we have that $Y = \\sum_{i=1}^n a_i X_i$ is subgaussian, and thus satisfies the moment bound.\n  We have seen that the subgaussian constant of $Y$ depends only on $\\norm{\\mathbf{a}}_2$, giving us the upper bound.\n  \\begin{align*}\n    \\E(|Y|^p)^{\\frac{1}{p}} \\leq K \\sqrt{p} \\norm{\\mathbf{a}}_2\n  \\end{align*}\n  Setting $B_p = K\\sqrt{p}$ proves the result in this case.\n\n  For $p < 2$, it will suffice to prove the lower bound for $p = 1$, since $\\left( \\E |Y|^p \\right)^{\\frac{1}{p}}$ is a non-decreasing function of $p$ and is bounded above by $\\left( \\E |Y|^2 \\right)^{\\frac{1}{2}} = \\norm{\\mathbf{a}}_2$ by H\u00f6lder's inequality, which means we can set $B_p = 1$ (or the previous argument will also work, but $B_p = 1$ is better than $B_p = K\\sqrt{p}$).\n  Thus we just need to show the lower bound for $p = 1$.\n  In this case, the inequality follows from Cauchy--Schwarz.\n  \\begin{align*}\n    \\E(|Y|^2) &\\leq \\sqrt{\\E(|Y|) \\E(|Y|^3)}\n  \\end{align*}\n  Using Khintchine's inequality for $p=3$, we bound $\\E(|Y|^3) \\leq B_3^3 \\norm{\\mathbf{a}}_2^3$.\n  Squaring both sides and using $\\E(|Y|^2) = \\norm{\\mathbf{a}}_2^2$, we see that $A_1 = 
B_3^{-3}$ works, and the proof is complete.\n\\end{proof}\nThere is a far reaching generalization of Khintchine's inequality, due to Kahane.\n\\begin{theorem}[Kahane inequality \\cite{kahane1964sommes}]\n  Let $X$ be a normed vector space, and let $\\{\\varepsilon_1, \\ldots, \\varepsilon_n\\}$ be i.i.d. Rademacher random variables.\n  For any $p \\in [1, \\infty)$, there exists $A_p$ and $B_p$ greater than $0$ such that for any $\\{a_1, \\ldots, a_n\\}$ in $X$, the following holds.\n  \\begin{align*}\n    A_p \\left( \\E \\norm{\\sum_{j=1}^n \\varepsilon_j a_j}^2_X \\right)^{\\frac{1}{2}} \\leq \\left( \\E \\norm{\\sum_{j=1}^n \\varepsilon_j a_j}^p_X \\right)^{\\frac{1}{p}} \\leq B_p \\left( \\E \\norm{\\sum_{j=1}^n \\varepsilon_j a_j}^2_X \\right)^{\\frac{1}{2}}\n  \\end{align*}\n\\end{theorem}\nThe proof of this inequality requires more machinery than the previous result, so we'll defer the proof until we have developed the required tools.\n\nRecall that we got strong tail decay for bounded random variables using Hoeffding-Chernoff-Azuma inequality, since they're subgaussian, but it turns out, we can do much better than that using boundedness.\n\n\\begin{theorem}[Bennett's inequality]\n  \\label{thm:bennett}\n  Let $\\{X_1, \\ldots, X_n\\}$ be i.i.d random variables satisfying the following properties.\n  \\begin{enumerate}[(i)]\n  \\item $\\norm{X_j}_{\\infty} \\leq 1$.\n  \\item $\\E(X_j) = 0$ and $\\E(X_j^2) = \\delta$.\n  \\end{enumerate}\n  Then for any $\\mathbf{a} \\in \\R^n$, we have the following tail decay estimate.\n  \\begin{align*}\n    \\P\\left( \\left| \\sum_{j=1}^n a_j X_j \\right| > t \\right) \\leq\n    \\begin{cases}\n      2 \\exp \\left(- \\frac{t^2}{2e\\delta \\norm{\\mathbf{a}}_2^2} \\right)\\text{ for $t \\leq t_{\\ast}$} \\\\\n      2 \\exp \\left( - \\frac{t}{4 \\norm{\\mathbf{a}}_{\\infty}} \\cdot \\log\\left( \\frac{t \\norm{\\mathbf{a}}_{\\infty}}{\\delta \\norm{\\mathbf{a}}_2^2} \\right) \\right)\\text{ for $t > t_{\\ast}$}\n    \\end{cases}\n  \\end{align*}\n  Here $t_{\\ast} = e\\delta \\norm{\\mathbf{a}}_2^2$.\n\\end{theorem}\nBefore we start to prove the theorem, let's show why the described tail bound is a very natural upper bound to consider.\nConsider $\\mathbf{a} = (1, 1, \\ldots, 1)$.\nThen $\\sum a_j X_j$ is approximately $\\cN(0, \\delta \\norm{\\mathbf{a}}^2_2) = \\cN(0, \\delta n)$.\nThe tail should then behave something like $\\exp \\left( - \\frac{t^2}{2\\delta n} \\right)$, which is precisely the first case in the upper bound.\nIf $\\delta$ is small, i.e. 
$\\delta n$ is bounded above by some constant $\\lambda$, then the central limit theorem asymptotic does not apply, but rather the Poisson limit theorem asymptotic applies.\n\\begin{align*}\n  \\P\\left( \\sum_{j=1}^n a_j X_j > t \\right) &\\sim \\sum_{j=t}^{\\infty} e^{-\\lambda} \\frac{\\lambda^j}{j!} \\\\\n                                            &\\sim e^{-\\lambda} \\frac{\\lambda^t}{t!} \\\\\n                                            &\\sim \\exp\\left( -t \\log\\left( \\frac{t}{e\\lambda} \\right) \\right)\n\\end{align*}\nContrast this with the second case of the tail in Bennett's inequality.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:bennett}]\n  Without loss of generality, assume that $\\norm{\\mathbf{a}}_{\\infty} = 1$.\n  Set $Y = \\sum_{j=1}^n a_j X_j$, and let $\\lambda > 0$.\n  We then estimate the Laplace transform of $Y$.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda Y) \\right) &= \\prod_{j=1}^n \\E\\left(\\exp\\left(\\lambda a_j X_j \\right) \\right)\n  \\end{align*}\n  We use an elementary inequality to estimate each factor.\n  \\begin{align*}\n    e^x \\leq 1 + x + \\frac{x^2}{2} e^{|x|}\n  \\end{align*}\n  Thus, using $|a_j X_j| \\leq 1$, $\\E(X_j) = 0$, and $\\E(X_j^2) = \\delta$, we have the following.\n  \\begin{align*}\n    \\E\\left( \\exp\\left( \\lambda a_j X_j \\right) \\right) &\\leq \\E\\left(1 + \\lambda a_j X_j + \\frac{\\lambda^2 a_j^2 X_j^2}{2} \\exp\\left( \\left| \\lambda a_j X_j \\right| \\right)\\right) \\\\\n    &\\leq 1 + 0 + \\frac{\\lambda^2 a_j^2 \\delta}{2} e^{\\lambda}\n  \\end{align*}\n  Putting all of it back together, we have the following inequality.\n  \\begin{align*}\n    \\E\\left( \\exp\\left( \\lambda Y \\right) \\right) &\\leq \\prod_{j=1}^n \\left( 1 + \\frac{\\lambda^2 a_j^2 \\delta e^{\\lambda}}{2} \\right) \\\\\n                                                  &\\leq \\prod_{j=1}^n \\exp\\left( \\frac{\\lambda^2 a_j^2 \\delta e^{\\lambda}}{2} \\right) \\\\\n                                                  &= \\exp\\left( \\frac{\\delta\\lambda^2 e^{\\lambda}}{2} \\norm{\\mathbf{a}}_2^2 \\right)\n  \\end{align*}\n  Now that we have the Laplace transform estimate, we can use it to estimate tail probabilities.\n  \\begin{align*}\n    \\P(Y > t) &= \\P(\\exp(\\lambda Y) > e^{\\lambda t}) \\\\\n              &\\leq \\frac{\\exp\\left( \\frac{\\delta\\lambda^2 e^{\\lambda}}{2} \\norm{\\mathbf{a}}_2^2 \\right)}{e^{\\lambda t}} \\\\\n              &= \\exp\\left( -\\lambda t + \\frac{\\delta\\lambda^2 e^{\\lambda}}{2} \\norm{\\mathbf{a}}_2^2 \\right)\n  \\end{align*}\n\n  The next step is to optimize this inequality as we vary $\\lambda$.\n  We begin by optimizing in the region $\\lambda \\leq 1$, where $e^{\\lambda} \\leq e$.\n  In this case, the optimal $\\lambda$ is $\\frac{t}{e\\delta \\norm{\\mathbf{a}}_2^2}$.\n  Plugging in this value of $\\lambda$ gives the first case of the upper bound, and this is valid for $t \\leq e\\delta \\norm{\\mathbf{a}}_2^2 = t_{\\ast}$ (so that $\\lambda \\leq 1$).\n\n  In the region $\\lambda > 1$, we use the inequality $\\lambda \\leq e^{\\lambda}$.\n  \\begin{align*}\n    \\P(Y > t) &\\leq \\exp\\left( -\\lambda t + \\frac{\\delta\\lambda^2 e^{\\lambda}}{2} \\norm{\\mathbf{a}}_2^2 \\right) \\\\\n              &\\leq \\exp\\left( -\\lambda t + \\frac{\\delta\\lambda e^{2\\lambda}}{2} \\norm{\\mathbf{a}}_2^2 \\right)\n  \\end{align*}\n  Choose $\\lambda$ such that $\\delta \\lambda \\norm{\\mathbf{a}}_2^2 e^{2\\lambda} = t$. Plugging that value in, we get the second case.\n\n  Finally, doing this for $-t$ gives similar bounds, and we combine the two using the union bound to get the claimed result. 
This completes the proof.\n\\end{proof}\n\nThe theorems in this section illustrate how strong the condition of being subgaussian is, especially when taking sums of i.i.d copies of subgaussians.\nIn the next section, we will investigate another similar tail decay condition.\n\n\\subsubsection{Subexponential random variables}\n\\label{sec:subexp-rand-vari}\n\n\\begin{definition}[Subexponential random variable]\n  A random variable $X$ is said to be $k$-subexponential if for all $t > 0$, the following holds.\n  \\begin{align*}\n    \\P\\left( |X| > t \\right) \\leq 2 \\exp\\left(- \\frac{t}{k} \\right)\n  \\end{align*}\n\\end{definition}\n\nJust like in the case of subgaussian random variables, we have a number of equivalent definitions of subexponential random variables.\n\n\\begin{lemma}\n  Let $X$ be a random variable. Then the following conditions are equivalent.\n  \\begin{enumerate}[(i)]\n  \\item $X$ is $k$-subexponential.\n  \\item There exists a $b > 0$ such that $\\E\\left( \\exp(b|X|) \\right) \\leq 2$ ($\\psi_1$ condition).\n  \\item For all $p \\geq 1$, $\\E\\left( |X|^p \\right)^{\\frac{1}{p}} \\leq Cp$ (moment condition).\n  \\end{enumerate}\n  Moreover, if $\\E(X) = 0$, there exists $\\lambda_0 > 0 $ such that for all $|\\lambda| < \\lambda_0$, $\\E \\left( \\exp(\\lambda X) \\right) \\leq \\exp\\left( \\widetilde{C}\\lambda^2 \\right)$.\n\\end{lemma}\n\n\\begin{proof}\n  % \\begin{description}\n  % \\item[$(i) \\implies (ii)$] Using the integrated tail probability expectation formula, we can express $\\E\\left( \\exp(b|X|) \\right)$ as the following integral.\n  %   \\begin{align*}\n  %     \\E\\left( b|X| \\right) &= 1 + \\int_0^\\infty b\\exp(bt) \\cdot \\P\\left( |X| > t \\right) \\dd t \\\\\n  %                           &\\leq 1 + \\int_0^{\\infty} b \\exp\\left( bt - \\frac{t}{k} \\right) \\dd t\n  %   \\end{align*}\n  %   Clearly, picking a small enough $b$ makes the right hand side converge to a number smaller than $2$.\n  % \\item[$(i) \\implies (ii)$]\n  % \\end{description}\n  The proofs of the three equivalences are similar in spirit to the versions for subgaussians, so we will skip the proof, and just prove the moreover part.\n\n  Writing $\\E(\\exp(\\lambda X))$ as an infinite sum of expectations, we get the following chain of inequalities for a small enough value of $\\lambda$.\n  \\begin{align*}\n    \\E(\\exp\\left( \\lambda X \\right)) &= 1 + \\E(\\lambda X) + \\sum_{j=2}^{\\infty} \\frac{\\lambda^j \\E(X^j)}{j!} \\\\\n                                     &\\leq 1 + 0 + \\sum_{j=2}^{\\infty} \\frac{\\left( \\lambda C j \\right)^j}{j!} \\\\\n                                     &\\leq 1 + \\sum_{j=2}^{\\infty} (\\lambda C e)^j \\\\\n                                     &= 1 + \\frac{(\\lambda C e)^2}{1 - \\lambda C e}\n  \\end{align*}\n  We get the first inequality from the moment condition, the second from Stirling's approximation, and the third follows from geometric convergence for $\\lambda C e < 1$.\n  Note now that if $\\lambda C e < \\frac{1}{2}$, we get the following inequalities.\n  \\begin{align*}\n    1 + \\frac{(\\lambda C e)^2}{1 - \\lambda C e} &\\leq 1 + 2C^2e^2\\lambda^2 \\\\\n    &\\leq \\exp\\left( 2C^2e^2 \\lambda^2 \\right)\n  \\end{align*}\n  This proves the result.\n\\end{proof}\n\nWe can now prove a strong tail bound for sums of subexponential random variables like we did in Hoeffding-Chernoff-Azuma inequality.\n\n\\begin{theorem}[Bernstein's inequality]\n  \\label{thm:bernstein}\n  Let $\\{X_1, \\ldots, X_n\\}$ be i.i.d subexponential 
random variables with mean $0$, and let $\\mathbf{a}$ be a vector in $\\R^n$. Then the following holds.\n  \\begin{align*}\n    \\P\\left( \\left| \\sum_{j=1}^n a_j X_j \\right| > t \\right) \\leq\n    2 \\exp\\left( -c \\left( \\frac{t^2}{\\norm{\\mathbf{a}}_2^2} \\wedge  \\frac{t}{\\norm{\\mathbf{a}}_\\infty}\\right) \\right)\n  \\end{align*}\n  Here $a \\wedge b$ denotes the minimum of $a$ and $b$.\n\\end{theorem}\n\\begin{remark}\n  The tail of a sum of i.i.d random variables behaves very much like the situation described above.\n  When $t$ is small, the tail behaves like that of a subgaussian random variable, and when $t$ is large, it behaves like that of a subexponential random variable.\n\\end{remark}\n\n\\begin{proof}[Sketch of proof of Theorem \\ref{thm:bernstein}]\n  Like with all the other sum tail bounds, the proof of this theorem is via bounding the Laplace transform of the random variable.\n  For $|\\lambda|$ below the threshold $\\lambda_0$, the Laplace transform is bounded by $\\exp(\\widetilde{C}\\lambda^2)$; applying Markov's inequality and optimizing over $\\lambda$ subject to this constraint gives the subgaussian branch of the bound for small $t$, while taking $\\lambda$ at the constraint gives the subexponential branch for large $t$.\n\\end{proof}\n\n\\subsubsection{Applications of subgaussian and subexponential random variables}\n\\label{sec:appl-subg-subexp}\n\nBefore we list some of the applications, we make a remark on why the conditions for subexponential and subgaussian random variables were called the $\\psi_1$ and $\\psi_2$ conditions respectively.\nLet $\\alpha$ be a number greater than $0$.\nDefine a function $\\psi_{\\alpha}$ in the following manner.\n\\begin{align*}\n  \\psi_{\\alpha}(x) \\coloneqq \\exp(x^\\alpha) - 1\n\\end{align*}\nUsing this function, we can define norms on random variables.\n\\begin{align*}\n  \\norm{X}_{\\psi_{\\alpha}} \\coloneqq \\inf \\left\\{ K > 0 \\mid \\E\\left( \\psi_{\\alpha} \\left( \\frac{|X|}{K} \\right) \\right) \\leq 1 \\right\\}\n\\end{align*}\nWith this norm, the spaces of subexponential and subgaussian random variables form Banach spaces with respect to $\\norm{\\cdot}_{\\psi_1}$ and $\\norm{\\cdot}_{\\psi_2}$ respectively. These Banach spaces are often known as Orlicz spaces.\n\nOur first application of Hoeffding-Chernoff-Azuma and Bernstein's inequality will be the Johnson-Lindenstrauss lemma\\footnote{While this lemma was only a small, and rather easy, part of a hard technical paper of Johnson and Lindenstrauss, the lemma is (supposedly) one of the most cited lemmas in computer science.}.\n\n\\begin{theorem}[Johnson-Lindenstrauss lemma \\cite{johnson1984extensions}]\n  \\label{thm:johnson-lindenstrauss}\n  Let $F \\subset \\R^N$ be a finite set.\n  Then for any $\\varepsilon > 0$, there exists a linear mapping $\\varphi: \\R^N \\to \\R^n$ with $n \\leq \\frac{C}{\\varepsilon^2} \\log(\\#F)$ such that the mapping does not distort distances within $F$ too much, i.e. 
the following inequalities hold for all $x$ and $y$ in $F$.\n  \\begin{align*}\n    (1-\\varepsilon) \\norm{x-y}_2 \\leq \\norm{\\varphi(x) - \\varphi(y)}_2 \\leq (1+\\varepsilon) \\norm{x-y}_2\n  \\end{align*}\n\\end{theorem}\n\n  This is useful for computer scientists because it allows dimension reduction.\n  Often, one has finitely many vectors in a very high dimensional space, and one only cares about their metric structure.\n  This theorem allows one to reduce the ambient dimension significantly while not distorting the metric structure too much, and furthermore, the new ambient dimension is $O(\\log \\#F)$, whereas creating a metric graph would involve $O(\\#F^2)$ computations.\n\nWhile the $\\ell^2$ norm is natural for geometry, in computer science applications, the $\\ell^1$ norm is preferred, because of its relation to linear programming. So a natural follow up question is whether one can perform similar dimension reduction in $\\ell^1$ instead.\nThis was open for a long time, but was recently shown to be impossible (see \\cite{10.1145/1089023.1089026}) in a very strong way.\nIt was shown that in order to have at most $\\varepsilon$ distortion in the $\\ell^1$ distance, the ambient dimension would have to be at least $C \\# F$, i.e. linear in the size of the dataset, rather than logarithmic. The original proof of this fact was quite long and non-trivial, but Lee and Naor soon gave a simpler proof (see \\cite{lee2004embedding}) that relied on some highly non-trivial functional analysis. Johnson and Naor also characterized Banach spaces that allow strong dimension reduction, and it turns out those spaces are quite similar to Hilbert spaces (see \\cite{2008arXiv0807.1919J}).\n\n\\begin{proof}[Proof of Theorem \\ref{thm:johnson-lindenstrauss}]\n  Define a set $V \\subset \\R^N$ in the following manner.\n  \\begin{align*}\n    V = \\left\\{ \\frac{x-y}{\\norm{x-y}_2} \\mid \\{x, y\\} \\subset F\\text{ and }x \\neq y \\right\\}\n  \\end{align*}\n  We will work with this set $V$ instead.\n  $V$ is contained in the unit sphere $S^{N-1}$, and has cardinality at most $\\frac{\\#F^2 - \\#F}{2}$.\n\n  Let $G$ be an $n \\times N$ matrix with i.i.d subgaussian entries $g_{ij}$ satisfying the following two properties.\n  \\begin{align*}\n    \\E(g_{ij}) &= 0 \\\\\n    \\E(g_{ij}^2) &= 1\n  \\end{align*}\n  Fix a point $v \\in V$ and let $n = \\frac{\\theta}{\\varepsilon^2} \\log(\\#F)$, where $\\theta$ is a parameter we'll pick later.\n  We will prove that the following inequality holds with high probability.\n  \\begin{align*}\n    \\left| \\norm{Gv}_2 - \\sqrt{n} \\right| \\leq \\varepsilon \\sqrt{n}\n  \\end{align*}\n  If the probability is high enough, we can do this for all elements of $V$ simultaneously using the union bound.\n\n  Let $i \\in \\{1, \\ldots, n\\}$. Then we define $Y_i$ to be the $i$\\textsuperscript{th} entry of $Gv$.\n  \\begin{align*}\n    Y_i = \\sum_{j=1}^N g_{ij}v_j\n  \\end{align*}\n  We then compute the first and second moments of $Y_i$.\n  \\begin{align*}\n    \\E(Y_i) &= 0 \\\\\n    \\E(Y_i^2) &= \\sum_{j=1}^{N} v_j^2 \\E(g_{ij}^2) \\\\\n            &= 1\n  \\end{align*}\n  The random variable $Y_i$ is a linear combination of independent subgaussian random variables, and is thus also subgaussian. Also, for $i \\neq i^{\\prime}$, $Y_i$ and $Y_{i^{\\prime}}$ are i.i.d.\n\n  Let us now get estimates on $\\norm{Gv}_2^2 - n$.\n  \\begin{align*}\n    \\norm{Gv}_2^2 - n &= \\sum_{i=1}^n Y_i^2 - n \\\\\n                      &= \\sum_{i=1}^n (Y_i^2 - 1)\n  \\end{align*}\n  Let $Z_i = Y_i^2 - 1$. 
Then $\\E(Z_i) = 0$, and the $Z_i$ are subexponential. This is a consequence of the fact that squares of subgaussian random variables are subexponential, which can be checked using the tail bound, or the moment condition.\n\n  We use Bernstein's inequality to control the tail of the sum of the $Z_i$.\n  \\begin{align*}\n    \\P\\left( \\left| \\norm{Gv}_2^2 - n \\right| > t \\right)\n    &= \\P\\left( \\left| \\sum_{i=1}^n Z_i \\right| > t \\right) \\\\\n    &\\leq 2\\exp\\left( -c \\left( \\frac{t^2}{n} \\wedge t \\right) \\right)\n  \\end{align*}\n  Set $t = \\varepsilon n$, where $\\varepsilon < 1$. In this regime, $\\frac{t^2}{n} = \\varepsilon^2 n \\leq \\varepsilon n = t$, which means the tail bound is $2 \\exp\\left( -c\\frac{t^2}{n} \\right) = 2\\exp(-c\\varepsilon^2 n)$.\n\n  We now use the union bound to bound the probability that some $v \\in V$ violates this condition; recall that $\\#V \\leq \\#F^2$.\n  \\begin{align*}\n    \\P\\left( \\exists\\ v \\in V \\mid \\left| \\norm{Gv}_2^2 - n \\right| > \\varepsilon n \\right)\n    &\\leq 2 \\exp\\left( -c \\varepsilon^2 n \\right) \\cdot \\#F^2 \\\\\n    &= 2 \\exp\\left( -c \\varepsilon^2 \\frac{\\theta}{\\varepsilon^2} \\log(\\#F) + 2\\log(\\#F) \\right) \\\\\n    &= 2\\, \\#F^{2 - c\\theta}\n  \\end{align*}\n  A large enough $\\theta$ makes the upper bound much smaller than $1$.\n\n  Thus, with very high probability, $\\left| \\norm{Gv}_2^2 - n \\right| \\leq \\varepsilon n$ holds for all $v \\in V$ simultaneously.\n  We claim that the matrix $\\varphi \\coloneqq \\frac{G}{\\sqrt{n}}$ does the dimension reduction with distortion bounded by $\\varepsilon$.\n  This can be seen by expanding out the definition of $v$.\n  \\begin{align*}\n    \\varepsilon n\n    &\\geq \\left| \\norm{Gv}_2^2 - n \\right| \\\\\n    &= \\left| \\frac{\\norm{Gx - Gy}_2^2}{\\norm{x-y}_2^2} - n \\right|\n  \\end{align*}\n  Dividing both sides by $n$, we get the inequality we wanted.\n  \\begin{align*}\n    \\varepsilon \\geq \\left| \\frac{\\norm{\\varphi x - \\varphi y}_2^2}{\\norm{x-y}_2^2} - 1 \\right|\n  \\end{align*}\n  This proves the result.\n\\end{proof}\n
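The construction in the proof is simple enough to test numerically. The following sketch is illustrative only: it uses Gaussian entries, and the constant $8$ in the target dimension is picked by hand rather than derived from the $\\theta$ above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN, num_points, eps = 10000, 50, 0.2\nF = rng.standard_normal((num_points, N))   # rows are the points\n\nn = int(8 / eps**2 * np.log(num_points))   # target dimension\nG = rng.standard_normal((n, N))\nphi = F @ G.T / np.sqrt(n)                 # rows are the projected points\n\nworst = 0.0\nfor i in range(num_points):\n    for j in range(i + 1, num_points):\n        ratio = np.linalg.norm(phi[i] - phi[j]) / np.linalg.norm(F[i] - F[j])\n        worst = max(worst, abs(ratio - 1.0))\nprint('target dimension:', n, 'worst distortion:', worst)\n\\end{verbatim}\nOn typical runs the worst pairwise distortion comes out below $\\varepsilon$, even though the target dimension is far smaller than $N$.\n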
\\subsection{Concentration for quadratic forms}\n\\label{sec:conc-quadr-forms}\n\nLet $\\{X_i\\}$ be i.i.d copies of a random variable, and let $A$ be a symmetric matrix associated to a quadratic form.\nWe define a new random variable $Y$ by evaluating the quadratic form $A$ on $\\mathbf{X} = \\left( X_1, \\ldots, X_n \\right)$.\n\\begin{align*}\n  Y \\coloneqq \\langle \\mathbf{X}, A\\mathbf{X} \\rangle\n\\end{align*}\nConsider now the singular value decomposition of $A$, i.e. a decomposition as $UDV$, where $U$ and $V$ lie in $O(n)$ and $D$ is diagonal.\nThe diagonal entries of $D$ can be arranged to be non-increasing, i.e. $s_1(A) \\geq s_2(A) \\geq \\cdots \\geq s_n(A) \\geq 0$.\nThe diagonal entries are called the singular values of $A$.\nThe $j$\\textsuperscript{th} singular value of $A$ is the square root of the $j$\\textsuperscript{th} largest eigenvalue of $AA^{\\top}$.\n\nWe also consider the Frobenius norm of the matrix $A$.\n\\begin{align*}\n  \\norm{A}_F \\coloneqq \\left( \\sum_{i,j=1}^n a_{ij}^2 \\right)^{\\frac{1}{2}}\n\\end{align*}\nThis is the inner-product norm for the following inner product on matrices.\n\\begin{align*}\n  \\langle A, B \\rangle = \\mathrm{tr}(AB^{\\top})\n\\end{align*}\nOne can express the Frobenius norm of $A$ using the singular values.\n\\begin{align*}\n  \\norm{A}_F^2 = \\sum_{i=1}^{n} s_i(A)^2\n\\end{align*}\nGeometrically, the singular values are the lengths of the semi-axes of the ellipsoid that is the image under $A$ of the standard sphere in $\\R^n$.\nNote that one can also express the operator norm in terms of the singular values: it is the $\\sup$ norm on the singular values ($\\norm{A}_{\\mathrm{op}} = s_1(A)$), just like the Frobenius norm is the $\\ell^2$-norm on the singular values.\n\nWe can now state a concentration inequality for quadratic forms.\n\\begin{theorem}[Hanson-Wright inequality (\\cite{hanson1971bound}  and \\cite{wright1973bound})]\n  \\label{thm:hanson-wright}\n  Let $A$ be any $n \\times n$ matrix, and let $\\{X_1, \\ldots, X_n\\}$ be i.i.d subgaussian random variables with $\\E(X_i) = 0$.\n  Let $\\mathbf{X} = (X_1, \\ldots, X_n) \\in \\R^n$. Then for any $t > 0$:\n  \\begin{align*}\n    \\P\\left( \\left| \\langle \\mathbf{X}, A \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, A \\mathbf{X} \\rangle \\right) \\right| > t \\right)\n    \\leq 2 \\exp\\left( -c \\left( \\frac{t^2}{\\norm{A}_F^2} \\wedge \\frac{t}{\\norm{A}_{\\mathrm{op}}} \\right) \\right)\n  \\end{align*}\n\\end{theorem}\n\\begin{remark}\n  Note that we can only expect a tail like we had in Bernstein's inequality, since the diagonal terms of $\\langle \\mathbf{X}, A \\mathbf{X} \\rangle$ are squares of subgaussians, i.e. subexponential.\n\\end{remark}\n\\begin{remark}\n  The original papers of Hanson and Wright proved a slightly weaker version of the above theorem.\n  The version stated is from 2013, and appeared in \\cite{rudelson2013}.\n\\end{remark}\n
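Before turning to the proof, here is a quick numerical sanity check of the statement (a sketch only: Rademacher coordinates, one fixed Gaussian matrix $A$, and the centering term computed as $\\mathrm{tr}(A)$, which equals $\\E \\langle \\mathbf{X}, A \\mathbf{X} \\rangle$ when the coordinates have variance $1$).\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nn, trials = 200, 2000\nA = rng.standard_normal((n, n))\n\nX = rng.choice([-1.0, 1.0], size=(trials, n))   # Rademacher coordinates\nY = np.einsum('ti,ij,tj->t', X, A, X)           # <X, AX> for each trial\ndev = np.abs(Y - np.trace(A))                   # deviation from the mean\n\nfro = np.sqrt(np.sum(A**2))                     # Frobenius norm of A\nprint('median deviation / ||A||_F:', np.median(dev) / fro)\nprint('max deviation / ||A||_F:   ', dev.max() / fro)\n\\end{verbatim}\nThe observed deviations are on the scale of $\\norm{A}_F$, matching the subgaussian branch of the bound.\n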
\\begin{proof}[Proof of Theorem \\ref{thm:hanson-wright}]\n  We first decompose $A$ as the sum of a diagonal matrix $A_{\\mathrm{diag}}$, and a matrix $\\wt{A}$ with all diagonal entries equal to $0$.\n  \\begin{align*}\n    A = A_{\\mathrm{diag}} + \\wt{A}\n  \\end{align*}\n  Since we are trying to bound the probability that $\\langle \\mathbf{X}, A \\mathbf{X} \\rangle$ deviates by more than $t$ from its mean, it will suffice to bound the probability that $\\langle \\mathbf{X}, A_{\\mathrm{diag}}\\mathbf{X} \\rangle$ deviates by more than $\\frac{t}{2}$ and the probability that $\\langle \\mathbf{X}, \\wt{A} \\mathbf{X} \\rangle$ deviates by more than $\\frac{t}{2}$.\n  \\begin{align*}\n    \\P\\left( \\left| \\langle \\mathbf{X}, A \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, A \\mathbf{X} \\rangle \\right) \\right| > t \\right)\n    &\\leq \\P\\left( \\left| \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle \\right) \\right| > \\frac{t}{2} \\right) \\\\\n      &+ \\P\\left( \\left| \\langle \\mathbf{X}, \\wt{A} \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, \\wt{A} \\mathbf{X} \\rangle \\right) \\right| > \\frac{t}{2} \\right)\n  \\end{align*}\n  Note that since $\\wt{A}$ has zeroes on the diagonal, and the $X_i$s are independent with mean $0$, we have $\\E\\left( \\langle \\mathbf{X}, \\wt{A} \\mathbf{X} \\rangle \\right) = 0$.\n\n  Observe that the first term is a tail bound on a sum of subexponential random variables.\n  \\begin{align*}\n    \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle \\right)\n    &= \\sum_{i=1}^{n} a_{ii}\\left( X_i^2 - \\E(X_i^2) \\right)\n  \\end{align*}\n  Since the $X_i$ are subgaussian random variables, $X_i^2 - \\E(X_i^2)$ are subexponential random variables with mean $0$, which means we can apply Bernstein's inequality.\n  \\begin{align*}\n    \\P\\left( \\left| \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle - \\E\\left( \\langle \\mathbf{X}, A_{\\mathrm{diag}} \\mathbf{X} \\rangle \\right) \\right| > \\frac{t}{2} \\right)\n    \\leq 2 \\exp\\left( -c \\left( \\frac{t^2}{\\sum_{i=1}^n a_{ii}^2} \\wedge \\frac{t}{\\max_i |a_{ii}|} \\right) \\right)\n  \\end{align*}\n  Note that we can bound the denominators using the appropriate matrix norms.\n  \\begin{align*}\n    \\sum_{i=1}^{n} a_{ii}^2 &\\leq \\norm{A}_F^2 \\\\\n    \\max_i |a_{ii}| &\\leq \\norm{A}\n  \\end{align*}\n  Note that the upper bound for the diagonal term is of the form stated in the theorem, which means all we need to do is prove a similar or better bound for the second term, which involves $\\wt{A}$.\n  For convenience of notation, we will now denote $\\wt{A}$ by just $A$, where it is understood that $A$ is a matrix with all diagonal entries equal to $0$, and we will denote $\\langle \\mathbf{X}, A \\mathbf{X} \\rangle$ by $Y$.\n  It remains to prove the following bound.\n  \\begin{align*}\n    \\P\\left( \\left| Y \\right| > t \\right)\n    \\leq 2 \\exp\\left( -c \\left( \\frac{t^2}{\\norm{A}_F^2} \\wedge \\frac{t}{\\norm{A}_{\\mathrm{op}}} \\right) \\right)\n  \\end{align*}\n  Recall that when we proved Bernstein's inequality, we did so by bounding the Laplace transform of $Y$, i.e. getting upper bounds for $\\E\\left( \\exp\\left( \\lambda Y \\right) \\right)$.\n  
Since $Y$ in our case is a quadratic function of the independent random variables $X_i$, estimating the Laplace transform is a little tricky, and we need to use \\emph{decoupling}.\n\n  Introduce new random variables $\\{\\delta_1, \\ldots, \\delta_n\\}$ which are independent from each other, as well as from all the $X_i$, and are distributed like $\\mathrm{Bernoulli}\\left( \\frac{1}{2} \\right)$. Since $\\E_{\\delta}\\left( \\delta_i(1-\\delta_j) \\right) = \\frac{1}{4}$ for $i \\neq j$, and the diagonal entries of $A$ vanish, we then have the following.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda Y) \\right)\n    &= \\E\\left(  \\exp\\left( 4\\lambda \\sum_{i,j = 1}^{n} \\E_{\\delta}\\left( \\delta_i(1- \\delta_j) \\right) a_{ij} X_i X_{j} \\right) \\right)\n  \\end{align*}\n  Here, $\\E_{\\delta}$ represents taking expectation over the sample space of the $\\delta_{i}$, and $\\E$ represents taking the expectation over the sample space of the $X_i$.\n  (Note that for $i=j$, $\\delta_i(1-\\delta_j) = 0$, which is consistent because those terms are absent anyway.)\n  We now use Jensen's inequality to pull out the $\\E_{\\delta}$.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda Y) \\right)\n    &\\leq \\E \\E_{\\delta}\\left(  \\exp\\left( 4\\lambda \\sum_{i,j = 1}^{n} \\left( \\delta_i(1- \\delta_j) \\right) a_{ij} X_i X_{j} \\right) \\right)\n  \\end{align*}\n  Let $I$ be the random subset of $\\{1, \\ldots, n\\}$ where $i \\in I$ iff $\\delta_i = 1$.\n  We can condition the above expectation on the value of $I$ to simplify things.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda Y) \\right)\n    &\\leq \\E_{\\delta} \\E \\left( \\exp\\left( 4\\lambda \\sum_{i \\in I} \\sum_{j \\not \\in I} a_{ij} X_i X_j \\right) \\bigg|\\ I \\right)\n  \\end{align*}\n  We can simplify the inner expectation using the fact that $I$ is fixed.\n  \\begin{align*}\n    \\E \\left( \\exp\\left( 4\\lambda \\sum_{i \\in I} \\sum_{j \\not \\in I} a_{ij} X_i X_j \\right) \\bigg|\\ I \\right)\n    &= \\prod_{j \\not \\in I} \\E \\exp\\left( \\left[ 4 \\lambda \\sum_{i \\in I} a_{ij} X_i \\right] \\cdot X_j \\right)\n  \\end{align*}\n  We now use the fact that each $X_j$ is subgaussian with mean $0$ to bound the right hand side in the following manner.\n  \\begin{align}\n    \\label{eq:hw-1}\n    \\prod_{j \\not \\in I} \\E \\exp\\left( \\left[ 4 \\lambda \\sum_{i \\in I} a_{ij} X_i \\right] \\cdot X_j \\right)\n    &\\leq \\prod_{j \\not \\in I} \\E \\left( \\exp\\left[ 16 C \\lambda^2 \\left( \\sum_{i \\in I} a_{ij} X_i \\right)^2 \\right] \\right)\n  \\end{align}\n  We now use a trick to turn the expectation of the quadratic form in \\eqref{eq:hw-1} into the expectation of a bilinear form.\n  Note that for a standard normal random variable $g$, one has the following explicit formula for the Laplace transform.\n  \\begin{align*}\n    \\E\\left( \\exp\\left( \\theta g \\right) \\right) = \\exp\\left( \\frac{\\theta^2}{2} \\right)\n  \\end{align*}\n  Let $\\{g_1, \\ldots, g_n\\}$ be i.i.d standard normal random variables that are independent of the $X_{i}$ and the $\\delta_i$.\n  We get that the right hand side of \\eqref{eq:hw-1} is the following.\n  \\begin{align*}\n    \\prod_{j \\not \\in I} \\E \\left( \\exp\\left[ 16 C \\lambda^2 \\left( \\sum_{i \\in I} a_{ij} X_i \\right)^2 \\right] \\right)\n    = \\E \\left( \\exp\\left( C^{\\prime}\\lambda \\sum_{j \\not \\in I} \\sum_{i \\in I} a_{ij} X_i g_j \\right) \\right)\n  \\end{align*}\n  Repeating this whole process again, with $X_i$ rather than $X_j$, we end up with the following upper bound.\n  
\\begin{align*}\n    \\E \\left( \\exp\\left( 4\\lambda \\sum_{i \\in I} \\sum_{j \\not \\in I} a_{ij} X_i X_j \\right) \\bigg|\\ I \\right)\n    &\\leq \\E\\left( \\exp\\left( C^{\\prime \\prime} \\lambda^2 \\sum_{i \\in I} \\left( \\sum_{j \\not \\in I} a_{ij} g_j \\right)^2 \\right) \\bigg|\\ I \\right)\n  \\end{align*}\n  Note that the upper bound is independent of $X_i$, and only depends on the Bernoulli random variables $\\delta_i$ and the normal random variables $g_i$.\n\n  We will now use that fact that $\\mathbf{g} = (g_1, \\ldots, g_n)$ is a rotation invariant random vector in $\\R^n$ to estimate the upper bound.\n  Let $P_{I}$ be the projection to the subspace spanned by $\\{e_i\\}_{i \\in I}$, and let $B_I = P_{I}A(\\mathrm{Id} - P_I)$.\n  Then the innermost double sum can be expressed as a norm.\n  \\begin{align*}\n    \\sum_{i \\in I} \\left( \\sum_{j \\not \\in I} a_{ij} g_j \\right)^2 = \\norm{B_I \\mathbf{g}}_2^2\n  \\end{align*}\n  This means we need to estimate the following conditional expectation.\n  \\begin{align*}\n    \\E\\left( C^{\\prime \\prime} \\lambda^2 \\norm{B_I \\mathbf{g}}_2^2 \\bigg|\\ I \\right)\n  \\end{align*}\n  Let $U_ID_IV_I$ be the singular value decomposition of $B_I$.\n  Since $\\norm{\\cdot}_2$ is invariant under $O(n)$ action, and the distribution of $\\mathbf{g}$ is also $O(n)$ invariant, $B_I \\mathbf{g}$ has the same distribution as $D_I \\mathbf{g}$.\n  This means we really just need to understand the distribution of the singular values of $B_I$.\n  \\begin{align*}\n    \\E\\left( \\exp\\left(   C^{\\prime \\prime} \\lambda^2 \\norm{B_I \\mathbf{g}}_2^2 \\right) \\bigg|\\ I \\right)\n    &= \\E\\left( \\exp\\left( C^{\\prime \\prime} \\lambda^2 \\norm{D_I \\mathbf{g}}_2^2 \\right)  \\bigg|\\ I \\right) \\\\\n    &= \\E\\left( \\exp\\left(  C^{\\prime \\prime} \\lambda^2 \\sum_{j=1}^n s_j^2(B_I) g_j^2 \\right)  \\bigg|\\ I \\right) \\\\\n    &= \\prod_{j=1}^n \\E\\left( \\exp\\left( C^{\\prime \\prime} \\lambda^2 s_j^2(B_I) g_j^2 \\right) \\bigg|\\ I \\right)\n  \\end{align*}\n  One can explicitly compute $\\E(\\exp(\\theta g^2))$ for a normal random variable $g$ for $\\theta \\in \\left[ 0, \\frac{1}{2} \\right]$.\n  \\begin{align*}\n    \\E(\\exp\\left( \\theta g^2 \\right)) = \\frac{1}{\\sqrt{1 - 2\\theta}}\n  \\end{align*}\n  Hence,\n  \\begin{align*}\n    \\E\\left( C^{\\prime \\prime} \\lambda^2 \\norm{B_I \\mathbf{g}}_2^2 \\bigg|\\ I \\right)\n    &= \\prod_{j=1}^n \\left( 1 - 2 C^{\\prime \\prime} \\lambda^2 s_j^2(B_I) \\right)^{-\\frac{1}{2}}\n  \\end{align*}\n  For $x \\in \\left[ 0 , \\frac{1}{2} \\right]$, $\\frac{1}{\\sqrt{1-x}} \\leq e^x$. 
This lets us bound the right hand side in the following manner.\n  \\begin{align*}\n    \\prod_{j=1}^n \\left( 1 - 2 C^{\\prime \\prime} \\lambda^2 s_j^2(B_I) \\right)^{-\\frac{1}{2}}\n    &\\leq \\prod_{j=1}^n \\exp\\left( 2 C^{\\prime \\prime} \\lambda^2 s_j^2(B_I) \\right)\n  \\end{align*}\n  Note however that to use the simplification, we required $2 C^{\\prime \\prime} \\lambda^2 s_j^2(B_I) \\leq \\frac{1}{2}$.\n  If we pick a $\\lambda$ smaller than $\\frac{c^{\\prime \\prime \\prime}}{s_1(B_I)}$, then the inequalities are valid.\n  That ends up giving the following bound.\n  \\begin{align*}\n    \\E\\left( \\exp\\left( C \\lambda \\norm{B_I \\mathbf{g}}_2^2 \\right) \\right)\n    &\\leq \\exp\\left( \\wt{C} \\lambda^2 \\norm{B_I}_F^2 \\right)\n  \\end{align*}\n  This is valid when $\\lambda \\leq \\frac{c^{\\prime \\prime \\prime}}{s_1(B_I)} = \\frac{c^{\\prime \\prime \\prime}}{\\norm{B_I}}$.\n  Finally, we can get rid of the dependence on $I$ using the following inequalities involving matrix norms.\n  \\begin{align*}\n    \\norm{B_I}_F &= \\norm{P_IA(\\mathrm{Id} - P_I)}_F \\\\ &\\leq \\norm{A}_F \\\\\n    \\norm{B_I} &= \\norm{P_IA(\\mathrm{Id} - P_I)} \\\\ &\\leq \\norm{A}\n  \\end{align*}\n  We can now conclude our estimates. We get the following inequality for all $\\lambda$ less than $\\frac{c^{\\prime \\prime \\prime}}{\\norm{A}}$.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda Y) \\right)\n    &\\leq \\E_{\\delta} \\E\\left( \\exp\\left( C \\lambda \\norm{B_I \\mathbf{g}}_2^2 \\right) \\right) \\\\\n    &\\leq \\exp\\left( \\wt{C} \\lambda^2 \\norm{B_I}_F^2 \\right) \\\\\n    &\\leq \\exp\\left( \\wt{C} \\lambda^2 \\norm{A}_F^2 \\right)\n  \\end{align*}\n  We are able to integrate easily over the $\\delta$ sample space because the right hand side does not depend on $I$ at all.\n  The rest of the proof follows exactly like the end of Bernstein's inequality, where we optimized over $\\lambda$ to get the best bound via Markov's inequality.\n\\end{proof}\n\nWe now show a rather unexpected application of the Hanson-Wright inequality.\n\\begin{theorem}\n Let $A$ be an $n \\times n$ matrix, and let $\\mathbf{X} \\in \\R^n$ be a random vector with i.i.d subgaussian coordinates with mean $0$ and variance $1$.\n Then for any $\\tau > 0$, the following holds.\n \\begin{align*}\n   \\P\\left( \\left| \\norm{A \\mathbf{X}}_2 - \\norm{A}_F \\right| > \\tau \\right)\n   \\leq 2 \\exp\\left( -c \\frac{\\tau^2}{\\norm{A}^2} \\right)\n \\end{align*}\n\\end{theorem}\n\\begin{remark}\n  This result is surprising for two reasons: first, we will prove this using the Hanson-Wright inequality, but the tail bound in this inequality is better than the tail bound in Hanson-Wright. Second, all of our earlier concentration inequalities were concentration inequalities about the mean, but in this case $\\E\\left( \\norm{A \\mathbf{X}}_2 \\right) \\neq \\norm{A}_F$.\n\\end{remark}\n\n\\begin{proof}\n  Let $Y = \\norm{A \\mathbf{X}}_2^2 = \\langle \\mathbf{X}, A^{\\top}A \\mathbf{X} \\rangle$.\n  This expression as the quadratic form $A^\\top A$ will let us apply the Hanson-Wright inequality. 
Using the Hanson-Wright inequality, we obtain the following.\n  \\begin{align*}\n    \\P\\left( \\left| \\langle \\mathbf{X}, A^{\\top}A \\mathbf{X} \\rangle\n    - \\E\\left( \\langle \\mathbf{X}, A^{\\top}A \\mathbf{X} \\rangle \\right)\n    \\right| > t \\right)\n    \\leq 2 \\exp\\left( -c \\left[ \\frac{t^2}{\\norm{A^{\\top}A}_F^2} \\wedge \\frac{t}{\\norm{A^{\\top}A}} \\right] \\right)\n  \\end{align*}\n  We can explicitly evaluate $\\E \\left( \\langle \\mathbf{X}, A^{\\top}A \\mathbf{X} \\rangle \\right)$: since the entries of $\\mathbf{X}$ are i.i.d with mean $0$ and variance $1$, the expectation simplifies to $\\mathrm{tr}(A^{\\top}A) = \\norm{A}_F^2$.\n  We also have these inequalities involving products of matrices.\n  \\begin{align*}\n    \\norm{AB}_F &\\leq \\norm{A} \\cdot \\norm{B}_F \\\\\n    \\norm{AB} &\\leq \\norm{A} \\cdot \\norm{B}\n  \\end{align*}\n  Using these two inequalities, we can simplify the inequality we got from the application of Hanson-Wright inequality.\n  \\begin{align*}\n    \\P\\left( \\left| \\norm{A \\mathbf{X}}_2^2 - \\norm{A}_F^2 \\right| > t \\right)\n    \\leq 2 \\exp\\left( -c \\left[ \\frac{t^2}{\\norm{A}^2 \\cdot \\norm{A}_F^2} \\wedge \\frac{t}{\\norm{A}^2} \\right] \\right)\n  \\end{align*}\n  Let $t = \\varepsilon \\cdot \\norm{A}_F^2$. Then the above inequality can be rewritten as follows.\n  \\begin{align*}\n    \\P\\left( \\left| \\frac{ \\norm{A \\mathbf{X}}_2^2}{ \\norm{A}_F^2}  - 1 \\right| > \\varepsilon \\right)\n    \\leq 2 \\exp\\left( -c \\frac{\\norm{A}_F^2}{\\norm{A}^2} \\left[ \\varepsilon^2 \\wedge \\varepsilon \\right] \\right)\n  \\end{align*}\n  Rewriting the inequality in this manner makes it clearer which of the terms in the upper bound is smaller.\n  The analysis splits into two cases.\n  \\begin{description}\n  \\item[\\textbf{Case 1} ($\\varepsilon \\leq 1$):] In this case, $\\varepsilon^2 \\leq \\varepsilon$, so the tail bound simplifies in the following manner.\n    \\begin{align*}\n    \\P\\left( \\left| \\frac{ \\norm{A \\mathbf{X}}_2^2}{ \\norm{A}_F^2}  - 1 \\right| > \\varepsilon \\right)\n      \\leq 2 \\exp \\left( -c \\frac{\\norm{A}_F^2}{\\norm{A}^2} \\varepsilon^2 \\right)\n    \\end{align*}\n    Observe now that for positive values of $x$, $|x^2-1| \\geq |x-1|$. This means that we can replace the $\\frac{\\norm{A \\mathbf{X}}_2^2}{\\norm{A}_F^2}$ in left hand side of the above inequality by its square root.\n    We do that, and set $\\tau = \\varepsilon \\norm{A}_F$.\n    \\begin{align*}\n      \\P\\left( |\\norm{A \\mathbf{X}}_2 - \\norm{A}_F| > \\tau \\right)\n      \\leq 2 \\exp \\left( -c \\frac{\\tau^2}{\\norm{A}^2} \\right)\n    \\end{align*}\n    This proves the inequality for all $\\tau \\in \\left[ 0, \\norm{A}_F \\right]$.\n  \\item[\\textbf{Case 2} ($\\varepsilon > 1$):] In this case $\\varepsilon \\leq \\varepsilon^2$, so the tail bound simplifies in the following manner.\n    \\begin{align*}\n    \\P\\left( \\left| \\frac{ \\norm{A \\mathbf{X}}_2^2}{ \\norm{A}_F^2}  - 1 \\right| > \\varepsilon \\right)\n      \\leq 2 \\exp \\left( -c \\frac{\\norm{A}_F^2}{\\norm{A}^2} \\varepsilon \\right)\n    \\end{align*}\n    Note again that for positive values of $x$, $|x^2 - 1| \\geq |x-1|^2$. 
We use this fact to make a substitution on the left hand side, and let $\\tau = \\sqrt{\\varepsilon} \\norm{A}_F$.\n    \\begin{align*}\n      \\P\\left( |\\norm{A \\mathbf{X}}_2 - \\norm{A}_F| > \\tau \\right)\n      \\leq 2 \\exp  \\left( -c \\frac{\\norm{A}_F^2}{\\norm{A}^2} \\varepsilon \\right)\n    \\end{align*}\n    This proves in the inequality for $\\tau \\in \\left( \\norm{A}_F, \\infty \\right)$.\n  \\end{description}\n\\end{proof}\n\n\\subsection{Concentration for matrix-valued random variables}\n\\label{sec:conc-matr-valu}\n\nSo far, we have looked at concentration inequalities for functions acting on vector-valued random variables, more specifically, linear and quadratic functions on vectors of random variables.\nIn this section, we will look at concentration inequalities for matrix valued random variables.\nWe will start by proving the matrix Bernstein inequality.\n\\begin{theorem}[Matrix Bernstein Inequality]\n  \\label{thm:matrix-bernstein}\n  Let $\\{X_1, \\ldots, X_N\\}$ be independent $n \\times n$ symmetric random matrices satisfying the following conditions for some constant $K$ independent of $j$.\n  \\begin{align*}\n    \\E\\left( X_j \\right) &= 0 \\\\\n    \\norm{X_j}_{\\infty} &\\leq K\n  \\end{align*}\n  Let $\\sigma^2$ denote the following quantity.\n  \\begin{align*}\n    \\sigma^2 \\coloneqq \\norm{\\sum_{j=1}^N \\E(X_j^2)}\n  \\end{align*}\n  Then for any $t > 0$, we have the following tail bound on the sum of $X_j$.\n  \\begin{align*}\n    \\P\\left( \\norm{\\sum_{j=1}^N X_j} > t \\right)\n    \\leq 2n \\exp\\left( -c \\left( \\frac{t^2}{\\sigma^2} \\wedge \\frac{t}{K} \\right) \\right)\n  \\end{align*}\n\\end{theorem}\n\\begin{remark}\n  Note that this inequality looks very similar to the original Bernstein inequality: the only difference is that the coefficient multiplied with $\\exp$ in this case is $2n$, rather than $2$.\n  This means that the bound is not dimension independent, and that does cause issues in practice.\n  However, the bound is optimal.\n\\end{remark}\n\nTo prove Theorem \\ref{thm:matrix-bernstein}, we will need the following linear algebraic result of Lieb.\n\n\\begin{theorem}[Lieb's concavity theorem \\cite{LIEB1973267}]\n  Let $H$ be an $n \\times n$ symmetric matrix.\n  Consider the function $f$ defined on the set of $n \\times n$ symmetric positive-definite matrices.\n  \\begin{align*}\n    f(X) \\coloneqq \\mathrm{tr} \\left( \\exp\\left( H + \\log(X) \\right) \\right)\n  \\end{align*}\n  The function $f$ is concave, i.e. 
for any $t \\in [0,1]$, and any $X$ and $Y$ in the space of symmetric positive-definite matrices, the following inequality holds.\n  \\begin{align*}\n    f(tX + (1-t)Y) \\geq tf(X) + (1-t)f(Y)\n  \\end{align*}\n\\end{theorem}\n\nUsing Lieb's concavity theorem and Jensen's inequality, we get the following corollary.\n\\begin{corollary}\n  \\label{cor:lieb}\n  Let $H$ be an $n \\times n$ symmetric matrix, and let $Y$ be a random symmetric matrix.\n  Then the following inequality holds.\n  \\begin{align}\n    \\label{eq:2}\n    \\E \\left( \\mathrm{tr} \\left( \\exp\\left( H + Y \\right) \\right) \\right)\n    \\leq \\mathrm{tr}\\left( \\exp\\left( H + \\log\\left( \\E(\\exp(Y)) \\right) \\right) \\right)\n  \\end{align}\n\\end{corollary}\nUsing this corollary, we can prove the matrix Bernstein inequality.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:matrix-bernstein}]\n  We begin by giving an easy upper bound for the norm of a symmetric matrix $X$.\n  \\begin{align*}\n    \\norm{X} \\leq \\lambda_{\\mathrm{max}}(X) + \\lambda_{\\mathrm{max}}(-X)\n  \\end{align*}\n  Here, $\\lambda_{\\mathrm{max}}(X)$ denotes the largest eigenvalue of $X$.\n  Expressed in terms of tail bounds, we get the following.\n  \\begin{align*}\n    \\P\\left( \\norm{\\sum_{j=1}^N X_j} > t \\right)\n    \\leq \\P\\left( \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j \\right) > \\frac{t}{2} \\right)\n    + \\P\\left( \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N -X_j \\right) > \\frac{t}{2} \\right)\n  \\end{align*}\n  Note that it will suffice to get an upper bound for one of the terms on the right hand side, and multiply that by $2$.\n\n  Just like in the proof of Bernstein's inequality, it will help to bound the Laplace transformation of the random variables involved.\n  To do so, we need to chain several simplifying equalities and inequalities.\n  The first equality we get from the fact that the largest eigenvalue of the exponential of a symmetric matrix is the exponential of the largest eigenvalue of the original matrix.\n  \\begin{align}\n    \\label{eq:3}\n    \\E\\left( \\exp\\left( \\lambda \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j \\right) \\right) \\right)\n    &= \\E\\left( \\lambda_{\\mathrm{max}} \\exp\\left( \\lambda \\left( \\sum_{j=1}^N X_j\\right) \\right) \\right)\n  \\end{align}\n  Here, $\\lambda$ is an arbitrary positive number.\n  Now note that $\\exp$ of a symmetric matrix has only positive eigenvalues.\n  This means that the largest eigenvalue is smaller than the trace.\n  \\begin{align}\n    \\label{eq:4}\n    \\E\\left( \\lambda_{\\mathrm{max}} \\exp\\left( \\lambda \\left( \\sum_{j=1}^N X_j\\right) \\right) \\right)\n    \\leq \\E\\left( \\mathrm{tr} \\exp\\left( \\lambda \\left( \\sum_{j=1}^{N-1} X_j + X_N\\right) \\right) \\right)\n  \\end{align}\n  We now condition on the values of $\\{X_1, \\ldots, X_{N-1}\\}$, which lets us treat the right hand side of \\eqref{eq:4} using \\eqref{eq:2}.\n  \\begin{align*}\n    \\E\\left( \\mathrm{tr} \\exp\\left( \\lambda \\left( \\sum_{j=1}^{N-1} X_j + X_N\\right) \\right) \\right)\n    \\leq \\E\\left( \\mathrm{tr} \\exp\\left( \\lambda \\left( \\sum_{j=1}^{N-1} X_j \\right) + \\log \\E \\exp\\left( \\lambda X_N \\right) \\right) \\right)\n  \\end{align*}\n  We repeat this process $N-2$ additional times, conditioning on all but the last $X_i$ to get the following inequality, combining \\eqref{eq:3}, \\eqref{eq:4}, and \\eqref{eq:2}.\n  \\begin{align}\n    \\label{eq:5}\n    \\E\\left( \\exp\\left( \\lambda \\lambda_{\\mathrm{max}} \\left( 
\\sum_{j=1}^N X_j \\right) \\right) \\right)\n    \\leq \\mathrm{tr}\\left( \\exp\\left[ \\sum_{j=1}^N \\log \\E \\exp\\left( \\lambda X_j \\right)\\right] \\right)\n  \\end{align}\n\n  We now deal with $\\E \\exp(\\lambda X_j)$.\n  Note that for $t \\in [0, 1]$, we have the following elementary inequality.\n  \\begin{align*}\n    \\exp(t) \\leq 1 + t + t^2\n  \\end{align*}\n  We can extend this to symmetric matrices, where the inequality $A \\leq B$ is understood to mean that $B-A$ is positive definite.\n  \\begin{align*}\n    \\E \\exp\\left( \\lambda X_j \\right) &\\leq I + \\E(\\lambda X_j) + \\E\\left( \\lambda^2 X_j^2 \\right) \\\\\n    &= I + \\lambda^2 \\E(X_j^2)\n  \\end{align*}\n  Note that the above inequality holds when $\\lambda$ is small enough such that the largest eigenvalue of $\\lambda X_j$ is less than $1$.\n  Now we use the inequality $1 + x \\leq \\exp(x)$ to get an upper bound of $\\exp\\left( \\lambda^2 \\E(X_j^2) \\right)$.\n  We combine this with \\eqref{eq:5} to get the following.\n  \\begin{align}\n    \\label{eq:6}\n    \\E\\left( \\exp\\left( \\lambda \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j \\right) \\right) \\right)\n    \\leq \\mathrm{tr} \\left( \\exp \\left[ \\sum_{j=1}^N \\lambda^2 \\E(X_j^2) \\right] \\right)\n  \\end{align}\n  Now we again use the fact that the exponential of a symmetric matrix is positive definite to bound the trace above by $n$ times the largest eigenvalue.\n  \\begin{align}\n    \\label{eq:7}\n     \\E\\left( \\exp\\left( \\lambda \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j \\right) \\right) \\right)\n    \\leq n \\lambda_{\\mathrm{max}} \\left( \\exp \\left[ \\sum_{j=1}^N \\lambda^2 \\E(X_j^2) \\right]\\right)\n  \\end{align}\n  Finally, we use the fact that $\\lambda_{\\mathrm{max}} X$ is less than or equal to $\\norm{X}$ to get the upper bound in terms of matrix norm.\n  \\begin{align}\n    \\label{eq:8}\n    \\E\\left( \\exp\\left( \\lambda \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j \\right) \\right) \\right)\n    &\\leq n \\left( \\exp \\left[ \\lambda^2 \\norm{\\sum_{j=1}^N \\E(X_j^2)}  \\right]\\right) \\\\\n    &\\leq n \\left( \\exp \\left[ \\lambda^2 \\sigma^2  \\right]\\right)\n  \\end{align}\n\n  This gives a bound on the Laplace transformation of the random variable we were trying to tail bound.\n  We now use Markov's inequality to get the tail bound.\n  \\begin{align*}\n    \\P\\left( \\lambda_{\\mathrm{max}} \\left( \\sum_{j=1}^N X_j\\right) > \\tau \\right)\n    \\leq n \\exp\\left( \\lambda^2 \\sigma^2 - \\lambda \\tau \\right)\n  \\end{align*}\n  Note that this bound comes from \\eqref{eq:8}, which is only valid when $\\lambda \\max \\norm{X_j} \\leq 1$.\n  We optimize over the values of $\\lambda$ now. 
This calculation is identical to the one we did for the original Bernstein inequality, and the tail bound follows.\n\\end{proof}\n\nWe now discuss a few corollaries of the matrix Bernstein inequality.\nThe first corollary pertains to matrices that need not be symmetric, or even square.\n\n\\begin{corollary}\n  Let $\\{X_1, \\ldots, X_N\\}$ be $m \\times n$ matrices, satisfying the following conditions for some constant $K$.\n  \\begin{align*}\n    \\E(X_j) &= 0 \\\\\n    \\norm{X_j}_{\\infty} &\\leq K\n  \\end{align*}\n  Let $\\sigma^2 = \\norm{\\sum_{j=1}^N \\E\\left( X_j X_j^{\\top} \\right)} + \\norm{\\sum_{j=1}^N \\E\\left( X_j^{\\top}X_j \\right)}$.\n  Then we have the following tail bound.\n  \\begin{align*}\n    \\P\\left( \\norm{\\sum_{j=1}^N X_j} > t \\right)\n    \\leq 2(n+m) \\exp\\left( -c \\left( \\frac{t^2}{\\sigma^2} \\wedge \\frac{t}{K} \\right) \\right)\n  \\end{align*}\n\\end{corollary}\n\\begin{proof}\n  The proof follows from Theorem \\ref{thm:matrix-bernstein} after symmetrizing the rectangular matrices.\n  Consider the $(n+m) \\times (n+m)$ matrix $X_j^s$, constructed in the following manner.\n  \\begin{align*}\n    X_j^s =\n    \\begin{pmatrix}\n      0 & X_j  \\\\\n      X_j^{\\top} & 0\n    \\end{pmatrix}\n  \\end{align*}\n  Applying Theorem \\ref{thm:matrix-bernstein} to $\\{X_1^s, \\ldots, X_N^s\\}$ gives us the proof.\n\\end{proof}\n\nAnother easy corollary is getting expectation bounds from tail bounds.\n\\begin{corollary}\n  \\label{cor:exp-bound}\n  Let $\\{X_1, \\ldots, X_N\\}$ be as in Theorem \\ref{thm:matrix-bernstein}. Then we have the following expectation bound.\n  \\begin{align*}\n    \\E \\norm{\\sum_{j=1}^N X_j} \\leq C \\left( \\sigma \\sqrt{\\log(n)} + K \\log(n) \\right)\n  \\end{align*}\n\\end{corollary}\n\nWe now return to the problem of covariance estimation that we mentioned in the introduction.\n\n\\paragraph{Empirical covariance estimation}\n\nLet $X$ be an $\\R^n$-valued random variable with mean $0$.\nThe population covariance $\\Sigma$ is defined to be $\\E X X^{\\top}$.\nLet $\\{X_1, \\ldots, X_N\\}$ be i.i.d copies of $X$.\nWe define sample covariance $\\Sigma_N$ to be the sample average of the covariance of each $X_j$.\n\\begin{align*}\n  \\Sigma_N \\coloneqq \\frac{\\sum_{j=1}^N X_j X_j^{\\top}}{N}\n\\end{align*}\nBy the law of large numbers, $\\Sigma_N$ converges to the population covariance.\nHowever, it's more useful to know how fast $\\Sigma_N$ converges to $\\Sigma$.\n\\begin{theorem}\n  Let $X$ and $\\{X_1, \\ldots, X_N\\}$ be random vectors as described previously.\n  Suppose that the following inequality holds almost surely for some constant $S$.\n  \\begin{align*}\n    \\norm{X}_2^2 \\leq S \\E\\norm{X}_2^2\n  \\end{align*}\n  Let $\\varepsilon \\in (0,1)$, and set $N$ to be the following.\n  \\begin{align*}\n    N = C \\frac{S n \\log(n)}{\\varepsilon^2}\n  \\end{align*}\n  Here, $C$ is some fixed constant, and we round $N$ to the nearest integer.\n  Then we have the following concentration inequality for sample covariance.\n  \\begin{align*}\n    \\E \\norm{\\Sigma_N - \\Sigma} \\leq \\varepsilon \\cdot \\norm{\\Sigma}\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  Define $Y_j$ to be $X_j X_j^{\\top} - \\Sigma$.\n  The expectation of $Y_j$ is clearly $0$.\n  We can express the left hand side of the inequality we are trying to prove in terms of $Y_j$.\n  \\begin{align*}\n    \\E \\norm{\\Sigma_N - \\Sigma} &= \\E \\norm{ \\frac{\\sum_{j=1}^N X_jX_j^{\\top}}{N} - \\Sigma } \\\\\n                                &= 
\\frac{1}{N} \\E \\norm{\\sum_{j=1}^N Y_j}\n  \\end{align*}\n  By Corollary \\ref{cor:exp-bound}, we can bound the above expectation.\n  \\begin{align}\n    \\label{eq:9}\n    \\E \\norm{\\sum_{j=1}^N Y_j}\n    \\leq C\\left( \\sigma \\sqrt{\\log(n)} + K \\log(n) \\right)\n  \\end{align}\n  We need to compute what $\\sigma$ and $K$ are in this context.\n  Recall that $K$ was an almost sure upper bound on $\\norm{Y_j}$.\n  \\begin{align*}\n    \\norm{Y_j} &= \\norm{X_jX_j^{\\top} - \\Sigma} \\\\\n               &\\leq \\norm{X_j}^2 + \\norm{\\Sigma}\n  \\end{align*}\n  We now use the hypothesis we had.\n  \\begin{align*}\n    \\norm{X_j}^2 &\\leq S \\E\\norm{X}_2^2 \\\\\n                 &=S \\E\\left( \\mathrm{tr}(X_j X_j^{\\top}) \\right) \\\\\n                 &= S \\mathrm{tr}(\\Sigma) \\\\\n                 &\\leq Sn \\norm{\\Sigma}\n  \\end{align*}\n  Thus, $K$ in this context in $S n \\norm{\\Sigma}$.\n\n  To compute $\\sigma$, we do something similar.\n  \\begin{align*}\n    \\E Y_j^2 &= \\E\\left( X_jX_j^{\\top} - \\Sigma \\right)^2 \\\\\n             &\\leq \\E\\left( X_j X_j^{\\top} \\right)^2 \\\\\n             &\\leq \\norm{X_j}_2^2 \\cdot \\E \\left( X_j X_j^\\top \\right) \\\\\n             &\\leq Sn \\norm{\\Sigma} \\Sigma\n  \\end{align*}\n  The quantity $\\sigma^2$ then works out to be less than $N S n \\norm{\\Sigma}^2$.\n  Plugging these values into \\eqref{eq:9} gives us the result we want.\n\\end{proof}\n\n\\section{Martingales}\n\\label{sec:martingales}\n\n\\subsection{Azuma's martingale inequality}\n\\label{sec:azum-mart-ineq}\n\nWe begin by recalling the definition of a martingale.\n\\begin{definition}[Martingale]\n  Let $\\cF_0 \\subset \\cF_1 \\subset \\ldots \\subset \\cF_n$ be a filtration of sigma algebras of the sample space $\\Omega$, with $\\cF_0 = \\{\\emptyset, \\Omega\\}$.\n  A sequence of random variables $\\{M_1, M_2, \\ldots\\}$ is said to be a martingale adapted with respect to the filtration $\\{ cF_i\\}$ if the following holds for all $i$ and $j$, where $j > i$.\n  \\begin{align*}\n    \\E\\left( M_j \\mid\\ \\cF_i \\right) = M_i\n  \\end{align*}\n\\end{definition}\n\nUsing the notion of a martingale, we can state a generalization of a special case of Hoeffding's inequality.\n\n\\begin{theorem}[Azuma's martingale inequality]\n  \\label{thm:azuma-martingale}\n  Let $\\{M_j\\}$ be a martingale. 
Assume that for all $j \\geq 1$, there exist constants $d_j$ such that the following holds.\n  \\begin{align*}\n    \\norm{M_j - M_{j-1}}_{\\infty} \\leq d_j\n  \\end{align*}\n  Then for any $t > 0$, we have the following concentration inequality for $M_n$.\n  \\begin{align*}\n    \\P\\left( \\left| M_n - \\E M_n \\right| > t \\right)\n    \\leq 2 \\exp\\left( - \\frac{t^2}{4\\sum_{j=1}^n d_j^2} \\right)\n  \\end{align*}\n\\end{theorem}\nTo see how this is a generalization of a special case of Hoeffding's inequality, let $\\{B_i\\}$ be i.i.d copies of a random variable with zero mean whose absolute value is bounded by $1$ almost surely.\nIf we define $M_n$ by $\\sum_{j=1}^n d_jB_j$, then it's easy to verify that $\\{M_n\\}$ is a martingale satisfying the above hypotheses, and the tail bound in this case follows from Hoeffding's inequality.\n\nThe reason why the martingale variant of the inequality requires more work is that it is not the case that all martingales are sums of independent random variables.\nHowever, it turns out that a lot of results that hold for sums of independent random variables can also be made to work for martingales.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:azuma-martingale}]\n  We prove this concentration inequality again by estimating the Laplace transform of $M_n$.\n  Let $\\lambda > 0$, and define $f$ in the following manner.\n  \\begin{align*}\n    f(\\lambda) = \\E(\\exp(\\lambda M_n))\n  \\end{align*}\n  To estimate the Laplace transform, we split up $M_n$ in an artificial manner.\n  \\begin{align*}\n    \\E(\\exp(\\lambda M_n))\n    &= \\E(\\exp(\\lambda M_{n-1} + \\lambda(M_n - M_{n+1})))\n  \\end{align*}\n  Observe that since $M_{n-1}$ and $M_n - M_{n-1}$ are not necessarily independent, we cannot turn this into a product of expectations.\n  But we do know that the expectation of $M_{n} - M_{n-1}$ when conditioned on $M_{n-1}$ is $0$.\n  \\begin{align*}\n    \\E(\\exp(\\lambda M_{n-1} + \\lambda(M_n - M_{n+1})))\n    &= \\E\\left( \\exp\\left( \\lambda M_{n-1} \\right)  \\right)\n      \\cdot \\E\\left( \\exp(\\lambda (M_n - M_{n-1})) \\mid\\ M_{n-1} \\right)\n  \\end{align*}\n  We deal with the second term in the product first.\n  To do, so consider the following elementary inequality, which we will state without proof.\n  For all $x \\in \\R$, the following holds.\n  \\begin{align*}\n    e^x \\leq x + e^{x^2}\n  \\end{align*}\n  We use this to bound the second term of the product as follows.\n  \\begin{align*}\n    \\E\\left( \\exp(\\lambda (M_n - M_{n-1})) \\mid\\ M_{n-1} \\right)\n    &\\leq \\E\\left( \\lambda(M_n - M_{n-1}) \\mid\\ M_{n-1} \\right) \\\\\n      &+ \\E\\left( \\exp\\left( \\lambda^2(M_n - M_{n-1})^2 \\right) \\mid\\ M_{n-1} \\right)\n  \\end{align*}\n  Observe that the first term vanishes, and the second term is bounded above by $\\exp(\\lambda^2 d_j^2)$.\n\n  To deal with the $\\E\\left( \\exp\\left( \\lambda M_{n-1} \\right) \\right)$ term, we do the same thing to get an upper bound of $\\exp(\\lambda d_{j-1}^2)$.\n  We repeat this process inductively, to get the following bound.\n  \\begin{align*}\n    \\E\\left( \\exp\\left( \\lambda M_n \\right) \\right)\n    &\\leq \\exp\\left( \\lambda^2 \\left( \\sum_{j=1}^n d_j^2 \\right) \\right) \\cdot \\E\\left( \\lambda M_0 \\right) \\\\\n    &= \\exp\\left( \\lambda^2 \\left( \\sum_{j=1}^n d_j^2 \\right) \\right) \\cdot \\E\\left( \\lambda M_n \\right)\n  \\end{align*}\n  Using Markov's inequality, and optimizing over $\\lambda$, the inequality follows.\n\\end{proof}\n\nAn application 
of Azuma's martingale inequality is the \\emph{average bounded differences inequality}.\n\n\\begin{corollary}[Average bounded differences inequality]\n  \\label{thm:abd}\n  Let $\\{X_1, \\ldots, X_n\\}$ be independent real valued random variables.\n  Let $f$ be a function from $\\R^n$ to $\\R$ such that for any $j \\in \\{1, 2, \\ldots, n-1\\}$, and any $\\{y_1, \\ldots, y_{j+1}\\}$ in $\\R$, the following holds for some $d_{j+1} \\geq 0$.\n  \\begin{align*}\n    \\left| \\E f\\left( y_1, \\ldots, y_{j+1}, X_{j+2}, \\ldots, X_m \\right) -\n    \\E f\\left( y_1, \\ldots, y_j, X_{j+1}, \\ldots, X_m \\right)\n    \\right| \\leq d_{j+1}\n  \\end{align*}\n  Then for any $t > 0$, the following tail bound applies to $f(X_1, \\ldots, X_n)$.\n  \\begin{align*}\n    \\P\\left( \\left| f(X_1, \\ldots, X_n) - \\E f(X_1, \\ldots, X_n) \\right| > t \\right)\n    \\leq 2 \\exp\\left( - \\frac{t^2}{\\sum_{j=1}^n d_j^2} \\right)\n  \\end{align*}\n\\end{corollary}\n\n\\begin{remark}\n  The strength of this theorem comes from the fact that we require very little from the function $f$.\n  For instance, if $f$ is merely continuous, and the $X_i$ are bounded random variables, then this inequality applies, and gives us a strong concentration result.\n\\end{remark}\n\n\\begin{proof}[Proof of Corollary \\ref{thm:abd}]\n  Let $\\cF_j$ be the $\\sigma$-algebra generated by $X_1$ to $X_j$.\n  Define the martingale $M_j$ in the following manner.\n  \\begin{align*}\n    M_j = \\E\\left( f(X_1, \\ldots, X_n) \\mid\\ \\cF_j \\right)\n  \\end{align*}\n  It is easy to check that the sequence of $M_j$'s actually forms a martingale.\n  Furthermore, one can verify the condition required by the martingale in Theorem \\ref{thm:azuma-martingale}.\n  \\begin{align*}\n    \\left| M_j - M_{j-1} \\right|\n    &= \\left| E\\left[ f(X_1, \\ldots, X_n) \\mid\\ \\cF_j \\right] - E\\left[ f(X_1, \\ldots, X_n) \\mid\\ \\cF_{j-1} \\right] \\right| \\\\\n    &\\leq \\sup_{\\left\\{ y_1, \\ldots y_j \\right\\}} \\big| E\\left[ f(y_1, \\ldots, y_j, X_{j+1} \\ldots, X_n) \\mid\\ \\cF_j \\right] \\\\\n    &- E\\left[ f(y_1, \\ldots, y_{j-1}, X_j, \\ldots X_n) \\mid\\ \\cF_{j-1} \\right] \\big| \\\\\n    &\\leq d_j\n  \\end{align*}\n  Using Theorem \\ref{thm:azuma-martingale}, we get the claimed tail bound.\n\\end{proof}\n\n\\subsection{Applications of Azuma's martingale inequality}\n\\label{sec:appl-azum-mart}\n\nIn this section, we consider several applications of Azuma's martingale inequality and the average bounded differences inequality.\n\n\\paragraph{Random vectors in Banach spaces}\n\nLet $\\{X_1, \\ldots, X_n\\}$ be independent random vectors in a normed space $(Y, \\norm{\\cdot})$.\nIf for all $j \\leq n$, $\\norm{A_j} \\leq a_j$ almost surely, then the following holds.\n\\begin{align*}\n  \\P\\left( \\left| \\norm{ \\sum_{j=1}^n X_j} - \\E  \\norm{ \\sum_{j=1}^n X_j}  \\right| > t \\right)\n  \\leq 2 \\exp \\left( - \\frac{t^2}{16 \\sum_{j=1}^n a_j^2} \\right)\n\\end{align*}\nThis follows from Azuma's martingale inequality by letting $M_j$ be the random variable obtained by taking the conditional expectation of $\\norm{\\sum_{j=1}^n X_j}$ conditioned on the $\\sigma$-algebra generated by $\\{X_1, \\ldots, X_j\\}$.\nThis sequence of real valued random variables forms a martingale, and from the fact that $\\norm{X_j}$ is bounded above by $a_j$ almost surely, we get that that the martingale difference is bounded above by $2a_j$ almost surely, and the result follows.\n\nObserve that this proof tells us absolutely nothing about $\\E 
\\norm{\\sum_{j=1}^n X_j}$, but only gives a concentration inequality about the mean.\nThis will be the norm when proving concentration inequalities using Azuma's martingale inequality.\n\n\\paragraph{Randomized bin packing}\n\nAnother practical application is the \\emph{randomized bin packing problem}.\nSuppose $\\{X_1, \\ldots, X_n\\}$ are numbers from the interval $[0,1]$, which we will call weights.\nWhat is the minimum number $N(X_1, \\ldots, X_n)$ of partitions (or bins) we need for these weights such that the sum of weights in each partition (or bin) does not exceed $1$?\nThis is an optimization problem of great importance to computer scientists, and a na\u00efve algorithm to compute the minimum number of bins is takes super-exponential time as a function of $n$.\n\nConsider now a variant of this problem where the weights $\\{X_1, \\ldots, X_n\\}$ are i.i.d random variables.\nWe are interested in the minimal number of bins $N(X_1, \\ldots, X_n)$ as $n$ tends to $\\infty$.\nOne has a result in the spirit of the law of large numbers.\n\\begin{theorem}\n  There exists a real number $\\gamma \\in [0,1]$, depending on the distribution of $X_i$ such that the following holds almost surely.\n  \\begin{align*}\n    \\lim_{n \\to \\infty} \\frac{N(X_1, \\ldots, X_n)}{n} = \\gamma\n  \\end{align*}\n\\end{theorem}\n  Let $M_j$ be the random obtained by taking the conditional expectation of $F(X_1, \\ldots , X_n)$ conditioned on the $\\sigma$-algebra $\\cF_j$ generated by $\\{X_1, \\ldots, X_j\\}$.\n  This is a martingale, and we now verify that $\\left| M_{j+1} - M_j \\right|$ is at most $1$.\n  \\begin{align*}\n    \\left| M_{j+1} - M_j \\right|\n    &= \\left| \\E\\left( N(X_1, \\ldots, X_n) \\mid \\cF_{j+1} \\right) - \\E\\left( N(X_1, \\ldots, X_n) \\mid \\cF_{j} \\right) \\right|\n  \\end{align*}\n  To see that the right hand side is at most $1$, it will suffice to condition further on the $\\sigma$-algebra generated by $\\{ X_{j+2}, \\ldots, X_n\\}$.\n  After doing so, we see that the difference is bounded pointwise by $1$.\n  \\begin{align*}\n    \\left| N(x_1, \\ldots, x_n) - N(x_1, \\ldots, x_j, X_{j+1}, x_{j+2}, \\ldots, x_n) \\right| \\leq 1\n  \\end{align*}\n  The above inequality follows from the fact that changing one weight can at most decrease or increase the number of bins by $1$.\n  If the weight increases, we need at most one additional bin.\n  If the weight decreases, and the number of bins goes down by more than $1$, then increasing the weight again would increase the number of bins by more than $1$, which cannot happen.\n  This proves the claim that the martingale difference is bounded above by $1$.\n  Using Azuma's martingale inequality, we get the following concentration inequality for $N$.\n  \\begin{align*}\n    \\P\\left( \\left| N(X_1, \\ldots, X_n) - \\E N(X_1, \\ldots, X_n) \\right| > t \\right)\n    \\leq 2 \\exp\\left( - \\frac{t^2}{n} \\right)\n  \\end{align*}\n  Letting $t = \\frac{\\gamma n}{2}$, we get that the probability that $N(X_1, \\ldots, X_n)$ is less than $\\frac{\\gamma n}{2}$ is very small.\n  \\begin{align*}\n    \\P\\left( N(X_1, \\ldots, X_n) \\leq \\frac{\\gamma n}{2} \\right) &\\leq 2 \\exp\\left( -frac{\\gamma^2 n^2}{n} \\right) \\\\\n   &= 2 \\exp \\left( -\\gamma^2 n \\right)\n  \\end{align*}\n\n\\paragraph{Functional and geometric concentration for the discrete cube}\n\nAnother application of Azuma's martingale inequality is functional and geometric concentration for the discrete cube.\nThe discrete cube $D_n$ is $\\{0,1\\}^n$ with the 
distance function $d_H$, where $d_H(x, y) \\coloneqq \\frac{1}{n} \\sum_{i=1}^n |x_i - y_i|$.\nLet $\\mu$ be the uniform probability measure on $D_n$.\nLet $f: D_n \\to \\R$ be $1$-Lipschitz.\nIf we consider the coordinates to be independent random variables, we see that the $1$-Lipschitz condition translates to the average difference of $f$ being less than or equal to $1$.\nWe then get the following concentration result for $f$.\n\\begin{align}\n  \\label{eq:func-conc}\n  \\mu\\left( \\left| f(x) - \\E f(x) \\right| > t \\right) \\leq 2 \\exp \\left( - nt^2 \\right)\n\\end{align}\nThis inequality gives us \\emph{geometric measure concentration}.\n\\begin{theorem}[Geometric measure concentration]\n  Let $A \\subset D_n$ be a set such that $\\mu(A) \\geq \\frac{1}{2}$.\n  Let $\\varepsilon > 0$, and let $A_\\varepsilon$ be the $\\varepsilon$ neighbourhood of $A$.\n  Then the complement $A_\\varepsilon^C$ of $A_\\varepsilon$ has very little measure.\n  \\begin{align*}\n    \\mu(A_\\varepsilon^C) \\leq 2 \\exp \\left( - \\varepsilon^2 n \\right)\n  \\end{align*}\n\\end{theorem}\nThis is quite unlike what we are used to in the case of a solid cube in $\\R^n$, with Lebesgue measure.\nIn this case, as $n$ approaches $\\infty$, the measure of $\\mu(A_\\varepsilon^C)$ does not approach $0$.\n\\begin{proof}[Proof of geometric measure concentration]\n  Let $f(x) = d_H(x, A)$. Clearly, $f$ is $1$-Lipschitz and we can use \\eqref{eq:func-conc}.\n  We have that $\\mu(A_\\varepsilon^C) = \\mu(f(x)>\\varepsilon)$.\n  To use \\eqref{eq:func-conc}, we need to estimate $\\E f(x)$.\n  If $x \\in A$, $f(x) = 0$.\n  \\begin{align*}\n    \\frac{1}{2} &\\leq \\mu(A) \\\\\n                &\\leq \\mu\\left( \\left| f(x) - \\E f(x) \\right| \\geq \\E f(x) \\right) \\\\\n    &\\leq 2 \\exp \\left( - n \\left( \\E f(x) \\right)^2 \\right)\n  \\end{align*}\n  Thus, $\\E f(x) \\leq \\frac{C}{\\sqrt{n}}$ for some constant $C$.\n\n  We thus get the following.\n  \\begin{align*}\n    \\mu\\left( f(x) > t + \\frac{C}{\\sqrt{n}} \\right) \\leq 2 \\exp(-nt^2)\n  \\end{align*}\n  Setting $t$ to the appropriate value gives us the result.\n\\end{proof}\n\n\\paragraph{Random allocations}\n\nSuppose we have $n$ balls, which we randomly and independently put into $m$ bins.\nOne can ask various statistical questions about the setup, e.g. 
number of empty bins, longest stretch of empty bins and so on.\nThe problem we consider is the following: let $Z$ be the number of empty bins.\nWe can write $Z$ as $\\sum_{j=1}^m Z_j$, where $Z_j$ is the indicator for the event that the $j$\\textsuperscript{th} bin is empty.\nWe can easily evaluate $\\E Z$: this will be $\\sum_{j=1}^m \\E Z_j = m \\left( 1 - \\frac{1}{m} \\right)^n \\approx m \\exp \\left( - \\frac{n}{m} \\right)$.\nSuppose now that $\\frac{n}{m}$ is approximately a constant $r$.\nWe want concentration inequalities for $Z$, and $Z$ is a sum of identically distributed random variables.\nHowever, the $Z_j$ are not independent, which means most of our results don't apply here.\nTo deal with this, we consider the random variables $\\{X_j\\}$, which is the number of the bin the $j$\\textsuperscript{th} ball was put in.\nClearly, $Z$ is a function of $\\{X_1, \\ldots, X_n\\}$: like the previous examples, we can create a martingale by taking the conditional expectations of $Z$ conditioned on the $\\sigma$-algebra generated by $\\{X_1, \\ldots, X_j\\}$.\nFurthermore, the martingale differences are bounded above by $1$ as well, which means by Azuma's martingale inequality, we get the following concentration inequality.\n\\begin{align}\n  \\label{eq:weak-bound}\n  \\P\\left( \\left| Z - \\E Z \\right| > t \\right) \\leq 2 \\exp \\left( - \\frac{t^2}{4n} \\right)\n\\end{align}\n\nWe can do better though: note that the martingale difference bound we obtained was a pointwise bound, and the difference in expectation might be smaller.\nLet $\\Delta_j$ denote the martingale difference.\n\\begin{align*}\n  \\Delta_j = \\E\\left[ Z \\mid X_1, \\ldots, X_j \\right] - \\E\\left[ Z \\mid X_1, \\ldots, X_{j-1} \\right]\n\\end{align*}\nRecall now that $Z = \\sum_{j=1}^m Z_j$. Let $1 \\leq k \\leq m$.\nThen given $X_1, \\ldots, X_j$, if at least one of them equals $k$, $Z_k = 0$.\nOtherwise $Z_k$ is a Bernoulli random variable with $\\P(Z_k = 1 \\mid X_1, \\ldots , X_j) = \\left( 1- \\frac{1}{m} \\right)^{n-j}$.\nThis lets evaluate the conditional expectations of $Z_k$.\n\\begin{align*}\n  \\E\\left[ Z_k \\mid X_1, \\ldots , X_j \\right] = \\left( 1 - \\frac{1}{m} \\right)^{n-j} \\prod_{l=1}^j \\left( 1 - \\mathbb{1}_k(X_l) \\right)\n\\end{align*}\nWe similarly have the $j-1$\\textsuperscript{th} term.\n\\begin{align*}\n  \\E\\left[ Z_k \\mid X_1, \\ldots , X_{j-1} \\right] = \\left( 1 - \\frac{1}{m} \\right)^{n-j + 1} \\prod_{l=1}^{j-1} \\left( 1 - \\mathbb{1}_k(X_l) \\right)\n\\end{align*}\nLet $X_j^{\\prime}$ be an independent copy of $X_j$. 
Then we can rewrite the above expression in the following manner.\n\\begin{align*}\n  \\E\\left[ Z_k \\mid X_1, \\ldots , X_{j-1} \\right]\n  &= \\left( 1 - \\frac{1}{m} \\right)^{n-j} \\E_{X_{j}^{\\prime}} \\left( 1 - \\mathbb{1}_k(X_j^{\\prime}) \\right) \\prod_{l=1}^{j-1} \\left( 1 - \\mathbb{1}_k(X_l) \\right) \\\\\n  &= \\left( 1 - \\frac{1}{m} \\right)^{n-j} \\E_{X_{j}^{\\prime}} \\left[ \\left( 1 - \\mathbb{1}_k(X_j^{\\prime}) \\right) \\prod_{l=1}^{j-1} \\left( 1 - \\mathbb{1}_k(X_l) \\right) \\right]\n\\end{align*}\nWe can write the martingale difference $\\Delta_j$ by taking the difference of the two terms we derived.\n\\begin{align*}\n  \\Delta_j &= \\left( 1 - \\frac{1}{m} \\right)^{n-j}\n  \\cdot \\left[\n  \\prod_{l=1}^j \\left( 1 - \\mathbb{1}_k(X_l) \\right) -\n\\E_{X_{j}^{\\prime}} \\left[ \\left( 1 - \\mathbb{1}_k(X_j^{\\prime}) \\right) \\prod_{l=1}^{j-1} \\left( 1 - \\mathbb{1}_k(X_l) \\right) \\right]\n             \\right] \\\\\n  &= \\left( 1 - \\frac{1}{m} \\right)^{n-j} \\E_{X_{j}^{\\prime}} \\left[ \\left( \\mathbb{1}_k(X_j^{\\prime}) - \\mathbb{1}_k(X_j) \\right) \\prod_{l=1}^{j-1} \\left( 1 - \\mathbb{1}_k(X_l) \\right) \\right]\n\\end{align*}\nObserve now that the above expression is bounded above pointwise by $a_j = \\left( 1- \\frac{1}{m} \\right)^{n-j}$, which is better than the previous estimate we had.\nWe can work out what the terms of the improved concentration inequality will look like.\n\\begin{align*}\n  \\sum_{j=1}^{n} a_j^2 &= \\sum_{j=1}^n \\left( 1 - \\frac{1}{m} \\right)^{2(n-j)} \\\\\n                       &=\\frac{1 - \\left( 1- \\frac{1}{m} \\right)^{2(n+2)}}{1- \\left( 1 - \\frac{1}{m} \\right)^2} \\\\\n                       &\\approx \\frac{ 1 - \\exp\\left( - \\frac{2n}{m} \\right)}{\\frac{2}{m} - \\frac{1}{m^2}} \\\\\n  &\\approx \\frac{1 - \\exp(-2r)}{\\frac{2}{m}}\n\\end{align*}\nWe use this term in Azuma's martingale inequality.\n\\begin{align}\n  \\label{eq:strong-bound}\n  \\P\\left( \\left| Z - \\E Z \\right| > t \\right) \\leq 2 \\exp \\left( - \\frac{t^2}{2m(1-e^{-2r})} \\right)\n\\end{align}\nCompare this to \\eqref{eq:weak-bound}.\nIf $r$ is much bigger than $1$, then \\eqref{eq:strong-bound} is much better, because the denominator is on the order of $2n$, which is better than $4n$.\nIf $r$ is much smaller than $1$, the denominator in \\eqref{eq:strong-bound} is roughly $4n$, which is the same as \\eqref{eq:weak-bound}, giving us no improvement.\n\nThe takeaway from this calculation is that the better we can estimate the martingale differences, the better concentration inequality we get.\nAnother takeaway is that for when $r$ is much smaller than $1$, the number of bins vastly outnumbers the number of balls, in which case, it's not hard to imagine that $Z_j$ are almost independent random variables.\nTreating them as actually independent random variables, and using Hoeffding's inequality to get concentration for their sum gives us approximately the same result we got.\n\n\\subsection{Geometric and functional concentration for $\\Pi_n$}\n\\label{sec:geom-funct-conc}\n\nRecall the example of geometric and functional concentration on $D_n$ outlined in the previous section.\nGeometric concentration refers to the phenomenon that the $\\varepsilon$ neighbourhood of a set of measure greater than $\\frac{1}{2}$ has almost full measure, and functional concentration refers to the fact that the value of a $1$-Lipschitz function concentrates near its mean.\nIt turns out that for a general metric probability space, if one has geometric 
concentration, one has functional concentration, and vice versa.\nIn this section, we will look and geometric and functional concentration on the symmetric group $\\Pi_n$.\nWe define the metric on $\\Pi_n$ in the following manner.\n\\begin{align*}\n  d(\\pi, \\pi^{\\prime}) = \\sum_{j=1}^n \\mathbb{1}_{\\pi(j) \\neq \\pi^{\\prime}(j)}(j)\n\\end{align*}\nWe let the probability measure on $\\Pi_n$ be the uniform measure.\nWe then have functional concentration for this probability metric space.\n\\begin{theorem}[Maurey \\cite{maurey1979construction}]\n  \\label{thm:maurey}\n  Let $f: \\Pi_n \\to \\R$ be a $1$-Lipschitz function. Then for all $t > 0$, the following tail bound holds.\n  \\begin{align*}\n    \\P\\left( \\left| f(x) - \\E f(x) \\right| > t \\right) \\leq 2 \\exp\\left( - \\frac{t^2}{16n} \\right)\n  \\end{align*}\n\\end{theorem}\nOne can deduce the geometric version of this concentration result as a corollary.\n\\begin{corollary}\n  For any $\\varepsilon > 0$, and for any $A \\subset \\Pi_n$, if $\\P(A) \\geq \\frac{1}{2}$,\n  then $\\P\\left( A_\\varepsilon^C \\right) \\leq \\exp\\left( -c^{\\prime} \\frac{\\varepsilon^2}{n} \\right)$, for some fixed constant $c^{\\prime}$.\n\\end{corollary}\n\nLots of natural functions on $\\Pi_n$ are $1$-Lipschitz.\n\\begin{example}\n  Let $f(\\pi)$ be the length of the longest increasing subsequence of the sequence $\\left( \\pi(1), \\ldots, \\pi(n) \\right)$.\n  Clearly, if $\\pi^{\\prime}$ differs from $\\pi$ on $k$ elements, then the longest increasing subsequence can only become shorter by at most $k$ elements, i.e. by removing those elements from the subsequence if they appeared.\n  This shows that $f$ is a $1$-Lipschitz function, and one can get strong concentration inequalities for $f$.\n\\end{example}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:maurey}]\n  We begin by defining a filtration in order to construct the martingale required to use Azuma's inequality.\n  Let $\\cF_0$ be the trivial $\\sigma$-algebra, and for $1 \\leq j \\leq n$, let $\\cF_j$ be the $\\sigma$-algebra generated by the sets $C_{k_1, \\ldots, k_j}$, where $\\{k_1, \\ldots, k_j\\}$ varies over all the $j$-element subsets of $\\{1, \\ldots, n\\}$, defined in the following manner.\n  \\begin{align*}\n    C_{k_1, \\ldots, k_j} = \\left\\{ \\pi \\in \\Pi_n\\mid\\ \\pi(i) = k_i \\right\\}\n  \\end{align*}\n  By taking the conditional expectation of $f$ with respect to $\\cF_j$, we get the $j$\\textsuperscript{th} element of the martingale.\n  We wish to estimate the following martingale difference.\n  \\begin{align}\n    \\label{eq:martingale-diff}\n    \\E\\left[ f(\\pi) \\mid \\cF_j \\right] - \\E\\left[ f(\\pi) \\mid \\cF_{j-1} \\right]\n  \\end{align}\n  We estimate this difference using by \\emph{coupling} two copies of the random variable $\\pi$.\n  Denote $J = \\left\\{ \\pi(1), \\ldots, \\pi(j-1) \\right\\}$.\n  Let $\\sigma$ be a random transposition such that $\\sigma(j) = j^{\\prime}$ where $j^{\\prime}$ is chosen uniformly from $\\{j, j+1, \\ldots, n\\}$.\n  Let $\\pi^{\\prime} = \\pi \\circ \\sigma$: if $\\pi$ is uniformly distributed, then so is $\\pi^{\\prime}$ (this can be checked by a simple counting argument).\n  The random variables $\\pi$ and $\\pi^{\\prime}$ are now coupled: in particular, $\\pi(k) = \\pi^{\\prime}(k)$ for $k \\leq j-1$.\n  We use the coupled random variables to make \\eqref{eq:martingale-diff} easier to compute.\n  \\begin{align*}\n    \\E\\left[ f(\\pi) \\mid \\cF_j \\right] - \\E\\left[ f(\\pi) \\mid \\cF_{j-1} \\right]\n    
&= \\E\\left[ f(\\pi) \\mid \\cF_j \\right] - \\E\\left[ f(\\pi^{\\prime}) \\mid \\cF_{j-1} \\right] \\\\\n    &= \\E\\left[ f(\\pi) \\mid \\cF_j \\right] - \\E\\E_{\\sigma} \\left[ f(\\pi \\circ \\sigma) \\mid \\cF_{j} \\right] \\\\\n    &= \\E_{\\pi} \\left[ f(\\pi) - \\E_{\\sigma} f(\\pi \\circ \\sigma) \\mid \\cF_j \\right]\n  \\end{align*}\n  The key equality in this chain of equalities is when $\\E\\left[ f(\\pi^{\\prime}) \\mid \\cF_{j-1} \\right]$ turns into $\\E\\E_{\\infty}\\left[ f(\\pi \\circ \\sigma) \\mid \\cF_{j} \\right]$.\n  In the first term, we are fixing the first $j-1$ entries, and taking the average over the remaining entries.\n  In the second term, we are first fixing the first $j$ entries, computing the average over the remaining terms, and then computing the average of the average as we vary the $j$\\textsuperscript{th} term.\n  It's clear that these two quantities must be equal.\n\n  Using the fact that $f$ is $1$-Lipschitz, and $\\sigma$ is a transposition, we see that $f(\\pi)$ and $f(\\pi \\circ \\sigma)$ differ by at most $2$.\n  Thus the martingale difference is bounded by $2$, and we can apply Azuma's inequality.\n  \\begin{align*}\n    \\P\\left( \\left| f(\\pi) - \\E f(\\pi) \\right| > t \\right) \\leq 2 \\exp \\left( - \\frac{t^2}{16n} \\right)\n  \\end{align*}\n\\end{proof}\n\\begin{remark}\n  If $f$ is not $1$-Lipschitz, but $L$-Lipschitz, rescaling appropriately gives a similar bound, with $16L^2n$ in the denominator, rather than $16n$.\n\\end{remark}\n\n\\section{Log-concave random variables}\n\\label{sec:log-concave-random}\n\nWe will now consider log-concave random variables. These will be helpful in proving subgaussian concentration results for Gaussian random vectors in $\\R^n$.\n\n\\begin{definition}[Log-concave measures and random variables]\n  A Borel measure in $\\R^n$ is called log-concave if for any $\\lambda \\in [0,1]$ and any compact sets $A$ and $B$ in $\\R^n$, the following holds.\n  \\begin{align*}\n    \\log\\left(  \\mu\\left( \\lambda A + (1-\\lambda)B \\right) \\right) \\geq\n    \\lambda \\log(\\mu(A)) + (1-\\lambda) \\log(\\mu(B))\n  \\end{align*}\n  Here, $\\lambda A + (1-\\lambda)B$ is the Minkoswki sum of the sets.\n\n  An $\\R^n$-valued random variable $X$ is said to be log-concave if the pushforward measure $\\mu(A) \\coloneqq \\P(X \\in A)$ is a log-concave measure.\n\\end{definition}\n\\begin{example}\n  The Lebesgue measure on $\\R^n$ is a log-concave measure.\n  The Gaussian random vector, i.e. a vector with i.i.d copies of the standard normal in its coordinates is a log-concave random variable.\n\\end{example}\nWe will shortly describe the entire class of log-concave random variables.\n\\begin{definition}[Log-concave function]\n  A function $f: \\R^n \\to [0, \\infty]$ is called log-concave if $\\log(f)$ is a concave function.\n\\end{definition}\n\n\\begin{theorem}[Borell]\n  Let $\\mu$ be a locally finite Borel measure in $\\R^n$. Assume that $\\dim(\\mathrm{supp}(\\mu)) = n$.\n  Then the following conditions are equivalent.\n  \\begin{enumerate}[(i)]\n  \\item $\\mu$ is log-concave.\n  \\item $\\mu$ is absolutely continuous with respect to Lebesgue measure, and the Radon-Nikodym derivative is log-concave.\n  \\end{enumerate}\n\\end{theorem}\nWith this theorem, it's easy to see why the examples we gave are log-concave.\n\nWe will only prove one direction of Borell's theorem, i.e. 
$(ii) \\implies (i)$.\nTo do so, we need the Pr\u00e9kopa\u2013Leindler inequality\n\\begin{theorem}[Pr\u00e9kopa\u2013Leindler inequality]\n  Let $\\lambda \\in [0,1]$ and let $f$, $g$, and $h$ be functions $\\R^n \\to [0, \\infty]$ be $L^1$ functions such that for all $x$ and $y$ in $\\R^n$, the following inequality holds.\n  \\begin{align*}\n    h(\\lambda x + (1-\\lambda)y) \\geq f(x)^{\\lambda} \\cdot g(y)^{1-\\lambda}\n  \\end{align*}\n  Then we have an associated integral inequality.\n  \\begin{align*}\n    \\int_{\\R^n} h(x) \\dd x \\geq \\left( \\int_{\\R^n} f(x) \\dd x \\right)^{\\lambda} \\cdot \\left( \\int_{\\R^n} g(x) \\dd x \\right)^{1 - \\lambda}\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  We will prove this result in two steps.\n  \\begin{description}\n  \\item[Step 1] Let $n = 1$. Assume that $f$ and $g$ are non-zero. Without loss of generality, we can assume that $\\int_\\R f(x) \\dd x = \\int_{\\R} g(x) \\dd x = 1$. If not, scale $f$, $g$, and $h$ appropriately.\n    We just need to show that $\\int_{\\R} h(x) \\dd x \\geq 1$.\n\n    To make things simpler, assume that $f$ and $g$ are continuous $L^1$ functions to $(0, \\infty)$ (we will get rid of this assumption later).\n    Define the functions $\\phi$ and $\\psi$ from $(0,1)$ to $\\R$ in the following manner.\n    \\begin{align*}\n      \\int_{-\\infty}^{\\phi(t)} f(x) \\dd x &= t \\\\\n      \\int_{-\\infty}^{\\psi(t)} g(x) \\dd x &= t\n    \\end{align*}\n    The functions $\\phi$ and $\\psi$ are well defined since $f$ and $g$ are positive.\n    Moreover, since $f$ and $g$ are continuous, we have the following by the fundamental theorem of calculus.\n    \\begin{align*}\n      f(\\phi(t)) \\cdot \\phi^{\\prime}(t) &= 1 \\\\\n      g(\\psi(t)) \\cdot \\psi^{\\prime}(t) &= 1\n    \\end{align*}\n\n    To evaluate the integral of $h$, we perform the following change of variables.\n    \\begin{align*}\n      \\theta(t) = \\lambda \\phi(t) + (1- \\lambda)\\psi(t)\n    \\end{align*}\n    The map $\\theta$ is a differentiable bijection from $(0,1)$ to $\\R$.\n    \\begin{align*}\n      \\theta^{\\prime}(t) = \\frac{\\lambda}{f(\\phi(t))} + \\frac{1-\\lambda}{g(\\psi(t))}\n    \\end{align*}\n    With this change of coordinates, we can evaluate the integral of $h$.\n    \\begin{align*}\n      \\int_{\\R} h(x) \\dd x &= \\int_0^1 h(\\theta(t)) \\theta^{\\prime}(t) \\dd t \\\\\n                           &= \\int_0^1 H(\\lambda \\phi(t) + (1- \\lambda)\\psi(t))\n                             \\cdot \\left[ \\frac{\\lambda}{f(\\phi(t))} + \\frac{1-\\lambda}{g(\\psi(t))} \\right] \\dd t\n    \\end{align*}\n    Using the log-concavity hypothesis, we can bound the integrand below.\n    \\begin{align*}\n      &\\int_0^1 H(\\lambda \\phi(t) + (1- \\lambda)\\psi(t))\n      \\cdot \\left[ \\frac{\\lambda}{f(\\phi(t))} + \\frac{1-\\lambda}{g(\\psi(t))} \\right] \\dd t \\\\\n      \\geq &\\int_{0}^1 f(\\phi(t))^{\\lambda} \\cdot g(\\psi(t))^{1-\\lambda}\n      \\cdot \\left[ \\frac{\\lambda}{f(\\phi(t))} + \\frac{1-\\lambda}{g(\\psi(t))} \\right] \\dd t\n    \\end{align*}\n    The arithmetic-geometric-mean inequality tells us that for any $\\lambda \\in (0,1)$, and any positive $a$ and $b$, the following holds.\n    \\begin{align*}\n      \\lambda a + (1-\\lambda)b \\geq a^{\\lambda} \\cdot b^{1-\\lambda}\n    \\end{align*}\n    Plugging in this inequality in the above integral inequality, we get that the integral of $h$ is bounded below by $1$.\n\n    To get rid of the assumption that $f$ and $g$ are continuous and strictly 
positive, note that we can approximate $f$ and $g$ by such functions if not the case.\n  \\item[Step 2] Assume that the result holds for all dimensions less than $n+1$. We prove it for dimension $n+1$.\n    We have functions $f$, $g$, and $h$ from $\\R^{n+1}$ to $[0, \\infty]$ such that for any $t$ and $s$ in $\\R$, and any $x$ and $y$ in $\\R^n$, the following holds.\n    \\begin{align*}\n      h(\\lambda t + (1-\\lambda)s, \\lambda x + (1-\\lambda)y)\n      \\geq f(t, x)^{\\lambda} \\cdot g(s,y)^{1-\\lambda}\n    \\end{align*}\n    Fix $t$ and $s$, and consider the functions as functions on $\\R^n$.\n    These functions satisfy the log-concave condition, and we can use the induction hypothesis to claim the following.\n    \\begin{align*}\n      \\int_{\\R^n} h(\\lambda t + (1-\\lambda)s, x) \\dd x\n      \\geq \\left( \\int_{\\R^n} f(t,x) \\dd x \\right)^{\\lambda} \\cdot \\left( \\int_{\\R^n} g(s,x) \\dd x \\right)^{1- \\lambda}\n    \\end{align*}\n    Denote the left hand term as $H(\\lambda t + (1-\\lambda)s)$, the first term on the right hand side by $F(t)$, and the second term on the right hand side by $G(s)$.\n    Then the functions $F$, $G$, and $H$ are functions on $\\R$ satisfying the hypotheses of the theorem, and thus satisfy the following integral inequality by the base case of the induction.\n    \\begin{align*}\n      \\int_\\R H(t) \\dd t \\geq \\left( \\int_{\\R} F(t) \\dd t \\right)^{\\lambda} \\cdot \\left( \\int_{\\R} G(t) \\dd t \\right)^{1-\\lambda}\n    \\end{align*}\n    Unwrapping the definitions of $F$, $G$, and $H$, and using Fubini's theorem, we get the claimed integral inequality for $\\R^{n+1}$.\n  \\end{description}\n\\end{proof}\n\nWe will consider several consequences of the Pr\\'ekopa-Leindler inequality, starting with some elementary corollaries.\n\\begin{corollary}\n  Let $X$ and $Y$ be independent log-concave random variables in $\\R^n$, and let $E$ be a subspace of $\\R^n$.\n  Then:\n  \\begin{enumerate}[(1)]\n  \\item The random variable $W = P_EX$ is log-concave.\n  \\item For all real $a$ and $b$, $Z = aX + bY$ is log-concave.\n  \\end{enumerate}\n\\end{corollary}\n\\begin{proof}\n  \\begin{enumerate}[(1)]\n  \\item Without loss of generality, we can assume that the subspace $E$ is the subspace where the last $n-k$ coordinates are $0$, where $k = \\dim(E)$.\n    Given any $x \\in \\R^n$, we can write it as $(w, z)$, where $w$ is the projection to $E$.\n    By log-concavity, we have the following inequality involving the density $f_X$ of $X$ for all $\\lambda \\in (0,1)$.\n    \\begin{align*}\n      f_X(\\lambda w_1 + (1-\\lambda)w_2, \\lambda z_1 + (1-\\lambda) z_2)\n      \\geq \\left( f_X(w_1, z_1) \\right)^{\\lambda} \\cdot \\left( f_X(w_2, z_2) \\right)^{1-\\lambda}\n    \\end{align*}\n    By Pr\\'ekopa-Leindler, we get the corresponding integral inequality when we integrate over $z$ keeping $w$ fixed.\n    But the integral of $f_X$ in the last $n-k$ coordinates is precisely the density of the pushforward measure.\n  \\item Without loss of generality, assume $a=b=\\frac{1}{\\sqrt{2}}$.\n    Observe that the random vector $(X, Y) \\in \\R^{2n}$ is log-concave, since the $X$ and $Y$ are independent, and the joint density is the product of the individual densities.\n    Upon taking logarithm of the joint density, we get a sum of two concave functions, which must also be concave.\n    Furthermore, $aX+bY$ can be obtained by projecting $(X,Y)$ to an appropriate subspace of $\\R^{2n}$.\n    The result then follows from $(1)$.\n  
  \\end{enumerate}\n\\end{proof}\n\nAnother corollary that we get from Pr\\'ekopa-Leindler is the Brunn-Minkowski inequality.\n\\begin{corollary}[Brunn-Minkowski inequality]\n  Let $K$ and $D$ be compact subsets of $\\R^n$.\n  Then the following inequality involving the Lebesgue measure of the Minkowski sum $K+D$ holds.\n  \\begin{align*}\n    \\left( m_n(K+D) \\right)^{\\frac{1}{n}} \\geq \\left( m_n(K) \\right)^{\\frac{1}{n}} + \\left( m_n(D) \\right)^{\\frac{1}{n}}\n  \\end{align*}\n\\end{corollary}\n\\begin{proof}\n  Let $A$ and $B$ be compact sets in $\\R^n$ such that they have measure $1$, and fix $\\lambda \\in (0,1)$.\n  Let $f = \\mathbb{1}_A$, $g = \\mathbb{1}_B$ and let $h = \\mathbb{1}_{\\lambda A + (1-\\lambda)B}$.\n  Then we can verify that $f$, $g$, and $h$ satisfy the hypothesis of the Pr\\'ekopa-Leindler inequality.\n  To see why that must be the case, observe that for any $x \\in A$ and $y \\in B$, $\\lambda x + (1-\\lambda )y$ must be in $\\lambda A + (1-\\lambda)B$, by the definition of the Minkowski sum.\n  Integrating the functions gives us the following inequality involving the Lebesgue measures.\n  \\begin{align*}\n    m_n(\\lambda A + (1-\\lambda) B) \\geq \\left( m_n(A) \\right)^{\\lambda} \\cdot \\left( m_n(B) \\right)^{1 - \\lambda}\n  \\end{align*}\n  Note that the right hand side is actually equal to $1$; taking $n$\\textsuperscript{th} roots and writing $1 = \\lambda \\left( m_n(A) \\right)^{\\frac{1}{n}} + (1-\\lambda)(m_n(B))^{\\frac{1}{n}}$, we get the following inequality.\n  \\begin{align*}\n    m_n(\\lambda A + (1-\\lambda) B)^{\\frac{1}{n}} &\\geq \\lambda \\left( m_n(A) \\right)^{\\frac{1}{n}} + (1-\\lambda)(m_n(B))^{\\frac{1}{n}} \\\\\n    &= \\left( m_n( \\lambda A) \\right)^{\\frac{1}{n}} + (m_n( (1-\\lambda)B))^{\\frac{1}{n}}\n  \\end{align*}\n  Letting $K = \\lambda A$ and $D = (1-\\lambda)B$, we prove the result for all pairs of compact sets that arise in this form.\n  Furthermore, since the inequality is homogeneous with respect to scalar multiplication (and trivial when either set has measure zero), the general case follows.\n\\end{proof}\n\n
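As a quick sanity check (our addition, not part of the original notes), observe that the Brunn-Minkowski inequality is sharp for cubes: if $K = [0,a]^n$ and $D = [0,b]^n$, then $K + D = [0, a+b]^n$, and\n\\begin{align*}\n  \\left( m_n(K+D) \\right)^{\\frac{1}{n}} = a + b = \\left( m_n(K) \\right)^{\\frac{1}{n}} + \\left( m_n(D) \\right)^{\\frac{1}{n}}\n\\end{align*}\nso equality holds.\n\n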
Another more involved application of the Pr\\'ekopa-Leindler inequality is the following result due to Khatri and \u0160idak.\n\\begin{theorem}[Khatri-\u0160idak]\n  Let $\\left\\{g_1, \\ldots, g_n\\right\\}$ be (not necessarily independent) centered normal random variables.\n  Then for any $\\{a_1, \\ldots, a_n\\}$ in $\\R_+$, the following holds.\n  \\begin{align*}\n    \\P\\left( |g_1| < a_1, |g_2| < a_2,\\cdots, |g_n| < a_n \\right)\n    &\\geq \\prod_{i=1}^n \\P\\left( |g_i| < a_i \\right)\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  Let $G$ be the standard Gaussian vector in $\\R^n$: there exist vectors $\\{x_1, \\ldots, x_n\\}$ such that $g_i = \\langle G, x_i \\rangle$.\n  The vectors $x_i$ can be easily recovered from the covariance matrix of the $g_i$.\n  We express the inequality we want to prove in terms of $G$.\n  \\begin{align}\n    \\label{eq:10}\n  \\begin{gathered}\n    \\P\\left( |\\langle G, x_1 \\rangle| < a_1, \\cdots, |\\langle G, x_n \\rangle| < a_n \\right)\n    \\geq \\P\\left( |\\langle G, x_1 \\rangle| < a_1, \\cdots, |\\langle G, x_{n-1} \\rangle| < a_{n-1} \\right) \\\\\n    \\cdot \\P\\left( |\\langle G, x_n \\rangle| < a_n \\right)\n  \\end{gathered}\n  \\end{align}\n\n  Define $K \\subset \\R^n$ as follows.\n  \\begin{align*}\n    K \\coloneqq \\left\\{ y \\in \\R^n \\mid\\ |\\langle y, x_i \\rangle| < a_i\\text{ for $1\\leq i \\leq n-1$} \\right\\}\n  \\end{align*}\n  Let $\\gamma_n$ be the standard Gaussian measure in $\\R^n$.\n  Then \\eqref{eq:10} can be rewritten in terms of $K$ and $\\gamma_n$.\n  \\begin{align}\n    \\label{eq:11}\n      \\gamma_n(K \\cap \\left\\{ |\\langle y, x_n \\rangle| < a_n \\right\\})\n      \\geq \\gamma_n(K) \\cdot \\gamma_n\\left( \\left\\{ |\\langle y, x_n \\rangle| < a_n \\right\\} \\right)\n  \\end{align}\n  Observe that $K$ is centrally symmetric and convex.\n  We claim that \\eqref{eq:11} holds for any convex centrally symmetric set.\n  We prove this claim by inducting on $n$.\n\n  Let $n=1$: then $K$ is the interval $[-b,b]$ for some constant $b$.\n  Similarly, the set $\\left\\{ |\\langle y, x_1 \\rangle| < a_1 \\right\\}$ is just $(-a_1, a_1)$.\n  The intersection of these two centrally symmetric sets is also centrally symmetric.\n  This gives the following chain of inequalities, proving the claim for $n=1$.\n  \\begin{align*}\n    \\gamma_1(K \\cap \\left\\{ |\\langle y, x_1 \\rangle| < a_1 \\right\\})\n    &= \\gamma_1\\left( [-(b \\wedge a_1), (b \\wedge a_1)] \\right) \\\\\n    &\\geq \\gamma_1([-b, b]) \\cdot \\gamma_1((-a_1, a_1))\n  \\end{align*}\n  The last inequality holds because the left hand side equals the smaller of the two factors, and both factors are at most $1$.\n\n  Assume now that \\eqref{eq:11} holds in dimension $n$.\n  Let $K$ be a convex centrally symmetric body in $\\R^{n+1}$ and let $S = \\left\\{ y \\mid\\ |\\langle y, x_{n+1} \\rangle| < a_{n+1} \\right\\}$.\n  We can also assume that $x_{n+1}$ is the $n+1$\\textsuperscript{th} basis vector.\n  Then we can express the left hand side of the claimed inequality in the following manner.\n  \\begin{align*}\n    \\gamma_{n+1}(K \\cap S)\n    &= \\int_{\\R} \\gamma_n\\left( K \\cap \\left\\{ y \\mid\\ y_{n+1} = t \\right\\} \\right)\\cdot \\mathbb{1}_{[-a_{n+1}, a_{n+1}]}(t) \\dd \\gamma_1(t)\n  \\end{align*}\n  We can now interpret the right hand side as the expectation, over the standard Gaussian variable $t$, of the function $f(t) = \\gamma_n\\left( K \\cap \\{y \\mid\\ y_{n+1} = t\\} \\right) \\cdot \\mathbb{1}_{[-a_{n+1}, a_{n+1}]}(t)$.\n  Writing this expectation via the layer cake formula lets us rewrite the right hand side in the following manner.\n  \\begin{align}\n    \\label{eq:12}\n    \\gamma_{n+1}(K \\cap S)\n    &= \\int_{0}^{\\infty} \\gamma_1\\left( \\left\\{ t \\in [-a_{n+1}, a_{n+1}] \\mid\\ \\gamma_n\\left( K \\cap \\{y \\mid\\ y_{n+1} = t\\} \\right) > s \\right\\} \\right) \\dd s\n  \\end{align}\n  Let $E_s$ be a set defined in the following manner.\n  \\begin{align*}\n    E_s \\coloneqq \\left\\{ t \\in \\R \\mid\\ \\gamma_n\\left( K \\cap \\{y \\mid\\ y_{n+1} = t\\} \\right)  > s\\right\\}\n  \\end{align*}\n  Then $E_s$ is centrally symmetric, because $K$ is centrally symmetric.\n  Moreover, $E_s$ is convex, i.e. $E_s = [-\\theta(s), \\theta(s)]$.\n
  That is because $\\psi(t) = \\gamma_n(K \\cap \\{y \\mid\\ y_{n+1} = t\\})$ is an even log-concave function: it is a marginal of a log-concave function, and marginals of log-concave functions are log-concave by Pr\\'ekopa-Leindler.\n  Therefore, $\\psi(t)$ is non-increasing for $t \\geq 0$, and each superlevel set $E_s$ is indeed a symmetric interval.\n  Observe that we can express the quantity we care about in terms of $E_s$, using \\eqref{eq:12}, and then apply the one-dimensional case of the claim to the sets $[-a_{n+1}, a_{n+1}]$ and $E_s$.\n  \\begin{align*}\n    \\gamma_{n+1}(K \\cap S) &= \\int_0^{\\infty} \\gamma_1\\left( [-a_{n+1}, a_{n+1}] \\cap E_s \\right) \\dd s \\\\\n                           &\\geq \\int_0^{\\infty} \\gamma_1\\left( [-a_{n+1}, a_{n+1}] \\right) \\cdot \\gamma_1(E_s) \\dd s \\\\\n                           &= \\gamma_1\\left( [-a_{n+1}, a_{n+1}] \\right) \\cdot \\int_0^{\\infty} \\gamma_1(E_s) \\dd s \\\\\n                           &= \\gamma_{n+1}(S) \\cdot \\gamma_{n+1}(K)\n  \\end{align*}\n  The last equality uses the layer cake formula again: $\\int_0^{\\infty} \\gamma_1(E_s) \\dd s = \\int_{\\R} \\psi(t) \\dd \\gamma_1(t) = \\gamma_{n+1}(K)$.\n  This completes the proof of the theorem.\n\\end{proof}\n\\begin{remark}\n  A general version of this inequality, where rather than splitting up the normal random variables in groups of $n-1$ and $1$, one splits them up in groups of $n-m$ and $m$, is the Gaussian correlation inequality.\n  This inequality attracted a lot of attention and resisted efforts to prove it for decades, remaining a conjecture until it was proved by Thomas Royen in 2014\\footnote{See the \\href{https://www.quantamagazine.org/statistician-proves-gaussian-correlation-inequality-20170328/}{Quanta article} that outlines this discovery that was almost neglected by the larger mathematical community.}.\n\\end{remark}\n\n% We will now look at a combinatorial application of the Khatri-\u0160idak inequality.\n% \\begin{theorem}[$6\\sigma$ Theorem]\n%   Let $a_{ij}$ be numbers in $[-1, 1]$, as $i$ and $j$ range from $1$ to $n$.\n%   Then there exist $\\{\\varepsilon_i\\}_{1 \\leq i \\leq n}$ in $\\{-1, 1\\}$ such that the following holds for all $i$.\n%   \\begin{align*}\n%     \\left| \\sum_{j=1}^n \\varepsilon_j a_j \\right| \\leq C \\sqrt{n}\n%   \\end{align*}\n% \\end{theorem}\n% \\begin{remark}\n%   If one randomly picks the $\\varepsilon_i$, and uses Hoeffding's inequality to get the appropriate concentration inequality, one only ends up with an upper bound of the form $C \\sqrt{n \\log(n)}$.\n% \\end{remark}\n% We will need a few lemmas in order to prove this theorem.\n% Before we state them, we set up some notation.\n% Let $\\mathbf{g} = \\left( g_1, \\ldots, g_n \\right)$ be the standard Gaussian vector.\n% Let $\\gamma_n(A)$ denote $\\P\\left( g \\in A \\right)$.\n% \\begin{lemma}\n%   Let $\\{a_1, \\ldots, a_m\\}$ be vectors in $\\R^n$ such that $\\norm{a_j}_2 \\leq \\sqrt{n}$ for all $j$.\n%   Define the set $K_t$\n% \\end{lemma}\n\nWe will now prove a Gaussian concentration inequality using the Pr\\'ekopa-Leindler inequality.\nLet $\\mathbf{g}$ be a standard Gaussian vector in $\\R^n$.\nDenote $\\gamma_n(A)$ to be the probability that $\\mathbf{g} \\in A$ for a Borel subset $A$.\n\\begin{theorem}[Geometric concentration inequality]\n  \\label{thm:guassian-geometric}\n  Let $K$ be a closed set in $\\R^n$.\n  For $t > 0$ let $K_t$ be the $t$-neighbourhood of $K$.\n  Then for any $t > 0$, the following holds.\n  \\begin{align*}\n    \\gamma_n\\left( \\left( K_t \\right)^c \\right) \\leq \\frac{\\exp\\left( - \\frac{t^2}{4} \\right)}{\\gamma_n(K)}\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  We will choose functions $f$ and $h$ from $\\R^n$ to $[0, +\\infty]$ and find $g: \\R^n \\to [0, \\infty]$ such that the triple $(f,g,h)$ satisfies the hypotheses of the Pr\\'ekopa-Leindler inequality.\n  Set $h$ and $f$ to be the following functions.\n
  \\begin{align*}\n    h(x) &= \\exp\\left( - \\frac{\\norm{x}_2^2}{2} \\right) \\\\\n    f(x) &= \\exp\\left( - \\frac{\\norm{x}_2^2}{2} \\right) \\cdot \\mathbb{1}_K(x)\n  \\end{align*}\n  We want to find a $g$ that satisfies the hypotheses of Pr\\'ekopa-Leindler for $\\lambda = \\frac{1}{2}$, i.e. the following inequality for all $x$ and $y$ in $\\R^n$.\n  \\begin{align*}\n    h\\left( \\frac{x+y}{2} \\right) \\geq \\left( f(x) \\right)^{\\frac{1}{2}} \\cdot \\left( g(y) \\right)^{\\frac{1}{2}}\n  \\end{align*}\n  We can focus on the values of $x$ that are contained in $K$; otherwise the right hand side is $0$, and the inequality holds trivially.\n  Taking logarithms on both sides gives us the following inequality that $g$ must satisfy for all $x \\in K$ and $y \\in \\R^n$.\n  \\begin{align*}\n    - \\frac{1}{2} \\norm{\\frac{x+y}{2}}_2^2\n    \\geq \\frac{1}{2}\\left( - \\frac{\\norm{x}_2^2}{2} \\right) + \\frac{1}{2} \\log(g(y))\n  \\end{align*}\n  Moving terms, we get the following constraint on $\\phi(y) \\coloneqq \\log(g(y))$.\n  \\begin{align*}\n    \\phi(y) \\leq \\inf_{x \\in K} \\left[ \\frac{\\norm{x}_2^2}{2} - \\norm{\\frac{x+y}{2}}_2^2 \\right]\n  \\end{align*}\n  We define $\\phi(y)$ to be the right hand side, which makes the inequality hold by construction.\n  We can simplify terms to get a formula for $\\phi(y)$.\n  \\begin{align*}\n    \\phi(y) &\\coloneqq \\inf_{x \\in K} \\left[ \\frac{\\norm{x}_2^2}{2} - \\norm{\\frac{x+y}{2}}_2^2 \\right] \\\\\n            &= \\inf_{x \\in K} \\left[ \\frac{\\norm{x-y}_2^2}{4} \\right] - \\frac{\\norm{y}_2^2}{2} \\\\\n            &= \\frac{ \\mathrm{dist}(y, K)^2}{4}  - \\frac{\\norm{y}_2^2}{2}\n  \\end{align*}\n  This gives us the formula for $g$.\n  \\begin{align*}\n    g(y) = \\exp\\left( \\frac{\\mathrm{dist}(y, K)^2}{4} \\right) \\cdot \\exp\\left( - \\frac{\\norm{y}_2^2}{2} \\right)\n  \\end{align*}\n  The Pr\\'ekopa-Leindler inequality then gives us the corresponding integral inequality, which we multiply by $\\left( 2 \\pi \\right)^{- \\frac{n}{2}}$ to be able to interpret the integrands as integrals against the Gaussian probability measure.\n  \\begin{align*}\n    \\int_{\\R^n} \\left( 2\\pi \\right)^{-\\frac{n}{2}} h(x) \\dd x\n    &\\geq \\left( \\int_{\\R^n}\\left( 2\\pi \\right)^{-\\frac{n}{2}} f(x) \\dd x \\right)^{\\frac{1}{2}}\n      \\left( \\int_{\\R^n}\\left( 2\\pi \\right)^{-\\frac{n}{2}} g(x) \\dd x \\right)^{\\frac{1}{2}}\n  \\end{align*}\n  Observe that the left hand side is $1$, the first term on the right hand side is the square root of $\\gamma_n(K)$, and the second term is the square root of the expectation of $\\exp\\left( \\frac{\\mathrm{dist}(\\mathbf{g}, K)^2}{4} \\right)$.\n  \\begin{align*}\n    1 \\geq \\left( \\gamma_n(K) \\right)^{\\frac{1}{2}} \\cdot \\left( \\E \\exp\\left( \\frac{\\mathrm{dist}(\\mathbf{g}, K)^2}{4} \\right) \\right)^{\\frac{1}{2}}\n  \\end{align*}\n  Squaring both sides gives $\\E \\exp\\left( \\frac{\\mathrm{dist}(\\mathbf{g}, K)^2}{4} \\right) \\leq \\frac{1}{\\gamma_n(K)}$.\n  The claim then follows from an application of Markov's inequality.\n\\end{proof}\n
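To spell out the Markov step (our addition): since $K$ is closed, $\\left( K_t \\right)^c$ is exactly the set where $\\mathrm{dist}(x, K) > t$, so for any $t > 0$ we have the following.\n\\begin{align*}\n  \\gamma_n\\left( \\left( K_t \\right)^c \\right)\n  = \\P\\left( \\exp\\left( \\frac{\\mathrm{dist}(\\mathbf{g}, K)^2}{4} \\right) > \\exp\\left( \\frac{t^2}{4} \\right) \\right)\n  \\leq \\exp\\left( - \\frac{t^2}{4} \\right) \\cdot \\E \\exp\\left( \\frac{\\mathrm{dist}(\\mathbf{g}, K)^2}{4} \\right)\n  \\leq \\frac{\\exp\\left( - \\frac{t^2}{4} \\right)}{\\gamma_n(K)}\n\\end{align*}\n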
From the geometric concentration inequality, we can immediately derive the functional version.\n\\begin{theorem}[Functional concentration inequality]\n  Let $F: \\R^n \\to \\R$ be a $1$-Lipschitz function.\n  Set $M = \\mathrm{Median}(F)$, i.e. $\\P(F(\\mathbf{g}) \\leq M) \\geq \\frac{1}{2}$ and $\\P\\left( F(\\mathbf{g}) \\geq M \\right) \\geq \\frac{1}{2}$, where $\\mathbf{g}$ is a standard Gaussian vector.\n  Then for any $t > 0$, $F(\\mathbf{g})$ concentrates near the median in the following manner.\n  \\begin{align*}\n    \\P\\left( \\left| F(\\mathbf{g}) - M \\right| > t \\right) \\leq 4 \\exp\\left( - \\frac{t^2}{4} \\right)\n  \\end{align*}\n\\end{theorem}\n\\begin{proof}\n  Let $K$ be the set of all points $x$ in $\\R^n$ such that $F(x) \\leq M$.\n  We have that $\\gamma_n(K) \\geq \\frac{1}{2}$.\n  Since $F$ is $1$-Lipschitz, we have that the set of points $x$ where $F(x) > M+t$ is contained in $\\left( K_t \\right)^c$.\n  Applying Theorem \\ref{thm:guassian-geometric} gives us the claimed inequality.\n  \\begin{align*}\n    \\P\\left( F(\\mathbf{g}) > M + t \\right) &\\leq \\gamma_n(\\left( K_t \\right)^c) \\\\\n                                  &\\leq \\frac{\\exp\\left( - \\frac{t^2}{4} \\right)}{\\gamma_n(K)} \\\\\n    &\\leq 2 \\exp\\left( - \\frac{t^2}{4} \\right)\n  \\end{align*}\n  Doing the same for the probability that $F(\\mathbf{g}) < M - t$, using the set where $F \\geq M$, and taking a union bound gives us the result.\n\\end{proof}\n\\begin{remark}\n  We can also obtain a subgaussian concentration around $\\E F(\\mathbf{g})$, possibly with a worse constant in the exponent, by obtaining a uniform upper bound on the distance between the median and the mean.\n\\end{remark}\n\nWe also get an easy corollary about the concentration of the norm of a Gaussian matrix.\n\\begin{corollary}\n  Let $G$ be an $n \\times k$ matrix with i.i.d.\\ $\\mathcal{N}(0, 1)$ entries.\n  Then for all $t > 0$, the norm of $G$ concentrates around its mean in the manner described below.\n  \\begin{align*}\n    \\P\\left( \\left| \\norm{G} - \\E \\norm{G} \\right| > t \\right) \\leq 2 \\exp(-ct^2)\n  \\end{align*}\n\\end{corollary}\n\\begin{proof}\n  The matrix norm is a $1$-Lipschitz function with respect to the $L^2$ norm when the matrix is considered as a vector in $\\R^{n \\times k}$.\n  The proof then follows from the functional concentration inequality.\n\\end{proof}\n\nFor the above estimate to be useful in practice, one also needs to know $\\E\\norm{G}$.\nThe expected value of the matrix norm depends on $n$ and $k$ in the following manner.\n\\begin{align*}\n  c(\\sqrt{n} + \\sqrt{k}) \\leq \\E \\norm{G} \\leq C (\\sqrt{n} + \\sqrt{k})\n\\end{align*}\nHere, $c$ and $C$ are some absolute constants.\nThe lower bound is easy to obtain, and left as an exercise.\nTo obtain the upper bound, it will be convenient to have the framework of packings, coverings, and nets.\n\n\\paragraph{Packings, coverings, and nets}\n\n\\begin{definition}\n  Let $(X, d)$ be a metric space.\n  Let $\\varepsilon > 0$ and let $K \\subset X$.\n  A set $N \\subset X$ is said to be an $\\varepsilon$-net for $K$ if the $\\varepsilon$-neighbourhood of $N$ contains $K$.\n  The minimal cardinality of such an $N$ for a fixed $\\varepsilon$, denoted $N(K, d, \\varepsilon)$, is called the covering number of $K$.\n\\end{definition}\n\n\\begin{definition}\n  A set $P \\subset K$ is called an $\\varepsilon$-separated set if for all distinct $x$ and $y$ in $P$, $d(x,y) > \\varepsilon$.\n  The maximal cardinality of an $\\varepsilon$-separated subset of $K$, denoted by $P(K, d, \\varepsilon)$, is called the packing number of $K$.\n\\end{definition}\n\n
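For intuition (our example, not from the original notes), take $K = [0,1]$ in $X = \\R$ with the usual metric: the points $\\varepsilon, 3\\varepsilon, 5\\varepsilon, \\ldots$ form an $\\varepsilon$-net for $[0,1]$, since every point of $[0,1]$ lies within $\\varepsilon$ of one of them, which shows that $N([0,1], |\\cdot|, \\varepsilon) \\leq \\left\\lceil \\frac{1}{2\\varepsilon} \\right\\rceil$.\n\n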
We will be interested in obtaining precise relationships between the covering and packing numbers for compact sets $K$.\n\n\\begin{lemma}\n  Let $K \\subset X$ in the metric space $(X, d)$.\n  Then for any $\\varepsilon > 0$, the packing and covering numbers are commensurate as we scale $\\varepsilon$.\n  \\begin{align*}\n    P(K, d, 2 \\varepsilon) \\leq N(K, d, \\varepsilon) \\leq P(K, d, \\varepsilon)\n  \\end{align*}\n\\end{lemma}\nWe skip the proof of this fact, since it is fairly elementary.\n\nWe can also relate the covering number to Lebesgue measure.\n\\begin{theorem}[Volumetric estimate]\n  Let $X = \\left( \\R^n, \\norm{\\cdot}_X \\right)$ be a normed space.\n  Then for any $K \\subset X$, and any $\\varepsilon > 0$, the following holds.\n  \\begin{align*}\n    N(K, \\norm{\\cdot}_X, \\varepsilon) \\leq \\frac{m_n\\left( K+\\frac{\\varepsilon}{2}B_X \\right)}{m_n \\left( \\frac{\\varepsilon}{2}B_X \\right)}\n  \\end{align*}\n  Here, $m_n$ is the Lebesgue measure, $B_X$ is the unit ball, and $+$ is the Minkowski sum operation.\n\\end{theorem}\n\\begin{proof}\n  We will prove the inequality for the packing number, and since the packing number is always greater than or equal to the covering number, the result will follow.\n  Let $\\mathcal{P}$ be an $\\varepsilon$-separated set.\n  For all $x$ and $x^{\\prime}$ in $\\mathcal{P}$, if $x \\neq x^{\\prime}$, then $B_X\\left( x, \\frac{\\varepsilon}{2} \\right)$ and $B_X\\left( x^{\\prime}, \\frac{\\varepsilon}{2} \\right)$ are disjoint.\n  The union of these balls is contained in $K + \\frac{\\varepsilon}{2}B_X$.\n  Comparing volumes then gives the result.\n\\end{proof}\nOne can estimate the covering number of $B_X$ from this theorem, by setting $K = B_X$: this gives us that the covering number of $B_X$ is less than $\\left( 1 + \\frac{2}{\\varepsilon} \\right)^n$.\n\nWe now go back to our problem of estimating the upper bound for the expected norm of a standard Gaussian $n \\times k$ matrix.\nLet $A$ be a linear operator from the normed space $X = (\\R^k, \\norm{\\cdot}_X)$ to $Y = \\left( \\R^n, \\norm{\\cdot}_Y \\right)$.\n\\begin{lemma}\n  Let $\\mathcal{N}$ be an $\\varepsilon$-net for $B_X$ for some $\\varepsilon \\in (0,1)$.\n  Then we have the following estimate for the operator norm of $A$.\n  \\begin{align*}\n    \\norm{A}_{\\mathrm{op}} \\leq \\frac{1}{1 - \\varepsilon} \\max_{x \\in \\mathcal{N}} \\norm{Ax}_Y\n  \\end{align*}\n\\end{lemma}\n\\begin{proof}\n  Let $y$ be the point in $B_X$ where $\\norm{Ay}_Y$ is maximized.\n  Since $\\mathcal{N}$ is an $\\varepsilon$-net, there exists an $x \\in \\mathcal{N}$ within distance $\\varepsilon$ of $y$.\n  \\begin{align*}\n    \\norm{A}_{\\mathrm{op}} &= \\norm{Ay}_Y \\\\\n                           &= \\norm{Ax + A(y-x)}_Y \\\\\n                           &\\leq \\norm{Ax}_Y + \\norm{A(y-x)}_Y \\\\\n                           &\\leq \\max_{x \\in \\mathcal{N}} \\norm{Ax}_Y + \\varepsilon \\norm{A}_{\\mathrm{op}}\n  \\end{align*}\n  Rearranging proves the result.\n\\end{proof}\nIf both the norms are the standard Euclidean norms, we get the following corollary by applying the lemma twice.\n\\begin{corollary}\n  If $\\mathcal{N}$ is an $\\varepsilon$-net for $B_X$ and $\\mathcal{M}$ is an $\\varepsilon$-net for $B_Y$, then the $L^2$-operator norm of $A$ is bounded above by the following quantity.\n  \\begin{align*}\n    \\norm{A} \\leq \\frac{1}{(1-\\varepsilon)^2} \\max_{x \\in \\mathcal{N}} \\max_{y \\in \\mathcal{M}} \\langle Ax, y \\rangle\n  \\end{align*}\n\\end{corollary}\n\n
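The short argument (our sketch; the notes leave it implicit): for a fixed $x$, the Euclidean norm $\\norm{Ax}_2$ is itself a supremum of inner products over $B_Y$, so the lemma can be applied once in $x$ and once in $y$.\n\\begin{align*}\n  \\norm{A} \\leq \\frac{1}{1-\\varepsilon} \\max_{x \\in \\mathcal{N}} \\norm{Ax}_2\n  = \\frac{1}{1-\\varepsilon} \\max_{x \\in \\mathcal{N}} \\sup_{y \\in B_Y} \\langle Ax, y \\rangle\n  \\leq \\frac{1}{(1-\\varepsilon)^2} \\max_{x \\in \\mathcal{N}} \\max_{y \\in \\mathcal{M}} \\langle Ax, y \\rangle\n\\end{align*}\n\n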
We can use these results about the $L^2$-operator norm to get upper bounds on the expectation of the norm of the standard $n \\times k$ Gaussian matrix.\nWe first need the following lemma.\n\\begin{lemma}\n  For any $t > 0$, we have the following tail bound on $\\norm{G}$.\n  \\begin{align*}\n    \\P\\left( \\norm{G} > C(\\sqrt{n} + \\sqrt{k}) + t \\right) \\leq 2 \\exp\\left( - \\frac{t^2}{4} \\right)\n  \\end{align*}\n\\end{lemma}\n\\begin{proof}\n  Let $\\mathcal{N}$ be a $\\frac{1}{2}$-net for $B_2^k$ and $\\mathcal{M}$ be a $\\frac{1}{2}$-net for $B_2^n$ such that their cardinalities are bounded by the following quantities, coming from the volumetric estimate with $\\varepsilon = \\frac{1}{2}$.\n  \\begin{align*}\n    |\\mathcal{N}| &\\leq \\left( \\frac{3}{\\varepsilon} \\right)^k = 6^k \\\\\n    |\\mathcal{M}| &\\leq \\left( \\frac{3}{\\varepsilon} \\right)^n = 6^n\n  \\end{align*}\n  From the corollary, we have the following upper bound on $\\norm{G}$.\n  \\begin{align*}\n    \\norm{G} \\leq 4 \\max_{x \\in \\mathcal{N}} \\max_{y \\in \\mathcal{M}} \\langle Gx, y \\rangle\n  \\end{align*}\n  To get the tail estimate for this maximum of random variables, we can find a tail estimate for each $x$ and $y$, and then use the union bound.\n  \\begin{align*}\n    \\P\\left( \\langle Gx, y \\rangle > s \\right)\n    &= \\P \\left( \\sum_{i=1}^k  \\sum_{j=1}^n g_{ij} x_i y_j > s\\right)\n  \\end{align*}\n  Observe that $\\sum g_{ij} x_i y_j$ is distributed like $\\mathcal{N}(0, \\norm{x}_2^2 \\norm{y}_2^2)$.\n  Since $x$ and $y$ are in the unit ball, the product of their norms is less than $1$.\n  We therefore get that each individual tail probability is at most $\\exp\\left( - \\frac{s^2}{2} \\right)$.\n  Taking a union bound over the at most $6^{n+k}$ pairs, and choosing the constant $C$ large enough, then gives the result.\n\\end{proof}\n\\begin{remark}\n  This result also works for matrices with centered subgaussian entries.\n\\end{remark}\n\n
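To extract the expectation bound stated earlier from this tail bound (our addition), integrate the tail: for a non-negative random variable $Z$ and any $a > 0$ we have $\\E Z \\leq a + \\int_0^{\\infty} \\P(Z > a + t) \\dd t$, so\n\\begin{align*}\n  \\E \\norm{G} \\leq C(\\sqrt{n} + \\sqrt{k}) + \\int_0^{\\infty} 2 \\exp\\left( - \\frac{t^2}{4} \\right) \\dd t\n  = C(\\sqrt{n} + \\sqrt{k}) + 2\\sqrt{\\pi}\n\\end{align*}\nwhich is at most $C^{\\prime}(\\sqrt{n} + \\sqrt{k})$ for an absolute constant $C^{\\prime}$.\n\n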
\\paragraph{Milman-Dvoretzky theorem}\n\nWe now consider another application of the Gaussian concentration inequality: the Milman-Dvoretzky theorem, which is a key ingredient of the Dvoretzky theorem.\n\\begin{theorem}[Dvoretzky]\n  \\label{thm:dvo}\n  Let $Y = \\left( \\R^n, \\norm{\\cdot}_Y \\right)$ be a normed space.\n  For any $\\varepsilon > 0$, there exists a Euclidean norm $\\norm{\\cdot}_\\varepsilon$ on $\\R^n$ and a linear subspace $E$ with $\\dim(E) \\geq c(\\varepsilon) \\cdot \\log(n)$ such that for all $x \\in E$, the Euclidean norm agrees with the ambient norm up to an error of $\\varepsilon$.\n  \\begin{align*}\n    (1-\\varepsilon) \\cdot \\norm{x}_\\varepsilon \\leq \\norm{x}_Y \\leq (1+\\varepsilon)\\cdot \\norm{x}_\\varepsilon\n  \\end{align*}\n\\end{theorem}\n\nBefore we present the proof, we will need to set up some notation.\nLet $b(Y)$ be $\\max_{x \\in S^{n-1}} \\norm{x}_Y$, where $Y$ is the normed space in theorem \\ref{thm:dvo}.\nLet $\\mathbf{g}$ be the standard Gaussian vector in $\\R^n$: we define the Dvoretzky dimension $\\mathrm{Dv}$ of $Y$ as follows.\n\\begin{align*}\n  \\mathrm{Dv}(Y) \\coloneqq \\left( \\frac{\\E \\norm{\\mathbf{g}}_Y}{b(Y)} \\right)^2\n\\end{align*}\n\nWe compute the Dvoretzky dimension for a few well known normed spaces.\n\\begin{example}\n  \\begin{enumerate}[(1)]\n  \\item $Y = \\ell_1^n$: In this case $b(Y) = \\sqrt{n}$, and $\\E\\norm{\\mathbf{g}}_Y = cn$ for a fixed constant $c$.\n    This means that the Dvoretzky dimension of $\\ell_1^n$ is $c^2n$, i.e. it is proportional to the real dimension.\n  \\item $Y = \\ell_2^n$: In this case $b(Y) = 1$, and $\\E\\norm{\\mathbf{g}}_Y = \\sqrt{n}(1 - o(1))$, and thus the Dvoretzky dimension is $(1-o(1))n$.\n  \\item $Y = \\ell_{\\infty}^n$: In this case $b(Y) = 1$, and $\\E\\norm{\\mathbf{g}}_Y$ is bounded above and below by $\\sqrt{\\log(n)}$ times appropriate constants.\n    Thus the Dvoretzky dimension is of order $\\log(n)$.\n  \\end{enumerate}\n\\end{example}\n\\begin{fact}\n  The Dvoretzky dimension cannot exceed the actual dimension of the normed space.\n\\end{fact}\nWe can now state the Milman-Dvoretzky theorem.\n\\begin{theorem}\n  \\label{thm:mil-dvo}\n  Let $Y = \\left( \\R^n, \\norm{\\cdot}_Y \\right)$ be a normed space.\n  Let $\\varepsilon > 0$ and let $E \\subset \\R^n$ be a random linear subspace of dimension $k = \\lfloor c(\\varepsilon) \\mathrm{Dv}(Y) \\rfloor$, where $c(\\varepsilon)$ is a function of $\\varepsilon$.\n  Then with probability $1 - \\exp\\left( -C \\varepsilon^2 k \\right)$, the following inequality holds for all $x$ in $E$.\n  \\begin{align*}\n    (1-\\varepsilon)M(Y)\\norm{x}_2 \\leq \\norm{x}_Y \\leq (1+\\varepsilon)M(Y)\\norm{x}_2\n  \\end{align*}\n  Here, $M(Y)$ only depends on $Y$.\n\\end{theorem}\n\n\\begin{example}\n  Let $Y = \\ell_1^n$: in this case, we can let $k = \\frac{c \\varepsilon^2}{\\log\\left( \\frac{1}{\\varepsilon} \\right)} \\cdot n$.\n  This may seem counter-intuitive, since random sections of polytopes in $\\R^n$ end up looking very close to ellipsoids.\n\\end{example}\n\nThe notes end here, but the course covered a few more measure concentration results, and then moved on to the study of random processes, and in particular, subgaussian processes.\n\n\\printbibliography\n\n\\end{document}", "meta": {"hexsha": "ec908fa279bff3bb8a287221a43ee8e520279cb4", "size": 127730, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "main.tex", "max_stars_repo_name": "sayantangkhan/hdp-course-notes", "max_stars_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "main.tex", "max_issues_repo_name": "sayantangkhan/hdp-course-notes", "max_issues_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "main.tex", "max_forks_repo_name": "sayantangkhan/hdp-course-notes", "max_forks_repo_head_hexsha": "6427c68c2b920bb32dcfd44ec7b2099224cf3114", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.3073170732, "max_line_length": 594, "alphanum_fraction": 0.6611211149, "num_tokens": 43727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.7905303186696747, "lm_q1q2_score": 0.552781830219834}}
{"text": "\\section{Describing  and Measuring Electricity}\n\nThe concepts surrounding electricity and its behavior in a circuit are rich and complex; fortunately, we can limit what we need to understand to a handful of core terms and ideas. A full study of electric charge and electromagnetism is beyond the scope of this book and certainly well beyond what you will need in order to understand the circuits presented and to design circuits of your own. We will focus first on the three key terms we need - \\textit{electro-motive force}, also known as emf and measured in \\textbf{volts}, \\textit{electric current}, a measure of quantity represented as \\textbf{amps}, and \\textit{impedance}, the resistance of an electric circuit as measured in \\textbf{ohms}.\n\nThese terms - \\textbf{volts}, \\textbf{amps}, and \\textbf{ohms} - are likely familiar, and volts in particular are used frequently and colloquially. They are also often used \\textit{incorrectly} (again, volts in particular, which is often incorrectly used as a measure of quantity, such as \"Be careful around that outlet, 110 volts is a lot of electricity!\"). We'll define the basic terms here, then move on to some examples:\n\n\\begin{itemize}\n\\item \\textbf{Electro-Motive Force} - think of electro-motive force, or emf, as a measure of strength (technically it is a measure of \\textit{potential}). When we compare sources of electricity, such as different batteries or an electrical outlet in our home, one of the things we are chiefly interested in is how much work or force can be done by the electricity coming from that source. We measure this force or potential in volts, which can be positive or negative, and we work with volts as a relative measure.\n\\item \\textbf{Electric Current} - think of this item as a measure of quantity, as it is a measure of the number of electrons that move through a point in our circuit over a given time. Our unit here is the amp. While an amp represents a specific quantity of electrons (known as a \\textit{joule}), that specific quantity is not really important to us. We'll be using amps to understand the  requirements and limits of the components we use and the circuits we build. \n\\item \\textbf{Impedance} - impedance, or resistance\\footnote{This is one of these areas where we are simplifying concepts. Technically, the terms \\textit{resistance} and \\textit{impedance} represent two different things, with one being a more general case of the other. For our purposes, we can treat them as synonyms.}, tells us how much effort or force is required to be overcome in order for a circuit we build or a component we use to perform it's work. You can almost think of impedance as a measure of difficulty. Impedance is measured in ohms and is also referred to as \\textit{load}.\n\\end{itemize}\n\nThese are the three basic ideas that we will leverage throughout all of our work, and are three concepts that balance each other out. We construct a circuit that presents a load, and provide that circuit with a source of energy whose electro-motive force is sufficient to \"overcome\" that load. By overcoming and powering that load, we cause electricity to move through the circuit; we can measure the amount of electricity moving through the circuit by measuring our current in amps.\n\nWhen referring to the above three terms in formulas, we use the letter \\textit{I} to represent current (amps), the letter \\textit{R} to represent resistance (ohms), and \\textit{V} to represent volts or electro-motive force. 
The relationship between these three measures is beautifully elegant, and expressed in something known as Ohm's Law:\n\n\\begin{equation}\nI = \\frac{V}{R}\n\\end{equation}\n\nThis law or equation states that current is equal to force divided by resistance; put another way, the current in a circuit is equal to the force of the power source divided by the resistance of the load we are powering. If \\textit{V = 1} and \\textit{R = 1} - meaning that we have a circuit that presents 1 ohm of resistance and a battery that can produce 1 volt of electro-motive force - then we can plug in to our equation:\n\n\\begin{equation}\nI = \\frac{1}{1}\n\\end{equation}\n\nand, in solving for I, see that 1 / 1 = 1, or 1 amp. So in a circuit with a resistance of 1 ohm, powered by a battery that can produce 1 volt of force, we succeed in moving 1 amp through that circuit. \n\nHang tight, we will get to some practical examples of this soon. First, let's try making some changes. Let's assume that our battery, instead of producing 1 volt, can actually produce 9 volts. Also, for brevity, we will switch here to using V for volts, A for amps, and $\\Omega$ for ohms.\n\n\\begin{equation}\nI = \\frac{9}{1}\n\\end{equation}\n\nDividing 9 by 1 shows that our current \\textit{I} is now 9A. By increasing the potential of the battery but keeping the needs of the circuit the same, we are causing more electricity to flow through the circuit. Let's say that we add some more components to our circuit, and the load goes up:\n\n\\begin{equation}\nI = \\frac{9}{2}\n\\end{equation}\n\nBy doubling the load, we cut the amount of current \\textit{I} in half to 4.5A. \n\nHopefully this is fairly clear - for any given circuit or load, the more potential created by the battery the more current will flow. This is why the potential or force of our power source will become critical - we need to ensure that we are matching the potential we apply to a load with what that load can tolerate. Let's try a simple example with a battery and an LED\\footnote{An LED, or \\textit{light-emitting diode}. In this example, we are referring to a simple LED light, not an LED display or anything complicated like that.}:\n\n\\begin{itemize}\n\\item We have a battery that can produce 9V of potential.\n\\item We have an LED that presents 50$\\Omega$ of resistance.\n\\end{itemize}\n\n\\begin{equation}\nI = \\frac{9}{50}\n\\end{equation}\n\nSolving for \\textit{I}, we see we will be passing 0.18A of current. However, if we were to look at the datasheet for our LED\\footnote{Pretty much every electronic component we will work with will have a datasheet available that describes its operation and its requirements. They are generally freely available from the seller and/or the manufacturer.} we would see that 0.18A of current is \\textit{significantly more} than the LED can handle. This much electricity moving through the LED will literally burn it up! In our example, our datasheet tells us that the LED has a \\textit{maximum current} of 0.02A. \n\nWe can't change the resistance of the LED - that's fixed. We know the LED has a maximum current rating that it can tolerate of 0.02A. It also has a minimum current rating, under which no damage will occur but the LED won't light, so we can't just arbitrarily reduce the current or we may have a useless circuit. But somehow, we have to limit the amount of current to 0.02A so we can get the LED to light up, not fry.\n\n
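To put a number on the mismatch, using the values assumed in this example, we can compare the computed current against the LED's maximum rating:\n\n\\begin{equation}\n\\frac{0.18}{0.02} = 9\n\\end{equation}\n\nOur circuit would push nine times more current through the LED than it is rated to handle.\n\n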
This brings us to alternate forms of Ohm's law. We can rearrange the formula two different ways, depending on what we want to solve for. If we needed to calculate resistance, we could use this form:\n\n\\begin{equation}\nR = \\frac{V}{I}\n\\end{equation}\n\nIf we know the potential and we already know the amount of current, we can determine what the resistance of our circuit must be. In this example, we know what the resistance is, and we know how much current we can allow, so we need to calculate the proper battery to use:\n\n\\begin{equation}\nV = IR\n\\end{equation}\n\nIf we multiply current by resistance, we will get volts. Plugging in the values from our example:\n\n\\begin{equation}\nV = (0.02)(50)\n\\end{equation}\n\nand solving for V, we get 1V. That's it - in our fictional\\footnote{The values for this example were chosen to make nice round numbers, but in reality aren't far off - a typical red LED expects around 0.02A of current and provides somewhere between 40-50$\\Omega$ of resistance.} example, we need to find a 1V battery to properly power our LED.\n", "meta": {"hexsha": "4c59c46d2a6f87de4991200fa8bfa54c58ef929e", "size": 7906, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "book/describing_electricity.tex", "max_stars_repo_name": "cayugadata-robsnyder/retrocom", "max_stars_repo_head_hexsha": "51e0ca820f86f016dc9b49e5dfd7952b120749e9", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/describing_electricity.tex", "max_issues_repo_name": "cayugadata-robsnyder/retrocom", "max_issues_repo_head_hexsha": "51e0ca820f86f016dc9b49e5dfd7952b120749e9", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/describing_electricity.tex", "max_forks_repo_name": "cayugadata-robsnyder/retrocom", "max_forks_repo_head_hexsha": "51e0ca820f86f016dc9b49e5dfd7952b120749e9", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 102.6753246753, "max_line_length": 697, "alphanum_fraction": 0.7749810271, "num_tokens": 1933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696748, "lm_q2_score": 0.6992544210587585, "lm_q1q2_score": 0.5527818203107593}}
{"text": "\\documentclass{article}\n\n\\title{A \\LaTeX Workshop \\\\ for All Skill Levels}\n\\author{D. Zack G.}\n\\date{\\today}\n\n\\usepackage{hyperref}\n\\usepackage{amsmath, amssymb, amsthm}\n\\usepackage{graphicx}\n\\usepackage{parskip}\n\\usepackage{float}\n\n\\newcommand{\\RR}[1]{\\mathbb{R}^{#1}}\n\n\\begin{document}\n\n\\maketitle\n\\newpage\n\\tableofcontents\n\\newpage\n\n\\section{Introduction}\n    Document content goes here!\n    \n    We can't just say y = mx + b.\n    \n    Use inline math with $y = mx + b$.\n    \n    Equation math is $$y = mx + b$$\n\n\\section{Content}\n\t\\subsection{Mathematics}\n        \\subsubsection{Inline Math}\n        \tExample: $f(x)=ax^2 + c_1$\n        \\subsubsection{Block-Level Math}\n        \tExample: $$ \\sum_{i=0}^\\infty i = -\\frac{1}{12} $$\n        \t\n\\section{More Content}\n    More unrelated text\n\t\t\n    $\\sum_{i=0}^\\infty i = -\\frac{1}{12}$\n    \n    The function $f: \\RR{n} \\rightarrow \\RR{n}$ is \\textit{continuous} if\n    $$\\lim_{n\\rightarrow\\infty} x_n = x \\Rightarrow \\lim_{n\\rightarrow\\infty} f(x_n) = f(x)$$\n    \n$$\nM=\n  \\left[ {\n      \\begin{array}{ccc}\n       1 & \\alpha_1 & \\alpha_1^2 \\\\\n       1 & \\alpha_2 & \\alpha_2^2 \\\\\n       1 & \\alpha_3 & \\alpha_3^2 \\\\\n      \\end{array}\n  } \\right]\n$$\n    \n$\\Box, \\blacksquare$\n\n\n\\section{Even More Content}\n\n\\begin{figure}[H]\n    \\centering\n    \\includegraphics[width=4cm]{homotopy}\n    \\label{fig:myname}\n    \\caption{Delicious!}\n\\end{figure}    \n\n\\begin{equation} \\label{eu_eqn}\n    e^{\\pi i} + 1 = 0\n\\end{equation}\n\nSee Equation~\\ref{eu_eqn} \n\n\\begin{equation*}\n\t\\boxed{e^{\\pi i} + 1 = 0}\n\\end{equation*}\n\n\\end{document}", "meta": {"hexsha": "01145d96833853d5b1058afa89d8dd1834477fd7", "size": 1568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Completed Live Code Document/main.tex", "max_stars_repo_name": "ckearney07/latex-sp17-intro-workshop", "max_stars_repo_head_hexsha": "09dbdd348190f91ce653dc9a25dea2c836900d6f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Completed Live Code Document/main.tex", "max_issues_repo_name": "ckearney07/latex-sp17-intro-workshop", "max_issues_repo_head_hexsha": "09dbdd348190f91ce653dc9a25dea2c836900d6f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Completed Live Code Document/main.tex", "max_forks_repo_name": "ckearney07/latex-sp17-intro-workshop", "max_forks_repo_head_hexsha": "09dbdd348190f91ce653dc9a25dea2c836900d6f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.8481012658, "max_line_length": 93, "alphanum_fraction": 0.6033163265, "num_tokens": 564, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544210587585, "lm_q2_score": 0.7905303112671294, "lm_q1q2_score": 0.5527818151344966}}
{"text": "\\documentclass[]{article}\n\n\\usepackage{natbib}\n\\usepackage{subfiles}\n\\usepackage{multicol}\n%%%%% AUTHORS - PLACE YOUR OWN PACKAGES HERE %%%%%\n\n% Only include extra packages if you really need them. Common packages are:\n\\usepackage{graphicx}\t% Including figure files\n\\usepackage{amsmath}\t% Advanced maths commands\n\\usepackage{amssymb}\t% Extra maths symbols\n\\usepackage{subcaption}\n\\usepackage{float}\n\\usepackage{color}\n\\bibliographystyle{apj}\n\n\\newcommand{\\sukhdeep}[1]{{\\textcolor{magenta}{Sukhdeep: #1}}}\n\n\\begin{document}\n\n\\section{Correlation Function}\n\tWe begin by writing the angular space observable, $X$, in terms of the harmonic counterpart \n\t\\begin{align}\\label{eq:X_harmonic}\n\t\tX(\\Omega)=&\\sum_{lm}\\tilde X_{lm}Y_{lm}(\\Omega)\n\t\\end{align}\n\twhere $\\Omega$ refers to the angular coordinates on the sky.\n\tThe angular cross correlation function of two (scalar) tracers, $X,Z$ of large scale structure can be written in terms \n\tof their harmonic space counter parts, $\\tilde X, \\tilde Z$ as\n\t\\begin{align}\n\t\t\\langle XZ \\rangle(\\theta)=&\\left\\langle\\sum_{\\ell,m}\\sum_{\\ell', m'}\\tilde X_{\\ell m}\\tilde Z_{\\ell' m'}\n\t\t\t\t\t\t\t\t\tY_{\\ell m}(\\Omega)\n\t\t\t\t\t\t\t\t\tY_{\\ell'm'}(\\Omega+\\theta)\\right\\rangle\\\\\n\t\t\t\t\t\t\t\t\t=&\\sum_{\\ell,m}C_{\\ell}Y_{\\ell m}(\\Omega)\n\t\t\t\t\t\t\t\t\tY_{\\ell m}(\\Omega+\\theta)\\\\\n\t\t\\langle XZ \\rangle(\\theta)=&\\frac{1}{4\\pi}\\sum_{\\ell}(2\\ell+1)C_{\\ell}P_{\\ell}(\\cos\\theta)\\label{eq:xi_pl0}\n\t\\end{align}\n\tWe used the identities\n\t\\begin{align}\n\t\t\\langle\\tilde X_{\\ell m}\\tilde Z_{\\ell' m'}\\rangle=&C_{\\ell}\\delta_D(m,m')\\delta_D(\\ell,\\ell')\\\\\n\t\t\\sum_{m=-\\ell}^{m=\\ell}Y_{\\ell m}(\\Omega)Y_{\\ell m}(\\Omega+\\theta)=&\\frac{2\\ell+1}{4\\pi}\\label{eq:Ylm_Pl}\n\t\\end{align}\n\t\n\tFor the case of shear, since it is a spin-2 object, eq.\\ref{eq:X_harmonic} is written in terms of spin harmonics \n\t\\citep[see for ex.][]{Castro2005,Kilbinger2017}. Rest of the analysis proceeds similarly, using the relation for spin \n\tharmonics, \n\tanalogous to eq.~\\ref{eq:Ylm_Pl} \\citep[see for ex. ][]{Hu1997}. \n\t\n\tExpression of $\\xi_+$ is same as eq.~\\ref{eq:xi_pl0}. Expressions for galaxy lensing cross correlation \n\t\\cite{Putter2010} and $\\xi_-$ is given by\n\t\\begin{align}\n\t\t\\langle g\\gamma_T\\rangle(\\theta)&=\\frac{1}{4\\pi}\\sum_{\\ell}\\frac{(2\\ell+1)}{\\ell(\\ell+1)}C_{\\ell}^{g\\kappa}\n\t\tP_{\\ell}^2(\\cos\\theta)\\label{eq:xi_g_gamma}\\\\\n\t\t\\xi_+(\\theta)&=\\frac{1}{4\\pi}\\sum_{\\ell}{(2\\ell+1)}C_{\\ell}^{\\kappa\\kappa}\n\t\tP_{\\ell}(\\cos\\theta)\\label{eq:xi_p}\\\\\n\t\t\\xi_-(\\theta)&=\\frac{1}{4\\pi}\\sum_{\\ell}\\frac{(\\ell-4)!}{(\\ell+4)!}\\ell^4{(2\\ell+1)}C_{\\ell}^{\\kappa\\kappa}\n\t\tP_{\\ell}^4(\\cos\\theta)\\label{eq:xi_m}\n\t\\end{align}\n\\sukhdeep{$\\xi_-$ is a bit of cheating. I'm not familiar with spin harmonics, so I simply took the relation between \n$P_\\ell^m$ and $J_m(\\ell \\theta)$ to get this expression from the Hankel transform for $\\xi_-$. It may not be very accurate at large scale (low $\\ell$) as is evident from $g\\gamma_T$ expression. I can sort this out later. If somebody knows correct expression already, please feel free to put it in.}\n\n\tThe ccl\\_tracer\\_corr\\_legendre routine computes these transform to convert $C_\\ell$ to correlation functions. 
\tThe ccl\\_tracer\\_corr\\_legendre routine computes this transform to convert $C_\\ell$ to correlation functions. We\n\tuse the associated Legendre function implementation from the GSL library.\n\tEvaluations in the ccl\\_tracer\\_corr\\_legendre routine can be slow, especially for $P_\\ell^m$ with $m>0$. Note that \n\t$P_\\ell^m$ evaluations need to be done only once and can then be saved as long as the $\\ell,\\theta$ values do not change.\n\tThis is not yet implemented, but will be done soon.\n\t\t\n\t\\subsection{Hankel Transform}\n\t\tThe expressions in eqs.~\\ref{eq:xi_g_gamma}--\\ref{eq:xi_m} can be written as Hankel transforms using the relation \n\t\tbetween $P_{\\ell}^m$ and the Bessel functions $J_m$, valid in the large-$\\ell$, small-$\\theta$ limit\n\t\t\\begin{align}\n\t\t\tP_{\\ell}^m(\\cos\\theta)\\approx(-1)^m\\frac{(\\ell+m)!}{(\\ell-m)!}\\ell^{-m}J_m(\\ell\\theta)\n\t\t\\end{align}\n\n\t\tWe get the following analogous expressions (flat-sky limit)\n\t\t\\begin{align}\n\t\t\t\\langle g\\gamma_T\\rangle(\\theta)&=\\frac{1}{2\\pi}\\int d\\ell\\ell C_\\ell J_2(\\ell\\theta)\\\\\n\t\t\t\\xi_+(\\theta)&=\\frac{1}{2\\pi}\\int d\\ell\\ell C_\\ell J_0(\\ell\\theta)\\\\\n\t\t\t\\xi_-(\\theta)&=\\frac{1}{2\\pi}\\int d\\ell\\ell C_\\ell J_4(\\ell\\theta)\n\t\t\\end{align}\n\t\tTo evaluate the Hankel transform, we use the fast FFTlog routine \\citep{Hamilton2000,Talman2009}. In brief, FFTlog \n\t\tworks on functions periodic in log space, by writing the Hankel transform as a convolution between the Bessel function \n\t\tand the function of interest (in this case $C_\\ell$). The convolution can \n\t\tthen be evaluated using Fourier transforms, with the Fourier transform of the Bessel function evaluated analytically, \n\t\twhile the Fourier transform of $C_\\ell$ and the inverse Fourier transform of the product are \n\t\tevaluated using fast Fourier transform routines. \n\t\\bibliography{correlation}\n\\end{document}", "meta": {"hexsha": "6536761e201c7a311c69573e2f0b418306781d90", "size": 4726, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/0000-ccl_note/correlation/correlation.tex", "max_stars_repo_name": "Jappenn/CCL", "max_stars_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 91, "max_stars_repo_stars_event_min_datetime": "2017-07-14T02:45:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T08:55:54.000Z", "max_issues_repo_path": "doc/0000-ccl_note/correlation/correlation.tex", "max_issues_repo_name": "Jappenn/CCL", "max_issues_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 703, "max_issues_repo_issues_event_min_datetime": "2017-07-07T16:27:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T14:40:10.000Z", "max_forks_repo_path": "doc/0000-ccl_note/correlation/correlation.tex", "max_forks_repo_name": "Jappenn/CCL", "max_forks_repo_head_hexsha": "a37cad61f060f3928fa5d47b1e2670db3e9bce6f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 54, "max_forks_repo_forks_event_min_datetime": "2017-07-12T13:08:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-06T13:12:10.000Z", "avg_line_length": 54.3218390805, "max_line_length": 299, "alphanum_fraction": 0.7054591621, "num_tokens": 1602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8438951143326726, "lm_q2_score": 0.6548947357776795, "lm_q1q2_score": 0.5526624679249703}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage[\\graphtype]{mfpic}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\opengraphsfile{pl02-18}\n\\begmath 2.18 Digamma or $\\psi $ Function\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThe procedures described here compute the digamma or $\\psi $ function\ndefined by $\\psi (z) = d[\\ln \\Gamma (z)]/dz = \\Gamma ^{\\prime}(z)/\\Gamma (z)$%\n. Additional procedures, not necessary for simplest usage, are provided to\nspecify unusual options or retrieve error estimates.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[REAL]  \\ {\\bf U , X, SPSI}\n\n\\item[EXTERNAL]  \\ {\\bf SPSI}\n\\end{description}\n\nAssign a value $x$ to X, and obtain U $= \\psi (x)$ by using\n$$\n\\fbox{{\\bf U = SPSI(X)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] The argument of the function, $x$ above.\n\\end{description}\n\n\\subsubsection{Program Prototype, Single Precision, Specify Unusual Options}\n\n\\begin{description}\n\\item[REAL]  \\ {\\bf TOL, XERR}\n\n\\item[INTEGER]  \\ {\\bf MSGOFF}\n\\end{description}\n\nAssign values to TOL, XERR and MSGOFF, and specify options for SPSI by using\n$$\n\\fbox{{\\bf CALL SPSIK (TOL, XERR, MSGOFF)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[TOL]  \\ [in] Relative error tolerance for $\\psi (x)$. When positive,\nit indicates relative error tolerance. When negative or zero, it indicates\nthe default, equal to the square root of the round-off level, $\\sqrt{\\rho }$%\n, should be used. $\\rho $ is the smallest number such that the floating\npoint representation of $1.0+\\rho \\neq 1.0$. TOL only affects the threshold\nfor producing error messages; it does not affect the accuracy of computation.\n\n\\item[XERR]  \\ [in] If non-negative, XERR provides the estimated relative\nerror in X. If negative, XERR indicates the default error estimate for X,\nthe round-off level, $\\rho $, should be used.\n\n\\item[MSGOFF]  \\ [in] MSGOFF is added onto the error message level before an\nerror message is produced by using the error message processor described in\nChapter~19.2.\n\\end{description}\n\nIf SPSIK is not called, the effect is as though CALL SPSIK~(0.0, $-$1.0,~0)\nhad been executed.\n\\vspace{10pt}\n\n\\hspace{5pt}\\mbox{\\input pl02-18 }\n\n\\subsubsection{Program Prototype, Single Precision, Determine Error Estimate}\n\n\\begin{description}\n\\item[REAL]  \\ {\\bf ERR, IERFLG}\n\\end{description}\n\nRetrieve the relative error committed by the last call to SPSI, and the\ninternal error indicator, by using\n$$\n\\fbox{{\\bf CALL SPSIE (ERR,IERFLG)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[ERR]  \\ [out] reports the relative error committed by the last call to\nSPSI. If SPSI has not been called, ERR is returned with the value $-$1.\n\n\\item[IERFLG]  \\ [out] reports the value of the internal error flag. If SPSI\nhas not been called, or if computation on the last call to SPSI was\ncompletely successful, IERFLG is returned with the value zero. See Section\nE below for discussion of non-zero values of IERFLG.\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nChange the REAL statements to DOUBLE PRECISION and change the first letter\nof the procedure names from S to D. 
\\subsection{Examples and Remarks}\n\nSee DRDPSI and ODDPSI for an example of the usage of DPSIK, DPSI and\nDPSIE.\n\nFor $x > 0$, $\\psi (x)$ is a monotone increasing function of $x$. $\\psi\n(x)$ has singularities for $x = 0$ or $x$ a negative integer. See\n\\cite{ams55} for a discussion of other properties of $\\psi (x).$\n\n\\subsection{Functional Description}\n\nThe computational methods used in SPSI were developed by L. Wayne Fullerton\nof Los Alamos National Scientific Laboratory. The methods include rational\nChebyshev expansions, recurrences, and a reflection formula.\n\nWhen X $>$ 0, SPSI estimates that the relative error in $\\psi ($X) is the\nround-off level.  Otherwise, SPSI computes an estimate for the\nrelative error in $\\psi ($X$) = ($relative error in X$) \\times |\\text{X}|\n\\times |d\\psi /d\\text{X}|\\ /\\ |\\psi (\\text{X})|$, with $d\\psi /d\\text{X}$\napproximated by $\\pi ^2 \\cot ^2 \\pi \\text{X}$.  When X $<$ 0 and X is\napproximately an integer, or $\\psi ($X) is approximately zero, the\nerror will be large. When X $\\ll$ 0 and $2\\text{X}$ is not approximately an\ninteger, SPSI may underestimate the error it committed.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nIf SPSI estimates the specified error tolerance is not satisfied (because $x$\nis too near a negative integer), $\\psi (x)$ is computed, but an error\nmessage is issued by using the error message processor described in Chapter\n19.2, with LEVEL = 1 + MSGOFF, where MSGOFF is zero unless specified by a\ncall to SPSIK at some time before calling SPSI. The IERFLG output from SPSIE\nwill be $-$3.\n\nIf SPSI is called with X zero or a negative integer, $\\psi (x)$ is not\ndefined, and an error message is issued by using the error message processor\ndescribed in Chapter~19.2, with LEVEL = 2 + MSGOFF. If error termination is\nsuppressed by calling SPSIK with a negative value of MSGOFF, or by calling\nthe ERMSET procedure described in Chapter~19.2, the function value will be\nzero, and the IERFLG output from SPSIE will be~1 if X is zero, or~2 if X is\na negative integer.\n\n\\subsection{Supporting Information}\n\nAll code is written in ANSI Standard Fortran~77.\n\nThe program units SPSI, SPSIB, SPSIE and SPSIK communicate by way of a\ncommon block /SPSIC/.\n
The program units DPSI, DPSIB, DPSIE and DPSIK\ncommunicate by way of a common block /DPSIC/.\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDPSI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCSEVL, DERM1, DERV1, DINITS, DPSI, ERFIN, ERMSG, IERM1,\nIERV1\\rule[-5pt]{0pt}{8pt}\\rule[-5pt]{0pt}{8pt}}\\\\\nDPSIE & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCSEVL, DERM1, DERV1, DINITS, DPSI, ERFIN, ERMSG, IERM1,\nIERV1\\rule[-5pt]{0pt}{8pt}\\rule[-5pt]{0pt}{8pt}}\\\\\nDPSIK & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCSEVL, DERM1, DERV1, DINITS, DPSI, ERFIN, ERMSG, IERM1,\nIERV1\\rule[-5pt]{0pt}{8pt}\\rule[-5pt]{0pt}{8pt}}\\\\\nSPSI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCSEVL, SERM1, SERV1,\nSINITS, SPSI\\rule[-5pt]{0pt}{8pt}\\rule[-5pt]{0pt}{8pt}}\\\\\nSPSIE & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCSEVL, SERM1, SERV1,\nSINITS, SPSI\\rule[-5pt]{0pt}{8pt}\\rule[-5pt]{0pt}{8pt}}\\\\\nSPSIK & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCSEVL, SERM1, SERV1,\nSINITS, SPSI}\\\\\n\\end{tabular}\n\nDesigned and programmed by L. Wayne Fullerton, Los Alamos National\nScientific Laboratory, 1977. Revised and adapted to MATH77 by W. V. Snyder,\n1993, 1994.\n\n\n\\begcodenp\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRDPSI}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{dpsi}}\n\n\\vspace{30pt}\\centerline{\\bf \\large ODDPSI}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{dpsi}}\n\n\\closegraphsfile\n\\end{document}\n", "meta": {"hexsha": "04e242812a22ae4bf0ddacc8c4453eabcc5e94e6", "size": 7358, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch02-18.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch02-18.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch02-18.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 36.9748743719, "max_line_length": 98, "alphanum_fraction": 0.7387877141, "num_tokens": 2347, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943925708562, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5526567495994434}}
{"text": "% Chapter X\n\n\\chapter{Learning model parameters from data} % Chapter title\n\n\\label{Learning model parameters from data} % For referencing the chapter elsewhere, use \n\nFigaro provides support for learning model parameters from data. In this section, a special type of compound element will be presented which allows a distribution to be learned from observed evidence. Details are given about the algorithm Figaro provides for learning parameters. Lastly, an example using parameters and learning algorithms to determine the fairness of a set of die rolls is presented.\n\n\\section{Parameters and parameterized elements}\n\nThis section discusses elements which are learnable parameters. For clarity, a distinction should be made on the meaning of the word \\emph{parameter} in this context. This is different from a method parameter or Scala type parameter. In this section, we use \\emph{parameter} to refer to a Figaro element which can be learned from data. There are currently two such types of parameters in Figaro - \\texttt{BetaParameter} and \\texttt{DirichletParameter}.\n\nA customary illustration of parameter learning is to consider the outcomes of a coin flip and determine whether or not the coin is fair. In the case of a \\texttt{Flip} element (which is a Bernoulli distribution), the conjugate prior distribution is a Beta distribution. If the coin is not fair, we would expect a prior distribution to have a higher value of alpha or beta (the shape variables of a Beta). First, we will create the conjugate prior distribution of a \\texttt{Flip}\n\n\\begin{flushleft}\n\\texttt{val fairness = BetaParameter(1,1)}\n\\end{flushleft}\n\nThe element \\texttt{fairness} is the parameter we will use to model the bias of our coin. The behavior of a \\texttt{BetaParameter} is similar to a normal \\texttt{Beta} element, with a few extra methods and properties. Most importantly, we later use it to create a model learned from parameterized elements. Creation of a parameterized element is accomplished in exactly the same way as creating a compound element.\n\n\\begin{flushleft}\n\\texttt{val f = Flip(fairness)}\n\\end{flushleft}\n\nThis element models a flip of a coin having the fairness specified by the beta parameter, using a value of \\texttt{true} to represent heads and \\texttt{false} to represent tails. We have actually created an instance of \\texttt{ParameterizedFlip}, which is a special type of compound element. A  \\texttt{ParameterizedFlip} is created simply by providing a \\texttt{BetaParameter} as the argument to \\texttt{Flip}.\n\nBy using a \\texttt{ParameterizedFlip}, the evidence we observe on f can be used to learn the value of \\texttt{fairness}. Thus, the next step is to provide the data observed from flips of the coin. Values can be observed just as with other elements, by using \\texttt{f.observe(true)} or \\texttt{f.observe(false)}. We could also use conditions or constraints.\n\n\\marginpar{This example is found in FairCoin.Scala}\nAs a more detailed example, suppose we have seen 24 heads and 62 tails. One way to represent this data is in a Scala sequence. 
Note that for readability, the sequence is truncated here.\n\n\\begin{flushleft}\n\\texttt{val data = Seq('H', 'H', 'H', 'T', 'H', 'H', 'T', 'H', ...}\n\\end{flushleft}\n\nThe following block of Scala code will iterate through each of the items in the sequence, create a Flip element using the parameter, and observe true or false based on the side of the coin:\n\n\\begin{flushleft}\n\\texttt{data.foreach \\{ d =>\n\\newline val f = Flip(fairness)\n\\newline \\tab if (d == 'H') \\{\n\\newline \\tab f.observe(true)\n\\newline \\} else if (d == 'T') \\{ \n\\newline \\tab f.observe(false)\n\\newline \\}\n\\newline \\}\n}\n\\end{flushleft}\n\nWe have created a parameter, created parameterized elements, and considered a set of data. Note that each time a parameterized flip is created, it uses the same \\texttt{BetaParameter}. It is now desirable to employ a learning mechanism to determine the fairness of the coin, and to create a new element corresponding to the learned value. This is possible by using a learning algorithm.\n\n\\section{Expectation maximization}\n\nA learning algorithm can be used to determine the values of parameterized elements. At the end of the process, a parameter targeted by the learning algorithm can provide a new element according to the observed data. Presently, Figaro provides one learning algorithm, expectation maximization, based on variable elimination to compute sufficient statistics. Recall that expectation maximization is an iterative algorithm consisting of an expectation step and a maximization step. During the expectation step, an estimate is produced for the sufficient statistics of the parameter. The estimates are then used in the maximization step to find the most likely value of the parameter. This continues for a set number of iterations and converges toward the true parameter value.\n\nFrom a practical standpoint, learning a parameter with expectation maximization is very simple. We need only provide the target parameter and, optionally, the number of iterations to the algorithm. The default number of iterations is 10. This can be done in the following way:\n\n\\begin{flushleft}\n\\texttt{val learningAlgorithm = ExpectationMaximization(fairness)\n\\newline learningAlgorithm.start\n\\newline learningAlgorithm.kill\n\\newline \n\\newline val coin = Flip(fairness.MAPValue)\n\\newline println(\"The probability of a coin with this fairness showing\n'heads' is: \")\n\\newline println(coin.prob)\n}\n\\end{flushleft}\n\nAfter the algorithm has finished running, we can retrieve an element learned from the parameter by using \\texttt{Flip(fairness.MAPValue)}. The element \\texttt{coin} is a \\texttt{Flip}, where the probability of producing true is determined from the data we observed above. It is important to make a distinction between parameterized elements and learned elements. Parameterized elements are compound elements and serve as our means of supplying data about a parameter. A learned element is an atomic element with arguments learned from the data we have observed.\n\nAfter running the program, we see:\n\n\\begin{flushleft}\n\\texttt{The probability of a coin with this fairness showing 'heads' is:\n0.7159090909090909}\n\\end{flushleft}\n\nWe may want to explore the learned model further. 
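\n\nAs a quick sanity check, this number can be reproduced by hand if we assume (our reading of the output, not a documented guarantee) that the learned value is the expected value of the Beta posterior: starting from the uniform $Beta(1,1)$ prior and observing 62 heads in 86 flips gives\n\\[\n\\frac{62 + 1}{86 + 2} = \\frac{63}{88} \\approx 0.7159,\n\\]\nwhich matches the printed output.\n\n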
For instance, if we wanted to know the probability that two flips of this coin show the same side, we could use\n\n\\begin{flushleft}\n\\texttt{val t1 = Flip(fairness.MAPValue) \n\\newline val t2 = Flip(fairness.MAPValue) \n\\newline val equal = t1 === t2}\n\\end{flushleft}\n\nWe can then use variable elimination to determine the probability that the coins show the same side:\n\n\\begin{flushleft}\n\\texttt{val alg = VariableElimination(equal)\n\\newline alg.start()\n\\newline println(\"The probability of two coins which exhibit this fairness showing the same side is: \" + alg.probability(equal, \\textbf{true}))\n\\newline alg.kill()\n}\n\\end{flushleft}\n\nThis results in the following output:\n\n\\begin{flushleft}\n\\texttt{The probability of two coins which exhibit this fairness showing the same side is: 0.5932334710743803}\n\\end{flushleft}\n\n\\section{A second example}\n\nIn the previous sections, parameter learning was discussed using a Beta parameter. This section will explain the use of Dirichlet parameters. The Dirichlet distribution is a multidimensional generalization of the Beta with a variable number of concentration parameters or alpha values. These values correspond to the weight of each possible outcome in the posterior distribution. In a Dirichlet parameter with two dimensions, the alpha values might again correspond to the outcomes of heads and tails, or true and false. Using a higher number of dimensions, we can model a number of different categories or outcomes.\n\nSuppose we are given a set of data in which each record represents a roll of two dice out of three possible dice. The sum of the two dice is available, as well as which dice were selected for the roll. However, the individual outcome of each die is not available. Our task is to learn the fairness of each die.\n\nThe first step is to define the possible outcomes of a die roll. This is easily accomplished by using a Scala list:\n\n\\begin{flushleft}\n\\marginpar{This example is found in FairDice.Scala}\n\\texttt{val outcomes = List(1, 2, 3, 4, 5, 6)}\n\\end{flushleft}\n\nNext, we create a parameter representing the fairness of each die:\n\n\\begin{flushleft}\n\\texttt{val fairness1 = DirichletParameter(1,1,1,1,1,1) \n\\newline val fairness2 = DirichletParameter(1,1,1,1,1,1) \n\\newline val fairness3 = DirichletParameter(1,1,1,1,1,1) \n\\newline val parameters = Seq(fairness1, fairness2, fairness3)\n}\n\\end{flushleft}\n\nEach die is initially assumed to be fair. For convenience, we can place all three parameters in a Scala sequence named \\texttt{parameters}. An item in this sequence at index i can be accessed with \\texttt{parameters(i)}. This sequence will help us to concisely observe the data from which the parameters are learned. We can represent the data in another sequence:\n\n\\begin{flushleft}\n\\texttt{val data = Seq((2, 3, 8), (1, 3, 7), (1, 2, 3), (1, 2, 3), ...}\n\\end{flushleft}\n\n\\texttt{data} is a sequence of 50 Scala tuples. The first two values in each tuple indicate which two dice were chosen for the roll. 
The third value is the sum of the two dice.\n\nTo model the outcome of the sum, we can use an \\texttt{Apply} element with a function which sums the outcomes of its arguments.\n\n\\begin{flushleft}\n\\texttt{def trial(p1: AtomicDirichlet, p2: AtomicDirichlet, result: Int) = \\{\n\\newline \\tab val sum = (i1: Int, i2: Int) => i1 + i2 \n\\newline \\tab val die1 = Select(p1, outcomes: \\_*)\n\\newline \\tab val die2 = Select(p2, outcomes: \\_*)\n\\newline \\tab val t = Apply(die1, die2, sum)\n\\newline \\tab t.observe(result)\n\\newline \\}\n}\n\\end{flushleft}\n\nThe code section above defines a Scala function which accepts two Dirichlet parameters and an integer value. \\texttt{val sum = (i1: Int, i2: Int) => i1 + i2} defines a Scala function which accepts two integer values and returns their sum. Next, two Select elements are created and parameterized by the input parameters. Note that the arguments to \\texttt{Select} are different from what has been presented previously. Instead of directly enumerating each probability and outcome, we specify a Dirichlet parameter and the list of possible outcomes. The last two lines of \\texttt{trial} apply the sum function to the dice and observe the result. By calling the \\texttt{trial} function for each tuple in the sequence, we can create a model learned from the data.\n\n\\begin{flushleft}\n\\texttt{data.foreach \\{ d =>\n\\newline if (d.\\_1 == 1 \\&\\& d.\\_2 == 2) \\{\n\\newline \\tab trial(parameters(0), parameters(1), d.\\_3)\n\\newline \\tab \\} else if (d.\\_1 == 1 \\&\\& d.\\_2 == 3) \\{\n\\newline \\tab trial(parameters(0), parameters(2), d.\\_3)\n\\newline \\} else \\{\n\\newline \\tab trial(parameters(1), parameters(2), d.\\_3)\n\\newline \\}\n\\newline \\}\n}\n\\end{flushleft}\n\nJust as in the fair coin example, we create an expectation maximization algorithm and use the list of parameters as input. Additionally, this time we have chosen to raise the number of iterations to 100.\n\n\\begin{flushleft}\n\\texttt{val numberOfIterations = 100\n\\newline val algorithm = ExpectationMaximization(numberOfIterations, parameters: \\_*)\n\\newline algorithm.start\n\\newline \n\\newline val die1 = Select(fairness1.MAPValue.view(0,fairness1.MAPValue.size).toList, outcomes)  \n\\newline val die2 = Select(fairness2.MAPValue.view(0,fairness2.MAPValue.size).toList, outcomes) \n\\newline val die3 = Select(fairness3.MAPValue.view(0,fairness3.MAPValue.size).toList, outcomes) \n}\n\\end{flushleft}\n\nThe code block above will retrieve learned elements from the parameters. Note that for a Select, a list of outcomes must be supplied as an argument along with the corresponding probabilities, which is the array returned by \\texttt{MAPValue}. This is because the number of concentration parameters may vary, and the type of the outcomes is not fixed. Running this code results in the following output, in which we see the model has estimated the probabilities of each value for each die. 
If one examines the full data declaration in the example code, it is quite easy to see that there are only three observed values of the sum of the dice (3, 7, and 8), so the learning algorithm has correctly inferred that the most likely values of the dice are 1, 2, and 6, respectively.\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_1 are:\n\\newline \\tab 0.906250000442371 -> 1\n\\newline \\tab 0.0 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.09374999955762903 -> 5\n\\newline \\tab 0.0 -> 6\n}\n\\end{flushleft}\n\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_2 are:\n\\newline \\tab 0.0 -> 1\n\\newline \\tab 0.9999999996067813 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.0 -> 5\n\\newline \\tab 3.9321864899990694E-10 -> 6\n}\n\\end{flushleft}\n\n\n\\begin{flushleft}\n\\texttt{The probabilities of seeing each side of d\\_3 are:\n\\newline \\tab 0.0 -> 1\n\\newline \\tab 0.0 -> 2\n\\newline \\tab 0.0 -> 3\n\\newline \\tab 0.0 -> 4\n\\newline \\tab 0.0 -> 5\n\\newline \\tab 1.0 -> 6\n}\n\\end{flushleft}\n", "meta": {"hexsha": "0fb8d3d8346b0efbacdea446aa358dc3513c93aa", "size": 13155, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_stars_repo_name": "wkretschmer/figaro", "max_stars_repo_head_hexsha": "ab45d86d7f2b23c77d242b15396f0f704d40570c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_issues_repo_name": "wkretschmer/figaro", "max_issues_repo_head_hexsha": "ab45d86d7f2b23c77d242b15396f0f704d40570c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FigaroLaTeX/Tutorial/Sections/8LearningParams.tex", "max_forks_repo_name": "wkretschmer/figaro", "max_forks_repo_head_hexsha": "ab45d86d7f2b23c77d242b15396f0f704d40570c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.7276785714, "max_line_length": 777, "alphanum_fraction": 0.7710376283, "num_tokens": 3318, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802476562641, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.5526567487679184}}
{"text": "% declare document class and geometry\n\\documentclass[12pt]{article} % use larger type; default would be 10pt\n\\usepackage[english]{babel} % for hyphenation dictionary\n%\\setdefaultlanguage{english} % polyglossia command for use with XeTeX / LuaTeX\n\\usepackage[margin=1in]{geometry} % handle page geometry\n\n% import packages and commands\n\\input{../header2.tex}\n\n% title information\n\\title{Phys 221A -- Quantum Mechanics -- Lec18}\n\\author{UCLA, Fall 2014}\n\\date{\\formatdate{3}{12}{2014}} % Activate to display a given date or no date (if empty),\n         % otherwise the current date is printed \n         % format: formatdate{dd}{mm}{yyyy}\n\n\\begin{document}\n\\maketitle\n\n\n\\section{Spin-1/2 and Total Angular Momentum}\n\nThe total angular momentum $\\v J = \\v L + \\v S$ is the generator of rotation in the total/global Hilbert space $L_2 \\otimes (\\text{spin space})$, i.e. the total rotation\n\\begin{eqn}\nR_J (\\phi \\uv n) = e^{-i \\phi \\uv n \\cdot \\v J / \\hbar}\n\\end{eqn}\nmaps to rotated states\n\\begin{eqn}\n\\ket{\\psi} \\otimes \\ket{\\sigma} \\mapsto \n\tR_J \\left( \\ket{\\psi} \\otimes \\ket{\\sigma} \\right) \n\t= R_L \\ket{\\psi} \\otimes R_S \\ket{\\sigma}\n\t= \\ket{\\psi'} \\otimes \\ket{\\sigma'} \n\\end{eqn}\nThe components $J_i$ of total angular momentum are dynamical, observables variables which obey the usual angular momentum commutation relations\n\\begin{eqn}\n[J_i, J_j] = i \\hbar \\epsilon_{ijk} J_k.\n\\end{eqn}\n\nFor a spin-$\\frac{1}{2}$ particle, localized in real space so that $\\v L = 0$, the Hilbert space is just the two-dimensional spin space. We can write the spin angular momentum vector as $\\v S = \\hbar \\v \\sigma / 2$ where $\\v \\sigma$ are the Pauli matrices\n\\begin{eqn}\n\\v{\\sigma} = \\set{ \\pmat{0 & 1 \\\\ 1 & 0}, \\pmat{0 & -i \\\\ i & 0}, \\pmat{1 & 0 \\\\ 0 & -1} }\n\\end{eqn}\nwhich span $SU(2)$. \n\nWriting out the spin-space rotation matrix we have\n\\begin{align}\nR(\\phi \\uv n) &= e^{-i \\phi \\uv n \\cdot \\v S / \\hbar} = e^{-i \\phi \\uv n \\cdot \\v \\sigma / 2} \\\\ &= \n\\begin{pmatrix}\n\\cos(\\phi/2) - i n_z \\sin(\\phi/2) & -i n_- \\sin(\\phi/2) \\\\\n-i n_+ \\sin(\\phi/2) & \\cos(\\phi/2) + i n_z \\sin(\\phi/2)\n\\end{pmatrix}\n\\end{align}\nwhere $n_\\pm = n_x \\pm i n_y$. We can derive this matrix expression from the Taylor expansion\n\\begin{eqn}\ne^{-\\frac{i}{2}\\phi \\uv n \\cdot \\v \\sigma} = \\sum_n \\frac{(-i \\phi \\uv n \\cdot \\v \\sigma / 2)^n}{n!}\n\\end{eqn}\nwhich, from the identity\\footnote{More generally, $(\\v a \\cdot \\v \\sigma)(\\v b \\cdot \\v \\sigma = \\v a \\cdot \\v b + i (\\v a \\times \\v b) \\cdot \\v \\sigma$} $(\\uv n \\cdot \\v \\sigma)^2 = 1$, we find\n\\begin{eqn}\nR(\\phi \\uv n) = \\cos(\\phi/2) - i \\uv n \\cdot \\v \\sigma \\sin(\\phi/2),\n\\end{eqn}\nwhich is equivalent to the matrix written above. \n\n\n\\subsection{Something funny---rotations by $2\\pi$}\n\nFrom our expression for the spin rotation we find that something funny happens when we rotate by $2\\pi$---notice that $R(2\\pi \\uv n) = -1$. So $R$ is a ``projective'' representation of $SO(3)$; it's a 2--1 homomorphism, i.e. a double covering. This turns out to be true for all half-integer spins (fermions). \n\n[Missing: something about Zeemann something or other and Larmor frequencies. 
He erased the board right after he wrote it.]\n\n\\begin{example}[Boson vs fermion interference experiment]\nSuppose we set up an interference experiment where a particle can go around one side of a loop entering a black box with a magnetic field $\\v B$ passing through it, or the other side of the loop with no field. From the Larmor effect we get a spin rotation of $\\delta \\phi = \\omega_L t$ from the black box, where $t$ is the time spent in the box. If we set up the experiment so that $\\delta \\phi = 2\\pi$, we get constructive interference for bosons and destructive interference for fermions. \n\\end{example}\n\n[Explained other interesting effects: weak localization and weak anti-localization]\n\n\n\\subsection{Basis spinors}\n\nIt is customary to write spin states as column vectors in the basis of normalized eigenstates of $\\sigma_z$,\n\\begin{eqn}\n\\ket{\\uparrow} = \\ket{+} = \\pmat{1 \\\\ 0}, \\qquad \n\\ket{\\downarrow} = \\ket{-} = \\pmat{0 \\\\ 1},\n\\end{eqn}\nwith eigenvalues $+1$ and $-1$ respectively. These are thus also eigenstates of $S_z$ with eigenvalues $\\pm \\hbar / 2$. We can write the closure relation in this space as \n\\begin{eqn}\n1 = \\ket{\\uparrow} \\bra{\\uparrow} + \\ket{\\downarrow} \\bra{\\downarrow}. \n\\end{eqn}\nThen we have another complete set of eigenstates for $S_x$\n\\begin{eqn}\n\\ket{\\rightarrow} = \\frac{1}{\\sqrt 2} \\pmat{1 \\\\ 1}, \\qquad\n\\ket{\\leftarrow} = \\frac{1}{\\sqrt 2} \\pmat{1 \\\\ -1}\n\\end{eqn}\nwith eigenvalues of $S_x$ given by $+\\hbar / 2$ and $-\\hbar / 2$ respectively. So we have another closure relation\n\\begin{eqn}\n1 = \\ket{\\rightarrow} \\bra{\\rightarrow} + \\ket{\\leftarrow} \\bra{\\leftarrow}. \n\\end{eqn}\n\nNow, we can write the square angular momentum\n\\begin{eqn}\n\\v J^2 = \\frac{1}{2} (J_+ J_- + J_- J_+) + J_z^2 \n\\end{eqn}\nwhere $J_\\pm = J_x \\pm i J_y$ are so-called ladder operators. Notice the ladder operators are non-Hermitian; in fact, each is the adjoint of the other, $J_\\pm^\\dagger = J_\\mp$. We have a new set of commutation relations for the ladder operators\n\\begin{eqn}\n[J_+, J_-] = 2 \\hbar J_z, \\quad\n[J_z, J_\\pm] = \\pm \\hbar J_\\pm, \\quad\n[\\v J^2, J_\\pm] = [\\v J^2, J_z] = 0. \n\\end{eqn}\nSo we have two compatible operators $\\v J^2$ and $J_z$ with quantum numbers $j$ and $m$ respectively. 
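\n\nAs a quick check, the middle relation follows directly from $[J_i, J_j] = i \\hbar \\epsilon_{ijk} J_k$:\n\\begin{eqn}\n[J_z, J_\\pm] = [J_z, J_x] \\pm i [J_z, J_y] = i \\hbar J_y \\pm \\hbar J_x = \\pm \\hbar (J_x \\pm i J_y) = \\pm \\hbar J_\\pm.\n\\end{eqn}\n\n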
Furthermore, denoting \n\\begin{eqn}\n\\ket{\\psi_\\pm} = J_\\pm \\ket{j,m},\n\\end{eqn}\nwe find\n\\begin{eqn}\nJ_z \\ket{\\psi_\\pm} = \\left( [J_z, J_\\pm] + J_\\pm J_z \\right) \\ket{j,m} = (m \\pm 1) \\hbar \\ket{\\psi_\\pm}.\n\\end{eqn}\nFurthermore, since $[\\v J^2, J_\\pm] = 0$,\n\\begin{eqn}\n\\v J^2 \\ket{\\psi_\\pm} = J_\\pm \\v J^2 \\ket{j,m} = \\hbar^2 j(j+1) \\ket{\\psi_\\pm},\n\\end{eqn}\nso $J_\\pm$ raises or lowers $m$ by one while leaving $j$ unchanged.\n\n\\end{document}\n", "meta": {"hexsha": "b649b8988dd0ff3d629ea056c0c19572c1256668", "size": 5619, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quantum/lec18.tex", "max_stars_repo_name": "paulinearriaga/phys-ucla", "max_stars_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "quantum/lec18.tex", "max_issues_repo_name": "paulinearriaga/phys-ucla", "max_issues_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quantum/lec18.tex", "max_forks_repo_name": "paulinearriaga/phys-ucla", "max_forks_repo_head_hexsha": "48084dbbac2f8a4748c1fdaaf63a4cebaae16809", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.2230769231, "max_line_length": 491, "alphanum_fraction": 0.6796583022, "num_tokens": 1902, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7690802370707281, "lm_q1q2_score": 0.5526567365263333}}
{"text": "\\section{Security Analysis for $\\mathcal{TPA}$}\n\\label{tpa}\n\\subsection{Correctness}\nAssume each $\\{Y_i\\}_{\\{i \\in \\mathcal{T}\\}}$ is correctly calculated,\n$G_S$ will be:\n\\[\n  G_S = \\prod_{j \\in \\mathcal{T}}Y_i^{\\lambda_j} = g_{\\pi}^{a \\sum_{j\n      \\in \\mathcal{T}} f(j) \\lambda_j} = g_{\\pi}^{aS} \\pmod p\n\\]\nand by raising $s_i = H(pw, u_i)$ to $G_S$, we get:\n\\[\n  X_i = (G_S^{s_i})^{a'} = (g_{\\pi}^{Ss_i})^{aa'} \\pmod p\n\\]\nwhich must be the same as $v_i^{aa'} \\bmod p$ for each $i \\in\n\\mathcal{T}$ iff the correct password ($g_{\\pi}$) is given at the\nstep 1. With each $B_i = v_i^{b_i} \\bmod p$, the client and servers\nshare DH keys $K_i = g_{\\pi}^{Ss_iaa'b_i} \\bmod p$.\n\n\\begin{lemma}\n\\label{tpa1}\nFrom $\\{Y_i\\}_{i \\in \\mathcal{L}}$ where $\\mathcal{L} \\subset \\{1..n\\}$ and $|\\mathcal{L}| <\nt$, correct $G_S$ cannot be obtained.\n\\end{lemma}\n\n\\begin{proof}\n$S$ cannot be reconstructed from $|\\mathcal{L}|$ servers, from the SSS\n  property, thus $g_{\\pi}^{aS}$ cannot be obtained whatever $a$ is.\n\\end{proof}\n\n\\subsection{Protocol Analysis}\nWe look into the $\\mathcal{TPA}$ protocols by dividing it into two\nparts: the first roundtrip (step 1 and 2) is to recover the shared\nsecret $S$, and the second roundtrip (step 3 and 4) is for key\nexchange.\n\n$g_{\\pi} = \\pi^2 \\bmod p$, where $\\pi$ is a secure hash value over the\npassword, is a safe generator in $\\mathbb{Z}^*_q$ with high\nprobability. Raise a random exponent $a$ to mask the\ngenerator. Servers blindly raise its own share $y_i$ to whatever $X =\ng_{\\pi}^a$ is. As long as $p$ is a safe prime $X^{y_i} \\bmod p$ does\nnot reveal $y_i$ whatever $X$ is.\n\nThe second roundtrip can be seen as the Elgamal encryption with a\ngenerator $G_S^{s_i}$ where $s_i := H(pw, u_i)$. $u_i$ is a salt\ngenerated uniquely for each server. The encrypted data is sent in the\nnext roundtrip.  Another random exponent $a'$ is applied to make it\ndifficult to do offline dictionary attacks over $s_i$.\n\nThe last roundtrip (step 4 and 5) is to check if both parties have\nshared a random session key and to obtain a proof. The client receives\nthe encrypted data iff the MAC is verified at the server, otherwise it\nwill receive $\\bot$.\n\n\\subsection{Reduction to Elgamal Encryption}\nSince $\\mathbb{Z}_p^*$ is a cyclic group, there exist $g \\in\n\\mathbb{Z}_p^*$ and $x \\in \\mathbb{Z}_p$ such that $g_{\\pi} = g^x\n\\pmod p$ therefore in the step 3, $X_i = G_S^{s_ia'}$ can be\nseen as a $DH$ term: $g^{x'}$ where $x' = xSs_iaa' \\bmod q$. The\nsession key on the server side is the same as the standard Elgamal\nencryption, i.e., $(g^{x'})^{b_i}$.\nAt the step 4, the server returns $B_i = v_i^{b_i}$ where $v_i =\ng^{x'/aa'} \\pmod p$. For any $aa' \\in \\mathbb{Z}_q$ there exist a\nmultiplicative inverse in $\\mathbb{Z}_q$ such that $v_i^{aa'} = g^{x'}\n\\pmod p$. 
Therefore we get the standard Elgamal key $B_i^{aa'}\n= (g^{x'})^{b_i} \\pmod p$ on the client side.\n\n\\subsection{Resistance to offline dictionary attacks}\nAssume we have $|\\mathcal{L}| < t$ compromised servers and try to\nreconstruct $g_{\\pi}^S$ from $\\{y_i\\}_{i \\in \\mathcal{L}}$.\nBy Lemma~\\ref{tpa1}, we need at least one $Y_i$, $i \\notin\n\\mathcal{L}$, and unless the server $i$ is compromised, the only\nway to get $Y_i$ is to issue the protocol $X = g_{\\pi}^a \\bmod\np$ in step 1.\nUnless the attacker knows $\\pi$, all he can do is to guess the\npassword $\\pi'$ and to keep asking the server to calculate $Y'_i =\ng_{\\pi'}^a \\bmod p$. If the guess is wrong, $Y'_i$ will not\nyield $g_{\\pi}^S$ correctly with the compromised data $\\{f(j)\\}_{j\n\\in \\mathcal{L}}$.\nUnless the attacker can calculate $S$ from $g^S \\bmod p$ with a\nfabricated $g$, he cannot calculate $g_{\\pi'}^S \\bmod p$ in\norder to brute force over $g_{\\pi'} \\in \\{g^e | e = 1..p-1\\}$.\n\nNext, we look into dictionary attacks for the {\\em shares} that are\nrevealed from compromised servers, i.e.,\n\\begin{align*}\n  y_i &= f(i) \\\\\n  v_i &= g_{\\pi}^{Ss_i} \\bmod p\n\\end{align*}\nfor $i \\in \\mathcal{L}$. We also get $G_S = g_{\\pi}^{aS} \\bmod p$ from\nstep 2. $B_i$ will not contribute anything to the attack as the attackers already\nhave $v_i$.\nTo brute force on $v_i$ there are two possible ways:\n\\begin{enumerate}\n\\item $\\exists S \\in \\mathbb{Z}_q$, check if\n  $v_i \\stackrel{\\text{\\tiny ?}}{=} (g_{\\pi}^{s_i})^S \\pmod p$\n  for each $g_{\\pi}^{s_i}$\n\\item $\\exists g_{\\pi} \\in \\mathbb{Z}^*_q$, check if\n  $v_i \\stackrel{\\text{\\tiny ?}}{=} (g_{\\pi}^S)^{s_i} \\pmod p$\n  for each $s_i$\n\\end{enumerate}\nFor the former case we need $S$, and for the latter case we need\n$g_{\\pi}^S$. From Protocol 1, we can obtain either $g^S \\bmod p$\nfor any $g \\in \\mathbb{Z}_p^*$ or $g_{\\pi}^{aS} \\bmod p$. Obtaining\n$S$ is simply the {\\sf DLog} problem. $g_{\\pi}^S \\bmod p$ can be\nobtained iff $g$ happens to be $g_{\\pi}$ or a legitimate client\nchooses $a = 1$, which must be excluded.\nIf $g_{\\pi}^S$ is obtained, a dictionary attack is possible on\n$s_i$; however, it is difficult to obtain $g_{\\pi}^S$ from $G_S$, which\nis calculated by a legitimate client, as $a$ has been randomly\nchosen. 
Attackers can choose $a = 1$ at step 1 and obtain\n$g_{\\pi'}^S$ for a guessed password $\\pi'$, but they need to start the\nprotocol over for every new guess to get the correct $g_{\\pi}^S$.\n", "meta": {"hexsha": "3730c436afec0fe612fd8098087fe00ab30177b3", "size": 5166, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/tex/tpa.tex", "max_stars_repo_name": "dmitris/bftkv", "max_stars_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2017-09-29T22:46:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-07T22:27:28.000Z", "max_issues_repo_path": "docs/tex/tpa.tex", "max_issues_repo_name": "dmitris/bftkv", "max_issues_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tex/tpa.tex", "max_forks_repo_name": "dmitris/bftkv", "max_forks_repo_head_hexsha": "8769a830c87436922a2568f967aed891baf0cd5b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-12-27T17:17:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-08T23:56:28.000Z", "avg_line_length": 45.7168141593, "max_line_length": 92, "alphanum_fraction": 0.6718931475, "num_tokens": 1857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8056321796478255, "lm_q2_score": 0.685949467848392, "lm_q1q2_score": 0.552622964910966}}
{"text": "\\subsection{Finite-Difference Time-Domain Method}\n\nThe finite-difference time-domain method (also called Yee's method after the mathematician Kane Yee) describes an algorithm to calculate time dependent electromagnetic fields based on Maxwell's equations. It has first been published in a paper by Kane Yee in 1966~\\cite{yee}.\n\nTo describe a system, space and time are broken down into a grid (which can but does not have to have equidistant points) which contains information about the electric and magnetic fields and the permittivity at each point. After the grid is initialized, the neighboring electric field values can be calculated from the magnetic field and, in a second step, the future electric field can be calculated from the magnetic field. By taking the permittivity into account this method is capable of describing inhomogenous complex systems that consist of different materials and field sources.\n\nWhile there are other often more efficient methods to numerically simulate and calculate surface plasmons, the finite-difference time-domain method is way more flexible. Because it is basically just a solver for Maxwell's equations without any assumptions on the geometry it can calculate various experiments from wave guides to all kinds of light interaction with wavelength-scale 3D metal geometries~\\cite{numel}.\n", "meta": {"hexsha": "806a28100959a89d3bdc9c71ee486e7ba59b21d0", "size": 1333, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "pages/1_theoretical_basis/16_fdtd.tex", "max_stars_repo_name": "JensRavens/thesis", "max_stars_repo_head_hexsha": "73299cec14df30ad5fd0f7bde6058344ce4ed709", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-04-01T12:29:45.000Z", "max_stars_repo_stars_event_max_datetime": "2016-08-19T22:59:44.000Z", "max_issues_repo_path": "pages/1_theoretical_basis/16_fdtd.tex", "max_issues_repo_name": "JensRavens/thesis", "max_issues_repo_head_hexsha": "73299cec14df30ad5fd0f7bde6058344ce4ed709", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pages/1_theoretical_basis/16_fdtd.tex", "max_forks_repo_name": "JensRavens/thesis", "max_forks_repo_head_hexsha": "73299cec14df30ad5fd0f7bde6058344ce4ed709", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 166.625, "max_line_length": 587, "alphanum_fraction": 0.824456114, "num_tokens": 261, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8175744673038222, "lm_q2_score": 0.6757646010190476, "lm_q1q2_score": 0.5524878837009277}}
{"text": "\n\n\\subsection{meda}\n\nMatrix Exploratory Data Analysis (meda) is a package being developed to\nallow for easy generation of modern summary statistics effective for\nhigh-dimensional data analysis. \n\n\\begin{compactitem}\n  \\item Source code: \\href{https://github.com/neurodata/meda}{https://github.com/neurodata/meda}\n  \\item Example output generated from Fisher's Iris data is here:\n    \\href{http://docs.neurodata.io/meda}{http://docs.neurodata.io/meda}\n\\end{compactitem}\n\nThe goal of this package is to realize the following checklist: Given a new set of n samples of vectors in $\\mathbb{R}^d$\n\n\\begin{compactenum}\n  \\item histogram of feature types (binary, integer, non-negative, character, string etc.)\n  \\item \\# NaNs per row? Per column? Infs per row? Per column? \"Zero\" variance rows? columns?\n  \\item Heat map of raw data that fits on screen (k-means++ to select 1000 samples, CUR to select 100 dimensions)\n  \\item 1st moment statistics\n  \\begin{compactenum}\n    \\item mean (line plot + heatmap)\n    \\item median (line plot + heatmap)\n  \\end{compactenum}\n  \\item 2nd moment statistics\n  \\begin{compactenum}\n    \\item correlation matrix (heatmap)\n    \\item matrix of energy distances (heatmap)\n  \\end{compactenum}\n  \\item density estimate\n  \\begin{compactenum}\n    \\item 1D marginals (Violin + jittered scatter plot of each dimension,  if n > 1000 or d>10, density heatmaps)\n    \\item 2D marginals (Pairs plots for top ~8 dimensions, if n*d>8000, 2D heatmaps)\n  \\end{compactenum}\n  \\item Outlier plot \n  \\item cluster analysis (IDT++)\n  \\begin{compactenum}\n    \\item BIC curves\n    \\item mean line plot\n    \\item covariance matrix heatmaps\n  \\end{compactenum}\n  \\item spectral analysis\n  \\begin{compactenum}\n    \\item cumulative variance (with elbows) of data matrix\n    \\item eigenvectors (pairs plot + heatmap)\n  \\end{compactenum}\n\\end{compactenum}\n\n\n\\begin{compactitem}\n\\item To rescale the data in case of differently scaled features, we will implement the following options: \n\\begin{compactitem}\n  \\item raw\n  \\item linear options\n  \\begin{compactitem}\n    \\item linear squash between 0 \\& 1\n    \\item mean subtract and standard deviation divide\n    \\item median subtract and median absolute deviation divide\n    \\item make unit norm\n  \\end{compactitem}\n  \\item nonlinear\n  \\begin{compactitem}\n    \\item rank\n    \\item sigmoid squash\n  \\end{compactitem}\n\\end{compactitem}\n\n\\item To robustify in the face of outliers, we will utilize\n \\href{http://projecteuclid.org/euclid.bj/1438777595}{Geometric median and robust estimation in Banach spaces} \n\n\\item { if features have categories}\n\\begin{compactenum}\n  \\item sort by category\n  \\item color code labels by category\n\\end{compactenum}\n\n\\item { if points have categories}: \n   label points in scatter plots by symbol\n\\end{compactitem}\n\n\n% \\end{document}\n", "meta": {"hexsha": "a3619bb16c2d703c7fb0f07bda5153441ae49275", "size": 2811, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_stars_repo_name": "openconnectome/SIMPLEX_Q2", "max_stars_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_issues_repo_name": 
"openconnectome/SIMPLEX_Q2", "max_issues_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_forks_repo_name": "openconnectome/SIMPLEX_Q2", "max_forks_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.4642857143, "max_line_length": 121, "alphanum_fraction": 0.7381714692, "num_tokens": 763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.7826624789529375, "lm_q1q2_score": 0.5523914594947832}}
{"text": "\\documentclass[simplex.tex]{subfiles}\n% NO NEED TO INPUT PREAMBLES HERE\n% packages are inherited; you can compile this on its own\n\\providecommand{\\mb}[1]{\\boldsymbol{#1}}\n\\providecommand{\\mv}[1]{\\vec{#1}}\n\\providecommand{\\ve}[1]{\\boldsymbol{#1}}\n\\newcommand{\\bpsi}{\\ve{\\psi}}\n\\newcommand{\\bv}{\\mb{v}}\n\\newcommand{\\bx}{\\mb{x}}\n\\newcommand{\\bA}{\\mathbf{A}}\n\\newcommand{\\bH}{\\mathbf{H}}\n\\newcommand{\\bX}{\\mathbf{X}}\n\\newcommand{\\bP}{\\mathbf{P}}\n\\newcommand{\\bB}{\\mathbf{B}}\n\\newcommand{\\bD}{\\mathbf{D}}\n\\newcommand{\\bS}{\\mathbf{S}}\n\\newcommand{\\bR}{\\mathbf{R}}\n\\newcommand{\\bJ}{\\mathbf{J}}\n\\newcommand{\\bLambda}{\\mathbf{\\Lambda}}\n\\begin{document}\n\\subsection{Vertex Screening}\nWe are developing a vertex screening method to recover the signal subgraph. Specifically, we have $m$ pairs of graph and label sampled independently from some distribution $F_{G,Y}$,\n\\[(G_1,Y_1),(G_2,Y_2),(G_3,Y_3),...,(G_m,Y_m) \\overset{i.i.d.}{\\sim} F_{G,Y}. \\] \nIt is often the case that the signal is sparse in $G$. That is to say, there is a small subgraph $G[S]$ induced by signal vertices $S$ which contains all information about $Y$. The vertex screening method wants to recover the signal vertices $S$. It consists of three steps: feature extraction, computing distance correlations, and thresholding. \\\\\n\nThe first step is to extract a feature vector for each vertex in a graph. We use notation $\\hat{X}_{i}[u,\\cdot]$ to denote the feature extracted for vertex $u$ in graph $i$ where $i \\in [m]$ and $u \\in [n]$. A simple approach to obtain a feature vector is to set $\\hat{X}_{i}[u]$ to the $u$th row of adjacency matrix $A_i$, that is $\\hat{X}_{i}[u,\\cdot]=A_i[u,\\cdot]$. In this case,  $\\hat{X}_{i}[u,\\cdot]$ is a vector in $\\mathbb{R}^n$ which can be a high dimensional space. Alternatively, Adjacency Spectral Embedding could also extract a feature vector $\\hat{X}_{i}[u,\\cdot]$ which lies in $\\mathbb{R}^d$. The second step computes a correlation between  $\\{\\hat{X}_{i}[u,\\cdot]\\}_{i=1}^m$ and $\\{Y_i\\}_{i=1}^m$ for each $u \\in V$. The correlation could be distance correlation (Dcorr) or multiscale generalized correlation (MGC). Let $c_u$ be the correlation, that is\n\\[ c_u = Dcorr(\\{\\hat{X}_{i}[u,\\cdot]\\}_{i=1}^m, \\{Y_i\\}_{i=1}^m ) \\text{ or } MGC(\\{\\hat{X}_{i}[u,\\cdot]\\}_{i=1}^m, \\{Y_i\\}_{i=1}^m). \\]\nThe last step order $c_u$s by their magnitudes. Then, we threshold the correlations by a critical value $c$. The vertices survive thresholding will be the estimated signal vertices $\\hat{S}$, that is \n\\[\\hat{S} = \\{u \\in V | c_u > c\\}.\\]\nThe estimated signal subgraph and the corresponding adjacency matrix $G[\\hat{S}]$ and $A[\\hat{S}]$. The Algorithm \\ref{alg:vs} describes the general procedure of vertex screening using adjacency vector as feature and MGC. A simulation experiment is shown in the Figure \\ref{fig:vs} which demonstrates that the vertex screening can significantly improve the graph classification performance for various sample size. We are currently working on theories of vertex screening and finishing the first draft.\n\\clearpage\n\\begin{algorithm}\n\t\\caption{Vertex Screening. 
Find the signal vertex estimate $\\hat{S}$.}\n\t\\label{alg:vs}\n\t\\begin{algorithmic}[1]\n\t\t\\Procedure{Input $\\{(A_i,Y_i)\\}_{i=1}^m$ and $c \\in [0,1]$}{}\n\t\t\\For{$u=1:n$ }\n\t\t\\State $c_u = MGC(\\{\\hat{X}_{i}[u,\\cdot]\\}_{i=1}^m, \\{Y_i\\}_{i=1}^m )$\n\t\t\\EndFor\n\t\t\\State Output $\\hat{S} = \\{u| c_u > c\\}$\n\t\t\\EndProcedure\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n%%%  FIGURE BLOCK\n\\begin{figure}[!h]\n\t\\begin{cframed}\n\t\t\\centering\n\t\t\\includegraphics[scale=0.35, clip = true]{../../figs/Bayes_plugin_IER_3n.png}\n\t\t\\caption{The classification accuracies of five approaches with their standard errors are shown. We generate graphs from $3$ different inhomogeneous Erdos-Renyi models, then apply $5$ classifiers to classify these graphs: Bayes optimal classifier (green), Bayes plugin on $G[S]$ (purple), Bayes plugin on $G[\\hat{S}]$ with $\\hat{S}$ estimated by Dcorr (blue), Bayes plugin on $G[\\hat{S}]$ with $\\hat{S}$ estimated by MGC (yellow), and Bayes plugin on $G$ (red). The classifiers with vertex screening (blue and yellow) have significantly better classification performance compared to no screening (red), and are close to Bayes optimal (green) when given $600$ graphs. }\n\t\t\\label{fig:vs}\n\t\\end{cframed}\n\\end{figure}\n%\n\\clearpage\n\n\\subsection{Joint Embedding}\nThe latest draft is posted on \\href{https://arxiv.org/abs/1703.03862}{arXiv} and submitted for publication. The code can be found \\href{https://github.com/shangsiwang/Joint-Embedding}{here}. We are also working on making an R package that includes joint embedding.\n\n\n\n\\clearpage\n\n\\end{document}\n", "meta": {"hexsha": "79d6c6a0fd81fefaf4cd5f24969d511626524f8e", "size": 4643, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Reporting/reports/2017-05/discriminability.tex", "max_stars_repo_name": "openconnectome/SIMPLEX_Q2", "max_stars_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Reporting/reports/2017-05/discriminability.tex", "max_issues_repo_name": "openconnectome/SIMPLEX_Q2", "max_issues_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Reporting/reports/2017-05/discriminability.tex", "max_forks_repo_name": "openconnectome/SIMPLEX_Q2", "max_forks_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 70.3484848485, "max_line_length": 870, "alphanum_fraction": 0.7120396295, "num_tokens": 1453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850154599562, "lm_q2_score": 0.782662489091802, "lm_q1q2_score": 0.5523914569635853}}
{"text": "\\section{Results and Discussion}\n\nWe present results with different types of flow field data sets to evaluate our approach, including synthetic data and spatially aggregated data sets. To demonstrate the effectiveness of the proposed algorithm, we compared it with the Monte Carlo (MC) method, which is the general approach to stochastically trace particles in uncertain flow fields. We performed quantitative comparisons on the resulting streamlines generated by our approach and the MC method with different settings and distance measurements. We also qualitatively compare the most likely streamlines as well as the distributions of possible traces produced by our approach and the MC method by visualizing sample traces on different data sets.\n\n\\subsection{Synthetic Data}\n\nWe first evaluate the proposed algorithm on the analytical static double-gyre data set proposed by Shadden et al. in~\\cite{Shadden2005271}. Gaussian noise is added into the vector field to synthesize the uncertainty. In order to quantitatively evaluate the robustness of the proposed algorithm under the influence of noise, we generate streamlines starting from regularly sampled seed positions for the certain double-gyre data set and use those streamlines as our ground truth. Then, a set of sample traces are generated by the MC method and the Bayesian approach starting from the same seed position presented above for the uncertain double-gyre data set with different noise level, which is controlled by the standard deviation $\\sigma$ of the Gaussian noise. All the streamlines were generated with a step size of $0.005$ and a maximum step number of $100$. For the Monte Carlo method and the proposed algorithm, a critical parameter is the number of particles used for each seed. Indeed, more particles will give a more accurate presentation of the target distribution but will take more time to generate the results. Hence, a particle count that balances the accuracy and the computation time need to be studied. Based on~\\cite{journals/mia/PontabryROSKD13}, we use $100$ particles for both of the methods in the experiments. To compare the accuracy of the resulting traces, the Hausdorff distance~\\cite{Roessl:2012:TVCG} and the Mean of the closest point distance~\\cite{Corouge04towardsa} between streamlines were used. Figure~\\ref{gerror} gives the average of the distances presented above between the most likely streamlines generated from each method and the ground truth, with increasing $\\sigma$ values for the noise in the vector field. The figure reveals that our method can produce most likely traces that are closer to the ground truth and the average of the distances increases more slowly than the MC method as the noise increases.\n\nBesides comparing the accuracy of the most likely traces, it is also important to compare all possible traces generated by each method. We evaluate the accuracy of uncertain streamlines starting from a given seed position by measuring the distance between each individual trace and the ground truth, then we compute the weighted sum of all the distances. For the MC method, all traces are equally weighted. For the proposed method, the weights of the traces described above are used. 
Figure~\\ref{gerror_r} shows that the proposed method can generate more accurate traces.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.8in]{../figures/doublegyre_h.eps}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.8in]{../figures/doublegyre_m.eps}\n  \\end{subfigure}\n  \\caption{Comparison of the distance between the most likely traces and the ground truth for our method and the MC method.}\n  \\label{gerror}\n\\end{figure}\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.8in]{../figures/doublegyre_hr.eps}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.8in]{../figures/doublegyre_mr.eps}\n  \\end{subfigure}\n  \\caption{Comparison of overall trace accuracy. For each method, distances of all sample traces to the ground truth are measured and summed by their weights.}\n  \\label{gerror_r}\n\\end{figure}\n\nFigure~\\ref{case_1} shows sample traces generated by the MC method and our method at a given seed location in the double-gyre flow field. As we can see in the figure, our method can generate more concentrated traces, which are also closer to the ground truth compared with the MC method. The most likely trace generated by our method is also closer to the ground truth, as shown in Figure~\\ref{case_1} (c).\n\n\\begin{figure}[!htb]\n   \\centering\n  \\small\n  (a) \\vcenterbox{\\includegraphics[height=0.5in]{../figures/double_gyre_mc35.eps} } \\hfill\n  (b) \\vcenterbox{\\includegraphics[height=0.5in]{../figures/double_gyre_smc35.eps} } \\hfill\n  (c) \\vcenterbox{\\includegraphics[height=0.5in]{../figures/double_gyre_opt35.eps} }\n  \\caption{(a): Sampled streamlines computed by the MC method starting from seeding position $x=0.3, y=0.5$ in the analytical double-gyre data set. (b): Sampled streamlines computed by our method from the same seeding position in (a). (c): The most likely traces generated by both methods compared with the ground truth.}\n  \\label{case_1}\n\\end{figure}\n\n\\subsection{Spatially Aggregated Data Sets}\n\n\\begin{figure*}[!htbp]\n  \\centering\n  \\small\n  (a) \\vcenterbox{\\includegraphics[width=1.5in]{../figures/isabel_gt.eps} } \\hfill\n  (b) \\vcenterbox{\\includegraphics[width=1.5in]{../figures/isabel_mc.eps} } \\hfill\n  (c) \\vcenterbox{\\includegraphics[width=1.5in]{../figures/isabel_smc.eps} }\n\n  \\caption{Streamlines generated on the Hurricane Isabel data sets. The color is used to enhance the contrast among streamlines. (a): The ground truth streamlines generated on the raw data. (b): Results produced by the Monte Carlo method on the distribution data with block size $16^3$. (c): Streamlines generated by our method on the same data in (b).}\n  \\label{data_overview}\n\\end{figure*}\n\n\\begin{figure}[!htbp]\n  \\centering\n  \\small\n  (a) \\vcenterbox{\\includegraphics[height=1.0in]{../figures/isabel_mc1.eps} } \\hfill\n  (b) \\vcenterbox{\\includegraphics[height=1.0in]{../figures/isabel_smc1.eps} }\n  \\caption{(a): Sampled streamlines computed by the MC method starting from seeding position $x=250, y=150, z=45$ in the Isabel data set. (b): Sampled streamlines computed by our method from the same seeding position in (a).}\n  \\label{case_4}\n\\end{figure}\n\nIn this section, experiments were done on the Hurricane Isabel data set. 
Hurricane Isabel is a data set with a resolution of $500 \\times 500 \\times 100$ that models a strong hurricane in the west Atlantic region in September 2003. In order to test the performance of our algorithm on uncertain data which is represented by non-Gaussian distributions, we decompose the data into small cubic blocks and construct a histogram for each block. For the test data set, distribution-based data are generated with three different block sizes $8^3$, $16^3$, and $32^3$ to evaluate the performance of the proposed algorithm under the influence of uncertainty.\n\nAs presented above, we regularly sample a set of seed locations and compute the streamlines for both the raw data and the spatially downsampled data. To perform the quantitative analysis, we treat the streamlines computed from the raw data as the ground truth and compute the distance between the stochastic particle traces and the ground truth. $100$ particles were used with an integration step size $1.0$ and a maximum step number of $1000$ for the streamline computation. Figure~\\ref{berror_r} gives the mean weighted-sum distance between the sample streamline bundles generated by the test methods and the ground truth on the test data set with different block sizes. The figure reveals that the proposed method can produce traces that are closer to the ground truth.\n\n\\begin{figure}[!htb]\n  \\centering\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.9in]{../figures/isabel_h.eps}\n  \\end{subfigure}~\n  \\begin{subfigure}[b]{0.24\\textwidth}\n    \\centering\n    \\includegraphics[height=0.9in]{../figures/isabel_m.eps}\n  \\end{subfigure}\n  \\caption{Distances between the ground truth and sample traces generated by our method and the MC method for the Isabel data set.}\n  \\label{berror_r}\n\\end{figure}\n\nFigure~\\ref{data_overview} shows the streamlines generated from the test data set on the seed positions presented above. The most likely streamlines generated by the MC method on the data set with block size $16^3$ are given in Figure~\\ref{data_overview} (b); as expected, streamlines generated by the MC method are generally not as smooth as the ground truth, and some flow features look quite different compared with the ground truth. Figure~\\ref{data_overview} (c) shows the streamlines produced by our method with the same block size, which give more accurate and smoother results. Figure~\\ref{case_4} gives sample traces generated by the MC method and the proposed method at a given seed location in the Isabel data set. Figure~\\ref{case_4} shows that our algorithm can produce more concentrated and accurate results than the basic MC method, because the correlation between consecutive integration steps is exploited.\n\n\\subsection{Performance}\n\nAll the experiments were performed on a desktop computer with an Intel(R) Core(TM) i7-4790K CPU 4.0GHz processor, 16GB memory, and an NVIDIA GTX 970 GPU. In Table~\\ref{timing}, we compare the performance measurements between the proposed algorithm and the Monte Carlo method for streamlines estimated for a given seed position with $100$ sample points for all the test data sets used in this paper. 
In all the datasets, our approach is almost as fast as the Monte Carlo method.\n\n\\begin{table}[!htb]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Data Set}    & \\multirow{2}{*}{Method}     & \\multicolumn{3}{c|}{Timing(sec)}  \\\\ \\cline{3-5}\n                             &                             & 40 Steps  & 80 Steps & 120 Steps  \\\\ \\hline\n\\multirow{2}{*}{Double Gyre} & MC                          & 0.0026    & 0.005    & 0.008      \\\\ \\cline{2-5}\n                             & Bayesian             & 0.0035    & 0.007    & 0.01       \\\\ \\hline\n\\multirow{2}{*}{Isabel}      & MC                          & 3.3       & 6.7      & 10.1       \\\\ \\cline{2-5}\n                             & Bayesian             & 3.4       & 6.8      & 10.7       \\\\ \\hline\n\n\\end{tabular}\n\\caption{Overview of the performance for the proposed algorithm and the Monte Carlo method.}\n\\label{timing}\n\\end{table}\n", "meta": {"hexsha": "2c12547be95b8ac0ed17f8ec43f18f9a5488be4f", "size": 10682, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/pvisnotes2016/draft-rs.tex", "max_stars_repo_name": "hewenbin/pspf", "max_stars_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/pvisnotes2016/draft-rs.tex", "max_issues_repo_name": "hewenbin/pspf", "max_issues_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/pvisnotes2016/draft-rs.tex", "max_forks_repo_name": "hewenbin/pspf", "max_forks_repo_head_hexsha": "d51ac35f2e425d818c5b272c74b3ab9ef01bd5ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 95.375, "max_line_length": 1949, "alphanum_fraction": 0.7566935031, "num_tokens": 2651, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7826624789529375, "lm_q2_score": 0.7057850154599563, "lm_q1q2_score": 0.5523914498077267}}
{"text": "\\section{Evaluation and Error Calculation}\n\\label{sec:Evaluation}\n% --------------\n% Speed of Sound\n% --------------\n\\subsection{Speed of Sound}\n\\label{subsec:Speed_of_Sound}\nTo determine the speed of sound, the propagation time of a sound pulse over the distance $s=(2.561\\pm0.003)\\ m$ was measured 20 times (see table \\ref{tab:Speed_of_Sound_Measurements}). The ambient temperature was $\\vartheta=23$ \u00b0C.\n\nTo calculate the speed of sound from the measured values, one has to calculate the mean value of the duration $t_i$ and its error. To do this in Excel, the function \"AVERAGE()\" is used to calculate the mean value. This function implements equation \\ref{eq:Arithmetic_Mean}. To calculate the error of the mean value, the function \"STDEV.S()\" is used, which implements equation \\ref{eq:Arithmetic_Mean_Error}. The same thing can be achieved using the software QtiPlot. The measured values are shown in the following figure \\ref{fig:Sound_Propagation_Time}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Sound_Propagation_Time}\n\t\\caption{Sound Propagation Time}\n\t\\label{fig:Sound_Propagation_Time}\n\\end{figure}\nEquation \\ref{eq:Experimental_Standard_Deviation} is used to calculate the experimental standard deviation. This can be done by using the Excel function \"STDEV.S()\" again. If the equation \\ref{eq:Arithmetic_Mean} for the mean value error is compared to the equation \\ref{eq:Experimental_Standard_Deviation} for the experimental standard deviation, it becomes apparent that only an $\\sqrt{n}$ is missing in the denominator. This can easily be fixed by dividing the function by $\\sqrt{n}$. QtiPlot allows for direct calculation of the experimental standard deviation for an entire column of values.\n\\newpage\nThe comparison between Excel and QtiPlot is shown in the following table \\ref{tab:Comparison_Speed_of_Sound_Measurements}:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.2}\n\t\\begin{tabular}{c c c c}\n\t\t\\hline\n\t\t\\textbf{Software} & \\textbf{Mean Value $\\overline t$} & \\textbf{Mean Value Error $s_{\\overline{t}}$} & \\textbf{Standard Deviation $s$} \\\\\n\t\t\\hline\n\t\tExcel & $7.3225\\cdot10^{-3}$ s & $7.35308\\cdot10^{-5}$ s & $3.28840\\cdot10^{-4}$ s \\\\\n\t\tQtiPlot & $7.3225\\cdot10^{-3}$ s & $7.35308\\cdot10^{-5}$ s & $3.28840\\cdot10^{-4}$ s \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Comparison between the calculated values}\n\t\\label{tab:Comparison_Speed_of_Sound_Measurements}\n\\end{table}\nThe following equation is used to calculate the mean value of the speed of sound:\n\\begin{equation}\n\\overline{v}=\\frac{\\overline s}{\\overline t}=\\frac{2.561\\ m}{7.3225\\cdot10^{-3}\\ s}=349.744\\ \\frac{m}{s}\n\\label{eq:Speed_of_Sound}\n\\end{equation}\n\\\\\nEquation \\ref{eq:Error_Propagation} is used to calculate the uncertainty:\n\\begin{equation}\ns_{\\overline{v}}=\\sqrt{\\left(\\frac{\\partial v}{\\partial s}\\Biggr|_{\\overline v}\\cdot s_{\\overline{s}}\\right)^2 + \\left(\\frac{\\partial v}{\\partial t}\\Biggr|_{\\overline v}\\cdot s_{\\overline{t}}\\right)^2}=\\sqrt{\\left(\\frac{1}{\\overline t}\\cdot s_{\\overline{s}}\\right)^2 + \\left(-\\frac{\\overline{s}}{\\overline{t}^2}\\cdot s_{\\overline{t}}\\right)^2}=3.53586\\ \\frac{m}{s}\n\\end{equation}\n\\\\\nThe derived speed of sound is as follows:\n\\begingroup\n\\Large\n\\begin{equation}\nv=\\overline{v}\\pm s_{\\overline v}=(350\\pm4)\\ \\frac{m}{s}\n\\end{equation}\n\\endgroup\n\\newpage\n% ------------\n% Iron 
Content\n% ------------\n\\subsection{Iron Content}\n\\label{subsec:Iron_Content}\nThe iron content in an alloy was determined by various methods. Therefore, the accuracy varies between the different measurements.\n\nIn Excel, the mean value and its error are calculated the same way as in section \\ref{subsec:Speed_of_Sound}. The weighted mean value can be calculated by using the \"SUMPRODUCT()\" function. The formula is as follows:\n\\[\n=\\text{SUMPRODUCT}(\\overline{x_1}:\\overline{x_n};1/(s_{\\overline{x1}}:s_{\\overline{xn}})\\textasciicircum 2)/\\text{SUMPRODUCT}(1/(s_{\\overline{x1}}:s_{\\overline{xn}})\\textasciicircum 2)\n\\]\n\\\\\nThe measured values (see table \\ref{tab:Iron_Content_Measurements}) and their respective errors are shown in figure \\ref{fig:Iron_Content}. Furthermore, the weighted mean value was calculated by a weighted regression fit in QtiPlot (also shown in fig. \\ref{fig:Iron_Content}).\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Iron_Content}\n\t\\caption{Iron Content}\n\t\\label{fig:Iron_Content}\n\\end{figure}\nThe comparison for the mean value and its error:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.2}\n\t\\begin{tabular}{c c c}\n\t\t\\hline\n\t\t\\textbf{Software} & \\textbf{Mean Value} & \\textbf{Mean Value Error} \\\\\n\t\t\\hline\n\t\tExcel & $20.5556$ \\% & $0.519912$ \\% \\\\\n\t\tQtiPlot & $20.5556$ \\% & $0.519912$ \\% \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Comparison between the calculated values (Mean Value)}\n\t\\label{tab:Comparison_Iron_Content_Mean_Value}\n\\end{table}\n\\newpage\nThe comparison for the weighted mean value and its error:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.2}\n\t\\begin{tabular}{c c c}\n\t\t\\hline\n\t\t\\textbf{Software} & \\textbf{Weighted Mean Value} & \\textbf{Weighted Mean Value Error} \\\\\n\t\t\\hline\n\t\tExcel & $20.4007$ \\% & $0.361055$ \\% \\\\\n\t\tQtiPlot & $20.4007$ \\% & $0.361055$ \\% \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Comparison between the calculated values (Weighted Mean Value)}\n\t\\label{tab:Comparison_Iron_Content_Weighted_Mean_Value}\n\\end{table}\n\\newpage\n% ---------------\n% Spring Constant\n% ---------------\n\\subsection{Spring Constant}\n\\label{subsec:Spring_Constant}\nTo determine the spring constant of a pretensioned steel spring from the measured values (see table \\ref{tab:Spring_Constant_Measurements}), the following equation is used to create a linear regression fit:\n\\begin{equation}\nF=k\\cdot z+F_0\n\\end{equation}\nThe measured values and the linear regression fit are shown in the following figure \\ref{fig:Spring_Constant}:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Spring_Constant}\n\t\\caption{Spring Constant}\n\t\\label{fig:Spring_Constant}\n\\end{figure}\nThe values obtained from QtiPlot are shown in the following table \\ref{tab:Calculated_Spring_Force_Parameters}:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{c c}\n\t\t\\hline\n\t\t\\textbf{Spring Constant $k$} & \\textbf{Pretension Force $F_0$} \\\\\n\t\t\\hline\n\t\t$(22.5\\pm0.9)\\ \\,^\\text{N}\\!/_\\text{m}$ & $(-0.8\\pm0.5)$\\ N \\\\ \\hline\n\t\\end{tabular}\n\t\\caption{Calculated Spring Force Parameters}\n\t\\label{tab:Calculated_Spring_Force_Parameters}\n\\end{table}\nThe spring force can now be derived from the following equation:\n\\begin{equation}\nF=22.5\\ \\,^\\text{N}\\!/_\\text{m}\\cdot z-0.8\\ \\text{N}\n\\end{equation}\n\\newpage\n% --------\n% Pendulum\n% 
--------\n\\subsection{Pendulum}\n\\label{subsec:Pendulum}\nThe damped oscillation of a pendulum can be described by the following equation:\n\\begin{equation}\ny=A\\cdot \\exp(-\\Gamma\\cdot t)\\cdot\\sin(2\\cdot\\pi\\cdot f\\cdot t-\\delta)+y_0\n\\end{equation}\nThe values in table \\ref{tab:Pendulum_Measurements} were measured with an ultrasonic sensor. The following figure \\ref{fig:Pendulum} shows the measured distances as a function of time as well as the nonlinear fitted curve.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{Pendulum}\n\t\\caption{Pendulum}\n\t\\label{fig:Pendulum}\n\\end{figure}\nThe values obtained from QtiPlot are shown in table \\ref{tab:Calculated_Pendulum_Parameters}:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r|c c c}\n\t\t& \\textbf{Value} & & \\textbf{Uncertainty} \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{Amplitude $A$} & $-1.22$\\ m & $\\pm$ & $0.03$\\ m \\\\\n\t\t\\textbf{Damping Constant $\\Gamma$} & $52\\cdot10^{-3}\\ \\text{s}^{-1}$ & $\\pm$ & $2\\cdot10^{-3}\\ \\text{s}^{-1}$ \\\\\n\t\t\\textbf{Frequency $f$} & $55.0\\cdot10^{-3}$\\ Hz & $\\pm$ & $0.2\\cdot10^{-3}$\\ Hz \\\\\n\t\t\\textbf{Phase $\\delta$} & $-2.63$\\ rad & $\\pm$ & $0.02$\\ rad \\\\\n\t\t\\textbf{Offset $y_0$} & $49\\cdot10^{-3}$\\ m & $\\pm$  & $5\\cdot10^{-3}$\\ m \\\\\n\t\\end{tabular}\n\t\\caption{Calculated Pendulum Parameters}\n\t\\label{tab:Calculated_Pendulum_Parameters}\n\\end{table}\nThe position of the pendulum at a given time can now be derived from the following equation:\n\\begin{equation}\ny=-1.22\\ m\\cdot \\exp(-0.052\\ \\text{s}^{-1}\\cdot t)\\cdot\\sin(2\\cdot\\pi\\cdot0.055\\ Hz\\cdot t+2.63)+0.049\\ m\n\\end{equation}\n\\newpage\n% ------------------\n% RC Low-Pass Filter\n% ------------------\n\\subsection{RC Low-Pass Filter}\n\\label{subsec:RC_Low-Pass_Filter}\nThe 1st order RC low-pass filter (see figure \\ref{fig:1st_Order_RC_Low-Pass_Filter}) has a known resistor $R = 500\\ \\Omega$ and a known input voltage $U_{\\text{I}}=4\\ \\text{V}_{\\text{pp}}$. The output voltage and phase were measured with a cathode ray oscilloscope at various frequencies (see table \\ref{tab:RC_Low-Pass_Filter_Measurements}).\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.6]{RC_Circuit}\n\t\\caption{1st Order RC Low-Pass Filter}\n\t\\label{fig:1st_Order_RC_Low-Pass_Filter}\n\\end{figure}\nThe equation for the output voltage is as follows:\n\\begin{equation}\nU_{\\text{O}}=\\frac{X_{\\text{C}}}{\\sqrt{X_{\\text{C}}^2+R^2}}\\cdot U_{\\text{I}}=\\frac{1}{\\omega\\cdot C\\cdot\\sqrt{\\frac{1}{(\\omega\\cdot C)^2}+R^2}}\\cdot U_{\\text{I}}=\\frac{1}{\\sqrt{1+(\\omega\\cdot C\\cdot R)^2}}\\cdot U_{\\text{I}}\n\\end{equation}\n\\\\\nThe following figure \\ref{fig:Output_Voltage} shows the measured output voltage values and the fitted curve:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{RC_Low-Pass_Filter_Voltage}\n\t\\caption{Output Voltage}\n\t\\label{fig:Output_Voltage}\n\\end{figure}\n\\newpage\nThe equation for the phase shift is as follows:\n\\begin{equation}\n\\varphi=\\arctan(-\\omega\\cdot R\\cdot C)\n\\label{eq:Phase_Shift}\n\\end{equation}\n\nThe phase shift values have been converted from degrees to radians. 
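The conversion is a single multiplication by $\\pi/180$, for example (an illustrative snippet only, not part of the original evaluation; the variable names are hypothetical):\n\\begin{verbatim}\nimport numpy as np\nphi_rad = np.deg2rad(phi_deg)  # identical to phi_deg * np.pi / 180\n\\end{verbatim}\n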
They can now be plugged directly into the equation \\ref{eq:Phase_Shift}.\n\nThe following figure \\ref{fig:Phase} shows the measured phase shift and the fitted curve:\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=1]{RC_Low-Pass_Filter_Phase}\n\t\\caption{Phase}\n\t\\label{fig:Phase}\n\\end{figure}\nThe values for the capacitance $C$ obtained from QtiPlot are shown in table \\ref{tab:Calculated_RC_Calculated_Capacity}:\n\\begin{table}[H]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.3}\n\t\\begin{tabular}{r|c}\n\t\t& \\textbf{Capacitance $C$} \\\\\n\t\t\\hline\\hline\n\t\t\\textbf{from Output Voltage} & $(216.9\\pm0.9)\\cdot10^{-9}$\\ F \\\\\n\t\t\\textbf{from Phase} & $(197.5\\pm3.1)\\cdot10^{-9}$\\ F \\\\\n\t\\end{tabular}\n\t\\caption{Calculated Capacitance}\n\t\\label{tab:Calculated_RC_Calculated_Capacity}\n\\end{table}\n", "meta": {"hexsha": "46fcbf9dba11706e70ce099db8a38b9f324aec41", "size": 10426, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "glaL3_C_Evaluation_with_Computer/sections/evaluation_error_calculation.tex", "max_stars_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_stars_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "glaL3_C_Evaluation_with_Computer/sections/evaluation_error_calculation.tex", "max_issues_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_issues_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "glaL3_C_Evaluation_with_Computer/sections/evaluation_error_calculation.tex", "max_forks_repo_name": "MuellerDominik/Physics-Laboratory-Notebooks", "max_forks_repo_head_hexsha": "02836870e6d97a29b1857c956fbd58eb5933eede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.3377777778, "max_line_length": 596, "alphanum_fraction": 0.7215614809, "num_tokens": 3435, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6513548646660543, "lm_q2_score": 0.8479677564567913, "lm_q1q2_score": 0.552327923248091}}
{"text": "\\chapter{Statistics for Particle Analysis}\n\\label{chap:statistics}\n\n\\section{Classification functions}\n\\label{sec:classification_functions}\n\nThe main concepts to compare the goodness of identification methods which are used throughout the thesis are based on statistical classification functions. However, their use is not limited to physics, let alone particle physics, but can be found in all fields containing some form of (binary) classification problem. A classification function is a tool which separates elements which do not have the desired feature from those which have it.\nIn the following examples the classifier assumes the role of a discriminator between kaons and non-kaons.\n\nThe most important classification functions are:\n\\begin{itemize}\n\t\\item\n\t\\begin{samepage}\n\t\t\\textbf{T}rue \\textbf{P}ositive \\textbf{R}ate (\\textbf{TPR}): \\textit{proportion of accepted elements which are correct relative to all positives}\n\n\t\t\\nopagebreak\n\t\tHence, in the example it is the ratio of identified kaons which actually are kaons in proportion to the number of kaons in the data.\n\t\\end{samepage}\n\n\t\\item\n\t\\begin{samepage}\n\t\t\\textbf{T}rue \\textbf{N}egative \\textbf{R}ate or Specificity (\\textbf{TNR}): \\textit{proportion of rejected elements which are incorrect relative to all negatives}\n\n\t\t\\nopagebreak\n\t\tIt is the ratio of non-kaon particles being identified as non-kaons in proportion to the number of all non-kaon particles.\n\t\\end{samepage}\n\n\t\\item\n\t\\begin{samepage}\n\t\t\\textbf{F}alse \\textbf{P}ositive \\textbf{R}ate (\\textbf{FPR}): \\textit{proportion of accepted elements which are incorrect relative to all negatives}\n\n\t\t\\nopagebreak\n\t\tThis rate represents the fraction of non-kaon particles identified as kaons over the number of all non-kaons.\n\t\\end{samepage}\n\n\t\\item\n\t\\begin{samepage}\n\t\t\\textbf{F}alse \\textbf{N}egative \\textbf{R}ate (\\textbf{FNR}): \\textit{proportion of rejected elements which are correct relative to all positives}\n\n\t\t\\nopagebreak\n\t\tIt is the fraction of kaons classified as being non-kaons over the number of all kaons.\n\t\\end{samepage}\n\n\t\\item\n\t\\begin{samepage}\n\t\t\\textbf{P}ositive \\textbf{P}redictive \\textbf{V}alue (\\textbf{PPV}): \\textit{proportion of accepted elements which are correct relative to all accepted}\n\n\t\t\\nopagebreak\n\t\tThe definition represents the fraction of kaons classified as such over the number of all tracks classified as kaons but not necessarily actually being a kaon.\n\t\\end{samepage}\n\\end{itemize}\n\n\\section{Receiver operating characteristic curve}\n\\label{sec:roc}\n\nThe \\textbf{R}eceiver \\textbf{O}perating \\textbf{C}haracteristic (\\textbf{ROC}) curve is the TPR plotted over the FPR. The values on the $x$- and $y$-axis go from zero to unity. Each point on the curve represents an applied selection criterion on the data or a so called \\textit{cut}.\n\nA straight diagonal line connecting the point $(0, 0)$ with $(1, 1)$ would be the result of a classifier which is merely guessing between two equally likely classes. A curve below this diagonal is worse than guessing and anything above it performs better than guessing. An optimal curve achieves a high TPR value at a very low FPR.\nMultiple methods can therefore be compared by assessing the value and the slope of each method's TPR in dependence on the FPR. 
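As a concrete illustration (a minimal sketch based on the definitions above, not code taken from the analysis), each point of a ROC curve can be obtained by scanning a cut value over the classifier output:\n\\begin{verbatim}\nimport numpy as np\n\ndef roc_points(scores, labels):\n    # labels: 1 for kaons, 0 for non-kaons; scores: classifier output\n    points = []\n    for cut in np.sort(np.unique(scores))[::-1]:\n        accepted = scores >= cut\n        tpr = np.mean(accepted[labels == 1])  # TP relative to all positives\n        fpr = np.mean(accepted[labels == 0])  # FP relative to all negatives\n        points.append((fpr, tpr))\n    return points\n\\end{verbatim}\n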
\\autoref{fig:sample_roc_curve} visually underlines the above described relations.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\textwidth,height=0.38\\textheight,keepaspectratio]{{{../res/Sample Receiver Operating Characteristic (ROC) curve}}}\n\t\\caption{ROC curves for a binary classification problem with each outcome being equally likely.}\n\t\\label{fig:sample_roc_curve}\n\\end{figure}\n\nUsually the points on the left are of most interest as they represent a selection with only a few false elements contaminating the sample.\n\n\\section{Identification efficiencies}\n\\label{sec:efficiency}\n\nThe identification efficiency is defined as the proportion of correctly classified particles of a class relative to all of the available particles belonging to it. Hence, it directly represents the TPR. Both terms will be used as synonyms throughout the thesis.\n\nThe $\\epsilon_{PID}$-matrix is the confusion matrix normalized by row for an exclusive particle classification. The term `exclusive' in this context denotes that each track is labeled with exactly one particle hypothesis. Such a classification can be achieved by, e.g., assigning the track the label of the highest identification variable. This idea is used throughout the analysis.\n\nThe values of the matrix are given by the fraction of particles $i$ classified as $j$ over the true abundance of particle $i$. Hence, its values are\n\n\\begin{equation}\n\t\\epsilon_{i j} = \\frac{N_{i \\text{ classified as } j}}{A_{i \\text{ true}}}.\n\\end{equation}\n\nThe matrix has the shape of a $6 \\times 6$ matrix when listing the confusion probabilities for all six particle species of interest:\n\\begin{equation}\n\t\\begin{pmatrix}\n\t\t\\epsilon_{K K} & \\epsilon_{K \\pi} & \\epsilon_{K e} & \\epsilon_{K \\mu} & \\epsilon_{K p} & \\epsilon_{K d} \\\\\n\t\t\\epsilon_{\\pi K} & \\epsilon_{\\pi \\pi} & \\epsilon_{\\pi e} & \\epsilon_{\\pi \\mu} & \\epsilon_{\\pi p} & \\epsilon_{\\pi d} \\\\\n\t\t\\epsilon_{e K} & \\epsilon_{e \\pi} & \\epsilon_{e e} & \\epsilon_{e \\mu} & \\epsilon_{e p} & \\epsilon_{e d} \\\\\n\t\t\\epsilon_{\\mu K} & \\epsilon_{\\mu \\pi} & \\epsilon_{\\mu e} & \\epsilon_{\\mu \\mu} & \\epsilon_{\\mu p} & \\epsilon_{\\mu d} \\\\\n\t\t\\epsilon_{p K} & \\epsilon_{p \\pi} & \\epsilon_{p e} & \\epsilon_{p \\mu} & \\epsilon_{p p} & \\epsilon_{p d} \\\\\n\t\t\\epsilon_{d K} & \\epsilon_{d \\pi} & \\epsilon_{d e} & \\epsilon_{d \\mu} & \\epsilon_{d p} & \\epsilon_{d d} \\\\\n\t\\end{pmatrix}.\n\\end{equation}\n\nThe definition generalizes to non-normalized matrices, e.g., resulting from non-exclusive cuts, although reading the matrix is then less intuitive. Comparing matrices in this case becomes ambiguous as a particle might belong to multiple classes.\n\nThe diagonal of the matrix contains the identification efficiencies of each particle species. In general, its values should be close to unity while non-diagonal entries should vanish for a good classification approach. The efficiency of a particle classification is always normalized by the abundance of the particle and as such each row may have a different normalization. This is especially important when calculating the overall efficiency which is the fraction of all correctly classified tracks relative to all available tracks. 
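Written out (our paraphrase of this definition, not an equation reproduced from the text), the overall efficiency is the abundance-weighted mean of the diagonal entries:\n\\begin{equation}\n\t\\epsilon_{\\text{overall}} = \\frac{\\sum_{i} A_{i \\text{ true}} \\, \\epsilon_{i i}}{\\sum_{i} A_{i \\text{ true}}}.\n\\end{equation}\n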
In this case, each efficiency on the diagonal has to be weighted with the abundance of the particle.\n\n\\section{Likelihood}\n\\label{sec:likelihood}\n\n\\subsection{Likelihood ratio}\n\\label{sec:likelihood_ratios}\n\nThe ratio of likelihoods is commonly used for comparisons of the goodness of different models. For each hypothesis a likelihood of event~$\\pmb{x}$ occurring is calculated under the assumption the hypothesis is indeed true. The ratio of the likelihoods of two hypotheses $H_0$ and $H_1$\n\\begin{equation}\n\t\\frac{\\mathcal{L}(\\pmb{x}|H_0)}{\\mathcal{L}(\\pmb{x}|H_1)}\n\\end{equation}\ndenotes how many times more likely the event $\\pmb{x}$ is under hypothesis $H_0$ compared to $H_1$.\n\nHowever, the event $\\pmb{x}$ need not necessarily take the form of a simple one dimensional value. It may very well be a composition of, e.g., multiple detector responses. In case the components $x_i$ are independent of one another, the overall likelihood of $\\pmb{x}$ may be constructed by multiplying the separate likelihoods of each $x_i$. Hence, $\\mathcal{L}(\\pmb{x}|H_0)$ is composed of multiple likelihoods each assuming $H_0$ to be true:\n\\begin{equation}\n\t\\mathcal{L}(\\pmb{x}|H_0) = \\prod \\limits_{i} \\mathcal{L}_i(x_i|H_0).\n\\end{equation}\nIn case of event~$\\pmb{x}$ being a detector response, the likelihood $\\mathcal{L}(\\pmb{x}|H_0)$ is the probability of measuring a signal given a particle hypothesis is true. Its value is constructed by multiplying the likelihoods $\\mathcal{L}_i(x_i|H_0)$ for each detector $i$.\n\n\\subsection{Neyman-Pearson}\n\\label{sec:likelihood_ratios_neyman_pearson}\n\nThe Neyman-Pearson lemma is useful for evaluating the goodness of separating two models which have no unknown parameters. It states that a test on the likelihood ratio has the highest probability of correctly rejecting the original hypothesis at a given significance level. In other words: A test on the likelihood ratio provides the highest purity at a given efficiency.\n\nThe purity of a selection is defined as the proportion of correctly classified particles relative to all the identified ones. Its definition is identical to the PPV and as such will be used synonymously throughout the thesis.\n\nHence, by plotting the purity over the likelihood ratio, a monotonically increasing function is to be expected. An idealized version of such a graph is depicted in \\autoref{fig:neyman_pearson_visualization}. Since the underlying data may not be assumed to be a continuous stream, the likelihood ratio is binned as this better represents the actual expected shape.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\textwidth,height=0.38\\textheight,keepaspectratio]{{{../res/Neyman-Pearson Visualization}}}\n\t\\caption{Visualization of a test on the likelihood ratio. A monotonically increasing function should be expected on the basis of the Neyman-Pearson lemma. The small horizontal lines indicate likelihood ratio bins, while the curve represents the overall trend. The pion purity and likelihood ratio are merely used to emphasize the connection to particle physics.}\n\t\\label{fig:neyman_pearson_visualization}\n\\end{figure}\n\n\\section{Neural network}\n\\label{sec:neural_network}\n\nAn artificial neural network or simply neural network is a class of algorithms inspired by the central nervous system of biological beings. 
Instead of electrical signals passing from neuron to neuron with complex biochemical processes involved, an artificial neural network passes on numbers with functions representing neurons.\n\nDespite only employing simplistic building blocks, a neural network is able to model any continuous function arbitrarily well using a single hidden layer with a sufficiently large number of neurons~\\cite{NeuralNetwork:UniversalApproximation}. It is used in hopes of discovering hidden relations among variables and to utilize high dimensional correlations not otherwise obvious.\n\nA simple approach is to stack multiple layers of neurons (\\textit{nodes}) on top of each other and to connect the outputs of the previous layer with inputs of the new layer (\\textit{feed-forward neural network}). A network can be designed arbitrarily deep and provide a multitude of additional feedback loops (\\textit{recurrent neural network}) and further binning restrictions on node-inputs (\\textit{convolutional neural network}).\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=\\textwidth,height=0.4\\textheight,keepaspectratio]{{{../res/Design of an artificial neural network}}}\n\t\\caption{Design of an artificial neural network with three layers, $\\pmb{x}$ as input, $z_i$ as activation function, $V$ and $w$ as weights and $f(\\pmb{x})$ as prediction. Adapted from~\\cite{MachineLearning:NeuralNetworks}.}\n\t\\label{fig:sample_neural_network_design}\n\\end{figure}\n\nA simple feed-forward network is depicted in \\autoref{fig:sample_neural_network_design}. Each line between two nodes represents a connection. In other words, the output of the node at the bottom is passed to the node at the top. The function used for calculating the various values of $z_i$ is called ReLU~\\cite{Hahnloser:NeuralComputation} and takes the form of $\\mathrm{relu}(h) = \\max(0, h)$. In terms of a biological system it can be thought of as a threshold which has to be overcome prior to a signal being passed on.\n\nLayers not representing the input or output are called hidden layers (blue nodes from~\\autoref{fig:sample_neural_network_design}). The dimensions of the input (green nodes) are also referred to as features. A function of a node is called \\textit{activation function} ($z_i$). \\textit{Learning} or \\textit{training} in the context of neural networks refers to the process of adapting parameters or so called \\textit{weights} ($V$ and $w$) of a node. The process of adapting them is performed in batches. The \\textit{batch size} describes the number of individual data points contained within a batch.\\footnotemark{} Each weight is altered according to a gradient which optimizes the desired function. Often weights include a \\textit{bias} which is a constant offset not influenced by any previous neuron. The desired function which is to be optimized is referred to as \\textit{loss function}. It measures the predictive power of the classification. The duty of the \\textit{optimizer} is to adapt the weights in a way which minimizes the function, a task usually done via propagating the error back through the network in a scheme called \\textit{back propagation}.\n\\footnotetext{The last batch may be smaller if the total number of samples is not divisible by the batch size.}\n\nIt is important to avoid making the network too dependent on the specific characteristics of the training data. Otherwise it will simply \\textit{over-fit} the given events without learning the more general concept. 
Hence, the neurons of an over-fitted network are perfectly adjusted to the input which it has already seen. However, the system fails upon receiving anything it has not already seen in this exact form.\n\nSlight amendments are to be made for a multi-class classification problem. Namely, a different activation function than the previously mentioned ReLU is used in the final layer. In this thesis the softmax algorithm is employed as last activation. Its response is given by\n\\begin{equation}\n\tP(c | x) = \\frac{\\exp(x \\cdot w_c)}{\\sum \\limits_{c' = 1}^{\\#\\text{classes}} \\exp(x \\cdot w_{c'})}\n\t\\text{,}\n\\end{equation}\nwith $c$ representing a class, $w_c$ the weights of a class and $x$ the input. The function assigns a value between zero and unity to an element belonging to class $c$. Note that the final output of the network is an exclusive classification into the class with the highest softmax value.\n\nAdditionally, the loss function must be adapted to reflect the existence of more than two classes. In this study the categorical~cross~entropy is chosen. In information theory, it puts a measure on the additional information needed to describe the data if deviating from the true underlying distribution.\n", "meta": {"hexsha": "5e09a4f7108840c6c106186f1066b17058686ca2", "size": 14370, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/thesis/chapters/statistics.tex", "max_stars_repo_name": "Edenhofer/PID-boost", "max_stars_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/thesis/chapters/statistics.tex", "max_issues_repo_name": "Edenhofer/PID-boost", "max_issues_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/thesis/chapters/statistics.tex", "max_forks_repo_name": "Edenhofer/PID-boost", "max_forks_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.6219512195, "max_line_length": 1162, "alphanum_fraction": 0.7822546973, "num_tokens": 3505, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.743168019989179, "lm_q2_score": 0.7431680143008301, "lm_q1q2_score": 0.5522987017072378}}
{"text": "\\documentclass{article}\n\\usepackage[colorlinks=true,urlcolor=blue,citecolor=blue,linkcolor=blue]{hyperref} \n\n\\usepackage{tikz}\n\\usepackage{amsmath}\n\\usepackage{multirow}\n\\usepackage[linesnumbered, ruled, vlined]{algorithm2e}\n\\usepackage{graphicx}% Include figure files\n\\usepackage{subcaption}\n\\usepackage{tabularx}\n\\usepackage{listings}\n\\usepackage{amsthm}\n\\usepackage{cancel}\n\\usetikzlibrary{shapes}\n\\usetikzlibrary{shapes.geometric}\n\\usetikzlibrary{positioning}\n\\usetikzlibrary{snakes}\n\\usepackage[mode=buildnew]{standalone}\n\\makeatletter\n\\def\\parsept#1#2#3{%\n    \\def\\nospace##1{\\zap@space##1 \\@empty}%\n    \\def\\rawparsept(##1,##2){%\n        \\edef#1{\\nospace{##1}}%\n        \\edef#2{\\nospace{##2}}%\n    }%\n    \\expandafter\\rawparsept#3%\n}\n\\makeatother\n\\theoremstyle{definition}\n\\newtheorem{definition}{Definition}[section]\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}{Corollary}[section]\n%\\usepackage[linesnumbered, ruled, vlined, algo2e]{algorithm2e}\n\\lstset{\n    basicstyle=\\ttfamily\\small,\n    numberstyle=\\scriptsize,\n    % numbers=left,\n    backgroundcolor=\\color{white},\n    %backgroundcolor=\\color{white},\n    %frame=single,\n    xleftmargin=2em,\n    tabsize=2,\n    rulecolor=\\color{black},\n    %title=\\lstname,\n    escapeinside={(*}{*)},\n    breaklines=true,\n    %breakatwhitespace=true,\n    %framextopmargin=2pt,\n    %framexbottommargin=2pt,\n    frame=bt,\n    extendedchars=true,\n    inputencoding=utf8,\n    columns=fullflexible,\n    %escapeinside={(*@}{@*)},\n}\n\n% the colors of the power rangers\n\\newcommand{\\blue}[1]{[{\\bf  \\color{blue}{JG: #1}}]}\n\\newcommand{\\red}[1]{[{\\bf  \\color{red}{MD: #1}}]}\n\\newcommand{\\yellow}[1]{[{\\bf  \\color{yellow}{RG: #1}}]}\n\\newcommand{\\Eq}[1]{Eq.~(\\ref{#1})}\n\\newcommand{\\Fig}[1]{Fig.~\\ref{#1}}\n\\newcommand{\\ra}[1]{\\renewcommand{\\arraystretch}{#1}}\n\\newcommand{\\pdv}[2]{{\\frac{\\partial#1}{\\partial#2}}}\n\\newcommand\\given[1][]{\\:#1\\vert\\:}\n\\def\\unitdisk#1#2#3#4{\n\\begin{scope}[shift={#1}]\n    \\draw [fill=black] (0, 0) coordinate (#2) circle (0.1cm) node[#3] {#4};\n    \\draw [thick](0,0) circle (1cm);\n\\end{scope}\n}\n\n\\title{Quantum simulation on a random tensor network}\n\\begin{document}\n\\maketitle\n\n\\section{The Dirac-Frenkel variational principle}\nWe have a Hamiltonian for a quantum system with $K$ terms,\n\\begin{equation}\n    H  = \\sum_{k=1}^{K} \\prod_{i=1}^{n} o_i^k\n\\end{equation}\nwhere $n$ is the system size and $o_i^k$ is a local operator on site $i$. We can see each term is a product of local operators.\nLet $\\psi(\\theta)$ be the ansatz for the wave function. To obtain the dynamics of the parameters, we treat $x \\equiv \\pdv{\\theta}{t}$ as the variational parameters, and the goal is to minimize~\\cite{Broeckhove1988}\n\\begin{equation}\n\\mathcal{L} = \\left\\|i\\pdv{\\psi}{\\theta}x-H\\psi\\right\\|^2.\n\\end{equation}\nAt the extrema, we have $\\pdv{\\mathcal{L}}{x} = 0$. 
That is\n\\begin{align}\n0 & = (-i\\pdv{\\psi^*}{\\theta})(i\\pdv{\\psi}{\\theta}x-H\\psi) + (-ix\\pdv{\\psi^*}{\\theta}-\\psi^*H)(i\\pdv{\\psi}{\\theta})\\\\\n& = \\pdv{\\psi^*}{\\theta}\\pdv{\\psi}{\\theta}x+i\\pdv{\\psi^*}{\\theta}H\\psi + x\\pdv{\\psi^*}{\\theta}\\pdv{\\psi}{\\theta}- i\\psi^*H\\pdv{\\psi}{\\theta}\n\\end{align}\n\nFinally, we arrive at the Dirac-Frenkel variational principle by taking the first two terms\n\\blue{Because the wave function is an analytic complex valued function, when the real parts of two functions are the same, their complex components are the same too.}\n\\begin{equation}\n\\pdv{\\psi^*}{\\theta}\\pdv{\\psi}{\\theta}x = -i\\pdv{\\psi^*}{\\theta}H\\psi\n\\end{equation}\n\n\\blue{In tensor network representation, we can normalize tensor networks easily, so we do not need to go to the unnormalized representation.}\n\n\\subsection{The automatic differentiation approach}\n\nThe first term can be computed as\n\\begin{align}\n    \\begin{split}\n        &\\mathcal{L}_1(\\theta, \\theta') = \\psi(\\theta)^*\\psi(\\theta'),\\\\\n        &\\mathcal{G}_1(\\theta) = \\pdv{\\mathcal{L}_1(\\theta, \\theta')}{\\theta'}x,\\\\\n        &\\pdv{\\psi^*}{\\theta}\\pdv{\\psi}{\\theta}x = \\pdv{\\mathcal{G}_1(\\theta)}{\\theta} \\given[\\Big]_{\\theta =\\theta'}\n    \\end{split}\\label{eq:ad1}\n\\end{align}\nwhere $\\psi(\\theta) = \\texttt{normalize}(\\texttt{tensornetwork}(\\theta))$.\nThe second term can be computed as\n\\begin{align}\n    \\begin{split}\n        &\\mathcal{L}_2(\\theta) = -i\\psi(\\theta)^*H\\psi\\\\\n        &-i\\pdv{\\psi^*}{\\theta}H\\psi = \\pdv{\\mathcal{L}_2(\\theta)}{\\theta}\n    \\end{split}\\label{eq:ad2}\n\\end{align}\n\n\\subsection{Time and space complexity}\nLet us denote the time complexity of contracting the overlap $\\psi^*\\psi$ as $1$.\n\\Eq{eq:ad1} can be evaluated by reverse differentiating the first order gradient program $\\mathcal{G}_1$.\nObtaining the gradient of the program through back propagation only introduces a constant overhead, and let us denote this constant as $c$. Then the overhead of obtaining the Hessian is $c^2$.\nThe time to evaluate expectation values of all terms in the Hamiltonian in \\Eq{eq:ad2} is $K$.\nHence the overall time overhead is $K+c^2$. 
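For a rough sense of scale (numbers chosen purely for illustration), with $K=100$ Hamiltonian terms and a back propagation constant of $c=5$, the overall time overhead would be $100+5^2=125$ times the cost of a single overlap contraction. 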
The space overhead is proportional to the size of intermediate contraction results of tensor networks, which is dominated by the largest tensors.\n\n\\bibliographystyle{plain}\n\\bibliography{refs}\n\\end{document}\n", "meta": {"hexsha": "fa3bd01e64160883e0b4c45113cf79495c93bf31", "size": 5198, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/note.tex", "max_stars_repo_name": "RydbergBoston/TensorNetworkEvolve.jl", "max_stars_repo_head_hexsha": "3a82326c14c3487a73b7eb6b8029b296c51932fd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/note.tex", "max_issues_repo_name": "RydbergBoston/TensorNetworkEvolve.jl", "max_issues_repo_head_hexsha": "3a82326c14c3487a73b7eb6b8029b296c51932fd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2021-12-17T22:35:14.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-23T17:34:21.000Z", "max_forks_repo_path": "docs/note.tex", "max_forks_repo_name": "RydbergBoston/TensorNetworkEvolve.jl", "max_forks_repo_head_hexsha": "3a82326c14c3487a73b7eb6b8029b296c51932fd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-22T17:15:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T17:15:04.000Z", "avg_line_length": 39.0827067669, "max_line_length": 192, "alphanum_fraction": 0.6916121585, "num_tokens": 1757, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.743168019989179, "lm_q1q2_score": 0.5522986974798388}}
{"text": "\n\\section{The \\isheap algorithm}\n\\Label{sec:isheap}\n\nThe \\isheap algorithm of the \\cxx Standard Library \\cite[\\S 28.7.7.5]{cxx-17-draft}\nworks on generic sequences. \nFor our purposes we have modified the generic implementation\nto that of an array of type \\valuetype.\nThe signature now reads:\n\n\\begin{lstlisting}[style = acsl-block]\n\n    bool is_heap(const value_type* a, int n);\n\\end{lstlisting}\n\nThe algorithm \\isheap checks whether a given array satisfies the heap properties\nwe have semi-formally described at the beginning of this chapter.\nIn particular, \\isheap will return \\inl{true} when\ncalled with the array argument from Figure~\\ref{fig:heap-array}.\n\n%\\clearpage\n\n\\subsection{Formal specification of \\isheap}\n\nThe specification of \\isheap is shown in the following listing.\nThe function returns~\\inl{true} if and only if the input array\nsatisfies the predicate \\logicref{Heap}.\n\n\\input{Listings/is_heap.h.tex}\n\n\\subsection{Implementation of \\isheap}\n\nOur implementation of \\isheap in the following listing\nutilizes the function \\specref{isheapuntil}.\n\n\\input{Listings/is_heap.c.tex}\n\n\\clearpage\n\n", "meta": {"hexsha": "984af3ad71f6407b8fe8c69935e6f5c14d593689", "size": 1101, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Informal/heap/is_heap.tex", "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 90, "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_issues_repo_path": "Informal/heap/is_heap.tex", "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 22, "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_forks_repo_path": "Informal/heap/is_heap.tex", "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 19, "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "avg_line_length": 27.525, "max_line_length": 83, "alphanum_fraction": 0.7792915531, "num_tokens": 286, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7461390043208003, "lm_q1q2_score": 0.5522729615155751}}
{"text": "\\documentclass[letterpaper,hidelinks]{article}\n\\usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{listings}\n\\usepackage{enumitem}\n\\usepackage{algorithm}\n\\usepackage{algorithmic}\n\\usepackage{hyperref}\n\\lstset{language=R,%\n\tbasicstyle=\\footnotesize,\n\t%basicstyle=\\color{red},\n\tbreaklines=true,%\n\tshowstringspaces=false,%without this there will be a symbol in the places where there is a space\n\tnumbers=left,%\n\tnumbersep=9pt, % this defines how far the numbers are from the text\n\t%emph=[2]{word1,word2}, emphstyle=[2]{style},    \n}\n \n\\numberwithin{equation}{section}\n\\author{KK Feng}\n\\title{AdaBoost and K-Fold Cross-Validation on Hand-Written Digits}\n\\date{}\n\\begin{document}\n\\maketitle\n\\section{Problem}\n\\subsection{Description}\nClassify grayscale images for hand-written digits.\n\\subsection{Data}\n\\begin{itemize}\n\\item \\textbf{uspsdata.txt}: contains a matrix with one data point (= vector of length 256) per row. The 256-vector in each row represents a 16 by 16 image of a handwritten number.\n\\item \\textbf{uspscl.txt}: contains the corresponding class labels.\nThe data contains two classes - the digits 5 and 6 - so the class labels are stored as -1 and +1, respectively.\n\\end{itemize}\n\\subsection{Idea}\n\\begin{itemize}\n\\item Adaptive Boosting algorithm with decision stumps as weak learners.\n\\item K-Fold Cross-Validation to tune the number of weak learners.\n\\end{itemize}\n\n\\section{Solution}\n\\subsection{Implementation}\nTo train decision stumps, we implement the following algorithm\n\\begin{algorithm}\n\\caption{A simple training algorithm for decision stumps}\n\\begin{algorithmic}[1]\n\\REQUIRE\nData $X=(x_1,\\cdots,x_n)$ where $x_i\\in\\mathbb{R}^d$, weight $w$, label $y$\n\\FOR{$j=1:d$}\n\\STATE{Sort samples $x_i$ in ascending order along dimension $j$}\n\\FOR{$i=1:n$}\n\\STATE{Compute cumulative sums $cum_{i}^j=\\sum_{k=1}^iw_ky_k$}\n\\ENDFOR\n\\STATE{Threshold $\\theta_j$ is obtained at the extrema of $cum_{i}^j$}\n\\STATE{Label $m_j$ is obtained from the sign of cumulative sum at extrema}\n\\STATE{Compute the error rate of classifier $(\\theta_j,m_j)$ along dimension $j$}\n\\ENDFOR\n\\STATE{Find optimal $j^*,\\theta^*$ for which the classifier $(\\theta_j,m_j)$ gives the minimum error rate}\n\\end{algorithmic}\n\\end{algorithm}\\\\\n(Reference: \\href{http://ais.informatik.uni-freiburg.de/teaching/ws09/robotics2/pdfs/rob2-10-adaboost.pdf}{http://ais.informatik.uni-freiburg.de/teaching/ws09/robotics2/pdfs/rob2-10-adaboost.pdf} )\\\\\\\\\n\n\\subsection{Plot}\nPlease find the plots for training error and test error as a function of $b$ (number of weak learners) in the following. Note that the cross validation error is the average of the errors of the 5 folds.\n\\begin{center}\n\\includegraphics[width=16cm]{1}\n\\end{center}\nFrom the plot, we can see that for the USPS data and using 5-fold cross validation, the training error reaches the bottom of the curve when we use approximately 20 weak learners, and the training error curve becomes flat when the number of weak learners grows beyond 20. On the other hand, we need around 40 weak learners to ensure that we have the optimal test error. 
If the number of weak learners grows beyond 40, the test error merely oscillates slightly around the optimal test error obtained at 40 weak learners.\n\n\\subsection{Code}\n\\lstinputlisting{adaboost.R}\n\\end{document}", "meta": {"hexsha": "fe3ea38aac65995e8d6a824a339d47510d5d7f0c", "size": 3367, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "classification-adaboost-hand-written-digits/readme.tex", "max_stars_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_stars_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classification-adaboost-hand-written-digits/readme.tex", "max_issues_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_issues_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classification-adaboost-hand-written-digits/readme.tex", "max_forks_repo_name": "zhengkaifeng/classification-clustering-machine-learning", "max_forks_repo_head_hexsha": "85fe0ffcad52edd3c8a1a4d5f24fa15b3d5b6ba7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.5, "max_line_length": 514, "alphanum_fraction": 0.771012771, "num_tokens": 955, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.746138993030751, "lm_q2_score": 0.7401743563075446, "lm_q1q2_score": 0.5522729488824957}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Homework}\n\\date{September 17, 2014}\n\\author{Jon Allen}\n\\maketitle\nSection 1.3: \\#12, 20\nSection 2.1: \\# 18, 8\n\\renewcommand{\\labelenumi}{1.\\arabic{enumi}}\n\\renewcommand{\\labelenumii}{\\arabic{enumii}.}\n\\renewcommand{\\labelenumiii}{(\\alph{enumiii})}\n\\begin{enumerate}\n\\setcounter{enumi}{2}\n\\item\n\\begin{enumerate}\n\\setcounter{enumii}{11}\n%1.2 12\n\\item\nShow that $4\\cdot(n^2+1)$ is never divisible by 11.\n\\subsubsection*{proof}\n%We prove the result by assuming that 11 divides $4\\cdot(n^2+1)$ and finding a contradiction.\nFirst we note that $\\gcd(4,11)=1$ and 11 is prime, so if $11|4\\cdot(n^2+1)$ then, by Euclid's lemma, $11|(n^2+1)$.\nThat is to say\n\\begin{align*}\n  n^2+1&\\equiv0\\mod11\\\\\n  n^2&\\equiv-1\\mod11\\\\\n  n^2&\\equiv10\\mod11\\\\\n\\end{align*}\nNoting that $\\mathbb{Z}_{11}$ partitions $\\mathbb{Z}$ we see that there must exist some $[a]_{11}\\in\\mathbb{Z}_{11}$ such that $[a^2]_{11}=[10]_{11}$. So looking at all the elements of $\\mathbb{Z}_{11}$:\n\\begin{align*}\n  [0^2]&=[0]&\n  [1^2]&=[1]&\n  [2^2]&=[4]&\n  [3^2]&=[9]&\n  [4^2]&=[5]&\n  [5^2]&=[3]\\\\\n  [6^2]&=[3]&\n  [7^2]&=[5]&\n  [8^2]&=[-2]=[9]&\n  [9^2]&=[4]&\n  [10^2]&=[1]&\n\\end{align*}\nWe see by exhaustion that there is no $[a]_{11}\\in\\mathbb{Z}_{11}$ such that $[a^2]_{11}=[10]_{11}$, and so $11$ does not divide $4\\cdot(n^2+1)$.\n$\\Box$\n\\setcounter{enumii}{19}\n%1.2 20\n\\item\nSolve the following system of congruences.\n\\begin{align*}\n  2x&\\equiv 5\\mod 7&3x&\\equiv4\\mod 8\n\\end{align*}\n\\emph{Hint:} First reduce to the usual form.\n\\begin{align*}\n  2x&\\equiv 5\\mod 7&3x&\\equiv4\\mod 8\\\\\n  \\gcd(2,7)&=1&\\gcd(3,8)&=1\\\\\n  \\intertext{So both congruences have one solution}\n  c\\cdot2&\\equiv1\\mod 7&c\\cdot3&\\equiv1\\mod8\\\\\n  4\\cdot2&\\equiv1\\mod 7&3\\cdot3&\\equiv1\\mod8\\\\\n  x&\\equiv5\\cdot4\\mod7&x&\\equiv3\\cdot4\\mod8\\\\\n  x&\\equiv6\\mod7&x&\\equiv4\\mod8\\\\\n\\end{align*}\nNow because $\\gcd(7,8)=1$ we can apply the Chinese Remainder Theorem.\n\\begin{align*}\n  7a+8b&=1\\\\\n  7(-1)+8(1)&=1\\\\\n  4(7)(-1)+6(1)(8)&=48-28=20\\text{ is a specific solution}\\\\\n  20+7\\cdot8t&=20+56t\\text{ is all solutions}\n\\end{align*}\n\\end{enumerate}\n\\renewcommand{\\labelenumi}{2.\\arabic{enumi}}\n\\setcounter{enumi}{0}\n\\item\n\\begin{enumerate}\n\\setcounter{enumii}{7}\n%2.1 8\n\\item\nWhich of the following formulas define functions from the set of rational numbers into itself? (Assume in each case the $n,m$ are integers and that $n$ is nonzero.)\n\\begin{enumerate}\n\\item\n$\\displaystyle f\\left(\\frac{m}{n}\\right)=\\frac{m+1}{n+1}$\n\nNot a function from $\\mathbb{Q}\\to\\mathbb{Q}$ because when $n=-1$ there is no image.\n\\item\n$\\displaystyle g\\left(\\frac{m}{n}\\right)=\\frac{2m}{3n}$\n\nThis is a function because rational numbers are closed under multiplication so for any $q\\in\\mathbb{Q}$ we know that $\\frac{2}{3}q\\in\\mathbb{Q}$\n\\item\n$\\displaystyle h\\left(\\frac{m}{n}\\right)=\\frac{m+n}{n^2}$\n\nThis is not a function. Counterexample: $\\frac{1}{2}=\\frac{2}{4}$. $\\frac{1+2}{2^2}=\\frac{3}{4}\\ne\\frac{2+4}{4^2}=\\frac{6}{16}=\\frac{3}{8}$. 
$\\frac{1}{2}$ has more than one image so the map is not well defined and not a function.\n\\item\n$\\displaystyle k\\left(\\frac{m}{n}\\right)=\\frac{(m-n)^2}{n^2}$\n\n$\\displaystyle \\frac{(m-n)^2}{n^2}=\\frac{m^2-2mn+n^2}{n^2}=\\left(\\frac{m}{n}\\right)^2-2\\frac{m}{n}+1$. This is a function. It will have the same result independent of the representation of the rational number, and has an image for every element of $\\mathbb{Q}$.\n\\item\n$\\displaystyle p\\left(\\frac{m}{n}\\right)=\\frac{4m^2}{7n^2}-\\frac{m}{n}$\n\nIs a function of rationals. They are closed under multiplication and subtraction. All equivalent elements will have the same image, regardless of their representation in terms of $m,n$.\n\\item\n$\\displaystyle q\\left(\\frac{m}{n}\\right)=\\frac{m+1}{m}$\n\nNot a function. No representation of zero has an image. For example $\\frac{0}{1}$ does not have an image as $\\frac{1}{0}$ is undefined.\n\\end{enumerate}\n\\setcounter{enumii}{17}\n%2.1 18\n\\item\nLet $A$ be a nonempty set, and let $f:A\\to B$ be a function. Prove that $f$ is one-to-one if and only if there exists a function $g:B\\to A$ such that $g\\circ f=1_A$.\n\\subsubsection*{proof}\nLet's start by assuming that $f$ is a one to one function. Because $f$ is a function, we know that for every $x\\in A$ there exists some $x'\\in B$ with $f(x)=x'$. Furthermore, because $f$ is one to one, we know that $x$ is the only element of $A$ with image $x'$, so we may define $g(x')=x$ without ambiguity. Elements of $B$ that aren't images of elements of $A$ under $f$ can be mapped to an arbitrary fixed $a\\in A$; here we use that $A$ is nonempty. Now we see that $g(f(x))=g(x')=x$ and so we've found a function that satisfies our result.\n\nNow let us assume that the function $f$ is not one to one. Then there are two distinct elements $a_1\\ne a_2$ in $A$ that have the same image in $B$, i.e. $f(a_1)=f(a_2)$. For any function $g:B\\to A$ we then have $(g\\circ f)(a_1)=g(f(a_1))=g(f(a_2))=(g\\circ f)(a_2)$, while $1_A(a_1)=a_1\\ne a_2=1_A(a_2)$. Hence no choice of $g$ can give $g\\circ f=1_A$.\n$\\Box$\n\\end{enumerate}\n\\end{enumerate}\n\\end{document}\n", "meta": {"hexsha": "2ba495f6e5768d14fb51d1776e3ff4e4e088aaa9", "size": 5620, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "abstract algebra/abstract-hw-2014-09-17.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "abstract algebra/abstract-hw-2014-09-17.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "abstract algebra/abstract-hw-2014-09-17.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.96, "max_line_length": 794, "alphanum_fraction": 0.684519573, "num_tokens": 2122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390163, "lm_q2_score": 0.7461389817407016, "lm_q1q2_score": 0.552272944802365}}
{"text": "\\documentclass[a4paper,11pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{aas_macros}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{graphicx}\n\\usepackage{enumitem}\n\\usepackage{listings}\n\\usepackage{xcolor} % for the colors in the Python listings\n\\usepackage{xspace}\n\\usepackage{physics}\n\\usepackage[margin=2.5cm]{geometry} % sets the width of the page margins\n\n\\author{Anna Juranova}\n\\title{Ferrers Potential in galpy}\n\n% Dots following numbers of section:\n\\renewcommand{\\thesection}{\\arabic{section}.}\n\\renewcommand{\\thesubsection}{\\thesection\\arabic{subsection}.}\n% Only one letter wide space between the number and the title of the chapter:\n\\makeatletter \\def\\@seccntformat#1{\\csname the#1\\endcsname\\hspace{1ex}} \\makeatother\n\n\\lstdefinestyle{customc}{\n\t%belowcaptionskip=1\\baselineskip,\n\tbreaklines=true,\n\t%frame=L,\n\t%xleftmargin=\\parindent,\n\tlanguage=Python,\n\tshowstringspaces=false,\n\tkeepspaces=true,\n\tbasicstyle=\\footnotesize\\ttfamily,\n\tkeywordstyle=\\bfseries\\color{green!40!black},\n\tcommentstyle=\\itshape\\color{purple!40!black},\n\t%breakatwhitespace=true,\n\tidentifierstyle=\\color{blue},\n\tstringstyle=\\color{orange},\n}\n\n\\lstset{escapechar=@,style=customc}\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{document}\n\t\n\t\\begin{center}\n\t\t\\huge{Ferrers Potential in galpy}\\\\\n\t\\end{center}\n\n\t\\section{Variables and Units}\n%\tVARIABLES AND UNITS\t\n           $ \\mathrm{amp} $ -- Amplitude to be applied to the potential (default: 1); can be a Quantity with units of mass or $ G \\times \\mathrm{mass} $\\\\\n           $ a $ -- Scale radius (can be Quantity)\\\\\n           $ n $ -- Power of Ferrers density ($ n > 0 $), not necessarily an integer; calculations generally fail for $ n > 4 $\\\\\n           $ b $ -- y-to-x axis ratio of the density, usually $ b < 1 $\\\\\n           $ c $ -- z-to-x axis ratio of the density, usually $ c < b $\\\\\n           $ \\Omega_b $ -- pattern speed of the bar, $ \\vec{\\Omega}_b = \\hat{e}_z\\,\\Omega_b $\\\\\n           $ \\mathrm{pa} $ -- If set, the initial position angle of the bar in the xy plane measured from x (rad or Quantity)\\\\\n           normalize -- if True, normalize such that $ v_c(1.,0.)=1. $, or, if given as a number, such that the force is this fraction of the force necessary to make $ v_c(1.,0.)=1. $.\\\\\n           $ r_0 $, $ v_0 $ -- Distance and velocity scales for translation into internal units (default from configuration file)\\\\\n           Variables $ \\alpha $ appearing in the code in the form $ \\_\\alpha2 $ carry the value of $ \\alpha^2 $ initialized at the beginning of the class \\_\\_init\\_\\_ function.\n\n\t\\section{Functions}\n\t(All functions are implemented for the rotating potential in a frame of reference in which the bar lies aligned with the x axis.)\n%\tFUNCTIONS\t\n\t\t \n\t\t\\subsection{Potential} % POTENTIAL ITSELF\n\t\t\tPurpose: Evaluation of the bar potential as a function of Cartesian coordinates in the corotating frame of reference. 
\\\\\n   \t\t\\begin{equation}\n   \t\t\\Phi(\\vec{x}) = \\frac{-\\mathrm{amp}\\,b\\,c}{4(n+1)}\\,\\int_{\\lambda}^{\\infty} A^{n+1}(\\tau)\\,\\mathrm{d} \\tau\n   \t\t\\end{equation}\n   \t\t\twhere\n\t\t\\begin{equation}\n\t\tA^{\\nu}(\\tau) = \\frac{\\left(1- \\sum_{i=1}^{3} \\frac{x_i^2}{\\tau + a_i^2}\\right)^{\\nu}}{[(\\tau + a^2)(\\tau + a^2b^2)(\\tau + a^2c^2)]^\\frac{1}{2}}\n\t\t\\end{equation}\n   \t\t\tis part of an integrand which is repeatedly used in the following functions. The lower limit for the integral, based on the definition of the density distribution, is given by the relation:\n \n\t\t\\begin{equation}\n\t\t\\lambda = \\begin{cases}\n\t\t\\text{unique positive solution of} \\, m^2(\\lambda)=1, &  \\text{for}\\ m \\geq 1\\\\\n\t\t0\\,, & \\text{for}\\ m < 1\n\t\t\\end{cases} \\\\\n\t\t\\end{equation}\n\t\twhere $ m^2(\\lambda) = \\sum_{i=1}^{3} \\frac{x_i^2}{\\lambda + a_i^2} $.\t\n\t\n\t% they have the units in density \\rho_0 wrong in paper one (probably because of the constants before the physical variables obtained somehow??) but still! [\\rho] \\neq s^{-2} !!\n\t\t\\subsection{Density} % DENSITY\n\t\t\tPurpose: Evaluation of the density as a function of (x,y,z) in the aligned coordinate frame from cylindrical coordinates given as input\\\\\n\t\t\\begin{equation*}\n\t\t\\rho(m^2) = \\begin{cases}\n\t\t\\frac{\\mathrm{amp}}{4\\,\\pi\\,a^3}\\,(1-(m/a)^2)^n, & \\text{for}\\, m < a \\\\\n\t\t0, & \\text{for}\\, m \\geq a\n\t\t\\end{cases}\n\t\t\\end{equation*}\n\t\t\twhere\n\t\t\\begin{equation}\n\t\tm^2 = x^2 + \\frac{y^2}{b^2}+\\frac{z^2}{c^2}\n\t\t\\end{equation}\n\t\t\n\t\t\\subsection{Force} % FORCE\n\t\t\tEvaluation of the x component of the force as a function of (x,y,z) in the aligned coordinate frame, which is used for evaluation of the force in cylindrical coordinates and then in orbit integration; does not take into account the bar's rotation or initial position and therefore shall not be used directly.\\\\\n   \t\t\\begin{equation}\n   \t\tF_i = \\frac{-\\mathrm{amp}\\,b\\,c}{2}\\, \\int_{\\lambda}^{\\infty} \\frac{x_i}{a_i^2 + \\tau} A^{n}(\\tau)\\,\\mathrm{d} \\tau\n   \t\t\\end{equation}  \n\t\twhere\n\t\t\\begin{equation*}\n\t\t\ta_1 = a,~ a_2 = ab,~a_3 = ac\n\t\t\\end{equation*}\n\t\t\n\t\t\n\t\t% SECOND DERIVATIVE\n\t\t\\subsection{General Second Derivative}\n\t\t\tGeneral 2nd derivative of the potential as a function of (x,y,z) in the aligned coordinate frame \\\\\n\t\t\\begin{equation}\n\t\t\\Phi_{\\mathrm{ij}} = -\\frac{1}{4}\\,b\\,c \\,\\Phi'_{\\mathrm{ij}}\t\n\t\t\\end{equation}\n\t\t\n\t\t% INTEGRATION FOR SECOND DERIVATIVE\t\n\t\t\\subsection{Integration for Second Derivative}\n\t\t\tIntegral that gives the 2nd derivative of the potential in x,y,z \\\\\n\t\t\t\n\t\t\t\\noindent The derivative is generally $ \\pdv{\\Phi}{x_i}{x_j} $; for $ i=j $ the integral to be evaluated is: \n\t\t\t\n\t\t\\begin{equation}\n\t\t\\Phi'_{\\mathrm{ii}} = \\int_{\\lambda}^{\\infty} \n\t\t \\left[\\frac{4\\,n\\,x_i^2}{(\\tau + a_i^2)^2} \\,A^{\\nu-1}(\\tau)\n\t\t- \\frac{2}{\\tau + a_i^2} A^{\\nu}(\\tau)\\right] \\mathrm{d}\\tau\n\t\t\\end{equation}\n\t\t\n\t\t\t\\noindent In all other cases, the integral has this form:\n\t\t\\begin{equation}\n\t\t\\Phi'_{\\mathrm{ij}} = \\int_{\\lambda}^{\\infty} \n\t\t\\frac{4\\,n\\,x_i\\,x_j}{(\\tau + a_i^2)(\\tau + a_j^2)} A^{\\nu-1}(\\tau) \\,\\mathrm{d}\\tau\n\t\t\\end{equation}\n\n\t% INERTIAL FRAME\t\n\t\t\\subsection{Second Derivative in Nonrotating FoR}\n\t\t\tTransformation of the second derivative from the frame of reference which is 
corotating with the potential into the nonrotating frame\\\\\n\t\t\\begin{equation}\n\t\t\t\\Phi_{x'x'} = \\cos[2](\\Omega_b t)\\Phi_{xx} - 2 \\sin(\\Omega_b t) \\cos(\\Omega_b t)\\Phi_{xy} + \\sin[2](\\Omega_b t)\\Phi_{yy}\\\\\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{equation}\n\t\t\t\\Phi_{y'y'} = \\cos[2](\\Omega_b t)\\Phi_{yy} + 2 \\sin(\\Omega_b t) \\cos(\\Omega_b t)\\Phi_{xy} + \\sin[2](\\Omega_b t)\\Phi_{xx}\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{equation}\n\t\t\t\\Phi_{z'z'} = \\Phi_{zz}\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{equation}\n\t\t\t\\Phi_{x'y'} = [\\cos[2](\\Omega_b t) - \\sin[2](\\Omega_b t)]\\Phi_{xy} + \\sin(\\Omega_b t) \\cos(\\Omega_b t)[\\Phi_{xx} - \\Phi_{yy}]\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{equation}\n\t\t\t\\Phi_{x'z'} = \\cos(\\Omega_b t)\\Phi_{xz} - \\sin(\\Omega_b t) \\Phi_{yz}\n\t\t\\end{equation}\n\t\t\n\t\t\\begin{equation}\n\t\t\t\\Phi_{y'z'} = \\sin(\\Omega_b t)\\Phi_{xz} + \\cos(\\Omega_b t) \\Phi_{yz}\n\t\t\\end{equation}\t\n\n\t%MILKY WAY BAR POTENTIAL\n\t\\section{Milky Way Bar Potential}\n\t\t\n\t\tValues of parameters used for further work are set as the final state values in \\cite{MachadoManos:2016}, that is:\n\t\t\n\t\t\\begin{equation*}\n\t\t\ta = 8\\,kpc,~\n\t\t\tb = 0.35,~\n\t\t\tc = 0.2375,~\n\t\t\t\\mathrm{amp} = 3.3 \\times 10^{10}\\, M_{\\odot},~\n\t\t\t\\Omega_b = 10\\,km/s/kpc.\n\t\t\\end{equation*}\n\t\n\t\n\t\t\\nocite{*} % Insert publications even if they are not cited in the poster\n\t\t\n\t\n\t\n\\end{document}", "meta": {"hexsha": "db46e0135058c456bf8f604e5f61e49d9370aa58", "size": 7462, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc_tex/document.tex", "max_stars_repo_name": "annajur/cbp", "max_stars_repo_head_hexsha": "c7a8d567d93814886a5c47a2eb1cc5c8c6797c55", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc_tex/document.tex", "max_issues_repo_name": "annajur/cbp", "max_issues_repo_head_hexsha": "c7a8d567d93814886a5c47a2eb1cc5c8c6797c55", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc_tex/document.tex", "max_forks_repo_name": "annajur/cbp", "max_forks_repo_head_hexsha": "c7a8d567d93814886a5c47a2eb1cc5c8c6797c55", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.7759562842, "max_line_length": 309, "alphanum_fraction": 0.6472795497, "num_tokens": 2600, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8872045847699186, "lm_q2_score": 0.6224593171945417, "lm_q1q2_score": 0.5522487600477505}}
{"text": "\\section{Evaluation}\\label{sec:evaluation:clustering}\nClustering for \\bslongs{} has two aspects that we must evaluate: how pure the \\bslongs{} (clusters) are and how many of the \\isols{} in \\cplop{} end up in a cluster (as opposed to noise).\nThe former tests whether the \\ecoli{} strains stay relatively unique to the \\spec{} from which they come, the core theory of library-based \\mst{}.\nThe latter tells us how effectively the method covers the \\isols{} in \\cplop{}.\n\n\\autoref{sec:methodology:clustering} describes how we can use clustering as a \\mst{} method --- cluster an unknown \\isol{} along with the \\isols{} in \\cplop{} and classify it as the most plural species of the cluster.\nCluster purity can easily gauge how effective this technique is at \\mst{}, since the concept readily captures how pure, from a \\spec{} perspective, a cluster is.\n\\Dbased{} clustering algorithms such as \\dbscan{} may not cluster every datapoint, as mentioned in \\autoref{sec:background:dbscan}, labeling some datapoints (\\isols{}) as noise and thus leaving them unclustered.\n%Thus, it is crucial that our \\mst{} method has the ability to assert a classification on any \\isol{} we provide it, driving us to use \\kNN{} for classification.\n%Nevertheless, by applying \\dbscan{} to \\cplop{} \\isols{}, we gained insight into the nature of \\ecoli{} strains that we discuss in \\autoref{sec:results:clustering}.\n\n\\subsection{Cluster and Clustering Purity}\nIn this paper we look at the results of clustering \\cplop{} data using this algorithm from the perspective of cluster purity. We call a cluster (\\bslong{}) \\textit{100\\% pure}\nif all isolates that belong to it come from the same \\spec{}. \n\nOf interest to us is the following information:\n\\begin{enumerate}\n    \\item The number of 100\\% pure clusters and the percentage of bacterial isolates from \\cplop{} clustered into pure clusters.\n    \\item The structure of impure clusters: specifically, whether a dominant \\spec{} can\n    be clearly identified in each cluster.\n    \\item Coverage: the total number of \\cplop{} isolates found to belong to a strain.\n    \\item MST Accuracy: the percentage of isolates for which the strain-based MST procedure produces the correct response.\n\\end{enumerate}\nThus, our core measure is \\textit{cluster purity}, the proportion of a cluster that comes from the most plural \\spec{} of that particular cluster.\n\\index{cluster purity}\nA \\textit{100\\% pure cluster} is a cluster which only contains data points (\\isols{}) with the same class label (same \\spec{} of origin). \n\nConsider a cluster $C=\\{c_1,\\ldots, c_K\\}$. Let $s(c)$ refer to the species of isolate $c$.\nLet $m$ be the plurality species label for data points in $C$, and let the total number of points in\n$C$ with $s(c) = m$ be $s_m$. 
Then the \\textit{individual cluster purity} $\\nu$ of cluster $C$ is:\n\\[\n    \\nu(C) = \\frac{s_m}{K}\n\\]\n\nIn addition to computing the purity of individual clusters we want to have an understanding of the overall purity on the entire dataset.\nGiven a \\textit{clustering} $\\mathcal{C} = \\{C_1,\\dots,C_n\\}$ on a dataset, we define the size $\\mathcal{M}$ of the set of clusters: \n\\index{clustering}\n\\begin{equation}\\label{eq:num_isols}\n\\mathcal{M} = \\sum_{i = 1}^{n} |C_i|\n\\end{equation}\nThe \\textit{overall clustering purity} is:\n\\index{overall clustering purity}\n\\begin{equation}\\label{eq:overall_clustering}\n\\sum_{i=1}^{n} \\frac{|C_i|}{\\mathcal{M}}\\cdot\\nu(C_i)\n\\end{equation}\nOne can think of \\eqref{eq:overall_clustering} as a form of weighted arithmetic mean of the purities, where the size of the cluster adds more weight to the value.\n\n\\subsection{Clustering Coverage}\n\n% COVERAGE --------------------\n%\\subsection{Clustering Coverage} \n\\label{sec:validation:coverage}\nCoverage of the dataset is important to an effective \\mst{} method.\nThe density-based clustering method we use has one key disadvantage: a clustering run  with the parameter \\minneigh{}, treats all points that do not fit into a cluster of size  of at least \\minneigh{} as noise.\nThis means that as the value of \\minneigh{} grows, so will the number of \\isols{} that do not cluster into a strain.\n\nGiven the parameter \\minneigh{} of the clustering algorithm, we collect the following four measures, that collectively represent the breakdown of all data points (\\isols{}) in \\cplop{}:\n\\begin{enumerate}\n    \\item \\textit{Noise.} Number/percentage of \\isols{} clustered as noise points.\n    \\item \\textit{Misses.} Number/percentage of \\isols{} from  minority species\n    in impure clusters.\n     \\item \\textit{Hits.} Number/percentage of \\isols{} from plurality species in\n     impure clusters.\n     \\item \\textit{Pure points.} Number/percentage of \\isols{} in 100\\% pure clusters.\n\\end{enumerate}\n\\index{noise}\n\\index{misses}\n\\index{hits}\n\\index{pure points}\n", "meta": {"hexsha": "c3dbd5927de84186c432eebb352786b20ca7a597", "size": 4753, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/clustering/evaluation.tex", "max_stars_repo_name": "jmcgover/thesis", "max_stars_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/clustering/evaluation.tex", "max_issues_repo_name": "jmcgover/thesis", "max_issues_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/clustering/evaluation.tex", "max_forks_repo_name": "jmcgover/thesis", "max_forks_repo_head_hexsha": "25664684158d00864dbe697276d2691ba84461cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.884057971, "max_line_length": 217, "alphanum_fraction": 0.7452135493, "num_tokens": 1243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581741774411, "lm_q2_score": 0.7490872187162397, "lm_q1q2_score": 0.5521957664485208}}
{"text": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n% fphw Assignment\n% LaTeX Template\n% Version 1.0 (27/04/2019)\n%\n% This template originates from:\n% https://www.LaTeXTemplates.com\n%\n% Authors:\n% Class by Felipe Portales-Oliva (f.portales.oliva@gmail.com) with template \n% content and modifications by Vel (vel@LaTeXTemplates.com)\n%\n% Template (this file) License:\n% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)\n%\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%----------------------------------------------------------------------------------------\n%\tPACKAGES AND OTHER DOCUMENT CONFIGURATIONS\n%----------------------------------------------------------------------------------------\n\n\\documentclass[\n\t12pt, % Default font size, values between 10pt-12pt are allowed\n\t%letterpaper, % Uncomment for US letter paper size\n\t%spanish, % Uncomment for Spanish\n]{fphw}\n\n% Template-specific packages\n\\usepackage[T1]{fontenc} % Output font encoding for international characters\n\\usepackage{fontspec,unicode-math} % Required for using utf8 characters in math mode\n\\usepackage{parskip}  % To add extra space between paragraphs\n\\usepackage{graphicx} % Required for including images\n\\usepackage{booktabs} % Required for better horizontal rules in tables\n\\usepackage{enumerate}% To modify the enumerate environment\n\\usepackage{mathtools}% Important mathematicals environments and other things\n\\usepackage{physics}  % Defines lots of commands for vectors, matrices, derivatives...\n\\setlength{\\parindent}{15pt}\n\\setlength{\\headheight}{22.66pt}\n\n%----------------------------------------------------------------------------------------\n%\tASSIGNMENT INFORMATION\n%----------------------------------------------------------------------------------------\n\n\\title{Task 3 \\\\ Frenet's Family} % Assignment title\n\n\\author{Emilio Dom\u00ednguez S\u00e1nchez} % Student name\n\n\\date{October 22nd, 2020} % Due date\n\n\\institute{University of Murcia \\\\ Faculty of Mathematics} % Institute or school name\n\n\\class{Geometr\u00eda de Superficies} % Course or class name\n\n\\professor{Dr. 
Pascual Lucas Saorin} % Professor or teacher in charge of the assignment\n\n%----------------------------------------------------------------------------------------\n%\tDefinitions\n%----------------------------------------------------------------------------------------\n\n\\DeclareMathOperator{\\proy}{proy}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\T}{{\\vb{T}}}\n\\newcommand{\\N}{{\\vb{N}}}\n\\newcommand{\\B}{{\\vb{B}}}\n\\newcommand{\\\u03b1}{{\\vb*{\u03b1}}}\n\\newcommand{\\inner}[2]{\\left\\langle #1, \\; #2 \\right\\rangle}\n\\renewcommand{\\dv}{\\prime}\n\\newcommand{\\ddv}{\\dprime}\n\\newcommand{\\dddv}{\\trprime}\n\n\\begin{document}\n\n\\maketitle % Output the assignment title, created automatically using the information in the custom commands above\n\n%----------------------------------------------------------------------------------------\n%\tASSIGNMENT CONTENT\n%----------------------------------------------------------------------------------------\n\n\\section*{Problem}\n\n\\begin{problem}\n    Given a regular curve $\\\u03b1 : I \\subset \\R \\to \\R^3$ in $\\R^3$,\nits Frenet's family for some $t$ is the family $\\Pmqty{\u03ba(t), \u03c4(t), \\T(t), \\N(t), \\B(t)}$,\nwhere $\u03ba(t)$, $\u03c4(t)$, $\\T(t)$, $\\N(t)$ and $\\B(t)$ are\nthe curvature; the torsion; and the tangent, normal and bi-normal vectors\nat the point $\\\u03b1(t)$.\n\n\\noindent\nFind the Frenet's family for the curve $\\\u03b1(t) = \\Pmqty{3t-t^3 & 3t^2 & 3t+t^3}$.\n\\end{problem}\n\n%----------------------------------------------------------------------------------------\n\n\\subsection*{Answer}\n\n    We need the derivatives of the curve for our calculations.\n\n\\begin{align*}\n    \\\u03b1(t) &= \\Pmqty{3t-t^3 & 3t^2 & 3t+t^3}, \\\\\n    \\\u03b1\\dv(t) &= \\Pmqty{3-3t^2 & 6t & 3+3t^2} = 3\\Pmqty{1-t^2 & 2t & 1+t^2}, \\\\\n    \\\u03b1\\ddv(t) &= \\Pmqty{-6t & 6 & 6t} = 6\\Pmqty{-t & 1 & t}, \\\\\n    \\\u03b1\\dddv(t) &= \\Pmqty{-6 & 0 & 6} = 6\\Pmqty{-1 & 0 & 1}.\n\\end{align*}\n\n    We will start by finding the Frenet frame ($\\T$, $\\N$ and $\\B$).\n$\\\u03b1\\dv(t)$ is already a vector proportional to $\\T$ and with the same sense.\nWe will find a vector proportional to $\\N$ and with the same sense\nby removing the tangential component from $\\\u03b1\\ddv(t)$.\nTo keep the notation simple, we will call this vector $\\vb{n}$.\n\n\\begin{equation*}\n    \\inner{\\\u03b1\\ddv(t)}{\\\u03b1\\dv(t)} =\n    6 \\cdot 3 \\inner{\\Pmqty{-t & 1 & t}}{\\Pmqty{1-t^2 & 2t & 1+t^2}} =\n    18 \\qty(2t + 2t^3) = 36t(1+t^2).\n\\end{equation*}\n
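\nAs an independent cross-check of the results derived in the rest of this answer, one can verify the whole family symbolically (a sketch in Python with SymPy, assumed available; the minus sign matches the torsion convention used at the end of this answer):\n\\begin{verbatim}\nimport sympy as sp\n\nt = sp.symbols('t', real=True)\nalpha = sp.Matrix([3*t - t**3, 3*t**2, 3*t + t**3])\nd1, d2, d3 = alpha.diff(t), alpha.diff(t, 2), alpha.diff(t, 3)\n\nb = d1.cross(d2)                             # parallel to the bi-normal\nkappa = sp.simplify(b.norm() / d1.norm()**3)\ntau = sp.simplify(-b.dot(d3) / b.norm()**2)  # convention with minus sign\n# kappa = 1/(3*(t**2 + 1)**2),  tau = -1/(3*(t**2 + 1)**2)\n\\end{verbatim}\n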
\n\\begin{multline*}\n    \\vb{n}(t) = \\proy_\\N (\\\u03b1\\ddv(t)) = \\\\\n    \\\u03b1\\ddv(t) -\n        \\frac{\\inner{\\\u03b1\\ddv(t)}{\\\u03b1\\dv(t)}}{\\norm{\\\u03b1\\dv(t)}^2}\n    \\\u03b1\\dv(t) =\n    \\\u03b1\\ddv(t) - \\frac{36t(1+t^2)}{18(1+t^2)^2} \\\u03b1\\dv(t) = \\\\\n    \\\u03b1\\ddv(t) - \\frac{2t}{(1+t^2)} \\\u03b1\\dv(t) =\n    \\frac{1}{1+t^2} \\qty\\Big[(1+t^2)\\\u03b1\\ddv(t) - 2t \\\u03b1\\dv(t)] = \\\\\n    \\frac{6}{1+t^2} \\qty\\Big[\n        (1+t^2)\\Pmqty{-t & 1 & t} - t\\Pmqty{1 - t^2 & 2t & 1 + t^2}\n    ] = \\\\\n    \\frac{6}{1+t^2} \\Pmqty{-t(1+t^2) - t + t^3 & (1+t^2) - 2t^2 & t(1+t^2) - t - t^3} = \\\\\n    \\frac{6}{1+t^2} \\Pmqty{-2t & 1 - t^2 & 0}.\n\\end{multline*}\n\n    Now that we have two vectors $\\\u03b1\\dv(t)$ and $\\vb{n}(t)$\nwith the same direction and sense as $\\T$ and $\\N$,\nwe can find another vector with the same sense and direction as $\\B$\nusing the cross product.\n\n\\begin{multline*}\n    \\vb{b}(t) = \\\u03b1\\dv(t) \\times \\vb{n} = \\\\\n    3\\Pmqty{1-t^2 & 2t & 1+t^2} \\times \\frac{6}{1+t^2} \\Pmqty{-2t & 1 - t^2 & 0} = \\\\\n    \\frac{18}{1+t^2} \\Pmqty{1-t^2 & 2t & 1+t^2} \\times \\Pmqty{-2t & 1 - t^2 & 0} = \\\\\n    \\frac{18}{1+t^2} \\Pmqty{-(1+t^2)(1-t^2) & (1+t^2)(-2t) & (1-t^2)^2 + 4t^2} = \\\\\n    18 \\Pmqty{t^2-1 & -2t & 1+t^2}.\n\\end{multline*}\n\n    All that remains is to normalize the orthogonal basis $\\qty{\\\u03b1\\dv(t), \\vb{n}, \\vb{b}}$\nto obtain $\\qty{\\T, \\N, \\B}$.\n\n\\begin{align*}\n    \\T(t) &=\n    \\frac{\\\u03b1\\dv(t)}{\\norm{\\\u03b1\\dv(t)}} =\n    \\frac{3\\Pmqty{1-t^2 & 2t & 1+t^2}}{3\\sqrt{2}(1+t^2)} =\n    \\frac{\\sqrt{2}}{2} \\Pmqty{\\frac{1-t^2}{1+t^2} & \\frac{2t}{1+t^2} & 1}, \\\\\n%\n    \\N(t) &=\n    \\frac{\\vb{n}(t)}{\\norm{\\vb{n}(t)}} =\n    \\frac{\\Pmqty{-2t & 1 - t^2 & 0}}{\\norm{\\Pmqty{-2t & 1 - t^2 & 0}}} =\n    \\frac{\\Pmqty{-2t & 1 - t^2 & 0}}{1+t^2} =\n    \\Pmqty{\\frac{-2t}{1+t^2} & \\frac{1-t^2}{1+t^2} & 0}, \\\\\n%\n    \\B(t) &=\n    \\frac{\\vb{b}(t)}{\\norm{\\vb{b}(t)}} = \\\\ &\n    \\begin{multlined}\n        \\frac{\\Pmqty{t^2-1 & -2t & 1+t^2}}{\\norm{\\Pmqty{t^2-1 & -2t & 1+t^2}}} =\n        \\frac{\\Pmqty{t^2-1 & -2t & 1+t^2}}{\\frac{1}{3}\\norm{\\\u03b1\\dv(t)}} =\n        \\frac{\\Pmqty{t^2-1 & -2t & 1+t^2}}{\\sqrt{2}(1+t^2)} = \\\\\n        \\frac{\\sqrt{2}}{2}\\Pmqty{\\frac{t^2-1}{1+t^2} & \\frac{-2t}{1+t^2} & 1}.\n    \\end{multlined}\n\\end{align*}\n\n    In the last equality, we have written\n$\\norm{\\Pmqty{t^2-1 & -2t & 1+t^2}} = \\frac{1}{3}\\norm{\\\u03b1\\dv(t)}$\nrecognizing the similarity in the coordinates of both vectors,\nand we will use $\\norm{\\vb{b}} = 6\\norm{\\\u03b1\\dv(t)}$ whenever it comes in handy.\nThis is just a coincidence, and not an identity valid for any curve.\n\n    Lastly, we will use the formulas for the curvature and the torsion.\nWe will try to make use of the calculations that we have already done.\nIn particular, the relation\n$\\norm{\\\u03b1\\dv(t) \\times \\\u03b1\\ddv(t)} = \\norm{\\\u03b1\\dv(t) \\times \\vb{n}(t)}$\nis a useful identity.\n\n\\begin{align*}\n    \u03ba_{\\\u03b1}(t) =\n    \\frac{\\norm{\\\u03b1\\dv(t) \\times \\\u03b1\\ddv(t)}}{\\norm{\\\u03b1\\dv(t)}^3} =\n    \\frac{\\norm{\\\u03b1\\dv(t) \\times \\proy_\\N (\\\u03b1\\ddv(t))}}{\\norm{\\\u03b1\\dv(t)}^3} =\n    \\frac{\\norm{\\\u03b1\\dv(t) \\times \\vb{n}(t)}}{\\norm{\\\u03b1\\dv(t)}^3} = \\\\\n    \\frac{\\norm{\\vb{b}(t)}}{\\norm{\\\u03b1\\dv(t)}^3} = \n    
\\frac{6\\norm{\\\u03b1\\dv(t)}}{\\norm{\\\u03b1\\dv(t)}^3} =\n    \\frac{6}{\\norm{\\\u03b1\\dv(t)}^2} =\n    \\frac{6}{18(1+t^2)^2} =\n    \\frac{1}{3(1+t^2)^2}.\n\\end{align*}\n\n\\begin{align*}\n    \u03c4_{\\\u03b1}(t) =\n    -\\frac{\\det\\Pmqty{\\\u03b1\\dv(t) & \\\u03b1\\ddv(t) & \\\u03b1\\dddv(t)}}\n        {\\norm{\\\u03b1\\dv(t) \\times \\\u03b1\\ddv(t)}^2} =\n    -\\frac{\\qty(\\\u03b1\\dv(t) \\times \\\u03b1\\ddv(t)) \\cdot \\\u03b1\\dddv(t)}\n        {\\norm{\\\u03b1\\dv(t) \\times \\\u03b1\\ddv(t)}^2} = \\\\\n    -\\frac{\\vb{b}(t) \\cdot \\\u03b1\\dddv(t)}\n        {\\qty(6\\norm{\\\u03b1\\dv(t)})^2} =\n    -\\frac{18 \\Pmqty{t^2-1 & -2t & 1+t^2} \\cdot 6\\Pmqty{-1 & 0 & 1}}\n        {36\\cdot 18(1+t^2)^2} = \\\\\n    -\\frac{1}{3(1+t^2)^2}.\n\\end{align*}\n\n%----------------------------------------------------------------------------------------\n\n\\end{document}\n", "meta": {"hexsha": "722d166732b732318c78a89cd7313af3522fab50", "size": 8127, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "frenet-set.tex", "max_stars_repo_name": "useredsa/exercises-surfaces-geometry", "max_stars_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T03:04:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T03:04:15.000Z", "max_issues_repo_path": "frenet-set.tex", "max_issues_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_issues_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "frenet-set.tex", "max_forks_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_forks_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.7, "max_line_length": 114, "alphanum_fraction": 0.5148271195, "num_tokens": 3001, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7490872243177518, "lm_q2_score": 0.7371581626286834, "lm_q1q2_score": 0.5521957619266943}}
{"text": "% !TEX program = xelatex \n\\documentclass[12pt]{article}\n\n\\usepackage{amsmath} \n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{fancyhdr}\n\\usepackage{enumerate}\n\\usepackage{multicol} \n\\usepackage{bm}\n\\usepackage{caption}\n\\usepackage{float}\n\\usepackage{mathtools}\n\\usepackage{colortbl}\n\\usepackage{xcolor}\n\\usepackage{cancel}\n\\usepackage{comment}\n\\usepackage{stmaryrd}\n\\usepackage{ulem} \n\\usepackage{alltt}\n\\usepackage[T1]{fontenc}\n\\usepackage{rotating}\n\\usepackage{fancyvrb}\n\\usepackage{hyperref}\n\\usepackage{pdfpages}\n\n\\usepackage{tikzducks}\n\\usepackage[document]{ragged2e}\n\\usepackage{fontspec}\n\\usepackage{listings}\n\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\\newcommand{\\K}{\\mathbb{K}}\n\\newcommand{\\F}{\\mathbb{F}}\n\\renewcommand{\\O}{\\mathcal{O}}\n\\renewcommand{\\P}{\\mathcal{P}}\n\\renewcommand{\\U}{\\mathcal{U}}\n\\newcommand{\\V}{\\mathcal{V}}\n\\newcommand{\\W}{\\mathcal{W}}\n\\newcommand{\\A}{\\mathcal{A}}\n\\newcommand{\\B}{\\mathcal{B}}\n\\newcommand{\\E}{\\mathbb{E}}\n\\renewcommand{\\L}{\\mathcal{L}}\n\n\\newcommand{\\where}{\\text{ where }}\n\\newcommand{\\via}{\\text{ via }}\n\\newcommand{\\of}{\\circ}\n\\newcommand{\\id}{\\mathrm{id}}\n\\newcommand{\\union}{\\cup}\n\\newcommand{\\intersect}{\\cap}\n\\newcommand{\\inv}{^{-1}}\n\\newcommand{\\lang}{\\Sigma^*}\n\\newcommand{\\ceil}[1]{\\left\\lceil #1 \\right\\rceil}\n\\newcommand{\\floor}[1]{\\left\\lfloor #1 \\right\\rfloor}\n\n\\newcommand{\\Int}{\\displaystyle\\int}\n\\newcommand{\\Lim}{\\displaystyle\\lim}\n\\newcommand{\\Union}{\\displaystyle\\bigcup}\n\\newcommand{\\Intersect}{\\displaystyle\\bigcap}\n\\newcommand{\\Sum}{\\displaystyle\\sum}\n\\newcommand{\\Prod}{\\displaystyle\\prod}\n\\newcommand{\\Binom}{\\displaystyle\\binom}\n\n\\newcommand{\\proj}{\\mathrm{proj}}\n\\newcommand{\\Span}{\\mathrm{Span}}\n\\newcommand{\\Null}{\\mathrm{Null}}\n\\newcommand{\\rank}{\\mathrm{rank}}\n\\newcommand{\\chr}{\\mathsf{char}}\n\\newcommand{\\Prec}{\\mathsf{Prec}}\n\\newcommand{\\dec}{\\mathsf{dec}}\n\\newcommand{\\len}{\\mathsf{len}}\n\\newcommand{\\nullity}{\\mathrm{nullity}}\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\\newcommand{\\otherwise}{\\text{otherwise}}\n\n\\newcommand{\\Var}{\\mathsf{Var}}\n\\newcommand{\\Skew}{\\mathsf{Skew}}\n\\newcommand{\\Binomial}{\\mathsf{Binomial}}\n\\newcommand{\\Poisson}{\\mathsf{Poisson}}\n\\newcommand{\\Geometric}{\\mathsf{Geometric}}\n\\newcommand{\\Bernoulli}{\\mathsf{Bernoulli}}\n\\newcommand{\\Exponential}{\\mathsf{Exponential}}\n\\newcommand{\\Uniform}{\\mathsf{Uniform}}\n\\newcommand{\\Normal}{\\mathsf{Normal}}\n\n\\newcommand{\\WP}{\\text{w.p. 
}}\n\n\\newcommand{\\dx}{\\, dx}\n\\newcommand{\\dy}{\\, dy}\n\\newcommand{\\du}{\\, du}\n%\\newcommand{\\dp}{\\, dp}\n\\newcommand{\\ddx}{\\frac{d}{dx}}\n\n\\newcommand{\\highlight}[1]{%\n  \\colorbox{yellow!30}{$\\displaystyle#1$}}\n\\newenvironment{amatrix}[1]{%\n  \\left[\\begin{array}{@{}*{#1}{c}|c@{}}\n}{%\n  \\end{array}\\right]\n}\n\\renewcommand{\\arraystretch}{1.3}\n\n\\oddsidemargin0cm\n\\topmargin=-1cm    \n\\textwidth16.5cm   \n\\textheight22.5cm  \n\n\\setmonofont[\n  Contextuals={Alternate}\n]{Fira Code}\n\n\\makeatletter\n\\def\\verbatim@nolig@list{}\n\\makeatother\n\n\\lstset{\n  basicstyle={\\small\\ttfamily},\n  columns=fullflexible,\n  keepspaces=true,\n  mathescape,\n  numbers=left,\n  stringstyle=\\color[HTML]{a31515},\n  commentstyle=\\color[rgb]{0,0.5,0},\n  keywordstyle=\\color[HTML]{AF00DB},\n  keywordstyle=[2]\\color[rgb]{0,0,0.7},\n  keywordstyle=[3]\\color[HTML]{267F99},\n  morecomment=[s]{(*}{*)},\n  morecomment=[l]{--},\n  keywords={let,letp,in,end,if,then,else,iter,with,fn},\n  keywords=[2]{true,false,nil,cons},\n  keywords=[3]{int,unit,list,<>, bool}\n}\n\n\\newcommand{\\code}{\\lstinline}\n\n\\newcounter{questioncounter}\n\\newcommand{\\question}[1]{\n\\stepcounter{questioncounter}\n\\subsection*{Question #1}\n}\n\n\\newcounter{taskcounter}[questioncounter]\n\\newcommand{\\task}[1]{\n\\stepcounter{taskcounter}\n\\subsubsection*{(\\arabic{questioncounter}.\\arabic{taskcounter}) #1 }\n}\n\n\\fancyhf{} % sets both header and footer to nothing\n\\renewcommand{\\headrulewidth}{0pt}\n\n\\pagestyle{fancyplain}\n\\lhead{\\fancyplain{}{{\\myhwname} --- \\mycourse}}\n\\rhead{\\fancyplain{}{{\\myname}}}\n\\cfoot{\\thepage}\n\n\\newcommand{\\myname}{Ishan Bhargava}\n\n\\newcommand{\\myhwname}{Homework 14} % confirm\n\\newcommand{\\mycourse}{15-819}  % confirm\n\n\\setlength{\\parindent}{0pt}\n\\setlength{\\parskip}{5pt plus 1pt}\n\\setlength{\\headheight}{15pt}\n\n% \\pagecolor{black}\n% \\color{white}\n\n\\title{LFPL Typechecker and Interpreter}\n\\author{Ishan Bhargava}\n\\date{15-819 Resource Aware Programming Languages}\n\n\\begin{document}\n\n\\maketitle \n\n\\section{Introduction}\n\n\nAlthough computers these days seem to have more memory than we know what to do with, there are still many areas of computing which require precise control over resource usage. For example, phones contain tiny SoCs which are dedicated to tasks such as processing Bluetooth radio events. These chips have small amounts of memory, but still need to fulfill difficult tasks in real time, such as decoding a high quality audio stream. Other classical examples include Blockchain-based applications, as space is also expensive in that domain. \n\n\\subsection{LFPL Background}\n\nThe LFPL programming language was designed to represent `non-size-increasing programs'. LFPL programs produce a result whose size can never exceed the input size. This is achieved by using `diamonds' to represent an `allocation' for a node of a recursive type. For example, linked list nodes each require one diamond. This means that if you start your program with 5 diamonds, you cannot create more than 5 list nodes. 
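\n\nFor instance, a minimal sketch (using the concrete syntax introduced in the next section): prepending a single element to a list consumes exactly one diamond.\n\\begin{lstlisting}\nfn d: <> => fn x: int => fn xs: int list => cons(d, x, xs)\n\\end{lstlisting}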
Although our implementation of LFPL only includes linked lists, there's no reason this could not be generalized to other structures such as binary trees, where each node would require a diamond, or arrays, where $n$ diamonds would be required to allocate an array of length $n$.\n\nThis has the consequence that non-constant-space data structures such as lists cannot be copied, since copying diamonds is illegal. Therefore, LFPL uses an affine type system for diamonds and lists. However, integers, booleans, and other constant-space types can be copied without any problem. This is because these so-called `heap-free' types cannot end up taking more than a constant amount of space, which can be found statically. \n\n\\section{Design and Syntax}\n\nThe LFPL interpreter implements all features of LFPL as described in the lecture notes. In addition, integers have also been added as a heap-free type. Note for some syntactic forms, there is a Unicode alternative. LFPL programs are all one expression. \n\\begin{center}\n  \\begin{tabular}{rl}\n    Comments \n      & \\code|-- Line comment| \\\\\n      & \\code|(* Block comment *)| \\\\\n    Types\n      & \\code|unit|, \\code|int|, \\code|bool| \\\\\n      & \\code|<>|, $\\diamondsuit$ \\\\\n      & \\code|t1 * t2| \\\\\n      & \\code|t1 -> t2|, \\code|t1 -o t2| \\\\\n      & \\code|t list| \\\\\n    Variables \n      & \\code|ident| \\\\\n    Lambda/anonymous function \n      & \\code|fn (x : t) => e| \\\\\n      & \\code|$\\lambda$ (x : t) => e| \\\\\n    Let binding\n      & \\code|let x : t = e1 in e2 end| \\\\\n    Function application \n      & \\code|e1 e2| \\\\\n    Boolean literals \n      & \\code|true|, \\code|false| \\\\\n    Integer literals \n      & \\code|..., -2, -1, 0, 1, 2, 3, ...| \\\\\n    Unit literal \n      & \\code|()| \\\\\n    Diamond literal (*)\n      & \\code|<>|, $\\diamondsuit$ \\\\\n    If expression \n      & \\code|if e1 then e2 else e3| \\\\\n    Pair creation\n      & \\code|(e1, e2)| \\\\\n    Pair destructuring\n      & \\code|letp (x, y) = e1 in e2 end| \\\\\n    Empty list \n      & \\code|nil: t|, \\code|[]: t| \\\\\n    List construction \n      & \\code|cons(diamond, head, tail)| \\\\\n    List literal (*)\n      & \\code|[e1, e2, e3, ...]: t| \\\\\n    List iteration \n      & \\code@iter e1 { nil => e2 | cons(x, y, _) with z => e2 }@\n  \\end{tabular}\n\\end{center}\n\nForms marked with (*) `create' diamonds and therefore cannot appear in a regular LFPL program. They can only be used to specify input to an LFPL program. \n\nNote that the empty list must be annotated with the type of its elements. This avoids ambiguity for programs like \\code|fn x : unit => []| which otherwise cannot be assigned a type. The same applies for list literals. For example, \\code|[1, 2, 3]: int| is a list of ints. \n\n\\subsection{Enforcing linearity}\n\nIt is clear that lists are not copyable and therefore not heap-free. However, it is less clear whether we should consider functions to be heap-free or not. On one hand, functions can `capture' arbitrary values, some of which may be non-heap-free such as lists or other functions. But many functions don't do this, and could be considered heap-free. In order to avoid having to classify some functions as `heap-free' and others as not, we choose the more restrictive path and enforce linearity for all functions. 
Programs rejected by this stricter rule can be made to typecheck by inlining the function in question.\n\nSimilarly, inside of the \\code|cons| case for the list iterator, the program cannot access any variables other than those bound by the iterator. This means that the following program does not typecheck:\n\\begin{lstlisting}\nlet \n  insert : <> -> int -> int list -> int list =\n    fn d: <> => fn v: int => fn L: int list => (\n      iter L { \n        [] => (fn d2: <> => fn x: int => cons(d2, x, []: int))\n      | cons(d1, x1, _) with f => (\n          fn d2: <> => fn x: int => \n              if x >= x1 \n                then cons(d1, x1, f d2 x)\n                else cons(d1, x, f d2 x1)\n        )\n      }\n    ) d v \nin\nlet isort : int list -> int list = \n  fn L : int list => \n      iter L {\n          [] => []: int\n        | cons(d, x, _) with sortedTail => \n            insert d x sortedTail \n      }\nin \n  isort \nend\nend\n\\end{lstlisting}\nCompiling this program fails:\n\\begin{lstlisting}[numbers=none]\nbad_isort.lfpl: 20:13 - 20:20: Variable 'insert' not declared or out of scope\n\\end{lstlisting}\nInstead, the programmer must inline the \\code|insert| function manually.\n\n\\section{LFPL Interpreter}\n\nThe LFPL interpreter is appropriately named `\\verb|lfpl|'. We will illustrate the usage of the LFPL interpreter on a program which concatenates its input. \n\nSuppose the following code is in a file called \\verb|concat.lfpl|.\n\\begin{lstlisting}\nlet\n  concat: int list list -> int list =\n    fn xss: int list list =>\n      iter xss {\n          [] => []: int\n        | cons(d, xs, _) with y =>\n            iter xs {\n                [] => y\n              | cons(d', x, _) with y' => cons(d', x, y')\n            }\n\n            -- Note 'd' is lost here. We don't need it\n      }\nin\n  concat\nend \n\\end{lstlisting}\n\nWe can typecheck this program by running \\verb|lfpl concat.lfpl|:\n\\begin{lstlisting}[numbers=none]\n% lfpl concat.lfpl\n<lambda>: (((int) list) list) -> ((int) list)\n\\end{lstlisting}\nSince the program right now just returns a function, the interpreter stops and reports its type. We can provide input to the function as another argument on the command line:\n\\begin{lstlisting}[numbers=none]\n% lfpl concat.lfpl \"([[1,2,3]: int, [4,5,6]: int, [7,8,9]: int]: int list)\"\n[1, 2, 3, 4, 5, 6, 7, 8, 9]: (int) list\n\\end{lstlisting}\nNow that there is some input for the function, the interpreter performs the function application, reduces it to a value, and displays it. \n\nThe interpreter will also display all parsing/typechecking problems. Consider the following program which attempts to use a diamond multiple times:\n\\begin{lstlisting}\nlet\n  append: int list -> int list -> int list =\n      fn xs: int list => fn ys: int list =>\n        iter xs {\n            [] => ys\n          | cons(d, x, _) with y => cons(d, x, cons(d, x, y))\n        }\nin\n  append\nend  \n\\end{lstlisting}\nThis results in an error:\n\\begin{lstlisting}[numbers=none]\nbadappend.lfpl: 6:57 - 6:58: Variable 'd' of type '<>' is not heap-free \n                             and cannot be used multiple times\nLast usage: badappend.lfpl: 6:46 - 6:47\nDeclared at: badappend.lfpl: 4:13 - 8:1  \n\\end{lstlisting}\n
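\nFor contrast, a version of \\code|append| that respects linearity (each diamond is used exactly once) typechecks; this is a sketch in the same syntax:\n\\begin{lstlisting}\nlet\n  append: int list -> int list -> int list =\n      fn xs: int list => fn ys: int list =>\n        iter xs {\n            [] => ys\n          | cons(d, x, _) with y => cons(d, x, y)\n        }\nin\n  append\nend\n\\end{lstlisting}\n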
\n\\section{Further work}\nCurrently, the interpreter uses a simple substitution-based approach when evaluating expressions. However, we could also compile LFPL to C without having to dynamically allocate more memory than is available in the input. In addition, we could relax some of the rules on linearity involving function types, by checking if a lambda contains free variables or if those free variables are all heap-free. We could also offer a new kind of definition \\code|x := e| which would physically substitute \\code|e| wherever \\code|x| appears. This could make programming in LFPL easier. \n\n\\end{document}\n", "meta": {"hexsha": "6f4a76922f658c99f5c337b9c8b6b62ba811b8a4", "size": 12410, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/report.tex", "max_stars_repo_name": "ishantheperson/LFPL", "max_stars_repo_head_hexsha": "96eb0e1a5401a5ce745ce1477d4ab0356136e3db", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/report.tex", "max_issues_repo_name": "ishantheperson/LFPL", "max_issues_repo_head_hexsha": "96eb0e1a5401a5ce745ce1477d4ab0356136e3db", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/report.tex", "max_forks_repo_name": "ishantheperson/LFPL", "max_forks_repo_head_hexsha": "96eb0e1a5401a5ce745ce1477d4ab0356136e3db", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3929618768, "max_line_length": 697, "alphanum_fraction": 0.6882352941, "num_tokens": 3655, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581626286833, "lm_q2_score": 0.749087201911703, "lm_q1q2_score": 0.5521957454098925}}
{"text": "\\title{Self Organizing Systems Exercise 2}\n\\author{\n        Alexander Dobler 01631858\\\\\n        Thomas Kaufmann 01129115 \n}\n\\date{\\today}\n\n\\documentclass[12pt]{article}\n\n\\usepackage{hyperref}\n\\usepackage{booktabs}\n\\usepackage{graphics}\n\\usepackage{multirow}\n\\usepackage{graphicx}\n\\usepackage{subcaption}\n\\usepackage{mwe}\n\\usepackage{amsmath,amssymb}\n\\usepackage{subcaption}\n\\usepackage[section]{placeins}\n\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\n    In the second exercise of the lecture Self Organizing Systems we are implementing a simple Particle Swarm Optimization (PSO) algorithm and experimenting with different parameters, fitness functions, constraints and constraint handling methods.\n    More specifically, we are given a PSO framework in NetLogo for optimizing functions $f$ from $\\mathbb{R}^2$ to $\\mathbb{R}$.\n    Our task is to implement 3 different fitness functions, 3 different constraints and constraint handling methods using penalization or solution rejection.\n    Furthermore, we are to conduct several experiments, to observe different effects of parameters on the performance of the convergence behaviour of the PSO algorithm.\n    Here we are inspecting \\textit{population size}, \\textit{particle speed limit}, \\textit{particle inertia}, both \\textit{acceleration coefficients} and the difference between constraint handling using penalization and rejection.\n\\end{abstract}\n\n\\section{Implementation}\nIn this section we will describe how we implemented the required tasks given in the exercise.\nWe will divide this section into explanations for \\textit{constraints}, \\textit{fitness functions} and \\textit{constraint handling with penalization}.\n\n\\subsection{Constraints}\nAs we are already given skeletons for constraints, implementing is as easy as returning \\textit{true}, if the constraint is violated and \\textit{false} otherwise.\nWe opted for implementing the following constraints.\n\\begin{enumerate}\n        \\item $x^2+y^2<6000$\n        \\item $x^2+y^2<9000\\text{ and }x^2+y^2>4000$\n        \\item $\\tan(2x)<\\tan(4y)$\n\\end{enumerate}\nSo, if for example for the first constraint it holds that $x^2+y^2>=6000$, the constraint is violated and we return true.\nWe selected these constraints, as we wanted to have functions with one connected region (constraint 1 and 2) and also constraints with miltiple separated regions (constraint 3).\n\\subsection{Fitness Functions}\nHere we have a similar setting as for constraints, because we already have skeletons for fitness functions.\nWe opted to implement the following functions.\n\\begin{enumerate}\n        \\item Schaffer function\n        \\item Booth's function\n        \\item Schwefel function\n\\end{enumerate}\nScaling $x$ and $y$ variables for the input of the functions is done as already shown in the template.\nIt is also important to mention that the NetLogo $\\sin$-function expects angles in degrees as input, so we had to convert radians to degrees first.\nFor the Schwefel function we had to set $n:=2$ and $x_1=x,x_2=y$ for our purpose of two dimensions.\n\nWe chose these functions as we wanted to have both optima at the corners of the grid and optima in the middle of the grid.\nFurthermore we also have a diversity of how many local optima the search-space of the different functions have.\n\\subsection{Constraint Handling with Penalization}\nThe implementation for penalization is more interesting.\nFirst we created 4 different functions to calculate 
\n\\subsection{Constraint Handling with Penalization}\nThe implementation for penalization is more interesting.\nFirst we created 4 different functions to calculate penalization values for each constraint (the example constraint included).\nSo for each constraint we can compute its penalty value at patch $x,y\\in\\mathbb{R}^2$ if the constraint is violated at this patch.\nWriting each constraint in the form $C(x,y)<0$, this penalty value is $C(x,y)\\cdot d$ for a suitable constant $d$.\nFor example, for constraint 1 we have $(x^2+y^2-6000)\\cdot d$ as penalty value if $x^2+y^2\\ge 6000$.\nFurthermore, we wanted penalty values to be between 0 and 1, so we chose the constants $d$ appropriately.\nFor example, for constraint 1 we set $d:=\\frac{1}{14000}$, as $C(x,y)=x^2+y^2-6000$ can be as big as $14000$ for $x=y=100$.\n\nWe then add the penalty value at position $(x,y)$ to the value of the patch at position $(x,y)$ if the selected constraint is violated there.\nThis, of course, is only done if penalization is selected as the constraint handling method.\nDue to this variant of the implementation, we do not have to update anything in \\textit{update-particle-positions}, as the fitness values at the patches already account for the penalization by the selected constraint.\n
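\nA minimal sketch of this scheme for constraint 1 (hypothetical Python; our actual implementation is a NetLogo reporter):\n\\begin{verbatim}\ndef penalty_constraint1(x, y):\n    # Constraint 1: x^2 + y^2 < 6000, i.e. C(x, y) = x^2 + y^2 - 6000 < 0\n    c = x*x + y*y - 6000\n    # d = 1/14000 scales the penalty into [0, 1] since |x|, |y| <= 100\n    return c / 14000.0 if c >= 0 else 0.0\n\\end{verbatim}\n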
\n\\section{Experiments and Analysis}\n\\subsection{Experimental Setup \\& Evaluation Metrics}\nIn order to compare the convergence behaviour, we use a fixed number of up to $200$ iterations, without any time limit. \nFurthermore, we disabled the premature termination criterion once the optimum is found, since this would have led to biased results in our stepwise average approach. \n\nWe use the following metrics to capture characteristics of solutions:\n\\begin{itemize}\n\t\\item \\textbf{Fitness}: a value between 0 and 1.\n\t\\item \\textbf{Number of Clusters}: To get a feeling for the distribution of particles in the search space.\n\tWe use the clustering algorithm of the NetLogo plugin \\emph{dbscan}, where a cluster is constituted of at least three agents with a maximum distance of $5$. \n\tThese values were selected based on manual tweaking and tuning. \n\t\n\t\\begin{figure}[h!]\n\t\t\\centering\n\t\t\t\\includegraphics[width=5cm]{figures/clusters.png}\n\t\t\t\\caption{Illustration of clusters in a constrained Schwefel function optimization instance.}\n\t\t\t\\label{fig:clusters}\n\t\\end{figure}\t\n\n\tFigure~\\ref{fig:clusters} illustrates a few clusters in an early iteration of an optimization instance. \n\tWith this metric, in combination with others, we aim to identify scenarios where agents split up into separate groups, converging towards different regions in the search space. \n\tWe normalized the number of clusters by the population size to foster comparability in plots. \n\t\n\t\n\t\\item \\textbf{Average Distance to the Optimal Solution}: Should be monotonically decreasing for convex functions. However, in rugged landscapes with several nearly optimal solutions it may not necessarily decrease to $\\approx 0$.\n\t\\item \\textbf{Average Distance among Particles}: A measure for the distribution of particles in the search space. \n\tThus, in the beginning it is relatively high, as particles are randomly scattered across the search space, but it should decrease continuously as particles strive towards the (single or few) global optima. \n\tIn case this value is high, a disconnected and highly constrained search may be indicated. \n\t\\item \\textbf{Average Path Length}: Average length of the paths of each particle throughout the entire execution.\n\t\t\n\\end{itemize}\n\nFinally, for statistically stable results, each configuration in our experiments was executed $15$ times, and metrics were obtained as the average over those executions.\n%\\subsection{Experiment 1: Strong Personal Focus}\n%\\begin{figure}\n%\t\\centering\n%\t\\includegraphics[width=0.75\\textwidth]{figures/ex1/ex1-1.pdf}\n%\t\\label{fig:ex1-1}\n%\\end{figure}\n%\\begin{figure}\n%\t\\centering\n%\t\\includegraphics[width=0.75\\textwidth]{figures/ex1/ex1-2.pdf}\n%\t\\label{fig:ex1-2}\n%\\end{figure}\n%\n%\\begin{figure}\n%\t\\centering\n%\t\\includegraphics[width=0.75\\textwidth]{figures/ex1/ex1-3.pdf}\n%\t\\label{fig:ex1-3}\n%\\end{figure}\n%strong personal focus and scarce distribution in the beginning, hard to find neighbour that guides somewhere\n%in easier functions even small population sizes \n%\\begin{figure}\n%\\begin{subfigure}[c]{0.5\\textwidth}\n%\\centering\n%\\includegraphics[width=0.75\\textwidth]{figures/ex1/f1.png}\n%\\subcaption{F1}\n%\\end{subfigure}\n%\\begin{subfigure}[c]{0.5\\textwidth}\n%\\centering\n%\\includegraphics[width=0.75\\textwidth]{figures/ex1/f3.png}\n%\\subcaption{F3}\n%\\end{subfigure}\n%\n%\\caption{Comparison of representative solutions for fitness function $1$ ($P=100$) and $3$ ($P=20$).}\n%\n%\\end{figure}\n\\FloatBarrier\n\\subsection{Different Population Sizes}\nIn the first experiment we compare the impact of the population size on the convergence behaviour. \nFor each function, we compared three different population sizes with fixed parameters for the others (inertia $\\omega=0.33$, swarm-confidence $c_s=1$, personal-confidence $c_p=1$ and particle-speed-limit $V_{max}=10$).\nObservations based on Figures~\\ref{fig:ex2-1},\\ref{fig:ex2-2},\\ref{fig:ex2-3}:\n\\begin{itemize}\n\t\\item Larger populations favour finding the global optimum, or at least converge faster. \n\tOne reason for this is certainly the relatively restricted search space, where it is rather likely that a random initialization hits high quality regions right in the beginning. \n\tWe conducted some small experiments with completely unsuitable parameter settings but large populations that still obtained the global optimum in just a couple of iterations. \n\t\\item For extraordinarily small populations, e.g. $P=10$, the experiments show that convergence towards the optimum can only be achieved for rather simple functions without a rugged landscape (Booth's function seems to be \\emph{convexish}, although it is quadratic). \n\t\\item For more complex functions, like Schaffer, those populations converge at several poor local optima around the global one, indicated by the relatively high number of different clusters and the high average distance to the optimum. \n\tStill, after $30$ iterations they seem to concentrate on a certain subspace, where they get stuck in local optima. \n\t\\item Although convergence towards a suboptimal solution w.r.t.\\ the fitness sets in rather early, the path lengths still increase, indicating stagnation behaviour. 
\n\\end{itemize}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{figures/ex2/ex2-1.pdf}\n\t\\caption{Population sizes: Metrics for the Schaffer function}\n\t\\label{fig:ex2-1}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{figures/ex2/ex2-2.pdf}\n\t\\caption{Population sizes: Metrics for the Booth function}\n\t\\label{fig:ex2-2}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{figures/ex2/ex2-3.pdf}\n\t\\caption{Population sizes: Metrics for the Schwefel function}\n\t\\label{fig:ex2-3}\n\\end{figure}\n\n\\FloatBarrier\n\\subsection{Acceleration Coefficients}\nBased on our findings we proceeded with experiments concerning the acceleration coefficients. \nFrom manual tests and the previous experiment, we know that for larger populations our functions can be optimized quite easily with standard parameters, $c_s=1$, $c_p=1$ and $V_{max}=10$, so in this experiment we turned our focus to values away from the default (i.e.\\ $1$) and tested all four combinations of the values $0.3$ and $1.7$ for medium and larger population sizes for each of the selected functions. \n\nObservations based on Figures~\\ref{fig:ex4-1-20},\\ref{fig:ex4-1-50},\\ref{fig:ex4-2-20},\\ref{fig:ex4-2-50},\\ref{fig:ex4-3-20},\\ref{fig:ex4-3-50}:\n\\begin{itemize}\n\t\\item Our main observation from this experiment was that for the selected functions, a dominance of swarm-confidence is favourable in terms of convergence. \n\tEven for smaller populations, convergence towards a high quality optimum can be obtained in under $20$ iterations.\n\t\\item Even further, the number of particles temporarily stuck in suboptimal local optima seems to be lower, indicated by our cluster metric, which, in combination with the distance to the global optimum, shows a relatively smooth convergence in just $20$ iterations. \n\t\\item For other configurations, on the other hand, these metrics show a different picture. \n\tFirst, there are a lot of fluctuations in the number of clusters, indicating that particles frequently change influencing neighbours, although eventually they still converge towards the optimum on average (but with far more particles being in many other regions in the search space). \n\tIt is important to note that this is actually not necessarily a poor characteristic. \n\tFor other functions, particularly with larger and potentially more rugged search spaces, we claim that such a behaviour is actually to some degree advantageous since it fosters diversity in the search (still, it perfectly shows that parameters highly depend on the instance).\n\t\\item For the inverse dominance relationship between swarm and personal confidence a rather poor behaviour can be observed. \n\tFirst, this configuration lacks convergence towards the optimum w.r.t.\\ fitness, but the other metrics show poor behaviour as well. \n\tThroughout the entire search, the number of distinct clusters remains relatively high, indicating that particles likely do not profit from each other, but rather seem to be stuck in intensification phases (the path length is still relatively high, indicating that particles still make significant steps but without any real progress in terms of fitness, also shown by the high average distance to the optimum). \n\t\\item Figure~\\ref{fig:ex4-example} shows two representative examples of this behaviour. 
\n\tOn the one hand, Figure~\\ref{fig:ex4-example-a} shows almost arbitrarily scattered particles in the search space after $100$ iterations, while in Figure~\\ref{fig:ex4-example-b} all particles were directly guided towards the optimum. \n\t\\item Again, it can be observed that Booth's function is among the easier ones, where even poor configurations obtain the optimum relatively fast. \n\\end{itemize}\n\n\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-1-20.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Schaffer function and $P=20$}\n\t\\label{fig:ex4-1-20}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-1-50.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Schaffer function and $P=50$}\n\t\\label{fig:ex4-1-50}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-2-20.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Booth function and $P=20$}\n\t\\label{fig:ex4-2-20}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-2-50.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Booth function and $P=50$}\n\t\\label{fig:ex4-2-50}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-3-20.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Schwefel function and $P=20$}\n\t\\label{fig:ex4-3-20}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex4/ex4-3-50.pdf}\n\t\\caption{Acceleration coefficients: Metrics for Schwefel function and $P=50$}\n\t\\label{fig:ex4-3-50}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex4/ex4-03-17-f3.png}\n\\subcaption{$c_s=0.3$, $c_p=1.7$}\n\\label{fig:ex4-example-a}\n\\end{subfigure}\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex4/ex4-17-03-f3.png}\n\\subcaption{$c_s=1.7$, $c_p=0.3$}\n\\label{fig:ex4-example-b}\n\\end{subfigure}\n\n\\caption{Comparison of representative final solutions for the Schwefel function under a personal-confidence-dominated (a) and a swarm-confidence-dominated (b) configuration.}\n\\label{fig:ex4-example}\n\n\\end{figure}\n\n\\FloatBarrier\n\\subsection{Inertia}\nIn the following experiment we studied the impact of the particle inertia on all functions with default values for the other parameters: $P=30$, $c_s=1$, $c_p=1$ and $V_{max}=10$. \n\nObservations:\n\\begin{itemize}\n\t\\item It is again illustrated that for a reasonable parameter setting with a sufficiently large population size, it seems to be rather easy to identify (nearly) optimal regions in the search space, although a too low inertia seems to slow down convergence. \n\t\\item For high $\\omega$ values larger fluctuations can be observed, simply because $\\omega$ affects the magnitude of the velocity. This behaviour is also reflected in the path lengths being significantly larger than for smaller $\\omega$ values. In combination with these, the relatively high distance to the optimum further suggests that we encounter some kind of stagnation behaviour, where particles continuously move in the search space, without achieving any significant progress (e.g. 
in the Schwefel function, $\\omega=1$ converges on average to some fitness $<1$, with the particle distance still relatively high and lots of frequently changing clusters, which suggests that many particles are (relatively loosely) gathered around some suboptimal region without achieving any progress on the fitness, but cannot escape this region due to the high concentration of particles there. An illustrative example of such a situation is shown in Figure~\\ref{fig:ex3-example}).\n\t\\item Medium values for $\\omega$ seem to be most suitable and provide a good trade-off between fitness convergence and dampening fluctuations among clusters, i.e. we suspect that not too many particles get stuck in low-quality local optima, while it seems that the population can identify the optimum soon and guides a majority of particles towards this region. This is also illustrated by the average distance to the real optimum, which converges smoothly. Throughout all functions it can be observed that $\\omega=0.5$ shows a continuously slower increasing average path length, even slower than low omega values. Here, it is again important to stress the following: in more complex optimization problems, we would assume that such a behaviour can be observed rather on a region level than for ``particular optima'' as it seems to be the case for these functions.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex3/ex3-1.pdf}\n\t\\caption{Inertia: Metrics for the Schaffer function.}\n\t\\label{fig:ex3-1}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex3/ex3-2.pdf}\n\t\\caption{Inertia: Metrics for the Booth function.}\n\t\\label{fig:ex3-2}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex3/ex3-3.pdf}\n\t\\caption{Inertia: Metrics for the Schwefel function.}\n\t\\label{fig:ex3-3}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex3/wrong-optimum-init.png}\n\\subcaption{Initial solution}\n\\label{fig:ex3-example-a}\n\\end{subfigure}\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex3/wrong-optimum.png}\n\\subcaption{Final solution after 100 iterations}\n\\label{fig:ex3-example-b}\n\\end{subfigure}\n\n\\caption{A representative execution (\\ref{fig:ex3-example-a}: the initial solution and \\ref{fig:ex3-example-b}: the final solution after 100 iterations) with $\\omega=1$ converging towards a suboptimal local optimum.}\n\\label{fig:ex3-example}\n\n\\end{figure}\n\n\\FloatBarrier\n\\subsection{Speed Limit}\nThe next few experiments illustrate the impact of the particle speed limit on the behaviour of the algorithm with regard to convergence.\nParameters used were $P=20,\\omega=0.33,c_s=1,c_p=1$.\nWe compared four different particle speed limits.\n\nThe plots are illustrated in Figures~\\ref{fig:ex5-1}-\\ref{fig:ex5-3} and the main observations are:\n\\begin{itemize}\n\t\\item For smaller speed limits, convergence to the global optimum is slower, but this is to be expected.\n\t\\item Furthermore, the number of clusters is higher for smaller speed limits. 
This is visible for the Schwefel function, where we have a lot of clusters even after 100 iterations for $V_{\\max}=3$.\n\t\\item Interestingly, the path length is not always longest for the highest speed limit.\n\t\\item Overall, one could say that the speed limit does not have that much of an impact on the Schaffer and Booth functions, whereas too low speed limits lead to a lot of scattered particles in the search space for the Schwefel function.\n\tUpon closer inspection, we can also see that some suboptimal speed limits lead to early convergence, not leading to the global optimum.\n\tThis is also illustrated in Figure~\\ref{fig:ex5comp}, where the search space for the Schwefel function is shown after 100 iterations.\n\tFor $V_{\\max}=3$ we can see a lot of scattered particles stuck in different local optima, whereas for $V_{\\max}=20$ all particles are stuck in one single local optimum.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex5/ex5-1.pdf}\n\t\\caption{Speed limit: Metrics for Schaffer function.}\n\t\\label{fig:ex5-1}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex5/ex5-2.pdf}\n\t\\caption{Speed limit: Metrics for Booth function.}\n\t\\label{fig:ex5-2}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex5/ex5-3.pdf}\n\t\\caption{Speed limit: Metrics for Schwefel function.}\n\t\\label{fig:ex5-3}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex5/f3-3.png}\n\\subcaption{$V_{max}=3$}\n\\end{subfigure}\n\\begin{subfigure}[c]{0.5\\textwidth}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{figures/ex5/f3-20.png}\n\\subcaption{$V_{max}=20$}\n\\end{subfigure}\n\\caption{Comparison of representative solutions for the Schwefel function with $V_{\\max}=3$ and $V_{\\max}=20$ after 100 iterations.}\n\\label{fig:ex5comp}\n\\end{figure}\n\n\\FloatBarrier\n\\subsection{Constraint Handling}\nOur last experiment compares the two constraint handling methods, namely penalty and rejection.\nFor this we try out each of the $3\\cdot 3=9$ combinations of fitness function and constraint and compare constraint handling by rejection and penalty.\nThe parameters for all experiments are $P=50,c_s=1,c_p=1,V_{\\max}=10,\\omega=0.33$.\nAll 9 plots are illustrated in Figures~\\ref{fig:ex6-1-1}-\\ref{fig:ex6-3-3} and we could observe the following:\n\n\\begin{itemize}\n\t\\item For constraint 1 and the Schaffer function we have almost the same behaviour for rejection and penalty.\n\tThis is most certainly due to the fact that the global optimum is at coordinate $(0,0)$ and constraint 1 restricts the valid solutions to exactly this central region.\n\t\\item For all other combinations of fitness functions and constraints the penalty method always leads to a smaller average distance between particles.\n\tThis is because particles can overcome barriers with the penalty method, whereas with the rejection method particles can be ``stuck'' in some regions.\n\tThe same arguments also lead to fewer clusters when using the penalty method.\n\t\\item It is also worth mentioning that the path length increases for the penalty method, as particles can walk into invalid regions.\n\t\\item Regarding convergence to global optima we have similar behaviour for both constraint handling methods, as we used a population size of 50, which is quite high.\n\tBut, for example, for 
the Schaffer function and constraint 3 we have faster convergence to the global optimum for the penalty method, whereas for the rejection method particles converge slowly to a local optimum that is not globally optimal.\n\t\\item Overall, it can be said that the penalty method outperforms the rejection method in nearly every aspect.\n\tThis can of course be a result of our choice of fitness functions and constraints, and also of the particular implementation we chose.\n\tBut we would still prefer penalty over rejection in nearly every scenario.\n\\end{itemize}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-1-Constraint_1.pdf}\n\t\\caption{Constraint handling: Schaffer function, constraint 1}\n\t\\label{fig:ex6-1-1}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-1-Constraint_2.pdf}\n\t\\caption{Constraint handling: Schaffer function, constraint 2}\n\t\\label{fig:ex6-1-2}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-1-Constraint_3.pdf}\n\t\\caption{Constraint handling: Schaffer function, constraint 3}\n\t\\label{fig:ex6-1-3}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-2-Constraint_1.pdf}\n\t\\caption{Constraint handling: Booth function, constraint 1}\n\t\\label{fig:ex6-2-1}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-2-Constraint_2.pdf}\n\t\\caption{Constraint handling: Booth function, constraint 2}\n\t\\label{fig:ex6-2-2}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-2-Constraint_3.pdf}\n\t\\caption{Constraint handling: Booth function, constraint 3}\n\t\\label{fig:ex6-2-3}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-3-Constraint_1.pdf}\n\t\\caption{Constraint handling: Schwefel function, constraint 1}\n\t\\label{fig:ex6-3-1}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-3-Constraint_2.pdf}\n\t\\caption{Constraint handling: Schwefel function, constraint 2}\n\t\\label{fig:ex6-3-2}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{figures/ex6/ex6-3-Constraint_3.pdf}\n\t\\caption{Constraint handling: Schwefel function, constraint 3}\n\t\\label{fig:ex6-3-3}\n\\end{figure}\n\n\\FloatBarrier\n\\section{Conclusion}\nWe have seen how different parameters influence the behaviour of a PSO algorithm.\nIn particular, we have considered population size, particle inertia, acceleration coefficients and particle speed limits.\nWe have evaluated different choices for these parameters on 3 different fitness functions.\nThe main observation was that for default choices of parameters the PSO algorithm performs very well on all 3 fitness functions.\nOne would have to use rather extreme parameter choices (very small population sizes or unsuitable acceleration coefficients) to drive the PSO algorithm into early convergence and other undesirable behaviour.\n\nFurthermore, we compared rejection and penalty methods for constraint handling using 3 different constraints.\nWe have seen that the penalty method performs better in nearly all aspects and occasions.\nThis could be due to our choice of fitness functions, constraints and particular implementation.\nBut still we think that some form 
of penalization is to be preferred over rejection in almost all cases.\n\n\n\\end{document}\n  ", "meta": {"hexsha": "1b20cc54d2671e842ee69a1e82844e50c4ef0e94", "size": 26266, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ex2/report/report.tex", "max_stars_repo_name": "tkauf15k/sos2020", "max_stars_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ex2/report/report.tex", "max_issues_repo_name": "tkauf15k/sos2020", "max_issues_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex2/report/report.tex", "max_forks_repo_name": "tkauf15k/sos2020", "max_forks_repo_head_hexsha": "b75188097d095e4acaca32290ba4f49fa8cb6c0e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.2394678492, "max_line_length": 978, "alphanum_fraction": 0.7811238864, "num_tokens": 6772, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7718434978390746, "lm_q1q2_score": 0.5521953593435234}}
{"text": "\n\\documentclass[12pt]{amsart}\n\\usepackage{geometry} % see geometry.pdf on how to lay out the page. There's lots.\n\\usepackage{unitb}\n\\usepackage{calculational}\n\\usepackage{ulem}\n\\normalem\n\\geometry{a4paper} % or letter or a5paper or ... etc\n% \\geometry{landscape} % rotated page geometry\n\n% See the ``Article customise'' template for some common customisations\n\n\\title{}\n\\author{}\n\\date{} % delete this line to display the current date\n\n%%% BEGIN DOCUMENT\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n%\\section{}\n%\\subsection{}\n\n\\begin{machine}{m0}\n\n\\hide{\n\t\\begin{align*}\n\\variable{\t\tn,a : \\Int}\n\t\\end{align*}\n}\n\n\\begin{align*}\n\\invariant{inv0}\n{\ta = n^3\t}\n\\end{align*}\n%\n%%\\begin{proof}{INIT/INV/inv0}\n%%\\begin{calculation}\n%%\t\n%%\\end{calculation}\n%%\\end{proof}\n\n\\newevent{evt}\n\n%\n\\begin{align*}\n\\initialization{init0}\n{\tn = 0 \\land a = 0\t}\n\\end{align*}\n%\n%\n\\begin{align*}\n\\evassignment{evt}{a0}\n{\tn' = n + 1\t}\n\\end{align*}\n\nWe use the proof obligation of \\ref{inv0} to deduce a proper assignment to $a$:\n\n\\begin{proof}{evt/INV/inv0}\n\t\\begin{calculation}\n\t\t(n')^3\n\t\\hint{=}{ \\ref{a0} }\n\t\t(n+1)^3\n\t\\hint{=}{ arithmetic }\n\t\tn^3 + 3 \\cdot n^2 + 3 \\cdot n + 1\n\t\\hint{=}{ \\ref{inv0} }\n\t\ta + 3 \\cdot n^2 + 3 \\cdot n + 1\n\t\\hint{=}{ we add a variable $b$: \\ref{inv1}, see below }\n\t\ta + b\n\t\\hint{=}{ \\ref{a1} }\n\t\ta'\n\t\\end{calculation}\n\\end{proof}\n\n\\begin{align*}\n\\variable{\tb : \\Int}\n\\end{align*}\n\n\\begin{align*}\n\\invariant{inv1}\n{\tb = 3 \\cdot n^2 + 3 \\cdot n + 1\t}\n\\end{align*}\n%%%%\\begin{invariant}{inv2}\n%%%%\tb ~=~ 3 \\cdot n^2 + 3 \\cdot n + 1\n%%%%\\end{invariant}\n%%%\n\\begin{align*}\n\\evassignment{evt}{a1}\n{\ta' = a + b\t}\n\\end{align*}\n\nWe now have a new invariant to preserve. It is easy to see how to establish it initially:\n%\n\\begin{align*}\n\\initialization{in1}\n{\tb = 1\t}\n\\end{align*}\n\n\\begin{itemize}\n\\item label initialization predicates\n\\item spacing commands\n\\item \\sout{dummy section}\n\\item \\sout{test case: train system}\n\\item refinement\n\n\tadd refinement environments for: changing schedules, transforming progress properties\n\\item po labels: check that proofs match a po\n\\item translate the tags in proof obligations\n\t\n\tcreate toc entries with the proof environment, in unitb.sty\n\\end{itemize}\n\n\\section{Todo:}\n\\begin{itemize}\n\\item grammar: make operator grammar generic\n\\item grammar: include prefix operators and quantifiers\n\\item \n\\item in Z3.Z3.merge, remove the exceptions to integrate it to error handling\n\\item spacing in the middle of event and set declarations\n\\item testing the input validation, error messages\n\\item proof structures (proof by cases, etc)\n\\item types\n\\item invariant theorems\n\\item error checking\n\\item better LaTeX formatting\n\\item lazy proof checking\n\\item \\sout{continuous checking}\n\\item \\sout{\\emph{bug}: last of empty list of calculation steps}\n\\item \\sout{error report: report line number instead of step number when proof is incorrect}\n\\item generate documentation\n\\item handle the ``\\textbackslash input'' commands in latex and use it to produce a project hierarchy\n\\end{itemize}\n\nI'm now describing how I came up with the proof. 
It came to me in a dream and I forgot it in another dream.\n%\n\\begin{proof}{evt/INV/inv1}\n\t\\begin{calculation}\n\t\t3 \\cdot (n')^2 + 3 \\cdot n' + 1\n\t\\hint{=}{ \\ref{a0} }\n\t\t3 \\cdot (n+1)^2 + 3 \\cdot (n+1) + 1\n\t\\hint{=}{ arithmetic }\n\t\t3 \\cdot (n^2+2\\cdot n+1) + 3 \\cdot (n+1) + 1\n\t\\hint{=}{ arithmetic }\n\t\t3 \\cdot n^2+6\\cdot n+3 + 3 \\cdot n + 3 + 1\n\t\\hint{=}{ \\ref{inv1} }\n\t\tb+6\\cdot n+3+3\n\t\\hint{=}{ \\ref{inv2}, see below }\n\t\tb+c\n\t\\hint{=}{ \\ref{a2}, see below }\n\t\tb'\n\t\\end{calculation}\n\\end{proof}\n\n\\begin{align*}\n\\variable{\tc : \\Int}\n\\end{align*}\n\n\\begin{align*}\n\\invariant{inv2}\n{\tc = 6 \\cdot n + 6\t}\n\\end{align*} \n\n\\begin{align*}\n\\evassignment{evt}{a2}\n{\tb' = b + c\t}\n\\end{align*}\n\nInvariant \\ref{inv2} is easy to satisfy initially:\n\n\\begin{align*}\n\\initialization{in2}\n{\tc = 6\t}\n\\end{align*}\n\n\\begin{proof}{evt/INV/inv2}\n\t\\begin{calculation}\n\t\t6 \\cdot (n') + 6\n\t\\hint{=}{ \\ref{a0} }\n\t\t6 \\cdot (n+1) + 6\n\t\\hint{=}{ arithmetic }\n\t\t6 \\cdot n + 6 + 6\n\t\\hint{=}{ \\ref{inv2} }\n\t\tc + 6\n\t\\hint{=}{ \\ref{a3} }\n\t\tc'\n\t\\end{calculation}\n\\end{proof}\n%\n\\begin{align*}\n\\evassignment{evt}{a3}\n{\tc' = c + 6\t}\n\\end{align*}\n\n\\end{machine}\n\n\\end{document}", "meta": {"hexsha": "6c6467b535abdb3c72fc40022696f0b9aa05f4e8", "size": 4207, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tests/integers.tex", "max_stars_repo_name": "literate-unitb/literate-unitb", "max_stars_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-07-27T11:05:56.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-20T14:53:33.000Z", "max_issues_repo_path": "Tests/integers.tex", "max_issues_repo_name": "unitb/literate-unitb", "max_issues_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2017-06-25T03:53:02.000Z", "max_issues_repo_issues_event_max_datetime": "2017-06-25T04:28:38.000Z", "max_forks_repo_path": "Tests/integers.tex", "max_forks_repo_name": "literate-unitb/literate-unitb", "max_forks_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.2474747475, "max_line_length": 107, "alphanum_fraction": 0.6638935108, "num_tokens": 1520, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7718435083355187, "lm_q2_score": 0.7154239836484143, "lm_q1q2_score": 0.5521953574865649}}
{"text": "\\chapter{hjgad}\n\\begin{abox}\n\tProblem set-1\n\\end{abox}\n\\begin{enumerate}[label=\\color{ocre}\\textbf{\\arabic*.}]\n\t\\item Consider the matrix $M=\\left(\\begin{array}{lll}1 & 1 & 1 \\\\ 1 & 1 & 1 \\\\ 1 & 1 & 1\\end{array}\\right)$\\\\\n\t\\textbf{A.} The eigenvalues of $M$ are\n\t{\\exyear{NET/JRF(JUNE-2011)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $0,1,2$\n\t\t\\task[\\textbf{B.}] $0,0,3$\n\t\t\\task[\\textbf{C.}] $1,1,1$\n\t\t\\task[\\textbf{D.}] $-1,1,3$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{For eigen values }\\left[\\begin{array}{ccc}1-\\lambda & 1 & 1 \\\\ 1 & 1-\\lambda & 1 \\\\ 1 & 1 & 1-\\lambda\\end{array}\\right]&=0\\\\\n\t\t(1-\\lambda)\\left((1-\\lambda)^{2}-1\\right)-(1-\\lambda-1)+1(1-(1-\\lambda))&=0\\\\\n\t\t(1-\\lambda)\\left(1+\\lambda^{2}-2 \\lambda-1\\right)+\\lambda+\\lambda=0 \\Rightarrow \\lambda^{2}-2 \\lambda-\\lambda^{3}+2 \\lambda^{2}+2 \\lambda&=0\\\\\n\t\t\\lambda^{3}-3 \\lambda^{2}=0 \\Rightarrow \\lambda^{2}(\\lambda-3)=0 \\Rightarrow \\lambda&=0,0,3\n\t\t\\intertext{For any $n \\times n$ matrix having all elements unity eigenvalues are $0,0,0, \\ldots, n$.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\textbf{B.} The exponential of $M$ simplifies to ( $I$ is the $3 \\times 3$ identity matrix)\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $e^{M}=I+\\left(\\frac{e^{3}-1}{3}\\right) M$\n\t\t\\task[\\textbf{B.}] $e^{M}=I+M+\\frac{M^{2}}{2 !}$\n\t\t\\task[\\textbf{C.}] $e^{M}=I+3^{3} M$\n\t\t\\task[\\textbf{D.}] $e^{M}=(e-1) M$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\\intertext{We know that}\n\te^x&=1+x+\\frac{x^2}{2!}+\\frac{x^3}{3!}+.....\\\\\n\te^M&=1+M+\\frac{M^2}{2!}+\\frac{M^3}{3!}+.....\\\\\n\tM&=\\left[\\begin{array}{lll}1 & 1 & 1 \\\\ 1 & 1 & 1 \\\\ 1 & 1 & 1\\end{array}\\right] \\Rightarrow M^{2} =\\left[\\begin{array}{lll}3 & 3 & 3 \\\\ 3 & 3 & 3 \\\\ 3 & 3 & 3\\end{array}\\right]=3 M\\\\\n\t\\text{similarly}\\quad M^{3}&=9 M=3^{2} M\n\t\\intertext{we can rewrite $e^M$ as,}\n\te^{M}&=I+M+\\frac{3 M}{2 !}+\\frac{3^{2} M}{3 !}+\\frac{3^{3} M}{4 !}+\\cdots\\\\\n\t&=I+\\frac{M}{3}\\left[3+\\frac{3^{2}}{2 !}+\\frac{3^{3}}{3 !}+\\frac{3^{4}}{4 !}+\\cdots\\right]\\\\\n\t&=I+\\frac{M}{3}\\left[e^{3}-1\\right]\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item A $3 \\times 3$ matrix $M$ has $\\operatorname{Tr}[M]=6, \\operatorname{Tr}\\left[M^{2}\\right]=26$ and $\\operatorname{Tr}\\left[M^{3}\\right]=90$. 
\n\t\\end{answer}\n\t\\item A $3 \\times 3$ matrix $M$ has $\\operatorname{Tr}[M]=6, \\operatorname{Tr}\\left[M^{2}\\right]=26$ and $\\operatorname{Tr}\\left[M^{3}\\right]=90$. Which of the following can be a possible set of eigenvalues of $M$?\n\t{\\exyear{NET/JRF(DEC-2011)}}\n\\begin{tasks}(4)\n\\task[\\textbf{A.}] $\\{1,1,4\\}$\n\\task[\\textbf{B.}] $\\{-1,0,7\\}$\n\\task[\\textbf{C.}] $\\{-1,3,4\\}$\n\\task[\\textbf{D.}] $\\{2,2,2\\}$\n\\end{tasks}\n\\begin{answer}\n\\begin{align*}\n\\operatorname{Tr}[M]&=\\text{sum of the eigenvalues}=(-1)+3+4=6\\\\\n\\operatorname{Tr}\\left[M^{2}\\right]&=\\text{sum of the squares of the eigenvalues}=(-1)^{2}+(3)^{2}+(4)^{2}=26\\\\\n\\operatorname{Tr}\\left[M^{3}\\right]&=(-1)^{3}+(3)^{3}+(4)^{3}=90\n\\end{align*}\nSo the correct answer is \\textbf{Option (C)}\n\\end{answer}\n\\item The eigenvalues of the matrix $A=\\left(\\begin{array}{lll}1 & 2 & 3 \\\\ 2 & 4 & 6 \\\\ 3 & 6 & 9\\end{array}\\right)$ are\n{\\exyear{NET/JRF(JUNE-2012)}}\n\\begin{tasks}(4)\n\t\\task[\\textbf{A.}] $(1,4,9)$\n\t\\task[\\textbf{B.}] $(0,7,7)$\n\t\\task[\\textbf{C.}] $(0,1,13)$\n\t\\task[\\textbf{D.}] $(0,0,14)$\n\\end{tasks}\n\\begin{answer}\n\t\\begin{align*}\n\t\\intertext{The given matrix $A$ has proportional rows and columns (it is of rank one), so its eigenvalues are}\n\t\\lambda&=0,0,\\operatorname{Tr}(A)=0,0,14\n\t\\intertext{Another solution:}\n\t\\text{For eigenvalues }|A-\\lambda I|=0 \\Rightarrow\\left|\\begin{array}{ccc}1-\\lambda & 2 & 3 \\\\ 2 & 4-\\lambda & 6 \\\\ 3 & 6 & 9-\\lambda\\end{array}\\right|&=0\\\\\n\t(1-\\lambda)[(4-\\lambda)(9-\\lambda)-36]-2[2(9-\\lambda)-18]+3[12-3(4-\\lambda)]&=0\\\\\n\t(1-\\lambda)(4-\\lambda)(9-\\lambda)-36(1-\\lambda)-4(9-\\lambda)+36+9 \\lambda&=0\\\\\n\t\\lambda^{3}-14 \\lambda^{2}=0 \\Rightarrow \\lambda^{2}(\\lambda-14)=0 \\Rightarrow \\lambda&=0,0,14\n\t\\end{align*}\n\tSo the correct answer is \\textbf{Option (D)}\n\\end{answer}\n\t\\item The eigenvalues of the antisymmetric matrix,\n\t$$\n\tA=\\left(\\begin{array}{ccc}\n\t0 & -n_{3} & n_{2} \\\\\n\tn_{3} & 0 & -n_{1} \\\\\n\t-n_{2} & n_{1} & 0\n\t\\end{array}\\right)\n\t$$\n\twhere $n_{1}, n_{2}$ and $n_{3}$ are the components of a unit vector, are\n\t{\\exyear{NET/JRF(JUNE-2012)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $0, i,-i$\n\t\t\\task[\\textbf{B.}] $0,1,-1$\n\t\t\\task[\\textbf{C.}] $0,1+i,-1,-i$\n\t\t\\task[\\textbf{D.}]  $0,0,0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t|A-\\lambda I|&=0 \\Rightarrow\\left|\\begin{array}{ccc}-\\lambda & -n_{3} & n_{2} \\\\ n_{3} & -\\lambda & -n_{1} \\\\ -n_{2} & n_{1} & -\\lambda\\end{array}\\right|=0\\\\\n\t\t\\Rightarrow -\\lambda\\left(\\lambda^{2}+n_{1}^{2}+n_{2}^{2}+n_{3}^{2}\\right)&=0 \\Rightarrow \\lambda=0, \\pm i \\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}\n\t\t\\intertext{Since $n_{1}, n_{2}$ and $n_{3}$ are the components of a unit vector, $\\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}=1$, so}\n\t\t\\lambda_{1}&=0, \\quad \\lambda_{2}=i, \\quad \\lambda_{3}=-i\\\\\n\t\t\\text{In general, for }A&=-A^{T}\\text{ (antisymmetric): 
Eigenvalues are either zero or purely imaginary.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\t\\item Consider an $n \\times n(n>1)$ matrix $A$, in which $A_{i j}$ is the product of the indices $i$ and $j$ (namely $A_{i j}=i j$). The matrix $A$\n{\\exyear{NET/JRF(DEC-2013)}}\n\t\\begin{tasks}(1)\n\t\t\\task[\\textbf{A.}]  Has one degenerate eigenvalue with degeneracy $(n-1)$\n\t\t\\task[\\textbf{B.}] Has two degenerate eigenvalues with degeneracies 2 and $(n-2)$\n\t\t\\task[\\textbf{C.}] Has one degenerate eigenvalue with degeneracy $n$\n\t\t\\task[\\textbf{D.}] Does not have any degenerate eigenvalue\n\t\\end{tasks}\n\t\\begin{answer}\n\t\\begin{align*}\n\t\\intertext{The matrix $A$ will be}\n\tA_{i j}=\\left[\\begin{array}{cccccc}1 & 2 & 3 & 4 & \\cdots & n \\\\ 2 & 4 & 6 & 8 & \\cdots & 2n \\\\ 3 & 6 & 9 & 12 & \\cdots & 3n \\\\ 4 & 8 & 12 & 16 & \\cdots & 4n \\\\ \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\ n & 2n & 3n & 4n & \\cdots & n^{2}\\end{array}\\right]\n\t\\intertext{The matrix $A$ has proportional rows and columns (it is of rank one), so its eigenvalues are $(n-1)$ zeros together with its trace:}\n\t\\lambda&=0,0,\\ldots,0,\\left[1^{2}+2^{2}+\\cdots+n^{2}\\right]\n\t\\intertext{Thus the matrix has one degenerate eigenvalue (namely $0$) with degeneracy $n-1$.}\n\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\t\\item Consider the matrix\n\t$$\n\tM=\\left(\\begin{array}{ccc}\n\t0 & 2 i & 3 i \\\\\n\t-2 i & 0 & 6 i \\\\\n\t-3 i & -6 i & 0\n\t\\end{array}\\right)\n\t$$\n\tThe eigenvalues of $M$ are\n\t{\\exyear{NET/JRF(JUNE-2014)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $-5,-2,7$\n\t\t\\task[\\textbf{B.}] $-7,0,7$\n\t\t\\task[\\textbf{C.}] $-4 i, 2 i, 2 i$\n\t\t\\task[\\textbf{D.}] $2,3,6$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tM&=\\left(\\begin{array}{ccc}0 & 2 i & 3 i \\\\ -2 i & 0 & 6 i \\\\ -3 i & -6 i & 0\\end{array}\\right), \\quad M^{\\dagger}=\\left(\\begin{array}{ccc}0 & 2 i & 3 i \\\\ -2 i & 0 & 6 i \\\\ -3 i & -6 i & 0\\end{array}\\right)=M\n\t\t\\intertext{The matrix is Hermitian, so its eigenvalues are real; it is also antisymmetric ($M^{T}=-M$), so by the antisymmetric-matrix property, applied formally,}\n\t\t\\lambda&=0,\\pm i\\sqrt{(2 i)^{2}+(3 i)^{2}+(6 i)^{2}}\\\\\n\t\t&=0,\\pm i\\sqrt{-49}=0,\\pm i(7 i)\\\\\n\t\t&=0,\\pm 7=-7,0,7\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item The column vector $\\left(\\begin{array}{l}a \\\\ b \\\\ a\\end{array}\\right)$ is a simultaneous eigenvector of $A=\\left(\\begin{array}{ccc}0 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0\\end{array}\\right)$ and $B=\\left(\\begin{array}{lll}0 & 1 & 1 \\\\ 1 & 0 & 1 \\\\ 1 & 1 & 0\\end{array}\\right)$ if\n\t{\\exyear{NET/JRF(DEC-2014)}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $b=0$ or $a=0$\n\t\t\\task[\\textbf{B.}] $b=a$ or $b=-2 a$\n\t\t\\task[\\textbf{C.}] $b=2 a$ or $b=-a$\n\t\t\\task[\\textbf{D.}] $b=a / 2$ or $b=-a / 2$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Let }b&=a\\\\\n\t\t\\left(\\begin{array}{lll}0 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0\\end{array}\\right)\\left(\\begin{array}{l}a \\\\ a \\\\ a\\end{array}\\right)&=\\left(\\begin{array}{l}a \\\\ a \\\\ a\\end{array}\\right)\\text{ and } \\left(\\begin{array}{lll}0 & 1 & 1 \\\\ 1 & 0 & 1 \\\\ 1 & 1 & 0\\end{array}\\right)\\left(\\begin{array}{l}a \\\\ a \\\\ a\\end{array}\\right)=2\\left(\\begin{array}{l}a \\\\ a \\\\ a\\end{array}\\right)\\\\\n\t\t\\text{\tLet }b&=-2 
a\\\\\n\t\t\\left(\\begin{array}{ccc}0 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0\\end{array}\\right)\\left(\\begin{array}{c}a \\\\ -2 a \\\\ a\\end{array}\\right)&=\\left(\\begin{array}{c}a \\\\ -2 a \\\\ a\\end{array}\\right)\\text{ and } \\left(\\begin{array}{ccc}0 & 1 & 1 \\\\ 1 & 0 & 1 \\\\ 1 & 1 & 0\\end{array}\\right)\\left(\\begin{array}{c}a \\\\ -2 a \\\\ a\\end{array}\\right)\\\\&=\\left(\\begin{array}{c}-a \\\\ 2 a \\\\ -a\\end{array}\\right)=-1\\left(\\begin{array}{c}a \\\\ -2 a \\\\ a\\end{array}\\right)\n\t\t\\intertext{For the other combinations the above relation is not possible.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item The matrix $M=\\left(\\begin{array}{ccc}1 & 3 & 2 \\\\ 3 & -1 & 0 \\\\ 0 & 0 & 1\\end{array}\\right)$ satisfies the equation\n\t{\\exyear{NET/JRF(DEC-2016)}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $M^{3}-M^{2}-10 M+12 I=0$\n\t\t\\task[\\textbf{B.}] $M^{3}+M^{2}-12 M+10 I=0$\n\t\t\\task[\\textbf{C.}] $M^{3}-M^{2}-10 M+10 I=0$\n\t\t\\task[\\textbf{D.}] $M^{3}+M^{2}-10 M+10 I=0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{The characteristic equation is}\n\t\t&\\left|\\begin{array}{ccc}(1-\\lambda) & 3 & 2 \\\\ 3 & (-1-\\lambda) & 0 \\\\ 0 & 0 & (1-\\lambda)\\end{array}\\right|=0\\\\\n\t\t&\\Rightarrow \\quad(1-\\lambda)(-1-\\lambda)(1-\\lambda)-(3) \\times 3(1-\\lambda)=0\\\\\n\t\t&\\Rightarrow \\quad-\\left(\\lambda^{2}-1\\right)(\\lambda-1)-9(1-\\lambda)=0 \\\\&\\Rightarrow \\lambda^{3}-\\lambda^{2}-10 \\lambda+10=0\n\t\t\\intertext{By the Cayley--Hamilton theorem (every matrix satisfies its own characteristic equation), the matrix $M$ therefore satisfies}\n\t\t&M^{3}-M^{2}-10 M+10 I=0\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item   Which of the following cannot be the eigenvalues of a real $3 \\times 3$ matrix?\n\t{\\exyear{NET/JRF(JUNE-2017)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}]  $2 i, 0,-2 i$\n\t\t\\task[\\textbf{B.}] $1,1,1$\n\t\t\\task[\\textbf{C.}] $e^{i \\theta}, e^{-i \\theta}, 1$\n\t\t\\task[\\textbf{D.}] $i, 1,0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{If the matrix is real, then complex eigenvalues always occur in complex-conjugate pairs. In option (D), if $i$ is an eigenvalue then $-i$ must also be an eigenvalue. 
But $-i$ is not among the listed values, hence option (D) is the impossible one.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  Let $\\sigma_{x}, \\sigma_{y}, \\sigma_{z}$ be the Pauli matrices and $x^{\\prime} \\sigma_{x}+y^{\\prime} \\sigma_{y}+z^{\\prime} \\sigma_{z}=\\exp \\left(\\frac{i \\theta \\sigma_{z}}{2}\\right) \\times$\n\t$$\n\t\\left[x \\sigma_{x}+y \\sigma_{y}+z \\sigma_{z}\\right] \\exp \\left(-\\frac{i \\theta \\sigma_{z}}{2}\\right)\n\t$$\n\tThen the coordinates are related as follows\n\t{\\exyear{NET/JRF(JUNE-2017)}}\n\t\\begin{tasks}(1)\n\t\t\\task[\\textbf{A.}] $\\left(\\begin{array}{l}x^{\\prime} \\\\ y^{\\prime} \\\\ z^{\\prime}\\end{array}\\right)=\\left(\\begin{array}{ccc}\\cos \\theta & -\\sin \\theta & 0 \\\\ \\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1\\end{array}\\right)\\left(\\begin{array}{l}x \\\\ y \\\\ z\\end{array}\\right)$\n\t\t\\task[\\textbf{B.}] $\\left(\\begin{array}{l}x^{\\prime} \\\\ y^{\\prime} \\\\ z^{\\prime}\\end{array}\\right)=\\left(\\begin{array}{ccc}\\cos \\theta & \\sin \\theta & 0 \\\\ -\\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1\\end{array}\\right)\\left(\\begin{array}{l}x \\\\ y \\\\ z\\end{array}\\right)$\n\t\t\\task[\\textbf{C.}] $\\left(\\begin{array}{l}x^{\\prime} \\\\ y^{\\prime} \\\\ z^{\\prime}\\end{array}\\right)=\\left(\\begin{array}{ccc}\\cos \\frac{\\theta}{2} & \\sin \\frac{\\theta}{2} & 0 \\\\ -\\sin \\frac{\\theta}{2} & \\cos \\frac{\\theta}{2} & 0 \\\\ 0 & 0 & 1\\end{array}\\right)\\left(\\begin{array}{l}x \\\\ y \\\\ z\\end{array}\\right)$\n\t\t\\task[\\textbf{D.}] $\\left(\\begin{array}{l}x^{\\prime} \\\\ y^{\\prime} \\\\ z^{\\prime}\\end{array}\\right)=\\left(\\begin{array}{ccc}\\cos \\frac{\\theta}{2} & -\\sin \\frac{\\theta}{2} & 0 \\\\ \\sin \\frac{\\theta}{2} & \\cos \\frac{\\theta}{2} & 0 \\\\ 0 & 0 & 1\\end{array}\\right)\\left(\\begin{array}{l}x \\\\ y \\\\ z\\end{array}\\right)$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\sigma_{x}&=\\left(\\begin{array}{ll}0 & 1 \\\\ 1 & 0\\end{array}\\right), \\sigma_{y}=\\left(\\begin{array}{cc}0 & -i \\\\ i & 0\\end{array}\\right)\\text{ and } \\sigma_{z}=\\left(\\begin{array}{cc}1 & 0 \\\\ 0 & -1\\end{array}\\right)\\\\\n\t\t\\text{Hence, }x \\sigma_{x}+y \\sigma_{y}+z \\sigma_{z}&=\\left(\\begin{array}{cc}z & x-i y \\\\ x+i y & -z\\end{array}\\right)\\\\\n\t\tx^{\\prime} \\sigma_{x}+y^{\\prime} \\sigma_{y}+z^{\\prime} \\sigma_{z}&=\\left(\\begin{array}{cc}z^{\\prime} & x^{\\prime}-i y^{\\prime} \\\\ x^{\\prime}+i y^{\\prime} & -z^{\\prime}\\end{array}\\right)\\\\\n\t\t\\exp \\left(\\frac{i \\theta \\sigma_{z}}{2}\\right)&=\\left(\\begin{array}{cc}e^{i \\theta / 2} & 0 \\\\ 0 & e^{-i \\theta / 2}\\end{array}\\right)\\text{ and }\\exp \\left(\\frac{-i \\theta \\sigma_{z}}{2}\\right)=\\left(\\begin{array}{cc}e^{-i \\theta / 2} & 0 \\\\ 0 & e^{i \\theta / 2}\\end{array}\\right)\\\\\n\t\t\\text{Hence, }\\left(\\begin{array}{cc}z^{\\prime} & x^{\\prime}-i y^{\\prime} \\\\ x^{\\prime}+i y^{\\prime} & -z^{\\prime}\\end{array}\\right)&=\\left(\\begin{array}{cc}e^{i \\theta / 2} & 0 \\\\ 0 & e^{-i \\theta / 2}\\end{array}\\right)\\left(\\begin{array}{cc}z & x-i y \\\\ x+i y & -z\\end{array}\\right)\\left(\\begin{array}{cc}e^{-i \\theta / 2} & 0 \\\\ 0 & e^{i \\theta / 2}\\end{array}\\right)\\\\\n\t\t\\Rightarrow\\left(\\begin{array}{cc}z^{\\prime} & x^{\\prime}-i y^{\\prime} \\\\ x^{\\prime}+i y^{\\prime} & -z^{\\prime}\\end{array}\\right)&=\\left(\\begin{array}{cc}z & e^{i \\theta}(x-i y) \\\\ e^{-i \\theta}(x+i y) & 
-z\\end{array}\\right)\\\\\n\t\t\\text{Hence, }z^{\\prime}&=z\\text{ and }x^{\\prime}-i y^{\\prime}=e^{i \\theta}(x-i y)\\\\\n\t\t\\text{Thus }x^{\\prime}-i y^{\\prime}&=[(\\cos \\theta) x+(\\sin \\theta) y]-i[(\\cos \\theta) y-(\\sin \\theta) x]\\\\\n\t\t\\text{Thus }x^{\\prime}&=(\\cos \\theta) x+(\\sin \\theta) y\\\\\n\t\t\\text{And }y^{\\prime}&=(-\\sin \\theta) x+(\\cos \\theta) y\\\\\n\t\t\\text{Thus, }\\left(\\begin{array}{l}x^{\\prime} \\\\ y^{\\prime} \\\\ z^{\\prime}\\end{array}\\right)&=\\left(\\begin{array}{ccc}\\cos \\theta & \\sin \\theta & 0 \\\\ -\\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1\\end{array}\\right)\\left(\\begin{array}{l}x \\\\ y \\\\ z\\end{array}\\right)\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item  Let $A$ be a non-singular $3 \\times 3$ matrix, the columns of which are denoted by the vectors $\\vec{a}, \\vec{b}$ and $\\vec{c}$, respectively. Similarly, $\\vec{u}, \\vec{v}$ and $\\vec{w}$ denote the vectors that form the corresponding columns of $\\left(A^{T}\\right)^{-1}$. Which of the following is true?\n\t{\\exyear{NET/JRF(DEC-2017)}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $\\vec{u} \\cdot \\vec{a}=0, \\vec{u} \\cdot \\vec{b}=0, \\vec{u} \\cdot \\vec{c}=1$\n\t\t\\task[\\textbf{B.}]  $\\vec{u} \\cdot \\vec{a}=0, \\vec{u} \\cdot \\vec{b}=1, \\vec{u} \\cdot \\vec{c}=0$\n\t\t\\task[\\textbf{C.}] $\\vec{u} \\cdot \\vec{a}=1, \\vec{u} \\cdot \\vec{b}=0, \\vec{u} \\cdot \\vec{c}=0$\n\t\t\\task[\\textbf{D.}]  $\\vec{u} \\cdot \\vec{a}=0, \\vec{u} \\cdot \\vec{b}=0, \\vec{u} \\cdot \\vec{c}=0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{We can take any $3 \\times 3$ non-singular matrix in order to avoid a long calculation.}\n\t\t\\text{Take }A&=\\left[\\begin{array}{ccc}1 & 0 & 0 \\\\ 0 & 2 & 0 \\\\ 0 & 0 & 3 \\\\ \\downarrow & \\downarrow & \\downarrow \\\\ \\vec{a} & \\vec{b} & \\vec{c}\\end{array}\\right] \\Rightarrow\\left(A^{T}\\right)^{-1}=\\left[\\begin{array}{ccc}1 & 0 & 0 \\\\ 0 & 1 / 2 & 0 \\\\ 0 & 0 & 1 / 3 \\\\ \\downarrow & \\downarrow & \\downarrow \\\\ \\vec{u} & \\vec{v} & \\vec{w}\\end{array}\\right]\n\t\t\\intertext{We see that}\n\t\t\\vec{u} \\cdot \\vec{a}&=1\\cdot 1+0\\cdot 0+0\\cdot 0=1\\\\\n\t\t\\vec{u} \\cdot \\vec{b}&=1\\cdot 0+0\\cdot 2+0\\cdot 0=0\\\\\n\t\t\\vec{u} \\cdot \\vec{c}&=1\\cdot 0+0\\cdot 0+0\\cdot 3=0\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item Consider the matrix equation\n\t$$\n\t\\left(\\begin{array}{llc}\n\t1 & 1 & 1 \\\\\n\t1 & 2 & 3 \\\\\n\t2 & b & 2 c\n\t\\end{array}\\right)\\left(\\begin{array}{l}\n\tx \\\\\n\ty \\\\\n\tz\n\t\\end{array}\\right)=\\left(\\begin{array}{l}\n\t0 \\\\\n\t0 \\\\\n\t0\n\t\\end{array}\\right)\n\t$$\n\tThe condition for existence of a non-trivial solution and the corresponding normalised solution (up to a sign) is\n\t{\\exyear{NET/JRF(DEC-2017)}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $b=2 c$ and $(x, y, z)=\\frac{1}{\\sqrt{6}}(1,-2,1)$\n\t\t\\task[\\textbf{B.}] $c=2 b$ and $(x, y, z)=\\frac{1}{\\sqrt{6}}(1,1,-2)$\n\t\t\\task[\\textbf{C.}] $c=b+1$ and $(x, y, z)=\\frac{1}{\\sqrt{6}}(2,-1,-1)$\n\t\t\\task[\\textbf{D.}] $b=c+1$ and $(x, y, z)=\\frac{1}{\\sqrt{6}}(1,-2,1)$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{Solution: We know that the matrix equation $A X=0$, where $A$ is the given matrix and $X$ is a column vector, has a non-zero solution if and only if $|A|=0$:}\n\t\t\\left|\\begin{array}{lll}1 & 1 & 1 \\\\ 1 & 2 & 3 \\\\ 2 & b & 2 c\\end{array}\\right|&=0 
\\Rightarrow 4 c-3 b-2 c+6+b-4=0\\\\\n\t\t\\Rightarrow 2 c-2 b+2&=0 \\Rightarrow b=c+1\n\t\t\\intertext{and we do not need to perform any further calculation.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  Which of the following statements is true for a $3 \\times 3$ real orthogonal matrix with determinant $+1 ?$\n\t{\\exyear{NET/JRF(JUNE-2018)}}\n\t\\begin{tasks}(1)\n\t\t\\task[\\textbf{A.}] The modulus of each of its eigenvalues need not be 1, but their product must be 1\n\t\t\\task[\\textbf{B.}] At least one of its eigenvalues is $+1$\n\t\t\\task[\\textbf{C.}] All of its eigenvalues must be real\n\t\t\\task[\\textbf{D.}]  None of its eigenvalues must be real\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{Solution: The characteristic equation of any $3 \\times 3$ real matrix is a real cubic of the form $\\lambda^{3}+a \\lambda^{2}+b \\lambda+c=0$, which implies that at least one of the eigenvalues must be real. It is a standard fact that the modulus of each eigenvalue of an orthogonal matrix is 1.}\n\t\t\\intertext{If all eigenvalues of the $3 \\times 3$ orthogonal matrix are real, then the only possibilities are}\n\t\t\\lambda_{1}&=1, \\lambda_{2}=1\\text{ and }\\lambda_{3}=1\\text{ or }\\\\\\lambda_{1}&=-1, \\lambda_{2}=-1, \\lambda_{3}=1\\text{ or }\\\\\\lambda_{1}&=-1, \\lambda_{2}=1, \\lambda_{3}=-1\\\\\n\t\t\\intertext{Thus we see that at least one eigenvalue is $+1$. Suppose instead that one eigenvalue is real and the other two are complex conjugates. Since the determinant is the product of the eigenvalues,}\n\t\t\\lambda_{1} \\lambda_{2} \\lambda_{3}&=1\\\\\n\t\t\\Rightarrow \\lambda_{1}(a+i b)(a-i b)&=1 \\Rightarrow \\lambda_{1}\\left(a^{2}+b^{2}\\right)=1\\\\\n\t\t\\text{Since }a^{2}+b^{2}=1\\text{ (unit modulus), we get }\\lambda_{1}&=1\n\t\t\\intertext{In this case also we see that at least one eigenvalue must be $+1$.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}
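\n\n\t\tA standard illustration (added remark, not part of the original solution): the rotation matrix $R_{z}(\\theta)=\\left(\\begin{array}{ccc}\\cos \\theta & -\\sin \\theta & 0 \\\\ \\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1\\end{array}\\right)$ is real orthogonal with determinant $+1$, and its eigenvalues are $e^{i \\theta}, e^{-i \\theta}$ and $1$: the guaranteed eigenvalue $+1$ corresponds to the axis of rotation.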
\n\t\\end{answer}\n\t\\item One of the eigenvalues of the matrix $e^{A}$ is $e^{a}$, where $A=\\left(\\begin{array}{ccc}a & 0 & 0 \\\\ 0 & 0 & a \\\\ 0 & a & 0\\end{array}\\right)$. The product of the other two eigenvalues of $e^{A}$ is\n\t{\\exyear{NET/JRF(DEC-2018)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $e^{2 a}$\n\t\t\\task[\\textbf{B.}] $e^{-a}$\n\t\t\\task[\\textbf{C.}]  $e^{-2 a}$\n\t\t\\task[\\textbf{D.}] 1\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{The eigenvalues of the matrix $A$ are $a, a$ and $-a$, so the eigenvalues of $e^{A}$ are $e^{a}, e^{a}$ and $e^{-a}$. The product of the other two eigenvalues of $e^{A}$ is therefore $e^{a} \\cdot e^{-a}=1$.}\n\t\t\\intertext{Alternatively,}\n\t\te^{\\operatorname{Tr} A}&=e^{\\lambda_{1}+\\lambda_{2}+\\lambda_{3}}=\\operatorname{det} e^{A}\\\\\n\t\t\\Rightarrow e^{\\lambda_{1}} \\cdot e^{\\lambda_{2}+\\lambda_{3}}&=\\operatorname{det} e^{A} \\Rightarrow e^{a} \\cdot e^{\\lambda_{2}} \\cdot e^{\\lambda_{3}}=e^{a}\\\\\n\t\t\\Rightarrow e^{\\lambda_{2}} \\cdot e^{\\lambda_{3}}&=1\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item A $4 \\times 4$ complex matrix $A$ satisfies the relation $A^{\\dagger} A=4 I$, where $I$ is the $4 \\times 4$ identity matrix. The number of independent real parameters of $A$ is\n\t{\\exyear{NET/JRF(DEC-2018)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] 32\n\t\t\\task[\\textbf{B.}] 10\n\t\t\\task[\\textbf{C.}] 12\n\t\t\\task[\\textbf{D.}] 16\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Given that }A^{\\dagger} A&=4 I \\Rightarrow \\frac{1}{4}\\left(A^{\\dagger} A\\right)=I\\\\\n\t\t\\text{Let }A&=2 B\\text{ then}\\\\\n\t\tA^{\\dagger}&=2 B^{\\dagger}\\\\\n\t\t\\text{Therefore, }B^{\\dagger} B&=I\n\t\t\\intertext{This shows that $B$ is a unitary matrix. The number of independent real parameters needed to specify an $n \\times n$ unitary matrix is $n^{2}$, so the number of independent parameters needed to specify matrix $B$ is $4^{2}=16$.}\n\t\t\\intertext{Now, the number of independent parameters needed to specify matrix $A$ is the same as that for matrix $B$.}\n\t\t\\intertext{Thus the number of independent parameters needed to specify $A$ is 16.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  The elements of a $3 \\times 3$ matrix $A$ are the products of its row and column indices, $A_{i j}=i j$ (where $i, j=1,2,3$). The eigenvalues of $A$ are\n\t{\\exyear{NET/JRF(JUNE-2019)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $(7,7,0)$\n\t\t\\task[\\textbf{B.}]  $(7,4,3)$\n\t\t\\task[\\textbf{C.}] $(14,0,0)$\n\t\t\\task[\\textbf{D.}] $\\left(\\frac{14}{3}, \\frac{14}{3}, \\frac{14}{3}\\right)$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Since }A_{i j}&=i j \\quad(\\text{where }i, j=1,2,3)\\\\\n\t\t\\text{we obtain the matrix }A&=\\left[\\begin{array}{lll}1 & 2 & 3 \\\\ 2 & 4 & 6 \\\\ 3 & 6 & 9\\end{array}\\right]\\\\\n\t\t\\text{For calculating eigenvalues }&\\left|\\begin{array}{ccc}1-\\lambda & 2 & 3 \\\\ 2 & 4-\\lambda & 6 \\\\ 3 & 6 & 9-\\lambda\\end{array}\\right|=0\n\t\t\\intertext{$(1-\\lambda)[(4-\\lambda)(9-\\lambda)-36]-2[2(9-\\lambda)-18]+3(12-3(4-\\lambda))=0$}\n\t\t\\intertext{$\\Rightarrow-\\lambda^{3}+14 \\lambda^{2}=0 \\Rightarrow \\lambda^{2}(-\\lambda+14)=0 \\quad \\Rightarrow \\lambda=0,0,14$}\n\t\t\\intertext{Also, directly for this rank-one $3 \\times 3$ matrix we can write $(0,0,\\operatorname{Tr} A)$ as the eigenvalues.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item  The operator $A$ has a matrix representation $\\left(\\begin{array}{ll}2 & 1 \\\\ 1 & 2\\end{array}\\right)$ in the basis spanned by $\\left(\\begin{array}{l}1 \\\\ 0\\end{array}\\right)$ and $\\left(\\begin{array}{l}0 \\\\ 1\\end{array}\\right) .$ In another basis spanned by $\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{l}1 \\\\ 1\\end{array}\\right)$ and $\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{c}1 \\\\ -1\\end{array}\\right)$, the matrix representation of $A$ is\n\t{\\exyear{NET/JRF(JUNE-2019)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $\\left(\\begin{array}{ll}2 & 0 \\\\ 0 & 2\\end{array}\\right)$\n\t\t\\task[\\textbf{B.}] $\\left(\\begin{array}{ll}3 & 0 \\\\ 0 & 1\\end{array}\\right)$\n\t\t\\task[\\textbf{C.}] $\\left(\\begin{array}{ll}3 & 1 \\\\ 0 & 1\\end{array}\\right)$\n\t\t\\task[\\textbf{D.}] $\\left(\\begin{array}{ll}3 & 0 \\\\ 1 & 1\\end{array}\\right)$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{The given vectors $\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{l}1 \\\\ 1\\end{array}\\right)$ and $\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{c}1 \\\\ -1\\end{array}\\right)$ are eigenvectors of the operator $A$,}\n\t\t\\intertext{Hence in this basis the matrix $A$ is 
represented by the diagonal matrix $D$ consisting of the eigenvalues of matrix $A$ on the main diagonal. Therefore,}\n\t\tD&=\\left[\\begin{array}{ll}3 & 0 \\\\ 0 & 1\\end{array}\\right]\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item  If the rank of an $n \\times n$ matrix $A$ is $m$, where $m$ and $n$ are positive integers with $1 \\leq m \\leq n$, then the rank of the matrix $A^{2}$ is\n\t{\\exyear{NET/JRF(DEC-2019)}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}]  $m$\n\t\t\\task[\\textbf{B.}] $m-1$\n\t\t\\task[\\textbf{C.}] $2 m$\n\t\t\\task[\\textbf{D.}] $m-2$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Let }A&=\\sigma_{1}=\\left[\\begin{array}{ll}0 & 1 \\\\ 1 & 0\\end{array}\\right],\\text{ an }n \\times n\\text{ matrix with }n=2\\text{ and rank }m=2\\\\\n\t\t1& \\leq 2 \\leq 2 \\quad (1 \\leq m \\leq n)\\\\\n\t\tA^{2}&=\\left[\\begin{array}{ll}1 & 0 \\\\ 0 & 1\\end{array}\\right],\\text{ which again has rank }m=2\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\t\\item   The eigenvalues of the $3 \\times 3$ matrix $M=\\left(\\begin{array}{lll}a^{2} & a b & a c \\\\ a b & b^{2} & b c \\\\ a c & b c & c^{2}\\end{array}\\right)$ are\n\t{\\exyear{NET/JRF(JUNE-2020)}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $a^{2}+b^{2}+c^{2}, 0,0$\n\t\t\\task[\\textbf{B.}] $b^{2}+c^{2}, a^{2}, 0$\n\t\t\\task[\\textbf{C.}] $a^{2}+b^{2}, c^{2}, 0$\n\t\t\\task[\\textbf{D.}] $a^{2}+c^{2}, b^{2}, 0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\tM&=\\left(\\begin{array}{lll}a^{2} & a b & a c \\\\ a b & b^{2} & b c \\\\ a c & b c & c^{2}\\end{array}\\right)\n\t\t\\intertext{To make it simple, let $a=1, b=1, c=1$ so $\\quad M=\\left[\\begin{array}{lll}1 & 1 & 1 \\\\ 1 & 1 & 1 \\\\ 1 & 1 & 1\\end{array}\\right]_{3 \\times 3}$}\n\t\t\\Rightarrow \\lambda&=3,0,0\\text{, consistent with }a^{2}+b^{2}+c^{2}, 0,0\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\\end{enumerate}\n\\newpage\n\\begin{abox}\n\tProblem set-2\n\\end{abox}\n\\begin{enumerate}[label=\\color{ocre}\\textbf{\\arabic*.}]\n\t\\item The eigenvalues of the matrix $\\left(\\begin{array}{lll}2 & 3 & 0 \\\\ 3 & 2 & 0 \\\\ 0 & 0 & 1\\end{array}\\right)$ are\n\t{\\exyear{GATE 2010}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $5,2,-2$\n\t\t\\task[\\textbf{B.}] $-5,-1,-1$\n\t\t\\task[\\textbf{C.}]  $5,1,-1$\n\t\t\\task[\\textbf{D.}] $-5,1,1$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{The characteristic equation of the matrix $A$ is $|A-\\lambda I|=0$}\n\t\t\\Rightarrow|A-\\lambda I|=\\left|\\begin{array}{ccc}2-\\lambda & 3 & 0 \\\\ 3 & 2-\\lambda & 0 \\\\ 0 & 0 & 1-\\lambda\\end{array}\\right|&=0 \\\\\\Rightarrow(1-\\lambda)\\left[(2-\\lambda)^{2}-9\\right]&=0 \\\\\\Rightarrow \\lambda=1,\\ 2-\\lambda&=\\pm 3\\\\\n\t\t\\Rightarrow \\lambda&=5,1,-1\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item Two matrices $A$ and $B$ are said to be similar if $B=P^{-1} A P$ for some invertible matrix $P$. Which of the following statements is NOT TRUE?\n\t{\\exyear{GATE 2011}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] Det $A=\\operatorname{Det} B$\n\t\t\\task[\\textbf{B.}]  Trace of $A=$ Trace of $B$\n\t\t\\task[\\textbf{C.}] $A$ and $B$ have the same eigenvectors\n\t\t\\task[\\textbf{D.}] $A$ and $B$ have the same eigenvalues\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\intertext{If $A$ and $B$ are square matrices of the same order and $P$ is an invertible matrix, then the matrices $A$ and $B=P^{-1} A 
P$ have the same characteristic roots.}\n\t\t\\intertext{Then, $B-\\lambda I=P^{-1} A P-P^{-1} \\lambda I P=P^{-1}(A-\\lambda I) P$ where $I$ is the identity matrix.}\n\t\t|B-\\lambda I|&=\\left|P^{-1}(A-\\lambda I) P\\right|=\\left|P^{-1}\\right|\\,\\left|A-\\lambda I\\right|\\,\\left|P\\right|\\\\&=\\left|A-\\lambda I\\right|\\left|P^{-1}\\right|\\left|P\\right|=\\left|A-\\lambda I\\right|\\left|P P^{-1}\\right|\\\\&=|A-\\lambda I|\n\t\t\\intertext{Thus, the matrices $A$ and $B\\left(=P^{-1} A P\\right)$ have the same characteristic equation and hence the same eigenvalues; since the trace is the sum and the determinant is the product of the eigenvalues, options (A), (B) and (D) all hold. Similar matrices need not have the same eigenvectors, however: for instance $A=\\left(\\begin{array}{ll}1 & 1 \\\\ 0 & 2\\end{array}\\right)$ and $B=\\left(\\begin{array}{ll}1 & 0 \\\\ 0 & 2\\end{array}\\right)$ are similar but have different eigenvectors. Hence the third alternative is the one that is NOT TRUE.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item A $3 \\times 3$ matrix has elements such that its trace is 11 and its determinant is 36. The eigenvalues of the matrix are all known to be positive integers. The largest eigenvalue of the matrix is\n\t{\\exyear{GATE 2011}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] 18\n\t\t\\task[\\textbf{B.}]  12\n\t\t\\task[\\textbf{C.}] 9\n\t\t\\task[\\textbf{D.}] 6\n\t\\end{tasks}\n\t\\begin{answer}\n\t\tWe know that for any matrix\\\\\n\t\t1. The product of the eigenvalues is equal to the determinant of the matrix.\\\\\n\t\t2. $\\lambda_{1}+\\lambda_{2}+\\lambda_{3}+\\ldots=$ Trace of the matrix\\\\\n\t\t$\\lambda_{1}+\\lambda_{2}+\\lambda_{3}=11$ and $\\lambda_{1} \\lambda_{2} \\lambda_{3}=36$. The only positive integers satisfying both conditions are $2, 3$ and $6$. Hence, the largest eigenvalue of the matrix is $6.$\\\\\\\\\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  The number of independent components of the symmetric tensor $A_{i j}$ with indices $i, j=1,2,3$ is\n\t{\\exyear{GATE 2012}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] 1\n\t\t\\task[\\textbf{B.}] 3\n\t\t\\task[\\textbf{C.}] 6\n\t\t\\task[\\textbf{D.}] 9\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{For a symmetric tensor, }A_{i j}&=\\left[\\begin{array}{ccc}A_{11} & A_{12} & A_{13} \\\\ A_{21} & A_{22} & A_{23} \\\\ A_{31} & A_{32} & A_{33}\\end{array}\\right]\\\\\n\t\t\\because A_{12}=A_{21}, \\quad A_{23}&=A_{32}, A_{13}=A_{31},\\text{ hence there are six independent components.}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (C)}\n\t\\end{answer}\n\t\\item  The eigenvalues of the matrix $\\left(\\begin{array}{lll}0 & 1 & 0 \\\\ 1 & 0 & 1 \\\\ 0 & 1 & 0\\end{array}\\right)$ are\n\t{\\exyear{GATE 2012}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] $0,1,1$\n\t\t\\task[\\textbf{B.}] $0,-\\sqrt{2}, \\sqrt{2}$\n\t\t\\task[\\textbf{C.}]  $\\frac{1}{\\sqrt{2}}, \\frac{1}{\\sqrt{2}}, 0$\n\t\t\\task[\\textbf{D.}] $\\sqrt{2}, \\sqrt{2}, 0$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t|A-\\lambda I|&=0 \\Rightarrow\\left|\\begin{array}{ccc}-\\lambda & 1 & 0 \\\\ 1 & -\\lambda & 1 \\\\ 0 & 1 & -\\lambda\\end{array}\\right|=0\\\\\n\t\t\\Rightarrow-\\lambda\\left(\\lambda^{2}-1\\right)+\\lambda&=0 \\Rightarrow \\lambda=0,+\\sqrt{2},-\\sqrt{2}\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\t\\item    The degenerate eigenvalue of the matrix $\\left[\\begin{array}{ccc}4 & -1 & -1 \\\\ -1 & 4 & -1 \\\\ -1 & -1 & 4\\end{array}\\right]$ is (your answer should be an\n\tinteger)---\n\t{\\exyear{GATE 2013}}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\left|\\begin{array}{ccc}4-\\lambda & -1 & -1 \\\\ -1 & 4-\\lambda & -1 \\\\ -1 & -1 & 4-\\lambda\\end{array}\\right|&=0 
\\Rightarrow(2-\\lambda)\\left|\\begin{array}{ccc}1 & -1 & -1 \\\\ 0 & 5-\\lambda & 0 \\\\ 0 & 0 & 5-\\lambda\\end{array}\\right|\\\\&=(2-\\lambda)(5-\\lambda)^{2}=0 \\Rightarrow \\lambda\\\\&=2,5,5\n\t\t\\end{align*}\n\t\\end{answer}\n\t\\item  The matrix\n\t$$\n\tA=\\frac{1}{\\sqrt{3}}\\left[\\begin{array}{cc}\n\t1 & 1+i \\\\\n\t1-i & -1\n\t\\end{array}\\right] \\text { is }\n\t$$\n\t{\\exyear{GATE 2014}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] Orthogonal\n\t\t\\task[\\textbf{B.}] Symmetric\n\t\t\\task[\\textbf{C.}]  Anti-symmetric\n\t\t\\task[\\textbf{D.}]  Unitary\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{$A$ is unitary, since }A^{\\dagger} A=I\n\t\t\\end{align*}\n\t\tSo the correct answer is \\textbf{Option (D)}\n\t\\end{answer}\n\t\\item  Let $X$ be a column vector of dimension $n>1$ with at least one non-zero entry. The number of non-zero eigenvalues of the matrix $M=X X^{T}$ is\n\t{\\exyear{GATE 2017}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}] 0\n\t\t\\task[\\textbf{B.}] $n$\n\t\t\\task[\\textbf{C.}] 1\n\t\t\\task[\\textbf{D.}] $n-1$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\t\\begin{align*}\n\t\t\\text{Let }X&=\\left[\\begin{array}{l}0 \\\\ 0 \\\\ a \\\\ 0 \\\\ 0 \\\\ 0\\end{array}\\right],\\text{ then }X^{T}=\\left[\\begin{array}{llll}0 & 0 & a \\ldots & 0\\end{array}\\right]\n\t\t\\intertext{Here, $X$ is an $n \\times 1$ column vector with the entry in the $i$th row equal to $a$, and $X^{T}$ is a row vector with the entry in the $i$th column equal to $a$. Then $X X^{T}$ is an $n \\times n$ matrix whose only non-zero entry is $a^{2}$, in the $i$th row and $i$th column.}\n\t\t\\end{align*}\n\t\tHence\n\t\t\\begin{figure}[H]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=4cm,width=6cm]{diagram-20210823(3)-crop}\n\t\t\\end{figure}\n\t\tSince this matrix is diagonal, its eigenvalues are $a^{2}, 0,0, \\ldots, 0$. Hence, the number of non-zero eigenvalues of the matrix $X X^{T}$ is 1.\\\\\\\\\n\t\tSo the correct answer is \\textbf{Option (C)}
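\n\n\t\tA cleaner general argument (our added remark): for any non-zero column vector $X$,\n\t\t\\begin{align*}\n\t\tM X=\\left(X X^{T}\\right) X=X\\left(X^{T} X\\right)=\\|X\\|^{2} X\n\t\t\\end{align*}\n\t\tso $X$ itself is an eigenvector of $M$ with eigenvalue $\\|X\\|^{2}>0$, while $M v=X\\left(X^{T} v\\right)=0$ for every $v$ orthogonal to $X$. Hence $M$ has rank one and exactly one non-zero eigenvalue.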
\n\t\\end{answer}\n\t\\item The eigenvalues of a Hermitian matrix are all\n\t{\\exyear{GATE 2018}}\n\t\\begin{tasks}(4)\n\t\t\\task[\\textbf{A.}]  Real\n\t\t\\task[\\textbf{B.}] Imaginary\n\t\t\\task[\\textbf{C.}] Of modulus one\n\t\t\\task[\\textbf{D.}] Real and positive\n\t\\end{tasks}\n\t\\begin{answer}\n\t\tThe eigenvalues of a Hermitian matrix must be real.\\\\\\\\\n\t\tSo the correct answer is \\textbf{Option (A)}\n\t\\end{answer}\n\t\\item During a rotation, vectors along the axis of rotation remain unchanged. For the rotation matrix $\\left(\\begin{array}{ccc}0 & 1 & 0 \\\\ 0 & 0 & -1 \\\\ -1 & 0 & 0\\end{array}\\right)$, the vector along the axis of rotation is\n\t{\\exyear{GATE 2019}}\n\t\\begin{tasks}(2)\n\t\t\\task[\\textbf{A.}] $\\frac{1}{3}(2 \\hat{i}-\\hat{j}+2 \\hat{k})$\n\t\t\\task[\\textbf{B.}]  $\\frac{1}{\\sqrt{3}}(\\hat{i}+\\hat{j}-\\hat{k})$\n\t\t\\task[\\textbf{C.}] $\\frac{1}{\\sqrt{3}}(\\hat{i}-\\hat{j}-\\hat{k})$\n\t\t\\task[\\textbf{D.}] $\\frac{1}{3}(2 \\hat{i}+2 \\hat{j}-\\hat{k})$\n\t\\end{tasks}\n\t\\begin{answer}\n\t\tThe axis vector $\\vec{v}$ satisfies $R \\vec{v}=\\vec{v}$ (eigenvalue $+1$). Checking option (B):\n\t\t\\begin{align*}\n\t\t\\left(\\begin{array}{ccc}0 & 1 & 0 \\\\ 0 & 0 & -1 \\\\ -1 & 0 & 0\\end{array}\\right) \\frac{1}{\\sqrt{3}}\\left(\\begin{array}{c}1 \\\\ 1 \\\\ -1\\end{array}\\right)=\\frac{1}{\\sqrt{3}}\\left(\\begin{array}{c}1 \\\\ 1 \\\\ -1\\end{array}\\right)\n\t\t\\end{align*}\n\t\tso this vector is left unchanged by the rotation.\\\\\\\\\n\t\tSo the correct answer is \\textbf{Option (B)}\n\t\\end{answer}\n\\end{enumerate}\n\\colorlet{ocre1}{ocre!70!}\n\\colorlet{ocrel}{ocre!30!}\n\\setlength\\arrayrulewidth{1pt}\n\\begin{table}[H]\n\t\\centering\n\t\\arrayrulecolor{ocre}\n\t\\begin{tabular}{|p{1.5cm}|p{1.5cm}||p{1.5cm}|p{1.5cm}|}\n\t\t\\hline\n\t\t\\multicolumn{4}{|c|}{\\textbf{Answer key}}\\\\\\hline\\hline\n\t\t\\rowcolor{ocrel}Q.No.&Answer&Q.No.&Answer\\\\\\hline\n\t\t1&\\textbf{C} &2&\\textbf{C}\\\\\\hline \n\t\t3&\\textbf{D} &4&\\textbf{C} \\\\\\hline\n\t\t5&\\textbf{B} &6&\\textbf{-} \\\\\\hline\n\t\t7&\\textbf{D}&8&\\textbf{C}\\\\\\hline\n\t\t9&\\textbf{A}&10&\\textbf{B}\\\\\\hline\n\t\t\n\t\\end{tabular}\n\\end{table}", "meta": {"hexsha": "bd318aade5e56de9b7311765187ab3ec90f4e98c", "size": 32623, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "CSIR- Mathematical Physics/chapter/Matrix Problem set Solutions.tex", "max_stars_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_stars_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CSIR- Mathematical Physics/chapter/Matrix Problem set Solutions.tex", "max_issues_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_issues_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CSIR- Mathematical Physics/chapter/Matrix Problem set Solutions.tex", "max_forks_repo_name": "archives-futuring/CSIR-Physics-Study-Material", "max_forks_repo_head_hexsha": "689cff91895fec36b4bb0add178f13a0f68648ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.1064189189, "max_line_length": 449, "alphanum_fraction": 0.604512154, "num_tokens": 13980, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6791787121629465, "lm_q2_score": 0.8128673178375734, "lm_q1q2_score": 0.5520821780882715}}
{"text": "\\chapter{Greedy}\n\n\\section{Introduction}\nPhilosophy: choose the best options at the current state without reverting the choice in the future. \n\nA greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum.\n\nGreedy algorithm is a degenerated dp since the the past substructure is not remembered.\n\n\\subsection{Proof}\nThe proof technique for the correctness of the greedy method. \n\nProof by contradiction, the solution of greedy algorithm is $\\mat{G}$ and the optimal solution is $\\mat{O}$, $\\mat{O}\\neq \\mat{G}$ (or relaxed to $|\\mat{O}|\\neq |\\mat{G}|$). \n\nTwo general technique it is impossible to have $\\mat{O}\\neq \\mat{G}$:\n\\begin{enumerate}\n\\item Exchange method \n\\item Stays-head method \n\\end{enumerate}\n\n\\section{Extreme First}\n\\runinhead{Rearranging String $k$ distance apart.} Given a non-empty string $s$ and an integer $k$, rearrange the string such that the same characters are at least distance\n$k$ from each other.\n\n\\textbf{Core clues.} \n\\begin{enumerate}\n\\item The char with the most count put to the result first - greedy.\n\\item Fill every $k$ slots as cycle - greedily fill high-count char as many as possible.\n\\end{enumerate}\n\n\\textbf{Implementations.}\n\\begin{enumerate}\n\\item Use a heap as a way to get the char of the most count. \n\\item \\pyinline{while} loop till exhaust the heap\n\\end{enumerate}\n\n\\begin{python}\ndef rearrangeString(self, s, k):\n  if not s or k == 0: return s\n\n  d = defaultdict(int)\n  for c in s:\n    d[c] += 1\n\n  h = []\n  for char, cnt in d.items():\n    heapq.heappush(h, Val(cnt, char))\n\n  ret = []\n  while h:\n    cur = []\n    for _ in xrange(k):\n      if not h: \n        return \"\".join(ret) if len(ret) == len(s) else \"\"\n\n      e = heapq.heappop(h)\n      ret.append(e.val)\n      e.cnt -= 1\n      if e.cnt > 0:\n        cur.append(e)\n\n    for e in cur:\n      heapq.heappush(h, e)\n\n  return \"\".join(ret)\n\n\\end{python}\n\n", "meta": {"hexsha": "d74296517521eabda58136056e89e95083d4cf85", "size": 1964, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapterGreedy.tex", "max_stars_repo_name": "algorhythms/Algo-Quicksheet", "max_stars_repo_head_hexsha": "c5d219a96f195adf1d19d2d701986e01fc9b8195", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 902, "max_stars_repo_stars_event_min_datetime": "2015-08-16T08:25:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T05:23:50.000Z", "max_issues_repo_path": "chapterGreedy.tex", "max_issues_repo_name": "andysli6590/Algo-Quicksheet", "max_issues_repo_head_hexsha": "c5d219a96f195adf1d19d2d701986e01fc9b8195", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2015-07-06T17:24:47.000Z", "max_issues_repo_issues_event_max_datetime": "2018-12-12T00:01:38.000Z", "max_forks_repo_path": "chapterGreedy.tex", "max_forks_repo_name": "andysli6590/Algo-Quicksheet", "max_forks_repo_head_hexsha": "c5d219a96f195adf1d19d2d701986e01fc9b8195", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 92, "max_forks_repo_forks_event_min_datetime": "2015-10-09T03:13:35.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T00:57:08.000Z", "avg_line_length": 28.4637681159, "max_line_length": 174, "alphanum_fraction": 0.6904276986, "num_tokens": 541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.679178686187839, "lm_q2_score": 0.8128673223709251, "lm_q1q2_score": 0.5520821600529116}}
{"text": "% !TEX root = main.tex\n\n%-------------------------------------------------\n\\section{One-sample tests}\\label{sec:nhst_one_sample_tests}\n\n%-----------------------------\n\\begin{example}[Binomial test]\n Let $X_1,X_2,\\ldots,X_n$ be a random sample from the $\\text{Bernoulli}(\\theta)$ distribution, where $\\theta$ is unknown.\n\\ben\n\\it Find a critical region of size $\\alpha$ to test $H_0:\\theta=\\theta_0$ against $H_1:\\theta<\\theta_0$.\n\\it Find the power function of the test.\n\\een\n\\end{example}\n\n\\begin{solution}\n\\bit\n\\it The sample space is $D = \\{0,1\\}^n$, which consists of all binary vectors of length $n$.\n\\eit\nLet $\\mathbf{X}=(X_1,X_2,\\ldots,X_n)$ denote the random sample, and consider the test statistic \n\\[\n\\begin{array}{rccl}\nS: & D & \\to & \\R \\\\\n & \\mathbf{x} & \\mapsto & \\sum_{i=1}^n x_i\n\\end{array}\n\\]\n\\bit\n\\it $S(\\mathbf{X})$ is the total number of successes in the sample.\n\\it Under the null hypothesis, $S(\\mathbf{X})\\sim\\text{Binomial}(n,\\theta_0)$,\n\\eit\n\n\\ben\n\\it % size\nTo test $H_0:\\theta=\\theta_0$ against $H_1:\\theta<\\theta_0$, we define the critical region\n\\[\nC = \\{\\mathbf{x} : S(\\mathbf{x}) \\leq k\\},\n\\]\nwhere $k$ is chosen so that $\\prob_{\\theta_0}\\big(S(\\mathbf{X})\\leq k\\big) = \\alpha$.\n\n\\bit\n\\it Because $S$ is \\emph{discrete}, it is unlikely that $\\prob_{\\theta_0}\\big(S(\\mathbf{X})\\leq k\\big) = \\alpha$ has an integer solution $k$.\n\\it The cautious approach would be to take the largest value of $k$ satisfying $\\prob_{\\theta_0}\\big(S(\\mathbf{X})\\leq k\\big) \\leq \\alpha$.\n\\eit\n\n\\it % power\nThe power function of the test is\n\\[\n\\gamma(\\theta) \n\t= \\prob_{\\theta}(S\\leq k) \n\t= \\sum_{j=1}^k\\binom{n}{j}\\theta^j(1-\\theta)^{n-j} \\quad\\text{for all $\\theta < \\theta_0$.}\n\\]\t\n%\\begin{align*}\n%\\gamma(\\theta) \n%\t& = \\prob_{\\theta}\\big[S(\\mathbf{X})\\leq k\\big] \\\\\n%\t& = \\sum_{j=1}^k\\binom{n}{j}\\theta^j(1-\\theta)^{n-j}\\text{\\quad for\\quad $\\theta\\in [0,\\theta_0]$.}\n%\\end{align*}\n\\een\nIf we increase the size of the test, its power also increases: for example, if we take the critical region\n\\[\nC^{*} = \\{\\mathbf{x} : S(\\mathbf{x}) \\leq k+1 \\}\n\\]\nthen $\\gamma^{*}(\\theta) > \\gamma(\\theta)$ for all $\\theta\\in(0,1)$:\n\\[\n\\gamma^{*}(\\theta) \n\t= \\prob(S\\leq k+1;\\theta)\n\t> \\prob(S\\leq k;\\theta) \n\t= \\gamma(\\theta) \\text{\\quad for all $\\theta\\in(0,1)$.}\n\\]\n\\end{solution}\n\n%-----------------------------\n\\begin{example}[$z$-test]\nLet $X\\sim N(\\mu,\\sigma^2)$ where $\\mu$ is unknown but $\\sigma^2$ is known. \n\\ben\n\\it Find a critical region for testing $H_0:\\mu=\\mu_0$ against $H_1:\\mu > \\mu_0$\n\\it Find the power function $\\gamma(\\mu)$ of the test.\n\\it Show that the power function is strictly increasing for $\\mu>\\mu_0$. %Comment on your answer.\n\\een\n\\end{example}\n\n\\begin{solution}\n\\ben\n\\it % size\nLet $X_1,X_2,\\ldots,X_n$ be a random sample from the distribution of $X$\n\\bit\n\\it The sample mean $\\bar{X}$ is an unbiased estimator for $\\mu$.\n\\it If $\\bar{X}-\\mu_0$ is large, we would be inclined to reject $H_0$ in favour of $H_1$.\n\\eit\nAn appropriate test statistic $Z:\\R^n\\to\\R$ is given by\n\\[\nZ(X_1,X_2,\\ldots,X_n) = \\sqrt{n}\\left(\\frac{\\bar{X}-\\mu_0}{\\sigma}\\right)\n\\]\nBecause the $X_i$ are normally distributed, under $H_0$ we have that $Z\\sim N(0,1)$. 
\n\n\\bigskip\nThe critical region for the test is\n$C=\\{\\mathbf{x}:Z(\\mathbf{x})> z_c\\}$, i.e.\n\\[\nC=\\left\\{\\mathbf{x}: \\sqrt{n}\\left(\\frac{\\bar{x}-\\mu_0}{\\sigma}\\right) > z_c\\right\\}\n\\]\nwhere $z_c$ is the solution of $\\prob_{\\mu_0}(Z > z)=\\alpha$, or equivalently the solution of $\\Phi(z)=1-\\alpha$ where $\\Phi$ is the CDF of the standard normal distribution.\n\n\\it % power\nFor $\\mu > \\mu_0$, the power function is\n\\begin{align*}\n\\gamma(\\mu) \n\t= \\prob_{\\mu}\\left[\\sqrt{n}\\left(\\frac{\\bar{X}-\\mu_0}{\\sigma}\\right) > z_c\\right] \n\t& = \\prob_{\\mu}\\left[\\bar{X} > \\frac{\\sigma}{\\sqrt{n}}z_c+\\mu_0\\right] \\\\\n\t& = \\prob_{\\mu}\\left[\\bar{X}-\\mu > \\frac{\\sigma}{\\sqrt{n}}z_c + (\\mu_0-\\mu)\\right] \\\\\n\t& = \\prob_{\\mu}\\left[\\sqrt{n}\\left(\\frac{\\bar{X}-\\mu}{\\sigma}\\right) > z_c + \\frac{\\sqrt{n}(\\mu_0-\\mu)}{\\sigma}\\right] \\\\\n\t& = \\prob\\left[Z > z_c + \\frac{\\sqrt{n}(\\mu_0-\\mu)}{\\sigma}\\right] \\text{\\quad where $Z\\sim N(0,1)$,}\\\\\n\t& = 1 - \\Phi\\left(z_c + \\frac{\\sqrt{n}(\\mu_0-\\mu)}{\\sigma}\\right)\n\\end{align*}\t\nwhere $\\Phi$ is the CDF of the standard normal distribution $N(0,1)$.\n\n\\it % power\n$\\gamma(\\mu)$ is an increasing function for $\\mu>\\mu_0$, because\n\\[\n\\gamma'(\\mu) = \\frac{\\sqrt{n}}{\\sigma} \\phi\\left(z_c + \\frac{\\sqrt{n}(\\mu_0-\\mu)}{\\sigma}\\right) > 0,\n\\]\nwhere $\\phi$ is the PDF of the standard normal distribution $N(0,1)$. This shows that the power to detect the simple alternative $H_1:\\mu=\\mu_1$ increases as $\\mu_1$ increases away from $\\mu_0$.\n\\een\n\\end{solution}\n\n%-----------------------------\n\\begin{example}[$t$-test]\nLet $X\\sim N(\\mu,\\sigma^2)$ where $\\mu$ and $\\sigma^2$ are both unknown. Find a critical region of size $\\alpha$ for testing $H_0:\\mu=\\mu_0$ against $H_1:\\mu < \\mu_0$.\n\\end{example}\n\n\\begin{solution}\nLet $X_1,X_2,\\ldots,X_n$ be a random sample from the distribution of $X$, and let $\\bar{X}$ and $S^2$ denote the sample mean and sample variance respectively.\n\n\\bigskip\nThe appropriate test statistic $T:\\R^n\\to\\R$ is given by\n\\[\nT(X_1,X_2,\\ldots,X_n) = \\sqrt{n}\\left(\\frac{\\bar{X}-\\mu_0}{S}\\right)\n\\]\n\nUnder the null hypothesis $H_0:\\mu=\\mu_0$, this has \\emph{Student's $t$-distribution} with $n-1$ degrees of freedom.\n\nThe critical region for our test is\n\\[\nC=\\left\\{\\mathbf{x}:\\sqrt{n}\\left(\\frac{\\bar{x}-\\mu_0}{s}\\right) \\leq t_c\\right\\}\n\\]\nwhere $t_c$ is the $\\alpha$th quantile of Student's $t$ distribution with $n-1$ degrees of freedom.\n\\end{solution}
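\n\n\\bigskip\nFor illustration (our numbers, not part of the original example): with $n=16$ and $\\alpha=0.05$, the $\\alpha$th quantile of Student's $t$ distribution with $15$ degrees of freedom is $t_c \\approx -1.753$, so $H_0:\\mu=\\mu_0$ is rejected in favour of $H_1:\\mu<\\mu_0$ whenever $\\sqrt{16}\\left(\\frac{\\bar{x}-\\mu_0}{s}\\right) \\leq -1.753$, i.e. whenever $\\bar{x} \\leq \\mu_0 - 0.438\\,s$.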
\n\n\n%-----------------------------\n\\begin{example}[Normal approximation]\nLet $X_1,X_2,\\ldots,X_n$ be a random sample from the $\\text{Bernoulli}(\\theta)$ distribution, where $\\theta$ is unknown. We reject the null hypothesis $H_0:\\theta=1/2$ in favour of $H_1:\\theta>1/2$ if the observed number of successes exceeds some constant value $c>0$. Using a normal approximation, find values of $n$ and $c$ for which the size of the test is $0.1$ and the power at $\\theta=2/3$ is $0.95$. \n\\end{example}\n\n\\begin{solution}\nLet $X\\sim\\text{Binomial}(n,\\theta)$ be the number of successes. \n\\bit\n\\it By the central limit theorem, $X\\sim N\\big[n\\theta,n\\theta(1-\\theta)\\big]$ approx. provided $n$ is sufficiently large.\n\\eit\n\n(1) If $H_0:\\theta=1/2$ is correct, then $X\\sim N(n/2,n/4)$ approx, so under $H_0$ we have\n\\[\nZ \t= \\frac{X-\\expe(X)}{\\sqrt{\\var(X)}} \n\t= \\frac{X-n/2}{\\sqrt{n/4}} \n\t= \\frac{2X-n}{\\sqrt{n}} \\sim N(0,1) \\text{\\quad approx.}\n\\]\n\nFor a test of size $\\prob_{1/2}(X>c) = 0.1$ we need that \n\\[\n\\prob\\left(Z > \\frac{2c-n}{\\sqrt{n}}\\right) = 0.1,\n\\quad\\text{and hence}\\quad\n\\frac{2c-n}{\\sqrt{n}} = 1.282 \\text{\\quad (from tables).}\n\\]\n\n(2) If $H_1:\\theta=2/3$ is correct, then $X\\sim N(2n/3,2n/9)$ approx, so under $H_1$ we have\n\\[\nZ \t= \\frac{X-2n/3}{\\sqrt{2n/9}} \n\t= \\frac{3X-2n}{\\sqrt{2n}} \\sim N(0,1) \\text{ approx.}\n\\]\nFor a test with power $\\prob_{2/3}(X>c) = 0.95$  we need that\n\\[\n\\prob\\left(Z > \\frac{3c-2n}{\\sqrt{2n}}\\right) = 0.95,\n\\quad\\text{and hence}\\quad\n\\frac{3c-2n}{\\sqrt{2n}} = -1.645 \\text{\\quad (from tables).}\n\\]\n\nThus we have two equations in two unknowns:\n\\[\n2c = n + 1.282\\sqrt{n} \\text{\\quad and\\quad} 3c = 2n - 1.645\\sqrt{2n}.\n\\]\n\n\\bit\n\\it Solving for $n$ and $c$, we find that $n=72.24$ and $c=41.56$.\n\\it Thus a sample of size $72$ and a rejection threshold of $42$ would approximately meet the stated requirements.\n\\eit\n\\end{solution}\n\n%-----------------------------\n\\begin{exercise}\n\\begin{questions}\n\\question\nLet $X\\sim\\text{Binomial}(10,\\theta)$ where $\\theta$ is either equal to $0.25$ or $0.5$. The simple null hypothesis $H_0:\\theta=0.5$ is rejected in favour of the simple alternative $H_1:\\theta=0.25$ if the observed value of $X$ is at most equal to $3$. Find the size and power of the test.\n\\begin{answer}\n\\bit\n\\it The critical region is $C = \\{X\\leq 3\\}$. Under the null hypothesis, $X\\sim\\text{Binomial}(10,0.5)$. The significance level of the test is therefore\n\\[\n\\alpha = \\prob_{0.5}(X\\leq 3) = \\prob\\big[X\\leq 3\\text{ when } X\\sim\\text{Binomial}(10,0.5)\\big] = 0.1719 \\quad\\text{(from tables).}\n\\]\n\\it Under the alternative hypothesis $H_1:\\theta=0.25$ we have $X\\sim\\text{Binomial}(10,0.25)$. The power of the test to detect $H_1$ is therefore\n\\[\n\\gamma(0.25) = \\prob_{0.25}(X\\leq 3) = \\prob\\big[X\\leq 3\\text{ when } X\\sim\\text{Binomial}(10,0.25)\\big] = 0.7759 \\quad\\text{(from tables).}\n\\]\n\\eit\n\\end{answer}\n\n\\question\nAdult males diagnosed with lung cancer have a mortality rate of $70\\%$ within one year of the initial diagnosis. A research laboratory claims that a new treatment reduces this rate. 
Based on a random sample of $20$ patients, find a critical region of size $\\alpha=0.15$ to test the claim, and compute the power of the test to detect a $20\\%$ reduction in the mortality rate.\n\\begin{answer}\nLet $\\theta$ be the probability that a patient dies within one year of the initial diagnosis.\n\\bit\n\\it We wish to test the null hypothesis $H_0:\\theta=0.7$ against the alternative $H_1:\\theta<0.7$.\n\\eit\nLet $X_1,X_2,\\ldots,X_{20}$ be a random sample from the $\\text{Bernoulli}(\\theta)$ distribution\n\\bit\n\\it In this context, `success' corresponds to death within one year of the initial diagnosis!\n\\eit\nLet $S=\\sum_{i=1}^{20} X_i$ be the total number of deaths within one year of the initial diagnosis.\n\\bit\n\\it Under the null hypothesis, $S\\sim\\text{Binomial}(20,0.7)$.\n\\eit\nA critical region for the test is $C=\\{\\mathbf{x}:S(\\mathbf{x})\\leq k\\}$, where $k$ is chosen so that \n\\[\n\\prob_{0.7}(S\\leq k) = 0.15.\n\\]\nTabulated values of the $\\text{Binomial}(20,0.7)$ distribution yield\n\\[\n\\prob_{0.7}(S\\leq 11) = 0.1133 \\text{\\quad and\\quad}\\prob_{0.7}(S\\leq 12) = 0.2277.\n\\]\n\\bit\n\\it It is not possible to find a critical region of size $\\alpha=0.15$ exactly.\n\\eit\nThe conservative approach would be to take $k=11$ and $\\alpha=0.1133$.\n\\bit\n\\it A $20\\%$ reduction in the mortality rate corresponds to $H_1:\\theta=0.5$.\n\\eit\nFor the test of size $\\alpha=0.1133$, its power to detect $H_1:\\theta=0.5$ is\n\\begin{align*}\n\\prob_{0.5}(S\\leq 11) \n\t& = \\prob(S\\leq 11) \\text{ where } S\\sim\\text{Binomial}(20,0.5) \\\\\n\t& = 0.7483 \\text{\\quad (from tables)}.\n\\end{align*}\nFor the test of size $\\alpha=0.2277$, its power to detect $H_1:\\theta=0.5$ is\n\\begin{align*}\n\\prob_{0.5}(S\\leq 12) \n\t& = \\prob(S\\leq 12) \\text{ where } S\\sim\\text{Binomial}(20,0.5) \\\\\n\t& = 0.8684 \\text{\\quad (from tables)}.\n\\end{align*}\n\\end{answer}\n\n\\question\nLet $X_1,X_2,\\ldots,X_5$ be a random sample from the $\\text{Bernoulli}(\\theta)$ distribution. We wish to test the null hypothesis $H_0:\\theta\\leq 0.5$ against the alternative $H_1:\\theta> 0.5$. $H_0$ is rejected by test $A$ only if all five trials result in `success', and rejected by test $B$ if at least three trials result in `success'. Find the size and power function of each test.\n\\begin{answer}\nThe sample space is $D=\\{0,1\\}^5$, the set of binary vectors of length $5$. \\\\\nLet $Y=\\sum_{i=1}^5 X_i$ be the number of successes: $Y\\sim\\text{Binomial}(5,\\theta)$.\n\\ben\n\\it For test $A$, \n\\begin{align*}\n\\alpha \t\t\t& = \\max_{0\\leq\\theta\\leq 0.5}\\prob_{\\theta}(Y=5) = 0.5^5 = 0.0312. \\\\[2ex]\n\\gamma(\\theta)\t& = \\prob_{\\theta}(Y=5) = \\theta^5.\n\\end{align*}\n\\it For test $B$, \n\\begin{align*}\n\\alpha \t\t\t& = \\max_{0\\leq\\theta\\leq 0.5}\\prob_{\\theta}(Y\\geq 3) \n\t\t\t\t= 10(0.5)^3(0.5)^2 + 5(0.5)^4(0.5) + (0.5)^5 = 16(0.5)^5 = 0.5, \\\\\n\\gamma(\\theta)\t& = \\prob(Y\\geq 3;\\theta) \n\t\t\t\t= 10\\theta^3(1-\\theta)^2 + 5\\theta^4(1-\\theta) + \\theta^5.\n\\end{align*}\nThus Test B is more powerful than Test A, but its size is greater than that of Test A.\n\\een\n\\end{answer}\n\n\\question \nLet $X$ be a random sample of size $1$ from the $\\text{Exponential}(\\theta)$ distribution, where $\\theta>0$ is a rate parameter. 
The null hypothesis $H_0:\theta=1/2$ is rejected in favour of the simple alternative $H_1:\theta = 1$ if the observed value $x$ is such that\n\[\n\frac{f(x;1/2)}{f(x;1)} \leq \frac{3}{4},\n\]\nwhere $f(x;\theta)$ is the PDF of $X$. (This is defined to be $f(x;\theta) = \theta e^{-\theta x}$ for $x>0$, and zero otherwise.)\n\ben\n\it Show that the size of the test is $\alpha=1/3$.\n\it Find the power of the test at $\theta=1$.\n\een\n\begin{answer}\nLet $T$ denote the test statistic:\n\[\nT(X) = \frac{f(X;1/2)}{f(X;1)} = \frac{\frac{1}{2}e^{-X/2}}{e^{-X}} = \frac{1}{2}e^{X/2}\n\]\n\ben\n\it The size of the test is\n\begin{align*}\n\alpha = \prob_{H_0}(T\leq 3/4)\n\t& = \prob_{H_0}(e^{X/2} \leq 3/2) \\\n\t& = \prob_{H_0}\big[X\leq 2\log(3/2)\big] \\\n\t& = 1 - e^{-\log(3/2)} \\\n\t& = 1 - 2/3 = 1/3.\n\end{align*}\n\it The power of the test when $\theta=1$ is\n\begin{align*}\n\gamma(1) = \prob_{H_1}(T\leq 3/4)\n\t& = \prob_{H_1}\big[X\leq 2\log(3/2)\big] \\\n\t& = 1 - e^{-2\log(3/2)} \\\n\t& = 1 - (2/3)^2 = 5/9.\n\end{align*}\n\een\n\end{answer}\n\end{questions}\n\end{exercise}\n\n", "meta": {"hexsha": "988b724526e3283a88bb26414f622d1b765143e4", "size": 12887, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "L5/MA2500/09B_examples.tex", "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "L5/MA2500/09B_examples.tex", "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L5/MA2500/09B_examples.tex", "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "avg_line_length": 40.5251572327, "max_line_length": 409, "alphanum_fraction": 0.6418095755, "num_tokens": 4934, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6959583376458152, "lm_q2_score": 0.7931059609645724, "lm_q1q2_score": 0.5519687061698906}}
{"text": "\\section[The Riemann-Stieltjes Integral]{\\hyperlink{toc}{The Riemann-Stieltjes Integral}}\n\n\\subsection{Definition of the Integral}\n\\begin{definition}{Partition}{6.1a}\n    A \\textbf{partition} of $[a, b] \\subset \\RR$ is a set $\\set{x_0, x_1, \\ldots, x_n}$ (for some $n \\in \\NN$) such that:\n    \\begin{align*}\n        a = x_0 \\leq x_1 \\leq x_2 \\leq \\ldots \\leq x_{n-1} \\leq x_n = b\n    \\end{align*}\n    We can then write:\n    \\begin{align*}\n        \\Delta x_i = x_i - x_{i-1} \n    \\end{align*}\n\\end{definition}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\begin{tikzpicture}[scale=2]\n        \\draw[very thick, latex-latex] (-2, 0) -- (2, 0);\n        \\node[] at (-1.5, 0) {$[$};\n        \\node[below] at (-1.5, -0.15) {$x_0$};\n        \\node[below] at (-1.5, -0.45) {$a$};\n        \\node[] at (1.5, 0) {$]$};\n        \\node[below] at (1.5, -0.15) {$x_4$};\n        \\node[below] at (1.5, -0.4) {$b$};\n        \\draw[] (-1, 0) -- (-1, -0.15);\n        \\node[below] at (-1, -0.15) {$x_1$};\n        \\draw[] (-0.2, 0) -- (-0.2, -0.15);\n        \\node[below] at (-0.2, -0.15) {$x_2$};\n        \\draw[] (0.4, 0) -- (0.4, -0.15);\n        \\node[below] at (0.4, -0.15) {$x_3$};\n    \\end{tikzpicture}\n    \\caption{Visualization of a partition $\\set{x_0, x_1, x_2, x_3, x_4}$ of $[a, b]$. Note that the points in the partitions need not be equally spaced.}\n    \\label{fig27}\n\\end{figure}\n\n\\setcounter{rudin}{0}\n\\begin{definition}{Upper and Lower Sums}{6.1b}\n    Given $f: [a, b] \\mapsto \\RR$ and a partition $P$ of $[a, b]$ let:\n    \\begin{align*}\n        M_i &= \\sup\\set{f(x): x_{i-1} \\leq x \\leq x_i}\n        \\\\ m_i &= \\inf\\set{f(x): x_{i-1} \\leq x \\leq x_i}\n    \\end{align*}\n    Then, we can define the \\textbf{upper} and \\textbf{lower sums}:\n    \\begin{align*}\n        U(P, f) &= \\sum_{i=1}^n M_i \\Delta x_i\n        \\\\ L(P, f) &= \\sum_{i=1}^n m_i \\Delta x_i\n    \\end{align*}\n\\end{definition}\n\n\\begin{figure}[htbp]\n    \\centering\n    \\begin{tikzpicture}[scale=2]\n        \\draw[very thick, latex-latex] (-2, 0) -- (2, 0);\n        \\node[] at (-1.5, 0) {$[$};\n        \\node[below] at (-1.5, -0.15) {$x_0$};\n        \\node[below] at (-1.5, -0.45) {$a$};\n        \\node[] at (1.5, 0) {$]$};\n        \\node[below] at (1.5, -0.15) {$x_4$};\n        \\node[below] at (1.5, -0.4) {$b$};\n        \\draw[] (-1, 0) -- (-1, -0.15);\n        \\node[below] at (-1, -0.15) {$x_1$};\n        \\draw[] (-0.2, 0) -- (-0.2, -0.15);\n        \\node[below] at (-0.2, -0.15) {$x_2$};\n        \\draw[] (0.4, 0) -- (0.4, -0.15);\n        \\node[below] at (0.4, -0.15) {$x_3$};\n        \\draw[] (-1, 0) sin (-1.5, 1);\n        \\draw[] (-1, 0) sin (-0.5, -1);\n        \\draw[] (0, 0) sin (-0.5, -1);\n        \\draw[] (0, 0) sin (0.5, 1);\n        \\draw[] (1, 0) sin (0.5, 1);\n        \\draw[] (1, 0) sin (1.5, -1);\n        \\node[circle, fill = red, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=red]:$M_1$}] at (-1.5, 1) {};\n        \\node[circle, fill = purple, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=blue]:$m_1$}] at (-1, 0) {};\n        \\node[right, text = red] at (-0.8, 0.06) {$/M_2$};\n        \\node[circle, fill = blue, minimum size = 0.2cm, inner sep = 0pt, label={[above, text=blue]:$m_2$}] at (-0.5, -1) {};\n        \\node[circle, fill = blue, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=blue]:$m_3$}] at (-0.2, -0.6) {};\n        \\node[circle, fill = red, minimum size = 0.2cm, inner sep 
= 0pt, label={[left, text=red]:$M_3$}] at (0.4, 0.94) {};\n        \node[circle, fill = red, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=red]:$M_4$}] at (0.5, 1) {};\n        \node[circle, fill = blue, minimum size = 0.2cm, inner sep = 0pt, label={[above, text=blue]:$m_4$}] at (1.5, -1) {};\n        \end{tikzpicture}\n    \caption{Example of a function $f$, a partition $P$ of $[a, b]$, and the $M_i, m_i$s for this choice of partition.}\n    \label{fig28}\n\end{figure}\n\n\noindent By construction, it should be evident that $L(P, f) \leq U(P, f)$ for all $P, f$.\n\nA natural question that arises from the form of the above expressions is whether these are Riemann sums or not. Recall from first year calculus that we would choose the left endpoint, right endpoint, or some other arbitrary point in each subinterval. Here, we instead use what is in a sense a ``special case'': the supremum and infimum of $f$ over each subinterval. We will see that this choice is much easier to use in proofs due to its monotonicity properties. Namely, if we have a partition and add another point, then $U(P, f)$ can only decrease, and $L(P, f)$ can only increase (we will see this in a theorem soon)!\n\n\n\setcounter{rudin}{0}\n\begin{definition}{Upper/Lower Integrals and Riemann Integrability}{6.1c}\n    We define the \textbf{upper Riemann integral} to be:\n    \begin{align*}\n        \uint{a}{b} f dx = \inf_P U(P, f),\n    \end{align*}\n    and the \textbf{lower Riemann integral} to be:\n    \begin{align*}\n        \lint{a}{b} f dx= \sup_P L(P, f).\n    \end{align*}\n    Here, the infimum/supremum is taken over all partitions $P$. We say that $f$ is \textbf{Riemann integrable} on $[a, b]$, and write $f \in \R[a, b]$ if:\n    \begin{align*}\n        \uint{a}{b} f dx= \lint{a}{b} f dx\n    \end{align*}\n    which we can write as:\n    \begin{align*}\n        \int_{a}^{b} f dx \text{ or } \int_{a}^{b} f(x) dx\n    \end{align*}\n\end{definition}\n\noindent Note that the choice of variable in the above definition is totally arbitrary. \n\nAlso, note that while $f$ is not required to be continuous in the above definition, it is required to be bounded; else, $M_i$ and $m_i$ may not exist. Since $f$ is bounded, $U(P, f)$ and $L(P, f)$ are bounded for all $P, f$, and hence we have sets of real numbers whose supremum/infimum exist by the LUB/GLB property of the reals. Since the upper/lower sums lie in a bounded interval, there is no question about whether the lower/upper integrals exist. The question becomes whether they are equal or not. Before getting into further discussion on this topic, we discuss a bound:\n\n\setcounter{rudin}{0}\n\begin{theorem}{ML Bounds}{6.1}\n    Let $m = \inf\set{f(x): a \leq x \leq b}$ and $M = \sup\set{f(x): a \leq x \leq b}$ (which exist by the boundedness of $f$). 
Then,\n    \begin{align*}\n        m(b - a) \leq L(P, f) \leq U(P, f) \leq M(b - a)\n    \end{align*}\n    For any choice of partition $P$.\n\end{theorem}\n\begin{nproof}\n    For any $i$, we have that:\n    \begin{align*}\n        m \leq m_i \leq M_i \leq M\n    \end{align*}\n    Therefore:\n    \begin{align*}\n        \sum_{i=1}^n m \Delta x_i \leq \sum_{i=1}^n m_i \Delta x_i \leq \sum_{i=1}^n M_i \Delta x_i \leq \sum_{i=1}^n M \Delta x_i\n    \end{align*}\n    So we conclude that:\n    \begin{align*}\n        m(b - a) \leq L(P, f) \leq U(P, f) \leq M(b - a)\n    \end{align*} \qed\n\end{nproof}\n\n\noindent Now that we have established the Riemann integral, a natural question is how we can extend this notion. In order to do so, we will use a monotonically increasing function $\alpha: [a, b] \mapsto \RR$ (that is, $\alpha(x) \leq \alpha(y)$ for all $x \leq y$). Note that $\alpha$ need not be continuous. Indeed, compared to the Riemann integral, where $\alpha(x) = x$ is continuous, in this general setting $\alpha$ is allowed to have jumps. This allows for certain benefits, as we will soon discuss. However, we note that $\alpha$ can have at most countably many jumps.\n\n\begin{ntheorem}{ 4.30}{}\n    Let $\alpha: [a, b] \mapsto \RR$ be monotonic. Then, it can have at most countably many discontinuities.\n\end{ntheorem}\n\begin{nproof}\n    Suppose (WLOG) that $\alpha$ is increasing. To each discontinuity $x$ of $\alpha$, assign a rational number $r(x)$ such that:\n    \begin{align*}\n        \lim_{t \rightarrow x^-} \alpha(t) < r(x) < \lim_{t \rightarrow x^+} \alpha(t)\n    \end{align*}\n    Since $x_1 < x_2$ implies $\lim_{t \rightarrow x_1^+} \alpha(t) \leq \lim_{t \rightarrow x_2^-} \alpha(t)$, we have that $r(x_1) \neq r(x_2)$ if $x_1 \neq x_2$. We have therefore established an injective function $r$ from the set of discontinuities of $\alpha$ to the rationals. As the rationals are countable, the set of discontinuities of $\alpha$ is also countable. \qed\n\end{nproof}\n\n\noindent With this established, we now define the generalized Riemann-Stieltjes integral.\n\n\begin{definition}{Riemann-Stieltjes Integral}{6.2}\n    Let $\alpha: [a, b] \mapsto \RR$ be increasing, and given a partition $P$ of $[a, b]$, define:\n    \begin{align*}\n        \Delta \alpha_i = \alpha(x_i) - \alpha(x_{i-1}) (\geq 0)\n    \end{align*}\n    For bounded $f: [a, b] \mapsto \RR$, let:\n    \begin{align*}\n        U(P, f, \alpha) &= \sum_{i=1}^n M_i \Delta \alpha_i\n        \\ L(P, f, \alpha) &= \sum_{i=1}^n m_i \Delta \alpha_i\n    \end{align*}\n    We then take the infimum/supremum over partitions $P$ to get:\n    \begin{align*}\n        \uint{a}{b} f d\alpha &= \inf_P U(P, f, \alpha)\n        \\ \lint{a}{b}f d\alpha &= \sup_P L(P, f, \alpha)\n    \end{align*}\n    If equal, we write their value as:\n    \begin{align*}\n        \int_a^b f d\alpha \text{ or } \int_a^b f(x)d\alpha(x)\n    \end{align*}\n    and we write that $f \in \R_\alpha[a, b]$. In the case where $\alpha(x) = x$, we recover the Riemann integral.\n\end{definition}\n\n\noindent Why is this definition useful? What does it accomplish for us that the original Riemann integral does not? We consider a physically motivating example. Suppose we have a thin wire with varying mass density $\rho(x)$. If we wanted to calculate the mass of the wire, we would integrate the density $\rho(x)$ over the length of the wire. 
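In symbols (a quick illustration; here the wire is assumed to occupy $[a, b]$), the total mass would be the ordinary Riemann integral:\n\begin{align*}\n    m = \int_a^b \rho(x) dx\n\end{align*}\n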
Now, suppose our wire consists of steel of continuously varying mass density, as well as beads/point masses placed on certain locations of the wire. The Riemann integral cannot handle these point masses, but the Riemann-Stieltjes integral can deal with this case if we use an $\alpha$ with discontinuities in it. Hence, the Riemann-Stieltjes integral allows us to handle cases where we both have continuous and discrete masses to integrate over. It acts as a bridge between Riemann and Lebesgue integration (the latter of which will be the subject of a later course in measure theory).\n\n
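Concretely (a sketch, using the unit step function $I$ of Definition 6.14 below, and anticipating Theorems 6.12(e), 6.16, and 6.17): if the wire occupies $[a, b]$, has continuous density $\rho \geq 0$, and carries beads of masses $m_j$ at points $s_j \in (a, b)$, then taking\n\begin{align*}\n    \alpha(x) = \int_a^x \rho(t)dt + \sum_j m_j I(x - s_j) \text{\quad gives\quad} \int_a^b 1 d\alpha = \int_a^b \rho(x)dx + \sum_j m_j,\n\end{align*}\nwhich is exactly the total (continuous plus discrete) mass.\n\n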
We now will answer the question: ``for what choices of $f, \alpha$ is $f$ Riemann-Stieltjes integrable?''\n\n\subsection{Criterion for Integrability}\n\n\begin{definition}{Refinements and Common Refinement}{6.3}\n    $P^*$ is a \textbf{refinement} of $P$ if $P \subset P^*$ and $P, P^*$ are partitions. The common refinement of $P_1, P_2$ is $P^* = P_1 \cup P_2$.\n\end{definition}\n\n\begin{theorem}{}{6.4}\n    If $P^*$ is a refinement of $P$, then:\n    \begin{align*}\n        L(P, f, \alpha) \leq L(P^*, f, \alpha) \leq U(P^*, f, \alpha) \leq U(P, f, \alpha)\n    \end{align*}\n\end{theorem}\n\n\noindent As a remark, when we take infimums/supremums over partitions $P$ to obtain the upper/lower Riemann-Stieltjes integrals, we are taking refinements.\n\nAlso, note that the above theorem does \textit{not} apply to (right-hand, left-hand, midpoint, arbitrary) Riemann sums, and is a consequence of the choice of upper/lower sums with supremums/infimums taken over the subintervals.\n\n\begin{nproof}\n    It suffices to consider $P^*$ with a single extra point $x_{i - 1} < x^* < x_i$, and then the general case follows by induction.\n\n    For the case where $\alpha(x) = x$, the refinement adds $(m^* - m_i)(x^* - x_{i-1}) \geq 0$. \n\n    For the general case, write $m^* = \inf\set{f(x): x \in [x_{i-1}, x^*]}$; since the infimum of $f$ over $[x^*, x_i]$ is at least $m_i$, we have that:\n    \begin{align*}\n        L(P^*, f, \alpha) - L(P, f, \alpha) &\geq \left(m^*(\alpha(x^*) - \alpha(x_{i-1})) + m_i(\alpha(x_i) - \alpha(x^*)) \right)- m_i(\alpha(x_i) - \alpha(x_{i-1}))\n        \\ &= \left(m^*(\alpha(x^*) - \alpha(x_{i-1})) + m_i(\alpha(x_i) - \alpha(x^*)) \right)\n        \\ &- m_i\left[(\alpha(x^*) - \alpha(x_{i-1})) + (\alpha(x_i) - \alpha(x^*))\right]\n        \\ &= (m^* - m_i)(\alpha(x^*) - \alpha(x_{i-1}))\n    \end{align*}\n    $\alpha$ is monotonically increasing, so $x^* \geq x_{i-1}$ implies that the second factor is nonnegative. Furthermore, $m^* \geq m_i$ as $\inf\set{f(x): x \in [x_{i-1}, x^*]} \geq \inf\set{f(x): x \in [x_{i-1}, x_i]}$. It follows that:\n    \begin{align*}\n        L(P^*, f, \alpha) - L(P, f, \alpha) \geq 0\n    \end{align*} \n    and the proof for $U(P^*, f, \alpha) - U(P, f, \alpha) \leq 0$ follows analogously. \qed\n\end{nproof}\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=1.5]\n        \draw[latex-latex, very thick] (-1.5, 0) -- (1.5, 0);\n        \draw[] (-1, 0) sin (-0.5, 1);\n        \draw[] (0, 0) sin (-0.5, 1);\n        \draw[] (0, 0) sin (0.5, -1);\n        \draw[] (1, 0) sin (0.5, -1);\n        \draw[] (-1, 0) -- (-1, -0.15);\n        \draw[] (1, 0) -- (1, -0.15);\n        \node[below] at (-1, -0.15) {$x_{i-1}$};\n        \node[below] at (1, -0.15) {$x_i$};\n        \draw[] (-0.25, 0) -- (-0.25, -0.15);\n        \node[below] at (-0.25, -0.12) {$x^*$};\n        %\node[circle, fill = red, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=red]:$M_{i\text{original}}/M_{i\text{new}}$}] at (-0.5, 1) {};\n        \node[circle, fill = blue, minimum size = 0.2cm, inner sep = 0pt, label={[above, text=blue]:$m_{i}$}] at (0.5, -1) {};\n        %\node[circle, fill = red, minimum size = 0.2cm, inner sep = 0pt, label={[right, text=red]:$M_{i+1\text{new}}$}] at (-0.25, 0.7) {};\n        \node[circle, fill = blue, minimum size = 0.2cm, inner sep = 0pt, label={[above, text=blue]:$m^*$}] at (-1, 0) {};\n    \end{tikzpicture}\n    \n    \caption{Visualization of the effect of adding an extra point $x^*$ to the partition $P$. We can see that this has the net effect of increasing $L(P, f, \alpha)$ as $m^* \geq m_i$.}\n    \label{fig29}\n\end{figure}\n\n\newpage \nNote that as a point of notation, if $f, \alpha$ are fixed, we can sometimes write $L(P), U(P)$ in place of $L(P, f, \alpha)$ and $U(P, f, \alpha)$. Where the context is clear, it is also common to write $\R(\alpha)$ in place of $\R_\alpha[a, b]$. \n\n\begin{theorem}{}{6.5}\n    $\lint{a}{b} f d\alpha \leq \uint{a}{b} f d\alpha$.\n\end{theorem}\n\begin{nproof}\n    For partitions $P_1, P_2$, let $P^* = P_1 \cup P_2$ be the common refinement. By Theorem \ref{thm:6.4}, we have that:\n    \begin{align*}\n        L(P_1) \leq L(P^*) \leq U(P^*) \leq U(P_2)\n    \end{align*}\n    And in particular, $L(P_1) \leq U(P_2)$. Therefore, for any fixed $P_2$, $U(P_2)$ is an upper bound on the set of all lower sums. As the supremum is the least upper bound, we have that:\n    \begin{align*}\n        \sup_{P_1} L(P_1) \leq U(P_2)\n    \end{align*}\n    Therefore, as $\sup_{P_1} L(P_1)$ is a lower bound on the set of all upper sums, and the infimum is the greatest lower bound, we have that:\n    \begin{align*}\n        \sup_{P_1} L(P_1) \leq \inf_{P_2} U(P_2)\n    \end{align*}\n    So therefore:\n    \begin{align*}\n        \lint{a}{b} f d\alpha \leq \uint{a}{b} f d\alpha\n    \end{align*}\n    as claimed. \qed\n\end{nproof}\n\n\begin{theorem}{\texorpdfstring{$\e$}{e}-Criterion for Integrability}{6.6}\n    $f \in \R_\alpha[a, b]$ if and only if for all $\e > 0$, there exists a partition $P_\e$ of $[a, b]$ such that $U(P_\e) - L(P_\e) < \e$. \n\end{theorem}\n\begin{nproof}\n    $\boxed{\implies}$ By hypothesis, $\sup_P L(P) = \int_a^b fd\alpha = \inf_P U(P)$. Let $\e > 0$. 
Then by the property of sup/inf, there exist $P_1, P_2$ such that:\n    \begin{align*}\n        \int_a^b f d\alpha - L(P_1) &< \frac{\e}{2}\n        \\ U(P_2) - \int_a^b f d\alpha &< \frac{\e}{2}\n    \end{align*}\n    Adding the first inequality to the second, we get:\n    \begin{align*}\n        U(P_2) - L(P_1) < \e\n    \end{align*}\n    Letting $P^* = P_1\cup P_2$, by Theorem \ref{thm:6.4} we have that:\n    \begin{align*}\n        U(P^*) - L(P^*) \leq U(P_2) - L(P_1) < \e\n    \end{align*}\n    which proves the claim.\n\n    $\boxed{\impliedby}$ Let $\e > 0$. Then by Theorem \ref{thm:6.5} we have that:\n    \begin{align*}\n        0 \leq \uint{a}{b} f d\alpha - \lint{a}{b} f d\alpha\n    \end{align*}\n    and furthermore:\n    \begin{align*}\n        0 \leq \uint{a}{b} f d\alpha - \lint{a}{b} f d\alpha \leq U(P_\e) - L(P_\e) < \e\n    \end{align*}\n    Where the second inequality is true for any choice of partition. $\e$ is arbitrary, so we conclude that:\n    \begin{align*}\n        \uint{a}{b} f d\alpha - \lint{a}{b} f d\alpha = 0\n    \end{align*}\n    And therefore $\uint{a}{b} f d\alpha = \lint{a}{b} f d\alpha$, and $f \in \R_\alpha[a, b]$. \qed\n\end{nproof}\n\n\noindent Theorem 6.7 is a little technical, so we shall skip it for now.\n\n\stepcounter{rudin}\n\begin{theorem}{Continuity implies integrability}{6.8}\n    If $f$ is continuous on $[a, b]$, then $f \in \R_\alpha[a, b]$. \n\end{theorem}\n\noindent Note in the above theorem that we make no assumptions on $\alpha$, only that (of course) it is monotonic.\n\begin{nproof}\n    By definition, $U(P) - L(P) = \sum_{i=1}^n (M_i - m_i)\Delta \alpha_i$. The idea will be to choose small intervals to make these differences small. Since $[a, b]$ is compact, $f$ is uniformly continuous by Theorem \ref{thm:4.19}. So, for all $\eta > 0$, there exists $\delta > 0$ such that $\abs{f(x) - f(t)} < \eta$ if $\abs{x - t} < \delta$. Thus, if $P$ is constructed such that $\Delta x_i < \delta$ then $M_i - m_i < \eta$. We then have that:\n    \begin{align*}\n        U(P) - L(P) \leq \sum_{i=1}^n \eta \Delta \alpha_i = \eta \sum_{i=1}^n (\alpha(x_{i}) - \alpha(x_{i-1})) = \eta\left(\alpha(b) - \alpha(a)\right)\n    \end{align*}\n    Where in the last equality we use the fact that we have a telescoping sum. Given $\e > 0$, we choose $\eta < \frac{\e}{\alpha(b) - \alpha(a)}$ (if $\alpha(b) = \alpha(a)$, the bound above is already $0$). With this choice of partition with $\Delta x_i < \delta = \delta(\eta)$, we have that:\n    \begin{align*}\n        U(P) - L(P) < \e\n    \end{align*}\n    and we conclude that $f \in \R_\alpha[a, b]$ by Theorem \ref{thm:6.6}. \qed\n\end{nproof}\n\noindent A natural question is ``what $f$s are Riemann integrable in general?'' The answer turns out to be ``if $f$ is continuous almost everywhere''. This sounds handwavy, but has a precise definition; although it is not covered in this course, one can refer to Rudin 11.33(b) for details. \n\n\begin{theorem}{}{6.9}\n    If $f$ is monotone on $[a, b]$ and $\alpha$ is continuous on $[a, b]$, then $f \in \R_\alpha[a, b]$. \n\end{theorem}\n\noindent In the proof of Theorem \ref{thm:6.8}, we used the continuity of $f$ to bound the maximum/minimum on each subinterval. Here, $f$ is no longer continuous, so we cannot control the maximum/minimum. Instead, we will use the continuity of $\alpha$ to control the size of the $\Delta \alpha$s. 
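Before giving the proof, a quick illustration (with illustrative choices; here $\alpha(x) = x$, which is continuous): the monotone step function $f(x) = \lfloor x \rfloor$ on $[0, 3]$ is discontinuous at each integer, yet the theorem guarantees its integrability, and indeed:\n\begin{align*}\n    \int_0^3 \lfloor x \rfloor dx = 0 + 1 + 2 = 3\n\end{align*}\n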
\n\begin{nproof}\n    Given $n \in \NN$, choose $P$ such that $\Delta \alpha_i = \frac{\alpha(b) - \alpha(a)}{n}$ for all $i \in \set{1, \ldots, n}$. Note that such a choice is possible by the continuity of $\alpha$ and the IVT (Theorem \ref{thm:4.23}). We then have that:\n    \begin{align*}\n        U(P) - L(P) = \sum_{i=1}^n (M_i - m_i)\Delta \alpha_i = \frac{\alpha(b) - \alpha(a)}{n}\sum_{i=1}^n (M_i - m_i)\n    \end{align*}\n    Suppose (WLOG) that $f$ is an increasing function. Then, $M_i = f(x_i)$ and $m_i = f(x_{i-1})$ due to the monotone increasing property. Hence:\n    \begin{align*}\n        U(P) - L(P) = \frac{\alpha(b) - \alpha(a)}{n}\sum_{i=1}^n \left[f(x_i) - f(x_{i-1})\right] = \frac{\alpha(b) - \alpha(a)}{n}\left[f(b) - f(a)\right]\n    \end{align*}\n    Where in the last equality we use the fact that the sum telescopes. If $\alpha(b) = \alpha(a)$, then $U(P) - L(P) = 0$ and the claim immediately follows. If $\alpha(b) > \alpha(a)$, then the claim follows by choosing $n > \frac{1}{\e}\left[\alpha(b) - \alpha(a)\right]\left[f(b) - f(a)\right]$ (that is, selecting a partition $P$ with sufficiently large $n$), from which it follows that:\n    \begin{align*}\n        U(P) - L(P) < \e\n    \end{align*}\n    and hence $f \in \R_\alpha[a, b]$ by Theorem \ref{thm:6.6}. \qed\n\end{nproof}\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[-latex, very thick] (0, 0) -- (2, 0);\n        \draw[-latex, very thick] (0, 0) -- (0, 2);\n        \foreach \i in {0.5,0.7,...,1.7}{ \n            \draw[] (0, \i) -- (-0.15, \i);\n        }\n        \filldraw[] (0.5, 0.5) circle (1pt);\n        \filldraw[] (1.7, 1.7) circle (1pt);\n        \draw[] (0.5, 0) -- (0.5, -0.15);\n        \draw[] (1.7, 0) -- (1.7, -0.15);\n        \node[below] at (0.5, -0.16) {$a$};\n        \node[below] at (1.7, -0.13) {$b$};\n        \node[left] at (-0.15, 0.5) {$\alpha(a)$};\n        \node[left] at (-0.15, 1.7) {$\alpha(b)$};\n        \draw[dashed] (0, 0.5) -- (0.5, 0.5);\n        \draw[dashed] (0.5, 0) -- (0.5, 0.5);\n        \draw[dashed] (0, 0.7) -- (0.86, 0.7);\n        \draw[dashed] (0.86, 0) -- (0.86, 0.7);\n        \draw[dashed] (0, 0.9) -- (0.95, 0.9);\n        \draw[dashed] (0.95, 0) -- (0.95, 0.9);\n        \draw[dashed] (0, 1.1) -- (1.15, 1.1);\n        \draw[dashed] (1.15, 0) -- (1.15, 1.1);\n        \draw[dashed] (0, 1.3) -- (1.3, 1.3);\n        \draw[dashed] (1.3, 0) -- (1.3, 1.3);\n        \draw[dashed] (0, 1.5) -- (1.36, 1.5);\n        \draw[dashed] (1.36, 0) -- (1.36, 1.5);\n        \draw[dashed] (0, 1.7) -- (1.7, 1.7);\n        \draw[dashed] (1.7, 0) -- (1.7, 1.7);\n        \draw [] (0.5 , 0.5) to [ curve through ={(0.8, 0.6)..(1, 1)..(1.25, 1.2)..(1.4, 1.6)}] (1.7, 1.7);\n    \end{tikzpicture}\n    \n    \caption{Visualization of the idea of the proof of Theorem \ref{thm:6.9}. We chop up $[\alpha(a), \alpha(b)]$ into equally sized subintervals.}\n    \label{fig30}\n\end{figure}\n\n\begin{theorem}{}{6.10}\n    Suppose $f: [a, b] \mapsto \RR$ is bounded and has finitely many discontinuities. Suppose $\alpha$ is continuous at every point where $f$ is not. Then, $f \in \R_\alpha[a, b]$. \n\end{theorem}\n\n\begin{nproof}\n    As has become standard, we will be applying Theorem \ref{thm:6.6}. 
We have that:\n    \begin{align*}\n        U(P) - L(P) = \sum_{i=1}^n (M_i - m_i)\Delta \alpha_i\n    \end{align*}\n    Let $\e > 0$, and let $E = \set{e_1, \ldots e_k}$ be the set of points where $f$ is not continuous. $\alpha$ is continuous at each $e_j$ by hypothesis. Therefore, there exist disjoint intervals $(u_j, v_j)$ that cover the points of $E$, such that $u_j < e_j < v_j$ and $\sum_{j=1}^k \left[\alpha(v_j) - \alpha(u_j)\right] < \e$ (by the continuity of $\alpha$ at each $e_j$, we can make each difference less than $\e/k$). Let $K = [a, b] \cap \left(\bigcup_{j=1}^k (u_j, v_j)\right)^c$. $K$ is a compact set as it is a finite union of closed, bounded intervals. $f$ is continuous on $K$ by hypothesis, so $f$ is uniformly continuous on $K$ by Theorem \ref{thm:4.19}. Hence, there exists $\delta > 0$ such that for $s, t \in K$, $\abs{s - t} < \delta \implies \abs{f(s) - f(t)} < \e$. We then form a partition $P = \set{x_1, \ldots, x_n}$ to consist of $\set{u_1, v_1, \ldots, u_k, v_k}$ and additional points in $K$ with $\Delta x_i < \delta$. For such $i$, $f$ is continuous on the subinterval and hence $M_i - m_i < \e$. For the other intervals $[u_j, v_j]$, we have that $M_j - m_j \leq 2M$ where $M = \sup\set{\abs{f(x)}: x \in [a, b]}$, and the total $\alpha$-variation of these intervals is less than $\e$. Therefore, we have that:\n    \begin{align*}\n        0 \leq U(P) - L(P) &= \sum_{i=1}^n (M_i - m_i)\Delta \alpha_i\n        \\ &\leq 2M\e + \e\left(\alpha(b) - \alpha(a)\right)\n    \end{align*}\n    Where the first term comes from the $[u_j, v_j]$s and the second term is the maximum possible value from the subintervals of $K$. Since $\e > 0$ was arbitrary and the right-hand side is $\e\left(2M + \alpha(b) - \alpha(a)\right)$, we can make $U(P) - L(P)$ as small as we like for some choice of $P$, and hence $f \in \R_\alpha[a, b]$ by Theorem \ref{thm:6.6}. \qed\n\end{nproof}\n\noindent A natural question that arises is ``what if $f$ and $\alpha$ are discontinuous at the same point?'' In this case, we can construct functions such that $f \notin \R_\alpha[a, b]$ (see HW1Q5).\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[latex-latex, very thick] (-2, 0) -- (2, 0);\n        \node[] at (-1.5, 0) {$[$};\n        \node[] at (1.5, 0) {$]$};\n        \node[] at (-1.2, 0) {$($};\n        \filldraw[] (-1, 0) circle (0.75pt);\n        \node[] at (-0.8, 0) {$)$};\n        \node[] at (0.2, 0) {$($};\n        \filldraw[] (0.5, 0) circle (0.75pt);\n        \node[] at (0.8, 0) {$)$};\n        \foreach \i in {-1.45, -1.4, ..., -1.21} {\n            \draw[blue] (\i, 0.05) -- (\i, -0.05);\n        }\n        \foreach \i in {-0.75, -.7, ..., 0.19} {\n            \draw[blue] (\i, 0.05) -- (\i, -0.05);\n        }\n        \foreach \i in {0.85, 0.9, ..., 1.49} {\n            \draw[blue] (\i, 0.05) -- (\i, -0.05);\n        }\n        \node[below] at (-1.5, -0.1) {$a$};\n        \node[below] at (-1.2, -0.1) {$u_1$};\n        \node[below] at (-1, -0.1) {$e_1$};\n        \node[below] at (-0.8, -0.1) {$v_1$};\n        \node[below] at (0.2, -0.1) {$u_2$};\n        \node[below] at (0.5, -0.1) {$e_2$};\n        \node[below] at (0.8, -0.1) {$v_2$};\n        \node[below] at (1.5, -0.08) {$b$};\n    \end{tikzpicture}\n    \caption{Visualization of how $[a, b]$ gets split up in the proof of Theorem \ref{thm:6.10}. 
$K$ consists of the union of the closed intervals marked in blue.}\n    \label{fig31}\n\end{figure}\n\n\begin{theorem}{}{6.11}\n    If $f \in \R_\alpha[a, b]$, $m \leq f(x) \leq M$ for all $x \in [a, b]$, and $\phi$ is continuous on $[m, M]$, then $\phi \circ f \in \R_\alpha[a, b]$. \n\end{theorem}\n\begin{nproof}\n    Not covered in lecture, see Rudin. \qed\n\end{nproof}\n\n\noindent As an example of the above theorem, suppose we have that $f \in \R(\alpha)$. Then, we have that $f^2 \in \R(\alpha)$ and $\abs{f} \in \R(\alpha)$ as $\phi(x) = x^2$ and $\phi(x) = \abs{x}$ are both continuous functions.\n\n\subsection{Properties of the Integral}\n\begin{theorem}{Linearity and Related Properties}{6.12}\n\begin{enumerate}\n    \item Suppose $f_1 \in \R_\alpha[a, b]$ and $f_2 \in \R_\alpha[a, b]$. Then, $f_1 + f_2 \in \R_\alpha[a, b]$ and $cf_1 \in \R_\alpha[a, b]$ for all $c \in \RR$. Furthermore:\n    \begin{align*}\n        \int_a^b(f_1 + f_2) d\alpha = \int_a^b f_1 d\alpha + \int_a^b f_2d\alpha \text{ and } \int_a^b cf_1 d\alpha = c\int_a^b f_1d\alpha\n    \end{align*}\n    \item Suppose $f_1, f_2 \in \R_\alpha[a, b]$ and $f_1(x) \leq f_2(x)$ for all $x \in [a, b]$. Then,\n    \begin{align*}\n        \int_a^b f_1 d\alpha \leq \int_a^b f_2d\alpha\n    \end{align*}\n    \item If $f \in \R_\alpha[a, b]$ and $c \in (a, b)$, then $f \in \R_\alpha[a, c] \cap \R_\alpha[c, b]$ and furthermore:\n    \begin{align*}\n        \int_a^b f d\alpha = \int_a^c f d\alpha + \int_c^b f d\alpha\n    \end{align*}\n    \item If $f \in \R_\alpha[a, b]$ and $\abs{f(x)} \leq M$ for all $x \in [a, b]$, then:\n    \begin{align*}\n        \abs{\int_a^b f d\alpha} \leq M(\alpha(b) - \alpha(a))\n    \end{align*}\n    \item If $f \in \R_{\alpha_1}[a, b]$ and $f \in \R_{\alpha_2}[a, b]$, then $f \in \R_{\alpha_1 + \alpha_2}[a, b]$ and $f \in \R_{c\alpha_1}[a, b]$ for all $c \geq 0$. Furthermore:\n    \begin{align*}\n        \int_a^b f d\alpha_1 + \int_a^b f d\alpha_2 = \int_a^b f d(\alpha_1 + \alpha_2) \text{ and } \int_a^b f d(c\alpha_1) = c\int_a^b f d\alpha_1\n    \end{align*}\n\n\end{enumerate}\n\end{theorem}\n\begin{nproof}\n    \begin{enumerate}\n        \item See Rudin and HW2Q1.\n        \item We have that $L(P, f_1) \leq L(P, f_2)$ for all partitions $P$ as $\inf\set{f_1(x): x \in [x_{i-1}, x_i]} \leq \inf\set{f_2(x): x \in [x_{i-1}, x_i]}$ for all $i$ (as $f_1(x) \leq f_2(x)$). Therefore, we have that:\n        \begin{align*}\n            \int_a^b f_1 d\alpha = \sup_P L(P, f_1) \leq \sup_P L(P, f_2) = \int_a^b f_2 d\alpha.\n        \end{align*}\n        \item Exercise (see HW2Q1 for the $\uint{a}{b}$ case)\n        \item Consider any partition $P$ of $[a, b]$. Let $M = \sup\set{\abs{f(x)}: x \in [a, b]}$. We then have that (using the ML bound from Theorem \ref{thm:6.1}):\n        \begin{align*}\n            -M[\alpha(b) - \alpha(a)] = \sum_{i=1}^n(-M)\Delta \alpha_i \leq \sum_{i=1}^n m_i \Delta \alpha_i = L(P, f, \alpha) \leq \int_a^b fd\alpha \n            \\ \leq U(P, f, \alpha) = \sum_{i=1}^n M_i \Delta \alpha_i \leq \sum_{i=1}^n M\Delta \alpha_i = M[\alpha(b) - \alpha(a)].\n        \end{align*}\n        \item We prove the first (additivity) statement. Let $\e > 0$. 
Choose $P_i, i \in \set{1, 2}$ (by Theorem \ref{thm:6.6}) such that \begin{align*}\n            U(P_i, f, \alpha_i) - L(P_i, f, \alpha_i) < \frac{\e}{2} \n        \end{align*}   \n        Let $P^* = P_1 \cup P_2$. Then, by Theorem \ref{thm:6.4}, we have that:\n        \begin{align*}\n            U(P^*, f, \alpha_i) - L(P^*, f, \alpha_i) < \frac{\e}{2}\n        \end{align*}\n        Adding the two expressions (for $i = 1, 2$) together, we have:\n        \begin{align*}\n            U(P^*, f, \alpha_1 + \alpha_2) - L(P^*, f, \alpha_1 + \alpha_2) < \e\n        \end{align*}\n        So $f \in \R_{\alpha_1 + \alpha_2}[a, b]$ by Theorem \ref{thm:6.6}. Furthermore, we have that:\n        \begin{align*}\n            \int_a^b f d(\alpha_1 + \alpha_2) \leq U(P^*, f, \alpha_1 + \alpha_2) = U(P^*, f, \alpha_1) + U(P^*, f, \alpha_2) < \int_a^b f d\alpha_1 + \int_a^b fd\alpha_2 + \e\n        \end{align*}\n        where we apply the inequality of $U(P^*, f, \alpha_i) < \int_a^b f d\alpha_i + \frac{\e}{2}$ in the last line. Similarly, we have that:\n        \begin{align*}\n            \int_a^b fd(\alpha_1 + \alpha_2) \geq L(P^*, f, \alpha_1 + \alpha_2) = L(P^*, f, \alpha_1) + L(P^*, f, \alpha_2) > \int_a^b fd\alpha_1 + \int_a^b fd\alpha_2 - \e\n        \end{align*}\n        Since $\e$ is arbitrary, we therefore conclude that:\n        \begin{align*}\n            \int_a^b f d(\alpha_1 + \alpha_2) = \int_a^b fd\alpha_1 + \int_a^b fd\alpha_2\n        \end{align*} \qed\n    \end{enumerate}\n\end{nproof}\n\n\begin{theorem}{}{6.13}\n    \begin{enumerate}\n        \item Suppose $f, g \in \R(\alpha)$. Then, $fg \in \R(\alpha)$.\n        \item Suppose $f \in \R(\alpha)$. Then, $\abs{f} \in \R(\alpha)$ and $\abs{\int_a^b fd\alpha} \leq \int_a^b \abs{f}d\alpha$.\n    \end{enumerate}\n\end{theorem}\n\begin{nproof}\n    \begin{enumerate}\n        \item By Theorem \ref{thm:6.12}(a), $f \pm g \in \R(\alpha)$, so by Theorem \ref{thm:6.11}, $(f \pm g)^2 \in \R(\alpha)$. Since $(f+g)^2 - (f-g)^2 = 4fg$, we then have that:\n        \begin{align*}\n            \frac{(f+g)^2 - (f-g)^2}{4} = fg \in \R(\alpha)\n        \end{align*}\n        \item By Theorem \ref{thm:6.11}, $\abs{f} \in \R(\alpha)$ letting $\phi(t) = \abs{t}$. Let $c = \sgn\left(\int_a^b f d\alpha\right) \in \set{-1, 0, 1}$. Then, we have that:\n        \begin{align*}\n            \abs{\int_a^b fd\alpha} = c\int_a^b fd\alpha = \int_a^b cfd\alpha \leq \int_a^b \abs{f}d\alpha\n        \end{align*}\n        Where we use Theorem \ref{thm:6.12}(a) in the second equality, and Theorem \ref{thm:6.12}(b) in the last inequality (as $cf \leq \abs{f}$). \qed\n    \end{enumerate}\n\end{nproof}\n\n\setcounter{rudin}{14}\n\n\begin{theorem}{}{6.15}\n    Suppose $f$ is bounded on $[a, b]$, $s \in (a, b)$, and $f$ is continuous at $s$. Let:\n    \begin{align*}\n        \alpha(x) = \begin{cases}\n            0 & x \leq s\n            \\ 1 & x > s\n        \end{cases}\n    \end{align*}\n    Then, $\int_a^b fd\alpha = f(s)$ (and in particular, $f \in \R(\alpha)$).\n\end{theorem}\n\n\noindent This result is interesting, and some remarks on the nature of this theorem are in order. 
Firstly, by Theorem 4.29 (not covered in Lecture, see Rudin), since $\alpha$ is monotonically increasing, we have that $\lim_{t \rightarrow x^+}\alpha(t) = \alpha(x^+)$ and $\lim_{t \rightarrow x^-}\alpha(t) = \alpha(x^-)$ exist at every $x \in (a, b)$ and $\alpha(x^-) \leq \alpha(x) \leq \alpha(x^+)$. Secondly, note that Rudin defines $\alpha$ in the above Theorem to be left continuous, but in probability theory, it is conventional to use a right continuous one (i.e. $\alpha(x) = \alpha(x^+)$ for all $x \in (a, b)$). We leave it as an exercise to prove the theorem for the case of a right continuous $\alpha$ (the strategy and result are identical).\n\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[-latex, very thick] (0, 0) -- (2, 0);\n        \draw[-latex, very thick] (0, 0) -- (0, 2);\n        \draw [] (0.25 , 0) to [ curve through ={(0.5, 0.3)}] (1, 0.5);\n        \draw[] (1, 1.5) to [curve through = {(1.4, 1.7)}] (1.9, 1.8);\n        \draw[fill = white] (1, 0.5) circle (1pt);\n        \filldraw[] (1, 1) circle (1pt);\n        \draw[fill = white] (1, 1.5) circle (1pt);\n        \draw[] (1, 0) -- (1, -0.15);\n        \node[below] at (1, -0.15) {$x$};\n        \draw[] (0, 0.5) -- (-0.15, 0.5);\n        \draw[] (0, 1) -- (-0.15, 1);\n        \draw[] (0, 1.5) -- (-0.15, 1.5);\n        \node[left] at (-0.15, 0.5) {$\alpha(x^-)$};\n        \node[left] at (-0.15, 1) {$\alpha(x)$};\n        \node[left] at (-0.15, 1.5) {$\alpha(x^+)$};\n        %\draw[] (0.5, 0) -- (0.5, -0.15);\n        %\draw[] (1.7, 0) -- (1.7, -0.15);\n        %\node[below] at (0.5, -0.16) {$a$};\n        %\node[below] at (1.7, -0.13) {$b$};    \n    \end{tikzpicture}\n    \n    \caption{Visualization of Theorem 4.29. For any monotonically increasing function, we have that $\alpha(x^-) \leq \alpha(x) \leq \alpha(x^+)$ at a discontinuity $x$.}\n    \label{fig32}\n\end{figure}\nAny physicist looking at the result of the Theorem will note that it looks very much like the ``Dirac Delta'' function; indeed, this choice of $\alpha$ is making rigorous the notion of a $\delta$ function where:\n\begin{align*}\n    \int_a^b f(x)\delta(x - s)dx = f(s)\n\end{align*}\nWe leave it as a homework exercise (HW2Q2) to prove that no such function can actually exist. This step-function method is one way to make the $\delta$ function well-defined; other methods include taking the limit of a bell curve, or bringing in the theory of distributions (the latter of which is most definitely outside the scope of this course).\n\nFinally, we note that this example really shows off something that the Riemann integral cannot reproduce: the incorporation of ``discrete'' points.\n\n
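For a concrete instance (a quick check with illustratively chosen values): take $[a, b] = [0, 2]$, $s = 1$, and $f(x) = x^2$. Then the theorem gives\n\begin{align*}\n    \int_0^2 x^2 d\alpha = f(1) = 1,\n\end{align*}\nsince all of the ``$\alpha$-mass'' sits at the single jump point $s = 1$.\n\n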
\begin{nproof}\n    Choose $P = \set{x_0 = a, x_1 = s, x_2, x_3 = b}$, where $s < x_2 < b$. We then have that:\n    \begin{align*}\n        U(P) &= \sum_{i=1}^3M_i\Delta \alpha_i = M_2\Delta \alpha_2 = M_2 = \sup\set{f(x): x \in [x_1, x_2]}\n        \\ L(P) &= \sum_{i=1}^3 m_i \Delta \alpha_i = m_2\Delta \alpha_2 = m_2\n    \end{align*}\n    Here we used that $\Delta \alpha_2 = 1$ while $\Delta \alpha_1 = \Delta \alpha_3 = 0$. So, we have that $m_2 \leq \lint{a}{b} f d\alpha \leq \uint{a}{b} f d\alpha \leq M_2$. Taking the limit as $x_2 \rightarrow s^+$, by the continuity of $f$ at $s$ we have that $M_2 \rightarrow f(s)$ and $m_2 \rightarrow f(s)$. Therefore, $\int_a^b f d\alpha$ exists and equals $f(s)$. \qed\n\end{nproof}\n\noindent Note that the above proof works exactly the same way if $s = a$. If $s = b$, then because we take $\alpha$ to be left continuous, $\alpha = 0$ everywhere on $[a, b]$ and the integral just equals zero.\n\n\setcounter{rudin}{13}\n\begin{definition}{Unit Step Function}{6.14}\n    We define the (left continuous) \textbf{unit step function} as:\n    \begin{align*}\n        I(x) = \begin{cases}\n            0 & x \leq 0\n            \\ 1 & x > 0\n        \end{cases}\n    \end{align*}\n    In Theorem \ref{thm:6.15}, we have that $\alpha(x) = I(x - s)$. \n\end{definition}\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[-latex, very thick] (0, 0) -- (0, 2);\n        \draw[latex-latex, very thick] (-2, 0) -- (2, 0);\n        \draw[<-, thick, blue] (-1.5, 0) -- (0, 0);\n        \draw[->, thick, blue] (0, 1) -- (1.5, 1);\n        \filldraw[fill = blue, draw = blue] (0, 0) circle (1pt);\n        \filldraw[fill = white, draw = blue] (0, 1) circle (1pt);\n    \end{tikzpicture}\n    \n    \caption{Plot of the (left continuous) unit step function. To obtain the $\alpha$ used in Theorem \ref{thm:6.15}, move the step to $x = s$ instead of $x = 0$.}\n    \label{fig33}\n\end{figure}\n\n\stepcounter{rudin}\n\begin{theorem}{}{6.16}\n    Let $c_n \geq 0$ be such that $\sum_{n=1}^\infty c_n < \infty$. Take $s_n \in (a, b)$ distinct (that is, $s_n \neq s_m$ if $n \neq m$). Define $\alpha(x) = \sum_{n=1}^\infty c_n I(x - s_n)$ (which converges/exists by comparison, as $\sum c_n$ converges). Let $f$ be continuous on $[a, b]$. Then,\n    \begin{align*}\n        \int_a^b f d\alpha = \sum_{n=1}^\infty c_nf(s_n)\n    \end{align*}\n\end{theorem}\n\noindent Note that the resultant series in the theorem above is convergent/well defined as $f$ is bounded on $[a, b]$ and hence $\abs{\sum_{n=1}^\infty c_nf(s_n)} \leq M\sum_{n=1}^\infty c_n$ where $M = \sup\set{\abs{f(x)}: x \in [a, b]}$. Hence the series converges by comparison. If all the $c_n$s are zero except for one, this is just Theorem \ref{thm:6.15}. Note that the above result holds for a finite sum as well (we would use induction to prove it in this case).\n\n
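As a concrete example (with illustratively chosen values): on $[0, 1]$, take $c_n = 2^{-n}$, $s_n = 2^{-n}$, and $f(x) = x$. Then the theorem gives\n\begin{align*}\n    \int_0^1 f d\alpha = \sum_{n=1}^\infty 2^{-n}f(2^{-n}) = \sum_{n=1}^\infty 4^{-n} = \frac{1}{3}.\n\end{align*}\n\n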
\begin{nproof}\n    Let $R_N = \int_a^b fd\alpha - \sum_{n=1}^N c_nf(s_n)$. We show that $R_N \rightarrow 0$ as $N \rightarrow \infty$ (i.e. given $\e > 0$, we show there exists $N_0$ such that $\abs{R_N} < \e$ if $N \geq N_0$). We define:\n    \begin{align*}\n        \alpha_1(x) = \sum_{n=1}^N c_nI(x - s_n) \quad \alpha_2(x) = \sum_{n=N+1}^\infty c_n I(x - s_n)\n    \end{align*}\n    Then, by Theorem \ref{thm:6.12}(e) we have that:\n    \begin{align*}\n        \int_a^b fd\alpha = \int_a^b f d\alpha_1 + \int_a^b f d\alpha_2 \n    \end{align*}\n    Then, we have that:\n    \begin{align*}\n        \int_a^b fd\alpha_1 &= \sum_{n=1}^N \int_a^b f(x)d(c_nI(x - s_n))\n        \\ &= \sum_{n=1}^N c_n\int_a^b f(x)d(I(x - s_n))\n        \\ &= \sum_{n=1}^Nc_nf(s_n)\n    \end{align*}\n    Where in the first two equalities we again apply Theorem \ref{thm:6.12}(e) and in the last equality we use Theorem \ref{thm:6.15}. We therefore obtain that $R_N = \int_a^b fd\alpha_2$. We then have that:\n    \begin{align*}\n        \abs{R_N} \leq M\left[\alpha_2(b) - \alpha_2(a)\right] = M\sum_{n=N+1}^{\infty} c_n\n    \end{align*}\n    Then by the convergence of $\sum c_n$, we can choose $N_0$ such that $\sum_{n=N_0+1}^\infty c_n < \frac{\e}{M}$. We therefore have that $\abs{R_N} < \e$ if $N \geq N_0$, proving the claim. \qed\n\end{nproof}\n\n\begin{theorem}{}{6.17}\n    Suppose:\n    \begin{enumerate}[(i)]\n        \item $\abs{f(x)} \leq M$ for all $x \in [a, b]$.\n        \item $\alpha$ is continuous and increasing on $[a, b]$ and differentiable on $(a, b)$.\n        \item $\alpha' \in \R[a, b]$. \n    \end{enumerate}\n    Then, $f \in \R_\alpha[a, b]$ if and only if $f\alpha' \in \R[a, b]$ and in this case:\n    \begin{align*}\n        \int_a^b fd\alpha = \int_a^b f(x)\alpha'(x)dx\n    \end{align*}\n\end{theorem}\n\n\begin{nproof}\n    It suffices to show that:\n    \begin{align*}\n        \uint{a}{b} f d\alpha = \uint{a}{b} f\alpha'dx \text{ and } \lint{a}{b}fd\alpha = \lint{a}{b}f\alpha' dx\n    \end{align*}\n    We prove the first equality and the second equality follows analogously. Let $\e > 0$. Since $\alpha' \in \R$, there exists a partition $P$ such that $U(P, \alpha') - L(P, \alpha') < \e$ (this also holds for any refinement of $P$). Letting $A_i = \sup \alpha'$ and $a_i = \inf \alpha'$ on $[x_{i-1}, x_i]$, we have that:\n    \begin{align*}\n        \sum_{i=1}^n(A_i - a_i) \Delta x_i < \e\n    \end{align*}\n    By the Mean Value Theorem (Theorem \ref{thm:5.10}) we have that there exists $t_i \in [x_{i-1}, x_i]$ such that $\Delta \alpha_i = \alpha'(t_i)\Delta x_i$. We also have that for all $s_i \in [x_{i-1}, x_i]$, $\abs{\alpha'(s_i) - \alpha'(t_i)} \leq A_i - a_i$ ($\alpha'$ can only vary as much as the maximum minus the minimum on the interval). We therefore have that:\n    \begin{align*}\n        \sum_{i=1}^n \abs{\alpha'(s_i) - \alpha'(t_i)}\Delta x_i \leq \sum_{i=1}^n (A_i - a_i)\Delta x_i < \e\n    \end{align*}\n    Recall that we skipped Theorem 6.7, but it might be worth comparing this result to Theorem 6.7(c). Recall from hypothesis (i) that $\abs{f(x)} \leq M$ on $[a, b]$. Now, for any choice of $s_i \in [x_{i-1}, x_i]$, we have that:\n    \begin{align*}\n        \abs{\sum_{i=1}^n f(s_i)\Delta\alpha_i - \sum_{i=1}^n f(s_i)\alpha'(s_i)\Delta x_i} \leq \sum_{i=1}^n M\abs{\alpha'(t_i) - \alpha'(s_i)}\Delta x_i < M\e \quad (*)\n    \end{align*}\n    Therefore:\n    \begin{align*}\n        \sum_{i=1}^n f(s_i) \Delta \alpha_i \leq \sum_{i=1}^n f(s_i)\alpha'(s_i)\Delta x_i + M\e \leq U(P, f\alpha') + M\e\n    \end{align*}\n    We now take the supremum over each $[x_{i-1}, x_i]$. Since $s_i$ is arbitrary, we have that $\uint{a}{b}fd\alpha \leq U(P, f, \alpha) \leq U(P, f\alpha') + M\e$, and hence $\uint{a}{b}f d\alpha \leq \uint{a}{b}f\alpha' dx + M\e$. $(*)$ gives $\uint{a}{b}f\alpha' dx \leq \uint{a}{b}fd\alpha + M\e$. Since $\e$ is arbitrary, we have that $\uint{a}{b}fd\alpha \leq \uint{a}{b}f\alpha'dx$ and $\uint{a}{b}f\alpha'dx \leq \uint{a}{b}fd\alpha$, so $\uint{a}{b}f\alpha'dx = \uint{a}{b}fd\alpha$. \qed\n\end{nproof}\n
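\noindent As a quick example (a sanity check with simple illustrative choices): on $[0, 1]$, take $\alpha(x) = x^2$ and $f(x) = x$. Then the theorem gives\n\begin{align*}\n    \int_0^1 x d\alpha = \int_0^1 x \cdot 2x dx = \frac{2}{3}.\n\end{align*}\n\n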
\noindent This theorem tells us something that looks very familiar from Calculus... namely, a change of variables! Recall the COV formula from first year:\n\begin{align*}\n    \int_a^b f(x)dx = \int_{\phi^{-1}(a)}^{\phi^{-1}(b)} f(\phi(y))\phi'(y)dy\n\end{align*}\nWhere $x = \phi(y)$, $y = \phi^{-1}(x)$. We now prove this notion rigorously. \n\n\setcounter{rudin}{18}\n\begin{theorem}{Change of Variable}{6.19}\n    Let $f: [a, b] \mapsto \RR$. Suppose $\phi: [A, B] \mapsto [a, b]$ is continuous and strictly increasing. Suppose $\alpha$ is increasing on $[a, b]$ and $f \in \R_{\alpha}[a, b]$. Let $g = f \circ \phi: [A, B] \mapsto \RR$ and $\beta = \alpha \circ \phi: [A, B] \mapsto \RR$. Then, we have that $g \in \R_\beta[A, B]$, and:\n    \begin{align*}\n        \int_A^B g d\beta = \int_a^b fd\alpha\n    \end{align*}\n\end{theorem}\n\noindent As an example, consider the integral $\int_a^b \sin x^2dx$ for $0 \leq a < b$. We then have that $f(x) = \sin x^2$ and $\alpha(x) = x$. We make the substitution $y = x^2$, so $\phi(y) = \sqrt{y}$ and $\phi^{-1}(x) = x^2$. Then, $A = \phi^{-1}(a) = a^2$, $B = \phi^{-1}(b) = b^2$. We then have that $g(y) = f \circ \phi(y) = f(\sqrt{y}) = \sin(y)$, and $\beta(y) = \alpha \circ \phi(y) = \alpha(\sqrt{y}) = \sqrt{y}$. We then have that:\n\begin{align*}\n    \int_a^b \sin x^2 dx = \int_{a^2}^{b^2} \sin y d\beta = \int_{a^2}^{b^2} \frac{\sin y}{2\sqrt{y}}dy\n\end{align*}\nWhere the first equality follows by Theorem \ref{thm:6.19} and the second equality follows by Theorem \ref{thm:6.17}.\n\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[-latex, very thick] (0, 0) -- (2, 0);\n        \draw[-latex, very thick] (0, 0) -- (0, 2);\n        \draw[] (0, 0.5) -- (-0.15, 0.5);\n        \draw[] (0, 0.7) -- (-0.15, 0.7);\n        \draw[] (0, 1.5) -- (-0.15, 1.5);\n        \draw[] (0, 1.7) -- (-0.15, 1.7);\n        \node[] at (0.7, 1.2) {$x = \phi(y)$};\n        \filldraw[] (0.5, 0.5) circle (1pt);\n        \filldraw[] (1.7, 1.7) circle (1pt);\n        \draw[] (0.5, 0) -- (0.5, -0.15);\n        \draw[] (1.7, 0) -- (1.7, -0.15);\n        \node[below] at (0.5, -0.16) {$a$};\n        \node[below] at (1.7, -0.13) {$b$};\n        \node[below] at (0.86, -0.15) {$x_1$};\n        \node[below] at (1.36, -0.15) {$x_2$};\n        \draw[] (0.86, 0) -- (0.86, -0.15);\n        \draw[] (1.36, 0) -- (1.36, -0.15);\n        \node[left] at (-0.15, 0.5) {$A$};\n        \node[left] at (-0.15, 1.7) {$B$};\n        \node[left] at (-0.15, 0.7) {$y_1$};\n        \node[left] at (-0.15, 1.5) {$y_2$};\n        \draw[dashed] (0, 0.5) -- (0.5, 0.5);\n        \draw[dashed] (0.5, 0) -- (0.5, 0.5);\n        \draw[dashed] (0, 0.7) -- (0.86, 0.7);\n        \draw[dashed] (0.86, 0) -- (0.86, 0.7);\n        \draw[dashed] (0, 1.5) -- (1.36, 1.5);\n        \draw[dashed] (1.36, 0) -- (1.36, 1.5);\n        \draw[dashed] (0, 1.7) -- (1.7, 1.7);\n        \draw[dashed] (1.7, 0) -- (1.7, 1.7);\n        \draw [] (0.5 , 0.5) to [ curve through ={(0.8, 0.6)..(1, 1)..(1.25, 1.2)..(1.4, 1.6)}] (1.7, 1.7);\n    \end{tikzpicture}\n    \caption{Plot of a (monotonically increasing and continuous) function $\phi$ and a demonstration of how it puts partitions of $[a, b]$ and $[A, B]$ in one-to-one correspondence. This gives the intuition for the proof of Theorem \ref{thm:6.19}.}\n    \label{fig34}\n\end{figure}\n\n\begin{nproof}\n    We make the observation that partitions $P$ of $[a, b]$ and partitions $Q$ of $[A, B]$ can be put in 1-1 correspondence via $x_i = \phi(y_i)$. Additionally, we make the observation that the set of $g$ values on $[y_{i-1}, y_i]$ is equal to the set of $f$ values on $[x_{i-1}, x_i]$. Finally, we observe that $\alpha(x_i) = (\alpha\circ\phi)(y_i) = \beta(y_i)$. With these three observations, we have that:\n    \begin{align*}\n        U(P, f, \alpha) = U(Q, g, \beta) \text{ and } L(P, f, \alpha) = L(Q, g, \beta)\n    \end{align*}\n    Let $\e > 0$. 
Since $f \in \R_{\alpha}[a, b]$, there exists a $P$ such that $U(P, f, \alpha) - L(P, f, \alpha) < \e$ by Theorem \ref{thm:6.6}. For this partition, we have that $U(Q, g, \beta) - L(Q, g, \beta) < \e$ for the corresponding partition $Q$. Hence, $g \in \R_\beta[A, B]$. Finally, we have that:\n    \begin{align*}\n        \int_A^B gd\beta = \inf_Q U(Q, g, \beta) = \inf_P U(P, f, \alpha) = \int_a^b fd\alpha\n    \end{align*}\n    \qed\n\end{nproof}\n\n\subsection{The Fundamental Theorem of Calculus}\n\n\begin{theorem}{Fundamental Theorem of Calculus I}{6.20}\n    Let $f \in \R[a, b]$ and for $x \in [a, b]$, define $F(x) = \int_a^x f(t)dt$. Then, $F$ is continuous on $[a, b]$. If $f$ is continuous at $x_0 \in [a, b]$ then $F'(x_0)$ exists and $F'(x_0) = f(x_0)$.  \n\end{theorem}\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[-latex, very thick] (0, 0) -- (2, 0);\n        \draw[-latex, very thick] (0, 0) -- (0, 2);\n        \filldraw[] (0.5, 0.5) circle (1pt);\n        \filldraw[] (1.7, 1.7) circle (1pt);\n        \draw[] (0.5, 0) -- (0.5, -0.15);\n        \draw[] (1.7, 0) -- (1.7, -0.15);\n        \draw[] (1, 0) -- (1, -0.15);\n        \node[below] at (0.5, -0.16) {$a$};\n        \node[below] at (1.7, -0.13) {$b$};\n        \node[below] at (1, -0.15) {$x$};\n        \draw [] (0.5 , 0.5) -- (1.7, 1.7);\n        \filldraw[opacity = 0.5, blue] (0.5, 0.5) -- (1, 1) -- (1, 0) -- (0.5, 0) -- (0.5, 0.5);\n        \node[text = blue] at (1.2, 0.5) {$F(x)$};\n        \node[above] at (1.2, 1.3) {$f(x)$};\n    \end{tikzpicture}\n    \n    \caption{The Fundamental Theorem of Calculus/Theorem \ref{thm:6.20} gives us a way to relate a curve ($f$) with the cumulative area beneath it ($F$) by means of the derivative.}\n    \label{fig35}\n\end{figure}\n\nTo get intuition for Theorem \ref{thm:6.20}, we think about what happens when we ``zoom in'' to a function $f$. Over an interval $(x_0, x_0 + h)$, if $f$ is continuous at $x_0$ and $h$ is small, then $f$ will be roughly constant over the interval. Hence, $F(x_0 + h) - F(x_0) \approx f(x_0)h$. In the limit of $h \rightarrow 0$, this approximation becomes exact. \n\n\begin{figure}[htbp]\n    \centering\n    \begin{tikzpicture}[scale=2]\n        \draw[latex-latex, very thick] (0, 0) -- (2, 0);\n        \draw [<->, thick] (0.3, 1) -- (1.7, 1.05);\n        \draw[] (0.5, 0) -- (0.5, -0.15);\n        \draw[] (1.5, 0) -- (1.5, -0.15);\n        \node[below] at (0.5, -0.18) {$x_0$};\n        \node[below] at (1.46, -0.15) {$x_0 + h$};\n        \filldraw[opacity = 0.5, blue] (0.5, 0) -- (1.5, 0) -- (1.5, 1.04) -- (0.5, 1.004) -- (0.5, 0);\n\n    \end{tikzpicture}\n    \n    \caption{Visual Intuition for Theorem \ref{thm:6.20} and its proof. As $f$ is continuous at $x_0$, for small $h$, $f$ is roughly constant on $(x_0, x_0 + h)$. Hence, the increase in cumulative area below the curve is roughly $f(x_0)\cdot h$ (height of $f(x_0)$ times width $h$ of the approximate rectangle).}\n    \label{fig36}\n\end{figure}\n\n
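As a quick sanity check (with a simple illustrative choice of $f$): taking $f(t) = t$ on $[a, b]$, we have\n\begin{align*}\n    F(x) = \int_a^x t dt = \frac{x^2 - a^2}{2}, \quad F'(x) = x = f(x),\n\end{align*}\nexactly as the theorem predicts.\n\n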
\begin{nproof}\n    We first show the continuity of $F$. Choose $M$ such that $\abs{f(t)} \leq M$ for all $t \in [a, b]$. For $a \leq x < y \leq b$, we then have that:\n    \begin{align*}\n        \abs{F(x) - F(y)} = \abs{\int_a^xf(t)dt - \int_a^yf(t)dt} = \abs{\int_x^y f(t)dt} \leq M(y - x)\n    \end{align*}\n    Where we use Theorem \ref{thm:6.12}(c) for the second equality and Theorem \ref{thm:6.12}(d) for the inequality at the end. Let $\e > 0$. If $\abs{x - y} < \delta = \frac{\e}{M}$, then $\abs{F(x) - F(y)} < \e$ so $F$ is continuous.\n\n    We next show the differentiability of $F$. We wish to show that:\n    \begin{align*}\n        \lim_{h \rightarrow 0} \frac{F(x_0 + h) - F(x_0)}{h} = f(x_0)\n    \end{align*}\n    We show the case for $h > 0$, that is, taking the limit of $h \rightarrow 0^+$. We have that:\n    \begin{align*}\n        \frac{1}{h}\left[F(x_0 + h) - F(x_0)\right] - f(x_0) = \frac{1}{h}\int_{x_0}^{x_0 + h}f(t)dt - f(x_0) = \frac{1}{h}\int_{x_0}^{x_0 + h}\left[f(t) - f(x_0)\right]dt\n    \end{align*}\n    Where in the last line we use the observation that $\frac{1}{h}\int_{x_0}^{x_0 + h}f(x_0)dt = f(x_0)$. Since $f$ is continuous at $x_0$, for any $\e > 0$, there exists $\delta > 0$ such that:\n    \begin{align*}\n        \abs{t - x_0} < \delta \implies \abs{f(t) - f(x_0)}  <\e\n    \end{align*}\n    Thus, if $h < \delta$, then:\n    \begin{align*}\n        \abs{ \frac{1}{h}\left[F(x_0 + h) - F(x_0)\right] - f(x_0)} \leq \frac{1}{h}\int_{x_0}^{x_0 + h}\abs{f(t) - f(x_0)}dt < \frac{1}{h}\e h = \e\n    \end{align*}\n    So we conclude that:\n    \begin{align*}\n        \lim_{h \rightarrow 0^+} \frac{F(x_0 + h) - F(x_0)}{h} = F'(x_0) = f(x_0)\n    \end{align*}\n    \qed\n\end{nproof}\n\n\begin{theorem}{Fundamental Theorem of Calculus II}{6.21}\n    If $f \in \R[a, b]$ and there exists $F$ on $[a, b]$ such that $F' = f$, then:\n    \begin{align*}\n        \int_a^b f(x)dx = F(b) - F(a)\n    \end{align*}\n\end{theorem}\n\n\begin{nproof}\n    For any partition $P$, we have that\n    \begin{align*}\n        F(b) - F(a) &= \sum_{i=1}^n \left[F(x_i) - F(x_{i-1})\right]\n        \\ &= \sum_{i=1}^n F'(t_i) \Delta x_i\n        \\ &= \sum_{i=1}^n f(t_i) \Delta x_i\n    \end{align*}\n    Where the first equality follows by the fact that the sum telescopes, the second equality follows from the Mean Value Theorem/Theorem \ref{thm:5.10} (by the differentiability of $F$, there exists $t_i \in [x_{i-1}, x_i]$ such that $F(x_i) - F(x_{i-1}) = F'(t_i)(x_i - x_{i-1})$) and the third equality follows from the hypothesis that $F' = f$. Now, we have that $f(t_i) \in [m_i, M_i]$ for each $i$, so we therefore have that:\n    \begin{align*}\n        \sum_{i=1}^n m_i \Delta x_i \leq \sum_{i=1}^n f(t_i)\Delta x_i \leq \sum_{i=1}^n M_i \Delta x_i\n    \end{align*}\n    Hence:\n    \begin{align*}\n        F(b) - F(a) \in [L(P, f), U(P, f)]\n    \end{align*}\n    Additionally, by definition of the integral we have that:\n    \begin{align*}\n        \int_a^b f(x)dx \in [L(P, f), U(P, f)]\n    \end{align*}\n    Let $\e > 0$. Choose $P$ such that $U(P, f) - L(P, f) < \e$. 
Then, we have that:\n    \begin{align*}\n        \abs{F(b) - F(a) - \int_a^b f(x)dx} < \e\n    \end{align*}\n    Since $\e$ is arbitrary, we have that:\n    \begin{align*}\n        F(b) - F(a) = \int_a^b f(x)dx\n    \end{align*} \qed\n\end{nproof}\n\n\begin{theorem}{Integration By Parts}{6.22}\n    If $F, G$ are differentiable on $[a, b]$ with $F' = f \in \R[a, b]$ and $G' = g \in \R[a, b]$, then:\n    \begin{align*}\n        \int_a^b F(x)g(x)dx = F(b)G(b) - F(a)G(a) - \int_a^b f(x)G(x)dx\n    \end{align*}\n\end{theorem}\n\noindent Note that the above integration by parts formula generalizes to different $\alpha$ (not just $\alpha(x) = x$). The proof of this is left as an exercise (Rudin Chapter 6 Problem 17).\n\begin{nproof}\n    Let $H(x) = F(x)G(x)$. Then, $H'(x) = F'(x)G(x) + F(x)G'(x) = f(x)G(x) + F(x)g(x)$ by the product rule (Theorem \ref{thm:5.3}), and $H' \in \R[a, b]$ since $F, G$ are continuous (hence integrable) and sums and products of integrable functions are integrable (Theorems \ref{thm:6.12} and \ref{thm:6.13}). Therefore, we have that:\n    \begin{align*}\n        H(b) - H(a) = \int_a^b H'(x)dx = \int_a^b f(x)G(x)dx + \int_a^b F(x)g(x)dx\n    \end{align*}\n    Where the first equality follows by FTC II (Theorem \ref{thm:6.21}). We have that $H(b) - H(a) = F(b)G(b) - F(a)G(a)$, so rearranging the above expression, we find that:\n    \begin{align*}\n        \int_a^b F(x)g(x)dx = F(b)G(b) - F(a)G(a) - \int_a^b f(x)G(x)dx\n    \end{align*}\n    Which is the desired expression. \qed\n\end{nproof}\n\n
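\noindent For a quick illustration (with simple illustrative choices): taking $F(x) = x$ and $G(x) = e^x$ on $[0, 1]$ (so that $f = 1$ and $g = e^x$), the formula gives\n\begin{align*}\n    \int_0^1 x e^x dx = 1 \cdot e - 0 \cdot 1 - \int_0^1 e^x dx = e - (e - 1) = 1.\n\end{align*}\n\n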
Then, we have that:
    \begin{align*}
        \abs{p_n - p_m} &= \abs{\int_m^n \sin t^2 dt}
        \\ &= \abs{\frac{\cos m^2}{2m} - \frac{\cos n^2}{2n} - \int_{m^2}^{n^2} \frac{\cos u}{4 u^{3/2}}du }
    \end{align*}
    Using the fact that $\abs{\cos(x)} \leq 1$ and applying the triangle inequality, we have that:
    \begin{align*}
        \abs{p_n - p_m} \leq \frac{1}{2m} + \frac{1}{2n} + \int_{m^2}^{n^2} \frac{1}{4u^{3/2}}du = \frac{1}{2m} + \frac{1}{2n} + \left. \frac{-1}{2u^{1/2}}\right|_{m^2}^{n^2} = \frac{1}{2m} + \frac{1}{2n} - \frac{1}{2n} + \frac{1}{2m} = \frac{1}{m} \quad (*)
    \end{align*}
    Let $\e > 0$. Choose $N_0$ such that $\frac{1}{N_0} < \e$. Then, for $n \geq m \geq N_0$ we have that:
    \begin{align*}
        \abs{p_n - p_m} \leq \frac{1}{m} \leq \frac{1}{N_0} < \e
    \end{align*}
    and we conclude that $\set{p_n}$ is Cauchy. Since $\RR$ is complete, $p_n \rightarrow p$ for some $p \in \RR$. Note also that letting $n \rightarrow \infty$ in $(*)$ yields $\abs{p - p_m} \leq \frac{1}{m}$ for every $m$. To finish the proof, we show (ii); that this is sufficient. For $b \geq N_0$, choose $N \geq N_0$ such that $b \in [N, N+1)$. Then, we have that:
    \begin{align*}
        \int_{0}^b \sin t^2 dt = p_N + \int_{N}^b \sin t^2 dt, \qquad \abs{\int_{N}^b \sin t^2 dt} \leq \frac{1}{N}
    \end{align*}
    Where the bound on the second term follows exactly as in $(*)$, whose derivation holds for arbitrary real limits. We therefore have that:
    \begin{align*}
        \abs{p - \int_0^b \sin t^2 dt} \leq \abs{p - p_N} + \abs{\int_{N}^b \sin t^2 dt} \leq \frac{1}{N} + \frac{1}{N} \leq \frac{2}{N_0} < 2\e
    \end{align*}
    Since $\e$ is arbitrary, we conclude that $\lim_{b \rightarrow \infty} \int_0^b \sin t^2 dt = p$ and hence the improper integral converges.
\end{proof}
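
\noindent As a quick illustration of the two definitions above, consider the standard example $f(x) = \frac{1}{x^2}$ on $[1, \infty)$:
\begin{align*}
    \int_1^\infty \frac{1}{x^2}dx = \lim_{b \rightarrow \infty}\int_1^b \frac{1}{x^2}dx = \lim_{b \rightarrow \infty}\left(1 - \frac{1}{b}\right) = 1
\end{align*}
and since $\abs{f} = f$ here, this improper integral also converges absolutely. By contrast, $\int_0^\infty \sin t^2 dt$ converges only through the cancellation quantified in $(*)$, much like an alternating series.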
{"text": "\n\\subsection{Fermat's little theorem}\n\n\n", "meta": {"hexsha": "da2d70cda8fc75ada6b4b25867213deb81052ca3", "size": 40, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/logic/primes/01-05-fermatLittle.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/logic/primes/01-05-fermatLittle.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/logic/primes/01-05-fermatLittle.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 8.0, "max_line_length": 36, "alphanum_fraction": 0.75, "num_tokens": 11, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.793105951184112, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5519686943616278}}
{"text": "\\section{Experiments}\n\nThe following experiments refer to \\emph{linearly} and \\emph{nonlinearly} separable generated datasets of 100 training examples. All the training times refer to running on a laptop with an Intel i7-6700HQ (8) @ 3.500GHz and 31.2 GB of memory.\n\nThe Python source code is available at: \\href{https://github.com/dmeoli/optiml}{\\texttt{github.com/dmeoli/optiml}}.\n\n\\subsection{Support Vector Classifier}\n\nBelow experiments are about the SVC for which has been tested different values for the regularization hyperparameter $C$, i.e., from \\emph{soft} to \\emph{hard margin}, and in case of nonlinearly separable data also different \\emph{kernel functions} mentioned above.\n\nThe experiments about SVCs are available at: \\\\ \\href{https://github.com/dmeoli/optiml/blob/master/notebooks/optimization/CM_SVC_report_experiments.ipynb}{\\texttt{github.com/dmeoli/optiml/blob/master/notebooks/optimization/CM\\_SVC\\_report\\_experiments.ipynb}}.\n\n\\subsubsection{Hinge loss}\n\n\\paragraph{Primal formulation}\n\n% https://arxiv.org/pdf/1008.4000.pdf\n% https://hal.archives-ouvertes.fr/hal-00708243v1/file/appendixMNBCI.pdf\n% https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6580579&casa_token=5p51fnC0_mgAAAAA:CEwEO0ZeqzWDMlS_QG2fsfk4L7xlwUr6LzLu-Gw-nYD9CPEAhMCIMeyDDOTh7iATszKN3V4rvQ\n\nThe experiments results shown in~\\ref{primal_l1_svc_cv_results} referred to \\emph{Stochastic Gradient Descent} algorithm are obtained with $\\alpha$, i.e., the \\emph{learning rate} or \\emph{step size}, setted to $1/L$ where $\\displaystyle L = \\frac{C}{n} \\sum_{i=1}^n \\| x_i \\|^2$ and $\\beta$, i.e., the \\emph{momentum}, equal to 0.5. The optimization process is stopped if after 5 iterations the function value does not improve by at least $1\\mathrm{e}{-8}$.\n\n\\input{experiments/primal_l1_svc}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[scale=0.55]{img/l1_svc_loss_history}\n\t\\caption{SGD convergence for the primal formulation of the $\\protect \\mathcal{L}_1$-SVC}\n\t\\label{fig:l1_svc_loss_history}\n\\end{figure}\n\n\\paragraph{Linear dual formulations}\n\nThe experiments results shown in~\\ref{linear_lagrangian_dual_l1_svc_cv_results} are obtained with $\\alpha$, i.e., the \\emph{learning rate} or \\emph{step size}, setted to 1 for the \\emph{AdaGrad} algorithm. Note that the \\emph{unreg\\_bias} and \\emph{reg\\_bias} duals refers to the \\emph{Lagrangian dual} formulations~\\eqref{eq:l1_svc_aug_lagrangian_dual} and~\\eqref{eq:l1_svc_bcqp_aug_lagrangian_dual} respectively with $\\rho$ equals to 1. The optimization process is stopped if the primal-dual weight vector does not change by at least $1\\mathrm{e}{-6}$  between two consecutive iterations.\n\n\\input{experiments/linear_dual_l1_svc}\n\n\\input{experiments/linear_lagrangian_dual_l1_svc}\n\n\\paragraph{Nonlinear dual formulations}\n\nThe experiments results shown in~\\ref{nonlinear_dual_l1_svc_cv_results} and~\\ref{nonlinear_lagrangian_dual_l1_svc_cv_results} are obtained with \\emph{d} and \\emph{r} hyperparameters equal to 3 and 1 respectively for the \\emph{polynomial} kernel; \\emph{gamma} is setted to \\emph{`scale`} for both \\emph{polynomial} and \\emph{gaussian} kernels. The experiments results shown in~\\ref{nonlinear_lagrangian_dual_l1_svc_cv_results} are obtained with $\\alpha$, i.e., the \\emph{learning rate} or \\emph{step size}, setted to 1 for the \\emph{AdaGrad} algorithm. 
Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l1_svc_aug_lagrangian_dual} and~\eqref{eq:l1_svc_bcqp_aug_lagrangian_dual} respectively, with $\rho$ equal to 1. The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/nonlinear_dual_l1_svc}

\input{experiments/nonlinear_lagrangian_dual_l1_svc}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/lagrangian_dual_l1_svc_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the $\protect \mathcal{L}_1$-SVC}
	\label{fig:lagrangian_dual_l1_svc_loss_history}
\end{figure}

\pagebreak

\subsubsection{Squared hinge loss}

\paragraph{Primal formulation}

% https://arxiv.org/pdf/1008.4000.pdf
% https://hal.archives-ouvertes.fr/hal-00708243v1/file/appendixMNBCI.pdf
% https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6580579&casa_token=5p51fnC0_mgAAAAA:CEwEO0ZeqzWDMlS_QG2fsfk4L7xlwUr6LzLu-Gw-nYD9CPEAhMCIMeyDDOTh7iATszKN3V4rvQ

The experimental results shown in~\ref{primal_l2_svc_cv_results}, referring to the \emph{Stochastic Gradient Descent} algorithm, are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to $1/L$ where $\displaystyle L = \frac{C}{n} \sum_{i=1}^n \| x_i \|^2$, and $\beta$, i.e., the \emph{momentum}, equal to 0.5. The optimization process is stopped if after 5 iterations the function value does not improve by at least $1\mathrm{e}{-8}$.

\input{experiments/primal_l2_svc}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/l2_svc_loss_history}
	\caption{SGD convergence for the primal formulation of the $\protect \mathcal{L}_2$-SVC}
	\label{fig:l2_svc_loss_history}
\end{figure}

\pagebreak

\paragraph{Linear dual formulations}

The experimental results shown in~\ref{linear_lagrangian_dual_l2_svc_cv_results} are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l2_svc_aug_lagrangian_dual} and~\eqref{eq:l2_svc_lb_aug_lagrangian_dual} respectively, with $\rho$ equal to 1. The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/linear_lagrangian_dual_l2_svc}

\paragraph{Nonlinear dual formulations}

The experimental results shown in~\ref{nonlinear_lagrangian_dual_l2_svc_cv_results} are obtained with the \emph{d} and \emph{r} hyperparameters equal to 3 and 1 respectively for the \emph{polynomial} kernel; \emph{gamma} is set to \emph{`scale`} for both \emph{polynomial} and \emph{gaussian} kernels. The experimental results shown in~\ref{nonlinear_lagrangian_dual_l2_svc_cv_results} are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l2_svc_aug_lagrangian_dual} and~\eqref{eq:l2_svc_lb_aug_lagrangian_dual} respectively, with $\rho$ equal to 1.
The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/nonlinear_lagrangian_dual_l2_svc}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/lagrangian_dual_l2_svc_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the $\protect \mathcal{L}_2$-SVC}
	\label{fig:lagrangian_dual_l2_svc_loss_history}
\end{figure}

\pagebreak

\subsection{Support Vector Regression}

The experiments below concern the SVR, for which different values of the regularization hyperparameter $C$ were tested, i.e., from \emph{soft} to \emph{hard margin}, together with the $\epsilon$ penalty value and, in the case of nonlinearly separable data, also the different \emph{kernel functions} mentioned above.

The experiments about SVRs are available at: \\ \href{https://github.com/dmeoli/optiml/blob/master/notebooks/optimization/CM_SVR_report_experiments.ipynb}{\texttt{github.com/dmeoli/optiml/blob/master/notebooks/optimization/CM\_SVR\_report\_experiments.ipynb}}.

\subsubsection{Epsilon-insensitive loss}

\paragraph{Primal formulation}

% https://arxiv.org/pdf/1008.4000.pdf
% https://hal.archives-ouvertes.fr/hal-00708243v1/file/appendixMNBCI.pdf
% https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6580579&casa_token=5p51fnC0_mgAAAAA:CEwEO0ZeqzWDMlS_QG2fsfk4L7xlwUr6LzLu-Gw-nYD9CPEAhMCIMeyDDOTh7iATszKN3V4rvQ

The experimental results shown in~\ref{primal_l1_svr_cv_results}, referring to the \emph{Stochastic Gradient Descent} algorithm, are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to $1/L$ where $\displaystyle L = \frac{C}{n} \sum_{i=1}^n \| x_i \|^2$, and $\beta$, i.e., the \emph{momentum}, equal to 0.2. The optimization process is stopped if after 5 iterations the function value does not improve by at least $1\mathrm{e}{-8}$.

\input{experiments/primal_l1_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.5]{img/l1_svr_loss_history}
	\caption{SGD convergence for the primal formulation of the $\protect \mathcal{L}_1$-SVR}
	\label{fig:l1_svr_loss_history}
\end{figure}

\pagebreak

\paragraph{Linear dual formulations}

The experimental results shown in~\ref{linear_lagrangian_dual_l1_svr_cv_results} are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l1_svr_aug_lagrangian_dual} and~\eqref{eq:l1_svr_bcqp_aug_lagrangian_dual} respectively, with $\rho$ equal to 1.
The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/linear_dual_l1_svr}

\input{experiments/linear_lagrangian_dual_l1_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/linear_lagrangian_dual_l1_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the linear $\protect \mathcal{L}_1$-SVR}
	\label{fig:linear_lagrangian_dual_l1_svr_loss_history}
\end{figure}

\paragraph{Nonlinear dual formulations}

The experimental results shown in~\ref{nonlinear_dual_l1_svr_cv_results} and~\ref{nonlinear_lagrangian_dual_l1_svr_cv_results} are obtained with \emph{gamma} set to \emph{`scale`} for both \emph{gaussian} and \emph{laplacian} kernels. The experimental results shown in~\ref{nonlinear_lagrangian_dual_l1_svr_cv_results} are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l1_svr_aug_lagrangian_dual} and~\eqref{eq:l1_svr_bcqp_aug_lagrangian_dual} respectively, with $\rho$ equal to 1. The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/nonlinear_dual_l1_svr}

\input{experiments/nonlinear_lagrangian_dual_l1_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/laplacian_lagrangian_dual_l1_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the laplacian $\protect \mathcal{L}_1$-SVR}
	\label{fig:laplacian_lagrangian_dual_l1_svr_loss_history}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/gaussian_lagrangian_dual_l1_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the gaussian $\protect \mathcal{L}_1$-SVR}
	\label{fig:gaussian_lagrangian_dual_l1_svr_loss_history}
\end{figure}

\pagebreak

\subsubsection{Squared epsilon-insensitive loss}

\paragraph{Primal formulation}

% https://arxiv.org/pdf/1008.4000.pdf
% https://hal.archives-ouvertes.fr/hal-00708243v1/file/appendixMNBCI.pdf
% https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6580579&casa_token=5p51fnC0_mgAAAAA:CEwEO0ZeqzWDMlS_QG2fsfk4L7xlwUr6LzLu-Gw-nYD9CPEAhMCIMeyDDOTh7iATszKN3V4rvQ

The experimental results shown in~\ref{primal_l2_svr_cv_results}, referring to the \emph{Stochastic Gradient Descent} algorithm, are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to $1/L$ where $\displaystyle L = \frac{C}{n} \sum_{i=1}^n \| x_i \|^2$, and $\beta$, i.e., the \emph{momentum}, equal to 0.2.
The optimization process is stopped if after 5 iterations the function value does not improve by at least $1\mathrm{e}{-8}$.

\input{experiments/primal_l2_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.5]{img/l2_svr_loss_history}
	\caption{SGD convergence for the primal formulation of the $\protect \mathcal{L}_2$-SVR}
	\label{fig:l2_svr_loss_history}
\end{figure}

\pagebreak

\paragraph{Linear dual formulations}

The experimental results shown in~\ref{linear_lagrangian_dual_l2_svr_cv_results} are obtained with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l2_svr_aug_lagrangian_dual} and~\eqref{eq:l2_svr_lb_aug_lagrangian_dual} respectively, with $\rho$ equal to 1. The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/linear_lagrangian_dual_l2_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/linear_lagrangian_dual_l2_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the linear $\protect \mathcal{L}_2$-SVR}
	\label{fig:linear_lagrangian_dual_l2_svr_loss_history}
\end{figure}

\paragraph{Nonlinear dual formulations}

The experimental results shown in~\ref{nonlinear_lagrangian_dual_l2_svr_cv_results} are obtained with \emph{gamma} set to \emph{`scale`} for both \emph{gaussian} and \emph{laplacian} kernels, and with $\alpha$, i.e., the \emph{learning rate} or \emph{step size}, set to 1 for the \emph{AdaGrad} algorithm. Note that the \emph{unreg\_bias} and \emph{reg\_bias} duals refer to the \emph{Lagrangian dual} formulations~\eqref{eq:l2_svr_aug_lagrangian_dual} and~\eqref{eq:l2_svr_lb_aug_lagrangian_dual} respectively, with $\rho$ equal to 1.
The optimization process is stopped if the primal-dual weight vector does not change by at least $1\mathrm{e}{-6}$ between two consecutive iterations.

\input{experiments/nonlinear_lagrangian_dual_l2_svr}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/laplacian_lagrangian_dual_l2_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the laplacian $\protect \mathcal{L}_2$-SVR}
	\label{fig:laplacian_lagrangian_dual_l2_svr_loss_history}
\end{figure}

\begin{figure}[H]
	\centering
	\includegraphics[scale=0.55]{img/gaussian_lagrangian_dual_l2_svr_loss_history}
	\caption{AdaGrad convergence for the Lagrangian dual formulation of the gaussian $\protect \mathcal{L}_2$-SVR}
	\label{fig:gaussian_lagrangian_dual_l2_svr_loss_history}
\end{figure}
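
As a concrete illustration of the training setup repeated throughout these experiments, below is a minimal sketch of the primal $\mathcal{L}_1$-SVC objective minimized by gradient descent with momentum, with $\alpha = 1/L$ and the 5-iteration / $1\mathrm{e}{-8}$ stopping rule described above. It is a sketch only (full-batch instead of stochastic, bias term omitted) and not the optiml implementation; all names are placeholders.

\begin{verbatim}
import numpy as np

def train_l1_svc(X, y, C=1.0, beta=0.5, patience=5, tol=1e-8, max_iter=5000):
    # objective: 1/2 ||w||^2 + C/n * sum_i max(0, 1 - y_i <w, x_i>)
    n, d = X.shape
    L = C / n * np.sum(np.linalg.norm(X, axis=1) ** 2)
    alpha = 1.0 / L                        # step size, as in the report
    w, v = np.zeros(d), np.zeros(d)        # weights and momentum buffer
    best, stall = np.inf, 0
    for _ in range(max_iter):
        margins = 1.0 - y * (X @ w)
        active = margins > 0               # margin-violating examples
        obj = 0.5 * w @ w + C / n * margins[active].sum()
        if best - obj < tol:               # stop after `patience` iterations
            stall += 1                     # without sufficient improvement
            if stall >= patience:
                break
        else:
            best, stall = obj, 0
        grad = w - C / n * (y[active] @ X[active])   # subgradient
        v = beta * v - alpha * grad                  # momentum update
        w = w + v
    return w
\end{verbatim}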
{"text": "\\section{Decomposition Attack using Impossible Monomials}\n\\SecLabel{monoattack}\n\n\\newcommand\\func{f}\n\nIn this section I describe how large enough classes of impossible monomials can be used to mount a recovery attack on the last round's Feistel function.\n\nThe high level idea is the following. Consider a 5-round $2n$-bit Feistel Network with bijective Feistel functions, i.e. let $S^5 \\in \\bij{5}{n-1}$. Let $S^4 \\in \\bij{4}{n-1}$ be the Feistel Network consisting of the first 4 rounds of $S^5$ and let $\\func\\colon \\field{n} \\to \\field{n}$ be the Feistel function used in the last round.\nFrom Theorem~\\Ref{thm:low-bij}, for any $k$, any $(n,k)$-monomial is not present in the ANFs of the right output branch of $S^4$. However, in the following 5-th round the output of the last Feistel function is xored into this branch and becomes the left output branch of $S^5$. This result now may or may not contain the $(n,k)$-monomials. By observing the presence of such monomials in the ANFs of the left branch of $S^5$, we can deduce some information about the last Feistel function in the form of linear equations. If the number of impossible monomials is large enough, an equivalent of the Feistel function $f$ can be recovered. For an illustration see \\FigRef{impmono-explanation}, where the 5-th round of a Feistel Network with 3-bit branches is shown. $a_u$ denotes the ANF coefficient of a monomial that is impossible in the right branch of a 4-round Feistel Network.\n\n\\FigTex{impmono-explanation.tex}\n\nMore formally, observe that by the Feistel structure \n$$(\\LB \\circ S^5) \\oplus (f \\circ \\RB \\circ S^5) = \\RB \\circ S^4.$$\nConsider an arbitrary coordinate position $i, 1 \\le i \\le n$. For any monomial $x^u$ that is impossible in $\\RB \\circ S^4$ (e.g. any $(n,k)$-monomial),\n$$\n\\coef{u}{\\inprod{e_i, \\LB \\circ S^5}} \\oplus\n\\coef{u}{\\inprod{e_i, \\func \\circ \\RB \\circ S^5}} = \n0.\n$$\nBy decomposing $\\func$ through its ANF,\n$$\n\\coef{u}{\\inprod{e_i, \\LB \\circ S^5}} =\n\\bigoplus_{v \\in \\field{n}} \\coef{u}{\n    \\coef{v}{\\inprod{e_i, \\func}}\n    (\\RB \\circ S^5)^v}.\n$$\nSince $S^5$ is known, this can be considered as a linear equation on the unknown ANF coefficients of $\\inprod{e_i, \\func}$. In total, there are $2^n-1$ equations (from $2^n$ impossible $(n,k)$-monomials, except the $(n,n)$-monomial) and $2^n-1$ unknowns (the constant is excluded). More equations can be obtained by considering the other classes of impossible monomials from Theorem~\\Ref{thm:low-bij}. Therefore, it can be expected that the system will have (close to) full rank with high probability and the solution will be unique. 
The algorithm of the attack is given in Algorithm~\Ref{alg:recovery}; a toy sketch of the equation-system construction appears at the end of this section.

\begin{algorithm}
    \caption{Feistel Function Recovery Attack}
    \Label{alg:recovery}

    \begin{algorithmic}[1]
        \Require the full codebook of a function $S \in \nbij{r}{d}, S\colon \field{2n} \to \field{2n}$; a set $U \subseteq \field{2n}$.

        \Ensure a function $f\colon \field{n} \to \field{n}, \deg{f} \le d$ if one exists, such that
        \Statex for all $u \in U$ and for all $i,1 \le i \le n$,~ $\coef{u}{\inprod{e_i, \RB \circ R_f \circ S}} = 0$.

        \State $V \gets \pset{v \in \field{n} \mid 1 \le \wt(v) \le d}$

        \State $M \gets$ a $|U|\times|V|$ matrix indexed by $U$ and $V$
        \ForAll{$i \in \seg{1}{n}$}
            \State $b^i \gets$ a $|U|$-bit vector indexed by $U$
        \EndFor
        \ForAll{$u \in U$}
            \ForAll{$v \in V$}
                \State $M_{u,v} \gets \bigoplus_{x \preceq u}
                    (\RB \circ S(x))^v$
            \EndFor
            \ForAll{$i \in \seg{1}{n}$}
                \State $b^i_u \gets \bigoplus_{x \preceq u} \inprod{e_i, \LB \circ S(x)}$
            \EndFor
        \EndFor

        \ForAll{$i \in \{1, \ldots, n\}$}
            \State $a \gets$ a solution of $M\times a = b^i$
            \State $f_i \gets \pround{x \mapsto \oplus_{v \in V} a_v x^v}$
        \EndFor
        \State \Return $f = (f_1, \ldots, f_n)$
    \end{algorithmic}
\end{algorithm}

% ===============================================================

\subsection{On the Assumptions}

In the decomposition attack it is assumed that the equation system will have full or close-to-full rank. Then the correct Feistel function $f$ will be among the few solutions of the system.

One reason for a possible rank deficiency is that the low-degree monomials in the ANF of the Feistel function $f$ may not generate the high-degree 4-round impossible monomials. In that case, the equations generated from the high-degree 4-round impossible monomials provide no information about the low-degree monomials of $f$. In particular, Theorem~\Ref{thm:bij} proves that linear monomials of $f$ cannot generate $(n,n-1)$-monomials when composed with the right output branch of a 5-round Feistel Network.

To the best of my knowledge, there are no known ways to prove even the possibility of the presence of any highest-degree monomials in, for example, a 5-round Feistel Network. Indeed, in general, proving lower bounds on the algebraic degree is a very difficult problem.

% ===============================================================

\subsection{Instantiations}

The attack is not restricted to the case of a 5-round Feistel Network with bijective functions. The requirement is to have enough impossible monomials, which can be obtained from Theorems~\Ref{thm:low-nbij},\Ref{thm:low-bij} or by other analysis methods. In practice, a cryptanalyst can generate random instances of the analyzed structure and empirically determine all impossible monomial classes with high probability. This analysis will not dominate the complexity.

The described 5-round attack corresponds to the case of the type-II distinguisher. It exploits a large number of impossible monomials in a 4-round network, i.e. the one that has the type-I distinguisher.
I propose a conjecture on the generalization of this rule.

\begin{conjecture}
\Label{conj:impmono}
Let $r$ be the maximum number of rounds such that all $S \in \nbij{r}{d}$ have the type-II distinguisher. Then the impossible monomial attack succeeds with high probability on all $S \in \nbij{r}{d}$, i.e. it outputs a negligible number of candidates for the last Feistel function, and the correct one is always among them.
\end{conjecture}

\paragraph{Experiment.} I have implemented the attack in~Sage~\cite{sage} and performed a few experiments on small values of the branch size $n$. For all $n \in \pset{3,4,5,6,7}$ and $d \in \pset{2,3,n-1,n}$, I generated 100 random instances of $\nbij{r}{d}$ (and $\bij{r}{n-1}$) for the maximum $r$ such that the structure has the type-II distinguisher. Then, for the first $r-1$ rounds, I empirically evaluated all impossible monomial classes. Using these classes, I generated the equation system of the impossible monomial attack for each of the 100 instances. I computed the average rank of the system and the system's dimension, i.e. the number of unknowns. In addition, I verified that the actual last round Feistel function satisfies the equations. The results of the experiment are given in \TabRef{impmonorank}.

\FigTex{impmonorank.tex}

The results show that the rank is close to the maximum on average. This means that there are only a few solutions on average and the impossible monomial attack succeeds in the analyzed cases. The rank deficiency is larger for cases with $n=3$ and decreases quickly as $n$ grows. Furthermore, the results confirm the conjecture in the analyzed cases.

% ===============================================================

\subsection{Relation with Integral Attack from~\cite{LeoFeistel}}

An integral distinguisher was already used to mount a Feistel function recovery attack by~Biryukov~\etal{}~\cite{LeoFeistel}. They show that, for a 5-round Feistel Network, a 4-round integral distinguisher provides a linear equation on the \emph{values} of the last Feistel function.
The impossible monomial attack described in this section considers the same equation system but in the \emph{monomial basis}. That is, the unknown variables are the monomial coefficients in the ANFs of the coordinates of the Feistel function. Since the ANF coefficients can be computed by summing the function over particular sets, this is a change of basis (in other words, the \Mobius{} transform is linear).
The advantage of the monomial basis is that an upper bound on the degree of the Feistel functions can be used to decrease the number of unknowns.

\Todo{Implementation of full decomposition? maybe a subsection here. Experiments (how large n?)
Link to published source code.}
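
To make the equation-system construction of Algorithm~\Ref{alg:recovery} concrete, the following toy Python sketch (illustrative only, not the thesis implementation) builds $M$ and the vectors $b^i$ from a codebook given as a lookup table, using the fact that the ANF coefficient of $u$ in a Boolean function $g$ equals $\bigoplus_{x \preceq u} g(x)$. The resulting system can then be handed to any GF(2) linear solver; all names below are placeholders.

\begin{verbatim}
import itertools

n = 3                                  # branch size in bits (toy example)
MASK = (1 << n) - 1

def rb(y): return y & MASK             # right output branch
def lb(y): return (y >> n) & MASK      # left output branch

def subsets_below(u, width):
    # all x with x preceded by u, i.e. x AND u == x
    bits = [i for i in range(width) if (u >> i) & 1]
    for r in range(len(bits) + 1):
        for comb in itertools.combinations(bits, r):
            yield sum(1 << i for i in comb)

def build_system(S, U, V):
    # M[u][v] = XOR_{x <= u} (RB(S(x)))^v ; b[i][u] = XOR_{x <= u} bit_i(LB(S(x)))
    M = {u: {v: 0 for v in V} for u in U}
    b = {i: {u: 0 for u in U} for i in range(n)}
    for u in U:
        for x in subsets_below(u, 2 * n):
            r, l = rb(S[x]), lb(S[x])
            for v in V:
                M[u][v] ^= int((r & v) == v)   # monomial r^v over GF(2)
            for i in range(n):                 # coordinates indexed from 0
                b[i][u] ^= (l >> i) & 1
    return M, b
\end{verbatim}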
{"text": "\\documentclass{beamer}\n\\usetheme{Copenhagen}\n\n\\title{Introduction to Python}\n\\usepackage{listings}\n\\subtitle{Scientific Computing with Python}\n\\date{2nd December 2016}\n\n\\begin{document}\n\n\\lstset{language=Python}\n\n\\newcommand{\\snippet}{\\lstinline}\n\n\\begin{frame}\n  \\titlepage\n\\end{frame}\n\n\n\n\\section{Options}\n\n\n\\begin{frame}[fragile]{Choices choices}\n\\begin{itemize}\n\\item Python has {\\em a lot} of options for scientific computing\n\\item Today is a small selection.\n\\pause\n\\item Today: numpy, matplotlib, (no sympy - sorry)\n\\pause\n\\item Missed out: Pandas, SciPy, PyGSL, ScientificPython, GmPy\\ldots\n\\item Please \\bf{read the docs!}\n\\end{itemize}\n\\end{frame}\n\n\\section{numpy}\n\n\\begin{frame}[fragile]{Introduction to numpy}\n\\begin{itemize}\n\\item What is it?\n\\begin{block}{}\nNumpy is an extension to Python adding support for large, multi-dimensional arrays and matrices, along with \na large library of high-level mathematical functions to operate on these arrays.\n\\end{block}\n\\pause\n\\item More importantly - numpy is {\\bf fast}.\n\\item numpy is written in C and so is as fast as e.g. matlab.\n\\pause\n\\item Designed for ease of use with matlab - similar functions and can accept .mat files.\n\\end{itemize}\n\\end{frame}\n\n\\subsection{numpy Classes and Functions}\n\\begin{frame}[fragile]{ndarray}\n\\begin{itemize}\n\\item The basic building block is the \\snippet{ndarray} type.\n\\pause\n\\item Multidimensional array. Can use it just like a vector, matrix or higher dimensional analogues.\n\\pause\n\\item Don't use the \\snippet{matrix} class as it will give you odd bugs!\n\\pause\n\\item Arrays have a \\snippet{shape} field that returns a vector of dimensions.\n\\begin{block}{}\n\\begin{lstlisting}{frame=single}\nimport numpy as np\nx = np.random.rand(3) # x is 1d array \nprint x.shape # 3\n\\end{lstlisting}\n\\end{block}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]{ndarray}\n\\begin{itemize}\n\\item Many similar functions to matlab.\n\\begin{block}{}\n\\begin{lstlisting}{frame=single}\nimport numpy as np\nx = np.random.rand(3) # shape is 3\ny = np.eye((3,3)) # identity matrix\nz = np.zeros((4,4,4)) # note two sets of brackets\n\\end{lstlisting}\n\\end{block}\n\\end{itemize}\n\\end{frame}\n\n\\begin{frame}[fragile]{ndarray}\n\\begin{itemize}\n\\item All Python arithmetic operators are {\\em elementwise}.\n\\item i.e. \\snippet{*},\\snippet{+},\\snippet{**} etc.\n\\pause\n\\item Use \\snippet{numpy.dot} for matrix-type multiplication.\n\\pause\n\\item If shapes don't match well then numpy tries to broadcast - read the docs!\n\\pause\n\\item You can also change the shape with \\snippet{.reshape}.\n\\pause\n\\item Demo!\n\\end{itemize}\n\\end{frame}\n\n\n\n\\begin{frame}[fragile]{ndarray}\n\\begin{itemize}\n\\item Remember that numpy is still just a Python library.\n\\pause\n\\item Everything you know still applies \n\\pause\n\\item e.g. 
slicing - slightly nicer syntax for numpy
\begin{block}{}
\begin{lstlisting}{frame=single}
import numpy as np
x = np.arange(16).reshape(4,4)
y = x[3,:] # y is the 4th row of x
\end{lstlisting}
\end{block}
\end{itemize}
\end{frame}


\begin{frame}[fragile]{Statistics Functionality}
\begin{itemize}
\item numpy is smart - if you find yourself writing a for loop, always try to rewrite it in terms of numpy functions.
\pause
\item Numpy is designed so that everything that works in one dimension should work in any dimension.
\item A lot of basic stats is built in: mean, standard deviation etc.
\pause
\item Example: linear regression
\end{itemize}
\end{frame}

\section{Matplotlib}
\begin{frame}[fragile]{What is matplotlib}
\begin{itemize}
\item What is it?
\begin{block}{}
matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
\end{block}
\pause
\item Makes things very simple!
\pause
\item Line plots, scatter plots, histograms, contour plots, 3d plots etc.
\pause
\item Integrates well with numpy
\end{itemize}
\end{frame}

\begin{frame}[fragile]{Basic idea}
\begin{itemize}
\item There is always a 'current figure' and 'current plot'.
\item You build up the plot as you go before using \snippet{.show()} or \snippet{.savefig("filename")}.
\pause
\item Can have subplots, labelled axes etc.
\pause
\item Best seen through example!
\end{itemize}
\end{frame}

\end{document}
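
For reference, here is a sketch of the kind of linear-regression / matplotlib demo the slides point to (the original demo files are not part of this deck; the data and numbers below are made up):

\begin{lstlisting}{frame=single}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.randn(50)       # noisy line

slope, intercept = np.polyfit(x, y, 1)  # least-squares fit, degree 1

plt.scatter(x, y, label="data")
plt.plot(x, slope * x + intercept, "r-", label="fit")
plt.xlabel("x"); plt.ylabel("y")
plt.legend()
plt.savefig("regression.png")           # or plt.show()
\end{lstlisting}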
{"text": "\\section{Projection mapping}\\label{sec:projection-mapping}\n\nProjection mapping is a spatial augmented reality \\cite{Bimber2005} technique in which video projectors are used to overlay virtual geometry on top of real objects or surfaces. This allows to create an immersive environment that together with 3D perception systems can be used to develop interactive interfaces that show contextual information for helping or teaching human operators performing complex tasks faster. The next sections describe the mathematical projector modeling and the associated calibration that is necessary for performing proper 3D rendering of the virtual world in order to achieve high accuracy projection.\n\n\n\\subsection{Projector modeling}\n\nOver the years, several projection technologies were developed according to the requirements of color fidelity / saturation, image sharpness, brightness, contrast, refresh rate and price. Currently, the video projection market is split between reflective \\gls{dlp} and transmissive \\gls{lcd} projection technology, with a small percentage of projectors consisting of a hybrid between the two technologies (\\gls{lcos}).\n\nFor video projection mapping purposes \\cite{Raskar1998,Bimber2005,Tan2013,Fujimoto2014}, reflective projectors are better suited than the remaining technologies given their ability to create images with smaller gaps between the projected pixels (smoother images) and they also have higher contrast, better color accuracy / uniformity, much fewer dead pixels and the image quality does not degrade over time. The main stages of the image creation in a \\gls{dlp} projector are show in \\cref{fig:dlp-projector-diagram-dmd}. The first phase is the generation of light from either a lamp or a \\gls{led} / laser array, which is later on condensed on a lens in order to pass through a moving color wheel to become one of the 3 primary additive colors (red, green, blue). The colored light then passes through a shaping lens and hits a \\gls{dmd} which has an electronic controllable mirror for each projection pixel that either reflects the light into the projection lens or into a heat sink. Color shading is achieved by controlling how long and how often the micro mirrors in the \\gls{dmd} are reflecting each light color into the lens (or into the heat sink).\n\nThe mathematical modeling of a \\gls{dlp} projector can be seen as an inverse pinhole camera \\cite{Hartley2003} (shown in \\cref{fig:camera-intrinsics}), given the grid disposition of the mirrors in the \\gls{dmd} and the very low distortion that modern \\gls{dlp} projectors have. As such, scene rendering can be performed efficiently using the \\gls{opengl} projection matrix (shown in \\cref{eq:projection-matrix,eq:ndc-matrix,eq:perspective-matrix}), which incorporates the focal lengths (Fx, Fy), principal point (Cx, Cy) and axis skew (S) intrinsic camera parameters (in pixel units). 
The correction of lens distortion for \gls{dlp} projectors typically uses 3 coefficients to remove radial distortions and 2 coefficients to account for tangential distortions.


\begin{figure}[H]
	\begin{floatrow}[2]
		\ffigbox[\FBwidth]
		{\includegraphics[height=.19\textheight]{dlp-projector-diagram-dmd}}
		{\caption[Single chip \glsentrytext{dlp} diagram]{Single chip \glsentrytext{dlp} diagram\protect\footnotemark}\label{fig:dlp-projector-diagram-dmd}}
		\ffigbox[\FBwidth]
		{\includegraphics[height=.19\textheight]{camera-intrinsics}}
		{\caption[Pinhole camera model]{Pinhole camera model\protect\footnotemark}\label{fig:camera-intrinsics}}
	\end{floatrow}
\end{figure}
\footnotetext[\the\numexpr\value{footnote}-1\relax]{\url{https://vimeo.com/blog/post/display-tech-home-projectors}}
\footnotetext[\value{footnote}]{\url{http://perso.ensta-paristech.fr/~filliat/Courses/2011_projets_C10-2/BRUNEAU_DUBRAY_MURGUET/monoSLAM_bruneau_dubray_murguet_en.html}}


{
	\scriptsize
	\begin{equation}\label{eq:projection-matrix}
	ProjectionMatrix = \glsentrytext{ndc}Matrix \times PerspectiveMatrix
	\end{equation}

	\begin{equation}\label{eq:ndc-matrix}
	NDCMatrix =
	\begin{bmatrix}
	\frac{2}{ImageWidth} & 0 & 0 & -1 \\
	0 & \frac{2}{ImageHeight} & 0 & -1 \\
	0 & 0 & \frac{-2}{ClipFar - ClipNear} & \frac{-(ClipFar + ClipNear)}{ClipFar - ClipNear} \\
	0 & 0 & 0 & 1
	\end{bmatrix}
	\end{equation}


	\begin{equation}\label{eq:perspective-matrix}
	PerspectiveMatrix =
	\begin{bmatrix}
	Fx & S & -Cx & 0 \\
	0 & Fy & -Cy & 0 \\
	0 & 0 & ClipNear + ClipFar & ClipNear \times ClipFar \\
	0 & 0 & -1 & 0
	\end{bmatrix}
	\end{equation}
}


\subsection{Projector calibration}

High-accuracy projection mapping requires proper hardware / software calibration of the camera / projector and also appropriate positioning within the intended workspace in order to avoid occlusions caused by the objects' 3D shape or the human operators. This calibration aims to compute the intrinsic parameters of the projector (which do not change when the projector is moved within the workspace) along with the extrinsic parameters that are needed to know where the projector is in the global reference frame, in order to be able to do proper 3D rendering of the scene that will be projected.

The intrinsic parameters of a \gls{dlp} projector can be computed using image analysis of complementary gray code patterns (example in \cref{fig:dlp-calibration-pattern-wall}) projected onto a chessboard. The calibration system proposed in \cite{Moreno2012} was used to retrieve the 5 intrinsic parameters (Fx, Fy, Cx, Cy, S) of the projector along with the 3D position and rotation of the projector in relation to the camera (which remains firmly attached to the projector support for fast recalibration of the extrinsic parameters). Five sets of 42 gray code image patterns were used, captured with the chessboard in different positions and orientations in relation to the projector, which was pointing at the table workspace at a distance of 0.81 meters. After calibration, a validation pattern was projected to evaluate the accuracy of the projection and, as can be seen in \cref{fig:dlp-projected-chessboard}, the white squares were projected onto the chessboard with sub-millimeter accuracy.
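
To make the rendering model concrete, below is a minimal Python sketch (illustrative, not the thesis implementation) that assembles the \gls{opengl} projection matrix of \cref{eq:projection-matrix,eq:ndc-matrix,eq:perspective-matrix} from the calibrated intrinsics; the numeric values in the usage example are made-up placeholders.

\begin{verbatim}
import numpy as np

def opengl_projection(fx, fy, cx, cy, s, width, height, near, far):
    # NDC matrix of Eq. (ndc-matrix)
    ndc = np.array([
        [2.0 / width, 0.0,          0.0,                 -1.0],
        [0.0,         2.0 / height, 0.0,                 -1.0],
        [0.0,         0.0,          -2.0 / (far - near),
                                    -(far + near) / (far - near)],
        [0.0,         0.0,          0.0,                  1.0]])
    # perspective matrix of Eq. (perspective-matrix)
    persp = np.array([
        [fx,   s,    -cx,         0.0],
        [0.0,  fy,   -cy,         0.0],
        [0.0,  0.0,  near + far,  near * far],
        [0.0,  0.0,  -1.0,        0.0]])
    return ndc @ persp

# e.g. a 1280x800 projector with calibrated intrinsics (placeholder values):
P = opengl_projection(fx=1500.0, fy=1500.0, cx=640.0, cy=400.0, s=0.0,
                      width=1280, height=800, near=0.1, far=10.0)
\end{verbatim}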
\begin{figure}[H]
	\begin{floatrow}[2]
		\ffigbox[\FBwidth]
		{\includegraphics[height=.205\textheight]{dlp-calibration-pattern-wall}}
		{\caption{One of the \glsentrytext{dlp} projector calibration patterns}\label{fig:dlp-calibration-pattern-wall}}
		\ffigbox[\FBwidth]
		{\includegraphics[height=.205\textheight]{chessboard}}
		{\caption{\glsentrytext{dlp} projector validation pattern}\label{fig:dlp-projected-chessboard}}
	\end{floatrow}
\end{figure}

Once the intrinsic parameters of the camera and projector are known, along with the relative position of the projector with respect to the camera, the global position of the projector in relation to the chessboard reference frame can be computed by multiplying the $4 \times 4$ homogeneous matrix that gives the transformation from the chessboard origin (shown in \cref{fig:chess-board-detection}) to the camera frame with the $4 \times 4$ homogeneous matrix that gives the transformation from the camera frame to the projector frame.

\begin{figure}[H]
	\centering
	\includegraphics[width=0.47\linewidth]{chess-board-detection}
	\caption{Camera pose estimation in relation to the chessboard origin (using the Kinect 2 RGB camera)}
	\label{fig:chess-board-detection}
\end{figure}


\subsection{Scene rendering}

To achieve accurate projection mapping, the Gazebo simulator\footnote{\url{http://gazebosim.org}} camera implementation was improved to allow the setting of a custom projection matrix, in order to perform 3D rendering with a camera model that takes into account the projector intrinsic parameters. Moreover, the ability to dynamically change images, videos and text at runtime was added, allowing the display of the relevant information for each assembly step.
For efficient 3D scene rendering, the Gazebo simulator relies on the cross platform open source Ogre3D graphics engine\footnote{\url{http://www.ogre3d.org}}, which in turn uses the \gls{opengl} \gls{gpu} \gls{api} to take advantage of the massively parallel graphics cards currently available to generate raster images for the \gls{dlp} projector (an example of a rendered scene for the last assembly step is shown in \cref{fig:scene-rendering}).

As a user interface, the Gazebo simulator has a Qt\footnote{\url{https://www.qt.io/}} \gls{gui} that allows visual inspection of the scene while also giving the option to add new objects or move and rotate existing models. Moreover, for lightweight rendering, it can also start in server mode without a \gls{gui}.

\begin{figure}
	\centering
	\includegraphics[width=0.8\linewidth]{scene-rendering}
	\caption{3D scene rendering using the Gazebo simulator}
	\label{fig:scene-rendering}
\end{figure}



\section{Human machine interaction}\label{sec:human-machine-interaction}

The immersive \gls{hmi} system developed (shown in \cref{fig:scene-rendering,fig:human-machine-interface}) projects into the workspace detailed textual information about the current assembly task, along with a video showing the operation being performed by an expert operator. Given the high variability of assembly / maintenance operations, the system was designed to decompose the assembly process into a set of small and concise operations.
This keeps the operator focused on the current task and reduces the required projection area. Moreover, the operator can pause and move the video forwards and backwards, allowing a given complex operation to be inspected with more time.

The user interaction with the projected \gls{hmi} is done by analyzing the 3D point cloud sensor data that falls within a set of \glspl{roi} (shown in \cref{fig:interaction-rois}). When a minimum number of points falls within a given \gls{roi}, the centroid of the point cluster is extracted (shown as spheres in \cref{fig:interaction-rois}) and, if this \gls{roi} is associated with a button, the user needs to hold the finger there for at least 0.25 seconds to trigger the action. Moreover, to avoid unintentional action triggering, the user needs to remove and reinsert the finger into the \gls{roi} to request the action again. On the other hand, if the \gls{roi} is associated with the video seek bar (the vertical yellow box in \cref{fig:interaction-rois}), the video seek time is computed considering the relative position of the finger within the \gls{roi} (the bottom of the \gls{roi} is associated with the start of the current video while the top corresponds to the end of the current video).

Besides video play / pause functionality (provided by the blue middle \gls{roi} shown in \cref{fig:interaction-rois}), the \gls{hmi} also allows the operator to navigate within the assembly operations (using the green \glspl{roi} visible in \cref{fig:interaction-rois}). As can be seen in \cref{fig:human-machine-interface,fig:interaction-rois}, there are buttons for moving to the first, previous, next and last assembly step. Moreover, the number of the current assembly step and the total number of steps required to complete the assembly are also shown.

\begin{figure}[H]
	\begin{floatrow}[2]
		\ffigbox[\FBwidth]
		{\includegraphics[height=.35\textheight]{human-machine-interface}}
		{\caption{Rendering of the human machine interface}\label{fig:human-machine-interface}}
		\ffigbox[\FBwidth]
		{\includegraphics[height=.35\textheight]{interaction-rois}}
		{\caption{\glspl{roi} for the \gls{hmi} (overlaid on top of the Kinect 2 point cloud sensor data)}\label{fig:interaction-rois}}
	\end{floatrow}
\end{figure}


\section{Object 3D reconstruction}\label{sec:object-reconstruction}

The last step of the assembly process is a visual inspection of the final product by the operator in order to ensure that every part was mounted correctly. To speed up this analysis, the projection mapping system projects into the workspace the expected product outline (generated by colorizing the mesh using the surface curvature information). To be able to perform proper 3D rendering and also to detect and track the object within the workspace, the 3D \gls{cad} model of the final product is required. Given the lack of publicly available \gls{cad} models for the Mitsubishi M000T20873 starter motor (shown in \cref{fig:starter-motor}), it was necessary to perform object 3D reconstruction. The 3D mesh model shown in \cref{fig:object-reconstruction} was generated using the David Laser 3D structured light system\footnote{\url{http://www.david-3d.com}}, and was built by surface matching algorithms using sensor data retrieved from 38 scans in which the starter motor was moved and rotated several times in order to capture enough sensor data to be able to reconstruct the entire surface.
This particular object created some challenges for the structured light scanner, given that it has polished metallic sections and also black coated regions. As such, it was necessary to capture the same object sections several times with different projector brightness and camera exposure times (the dark regions required the maximum projector brightness and very high exposure times, while the polished sections required dimmer projector brightness and very short exposure times in order for the surface to be fully reconstructed).

\begin{figure}[H]
	\begin{floatrow}[2]
		\ffigbox[\FBwidth]
		{\includegraphics[height=.136\textheight]{mitsubishi-m000t20873-front}\includegraphics[height=.136\textheight]{mitsubishi-m000t20873-back}}
		{\caption{Mitsubishi M000T20873 starter motor}\label{fig:starter-motor}}
		\ffigbox[\FBwidth]
		{\includegraphics[height=.136\textheight]{object-reconstruction-front}\includegraphics[height=.136\textheight]{object-reconstruction-back}}
		{\caption{3D model of the starter motor reconstructed using a structured light scanner}\label{fig:object-reconstruction}}
	\end{floatrow}
\end{figure}



\section{Object recognition}\label{sec:object-recognition}

Robust and accurate object recognition and pose estimation is a requirement when developing projection mapping systems with dynamic objects. This is one of the most challenging perception tasks, and it has received a lot of research over the years, both at the hardware and at the software level. The next sections describe the main processing stages of the robot localization system (described in \cite{Costa2016}) that was reconfigured and improved to perform 3D object recognition.


\subsection{Reference models}

The first step in any perception system is the definition of the geometry of the object that we intend to recognize. As such, given a \gls{cad} / mesh model, the object recognition system generates the respective point cloud and the associated keypoints and feature descriptors.


\subsection{Point cloud assembly}

The object recognition system can use any sensor that provides point clouds, namely RGB-D / \gls{tof} cameras, \glspl{lidar} and stereo vision systems, among many others. Each of these types of sensors has very different operation rates and measurement accuracy. As such, the recognition system allows the assembly of several sensor scans using a circular buffer in order to reduce the impact of sensor noise and increase the observed surface area of the objects.


\subsection{Filtering and down sampling}

The time it takes to perform cloud registration increases considerably as the number of points in the sensor point cloud and in the reference model becomes larger. As such, adjusting the level of detail of the point clouds by using voxel grids and random sampling gives some control over the desired pose estimation accuracy and the computational resources that will be required. Moreover, when we know the expected workspace, we can crop the sensor data to a given \gls{roi} (such as a given volume above the working table), which removes any unnecessary environment clutter, resulting in a preliminary segmentation of the sensor data that will later be analyzed for object recognition. This stage is also useful to mitigate the measurement errors of the depth sensors, since the centroid of a voxel that contains points from several scans will be closer to the real surface (if the voxels have dimensions slightly larger than the expected measurement errors). A minimal sketch of both operations is given below.
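
The following numpy sketch illustrates the \gls{roi} cropping and voxel-grid downsampling just described (illustrative only, not the thesis implementation; all numeric values are placeholders): points falling in the same voxel are replaced by their centroid.

\begin{verbatim}
import numpy as np

def crop_roi(points, lo, hi):
    # keep only the points inside the axis-aligned box [lo, hi]
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

def voxel_downsample(points, voxel_size):
    # replace all points falling in the same voxel by their centroid
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)     # sum the points per voxel
    return centroids / counts[:, None]        # centroid = per-voxel mean

cloud = np.random.rand(10000, 3)              # stand-in for sensor data
roi = crop_roi(cloud, np.array([0.2, 0.2, 0.0]), np.array([0.8, 0.8, 0.5]))
downsampled = voxel_downsample(roi, voxel_size=0.01)
\end{verbatim}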
\subsection{Normal estimation}

Most feature detection, description and matching algorithms, along with some registration methods, rely on each point's surface normal and curvature. These algorithms analyze the neighborhood of a given point in order to compute the surface normal, and as such, the correct specification of which points should be included in the estimation is crucial to achieve accurate results. This depends on the environment geometry and the level of detail that is required, and is usually done by specifying a radius distance or by limiting the number of neighboring points to use.


\subsection{Keypoint detection}

Aligning two point clouds with overlapping views of the environment requires the establishment of point correspondences. If both point clouds have similar sensor origins, these can be determined with nearest neighbor searches and filtered with correspondence rejectors (using other point properties such as reflectance and color). But if they were acquired in two very different positions, then more advanced techniques must be employed.

One of those techniques uses histograms to describe the geometric properties of the environment around a given point. This allows points to be matched even if they have completely different Euclidean coordinates. Also, by using histograms and sampling techniques, these descriptors are much more robust against sensor noise and varying point density. However, these advantages come with a heavy computational cost, and as such, point descriptors should only be computed on the most descriptive areas of the environment.

Identifying these environment points is known as feature / keypoint detection \cite{Filipe2014}, and usually involves finding interesting points, such as corners and edges. Besides uniqueness, these points must also be repeatable. This means that the detection algorithms should be able to find the same points even if they are present in different point clouds with sensor noise and varying point density. This is of the utmost importance, because if the same keypoints are not identified on both clouds, then matching the point descriptors will likely fail.

The object recognition system used the \gls{sift} \cite{Lowe2004} algorithm on the points' curvature, but it also supports the \gls{iss3d} \cite{Zhong2009} keypoint detector on the points' normals.


\subsection{Keypoint description}

Describing a keypoint usually involves analyzing its neighboring points and computing a given metric or histogram that quantifies the neighbors' relative spatial distribution, the angular relations of their normals, the associated geometry or other metrics that are deemed useful.
\n\nThe object recognition system used the \gls{fpfh} \cite{Rusu2009} keypoint descriptor, but it also supports the \gls{pfh} \cite{Rusu2008a}, the \gls{shot} \cite{Tombari2011}, the \gls{sc3d} \cite{Frome2004}, the \gls{usc} \cite{Tombari2010} and the \gls{esf} \cite{Wohlkinger2011}.\n\n\n\subsection{Cloud registration}\n\nPoint cloud registration is the process of finding the transformation matrix (usually translation and rotation only) that, when applied to a given ambient cloud, will minimize an error metric (such as the mean square error of the ambient point cloud in relation to a given reference point cloud). Several approaches were suggested over the years and they can be categorized as point or feature cloud registration.\n\n\n\subsubsection{Initial alignment with keypoint descriptor matching}\label{subsec:localization-system_feature-registration}\n\nFeature registration is the process of matching keypoint descriptors in order to find an initial alignment between two point clouds. The object recognition system uses a feature registration method similar to the \gls{sacia} algorithm presented in \cite{Rusu2009}. It uses a \gls{ransac} approach to select the best registration transformation after a given number of iterations. In each iteration, a subsample of randomly selected descriptors from the ambient cloud is retrieved. Then, for each of these descriptors, the $k$ best matching descriptors in the reference point cloud are searched (using a kd-tree) and one of them is chosen at random. Then, after filtering these correspondences between ambient and reference descriptors, the registration matrix is computed. If this registration matrix results in a point cloud overlap that has a minimum percentage of inliers (a point in the ambient cloud is an inlier if it has a point in the reference cloud closer than a given distance), then it is considered an acceptable initial pose. At the end of all iterations, the best initial pose (if found) is refined with a point cloud registration algorithm.\n\n\n\subsubsection{Final alignment with point cloud error minimization}\n\nPoint cloud registration algorithms such as \gls{icp} \cite{Besl1992} (with its several known variations \cite{Rusinkiewicz2001,Pomerleau2013} like \gls{icp} point-to-point, \gls{icp} point-to-point non-linear, \gls{icp} point-to-plane and generalized \gls{icp} \cite{Segal2009}) and the \gls{ndt} \cite{Magnusson2009} are among the most popular algorithms to register point clouds. They can achieve very accurate cloud registration but they require an approximate initial pose for the registration to successfully converge to a correct solution (otherwise they may achieve only partial cloud overlap or even fail to converge to a valid solution).\n\nThe estimation of the pose of the starter motor is shown in \cref{fig:initial-pose-estimation}. When the system started (left image), it subsampled the reference point cloud (small green circles) and computed the keypoints (large yellow circles) and associated feature descriptors. Then, when sensor data arrived, it found the keypoints (blue circles) and their descriptors. Finally, it applied the feature matching and registration refinement and achieved the pose estimation shown on the right image of \cref{fig:initial-pose-estimation} (observe the overlap between the sensor data and the reference point cloud).
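\n\nBoth the \gls{ransac} hypothesis step and the \gls{icp} inner loop ultimately estimate a rigid transformation from a set of matched point pairs. Below is a minimal numpy sketch of that core computation using the well-known SVD (Kabsch) method; it is a standalone illustration, not the system's implementation:\n\n\begin{verbatim}\nimport numpy as np\n\ndef rigid_transform(src, dst):\n    # Least-squares rotation R and translation t mapping the\n    # matched source points onto the destination points.\n    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)\n    H = (src - c_src).T @ (dst - c_dst)\n    U, _, Vt = np.linalg.svd(H)\n    R = Vt.T @ U.T\n    if np.linalg.det(R) < 0:  # guard against reflections\n        Vt[-1] *= -1\n        R = Vt.T @ U.T\n    t = c_dst - R @ c_src\n    return R, t\n\end{verbatim}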
\n\nAfter the successful initial pose estimation phase, the recognition system enters tracking mode and relies only on point cloud matching algorithms (given their better accuracy and lower computational cost). If the tracking is lost, the feature matching algorithms are used again to find a plausible estimation of the object position that is then refined with the point cloud registration algorithms.\n\n%\begin{figure}[!ht]\n%\t\centering\n%\t\includegraphics[height=.27\textheight]{initial-pose-estimation-1-before}\n%\t\hspace{0.5em}\n%\t\includegraphics[height=.27\textheight]{initial-pose-estimation-1-after}\n%\t\caption{Initial pose estimation of assembled object}\n%\end{figure}\n\n\begin{figure}[!ht]\n\t\centering\n\t\includegraphics[height=.182\textheight]{initial-pose-estimation-2-before}\n\t\hspace{0.5em}\n\t\includegraphics[height=.182\textheight]{initial-pose-estimation-2-after}\n\t\caption{Initial pose estimation of assembled starter motor}\n\t\label{fig:initial-pose-estimation}\n\end{figure}\n\n\n\subsection{Outlier detection}\n\nDetecting which points from the sensor data are not part of the reference point cloud can be very useful to evaluate the quality of point cloud registration as well as to analyze if the object recognition was successful (if a high number of points were considered outliers then the object recognition is likely to have failed). Its computation splits the ambient cloud into two point sets: one containing the inliers (points correctly registered and present in the reference point cloud) and the other containing the outliers (points that are either incorrectly registered or not present in the reference cloud). A given ambient point can be classified as an outlier if the corresponding closest point in the reference cloud is farther away than a given distance threshold. These operations can be done efficiently by using the reference point cloud's kd-tree.
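\n\nA compact sketch of this classification using scipy (a standalone illustration with an assumed distance threshold, not the actual implementation):\n\n\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\ndef split_inliers_outliers(ambient, reference, max_dist=0.005):\n    # Distance from each ambient point to its closest reference\n    # point, computed with a kd-tree over the reference cloud.\n    dist, _ = cKDTree(reference).query(ambient)\n    inlier_mask = dist <= max_dist\n    return ambient[inlier_mask], ambient[~inlier_mask]\n\end{verbatim}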
\n", "meta": {"hexsha": "4a571db5376fb2f958d2a00ba0405f78d5b9108e", "size": 24369, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/sections/implementation.tex", "max_stars_repo_name": "carlosmccosta/assembly_projection_mapping_teaching_article", "max_stars_repo_head_hexsha": "5c6060919d8e8856f29e302f839b3571da82e68a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-08-17T09:51:21.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-17T09:51:21.000Z", "max_issues_repo_path": "tex/sections/implementation.tex", "max_issues_repo_name": "carlosmccosta/assembly_projection_mapping_teaching_article", "max_issues_repo_head_hexsha": "5c6060919d8e8856f29e302f839b3571da82e68a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tex/sections/implementation.tex", "max_forks_repo_name": "carlosmccosta/assembly_projection_mapping_teaching_article", "max_forks_repo_head_hexsha": "5c6060919d8e8856f29e302f839b3571da82e68a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 111.7844036697, "max_line_length": 1626, "alphanum_fraction": 0.8026591161, "num_tokens": 5528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920068519376, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5519214925277818}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[fleqn]{amsmath}\n\\usepackage{textcomp}\n\\usepackage{gensymb}\n\\usepackage{enumitem}\n%\\usepackage{tikz}  % Include for figures.\n%\\usepackage{subfiles}  % Include for subfiles.\n\n\\newcommand{\\HOMEWORKNUM}{3}\n\\newcommand{\\NAME}{D. Choi}\n\\newcommand{\\DATE}{2020-05-27}\n\n\\title{\\vspace{-4\\baselineskip}MATH 225 - Homework \\#\\HOMEWORKNUM}\n\\author{\\NAME}\n\\date{\\DATE}\n\n%\\pagenumbering{gobble}  % Include for single-page document.\n\n\\begin{document}\n\\maketitle\n\n\\section*{1.}\n\\textit{Find the $2 \\times 2$ matrix $M$ that is associated with the\northographic projection onto ${y = -\\sqrt{3}x}$ in 2 ways.}\n% Hey wow, you can put {braces} around a math expression to make it an atom,\n% preventing line breaks from sneaking in! (Shouldn't matter here, though.)\n\\begin{enumerate}[label=(\\alph*)]\n\t\\item \\textit{``Hard geometry, easy algebra.''}\n\t\\\\[\\baselineskip]\n\tNote that $y = -\\sqrt{3}x = \\tan(120 \\degree)x$. \\\\\n\tLet $\\hat u(\\theta) =\n\t\\begin{pmatrix} \\cos \\theta \\\\ \\sin \\theta \\end{pmatrix}$.\n\t\\begin{gather*}\n\t\t\\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}\n\t\t\\overset{M}{\\longrightarrow}\n\t\t\\text{proj}_{\\hat u(120 \\degree)} \\hat u(0 \\degree)\n\t\t=\n\t\t\\cos(120 \\degree) \\cdot \\hat u(120 \\degree)\n\t\t=\n\t\t\\begin{pmatrix} \\frac{1}{4} \\\\ -\\frac{\\sqrt{3}}{4} \\end{pmatrix}\n\t\t\\\\\n\t\t\\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}\n\t\t\\overset{M}{\\longrightarrow}\n\t\t\\text{proj}_{\\hat u(120 \\degree)} \\hat u(90 \\degree)\n\t\t=\n\t\t\\cos(30 \\degree) \\cdot \\hat u(120 \\degree)\n\t\t=\n\t\t\\begin{pmatrix} -\\frac{\\sqrt{3}}{4} \\\\ \\frac{3}{4} \\end{pmatrix}\n\t\\end{gather*}\n\tThus,\n\t\\begin{equation*}\n\t\tM =\n\t\t\\boxed{\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\frac{1}{4} & -\\frac{\\sqrt{3}}{4} \\\\\n\t\t\t\t-\\frac{\\sqrt{3}}{4} & \\frac{3}{4}\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\\end{equation*}\n\t\\item \\textit{``Easy geometry, hard algebra.''}\n\t\\begin{gather*}\n\t\t\\begin{pmatrix} \\sqrt{3} \\\\ 1 \\end{pmatrix}\n\t\t=\n\t\t2 \\cdot \\hat u(30 \\degree)\n\t\t\\overset{M}{\\longrightarrow}\n\t\t\\begin{pmatrix} 0 \\\\ 0 \\end{pmatrix}\n\t\t\\\\\n\t\t\\begin{pmatrix} 1 \\\\ -\\sqrt{3} \\end{pmatrix}\n\t\t=\n\t\t-2 \\cdot \\hat u(120 \\degree)\n\t\t\\overset{M}{\\longrightarrow}\n\t\t\\begin{pmatrix} 1 \\\\ -\\sqrt{3} \\end{pmatrix}\n\t\\end{gather*}\n\t\\newpage\n\tLet $M = \\begin{pmatrix} a & b \\\\ c & d \\end{pmatrix}$.\n\t\\begin{gather*}\n\t\t\\sqrt{3}a + b = 0 \\\\\n\t\ta - \\sqrt{3}b = 1 \\\\\n\t\t\\sqrt{3}c + d = 0 \\\\\n\t\tc - \\sqrt{3}d = -\\sqrt{3}\n\t\\end{gather*}\n\t$a = \\frac{1}{4}, b = -\\frac{3}{4}, c = -\\frac{3}{4}, d = \\frac{3}{4}$\n\t\\\\[\\baselineskip]\n\tThus,\n\t\\begin{equation*}\n\t\tM =\n\t\t\\boxed{\n\t\t\t\\begin{pmatrix}\n\t\t\t\t\\frac{1}{4} & -\\frac{\\sqrt{3}}{4} \\\\\n\t\t\t\t-\\frac{\\sqrt{3}}{4} & \\frac{3}{4}\n\t\t\t\\end{pmatrix}\n\t\t}\n\t\\end{equation*}\n\\end{enumerate}\n\n\\section*{2.}\n\\textit{Find the matrix $B$ that reflects across $y = -\\sqrt{3}x$.}\n\\\\[\\baselineskip]\nLet $\\text{Ref}(\\theta)$ be the reflection matrix where\n\\begin{equation*}\n\t\\text{Ref}(\\theta) =\n\t\\begin{pmatrix}\n\t\t\\cos(2 \\theta) & \\sin(2 \\theta) \\\\\n\t\t\\sin(2 \\theta) & -\\cos(2 \\theta)\n\t\\end{pmatrix}\n\t.\n\\end{equation*}\nNote that $y = -\\sqrt{3}x = \\tan(120 \\degree)x$. 
\\\nThus,\n\begin{equation*}\n\tB = \text{Ref}(120 \degree) =\n\t\begin{pmatrix}\n\t\t\cos(2 \cdot 120 \degree) & \sin(2 \cdot 120 \degree) \\\n\t\t\sin(2 \cdot 120 \degree) & -\cos(2 \cdot 120 \degree)\n\t\end{pmatrix}\n\t=\n\t\boxed{\n\t\t\begin{pmatrix}\n\t\t\t-\frac{1}{2} & -\frac{\sqrt{3}}{2} \\\n\t\t\t-\frac{\sqrt{3}}{2} & \frac{1}{2}\n\t\t\end{pmatrix}\n\t}\n\end{equation*}\n\newpage\n\n\section*{3.}\n\textit{Let $A$ be a $30 \degree$ rotation, let $B$ be the same matrix\nfrom \textbf{(2)}, and let $M$ be the same matrix from \textbf{(1)}. \\\nFind the following.}\n\\[\baselineskip]\nFirst, let $\text{Rot}(\theta)$ be the rotation matrix where\n\begin{equation*}\n\t\text{Rot}(\theta) =\n\t\begin{pmatrix}\n\t\t\cos \theta & -\sin \theta \\\n\t\t\sin \theta & \cos \theta\n\t\end{pmatrix}\n\t.\n\end{equation*}\n\begin{enumerate}[label=(\alph*)]\n\t\item $A^{1001} = \text{Rot}(1001 \cdot 30 \degree) =\n\t\text{Rot}(1001 \cdot 30 \degree \bmod 360 \degree) =\n\t\boxed{\text{Rot}(150 \degree)}$\n\t\item $B^{1001} = (B^2)^{500} \cdot B =\n\t([\text{Ref}(120 \degree)]^2)^{500} \cdot \text{Ref}(120 \degree) =\n\t\boxed{\text{Ref}(120 \degree)}$\n\t\item $AB = \text{Rot}(30 \degree) \text{Ref}(120 \degree) =\n\t\text{Ref}(120 \degree + \frac{1}{2} \cdot 30 \degree) =\n\t\boxed{\text{Ref}(135 \degree)}$\n\t\item $BA = \text{Ref}(120 \degree) \text{Rot}(30 \degree) =\n\t\text{Ref}(120 \degree - \frac{1}{2} \cdot 30 \degree) =\n\t\boxed{\text{Ref}(105 \degree)}$\n\t\item $A^{-1} B^5 = A^{-1} (B^2)^2 \cdot B = A^{-1} B =\n\t\text{Rot}(-30 \degree) \text{Ref}(120 \degree) = \\\n\t\text{Ref}(120 \degree + \frac{1}{2} \cdot -30 \degree) =\n\t\boxed{\text{Ref}(105 \degree)}$\n\t\item $M^{1001} = \boxed{M}$ \quad ($M$ is a projection, so $M^2 = M$.)\n\t\item\n\t$AB \begin{pmatrix} \cos(40 \degree) \\ \sin(40 \degree) \end{pmatrix}$\n\t\begin{align*}\n\t\t&=\n\t\t\text{Rot}(30 \degree)\n\t\t\text{Ref}(120 \degree)\n\t\t\begin{pmatrix} \cos(40 \degree) \\ \sin(40 \degree) \end{pmatrix} \\\n\t\t&= \n\t\t\text{Rot}(30 \degree)\n\t\t\begin{pmatrix} \cos(200 \degree) \\ \sin(200 \degree) \end{pmatrix}\n\t\t\\\n\t\t&=\n\t\t\boxed{\n\t\t\t\begin{pmatrix}\n\t\t\t\t\cos(230 \degree) \\ \sin(230 \degree)\n\t\t\t\end{pmatrix}\n\t\t}\n\t\end{align*}\n\end{enumerate}\n\n\section*{4.}\n\textit{Find a $3 \times 3$ matrix $C$ that rotates 3-space $30 \degree$ about\nthe z-axis.}\n\begin{gather*}\n\t\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\n\t\overset{C}{\longrightarrow}\n\t\begin{pmatrix} \cos(30 \degree) \\ \sin(30 \degree) \\ 0 \end{pmatrix}\n\t\\\n\t\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\n\t\overset{C}{\longrightarrow}\n\t\begin{pmatrix} \cos(120 \degree) \\ \sin(120 \degree) \\ 0 \end{pmatrix}\n\t\\\n\t\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\n\t\overset{C}{\longrightarrow}\n\t\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\n\end{gather*}\n\begin{equation*}\n\tC =\n\t\boxed{\n\t\t\begin{pmatrix}\n\t\t\t\cos(30 \degree) & \cos(120 \degree) & 0 \\\n\t\t\t\sin(30 \degree) & \sin(120 \degree) & 0 \\\n\t\t\t0 & 0 & 1\n\t\t\end{pmatrix}\n\t}\n\end{equation*}\n\n\end{document}", "meta": {"hexsha": "18e2e56b7e42f8ebc05ba4fee99bb87d798a9349", "size": 5865, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "usc-20202-math-225-39425/hw03/main.tex", "max_stars_repo_name": 
"Floozutter/coursework", "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "usc-20202-math-225-39425/hw03/main.tex", "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "usc-20202-math-225-39425/hw03/main.tex", "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6097560976, "max_line_length": 78, "alphanum_fraction": 0.6028985507, "num_tokens": 2466, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056167854461, "lm_q2_score": 0.800691997339971, "lm_q1q2_score": 0.5519214910815995}}
{"text": "\\section{Cutting}\n\\label{sec:cutting}\n\\index{stack!cutting}\n\n\\index{cut@\\fun{cut/2}|(} Let us consider the problem of cutting a\nstack~\\(s\\) at the~\\(k\\)th place. Obviously, the result is a pair of\nstacks. More precisely, let \\(\\pair{t}{u}\\) be the value of\n\\(\\fun{cut}(s,k)\\), such that \\(\\fun{cat}(t,u) \\twoheadrightarrow\ns\\)\\index{cat@\\fun{cat/2}} and \\(t\\)~contains \\(k\\)~items, that is to\nsay, \\(\\fun{len}(t) \\twoheadrightarrow k\\). In particular, if~\\(k =\n0\\), then \\(t = \\el\\); invalid inputs lead to unspecified\nbehaviours. For instance, \\(\\fun{cut}([4,2],0) \\twoheadrightarrow\n\\pair{\\el}{[4,2]}\\) and \\(\\fun{cut}([5,3,6,0,2],3) \\twoheadrightarrow\n\\pair{[5,3,6]}{[0,2]}\\), but, for the sake of simplicity, nothing is\nsaid about \\(\\fun{cut}([0],7)\\) and \\(\\fun{cut}([0],-1)\\). We derive\ntwo cases: \\(k = 0\\) or else the stack is not empty. The former is\neasy to guess:\n\\begin{equation*}\n\\fun{cut}(s,0)           \\rightarrow \\pair{\\el}{s};\\qquad\n\\fun{cut}(\\cons{x}{s},k) \\rightarrow \\fbcode{CCCCCC}\\,.\n\\end{equation*}\nA big\\hyp{}step design\\index{design!big-step $\\sim$} uses some\nsub\\-structural recursive calls to set the structure of the value in\nthe right\\hyp{}hand side. Because \\fun{cut/2} takes two arguments, we\nexpect a lexicographic order\\index{induction!lexicographic order}\n(definition~\\eqref{def:lexico}, page~\\pageref{def:lexico}):\n\\begin{equation*}\n\\fun{cut}(s_0,k_0) \\succ \\fun{cut}(s_1,k_1)\n:\\Leftrightarrow \n\\text{\\(s_0 \\succ s_1\\) or (\\(s_0 = s_1\\) and \\(k_0 > k_1\\))}.\n\\end{equation*}\nUsing the proper subterm order\\index{induction!proper subterm order}\n(section~\\ref{par:well-founded}, page~\\pageref{par:well-founded}) on\nstacks for (\\(\\succ\\)),\n\\begin{equation*}\n\\fun{cut}(\\cons{x}{s},k) \\succ \\fun{cut}(s,j);\\quad\n\\fun{cut}(\\cons{x}{s},k) \\succ \\fun{cut}(\\cons{x}{s},j),\\;\n\\text{if \\(k > j\\)}.\n\\end{equation*}\nIn the latter case, we can set \\(j=k-1\\), but the value of\n\\(\\fun{cut}(s,j)\\) needs to be projected into \\(\\pair{t}{u}\\) so\n\\(x\\)~is injected in and yields \\(\\pair{\\cons{x}{t}}{u}\\). This can be\nachieved with an auxiliary\nfunction~\\fun{push/2}\\index{push@\\fun{push/2}}:\n\\begin{equation*}\n\\begin{array}{@{}r@{\\;}l@{\\;}lr@{\\;}l@{\\;}l@{}}\n\\fun{cut}(s,0) & \\rightarrow & \\pair{\\el}{s};\n& \\fun{push}(x,\\pair{t}{u}) & \\rightarrow & \\pair{\\cons{x}{t}}{u}.\\\\\n\\fun{cut}(\\cons{x}{s},k) & \\rightarrow \n& \\fun{push}(x,\\fun{cut}(s,k-1)).\n\\end{array}\n\\end{equation*}\n\n\n\\paragraph{Inference systems}\n\\label{par:infsys}\n\\index{inference system}\n\nWhen the value of a recursive call needs to be destructured, it is\nconvenient to use an extension of our language to avoid creating\nauxiliary functions like \\fun{push/2}:\n\\begin{mathpar}\n\\inferrule*{}{\\fun{cut}(s,0) \\rightarrow \\pair{\\el}{s}}\n\\;\\TirName{Nil}\n\\qquad\n\\inferrule\n  {\\fun{cut}(s,k-1)         \\twoheadrightarrow \\pair{t}{u}}\n  {\\fun{cut}(\\cons{x}{s},k) \\twoheadrightarrow \\pair{\\cons{x}{t}}{u}}\n\\,\\TirName{Pref}\n\\end{mathpar}\nThe new construct is called an \\emph{inference rule}\\index{inference\n  system!rule} because it means: `For the value of\n\\(\\fun{cut}(\\cons{x}{s},k)\\) to be \\(\\pair{\\cons{x}{t}}{u}\\), we infer\nthat the value of \\(\\fun{cut}(s,k-1)\\) must be \\(\\pair{t}{u}\\).' This\ninterpretation corresponds to an upwards reading of the rule\n\\TirName{Pref} (\\emph{prefix}). 
Just as we compose rewrite rules\nhorizontally, we compose inference rules vertically, stacking them,\nas in\n\begin{mathpar}\n\inferrule\n  {\inferrule{\n     \inferrule{\fun{cut}([0,2],0) \rightarrow \pair{\el}{[0,2]}}\n               {\fun{cut}([6,0,2],1) \twoheadrightarrow \pair{[6]}{[0,2]}}}\n      {\fun{cut}([3,6,0,2],2) \twoheadrightarrow \pair{[3,6]}{[0,2]}}}\n  {\fun{cut}([5,3,6,0,2],3) \twoheadrightarrow \pair{[5,3,6]}{[0,2]}}\n\end{mathpar}\nWhen determining the cost of \(\fun{cut}(s,k)\), we take into account\nthe hidden function \fun{push/2}, so \(\Call{\fun{cut}([5,3,6,0,2],3)}\n= 7\). In general, \index{cut@$\C{\fun{cut}}{n}$} \(\C{\fun{cut}}{k} =\n2k + 1\).\n\nBeyond simplifying programs, what makes this formalism interesting is\nthat it enables two kinds of interpretation: logical and\ncomputational. The computational reading, called\n\emph{inductive}\index{inference system!inductive reading} in some\ncontexts, has just been illustrated. The logical understanding\nconsiders inference rules as logical implications of the form \(P_1\n\wedge P_2 \wedge \ldots \wedge P_n \Rightarrow C\), written\n\begin{equation*}\n\inferrule\n  {P_1 \\ P_2 \\ \ldots \\ P_n}\n  {C}\n\end{equation*}\nThe propositions~\(P_i\) are called \emph{premises}\index{inference\n  system!rule!premise} and \(C\)~is the \emph{conclusion}. In the case\nof \TirName{Pref}, there is only one premise. When premises are\nlacking, as in \TirName{Nil}, \(C\)~is called an \emph{axiom} and\nno horizontal line is drawn. The composition of inference rules is a\n\emph{derivation}. In the case of \fun{cut/2}, all derivations are\nisomorphic to a stack, whose top is the conclusion.\n\nThe logical reading of rule \TirName{Pref} is: `If \(\fun{cut}(s,k-1)\n\twoheadrightarrow \pair{t}{u}\), then we have\n\(\fun{cut}(\cons{x}{s},k) \twoheadrightarrow\n\pair{\cons{x}{t}}{u}\).'  This top\hyp{}down reading qualifies as\n\emph{deductive}\index{inference system!deductive reading}. The\nprevious derivation can then be regarded as the proof of the theorem\n\(\fun{cut}([5,3,6,0,2],3) \twoheadrightarrow \pair{[5,3,6]}{[0,2]}\).\n\n\n\paragraph{Induction on proofs}\n\index{induction!$\sim$ on proofs}\n\nA single formalism with such a dual interpretation is powerful because\na definition by means of inference rules enables the proof of theorems\nabout a function by \emph{induction on the structure of the proof}. As\nwe have done previously, structural induction can be applied to stacks\nconsidered as a data type (objects). Since, in the case of\n\fun{cut/2}, derivations are stacks in themselves, so can induction be\napplied to their structure (as meta\hyp{}objects). \n\nLet us illustrate this elegant inductive technique with a proof of the\n\emph{soundness}\index{soundness} of\n\fun{cut/2}\index{cut@\fun{cut/2}}.\n\n\mypar{Soundness}\n\label{par:cut_sound}\n\nThe concept of soundness or \emph{correctness} \citep{McCarthy_1962,\n  Floyd_1967, Hoare_1971,\n  Dijkstra_1976}\index{correctness|see{soundness}} is a binary\nrelationship, so we always ought to speak of the soundness of a\nprogram with respect to its\n\emph{specification}\index{specification}. A specification is a\nlogical description of the expected properties of the output of a\nprogram, given some assumptions on its input. 
In the case of\n\\(\\fun{cut}(s,k)\\), we already mentioned what to expect: the value\nmust be a pair \\(\\pair{t}{u}\\) such that the catenation of\n\\(t\\)~and~\\(u\\) is~\\(s\\) and the length of~\\(t\\) is~\\(k\\).\n\nFormally, let \\(\\pred{CorCut}{s,k}\\)\\index{CorCut@\\predName{CorCut}|(}\nbe the proposition\n\\begin{quote}\n  \\textsl{If \\(\\fun{cut}(s,k) \\twoheadrightarrow \\pair{t}{u}\\), then\n    \\(\\fun{cat}(t,u) \\twoheadrightarrow s\\) and \\(\\fun{len}(t)\n    \\twoheadrightarrow k\\),}\n\\end{quote}\nwhere the function \\fun{len/1}\\index{len@\\fun{len/1}|(} is defined in\nequation~\\eqref{eq:len} \\vpageref{eq:len}.\n\nLet us suppose the antecedent of the implication to be true, otherwise\nthe theorem is vacuously true, so there exists a derivation~\\(\\Delta\\)\nwhose conclusion is \\(\\fun{cut}(s,k) \\twoheadrightarrow\n\\pair{t}{u}\\). This derivation is a (meta) stack whose top is the\nconclusion in question, which makes it possible to reckon by induction\non its structure, that is, we assume that \\predName{CorCut} holds for\nthe immediate sub\\-derivation of~\\(\\Delta\\) (the induction hypothesis)\nand then proceed to prove that \\predName{CorCut} holds for the entire\nderivation. This is the immediate subterm induction we use when\nreasoning on a stack as an object: we assume the theorem to hold\nfor~\\(s\\) and then move to prove it for~\\(\\cons{x}{s}\\).\n\nA case by case analysis on the kind of rule that can end~\\(\\Delta\\)\nguides the proof. To avoid clashes between variables in the theorem\nand in the inference system, we will overline the latter ones, like\n\\(\\overline{s}\\), \\(\\overline{t}\\) etc. which may differ from\n\\(s\\)~and~\\(t\\) in \\predName{CorCut}.\n\\begin{itemize}\n\n\\item \\emph{Case where \\(\\Delta\\) ends with \\TirName{Nil}.} There are\n  no premises, as~\\TirName{Nil} is an axiom. In this case, we have to\n  establish \\predName{CorCut} without induction. The matching of\n  \\(\\fun{cut}(s,k) \\twoheadrightarrow \\pair{t}{u}\\) against\n  \\(\\fun{cut}(\\overline{s},0) \\rightarrow \\pair{\\el}{\\overline{s}}\\)\n  yields \\(\\overline{s} = s\\), \\(0=k\\), \\(\\el=t\\) and\n  \\(\\overline{s}=u\\). Therefore, \\index{cat@\\fun{cat/2}}\n  \\(\\fun{cat}(t,u) = \\fun{cat}(\\el,s) \\xrightarrow{\\smash{\\alpha}}\n  s\\), which proves half of the conjunction. Moreover\n  \\(\\fun{len}(t) = \\fun{len}(\\el) \\xrightarrow{\\smash{a}} 0\\). This is\n  consistent with \\(k=0\\), so \\(\\pred{CorCut}{s,0}\\) is\n  true.\\index{CorCut@\\predName{CorCut}|)}\n\n  \\item \\emph{Case where \\(\\Delta\\) ends with \\TirName{Pref}.} The\n    shape of~\\(\\Delta\\) is thus as follows:\n    \\begin{mathpar}\n      \\inferrule*[right=Pref]\n        {\\inferrule*\n           {\\inferrule*[vdots=1.5em]{}{ }}\n           {\\fun{cut}(\\overline{s},\\overline{k}-1)\n            \\twoheadrightarrow \\pair{\\overline{t}}{\\overline{u}}}\n        }\n        {\\fun{cut}(\\cons{\\overline{x}}{\\overline{s}},\\overline{k})\n         \\twoheadrightarrow\n         \\pair{\\cons{\\overline{x}}{\\overline{t}}}{\\overline{u}}}\n    \\end{mathpar}\n    The matching of \\(\\fun{cut}(s,k) \\twoheadrightarrow \\pair{t}{u}\\)\n    against the conclusion yields \\(\\cons{\\overline{x}}{\\overline{s}}\n    = s\\), \\(\\overline{k} = k\\), \\(\\cons{\\overline{x}}{\\overline{t}} =\n    t\\) and \\(\\overline{u} = u\\). 
The induction hypothesis in this\n    case is that the theorem holds for the sub\\-derivation,\n    therefore\\index{cat@\\fun{cat/2}|(} \\(\\fun{cat}(\\overline{t},\n    \\overline{u}) \\twoheadrightarrow \\overline{s}\\) and\n    \\(\\fun{len}(\\overline{t}) \\twoheadrightarrow \\overline{k} -\n    1\\).\\index{len@\\fun{len/1}|)} The induction principle requires\n    that we establish now\n    \\(\\fun{cat}(\\cons{\\overline{x}}{\\overline{t}}, \\overline{u})\n    \\twoheadrightarrow \\cons{\\overline{x}}{\\overline{s}}\\) and\n    \\(\\fun{len}(\\cons{\\overline{x}}{\\overline{t}}) \\twoheadrightarrow\n    \\overline{k}\\). From the definition of \\fun{cat/2} and part of the\n    hypothesis, we easily deduce\n    \\(\\fun{cat}(\\cons{\\overline{x}}{\\overline{t}}, \\overline{u})\n    \\xrightarrow{\\smash{\\beta}}\n    \\cons{\\overline{x}}{\\fun{cat}(\\overline{t}, \\overline{u})}\n    \\twoheadrightarrow \\cons{\\overline{x}}{\\overline{s}}\\). Now the\n    other part: \\(\\fun{len}(\\cons{\\overline{x}}{\\overline{t}})\n    \\xrightarrow{\\smash{b}} 1 + \\fun{len}(\\overline{t})\n    \\twoheadrightarrow 1 + (\\overline{k} - 1) =\n    \\overline{k}\\).\\index{cat@\\fun{cat/2}|)}\\hfill\\(\\Box\\)\n\n\\end{itemize}\n\n\\paragraph{Exercise}\n\nWrite an equivalent definition of \\fun{cut/2} in tail\nform.\\index{functional language!tail form}\n\\index{cut@\\fun{cut/2}|)}\n", "meta": {"hexsha": "04c3155bc955d977bc475fb2a75cc6c5cda4cc21", "size": 10938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "cutting.tex", "max_stars_repo_name": "rinderknecht/Book", "max_stars_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cutting.tex", "max_issues_repo_name": "rinderknecht/Book", "max_issues_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cutting.tex", "max_forks_repo_name": "rinderknecht/Book", "max_forks_repo_head_hexsha": "6f302ab1319c8ae9b3ea690c45fdb3d2b6fbca16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.1518987342, "max_line_length": 75, "alphanum_fraction": 0.6768147742, "num_tokens": 3751, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056040203135, "lm_q2_score": 0.8006920116079209, "lm_q1q2_score": 0.5519214906956378}}
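For readers who want to experiment, here is a direct transcription of the two rewrite rules into Python (an illustrative sketch only, using lists as stacks; it is the plain definition above, not the tail form requested by the exercise):\n\n\begin{verbatim}\ndef push(x, pair):\n    t, u = pair\n    return ([x] + t, u)\n\ndef cut(s, k):\n    if k == 0:\n        return ([], s)\n    return push(s[0], cut(s[1:], k - 1))\n\nassert cut([5, 3, 6, 0, 2], 3) == ([5, 3, 6], [0, 2])\n\end{verbatim}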
{"text": "\\section{Fixed notation}\n\nThe following notation will be fixed.\n\n\\begin{description}\n\t\\item[$\\dayc(t,T)$] Day count convention fraction between the dates $t < T$.\n\t\\item[$r(t)$] Short-rate at the time $t$.\n\t\\item[$\\Bank(t)$] The value of an idealized bank account at the time $t$.\n\t\\item[$\\DF (t,T)$] Stochastic discount factor between the dates $t,T$.\n\t\\item[$\\Bond(t,T)$] The price of $T$-bond (zero coupon bond with the maturity $T$) at the time $t$.\n\t\\item[$\\Rflt(t,T)$] Simple spot rate between dates $t$ and $T$\n\t\\item[$\\Rflt^k(t,T)$] $k$-times compounded simple spot rates between dates $t$ and $T$\n\t\\item[$\\FRA(t,T,S,K)$] The price of a forward rate agreement at the time $t$, where $K$ is the fixed price and the interest is paid between the dates $T < S$.\n\t\\item[$\\Rflt(t,T,S)$] Simple forward rate at the time $t$ between dates $T$ and $S$\n\t\\item[$\\Forwardrate(t,T)$] Instantaneous forward rate at the time $t$ for date $T$.\n\t\\item[$\\FUT(t,T,S)$] Futures rate at the time $t$ between dates $T$ and $S$.\n\t\\item[$\\DBond(t,T)$] The price of defaultable $T$-bond (zero coupon bond with the maturity $T$) at the time $t$.\n\t\\item[$\\ZBC(t,S,T,K)$] The price of call option at the time $t$ maturing at $S$ written on $T$-bond. \n\t\\item[$\\default$] The time of a default.\n\t\\item[$\\LGD$] The loss given the default (LGD) of a contract.\n\t\\item[$\\Rec$] The recovery value of a contract.\n\t\\item[$\\Protection(t)$] The value of a protection leg of a credit default swap.\n\t\\item[$\\Premium(t,C)$] The value of a premium leg of a credit default swap with a coupon rate $C$.\n\\end{description}", "meta": {"hexsha": "0384aa59cb09bc5bfbe23d838b8164445ddfadb6", "size": 1583, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notation.tex", "max_stars_repo_name": "mrytty/gradu-public", "max_stars_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notation.tex", "max_issues_repo_name": "mrytty/gradu-public", "max_issues_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notation.tex", "max_forks_repo_name": "mrytty/gradu-public", "max_forks_repo_head_hexsha": "537337ab3dc49be9f1f4283706b0f4dcbc8cb059", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.9583333333, "max_line_length": 159, "alphanum_fraction": 0.6778269109, "num_tokens": 515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8006920116079208, "lm_q2_score": 0.6893056040203135, "lm_q1q2_score": 0.5519214906956377}}
{"text": "\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\n\\usepackage{geometry}\n \\geometry{\n a4paper,\n total={170mm,250mm},\n left=31mm,\n right=31mm,\n top=34mm,\n bottom=29mm\n }\n\n\\usepackage{amsfonts,amsmath}\n\\usepackage{mathtools}\n\\usepackage{graphicx}\n\\usepackage{amssymb}\n\\usepackage{amsthm}\n\\usepackage{tikz-cd}\n\\usepackage{mathrsfs}\n\\usepackage{subcaption}\n\\usepackage{multicol}\n\\usepackage[colorinlistoftodos]{todonotes}\n\\usepackage{enumitem}\n\\usepackage{hyperref}\n\\usepackage{url}\n\\hypersetup{\n   colorlinks=true,\n    linkcolor=blue,\n    citecolor=red,\n    urlcolor=cyan,\n}\n%\\usepackage{yfonts}\n\\usepackage{mathdots}\n\n\\usepackage{fancyhdr}\n\\pagestyle{fancy}\n\\fancyhf{}\n\\rhead{\\textit{Your name}}\n\\lhead{\\small Your title}\n\\cfoot{\\thepage}\n\n\\setcounter{MaxMatrixCols}{20} % Enable us to create matrices with more than 10 columns\n\n\n\\title{Your title}\n\n\\author{\\textit{Your name}\\footnote{Your institution}}\n\n\\date{\\today}\n\n\\setlength{\\footnotesep}{0.5cm}\n\\setlength{\\skip\\footins}{1cm}\n\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{cor}[thm]{Corollary}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{defn}[thm]{Definition}\n\\newtheorem{obs}[thm]{Observation}\n\\theoremstyle{definition}\n\\newtheorem{eg}[thm]{Example}\n\\newtheorem{ex}[thm]{Exercise}\n\n\n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\nA concise summary of the whole reading report and main findings\n\\end{abstract}\n\n\\section{Introduction}\n\nA brief overview of the subject or research field that you are going to discuss.\n\n\\section{Background}\n\n\\begin{defn}\nA list of vectors $x_1,...,x_k\\in \\mathbf{C}^n$ is \\emph{orthogonal} if $x_i^*x_j=0$ for all $i\\neq j, i,j\\in\\{1,...,k\\}$. If, in addition, $x_i^*x_i=1$ for all $i=1,...,k$ (that is, the vectors are \\emph{normalized}), then the list is \\emph{orthonormal}.\n\\end{defn}\n\n\\begin{eg}[normalization]\n\\label{Normalization}\nIf $y_1,...,y_k\\in \\mathbf{C}^n$ are orthogonal and nonzero, the vectors $x_1,...,x_k$ defined by $x_i=(y_i^*y_i)^{-\\frac{1}{2}}y_i, i=1,...,k$ are orthonormal.\n\\end{eg}\n\n\\begin{thm}\nEvery orthogonal list of vectors in $\\mathbf{C}^n$ is linearly independent.\n\\end{thm}\n\\begin{proof}\nSuppose that $\\{y_1,...,y_k\\}$ is an orthogonal set. Normalize them as Example \\ref{Normalization} did and obtain an orthonormal list of vectors $\\{x_1,...,x_k\\}$. Assume that $0=\\alpha_1x_1+\\cdots+\\alpha_k x_k$. Then $0=(\\alpha_1x_1+\\cdots+\\alpha_k x_k)^*(\\alpha_1x_1+\\cdots+\\alpha_k x_k)=\\sum\\limits_{i,j} \\bar{\\alpha}_i \\alpha_j x_i^*x_j= \\sum\\limits_{i=1}^k |\\alpha_i|^2 x_i^*x_i=\\sum\\limits_{i=1}^k |\\alpha_i|^2$ because the vectors $x_i$ are orthonormal. Thus, all $\\alpha_i=0$ and hence $\\{x_1,...,x_k\\}$ is a linearly independent set, which in turn means that $\\{y_1,...,y_k\\}$ is linearly independent.\n\\end{proof}\n\n\\section{Method and Evaluation}\n\nMain section of the reading report. It contains your solutions to the exercises, your methodology to address the problem, and your experimental results.\n\n\\section{Conclusions}\n\nSummarize what you have written in previous sections and discuss your understandings of possible future research directions.\n\n\n\\vspace{1cm}\n\\begin{thebibliography}{00}\n\n\\bibitem{Horn} Roger A. Horn, Charles R. 
Johnson (2012) {\\em Matrix Analysis, Second Edition.} Cambridge University Press.\n\n% \\bibitem{PowerIterWiki} Wikipedia: The Free Encyclopedia \\textbf{\\em Power iteration.} URL: \\url{https://en.wikipedia.org/wiki/Power_iteration} Retrieved 12 April, 2018.\n\n\\end{thebibliography}\n\n\\end{document}\n", "meta": {"hexsha": "98683961311f5432e4421d1ee88dee87aa0402fe", "size": 3515, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "_teaching/file_spring2018/Latex_template.tex", "max_stars_repo_name": "zhangyk8/personal_website", "max_stars_repo_head_hexsha": "f67cdc2dd1a142457fa1666e3b42c5917d178ded", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-06-17T07:07:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-11T11:25:08.000Z", "max_issues_repo_path": "_teaching/file_spring2018/Latex_template.tex", "max_issues_repo_name": "zhangyk8/personal_website", "max_issues_repo_head_hexsha": "f67cdc2dd1a142457fa1666e3b42c5917d178ded", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_teaching/file_spring2018/Latex_template.tex", "max_forks_repo_name": "zhangyk8/personal_website", "max_forks_repo_head_hexsha": "f67cdc2dd1a142457fa1666e3b42c5917d178ded", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-03-22T02:30:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-22T14:55:48.000Z", "avg_line_length": 30.5652173913, "max_line_length": 610, "alphanum_fraction": 0.7376955903, "num_tokens": 1155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056040203135, "lm_q2_score": 0.8006920068519376, "lm_q1q2_score": 0.5519214874173118}}
{"text": "\\documentclass[10pt, preprint]{aastex}\n\n\\usepackage{natbib}\n\\bibliographystyle{apj}\n\n\\usepackage{minted}\n\\usepackage{float}\n\\usepackage{graphicx}\n\\usepackage{subfig}\n\\usepackage{amsmath}\n\\usepackage[toc,page]{appendix}\n\\usepackage[utf8]{inputenc}\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks=true,\n    linkcolor=blue,\n    filecolor=magenta,      \n    urlcolor=blue,\n    citecolor=blue,\n}\n\\usepackage{booktabs}\n\\renewcommand{\\arraystretch}{1}\n\n\\title{Assignment 2 - CTA200H}\n\\author{Rebecca Ceppas de Castro}\n\n\\begin{document}\n\n\\maketitle\n\n\\section*{Question 1}\n\nFor this question, two methods of numerical approximations for the derivative of $sin(x)$ at $x_0 = 0.1$ are compared to its analytical derivative. For reference, Equation \\ref{method1} is called method 1 in the python script and it assumes $h$ is infinitesimally small. Equation \\ref{method2} is referred to as method 2 and it more realistically assumes $h$ is a small but finite step size.\n\n\\begin{equation}\\label{method1}\n    d_xf|_{x_0} \\approx \\frac{f_{x_0 + h} - f_{x_0}}{h}\n\\end{equation}\n\n\\begin{equation}\\label{method2}\n    d_xf|_{x_0} \\approx \\frac{f_{x_0 + h} - f_{x_0 - h}}{2h}\n\\end{equation}\n\n\\begin{figure}[h]\n    \\centering\n    \\includegraphics[width=0.7\\textwidth]{derivatives.pdf}\n    \\caption{Comparison of the Fractional error resulting from using method 1 and method 2 with the analytical derivative of sin(x) at 0.1}\n    \\label{derivatives}\n\\end{figure}\n\nFigure \\ref{derivatives} shows that the fractional errors from both methods of numerical approximations are relatively small for small values of h and increase at different rates for higher values of h, as expected. The first to notice is that method 2 always has a smaller fractional error, which agrees with the fact that that method assumes h is finite and we, in fact, used finite values of h in the calculations. The slope shows that the error in the approximation when using method 2 is more susceptible to changes in h, while method 1 shows less of this variation. In other words, while both methods have high errors for large step sizes, using method 2 produces more accurate results at a faster rate than if you were using method 1 as you decrease the step size.\n\n\\section*{Question 2}\n\nIn this question, we plot the Mandelbrot set, obtained through the recurring relation $z_{i+1} = z_{i}^2 + c$, where $c$ is a complex number given by $c = x + iy$ for $-2 < x < 2$ and $-2 < y < 2$. For different complex numbers $c$, the set diverges after a certain number of iterations, while for others it is bounded. In my function \\textit{get\\_z}, I use try and except to keep track of the bounded sets, as well as the number of iterations ran through before the divergent sets diverged. 
\n\n\begin{figure}[h!]\n    \centering\n    \subfloat[Divergent and Bounded points c]{{\includegraphics[width=0.5\textwidth]{duo.png}}}\n    \hfill\n    \subfloat[Mandelbrot Set with colour map based on number of iterations before divergence]{{\includegraphics[width=0.5\textwidth]{iterations.png}}}\n    \caption{Mandelbrot Set}\n    \label{mandelbrot}\n\end{figure}\n\n\n\section*{Question 3}\n\nIn the SIR model, we can use a system of differential equations to model the spread of a disease in a population. We have three variables: $S(t)$, those susceptible but not infected; $I(t)$, those infected; and $R(t)$, those recovered and immune. These 3 groups add up to the total population number $N$ and vary with time according to the system of Equations \ref{SIR}.\n\n\begin{equation*}\n    \frac{dS}{dt} = -\frac{\beta S I}{N}\n\end{equation*}\n    \n\begin{equation}\label{SIR}\n    \frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I\n\end{equation}\n\n\begin{equation*}\n    \frac{dR}{dt} = \gamma I\n\end{equation*}\n\nHere $\gamma$ represents the recovery rate and $\beta$ represents the transmission rate.\n\nI used the scipy \textit{odeint} solver to find these populations at different times, using different values of $\gamma$ and $\beta$ to see how the different groups evolved with time for each case. For all of the plots shown in Figure \ref{fig:SIR}, I have taken $N = 1000$, and started with 1 infected person, none recovered and 999 susceptible. \n\n\nI chose 4 combinations of $\gamma$ and $\beta$ to represent different scenarios. In the first case, I have the same transmission and recovery rate, so we see that the one infected person transmits the disease to more people, but the recovery happens at the same rate, so the disease doesn't spread as much and at around 60 (days, months, minutes - whatever time scale is relevant for the disease), it flattens out and the disease is pretty much eradicated. In the second case, we have a much higher transmission rate than recovery rate, so there is a steep peak in people infected and a much slower recovery curve. In this case, as well as the third one with $\gamma = 0.01$ and $\beta = 0.1$, the susceptible population reaches 0, which means the whole population got infected at some point. However, we see that when the transmission and recovery rates have more similar values, the infection curve is less steep, as expected. Finally, for some combinations of $\gamma$ and $\beta$, the infection curve rises but is balanced by a recovery rate of almost equal magnitude, and it falls again. 
In this last scenario, the disease is eradicated before everyone is infected, but with much more damage than that shown in the first case.\n\n\begin{figure}[h]\n    \centering\n    \includegraphics[width=0.8\textwidth]{SIR.pdf}\n    \caption{Evolution of the three groups of the population using the SIR model with different $\beta$ and $\gamma$ values.}\n    \label{fig:SIR}\n\end{figure}\n\n\end{document}\n", "meta": {"hexsha": "a747022aebde03e44561025cbdfb8fb318c768e8", "size": 5835, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment_2/main.tex", "max_stars_repo_name": "rebeccaceppas/CTA200", "max_stars_repo_head_hexsha": "d7edac4f968f2c1f7cc61123ecf83b44cbc47020", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment_2/main.tex", "max_issues_repo_name": "rebeccaceppas/CTA200", "max_issues_repo_head_hexsha": "d7edac4f968f2c1f7cc61123ecf83b44cbc47020", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment_2/main.tex", "max_forks_repo_name": "rebeccaceppas/CTA200", "max_forks_repo_head_hexsha": "d7edac4f968f2c1f7cc61123ecf83b44cbc47020", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.1546391753, "max_line_length": 1223, "alphanum_fraction": 0.7501285347, "num_tokens": 1572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6477982043529715, "lm_q2_score": 0.851952809486198, "lm_q1q2_score": 0.5518935001786284}}
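A minimal sketch of the solver setup for Question 3 (illustrative $\beta$ and $\gamma$ values; the four combinations discussed above are not reproduced here):\n\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import odeint\n\ndef sir(y, t, N, beta, gamma):\n    S, I, R = y\n    dS = -beta * S * I / N\n    dI = beta * S * I / N - gamma * I\n    dR = gamma * I\n    return dS, dI, dR\n\nN = 1000\nt = np.linspace(0, 160, 400)\nS, I, R = odeint(sir, (999, 1, 0), t, args=(N, 0.2, 0.1)).T\n\end{verbatim}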
{"text": "\\documentclass{article}\n\\usepackage{bibentry}\n\\usepackage{etoolbox}\n\\usepackage[]{geometry}\n\\usepackage[english]{babel}\n\\usepackage{mathpazo}\n\\usepackage{listings}\n\\usepackage{amsmath}\n\\usepackage{mathtools}\n\\makeatletter\n\\makeatother\n\\title{Supervised Learning}\n\\bibliographystyle{plain}\n\\author{}\n\\date{}\n\\pagestyle{empty}\n\\thispagestyle{empty}\n\\pagenumbering{gobble}\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{1em}\n\\begin{document}\n\\nocite{*}\n\\maketitle\n\\section{Introduction}\n\nSupervised learning implies that the data is prelabeled ie already\ndistinguished. The model is responsible for correctly predicting this outcome.\n\n\\section{Introduction to Regression}\n\nSupervised learning is the process whereby given examples of inputs and outputs\nan algorithm is then given a new input and asked to predict its output.\n\nRegression is this process specifically done on continuous inputs and outputs.\n\nLinear regression is the attempt to model a linear relationship between a\ndependent variable and independent variables. Such that given the dependent\nvariable $y$ and the independent variables $x_1, x_2, \\ldots x_n$ that we then\ngenerate a function $y = \\theta_0 + \\theta_1x_1 + \\ldots + \\theta_nx_n$\n\nThe $\\theta$ values are referred to as weights and determine how important the\ncorresponding independent variable is.\n\nGradient descent is the method by which we handling linear regression, using the\nsum of squared errors rather than absolute errors since sum-of-squared is\ndifferentiable.\n\nThe aim of linear regression in this case is to minimize the amount of total\nerror across all of the provided input/output values.\n\nThere are various different error functions that can be used in order to find\nthe model with the least error. For gradient descent sum of squared error is the\nbest since its differentiable.\n\nGenerally we try to fit the function\n$f(x) = c_0 + c_1x + c_2x^2 + \\ldots + c_nx^n$ and by using a higher order\nfunction we then have more degrees of freedom. We are limited so that the max\norder $n$ is only $\\mid X \\mid$ where $X$ is the dataset of all training points.\n\nPicking too high an order of freedom results in over fitting and the resultant\nmodel fits the data but overcommits to the data.\n\nGiven the equation $Xw = Y$ where $X$ is the set of independent variables, $w$\nis the vector of constants $c_0, c_1, \\ldots c_n$ and $Y$ is the set of\ndependent variables. We are then able to solve for the constant vector $w$ as\nfollows:\n$$X^TXw  = X^TY$$\n$$(X^TX)^{-1}X^TXw = (X^TX)^{-1}X^TY$$\n$$w = (X^TX)^{-1}X^TY$$\n\nThis is using projections in linear algebra.\n\nTraining data has errors that is we model not $f$ but $f + \\epsilon$. Errors can\nresult from a number of sources.\n\nAssume that data is independent and identically distributed. This implies that\nall the data comes from the same source and there are no implicit biases in\nthe data. This is not a fundamental assumption of supervised learning however,\nit is an assumption of certain algorithms within supervised learning.\n\nWe can split the training set into a cross validation set (a stand-in for the\nactual testing data). Cross validation splits data into $n$ folds. Then test on\neach fold, training on the rest of the data. You then choose the degree model\nwith the lowest error and average the coefficients together.\n\n\nWithout enough complexity you underfit the data. 
\n\nVector inputs are possible and generalize to hyperplanes (that is, we aren't\nlimited to a small number of axes).\n\nDiscrete, vector, or scalar data can be input into regression models\nby using an ordinal conversion function. \textbf{Note}: A conversion function\ncan be approximated by using another machine learning algorithm.\n\nBoolean vectors can be used to encode complex data.\n\n\section{More Regressions}\n\nParametric regression represents a model with a number of parameters (i.e.,\nregression).\n\nK nearest neighbor finds the k neighbors closest to the input value and then\naverages their output values together using a couple of different metrics.\n\nKernel regression is similar to KNN and each data point is weighted, unlike KNN\nwhere each point is simply left as it is.\n\nInstance-based methods keep the data and use it to generate a\nprediction when provided with an input whose label is unknown.\n\nThe parametric model is biased (as in we already have an idea of the solution)\nversus an instance method, which is unbiased.\n\nParametric models are space efficient; however, they cannot be updated easily.\nTraining is slow but querying is fast.\n\nInstance-based methods are not space efficient; training is fast but querying is\nslow. They also have the added benefit of fitting patterns that are too complex\nfor parametric models.\n\n\section{Regressions in sklearn}\n\nContinuous supervised learning is regression.\n\nContinuous learning works on data sets with some type of ordering.\n\nThe target variable is the dependent variable and the input variables are the\nindependent variables.\n\nLinear regression minimizes the sum of the squared errors. The best regression\nis the model that minimizes the SSE metric.\n\nOrdinary least squares (OLS) (used in sklearn) and gradient descent are the\ntwo algorithms used to minimize SSE.\n\nThere is a fundamental ambiguity. There can be multiple lines that minimize the\n$\sum \mid \textsf{error} \mid$ but only one line minimizes\n$\sum \textsf{error}^2$.\n\nFurthermore, using SSE makes the implementation easier as the function is\ndifferentiable.\n\nSSE does not account for the number of data points and thus as the data set\nsize grows the SSE grows as well. The SSE metric cannot compare fits between\ntwo different data sets.\n\n$R^2$ explains how much of my change in the output is explained by my change\nin the input and is bounded $0 \leq R^2 \leq 1$. $R^2$ is not generally affected\nby the number of data points.\n\nRegression requires data that fits its structures well. Data can be transformed\nbeforehand as a fix.\n\nReinforcing the previous notes concerning regression versus classification,\nnote that discrete labels and a decision boundary indicate a supervised\nclassification method.\nIn the case of regression the objective is to have a continuous output and a\nline of best fit. Classification uses accuracy as its measure where regression\nuses sum of squared error.\n\nMultivariate regression is regression on several independent variables reducing\nto a single output variable.\n\n\section{Decision Trees}\n\nClassification maps a set of independent variables to a set of discrete labels.\n\nInstances are inputs: vectors of independent attribute values. \n\nConcepts are represented by functions: a relationship between some input and\noutput. 
A concept is formally defined in this case as a set of things.\n\nThe target concept is the concept from the possible concept space that most \neffectively describes the inputs in terms of the outputs by some metric.\n\nThe hypothesis class is the set of concepts that are possibly useful to use.\n\nA sample is a training set of inputs (independent variables) and labels\n(outputs). \n\nA candidate is a concept that we evaluate to be an effective target concept.\n\nThe testing set is a set of inputs and outputs that is specifically used to \nevaluate the competency of the candidate concept. \n\nThe representation of a decision tree is an n-ary tree, where each node\nrepresents a decision and each edge to a child represents a specific\nselection against that decision. Decision trees only use data that is relevant;\nthey will ignore any attribute that is not required to model the data properly.\n\n\textbf{Note}: Decision trees start from the root node. \n\nDecision trees attempt to create the most equal splits possible when splitting\na node. The deeper questions are predicated on the earlier\npossibilities. \n\nThe algorithm for decision trees can be defined as follows:\n\begin{enumerate}\n        \item Pick the best attribute (gives the most even split).\n        \item Ask the question\n        \item Follow the answer path\n        \item Until you get an answer, go to 1\n\end{enumerate}\n\nA decision tree building algorithm follows all possible paths until all\npossible questions are answered, whereas the above algorithm only defines how\nto use a prebuilt decision tree. \n\nA decision that keeps the data as it is is less useful than any possible \ndecision split that splits the data in any capacity. A decision that doesn't\nactually split the data is essentially a mapping of the current state of the\ndata to itself and simply wastes CPU time. \n\nDecision trees can be considered a more general class of the behavior of a\ntransistor. They can model most logical operators. A decision tree can be\nconsidered a form of logical primitive in this case.\n\nNot all decision trees are commutative, but commutative ones do exist.\n\nThe number of nodes in a decision tree represents how complex it is. The \ntrees for n-or and n-and are linear; that is, there are $n$ nodes that need\nto be traversed. \n\nThe n-xor tree, on the other hand, needs $2^n - 1$ nodes. The n-xor tree is\nexponential, and learning it is an NP-hard problem. \n\nSplitting on a choice, we construct a decision table to represent all the \npossibilities. The resulting space of possible decision trees we must consider\nhas size\n$m_{output}^{\prod^{n_{decisions}}\mid k_n\mid}$ where $\mid k_n \mid$ is the\ncardinality of the decision outputs for decision $n$ and $m_{output}$ is the\nnumber of output labels. \n\nA representation like that is very expressive, meaning that the hypothesis\nspace is very large. However, this means that the algorithm we use to search\nthis space needs to be extremely efficient in order to prevent nearly infinite\nruntimes. 
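\n\nFor a concrete sense of scale: with $n$ boolean attributes there are $2^n$ rows\nin the truth table and therefore $2^{2^n}$ distinct boolean concepts. Already\nfor $n = 6$,\n$$2^{2^6} = 2^{64} \approx 1.8 \times 10^{19},$$\nso enumerating the hypothesis space is hopeless, which motivates the greedy\nsearch described next.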
\n\n\subsection{ID3}\n\nThe following algorithm is used to efficiently build decision trees.\n\nWe begin by looping until a solution is found:\n\begin{itemize}\n    \item $A \leftarrow$ best attribute\n    \item Assign $A$ as decision attribute for $\text{node}$\n    \item For each value of $A$, create a descendant of $\text{node}$\n    \item Sort training examples to leaves\n    \item If examples are perfectly classified, stop. Else, iterate over the leaves.\n\end{itemize}\n\nInformation gain is a mathematical representation of the reduction in randomness\nand is defined\n$\text{Gain}(S,A) = \text{entropy}(S) - \sum_v \frac{\mid S_v \mid}{\mid S \mid}\text{entropy}(S_v)$.\n\nEntropy is defined as a measure of impurity in a bunch of examples. \n\nThe formula for entropy is defined $-\sum_v p(v)\text{log}_2(p(v))$ where $p(v)$\nis the probability of seeing a value. Entropy in machine learning refers to \nthe homogeneity of a dataset.\n\nThe best attribute is the one that maximizes information gain.  \n\nID3 terminates under two conditions: when there is no relevant data left to\nsplit on (i.e., it is perfectly classified) or when there are no more attributes \navailable to divide the data with. \n\nThe ID3 algorithm is recursively called against itself unless a terminating\ncondition is met. \n\nInformation gain is calculated and then sorted for each remaining attribute \nduring the process of building the decision tree. \n\nID3 is a greedy algorithm as it has no complex look-ahead behavior and only\nfocuses on the next set of nodes. \n\nAs the number of examples (the size of the training data) and/or the number of \nattributes to analyze increases, so does the likelihood that the tree returned\nby ID3 is suboptimal.\n\nWith continuous data, bucketing can be used to enable ID3 to handle it. \n\nIf entries are missing from data there are several ways to fill the gaps. \n\n\begin{itemize}\n\t\item You can replace the gap with the most common (in the data) element\n\t\tfor that specific attribute. \n\t\item You can also assign probabilities to each possibility for the\n\t\tgiven attribute and then randomly fill the gaps using a \n\t\tpseudo-random generator until no gaps remain. \n\end{itemize}\n\nSince ID3 fits until there is no fitting left to do, it can drastically overfit\nthe data. There are two major ways to deal with this:\n\n\begin{enumerate}\n\t\item Stop growing the tree before it becomes too large\n\t\item Prune the tree once it is done\n\end{enumerate}\n\nA limit to the growth can be established through a limit on the layers/depth. \nCross validation can also be used to help establish the best tree structure.\n\nPruning, however, requires testing the fully built tree against the pruned \ntrees and seeing if the pruned tree performs better on the test data. \n\nFinally we are able to manipulate the formula used for the information gain\nanalysis. If there are many buckets and not enough data we can wind up with\nsparse separation which is difficult for the information gain formula to \nevaluate.\n\nThe formula called Gain Ratio overcomes the aforementioned difficulty by\nmeasuring how evenly and how broadly the data is distributed. It is expressed:\n$\text{GainRatio}(S,A) = \frac{\text{Gain}(S,A)}{\text{SplitInformation}(S,A)}$\nwhere $\text{SplitInformation}$ is defined\n$\text{SplitInformation}(S,A) = - \sum_{i = 1}^c\frac{\mid S_i\mid}{\mid S\mid}\n\text{log}_2\frac{\mid S_i\mid}{\mid S\mid}$\n\nThere are two major types of bias:\n\n\begin{enumerate}\n\t\item Restriction bias: Refers to the hypothesis set (anything outside\n\tis not considered)\n\t\item Preference bias: What hypotheses in the hypothesis set are\n\tpreferred\n\end{enumerate}\n\nID3 prefers trees with good splits near the top, correct trees and trees that\nare shorter. 
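\n\nA small Python sketch of the entropy and information gain formulas above (a standalone illustration with hypothetical label and attribute lists):\n\n\begin{verbatim}\nimport numpy as np\nfrom collections import Counter\n\ndef entropy(labels):\n    # -sum_v p(v) log2 p(v) over the label distribution\n    counts = np.array(list(Counter(labels).values()), dtype=float)\n    p = counts / counts.sum()\n    return float(-(p * np.log2(p)).sum())\n\ndef information_gain(labels, attribute_values):\n    # entropy(S) - sum_v |S_v|/|S| entropy(S_v)\n    gain = entropy(labels)\n    for v in set(attribute_values):\n        subset = [y for y, a in zip(labels, attribute_values) if a == v]\n        gain -= len(subset) / len(labels) * entropy(subset)\n    return gain\n\end{verbatim}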
\n\n\\subsection{Decision Trees Continued}\n \nUse range buckets in order to represent bounded continuous attributes.\n\nUsing separate bucketing methods to extract discrete value attributes allows\ncontinuous values to be reused in the same child-to-root path for a given \ndecision trees. \n\nFitting the training data perfectly is a non-starter since there is the\npossibility of noise in the data. \n\nA complex decision tree conflicts with occam's razor. Using cross validation it \nis possible to get a more efficient tree by comparing different trees against \none another. \n\nBesides hard limits to prevent a tree from growing to large, the information\ngain function can add weights to cause it to prefer breadth first growth. \n\nInformation gain is not easily used on regression. So we need to change the \nspitting criteria. It is possible to specify a local output function for each \nleaf. \n\nAnother way to measure the decision when to stop is a voting function applied\nover the leaf nodes. \n\n\\section {More Decision Trees}\n\nLinearly separable data is data that can be partitioned by a $y = m_0x_0 +\n\\ldots + m_nx_n + b)$. \n\nDecision trees allow multiple yes/no questions (binary decision tree).\n\nAnalogue: Decision tree is a 8bit coast building algorithm. \n\nIn sklearn, the default split criterion is the Gini index.\n\nDecision trees are easy to use and easy to grow. Easy to display.\nBigger classifiers can be built using decision trees using ensemble methods. \n\nDecision trees are prone to overfitting with lots of data and be careful\nwith parameter tunes. Outliers can drastically affect the growth of the tree\nif the tune does not modify the minimum split criteria. \n\n\\section{Neural Networks}\nVia approximation, artificial neural networks are cartoonish in comparison to\nactual neurons. \n\nAn artificial neuron has a set of inputs $X$ with a set of weights $W$ to the\nthreshold of the neuron $\\theta$ and an output $y$. \n\nThe activation is defined as $\\sum_{i = 1}^k x_i \\times w_i$. If the activation\nis equal to or exceeds the threshold than the neuron outputs. \n\nThe artificial neuron is referred to as a perceptron and is considered the most\nbasic form on an artificial neuron. \n\nPerceptrons compute a half plane, since the activation function is linear, the \ndecision boundary that they compute is always linear. \n can express basic logical constructs (or, and, not) etc with a \nsingle perceptron unit. \n\nWhen using a bounded evaluation structure and an unbounded (continuous) internal\nrepresentation there is often an infinite number of viable expressions that\ncan represent the bounded evaluation metric. That is AND/OR/NOT/XOR are easy\nto express with a variety of different perceptron formulations. Note this does \nnot imply that all values are possible to represent all solutions, there are\njust an infinite number of bounded solutions. \n\nThere are two different rules are\n\\begin{itemize}\n\t\\item Perceptron (thresholded output)\n\t\\item Gradient descent/delta (not thresholded)\n\\end{itemize} Defining the perceptron rule we begin with the input labels $X$, the output $y$, the evaluated output $\\hat{y}$, the learning rate $N$ and the weights $W$.\n\nWhen doing calculations using the perceptron we have the following expression:\n$\\hat{y} = \\sum_i w_ix_i \\geq \\theta$ we can modify this function to\n$\\hat{y} = \\sum_i w_ix_i \\geq \\theta > 0$. This enables us to calculate the \nthreshold as a weight, improving calculation times. 
We perform several iterations of the perceptron rule, each of which modifies the weights. The weight update is defined $w_i \leftarrow w_i + N(y - \hat{y})x_i$, with $\hat{y} = (\sum_j w_jx_j \geq 0)$. The inequality assumes a binary yes/no output structure.\n\nIf there is a half-plane that separates the data, then the data is linearly separable, and the perceptron rule can find the answer in a finite number of iterations.\n\nOnly run the perceptron rule while there is some error.\n\nAn algorithm that is not solely focused on linearly separable data is gradient descent. Gradient descent works by defining an error metric over the data set. The error function is defined as $E(W) = \frac{1}{2}\sum_{(x,y) \in D} (y - \sum_i x_iw_i)^2$. We then simply minimize the error via the derivative, which works out to $\frac{\partial E}{\partial w_i} = \sum_{(x,y)\in D}(y - \sum_j x_jw_j)(-x_i)$. The weight update for gradient descent is $\Delta w_i = N(y - a)x_i$, where $a = \sum_j x_jw_j$ is the activation.\n\nThe perceptron rule has a guarantee of finite convergence in the case of linear separability.\n\nGradient descent is more robust (the data doesn't need to be linearly separable) but as a trade-off only converges to a local optimum.\n\nWe use the perceptron rule in cases where the thresholding function is non-differentiable.\n\nThe sigmoid function acts like the perceptron's threshold, however it is differentiable. The sigmoid function is defined as $\sigma(a) = \frac{1}{1 + e^{-a}}$. The derivative of the $\sigma$ function is $D\sigma(a) = \sigma(a)(1 - \sigma(a))$.\n\nThe sigmoid function is useful in allowing gradient descent to be used universally, where previously only the perceptron rule could be used.
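\n\nA minimal numpy sketch of one gradient step on a single sigmoid unit, using the derivative identity above (function and variable names are mine):\n\n\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(a):\n    return 1.0 / (1.0 + np.exp(-a))\n\ndef delta_step(w, X, y, N=0.1):\n    out = sigmoid(X @ w)                # unit outputs for all examples\n    grad = (y - out) * out * (1 - out)  # (y - o) * D sigma\n    return w + N * (X.T @ grad)         # one weight update\n\end{verbatim}\n\n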
Neural networks are layers of units that compute sigmoids of weighted sums of the previous layer's outputs. There are hidden layers (not exposed to the I/O) that are used for larger representations. The mapping from input to output in a sigmoid ANN is entirely differentiable.\n\nBack propagation arises from the computationally beneficial organization of the chain rule: data flows forwards while error flows backwards. It is also known as error back propagation.\n\nBack propagation works so long as the error function is differentiable.\n\nSigmoid units are analogous to a perceptron but give rise to many local optima.\n\nThere are other advanced methods of optimization (learning) that can be used:\n\begin{itemize}\n\item Momentum: pushes the ball so it doesn't necessarily settle in a local optimum\n\item Higher order derivatives: combinations of weights, Hamiltonians etc\n\item Randomized optimization\n\item Penalties for complexity, which prevent overfitting (fewer nodes, fewer layers, smaller weights)\n\end{itemize}\n\nLarge weights increase network complexity (and with it the chance of overfitting).\n\n\begin{itemize}\n\t\item Boolean functions can be represented by a network of threshold-like units.\n\t\item Continuous functions can be represented by a single hidden layer.\n\t\item Arbitrary functions can be represented by a double hidden layer, stitched together.\n\end{itemize}\n\nThere is a danger of overfitting due to the capability of networks to represent complex data. Cross validation can be used to determine the number of hidden layers, the number of nodes in each layer and when to cap the size of the weights.\n\nThe initial weights for a neural network are typically chosen as small random values (different random values for different runs).\n\nNeural networks prefer simple explanations (Occam's razor) as well as correct values. Occam's razor: entities should not be multiplied unnecessarily.\n\n\section{Support Vector Machines}\n\nFits that are too close to the data overfit it. An SVM finds a line that fits the data but is least committed to it.\n\nWe consider the generalized linear function $y = w^Tx + b$, where $y$ is the label, $x$ is the input, and $w$ and $b$ are the parameters of the splitting hyperplane.\n\nWe want the plane that correctly splits the data while being as far away from it as possible.\n\nThe equation for the splitting hyperplane is specifically formulated as $w^Tx + b = 0$.\n\nThe output labels are $y \in \{-1, 1\}$, and we can then say the lines tangential to the positive and negative sets are defined as $w^Tx + b = 1$ and $w^Tx + b = -1$. We then maximize the distance between these two lines. This distance is $\frac{2}{\vert\vert w\vert\vert}$.\n\nThe goal of support vector machines is to maximize the margin $m$, defined as $\frac{w^T}{\vert\vert w\vert\vert}(x_1 - x_2)$, which under the constraint that everything is correctly classified reduces to $m = \frac{2}{\vert\vert w\vert\vert}$.\n\nIn order to classify everything correctly we consider the inequality $y_i(w^Tx_i + b) \geq 1\ \forall i$. This form is used because for correctly predicted non-members the two negative signs multiply and make the product positive. However, maximizing $\frac{2}{\vert\vert w\vert\vert}$ is difficult, so instead we minimize $\frac{1}{2}\vert\vert w\vert\vert^2$. This form is easier to solve, as it is a quadratic programming problem. We then formalize the problem in the quadratic normal form $W(\alpha) = \sum_i \alpha_i - \frac{1}{2}\sum_{ij}\alpha_i \alpha_j y_i y_j x_i^T x_j$, under the constraints $\alpha_i \geq 0$ and $\sum_i \alpha_i y_i = 0$, and try to maximize $W(\alpha)$.\n\nRetrieving the initial values, we have $w = \sum_i \alpha_i y_i x_i$. Most $\alpha$ are zero: most data doesn't matter when considering how to split the data. The data points that actually define the support vector machine are called the support vectors.\n\nThe points that are close to the decision boundary are important, while the ones that are farther from it are not. It's an example of instance based learning that gets rid of the unnecessary points. The dot product can be considered a measure of similarity.\n\nData sets that are not linearly separable imply that a support vector machine cannot separate the data properly, as any line chosen still results in errors. We then use a kernel function $\Phi$: by applying $\Phi$ to the data before taking the dot product, we transform the data so that we can still apply the SVM.\n\nThere is some higher dimensional space, reached via the kernel, in which the data is separable even though it is not in the current one. However, that kernel function may need to map into a space of unbounded dimension.
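\n\nA tiny numerical check of this idea for the polynomial kernel $(x^Ty)^2$ (the $c=0$, $p=2$ case of the kernel form given below); \texttt{phi} is the standard explicit feature map for this kernel in 2D:\n\n\begin{verbatim}\nimport numpy as np\n\nx = np.array([1.0, 2.0])\nz = np.array([3.0, 0.5])\n\ndef phi(v):   # explicit feature space for (u . v)^2\n    return np.array([v[0]**2, np.sqrt(2) * v[0] * v[1], v[1]**2])\n\nlhs = np.dot(x, z) ** 2       # kernel, computed in the original space\nrhs = np.dot(phi(x), phi(z))  # dot product in the feature space\nassert np.isclose(lhs, rhs)   # equal, without ever forming phi\n\end{verbatim}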
\n\nThe general form of the SVM is $W(\alpha) = \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$, where $K(x_i, x_j)$ is the kernel, through which we inject domain knowledge into the SVM. A common kernel form is $(x^Ty + c)^p$. A good kernel function captures domain knowledge properly. The kernel function measures the similarity of two items in a specific domain.\n\nKernel functions should satisfy the Mercer condition: the kernel must act as a well-behaved distance function.\n\nMargins are useful in determining how generalization and overfitting happen.\n\n\section{SVMs in practice}\nSVMs split data of two classes.\n\nThe margin is the distance between the line and the points, which is maximized by the SVM.\n\nSVM lines that are too close to the data are more susceptible to noise.\n\nAn SVM prioritizes a correct separation of the data over the maximization of the margin.\n\nSVMs are capable of tolerating small/individual outliers.\n\nThe $C$ parameter controls the tradeoff between a smooth decision boundary and classifying the training data points correctly. A higher $C$ implies more training points will be correct; that is, we fit tightly to the data.\n\nThe $\gamma$ parameter defines how far the influence of a single training example reaches. The higher $\gamma$, the closer the decision boundary fits to the data.\n\nSVMs work well on complicated domains. They don't work well on large data sets (training is roughly $O(n^3)$). They do not handle lots of noise well, and overfit to the noise.\n\n\section{Instance based learning}\nInstance based learning remembers the data; it is very fast to insert new data, and it is generally simpler.\n\nDistance between training examples is important. There are multiple metrics to measure distance in a traditional sense, let alone for more abstract classifications. Here distance and similarity are analogues of one another.\n\n\subsection{K-NN}\nGiven the training data $D = \{(x_i, y_i)\}$, a distance metric (domain knowledge) $d(q,x)$, the number of neighbors $k$ (also considered domain knowledge), and finally a query point $q$.\n\nWe define the nearest neighbor set as $NN = \{i: d(q, x_i) \text{ is among the } k \text{ smallest}\}$. We then return the mode/plurality of the labels in $NN$ in the case of classification, and in the case of regression we take their mean.\n\n\subsection{Use of KNN}\nTies in classification are handled by a designer-chosen metric.\n\nMost of the features of the KNN algorithm are defined by the designer, which allows the algorithm itself to be apparently simple in its structure.\n\nInstance based learning is slow to query but quick to train, while other methods are slow to train and quick to query: lazy vs eager learners.\n\nThe choice of distance metric and of $k$ drastically affects the performance of KNN.\n\nKNN assumes near points are similar to one another; the distance function encodes that assumption. There is no single best distance function. KNN, by averaging things together, expects functions to behave smoothly. Depending on the distance function, most KNN metrics assume all features matter equally; there is no explicit feature preference.\n\nKNN, like most other algorithms, suffers from the curse of dimensionality: as the number of features/dimensions grows, the data required to generalize increases exponentially.\n\nThe choice of distance function is integral to the performance of KNN.\n\nWeighting in the distance function can be used to help with the curse of dimensionality.
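\n\nA minimal sketch of K-NN regression as defined above (the uniform Euclidean metric is just one possible choice of domain knowledge):\n\n\begin{verbatim}\nimport numpy as np\n\ndef knn_regress(q, X, y, k, d=lambda a, b: np.linalg.norm(a - b)):\n    nn = np.argsort([d(q, x) for x in X])[:k]  # k nearest neighbors of q\n    return np.mean(y[nn])  # regression: mean; classification: plurality\n\end{verbatim}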
\n\nSetting $k=n$ and using a weighted average, we wind up with a specific technique called locally weighted regression (which regresses by weighting the local data points more heavily than the others).\n\nSimilarity is just another way of capturing domain knowledge.\n\nInstance based methods can host linear regressions (and other learners) inside the averaging step, which allows us to compose several learning methods together: the KNN averaging and distance functions can be used to encode other algorithms.\n\n\section{Naive Bayes}\n\nThe decision boundary is a separator of two or more classes.\n\nSave about $10\%$ of the data for testing.\n\n$p(E_2 \vert E_1)$ is the sensitivity, and $p(\neg E_2 \vert \neg E_1)$ is the specificity.\n\nPrior probability and test evidence yield the posterior probability: $P(E_1\vert E_2) \propto P(E_1) P(E_2 \vert E_1)$ and $P(\neg E_1 \vert E_2) \propto P(\neg E_1) P(E_2 \vert \neg E_1)$.\n\nWe then normalize each item by the sum of the two unnormalized quantities, $P(E_1)P(E_2\vert E_1) + P(\neg E_1)P(E_2 \vert \neg E_1)$.\n\nNaive Bayes is naive because it ignores greater context.\n\n\section{Bayesian Learning}\n\nThe basics of machine learning are to learn the most probable hypothesis given some data and some domain knowledge, equating best with most probable. This can be represented as $\text{argmax}_{h \in H} P(h\vert D)$, where $h$ is a particular hypothesis and $D$ refers to a set of data.\n\nFormally speaking, Bayes' Rule is written $P(h\vert D) = \frac{P(D\vert h) P(h)}{P(D)}$.\n\nWe can derive Bayes' Rule from the standard chain rule in probability: $\Pr(a,b) = \Pr(a \vert b)\Pr(b) = \Pr(b \vert a)\Pr(a)$.\n\nIn Bayes' rule, $\Pr(D)$ refers to your prior belief about the data. $\Pr(D \vert h)$ refers to the probability of the data given the hypothesis, i.e. what actually running the hypothesis would produce. $\Pr(h)$ refers to our prior belief in a certain hypothesis over any other; this generally encodes domain knowledge. This implies that better domain knowledge, i.e. more accurate $\Pr(h)$ and $\Pr(D\vert h)$, improves our overall estimate.\n\nIf a test is not useful, then via Bayesian reasoning it makes sense to alter the population you perform the test on.\n\nThe algorithm for Bayesian learning: for each $h \in H$, calculate $\Pr(h\vert D) \propto \Pr(D \vert h)\Pr(h)$, then find the maximal probability in the set of outputs and choose that as our $h$; that is, $h = \text{argmax}_h \Pr(h \vert D)$. It is not necessary to find $P(D)$, since we only need the $\text{argmax}$.\n\nThe maximum a posteriori (MAP) hypothesis is defined $h_{MAP} = \text{argmax}_h \Pr(D \vert h) \Pr(h)$.\n\nIt is often difficult to derive any useful domain knowledge $\Pr(h)$; we then fall back from the MAP hypothesis to the maximum likelihood hypothesis, defined as $h_{ML} = \text{argmax}_h \Pr(D \vert h)$, since we assume all hypotheses are equally likely.\n\nIt is difficult to use the most likely hypothesis model directly, since the hypothesis space may be infinite. Maximum likelihood can be used as a gold standard.
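\n\nA toy numeric example (hypothesis names and all numbers invented) showing that the argmax needs no $\Pr(D)$, and that MAP and ML can disagree:\n\n\begin{verbatim}\nprior      = {'h1': 0.5, 'h2': 0.3, 'h3': 0.2}   # P(h)\nlikelihood = {'h1': 0.1, 'h2': 0.6, 'h3': 0.7}   # P(D|h)\n\npost = {h: likelihood[h] * prior[h] for h in prior}  # P(D|h) P(h)\nh_map = max(post, key=post.get)              # 'h2' (0.18 vs 0.05, 0.14)\nh_ml  = max(likelihood, key=likelihood.get)  # 'h3': ML ignores the prior\n\end{verbatim}\n\n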
Given a set of data $\{\langle x_i, d_i\rangle\}$ as noise free examples of a hypothesis $c$, with $c \in H$ and a uniform, aka uninformed, prior, we define $\Pr(h) = \frac{1}{\vert H \vert}$. We also have $\Pr(D \vert h) = 1$ if $h$ is consistent with $D$ (i.e. $h(x_i) = d_i$ for all $i$) and $0$ otherwise. And finally we have $\Pr(D) = \sum_{h_i \in H} \Pr(D \vert h_i)\Pr(h_i)$, the hypotheses being mutually exclusive. Deriving further, we only keep the hypotheses that are consistent with the data (that is, all points agree), since otherwise their likelihood is 0. So we get $\Pr(D) = \sum_{h_i \in VS_{H,D}} 1 \cdot \frac{1}{\vert H \vert} = \frac{\vert VS \vert}{\vert H \vert}$. Finally we have $\Pr(h\vert D) = \frac{1}{\vert VS \vert}$. This states that all hypotheses in the version space are equally probable; however, this depends on the correct hypothesis being in the hypothesis space, noise free data, and a uniform prior.\n\nA derivation for Bayesian learning with noise follows. Given a set of inputs and labels $\langle x_i, d_i\rangle$, the function $f$ that we are trying to learn is defined such that $d_i = f(x_i) + \epsilon_i$. The error $\epsilon_i$ is drawn from a normal distribution with mean $0$ and variance $\sigma^2$, whose exact value turns out not to matter. Each epsilon is independent and identically distributed.\n\nWe begin with $h_{ML} = \text{argmax}_h \Pr(D \vert h)$ and then, by independence of the examples, $h_{ML} = \text{argmax}_h \prod_i \Pr(d_i \vert h)$. Since each $\epsilon_i$ is drawn from a normal distribution and is iid, we can express the function as a product of Gaussian likelihoods of the individual errors: $$\text{argmax}_h \prod_i \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(d_i - h(x_i))^2}{2\sigma^2}}$$ Since the argmax is invariant under monotonic transforms, we can take the $\text{log}$: the constant term $\frac{1}{\sqrt{2\pi\sigma^2}}$ can be dropped and the exponential disappears. We then have $\text{argmax}_h \sum_i -\frac{(d_i - h(x_i))^2}{2\sigma^2}$. Then, minimizing instead of maximizing and throwing out constants, we get $\text{argmin}_h \sum_i (d_i - h(x_i))^2$, and this function is the sum of squared errors.\n\nSeveral machine learning algorithms 'pop' out of the Bayesian derivation and our assumptions. As a result, we implicitly assume Gaussian noise corrupting the labels but not the inputs.\n\nA derivation of the minimum description length follows. Given $h_{MAP} = \text{argmax}_h \Pr(D\vert h) \Pr(h)$, we apply $\text{log}_2$ to get $\text{argmax}_h [\text{lg}\Pr(D \vert h) + \text{lg}\Pr(h)]$ and then multiply by $-1$ to get $\text{argmin}_h [- \text{lg}\Pr(D\vert h) - \text{lg}\Pr(h)]$. Then, from information theory, the optimal code for an event $w$ with probability $p$ has length $-\text{lg }p$. We then have $\text{length}(D \vert h) + \text{length}(h)$; reduced, we want to minimize $\text{error} + \text{size}$. This shows that we desire shorter and correct descriptions. In essence it is a Bayesian representation of Occam's razor.\n\nSadly, the tradeoff between misclassification error and the size of the hypothesis is built in and can't be entirely overcome.\n\nHowever, Bayesian learning as described is focused on finding the best $h$, not the best label $d$. To actually calculate $d$ we take a weighted vote, where for each $h \in H$ the weight of the vote is $\Pr(h \vert D)$. This is represented by the equation $$d_{MAP} = \text{argmax}_d \sum_h \Pr(d \vert h)\Pr(h \vert D)$$ This is known as the Bayes Optimal Classifier. There is no way to do any better on average than the BOC.
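\n\nA toy example of the weighted vote (numbers invented): the single most probable hypothesis can be outvoted by the rest.\n\n\begin{verbatim}\nposterior  = {'h1': 0.4, 'h2': 0.3, 'h3': 0.3}  # P(h|D)\nvotes_plus = {'h1': 1.0, 'h2': 0.0, 'h3': 0.0}  # P(d = + | h)\n\np_plus = sum(votes_plus[h] * posterior[h] for h in posterior)  # 0.4\nd_map = '+' if p_plus > 0.5 else '-'  # '-', although h1 alone says '+'\n\end{verbatim}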
\n\n\section{Bayesian Inference}\nBayesian networks are good at representing joint probabilities over complex spaces.\n\nA joint distribution is a distribution over combined factors with a total probability of one.\n\nWe use conditional probability and independence in order to prevent the set of factors from growing exponentially every time we add parameters.\n\nConditional independence is defined as: $X$ is conditionally independent of $Y$ given $Z$ if the probability distribution governing $X$ is independent of the value of $Y$ given the value of $Z$, or formally $\forall X,Y,Z:\ \Pr(X = x \vert Y = y, Z = z) = \Pr(X = x\vert Z = z)$, or more compactly $\Pr(X \vert Y, Z) = \Pr(X \vert Z)$. The independence of two variables means their joint probability is the product of their marginals. Conditional independence is not necessarily as strong a condition as full independence; however, it can be used more generally.\n\nBelief networks are also known as Bayes networks or graphical models. The nodes correspond to the variables and the edges correspond to dependencies.\n\nThe more influences on a node, the faster its probability table grows: the cardinality of the probability table is $\vert P(E) \vert = 2^k$, where $k$ is the number of (boolean) nodes influencing event $E$. This is governed by the indegree of the belief network structure.\n\nA belief network shows relationships between probabilities. It is not able to infer causality between them; it merely expresses statistical dependence.\n\nSampling from a joint distribution is done in topological order; this depends on the graph being acyclic.\n\nWe depend on Bayes belief networks being acyclic so that we can properly compute joint probability distributions over them.\n\nCyclic graphs make everything more complicated and can result in complicated predication and constraints on the conditional probabilities.\n\nWe are able to use the joint distribution to compute any joint subgraph. Given a joint distribution with nodes for events $a \in A$, each event $a_i$ is conditionally predicated only on a subset $A_i$ of the other events $a_j, a_k, \ldots a_n$ (its parents), so we write the probability table for $a_i$ as $\Pr(a_i \vert A_i)$. The joint probability distribution of any subgraph $A_s$ can then be expressed as $\prod_i \Pr(a_i\vert A_i)$, where $\forall a_i, A_i$ we have $a_i \in A_s$ and $A_i \subset A_s$.\n\nThe number of values required to represent the subgraph over boolean variables is normally $2^{\vert A_s\vert}$, but using a joint probability distribution in which the elements are conditionally independent we gain the benefit of expressing it with fewer values, one table per node: $\sum_i 2^{\vert A_i \vert}$. If all events are independent, then the minimal expression possible is $\vert A_s \vert$.\n\nDistributions are useful for determining the probability of a certain value and also for generating values. If a distribution represents a process, it allows us to do approximate inference, to model that process and to simulate it. We are able to visualize the distribution and to get a feeling for its behavior. We use approximation since exact inference is hard and/or approximation is faster. Solving arbitrarily complicated belief networks is NP hard (due to the exponential explosion of complicated conditionally independent events).
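\n\nA tiny illustration of topological-order sampling and approximate inference on a hypothetical two-node network (all probabilities invented):\n\n\begin{verbatim}\nimport random\n\ndef sample():                  # Rain -> WetGrass, sampled parent-first\n    rain = random.random() < 0.3                    # P(rain) = 0.3\n    wet = random.random() < (0.9 if rain else 0.1)  # P(wet | rain)\n    return rain, wet\n\n# approximate P(rain | wet) by rejection sampling; the exact value is\n# 0.27 / (0.27 + 0.07) ~= 0.794\ndraws = [r for r, w in (sample() for _ in range(100000)) if w]\nprint(sum(draws) / len(draws))\n\end{verbatim}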
\n\nThere are some basic probabilistic inference rules, such as marginalization, where $\Pr(x) = \sum_y \Pr(x,y)$, the chain rule, $\Pr(X,Y) = P(X)P(Y \vert X)$, and Bayes' Rule.\n\nIf you know you are in a specific context, there are features that are common, and there are probabilities of those features occurring given the context.\n\nNaive Bayes computes the probability of a root (class) node $V$ given a number of attributes: $\Pr(V \vert a_1, a_2, \ldots a_n) = \frac{P(V)\prod_i \Pr(a_i \vert V)}{Z}$, where $Z$ is the normalization factor across all of the attributes. This formalization allows the use of attributes to determine the general class of something. The classification formula is $\text{MAP class} = \text{argmax}_v \Pr(V = v)\prod_i \Pr(a_i \vert V = v)$.\n\nInference is cheap with Naive Bayes; there is a linear number of parameters. We can estimate the parameters with labeled data (count the number of occurrences of an attribute value in a class over the number of items in that class). Naive Bayes connects inference and classification, and is empirically successful.\n\nNaive Bayes assumes conditionally independent features given the class. Inner relationships among the attributes are not considered.\n\nIt's OK if the probabilities are wrong, so long as the answer is in the right direction ('OK in practice' validation).\n\nOrdering is preserved, but Naive Bayes believes its answers too much. One unseen attribute vetoes (zeroes out) the entire product; relying on raw counts this way is a form of overfitting. A way to avoid the full veto is to initialize the values to small probabilities. By smoothing the data this way we adopt an inductive bias that everything is possible.\n\nNaive Bayes allows us to link Bayesian probability and classification. It handles missing attributes well, and it allows inference in any direction.\n\n\section{Ensemble B\&B}\n\nA number of simple rules can each help with classification, yet individually none of them can represent the whole corpus of rules required to fully represent a concept or classification.\n\nEnsemble learning combines simple rules into a complex rule. Subsets of the data are used to generate these simple rules. These rules are then combined.\n\nThe simplest method for choosing subsets is picking data uniformly at random. The simplest method for combining is the mean/voting.\n\nEnsemble learning gets the benefits of cross validation: we don't fit the training data as closely, generalize better, and get rid of some overfitting.\n\nApplying learners to randomly chosen subsets is known as bagging or bootstrap aggregation (see the sketch below).\n\nIn boosting, subsets are instead chosen iteratively, weighted towards the examples with the highest error for the current/previous learners, and the learners are then weighted for the final combination.\n\nWe define error in boosting as $\Pr_D[h(x) \neq c(x)]$.\n\nA learning algorithm is a weak learner when, no matter the distribution, it will do better than chance. Formally speaking, this implies $\forall D:\ \Pr_D[h(x) \neq c(x)] \leq \frac{1}{2} - \epsilon$. This implies the learner always learns something from the data.\n\nHowever, if the distribution space has a potential weighting that prevents every hypothesis from satisfying the definition, then we have no weak learner. Small hypothesis spaces with bad hypotheses make having a weak learner difficult.
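\n\nA minimal sketch of bagging as described above (\texttt{train} is any routine that fits one simple learner and returns a predictor; all names are mine):\n\n\begin{verbatim}\nimport random\n\ndef bag(train, data, n_learners=10, frac=0.5):\n    learners = []\n    for _ in range(n_learners):\n        subset = random.sample(data, int(frac * len(data)))\n        learners.append(train(subset))  # one simple rule per random subset\n    # combine by averaging the individual predictions (mean/voting)\n    return lambda x: sum(f(x) for f in learners) / n_learners\n\end{verbatim}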
\n\n\\subsection{Boosting Algorithm}\nAssume a classification circumstance\n\nGiven training data $\\{(x_i, y_i)\\}$ where $y \\in \\{-1, 1\\}$\n\nFor $t = 1 \\text{ to } T$ we:\n\\begin{itemize}\n\t\\item Construct $D_T$\n\t\\item Find a weak classifier with $h_t(x)$ with a small error\n\t\t$\\epsilon_t = \\Pr_{D_t} [h_t(x) = y_i]$\n\\end{itemize}\n\nThen output $H_{final}$\nThe $\\epsilon$ is bounded such that $\\epsilon < \\frac{1}{2}$.\n\nTo find the distribution the first distribution $D_1(i) = \\frac{1}{n}$\nAt every time step we have distribution $D_{t + 1}(i) = \\frac{D_{t}(i) \\cdot\ne^{-\\alpha_ty_uh_t(x_i)}}{Z_t}$ where $\\alpha_t = \\frac{1}{2}\\text{ln}\\frac\n{1 -\\epsilon_t}{\\epsilon_t}$. This utilizes the fact that the algorithm is \nworking for a classifier (if the predicted and actual label have the same sign \nthe probability decreases depending on the normalization constant. \n\n$Z_t$ stands for whatever normalization constant is required to make the\ndistribution work. \n\nThen we get the weighted average $H_{final} = \\text{sign}(\\sum_t \\alpha_th_t\n(x))$.\n\n\\subsection{Boosting Continued}\nA feature of ensemble methods is that $H \\leq \\{\\sum h \\in H\\}$ whereby we are\nreferring to the complexity of the hypothesis classes and therefore can be \nmore expressive.\n\nConfidence refers to how strongly you believe in a specific hypothesis. \n\nVariance can be used as a stand-in for confidence (low variance = high\nconfidence). \n\nUsing a modification to the weighting formula normalizing it, we can obtain a\nmeasure of confidence. That is $H_{final}(x) = \\text{sign}(\\frac{\\sum_t \n\\alpha_t h_t(x)}{\\sum_t \\alpha_t}$. The closer the value is to the maximal $-1$ \nor $1$ the more confident the ensemble method is in that value. \n\nEssentially as iterative steps are done, error stays the same but confidence\nincreases. Boosting increases the margin between the confidence and larger \nmargins counter overfitting. \n\nWeak learners tend to overfit in the circumstances where the underlying learners\noverfit. Pink noise also causes boosting to overfit. Pink noise is uniform\nnoise. \n\nWhite noise is Gaussian noise. \n\nThere is no direct definition of 'strong learner'.\n\\nobibliography{../../citations} \n\\end{document}\n", "meta": {"hexsha": "99cef4976910924a24b333e8b56196fd65ffb9e7", "size": 42437, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Notes/Main.tex", "max_stars_repo_name": "Ryxai/MLND-Section-2", "max_stars_repo_head_hexsha": "6c6720d62a953ce23bbee1e510c97861160ea9fd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notes/Main.tex", "max_issues_repo_name": "Ryxai/MLND-Section-2", "max_issues_repo_head_hexsha": "6c6720d62a953ce23bbee1e510c97861160ea9fd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notes/Main.tex", "max_forks_repo_name": "Ryxai/MLND-Section-2", "max_forks_repo_head_hexsha": "6c6720d62a953ce23bbee1e510c97861160ea9fd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.1938232162, "max_line_length": 169, "alphanum_fraction": 0.7721092443, "num_tokens": 10175, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.7549149923816048, "lm_q1q2_score": 0.5518870901638139}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage{geometry}                \n\\geometry{a4paper,left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm}\n\\usepackage{natbib}\n\\usepackage{color}\n\\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue\n\\definecolor{mylilas}{RGB}{170,55,241}\n\\usepackage{epsfig}\n\\usepackage{amssymb,amsmath}\n\\usepackage{enumerate}\n\\usepackage{enumitem}\n\\usepackage[utf8]{inputenc}\n\\usepackage{hyperref}\n\\usepackage{mathtools}\n\n\n\\newcommand{\\ssd}{\\text{ssd}}\n\\newcommand{\\sS}{\\mathsf{S}}\n\\newcommand{\\tot}{\\text{tot}}\n\n\\begin{document}\n\n\n\\input{symbols}\n\n\\section*{Layered quasigeostrophic model}\n\nThe $\\nmax$-layer quasigeostrophic (QG) potential vorticity is\n\n\\begin{align}\n{q_1} &= \\lap\\psi_1 + \\frac{f_0^2}{H_1} \\left(\\frac{\\psi_{2}-\\psi_1}{g'_{1}}\\right)\\,,  \\qquad & i =1\\com \\nonumber \\\\\n{q_n} &= \\lap\\psi_n + \\frac{f_0^2}{H_n} \\left(\\frac{\\psi_{n-1}-\\psi_n}{g'_{n-1}}  - \\frac{\\psi_{i}-\\psi_{n+1}}{g'_{i}}\\right)\\,,  \\qquad &i = 2,\\nmax-1 \\com \\nonumber \\\\\n{q_\\nmax} &= \\lap\\psi_\\nmax + \\frac{f_0^2}{H_\\nmax} \\left(\\frac{\\psi_{\\textsf{N}-1}-\\psi_\\nmax}{g'_{\\nmax-1}}\\right) + \\frac{f_0}{H_\\nmax}h_b (x,y)\\,,  \\qquad & i =\\nmax\\,,\n\\end{align}\nwhere $q_n$ is the n'th layer QG potential vorticity, and $\\psi_n$ is the streamfunction, \n $f_0$ is the inertial frequency, n'th $H_n$ is the layer depth, and $h_b$ is the \nbottom topography. (Note that in QG $h_b/H_\\nmax << 1$.) Also the n'th buoyancy\njump (reduced gravity) is\n\\begin{equation}\ng'_n \\equiv g \\frac{\\rho_{n}-\\rho_{n+1}}{\\rho_n}\\com\n\\end{equation}\nwhere $g$ is the acceleration due to gravity and $\\rho_n$ is the layer density.\n\nThe dynamics of the system is given by the evolution of PV. In particular, assuming a background\nflow with background velocity $\\vec{V} = (U,V)$ such that\n\\begin{align}\n\\label{eq:Uequiv}\nu_n^{{\\tot}} = U_n - \\psi_{n y}\\com \\nonumber \\\\\nv_n^{\\tot} = V_n + \\psi_{n x} \\com\n\\end{align}\nand\n\\begin{equation}\nq_n^{\\tot} = Q_n + \\delta_{n\\nmax}\\frac{f_0}{H_\\nmax}h_b + q_n \\com\n\\end{equation}\nwhere $Q_n + \\delta_{n\\nmax}\\frac{f_0}{H_\\nmax}h_b$ is n'th layer background PV,\nwe obtain the evolution equations\n\\begin{align}\n\\label{eq:qg_dynamics}\n{q_n}_t + \\mathsf{J}(\\psi_n,q_n + \\delta_{n \\nmax} \\frac{f_0}{H_\\nmax}h_b )& + U_n ({q_n}_x + \\delta_{n \\nmax} \\frac{f_0}{H_\\nmax}h_{bx}) + V_n ({q_n}_y + \\delta_{n \\nmax} \\frac{f_0}{H_\\nmax}h_{by})+ \\nonumber\n\\\\ & {Q_n}_y {\\psi_n}_x - {Q_n}_y {\\psi_n}_y = \\ssd\n- r_{ek} \\delta_{n\\nmax} \\lap \\psi_n \\com \\qquad n = 1,\\nmax\\com\n\\end{align}\nwhere $\\ssd$ is \nstands for small scale dissipation, which is achieved by an spectral exponential filter\nor hyperviscosity, and $r_{ek}$ is the linear bottom drag coefficient. 
\n\n\subsection*{Equations in spectral space}\n\nThe evolution equation in spectral space is\n\begin{align}\n    \hat{q}_{nt} + (\mathrm{i} k U + \mathrm{i} l V) \left(\hat{q}_n + \delta_{n \nmax} \frac{f_0}{H_\nmax}\hat{h}_b\right) + (\mathrm{i} k\, {Q_y} - \mathrm{i} l\,{Q_x}){\hat{\psi}_n} + \mathsf{\hat{J}}(\psi_n, q_n + \delta_{n \nmax} \frac{f_0}{H_\nmax}h_b )   \nonumber \\ =  \ssd + \delta_{n \nmax} r_{ek} \kappa^2 \hat{\psi}_n \,, \qquad n = 1,\ldots,\nmax\com\n\end{align}\nwhere $\kappa^2 = k^2 + l^2$. Also, in the pseudo-spectral spirit, we write the transforms of the nonlinear terms and of the non-constant-coefficient linear term as the transforms of the products, calculated in physical space, as opposed to double convolution sums. That is, $\mathsf{\hat{J}}$ is the Fourier transform of the Jacobian computed in physical space.\n\nThe inversion relationship is\n\begin{equation}\n    \hat{q}_i = {\left(\sS - \kappa^2 \sI \right)} \hat{\psi}_i\com\n\end{equation}\nwhere $\sI$ is the $\nmax\times\nmax$ identity matrix, and the stretching matrix is\n\begin{equation}\n\textsf{S} \equiv  f_0^2\n\begin{bmatrix}\n    -\frac{1}{g'_1 H_1}& & \frac{1}{g'_1 H_1} &  & 0 \dots& \\\n    \vdots & \ddots& &\ddots &\ddots & & & &\\\n       & \frac{1}{g'_{i-1} H_i}& &  -\left(\frac{1}{g'_{i-1} H_i} + \frac{1}{g'_{i} H_i}\right)& & \frac{1}{g'_{i} H_i} \\\n       & \ddots& & \ddots &\ddots & & & &\\\n& \dots & 0 & \frac{1}{ g'_{\nmax-1} H_\nmax}& & -\frac{1}{g'_{\nmax-1} H_\nmax}\n\end{bmatrix}\n\per\n\end{equation}\n\n\subsection*{Energy spectrum}\nThe equation for the energy spectrum,\n\begin{equation}\nE(k,l) \equiv \frac{1}{2 H}\sum_{i=1}^{\nmax} H_i \kappa^2 |\hat{\psi}_i|^2 + \frac{1}{2 H} \sum_{i=1}^{\nmax-1} \frac{f_0^2}{g'_i}|\hat{\psi}_{i}- \hat{\psi}_{i+1}|^2\com\n\end{equation}\nis\n\begin{align}\n    \frac{d}{dt} E(k,l) = {\frac{1}{H}\sum_{i=1}^{\nmax} H_i \text{Re}[\hat{\psi}_i^\star {\mathsf{\hat{J}}}(\psi_i,\nabla^2\psi_i)]} +\n    {\frac{1}{H}\sum_{i=1}^{\nmax} H_i\text{Re}[\hat{\psi}_i^\star \hat{\mathsf{J}}(\psi_i,(\sS \psi)_i)]} \nonumber \\\n    + {\frac{1}{H}\sum_{i=1}^{\nmax} H_i ( k U_i +  l V_i)\, \text{Re}[\mathrm{i} \, \hat{\psi}^\star_i (\sS\hat{\psi})_i]} {- r_{ek} \frac{H_\nmax}{H} \kappa^2 |\hat{\psi}_{\nmax}|^2}  +{ {{E_\ssd}}} \com\n\end{align}\nwhere $\star$ stands for complex conjugation, and the terms on the right represent, from left to right,\n\begin{description}\n    \item[I:]  The spectral divergence of the kinetic energy flux;\n    \item[II:] The spectral divergence of the potential energy flux;\n    \item[III:] The spectrum of the potential energy generation;\n    \item[IV:] The spectrum of the energy dissipation by linear bottom drag;\n    \item[V:] The spectrum of energy loss due to small scale dissipation.\n\end{description}\nWe assume that $V$ is relatively small and that, in statistical steady state, the budget above is dominated by I through IV.\n\n\subsection*{Enstrophy spectrum}\nSimilarly the evolution of the barotropic enstrophy spectrum,\n\begin{equation}\nZ(k,l) \equiv \frac{1}{2H} \sum_{i=1}^{\nmax} H_i 
|\\hat{q}_i|^2\\com\n\\end{equation}\nis governed by\n\\begin{equation}\n    \\frac{d}{d t} Z(k,l) = {\\text{Re}[\\hat{q}_i^\\star {\\mathsf{\\hat{J}}(\\psi_i,q_i) ]}}\n    {-(k Q_y - l Q_x)\\text{Re}[(\\sS \\hat{\\psi}_i^\\star)\\hat{\\psi}_i]}\n    + { {\\hat{Z_\\ssd}}}\\com\n\\end{equation}\nwhere the terms above on the right represent, from\nleft to right,\n\\begin{description}\n    \\item[I:]   The spectral divergence of barotropic potential enstrophy flux;\n    \\item[II:]  The spectrum of  barotropic potential enstrophy generation;\n    \\item[III:] The spectrum of  barotropic potential enstrophy loss due to small scale dissipation.\n\\end{description}\nThe enstrophy dissipation is concentrated at the smallest scales resolved in the model and, in statistical steady\nstate, we expect the budget above to be dominated by the balance between I and II.\n\n\\section*{Special case: two-layer model}\nWith $\\nmax = 2$, an alternative notation for the perturbation of potential vorticities can be written as\n\\begin{align}\n    q_1 &= \\lap \\psi_1 + F_1 (\\psi_2 - \\psi_1) \\nonumber\\\\\n    q_2 &= \\lap \\psi_2 + F_2 (\\psi_1  - \\psi_2)\\com\n\\end{align}\nwhere we use the following definitions\nwhere\n\\begin{equation}\nF_1 \\equiv \\frac{k_d^2}{1 + \\delta^2}\\,, \\qquad \\:\\:\\text{and} \\qquad F_2 \\equiv \\delta \\,F_1\\,,\n\\end{equation}\nwith the deformation wavenumber\n\\begin{equation}\nk_d^2 \\equiv \\, \\frac{f_0^2}{g} \\frac{H_1+H_2}{H_1 H_2} \\per\n\\end{equation}\nWith this notation, the ``stretching matrix'' is simply\n\\begin{equation}\n\\sS = \\begin{bmatrix}\n- F_1 \\qquad \\:\\:\\:\\:F_1\\\\\nF_2 \\qquad -  + F_2\n\\end{bmatrix}\\per\n\\end{equation}\nThe inversion relationship in Fourier space is\n\\begin{equation}\n\\begin{bmatrix}\n\\hat{\\psi}_1\\\\\n\\hat{\\psi}_2\\\\\n\\end{bmatrix}\n= \\frac{1}{\\text{det} \\: \\sB}\n\\begin{bmatrix}\n-(\\kappa^2 + F_2) \\qquad \\:\\:\\:\\:-F_1\\\\\n\\:\\:\\:\\: -F_2 \\qquad - (\\kappa^2 + F_1)\n\\end{bmatrix}\n\\begin{bmatrix}\n\\hat{q}_1\\\\\n\\hat{q}_2\\\\\n\\end{bmatrix}\\com\n\\end{equation}\nwhere \n\\begin{equation}\n\\qquad \\text{det}\\, \\sB = \\kappa^2\\left(\\kappa^2 + F_1 + F_2\\right)\\,.\n\\end{equation}\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "37d1fc2f6f0e220e717ae70d89c1080c6479a1b5", "size": 7998, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/equations/notation_layered.tex", "max_stars_repo_name": "pittwolfe/pyqg", "max_stars_repo_head_hexsha": "3a4b8b0a53dc0204a437376ffdcb981568edb111", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 100, "max_stars_repo_stars_event_min_datetime": "2015-08-31T19:05:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T23:36:33.000Z", "max_issues_repo_path": "docs/equations/notation_layered.tex", "max_issues_repo_name": "pittwolfe/pyqg", "max_issues_repo_head_hexsha": "3a4b8b0a53dc0204a437376ffdcb981568edb111", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 212, "max_issues_repo_issues_event_min_datetime": "2015-08-30T04:07:37.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T22:42:16.000Z", "max_forks_repo_path": "docs/equations/notation_layered.tex", "max_forks_repo_name": "pittwolfe/pyqg", "max_forks_repo_head_hexsha": "3a4b8b0a53dc0204a437376ffdcb981568edb111", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 75, "max_forks_repo_forks_event_min_datetime": "2015-08-31T16:16:23.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T17:33:30.000Z", "avg_line_length": 41.4404145078, "max_line_length": 
377, "alphanum_fraction": 0.6579144786, "num_tokens": 3100, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.5518870897714352}}
{"text": "\\documentclass{ecnreport}\n\n\\stud{Control \\& Robotics Master}\n\\topic{Manipulator Modeling \\& Control}\n\n\\begin{document}\n  \n  \\inserttitle{Manipulator Modeling \\& Control}\n  \n  \\section{Content of this lab}\n  \n  The goal of this lab is to program in C++ the three fundamentals models for robot manipulators:\n  \\begin{itemize}\n    \\item Direct Geometric Model: what is the pose of the end-effector for a given joint configuration?\n    \\item Inverse Geometric Model: what joint values correspond to a given end-effector pose?\n    \\item Kinematic Model: what is the velocity screw of the end-effector when the joints move?\n  \\end{itemize}\n  These models will be applied on three different robots, and used to perform basic point-to-point control.\\\\\n  \n  As it is not a lab on C++, most of the code is already written and you only have to fill the required functions:\n  \\begin{itemize}\n    \\item \\texttt{Robot::init\\_wMe()} to initialize the fixed transform between the wrist and end-effector\n    \\item \\texttt{Robot::fMw(q)} for the direct geometric model wrist-to-fixed frame\n    \\item \\texttt{Robot::inverseGeometry(M)} for...well, the inverse geometry\n    \\item \\texttt{Robot::fJw} for the kinematic model wrist-to-fixed frame\n  \\end{itemize}\n  \n  \\subsection{Installing and loading the lab in Qt Creator}\n  \n  This project uses the ROS\\footnote{Robot Operating System, http://www.ros.org} framework which imposes some particular steps to configure the environment. An actual ROS course will be held in the second semester, for now just configure as follows:\n  \n  \\begin{enumerate}\n    \\item Open a terminal and create (if needed) a ROS workspace: \\texttt{mkdir -p $\\sim$/ros/src}\n    \\item Go inside this folder \\texttt{cd $\\sim$/ros/src}\n    \\item Download this repository: \\texttt{git clone https://github.com/oKermorgant/ecn\\_manip}.\n    \\item Compile the workspace: \\texttt{cd .. \\&\\& catkin build}\n    \\item Create the QtCreator configuration file: \\texttt{cd src/ecn\\_manip \\&\\& gqt}\n    \\item Open QtCreator and load the \\texttt{$\\sim$/ros/src/ecn\\_manip/CMakeLists.txt} file\n    \\item The files should be displayed and ready to run\n    \\item Compilation is done by clicking the bottom-left hammer\n    \\item Run your program with the green triangle. It can be stopped by clicking on the red square\n  \\end{enumerate}\n  \n  \\subsection{Expected work}\n  \n  The {\\bf only} files to be modified are:\n  \\begin{itemize}\n    \\item \\texttt{control.cpp}: main file where the control is done depending on the current robot mode\n    \\item \\texttt{robot\\_turret.cpp}: model of the RRP turret robot\n    \\item \\texttt{robot\\_kr16.cpp}: model of the industrial Kuka KR16 robot\n    \\item \\texttt{robot\\_ur10.cpp}: model of the Universal Robot UR-10\n  \\end{itemize}\n  We will use the ViSP\\footnote{Visual Servoing Platform, http://visp.inria.fr} library to manipulate mathematical vectors and matrices, including frame transforms, rotations, etc. The main classes are detailed in Appendix \\ref{visp}.\\\\\n  At the end of the lab, you should upload them through the \\link{http://pagesperso.ls2n.fr/~kermorgant-o/student links.php}{portal on my website}\n  \n  \\section{The robots}\n  \n  \\def\\wMe{{}^w\\M_e}\n  \n  Three robots are proposed with increasing complexity. 
You should start with the Turret, then the Kuka KR16, then the UR-10.\\\\\n  For each of these robots, the table of Modified Denavit-Hartenberg parameters should be defined. As we saw during the lectures, once you have the table, the Direct Geometric and Direct Kinematic Models can be derived almost automatically. A tool is provided in this package to generate C++ code for the Geometric and Kinematic models; see Appendix \\ref{dhcode} for details on this tool.\n  \n  The fixed frame and the end-effector frame are imposed by the schematics.\n  All intermediary frames have to be placed according to the MDH convention, with a final fixed transform between the wrist and end-effector frames.\n  \n  \\subsection{Turret RRP robot}\n  \n  \\begin{minipage}{.8\\linewidth}\n    This is a very simple robot with which to begin the modeling, as shown in the figure.\\\\\n    The fixed frame $\\Frame{0}$ and end-effector frame $\\Frame{e}$ are imposed, as well as the joint directions.\\\\\n    The constant values for this model are: $b = 0.5$ m and $d = 0.1$ m.\\\\~\\\\\n    All computations may be done first without using the Denavit-Hartenberg parameters, as the robot is quite simple.\n  \\end{minipage}\n  \\begin{minipage}{.2\\linewidth}\n    \\includegraphics{fig/turret}\\label{turret}\n  \\end{minipage}\n  \n  \\subsection{Kuka KR16 anthropomorphic robot}\n  \n  This robot is a classical 6R architecture with a spherical wrist, as shown in the next figure:\n  \n  \\begin{figure}[h!]\\centering\n    \\includegraphics[width=.6\\linewidth]{fig/kr16}\n    \\caption{Model of the Kuka KR16 with imposed fixed and end-effector frames}\n  \\end{figure}\n  \n  The wrist frame is typically at the common point between joints 4, 5 and 6.\n  The constant transform $\\wMe$ between the wrist and the end-effector should include the $0.158$ distance and potential rotations.\n  \n  \\subsection{Universal Robot UR-10 anthropomorphic robot}\n  \n  This robot is a less classical 6R architecture, without a spherical wrist. A tool is also attached, with an arbitrary end-effector offset.\n  \n  \\begin{figure}[h!]\\centering\n    \\includegraphics[width=.6\\linewidth]{fig/ur10}\n    \\caption{Model of the UR-10 with imposed fixed and end-effector frames}\n  \\end{figure}\n  \n  The Inverse Geometry is given for this robot.\n  The constant transform $\\wMe$ between the wrist and the end-effector should include the $0.1922$ distance and potential rotations.\n  \n  \n  \\newpage\n  \\subsection{Running the simulation}\n  \n  Once everything is installed and compiled, the base simulation can be run from a terminal (only 1 line, depending on the robot you work on):\n  \\cppstyle\n  \\begin{lstlisting}\n  roslaunch ecn_manip turret.launch\n  roslaunch ecn_manip kr16.launch\n  roslaunch ecn_manip ur10.launch\n  \\end{lstlisting}\n  \n  This will display two windows:\n  \\begin{itemize}\n    \\item RViz is the 3D simulator that shows the current configuration of the robot. It will also display the base and end-effector frames, and the ones that you compute. You can thus easily check if the computation is fine.\n    \\item The other window is composed of 4 panels:\n    \\begin{itemize}\n      \\item Buttons at the top left to change the current exercise\n      \\item Sliders at the bottom left to manually control the robot (exercises 1 and 2)\n      \\item Sliders at the top to change the switching time of the position control (exercises 3+)\n      \\item A plotting region with two tabs, one for the operational space and one for the joint space. 
The joint space is displayed in a normalized fashion, depending on the joint limits (upper and lower dotted lines)\n    \\end{itemize}\n  \\end{itemize}\n  \n  \n  \\section{Building the models}\n  \n  \\subsection*{The main code}\n  \n  In the file \\texttt{control.cpp}, the \\texttt{if - else if} blocks correspond to the 6 exercises to be done for each robot. Note that if they work for the Turret robot, then they should work for all robots as long as the model is correct. \\\\\n  The current exercise is changed by using the buttons in the GUI.\n  \n  \\subsection*{Exercise 1: Checking the Direct Geometric Model}\n  \n  In this exercise, the robot follows the manually given joint positions. The task is to build and print the direct geometric model, depending on the current value of $\\q$. This is to be done in two functions:\n  \\begin{itemize}\n    \\item \\texttt{Robot::init\\_wMe()} where you have to write the constant matrix $\\wMe$ from the end-effector to the wrist frame\n    \\item \\texttt{Robot::fMw(q)}  where you have to write and return the transform $\\M$ between the fixed and wrist frames\n  \\end{itemize}\n  Once you have computed the \\texttt{M} matrix, the code calls \\texttt{robot->checkPose(M);} in order to print both your matrix and the true one. Needless to say, they should be equal.\\\\\n  What can be said is that it is useless to continue until they are.\n  \n  \\subsection*{Exercise 2: Checking the Direct Kinematic Model (Jacobian)}\n  \n  If you select \\texttt{Manual Operational Velocity} in the GUI, then the sliders allow you to give a desired operational velocity (i.e. a twist) for the end-effector.\\\\\n  This velocity is ${}^e\\v_e$ and is defined in the end-effector frame.\\\\\n  The robot Jacobian has to be defined in the function \\texttt{Robot::fJw(q)}, which should return the Jacobian of the wrist in the fixed frame.\\\\\n  \n  \n  In the main code, the commanded velocity then has to be mapped to the joint velocities in two steps:\n  \\begin{enumerate}\n    \\item Map to the fixed frame: ${}^f\\mathbf{v}_e^* = \\left[\\begin{array}{cc}{}^f\\R_e & 0 \\\\ 0 & {}^f\\R_e\\end{array}\\right]{}^e\\mathbf{v}_e^*$\\\\\n    Call \\texttt{M.getRotationMatrix()} to get this rotation.\\\\\n    The \\texttt{ecn::putAt} function may be of good use.\n    \\item Map to the joint space: $\\dot\\q = \\J^+.{}^f\\mathbf{v}_e^*$\\\\\n    Call \\texttt{robot->fJe(q).pseudoInverse()} to get the Jacobian pseudo-inverse\n  \\end{enumerate}\n  You can check that the end-effector is able to follow straight 3D lines in the x, y or z direction in its own frame.\n  Similarly, it should be able to rotate around x, y or z. This means the Jacobian computation is correct.
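\n  \n  For reference, these two steps can be prototyped outside the lab code. Below is a rough numpy sketch (Python, not the lab's C++/ViSP; \\texttt{R} and \\texttt{J} are placeholders that would come from the robot model):\n  \\begin{lstlisting}\n  import numpy as np\n\n  def twist_to_qdot(R, J, v_e):\n      # R: 3x3 rotation fRe, J: 6xn Jacobian fJe, v_e: twist in frame e\n      T = np.zeros((6, 6))   # block-diagonal rotation of the twist\n      T[:3, :3] = R\n      T[3:, 3:] = R\n      v_f = T @ v_e                    # step 1: map to the fixed frame\n      return np.linalg.pinv(J) @ v_f   # step 2: qdot = J^+ v\n  \\end{lstlisting}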
\n  \n  \\newpage\n  \\subsection*{Exercise 3: Checking the Inverse Geometric Model}\n  \n  If you select \\texttt{Direct point-to-point} in the GUI, then the robot will receive two operational setpoints, at a given sampling time that can be changed with the \\texttt{Switching time} slider in the GUI. The current setpoint is in the matrix \\texttt{Md} and its corresponding joint position is in the vector \\texttt{qf}.\n  \n  Nothing has to be done in the main code, but of course the \\texttt{Robot::inverseGeometry(M)} function has to be filled in order to compute the actual inverse geometry solution. The robot will try to have all joints reach their desired value as soon as possible, which will lead to a discontinuous motion.\\\\\n  \n  In practice, this exercise is there to check that the Inverse Geometric Model is correct: the robot should reach the desired pose.\\\\\n  \n  When computing the inverse geometry, try to find a solution as near as possible to the passed \\texttt{q0}.\\\\\n  \n  Many useful functions are available to solve classical trigonometric equations. They are detailed in Appendix \\ref{trigsolve}.\\\\\n  \n  A given potential solution should be stored using \\texttt{addCandidate(\\{q1, q2, ..., qn\\});}\\\\ At the end of the Inverse Geometry function, use \\texttt{return bestCandidate(q0);}, which will pick the candidate that is the nearest from the current joint position and also within the joint limits.\n  \n  \\subsection*{Exercise 4: Interpolated point-to-point control}\n  \n  In this exercise, the previous pose setpoint \\texttt{M0} and its corresponding joint position \\texttt{q0} should be used to interpolate the joint positions.\\\\\n  \n  Here the goal is to compute the current setpoint $\\q_c = \\q_0 + p(t)(\\q_f-\\q_0)$. The interpolation function $p(t)$ was defined during the lectures and should take into account the maximum joint velocities and accelerations that are stored in the \\texttt{vMax} and \\texttt{aMax} vectors. Basically this amounts to computing the minimum time \\texttt{tf} needed to reach $\\q_f$ from $\\q_0$. This computation should be done in the \\texttt{if(robot->newRef())} block and then used to compute the current \\texttt{qCommand}.
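\n  \n  As a rough illustration of the minimum-time computation (assuming a trapezoidal velocity profile, which may differ from the exact $p(t)$ of the lectures), in Python/numpy rather than the lab's C++:\n  \\begin{lstlisting}\n  import numpy as np\n\n  def min_time(dq, v_max, a_max):\n      # per-joint minimal time, trapezoidal or triangular profile\n      D = np.abs(dq)\n      t_tri = 2 * np.sqrt(D / a_max)        # v_max never reached\n      t_trap = D / v_max + v_max / a_max    # cruise at v_max\n      t = np.where(D <= v_max**2 / a_max, t_tri, t_trap)\n      return float(t.max())   # synchronize all joints on the slowest\n  \\end{lstlisting}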
\n  \n  \\subsection*{Exercise 5: Operational control through Inverse Geometric Model}\n  \n  Exercises 3 and 4 lead to the correct pose, but not along a straight 3D line. Indeed, the robot is only controlled in the joint space, with no constraint between the two extreme poses.\\\\\n  In this exercise, the goal is to perform the interpolation not in the joint space but in the Cartesian space. In practice, we will compute many poses between \\texttt{M0} and \\texttt{Md} and command the robot to reach all these poses sequentially. As they will be pretty close to one another, the resulting motion will be close to a straight line.\n  \n  The \\texttt{robot->intermediaryPose(M0, Md, alpha)} function returns a pose between \\texttt{M0} and \\texttt{Md}, where \\texttt{alpha} is between 0 and 1 and should be computed from $t$, $t_0$ and $t_f = 1$ s.\n  \n  Then, the inverse geometry of this pose should be sent to the robot as the joint position setpoint.\n  \n  \\newpage\n  \\subsection*{Exercise 6: Operational control through Jacobian}\n  \n  In this exercise the goal is still to follow a straight line between the two points, but this time by sending a joint velocity command.\\\\\n  \n  As seen in class, this command should make the end-effector approach its desired pose. The steps are:\n  \\begin{enumerate}\n    \\item Compute the pose error in the desired frame: ${}^{e*}\\M_e = {}^{f}\\M_{e*}^{-1}.{}^{f}\\M_e$\n    \\begin{itemize}\n      \\item in the code, ${}^{f}\\M_{e*}$ corresponds to \\texttt{Md} and ${}^{f}\\M_e$ to \\texttt{M}\n      \\item In practice we use the $(\\mathbf{t}, \\theta\\u)$ representation with \\texttt{p.buildFrom()}\n    \\end{itemize}\n    \\item Compute the desired linear and angular velocities: $v = -\\lambda{}^{f}\\M_{e*}\\mathbf{t}$, \\quad $\\omega = -\\lambda{}^{f}\\M_{e} (\\theta\\u)$\n    \\item Map these velocities to the joint space: $\\dot\\q = \\J^+ \\left[\\begin{array}{c}v\\\\\\omega\\end{array}\\right]$\n  \\end{enumerate}\n  $\\lambda$ is a gain that can be changed from the GUI; increasing it leads to a faster convergence. It can be obtained through \\texttt{robot->lambda()}.\n  \n  \n  \n  \\appendix\n  \n  \n  \\newpage\n  \n  \\section{Using ViSP}\\label{visp}\n  \n  ViSP is a library for Visual Servoing (wait for the M2 Robotics course to know more!) and also includes all the necessary tools to play with mathematical vectors, matrices and 3D transforms.\n  The full documentation is here: \\link{http://visp-doc.inria.fr/doxygen/visp-3.1.0/classes.html}{http://visp-doc.inria.fr/doxygen/visp-3.1.0/classes.html}.\\\\\n  The main classes to be used are:\n  \\begin{itemize}\n    \\item \\texttt{vpMatrix}: a classical matrix of given dimensions; it can be multiplied with other matrices and vectors if the dimensions are compatible. The pseudo-inverse of matrix \\texttt{M} is available with \\texttt{M.pseudoInverse()}\n    \\item \\texttt{vpColVector}: a column vector of given dimension\n    \\item \\texttt{vpHomogeneousMatrix}: the classical $4\\times 4$ 3D transform matrix\n  \\end{itemize}\n  Most of the model computation is about initializing a matrix or vector to the correct dimensions (which is already done) and filling its elements with formulae.\\\\\n  To write 1 at element (i,j) of matrix M, just write \\texttt{M[i][j] = 1;}\\\\\n  Similarly, to write 1 at element (i) of vector q, just write \\texttt{q[i] = 1;}\\\\\n  \n  3D transforms can easily be converted between homogeneous matrix, rotation matrix, translation vector or $\\theta\\u$ vector. Another type, called a pose vector, consists of the full $(\\mathbf{t}, \\theta\\u)$ vector of dimension 6:\n  \\cppstyle\n  \\begin{lstlisting}\n  vpHomogeneousMatrix M;\n  vpPoseVector p;\n  vpRotationMatrix R;\n  vpTranslationVector t;\n  vpColVector tu(3);\n  \n  M.getRotationMatrix()*t;\t// extract the rotation part from M and multiply with t\n  p.buildFrom(M);\t\t\t// build (t, theta-u) from M\n  t = M.getTranslationVector();\t// extract the translation part\n  \n  // copy the theta-u part of p into tu\n  for(int i = 0; i < 3; ++i)\n    tu[i] = p[i+3];\n  \n  // compute inverse transform\n  M = M.inverse();\n  \\end{lstlisting}\n  \n  Two utility functions are provided to put a matrix or vector into a bigger one:\n  \\begin{itemize}\n    \\item \\texttt{void ecn::putAt(M, Ms, r, c)}: writes matrix \\texttt{Ms} inside matrix \\texttt{M}, starting from the given row and column.\n    \\item \\texttt{void ecn::putAt(V, Vs, r)}: writes column vector \\texttt{Vs} inside vector \\texttt{V}, starting from the given row. 
\n  \n  \n  \\newpage\n  \\section{Code generation from Modified Denavit-Hartenberg parameters}\\label{dhcode}\n  \n  The core of robot modeling is to build the MDH table. From this, the DGM and KM can be derived automatically. The Inverse Geometry still has to be done by hand (even if some advanced tools allow code generation for the IG resolution).\n  \n  As it is quite tedious to have to compute all matrices each time a candidate MDH table is investigated, a tool is provided to generate the C++ code. An example can be found in the file \\texttt{dh\\_example.yml}:\n  \\cppstyle\n  \\begin{lstlisting}\n  notation: [alpha, a, theta, r]\n  joint:\n  1: [0, 0, q1, 0]\n  2: [-pi/2, 0, 0, q2]\n  3: [pi/2, 0, q3, r3]\n  4: [-pi/2, a4, q4+pi/2, 0]\n  \\end{lstlisting}\n  The command to be run is: \\texttt{rosrun ecn\\_manip dh\\_code.py <name of the file>}.\\\\\n  In the case of the above MDH table, it leads to these outputs:\n  \n  \\begin{minipage}{.45\\linewidth}\n    \\cppstyle \\raggedright\n    \\begin{lstlisting}\n    // Generated pose code\n    const double c1 = cos(q[0]);\n    const double c1p3 = cos(q[0]+q[2]);\n    const double c4 = cos(q[3]);\n    const double s1 = sin(q[0]);\n    const double s1p3 = sin(q[0]+q[2]);\n    const double s4 = sin(q[3]);\n    M[0][0] = -s4*c1p3;\n    M[0][1] = -c4*c1p3;\n    M[0][2] = -s1p3;\n    M[0][3] = a4*c1p3 - q[1]*s1;\n    M[1][0] = -s4*s1p3;\n    M[1][1] = -s1p3*c4;\n    M[1][2] = c1p3;\n    M[1][3] = a4*s1p3 + q[1]*c1;\n    M[2][0] = -c4;\n    M[2][1] = s4;\n    M[2][2] = 0;\n    M[2][3] = r3;\n    M[3][0] = 0;\n    M[3][1] = 0;\n    M[3][2] = 0;\n    M[3][3] = 1.;\n    // End of pose code\n    \\end{lstlisting}\n  \\end{minipage}\n  \\begin{minipage}{.1\\linewidth}\n    \\quad\\quad\n  \\end{minipage}\n  \\begin{minipage}{.45\\linewidth}\n    \\cppstyle \\raggedright\n    \\begin{lstlisting}\n    // Generated Jacobian code\n    const double c1 = cos(q[0]);\n    const double c1p3 = cos(q[0]+q[2]);\n    const double s1 = sin(q[0]);\n    const double s1p3 = sin(q[0]+q[2]);\n    J[0][0] = -a4*s1p3 - q[1]*c1;\n    J[0][1] = -s1;\n    J[0][2] = -a4*s1p3;\n    //J[0][3] = 0;\n    J[1][0] = a4*c1p3 - q[1]*s1;\n    J[1][1] = c1;\n    J[1][2] = a4*c1p3;\n    //J[1][3] = 0;\n    //J[2][0] = 0;\n    //J[2][1] = 0;\n    //J[2][2] = 0;\n    //J[2][3] = 0;\n    //J[3][0] = 0;\n    //J[3][1] = 0;\n    //J[3][2] = 0;\n    J[3][3] = -s1p3;\n    //J[4][0] = 0;\n    //J[4][1] = 0;\n    //J[4][2] = 0;\n    J[4][3] = c1p3;\n    J[5][0] = 1.;\n    //J[5][1] = 0;\n    J[5][2] = 1.;\n    //J[5][3] = 0;\n    // End of Jacobian code\n    \\end{lstlisting}\n  \\end{minipage}\n  This is exactly what you would get by doing the manual computation from the initial table, which I hope is quite appreciated.\\\\\n  \n  Fixed transforms for the initial and end-effector frames can also be written with the keys \\texttt{f:} and \\texttt{e:}\\\\\n  \n  If \\texttt{--display} is passed, the script will display the direct geometry with classical math notations. This can be useful for the inverse geometry computation.\\\\\n  For a 6-DOF robot, passing \\texttt{--wrist} will display useful information to benefit from a spherical wrist: math expressions for $\\Pass{0}{\\mathbf{t}}{6}(q_1,q_2,q_3)$ and  $\\Pass{3}{\\R}{6}(q_4,q_5,q_6)$, and code for $\\Pass{0}{\\R}{3}(q_1,q_2,q_3)$.
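\n  \n  For instance, with the example file above, the command and the option combine as \\texttt{rosrun ecn\\_manip dh\\_code.py dh\\_example.yml --display} (assuming the file is reachable from the current directory).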
\n  \\newpage\n  \\section{Trigonometric solvers}\\label{trigsolve}\n  \n  The code contains solvers for all types of trigonometric equations (except type 1, which is obvious).\\\\\n  They take as arguments the coefficients of the equations. They return:\n  \\begin{itemize}\n    \\item A \\texttt{std::vector<double>} for equations of 1 unknown (types 2 \\& 3).\\\\We can go through the valid solutions with:\n    \\cppstyle\n    \\begin{lstlisting}\n    // assume q1 should ensure type 2: X.sin(q1) + Y.cos(q1) = Z\n    \n    for(double q1: solveType2(X, Y, Z))\n    {\n    // here q1 is a valid solution, may be used to deduce other joint values\n    }\n    \\end{lstlisting}\n    \\item A \\texttt{std::vector<JointSolution>} for equations of 2 unknowns (types 4 to 8).\\\\In this case, the valid solutions can be explored with:\n    \\cppstyle\n    \\begin{lstlisting}\n    // assume q1 and q2 should ensure type 6 equation: \n    //\t\tW.sin(q2) = X.cos(q1) + Y.sin(q1) + Z1\n    //\t\tW.cos(q2) = X.sin(q1) - Y.cos(q1) + Z2\n    \n    for(auto qs: solveType6(X, Y, Z1, Z2, W))\n    {\n    // here a (q1,q2) solution is available in qs.qi and qs.qj\n    }\n    \\end{lstlisting}\n  \\end{itemize}\n  \n  Furthermore, a function allows retrieving the 12 elements of $\\Pass{0}{\\M}{n*}$ from the matrix $\\Pass{f}{\\M}{e*}$ given to the inverse geometry solver:\n  \\begin{cppcode}\n    const auto [xx,xy,xz,yx,yy,yz,zx,zy,zz,tx,ty,tz] = explodeMatrix(fMe_des);\n\\end{cppcode}\nThis function outputs the matrix terms according to:\n\\begin{equation*}\n\\Hom{0}{n*} = \\Hom{0}{f}\\Hom{f}{e*}\\Hom{e}{n} = \\left[\\begin{matrix}x_x & y_x & z_x & t_x\\\\x_y & y_y & z_y & t_y\\\\x_z & y_z & z_z & t_z\\\\0 & 0 & 0 & 1\\end{matrix}\\right] \n\\end{equation*}\n\n\n  \n  \\newpage\n  Below is the syntax for each trigonometric solver:\\\\\n  \n  %\\setlength\\extrarowheight{5pt}\n  \\renewcommand{\\arraystretch}{2.2}\n  \\begin{tabular}{|c|c|C{7cm}|C{2cm}|}\\hline\n    Type & Equation & Syntax & Returns (vector of)\\\\\\hline\n    1 & $Xq_i = Y$& does not exist &$q_i$\t\\\\\\hline\t\t\n    2 & $Xs_i+Yc_i = Z$ & \\texttt{solveType2(x, y, z)}&$q_i$\\\\\\hline\n    3 & $\\begin{array}{l}X_1s_i+Y_1c_i = Z_1 \\\\ X_2s_i+Y_2c_i = Z_2\\end{array}$&\\texttt{solveType3(x1, y1, z1,\n      x2, y2, z2)} &$q_i$\\\\\\hline\n    4 & $\\begin{array}{l}X_1q_js_i = Y_1 \\\\ X_2q_jc_i = Y_2\\end{array}$& \\texttt{solveType4(x1, y1, x2, y2)}&$(q_i,q_j)$ \\\\\\hline\n    5 & $\\begin{array}{l}X_1s_i = Y_1+Z_1q_j \\\\ X_2c_i = Y_2+Z_2q_j\\end{array}$& \\texttt{solveType5(x1, y1, z1,\n      x2, y2, z2)}&$(q_i,q_j)$\\\\\\hline\t\t\t\n    6 & $\\begin{array}{l}Ws_j = Xc_i + Ys_i+Z_1 \\\\ Wc_j = Xs_i - Yc_i+Z_2\\end{array}$& \\texttt{solveType6(x, y,\n      z1, z2, w)}&$(q_i,q_j)$\\\\\\hline\n    7 & $\\begin{array}{l}W_1c_j +W_2s_j = Xc_i + Ys_i+Z_1 \\\\ W_1s_j -W_2c_j = Xs_i - Yc_i+Z_2\\end{array}$& \\texttt{solveType7(x, y, z1, z2, w1, w2)}&$(q_i,q_j)$\\\\\\hline\n    8 &$\\begin{array}{l}Xc_i+Yc_{ij} =Z_1 \\\\ Xs_i+Ys_{ij} =Z_2\\end{array}$ & \\texttt{solveType8(x, y, z1, z2)}&$(q_i,q_j)$\\\\\\hline\n  \\end{tabular}\n  \n  \n\\end{document}\n", "meta": {"hexsha": "6769114246b911367e26ec1d9551d6e0ece65c3e", "size": 24363, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "subject/manip_lab.tex", "max_stars_repo_name": "oKermorgant/ecn_manip", "max_stars_repo_head_hexsha": "41d398a3c1b3a30f89b1259275af4adba97f1be3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "subject/manip_lab.tex", "max_issues_repo_name": "oKermorgant/ecn_manip", "max_issues_repo_head_hexsha": "41d398a3c1b3a30f89b1259275af4adba97f1be3", "max_issues_repo_licenses": 
["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "subject/manip_lab.tex", "max_forks_repo_name": "oKermorgant/ecn_manip", "max_forks_repo_head_hexsha": "41d398a3c1b3a30f89b1259275af4adba97f1be3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-11-08T13:32:15.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-13T23:33:37.000Z", "avg_line_length": 58.0071428571, "max_line_length": 511, "alphanum_fraction": 0.6313261914, "num_tokens": 6827, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585903489891, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5518870861327739}}
{"text": "%!TEX root = ../PhD_thesis__Lilian_Besson\n% ******************************* Thesis Appendix A ****************************\n\\chapter{Additional Mathematical Tools}\n\\label{app:1:mathsTools}\n\nIn this appendix, we present some mathematical tools that are used in various proofs in this thesis.\n\n% ----------------------------------------------------------------------------\n\\section{Concentration Inequalities}\n\\label{sec:app1:ConcentrationInequalities}\n\n\\TODOL{I probably need to include Hoeffding's inequality, Chernoff's method etc?}\n\nReference: \\cite{Boucheron2013} and maybe the original paper by Hoeffding, Chernoff etc.\n\n\n% ----------------------------------------------------------------------------\n\\section{Useful lemmas}\n\\label{sec:app1:UsefulLemmas}\n\n\\TODOL{I probably need to include a bunch of useful lemmas, used in various places in the thesis, like Pinsker's inequality, etc.}\n\n\n% ----------------------------------------------------------------------------\n\\section{Results about klUCB}\n\\label{sec:app1:resultsklUCB}\n\n\\TODOL{I probably need to include some results about klUCB, mainly coming from \\cite{KLUCBJournal}, used in various places in the thesis?}\n", "meta": {"hexsha": "d62bab17f7aaa931367e026fec5741dc79ba3fd4", "size": 1171, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "3-Appendices/1-Appendix/appendix1.tex", "max_stars_repo_name": "Naereen/phd-thesis", "max_stars_repo_head_hexsha": "0fa93ca0d738771f4215bc4aeb66157f2026ba00", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-11-18T12:22:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T19:29:48.000Z", "max_issues_repo_path": "3-Appendices/1-Appendix/appendix1.tex", "max_issues_repo_name": "Naereen/phd-thesis", "max_issues_repo_head_hexsha": "0fa93ca0d738771f4215bc4aeb66157f2026ba00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2019-11-18T09:19:15.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-08T14:13:08.000Z", "max_forks_repo_path": "3-Appendices/1-Appendix/appendix1.tex", "max_forks_repo_name": "Naereen/phd-thesis", "max_forks_repo_head_hexsha": "0fa93ca0d738771f4215bc4aeb66157f2026ba00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-28T20:56:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-13T11:11:57.000Z", "avg_line_length": 40.3793103448, "max_line_length": 138, "alphanum_fraction": 0.5900939368, "num_tokens": 253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5518870853480167}}
{"text": "\\documentclass[11pt]{article}\n\n\\usepackage{alltt,fullpage,graphics,color,epsfig,amsmath, amssymb}\n\\usepackage{hyperref}\n\\usepackage{boxedminipage}\n\\usepackage[ruled,vlined]{algorithm2e}\n\\usepackage{mathtools}\n\\usepackage{amsmath}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\newcommand\\myeq{\\stackrel{\\mathclap{\\normalfont\\mbox{rank}}}{=}}\n\\newcommand{\\floor}[1]{\\lfloor #1 \\rfloor}\n\\newcommand{\\ceil}[1]{\\lceil #1 \\rceil}\n\n\\title{CS 510 Assignment 2}\n\\author{Daniel Campos}\n\\date{March 14th,2021}\n\\begin{document}\n\\maketitle\n\\section{Probabilistic Retrieval Model}\n\\subsection{Multinominal Ranking}\n\\subsubsection{Show that ranking if a document is relevant to a query is equivalent to the sum of the probability of each word being relevant to the query}\nFirst off, recapping the RSJ model where $score(Q,D) \\myeq  \\sum_{i=1, d_i=q_i=1}^k \\log \\frac{p_i(1-q_i)}{q_i(1-p_i)} $. We know that a multi-variate Bernoulli models focuses on the presence and absence of a feature where for IR this feature is occurrence in the document. We also know that a Multinomial model leverages the number of counts of a feature, commonly referred to as term frequency. Essentially, a Bernoulli model is the same as a multinomial model where all frequencies have been simplified to 1. Thus the first step is to introduce the term frequency into the calculation which gives $score(Q,D) \\myeq  \\sum_{i=1, d_i=q_i=1}^k c(w,D) \\log \\frac{p_i(1-q_i)}{q_i(1-p_i)} $. We then amplify the summation from all terms in the query present in the document ($\\sum_{i=1, d_i=q_i=1}^k$) to all the words in the vocabulary ($\\sum_{w \\in V}$) since we can no longer just focus on term occurrence but now have to focus on frequency. Since we have now are modeling the frequency of a feature instead of its absence/existence we drop the normalization of $(1-q_i)$ and $(1-p_i)$. Thus we arrive at our target formula $score(Q,D) \\myeq  \\sum_{w \\in V} c(w,D) \\log \\frac{p(w|Q, R=1)}{p(w|Q, R=0)} $\n\\subsubsection{How many parameters are in such a model}\n2: $p(w|Q, R=1)$ for $w \\in V$, and $p(w|Q, R=0)$ for $w \\in V$\n\\subsection{Give the formula for Maximum likelihood estimate of $p(w | Q, R= 0)$}\n$p(w | Q, R=0) = \\sum_{D \\in C} \\frac{tf_w}{|D|}$ where $tf_w$ is the amount of times a word $w$ occurs in document $D$ and $|D|$ is the document length.\n\\subsection{Give the formula for Maximum likelihood estimate of $p(w | Q, R= 1)$ where we use the query as the only relevant document}\n$p(w | Q, R=1) = \\frac{tf_w}{|Q|}$ where $tf_w$ indicates the occurrence of word $w$ in the query $Q$ and $|Q|$ represents the query length.\n\\subsection{Give the formula for smoothing the maximum likelihood estimate using Jelinek-Mercer with the collection language model}\n$p(w | Q, R=1) = (1- \\beta) \\frac{tf_w}{|Q|}+ \\beta p(w|C)$ where $tf_w$ indicates the occurrence of word $w$ in the query $Q$ and $|Q|$ represents the query length, $\\beta$ is a Jelinek-Mercer smoothing coefficient and $p(w|C)$ is the language model unigram probability.\n\\subsection{Write Down the retrieval function. 
Does the retrieval function capture TF, IDF, document length normalization?}\n$score(Q,D) \\myeq  \\sum_{w \\in V} c(w,D) \\log \\frac{(1- \\beta) \\frac{tfq_w}{|Q|}+ \\beta p(w|C)}{\\sum_{D \\in C} \\frac{tfd_w}{|D|}} $ where $tfq_w$ is the term frequency in the query $Q$ and $tfd_w$ is the term frequency in document $D$, $|Q|$ represents the query length, $|D|$ is the document length, $\\beta$ is a Jelinek-Mercer smoothing coefficient and $p(w|C)$ is the language model unigram probability. \\\\\nThis captures TF because we take the term probability and multiply it by the frequency. It captures IDF because we normalize the probability of the word in the query by how common the word is in the corpus, and it captures document length normalization because each of our estimates normalizes frequency by the query/document length.\n\\section{Language Models}\n\\subsection{Show that if we use the query likelihood scoring method with the Jelinek-Mercer smoothing function we can rank documents with \\\\$score(Q,D) \\myeq \\sum_{w \\in Q \\cap D} c(w,Q)\\log(1 + \\frac{(1-\\lambda)c(w,D)}{\\lambda p (w| C)*\\abs{D}})$}\nThis ranking function ranks documents by scoring the recall of query terms. First, the sum $\\sum_{w \\in Q \\cap D}$ means this will rank documents that have matches on query terms higher than any without. This is necessary to remove the noise that unmatched but high-probability terms would otherwise add to the document score. Next, this function multiplies the probability by the count of word occurrences in the query, $c(w,Q)$, which serves to give higher importance to common query terms. Finally, the ranking function includes a measure of term relevance in the given document. The numerator $(1-\\lambda)c(w,D)$ models the term occurrence in the document. The denominator $\\lambda p(w|C)*|D|$ represents the expected occurrence of the term in the document given its commonality in the corpus, since $p(w|C)$ is the overall unigram probability, which is multiplied by the document length $|D|$ to give the expected number of term occurrences in the document. This means that documents with a higher than corpus-wide expected term occurrence (and thus more relevant) receive a higher score than those with a lower than corpus-average occurrence. \n\\subsection{Vector Space}\n\\subsubsection{What would be the query vector}\nThe query vector in this case is represented by $\\sum_{w \\in Q \\cap D} c(w,Q)$, as we can essentially consider it a one-hot vector of query-document term matches. Since we only sum over terms that match, all unmatched terms are set to 0. Since we multiply by $c(w,Q)$, any term that is not 0 (and thus occurs) is set to its occurrence count in the query.\n\\subsubsection{What would be the document vector}\nThe document vector is the $\\log(1 + \\frac{(1-\\lambda)c(w,D)}{\\lambda p (w| C)*\\abs{D}})$ portion of the summation. Terms that occur in both the document and query are given the value of their occurrence in the document versus their expected occurrence in the document. \n\\subsubsection{What is the similarity function}\nThe similarity function is the inner product of the query vector and the document vector. Terms that occur in the document more often than the LM predicts make a document more relevant than terms that occur less often. These products are summed over the query and document vectors to produce a similarity score.
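\nAs a sanity check on the ranking function of 2.1, the decomposition follows from a standard manipulation of the Jelinek-Mercer mixture $p(w|D) = (1-\\lambda)\\frac{c(w,D)}{|D|} + \\lambda p(w|C)$, sketched here for reference:\n\\begin{align*}\n\\log p(w|D) &= \\log\\left(\\lambda p(w|C)\\left(1 + \\frac{(1-\\lambda)c(w,D)}{\\lambda p(w|C)|D|}\\right)\\right)\\\\\n&= \\log\\left(1 + \\frac{(1-\\lambda)c(w,D)}{\\lambda p(w|C)|D|}\\right) + \\log \\lambda p(w|C)\n\\end{align*}\nSummed over the query with weights $c(w,Q)$, the second term is document-independent and can be dropped for ranking, while terms with $c(w,D)=0$ contribute $\\log 1 = 0$, restricting the sum to $w \\in Q \\cap D$.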
\n\\subsubsection{Does the term weight in the document vector capture TF-IDF weighting and the document length normalization heuristics?}\nIt captures all 3 heuristics because of the document vector model. TF is captured by $c(w,D)$, IDF is captured by the use of the LM probability in the denominator ($p(w|C)$), and document length normalization is captured by the multiplication of the IDF term by the document length ($p(w|C) * |D|$).\n\\subsection{Are artificially long documents penalized using Jelinek-Mercer and Dirichlet prior smoothing?}\nIn Jelinek-Mercer, artificially long documents are not penalized because the smoothing function is static everywhere in the prediction. In other words, if we increase the document by size $k$, the scores will remain the same. On the other hand, Dirichlet prior smoothing actually prefers longer documents, since there is a separation between the document length normalization term ($n \\log \\frac{\\mu}{|d| + \\mu}$) and the term occurrence term ($\\log (1+\\frac{c(w,d)}{\\mu p(w|C)})$), and thus if we increase the document length by a factor of $k$, the first term grows by a factor of $k$, which places more importance on it than on the document length.\n\\section{KL-Divergence}\n\\subsection{Show that KL Divergence covers the query likelihood retrieval function if the language model is set to the word distribution in the query}\nKullback-Leibler divergence (or relative entropy) is written as $D(P | Q) = \\sum_x p(x) \\log \\frac{p(x)}{q(x)}$. If we build on the assumption that $p(w|\\theta_q)= \\frac{c(w,Q)}{|Q|}$ represents word likelihood given a query and $p(w|\\theta_D) = \\frac{c(w,D)}{|D|}$, then we can treat $\\theta_q$ and $\\theta_D$ as estimates of the query and document language models, and relevance can be measured by the negative KL divergence. We use the negative KL divergence because items that have a lower divergence between $q$ and $d$ are more relevant. Our query likelihood function becomes $score(Q,D) \\myeq - D(\\theta_Q| \\theta_D) = \\sum_w p(w|\\theta_Q)\\log p(w|\\theta_D) + (- \\sum_w p(w|\\theta_q) \\log p(w| \\theta_q))$. The second term is a query-dependent constant, which is ignored for ranking.\n\\section{Divergence Minimization Feedback}\nWe start off with our optimization problem $f(\\theta^*) = \\argmin [\\frac{1}{n} \\sum_{i=1}^n D(\\theta || \\theta_{e_i}) - \\lambda D(\\theta || \\theta_C)]$. \\\\ Using the fact that $g(p_1, p_2, ..., p_n) = \\sum_{w \\in V} p(w| \\theta) = 1$, we use the Lagrange multiplier to find the point of minimum entropy across all probability distributions, $p'$, which we do by requiring that $\\frac{\\delta}{\\delta p'} (f + \\lambda(g-1)) = 0$. \\\\ This gives a system of $\\lambda$ equations such that $\\frac{\\delta}{\\delta p_k} (-(\\sum_{j=1}^{\\lambda} p_j \\log (p_j)) + \\lambda(\\sum_{j=1}^{\\lambda} p_j -1)) = 0$. Carrying out the differentiation we get $-(\\frac{1}{\\ln 2}+\\log_2 p'_k) + \\lambda = 0$, which shows that all $p'_k$ are equal because they depend only on $\\lambda$. \\\\\nBy then using our constraint that $\\sum_{w \\in V} p(w|\\theta) = 1$, we find that $p'_k= \\frac{1}{\\lambda}$, meaning that the uniform distribution is the distribution with the most entropy and is thus the maximum. As a result the minimum happens at  $p'_k= \\frac{1}{1-\\lambda}$\\\\\nSubstituting this information in, $p(w|\\theta) \\propto \\exp(\\frac{1}{1-\\lambda}\\frac{1}{|n|}\\sum_{i=1}^n D(\\theta || \\theta_{E_i}) - \\frac{\\lambda}{1-\\lambda} D(\\theta || \\theta_C))$.
\\\nKnowing that $D(\\theta || \\theta_C) = \\log p(w|C)$ and $D(\\theta || \\theta_{E_i}) = \\log p(w|\\theta)$, we rewrite to our final form: $p(w|\\theta) \\propto \\exp(\\frac{1}{1-\\lambda}\\frac{1}{|n|}\\sum_{i=1}^n \\log p(w|\\theta_{E_i}) - \\frac{\\lambda}{1-\\lambda}\\log p(w|C))$\n\\end{document}", "meta": {"hexsha": "fff1373ac268f29d5553f546c809935ab3a8bda1", "size": 10054, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Assignments/2.tex", "max_stars_repo_name": "spacemanidol/CS510IR", "max_stars_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/2.tex", "max_issues_repo_name": "spacemanidol/CS510IR", "max_issues_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/2.tex", "max_forks_repo_name": "spacemanidol/CS510IR", "max_forks_repo_head_hexsha": "84def6a199aafe9a845d3235585204f3a3c060e4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 176.3859649123, "max_line_length": 1202, "alphanum_fraction": 0.7407996817, "num_tokens": 2788, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149978955811, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5518870853480167}}
{"text": "\\section{Risk Invariance Algorithms}\n  \\subimport{thesis/}{riskinvalgsintro.tex}\n  \\subimport{thesis/definitions/}{trustreduction.tex}\n\n  \\subimport{thesis/theorems/}{saturationtheorem.tex}\n  \\subimport{thesis/proofs/}{saturationproof.tex}\n\n  \\subimport{thesis/theorems/}{trusttransfertheorem.tex}\n  \\subimport{thesis/proofs/}{trusttransferproof.tex}\n\n  \\subimport{thesis/definitions/}{restrictedflow.tex}\n\n  \\subimport{thesis/lemmas/}{flowlimitlemma.tex}\n  \\subimport{thesis/proofs/}{flowlimitproof.tex}\n\n  \\subimport{thesis/theorems/}{trustsavingtheorem.tex}\n  \\subimport{thesis/proofs/}{trustsavingproof.tex}\n\n  \\subimport{thesis/theorems/}{invtrustrednaivetheorem.tex}\n  \\subimport{thesis/proofs/}{invtrustrednaiveproof.tex}\n\n  Until now $MaxFlow$ has been viewed purely as an algorithm. This algorithm is not guaranteed to always return the same\n  flow when executed muliple times on the same graph. However, the corresponding flow value, $maxFlow$, is always the same.\n  Thus $maxFlow$ can be also viewed as a function from a matrix of capacities to a non-negative real number. Under this\n  perspective, we prove the following theorem. Let $\\mathcal{C}$ be the family of all capacity matrices\n  $C = [c_{vw}]_{V\\left(\\mathcal{G}\\right) \\times V\\left(\\mathcal{G}\\right)}$.\n  \\subimport{thesis/theorems/}{maxflowcontinuitytheorem.tex}\n  \\subimport{thesis/proofsketches/}{maxflowcontinuityproofsketch.tex}\n\n  Here we show three naive algorithms for calculating new direct trusts so as to maintain invariable risk when paying\n  a trusted party. Let $F = \\sum\\limits_{i=1}^{n}x_i$. To prove the correctness of the algorithms, it suffices to prove that\n  \\begin{equation}\n  \\label{naive:req1}\n     \\forall i \\in [n], c'_i \\leq x_i \\mbox{ and}\n  \\end{equation}\n  \\begin{equation}\n  \\label{naive:req2}\n     \\sum\\limits_{i=1}^{n}c'_i = F - V \\enspace.\n  \\end{equation}\n  Proofs of correctness and complexity can be found in the Appendix.\n  \\subimport{thesis/algorithms/}{fcfscode.tex}\n  This algorithm simply chooses to nullify all outgoing trust to one player after another in the order they are given at the\n  input, until the desired indirect trust is achieved. The complexity of this algorithm is $O\\left(n\\right)$.\n  \\newpage\n  \\subimport{thesis/algorithms/}{abscode.tex}\n\n  This algorithm finds a \\texttt{reduction} value such that the configuration\n  $\\forall i \\in \\left[n\\right], c'_i = \\max{\\left(0, x_i - \\mbox{\\texttt{reduction}}\\right)}$ achieves the desired flow.\n\n  The function \\texttt{preprocess(}$x_i$\\texttt{)} returns a data structure \\texttt{X} containing the set of flows\n  $\\left(x_i\\right)$, such that the corresponding function \\texttt{popMin(X)} is able to repeatedly return a tuple\n  consisiting of the index of the minimum element and a new data structure missing exactly the minimum element.\n  Examples of such pairs of functions are:\n  \\begin{equation*}\n  \\begin{gathered}\n    \\begin{cases}\n      \\texttt{preprocess = quickSort} \\\\\n      \\texttt{popMin = (}x_1\\texttt{, X}\\setminus x_1\\texttt{)}\n    \\end{cases}\n    \\mbox{ and} \\\\\n    \\begin{cases}\n      \\texttt{preprocess = FibonacciHeap} \\\\\n      \\texttt{popMin = (find-min(X),delete-min(X))}\n    \\end{cases} \\enspace.\n  \\end{gathered}\n  \\end{equation*}\n  In the general case, the complexity of the algorithm \\texttt{abs} is proven to be\n  $O\\left(preprocess\\right) + O\\left(n\\right)O\\left(popMin\\right)$. 
\n\n  Naive algorithms result in $c'_i \\leq x_i$, thus according to the Invariability Theorem (\\ref{invariability}),\n  $||\\delta_i||_1$ is invariable for any of the possible results $C'$ of these algorithms, and the resulting norm is not\n  necessarily the minimum. The following algorithms concentrate on finding a configuration $C'$ that achieves $F' = F - V$\n  while minimizing two $\\delta_i$ norms, $||\\delta_i||_\\infty$ and $||\\delta_i||_1$. We start with the\n  $||\\delta_i||_\\infty$ minimizer.\n\n  \\subimport{thesis/algorithms/}{dinfmincode.tex}\n  Since trust should be considered as a continuous unit and binary search bisects the interval containing the solution\n  on each recursive call, inclusion of the $\\epsilon$-parameters in \\texttt{BinSearch} is necessary for the algorithm to\n  complete in a finite number of steps.\n  \\subimport{thesis/algorithms/}{dinfbinsearchcode.tex}\n  Let $\\delta \\in \\left[0, \\max\\limits_{1 \\leq i \\leq n}{\\{c_i\\}}\\right]$. Furthermore, let $C'$ be such that\n  $\\forall i \\in \\left[n\\right], c'_i = \\max{\\left(0, c_i - \\delta\\right)}$. We define $maxFlow\\left(\\delta\\right) =\n  maxFlow\\left(A, B, C'\\right) = F'$ and $MaxFlow\\left(\\delta\\right) = X'$. Conventions similar to $F$, $X$ and $C$ hold\n  for $F'$, $X'$ and $\\delta$ as far as subscripts are concerned.\n  \\subimport{thesis/lemmas/}{maxflowmonotonicitylemma.tex}\n  \\subimport{thesis/proofsketches/}{maxflowmonotonicityproofsketch.tex}\n  From the previous lemma we deduce that, given a $V \\in \\left(0, F\\right)$, if we determine a $\\delta$ such that\n  $maxFlow\\left(\\delta\\right) = F - V$, this $\\delta$ is unique. Furthermore, it is\n  \\begin{equation*}\n    ||\\delta_i||_\\infty = \\max\\limits_{1 \\leq i \\leq n}{\\delta_i} = \\max\\limits_{1 \\leq i \\leq n}{\\left(c_i - c'_i\\right)} =\n    \\max\\limits_{1 \\leq i \\leq n}{\\left(c_i - \\max{\\left(0, c_i - \\delta\\right)}\\right)} = \\delta \\enspace.\n  \\end{equation*}\n  It is proven that the two algorithms work as expected, that is, an invocation of \\texttt{dinfmin} with valid inputs returns\n  a capacity $C'$ that yields the desired $maxFlow$ and minimizes $||\\delta_i||_\\infty$. The complexity of \\texttt{BinSearch}\n  is $O\\left(\\left(maxFlow + n\\right)\\log_2\\left(\\frac{top - bot}{\\epsilon_1 + \\epsilon_2}\\right)\\right)$ and the complexity\n  of \\texttt{dinfmin} is\n  $O\\left(\\left(maxFlow + n\\right)\\log_2\\left(\\frac{\\delta_{max}}{\\epsilon_1 + \\epsilon_2}\\right)\\right)$.
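\n\n  The bisection inside \\texttt{BinSearch} can be sketched as follows (illustrative C++ with our own names, relying only on the monotonicity of $maxFlow\\left(\\delta\\right)$ established above):\n\\begin{verbatim}\n#include <functional>\n\n// Find delta with maxFlow(delta) close to target, assuming maxFlowOf\n// is nonincreasing in delta; eps plays the role of the epsilons.\ndouble binSearchDelta(const std::function<double(double)>& maxFlowOf,\n                      double bot, double top, double target, double eps)\n{\n    while (top - bot > eps) {\n        const double mid = (bot + top) / 2;\n        if (maxFlowOf(mid) > target)\n            bot = mid;  // flow still too large: reduce capacities further\n        else\n            top = mid;\n    }\n    return top;\n}\n\\end{verbatim}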
\n\n  We will now concentrate on finding a capacity configuration $C'$ such that $F' = F - V$ and that minimizes\n  $\\sum\\limits_{i=1}^{n}\\left(c_i-c'_i\\right) = ||\\delta_i||_1$ as well. We treat the flow problem as a linear programming\n  problem. Next we see the formulation of the problem in this form, along with a breakdown of each relevant matrix and\n  vector.\n  \\subimport{thesis/lp/}{lpprimal.tex}\n  We would like to find a solution that, besides maximizing the flow, also minimizes $||\\delta_i||_1$ at the same time.\n  More precisely, we would like to optimize\n  \\begin{equation*}\n    \\min{\\sum\\limits_{v \\in \\mathcal{V}}\\left(c_{Av} - c'_{Av}\\right)}\n  \\end{equation*}\n  as well.\n  Since we wish to optimize with regard to two objective functions, we approach the problem as follows: Initially, we ignore\n  the minimization and derive the dual of the previous problem with respect to the maximisation. We then substitute the two\n  problems' optimisations with an additional constraint that equates the two objective functions. Due to the Strong Duality\n  theorem of linear programming \\cite{amo}, this equality can be achieved only by the common optimal solution of the\n  two problems. Next we treat the combination of constraints and variables of the primal and the dual problem, along with the\n  newly introduced constraint and the previously ignored $||\\delta_i||_1$ minimisation, as a new linear problem. For every\n  $j \\in \\left[n\\right]$, the solution to this problem contains a $c'_{1j}$. These capacities will comprise the new\n  configuration that player $A$ requires.\n\n  We will now describe the dual problem in detail.\n  \\subimport{thesis/lp/}{lpdual.tex}\n  Everything is now in place to define the linear problem whose solution $C'$ yields $maxFlow\\left(C'\\right) = F - V$ and\n  minimizes $||\\delta_i||_1$. The constraints are (\\ref{lp:primal:constraints}) and (\\ref{lp:dual:constraints})\n  supplemented with a constraint that equates the two problems' optimality functions:\n  \\begin{equation*}\n    \\sum\\limits_{j \\in \\left[n\\right]}f'_{1j} = \\sum\\limits_{j \\in \\left[n\\right]}c_{1j}y_{c1j} +\n    \\sum\\limits_{i \\in \\left[n\\right]}\\sum\\limits_{j \\in \\left[n\\right]}c_{ij}y_{fcij} + \\left(F - V\\right)y_F \\enspace.\n  \\end{equation*}\n  The desired optimisation is\n  \\begin{equation*}\n    \\min{\\sum\\limits_{j \\in \\left[n\\right]}\\left(c_{1j} - c'_{1j}\\right)} \\enspace.\n  \\end{equation*}\n  The final linear program consists of $2n^2 + 2n - 1$ variables and $2n^2 + 2n$ constraints. There exist various\n  algorithms that solve linear programs, such as the simplex and ellipsoid algorithms. The ellipsoid algorithm requires polynomial
The ellipsoid algorithm requires polynomial\n  time in the worst-case scenario, but for practical purposes algorithms of the simplex category seem to exhibit better\n  behavior \\cite{it}.\n", "meta": {"hexsha": "d4547172348c7bbab4d82f46d4a79eb45b6c4292", "size": 9114, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/riskinvalgs.tex", "max_stars_repo_name": "dionyziz/DecentralizedTrust", "max_stars_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "thesis/riskinvalgs.tex", "max_issues_repo_name": "dionyziz/DecentralizedTrust", "max_issues_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/riskinvalgs.tex", "max_forks_repo_name": "dionyziz/DecentralizedTrust", "max_forks_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.1830985915, "max_line_length": 125, "alphanum_fraction": 0.7308536318, "num_tokens": 2700, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7549149868676284, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.5518870772859372}}
{"text": "\n\\documentclass[12pt]{amsart}\n\\usepackage{geometry} % see geometry.pdf on how to lay out the page. There's lots.\n\\usepackage{bsymb}\n\\usepackage{unitb}\n\\usepackage{calculational}\n\\usepackage{ulem}\n\\usepackage{hyperref}\n\\normalem\n\\geometry{a4paper} % or letter or a5paper or ... etc\n% \\geometry{landscape} % rotated page geometry\n\n% See the ``Article customise'' template for some common customisations\n\n\\title{}\n\\author{}\n\\date{} % delete this line to display the current date\n\n%%% BEGIN DOCUMENT\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n%\\section{}\n%\\subsection{}\n\n\\begin{machine}{m0}\n\n%\\hide{\n\t\\begin{align*}\n\\variable{\t\tn,a : \\Int}\n\t\\end{align*}\n%}\n\n\n\n\\begin{align*}\n\\invariant{inv0}\n{\ta = n^3\t}\n\\end{align*}\n%\n%%\\begin{proof}{INIT/INV/inv0}\n%%\\begin{calculation}\n%%\t\n%%\\end{calculation}\n%%\\end{proof}\n\n\\newevent{evt}\n\n%\n\\begin{align*}\n\\initialization{init0}\n{\tn = 0 \\land a = 0\t}\n\\end{align*}\n%%\n%%\n\\begin{align*}\n\\evassignment{evt}{a0}\n{\tn' = n + 1\t}\n\\end{align*}\n%\nWe use the proof obligation of \\ref{inv0} to deduce a proper assignment to $a$:\n%\n\\begin{proof}{evt/INV/inv0}\n\t\\begin{calculation}\n\t\t(n')^3\n\t\\hint{=}{ \\ref{a0} }\n\t\t(n+1)^3\n\t\\hint{=}{ arithmetic }\n\t\tn^3 + 3 \\cdot n^2 + 3 \\cdot n + 1\n\t\\hint{=}{ \\ref{inv0} }\n\t\ta + 3 \\cdot n^2 + 3 \\cdot n + 1\n\t\\hint{=}{ we add a variable $b$: \\ref{inv1}, see below }\n\t\ta + b\n\t\\hint{=}{ \\ref{a1} }\n%\t\\hint{=}{ }\n\t\ta'\n\t\\end{calculation}\n\\end{proof}\n%\n\\begin{align*}\n\\variable{\tb : \\Int}\n\\end{align*}\n%\n\\begin{align*}\n\\invariant{inv1}\n{\tb = 3 \\cdot n^2 + 3 \\cdot n + 1\t}\n\\end{align*}\n%%%%%\\begin{invariant}{inv2}\n%%%%%\tb ~=~ 3 \\cdot n^2 + 3 \\cdot n + 1\n%%%%%\\end{invariant}\n%%%%\n\\begin{align*}\n\\evassignment{evt}{a1}\n{\ta' = a + b\t}\n\\end{align*}\n%\nWe now have a new invariant to preserve. It is easy to see how to establish it initially:\n%%\n\\begin{align*}\n\\initialization{in1}\n{\tb = 1\t}\n\\end{align*}\n%\n%\\begin{itemize}\n%\\item label initialization predicates\n%\\item spacing commands\n%\\item \\sout{dummy section}\n%\\item \\sout{test case: train system}\n%\\item refinement\n%\n%\tadd refinement environments for: changing schedules, transforming progress properties\n%\\item po labels: check that proofs match a po\n%\\item translate the tags in proof obligations\n%\t\n%\tcreate toc entries with the proof environment, in unitb.sty\n%\\end{itemize}\n%\n%\\section{Todo:}\n%\\begin{itemize}\n%\\item grammar: make operator grammar generic\n%\\item grammar: include prefix operators and quantifiers\n%\\item \n%\\item in Z3.Z3.merge, remove the exceptions to integrate it to error handling\n%\\item spacing in the middle of event and set declarations\n%\\item testing the input validation, error messages\n%\\item proof structures (proof by cases, etc)\n%\\item types\n%\\item invariant theorems\n%\\item error checking\n%\\item better LaTeX formatting\n%\\item lazy proof checking\n%\\item \\sout{continuous checking}\n%\\item \\sout{\\emph{bug}: last of empty list of calculation steps}\n%\\item \\sout{error report: report line number instead of step number when proof is incorrect}\n%\\item generate documentation\n%\\item handle the ``\\textbackslash input'' commands in latex and use it to produce a project hierarchy\n%\\end{itemize}\n%\n%I'm now describing how I came up with the proof. 
It came to me in a dream and I forgot it in another dream.\n%%\n\\begin{proof}{evt/INV/inv1}\n\t\\begin{calculation}\n\t\t3 \\cdot (n')^2 + 3 \\cdot n' + 1\n\t\\hint{=}{ \\ref{a0} }\n\t\t3 \\cdot (n+1)^2 + 3 \\cdot (n+1) + 1\n\t\\hint{=}{ arithmetic }\n\t\t3 \\cdot (n^2+2\\cdot n+1) + 3 \\cdot (n+1) + 1\n\t\\hint{=}{ arithmetic }\n\t\t3 \\cdot n^2+6\\cdot n+3 + 3 \\cdot n + 3 + 1\n\t\\hint{=}{ arithmetic }\n\t\t3 \\cdot n^2 + 3 \\cdot n + 1 +6\\cdot n+3 + 3\n\t\\hint{=}{ \\ref{inv1} }\n\t\tb+6\\cdot n+3+3\n\t\\hint{=}{ \\ref{inv2}, see below }\n\t\tb+c\n\t\\hint{=}{ \\ref{a2}, see below }\n%\t\\hint{=}{ }\n\t\tb'\n\t\\end{calculation}\n\\end{proof}\n%\n\\begin{align*}\n\\variable{\tc : \\Int}\n\\end{align*}\n%\n\\begin{align*}\n\\invariant{inv2}\n{\tc = 6 \\cdot n + 6\t}\n\\end{align*} \n%\n\\begin{align*}\n\\evassignment{evt}{a2}\n{\tb' = b + c\t}\n\\end{align*}\n\nInvariant \\ref{inv2} is easy to satisfy initially:\n\n\\begin{align*}\n\\initialization{in2}\n{\tc = 6\t}\n\\end{align*}\n%\n\n\tIf we increase $c$ by 6, we can easily see that \\ref{inv2} is preserved:\n\\begin{align*}\n\\evassignment{evt}{a3}\n{\tc' = c + 6\t}\n\\end{align*}\n\n\\begin{proof}{evt/INV/inv2}\n\t\\easy\n%\t\\begin{calculation}\n%\t\t6 \\cdot (n') + 6\n%\t\\hint{=}{ \\ref{a0} }\n%\t\t6 \\cdot (n+1) + 6\n%\t\\hint{=}{ arithmetic }\n%\t\t6 \\cdot n + 6 + 6\n%\t\\hint{=}{ \\ref{inv2} }\n%\t\tc + 6\n%\t\\hint{=}{ \\ref{a3} }\n%\t\tc'\n%\t\\end{calculation}\n\\end{proof}\n%%\n\n\\begin{align*}\n\\invariant{inv3}\n{\tf = \\qfun{i}{ 0 \\le i \\land i < n }{ i^3 } }\n\\end{align*}\n\n\\begin{align*}\n\\variable{\tf : \\Int \\pfun \\Int}\n\\end{align*}\n\n\\begin{align*}\n\\dummy{\ti,k : \\Int}\n\\end{align*}\n\n\\with{functions} \\with{sets}\n\n\\begin{proof}{INIT/INV/inv3}\n\t\\begin{calculation}\n\t\t\\qfun{i}{ 0 \\le i \\land i < n }{ i^3 }\n\t\\hint{=}{ \\ref{init0} }\n\t\t\\qfun{i}{ 0 \\le i \\1\\land i < 0 }{ i^3 }\n\t\\hint{=}{ arithmetic }\n%\t\t\\qfun{i}{ \\false }{ i^3 }\n%\t\\hint{=}{ arithmetic }\n\t\t\\oftype{ \\emptyfun }{ \\pfun [ \\Int, \\Int ] }\n\t\\hint{=}{ \\ref{init3} }\n\t\tf\n\t\\end{calculation}\n\\end{proof}\n\n\\begin{align*}\n\\initialization{init3}\n{\tf = \\emptyfun\t}\n\\end{align*}\n\n\\begin{proof}{evt/INV/inv3}\n\t\\begin{calculation}\n\t\t\\qfun{i}{ 0 \\le i \\land i < n' }{ i^3 }\n\t\\hint{=}{ \\ref{a0} }\n\t\t\\qfun{i}{ 0 \\le i \\1\\land i < n+1 }{ i^3 }\n\t\\hint{=}{ split range with \\ref{inv4} (see below) }\n\t\t\\qfun{i}{ 0 \\le i \\1\\land i < n }{ i^3 } \\3 | n \\fun n^3\n\t\\hint{=}{ \\ref{inv3} }\n%\t\t\\qfun{i}{ 0 \\le i \\1\\land i < n }{ i^3 } \\3 | \\qfun{i}{ i = n }{ i^3 }\n%\t\\hint{=}{ one-point rule }\n%\t\t\\qfun{i}{ 0 \\le i \\1\\land i < n }{ i^3 } \\2 | n \\fun n^3\n%\t\\hint{=}{ \\ref{inv3} }\n\t\tf \\1 | n \\fun n^3\n\t\\hint{=}{ \\ref{inv0} }\n\t\tf \\1 | n \\fun a\n\t\\hint{=}{ \\ref{a4} }\n\t\tf'\n\t\\end{calculation}\n\\end{proof}\n\n\\begin{align*}\n\\invariant{inv4}\n{ 0 \\le n }\n\\end{align*}\n\n\\begin{align*}\n\\evassignment{evt}{a4}\n{ f' \\2 = f \\1 | n \\fun a }\n\\end{align*}\n\n\\begin{align*}\n\\invariant{inv5}\n{\t\\qforall{i}{0 \\le i \\1\\land i < n}{ f.i = i^3 }\t\t}\n\\end{align*}\n\n\\begin{proof}{evt/INV/inv5}\n%\t\\begin{align}\n%\t\\assert{asm0}{ \\qforall{i}{}{ i \\in \\dom.f \\2\\equiv 0 \\le i \\land i < n } }\n%\t\\end{align}\n\t\\begin{calculation}\n\t\t\\qforall{i}{0 \\le i \\1\\land i < n'}{ f'.i = i^3 }\n\t\\hint{=}{ \\ref{a0} }\n\t\t\\qforall{i}{0 \\le i \\1\\land i < n+1}{ f'.i = i^3 }\n\t\\hint{=}{ split range with \\ref{inv4} 
}\n\t\t\\qforall{i}{0 \\le i \\1\\land i < n}{ f'.i = i^3 } \\2 \\land f'.n = n^3\n\t\\hint{=}{ \\ref{a4} and \\ref{inv0} }\n\t\t\\qforall{i}{0 \\le i \\1\\land i < n}{ f'.i = i^3 } % \\2 \\land a = n^3\n\t\\hint{=}{ \\ref{a4}  }\n\t\t\\qforall{i}{0 \\le i \\1\\land i < n  }\n\t\t\t{ f.i = i^3 } \n\t\\hint{=}{ \\ref{inv5} }\n%\t\t\\qforall{i}{0 \\le i \\1\\land i < n \n%\t\t\t\\1\\land i \\in \\dom.f \\setminus \\dom.(n \\fun a) }\n%\t\t\t{ f'.i = i^3 } \n%\t\\hint{=}{  }\n\t\t\\true\n\t\\end{calculation}\n%\t\\begin{subproof}{asm0}\n%\t\\easy\n%\t\\end{subproof}\n\\end{proof}\n\n\\begin{align*}\n\\invariant{inv6}\n{\t\\dom.f = \\qset{i}{0 \\le i \\land i < n }{ i }\t\t}\n\\end{align*}\n\n\\begin{proof}{evt/INV/inv6}\n\t\\begin{calculation}\n\t\t\\qset{i}{0 \\le i \\land i < n' }{ i }\n\t\\hint{=}{ \\ref{a0} }\n\t\t\\qset{i}{0 \\le i \\land i < n+1 }{ i }\n\t\\hint{=}{ split range }\n\t\t\\qset{i}{0 \\le i \\land i < n }{ i } \\bunion \\qset{i}{ 0 \\le i \\land i = n}{ i }\n\t\\hint{=}{ \\ref{inv6} }\n\t\t\\dom.f \\bunion \\qset{i}{ 0 \\le i \\land i = n}{ i }\n\t\\hint{=}{ one point rule with \\ref{inv4} }\n\t\t\\dom.f \\bunion \\{ n \\}\n\t\\hint{=}{ split range }\n\t\t\\dom.f'\n\t\\end{calculation}\n\\end{proof}\n\n\\begin{align*}\n\\constant{\tN: \\Int}\n\\end{align*}\n\n\\begin{align*}\n&\t\\progress{prog0}{\\true}{ f = \\qfun{i}{0 \\le i \\land i < N}{i^3} }\n\\refine{prog0}{monotonicity}{prog1}{ with \\ref{inv3} }\n&\t\\progress{prog1}{\\true}{ N = n  }\n\\refine{prog1}{induction}{prog2}{ over $\\var{n}{up}{N}$ up to $N$ }\n&\t\\progress{prog2}{n = k}{ k < n \\2\\lor n = N } \n\\refine{prog2}{trading}{prog3}{}\n&\t\\progress{prog3}\n\t\t{ n = k \\1\\land \\neg n = N}\n\t\t{ k < n \\1\\lor n = N } \n\\refine{prog3}{PSP}{ prog4, saf0 }{}\n&\t\\progress{prog4}\n\t\t{n = k \\1\\land \\neg n = N } \n\t\t{ \\neg n = k } \n\\\\ & \t\\safety{saf0}{ k \\le n }{ N = n }\n%\\refine{prog2}{trading}{prog3}{}\n%&\t\\progress{prog3}{n = k \\2\\land \\neg n = N}{ n < k } \\\\\n \\refine{prog4}{discharge}{tr0,saf1}{}\n& \t\\safety{saf1}{ n = k \\1\\land \\neg n = N }{ \\neg n = k }\n\\end{align*}\n\n\\begin{align*}\n\\transient{evt}{tr0}\n{\tn = k \\1\\land \\neg n = N\n\t}\n\\end{align*}\n\n\\begin{align*}\n\\cschedule{evt}{c0}\n{\tn < N\n\t}\n\\end{align*}\n\n\\begin{align*}\n\\evguard{evt}{grd0}\n{\t\\neg n = N\n\t}\n\\end{align*}\n\n\\begin{align*}\n\\invariant{inv7}\n{\tn \\le N\n\t} \\\\\n\\assumption{asm3}\n{\t0 \\le N\t}\n\\end{align*}\n\n% \\removecoarse{evt}{default} % \\weakento{evt}{default}{c0}\n\n\\end{machine}\n\n\\end{document}", "meta": {"hexsha": "248ae6a0c767f39ba02f7acd9b4e1c007c900206", "size": 8531, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tests/cubes-t7.tex", "max_stars_repo_name": "literate-unitb/literate-unitb", "max_stars_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-07-27T11:05:56.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-20T14:53:33.000Z", "max_issues_repo_path": "Tests/cubes-t7.tex", "max_issues_repo_name": "unitb/literate-unitb", "max_issues_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 32, "max_issues_repo_issues_event_min_datetime": "2017-06-25T03:53:02.000Z", "max_issues_repo_issues_event_max_datetime": "2017-06-25T04:28:38.000Z", "max_forks_repo_path": "Tests/cubes-t7.tex", "max_forks_repo_name": "literate-unitb/literate-unitb", 
"max_forks_repo_head_hexsha": "0d843456dc103bb09babc5b12855435d2e10f534", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.45, "max_line_length": 108, "alphanum_fraction": 0.5938342516, "num_tokens": 3580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5518870772859371}}
{"text": "% A simple template for LaTeX documents\n% \n% To produce pdf run:\n%   $ pdflatex paper.tex \n%\n\n\\documentclass[10pt, twocolumn]{article}\n%\\documentclass[12pt]{article}\n\n% Begin paragraphs with new line\n\\usepackage{parskip}  \n\n% Change margin size\n\\usepackage[margin=0.5in]{geometry}   \n\n% Graphics Example:  (PDF's make for good plots)\n\\usepackage{graphicx}               \n% \\centerline{\\includegraphics{figure.pdf}}\n\n% Allows hyperlinks\n\\usepackage{hyperref}\n\n% Blocks of code\n\\usepackage{listings}\n\\lstset{basicstyle=\\ttfamily, title=\\lstname}\n% Insert code like this. replace `plot.R` with file name.\n% \\lstinputlisting{plot.R}\n\n% Monospaced fonts\n%\\usepackage{inconsolata}\n% GNU \\texttt{make} is a nice tool.\n\n% Supports proof environment\n\\usepackage{amsthm}\n\n% Allows writing \\implies and align*\n\\usepackage{amsmath}\n\n% Allows mathbb{R}\n\\usepackage{amsfonts}\n\n% Numbers in scientific notation\n\\usepackage{siunitx}\n\n% Use tables generated by pandas\n\\usepackage{booktabs}\n\n% norm and infinity norm\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\\newcommand{\\inorm}[1]{\\left\\lVert#1\\right\\rVert_\\infty}\n\n% Statistics essentials\n\\newcommand{\\iid}{\\text{ iid }}\n\\newcommand{\\Expect}{\\operatorname{E}}\n\\newcommand{\\Var}{\\operatorname{Var}}\n\\newcommand{\\Cov}{\\operatorname{Cov}}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{document}\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{verbatim}\n```{r, include=FALSE, results='hide'}\nlegend('topright', c('a', 'b', 'c'), pch=1:3)\nboxplot(Y ~ somefactor, data = mydata)\noptions(contrasts=c('contr.sum', 'contr.sum'))\nmeans = tapply(Y, somefactor, mean)\nG = expand.grid(k=1:3, j=1:4, i=1:3)\nleaps::regsubsets(y ~ ., data=d, ...)\nMASS::stepAIC(model0, ...)\n\\end{verbatim}\n\n\\textbf{Binomial}\n$X \\sim B(n, p)$\n\\[\n    p(k) = \\binom{n}{k} p^k (1-p)^{n-k}\n    \\qquad k = 0, 1, \\dots, n\n\\]\n$\\Expect X = np, \\quad \\Var X = np(1-p)$\n\n\\textbf{Poisson}\n$X \\sim P(\\lambda)$\n\\[\n    p(k) = \\frac{e^{-\\lambda} \\lambda^k}{k!}\n    \\qquad k = 0, 1, \\dots\n\\]\n$\\Expect X = \\lambda, \\quad \\Var X = \\lambda$\n\n\\textbf{Normal}\n$X \\sim N(\\mu, \\Sigma)$, $\\Sigma$ positive definite\n\\[\n    f(x) = \\frac{\\exp\\{ - \\frac{1}{2}(x - \\mu)^T \\Sigma^{-1} (x - \\mu) \\}}\n        {(2\\pi)^{\\frac{k}{2}} \\sqrt{\\det(\\Sigma)}}\n        = \\frac{1}{\\sqrt{2 \\pi} \\sigma} e^{-\\frac{(x - \\mu)^2}{2\n        \\sigma^2}}\n\\]\nmgf: $M_X (t) = \\exp (\\mu' t + \\frac{1}{2} t' \\Sigma t)$\n\n\\textbf{Beta}\n$ X \\sim \\text{Beta}(\\alpha, \\beta)$\n\\[\n    f(x) = \\frac{x^{\\alpha-1}(1 - x)^{\\beta-1}}{B(\\alpha, \\beta)} \n    \\qquad 0 \\leq x \\leq 1\n\\]\n$\\Expect X = \\frac{\\alpha}{\\alpha + \\beta},\n\\quad \\Var X = \\frac{\\alpha \\beta}{(\\alpha + \\beta)^2 (\\alpha + \\beta + 1)}$\n\n\\textbf{Gamma}\n$X \\sim \\text{Gamma}(\\alpha, \\beta)$\n\\[\n    f(x) = \\frac{\\beta^\\alpha x^{\\alpha-1} e^{-\\beta x}}{\\Gamma(\\alpha)}\n    \\qquad x > 0\n\\]\n$\\Expect X = \\frac{\\alpha}{\\beta},\n\\quad \\Var X = \\frac{\\alpha}{\\beta^2}$\n\n$X \\sim \\text{Gamma}(\\alpha, \\beta) \\iff \\beta X \\sim \\text{Gamma}(\\alpha, 1)$\n\n$X_i \\iid \\text{Gamma}(\\alpha_i, \\beta)$, then\n\\[\n    \\sum X_i \\sim \\text{Gamma}(\\sum \\alpha_i, \\beta)\n\\]\n\\textbf{Exponential}\nSpecial case: $X \\sim \\text{Exp}(\\lambda) \\equiv \\text{Gamma}(1, \\lambda)$\n\n$\nf(x) = \\lambda 
e^{-\\lambda x},\n\\quad x > 0\n\\qquad \\Expect X = \\frac{1}{\\lambda},\n\\quad \\Var X = \\frac{1}{\\lambda ^2}\n$\n\nCDF $F(x) = 1 - e^{-\\lambda x}$\n\n\\textbf{Chi square}\nSpecial case: $X \\sim \\chi^2_n \\equiv \\text{Gamma}(\\frac{n}{2}, \\frac{1}{2})$\n\n$\nf(x) \\propto x^{\\frac{n}{2} - 1} e^{\\frac{-x}{2}},\n\\quad x > 0\n\\qquad \\Expect X = n,\n\\quad \\Var X = 2n\n$\n\nLet $Z_i$ be iid $N(0, 1)$.\n$\\sum_{i=1}^n Z_i^2 \\sim \\chi^2_n$\n\nNoncentral $\\chi^2$. Let $Y \\sim N(\\mu, I)$ be an $n$ vector. Then \n\\[\n    \\norm{Y}^2 \\sim \\chi^2_n(\\norm{\\mu}^2)\n\\]\n\\textbf{F} if num. and den. independent then\n\\[\n    F(m, n) \\equiv \\frac{\\frac{\\chi^2_m}{m}}\n        {\\frac{\\chi^2_n}{n}}\n\\]\n\\textbf{T} if num. and den. independent then\n\\[\n    t(n) = \\frac{N(0, 1)}\n    {\\sqrt{\\frac{\\chi^2_n}{n}}}\n\\]\nConditional pdf:\n\\[\n    f_{X|Y}(x | y) \\equiv \\frac{f_{X, Y}(x, y)}{f_Y(y)}\n\\]\nIterated expectation:\n\\[\n    E(Y) = E(E(Y | X))\n\\]\nConditional variance formula:\n\\[\n    \\Var(Y) = \\Var(E(Y | X)) + E(\\Var(Y | X))\n\\]\nSingular Value Decomposition (SVD): any matrix $X$ can be written\n\\[\n    X = UDV^T\n\\]\nwith $U, V$ orthogonal, and $D$ diagonal.\n\nThe Moore-Penrose pseudoinverse $A^+$ exists uniquely for every matrix $A$.\nProperties: $AA^+A = A, \\quad A^+AA^+ = A^+$ and $AA^+, A^+A$ are symmetric.\n\nProjection matrices $P$ are symmetric and idempotent. Their eigenvalues are\neither 0 or 1.\n\\[\n    P = P^T \\qquad P^2 = P\n\\]\nCovariance of linear transformations:\n\\[\n    Cov(Ay, Bx) = A Cov(y, x) B^T\n\\]\nInvert a $2 \\times 2$ matrix:\n$\n    A = \n    [\\begin{smallmatrix}\n        a & b \\\\\n        c & d \\\\\n    \\end{smallmatrix}]\n$\n\\[\n    A^{-1} = \n    \\frac{1}{\\det (A)}\n    \\begin{bmatrix}\n        d & -b \\\\\n        -c & a \\\\\n    \\end{bmatrix}\n\\]\nBlock matrix:\n$\n\\begin{bmatrix}\n    A & B \\\\\n    C & D\n\\end{bmatrix}^{-1} =\n$\n\\[\n\\begin{bmatrix}\n                     (A - BD^{-1}C)^{-1}         & -A^{-1}B(D -\n    CA^{-1}B)^{-1} \\\\\n                     -D^{-1}C(A - BD^{-1}C)^{-1} & (D - CA^{-1}B)^{-1}  \n\\end{bmatrix}\n\\]\nSum identities:\n\\[\n    \\sum_{k=0}^{\\infty} p^k = \\frac{1}{1 - p} \\qquad \n    \\sum_{k=0}^{\\infty} k p^k = \\frac{p}{(1 - p)^2} \\qquad |p| < 1\n\\]\nIntegration by parts:\n\\[\n    \\int uv' = uv - \\int u'v\n\\]\nMatrix / Vector differentiation:\n\n$\\frac{\\partial A^T \\beta}{\\partial \\beta} = A$, \n$\\frac{\\partial \\beta^T A \\beta}{\\partial \\beta} = (A + A^t) \\beta =\n2A\\beta$ for $A$ symmetric.\n\n$\\frac{\\partial}{\\partial \\theta_i} \\log (|A|) =\ntr( A^{-1} \\frac{\\partial A}{\\partial \\theta_i})$\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection*{232B}\n\n\\textbf{Linear mixed models} Standard assumptions:\n\\[\n    y = X \\beta + Z \\alpha + \\epsilon\n\\]\nwhere $\\alpha \\sim N(0, G)$ independent of  $\\epsilon \\sim N(0, R)$.\n\nMarginal model $y \\sim N(X \\beta, V)$ where $V = R + ZGZ'$.\n\nRestricted maximum likelihood (REML) uses a linear transformation\n$A'y$ to remove the fixed effects, and then estimates the variance.\nWe have $\\text{rank} (A) = n - p$ where $p = \\text{rank} (X)$ and $A'X =\n0$. Then $z = A'y \\sim N(0, A'VA)$ and we can maximize the restricted log likelihood\nto estimate the variance parameters.
Let $P = A(A'VA)^{-1}A'$ and solve\n\\[\n    0 = \\frac{\\partial l_R}{\\partial \\theta_i} = \\frac{1}{2}\n    \\left( y'P \\frac{\\partial V}{\\partial \\theta_i} Py \n    - tr (P \\frac{\\partial V}{\\partial \\theta_i})\n    \\right)\n\\]\n\\textbf{Predictions}\nLet $\\xi = b' \\beta + a' \\alpha$ be a mixed effect. These are what we're\ninterested in estimating. We call them predictions rather than estimations\nbecause we're predicting a random component.\n\nBLUE - Best linear unbiased estimator, \n$\\tilde{\\beta} = (X' V^{-1} X)^{-1} X' V^{-1} y$. This is the MLE of\n$\\beta$.\n\nBLUP - Best linear unbiased predictor, which plugs $\\tilde{\\beta}$ into the BP.\n\nBP - Best predictor, $\\Expect (\\xi | y) = b' \\beta + a' G Z' V^{-1} (y - X\n\\beta)$, a theoretical ideal that's\nusually difficult or impossible to derive.\n\nEBLUP - Empirical best linear unbiased predictor, which plugs estimates of both the fixed\nand variance components into the BP. This is\ntypically the one we compute and use.\n\n\\begin{verbatim}\nlibrary(lme4)\nlibrary(HLMdiag)\nfit1 = lmer(y ~ X1 + X2 + (1 | V), data=somedata)\ns1 = summary(fit1)\n# 2 ways to extract variance estimates:\ns1$sigma^2\ns1$varcor$V\nHLMdiag::varcomp.mer(fit1)\ngetME(fit1, ...)  # Extract various components\npredict(fit1, newdata=newdata)  # EBLUPs\n\\end{verbatim}\n\nTwo ways to get standard errors for variance estimates are the parametric\nbootstrap and the asymptotic covariance matrix of the estimates.\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\subsection*{232A}\n\nLeast Squares Principle\n\\[\n    \\text{arg min}_\\beta \\norm{Y - X\\beta}^2\n\\]\nNormal Equations - any $b$ satisfying the following solves the least squares problem:\n\\[\n    X^T X b = X^T y\n\\]\nGauss-Markov Theorem - $\\hat{\\beta}$ is the Best Linear Unbiased\nEstimator (BLUE) of $\\beta$.\n\\[\n    \\hat{\\beta} = (X^T X)^{-1} X^T y \\sim N(\\beta, \\sigma^2 (X^T X)^{-1})\n\\]\nvariance: \n\\[\n    \\frac{\\norm{y - X \\hat{\\beta}}^2}{\\sigma^2} \\sim\n\\chi^2_{n-p}.\n    \\qquad \\implies \\qquad\n    \\hat{\\sigma}^2 = \\frac{\\norm{y - X \\hat{\\beta}}^2}{n - p}\n\\]\nUse a t test for hypothesis testing and confidence intervals for the value of\na particular $\\beta_j$ coefficient. \nLet $w_{jj}$ be the $j$th diagonal entry of $(X^T X)^{-1}$.\n\\[\n    \\frac{\\beta_j - \\beta_j^*}{\\hat{\\sigma} \\sqrt{w_{jj}}} \\sim t_{n-p}\n\\]\n$1 - \\alpha$ confidence intervals for a new observation $Y_h$ at $x_h$ and for $E[Y_h]$:\n\\[\n    E[y_h] \\approx \\hat{y_h} \\pm t(n-p, 1 - \\frac{\\alpha}{2}) \\hat{\\sigma}\n        \\sqrt{x_h^T (X^T X)^{-1} x_h}\n\\]\n\\[\n    y_h \\approx \\hat{y_h} \\pm t(n-p, 1 - \\frac{\\alpha}{2}) \\hat{\\sigma}\n        \\sqrt{1 + x_h^T (X^T X)^{-1} x_h}\n\\]\nSimultaneous (Working-Hotelling) confidence interval for $\\Expect (y_h)$:\n\\[\n    \\hat{y}_h \\pm \\sqrt{p F_{p, n - p, 1 - \\alpha}} se\\{ \\hat{y}_h \\}\n\\]\n\\[\n    \\frac{(\\hat{\\beta} - \\beta)^T X^T X (\\hat{\\beta} - \\beta) /\n    p}{\\hat{\\sigma}^2}\n    \\sim F_{p, n - p}\n\\]
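\nOne definition used repeatedly below (in the sums of squares and the\nresidual formulas) is the hat matrix; these identities are standard:\n\\[\n    H = X(X^T X)^{-1} X^T \\qquad \\hat{y} = Hy \\qquad\n    \\hat{\\epsilon} = (I - H)y \\qquad \\Var(\\hat{\\epsilon}) = \\sigma^2 (I - H)\n\\]\n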
Then\n$SSE_r = \\norm{y - X_2 \\beta_2^* - X_1 \\tilde{\\beta_1}}^2$ is the sum of\nsquared error for the reduced model and \n$SSE_f = \\norm{y - X \\hat{\\beta}}^2$ is the sum of squared error for the\nfull model.\nUnder $H_0$:\n\\[\n    \\frac{\\frac{SSE_r - SSE_f}{p - r}}\n         {\\frac{SSE_f}{n - p}}\n         \\sim F_{p-r, n-p}\n\\]\nAlternate form of the linear test, for testing the linear hypothesis $H_0: R\\beta\n= r$, with $R$ a full rank $s \\times p$ matrix.\n\\[\n    \\frac{(R \\hat{\\beta} - r)^T (R(X^T X)^{-1} R^T)^{-1} (R \\hat{\\beta} - r) / s}\n    {\\hat{\\sigma}^2} \\sim F_{s, n-p}\n\\]\n$SSTO = \\sum_{i=1}^n (y_i - \\bar{y})^2 = \\norm{y - \\bar{y} 1_n }^2\n        = \\norm{(I - J)y}^2$\n\n$SSR = \\sum_{i=1}^n (\\hat{y}_i - \\bar{y})^2 = \\norm{\\hat{y} - \\bar{y} 1_n}^2\n    = \\norm{(H - J)y}^2$\n\n$SSE = \\sum_{i=1}^n (y_i - \\hat{y}_i)^2 = \\norm{y - \\hat{y}}^2\n        = \\norm{(I - H)y}^2$\n\nHere $H = X(X^T X)^{-1} X^T$ is the hat matrix and $J = \\frac{1}{n} 1_n 1_n^T$.\n\n\\begin{verbatim}\nSSE = sum(residuals(fit)^2)\nSSR = sum((fitted(fit) - mean(y))^2)\nSSTO = sum((y - mean(y))^2)\n\\end{verbatim}\n\nIf the intercept vector $1_n$ is in the column space of $X$, then $SSTO = SSR + SSE$.\n\n$R^2 = 1 - \\frac{SSE}{SSTO}$\n\nAdjusted $R^2_a = 1 - \\frac{SSE / (n-p)}{SSTO / (n-1)}$\n\n$AIC = n \\log SSE + 2p$\n\n$BIC = n \\log SSE + p \\log n$\n\n$Cp = \\frac{SSE}{MSE} - (n - 2p)$\n\n\\begin{verbatim}\ninfluence.measures, cooks.distance\n\\end{verbatim}\n\nResiduals: $\\hat{\\epsilon}_i = y_i - \\hat{y}_i$\n\nStudentized residuals (\\texttt{rstandard} in R): \n\\[\n    \\gamma_i =\n    \\frac{\\hat{\\epsilon}_i}{ s \\{ \\hat{\\epsilon}_i \\} } = 
    \\frac{\\hat{\\epsilon}_i}{\\hat{\\sigma} \\sqrt{1 - h_{ii}}}\n\\]\nStudentized deleted residuals (\\texttt{rstudent} in R):  \n\\[\n    t_i = \\frac{\\hat{\\epsilon}_i}{\\sqrt{MSE_{(-i)} (1 - h_{ii})}} \\sim t_{n - p - 1}\n\\]\nwhere $MSE_{(-i)} = SSE_{(-i)} / (n - 1 - p)$ and\n$SSE_{(-i)} = SSE - \\frac{\\hat{\\epsilon}_i^2}{1 - h_{ii}}$ can be used to\ncalculate $t_i$ without refitting the model.\n\nPrediction sum of squares (PRESS) is the same as leave-one-out cross\nvalidation (LOOCV). 
The prediction error on the $i$th observation is called the deleted\nresidual:\n\\[\n    y_i - \\hat{y}_{i (-i)} = \\frac{y_i - \\hat{y}_i}{1 - h_{ii}}\n\\]\nThis works for ridge regression also, letting \n$H = X(X^T X + \\lambda I)^{-1} X^T$.\n\n\\textbf{Ridge Regression} for $\\lambda > 0$ solves\n\\[\n    \\min_\\beta ||Y - X\\beta||^2 + \\lambda ||\\beta||^2\n\\]\n\\begin{verbatim}\nXtX = t(X) %*% X\nIp = diag(ncol(X))\nW = solve(XtX + lambda * Ip)\nbetahat = W %*% t(X) %*% y\n\\end{verbatim}\n\n\n\\subsection*{ANOVA}\n\nThree principles of experimental design: 1) Replication, 2) Randomization, 3)\nBlocking.\n\nOne-way ANOVA with $n$ total observations, $K$ groups:\n\n{\n\\centering\n\\begin{tabular}{lll}\n    Source   & SS & DF     \\\\\n    SSTR & $\\sum_{j=1}^K n_j (\\bar{y}_{j \\cdot} - \\bar{y}_{\\cdot \\cdot})^2$  & K - 1 \\\\\n    SSE  & $\\sum_{j=1}^K \\sum_{i=1}^{n_j} (y_{ij} - \\bar{y}_{j \\cdot})^2$  & n - K \\\\\n    SSTO & $\\sum_{j=1}^K \\sum_{i=1}^{n_j} (y_{ij} - \\bar{y}_{\\cdot \\cdot})^2$  & n - 1 \n\\end{tabular}\n}\n\nContrasts are sums of the form $\\Phi = \\sum_{i=1}^K c_i \\mu_i$ with\n$\\sum_{i=1}^K c_i = 0$.\nTukey's method works for all pairwise contrasts.\nScheffe's and the extended Tukey method work for all contrasts.\nBonferroni's is for a limited number of pre-specified contrasts.\n\nExample of a contrast $L$ with standard error $se(\\hat{L}) = \\hat{\\sigma}\n\\sqrt{\\sum_{i=1}^K c_i^2 / n_i}$.\nEach way to compute a confidence interval for $L$ can be expressed in the form:\n$$\n    \\hat{L} \\pm se(\\hat{L}) \\times \\text{multiplier}\n$$\n\n\\begin{verbatim}\n# I = number of groups (K above); 20 = number of pre-specified contrasts\nm = c(simple = qt(1 - alpha / 2, fit$df.residual)\n, bonferroni = qt(1 - alpha / (2 * 20), fit$df.residual)\n, ex_tukey = qtukey(1 - alpha, I, n - I)\n, scheffe = sqrt((I - 1) *\n                qf(1 - alpha, I - 1, fit$df.residual)))\nwidth = seL * m  # seL = se(Lhat) computed from the fit\nTukeyHSD(aov(y ~ x, data))  # pairwise intervals\n\\end{verbatim}\n\n\\newpage\n\n\\subsection*{Multivariate Normal}\n\nlog likelihood for a $k$ vector $x \\sim N(\\mu, \\Sigma)$\n\\[\n    l_x = -\\frac{k}{2} \\log (2 \\pi) - \\frac{1}{2}\n    \\{ \\log \\det \\Sigma + (x - \\mu)^T \\Sigma^{-1} (x - \\mu) \\}\n\\]\nStein's formula: $X \\sim N(\\mu, \\sigma^2)$\n\\[\n    \\Expect (g(X) (X - \\mu)) = \\sigma^2 \\Expect(g'(X))\n\\]\nassuming these expectations are finite.\n\n$X \\sim N(\\mu, \\Sigma)$, $A$ an $m \\times n$ matrix,\nthen \n\\[\n    AX \\sim N(A \\mu, A \\Sigma A^T)\n\\]\nFor $\\Sigma$ full rank it's possible to transform between $Z \\sim\nN(0, I)$ and $X$:\n\\[\n    X = \\Sigma^{1/2} Z + \\mu \\qquad Z = \\Sigma^{-1/2} (X - \\mu)\n\\]\nIn block matrix form:\n\\[\n    X =\n    \\begin{bmatrix}\n        X_1 \\\\\n        X_2 \\\\\n    \\end{bmatrix}\n    \\sim N \\left(\n    \\begin{bmatrix}\n        \\mu_1 \\\\\n        \\mu_2 \\\\\n    \\end{bmatrix}\n    ,\n    \\begin{bmatrix}\n        \\Sigma_{11} & \\Sigma_{12} \\\\\n        \\Sigma_{21} & \\Sigma_{22} \\\\\n    \\end{bmatrix}\n\\right)\n\\]\nAssuming $\\Sigma_{11}$ is positive definite, the conditional\ndistribution is\n\\[\n    X_2 | X_1 \\sim N(\\mu_2 + \\Sigma_{21} \\Sigma_{11}^{-1} (X_1 - \\mu_1),\n    \\Sigma_{22} - \\Sigma_{21} \\Sigma_{11}^{-1} \\Sigma_{12})\n\\]\n\n\\newpage\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\end{document}\n", "meta": {"hexsha": "376b85ccd319f35dc148274f4099246087b12d70", "size": 14241, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "232.tex", "max_stars_repo_name": "clarkfitzg/phd_stats", "max_stars_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "232.tex", "max_issues_repo_name": "clarkfitzg/phd_stats", "max_issues_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2015-11-10T07:47:09.000Z", "max_issues_repo_issues_event_max_datetime": "2015-11-10T22:55:08.000Z", "max_forks_repo_path": "232.tex", "max_forks_repo_name": "clarkfitzg/phd_stats", "max_forks_repo_head_hexsha": "c74b21a7fd55a713927650ce1827d1b853809395", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.7687969925, "max_line_length": 88, "alphanum_fraction": 0.5790323713, "num_tokens": 5387, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7310585786300049, "lm_q2_score": 0.7549149868676283, "lm_q1q2_score": 0.5518870772859371}}
{"text": "\\chapter{Similarity Renormalization Group Formalism}\\label{srg_formalism}\n\nIn this chapter, we introduce the physics concepts and formalism necessary to understand the 1-dimensional problems to which SRG is applied. We then introduce SRG and discuss details and open questions regarding its use.\n\n\\section{Quantum Mechanics Operators}\n\nOperators in quantum mechanics act linearly on state vectors, which represent specific states of the system. Each operator corresponds to some physical quantity; for example, the Hamiltonian, $\\hat{H}$, which is the sum of kinetic and potential energy operators, $\\hat{T}$ and $\\hat{V}$ respectively, is connected with the possible energies of the system. A state, which in Dirac notation is denoted by an abstract ket, $\\ket{\\Psi}$, is said to be an eigenstate with eigenvalue $\\omega$ of some operator, $\\hat{\\Omega}$, when the following equation holds:\n\\begin{equation}\n\\hat{\\Omega}\\ket{\\Psi} = \\omega\\ket{\\Psi}.\n\\end{equation}\nThese eigenvalues are the \\textit{observables} (i.e. measurable quantities) of quantum mechanical systems.\n\nOperators exist in a Hilbert space spanned by a basis of state vectors, $\\ket{\\Psi_i}$. An operator's representation with respect to a basis is given by its matrix elements,\n\\begin{equation}\n\\hat{\\Omega}_{ij} = \\bra{\\Psi_i}\\Omega\\ket{\\Psi_j}.\n\\end{equation}\nAn operator can be transformed to a different basis through a unitary transformation, which preserves its eigenvalues and thus its observables. Examples of 1-dimensional single-particle bases include particle position, $x$, particle momentum, $p$, and the 1-dimensional harmonic oscillator basis, which will be discussed next.\n\n\\subsection{1-Dimensional Harmonic Oscillator}\n\n\\begin{figure}[t]\n\\begin{center}\n\\subfloat[\\label{fig:ho_wf_w_1}]{\\includegraphics[width=0.5\\linewidth]{ho_plot_1}}\n\\subfloat[\\label{fig:ho_wf_w_4}]{\\includegraphics[width=0.5\\linewidth]{ho_plot_4}}\n\\end{center}\n\\caption{1-dimensional harmonic oscillator wavefunctions for the three lowest energy states for (a) $\\omega=1$ and (b) $\\omega=4$.}\n\\label{fig:ho_wf}\n\\end{figure}\n\nThe 1-dimensional harmonic oscillator is a single particle in a purely quadratic potential, $\\hat{V}(x) = m\\omega^2 \\hat{x}^2/2$. The Hamiltonian for the quantum harmonic oscillator is given by\n\\begin{equation}\n\\hat{H}_{HO} = \\frac{\\hat{p}^2}{2m} + \\frac{1}{2}m\\omega^2\\hat{x}^2,\n\\end{equation}\nwhich has eigenstates $\\ket{n}$, which have corresponding wavefunctions\n\\begin{equation}\n\\Psi_n(x) = \\frac{1}{\\sqrt{2^n n!}}\\left(\\frac{m \\omega}{\\pi \\hbar}\\right)^{1/4}e^{-m \\omega x^2 / 2 \\hbar} H_n\\left(\\sqrt{\\frac{m \\omega}{\\hbar}}x\\right)\n\\end{equation}\n\\begin{equation}\n\\Phi_n(p) = \\frac{(-i)^n}{\\sqrt{2^n n!}}\\left(\\frac{1}{\\pi m \\hbar \\omega}\\right)^{1/4}e^{-p^2 / 2 m \\hbar \\omega} H_n\\left(\\frac{p}{\\sqrt{m \\omega \\hbar}}\\right)\n\\end{equation}\nwhere $H_n(x)$ is the $n$-th Hermite polynomial. The three lowest energy momentum-space wavefunctions, $\\Phi_n(p)$, can be seen in Fig.~\\ref{fig:ho_wf}. 
The harmonic oscillator Hamiltonian can also be rewritten as\n\\begin{equation}\n\\hat{H}_{HO} = \\hbar \\omega \\left(\\hat{a}^\\dagger \\hat{a} + \\frac{1}{2}\\right),\n\\end{equation}\nwhere $\\hat{a}^\\dagger$ and $\\hat{a}$ are raising and lowering operators which act on the harmonic oscillator eigenstates like\n\\begin{equation}\n\\hat{a}^\\dagger\\ket{n} = \\sqrt{n + 1}\\ket{n+1}\n\\end{equation}\n\\begin{equation}\n\\hat{a}\\ket{n} = \\sqrt{n}\\ket{n-1}.\n\\end{equation}\n\nFor an operator, $\\hat{V}$, represented with respect to a momentum basis, $V(p, p')$, a transformation to a harmonic oscillator basis can be achieved by the following calculation:\n\\begin{equation}\\label{eq:ho_transform}\n\\bra{n}\\hat{V}\\ket{n'} = \\iint V(p, p') \\Phi_n^*(p) \\Phi_{n'}(p') \\,dp\\,dp'.\n\\end{equation}\n\n\\subsection{Many-particle states}\n\nFor a system of $A$ non-interacting particles, the energy eigenstates of the system can be written as a product of the eigenstates of the single particle Hamiltonians:\n\\begin{equation}\n\\ket{\\Psi} = \\prod_{i=1}^{A}\\ket{\\Psi_{k_i}}_i\n\\end{equation}\nwhere $\\ket{\\Psi_{k_i}}_i$ is the $k_i$-th eigenstate of the $i$-th particle's single particle Hamiltonian. For $A$ non-interacting particles in a harmonic oscillator potential, these product states are denoted by:\n\\begin{equation}\n\\ket{n_1 n_2 ... n_A} = \\prod_{i=1}^{A}\\ket{n_i}_i.\n\\end{equation}\n\n\n\\section{1-Dimensional System of Bosons}\n\nThe Hamiltonian for a 1-dimensional system of $A$ identical bosons with mass $m$ that interact via a local two-body potential is as follows:\n\\begin{equation}\n\\hat{H} = \\frac{1}{2 m} \\sum_{i=1}^A \\hat{k}_i^2 + \\sum_{i=1}^A\\sum_{j=i+1}^{A} \\hat{V}(x_i - x_j).\n\\end{equation}\nIn this equation, the $\\hat{k}_i$ are the single-particle momenta and the $x_i$ are the single-particle coordinates. A local potential is a function of the distance between two particles, as opposed to a function of both of their absolute coordinates.\n\n\\subsection{Jacobi Coordinates}\n\nThe Hamiltonian can be factored into a center-of-mass component, which is unaffected by the local potential, and a component relative to the center of mass. This is achieved by a transformation to relative momentum Jacobi coordinates \\cite{Jurgenson:2008jp}. These are defined to be:\n\\begin{equation}\np_i = \\sqrt{\\frac{i}{i+1}} \\left(\\frac{1}{i} \\sum_{j=1}^{i}(k_j - k_{i+1})\\right),\n\\end{equation}\nwith $k_i$ being the single-particle momenta as mentioned above. It is worth noting that for a system of $A$ particles, we only need $A-1$ Jacobi momentum coordinates. We can get $\\hat{V}$ in terms of incoming Jacobi momentum $p$ and outgoing momentum $p'$ by taking the following Fourier transform of $\\hat{V}(x_1 - x_2)$\n\\begin{equation}\n\\hat{V}(p, p') = \\int \\hat{V}(\\sqrt{2} l_1) e^{- i (p - p') l_1} \\,dl_1,\n\\end{equation}\nwhere $l_1$ is the coordinate conjugate to $p_1$, $(x_1 - x_2)/\\sqrt{2}$. Removing the center-of-mass component of the Hamiltonian, we find the new Hamiltonian to be\n\\begin{equation}\n\\hat{H} = \\frac{1}{2 \\mu} \\sum_{i=1}^{A-1} \\hat{p}_i^2 + \\sum_{i=1}^{A-1}\\hat{V}(p_i, p_i'),\n\\end{equation}\nwhere $\\mu$ is the reduced mass given by $\\mu = m / A$ for particles of equal mass. 
For the purposes of this work, we will be starting with a potential defined with respect to the Jacobi momentum coordinates, avoiding the process of Fourier transforming a local coordinate-space potential.\n\n\\subsection{Transformation To Harmonic Oscillator States}\n\nIn momentum space, SRG evolutions require separate treatment of 2-body, 3-body, and higher-body potentials, to avoid delta functions caused by spectator particles \\cite{gl\u00f6ckle1983quantum}. A particle is a spectator particle when it is in a state where it does not interact with two particles that are interacting with each other. To avoid the cognitive load of handling all these potentials separately, we can make a transformation to a discrete basis. The discrete basis of choice for this project is the harmonic oscillator basis with respect to the Jacobi coordinates. This transformation can be achieved as shown in Eq.~\\ref{eq:ho_transform}. From here on out, $\\ket{n_i}$ will mean the $n_i$-th harmonic oscillator state with respect to the $i$-th Jacobi coordinate. So for a full treatment of an $A$-particle system, we will need product states for $A-1$ Jacobi coordinates,\n\\begin{equation}\n\\ket{n_1 n_2 ... n_{A-1}} = \\prod_{i=1}^{A-1}\\ket{n_i}.\n\\end{equation}\n\n\\subsection{Symmetrizing the Harmonic Oscillator Basis}\n\nAccording to the spin-statistics theorem, the state of a system of bosons must\nbe symmetric under any permutation of the particle coordinates\n\\cite{streater2000pct}.\nWe approach the problem of generating a basis of states that reflects this symmetry recursively, by first symmetrizing the 2-body system and then going from a symmetrized $(A-1)$-body basis to a symmetrized $A$-body basis. At each stage, we must diagonalize the symmetrizer, the form of which we will show for the 2-body and 3-body cases.\n\nFor the 2-particle system, we are only working with the first Jacobi coordinate and the harmonic oscillator states corresponding to it, $\\ket{n_1}$. The symmetrizer is $\\hat{S} = (1 + \\hat{P}_{12})/2$, where $\\hat{P}_{ij}$ is the operator that exchanges the coordinates between the $i$-th and $j$-th particles. Because harmonic oscillator wavefunctions are even for even $n$ and odd for odd $n$, $\\hat{P}_{12} \\ket{n_1} = (-1)^{n_1} \\ket{n_1}$ and the symmetrizer is already diagonal with eigenvalue 0 for odd $n_1$ and eigenvalue 1 for even $n_1$. We select the states with eigenvalue 1 to create our symmetric harmonic oscillator basis for the 2-particle system.\n\nTo generate the partially symmetrized basis for the 3-body system, we take the states $\\ket{n_1 n_2}$ where the possible $n_1$ values come from the symmetrized 2-body basis. The general 3-body symmetrizer is governed by the permutation group $S_3$, generated by $\\hat{P}_{12}$ and $\\hat{P}_{23}$, and has the form\n\\begin{equation}\n\\hat{S} = \\frac{1}{6}(1 + \\hat{P}_{12} + \\hat{P}_{23} + \\hat{P}_{12}\\hat{P}_{23} + \\hat{P}_{23}\\hat{P}_{12} + \\hat{P}_{12}\\hat{P}_{23}\\hat{P}_{12}).\n\\end{equation}\nSince the states included in our to-be-symmetrized basis are already symmetric with respect to $\\hat{P}_{12}$, the symmetrizer simplifies to $\\hat{S} = (1 + 2\\hat{P}_{23})/3$. 
The matrix elements of $\\hat{P}_{23}$ in our partially symmetrized basis are\n\\begin{equation}\n\\bra{n_1' n_2'}\\hat{P}_{23}\\ket{n_1 n_2} = \\delta_{N',N} \\bra{n_1' n_2'}\\ket{n_1 n_2}_3,\n\\end{equation}\nwhere $N'=n_1' +n_2'$, $N=n_1 + n_2$, and $\\bra{n_1' n_2'}\\ket{n_1 n_2}_3$ is the 1-dimensional harmonic oscillator transformation bracket for particles with mass ratio 3. This transformation bracket value is calculated as\n\\begin{equation}\n\\bra{n_1' n_2'}\\ket{n_1 n_2}_3 =\n\\frac{\\sqrt{n_1!n_2!}}{\\sqrt{n_1'!n_2'!}}\\sum_{k=0}^{n_1'}\\binom{n_1'}{k}\\binom{n_2'}{n_2\n- k}\\left(\\frac{1}{2}\\right)^{n_1' + n_2 -\n2k}\\left(\\frac{\\sqrt{3}}{2}\\right)^{n_2' - n_2 + 2k} (-1)^{n_2-k}\n\\delta_{n_{1}' + n_{2}',n_{1} + n_{2}}.\n\\end{equation}\nDiagonalizing $\\hat{S}$ will give eigenvectors that are normalized linear combinations of product states with the same total harmonic oscillator number $N$. We again select the ones with eigenvalue 1 and keep those as our fully symmetrized 3-body basis. While the project leading up to this thesis worked only with 2-particle and 3-particle systems, so that only the symmetrization of those bases is relevant here, the procedure generalizes to any $A$-particle basis. The details of this procedure are explained in Ref.~\\cite{Jurgenson:2008jp}.\n\n\\subsection{Matrix Elements in the 3-Body Harmonic Oscillator Space}\n\nFrom Eq.~\\ref{eq:ho_transform}, we can transform both parts of our 2-body Hamiltonian into the symmetrized 2-body harmonic oscillator basis. In order to work in a symmetrized 3-body basis, we need to compute the kinetic energy for the 3-body system and embed the 2-body potential in the 3-body space. Both the $A$-body kinetic energy and the $A$-body embedded 2-body potential are defined with respect to their $(A-1)$-body counterparts \\cite{Jurgenson:2008jp}, so we will set up the discussion to give those formulas. Let $\\ket{N_A i_A}$ be a symmetrized $A$-body state with total harmonic oscillator number $N_A$ and degeneracy label $i_A$; it is defined as\n\\begin{equation}\n\\ket{N_A i_A} = \\sum_{l=1}^k c_l \\ket{N_{A-1} i_{A-1}; n_{A-1}},\n\\end{equation}\nwhere $k$ is the number of states in the linear combination, the $c_l$ are the coefficients of the linear combination of product states that make up the symmetrized state, $\\ket{n_{A-1}}$ is a harmonic oscillator state with respect to the $(A-1)$-th Jacobi coordinate, and $\\ket{N_{A-1} i_{A-1}}$ is a symmetrized $(A-1)$-body state. 
The $A$-body kinetic energy is calculated as\n\\begin{equation}\n\\begin{aligned}\n\\bra{N_A' i_A'}\\hat{T}_A\\ket{N_A i_A} = & \\sum_{l=1}^k \\sum_{l'=1}^{k'} c_l c_{l'}' (\\bra{N_{A-1}'\ni_{A-1}'}\\hat{T}_{A-1}\\ket{N_{A-1} i_{A-1}}\\delta_{n_{A-1}',n_{A-1}} \\\\\n                                        & -\n                                        \\frac{\\omega}{4}\\delta_{N_{A-1}',N_{A-1}}\\delta_{i_{A-1}',i_{A-1}}(\\sqrt{(n_{A-1} + 1)(n_{A-1} + 2)}\\delta_{n_{A-1}',n_{A-1} + 2} \\\\\n& + \\sqrt{n_{A-1}(n_{A-1} - 1)}\\delta_{n_{A-1}',n_{A-1} - 2} \\\\\n& - (2 n_{A-1} + 1)\\delta_{n_{A-1}',n_{A-1}})).\n\\end{aligned}\n\\end{equation}\nSimilarly, the embedded 2-body potential in the $A$-body basis is calculated as\n\\begin{equation}\n\\bra{N_A' i_A'}\\hat{V}_A\\ket{N_A i_A} = \\frac{A}{A-2} \\sum_{l=1}^k \\sum_{l'=1}^{k'} c_l c_{l'}'\n\\bra{N_{A-1}' i_{A-1}'}\\hat{V}_{A-1}\\ket{N_{A-1} i_{A-1}}\\delta_{n_{A-1}',n_{A-1}}.\n\\end{equation}\n\n\\section{Similarity Renormalization Group}\n\nThe similarity renormalization group (SRG), whose use in low-energy nuclear physics was initially explored at OSU \\cite{Jurgenson:2007td}, is one method of reducing the computational complexity of low-energy nuclear calculations. The idea behind it is to apply a continuous unitary transformation that brings the operator of interest (for example, the Hamiltonian) into a simpler form. This simpler form is chosen to allow the large basis to be truncated without affecting the operator eigenvalues, which are essential to the truncated operator's utility in later calculations. The SRG transformation is given by the flow equation for the evolving Hamiltonian $\\hat{H}_s$,\n\\begin{equation}\n\\frac{d \\hat{H}_s}{ds} = [\\hat{\\eta}_s, \\hat{H}_s],\n\\end{equation}\nwhere $[A, B]$ is the matrix commutator and $\\hat{\\eta}_s$ is the generator of the transformation, which is defined as\n\\begin{equation}\n\\hat{\\eta}_s = [\\hat{G}_s, \\hat{H}_s],\n\\end{equation}\nwhere $\\hat{G}_s$ is known as the flow operator. The changes in the Hamiltonian over the course of the SRG evolution are absorbed into the potential, $\\hat{V}_s$, leaving the kinetic energy, $\\hat{T}$, independent of the flow parameter $s$. We take $s = 0$ for the un-evolved Hamiltonian. It is often convenient to use $\\lambda = 1 / s^{1/4}$ as an alternative flow parameter, so an SRG transformation runs from $\\lambda=\\infty$ towards $\\lambda=0$ rather than from $s=0$ towards $s=\\infty$. For our purposes, $\\lambda=40$ (or $s=3.9 \\times 10^{-7}$) is close enough to $s = 0$ to serve as the starting point for the initial Hamiltonian.\n\n\\subsection{Induced Many-Body Forces and Flow Operators}\n\nThe SRG evolution is fully unitary for an $A$-body operator when done in the $A$-body space. However, working in the full many-body space is only feasible for the smallest of systems, due to the combinatorial scaling of the basis size with respect to $A$. When evolving an $A$-body system operator in some smaller basis, the SRG evolution induces many-body forces which show up as errors when operator observables are eventually computed. This error has limited the SRG's domain of applicability to calculations for small to medium-sized systems.\n\nCertain results have suggested that alternative choices for the flow operator, $\\hat{G}_s$, could reduce this many-body force induction and thereby reduce the error induced by applying SRG to calculations \\cite{Dicaire:2014fra}. 
The standard choice for $\\hat{G}_s$ is the kinetic energy, which is easy to calculate and represent in most problems. Alternative flow operator choices have been explored previously, but not thoroughly.\n\n\\subsection{Value of the 1-Dimensional Laboratory}\n\nSRG was explored initially in the 1-dimensional setting, which made it easy to test and allowed those exploring it to learn a great deal, such as the more careful treatment necessary for $A$-body evolutions in momentum space. Moreover, the setup of the 1-dimensional system is analogous to that of more complex calculations, so the results from the 1-dimensional setting generalize to application of the SRG in 3-dimensional calculations. If there is something to be learned about the connection between flow operator and many-body force induction, it is best comprehensively explored in a simple 1-dimensional problem and then generalized to expensive, difficult 3-dimensional calculations.\n\nIn chapter \\ref{python_lib}, we will explain the structure of the Python framework designed to make it easy to explore SRG in this 1-dimensional setting.\n", "meta": {"hexsha": "1137ce2a8d7cd3e3ab668f59f2f06a8bbb00217e", "size": 16253, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/chap2.tex", "max_stars_repo_name": "cheshyre/undergrad-thesis", "max_stars_repo_head_hexsha": "548d24e27bf6be4865a9499b13f0046f5ead7e89", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "paper/chap2.tex", "max_issues_repo_name": "cheshyre/undergrad-thesis", "max_issues_repo_head_hexsha": "548d24e27bf6be4865a9499b13f0046f5ead7e89", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/chap2.tex", "max_forks_repo_name": "cheshyre/undergrad-thesis", "max_forks_repo_head_hexsha": "548d24e27bf6be4865a9499b13f0046f5ead7e89", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.408045977, "max_line_length": 879, "alphanum_fraction": 0.7378945425, "num_tokens": 4855, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.785308580887758, "lm_q2_score": 0.7025300698514778, "lm_q1q2_score": 0.5517028921860415}}
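To make the flow equation concrete, here is a minimal numerical sketch of an SRG evolution (our own illustration, not the thesis framework of chapter \\ref{python_lib}; the matrices are toy values, and the flow operator is the standard choice $\\hat{G}_s = \\hat{T}$):
\\begin{verbatim}
# Integrate dH/ds = [[G, H], H] with G = T for a small toy matrix.
import numpy as np
from scipy.integrate import solve_ivp

def srg_rhs(s, h_flat, T, dim):
    H = h_flat.reshape((dim, dim))
    eta = T @ H - H @ T        # generator eta_s = [G_s, H_s] with G_s = T
    dH = eta @ H - H @ eta     # flow equation dH_s/ds = [eta_s, H_s]
    return dH.ravel()

dim = 4
T = np.diag([1.0, 2.0, 3.0, 4.0])      # toy kinetic energy
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(dim, dim))
H0 = T + (V + V.T) / 2                 # symmetric toy Hamiltonian

lam = 2.0                              # target lambda; s = 1 / lambda^4
sol = solve_ivp(srg_rhs, (0.0, 1.0 / lam ** 4), H0.ravel(),
                args=(T, dim), rtol=1e-10, atol=1e-12)
Hs = sol.y[:, -1].reshape((dim, dim))

# The evolution is unitary: the eigenvalues are preserved.
print(np.linalg.eigvalsh(H0))
print(np.linalg.eigvalsh(Hs))
\\end{verbatim}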
{"text": "\\subsubsection{complex -- a numeric type}\nComplex numbers have a real and imaginary part, which are each a floating point number.\nTo extract these parts from a complex number z, use z.real and z.imag.\n(The standard library includes the additional numeric types fractions.Fraction, for rationals, and decimal.Decimal, for floating-point numbers with user-definable precision.)", "meta": {"hexsha": "b038aca3eb1475d35b45a3f20a9acbbfa2f91aeb", "size": 375, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/python3/sections/built-in-types-numerics-complex.tex", "max_stars_repo_name": "remigiusz-suwalski/programming-notes", "max_stars_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-28T05:03:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T05:03:18.000Z", "max_issues_repo_path": "src/python3/sections/built-in-types-numerics-complex.tex", "max_issues_repo_name": "remigiusz-suwalski/programming-notes", "max_issues_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/python3/sections/built-in-types-numerics-complex.tex", "max_forks_repo_name": "remigiusz-suwalski/programming-notes", "max_forks_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_forks_event_max_datetime": "2016-11-24T19:55:47.000Z", "avg_line_length": 93.75, "max_line_length": 174, "alphanum_fraction": 0.808, "num_tokens": 75, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7853085808877581, "lm_q2_score": 0.7025300573952054, "lm_q1q2_score": 0.551702882404024}}
{"text": "\\lab{The Inverted Pendulum}{The Inverted Pendulum}\n\\label{lab:inverted_pendulum}\n\n\\objective{We will set up the LQR optimal control problem for the inverted pendulum and compute the solution numerically. \n% We will add a stochastic component to the system.\n}\n\n\nThink back to your childhood days when, for entertainment purposes, you'd balance objects: a book on your head, a spoon on your nose, or even a broom on your hand.\n Learning how to walk was likely your initial introduction to the inverted pendulum problem.\n\nA pendulum has two rest points: a stable rest point directly underneath the pivot point of the pendulum, and an unstable rest point directly above. \nThe generic pendulum problem is to simply describe the dynamics of the object on the pendulum (called the `bob').\nThe inverted pendulum problem seeks to guide the bob toward the unstable fixed point at the top of the pendulum. \nSince the fixed point is unstable, the bob must be balanced relentlessly to keep it upright. \n\nThe inverted pendulum is an important classical problem in dynamics and control theory, and is often used to test different control strategies. One application of the inverted pendulum is the guidance of rockets and missiles. Aerodynamic instability occurs because the center of mass of the rocket is not the same as the center of drag. Small gusts of wind or variations in thrust require constant attention to the orientation of the rocket. \n\n\n\\section*{The Simple Pendulum}\nWe begin by studying the simple pendulum setting. \nSuppose we have a pendulum consisting of a bob with mass $m$ rotating about a pivot point at the end of a (massless) rod of length $l$. \nLet $\\theta(t)$ represent the angular displacement of the bob from its stable equilibrium.\nBy Hamilton's Principle, the path $\\theta$ that is taken by the bob minimizes the functional \n\\begin{align}\nJ[\\theta] = \\int_{t_0}^{t_1}\tL,\n\\end{align}\nwhere the Lagrangian $L = T - U$ is the difference between the kinetic and potential energies of the bob. \n\nThe kinetic energy of the bob is given by $mv^2/2$, where $v$ is the velocity of the bob. \nIn terms of $\\theta$, the kinetic energy becomes \n\\begin{align}\n\t\\begin{split}\n\tT &= \\frac{m}{2}v^2  = \\frac{m}{2}(\\dot{x}^2 + \\dot{y}^2),\\\\\n\t&= \\frac{m}{2}((l\\cos(\\theta)\\dot{\\theta})^2 + (l\\sin(\\theta)\\dot{\\theta})^2),\\\\\n\t&= \\frac{ml^2\\dot{\\theta}^2}{2}.\n\t\\end{split}\n\\end{align}\nThe potential energy of the bob is $U = mg(l-l\\cos \\theta)$. \nFrom these expressions we can form the Euler-Lagrange equation, which determines the path of the bob: \n\\begin{align}\n\t\\begin{split}\n\t0 &= L_{\\theta} - \\frac{d}{dx}L_{\\dot{\\theta}},\\\\\n\t&= -mgl\\sin \\theta - m l^2 \\ddot{\\theta},\\\\\n\t&= \\ddot{\\theta} + \\frac{g}{l}\\sin \\theta.\n\t\\end{split}\n\\end{align}\nSince in this setting the energy of the pendulum is conserved, the equilibrium position $\\theta = 0$ is only Lyapunov stable. When forces such as friction and air drag are considered $\\theta = 0$ becomes an asymptotically stable equilibrium. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{Simple_gravity_pendulum.png}\n\\caption{The frame of reference for the simple pendulum problem.\n}\n\\label{fig:inverted_pendulum:simple_gravity_pendulum}\n\\end{figure}\n\n\\section*{The Inverted Pendulum}\n\\subsection*{The Control System}\nWe consider a gift suspended above a rickshaw by a (massless) rod of length $l$. 
\nThe rickshaw and its suspended gift will have masses $M$ and $m$ respectively, $M > m$.  \nLet $\\theta $ represent the angle between the gift and its unstable equilibrium, with clockwise orientation. \nLet $v_1$ and $v_2$ represent the velocities of the rickshaw and the gift, and $F$ the force exerted on the rickshaw. \nThe rickshaw will be restricted to traveling along a straight line (the $x$-axis). \n\nBy Hamilton's Principle, the path $(x,\\theta)$ of the rickshaw and the present minimizes the functional \n\\begin{align}\nJ[x,\\theta] = \\int_{t_0}^{t_1} L \\, dt,\n\\end{align}\nwhere the Lagrangian $L = T - U$ is the difference between the total kinetic and potential energies of the system.\n\nSince the position of the rickshaw and the present are $(x(t),0)$ and $(x-l\\sin \\theta, l\\cos \\theta)$ respectively, the total kinetic energy is \n\\begin{align}\n\t\\begin{split}\n\tT &= \\frac{1}{2}Mv_1^2 +  \\frac{1}{2}mv_2^2,\\\\\n\t&= \\frac{1}{2}M\\dot{x}^2 +  \\frac{1}{2}m((\\dot{x} - l\\dot{\\theta}\\cos \\theta)^2 + (- l\\dot{\\theta}\\sin \\theta)^2),\\\\\n\t&= \\frac{1}{2}(M+m)\\dot{x}^2 +\\frac{1}{2}m l^2\\dot{\\theta}^2-ml\\dot{x}\\dot{\\theta}\\cos \\theta.\n\t\\end{split}\n\\end{align}\n% The potential energy $U$ is composed of two parts: the first is the gravitational potential energy of the bob/present on the pendulum, and is given by $(mg)(l\\cos \\theta)$.\n% The second part is the potential energy in the system due to any force $F$ exerted on the rickshaw.\n% Recalling that whenever a force field $\\mathbf{F}$ has the form $\\mathbf{F} = \\nabla \\phi(\\mathbf{x})$, where $\\phi(\\mathbf{x})$ is a scalar field, the potential energy derived from that force is $ -\\phi(\\mathbf{x})$.\nThe total potential energy is \n\\begin{align*}\nU &= mgl\\cos \\theta.\n\\end{align*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{rickshaw_img.png}\n\\caption{The inverted pendulum problem on a mobile rickshaw with a present suspended above.\n}\n\\label{fig:inverted_pendulum:rickshaw_diagram}\n\\end{figure}\n\nThe path $(x,\\theta)$ of the rickshaw and the present satisfies the Euler-Lagrange differential equations, but the problem involves a nonconservative force $F$ acting in the $x$ direction.\nBy way of D'Alembert's Principle, our normal Euler-Lagrange equations now include the nonconservative force $F$ on the right side of the equation:\n\n\\begin{align}\n\t\\begin{split}\n\\frac{d}{dt} \\frac{\\partial L}{\\partial \\dot{x}} - \\frac{\\partial L}{\\partial x} &= F,\\\\\n\\frac{d}{dt} \\frac{\\partial L}{\\partial \\dot{\\theta}} - \\frac{\\partial L}{\\partial \\theta} &= 0.\n\t\\end{split}\\label{inverted_pendulum:langrange_eqns}\n\\end{align}\nAfter expanding \\eqref{inverted_pendulum:langrange_eqns} we see that $x(t)$ and $\\theta(t)$ satisfy\n\\begin{align}\n\t\\begin{split}\n\t\tF &= (M + m)\\ddot{x} - ml\\ddot{\\theta} \\cos \\theta + ml \\dot{\\theta}^2 \\sin \\theta,\\\\\n\t\tl \\ddot{\\theta} &= g \\sin \\theta + \\ddot{x} \\cos \\theta.\n\t\\end{split}\\label{inverted_pendulum:langrange_eqns_explicit}\n\\end{align}\n\nAt this point we make a further simplifying assumption. \nIf $\\theta$ starts close to $0$, we may assume that the corresponding force $F$ will keep $\\theta$ small. 
\nIn this case, we linearize \\eqref{inverted_pendulum:langrange_eqns_explicit} about $(\\theta, \\dot{\\theta}) = (0,0)$, obtaining the equations \n\\begin{align*}\n\t\\begin{split}\n\t\tF &= (M + m)\\ddot{x} - ml\\ddot{\\theta},\\\\\n\t\tl \\ddot{\\theta} &= g \\theta + \\ddot{x}.\n\t\\end{split}%\\label{inverted_pendulum:langrange_eqns_explicit2}\n\\end{align*}\nThese equations can be further manipulated \n % \\eqref{inverted_pendulum:langrange_eqns_explicit2}\nto obtain \n\\begin{align}\n\t\\begin{split}\n\t\t\\ddot{x} &= \\frac{1}{M}F + \\frac{m}{M}g\\theta,\\\\\n\t\t\\ddot{\\theta} &= \\frac{1}{Ml}F + \\frac{g}{Ml} (M+m) \\theta.\n\t\\end{split}\\label{inverted_pendulum:langrange_eqns_explicit3}\n\\end{align}\n\nWe will now write \\eqref{inverted_pendulum:langrange_eqns_explicit3} as a first order system. \nMaking the assignments $x_1 = x$, $x_2 = x_1'$, $\\theta_1 = \\theta$, $\\theta_2 = \\theta_1'$, and letting $u = F$ represent the control variable, we obtain \n\\begin{align*}\n\\begin{bmatrix}\nx_1\\\\\nx_2 \\\\\n\\theta_1 \\\\\n\\theta_2\n\\end{bmatrix}' &= \n\\begin{bmatrix}\n0 & 1 & 0 & 0\\\\\n0 & 0 & \\frac{mg}{M} & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n0 & 0 & \\frac{g}{Ml}(M+m) & 0\n\\end{bmatrix}\n\\begin{bmatrix}\nx_1\\\\\nx_2 \\\\\n\\theta_1 \\\\\n\\theta_2\n\\end{bmatrix} + u\n\\begin{bmatrix}\n0\\\\\n\\frac{1}{M} \\\\\n0 \\\\\n\\frac{1}{Ml}\n\\end{bmatrix},\n\\end{align*}\nwhich can be written more concisely as \n\\[z' = Az + Bu.\\]\n\n\\subsection*{The infinite time horizon LQR problem}\nWe consider the cost function\n\\begin{align}\n\\begin{split}\nJ[z] &= \\int_0^{\\infty} (q_1x_1^2 + q_2x_2^2  + q_3\\theta_1^2 + q_4\\theta_2^2 + ru^2)\\, dt\\\\\n&= \\int_0^{\\infty} (z^TQz + u^TRu) \\, dt\n\\end{split} \\label{inverted_pendulum:LQR}\n\\end{align}\nwhere $q_1, q_2, q_3, q_4$, and $r$ are nonnegative weights, and \n\\[\nQ = \n\\begin{bmatrix}\nq_1 & 0 & 0 & 0 \\\\\n0 & q_2 & 0 & 0\\\\\n0 & 0 & q_3 & 0 \\\\\n0 & 0 & 0 & q_4\n\\end{bmatrix}, R = \\begin{bmatrix} r \\end{bmatrix}.\n\\]\n\n\\begin{problem}\nWrite a function that returns the matrices $A, B, Q$, and $R$ given above. Let $g = 9.8\\text{ m}/\\text{s}^2$.\t\n\t\n\\begin{lstlisting}\ndef linearized_init(M, m, l, q1, q2, q3, q4, r): \n\t'''\n\tParameters:\n\t----------\n\tM, m: floats\n          masses of the rickshaw and the present\n\tl \t: float\n          length of the rod\n\tq1, q2, q3, q4, r : floats\n         relative weights of the position and velocity of the rickshaw, the 
		 angular displacement theta and the change in theta, and the control\n\t\n\tReturn\n\t-------\n\tA : ndarray of shape (4,4) \n\tB : ndarray of shape (4,1) \n\tQ : ndarray of shape (4,4) \n\tR : ndarray of shape (1,1) \n\t'''\n\tpass\t\n\\end{lstlisting}\n\\end{problem}\n\nThe optimal control problem \\eqref{inverted_pendulum:LQR} is an example of a Linear Quadratic Regulator (LQR), and is known to have an optimal control $\\tilde{u}$ described by a linear state feedback law:\n\\begin{align*}\n\\tilde{u} &= -R^{-1}B^TP\\tilde{z}.\n\\end{align*}\nHere $P$ is a matrix function that satisfies the Riccati differential equation (RDE)\n\\[\n\\dot{P}(t) = PA + A^TP + Q - PBR^{-1}B^T P.  \n\\]\nSince this problem has an infinite time horizon, we have $\\dot{P} = 0$. Thus $P$ is a constant matrix, and can be found by solving the algebraic Riccati equation (ARE)\n\\begin{align}\n PA + A^TP + Q - PBR^{-1}B^T P = 0.  
\\label{inverted_pendulum:ARE}\n\\end{align}\nThe evolution of the optimal state vector $\\tilde{z}$ can then be described by \\footnote{See Calculus of Variations and Optimal Control Theory, Daniel Liberzon, Ch.6}\n\\begin{align}\n\\dot{\\tilde{z}} = (A - BR^{-1}B^TP)\\tilde{z}. \\label{inverted_pendulum:optimal_state}\n\\end{align}\n\n\\begin{problem}\nWrite the following function to find the matrix $P$. \nUse \\li{scipy.optimize.root}. \nSince \\li{root} takes in a vector and not a matrix, you will have to reshape the matrix $P$ to a vector before passing it in and back to a matrix after getting your result, using \\li{P.reshape(16)} and \\li{P.reshape((4,4))}.\n\\begin{lstlisting}\ndef find_P(A, B, Q, R):\n\t'''\n\tParameters:\n\t----------\n\tA, Q \t: ndarrays of shape (4,4)\n\tB\t\t: ndarray of shape (4,1)\n\tR\t\t: ndarray of shape (1,1)\n\t\n\tReturns\n\t-------\n\tP\t\t: the matrix solution of the Riccati equation\n\t'''\n\tpass\n\n\\end{lstlisting}\nUsing the values \n\\begin{lstlisting}\nM, m = 23., 5.\nl = 4.\nq1, q2, q3, q4 = 1., 1., 1., 1.\nr = 10.\n\\end{lstlisting}\ncompute the eigenvalues of $A - BR^{-1}B^TP$.\nAre any of the eigenvalues positive? \nConsider the differential equation \\eqref{inverted_pendulum:optimal_state} governing the optimal state $\\tilde{z}$. \nUsing this value of $P$, will we necessarily have $\\tilde{z} \\to 0$?\n\\end{problem}\n\nNotice that we have no information on how many solutions \\eqref{inverted_pendulum:ARE} possesses. \nIn general there may be many solutions. \nWe hope to find a unique solution $P$ that is \\textit{stabilizing}: the eigenvalues of $A - BR^{-1}B^TP$ have negative real part. \nTo find this $P$, use the function \\li{solve_continuous_are} from \\li{scipy.linalg}. \nThis function is designed to solve the continuous algebraic Riccati equation. \n\n\\begin{problem}\n\tWrite the following function that implements the LQR solution described earlier.  For the IVP solver, you can use your own or you may use the function \\li{odeint} from \\li{scipy.integrate}.\n\\begin{lstlisting}\ndef rickshaw(tv, X0, A, B, Q, R, P):\n\t'''\n\tParameters:\n\t----------\n\ttv \t: ndarray of time values, with shape (n+1,)\n\tX0 \t: Initial conditions on state variables\n\tA, Q: ndarrays of shape (4,4)\n\tB\t: ndarray of shape (4,1)\n\tR\t: ndarray of shape (1,1)\n\tP\t: ndarray of shape (4,4)\n\t\n\tReturns\n\t-------\n\tZ : ndarray of shape (n+1,4), the state vector at each time\n\tU : ndarray of shape (n+1,), the control values\n\t'''\n\tpass\n\\end{lstlisting}\n\\label{prob:inverted_pendulum3}\n\\end{problem}\n\n\\begin{problem}\nTest the function made in Problem \\ref{prob:inverted_pendulum3} with the following inputs: \n\\begin{lstlisting}\nM, m = 23., 5.\nl = 4.\nq1, q2, q3, q4 = 1., 1., 1., 1.\nr = 10.\ntf = None\nX0 = np.array([-1, -1, .1, -.2])\n\\end{lstlisting}\nFind the matrix $P$ using the \\li{scipy.optimize.root} method with \\li{tf=6} as well as the \\li{solve_continuous_are} method with \\li{tf=60}.\nPlot the solutions $\\tilde{z}$ and $\\tilde{u}$. 
\nCompare your results as shown in Figure \\ref{fig:inverted_pendulum:4}.\n% and \\ref{fig:inverted_pendulum:prob4_stable} respectively.\n\\label{prob:inverted_pendulum:4}\n\\end{problem}\n\n\\begin{figure}\n\\begin{minipage}[b]{.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{prob4_unstable.pdf}\n\\caption*{$P$ is found using \\li{scipy.optimize.root}.}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.47\\linewidth}\n\\centering\n\\includegraphics[width=\\textwidth]{prob4_stable.pdf}\n\\caption*{$P$ is found using \\li{solve_continuous_are}.}\n\\end{minipage}\n\\caption{The solutions of Problem \\ref{prob:inverted_pendulum:4}.}\n\\label{fig:inverted_pendulum:4}\n\\end{figure}\n\n\\begin{comment}\n\\begin{problem}\nConsider the following inputs: \n\\begin{lstlisting}\nM, m = 23., 5.\nl = 4.\nq1, q2, q3, q4 = 1., 1., 1., 1.\nr = 10.\ntf = 60\nX0 = np.array([-1, -1, .1, -.2])\n\\end{lstlisting}\nVary the entries of \\li{X0} responsible for $\\theta(0)$ and $\\dot{\\theta}(0)$ to determine the sensitivity of the control $u$ to the initial conditions.  What initial conditions lead to a reasonable, physical control $u$? (Plot several solutions with various values of $\\theta(0)$ and $\\dot{\\theta}(0)$.)\n\\end{problem}\n\\end{comment}", "meta": {"hexsha": "110534f43e352ff07d5d72f1438719ae44544ffa", "size": 13671, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "acme-material/Labs/Volume4/InvertedPendulum/InvertedPendulum.tex", "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_issues_repo_path": "acme-material/Labs/Volume4/InvertedPendulum/InvertedPendulum.tex", "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_forks_repo_path": "acme-material/Labs/Volume4/InvertedPendulum/InvertedPendulum.tex", "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.8089552239, "max_line_length": 442, "alphanum_fraction": 0.7049959769, "num_tokens": 4449, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7025300573952054, "lm_q2_score": 0.7853085758631159, "lm_q1q2_score": 0.5517028788740618}}
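For reference, here is one possible sketch of the pipeline the problems above describe (a solution outline of our own, not the official lab solution; it uses only the model matrices and inputs given in the text):
\\begin{lstlisting}
# LQR for the linearized inverted pendulum: build A, B, Q, R, solve
# the ARE for the stabilizing P, then evolve z' = (A - BK)z.
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import odeint

def linearized_init(M, m, l, q1, q2, q3, q4, r, g=9.8):
    A = np.array([[0., 1., 0., 0.],
                  [0., 0., m*g/M, 0.],
                  [0., 0., 0., 1.],
                  [0., 0., g*(M + m)/(M*l), 0.]])
    B = np.array([[0.], [1./M], [0.], [1./(M*l)]])
    return A, B, np.diag([q1, q2, q3, q4]), np.array([[r]])

A, B, Q, R = linearized_init(23., 5., 4., 1., 1., 1., 1., 10.)
P = solve_continuous_are(A, B, Q, R)      # stabilizing ARE solution
K = np.linalg.solve(R, B.T @ P)           # gain K = R^{-1} B^T P
print(np.linalg.eigvals(A - B @ K).real)  # all negative: stable

tv = np.linspace(0, 60, 601)
X0 = np.array([-1, -1, .1, -.2])
Z = odeint(lambda z, t: (A - B @ K) @ z, X0, tv)
U = -(Z @ K.T).ravel()                    # control u = -Kz at each time
\\end{lstlisting}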
{"text": "%bisection diagram\n\n\\section{Bisection Diagrams}\n\n\\subsection{2-D Bisection of Terminal Triangle Pair}\n\n\\begin{figure}\n\\begin{center}\n\n\\vnode{0} \\vnode{1} \\vnode{2}\n\\psfrag{vh}{$\\widehat{V}$}\n\n\\psfrag{longedge}{(longest edge)}\n\n\\psfrag{ar1}{\\Huge{$\\Downarrow$}}\n\\psfrag{ar2}{\\Huge{$\\Rightarrow$}}\n\n\\psfrag{t0}{$t_0$}\n\\psfrag{t1}{$t_1$}\n\\psfrag{s0}{$s_0$}\n\\psfrag{t0d}{$\\tilde{t}_0$}\n\n\\psfrag{A}{$A$}\n\\psfrag{B}{$B$}\n\\psfrag{C}{$C$}\n\\psfrag{D}{$D$}\n\n\\includegraphics[width=4.0in]{Bisect_Terminal_Triangle_Pair}\n\\caption{Diagram of bisection process for terminal triangle pair.  The initial pair consists of two triangles $t_0, t_1$ that share a longest edge.  The initial neighbors of $t_0$ are the triangles labeled $A, B, t_1$.  Assuming that $t_1$ has already been bisected into triangles $C, D$, we see an intermediate stage for bisecting $t_0$.  We then replace $t_0$ by triangles $\\tilde{t}_0$ and $s_0$ whose neighbors are adjusted accordingly.} \\label{fig:Bisect_Terminal_Pair_2D}\n\\end{center}\n\\end{figure}\n\nAssume we are given a terminal pair of triangles that share a longest edge (see Figure \\ref{fig:Bisect_Terminal_Pair_2D}).  Bisecting the longest edge adds a new vertex with global index denoted $\\widehat{V}$.  To better explain the bisection of the adjacent triangles, we assume that $t_1$ is already bisected into two triangles labeled $C, D$.\n\nTo bisect the triangle $t_0$, we must replace the triangle connectivity data of $t_0$ with the connectivity data of $\\tilde{t}_0$, followed by adding a new triangle to the mesh, namely $s_0$. We then change the neighbors of $t_0$ to those of $\\tilde{t}_0$, and add the neighbors of $s_0$ to the neighbor list.  Finally, we adjust the neighbors of $A$ and $B$ to correctly correspond to the new mesh.  This is summarized as follows.\n\\begin{equation}\\label{eqn:tri_connectivity_and_neighbors}\n\\begin{split}\n    \\text{triangle~connectivity~of} ~\\tilde{t}_0 &= [\\widehat{V}, V_1, V_2], \\quad \\text{neighbor~connectivity~of} ~\\tilde{t}_0 = [A, s_0, D] \\\\\n    \\text{triangle~connectivity~of} ~s_0 &= [\\widehat{V}, V_2, V_0], \\quad \\text{neighbor~connectivity~of} ~s_0 = [B, C, \\tilde{t}_0]\n%    \\text{neighbor~connectivity~of} ~\\tilde{t}_0 &= [A, s_0, D] \\\\\n%    \\text{neighbor~connectivity~of} ~s_0 &= [B, C, \\tilde{t}_0]\n\\end{split}\n\\end{equation}\n\nNote that the global triangle index of $t_0$ and $\\tilde{t}_0$ is identical, since we simply \\emph{replaced} $t_0$ by $\\tilde{t}_0$.  This means that the neighbor data for triangle $A$ does not need to be changed.  However, we must update the neighbors of triangle $B$, i.e. if $B$'s $k$th neighbor was $t_0$ before the bisection, then $B$'s $k$th neighbor after bisection should be $s_0$.\n\nNote: the process for bisecting $t_1$ is exactly the same as for $t_0$; just rotate the diagrams in Figure \\ref{fig:Bisect_Terminal_Pair_2D} by $180^\\circ$.  
In other words, triangle $C$ is really $\\tilde{t}_1$, and $D$ is $s_1$.\n\n\n\n%%%%%%\n", "meta": {"hexsha": "3b8d90864a0b069d312a42a03adf2a701b30cf58", "size": 2938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "FELICITY/Static_Codes/Lepp_Bisection_2D/src_code/Figures/latex_code/Bisection_Diagram.tex", "max_stars_repo_name": "brianchowlab/BcLOV4-FEM", "max_stars_repo_head_hexsha": "f54bd8efa0e0f2c7ca2de4a6688ef1a403376285", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "FELICITY/Static_Codes/Lepp_Bisection_2D/src_code/Figures/latex_code/Bisection_Diagram.tex", "max_issues_repo_name": "brianchowlab/BcLOV4-FEM", "max_issues_repo_head_hexsha": "f54bd8efa0e0f2c7ca2de4a6688ef1a403376285", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FELICITY/Static_Codes/Lepp_Bisection_2D/src_code/Figures/latex_code/Bisection_Diagram.tex", "max_forks_repo_name": "brianchowlab/BcLOV4-FEM", "max_forks_repo_head_hexsha": "f54bd8efa0e0f2c7ca2de4a6688ef1a403376285", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.5, "max_line_length": 477, "alphanum_fraction": 0.7178352621, "num_tokens": 1017, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321964553657, "lm_q2_score": 0.7745833841649233, "lm_q1q2_score": 0.5516832250416137}}
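To make the bookkeeping concrete, here is a toy sketch of the connectivity update summarized above (our own illustration, not FELICITY code; every index value is a placeholder chosen only for the example):
\\begin{verbatim}
# Triangle and neighbor connectivity stored as index triples.
# t0 = 0 with neighbors [A, B, t1] = [4, 5, 1]; t1's children are C = 2, D = 3.
tri = {0: None}                      # t0's vertex data, to be replaced by t0~
nbr = {0: [4, 5, 1], 5: [0, 8, 9]}   # B = 5 currently lists t0 as a neighbor

def bisect_t0(t0, s0, Vhat, V0, V1, V2, A, B, C, D):
    tri[t0] = [Vhat, V1, V2]   # t0~ reuses t0's global index
    nbr[t0] = [A, s0, D]
    tri[s0] = [Vhat, V2, V0]   # s0 is appended as a brand-new triangle
    nbr[s0] = [B, C, t0]
    # A's neighbor data is unchanged; B must now point at s0 instead of t0
    nbr[B] = [s0 if t == t0 else t for t in nbr[B]]

bisect_t0(t0=0, s0=6, Vhat=99, V0=10, V1=11, V2=12, A=4, B=5, C=2, D=3)
print(tri[0], nbr[0])  # [99, 11, 12] [4, 6, 3]
print(nbr[5])          # [6, 8, 9]
\\end{verbatim}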
{"text": "% !Mode:: \"TeX:UTF-8\"\n% !TEX program  = xelatex\n\\begin{abstract}\n    We introduce projectile motion and quadratic equation through a real world problem --- \\emph{what angle should we throw a football for maximum range}, then we give a proper and accurate definition of quadratic equation, from which we derive quadratic formula for solving quadratic equations. Furthermore, we list some useful applications of the quadratic equation and implement them in \\texttt{Python}.\n\\end{abstract}\n\\paragraph{Keywords:} Projectile motion; Quadratic equation; \\texttt{Python} simulation.\n", "meta": {"hexsha": "b273b9f21540ac7919cb0137a87980c4c67a2fde", "size": 576, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "MA320/sections/quadratic_equation/abstract.tex", "max_stars_repo_name": "iydon/homework", "max_stars_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-10-20T08:18:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-11T12:14:56.000Z", "max_issues_repo_path": "MA320/sections/quadratic_equation/abstract.tex", "max_issues_repo_name": "iydon/homework", "max_issues_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2022-01-13T03:04:10.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:49:10.000Z", "max_forks_repo_path": "MA320/sections/quadratic_equation/abstract.tex", "max_forks_repo_name": "iydon/homework", "max_forks_repo_head_hexsha": "253d4746528ef62d33eba1de0b90dcb17ec587ed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-11-02T05:46:01.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-12T23:11:28.000Z", "avg_line_length": 82.2857142857, "max_line_length": 406, "alphanum_fraction": 0.7881944444, "num_tokens": 126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.7122321903471563, "lm_q1q2_score": 0.5516832203102963}}
{"text": "\n\n\nCalculating means and other simple statistics is a matter of using the\nright function in R.  The \\pkg{mosaic} package --- which you should\nload in to R as shown in Section \\ref{sec:loading-mosaic} --- makes it\nstraightforward to calculate either a ``grand'' statistic or a\n``group-wise'' statistic.   To illustrate:\n\n\\noindent Load the \\pkg{mosaic} package, needed only once per session:\n\\begin{Schunk}\n\\begin{Sinput}\n> require(mosaic) \n\\end{Sinput}\n\\end{Schunk}\n\n\\noindent Read in data you are interested in analyzing, for instance\nthe Cherry-Blossom 2008 data described earlier: \\datasetCherryBlossomEight\n\\begin{Schunk}\n\\begin{Sinput}\n> runners = fetchData(\"Cherry-Blossom-2008.csv\")\n> names( runners )\n\\end{Sinput}\n\\begin{Soutput}\n[1] \"position\" \"division\" \"total\"    \"name\"     \"age\"     \n[6] \"place\"    \"net\"      \"gun\"      \"sex\"     \n\\end{Soutput}\n\\end{Schunk}\n\n\\noindent Calculate a grand mean on the ``gun'' time --- the time from\nthe start of the race, signalled by a gun, and when each runner\ncrossed the finish line:\n\\begin{Schunk}\n\\begin{Sinput}\n> mean( gun, data=runners )\n\\end{Sinput}\n\\begin{Soutput}\n[1] 93.7\n\\end{Soutput}\n\\end{Schunk}\n\\noindent Or other ``grand'' statistics.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> median( gun, data=runners )\n\\end{Sinput}\n\\begin{Soutput}\n[1] 93.7\n\\end{Soutput}\n\\begin{Sinput}\n> sd( gun, data=runners )\n\\end{Sinput}\n\\begin{Soutput}\n[1] 15\n\\end{Soutput}\n\\end{Schunk}\n\nTo tell R that you want to break the statistics down by groups, use\nthe \\verb+~+ notation, pronounced ``\\newword{tilde}.''  You will be\nusing this notation frequently in building models.  It means, ``model\nby'' or ``broken down by'' or ``versus.''\n\\begin{Schunk}\n\\begin{Sinput}\n> mean( gun ~ sex, data=runners )\n\\end{Sinput}\n\\begin{Soutput}\n  sex    S    N Missing\n1   F 98.8 6397       0\n2   M 88.3 5905       0\n\\end{Soutput}\n\\end{Schunk}\n\nOther statistics work the same way, for instance,\n\\begin{Schunk}\n\\begin{Sinput}\n> sd( gun ~ sex, data=runners )\n\\end{Sinput}\n\\begin{Soutput}\n  sex    S    N Missing\n1   F 13.3 6397       0\n2   M 14.7 5905       0\n\\end{Soutput}\n\\end{Schunk}\n\nAnother example ... wage broken down by sector of the economy, using\ndata :\n\\begin{Schunk}\n\\begin{Sinput}\n> cps = fetchData(\"cps.csv\")\n> mean( wage ~ sector, data=cps )\n\\end{Sinput}\n\\begin{Soutput}\n    sector     S   N Missing\n1 clerical  7.42  97       0\n2    const  9.50  20       0\n3    manag 12.70  55       0\n4    manuf  8.04  68       0\n5    other  8.50  68       0\n6     prof 11.95 105       0\n7    sales  7.59  38       0\n8  service  6.54  83       0\n\\end{Soutput}\n\\end{Schunk}\n\nIn the Whickham smoking data example, the outcome for each person was\nnot a number but a category: Alive or Dead at the time of the\nfollow-up.  \\datasetWhickham\n\\begin{Schunk}\n\\begin{Sinput}\n> w = fetchData(\"whickham.csv\")\n> names(w)\n\\end{Sinput}\n\\begin{Soutput}\n[1] \"outcome\" \"smoker\"  \"age\"    \n\\end{Soutput}\n\\begin{Sinput}\n> levels(w$outcome)\n\\end{Sinput}\n\\begin{Soutput}\n[1] \"Alive\" \"Dead\" \n\\end{Soutput}\n\\end{Schunk}\n\nTo find the proportion of people who were alive at the end of the\n20-year follow-up period, you can use a computational trick.  Convert\nthe \\VN{outcome} variable to \\texttt{TRUE} or \\texttt{FALSE} to\nindicate whether an individual is alive, then take the mean of the\ntrue/false values.  
R, like many computer languages, treats\n\\texttt{TRUE} as 1 and \\texttt{FALSE} as 0 for the purposes of doing arithmetic.\n\\begin{Schunk}\n\\begin{Sinput}\n> mean( outcome==\"Alive\", data=w )\n\\end{Sinput}\n\\begin{Soutput}\n[1] 0.719\n\\end{Soutput}\n\\end{Schunk}\n\nHere's the breakdown according to smoking status:\n\\begin{Schunk}\n\\begin{Sinput}\n> mean( outcome==\"Alive\" ~ smoker, data=w )\n\\end{Sinput}\n\\begin{Soutput}\n  smoker     S   N Missing\n1     No 0.686 732       0\n2    Yes 0.761 582       0\n\\end{Soutput}\n\\end{Schunk}\n\nA more meaningful question is whether smokers are different from\nnon-smokers when holding other variables constant, such as age.  To\naddress this question, you need to add age into the model.\n\nIt might be natural to consider\neach age --- 35, 36, 37, and so on --- as a separate group, but you\nwon't get very many members of each group.  And, likely, the data for\n35 year-olds has quite a lot to say about 36 year-olds, so it doesn't\nmake sense to treat them as completely separate groups.\n\nYou can use the \\function{cut} function to divide up a quantitative\nvariable into groups.  You get to specify the breaks between groups.\n\\begin{Schunk}\n\\begin{Sinput}\n> w$ageGroups = cut(w$age, breaks=c(0,30,40,53,64,75,100) )\n> mean( outcome==\"Alive\" ~ ageGroups, data=w )\n\\end{Sinput}\n\\begin{Soutput}\n  ageGroups     S   N Missing\n1    (0,30] 0.979 288       0\n2   (30,40] 0.948 252       0\n3   (40,53] 0.832 280       0\n4   (53,64] 0.625 251       0\n5   (64,75] 0.201 169       0\n6  (75,100] 0.000  74       0\n\\end{Soutput}\n\\begin{Sinput}\n> mean( outcome==\"Alive\" ~ smoker + ageGroups, data=w )\n\\end{Sinput}\n\\begin{Soutput}\n   smoker ageGroups     S   N Missing\n1      No    (0,30] 0.982 165       0\n2     Yes    (0,30] 0.976 123       0\n3      No   (30,40] 0.955 134       0\n4     Yes   (30,40] 0.941 118       0\n5      No   (40,53] 0.876 113       0\n6     Yes   (40,53] 0.802 167       0\n7      No   (53,64] 0.669 127       0\n8     Yes   (53,64] 0.581 124       0\n9      No   (64,75] 0.214 131       0\n10    Yes   (64,75] 0.158  38       0\n11     No  (75,100] 0.000  62       0\n12    Yes  (75,100] 0.000  12       0\n\\end{Soutput}\n\\end{Schunk}\n\nWith modeling techniques, to be introduced in later chapters, you can use\nquantitative variables without the need to divide them into groups.\n\n\\subsection{Model Values and Residuals}\n\n\nA group-wise model tells you a model value for each group, but often\nyou will need these in a case-by-case format: the model value for each\ncase in a data set.  The \\function{fitted} function carries out\nthis simple calculation, taking each case in turn, figuring out which\ngroup it belongs to, and then returning the set of model values for\nall the cases.  It requires two arguments: the group-wise model and\nthe data on which the model was based.  For example:\n\n\\index{P}{fitted}\n\\index{P}{fetchData}\n\n\\begin{Schunk}\n\\begin{Sinput}\n> kids = fetchData(\"kidsfeet.csv\")\n> mod = mean( width ~ sex, data=kids )\n> fitted( mod, kids )\n\\end{Sinput}\n\\begin{Soutput}\n [1] 9.19 9.19 9.19 9.19 9.19 9.19 9.19 8.78 8.78 9.19 9.19\n[12] 9.19 9.19 9.19 8.78 8.78 8.78 8.78 8.78 8.78 9.19 9.19\n[23] 8.78 8.78 8.78 9.19 8.78 9.19 9.19 9.19 8.78 8.78 8.78\n[34] 9.19 9.19 8.78 8.78 8.78 8.78\n\\end{Soutput}\n\\end{Schunk}\n\nThe residuals are found by subtracting the case-by-case model value\nfrom the actual values for each case.  
\n\\begin{Schunk}\n\\begin{Sinput}\n> res = kids$width - fitted(mod, kids)\n\\end{Sinput}\n\\end{Schunk}\nTake care to use the same quantitative variable (\\VN{width} in this case) from the\ndata as was used in constructing the model.\n\nThe \\function{var} function will calculate the variance:\n\\begin{Schunk}\n\\begin{Sinput}\n> var( kids$width )  # overall variation\n\\end{Sinput}\n\\begin{Soutput}\n var \n0.26 \n\\end{Soutput}\n\\begin{Sinput}\n> var( fitted(mod, kids) ) # variation in model values\n\\end{Sinput}\n\\begin{Soutput}\n   var \n0.0422 \n\\end{Soutput}\n\\begin{Sinput}\n> var( kids$width - fitted(mod, kids) ) # residual variation\n\\end{Sinput}\n\\begin{Soutput}\n  var \n0.217 \n\\end{Soutput}\n\\end{Schunk}\n\n", "meta": {"hexsha": "bfcad4c38672e4e14acfd1ef3548f7747d294cb7", "size": 7325, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ComputationalTechnique-Orig/SimpleModels/computer-simple-models.tex", "max_stars_repo_name": "dtkaplan/SM3", "max_stars_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-01T01:28:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T01:28:07.000Z", "max_issues_repo_path": "ComputationalTechnique-Orig/SimpleModels/computer-simple-models.tex", "max_issues_repo_name": "BriannaBarry/SM3", "max_issues_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ComputationalTechnique-Orig/SimpleModels/computer-simple-models.tex", "max_forks_repo_name": "BriannaBarry/SM3", "max_forks_repo_head_hexsha": "56fef8d4368e7afa7ccce006d8f4acc6cf6c1fd1", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-02-14T05:22:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-08T12:42:15.000Z", "avg_line_length": 27.8517110266, "max_line_length": 82, "alphanum_fraction": 0.675221843, "num_tokens": 2575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321842389469, "lm_q2_score": 0.7745833893685269, "lm_q1q2_score": 0.5516832192851525}}
{"text": "\\subsubsection{ParetoFrontier}\n\\label{ParetoFrontierPP}\nThe \\textbf{ParetoFrontier} PostProcessor is designed to identify the points lying on the Pareto Frontier in a multi-dimensional trade-space.\nThis post-processor receives as input a \\textbf{DataObject} (a PointSet only) which contains all data points in the trade-space space and it\nreturns the subset of points lying in the Pareto Frontier as a PointSet.\n\nIt is here assumed that each data point of the input PointSet is a realization of the system under consideration for a\nspecific configuration to which corresponds several objective variables (e.g., cost and value).\n\n%\n\\ppType{ParetoFrontier}{ParetoFrontier}\n%\n\\begin{itemize}\n  \\item   \\xmlNode{objective},\\xmlDesc{string, required parameter}, ID of the objective variable that represents a dimension of the trade-space space.\n          The \\xmlNode{costID} requires one identifying attribute:\n          \\begin{itemize}\n            \\item \\xmlAttr{goal}, \\xmlDesc{string, required field}, Goal of the objective variable characteristic: minimization (min) or maximization (max)\n            \\item \\xmlAttr{upperLimit}, \\xmlDesc{string, optional field}, Desired upper limit of the objective variable for the points in the Pareto frontier\n            \\item \\xmlAttr{lowerLimit}, \\xmlDesc{string, optional field}, Desired lower limit of the objective variable for the points in the Pareto frontier\n          \\end{itemize}\n\\end{itemize}\n\nThe following is an example where a set of realizations (the ``candidates'' PointSet) has been generated by changing two parameters\n(var1 and var2) which produced two output variables: cost (which it is desired to be minimized) and value (which it is desired to be maximized).\nThe \\textbf{ParetoFrontier} post-processor takes the ``candidates'' PointSet and populates a Point similar in structure\n(the ``paretoPoints'' PointSet).\n\n\\textbf{Example:}\n\\begin{lstlisting}[style=XML,morekeywords={anAttribute},caption=ParetoFrontier input example (no expand)., label=lst:ParetoFrontier_PP_InputExample]\n  <Models>\n    <PostProcessor name=\"paretoPP\" subType=\"ParetoFrontier\">\n      <objective goal='min' upperLimit='0.5'>cost</objective>\n      <objective goal='max' lowerLimit='0.5'>value</objective>\n    </PostProcessor>\n  </Models>\n\n  <Steps>\n    <PostProcess name=\"PP\">\n      <Input     class=\"DataObjects\"  type=\"PointSet\"        >candidates</Input>\n      <Model     class=\"Models\"       type=\"PostProcessor\"   >paretoPP</Model>\n      <Output    class=\"DataObjects\"  type=\"PointSet\"        >paretoPoints</Output>\n    </PostProcess>\n  </Steps>\n\n  <DataObjects>\n    <PointSet name=\"candidates\">\n      <Input>var1,var2</Input>\n      <Output>cost,value</Output>\n    </PointSet>\n    <PointSet name=\"paretoPoints\">\n      <Input>var1,var2</Input>\n      <Output>cost,value</Output>\n    </PointSet>\n  </DataObjects>\n\\end{lstlisting}\n\n\\nb it is possible to specify both upper and lower limits for each objective variable.\nWhen one or both of these limits are specified, then the Pareto frontier is filtered such that all Pareto frontier points that\nsatisfy those limits are preserved.\n", "meta": {"hexsha": "861b82d9d22f9c2d2b5d42bdfd01bf23fb05f838", "size": 3117, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/user_manual/PostProcessors/ParetoFrontier.tex", "max_stars_repo_name": "dgarrett622/raven", "max_stars_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", 
"max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/user_manual/PostProcessors/ParetoFrontier.tex", "max_issues_repo_name": "dgarrett622/raven", "max_issues_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/user_manual/PostProcessors/ParetoFrontier.tex", "max_forks_repo_name": "dgarrett622/raven", "max_forks_repo_head_hexsha": "f36cc108f7500b0e2717df4832b69b801b43960d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.95, "max_line_length": 157, "alphanum_fraction": 0.7369265319, "num_tokens": 778, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7745833841649233, "lm_q2_score": 0.712232184238947, "lm_q1q2_score": 0.5516832155789787}}
{"text": "\r\n\\subsection*{Name}\r\nseries -- generate an additive series of numbers\r\n\r\n\\subsection*{Usage}\r\n\r\n{\\bf series} start end [stepsize]\r\n\r\n\\subsection*{Description}\r\n\r\n{\\bf series} prints the real numbers from\r\n{\\bf start} to {\\bf end}, one per line.\r\n{\\bf series} begins with {\\bf start} \r\nto which {\\bf stepsize} is repeatedly added or subtracted,\r\nas appropriate, to approach, possibly meet, but not pass\r\n{\\bf end}.\r\n\r\nIf all arguments are integers, only integers are \r\nproduced in the output.\r\nThe {\\bf stepsize} must be nonzero; if it is not specified,\r\nit is assumed to be of unit size (1).\r\nIn all other cases,\r\n{\\bf series} prints an appropriate error message.\r\n\r\n\\subsection*{Example}\r\nTo count from 1 to 100:\r\n\\begin{verbatim}\r\n\tseries 1 100 \r\n\\end{verbatim}\r\nTo do the same, but backwards:\r\n\\begin{verbatim}\r\n\tseries 100 1\r\n\\end{verbatim}\r\n\r\n\\subsection*{Limitations}\r\n\r\nThe reported number of significant digits is limited.\r\nIf the ratio of the series range to the\r\n{\\bf stepsize} is too large, several numbers in a row will be equal. \r\n\r\nThe maximum length of a series is limited to the size of the\r\nmaximum long integer that can be represented on the machine in use.\r\nExceeding this value has undefined results.\r\n\r\n\\subsection*{Author}\r\n\r\nGary Perlman\r\n", "meta": {"hexsha": "63904756e647939e04edc5169d34bca0df5b9688", "size": 1263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_stars_repo_name": "chrisinmtown/defect-detect-expt", "max_stars_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_issues_repo_name": "chrisinmtown/defect-detect-expt", "max_issues_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_forks_repo_name": "chrisinmtown/defect-detect-expt", "max_forks_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3125, "max_line_length": 70, "alphanum_fraction": 0.7228820269, "num_tokens": 310, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7122321720225278, "lm_q2_score": 0.7745833737577158, "lm_q1q2_score": 0.5516831987039954}}
{"text": "% !TEX root = ../../main.tex\n\\subsection{Kramers-Moyal expansion of the Master equation}\n\nFor the most part master equations are hard to deal with beacause of their\nintegro-differential nature. A way around this issue is to simplify the master\nequation using Taylor expansions. This expansion goes by name of Kramers-Moyal\nexpansion, and the truncated version of the expansion is known in physics\nliterature as the Fokker-Planck equation, and in the probability theory\nliterature as the Kolmogorov forward equation. Regardless of the fancy names\nbehind these manipulations the idea is very straight-forward and very common in\nphysics in general: we take a complicated function and use its derivatives to\ncome up with an approximation that is good enough at least locally.\n\nTo perform the expansion we need to express the transition probabilities\n$\\phi_t(x \\mid x')$ in \\eref{eq_master_eq_full} as a function of the jump size\n$r \\equiv x - x'$ rather than as conditional probabilities. We therefore\nexpress the transition probabilities as\n\\begin{equation}\n  \\phi_t(x \\mid x') = \\phi_t(x'; r).\n  \\label{eq_trans_jump_size}\n\\end{equation}\nBy performing this change of variables for the integral in\n\\eref{eq_master_eq_full} we need to compute the Jacobian which is of the form\n\\begin{equation}\n  dr = dx' {dr \\over dx'} = -dx'.\n\\end{equation}\n\nFurthermore since the integration now is done over values of $r$ rather than\n$x'$ we must update the integration limits as well. We then have for the lower\nlimit $x' = 0$ that upon substitution on the definition of $r$ this gives\n\\begin{equation}\n  r(x' = 0) = x - 0 = x.\n\\end{equation}\nDoing the same for the upper limit of $x' = 1$ results in\n\\begin{equation}\n  r(x' = 1) = x - 1.\n\\end{equation}\nUsing these changes results in a master equation of the form\n\\begin{equation}\n  \\dt{P(x, t)} = \\int_f^{x - 1} \\underbrace{(-1)}_{\\text{Jacobian}} dr \\;\n  \\left[\n  \\phi_t(x - r; r) P(x - r, t) -\n  \\phi_t(x; -r) P(x, t)\n  \\right].\n\\end{equation}\nFor the second term inside the square brackets we wrote $\\phi_t(x; -r)$ due to\nthe change of reference point. Since the original term was $\\phi_t(x' \\mid x)$\nthat means that the jump took place from $x$ to $x'$ instead of the other way,\ntherefore the jump rather than being of size $r$ it must be of size $-r$ to\ncompensate for this change. Flipping the limits of integration because of the\nnegative sign that the Jacobian gave results in\n\\begin{equation}\n  \\dt{P(x, t)} = \\int^x_{x - 1} dr \\;\n  \\phi_t(x - r; r) P(x - r, t) -\n  \\int^x_{x - 1} dr \\;\n  \\phi_t(x; -r) P(x, t).\n  \\label{eq_master_eq_jump}\n\\end{equation}\nHaving showed what the integration limits \\textit{should look like} we come\nback to the issue mentioned before. Nowhere in the literature I have come\nacross there has been an explicit account for these limits. From Kimura to Rice\nto even Lassig, everyone seems to overlook what these integrations should be.\nIn an attempt to be more explicit and thorough, while still readable and\nintuitive I wanted to show the limits for the integral in the master equation.\nNevertheless these limits will have to be modify for our approximations as we\nwill see in a bit.\n\nAs mentioned before, dealing with \\eref{eq_master_eq_jump} is extremely\ncomplicated due to its integro-differential nature. To simplify things now we\nwill make use of one of the most handful techniques ever given to physics - the\nTaylor expansion. 
This expansion will allow us to cast the nasty\nintegro-differential equation into a partial differential equation of infinite\norder. This again is difficult to deal with, but we can always trade some\nprecision for simplicity by truncating the expansion at some arbitrary order\nthat is easier to handle. In particular, it is common practice to truncate\nexpansions at second order, giving in this case one of Motoo Kimura's most\nfundamental results, and the basis of all of diffusion theory in population\ngenetics. To perform the expansion of the master equation we have to assume the\nfollowing:\n\\begin{itemize}\n  \\item Changes in $x$ occur via very small jumps; i.e. the transition\n  probability $\\phi_t(x; r)$ is sharply peaked as a function of $r$, but at the\n  same time varies slowly enough with $x$. This is a reasonable assumption in\n  the limit of large population sizes since changes in the allele frequency\n  will be slow in this regime.\n  \\item The probability of an allele frequency $P(x, t)$ also varies slowly\n  with $x$. This smoothness requirement for the distribution is necessary since\n  the expansion requires computing derivatives.\n\\end{itemize}\nBoth of these assumptions are necessary for the expansion to be valid at least\nlocally around the point we expand at.\n\n\\subsubsection{Issues with the integration limits}\n\nAs mentioned before, the integration limits on \\eref{eq_master_eq_jump} must be\nmodified if we want to perform the Kramers-Moyal expansion. To show why this is\nthe case we will first attempt the expansion keeping the limits. Given the\nassumptions we listed, we can expand the first integrand in\n\\eref{eq_master_eq_jump} around $x$ in powers of $r$, obtaining\n\\begin{equation}\n  \\begin{split}\n    \\ddt{P(x, t)} &= \\int^x_{x - 1} dr \\;\n    \\left[\n    \\phi_t(x; r) P(x, t) -\n    {\\partial \\over \\partial x} \\left( \\phi_t(x; r) P(x, t) \\right) r \\right. \\\\\n    &+ \\left.\n    {1 \\over 2} {\\partial^2 \\over \\partial x^2}\n    \\left( \\phi_t(x; r) P(x, t) \\right) r^2 -\n    {1 \\over 3!} {\\partial^3 \\over \\partial x^3}\n    \\left( \\phi_t(x; r) P(x, t) \\right) r^3 + \\ldots\n    \\right] \\\\\n    &-\n    P(x, t) \\int_{x - 1}^x dr \\; \\phi_t(x; -r).\n  \\end{split}\n\\end{equation}\nIf we now distribute the integral and write the general form of the expansion\nwe obtain\n\\begin{equation}\n  \\begin{split}\n    \\ddt{P(x, t)} &= P(x, t)\\int^x_{x - 1} dr \\; \\phi_t(x; r) \\\\\n    &+\n    \\sum_{k=1}^\\infty {(-1)^k \\over k!} {\\partial^k \\over \\partial x^k}\n    \\left[\n    P(x, t) \\int_{x-1}^x dr \\; r^k \\phi_t(x; r)\n    \\right] \\\\\n    &-\n    P(x, t) \\int_{x - 1}^x dr \\; \\phi_t(x; -r).\n  \\end{split}\n\\end{equation}\nThe problem with this expansion is that all integrals of the form\n\\begin{equation}\n  \\Phi(x, k) = \\int_{x-1}^x dr \\; r^k \\phi_t(x; r),\n  \\text{ for } k \\in \\mathbb{N},\n\\end{equation}\nhave the wrong integration limits. Recall that we set these limits to cover the\nentire domain of values that the jump size could take. Before the expansion\nthese limits allowed $\\phi_t(x - r; r)$ to cover all jumps from any point\nbetween zero and one towards $x$; after the expansion this is not true anymore.\nFor example, if we were to substitute the lower integration limit into the\ntransition probability $\\phi_t(x; r = x-1)$, the resulting final position $x +\nx -1 = 2x -1$ would be outside of the range $[0, 1]$ for all $x < 1/2$. 
That is\nwhy the Kramers-Moyal expansion is generally done for cases in which the\nboundaries are irrelevant; for example, Einstein's Brownian motion problem\ndidn't have this issue since the integration was taken from $-\\infty$ to\n$\\infty$.\n\nSo how do we overcome this issue? The way around this difficulty is to\nset the integration limits at some irrelevant boundary. But in order to do so\nwe need to define our transition probability $\\phi_t(x; r)$ to be zero if the\njump falls outside of the range $[0, 1]$. In other words, we could define the\ntransition probability as a piecewise function\n\\begin{equation}\n  \\phi_t(x; r) =\n  \\begin{cases}\n    \\phi_t(x + r \\mid x)& \\text{for } (x + r), x \\in [0, 1] \\\\\n    0& \\text{otherwise}\n  \\end{cases}.\n\\end{equation}\nThis definition allows us to extend the integration limits of\n\\eref{eq_master_eq_jump} to run from $-\\infty$ to $\\infty$ while still covering\nall possible jumps. With this definition we then have\n\\begin{equation}\n  \\begin{split}\n    \\ddt{P(x, t)} &= P(x, t)\\int^\\infty_{- \\infty} dr \\; \\phi_t(x; r) \\\\\n    &+\n    \\sum_{k=1}^\\infty {(-1)^k \\over k!} {\\partial^k \\over \\partial x^k}\n    \\left[\n    P(x, t) \\int_{-\\infty}^\\infty dr \\; r^k \\phi_t(x; r)\n    \\right] \\\\\n    &-\n    P(x, t) \\int_{-\\infty}^\\infty dr \\; \\phi_t(x; -r).\n  \\end{split}\n\\end{equation}\nNotice that the first and third terms on the right-hand side of the equation\nintegrate the transition probability over all possible jumps. Because of the\nchange of integration limits we proposed, these two terms cancel each other and\nwe are left with\n\\begin{equation}\n  \\ddt{P(x, t)} = \\sum_{k=1}^\\infty {(-1)^k \\over k!}\n  {\\partial^k \\over \\partial x^k}\n  \\left[\n  P(x, t) \\int_{-\\infty}^\\infty dr \\; r^k \\phi_t(x; r)\n  \\right].\n  \\label{eq_almost_km}\n\\end{equation}\nLet us now define the moments of the jump size distribution $\\phi_t(x; r)$ to\nbe\n\\begin{equation}\n  a^{(k)}(x, t) \\equiv \\int_{-\\infty}^\\infty dr \\; r^k \\phi_t(x; r).\n  \\label{eq_jump_mom}\n\\end{equation}\nThen we can substitute this into \\eref{eq_almost_km} to finally obtain the\nKramers-Moyal expansion of the master equation\n\\begin{equation}\n  \\ddt{P(x, t)} = \\sum_{k=1}^\\infty {(-1)^k \\over k!}\n  {\\partial^k \\over \\partial x^k}\n  \\left[\n  P(x, t) a^{(k)}(x, t)\n  \\right].\n  \\label{eq_km_expansion}\n\\end{equation}\nNotice that in \\eref{eq_transition_short_time} we already used the zero$\\tth$\nmoment of the distribution; hence the notation $a^{(0)}(x, t)$.\n\nSo far we have converted the complicated integro-differential form of the\nmaster equation in \\eref{eq_master_eq_jump} into an equally difficult partial\ndifferential equation of infinite order. Therefore our ability to handle the\nequation has not yet improved. 
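To anticipate where this is headed: truncating \\eref{eq_km_expansion} at second\norder, i.e. keeping only the $k = 1$ and $k = 2$ terms (a step justified\ncarefully later), gives the Fokker-Planck form\n\\begin{equation}\n  \\ddt{P(x, t)} = -{\\partial \\over \\partial x}\n  \\left[ a^{(1)}(x, t) P(x, t) \\right]\n  + {1 \\over 2} {\\partial^2 \\over \\partial x^2}\n  \\left[ a^{(2)}(x, t) P(x, t) \\right].\n\\end{equation}\n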
In the next section we will deal with how to\nactually improve our chances of making sense of this by truncating the\nexpansion up to second order.\n", "meta": {"hexsha": "9efc32247cf158c6496c12ba40ade878d6a524e6", "size": 9591, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/book_draft/chapters/classic_diffusion/03_kramers_moyal.tex", "max_stars_repo_name": "mrazomej/stat_gen", "max_stars_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/book_draft/chapters/classic_diffusion/03_kramers_moyal.tex", "max_issues_repo_name": "mrazomej/stat_gen", "max_issues_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-03-05T00:17:26.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-05T00:17:26.000Z", "max_forks_repo_path": "doc/book_draft/chapters/classic_diffusion/03_kramers_moyal.tex", "max_forks_repo_name": "mrazomej/pop_gen", "max_forks_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.4549763033, "max_line_length": 80, "alphanum_fraction": 0.717547701, "num_tokens": 2832, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7279754489059774, "lm_q2_score": 0.7577943658046609, "lm_q1q2_score": 0.5516556936250685}}
{"text": "\\subsubsection{Global Convolutional Network}\nGlobal Convolutional Network (GCN) proposed by Peng et al. \n~\\cite{Peng2017} addressed the importance to have large kernels for both \nlocalisation and classification for semantic segmentation to enlarge \nrespective fields.\nHowever, a contradiction arises when performing classification and localisation \ntasks. \nFor instance, classification tasks require the models to be invariant for \ndifferent transformations such as rotation and translation.\nOn the other hand, localisation tasks require the models to be sensitive for \nany transformation, to accurately assign each pixel for its semantic category.\nAccordingly, to solve this contradiction, two design principles were suggested: \n\\begin{enumerate}\n\t\\item For the classification task, in order to improve the capability of \n\tthe model to handle different transformations, a large kernel size must be \n\tused to enable dense connections between feature maps and per-pixel \n\tclassifiers.\n\t\\item For localisation task, the model must be fully convolutional. \n\tAdditionally, fully connected or global pooling layers are not applied as \n\tthese layers will discard the localisation information. \n\\end{enumerate}\n\nFigure~\\ref{fig:gcn} presents the proposed GCN module for semantic segmentation \nutilised for delamination identification.\n\\begin{figure} [h!]\n\t\\begin{center}\n\t\t\\includegraphics[scale=1.0]{GCN.png}\n\t\\end{center}\n\t\\caption{Global Convolution Network whole architecture.} \n\t\\label{fig:gcn}\n\\end{figure}\nAs shown in the Fig.~\\ref{fig:gcn}, a residual network was utilised as a backbone for \nfeature maps extraction, the residual block is shown in \nFig.~\\ref{fig:res_gcn_br}a.\nAfter each residual block, a GCN block is inserted  \n(Fig.~\\ref{fig:res_gcn_br}b), which employs a combination of \\((1\\times \nk)\\)+\\((k\\times 1)\\) and \\((k\\times 1)\\)+\\((1\\times k)\\) convolutions which \nenables a dense connections within a large \\((k\\times k)\\) region in the \nfeature map.\nIn this work, we implemented the model with \\(k=7\\).\nThis is followed by a boundary refinement (BR) block shown in Fig.~\\ref{fig:res_gcn_br}c, which can be considered as an additional residual block to refine the predictions near the object boundaries ended up generating a lower resolution score map. 
\nFurthermore, the upsampling operation is applied recursively: each low-resolution \nscore map is upsampled and then concatenated with a higher-resolution one to \nproduce a new score map.\nThe deconvolution operation is repeated until the original image size is \nobtained.\n\\begin{figure} [h!]\n\t\\begin{center}\n\t\t\\includegraphics[scale=1.0]{res_gcn_br.png}\n\t\\end{center}\n\t\\caption{(a) Residual block, (b) Global Convolution Network block, (c) \n\t\tBoundary Refinement} \n\t\\label{fig:res_gcn_br}\n\\end{figure}\n\n\n", "meta": {"hexsha": "9538a850822683613b4665b1a6f76ab49f1bc6db", "size": 2758, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "reports/journal_papers/MSSP_2/GCN.tex", "max_stars_repo_name": "IFFM-PAS-MISD/aidd", "max_stars_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_stars_repo_licenses": ["RSA-MD"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-03T05:36:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-03T05:36:07.000Z", "max_issues_repo_path": "reports/journal_papers/MSSP_2/GCN.tex", "max_issues_repo_name": "IFFM-PAS-MISD/aidd", "max_issues_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_issues_repo_licenses": ["RSA-MD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/journal_papers/MSSP_2/GCN.tex", "max_forks_repo_name": "IFFM-PAS-MISD/aidd", "max_forks_repo_head_hexsha": "9fb0ad6d5e6d94531c34778a66127e5913a3830c", "max_forks_repo_licenses": ["RSA-MD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.3859649123, "max_line_length": 249, "alphanum_fraction": 0.7831762146, "num_tokens": 681, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.757794360334681, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.551655685170805}}
{"text": "\\documentclass{yellowpaper}\n\\usepackage{graphicx}\n\\usepackage{balance}  % for  \\balance command ON LAST PAGE  (only there!)\n\\usepackage{pgf-pie}\n\\usepackage{tikz}\n\\usetikzlibrary{positioning,shadows,arrows}\n\n\\begin{document}\n\n\\title{Emotiq Yellowpaper}\n\n\\numberofauthors{1} \n\n\\author{\n\\alignauthor\nThe team\\\\\n       %\\affaddr{Emotiq AG}\\\\\n       \\email{info@emotiq.ch}\n}\n\\date{22 March 2018}\n\n\\maketitle\n\n\\section{UTX Transfer Proof}\nWhen a transaction is submitted with cloaked values, the use of Pedersen commitments enables transfer of tokens to other parties. But those recipients must then be able to use them in future transactions that they initiate. \n\nProof of ownership is shown by evidence that they know the cloaking and hiding terms in the commitments that originally referred to their tokens. That information must have been transmitted to them through a secret side channel from the originator of the output UTX transferred to them.\n\nRecall that a Pedersen commitment with cloaking refers to a commitment that looks like this:\n$$ C = \\gamma \\, A + (k_{rand} + x) \\, B$$\nWhere $A$ and $B$ are independent generators from group $G_1$, $\\gamma$ is a hiding factor for the commitment, $k_{rand}$ is a cloaking factor to protect knowledge of transaction amount $x$. \n\nThis commitment is always accompanied by terms \n $$L = (k_{rand} + x) \\, A$$\n and \n$$R = \\gamma \\, B$$\n  for use in verification. \n  \n  For a Fiat-Shamir challenge, computed as the hash of the public transcript:\n$$z = Z_r(H(A, B, C, L, R))$$\nthe verifier computes a new generator in $G_1$ as\n$$G = \\frac{1}{z} \\, A + z\\, B$$\nand then verifies that\n$$ \\alpha \\, G = \\frac{1}{z^2} \\, L + C + z^2\\, R$$\nwhere $\\alpha$ was transmitted to verifier in response to challenge $z$ as\n$$\\alpha = z \\, \\gamma + \\frac{1}{z} (k_{rand} + x)$$\n\nThis proves that the commitment quantities $\\gamma$ and $(k_{rand} + x)$ were known by the committer, without having to reveal the actual values involved.\n\nNow when $x$ is transmitted via secret channel to the new owner, we must also transmit the values of $\\gamma$ and $k_{rand}$ to them so that they can utilize this output commitment in their own future transaction proofs. Only the person with full knowledge of $\\gamma$, $k_{rand}$, and $x$ can produce verifiable proofs on their values when used again. \n\nIn effect, it is the value of this output commitment which ties the output UTX to its use as input in a future transaction, and the output commitment should be repeated in input position in that new transaction. That provides a record of transfer.\n\nThere is no need for identification with public keys in this process. Nothing needs to tie a commitment to any particular key. It is sufficient to prove knowledge of the commitment quantities, which only the owner of the UTX could possibly know.\n\nHence complete anonymity is provided in the public ledger. The only person needing knowledge of the public keys of recipients is the initiator of the transaction, for purposes of secret channel communications regarding the commitment items.\n\nA transaction need not even be signed with keyed signatures, since it either validates properly through all the cryptographic validation proofs, or it doesn't. 
There is no requirement in transfer of ownership to show public information about who originated the transaction, and who now owns output UTX tokens.\n\nThis further implies that we should choose generators $A$ and $B$, in $G_1$, once and for all to use, so that prior commitment values, $C$, can be used in a chain of transactions. There should be no need to select different independent generators. If new generators are ever utilized, then we need additional proofs that can verify the chain of transactions through the change of generators. It is simpler just to use previous, already publicly known, values for $A$ and $B$ throughout the entire chain.\n\n\\subsection{Protection of UTX Owner}\nWith the above arrangement, two people know everything they need to know to support spending proofs on a newly created UTX -- the creator of the UTX output, and the new owner. \n\nIn order to prevent a dishonest creator from turning around and spending the output UTX for themselves, the output UTX must be declared as directed at some new owner. We need not identify them in the public record, but some public key needs to be made up to support the output, and we publish the hash of the output commitment, $C$, and public key, $P$, as $h = H(C, P)$.\n\nThen when the UTX is next spent, the input UTX is presented as a tuple $(h, C, P, Sig(h, P))$, where $Sig(h, P)$ is a valid signature on $h$, corresponding to public key $P$.\n\nValue $h$ could serve as a unique identifier for the UTX, and double spending can be prevented by allowing $h$ to be used as an input UTX in an otherwise valid transaction only once. Everyone can validate the spending attempt as unique, verify that $h = H(C, P)$, and verify the validity of signature $Sig(h, P)$.\n\nThe public key used in the transactions, $P$, can be freshly created and need not correspond to any known public key for the recipient. That preserves a measure of anonymity in these transactions.\n\nSo a full transaction record must consist of an input $UTX_{in}$ and one or more $UTX_{out}$ where\n$$UTX_{in} = (h, C, P, Sig(h, P))$$\n$$UTX_{out, i} = (C_i, H(C_i, P_i))$$\n$$Trans = (UTX_{in}, UTX_{out,1}, UTX_{out,2}, ...)$$\n\nThe transaction must also include Pedersen commitment proofs on all the quantities involved, along with range proofs on each.\n\nIn order for recipients to locate their new UTX, we can store them, along with encrypted vital quantities, in the blockchain blocks, tagged by the hash of the public key used in forming the directed commitments. Only the recipient knows what to look for. The actual public key is shown only once, at the start of a new spend transaction involving that UTX.\n\n\\subsection{Bulletproofs}\n\nBulletproofs are used to provide range checks on token amounts involved in transactions. Every token amount mentioned in a transaction is accompanied by its own Bulletproof. \n\nA Bulletproof includes an uncloaked Pedersen commitment \n$$C = \\gamma \\, A + x \\, B$$\n as one of its items, along with a recursively structured short proof of every bit value in the binary encoding of amount $x$. The $\\gamma$ blinding factor is randomly generated for each commitment.\n\nIn order to produce a verifiable transaction, we take all of those Pedersen commitments, add them all up for the input side, and subtract all of the output-side UTXO commitments. Then we add a correction factor in the $A$ curve to show a well-recognized encoding of zero amount. 
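\n\nThe next few paragraphs make this precise. As a toy numerical check of the same bookkeeping (our illustration only, not Emotiq code): a group element $u \\, A + v \\, B$ is modeled below as the coefficient pair $(u, v)$, whereas a real implementation would use elliptic-curve points, which hide $u$ and $v$.\n\\begin{verbatim}\nimport random\n\ndef commit(gamma, x):     # Pedersen commitment gamma*A + x*B as a pair\n    return (gamma, x)\n\n# Two inputs and two outputs whose amounts balance (100 in, 100 out).\nins  = [commit(random.randrange(2**64), 40),\n        commit(random.randrange(2**64), 60)]\nouts = [commit(random.randrange(2**64), 30),\n        commit(random.randrange(2**64), 70)]\n\nu = sum(c[0] for c in ins) - sum(c[0] for c in outs)  # A component\nv = sum(c[1] for c in ins) - sum(c[1] for c in outs)  # B component: zero\n\ngamma_corr = sum(c[0] for c in outs) - sum(c[0] for c in ins)\nu += gamma_corr           # the correction acts only on the A component\n\nassert (u, v) == (0, 0)   # the identity element ("infinity")\n\\end{verbatim}\n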
\n\nIf the sum of the input amounts equals the sum of the output amounts, this process will produce a zero $B$ value. But the sum and difference of the $\\gamma$ factors in each Pedersen commitment will likely not be zero, since they are randomly generated. So to that sum and difference of UTX commitments, we must add a correction factor that zeros out only the $A$ component, and carry that correction factor along in the transaction, so that anyone else can verify that the transaction sums to zero.\n\nIf we set the correction factor\n\n$$\\gamma_{corr} = \\sum_{outs} \\gamma_{out} - \\sum_{ins} \\gamma_{in}$$\n\nthen we have, for any valid transaction,\n\n$$\n\\begin{align}\nC_{trans} &= \\sum_{ins} C_{in} - \\sum_{outs} C_{out} + \\gamma_{corr} \\, A \\\\\n&= 0 \\, A + 0 \\, B \\\\\n&= \\infty\n\\end{align}\n$$\n\nWhen someone wishes to spend a UTXO arising from this process, they need to produce a signature on the hash of their public key and the Pedersen commitment for that UTXO. Only the recipient can do this, since a signature requires the corresponding private key. \n\nBut more than that, they need to start with the same Pedersen commitment value that the UTXO had when it was created. And they need to know the $\\gamma$ value used in that commitment so that they too can produce a correction factor in the $A$ curve for the overall new transaction.\n\nSince the correction factor applies only to the $A$ curve, and there is no known relationship between $A$ and $B$, and likewise, there is no known relation between the amounts $x$ and the $\\gamma$ factors used in each Pedersen commitment, this suffices as proof that the transaction is valid. \n\nWe need to know the value $\\gamma$ used when the UTXO Pedersen commitment was created, in order to be able to compute the overall correction factor $\\gamma_{corr}$ for the transaction we are creating to spend that UTX. \n\nKnowing only the value of the Pedersen commitment, which anyone can see in plain sight, is not sufficient. 
And knowing only that commitment value tells you nothing about the $\\gamma$ value that was used, even if you know the amount $x$ involved in that UTXO.\n\n\\end{document}\n\n", "meta": {"hexsha": "dc0ac4161ef1dd5eb6fd3b17e6928d8f05d7de52", "size": 8777, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/Crypto-writeup/UTX-Transfer.tex", "max_stars_repo_name": "easye/emotiq", "max_stars_repo_head_hexsha": "8f765e516704496011299eee4a2eba6f70170a34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 109, "max_stars_repo_stars_event_min_datetime": "2018-01-17T21:59:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T10:05:45.000Z", "max_issues_repo_path": "src/Crypto-writeup/UTX-Transfer.tex", "max_issues_repo_name": "easye/emotiq", "max_issues_repo_head_hexsha": "8f765e516704496011299eee4a2eba6f70170a34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 227, "max_issues_repo_issues_event_min_datetime": "2018-02-13T10:15:07.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-17T11:54:39.000Z", "max_forks_repo_path": "src/Crypto-writeup/UTX-Transfer.tex", "max_forks_repo_name": "easye/emotiq", "max_forks_repo_head_hexsha": "8f765e516704496011299eee4a2eba6f70170a34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2018-01-28T16:29:39.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-29T09:10:30.000Z", "avg_line_length": 73.1416666667, "max_line_length": 503, "alphanum_fraction": 0.7589153469, "num_tokens": 2114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245828938678, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.5515966044256059}}
{"text": "\\documentclass[12pt, letterpaper, twoside]{article}\n\\usepackage{nopageno,epsfig, amsmath, amssymb}\n\\usepackage{physics}\n\\usepackage{mathtools}\n\\usepackage{hyperref}\n\\usepackage{xcolor}\n\\hypersetup{\n    colorlinks,\n    linkcolor={blue},\n    citecolor={blue},\n    urlcolor={blue}\n}\n\\usepackage{empheq}\n\n\\usepackage[letterpaper,\n            margin=0.8in]{geometry}\n\n\\title{working out kick coordinates}\n\\author{\\textbf{Tom Wagg}}\n\n\\newcommand{\\question}[1]{{\\noindent \\it #1}}\n\\newcommand{\\answer}[1]{\n    \\par\\noindent\\rule{\\textwidth}{0.4pt}#1\\vspace{0.5cm}\n}\n\\newcommand{\\todo}[1]{{\\color{red}\\begin{center}TODO: #1\\end{center}}}\n\n% custom function for adding units\n\\makeatletter\n\\newcommand{\\unit}[1]{%\n    \\,\\mathrm{#1}\\checknextarg}\n\\newcommand{\\checknextarg}{\\@ifnextchar\\bgroup{\\gobblenextarg}{}}\n\\newcommand{\\gobblenextarg}[1]{\\,\\mathrm{#1}\\@ifnextchar\\bgroup{\\gobblenextarg}{}}\n\\makeatother\n\n\\newcommand{\\avg}[1]{\\left\\langle #1 \\right\\rangle}\n\\newcommand{\\angstrom}{\\mbox{\\normalfont\\AA}}\n\\allowdisplaybreaks\n\n\\begin{document}\n\n\\section*{kick thoughts}\n\nThere's no way I'm going to get this right if I don't write it down haha. Okay so starting from COSMIC we are given the change in velocities due to kicks in the same coordinates system as BSE, which looks like this\n\\begin{center}\n    \\includegraphics[width=0.5\\textwidth]{static/bse_coordinates.png}\n\\end{center}\nwhich means that the axes are defined as\n\\begin{itemize}\n    \\item \\textbf{$x$-axis}: positive $x$ direction points in the direction of motion of the star that is \\textit{not} going supernova\n    \\item \\textbf{$y$-axis}: positive $y$ direction points from the non-supernova star to the supernova star\n    \\item \\textbf{$x$-axis}: positive $z$ direction points in the direction of angular momentum vector of the system\n\\end{itemize}\n\nWe are given by COSMIC the change in systemic velocity, which I'm just going to call $\\va{v}_{\\rm kick}$ due to the kick in this coordinate system. This seems to account for the Blauuw kick and natal kick \\textbf{but doesn't take the orbital motion into consideration}. Therefore, we need to add on the orbital motion of the supernova star $\\va{v}_{\\rm orb}$. From the definition of the coordinate system, this vector points along the negative $x$ axis, such that\n\\begin{equation}\n    \\va{v}_{\\rm kick, eff} = \\mqty[v_{\\rm kick, x} - v_{\\rm orb} \\\\ v_{\\rm kick, y} \\\\ v_{\\rm kick, z}]\n\\end{equation}\nThe orbital speed is calculated just as follows if we assume that the orbit is circular\n\\begin{equation}\n    v_{\\rm orb}^2 = \\frac{G (M_1 + M_2)}{a}\n\\end{equation}\nNow we just need to move this coordinate system into the Galactocentric frame so that we can add it to the system velocity, $v_{\\rm sys}$. I \\textit{think} that we consider this as basically a rotation about the $z$ and $x$ axis due to the random orbital phase and inclination to the Galactic plane respectively. Let's call these angles $\\theta$ and $\\phi$. 
In this case we can make a rotation matrix\n\\begin{align}\n    \\mathcal{R} &= \\mqty[ \\cos \\theta & - \\sin \\theta & 0 \\\\ \\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1 ] \\mqty[ 1 & 0 & 0 \\\\ 0 & \\cos \\phi & - \\sin \\phi \\\\ 0 & \\sin \\phi & \\cos \\phi ] \\\\\n                &= \\left[\n                    \\begin{array}{ccc}\n                     \\cos \\theta & - \\sin \\theta \\cos \\phi & \\sin \\theta \\sin \\phi \\\\\n                     \\sin \\theta & \\cos \\theta \\cos \\phi & -\\cos \\theta \\sin \\phi \\\\\n                     0 & \\sin \\phi & \\cos \\phi \\\\\n                    \\end{array}\n                    \\right]\n\\end{align}\nWhich means that the final systemic velocity should be\n\\begin{equation}\n    v_{\\rm sys, final} = \\mqty[v_{\\rm sys, R} \\\\ v_{\\rm sys, T} \\\\ v_{\\rm sys, z}] + \\mathcal{R} \\cdot \\va{v}_{\\rm kick, eff},\n\\end{equation}\nwhere\n\\begin{equation}\n    \\mathcal{R} \\cdot \\va{v}_{\\rm kick, eff} = \\left(\n        \\begin{array}{c}\n         \\cos \\theta (v_{\\rm kick, x}-v_{\\rm orb})-v_{\\rm kick, y} \\sin \\theta \\cos \\phi\n           +v_{\\rm kick, z} \\sin \\theta \\sin \\phi \\\\\n         \\sin \\theta (v_{\\rm kick, x}-v_{\\rm orb})+v_{\\rm kick, y} \\cos \\theta \\cos \\phi\n           -v_{\\rm kick, z} \\cos \\theta \\sin \\phi \\\\\n         v_{\\rm kick, y} \\sin \\phi+v_{\\rm kick, z} \\cos \\phi \\\\\n        \\end{array}\n        \\right)\n\\end{equation}\n\n\\end{document}\n\n ", "meta": {"hexsha": "e714ff254fa78825c8224171dc80cc54a9b87fa3", "size": 4266, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/kick_thoughts.tex", "max_stars_repo_name": "TomWagg/microlensing-kicks", "max_stars_repo_head_hexsha": "41f22c38586479f7dbec828a2710c589f7d739ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-23T16:19:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T16:19:24.000Z", "max_issues_repo_path": "src/kick_thoughts.tex", "max_issues_repo_name": "TomWagg/microlensing-kicks", "max_issues_repo_head_hexsha": "41f22c38586479f7dbec828a2710c589f7d739ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/kick_thoughts.tex", "max_forks_repo_name": "TomWagg/microlensing-kicks", "max_forks_repo_head_hexsha": "41f22c38586479f7dbec828a2710c589f7d739ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-08T14:54:23.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-08T14:54:23.000Z", "avg_line_length": 46.3695652174, "max_line_length": 463, "alphanum_fraction": 0.655180497, "num_tokens": 1358, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8333245911726382, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.5515966043479108}}
{"text": "\\documentclass[8pt,oneside]{extarticle}\n\n%\\usepackage{subfigure}\n\\usepackage{subcaption}\n\\usepackage{tabularx}\n\\usepackage{graphicx}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{hyperref}\n\\usepackage{adjustbox}\n\\usepackage{listings}\n\\usepackage{optidef}\n\\usepackage{cleveref}\n\\usepackage{threeparttable}\n\\usepackage{xcolor}\n\\usepackage{titlesec}\n\\usepackage{enumitem}\n\\usepackage{mathrsfs}\n\\usepackage[driver=pdftex]{geometry}\n\\usepackage{import}\n%\\usepackage{titleformat{\\section\n%        {\\normalfont\\normalzie\\bfseries}{Helo.}{1em}{}\n\n\n\\definecolor{codegreen}{rgb}{0,0.6,0}\n\\definecolor{codegray}{rgb}{0.5,0.5,0.5}\n\\definecolor{codepurple}{rgb}{0.58,0,0.82}\n\\definecolor{backcolour}{rgb}{0.95,0.95,0.92}\n\n\\crefname{table}{table}{table}\n\\setlength{\\parindent}{0em}\n\\setlength{\\parskip}{0.7em}\n\n\\counterwithin{table}{section}\n \n\\lstdefinestyle{mystyle}{\n    backgroundcolor=\\color{backcolour},   \n    commentstyle=\\color{codegreen},\n    keywordstyle=\\color{magenta},\n    numberstyle=\\tiny\\color{codegray},\n    stringstyle=\\color{codepurple},\n    basicstyle=\\ttfamily\\footnotesize,\n    breakatwhitespace=false,         \n    breaklines=true,                 \n    captionpos=b,                    \n    keepspaces=true,                 \n    numbers=left,                    \n    numbersep=5pt,                  \n    showspaces=false,                \n    showstringspaces=false,\n    showtabs=false,                  \n    tabsize=2\n}\n\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{definition}{Definition}\n\\newtheorem{proof}{Proof}\n \n\\lstset{style=mystyle}\n\n%\\usepackage[margin=0.5in]{geometry}\n\\usepackage{inputenc}\n\n\\newcommand{\\Real}{\\mathbb{R}}\n\\newcommand{\\Int}{\\mathbb{Z}}\n\\newcommand{\\Nat}{\\mathbb{N}}\n\\newcommand{\\Complex}{\\mathbb{C}}\n\\newcommand{\\vect}[1]{\\boldsymbol{#1}}\n\n%\\renewcommand{\\TPTminimum}{\\textwidth}\n\n\\renewcommand{\\Re}[1]{\\mathfrak{Re}\\left\\lbrace{#1}\\right\\rbrace}\n\\renewcommand{\\Im}[1]{\\mathfrak{Im}\\left\\lbrace{#1}\\right\\rbrace}\n\n%\\DeclareMathOperator*{\\minimize}{minimize}\n%\\DeclareMathOperator*{\\maximize}{maximize}\n\n\\title{{\\bf MATH 3172 3.0\\\\ Combinatorial Optimization}\\\\\\vspace{10pt} \\large Workshop 5     \n    \\author{Jacques Nel}\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\thispagestyle{empty}\n\n\\newpage\n\n\\pagenumbering{arabic}\n\n\\section*{A quick note about revision:}\n\nThis is the $3^\\mathrm{rd}$ revision of this assignment.\n\n\\newpage\n\n\n\\section{A-4 Cane sugar production}\n\nThis problem is taken from\\footnote{C. Gu\u00e9ret, C. Prins, M. Sevaux, \\textit{Applications of optimization with Xpress-MP}. %\nParis: Dash Optimization Ltd., 2007. Page 74.}.\n\n\\subsection{Parameters}\n\nLet $W = \\left\\lbrace 1,\\ldots, m\\right\\rbrace$ enumerate $m=11$ wagons or lots, and \n$S=\\left\\lbrace 1, \\ldots, n\\right\\rbrace$ enumerate $n$ time slots. The refinery has $k=3$ equivalent processing lines.\n$n$ timeslots are required to process $m$ wagons where $n$ is given by $n=\\texttt{ceil}(m/k)=4$. 
\nEach lot $w\\in W$ has an associated hourly loss $\\Delta_w$ and remaining lifespan $l_w$ until total loss.\nFurthermore, a single lot takes $d=2$ hours to process on any given production line.\n\n\\subsection{Decision variable}\n\nLet $\\vect{x} = \\left[ x_{ws}\\right] \\in \\left\\lbrace 0, 1\\right\\rbrace^{m\\times n}$\nwhere for $w\\in W$ and $s\\in S$,\n\n$$x_{ws} = \\begin{cases}\n    1 & \\text{lot }w\\text{ is processed in slot }s\\\\\n    0 & \\text{otherwise}\n\\end{cases}.\n$$\n\\subsection{Model}\n\nWe seek to minimize the loss in raw material resulting from fermentation due\nto delayed processing of a lot. The model is\n\n\\begin{mini!}\n    {\\vect{x}}{f\\left(\\vect{x}\\right)=\\sum_{w\\in W}\\sum_{s\\in S} sd\\Delta_{w}x_{ws} \\protect\\label{eq:a4-obj}}{\\label{eq:a4}}{}\n    \\addConstraint{\\sum_{s\\in S}x_{ws}}{=1, \\forall w\\in W \\protect\\label{eq:a4-cstr1}}\n    \\addConstraint{\\sum_{w\\in W}x_{ws}}{\\leq k, \\forall s\\in S \\protect\\label{eq:a4-cstr2}}\n    \\addConstraint{\\sum_{s\\in S}sx_{ws}}{\\leq l_w / d, \\forall w\\in W \\protect\\label{eq:a4-cstr3}}\n\\end{mini!}\n\nThe objective function \\cref{eq:a4-obj} is the total loss in raw material resulting from delayed processing\nsummed over all lots and slots. All lots must be assigned to exactly one slot as enforced by \\cref{eq:a4-cstr1}.\nNext, \\cref{eq:a4-cstr2} guarantees that at most $k=3$ lots can be processed in any one time slot. Finally,\n\\cref{eq:a4-cstr3} ensures that a lot is processed before its total loss occurs. Observe that total loss of a lot\noccurs after $l_w / d$ slots.\n\n\\newpage\n\n\\subsection{Results}\n\nThe optimal solution results in a loss of $f\\left(\\vect{x}^*\\right) = 1620$ kg with the following\ntime slot assignments:\n\n\\begin{table}[h]\n    \\centering\n    \\caption{Optimal time slot allocations for each lot}\\label{table:a4-results}\n    \\begin{tabular}{cccc}\n        \\hline\n        \\textbf{Slot 1} & \\textbf{Slot 2} & \\textbf{Slot 3} & \\textbf{Slot 4} \\\\\n        \\hline\n        lot 3   & lot 1 & lot 10 & lot 2 \\\\\n        lot 6   & lot 5 & lot 8 & lot 4 \\\\\n        lot 7   & lot 8 & &\\\\\n        \\hline\n\n    \\end{tabular}\n\n    \\medskip\n    \\emph{Note:} Column $j$ lists the set $\\left\\lbrace w \\in \\mathrm{W} : x_{wj} = 1 \\right\\rbrace$.\n\\end{table}\n\n\\section{A-6 Production of electricity}\n\nThis problem is taken from\\footnote{C. Gu\u00e9ret, C. Prins, M. Sevaux, \\textit{Applications of optimization with Xpress-MP}. %\nParis: Dash Optimization Ltd., 2007. Page 78.}.\n\n\\subsection{Parameters}\n\nLet $T=\\lbrace 1,\\ldots, n\\rbrace$ enumerate $n=7$ time periods (of varying length) and $P=\\lbrace 1,\\ldots, m\\rbrace$ enumerate\n$m=4$ generator types. 
For a time period $t\\in T$ let $l_t$\ndenote the length of the time period in hours, and\nlet $d_t$ denote the forecasted power demand.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Length and forecasted demand of time periods}\\label{table:a6-periods}\n    \\begin{tabular}{c c c}\n        \\hline\n        \\textbf{Period} $t$ & \\textbf{Length} $l_t$ & \\textbf{Demand} $d_t$ \\\\\n        \\hline\n        1 & 6 & $1.2 \\times 10^4$ \\\\\n        2 & 3 & $3.2 \\times 10^4$ \\\\\n        3 & 3 & $2.5 \\times 10^4$ \\\\\n        4 & 2 & $3.6 \\times 10^4$ \\\\\n        5 & 4 & $2.5 \\times 10^4$ \\\\\n        6 & 4 & $3.0 \\times 10^4$ \\\\\n        7 & 2 & $1.8 \\times 10^4$ \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\nFor each generator type $p\\in P$, there are $a_p$ units available. Each unit has\na minimum base power output $\\theta_p$ (if it is running) and can scale up to a\nmaximum output denoted $\\psi_p$, at an additional operating cost.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Number of available units and power output capacity}\\label{table:a6-generators}\n    \\begin{tabular}{cccc}\n        \\hline\n        \\textbf{Type} & \\textbf{Num. available} $a_p$ & \\textbf{Min. output} $\\theta_p$ & \\textbf{Max. output} $\\psi_p$ \\\\\n        \\hline\n        1 & 10 & $7.5\\times 10^2$ & $1.75\\times 10^3$ \\\\\n        2 & 4 & $1.0\\times 10^3$ & $1.5\\times 10^3$ \\\\\n        3 & 8 & $1.2\\times 10^3$ & $2.0\\times 10^3$ \\\\\n        4 & 3 & $1.8\\times 10^3$ & $3.5\\times 10^3$ \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\newpage\n\nStarting a generator unit of type $p\\in P$ incurs a startup cost $\\lambda_p$. Running the generator\nincurs a fixed cost per hour $\\mu_p$. Additional scalable output, on top of\nthe base output, incurs an hourly cost $\\nu_p$ that is proportional to the additional output.\n\n\\begin{figure}[h]\n    \\protect\\label{fig:a6-costs}\n    \\center\n    \\caption{Various costs associated with generator types}\n    \\begin{tabular}{cccc}\n        \\hline\n        \\textbf{Type} & \\textbf{Start cost} $\\lambda_p$ & \\textbf{Run cost} $\\mu_p$ & \\textbf{Add. cost} $\\nu_p$ \\\\\n        \\hline\n        1 & 5000 & 2250 & 2.7 \\\\\n        2 & 1600 & 1800 & 2.2 \\\\\n        3 & 2400 & 3750 & 1.8 \\\\\n        4 & 1200 & 4800 & 3.8 \\\\\n        \\hline\n    \\end{tabular}\n\\end{figure}\n\n\n\\subsection{Decision variables}\n\nSuppose $p\\in P$ and $t\\in T$. Let\n$0 \\leq x_{pt} \\in\\Int$ denote the number of generators of type $p$ started in\nperiod $t$, and\n$0 \\leq y_{pt} \\in\\Int$ be the number of generators of type $p$ running in\nperiod $t$. Finally, let $0 \\leq z_{pt}\\in\\Real$ denote the total additional power generated by units of type $p$ during\nperiod $t$. 
\n\nTo simplify notation, let $\\vect{x} = \\left[x_{pt}\\right] \\in\n\\Int^{m\\times n}, \\vect{y} = \\left[y_{pt}\\right] \\in \\Int^{m\\times n}$,\nand $\\vect{z} = \\left[z_{pt}\\right] \\in \\Real^{m\\times n}$.\n\n\\subsection{Model}\n\n\\begin{mini!}\n    {\\vect{x}, \\vect{y}, \\vect{z}}{\\sum_{t\\in T}\\sum_{p\\in P} \\lambda_{p}x_{pt} + l_t\\left(\\mu_p y_{pt} + \\nu_p z_{pt}\\right) \\protect\\label{eq:a6-obj}}{\\label{eq:a6}}{}\n    \\addConstraint{x_{p1}}{\\geq y_{p1} - y_{pn}, \\forall p\\in P \\protect\\label{eq:a6-cstr1}}\n    \\addConstraint{x_{pt}}{\\geq y_{pt} - y_{p(t-1)}, \\forall p\\in P, 1 < t\\in T \\protect\\label{eq:a6-cstr2}}\n    \\addConstraint{z_{pt}}{\\leq \\left(\\psi_p - \\theta_p\\right)y_{pt}, \\forall (p, t)\\in P\\times T \\protect\\label{eq:a6-cstr3}}\n    \\addConstraint{\\sum_{p\\in P} \\theta_p y_{pt} + z_{pt}}{\\geq d_t, \\forall t\\in T \\protect\\label{eq:a6-cstr4}}\n    \\addConstraint{\\sum_{p\\in P} \\psi_p y_{pt}}{\\geq 1.2 d_t, \\forall t\\in T \\protect\\label{eq:a6-cstr5}}\n    \\addConstraint{y_{pt}}{\\leq a_p, \\forall (p,t)\\in P\\times T \\protect\\label{eq:a6-cstr6}}\n    \\addConstraint{x_{pt}}{\\geq 0, \\forall (p,t)\\in P\\times T \\protect\\label{eq:a6-cstr7}}\n    \\addConstraint{y_{pt}}{\\geq 0, \\forall (p,t)\\in P\\times T \\protect\\label{eq:a6-cstr8}}\n    \\addConstraint{z_{pt}}{\\geq 0, \\forall (p,t)\\in P\\times T \\protect\\label{eq:a6-cstr9}}\n\\end{mini!}\n\nThe cost function \\cref{eq:a6-obj} is simply the startup cost, running cost, and\nadditional power cost summed over all decision variables. The number of generators started\nfor $t=1$ is related to the number of generators running by \\cref{eq:a6-cstr1}. This relationship\ndepends on the numbers generators running at the end period $t=n$. The next family of \nconstraints \\cref{eq:a6-cstr2} is similar to the above, but deals with the relationship\nfor $1< t \\leq n$.\n\nAdditional power output $z_{pt}$ is bounded by the difference between the maximum and\nthe base output, ie. $\\psi_p - \\theta_p$, as expressed\nby \\cref{eq:a6-cstr3}. Next, \\cref{eq:a6-cstr4} ensures that the total ouput of all\ngenerator units meets forecasted demand $d_t$ for all $t\\in T$. Furthermore, a $20\\%$\nsafety buffer is required at all times. \\Cref{eq:a6-cstr5} ensures that, if demand\nwere to suddenly spike, a minimum of $20\\%$ of $d_t$ of additional capacity can instantly be \nmade available, by increasing additional output $z_{pt}$ up its maximum $\\psi_p-\\theta_p$.\n\nThe family of constraints \\cref{eq:a6-cstr6} simply places an upper bound on the number\nof units running, in a given period, equal to the given available number of units\n$a_p$ of each type. Finally, \\cref{eq:a6-cstr7}, \\cref{eq:a6-cstr8}, and\n\\cref{eq:a6-cstr9} simply enforce the canonical non-negativity of $x_{pt}$, $y_{pt}$, and $z_{pt}$\nrespectively.\n\n\\subsection{Results}\n\nThe optimal solution was found with a total operating cost of\n$f\\left(\\vect{x}^*, \\vect{y}^*, \\vect{z}^*\\right) = \\$1,456,810$.\n\n\\begin{table}[ht]\n    \\centering\n    \\caption{Optimal power generation schedule for 4 generator types over 7 planning periods}\n\\begin{tabular}{clrrrrrrr}\n    \\hline\n    \\textbf{Type} & & \\textbf{1} & \\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5} & \\textbf{6} & \\textbf{7} \\\\\n\\hline\n    \\textbf{1} & No. used & 3 & 4 & 4 & 7 & 3 & 3 & 3\\\\\n & Tot. output & 2250 & 4600 & 3000 & 8600 & 2250 & 2600 & 2250\\\\\n & Add. 
output & 0 & 1600 & 0 & 3350 & 0 & 350 & 0\\\\\n \\hline\n    \\textbf{2} & No. used & 4 & 4 & 4 & 4 & 4 & 4 & 4\\\\\n & Tot. output & 5750 & 6000 & 4200 & 6000 & 4950 & 6000 & 5950\\\\\n & Add. output & 1750 & 2000 & 200 & 2000 & 950 & 2000 & 1950\\\\\n \\hline\n    \\textbf{3} & No. used & 2 & 8 & 8 & 8 & 8 & 8 & 4\\\\\n & Tot. output & 4000 & 16000 & 16000 & 16000 & 16000 & 16000 & 8000\\\\\n & Add. output & 1600 & 6400 & 6400 & 6400 & 6400 & 6400 & 3200\\\\\n \\hline\n    \\textbf{4} & No. used & 0 & 3 & 1 & 3 & 1 & 3 & 1\\\\\n & Tot. output & 0 & 5400 & 1800 & 5400 & 1800 & 5400 & 1800\\\\\n & Add. output & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n \\hline\n\\end{tabular}\n\n\\medskip\n\\emph{Note:} The above table was generated from the solution values $x_{pt}^*, \ny_{pt}^*$, and $z_{pt}^*$ with \\texttt{a-6\\_report.py}.\n\\end{table}\n\n\\section{C-2 Production of drinking glasses}\n\nThis problem is taken from\\footnote{C. Gu\u00e9ret, C. Prins, M. Sevaux, \\textit{Applications of optimization with Xpress-MP}. %\nParis: Dash Optimization Ltd., 2007. Page 106.}.\n\n\\subsection{Parameters}\n\nLet $W =\\left\\lbrace 1,\\ldots, n\\right\\rbrace$ enumerate $n=12$ week-long planning\nperiods, and let $P=\\left\\lbrace 1,\\ldots, m\\right\\rbrace$ enumerate the $m=6$\nproduct variants. Each product $p\\in P$ has a predicted demand $d_{pt}$ during week\n$t\\in W$ as given in \\cref{table:c2-demands}.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Predicted weekly demand for each product variant}\\label{table:c2-demands}\n    \\begin{tabular}{c cccccccccccc}\n        \\hline\n        \\textbf{Week} & \\textbf{1} &\\textbf{2} &\\textbf{3} &\\textbf{4} &\\textbf{5} &\\textbf{6} &\\textbf{7} &\\textbf{8} &\\textbf{9} &\\textbf{10} &\\textbf{11} &\\textbf{12} \\\\\n        \\hline\n        \\textbf{V1} & 20 & 22 & 18 & 35 & 17 & 19 & 23 & 20 & 29 & 30 & 28 & 32 \\\\\n        \\textbf{V2} & 17 & 19 & 23 & 20 & 11 & 10 & 12 & 34 & 21 & 23 & 30 & 12 \\\\\n        \\textbf{V3} & 18 & 35 & 17 & 10 & 9  & 21 & 23 & 15 & 10 & 0 & 13 & 17 \\\\\n        \\textbf{V4} & 31 & 45 & 24 & 38 & 41 & 20 & 19 & 37 & 28 & 12 & 30 & 37 \\\\\n        \\textbf{V5} & 23 & 20 & 23 & 15 & 10 & 22 & 18 & 30 & 28 & 7 & 15 & 10 \\\\\n        \\textbf{V6} & 22 & 18 & 20 & 19 & 18 & 35 & 0 & 28 & 12 & 30 & 21 & 23 \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\nEach product variant $p\\in P$ has an associated basic production cost $\\lambda_p$\nand an inventory storage cost $\\mu_p$ incurred on product held in inventory over\na given period. Production requires a known amount of worker labour time $\\delta_p$,\nmachine time $\\pi_p$, and production area $\\gamma_p$. In every period there\nis a limited amount of worker time $\\Delta$, available machine time $\\Pi$, and\nproduction area $\\Gamma$. Lastly, at the start of the planning period there exists volume \n$I_p$ of item $p$ in the inventory, and it is required that volume $F_p$ of item\n$p$ be in the inventory at the end of the planning period. All parameters are given in\n\\cref{table:c2-params}.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Given costs, production resources, and inventory of product variants}\n    \\label{table:c2-params}\n    \\begin{tabular}{cccccccc}\n        \\hline\n        & \\textbf{prod. cost} & \\textbf{inv. cost}  & \n        \\textbf{init. stock}  & \\textbf{fin. stock}  & \n        \\textbf{labour}  & \\textbf{mach. 
time}  & \n        \\textbf{area} \\\\\n        \\hline\n        \\textbf{V1} & 100 & 25 & 50 & 10 & 3 & 2 & 4 \\\\\n        \\textbf{V2} & 80 & 28 & 20 & 10 & 3 & 1 & 5 \\\\\n        \\textbf{V3} & 110 & 25 & 0 & 10 & 3 & 4 & 5 \\\\\n        \\textbf{V4} & 90 & 27 & 15 & 10 & 2 & 8 & 6 \\\\\n        \\textbf{V5} & 200 & 10 & 0 & 10 & 4 & 11 & 4 \\\\\n        \\textbf{V6} & 150 & 20 & 10 & 10 & 4 & 9 & 9 \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\newpage\n\n\\subsection{Decision variables}\n\nFor a given product $p\\in P$ and week $t\\in W$ let\n$0 \\leq x_{pt} \\in\\Int$ denote the production volume. \nAlso let $0\\leq y_{pt}\\in\\Int$ denote the amount of product stored in the inventory\nat the end of period $t$. \n\nTo simplify notation let $\\vect{x} = \\left[ x_{pt} \\right] \\in \\Int^{m\\times n}$\nand $\\vect{y} = \\left[ y_{pt} \\right] \\in \\Int^{m\\times n}$.\n\n\\subsection{Model}\n\n\\begin{mini!}\n    {\\vect{x}, \\vect{y}}{f\\left(\\vect{x}, \\vect{y}\\right)=\\sum_{p\\in P}\\sum_{t\\in W} \\left(\\lambda_p x_{pt} + \\mu_p y_{pt}\\right) \\protect\\label{eq:c2-obj}}{\\label{eq:c2}}{}\n    \\addConstraint{y_{p1}}{= I_p + x_{p1} - d_{p1}, \\forall p\\in P \\protect\\label{eq:c2-cstr1}}\n    \\addConstraint{y_{pt}}{= y_{p(t-1)} + x_{pt} - d_{pt}, \\forall p\\in P, 1 < t\\in W \\protect\\label{eq:c2-cstr2}}\n    \\addConstraint{y_{pn}}{= F_p, \\forall p\\in P, \\protect\\label{eq:c2-cstr3}}\n    \\addConstraint{\\sum_{p\\in P} \\delta_p x_{pt}}{\\leq \\Delta, \\forall t\\in W, \\protect\\label{eq:c2-cstr4}}\n    \\addConstraint{\\sum_{p\\in P} \\pi_p x_{pt}}{\\leq \\Pi, \\forall t\\in W, \\protect\\label{eq:c2-cstr5}}\n    \\addConstraint{\\sum_{p\\in P} \\gamma_p x_{pt}}{\\leq \\Gamma, \\forall t\\in W, \\protect\\label{eq:c2-cstr6}}\n    \\addConstraint{x_{pt}}{\\geq 0, \\forall p\\in P, t\\in W, \\protect\\label{eq:c2-cstr7}}\n    \\addConstraint{y_{pt}}{\\geq 0, \\forall p\\in P, t\\in W, \\protect\\label{eq:c2-cstr8}}\n\\end{mini!}\n\nWe seek to minimize total production cost. \\Cref{eq:c2-obj} is a cost function which simply sums the total production and storage\ncosts over all decision variables.\n\n\\Cref{eq:c2-cstr1} and \\cref{eq:c2-cstr2} state that the inventory at time $t$ is\nequal to the previous inventory plus the production minus the demand. \\Cref{eq:c2-cstr1}\nmakes special consideration for the initial inventory $I_p$. \\Cref{eq:c2-cstr3}\nensures that the final inventory for product $p$ is equal to $F_p$ at the end of the\nplanning period.\n\n\\Cref{eq:c2-cstr4}, \\cref{eq:c2-cstr5} and \\cref{eq:c2-cstr6} ensure that the limited\nproduction factors (worker time capacity $\\Delta$, machine time capacity \n$\\Pi$, and production area $\\Gamma$) are respected. 
\\Cref{eq:c2-cstr4}, \\cref{eq:c2-cstr5} and \\cref{eq:c2-cstr6} ensure that the limits on\nproduction factors are respected: worker time capacity $\\Delta$, machine time capacity \n$\\Pi$, and production area $\\Gamma$. For example,\nproduction of product $p$ during period $t$ requires $\\pi_p x_{pt}$ machine hours, the total\nof which shall not exceed $\\Pi$ for the given period.\n\nLastly, \\cref{eq:c2-cstr7} and \\cref{eq:c2-cstr8} are simply the canonical non-negativity\nconstraints on both decision variables $x_{pt}$ and $y_{pt}$.\n\n\\newpage\n\n\\subsection{Results}\n\nAn optimal solution $\\left(\\vect{x}^*, \\vect{y}^*\\right)$ is found with a total \nproduction cost of $f\\left(\\vect{x}^*, \\vect{y}^*\\right) = \\$186,076$.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Production and storage quantities for each product type}\n\\begin{tabular}{cccccccccccccc}\n    \\hline\n    & \\textbf{Week} & \\textbf{1} &\\textbf{2} &\\textbf{3} &\\textbf{4} &\\textbf{5} &\\textbf{6} &\\textbf{7} &\\textbf{8} &\\textbf{9} &\\textbf{10} &\\textbf{11} &\\textbf{12} \\\\\n    \\hline\n    \\textbf{1} & Prod. & 0 & 0 & 11 & 34 & 29 & 7 & 23 & 21 & 29 & 29 & 29 & 41\\\\\n& Store & 30 & 8 & 1 & 0 & 12 & 0 & 0 & 1 & 1 & 0 & 1 & 10\\\\\n\\textbf{2} & Prod. & 7 & 21 & 14 & 17 & 11 & 10 & 12 & 34 & 21 & 23 & 30 & 22\\\\\n& Store & 10 & 12 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 10\\\\\n\\textbf{3} & Prod. & 18 & 35 & 17 & 11 & 8 & 21 & 23 & 15 & 10 & 0 & 13 & 27\\\\\n& Store & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 10\\\\\n\\textbf{4} & Prod. & 16 & 45 & 24 & 38 & 41 & 20 & 20 & 36 & 29 & 11 & 31 & 46\\\\\n& Store & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 10\\\\\n\\textbf{5} & Prod. & 47 & 16 & 34 & 14 & 23 & 24 & 43 & 0 & 26 & 4 & 0 & 0\\\\\n& Store & 24 & 20 & 31 & 30 & 43 & 45 & 70 & 40 & 38 & 35 & 20 & 10\\\\\n\\textbf{6} & Prod. & 14 & 17 & 20 & 18 & 18 & 35 & 1 & 27 & 12 & 49 & 28 & 7\\\\\n& Store & 2 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 19 & 26 & 10\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\emph{Note:} I chose to solve this as an integer programming model, which is why my results\ndiffer slightly from the book's. The above table simply states the optimal\nsolution values for $x_{pt}^*$ and $y_{pt}^*$ for all $p\\in P$ and $t\\in W$.\n\n\\section{D-5 Cutting sheet metal}\n\nThis problem is taken from\\footnote{C. Gu\u00e9ret, C. Prins, M. Sevaux, \\textit{Applications of optimization with Xpress-MP}. %\nParis: Dash Optimization Ltd., 2007. Page 134.}.\n\n\\subsection{Parameters}\n\nLet $S =\\left\\lbrace 1, \\ldots, n\\right\\rbrace$ enumerate $n=4$ different sizes, i.e.\n$\\left\\lbrace \\texttt{36x50}, \\texttt{24x36}, \\texttt{20x60}, \\texttt{18x30}\\right\\rbrace$.\nAlso, let $P=\\left\\lbrace 1,\\ldots, m\\right\\rbrace$ enumerate $m$ different cutting patterns.\nFor $s\\in S$ and $p\\in P$ let $c_{sp}$ denote the number of pieces of size $s$ yielded\nby pattern $p$. 
The values of $c_{sp}$ are given by \\cref{table:d5-yield}.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Yields of various cutting patterns}\\label{table:d5-yield}\n    \\begin{tabular}{ccccccccccccccccc}\n        \\hline\n        \\textbf{Pattern} & \\textbf{1} & \\textbf{2} &\\textbf{3} &\\textbf{4} &\\textbf{5}&\\textbf{6}&\\textbf{7}&\\textbf{8}&\\textbf{9}&\\textbf{10}&\\textbf{11}&\\textbf{12}&\\textbf{13}&\\textbf{14}&\\textbf{15}&\\textbf{16} \\\\\n        \\hline\n        \\texttt{36x50} & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n        \\texttt{24x36} & 2 & 1 & 0 & 2 & 1 & 0 & 3 & 2 & 1 & 0 & 5 & 4 & 3 & 2 & 1 & 0 \\\\\n        \\texttt{20x60} & 0 & 0 & 0 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n        \\texttt{18x30} & 0 & 1 & 3 & 0 & 1 & 3 & 0 & 2 & 3 & 5 & 0 & 1 & 3 & 5 & 6 & 8 \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\nEach cut size $s\\in S$ has a given demand $\\vect{d} =\\left( d_s : s \\in S\\right)^T\n= \\left( 8, 13, 5, 15\\right)^T$. Finally, each pattern has an identical cost $\\kappa = 1$,\nwhich is simply the cost of one sheet of raw material.\n\n\\subsection{Decision variable}\n\nLet $0 \\leq x_{p} \\in \\Int$ denote the number of times pattern $p$ is used. To simplify\nnotation let $\\vect{x} = \\left( x_p : p \\in P\\right)$.\n\n\\subsection{Model}\n\n\\begin{mini!}\n    {\\vect{x}}{f(\\vect{x}) = \\sum_{p\\in P} x_p\\kappa \\protect\\label{eq:d5-obj}}{\\label{eq:d5}}{}\n    \\addConstraint{\\sum_{p\\in P} c_{sp} x_p}{\\geq d_s, \\forall s\\in S \\protect\\label{eq:d5-cstr1}}\n    \\addConstraint{x_p}{\\geq 0, \\forall p\\in P \\protect\\label{eq:d5-cstr2}}\n\\end{mini!}\n\nWe seek to minimize the total cost which is given by \\cref{eq:d5-obj}. This is simply\nthe total number of sheets of raw material used. \\Cref{eq:d5-cstr1} is the family\nof demand constraints, which guarantee that demand is met for each size $s\\in S$.\nFinally, \\cref{eq:d5-cstr2} is simply the canonical non-negativity constraint on\nthe decision variable $x_p$.\n\n\\subsection{Results}\n\nAn optimal solution is found which uses 11 sheets of raw material to satisfy\ndemand, with a cost function value of $f\\left(\\vect{x}^*\\right)= 11$. The following quantities of each pattern\nare used:\n\n\\texttt{pattern 1} = 3,\n\\texttt{pattern 3} = 5,\n\\texttt{pattern 4} = 2,\n\\texttt{pattern 7} = 1,\nand all others are unused. These values are simply the optimal non-zero values of the decision variable\n$x_p^*$, $\\forall p\\in P$.\n\n
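These quantities can be checked directly against the demand constraints \\cref{eq:d5-cstr1}\nusing the yields in \\cref{table:d5-yield}; every demand is in fact met exactly:\n\\begin{align*}\n    3\\cdot 1 + 5\\cdot 1 &= 8 = d_1, && (\\texttt{36x50})\\\\\n    3\\cdot 2 + 2\\cdot 2 + 1\\cdot 3 &= 13 = d_2, && (\\texttt{24x36})\\\\\n    2\\cdot 2 + 1\\cdot 1 &= 5 = d_3, && (\\texttt{20x60})\\\\\n    5\\cdot 3 &= 15 = d_4. && (\\texttt{18x30})\n\\end{align*}\n\n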
\\section{F-1 Flight connections at a hub}\n\nThis problem is taken from\\footnote{C. Gu\u00e9ret, C. Prins, M. Sevaux, \\textit{Applications of optimization with Xpress-MP}. %\nParis: Dash Optimization Ltd., 2007. Page 157.}.\n\n\\subsection{Parameters}\n\nLet $P = \\left\\lbrace 1, \\ldots, n\\right\\rbrace$ enumerate both the $n=6$ incoming flights\nand the $n$ outgoing flights. For $i\\in P$ and $j\\in P$, let $\\mu_{ij}$ denote\nthe number of passengers arriving on flight $i$ who seek to continue\non to destination $j$, as given by \\cref{table:f1-passengers}.\n\n\\begin{table}[h]\n    \\center\n    \\caption{Arriving passengers and destinations}\\label{table:f1-passengers}\n    \\begin{tabular}{c|cccccc}\n        \\hline\n        \\textbf{City} & \\textbf{1} & \\textbf{2} & \\textbf{3} & \\textbf{4} & \\textbf{5} & \\textbf{6} \\\\\n        \\hline\n        \\textbf{1} & 35 & 12 & 16 & 38 & 5 & 2 \\\\ \n        \\textbf{2} & 25 & 8 & 9 & 24 & 6 & 8 \\\\\n        \\textbf{3} & 12 & 8 & 11 & 27 & 3 & 2 \\\\\n        \\textbf{4} & 38 & 15 & 14 & 30 & 2 & 9 \\\\\n        \\textbf{5} & - & 9 &   8 & 25 &  10 & 5 \\\\\n        \\textbf{6} & - & - & - & 14 &  6 & 7 \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\subsection{Decision variable}\n\nLet $x_{ij} \\in\\lbrace 0, 1\\rbrace$ indicate that the aircraft\n    arriving from origin $i\\in P$ travels to destination $j\\in P$ for its next flight when\n    $x_{ij} = 1$. To simplify notation let $\\vect{x} = \\left[x_{ij}\\right]\n    \\in \\left\\lbrace 0, 1\\right\\rbrace^{n\\times n}$.\n\n\\subsection{Model}\n\nWe seek to minimize the number of passengers required to disembark and transfer to another plane for their\nnext flight. In other words, we wish to maximize the number of passengers staying on their arriving aircraft.\n\n\\begin{maxi!}\n    {\\vect{x}}{f(\\vect{x}) = \\sum_{i\\in P} \\sum_{j\\in P} \\mu_{ij}x_{ij} \\protect\\label{eq:f1-obj}}{\\label{eq:f1}}{}\n    \\addConstraint{\\sum_{j\\in P} x_{ij}}{= 1, \\forall i\\in P \\protect\\label{eq:f1-cstr1}}\n    \\addConstraint{\\sum_{i\\in P} x_{ij}}{= 1, \\forall j\\in P. \\protect\\label{eq:f1-cstr2}}\n\\end{maxi!}\n\n\\Cref{eq:f1-cstr1} and \\cref{eq:f1-cstr2} are the standard assignment constraints: each arriving\naircraft is assigned exactly one outgoing destination, and each destination is served by exactly one\naircraft, so $\\vect{x}$ is a permutation matrix.\n\n\\subsection{Results}\n\nThe optimal solution has a total of $f\\left(\\vect{x}^*\\right)= 112$ passengers remaining on their arrival flights for the\nremainder of their journeys. The following assignment of aircraft to destinations minimizes passenger\ninconvenience:\n\\medskip\n\nBordeaux $\\rightarrow$ London 38\n\nClermont-Ferrand $\\rightarrow$ Bern 8\n\nMarseille $\\rightarrow$ Brussels 11\n\nNantes $\\rightarrow$ Berlin 38\n\nNice $\\rightarrow$ Rome 10\n\nToulouse $\\rightarrow$ Vienna 7\n\n\\medskip\n\\emph{Note:} The above mapping is simply constructed from the permutation matrix given by $\\vect{x}^*$.\n\n
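As a sanity check, the per-leg passenger counts listed above account for the entire objective\nvalue: $38+8+11+38+10+7 = 112$.\n\n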
Refer to \\texttt{f-1\\_report.py}\nfor implementation details.\n\n\\end{document}\n", "meta": {"hexsha": "2cbd735adcfe813eb197be875d8a8b332e3f7ed9", "size": 24863, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "workshop5/latex-rev3/root.tex", "max_stars_repo_name": "jmnel/combinatorial-optimization", "max_stars_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "workshop5/latex-rev3/root.tex", "max_issues_repo_name": "jmnel/combinatorial-optimization", "max_issues_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "workshop5/latex-rev3/root.tex", "max_forks_repo_name": "jmnel/combinatorial-optimization", "max_forks_repo_head_hexsha": "c921dee6cb0febc47a8a791f8220b02b35caf0cf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.716442953, "max_line_length": 217, "alphanum_fraction": 0.6417568274, "num_tokens": 9203, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.665410558746814, "lm_q2_score": 0.8289388104343892, "lm_q1q2_score": 0.5515846370180663}}
{"text": "%!TEX root = fastZKP.tex\n\n\\section{Zero Knowledge Argument Protocols}\\label{sec:zkp}\n\nIn this section, we present the construction of our new zero-knowledge argument system. In~\\cite{zhang2017vsql}, Zhang et al. proposed to combine the GKR protocol with a verifiable polynomial delegation protocol, resulting in an argument system. Later, in~\\cite{zkvpd,hyrax}, the construction was extended to zero-knowledge, by sending all the messages in the GKR protocol in homomorphic commitments and performing all the checks by zero-knowledge equality and product testing. This incurs a high overhead for the verifier compared to the plain version without zero-knowledge, as each multiplication becomes an exponentiation and each equality check becomes a $\\Sigma$-protocol, which is around $100\\times$ slower in practice.\n\nIn this paper, we follow the same blueprint of combining GKR and VPD to obtain an argument system, but instead show how to extend it to be zero-knowledge efficiently. In particular, the prover masks the GKR protocol with special random polynomials so that the verifier runs a \"randomized\" GKR that leaks no extra information and her overhead is small. A similar approach was used by Chiesa et.al in~\\cite{zksumcheck}. In the following, we present the zero-knowledge version of each building block, followed by the whole zero-knowledge argument.\n\n\\begin{figure}[b!]\n\t\\small{\n\t\t\\centering{\\centering\n\t\t\t\\framebox{\\parbox{.99\\linewidth}{\n\t\t\t\t\t\\begin{construction}\n\t\t\t\t\t\t\\label{construction::zksumcheck}\n\t\t\t\t\t\tWe assume the existence of a zkVPD protocol defined in Section~\\ref{subsec::zkvpd}. For simplicity, we omit the randomness $r_f$ and public parameters $\\pp,\\vp$ without any ambiguity. To prove the claim $H = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$:\n\t\t\t\t\t\t\\begin{enumerate}\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\item $\\P$ selects a polynomial $g(x_1,\\ldots, x_\\ell) = a_{0} + g_1(x_1) + g_2(x_2) + \\ldots + g_l(x_\\ell)$, where $g_{i}(x_i) = a_{i,1}x_i + a_{i,2}x_i^2 + \\ldots + a_{i,d}x_i^d$ and all $a_{i,j}$s are uniformly random. $\\P$ sends $H = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$, $G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} g(x_1, x_2, \\ldots, x_\\ell)$ and $\\comm_g = \\Commit(g)$ to $\\V$.\n\t\t\t\t\t\t\t\\item $\\V$ uniformly selects $\\rho \\in \\mathbb{F}^*$, computes $H+\\rho G$ and sends $\\rho$ to $\\P$.\n\t\t\t\t\t\t\t\\item $\\P$ and $\\V$ run the sumcheck protocol on\n\t\t\t\t\t\t\t$$H + \\rho G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}(f(x_1, x_2, \\ldots, x_\\ell) + \\rho g(x_1, x_2, \\ldots, x_\\ell))$$\n\t\t\t\t\t\t\t\\item At the last round of the sumcheck protocol, $\\V$ obtains a claim $h_\\ell(r_\\ell) = f(r_1, r_2, \\ldots, r_\\ell)+\\rho g(r_1, r_2, \\ldots, r_\\ell)$. $\\P$ and $\\V$ opens the commitment of $g$ at $r = (r_1,\\ldots, r_\\ell)$ by $(g(r), \\pi)\\leftarrow\\Open(g,r), \\Verify(\\comm_g,g(r),r,\\pi)$. 
If $\\Verify$ outputs $\\reject$, $\\V$ aborts.\n\t\t\t\t\t\t\t\\item $\\V$ computes $h_\\ell(r_\\ell)-\\rho g(r_1,\\ldots,r_\\ell)$ and compares it with the oracle access of $f(r_1,\\ldots, r_\\ell)$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\\end{enumerate}\n\t\\end{construction}}}}}\n\t%\\vspace{-.2in}\n\\end{figure}\n\n\n\\subsection{Zero Knowledge Sumcheck}\\label{subsec:zksumcheck}\nAs a core step of the GKR protocol, $\\P$ and $\\V$ execute a sumcheck protocol on Equation~\\ref{eq:GKR}, during which $\\P$ sends $\\V$ evaluations of the polynomial at several random points chosen by $\\V$. These evaluations leak information about the values in the circuit, as they can be viewed as weighted sums of these values.\n\nTo make the sumcheck protocol zero-knowledge, we take the approach proposed by Chiesa et al. in \\cite{zksumcheck}, which masks the polynomial in the sumcheck protocol with a random polynomial. In this approach, to prove \n$$H = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}f(x_1, x_2, \\ldots, x_\\ell)$$\nthe prover generates a random polynomial $g$ with the same variables and individual degrees as $f$. She commits to the polynomial $g$, and sends the verifier a claim $G = \\sum\\limits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary}g(x_1, x_2, \\ldots, x_\\ell)$. The verifier picks a random number $\\rho$, and executes a sumcheck protocol with the prover on $$H + \\rho G = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\{0, 1\\}}(f(x_1, x_2, \\ldots, x_\\ell) + \\rho g(x_1, x_2, \\ldots, x_\\ell)).$$ At the last round of this sumcheck, the prover opens the commitment of $g$ at $(r_1, \\ldots, r_\\ell)$, and the verifier computes $f(r_1, \\ldots, r_\\ell)$ by subtracting $\\rho g(r_1, \\ldots, r_\\ell)$ from the last message, and compares it with the oracle access of $f$. It is shown that as long as the commitment and opening of $g$ are zero-knowledge, the protocol is zero-knowledge. Intuitively, this is because all the coefficients of $f$ are masked by those of $g$. The soundness still holds because of the random linear combination of $f$ and $g$. \n\nUnfortunately, the masking polynomial $g$ is as big as $f$, and opening it to a random point later is expensive. In~\\cite{zksumcheck}, the prover sends a PCP oracle of $g$, and executes a zero-knowledge sumcheck to open it to a random point, which incurs an exponential complexity for the prover. Even if it is replaced with the zkVPD protocol in~\\cite{zkvpd}, the prover time remains slow in practice.\n\nIn this paper, we show that it suffices to mask $f$ with a small polynomial to achieve zero-knowledge. In particular, we set $g(x_1, \\ldots, x_\\ell) = a_{0} + g_1(x_1) + g_2(x_2) + \\ldots + g_\\ell(x_\\ell)$, where $g_{i}(x_i) = a_{i,1}x_i + a_{i,2}x_i^2 + \\ldots + a_{i,d}x_i^d$ is a random univariate polynomial of degree $d$ ($d$ is the variable degree of $f$). Note here that the size of $g$ is only $O(d\\ell)$, while the size of $f$ is $O\\left((d+1)^\\ell\\right)$.\n\nThe intuition of our improvement is that the prover sends $O(d\\ell)$ messages in total to the verifier during the sumcheck protocol, thus a polynomial $g$ with $O(d\\ell)$ random coefficients is sufficient to mask all the messages and achieve zero-knowledge. We present the full protocol in Construction~\\ref{construction::zksumcheck}.\n\n
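Concretely, $g$ has exactly $d\\ell+1$ random coefficients: the constant term $a_0$ plus the $d$ coefficients $a_{i,1},\\ldots,a_{i,d}$ of each $g_i$. As the proof of Theorem~\\ref{thm:zksc} below makes precise, this matches exactly the $\\ell(d+1)-(\\ell-1) = \\ell d+1$ independent linear constraints that the verifier learns about the coefficients of the masked polynomial during the sumcheck.\n\n\n\n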
The completeness of the protocol clearly holds. The soundness follows from the soundness of the sumcheck protocol and the random linear combination in steps 2 and 3, as proven in~\\cite{zksumcheck}. We give a proof of zero knowledge here.\n\n\\begin{theorem}[Zero knowledge]\\label{thm:zksc}\n\tFor every verifier $\\V^*$ and every $\\ell$-variate polynomial $f:\\F^\\ell\\rightarrow\\F$ with variable degree $d$, there exists a simulator $\\S$ such that given access to $H = \\sum\\nolimits_{x_1, x_2, \\ldots, x_\\ell \\in \\binary} f(x_1, x_2, \\ldots, x_\\ell)$, $\\S$ is able to simulate the partial view of $\\V^*$ in steps 1-4 of Construction~\\ref{construction::zksumcheck}. \n\\end{theorem}\n\n\n\\begin{proof}\n\t\n\tWe build the simulator $\\S$ as follows.\n\t\n\t\\begin{enumerate}\n\t\t\n\t\t\\item $\\S$ selects a random polynomial $g^*(x_1,\\ldots,x_\\ell) = a^*_{0} + g^*_1(x_1) + g^*_2(x_2) + \\cdots + g^*_\\ell(x_\\ell)$, where $g^*_i(x_i) = a^*_{i,1}x_i + a^*_{i,2}x_i^2 + \\cdots + a^*_{i,d}x_i^d$. $\\S$ sends $H$, $G^* = \\sum\\limits_{x_1, x_2, \\cdots, x_\\ell \\in \\binary} g^*(x_1, x_2, \\cdots, x_\\ell)$ and $\\comm_{g^*} = \\Commit(g^*)$ to $\\V^*$.\n\t\t\n\t\t\n\t\t\\item $\\S$ receives $\\rho \\neq 0$ from $\\V^*$.\n\t\t\\item $\\S$ selects a polynomial $f^*:\\F^\\ell\\rightarrow\\F$ with variable degree $d$ uniformly at random, conditioned on $\\sum\\limits_{x_1, x_2, \\cdots, x_\\ell \\in \\{0, 1\\}}f^*(x_1, x_2, \\cdots, x_\\ell) = H$. $\\S$ then engages in a sumcheck protocol with $\\V^*$ on $H+\\rho G^* = \\sum\\limits_{x_1, x_2, \\cdots, x_\\ell \\in \\{0, 1\\}}(f^*(x_1, x_2, \\cdots, x_\\ell)+\\rho g^*(x_1, x_2, \\cdots, x_\\ell))$\n\t\t\n\t\t\\item Let $r \\in \\F^\\ell$ be the point chosen by $\\V^*$ in the sumcheck protocol. $\\S$ runs $(g^*(r), \\pi)\\leftarrow\\Open(g^*,r)$ and sends them to $\\V^*$.\n\t\t\n\t\\end{enumerate} \n\t\n\tAs both $g$ and $g^*$ are randomly selected, and the zkVPD protocol is zero-knowledge, it is obvious that steps 1 and 4 of $\\S$ are indistinguishable from those in the real world of Construction~\\ref{construction::zksumcheck}. It remains to show that the sumchecks in step 3 of both worlds are indistinguishable.\n\t\n\tTo see that, recall that in round $i$ of the sumcheck protocol, $\\V$ receives a univariate polynomial $h_i(x_i) = \\sum\\limits_{b_{i+1}, \\ldots,b_\\ell\\in\\binary}h(r_1, \\ldots, r_{i-1},x_i, b_{i+1}, \\ldots, b_\\ell)$ where $h = f+\\rho g$. (The view of $\\V^*$ is defined in the same way with $h^*,f^*,g^*$ and we omit the repetition in the following.) As the variable degree of $f$ and $g$ is $d$, $\\P$ sends $\\V$ $h_i(0), h_i(1), \\ldots, h_i(d)$, which uniquely define $h_i(x_i)$. These evaluations reveal $d+1$ independent linear constraints on the coefficients of $h$. In addition, note that when these evaluations are computed honestly by $\\P$, $h_i(0)+h_i(1) = h_{i-1}(r_{i-1})$, as required in the sumcheck protocol. Therefore, in all $\\ell$ rounds of the sumcheck, $\\V$ and $\\V^*$ receive $\\ell(d+1) - (\\ell-1) = \\ell d+1$ independent linear constraints on the coefficients of $h$ and $h^*$ respectively. \n\t\n\tAs $h$ and $h^*$ are masked by $g$ and $g^*$, each with exactly $\\ell d+1$ coefficients selected randomly, the two linear systems are identically distributed. Therefore, step 3 of the ideal world is indistinguishable from that of the real world.\n\t\n\\end{proof}\n\n\n\n\\subsection{Zero knowledge GKR}\\label{subsec::zkgkr}\n\nTo achieve zero-knowledge, we replace the sumcheck protocol in GKR with the zero-knowledge version described in the previous section. 
However, the protocol still leaks additional information. In particular, at the end of the zero-knowledge sumcheck, $\\V$ queries the oracle to evaluate the polynomial on a random point. When executed on Equation~\\ref{eq:GKR}, this reveals two evaluations of the polynomial $\\tV_i$ defined by the values in the $i$-th layer of the circuit: $\\tV_i(u)$ and $\\tV_i(v)$.\n\n\nTo prevent this leakage, Chiesa et al.~\\cite{zksumcheck} proposed to replace the multi-linear extension $\\tV_i$ with a low degree extension, such that learning $\\tV_i(u)$ and $\\tV_i(v)$ does not leak any information about $V_i$. Define a low degree extension of $V_i$ as \n\n\\begin{equation}\\label{eq:lde}\n\\dot{V}_{i}(z) \\overset{def}{=} \\tV_i(z)+Z_i(z)\\sum\\nolimits_{w \\in \\{0, 1\\}^\\lambda}R_i(z, w),\n\\end{equation}\nwhere $Z_i(z) = \\prod_{j=1}^{s_i} z_j(1-z_j)$, i.e., $Z_i(z)=0$ for all $z\\in\\{0, 1\\}^{s_i}$. $R_i(z,w)$ is a random low-degree polynomial and $\\lambda$ is the security parameter. With this low degree extension, Equation~\\ref{eq:GKR} becomes\n\n\\begin{align}\\label{eq:zkGKR}\n\\dot{V}_{i}(g)&=\\sum\\nolimits_{x, y\\in \\binary^{s_{i+1}}}\\tilde{mult}_{i+1}(g, x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n&+\\tilde{add}_{i+1}(g,x,y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\nonumber + Z_i(g)\\sum\\nolimits_{w \\in \\binary^\\lambda}R_i(g, w)\\nonumber\\\\\n&=\\sum\\nolimits_{x, y\\in \\binary^{s_{i+1}},w \\in \\binary^\\lambda}\\Big(I(\\vec{0},w) \\cdot \\big(\\tilde{mult}_{i+1}(g, x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n&+\\tilde{add}_{i+1}(g,x,y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\big)\\nonumber + I((x, y), \\vec{0})Z_i(g)R_i(g, w)\\Big)\n\\end{align}\nwhere $I(\\vec{a},\\vec{b})$ is an identity polynomial: on binary inputs, $I(\\vec{a},\\vec{b}) = 1$ if $\\vec{a}=\\vec{b}$ and $I(\\vec{a},\\vec{b}) = 0$ otherwise. The first equation holds because $\\dot{V}_i$ agrees with $\\tV_i$ on the Boolean hyper-cube $\\binary^{s_i}$, as $Z_i(z) = 0$ for binary inputs. The second equation holds because the mask in $\\dot{V}_i$ is in the form of a ``sum'' and can be moved into the sumcheck equation, with the identity polynomial $I$ ensuring that each term is counted exactly once in the sum over the extra variable $w$. \n\nWhen executing the zero-knowledge sumcheck protocol on Equation~\\ref{eq:zkGKR}, at the end of the protocol, $\\V$ receives $\\dot{V}_{i+1}(u)$ and $\\dot{V}_{i+1}(v)$ for random points $u,v\\in\\F^{s_{i+1}}$ chosen by $\\V$. They no longer leak information about $V_{i+1}$, as they are masked by $Z_{i+1}(z)\\sum\\nolimits_{w \\in \\{0, 1\\}^\\lambda}R_{i+1}(z, w)$ for $z=u$ and $z=v$. $\\V$ computes $\\tilde{mult}_{i+1}(g,u,v)$ and $\\tilde{add}_{i+1}(g,u,v)$ as before, computes $Z_i(g), I(\\vec{0},c), I((u,v),\\vec{0})$ where $c\\in\\F^\\lambda$ is a random point chosen by $\\V$ for variable $w$, opens $R_i(g,w)$ at $c$ with $\\P$ through a polynomial commitment, and checks that together with $\\dot{V}_{i+1}(u), \\dot{V}_{i+1}(v)$ received from $\\P$ they are consistent with the last message of the sumcheck. $\\V$ then uses $\\dot{V}_{i+1}(u), \\dot{V}_{i+1}(v)$ to proceed to the next round.\n\nUnfortunately, similar to the zero-knowledge sumcheck, the masking polynomial $R_i$ is very large in~\\cite{zksumcheck}. Opening $R_i$ at a random point takes exponential time for $\\P$ either using a PCP oracle as in~\\cite{zksumcheck} or potentially using a zkVPD, as $R_i$ has $s_i+2s_{i+1}+\\lambda$ variables.\n\nIn this section, we show that we can set $R_i$ to be a small polynomial to achieve zero-knowledge. In particular, $R_i$ has only two variables with variable degree 2. This is because in the $(i-1)$-th round, $\\V$ receives two evaluations of $V_i$, $\\dot{V}_i(u)$ and $\\dot{V}_i(v)$, which are masked by $\\sum_{w}R_i(u_1,w)$ and $\\sum_{w}R_i(v_1,w)$; in the $i$-th sumcheck, $\\V$ opens $R_i$ at $R_i(u_1,c)$ and $R_i(v_1,c)$. It suffices to make these four evaluations linearly independent, assuming the commitment and opening of $R_i$ use a zkVPD. Therefore, we set the low-degree term in Equation~\\ref{eq:lde} as $Z_i(z)\\sum\\nolimits_{w \\in\\binary} R_i(z_1, w)$, i.e., $R_i$ takes only two variables, the first variable $z_1$ of $z$ and an extra variable $w\\in\\binary$ instead of $\\binary^\\lambda$, with variable degree 2. \n\n
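For concreteness, such a masking polynomial can be written out explicitly as\n\\[\nR_i(z_1,w) = a_0+a_1z_1+a_2w+a_3z_1w+a_4z_1^2+a_5w^2+a_6z_1^2w^2,\n\\]\nwith $a_0,\\ldots,a_6$ chosen uniformly at random; this is exactly the form analyzed in the proof of Theorem~\\ref{thm:zkgkr} below.\n\n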
The full protocol is presented in Construction~\\ref{construction:zkgkr}. Here we use superscripts (e.g., $u^{(i)}$) to denote random numbers or vectors for the $i$-th layer of the circuit.\n\n%\\begin{figure}[H]\n\n%{\\small\n%\\centering{\\centering\n%\\framebox{\\parbox{.99\\linewidth}{\n\\begin{mdframed}[nobreak=false]\n\t\\begin{construction}\\label{construction:zkgkr}\n\t\t\\begin{enumerate} \n\t\t\t\\item On a layered arithmetic circuit $C$ with $d$ layers and input $\\mathsf{in}$, the prover $\\P$ sends the output of the circuit $\\mathsf{out}$ to the verifier $\\V$.\n\t\t\t\n\t\t\t\\item $\\P$ randomly selects polynomials $R_1(z_1, w), \\ldots, R_d(z_1, w): \\F^2\\rightarrow \\F$ with variable degree 2. $\\P$ commits to these polynomials by sending $\\comm_i\\leftarrow\\Commit(R_i)$ to $\\V$ for $i\\in[1,d]$.\n\t\t\t\n\t\t\t\\item $\\V$ defines $\\dot{V}_0(z)= \\tV_0(z)$, where $\\tV_0(z)$ is the multilinear extension of $\\mathsf{out}$. $\\dot{V}_0(z)$ can be viewed as a special case with $R_0(z_1,w)$ being the 0 polynomial. $\\V$ evaluates it at a random point $g^{(0)}$, obtaining $\\dot{V}_0(g^{(0)})$, and sends $g^{(0)}$ to $\\P$.\n\t\t\t\n\t\t\t\\item $\\P$ and $\\V$ execute the zero knowledge sumcheck protocol presented in Construction~\\ref{construction::zksumcheck} on\n\t\t\t\\[\n\t\t\t\\begin{aligned}\n\t\t\t\\dot{V}_{0}(g^{(0)})=\\sum_{x, y\\in \\binary^{s_{1}}}&\\tilde{mult}_{1}(g^{(0)}, x, y)(\\dot{V}_{1}(x)\\dot{V}_{1}(y))\\\\\n\t\t\t&+\\tilde{add}_{1}(g^{(0)},x,y)(\\dot{V}_{1}(x)+\\dot{V}_{1}(y))\n\t\t    \\end{aligned}\n\t\t\t\\]\n\t\t\tIf $u_1^{(1)} = v_1^{(1)}$, $\\P$ aborts. At the end of the protocol, $\\V$ receives $\\dot{V}_1(u^{(1)})$ and $\\dot{V}_1(v^{(1)})$. $\\V$ computes $\\tmult_1(g^{(0)},u^{(1)},v^{(1)})$, $\\tadd_1(g^{(0)},u^{(1)},v^{(1)})$ and checks that $$\\tmult_1(g^{(0)},u^{(1)},v^{(1)})\\dot{V}_1(u^{(1)})\\dot{V}_1(v^{(1)})+\\tadd_1(g^{(0)},u^{(1)},v^{(1)})(\\dot{V}_1(u^{(1)})+\\dot{V}_1(v^{(1)}))$$ equals the last message of the sumcheck (evaluation oracle).\n\t\t\t\n\t\t\t\\item For layer $i=1,\\ldots, d-1$:\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item $\\V$ randomly selects $\\alpha^{(i)}, \\beta^{(i)}\\in\\F$ and sends them to $\\mathcal{P}$.\n\t\t\t\t\\item  Let $Mult_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, x, y)$ and\\\\ $Add_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, x, y)$. 
$\\P$ and $\\V$ run the zero knowledge sumcheck on the equation\\\\\n\t\t\t\t\n\t\t\t\t$\\alpha^{(i)}\\dot{V}_i(u^{(i)})+\\beta^{(i)}\\dot{V}_i(v^{(i)})=$\n\t\t\t\t\\begin{align*}\n\t\t\t\t&\\sum_{\\substack{x, y\\in \\binary^{s_{i+1}}\\\\w \\in \\binary}}\\Big(I(\\vec{0},w) \\cdot \\big(Mult_{i+1}(x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n\t\t\t\t&+Add_{i+1}(x, y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\big)\\\\\n\t\t\t\t&+ I((x, y), \\vec{0})\\big(\\alpha^{(i)}Z_i(u^{(i)})R_i(u_1^{(i)}, w)+\\beta^{(i)}Z_i(v^{(i)})R_i(v_1^{(i)}, w)\\big)\\Big)\n\t\t\t\t\\end{align*}\n\t\t\t\tIf $u_1^{(i+1)} = v_1^{(i+1)}$, $\\P$ aborts.\n\t\t\t\t\n\t\t\t\t\\item At the end of the zero-knowledge sumcheck protocol, $\\P$ sends $\\V$ $\\dot{V}_{i+1}(u^{(i+1)})$ and $\\dot{V}_{i+1}(v^{(i+1)})$.\n\t\t\t\t\n\t\t\t\t\\item $\\V$ computes $$a_{i+1} = \\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$$ and $$b_{i+1} = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$$ locally. $\\V$ computes $Z_i(u^{(i)}),Z_i(v^{(i)}),I(\\vec{0},c^{(i)}), I((u^{(i+1)},v^{(i+1)}),\\vec{0})$ locally.\n\t\t\t\t\\item $\\P$ and $\\V$ open $R_i$ at two points $R_i(u_1^{(i)},c^{(i)})$ and $R_i(v_1^{(i)},c^{(i)})$ using $\\Open$ and $\\Verify$.\n\t\t\t\t\\item $\\V$ computes the following as the evaluation oracle and uses it to complete the last step of the zero-knowledge sumcheck.\n\t\t\t\t%{\\footnotesize\n\t\t\t\t\\begin{align*}\n\t\t\t\t&I(\\vec{0},c^{(i)})(a_{i+1}(\\dot{V}_{i+1}(u^{(i+1)})\\dot{V}_{i+1}(v^{(i+1)}))+\\\\\n\t\t\t\t&b_{i+1}(\\dot{V}_{i+1}(u^{(i+1)})+\\dot{V}_{i+1}(v^{(i+1)})))+\\\\\n\t\t\t\t&I((u^{(i+1)},v^{(i+1)}),\\vec{0})(\\alpha^{(i)}Z_i(u^{(i)})R_i(u_1^{(i)}, c^{(i)})+\\beta^{(i)}Z_i(v^{(i)})R_i(v_1^{(i)}, c^{(i)}))\n\t\t\t\t\\end{align*}\n\t\t\t\t%}\n\t\t\t\tIf all checks in the zero knowledge sumcheck and $\\Verify$ pass, $\\V$ uses $\\dot{V}_{i+1}(u^{(i+1)})$ and $\\dot{V}_{i+1}(v^{(i+1)})$ to proceed to the $(i+1)$-th layer. Otherwise, $\\V$ outputs $\\reject$ and aborts.\n\t\t\t\t\n\t\t\t\\end{enumerate}\n\t\t\t\n\t\t\t\\item At the input layer $d$, $\\V$ has two claims $\\dot{V}_{d}(u^{(d)})$ and $\\dot{V}_{d}(v^{(d)})$. $\\P$ and $\\V$ open $R_d$ at 4 points $R_d(u_1^{(d)},0)$, $R_d(u_1^{(d)},1)$, $R_d(v_1^{(d)},0)$, $R_d(v_1^{(d)},1)$ and $\\V$ checks that $\\dot{V}_{d}(u^{(d)}) = \\tV_d(u^{(d)})+Z_d(u^{(d)})\\sum\\limits_{w \\in \\binary}R_d(u_1^{(d)},w)$ and $\\dot{V}_{d}(v^{(d)}) = \\tV_d(v^{(d)})+Z_d(v^{(d)})\\sum\\limits_{w \\in \\binary}R_d(v_1^{(d)},w)$, given oracle access to two evaluations of $\\tV_d$ at $u^{(d)}$ and $v^{(d)}$. If the check passes, output $\\accept$; otherwise, output $\\reject$.\n\t\t\t\n\t\t\\end{enumerate}\n\t\\end{construction}\n\\end{mdframed}\n\n\n%}}}}\n%\\end{figure}\n\n\n\\begin{theorem}\\label{thm:zkgkr}\n\tConstruction~\\ref{construction:zkgkr} is an interactive proof protocol per Definition~\\ref{def:ip}, for a function $f$ defined by a layered arithmetic circuit $C$ such that $f(\\mathsf{in},\\mathsf{out}) = 1$ iff $C(\\mathsf{in}) = \\mathsf{out}$. In addition, for every verifier $\\V^*$ and every layered circuit $C$, there exists a simulator $\\S$ such that given oracle access to $\\mathsf{out}$, $\\S$ is able to simulate the partial view of $\\V^*$ in steps 1-5 of Construction~\\ref{construction:zkgkr}. \n\\end{theorem}\n\nThe completeness follows from the construction explained above and the completeness of the zero knowledge sumcheck. 
The soundness follows from the soundness of the GKR protocol with low degree extensions, as proven in~\\cite{GKR} and~\\cite{zksumcheck}. We give the proof of zero knowledge here.\n\n\\begin{proof} With oracle access to $\\mathsf{out}$, and the simulator $\\S_{sc}$ of the zero-knowledge sumcheck protocol in Section~\\ref{subsec:zksumcheck} as a subroutine, we construct the simulator $\\mathcal{S}$ as follows:\n\t\n\t\\begin{enumerate}\n\t\t\n\t\t\\item $\\S$ sends $\\mathsf{out}$ to $\\V^*$.\n\t\t\n\t\t\\item $\\S$ randomly selects polynomials $R^*_1(z_1, w), \\ldots, R^*_d(z_1, w): \\F^2\\rightarrow \\F$ with variable degree 2. $\\S$ commits to these polynomials by sending $\\comm_i\\leftarrow\\Commit(R^*_i)$ to $\\V^*$ for $i\\in[1,d]$.\n\t\t\n\t\t\\item $\\S$ receives $g^{(0)}$ from $\\V^*$.\n\t\t\n\t\t\\item $\\S$ calls $\\S_{sc}$ to simulate the partial view of the zero knowledge sumcheck protocol on \n\t\t{\\footnotesize\n\t\t\t\\[\n\t\t\t\\dot{V}_{0}(g^{(0)})=\\sum_{x, y\\in \\binary^{s_{1}}}\\tilde{mult}_{1}(g^{(0)}, x, y)(\\dot{V}_{1}(x)\\dot{V}_{1}(y))+\\tilde{add}_{1}(g^{(0)},x,y)(\\dot{V}_{1}(x)+\\dot{V}_{1}(y))\n\t\t\t\\]\n\t\t}\n\t\tIf $u_1^{(1)} = v_1^{(1)}$, $\\S$ aborts. At the end of the sumcheck, $\\S$ samples $\\dot{V}^*_1(u^{(1)})$ and $\\dot{V}^*_1(v^{(1)})$ such that $\\tmult_1(g^{(0)},u^{(1)},v^{(1)})\\dot{V}^*_1(u^{(1)})\\dot{V}^*_1(v^{(1)})+\\tadd_1(g^{(0)},u^{(1)},v^{(1)})$ $(\\dot{V}^*_1(u^{(1)})+\\dot{V}^*_1(v^{(1)}))$ equals the last message of the sumcheck.\n\t\t\n\t\t\\item For layer $i=1,\\ldots,d-1$:\n\t\t\\begin{enumerate}\n\t\t\t\\item $\\S$ receives $\\alpha^{(i)}, \\beta^{(i)}$ from $\\V^*$.\n\t\t\t\\item Let $Mult_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, x, y)$ and \\\\ $Add_{i + 1}(x, y) = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, x, y)+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, x, y)$. $\\S$ calls $\\S_{sc}$ to simulate the partial view of the zero knowledge sumcheck protocol on \\\\\n\t\t\t\n\t\t\t$\\alpha^{(i)}\\dot{V}_i(u^{(i)})+\\beta^{(i)}\\dot{V}_i(v^{(i)})=$\n\t\t\t\\begin{align*}\n\t\t\t&\\sum_{\\substack{x, y\\in \\binary^{s_{i+1}}\\\\w \\in \\binary}}\\Big(I(\\vec{0},w) \\cdot \\big(Mult_{i+1}(x, y)(\\dot{V}_{i+1}(x)\\dot{V}_{i+1}(y))\\\\\n\t\t\t&+Add_{i+1}(x, y)(\\dot{V}_{i+1}(x)+\\dot{V}_{i+1}(y))\\big)\\\\\n\t\t\t&+ I((x, y), \\vec{0})\\big(\\alpha^{(i)}Z_i(u^{(i)})R^*_i(u_1^{(i)}, w)+\\beta^{(i)}Z_i(v^{(i)})R^*_i(v_1^{(i)}, w)\\big)\\Big)\n\t\t\t\\end{align*}\n\t\t\tIf $u_1^{(i+1)} = v_1^{(i+1)}$, $\\S$ aborts.\n\t\t\t\n\t\t\t\\item At the end of the zero-knowledge sumcheck protocol, $\\S$ samples $\\dot{V}^*_{i+1}(u^{(i+1)})$ and $\\dot{V}^*_{i+1}(v^{(i+1)})$ randomly such that the following equals the last message of the sumcheck protocol.\n\t\t\t{\\footnotesize\n\t\t\t\t\\begin{align*}\n\t\t\t\tI(\\vec{0},c^{(i)})(a_{i+1}(\\dot{V}^*_{i+1}(u^{(i+1)})\\dot{V}^*_{i+1}(v^{(i+1)}))+b_{i+1}(\\dot{V}^*_{i+1}(u^{(i+1)})+\\dot{V}^*_{i+1}(v^{(i+1)})))\\\\\n\t\t\t\t+I((u^{(i+1)},v^{(i+1)}),\\vec{0})(\\alpha^{(i)}Z_i(u^{(i)})R^*_i(u_1^{(i)}, c^{(i)})+\\beta^{(i)}Z_i(v^{(i)})R^*_i(v_1^{(i)}, c^{(i)}))\n\t\t\t\t\\end{align*}\n\t\t\t}\n\t\t\tHere $a_{i+1} = \\alpha^{(i)}\\tilde{mult}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{mult}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$ and \\\\ $b_{i+1} = \\alpha^{(i)}\\tilde{add}_{i+1}(u^{(i)}, u^{(i+1)}, v^{(i+1)})+\\beta^{(i)}\\tilde{add}_{i+1}(v^{(i)}, u^{(i+1)}, v^{(i+1)})$. 
$\\S$ sends $\\dot{V}^*_{i+1}(u^{(i+1)})$ and $\\dot{V}^*_{i+1}(v^{(i+1)})$ to $\\V^*$.\n\t\t\t\n\t\t\t\\item $\\V^*$ computes the corresponding values locally as in step 5(d) of Construction~\\ref{construction:zkgkr}.\n\t\t\t\\item $\\S$ opens $R^*_i$ at two points $R^*_i(u_1^{(i)},c^{(i)})$ and $R^*_i(v_1^{(i)},c^{(i)})$ using $\\Open$.\n\t\t\t\\item $\\V^*$ performs the checks as in step 5(f) of Construction~\\ref{construction:zkgkr}.\n\t\t\\end{enumerate}\n\t\\end{enumerate} \n\t\n\tNote here that $\\V^*$ can actually behave arbitrarily in steps 5(d) and 5(f) above. We include these steps to be consistent with the real world in Construction~\\ref{construction:zkgkr} for ease of interpretation.\n\t\n\tTo prove zero-knowledge, steps 1, 3, 5(a), 5(d) and 5(f) are obviously indistinguishable as $\\S$ only receives messages from $\\V^*$. Steps 2 and 5(e) of both worlds are indistinguishable because of the zero knowledge property of the zkVPD, and the fact that $R^*$ and $R$ are sampled randomly in both worlds. Steps 4 and 5(b) are indistinguishable as proven in Theorem~\\ref{thm:zksc} for $\\S_{sc}$.  \n\t\n\t\n\tIt remains to consider the messages received at the end of step 4 and in step 5(c), namely $\\dot{V}_{i}(u^{(i)}), \\dot{V}_{i}(v^{(i)})$ and $\\dot{V}^*_{i}(u^{(i)}), \\dot{V}^*_{i}(v^{(i)})$ for $i = 1,\\ldots,d$. In the real world, $\\dot{V}_{i}(z)$ is masked by $\\sum\\limits_{w \\in\\binary}R_i(z_1,w)$ ($Z_i(z)$ is publicly known), thus $\\dot{V}_{i}(u^{(i)})$ and $\\dot{V}_{i}(v^{(i)})$ are masked by $\\sum\\limits_{w \\in\\binary}R_i(u_1^{(i)},w)$ and $\\sum\\limits_{w \\in\\binary}R_i(v_1^{(i)},w)$ correspondingly. In addition, in step 5(e), $R_i$ is opened at $R_i(u_1^{(i)},c^{(i)})$ and $R_i(v_1^{(i)},c^{(i)})$. To simplify the notation here, we consider only a particular layer and omit the subscript and superscript $i$. Let $R(z_1,w) = a_0+a_1z_1+a_2w+a_3z_1w+a_4z_1^2+a_5w^2+a_6z_1^2w^2$, where $a_0,\\ldots,a_6$ are randomly chosen. We can write the four evaluations above as \n\t\\[\n\t\\begin{bmatrix}\n\t2 & 2u_1 & 1 & u_1 & 2u_1^2 & 1 & u_1^2\\\\\n\t2 & 2v_1 & 1 & v_1& 2v_1^2 & 1 & v_1^2\\\\\n\t1 & u_1 & c & cu_1 & u_1^2 & c^2 & c^2u_1^2\\\\\n\t1 & v_1 & c & cv_1 &  v_1^2 & c^2 & c^2v_1^2\\\\\n\t\\end{bmatrix}\n\t\\times\n\t\\begin{bmatrix}\n\ta_0 & a_1 & a_2 & a_3 & a_4 & a_5 & a_6\n\t\\end{bmatrix}^T\n\t\\]\n\tAfter row reduction, the matrix on the left becomes\n\t\\[\n\t\\begin{bmatrix}\n\t2 & 2u_1 & 1 & u_1 & 2u_1^2 & 1 & u_1^2\\\\\n\t0 & 2(v_1-u_1) & 0 & v_1-u_1 & 2(u_1^2-v_1^2) & 0 & u_1^2-v_1^2\\\\\n\t0 & 0 & 2c-1 & (2c-1)u_1 & 0 & 2c^2-1 & (2c^2-1)u_1^2\\\\\n\t0 & 0 & 0 & (2c-1)(v_1-u_1)&0 & 0 & (2c^2-1)(v_1^2-u_1^2)\\\\\n\t\\end{bmatrix}\n\t\\]\n\tAs $u_1\\neq v_1$, the matrix has full rank if $2c^2 - 1\\not\\equiv 0 \\pmod p$, where $p$ is the prime that defines $\\F$. This holds if $2^{-1}$ is not a quadratic residue modulo $p$, or equivalently $p \\not\\equiv 1, 7 \\pmod 8$.\\footnote{From the reduced matrix, we can see that setting $a_2 = a_3 = a_4 = 0$ does not affect the rank of the matrix, which simplifies the masking polynomial $R$ in practice.} In case $p\\equiv 1, 7 \\pmod 8$, we can add a check to both the protocol and the simulator to abort if $2c^2 -1 =0$. This does not affect the proof of zero knowledge, and only reduces the soundness error by a small amount.\\footnote{If one is willing to perform a check like this, we can simplify the masking polynomial $R$ to be multilinear. 
The reduced matrix will be the first 4 columns of the matrix shown above, and it has full rank if $c\\neq 2^{-1}$.}\n\t\n\t\n\tBecause of the full rank of the matrix, the four evaluations are linearly independent and uniformly distributed, as $a_0,\\ldots, a_6$ are chosen randomly. In the ideal world, $R^*(u_1,c)$ and $R^*(v_1,c)$ are independent and uniformly distributed, and $\\dot{V}^*(u), \\dot{V}^*(v)$ are randomly selected subject to a linear constraint (step 5(c)), just as in the real world. Therefore, they are indistinguishable in the two worlds, which completes the proof.   \n\t\n\t\n\t\n\\end{proof}\n\n\n\\subsection{Zero knowledge VPD}\\label{subsec::our_zkvpd}\nIn this section, we present the instantiations of the zkVPD protocol, as defined in Definition~\\ref{def::zkvpd}. For every intermediate layer $i$, we use the same zkVPD protocol as proposed by Zhang et al. in~\\cite{zkvpd} to commit and open the masking polynomials $g_i(x), R_i(z_1, w)$. In fact, as shown in the previous sections, these polynomials are very small ($g_i$ is a sum of univariate polynomials and $R_i$ has 2 variables with variable degree 2), so the zkVPD protocols become very simple. The complexity of $\\KeyGen, \\Commit, \\Open, \\Verify$ and proof size are all $O(s_i)$ for $g_i$ and are all $O(1)$ for $R_i$. We omit the full protocols due to space limitations.\n\n\n\nFor the zkVPD used for the input layer, we design a customized protocol based on the zkVPD protocol in~\\cite{zkvpd}. Recall that at the end of the GKR protocol, $\\V$ receives two evaluations of the polynomial $\\dot{V}_d(z)=\\tV_d(z)+Z_d(z)\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$ at $z=u^{(d)}$ and $z=v^{(d)}$. In our zero knowledge proof protocol, which will be presented in Section~\\ref{subsec::zkall}, $\\P$ commits to $\\dot{V}_d(z)$ using the zkVPD at the beginning, and opens it at the two points selected by $\\V$.\n\nThe protocol in~\\cite{zkvpd} works for any polynomial with $\\ell$ variables and any variable degree, and is particularly efficient for multilinear polynomials. We modify the protocol for our zero-knowledge proof scheme and preserve its efficiency. Note that though $\\dot{V}_d(z)$ is a low degree extension of the input, it can be decomposed into the sum of $\\tV_d(z)$, a multilinear polynomial, and $Z_d(z)\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$. Moreover, $Z_d(u^{(d)})$ and $Z_d(v^{(d)})$ can be computed directly by $\\V$. Therefore, in our construction, $\\P$ commits to $\\tV_d(z)$ and $\\sum\\nolimits_{w \\in\\binary} R_d(z_1,w)$ separately, and later opens the sum together given $Z_d(u^{(d)})$ and $Z_d(v^{(d)})$, which is naturally supported because of the homomorphic property of the commitment. Another optimization is that unlike other layers of the circuit, $R_d(z_1,w)$ itself is not opened at two points ($\\V$ does not receive $R_d(u_1^{(d)},c^{(d)})$ and $R_d(v_1^{(d)},c^{(d)})$ in Construction~\\ref{construction:zkgkr}). Therefore, it suffices to set $\\dot{V}_d(z)=\\tV_d(z)+Z_d(z)R_d(z_1)$, where $R_d$ is a univariate linear polynomial. The full protocol is presented in Construction~\\ref{construction::zkVPD}.\n\n\\begin{figure}[t!]\n\t\\small{\n\t\t\\centering{\\centering\n\t\t\t\\framebox{\\parbox{.99\\linewidth}{\n\t\t\t\t\t\\begin{construction}\n\t\t\t\t\t\t\\label{construction::zkVPD}\n\t\t\t\t\t\tLet $\\mathbb{F}$ be a prime-order finite field. 
Let $\\dot{V}(x): \\F^\\ell\\rightarrow \\F$ be an $\\ell$-variate polynomial such that $\\dot{V}(x) = \\tV(x)+Z(x)R(x_1)$, where $\\tV(x)$ is a multilinear polynomial, $Z(x) = \\prod_{i=1}^\\ell x_i(1-x_i)$ and $R(x_1) = a_0+a_1x_1$. \n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t\t\\begin{itemize}\n\t\t\t\t\t\t\t\\item $(\\pp, \\vp) \\leftarrow \\KeyGen(1^\\lambda, \\ell)$: Select $\\alpha, t_1, t_2, \\ldots, t_\\ell, t_{\\ell+1} \\in \\F$ uniformly at random, run $\\mathsf{bp} \\leftarrow \\mathsf{BilGen}(1^{\\lambda})$ and compute $\\pp = (\\mathsf{bp},g^\\alpha, g^{t_{\\ell+1}}, g^{\\alpha t_{\\ell+1}}, \\{g^{\\prod_{i\\in W}t_i}, g^{\\alpha\\prod_{i\\in W}t_i}\\}_{W\\in \\mathcal{W}_\\ell})$, where $\\mathcal{W}_\\ell$ is the set of all subsets of $\\{1,\\ldots, \\ell\\}$. Set $\\vp = (\\mathsf{bp},g^{t_1}, \\ldots, g^{t_{\\ell+1}}, g^\\alpha)$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\item $\\comm \\leftarrow \\Commit(\\dot{V}, r_V, r_R, \\pp)$: Compute $c_1 = g^{\\tV(t_1, t_2, \\cdots, t_\\ell) + r_Vt_{\\ell+1}}$, $c_2 = g^{\\alpha(\\tV(t_1, t_2, \\cdots, t_\\ell) + r_Vt_{\\ell+1})}$, $c_3 = g^{R(t_1)+r_Rt_{\\ell+1}}$ and $c_4 = g^{\\alpha(R(t_1)+r_Rt_{\\ell+1})}$, and output the commitment $\\comm = (c_1, c_2,c_3,c_4)$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\item $\\{\\accept,\\reject\\}\\leftarrow\\CheckComm(\\comm, \\vp)$: Output $\\accept$ if $e(c_1, g^\\alpha) = e(c_2,g)$ and $e(c_3, g^\\alpha) = e(c_4,g)$. Otherwise, output $\\reject$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\item $(y,\\pi)\\leftarrow\\Open(\\dot{V}, r_V, r_R, u,\\pp)$: Choose $r_1,\\ldots, r_\\ell\\in\\F$ at random, and compute polynomials $q_i$ such that\n\t\t\t\t\t\t\t\\[\n\t\t\t\t\t\t\t\\tV(x)+r_Vx_{\\ell+1}+Z(u)(R(x_1)+r_Rx_{\\ell+1}) -(\\tV(u)+Z(u)R(u_1)) =\n\t\t\t\t\t\t\t\\]\n\t\t\t\t\t\t\t\\[ \\sum_{i=1}^\\ell(x_i-u_i)(q_i(x_i,\\ldots,x_\\ell)+r_ix_{\\ell+1})+x_{\\ell+1}(r_V+r_RZ(u)-\\sum_{i=1}^\\ell r_i(x_i-u_i)).\n\t\t\t\t\t\t\t\\]   \n\t\t\t\t\t\t\tSet $\\pi = (\\{g^{q_i(t_i\\ldots, t_\\ell)+r_it_{\\ell+1}}, g^{\\alpha(q_i(t_i\\ldots, t_\\ell)+r_it_{\\ell+1})}\\}_{i\\in[1,\\ell]},$ $ g^{r_V+r_RZ(u)-\\sum_{i=1}^\\ell r_i(t_i-u_i)}$, $ g^{\\alpha(r_V+r_RZ(u)-\\sum_{i=1}^\\ell r_i(t_i-u_i))})$ and $y = \\tV(u)+Z(u)R(u_1)$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\item $\\{\\accept,\\reject\\}\\leftarrow\\Verify(\\comm,u,y,\\pi,\\vp)$: Parse $\\pi$ as $(\\pi_i,\\pi_{\\alpha i})$ for $i\\in[1,\\ell+1]$. Check $e(\\pi_i,g^\\alpha) = e(\\pi_{\\alpha i},g)$ for $i\\in[1,\\ell+1]$. Check $e(c_1c_3^{Z(u)}/g^y, g) = \\prod_{i=1}^\\ell e(\\pi_i, g^{t_i-u_i}) \\cdot e(g^{\\pi_{\\ell+1}},g^{t_{\\ell+1}})$. Output $\\accept$ if all the checks pass, otherwise, output $\\reject$.\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\\end{itemize} \n\t\t\t\t\t\t\n\t\\end{construction}}}}}\n\\end{figure}\n\n\n\\begin{theorem}\\label{thm:zkvpd}\n\tConstruction~\\ref{construction::zkVPD} is a zero-knowledge verifiable polynomial delegation scheme as defined by Definition~\\ref{def::zkvpd}, under Assumption~\\ref{asp::qSDH} and~\\ref{asp::dlEPKE}.\n\\end{theorem}\n\nThe proof of completeness, soundness and zero knowledge is similar to that of the zkVPD protocol in~\\cite{zkvpd}. We only add an extra univariate linear polynomial $R(x_1)$, which does not affect the proof. We omit the proof due to space limitations. 
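Intuitively, the final pairing check in $\\Verify$ is the polynomial identity used in $\\Open$, evaluated in the exponent at the secret point $(t_1,\\ldots,t_{\\ell+1})$: the left-hand side $e(c_1c_3^{Z(u)}/g^y, g)$ encodes $\\tV(t)+r_Vt_{\\ell+1}+Z(u)(R(t_1)+r_Rt_{\\ell+1})-y$, while the right-hand side encodes the quotient decomposition $\\sum_{i=1}^\\ell(t_i-u_i)(q_i+r_it_{\\ell+1})+t_{\\ell+1}(r_V+r_RZ(u)-\\sum_{i=1}^\\ell r_i(t_i-u_i))$, with all polynomials evaluated at the $t_i$'s. 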
Using the same algorithms proposed in~\\cite{vram,zkvpd}, the running time of $\\KeyGen$, $\\Commit$ and $\\Open$ is $O(2^\\ell)$, $\\Verify$ takes $O(\\ell)$ time and the proof size is $O(\\ell)$.\n\n\n\n\n\n\n\n\\subsection{Putting Everything Together}\\label{subsec::zkall}\n\nIn this section, we present our zero knowledge argument scheme. At a high level, similar to~\\cite{zhang2017vsql,hyrax,zkvpd}, $\\V$ can use the GKR protocol to verify the correct evaluation of a circuit $C$ on input $x$ and a witness $w$, given an oracle access to the evaluation of a polynomial defined by $x,w$ on a random point. We instantiate the oracle using the zkVPD protocol. Formally, we present the construction in Construction~\\ref{construction::all}, which combines our zero knowledge GKR and zkVPD protocols. Similar to the protocols in~\\cite{zkvpd,hyrax}, Steps 6 and 7 check that $\\P$ indeed uses $x$ as the input to the circuit.\n\\begin{figure}[t!]\n\\small{\n\\centering{\\centering\n\\framebox{\\parbox{.99\\linewidth}{\n\\begin{construction}\n%\\begin{protocol}\n\\label{construction::all}\n\tLet $\\lambda$ be the security parameter, $\\F$ be a prime field, $n$ be an upper bound on input size, and $S$ be an upper bound on circuit size. We use $\\VPD_1, \\VPD_2, \\VPD_3$ to denote the zkVPD protocols for the input layer and the masking polynomials $g_i$ and $R_i$ described in Construction~\\ref{construction:zkgkr}.\n\t\\begin{itemize}\n\t\t\\item $\\mathcal{G}(1^\\lambda, n, S)$: run $(\\pp_1,\\vp_1)\\leftarrow\\VPD_1.\\KeyGen(1^\\lambda, \\log n)$, $(\\pp_2,\\vp_2)\\leftarrow\\VPD_2.\\KeyGen(1^\\lambda, \\log S)$, $(\\pp_3,\\vp_3)\\leftarrow\\VPD_3.\\KeyGen(1^\\lambda)$. Output $\\pk = (\\pp_1, \\pp_2, \\pp_3)$ and $\\vk = (\\vp_1, \\vp_2, \\vp_3)$.\n\t\t\n\t\t\\item $\\langle\\P(\\pk,w), \\V(\\vk)\\rangle(x)$: Let $C$ be a layered arithmetic circuit over $\\F$ with $d$ layers, input $x$ and witness $w$ such that $|x|+|w|\\le n$, $|C|\\le S$ and $C(x;w) =1$. Without loss of generality, assume $|w|/|x| = 2^m -1$ for some $m\\in\\N$.\n\t\t\\begin{enumerate}\n\t\t\t\\item $\\P$ selects a random univariate linear polynomial $R_d$ and commits to the input of $C$ by sending $\\comm_d\\leftarrow\\VPD_1.\\Commit(\\dot{V}_d, r_V, r_R, \\pp_1)$ to $\\V$, where $\\tV_d$ is the multilinear extension of the array $(x;w)$ and $\\dot{V}_d(z)=\\tV_d(z)+Z_d(z)R_d(z_1)$.\n\t\t\t\\item $\\V$ runs $\\VPD_1.\\CheckComm(\\comm_d,\\vp_1)$. If it outputs $\\reject$, $\\V$ aborts and outputs $\\reject$.  \n\t\t\t\\item $\\P$ and $\\V$ execute Steps 1-5 of the zero knowledge GKR protocol in Construction~\\ref{construction:zkgkr}, with the zkVPDs instantiated with $\\VPD_2$ and $\\VPD_3$. If Construction~\\ref{construction:zkgkr} rejects, $\\V$ outputs $\\reject$ and aborts. Otherwise, by the end of this step, $\\V$ receives two claims of $\\dot{V}_d$ at $u^{(d)}$ and $v^{(d)}$.\n\t\t\t\\item $\\P$ runs $(y_1,\\pi_1)\\leftarrow\\VPD_1.\\Open(\\dot{V}_d, r_V, r_R,u^{(d)},\\pp_1)$, $(y_2,\\pi_2)\\leftarrow\\VPD_1.\\Open(\\dot{V}_d, r_V, r_R, v^{(d)},\\pp_1)$ and sends $y_1,\\pi_1,y_2,\\pi_2$ to $\\V$.\n\t\t\t\\item $\\V$ runs $\\Verify(\\comm_d,u^{(d)},y_1,\\pi_1,\\vp_1)$ and $\\Verify(\\comm_d,v^{(d)},y_2,\\pi_2,\\vp_1)$ and outputs $\\reject$ if either check fails. 
Otherwise, $\\V$ checks that the two claims of $\\dot{V}_d(u^{(d)})$ and $\\dot{V}_d(v^{(d)})$ received in step 3 equal $y_1$ and $y_2$ respectively, and rejects if either fails.\n\t\t\t\\item $\\V$ computes the multilinear extension of input $x$ at a random point $r_x\\in\\F^{\\log |x|}$ and sends $r_x$ to $\\P$.\n\t\t\t\\item $\\P$ pads $r_x$ with $\\log |w|$ 0s to obtain $r'_x = (r_x, 0^{\\log |w|})$ and sends $\\V$ $(y_x,\\pi_x)\\leftarrow\\VPD_1.\\Open(\\tV_d, r_V, r_R, r'_x,\\pp_1)$. $\\V$ runs $\\Verify(\\comm_d,r'_x,y_x,\\pi_x,\\vp_1)$ and checks that $y_x$ equals the evaluation of the multilinear extension of $x$ at $r_x$. $\\V$ outputs $\\reject$ if the checks fail. Otherwise, $\\V$ outputs $\\accept$.\n\t\t\\end{enumerate}\n\t\t\n\t\\end{itemize}\n%\\end{protocol}\n\\end{construction}}}}}\n%\\vspace{-0.2in}\n\\end{figure}\n\\begin{theorem}\\label{theorem:main}\n\tFor an input size $n$ and a finite field $\\F$, Construction~\\ref{construction::all} is a zero knowledge argument for the relation\n\t\\[\n\t\\mathcal{R} = \\{(C,x;w): C\\in\\mathcal{C}_\\F\\wedge|x|+|w|\\le n\\wedge C(x;w) = 1\\},\n\t\\]\n\tas defined in Definition~\\ref{def::zkp}, under Assumption~\\ref{asp::qSDH} and~\\ref{asp::dlEPKE}. Moreover, for every $(C,x;w)\\in\\mathcal{R}$, the running time of $\\P$ is $O(|C|)$ field operations and $O(n)$ multiplications in the base group of the bilinear map. The running time of $\\V$ is $O(|x|+d\\cdot\\log |C|)$ if $C$ is log-space uniform with $d$ layers. $\\P$ and $\\V$ interact in $O(d\\log |C|)$ rounds and the total communication (proof size) is $O(d\\log |C|)$. In case $d$ is $\\mathsf{polylog}(|C|)$, the protocol is a succinct argument.\n\\end{theorem}\n\n\\paragraph{Proof Sketch.} The correctness and the soundness follow from those of the two building blocks, zero knowledge GKR and zkVPD, by Theorem~\\ref{thm:zkgkr} and~\\ref{thm:zkvpd}.\n\nTo prove zero knowledge, consider a simulator $\\S$ that calls the simulator $\\S_{GKR}$ of zero knowledge GKR given in Section~\\ref{subsec::zkgkr} as a subroutine, which simulates the partial view up to the input layer. At the input layer, the major challenge is that $\\S$ committed to (a randomly chosen) $\\dot{V}^*_d$ at the beginning of the protocol, before knowing the points $u^{(d)}, v^{(d)}$ to evaluate on. If $\\S$ opens the commitment honestly, with high probability the evaluations are not consistent with the last message of the GKR (sumcheck in layer $d-1$) and a malicious $\\V^*$ can distinguish the ideal world from the real world. In our proof, we resolve this issue by using the simulator $\\S_{VPD}$ of our zkVPD protocol. Given the trapdoor $\\mathsf{trap}$ used in $\\KeyGen$, $\\S_{VPD}$ is able to open the commitment to any value in zero knowledge, and in particular it opens to those messages that are consistent with the GKR protocol in our scheme, which completes the construction of $\\S$. %We defer the formal proof to the full version of the paper. \n\nThe complexity of our zero knowledge argument scheme follows from our new GKR protocol with linear prover time, and the complexity of the zkVPD protocol for the input layer analyzed in Section~\\ref{subsec::our_zkvpd}. 
The masking polynomials $g_i, R_i$ and their commitments and openings introduce no asymptotic overhead and are efficient in practice.\n\n\n\n", "meta": {"hexsha": "ae4cb9f67ade66a7689b4a6937e692a30550be47", "size": 38082, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/eprint/zkgkr.tex", "max_stars_repo_name": "niconiconi/Libra", "max_stars_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2020-01-05T12:05:57.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-23T16:18:40.000Z", "max_issues_repo_path": "paper/eprint/zkgkr.tex", "max_issues_repo_name": "niconiconi/Libra", "max_issues_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-10T17:15:38.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-11T16:14:46.000Z", "max_forks_repo_path": "paper/eprint/zkgkr.tex", "max_forks_repo_name": "niconiconi/Libra", "max_forks_repo_head_hexsha": "d8b4bebd70c1b0681fdecb66fbadeccb3f9d926e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2020-01-31T05:53:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-02T14:05:43.000Z", "avg_line_length": 105.7833333333, "max_line_length": 1219, "alphanum_fraction": 0.6520140749, "num_tokens": 14115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.828938825225204, "lm_q2_score": 0.6654105454764747, "lm_q1q2_score": 0.5515846358597312}}
{"text": "\\subsection{Binary operators}\nThis section decribes all the functions in the file ``ptwXY\\_binaryOperators.c''.\n\n\\subsubsection{ptwXY\\_slopeOffset}\nThis function applies the math operation ( $y_i$ = slope $\\times$ $y_i$ + offset ) to the y-values of \\highlight{ptwXY}.\n\\setargumentNameLengths{offset}\n\\CallingC{fnu\\_status ptwXY\\_slopeOffset(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double slope,}\n    \\addArgument{double offset );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{slope}{The slope.}\n    \\argumentBox{offset}{The offset.}\n    \\vskip 0.05 in \\noindent\n\n\\subsubsection{ptwXY\\_add\\_double}\nThis function applies the math operation ( $y_i$ = $y_i$ + offset ) to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_add\\_double(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double offset );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{offset}{The offset.}\n\n\\subsubsection{ptwXY\\_sub\\_doubleFrom}\nThis function applies the math operation ( $y_i$ = $y_i$ - offset ) to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_sub\\_double(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double offset );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{offset}{The offset.}\n\n\\subsubsection{ptwXY\\_sub\\_fromDouble}\nThis function applies the math operation ( $y_i$ = offset - $y_i$ ) to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_sub\\_fromDouble(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double offset );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{offset}{The offset.}\n\n\\subsubsection{ptwXY\\_mul\\_double}\nThis function applies the math operation ( $y_i$ = slope $\\times$ $y_i$ ) to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_mul\\_double(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double slope );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{slope}{The slope.}\n\n\\subsubsection{ptwXY\\_div\\_doubleFrom}\nThis function applies the math operation ( $y_i$ = $y_i$ / divisor ) to the y-values of \\highlight{ptwXY}.\n\\CallingC{fnu\\_status ptwXY\\_div\\_doubleFrom(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double divisor );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{divisor}{The divisor.}\nIf \\highlight{divisor} is zero, the error \\highlight{nfu\\_divByZero} is returned.\n\n\\subsubsection{ptwXY\\_div\\_fromDouble}\nThis function applies the math 
operation ( $y_i$ = dividend / $y_i$ ) to the y-values of \\highlight{ptwXY}.\n\\setargumentNameLengths{dividend}\n\\CallingC{fnu\\_status ptwXY\\_div\\_fromDouble(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double dividend );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{dividend}{The dividend.}\n    \\vskip 0.05 in \\noindent\nThis function does not handle safe division (see Section~\\ref{ptwXYdivptwXY}). One way to do safe division is\nto use the function \\highlight{ptwXY\\_valueTo\\_ptwXY} to convert the \\highlight{dividend} value to a \\highlight{ptwXYPoints} object and\nthen use \\highlight{ptwXY\\_div\\_ptwXY}.\n\n\\subsubsection{ptwXY\\_mod}\n\\setargumentNameLengths{ptwXY}\nThis function gives the remainder of $y_i$ divided by $m$. That is, it sets \\highlight{ptwXY}'s y-values to \n\\begin{equation}\n    y_i = {\\rm mod}( y_i, m ) \\ \\ \\ \\ .\n\\end{equation}\n\\setargumentNameLengths{pythonMod}\n\\CallingC{fnu\\_status ptwXY\\_mod(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY,}\n    \\addArgument{double m,}\n    \\addArgument{int pythonMod );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY}{A pointer to the \\highlight{ptwXYPoints} object.}\n    \\argumentBox{m}{The modulus.}\n    \\argumentBox{pythonMod}{Controls whether the Python or C form of mod is implemented.}\n    \\vskip 0.05 in \\noindent\nPython's and C's mod functions act differently for negative values. If \\highlight{pythonMod} is true, the Python form is executed;\notherwise, the C form is executed.\n\n
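As an illustration, the following minimal C sketch chains several of the functions above. It is a hypothetical usage example, not library code: the helpers \\highlight{setupSMR} and \\highlight{makePoints} are placeholders for application code that initializes the \\highlight{statusMessageReporting} instance and builds a \\highlight{ptwXYPoints} object.\n\\begin{verbatim}\n/* Hypothetical usage sketch; setupSMR() and makePoints() are\n   placeholder helpers, not part of the library. */\nstatusMessageReporting *smr = setupSMR( );\nptwXYPoints *ptwXY = makePoints( smr );\nfnu_status status;\n\nstatus = ptwXY_slopeOffset( smr, ptwXY, 2.0, 1.0 ); /* y_i = 2 * y_i + 1 */\nstatus = ptwXY_add_double( smr, ptwXY, -0.5 );      /* y_i = y_i - 0.5 */\nstatus = ptwXY_mod( smr, ptwXY, 3.0, 1 );           /* y_i = mod( y_i, 3 ),\n                                                       Python style */\n/* Each call returns an fnu_status; on failure the error is\n   recorded in the statusMessageReporting instance smr. */\n\\end{verbatim}\n\n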
and $s_{12}$ to 0.\n\\setargumentNameLengths{ptwXY1}\n\\CallingC{ptwXYPoints *ptwXY\\_binary\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2,}\n    \\addArgument{double s1,}\n    \\addArgument{double s2,}\n    \\addArgument{double s12 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{s1}{The value $s_1$.}\n    \\argumentBox{s2}{The value $s_2$.}\n    \\argumentBox{s12}{The value $s_{12}$.}\n\n\\subsubsection{ptwXY\\_add\\_ptwXY}\nThis function adds two \\highlight{ptwXYPoints} objects and returns the result as a new \\highlight{ptwXYPoints} object\n(i.e., it calls ptwXY\\_binary\\_ptwXY with $s_1 = s_2 = 1.$ and $s_{12} = 0.$).\n\\CallingC{ptwXYPoints *ptwXY\\_add\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n\n\\subsubsection{ptwXY\\_sub\\_ptwXY}\nThis function subtracts one \\highlight{ptwXYPoints} objects from another, and returns the result as a new \\highlight{ptwXY} object\n(i.e., it calls ptwXY\\_binary\\_ptwXY with $s_1 = 1.$, $s_2 = -1.$ and $s_{12} = 0.$).\n\\CallingC{ptwXYPoints *ptwXY\\_sub\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object which is the minuend.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object which is the subtrahend.}\n\n\\subsubsection{ptwXY\\_mul\\_ptwXY}\nThis function multiplies two \\highlight{ptwXYPoints} objects and returns the result as a new \\highlight{ptwXY} object\n(i.e., it calls ptwXY\\_binary\\_ptwXY with $s_1 = s_2 = 0.$ and $s_{12} = 1.$).  
This function does not infill.\n\\CallingC{ptwXYPoints *ptwXY\\_mul\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n\n\\subsubsection{ptwXY\\_mul2\\_ptwXY}\nThis function multiplies two \\highlight{ptwXYPoints} objects and returns the result as a new \\highlight{ptwXY} object.\nUnlike \\highlight{ptwXY\\_mul\\_ptwXY}, this function will infill to obtain the desired accuracy.\n\\CallingC{ptwXYPoints *ptwXY\\_mul2\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2 );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n\n\\subsubsection{ptwXY\\_div\\_ptwXY} \\label{ptwXYdivptwXY}\nThis function divides two \\highlight{ptwXYPoints} objects and returns the result as a new \\highlight{ptwXY} object.\n\\setargumentNameLengths{safeDivide}\n\\CallingC{ptwXYPoints *ptwXY\\_div\\_ptwXY(}{statusMessageReporting *smr,\n    \\addArgument{ptwXYPoints *ptwXY1,}\n    \\addArgument{ptwXYPoints *ptwXY2,}\n    \\addArgument{int safeDivide );}}\n    \\argumentBox{smr}{The \\highlight{statusMessageReporting} instance to record errors.}\n    \\argumentBox{ptwXY1}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{ptwXY2}{A pointer to a \\highlight{ptwXYPoints} object.}\n    \\argumentBox{safeDivide}{If true safe division is performed.}\n", "meta": {"hexsha": "9b2c2f8135704c7227ab66dd0891a1d749d7b5aa", "size": 9681, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "numericalFunctions/Doc/ptwXY_binaryOperators.tex", "max_stars_repo_name": "Mathnerd314/gidiplus", "max_stars_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_stars_repo_licenses": ["MIT-0", "MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-08-29T23:46:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T10:16:25.000Z", "max_issues_repo_path": "numericalFunctions/Doc/ptwXY_binaryOperators.tex", "max_issues_repo_name": "Mathnerd314/gidiplus", "max_issues_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_issues_repo_licenses": ["MIT-0", "MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-04T16:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-01T01:54:34.000Z", "max_forks_repo_path": "numericalFunctions/Doc/ptwXY_binaryOperators.tex", "max_forks_repo_name": "Mathnerd314/gidiplus", "max_forks_repo_head_hexsha": "ed4c48ab399a964fe782f73d0a065849b00090bb", "max_forks_repo_licenses": ["MIT-0", "MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-03T22:41:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T22:54:43.000Z", "avg_line_length": 56.6140350877, "max_line_length": 135, "alphanum_fraction": 0.7498192336, "num_tokens": 3090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8289387914176258, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.551584624364111}}
{"text": "\\section{Quantum statistical mechanics}\n\n\\newcommand{\\bra}[1]{\\langle #1 |}\n\\newcommand{\\ket}[1]{| #1 \\rangle}\n\\newcommand{\\braket}[2]{\\langle #1 | #2\\rangle}\n\nIn this section we want to extend the classical statistical mechanics concepts that we have seen so far to quantum mechanical systems. When we do so, we will find that two particular properties of quantum systems --- the (anti-)symmetry of the wave function for fermions (resp. bosons), and the indistinguishability of particles --- have consequences for the statistics of the ensembles that they are part of.\n\nIn quantum mechanics we already have two layers of uncertainty about the state of a system: the Heisenberg uncertain principle $\\Delta{p}\\Delta {r}>{\\hbar}/{2}$ means that rather than having a discrete point in phase space to describe the state of a system, the most precision we can hope for is some small region in phase space, centered on that point. Quantum mechanics also deals with probability distributions itself: the wave function $\\psi_n({\\bf r})$ (or rather $|\\psi_n|^2$) describes the probability distribution of a particle being in a particular quantum state.\n\nStatistical mechanics adds an extra layer of probability to this situation in order to deal with ensembles that have probabilities $p_n$ of being in a variety of quantum states $\\psi_n$. These quantum ensembles are called \\emph{mixed states}. They are distinct from the superposition of quantum states. The superposition of two wave functions implies that the system is in a state comprised of both wave functions simultaneously, while the mixed state says that the system could be in one state or the other, or perhaps both. For example: if $\\ket{H}$\nis the wave function for a horizontally polarized photon and $\\ket{V}$ is the wave function for a vertically polarised photon, then the superposition $\\frac{1}{\\sqrt{2}}(\\ket{H}+\\ket{V})$ is a diagonally polarised photon, while the mixed state is an unpolarised photon that is half $\\ket{H}$ and half $\\ket{V}$ that could be in either state:  i.e. $\\frac12\\left(\\ket{H}\\bra{H}+\\ket{V}\\bra{V}\\right)$.\n\nGiven a quantum system in a particular state, represented by the wave function $\\psi_n$, the quantum expectation of an operator $\\hat{A}$ is given by\n$$\n\t\\langle\\hat{A}\\rangle_\\text{QM} = \\int\\psi^*_n\\hat{A}\\psi_n.\n$$\n(I'll try to always use hats to denote quantum operators, but I'll probably forget them from time-to-time.)\n\nThis is sometimes also denoted as $\\langle\\hat{A}\\rangle_\\text{pure}$ as it applies to a purely quantum state, where no possibilities of different statistical mechanical configurations exist.\n\nThe statistical mechanical expectation operator for an ensemble is given by\n$$\n\t\t\\langle\\hat{A}\\rangle = \\sum_n p_n\\int\\psi_n^*\\hat{A}\\psi_n.\n$$\nQuantum statistical mechanics is concerned with figuring out how the properties of $\\psi_n$ affect the expected value of $\\hat{A}$.\n\n\\subsection{A refresher: bra-ket notation}\nThe bra-ket notation, popularised by Dirac, gives us an extremely convenient and compact way of dealing with elements of a vector space (in QM, a Hilbert space) in various combinations. 
It doesn't introduce any new physics but it does let us, for example, use the same notation for an inner product of two quantities, whether they are finite-dimensional vectors or infinite-dimensional functions.\n\n$\\ket{v}$ by itself (pronounced ket $v$) is just the column vector (or function) $v$.\n\n$\\bra{u}$ by itself (pronounced bra $u$) is interpreted as the conjugate transpose of $\\ket{u}$, i.e. $\\bra{u}=\\ket{u}^\\dagger$.\n\nThe notation becomes particularly convenient when we want to calculate inner products\n$$\n\t\\braket{u}{v} = u^*v\n$$\nfor finite-dimensional vectors or \n$$\n\t\\braket{u}{v} =\t\\int d{\\bf r}u^*({\\bf r})v({\\bf r})\n$$\nfor functions.\n\nSimilarly, the outer product can be denoted \n\n$$\n\\ket{u}\\bra{v}=uv^* =\n\\begin{bmatrix}\n    u_1v^*_1 & u_1v^*_2  & \\dots \\\\\n    u_2v^*_1 & u_2v^*_2  & \\dots   \\\\\n    \\vdots & \\vdots & \\ddots \n\\end{bmatrix}.\n$$\n\nFor an operator $\\hat{A}$ acting on a ket $\\ket{v}$ we have $\\hat{A}\\ket{v} = \\ket{\\hat{A}v}$ and hence \n$$\n\t\\bra{u}\\hat{A}\\ket{v}=\\braket{u}{\\hat{A}v} = u^*\\hat{A}v\n$$\nOr, equivalently,  \n$$\n\t\\braket{u}{\\hat{A}v} = \\int d{\\bf r} u^*({\\bf r}) \\hat{A} v({\\bf r}).\n$$\n\nBra-ket notation has a number of convenient properties; one that we will make use of shortly is that closed bra-ket sets commute with each other. This is easy to see by considering the vector case where a closed bra-ket pair is just a scalar.\n\n\\subsection*{The density matrix/density operator}\nSimilar to the way that the single particle density is a convenient way of introducing a probability density into classical statistical mechanics, the density matrix $\\rho = \\sum_n p_n\\ket{\\psi_n}\\bra{\\psi_n}$ (or the density operator $\\hat{\\rho}$ in the case of functions) allows us to consider a probability distribution over an ensemble of quantum states.\n\nTo see how the density matrix arises, we will begin with the expression we had earlier for the ensemble expectation of an observable corresponding to some operator $\\hat{A}$, which we can now write in bra-ket notation.\n\\begin{equation}\n\t\\langle\\hat{A}\\rangle = \\sum_n p_n\\bra{\\psi_n}\\hat{A}\\ket{\\psi_n}.\n\t\\label{eq:7.2}\n\\end{equation}\n(We are going to assume throughout this section that all the vectors/wave functions that we deal with have already been normalised.) Now, if $\\{\\phi_a\\}$ is any orthonormal basis then the identity operator is\n$$\n\tId = \\sum_a\\ket{\\phi_a}\\bra{\\phi_a}.\n$$\nIf we insert this into equation (\\ref{eq:7.2}) for the expected value of $\\hat{A}$ we get\n\\begin{eqnarray*}\n\t\\langle\\hat{A}\\rangle &=& \\sum_n p_n\\bra{\\psi_n} \\left(\\sum_a\\ket{\\phi_a}\\bra{\\phi_a}\\right) \\hat{A}\\ket{\\psi_n}\\\\\n\t&=& \\sum_n p_n\\sum_a\\bra{\\phi_a}\\hat{A}\\ket{\\psi_n}\\braket{\\psi_n}{\\phi_a}\\\\\n\t&=& \\sum_a \\bra{\\phi_a}\\hat{A} \\left(\\sum_n p_n \\ket{\\psi_n}\\bra{\\psi_n}\\right)\\ket{\\phi_a}\\\\\n\t&=& \\sum_a \\bra{\\phi_a}\\hat{A}\\rho\\ket{\\phi_a}\\\\\n\t&=& \\text{Tr}(\\hat{A}\\rho)\n\\end{eqnarray*}\nwhere $\\text{Tr}(M)$ denotes the trace of $M$ (in the case of a matrix, this is the sum over the diagonal elements).\nNormalisation means that we have $\\text{Tr}(\\rho)=1$ and $\\sum_n p_n=1$.\n\n
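As a quick sanity check of the trace formula (a worked example of my own, not one from the references): take the unpolarised photon from earlier, $\\rho = \\frac12\\left(\\ket{H}\\bra{H}+\\ket{V}\\bra{V}\\right)$, and the horizontal-polarisation projector $\\hat{A}=\\ket{H}\\bra{H}$. Then\n$$\n\t\\langle\\hat{A}\\rangle = \\text{Tr}(\\hat{A}\\rho) = \\frac12\\bra{H}\\hat{A}\\ket{H} + \\frac12\\bra{V}\\hat{A}\\ket{V} = \\frac12\\cdot1 + \\frac12\\cdot0 = \\frac12,\n$$\nexactly the half of the photons we would expect to pass a horizontal polariser.\n\n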
Because we are dealing with a quantum system, rather than fixing the total energy of the system we fix the interval $(E,E+\\epsilon)$ where $\\epsilon$ is some (small) uncertainty. We will denote by $E_j$ the (discrete) energy eigenstates that correspond to the Hamiltonian operator $\\hat{H}$.\n\nThe system has distinct accessible micro-states that correspond to the macro-state $(N,V,E;\\epsilon)$. The number of distinct accessible states is given by counting over the phase space to get $\\Gamma(N,V,E;\\epsilon)$. We are going to assume that the probability of the system being in any of the energy states in the interval is equal --- this is the so-called \\emph{equal a priori postulate}.\n\nThe probability of the system being in a particular micro-state $E_j$ is therefore\n$$\n\tp(E_j) =\n\t\\begin{cases}\n\t\t\\frac{1}{\\Gamma} \\text{ if } E<E_j<E+\\epsilon\\\\\n\t\t0 \\text{ otherwise}\n\t\\end{cases}\n$$\nand, if we choose a basis such that the Hamiltonian operator $\\hat{H}$ is diagonal, then the density matrix is the diagonal matrix\n$$\n\t\\rho = \\sum_j p(E_j) \\ket{E_j}\\bra{E_j}.\n$$\n\nThe entropy is given by\n$$\n\tS(E) = k_B\\ln\\Gamma(E).\n$$\nAt this point, it's worth noting that when we count the states that comprise $\\Gamma$ we must do so in a quantum fashion, i.e. taking account of the indistinguishability of the quantum particles.\n\nWe can also write this in a form similar to that for the Gibbs entropy that we saw in an earlier section:\n$$\n\tS(E) = -k_b\\sum_jp(E_j)\\ln(p(E_j))=-k_B\\text{Tr}(\\rho\\ln\\rho).\n$$ \n\n\\subsection{Quantum canonical ensemble}\nHere we take:\n$$\n\tp(E_j) = \\frac{1}{Z}\\exp\\left(\\frac{-1}{k_BT}E_j\\right)\n$$\nand\n$$\n\tZ = \\sum_j \\exp\\left(\\frac{-1}{k_BT}E_j\\right).\n$$\n\nAgin, if we assume, that we have picked a basis where $\\hat{A}$ is diagonal and has energy eigenstates $E_j$ (i.e. $\\hat{H}\\ket{E_j} = E_j\\ket{E_j}$) then, we can write\n$$\n\t\\rho = \\frac{1}{Z}\\exp\\left(\\frac{-1}{k_BT}\\hat{H}\\right)\n$$\nwhere\n$$\n\tZ = \\text{Tr}\\left[\\exp\\left(\\frac{-1}{k_BT}\\hat{H}\\right)\\right].\n$$\n\n\\subsection{Quantum grand-canonical ensemble}\nThe density matrix is given by\n$$\n\t\\rho = \\frac{1}{Z}\\exp\\left(\\frac{-1}{k_BT}(\\mu\\hat{N}-\\hat{H})\\right)\n$$\nwith the corresponding partition function\n$$\n\tZ = \\text{Tr}\\left[\\exp\\left(\\frac{-1}{k_BT}(\\mu\\hat{N}-\\hat{H})\\right)\\right].\n$$\n\n\\subsection{Systems of indistinguishable particles}\nThis is where we will see one of the more interesting consequences of quantum mechanics on statistical mechanics - namely how the symmetry of the wave function for the particles in our system affect the statistics of the system.\n\nWe will consider a system of $N$ identical non-interacting particles --- an ideal quantum gas. 
Since the particles are non-interacting, the overall Hamiltonian $\\hat{H}$ for the system is just the sum of the Hamiltonians for each of the individual particles:\n$$\n\t\\hat{H}({\\bf q},{\\bf p}) = \\sum_i^N\\hat{H_i},\n$$\nwhere $\\hat{H}_i$ is the Hamiltonian of the $i$-th particle.\n\nThe time-independent Schr\\\"odinger equation for the system is\n$$\n\t\\hat{H}\\ket{\\psi_E} = E\\ket{\\psi_E}\n$$\nwhere $E$ is an energy eigenvalue of $\\hat{H}$ and $\\psi_E$ is the corresponding eigenfunction.\n\nWe can use the fact that $\\hat{H}$ is a sum of non-interacting terms to write the solution $\\psi_E$ as a product of the solutions to each of the individual particles: \n$$\n\t\\psi_E = \\prod_{i=1}^Nu_{\\epsilon_i}\n$$\nwhere $\\sum_{i=1}^N\\epsilon_i = E$ and where $u_{\\epsilon_i}$ is the eigenfunction of $\\hat{H_i}$, i.e. $\\hat{H_i}u_{\\epsilon_i}={\\epsilon_i}u_{\\epsilon_i}$.\n\nSince the stationary state of the overall system can be described in terms of the constituent particles, we just need to specify how many particles are in each energy state. We write $n_i$ for the number of particles in the state with energy $\\epsilon_i$ and $\\{n_i\\}$ for the set of all such $n_i$. We therefore have\n$$\n\t\\sum_i n_i =N \\qquad \\text{and} \\qquad \\sum_in_i\\epsilon_i = E.\n$$\n\nThe wave function for the overall state can now be written in terms of the particles in each of the distinct energy eigenstates\n\\begin{equation}\n\t\\psi_E = \\prod_{m=1}^{n_1}u_1(m)\\prod_{m=n_1+1}^{n_1+n_2}u_2(m)\\cdots\n\t\\label{eq:compfn}\n\\end{equation}\nwhere $u_i(m)$ is the single particle wave function $u_{\\epsilon_1}({\\bf q}_m)$.\nThat is, $\\psi_E$ is the product of individual particle eigenfunction, each with degeneracy that corresponds to the number of particles in each energy state.\nWe will refer to this as the Boltzmann wave function, which we will denote $\\psi_\\text{Boltz}$.\n\nSince the particles are identical, it is possible to permute their coordinates in (\\ref{eq:compfn}) while keeping the system in the same state. I.e. the sequence of indices $(1,2,3,\\ldots,N)$ are permuted to $(P1,P2,P3,\\ldots,PN)$ where $P$ denotes mapping an index $m$ to the permuted index $Pm$. The resulting wave function will then be\n\\begin{equation}\n\tP\\psi_E = \\prod_{m=1}^{n_1}u_1(Pm)\\prod_{m=n_1+1}^{n_1+n_2}u_2(Pm)\\cdots.\n\t\\label{eq:permfn}\n\\end{equation}\n\n\nIn classical statistical mechanics, though the particles may be identical, we treat them as being distinguishable. That is permutation of two identical particles corresponds to two different, though identical, micro-states of the system with the same energy. (This is what leads to the necessity of introducing the Gibbs factor in classical statistical mechanics when evaluating the free energy of the canonical ensemble.  The Gibbs factor $\\frac{1}{n_1!n_2!\\cdots}$ allows us to correct for the fact that the particles being permuted are \\emph{effectively} indistinguishable and the permutation doesn't really correspond to a different micro-state of the system.\n\nIn quantum mechanics, the particles are, \\emph{in reality} indistinguishable. I.e. all that matters is the number of particles in each state --- the micro-states that occur from permutation are the same micro-state.\nInterchanging particle coordinates (i.e. the arguments of $u_i$) in the wave function (\\ref{eq:compfn}) should, therefore, not lead to a new micro-state.\n\nHow can we make sure that the wave function $\\psi_E$ doesn't change under any of the above permutations? 
One way is to make $\\psi_E$ be a linear combination of all the $N!$ possible wave functions that can be formed from such permutations such that the probability density for the quantum system stays the same: $|P\\psi|^2=|\\psi|^2$.\n\nThere are two ways to satisfy this condition. The first is when $\\psi$ is symmetric in all its arguments such that $P\\psi = \\psi$ for all possible permutations. The second way is when $\\psi$ is anti-symmetric in its arguments such that\n$$\n\t\\psi = \n\t\\begin{cases}\n\t\t+\\psi \\text{ if $P$ is an even permutation, or}\\\\\n\t\t-\\psi \\text{ if $P$ is an odd permutation.}\n\t\\end{cases}\n$$\n\nWe will denote these solutions as $\\psi_S$ and $\\psi_A$ respectively. They can be written in terms of the Boltzmann wave function (\\ref{eq:compfn}) as\n$$\n\t\\psi_S=c\\sum_P P\\psi_\\text{Boltz}\n$$\nand\n$$\n\t\\psi_A=c\\sum_P \\delta_PP\\psi_\\text{Boltz}\n$$\nwhere $c$ is a normalisation constant and $\\delta_PP$ is $+1$ or $-1$ depending on whether $P$ is an even or odd permutation, respectively.\n\nThe wave function $\\psi_A$ can be written as the \\emph{Slater determinant} \n$$\n\t\\Psi_A = c\\det\\begin{bmatrix}\n\t\tu_i(1) & u_i(2) & u_i(3) & \\dots & u_i(N)\\\\\n\t\tu_j(1) & u_j(2) & u_j(3) & \\dots & u_j(N)\\\\\n\t\tu_k(1) & u_k(2) & u_k(3) & \\dots & u_k(N)\\\\\n\t\t\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\tu_z(1) & u_z(2) & u_z(3) & \\dots & u_z(N)\n\t\\end{bmatrix}\n$$\n\nWhen calculating the terms in the expansion of the determinant, the leading diagonal is the Boltzmann wave function (\\ref{eq:compfn}) while the other terms are permutations of it. The signs for the permuted terms appear automatically in the right order as the determinant is calculated.\n\nExchanging a pair of arguments in the wave function is the same as swapping a pair of columns of the matrix and causes the sign of the contribution to the determinate to change.\n\nIf two or more particles are put into the same energy state, then the matrix has two or more identical rows and the determinant becomes zero --- the wave function vanishes. This implies that an anti-symmetric wave function corresponds to a quantum system where no two particles can be in the same state, i.e. the Pauli exclusion principle that applies to fermions. 
The statistics governing such systems are called \\emph{Fermi-Dirac statistics}.\n\nFor Fermi-Dirac statistics the weight factor $W_{F.D.}\\{n_i\\}$ for the state of the system is given by\n$$\n\tW_{F.D.}\\{n_i\\} = \n\t\\begin{cases}\n\t\t1 \\text{ if } \\sum_i n_i^2 = N\\\\\n\t\t0 \\text{ if } \\sum_i n_i^2 > N\n\t\\end{cases}\n$$\n\n\nFor symmetric wave functions, there is no such restriction on the number of particles per state $n_i$ and the resulting statistics are called \\emph{Bose-Einstein statistics}.\n\nHere the weight factor is\n$$\n\tW_{B.E.}\\{n_i\\} = 1,\\qquad i=0,1,2,\\ldots.\n$$\n\nIt is worth noting that although the properties we derived here all assumed systems of non-interacting particles, the same basic properties hold for systems of interacting particles where, even though the wave function for the system $\\psi({\\bf r})$ can not be expressed in terms of single-particles wave functions $u_i(r_m)$, it will nevertheless be either symmetric or anti-symmetric, resulting in either Bose or Fermi statistics.\n\n\n\n\\subsection{Fermionic and bosonic partition functions}\nWith what we have now covered on non-interacting quantum particles, in combination with what we learnt in the previous section on classical ensembles we are now in a position to introduce the quantum version of the partition function. We wil jump straight to the most commonly used case --- the fermionic and bosonic grand canonical partition functions.\n\n\\subsubsection{Bose-Einstein Statistics}\nFor bosons, all fillings $n_k$ of the energy levels are allowed. Each particle in eigenstate $\\psi_k$ contributes energy $\\epsilon_k$ and chemical potential $\\mu$, so for each particle the partition function is\n$$\n\tZ_k^\\text{boson} = \\sum_{n_k=0}^\\infty\\exp(-\\beta(\\epsilon_k-\\mu)n_k) = \\sum_{n_k=0}^\\infty\\left(\\exp(-\\beta(\\epsilon_k-\\mu))\\right)^n_k = \\frac{1}{1-\\exp(-\\beta(\\epsilon_k-mu))}.\n$$\n\nThe partition function for the ensemble is therefore\n$$\n\tZ^\\text{boson} = \\prod_{k=0} \\frac{1}{1-\\exp(-\\beta(\\epsilon_k-mu))}.\n$$\n\nThe expected number of particles in each state can be calculated as\n$$\n\t\\langle n_k \\rangle^\\text{boson} = -\\frac{\\partial}{\\partial \\mu}(-\\beta\\ln(Z)) = \\frac{1}{\\exp(\\beta(\\epsilon_k-\\mu))-1}.\n$$\n\n\\subsubsection{Fermi-Dirac statistics}\nFor fermions where $n_k=0$ or $n_k=1$ the single particle partition function is\n$$\n\tZ_k^\\text{fermion} = \\sum_{n_k=0}^1 \\exp(-\\beta(\\epsilon_k-\\mu)n_k) = 1 + \\exp(-\\beta(\\epsilon_k-\\mu)),\n$$\nand the partition particle for an ensemble is\n$$\n\tZ^\\text{fermion} = \\prod_k(1+ \\exp(-\\beta(\\epsilon_k-\\mu))).\n$$\nThe expected number of fermions per state is therefore\n$$\n\t\t\\langle n_k \\rangle^\\text{fermion} = \\frac{1}{\\exp(\\beta(\\epsilon_k-\\mu))+1}\n$$\n\n\\subsubsection{Maxwell-Boltzman statistics}\nMaxwell-Boltzman statistics describe the distribution of classical particles, however they are sometimes still useful in the context of quantum systems. (Partly because the maths for them is typically slightly simpler). The Maxwell-Boltzman statistics arise from initially treating particles as being distinguishable, but then dividing the number of micro-states by $N\\!$ to account for the double-counting of states that results from doing this. This transformation to the statistics of indistinguishable particles amounts to the so-called dilute limit of M-B statistics in which F-D and B-E statistics are equivalent. 
The way to think about this is that if you have a large number of states for a small number of particles, even in a classical distribution there is a vanishingly small probability of having more than one particle per state, thus the F-D and B-E statistics are equivalent in this limit.\n\nThe partition function for M-B statistics is the familiar\n$$\n\tZ^{MB}_{ensemb} = \\prod_k \\exp(-\\beta(\\epsilon_k-\\mu))\n$$\nand the expected number of particles in the $k$th energy state is\n$$\n\\langle n\\rangle_{MB} = \\exp(-\\beta(\\epsilon_k-\\mu)).\n$$\n\n\\subsection{An example: the quantum harmonic oscillator}\nWe are going to consider a quantum harmonic oscillator (QHO) with frequency $\\omega$ and energy levels $E_n = (n\\frac12)\\hbar\\omega$.\n\nThe partition function for the QHO is $Z^\\text{QHO} = \\sum_n\\exp(-\\beta E_n)$, where for convenience, I've put $\\frac{1}{k_BT} = \\beta$. Writing this out in full and exploiting the fact that we have a geometric series $\\sum_n x^n = \\frac{1}{1-x}$ we get\n\\begin{eqnarray*}\n\tZ^\\text{QHO} &=& \\sum_{n=0}^\\infty \\exp(-\\beta\\hbar\\omega(n+\\frac12))\\\\\n\t&=& \\exp(-\\beta\\hbar\\omega/2)\\sum_{n=0}^\\infty(\\exp(-\\beta\\hbar\\omega))^n \\\\\n\t&=& \\exp(-\\beta\\hbar\\omega/2)\\frac{1}{1-\\exp(-\\beta\\hbar\\omega)}.\n\\end{eqnarray*}\n\nTo find the average internal energy, think back to when we first introduced the partition function to normalize the probability of a particle being in a state with a particular energy $E_n$.\n\nWe have\n$$\n\t\\langle E\\rangle = \\sum_n E_n P_n = \\frac{\\sum_n E_n \\exp(-\\beta E_n)}{Z} = \\frac{-\\partial Z}{\\partial \\beta}\\frac{1}{Z} = -\\frac{\\partial}{\\partial \\beta}\\ln{Z},\n$$\n\nSo,\n\\begin{eqnarray*}\n\t\\langle E\\rangle^\\text{QHO} &=& -\\frac{\\partial}{\\partial}\\ln(Z^\\text{QHO})\\\\\n\t&=&  -\\frac{\\partial}{\\partial\\beta} \\left[\\ln\\left( \\exp(-\\beta\\hbar\\omega/2)\\frac{1}{1-\\exp(-\\beta\\hbar\\omega)}\\right) \\right] \\\\\n\t&=& \\frac{\\partial}{\\partial \\beta} \\left[ \\beta\\hbar\\omega/2 - \\ln\\left(\\frac{1}{1-\\exp(-\\beta\\hbar\\omega)}\\right)\\right]\\\\\n\t&=& \\frac{\\partial}{\\partial \\beta} \\left[ \\beta\\hbar\\omega/2 + \\ln\\left({1-\\exp(-\\beta\\hbar\\omega)}\\right)\\right]\\\\\n\t&=& \\hbar\\omega/2 + \\frac{\\hbar\\omega\\exp(-\\beta\\hbar\\omega)}{1-\\exp(-\\beta\\hbar\\omega)}\\\\\n\t&=& \\hbar\\omega\\left(\\frac12 +\\frac{1}{\\exp(\\beta\\hbar\\omega)-1}\\right)\n\\end{eqnarray*}\n\nand expected value for the excitation level is $\\langle n\\rangle^\\text{QHO} = 1/(\\exp(\\beta\\hbar\\omega)-1)$.\n\\subsection{Recommended reading}\nAlthough I've made lots of use of \\emph{Statistical Mechanics in a Nutshell} in other sections of the course, I feel like it does particularly poorly with quantum systems. \\emph{Statistical Mechanics made Simple} does better, but it doesn't give much of an overview of the fundamentals. So, for this section I've made use of \\emph{Statistical Mechnaics} by R.K. Pathria, particularly the fundamentals in chapter five. Chapters six through eight give more details on ideal quantum systems and Fermi and Bose systems --- have a look at them too. The other useful text is \\emph{Entropy, Order Parameters and Complexity} by J.P. Sethna, specifically chapter seven. You should read sections 7.6 and 7.7 for some simple examples on bosonic and fermionic systems. 
Finally, chapters five and six of SMmS covers bosonic and fermionic systems well once you've got the fundamentals sorted.\n", "meta": {"hexsha": "a46a28259942c5086b7262ea153a00a6f20c738f", "size": 21263, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "08-quantumStatistics.tex", "max_stars_repo_name": "dvasques83/708Notes2018", "max_stars_repo_head_hexsha": "98db7cf5060553a9f2ba1d92d3e9fe79e9e902b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "08-quantumStatistics.tex", "max_issues_repo_name": "dvasques83/708Notes2018", "max_issues_repo_head_hexsha": "98db7cf5060553a9f2ba1d92d3e9fe79e9e902b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "08-quantumStatistics.tex", "max_forks_repo_name": "dvasques83/708Notes2018", "max_forks_repo_head_hexsha": "98db7cf5060553a9f2ba1d92d3e9fe79e9e902b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.2386706949, "max_line_length": 905, "alphanum_fraction": 0.7313643418, "num_tokens": 6136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8080672227971212, "lm_q2_score": 0.6825737214979745, "lm_q1q2_score": 0.5515654514851639}}
{"text": "\\section{Cost Modelling}\n\nIt turns out Tensorflow does not have any cost model. Atleast there is no such\nthing in the public release. User has to manually place computation on different\ndevices. There has been some work on cost modelling of relational operators for\nGPU operators \\cite{gpudb}. This was relatively easy due to the small number of\noperators. For arbitrary operators, it can possibly be done but would require\nmuch more work.\n\n\\subsection{Selection}\n\nSymbols:\n\n\\begin{itemize}[label={},noitemsep]\n\\item $B_r$ - read bandwidth of global memory\n\\item $B_w$ - write bandwidth of global memory\n\\item $C_r$ - read segment size of global memory\n\\item $C_w$ - write segment size of global memory\n\\item $W$ - number of threads in the thread group\n\\item $\\norm{R}$ - cardinality of table R\n\\item $n$ - number of projected columns\n\\item $K_i$ - attribute size of the ith projected column\n\\item $m$ - number of predicate columns\n\\item $P_i$ - the attribute size of the ith predicate columns\n\\item $r$ - selectivity of the predicates\n\\end{itemize}\n\n\\noindent \\textbf{Old Model}: Scan and write out filter per column with predicate\n\\begin{align*}\nT_1 &= \\sum_{i=1}^{m} ( \\ceil{\\frac{P_i W}{C_r}} + \\ceil{\\frac{4 W}{C_r}} ) \\times \\frac{\\norm{R}}{W} \\times \\frac{C_r}{B_r}  +  \\sum_{i=1}^{m} \\ceil{\\frac{4 W}{C_w}} \\times \\frac{R}{W} \\times \\frac{C_w}{B_w}\n\\end{align*}\nRead the filter and write results to global memory.\n\\begin{align*}\nT_2 &= \\sum_{i=1}^{n} (\\ceil{\\frac{4 W}{C_r}} + \\ceil{\\frac{K_i}{4}}) \\times \\frac{\\norm{R}}{W} \\times \\frac{C_r}{B_r} + \\norm{R} \\times r \\times \\sum_{i=1}^{n} \\ceil{\\frac{K_i}{4}} \\times \\frac{C_w}{B_w}\n\\end{align*}\n\n\\noindent \\textbf{New Model}: Scan all the columns and write out the filter\n\\begin{align*}\nT_1 &= \\sum_{i=1}^{m} \\ceil{\\frac{P_i W}{C_r}} \\times \\frac{\\norm{R}}{W} \\times \\frac{C_r}{B_r}  +  \\ceil{\\frac{4 W}{C_w}} \\times \\frac{R}{W} \\times \\frac{C_w}{B_w}\n\\end{align*}\nRead the filter and write results to global memory.\n\\begin{align*}\nT_2 &= (\\ceil{\\frac{4 W}{C_r}} + \\sum_{i=1}^{n} \\ceil{\\frac{K_i}{4}}) \\times \\frac{\\norm{R}}{W} \\times \\frac{C_r}{B_r} + \\sum_{i=1}^{n} \\ceil{\\frac{r R K_i}{C_w}} \\times \\frac{C_w}{B_w}\n\\end{align*}\n\n\\noindent \\textbf{CPU Model}:\n\n%\\begin{center}\n%\\begin{tabu} to \\textwidth { |X| X| X| }\n %\\hline\n %Old Model & New Model & CPU Model \\\\ \n %\\hline\n\n\n\n\n %& cell5 \n\n %& cell6 \\\\  \n %\\hline\n %cell7 & cell8 & cell9 \\\\ \n %\\hline\n%\\end{tabu}\n%\\end{center}\n", "meta": {"hexsha": "495a28c9fc69a7c30ad8461b4e6efe88187148c4", "size": 2442, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/cost-modelling.tex", "max_stars_repo_name": "anilshanbhag/GPUDB", "max_stars_repo_head_hexsha": "2af1466facb2b7777c5e3e9b02651ee1fc33bb38", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2016-10-11T17:47:47.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-20T12:33:09.000Z", "max_issues_repo_path": "paper/cost-modelling.tex", "max_issues_repo_name": "anilshanbhag/GPUDB", "max_issues_repo_head_hexsha": "2af1466facb2b7777c5e3e9b02651ee1fc33bb38", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-10-19T09:32:32.000Z", "max_issues_repo_issues_event_max_datetime": "2016-10-19T20:03:59.000Z", "max_forks_repo_path": "paper/cost-modelling.tex", 
"max_forks_repo_name": "anilshanbhag/GPUDB", "max_forks_repo_head_hexsha": "2af1466facb2b7777c5e3e9b02651ee1fc33bb38", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-01-28T14:48:04.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-28T02:07:00.000Z", "avg_line_length": 37.5692307692, "max_line_length": 208, "alphanum_fraction": 0.6752661753, "num_tokens": 861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.837619979547273, "lm_q2_score": 0.6584175139669997, "lm_q1q2_score": 0.5515036645826046}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{enumitem}\n\\usepackage{graphicx}\n\\usepackage{listings}\n\\usepackage{color} %red, green, blue, yellow, cyan, magenta, black, white\n\n\\title{Assignment1}\n\\author{Jakub Slowinski}\n\\begin{document}\n\\begin{itemize}\n\\item Q2.31\n\n\\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue\n\\definecolor{mylilas}{RGB}{170,55,241}\n\\lstset{language=Matlab,%\n    %basicstyle=\\color{red},\n    breaklines=true,%\n    morekeywords={matlab2tikz},\n    keywordstyle=\\color{blue},%\n    morekeywords=[2]{1}, keywordstyle=[2]{\\color{black}},\n    identifierstyle=\\color{black},%\n    stringstyle=\\color{mylilas},\n    commentstyle=\\color{mygreen},%\n    showstringspaces=false,%without this there will be a symbol in the places where there is a space\n    numbers=left,%\n    numberstyle={\\tiny \\color{black}},% size of the numbers\n    numbersep=9pt, % this defines how far the numbers are from the text\n    emph=[1]{for,end,break},emphstyle=[1]\\color{red}, %some words to emphasise\n    %emph=[2]{word1,word2}, emphstyle=[2]{style},    \n}\n\n\n\\section*{Matlab Code}\n\n\\lstinputlisting{Determinant.m}\n\\includegraphics{deta1.png}\n\\\\\n\\includegraphics{detbi.png}\n\\\\\n\n\\item Q3.2\n  \\\\ root of f(x) = x - $2e^{-x}$\n  \\begin{enumerate}[label=(\\alph*)]\n  \\item Bisection method\n  \\[ a=0, b=1\\]\n  \\[f(0)=0-2e^0=-2\\]\n  \\[f(1)=1-2e^{-1}=0.2642 \\]\n  \\\\First iteration:\n  \\[ x_1 = \\frac{a+b}{2} = \\frac{0+1}{2} = 0.5 \\]\n  \\[f(x_1)=0.5 - 2e^{-0.5}=-0.7 \\]\n  \\\\ as this is a negative number and you need a negative value and positive value between a and b; a now equals $c_1$ \n  \\\\Second iteration:\n  \\[ a=0.5, b=1\\]\n  \\[ x_2 = \\frac{0.5+1}{2}=0.75 \\]\n  \\[f(x_2)=0.75 - 2e^{-0.75}=-0.1947 \\]\n  \\\\ as this is a negative number, it replaces the previous negative number, a\n  \\\\Third iteration:\n  \\[ a=0.75, b=1\\]\n  \\[ x_3 = \\frac{0.75+1}{2}=0.875 \\]\n  \\[f(x_3)=0.875 - 2e^{-0.875}=0.04 \\]\n  \\\\ as this is a positive number, this is the new b\n  \\\\Final iteration:\n  \\[ a=0.75, b=0.875\\]\n  \\[ x_4 = \\frac{0.75+0.875}{2}=0.8125 \\]\n  \n  \\item Secant method\n  \\[x_1 = 0, x_2=1 \\]\n  \\[f(x_1)= 0-2e^0=-2 \\]\n  \\[f(x_2)= 1-2e^{-1}=0.2642\\]\n  \\\\First iteration\n  \\[x_3 = x_2 - f(x_2) * \\frac{x_2-x_1}{f(x_2)-f(x_1)}\\]\n  \\[ x_3= 1-0.2642*\\frac{1-0}{0.2642+2} =0.8516 \\]\n  \\[f(x_3)= 0.8833-2e^{-0.8833}=0.0565 \\]\n  \\\\Second iteration\n  \\[x_4 = x_3 - f(x_3) * \\frac{x_3-x_2}{f(x_3)-f(x_2)}\\]\n  \\[x_4=0.8833-0.0565*\\frac{0.8833-1}{0.0565-0.2642} =0.8516\\]  \n  \\[f(x_4)= 0.8516-2e^{-0.8516}=-0.0019 \\]\n  \n  Third iteration\n  \\[x_5 = x_4 - f(x_4) * \\frac{x_4-x_3}{f(x_4)-f(x_3)}\\]\n  \\[x_5=0.8516+0.0019*\\frac{0.8516-0.8833}{-0.019-0.0565} =0.8529\\]  \n  \\[f(x_5)= 0.8529-2e^{-0.8529}=0.00054 \\]\n  \n  Final iteration\n  \\[x_6 = x_5 - f(x_5) * \\frac{x_5-x_4}{f(x_5)-f(x_4)}\\]\n  \\[x_6=0.8529-0.00054*\\frac{0.8529-0.8516}{0.00054+0.0019} =0.85261\\]  \n    \n  \\item Newton's method\n  \\[ f(x) = x-2e^{-x}, x_1=1 \\]\n  \\[ f'(x)=2e^{-x}+1\\]\n  \\[ f(x_1)= 1-2e^{-1}=0.2642\\]\n  \\[ f'(x_1)=2e^{-1}+1=1.7358\\]\n  \n  First iteration\n  \\[ x_2=x_1 - \\frac{f(x_1)}{f'(x_1)}\\]\n  \\[ x_2=1 - \\frac{0.2642}{1.7358}=0.8478\\]\n  \\[ f(x_2)= 0.8478-2e^{-0.8478}=-0.0089\\]\n  \\[ f'(x_2)=2e^{-0.8478}+1=1.8567\\]\n  \n  Second iteration\n  \\[ x_3=x_2 - \\frac{f(x_2)}{f'(x_2)}\\]\n  \\[ x_3=0.8478 - \\frac{-0.0089}{1.8467}=0.8526\\]\n  
\\[ f(x_2)= 0.8526-2e^{-0.8526}=-0.0001\\]\n  \\[ f'(x_2)=2e^{-0.8526}+1=1.8526\\]\n  \n  Third iteration\n  \\[ x_4=x_3 - \\frac{f(x_3)}{f'(x_3)}\\]\n  \\[ x_4=0.8526 - \\frac{-0.00001}{1.8526}=0.8526\\]\n  \\[ f(x_4)= 0.8526-2e^{-0.8526}=-0.0001\\]\n  \\[ f'(x_4)=2e^{0.8526}+1=1.8567\\]\n  \n  Final iteration\n  \\[ x_5=0.8526 - \\frac{-0.00001}{1.8526}=0.8256\\]\n  \\end{enumerate}\n\\item Q4.24\n\\\\\n\\section*{Matlab Code}\n\\lstinputlisting{Inverse.m}\n\\begin{figure}[htbp]\n\\hspace*{-2cm}\n\\includegraphics[scale=0.65]{3i.png}\n\\\\\n\\hspace*{-2cm}\n\\includegraphics[scale=0.65]{3ii.png}\n\\end{figure}\n\n\n\\end{itemize}\n\\end{document}", "meta": {"hexsha": "dc9942d3d0d896189adeddf761236f4a84a98763", "size": 3888, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Year3/CS3081/Assignment1/Assignment1.tex", "max_stars_repo_name": "slow-J/TCD", "max_stars_repo_head_hexsha": "91b5572cc148b284a210f2b89ee3f43295686d48", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-04T11:25:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-04T11:25:32.000Z", "max_issues_repo_path": "Year3/CS3081/Assignment1/Assignment1.tex", "max_issues_repo_name": "sickfila/TCD", "max_issues_repo_head_hexsha": "91b5572cc148b284a210f2b89ee3f43295686d48", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-10-19T20:11:15.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-19T20:11:15.000Z", "max_forks_repo_path": "Year3/CS3081/Assignment1/Assignment1.tex", "max_forks_repo_name": "slow-J/TCD", "max_forks_repo_head_hexsha": "91b5572cc148b284a210f2b89ee3f43295686d48", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.9076923077, "max_line_length": 119, "alphanum_fraction": 0.5949074074, "num_tokens": 1852, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.658417487156366, "lm_q2_score": 0.8376199653600371, "lm_q1q2_score": 0.551503632784358}}
{"text": "\\subsection{Plane strain bracket}\n\\paragraph{}\nIn this example, a plane strain bracket with a downward uniform distributed load on the top is considered (see Fig.~\\ref{qdt_fig:ex_bracket_geo_bc}).\nThe material properties are: Young\u2019s modulus $E = \\SI{2e5}{\\newton \\per \\square \\meter}$ and Poisson\u2019s ratio $\\nu = 0.3$.\n    \\begin{figure}[H]\n        \\centering\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{1.2}{\n                \\includegraphics{quadtree/ex_images/ex_bracket_geo.eps}\n            }\n        \\end{subfigure}\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{1.3}{\n                \\includegraphics{quadtree/ex_images/ex_bracket_load.eps}\n            }\n        \\end{subfigure}\n        \\caption{ Plane strain bracket: geometry and boundary conditions}\n        \\label{qdt_fig:ex_bracket_geo_bc}\n    \\end{figure}\n\n\\paragraph{}\nA total strain energy of $\\SI{282.927}{\\joule}$ is determined by ANSYS with the mesh shown in Fig.~\\ref{qdt_fig:ex_bracket_ansys_mesh}\n    \\begin{figure}[H]\n        \\centering\n        \\scalebox{0.35}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_ansys_mesh_31002nodes.png}\n        }\n        \\caption{ANSYS mesh for plane strain bracket (62004 DOF)}\n        \\label{qdt_fig:ex_bracket_ansys_mesh}\n    \\end{figure}\n%\nDrawing in AutoCAD will be divided into two parts: base and the holes as shown in Fig.~\\ref{qdt_fig:ex_bracket_cad}\n    \\begin{figure}[H]\n        \\centering\n        \\begin{subfigure}[b]{1\\linewidth}\n            \\centering\n            \\scalebox{0.4}{\n                \\includegraphics{quadtree/ex_images/ex_bracket_cad_full.png}\n            }\n            \\caption{All}\n        \\end{subfigure}\n        \\begin{subfigure}[b]{0.4\\linewidth}\n            \\scalebox{0.2}{\n                \\includegraphics{quadtree/ex_images/ex_bracket_cad_base.png}\n            }\n            \\caption{Base}\n        \\end{subfigure}\n        \\begin{subfigure}[b]{0.4\\linewidth}\n            \\scalebox{0.2}{\n                \\includegraphics{quadtree/ex_images/ex_bracket_cad_holes.png}\n            }\n            \\caption{Holes}\n        \\end{subfigure}\n        \\caption{CAD drawing of plane strain bracket}\n        \\label{qdt_fig:ex_bracket_cad}\n    \\end{figure}\n% fig- 64,32/8,3\nGenerated background mesh, coloring and the final result with $res=32$, $s_{max}=4$ and $s_{min}=1$ are shown in Fig.~\\ref{qdt_fig:ex_chole_background_mesh}, Fig.~\\ref{qdt_fig:ex_chole_mesh_coloring} and Fig.~\\ref{qdt_fig:ex_chole_mesh_final}.\n\\begin{figure}\n    \\centering\n    \\scalebox{0.4}{\n        \\includegraphics{quadtree/ex_images/ex_bracket_background.eps}\n    }\n    \\caption[Background mesh of the bracket]{Background mesh of the bracket : Bold lines represents the input geometry}\n    \\label{qdt_fig:ex_chole_background_mesh}\n\\end{figure}\n%\n\\begin{figure}\n    \\centering\n    \\scalebox{0.5}{\n        \\includegraphics{quadtree/ex_images/ex_bracket_colored.eps}\n    }\n    \\caption[Mesh coloring of the bracket]{Mesh coloring of the bracket : Grey area represents the bracket}\n    \\label{qdt_fig:ex_chole_mesh_coloring}\n\\end{figure}\n%\n\\begin{figure}\n    \\centering\n    \\scalebox{0.3}{\n        \\includegraphics{quadtree/ex_images/ex_bracket_final.eps}\n    }\n    \\caption[Final mesh of the bracket]{Final mesh of the bracket}\n    
\\label{qdt_fig:ex_chole_mesh_final}\n\\end{figure}\n% -1 - 282.927\n% 382*2  - 274.5638 (32-4/8-3)\n% 950*2  - 280.1255 (64-4/8-3)\n% 2110*2 - 281.0799 (128-4/15-4)\n% 4414*2 - 282.0363 (256-4/15-4)\n%\n% mlp\n% 530 - 279.8076\n% 1554 - 282.0745\n% 4208 - 282.6581\n% err = abs([279.8076 ,282.0745,282.6581]-282.927)/282.927; dof = [530,1554,4208]; polyfit(log(dof),log(err),1)\n%\n% uniform\n% 530 - 279.8076\n% 1810 - 282.3186\n% 6464 - 282.674\n% err_un = abs([279.8076 ,282.3186 ,282.674]-282.927)/282.927; dof_un = [530,1810,6464]; polyfit(log(dof_un),log(err_un),1)\n%\n% ansys-2nd\n% 406*2 -  282.412\n% 1144*2 - 282.895\n% 4270*2 - 282.923\n%\n% 16462*2 -  282.926\n% err_a2 = abs([282.412,282.895,282.923]-282.927)/282.927; dof_a2 = [406,1144,4270]*2; polyfit(log(dof_a2),log(err_a2),1)\n%\n% ansys-1st\n% 185*2 - 275.635    \n% 650*2 -  280.006   \n% 1227*2 -  281.502 \n% 2405*2 - 282.136\n% err_a1 = abs([275.635,280.006  ,281.502 ,282.136]-282.927)/282.927; dof_a1 = [185,650,1227,2405]*2; polyfit(log(dof_a1),log(err_a1),1)\nMesh with different parameters are plotted in Fig.~\\ref{qdt_fig:ex_bracket_mesh_all} and the convergence study is plotted in Fig.~\\ref{qdt_fig:ex_bracket_mesh_conv}\n%\n\\begin{figure}[H]\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_mesh_64_4.eps}\n        }\n        \\caption{Mesh with $res=64$, $s_{max}=4$, 1656 DOFs}\n    \\end{subfigure}\n    \\\\\n    % \\begin{subfigure}[b]{1\\linewidth}\n    %     \\centering\n    %     \\scalebox{0.4}{\n    %         \\includegraphics{quadtree/ex_images/ex_bracket_mesh_128_4.eps}\n    %     }\n    %     \\caption{Mesh with $res=128$, $s_{max}=4$, 2548 DOFs}\n    % \\end{subfigure}\n\\end{figure}\n\n\\begin{figure}[H]\\ContinuedFloat\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_mesh_128_4.eps}\n        }\n        \\caption{Mesh with $res=128$, $s_{max}=4$, 2548 DOFs}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_mesh_256_4.eps}\n        }\n        \\caption{Mesh with $res=256$, $s_{max}=4$, 5464 DOFs}\n    \\end{subfigure}\n    \\caption[Mesh of the plane strain bracket]{Mesh of the plane strain bracket}\n    \\label{qdt_fig:ex_bracket_mesh_all}\n\\end{figure}\n\n\\begin{figure}[H]\n    \\centering\n    \\scalebox{0.75}{\n        \\includegraphics{quadtree/ex_images/ex_bracket_conv.eps}\n    }   \n    \\caption[Convergence of the plane strain bracket]{Convergence of the plane strain bracket}\n    \\label{qdt_fig:ex_bracket_mesh_conv}\n\\end{figure}\n%\nFig.~\\ref{qdt_fig:bracket_stress_contour} shows the von Mises equivalent stress for the plane strain bracket.\nFrom Fig.~\\ref{qdt_fig:bracket_stress_contour}, it can be observed that the results from the present approach qualitatively match with the FE solution.\n\\begin{figure}[h!]\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_ansys_vmstr.png}\n        }\n        \\caption{FEM}\n    \\end{subfigure}\n    \\begin{subfigure}[b]{1\\linewidth}\n        \\centering\n        \\scalebox{0.4}{\n            \\includegraphics{quadtree/ex_images/ex_bracket_strcontour.eps}\n        }\n        \\caption{Quadtree SBFEM}\n    \\end{subfigure}\n\n    \\caption{Von 
Mises equivalent stress contours for plane strain bracket}\n    \\label{qdt_fig:bracket_stress_contour}\n\\end{figure}\n", "meta": {"hexsha": "551a14c083774f6d701f1d081d2eceaa5ff12851", "size": 6827, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "quadtree/ex_bracket.tex", "max_stars_repo_name": "fa93hws/thesis", "max_stars_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-10-30T12:14:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T12:14:47.000Z", "max_issues_repo_path": "quadtree/ex_bracket.tex", "max_issues_repo_name": "fa93hws/thesis", "max_issues_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quadtree/ex_bracket.tex", "max_forks_repo_name": "fa93hws/thesis", "max_forks_repo_head_hexsha": "c397ddc18e5ff5d6e9b8d6de2e53be4c9c7b7a2d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3138297872, "max_line_length": 243, "alphanum_fraction": 0.6499194375, "num_tokens": 2206, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802735722128, "lm_q2_score": 0.824461932846258, "lm_q1q2_score": 0.5514663231920804}}
{"text": "\\chapter{Problem Formulation} \\label{sec:Problem Formulation}\nOur models strongly draw from the previous studies \\cite{Xue2016Avi2, Xue2016Avi1}, with modifications directed at reducing computation time, as well as getting better results. Since GPUs enable faster computation on tensors (see \\Cref{sec:Gpu's Strengths}), both the Identification and the Pricing Problem are formulated as tensor-based 3-layered and 2-layered neural networks respectively using the PyTorch library \\cite{PTDocs}. Recognizing that NVIDIA GPUs easily pair with PyTorch \\cite{PTDocs}, accelerating tensor operations using NVIDIA's APIs --- CUDA and cuDNN \\cite{CUDADocs, cuDNNPaper}, we aim to maximize parallel operations and minimize our models' computation time.\n\n\\section{Identification Problem} \\label{sec:Identification Problem}\nAs discussed in \\Cref{sec:Introduction}, the model should learn parameters that caused the change in agents' behavior when a certain set of rewards was applied to locations in the experiment region. Learning those parameters will help us understand how agents behaved with a fixed reward distribution, and will enable organizers to redistribute rewards based on that behavior.\n\nSpecifically, given datasets $\\vect{x_t}$ and $\\vect{y_t}$ of agents' visit densities at time $t$, before and after the rewards $\\vect{r_t}$ were placed respectively, we want to find the weights that caused the change from $\\vect{x_t}$ to $\\vect{y_t}$, considering possible influence from environmental factors $\\matr{f}$ and distances between locations $\\matr{D}$. Although the previous study proposed to learn a single set of weights $\\matr{w}$ \\cite{Xue2016Avi2}, our proposed model considers two sets of weights $\\matr{w_1}$ and $\\matr{w_2}$ as it may theoretically result into higher accuracy and lower loss value. Mathematically, the model can be formulated as:\n\\begin{equation} \\label{eqn:iden_problem}\n    \\begin{aligned}\n        & \\underset{\\matr{w_1}, \\matr{w_2}}{\\text{minimize}}\n        & & Z_I(\\matr{w_1}, \\matr{w_2}) = \\sum_{t} (\\omega_t(\\vect{y_t} - \\matr{P}(\\matr{f}, \\vect{r_t}; \\matr{w_1}, \\matr{w_2})\\vect{x_t}))^{2}\n    \\end{aligned}\n\\end{equation}\nwhere $\\omega_t$ is a set of weights (not a learnable parameter) at time $t$ capturing penalties relative to the priority of homogenizing different locations at time $t$. In other words, it highlights if the organizer wishes higher homogeneity at one time unit over another. Elements $p_{u, v}$ of $\\matr{P}$ are given as ($\\vect{\\Theta_i}$ substituted for $\\vect{w_i}_v^T$):\n\\begin{equation} \\label{eqn:puv_equation}\np_{u, v} = \\frac{\\exp(\\vect{\\Theta_2} \\cdot \\text{reLU} (\\matr{\\Theta_1} \\cdot [d_{u, v}, \\vect{f_{u}}, r_{u}]))}{\\sum_{u'} \\exp(\\vect{\\Theta_2} \\cdot \\text{reLU} (\\matr{\\Theta_1} \\cdot [d_{u', v}, \\vect{f_{u'}}, r_{u'}]))} = \\frac{\\exp(\\Gamma_{u, v})}{\\sum_{u'}\\exp(\\Gamma_{u', v})} = \\text{softmax}(\\Gamma_{u, v})\n\\end{equation}\n\nTo optimize the loss value $Z_I(\\matr{w_1}, \\matr{w_2})$, the neural network learns the set of weights through multiple epochs of backpropagating the loss using gradient descent. 
Furthermore, the program preprocesses the dataset to avoid unnecessary sub-epoch iterations, and to promote batch operations on tensors.\n\n\\subsection{Structure of Input Dataset for Identifying Weights} \\label{sec:Structure of Input Dataset for Identifying Weights}\n\\begin{figure}[!htbp]\n    \\begin{subfigure}{.64\\textwidth}\n        \\centering\n        \\includegraphics[width=\\linewidth]{weights_input_dataset}\n        \\caption{A Tensor representing the Input Dataset $\\tens{F}$}\n        \\label{fig:A Tensor representing the complete Input Dataset}\n    \\end{subfigure}\n    \\begin{subfigure}{.35\\textwidth}\n        \\centering\n        \\includegraphics[width=\\linewidth]{zoomup_Fuv}\n        \\caption{Contents of $\\tens{F}[v][u]$: This vector (length $n_F$) contains quantified causes for the change in agents' behavior from $\\matr{x_t}$ to $\\matr{y_t}$}\n        \\label{fig:Zoomed-in contents of Fvu}\n    \\end{subfigure}\n    \\caption{Visual representation of the Identification Problem's Input Dataset}\n    \\label{fig:Visual representation of the Identification Problem's Input Dataset}\n\\end{figure}\nSince preprocessing the dataset reduces data operations during model execution, the input dataset, comprising of distance between locations $\\matr{D}$, environmental features $\\vect{f}$, and given rewards $\\vect{r_t}$ (all normalized), is built into a tensor (\\Cref{fig:A Tensor representing the complete Input Dataset}) such that operations can be performed on batches of slices $\\tens{F}[v]$.\n\nAnother advantage of building the dataset as a tensor comes with the PyTorch library, which provides convenient handling and transfer of tensors residing on the Main Memory and GPUs' internal global memory \\cite{PTDocs}. \\Cref{alg:Constructing the Input Dataset} describes the steps to construct this dataset. During implementation, we preprocess the $\\tens{F}$ dataset to reduce computation time during model runs (\\Cref{app:Building the Dataset F}).\n\\begin{algorithm}[!htbp]\n    \\caption{Constructing the Input Dataset} \\label{alg:Constructing the Input Dataset}\n    \\begin{algorithmic}[1]\n        \\Function{Build-Dataset}{$\\matr{D}, \\matr{f}, \\matr{r_t}$}\n        \\State $\\matr{D} \\gets \\Call{Normalize}{\\matr{D}}$\\Comment{$\\matr{D}[u][v]$ is the distance between locations $u$ and $v$}\n        \\State $\\vect{f} \\gets \\Call{Normalize}{\\mathbf{f}, axis = 0}$\\Comment{$\\mathbf{f}[u]$ is a vector of env. features at location $u$}\n        \\State $\\vect{r_t} \\gets \\Call{Normalize}{\\vect{r_t}, axis = 0}$\\Comment{$\\vect{r_t}[u]$ is the reward at location $u$}\n        \\For{$v = 1, 2, \\dots, J$}\n            \\For{$u = 1, 2, \\dots, J$}\n                \\State $\\tens{F}[v][u] \\gets [\\matr{D}[v][u], \\vect{f}[u], \\vect{r_t}[u]]$ \\Comment{As depicted in \\Cref{fig:Zoomed-in contents of Fvu}}\n             \\EndFor\n        \\EndFor\n        \\State \\Return $\\tens{F}$\n        \\EndFunction\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Minimizing Loss for the Identification Problem} \\label{sec:Minimizing Loss for the Identification Problem}\nAs shown in \\Cref{fig:Neural network designed for the Identification Problem}, the neural network is made of 3 fully connected layers --- the input layer, the hidden layer with rectified Linear Units (reLU), and the output layer with softmax$(\\cdot)$ function units. 
The network can also be visualized as a stack of 1-dimensional layers (\\Cref{fig:Side view of the network}), with the softmax$(\\cdot)$ calculated on the stack's output.\n\\begin{figure}[!htbp]\n    \\centering\n    \\begin{subfigure}{\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{weights_net}\n        \\caption{3-dimensional View of the Network Slice, Taking in $\\tens{F}[v]$}\n        \\label{fig:3-dimensional view of the network slice taking in Fv}\n    \\end{subfigure}\n    \\begin{subfigure}{.75\\textwidth}\n        \\centering\n        \\includegraphics[width=\\textwidth]{weights_net_side}\n        \\caption{Side View of the Network: Output of one such cross-section is $p_{u_i, v}$}\n        \\label{fig:Side view of the network}\n    \\end{subfigure}\n    \\caption{Neural Network Designed for the Identification Problem}\n    \\label{fig:Neural network designed for the Identification Problem}\n\\end{figure}\n\nIt is important to clarify that the network, which takes in $\\tens{F}[v]$ as shown in \\Cref{fig:3-dimensional view of the network slice taking in Fv}, is a slice of the original network, which takes in the complete tensor $\\tens{F}$, and computes the complete result $\\matr{P}^{T}$  per iteration of $t$. In other words, the input and the hidden layers are 3-dimensional, and the output layer is 2-dimensional. Since it is difficult to visualize the complete network on paper, slices of the network are depicted in \\Cref{fig:3-dimensional view of the network slice taking in Fv}. \\Cref{alg:Algorithm for the Identification Problem} details the steps for learning the parameters $\\matr{w_1}$ and $\\matr{w_2}$ based on \\Cref{eqn:iden_problem,eqn:puv_equation}.\n\n\\begin{algorithm}[!htbp]\n    \\caption{Algorithm for the Identification Problem} \\label{alg:Algorithm for the Identification Problem}\n    \\begin{algorithmic}[1]\n        \\State $\\matr{w_1} \\gets \\Call{Random}{\\;(J,n_F,n_F)\\;}$\\Comment{$\\matr{w_1}$ has dimensions $J \\times n_F \\times n_F$}\n        \\State $\\matr{w_2} \\gets \\Call{Random}{\\;(J,n_F,1)\\;}$\\Comment{$\\matr{w_2}$ has dimensions $J \\times n_F \\times 1$}\n        \\For{$e = 1, 2, \\dots, \\text{Epochs}$}\n            \\State $loss \\gets 0$\n            \\For{$t = 1, 2, \\dots, T$}\n                \\State $\\tens{F} \\gets \\Call{Build-Dataset}{\\matr{D}, \\matr{f}, \\matr{r}[t]}$\\Comment{Defined in \\Cref{alg:Constructing the Input Dataset}}\n                \\State $\\matr{H} \\gets  \\text{reLU}(\\Call{Batch-Multiply}{\\tens{F}, \\matr{w_1}})$\n                \\State $\\matr{O} \\gets \\text{softmax}(\\Call{Batch-Multiply}{\\matr{H}, \\matr{w_2}})$\n                \\State $\\matr{P} \\gets \\matr{O}^T$\n                \\State $loss \\gets loss + (\\omega[t](\\matr{y}[t] - \\matr{P} \\cdot \\matr{x}[t]))^2$\n            \\EndFor\n            \\State $\\Call{Gradient-Descent}{loss, \\matr{w_1}, \\matr{w_2}}$\n            \\State $\\matr{w_1}, \\matr{w_2} \\gets \\Call{Update-Using-Gradients}{\\matr{w_1}, \\matr{w_2}}$\n            \\State $\\Call{Log-Info}{e, loss}$\n        \\EndFor\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\section{Pricing Problem} \\label{sec:Pricing Problem}\nAfter learning the set of weights $\\matr{w_1}$ and $\\matr{w_2}$, highlighting the change in agents' behavior to collect observations, the Pricing Problem aims to redistribute rewards to the all locations such that the predicted agents' behavior, influenced by the new set of rewards, is homogeneous. 
Thus, given a budget of rewards $\\mathcal{R}$, this optimization problem can be expressed as:\n\\begin{equation} \\label{eqn:pricing_problem}\n    \\begin{aligned}\n        & \\underset{\\vect{r}}{\\text{minimize}}\n        & & Z_P(\\vect{r}) = \\frac{1}{n}\\lVert \\vect{y} - \\mean{\\vect{y}} \\rVert\\\\\n        & \\text{subject to}\n        & & \\vect{y} = \\matr{P}(\\matr{f}, \\vect{r}; \\matr{w_1}, \\matr{w_2}) \\, \\vect{x}\\\\\n        &&& \\sum_{i} r_i \\leq \\mathcal{R}\\\\\n        &&& r_i \\geq 0\n    \\end{aligned}\n\\end{equation}\nwhere elements of $\\matr{P}$ are defined as in \\Cref{eqn:puv_equation}.\n\nTo allocate the rewards $\\vect{r}$, the calculations for the Pricing Problem are akin to those for the Identification Problem (see \\Cref{sec:Identification Problem}). However, since only 1 set of rewards needs to be optimized, we use an altered 2-layer (input and output layers) network instead of the 3-layered network used for the Identification Problem. While \\Cref{eqn:pricing_problem} looks like a typical Linear Programming problem, only a part of the formulation uses LP to constrain the rewards. We calculate $\\matr{P}$ using a 2-layered network that minimizes the loss function $Z_P(\\vect{r})$ using gradient descent, and constrain the rewards using linear programming. The fundamental steps are described below, whereas specific implementation details are given in \\Cref{app:Implementation}.\n\n\\subsection{Input Dataset for Finding Rewards}\nSince it is the rewards $\\vect{r}$ that need to be optimized, they must serve as the ``weights'' of the network (``weights'' here refer to the weighted edges of this network and not to the set of calculated weights $\\matr{w_1}$ and $\\matr{w_2}$). Therefore, the rewards $\\vect{r}$ are no longer fed into the network but are instead part of the network itself. Instead, the calculated weights $\\matr{w_1}$ are fed into the network, and are ``weighted'' by the rewards.\n\nThe observation density datasets ($\\matr{x}$ and $\\matr{y}$) are also aggregated for all agents such that they give information in terms of locations $u$ only. This is also why the rewards vector $\\vect{r}$ does not depend on $t$ --- we want a generalized set of rewards for all time $t$ per location $u$. Therefore, the algorithm for constructing $\\tens{F}$ (see \\Cref{sec:Structure of Input Dataset for Identifying Weights}) is the same as \\Cref{alg:Constructing the Input Dataset} but with one change --- $\\vect{r_t}$ replaced by $\\vect{r}$.\n\n\\subsection{Calculating Rewards} \\label{sec:Calculating Rewards}\n\\Cref{alg:Solving the Pricing Problem} for finding $\\matr{P}$ is very similar to \\Cref{alg:Algorithm for the Identification Problem}'s first few steps, but without the inner loop over $t$. Also, since the model itself predicts $\\vect{y}$, it does not need labels $\\vect{y}$ as a dataset. 
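To make this role reversal concrete, here is a hedged PyTorch-style sketch (our names, shapes, and sizes, not the project's code) in which only the rewards carry gradients while $\\matr{w_1}$ and $\\matr{w_2}$ stay fixed:\n\\begin{verbatim}\nimport torch\n\nJ, n_F = 5, 4                                   # assumed sizes\n\ndef build_dataset(D, f, r):\n    # stand-in for the dataset construction, with r_t replaced by r:\n    # F[v][u] = [D[v][u], f[u], r[u]]\n    feats = torch.cat([f, r.unsqueeze(1)], dim=1)\n    return torch.cat([D.unsqueeze(2),\n                      feats.unsqueeze(0).expand(J, -1, -1)], dim=2)\n\nD, f, x = torch.rand(J, J), torch.rand(J, n_F - 2), torch.rand(J)\nw1, w2 = torch.rand(J, n_F, n_F), torch.rand(J, n_F, 1)  # fixed, pre-learned\nr = torch.rand(J, requires_grad=True)                    # rewards are trained\n\nF = build_dataset(D, f, r)\nO1 = torch.relu(torch.bmm(F, w1))\nO2 = torch.softmax(torch.bmm(O1, w2).squeeze(-1), dim=1)\ny = O2.t() @ x                                           # predicted densities\nloss = torch.norm(y - y.mean()) / J                      # the loss Z_P(r)\nloss.backward()                                          # gradients reach r only\n\\end{verbatim}\n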
\\Cref{alg:Solving the Pricing Problem}'s logic flow is depicted in \\Cref{fig:Logic Flow of Algorithm Pricing Problem} to aid understanding.\n\\begin{figure}[!htbp]\n    \\centering\n    \\includegraphics[width=\\textwidth]{logic_alg_pricing}\n    \\caption{Logic Flow of \\Cref{alg:Solving the Pricing Problem}}\n    \\label{fig:Logic Flow of Algorithm Pricing Problem}\n\\end{figure}\n\\begin{algorithm}[!htbp]\n    \\caption{Solving the Pricing Problem} \\label{alg:Solving the Pricing Problem}\n    \\begin{algorithmic}[1]\n        \\Function{Forward}{$\\matr{D}, \\matr{f}, \\vect{r}, \\matr{w_1}, \\matr{w_2}, \\vect{x}$}\n            \\State $\\tens{F} \\gets \\Call{Build-Dataset}{\\matr{D}, \\matr{f}, \\vect{r}}$\\Comment{Defined in \\Cref{alg:Constructing the Input Dataset}}\n            \\State $\\matr{O}_1 \\gets \\text{reLU}(\\Call{Batch-Multiply}{\\tens{F}, \\matr{w_1}})$\n            \\State $\\matr{O}_2 \\gets \\text{softmax}(\\Call{Batch-Multiply}{\\matr{O}_1, \\matr{w_2}})$\n            \\State $\\matr{P} \\gets \\matr{O}_2^T$\n            \\State $\\vect{y} \\gets \\matr{P} \\cdot \\vect{x}$\n            \\State \\Return $\\lVert \\vect{y} - \\mean{\\vect{y}} \\rVert / J$\n        \\EndFunction\n        \\vspace*{-.7\\baselineskip}\\Statex\\hspace*{\\dimexpr-\\algorithmicindent-2pt\\relax}\\rule{\\textwidth}{0.1pt}%\n        \\Statex\\hspace*{-\\algorithmicindent}\\textbf{Main Script}%\n        \\vspace*{-.6\\baselineskip}\\Statex\\hspace*{\\dimexpr-\\algorithmicindent-2pt\\relax}\\rule{\\textwidth}{0.1pt}% horizontal rule\n        \\State $\\vect{r} \\gets \\Call{Random}{\\;(J)\\;}$\\Comment{$\\vect{r}$ has dimension $J$}\n        \\State $loss \\gets \\Call{Forward}{\\matr{D}, \\matr{f}, \\vect{r}, \\matr{w_1}, \\matr{w_2}, \\vect{x}}$\n        \\For{$e = 1, 2, \\dots, \\text{Epochs}$}\n            \\State $\\Call{Gradient-Descent}{loss, \\vect{r}}$\n            \\State $\\vect{r} \\gets \\Call{Update-Using-Gradients}{\\vect{r}}$\n            \\State $\\vect{r} \\gets \\Call{LP}{\\vect{r}, \\mathcal{R}}$\\Comment{LP($\\cdot$) explained in \\Cref{sec:Constraining Rewards}}\n            \\State $loss \\gets \\Call{Forward}{\\matr{D}, \\matr{f}, \\vect{r}, \\matr{w_1}, \\matr{w_2}, \\vect{x}}$\n            \\State $\\Call{Log-Best-Rewards}{loss, \\vect{r}}$\\Comment{Record $\\vect{r}$ with the lowest $loss$ yet}\n        \\EndFor\n    \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Constraining Rewards} \\label{sec:Constraining Rewards}\nAfter updating the rewards, the program constrains them using LP($\\cdot$) such that $\\sum_{i}r_i \\leq \\mathcal{R}$ and $r_i \\geq 0$. The LP($\\cdot$) finds another set of rewards $\\vect{r'}$ such that the absolute difference between new and old rewards ($\\sum_{i}|r'_i - r_i|$) is minimized. The mathematical formulation is given in \\Cref{eqn:lp_math_constrain_rewards}, which was implemented using SciPy's Optimize Module \\cite{SCPOptimizeDocs}. 
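As a hedged illustration, this projection can be posed with SciPy's \\texttt{linprog}; the function and variable names below are ours, and the auxiliary variables $u_i$ match the standard-format problem shown next.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef constrain_rewards(r, R):\n    # project r onto the feasible set, minimizing sum_i |r'_i - r_i|\n    J = len(r)\n    c = np.concatenate([np.zeros(J), np.ones(J)])  # minimize sum of u_i\n    A_ub = np.block([[np.eye(J), -np.eye(J)],      #  r'_i - u_i <=  r_i\n                     [-np.eye(J), -np.eye(J)]])    # -r'_i - u_i <= -r_i\n    b_ub = np.concatenate([r, -r])\n    A_ub = np.vstack([A_ub, np.concatenate([np.ones(J), np.zeros(J)])])\n    b_ub = np.append(b_ub, R)                      # budget: sum of r'_i <= R\n    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))\n    return res.x[:J]                               # keep r', drop u\n\\end{verbatim}\n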
Since the module expects linear programs in a standard format, \\Cref{eqn:lp_code_constrain_rewards} is used (see \\Cref{app:Modeling the Linear Programming Problem in the Standard Format}), which is mathematically equivalent to \\Cref{eqn:lp_math_constrain_rewards}.\n\\begin{multicols}{2}\n    \\begin{equation} \\label{eqn:lp_math_constrain_rewards}\n        \\begin{aligned}\n            & \\underset{\\vect{r'}}{\\text{minimize}}\n            & & \\sum_{i}|r'_i - r_i|\\\\ \\\\\n            & \\text{subject to}\n            & & \\sum_{i}r'_i \\leq \\mathcal{R}\\\\\n            &&& r'_i \\geq 0\n        \\end{aligned}\n    \\end{equation}\n    \\begin{equation} \\label{eqn:lp_code_constrain_rewards}\n        \\begin{aligned}\n            & \\underset{[\\vect{r'}, \\vect{u}]}{\\text{minimize}}\n            & & \\sum_{i} u_i\\\\\n            & \\text{subject to}\n            & & r'_i - r_i \\leq u_i\\\\\n            &&& r_i - r'_i \\leq u_i\\\\\n            &&& \\sum_{i} r'_i \\leq \\mathcal{R}\\\\\n            &&& r'_i, u_i \\geq 0\n        \\end{aligned}\n    \\end{equation}\n\\end{multicols}\n\nWe make a tradeoff by constraining rewards using LP instead of Mixed Integer Programming, which was used by the previous study \\cite{Xue2016Avi2}. The tradeoff is between decreased computation time and loosened integrality constraints on the reward values. While model-learned integer rewards have worked in real-life testing \\cite{Xue2016Avi2}, we cannot say whether allowing non-integer rewards would produce better or comparable real-life results that match the model's predictions. In other words, we cannot ensure the predicted rewards' effectiveness in ground testing before \\textit{actually} deploying them in the game. Nevertheless, we can estimate how good the predictions are compared to several baseline indicators. These baseline sets are elaborated in \\Cref{sec:Pricing Problem-Optimizing the Original Dataset}.", "meta": {"hexsha": "1c6c845f5a93d1b890695ff80a1681872c084b8b", "size": 17186, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/prob_formulation.tex", "max_stars_repo_name": "anmolkabra/avicaching-summ17", "max_stars_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "report/prob_formulation.tex", "max_issues_repo_name": "anmolkabra/avicaching-summ17", "max_issues_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/prob_formulation.tex", "max_forks_repo_name": "anmolkabra/avicaching-summ17", "max_forks_repo_head_hexsha": "3b85c1b70adcbe5d5b2764195090b28093081b1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.4285714286, "max_line_length": 846, "alphanum_fraction": 0.6983009426, "num_tokens": 5005, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619350028204, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.551466313750672}}
{"text": "% Created 2016-02-05 Fri 09:42\n\\documentclass{scrartcl}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{fixltx2e}\n\\usepackage{graphicx}\n\\usepackage{longtable}\n\\usepackage{float}\n\\usepackage{wrapfig}\n\\usepackage{soul}\n\\usepackage{textcomp}\n\\usepackage{marvosym}\n\\usepackage{wasysym}\n\\usepackage{latexsym}\n\\usepackage{amssymb}\n\\usepackage{hyperref}\n\\tolerance=1000\n\\usepackage{khpreamble}\n\\providecommand{\\alert}[1]{\\textbf{#1}}\n\n\\title{Computerized control - homework 2}\n\\author{Kjartan Halvorsen}\n\\date{Due 2016-02-05}\n\\hypersetup{\n  pdfkeywords={},\n  pdfsubject={},\n  pdfcreator={Emacs Org-mode version 7.9.3f}}\n\n\\begin{document}\n\n\\maketitle\n\n\n\n\\section*{Exercises}\n\\label{sec-1}\n\\subsection*{Sample the continuous-time transfer function}\n\\label{sec-1-1}\n\n   Consider the harmonic oscillator with transfer function \n   \\begin{equation}\n    G(s) = \\frac{\\omega^2}{s^2 + \\omega^2}.\n    \\label{eq:contsys}\n    \\end{equation}\n\n\n   \\textbf{Compute the pulse-transfer function} by sampling the transfer function $G(s)$. \n\\subsection*{Simulation of the continuous- and discrete-time harmonic oscillator}\n\\label{sec-1-2}\n\nUse matlab's control toolbox or the \\href{http://python-control.sourceforge.net/}{python control module}  to simulate the system and verify your calculations. \n\\subsubsection*{Define systems}\n\\label{sec-1-2-1}\n\nFirst, define the continuous-time system in \\eqref{eq:contsys}\n\n\\begin{verbatim}\nomega = 1; % Just a suggestion\nh = 0.2/omega; % Completely undamped system. This gives about 30 samples per period \nsys_c = tf([omega^2],[1 0 omega^2])\n\\end{verbatim}\nSample the system using the function \\texttt{c2d}\n\n\\begin{verbatim}\nsys_c2d = c2d(sys_c, h)\n\\end{verbatim}\nDefine the discrete-time system you calculated in the first part of the homework\n\n\\begin{verbatim}\nden = [1 a1 a2];\nnum = [b1 b2];\nsys_d = tf(num, den, h)\n\\end{verbatim}\n\\textbf{Verify that the two discrete-time systems} \\texttt{sys\\_c2d} \\textbf{and} \\texttt{sys\\_d} \\textbf{are identical.}\n\\subsubsection*{Simulate step responses}\n\\label{sec-1-2-2}\n\nSimulate for 4 complete periods \n\n\\begin{verbatim}\nTc = linspace(0, 4*(2*pi/omega), 800);\n[yc,tc] = step(sys_c, Tc);\n\nTd = h*(0:120);\n[yd,td] = step(sys_d, Td);\n\nfigure()\nclf\nplot(tc,yc)\nhold on\nplot(td,yd, 'r*')\n\\end{verbatim}\n\n\\textbf{Verify that the step response of the discrete-time system is exactly equal to that of the continuous-time system at the sampling instants. Explain why this is so!}\n\\subsubsection*{Compute the discrete step response yourself}\n\\label{sec-1-2-3}\n\n    Write some lines of code that solves the difference equation\n    \\[ y(k+2) = -a_1y(k+1) - a_2y(k) + b_1u(k+1) + b_2u(k) \\]\n    for the harmonic oscillator. 
\n   Use the initial state \\(y(-1)=y(0)=0\\) and compute the response to a step sequence \n    \\[ u(k) = \\begin{cases} 1, & k \\ge 0\\\\ 0, & \\text{otherwise} \\end{cases}.\\]\n    Verify that your solution is the same as when using the \\texttt{step} function in the previous exercise in this homework.\n \n\\section*{Solutions}\n\\label{sec-2}\n\\subsection*{Sampling the transfer function}\n\\label{sec-2-1}\n\n\\begin{enumerate}\n\\item Calculate the step response\n      \\begin{equation*}\n       \\begin{split} \n         Y(s) &= G(s)\\frac{1}{s} = \\frac{\\omega^2}{(s^2 + \\omega^2)s}\\\\\n              &= \\frac{1}{s} - \\frac{s}{s^2 + \\omega^2}.\n       \\end{split}\n      \\end{equation*}\n\\item Transform to time domain (using transform table) \n      \\[ y(t) = u(t) - u(t) \\cos \\omega t \\]\n\\item Calculate z-transform of sampled output \n      \\begin{equation*}\n       \\begin{split} \n        Y(z) &= \\ztrf{y(kh)} = \\ztrf{u(kh) - u(kh)\\cos(\\omega k h)}\\\\\n             &= \\ztrf{u(k)} - \\ztrf{u(k)\\cos(\\omega hk)}\\\\\n             &= \\frac{z}{z-1} - \\frac{z(z-\\cos(\\omega h))}{z^2 -2\\cos(\\omega h)z +1}\n       \\end{split}\n      \\end{equation*}\n\\item Divide by the z-transform of the step input\n       \\begin{equation*}\n        \\begin{split}\n         H(z) &= \\frac{Y(z)}{U(z)} = \\frac{z-1}{z} \\Big( \\frac{z}{z-1} - \\frac{z(z-\\cos(\\omega h))}{z^2 -2\\cos(\\omega h)z +1}\\Big)\\\\\n              &= 1 - \\frac{(z-1)(z-\\cos(\\omega h))}{z^2 -2\\cos(\\omega h)z +1}\\\\\n              &= \\frac{z^2 - 2\\cos(\\omega h)z + 1 - z^2 +\\cos(\\omega h)z + z - \\cos(\\omega h)}{z^2 - 2\\cos(\\omega h)z + 1}\\\\\n              &= \\frac{\\big(1-\\cos(\\omega h)\\big)z + 1-\\cos(\\omega h)}{z^2 - 2\\cos(\\omega h)z + 1}.\n        \\end{split}\n       \\end{equation*}\n\\end{enumerate}\n\\subsection*{Simulations}\n\\label{sec-2-2}\n\n   The code that is provided does most of the job. You just have to define the numerator and denominator of the discrete-time system from the sampled model you obtained:\n\n\\begin{verbatim}\na1 = -2*cos(omega*h);\na2 = 1;\nb1 = 1-cos(omega*h);\nb2 = b1;\n\nden = [1 a1 a2];\nnum = [0 b1 b2];\nsys_d = tf(num, den, h)\n\\end{verbatim}\n\\subsection*{Own code for simulating system response}\n\\label{sec-2-3}\n\n   There are many ways to do this. 
Here is one\n\n\\begin{verbatim}\n% Calculate the step response by hand\nN = 121;   % number of samples, chosen to match td = h*(0:120)\nu = ones(N,1);\ny = nan(N,1);\n% Pad with two zeros at beginning, corresponding to u(-2), u(-1), y(-2) and\n% y(-1)\nu = [zeros(2,1); u];\ny = [zeros(2,1); y];\n\nfor kplustwo = 3:N+2\n    y(kplustwo) = -a1*y(kplustwo-1) -a2*y(kplustwo-2) + b1*u(kplustwo-1) + b2*u(kplustwo-2);\nend\n\nyd2 = y(3:end);\nplot(td,yd2, 'ko', 'markersize', 14)\nlegend('Continuous model', 'Discrete model', 'Own simulation')\n\\end{verbatim}\n\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{hw2-spring16-sim}\n\\end{center}\n\n\\end{document}\n", "meta": {"hexsha": "ff3a71746af5b72caf154b0a480877348586db1d", "size": 5447, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "homework/historical/hw2-spring16.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "homework/historical/hw2-spring16.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "homework/historical/hw2-spring16.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 29.9285714286, "max_line_length": 171, "alphanum_fraction": 0.6574261061, "num_tokens": 1892, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6688802603710086, "lm_q2_score": 0.8244619285331332, "lm_q1q2_score": 0.5514663094232259}}
{"text": "\\chapter{Isotropic energy probability densities in the laboratory frame}\n\\label{Sec:isotropic-lab}\nThe \\xendl\\ library supports several formats for energy probability densities\nwhich are isotropic in the\nlaboratory frame.  These data are typically used for equilibrium\nreactions and for fission neutrons.  Because the outgoing\ndistribution is isotropic, the probability density\n$\\pi(\\Elab', \\mulab   \\mid E)$ in Eq.~(\\ref{def_pi}) takes the form\n\\begin{equation}\n  \\pi(\\Elab', \\mulab   \\mid E) = \\pi_0(\\Elab' \\mid E).\n \\label{isotropic-pi}\n\\end{equation}\nConsequently, for the number-conserving\nmatrices only the $\\ell = 0$ Legendre order,\n\\begin{equation}\n   \\Inum_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E)\n   \\int_{\\calE_h'} d\\Elab' \\, \\pi_0(\\Elab' \\mid E)\n \\label{InumI4-0}\n\\end{equation}\nneeds to be computed,\nand Eq.~(\\ref{Ien}) for the energy-preserving transfer matrix becomes\n\\begin{equation}\n   \\Ien_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E) \n     \\int_{\\calE_h'} d\\Elab' \\, \\pi_0(\\Elab' \\mid E) \\Elab'.\n \\label{IenI4-0}\n\\end{equation}\n\nThe data $\\pi_0(\\Elab' \\mid E)$ may be given in \\xendl\\ either as\na table of values or as parameters in a function formula.\nBecause several of the function formulas for isotropic energy\nprobability densities are given in terms of incomplete gamma\nfunctions, these are discussed first.  This is followed by a presentation\nof the functional formulas for isotropic probability densities.  Then,\n the treatment of tables of~$\\pi_0(\\Elab' \\mid E)$ for isotropic emission\n in the laboratory frame is discussed.  The section closes with\n the special treatment of the evaporation of delayed fission neutrons.\n\n\\section{Computational aspects of incomplete gamma functions}\nMany\nof the function formulas for $\\pi_0(\\Elab' \\mid E)$ make use of the \nlower incomplete gamma function\n\\begin{equation}\n  \\gamma(\\kappa, x) =\n   \\int_0^x dt\\, t^{\\kappa - 1} e^{-t}\n  \\label{def-gamma}\n\\end{equation}\nwith $\\kappa > 0$.  The upper incomplete gamma\nfunction is\n\\begin{equation}\n  \\Gamma(\\kappa, x) =\n   \\int_x^\\infty dt\\, t^{\\kappa - 1} e^{-t},\n  \\label{def-Gamma}\n\\end{equation}\nand they are related by\n$$\n  \\gamma(\\kappa, x) + \\Gamma(\\kappa, x) = \\Gamma(\\kappa) =\n  \\int_0^\\infty dt\\, t^{\\kappa - 1} e^{-t}.\n$$\nIn order to reduce the difficulties of computer round-off,\nthe formula\n$$\n  \\int_a^b dt\\, t^{\\kappa - 1} e^{-t} =\n  \\gamma( \\kappa, b ) - \\gamma( \\kappa, a )\n$$\nis used when $0 \\le a < b \\le 1$, and\n$$\n  \\int_a^b dt\\, t^{\\kappa - 1} e^{-t} =\n  \\Gamma( \\kappa, a ) - \\Gamma( \\kappa, b )\n$$\nis used when $1 \\le a < b$.  
Either form may be used when\n$a < 1 < b$.\n\nNote that even though it is possible to write down\nexact formulas for $\\gamma(\\kappa, x)$ when $\\kappa$ is a\npositive integer, it is better not to use them in the computations.\nFor example, it is true that\n$$\n  \\gamma(2, x) = 1 - (1 + x)e^{-x}.\n$$\nFor values of $x$ near zero, this formula involves subtracting\nfrom 1 a number very close to 1 to get a result close to~$x^2/2$.\nThis may lead to bad round-off errors in the computer arithmetic, \nand it is far better to\nuse the software for~$\\gamma(2, x)$.\n\n\\section{Functional formulas for isotropic probability densities}\nThe functional formulas used in \\xendl\\ for energy \nprobability densities~$\\pi_0(\\Elab' \\mid E)$ are the evaporation model,\nthe Maxwell model, the Watt model, and the Madland-Nix model.\nThese models are discussed in turn.  For all of these models the\nenergy of the outgoing particle is in the laboratory frame.\n\n\\subsection{Evaporation model}\nFor the evaporation model the formula is\n\\begin{equation}\n  \\pi_0(\\Elab' \\mid E) = C \\Elab' \\expon{- \\frac{\\Elab'}{\\Theta(E)}}\n \\label{evaporationF}\n\\end{equation}\nwith $0 \\le \\Elab' \\le E - U$.  The value of $C$ in Eq.~(\\ref{evaporationF})\nis chosen so that\n$$\n  \\int_0^{E - U} d\\Elab' \\, \\pi_0(\\Elab' \\mid E) = 1.\n$$\nThat is, \n$$\n  C = \\frac{1}{\\Theta^2 \\gamma(2, (E - U)/\\Theta)}.\n$$\nThe data consist of the energy of the reaction $U$ and pairs of\nvalues $\\{E, \\Theta(E)\\}$.  The 1-dimensional interpolation methods\nof Section~\\ref{Sec:1d-interp} are used to\ndetermine the value of $\\Theta$ for intermediate values of the \nenergy $E$ of the incident particle.\n\nAccording to the comment on incomplete gamma functions above,\nfor the calculation of $\\Inum_{g,h,0}$ on an outgoing energy bin\n$E_0 \\le \\Elab' \\le E_1$, the expression\n$$\n  \\int_{E_0}^{E_1} d\\Elab' \\, \\pi_0(\\Elab' \\mid E) =\n   C\\Theta^2[\\gamma(2, E_1/\\Theta) - \\gamma(2, E_0/\\Theta)]\n$$\nis used when $E_0 \\le \\Theta$, and\n$$\n  \\int_{E_0}^{E_1} d\\Elab' \\, \\pi_0(\\Elab' \\mid E) =\n   C\\Theta^2[\\Gamma(2, E_0/\\Theta) - \\Gamma(2, E_1/\\Theta)]\n$$\nis used when $E_0 > \\Theta$.  Analogously, for the\ncalculation of $\\Ien_{g,h,0}$\n$$\n  \\int_{E_0}^{E_1} d\\Elab' \\, \\Elab' \\pi_0(\\Elab' \\mid E) =\n   C\\Theta^3[\\gamma(3, E_1/\\Theta) - \\gamma(3, E_0/ \\Theta)]\n$$\nis used when $E_0 \\le \\Theta$, and\n$$\n  \\int_{E_0}^{E_1} d\\Elab' \\, \\Elab' \\pi_0(\\Elab'  \\mid E) =\n   C\\Theta^3[\\Gamma(3, E_0/\\Theta) - \\Gamma(3, E_1/\\Theta)]\n$$\nis used otherwise.\n\n\\subsubsection{Input file data for the evaporation model}\nThe process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: evaporation spectrum}{}\\\\\nThese data are always in the laboratory frame,\\\\\n  \\Input{Product Frame: lab}{}\n\nOne item of model-dependent data in Section~\\ref{model-info}\nis the value of $U$ used in defining the range of outgoing\nenergies $\\Elab'$ in Eq.~(\\ref{evaporationF}), and it is given by\\\\\n  \\Input{U:}{$U$}\\\\\nThe other input data are the values of $\\Theta(E)$ in\nEq.~(\\ref{evaporationF}) depending on the incident energy~$E$.  \nAll of these energies, $U$, $E$, and $\\Theta(E)$, must be in the same\nunits as the energy bins in Sections~\\ref{Ein-bins} and~\\ref{Eout-bins}.\nThe format for these data is\\\\\n  \\Input{Theta: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, \\Theta(E)\\}$.  
\nThe interpolation flag is one of those for simple lists as in \nSection~\\ref{interp-flags-list}.\nFor example, in units of MeV one may have\\\\\n  \\Input{U: 11.6890}{}\\\\\n  \\Input{Theta: n = 2}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 12.0 1.04135}{}\\\\\n  \\Input{ 20.0 1.04135}{}\n\n\\subsection{Maxwell model}\nThe formula for the Maxwell model is\n\\begin{equation}\n  \\pi_0(\\Elab'  \\mid E) = C \\sqrt{\\Elab' } \\, \\expon{- \\frac{\\Elab' }{\\Theta(E)}}\n \\label{MaxwellF}\n\\end{equation}\nfor $0 \\le \\Elab'  \\le E - U$.  This model is often used for fission neutrons.\nThe value of $C$ in Eq.~(\\ref{MaxwellF}) is given by\n$$\n  C = \\frac{1}{\\Theta^{3/2} \\gamma(3/2, (E - U)/\\Theta)}.\n$$\nBecause of round-off problems with small values of $x$,\nit is unwise to use the mathematically equivalent formula\n$$\n  \\gamma(3/2, x) =\n  \\frac{\\sqrt{\\pi}}{2}\\, \\erf{\\sqrt{x}} - \\sqrt{x}\\,e^{-x}.\n$$\nThe data consist of the energy of the reaction $U$ and pairs of\nvalues $\\{E, \\Theta(E)\\}$.  The parameter $\\Theta$ is interpolated\nby the methods of Section~\\ref{Sec:1d-interp} to obtain intermediate values. \n\nDepending on the value of $E_0/\\Theta$,\nthe calculation of $\\Inum_{g,h,0}$ on an outgoing energy bin\n$E_0 \\le \\Elab'  \\le E_1$ uses the expression\n$$\n  \\int_{E_0}^{E_1} d\\Elab'  \\, \\pi_0(\\Elab'  \\mid E) =\n   C\\Theta^{3/2}[\\gamma({3/2}, E_1/\\Theta) - \\gamma({3/2}, E_0/\\Theta)]\n$$\nor\n$$\n  \\int_{E_0}^{E_1} d\\Elab'  \\, \\pi_0(\\Elab'  \\mid E) =\n   C\\Theta^{3/2}[\\Gamma({3/2}, E_0/\\Theta) - \\Gamma({3/2}, E_1/\\Theta)].\n$$\nAnalogously, the calculation of $\\Ien_{g,h,0}$ uses either\n$$\n  \\int_{E_0}^{E_1} d\\Elab'  \\, \\Elab' \\pi_0(\\Elab'  \\mid E) =\n   C\\Theta^{5/2}[\\gamma({5/2}, E_1/\\Theta) - \\gamma({5/2}, E_0/\\Theta)]\n$$\nor\n$$\n  \\int_{E_0}^{E_1} d\\Elab'  \\, \\Elab' \\pi_0(\\Elab'  \\mid E) =\n   C\\Theta^{5/2}[\\Gamma({5/2}, E_0/\\Theta) - \\Gamma({5/2}, E_1/\\Theta)].\n$$\n\n\\subsubsection{Input file data for the Maxwell model}\nThe process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: Maxwell spectrum}{}\\\\\nAgain, this data is in the laboratory frame,\\\\\n  \\Input{Product Frame: lab}{}\n\nOne item of model-dependent data in Section~\\ref{model-info}\nis the value of $U$ used in defining the range of outgoing\nenergies $\\Elab'$ in Eq.~(\\ref{MaxwellF}), and it is given by\\\\\n  \\Input{U:}{$U$}\\\\\nThe other input data are the values of $\\Theta(E)$ in\nEq.~(\\ref{MaxwellF}) depending on the incident energy~$E$.  \nThese energies, $U$, $E$, and $\\Theta(E)$, must all be in the same\nunits as the energy bins in Sections~\\ref{Ein-bins} and~\\ref{Eout-bins}.\nThe format for such data is\\\\\n  \\Input{Theta: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, \\Theta(E)\\}$.  
\nThe interpolation flag is one of those for simple lists as in \nSection~\\ref{interp-flags-list}.\nFor example, in units of MeV one may have\\\\\n  \\Input{U: -20}{}\\\\\n  \\Input{Theta: n = 2}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 1.0e-11  1.28}{}\\\\\n  \\Input{ 20.0  1.28}{}\n\n\\subsection{Watt model}\nAnother model sometimes used for fission neutrons in \\xendl\\ is the Watt\nformula\n\\begin{equation}\n  \\pi_0(\\Elab'  \\mid E) = C \\sinh{\\sqrt{b\\Elab' }}\\, \\expon{- \\frac{\\Elab' }{a}}\n \\label{WattF}\n\\end{equation}\nfor $0 \\le \\Elab'  \\le E - U$.\nThe value of $C$ in Eq.~(\\ref{WattF})\nis given by\n$$\n  \\frac{1}{C} =\n  \\frac{az\\sqrt{\\pi}}{2}\\, \\expon{z^2}\n  \\left(\n    \\erf{y - z} + \\erf{y + z}\n  \\right) -\n  a \\expon{-y^2} \\sinh{\\sqrt{b(E - U)}}\n$$\nwith $y = \\sqrt{(E - U)/a}$ and $z = \\sqrt{ab /4}$.\nThe data consist of the energy of the reaction $U$ and pairs of\nvalues $\\{E, a(E)\\}$ and $\\{E, b(E)\\}$.  For intermediate incident\nenergies $E$, the parameters $b$ and~$a$ are interpolated by\nthe methods of Section~\\ref{Sec:1d-interp}.\n\n\\subsubsection{Input file data for the Watt model}\nThe process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: Watt spectrum}{}\\\\\nThis data is in the laboratory frame,\\\\\n  \\Input{Product Frame: lab}{}\n\nOne item of model-dependent data in Section~\\ref{model-info}\nis the value of $U$ used in defining the range of outgoing\nenergies $\\Elab'$ in Eq.~(\\ref{WattF}), and it is given by\\\\\n  \\Input{U:}{$U$}\\\\\nThe other input data are the values of $a(E)$ and $b(E)$ in\nEq.~(\\ref{WattF}).  \nThe energies, $U$, $E$, and $a(E)$, must be in the same\nunits as the energy bins in Sections~\\ref{Ein-bins} and~\\ref{Eout-bins},\nand the units for $b(E)$ are the reciprocal of these units.\nThe format for these data is\\\\\n  \\Input{a: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, a(E)\\}$ and\\\\\n  \\Input{b: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, b(E)\\}$.\nThe interpolation flags for $a$ and $b$ are those for simple lists as in \nSection~\\ref{interp-flags-list}.\nFor example, with energies in MeV one may have\\\\\n  \\Input{U: -10}{}\\\\\n  \\Input{a: n = 11}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 1.000000e-11    9.770000e-01}{}\\\\\n  \\Input{ 1.500000e+00   9.770000e-01}{}\\\\\n  \\Input{}{ $\\cdots$}\\\\\n     \\Input{ 3.000000e+01 1.060000e+00}{}\\\\\n  \\Input{b: n = 11}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 1.000000e-11    2.546000e+00}{}\\\\\n  \\Input{ 1.500000e+00   2.546000e+00}{}\\\\\n  \\Input{}{ $\\cdots$}\\\\\n     \\Input{ 3.000000e+01 2.620000e+00}{}\n\n\\subsection{Madland-Nix model}\\label{Sec:Madland}\n\nThe Madland-Nix model~\\cite{Madland} for prompt fission neutrons uses \nthe formula\n\\begin{equation}\n  \\pi_0(\\Elab'  \\mid E) = \\frac{C}{2}\\, [g(\\Elab' , E_{FL}) + g(\\Elab' , E_{FH})]\n \\label{Madland-NixF}\n\\end{equation}\nfor\n\\begin{equation}\n 0 \\le \\Elab' \\le \\texttt{maxEout},\n \\label{Madland-Nix-E-range}\n\\end{equation}\nwhere \\texttt{maxEout} is one of the input parameters.\nNote that the range of outgoing energies Eq.~(\\ref{Madland-Nix-E-range})\nis independent of the incident energy.\nIn fact, the \\ENDF\\ manual~\\cite{ENDFB} gives no way for the data to specify the\nmaximum outgoing energy for the Madland-Nix model. 
\n\nIn Eq.~(\\ref{Madland-NixF}) $E_{FL}$ is the average kinetic energy of the light fission\nfragments, and $E_{FH}$ is the average kinetic energy of the heavy fission\nfragments.  The function $g(\\Elab' , E_F)$ in Eq.~(\\ref{Madland-NixF}) is given in terms\nof the parameters $T_m$ and\n\\begin{equation}\n   u_1 = \\frac{(\\sqrt{\\Elab' } - \\sqrt{E_F})^2}{T_m}, \\quad\n   u_2 = \\frac{(\\sqrt{\\Elab' } + \\sqrt{E_F})^2}{T_m}\n \\label{Madland-Nixu}\n\\end{equation}\nby the formula\n\\begin{equation}\n   g(\\Elab' , E_F) = \\frac{1}{3\\sqrt{E_F T_m}}\n   \\left[\n     u_2^{3/2}E_1(u_2) - u_1^{3/2}E_1(u_1) -\n     \\Gamma(3/2, u_2) + \\Gamma(3/2, u_1)\n   \\right],\n \\label{Madland-Nixg}\n\\end{equation}\nwhere $E_1$ denotes the exponential integral\n$$\n  E_1(x) = \\int_x^\\infty dt\\, \\frac{1}{t}e^{-t}.\n$$\nIt is clear from the definitions that\n$$\n  E_1(x) = \\Gamma(0, x),\n$$\nbut software to compute $\\Gamma(\\kappa, x)$ generally requires\nthat $\\kappa$ be positive.\nThe data for the Madland-Nix model contains the average energies\n$E_{FL}$ and $E_{FH}$ as well as pairs of values $\\{E, T_m(E)\\}$.\nThe interpolation rule for $T_m$ is also given.\n\nIf the range of outgoing energies is taken to be $0 \\le \\Elab'  < \\infty$ \nin Eq.~(\\ref{Madland-NixF}), then $C = 1$.  For other ranges of $\\Elab' $\nand for computation of $\\Inum_{g,h,0}$, it follows from Eq.~(\\ref{Madland-Nixg})\nthat it is necessary to compute integrals\n\\begin{equation}\n \\calG_i( a, b ) =  \\int_a^b d\\Elab'  \\, u_i^{3/2}E_1(u_i)\n \\label{Madland-Nix-u-integral}\n\\end{equation}\nand\n\\begin{equation}\n \\calH_i( a, b ) =   \\int_a^b d\\Elab'  \\, \\Gamma(3/2, u_i)\n \\label{Madland-Nix-Gamma-integral}\n\\end{equation}\nwith $i = 1$, 2.\n\nThe values of the integrals Eqs.~(\\ref{Madland-Nix-u-integral})\nand~(\\ref{Madland-Nix-Gamma-integral}) are conveniently expressed\nin terms of the parameters\n\\begin{equation}\n  \\alpha = \\sqrt{T_m}, \\quad\n  \\beta = \\sqrt{E_F},\n \\label{Madland-Nix-alpha-beta}\n\\end{equation}\n\\begin{equation}\n  A = \\frac{(\\sqrt{a} + \\beta)^2}{\\alpha^2}, \\quad\n  B = \\frac{(\\sqrt{b} + \\beta)^2}{\\alpha^2},\n \\label{Madland-Nix-A-B}\n\\end{equation}\nand\n\\begin{equation}\n  A' = \\frac{( \\beta - \\sqrt{a})^2}{\\alpha^2}, \\quad\n  B' = \\frac{(\\sqrt{b} - \\beta)^2}{\\alpha^2}.\n \\label{Madland-Nix-A-B-prime}\n\\end{equation}\n\nOne might think it sufficient to calculate\n$$\n  \\calG_i( 0, b )  \\quad \\text{and} \\quad\n  \\calH_i( 0, b ) \n$$\nin Eqs.~(\\ref{Madland-Nix-u-integral}) and~(\\ref{Madland-Nix-Gamma-integral})\nand to use\n\\begin{equation*}\n \\begin{split}\n   \\calG_i( a, b ) &= \\calG_i( 0, b ) - \\calG_i( 0, a ), \\\\\n   \\calH_i( a, b ) &= \\calH_i( 0, b ) - \\calH_i( 0, a )\n \\end{split}\n\\end{equation*}\nfor $i = 1$, 2.  
In fact, this approach is suitable only for $i = 2$.\nThe reason for the difficulty is seen from Eqs.~(\\ref{Madland-Nixu})\nand~(\\ref{Madland-Nix-alpha-beta}), in that\n\\begin{equation}\n  u_1^{3/2} = \\begin{cases}\n    (\\beta - \\sqrt{\\Elab' })^3 / \\alpha^3  \\quad &\\text{for $0 \\le \\Elab'  \\le \\beta^2$}, \\\\\n    (\\sqrt{\\Elab' } - \\beta)^3 / \\alpha^3 \\quad &\\text{for $\\Elab'  > \\beta^2$}.\n    \\end{cases}\n \\label{Madland-Nix-u1}\n\\end{equation}\n\nConsequently, the integrals used to compute $\\calG_i( a, b )$\nand~$\\calH_i( a, b )$ in Eqs.~(\\ref{Madland-Nix-u-integral}) \nand (\\ref{Madland-Nix-Gamma-integral}) are evaluated as\n\\begin{equation}\n  \\calG_1( a, \\beta^2 ) = \n    \\frac{\\alpha \\beta}{2} \\, \\gamma \\left( 2, A' \\right)\n       -\\frac{2 \\alpha^2}{5} \\, \\gamma \\left( \\frac{5}{2}, A' \\right) +\n       \\left[\n          \\frac{2 \\alpha \\sqrt{A'}}{5} - \\frac{\\beta}{2}\n        \\right] \\alpha {A'}^2 E_1( A' )\n       \\quad \\text{for $0 \\le a < \\beta^2$},\n\\label{Madland-Nix-G1a}\n\\end{equation}\n\\begin{equation}\n  \\calG_1( \\beta^2, b ) = \n    \\frac{\\alpha \\beta}{2} \\, \\gamma \\left( 2, B' \\right)\n       + \\frac{2 \\alpha^2}{5} \\, \\gamma \\left( \\frac{5}{2}, B' \\right) +\n       \\left[\n          \\frac{\\beta}{2} + \\frac{2 \\alpha \\sqrt{B'}}{5}\n        \\right] \\alpha {B'}^2 E_1( B' )\n       \\quad \\text{for $b > \\beta^2$},\n \\label{Madland-Nix-G1b}\n\\end{equation}\n\\begin{equation}\n \\begin{split}\n  \\calG_2( 0, b )  = &\n    \\frac{2 \\alpha^2}{5} \\, \\gamma \\left( \\frac{5}{2}, B \\right) -\n    \\frac{\\alpha \\beta}{2} \\, \\gamma \\left( 2, B \\right) -\n    \\frac{\\beta^5}{10 \\alpha^3} \\,e^{-B} + {}\\\\\n      & \\left[\n        \\frac{2 \\alpha^2}{5} B^{5/2} - \\frac{\\alpha \\beta}{2} {B}^2 +\n          \\frac{ \\beta^5}{10 \\alpha^3} \\right]  E_1( B) - C_1\n       \\quad \\text{for $b \\ge 0$},\n \\end{split}\n\\label{Madland-Nix-G2}\n\\end{equation}\n\\begin{equation}\n  \\calH_1(a, \\beta^2) = 2 \\alpha \\beta \\, \\gamma \\left( 2, A' \\right) -\n  \\alpha^2 \\, \\gamma \\left( \\frac{5}{2}, A' \\right) +\n     (\\beta^2 - a) \\, \\Gamma\\left( \\frac{3}{2}, A' \\right)\n     \\quad \\text{for $0 \\le a < \\beta^2$},\n\\label{Madland-Nix-H1a}\n\\end{equation}\n\\begin{equation}\n  \\calH_1(\\beta^2, b) = 2 \\alpha \\beta \\, \\gamma \\left( 2, B' \\right) +\n     \\alpha^2 \\, \\gamma \\left( \\frac{5}{2}, B' \\right) +\n     (b - \\beta^2) \\, \\Gamma\\left( \\frac{3}{2}, B' \\right)\n     \\quad \\text{for $b \\ge \\beta^2$},\n\\label{Madland-Nix-H1b}\n\\end{equation}\nand\n\\begin{equation}\n   \\calH_2(0, b) = \\alpha^2 \\, \\gamma \\left( \\frac{5}{2}, B \\right) -\n     2 \\alpha\\beta \\, \\gamma \\left( 2, B \\right) +\n     \\beta^2 \\, \\gamma \\left( \\frac{3}{2}, B \\right) +\n     b \\, \\Gamma \\left( \\frac{3}{2}, B \\right) - C_2\n     \\quad \\text{for $b > 0$}.\n\\label{Madland-Nix-H2}\n\\end{equation}\nIn the relations for $\\calG_2(0, b)$ and $\\calH_2(0, b)$ above, $C_1$\nand~$C_2$ are constants of integration.\n\n\\begin{figure}\n\\input{fig5-1}\n\\end{figure}\n\nIn order to illustrate how the above integration formulas may be \nderived, consider the case of Eq.~(\\ref{Madland-Nix-G1b})\nfor $\\calG_1( \\beta^2, b )$ defined in \nEq.~(\\ref{Madland-Nix-u-integral}) with $u_1$ as in\nEq.~(\\ref{Madland-Nix-u1}) and with~$b > \\beta^2$.  
Substitution\nof the definition of the exponential integral~$E_1$ gives the double\nintegral\n$$\n   \\calG_1( \\beta^2, b ) = \\int_{\\beta^2}^b d\\Elab'  \\,\n      u_1^{3/2} \\int_{u_1}^\\infty dt \\, \\frac{1}{t} \\, e^{-t}.\n$$\nThe region of integration for this integral is the union of the\ntwo shaded domains in Figure~\\ref{Fig:Madland-Nix}.\nThe integral over the darker shaded region of Figure~\\ref{Fig:Madland-Nix} is\n$$\n  J_{11} = \\int_{\\beta^2}^b d\\Elab'  \\, u_1^{3/2}\\int_{u_1}^{B'} dt\\, \\frac { e^{-t}}{t}.\n$$\nReversal of the order of integration transforms this integral to\n$$\n  J_{11} = \\int_0^{B'} dt\\, \\frac{1}{t} \\,e^{-t}\n    \\int_{\\beta^2}^{(\\alpha\\sqrt{t} + \\beta)^2}\n       d\\Elab' \\, u_1^{3/2}.\n$$\nUnder the substitution\n\\begin{equation*}\n  \\Elab'  = (\\alpha \\sqrt{u_1} + \\beta)^2,\n%  \\label{u_for_E}\n\\end{equation*}\nthe inner integral takes the form\n$$\n  \\int_{\\beta^2}^{(\\alpha\\sqrt{t} + \\beta)^2}\n       d\\Elab' \\, u_1^{3/2} =\n   \\int_0^t du_1 \\, u_1^{3/2}\\left( \n       \\alpha^2 + \\frac{\\alpha\\beta}{\\sqrt{u_1}} \n  \\right) =\n    \\frac{2\\alpha^2}{5}t^{5/2} + \\frac{\\alpha\\beta}{2}t^2.\n$$\nThus, it follows that the integral over the dark shaded region in\nFigure~\\ref{Fig:Madland-Nix} is\n\\begin{equation*}\n  J_{11} = \n    \\frac{2\\alpha^2}{5} \\, \\gamma(5/2, B') + \\frac{ \\alpha\\beta}{2} \\, \\gamma(2, B').\n%  \\label{intJ11}\n\\end{equation*}\nThis relation gives the first two terms on the right-hand side\nof Eq.~(\\ref{Madland-Nix-G1b}).\n\nThe other terms on the right-hand side of Eq.~(\\ref{Madland-Nix-G1b})\nresult from evaluation of the integral over the light shaded region in\nFigure~\\ref{Fig:Madland-Nix},\n$$\nJ_{12} = \\int_{\\beta^2}^b d\\Elab'  \\, u_1^{3/2} \\int_{B'}^\\infty dt\\, \\frac {e^{-t}}{t}\n = \\int_{B'}^\\infty dt\\, \\frac{1}{t} \\, e^{-t}\n    \\int_{\\beta^2}^{b}\n       d\\Elab' \\, u_1^{3/2}.\n$$  \n\n\\subsubsection{Input file data for the Madland-Nix model}\nThe process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: Madland-Nix spectrum}{}\\\\\nThis data is in the laboratory frame,\\\\\n  \\Input{Product Frame: lab}{}\n\nThe model-dependent data in Section~\\ref{model-info}\ncontains values of $E_{FL}$, the average kinetic energy of the\nlight fission fragment and $E_{FH}$, the average kinetic energy of the\nheavy fission fragment.  These parameters are given by\\\\\n  \\Input{EFL:}{$E_{FL}$}\\\\\n  \\Input{EFH:}{$E_{FH}$}\\\\\nThe user must also specify a maximum outgoing energy \n\\texttt{maxEout} for use in Eq.~(\\ref{Madland-Nix-E-range}).\n \nThe other input data are the values of $T_m$ as a function of\nincident energy in\nEq.~(\\ref{Madland-NixF}).  The format for these data is\\\\\n  \\Input{TM: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, T_m(E)\\}$.  
\nThe interpolation flag is one of those for simple lists as in \nSection~\\ref{interp-flags-list}.\nThe energies, $E_{FL}$, $E_{FH}$, $E$, and $T_m(E)$, must be in the same\nunits as the energy bins in Sections~\\ref{Ein-bins} and~\\ref{Eout-bins}.\nFor example, in MeV units one may have\\\\\n  \\Input{EFL: 1.029979}{}\\\\\n  \\Input{EFH: 0.5467297}{}\\\\\n  \\Input{maxEout: 60}{}\\\\\n  \\Input{TM: n = 38}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 1.0000000e-11   1.0920640e+00}{}\\\\\n  \\Input{ 5.0000010e-01  1.1014830e+00}{}\\\\\n  \\Input{}{ $\\cdots$}\\\\\n  \\Input{ 2.0000000e+01  1.1292690e+00}{}\n\n\\section{Energy probability density tables}\\label{Sec:isotropicTables}\nAnother form of isotropic probability density data $\\pi_0(\\Elab'  \\mid E)$\nEq.~(\\ref{isotropic-pi}) in \\xendl\\ is in the form of tables.  The computation\nof transfer matrices for such data given in the laboratory frame is\ndiscussed here.  For data in the center-of-mass frame, this is a\nspecial case of Legendre expansions discussed in Section~\\ref{Ch:Legendre-cm}\nwith Legendre order zero.\nFor given\nincident energies $E_i$, the data consist of pairs \n$\\{E_{k,j}', \\pi_0(E_{k,j}' \\mid E_k)\\}$ as in Eq.~(\\ref{EPtable}).\nFor such tabular data,\ncomputation of the integrals $\\Inum_{g,h,0}$ in Eq.~(\\ref{InumI4-0})\nand $\\Ien_{g,h,0}$ in Eq.~(\\ref{IenI4-0}) depends on the type\nof interpolation used between\ndifferent incident energies.\nThe effects of the unit-base map Eq.~(\\ref{unit-base-map}) are\ndiscussed here.  The considerations are the same, whether the\nunit-base map is used alone or as a component of interpolation\nby cumulative points.\n\nAfter the unit-base transformation Eq.~(\\ref{unit-base-map})\nthe integrals Eqs.~(\\ref{InumI4-0}) and~(\\ref{IenI4-0}) take the form\n\\begin{equation}\n   \\Inum_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E) \n   \\int_{\\widehat\\calE_h'} d\\widehat \\Elab'  \\,\n     \\widehat\\pi_0(\\widehat \\Elab'   \\mid E)\n \\label{InumhatI4-0}\n\\end{equation}\nand\n\\begin{equation}\n   \\Ien_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E) \n   \\int_{\\widehat\\calE_h'} d\\widehat \\Elab'   \\,\n     \\widehat\\pi_0(\\widehat \\Elab'   \\mid E) \\Elab'  .\n \\label{IenhatI4-0}\n\\end{equation}\nIn these integrals $\\widehat\\calE_h'$ denotes the result of mapping the \noutgoing energy bin $\\calE_h'$ with the transformation Eq.~(\\ref{unit-base-map}).\nFurthermore, $\\Elab'  $ in Eq.~(\\ref{IenhatI4-0}) is to be obtained from $\\widehat \\Elab'  $\nusing the inverse unit-base mapping Eq.~(\\ref{unitbaseInvert}).\n\n\\begin{figure}\n\\input{fig5-2}\n\\end{figure}\n\nFigure~\\ref{Fig:unit-base-region} illustrates the effect of\nthe unit-base map Eq.~(\\ref{unit-base-map}).   \nFor incident energies $E = E_{k-1}$ and~$E = E_k$,  \n1-dimensional interpolation is used\nto produce data at a common set of unit-base outgoing energies\n$\\{\\widehat E_j'\\}$. In the left-hand\nportion of Figure~\\ref{Fig:unit-base-region}, suppose that\nprobability densities $\\pi_0(\\Elab'   \\mid E)$ are given at incident energies\n$E = E_{k-1}$ and~$E = E_k$ and at unit-base outgoing energies\n $\\widehat E_{j-1}'$ and $\\widehat E_j'$.  \nThen for this set of data, the range of \nintegration over $E$ in Eqs.~(\\ref{InumhatI4-0}) or (\\ref{IenhatI4-0}) requires\nboth that $E_{k-1} < E < E_k$ and that $E$ be in the bin~$\\calE_g$.  
The\noutgoing energy~$\\Elab'  $ is required to be in the bin~$\\calE_h'$ and to satisfy\nthe constraint $\\widehat E_{j-1}' < \\widehat \\Elab'   < \\widehat E_j'$.\n\n The right-hand portion of Figure~\\ref{Fig:unit-base-region}\nshows a rectangle with vertices at $E = E_{k-1}$ and~$E = E_k$\nand at $\\widehat \\Elab'   = \\widehat E_{j-1}'$ and~$\\widehat \\Elab'   = \\widehat E_j'$,\nand data values $\\widehat\\pi_\\ell(\\widehat \\Elab'   \\mid E)$ are given at\nthese corners after any required interpolation in outgoing energy.  \nThe values of $\\widehat\\pi_\\ell(\\widehat \\Elab'   \\mid E)$\ninterior to this rectangle are determined by interpolation.\nThe contribution of this portion of the data to the transfer matrix is obtained\nby integrating Eqs.~(\\ref{InumhatI4-0}) or (\\ref{IenhatI4-0}) over the shaded\nregion in Figure~\\ref{Fig:unit-base-region}.\n\n\\subsection{Input of isotropic energy probability tables}\n\\label{Sec:isotropic-table-lab}\nThe process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: isotropic energy probability table}{}\\\\\nThis option permits either the center-of-mass or the laboratory frame.\nFor data in the laboratory frame, the command in\nSection~\\ref{Reference-frame} is\\\\\n  \\Input{Product Frame: lab}{}\n\nThe data as in Section~\\ref{model-info}\nfor tables of isotropic energy probability densities is entered\nin the format\\\\\n  \\Input{EEpPData: n = $K$}{}\\\\\n  \\Input{Incident energy interpolation:}{probability interpolation flag}\\\\\n  \\Input{Outgoing energy interpolation:}{list interpolation flag}\\\\\nThe interpolation flag for incident energy is one of those used for\nprobability density tables in Section~\\ref{interp-flags-probability},\nand that for outgoing energy is one for simple lists.\nThis information is followed\nby $K$ sections of the form\\\\\n  \\Input{Ein: $E$:}{$\\texttt{n} = J$}\\\\\nwith $J$ pairs of values of $\\Elab'$ and $\\pi_E(\\Elab'   \\mid E)$.\n\nAn example with energies in eV of the model-dependent section of the input file for\nisotropic energy probability density tables is\\\\\n  \\Input{EEpPData: n = 4}{}\\\\\n  \\Input{Incident energy interpolation: lin-lin unitbase}{}\\\\\n  \\Input{Outgoing energy interpolation: flat}{}\\\\\n    \\Input{ Ein:  1.722580000000e+07 : n = 34}{}\\\\\n     \\Input{\\indent  0.000000000000e+00  0.000000000000e+00}{}\\\\\n     \\Input{\\indent  1.000000000000e-08  0.000000000000e+00}{}\\\\\n     \\Input{\\indent  1.778280000000e-08  2.766140000000e-07}{}\\\\\n    \\Input{\\indent   3.162280000000e-08  4.918960000000e-07}{}\\\\\n  \\Input{\\indent }{ $\\cdots$}\\\\\n     \\Input{\\indent  5.623410000000e-01  8.396540000000e-01}{}\\\\\n     \\Input{\\indent  1.000000000000e+00  0.000000000000e+00}{}\\\\\n  \\Input{ $\\cdots$}{}\\\\\n  \\Input{ Ein:  2.000000000000e+07 : n = 38}{}\\\\\n     \\Input{\\indent  0.000000000000e+00  0.000000000000e+00}{}\\\\\n     \\Input{\\indent  7.500000000000e-03  0.000000000000e+00}{}\\\\\n     \\Input{\\indent  1.333710000000e-02  4.877750000000e-14}{}\\\\\n    \\Input{\\indent   2.371710000000e-02  8.674000000000e-14}{}\\\\\n  \\Input{\\indent  }{ $\\cdots$}\\\\\n   \\Input{\\indent  2.250000000000e+06  4.413810000000e-08}{}\\\\\n    \\Input{\\indent   2.750000000000e+06  0.000000000000e+00}{}\\\\\nNote that for these data it is not clear what should be used as the minimum outgoing energy.\nIn particular, for incident energy $E_0 = 1.72258 \\times 10^7$ eV, \nit is not clear whether it is more reasonable to set $\\Eminzero' = 
0$ or $\\Eminzero' = 1.77828\n\\times 10^{-8}$ eV in the unit-base interpolation.  The \\gettransfer\\ code uses $\\Eminzero' = 0$, \nto be consistent with Eq.~(\\ref{Eout-ranges}).\n\n\\section{General evaporation of delayed fission neutrons}\nFor some fissionable targets, the energy spectra data for delayed\nfission neutrons is represented in \\xendl\\ in the form\n\\begin{equation}\n  \\pi_0(\\Elab'   \\mid E) = g\\left(\\frac{\\Elab'}{\\Theta(E)}   \\right).\n \\label{general-evaporation}\n\\end{equation}\nFor this model, values of $\\Theta$ are given as a function of~$E$,\nand values of $g$ as a function of $x = \\Elab'/\\Theta(E)$.  In fact, all of the\ngeneral evaporation data in \\xendl\\ have $\\Theta$ constant,\nand the \\gettransfer\\ code requires that $\\Theta$ be constant.\nThe isotropic probability density $\\pi_0(\\Elab'   \\mid E)$ in \nEq.~(\\ref{general-evaporation}) is then independent of~$E$.\nIn this case, the integrals\n$\\Inum_{g,h,0}$ in Eq.~(\\ref{InumI4-0}) and $\\Ien_{g,h,0}$ in Eq.~(\\ref{IenI4-0})\nneeded for the transfer matrix become simply products of 1-dimensional\nintegrals\n$$\n   \\Inum_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E)\n   \\int_{\\calE_h'} d\\Elab'   \\, g(\\Elab' /\\Theta  )\n$$\nand\n$$\n   \\Ien_{g,h,0} =\n     \\int_{\\calE_g} dE \\, \\sigma ( E ) M(E) w(E) \\widetilde \\phi_0(E) \n     \\int_{\\calE_h'} d\\Elab'   \\, g(\\Elab' /\\Theta  ) \\Elab'  .\n$$\n\n\\subsection{Input of data for the general evaporation model}\nFor the general evaporation model, the process identifier in Section~\\ref{data-model} is\\\\\n  \\Input{Process: general evaporation}{}\\\\\nThis data is in the laboratory frame,\\\\\n  \\Input{Product Frame: lab}{}\n\nThe model-dependent data in Section~\\ref{model-info}\nconsist of pairs $\\{E, \\Theta(E)\\}$ and of pairs\n$\\{x, g(x)\\}$ with $x = \\Elab'/\\Theta$.  
The format for these data is\\\\\n  \\Input{Theta: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{E, \\Theta(E)\\}$ and\\\\\n  \\Input{g: n = $n$}{}\\\\\n  \\Input{Interpolation:}{interpolation flag}\\\\\nwith $n$ pairs of entries $\\{x, g(x)\\}$.\nIn both cases, the interpolation flag is one of those for simple lists as in \nSection~\\ref{interp-flags-list}.\nThe $\\Theta$ parameter is dimensionless, and the units for $E$ and~$x$\nmust be the same as those for the energy bins.\nFor example, in MeV one may have\\\\\n  \\Input{Theta: n = 2}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{1.0e-11   1.0}{}\\\\\n  \\Input{20.0      1.0}{}\\\\\n  \\Input{g: n = 185}{}\\\\\n  \\Input{Interpolation: lin-lin}{}\\\\\n  \\Input{ 0.0000000e+00   3.1433980e-01}{}\\\\\n  \\Input{ 1.0000000e-02   2.8124280e+00}{}\\\\\n  \\Input{ 2.0000000e-02   3.1373560e+00}{}\\\\\n  \\Input{ }{ $\\cdots$}\\\\\n  \\Input{ 1.8400000e+00  0.0000000e+00}{}\\\n", "meta": {"hexsha": "57869fea0f9911df9cc30f33b1e1c83678a118e7", "size": 29720, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Merced/Doc/isotropic.tex", "max_stars_repo_name": "brown170/fudge", "max_stars_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-08-29T23:46:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T10:16:25.000Z", "max_issues_repo_path": "Merced/Doc/isotropic.tex", "max_issues_repo_name": "brown170/fudge", "max_issues_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-04T16:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-01T01:54:34.000Z", "max_forks_repo_path": "Merced/Doc/isotropic.tex", "max_forks_repo_name": "brown170/fudge", "max_forks_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-03T22:41:41.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T22:54:43.000Z", "avg_line_length": 39.3642384106, "max_line_length": 98, "alphanum_fraction": 0.6545087483, "num_tokens": 11102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8244619220634457, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.5514662996538345}}
{"text": "% Standard Article Definition\n\\documentclass[]{article}\n\n% Page Formatting\n\\usepackage[margin=1in]{geometry}\n\\setlength\\parindent{0pt}\n\n% Graphics\n\\usepackage{graphicx}\n\n% Math Packages\n\\usepackage{physics}\n\\usepackage{amsmath, amsfonts, amssymb, amsthm}\n\\usepackage{mathtools}\n\n% Code Def\n\\usepackage{listings}\n\n% Section Heading Settings\n\\usepackage{enumitem}\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\\renewcommand*{\\thesection}{Problem \\arabic{section}}\n\\renewcommand*{\\thesubsection}{\\alph{subsection})}\n\\renewcommand*{\\thesubsubsection}{\\quad \\quad \\roman{subsubsection})}\n\n%Custom Commands\n\\newcommand{\\Rel}{\\mathcal{R}}\n\\newcommand{\\R}{\\mathbb{R}}\n\\newcommand{\\C}{\\mathbb{C}}\n\\newcommand{\\N}{\\mathbb{N}}\n\\newcommand{\\Z}{\\mathbb{Z}}\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\toI}{\\xrightarrow{\\textsf{\\tiny I}}}\n\\newcommand{\\toS}{\\xrightarrow{\\textsf{\\tiny S}}}\n\\newcommand{\\toB}{\\xrightarrow{\\textsf{\\tiny B}}}\n\n\\newcommand{\\divisible}{ \\ \\vdots \\ }\n\\newcommand{\\st}{\\ : \\ }\n\n\n% Theorem Definition\n\\newtheorem{definition}{Definition}\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{lemma}{Lemma}\n\\newtheorem{proposition}{Proposition}\n\n\n%opening\n\n\\title{MATH 5301 Elementary Analysis - Homework 9}\n\n\\author{Jonas Wagner}\n\n\\date{2021, November 5\\textsuperscript{th}}\n\n\\begin{document}\n\n\\maketitle\n\n% Problem 1 ----------------------------------------------\n\\section{}\nLet $\\norm{\\cdot}_a$ and $\\norm{\\cdot}_b$ be two equivalent norms on $\\R^n$.\n\\begin{definition}\n    For $\\norm{\\cdot}_a, \\norm{\\cdot}_b$ on $S$, \n    $\\norm{\\cdot}_a$ is said to be \\underline{stronger} then $\\norm{\\cdot}_b$ if \n    \\[\n        \\forall \\{x_n\\} \\subset S \\st x_n \\xrightarrow[d_a]{} x \\implies x_n \\xrightarrow[d_b]{} x\n    \\]\n\\end{definition}\n\\begin{definition}\n    $\\norm{\\cdot}_a$ and $\\norm{\\cdot}_b$ are said to be \\underline{equivalent},  $\\norm{\\cdot}_a \\sim \\norm{\\cdot}_b$,\n    if $\\norm{\\cdot}_a$ is stronger then $\\norm{\\cdot}_b$ \n    and $\\norm{\\cdot}_b$ is stronger then $\\norm{\\cdot}_a$. 
\n    This means that\n    \\[\n        \\norm{\\cdot}_a \\sim \\norm{\\cdot}_b \n            \\iff \\exists{\\alpha,\\beta \\in \\R_{>0}} : \n            \\forall_{x\\in S} \\ \\alpha \\norm{x}_b \\leq \\norm{x}_a \\leq \\beta \\norm{x}_b\n    \\]\n\\end{definition}\n\n%Part a\n\\subsection{Prove that if the set $A$ is closed in the $a$-norm, then it is closed in $b$-norm.}\n\n\\begin{definition}\n    The set $A \\subset V$ is called \\underline{open} if \n    $$\\forall_{x\\in A} \\exists_{\\epsilon>0} : B_\\epsilon(x)\\subset A$$\n    or equivalently,\n    \\[\\forall_{x \\in A} \\exists_{\\epsilon>0} : \\forall_{y \\in V}\\norm{x - y} < \\epsilon \\implies y \\in A\\]\n\\end{definition}\n\\begin{definition}\n    The set $A \\subset V$ is called \\underline{closed} if $A^c$ is open.\n\\end{definition}\n\n\\begin{theorem}\n    If the set $A$ is closed in the $a$-norm, then it is closed in $b$-norm.\n    \\begin{proof}\n        Set $A$ being closed in $a$-norm implies $A^c$ is open in $a$-norm.\n        \\[\\forall_{x \\in A^c} \\exists_{\\epsilon_a>0} : \\forall_{y \\in S} \\ \\norm{x - y}_a < \\epsilon_a \\implies y \\in A^c\\]\n        Additionally, the equivalence of $\\norm{\\cdot}_a$ and $\\norm{\\cdot}_b$ means that\n        \\[\\exists_{\\alpha,\\beta > 0} : \n        \\forall_{x\\in S} \\  \\alpha \\norm{x}_b \\leq \\norm{x}_a \\leq \\beta \\norm{x}_b\\]\n        Therefore, $\\norm{x - y}_a \\leq \\beta \\norm{x-y}_b$ and then \n        \\begin{align*}\n            \\forall_{x \\in A^c} \\exists_{\\epsilon_a>0} \n            &\\st \\forall_{y \\in S} \\ \\norm{x - y}_a \\leq \\beta \\norm{x - y}_b < \\epsilon_a \\implies y \\in A^c\\\\\n            \\forall_{x \\in A^c} \\exists_{\\epsilon_b>0} &\\st \\forall_{y \\in S} \\ \\norm{x - y}_b < \\epsilon_b \\implies y \\in A^c\n        \\end{align*}\n        where $ \\epsilon_b \\leq \\cfrac{\\epsilon_a}{\\beta}$\n    \\end{proof}\n\\end{theorem}\n\n\\newpage\n%Part b\n\\subsection{Prove that if the set $A$ is compact in the $a$-norm then it is compact in the $b$-norm.}\n\\begin{definition}\n    Let $(S,d)$ be a metric space with $A \\subset S$,\n    \\begin{enumerate}\n        \\item A family $\\{U_\\alpha\\}_{\\alpha \\in A}$, with $U_\\alpha \\subset S$, is a \\textbf{\\underline{cover}} of the set $A$ if \n        $$A \\subset \\bigcup_{\\alpha\\in A} U_\\alpha$$\n        \\item A cover $\\{U_\\alpha\\}_{\\alpha \\in A}$ of $A$ is an \\textbf{\\underline{open cover}} if $\\forall_{\\alpha \\in A}$ $U_\\alpha$ is an open set.\n        \\item $\\{V_\\beta\\}_{\\beta\\in B}$ is called a \\textbf{\\underline{subcover}} of $\\{U_\\alpha\\}_{\\alpha \\in A}$ if\n        \\begin{enumerate}\n            \\item $\\{V_\\beta\\}_{\\beta\\in B}$ is a cover of $A$\n            \\item $\\forall_{\\beta \\in B} \\exists_{\\alpha \\in A} V_\\beta = U_\\alpha$\n        \\end{enumerate}\n        \\item A cover with a finite number of sets is called a \\textbf{\\underline{finite cover}}.\n    \\end{enumerate}\n\\end{definition}\n\\begin{definition}\n    For $A \\subset (S,d)$, $A$ is \\textbf{\\underline{compact}} if for every open cover of $A$ there exists a finite subcover.\n    This is equivalent to saying that every sequence in $A$ has a subsequence converging to a point in $A$. 
(i.e.)\n    \\[\\forall_{{a_k}_{k\\in \\N}} \\exists_{{a_{n_k}}} : a_{n_k} \\to a \\in A\\]\n\\end{definition}\n\\begin{definition}\n    A sequence $\\{x_{n}\\}$ is called \\underline{Cauchy} if \n    \\[\n        \\forall_{\\epsilon>0} \\ \\exists_{N \\in \\N} \\ \\forall_{l_1,l_2 \\geq N} \\norm{x_{n_{l_1}} - x_{n_{l_2}}} < \\epsilon\n    \\]\n\\end{definition}\n\n\\begin{theorem}\n    If the set $A$ is compact in the $a$-norm, then it is compact in the $b$-norm.\n    \\begin{proof}\n        Set $A$ being compact in the $a$-norm means that every sequence in $A$ has a subsequence satisfying the Cauchy property:\n        \\[\n            \\forall_{\\epsilon>0} \\ \\exists_{N \\in \\N} \\ \\forall_{l_1,l_2 \\geq N} \\norm{x_{n_{l_1}} - x_{n_{l_2}}}_a < \\epsilon\n        \\]\n        Additionally, the equivalence of $\\norm{\\cdot}_a$ and $\\norm{\\cdot}_b$ means that\n        \\[\n            \\exists_{\\alpha,\\beta > 0} : \\forall_{x\\in S} \\ \\alpha \\norm{x}_b \\leq \\norm{x}_a \\leq \\beta \\norm{x}_b\n        \\]\n        Therefore, $\\norm{x - y}_a \\leq \\beta \\norm{x-y}_b$ and then \n        \\begin{align*}\n            \\forall_{\\epsilon_a>0} \\ \\exists_{N \\in \\N} \\ \\forall_{l_1,l_2 \\geq N} \n            &\\norm{x_{n_{l_1}} - x_{n_{l_2}}}_a \\leq \\beta \\norm{x_{n_{l_1}} - x_{n_{l_2}}}_b < \\epsilon_a\\\\\n            \\forall_{\\epsilon_b>0} \\ \\exists_{N \\in \\N} \\ \\forall_{l_1,l_2 \\geq N} \n            &\\norm{x_{n_{l_1}} - x_{n_{l_2}}}_b < \\epsilon_b\n        \\end{align*}\n        where $ \\epsilon_b \\leq \\cfrac{\\epsilon_a}{\\beta}$\n    \\end{proof}\n\\end{theorem}\n\n% Problem 2\n\\newpage\n\\section{}\nConsider the set $l^{\\infty}$ of all real-valued sequences, endowed with the sup-norm: $\\norm{l}_\\infty = \\sup_{n\\in \\N} \\abs{l_n}$.\n\n%Part a\n\\subsection{Prove that $l^\\infty$ is complete.}\n\n\\begin{definition}\n    A metric space $(S,d)$ is a \\underline{complete metric space} if every Cauchy sequence in $S$ converges.\n\\end{definition}\n\\begin{definition}\n    The set $A$ in a norm space $(S,\\norm{\\cdot})$ is a \\underline{complete} set if every Cauchy sequence in $A$ converges to a limit in $A$.\n\\end{definition}\n\n\\begin{definition}\n    Let the set $l^\\infty$ be the set of real-valued sequences:\n    \\[\n        l^{\\infty} := \\qty{\\{l_n\\}_{n\\in \\N} \\st l_n \\in \\R}\n    \\]\n\\end{definition}\n\\begin{definition}\n    Let the norm space be defined as $(l^\\infty, \\norm{l}_\\infty)$ where \\[\\norm{l}_\\infty = \\sup_{n\\in\\N} \\abs{l_n}\\]\n\\end{definition}\n\n\\begin{theorem}\n    The set $l^\\infty$ is complete.\n    \\begin{proof}\n        Let $\\{l_m\\}$ denote any Cauchy sequence in $l^\\infty$.\n        For all $m\\geq 1$, write\n        \\[\n            l_m = \\{x_1^{(m)}, x_2^{(m)}, \\cdots\\} \\in l^\\infty\n        \\]\n        Clearly, $\\forall_{j \\in \\N}$ the sequence $\\{x_j^{(m)}\\}$ is a Cauchy sequence in $\\R$, and therefore it converges to some $x_j \\in \\R$.\n        Since $\\R$ is complete, the limits form $l = \\{x_j\\}_{j \\in \\N}$ with $\\lim_{m \\to \\infty} l_m = l$ and $l \\in l^\\infty$, so the set $l^\\infty$ is complete (every Cauchy sequence in $l^\\infty$ converges within $l^\\infty$).\n    \\end{proof}\n\\end{theorem}\n\n%Part b\n\\subsection{Prove that $l^\\infty$ is not compact.}\n\n\\begin{theorem}\n    The set $l^\\infty$ is not compact.\n    \\begin{proof}\n        Set $l^\\infty$ not being compact means that there exists an open cover 
\n%Part b\n\\subsection{Prove that $l^\\infty$ is not compact.}\n\n\\begin{theorem}\n    The set $l^\\infty$ is not compact.\n    \\begin{proof}\n        By the sequential characterization of compactness, it suffices to exhibit a sequence in $l^\\infty$ with no convergent subsequence.\n        For $m \\in \\N$, let $e_m \\in l^\\infty$ be the sequence whose $m$-th entry is $1$ and whose other entries are $0$. For all $m \\neq m'$,\n        \\[\n            \\norm{e_m - e_{m'}}_\\infty = 1\n        \\]\n        so no subsequence of $\\{e_m\\}$ is Cauchy, and hence no subsequence converges. Therefore $l^\\infty$ is not compact.\n    \\end{proof}\n\\end{theorem}\n\n% Problem 3\n\\newpage\n\\section{}\nConsider the set $\\mathbb{B}([0,1],\\R)$ of all bounded real-valued functions on the unit interval endowed with the sup-norm: $\\norm{f}_\\infty = \\sup_{x \\in [0,1]} \\abs{f(x)}$. Denote the closed unit ball as $B_1 := \\qty{f \\in \\mathbb{B}([0,1],\\R) \\st \\norm{f}_\\infty \\leq 1}$.\n\n\n%Part a\n\\subsection{Prove $B_1$ is closed.}\n\\begin{theorem}\n    $B_1$ is closed.\n    \\begin{proof}\n        $B_1$ being closed is equivalent to $B_1^c$ being open. Let $f \\in B_1^c$, so $\\norm{f}_\\infty > 1$, and set $\\epsilon = \\norm{f}_\\infty - 1 > 0$. For any $g \\in \\mathbb{B}$ with $\\norm{f-g}_\\infty < \\epsilon$, the reverse triangle inequality gives\n        \\[\n            \\norm{g}_\\infty \\geq \\norm{f}_\\infty - \\norm{f-g}_\\infty > \\norm{f}_\\infty - \\epsilon = 1\n        \\]\n        so $g \\in B_1^c$. Hence $B_\\epsilon(f) \\subset B_1^c$ for every $f \\in B_1^c$, which demonstrates $B_1^c$ is open, and therefore, $B_1$ is closed.\n    \\end{proof}    \n\\end{theorem}\n\n%Part b\n\\subsection{Prove that $B_1$ is bounded.}\n\n\\begin{theorem}\n    $B_1$ is bounded, i.e.\n    \\[\\exists_{N} \\st \\forall_{f \\in B_1} \\ \\norm{f}_\\infty < N\\]\n    \\begin{proof}\n        Since, by definition, $\\norm{f}_\\infty \\leq 1$ for every $f \\in B_1$, the set $B_1$ is bounded by any $N > 1$.\n    \\end{proof}\n\\end{theorem}\n\n%Part c\n\\subsection{Prove that $B_1$ is not compact.}\n\n\\begin{theorem}\n    $B_1$ is not compact.\n    \\begin{proof}\n        By the sequential characterization, it suffices to find a sequence in $B_1$ with no subsequence converging in the sup-norm. For $n \\in \\N$, define $f_n \\in B_1$ by\n        \\[\n            f_n(x) := \n            \\begin{cases}\n                1 & x = \\frac{1}{n}\\\\\n                0 & x \\neq \\frac{1}{n}\n            \\end{cases}\n        \\]\n        Then $\\norm{f_n}_\\infty = 1$ and, for all $n \\neq m$, $\\norm{f_n - f_m}_\\infty = 1$. Hence no subsequence of $\\{f_n\\}$ is Cauchy, so no subsequence converges, and $B_1$ is not compact.\n    \\end{proof}\n\\end{theorem}\n
\n\n% Problem 4\n\\newpage\n\\section{}\nLet $\\{V, \\norm{\\cdot}\\}$ be a normed space.\nShow that the function $f(x) = \\norm{x} \\st V \\to \\R$ is continuous on $V$.\n\n\\begin{definition}\n    A function $f : (S_1,d_1) \\to (S_2,d_2)$ is continuous on $S_1$ if \n    \\[\n        \\forall_{x\\in S_1} \\forall_{\\epsilon>0} \\exists_{\\delta(x,\\epsilon)>0} \\forall_{y\\in S_1} d_1(x,y) < \\delta \\implies d_2(f(x),f(y)) < \\epsilon\n    \\]\n\\end{definition}\n\n\\begin{theorem}\n    The function $f(x)$ is continuous on $V$, i.e.\n    \\[\n        \\forall_{x\\in V} \\forall_{\\epsilon>0} \\exists_{\\delta(x,\\epsilon)>0} \\forall_{y\\in V} \\norm{x - y} < \\delta \\implies \\abs{f(x) - f(y)} < \\epsilon\n    \\]\n    \\begin{proof}\n        By the triangle inequality, $\\norm{x} \\leq \\norm{x - y} + \\norm{y}$ and $\\norm{y} \\leq \\norm{y - x} + \\norm{x}$, so\n        \\[\n            \\abs{f(x) - f(y)} = \\abs{\\norm{x} - \\norm{y}} \\leq \\norm{x - y}\n        \\]\n        which is the reverse triangle inequality. Given $\\epsilon > 0$, take $\\delta = \\epsilon$; then $\\norm{x - y} < \\delta$ implies $\\abs{f(x) - f(y)} \\leq \\norm{x - y} < \\epsilon$, therefore $f(x) = \\norm{x}$ is continuous on $V$.\n    \\end{proof}\n\\end{theorem}\n\n% Problem 5\n\\newpage\n\\section{}\n$(X,d_1)$ and $(Y,d_2)$ are two metric spaces.\nAssume also that $Y$ is a vector space.\nConstruct an example of two continuous functions $f,g : X \\to Y$ such that $f + g$ is discontinuous.\n\n\\begin{definition}\n    Let $X = \\R$ with the usual metric $d_1(x,y) = \\abs{x - y}$, and let $Y = \\R$ as a vector space, endowed with the metric\n    \\[\n        d_2(a,b) := \\abs{h(a) - h(b)}\n    \\]\n    where $h : \\R \\to \\R$ is the bijection\n    \\[\n        h(y) := \n        \\begin{cases}\n            1 & y = 0\\\\\n            0 & y = 1\\\\\n            y & y \\neq 0, 1\n        \\end{cases}\n    \\]\n    Since $h$ is injective, $d_2$ inherits positivity, symmetry and the triangle inequality from $\\abs{\\cdot}$, so $d_2$ is a metric. Note also that $h = h^{-1}$.\n\\end{definition}\n\\begin{definition}\n    Let $f : X \\to Y$ be defined by $f(x) := h(x)$, and let $g : X \\to Y$ be the constant function $g(x) := 1$.\n\\end{definition}\n
\n\\begin{theorem}\n    Functions $f$ and $g$ are continuous, but $f + g$ is discontinuous.\n    \\begin{proof}\n        \\begin{lemma}\n            A function $u : (X,d_1) \\to (Y,d_2)$ is continuous if and only if $h \\circ u : (X,d_1) \\to (\\R,\\abs{\\cdot})$ is continuous.\n            \\begin{proof}\n                By construction, $d_2(u(x),u(y)) = \\abs{(h \\circ u)(x) - (h \\circ u)(y)}$, so the two statements of continuity coincide.\n            \\end{proof}\n        \\end{lemma}\n\n        \\begin{lemma}\n            $f$ and $g$ are continuous functions.\n            \\begin{proof}\n                $h \\circ f = h \\circ h$ is the identity on $\\R$ (as $h = h^{-1}$), and $h \\circ g$ is the constant function $h(1) = 0$. Both are continuous in the usual sense, so by the previous lemma $f$ and $g$ are continuous as maps into $(Y,d_2)$.\n            \\end{proof}\n        \\end{lemma}\n\n        \\begin{lemma}\n            $f + g$ is a discontinuous function.\n            \\begin{proof}\n                Here $(f+g)(x) = h(x) + 1$. For $x \\notin \\{-1,0,1\\}$ we have $h(x) = x$ and $x + 1 \\notin \\{0,1\\}$, so\n                \\[\n                    (h \\circ (f+g))(x) = h(x+1) = x + 1 \\to 1 \\quad (x \\to 0)\n                \\]\n                while $(h \\circ (f+g))(0) = h(h(0) + 1) = h(2) = 2$. Hence $h \\circ (f+g)$ is discontinuous at $x = 0$, and by the first lemma so is $f + g$.\n            \\end{proof}\n        \\end{lemma}\n    \\end{proof}\n\\end{theorem}\n\n\n\n\n% Problem 6\n\\newpage\n\\section{}\nConstruct an example of a sequence $\\{f_n\\}$ of nowhere continuous functions $[0,1] \\to \\R$ such that $f_n$ converge in the sup-norm to continuous functions.\n\n\\begin{definition}\n    Let $\\mathbb{NC}$ be defined as the set of all nowhere continuous functions from $[0,1] \\to \\R$, i.e\n    \\[\n        \\mathbb{NC} := \\qty{\n            f : [0,1] \\to \\R \\st \\forall_{x \\in [0,1]} \\exists_{\\epsilon>0} \\forall_{\\delta>0} \\exists_{y \\in [0,1]} \\st 0 < \\abs{x - y} < \\delta \\land \\abs{f(x) - f(y)} \\geq \\epsilon\n        } \n    \\]\n\\end{definition}\n\\begin{definition}\n    Let $1_\\Q : [0,1] \\to \\R$ be defined as the \\underline{Dirichlet function}, the indicator function for the set of rational numbers $\\Q$, i.e\n    \\[\n        1_\\Q (x) := \n        \\begin{cases}\n            1 & x \\in \\Q\\\\\n            0 & x \\notin \\Q\n        \\end{cases}\n    \\]\n    which is a nowhere continuous function with the binary output of $1$ or $0$.\n\\end{definition}\n\n\\begin{definition}\n    Let the sequence $\\{f_n\\} \\subset \\mathbb{NC}$ be defined by\n    \\[\n        \\{f_n\\}_{n\\in\\N} := \n        \\qty{f_n \\st f_n(x) = \\qty(\\frac{2}{n}) \\qty(1_\\Q (x) - 0.5)}\n    \\]\n    Each $f_n$ jumps by $\\frac{2}{n}$ between rational and irrational arguments, so $f_n$ is nowhere continuous.\n\\end{definition}\n
\n\\begin{theorem}\n    In the sup-norm, $\\norm{f}_\\infty = \\sup_{x \\in [0,1]} \\abs{f(x)}$, the sequence of nowhere continuous functions $\\{f_n\\}$ converges to the continuous function $f \\equiv 0$.\n    \\begin{proof}\n        For every $x \\in [0,1]$,\n        \\[\n            \\abs{f_n(x)} = \\qty(\\frac{2}{n}) \\abs{1_\\Q(x) - 0.5} = \\frac{1}{n}\n        \\]\n        so $\\norm{f_n - 0}_\\infty = \\sup_{x \\in [0,1]} \\abs{f_n(x)} = \\frac{1}{n}$.\n        Given $\\epsilon > 0$, choose $N > \\frac{1}{\\epsilon}$; then $\\forall_{n \\geq N}$, $\\norm{f_n - 0}_\\infty = \\frac{1}{n} < \\epsilon$.\n        Hence $f_n \\to 0$ in the sup-norm, and the limit $f \\equiv 0$ is continuous even though each $f_n$ is nowhere continuous.\n    \\end{proof}\n\\end{theorem}\n\n\n\n\n\n\n\\end{document}\n", "meta": {"hexsha": "2cfcec218e6759132c5c54bfa37c7468a279489e", "size": 19083, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Homework/HW9/MATH5301-HW9.tex", "max_stars_repo_name": "jonaswagner2826/MATH5301", "max_stars_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T05:26:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T05:26:53.000Z", "max_issues_repo_path": "Homework/HW9/MATH5301-HW9.tex", "max_issues_repo_name": "jonaswagner2826/MATH5301", "max_issues_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Homework/HW9/MATH5301-HW9.tex", "max_forks_repo_name": "jonaswagner2826/MATH5301", "max_forks_repo_head_hexsha": "40de090ba1a936b406aa8d4c4383be2cf1418f29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.1271551724, "max_line_length": 276, "alphanum_fraction": 0.5558874391, "num_tokens": 6634, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.668880247169804, "lm_q2_score": 0.8244619177503205, "lm_q1q2_score": 0.551466291326925}}
{"text": "\\documentclass[../main.tex]{subfiles}\n\n\\begin{document}\n\n\\chapter{Background} \\label{chapter:background}\n\nThis chapter describes the relevant theory behind software bug prediction and what the state of the art has been able to achieve. \n\n\\section{Machine Learning} \n\nMachine learning can be defined as the study of algorithms that learn from a set of observed samples to predict values for unseen samples \\cite{koza1996automated}. These algorithms have recently gained traction and become more widely applied in products and services due to the increase in computing power, the availability of open source machine learning frameworks and vast amounts of data. Machine learning has become a popular approach for software defect prediction models, as it eliminates the need for a programmer to explicitly set rules that identify a commit or file as being risky or not. Machine learning approaches can automatically spot patterns in data that may have been difficult for people to find, especially if the dimensionality of the input space is large. Some of the main challenges with these algorithms are to obtain substantial quantities of data and to ensure that the model's predictions are reliable.\n\nIn the context of software defect prediction, machine learning models are used to solve a binary classification problem. Classification involves taking samples of data and identifying their correct category (\\textit{risky} or \\textit{not risky}) given their features. There are various algorithms available for classification, which differ by their mathematical formulations.\n\n\\subsection{Logistic Regression}\n\nLogistic regression is a binary classification model that takes real-valued inputs and returns their probability of belonging to one of two classes, 0 or 1. The logistic regression model makes predictions by applying the sigmoid function $\\sigma(z) = \\frac{1}{1+e^{-z}}$ upon a linear model $f(x) = x^T\\theta$ (where $x$ is our input and $\\theta$ is the vector of model parameters) \\cite{bishop2006pattern}. When provided an input vector $x$, the function $\\sigma(f(x))$ will predict the probability that $x$ belongs to the class 1 whereas $1-\\sigma(f(x))$ computes the probability that the class of $x$ is 0. The final label that is predicted is based on the most likely class; for example, if $\\sigma(f(x)) > 0.5$, the predicted class is 1.  \n\nThe optimal value of $\\theta$ can be found using maximum likelihood estimation, which aims to find the parameter $\\theta$ that maximizes the likelihood $p(\\textbf{t}|\\theta)$ (where $\\textbf{t}$ is a vector containing the output labels). Typically, this maximization problem is transformed into a minimization problem by taking the negative log of the likelihood. Then, iterative methods such as gradient descent can be used to obtain the optimal parameters. \n\n\n\\subsection{Decision Trees}\n\nDecision trees are a type of tree-based model which classify inputs by using a set of conditional rules \\cite{grzymala1993selected}. Every node in the decision tree represents a single conditional rule applied to an input feature; for example, a dataset containing information about people could be split into two subsets by using the rule $Age < 30$. After a decision rule has been evaluated, the input is passed further down the tree along one of two possible branches. One heuristic algorithm used for creating a decision tree is ID3, which constructs a tree using a greedy, top-down approach \\cite{quinlan1986induction}. Given a dataset $D$ and some features, ID3 chooses a feature that has not been included in the tree yet such that it provides the largest information gain. A decision rule is created upon this feature, which allows the dataset to be partitioned. The algorithm stops when all features have been used or when all elements of a partition have the same label. 
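\n\nTo make the information-gain criterion concrete, the following is a minimal Python sketch; the toy ages, labels and the candidate rule $Age < 30$ are invented for illustration and are not data used in this thesis.\n\\begin{verbatim}\nimport math\nfrom collections import Counter\n\ndef entropy(labels):\n    # Shannon entropy of a multiset of class labels, in bits.\n    if not labels:\n        return 0.0\n    n = len(labels)\n    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())\n\ndef information_gain(ages, labels, threshold):\n    # Entropy reduction from splitting on the rule "Age < threshold".\n    left = [l for a, l in zip(ages, labels) if a < threshold]\n    right = [l for a, l in zip(ages, labels) if a >= threshold]\n    n = len(labels)\n    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)\n    return entropy(labels) - remainder\n\nages = [22, 25, 31, 45, 28]\nlabels = [1, 1, 0, 0, 0]  # hypothetical risky / not risky labels\nprint(information_gain(ages, labels, 30))  # about 0.42 bits of gain\n\\end{verbatim}\nID3 evaluates such a gain for every candidate feature and greedily picks the rule with the largest value.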
\n\n\\subsection{Ensemble Methods}\n\nAn issue with decision trees is that an individual tree can be very prone to overfitting the training data, resulting in a high-variance model. In order to combat this, one can rely on ensemble techniques, which involve combining multiple base models when training. One example of an ensemble learning technique is bagging, which trains multiple classifiers independently and aggregates their outputs \\cite{breiman1996bagging}. Given that we want to train $n$ models, bagging will randomly sample the training set (with replacement) and train a model on each sample so that each individual model has a different view of the data. A random forest model applies bagging by training an ensemble of decision trees. When a prediction has to be made for a classification task, it aggregates the individual predictions by taking the majority vote across the trees. \n\nBoosting methods, on the other hand, involve sequential training of an ensemble of classifiers to convert weak classifiers (models that do slightly better than random) into stronger ones \\cite{sewell2008ensemble}. Unlike in bagging methods such as random forest, the classifiers are trained in dependence on each other. When trained sequentially, subsequent models will attempt to correct the mistakes of previous learners by giving an increased weight to data points that the last model misclassified. Some examples of commonly used boosting methods applied upon decision trees are AdaBoost and XGBoost. \n\n\\subsection{K-Nearest Neighbours}\n\nThe K-nearest neighbours algorithm is another example of a non-parametric algorithm. To make a prediction for a new data point $x$, the algorithm must find the $k$ elements of the training set that are closest to $x$ \\cite{cunningham2007k}. The proximity of points can be measured using distance metrics such as Euclidean distance or similarity metrics like the Pearson correlation \\cite{jaskowiak2011comparing}. The predicted label for $x$ will then be the most frequent label among its k-nearest neighbours. In the case of a binary classification problem, $k$ should be chosen to be odd to guarantee that a majority label exists. When the labels of the dataset are imbalanced (such as in the case of defect prediction models), the algorithm might predict the dominant class more due to how frequently it occurs rather than because it is the correct label \\cite{coomans1982alternative}. In this case, one can apply a weighted aggregation of the $k$ labels so that closer data points contribute more to the final prediction.  \n\n\n\\subsection{Semi-Supervised Learning}\n\nThe majority of software defect prediction models are built using supervised learning, where models are provided labelled datasets from which they can learn to make predictions for new data. Semi-supervised learning, on the other hand, relies on labelled and unlabelled data to train models. The motivation for semi-supervised approaches is that labelling data can be expensive; therefore, the idea is to use unlabelled data as much as possible together with the limited labelled data. \n\nAccording to \\cite{chapelle2006semi}, in order for semi-supervised learning to be effective, there are several assumptions that have to be made. 
First of all, data points that are close to each other or form clusters should have a high probability of having the same label. This is because semi-supervised methods propagate labels from labelled instances to unlabelled instances. Also, it is required that points in higher dimensions be representable in lower dimensions, so that points can meaningfully be considered close to each other. \n\n\\subsection{The Self-Training Algorithm} \\label{section:selfTrain}\n\nThe self-training algorithm is a type of semi-supervised algorithm that can act as a wrapper method for any classifier \\cite{triguero2015self}. When provided labelled and unlabelled data, the algorithm begins by training a classifier on the labelled data. It then makes predictions for the unlabelled data, and if a prediction has a high probability, the corresponding samples are included in the training dataset. Then a model trains on this new dataset, and these steps are repeated until the maximum number of iterations is reached or the predictions for the unlabelled data are identical to the previous iteration's predictions. \n\nFor the self-training algorithm, let us define the following. Let $X_L, y_L$ be our labelled inputs and outputs. $X_U$ is the unlabelled dataset and the model is defined as the function $f:X \\rightarrow Y$. The parameters for the algorithm are $I \\in \\mathbb{N}$, the max number of iterations, and $C \\in \\{c \\in \\mathbb{R} | 0 \\leq c \\leq 1\\}$, the minimum confidence for a prediction on unlabelled data. The variable $iter$ represents the current iteration and $y_{U\\_old}$ is a local variable for storing the previous predictions. The variable $idx$ contains the indices of all unlabelled samples such that their probability of being either class is greater than the threshold $C$. \n\nSome variations of this algorithm will move samples from $X_U$ and their predicted labels to the sets $X_L,y_L$. This version, on the other hand, does not remove entries from the unlabelled dataset.\n\n\\begin{algorithm} [H]\n    \\caption{Self-training algorithm \\cite{zhu2007semi}}\n    \\label{algorithm:selfTrain}\n    \\begin{algorithmic}[1]\n        \\Procedure{Self-Train($X_{L}, y_{L}, X_{U}, I, C, f$)}{}\n            \\State $iter = 0$\n            \\State $f = f.fit(X_L, y_L)$\n            \\State $y_U = f.predict(X_U)$\n            \\State $y_{U\\_old} =$ an empty array with same length as $y_U$\n            \n            \\While{$iter < I$ and ($y_{U\\_old}$ is empty or $y_U \\neq y_{U\\_old}$)}\n                \\State $y_{U\\_old} = y_{U}$\n                \n                \\State $idx = \\{1 \\leq i \\leq len(y_U) | P(y_i=1) > C$  $\\vee$ $P(y_i=0) > C\\}$\n                \n                \\State $X_{new} = \\{x_i \\in X_U | i \\in idx\\}$  \n                \\State $y_{new} = \\{y_i \\in y_U | i \\in idx\\}$\n                \n                \\State $f = f.fit(X_L.concat(X_{new}), y_L.concat(y_{new}))$\n\n                \\State $y_U = f.predict(X_U)$\n\n                \\State $iter = iter + 1$\n\n            \\EndWhile\n            \\State return $f$\n        \\EndProcedure\n    \\end{algorithmic}\n\\end{algorithm}\n\n\n\nOne variation of this algorithm involves using no thresholds when adding the unlabelled data to the labelled data. One can also make predictions on a smaller batch of unlabelled data rather than on all of it. As the algorithm works as a wrapper method to any base classifier, it is possible to make a direct comparison between a supervised learning algorithm and its self-trained counterpart. A risk with using this algorithm is that its performance could degrade if it spots an incorrect pattern early on and reinforces this on the unlabelled data. 
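\n\nA minimal runnable sketch of this wrapper, using scikit-learn's \\texttt{LogisticRegression} as the base classifier (the confidence threshold and iteration cap below are illustrative defaults, not the configuration used in this thesis):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef self_train(X_L, y_L, X_U, I=10, C=0.9):\n    # Fit on labelled data, then repeatedly pseudo-label confident\n    # unlabelled samples and refit, mirroring the pseudocode above.\n    f = LogisticRegression().fit(X_L, y_L)\n    y_U_old = None\n    for _ in range(I):\n        proba = f.predict_proba(X_U)\n        y_U = f.predict(X_U)\n        if y_U_old is not None and np.array_equal(y_U, y_U_old):\n            break  # predictions identical to the previous iteration\n        y_U_old = y_U\n        idx = proba.max(axis=1) > C  # class probability exceeds C\n        f = f.fit(np.vstack([X_L, X_U[idx]]),\n                  np.concatenate([y_L, y_U[idx]]))\n    return f\n\\end{verbatim}\nThe inputs are assumed to be NumPy arrays; entries are never removed from $X_U$, matching the variant described above.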
\n\n\\section{Evaluating Classification Models}\n\nIn order to verify that our models have performed well at a classification task, the models are evaluated on a test set and various performance measures are recorded. Below are the metrics that describe the possible outcomes that can occur when making a prediction:\n\n\\begin{itemize}\n    \\item $TP=$ The number of true positives, instances belonging to the target class that were correctly identified\n    \n    \\item $TN=$ The number of true negatives, instances belonging to the non-target class that were correctly identified\n    \n    \\item $FP=$ The number of false positives, instances belonging to the non-target class that were misclassified as the target class\n    \n    \\item $FN=$ The number of false negatives, instances belonging to the target class that were misclassified as the non-target class\n\\end{itemize}\n\nIn the context of software defect prediction, a false positive (FP) would describe a commit that was actually clean but predicted to be risky. A false negative (FN) would be a commit that was risky but classified as not risky. These four values can be used to reason about a model's performance and define further performance metrics. \n\n\\subsection{Accuracy, Precision, Recall and F1 score}\n\n\\vspace{20pt}\n\n\\begin{equation}\n    Accuracy = \\frac{TP+TN}{TP+TN+FP+FN}\n\\end{equation}\n\n\\vspace{10pt}\n\nAccuracy represents the fraction of total data samples our model predicted correctly. However, if a dataset has a class imbalance (which software defect prediction datasets typically do), accuracy can be a misleading metric for performance. Since one of the classes would be underrepresented, a machine learning classifier can learn to simply predict the majority class label for every sample it sees. Take the following example: with a dataset of emails where 99\\% are not spam and 1\\% are spam, a classifier could achieve 99\\% accuracy by labelling every email as not containing spam. However, this would lead to a very high number of false negatives and the classifier would not have actually learned what a spam email is. Therefore, we rely on additional metrics such as precision and recall.\n\n\\begin{equation}\n    Precision = \\frac{TP}{TP+FP}\n\\end{equation}\n\n\\begin{equation}\n    Recall = \\frac{TP}{TP+FN}\n\\end{equation}\n\n\\vspace{20pt}\n\nPrecision in the context of software defect prediction is the fraction of instances that were correctly classified as defective relative to all instances that were deemed defective by the classifier. Recall is the fraction of correctly identified defects relative to the actual number of defective instances. Precision and recall are typically in tension: as soon as a classifier is tuned to optimize recall, its precision will tend to decrease and vice-versa. 
\n\n\\begin{equation}\n    F1 = 2*\\frac{precision*recall}{precision+recall}\n\\end{equation}\n\nThe F1 score aims to combine precision and recall into one metric using the harmonic mean so that one can optimize models towards having a good combination of both.\n\n\\begin{comment}\n\\subsection{Receiver Operating Characteristic Curves}\n\nThe Receiver Operating Characteristic (ROC) curve is an indication of how well the model can tell apart various classes at various thresholds. By default, a classifier such as logistic regression sets its threshold at 0.5, if the output is greater than 0.5 then we assign an instance to one class and if it is lower than 0.5 it gets assigned to the other class. However, one can vary this threshold in order to deal with situations where the cost behind a false negative is not equally weighted with the cost of a false positive (for example, letting a bug go unnoticed versus a developer spending time reviewing supposedly faulty code that is actually clean). \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[scale=0.45]{images/Background/roc_cases.png}\n\\label{fig:roc}\n\\caption{The possible cases for the ROC-AUC \\cite{rocCases}}\n\\end{figure}\n\nThe ROC curve plots the false positive rate (FPR) and the true positive rate (TPR), also referred to as the recall. To quantify the performance of the classifier for various thresholds, the Area Under Curve (AUC) metric is used. As seen in figure \\ref{fig:roc}, ideal classifiers have $AUC = 1$ because their true positive rate is 1 regardless of the false positive rate. In the worst case, the $AUC = 0.5$ and this indicates that model cannot distinguish between the two classes. When the $AUC$ is less than 0.5, this indicates that the model is able to distinguish between the two classes but the labels are swapped.\n\n\\subsection{Precision-Recall Curves}\n\nAccording to Davis \\cite{davis2006relationship}, an issue with ROC curves is that they are insensitive to class imbalance. Because of this, a high ROC-AUC value can be misleading when measuring the performance of software defect prediction models. Davis suggests that one should also rely on precision-recall curves which plot the precision and recall of a classifier for various thresholds. Similar to ROC curves, one can compute the AUC of these curves to get an aggregate measure of how well the model performs. \n\n\\end{comment}\n\n\\subsection{K-fold Cross Validation}\n\nK-fold cross validation is a statistical method for generalizing a model's performance to unseen data \\cite{kale2011cross}. Cross validation has two main roles, the first being model selection during training and the second being the evaluation of a model's final performance. K-fold cross validation involves splitting up a dataset into $k$ partitions such that a classifier trains on $k-1$ of the partitions and is evaluated on the remaining one (designated the test set); this is repeated so that each partition serves as the test set exactly once. When performing model selection, a dataset will typically be split into a training, validation and test set. The validation set is used to tune the hyperparameters of a model whereas the test set is used for a final evaluation.\n\nA benefit of using cross validation is that it reduces bias and error when estimating the classifier's true performance. However, there is criticism of using cross validation as an evaluation method for software defect prediction. This is because a model may end up using future knowledge by training on samples that occurred later in time than those in the test set \\cite{tan2015online}. 
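\n\nThe following scikit-learn sketch connects these metrics to the evaluation procedure; the data is synthetic and purely illustrative:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import precision_score, recall_score, f1_score\nfrom sklearn.model_selection import cross_val_score\n\n# Synthetic "commit features" and risky / not risky labels.\nrng = np.random.default_rng(0)\nX = rng.normal(size=(200, 4))\ny = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)\n\n# Single train/test split: precision, recall and F1 on held-out data.\nclf = LogisticRegression().fit(X[:150], y[:150])\npred = clf.predict(X[150:])\nprint(precision_score(y[150:], pred),\n      recall_score(y[150:], pred),\n      f1_score(y[150:], pred))\n\n# 5-fold cross validation of the same model, scored by F1.\nprint(cross_val_score(LogisticRegression(), X, y, cv=5, scoring="f1").mean())\n\\end{verbatim}\nNote that a shuffled k-fold split like this one is exactly the setting the criticism above targets: a time-sensitive split would instead keep the test commits strictly later than the training commits.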
\n\n\\subsection{Metrics for Semi-Supervised Classifiers}\n\nWhen evaluating the semi-supervised classifier, one can measure how good the model's predictions are for the unlabelled data it used during training. One can define the reconstruction accuracy as the fraction of unlabelled instances from the training set that the self-training algorithm provided correct predictions for. Since the self-training algorithm runs for several iterations, this reconstruction accuracy is computed once the algorithm has terminated. Also, in each iteration the algorithm can make a new prediction for the same instance if the prediction never meets the confidence threshold, in which case the instance is never transferred from the unlabelled training set to the labelled training set. Therefore, the predictions that are considered in the reconstruction accuracy are those that were added to the labelled training set. Similarly, one can define the reconstruction precision and reconstruction recall in order to investigate whether there are discrepancies between the correctness of labels for the two classes.\n\n\\section{Related Work}\n\nThis section outlines some of the past achievements in the field of software defect prediction and the various techniques that were used. \n\n\\subsection{Software Defect Prediction}\n\nThe purpose of software defect prediction is to predict the likelihood of a defect within software. The proposed benefit of these models would be to assist software developers with bug detection and to prioritize testing efforts \\cite{briski2008minimizing}. The general process for constructing software defect prediction models begins with obtaining a dataset that labels files or commits as \\emph{risky} or \\emph{not risky} (a risky commit is defined as a commit that will produce a bug). Then, for each instance in the dataset (a commit or file), various features such as McCabe complexity or lines of code are computed \\cite{zhang2007predicting}. The features and labels are then used to train a supervised machine learning classifier and finally, the model can be used to make predictions for new instances. \n\nWhen building defect prediction models, there are various input features one could use. One could provide metadata concerning files or commits, such as lines of code added or the number of developers working on a particular file. One could also use text-based features such as commit messages or comments, which can be encoded using a bag-of-words representation. Finally, one could parse the source code itself with abstract syntax trees, which can be encoded as vectors \\cite{tan2015online}.\n\n%TODO \\subsubsection{File Level Defect Prediction}\n\n\\subsection{SZZ Algorithm}\n\nIn order to build defect prediction models, one requires labelled data that indicates whether or not a file or commit contains a bug. However, in practice this information may not always be available, and manually labelling commits is time-consuming. Research in defect prediction has benefited greatly from the Sliwerski Zimmermann Zeller (SZZ) algorithm, originally described in the paper \\textit{\"When do changes induce fixes?\"} \\cite{sliwerski2005changes}. The focus of the paper was limited to identifying bug inducing commits rather than creating defect prediction models, but their labelling algorithm is valuable for automatically labelling a large number of commits. 
\n\n\n\\begin{algorithm}\n\\caption{SZZ: labels commits as \\textit{risky} or \\textit{not risky} \\cite{sliwerski2005changes}}\n\\label{algorithm:szz}\n    \\begin{algorithmic}[1]\n    \\Procedure{SZZ()}{}\n        \\State $\\texttt{keywords}=\\{\"bug\",\"fix\"\\}$\n        \\State \\texttt{commits} = a list of all commits in a repository\n        \\State \\texttt{bugFixCommits} = $\\{\\texttt{c : c} \\in \\texttt{commits}\\texttt{ and }\\texttt{containsKeyword(c.message)}\\}$ \n        \\State \\texttt{result =} $\\emptyset$\n        \\For{\\texttt{i = 1 to length(bugFixCommits)}}\n                \\State \\texttt{bugFixCommit = bugFixCommits[i]}\n                \\State \\texttt{filesTouched = getFilesTouched(bugFixCommit)}\n                \\State \\texttt{linesTouched = getLinesTouched(bugFixCommit, filesTouched)}\n                \\State \\texttt{result = result $\\cup$ getBlame(filesTouched, linesTouched)}\n        \\EndFor\n    \n        \\State \\texttt{return result}\n    \\EndProcedure\n    \\end{algorithmic}\n\\end{algorithm}\n\nThe algorithm works by initially locating commits that fix a bug. This can be done by filtering commits by their message or comments for keywords such as \"bug\" or \"fix\". Those commits are then cross-referenced with a bug tracking database in order to confirm that these commits do resolve a bug. For each bug fixing commit, the files this commit touched as well as the exact lines changed within those files are computed using the \\texttt{diff} command in a version control system (such as CVS or Git). One can then use the \\texttt{blame} or \\texttt{annotate} command to find the previous commit responsible for causing a patch to be created. This previous commit is then labelled as a bug causing commit. A variant of this algorithm, denoted Approximate SZZ (ASZZ), follows the same essential steps but does not rely on a bug tracking database for identifying bug fixing commits. Since not all projects use a bug tracking database, using the ASZZ algorithm allows defect prediction models to train on more data. \n\n%SZZ revisited: verifying when changes induce fixes, Jaime Spacco (2008)) Claims that no one has previously verified the reliability of the SZZ algorithm. Outlines several improvements to SZZ, replace annotation graphs with line number maps, use DiffJ to ignore comments and formatting changes in source. Then finally verify that SZZ identified the correct changes. Manually select 25 bug fixing commits random commits from 37,000 commits of the Eclipse projects. These 25 commits changed 50 lines in total. Only 43 of the 50 changed lines actually fix a bug. (They evaluate reliability of finding bug-fix commits). \n
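\nAs a rough illustration of the approximate variant, the following Python sketch drives plain \\texttt{git} commands. The keyword filter, the helper names and the single-line blame are simplifications assumed here for brevity; the full algorithm diffs every changed line and, in the original SZZ, also cross-references an issue tracker.\n\\begin{verbatim}\nimport subprocess\n\ndef git(repo, *args):\n    # Run a git command inside the repository and return its stdout.\n    return subprocess.run(["git", "-C", repo, *args],\n                          capture_output=True, text=True, check=True).stdout\n\ndef bug_fix_commits(repo):\n    # Keyword filter over commit messages ("bug" or "fix", case-insensitive).\n    return git(repo, "log", "--all", "-i", "--grep=bug", "--grep=fix",\n               "--format=%H").split()\n\ndef blame_origin(repo, fix_commit, path, line):\n    # Blame the fixing commit's parent for one fixed line; the commit that\n    # last touched it is labelled as bug-inducing.\n    out = git(repo, "blame", "-l", "-L", f"{line},{line}",\n              f"{fix_commit}^", "--", path)\n    return out.split()[0].lstrip("^")  # leading field is the commit hash\n\\end{verbatim}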
\n\n\\subsection{File Level Defect Prediction}\n\nIn their work, Meng Yan et al. investigate the claim that unsupervised models may outperform supervised models for providing effort-aware defect predictions on the file level \\cite{yan2017file}. They replicate a previous study on the PROMISE dataset and their main finding was that unsupervised models can outperform supervised models for cross-project prediction but not for within-project predictions. \n\nIn Personalized Defect Prediction, the authors propose creating defect prediction models that are tailored to individual developers \\cite{jiang2013personalized}. The motivation for this is the fact that developers exhibit varying coding patterns and styles, and the number of bugs introduced can depend upon a developer's experience. Their results show that this method can improve a model's F1 score by 0.01-0.06.\n\nSong Wang et al. apply a Deep Belief Network to the file-level changes of software projects \\cite{wang2016automatically}. Their deep learning models are trained upon vector representations of the programs' Abstract Syntax Trees, and this approach was shown to improve within-project as well as cross-project defect prediction. The issue with non-semantic metrics (such as lines of code) is that programs performing different tasks may have the same values for these metrics. Their models were evaluated upon 10 open source Java projects from the PROMISE datasets. Their evaluation compares against two baselines, the first being non-semantic features that were available in the PROMISE datasets and the second being AST nodes that they provided to their models. \n\n\n\\subsection{Just-in-time Defect Prediction}\n\nA method for providing software defect predictions that has gained traction recently is Just-in-time (JIT) defect prediction. These models make predictions at the commit level rather than at the file level and utilize commit metadata for features rather than product metrics or source code analysis \\cite{kamei2013large}. The motivation for this approach is the fact that commits tend to be smaller and more manageable to review than files. Also, files can be edited by multiple developers, which makes it challenging to assign an individual developer to resolve the bug, whereas commits have a single author \\cite{kamei2013large}. A proposed benefit of these models is having the ability to provide immediate feedback to a developer as soon as a change is made. \n\nOne of the earlier works on change-level defect prediction, \\textit{\"Classifying Software Changes: Clean or Buggy?\"}, suggested that one should train models on file changes rather than files or functions themselves \\cite{kim2008classifying}. This paper differs from previous work in the sense that it used the SZZ algorithm to find the exact changes that cause a bug, whereas previous work relied on bug-fix data which could only show roughly where a bug occurred. Also, due to the fact that they represent their features using a bag-of-words encoding, their technique could work independently of programming language. \n\nIn their paper, Kamei et al. \\cite{kamei2013large} introduce the idea of Just-in-time prediction models, where models attempt to predict risky commits rather than risky files. They also ensure that the bug-fixing commits that the SZZ finds are present in a bug-tracking database for each project they investigate. They evaluated their models on six open source projects as well as five commercial projects to obtain an average accuracy of 68\\% and average recall of 64\\%. They also evaluate whether the model serves a useful purpose practically, i.e. whether prioritizing commits by risk level reduces the amount of work required when reviewing code. This paper is considered to be the most related to the research undertaken within this thesis, as the same datasets and similar modelling techniques are used. \n\nA follow-up work to the previous paper stated that predictions have to be feasible for projects that have little historical data available \\cite{fukushima2014empirical}. This can be a challenge for defect prediction models where the model only trains on data from a single repository. Instead, they investigate the feasibility of cross-project models, i.e. models that train on data from multiple projects. 
They find that within-project models do not perform well in a cross-project setting. They also find that within-project models can improve if the project they intend to predict on has similar correlations between predictor and dependent variables. It was shown that file-level predictions seem to achieve better cross-project performance than commit-level predictions. Finally, they also show that ensemble learning techniques benefit cross-project performance. \n\nIn \"Deep learning for just-in-time defect prediction\", the authors propose a deep-learning based approach to detect risky commits \\cite{yang2015deep}. They evaluate performance on the same 6 OSS projects that Kamei et al. used and showed that their approach outperformed previous models. They use a deep belief network algorithm for feature selection and then utilize deep neural networks to build the classifier. Their model, named Deeper, was able to find 39.96\\% more bugs and had statistically significantly higher F1 scores. This paper was one of the first examples of applying deep learning to this field, and they also investigate the effect that the quantity of data has on their model.\n\nYang et al. try a different approach to Just-In-Time (JIT) predictions that involves a two-layer ensemble learning classifier named TLEL that combines stacking and bagging techniques with decision trees \\cite{yang2017tlel}. The first layer consists of applying bagging to a decision tree to produce a random forest model. The second layer uses stacking and random undersampling to train multiple random forest models and combine them together. The motivation behind this two-layer approach, according to them, is that they observed that decision trees perform particularly well at this task, and that ensemble methods tend to produce better results than single models. They compare their approach to three previous models, Deeper, DNC and MKEL. For evaluating their models, they rely on the F1 score and PofB20, which measures the cost effectiveness of the model (how well the model can find bugs by only reviewing 20\\% of all lines of code). \n\nIn the paper \"Supervised vs Unsupervised Models: A Holistic Look at Effort-Aware Just-in-Time Defect Prediction\", they limit their focus to effort-aware JIT defect prediction, which tries to find risky commits while inspecting the code as minimally as possible \\cite{huang2017supervised}. They state that one challenge in this field is obtaining labelled training data, so they investigate the possibility of using unsupervised models. Their contribution is the CBS (Classify Before Sorting) algorithm, which makes predictions and then sorts the output by the size of the commit in ascending order. They focus on making their predictions effort-aware because it takes time to inspect a commit and they want to minimize the time a developer would need to do a review after they receive predictions. They also observed that the distribution of lines of code in commits is skewed: the majority of commits are small and very few are large, hence the need for applying logarithmic transformations to their features.\n\nIn his paper \"Online Defect Prediction for Imbalanced Data\", Ming Tan addresses the challenge of class imbalance in defect prediction \\cite{tan2015online}. Usually, the minority of entries in datasets will have a label corresponding to a bug. The author tried out four resampling methods which alter the training set in order to deal with the imbalance in the dataset. 
Another issue that is brought up is that previous studies rely on cross validation to evaluate model performance, which does not assess models in a realistic manner. The reason is that cross validation splits up the training and test sets in a way such that models can use future knowledge to make predictions. Instead, the author suggests that a time-sensitive classification should be used, so that the test set only contains commits that occur later in time than the commits provided in the training set. \n\n\\subsection{Practical Implementations of JIT Prediction Models}\n\nA recent open source implementation of a JIT risk prediction model is Commit Guru, which builds directly on top of the work presented in \"A large-scale empirical study of just-in-time quality assurance\" \\cite{rosen2015commit}. Commit Guru takes a GitHub repository and returns statistics on which commits may have high risk. The associated paper outlines the architecture of the tool as well as the usage of the SZZ algorithm and the classification model they use. As \\cite{nayrolles2018clever} points out, the limitation of this tool is that it cannot point out which exact code block caused a bug. Also, there has not been a formal study that investigates how useful the predictions are for developers and what performance we should expect in terms of precision and recall for developers to trust the tool. \n\nThe video games developer Ubisoft performed a study in collaboration with Concordia University in which they developed a tool named CLEVER, which builds upon Commit Guru for predicting risky commits \\cite{nayrolles2018clever}. They rely on a two-stage process: the first stage is similar to Commit Guru, as it predicts commits that are risky. The second stage then does a deeper analysis of those potentially risky commits by comparing code snippets with a database of known bugs to confirm that the commit does cause some sort of issue and is not a false positive. \n\n\\end{document}\n\n\n\n\n\\begin{comment}\n\\subsection{Challenges in defect prediction}\n\n\\subsubsection{Imbalanced data}\n\nTODO\n\nDue to the fact that the majority of changes to a codebase will function as intended, there will be an imbalance in the dataset between the two classes of risky and not risky. Machine learning algorithms tend to see a drop in performance when trained on imbalanced data. For example, given a classifier tasked with diagnosing cancer in patients where the rate of having cancer is 5\\%, the algorithm would be accurate 95\\% of the time if it simply diagnosed every patient as not having cancer. Despite the high accuracy, this would be a very poor classifier in practice as it hasn't actually learned to spot the minority class. The classifier performs this way because the risky samples are underrepresented so it assumes that in most cases, a label would not be risky. There is also the fact that false positives and false negatives are by default weighted equally but in practice a false negative tends to be worse than a false positive. In our example of defect prediction, it is much worse if a bug is introduced and goes unnoticed as opposed to having a developer inspect some code and find no bugs. \n\n\nhttps://medium.com/james-blogs/handling-imbalanced-data-in-classification-problems-7de598c1059f\n\nTypical approaches to address this involve re-sampling methods such as over sampling and undersampling. These methods increase or decrease instances of a class in a dataset until there is a 50/50 split. 
\n\\subsubsection{Skew in data}\n\\subsubsection{Collinearality between features}\n\\subsubsection{Within project vs cross project models}\n\nWithin projects models are defined to be models trained on a single repository whereas cross project models learn from a mix of previous projects. The importance of making this distinction is because defect prediction models should be able to make predictions on new projects with little historical data available. However, machine learning models in practice require a large amount of data to perform well. \n%http://posl.ait.kyushu-u.ac.jp/~kamei/publications/Fukushima_MSR2014.pdf\n\nTheir report found that having a high within-project performance can still lead to a low cross project performance. Kamei et al found that \n\n\\end{comment}\n", "meta": {"hexsha": "c4b85a24b77b833f82f4fdfb8f9f3afbc42c2f1a", "size": 34921, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/chapters/2_background.tex", "max_stars_repo_name": "Arsalan-Syed/Master_Thesis", "max_stars_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-08-29T17:45:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-19T18:01:36.000Z", "max_issues_repo_path": "report/chapters/2_background.tex", "max_issues_repo_name": "Arsalan-Syed/Master_Thesis", "max_issues_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/chapters/2_background.tex", "max_forks_repo_name": "Arsalan-Syed/Master_Thesis", "max_forks_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 131.7773584906, "max_line_length": 1105, "alphanum_fraction": 0.7943071504, "num_tokens": 7263, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972784807408, "lm_q2_score": 0.6334102498375401, "lm_q1q2_score": 0.5514452396703685}}
{"text": "\\section{Flux Corrected Transport scheme}\nEquation (\\ref{eq:tracer_int}) can be written as\n\\begin{eqnarray}\n%%\n\\left(\\rho q\\right)^{n+1}_{i,j,k} \n&=& \\left(\\rho q\\right)^{n}_{i,j,k}\n- \\frac{1}{\\Delta x \\Delta y \\Delta z}\n\\big[ \\nonumber\\\\\n&+&\n\\left[ C_{i+\\frac{1}{2},j,k} F_{i+\\frac{1}{2},j,k}^{high}\n+ \\left( 1 - C_{i+\\frac{1}{2},j,k}\\right) F_{i+\\frac{1}{2},j,k}^{low}\\right]\\nonumber\\\\\n&-&\n\\left[ C_{i-\\frac{1}{2},j,k} F_{i-\\frac{1}{2},j,k}^{high}\n+ \\left( 1 - C_{i-\\frac{1}{2},j,k}\\right) F_{i-\\frac{1}{2},j,k}^{low}\\right]\\nonumber\\\\\n&+&\n\\left[ C_{i,j+\\frac{1}{2},k} F_{i,j+\\frac{1}{2},k}^{high}\n+ \\left( 1 - C_{i,j+\\frac{1}{2},k}\\right) F_{i,j+\\frac{1}{2},k}^{low}\\right]\\nonumber\\\\\n&-&\n\\left[ C_{i,j-\\frac{1}{2},k} F_{i,j-\\frac{1}{2},k}^{high}\n+ \\left( 1 - C_{i,j-\\frac{1}{2},k}\\right) F_{i,j-\\frac{1}{2},k}^{low}\\right]\\nonumber\\\\\n&+&\n\\left[ C_{i,j,k+\\frac{1}{2}} F_{i,j,k+\\frac{1}{2}}^{high}\n+ \\left( 1 - C_{i,j,k+\\frac{1}{2}}\\right) F_{i,j,k+\\frac{1}{2}}^{low}\\right]\\nonumber\\\\\n&-&\n\\left[ C_{i,j,k-\\frac{1}{2}} F_{i,j,k-\\frac{1}{2}}^{high}\n+ \\left( 1 - C_{i,j,k-\\frac{1}{2}}\\right) F_{i,j,k-\\frac{1}{2}}^{low}\\right]\\nonumber\\\\\n\\big]\n\\label{eq:1step_integ_tracer}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n  F_{i+\\frac{1}{2},j,k}^{high,low} &=& \\Delta t \\Delta y \\Delta z (\\rho u)_{i+\\frac{1}{2},j,k} q_{i+\\frac{1}{2},j,k}^{high,low}\\\\\n  F_{i,j+\\frac{1}{2},k}^{high,low} &=& \\Delta t \\Delta z \\Delta x (\\rho v)_{i,j+\\frac{1}{2},k} q_{i,j+\\frac{1}{2},k}^{high,low}\\\\\n  F_{i,j,k+\\frac{1}{2}}^{high,low} &=& \\Delta t \\Delta x \\Delta y (\\rho w)_{i,j,k+\\frac{1}{2}} q_{i,j,k+\\frac{1}{2}}^{high,low}\n\\end{eqnarray}\nwith $(\\rho u)$, $(\\rho v)$ and $(\\rho w)$ denoting the mass fluxes in the $x$, $y$ and $z$ directions, respectively.\nThe anti-diffusive fluxes are defined as\n\\begin{eqnarray}\n  A_{i+\\frac{1}{2},j,k} &=& F_{i+\\frac{1}{2},j,k}^{high}-F_{i+\\frac{1}{2},j,k}^{low}\\\\\n  A_{i,j+\\frac{1}{2},k} &=& F_{i,j+\\frac{1}{2},k}^{high}-F_{i,j+\\frac{1}{2},k}^{low}\\\\\n  A_{i,j,k+\\frac{1}{2}} &=& F_{i,j,k+\\frac{1}{2}}^{high}-F_{i,j,k+\\frac{1}{2}}^{low}\n\\end{eqnarray}\n\nEquation (\\ref{eq:1step_integ_tracer}) can be rewritten as\n\\begin{eqnarray}\n%%\n\\left(\\rho q\\right)^{n+1}_{i,j,k} \n&=& \\left(\\rho q\\right)^{n}_{i,j,k}\n- \\frac{1}{\\Delta x \\Delta y \\Delta z}\n\\big[ \\nonumber\\\\\n&+&\n\\left[ F_{i+\\frac{1}{2},j,k}^{low}\n+ C_{i+\\frac{1}{2},j,k}  A_{i+\\frac{1}{2},j,k}\\right]\\nonumber\\\\\n&-&\n\\left[ F_{i-\\frac{1}{2},j,k}^{low}\n+ C_{i-\\frac{1}{2},j,k}  A_{i-\\frac{1}{2},j,k}\\right]\\nonumber\\\\\n&+&\n\\left[ F_{i,j+\\frac{1}{2},k}^{low}\n+ C_{i,j+\\frac{1}{2},k}  A_{i,j+\\frac{1}{2},k}\\right]\\nonumber\\\\\n&-&\n\\left[ F_{i,j-\\frac{1}{2},k}^{low}\n+ C_{i,j-\\frac{1}{2},k}  A_{i,j-\\frac{1}{2},k}\\right]\\nonumber\\\\\n&+&\n\\left[ F_{i,j,k+\\frac{1}{2}}^{low}\n+ C_{i,j,k+\\frac{1}{2}}  A_{i,j,k+\\frac{1}{2}}\\right]\\nonumber\\\\\n&-&\n\\left[ F_{i,j,k-\\frac{1}{2}}^{low}\n+ C_{i,j,k-\\frac{1}{2}}  A_{i,j,k-\\frac{1}{2}}\\right]\\nonumber\\\\\n\\big]\n\\label{eq:1step_integ_tracer2}\n\\end{eqnarray}\nIn practice, we calculate Eq.(\\ref{eq:1step_integ_tracer2}) by the \nfollowing steps:\n\\begin{enumerate}\n%\n\\item The tentative values are calculated by using the low-order flux:\n\\begin{eqnarray}\n\\left(\\rho q\\right)^{\\dagger}_{i,j,k} \n&=& \\left(\\rho q\\right)^{n}_{i,j,k}\\nonumber\\\\\n&-& \\frac{1}{\\Delta x \\Delta y \\Delta z}\n\\left[\n+ F_{i+\\frac{1}{2},j,k}^{low}-F_{i-\\frac{1}{2},j,k}^{low}\n+ F_{i,j+\\frac{1}{2},k}^{low}-F_{i,j-\\frac{1}{2},k}^{low}\n+ F_{i,j,k+\\frac{1}{2}}^{low}-F_{i,j,k-\\frac{1}{2}}^{low}\n\\right]\n\\end{eqnarray}\n%\n
\\item Allowable maximum and minimum values are calculated:\n\\begin{eqnarray}\n\\left(\\rho q\\right)^{\\max}_{i,j,k}\n&=& \\max [\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i,j,k},\\left(\\rho q\\right)^{n}_{i,j,k} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i-1,j,k},\\left(\\rho q\\right)^{n}_{i-1,j,k} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i+1,j,k},\\left(\\rho q\\right)^{n}_{i+1,j,k} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i,j-1,k},\\left(\\rho q\\right)^{n}_{i,j-1,k} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i,j+1,k},\\left(\\rho q\\right)^{n}_{i,j+1,k} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i,j,k-1},\\left(\\rho q\\right)^{n}_{i,j,k-1} ),\\nonumber\\\\\n&&\\max( \\left(\\rho q\\right)^{\\dagger}_{i,j,k+1},\\left(\\rho q\\right)^{n}_{i,j,k+1} ) \\nonumber\\\\\n&&]\\\\\n\\left(\\rho q\\right)^{\\min}_{i,j,k}\n&=& \\min [\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i,j,k},\\left(\\rho q\\right)^{n}_{i,j,k} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i-1,j,k},\\left(\\rho q\\right)^{n}_{i-1,j,k} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i+1,j,k},\\left(\\rho q\\right)^{n}_{i+1,j,k} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i,j-1,k},\\left(\\rho q\\right)^{n}_{i,j-1,k} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i,j+1,k},\\left(\\rho q\\right)^{n}_{i,j+1,k} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i,j,k-1},\\left(\\rho q\\right)^{n}_{i,j,k-1} ),\\nonumber\\\\\n&&\\min( \\left(\\rho q\\right)^{\\dagger}_{i,j,k+1},\\left(\\rho q\\right)^{n}_{i,j,k+1} ) \\nonumber\\\\\n&&]\n\\end{eqnarray}\n\\item Several values for the flux limiter are calculated; $P^{+}$ collects the anti-diffusive flux into the cell and $P^{-}$ the anti-diffusive flux out of the cell:\n\\begin{eqnarray}\nP_{i,j,k}^{+} &=& \n-\\min ( 0, A_{i+\\frac{1}{2},j,k} ) + \\max( 0, A_{i-\\frac{1}{2},j,k} )\\nonumber\\\\\n&&\n-\\min ( 0, A_{i,j+\\frac{1}{2},k} ) + \\max( 0, A_{i,j-\\frac{1}{2},k} )\\nonumber\\\\\n&&\n-\\min ( 0, A_{i,j,k+\\frac{1}{2}} ) + \\max( 0, A_{i,j,k-\\frac{1}{2}} )\\\\\nP_{i,j,k}^{-} &=& \n\\max ( 0, A_{i+\\frac{1}{2},j,k} ) - \\min( 0, A_{i-\\frac{1}{2},j,k} )\\nonumber\\\\\n&&\n+\\max ( 0, A_{i,j+\\frac{1}{2},k} ) - \\min( 0, A_{i,j-\\frac{1}{2},k} )\\nonumber\\\\\n&&\n+\\max ( 0, A_{i,j,k+\\frac{1}{2}} ) - \\min( 0, A_{i,j,k-\\frac{1}{2}} )\n\\end{eqnarray}\n\\begin{eqnarray}\nQ_{i,j,k}^{+} &=& \n\\left[\\left(\\rho q\\right)^{\\max}_{i,j,k} - \\left(\\rho q\\right)^{\\dagger}_{i,j,k} \\right]\n\\Delta x \\Delta y \\Delta z\\\\\nQ_{i,j,k}^{-} &=& \n\\left[\\left(\\rho q\\right)^{\\dagger}_{i,j,k}-\\left(\\rho q\\right)^{\\min}_{i,j,k} \\right]\n\\Delta x \\Delta y \\Delta z\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{i,j,k}^{+} &=& \n\\begin{cases}\n        \\min( 1, Q_{i,j,k}^{+}/P_{i,j,k}^{+} ) & {\\rm if~} P_{i,j,k}^{+}>0\\\\\n        0 & {\\rm if~} P_{i,j,k}^{+}=0\\\\\n\\end{cases}\n\\\\\nR_{i,j,k}^{-} &=& \n\\begin{cases}\n        \\min( 1, Q_{i,j,k}^{-}/P_{i,j,k}^{-} ) & {\\rm if~} P_{i,j,k}^{-}>0\\\\\n        0 & {\\rm if~} P_{i,j,k}^{-}=0\\\\\n\\end{cases}\n\\end{eqnarray}\n\\item The flux limiters at the cell wall are calculated:\n\\begin{eqnarray}\nC_{i+\\frac{1}{2},j,k} &=& \n\\begin{cases}\n        \\min( R_{i+1,j,k}^{+}, R_{i,j,k}^{-} ) & {\\rm if~} A_{i+\\frac{1}{2},j,k}\\geq 0\\\\\n        \\min( R_{i,j,k}^{+}, 
R_{i+1,j,k}^{-} ) & {\\rm if~} A_{i+\\frac{1}{2},j,k}<0\\\\\n\\end{cases}\n\\\\\nC_{i,j+\\frac{1}{2},k} &=& \n\\begin{cases}\n        \\min( R_{i,j+1,k}^{+}, R_{i,j,k}^{-} ) & {\\rm if~} A_{i,j+\\frac{1}{2},k}\\geq 0\\\\\n        \\min( R_{i,j,k}^{+}, R_{i,j+1,k}^{-} ) & {\\rm if~} A_{i,j+\\frac{1}{2},k}<0\\\\\n\\end{cases}\n\\\\\nC_{i,j,k+\\frac{1}{2}} &=& \n\\begin{cases}\n        \\min( R_{i,j,k+1}^{+}, R_{i,j,k}^{-} ) & {\\rm if~} A_{i,j,k+\\frac{1}{2}}\\geq 0\\\\\n        \\min( R_{i,j,k}^{+}, R_{i,j,k+1}^{-} ) & {\\rm if~} A_{i,j,k+\\frac{1}{2}}<0\\\\\n\\end{cases}\n\\end{eqnarray}\n\\end{enumerate}\n", "meta": {"hexsha": "a5b950921ab586129576ab6e312379e9312267fd", "size": 6991, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/descriptions/fct.tex", "max_stars_repo_name": "slayoo/scale", "max_stars_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2020-06-14T11:12:31.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-13T05:29:55.000Z", "max_issues_repo_path": "doc/descriptions/fct.tex", "max_issues_repo_name": "slayoo/scale", "max_issues_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-29T03:38:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-30T05:08:47.000Z", "max_forks_repo_path": "doc/descriptions/fct.tex", "max_forks_repo_name": "slayoo/scale", "max_forks_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-07-10T10:39:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-28T22:20:41.000Z", "avg_line_length": 41.8622754491, "max_line_length": 129, "alphanum_fraction": 0.5345444142, "num_tokens": 3363, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206765295399, "lm_q2_score": 0.611381973294151, "lm_q1q2_score": 0.5514180429714257}}
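In practice the four steps above map almost line-for-line onto array code.  The following is a minimal one-dimensional sketch in Python (illustrative only, not the actual implementation): \verb|f_low| and \verb|f_high| are assumed to hold the time-integrated fluxes $F^{low}$ and $F^{high}$ at the walls $i+\frac{1}{2}$, indexing is periodic via \verb|numpy.roll|, and $P^{-}$ is written as a positive sum of outgoing anti-diffusive fluxes (the usual Zalesak convention).

\begin{verbatim}
import numpy as np

def fct_update(rho_q, f_low, f_high, dx):
    a = f_high - f_low                       # anti-diffusive flux A at wall i+1/2
    # Step 1: tentative update with the low-order (monotone) flux
    rq_dag = rho_q - (f_low - np.roll(f_low, 1)) / dx
    # Step 2: allowable maximum/minimum over the cell and its neighbours
    hi = np.maximum(rho_q, rq_dag)
    lo = np.minimum(rho_q, rq_dag)
    rq_max = np.maximum(hi, np.maximum(np.roll(hi, 1), np.roll(hi, -1)))
    rq_min = np.minimum(lo, np.minimum(np.roll(lo, 1), np.roll(lo, -1)))
    # Step 3: P, Q and R; a[i] is A_{i+1/2}, np.roll(a, 1)[i] is A_{i-1/2}
    p_plus  = -np.minimum(0.0, a) + np.maximum(0.0, np.roll(a, 1))
    p_minus =  np.maximum(0.0, a) - np.minimum(0.0, np.roll(a, 1))
    q_plus  = (rq_max - rq_dag) * dx
    q_minus = (rq_dag - rq_min) * dx
    def ratio(q, p):                         # min(1, q/p) where p > 0, else 0
        return np.where(p > 0, np.minimum(1.0, q / np.where(p > 0, p, 1.0)), 0.0)
    r_plus, r_minus = ratio(q_plus, p_plus), ratio(q_minus, p_minus)
    # Step 4: limiter C at wall i+1/2, selected by the sign of A
    c = np.where(a >= 0.0,
                 np.minimum(np.roll(r_plus, -1), r_minus),
                 np.minimum(r_plus, np.roll(r_minus, -1)))
    f = f_low + c * a                        # limited flux F_low + C*A
    return rho_q - (f - np.roll(f, 1)) / dx
\end{verbatim}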
{"text": "\\section{The Counting Numbers}\n\n\\begin{frame}\n  \\begin{columns}\n    \\begin{column}{0.3\\textwidth}\n      \\centering\n      \\begin{tikzpicture}\n        \\draw (0,0) ellipse (1.5 and 1);\n        \\draw (0,3) ellipse (1.5 and 1);\n        \\node () at (0,3.5) {\\footnotesize vanilla};\n        \\node () at (0,3) {\\footnotesize chocolate};\n        \\node () at (0,2.5) {\\footnotesize strawberry};\n        \\node (A) at (0,0) {$1 \\quad  2 \\quad  3$};\n      \\end{tikzpicture}\n    \\end{column}\n    {\\color{gmitgrey!30}\\vrule{}} \\hspace{0.1\\textwidth}\n    \\begin{column}{0.5\\textwidth}\n      $\\mathbb{N} = \\{1,2,3,\\ldots\\}$ \\\\[8mm]\n      $1 \\leftrightarrow $ vanilla \\\\[8mm]\n      $2 \\leftrightarrow $ chocolate \\\\[8mm]\n      $3 \\leftrightarrow $ strawberry \\\\[8mm]\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n\n\\begin{frame}\n  \\begin{columns}\n    \\begin{column}{0.3\\textwidth}\n      \\color{gmitblue} \\fontsize{30}{10}\n      \\[2\\mathbb{N}\\]\n    \\end{column}\n    {\\color{gmitgrey!30}\\vrule{}} \\hspace{0.1\\textwidth}\n    \\begin{column}{0.5\\textwidth}\n      $\\mathbb{N} = \\{1,2,3,\\ldots\\}$ \\\\[4mm]\n      $2\\mathbb{N} = \\{2,4,6,\\ldots\\}$ \\\\[4mm]\n      $\\mathbb{N} \\leftrightarrow 2 \\mathbb{N}$ \\\\[8mm]\n      $1 \\leftrightarrow 2$ \\\\[4mm]\n      $2 \\leftrightarrow 4$ \\\\[4mm]\n      $3 \\leftrightarrow 6$ \\\\[4mm]\n      $\\ldots$ \\\\[4mm]\n    \\end{column}\n  \\end{columns}\n\\end{frame}\n\n\\begin{frame}[standout]\n\n\\begin{quote}\n  Computer programs are binary strings. \\\\[8mm]\n  Treat them as integers. \\\\[8mm]\n  They're countable. \\\\[8mm]\n  (Slight issue with leading zeros.)\n\\end{quote}\n\n\\end{frame}\n", "meta": {"hexsha": "23dc703c05279a7081454225452e59ef767de430", "size": 1571, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "content.tex", "max_stars_repo_name": "ianmcloughlin/slides-counting", "max_stars_repo_head_hexsha": "3d36fdcacf0abb8b8f90783ec692f039c8823b94", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content.tex", "max_issues_repo_name": "ianmcloughlin/slides-counting", "max_issues_repo_head_hexsha": "3d36fdcacf0abb8b8f90783ec692f039c8823b94", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content.tex", "max_forks_repo_name": "ianmcloughlin/slides-counting", "max_forks_repo_head_hexsha": "3d36fdcacf0abb8b8f90783ec692f039c8823b94", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0535714286, "max_line_length": 56, "alphanum_fraction": 0.5722469764, "num_tokens": 589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7606506635289836, "lm_q2_score": 0.7248702880639791, "lm_q1q2_score": 0.5513730655883112}}
{"text": "\\section{Computational Co-Ordinates Transformation}\n\n\tHaving phrased the Navier-Stokes equations in matrix form (Eqs. \\ref{eqn:final}-\\ref{eqn:finalHv}), \nthe next step is to cast these equations in a form suitable for numerical simulation.  To do this, two things\nare required to make this transformation as useful as possible.  The first, and most involved\nstep, is to transform the equations onto computational co-ordinates.  This will allow \nfor the fact that the orthogonal computational grid need not necessarily be aligned with the space \nphysical co-ordinates  \nof the system under consideration, and thus cannot be applied directly to an arbitrary physical gridding.  However,\nby using computational co-ordinates one can easily allow for any type of grid (i.e.\nclustered along given directions or in specified areas) through the incorporation of metric \nterms in the governing equations.\n\n\tThe second step is to phrase the variables in a language suitable for computation, which\ninvolves only direct substitution and is hence fairly straightforward.  However one would like to \nhave indices that match those used by the programming language, as although in reality the number of\nspecies goes from 1 to $n_s$, for a programming language such as \\emph{C} the indices would vary \nfrom 0 to $n_s - 1$.  As well, common variables differentiated by subscripts alone rather than distinct\nGreek or arithmetic symbols are easier to program, as these can be represented by arrays of various dimensions.\n\n\n%---------------------------------------------------------------------------------------------------------\n\\subsection{Metric Derivatives}\n\n\tIn order to properly transform the governing equations from the Cartesian to the computational\nco-ordinate system metric derivatives will be needed, as it is these terms that describe the geometrical\nrelationship between the two domains.  Thus before tackling the main set of equations, it is helpful to \nderive relations for the metric terms and have them at the ready for substitution into the governing\nequations when needed.   We can make use of the fact that there are only two independent space variables, \n$x$ and $r$, (in axisymmetric flow) and one variable in time, $t$, which allows a slight simplification from\na fully three dimensional transformation.  
Making use of the relationship between differential quantities as \ndefined by the Calculus one can write,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\tdt=\\frac{\\partial t}{\\partial \\tau}d\\tau + \\frac{\\partial t}{\\partial \\xi}d\\xi +\n\t\t\\frac{\\partial t}{\\partial \\eta}d\\eta \\\\\n\t\tdx=\\frac{\\partial x}{\\partial \\tau}d\\tau + \\frac{\\partial x}{\\partial \\xi}d\\xi +\n\t\t\\frac{\\partial x}{\\partial \\eta}d\\eta \\\\\n\t\tdr=\\frac{\\partial r}{\\partial \\tau}d\\tau + \\frac{\\partial r}{\\partial \\xi}d\\xi +\n\t\t\\frac{\\partial r}{\\partial \\eta}d\\eta\n\t\\end{array}\n\\end{displaymath}\n\n\tNow since real time is independent of the physical grid location (and hence independent of the computational\ngrid location, as there is a one-to-one correspondence between points in the two domains), one can write \n$\\frac{\\partial t}{\\partial \\xi}$ = \n$\\frac{\\partial t}{\\partial \\eta}$ = 0.  If one further assumes that computational time is identical to\nreal time, then $\\frac{\\partial t}{\\partial \\tau}$ = 1, and the above relations can be expressed in matrix form\nas,\n\n\\begin{equation}\n\t\\begin{array}{cccc}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\tdt \\\\ dx \\\\ dr \n\t\t\\end{array} \n\t\t\\right\\}\n\t\t& = &\n\t\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ x_{\\tau} & x_{\\xi} & x_{\\eta} \\\\ r_{\\tau} & r_{\\xi} & r_{\\eta} \n\t\t\\end{array}\n\t\t\\right] \n\t\t&\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\td\\tau \\\\ d\\xi \\\\ d\\eta \n\t\t\\end{array}\n\t\t\\right\\}\n\t\\end{array} \n\\label{eqn:matrix1}\n\\end{equation}\n\n\tUsing the Calculus again, one can express these relations from the opposite point of view \nexpressing the differential computational quantities in terms of derivatives with respect to the \noriginal Cartesian co-ordinates,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\td\\tau=\\frac{\\partial \\tau}{\\partial t}dt + \\frac{\\partial \\tau}{\\partial x}dx +\n\t\t\\frac{\\partial \\tau}{\\partial r}dr \\\\\n\t\td\\xi=\\frac{\\partial \\xi}{\\partial t}dt + \\frac{\\partial \\xi}{\\partial x}dx +\n\t\t\\frac{\\partial \\xi}{\\partial r}dr \\\\\n\t\td\\eta=\\frac{\\partial \\eta}{\\partial t}dt + \\frac{\\partial \\eta}{\\partial x}dx +\n\t\t\\frac{\\partial \\eta}{\\partial r}dr\n\t\\end{array}\n\\end{displaymath}\n\n\twhich again, after applying the same logic as before, can be simplified and rewritten as\n\n\\begin{equation}\n\t\\begin{array}{cccc}\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\td\\tau \\\\ d\\xi \\\\ d\\eta \n\t\t\\end{array} \n\t\t\\right\\}\n\t\t& = &\n\t\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ \\xi_t & \\xi_x & \\xi_r \\\\ \\eta_t & \\eta_x & \\eta_r \n\t\t\\end{array}\n\t\t\\right] \n\t\t&\n\t\t\\left\\{\n\t\t\\begin{array}{c}\n\t\tdt \\\\ dx \\\\ dr \n\t\t\\end{array}\n\t\t\\right\\}\n\t\\end{array} \n\\label{eqn:matrix2}\n\\end{equation}\n\n\tExamining Eqs. 
\\ref{eqn:matrix1} and \\ref{eqn:matrix2}, it can be seen that the square matrices in \neach equation are inverses of each other; thus, using the property of inverse matrices that states $AA^{-1} = I$, one can\nwrite\n\n\\begin{equation}\n\\begin{array}{ccccc}\n\t\\begin{array}{c}\n\t\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ x_{\\tau} & x_{\\xi} & x_{\\eta} \\\\ r_{\\tau} & r_{\\xi} & r_{\\eta} \n\t\t\\end{array}\n\t\t\\right] \n\t\\end{array} \n& \\cdot  &\n\t\\begin{array}{c}\n\t\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ \\xi_t & \\xi_x & \\xi_r \\\\ \\eta_t & \\eta_x & \\eta_r \n\t\t\\end{array}\n\t\t\\right] \n\t\\end{array}\n& = &\n\t\\begin{array}{c}\n\t\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1\n\t\t\\end{array}\n\t\t\\right]\n\t\\end{array}\n\\end{array}\n\\label{eqn:identity} \n\\end{equation}\n\n\tEq. \\ref{eqn:identity} yields nine equations when multiplied out, three of which provide no useful\nrelations (i.e. 0 = 0), two provide relations for the metric derivatives with respect to time (which in the case of a grid\nthat is constant in time reduce to zero), and the remaining four express relations for the pertinent metric derivatives\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\\xi_t = -x_t \\xi_x - r_t \\xi_r \\\\\n\t\\eta_t = -x_t \\eta_x - r_t \\eta_r \\\\\n\t\\xi_x = \\frac{1}{x_\\xi}(1 - r_\\xi \\xi_r) \\\\\n\t\\xi_x = -\\frac{1}{x_\\eta}(r_\\eta \\xi_r) \\\\\n\t\\eta_r = -\\frac{1}{r_\\xi}(x_\\xi \\eta_x) \\\\\n\t\\eta_r = \\frac{1}{r_\\eta}(1 - x_\\eta \\eta_x)\n\t\\end{array}\n\\end{displaymath}\n\n\tAfter suitable manipulations of the last four equations above (eliminating one of the metric derivatives, \nsolving for the remaining metric derivative, and then substituting this result back into the equations to solve\nfor the eliminated variable) one arrives at the following results,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\\xi_x = J r_\\eta \\\\\n\t\\xi_r = -J x_\\eta \\\\\n\t\\eta_x = -J r_\\xi \\\\\n\t\\eta_r = J x_\\xi\n\t\\end{array}\n\\end{displaymath}\n\n\twhere $J$ is the Jacobian of the transformation (also known as the metric Jacobian) and is defined as,\n\n\\begin{equation}\n\tJ = \\frac{1}{x_\\xi r_\\eta - x_\\eta r_{\\xi}}\n\\label{eqn:J}\n\\end{equation}\n\n\tSubstituting the above results into the previous expressions for $\\xi_t$ and $\\eta_t$ also yields\n\n\\begin{equation}\n\t\\xi_t = J(-x_t r_\\eta + r_t x_\\eta)\n\\label{eqn:xit}\n\\end{equation}\n\n\\begin{equation}\n\t\\eta_t = J(x_t r_\\xi - r_t x_\\xi)\n\\label{eqn:etat}\n\\end{equation}\n\n\tGoing one step further and rephrasing the above relations (except for Eqs. \\ref{eqn:xit} and \\ref{eqn:etat}) \nin variables more suited to a programming\nlanguage, we will let the Cartesian co-ordinates be represented as $x$ = $x_0$ and $r$ = $x_1$ and the \ncomputational co-ordinates as $\\xi$ = $X_0$, $\\eta$ = $X_1$.  
Therefore,\n\n\\begin{displaymath}\n\t\\begin{array}{c}\n\t\\xi_x = \\frac{\\partial \\xi}{\\partial x} = \\frac{\\partial X_0}{\\partial x_0} = X_{0,0} = \n\t\tJ \\frac{\\partial x_1}{\\partial X_1} \\\\\n\t\\xi_r = \\frac{\\partial \\xi}{\\partial r} = \\frac{\\partial X_0}{\\partial x_1} = X_{0,1} = \n\t\t-J \\frac{\\partial x_0}{\\partial X_1} \\\\\n\t\\eta_x = \\frac{\\partial \\eta}{\\partial x} = \\frac{\\partial X_1}{\\partial x_0} = X_{1,0} = \n\t\t-J \\frac{\\partial x_1}{\\partial X_0} \\\\\n\t\\eta_r = \\frac{\\partial \\eta}{\\partial r} = \\frac{\\partial X_1}{\\partial x_1} = X_{1,1} = \n\t\tJ \\frac{\\partial x_0}{\\partial X_0} \n\t\\end{array}\n\\end{displaymath}\n\n\tThese relations can be succinctly described by a single formula \n\n\\begin{equation}\n\tX_{i,j} = J(2\\delta_{i,j} -1)\\frac{\\partial x_{j+1}}{\\partial X_{i+1}}\n\\label{eqn:metrics}\n\\end{equation}\n\n\twhere $\\delta_{i,j}$ is the Kronecker delta which is equal to 1 if $i = j$ and 0 \notherwise (i.e. $i \\neq j$).  Also note that when the index goes higher than the number of dimensions\nminus one, it cycles back to its lowest value (i.e. for two dimensions $i$ = 0,1 thus $i+1$ = 1,0 and\nsimilarly for $j$).\n\n%--------------------------------------------------------------------------------------------------------------\n\\subsection{Governing Equations} \n\n\tStarting with the axisymmetric, multi-species, viscous, turbulent Navier-Stokes equations as\nshown in Eqs. \\ref{eqn:final}-\\ref{eqn:finalHv}, it is desired to transform these \nequations onto the general computational co-ordinates $\\xi$ and $\\eta$, as well as the computational time\n$\\tau$.  In the following derivations, the Euler equations will be considered (i.e. the viscous terms will be\nneglected); however, the analysis applies equally well to these terms and hence the results will simply be \nextended to include the full set of equations.  Thus the axisymmetric Euler equations can be written as\n\n\\begin{equation}\n\t\\frac{\\partial Q}{\\partial t} + \\frac{\\partial E}{\\partial x} + \\frac{\\partial F}{\\partial r} + S_{be3D} = 0\n\\label{eqn:euler}\n\\end{equation}\n\n\tNow if we divide each term in Eq. \\ref{eqn:euler} by the metric Jacobian $J$ (as defined by Eq. 
\\ref{eqn:J}) \none can write\n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial Q}{\\partial t} + \\frac{1}{J}\\frac{\\partial E}{\\partial x} \n\t+ \\frac{1}{J}\\frac{\\partial F}{\\partial r} + \\frac{1}{J}S_{be3D} = 0\n\\end{displaymath}\n\n\tTaking a closer look at the first term, the conservative variables vector, one can express the \nderivative with respect to time using the Calculus as\n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial Q}{\\partial t} = \\frac{1}{J}\\Big[\\frac{\\partial \\tau}{\\partial t}\n\t\\frac{\\partial Q}{\\partial \\tau} + \\frac{\\partial \\xi}{\\partial t}\n\t\\frac{\\partial Q}{\\partial \\xi} + \\frac{\\partial \\eta}{\\partial t} \\frac{\\partial Q}{\\partial \\eta}\\Big]\n\\end{displaymath}\n\n\tNow if one adds groups of terms which in and of themselves sum to zero (these terms are shown in curly braces $\\{\\}$) \none can write for the conservative variables vector \n\n\\begin{displaymath}\n\t\\frac{\\tau_t}{J}\\frac{\\partial Q}{\\partial \\tau} + \n\t\\frac{\\xi_t}{J} \\frac{\\partial Q}{\\partial \\xi} + \\frac{\\eta_t}{J} \\frac{\\partial Q}{\\partial \\eta}\n\t+ \\{ Q\\frac{\\partial}{\\partial \\tau}(\\frac{\\tau_t}{J}) - Q\\frac{\\partial}{\\partial \\tau}(\\frac{\\tau_t}{J}) \\}\n\t+ \\{ Q\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_t}{J}) - Q\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_t}{J}) \\}\n\t+ \\{ Q\\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_t}{J}) - Q\\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_t}{J}) \\}\n\\end{displaymath}\n\n\twhich can be regrouped as\n\n\\begin{equation}\n\t\\Big[\\frac{\\tau_t}{J}\\frac{\\partial Q}{\\partial \\tau} + Q\\frac{\\partial}{\\partial \\tau}(\\frac{\\tau_t}{J})\\Big] +\n\t\\Big[\\frac{\\xi_t}{J}\\frac{\\partial Q}{\\partial \\xi} + Q\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_t}{J})\\Big] + \n\t\\Big[\\frac{\\eta_t}{J}\\frac{\\partial Q}{\\partial \\eta} + Q\\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_t}{J})\\Big]\n\t-Q \\Big\\{\\frac{\\partial}{\\partial \\tau}(\\frac{\\tau_t}{J}) + \\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_t}{J})\n\t+ \\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_t}{J}) \\Big\\}\n\\label{eqn:Qbig}\n\\end{equation}\n\n\tIn this form it can be seen that the terms in the square brackets can be combined into a single derivative\nusing the inverse of the chain rule from the Calculus, while the last term in curly braces sums to zero as follows.  \nSince time in both Cartesian and computational co-ordinates is the same (as was specified in the derivation of the \nmetric terms) then $\\tau$ = $t$ and thus $\\tau_t$ = 1.  Now if the\nrelation between the Cartesian and computational co-ordinates is held constant in time, then the derivative of the metric \nJacobian with respect to time must be zero which eliminates the first term in the curly braces (note however, that \nthis assumption does not require a grid that is constant in time, only that the relation between the physical and \ncomputational domains remains fixed).  Substitution of Eqs. \\ref{eqn:xit} and \\ref{eqn:etat} into the last two terms \nin the curly braces yields,\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial \\xi}J^{-1}\\Big\\{J(-x_t r_\\eta + r_t x_\\eta)\\Big\\} \n\t+ \\frac{\\partial}{\\partial \\eta}J^{-1}\\Big\\{J(x_t r_\\xi - r_t x_\\xi)\\Big\\}\n\\end{displaymath}\n\n\tIf we further assume that the derivatives with respect to time are independent of grid location (i.e. 
changes\nin gridding with time are applied uniformly over the entire grid), then $x_t$ and $r_t$\ncan be pulled outside of the derivatives.  Also, using Clairaut's Theorem, which states that\n\n\\begin{equation}\n\t\\frac{\\partial^2 r}{\\partial \\eta \\partial \\xi} = \\frac{\\partial^2 r}{\\partial \\xi \\partial \\eta}\n\\label{eqn:clairaut}\n\\end{equation}\n\n\tthen the above equation sums to zero.  It should be noted that although restrictions\nwere placed on the manner in which the grid may change in time, it was not stated that the grid must stay constant in time\n(although in most circumstances this will indeed be the case).  Thus Eq. \\ref{eqn:Qbig} can be written\n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial Q}{\\partial t} = \\frac{\\partial}{\\partial t}(\\frac{Q}{J}) \n\t+ \\frac{\\partial}{\\partial \\xi}(\\xi_t \\frac{Q}{J}) + \\frac{\\partial}{\\partial \\eta}(\\eta_t \\frac{Q}{J})\n\\end{displaymath}  \n\n\twhich when combined with the assertion that the grid spacing is constant in time reduces to \n\n\\begin{equation}\n\t\\frac{1}{J}\\frac{\\partial Q}{\\partial t} = \\frac{\\partial}{\\partial t}(\\frac{Q}{J}) \n\\label{eqn:Q}\n\\end{equation}\n\n\tIt is worth noting here that the above expression in Eq. \\ref{eqn:Q} could have been arrived at quite\neasily by specifying at the outset a physical domain that remains constant in time, and that time in both the Cartesian\nand computational domains is equivalent.\n\n\tConsidering next the flux vector along the $x$ direction and remembering that one has already divided by the\nmetric Jacobian, one can write \n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial E}{\\partial x} = \\frac{1}{J}\\Big[\\frac{\\partial \\xi}{\\partial x}\\frac{\\partial E}{\\partial \\xi}\n\t+ \\frac{\\partial \\eta}{\\partial x}\\frac{\\partial E}{\\partial \\eta}\\Big]\n\\end{displaymath} \n\n\tAdding terms that sum to zero in a fashion similar to that done for the $Q$ vector\n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial E}{\\partial x} = \\frac{\\xi_x}{J}\\frac{\\partial E}{\\partial \\xi}\n\t+\\frac{\\eta_x}{J}\\frac{\\partial E}{\\partial \\eta} + \\Big\\{E \\frac{\\partial}{\\partial \\xi}\n\t(\\frac{\\xi_x}{J}) - E\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_x}{J})\\Big\\}\n\t+ \\Big\\{E\\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_x}{J}) - E\\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_x}{J})\\Big\\}\n\\end{displaymath}\n\n\tRegrouping terms yields\n\n\\begin{displaymath}\n\t\\frac{1}{J}\\frac{\\partial E}{\\partial x} = \n\t\\Big[\\frac{\\xi_x}{J}\\frac{\\partial E}{\\partial \\xi}\n\t+ E \\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_x}{J})\\Big]\n\t+ \\Big[\\frac{\\eta_x}{J}\\frac{\\partial E}{\\partial \\eta} \n\t+ E \\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_x}{J}) \\Big]\n\t-E \\Big\\{\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_x}{J})\n\t+ \\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_x}{J})\\Big\\}\n\\end{displaymath}\n\n\tThe first two groups of terms in square brackets can be combined into a single derivative while the last\nterm in the curly braces sums to zero by substituting for the definitions of the metric derivatives as follows,\n\n\\begin{displaymath}\n\t-E \\Big\\{\\frac{\\partial}{\\partial \\xi}(\\frac{\\xi_x}{J})\n\t+ \\frac{\\partial}{\\partial \\eta}(\\frac{\\eta_x}{J})\\Big\\} = \n\t-E \\Big\\{\\frac{\\partial}{\\partial \\xi}J^{-1}(J r_\\eta)\n\t+ \\frac{\\partial}{\\partial \\eta}J^{-1}(-J r_\\xi)\\Big\\}\n\\end{displaymath}\n\n\tAgain applying Clairaut's Theorem (Eq. 
\\ref{eqn:clairaut}) the above relation reduces to zero.  Therefore, the\nderivative of the flux vector $E$ can be expressed as,\n\n\\begin{equation}\n\t\\frac{1}{J}\\frac{\\partial E}{\\partial x} = \n\t\\frac{\\partial}{\\partial \\xi} (\\xi_x \\frac{E}{J})\n\t+ \\frac{\\partial}{\\partial \\eta} (\\eta_x \\frac{E}{J}) \n\\label{eqn:E}\n\\end{equation}\n\n\tThe exact same procedure is followed for the $F$ flux vector resulting in\n\n\\begin{equation}\n\t\\frac{1}{J}\\frac{\\partial F}{\\partial r} = \n\t\\frac{\\partial}{\\partial \\xi} (\\xi_r \\frac{F}{J})\n\t+ \\frac{\\partial}{\\partial \\eta} (\\eta_r \\frac{F}{J}) \n\\label{eqn:F}\n\\end{equation}\n\n\tCombining the results of Eqs. \\ref{eqn:Q}, \\ref{eqn:E}, and \\ref{eqn:F} into Eq. \\ref{eqn:euler} and\nrealizing that since the axisymmetric source term $S_{be3D}$ is not differentiated it can be left as is yields the\nresult\n\n\\begin{displaymath}\n\t\\frac{\\partial}{\\partial t}(\\frac{Q}{J}) + \\frac{\\partial}{\\partial \\xi} \\Big[\\frac{1}{J}(\\xi_x E\n\t+ \\xi_r F)\\Big] + \\frac{\\partial}{\\partial \\eta}\\Big[\\frac{1}{J} (\\eta_x E \n\t+ \\eta_r F)\\Big] + \\frac{1}{J}S_{be3D} = 0\n\\end{displaymath}\n\n\tAs a final step, one would like to substitute computational variables for those above, thus following the\nconventions established when deriving the metric terms ($x$ = $x_0$, $r$ = $x_1$, $\\xi$ = $X_0$, and $\\eta$ = $X_1$)\nand defining a new variable \n\n\\begin{equation}\n\t\\Omega = J^{-1}\n\\label{eqn:omega}\n\\end{equation} \n\n\tone can write\n\n\\begin{equation}\n\t\\frac{\\partial \\bar{Q}}{\\partial t} + \\frac{\\partial \\bar{E}}{\\partial \\xi} \n\t+ \\frac{\\partial \\bar{F}}{\\partial \\eta} + \\bar{S}_{be3D} = 0\t\n\\label{eqn:eulerbar}\n\\end{equation}\n\n\twhere\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t\t\\bar{Q} &=& \\Omega Q \\\\\n\t\t\\bar{E} &=& \\Omega (X_{0,0}E + X_{0,1}F) \\\\\n\t\t\\bar{F} &=& \\Omega (X_{1,0}E + X_{1,1}F) \\\\\n\t\t\\bar{S}_{be3D} &=& \\Omega S_{be3D}\n\t\\end{array}\n\\label{eqn:eulerbars}\n\\end{equation}\n\n\twith $Q$, $E$, $F$, and $S_{be3D}$ defined by Eqs. \\ref{eqn:finalQ}, \\ref{eqn:finalEF}, and \\ref{eqn:finalH} \nrespectively.  Extending the same logic to the viscous terms yields the desired form of the governing equations,\n\n\\begin{equation}\n\t\\frac{\\partial \\bar{Q}}{\\partial t} + \\frac{\\partial \\bar{E}}{\\partial \\xi} \n\t+ \\frac{\\partial \\bar{F}}{\\partial \\eta} + \\bar{S}_{be3D} -\\frac{\\partial \\bar{E}_v}{\\partial \\xi} \n\t- \\frac{\\partial \\bar{F}_v}{\\partial \\eta} - \\bar{S}_{be3D_v} = 0\t\n\\label{eqn:finalbar}\n\\end{equation}\n\n\twith\n\n\\begin{equation}\n\t\\begin{array}{ccc}\n\t\t\\bar{Q} &=& \\Omega Q \\\\\n\t\t\\bar{E}_v &=& \\Omega (X_{0,0}E_v + X_{0,1}F_v) \\\\\n\t\t\\bar{F}_v &=& \\Omega (X_{1,0}E_v + X_{1,1}F_v) \\\\\n\t\t\\bar{S}_{be3D_v} &=& \\Omega S_{be3D_v}\n\t\\end{array}\n\\label{eqn:viscousbars}\n\\end{equation}\n\n\tHowever, it must be noted here that the viscous terms themselves contain derivatives and hence\nEqs. 
\\ref{eqn:finalEvFv} and \\ref{eqn:finalHv} must be modified to account for the transformation\nas well.\n\n%-------------------------------------------------------------------------------------------------------------------\n\\subsection{Axisymmetric Source Term Transformation}\n\n\tGiven that without the axisymmetric source terms, the governing equations are simply those for\nnormal two-dimensional flow, the transformation of the viscous flux vectors will not be demonstrated as these can\nbe found in other sources.  However, of particular interest here are the source terms, which will be transformed\nexplicitly given that there exist numerous ways in which these terms can be expressed depending on the \nform of the computational governing equations used (i.e. definition of $\\Omega$ or  $Q$).\n\n\\subsubsection{Inviscid Axisymmetric Source Term}\n\n\tAlthough this term contains no derivatives, it is useful to cast the variables in a form similar\nto the computational space variables, i.e., similar terms differentiated by subscripts alone.  This would also\napply to the conservative variable vector $Q$ and to the inviscid flux vectors $E$ and $F$, although the process \nwill not be shown here.  If we let the physical space velocities $u$ and $v$ be represented by $v_0$ and $v_1$ \nrespectively, then one can write\n\n\\begin{equation}\n\t\\bar{S}_{be3D} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\rho_1 v_1 \\\\\n\t\t\\vdots \\\\\n\t\t\\rho_k v_1 \\\\\n\t\t\\rho v_0v_1 \\\\\n\t\t\\rho {v_1}^2 \\\\\n\t\t(\\rho E + p)v_1 \\\\\n\t\t\\rho k v_1 \\\\\n\t\t\\rho \\omega v_1\n\t\t   \\end{array}\n\t    \\right]\n\\label{eqn:finalHbar}\n\\end{equation}\n\n\n\\subsubsection{Viscous Axisymmetric Source Term}\n\n\tSince this term contains derivatives in the Cartesian domain, these quantities must be appropriately\ntransformed in addition to substituting computational variables for Cartesian ones as was done for the inviscid source\nterm.  Although the viscous flux vectors require the appropriate transformations as well, given that these vectors\nare exactly the same as those for two-dimensional flow, their transformation will not be shown here.  
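\n\tBefore transforming each term individually, it is worth noting how the metric relations derived earlier are used in practice.  The following minimal Python sketch (illustrative names only, not an actual solver) assumes a uniform computational grid with unit spacing, evaluates $J$ and the metrics $X_{i,j}$ of Eq. \\ref{eqn:metrics} by central differences, and then forms a transformed derivative such as $\\frac{\\partial c}{\\partial r} = \\sum_{\\theta} X_{\\theta,1} \\frac{\\partial c}{\\partial X_\\theta}$, which is the building block of every source term below:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef metrics(x0, x1):\n    # x0, x1: physical co-ordinates sampled on the computational grid,\n    # 2-D arrays indexed by (X_0, X_1) with unit computational spacing\n    dx0_dX0, dx0_dX1 = np.gradient(x0)\n    dx1_dX0, dx1_dX1 = np.gradient(x1)\n    J = 1.0 / (dx0_dX0 * dx1_dX1 - dx0_dX1 * dx1_dX0)\n    X = np.empty((2, 2) + x0.shape)\n    X[0, 0] = J * dx1_dX1    # xi_x  =  J r_eta\n    X[0, 1] = -J * dx0_dX1   # xi_r  = -J x_eta\n    X[1, 0] = -J * dx1_dX0   # eta_x = -J r_xi\n    X[1, 1] = J * dx0_dX0    # eta_r =  J x_xi\n    return J, X\n\ndef d_dr(c, X):\n    # d(c)/dr = sum over theta of X[theta, 1] * d(c)/dX_theta\n    dc_dX0, dc_dX1 = np.gradient(c)\n    return X[0, 1] * dc_dX0 + X[1, 1] * dc_dX1\n\\end{verbatim}\n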
Rewriting the \nviscous source term here for clarity,\n\n\\begin{displaymath}\n\tS_{be3D_v} = \\frac{1}{r}\\left[ \\begin{array}{c}\n\t\t\\nu_1 \\frac{\\partial c_1}{\\partial r} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_k \\frac{\\partial c_k}{\\partial r} \\\\\n\t\t\\tau_{rx} - r\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{v}{r}) \\\\\n\t\t\\hat{ \\tau}_{rr} -\\tau_{\\theta \\theta} - \\frac{2}{3}\\mu \\frac{v}{r} \n\t\t- r\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r}) \\\\\n\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) - q_{c_r} + \\tau_{rx}u \n\t\t+ \\hat{ \\tau}_{rr}v + {\\mu_k}^* \\frac{\\partial k}{\\partial r} - \\frac{2}{3} \\mu \\frac{v^2}{r}\n\t\t\t- r\\frac{2}{3} \\frac{\\partial}{\\partial x}(\\mu \\frac{uv}{r})\n\t\t\t- r\\frac{2}{3} \\frac{\\partial}{\\partial r}(\\mu \\frac{v^2}{r}) \\\\\n\t\t{\\mu_k}^* \\frac{\\partial k}{\\partial r} + rS_k\\\\\n\t\t{\\mu_\\omega}^* \\frac{\\partial \\omega}{\\partial r} + rS_\\omega\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}\n\n\tRemembering that $r$ = $x_1$ and that $\\bar{S}_{be3D_v}$ = $\\Omega S_{be3D_v}$ one can immediately\nrewrite the above as\n\n\\begin{equation}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_1 \\frac{\\partial c_1}{\\partial r} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_k \\frac{\\partial c_k}{\\partial r} \\\\\n\t\t\\tau_{rx} - x_1\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{v}{r}) \\\\\n\t\t\\hat{ \\tau}_{rr} -\\tau_{\\theta \\theta} - \\frac{2}{3}\\mu \\frac{v}{r} \n\t\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r}) \\\\\n\t\t\\sum_k (\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) - q_{c_r} + \\tau_{rx}u \n\t\t+ \\hat{ \\tau}_{rr}v + {\\mu_k}^* \\frac{\\partial k}{\\partial r} - \\frac{2}{3} \\mu \\frac{v^2}{r}\n\t\t\t- x_1\\frac{2}{3} \\frac{\\partial}{\\partial x}(\\mu \\frac{uv}{r})\n\t\t\t- x_1\\frac{2}{3} \\frac{\\partial}{\\partial r}(\\mu \\frac{v^2}{r}) \\\\\n\t\t{\\mu_k}^* \\frac{\\partial k}{\\partial r} + x_1S_k\\\\\n\t\t{\\mu_\\omega}^* \\frac{\\partial \\omega}{\\partial r} + x_1S_\\omega\n\t\t   \\end{array}\n\t    \\right]\n\\label{eqn:beginHvbar}\n\\end{equation}\n\n%---------------------------------------------------------------------------------------------------------------\n\\subsubsection{Species Continuity Source Term}\n\n\tFor the remaining transformations, each equation will be considered separately.  Starting with the\nspecies conservation source terms, and letting the number of species vary between 0 and $n_s -1 = \\bar{n}_s$\none can write for the first species\n\n\\begin{displaymath}\n\t\\nu_0 \\frac{\\partial}{\\partial r}(c_0) = \\nu_0 \\Big\\{\\xi_r \\frac{\\partial c_0}{\\partial \\xi}\n\t+ \\eta_r \\frac{\\partial c_0}{\\partial \\eta}\\Big\\} = \\nu_0 \\Big\\{X_{0,1} \\frac{\\partial c_0}{\\partial X_0}\n\t+ X_{1,1} \\frac{\\partial c_0}{\\partial X_1} \\Big\\}\n\\end{displaymath}\n\n\tThus for a general species \\emph{i} one can write\n\n\\begin{equation}\n\t\\nu_i \\frac{\\partial c_i}{\\partial r} = \\nu_i \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\\frac{\\partial c_i}{\\partial X_\\theta}\n\\label{eqn:speciescomp}\n\\end{equation}\n\n\twhere $\\bar{n}_d$ is the number of dimensions minus one, which for axisymmetric flow has a value equal\nto 1.  Therefore combining Eq. \\ref{eqn:speciescomp} and Eq. 
\\ref{eqn:beginHvbar} yields\n\n\\begin{displaymath}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_0 \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_0}{\\partial X_\\theta} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_{\\bar{n}_s} \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_{\\bar{n}_s}}{\\partial X_\\theta} \\\\\n\t\t\\vdots\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}\n\n%----------------------------------------------------------------------------------------------------------------------\n\\subsubsection{$x$ Momentum Source Term}\n\n\tConsidering next the equation of motion along the first physical co-ordinate, by substitution\nof Eq. \\ref{eqn:taurx} and replacing Cartesian variables one can write\n\n\\begin{displaymath}\n\t\\tau_{rx} - x_1\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{v}{r}) = \\mu\\Big[\\frac{\\partial}{\\partial x}(v_1) \n\t+ \\frac{\\partial}{\\partial r}(v_0)\\Big] - x_1\\frac{2}{3} \\frac{\\partial}{\\partial x}(\\mu \\frac{v_1}{x_1})\n\\end{displaymath}\n\n\tRewriting the Cartesian derivatives using the Calculus as functions of metrics and derivatives along\ncomputational co-ordinate directions yields\n\n\\begin{displaymath}\n  \\begin{array}{c}\n\t\\mu \\Big\\{\\xi_x \\frac{\\partial v_1}{\\partial \\xi} + \\eta_x \\frac{\\partial v_1}{\\partial \\eta} \n\t+ \\xi_r \\frac{\\partial v_0}{\\partial \\xi} + \\eta_r \\frac{\\partial v_0}{\\partial \\eta}\\Big\\}\n\t- x_1 \\frac{2}{3} \\Big\\{\\xi_x \\frac{\\partial}{\\partial \\xi}(\\mu \\frac{v_1}{x_1}) \n\t+ \\eta_x \\frac{\\partial}{\\partial \\eta}(\\mu \\frac{v_1}{x_1}) \\Big\\}  \n\t\\\\\n\t\\mu \\Big\\{X_{0,0} \\frac{\\partial v_1}{\\partial X_0} + X_{1,0} \\frac{\\partial v_1}{\\partial X_1} \n\t+ X_{0,1} \\frac{\\partial v_0}{\\partial X_0} + X_{1,1} \\frac{\\partial v_0}{\\partial X_1}\\Big\\}\n\t- x_1 \\frac{2}{3} \\Big\\{X_{0,0} \\frac{\\partial}{\\partial X_0}(\\mu \\frac{v_1}{x_1}) \n\t+ X_{1,0} \\frac{\\partial}{\\partial X_1}(\\mu \\frac{v_1}{x_1}) \\Big\\}  \n\t\\\\\n\t\\mu \\Big\\{ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\frac{\\partial v_1}{\\partial X_\\theta} \\Big\\}\n\t- x_1 \\frac{2}{3} \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\frac{\\partial}{\\partial X_\\theta}\n\t(\\mu \\frac{v_1}{x_1})\n  \\end{array}\n\\end{displaymath}\n\n\twhich can be simplified to \n\n\\begin{equation}\n\t\\tau_{rx} - x_1\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{v}{r}) =\n\t\\mu \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\Big\\{\\mu \\frac{\\partial v_1}{\\partial X_\\theta} \n\t- x_1 \\frac{2}{3} \\frac{\\partial}{\\partial X_\\theta}(\\mu \\frac{v_1}{x_1}) \\Big\\}\n\\label{eqn:xmomcomp}\n\\end{equation}\n\n\twhich when combined with Eqs. \\ref{eqn:speciescomp} and Eq. 
\\ref{eqn:beginHvbar} yields\n\n\\begin{displaymath}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_0 \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_0}{\\partial X_\\theta} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_{\\bar{n}_s} \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_{\\bar{n}_s}}{\\partial X_\\theta} \\\\\n\t\t\\mu \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\Big\\{\\mu \\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t- x_1 \\frac{2}{3} \\frac{\\partial}{\\partial X_\\theta}(\\mu \\frac{v_1}{x_1}) \\Big\\} \\\\\n\t\t\\vdots\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}\n\n%--------------------------------------------------------------------------------------------------------------\n\\subsubsection{$r$ Momentum Source Term}\n\n\tConsidering the momentum equation source term along the second co-ordinate direction and \nsubstituting the results of Eqs. \\ref{eqn:taurrhat} and \\ref{eqn:tauthetatheta} in for\nthe viscous stress terms yields,\n\n\\begin{displaymath}\n\t\\hat{ \\tau}_{rr} -\\tau_{\\theta \\theta} - \\frac{2}{3}\\mu \\frac{v}{r} \n\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r}) =  \n\t\\mu ( - \\frac{2}{3} \\frac{\\partial u}{\\partial x} + \\frac{4}{3} \\frac{\\partial v}{\\partial r})\n\t- \\mu \\Big\\{- \\frac{2}{3} (\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial r})  \n\t+ \\frac{4}{3} \\frac{v}{r} \\Big\\} - \\frac{2}{3}\\mu \\frac{v}{r} \n\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r})\n\\end{displaymath}\n\n\tRearranging and simplifying results in\n\n\\begin{displaymath}\n\t\\mu \\Big\\{\\frac{4}{3}\\frac{\\partial v}{\\partial r} + \\frac{2}{3}\\frac{\\partial v}{\\partial r}\n\t- \\frac{2}{3}\\frac{\\partial u}{\\partial x} + \\frac{2}{3}\\frac{\\partial u}{\\partial x}  \\Big\\}\n\t-\\frac{4}{3}(\\mu \\frac{v}{r}) - \\frac{2}{3}(\\mu \\frac{v}{r}) - x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}\n\t(\\mu \\frac{v}{r})\n\\end{displaymath}\n\n\\begin{displaymath}\n\t2\\mu \\frac{\\partial}{\\partial r}(v) - x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r})\n\t-2\\mu \\frac{v}{r}\n\\end{displaymath}\n\t\n\tRewriting the derivatives and substituting for the computational variables\n\n\\begin{displaymath}\n  \\begin{array}{c}\n\t2\\mu\\Big\\{\\xi_r \\frac{\\partial v_1}{\\partial \\xi} + \\eta_r \\frac{\\partial v_1}{\\partial \\eta} \\Big\\}\n\t-x_1\\frac{2}{3}\\Big\\{\\xi_r\\frac{\\partial}{\\partial \\xi}(\\mu\\frac{v_1}{x_1}) + \\eta_r\\frac{\\partial}\n\t{\\partial \\eta}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu \\frac{v_1}{x_1}\n\t\\\\\n\t2\\mu\\Big\\{X_{0,1} \\frac{\\partial v_1}{\\partial X_0} + X_{1,1} \\frac{\\partial v_1}{\\partial X_1} \\Big\\}\n\t-x_1\\frac{2}{3}\\Big\\{X_{0,1}\\frac{\\partial}{\\partial X_0}(\\mu\\frac{v_1}{x_1}) + X_{1,1}\\frac{\\partial}\n\t{\\partial X_1}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu \\frac{v_1}{x_1}\n\t\\\\\n\t2\\mu\\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial v_1}{\\partial X_\\theta} \n\t-\\frac{v_1}{x_1} \\Big\\}\n\t-x_1\\frac{2}{3}\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_1}{x_1})\n  \\end{array}\n\\end{displaymath}\n\n\tand thus one can finally write\n\n\\begin{equation}\n\t\\hat{ \\tau}_{rr} -\\tau_{\\theta \\theta} - \\frac{2}{3}\\mu \\frac{v}{r} \n\t- 
x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v}{r}) = \n\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{2\\mu\\frac{\\partial v_1}{\\partial X_\\theta} \n\t-x_1\\frac{2}{3}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu\\frac{v_1}{x_1}\n\\label{eqn:rmomcomp}\n\\end{equation}\n\n\twhich when combined with Eqs. \\ref{eqn:speciescomp}, \\ref{eqn:xmomcomp}, \\ref{eqn:rmomcomp}, \nand \\ref{eqn:beginHvbar} yields\n\n\\begin{displaymath}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_0 \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_0}{\\partial X_\\theta} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_{\\bar{n}_s} \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_{\\bar{n}_s}}{\\partial X_\\theta} \\\\\n\t\t\\mu \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\Big\\{\\mu \\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t- x_1 \\frac{2}{3} \\frac{\\partial}{\\partial X_\\theta}(\\mu \\frac{v_1}{x_1}) \\Big\\} \\\\\n\t\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{2\\mu\\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t-x_1\\frac{2}{3}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu\\frac{v_1}{x_1} \\\\\n\t\t\\vdots\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}\n\n%----------------------------------------------------------------------------------------------------------------------\n\\subsubsection{Energy Equation Source Term}\n\n\tThe next term to be considered is the energy equation source term, which will be broken down into its\nindividual components given the number of terms that need to be rewritten.  Starting with the species diffusion\ncontribution and setting $k$ equal to  0 to $\\bar{n}_s$\n\n\\begin{displaymath}\n\t\\sum_{k = 0}^{\\bar{n}_s}(\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) = \n\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\Big\\{\\xi_r \\frac{\\partial c_k}{\\partial \\xi} +\n\t\\eta_r \\frac{\\partial c_k}{\\partial \\eta}\\Big\\} =\n\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\Big\\{X_{0,1}\\frac{\\partial c_k}{\\partial X_0} +\n\tX_{1,1} \\frac{\\partial c_k}{\\partial X_1}\\Big\\}\n\\end{displaymath}\n\n\twhich can be simplified to \n\n\\begin{equation}\n\t\\sum_{k = 0}^{\\bar{n}_s}(\\nu_k \\frac{\\partial c_k}{\\partial r}h_k) = \n\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\n\t\\frac{\\partial c_k}{\\partial X_\\theta}\\Big\\}\n\\label{eqn:energyA}\n\\end{equation}\n\t\n\tNext is the heat transfer term, where we will let $k^*$ represent the turbulent coefficient of\nthermal conductivity, so as not to confuse it with $k$, the specific turbulent kinetic energy\n\n\\begin{displaymath}\n\t- q_{c_r} = -(-k^* \\frac{\\partial T}{\\partial r}) = k^* \\Big\\{\\xi_r \\frac{\\partial T}{\\partial \\xi}\n\t+ \\eta_r \\frac{\\partial T}{\\partial \\eta}\\Big\\} = k^*\\Big\\{X_{0,1} \\frac{\\partial T}{\\partial X_0}\n\t+ X_{1,1} \\frac{\\partial T}{\\partial X_1}\\Big\\}\n\\end{displaymath}\n\n\twhere it is noted that Eq. \\ref{eqn:rheatflux} was used to describe the heat flux term.  Simplifying\nthe above yields,\n\n\\begin{equation}\n\t- q_{c_r} = k^*\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1} \\frac{\\partial T}{\\partial X_\\theta}\n\\label{eqn:energyB}\n\\end{equation}\n\t\n\tConsidering next the first stress term $\\tau_{rx}$ and using the results of Eq. 
\\ref{eqn:taurx}\n\n\\begin{displaymath}\n\t\\tau_{rx}u = u\\Big\\{\\mu\\Big[\\frac{\\partial}{\\partial x}(v) + \\frac{\\partial}{\\partial r}(u)\\Big]\\Big\\} =\n\tu\\mu\\Big\\{\\xi_x \\frac{\\partial v}{\\partial \\xi} + \\eta_x \\frac{\\partial v}{\\partial \\eta} +\n\t\\xi_r \\frac{\\partial u}{\\partial \\xi} + \\eta_r \\frac{\\partial u}{\\partial \\eta} \\Big\\} \n\\end{displaymath}\n\n\\begin{displaymath}\n\t\\tau_{rx}u = \n\tv_0 \\mu \\Big\\{X_{0,0} \\frac{\\partial v_1}{\\partial X_0} + X_{1,0} \\frac{\\partial v_1}{\\partial X_1} +\n\tX_{0,1} \\frac{\\partial v_0}{\\partial X_0} + X_{1,1} \\frac{\\partial v_0}{\\partial X_1} \\Big\\}\n\\end{displaymath}\n\n\twhich can be simplified to\n\n\\begin{equation}\n\t\\tau_{rx}u = v_0\\mu\n\t\\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta} +\n\t\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0}\\frac{\\partial v_1}{\\partial X_\\theta}\\Big\\}\n\\label{eqn:energyC}\n\\end{equation}\n\n\tThe second stress term can be written with the help of Eq. \\ref{eqn:taurrhat} as\n\t\n\\begin{displaymath}\n\t\\hat{ \\tau}_{rr}v = v\\Big\\{\\mu\\Big[ -\\frac{2}{3} \\frac{\\partial}{\\partial x}(u) \n\t+ \\frac{4}{3} \\frac{\\partial}{\\partial r}(v)\\Big]\\Big\\} = v\\mu\\Big\\{-\\frac{2}{3}\\Big[\n\t\\xi_x\\frac{\\partial u}{\\partial \\xi} + \\eta_x\\frac{\\partial u}{\\partial \\eta}\\Big] + \\frac{4}{3}\\Big[\n\t\\xi_r\\frac{\\partial v}{\\partial \\xi} + \\eta_r\\frac{\\partial v}{\\partial \\eta}\\Big] \\Big\\}\n\\end{displaymath}\n\n\\begin{displaymath}\n\t\\hat{ \\tau}_{rr}v = v_1\\mu\\Big\\{-\\frac{2}{3}\\Big[\n\tX_{0,0}\\frac{\\partial v_0}{\\partial X_0} + X_{1,0}\\frac{\\partial v_0}{\\partial X_1}\\Big] + \\frac{4}{3}\\Big[\n\tX_{0,1}\\frac{\\partial v_1}{\\partial X_0} + X_{1,1}\\frac{\\partial v_1}{\\partial X_1}\\Big] \\Big\\} \n\\end{displaymath}\n\n\twhich can be simplified to\n\n\\begin{equation}\n\t\\hat{ \\tau}_{rr}v = v_1\\mu\\Big\\{-\\frac{2}{3}\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,0}\n\t\\frac{\\partial v_0}{\\partial X_\\theta} + \\frac{4}{3}\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\n\t\\frac{\\partial v_1}{\\partial X_\\theta} \\Big\\}\t\n\\label{eqn:energyD}\n\\end{equation}\n\n\tThe turbulent energy diffusion term is similar to the heat transfer term\n\n\\begin{displaymath}\n\t{\\mu_k}^* \\frac{\\partial k}{\\partial r} = {\\mu_k}^* \\Big\\{\\xi_r\\frac{\\partial k}{\\partial \\xi} \n\t+ \\eta_r\\frac{\\partial k}{\\partial \\eta}\\Big\\} = {\\mu_k}^* \\Big\\{X_{0,1}\\frac{\\partial k}{\\partial X_0}\n\t+ X_{1,1}\\frac{\\partial k}{\\partial X_1}\\Big\\}\n\\end{displaymath}\t\n\n\twhich is simplified to\n\n\\begin{equation}\n\t{\\mu_k}^* \\frac{\\partial k}{\\partial r} = {\\mu_k}^* \\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\n\t\\frac{\\partial k}{\\partial X_\\theta}\\Big\\}\n\\label{eqn:energyE}\n\\end{equation}\n\n\tThe next term has no derivative terms and hence requires only direct substitution for the computational \nvariables\n\n\\begin{equation}\n\t- \\frac{2}{3}\\mu\\frac{v^2}{r} = - \\frac{2}{3}\\mu\\frac{{v_1}^2}{x_1}\n\\label{eqn:energyF}\n\\end{equation}\n\n\tThe last two terms are treated as follows\n\n\\begin{displaymath}\n\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{uv}{r}) = -x_1\\frac{2}{3}\\Big\\{\\xi_x\n\t\\frac{\\partial}{\\partial \\xi}(\\mu\\frac{v_0 v_1}{x_1}) + \\eta_x\\frac{\\partial}{\\partial \\eta}\n\t(\\mu\\frac{v_0 v_1}{x_1})\\Big\\} = -x_1\\frac{2}{3}\\Big\\{X_{0,0}\n\t\\frac{\\partial}{\\partial 
X_0}(\\mu\\frac{v_0 v_1}{x_1}) + X_{1,0}\\frac{\\partial}{\\partial X_1}\n\t(\\mu\\frac{v_0 v_1}{x_1})\\Big\\}\t\n\\end{displaymath}\n\n\\begin{displaymath}\n\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}(\\mu \\frac{v^2}{r}) = -x_1\\frac{2}{3}\\Big\\{\\xi_r\n\t\\frac{\\partial}{\\partial \\xi}(\\mu\\frac{{v_1}^2}{x_1}) + \\eta_r\\frac{\\partial}{\\partial \\eta}(\\mu\\frac{{v_1}^2}{x_1})\\Big\\}\n\t= -x_1\\frac{2}{3}\\Big\\{X_{0,1}\n\t\\frac{\\partial}{\\partial X_0}(\\mu\\frac{{v_1}^2}{x_1}) + X_{1,1}\\frac{\\partial}{\\partial X_1}(\\mu\\frac{{v_1}^2}{x_1})\\Big\\}\n\\end{displaymath}\n\n\twhich can be combined and rewritten as\n\n\\begin{equation}\n\t- x_1\\frac{2}{3}\\frac{\\partial}{\\partial x}(\\mu \\frac{uv}{r}) - x_1\\frac{2}{3}\\frac{\\partial}{\\partial r}\n\t(\\mu \\frac{v^2}{r}) = -x_1\\frac{2}{3}\\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} \\sum_{\\vartheta = 0}^{\\bar{n}_d}\n\tX_{\\theta,\\vartheta}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_\\vartheta v_1}{x_1}) \\Big\\}\n\\label{eqn:energyG}\n\\end{equation}\n\n\tAt this point all the individual terms have been rewritten using the appropriate metric terms and hence the\naxisymmetric energy equation source term can be expressed as a combination of Eqs. \\ref{eqn:energyA} - \\ref{eqn:energyG}\n\n\\begin{equation}\n  \\begin{array}{c}\n\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\n\t\\frac{\\partial c_k}{\\partial X_\\theta}\\Big\\} + k^*\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1} \n\t\\frac{\\partial T}{\\partial X_\\theta} + v_0\\mu\\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\n\t\\frac{\\partial v_0}{\\partial X_\\theta} + \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0}\n\t\\frac{\\partial v_1}{\\partial X_\\theta}\\Big\\} \\\\\n\t+ v_1\\mu\\Big\\{-\\frac{2}{3}\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,0}\n\t\\frac{\\partial v_0}{\\partial X_\\theta} + \\frac{4}{3}\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\n\t\\frac{\\partial v_1}{\\partial X_\\theta} \\Big\\} \n\t+ {\\mu_k}^* \\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\n\t\\frac{\\partial k}{\\partial X_\\theta}\\Big\\} \\\\\n\t- \\frac{2}{3}\\mu\\frac{{v_1}^2}{x_1}\n\t -x_1\\frac{2}{3}\\Big\\{\\sum_{\\theta = 0}^{\\bar{n}_d} \\sum_{\\vartheta = 0}^{\\bar{n}_d}\n\tX_{\\theta,\\vartheta}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_\\vartheta v_1}{x_1}) \\Big\\}\n  \\end{array}\n\\label{eqn:energycomp}\n\\end{equation}\n\n\twhich when combined with Eqs. 
\\ref{eqn:speciescomp}, \\ref{eqn:xmomcomp}, \\ref{eqn:rmomcomp}, \nand \\ref{eqn:beginHvbar} yields \n\n\\begin{displaymath}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_0 \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_0}{\\partial X_\\theta} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_{\\bar{n}_s} \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_{\\bar{n}_s}}{\\partial X_\\theta} \n\t\t\\\\\n\t\t\\mu \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\Big\\{\\mu \\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t- x_1 \\frac{2}{3} \\frac{\\partial}{\\partial X_\\theta}(\\mu \\frac{v_1}{x_1}) \\Big\\} \n\t\t\\\\\n\t\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{2\\mu\\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t-x_1\\frac{2}{3}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu\\frac{v_1}{x_1} \n\t\t\\\\\n\t\\left(\\begin{array}{c}\n\t\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,0} \\Big\\{v_0\\mu\\frac{\\partial v_1}{\\partial X_\\theta} -\\frac{2}{3}\n\t\tv_1\\mu\\frac{\\partial v_0}{\\partial X_\\theta} \\Big\\} + \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{\n\t\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\frac{\\partial c_k}{\\partial X_\\theta} + \n\t\tk^*\\frac{\\partial T}{\\partial X_\\theta} + {\\mu_k}^*\\frac{\\partial k}{\\partial X_\\theta} \\\\ \n\t\t+ v_0 \\mu \\frac{\\partial v_0}{\\partial X_\\theta} + \\frac{4}{3}v_1 \\mu \\frac{\\partial v_1}{\\partial X_\\theta}\n\t\t\\Big\\} -\\frac{2}{3}\\Big\\{\\mu\\frac{{v_1}^2}{x_1} + x_1\\sum_{\\theta = 0}^{\\bar{n}_d}\\sum_{\\vartheta = 0}^{\\bar{n}_d}\n\t\tX_{\\theta,\\vartheta}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_\\vartheta v_1}{x_1})\\Big\\}\n\t\\end{array}\\right)\n\t\t\\\\\t\n\t\t\\vdots\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}\n\n%--------------------------------------------------------------------------------------------------------------------------\n\\subsubsection{Turbulent Source Terms}\n\n\tExamining the remaining source terms for the turbulence quantities one can write\n\n\\begin{displaymath}\n\t{\\mu_k}^* \\frac{\\partial k}{\\partial r} + x_1S_k = {\\mu_k}^* \\Big\\{\\xi_r \\frac{\\partial k}{\\partial \\xi}\n\t+\\eta_r \\frac{\\partial k}{\\partial \\eta}\\Big\\} + x_1S_k = {\\mu_k}^* \\Big\\{X_{0,1} \\frac{\\partial k}{\\partial X_0}\n\t+ X_{1,1} \\frac{\\partial k}{\\partial X_1}\\Big\\} + x_1S_k\n\\end{displaymath}\t\n\n\\begin{displaymath}\n\t{\\mu_\\omega}^* \\frac{\\partial \\omega}{\\partial r} + x_1S_\\omega = {\\mu_\\omega}^* \n\t\\Big\\{\\xi_r \\frac{\\partial \\omega}{\\partial \\xi}\n\t+\\eta_r \\frac{\\partial \\omega}{\\partial \\eta}\\Big\\} + x_1S_\\omega = {\\mu_\\omega}^* \n\t\\Big\\{X_{0,1} \\frac{\\partial \\omega}{\\partial X_0}\n\t+ X_{1,1} \\frac{\\partial \\omega}{\\partial X_1}\\Big\\} + x_1S_\\omega\n\\end{displaymath}\t\n\n\tboth of which can be simplified similarly to\n\n\\begin{equation}\n\t{\\mu_k}^*\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial k}{\\partial X_\\theta} + x_1S_k\n\\label{eqn:kcomp}\n\\end{equation}\n\n\\begin{equation}\n\t{\\mu_\\omega}^*\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial \\omega}{\\partial X_\\theta} + x_1S_\\omega\n\\label{eqn:omegacomp}\n\\end{equation}\n\n\tAs a last step, since both $S_k$ and $S_\\omega$ contain the variable $P_k$ which itself contains an axisymmetric\nterm (Eq. 
\\ref{eqn:pkbe3d}), this term must be transformed as well.  Starting from the definition of $P_{k_{be3D}}$ (where\nthe $\\tilde v$ has been removed for simplicity but is still understood),\n\n\\begin{displaymath}\n\tP_{k_{be3D}} = -\\mu_T\\frac{2}{3}\\frac{v}{r}(\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial r})\n\\end{displaymath}\n\n\tMaking the substitution for $u$ and $v$ into the computational space variables $v_0$ and $v_1$ respectively while at the \nsame time replacing the physical space variables in a similar fashion yields,\n\n\\begin{displaymath}\n\tP_{k_{be3D}} = -\\mu_T\\frac{2}{3}\\frac{v_1}{x_1}(\\frac{\\partial v_0}{\\partial x_0} + \\frac{\\partial v_1}{\\partial x_1})\n\\end{displaymath}\n\n\tUsing the Calculus to rewrite the Cartesian derivatives in terms of metrics and derivatives along the computational\nco-ordinates,\n\n\\begin{displaymath}\n\tP_{k_{be3D}} = -\\mu_T\\frac{2}{3}\\frac{v_1}{x_1}\\Big\\{\\xi_x \\frac{\\partial v_0}{\\partial \\xi} +\n\t\\eta_x \\frac{\\partial v_0}{\\partial \\eta} + \\xi_r \\frac{\\partial v_1}{\\partial \\xi} + \\eta_r\n\t\\frac{\\partial v_1}{\\partial \\eta}\\Big\\}\n\\end{displaymath}\n\n\\begin{displaymath}\n\t= -\\mu_T\\frac{2}{3}\\frac{v_1}{x_1}\\Big\\{X_{0,0} \\frac{\\partial v_0}{\\partial X_0} +\n\tX_{1,0} \\frac{\\partial v_0}{\\partial X_1} + X_{0,1} \\frac{\\partial v_1}{\\partial X_0} + X_{1,1}\n\t\\frac{\\partial v_1}{\\partial X_1}\\Big\\}\n\\end{displaymath}\n\n\twhich can be simplified in summation form to,\n\n\\begin{equation}\n\tP_{k_{be3D}}= -\\mu_T\\frac{2}{3}\\frac{v_1}{x_1}\\sum_{\\theta=0}^{\\bar{n}_d}\\sum_{\\vartheta=0}^{\\bar{n}_d}\n\tX_{\\theta,\\vartheta}\\frac{\\partial v_{\\vartheta}}{\\partial X_{\\theta}}\n\\label{eqn:pkbe3dtrans}\n\\end{equation}\n\n%----------------------------------------------------------------------------------------------------------------------\n\\subsubsection{Summary of Axisymmetric Source Terms}\n\n\tTherefore the final form of the axisymmetric, multi-species, viscous, turbulent source term can be written as\nthe combination of Eqs. 
\\ref{eqn:speciescomp}, \\ref{eqn:xmomcomp}, \\ref{eqn:rmomcomp}, \\ref{eqn:energycomp}, \\ref{eqn:kcomp}, and \n\\ref{eqn:omegacomp} \n \n\\begin{equation}\n\t\\bar{S}_{be3D_v} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\nu_0 \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_0}{\\partial X_\\theta} \\\\\n\t\t\\vdots \\\\\n\t\t\\nu_{\\bar{n}_s} \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1} \n\t\t\\frac{\\partial c_{\\bar{n}_s}}{\\partial X_\\theta} \n\t\t\\\\\n\t\t\\mu \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,1}\\frac{\\partial v_0}{\\partial X_\\theta}\n\t\t+ \\sum_{\\theta = 0}^{\\bar{n}_d} X_{\\theta,0} \\Big\\{\\mu \\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t- x_1 \\frac{2}{3} \\frac{\\partial}{\\partial X_\\theta}(\\mu \\frac{v_1}{x_1}) \\Big\\} \n\t\t\\\\\n\t\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{2\\mu\\frac{\\partial v_1}{\\partial X_\\theta} \n\t\t-x_1\\frac{2}{3}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_1}{x_1})\\Big\\} - 2\\mu\\frac{v_1}{x_1} \n\t\t\\\\\n\t\\left(\\begin{array}{c}\n\t\t\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,0} \\Big\\{v_0\\mu\\frac{\\partial v_1}{\\partial X_\\theta} -\\frac{2}{3}\n\t\tv_1\\mu\\frac{\\partial v_0}{\\partial X_\\theta} \\Big\\} + \\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\Big\\{\n\t\t\\sum_{k = 0}^{\\bar{n}_s}\\nu_k h_k \\frac{\\partial c_k}{\\partial X_\\theta} + \n\t\tk^*\\frac{\\partial T}{\\partial X_\\theta} + {\\mu_k}^*\\frac{\\partial k}{\\partial X_\\theta} \\\\ \n\t\t+ v_0 \\mu \\frac{\\partial v_0}{\\partial X_\\theta} + \\frac{4}{3}v_1 \\mu \\frac{\\partial v_1}{\\partial X_\\theta}\n\t\t\\Big\\} -\\frac{2}{3}\\Big\\{\\mu\\frac{{v_1}^2}{x_1} + x_1\\sum_{\\theta = 0}^{\\bar{n}_d}\\sum_{\\vartheta = 0}^{\\bar{n}_d}\n\t\tX_{\\theta,\\vartheta}\\frac{\\partial}{\\partial X_\\theta}(\\mu\\frac{v_\\vartheta v_1}{x_1})\\Big\\}\n\t\\end{array}\\right)\n\t\t\\\\\t\n\t\t{\\mu_k}^*\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial k}{\\partial X_\\theta} + x_1S_k \n\t\t\\\\\n\t\t{\\mu_\\omega}^*\\sum_{\\theta = 0}^{\\bar{n}_d}X_{\\theta,1}\\frac{\\partial \\omega}{\\partial X_\\theta} + x_1S_\\omega\n\t\t   \\end{array}\n\t    \\right]\n\\label{eqn:finalHvbar}\n\\end{equation} \n\n\twhile the inviscid source term is defined by Eq. 
\\ref{eqn:finalHbar}\n\n\\begin{displaymath}\n\t\\bar{S}_{be3D} = \\frac{\\Omega}{x_1}\\left[ \\begin{array}{c}\n\t\t\\rho_1 v_1 \\\\\n\t\t\\vdots \\\\\n\t\t\\rho_k v_1 \\\\\n\t\t\\rho v_0v_1 \\\\\n\t\t\\rho {v_1}^2 \\\\\n\t\t(\\rho E + p)v_1 \\\\\n\t\t\\rho k v_1 \\\\\n\t\t\\rho \\omega v_1\n\t\t   \\end{array}\n\t    \\right]\n\\end{displaymath}", "meta": {"hexsha": "df072808fbb2734daf7fceb7f85d4596912d851a", "size": 45498, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "model/fluid/doc/Axisymmetric_Theory/transform.tex", "max_stars_repo_name": "zhanghuanqian/CFDWARP", "max_stars_repo_head_hexsha": "9340a8526bb263d910f79d79e84dcac7aec211b6", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 29, "max_stars_repo_stars_event_min_datetime": "2018-09-13T13:58:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T21:44:13.000Z", "max_issues_repo_path": "model/fluid/doc/Axisymmetric_Theory/transform.tex", "max_issues_repo_name": "zhanghuanqian/CFDWARP", "max_issues_repo_head_hexsha": "9340a8526bb263d910f79d79e84dcac7aec211b6", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-11-10T11:28:30.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-23T09:21:28.000Z", "max_forks_repo_path": "model/fluid/doc/Axisymmetric_Theory/transform.tex", "max_forks_repo_name": "zhanghuanqian/CFDWARP", "max_forks_repo_head_hexsha": "9340a8526bb263d910f79d79e84dcac7aec211b6", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2018-07-26T08:17:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-04T08:41:55.000Z", "avg_line_length": 44.7374631268, "max_line_length": 123, "alphanum_fraction": 0.6511055431, "num_tokens": 16862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7606506472514407, "lm_q2_score": 0.724870282120402, "lm_q1q2_score": 0.5513730492682183}}
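A brief aside on the transformation document above: the index-cycling rule attached to its single metric formula (Eq. \ref{eqn:metrics}) can be checked mechanically.  The following is a minimal Python sketch with hypothetical names, using the trivially stretched grid $x = 2\xi$, $r = 3\eta$ as a sanity check:

\begin{verbatim}
def metric(i, j, J, dx):
    # X_{i,j} = J * (2*delta_{ij} - 1) * d x_{j+1} / d X_{i+1},
    # with indices wrapping modulo the number of dimensions (here 2);
    # dx[a][b] holds d x_a / d X_b
    nd = 2
    sign = 1.0 if i == j else -1.0
    return J * sign * dx[(j + 1) % nd][(i + 1) % nd]

dx = [[2.0, 0.0], [0.0, 3.0]]     # x = 2*xi, r = 3*eta
J = 1.0 / (dx[0][0] * dx[1][1] - dx[0][1] * dx[1][0])
assert abs(metric(0, 0, J, dx) - 1.0 / 2.0) < 1e-12   # xi_x
assert abs(metric(1, 1, J, dx) - 1.0 / 3.0) < 1e-12   # eta_r
assert metric(0, 1, J, dx) == 0.0 and metric(1, 0, J, dx) == 0.0
\end{verbatim}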
{"text": "\\documentclass[]{article}\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\graphicspath{ {./images/} }\n\n% Minted\n\\usepackage[cache=false]{minted}\n\t\\usemintedstyle{vs}\n\t\\usepackage{xcolor}\n\t\t\\definecolor{light-gray}{gray}{0.97}\n\n% Subsubsection header run along with text.\t\n\\usepackage{titlesec}\n\\titleformat{\\subsubsection}[runin]% runin puts it in the same paragraph\n\t{\\normalfont\\bfseries}% formatting commands to apply to the whole heading\n\t{\\thesubsection}% the label and number\n\t{0.5em}% space between label/number and subsection title\n\t{}% formatting commands applied just to subsection title\n\t[]% punctuation or other commands following subsection title\n\n\\usepackage{enumitem}\n\\usepackage{hyperref}\n\n\\usepackage[normalem]{ulem}\n\n\\title{Big Data Analytics: Exercise 4-2}\n\\author{Brandon Hosley}\n\\date{\\today}\n\n\\begin{document}\n\\maketitle\n\n\\section*{Pre-Assignment}\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.classification import LogisticRegression\ntraining = spark.createDataFrame([\n\t(1.0, Vectors.dense([0.0, 1.1, 0.1])),\n\t(0.0, Vectors.dense([2.0, 1.0, -1.0])),\n\t(0.0, Vectors.dense([2.0, 1.3, 1.0])),\n\t(1.0, Vectors.dense([0.0, 1.2, -0.5]))], [\"label\", \"features\"])\n\\end{minted}\n\n\\section*{Assignment 1}\n\\emph{ Do the exercise in section 2.2.1.2 and 2.2.1.3 }\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nlr = LogisticRegression(maxIter=10, regParam=0.01)\nprint(\"LogisticRegression parameters:\\n\" + lr.explainParams() + \"\\n\")\nmodel1 = lr.fit(training)\n\nprint(\"Model 1 was fit using parameters: \")\nprint(model1.extractParamMap())\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image1} %\\vspace{-1.5em}\n\n\n\\section*{Assignment 2}\n\\emph{ Do the exercise in section 2.2.1.4 and 2.2.1.5 }\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nparamMap = {lr.maxIter: 20}\nparamMap[lr.maxIter] = 30 # Specify 1 Param, overwriting the original maxIter.\nparamMap.update({lr.regParam: 0.1, lr.threshold: 0.55}) # Specify multiple Params.\n\nparamMap2 = {lr.probabilityCol: \"myProbability\"} # Change output column name\nparamMapCombined = paramMap.copy()\nparamMapCombined.update(paramMap2)\n\nmodel2 = lr.fit(training, paramMapCombined)\nprint(\"Model 2 was fit using parameters: \")\nprint(model2.extractParamMap())\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image2} %\\vspace{-1.5em}\n\n\n\\section*{Assignment 3}\n\\emph{ Do the exercise in section 2.2.1.7 and 2.2.1.8 }\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\ntest = spark.createDataFrame([\n\t(1.0, Vectors.dense([-1.0, 1.5, 1.3])),\n\t(0.0, Vectors.dense([3.0, 2.0, -0.1])),\n\t(1.0, Vectors.dense([0.0, 2.2, -1.5]))], [\"label\", \"features\"])\n\nprediction = model2.transform(test)\nresult = prediction.select(\"features\", \"label\", \"myProbability\", \"prediction\").collect()\n\nfor row in result:\n\tprint(\"features=%s, label=%s -> prob=%s, prediction=%s\"\n\t\t% (row.features, row.label, row.myProbability, row.prediction))\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image3} %\\vspace{-1.5em}\n\n\n\\section*{Assignment 4}\n\n\\emph{ Do the exercise to learn ML pipeline in }\n\\href{https://spark.apache.org/docs/2.4.0/ml-pipeline.html#example-pipeline\n}{Apache Docs}.\n \n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.feature import 
HashingTF, Tokenizer\n\ntraining = spark.createDataFrame([\n\t(0, \"a b c d e spark\", 1.0),\n\t(1, \"b d\", 0.0),\n\t(2, \"spark f g h\", 1.0),\n\t(3, \"hadoop mapreduce\", 0.0)\n], [\"id\", \"text\", \"label\"])\n\ntokenizer = Tokenizer(inputCol=\"text\", outputCol=\"words\")\nhashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol=\"features\")\nlr = LogisticRegression(maxIter=10, regParam=0.001)\npipeline = Pipeline(stages=[tokenizer, hashingTF, lr])\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image4.1} %\\vspace{-1.5em}\n\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nmodel = pipeline.fit(training)\n\ntest = spark.createDataFrame([\n\t(4, \"spark i j k\"),\n\t(5, \"l m n\"),\n\t(6, \"spark hadoop spark\"),\n\t(7, \"apache hadoop\")\n], [\"id\", \"text\"])\n\nprediction = model.transform(test)\nselected = prediction.select(\"id\", \"text\", \"probability\", \"prediction\")\nfor row in selected.collect():\n\trid, text, prob, prediction = row\n\tprint(\"(%d, %s) --> prob=%s, prediction=%f\" % (rid, text, str(prob), prediction))\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image4.2} %\\vspace{-1.5em}\n\n\n\\clearpage\n\n\n\\section*{Assignment 5}\n\\emph{ Learn a model using the ML pipeline, then test it and show the results. We will use the \n\tCIKM2020 AnalytiCup COVID-19 Retweet \n\tPrediction Challenge dataset. CIKM is one of \n\tthe leading conferences in the machine-learning world. }\n\n\\begin{minted}[breaklines,bgcolor=light-gray,fontsize=\\scriptsize]{python}\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.evaluation import RegressionEvaluator\nfrom pyspark.ml.classification import LogisticRegression\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.feature import HashingTF, Tokenizer, VectorAssembler\nfrom pyspark.sql.functions import split\n\nschema = 'TweetId LONG, Username STRING, Timestamp TIMESTAMP, Followers INT, Friends INT, Retweets INT, Favorites INT, Entities STRING, Sentiment STRING, Mentions STRING, Hashtags STRING, URLs STRING'\n\ndf = spark.read.schema(schema) \\\n  .option(\"delimiter\", \"\\t\") \\\n  .option(\"timestampFormat\",\"EEE MMM dd HH:mm:ss Z yyyy\") \\\n  .csv(\"/user/data/CSC534BDA/COVID19-Retweet/TweetsCOV19-train.tsv\")\ndf.printSchema()\n\nsent_col = split(df['Sentiment'], ' ')\ndf = df.withColumn('Positivity', sent_col.getItem(0).cast('INT'))\ndf = df.withColumn('Negativity', sent_col.getItem(1).cast('INT'))\n\n(trainingData, testData) = df.randomSplit([0.8, 0.2])\n\ntokenizer = Tokenizer(inputCol=\"Entities\", outputCol=\"words\")\nhashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol=\"Words\")\nassembler = VectorAssembler(outputCol='features', \\\n  inputCols=['Followers', 'Friends', 'Favorites', 'Positivity', 'Negativity'])\n\nlr = LogisticRegression(labelCol='Retweets', featuresCol='features', maxIter=10, regParam=0.001)\n\npipeline = Pipeline(stages=[tokenizer, hashingTF, assembler, lr])\nmodel = pipeline.fit(trainingData)\n\n# Test\n\npredictions = model.transform(testData)\nevaluator = RegressionEvaluator(predictionCol=\"prediction\", \\\n  labelCol=\"Retweets\",metricName=\"r2\")\nprint(\"R Squared (R2) on test data = %g\" % \\\n  evaluator.evaluate(predictions))\npredictions.select(\"prediction\",\"Retweets\",\"features\").show(5)\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image5.5} \\vspace{1.5em}\n\n\nFor this model we achieve\n$R^2 = 0.251982$,\nwhich, while still weak,\nmakes this the strongest predictor among the models we tested. 
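One caveat about the hashing stage is worth recording here: by default \\texttt{HashingTF} produces $2^{18}$-dimensional vectors, which can be memory-hungry for high-cardinality text columns. A minimal sketch of capping the width follows; the value 1024 is illustrative rather than tuned, and the snippet assumes the \\texttt{tokenizer} defined above.\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{python}\nfrom pyspark.ml.feature import HashingTF\n\n# Sketch: cap the hashed vector width (Spark's default is 2**18 = 262144).\n# numFeatures=1024 is an assumed, illustrative value.\nhashingTF = HashingTF(inputCol=tokenizer.getOutputCol(),\n                      outputCol=\"Words\", numFeatures=1024)\n\\end{minted}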
\n\nUnfortunately, we were unable to utilize some of the other metrics, as hashing the tokenized versions of both Entities and Mentions caused a heap overflow; capping \\texttt{numFeatures} as sketched above would be the first thing to try.\n\nWhile the small sample of the data set suggested that the retweet counts fit a Tweedie distribution, the logistic regression ended up out-performing all of the other models tested.\n\n\\clearpage\n\n\\section*{Appendix:}\nThe following are all of the models that were used to explore the provided data before the final selection was made.\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.classification import LogisticRegression\nfrom pyspark.ml.evaluation import RegressionEvaluator\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.feature import HashingTF, Tokenizer, VectorAssembler\nfrom pyspark.sql.functions import split\n\n# Define the schema\n\nschema = 'TweetId LONG, Username STRING, Timestamp TIMESTAMP, Followers INT, Friends INT, Retweets INT, Favorites INT, Entities STRING, Sentiment STRING, Mentions STRING, Hashtags STRING, URLs STRING'\n\ndf = spark.read.schema(schema) \\\n\t.option(\"delimiter\", \"\\t\") \\\n\t.option(\"timestampFormat\",\"EEE MMM dd HH:mm:ss Z yyyy\") \\\n\t.csv(\"/user/data/CSC534BDA/COVID19-Retweet/TweetsCOV19-train.tsv\")\ndf.printSchema()\n\n# Fix the Sentiment column\n\nsent_col = split(df['Sentiment'], ' ')\ndf = df.withColumn('Positivity', sent_col.getItem(0).cast('INT'))\ndf = df.withColumn('Negativity', sent_col.getItem(1).cast('INT'))\n\n# Assemble the features and split the data\n\nassembler = VectorAssembler( \\\n\toutputCol='features', \\\n\tinputCols=['Followers', 'Friends', 'Positivity', 'Negativity'])\nfeatures_df = assembler.transform(df)\n\n(trainingData, testData) = features_df.randomSplit([0.8, 0.2])\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image5.1} %\\vspace{-1.5em}\n\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.regression import GeneralizedLinearRegression\n\nglr = GeneralizedLinearRegression(labelCol='Retweets', featuresCol='features', \\\nfamily=\"gaussian\", link=\"identity\", maxIter=10, regParam=0.3)\nglr_model = glr.fit(trainingData)\n\nglr_predictions = glr_model.transform(testData)\nglr_evaluator = RegressionEvaluator(predictionCol=\"prediction\", \\\nlabelCol=\"Retweets\",metricName=\"r2\")\nprint(\"R Squared (R2) on test data = %g\" % \\\nglr_evaluator.evaluate(glr_predictions))\nglr_predictions.select(\"prediction\",\"Retweets\",\"features\").show(5)\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image5.2} %\\vspace{-1.5em}\n\nA generalized linear regression (Gaussian family with identity link, i.e., ordinary linear regression) was fit first and did not give very good results\n($R^2 = -0.0098882$),\nslightly worse than a constant predictor. 
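Given the Tweedie-like shape of the retweet counts noted in the main text, one more variant worth trying is the Tweedie family that \\texttt{GeneralizedLinearRegression} also supports; a minimal sketch, where \\texttt{variancePower=1.5} (a compound Poisson--gamma setting) is an assumed value rather than a tuned one:\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.regression import GeneralizedLinearRegression\n\n# Sketch: Tweedie family for non-negative, zero-inflated count data.\n# variancePower=1.5 is an assumed setting, not tuned.\ntweedie = GeneralizedLinearRegression(labelCol='Retweets', featuresCol='features', \\\nfamily=\"tweedie\", variancePower=1.5, maxIter=10, regParam=0.3)\ntweedie_model = tweedie.fit(trainingData)\ntweedie_predictions = tweedie_model.transform(testData)\n\\end{minted}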
\nNext we will try a decision tree regression to compensate for the highly irregular distribution of retweets.\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.regression import DecisionTreeRegressor\n\ndt_model = DecisionTreeRegressor(labelCol='Retweets', featuresCol='features').fit(trainingData)\n\ndt_predictions = dt_model.transform(testData)\ndt_evaluator = RegressionEvaluator(predictionCol=\"prediction\", \\\n\tlabelCol=\"Retweets\",metricName=\"r2\")\nprint(\"R Squared (R2) on test data = %g\" % \\\n\tdt_evaluator.evaluate(dt_predictions))\ndt_predictions.select(\"prediction\",\"Retweets\",\"features\").show(5)\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image5.3} %\\vspace{-1.5em}\n\nThis model performs even worse than the linear regression\n($R^2 = -0.0693814$).\n\n\n\\begin{minted}[breaklines,bgcolor=light-gray]{shell-session}\nfrom pyspark.ml.regression import IsotonicRegression\n\nir = IsotonicRegression(labelCol='Retweets', featuresCol='features')\nir_model = ir.fit(trainingData)\n\nir_predictions = ir_model.transform(testData)\nir_evaluator = RegressionEvaluator(predictionCol=\"prediction\", \\\n\tlabelCol=\"Retweets\",metricName=\"r2\")\nprint(\"R Squared (R2) on test data = %g\" % \\\n\tir_evaluator.evaluate(ir_predictions))\nir_predictions.select(\"prediction\",\"Retweets\",\"features\").show(5)\n\\end{minted}\n\\includegraphics[width=\\linewidth]{image5.4} %\\vspace{-1.5em}\n\nThe piecewise fit of an isotonic regression provides\nus with the first model to perform better than a constant predictor\n($R^2 = 0.0224688$),\nbut it is still a very weak predictor.\n\n\n\\end{document}", "meta": {"hexsha": "a4bdef8ce26ad30f45ec781ea04f9b3f1c918f0f", "size": 11174, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2021-Spr Big Data Analytics/Ex4-2/Hosley_Ex4-2.tex", "max_stars_repo_name": "bhosley/Schoolwork", "max_stars_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2021-Spr Big Data Analytics/Ex4-2/Hosley_Ex4-2.tex", "max_issues_repo_name": "bhosley/Schoolwork", "max_issues_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2021-Spr Big Data Analytics/Ex4-2/Hosley_Ex4-2.tex", "max_forks_repo_name": "bhosley/Schoolwork", "max_forks_repo_head_hexsha": "7c4eb909d2e6c65cd93b1c7fa744a183cebfc952", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.473015873, "max_line_length": 200, "alphanum_fraction": 0.7492393055, "num_tokens": 3163, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.7248702761768249, "lm_q2_score": 0.7606506526772883, "lm_q1q2_score": 0.551373048680268}}
{"text": "%!TEX program = xelatex\n\n\\documentclass[a4paper]{article}\n\n\\usepackage{polyglossia}\n\\setdefaultlanguage{english}\n\n\\usepackage{amsmath}\n\\usepackage[backend=biber]{biblatex}\n\\usepackage[autostyle]{csquotes}\n\\usepackage{fontenc}\n\\usepackage{graphicx}\n\\usepackage{hyperref}\n\\usepackage{parskip}\n\n\\newcommand{\\HRule}{\\rule{\\linewidth}{0.5mm}}\n\n\\addbibresource{./Internal_Assessment.bib}\n\\everymath{\\displaystyle}\n\n\\title{Investigating Fourier's Solution to the Heat Equation\\\\$\\frac{\\partial{u}}{\\partial{t}}-\\alpha\\nabla^{2}u=0$}\n\\date{June 2015}\n\\author{Tarik Onalan}\n\n\\begin{document}\n    \\input{./Internal_Assessment.title.tex}\n    \n    %--------------------------------------------------------------------------------------------------%\n\n    \\maketitle\n\n    %--------------------------------------------------------------------------------------------------%\n\n    \\tableofcontents\n\n    %--------------------------------------------------------------------------------------------------%\n\n    \\section{Introduction}\n\n        The heat equation is used to describe the distribution of heat in an object over time. While\n        not introduced by Fourier, it is a method for solving the equation proposed by Fourier in\n        his \\textit{Th\u00e9orie analytique de la chaleur}---translated as ``an analytical theory of\n        heat''---that we will be investigating. I chose to investigate Fourier's solution to the heat\n        equation because it is through his solution that the Fourier Transform, a popular tool in signal\n        processing, was defined. I frequently use the Fourier Transform while doing data analysis, and I\n        wanted to learn how and why it was created.\n\n        \\begin{center}\n            \\includegraphics{./heat_eq.png}\n        \\end{center}\n\n    %--------------------------------------------------------------------------------------------------%\n\n    \\section{Solution}\n        \\subsection{Defining the heat equation}\n            The heat equation (as defined in a rod) is as follows:\n            \\begin{equation}\n                u_{t} = \\alpha u_{xx}\n            \\end{equation}\n            where $u = u\\left(x,t\\right)$ is a function of two variables $x$ and $t$.\n            \\begin{itemize}\n                \\item $x$: position, where $x \\in [0,L]$ and $L$ is the length of the rod\n                \\item $t$: time, where $t\\geq0$\n            \\end{itemize}\n            assuming the initial condition\n            \\begin{equation}\n                u\\left(x,0\\right) = f\\left(x\\right) \\quad \\forall x \\in [0,L]\n            \\end{equation}\n            and the following boundary conditions\n            \\begin{equation}\n                u\\left(0,t\\right) = 0 = u\\left(L,t\\right) \\quad \\forall t > 0\n            \\end{equation}\n\n            \\begin{center}\n                \\includegraphics{./temp_dist.png}\n            \\end{center}\n\n        \\subsection{Our objective}\n            We are looking for nontrivial solutions to to $\\left(1\\right)$ that satisfy the conditions defined by\n            equations $\\left(2\\right)$ and $\\left(3\\right)$. We still need more information, however. 
Let us look at equation\n            $\\left(1\\right)$ again:\n            \\[\n                u_{t} = \\alpha u_{xx}\n            \\]\n            We can rearrange this equation so that it appears as follows:\n            \\[\n                u_{t} - \\alpha u_{xx} = 0\n            \\]\n            What does this mean? The equation is a linear homogeneous partial differential equation,\n            meaning that $u$ can be written as the product\n            \\begin{equation}\n                u\\left(x,t\\right)=X\\left(x\\right)T\\left(t\\right)\n            \\end{equation}\n            through separation of variables.\n\n        \\subsection{Solving the constituent equations $T\\left(t\\right)$ and $X\\left(x\\right)$}\n            \\[\n                \\begin{cases}\n                    u_{t} = X\\left(x\\right)T'\\left(t\\right) \\\\\n                    u_{xx} = X''\\left(x\\right)T\\left(t\\right) \\\\\n                \\end{cases}\n            \\]\n            \\\\\n            \\[\n                X\\left(x\\right)T'\\left(t\\right) = \\alpha X''\\left(x\\right)T\\left(t\\right) \\\\\n            \\]\n            \\[\n                \\frac{T'\\left(t\\right)}{\\alpha T\\left(t\\right)} = \\frac{X''\\left(x\\right)}{X\\left(x\\right)}\n            \\]\n            One thing to notice at this point is that the two sides are equal even though each depends\n            on a different independent variable. This means that both must equal some constant, which we\n            will call $-\\lambda$ (the negative sign is for simplicity later on).\n\n            Now we have the following:\n            \\[\n                \\frac{T'\\left(t\\right)}{\\alpha T\\left(t\\right)} = \\frac{X''\\left(x\\right)}{X\\left(x\\right)} = -\\lambda\n            \\]\n            \\[\n                \\begin{cases}\n                    \\frac{T'\\left(t\\right)}{\\alpha T\\left(t\\right)} = -\\lambda \\\\\n                    \\frac{X''\\left(x\\right)}{X\\left(x\\right)} = -\\lambda\n                \\end{cases}\n            \\]\n            Let us take the first equation, $\\frac{T'\\left(t\\right)}{\\alpha T\\left(t\\right)} = -\\lambda$:\n            \\[\n                \\frac{1}{T\\left(t\\right)}\\cdot\\frac{dT}{dt} = -\\lambda \\alpha\n            \\]\n            \\[\n                \\int{\\frac{1}{T\\left(t\\right)}}dT = -\\int{\\lambda \\alpha}dt\n            \\]\n            \\[\n                \\ln{T\\left(t\\right)} + K_1= -\\lambda \\alpha t + K_2\n            \\]\n            \\[\n                T\\left(t\\right) = e^{-\\lambda \\alpha t + K}\n            \\]\n            \\begin{equation}\n                T\\left(t\\right) = A e^{-\\lambda \\alpha t}\n            \\end{equation}\n            Now let us take the second equation, $\\frac{X''\\left(x\\right)}{X\\left(x\\right)} = -\\lambda$\n            \\[\n                \\frac{d^{2}X}{dx^{2}} = -\\lambda X\\left(x\\right)\n            \\]\n            This may seem daunting at first, but let us quickly observe a few things about this\n            differential equation:\n            \\begin{itemize}\n                \\item Differentiated twice, the function gains a coefficient $-\\lambda$\n                \\item Differentiated twice, it is still the same basic function\n            \\end{itemize}\n            The latter point is very interesting. Essentially only one function remains the same after\n            differentiation: $e^{x}$. We have found the characteristic function for $X\\left(x\\right)$. 
All that remains\n            is determining the exponent coefficients ($a$, in this case):\n            \\[\n                \\frac{d^{2} e^{ax}}{dx^{2}} = -\\lambda e^{ax}\n            \\]\n            \\[\n                a^{2} e^{ax} + \\lambda e^{ax} = 0\n            \\]\n            \\[\n                e^{ax}\\left(a^{2} + \\lambda\\right) = 0\n            \\]\n            Given that $e^{ax} \\neq 0$:\n            \\[\n                a^{2} + \\lambda = 0\n            \\]\n            \\[\n                a = \\pm\\sqrt{-\\lambda}\n            \\]\n            Therefore,\n            \\[\n                X\\left(x\\right) = \\sum_{b \\in a}{B e^{bx}}\n            \\]\n            \\begin{equation}\n                X\\left(x\\right) = B e^{\\sqrt{-\\lambda}x} + C e^{-\\sqrt{-\\lambda}x}\n            \\end{equation}\n\n        \\subsection{Limiting $\\lambda$ to nontrivial solutions}\n            Let us begin with $\\lambda < 0$, for which $\\sqrt{-\\lambda}$ is real, and consider equation $\\left(6\\right)$:\n            \\[\n                X\\left(x\\right) = B e^{\\sqrt{-\\lambda}x} + C e^{-\\sqrt{-\\lambda}x}\n            \\]\n            Now, let us consider the boundary conditions defined by equation $\\left(3\\right)$:\n            \\[\n                X\\left(0\\right) = 0 = B e^{\\sqrt{-\\lambda} \\cdot 0} + C e^{-\\sqrt{-\\lambda} \\cdot 0} = B + C\n            \\]\n            so $C = -B$, and then\n            \\[\n                X\\left(L\\right) = 0 = B\\left(e^{\\sqrt{-\\lambda}L} - e^{-\\sqrt{-\\lambda}L}\\right)\n            \\]\n            Which would mean that $B = 0 = C$, a trivial solution.\n            \n            Now let us consider $\\lambda = 0$. Here the equation reduces to $X''\\left(x\\right) = 0$, whose\n            general solution is $X\\left(x\\right) = B + Cx$; the boundary conditions\n            $X\\left(0\\right) = 0 = X\\left(L\\right)$ would again mean that $B = 0 = C$.\n\n            Thus, $\\lambda > 0$. With this knowledge, we can make a couple of changes to equation $\\left(6\\right)$\n            using Euler's formula:\n            \\[\n                X\\left(x\\right) = B e^{i\\sqrt{\\lambda} x} + C e^{-i\\sqrt{\\lambda} x}\n            \\]\n            This is where $-\\lambda$ becomes preferable to $\\lambda$: with $\\lambda > 0$ the exponent\n            $\\sqrt{-\\lambda}\\,x = i\\sqrt{\\lambda}\\,x$ is purely imaginary, so we can factor out an $i$ and apply\n            Euler's formula. The expansion of the above equation yields\n            \\[\n                X\\left(x\\right) = B\\left(\\cos{\\left(\\sqrt\\lambda x\\right)} + i\\sin{\\left(\\sqrt\\lambda x\\right)}\\right) + C\\left(\\cos{\\left(\\sqrt\\lambda x\\right)} - i\\sin{\\left(\\sqrt\\lambda x\\right)}\\right)\n            \\]\n            Grouping like terms (and absorbing the constants into new $B$ and $C$) results in the equation\n            \\[\n                X\\left(x\\right) = B\\sin{\\left(\\sqrt\\lambda x\\right)} + C\\cos{\\left(\\sqrt\\lambda x\\right)}\n            \\]\n            From there, solving for the boundary conditions defined in equation $\\left(3\\right)$ yields:\n            \\[\n                X\\left(0\\right) = 0 = B\\sin{\\left(\\sqrt\\lambda \\cdot 0\\right)} + C\\cos{\\left(\\sqrt\\lambda \\cdot 0\\right)}\n            \\]\n            \\[\n                X\\left(0\\right) = 0 = B\\sin{0} + C\\cos{0}\n            \\]\n            which means that $C = 0$. However, another piece of information can be gained from\n            this. 
If we look closely at the following equation, given $C = 0$:\n            \\[\n                X\\left(L\\right) = 0 = B\\sin{\\left(\\sqrt\\lambda \\cdot L\\right)}\n            \\]\n            we can see that we can calculate the possible values of $\\sqrt\\lambda$\n            \\[\n                \\sin{\\left(\\sqrt\\lambda \\cdot L\\right)} = 0\n            \\]\n            \\[\n                \\sqrt\\lambda = \\frac{n \\pi}{L}\n            \\]\n\n        \\subsection{Finishing the proof}\n            Now that $\\lambda$ is defined, we can apply it to our functions $X\\left(x\\right)$ and $T\\left(t\\right)$\n            \\[\n                \\begin{cases}\n                    X\\left(x\\right) = B \\sin\\left(\\frac{n \\pi x}{L}\\right) \\\\\n                    T\\left(t\\right) = A e^{-\\frac{n^{2} \\pi^{2} \\alpha t}{L^{2}}}\n                \\end{cases}\n            \\]\n            We can now apply this to equation $\\left(4\\right)$, yielding the particular solution\n            \\[\n                u_{par}\\left(x,t\\right) = B_{n} \\sin\\left(\\frac{n \\pi x}{L}\\right) \\cdot e^{-\\frac{n^{2} \\pi^{2} \\alpha t}{L^{2}}}\n            \\]\n            Computing the summation of $u_{par}$ over all $n$ yields the general solution\n            \\[\n                u\\left(x,t\\right) = \\sum_{n=1}^{\\infty}{u_{par}}\n            \\]\n            \\[\n                u\\left(x,t\\right) = \\sum_{n=1}^{\\infty}{B_{n} \\sin\\left(\\frac{n \\pi x}{L}\\right) \\cdot e^{-\\frac{n^{2} \\pi^{2} \\alpha t}{L^{2}}}}\n            \\]\n            Let us now solve for the initial condition stated in equation $\\left(2\\right)$\n            \\[\n                u\\left(x,0\\right) = \\sum_{n=1}^{\\infty}{B_{n} \\sin\\left(\\frac{n \\pi x}{L}\\right) \\cdot e^{-\\frac{n^{2} \\pi^{2} \\alpha 0}{L^{2}}}}\n            \\]\n            \\[\n                u\\left(x,0\\right) = \\sum_{n=1}^{\\infty}{B_{n} \\sin\\left(\\frac{n \\pi x}{L}\\right) \\cdot e^{0}}\n            \\]\n            \\[\n                u\\left(x,0\\right) = \\sum_{n=1}^{\\infty}{B_{n} \\sin\\left(\\frac{n \\pi x}{L}\\right)}\n            \\]\n            Recognizing this as a Fourier sine series for $f\\left(x\\right)$, we can now define $B_{n}$\n            \\[\n                B_{n} = \\frac{2}{L} \\int_{0}^{L}{f\\left(x\\right) \\sin\\left(\\frac{n \\pi x}{L}\\right)} dx\n            \\]\n            and solve for $u\\left(x,t\\right)$, finishing the problem\n            \\[\n                u\\left(x,t\\right) = \\sum_{n=1}^{\\infty}{\\frac{2}{L} \\left(\\int_{0}^{L}{f\\left(x\\right) \\sin\\left(\\frac{n \\pi x}{L}\\right)} dx\\right) \\sin\\left(\\frac{n \\pi x}{L}\\right) \\cdot e^{-\\frac{n^{2} \\pi^{2} \\alpha t}{L^{2}}}}\n            \\]\n\n    %--------------------------------------------------------------------------------------------------%\n\n    \\section{Reflection}\n\n        \\vfill\n\n    %--------------------------------------------------------------------------------------------------%\n\n    \\nocite{fourierj}\n    \\printbibliography\n\\end{document}", "meta": {"hexsha": "a93cdbc9bd49aa868f8030ffa0d4d30f6fcbc12d", "size": 12357, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2014-2015/Math/Internal_Assessment.tex", "max_stars_repo_name": "QuantumPhi/school", "max_stars_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"2014-2015/Math/Internal_Assessment.tex", "max_issues_repo_name": "QuantumPhi/school", "max_issues_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-04-10T07:28:17.000Z", "max_issues_repo_issues_event_max_datetime": "2015-04-10T07:30:10.000Z", "max_forks_repo_path": "2014-2015/Math/Internal_Assessment.tex", "max_forks_repo_name": "QuantumPhi/school", "max_forks_repo_head_hexsha": "a1bec6b1ed4ea843cb291babf7b7b9925e370749", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.6643109541, "max_line_length": 232, "alphanum_fraction": 0.4780286477, "num_tokens": 3365, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7248702642896702, "lm_q2_score": 0.7606506418255927, "lm_q1q2_score": 0.5513730317722246}}
{"text": "\n\\subsection{Kernel Fisher discriminant analysis}\n\nUse LDA with kernel feature spaces.\n\n", "meta": {"hexsha": "f97bc0103d0e29e72e859815c8140fca77b6816a", "size": 88, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-02-KFDA.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-02-KFDA.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/statistics/discriminantAnalysis/01-02-KFDA.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 14.6666666667, "max_line_length": 48, "alphanum_fraction": 0.8068181818, "num_tokens": 18, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8558511469672594, "lm_q2_score": 0.6442251064863697, "lm_q1q2_score": 0.5513607962914643}}
{"text": "\\documentclass[letterpaper]{article}\n\n\\usepackage{fullpage}\n\\usepackage{nopageno}\n\\usepackage{amsmath}\n\\usepackage{amssymb}\n\\usepackage[utf8]{inputenc}\n\\allowdisplaybreaks\n\n\\newcommand{\\abs}[1]{\\left\\lvert #1 \\right\\rvert}\n\n\\begin{document}\n\\title{Notes}\n\\date{November 11, 2014}\n\\maketitle\n$f:\\mathbb{R}^n\\to\\mathbb{R}^m$ is continous iff $\\forall C$ is closed in $R^{m}$ then $f^{-1}(C)$ is closed in $\\mathbb{R}^n$.o\n\nassume $f$ is continuous. let $c\\in \\mathbb{R}^n$ be closed. we want to show $f^{-1}(C)$ is closed.\n\nfirst we pick a sequence $x_k\\in f^{-1}(C)$. or $f(x_k)\\in C$. we are picking $x_k$ so that it converges to something (say $y$). now because $f(x)$ is continous, then $f(x_k)\\to f(y)$. and because $f(y)\\in C$ then $C$ is closed.\n\n\\section*{true false}\n$f:\\mathbb{R}^n\\to\\mathbb{R}^m$ is continous\n\\begin{enumerate}\n\\item\nfi $A\\subseteq \\mathbb{R}^n$ is open then $f(A)$ is open\n\\item\nfi $A\\subseteq \\mathbb{R}^n$ is closed then $f(A)$ is closed\n\\item\nfi $A\\subseteq \\mathbb{R}^n$ is compact then $f(A)$ is compact\n\\item\nfi $A\\subseteq \\mathbb{R}^m$ is open then $f^{-1}(A)$ is open\n\\item\nfi $A\\subseteq \\mathbb{R}^m$ is compact then $f(A)$ is compact\n\\end{enumerate}\n\nunions, intersections of $f$or $f^{-1}$ over two sets. $f(A^{c})=(f(A))^{c}$ and same with $f^{-1}$.\n\n\\section*{5.5.A}\nnote that $g'(x)$ is unbounded on $(0,\\infty)$. so we cannot sed the ``bounded slope '' argueent for uniform continuity. need to show $\\exists r=r(\\varepsilon)$ such that if $|x-y|<r$ then $|\\sqrt{x}-\\sqrt{y}|<\\varepsilon$\n\nif we prove this then $\\sqrt{x}-\\sqrt{y}\\le \\sqrt{x-y}$ $\\sqrt{r}=\\varepsilon$\n\n\\begin{align*}\n  x-y=(\\sqrt{x}-\\sqrt{y})(\\sqrt{x}+\\sqrt{y})\\ge (\\sqrt{x}-\\sqrt{y})^2\\\\\n  \\sqrt{x-y}\\ge \\sqrt{x}-\\sqrt{y}\n\\end{align*}\n\\section*{5.5.D}\nshow that $f(x)=x^p$ is not uniformally continuous on $\\mathbb{R}$ is $p>1$\n\n\\begin{align*}\n  \\frac{x^p-y^p}{x-y}=|xx^{p-1}-y^p|=xx^{p-1}-x^{p-1}y+x^{p-1}y-y^{p}|le |x^{p-1}||x-y|+|y||x^{p-1}-y^{p-1}|\n\\end{align*}\n\\section*{T is continouous, A is closed, it T(A) closed?}\nno, hint, look for $T:\\mathbb{R}^2\\to \\mathbb{R}$ or $\\mathbb{R}^2$\n where $T$ is a linear transform (matrix), example $f(x)=\\frac{1}{x^2+1}$ on $\\mathbb{R}$, $f(\\mathbb{R})=(0.1]$\n\n \\section*{5.4.J}\n let $A$ be compact in $\\mathbb{R}^n$. show that $\\forall x\\in \\mathbb{R}^n\\exists a\\in A$ such that $a$ is the closest point from $A$ so $x$. that is $||x-a||\\le||x-y||\\forall y \\in A$.\n \\subsubsection*{proof}\n $x$ is a fixed point. $f(y)=||x-y||$. $A$ is compact, if we showthat $f$ is continous then $f$ has  min on $A$.\n\n if $||z-y||<r$ then $|f(z)-f(y)|<\\varepsilon$. $| ||x-z||-||x-y|| |\\le ||z-y||$. need $r=\\epsilon$\n\\section*{5.5.HI}\nlet $f$ be continuous on $(0,1]$ show $f$ is uniformallly continuous iff $\\lim_{x\\to 0^+}f(x)$ exists.\n\nasssuming $f$ is uniformally continuous, $|f(x)-f(y)|<\\varepsilon$ whenever $|x-y|<r$. take $x_k$ any decreasing sequence converging to zero. we use the cauchy condition. $\\forall r\\exists N$ such that $|x_k-x_l|<r$ if $k,l\\ge N$. so $|f(x_k)-f(x_l)|<\\varepsilon$ then $f(x_k)$ is cauchy. 
and so $\\lim_{k\\to\\infty}f(x_k)=L=\\lim_{x\\to0^+}f(x)$ exists.\n\nFor the other direction, we need to show that $r$ can be chosen to depend only on $\\varepsilon$, not on the point.\n\\end{document}\n", "meta": {"hexsha": "6c92f5e821a9f300f743675af7806f6768c5cd90", "size": 3211, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "real analysis/analysis-notes-2014-11-10.tex", "max_stars_repo_name": "ylixir/school", "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "real analysis/analysis-notes-2014-11-10.tex", "max_issues_repo_name": "ylixir/school", "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "real analysis/analysis-notes-2014-11-10.tex", "max_forks_repo_name": "ylixir/school", "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.2253521127, "max_line_length": 342, "alphanum_fraction": 0.6337589536, "num_tokens": 1306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947425132315, "lm_q2_score": 0.8418256532040708, "lm_q1q2_score": 0.5513071943961129}}
{"text": "\\documentclass{article}\n\n\\usepackage{mathrsfs,amsmath}\n\\usepackage{xcolor}\n\\usepackage{titlesec}\n\\usepackage{listings}\n\\usepackage{syntax}\n\\usepackage{pythonhighlighting}\n\\usepackage{graphicx}\n\n\\usepackage[margin=1.4in]{geometry}\n\\graphicspath{ {./images/} }\n\n\\title{Exercise \\#00 | CS 335} \n\\author{Jared Dyreson\\\\ \n        California State University, Fullerton}\n\n\\DeclareRobustCommand{\\bowtie}{%\n  \\mathrel\\triangleright\\joinrel\\mathrel\\triangleleft}\n\n\n\\usepackage [english]{babel}\n\\usepackage [autostyle, english = american]{csquotes}\n\\MakeOuterQuote{\"}\n\n\\titlespacing*{\\section}\n{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex}\n\\titlespacing*{\\subsection}\n{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex}\n\n\\usepackage{hyperref}\n\\hypersetup{\n    colorlinks,\n    citecolor=black,\n    filecolor=black,\n    linkcolor=black,\n    urlcolor=black\n}\n\n\\begin{document}\n\n\\maketitle\n\\tableofcontents\n\n\\newpage\n\n\\section{Examples}\n\n\\subsection{Arithmetic}\n\n\\begin{flushleft}\n\n\\textbf{Problem:}\n\n\\begin{python}\ndef arithmetic(lst):\n    x = lst[0]\n    y = lst[1]\n    lst[2] = x + y\n    return lst[0] + lst[1]\n\\end{python}\n\n\\textbf{Analysis:} Each line only contains one 1 step, therefore it linear. Thus having a classification of $O(n)$.\n\n\\end{flushleft}\n\n\\subsection{Nested Function Calls}\n\n\\begin{flushleft}\n\n\\begin{python}\ndef setup():\n    forward = make_list(20) # n + 3 : dependent of defined size and contents of function called below\n    backward = make_list(30) # n + 3 ^ see above\n    return forward, backward # 1\n    # 2(n + 3) + 1 => 2n + 6 + 1 => 2n + 7\n\ndef make_list(n):\n    L = [] # 1\n    for i in range(n): # n\n        L.append(0) # 1\n    return L # 1\n    # 1 + n + 1  + 1 => n + 3\n\\end{python}\n\n\\textbf{Analysis:} Therefore, the current runtime is $O(2n + 7)$. However, since we are classifying algorithms in Big Oh, we ignore leading coefficients and only focus on the leading term.\nThus, the true classification of this algorithm is of $O(n)$, as it is directly proportional to the input of the problem set.\n\n\\end{flushleft}\n\n\\subsection{Loops}\n\n$$T(n) = \\sum_{x \\in X} t_{x}$$\n\n\\begin{flushleft}\n\n\\begin{python}\nfor x in range(100):\n    print(\"hello\") # please assume printing only takes one instruction/atomic\n    print(\"world\")\n    print(\"goodbye\")\n\\end{python}\n\n\\textbf{Analysis:} Since this loop only contains atomics, we can add up the amount of them and use it as our coefficient. Thus becoming: $O(3n) \\implies O(n)$\n\n\\end{flushleft}\n\n\\subsection{Loops in Sequence}\n\n\\begin{flushleft}\n\\begin{python}\ndef func():\n    for x in range(100):\n        # contains three atomics\n        print(\"hello\")\n        print(\"hello\")\n        print(\"hello\")\n    for x in range(1000):\n        # contains two atomics\n        print(\"world\")\n        print(\"hello\")\n\\end{python}\n\n\\textbf{Analysis:} Since both of these loops only contain atomics, we can add up the amount of them and use it as our coefficient for each. Both loops are independent of one another and do not contribute to the overall runtime classification. 
Thus becoming: $( O(3n) + O(2n) ) \\implies O(5n) \\implies O(n)$\n\n\\end{flushleft}\n\n\n\\subsection{Nested Loops}\n\n\\begin{flushleft}\n\n\\begin{python}\nfor x in range(10):\n    # three atomics\n    for y in range(10):\n        # two atomics\n\\end{python}\n\n\\textbf{Analysis:} When attempting to analyze the runtime of nested loops, we need to count how many nested loops there are. This gives us the degree of the resulting polynomial. These instructions compound on one another, as the parent must execute along with all its children. In this case, our innermost loop is of $O(2n)$ and our parent is $O(3n)$. We can determine the runtime by saying that:\n\n$$O(2n) \\times O(3n) \\implies O(6n^{2}) \\implies O(n^{2})$$\n\n\\end{flushleft}\n\n\\newpage\n\n\\section{General Notes}\n\n\\begin{itemize}\n\\item If a cost function gives a large result from a small input size, please refrain from using that algorithm\n\n\\includegraphics[width=7cm]{joy.jpg}\n\\end{itemize}\n\n\\end{document}\n\n", "meta": {"hexsha": "b79be9d91c577e037ff5db5d66a68ac052cae753", "size": 3980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes/00 Introduction/Introduction.tex", "max_stars_repo_name": "JaredsAlgorithms/Assignments", "max_stars_repo_head_hexsha": "1efa28b481033384ff566cdda0e93ee269230b26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/00 Introduction/Introduction.tex", "max_issues_repo_name": "JaredsAlgorithms/Assignments", "max_issues_repo_head_hexsha": "1efa28b481033384ff566cdda0e93ee269230b26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/00 Introduction/Introduction.tex", "max_forks_repo_name": "JaredsAlgorithms/Assignments", "max_forks_repo_head_hexsha": "1efa28b481033384ff566cdda0e93ee269230b26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1898734177, "max_line_length": 402, "alphanum_fraction": 0.6952261307, "num_tokens": 1198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6548947155710233, "lm_q2_score": 0.8418256432832333, "lm_q1q2_score": 0.5513071652183668}}
{"text": "\\chapter{Concatenated backward ray mapping}\\label{chap:raymapping1}\nIn the previous chapter we have seen that PS ray tracing based on the source and the target PS constitutes an improvement of MC and QMC ray tracing. \nNow, a method that employs not only the source and the target PS but also the PS of \\textit{all} the other lines that constitute the optical system is introduced. \n% Furthermore, instead of starting from the source, the new approach is an inverse method which starts considering rays on target PS.\nIn this chapter, we consider systems formed only by straight and reflective line segments.\nAll lines can be modeled as receivers of the incident light and emitters of the reflected light, they constitute\nthe target for incident rays and the source for reflected rays.\nMoreover, we assume that the source can only emit light and the target can only receive light.\n%Therefore, \n%one PS is taken into account for the source and one for the target while both the source and target phase spaces are considered for the other lines. E%\n%Every line of the system (except for the source \\point{S} and the target \\point{T}) . \nTherefore, two different PS are considered for the reflectors and one PS for\n\\point{S} and \\point{T}. All these phase spaces are connected through a map which relates the ray coordinates on every PS. This map can be written as the concatenation of many maps which can be classified as two different kind of maps, i.e., the map that connects the source and the target PS of two \\textit{different} lines and the map that connects the target and the source PS of the \\textit{same} line.\nEmploying the inverses of these maps we are able to detect the parts of target PS illuminated by the source.\nAll the PS considered are divided into several regions, the boundaries of which can be determined exactly for systems formed by straight lines.\nWe make the assumption of a Lambertian source.\n%; hence, the luminance is a positive constant inside a when different from $0$. \nAs a consequence, the output intensity along a given direction is obtained from the total width of all the patches with positive luminance, measured along that direction.\\\\ \\indent \n%In this chapter we explain the method for systems formed by straight and reflective lines segment. \nIn this chapter, two different optical systems are investigated: the two-faceted cup and the so-called multi-faceted cup. Next, the details of the procedure are explained for the two-faceted-cup.\n\\section{Phase spaces of the two-faceted cup}\\label{sec:cup_raymapping}\nA two-faceted cup is introduced in Chapter \\ref{chap:raytracing} and depicted in Figure \\ref{fig:cup}. It is formed by a source \\point{S}, a target \\point{T} and two reflectors which are straight lines segments. \nUsing the same notation of Chapter \\ref{chap:PS}, we denote with $\\mbox{\\set{S}{}{}}=\\mbox{\\set{Q}{}{}}\\times\\mbox{\\set{P}{}{}}$ the PS and with \n$(\\variabile{q}, \\variabile{p})$ the rays coordinates in \\set{S}{}{}.\\\\ \\indent\nLet's now introduce some new notation. \nThe source and the target PS of a line $\\lineai$ are indicated with \\set{S}{\\lineai}{} and \\set{T}{\\lineai}{}, respectively. The initial rays coordinates in \\set{S}{}{} are denoted with $(\\pos{s,}{$1$}, \\dir{s,}{$1$})$.\nThe coordinates of every ray that reaches the line $\\lineai\\in\\{2, 3, 4\\}$ are indicated  with $(\\pos{t,}{\\lineai}, \\dir{t,}{\\lineai})$ on \\set{T}{\\lineai}{}. 
\nIn the following, to simplify the notation, we indicate the target coordinates of the rays on \\set{T}{$4$}{} with (\\variabile{q}, \\variabile{p}) instead of $(\\pos{t,}{$4$}, \\dir{t,}{$4$})$.\nAfter reflection, the ray leaves line $\\lineai \\in\\{2, 3\\}$ at the same position but with a different direction; the new ray coordinates are indicated with \n$(\\pos{s,}{\\lineai}, \\dir{s,}{\\lineai})$ on \\set{S}{\\lineai}{}. \nNote that $\\pos{s,}{\\lineai}= \\pos{t,}{\\lineai}$, while $\\dir{s,}{\\lineai}$ is obtained by applying the reflection law to the direction coordinate $\\dir{t,}{\\lineai}$ of the incident ray.\n\nThe phase spaces \\set{S}{\\lineai}{} and \\set{T}{\\lineai}{} of each line $\\lineai$ are partitioned into different regions, (\\set{S}{\\lineai,}{\\lineaj})$_{\\lineaj=2, 3, 4}$ and (\\set{T}{\\lineai,}{\\lineak})$_{\\lineak=1, 2, 3}$, respectively, where $\\lineaj\\neq \\lineai$ is the index of the line that is illuminated by $\\lineai$ and $\\lineak\\neq\\lineai$ is the index of the line that illuminates $\\lineai$. Hence, \\set{S}{\\lineai,}{\\lineaj}$\\subset$ \\set{S}{\\lineai}{} is the part of \\set{S}{\\lineai}{} corresponding to rays that illuminate line $\\lineaj$, and \\set{T}{\\lineai,}{\\lineak} $\\subset$ \\set{T}{\\lineai}{} is the part of \\set{T}{\\lineai}{} corresponding to rays originating from the line $\\lineak$. Note that, because the source only emits light, we do not define its target PS \\set{T}{$1$}{}. Similarly, since the target only receives light, its source PS \\set{S}{$4$}{} is not defined.\nFor the two-faceted cup, six different phase spaces need to be considered, which are given by the following expressions:\n\\begin{equation}\n\\label{SPS}\n\\begin{split}\n \\mbox{\\set{S}{$1$}{}} & = \\mbox{\\set{S}{$1$,}{$2$}}\\cup\n \\mbox{\\set{S}{$1$,}{$3$}} \\cup \\mbox{\\set{S}{$1$,}{$4$}},\\\\\n\\mbox{\\set{S}{$2$}{}} & =  \\mbox{\\set{S}{$2$,}{$3$}} \\cup \\mbox{\\set{S}{$2$,}{$4$}},\\\\\n\\mbox{\\set{S}{$3$}{}} & =  \\mbox{\\set{S}{$3$,}{$2$}} \\cup \\mbox{\\set{S}{$3$,}{$4$}},\\\\\n\\mbox{\\set{T}{$2$}{}} & = \\mbox{\\set{T}{$2$,}{$1$}} \\cup \\mbox{\\set{T}{$2$,}{$3$}},\\\\\n\\mbox{\\set{T}{$3$}{}} & = \\mbox{\\set{T}{$3$,}{$1$}}\\cup \\mbox{\\set{T}{$3$,}{$2$}},\\\\\n\\mbox{\\set{T}{$4$}{}} & = \\mbox{\\set{T}{$4$,}{$1$}}\\cup \\mbox{\\set{T}{$4$,}{$2$}}\\cup\n\\mbox{\\set{T}{$4$,}{$3$}}.\n\\end{split}\n \\end{equation}\nNote that, as the source cannot receive light and the target cannot emit light, the regions $(\\mbox{\\set{S}{\\lineai,}{$1$}})_{\\lineai=2,3}$ and $(\\mbox{\\set{T}{\\lineai,}{$4$}})_{\\lineai=2, 3}$ are not considered. \n\\\\ \\indent The boundaries $\\partial \\mbox{\\set{S}{\\lineai,}{\\lineaj}}$ are mapped into the boundaries $\\partial \\mbox{\\set{T}{\\lineaj,}{\\lineai}}$ for every $\\lineai\\in\\{1, 2, 3\\}$ \nand $\\lineaj\\in\\{2, 3, 4\\}$ with $\\lineaj\\neq \\lineai$ (edge-ray principle). For the two-faceted cup, and for all systems formed by straight line segments, these boundaries can be determined analytically. Given two lines $\\lineai$ and $\\lineak$\nwith $\\lineai\\neq \\lineak$, the boundaries $\\partial$\\set{S}{\\lineai,}{\\lineak} and $\\partial$\\set{T}{\\lineak,}{\\lineai} are determined as follows. 
Let $(\\variabile{x}_{\\lineai, \\ell}, \\variabile{z}_{\\lineai, \\ell})$ and $(\\variabile{x}_{\\lineai, \\textrm{r}},\\variabile{z}_{\\lineai, \\textrm{r}})$ be the coordinates of the points located at the left and the right extreme of line $\\lineai$, respectively.\nSimilarly, $(\\variabile{x}_{\\lineak, \\ell}, \\variabile{z}_{\\lineak, \\ell})$ and $(\\variabile{x}_{\\lineak, \\textrm{r}},\\variabile{z}_{\\lineak, \\textrm{r}})$ are the coordinates of the points located at the left and the right extreme of line $\\lineak$, respectively.\nThe boundaries $\\partial$\\set{S}{\\lineai,}{\\lineak} and $\\partial$\\set{T}{\\lineak,}{\\lineai} are  %obtained considering all the rays that leave the end points of line $\\lineai$ \n%and all the rays that reach the end points of $\\lineaj$.\nformed by four different curves:\ntwo of them are given by all the rays that leave the end points of line $\\lineai$ and hit line $\\lineak$, and the other two are given by the rays\nthat leave the interior of line $\\lineai$ and hit the end points of line $\\lineak$.\nThey are given by:\n \\begin{equation}\n\\label{eq:analytic_boundaries}\n \\begin{split}\n \\partial\\mbox{\\set{S}{\\lineai,}{\\lineak}} & = \\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineak}{\\,1}}\\cup \\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineak}{\\,2}} \\cup \\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineak}{\\,3}}\\cup \\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineak}{\\,4}},\\\\\n\\partial\\mbox{\\set{T}{\\lineak,}{\\lineai}} & = \\partial\\mbox{\\setbound{T}{\\lineak,}{\\lineai}{\\,1}}\\cup \\partial\\mbox{\\setbound{T}{\\lineak,}{\\lineai}{\\,2}}\\cup \\partial\\mbox{\\setbound{T}{\\lineak,}{\\lineai}{\\,3}}\\cup \\partial\\mbox{\\setbound{T}{\\lineak,}{\\lineai}{\\,4}}.\n \\end{split}\n \\end{equation}\nIn the following we explain in more detail the case for $\\variabile{\\lineai}=1$ and $\\variabile{\\lineak}=4$ (see Figure \\ref{fig:cups}).\n\\begin{figure}\n\\centering\n\\begin{subfigure}{.48\\textwidth}\n  \\centering\n  \\includegraphics[width=\\textwidth]{rays_cup1}\n  \\caption{Rays that leave the left end point of the source (line $1$) and trace out the target (line $4$).}\n  \\label{fig:cup1}\n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{.48\\textwidth}\n  \\centering\n  \\includegraphics[width=\\textwidth]{rays_cup2}\n  \\caption{Rays that trace out the source (line $1$) and hit the right end point of the target (line $4$).}\n  \\label{fig:cup2}\n\\end{subfigure} %\n\\hfill\n\\begin{subfigure}{.48\\textwidth}\n  \\centering\n  \\includegraphics[width = \\textwidth]{rays_cup3}\n  \\caption{Rays that leave the right end point of the source (line $1$) and trace out the target (line $4$).}\n  \\label{fig:cup3}\n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{.48\\textwidth}\n  \\centering\n  \\includegraphics[width=\\textwidth]{rays_cup4}\n  \\caption{Rays that trace out the source (line $1$) and hit the left end point of the target (line $4$).}\n  \\label{fig:cup4}\n\\end{subfigure}\n\\caption{\\textbf{Rays located on the boundaries of the regions $\\partial$\\set{S}{$1$,}{$4$} and $\\partial$\\set{T}{$4$,}{$1$}}.\n$A = (\\variabile{x}_{1, \\ell}, \\variabile{z}_{1, \\ell}) =(-2, 0)$ and \n$D = (\\variabile{x}_{1, \\textrm{r}}, \\variabile{z}_{1, \\textrm{r}}) = (2, 0)$ \nare the left and right corner points (or end points) of\n\\point{S} (line $1$).\n$B =  (\\variabile{x}_{4, \\ell}, \\variabile{z}_{4, \\ell}) = (-17, 40)$ and $C =  (\\variabile{x}_{4, \\textrm{r}}, \\variabile{z}_{4, \\textrm{r}}) = (17 , 40)$, are 
the left and right corner points of \\point{T} (line $4$).}\n\\label{fig:cups}\n\\end{figure} \\\\\n The boundaries $\\partial$\\set{S}{$1$,}{$4$} and $\\partial$\\set{T}{$4$,}{$1$} are given in Figures \\ref{fig:S14} and \\ref{fig:T411}, respectively.\n$\\partial$\\setbound{S}{$1$,}{$4$}{1} and $\\partial$\\setbound{T}{$4$,}{$1$}{1} are obtained by tracing out line $4$ from\n$\\variabile{q}_{\\ell} = -\\variabile{b}$ to $\\variabile{q}_{\\textrm{r}} = \\variabile{b}$\n by rays leaving $\\variabile{q}_{1, \\ell}= -\\variabile{a}$ with varying $\\variabile{p}_1$; these rays are shown in Figure \\ref{fig:cup1}, and the boundary segments\n $\\partial$\\setbound{S}{$1$,}{$4$}{1} and $\\partial$\\setbound{T}{$4$,}{$1$}{1} are the orange line segments labeled with \\const{c}. \n $\\partial$\\setbound{S}{$1$,}{$4$}{2} and $\\partial$\\setbound{T}{$4$,}{$1$}{2} are given by tracing out line $1$ from\n $\\variabile{q}_{1, \\ell}= -\\variabile{a}$ to $\\variabile{q}_{1, \\textrm{r}}= \\variabile{a}$\n with varying $\\variabile{p}_1$, such that all rays hit $\\variabile{q}_{\\textrm{r}} = \\variabile{b}$; these rays are shown in Figure \\ref{fig:cup2}, and the boundary segments\n $\\partial$\\setbound{S}{$1$,}{$4$}{2} and $\\partial$\\setbound{T}{$4$,}{$1$}{2} are depicted in blue (line segments labeled with \\const{d}).\n Likewise, $\\partial$\\setbound{S}{$1$,}{$4$}{3} and $\\partial$\\setbound{T}{$4$,}{$1$}{3} are obtained by tracing out line $4$ from\n$\\variabile{q}_{\\textrm{r}}= \\variabile{b}$ to $\\variabile{q}_{\\ell}= -\\variabile{b}$ \n by rays leaving $\\variabile{q}_{1, \\textrm{r}}=\\variabile{x}_{1, \\textrm{r}} = \\variabile{a}$ with varying $\\variabile{p}_{1}$. These rays are shown in Figure \\ref{fig:cup3}; \n $\\partial$\\setbound{S}{$1$,}{$4$}{3} and $\\partial$\\setbound{T}{$4$,}{$1$}{3} are the red line segments labeled with \\const{e}.\n  Finally, $\\partial$\\setbound{S}{$1$,}{$4$}{4} and $\\partial$\\setbound{T}{$4$,}{$1$}{4} are given by tracing out line $1$ from\n$\\variabile{q}_{1, \\textrm{r}} = \\variabile{a}$ to  $\\variabile{q}_{1, \\ell} = -\\variabile{a}$ \n with varying $\\variabile{p}_{1}$, such that all rays hit $\\variabile{q}_{\\ell} = -\\variabile{b}$; these rays are shown in Figure \\ref{fig:cup4}, and \n $\\partial$\\setbound{S}{$1$,}{$4$}{4} and $\\partial$\\setbound{T}{$4$,}{$1$}{4} are the green line segments labeled with \\const{f}. \nWe remind the reader that we use the notation $(\\variabile{x}, \\variabile{z})$ for the Cartesian coordinates of the optical system, while PS has $(\\variabile{q}, \\variabile{p})$ coordinates. 
\nIt is worth noting that  $\\variabile{q}_{1, \\ell}=\\variabile{x}_{1, \\ell}$,  $\\variabile{q}_{1, \\textrm{r}}=\\variabile{x}_{1, \\textrm{r}}$,  \n$\\variabile{q}_{\\ell}=\\variabile{x}_{4, \\ell}$ and  $\\variabile{q}_{\\textrm{r}}=\\variabile{x}_{4, \\textrm{r}}$.\\\\ \\indent\n For the two-faceted cup there is an analytic expression for every line segment $\\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineaj}{\\variabile{\\,m}}}$ and\n $\\partial\\mbox{\\setbound{T}{\\lineaj,}{\\lineai}{\\variabile{\\,m}}}$ in Equation (\\ref{eq:analytic_boundaries}) with $\\variabile{m}\\in\\{1, \\cdots, 4\\}$.\n For instance, the rays on the boundaries $\\partial\\mbox{\\setbound{S}{\\lineai,}{\\lineaj}{\\,1}}$ and $\\partial \\mbox{\\setbound{T}{\\lineaj,}{\\lineai}{\\,1}}$\n  are parameterized in the (\\variabile{x}, \\variabile{z})-plane by\n \\begin{equation}\n\\label{extremes_rays}\n\\vect{r}_{\\variabile{\\lineai}, \\variabile{\\lineaj}}(\\variabile{t})=\n\\left( \\begin{array}{cc}\n\\variabile{x}_{\\variabile{\\lineaj}, \\ell}-\\variabile{x}_{\\variabile{\\lineai}, \\ell}+t(\\variabile{x}_{\\variabile{\\lineaj}, \\textrm{r}}-\\variabile{x}_{\\variabile{\\lineaj}, \\ell}) \\\\\n\\variabile{z}_{\\variabile{\\lineaj}, \\ell}-\\variabile{z}_{\\variabile{\\lineai}, \\ell}+t(\\variabile{z}_{\\variabile{\\lineaj}, \\textrm{r}}-\\variabile{z}_{\\variabile{\\lineaj},\\ell})\n\\end{array} \\right) \\qquad \\quad 0\\leq t\\leq 1\\,.\n\\end{equation}\n These rays are located on a vertical line segment in \\set{S}{\\lineai}{} as only the $\\mbox{\\variabile{p}}_{\\variabile{\\lineai}}$-coordinate changes and on a curved line in \n\\set{T}{\\lineaj}{}\n  as both the target position and direction vary. The analytic expressions for $\\partial \\mbox{\\setbound{S}{\\lineai,}{\\lineaj}{\\,1}}$ and $\\partial \\mbox{\\setbound{T}{\\lineaj,}{\\lineai}{\\,1}}$ are\n\\begin{equation}\n\\label{S_boundary}\n\\partial \\mbox{\\setbound{S}{\\lineai,}{\\lineaj}{1}}(\\variabile{t})= \\bigg\\{ (\\variabile{q}_{\\variabile{\\lineai}}, \\variabile{p}_{\\variabile{\\lineai}}) = \\Big(\\variabile{q}_{\\variabile{\\lineai}, \\ell},\n|\\boldsymbol{\\nu}_{\\variabile{\\lineai}}\\times \\hat{\\vect{r}}_{\\variabile{\\lineai}, \\variabile{\\lineaj}}(\\variabile{t})|\n\\Big) \\bigg\\},\n\\end{equation}\n\\begin{equation}\n\\label{T_boundary}\n\\partial\\mbox{\\setbound{T}{\\lineaj,}{\\lineai}{\\,1}}(\\variabile{t})=\\bigg\\{(\\variabile{q}_{\\variabile{\\lineaj}}, \\variabile{p}_{\\variabile{\\lineaj}}) =\n\\Big(\\variabile{q}_{\\variabile{\\lineaj}, \\ell}-\\variabile{q}_{\\variabile{\\lineai}, \\ell}+t(\\variabile{q}_{\\variabile{\\lineaj}, \\textrm{r}}-\\variabile{q}_{\\variabile{\\lineaj},\\ell}),\n|\\boldsymbol{\\nu}_{\\variabile{\\lineaj}}\\times \\hat{\\vect{r}}_{\\variabile{\\lineai}, \\variabile{\\lineaj}}(\\variabile{t})|\\Big) \\bigg\\}\\,,\n\\end{equation}\nwhere we have indicated with $\\hat{\\vect{r}}_{\\variabile{\\lineai}, \\variabile{\\lineaj}}(\\variabile{t})$ the normalization of the ray in ($\\ref{extremes_rays}$) and\n $ \\boldsymbol{\\nu}_\\variabile{\\lineai}$ and $\\boldsymbol{\\nu}_\\variabile{\\lineaj}$ are the normalized inward normals to lines $\\variabile{\\lineai}$ and $\\variabile{\\lineaj}$, respectively.\n Note that  $\\sin{\\tau_\\variabile{\\lineai}} = |\\nu_{\\variabile{\\lineai}}\\times \\hat{\\vect{r}}_{\\variabile{\\lineai}, \\variabile{\\lineaj}}(\\variabile{t})|$ and $\\sin{\\tau_\\variabile{\\lineaj}} = |\\nu_{\\variabile{\\lineaj}}\\times \\hat{\\vect{r}}_{\\variabile{\\lineai}, 
\\variabile{\\lineaj}}(\\variabile{t})|$.\n \\begin{figure}\n \\begin{minipage}[]{.48\\textwidth}\n   \\centering\n   \\includegraphics[width=\\textwidth]{S141}\n   \\caption{\\textbf{Source PS of line $1$.}\n   Boundary of the region \\set{S}{$1$,}{$4$}.}\n   \\label{fig:S14}\n \\end{minipage}\\hfill\n  \\begin{minipage}[]{0.48\\textwidth}\n  \\centering\n   \\includegraphics[width=\\textwidth]{T411}\n   \\caption{\\textbf{Target PS of line $4$.}\n    Boundary of the region \\set{T}{$4$,}{$1$}.}\n    \\label{fig:T411}\n \\end{minipage}\n \\end{figure}\n Likewise, the boundaries $\\partial$\\setbound{S}{\\lineai,}{\\lineak}{\\variabile{\\,m}} and\n $\\partial$\\setbound{T}{\\lineak,}{\\lineai}{\\variabile{\\,m}} are calculated for every $\\variabile{m}\\in\\{2,3,4\\}$. Finally, $\\partial$\\set{S}{\\lineai,}{\\lineak} and $\\partial$\\set{T}{\\lineak,}{\\lineai} are determined using (\\ref{eq:analytic_boundaries}). \\\\\n \\begin{figure}\n \\begin{minipage}[]{.43\\textwidth}\n   \\includegraphics[width=\\textwidth]{S1}\n\\caption{\\footnotesize{\\textbf{Source PS of line $\\boldsymbol{1}$.} It is partitioned into regions $(\\mbox{\\set{S}{$1$,}{\\lineaj}})_{\\variabile{\\lineaj} = 2,3,4}$\n   formed by rays that leave line $1$ and hit line $\\textit{\\lineaj}$.}}\n   \\label{fig:S1}\n \\end{minipage}\n  \\begin{minipage}[]{.45\\textwidth}\n  \\centering\n   \\includegraphics[width=\\textwidth]{T4b}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $\\boldsymbol{4}$.} It is partitioned into regions $(\\mbox{\\set{T}{$4$,}{\\lineak}})_{\\variabile{\\lineak} = 1,2,3}$\n   formed by rays that leave line $\\textit{\\lineak}$ and hit line $4$.}}\n   \\label{fig:T4b}\n \\end{minipage}\n\\begin{minipage}[]{.43\\textwidth}\n\\centering\n   \\includegraphics[width=\\textwidth]{S2}\n\\caption{\\footnotesize{\\textbf{Source PS of line $\\boldsymbol{2}$.} It is partitioned into regions $(\\mbox{\\set{S}{$2$,}{\\lineaj}})_{\\variabile{\\lineaj} = 3,4}$\n  formed by rays that leave line $2$ and hit line $\\variabile{\\lineaj}$.}} \n \\end{minipage}\n \\begin{minipage}[]{.43\\textwidth}\n \\centering\n   \\includegraphics[width=\\textwidth]{T2b}\n\\caption{\\footnotesize{\\textbf{Target PS of line $\\boldsymbol{2}$.} It is partitioned into regions $(\\mbox{\\set{T}{$2$,}{\\lineak}})_{\\variabile{\\lineak} = 1,3}$\nformed by rays that leave line $\\variabile{\\lineak}$ and hit line $2$. }} \n \\end{minipage}\n \\begin{minipage}[]{.43\\textwidth}\n \\centering\n   \\includegraphics[width=\\textwidth]{S3}\n   \\caption{\\footnotesize{\\textbf{Source PS of line $\\boldsymbol{3}$.} It is partitioned into regions\n   $(\\mbox{\\set{S}{$3$,}{\\lineaj}})_{\\variabile{\\lineaj} = 2,4}$ formed by rays that leave line $3$ and hit line $\\variabile{\\lineaj}$. }} \n \\end{minipage}\n \\hspace{1.7cm}\n \\begin{minipage}[]{.43\\textwidth}\n \\centering\n   \\includegraphics[width=\\textwidth]{T3_b}\n  \\caption{\\footnotesize{\\textbf{Target PS of line $\\boldsymbol{3}$.} It is partitioned into regions $(\\mbox{\\set{T}{$3$,}{\\lineak}})_{\\variabile{\\lineak} = 1,2}$\n   formed by rays that leave line $\\variabile{\\lineak}$ and hit line $3$.}} \n\\label{fig:T3}\n \\end{minipage}\n\\end{figure}\n\\indent In Figures $\\ref{fig:S1}-\\ref{fig:T3}$,  $(\\partial \\mbox{\\set{S}{\\lineai,}{\\lineaj}})_{\\variabile{\\lineai}\\neq\\variabile{\\lineaj}=2, 3, 4}$ and $(\\partial \\mbox{\\set{T}{\\lineai,}{\\lineak}})_{\\variabile{\\lineai}\\neq\\variabile{\\lineak}=1, 2, 3}$ are depicted in blue and red, respectively. 
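As a concrete instance of Equations (\\ref{extremes_rays}) and (\\ref{S_boundary}), take $\\variabile{\\lineai}=1$, $\\variabile{\\lineaj}=4$ and $\\variabile{t}=0$, so that the ray connects the corner points $A = (-2, 0)$ and $B = (-17, 40)$ of Figure \\ref{fig:cups}; assuming the inward normal of line $1$ is $\\boldsymbol{\\nu}_{1} = (0, 1)$, we obtain\n\\begin{equation}\n\\vect{r}_{1,4}(0) = \\left( \\begin{array}{c} -15 \\\\ 40 \\end{array} \\right), \\qquad \\sqrt{15^2 + 40^2} = 5\\sqrt{73},\n\\end{equation}\nso that the corresponding point of $\\partial$\\setbound{S}{$1$,}{$4$}{1} is\n\\begin{equation}\n(\\variabile{q}_1, \\variabile{p}_1) = \\Big(-2, \\frac{15}{5\\sqrt{73}}\\Big) \\approx (-2, 0.35)\\,.\n\\end{equation}\n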
The source and target PS of lines $2$ and $3$ have some empty regions. \nThese parts correspond to the regions formed by the rays that either go back to the source or are emitted from the target. These regions are not taken into account, see Equation (\\ref{SPS}). We observe that, because of the symmetry of the optical system, \\set{S}{$3$}{} is the mirror image of \\set{S}{$2$}{} after reflection in the central point \n$(\\variabile{q}, \\variabile{p}) = (-9.5, 0)$ followed by a translation $(\\variabile{q}, \\variabile{p})\\rightarrow(\\variabile{q}+19, \\variabile{p})$. Likewise, \\set{T}{$3$}{} is the mirror image of \\set{T}{$2$}{} after the same reflection and translation.\n%In the next section, we show how the phase spaces are related to each other and we define the target photometric variables on \\set{T}{$4$}{}.\n\\subsection{Computation of the target photometric variables}\nIn this section we explain how to compute the target photometric variables in PS.\nThe intensity $I$ along a given direction $\\variabile{p}\\in [-1,1]$ in target phase space \\set{T}{$4$}{} depends on the luminance $L(\\variabile{q}, \\variabile{p})$ defined as in Equation (\\ref{eq:PSintensity}). For the two-faceted cup, it becomes:\n\\begin{equation}\\label{I(eta)}\nI_{\\textrm{PS}}(\\variabile{p}) = \\int_{-\\variabile{b}}^{\\variabile{b}} L(\\variabile{q},\\variabile{p}) \\textrm{d}\\variabile{q}\\,.\n\\end{equation}\nThe parts of \\set{T}{$4$}{} that are illuminated by the source \\point{S} correspond to parts with positive luminance; for the other parts the luminance might be $0$.\nAssuming positive luminance on \\point{S}, the following relations hold:\n\\begin{equation}\\label{LT4}\n\\begin{aligned}\nL(\\variabile{q}, \\variabile{p})&>0 \\qquad \\quad \\forall (\\variabile{q}, \\variabile{p})\\in \\mbox{\\set{T}{$4$,}{$1$}},\\\\\nL(\\variabile{q}, \\variabile{p})&\\geq 0 \\qquad\\quad \\forall(\\variabile{q}, \\variabile{p}) \\in (\\mbox{\\set{T}{$4$,}{\\lineai}})_{\\lineai=2,3}.\n\\end{aligned}\n\\end{equation}\nOnce a ray leaves the source \\point{S} it can hit the reflectors several times before hitting the target \\point{T}. To relate \\point{S} and \\point{T}, a map $\\mapnumb{M}_{1,4}$: \\set{S}{$1$}{}$\\rightarrow$ \\set{T}{$4$}{} is introduced such that $\\mapnumb{M}_{1,4}(\\pos{s,}{$1$},\\dir{s,}{$1$})=(\\variabile{q},\\variabile{p})$. 
As  not all parts of \\set{T}{$4$}{} are illuminated by the source \\point{S}, the map\n$\\mapnumb{M}_{1,4}$ is not surjective.\nTherefore, we need to determine the subsets of \\set{T}{$4$}{} illuminated by \\point{S} corresponding to the regions where the luminance is positive.\nFor this purpose, we consider two different kinds of maps.\nThe first map relates the coordinates of the source and the target PS of two \\textit{different} lines; we call it the \\textit{propagation map}.\nThe second map relates the coordinates of the target and the source PS of the \\textit{same} line; we call it the \\textit{reflection map}.\nIn particular, given two lines $\\lineai$ and $\\lineaj$ with $\\lineai\\neq\\lineaj$, the propagation map $\\mapnumb{P}_{\\lineai,\\lineaj}: \\mbox{\\set{S}{\\lineai,}{\\lineaj}}\\rightarrow$\\set{T}{\\lineaj,}{\\lineai} relates \\set{S}{\\lineai,}{\\lineaj} with \\set{T}{\\lineaj,}{\\lineai} and is defined as follows:\n \\begin{equation}\\label{Pij}\n\\mapnumb{P}_{\\lineai,\\lineaj}(\\pos{s,}{\\lineai},\\dir{s,}{\\lineai})=(\\pos{t,}{\\lineaj},\\dir{t,}{\\lineaj}),\n\\end{equation}\nwhere $\\pos{t,}{\\lineaj}$ is given by the \\variabile{x}-coordinate of the intersection point between the ray and line $\\lineaj$,\nand $\\dir{t,}{\\lineaj}$ is computed considering the direction of the incident ray with respect to the normal of line $\\lineaj$. \nFor one single line $\\lineaj$, the reflection map $\\mapnumb{R}_{\\lineaj,\\lineak,h}$:~\\set{T}{\\lineaj,}{\\lineak} $\\rightarrow$\\set{S}{\\lineaj,}{h}  relates the regions \\set{T}{\\lineaj,}{\\lineak}$\\subset$\\set{T}{\\lineaj}{} and\n\\set{S}{\\lineaj,}{h}$\\subset$\\set{S}{\\lineaj}{}. To simplify the notation, from now on we omit the dependence of $\\mapnumb{R}_{\\lineaj,\\lineak,h}$ on $\\lineak$ and \\variabile{h}, i.e., $\\mapnumb{R}_{\\lineaj,\\lineak,h} = \\mapnumb{R}_{\\lineaj}$. The reflection map is defined as:\n\\begin{equation}\\label{Rj}\n\\mapnumb{R}_{\\lineaj}(\\variabile{q}_{\\textrm{t},\\textit{\\lineaj}},\\variabile{p}_{\\textrm{t},\\textit{\\lineaj}})=(\\variabile{q}_{\\textrm{s},\\textit{\\lineaj}},\\variabile{p}_{\\textrm{s},\\textit{\\lineaj}}),\n\\end{equation}\nwhere $\\dir{t,}{\\lineaj}$ changes according to the reflection law and $\\pos{t,}{\\lineaj}= \\pos{s,}{\\lineaj}$ as $\\mapnumb{R}_{\\lineaj}$ maps the target PS into the source PS of the same line $\\lineaj$, that is \\set{T}{\\lineaj}{} into \\set{S}{\\lineaj}{}.\nUsing a procedure similar to the ray transport matrices approach (see \\cite{hecht1998hecht}, Chapter 6),\nthe map $\\mapnumb{M}_{1,4}$ is described by the composition of $\\mapnumb{P}_{\\lineai,\\lineaj}$ and $\\mapnumb{R}_{\\lineaj}$ defined in $(\\ref{Pij})$ and $(\\ref{Rj})$. This composition depends on the path $\\Pi$ followed by the rays.\n% where we refer to a path as the sequence of lines that\n% a ray hits during its propagation from \\point{S} to \\point{T}. 
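\nAs an illustration, the propagation map in (\\ref{Pij}) can be realized with elementary planar geometry. The sketch below (in Python) is a toy version under stated conventions, not the implementation of this chapter: each line is described by its left end point, unit tangent and unit inward normal, and the parameter \\texttt{sign} fixes the orientation convention used to rebuild the ray direction from $\\dir{s,}{\\lineai}$:\n\\begin{verbatim}\nimport numpy as np\n\ndef cross2(u, v):                      # scalar 2D cross product\n    return u[0]*v[1] - u[1]*v[0]\n\ndef propagate(line_i, line_j, q_s, p_s, sign=+1.0):\n    """Toy P_{i,j}: (q,p) on the source PS of line i -> (q,p) on the\n    target PS of line j. Lines are dicts with keys 'origin',\n    'tangent', 'normal' (2D numpy arrays)."""\n    x0 = line_i['origin'] + q_s*line_i['tangent']\n    # ray direction with sin(tau_i) = p_s w.r.t. the inward normal\n    d = sign*p_s*line_i['tangent'] + np.sqrt(1.0 - p_s**2)*line_i['normal']\n    # solve x0 + s*d = origin_j + q_t*tangent_j for (s, q_t)\n    A = np.column_stack((d, -line_j['tangent']))\n    s, q_t = np.linalg.solve(A, line_j['origin'] - x0)\n    p_t = abs(cross2(line_j['normal'], d))   # direction coordinate on j\n    return q_t, p_t\n\\end{verbatim}\nThe reflection map (\\ref{Rj}) keeps the position coordinate fixed and only updates the direction coordinate according to the reflection law.\n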
\nWe indicate with $\\mapnumb{M}_{1,4}(\\Pi)$\nthe map $\\mapnumb{M}_{1,4}$ restricted to the path $\\Pi$ and with \\set{R}{}{}$(\\Pi)\\subset \\mbox{\\set{T}{$4$}{}}$ the regions on \\set{T}{$4$}{} formed by the rays that follow path $\\Pi$.\nConsidering all the possible paths $\\Pi$ from \\point{S} to \\point{T}, all the positive luminance regions $\\mbox{\\set{R}{}{}}(\\Pi)$ on \\set{T}{$4$}{} can be determined.\n\\\\ \\indent To clarify this concept, we provide the following example.\nConsider a ray that is emitted from the source (line $1$), hits the left reflector (line $2$) and finally reaches the target (line $4$).\n The path $\\Pi$ followed by this ray is defined as $\\Pi =(1, 2, 4)$ and\nthe corresponding map $\\mapnumb{M}_{1,4}(\\Pi):\\mbox{\\set{S}{$1$}{}}\\rightarrow \\mbox{\\set{R}{}{}}(\\Pi)$ that describes the propagation of all rays that follow path $\\Pi$ is defined by:\n\\begin{equation}\n\\label{map_example}\n\\mapnumb{M}_{1,4}(\\Pi):\\mbox{\\set{S}{$1$,}{$2$}}\\rightarrow \\mbox{\\set{T}{$2$,}{$1$}}\\rightarrow\\mbox{\\set{S}{$2$,}{$4$}}\\rightarrow \\mbox{\\set{T}{$4$,}{$2$}},\n\\end{equation} which can be written as:\n\\begin{equation}\n\\mapnumb{M}_{1,4}(\\Pi) = \\mapnumb{P}_{2,4}\n\\circ \\mapnumb{R}_{2}\\circ \\mapnumb{P}_{1,2}\\,.\n\\end{equation}\nIn general, to construct the map $\\mapnumb{M}_{1,4}(\\Pi)$ we need to know its corresponding path $\\Pi$.\nTo determine all possible paths $\\Pi$,\ninstead of tracing the rays from \\point{S} to \\point{T}, we start by considering the rays in \\set{T}{$4$}{}.\nIn particular, along a given direction $\\variabile{p}\\in[-1,1]$ we consider the intersection points between the line $\\variabile{p}=\\mbox{const}$ and $(\\partial\\mbox{\\set{T}{$4$,}{\\lineai}})_{\\lineai=1, 2, 3}$. These points are traced back to line $\\lineai$ from which they are emitted and their corresponding coordinates on \\set{S}{\\lineai}{} and \\set{T}{\\lineai}{} are computed. This is done by sequentially applying the maps $\\inversemap{P}{\\lineai,}{4}:\\mbox{\\set{T}{$4$,}{\\lineai}}\\rightarrow\\mbox{\\set{S}{\\lineai,}{$4$}}$ and $\\inversemap{R}{\\lineai}{}:\\mbox{\\set{S}{\\lineai}{}}\\rightarrow\\mbox{\\set{T}{\\lineai}{}}$.\nThen the same procedure is repeated considering these new coordinates on \\set{T}{\\lineai}{}.\nThe computation stops either when the points found are emitted from the source, that is when they are located on \\set{S}{$1$}{}, or when they reach the target again, that is when they are located on \\set{T}{$4$}{}.\nIf a ray reaches \\set{S}{$1$}{}, then a path $\\Pi$ from \\point{S} to \\point{T} is found.\nIf a ray reaches the target \\set{T}{$4$}{} again, then we conclude that it is not emitted by\n\\point{S} and, therefore, it is located inside the parts of \\set{T}{$4$}{} with luminance equal to $0$. \\\\ \\indent\n Finally, the inverse $\\inversemap{M}{1,}{4}(\\Pi)$ of the map $\\mapnumb{M}_{1,4}(\\Pi)$ is constructed for every possible path $\\Pi$.\n The map $\\inversemap{M}{1,}{4}(\\Pi)$ is the composition of the inverses of the propagation and the reflection maps in reverse order according to the path $\\Pi$.\nFor instance, for path $\\Pi = (1,2,4)$, $\\inversemap{M}{1,}{4}(\\Pi)$ is given by:\n\\begin{equation}\n\\label{inverse_map}\n\\inversemap{M}{1,}{4}({\\Pi}) = \\inversemap{P}{1,}{2}\n\\circ \\inversemap{R}{2}{}\\circ \\inversemap{P}{2,}{4}.\n\\end{equation}\nThe steps of the procedure are shown in Figure \\ref{fig:tree}, where the map in (\\ref{inverse_map}) is written in red. 
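\nThe composition structure of $\\mapnumb{M}_{1,4}(\\Pi)$ and $\\inversemap{M}{1,}{4}(\\Pi)$ is mechanical and translates directly into code. The following Python sketch assumes (hypothetical) dictionaries \\texttt{P}, \\texttt{R}, \\texttt{Pinv}, \\texttt{Rinv} of callables acting on PS coordinates $(\\variabile{q},\\variabile{p})$:\n\\begin{verbatim}\ndef compose(maps):\n    """Apply the maps in the order in which they are listed."""\n    def composed(x):\n        for m in maps:\n            x = m(x)\n        return x\n    return composed\n\ndef forward_map(path, P, R):\n    """Path (1,2,4) -> P[(2,4)] o R[2] o P[(1,2)], cf. Eq. (map_example)."""\n    maps = []\n    for i, j in zip(path, path[1:]):\n        maps += [P[(i, j)], R[j]]\n    return compose(maps[:-1])        # the target line does not reflect\n\ndef inverse_map(path, Pinv, Rinv):\n    """Inverses in reverse order, cf. Eq. (inverse_map)."""\n    maps = []\n    for i, j in zip(path, path[1:]):\n        maps += [Pinv[(i, j)], Rinv[j]]\n    return compose(list(reversed(maps[:-1])))\n\\end{verbatim}\nFor $\\Pi=(1,2,4)$, \\texttt{inverse\\_map} returns $\\inversemap{P}{1,}{2}\\circ \\inversemap{R}{2}{}\\circ \\inversemap{P}{2,}{4}$, exactly as in (\\ref{inverse_map}).\n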
\\\\\n\\begin{figure}\n \\begin{center}\n  \\includegraphics[width = \\textwidth]{tree2}\n  \\end{center}\n\\caption{\\textbf{Ray mapping tree.} It describes how to detect all the possible paths from \\point{S} to \\point{T}.}\n\\label{fig:tree}\n\\end{figure}\nUsing the procedure explained above, given a ray with coordinates\n$(\\variabile{q}, \\variabile{p})\\in \\mbox{\\set{T}{$4$}{}}$ we can establish whether it is located inside one of the positive luminance regions $\\mbox{\\set{R}{}{}}(\\Pi)$ or not.\nIn case the ray is inside a region $\\mbox{\\set{R}{}{}}(\\Pi)$,\nits corresponding coordinates $(\\pos{s,}{$1$},\\dir{s,}{$1$})\\in \\mbox{\\set{S}{$1$}{}}$ are obtained using $\\inversemap{M}{1,}{4}(\\Pi)$, where $\\Pi$ is the path followed by this ray. The luminance in Equation ($\\ref{LT4}$) is, therefore, defined as in Equation (\\ref{eq:PSluminance}), \nfor some path $\\Pi$ connecting \\point{S} and \\point{T}. The target intensity is calculated from Equation (\\ref{I(eta)}). \nIndicating with $\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})$ the minimum and maximum position coordinates of the intersection points between the boundaries $ \\partial$\\set{R}{}{}($\\Pi$) and the line $\\variabile{p}= \\mbox{const}$,\nEquation (\\ref{I(eta)}) reduces to Equation (\\ref{eta2}), if only two intersection points are found, and to Equation (\\ref{eq:Ips}) in case more than two intersection points occur. For the two-faceted cup there are only two intersection points between a line $\\dir{}{}=\\mbox{const}$ and $\\partial$\\set{R}{}{}$(\\Pi)$, hence, in this chapter we use Equation (\\ref{eta2}).\nWe remark that, for a given ray with corresponding coordinates $(\\pos{}{}, \\dir{}{})$ on \\set{T}{$4$}{}, only one path is possible as we are assuming that all lines are reflective.\nBecause of this, the regions \\set{R}{}{}($\\Pi$) do not overlap.\nNext, the details of the procedure to compute the coordinates $\\variabile{q}^\\textrm{\\,min}(\\Pi, \\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi, \\variabile{p})$\nare explained. \n\\subsection{The structure of the backward ray mapping algorithm}\\label{sec:algorithm_raymapping}\nThe goal is to determine the target intensity along a given direction $\\variabile{p}=\\mbox{const}$.\nAlso in the ray mapping method we assume a Lambertian source; therefore, the intensity is equal to the sum of the lengths of the line segments given by the intersection of the line $\\variabile{p} = \\mbox{const}$ and the support of $L$ (see Equation (\\ref{eta2})).\nTo determine these line segments, a recursive procedure is developed.\nThe procedure starts on \\set{T}{$4$}{} with a given direction $\\variabile{p} = \\mbox{const}$ and with the parallel rays corresponding to the end points $(\\variabile{q}_{\\ell}, \\variabile{p}) = (-\\variabile{b}, \\variabile{p})$ and $(\\variabile{q}_{\\textrm{r}}, \\variabile{p}) = (\\variabile{b}, \\variabile{p})$. \nWe set the initial intensity $I(\\variabile{p})=0$ along direction $\\variabile{p} = \\mbox{const}$. 
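\nIn code, the recursion just described can be organized as in the following minimal Python sketch. It is only meant to expose the control flow; the problem-specific operations (boundary intersections, trace-back, forward map) are passed in as functions, since their exact form is developed in the remainder of this section and in Algorithm \\ref{alg}:\n\\begin{verbatim}\ndef intensity(j, q_lo, q_hi, p, path, u_bounds, trace_back, forward,\n              I=0.0):\n    """u_bounds(j,i,p): interval cut out of the boundary dT_{j,i}, or\n    None; trace_back(j,i,seg,p): end points of seg traced back to T_i;\n    forward(path,seg,p): end points mapped to T_4 via M_{1,4}(path)."""\n    for i in (1, 2, 3):\n        if i == j:\n            continue\n        ui = u_bounds(j, i, p)\n        if ui is None:\n            continue\n        v_lo, v_hi = max(q_lo, ui[0]), min(q_hi, ui[1])\n        if v_lo > v_hi:\n            continue                 # empty intersection segment\n        if i == 1:                   # source reached: a path is found\n            qa, qb = forward((1,) + path, (v_lo, v_hi), p)\n            I += max(qa, qb) - min(qa, qb)   # contribution, Eq. (eta2)\n        else:                        # reflector: trace back and recurse\n            (q1, q2), p_new = trace_back(j, i, (v_lo, v_hi), p)\n            I = intensity(i, min(q1, q2), max(q1, q2), p_new,\n                          (i,) + path, u_bounds, trace_back, forward, I)\n    return I\n\\end{verbatim}\nA call would start at the target, e.g. \\texttt{intensity(4, -b, b, p, (4,), ...)}, mirroring the initialization of Algorithm \\ref{alg}.\n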
\nConsidering the intersection between the line $\\variabile{p}=\\textrm{const}$ and the boundaries ($\\partial$\\set{T}{$4$,}{\\lineai})$_{\\lineai=1,2,3}$, three intervals are found.\nEach interval corresponds to rays emitted by line $\\lineai$ ($\\lineai\\in\\{1,2,3\\}$).\nThe rays corresponding to the end points of these intervals are traced back from \\set{T}{$4$}{} to \\set{T}{\\lineai}{} where $\\lineai$ is the line from which\nthey are emitted. Then, another interval of parallel rays along the corresponding direction in \\set{T}{\\lineai}{} has to be considered and the intersection points between the line $\\dir{}{} = \\dir{t,}{\\lineai}$ and $\\partial$\\set{T}{\\lineai,}{\\lineaj} (with $\\lineai\\neq4$ and $\\lineai\\neq\\lineaj$) are calculated, where $\\dir{t,}{\\lineai}$ is the new direction of the rays traced back.\nThe procedure continues recursively until the source is found. \n\\\\ \\indent Before explaining the details, let us introduce some new notation. The role of the variables we introduce will become clear later on.\nThe coordinates in \\set{T}{\\lineaj}{} of the rays traced back from a line $\\lineai\\neq\\lineaj$ to\nline $\\lineaj$ are indicated with $(\\pos{t,}{\\lineaj}^1, \\dir{t,}{\\lineaj})$ and $(\\pos{t,}{\\lineaj}^2, \\dir{t,}{\\lineaj})$.\nThe minimum and the maximum position coordinates are $\\pos{t,}{\\lineaj}^\\textrm{\\,min} = \\min\\{\\pos{t,}{\\lineaj}^1, \\pos{t,}{\\lineaj}^2\\}$ and \n$\\pos{t,}{\\lineaj}^\\textrm{\\,max} = \\max\\{\\pos{t,}{\\lineaj}^1, \\pos{t,}{\\lineaj}^2\\}$, respectively. The coordinates of the intersection points of\n$\\variabile{p}= \\dir{t,}{\\lineaj}$ with boundaries $\\partial$\\set{T}{\\lineaj,}{\\lineai} need to be determined for every $\\lineai \\in \\{1,2,3\\}$ and $\\lineaj \\in \\{2,3,4\\}$ with $\\lineaj\\neq\\lineai$. \nThey are indicated with  $(\\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,min}, \\dir{t,}{\\lineaj})$ and $(\\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,max}, \\dir{t,}{\\lineaj})$ where \n$\\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,min}<\\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,max}$. \nSince not all the rays whose corresponding coordinates are located inside the segment\n$[\\pos{t,}{\\lineaj}^\\textrm{\\,min}, \\pos{t,}{\\lineaj}^\\textrm{\\,max}]$ with direction $ \\dir{}{} =\\dir{t,}{\\lineaj}$ follow the same path, the intersection segment \n$[\\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,min}, \\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,max}] = [\\pos{t,}{\\lineaj}^\\textrm{\\,min}, \\pos{t,}{\\lineaj}^\\textrm{\\,max}] \\cap [\\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,min}, \\variabile{u}_{\\lineaj,\\lineai}^\\textrm{\\,max}]$ needs to be calculated. $(\\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,min},\\dir{t,}{\\lineaj})$ and $(\\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,max},\\dir{t,}{\\lineaj})$ are the coordinates of the rays that need to be traced back from line $\\lineaj$ to line $\\lineai$.\n\\\\ \\indent The method can be outlined as follows.\n\\begin{enumerate}\n\\item Calculate the intersection points $(\\variabile{u}_{4,\\lineai}^\\textrm{\\,min}, \\variabile{p})$ and $(\\variabile{u}_{4,\\lineai}^\\textrm{\\,max}, \\variabile{p})$ between line $\\variabile{p}= \\mbox{const}$ and $\\partial$\\set{T}{$4$,}{\\lineai} for every $\\lineai\\in\\{1,2,3\\}$, where $\\variabile{u}_{4,\\lineai}^\\textrm{\\,min}<\\variabile{u}_{4,\\lineai}^\\textrm{\\,max}$. 
This can be done analytically because the exact expression of the boundaries $\\partial$\\set{T}{$4$,}{\\lineai} is found as explained in Section \\ref{sec:cup_raymapping}.\n\\item Calculate the intersection segment \n\\begin{equation*}\n[\\variabile{v}_{4, \\lineai}^\\textrm{\\,min}, \\variabile{v}_{4, \\lineai}^\\textrm{\\,max}] = [\\variabile{u}_{4, \\lineai}^\\textrm{\\,min}, \\variabile{u}_{4, \\lineai}^\\textrm{\\,max}]\\cap\n [\\pos{}{}^\\textrm{\\,min}, \\pos{}{}^\\textrm{\\,max}].\n\\end{equation*}\n\\item If $\\lineai= 1$, the coordinates $(\\variabile{v}_{4,1}^\\textrm{\\,min}, \\variabile{p})$ and $(\\variabile{v}_{4,1}^\\textrm{\\,max}, \\variabile{p})$ equal the coordinates $(\\variabile{q}^\\textrm{\\,min}(\\Pi, \\variabile{p}), \\variabile{p})$ and $(\\variabile{q}^\\textrm{\\,max}(\\Pi, \\variabile{p}), \\variabile{p})$ of the rays located on the boundary $\\partial$\\set{R}{}{}$(\\Pi)$ with $\\Pi = (1,4)$. All the parallel rays with direction coordinate $\\variabile{p}$ and $\\variabile{q}$-position coordinate $\\variabile{v}_{4,1}^\\textrm{\\,min}\\leq \\variabile{q} \\leq \\variabile{v}_{4,1}^\\textrm{\\,max}$ are emitted by the source and they directly hit the target. Update the intensity using (\\ref{eta2}):\n\\begin{equation*}\nI(\\variabile{p})= I(\\variabile{p})+\\variabile{q}^\\textrm{\\,max}(\\Pi, \\dir{}{})-\\variabile{q}^\\textrm{\\,min}(\\Pi, \\dir{}{}).\n\\end{equation*}\n\\item If $\\lineai\\neq 1$, continue with the following steps.\n\\item Trace back $(\\variabile{v}_{4,\\lineai}^\\textrm{\\,min}, \\variabile{p})$ and $(\\variabile{v}_{4,\\lineai}^\\textrm{\\,max}, \\variabile{p})$ from line $4$ to line $\\lineai$ to find their corresponding coordinates on \\set{T}{\\lineai}{}:\n\\begin{equation*}\n\\begin{aligned}\n(\\pos{t,}{\\lineai}^{\\,1}, \\dir{t,}{\\lineai})& =\\inversemap{R}{\\lineai}{}\\circ \\inversemap{P}{\\lineai,}{4}(\\variabile{v}_{4,\\lineai}^\\textrm{\\,min}, \\variabile{p}),  \\\\\n(\\pos{t,}{\\lineai}^{\\,2}, \\dir{t,}{\\lineai}) & =\\inversemap{R}{\\lineai}{}\\circ \\inversemap{P}{\\lineai,}{4}(\\variabile{v}_{4,\\lineai}^\\textrm{\\,max}, \\variabile{p}).\n\\end{aligned}\n\\end{equation*}\n\\item Update the path $\\Pi = (\\lineai, 4)$.\n\\item Determine $\\pos{t,}{\\lineai}^\\textrm{\\,min}= \\min\\{\\pos{t,}{\\lineai}^{\\,1}, \\pos{t,}{\\lineai}^{\\,2}\\}$ and $\\pos{t,}{\\lineai}^\\textrm{\\,max}= \\max\\{\\pos{t,}{\\lineai}^{\\,1}, \\pos{t,}{\\lineai}^{\\,2}\\}$.\n\\item Calculate the intersection points $(\\variabile{u}_{\\lineai,\\lineaj}^\\textrm{\\,min}, \\variabile{p})$ and $(\\variabile{u}_{\\lineai,\\lineaj}^\\textrm{\\,max}, \\variabile{p})$ between the line $\\dir{}{}=\\dir{t,}{\\lineai}$ and \n$\\partial$\\set{T}{\\lineai,}{\\lineaj} for every $\\lineaj\\in\\{1,2,3\\}$ with $\\lineaj\\neq\\lineai$.\n\\item Since not all rays whose corresponding coordinates are located inside the segment\n$[\\pos{t,}{\\lineai}^\\textrm{\\,min}, \\pos{t,}{\\lineai}^\\textrm{\\,max}]$ follow the same path,\n compute the intersection segment \n\\begin{equation*}\n[\\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,min}, \\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,max}] = [\\variabile{u}_{\\lineai, \\lineaj}^\\textrm{\\,min}, \\variabile{u}_{\\lineai, \\lineaj}^\\textrm{\\,max}]\\cap\n [\\pos{t,}{\\lineai}^\\textrm{\\,min}, \\pos{t,}{\\lineai}^\\textrm{\\,max}].\n\\end{equation*}\n\\item If $\\lineaj\\neq 1$ \n\\begin{itemize}\n\\item[a)] Trace back $(\\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,min}, \\dir{t,}{\\lineai})$ and 
$(\\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,max}, \\dir{t,}{\\lineai})$ from $\\lineai$ to $\\lineaj$:\n\\begin{equation*}\n\\begin{aligned}\n(\\pos{t,}{\\lineaj}^{\\,1}, \\dir{t,}{\\lineaj})& =\\inversemap{R}{\\lineaj}{}\\circ \\inversemap{P}{\\lineaj,}{\\lineai}(\\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,min}, \\dir{t,}{\\lineai}),  \\\\\n(\\pos{t,}{\\lineaj}^{\\,2}, \\dir{t,}{\\lineaj}) & =\\inversemap{R}{\\lineaj}{}\\circ \\inversemap{P}{\\lineaj,}{\\lineai}(\\variabile{v}_{\\lineai, \\lineaj}^\\textrm{\\,max}, \\dir{t,}{\\lineai}).\n\\end{aligned}\n\\end{equation*}\n\\item[b)] Update the path: $\\Pi = (\\lineaj, \\Pi)$.\n\\item[c)] Set $\\lineai=\\lineaj$ and repeat the procedure from point $7$.\n\\end{itemize}\nElse if $\\lineaj=1$, the rays have reached the source and a possible path $\\Pi = (1, \\cdots, 4)$ is found. \n\\begin{itemize}\n\\item[a)]\nTrace back to the source:\n\\begin{equation*}\n\\begin{aligned}\n(\\pos{s,}{$1$}^{\\,1}, \\dir{s,}{$1$})& = \\inversemap{P}{1,}{\\lineai}(\\variabile{v}_{\\lineai, 1}^\\textrm{\\,min}, \\dir{t,}{\\lineai}),  \\\\\n(\\pos{s,}{$1$}^{\\,2}, \\dir{s,}{$1$}) & =\\inversemap{P}{1,}{\\lineai}(\\variabile{v}_{\\lineai, 1}^\\textrm{\\,max}, \\dir{t,}{\\lineai}).\n\\end{aligned}\n\\end{equation*}\n\\item[b)] Apply the forward map:\n\\begin{equation*}\n\\begin{aligned}\n (\\pos{}{}^{1}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{$1$}^{\\,1}, \\dir{s,}{$1$}),\\\\\n (\\pos{}{}^{2}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{$1$}^{\\,2}, \\dir{s,}{$1$}).\n\\end{aligned}\n\\end{equation*}\n\\item[c)] Update the intensity: $$I(\\variabile{p}) = I(\\variabile{p})+\\pos{}{}^{\\textrm{max}}(\\Pi, \\dir{}{})- \\pos{}{}^{\\textrm{min}}(\\Pi, \\dir{}{})$$\nwhere \n$\\pos{}{}^\\textrm{min}(\\Pi, \\dir{}{}) := \\pos{}{}^\\textrm{min} =\\min\\{ \\pos{}{}^{1}(\\Pi, \\dir{}{}), \\pos{}{}^{2}(\\Pi, \\dir{}{})\\}$ \\\\ and $\\quad\\pos{}{}^\\textrm{max}(\\Pi, \\dir{}{}) := \\pos{}{}^\\textrm{max} =\\max\\{\\pos{}{}^{1}(\\Pi, \\dir{}{}), \\pos{}{}^{2}(\\Pi, \\dir{}{})\\}.$\n\\end{itemize}\n\\end{enumerate}\n\\indent\nTo clarify the method, we give an example that describes how the target intensity along direction $\\variabile{p}= -0.2$ is calculated.\nFrom Figure $\\ref{fig:T41}$ to Figure $\\ref{fig:T43}$ the steps used in this example are shown.\nA detailed description of those figures is given in the following.\n\\\\ \\indent The procedure starts with the rays with direction $\\variabile{p} = -0.2$ on \\set{T}{$4$}{}, where $\\variabile{q}_\\ell= -\\variabile{b}$ and\n$\\variabile{q}_\\textrm{r}= \\variabile{b}$ are the left and the right end points of the target \\point{T}, respectively.\n The intersection points $(\\variabile{u}_{4,\\lineai}^\\textrm{\\,min}, \\variabile{p})$ and\n$(\\variabile{u}_{4,\\lineai}^\\textrm{\\,max}, \\variabile{p})$\nof the line $\\variabile{p}= -0.2$\n with boundaries $\\partial\\mbox{\\set{T}{$4$,}{\\lineai}}$ are computed for every $\\lineai\\neq 4$. \\\\ \\indent\nWe start from $\\lineai=1$. 
Therefore the coordinates  $(\\variabile{u}_{4,1}^\\textrm{\\,min}, \\variabile{p})$ and\n$(\\variabile{u}_{4,1}^\\textrm{\\,max}, \\variabile{p})$ of the intersection points between line $\\variabile{p} = -0.2$ and the boundary $\\partial\\mbox{\\set{T}{$4$,}{$1$}}$ are computed and these points are depicted in Figure \\ref{fig:T41}.\n The source is now reached because $\\lineai = 1$, and one possible path is found.\n The points $(\\variabile{u}_{4,1}^\\textrm{\\,min}, \\variabile{p})$\nand $(\\variabile{u}_{4,1}^\\textrm{\\,max}, \\variabile{p})$ are located on the boundaries of the region formed by the rays that leave the source and directly hit the target, that is the rays located on\n$\\partial \\mbox{\\set{R}{}{}}(\\Pi_1)$ with $\\Pi_1 = (1,4)$.\nTherefore, the contribution to the intensity formed by the rays that follow the path $\\Pi_1 = (1,4)$ is given by \n$\\variabile{u}_{4,1}^\\textrm{\\,max}-\\variabile{u}_{4,1}^\\textrm{\\,min}$. \\\\ \\indent\nWe continue with $\\lineai = 2$. \nThe boundary $\\partial\\mbox{\\set{T}{$4$,}{$2$}}$ is considered in order to find other paths.\nThe intersection points $(\\variabile{u}_{4,2}^\\textrm{\\,min}, \\variabile{p})$ and\n$(\\variabile{u}_{4,2}^\\textrm{\\,max}, \\variabile{p})$ of line\n$\\variabile{p}=-0.2$ with the boundary\n$\\partial\\mbox{\\set{T}{$4$,}{$2$}}$ are calculated.\nThey are depicted in Figure \\ref{fig:T42} with the magenta dots. Also the intersection segment \n\\begin{equation}\n[\\variabile{v}_{4,2}^\\textrm{\\,min}, \\variabile{v}_{4,2}^\\textrm{\\,max}] = [\\variabile{u}_{4,2}^\\textrm{\\,min}, \\variabile{u}_{4,2}^\\textrm{\\,max}]\\cap [\\variabile{q}^\\textrm{min}, \\variabile{q}^\\textrm{max}]\n\\end{equation}\nis calculated. In \\set{T}{$4$}{}, $\\variabile{v}_{4,2}^\\textrm{\\,min} = \\variabile{u}_{4,2}^\\textrm{\\,min}$ and $\\variabile{v}_{4,2}^\\textrm{\\,max} = \\variabile{u}_{4,2}^\\textrm{\\,max}$ because $\\variabile{q}^\\textrm{min} = -\\variabile{b}$ and $\\variabile{q}^\\textrm{max} = \\variabile{b}$ always coincide with the end points of \\set{T}{$4$}{}.\nTheir corresponding position coordinates $\\pos{s,}{$2$}^1$ and $\\pos{s,}{$2$}^2$ on \\set{S}{$2$}{}\nare obtained from:\n\\begin{equation}\\label{inverseP}\n\\begin{split}\n \\inversemap{P}{2,}{4}\n(\\variabile{v}_{4,2}^\\textrm{\\,min}, \\variabile{p}) &=  (\\pos{s,}{$2$}^1, \\dir{s,}{$2$}^1), \\\\\n \\inversemap{P}{2,}{4}\n(\\variabile{v}_{4,2}^\\textrm{\\,max}, \\variabile{p}) &=  (\\pos{s,}{$2$}^2, \\dir{s,}{$2$}^2).\n\\end{split}\n\\end{equation}\n The directions\n$\\dir{s,}{$2$}^1$ and $\\dir{s,}{$2$}^2 $ on \\set{S}{$2$}{}\nare obtained by considering the direction $\\dir{t,}{$2$} = \\dir{}{}$ with respect to the normal\n$\\boldsymbol{\\nu}_{2}$ of line $2$.\n Note that $\\dir{s,}{$2$}^1=\\dir{s,}{$2$}^2$ because all the lines are straight, so their normals do not depend on the position at which they are computed. 
Thus, in the following we will omit the superscripts for the direction coordinates.\n Then, the corresponding direction $\\dir{t,}{$2$}^1=\\dir{t,}{$2$}^2$ on \\set{T}{$2$}{}\n is calculated from:\n\\begin{equation}\\label{inverseR}\n\\begin{split}\n\\inversemap{R}{2}{}(\\pos{s,}{$2$}^1, \\dir{s,}{$2$}) &= (\\pos{t,}{$2$}^1, \\dir{t,}{$2$}), \\\\\n\\inversemap{R}{2}{}(\\pos{s,}{$2$}^2, \\dir{s,}{$2$}) &= (\\pos{t,}{$2$}^2, \\dir{t,}{$2$}).\n\\end{split}\n\\end{equation}\nNote that $\\pos{s,}{$2$}^1 = \\pos{t,}{$2$}^1$ and $\\pos{s,}{$2$}^2= \\pos{t,}{$2$}^2$ since the reflection map does not change the position coordinates.\n Equations (\\ref{inverseP}) and (\\ref{inverseR}) lead to: \\begin{equation}\n\\begin{split}\n\\inversemap{R}{2}{}\\circ \\inversemap{P}{2,}{4}(\\variabile{v}_{4,2}^\\textrm{\\,min}, \\variabile{p}) &= (\\pos{t,}{$2$}^1, \\dir{t,}{$2$}),\\\\\n\\inversemap{R}{2}{}\\circ \\inversemap{P}{2,}{4}(\\variabile{v}_{4,2}^\\textrm{\\,max}, \\variabile{p}) &= (\\pos{t,}{$2$}^2, \\dir{t,}{$2$}).\n\\end{split}\n\\end{equation} The map $\\inversemap{R}{2}{}\\circ \\inversemap{P}{2,}{4}$  is depicted in red in Figure \\ref{fig:tree}.\nThe minimum \n $\\pos{t,}{$2$}^\\textrm{\\,min} = \\min\\{\\pos{t,}{$2$}^1, \\pos{t,}{$2$}^2\\}$ and maximum\n $\\pos{t,}{$2$}^\\textrm{\\,max} = \\max\\{\\pos{t,}{$2$}^1, \\pos{t,}{$2$}^2\\}$ are calculated. The points with coordinates  $(\\pos{t,}{$2$}^\\textrm{\\,min}, \\dir{t,}{$2$})$  and  $(\\pos{t,}{$2$}^\\textrm{\\,max}, \\dir{t,}{$2$})$  are depicted in blue in Figure \\ref{fig:T21}, where $\\dir{t,}{$2$}=0.82$.\n To verify whether the corresponding rays are emitted by the source or not, the procedure used for\n \\set{T}{$4$}{} is now applied to \\set{T}{$2$}{}\n along direction $\\dir{t,}{$2$}=0.82$. \\\\ \\indent\n  Next, the intersection points $(\\variabile{u}_{2,\\lineai}^\\textrm{\\,min}, \\dir{t,}{$2$})$ and\n$(\\variabile{u}_{2,\\lineai}^\\textrm{\\,max}, \\dir{t,}{$2$})$\nof line $\\dir{t,}{$2$}= 0.82$\n with boundaries $\\partial\\mbox{\\set{T}{$2$,}{\\lineai}}$ are computed for every $\\lineai\\in\\{1,3\\}.$\n We start from the boundary $\\partial$\\set{T}{$2$,}{$1$}, obtaining the points $(\\variabile{u}_{2, 1}^\\textrm{\\,min}, \\dir{t,}{$2$})$ and\n $(\\variabile{u}_{2, 1}^\\textrm{\\,max}, \\dir{t,}{$2$})$ shown in Figure \\ref{fig:T21}.\n Now, the position coordinates $\\variabile{v}_{2, 1}^\\textrm{\\,min} = \\max\\{\\pos{t,}{$2$}^ \\textrm{\\,min},\\variabile{u}_{2,1}^\\textrm{\\,min}\\}$\n and $\\variabile{v}_{2, 1}^\\textrm{\\,max} = \\min\\{\\pos{t,}{$2$}^ \\textrm{\\,max},\\variabile{u}_{2, 1}^\\textrm{\\,max}\\}$ need to be determined.\n All the rays located inside the segment $[\\variabile{v}_{2, 1}^\\textrm{\\,min},\\variabile{v}_{2, 1}^\\textrm{\\,max}]$ in\n  \\set{T}{$2$}{} and with direction $\\dir{t,}{$2$}$ follow path $\\Pi_2 = (1,2,4)$. 
In particular, the rays corresponding to the coordinates $(\\variabile{v}_{2, 1}^\\textrm{\\,min},\\dir{t,}{$2$} )$ and $(\\variabile{v}_{2, 1}^\\textrm{\\,max},\\dir{t,}{$2$} )$ are  located on the boundaries of the region \\set{R}{}{}$(\\Pi_2)$ on \\set{T}{$4$}{}\n  formed by all the rays that follow path $\\Pi_2$.\n  Their corresponding coordinates $(\\variabile{q}^{1}(\\Pi_2, \\variabile{p}), \\variabile{p})$ and\n $(\\variabile{q}^{2}(\\Pi_2, \\variabile{p}), \\variabile{p})$ on \\set{T}{$4$}{} are obtained from:\\footnote{With a slight abuse of notation we indicate  $\\variabile{q}^{1}(\\Pi, \\variabile{p})$ with $\\variabile{q}^{1}$ and \n$ \\variabile{q}^{2}(\\Pi, \\variabile{p})$ with $\\variabile{q}^{2}$.}\n\\begin{equation}\n\\begin{split}\n\\mapnumb{P}_{2,4} \\circ \\mapnumb{R}_{2} (\\variabile{v}_{2, 1}^\\textrm{\\,min},\\dir{t,}{$2$} ) & = (\\variabile{q}^{1}, \\variabile{p}), \\\\\n\\mapnumb{P}_{2,4} \\circ \\mapnumb{R}_{2} (\\variabile{v}_{2, 1}^\\textrm{\\,max},\\dir{t,}{$2$} ) & = (\\variabile{q}^{2}, \\variabile{p}).\n\\end{split}\n\\end{equation} \nThe rays corresponding to the coordinates $(\\variabile{q}^{1}, \\variabile{p})$ and $(\\variabile{q}^{2}, \\variabile{p})$ are located\n on the boundary $\\partial$\\set{R}{}{}$(\\Pi_2)$ along direction $\\variabile{p}=-0.2$.\nIndicating with $\\variabile{q}^{\\textrm{min}} = \\min\\{\\variabile{q}^{1}, \\variabile{q}^2\\}$ and $\\variabile{q}^{\\textrm{max}} = \\max\\{\\variabile{q}^{1}, \\variabile{q}^2\\}$, the distance $\\variabile{q}^{\\textrm{max}}-\\variabile{q}^{\\textrm{min}}$ gives the contribution to the intensity $I(\\variabile{p})$ of the rays located in \\set{R}{}{}$(\\Pi_2)$ where $\\variabile{p}=-0.2$.\\\\\n\\indent  \\set{T}{$2$}{} can also be illuminated by line $3$; therefore, the intersection points $(\\variabile{u}_{2, 3}^\\textrm{\\,min}, \\dir{t,}{$2$})$ and $(\\variabile{u}_{2, 3}^\\textrm{\\,max}, \\dir{t,}{$2$})$ of line\n $\\dir{t,}{$2$}=0.82$ and $\\partial \\mbox{\\set{T}{$2$,}{$3$}}$ are calculated; these points are depicted in Figure \\ref{fig:T22}.\n The coordinates $(\\variabile{v}_{2, 3}^\\textrm{\\,min}, \\dir{t,}{$2$})$ and $(\\variabile{v}_{2, 3}^\\textrm{\\,max}, \\dir{t,}{$2$})$ are shown in the same figure.\n As the source is not reached yet ($\\lineai=3$), the rays corresponding to $(\\variabile{v}_{2, 3}^\\textrm{\\,min}, \\dir{t,}{$2$})$ and $(\\variabile{v}_{2, 3}^\\textrm{\\,max}, \\dir{t,}{$2$})$ are traced back using the inverses of the propagation and the reflection maps. 
The coordinates on \\set{T}{$3$}{} are shown  with blue circles in Figure \\ref{fig:T31} and they are obtained from:\n \\begin{equation}\n\\begin{split}\n\\inversemap{R}{3}{}\\circ \\inversemap{P}{3,}{2}(\\variabile{v}_{2, 3}^\\textrm{\\,min}, \\dir{t,}{$2$}) &= (\\pos{t,}{$3$}^{1}, \\dir{t,}{$3$}),\\\\\n\\inversemap{R}{3}{}\\circ \\inversemap{P}{3,}{2}(\\variabile{v}_{2, 3}^\\textrm{\\,max}, \\dir{t,}{$2$}) &= (\\pos{t,}{$3$}^{2}, \\dir{t,}{$3$}).\n\\end{split}\n\\end{equation}\nThe minimum and the maximum position coordinates are $\\pos{t,}{$3$}^{\\textrm{min}}= \\min\\{\\pos{t,}{$3$}^{1}, \\pos{t,}{$3$}^{2}\\}$ and \n$\\pos{t,}{$3$}^{\\textrm{max}}= \\max\\{\\pos{t,}{$3$}^{1}, \\pos{t,}{$3$}^{2}\\}$, respectively.\nWe found that $\\variabile{v}_{3,2}^\\textrm{max}\\neq\\variabile{u}_{3,2}^\\textrm{max}$ because $[\\pos{t,}{$3$}^\\textrm{min}, \\pos{t,}{$3$}^\\textrm{max}]\\subset\n[\\variabile{u}_{3,2}^\\textrm{min},\\variabile{u}_{3,2}^\\textrm{max}]$; this means that the rays with corresponding position coordinates inside the interval \n$[\\pos{t,}{$3$}^\\textrm{max},\\variabile{u}_{3,2}^\\textrm{max}]$ will follow a different path. \nThe procedure continues recursively.\n  It stops either when the ray encounters the source, i.e., when $\\lineai=1$, or when no intersection points between the\n  direction $\\variabile{p}=\\dir{t,}{\\lineaj}$ and the boundaries $\\partial$\\set{T}{\\lineaj,}{\\lineai} are found for any $\\lineai\\in\\{1,2,3\\}$ with $\\lineai\\neq\\lineaj$. \\\\ \\indent\n  If the source is reached, then a valid path $\\Pi = (1,3,2,4)$ is found. Using the inverse of the propagation map, we compute\n\\begin{equation}\n\\begin{aligned}\n\\inversemap{P}{1,}{3}(\\pos{t,}{$3$}^\\textrm{min}, \\dir{t,}{$3$}) = (\\pos{s,}{$1$}^1, \\dir{s,}{$1$}), \\\\ \n\\inversemap{P}{1,}{3}(\\pos{t,}{$3$}^\\textrm{max}, \\dir{t,}{$3$}) = (\\pos{s,}{$1$}^2, \\dir{s,}{$1$}).\n \\end{aligned}\n\\end{equation} \nThe forward map $\\mapnumb{M}_{1,4}(\\Pi)$:\\set{S}{$1$}{}$\\rightarrow$\\set{R}{}{}$(\\Pi)$ restricted to path $\\Pi = (1,3,2,4)$, i.e.\n\\begin{equation}\n\\mapnumb{M}_{1,4} = \\mapnumb{P}_{2,4}\\circ\\mapnumb{R}_{2}\\circ\\mapnumb{P}_{3,2}\\circ\\mapnumb{R}_{3}\\circ\\mapnumb{P}_{1,3}\n\\end{equation}\nis applied to the coordinates $(\\pos{s,}{$1$}^{1}, \\dir{s,}{$1$})$ and $(\\pos{s,}{$1$}^{2}, \\dir{s,}{$1$})$:\n\\begin{equation}\n\\begin{split}\n\\mapnumb{M}_{1,4}(\\pos{s,}{$1$}^{1}, \\dir{s,}{$1$}) &= (\\variabile{q}^{1}(\\Pi, \\variabile{p}), \\variabile{p}),\\\\\n\\mapnumb{M}_{1,4}(\\pos{s,}{$1$}^{2}, \\dir{s,}{$1$}) &= (\\variabile{q}^{2}(\\Pi, \\variabile{p}), \\variabile{p}).\n\\end{split}\n\\end{equation} \nThe coordinates $(\\variabile{q}^{1}, \\variabile{p}):=(\\variabile{q}^{1}(\\Pi, \\variabile{p}), \\variabile{p})$ and\n  $(\\variabile{q}^{2}, \\variabile{p}):=(\\variabile{q}^{2}(\\Pi, \\variabile{p}), \\variabile{p})$ located on $\\partial$\\set{R}{}{}$(\\Pi)$ in \\set{T}{$4$}{} are found.\nIndicating with $\\variabile{q}^{\\textrm{min}}= \\min\\{\\variabile{q}^{1}, \\variabile{q}^{2}\\}$ and $\\variabile{q}^{\\textrm{max}}= \\max\\{\\variabile{q}^{1}, \\variabile{q}^{2}\\}$, the contribution to the intensity due to the rays that follow path $\\Pi$ is given by:\n\\begin{equation}\nI(\\variabile{p}) = I(\\variabile{p})+\\variabile{q}^{\\textrm{max}}(\\Pi, \\variabile{p})-\\variabile{q}^{\\textrm{min}}(\\Pi, \\variabile{p}). 
\n\\end{equation}\n  If no intersection points are found, then the rays traced are\n  not emitted by the source; therefore, no contribution to the intensity needs to be added. This is, for instance, the case for rays with\n  coordinates $(\\variabile{v}_{2, 3}^\\textrm{\\,min}, 0.82)$ and $(\\variabile{v}_{2, 3}^\\textrm{\\,max}, 0.82)$\n  on \\set{T}{$2$}{} in Figure \\ref{fig:T22}. Below we explain this case in detail. \\\\ \\indent\n  In Figure \\ref{fig:T31}, the coordinates $(\\pos{t,}{$3$}^{\\textrm{min}}, \\dir{t,}{$3$})$ and $(\\pos{t,}{$3$}^{\\textrm{max}}, \\dir{t,}{$3$})$ in \\set{T}{$3$}{} with $\\dir{t,}{$3$}=-0.29$ are shown.\n  They are obtained from:\n\\begin{equation}\n\\begin{split}\n \\inversemap{R}{3}{}\\circ\\inversemap{P}{3,}{2}(\\variabile{v}_{2, 3}^\\textrm{\\,min}, 0.82)& = (\\pos{t,}{$3$}^{1}, \\dir{t,}{$3$}),\\\\\n\\inversemap{R}{3}{}\\circ\\inversemap{P}{3,}{2}(\\variabile{v}_{2, 3}^\\textrm{\\,max}, 0.82)& = (\\pos{t,}{$3$}^{2}, \\dir{t,}{$3$}).\n\\end{split}\n\\end{equation}\n From Figure \\ref{fig:T31} we note that there are no intersection points of the line $\\dir{t,}{$3$}=-0.29$ with $\\partial\\mbox{\\set{T}{$3$,}{$1$}}$.\n  So, only the coordinates of the intersections $(\\variabile{u}_{3, 2}^\\textrm{\\,min}, -0.29)$ and $(\\variabile{u}_{3, 2}^\\textrm{\\,max}, -0.29)$ between line $\\dir{t,}{$3$}=-0.29$ and  $\\partial\\mbox{\\set{T}{$3$,}{$2$}}$ are calculated.\n  Next, the intersection interval \n\\begin{equation}\n[\\variabile{v}_{3, 2}^\\textrm{\\,min},\\variabile{v}_{3, 2}^\\textrm{\\,max}] = [\\variabile{u}_{3, 2}^\\textrm{\\,min}, \\variabile{u}_{3, 2}^\\textrm{\\,max}] \\cap [\\pos{t,}{$3$}^{\\textrm{min}}, \\pos{t,}{$3$}^{\\textrm{max}}],\n\\end{equation} formed by parallel rays with direction $\\dir{t,}{$3$} = -0.29$, is considered.\n  Using\n\\begin{equation}\n\\begin{split}\n \\inversemap{R}{2}{}\\circ\\inversemap{P}{2,}{3}(\\variabile{v}_{3, 2}^\\textrm{\\,min},-0.29) & = (\\pos{t,}{$2$}^{\\textrm{min}}, \\dir{t,}{$2$}),\\\\\n \\inversemap{R}{2}{}\\circ\\inversemap{P}{2,}{3}(\\variabile{v}_{3, 2}^\\textrm{\\,max},-0.29) & = (\\pos{t,}{$2$}^{\\textrm{max}}, \\dir{t,}{$2$}),\\\\\n\\end{split}\n\\end{equation}\n  the corresponding coordinates $(\\pos{t,}{$2$}^{\\textrm{max}}, \\dir{t,}{$2$})$ and $(\\pos{t,}{$2$}^{\\textrm{min}}, \\dir{t,}{$2$})$ on \\set{T}{$2$}{} are found (see Figure \\ref{fig:T23})\n  with $\\dir{t,}{$2$}=-0.41$.\n  Now the procedure is repeated for \\set{T}{$2$}{} along the direction $\\dir{t,}{$2$}$.\n  No intersection points between the line $\\dir{t,}{$2$}=-0.41$ and $\\partial$\\set{T}{$2$,}{$1$} occur. \nOnly the intersection points $(\\variabile{u}_{2,3}^\\textrm{\\,min},\\dir{t,}{$2$})$ and $(\\variabile{u}_{2,3}^\\textrm{\\,max},\\dir{t,}{$2$})$\n  of line $\\dir{t,}{$2$}=-0.41$ and $\\partial$\\set{T}{$2$,}{$3$} are found.\n  The intersection segment \n\\begin{equation}\n[\\variabile{v}_{2,3}^\\textrm{\\,min},\\variabile{v}_{2,3}^\\textrm{\\,max}] = [\\variabile{u}_{2,3}^\\textrm{\\,min},\\variabile{u}_{2,3}^\\textrm{\\,max}] \\cap  [\\pos{t,}{$2$}^\\textrm{\\,min},\\pos{t,}{$2$}^\\textrm{\\,max}]\n\\end{equation}\nis calculated. 
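\nThe interval intersections appearing at every step of this example (and in steps 2 and 9 of the outline above) amount to a one-line helper; a minimal Python sketch:\n\\begin{verbatim}\ndef intersect(seg1, seg2):\n    """Intersection of two closed intervals, or None if disjoint."""\n    lo, hi = max(seg1[0], seg2[0]), min(seg1[1], seg2[1])\n    return (lo, hi) if lo <= hi else None\n\\end{verbatim}\nAn empty result corresponds to rays from line $\\lineai$ that do not contribute along the considered direction.\n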
\nThe coordinates on \\set{T}{$3$}{} corresponding to the end points of the intersection interval are found using:\n\\begin{equation}\n\\begin{split}\n\\inversemap{R}{3}{}\\circ\\inversemap{P}{3,}{2} (\\variabile{v}_{2,3}^\\textrm{\\,min},\\dir{t,}{$2$})& = (\\pos{t,}{$3$}^\\textrm{\\,min},\\dir{t,}{$3$}),\\\\\n\\inversemap{R}{3}{}\\circ\\inversemap{P}{3,}{2} (\\variabile{v}_{2,3}^\\textrm{\\,max},\\dir{t,}{$2$})& = (\\pos{t,}{$3$}^\\textrm{\\,max},\\dir{t,}{$3$}),\n\\end{split}\n\\end{equation}  where $ \\dir{t,}{$3$} = 0.91$ (see Figure \\ref{fig:T32}). \\\\ \\indent\n Considering the PS \\set{T}{$3$}{} and the direction $\\dir{t,}{$3$}=0.91$, we note that there are no intersection points of line\n $\\dir{t,}{$3$}=0.91$ with either $\\partial$\\set{T}{$3$,}{$1$} or $\\partial$\\set{T}{$3$,}{$2$}.\n Indeed, the whole segment $[\\pos{t,}{$3$}^\\textrm{\\,min},\\pos{t,}{$3$}^\\textrm{\\,max}]$ is outside both\n \\set{T}{$3$,}{$2$} and \\set{T}{$3$,}{$1$}. Because of this, all the rays with $\\variabile{q}$-coordinates inside the interval\n $[\\pos{t,}{$3$}^\\textrm{\\,min},\\pos{t,}{$3$}^\\textrm{\\,max}]$\nand with direction $\\variabile{p}= \\dir{t,}{$3$}$ are not emitted by the source and no new valid path is found.\n\\\\ \\indent\nNext, the recursive procedure is applied to \\set{T}{$4$,}{$3$}.\nThe first step is depicted in Figure \\ref{fig:T43}.\n We decided not to show all the steps for \\set{T}{$4$,}{$3$} as they are similar to those used for \\set{T}{$4$,}{$2$} and explained above.\n \\\\ \\indent Finally, to compute the intensity along another direction $\\variabile{p}^{\\variabile{h}}\\in[-1,1]$ on \\set{T}{$4$}{},\nthe procedure explained for $\\variabile{p}=-0.2$ is repeated for $\\variabile{p}= \\variabile{p}^{\\variabile{h}}$.\nIn this way we find all the possible paths $\\Pi$ and the positive luminance regions \\set{R}{}{}$(\\Pi)$ on \\set{T}{$4$}{}.\nFurthermore, considering every time the coordinates located on the boundaries of the regions \\set{T}{\\lineai,}{\\lineaj} for every $\\lineaj\\neq\\lineai$, also the boundaries $\\partial$\\set{R}{}{}$(\\Pi)$ are determined for a given path $\\Pi$ as well as the coordinates $\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})$ for every $\\variabile{p}\\in [-1,1]$.\nIn Algorithm \\ref{alg}, the main steps to calculate the intensity $I(\\variabile{p})$ along a direction\n$\\variabile{p} = \\variabile{p}^{\\variabile{h}}$ in \\set{T}{$4$}{} are given, where for the first step we take $\\lineaj=4$.\n\\begin{figure}\n\\begin{minipage}[t]{.48\\textwidth}\n\\centering\n   \\includegraphics[width=\\textwidth]{T4_1}\n  \\caption{\\footnotesize{\\textbf{Target PS of line $4$.} $\\pos{t,}{$4$}^{\\textrm{min}}$ and $\\pos{t,}{$4$}^{\\textrm{max}}$ are the $\\variabile{x}$-coordinates of the end points of line $4$.\n  The intersection points between the line $\\variabile{p} = -0.2$ and $\\partial$\\set{T}{$4$,}{$1$} are $(\\variabile{u}_{4,1}^{\\textrm{min}}, \\variabile{p})$ and\n  $(\\variabile{u}_{4,1}^{\\textrm{max}}, \\variabile{p})$. 
$\\variabile{v}_{4,1}^{\\textrm{min}}= \\max \\{\\pos{t,}{$4$}^{\\textrm{min}}, \\variabile{u}_{4,1}^{\\textrm{min}}\\}$ and\n  $\\variabile{v}_{4,1}^{\\textrm{max}}= \\min \\{\\pos{t,}{$4$}^{\\textrm{max}}, \\variabile{u}_{4,1}^{\\textrm{max}}\\}$.}}\n   \\label{fig:T41}\n\\end{minipage}\\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n  \\centering\n   \\includegraphics[width=\\textwidth]{T4_2}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $4$.}\n  The intersection points between the line $\\variabile{p} = -0.2$ and $\\partial$\\set{T}{$4$,}{$2$} are $(\\variabile{u}_{4,2}^{\\textrm{min}}, \\variabile{p})$ and\n  $(\\variabile{u}_{4,2}^{\\textrm{max}}, \\variabile{p})$. $\\variabile{v}_{4,2}^{\\textrm{min}}= \\max \\{\\pos{t,}{$4$}^{\\textrm{min}}, \\variabile{u}_{4,2}^{\\textrm{min}}\\}$ and\n  $\\variabile{v}_{4,2}^{\\textrm{max}}= \\min \\{\\pos{t,}{$4$}^{\\textrm{max}}, \\variabile{u}_{4,2}^{\\textrm{max}}\\}$.\\\\}}\n   \\label{fig:T42}\n \\end{minipage}\\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n   \\centering\n   \\includegraphics[width=\\textwidth]{T2_1}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $2$.} The coordinates of the intersection points between the line $\\dir{t,}{$2$} = 0.82$ and $\\partial$\\set{T}{$2$,}{$1$} are\n  $(\\variabile{u}_{2,1}^{\\textrm{min}}, \\dir{t,}{$2$})$ and $(\\variabile{u}_{2,1}^{\\textrm{max}}, \\dir{t,}{$2$})$.\n  $\\variabile{v}_{2,1}^{\\textrm{min}}= \\max \\{\\pos{t,}{$2$}^{\\textrm{min}}, \\variabile{u}_{2,1}^{\\textrm{min}}\\}$ and\n  $\\variabile{v}_{2,1}^{\\textrm{max}}= \\min \\{\\pos{t,}{$2$}^{\\textrm{max}}, \\variabile{u}_{2,1}^{\\textrm{max}}\\}$.}}\n    \\label{fig:T21}\n \\end{minipage}\\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n  \\centering\n   \\includegraphics[width=\\textwidth]{T2_2}\n\\caption{\\footnotesize{\\textbf{Target PS of line $2$.}\n  The coordinates of the intersection points between the line $\\dir{t,}{$2$}=0.82$ and $\\partial$\\set{T}{$2$,}{$3$} are\n  $(\\variabile{u}_{2,3}^{\\textrm{min}}, \\dir{t,}{$2$})$ and $(\\variabile{u}_{2,3}^{\\textrm{max}}, \\dir{t,}{$2$})$.\n  $\\variabile{v}_{2,3}^{\\textrm{min}} = \\max\\{\\pos{t,}{$2$}^{\\textrm{min}}, \\variabile{u}_{2,3}^{\\textrm{min}}\\}$ and\n   $\\variabile{v}_{2,3}^{\\textrm{max}} = \\min\\{\\pos{t,}{$2$}^{\\textrm{max}}, \\variabile{u}_{2,3}^{\\textrm{max}}\\}$.}}\n \\label{fig:T22}\n \\end{minipage}\\hfill\n%\\hspace{3cm}\n \\end{figure}\n \\begin{figure}\n\\begin{minipage}[t]{.48\\textwidth}\n   \\centering\n   \\includegraphics[width=\\textwidth]{T3_1}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $3$.}\n  The position coordinates of the intersection points between  the line $ \\dir{t,}{$3$} = -0.29$ and\n  $\\partial$\\set{T}{$3$,}{$2$} are $\\variabile{u}_{3,2}^{\\textrm{min}}$ and $\\variabile{u}_{3,2}^{\\textrm{max}}$.\n   $\\variabile{v}_{3,2}^{\\textrm{min}} = \\max\\{\\pos{t,}{$3$}^{\\textrm{min}}, \\variabile{u}_{3,2}^{\\textrm{min}}\\}$ and\n   $\\variabile{v}_{3,2}^{\\textrm{max}} = \\min\\{\\pos{t,}{$3$}^{\\textrm{max}}, \\variabile{u}_{3,2}^{\\textrm{max}}\\}$.\\\\}}\n   \\label{fig:T31}\n \\end{minipage}\\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n  \\centering\n   \\includegraphics[width=\\textwidth]{T2_3}\n\\caption{\\footnotesize{\\textbf{Target PS of line $2$.}\n The intersection points between the line $\\dir{}{} = \\dir{t,}{$2$}$ and $\\partial$\\set{T}{$2$,}{$3$} are\n $(\\variabile{u}_{2,3}^{\\textrm{min}}, \\dir{t,}{$2$})$ and $(\\variabile{u}_{2,3}^{\\textrm{max}}, \\dir{t,}{$2$})$.\n 
$\\variabile{v}_{2,3}^{\\textrm{min}} = \\max\\{\\pos{t,}{$2$}^{\\textrm{min}}, \\variabile{u}_{2,3}^{\\textrm{min}}\\}$ and\n $\\variabile{v}_{2,3}^{\\textrm{max}} = \\min\\{\\pos{t,}{$2$}^{\\textrm{max}}, \\variabile{u}_{2,3}^{\\textrm{max}}\\}$.\\\\}}\n    \\label{fig:T23}\n \\end{minipage}\\hfill\n  \\begin{minipage}[t]{.48\\textwidth}\n   \\centering\n   \\includegraphics[width=\\textwidth]{T3_2}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $3$.} \n  There are no intersection points of the line $\\dir{t,}{$3$}=0.91$\n with the boundaries $\\partial$\\set{T}{$3$,}{$2$} and $\\partial$\\set{T}{$3$,}{$1$}.\n  The rays with coordinates inside the dotted segment hit line $4$ again after some reflections and, therefore, are not emitted by the source.}}\n    \\label{fig:T32}\n \\end{minipage}\\hfill\n \\begin{minipage}[t]{.48\\textwidth}\n   \\centering\n   \\includegraphics[width=\\textwidth]{T4_3}\n   \\caption{\\footnotesize{\\textbf{Target PS of line $4$.} $\\pos{t,}{$4$}^{\\textrm{min}} = -\\variabile{b}$ and $\\pos{t,}{$4$}^{\\textrm{max}}= \\variabile{b}$.\n  The intersection points between the line $\\variabile{p} = -0.2$ and $\\partial$\\set{T}{$4$,}{$3$} are $(\\variabile{u}_{4,3}^{\\textrm{min}}, \\variabile{p})$ and $(\\variabile{u}_{4,3}^{\\textrm{max}}, \\variabile{p}).$ $\\variabile{v}_{4,3}^{\\textrm{min}} = \\max\\{\\pos{t,}{$4$}^{\\textrm{min}}, \\variabile{u}_{4,3}^{\\textrm{min}}\\}$\n  and $\\variabile{v}_{4,3}^{\\textrm{max}} = \\min\\{\\pos{t,}{$4$}^{\\textrm{max}}, \\variabile{u}_{4,3}^{\\textrm{max}}\\}$.}}\n   \\label{fig:T43}\n\\end{minipage}\\hfill\n%\\hspace{3cm}\n\\end{figure}\n\\begin{algorithm}\n\\caption{Recursive procedure for the intensity calculation}\\label{alg}\nInitialize {$\\lineaj=4,$  $\\pos{t,}{$4$}^\\textrm{\\,min} = \\variabile{q}^{\\textrm{min}} = -\\variabile{b},$ \n $\\pos{t,}{$4$}^ \\textrm{\\,max}= \\variabile{q}^{\\textrm{max}}=\\variabile{b},$ $\\dir{t,}{$4$} = \\dir{}{} =  \\mbox{const},$  $\\Pi = (4)$. 
}\n\\begin{algorithmic}[1]\n\\Procedure {Intensity computation}{$\\lineaj$, $\\pos{t,}{\\lineaj}^\\textrm{\\,min},$  $\\pos{t,}{\\lineaj}^ \\textrm{\\,max},$ $\\dir{t,}{\\lineaj},$  $\\Pi$}\n\\For{$ \\lineai =  1, 2, 3 $}\n   \\If{$\\lineai\\neq\\lineaj$}\n   \\State Compute the intersection points \n    $(\\variabile{u}_{\\lineaj, \\lineai}^\\textrm{\\,min}, \\dir{t,}{\\lineaj})$ and $(\\variabile{u}_{\\lineaj, \\lineai}^\\textrm{\\,max}, \\dir{t,}{\\lineaj})$\n   %between $\\dir{t,}{\\lineaj}$ and $\\partial$\\set{T}{\\lineaj,}{\\lineai}\n \\State $\\Pi=(\\lineai, \\Pi)$\n           \\State Compute \n%          \\begin{equation*}\n     $      [\\variabile{v}_{\\lineaj, \\lineai}^\\textrm{\\,min}, \\variabile{v}_{\\lineaj, \\lineai}^\\textrm{\\,max}] = [\\variabile{u}_{\\lineaj, \\lineai}^\\textrm{\\,min}, \\variabile{u}_{\\lineaj, \\lineai}^\\textrm{\\,max}]\\cap\n           [\\pos{t,}{\\lineaj}^\\textrm{\\,min}, \\pos{t,}{\\lineaj}^\\textrm{\\,max}] $\n  %        \\end{equation*}\n       \\If{$\\lineai\\neq 1$  \\& $\\lineai\\neq 4$}\n           \\State Trace back from \\set{T}{\\lineaj}{} to \\set{T}{\\lineai}{}\n           \\begin{equation*}\n           \\begin{aligned}\n           (\\pos{t,}{\\lineai}^{\\,1}, \\dir{t,}{\\lineai})& =\\inversemap{R}{\\lineai}{}\\circ \\inversemap{P}{\\lineai,}{\\lineaj}(\\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,min}, \\dir{t,}{\\lineaj})  \\\\\n           (\\pos{t,}{\\lineai}^{\\,2}, \\dir{t,}{\\lineai}) & =\\inversemap{R}{\\lineai}{}\\circ \\inversemap{P}{\\lineai,}{\\lineaj}(\\variabile{v}_{\\lineaj,\\lineai}^\\textrm{\\,max}, \\dir{t,}{\\lineaj})\n           \\end{aligned}\n           \\end{equation*}\n           \\State Determine \\begin{equation*}\n\\pos{t,}{\\lineai}^\\textrm{\\,min}= \\min\\{\\pos{t,}{\\lineai}^{\\,1}, \\pos{t,}{\\lineai}^{\\,2}\\} \\mbox{ and }\n\\pos{t,}{\\lineai}^\\textrm{\\,max}= \\max\\{\\pos{t,}{\\lineai}^{\\,1}, \\pos{t,}{\\lineai}^{\\,2}\\}\n\\end{equation*}\n          \\State\\Return{\\Call{Intensity computation}{$\\lineai$, $\\pos{t,}{\\lineai}^\\textrm{\\,min},$  $\\pos{t,}{\\lineai}^ \\textrm{\\,max},$ $\\dir{t,}{\\lineai},$  $\\Pi$}}\n       \\Else \n\\If{$\\lineai=1$}\n              \\If{$\\lineaj\\neq4$}\n                 \\State Trace back from \\set{T}{\\lineaj}{} to \\set{S}{$1$}{}, next apply the forward map $\\mapnumb{M}_{1,4}(\\Pi)$\n\\begin{equation*}\n\\begin{aligned}\n(\\pos{s,}{$1$}^{\\,1}, \\dir{s,}{$1$})& = \\inversemap{P}{1,}{\\lineaj}(\\variabile{v}_{\\lineaj,1}^\\textrm{\\,min}, \\dir{t,}{\\lineaj})  \\\\\n(\\pos{s,}{$1$}^{\\,2}, \\dir{s,}{$1$}) & =\\inversemap{P}{1,}{\\lineaj}(\\variabile{v}_{\\lineaj, 1}^\\textrm{\\,max}, \\dir{t,}{\\lineaj})\\\\\n(\\pos{}{}^{1}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{$1$}^{\\,1}, \\dir{s,}{$1$})\\\\\n(\\pos{}{}^{2}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{$1$}^{\\,2}, \\dir{s,}{$1$})\n\\end{aligned}\n\\end{equation*}\n%\\State Apply \n%\\begin{equation*}\n%\\begin{aligned}\n% (\\pos{}{}^{1}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{1}^{\\,1}, \\dir{s,}{1})\\\\\n% (\\pos{}{}^{2}(\\Pi, \\dir{}{}), \\dir{}{})&= \\mapnumb{M}_{1,4}(\\Pi)(\\pos{s,}{1}^{\\,2}, \\dir{s,}{1})\n%\\end{aligned}\n%\\end{equation*}\n\\State Calculate\n\\begin{equation*}\n\\begin{aligned}\n\\pos{}{}^\\textrm{\\,min}(\\Pi,\\dir{}{})&= \\min\\{\\pos{}{}^{\\,1}, \\pos{}{}^{\\,2}\\} \\\\ \n\\pos{}{}^\\textrm{\\,max}(\\Pi,\\dir{}{})&= \\max\\{\\pos{}{}^{\\,1}, 
\\pos{}{}^{\\,2}\\}\n\\end{aligned}\n\\end{equation*}\n\\State where $\\pos{}{}^{\\,1} := \\pos{}{}^{\\,1}(\\Pi, \\dir{}{})$ and $\\pos{}{}^{\\,2} := \\pos{}{}^{\\,2}(\\Pi, \\dir{}{})$.\n                 \\State\\Return{$I(\\variabile{p})= I(\\variabile{p})+\\variabile{q}^\\textrm{\\,max}(\\Pi, \\dir{}{})-\\variabile{q}^\\textrm{\\,min}(\\Pi, \\dir{}{})$}\n                 \\Else\n                 \\begin{equation*}\n                    %   \\begin{aligned}\n                       \\variabile{q}^\\textrm{\\,min}(\\Pi, \\dir{}{}) = \\variabile{v}_{4, 1}^\\textrm{\\,min}, \n                       \\variabile{q}^\\textrm{\\,max}(\\Pi, \\dir{}{}) = \\variabile{v}_{4, 1}^\\textrm{\\,max}\n                    %   \\end{aligned}\n                       \\end{equation*}\n                  \\State\\Return{$I(\\variabile{p})= I(\\variabile{p})+\\variabile{q}^\\textrm{\\,max}(\\Pi, \\dir{}{})-\\variabile{q}^\\textrm{\\,min}(\\Pi, \\dir{}{}).$}\n              \\EndIf    \n\\Else \\pagebreak\n\\State \\Return $I(\\variabile{p})$\n\\EndIf\n        \\EndIf\n     \\EndIf\n\\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n \\\\ \\indent In the next section we provide the numerical results for the two-faceted cup. \n\\section{Numerical results for the two-faceted cup}\nTo demonstrate the accuracy of the method, a comparison with MC and QMC ray tracing is provided.\nThe MC and QMC intensities are computed as explained in Chapter \\ref{chap:raytracing}. \nWe consider here the same partitioning $P:-1 = \\variabile{p}^0 < \\cdots< \\variabile{p}^\\textrm{Nb} = 1$ of the interval $[-1,1]$ used for all the simulations presented in the previous chapters. \nThe profile of the QMC intensity is obtained by tracing $10^7$ rays and taking $\\textrm{Nb} = 100$.% is depicted in Figure \\ref{fig:intensity_cup} with a blue line.\\\\\n\\\\ \\indent  The PS intensity is obtained from (\\ref{eta2}), where the rays on the boundaries are obtained by applying backward ray mapping. We observe that the method is suitable for detecting all the possible paths $\\Pi$ that a ray can follow during the propagation through the system. According to the results obtained with PS ray tracing, $5$ different paths are found for the two-faceted cup. Given a path $\\Pi$, the coordinates $(\\variabile{q}^\\textrm{min}(\\Pi, \\variabile{p}^ \\variabile{h}), \\variabile{p}^ \\variabile{h})$ and $(\\variabile{q}^\\textrm{\\,max}(\\Pi, \\variabile{p}^ \\variabile{h}), \\variabile{p}^\\variabile{h})$ of the corresponding rays located on $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ are determined for every $\\variabile{p} = (\\variabile{p}^\\variabile{h})_{\\variabile{h}=0, \\cdots, \\textrm{Nb}}$ where the values $\\variabile{p}^\\variabile{h}$ are chosen from the partitioning $P$ used for QMC ray tracing. These rays are depicted in Figure \\ref{fig:final_T}, where all the rays that follow the same path are shown with the same color. For the two-faceted cup, given a direction $\\variabile{p}^\\variabile{h}$ and a path $\\Pi$, only two rays are located on the boundary $\\partial$\\set{R}{}{}$(\\Pi)$ of the corresponding region along that direction. 
As a consequence, at most $2\\,\\npath\\,\\textrm{Nb}$ rays need to be traced from the target to the source, where $\\npath =5$ is the number of paths.\nThe averaged normalized PS intensity is given by Equation (\\ref{eq:normalized_PS_intensity})\n%\\begin{equation}\\label{eq:intensity}\n% \\hat{I}_{\\textrm{PS}}(\\variabile{p}^{\\variabile{h}+1/2}) = \\frac{\\int_{\\variabile{p}^\\variabile{h}}^{\\variabile{p}^{\\variabile{h}+1}}{\\lineai}_{PS}(\\variabile{p})\\textrm{d}\\variabile{p}}{\\int_{-1}^{1}{\\lineai}_{PS}(\\variabile{p})\\textrm{d}\\variabile{p}}\\,,\n% \\end{equation}\n%for $\\variabile{h} = 0,1, \\cdots, \\textrm{Nb}-1,$\n where the integrals are calculated using the trapezoidal rule.\n\\begin{figure}[t]\n  \\begin{center}\n  \\includegraphics[scale=.4]{final_T}\n  \\end{center}\n  \\caption{\\textbf{Target phase space of the two-faceted cup divided into $100$ bins.}\n  Five different paths are found. The rays with coordinates $(\\variabile{q}^\\textrm{\\,min}, \\variabile{p})$ and $(\\variabile{q}^\\textrm{\\,max}, \\variabile{p})$ in \\set{T}{$4$}{} that are located at the boundaries $\\partial \\mbox{\\set{R}{}{}}(\\Pi)$ are depicted with dots; the color of the dots depends on the path $\\Pi$ followed by the rays.\n  Using the ray mapping method, only these rays need to be traced from $\\point{S}$ to $\\point{T}$ for the intensity computation.}\n  \\label{fig:final_T}\n\\end{figure} \n%The profile of the PS intensity is depicted in in Figure \\ref{fig:intensity_cup} with the dotted green line.\nThe approximated intensities $\\hat{I}_{\\textrm{A}} (\\textrm{A} = \\textrm{MC}, \\textrm{QMC},  \\textrm{PS})$ are compared to the reference intensity $\\hat{I}_\\textrm{ref}$ which in this case is the exact intensity ($\\hat{I}_{\\textrm{ref}}=\\hat{I}_{\\textrm{exact}}$). The results in Figure \\ref{fig:intensity_cup} show that our method computes the intensity correctly. \n \\\\ \\indent\nTo compare the speed of convergence of the three methods, we consider the error between the approximate intensities $\\hat{I}_{\\textrm{A}}$ ($\\textrm{A} = \\textrm{MC}, \\textrm{QMC}, \\textrm{PS}$) and the exact intensity $\\hat{I}_{\\textrm{exact}} = \\hat{I}_{\\textrm{ref}}$.\nThe three errors as a function of the CPU-time are depicted in a logarithmic scale in Figure $\\ref{fig:error_cup}$. Numerical results show that MC ray tracing converges proportionally to the inverse of the square root of the number of rays traced, the QMC error is proportional to the inverse of the number of rays, and backward ray mapping computes the output intensity of the two-faceted cup exactly. 
Also, it is much faster than MC ray tracing when an error smaller than $10^{-4}$ is required and it is faster than QMC ray tracing if an error smaller than around $10^{-5}$ is desired.\n\\begin{figure}[t]\n  \\begin{center}\n  \\includegraphics[scale=.4]{intensity_cup_raymapping}\n  \\end{center}\n  \\caption{\\textbf{Intensities for the two-faceted cup.} The intensities found with three different approaches are shown.}\n  \\label{fig:intensity_cup}\n\\end{figure}\n\\begin{figure}[h]\n  \\begin{center}\n  \\includegraphics[scale=.4]{error_cup_raymapping}\n  \\end{center}\n  \\caption{\\textbf{Errors for the two-faceted cup.} The errors are depicted as a function of the CPU time (in seconds).}\n  \\label{fig:error_cup}\n\\end{figure}\n\\section{Extension of the method for the multi-faceted cup}\n\\label{sec:Generalization}\nThe method can be generalized to more complicated optical systems.\nIn particular, it can be used for all systems formed by straight line segments.\nThe goal of this section is to show the generalization of the method to the multi-faceted cup, which is a system with many left and right segments as reflectors.\nThe design of this system is explained below. \\\\ \\indent\nA multi-faceted cup is an optical system formed by a source, a target and $\\nline-2$ reflectors, where $\\nline$ is the number of optical line segments that form the system.\nDefining a Cartesian coordinate system $(\\variabile{x}, \\variabile{z})$, the multi-faceted cup is symmetric with respect to the optical axis (\\variabile{z}-axis). An example of this system is depicted in Figure \\ref{fig:multifacetedcup} where all the lines are labeled with numbers.\nThe source $\\point{S}= [-\\variabile{a}, \\variabile{a}]$ (line $1$) and the target $\\point{T}= [-\\variabile{b}, \\variabile{b}]$ (line $22$) are two segments both perpendicular to the optical axis, with $\\variabile{a}=2$ and $\\variabile{b}=17$.\n$\\point{S}$ is located at the height $\\variabile{z}=0$ while $\\point{T}$ has a height $\\variabile{z}=40$.\nBoth sides of the system are divided into $10$ segments which connect $\\point{S}$ with $\\point{T}$.\nThe ten adjacent segments at the left of the system (lines $2, \\cdots, 11$) connect the left extreme of the source with the left extreme of the target.\nSimilarly, ten adjacent segments at the right of the system (lines $12, \\cdots, 21$) connect the right extreme of the source with the right extreme of the target.\nThese segments are designed as follows. The intervals $[-\\variabile{b}, -\\variabile{a}]$ and $[\\variabile{a}, \\variabile{b}]$ are divided into ten sub-intervals of the same length $(\\variabile{b}-\\variabile{a})/10$.\nThe \\variabile{x}-coordinates of the end points of the line segments $12, \\cdots, 21$ are equal to the \\variabile{x}-coordinates of the sub-intervals of $[\\variabile{a},\\variabile{b}]$, while the \\variabile{x}-coordinates of the end points of the line segments $2, \\cdots, 11$ are equal to the \\variabile{x}-coordinates of the sub-intervals of $[-\\variabile{b},-\\variabile{a}]$.\nThe \\variabile{z}-coordinates of every end point of the line segments $2, \\cdots, 21$  are obtained by substituting their $\\variabile{x}$-coordinates into the equation of the parabola whose symmetry axis is equal to the \\variabile{z}-axis and that passes through the\npoints $(-17,40)$ and $(17,40)$. 
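\nA small Python sketch of this construction (for the right half; the left half is its mirror image) is given below. Since a parabola with the $\\variabile{z}$-axis as symmetry axis is not fixed by the points $(\\pm 17, 40)$ alone, the sketch additionally assumes that the parabola passes through the source end points $(\\pm\\variabile{a}, 0)$, so that the first facets attach to the source:\n\\begin{verbatim}\nimport numpy as np\n\na, b, z_T = 2.0, 17.0, 40.0\nc = z_T / (b**2 - a**2)          # z(x) = c*x**2 + d with z(a)=0, z(b)=z_T\nd = -c * a**2\n\nx = np.linspace(a, b, 11)        # ten equal sub-intervals of [a, b]\nz = c * x**2 + d                 # heights on the parabola\nright_facets = [((x[k], z[k]), (x[k+1], z[k+1])) for k in range(10)]\nleft_facets = [((-q2, h2), (-q1, h1))\n               for (q1, h1), (q2, h2) in right_facets]\n\\end{verbatim}\nWith these assumptions $z(2)=0$ and $z(17)=40$, consistent with the positions of \\point{S} and \\point{T}.\n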
The $20$-faceted cup is now well defined and can be seen as an approximation of a parabolic reflector.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=.4]{multifacetedcup1}\n\\caption{\\textbf{The $20$-faceted cup.} The system is formed by $22$ different line segments: the source $\\point{S}$, the target\n$\\point{T}$, ten left reflectors and ten right reflectors.\n $\\point{S}=[-2,2]$ is located at $\\variabile{z}=0$. $\\point{T} = [-17,17]$ is parallel to the source and it is located at a height $\\variabile{z}=40$.\n All the lines are located in air.}\n\\label{fig:multifacetedcup}\n\\end{figure}\n\\\\ \\indent\nSimilarly to the two-faceted cup, we define for the multi-faceted cup the phase spaces of all the lines $\\lineai\\in\\{1, \\cdots, \\nline\\}$ (for the $20$-faceted cup $\\nline=22$, which is also the index of the target). \nFor the system in Figure \\ref{fig:multifacetedcup}, $42$ different phase spaces need to be considered.\nIn general, for a system formed by $\\nline$ straight line segments, $2\\nline-2$ phase spaces are considered.\nFor all the systems formed by straight line segments, the boundaries $(\\partial\\mbox{\\set{S}{\\lineai,}{\\lineaj}})_{\\lineai\\neq\\lineaj=2, \\cdots, \\nline}$ and $(\\partial\\mbox{\\set{T}{\\lineai,}{\\lineak}})_{\\lineai\\neq\\lineak=1, \\cdots, \\nline-1}$ of the regions that form every PS are determined. \\\\\n\\indent The boundaries $(\\partial\\mbox{\\insieme{T}}_{\\nline,\\lineak})_{\\lineak=1, \\cdots, \\nline-1}$ for the $20$-faceted cup are depicted in Figure \\ref{fig:T20} with red lines.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width = 0.7\\textwidth]{target10facetedcup1}\n\\caption{\\textbf{Target PS of the $20$-faceted cup.}\nThe red lines are the boundaries $(\\partial\\mbox{\\set{T}{$22$,}{\\lineak}})_{\\lineak=1, \\cdots, 21}$, which are determined analytically. The numbers inside the regions \\set{T}{$22$,}{\\lineak} indicate the value of the index \\lineak.}\n\\label{fig:T20}\n\\end{figure}\nAll the possible paths that the rays can follow when propagating within the $20$-faceted cup are determined using the same algorithm developed for the two-faceted cup and explained in Section \\ref{sec:algorithm_raymapping}.\n As the number of optical lines increases, the number of possible paths increases as well.\n Therefore, we have to construct a more complicated tree than the one in Figure \\ref{fig:tree}.\nDespite this, the algorithm explained in the previous section still applies and, also for the multi-faceted cup, we are able to determine all the possible paths $\\Pi$ and all the positive luminance regions \\set{R}{}{}$(\\Pi)$ at the target PS.\n% \\insieme{T}$_{\\nline}$.\nFor a given direction $\\variabile{p}=\\mbox{const}$, the position coordinates $\\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})$ and $\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p})$ of the intersection points between the boundaries $ \\partial$\\set{R}{}{}($\\Pi$) and the line $\\variabile{p}= \\mbox{const}$ are calculated for every possible path $\\Pi$.
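Assuming, as for the two-faceted cup, a source of uniform luminance, the (non-normalized) PS intensity along $\\variabile{p}$ then reduces to the total width of the positive luminance regions,\n\\begin{equation*}\nI_{\\textrm{PS}}(\\variabile{p}) \\propto \\sum_{\\Pi}\\left(\\variabile{q}^\\textrm{\\,max}(\\Pi,\\variabile{p}) - \\variabile{q}^\\textrm{\\,min}(\\Pi,\\variabile{p})\\right).\n\\end{equation*}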
Finally, the target intensity $\\hat{I}_{\\textrm{PS}}(\\variabile{p})$ along the direction $\\variabile{p}$ is obtained.\nNumerical results for a $20$-faceted cup are given in the next section.\n\\section{Numerical results for the $20$-faceted cup}\n\\label{sec:Numerical results_10cup}\nIn this section the results for the $20$-faceted cup are presented.\nWe compute the target intensity with both the backward ray mapping method and MC ray tracing.\nThe same partitioning $P$ of the interval $[-1,1]$ used for the two-faceted cup is considered. A comparison between the reference intensity $\\hat{I}_{\\textrm{ref}}$ and the ray mapping intensity $\\hat{I}_{\\textrm{PS}}$ is shown in Figure \\ref{fig:intensity10cup}, where $\\hat{I}_{\\textrm{ref}}$ is obtained using QMC ray tracing with $10^8$ rays.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width = 0.7\\textwidth]{intensity_10_cup_raymapping}\n\\caption{\\textbf{Intensity for the $20$-faceted cup}.\nComparison between the reference intensity (QMC ray tracing with $10^8$ rays) and the ray mapping intensity.}\n\\label{fig:intensity10cup}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=.4]{error_10_cup_raymapping}\n\\caption{\\textbf{Errors for the $20$-faceted cup as a function of the CPU-time.} The ray mapping method is more accurate than QMC ray tracing and it is faster when an error smaller than $10^{-5}$ is desired.}\n\\label{fig:error10cup} \n\\end{figure}\n\\\\ \\indent Note that the intensity profile in Figure \\ref{fig:intensity10cup} is more concentrated around the direction $\\variabile{p}=0$ than the intensity of the two-faceted cup (see Figure \\ref{fig:intensity_cup}). In particular, as the number of left and right reflectors increases, the intensity profile becomes more and more peaked around the center, approaching the profile of a parabolic reflector (see Chapter \\ref{chap:triangulation}).\nThe error between the approximate intensities $\\hat{I}_{\\textrm{A}}$ ($\\textrm{A} = \\textrm{QMC}, \\textrm{PS}$) and the reference intensity $\\hat{I}_{\\textrm{ref}}$ is shown in Figure \\ref{fig:error10cup}. \nThe PS intensity is calculated using (\\ref{eq:normalized_PS_intensity}), where the integral is approximated using the trapezoidal rule. Figure \\ref{fig:error10cup} shows that the PS error decreases as the number of intervals in the trapezoidal rule increases. We computed the PS error four times, considering $1, 10, 50$, and $100$ intervals in the trapezoidal rule. \nWe remark that the PS method gives the value of the intensity point-wise; therefore, we could compute the PS intensity without numerical integration. Nevertheless, we calculate the averaged intensity because we want to compare it with the QMC intensity $\\hat{I}_{\\textrm{QMC}}$.\n\\\\ \\indent The error convergence is depicted in Figure \\ref{fig:error10cup} with the red line.\nSince all the boundaries of the regions in PS are calculated exactly, our expectation is that the PS intensity is the exact intensity.\nFrom Figure \\ref{fig:error10cup} we observe that the minimum ray mapping error has an order of magnitude of $10^{-7}$.\nThis is due to the fact that for the $20$-faceted cup the exact intensity cannot be computed analytically.
\nTherefore, we took as reference intensity $\\hat{I}_{\\textrm{ref}}$ an intensity computed with QMC ray tracing using $10^8$ rays, which is not the exact intensity.\nThe error between the normalized exact intensity $\\hat{I}_{\\textrm{exact}}$ and the normalized approximate intensity $\\hat{I}_{\\textrm{A}}$ is bounded, via the triangle inequality, by:\n\\begin{equation}\\label{eq:qmc_error_20_cup}\n\\frac{1}{\\textrm{Nb}}\\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}| \\hat{I}_{\\textrm{exact}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{A}}(\\variabile{p}^\\variabile{h})| \\leq\n\\frac{1}{\\textrm{Nb}}\\Bigg(\\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}\\big| \\hat{I}_{\\textrm{exact}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{ref}}(\\variabile{p}^\\variabile{h})\\big| +\n\\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}\\big| \\hat{I}_{\\textrm{ref}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{A}}(\\variabile{p}^\\variabile{h})\\big|\\Bigg)\\,.\n\\end{equation}\nTo give an estimation of the first term on the right hand side of (\\ref{eq:qmc_error_20_cup}), we perform a linear extrapolation from the error values obtained. The extrapolated point at the time needed for computing the reference intensity (cyan dot in Figure \\ref{fig:error10cup}) gives an estimation of the error between the reference intensity and the exact intensity. If the reference intensity were \\textit{exact}, we would expect an error \\textit{exactly} equal to $0$.\nFrom the extrapolation we obtain\n\\begin{equation*}\\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}| \\hat{I}_{\\textrm{exact}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{ref}}(\\variabile{p}^\\variabile{h})|/\\textrm{Nb}\\approx 1.02\\times 10^{-7}. \\end{equation*}\nThe results show that \n\\begin{equation*}\n\\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}| \\hat{I}_{\\textrm{exact}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{ref}}(\\variabile{p}^\\variabile{h})|/\\textrm{Nb}\n\\approx \\sum_{\\variabile{h} = 1}^{\\textrm{Nb}}|\\hat{I}_{\\textrm{ref}}(\\variabile{p}^\\variabile{h}) - \\hat{I}_{\\textrm{PS}}(\\variabile{p}^\\variabile{h})|/\\textrm{Nb}.\n\\end{equation*}\nSince the accuracy of the reference intensity is comparable with the accuracy of the PS intensity, we claim that the error found with backward ray mapping is largely due to the QMC error of the reference intensity. We expect that, with a more accurate reference intensity, obtained for instance using QMC ray tracing with more rays, the measured PS error would decrease.\nWe conclude that the backward ray mapping method performs very well also for more complicated systems.\nCompared to QMC ray tracing, the new method is not only faster but also much more accurate.\n\\section{Discussions}\nIn this chapter, we presented a backward ray mapping method to compute the target intensity of a given optical system.\nThe method employs the PS of \\textit{all} the lines that form the system.\nAll these phase spaces are related to each other through two different kinds of maps.\nA concatenation of these two maps gives a map that connects the coordinates of the rays at the source with those at the target.\nEmploying the inverse of the concatenated map, all the possible paths that rays can follow during their propagation are found. \nOnly the rays located on the boundaries of the positive luminance regions are traced,\nwhere every region is formed by rays that follow the same path during their propagation. From those rays the output intensity is calculated. \\\\ \\indent\nWe presented numerical results for two optical systems: the two-faceted cup and the $20$-faceted cup.
The boundaries of the regions that form every PS are determined exactly.\n Numerical results show that the exact output intensity is obtained.\n We compared our method with MC and QMC ray tracing, showing significant advantages in terms of accuracy and computational time.\n We conclude that the ray mapping method applied to systems formed by straight line segments calculates the \\textit{exact} intensity.\\\\\n\\indent In the next chapter we present the method extended to systems formed by curved lines. \n\n\n", "meta": {"hexsha": "d303cd9b9e0b456e121396f15d60490e85c878ef", "size": 83655, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/Analytic_ray_mapping.tex", "max_stars_repo_name": "melaniafilosa/ps_raytracing", "max_stars_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/Analytic_ray_mapping.tex", "max_issues_repo_name": "melaniafilosa/ps_raytracing", "max_issues_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/Analytic_ray_mapping.tex", "max_forks_repo_name": "melaniafilosa/ps_raytracing", "max_forks_repo_head_hexsha": "8f9111ea4ec3ac125b593f41b3ac6fe302ea6632", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.5254237288, "max_line_length": 1407, "alphanum_fraction": 0.6660689738, "num_tokens": 29645, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933447152498, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.5512402802151782}}
{"text": "\\documentclass[10pt,a4paper,notitlepage]{report}\n\\usepackage[utf8]{inputenc}\n\\usepackage[english]{babel}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n% end of usepackage section -----------------------------------------------------\n\n\\begin{document}\n\n\\section{Force from Gravity}\n\\begin{equation}\n\\vec{F}_i = \\sum_{i \\neq j} \\frac{G m_i m_j}{\\vec{x}_{ij}^2} \\hat{x}_{ij}\n\\end{equation}\n\n\\section{Acceleration}\n\\begin{equation}\n\\vec{a}_i = \\frac{\\vec{F}_i}{m_i}\n\\end{equation}\n\n\\section{Velocity}\n\\begin{equation}\ndv = a \\, dt\n\\end{equation}\n\\begin{equation}\n\\vec{v}_{n+1} = \\vec{a}_n \\Delta t + \\vec{v}_n\n\\end{equation}\n\n\\section{Position}\n\\begin{equation}\ndx = v \\, dt\n\\end{equation}\n\n\\begin{equation}\nx_{n+1} = \\vec{v}_{n+1} \\Delta t + x_n\n\\end{equation}\n\n\\section{Runge-Kutta}\n\\begin{equation}\ny_{n+1} = y_n + \\tfrac{h}{6}\\left(k_1 + 2k_2 + 2k_3 + k_4 \\right)\n\\end{equation}\n\n\\begin{equation}\nh = \\Delta t\n\\end{equation}\n\n\\begin{equation}\nt_{n+1} = t_n + h \n\\end{equation}\n\n\\begin{equation}\nk_1 = f(t_n, y_n)\n\\end{equation}\n\n\\begin{equation}\nk_2 = f\\left(t_n + \\frac{h}{2}, y_n + h\\frac{k_1}{2}\\right)\n\\end{equation}\n\n\\begin{equation}\nk_3 = f\\left(t_n + \\frac{h}{2}, y_n + h\\frac{k_2}{2}\\right)\n\\end{equation}\n\n\\begin{equation}\nk_4 = f\\left(t_n + h, y_n + hk_3\\right)\n\\end{equation}\n\n\\end{document}\n", "meta": {"hexsha": "eee3df601dcb53142458b10fbbd72860b253e7a2", "size": 1322, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/grav_sim.tex", "max_stars_repo_name": "foxfire256/grav_sim", "max_stars_repo_head_hexsha": "bc5d4ca1250203a6b7654c04914bb3fe13cef67e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/grav_sim.tex", "max_issues_repo_name": "foxfire256/grav_sim", "max_issues_repo_head_hexsha": "bc5d4ca1250203a6b7654c04914bb3fe13cef67e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/grav_sim.tex", "max_forks_repo_name": "foxfire256/grav_sim", "max_forks_repo_head_hexsha": "bc5d4ca1250203a6b7654c04914bb3fe13cef67e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.1594202899, "max_line_length": 81, "alphanum_fraction": 0.6452344932, "num_tokens": 537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8198933271118222, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.5512402737628496}}
{"text": "\\documentclass[twoside]{MATH77}\n\\usepackage[\\graphtype]{mfpic}\n\\usepackage{multicol}\n\\usepackage[fleqn,reqno,centertags]{amsmath}\n\\begin{document}\n\\opengraphsfile{pl02-10}\n\\begmath 2.10 Exponential Integrals Ei and $E_1$\n\n\\silentfootnote{$^\\copyright$1997 Calif. Inst. of Technology, \\thisyear \\ Math \\`a la Carte, Inc.}\n\n\\subsection{Purpose}\n\nThese subroutines compute the exponential integrals Ei and $E_1$, defined\nby%\n\\begin{equation*}\n\\text{Ei}(x)=\\int_{-\\infty }^x\\frac{e^t}t\\,dt,\\quad \\text{and\\quad }%\nE_1(x)=\\int_x^\\infty \\frac{e^{-t}}t\\,dt.\n\\end{equation*}\n\nThese functions are related by the equation\n\\begin{equation*}\n\\text{Ei}(x) = -E_1(-x)\n\\end{equation*}\nThe functions Ei$(x)$ for $x > 0$ and $E_1(x)$ for $x < 0$ are defined as\nCauchy principal value integrals. These functions thus have well-defined\nfinite values for all real $x$ except $x = 0$ where Ei$(0) = -\\infty $ and $%\nE_1(0) = +\\infty .$\n\nFor additional properties of these functions see \\cite{ams55:exp-int}.\n\n\\subsection{Usage}\n\n\\subsubsection{Program Prototype, Single Precision}\n\n\\begin{description}\n\\item[REAL]  \\ {\\bf X, Y, EI, SE1}\n\\end{description}\n\nAssign a value to X and obtain the value of Ei or $E_1$ respectively by\nuse of the statements,\n$$\n\\fbox{{\\bf Y =SEI(X)}} \\hspace{.4in} \\fbox{{\\bf Y =SE1(X)}}\n$$\n\n\\subsubsection{Argument Definitions}\n\n\\begin{description}\n\\item[X]  \\ [in] Argument of function. Require X $\\neq 0.$\n\\end{description}\n\n\\subsubsection{Modifications for Double Precision}\n\nFor double precision usage change the REAL statement to DOUBLE PRECISION and\nchange the function names to DEI and DE1 respectively.\n\n\\subsection{Examples and Remarks}\n\nSee the program DRSEI and the output ODSEI for an example of the use of SEI\nand SE1 to tabulate values of Ei and $E_1.$\n\n\\subsection{Functional Description}\n\nAs $x$ varies from $-\\infty $ to~0, $E_1(x)$ varies monotonically from $%\n-\\infty $ to $+\\infty $. There is a single real root\nat~$-$0.37250~74107~81366~63446.\n\nAs $x$ varies from~0 to~$+\\infty $, $E_1(x)$ varies monotonically from $%\n+\\infty $ to zero.\n\n$E_1(x)$ is asymptotic to $x^{-1}e^{-x}$ as $x$ approaches $+\\infty $ or $%\n-\\infty $, and to $-\\ln |x|$\\ as $x$ approaches zero.\n\\vspace{10pt}\n\n\\hspace{5pt}\\mbox{\\input pl02-10 }\n\nLet $\\mu $ and $\\lambda $ be defined so that $e^\\mu $ is the overflow limit\nand $e^{-\\lambda }$ is the underflow limit for the machine arithmetic.\nDefine $\\alpha = \\mu + \\ln \\mu $ and $\\beta = \\lambda - \\ln \\lambda $. Then $%\nE_1(x)$ would overflow for $x < -\\alpha $ and underflow for $x > \\beta .$\n\nThis algorithm, due to L. W. Fullerton, with minor changes by Lawson and\nChiu, partitions the interval $[-\\alpha ,\\beta ]$ into eight subintervals.\nOn each subinterval a polynomial approximation is used.\n\nThe polynomial degrees and the numbers $\\alpha $ and $\\beta $ are determined\non the first entry to the subprogram by use of the System Parameters\nsubprograms (see Chapter~19.1). The subprograms adapt to any precision up to\nabout~31 decimal places.\n\n\\subparagraph{Accuracy tests}\n\nSubprogram SE1 was tested on an IBM compatible PC using IEEE\narithmetic by comparison with DE1 at~50,000\npoints between~$-$80 and 80. The relative precision of the IEEE single\nprecision arithmetic is $\\rho = 2^{-23} \\approx 1.19 \\times 10^{-7}$. 
The\ntest results may be summarized as follows:\n\n\\begin{tabular}{l@{}r@{{.}}l@{{, }}r@{{.}}lr}\n\\multicolumn{5}{c}{\\bf Argument Interval} &\n\\multicolumn{1}{c}{\\bf Max. Rel. Error}\\\\\n\\rule{.2in}{0pt} [ & $-$80&00  &$-$1&20] & $2.5\\rho $\\rule{.4in}{0pt} \\\\\n\\rule{.2in}{0pt} [ &  $-$1&20  &$-$1&00] & $4.6\\rho $\\rule{.4in}{0pt} \\\\\n\\rule{.2in}{0pt} [ &  $-$1&00  &$-$0&41] & $0.9\\rho $\\rule{.4in}{0pt} \\\\\n\\rule{.2in}{0pt} [ &  $-$0&41 &$-$0&30] & (see just below)~~\\\\\n\\rule{.2in}{0pt} [ &  $-$0&30 &  80&00] & $0.8\\rho $\\rule{.4in}{0pt}\n\\end{tabular}\n\nThe relative error in the interval [$-$0.41, $-$0.30] is large due to the\nroot near $-$0.3725. However, $|E_1(x)|$ is bounded by~0.31 and the absolute\nerror has a satisfactorily small bound of $0.22\\rho $ in this interval.\n\n\\bibliography{math77}\n\\bibliographystyle{math77}\n\n\\subsection{Error Procedures and Restrictions}\n\nIn the following cases the function value would be beyond the representable\nrange. The subprograms will issue an error message and return a value as\nfollows ($\\Omega $ is the overflow limit):\n\\begin{center}\n\\begin{tabular}{lr|lr}\n\\multicolumn{1}{c}{X} & \\multicolumn{1}{c}{SEI(X)\\rule{10pt}{0pt}} &\n\\multicolumn{1}{|c}{\\rule{10pt}{0pt}X} & \\multicolumn{1}{c}{SE1(X)}\\\\\n$< -\\beta $            & 0 \\rule{20pt}{0pt}                    &\n\\rule{10pt}{0pt}$< -\\alpha $           & $-\\Omega $\\rule{13pt}{0pt}\\\\\n$= \\phantom{-}0.$      & $-\\Omega $ \\rule{20pt}{0pt}           &\n\\rule{10pt}{0pt}$= \\phantom{-}0.$      & $\\Omega $\\rule{13pt}{0pt}\\\\\n$> \\phantom{-}\\alpha $ & $\\phantom{-}\\Omega $ \\rule{20pt}{0pt} &\n\\rule{10pt}{0pt}$> \\phantom{-} \\beta $ & 0\\rule{13pt}{0pt}\\\\\n\\end{tabular}\n\\end{center}\nError messages are processed using the subroutines of Chapter~19.2 with\nan error level of zero.\n\n\\subsection{Supporting Information}\n\nThe source language is Fortran~77.\n\nBased on code designed and programmed by L.\\ W.\\ Fullerton, Los Alamos\nNational Lab., 1977. Modified by C. L.\\ Lawson and S.\\ Y. 
Chiu, JPL, 1983.\n\n\n\\begin{tabular}{@{\\bf}l@{\\hspace{5pt}}l}\n\\bf Entry & \\hspace{.35in} {\\bf Required Files}\\vspace{2pt} \\\\\nDE1 & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCSEVL, DEI, DERM1, DERV1, DINITS, ERFIN, ERMSG, IERM1,\nIERV1\\rule[-5pt]{0pt}{8pt}}\\\\\nDEI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, DCSEVL, DEI, DERM1, DERV1, DINITS, ERFIN, ERMSG, IERM1,\nIERV1\\rule[-5pt]{0pt}{8pt}}\\\\\nSE1 & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCSEVL, SEI, SERM1, SERV1,\nSINITS\\rule[-5pt]{0pt}{8pt}}\\\\\nSEI & \\parbox[t]{2.7in}{\\hyphenpenalty10000 \\raggedright\nAMACH, ERFIN, ERMSG, IERM1, IERV1, SCSEVL, SEI, SERM1, SERV1\nSINITS\\rule[-5pt]{0pt}{8pt}}\\\\\n\\end{tabular}\n\n\\begcode\n\n\\bigskip\n\n\\lstset{language=[77]Fortran,showstringspaces=false}\n\\lstset{xleftmargin=.8in}\n\n\\centerline{\\bf \\large DRSEI}\\vspace{10pt}\n\\lstinputlisting{\\codeloc{sei}}\n\n\\newpage\n\n\\vspace{30pt}\\centerline{\\bf \\large ODSEI}\\vspace{10pt}\n\\lstset{language={}}\n\\lstinputlisting{\\outputloc{sei}}\n\\closegraphsfile\n\\end{document}\n", "meta": {"hexsha": "2e480437247f6b03901929d89f8aeaa221193170", "size": 6303, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/doctex/ch02-10.tex", "max_stars_repo_name": "jacobwilliams/math77", "max_stars_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2016-01-04T03:17:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-25T19:17:42.000Z", "max_issues_repo_path": "doc/doctex/ch02-10.tex", "max_issues_repo_name": "jacobwilliams/math77", "max_issues_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-01-17T02:48:32.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-21T16:04:58.000Z", "max_forks_repo_path": "doc/doctex/ch02-10.tex", "max_forks_repo_name": "jacobwilliams/math77", "max_forks_repo_head_hexsha": "b562d09e191e99eba8a5bedfec45acf7461203b1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2016-01-07T09:26:45.000Z", "max_forks_repo_forks_event_max_datetime": "2018-11-25T05:32:54.000Z", "avg_line_length": 35.8125, "max_line_length": 98, "alphanum_fraction": 0.6834840552, "num_tokens": 2271, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.672331699179286, "lm_q2_score": 0.8198933271118221, "lm_q1q2_score": 0.5512402737628496}}
{"text": "\\documentclass[11pt]{article}\n\\usepackage{enumitem}\n\\usepackage{amsmath}\n\\usepackage{booktabs}\n\n\\begin{document}\n\\title{MAT1830 --- Assignment 8}\n\\author{Dylan Pinn --- 24160547}\n\\maketitle\n\n\\section*{Question 1}\n\nOne side of a 6-sided die is marked ``0'', two sides are marked ``2'', and three\nsides are marked ``3''. Each side is equally likely to occur when the die is\nrolled. The die is rolled twice and the results are recorded.\n\n\\noindent\nLet $A$ be the event that the first roll is 0. \\\\\nLet $B$ be the event that the second roll is 3. \\\\\nLet $C$ be the event that the sum of the two rolls is 5.\n\n\\begin{enumerate}[label= (\\alph*)]\n  \\item Find $\\Pr(A)$, $\\Pr(B)$, and $\\Pr(C)$.\n\n    \\[ \\Pr(A) = \\frac{1}{6} \\]\n    \\[ \\Pr(B) = \\frac{3}{6} = \\frac{1}{2} \\]\n    \\[ \\Pr(C) = \\frac{12}{36} = \\frac{1}{3} \\]\n\n  \\item Find $\\Pr(A \\cap B)$, $\\Pr(A \\cap C)$, and $\\Pr(B \\cap C)$.\n\n    \\[ \\Pr(A \\cap B) = \\frac{3}{36} = \\frac{1}{12} \\]\n    \\[ \\Pr(A \\cap C) = 0 \\]\n    \\[ \\Pr(B \\cap C) = \\frac{6}{36} = \\frac{1}{6} \\]\n\n  \\item Are $A$ and $C$ independent events? Are $B$ and $C$ independent events?\n\n    \\[ \\Pr(A \\cap C) = \\Pr(A)\\Pr(C) \\]\n    \\[ 0 = \\frac{1}{6} \\times \\frac{1}{3} \\]\n    \\[ 0 \\not = \\frac{1}{18} \\]\n\n    As these are not equal the events are not independent.\n\n    \\[ \\Pr(B \\cap C) = \\Pr(B)\\Pr(C) \\]\n    \\[ \\frac{6}{36} = \\frac{1}{2} \\times \\frac{1}{3} \\]\n    \\[ \\frac{1}{6} = \\frac{1}{6} \\]\n\n    As these are equal the events are independent.\n\n  \\item Find $\\Pr(A \\cup C)$ and $\\Pr(B \\cup C)$.\n\n    \\begin{align*}\n      \\Pr(A \\cup C) &= \\Pr(A) + \\Pr(C) - \\Pr(A \\cap C) \\\\\n      &= \\frac{1}{6} + \\frac{1}{3} - 0 \\\\\n      &= \\frac{3}{6} \\\\\n      &= \\frac{1}{2}\n    \\end{align*}\n\n    \\begin{align*}\n      \\Pr(B \\cup C) &= \\Pr(B) + \\Pr(C) - \\Pr(B \\cap C) \\\\\n      &= \\frac{1}{2} + \\frac{1}{3} - \\frac{1}{6} \\\\\n      &= \\frac{4}{6} \\\\\n      &= \\frac{2}{3}\n    \\end{align*}\n\n  \\item Find $\\Pr(B \\mid B \\cup C)$.\n\n    \\begin{align*}\n      \\Pr(B \\mid B \\cup C) &= \\frac{\\Pr(B \\cap (B \\cup C))}{\\Pr(B \\cup C)} \\\\\n      &= \\frac{\\Pr(B)}{\\frac{2}{3}} \\\\\n      &= \\frac{\\frac{1}{2}}{\\frac{2}{3}} \\\\\n      &= \\frac{3}{4}\n    \\end{align*}\n\\end{enumerate}\n\n\\break{}\n\\section*{Question 2}\n\nBlofeld captures James Bond and places him in a pit with 100 deadly scorpions,\n20 of which have been genetically engineered for extra deadliness. The normal\nscorpions' bites are fatal $60\\%$ of the time and the genetically engineered\nscorpions' bites are fatal $90\\%$ of the time. Bond escapes the pit, but is\nbitten once by one of the scorpions. 
Given that Bond survives, what is the\nprobability that the scorpion that bit him was genetically engineered?\n\n\\bigskip\n\\noindent\nLet $A$ be the event that Bond is bitten by a genetically engineered scorpion.\n\\\\\nLet $B$ be the event that Bond survives.\n\n\\begin{align*}\n  \\Pr(A) &= \\frac{20}{100} = \\frac{2}{10} \\\\\n  \\Pr(B \\mid A) &= 1 - \\frac{9}{10} = \\frac{1}{10} \\\\\n  \\Pr(B \\mid \\bar A) &= 1 - \\frac{6}{10} = \\frac{4}{10} \\\\\n  \\Pr(\\bar A) &= \\frac{80}{100} = \\frac{8}{10} \\\\\n  \\Pr(A \\mid B) &= \\frac{\\Pr(B \\mid A) \\Pr(A)}{\\Pr(B \\mid A)\\Pr(A) + \\Pr(B \\mid\n  \\bar A) \\Pr(\\bar A)} \\\\\n  &= \\frac{\\frac{1}{10} \\times \\frac{2}{10}}{\\frac{1}{10} \\times \\frac{2}{10} +\n  \\frac{4}{10} \\times \\frac{8}{10}} \\\\\n  &= \\frac{\\frac{2}{100}}{\\frac{2}{100}+\\frac{32}{100}} \\\\\n  &= \\frac{\\frac{2}{100}}{\\frac{34}{100}} \\\\\n  &= \\frac{1}{17}\n\\end{align*}\n\n\\break{}\n\\section*{Question 3}\n\nA word is chosen uniformly at random from the set\n\n\\[ \\{ \\text{there, was, a, wall, it, did, not, look, important} \\}\\]\n\n\\noindent\nLet $X$ be the number of times the letter `t' occurs in the chosen word. Write\ndown the probability distribution of $X$.\n\n\\begin{center}\n  \\begin{tabular}{c c c c}\n    \\toprule\n    $x$ & 0 & 1 & 2 \\\\\n    $\\Pr(X=x)$ & $\\frac{5}{9}$ & $\\frac{3}{9}$ & $\\frac{1}{9}$ \\\\\n    \\bottomrule\n  \\end{tabular}\n\\end{center}\n\n\\end{document}\n", "meta": {"hexsha": "bd444413432b84a7e47d0a33c8d86c41f5e5197e", "size": 3827, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignments/assignment-08.tex", "max_stars_repo_name": "dylanpinn/MAT1830", "max_stars_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-03-01T22:58:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-20T03:41:28.000Z", "max_issues_repo_path": "assignments/assignment-08.tex", "max_issues_repo_name": "dylanpinn/MAT1830", "max_issues_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-03-05T13:52:05.000Z", "max_issues_repo_issues_event_max_datetime": "2018-06-03T06:46:04.000Z", "max_forks_repo_path": "assignments/assignment-08.tex", "max_forks_repo_name": "dylanpinn/MAT1830", "max_forks_repo_head_hexsha": "43c76e9502508c64f7726002613e777e2d71ac88", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-27T03:41:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T15:55:08.000Z", "avg_line_length": 30.373015873, "max_line_length": 80, "alphanum_fraction": 0.5612751502, "num_tokens": 1443, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6370307944803831, "lm_q2_score": 0.865224072151174, "lm_q1q2_score": 0.5511743780860148}}
{"text": "\\documentclass{subfile}\n\n\\begin{document}\n\t\\section{RNO}\\label{sec:rno}\n\t\n\t\t\\begin{problem}[$2019$ Team Selection Test, Day $5$, problem $1$]\n\t\t\tDetermine the largest value of the expression\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{1\\leq i<j\\leq 4}(x_{i}+x_{j})\\sqrt{x_{i}{x_{j}}}\n\t\t\t\t\\end{align*}\n\t\t\twhere $x_{1},x_{2},x_{3},x_{4}$ non-negative real numbers such that $x_{1}+x_{2}+x_{3}+x_{4}=1$. Also, find the specific values where this maximum is achieved.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2018$ Team Selection Test, Day $1$, problem $1$]\n\t\t\tLet $x_{1},\\ldots,x_{n}\\geq -1$ be real numbers such that  $\\sum_{i=1}^{n}x_{i}^{3}=0$. Find the least constant $c$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}x_{i}^{2}\n\t\t\t\t\t\t& \\leq cn\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2014$ Team Selection Test, Day $3$, problem $3$]\n\t\t\tDetermine the smallest positive constant $c$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\left(\\dfrac{1}{i}\\sum_{j=1}^{i}x_{j}\\right)^{2}\n\t\t\t\t\t\t& \\leq c\\sum_{i=1}^{n}x_{i}^{2}\n\t\t\t\t\\end{align*}\n\t\t\tfor all positive integer $n$ and all positive real numbers $x_{1},\\ldots,x_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2012$ Team Selection Test, Day $2$, problem $4$]\n\t\t\tLet $k$ be a positive integer. Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta^{3k-1}b+b^{3k-1}c+c^{3k-1}a+k^{2}a^{k}b^{k}c^{k}\n\t\t\t\t\\end{align*}\n\t\t\twhere $a,b,c$ are non-negative real numbers such that $a+b+c=3k$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$ Team Selection Test, Day $3$, problem $2$]\n\t\t\tGiven real numbers $x,y,z$ such that $x+y+z=0$. Show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{x(x+2)}{2x^{2}+1}+\\dfrac{y(y+2)}{2y^{2}+1}+\\dfrac{z(z+2)}{2z^{2}+1}\n\t\t\t\t\t\t& \\geq 0\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2011$ Team Selection Test, Day $5$, problem $2$]\n\t\t\tLet $n\\geq2$ be an integer and $x_{1},\\ldots,x_{n}$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{x_{i}+1}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tand $k>1$ be a real number. Show that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{x_{i}^{k}+1}\n\t\t\t\t\t\t& \\geq \\dfrac{n}{(n-1)^{k}+1}\n\t\t\t\t\\end{align*}\n\t\t\tand determine the case of equality.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test $1$, problem $2$]\n\t\t\tLet $n$ be a positive integer and $a_{1},\\ldots,a_{n}$ be positive real numbers. Prove that $f:[0,\\infty)\\to\\mathbb{R}$ defined by\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tf(x)\n\t\t\t\t\t\t& = \\dfrac{a_{1}+x}{a_{2}+x}+\\dfrac{a_{2}+x}{a_{3}+x}+\\ldots+\\dfrac{a_{n}+x}{a_{1}+x}\n\t\t\t\t\\end{align*}\n\t\t\tis decreasing.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection $2$, problem $1$]\n\t\t\tLet $n$ be a positive integer. Determine the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\max\\left\\{\\dfrac{x_{1}}{1+x_{1}},\\ldots,\\dfrac{x_{n}}{1+x_{1}+\\ldots+x_{n}}\\right\\}\n\t\t\t\t\\end{align*}\n\t\t\tas $x_{1},\\ldots,x_{n}$ runs through all non-negative real numbers such that $x_{1}+\\ldots+x_{n}=1$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2010$ Team Selection Test $3$, problem $1$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be positive real numbers such that $x_{1}\\cdots x_{n}=1$. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}x_{i}^{n}(1+x_{i})\n\t\t\t\t\t\t& \\geq \\dfrac{n}{2^{n-1}}\\prod_{i=1}^{n}(1+x_{i})\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2009$ Team Selection Test, Day $4$, problem $1$]\n\t\t\tLet $n\\geq 2$ be an integer and $x_{1},\\ldots,x_{n}$ be positive integers such that $x_{1}\\leq\\ldots\\leq x_{n}$ and $x_{1}+\\ldots+x_{n}=x_{1}\\cdots x_{n}$. What is the maximum possible value of $x_{1}+\\ldots+x_{n}$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Team Selection Test, Day $1$, problem $2$]\n\t\t\tLet $n\\geq2$ be an integer and $a_{1},\\ldots,a_{n};b_{1},\\ldots,b_{n}$ be positive real numbers such that $a_{i}<b_{i}$ and\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tb_{1}+\\ldots+b_{n}\n\t\t\t\t\t\t& < 1+a_{1}+\\ldots+a_{n}\n\t\t\t\t\\end{align*}\n\t\t\tProve that there exists a real number $c$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(a_{i}+c+k)(b_{i}+c+k)\n\t\t\t\t\t\t& > 0\n\t\t\t\t\\end{align*}\n\t\t\tfor $1\\leq i\\leq n$ and any integer $k$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2008$ Team Selection Test, Day $2$, problem $1$]\n\t\t\tLet $n\\geq 3$ be an odd integer. Determine the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sqrt{|x_{1}-x_{2}|}+\\ldots+\\sqrt{|x_{n}-x_{1}|}\n\t\t\t\t\\end{align*}\n\t\t\twhere $x_{1},\\ldots,x_{n}$ are real numbers in the interval $[0,1]$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Team Selection Test, Day $1$, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{n}$ be non-negative real numbers such that $a_{1}^{2}+\\ldots+a_{n}^{2}=1$. Find the maximum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(1-a_{1})\\cdots(1-a_{n})\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Team Selection Test, Day $3$, problem $2$]\n\t\t\tLet $n,p\\geq 4$ be integers and $x_{1},\\ldots,x_{n}$ be positive real numbers such that $x_{1}+\\ldots+x_{n}=n$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{x_{i}^{p}}\n\t\t\t\t\t\t& \\geq \\sum_{i=1}^{n}x_{i}^{p}\n\t\t\t\t\\end{align*}\n\t\t\tis false.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2007$ Team Selection Test, Day $7$, problem $1$]\n\t\t\tLet $n\\geq 2$ be an integer and $a_{1},\\ldots,a_{n};b_{1},\\ldots,b_{n}$ be real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}a_{i}^{2}\n\t\t\t\t\t\t& = \\sum_{i=1}^{n}b_{i}^{2}=1\\\\\n\t\t\t\t\t\\sum_{i=1}^{n}a_{i}b_{i}\n\t\t\t\t\t\t& = 0\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\left(\\sum_{i=1}^{n}a_{i}\\right)^{2}+\\left(\\sum_{i=1}^{n}b_{i}\\right)^{2}\n\t\t\t\t\t\t& \\leq n\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, Day $1$, problem $4$]\n\t\t\tLet $n$ be a positive integer and $a_{1},\\ldots,a_{n}$ be real numbers such that $|a_{i}|\\leq 1$ for $1\\leq i\\leq n$ and $a_{1}+\\ldots+a_{n}=0$. Prove that there exists a positive integer $k\\leq n$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t|1\\cdot a_{1}+\\ldots+ka_{k}|\n\t\t\t\t\t\t& \\leq \\dfrac{2k+1}{4}\n\t\t\t\t\\end{align*}\n\t\t\tAlso show that this is the best bound possible for $n>2$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2006$ Team Selection Test, Day $2$, problem $4$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be real numbers. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{1\\leq i<j\\leq n}|x_{i}+x_{j}|\n\t\t\t\t\t\t& \\geq \\dfrac{n-2}{2}\\sum_{i=1}^{n}|x_{i}|\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2005$ Team Selection Test, Day $5$, problem $2$]\n\t\t\tLet $n\\geq 2$ be an integer and $x_{1},\\ldots,x_{n}$ be positive real numbers such that $x_{1}\\cdots x_{n}=1$. Find the smallest real value $\\rho(n)$ such that for any $x_{1},\\ldots,x_{n}$,\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\dfrac{1}{x_{i}}\n\t\t\t\t\t\t& \\leq \\sum_{i=1}^{n}x_{i}^{r}\n\t\t\t\t\\end{align*}\n\t\t\tis true for all $r\\geq\\rho(n)$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2004$ Team Selection Test, Day $1$, problem $1$]\n\t\t\tLet $a_{1},\\ldots,a_{4}$ be the sides of a quadrilateral with perimeter $2s$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{4}\\dfrac{1}{a_{i}+s}\n\t\t\t\t\t\t& \\leq \\dfrac{2}{9}\\sum_{1\\leq i<j\\leq 4}\\dfrac{1}{\\sqrt{(s-a_{i})(s-a_{j})}}\n\t\t\t\t\\end{align*}\n\t\t\tWhen does equality hold?\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2002$ Team Selection Test, Day $3$, problem $2$]\n\t\t\tLet $n\\geq 4$ be an integer and $a_{1},\\ldots,a_{n}$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{1}^{2}+\\ldots+a_{n}^{2}\n\t\t\t\t\t\t& = 1\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{a_{1}}{a_{1}^{2}+1}+\\ldots+\\dfrac{a_{n}}{a_{n}^{2}+1}\n\t\t\t\t\t\t& \\geq \\dfrac{4}{5}\\left(a_{1}\\sqrt{a_{1}}+\\ldots+a_{n}\\sqrt{a_{n}}\\right)^{2}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2001$ Team Selection Test, Day $1$, problem $3$]\n\t\t\tLet $a,b,c$ be the sides of a triangle. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t(-a+b+c)(a-b+c)+(a-b+c)(a+b-c)+(a+b-c)(-a+b+c)\n\t\t\t\t\t\t& \\leq \\sqrt{abc}(\\sqrt{a}+\\sqrt{b}+\\sqrt{c})\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$2000$ Team Selection Test, Day $1$, problem $2$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be real numbers such that $|x_{k+1}-x_{k}|\\leq 1$ for $1\\leq k\\leq n-1$. Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}|x_{i}|-\\left|\\sum_{i=1}^{n}x_{i}\\right|\n\t\t\t\t\t\t& \\leq \\dfrac{n^{2}-1}{4}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1999$ Team Selection Test, Day $2$, problem $2$]\n\t\t\tLet $n$ and $x_{1},\\ldots,x_{n}$ be positive integers. 
Prove that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{1}^{2}+\\ldots+x_{n}^{2}\n\t\t\t\t\t\t& \\geq \\dfrac{2n+1}{3}(x_{1}+\\ldots+x_{n})\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1996$ Team Selection Test, Day $2$, problem $4$]\n\t\t\tIf $p_{1},\\ldots,p_{k}$ are the distinct prime divisors of $n$, define\n\t\t\t\t\\begin{align*}\n\t\t\t\t\ta_{n}\n\t\t\t\t\t\t& = \\dfrac{1}{p_1}+\\ldots+\\dfrac{1}{p_{k}}\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=2}^{n}a_{1}\\cdots a_{i}\n\t\t\t\t\t\t& < 1\n\t\t\t\t\\end{align*}\n\t\t\tfor $n\\geq 2$.\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1996$ Team Selection Test, Day $3$, problem $1$]\n\t\t\tLet $n\\geq 3$ be an integer and $x_{1},\\ldots,x_{n-1}$ be non-negative integers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{1}+\\ldots+x_{n-1}\n\t\t\t\t\t\t& = n\\\\\n\t\t\t\t\t1\\cdot x_{1}+\\ldots+(n-1)x_{n-1}\n\t\t\t\t\t\t& = 2(n-2)\n\t\t\t\t\\end{align*}\n\t\t\tFind the minimum value of\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n-1}i(2n-i)x_{i}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1996$ Team Selection Test, Day $4$, problem $1$]\n\t\t\tLet $n$ be a positive integer and $x_{1},\\ldots,x_{n}$ be positive real numbers such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\tx_{n+1}\n\t\t\t\t\t\t& = x_{1}+\\ldots+x_{n}\n\t\t\t\t\\end{align*}\n\t\t\tProve that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\sum_{i=1}^{n}\\sqrt{x_{i}(x_{n+1}-x_{i})}\n\t\t\t\t\t\t& \\leq \\sqrt{\\sum_{i=1}^{n}x_{n+1}(x_{n+1}-x_{i})}\n\t\t\t\t\\end{align*}\n\t\t\\end{problem}\n\t\n\t\t\\begin{problem}[$1993$ Team Selection Test, Day $1$, problem $1$]\n\t\t\tFind the maximum possible constant $A$ such that\n\t\t\t\t\\begin{align*}\n\t\t\t\t\t\\dfrac{x}{\\sqrt{y^{2}+z^{2}}}+\\dfrac{y}{\\sqrt{z^{2}+x^{2}}}+\\dfrac{z}{\\sqrt{x^{2}+y^{2}}}\n\t\t\t\t\t\t& \\geq A\n\t\t\t\t\\end{align*}\n\t\t\tfor all positive real numbers $x,y,z$.\n\t\t\\end{problem}\n\\end{document}", "meta": {"hexsha": "fb61768ee36fbe5bca00ddd1f03b839de51512f8", "size": 9880, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "rno.tex", "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_issues_repo_path": "rno.tex", "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "rno.tex", "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4242424242, "max_line_length": 218, "alphanum_fraction": 0.5812753036, "num_tokens": 4287, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7090191337850932, "lm_q2_score": 0.7772998714925403, "lm_q1q2_score": 0.5511204815769052}}
{"text": "\\chapter{Risk-Sensitive Policy Gradient}\n\\label{ch:risk_sensitive_policy_gradient}\n\nRisk aware decision making plays a crucial role in many fields, such as finance and process control. In this chapter we discuss some policy gradient methods for the risk-sensitive formulation of a sequential decision problem. This extension presents some difficulties which have attracted the interest of many researchers in the last years. In particular, while in the risk-neutral framework the policy gradient theorem represents the keystone for all the learning algorithms, in the risk-sensitive framework the approaches for the episodic setting, the discounted reward and the average reward formulations are quite different. The goal of this chapter is to give an overview of the methods found in the literature and to try to unify them in a way similar to the risk-neutral framework. This chapter and the following represent the main contribution of this thesis to the policy gradient literature.    \n\n\\section{Risk-Sensitive Framework}\nIn the risk-sensitive framework, in addition to maximizing the rewards, the agent also wants to control the risk necessary to achieve it. It is thus necessary to introduce a function $\\Lambda: \\Theta \\to \\R$ such that $\\Lambda(\\theta)$ measures the risk associated with the policy $\\pi_\\theta$. In an episodic framework where the system always starts from the same initial state $s_0$, the variance of the total return can be used \\cite{tamar2012policy}.\n\\begin{definition}[Start Variance]\n\tThe start variance is the variance of the return that can be obtained starting from the start state $s_0 \\in \\S$ and following policy $\\pi_\\theta$\n\t\\begin{equation}\n\t\t\\Lambda_{\\text{start}}(\\theta) = \\Var[\\pi_\\theta]{G_0\\ |\\ S_0 = s_0}\n\t\t\t= U_{\\pi_\\theta} (s_0) - V_{\\pi_\\theta} (s_0)^2 \n\t\\end{equation}\n\\end{definition}\nIn many application one may want control only the downside risk, that is the risk of the actual return being below the expected return or the uncertainty about the magnitude of that difference. This risk may be measured by the semivariance.\n\\begin{definition}[Semivariance]\n\t\\begin{equation}\n\t\t\\Lambda_\\textbf{down}(\\theta) = \\text{SVar}(\\theta) = \\E[\\pi_\\theta]{\\min\\left\\{G_0 - \\E[\\pi_\\theta]{G_0}, 0\\right\\}^2 | S_0 = s_0}\n\t\\end{equation}\n\\end{definition}\t\nThe square root of this quantity is called semideviation. 
In a continuing environment, we might consider the long-run variance \\cite{prashanth2014actor} defined in Section \\ref{sec:risk_sensitive_formulation}.\n\\begin{definition}[Long-Run Variance]\n\tThe long-run variance under policy $\\pi_\\theta$ is defined as\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\Lambda_{\\text{long-run}}(\\theta) &= \\lim_{T \\to \\infty} \\frac{1}{T} \\E[\\pi_\\theta]{\n\t\t\t\\sum^{T-1}_{t=0} \\left(R_{t+1} - \\rho(\\theta)\\right)^2}\\\\\n\t\t\\end{split}\n\t\\end{equation}\n\\end{definition}\nIn order to formalize the trade-off between reward and risk from a mathematical point of view, we borrow two standard risk-sensitive performance measures from the financial literature: the mean-variance criterion \\cite{markowitz1952portfolio} and the Sharpe ratio \\cite{sharpe1994sharpe}.\n\\begin{definition}[Mean-Variance Criterion]\n\tThe mean-variance criterion is defined as \n\t\\begin{equation}\n\t\tJ_\\text{MV}(\\theta) = J(\\theta) - \\chi \\Lambda(\\theta)\n\t\\end{equation}\n\twhere $\\chi > 0$ is a constant that controls the trade-off between reward and risk. \n\\end{definition} \n\n\\begin{definition}[Sharpe Ratio]\n\tThe Sharpe ratio is defined as \n\t\\begin{equation}\n\t\t\\Sh(\\theta) = \\frac{J(\\theta)}{\\sqrt{\\Lambda(\\theta)}}\n\t\\end{equation}\n\\end{definition} \nLet us remark that in the financial literature the ratio of the expected return and the semideviation is called the Sortino ratio. In a risk-sensitive policy gradient algorithm, we try to approximate the optimal parameters that maximize these objective functions by updating the parameters following the gradient ascent directions. In the average-reward formulation, the ascent directions are given by \n\\begin{equation}\n\t\\label{eq:gradient_mean_variance}\n\t\\nabla_\\theta J_{\\text{MV}}(\\theta) = \\nabla_\\theta \\rho(\\theta) - \\chi \\nabla_\\theta \\Lambda(\\theta)\n\\end{equation}\nfor the mean-variance criterion, where \n\\begin{equation}\n\t\\nabla_\\theta \\Lambda(\\theta) =\t\\nabla_\\theta \\eta(\\theta) - 2 \\rho(\\theta) \\nabla_\\theta \\rho(\\theta)\n\\end{equation} \nand by \n\\begin{equation} \n\t\\label{eq:gradient_sharpe_ratio}\n\t\\nabla_\\theta \\Sh(\\theta) = \\frac{\\eta(\\theta) \\nabla_\\theta \\rho(\\theta) - \\frac{1}{2} \\rho(\\theta) \\nabla_\\theta \\eta(\\theta)}{\\Lambda(\\theta) \\sqrt{\\Lambda(\\theta)}}\n\\end{equation}\nfor the Sharpe ratio. Hence, in the risk-sensitive framework it is necessary to estimate two different gradients: $\\nabla_\\theta \\rho(\\theta)$ and $\\nabla_\\theta \\eta(\\theta)$. The equivalent expressions for the episodic setting can be obtained by replacing $\\rho(\\theta)$ (resp. $\\eta(\\theta)$) with $V_\\theta(s_0)$ (resp. $U_\\theta(s_0)$). The estimation of $\\nabla_\\theta \\rho(\\theta)$ (resp. $\\nabla_\\theta V_\\theta(s_0)$) has been the focus of the last chapter. In the next sections, we discuss instead several techniques to estimate the new gradient $\\nabla_\\theta \\eta(\\theta)$ (resp. $\\nabla_\\theta U_\\theta(s_0)$).\n\n\\section{Monte Carlo Policy Gradient}\nFor an episodic environment, the REINFORCE algorithm can be easily extended to the risk-sensitive framework described above \\cite{tamar2012policy}. 
Indeed, it is sufficient to adapt its derivation for the average return to the second moment of the return\n\\begin{equation}\n\tU(\\theta) = \\E[H\\sim p_\\theta]{G(H)^2}\n\\end{equation}\nApplying the likelihood ratio technique, we obtain\n\\begin{equation}\n\t\\nabla_\\theta U(\\theta) = \\E[H\\sim p_\\theta]{\\nabla_\\theta \\log p_\\theta(H) G(H)^2}\n\\end{equation}\nSimilarly to the risk-neutral framework, we can introduce a baseline without affecting the value of the gradient\n\\begin{equation}\n\t\\nabla_\\theta U(\\theta) = \\E[H\\sim p_\\theta]{\\nabla_\\theta \\log p_\\theta(H) (G(H)^2 - b)}\n\\end{equation}\nIn an episodic environment, we can estimate the gradient via its Monte Carlo estimate\n\\begin{equation}\n\t\\label{eq:reinforce_gradient_2}\n\t\\nabla_\\theta U(\\theta) \\approx \\frac{1}{M} \\sum_{m = 1}^{M} \\nabla_\\theta \\log p_\\theta\\left(h^{(m)}\\right) \\left[G\\left(h^{(m)}\\right)^2 - b\\right]\n\\end{equation}\nwhere $h^{(m)} = \\{(s_t^{(m)}, a_t^{(m)})\\}_{t = 0}^{T^{(m)}}$ are $M$ trajectories of the MDP sampled under policy $\\pi_\\theta$. When using a single trajectory, we obtain the following stochastic gradient estimate \n\\begin{equation}\n\t\\nabla_\\theta U(\\theta) \\approx \\nabla_\\theta \\log p_\\theta\\left(h\\right) \\left[G\\left(h\\right)^2 - b\\right]\n\\end{equation}\nAgain, the baseline can be set so as to minimize the variance of the gradient estimate\n\\begin{equation}\n\t\\label{eq:optimal_baseline_2}\n\t\\widehat{b}_k^* = \\frac{\\sum^{M}_{m=1} \\left[\\partial_{\\theta_k} \\log p_\\theta\\left(h^{(m)}\\right)\\right]^2 G(h^{(m)})^2}{\\sum^{M}_{m=1} \\left[ \\partial_{\\theta_k} \\log p_\\theta\\left(h^{(m)}\\right)\\right]^2}\n\\end{equation}\nCombining this estimate with the one for the average return discussed in Section \\ref{sec:MCPG} yields the risk-sensitive Monte Carlo Policy Gradient method, which is outlined in Algorithm \\ref{algo:RSreinforce}.\\\\\nIn an episodic setting, as long as we can rewrite the objective function as an expected value over all possible trajectories of the MDP, the likelihood ratio technique yields an analytical expression for its gradient. Hence, this approach can be generalized to more complex measures of risk commonly used in finance, such as the semivariance introduced above. Indeed, the semivariance can be rewritten as \n\\begin{equation}\n\t\\text{SVar}(\\theta) = \\E[H \\sim p_\\theta]{\\min\\left\\{G(H) - \\E[H \\sim p_\\theta]{G(H)}, 0\\right\\}^2}\n\\end{equation}\nHence, by the likelihood ratio technique\n\\begin{equation}\n\t\\nabla_\\theta \\text{SVar}(\\theta) = \\E[H \\sim p_\\theta]{\\nabla_\\theta \\log p_\\theta(H) \\min\\left\\{G(H) - \\E[H \\sim p_\\theta]{G(H)}, 0\\right\\}^2}\n\\end{equation}\nNote that, strictly speaking, the inner expectation $\\E[H \\sim p_\\theta]{G(H)}$ also depends on $\\theta$, so the differentiation yields an additional term involving $\\nabla_\\theta \\E[H \\sim p_\\theta]{G(H)}$; both terms can be estimated from the same batch of sampled trajectories, from which a Monte Carlo estimate easily follows. In the episodic setting, the extension of policy gradient algorithms to the risk-sensitive formulation thus does not present particular difficulties; the problems we will consider in the next chapters, however, are not episodic. For a more thorough presentation as well as some more advanced learning algorithms, we refer the interested reader to the extensive work of Tamar et al. 
\\cite{tamar2013temporal}, \\cite{tamar2013variance}, \\cite{tamar2015policy}, \\cite{chow2015risk}.\n\\begin{algorithm}[t]\n\t\\caption{Risk-sensitive REINFORCE policy gradient estimate}\n\t\\label{algo:RSreinforce}\n\t\\begin{algorithmic}[1]\n\t\t\\Require Policy parameterization $\\theta$, number of trajectories $M$\n\t\t\\Ensure Risk-sensitive REINFORCE policy gradient estimate\n\t\t\\State Sample $M$ trajectories of the MDP following policy $\\pi_\\theta$\n\t\t\\State Compute the optimal baseline for the return via Eq. (\\ref{eq:optimal_baseline})\n\t\t\\State Compute the optimal baseline for the squared return via Eq. (\\ref{eq:optimal_baseline_2})\n\t\t\\State Compute the gradient of the expected return via Eq. (\\ref{eq:reinforce_gradient})\n\t\t\\State Compute the gradient of the expected squared return via Eq. (\\ref{eq:reinforce_gradient_2})\n\t\t\\State Compute the risk-sensitive policy gradient via either Eq. (\\ref{eq:gradient_mean_variance}) or (\\ref{eq:gradient_sharpe_ratio})\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Policy Gradient Theorem}\nIn this section the policy gradient theorem is extended to the risk-sensitive framework. In the average reward formulation of the control problem, the theorem and its derivation are analogous to those for the risk-neutral framework. This will allow us to derive in a straightforward way the risk-sensitive versions of all the learning algorithms seen in the previous chapter. The risk-sensitive policy gradient theorem for the average-reward formulation was first derived in \\cite{prashanth2014actor} and the presentation of this section closely follows the original article.\\\\\nOn the other hand, obtaining an equivalent theorem for the discounted reward formulation is more challenging. In the article cited above, the authors prove the theorem under a very strong assumption on the dependence of rewards obtained at different time steps, which is not satisfied in many applications, among which is the asset allocation problem that we will consider in the numerical applications. In the following sections, we will generalize this result by assuming that the reward can depend on both the initial state and the final state of the system. Furthermore, we will discuss the problems arising in the discounted setting and why it is not easy to derive an online policy gradient theorem in this case.\\\\\n In the original article the authors mostly considered the mean-variance criterion since all the algorithms they propose are easily adapted to the Sharpe ratio criterion. Here we take the opposite direction and present the algorithms for the Sharpe ratio, referring to their article for the mean-variance counterparts.\\\\   \nLet us consider a family of parametrized policies $\\pi_\\theta$, with $\\theta\n\\in \\Theta \\subseteq \\R^{D_\\theta}$. The optimization problem then becomes\n\\begin{equation}\n\t\\max_\\theta \\text{Sh}(\\theta) = \\frac{\\rho(\\theta)}{\\sqrt{\\Lambda(\\theta)}}\n\\end{equation}\nwhere we denote $\\rho(\\theta) = \\rho_{\\pi_\\theta}$ and similarly for the other quantities. 
Using a policy gradient approach, the policy parameters are updated following the gradient ascent direction.\n\n\\subsection{Average Reward Formulation}\nIn the average reward formulation the gradient of the Sharpe ratio is\n\\begin{equation}\n\t\\nabla_\\theta \\text{Sh}(\\theta) = \\frac{\\eta(\\theta) \\nabla_\\theta \\rho(\\theta) - \\frac{1}{2} \\rho(\\theta) \\nabla_\\theta \\eta(\\theta)}{\\Lambda(\\theta) \\sqrt{\\Lambda(\\theta)}}\n\\end{equation}\nHence, to compute the update direction, it is sufficient to estimate the various quantities appearing in this formula. For instance, the average reward $\\rho(\\theta)$, the average square reward $\\eta(\\theta)$ and the reward variance $\\Lambda(\\theta)$ can be approximated using exponentially weighted moving averages. On the other hand, the gradient of the average reward $\\nabla_\\theta \\rho(\\theta)$ is given by the standard policy gradient theorem, Theorem \\ref{thm:risk_neutral_policy_gradient}. The only term remaining is the gradient of the average square reward $\\nabla_\\theta \\eta(\\theta)$, which is provided by the risk-sensitive policy gradient theorem \n\\begin{theorem}[Risk-Sensitive Policy Gradient]\n\tLet $\\pi_\\theta$ be a differentiable policy. The policy gradient for the average square reward is given by\n\t\\begin{equation}\\label{eq:policy_gradient_theorem_W}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta \\eta(\\theta) &= \\E[\\substack{S\\sim d^\\theta \\\\ \n\t\t\tA \\sim \\pi_\\theta}]{\\nabla_\\theta \\log \\pi_\\theta(S,A) W_\\theta(S,A)}\n\t\t\\end{split}\n\t\\end{equation}\n\twhere $d^\\theta$ is the stationary distribution of the Markov chain induced by $\\pi_\\theta$. \n\\end{theorem}\n\\begin{proof}\n\tThe proof is analogous to that of the risk-neutral version of the theorem. From the basic relation between state-value function and action-value function, we have\n\t\t\\begin{equation*}\n\t\t\t\\begin{split}\n\t\t\t\t\\nabla_\\theta U_\\theta(s) &= \\nabla_\\theta \\int_{\\A} \\pi_\\theta(s,a) W_\\theta(s,a) da\\\\\n\t\t\t\t\t&= \\int_{\\A} \\left[ \\nabla_\\theta \\pi_\\theta(s,a) W_\\theta(s,a) + \\pi_\\theta(s,a) \\nabla_\\theta W_\\theta(s,a)\\right] da\n\t\t\t\\end{split}\n\t\t\\end{equation*} \n\t\tUsing the Bellman expectation equation for $W_\\theta$ \n\t\t\\begin{equation*}\n\t\t\t\\begin{split}\n\t\t\t\t\\nabla_\\theta W_\\theta(s,a) &= \\nabla_\\theta \\left[ \\calM(s,a) - \\eta_\\theta + \\int_{\\S} \\calP(s,a,s') U_\\theta(s') ds' \\right]\\\\\n\t\t\t\t&= -\\nabla_\\theta \\eta_\\theta + \\int_{\\S} \\calP(s,a,s') \\nabla_\\theta U_\\theta(s') ds'\n\t\t\t\\end{split}\n\t\t\\end{equation*}\n\t\tHence, plugging this into the first equation, \n\t\t\\begin{equation*}\n\t\t\t\t\\nabla_\\theta U_\\theta(s) = \\int_{\\A} \\nabla_\\theta \\pi_\\theta(s,a) W_\\theta(s,a) da - \\nabla_\\theta \\eta_\\theta + \\int_\\A \\pi_\\theta(s,a) \\int_{\\S} \\calP(s,a,s') \\nabla_\\theta U_\\theta(s') ds' \n\t\t\\end{equation*} \t\n\t\tIntegrating both sides with respect to the stationary distribution $d^\\theta$ and noting that, because of stationarity,  \n\t\t\\begin{equation*}\n\t\t\t\\int_{\\S} d^\\theta(s) \\int_{\\A} \\pi(s,a) \\int_{\\S} \\calP(s,a,s') \\nabla_\\theta U(s') ds' da ds = \\int_{\\S} d^\\theta(s) \\nabla_\\theta U_\\theta(s) ds\n\t\t\\end{equation*}\n\t\twe obtain the result \n\t\t\\begin{equation*}\n\t\t\t\\begin{split}\n\t\t\t\\nabla_\\theta \\eta_\\theta &= \\int_{\\S} d^\\theta(s) \\int_{\\A} \\nabla_\\theta \\pi_\\theta(s,a) W_\\theta(s,a) da ds\\\\\n\t\t\t&= \\int_{\\S} d^\\theta(s) \\int_{\\A} 
\\pi_\\theta(s,a) \\nabla_\\theta \\log\\pi_\\theta(s,a) W_\\theta(s,a) da ds\\\\\n\t\t\t&= \\E[\\substack{S \\sim d^\\theta\\\\A \\sim \\pi_\\theta}]{\\nabla_\\theta\\log\n\t\t\t\t\t\\pi_\\theta(S,A) W_{\\theta}(S, A)} \n\t\t\t\\end{split}\n\t\t\\end{equation*}\n\\end{proof}\nThe result is identical to that for the average reward, provided that we replace the action-value function $Q_\\theta(s,a)$ by the square action-value function $W_\\theta(s,a)$. As in the standard risk-neutral case, a state-dependent baseline can be introduced in the gradient without changing the result. In particular, by \nusing the average-adjusted square value function as baseline, we can replace the \naverage adjusted action-value functions with the following square advantage function\n\\begin{equation}\n\tB_\\theta(s,a) = W_\\theta(s,a) - U_\\theta(s)\n\\end{equation}\nThus, the gradients can be written as \n\\begin{equation}\\label{eq:pg_advantage_W}\n\t\\nabla_\\theta \\eta(\\theta) = \\E[\\substack{S\\sim d_\\pi \\\\ \nA \\sim \\pi}]{\\nabla_\\theta \\log \\pi_\\theta(S,A) B_\\theta(S,A)}\n\\end{equation}\nFrom this result, following the exact same reasoning used in the risk-neutral framework, we can design a variety of risk-sensitive actor-critic algorithms, which\nemploy an approximation of the standard and of the square advantage functions to obtain a more accurate estimate of the objective function. \n\n\\subsection{Risk-Sensitive Actor-Critic Algorithm}\nIn \\cite{prashanth2014actor}, starting from Eqs. (\\ref{eq:actor_critic_pg}) and \n(\\ref{eq:pg_advantage_W}), the authors propose a TD(0) risk-sensitive actor-critic \nalgorithm for the average reward setting. The algorithm maintains two critics that estimate the average adjusted value-functions $V_\\theta(S)$ and $U_\\theta(S)$ respectively and are updated via a TD(0) temporal difference scheme. Let $\\delta_n^A$ and $\\delta_n^B$ be the TD errors for residual value\nand square value functions\n\\begin{equation}\n\t\\begin{split}\n\t\t\\delta_t^A &= R_{t+1} - \\widehat{\\rho}_{t+1} + \\widehat{V}(S_{t+1}) -\n\t\t\\widehat{V}(S_t)\\\\\n\t\t\\delta_t^B &= R_{t+1}^2 - \\widehat{\\eta}_{t+1} + \\widehat{U}(S_{t+1}) -\n\t\t\\widehat{U}(S_t)\\\\\n\t\\end{split}\n\\end{equation}\nwhere $\\widehat{V}$, $\\widehat{U}$, $\\widehat{\\rho}$ and $\\widehat{\\eta}$ are\nunbiased estimate of $V_\\theta$, $U_\\theta$, $\\rho(\\theta)$ and $\\eta(\\theta)$\nrespectively. 
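For a single observed transition, these two errors are immediate to compute. The following minimal Python sketch is purely illustrative: \\texttt{V\\_hat} and \\texttt{U\\_hat} stand for any callable estimators of the two value functions, and \\texttt{rho\\_hat}, \\texttt{eta\\_hat} for the current running averages; none of these names come from an actual implementation.\n\\begin{verbatim}\ndef td_errors(r, s, s_next, V_hat, U_hat, rho_hat, eta_hat):\n    # TD error of the average adjusted value function critic\n    delta_A = r - rho_hat + V_hat(s_next) - V_hat(s)\n    # TD error of the average adjusted square value function critic\n    delta_B = r ** 2 - eta_hat + U_hat(s_next) - U_hat(s)\n    return delta_A, delta_B\n\\end{verbatim}\n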
It is easy to show that $\\delta_t^A$ and $\\delta_t^B$ are\nunbiased estimates of the advantage functions.\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\E[\\theta]{\\delta_t^A|S_t = s, A_t = a} &= A_\\theta(s, a)\\\\\n\t\t\\E[\\theta]{\\delta_t^B|S_t = s, A_t = a} &= B_\\theta(s, a)\\\\\n\t\\end{split}\n\\end{equation*}\nAn estimate of the gradients can be obtained by replacing the advantage functions with the TD errors\n\\begin{equation}\n\t\\begin{split}\n\t\t\\nabla_\\theta \\rho(\\theta) &\\approx \\nabla_\\theta \\log \\pi_\\theta(S_t, A_t) \\delta_t^A\\\\\n\t\t\\nabla_\\theta \\eta(\\theta) &\\approx \\nabla_\\theta \\log \\pi_\\theta(S_t, A_t) \\delta_t^B\\\\\n\t\\end{split}\n\\end{equation}\nThe value functions are linearly approximated using some feature vectors\n$\\Phi_V: \\S \\to \\R^{D_V}$ and $\\Phi_U: \\S \\to \\R^{D_U}$ as follows\n\\begin{equation}\n\t\\begin{split}\n\t\t\\widehat{V}(s) &= \\psi_V^T \\Phi_V(s)\\\\\n\t\t\\widehat{U}(s) &= \\psi_U^T \\Phi_U(s)\\\\\n\t\\end{split}\n\\end{equation}\nCombining all these ingredients leads to the \\gls{RSARAC}, the pseudocode of which is reported in Algorithm \\ref{algo:RSARAC}.\nLet us notice that the algorithm is a three time-scale stochastic approximation algorithm, where, in addition to the usual Robbins-Monro conditions, the learning rates should satisfy\n\\begin{equation*}\n\t\\alpha_k = o(\\beta_k), \\qquad \\beta_k = o(\\gamma_k)\n\\end{equation*}\nso that the actor evolves on the slowest time scale and the running averages on the fastest.\n\n\\begin{algorithm}[t!]\n\t\\caption{Risk-Sensitive Average Reward Actor-Critic algorithm}\n\t\\label{algo:RSARAC}\n\t\\begin{algorithmic}[]\n\t\t\\Require {\\\\ \n\t\t\t\\begin{itemize} \n\t\t\t\t\\item Initial actor parameters $\\theta^0$\n\t\t\t\t\\item Initial critics parameters $\\psi_V^0$ and $\\psi_U^0$\n\t\t\t\t\\item Actor learning rate $\\{\\alpha_k\\}$\n\t\t\t\t\\item Critics learning rate $\\{\\beta_k\\}$\n\t\t\t\t\\item Averages learning rate $\\{\\gamma_k\\}$\n\t\t\t\\end{itemize}}\n\t\t\\Ensure Approximation of the optimal policy $\\pi_{\\theta^*} \\approx \\pi_*$\n\t\t\\begin{algorithmic}[1]\n\t\t\\Repeat\n\t\t\t\\State Observe tuple $\\langle s_k, a_k, r_{k+1}, s_{k+1}\\rangle$ sampled from the MDP.\n\t\t\t\\State Update averages \n\t\t\t\t\\begin{equation*}\n\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\\widehat{\\rho}_{k+1} &= (1 - \\gamma_k) \\widehat{\\rho}_k + \t\\gamma_k r_{k+1}\\\\\n\t\t\t\t\t\\widehat{\\eta}_{k+1} &= (1 - \\gamma_k) \\widehat{\\eta}_k + \t\\gamma_k r_{k+1}^2\\\\\n\t\t\t\t\t\\end{split}\n\t\t\t\t\\end{equation*}\n\t\t\t\\State Compute TD errors\t\n\t\t\t\t\\begin{equation*}\n\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\\delta_k^A &= r_{k+1} - \\widehat{\\rho}_{k+1} + (\\psi_V^k)^T \\Phi_V(s_{k+1}) - (\\psi_V^k)^T \\Phi_V(s_k)\\\\\n\t\t\t\t\t\t\\delta_k^B &= r_{k+1}^2 - \\widehat{\\eta}_{k+1} + (\\psi_U^k)^T \\Phi_U(s_{k+1}) - (\\psi_U^k)^T \\Phi_U(s_k)\\\\\n\t\t\t\t\t\\end{split}\n\t\t\t\t\\end{equation*}\n\t\t\t\\State Update critic parameters \n\t\t\t\t\\begin{equation*}\n\t\t\t\t\t\\begin{split}\n\t\t\t\t\t\t\\psi_V^{k+1} &= \\psi_V^k + \\beta_k \\delta_k^A \\Phi_V(s_k)\\\\\n\t\t\t\t\t\t\\psi_U^{k+1} &= \\psi_U^k + \\beta_k \\delta_k^B \\Phi_U(s_k)\\\\\n\t\t\t\t\t\\end{split}\n\t\t\t\t\\end{equation*}\n\t\t\t\\State Update actor parameters $\\theta^{k+1} = \\theta^k + \\alpha_k \\widehat{\\nabla_\\theta \\text{Sh}}(\\theta^k)$. 
\n\t\t\t\\State $k \\leftarrow k + 1$\n\t\t\\Until{converged}\n\t\t\\end{algorithmic}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Discounted Reward Formulation}\nIn the discounted reward formulation, we need to adapt the definition of the Sharpe ratio associated with a policy. Let us suppose that the system always starts in the same initial state $s_0$; we can then introduce the start-state Sharpe ratio that can be achieved following policy $\\pi_\\theta$ as\n\\begin{equation}\n\t\\text{Sh}(\\theta) = \\frac{V_\\theta(s_0)}{\\sqrt{\\Lambda_\\theta(s_0)}}\n\\end{equation}\nwhere $V_\\theta$ and $\\Lambda_\\theta$ are the state-value function and the variance-function introduced in Chapter \\ref{ch:discrete_time_stochastic_optimal_control}. \nHence, the gradient of the Sharpe ratio is given by\n\\begin{equation}\n\t\\nabla_\\theta \\text{Sh}(\\theta) = \\frac{U_\\theta(s_0) \\nabla_\\theta V_\\theta(s_0) - \\frac{1}{2} V_\\theta(s_0) \\nabla_\\theta U_\\theta(s_0)}{\\Lambda_\\theta(s_0) \\sqrt{\\Lambda_\\theta(s_0)}}\n\\end{equation}\nThe gradient of the state-value function $\\nabla_\\theta V_\\theta(s_0)$ is given by the risk-neutral policy gradient theorem. Therefore, in order to approximate the gradient of the Sharpe ratio, we need to estimate the value functions $V_\\theta(s_0)$, $U_\\theta(s_0)$, the variance $\\Lambda_\\theta(s_0)$, and the gradient of the square state-value function $\\nabla_\\theta U_\\theta(s_0)$. The first three terms can be easily approximated using moving averages. On the other hand, estimating $\\nabla_\\theta U_\\theta(s_0)$ in the same spirit as the policy gradient theorem is much more delicate. \n\\begin{theorem}[Risk-Sensitive Policy Gradient]\n\tLet $\\pi_\\theta$ be a differentiable policy. The policy gradient for the square state-value function is given by\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta U_\\theta(s_0) =\n\t\t\t\\mathbb{E}_{\\substack{S \\sim d_{\\gamma^2}^\\theta(s_0, \\cdot)\\\\A \\sim \\pi_\\theta(S,\\cdot)\\\\S' \\sim \\calP(S,A,\\cdot)}} [\\nabla_\\theta &\\log \\pi_\\theta(S,A) W_{\\theta}(S, A)\\\\\n\t\t\t&+ 2 \\gamma \\calR(S,A) \\nabla_\\theta V_\\theta(S') + 2 \\gamma \\nabla_\\theta C_\\theta(S, A) ]\n\t\t\\end{split}\n\t\\end{equation}\n\twhere $d_{\\gamma^2}^\\theta(s_0, \\cdot)$ is the $\\gamma^2$-discounted visiting distribution over states starting from the initial state $s_0$ and following policy $\\pi_\\theta$\n\t\t\\begin{equation}\n\t\t\td_{\\gamma^2}^\\theta(s_0, x) = \\sum_{k=0}^{\\infty} \\gamma^{2k} \\calP_\\theta^{(k)}(s_0, x)\n\t\t\\end{equation}\n\\end{theorem}\n\\begin{proof}\n\tFrom the basic relation between square state-value function and the square action-value function, we have\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta U_\\theta(s) &= \\nabla_\\theta \\int_{\\A} \\pi_\\theta(s,a) W_\\theta(s,a) da\\\\\n\t\t\t\t&= \\int_{\\A} \\left[ \\nabla_\\theta \\pi_\\theta(s,a) W_\\theta(s,a) + \\pi_\\theta(s,a) \\nabla_\\theta W_\\theta(s,a)\\right] da\n\t\t\\end{split}\n\t\\end{equation*} \n\tHence, using the Bellman expectation equation for $W_\\theta$ \n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta W_\\theta(s,a) &= \\nabla_\\theta \\left[ \\calM(s,a) + 2 \\gamma \\calR(s,a) T_a V_\\theta(s) + 2 \\gamma C_\\theta(s, a) + \\gamma^2 T_a U_\\theta(s)\\right]\\\\ \n\t\t\t&= 2 \\gamma \\calR(s,a) \\nabla_\\theta T_a V_\\theta(s) + 2 \\gamma \\nabla_\\theta C_\\theta(s, a) + \\gamma^2 \\nabla_\\theta T_a U_\\theta(s)\\\\\n\t\t\t&= \\int_{\\S} \\calP(s,a,s') \\left[2 \\gamma \\calR(s,a) \\nabla_\\theta V_\\theta(s') + 2 \\gamma \\nabla_\\theta C_\\theta(s, a) + \\gamma^2 \\nabla_\\theta U_\\theta(s')\\right] ds'\n\t\t\\end{split}\n\t\\end{equation*}\n\twhere we assumed that the gradient and the integral can be exchanged. Plugging in the first equation and exploiting the fact that $\\int_\\S \\calP(s,a,s') ds' = 1$, we obtain\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta U_\\theta(s) = \\int_{\\A} \\pi_\\theta(s,a) &\\int_{\\S} \\calP(s,a,s') [ \\nabla_\\theta \\log \\pi_\\theta(s,a) W_\\theta(s,a)\\\\ \n\t\t\t&+ 2 \\gamma \\calR(s,a) \\nabla_\\theta V_\\theta(s') + 2 \\gamma \\nabla_\\theta C_\\theta(s, a) + \\gamma^2 \\nabla_\\theta U_\\theta(s') ] ds' da\n\t\t\\end{split}\n\t\\end{equation*} \n\tUnrolling $\\nabla_\\theta U_\\theta$ infinitely many times and denoting by $\\calP_\\theta^{(k)}(s, x)$ the probability of going from state $s$ to state $x$ in $k$ steps under policy $\\pi_\\theta$, we obtain\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta U_\\theta(s) = \\sum_{k=0}^\\infty \\gamma^{2k} \\int_\\S \\calP_\\theta^{(k)}(s, x) &\\int_\\A \\pi_\\theta(x,a) \\int_{\\S} \\calP(x,a,x') [\\nabla_\\theta \\log \\pi_\\theta(x,a) W_\\theta(x,a)\\\\ \n\t\t\t\t\t\t&+ 2 \\gamma \\calR(x,a) \\nabla_\\theta V_\\theta(x') + 2 \\gamma \\nabla_\\theta C_\\theta(x, a)]  dx' da dx\n\t\t\\end{split}\n\t\\end{equation*} \n\tDefining the $\\gamma^2$-discounted visiting distribution of state $x$ starting from state $s$ as\n\t\\begin{equation*}\n\t\td_{\\gamma^2}^\\theta(s, x) = \\sum_{k=0}^{\\infty} \\gamma^{2k} \\calP_\\theta^{(k)}(s, x)\n\t\\end{equation*}\n\twe have the result\n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\nabla_\\theta U_\\theta(s) = \\int_\\S d_{\\gamma^2}^\\theta(s, x) &\\int_\\A \\pi_\\theta(x,a) \\int_{\\S} \\calP(x,a,x') [\\nabla_\\theta \\log \\pi_\\theta(x,a) W_\\theta(x,a)\\\\ \n\t\t\t\t\t\t\t&+ 2 \\gamma \\calR(x,a) \\nabla_\\theta V_\\theta(x') + 2 \\gamma \\nabla_\\theta C_\\theta(x, a)]  dx' da dx\n\t\t\\end{split}\n\t\\end{equation*} \n\\end{proof}\nCompared to the risk-sensitive policy gradient theorem in the average reward formulation, we have two additional terms: one term depending on the gradient of the state-value function at the next state and another term depending on the covariance between the one-step reward and the successive return. These terms are very difficult to approximate online in a continuing environment. Therefore, the result is of practical interest only for an episodic environment where the experiments have a finite (possibly random) lifespan. Since the application we considered does not fall into this category, we will not discuss any risk-sensitive algorithm for the discounted formulation. 
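In the episodic setting, however, the Monte Carlo estimate of Algorithm \\ref{algo:RSreinforce} remains applicable. As a concrete illustration, the following minimal Python sketch computes a REINFORCE-style estimate of the Sharpe-ratio gradient from sampled trajectories. We stress that this is only a sketch: \\texttt{sample\\_trajectory} is a hypothetical helper returning, for one simulated trajectory under the current policy, the return $G$ and the accumulated score $\\sum_t \\nabla_\\theta \\log \\pi_\\theta(s_t, a_t)$, and the optimal baselines are omitted for brevity.\n\\begin{verbatim}\nimport numpy as np\n\ndef sharpe_gradient_estimate(sample_trajectory, M):\n    # Sample M trajectories under the current policy\n    samples = [sample_trajectory() for _ in range(M)]\n    G = np.array([g for g, _ in samples])        # returns, shape (M,)\n    score = np.array([s for _, s in samples])    # shape (M, D_theta)\n    rho = G.mean()                               # expected return\n    eta = (G ** 2).mean()                        # expected squared return\n    var = eta - rho ** 2                         # variance of the return\n    # Likelihood-ratio estimates of the two gradients\n    grad_rho = (score * G[:, None]).mean(axis=0)\n    grad_eta = (score * (G ** 2)[:, None]).mean(axis=0)\n    # Gradient of Sh = rho / sqrt(var), by the quotient rule\n    return (eta * grad_rho - 0.5 * rho * grad_eta) / var ** 1.5\n\\end{verbatim}\n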
\n", "meta": {"hexsha": "b99f428d55b3235042b1edb5d9b5410514055546", "size": 25475, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Report/Chapters/5_Risk_Sensitive_Policy_Gradient.tex", "max_stars_repo_name": "AmineAboussalah/Thesis", "max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 80, "max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z", "max_issues_repo_path": "Report/Chapters/5_Risk_Sensitive_Policy_Gradient.tex", "max_issues_repo_name": "pnecchi/Thesis", "max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Report/Chapters/5_Risk_Sensitive_Policy_Gradient.tex", "max_forks_repo_name": "pnecchi/Thesis", "max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z", "avg_line_length": 73.6271676301, "max_line_length": 905, "alphanum_fraction": 0.7246712463, "num_tokens": 7903, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5511204790406175}}
{"text": "\\subsubsection{Merging node-trees}\n\n%In lines \\ref{algo:joina}-\\ref{algo:joinb}, the pseudocode for merging node-trees only has a cost proportional to the number of cluster mergers, which is defined by the complexity of the Union-Find algorithm. \nMerging node-trees is more complex than merging cluster-trees. The additional cost comes from the depth-first-searches (DFS's) related to the (re)calculation of the node parities and node delays in line \\ref{algo:pdc}. \\todo[inline]{argue that we only care about mergers of an odd number of odd clusters}. \n\n\\todo[inline]{define the meaning of step, link to discussion on max odd root, clarify if there is a constant missing}\nConsider merging $k$ trees $\\{\\mathcal N_i\\}_{i=1}^k$ with $k$ odd and let $\\{m_i\\}_{i=1}^k$ represent the number of nodes in each tree. Then the number of steps required to merge the $k$ trees is $\\sum_{i=1}^km_i-\\max_{i\\in [1,k]}{m_i}$. From this observation, we define by $C(N)$ the total number of steps in the merge required \n\nwhich can be expressed recursively by:\n\n\\begin{equation}\nC(\\mathcal N) = \\sum_{i=1}^k C(\\mathcal N_i) + \\sum_{i=1}^km_i-\\max_{i\\in [1,k]}{m_i}\n\\end{equation} \n\nIn the following we show that if we let $n=\\sum_{i=1}^km_i$ be the total number of nodes in the merged node tree, $C(\\mathcal N)<n\\log n$. \n\\todo[inline]{Present induction base}\nConsider that the induction hypothesis holds up to some odd number $n-2$, then the following inequality holds:\n\\begin{equation}\nC(\\mathcal N) \\leq \\sum_{i=1}^k m_i\\log m_i + \\sum_{i=1}^km_i-\\max_{i\\in [1,k]}{m_i}\n\\end{equation} \n\n\\todo[inline]{use Jensen's inequality to forget about all except for two values}\nLet $m\\leq n/2$.\n\\begin{equation}\nC(\\mathcal N) \\leq f(m)=(n-m)\\log (n-m) + m\\log m+ m\n\\end{equation} \nConsider the first two derivatives of $f(m)$ with respect to $m$:\n\\begin{equation}\n    f'(m) = -\\log (n-m)+\\log m + 1\n\\end{equation}\n\\begin{equation}\n    f''(m) = (1/(n-m)) + 1/m \n\\end{equation}\nThe function is continuous for $0<m<n$ and the second derivative is strictly positive, which means that $f$ is convex and takes the maximum at a extreme value.\n\n%Line \\ref{line:grow} corresponds with  the DFS's on the node-tree results, it has an addional cost which is investigated in this section. \n\n\n%, and (B) the DFS related to the growth of a cluster in line \\ref{algo:grow}. We dub the two parts the \\textbf{delay cost} and the \\textbf{growth cost}, respectively. The $\\Nodejoin$ operation in lines \\ref{algo:joina}-\\ref{algo:joinb} only has a linear addition to the cost.\n% The cost of each step the DFS (\\eqref{}, \\eqref{} and \\eqref{}) is independent of the tree size.\n%The cost of the DFS's is proportional to the total number of nodes encountered in the DFS's. In the following we bound this number, which we denote by $N_\\delta$. \n% Even node-trees that join with multiple node-trees within the same growth iteration and the final set of even node-trees whose clusters are peeled thus do not count towards $N_\\delta$. \n\n%Let an odd node-tree be denoted by $\\oset$, an even node-tree by $\\eset$, and an even node-tree that counts towards the delay cost by $\\bm{\\eset}$. 
To find $N_\\delta$, we analyze cluster growth by a time-reversed approach; starting from a single cluster at the end of growth, and move back in time to find its \\emph{node-tree decomposition}.\n\n\n%\\begin{definition}\\label{def:fragmentation}\n  \n%  Let the \\textbf{fragmentation} $\\frag_\\oset$ of some odd node-tree $\\pr{k}\\oset$ return its two most recently joined predecessor node-trees, which are $\\pr{k-1}\\oset$ and $\\pr{k-1}\\eset$\n%  \\begin{equation}\\label{eq:pfo}\n%    \\frag_\\oset(\\{\\pr{k}\\oset\\}) = \\{\\pr{k-1}\\oset, \\pr{k-1}\\eset \\},\n%  \\end{equation}\n%  where the prefix $k$ indicate the \\textbf{generation} to which the node-tree belongs. Let $\\frag_\\eset$ fragment an even node-tree $\\pr{k}\\eset$ to some even number $\\nu$ of its predecessor odd node-trees  \n%  \\begin{equation}\\label{eq:pfe}\n%    \\frag_\\eset(\\{\\pr{k}\\eset\\}) =\\{\\pr{k}\\oset_1,...,\\pr{k}\\oset_{\\nu} \\},\n%  \\end{equation}\n%  which have joined to $\\pr{k}\\eset$ within the same growth iteration. As an even node-tree does not grow, it belongs to the same generation as its odd predecessors. Since $\\frag_\\oset$ and $\\frag_\\eset$ operate on node-trees of different parties, both can be combined as a \\textbf{fragmentation step} $\\frag$, which takes an odd node-tree $\\pr{k}\\oset$ and returns its $\\nu+1$ predecessors\n%  \\begin{multline}\\label{eq:fstep}\n%    \\frag(\\{\\pr{k}\\oset\\}) = \\frag_\\eset(\\frag_\\oset(\\{\\pr{k}\\oset\\})) =  \\\\ \\{\\pr{k-1}\\oset_0,\\pr{k-1}\\oset_1,...,\\pr{k-1}\\oset_\\nu\\}, \n%  \\end{multline}\n%  where $\\pr{k-1}\\oset$ from \\Cref{eq:pfo} is now labelled as $\\pr{k-1}\\oset_0$.\n%\\end{definition}\n\n%Note that node-trees of the same \\emph{generation} might not have been constructed in the same growth \\emph{iteration}. An odd cluster grow for many iterations, while its node-tree does not change, before it merges with another node-tree of the same generation. We use the notation $\\frag^{(2)}(\\pr{k}\\oset)$ to indicate that two fragmentation steps are applied on $\\pr{k}\\oset$ to obtain its predecessors of generation $k-2$. Furthermore, let $\\frag_\\oset^{(i)}(\\nset)$ and be equivalent to $\\frag_\\oset(\\frag^{(i-1)}(\\nset))$. \n\n%\\Figure[hbt](topskip=0pt, botskip=0pt, midskip=0pt){figures/tikz/build/main-figure7.pdf}{\n%  The \\emph{fragmentation} of node-tree $\\pr{k+1}\\oset$ into its predecessor node-trees, where the prefix $k+1$ indicates its \\emph{generation}. The fragmentation step $\\frag(\\pr{k+1}\\oset)$ returns $\\nu+1$ odd predecessors of generation $k$. Fragmentation step $\\frag$ can be separated into $\\frag_\\oset(\\pr{k+1}\\oset)$ that returns an odd predecessor $\\pr{k}\\oset_0$ and even predecessor $\\pr{k}\\eset$, and subsequently the $\\frag_\\eset(\\pr{k}\\eset)$, which returns $\\nu$ odd predecessors of generation $k$. The figure depicts a fragmentation step with $\\nu=2$.\\label{fig6}}\n\n%\\begin{definition}\\label{def:decomposition}\n%  Let the \\textbf{decomposition} of a node-tree $\\nset$ be a series of $\\mu$ fragmentation steps on $\\nset$, such that the output of $\\frag^{(\\mu)}$ is a set of node-trees that have no predecessors. \n%\\end{definition}\n\n%Using node-tree decomposition, we aim to find the fragmentations that maximizes the number and size of $\\bm{\\eset}$ node-trees. 
The definitions of $\\frag_\\oset$ and $\\frag_\\eset$ matches the rules of \\emph{Odd-Rooted Join} and \\emph{Root List Replacement} such that each of the even node-trees in the output of $\\frag_\\oset$ is a $\\bm{\\eset}$ node-tree. The maximum value for $N_\\delta$ can thus be computed using the decomposition of a maximally large odd node-tree $\\pr{\\mu}\\oset$. \n%\\begin{equation}\\label{eq:npdc}\n%  N_\\delta = 2\\sum_{k=1}^\\mu{ \\sum_{ \\pr{\\mu-k}\\eset \\in \\frag_\\oset^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\eset}} }.\n%\\end{equation}\n%The inner sum accounts for the combined size of all even node-trees during a single fragmentation step $\\frag_\\oset^{(k)}$. The outer sum loops over all $\\mu$ fragmentation steps. Finally, the $2$ refers to the individual DFS's of the parity and delay calculations. Note that in the decomposition of $\\pr{\\mu}\\oset$, the lowest generation of node-trees outputted by $\\frag^{(\\mu)}(\\pr{\\mu}\\oset)$ is $1$. \n\n%We will use two simplifications to \\Cref{eq:npdc} to find $N_\\delta$. Junction-nodes are initiated on the tangent of two node radii belonging to separate node-trees when merging into one. With increasing number of fragmentations, the total number of nodes in the fragmented set must therefore decrease. We thus change \\Cref{eq:npdc} into the inequality\n%\\begin{equation}\\label{eq:npdc2}\n%  N_\\delta \\leq 2\\sum_{k=1}^\\mu{ \\sum_{ \\pr{\\mu-k}\\eset \\in \\frag_\\oset^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\eset}}}. \n%\\end{equation}\n\n%The existence of junction-nodes will be ignored in the remainder of this section, which does not compromise the inequality. If only syndrome-nodes exist, $\\pr{k}\\oset$'s size must equal the sum of sizes of its predecessors in $\\frag(\\pr{k}\\oset)$. The size of each of the $\\nu+1$ predecessors $\\pr{k}\\oset_i$ can be represented by the \\textbf{fragmentation ratio} $\\lambda$\n%\\begin{equation}\\label{eq:ratio}\n%  \\pr{k}\\lambda_i = \\frac{\\abs{\\pr{k-1}\\oset_i}}{\\abs{\\pr{k}\\oset}}, \\hspace{0.5cm} \\sum_{i=0}^{\\nu}{\\pr{k}\\lambda_i} = 1.\n%\\end{equation}\n%Secondly, we assume that vertex-trees do not increase in size, such that $\\abs{\\nset}=\\abs{\\vset}$. Normally, the number of nodes in a cluster is bounded by the number of vertices $\\abs{\\nset}\\leq \\abs{\\vset}$, as non-trivial vertices can be added to the node, which increases the node radius. By this assumption, the vertex-tree can only increase in size due to a merger between clusters, and nodes are effectively not allowed to increase in radius. While this is not possible during realistic cluster growth, this assumption allows us to further simplify \\Cref{eq:npdc2}. \n\n%To find the upper bound in $N_\\delta$, we are now tasked to find: (a) $\\nu$, the number of predecessors in $\\frag_\\eset$, (b) the fragmentation ratios $\\{\\lambda_0, ..., \\lambda_\\nu\\}$, (c) the number of fragmentation generations $\\mu$, and (d) the size of $\\pr{\\mu}\\oset$. 
\n\n%\\begin{lemma}\\label{lem:evenconstant}\n%  For constant fragmentation ratios $\\pr{k}\\lambda_i = \\lambda_i$ during all generations $1\\leq k<\\mu$, the inner sum of \\Cref{eq:npdc} is constant:\n%  \\begin{equation*}\n%    \\sum_{ \\pr{\\mu-k}\\eset \\in \\frag_\\oset^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\eset}} = C \\hspace{1em}\\forall k.\n%  \\end{equation*}\n%\\end{lemma}\n%\\begin{proof}\n%  For $k=1$, there is an even-parity predecessor $\\pr{\\mu-1}\\eset$ of size \n%  \\begin{equation*}\n%    \\abs{\\pr{\\mu-1}\\eset} = (1 - \\lambda_0)\\abs{\\pr{\\mu}\\oset}.\n%  \\end{equation*}\n%  For $k=2$, every odd node-tree in $\\{\\pr{\\mu-1}\\oset_0,... ,\\pr{\\mu-1}\\oset_\\nu\\}$ is fragmented by $\\frag_\\oset$ to an even predecessor $\\pr{\\mu-2}\\eset_i$ for $i \\in \\{0,...,\\nu \\}$, such that \n%  \\begin{equation*}\n%    \\sum_{i=0}^{\\nu}{\\abs{\\pr{\\mu-2}\\eset_i}}  = \\sum_{i=0}^{\\nu}{\\lambda_i(1 - \\lambda_0)\\abs{\\pr{\\mu}\\oset}}= (1 - \\lambda_0)\\abs{\\pr{\\mu}\\oset}.\n%  \\end{equation*}\n%  The same is true for all generations $1\\leq k<\\mu$. \n%\\end{proof}\n\n%\\begin{theorem}\\label{the:fragnumber}\n%  The upper bound of $N_\\delta$ is obtained for $\\nu=2$, such that $\\frag_\\eset$ of even node-tree $\\pr{k}\\eset$ returns two odd predecessors. \n%\\end{theorem}\n%\\begin{proof}\n%  The sum of even node-tree sizes in every generation is constant per \\Cref{lem:evenconstant}. Thus, the upper bound in \\Cref{eq:npdc2} is obtained by the largest possible $\\mu$. As $\\nu$ increases the number of odd node-trees in each $\\frag^{(k)}_\\oset$, the average size of these odd node-trees decreases. Since the size of a node-tree is proportional to the number of predecessor generations, we find that \n%  \\begin{equation*}\n%    \\mu \\propto \\frac{1}{\\nu}. \n%  \\end{equation*}\n%  Hence, the upper bound in \\Cref{eq:npdc2} exists in the minimal value of $\\nu$, which is $\\nu = 2$.\n%\\end{proof}\n\n%Using \\Cref{the:fragnumber}, a fragmentation step on an odd cluster $\\frag(\\pr{k+1}\\oset)$ now returns $\\{\\pr{k}\\oset_0, \\pr{k}\\oset_1, \\pr{k}\\oset_2\\}$, where $\\pr{k}\\oset_1, \\pr{k}\\oset_2$ are predecessors of the even node-tree $\\pr{k}\\eset$. \n\n%\\begin{lemma}\\label{lem:chrono}\n%  The node-tree size of $\\pr{k}\\oset_0$ must be smaller than or equal to $\\pr{k}\\oset_1, \\pr{k}\\oset_2$, such that $\\lambda_1 \\geq \\lambda_0 \\leq \\lambda_2$. \n%\\end{lemma}\n%\\begin{proof}\n%  The partial fragmentations must occur in the order of first \\Cref{eq:pfo}, then \\eqref{eq:pfe}, as \\eqref{eq:pfe} requires an even node-tree that is returned by \\eqref{eq:pfo}. In terms of cluster growth, the vertex-trees $\\vset_1, \\vset_2$, corresponding to $\\pr{k}\\oset_1, \\pr{k}\\oset_2$, must merge before the combined vertex-tree can merge with $\\vset_0$, which corresponds to $\\pr{k}\\nset_0$. As a result of Weighted Growth, $\\abs{\\vset_1}$ and $\\abs{\\vset_2}$ must be smaller or equal to $\\abs{\\vset_0}$, such that \n%  \\begin{equation*}\n%    \\abs{\\vset_1}\\geq \\vset_0 \\leq\\abs{\\vset_2}.\n%  \\end{equation*}\n%  If this condition is not met, the cluster of $\\vset_0$ grows first and merges with either $\\vset_1$ or $\\vset_2$, and the causality of events is disturbed. 
Since we assumed $\\abs{\\nset}=\\abs{\\vset}$, this can be translated to \n%  \\begin{equation*}\n%    \\abs{\\nset_1}\\geq \\nset_0 \\leq\\abs{\\nset_2},\n%  \\end{equation*}\n%  and subsequently to the fragmentation ratios.\n%\\end{proof}\n\n%\\begin{theorem}\\label{the:ratios}\n%  The upper bound for $N_\\delta$ is obtained via the fragmentation ratios $\\lambda_0 = \\lambda_1 = \\lambda_2 = \\nicefrac{1}{3}$.\n%\\end{theorem}\n%\\begin{proof}\n%  The ratios $\\{\\lambda_0, \\lambda_1, \\lambda_2\\}$ can be found by maximizing the size of the even node-tree $\\pr{k}\\eset$ in each fragmentation step, which is \n%  \\begin{equation*}\n%    \\abs{\\pr{k}\\eset} = (\\lambda_1 + \\lambda_2)\\abs{\\pr{k-1}\\oset}.\n%  \\end{equation*}\n%  Since $ \\lambda_1 \\geq \\lambda_0 \\leq \\lambda_2$ per \\Cref{lem:chrono}, the largest values for $\\lambda_1, \\lambda_2$ possible are equal to $\\lambda_0$.\n%\\end{proof}\n\n%The last unknown parameters in finding the upper bound of $N_\\delta$ in are $\\mu$ and $\\abs{\\pr{\\mu}\\oset}$.\n\n%\\begin{theorem}\\label{the:km}\n%  For $\\nu = 2$ and $\\lambda_i = \\{\\nicefrac{1}{3},\\nicefrac{1}{3},\\nicefrac{1}{3}\\}$, the maximum number of fragmentation generations is $\\mu = \\log_3{\\abs{\\pr{\\mu}\\oset}}$.\n%\\end{theorem}\n%\\begin{proof}\n%  In every generation, all node-trees are fragmented into 3 predecessors that are $\\nicefrac{1}{3}$ the size of their common successor. The series of $\\mu$ fragmentation steps is thus simply $\\mu$ divisions of the node-tree $\\pr{\\mu}\\oset$ in 3 parts until all predecessors have size 1, at which point a node-tree cannot be fragmented.\n%\\end{proof}\n\n%The maximum size of the odd node-tree $\\pr{\\mu}\\oset$ is bounded by the system size $n$, the number of qubits on the lattice. Collecting \\Cref{the:fragnumber,the:ratios,the:km} and filling in \\Cref{eq:npdc2}, we find that\n\n%\\begin{align*}\n%  \\nonumber N_\\delta &\\leq 2\\sum_{k=1}^\\mu{ \\sum_{ \\pr{\\mu-k}\\eset \\in \\frag_\\oset^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\eset}}  } \\\\\n%  \\nonumber         &\\leq 2\\sum_{k=1}^{\\log_3{\\abs{\\pr{\\mu}\\oset}}} \\frac{2}{3}\\abs{\\pr{\\mu}\\oset}\\\\\n%                    &\\leq \\frac{4}{3}\\abs{\\pr{\\mu}\\oset}\\log_3{\\abs{\\pr{\\mu}\\oset}} \\\\\n%                    &\\leq \\frac{4}{3}n \\log_3{n}.\n%\\end{align*}\n\n%The worst-case time complexity of related to the delay cost is thus $\\Omega(n\\log{n})$. \n\n%\\subsection{Growth cost}\\label{sec:growthcost}\n\n%To grow a cluster represented by a node-tree $\\nset$, a depth-first search (DFS) is performed on the node-tree to find all nodes with zero delay. The total cost of these DFS's is proportional to the total number of nodes encountered during these DFS's, which we dub $N_g$. Using \\Cref{def:fragmentation,def:decomposition}, the cost of growth of a node-tree $\\pr{\\mu}\\oset$ is proportional to the sum of sizes of all odd node-trees in the decomposition of $\\pr{\\mu}\\oset$: \n%\\begin{equation}\\label{eq:ngrow}\n%  N_g = 2\\sum_{k=1}^\\mu{ \\sum_{ \\pr{\\mu-k}\\oset \\in \\frag^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\oset}} }.\n%\\end{equation}\n%Again, we assume that no trivial vertices are added to a cluster or $|\\nset| = |\\vset|$ such that \\Cref{eq:ngrow} becomes an upper bound. As a result of \\emph{Odd-Rooted Join} and \\emph{Root List Replacement}, the upper bound is obtained in the largest $\\mu$. This is again achieved through $\\nu = 2$. 
For every fragmentation of some odd node-tree $\\pr{k-1}\\oset$ into $\\{\\pr{k}\\oset_0, \\pr{k}\\oset_1, \\pr{k}\\oset_2\\}$, all three predecessors add to $N_g$ if they have grown. As a result of Weighted Growth, this is the case when $\\abs{\\vset_0}\\approx \\abs{\\vset_1}\\approx\\abs{\\vset_2}$ such that $\\lambda_0 \\approx \\lambda_1 \\approx \\lambda_2\\approx \\nicefrac{1}{3}$. For these values of $\\nu$ and $\\lambda$, we can apply \\Cref{the:km} for $\\mu$. For $|\\nset| = |\\vset|$, the sum of even node-tree sizes in every $\\frag_\\oset^{(k)}$ is exactly $\\abs{\\pr{\\mu}\\oset}$, and we find that\n%\\begin{align*}\n%  \\nonumber N_g &\\leq 2\\sum_{k=1}^\\mu{ \\sum_{ \\pr{\\mu-k}\\oset \\in \\frag^{(k)}(\\pr{\\mu}\\oset) }{ \\abs{\\pr{\\mu-k}\\oset}}  } \\\\\n%  \\nonumber         &\\leq 2\\sum_{k=1}^{\\log_3{\\abs{\\pr{\\mu}\\oset}}} \\abs{\\pr{\\mu}\\oset}\\\\\n%                    &\\leq 2\\abs{\\pr{\\mu}\\oset}\\log_3{\\abs{\\pr{\\mu}\\oset}}\\\\\n%                    &\\leq 2 n \\log_3{n},\n%\\end{align*}\n%which again corresponds to a worst-case time complexity $\\Omega(n\\log{n})$.", "meta": {"hexsha": "3757fdeb94f5780bea97f0afaa46c4eb63b84d5b", "size": 16542, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sec_complexity.tex", "max_stars_repo_name": "watermarkhu/tqe_paper_ufbb", "max_stars_repo_head_hexsha": "f9b171049e028ace58be3ab4a01cddac94f7e01e", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sec_complexity.tex", "max_issues_repo_name": "watermarkhu/tqe_paper_ufbb", "max_issues_repo_head_hexsha": "f9b171049e028ace58be3ab4a01cddac94f7e01e", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sec_complexity.tex", "max_forks_repo_name": "watermarkhu/tqe_paper_ufbb", "max_forks_repo_head_hexsha": "f9b171049e028ace58be3ab4a01cddac94f7e01e", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-11T15:53:16.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-11T15:53:16.000Z", "avg_line_length": 90.8901098901, "max_line_length": 885, "alphanum_fraction": 0.6931447225, "num_tokens": 5367, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7772998611746911, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5511204790406173}}
{"text": "\\documentclass[../../main]{subfiles}\n\\pagestyle{fancy}\n\n\\begin{document}\n\n\\chapter{Virtual large cardinals}\n\\label{chapter.virtual-large-cardinals}\n\\thispagestyle{fancy}\n\nIn this chapter we investigate the properties of virtual versions of well-known large cardinals, including measurables, strongs, supercompacts, Woodins and Vop\\v enkas. This entails firstly analysing the relationships between them, and secondly looking at more general properties in terms of their behaviour in core models as well as their indestructibility. This virtual perspective also allows us to analyse virtualised versions of large cardinals that are otherwise inconsistent with $\\zfc$, such as the Berkeley cardinals.\n\n\\section{Strongs \\& supercompacts}\n\nWe start out with measurables, strongs and supercompacts. Their (non-virtual) definitions can be found in Section \\ref{prelims.large-cardinals}.\n\n\\defi[][defi.strongsc]{\n  For $\\theta$ a regular uncountable cardinal, a cardinal $\\kappa<\\theta$ is...\n  \\begin{itemize}\n    \\item \\textbf{faintly $\\theta$-measurable} if, in a forcing extension of $V$, there is a transitive set $\\N$ and an elementary embedding $\\pi\\colon H_\\theta^V\\to\\N$ with $\\crit\\pi=\\kappa$;\n    \\item \\textbf{faintly $\\theta$-strong} if, in a forcing extension of $V$, there is a transitive set $\\N$ with $H_\\theta^V\\subset\\N$ and an elementary embedding $\\pi\\colon H_\\theta^V\\to\\N$ with $\\crit\\pi=\\kappa$;\n    \\item \\textbf{faintly $\\theta$-supercompact} if, in a forcing extension of $V$, there is a transitive set $\\N$ with ${^{<\\theta}\\N}\\cap V\\subset\\N$ and an elementary embedding $\\pi\\colon H_\\theta^V\\to\\N$ with $\\crit\\pi=\\kappa$.\n  \\end{itemize}\n\n  We further replace ``faintly'' by \\textbf{virtually} when $\\N\\subset V$, we attach a \\textbf{``pre''} if we do not assume that $\\pi(\\kappa)>\\theta$, and we will leave out $\\theta$ when it holds for all regular $\\theta>\\kappa$.\n}\n\nAs a quick example of this terminology, a \\textit{faintly prestrong cardinal} is a cardinal $\\kappa$ such that for all regular $\\theta>\\kappa$, $\\kappa$ is faintly $\\theta$-measurable with $H_\\theta^V\\subset\\N$.\n\n\\qquad Observe that whenever we have a virtual large cardinal that has its defining property for all regular $\\theta$, we can assume that the target of the embedding is an \\textit{element} of the ground model $V$ and not just a subset of $V$. Suppose, for instance, that $\\kappa$ is virtually measurable and fix a regular $\\theta>\\kappa$ and set $\\lambda:=(2^{<\\theta})^+$. Take a generic elementary embedding $\\pi:H_\\lambda\\to \\M_\\lambda$ witnessing that $\\kappa$ is virtually $\\lambda$-measurable. \n\n\\qquad Since $\\abs{H_\\theta}=2^{<\\theta}$ it holds that $H_\\theta\\in H_\\lambda$, so that the restriction $\\pi\\restr H_\\theta\\colon H_\\theta\\to\\pi(H_\\theta)$ witnesses that $\\kappa$ is virtually $\\theta$-measurable, and the target model $\\M_\\theta := \\pi(H_\\theta)$ is in $V$ because $\\M_\\lambda\\subseteq V$ by assumption. Thus, the weaker assumption that the target model is a subset of the ground model only affects level-by-level virtual large cardinals. 
Indeed, as we will see in later sections, for virtually strong cardinals we may even further weaken the assumption that $\\M_\\theta\\subseteq V$ to $H_\\theta=H_\\theta^{\\M_\\theta}$ (we do not know whether this holds level-by-level).\n\n\\qquad We note that even small cardinals can be faintly measurable: we may for instance have a precipitous ideal on $\\omega_1$; see \\cite[Theorem 22.33]{Jech}. The ``virtually'' adverb further implies that the cardinals are large cardinals in the usual sense, as Proposition \\ref{prop.virtit} below shows.\n\n\\prop[Virtualised folklore][prop.virtit]{\n  For any regular uncountable cardinal $\\theta$, every virtually $\\theta$-measurable cardinal is 1-iterable\\footnote{See Section \\ref{prelims.large-cardinals} for a definition of the 1-iterable cardinals.} (in particular, inaccessible).\n}\n\\proof{\n  Let $\\kappa$ be virtually $\\theta$-measurable, witnessed by a forcing $\\mathbb P$, a transitive $\\N\\subset V$ and an elementary $\\pi\\colon H_\\theta^V\\to\\N$ with $\\pi\\in V^{\\mathbb P}$. If $\\kappa$ is not a strong limit then we have a surjection $\\pi(f)\\colon\\p(\\alpha)\\to\\pi(\\kappa)$ with $\\ran\\pi(f)=\\ran f\\subset\\kappa$ for some $\\alpha<\\kappa$, $\\contr$. Note that we used $\\N\\subset V$ to ensure that $\\p(\\alpha)^V=\\p(\\alpha)^{\\N}$. The same argument shows that $\\kappa$ is regular. By restricting the generic embedding and using that $\\p(\\kappa)^V = \\p(\\kappa)^N$ as $\\N\\subset V$ and $\\p(\\kappa)^V\\subset\\N$, we get that $\\kappa$ is 1-iterable.\n}\n\nAlong with the above definition of faint supercompactness we can also virtualise Magidor's characterisation of supercompact cardinals\\footnote{See Section \\ref{prelims.large-cardinals} for a definition of the non-virtual version of this characterisation.}, which was one of the original characterisations of the remarkable cardinals in \\cite{Schindler}.\n\n\\defi{\n  Let $\\theta$ be a regular uncountable cardinal. Then $\\kappa<\\theta$ is \\textbf{virtually $\\theta$-Magidor-supercompact} if there are cardinals $\\bar\\theta<\\kappa$ and $\\bar\\kappa<\\bar\\theta$, and a generic elementary $\\pi\\colon H_{\\bar\\theta}^V\\to H_\\theta^V$ such that $\\crit\\pi=\\bar\\kappa$ and $\\pi(\\bar\\kappa)=\\kappa$.\n}\n\n\\cite{GitmanSchindler} observed that remarkable cardinals are precisely the virtually supercompacts. Surprisingly, they are also precisely the virtually strongs, which in turn makes virtually strongs and virtually supercompacts equivalent. The proofs of these equivalences were omitted in \\cite{GitmanSchindler}, so we give proofs of these here. These are not the original proofs, however, but a slight improvement that allows us to get a more fine-grained level-wise corollary --- see Remark \\ref{rema.levelwise-strong-supercompact}.\n\n\\theo[\\cite{GitmanSchindler}][theo.rem]{\n  For an uncountable cardinal $\\kappa$, the following are equivalent:\n  \\begin{enumerate}\n    \\item $\\kappa$ is virtually strong;\n    \\item $\\kappa$ is virtually supercompact;\n    \\item $\\kappa$ is virtually Magidor-supercompact.\n  \\end{enumerate}\n}\n\\proof{\n$(ii)\\Rightarrow(i)$ is simply by definition.\n\n\\qquad $(i)\\Rightarrow (iii)$: Fix $\\theta>\\kappa$. 
By $(i)$ there exists a generic elementary embedding $\\pi\\colon H_{(2^{<\\theta})^+}^V\\to\\M$ with\\footnote{The domain of $\\pi$ is $H_{(2^{<\\theta})^+}^V$ to ensure that $H_\\theta^V\\in\\dom\\pi$.} $\\crit\\pi=\\kappa$, $\\pi(\\kappa)>\\theta$, $H_{(2^{<\\theta})^+}^V\\subset\\M$ and $\\M\\subset V$. Since $H_\\theta^V, H_{\\pi(\\theta)}^{\\M}\\in\\M$, Countable Embedding Absoluteness \\ref{lemm.ctblabs} implies that $\\M$ has a generic elementary embedding $\\pi^*:H_\\theta^V\\to H_{\\pi(\\theta)}^{\\M}$ with $\\crit\\pi^*=\\kappa$ and $\\pi^*(\\kappa)=\\pi(\\kappa)>\\theta$. Since $H_\\theta^V=H_\\theta^{\\M}$ as $\\M\\subset V$ and $H_\\theta^V\\subset\\M$, elementarity of $\\pi$ now implies that $H_{(2^{<\\theta})^+}^V$ has cardinals $\\bar\\theta<\\kappa$ and $\\bar\\kappa<\\bar\\theta$, and a generic elementary $\\sigma\\colon H_{\\bar\\theta}^V\\to H_\\theta^V$ with $\\crit\\sigma=\\bar\\kappa$ and $\\sigma(\\bar\\kappa)=\\kappa$. This shows $(iii)$.\n\n  \\qquad $(iii)\\Rightarrow (ii)$: Fix $\\theta>\\kappa$ and set $\\delta:=(2^{<\\theta})^+$. By $(iii)$ there exist cardinals $\\bar\\delta<\\kappa$ and $\\bar\\kappa<\\bar\\delta$, and a generic elementary embedding $\\pi\\colon H_{\\bar\\delta}^V\\to H_\\delta^V$ with $\\crit\\pi=\\bar\\kappa$ and $\\pi(\\bar\\kappa)=\\kappa$. We will argue that $\\bar\\kappa$ is virtually $\\bar\\theta$-supercompact in $H_{\\bar\\delta}^V$, so that by elementarity $\\kappa$ is virtually $\\theta$-supercompact in $H_\\delta^V$ and hence also in $V$ by the choice of $\\delta$. Consider the restriction\n  \\eq{\n    \\sigma:=\\pi\\restr H_{\\bar\\theta}^V\\colon H_{\\bar\\theta}^V\\to H_\\theta^V.\n  }\n\n  Note that $H_\\theta^V$ is closed under ${<}\\bar\\theta$-sequences (and more) in $V$. Now define\n  \\eq{\n    X := \\bar\\theta{+}1 \\cup \\{x\\in H_\\theta^V\\mid\\exists y\\in H_{\\bar\\theta}^V\\exists p\\in\\col(\\omega, H_{\\bar\\theta}^V)\\colon p\\forces\\dot\\sigma(\\check y)=\\check x\\}\\in V.\n  }\n\n  Note that $\\abs X = \\abs{H_{\\bar\\theta}^V} = 2^{<\\bar\\theta}$ and that $\\ran\\sigma\\subset X$. Now let $\\overline\\M\\prec H_\\theta^V$ be such that $X\\subset\\overline\\M$ and $\\overline\\M$ is closed under ${<}\\bar\\theta$-sequences. Note that we can find such an $\\overline\\M$ of size $(2^{<\\bar\\theta})^{<\\bar\\theta} = 2^{<\\bar\\theta}$. Let $\\M$ be the transitive collapse of $\\overline\\M$, so that $\\M$ is still closed under ${<}\\bar\\theta$-sequences and we also still have that $\\abs\\M = 2^{<\\bar\\theta}<\\bar\\delta$, making $\\M\\in H_{\\bar\\delta}^V$.\n\n  \\qquad Countable Embedding Absoluteness \\ref{lemm.ctblabs} then implies that $H_{\\bar\\delta}^V$ has a generic elementary embedding $\\sigma^*\\colon H_{\\bar\\theta}^V\\to\\M$ with $\\crit\\sigma^*=\\bar\\kappa$, showing that $\\bar\\kappa$ is virtually $\\bar\\theta$-supercompact in $H_{\\bar\\delta}^V$, which is what we wanted to show.\n}\n\n\\rema[][rema.levelwise-strong-supercompact]{\n  As mentioned above, the proof in fact shows something stronger: if $\\kappa$ is virtually $(2^{<\\theta})^+$-strong then it is virtually $\\theta$-supercompact, and if it is virtually $(2^{<\\theta})^+$-Magidor-supercompact then it is virtually $\\theta$-supercompact. 
It is open whether these notions are equivalent level-by-level (see Question \\ref{ques.remarkableequiv}).\n}\n\nAs a corollary of the proof, we obtain the following weaker characterization of virtually strong cardinals:\n\n\\qprop[][prop.weakerstrong]{\n    A cardinal $\\kappa$ is virtually strong if and only if for every $\\theta>\\kappa$ there is a forcing extension, in which there is a transitive set $\\N$ and an elementary embedding $\\pi:H_\\theta\\to \\N$ with $\\crit \\pi=\\kappa$ and $H_\\theta=H_\\theta^{\\N}$.\n}\n\nA key difference between the normal large cardinals and the virtual kinds is that we do not have a virtual version of the Kunen inconsistency\\footnote{See Section \\ref{prelims.large-cardinals} for a definition of the Kunen inconsistency.}: it is perfectly possible to have a generic elementary embedding $H_\\theta^V\\to H_\\theta^V$ with $\\theta$ much larger than the critical point, and thus in particular also the image of the critical point. Here is an example of such a virtualised Kunen inconsistency.\n\n\\prop[Folklore]{\n  If $0^\\sharp$ exists then there are inaccessible cardinals $\\kappa<\\theta$ such that, in a generic extension of $L$, there is an elementary embedding\n  \\eq{\n    \\pi\\colon L_\\theta\\to L_\\theta.\n  }\n  \n  In other words, $\\pi$ witnesses a strong failure of the virtualised Kunen inconsistency.\n}\n\\proof{\n  From $0^\\sharp$ we get an elementary embedding $j\\colon L\\to L$. Let $C\\subset\\on$ be the proper class club of limit points of $j$ above $\\crit j$, which then contains an $L$-inaccessible cardinal $\\theta$ as there are stationarily many such. Restrict $j$ to $\\pi:=j\\restr L_\\theta\\colon L_\\theta\\to\\N$ and note that $\\N=L_\\theta$ by condensation of $L$ and because $\\theta$ is a limit point of $j$. Let $\\kappa:=\\crit\\pi$. Now an application of Countable Embedding Absoluteness \\ref{lemm.ctblabs} shows that a generic extension of $L$ contains an elementary embedding $\\tilde\\pi: L_\\theta\\to L_\\theta$ with $\\crit\\tilde\\pi = \\kappa$.\n}\n\nThis becomes important when dealing with the ``pre''-versions of the large cardinals. We next move to a virtualisation of the $\\alpha$-superstrong cardinals.\n\n\\defi{\n  Let $\\theta$ be a regular uncountable cardinal and $\\alpha$ an ordinal. Then a cardinal $\\kappa<\\theta$ is \\textbf{faintly $(\\theta,\\alpha)$-superstrong} if it is faintly $\\theta$-measurable, $H_\\theta^V\\subset\\N$ and $\\pi^\\alpha(\\kappa)\\leq\\theta$.\\footnote{Here $\\pi^1 = \\pi$, $\\pi^{\\alpha+1}=\\pi\\circ\\pi^{\\alpha}$ and $\\pi^\\alpha(\\kappa) = \\sup_{\\xi<\\alpha}\\pi^\\xi(\\kappa)$ when $\\alpha$ is a limit ordinal.} We replace ``faintly'' by \\textbf{virtually} when $\\N\\subset V$, we say that $\\kappa$ is \\textbf{faintly $\\alpha$-superstrong} if it is faintly $(\\theta,\\alpha)$-superstrong for \\textit{some} $\\theta$, and $\\kappa$ is simply \\textbf{faintly superstrong} if it is faintly 1-superstrong.\\footnote{Note that the conventions stated here are different from the ones in Definition \\ref{defi.strongsc}.}\n}\n\nAs in the non-virtual case, the virtually superstrongs surpass the virtually strongs in consistency strength. 
This then also implies that the virtually superstrongs are stronger than the virtually supercompacts, which is \\textit{not} the case outside the virtual world.\n\n\\prop[N.][prop.superstrong]{\n  If $\\kappa$ is faintly superstrong then $H_\\kappa$ has a proper class of virtually strong cardinals, and thus also a proper class of virtually supercompact cardinals.\n}\n\\proof{\n  Fix a regular $\\theta>\\kappa$ and a generic embedding $\\pi\\colon H_\\theta^V\\to\\N$ with $\\crit\\pi=\\kappa$, $H_\\theta^V\\subset\\N$ and $\\pi(\\kappa)\\leq\\theta$. Then $\\pi(\\kappa)$ is a $V$-cardinal, so that $H_{\\pi(\\kappa)}^V$ thinks that $\\kappa$ is virtually strong. This implies that $H_\\kappa^V$ thinks there is a proper class of virtually strong cardinals, using that $H_\\kappa^V\\prec H_{\\pi(\\kappa)}^V$.\n}\n\nThe following theorem and its subsequent corollaries then show that the only thing stopping prestrongness from being equivalent to strongness is the existence of virtualised Kunen inconsistencies:\n\n\\theo[N.][theo.virtchar]{\n  Let $\\theta$ be an uncountable cardinal. Then a cardinal $\\kappa<\\theta$ is virtually $\\theta$-prestrong iff either\n  \\begin{enumerate}\n    \\item $\\kappa$ is virtually $\\theta$-strong; or\n    \\item $\\kappa$ is virtually $(\\theta,\\omega)$-superstrong.\n  \\end{enumerate}\n}\n\\proof{\n  $(\\Leftarrow)$ is trivial, so we show $(\\Rightarrow)$. Let $\\kappa$ be virtually $\\theta$-prestrong. Assume $(i)$ fails, meaning that there is a generic elementary embedding \\hbox{$\\pi\\colon H_\\theta\\to\\N$} for some transitive $\\N\\subseteq V$ with $H_\\theta\\subset\\N$, $\\crit\\pi=\\kappa$ and $\\pi(\\kappa)\\leq\\theta$.\n\n  \\qquad First, assume that there is some $n<\\omega$ such that $\\pi^n(\\kappa)=\\theta$. The proof of Proposition~\\ref{prop.superstrong} shows that $\\kappa$ is virtually strong in $H_{\\pi(\\kappa)}$. It follows that $\\pi(\\kappa)$ is virtually strong in $H_{\\pi^2(\\kappa)}$ by elementarity, and by applying elementarity repeatedly we get that $\\pi^n(\\kappa)=\\theta$ is virtually strong in $\\N$. Note that the condition $\\pi^n(\\kappa)=\\theta$ implies that $\\theta$ is inaccessible in $\\N$, and hence a limit cardinal there. \n  \n  \\qquad In particular, $\\theta$ is virtually $\\delta:=(\\theta^+)^{\\N}$-strong in $\\N$, so that $\\N$ has a generic elementary embedding $\\sigma:H_\\delta^{\\N}\\to \\M$ with $\\crit\\sigma=\\theta$ and $H_\\theta^V\\subseteq H_\\delta^{\\N}\\subseteq\\M$. Thus, $H_\\theta^V\\prec H_{\\sigma(\\theta)}^{\\M}$, from which it follows that $\\kappa$ is virtually strong in $H_{\\sigma(\\theta)}^{\\M}$ and, in particular, virtually $\\theta$-strong. But $H_{\\sigma(\\theta)}^{\\M}$ must be correct about this since $H_\\theta^{\\M}=H_\\theta^{\\N}=H_\\theta$, from which we can conclude that $\\kappa$ is actually virtually $\\theta$-strong, contradicting our assumption that $(i)$ fails.\n\n  \\qquad Next, assume that there is a least $n<\\omega$ such that $\\pi^{n+1}(\\kappa)>\\theta$. Since $\\pi(\\kappa)\\leq\\theta$, we have as before that $\\kappa$ is virtually strong in $H_{\\pi(\\kappa)}$. Since $H_{\\pi^i(\\kappa)}\\prec H_{\\pi^{i+1}(\\kappa)}$ holds by elementarity, we have that $\\kappa$ is virtually strong in $H_{\\pi^n(\\kappa)}$. Applying elementarity to the statement that $\\kappa$ is virtually strong in $H_{\\pi(\\kappa)}$ we also get that $\\pi^n(\\kappa)$ is virtually strong in $H_{\\pi^{n+1}(\\kappa)}^{\\N}$. 
\n  \n  \\qquad This means that there is some generic elementary embedding $\\sigma\\colon H_\\theta\\to\\M$ with $H_\\theta\\subset\\M$, $\\M\\subseteq H_{\\pi^{n+1}(\\kappa)}^{\\N}$, $\\crit\\sigma=\\pi^n(\\kappa)$ and $\\sigma(\\pi^n(\\kappa))>\\theta$. Thus, again by elementarity, we get that $H_{\\pi^n(\\kappa)}  \\prec H_{\\sigma(\\pi^n(\\kappa))}^{\\M}$. Since, as we already argued, $\\kappa$ is virtually strong in $H_{\\pi^n(\\kappa)}$, this means that $\\kappa$ is also virtually strong in $H_{\\sigma(\\pi^n(\\kappa))}^{\\M}$ and as $H_\\theta^{\\M} = H_\\theta^{\\N} = H_\\theta$, this means that $\\kappa$ is actually virtually $\\theta$-strong, contradicting our assumption that $(i)$ fails.\n\n \\qquad Finally, assume $\\pi^n(\\kappa)<\\theta$ for all $n<\\omega$ and let $\\lambda = \\sup_{n<\\omega}\\pi^n(\\kappa)$. Since $\\lambda\\leq\\theta$, we have that $\\kappa$ is virtually $(\\theta, \\omega)$-superstrong by definition.\n}\n\nTo get a better intuition for the virtual $\\omega$-superstrongs, recall that a cardinal $\\kappa$ is \\textbf{virtually rank-into-rank} if there exists a cardinal $\\theta>\\kappa$ and a generic elementary embedding $\\pi\\colon H_\\theta^V\\to H_\\theta^V$ with $\\crit\\pi=\\kappa$. We then note that the virtually $\\omega$-superstrongs coincide with the virtually rank-into-ranks.\n\n\\prop[N.][prop.superstrong-rank-into-rank]{\n  A regular uncountable cardinal $\\kappa$ is virtually $\\omega$-superstrong iff it is virtually rank-into-rank.\n}\n\\proof{\n  If $\\kappa$ is virtually $\\omega$-superstrong, witnessed by a generic elementary embedding $\\pi\\colon H_\\theta^V\\to\\N$, then $\\lambda:=\\sup_{n<\\omega}\\pi^n(\\kappa)$ is well-defined. By restricting $\\pi$ to $\\pi\\restr H_\\lambda^V\\colon H_\\lambda^V\\to H_\\lambda^V$ we get a witness to $\\kappa$ being virtually $\\lambda$-rank-into-rank. Conversely, if $\\kappa$ is $\\theta$-rank-into-rank, witnessed by a generic embedding $\\pi\\colon H_\\theta^V\\to H_\\theta^V$, then one readily checks that $\\pi$ also witnesses that $\\kappa$ is virtually $\\omega$-superstrong.\n}\n\nTheorem \\ref{theo.virtchar} also gives us the following surprising consistency result:\n\n\\coro[N.][coro.strong-meas-equiconsistent]{\n  For any uncountable regular $\\theta$, the existence of a virtually $\\theta$-strong cardinal is equiconsistent with the existence of a faintly $\\theta$-measurable cardinal.\n}\n\\proof{\n  The above Proposition \\ref{prop.superstrong} and Theorem \\ref{theo.virtchar} show that virtually $\\theta$-prestrongs are equiconsistent with virtually $\\theta$-strongs. Now note that Countable Embedding Absoluteness \\ref{lemm.ctblabs} and condensation in $L$ imply that every faintly $\\theta$-measurable cardinal is virtually $\\theta$-prestrong in $L$.\n}\n\n\n\\section{Woodins \\& Vop\\v enkas}\n\nIn this section we will analyse the virtualisations of the Woodin and Vop\\v enka cardinals, which are defined using ``boldface'' variants of strongs and supercompacts.\n\n\\defi{\n  Let $\\theta$ be a regular uncountable cardinal. Then a cardinal $\\kappa<\\theta$ is \\textbf{faintly $(\\theta,A)$-strong} for a set $A\\subset H_\\theta^V$ if there exists a generic elementary embedding\n\\eq{\n  \\pi\\colon (H_\\theta^V,\\in,A)\\to(\\M,\\in,B)\n}\n\nwith $\\M$ transitive, such that $\\crit\\pi=\\kappa$, $\\pi(\\kappa)>\\theta$, $H_\\theta^V\\subset\\M$ and $B\\cap H_\\theta^V = A$. 
$\\kappa$ is \\textbf{faintly $(\\theta,A)$-supercompact} if we further have that ${^{<\\theta}\\M}\\cap V\\subset\\M$, and we say that $\\kappa$ is \\textbf{faintly $(\\theta,A)$-extendible} if $\\M=H_\\mu^V$ for some $V$-cardinal $\\mu$. We will leave out $\\theta$ if it holds for all regular $\\theta>\\kappa$.\n}\n\n\\defi{\n  \\label{defi.woodin}\n  A cardinal $\\delta$ is \\textbf{faintly Woodin} if given any $A\\subset H_\\delta^V$ there exists a faintly $({<}\\delta,A)$-strong cardinal $\\kappa<\\delta$.\n}\n\nAs with the previous definitions, for both of the above two definitions we substitute ``faintly'' for \\textbf{virtually} when $\\M\\subset V$, and substitute ``strong'', ``supercompact'' and ``Woodin'' for \\textbf{prestrong}, \\textbf{presupercompact} and \\textbf{pre-Woodin} when we do not require that $\\pi(\\kappa)>\\theta$.\n\n\\qquad We note in the following proposition that, in analogy with the real Woodin cardinals, virtually Woodin cardinals are Mahlo. This contrasts with the virtually pre-Woodins, since \\cite{WilsonVopenka}, together with Theorem \\ref{theo.vopwood} below, shows that these can be singular.\n\n\\prop[Virtualised folklore][prop.woodmahlo]{\n  Virtually Woodin cardinals are Mahlo.\n}\n\\proof{\n  Let $\\delta$ be virtually Woodin. Note that $\\delta$ is a limit of weakly compact cardinals by Proposition \\ref{prop.virtit}, making $\\delta$ a strong limit. As for regularity, assume that we have a cofinal increasing function $f\\colon\\alpha\\to\\delta$ with $f(0)>\\alpha$ and $\\alpha<\\delta$, and note that $f$ cannot have any closure points since $f(0) > \\alpha$ and $f$ is increasing. Fix a virtually $({<}\\delta,f)$-strong cardinal $\\kappa<\\delta$; we claim that $\\kappa$ is a closure point for $f$, which will yield our desired contradiction.\n\n  \\qquad Let $\\gamma<\\kappa$ and choose a regular $\\theta \\in (\\max(f(\\gamma), \\kappa), \\delta)$. We then have a generic embedding $\\pi\\colon (H_\\theta^V,\\in,f\\cap H_\\theta^V)\\to(\\N,\\in,f^+)$ with $H_\\theta^V\\subset\\N$, $\\N\\subset V$, $\\crit\\pi=\\kappa$ and $\\pi(\\kappa)>\\theta$, where $f^+$ is a function such that $f^+\\cap H_\\theta^V = f\\cap H_\\theta^V$. But then $f^+(\\gamma) = f(\\gamma) < \\pi(\\kappa)$ by our choice of $\\theta$, so elementarity implies that $f(\\gamma)<\\kappa$, making $\\kappa$ a closure point for $f$, $\\contr$. This shows that $\\delta$ is inaccessible.\n\n  \\qquad As for Mahloness, let $C\\subset\\delta$ be a club and $\\kappa<\\delta$ a virtually $({<}\\delta,C)$-strong cardinal. Let $\\theta \\in (\\min C,\\delta)$ and let $\\pi\\colon H_\\theta^V\\to\\N$ be the associated generic elementary embedding. Then for every $\\gamma<\\kappa$ there exists an element of $C$ below $\\pi(\\kappa)$, namely $\\min C$, so by elementarity $\\kappa$ is a limit of elements of $C$, making it an element of $C$. 
As $\\kappa$ is regular, this shows that $\\delta$ is Mahlo.\n}\n\n\\qquad The well-known equivalence of the ``function definition'' and ``$A$-strong'' definition of Woodin cardinals\\footnote{See Section \\ref{prelims.large-cardinals} for this characterisation of (non-virtual) Woodin cardinals.} holds if we restrict ourselves to \\textit{virtually} Woodins, and the analogue of the equivalence between virtually strongs and virtually supercompacts allows us to strengthen this:\n\n\\prop[Dimopoulos-Gitman-N.][prop.woodin]{\n  For an uncountable cardinal $\\delta$, the following are equivalent:\n  \\begin{enumerate}\n    \\item $\\delta$ is virtually Woodin;\n    \\item for every $A\\subset H_\\delta^V$ there exists a virtually $({<}\\delta,A)$-supercompact $\\kappa<\\delta$;\n    \\item for every $A\\subset H_\\delta^V$ there exists a virtually $({<}\\delta,A)$-extendible $\\kappa<\\delta$;\n    \\item for every function $f\\colon\\delta\\to\\delta$ there are regular cardinals $\\kappa<\\theta<\\delta$, where $\\kappa$ is a closure point for $f$, and a generic elementary $\\pi\\colon H_\\theta^V\\to\\M$ such that $\\crit\\pi=\\kappa$, $H_\\theta^V\\subset\\M$, $\\M\\subset V$ and $\\theta = \\pi(f\\restr\\kappa)(\\kappa)$;\n    \\item for every function $f\\colon\\delta\\to\\delta$ there are regular cardinals $\\kappa<\\theta<\\delta$, where $\\kappa$ is a closure point for $f$, and a generic elementary $\\pi\\colon H_\\theta^V\\to\\M$ such that $\\crit\\pi=\\kappa$, ${^{<\\pi(f)(\\kappa)}\\M}\\subset\\M$, $\\M\\subset V$ and $\\theta = \\pi(f\\restr\\kappa)(\\kappa)$;\n    \\item for every function $f\\colon\\delta\\to\\delta$ there are regular cardinals $\\bar\\theta<\\kappa<\\theta<\\delta$, where $\\kappa$ is a closure point for $f$, and a generic elementary embedding $\\pi\\colon H_{\\bar\\theta}^V\\to H_\\theta^V$ with $\\pi(\\crit\\pi)=\\kappa$, $f(\\crit\\pi)=\\bar\\theta$ and $f\\restr\\kappa\\in\\ran\\pi$.\n  \\end{enumerate}\n}\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics[scale=0.4]{\\string~/gitsky/phd/gfx/woodin_equivalences.pdf}\n    \\caption{Proof strategy of Proposition \\ref{prop.woodin}, dotted lines are trivial implications.}\n  \\end{center}\n\\end{figure}\n\n\\proof{\n  Firstly note that $(iii)\\Rightarrow(ii)\\Rightarrow(i)$ and $(v)\\Rightarrow(iv)$ are simply by definition.\n  \n  \\qquad\\framebox{$(i)\\Rightarrow(iv)$} Assume $\\delta$ is virtually Woodin, and fix a function $f\\colon\\delta\\to\\delta$. Let $\\kappa<\\delta$ be virtually $({<}\\delta, f)$-strong and let $\\theta<\\delta$ be a regular cardinal such that $\\sup_{\\alpha\\leq\\kappa}f(\\alpha)<\\theta$. Then there is a generic elementary embedding\n  \\eq{\n    \\pi\\colon (H_\\theta, \\in, f\\cap H_\\theta) \\to (\\M, \\in, f^+)\n  }\n  \n  such that $H_\\theta\\subseteq \\M$, $f\\cap H_\\theta=f^+\\cap H_\\theta$, $\\M\\subset V$, and $\\pi(\\kappa)>\\theta$. Note that, by our choice of $\\theta$, $f\\restr\\kappa\\in H_\\theta^V$ and $\\pi(f\\restr\\kappa)(\\kappa)=f^+(\\kappa)=f(\\kappa)<\\theta$.\n\n\n \\qquad So it suffices to show that $\\kappa$ is a closure point for $f$. Let $\\alpha<\\kappa$. Then\n  \\eq{\n    f(\\alpha)=f^+(\\alpha)=\\pi(f\\restr\\kappa)(\\alpha)=\\pi(f\\restr\\kappa)(\\pi(\\alpha))=\\pi(f(\\alpha)),\n  }\n\n  so $\\pi$ fixes $f(\\alpha)$ for every $\\alpha<\\kappa$. 
Now, if $\\kappa$ was not a closure point of $f$ then, letting $\\alpha<\\kappa$ be the least such that $f(\\alpha)\\geq\\kappa$, we have\n  \\eq{\n    \\theta > f(\\alpha) = \\pi(f(\\alpha)) \\geq \\pi(\\kappa) > \\theta,\n  }\n\n  a contradiction. Note that we used that $\\pi(\\kappa)>\\theta$ here, so this argument would not work if we had only assumed $\\delta$ to be virtually pre-Woodin.\n\n  \\qquad \\framebox{$(iv)\\Rightarrow(vi)$} Assume $(iv)$ holds, let $f\\colon\\delta\\to\\delta$ be given and define $g\\colon\\delta\\to\\delta$ as $g(\\alpha):=(2^{<\\gamma_\\alpha})^+$, where $\\gamma_\\alpha$ is the least regular cardinal above $|f(\\alpha)|$. By $(iv)$ there is a $\\kappa<\\delta$ which is a closure point of $g$ (and so also a closure point of $f$), and there is a regular $\\lambda\\in(\\kappa, \\delta)$ for which there is a generic elementary embedding $\\pi\\colon H_\\lambda\\to\\M$ with $\\crit\\pi=\\kappa$, $H_\\lambda\\subset\\M$, $\\M\\subset V$, and $\\pi(g\\restr\\kappa)(\\kappa)<\\lambda$.\n\n  \\qquad Let $\\theta$ be the least regular cardinal above $|\\pi(f\\restr\\kappa)(\\kappa)|$, and note that \\hbox{$H_\\theta\\in H_\\lambda$} by our definition of $g$. Thus, both $H_\\theta$ and $H_{\\pi(\\theta)}^{\\M}$ are elements of $\\M$. An application of Countable Embedding Absoluteness \\ref{lemm.ctblabs} then yields that $\\M$ has a generic elementary embedding $\\pi^*\\colon H_\\theta^{\\M}\\to H_{\\pi(\\theta)}^{\\M}$ such that $\\crit\\pi^*=\\kappa$, $\\pi^*(\\kappa)=\\pi(\\kappa)$, $\\pi(f\\restr\\kappa)\\in\\ran\\pi^*$, and $\\pi(f\\restr\\kappa)(\\kappa)<\\theta$. By elementarity of $\\pi$, $H_\\theta$ has an ordinal $\\bar\\theta<\\kappa$ and a generic elementary embedding $\\sigma\\colon H_{\\bar\\theta}\\to H_\\theta$ with $\\sigma(\\crit\\sigma)=\\kappa$, $f\\restr\\kappa\\in\\ran\\sigma$ and $f(\\crit\\sigma)<\\bar\\theta$, which is what we wanted to show.\n\n  \\qquad \\framebox{$(vi)\\Rightarrow(v)$} Assume $(vi)$ holds and let $f\\colon\\delta\\to\\delta$ be given. Define $g\\colon\\delta\\to\\delta$ as $g(\\alpha):=\\langle(2^{<\\gamma_\\alpha})^+,f(\\alpha)\\rangle$, where $\\gamma_\\alpha$ is the least regular cardinal above $|f(\\alpha)|$. In particular, $g$ codes $f$. By $(vi)$ there exist regular $\\bar\\kappa<\\bar\\lambda<\\kappa<\\lambda$ such that $\\kappa$ is a closure point of $g$ (so also a closure point of $f$) and there exists a generic elementary embedding $\\pi\\colon H_{\\bar\\lambda}\\to H_\\lambda$ with $\\crit\\pi = \\bar\\kappa$, $\\pi(\\bar\\kappa)=\\kappa$, $g(\\bar\\kappa)<\\bar\\lambda$, and $g\\restr\\kappa\\in\\ran\\pi$. Since $f$ is definable from $g$ and $g\\restr\\kappa\\in\\ran\\pi$, it follows that $f\\restr\\kappa\\in\\ran \\pi$. So let $\\pi(\\bar f)=f\\restr\\kappa$ with $\\bar f\\colon\\bar\\kappa\\to\\bar\\kappa$. Now observe that $\\bar f=f\\restr\\bar\\kappa$ since for $\\alpha<\\bar\\kappa$, we have $f(\\alpha)=\\pi(\\bar f)(\\alpha)=\\pi(\\bar f(\\alpha))=\\bar f(\\alpha)$.\n\n  \\qquad Let $\\bar\\theta$ be the least regular cardinal above $|f(\\bar\\kappa)|$. By the definition of $g$, we have $H_{\\bar\\theta}\\in H_{\\bar\\lambda}$. 
Now, following the $(iii)\\Rightarrow(ii)$ direction in the proof of Theorem \\ref{theo.rem} we get that $H_{\\bar\\lambda}$ has a generic elementary embedding \\hbox{$\\sigma\\colon H_{\\bar\\theta}\\to\\M$} with $\\M$ closed under ${<}\\bar\\theta$-sequences from $V$, $\\crit\\sigma=\\bar\\kappa$, $\\sigma(\\bar\\kappa)> \\bar\\theta$, and $\\sigma(\\bar f)(\\bar\\kappa)<\\bar\\theta$. Let $\\pi(\\bar\\theta)=\\theta$ and $\\pi(\\M)=\\N$. Now by elementarity of $\\pi$, we get that there is a generic elementary embedding \\hbox{$\\sigma^*:H_\\theta\\to \\N$} with $\\crit\\sigma^*=\\kappa$, $\\sigma^*(\\kappa)>\\theta$, and $\\sigma^*(\\pi(\\bar f))(\\kappa)=\\sigma^*(f\\restr\\kappa)(\\kappa)<\\theta$.\n\n  \\qquad \\framebox{$(vi)\\Rightarrow(iii)$} Let $C$ be the club of all $\\alpha$ such that $$(H_\\alpha,\\in, A\\cap H_\\alpha)\\prec(H_\\delta,\\in,A).$$ Let $f\\colon\\delta\\to\\delta$ be given as $f(\\alpha):=\\bra{\\gamma_0^\\alpha,\\gamma_1^\\alpha}$, where $\\gamma_0^\\alpha$ is the first limit point of $C$ above $\\alpha$ and the $\\gamma_1^\\alpha$ are chosen such that $\\{\\gamma_1^\\alpha\\mid\\alpha<\\beta\\}$ encodes $A\\cap\\beta$ for cardinals $\\beta$. This definition makes sense since $\\delta$ is inaccessible by Proposition \\ref{prop.virtit}.\n\n  \\qquad Let $\\kappa<\\delta$ be a closure point of $f$ such that there are regular cardinals $\\bar\\theta<\\kappa<\\theta$ and a generic elementary embedding $\\pi\\colon H_{\\bar\\theta}\\to H_\\theta$ such that $\\pi(\\crit\\pi)=\\kappa$, $f(\\crit\\pi)<\\bar\\theta$, and $f\\restr\\kappa\\in\\ran\\pi$. Let $\\bar\\kappa=\\crit \\pi$. We claim that $\\bar\\kappa$ is virtually $({<}\\delta, A)$-extendible. Since $\\kappa\\in C$ because it is a closure point of $f$, it suffices by the definition of $C$ to show that\n  \\eq{\n    (H_\\kappa,\\in,A\\cap H_\\kappa)\\models\\godel{\\text{$\\bar\\kappa$ is virtually $(A\\cap H_\\kappa)$-extendible}}. \\tag*{$(1)$}\n  }\n Let $\\beta$ be the least element of $C$ above $\\bar\\kappa$ but below $\\bar\\theta$, and note that $\\beta$ exists as $f(\\bar\\kappa)<\\bar\\theta$, and the definition of $f$ says that the first coordinate of $f(\\bar\\kappa)$ is a limit point of $C$ above $\\bar\\kappa$. It then holds that $$(H_{\\bar\\kappa},\\in,A\\cap H_{\\bar\\kappa}) \\prec (H_\\beta,\\in,A\\cap H_\\beta)$$ as both $\\bar\\kappa$ and $\\beta$ are elements of $C$. Since $f$ encodes $A$ in the manner previously described and $\\pi(f\\restr\\bar\\kappa)=f\\restr\\kappa$, we get that $\\pi(A\\cap H_{\\bar\\kappa}) = A\\cap H_\\kappa$, and thus\n  \\eq{\n    (H_\\kappa,\\in,A\\cap H_\\kappa) \\prec (H_{\\pi(\\beta)},\\in,A^*) \\tag*{$(2)$}\n  }\n\n  for $A^* := \\pi(A\\cap H_\\beta)$. 
Now, as $(H_\\gamma,\\in, A\\cap H_\\gamma)$ and $(H_{\\pi(\\gamma)},\\in,A^*\\cap H_{\\pi(\\gamma)})$ are elements of $H_{\\pi(\\beta)}$ for every $\\gamma<\\kappa$, Countable Embedding Absoluteness \\ref{lemm.ctblabs} implies that $H_{\\pi(\\beta)}$ sees that $\\bar\\kappa$ is virtually $({<}\\kappa, A^*)$-extendible, which by $(2)$ then implies $(1)$, which is what we wanted to show.\n}\n\nAs a corollary of the proof, we now have an analogue of Proposition~\\ref{prop.weakerstrong} for virtually Woodin cardinals.\n\n\\qprop[Dimopoulos-Gitman-N.][prop.weakerwoodin]{\n    A cardinal $\\delta$ is virtually Woodin if and only if, for every $A\\subseteq H_\\delta$, there is a cardinal $\\kappa$ satisfying the weakening of virtual $({<}\\delta,A)$-strongness where $H_\\theta=H_\\theta^{\\N}$ holds in place of $\\N\\subseteq V$, with $\\N$ being the target of the generic embedding.\n}\n\nAs a corollary we arrive at the surprising result that there is no distinction between the faint and virtual Woodin cardinals, in contrast with what we will see in Theorem \\ref{theo.virtualsep}.\n\n\\coro[Gitman-N.][coro.faintvirtualWoodin]{\n    Faintly Woodin cardinals are virtually Woodin.\n}\n\\proof{\n  Using Proposition~\\ref{prop.weakerwoodin}, it suffices to observe that if $\\theta$ is inaccessible and \n  \\eq{\n    \\pi:(H_\\theta,\\in,A)\\to (\\M,\\in,B)\n  }\n  \n  is a faintly $(\\theta,A)$-strong embedding such that $A$ codes the sequence of $H_\\lambda$ for $\\lambda<\\theta$, then $H_\\theta=H_\\theta^{\\M}$.\n}\n\nWe will now step away from the Woodins for a little bit, and introduce the Vop\\v enkas. In anticipation of the next section we will work with the class-sized version here, but all the following results work equally well for inaccessible virtually Vop\\v enka cardinals\\footnote{Note however that we have to require inaccessibility here: see \\cite{WilsonVopenka} for an analysis of the singular virtually Vop\\v enka cardinals.}.\n\n\\defi[\\gbc]{\n  The \\textbf{Generic Vop\\v enka Principle} (\\gvp) states that for any class $C$ consisting of structures in a common first-order language, there are distinct $\\M,\\N\\in C$ and a generic elementary embedding $\\pi\\colon\\M\\to\\N$.\n}\n\nWe will be using a standard variation of \\gvp\\, involving the following \\textit{natural sequences}:\n\n\\defi[\\gbc]{\n  Say that a class function $f\\colon\\on\\to\\on$ is an \\textbf{indexing function} if it satisfies that $f(\\alpha)>\\alpha$ and $f(\\alpha)\\leq f(\\beta)$ for all $\\alpha<\\beta$.\n}\n\n\\defi[\\gbc]{\n  Say that an $\\on$-sequence $\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ is \\textbf{natural} if there exists an indexing function $f\\colon\\on\\to\\on$ and unary relations $R_\\alpha\\subset V_{f(\\alpha)}$ such that $\\M_\\alpha = (V_{f(\\alpha)}, \\in, \\{\\alpha\\}, R_\\alpha)$ for every $\\alpha$. Denote this indexing function by $f^{\\vec\\M}$ and the unary relations as $R_\\alpha^{\\vec\\M}$.\n}\n\nThe following Theorem \\ref{theo.vopwood} is then the main theorem of this section, showing that inaccessible cardinals are virtually Vop\\v enka iff they are virtually pre-Woodin. 
\n\n\\theo[Dimopoulos-Gitman-N.; \\gbc][theo.vopwood]{\n  The following are equivalent:\n  \\begin{enumerate}\n    \\item \\gvp\\ holds;\n    \\item For any natural $\\on$-sequence $\\vec\\M$ there exists a generic elementary embedding $\\pi\\colon\\M_\\alpha\\to\\M_\\beta$ for some $\\alpha<\\beta$;\n    \\item $\\on$ is virtually pre-Woodin;\n    \\item $\\on$ is faintly pre-Woodin.\n  \\end{enumerate}\n}\n\\proof{\n  $(i)\\Rightarrow(ii)$ and $(iii)\\Rightarrow(iv)$ are trivial.\n  \n  \\qquad $(iv)\\Rightarrow(i)$: Assume $\\on$ is faintly pre-Woodin and fix some $\\on$-sequence \\\\$\\vec\\M:=\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ of structures in a common language. Let $\\kappa$ be $({<}\\on,\\vec\\M)$-prestrong and fix some regular $\\theta>\\kappa$ satisfying that $\\M_\\alpha\\in H_\\theta^V$ for every $\\alpha<\\theta$, and fix a generic elementary embedding\n  \\eq{\n    \\pi\\colon(H_\\theta^V, \\in, \\vec\\M) \\to (\\N, \\in, \\M^*)\n  }\n\n  with $H_\\theta^V\\subset\\N$ and $\\vec\\M\\cap H_\\theta^V=\\M^*\\cap H_\\theta^V$. Set $\\kappa:=\\crit\\pi$.\n\n  \\qquad We have that $\\pi\\restr\\M_\\kappa\\colon\\M_\\kappa\\to\\M^*_{\\pi(\\kappa)}$, but we need to reflect this embedding down below $\\theta$ as we do not know whether $\\M_{\\pi(\\kappa)}^*$ is on the $\\vec\\M$ sequence. Working in the generic extension, we have\n  \\eq{\n    \\N\\models\\exists\\bar\\kappa<\\pi(\\kappa)\\exists\\dot\\sigma\\in V^{\\col(\\omega, \\M_{\\bar\\kappa}^*)}\\colon \\godel{\\text{$\\dot\\sigma\\colon \\M^*_{\\bar\\kappa}\\to\\M^*_{\\pi(\\kappa)}$ is elementary}}.\n  }\n\n  Here $\\kappa$ realises $\\bar\\kappa$ and $\\pi\\restr\\M_\\kappa$ realises $\\sigma$. Note that $\\M^*_\\kappa = \\M_\\kappa$ since we ensured that $\\M_\\kappa\\in H_\\theta^V$ and we are assuming that $\\vec\\M\\cap H_\\theta^V = \\M^*\\cap H_\\theta^V$, so the domain of $\\sigma$ ($=\\pi\\restr\\M_\\kappa$) \\textit{is} $\\M_\\kappa^*$ --- also note that $\\sigma$ exists in a $\\col(\\omega,\\M_\\kappa)$ extension of $\\N$ by an application of Countable Embedding Absoluteness \\ref{lemm.ctblabs}. Now elementarity of $\\pi$ implies that\n  \\eq{\n    H_\\theta^V\\models\\exists\\bar\\kappa<\\kappa\\exists\\dot\\sigma\\in V^{\\col(\\omega, \\M_{\\bar\\kappa})}\\colon \\godel{\\text{$\\dot\\sigma\\colon \\M_{\\bar\\kappa}\\to\\M_{\\kappa}$ is elementary}},\n  }\n\n  which is upwards absolute to $V$, from which we can conclude that $\\sigma\\colon\\M_{\\bar\\kappa}\\to\\M_\\kappa$ witnesses that \\gvp\\ holds.\n\n  \n\n  \\qquad $(ii)\\Rightarrow(iii)$: Assume $(ii)$ holds and assume that $\\on$ is not virtually pre-Woodin, which means that there exists some class $A$ such that there are no virtually $A$-prestrong cardinals. This allows us to define a function $f\\colon\\on\\to\\on$ as $f(\\alpha)$ being the least regular $\\eta>\\alpha$ such that $\\alpha$ is not virtually $(\\eta,A)$-prestrong.\n\n  \\qquad We also define $g\\colon\\on\\to\\on$ as taking $\\alpha$ to the least strong limit cardinal above $\\alpha$ which is a closure point for $f$. Note that $g$ is an indexing function, so we can let $\\vec\\M$ be the natural sequence induced by $g$ and $R_\\alpha := A\\cap H_{g(\\alpha)}^V$. 
$(ii)$ supplies us with $\\alpha<\\beta$ and a generic elementary embedding\\footnote{Note that $V_{g(\\alpha)}=H_{g(\\alpha)}^V$ since $g(\\alpha)$ is a strong limit cardinal.}\n  \\eq{\n    \\pi\\colon(H_{g(\\alpha)}^V,\\in,A\\cap H_{g(\\alpha)}^V)\\to (H_{g(\\beta)}^V, \\in, A\\cap H_{g(\\beta)}^V).\n  }\n\n  Since $g(\\alpha)$ is a closure point for $f$ it holds that $f(\\crit\\pi)<g(\\alpha)$, so fixing a regular $\\theta\\in(f(\\crit\\pi), g(\\alpha))$ we get that $\\crit\\pi$ is virtually $(\\theta, A)$-prestrong, contradicting the definition of $f$. Hence $\\on$ is virtually pre-Woodin.\n}\n\n\n\\subsection{Weak Vop\\v enka}\n\nWe now move to a \\textit{weak} variant of \\gvp, introduced in a category-theoretic context in \\cite{AdamekRosicky}. It starts with the following equivalent characterisation of \\gvp, which is the virtual analogue of the characterisation shown in \\cite{AdamekRosicky}:\n\n\\lemm[Virtualised Ad\\' amek-Rosick\\' y; \\gbc][lemm.adaros]{\n  The following are equivalent\\footnote{This is equivalent to saying that $\\on$, viewed as a category, cannot be fully embedded into the category \\textsf{Gra} of graphs, which is how it is stated in \\cite{AdamekRosicky}.}:\n  \\begin{enumerate}\n    \\item \\gvp\\\n    \\item There is not a natural $\\on$-sequence $\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ satisfying that\n      \\begin{itemize}\n        \\item there is a generic homomorphism $\\M_\\alpha\\to\\M_\\beta$ for every $\\alpha\\leq\\beta$, which is unique in all generic extensions;\n        \\item there is \\textit{no} generic homomorphism $\\M_\\beta\\to\\M_\\alpha$ for any $\\alpha<\\beta$.\n      \\end{itemize}\n    \\item There is not a natural $\\on$-sequence $\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ satisfying that\n      \\begin{itemize}\n        \\item there is a homomorphism $\\M_\\alpha\\to\\M_\\beta$ in $V$ for every $\\alpha\\leq\\beta$, which is unique in all generic extensions;\n        \\item there is \\textit{no} generic homomorphism $\\M_\\beta\\to\\M_\\alpha$ for any $\\alpha<\\beta$.\n      \\end{itemize}\n  \\end{enumerate}\n}\n\\proof{\n  Note that the only difference between $(ii)$ and $(iii)$ is that the homomorphism exists in $V$, making $(ii)\\Rightarrow(iii)$ trivial.\n\n  \\qquad $(iii)\\Rightarrow(i)$: Assume that \\gvp\\ fails, meaning by Theorem \\ref{theo.vopwood} that we have a natural $\\on$-sequence $\\vec\\M_\\alpha$ such that, in every generic extension, there is no homomorphism between any two distinct $\\M_\\alpha$'s. Define $\\bra{\\N_\\kappa\\mid\\kappa\\in\\Card}$ as\\footnote{$\\coprod$ is the ``model-theoretic union'', also known as the coproduct.}\n\\eq{\n  \\N_\\kappa:=\\coprod_{\\xi\\leq\\kappa}\\M_\\xi:= \\{(x,\\xi)\\mid\\xi\\leq\\kappa\\land\\xi\\in\\Card\\land x\\in\\M_\\xi\\},\n}\n\n  with a unary relation $R^*$ given as $R^*(x, \\xi)$ iff $\\M_\\xi\\models R(x)$ and a binary relation $\\sim^*$ given as $(x,\\xi)\\sim^*(x', \\xi')$ iff $\\xi=\\xi'$. Whenever we have a homomorphism $f\\colon\\N_\\kappa\\to\\N_\\lambda$ we then get an induced homomorphism $\\tilde f\\colon\\M_0\\to\\M_\\xi$, given as $\\tilde f(x) := f(x, 0)$, where $\\xi\\leq\\kappa$ is given by preservation of $\\sim^*$.\n\n  \\qquad For any two cardinals $\\kappa<\\lambda$ we have a homomorphism $j_{\\kappa\\lambda}\\colon\\N_\\kappa\\to\\N_\\lambda$ in $V$, given as $j_{\\kappa\\lambda}(x, \\xi) := (x,\\xi)$. 
This embedding must also be the \\textit{unique} such, in all generic extensions, as otherwise we get a generic homomorphism between two distinct $\\M_\\alpha$'s. Furthermore, there cannot be any homomorphism $\\N_\\lambda\\to\\N_\\kappa$ as that would also imply the existence of a generic homomorphism between two distinct $\\M_\\alpha$'s.\n\n\\qquad $(i)\\Rightarrow(ii)$: Assume that we have an $\\on$-sequence $\\vec\\M_\\alpha$ as in the theorem, with generic homomorphisms $j_{\\alpha\\beta}\\colon\\M_\\alpha\\to\\M_\\beta$ that are unique in all generic extensions for every $\\alpha\\leq\\beta$, with no generic homomorphisms going the other way.\n\n\\qquad We first note that we can for every $\\alpha\\leq\\beta$ choose the $j_{\\alpha\\beta}$ in a $\\col(\\omega,\\M_\\alpha)$-extension, by a proof similar to the proof of Lemma \\ref{lemm.ctblabs} and using the uniqueness of $j_{\\alpha\\beta}$. Next, fix a proper class $C\\subset\\on$ such that $\\alpha\\in C$ implies that\n\\eq{\n  \\sup_{\\xi\\in C\\cap\\alpha}\\abs{\\M_\\xi}^V<\\abs{\\M_\\alpha}^V.\n}\n\nand note that this implies that $V[g]\\models\\abs{\\M_\\xi}<\\abs{\\M_\\alpha}$ for every $V$-generic \\\\$g\\subset\\col(\\omega,\\M_\\xi)$. This means that for every $\\alpha\\in C$ we may choose some $\\eta_\\alpha\\in\\M_\\alpha$ which is \\textit{not} in the range of any $j_{\\xi\\alpha}$ for $\\xi<\\alpha$. But now define first-order structures $\\bra{\\N_\\alpha\\mid\\alpha\\in C}$ as $\\N_\\alpha := (\\M_\\alpha, \\eta_\\alpha)$. Then, by our assumption on the $\\M_\\alpha$'s and construction of the $\\N_\\alpha$'s, there can be no generic homomorphism between any two distinct $\\N_\\alpha$, showing that \\gvp\\ fails.\n}\n\nThe \\textit{weak} version of \\gvp\\ is then simply ``flipping the arrows around'' in the above characterisation of \\gvp.\n\n\\xdefi[\\gbc]{\n  \\textbf{Generic Weak Vop\\v enka's Principle} (\\gwvp) states that there does \\textit{not} exist an $\\on$-sequence of first-order structures $\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ such that\n  \\begin{itemize}\n    \\item there is a homomorphism $\\M_\\beta\\to\\M_\\alpha$ in $V$ for every $\\alpha\\leq\\beta$, which is unique in all generic extensions;\n    \\item there is \\textit{no} generic homomorphism $\\M_\\alpha\\to\\M_\\beta$ for any $\\alpha<\\beta$.$\\hfill\\circ$\n  \\end{itemize}\n}\n\nWe start by showing that \\gwvp\\ is indeed a weaker version of \\gvp.\n\n\\prop{\n  \\gvp\\ implies \\gwvp.\n}\n\\proof{\n  Assume \\gvp\\ holds and \\gwvp\\ fails, and let $\\bra{\\M_\\alpha\\mid\\alpha<\\on}$ be an $\\on$-sequence of first-order structures such that for every $\\alpha\\leq\\beta$ there exists a homomorphism\n  \\eq{\n    j_{\\beta\\alpha}\\colon\\M_\\beta\\to\\M_\\alpha\n  }\n\n  in $V$ which is unique in all generic extensions, with no generic homomorphisms going the other way. We can then find a proper class $C\\subset\\on$ such that $\\abs{\\M_\\alpha}^V < \\abs{\\M_\\beta}^V$ for every $\\alpha<\\beta$ in $C$. By \\gvp\\ there are then $\\alpha<\\beta$ in $C$ and a generic homomorphism\n  \\eq{\n    \\pi\\colon\\M_\\alpha\\to\\M_\\beta\n  }\n\n  in some $V[g]$. Here we may assume, as in the proof of Lemma \\ref{lemm.adaros}, that $g\\subset\\col(\\omega,\\M_\\beta)$. But then $\\pi\\circ j_{\\beta\\alpha} = \\id$ by uniqueness of $j_{\\beta\\beta} = \\id$, which means that $j_{\\beta\\alpha}$ is injective in $V[g]$ and hence also in $V$. 
But then $\\abs{\\M_\\beta}^V\\leq\\abs{\\M_\\alpha}^V$, contradicting the definition of $C$.\n}\n\nDenoting the corresponding non-generic principle by \\wvp\\, Wilson showed the following surprising result:\n\n\\theo[\\cite{WilsonWVP}]{\n  \\wvp\\ is equivalent to $\\on$ being a Woodin cardinal.\n}\n\nGiven our Theorem \\ref{theo.vopwood} we may then suspect that in the virtual world these two continue to be equivalent, which turns out to \\textit{almost} be the case. In the proceeding argument we will be roughly following the structure of the argument in \\cite{WilsonWVP}, but we have to diverge from it at several points in which the author is using the fact that they are working with class-sized elementary embeddings. \n\n\\qquad Indeed, in that paper they establish a correspondence between elementary embeddings and certain homomorphisms, a correspondence we will not achieve here. Proving that the elementary embeddings we \\textit{do} get are non-trivial seems to furthermore require extra assumptions on our structures. In any case, let us begin.\n\n\\qquad Define for every strong limit cardinal $\\lambda$ and $\\Sigma_1$-formula $\\varphi$ the relations\n\\eq{\n  R^\\varphi &:= \\{x\\in V\\mid (V,\\in)\\models\\varphi[x]\\}\\\\\n  R^\\varphi_\\lambda&:= \\{x\\subset H_\\lambda^V\\mid \\exists y\\in R^\\varphi\\colon y\\cap H_\\lambda^V = x\\}\n}\n\nand given any class $A$ define the structure\n\\eq{\n  \\p_{\\lambda,A} := (H_{\\lambda^+}^V,R^\\varphi_\\lambda, \\{\\lambda\\}, A\\cap H_\\lambda^V)_{\\varphi\\in\\Sigma_1}.\n}\n\nSay that a homomorphism $h\\colon\\p_{\\lambda,A}\\to\\p_{\\eta,A}$ is \\textbf{trivial} if\n  \\eq{\n    h(x)\\cap H_\\eta^V = x\\cap H_\\eta^V\n  }\n  \n  for every $x\\in H_{\\lambda^+}^V$. Note that $h$ can only be trivial if $\\eta\\leq\\lambda$ since $h(\\lambda)=\\eta$.\n\n\\lemm[Gitman-N.; \\gbc][lemm.hom]{\n  Let $\\lambda$ be a singular strong limit cardinal, $\\eta$ a strong limit cardinal and $A\\subset V$ a class. If there exists a non-trivial generic homomorphism $h\\colon\\p_{\\lambda,A}\\to\\p_{\\eta,A}$ then there is a non-trivial generic elementary embedding $\\pi\\colon(H_{\\lambda^+}^V,\\in,A\\cap H_\\lambda^V)\\to(\\M, \\in, B)$ for some $\\M$ such that, letting $\\nu := \\min\\{\\lambda,\\eta\\}$, it holds that $H_\\nu^V=H_\\nu^{\\M}$, $A\\cap H_\\nu^V = B\\cap H_\\nu^V$ and $\\crit\\pi<\\nu$.\n}\n\\proof{\n  Assume that we have a non-trivial homomorphism $h\\colon\\p_{\\lambda,A}\\to\\p_{\\eta,A}$ in a forcing extension $V[g]$, define in $V[g]$ the set\n  \\eq{\n    \\M^* := \\{\\bra{b,f} \\mid b\\in [H_\\nu]^{<\\omega}\\land f\\in H_{\\lambda^+}^V\\land f\\colon H_\\lambda^V\\to H_\\lambda^V\\},\n  }\n\n  and define the standard relations $\\in^*$ and $=^*$ on $\\M^*$ as\n  \\eq{\n    \\bra{b_0,f_0} \\in^* \\bra{b_1, f_1} &\\quad\\text{iff}\\quad b_0b_1\\in h(\\{xy\\in[H_\\lambda^V]^{<\\omega}\\mid f_0(x)\\in f_1(y)\\})\\\\\n    \\bra{b_0,f_0} =^* \\bra{b_1, f_1} &\\quad\\text{iff}\\quad b_0b_1\\in h(\\{xy\\in[H_\\lambda^V]^{<\\omega}\\mid f_0(x) = f_1(y)\\})\n  }\n\n  Let $\\M := \\M^*/=^*$, and also call $\\in^*$ the induced relation on $\\M$, which is clearly well-defined. 
We then get a version of \\L o\\' s' Theorem, using that $h$ preserves all $\\Sigma_1$-relations and that $H_\\lambda^V\\models\\zfc^-$.\n\n  \\clai{\n    For every formula $\\varphi(v_1,\\dots,v_n)$ and every $[b_1,f_1],\\dots,[b_n,f_n]\\in\\M$ the following are equivalent:\n    \\begin{enumerate}\n      \\item $(\\M,\\in^*)\\models\\varphi[[b_1,f_1],\\dots,[b_n,f_n]]$;\n      \\item $b_1\\cdots b_n\\in h(\\{a_1\\cdots a_n\\mid\\p_{\\lambda,A}\\models\\varphi[f_1(a_1),\\dots,f_n(a_n)]\\})$.\n    \\end{enumerate}\n  }\n  \n  \\cproof{\n    The proof is straightforward, using that $h$ preserves $\\Sigma_1$-relations. We prove this by induction on $\\varphi$. If $\\varphi$ is $v_i\\in v_j$ then we have that\n    \\eq{\n      &(\\M,\\in^*)\\models\\varphi[[b_1,f_1],\\dots,[b_n,f_n]]\\\\\n      \\Leftrightarrow&\\bra{b_i,f_i}\\in^*\\bra{b_j,f_j}\\\\\n      \\Leftrightarrow&b_ib_j \\in h(\\{a_ia_j\\in[H_\\lambda^V]^{<\\omega}\\mid f_i(a_i)\\in f_j(a_j)\\})\\\\\n      \\Leftrightarrow&b_1\\cdots b_n\\in h(\\{a_1\\cdots a_n\\mid f_i(a_i)\\in f_j(a_j)\\})\\\\\n      \\Leftrightarrow&b_1\\cdots b_n\\in h(\\{a_1\\cdots a_n\\mid\\p_{\\lambda,A}\\models\\varphi[f_1(a_1),\\dots,f_n(a_n)]\\}).\n    }\n\n    The cases where $\\varphi$ is $\\psi\\land\\chi$ or $\\lnot\\psi$ is straightforward. If $\\varphi$ is $\\exists x\\psi$ then\n    \\eq{\n      &(\\M,\\in^*)\\models\\varphi[[b_1,f_1],\\dots,[b_n,f_n]]\\\\\n      \\Leftrightarrow&\\exists\\bra{b,f}\\in\\M^*\\colon(\\M,\\in)\\models\\psi[\\bra{b,f}, \\bra{b_1,f_1},\\dots,\\bra{b_n,f_n}]\\\\\n      \\Leftrightarrow&\\exists\\bra{b,f}\\in\\M^*\\colon bb_1\\cdots b_n\\in h(\\{a\\vec a\\mid\\p_{\\lambda,A}\\models\\psi[f(a), f_1(a_1),\\dots,f_n(a_n)]\\})\\\\\n      \\Leftrightarrow&b_1\\cdots b_n\\in h(\\{\\vec a\\mid\\p_{\\lambda,A}\\models\\varphi[f_1(a_1),\\dots,f_n(a_n)]\\}),\n    }\n\n    finishing the proof.\n  }\n\n  Note that we have not shown that $(\\M,\\in^*)$ is well-founded, and indeed it might not be. However, the following claim will show that $(H_\\nu^V,\\in)$ is isomorphic to a rank-initial segment of $(\\M^*, \\in^*)$, giving well-foundedness up to that point at least. Define the function $\\chi\\colon(H_\\nu^V,\\in)\\to(\\M^*,\\in^*)$ as $\\chi(a):=[\\bra{a},\\text{pr}]$, where $\\text{pr}(\\bra{x}) := x$.\n\n  \\clai{\n    \\label{clai.pr}\n    For every $[a,f]\\in\\M$ and $b\\in H_\\nu^V$,\n    \\eq{\n      [a,f]\\in^*\\chi(b)\\quad\\Leftrightarrow\\quad\\exists c\\in H_\\nu^V\\colon [a,f] = \\chi(c).\n    }\n  }\n\n  \\cproof{\n    We have that\n    \\eq{\n      [a,f]\\in^* \\chi(b) = [\\bra{b},\\text{pr}] &\\Leftrightarrow a\\bra{b}\\in h(\\{x\\bra{y}\\mid f(x)\\in y\\})\\\\\n      &\\Leftrightarrow a\\bra{b}\\in h(\\{x\\bra{y}\\mid\\exists z\\in y\\colon f(x)=z\\})\\\\\n      &\\Leftrightarrow \\exists c\\in b\\colon a\\bra{c}\\in h(\\{x\\bra{z}\\mid f(x)=z\\})\\\\\n      &\\Leftrightarrow \\exists c\\in b\\colon [a,f] = [\\bra{c},\\text{pr}] = \\chi(c),\n    }\n\n    yielding the wanted.\n  }\n\n  This claim implies that by taking the transitive collapse of $\\ran\\chi\\subset\\M$ we may assume that $H_\\nu^V = H_\\nu^{\\M}$. Now define\n  \\eq{\n    B := \\{[b,f]\\in\\M\\mid b\\in h(\\{x\\in H_\\lambda^V\\mid f(x)\\in A\\})\\}.\n  }\n  \n  and, in $V[g]$, let $\\pi\\colon (H_\\lambda^V,\\in,A\\cap H_\\lambda^V)\\to(\\M,\\in, B)$ be given as $\\pi(x) := [\\bra{},c_x]$. 
\n  \n  \\clai{\n    $\\pi$ is elementary.\n  }\n\n  \\cproof{\n     For $x_1,\\dots,x_n\\in H_\\lambda^V$ it holds that\n    \\eq{\n      (\\M,\\in^*, B)\\models\\varphi[\\pi(x_1),\\dots,\\pi(x_n)]&\\Leftrightarrow\\ (\\M,\\in^*)\\models\\varphi[\\pi(x_1),\\dots,\\pi(x_n)]\\\\\n      &\\Leftrightarrow\\ \\bra{}\\in h(\\{\\bra{}\\mid\\p_{\\lambda,A}\\models\\varphi[x_1,\\dots,x_n]\\})\\\\\n      &\\Leftrightarrow\\ (H_{\\lambda^+}^V,\\in,A\\cap H_\\lambda^V)\\models\\varphi[x_1,\\dots,x_n]\n    }\n\n    and we also get that, for every $x\\in H_\\lambda^V$,\n    \\eq{\n      x\\in A \\Leftrightarrow \\bra{}\\in h(\\{a\\in H_\\lambda^V\\mid x\\in A\\}) \\Leftrightarrow \\pi(x)\\in B,\n    }\n\n    which shows elementarity.\n  }\n\n  We next need to show that $B\\cap H_\\nu^V = A\\cap H_\\nu^V$, so let $x\\in H_\\nu^V$. Note that $x = [\\bra{x}, \\text{pr}]$ by Claim \\ref{clai.pr} and the observation proceeding it, which means that\n  \\eq{\n    x\\in B \\Leftrightarrow \\bra{x}\\in h(\\{\\bra{y}\\in H_\\lambda^V\\mid y\\in A\\}) \\Leftrightarrow x\\in A.\n  }\n\n  The last thing we need to show is that $\\crit\\pi<\\nu$. We start with an analogous result about $h$. \n\n  \\clai{\n    \\label{clai.move}\n    There exists some $b\\in H_\\nu^V$ such that $h(b)\\neq b$.\n  }\n  \n  \\cproof{\n    Assume the claim fails. We now have two cases.\\\\\n    \n    \\framebox{\\textbf{Case 1:} $\\lambda\\geq\\eta$}\n    \n    \\qquad By non-triviality of $h$ there is an $x\\in\\p^V(H_\\lambda^V)$ such that $h(x)\\neq x\\cap H_\\eta^V$, which means that there exists an $a\\in H_\\eta^V$ such that $a\\in h(x)\\Leftrightarrow a\\notin x$.\n\n    \\qquad  If $a\\in x$ then $\\{a\\} = h(\\{a\\}) \\subset h(x)$,\\footnote{Note that as $h$ preserves $\\Sigma_1$ formulas it also preserves singletons and boolean operations.} making $a\\in h(x)$, $\\contr$, so assume instead that $a\\in h(x)$. Since $\\eta$ is a strong limit cardinal we may fix a cardinal $\\theta<\\eta$ such that $a\\in H_\\theta^V$ and $H_\\theta^V\\in H_\\eta^V$. We then have that\\footnote{Note that we are using $\\lambda\\geq\\eta$ here to ensure that $H_\\theta^V\\in\\dom h$.}\n    \\eq{\n      \\{a\\}\\subset h(x)\\cap H_\\theta^V = h(x)\\cap h(H_\\theta^V) = h(x\\cap H_\\theta^V) = x\\cap H_\\theta^V,\n    }\n\n    so that $a\\in x$, $\\contr$.\\\\\n\n    \\framebox{\\textbf{Case 2:} $\\lambda<\\eta$}\n\n    \\qquad In this case we are assuming that $h\\restr H_\\lambda^V = \\id$, but $h(\\lambda) = \\eta > \\lambda$. Since $\\lambda$ is singular we can fix some $\\gamma<\\lambda$ and a cofinal function $f\\colon\\gamma\\to\\lambda$. Define the relation\n    \\eq{\n      R := \\{(\\alpha,\\beta,\\bar\\alpha,\\bar\\beta,g)\\mid\\godel{\\text{$g$ is a cofinal function $g\\colon\\alpha\\to\\beta$}}\\land g(\\bar\\alpha)=\\bar\\beta\\}.\n    }\n\n    Then $R(\\gamma,\\lambda,\\alpha,f(\\alpha),f)$ holds by assumption for every $\\alpha<\\gamma$, so that $R$ holds for some $(\\gamma^*,\\lambda^*,\\alpha^*,f(\\alpha)^*,f^*)$ such that\n    \\eq{\n      (\\gamma^*,\\lambda^*,\\alpha^*,f(\\alpha)^*,f^*)\\cap H_\\eta^V &= (h(\\gamma),h(\\lambda),h(\\alpha),h(f(\\alpha)),h(f)) \\\\\n      &= (\\gamma,\\eta,\\alpha,f(\\alpha),h(f)),\n    }\n\n     using our assumption that $h$ fixes every $b\\in H_\\lambda^V$. Since $\\gamma$, $\\alpha$ and $f(\\alpha)$ are transitive and bounded in $H_\\lambda^V$, it holds that $h(\\gamma)=\\gamma^*$, $h(\\alpha)=\\alpha^*$ and that $h(f(\\alpha))=f(\\alpha)^*$. 
Also, since $\\dom(f^*)=\\gamma=\\dom(f)$ we must in fact have that $f^*=h(f)$. But this means that $h(f)\\colon\\gamma\\to\\eta$ is cofinal and $\\ran(h(f))\\subset\\lambda$, a contradiction!\n  }\n  \n  To use the above Claim \\ref{clai.move} to conclude anything about $\\pi$ we will make use of the following standard lemma:\n\n  \\clai{\n    \\label{clai.pihagreement}\n    For any $x\\in H_\\lambda^V$ it holds that $h(x) = \\pi(x)\\cap H_\\eta^V$.\n  }\n\n  \\cproof{\n    For any $n<\\omega$ and $\\bra{a_1,\\hdots,a_n}\\in[H_\\eta^V]^n$ we have that\n    \\eq{\n      &\\bra{a_1,\\dots,a_n}\\in\\pi(x)\\\\\n      \\Leftrightarrow\\ &(\\M,\\in)\\models\\bra{a_1,\\dots,a_n}\\in\\pi(x)\\\\\n      \\Leftrightarrow\\ &(\\M,\\in)\\models\\bra{[\\bra{a_1},\\text{pr}],\\dots,[\\bra{a_n},\\text{pr}]}\\in[\\bra{},c_x]\\\\\n      \\Leftrightarrow\\ &\\bra{a_1,\\hdots,a_n}\\in h(\\{\\bra{x_1,\\hdots,x_n}\\mid\\p_{\\lambda,A}\\models\\bra{x_1,\\dots,x_n}\\in x\\})\\\\\n      \\Leftrightarrow\\ &\\bra{a_1,\\dots,a_n}\\in h(x),\n    }\n\n    showing that $h(x) = \\pi(x)\\cap H_\\eta^V$.\n  }\n\n  Now use Claim \\ref{clai.move} to fix a $b\\in H_\\nu^V$ which is moved by $h$. Claim \\ref{clai.pihagreement} then implies that\n  \\eq{\n    \\pi(b)\\cap H_\\eta^V = h(b)\\cap H_\\eta^V = h(b)\\neq b = b\\cap H_\\eta^V,\n  }\n\n  showing that $\\pi(b)\\neq b$ and hence $\\crit\\pi<\\nu$. This finishes the proof.\n}\n\n\\defi{\n  A cardinal $\\kappa$ is \\textbf{W-virtually pre-Woodin} if it is virtually pre-Woodin but without requiring that the target model is well-founded.\n}\n\n\\theo[Gitman-N.; \\gbc]{\n  \\gwvp\\ holds iff $\\on$ is W-virtually pre-Woodin.\n}\n\\proof{\n  $(\\Leftarrow)$ is just observing that the virtualisation of the argument in \\cite{WilsonWVP} that \\wvp\\ holds if \\on\\ is Woodin works in the W-virtually pre-Woodin case, so we only give a brief sketch.\n\n  \\qquad Assume \\on\\ is W-virtually pre-Woodin and let $\\vec\\M$ be a counterexample to \\gwvp, so that we in $V$ have homomorphisms $\\M_\\beta\\to\\M_\\alpha$ for all $\\alpha\\leq\\beta$. Work in some generic extension $V[g]$, fix a W-virtually $\\vec\\M$-prestrong cardinal $\\kappa$ and let $\\theta\\gg\\kappa$ be such that $\\M_{\\kappa+1}\\in H_\\theta^V$. Letting $\\pi\\colon (H_\\theta^V,\\in)\\to(\\M,\\in^*)$ be the corresponding embedding we get that $\\M_{\\kappa+1}=\\pi(\\vec\\M)_{\\kappa+1}$, so that\n  \\eq{\n    \\pi\\restr\\M_\\kappa\\colon(\\M_\\kappa,\\in)\\to(\\pi(\\M_\\kappa),\\in^*)=(\\pi(\\vec\\M)_{\\pi(\\kappa)},\\in^*).\n  }\n  \n  But then, by the choice of $\\theta$ and elementarity of $\\pi$, we get that $\\M$ has a homomorphism\n  \\eq{\n    h\\colon(\\pi(\\vec\\M)_{\\pi(\\kappa)}, \\in^*)\\to(\\pi(\\vec\\M)_{\\kappa+1},\\in^*)=(\\M_{\\kappa+1}, \\in),\n  }\n  \n  making $h\\circ(\\pi\\restr\\M_\\kappa)\\colon(\\M_\\kappa,\\in)\\to(\\M_{\\kappa+1},\\in)$ a counterexample to \\gwvp.\n\n  \\qquad $(\\Rightarrow)$: Assume that $\\on$ is not W-virtually pre-Woodin. This means that there exists a class $A$ such that there are no W-virtually $A$-prestrong cardinals. We can therefore assign to any cardinal $\\kappa$ the least cardinal $f(\\kappa)>\\kappa$ such that $\\kappa$ is not W-virtually $(f(\\kappa),A)$-prestrong.\n\n  \\qquad Also define a function $g\\colon\\on\\to\\Card$ as taking an ordinal $\\alpha$ to the least singular strong limit cardinal above $\\alpha$ closed under $f$. 
Then we are assuming that there is no non-trivial generic elementary embedding\n  \\eq{\n    \\pi\\colon(H_{g(\\alpha)}^V,\\in,A\\cap H_{g(\\alpha)}^V)\\to(\\M,\\in, B)\n  }\n\n  with $H_{g(\\alpha)}^V\\subset\\M$ and $B\\cap H_{g(\\alpha)}^V = A\\cap H_{g(\\alpha)}^V$. Assume towards a contradiction that for some $\\alpha,\\beta$ there is a non-trivial generic homomorphism $h\\colon\\p_{g(\\alpha),A}\\to\\p_{g(\\beta), A}$. Lemma \\ref{lemm.hom} then gives us a non-trivial generic elementary embedding\n  \\eq{\n    \\pi\\colon(H_{g(\\alpha)}^V,\\in,A\\cap H_{g(\\alpha)}^V)\\to(\\M, \\in, B)\n  }\n\n  for some transitive $\\M$ such that $H_\\nu^V\\subset\\M$ with $\\nu := \\min\\{g(\\alpha),g(\\beta)\\}$ and $A\\cap H_\\nu^V = B\\cap H_\\nu^V$, a contradiction! Therefore every generic homomorphism $h\\colon\\p_{g(\\alpha),A}\\to\\p_{g(\\beta),A}$ is trivial. Since there is a unique trivial homomorphism when $\\alpha\\geq\\beta$ and no trivial homomorphism when $\\alpha<\\beta$ since $g(\\alpha)$ is sent to $g(\\beta)$, the sequence of structures\n  \\eq{\n    \\bra{\\p_{g(\\alpha),A}\\mid \\alpha\\in\\on}\n  }\n\n  is a counterexample to \\gwvp, which is what we wanted to show.\n}\n\n\n\\section{Berkeleys}\n\nWe next move to the higher realms of the virtual large cardinal hierarchy, and study cardinals whose non-virtual versions are inconsistent with \\zfc.\n\n\\qquad In the virtual setting the virtually Berkeley cardinals, like all the other virtual large cardinals, are simply downwards absolute to $L$. It turns out that virtually Berkeley cardinals are natural objects, as the main theorem of this section, Theorem~\\ref{theo.berkeleyequiv}, shows that these large cardinals are precisely what separates virtually pre-Woodins from the virtually Woodins, as well as separating virtually Vop\\v enka cardinals from Mahlo cardinals, improving on a result in \\cite{GitmanHamkins}.\n\n\\defi{\n  Say that a cardinal $\\delta$ is \\textbf{virtually proto-Berkeley} if for every transitive set $\\M$ such that $\\delta\\subset\\M$ there exists a generic elementary embedding $\\pi\\colon\\M\\to\\M$ with $\\crit\\pi<\\delta$.\n\n  \\qquad If $\\crit\\pi$ can be chosen arbitrarily large below $\\delta$ then $\\delta$ is \\textbf{virtually Berkeley}, and if $\\crit\\pi$ can be chosen as an element of any club $C\\subset\\delta$ we say $\\delta$ is \\textbf{virtually club Berkeley}.\n}\n\nNote that a quick application of Countable Embedding Absoluteness \\ref{lemm.ctblabs} shows that virtually (proto-)Berkeley cardinals are downwards absolute to $L$.\n\n\\qquad We are not interested in the virtually proto-Berkeley cardinals for the same reason we are not interested in the proto-Berkeley cardinals, namely that if $\\delta$ is virtually proto-Berkeley then every $\\kappa>\\delta$ is proto-Berkeley as well. 
The following theorem, which is a straight-forward virtualisation of the corresponding theorem in the non-virtual context, then shows that the least virtually proto-Berkeley cardinal is indeed virtually Berkeley.\n\n\\theo[Virtualised \\cite{Cutolo} 2.1.14]{\n  The least virtually proto-Berkeley cardinal is virtually Berkeley.\n}\n\\proof{\n  Let $\\delta_0$ be the least virtually proto-Berkeley cardinal and assume that $\\eta_0<\\delta_0$ is least such that there exists a transitive set $\\M_0$ with $\\delta_0\\in\\M_0$ such that there are no generic elementary embeddings $\\pi\\colon\\M\\to\\M$ with $\\crit\\pi\\in(\\eta_0,\\delta_0)$.\n\n  \\qquad We will show that $\\eta_0$ is virtually proto-Berkeley, which would contradict minimality of $\\delta_0$, so let $\\M$ be any transitive set with $\\eta_0\\in\\M$. We can now fix some $\\lambda>\\delta_0$ such that $\\M,\\M_0\\in V_\\lambda$ and define\n  \\eq{\n    \\M' := V_\\lambda\\cup\\{\\{\\bra{x, \\eta_0,\\M,\\M_0}\\mid x\\in V_\\lambda\\}\\}.\n  }\n\n  Note that $\\eta_0$, $\\M_0$ and $\\M$ are all definable in $\\M'$. Use the fact that $\\delta_0$ is virtually proto-Berkeley to get a generic embedding $\\pi\\colon\\M'\\to\\M'$, so that definability ensures that $\\pi$ fixes $\\eta_0$, $\\M_0$ and $\\M$.\n\n  \\qquad By choice of $\\eta_0$ and because $\\M_0$ is fixed by $\\pi$ we get that $\\crit\\pi\\leq\\eta_0$, and since $\\pi$ fixes $\\eta_0$ we further have that $\\crit\\pi<\\eta_0$. This implies that $\\pi\\restr\\M\\colon\\M\\to\\M$ witnesses that $\\eta_0$ is virtually proto-Berkeley, $\\contr$.\n}\n\nVirtually (proto-)Berkeley cardinals turn out to be equivalent to their ``boldface'' versions, the proof of which is a straightforward virtualisation of Lemma 2.1.12 and Corollary 2.1.13 in \\cite{Cutolo}.\n\n\\prop[Virtualised \\cite{Cutolo} 2.1.12 and 2.1.13][prop.boldfaceberkeley]{\n  If $\\delta$ is virtually proto-Berkeley then for every transitive set $\\M$ such that $\\delta\\subset\\M$ and every subset $A\\subset\\M$ there exists a generic elementary embedding $\\pi\\colon(\\M,\\in,A)\\to(\\M,\\in,A)$ with $\\crit\\pi<\\delta$. If $\\delta$ is virtually Berkeley then we can furthermore ensure that $\\crit\\pi$ is arbitrarily large below $\\delta$.\n}\n\\proof{\n  Let $\\M$ be transitive with $\\delta\\subset\\M$ and $A\\subset\\M$. Let\n  \\eq{\n    \\N := \\M\\cup\\{A, \\{\\{A, x\\}\\mid x\\in\\M\\}\\}\\cup\\{\\{A,x\\}\\mid x\\in\\M\\}\n  }\n\n  and note that $\\N$ is transitive. Further, both $A$ and $\\M$ are definable in $\\N$ without parameters: $A$ is definable in $\\N$ as the unique set such that there is a set $B$ ($=\\{\\{A,x\\}\\mid x\\in\\M\\}$), all of whose elements are unordered pairs $\\{A,x\\}$ and for every $x\\in\\N$ such that $A\\notin x$ and $x\\notin\\{A,B\\}$ it holds that $\\{A,x\\}\\in B$. $\\M$ is then what remains if we remove all $x$ such that $A$ is in the transitive closure of $x$. But this means that a generic elementary embedding $\\pi\\colon\\N\\to\\N$ fixes both $\\M$ and $A$, giving us a generic elementary $\\sigma\\colon(\\M,\\in,A)\\to(\\M,\\in,A)$ with $\\crit\\sigma=\\crit\\pi$, yielding the wanted conclusion.\n}\n\nThe following is a straightforward virtualisation of the usual definition of the Vop\\v enka filter (see e.g. 
\\cite{Kanamori}):\n\n\\defi[\\gbc]{\n  Define the \\textbf{virtually Vop\\v enka filter} $F$ on $\\on$ as $X\\in F$ iff there is a natural $\\on$-sequence $\\vec\\M$ such that $\\crit\\pi\\in X$ for any $\\alpha<\\beta$ and any generic elementary $\\pi\\colon\\M_\\alpha\\to\\M_\\beta$.\n}\n\n  Theorem \\ref{theo.vopwood} shows that $\\emptyset$ is in the virtually Vop\\v enka filter iff \\gvp\\ fails, in analogy with the non-virtual case. Normality also holds in the virtual context, as the following proof shows:\n  \n\\lemm[Virtualised folklore; \\gbc]{\n  The virtually Vop\\v enka filter is a normal filter.\n}\n\\proof{\n  Let $F$ be the virtually Vop\\v enka filter. We first show that $F$ is actually a filter. If $X\\in F$ and $Y\\supset X$ then $Y\\in F$ simply by definition of $F$. If $X,Y\\in F$, witnessed by natural sequences $\\vec\\M$ and $\\vec\\N$, then $X\\cap Y\\in F$ as well, witnessed by the natural sequence $\\vec\\P$ induced by the indexing function $f^{\\vec\\P} := \\max(f^{\\vec\\M}, f^{\\vec\\N})$ and unary relations $R^{\\vec\\P}_\\alpha := \\text{Code}(\\bra{R^{\\vec\\M}_\\alpha, R^{\\vec\\N}_\\alpha})$. Indeed, if $\\pi\\colon\\P_\\alpha\\to\\P_\\beta$ is a generic elementary embedding with critical point $\\mu$ then $\\mu$ is also the critical point of both $\\pi\\restr\\M_\\alpha\\colon\\M_\\alpha\\to\\M_\\beta$ and $\\pi\\restr\\N_\\alpha\\colon\\N_\\alpha\\to\\N_\\beta$.\n  \n  \\qquad For normality, let $X\\in F^+$ be $F$-positive, where we recall that this means that $X\\cap C\\neq\\emptyset$ for every $C\\in F$, and let $f\\colon X\\to\\on$ be regressive. We want to show that $f$ is constant on an $F$-positive set.\n\n  \\qquad Assume this fails, meaning that there are natural sequences $\\vec\\M^\\gamma$ for $\\gamma$ such that for any generic elementary $\\pi\\colon\\M_\\alpha^\\gamma\\to\\M_\\beta^\\gamma$ satisfies that $f(\\crit\\pi)\\neq\\gamma$. Define a new natural sequence $\\vec\\N$ as induced by the indexing function $g\\colon\\on\\to\\on$ given as $g(\\alpha):=\\sup_{\\gamma<\\alpha}\\rk\\M_\\alpha^\\gamma + \\omega$ and unary relations $R_\\alpha^{\\vec\\N}$ given as\n\\eq{\n  R_\\alpha^{\\vec\\N} := \\text{Code}(\\bra{\\bra{\\M_\\alpha^\\gamma\\mid\\gamma<\\alpha}, f\\restr\\alpha}).\n}\n\nNow since $X$ is $F$-positive there exists a generic elementary embedding\n\\eq{\n  \\pi\\colon\\N_\\alpha\\to\\N_\\beta\n}\n\nwith $\\crit\\pi\\in X$. As $f(\\crit\\pi)<\\crit\\pi$ we get that $\\pi(f(\\crit\\pi))=f(\\crit\\pi)$, so that we have a generic elementary embedding\n\\eq{\n  \\pi\\restr\\M_\\alpha^{f(\\crit\\pi)}\\colon\\M_\\alpha^{f(\\crit\\pi)}\\to\\M_\\beta^{f(\\crit\\pi)},\n}\n\n  but this contradicts the definition of $\\vec\\M^{f(\\crit\\pi)}$! Thus $F$ is normal.\n}\n\n  The reason why we are being careful in showing all these analogous properties for the virtual Vop\\v enka filter is that not all of the properties carry over. Indeed, note that uniformity of filters is non-trivial as we are working with proper classes\\footnote{This boils down to the fact that the class club filter is not provably normal in \\gbc, see \\cite{class-fodor}}, and we will see in Theorem \\ref{theo.berkeleyequiv} that uniformity of this filter is equivalent to there being no virtually Berkeley cardinals --- the following lemma is the first implication:\n\n\\lemm[N.; \\gbc][lemm.vopclub]{\n  Assume $\\gvp$ and that there are no virtually Berkeley cardinals. 
Then the virtually Vop\\v enka filter $F$ on $\\on$ contains every class club $C$.\n}\n\\proof{\n  The crucial extra property we get by assuming that there are not any virtually Berkeleys is that $F$ becomes uniform, i.e. contains every tail $(\\delta,\\on)\\subset\\on$. Indeed, assume that $\\delta$ is the least cardinal such that $(\\delta,\\on)\\notin F$. Let $M$ be a transitive set with $\\delta\\subset M$ and $\\gamma<\\delta$ a cardinal. As $(\\gamma,\\on)\\in F$ by minimality of $\\delta$, we may fix a natural sequence $\\vec\\N$ witnessing this. Let $\\vec\\M$ be the natural sequence induced by the indexing function $f\\colon\\on\\to\\on$ given by\n  \\eq{\n    f(\\alpha):=\\max(\\alpha+1, \\delta+1)\n  }\n\n  and unary relations $R_\\alpha := \\{\\bra{M, \\N_\\beta}\\mid\\beta\\leq\\alpha\\}$. If $\\pi\\colon\\M_\\alpha\\to\\M_\\beta$ is a generic elementary embedding with $\\crit\\pi\\leq\\delta$, which exists as $(\\delta,\\on)\\notin F$, then $\\pi(R_\\alpha)=R_\\beta$ implies that $\\pi\\restr\\M\\colon\\M\\to\\M$ with $\\crit\\pi\\leq\\delta$. We also get that $\\crit\\pi>\\gamma$, as\n  \\eq{\n    \\pi\\restr\\N_{\\crit\\pi}\\colon\\N_{\\crit\\pi}\\to\\N_{\\pi(\\crit\\pi)}\n  }\n\n  is an embedding between two structures in $\\vec\\N$ and hence $\\crit\\pi>\\gamma$ as $\\vec\\N$ witnesses that $(\\gamma,\\on)\\in F$. This means that $\\delta$ is virtually Berkeley, a contradiction. Thus $\\crit\\pi>\\delta$, implying that $(\\delta,\\on)\\in F$.\n\n  \\qquad Note that the class $C_0\\subset\\on$ of limit ordinals is in $F$, since it is the diagonal intersection of the tails $(\\alpha+1,\\on)$. Now let $C\\subset\\on$ be a class club, and let\n  \\eq{\n    C := \\{a_\\alpha\\mid\\alpha<\\on\\}\n  }\n  \n  be its increasing enumeration. Then $C \\supset C_0\\cap\\triangle_{\\alpha<\\on}(a_\\alpha,\\on)$, implying that $C\\in F$.\n}\n\n\\theo[N.; \\gbc][theo.prewoodwood]{\n  If there are no virtually Berkeley cardinals then $\\on$ is virtually pre-Woodin iff $\\on$ is virtually Woodin.\n}\n\\proof{\n  Assume $\\on$ is virtually pre-Woodin, so \\gvp\\ holds by Theorem \\ref{theo.vopwood} and we can let $F$ be the virtually Vop\\v enka filter. The assumption that there are not any virtually Berkeley cardinals implies that for any class $A$ we not only get a virtually $A$-prestrong cardinal, but we get stationarily many such. Indeed, assume this fails --- we will follow the proof of Theorem \\ref{theo.vopwood}.\n\n  \\qquad Failure means that there is some class $A$ and some class club $C$ such that there are no virtually $A$-prestrong cardinals in $C$. Since there are no virtually Berkeley cardinals, Lemma \\ref{lemm.vopclub} imples that $C\\in F$, so there exists some natural sequence $\\vec\\N$ such that whenever $\\pi\\colon\\N_\\alpha\\to\\N_\\beta$ is an elementary embedding between two distinct structures of $\\vec\\N$ it holds that $\\crit\\pi\\in C$. Define $f\\colon\\on\\to\\on$ as sending $\\alpha$ to the least cardinal $\\eta>\\alpha$ such that $\\alpha$ is not virtually $(\\eta,A)$-prestrong if $\\alpha\\in C$, and set $f(\\alpha):=\\alpha$ if $\\alpha\\notin C$. 
Also define $g\\colon\\on\\to\\on$ as $g(\\alpha)$ being the least strong limit cardinal in $C$ above $\\alpha$ which is a closure point for $f$.\n\n  \\qquad Now let $\\vec\\M$ be the natural sequence induced by $g$ and\n  \\eq{\n    R_\\alpha := \\code(\\bra{A\\cap H_{g(\\alpha)}^V, \\N_\\alpha})\n  }\n  \n  and apply \\gvp\\ to get $\\alpha<\\beta$ and a generic elementary embedding $\\pi\\colon\\M_\\alpha\\to\\M_\\beta$, which restricts to\n  \\eq{\n    \\pi\\restr(H_{g(\\alpha)}^V,\\in,A\\cap H_{g(\\alpha)}^V)\\colon(H_{g(\\alpha)}^V,\\in,A\\cap H_{g(\\alpha)}^V)\\to(H_{g(\\beta)}^V,\\in,A\\cap H_{g(\\beta)}^V),\n  }\n  \n  making $\\crit\\pi$ virtually $(g(\\alpha),A)$-prestrong and thus $\\crit\\pi\\notin C$. But as we also get the embedding $\\pi\\restr\\N_\\alpha\\colon\\N_\\alpha\\to\\N_\\beta$, we have that $\\crit\\pi\\in C$ by definition of $\\vec\\N$, $\\contr$.\n\n  \\qquad Now fix any class $A$ and some large $n<\\omega$ and define the class\n  \\eq{\n    C := \\{\\kappa\\in\\Card\\mid (H_\\kappa^V,\\in,A\\cap H_\\kappa^V)\\prec_{\\Sigma_n}(V,\\in,A)\\}.\n  }\n\n  This is a club and we can therefore find a virtually $A$-prestrong cardinal $\\kappa\\in C$. Assume that $\\kappa$ is not virtually $A$-strong and let $\\theta$ be least such that it is not virtually $(\\theta,A)$-strong. Fix a generic elementary embedding\n  \\eq{\n    \\pi\\colon(H_\\theta^V,\\in,A\\cap H_\\theta^V)\\to (M,\\in,B)\n  }\n\n  with $\\crit\\pi=\\kappa$, $H_\\theta^V\\subset M$, $M\\subset V$, $A\\cap H_\\theta^V = B\\cap H_\\theta^V$ and $\\pi(\\kappa)<\\theta$.\n\n  \\qquad Now $\\pi(\\kappa)$ is inaccessible, and $(H_{\\pi(\\kappa)}^V, \\in, A\\cap H_{\\pi(\\kappa)}^V) = (H_{\\pi(\\kappa)}^M,\\in,B\\cap H_{\\pi(\\kappa)}^M)$ believes that $\\kappa$ is virtually $(A\\cap H_{\\pi(\\kappa)}^V)$-strong as in the proof of Theorem \\ref{theo.virtchar}, meaning that $(H_\\kappa^V,\\in,A\\cap H_\\kappa^V)$ believes that there is a proper class of virtually $(A\\cap H_\\kappa^V)$-strong cardinals. But $\\kappa\\in C$, which means that\n  \\eq{\n    (V,\\in,A)\\models\\godel{\\text{There exists a proper class of virtually $A$-strong cardinals}},\n  }\n\n  implying that \\on\\ is virtually Woodin.\n}\n\nNext, the following result is an improvement of a theorem in \\cite{GitmanHamkins}, reducing the assumption of the existence of $0^\\sharp$ to the existence of a virtually Berkeley cardinal, which we will later see is optimal:\n\n\\theo[N.; \\gbc][theo.berkgvpmahlo]{\n  If there exists a virtually Berkeley cardinal $\\delta$ then $\\gvp$ holds and $\\on$ is not Mahlo.\n}\n\\proof{\n  If $\\on$ was Mahlo then there would in particular exist an inaccessible cardinal $\\kappa>\\delta$, but then $H_\\kappa^V\\models\\godel{\\text{there exists a virtually Berkeley cardinal}}$, contradicting the incompleteness theorem, as we would have shown that\n  \\eq{\n    \\gbc+\\Phi\\proves\\textsf{Con}(\\gbc+\\Phi),\n  }\n\n  witnessed by the model $(H_\\kappa^V, \\p^V(H_\\kappa^V))$, with $\\Phi$ being ``There exists a virtually Berkeley cardinal''.\n\n  \\qquad To show $\\gvp$ we show that $\\on$ is virtually pre-Woodin, which is equivalent by Theorem \\ref{theo.vopwood}. Fix therefore a class $A$ --- we have to show that there exists a virtually $A$-prestrong cardinal. For every cardinal $\\theta\\geq\\delta$ there exists a generic elementary embedding \n  \\eq{\n    \\pi_\\theta\\colon(H_\\theta^V,\\in,A\\cap H_\\theta^V)\\to(H_\\theta^V, \\in, A\\cap H_\\theta^V)\n  }\n\n  with $\\crit\\pi<\\delta$. 
By the pigeonhole principle we thus get some $\\kappa<\\delta$ which is the critical point of proper class many $\\pi_\\theta$, showing that $\\kappa$ is virtually $A$-prestrong, making $\\on$ virtually pre-Woodin.\n}\n\n%The proof of the following theorem is similar to the proof of Theorem 11 in \\cite{GitmanHamkins}. Here we improve that result by reducing the assumption from the existence of $0^\\sharp$ to the existence of a virtually Berkeley cardinal, which we will later see is optimal.\n%\n%\\theo[N.][theo.clubforcing]{\n%  If there exists a virtually Berkeley cardinal $\\delta$ then there is a definable $\\gbc$-preserving class forcing notion $\\mathbb P$ such that whenever $C\\subset\\mathbb P$ is class generic over $V$ then\n%  \\eq{\n%    V[C]\\models\\godel{\\text{\\gvp\\ holds but \\on\\ is not Mahlo}}.\n%  }\n%}\n%\\proof{\n%  Define $\\mathbb P$ as the class forcing adding a class club which avoids the regular cardinals; conditions are closed bounded sets containing no regular cardinals, ordered by end-extension. This forcing is ${\\leq}\\gamma$-distributive for every ordinal $\\gamma$, as it even contains a dense ${\\leq}\\gamma$-closed subclass: the class of conditions with elements above $\\gamma$. This implies that the forcing adds no sets and preserves $\\gbc$.\\footnote{See e.g. Example 2.2.7 and Corollary 4.3.7 in \\cite{Krapf}.}\n%\n%  \\qquad Letting $G\\subset\\mathbb P$ be $V$-generic, $C:=\\bigcup G$ is a class club containing no regular cardinals and $C$ and $G$ are interdefinable, so that $V[C]=V[G]\\models\\godel{\\text{$\\on$ is not Mahlo}}$. We need to show that $\\gvp$ holds in $V[C]$. As $\\mathbb P$ is definable we can let $\\mathbb P^\\theta$ be $H_\\theta^V$'s version of $\\mathbb P$, for any ordinal $\\theta$.\n%\n%  \\clai{\n%    For any set $a$ and any natural number $n$ in the meta-theory, let $D_{a,n}\\subset\\mathbb P$ be the class of $\\mathbb P$-conditions $c$ for which there exists an $\\omega$-club of cardinals $\\theta$ such that\n%    \\begin{enumerate}\n%      \\item $H_\\theta^V\\prec_{\\Sigma_n} V$;\n%      \\item $c\\cap\\theta\\subset\\mathbb P^{H_\\theta^V}$ is $H_\\theta^V$-generic;\n%      \\item there exists a generic elementary $\\pi\\colon(H_\\theta^V,\\in,a,c\\cap\\theta)\\to(H_\\theta^V,\\in,a,c\\cap\\theta)$ with critical point below $\\delta$.\n%    \\end{enumerate}\n%\n%    Then $D_{a,n}$ is a definable dense subclass of $\\mathbb P$.\n%  }\n%\n%  \\cproof{\n%    Let $d$ be a $\\mathbb P$-condition --- we are looking for a $\\bar c\\in D_{a,n}$ extending $d$. Pick a cardinal $\\theta>\\delta$ above the rank of $a$ of countable cofinality which we may also assume satisfies $H_\\theta^V\\prec_{\\Sigma_n} V$ since this is the case for club many $\\theta$. Fix a sequence $\\bra{\\theta_k\\mid k<\\omega}$ cofinal in $\\theta$. We will now construct $c\\subset\\theta$ end-extending $d$ which is $H_\\theta^V$-generic for $\\mathbb P^{H_\\theta^V}$. For each $k<\\omega$ define the class\n%    \\eq{\n%      D_k := \\bigcap\\{D\\subset\\mathbb P^{H_\\theta^V}\\mid\\text{$D$ is open dense and $\\Sigma_k^{H_\\theta^V}(H_{\\theta_k}^V)$}\\}.\n%    }\n%\n%    As $\\mathbb P^{H_\\theta^V}$ is ${\\leq}\\theta_k$-distributive for all $k<\\omega$ we get that all the $D_k$'s are open dense subclasses of $\\mathbb P^{H_\\theta^V}$. Note that the $D_k$'s also suffice for genericity, as any subclass of $\\mathbb P^{H_\\theta^V}$ which is definable in $H_\\theta^V$ contains some $D_k$. 
We can therefore build an $H_\\theta^V$-generic $g\\subset\\mathbb P^{H_\\theta^V}$ in $V$ such that $c:=\\bigcup g$ end-extends $d$, by the usual diagonalisation argument. Since $\\delta$ is virtually Berkeley, Proposition \\ref{prop.boldfaceBerkeley} implies that we get a generic elementary embedding\n%    \\eq{\n%      \\pi\\colon(H_\\theta^V,\\in,c,a)\\to(H_\\theta^V,\\in,c,a)\n%    }\n%\n%    with $\\crit\\pi<\\delta$. Defining $\\bar c:=c\\cup\\{\\theta\\}$ we get that $\\bar c\\in\\mathbb P$ because $\\theta$ is singular, so we have verified that $\\bar c\\in D_{a,n}$ end-extends $d$, implying that $D_{a,n}$ is dense in $\\mathbb P$.\n%  }\n%\n%  Now we use the claim to show that $\\gvp$ holds in $V[C]$, so let us now work in $V[C]$. Fix some $\\on$-sequence $\\vec\\M = \\bra{\\M_\\alpha\\mid\\alpha<\\on}$ of structures in a common language $\\mathcal L$. As we are working with definable classes there is some $m<\\omega$ and some $\\Sigma_m$-formula $\\varphi(v,a,C)$ with a set parameter $a\\in V$ and a class parameter $C$ defining $\\vec\\M$.\n%\n%  \\qquad Let $n<\\omega$ be much larger than $m$ --- we will get back to exactly how large later. By the claim we get an $\\omega$-club $C_{\\text{claim}}$ of cardinals $\\theta$ such that the corresponding initial segment of $C$ is in $D_{\\bra{a,\\mathcal L},n}$. This means that for every $\\theta\\in C_{\\text{claim}}$ there exists a generic elementary embedding\n%  \\eq{\n%    \\pi\\colon(H_\\theta^V,\\in,C\\cap\\theta,a,\\mathcal L)\\to(H_\\theta^V,\\in,C\\cap\\theta,a,\\mathcal L),\n%  }\n%\n%  with $\\crit\\pi<\\delta$. \n%\n%  \\qquad We now claim that $(H_\\theta^V, \\in, C\\cap\\theta)\\prec_{\\Sigma_m}(V, \\in, C)$. To see this, suppose that $(H_\\theta^V,\\in, C\\cap\\theta)\\models\\psi(b)$ for some $\\Sigma_m$-formula $\\psi(v)$ and $b\\in H_\\theta^V$. This is then forced over $H_\\theta^V$ by some $\\mathbb P^{H_\\theta^V}$-condition $c$, an initial segment of $C\\cap\\theta$. By choosing $n$ very large, what we meant was that $n$ is large enough to express this forcing relation\\footnote{Note that this $n$ works for all the $\\theta\\in C_{\\text{claim}}$ since $H_{\\theta_0}^V\\prec_{\\Sigma_n}H_{\\theta_1}^V$ for all $\\theta_0,\\theta_1\\in C_{\\text{claim}}$.}, so since $H_\\theta^V\\prec_{\\Sigma_n}V$ we get that $c$ also forces $\\psi(b)$ over $V$. So $(V,\\in,C)\\models\\psi(b)$.\n%\n%  \\qquad It follows that the definition of the class $\\vec\\M$ is absolute to $(H_\\theta^V, \\in, C\\cap\\theta)$ since $\\vec\\M$ was defined by the $\\Sigma_m$-formula $\\varphi(v,a,C)$, which also means that $\\pi$ fixes $\\vec\\M$ as $\\pi$ fixes both $\\mathcal L$, $a$ and $C$ by the choice of $\\pi$. We then have that $\\pi(\\M_\\kappa)=\\pi(\\vec\\M)_{\\pi(\\kappa)}=\\M_{\\pi(\\kappa)}$, so that $\\pi\\restr\\M_\\kappa\\colon\\M_\\kappa\\to\\M_{\\pi(\\kappa)}$ witnesses $\\gvp$.\n%}\n\n\\theo[N.; \\gbc][theo.berkeleyequiv]{\n  The following are equivalent:\n  \\begin{enumerate}\n    \\item if \\gvp\\ holds then $\\on$ is Mahlo;\n    \\item $\\on$ is virtually pre-Woodin iff $\\on$ is virtually Woodin;\n    \\item There are no virtually Berkeley cardinals.\n  \\end{enumerate}\n}\n\\proof{\n  $(iii)\\Rightarrow(ii)$ is Theorem \\ref{theo.prewoodwood}, and the contraposed version of $(i)\\Rightarrow(iii)$ is Theorem \\ref{theo.berkgvpmahlo}. 
For $(ii)\\Rightarrow(i)$ note that \\gvp\\ implies that \\on\\ is virtually pre-Woodin by Theorem \\ref{theo.vopwood}, which by $(ii)$ means that it is virtually Woodin and the usual proof shows that virtually Woodins are Mahlo\\footnote{See e.g. Exercise 26.10 in \\cite{Kanamori}.}, showing $(i)$.\n}\n\nThis also immediately implies the following equiconsistency, as virtually Berkeley cardinals have strictly larger consistency strength than virtually Woodin cardinals:\n\n\\qcoro[N.]{\n  The existence of an inaccessible virtually pre-Woodin cardinal is equiconsistent with the existence of an inaccessible virtually Woodin cardinal.\n}\n\n\\section{Behaviour in core models}\n\\label{section.core-models}\n\nMost of the cardinals turn out to be downwards absolute to most inner models, including $L$.\n\n\\prop[][prop.absemb]{\n  For any regular uncountable cardinal $\\theta$, faintly $\\theta$-measurable cardinals are downwards absolute to any transitive class $\\U\\subset V$ satisfying \\\\$\\zf^-+\\dc$.\n}\n\\proof{\n  Let $\\kappa$ be faintly $\\theta$-measurable, witnessed by a forcing poset $\\mathbb P$ and a $V$-generic $g\\subset\\mathbb P$ such that, in $V[g]$, there is a transitive $\\M$ and an elementary embedding $\\pi\\colon H_\\theta^V\\to\\M$ with $\\crit\\pi=\\kappa$. Fix a transitive class $\\U\\subset V$ which satisfies $\\zf^-+\\dc$. Restricting the embedding to $\\pi\\restr H_\\theta^{\\U}\\colon H_\\theta^{\\U}\\to\\N$ we can now apply the Countable Absoluteness Lemma \\ref{lemm.ctblabs} to $\\pi\\restr H_\\theta^{\\U}$ to get that there exists an embedding $\\pi^*\\colon H_\\theta^{\\U}\\to\\N^*$ in a generic extension of $U$, making $\\kappa$ faintly $\\theta$-measurable in $\\U$.\n}\n\n\\theo[N.][theo.core-model-behaviour]{\n  Let $\\theta$ be a regular uncountable cardinal.\n  \\begin{enumerate}\n    \\item $L\\models\\godel{\\text{faintly $\\theta$-measurables are equivalent to virtually $\\theta$-prestrongs}}$.\n    \\item Assume that $L[\\mu]$ exists. It then holds that\\\\ \n      $L[\\mu]\\models\\godel{\\text{faintly $\\theta$-measurables are equivalent to virtually $\\theta$-measurables}}$.\n    \\item Assume there is no inner model with a Woodin. It then holds that\\\\\n      $K\\models\\godel{\\text{faintly $\\theta$-measurables are equivalent to virtually $\\theta$-measurables}}$.\n  \\end{enumerate}\n}\n\\proof{\n  For $(i)$ simply note that if $\\pi\\colon L_\\theta\\to\\N$ is a generic elementary embedding with $\\N$ transitive, then by condensation we have that $\\N=L_\\gamma$ for some $\\gamma\\geq\\theta$, so that $\\pi$ also witnesses the virtual $\\theta$-prestrongness of $\\crit\\pi$.\n\n  \\qquad $(ii)$: Assume that $V=L[\\mu]$ for notational simplicity and let $\\kappa$ be faintly $\\theta$-measurable, witnessed by a generic elementary embedding $\\pi\\colon L_\\theta[\\mu]\\to\\N$ existing in some generic extension $V[g]$. By condensation we get that $\\N=L_\\gamma[\\overline\\mu]$ for some $\\gamma\\geq\\theta$ and $\\overline\\mu\\in V[g]$, but we are not guaranteed that $\\overline\\mu\\in V$ here. Let $\\lambda$ be the unique measurable cardinal of $V=L[\\mu]$.\n\n  \\qquad Note that $\\bar\\mu$ is a measure on $\\pi(\\lambda)\\geq\\lambda$. If $\\pi(\\lambda)=\\lambda$ then $L[\\mu]=L[\\bar\\mu]$ by \\cite[Kunen's Theorem 20.10]{Kanamori} and we trivially get that $\\N\\subset V$. Assume thus that $\\pi(\\lambda)>\\lambda$, which implies that $L[\\bar\\mu]$ is an internal iterate of $L[\\mu]$ by \\cite[Kunen's Theorem 20.12]{Kanamori}. 
In particular it then holds that $L[\\bar\\mu]\\subset L[\\mu]$, so again we get that $\\N\\subset V$.\n\n  \\qquad $(iii)$: Assume that $V=K=L[\\mathcal E]$ and fix a faintly $\\theta$-measurable cardinal $\\kappa$, witnessed by a generic embedding $\\pi\\colon L_\\theta[\\mathcal E]\\to\\N=L_\\gamma[\\overline{\\mathcal E}]$ in some generic extension $V[g]$. Now coiterate\\footnote{See Section \\ref{prelims.core-model-theory} for more information about the core model $K$ and coiterations.} $L[\\mathcal E]$ with $L[\\overline{\\mathcal E}]$, and denote the last models by $\\P$ and $\\Q$. Since $K=K^{V[g]}$ and $K$ is universal, we get that $\\Q\\init\\P$. Then the $L[\\overline{\\mathcal E}]$-to-$\\Q$ branch did not drop, giving us an iteration embedding $i\\colon L[\\overline{\\mathcal E}]\\to\\Q$.\n\n  \\qquad Note that $\\crit i\\geq\\kappa$: as $\\overline{\\mathcal E}$ is simply the pointwise image of $\\mathcal E$ under $\\pi$, nothing below $\\kappa$ is moved, and hence nothing below $\\kappa$ is used in the comparison either. This means that $\\crit(i\\circ\\pi)=\\kappa$, so that $(i\\circ\\pi)\\colon L_\\theta[\\mathcal E]\\to\\Q$ witnesses that $\\kappa$ is virtually $\\theta$-measurable, since $\\Q\\init\\P$ implies that $\\Q\\subset K$.\n}\n\nNote that the proofs of $(ii)$ and $(iii)$ above do not show that $\\kappa$ is virtually $\\theta$-prestrong, as it might still be the case that $\\bar\\mu\\neq\\mu$ or $\\bar{\\mathcal E}\\neq\\mathcal E$, so we cannot conclude that $L_\\theta[\\mu]\\subset L_\\theta[\\bar\\mu]$ or $L_\\theta[\\mathcal E]\\subset L_\\theta[\\overline{\\mathcal E}]$. It might still hold, however; see Question \\ref{ques.prestrong-equivalence}.\n\n\n\\section{Separation results}\n\nHaving proven many positive results about the relations between the virtual large cardinals in the previous sections, we dedicate this section to the negatives. More precisely, we will aim to \\textit{separate} many of the defined notions (potentially under suitable large cardinal assumptions). See Figure \\ref{fig.virtual-separations} for an overview of the relations between the various level-wise faint- and virtual large cardinals.\n\n\\qquad Our first separation result is that the virtuals form a level-by-level hierarchy.\n\n\\theo[N.]{\n  Let $\\alpha<\\kappa$ and assume that $\\kappa$ is faintly $\\kappa^{+\\alpha+2}$-measurable. Then\n  \\eq{\n    L_\\kappa\\models\\godel{\\text{There is a proper class of $\\lambda$ which are virtually $\\lambda^{+\\alpha+1}$-strong}}.\n  }\n}\n\\proof{\n  Write $\\theta:=\\kappa^{+\\alpha+1}$. Then by Theorem \\ref{theo.virtchar} we get that either $\\kappa$ is faintly $\\theta^+$-strong in $L$ or else, in particular, $L_\\kappa$ thinks that there is a proper class of remarkables. In the second case we also get that $L_\\kappa$ thinks that there is a proper class of $\\lambda$ such that $\\lambda$ is virtually $\\lambda^{+\\alpha+1}$-strong and we would be done, so assume the first case. Then $L_\\kappa\\prec_2 L_{\\theta^+}$, so define for each $\\xi<\\kappa$ the sentence $\\psi_\\xi$ as\n  \\eq{\n    \\psi_\\xi:\\equiv\\exists\\lambda>\\xi\\colon\\godel{\\text{$\\lambda$ is virtually $\\lambda^{+\\alpha+1}$-strong}}.\n  }\n\n  Then $\\psi_\\xi$ is $\\Sigma_2(\\{\\alpha,\\xi\\})$ since being virtually $\\beta$-strong is a $\\Delta_2(\\{\\beta\\})$-statement. 
As $L_{\\theta^+}\\models\\psi_\\xi$ for all $\\xi<\\kappa$ we also get that $L_\\kappa\\models\\psi_\\xi$ for all $\\xi<\\kappa$, which is what we wanted to show.\n}\n\nAs we are only assuming $\\kappa$ to be \\textit{faintly} measurable in the above, this also shows that the faintly $\\kappa^{+\\alpha+1}$-measurable cardinals $\\kappa$ form a strict hierarchy whenever $\\alpha<\\kappa$.\n\n\\qquad A separation result in a similar vein is the following, showing that it is consistent to have an inaccessible faintly measurable cardinal which is not weakly compact:\n\n\\prop[N.]{\n  \\label{prop.faintly-not-weakly-compact}\n  Assuming $\\kappa$ is measurable, there is a generic extension of $V$ in which $\\kappa$ is inaccessible and faintly measurable, but not weakly compact.\n}\n\\proof{\n  Let $\\mathbb P$ be the forcing notion that adds a $\\kappa$-Suslin tree $\\T$, and let $g\\subseteq\\mathbb P$ be $V$-generic. By \\cite{Kunen} it then holds that $\\mathbb P * \\T\\cong\\add(\\kappa^+, 1)$, a ${<}\\kappa^+$-closed forcing, which preserves the measurability of $\\kappa$. Further, $\\mathbb P$ can be shown to preserve the inaccessibility of $\\kappa$, making $\\kappa$ inaccessible and faintly measurable in $V[g]$. Lastly, $\\kappa$ cannot be weakly compact in $V[g]$ because, by definition, $\\T$ is a $\\kappa$-tree without a cofinal branch.\n}\n\nNext, we show that the virtuals are in fact different from the faints. This is trivial in general as successor cardinals can be faintly measurable and are never virtually measurable, but the separation still holds true if we rule out this successor case.\n\nFor a slightly more fine-grained distinction, let us define an intermediate large cardinal notion between the faint and virtual ones.\n\n\\defi{\n  Let $\\kappa<\\theta$ be infinite regular cardinals. Say that $\\kappa$ is \\textbf{faintly $\\theta$-power-$\\Phi$} for $\\Phi\\in\\{\\text{measurable}, \\text{prestrong}, \\text{strong}\\}$ if it is faintly $\\theta$-$\\Phi$, witnessed by an embedding $\\pi\\colon H_\\theta^V\\to\\N$, and $\\p^V(\\kappa) = \\p^{\\N}(\\kappa)$.\n}\n\nNote that the proof of Proposition \\ref{prop.virtit} shows that faintly power-measurables are also 1-iterable and so in particular weakly compact. Our separation result is then the following:\n\n\\theo[Gitman-N.][theo.virtualsep]{\n  For $\\Phi\\in\\{\\text{measurable}, \\text{prestrong}, \\text{strong}\\}$, if $\\kappa$ is virtually $\\Phi$, then there exist forcing extensions $V[G]$ and $V[G^*]$ such that\n  \\begin{enumerate}\n    \\item in $V[G]$, $\\kappa$ is inaccessible and faintly $\\Phi$, but not faintly power-$\\Phi$, and\n    \\item in $V[G^*]$, $\\kappa$ is faintly power-$\\Phi$, but not virtually $\\kappa^{++}$-prestrong.\n  \\end{enumerate}\n}\n\\proof{\n  We start with $(i)$. Let us assume that $\\kappa$ is virtually measurable. This implies, in particular, that for every regular $\\theta>\\kappa$, we have generic embeddings $\\pi:H_\\theta\\to \\M$ with $\\crit \\pi=\\kappa$ such that $\\M\\in V$. Thus, by Proposition~\\ref{prop.coll}, we can assume that the generic embedding $\\pi$ exists in a $\\col(\\omega,H_\\theta)$-extension.\n\n\\qquad Let $\\mathbb P_\\kappa$ be the Easton support iteration that adds a Cohen subset to every regular $\\alpha<\\kappa$, and let $G\\subset\\mathbb P_\\kappa$ be $V$-generic. Standard computations show that $\\mathbb P_\\kappa$ preserves all inaccessible cardinals.\n\n\\qquad Fix a regular $\\theta\\gg\\kappa$ and let $h\\subseteq \\col(\\omega,H_\\theta)$ be $V[G]$-generic. 
In $V[h]$, we must have an elementary embedding $\\pi:H_\\theta\\to \\M$ with $\\crit \\pi=\\kappa$ and $\\M\\in V$, and we can assume without loss that $\\M$ is countable. Obviously, $\\pi\\in V[G][h]$. Working in $V[G][h]$, we will now lift $\\pi$ to an elementary embedding on $H_\\theta[G]$. To ensure that such a lift exists, it suffices to find in $V[G][h]$ an $\\M$-generic filter for $\\pi(\\mathbb P_\\kappa)$ containing $\\pi''G$\\footnote{This standard lemma is referred to in the literature as the \\textbf{lifting criterion}.}. Observe first that $\\pi''G=G$ since the critical point of $\\pi$ is $\\kappa$ and we can assume that $\\mathbb P_\\kappa\\subseteq V_\\kappa$. Next, observe that $\\pi(\\mathbb P_\\kappa)\\cong \\mathbb P_\\kappa*\\mathbb P_{\\text{tail}}$, where $\\mathbb P_{\\text{tail}}$ is the forcing beyond $\\kappa$. Since $\\M[G]$ is countable, we can build an $\\M[G]$-generic filter $G_{\\text{tail}}$ for $\\mathbb P_\\text{tail}$ in $V[G][h]$. Thus, $G*G_{\\text{tail}}$ is $\\M$-generic for $\\pi(\\mathbb P_\\kappa)$, and so we can lift $\\pi$ to $\\pi:H_\\theta[G]\\to \\M[G][G_{\\text{tail}}]$. Since $\\theta$ was chosen arbitrarily, we have just shown that $\\kappa$ is faintly measurable in $V[G]$.\n\n\\qquad Now suppose that $\\kappa$ is faintly power-measurable in $V[G]$. Fix regular $\\theta<\\bar\\theta$ and a generic elementary embedding $\\sigma:H_{\\bar\\theta}[G]\\to \\N$ with $\\crit\\sigma=\\kappa$ and $\\p(\\kappa)^{V[G]}=\\p(\\kappa)^{\\N}$. By elementarity, $H_{\\sigma(\\theta)}^{\\N}=\\sigma(H_\\theta)[\\sigma(G)]$ is a forcing extension of $K=\\sigma(H_\\theta)$ by $\\sigma(G)=G*\\bar G_{\\text{tail}}\\subseteq \\sigma(\\mathbb P_\\kappa)\\cong \\mathbb P_\\kappa*\\bar{\\mathbb P}_{\\text{tail}}$. Thus, we have the restrictions $\\sigma:H_\\theta\\to K$ and $\\sigma:H_\\theta[G]\\to K[G][\\bar G_{\\text{tail}}]$. Let us argue that $\\p^{V[G]}(\\kappa)\\subseteq \\p^{K[G]}(\\kappa)$, and hence we have equality. Suppose $A\\subseteq\\kappa$ in $V[G]$ and let $\\dot A$ be a nice $\\mathbb P_\\kappa$-name for $A$, which can be coded by a subset of $\\kappa$. Since $\\crit\\sigma=\\kappa$, we have that $\\dot A\\in K$, and hence $A=\\dot A_G\\in K[G]$. But now it follows that the $K[G]$-generic for ${\\rm Add}(\\kappa,1)$, the forcing at stage $\\kappa$ in $\\sigma(\\mathbb P_\\kappa)$, cannot be in $V[G]$. On the other hand, this generic \\textit{is} in $V[G]$: it is a subset of $\\kappa$ belonging to $\\N$, and $\\p(\\kappa)^{\\N}=\\p(\\kappa)^{V[G]}$ by assumption. Thus, we have reached a contradiction, showing that $\\kappa$ cannot be faintly power-measurable in $V[G]$.\n\n\\qquad If $\\Phi = \\text{measurable}$, then we are done at this point. For $\\Phi = \\text{prestrong}$ we simply note that $G\\in\\M[G*G_{\\text{tail}}]$ so that $H_\\theta^{V[G]}\\subseteq\\M[G*G_{\\text{tail}}]$ as well, and since we lifted $\\pi$, we still have $\\pi(\\kappa) > \\theta$ in the $\\Phi = \\text{strong}$ case.\n\n  \\qquad For $(ii)$, we change $\\mathbb P_\\kappa$ to only add Cohen subsets to \\textit{successor} cardinals $\\lambda<\\kappa$ and call the resulting forcing $\\mathbb P^*_\\kappa$. Let $G^*\\subseteq \\mathbb P^*_\\kappa$ be $V$-generic. We verify that $\\kappa$ is faintly $\\Phi$ as above by lifting an embedding $\\pi:H_\\theta\\to \\M$, with $\\M\\in V$, to\n  \\eq{\n    \\pi:H_\\theta[G^*]\\to \\M[G^*][G_{\\text{tail}}] \n  }\n  \n  in a collapse extension $V[G^*][h]$. 
The lifted embedding is $\\kappa$-powerset preserving because a subset of $\\kappa$ from $\\M[G^*][G_{\\text{tail}}]$ has to already be in $\\M[G^*]$ as $\\mathbb P_{\\text{tail}}$ is ${\\leq}\\kappa$-closed, and $\\M[G^*]\\subseteq V[G^*]$. So it remains to show that $\\kappa$ is not virtually $\\kappa^{++}$-prestrong. Suppose it is and fix a generic embedding $\\sigma:H_{\\kappa^{++}}[G^*]\\to K[G^*][G_{\\text{tail}}]$ with $\\crit\\sigma=\\kappa$ and $H_{\\kappa^{++}}[G^*]\\subseteq K[G^*][G_{\\text{tail}}]$. It follows that the generic subset of $\\kappa^+$ added at stage $\\kappa^+$ by the tail forcing must be $V[G^*]$-generic, which is a contradiction.\n}\n\nStarting from a much stronger hypothesis, it can be shown that a faintly power-measurable cardinal $\\kappa$ need not even be virtually $\\kappa^+$-measurable.\n\n\\qquad To see this, we first observe that virtually $\\theta$-measurable cardinals $\\kappa$ are $\\Pi^2_1$-indescribable for all $\\theta>\\kappa$. The proof is identical to the standard Hanf-Scott proof that measurable cardinals are $\\Pi^2_1$-indescribable; see e.g. Hanf-Scott's Proposition 6.5 in \\cite{Kanamori}. It should be noted that we crucially need the ``virtual'' property for the proof to go through. \n\n\\qquad Using this indescribability fact, the proof of the following theorem is precisely the same as that of Hamkins' Proposition 8.2 in \\cite{HolySchlicht}:\n\n\\theo[Virtualised Hamkins][theo.Hamkins]{\nAssuming $\\kappa$ is a $\\kappa^{++}$-tall cardinal,\\footnote{Recall that $\\kappa$ is \\textbf{$\\kappa^{++}$-tall} if there is an elementary embedding $j\\colon V\\to M$ with $\\crit j=\\kappa$, $^\\kappa M\\subset M$ and $j(\\kappa)>\\kappa^{++}$.} there is a forcing extension in which $\\kappa$ is not virtually $\\kappa^+$-measurable, but becomes measurable in a further $\\text{Add}(\\kappa^+,1)$-generic extension.\n}\n\nThis then immediately gives the separation result.\n\n\\coro[][coro.virtualsep]{\n    Assuming $\\kappa$ is a $\\kappa^{++}$-tall cardinal, it is consistent that $\\kappa$ is faintly power-measurable, but not virtually $\\kappa^+$-measurable.\n}\n\\proof{\n\tBy the above Theorem \\ref{theo.Hamkins} we may assume that $\\kappa$ is not virtually $\\kappa^+$-measurable but that it is measurable in $V^{\\mathbb P}$ for $\\mathbb P:=\\text{Add}(\\kappa^+,1)$, so that $\\kappa$ is faintly power-measurable.\n}\n\nIn contrast to the above separation result, note that we showed in Corollary~\\ref{coro.faintvirtualWoodin} and Theorem~\\ref{theo.vopwood} that the faint-virtual distinction vanishes when we are dealing with virtual pre-Woodin or Woodin cardinals.\n\n\\qquad Our next separation result concerns the virtually prestrong and virtually strong cardinals.\n\n\\coro[N.][coro.rank-into-rank-prestrong-strong]{\n  There exists a virtually rank-into-rank cardinal iff there is an uncountable cardinal $\\theta$ and a virtually $\\theta$-prestrong cardinal which is not virtually $\\theta$-strong.\n}\n\\proof{\n  $(\\Leftarrow)$ follows directly from the above Proposition \\ref{prop.superstrong-rank-into-rank} and Theorem \\ref{theo.virtchar}.\n\n  \\qquad $(\\Rightarrow)$: Here we have to show that if there exists a virtually rank-into-rank cardinal then there exist a cardinal $\\kappa$ and a $\\theta>\\kappa$ such that $\\kappa$ is virtually $\\theta$-prestrong but not virtually $\\theta$-strong. Let $(\\kappa,\\theta)$ be the lexicographically least pair such that $\\kappa$ is virtually $\\theta$-rank-into-rank, which trivially makes $\\kappa$ virtually $\\theta$-prestrong. 
If $\\kappa$ were also virtually $\\theta$-strong then it would be $\\Sigma_2$-reflecting\\footnote{See Exercise 20.18 in \\cite{Jech}, and Section \\ref{prelims.large-cardinals} for a definition of $\\Sigma_n$-reflecting cardinals.}, so that the statement that there exists a virtually rank-into-rank cardinal would reflect down to $H_\\kappa^V$, contradicting the minimality of $\\kappa$.\n}\n\n\\qquad Figure \\ref{fig.virtual-separations} summarises the separation results along with the results from Section \\ref{section.core-models}. Note that it \\textit{might} be the case that virtually $\\theta$-measurables are always virtually $\\theta$-prestrong (and hence also equivalent in $L[\\mu]$ and $K$ below a Woodin cardinal); see Question \\ref{ques.prestrong-equivalence}.\n\n\\figx[fig.virtual-separations][200]{virtual-separations.pdf}{Direct implications between virtuals, where the red lines with crosses indicate that $\\zfc$ does not prove the reverse implication.}\n\n\n\\section{Indestructibility}\n\nIt is well-known that supercompact cardinals $\\kappa$ can be made indestructible by all ${<}\\kappa$-directed closed forcings by a suitable \\textit{Laver preparatory forcing}, which is the main theorem in the seminal paper \\cite{laver-indestructibility}. A natural question, then, is whether similar results hold for the faintly and virtual versions. We noted in Proposition \\ref{prop.virtit} that the virtuals are weakly compact, so the following theorem from \\cite{mousestack} shows that the consistency strength of indestructible virtual supercompacts is very large, potentially even in the realm of supercompacts:\n\n\\theo[Schindler]{\n  The consistency strength of a weakly compact cardinal $\\kappa$ which is indestructible by ${<}\\kappa$-directed closed forcing is larger than the consistency strength of a proper class of strong cardinals and a proper class of Woodin cardinals.\n}\n\nThis gets close to resolving the question about the indestructible virtuals, so what about the faintly supercompact cardinals? To make things easier for ourselves, let us make the notion a bit stronger.\n\n\\defi{\n  Fix uncountable cardinals $\\kappa<\\theta$. Then $\\kappa$ is \\textbf{generically setwise $\\theta$-supercompact} if there exists a generic extension $V[g]$, a transitive $\\N\\in V[g]$ and a generic elementary embedding $\\pi\\colon H_\\theta^V\\to\\N$, $\\pi\\in V[g]$, with $\\crit\\pi=\\kappa$, $\\pi(\\kappa)>\\theta$ and $V[g]\\models{^{<\\theta}}\\N\\subset\\N$. If this holds for all $\\theta>\\kappa$ then we say that $\\kappa$ is \\textbf{generically setwise supercompact}.\n}\n\nNote that the only difference between a generically setwise $\\theta$-supercompact cardinal and a virtually $\\theta$-supercompact cardinal is that for the former the target model is closed under sequences \\textit{in the generic extension}, whereas for the latter it is only closed under sequences in $V$; i.e., $V\\cap{^{<\\theta}}\\N\\subset\\N$.\n\n\\qquad Ostensibly this is an incredibly strong notion, as the target model now inherits a lot of structure from the generic extension. A first stab at an upper consistency bound could be to note that if there exists a proper class of Woodin cardinals then $\\omega_1$ is generically setwise supercompact. 
This can be shown using the countable stationary tower; see 2.7.7 and 2.7.8 in \\cite{stationary-tower}.\n\n\\qquad But, surprisingly, the following result from Usuba shows that they can exist in $L$:\n\n\\qtheo[\\cite{usuba}][theo.usuba]{\n  If $\\kappa$ is virtually extendible then $\\col(\\omega,{<}\\kappa)$ forces that $\\omega_1$ is generically setwise supercompact.\n}\n\nIt turns out that this slightly stronger notion \\textit{does} have indestructibility properties. We warm up by first showing that they are indestructible by small forcing.\n\n\\prop[N.-Schlicht]{\n  Generic setwise supercompactness of $\\kappa$ is indestructible for forcing notions of size $<\\kappa$.\n}\n\\proof{\n  Fix a forcing $\\mathbb P$ of size ${<}\\kappa$, assume without loss of generality that $\\mathbb P\\in H_\\kappa^V$, and fix also a cardinal $\\theta>\\kappa$. Using the generic setwise supercompactness of $\\kappa$ we may fix a forcing $\\mathbb Q$ and a $V$-generic $h\\subseteq\\mathbb Q$ such that in $V[h]$ we have an elementary embedding $\\pi\\colon M:=H_\\theta^V\\rightarrow\\N$ with $\\N$ being ${<}\\theta$-closed.\n  \n  \\qquad Let $g\\subset\\mathbb P$ be $V[h]$-generic and work in $V[g\\times h]$. By the Lifting Criterion \\ref{prop.lifting-criterion} we get a lift $\\tilde{\\pi}\\colon M[g]\\rightarrow \\N[g]$ of $\\pi$. If $\\kappa$ is a limit cardinal, then we may choose a cardinal $\\lambda<\\kappa$ such that $\\mathbb P\\in H_\\lambda^V$. Since $\\mathbb P$ has the $\\lambda^+$-cc in $V$ we get that $\\pi(\\mathbb P)=\\mathbb P$ has the $\\lambda^+$-cc in $\\N$ and hence in $V[h]$ as well, making $\\N[g]$ ${<}\\theta$-closed by Lemma \\ref{lemm.lucke-schlicht}, and we are done.\n\n  \\qquad If $\\kappa=\\nu^+$, then there are no cardinals between $\\nu$ and $\\pi(\\kappa)$ in $\\N$ and hence $|\\theta|\\leq \\nu$. Thus it suffices to show that $\\N[g]$ is $\\nu$-closed. Since $\\pi(\\mathbb P)=\\mathbb P$ has size ${\\leq}\\nu$ in $V$, it has the $\\nu^{+V[h]}$-cc in $V[h]$. Therefore $\\N[g]$ is again ${<}\\theta$-closed by Lemma \\ref{lemm.lucke-schlicht}.\n}\n\nNext, we show that these generic setwise supercompact cardinals $\\kappa$ \\textit{are} in fact indestructible for ${<}\\kappa$-directed closed forcings, without having to do any preparation forcing at all.\n\n\\theo[N.-Schlicht]{\n  Generic setwise supercompactness of $\\kappa$ is indestructible for ${<}\\kappa$-directed closed forcings.\n}\n\\proof{\n  Suppose that $\\kappa$ is generically setwise supercompact, $\\mathbb P$ is a ${<}\\kappa$-directed closed forcing and $g$ is $\\mathbb P$-generic over $V$. We will show that $\\kappa$ is generically setwise supercompact in $V[g]$.\n\n  \\qquad In $V$ fix a regular $\\theta>\\kappa$ such that $\\mathbb P\\in H_\\theta^V$, and let $\\mathbb Q$ be the forcing given by the definition of generic setwise supercompactness. Let $h$ be $\\mathbb Q$-generic over $V[g]$. Let $\\pi\\colon H_\\theta^V\\rightarrow \\N$ be as in the definition of generically setwise supercompactness, so that $\\pi\\in V[h]\\subset V[g\\times h]$. Work in $V[g\\times h]$.\n  \n  \\qquad We may assume that $\\theta=\\theta^{<\\theta}$ holds, as otherwise we just replace $\\mathbb Q$ with $\\mathbb{Q}*\\col(\\theta,\\theta^{<\\theta})$ --- we retain the ${<}\\theta$-closure of $\\N$ because $\\col(\\theta,\\theta^{<\\theta})$ is ${<}\\theta$-closed. 
We can further assume that $\\abs{\\N}=\\theta^{<\\theta}=\\theta$, as otherwise we can take a hull of $\\N$ containing $\\ran(\\pi)$ and recursively close under ${<}\\theta$-sequences, ending up with a ${<}\\theta$-closed elementary substructure $\\h\\prec\\N$ containing $\\ran(\\pi)$, and then replace $\\N$ by the transitive collapse of $\\h$.\n\n  \\clai{\n    There is a $\\pi(\\mathbb P)$-generic filter $\\tilde{g}$ over $\\N$ that extends $\\pi[g]$.\n  }\n\n  \\cproof{\n    Since $\\N$ is (in particular) $\\abs{\\mathbb P}$-closed in $V[h]$ and $\\mathbb P$ is trivially $\\abs{\\mathbb P}^+$-cc, Lemma \\ref{lemm.lucke-schlicht} implies that $\\N$ is still $\\abs{\\mathbb P}$-closed in $V[g\\times h]$. We thus still get that $\\pi[g]\\in\\N$. Now work in $V[h]$, where we have full ${<}\\theta$-closure of $\\N$.\n    \n    \\qquad Since $\\pi(\\mathbb P)$ is ${<}\\pi(\\kappa)$-directed closed in $\\N$ and $\\pi[g]\\in\\N$ is a directed subset of size less than $\\pi(\\kappa)$, there is a condition $q\\leq \\pi[g]$ in $\\pi(\\mathbb P)$. Using the fact that $\\abs{\\N}=\\theta$ and $\\pi(\\mathbb P)$ is ${<}\\theta$-closed, we can construct a $\\pi(\\mathbb P)$-generic filter $\\tilde{g}$ over $\\N$ with $q\\in\\tilde{g}$. Namely, enumerate the dense subsets of $\\pi(\\mathbb P)$ that are elements of $\\N$ in order type $\\theta$ and use the fact that the initial segments of the sequence, and of the corresponding sequence of conditions that we construct, are in $\\N$. Then $\\tilde{g}$ is as required.\n  }\n\n    Since we now have that $\\pi[g]\\subseteq \\tilde{g}$ by the claim, the lifting criterion implies that we can lift $\\pi$ to $\\tilde{\\pi}\\colon H_\\theta^V[g]\\rightarrow\\N[\\tilde{g}]$.\n    \n    \\qquad It thus remains to see that $\\N[\\tilde{g}]$ is ${<}\\theta$-closed. To see this, take a sequence $\\vec{x}=\\langle x_i\\mid i< \\gamma\\rangle $ with $\\gamma<\\theta$ and $x_i\\in\\N[\\tilde{g}]$ and find names $\\sigma_i\\in\\N$ with $\\sigma_i^{\\tilde{g}}=x_i$ for all $i<\\gamma$. Since $^{<\\theta}\\N\\subseteq\\N$ we have that $\\vec{\\sigma}=\\langle \\sigma_i\\mid i<\\gamma\\rangle\\in\\N$, and from $\\vec{\\sigma}$ we obtain a canonical name $\\vec{\\sigma}^\\bullet\\in\\N$ with $\\vec{\\sigma}^{\\bullet {\\tilde{g}}}=\\vec{x}\\in\\N[\\tilde{g}]$.\n}\n\nInvestigating further, we also show indestructibility for some forcings that do not fall into the above-mentioned categories.\n\n\\prop[N.-Schlicht]{\n  Generic setwise supercompactness of a regular cardinal $\\kappa$ is indestructible for $\\add(\\omega, \\kappa)$. If $\\kappa$ is a successor cardinal then it is also indestructible for $\\col(\\omega, {<}\\kappa)$.\n}\n\\proof{\n  Let $g$ be $\\add(\\omega,\\kappa)$-generic over $V$. In $V$ fix a regular $\\theta>\\kappa$ and let $\\mathbb Q$ be the forcing given by the definition of generic setwise supercompactness. Let $h$ be $\\mathbb Q$-generic over $V[g]$ and work in $V[g\\times h]$.\n\n  \\qquad Let $\\pi\\colon H_\\theta^V\\rightarrow\\N$ be as in the definition of generically setwise supercompactness. Moreover, let $\\tilde{g}$ be $\\add(\\omega,\\pi(\\kappa))$-generic over $V[g\\times h]$. Since $\\pi[g]=g$, the lifting criterion allows us to extend $\\pi$ to some $\\tilde{\\pi}\\colon H_\\theta^V[g]\\rightarrow\\N[g\\times \\tilde{g}]$. To show that $\\N[g\\times \\tilde{g}]$ is ${<}\\theta$-closed in $V[g\\times h\\times \\tilde{g}]$, it suffices by Lemma \\ref{lemm.lucke-schlicht} that $\\add(\\omega,\\pi(\\kappa))$ has the ccc, which it does.\n\n  \\qquad For $\\col(\\omega,{<}\\kappa)$, we proceed similarly. Assume that $\\kappa=\\nu^+$. 
Take $\\col(\\omega,{<}\\kappa)$-, $\\mathbb Q$- and $\\col(\\omega,{<}\\pi(\\kappa))$-generic filters $g$, $h$ and $\\tilde{g}$, and let $\\pi$ and $\\N$ be as above. Since $\\nu<\\kappa<\\theta<\\pi(\\kappa)$ and there are no cardinals between $\\nu$ and $\\pi(\\kappa)$ (in $\\N$ and thus also in $V[h]$), ${<}\\theta$-closure means $\\nu$-closure (in any model containing $V[h]$). By Lemma \\ref{lemm.lucke-schlicht}, it is thus sufficient to know that $\\col(\\omega,{<}\\pi(\\kappa))$ has the $\\nu^+$-cc in $V[g\\times h]$. This is because $\\pi(\\kappa)=\\nu^{+\\N}\\leq\\nu^{+V[g\\times h]}$.\n}\n\nUsuba's Theorem \\ref{theo.usuba} shows that the \\textit{consistency strength} of these generically setwise supercompact cardinals is small, but do they appear naturally anywhere? The following result shows that we cannot find any in $L$, in $L[\\mu]$, or in $K$ below a measurable cardinal:\n\n\\prop[N.-Schlicht]{\n  No cardinal $\\kappa$ is generically setwise supercompact in $L$, in $K$ below a measurable cardinal, or in $L[\\mu]$ with $\\mu$ a normal ultrafilter.\n}\n\\proof{\n  Assume first that $V=L$ and that $\\kappa$ is generically setwise supercompact. Let $g$ be a generic filter and $\\pi\\colon L_\\theta\\rightarrow\\N$ an embedding in $V[g]$ with $\\pi\\restr L_{\\kappa^{+L}} \\in\\N$. Then $\\N=L_\\alpha$ for some $\\alpha$ by condensation and thus $\\pi\\restr H_{\\kappa^{+L}}\\in L$. But this would induce a ${<}\\kappa$-complete ultrafilter on $\\kappa$, contradicting $V=L$.\n\n  \\qquad Assume now that there exists no inner model with a measurable cardinal and let $K$ be the core model. Let $\\kappa$ be generically setwise supercompact in $K$ and $\\pi\\colon K|\\theta\\to\\N$ be the generic elementary embedding. As above we get that\n  \\eq{\n    \\pi\\restr H_{\\kappa^{+K}}\\in\\N, \n  }\n  \n  so that again $\\N$ has a measure on $\\kappa$. But that means that $K|\\kappa$ has a measurable cardinal; since $\\kappa$ is inaccessible (in both $V$ and $K$) we have $K|\\kappa\\models\\zfc$, so this contradicts our assumption that no inner model with a measurable cardinal exists.\n\n  \\qquad Assume now that $V=L[\\mu]$ and that $\\kappa$ is generically setwise supercompact, witnessed by a generic embedding $\\pi\\colon L_\\theta[\\mu]\\to L_\\alpha[\\bar\\mu]$. In particular this means that $\\pi\\restr L_{\\kappa^{+L[\\mu]}}[\\mu]\\in L_\\alpha[\\bar\\mu]$. If $\\crit\\mu<\\kappa$ then $\\mu=\\bar\\mu$ and\n  \\eq{\n    \\p^{L[\\mu]}(\\kappa)=\\p^{L[\\bar\\mu]}(\\kappa), \n  }\n  \n  so that both $\\pi(\\kappa)$ and $\\kappa$ are now measurable cardinals in $L[\\bar\\mu]$, contradicting Solovay's Lemma \\cite[20.2]{Kanamori}. So $\\crit\\mu\\geq\\kappa$.\n\n  \\qquad If $\\pi(\\crit\\mu)>\\crit\\mu$ then by Kunen's Theorem \\cite[20.12]{Kanamori} we get that $L[\\bar\\mu]$ is an iterate of $L[\\mu]$. 
But iteration embeddings preserve the subsets of their critical point, so again we have that $\\p^{L[\\mu]}(\\kappa)=\\p^{L[\\bar\\mu]}(\\kappa)$ and we get the same contradiction as before.\n\n  \\qquad Lastly, if $\\crit\\mu>\\kappa$ and $\\pi(\\crit\\mu)=\\crit\\mu$ then $\\mu=\\bar\\mu$ by Kunen's Theorem \\cite[20.10]{Kanamori}, so we get a contradiction as in the $\\crit\\mu<\\kappa$ case.\n}\n\n\\end{document}\n", "meta": {"hexsha": "33c20e461486d72da2dc2457fe863b40ec916c89", "size": 105785, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "chapters/part1/virtual-large-cardinals.tex", "max_stars_repo_name": "saattrupdan/phd", "max_stars_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapters/part1/virtual-large-cardinals.tex", "max_issues_repo_name": "saattrupdan/phd", "max_issues_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapters/part1/virtual-large-cardinals.tex", "max_forks_repo_name": "saattrupdan/phd", "max_forks_repo_head_hexsha": "21481596be517c874e311797f5a70829e0cba7d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.4496466431, "max_line_length": 1287, "alphanum_fraction": 0.6963463629, "num_tokens": 34039, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998508568416, "lm_q2_score": 0.7090191399336402, "lm_q1q2_score": 0.5511204717250646}}
{"text": "\\section{Clustering \\textnormal{\\sffamily --- Unsupervised}}\n\n% ===\n\\emph{k-Means}\n%\\emph{Assignement}: $C_i \\!=\\! \\{ x\\!\\in\\!\\mathbb{R}^d \\,\\vert\\, i\\!\\in\\!\\arg\\min\\limits_j \\norm{x\\!-\\!\\mu_j} \\}$\n\n$\\hat{R}(\\mu) = \\sum_{i=1}^n \\min\\limits_{j\\in\\{1,\\ldots,k\\}} \\norm{x_i-\\mu_j}_2^2 = \\sum_i d(x_i,\\mu_j)$\n\n$\\hat{\\mu} =  \\arg\\!\\min\\limits_\\mu \\hat{R}(\\mu)$ {\\color{gray}$\\leftarrow$ non-convex, NP-hard}\\vspace{3pt}\n\n\\begin{highlightbox}\n\t\\vspace{-3pt}\n\t\\textbf{Lloyd's heuristic}: \\textit{Initialize} centers $\\mu_{1:k}^{(0)}$,\\\\\n\t\\textit{assign} points to closest center {\\small\\color{gray} $\\to \\arg\\min$},\n\t\\textit{update centers} to mean of each cluster, repeat\n\\end{highlightbox}\n\n% ===\n\\emph{k-Means++:}\n\\enskip $\\to$ for initialization $2\\ldots j$\n\n$\\operatorname{P}(\\textrm{pick }x_l) = \\frac{1}{z} d(x_i - \\mu_{1:j-1})$, \\enskip $\\mu_j^{(0)} \\leftarrow x_i$\n\n%- Start with random data point as center\\\\\n%- Add centers 2 to k randomly, proportionally to squared distance to closest selected center\\\\\n%for $j=2$ to $k$:\n%$i_j$ sampled with prob.\\\\\n%$P(i_j=i) = \\frac{1}{z} \\underset{1\\leq l<j}{min}||x_i-\\mu_l||_2^2$; $\\mu_j \\leftarrow x_{i_j}$\n\n% ===\n\\emph{Optimal \\textit{k}-value:}\n\\enskip $\\to$ ``Elbow'' trick \\textit{or}\n\nRegularization $L(\\mu) = \\min\\limits_k \\min\\limits_{\\mu_k} \\hat R(\\mu_{1:k}) + \\lambda\\,k$\n", "meta": {"hexsha": "bdb29815ade78665175fbe898691182d9d61d83e", "size": 1326, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/IML19/sections/Clustering.tex", "max_stars_repo_name": "tstreule/eth-cheat-sheets", "max_stars_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-26T23:11:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T23:11:57.000Z", "max_issues_repo_path": "src/IML19/sections/Clustering.tex", "max_issues_repo_name": "tstreule/eth-cheat-sheets", "max_issues_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/IML19/sections/Clustering.tex", "max_forks_repo_name": "tstreule/eth-cheat-sheets", "max_forks_repo_head_hexsha": "c61f9fd3b13edf405f790581b4d5eacb50b4f1c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.8857142857, "max_line_length": 114, "alphanum_fraction": 0.6319758673, "num_tokens": 542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7772998611746912, "lm_q2_score": 0.7090191214879991, "lm_q1q2_score": 0.5511204647028232}}
{"text": "\\section{Application: Principal component analysis}\n\n\\begin{outcome}\n  \\begin{enumerate}\n  \\item Compute the principal components of a matrix $A$.\n  \\item Compute the centroid of a collection of data points.\n  \\item Find the $k$-dimensional subspace that best approximates a\n    given collection of data points.\n  \\item Find the $k$-dimensional affine subspace that best\n    approximates a given collection of data points.\n  \\item Compute the total squared distance of the data points to the\n    best fit subspace (or best fit affine subspace).\n  \\end{enumerate}\n\\end{outcome}\n\nIn this section, we will explore an application of the diagonalization\nof symmetric matrices called \\textbf{principal component analysis}.\nImagine we are given a collection of data points such as the\nfollowing:\n\\begin{equation}\\label{eqn:subspace-fitting}\n  \\begin{tikzpicture}[baseline=-0.5ex]\n    \\draw[thin,->] (-4,0) -- (4,0);\n    \\draw[thin,->] (0,-4) -- (0,4);\n    % \\draw[thick,blue] (-4,-4/1.29) -- (4,4/1.29);\n    \\fill[color=red] (-2.25,-1.21) circle (0.06);\n    \\fill[color=red] (-1.44,-1.28) circle (0.06);\n    \\fill[color=red] (2.29,1.82) circle (0.06);\n    \\fill[color=red] (0.25,0.43) circle (0.06);\n    \\fill[color=red] (-1.57,-1.07) circle (0.06);\n    \\fill[color=red] (2.51,2.00) circle (0.06);\n    \\fill[color=red] (-2.53,-2.34) circle (0.06);\n    \\fill[color=red] (-3.04,-2.40) circle (0.06);\n    \\fill[color=red] (0.84,0.36) circle (0.06);\n    \\fill[color=red] (-3.16,-2.73) circle (0.06);\n    \\fill[color=red] (-2.02,-1.88) circle (0.06);\n    \\fill[color=red] (3.06,2.44) circle (0.06);\n    \\fill[color=red] (-1.40,-1.12) circle (0.06);\n    \\fill[color=red] (-2.09,-1.34) circle (0.06);\n    \\fill[color=red] (-1.23,-1.13) circle (0.06);\n    \\fill[color=red] (0.48,0.06) circle (0.06);\n    \\fill[color=red] (-1.49,-1.20) circle (0.06);\n    \\fill[color=red] (0.98,0.75) circle (0.06);\n    \\fill[color=red] (1.40,0.89) circle (0.06);\n    \\fill[color=red] (-0.12,0.09) circle (0.06);\n    \\fill[color=red] (0.88,0.40) circle (0.06);\n    \\fill[color=red] (-0.38,0.55) circle (0.06);\n    \\fill[color=red] (0.34,0.46) circle (0.06);\n    \\fill[color=red] (0.45,0.21) circle (0.06);\n    \\fill[color=red] (-0.59,-0.63) circle (0.06);\n    \\fill[color=red] (-0.53,-0.29) circle (0.06);\n    \\fill[color=red] (2.10,1.35) circle (0.06);\n    \\fill[color=red] (1.65,1.60) circle (0.06);\n    \\fill[color=red] (-3.88,-2.83) circle (0.06);\n    \\fill[color=red] (-0.07,-0.01) circle (0.06);\n    \\fill[color=red] (-0.37,-0.57) circle (0.06);\n    \\fill[color=red] (-0.99,-0.75) circle (0.06);\n    \\fill[color=red] (-2.34,-2.08) circle (0.06);\n    \\fill[color=red] (3.63,3.15) circle (0.06);\n    \\fill[color=red] (1.37,0.48) circle (0.06);\n    \\fill[color=red] (-0.96,-0.74) circle (0.06);\n    \\fill[color=red] (3.51,2.79) circle (0.06);\n    \\fill[color=red] (-3.33,-2.72) circle (0.06);\n    \\fill[color=red] (0.39,0.09) circle (0.06);\n    \\fill[color=red] (-1.38,-1.17) circle (0.06);\n    \\fill[color=red] (-0.36,-0.65) circle (0.06);\n    \\fill[color=red] (1.38,0.66) circle (0.06);\n    \\fill[color=red] (1.85,1.24) circle (0.06);\n    \\fill[color=red] (2.35,1.85) circle (0.06);\n    \\fill[color=red] (0.85,0.25) circle (0.06);\n    \\fill[color=red] (-0.23,0.19) circle (0.06);\n    \\fill[color=red] (-0.90,-0.74) circle (0.06);\n    \\fill[color=red] (2.50,1.31) circle (0.06);\n    \\fill[color=red] (-0.45,-0.71) circle (0.06);\n    
\\fill[color=red] (0.92,0.56) circle (0.06);\n    \\fill[color=red] (0.97,1.21) circle (0.06);\n    \\fill[color=red] (-0.85,-0.74) circle (0.06);\n    \\fill[color=red] (-0.10,0.13) circle (0.06);\n    \\fill[color=red] (0.32,0.49) circle (0.06);\n    \\fill[color=red] (2.58,1.57) circle (0.06);\n    \\fill[color=red] (-0.59,-0.48) circle (0.06);\n    \\fill[color=red] (2.28,1.50) circle (0.06);\n    \\fill[color=red] (1.21,0.94) circle (0.06);\n    \\fill[color=red] (-0.35,-0.13) circle (0.06);\n    \\fill[color=red] (-1.53,-1.26) circle (0.06);\n    \\fill[color=red] (-2.77,-2.14) circle (0.06);\n    \\fill[color=red] (1.23,1.02) circle (0.06);\n    \\fill[color=red] (2.61,1.88) circle (0.06);\n    \\fill[color=red] (-0.04,-0.05) circle (0.06);\n    \\fill[color=red] (2.09,1.30) circle (0.06);\n    \\fill[color=red] (2.37,1.52) circle (0.06);\n    \\fill[color=red] (-2.01,-1.69) circle (0.06);\n    \\fill[color=red] (0.48,0.72) circle (0.06);\n    \\fill[color=red] (0.23,0.49) circle (0.06);\n    \\fill[color=red] (-1.16,-0.95) circle (0.06);\n    \\fill[color=red] (-0.14,-0.18) circle (0.06);\n    \\fill[color=red] (-1.57,-1.68) circle (0.06);\n    \\fill[color=red] (0.39,0.15) circle (0.06);\n    \\fill[color=red] (-0.24,-0.29) circle (0.06);\n  \\end{tikzpicture}\n\\end{equation}\nAlthough these points are spread out in two dimensions, they seem to\nbe located pretty close to a 1-dimensional subspace. Probably the best\nway to interpret this particular data set is to think of the points as\nbeing ``essentially'' on a line, up to some small random errors.\n\nMore generally, suppose we are given a collection of data points in\n$n$-dimensional space, and we are looking for a $k$-dimensional\nsubspace that all data points are close to.  This is an important way\nto make sense of high-dimensional data. For example, it would be very\ndifficult to visualize data in a $100$-dimensional space. However, if\nwe knew that the data points lie very close to a 2-dimensional\nsubspace, then we could project all of the points to the subspace to\nobtain a 2-dimensional image of the data.\n\nTo state the problem more precisely, let us introduce the following\nnotation. If $W$ is a subspace of $\\R^n$ and $\\vect{v}\\in\\R^n$ is a\nvector, let us write $d(\\vect{v},W)$ for the shortest distance from\n$\\vect{v}$ to $W$ (i.e., the distance from $\\vect{v}$ to $W$ along a\nline that is perpendicular to $W$). Moreover, if $W$ is a subspace of\n$\\R^n$ and $\\vect{v}_1,\\ldots,\\vect{v}_m\\in\\R^n$ are the position\nvectors of $m$ points, we define the \\textbf{total squared distance}%\n\\index{distance!total squared distance}%\n\\index{squared distance}%\n\\index{total squared distance}%\n\\index{subspace fitting!total squared distance} of the points to\nthe subspace to be the quantity\n\\begin{equation*}\n  D = d(\\vect{v}_1,W)^2 + \\ldots + d(\\vect{v}_m,W)^2.\n\\end{equation*}\nThen the problem we would like to solve can be stated as follows:\n\n\\begin{problem}{Subspace fitting problem}{subspace-fitting}\n  Given vectors $\\vect{v}_1,\\ldots,\\vect{v}_m\\in\\R^n$ and given an\n  integer $k\\leq n$, find the $k$-dimensional subspace\n  $W\\subseteq\\R^n$ that minimizes the total squared distance, i.e.,\n  such that $D$ is as small as possible%\n  \\index{subspace fitting}%\n  \\index{fitting!subspace fitting}.\n\\end{problem}\n\nThe following proposition gives us a method for solving the subspace\nfitting problem. 
It turns out that the key ingredient in solving this\nproblem is the diagonalization of symmetric matrices. The method was\ndiscovered by Gale Young%\n\\index{Young, Gale}%\n\\index{Gale Young} in 1937.\n\n\\begin{proposition}{Solution of the subspace fitting problem}{subspace-fitting}\n  Given vectors $\\vect{v}_1,\\ldots,\\vect{v}_m\\in\\R^n$ and $k\\leq n$,\n  the optimal solution to the subspace fitting problem can be computed\n  as follows:\n  \\begin{enumerate}\n  \\item Let $A$ be the $m\\times n$-matrix whose rows are\n    $\\vect{v}_1^T,\\ldots,\\vect{v}_m^T$. (Or equivalently, $A^T$ is the\n    $n\\times m$-matrix whose columns are\n    $\\vect{v}_1,\\ldots,\\vect{v}_m$.)\n  \\item Let $B=A^TA$. Then $B$ is a positive semidefinite\n    $n\\times n$-matrix.\n  \\item By Proposition~\\ref{prop:characterize-positive}, all\n    eigenvalues of $B$ are real and non-negative. Let\n    $\\eigenvar_1,\\ldots,\\eigenvar_n$ be the eigenvalues of $B$, listed\n    according to their multiplicity and in decreasing order, i.e., so\n    that $\\eigenvar_1\\geq\\eigenvar_2\\geq\\ldots\\geq\\eigenvar_n\\geq\n    0$. Let $\\vect{u}_1,\\ldots,\\vect{u}_n$ be the corresponding\n    eigenvectors.\n  \\item Then $W=\\sspan\\set{\\vect{u}_1,\\ldots,\\vect{u}_k}$ is the\n    solution to the subspace fitting problem.  Moreover, the total\n    squared distance of the points to this subspace is\n    \\begin{equation*}\n      D = \\eigenvar_{k+1} + \\ldots + \\eigenvar_n.\n    \\end{equation*}\n  \\end{enumerate}\n\\end{proposition}\n\n\\begin{example}{Subspace fitting in $\\R^2$}{subspace-fitting-r2}\n  Consider the following collection of points in $\\R^2$:\n  \\begin{equation*}\n    \\set{\n      \\begin{mymatrix}{r}  2 \\\\ -3 \\end{mymatrix},\n      \\begin{mymatrix}{r} -1 \\\\  0 \\end{mymatrix},\n      \\begin{mymatrix}{r}  2 \\\\  3 \\end{mymatrix},\n      \\begin{mymatrix}{r} -6 \\\\ -7 \\end{mymatrix},\n      \\begin{mymatrix}{r}  6 \\\\ 11 \\end{mymatrix},\n      \\begin{mymatrix}{r}  0 \\\\ -1 \\end{mymatrix},\n      \\begin{mymatrix}{r}  1 \\\\  6 \\end{mymatrix},\n      \\begin{mymatrix}{r} -2 \\\\ -3 \\end{mymatrix},\n      \\begin{mymatrix}{r} -7 \\\\ -6 \\end{mymatrix}\n    }.\n  \\end{equation*}\n  Find the 1-dimensional subspace that best approximates this\n  collection of points. What is the total squared distance of the\n  points to the subspace?\n\\end{example}\n\n\\begin{solution}\n  We follow the steps outlined in\n  Proposition~\\ref{prop:subspace-fitting}.\n  \\begin{enumerate}\n  \\item We have\n    \\begin{equation*}\n      A^T = \\begin{mymatrix}{rrrrrrrrr}\n        2 & -1 & 2 & -6 & 6 & 0 & 1 & -2 & -7 \\\\\n        -3 & 0 & 3 & -7 & 11 & -1 & 6 & -3 & -6\n      \\end{mymatrix}.\n    \\end{equation*}\n  \\item We calculate\n    \\begin{equation*}\n      B = A^TA = \\begin{mymatrix}{rr}\n        135 & 162 \\\\\n        162 & 270\n      \\end{mymatrix}.\n    \\end{equation*}\n  \\item The eigenvalues of $B$ are $\\eigenvar_1 = 378$ and\n    $\\eigenvar_2 = 27$, with corresponding eigenvectors\n    \\begin{equation*}\n      \\vect{u}_1 = \\begin{mymatrix}{r} 2 \\\\ 3 \\end{mymatrix}\n      \\quad\\mbox{and}\\quad\n      \\vect{u}_2 = \\begin{mymatrix}{r} 3 \\\\ -2 \\end{mymatrix}.\n    \\end{equation*}\n  \\item The desired subspace $W$ is spanned by the eigenvector\n    corresponding to the largest eigenvalue, i.e.,\n    $W=\\sspan\\set{\\vect{u}_1}$. 
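(As a quick check of these eigendata, note that the characteristic polynomial of $B$ is $\\eigenvar^2 - 405\\eigenvar + 10206 = (\\eigenvar - 378)(\\eigenvar - 27)$, and that indeed $B\\vect{u}_1 = 378\\,\\vect{u}_1$.) 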
The total squared distance is\n    $\\eigenvar_2 = 27$.\n  \\end{enumerate}\n  The space $W$ is shown in the following illustration, along with the\n  original points:\n  \\begin{equation*}\n    \\begin{tikzpicture}[scale=0.15]\n      \\draw[thin,->] (-12,0) -- (12,0);\n      \\draw[thin,->] (0,-12) -- (0,12);\n      \\draw[thin] (0,10) -- (-1,10) node[left] {$10$};\n      \\draw[thin] (10,0) -- (10,-1) node[below] {$10$};\n      \\draw[thick,blue] (-8,-12) -- (8,12);\n      \\fill[color=red] (2,-3) circle (0.4);\n      \\fill[color=red] (-1, 0) circle (0.4);\n      \\fill[color=red] (2, 3) circle (0.4);\n      \\fill[color=red] (-6, -7) circle (0.4);\n      \\fill[color=red] (6, 11) circle (0.4);\n      \\fill[color=red] (0, -1) circle (0.4);\n      \\fill[color=red] (1, 6) circle (0.4);\n      \\fill[color=red] (-2, -3) circle (0.4);\n      \\fill[color=red] (-7, -6) circle (0.4);\n    \\end{tikzpicture}\n  \\end{equation*}\n  Of course, the example was rigged to ensure that the eigenvalues are\n  integers. In real life, the entries of $A$ and $B$, as well as the\n  eigenvalues and components of the eigenvectors are usually arbitrary\n  real numbers.\n\\end{solution}\n\n\\begin{example}{Subspace fitting in $\\R^3$}{subspace-fitting-r3}\n  Consider the following collection of points in $\\R^3$:\n  \\begin{equation*}\n    \\set{\n      \\begin{mymatrix}{r} -7 \\\\ 4 \\\\ 5 \\end{mymatrix},\n      \\begin{mymatrix}{r} 0 \\\\ 3 \\\\ 3 \\end{mymatrix},\n      \\begin{mymatrix}{r} 2 \\\\ -5 \\\\ -4 \\end{mymatrix},\n      \\begin{mymatrix}{r} 10 \\\\ -4 \\\\ 1 \\end{mymatrix},\n      \\begin{mymatrix}{r} -2 \\\\ 5 \\\\ 4 \\end{mymatrix},\n      \\begin{mymatrix}{r} -8 \\\\ -1 \\\\ -5 \\end{mymatrix},\n      \\begin{mymatrix}{r} 5 \\\\ 4 \\\\ 2 \\end{mymatrix},\n      \\begin{mymatrix}{r} -6 \\\\ 9 \\\\ 6 \\end{mymatrix},\n      \\begin{mymatrix}{r} 9 \\\\ -6 \\\\ 3 \\end{mymatrix},\n      \\begin{mymatrix}{r} -2 \\\\ -7 \\\\ -8 \\end{mymatrix}\n    }.\n  \\end{equation*}\n  \\begin{enumialphparenastyle}\n    \\begin{enumerate}\n    \\item Find the 1-dimensional subspace that best approximates this\n      collection of points.\n    \\item Find the 2-dimensional subspace that best approximates this\n      collection of points.\n    \\item What is the 3-dimensional subspace that best approximates this\n      collection of points?\n    \\end{enumerate}\n  \\end{enumialphparenastyle}\n  In each case, what is the total squared distance of the points to\n  the subspace?\n\\end{example}\n\n\\begin{solution}\n  Again, we follow the steps from\n  Proposition~\\ref{prop:subspace-fitting}. 
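(As in the previous example, the eigenvalues computed below can be sanity-checked: they sum to $513+306+27=846$, which is exactly the trace $367+274+205$ of the matrix $B$ below.) 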
We can do the calculations\n  for parts (a), (b), and (c) at the same time.\n  \\begin{enumerate}\n  \\item We have\n    \\begin{equation*}\n      A^T = \\begin{mymatrix}{rrrrrrrrrr}\n        -7 & 0 & 2 & 10 & -2 & -8 & 5 & -6 & 9 & -2 \\\\\n        4 & 3 & -5 & -4 & 5 & -1 & 4 & 9 & -6 & -7 \\\\\n        5 & 3 & -4 & 1 & 4 & -5 & 2 & 6 & 3 & -8 \\\\\n      \\end{mymatrix}.\n    \\end{equation*}\n  \\item We calculate\n    \\begin{equation*}\n      B = A^TA = \\begin{mymatrix}{rrr}\n        367 & -154 & 16 \\\\\n        -154 & 274 & 170 \\\\\n        16 & 170 & 205 \\\\\n      \\end{mymatrix}.\n    \\end{equation*}\n  \\item The eigenvalues of $B$ are $\\eigenvar_1 = 513$,\n    $\\eigenvar_2 = 306$, and $\\eigenvar_3 = 27$, with corresponding\n    eigenvectors\n    \\begin{equation*}\n      \\vect{u}_1 = \\begin{mymatrix}{r} -2 \\\\ 2 \\\\ 1 \\end{mymatrix},\n      \\quad\n      \\vect{u}_2 = \\begin{mymatrix}{r} 2 \\\\ 1 \\\\ 2 \\end{mymatrix}\n      \\quad\\mbox{and}\\quad\n      \\vect{u}_3 = \\begin{mymatrix}{r} -1 \\\\ -2 \\\\ 2 \\end{mymatrix}.\n    \\end{equation*}\n  \\end{enumerate}\n\n  For part (a), the desired 1-dimensional subspace is spanned by the\n  eigenvector corresponding to the largest eigenvalue, i.e., it is\n  $\\sspan\\set{\\vect{u}_1}$. The total squared distance is\n  $\\eigenvar_2+\\eigenvar_3 = 306 + 27 = 333$.\n\n  For part (b), the desired 2-dimensional subspace is spanned by the\n  eigenvectors corresponding to the two largest eigenvalues, i.e., it\n  is $\\sspan\\set{\\vect{u}_1,\\vect{u}_2}$. The total squared distance is\n  $\\eigenvar_3 = 27$.\n\n  Finally, in part (c), the desired 3-dimensional subspace is spanned\n  by all three eigenvectors; it is of course $\\R^3$ itself, since it\n  is the only 3-dimensional subspace. The total squared distance is\n  $0$, since all points lie in the subspace.\n\\end{solution}\n\nThe vectors $\\vect{u}_1,\\ldots,\\vect{u}_n$ that appear in the solution\nof the subspace fitting problem are called the \\textbf{principal\n  components} of the matrix $A$.\n\n\\begin{definition}{Principal components}{principal-components}\n  Let $A$ be an $m\\times n$-matrix. The \\textbf{principal components}%\n  \\index{principal component}%\n  \\index{component!principal component} of $A$ are the (normalized,\n  orthogonal) eigenvectors $\\vect{u}_1,\\ldots,\\vect{u}_n$ of the\n  positive semidefinite $n\\times n$-matrix $A^TA$. They are usually\n  listed in order of decreasing eigenvalues.\n\\end{definition}\n\nThe first principal component $\\vect{u}_1$ gives the direction in\nwhich the rows of $A$ show the most variability. The second principal\ncomponent $\\vect{u}_2$ gives the direction in which the rows of $A$\nshow the most remaining variability that is orthogonal to\n$\\vect{u}_1$. The third principal component $\\vect{u}_3$ gives the\ndirection of most variability that is orthogonal to $\\vect{u}_1$ and\n$\\vect{u}_2$, and so on.\n\n% ----------------------------------------------------------------------\n\\subsection*{Subspace fitting vs. curve fitting}\n\nIn the particular case where $n=2$ and $k=1$, we are looking for a\n1-dimensional subspace, i.e., a line through the origin, which best\nfits the given 2-dimensional data, as in the illustration\n{\\eqref{eqn:subspace-fitting}} above or as in\nExample~\\ref{exa:subspace-fitting-r2}. On its face, the subspace\nfitting problem in this case seems similar to the linear curve\nfitting%\n\\index{curve fitting} problem we solved in\nSection~\\ref{sec:least-squares}. 
However, there is a subtle but\nimportant difference: in linear curve fitting, we were seeking to\nminimize the distances of the points from the line in the\n$y$-direction, whereas in subspace fitting, we are seeking to minimize\nthe distances of the points from the subspace in the direction\nperpendicular to the subspace. The following pair of pictures\nillustrates the difference:\n\\begin{equation*}\n  \\begin{tikzpicture}[scale=0.8]\n    \\draw[thin,->] (-3,0) -- (3,0);\n    \\draw[thin,->] (0,-3) -- (0,3);\n    \\draw[thick,blue] (-3,-3*0.75) -- (3,3*0.75);\n    \\fill[color=red] (0,0) circle (0.12);\n    \\fill[color=red] (2,0) circle (0.12);\n    \\fill[color=red] (2,3) circle (0.12);\n    \\fill[color=red] (-2,0) circle (0.12);\n    \\fill[color=red] (-2,-3) circle (0.12);\n    \\draw[<->,black!50] (2,0) -- (2,1.5);\n    \\draw[<->,black!50] (2,3) -- (2,1.5);\n    \\draw[<->,black!50] (-2,0) -- (-2,-1.5);\n    \\draw[<->,black!50] (-2,-3) -- (-2,-1.5);\n    \\path (0,-3.5) node[below] {Linear curve fitting};\n    \\path[black!70] (2,-2) node {\\begin{tabular}{c}minimize\\\\vertical\\\\distances\\end{tabular}};\n  \\end{tikzpicture}\n  \\hspace{1in}\n  \\begin{tikzpicture}[scale=0.8]\n    \\draw[thin,->] (-3,0) -- (3,0);\n    \\draw[thin,->] (0,-3) -- (0,3);\n    \\draw[thick,blue] (-3*0.92,-3) -- (3*0.92, 3);\n    \\fill[color=red] (0,0) circle (0.12);\n    \\fill[color=red] (2,0) circle (0.12);\n    \\fill[color=red] (2,3) circle (0.12);\n    \\fill[color=red] (-2,0) circle (0.12);\n    \\fill[color=red] (-2,-3) circle (0.12);\n    \\draw[<->,black!50] (2,0) -- (0.997*0.92,0.997);\n    \\draw[<->,black!50] (2,3) -- (2.621*0.92,2.621);\n    \\draw[<->,black!50] (-2,0) -- (-0.997*0.92,-0.997);\n    \\draw[<->,black!50] (-2,-3) -- (-2.621*0.92,-2.621);\n    \\path (0,-3.5) node[below] {Subspace fitting};\n    \\path[black!70] (2,-2) node {\\begin{tabular}{c}minimize\\\\distances\\\\perpendicular\\\\to subspace\\end{tabular}};\n  \\end{tikzpicture}\n\\end{equation*}\n\n% ----------------------------------------------------------------------\n\\subsection*{Affine fitting}\n\nSo far, we have been looking to approximate a given collection of\npoints by a {\\em subspace}, which necessarily passes through the\norigin. 
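Incidentally, the recipe of Proposition~\\ref{prop:subspace-fitting} is straightforward to carry out numerically. The following short Python sketch is illustrative only: the function name \\texttt{best\\_fit\\_subspace} is ours, and it relies on NumPy's \\texttt{numpy.linalg.eigh}, which returns the eigenvalues of a symmetric matrix in ascending order.\n\\begin{verbatim}\nimport numpy as np\n\ndef best_fit_subspace(A, k):\n    # A is the m-by-n matrix whose rows are the data points.\n    B = A.T @ A                           # positive semidefinite n-by-n\n    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order\n    order = np.argsort(eigvals)[::-1]     # re-sort into decreasing order\n    eigvals = eigvals[order]\n    eigvecs = eigvecs[:, order]\n    basis = eigvecs[:, :k]                # columns are u_1, ..., u_k\n    D = eigvals[k:].sum()                 # total squared distance\n    return basis, D\n\\end{verbatim}\nOn the data of our first example above, this returns (up to sign) the normalized direction of $\\vect{u}_1$ and the total squared distance $D=27$. Note that this sketch, like everything so far, fits a subspace through the origin.\n\n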
But sometimes the points may not be near the origin, as in\nthis example:\n\\begin{equation*}\n  \\begin{tikzpicture}[baseline=-0.5ex, scale=0.8]\n    \\draw[thin,->] (-4,0) -- (4,0);\n    \\draw[thin,->] (0,-4) -- (0,4);\n    % \\draw[thick, blue] (0.91,1.76) +(-6,-6/-3.20) -- +(6,6/-3.20);\n    \\fill[color=red] (-0.54,2.53) circle (0.075);\n    \\fill[color=red] (4.26,0.53) circle (0.075);\n    \\fill[color=red] (-0.24,2.38) circle (0.075);\n    \\fill[color=red] (1.98,1.52) circle (0.075);\n    \\fill[color=red] (0.28,2.14) circle (0.075);\n    \\fill[color=red] (5.71,0.05) circle (0.075);\n    \\fill[color=red] (2.90,1.56) circle (0.075);\n    \\fill[color=red] (-3.21,2.97) circle (0.075);\n    \\fill[color=red] (5.44,-0.09) circle (0.075);\n    \\fill[color=red] (-0.37,1.87) circle (0.075);\n    \\fill[color=red] (-0.46,2.02) circle (0.075);\n    \\fill[color=red] (2.21,1.49) circle (0.075);\n    \\fill[color=red] (-0.91,2.23) circle (0.075);\n    \\fill[color=red] (0.81,1.80) circle (0.075);\n    \\fill[color=red] (0.47,1.91) circle (0.075);\n    \\fill[color=red] (0.96,2.34) circle (0.075);\n    \\fill[color=red] (-0.93,2.29) circle (0.075);\n    \\fill[color=red] (-0.35,2.09) circle (0.075);\n    \\fill[color=red] (1.99,1.11) circle (0.075);\n    \\fill[color=red] (2.34,1.31) circle (0.075);\n    \\fill[color=red] (-0.99,2.30) circle (0.075);\n    \\fill[color=red] (-2.26,2.78) circle (0.075);\n    \\fill[color=red] (4.07,0.84) circle (0.075);\n    \\fill[color=red] (0.50,2.19) circle (0.075);\n    \\fill[color=red] (-1.08,2.76) circle (0.075);\n    \\fill[color=red] (-0.10,2.07) circle (0.075);\n    \\fill[color=red] (1.49,1.46) circle (0.075);\n    \\fill[color=red] (2.05,1.08) circle (0.075);\n    \\fill[color=red] (4.18,0.74) circle (0.075);\n    \\fill[color=red] (0.29,1.98) circle (0.075);\n    \\fill[color=red] (-1.37,2.64) circle (0.075);\n    \\fill[color=red] (0.75,2.22) circle (0.075);\n    \\fill[color=red] (0.55,1.91) circle (0.075);\n    \\fill[color=red] (2.41,1.73) circle (0.075);\n    \\fill[color=red] (3.70,1.02) circle (0.075);\n    \\fill[color=red] (1.73,1.85) circle (0.075);\n    \\fill[color=red] (-0.03,1.99) circle (0.075);\n    \\fill[color=red] (2.60,1.00) circle (0.075);\n    \\fill[color=red] (-0.57,2.42) circle (0.075);\n    \\fill[color=red] (2.44,0.87) circle (0.075);\n    \\fill[color=red] (1.67,1.20) circle (0.075);\n    \\fill[color=red] (2.27,1.53) circle (0.075);\n    \\fill[color=red] (-1.48,2.27) circle (0.075);\n    \\fill[color=red] (-0.65,1.73) circle (0.075);\n    \\fill[color=red] (3.53,0.83) circle (0.075);\n    \\fill[color=red] (0.00,2.10) circle (0.075);\n    \\fill[color=red] (0.03,2.06) circle (0.075);\n    \\fill[color=red] (-0.40,2.34) circle (0.075);\n    \\fill[color=red] (0.48,1.91) circle (0.075);\n    \\fill[color=red] (-0.59,2.74) circle (0.075);\n    \\fill[color=red] (0.71,1.61) circle (0.075);\n    \\fill[color=red] (-0.04,1.88) circle (0.075);\n    \\fill[color=red] (0.33,1.89) circle (0.075);\n    \\fill[color=red] (4.66,0.67) circle (0.075);\n    \\fill[color=red] (3.68,1.23) circle (0.075);\n    \\fill[color=red] (1.06,1.51) circle (0.075);\n    \\fill[color=red] (-0.08,2.08) circle (0.075);\n    \\fill[color=red] (-0.24,1.75) circle (0.075);\n    \\fill[color=red] (4.36,0.91) circle (0.075);\n    \\fill[color=red] (1.17,1.51) circle (0.075);\n    \\fill[color=red] (1.25,2.07) circle (0.075);\n    \\fill[color=red] (2.36,1.48) circle (0.075);\n    \\fill[color=red] (-1.21,2.26) circle (0.075);\n    \\fill[color=red] 
(-0.13,1.83) circle (0.075);\n    \\fill[color=red] (1.23,1.51) circle (0.075);\n    \\fill[color=red] (2.30,1.40) circle (0.075);\n    \\fill[color=red] (-1.08,2.50) circle (0.075);\n    \\fill[color=red] (-1.62,2.33) circle (0.075);\n    \\fill[color=red] (-0.27,2.32) circle (0.075);\n    \\fill[color=red] (-1.75,2.52) circle (0.075);\n    \\fill[color=red] (3.37,1.10) circle (0.075);\n    \\fill[color=red] (-1.10,2.10) circle (0.075);\n    \\fill[color=red] (-0.54,1.85) circle (0.075);\n    \\fill[color=red] (2.16,1.16) circle (0.075);\n    \\fill[color=red] (-0.15,2.11) circle (0.075);\n  \\end{tikzpicture}\n\\end{equation*}\nIn this case, approximating the points by a subspace passing through\nthe origin does not make much sense. Instead, we should be looking for\nan \\textbf{affine subspace}. An affine subspace is similar to a\nsubspace, except it does not necessarily contain the origin.\n\n\\begin{definition}{Affine subspace}{affine-subspace}\n  Let $V$ be a vector space. A subset $A\\subseteq V$ is called an\n  \\textbf{affine subspace}%\n  \\index{affine subspace}%\n  \\index{subspace!affine} of $V$ if $A$ is either empty, or else of\n  the form\n  \\begin{equation*}\n    A = \\vect{v} + W = \\set{\\vect{v}+\\vect{w} \\mid \\vect{w}\\in W},\n  \\end{equation*}\n  where $\\vect{v}\\in V$ and $W$ is a subspace of\\/ $V$.\n\\end{definition}\n\nFor example, in Chapter~\\ref{cha:lines-and-planes}, we considered\nlines and planes in $\\R^n$ that pass through a given point (not\nnecessarily the origin). These are examples of affine subspaces of\n$\\R^n$. The affine subspace fitting problem is analogous to the\nsubspace fitting problem:\n\n\\begin{problem}{Affine subspace fitting problem}{affine-subspace-fitting}\n  Given the position vectors $\\vect{v}_1,\\ldots,\\vect{v}_m$ of $m$\n  points in $\\R^n$, and given an integer $k\\leq n$, find the\n  $k$-dimensional affine subspace $A\\subseteq\\R^n$ that minimizes the\n  total squared distance from the points to $A$%\n  \\index{affine subspace fitting}%\n  \\index{subspace fitting!affine}%\n  \\index{fitting!affine subspace fitting}.\n\\end{problem}\n\nIt turns out that the optimal solution to the affine subspace fitting\nproblem can be computed by first computing the \\textbf{centroid} of\nthe points, shifting the whole problem so that the centroid is at the\norigin, and then solving an ordinary subspace fitting problem.\n\n\\begin{definition}{Centroid}{centroid}\n  Given $m$ vectors $\\vect{v}_1,\\ldots,\\vect{v}_m$, their\n  \\textbf{centroid}%\n  \\index{centroid} is the vector\n  \\begin{equation*}\n    \\centroid{\\vect{v}} = \\frac{1}{m}(\\vect{v}_1+\\ldots+\\vect{v}_m).\n  \\end{equation*}\n  It is also sometimes called the \\textbf{average}%\n  \\index{average!of vectors} or the \\textbf{center of mass}%\n  \\index{center of mass} of the vectors.\n\\end{definition}\n\n\\begin{proposition}{Solution of the affine subspace fitting problem}{affine-subspace-fitting}\n  Given vectors $\\vect{v}_1,\\ldots,\\vect{v}_m\\in\\R^n$ and $k\\leq n$,\n  the optimal solution to the affine subspace fitting problem can be\n  computed as follows:\n  \\begin{enumerate}\n  \\item Compute the centroid $\\centroid{\\vect{v}} =\n    \\frac{1}{m}(\\vect{v}_1+\\ldots+\\vect{v}_m)$ of the vectors.\n  \\item Let $\\vect{w}_i = \\vect{v}_i - \\centroid{\\vect{v}}$, for all\n    $i=1,\\ldots,m$.\n  \\item Compute the solution $W$ to the (ordinary) subspace fitting\n    problem for $\\vect{w}_1,\\ldots,\\vect{w}_m$, as in\n    
Proposition~\\ref{prop:subspace-fitting}.\n  \\end{enumerate}\n  Then the best solution to the affine subspace problem is\n  $\\centroid{v} + W$.\n\\end{proposition}\n\n\\begin{example}{Affine subspace fitting problem}{affine-subspace-fitting}\n  Consider the following collection of points in $\\R^2$:\n  \\begin{equation*}\n    \\set{\n      \\begin{mymatrix}{r} 10 \\\\ -6 \\end{mymatrix},\n      \\begin{mymatrix}{r} 2 \\\\ 10 \\end{mymatrix},\n      \\begin{mymatrix}{r} 5 \\\\ -1 \\end{mymatrix},\n      \\begin{mymatrix}{r} 8 \\\\ 3 \\end{mymatrix},\n      \\begin{mymatrix}{r} 2 \\\\ 5 \\end{mymatrix},\n      \\begin{mymatrix}{r} 3 \\\\ 3 \\end{mymatrix},\n      \\begin{mymatrix}{r} 4 \\\\ 11 \\end{mymatrix},\n      \\begin{mymatrix}{r} 10 \\\\ -1 \\end{mymatrix},\n      \\begin{mymatrix}{r} 1 \\\\ 12 \\end{mymatrix}\n    }.\n  \\end{equation*}\n  Find the 1-dimensional affine subspace that best approximates this\n  collection of points. What is the total squared distance of the\n  points to the subspace?\n\\end{example}\n\n\\begin{solution}\n  We start by computing the centroid:\n  \\begin{equation*}\n    \\centroid{\\vect{v}} =\n    \\frac{1}{9}(\\vect{v}_1+\\ldots+\\vect{v}_9)\n    = \\frac{1}{9}\\begin{mymatrix}{r} 45 \\\\ 36 \\end{mymatrix}\n    = \\begin{mymatrix}{r} 5 \\\\ 4 \\end{mymatrix}.\n  \\end{equation*}\n  Next, we shift all vectors by $-\\centroid{\\vect{v}}$ to get a new\n  collection of vectors $\\vect{w}_1,\\ldots,\\vect{w}_9$ centered at the\n  origin.\n  For example,\n  \\begin{eqnarray*}\n    \\vect{w}_1 ~~=~~ \\vect{v}_1 - \\centroid{\\vect{v}}\n    &=& \\begin{mymatrix}{r} 10 \\\\ -6 \\end{mymatrix}\n    - \\begin{mymatrix}{r} 5 \\\\ 4 \\end{mymatrix}\n    ~~=~~ \\begin{mymatrix}{r} 5 \\\\ -10 \\end{mymatrix},\n    \\\\\n    \\vect{w}_2 ~~=~~ \\vect{v}_2 - \\centroid{\\vect{v}}\n    &=& \\begin{mymatrix}{r} 2 \\\\ 10 \\end{mymatrix}\n    - \\begin{mymatrix}{r} 5 \\\\ 4 \\end{mymatrix}\n    ~~=~~ \\begin{mymatrix}{r} -3 \\\\ 6 \\end{mymatrix},\n  \\end{eqnarray*}\n  and so on. We get\n  \\begin{equation*}\n    \\set{\\vect{w}_1,\\ldots,\\vect{w}_9} =\n    \\set{\n      \\begin{mymatrix}{r} 5 \\\\ -10 \\end{mymatrix},\n      \\begin{mymatrix}{r} -3 \\\\ 6 \\end{mymatrix},\n      \\begin{mymatrix}{r} 0 \\\\ -5 \\end{mymatrix},\n      \\begin{mymatrix}{r} 3 \\\\ -1 \\end{mymatrix},\n      \\begin{mymatrix}{r} -3 \\\\ 1 \\end{mymatrix},\n      \\begin{mymatrix}{r} -2 \\\\ -1 \\end{mymatrix},\n      \\begin{mymatrix}{r} -1 \\\\ 7 \\end{mymatrix},\n      \\begin{mymatrix}{r} 5 \\\\ -5 \\end{mymatrix},\n      \\begin{mymatrix}{r} -4 \\\\ 8 \\end{mymatrix}\n    }.\n  \\end{equation*}\n  Next we, we proceed as in Proposition~\\ref{prop:subspace-fitting} to\n  find the best subspace fitting $\\vect{w}_1,\\ldots,\\vect{w}_9$. 
We\n  have\n  \\begin{equation*}\n    A^T = \\begin{mymatrix}{rrrrrrrrr}\n      5 & -3 & 0 & 3 & -3 & -2 & -1 & 5 & -4 \\\\\n      -10 & 6 & -5 & -1 & 1 & -1 & 7 & -5 & 8 \\\\\n    \\end{mymatrix}\n  \\end{equation*}\n  and\n  \\begin{equation*}\n    B = A^TA = \\begin{mymatrix}{rr}\n      98 & -136 \\\\\n      -136 & 302 \\\\\n    \\end{mymatrix}.\n  \\end{equation*}\n  The eigenvalues of $B$ are $\\eigenvar_1 = 370$ and $\\eigenvar_2 =\n  30$, with corresponding eigenvectors\n  \\begin{equation*}\n    \\vect{u}_1 = \\begin{mymatrix}{r} 1 \\\\ -2 \\end{mymatrix}\n    \\quad\\mbox{and}\\quad\n    \\vect{u}_2 = \\begin{mymatrix}{r} 2 \\\\ 1 \\end{mymatrix}\n  \\end{equation*}\n  Thus, the best-fitting 1-dimensional subspace for\n  $\\vect{w}_1,\\ldots,\\vect{w}_9$ is $W = \\sspan\\set{\\vect{u}_1}$, and\n  the best-fitting 1-dimensional affine subspace for\n  $\\vect{v}_1,\\ldots,\\vect{v}_9$ is\n  \\begin{equation*}\n    \\centroid{\\vect{v}} + W\n    = \\set{\\centroid{\\vect{v}} + \\vect{w} \\mid \\vect{w}\\in W}\n    = \\set{\\left.\\begin{mymatrix}{r} 5 \\\\ 4 \\end{mymatrix} +\n        t\\begin{mymatrix}{r} 1 \\\\ -2 \\end{mymatrix} ~\\right\\vert~ t\\in\\R}.\n  \\end{equation*}\n  Note that this is the equation of a line passing through the\n  centroid $\\centroid{\\vect{v}}$, and with direction vector\n  $\\vect{u}_1$. The points $\\vect{v}_1,\\ldots,\\vect{v}_9$, their\n  centroid, and the affine subspace $\\centroid{\\vect{v}} + W$ are\n  shown in the following illustration:\n  \\begin{equation*}\n    \\begin{tikzpicture}[scale=0.2]\n      \\draw[thin,->] (-12,0) -- (12,0);\n      \\draw[thin,->] (0,-12) -- (0,12);\n      \\draw[thin] (0,10) -- (-1,10) node[left] {$10$};\n      \\draw[thin] (10,0) -- (10,-1) node[below] {$10$};\n      \\draw[thick,blue] (5,4) +(-6,12) -- +(6,-12);\n      \\fill[color=blue] (5,4) circle (0.6);\n      \\draw[blue, ->] (5,4) +(8,4) node[right,yshift=4] {centroid} -- +(0.4,0.2);\n      \\fill[color=red] (10, -6) circle (0.3);\n      \\fill[color=red] (10, -1) circle (0.3);\n      \\fill[color=red] (5, -1) circle (0.3);\n      \\fill[color=red] (8, 3) circle (0.3);\n      \\fill[color=red] (3, 3) circle (0.3);\n      \\fill[color=red] (2, 5) circle (0.3);\n      \\fill[color=red] (2, 10) circle (0.3);\n      \\fill[color=red] (4, 11) circle (0.3);\n      \\fill[color=red] (1, 12) circle (0.3);\n    \\end{tikzpicture}\n  \\end{equation*}\n  \\vspace{-4ex}\\par  \n\\end{solution}\n\n% ----------------------------------------------------------------------\n\\subsection*{Application to U.S. Senate voting data}\n\nThe United States Senate%\n\\index{senate}%\n\\index{U.S. Senate}%\n\\index{United States Senate} votes on a lot of things: motions,\nresolutions, amendments, and bills, among other things. Many of these\nvotes are roll call votes%\n\\index{voting}%\n\\index{roll call vote}, which means that the vote of every individual\nsenator is recorded (as opposed to a voice vote, where only the\noutcome is recorded). Roll call data for the last 3 decades is\npublicly available and can be downloaded from the U.S. Senate website\nat \\url{https://www.senate.gov/legislative/votes.htm}.\n\nWe will now explore how to use linear algebra, and in particular\nprincipal component analysis, to gain some useful information from the\nvoting records.\\footnote{This example was inspired by Examples~11.2.13\n  and 11.3.15 of {\\em ``Coding the Matrix: Linear Algebra through\n    Computer Science Applications''} by Philip N. 
Klein.} I have made\na spreadsheet containing the votes of 99 senators for the first 200\nroll call votes of 2007. Each row in the spreadsheet corresponds to a\nsenator, listed in alphabetical order from Daniel Akaka of Hawaii to\nRon Wyden of Oregon. I omitted one senator who died during 2007. Each\ncolumn of the spreadsheet corresponds to a vote. For example, the\nfirst roll call vote of 2007 was on a resolution to honour President\nGerald Ford (it passed 88 to 0). Each cell of the spreadsheet contains\nthe number $1$ if the senator voted ``yes'', the number $-1$ if the\nsenator voted ``no'', and the number $0$ if the senator did not\nvote. The spreadsheet is available from\n\\url{https://www.mathstat.dal.ca/~selinger/linear-algebra/} under\n``Supplementary materials''. Here are the first few rows and columns\nof the spreadsheet.\n\\begin{equation*}\n  \\begin{array}{|l|r|r|r|r|r|r|r|r}\n    \\hline\n    \\mbox{Akaka, Daniel (D) HI} & 1 & 1 & 1 & 1 & 1 & -1 & 1 & \\ldots \\\\\\hline\n    \\mbox{Alexander, Lamar (R) TN} & 0 & 1 & -1 & 1 & -1 & -1 & 1 & \\ldots \\\\\\hline\n    \\mbox{Allard, A. (R) CO} & 1 & 1 & -1 & -1 & -1 & 1 & 1 & \\ldots \\\\\\hline\n    \\mbox{Baucus, Max (D) MT} & 1 & 1 & 1 & 1 & 1 & -1 & 1 & \\ldots \\\\\\hline\n    \\mbox{Bayh, Evan (D) IN} & 1 & 1 & 1 & -1 & 1 & -1 & 1 & \\ldots \\\\\\hline\n    \\mbox{Bennett, Robert (R) UT} & 1 & 1 & -1 & 1 & 1 & -1 & 1 & \\ldots \\\\\\hline\n    \\multicolumn{1}{|c|}{\\vdots} & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots\n  \\end{array}\n\\end{equation*}\nThe human mind is not very well equipped to deal with such massive\namounts of data. Rather than listing 122 motions that Senator X\nsupported and 78 motions that she opposed, we like to come up with\nabstractions, such as Senator X is ``conservative'', ``pro choice'',\n``pro business'', ``hawkish'', etc. However, the problem with\nabstractions is that they do not necessarily mean anything in the real\nworld. In the real world, a senator's record is just a sequence of\nvotes.\n\nWe will represent each senator by a vector in $\\R^{200}$, which\ncorresponds to a row of the above table. For example, to Senator\nAkaka, we associate the vector\n\\begin{equation*}\n  \\begin{mymatrix}{c} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\ -1 \\\\ 1 \\\\ \\vdots \\end{mymatrix}\n  \\in\\R^{200}.\n\\end{equation*}\nThus, we can represent each senator (or more precisely, each senator's\nvoting record) as a point in 200-dimensional space. In this way, the\nvoting data can be interpreted as $99$ points in $\\R^{200}$.\n\nUnfortunately, 200 dimensions are impossible to visualize. But what if\nthe voting records of all the senators lie on (or at least close to) a\nmuch-smaller-dimensional affine subspace? This is actually not an\nunreasonable expectation; after all, there are probably only a handful\nof issues most senators care about. For example, if a certain senator\nsupports gun control, he will be likely to vote a certain way on\nmeasures that affect gun control. If another senator supports the gun\nlobby, she is likely to vote the opposite way.\n\nWe can thus consider this as an instance of the affine subspace\nproblem: we are looking for a low-dimensional affine subspace that is\nclose to all $99$ points. Following the method of\nProposition~\\ref{prop:affine-subspace-fitting}, we first find the\ncentroid of the points, and then we compute a certain\n$99\\times 200$-matrix $A$ and a positive semidefinite\n$200\\times 200$-matrix $B=A^TA$. 
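In practice this computation is carried out with numerical software.\nThe following sketch, in Python with NumPy, shows one way it could be\norganized; the variable names and the file name are hypothetical, and\nwe assume the spreadsheet has been exported with one comma-separated\nrow per senator.\n\\begin{verbatim}\nimport numpy as np\n\n# Hypothetical input file: 99 rows (senators) and 200 columns\n# (votes), with entries 1 (yes), -1 (no) and 0 (did not vote).\nvotes = np.loadtxt('senate2007.csv', delimiter=',')\n\ncentroid = votes.mean(axis=0)  # centroid of the 99 points\nA = votes - centroid           # shift the centroid to the origin\nB = A.T @ A                    # 200 x 200 positive semidefinite matrix\n\n# B is symmetric, so np.linalg.eigh applies; it returns the\n# eigenvalues in increasing order, so we reverse the order.\neigvals, eigvecs = np.linalg.eigh(B)\neigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]\n\nu1 = eigvecs[:, 0]             # first principal component\ncoords = A @ u1                # coordinate of each senator along u1\n\\end{verbatim}\nUp to an overall sign (an eigenvector is only determined up to sign),\nthe entries of \\texttt{coords} correspond to the coordinates plotted\nin the pictures below.\n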
Using software, we can find the\neigenvalues and -vectors of $B$. The first few eigenvalues (in\ndecreasing order) are:\n\\begin{equation*}\n  \\eigenvar_1 = 7255.65,\\quad\n  \\eigenvar_2 = 519.16,\\quad\n  \\eigenvar_3 = 430.60,\\quad\n  \\eigenvar_4 = 278.05,\\quad\\mbox{and}\\quad\n  \\eigenvar_5 = 230.56.\n\\end{equation*}\nAll of the remaining eigenvalues are less than $200$, and the sum of\nthe remaining eigenvalues is\n$\\eigenvar_6+\\ldots+\\eigenvar_{200}=3913.46$. This means that the vast\nmajority of the voting behavior of each senator is determined by a\nsingle dimension, given by the eigenvector corresponding to the\neigenvalue $\\eigenvar_1$. In other words, there is a $1$-dimensional\naffine subspace that all $99$ points are pretty close to. If we\nproject each senator to this affine subspace, we get the following\npicture:\n\\begin{equation*}\n  \\def\\dataD#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.09);}\n  \\def\\dataR#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.09);}\n  \\def\\dataDx#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.09);\n    \\draw[dem,->,shorten >=0.15cm](#5,0) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRx#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.09);\n    \\draw[rep,->,shorten >=0.15cm](#5,0) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataDy#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.09);\n    \\draw[dem,->,shorten >=0.15cm](#5,0) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRy#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.09);\n    \\draw[rep,->,shorten >=0.15cm](#5,0) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\begin{tikzpicture}[scale=0.75]\n    \\draw (-12.5,0) -- (10.5,0);\n    \\dataRx{DeMint}{James}{R}{SC}{-11.856041}{4.315015}{0}\n    \\dataRy{Cornyn}{John}{R}{TX}{-11.446925}{1.720794}{0}\n    \\dataRx{Allard}{A.}{R}{CO}{-11.414729}{2.353678}{0}\n    \\dataR{Ensign}{John}{R}{NV}{-11.329050}{2.598885}{0}\n    \\dataR{Inhofe}{Jim}{R}{OK}{-11.161216}{4.313828}{0}\n    \\dataRy{Coburn}{Tom}{R}{OK}{-11.095925}{6.451745}{0}\n    \\dataRx{Enzi}{Michael}{R}{WY}{-10.786882}{0.794427}{0}\n    \\dataR{Burr}{Richard}{R}{NC}{-10.716488}{2.508468}{0}\n    \\dataR{Bunning}{Jim}{R}{KY}{-10.674428}{1.409311}{0}\n    \\dataR{Chambliss}{Saxby}{R}{GA}{-10.639465}{1.781387}{0}\n    \\dataR{Isakson}{Johnny}{R}{GA}{-10.548229}{0.724451}{0}\n    \\dataRy{Sessions}{Jeff}{R}{AL}{-10.510782}{1.437248}{0}\n    \\dataR{Gregg}{Judd}{R}{NH}{-10.399051}{1.990574}{0}\n    \\dataRx{McConnell}{Mitch}{R}{KY}{-10.375319}{-1.293286}{0}\n    \\dataR{Dole}{Elizabeth}{R}{NC}{-10.258572}{1.732827}{0}\n    \\dataRy{Kyl}{Jon}{R}{AZ}{-10.042393}{2.633156}{0}\n    \\dataR{Craig}{Larry}{R}{ID}{-9.996687}{-0.803843}{0}\n    \\dataR{Crapo}{Mike}{R}{ID}{-9.987981}{-0.982945}{0}\n    \\dataRx{Graham}{Lindsey}{R}{SC}{-9.942743}{2.090629}{0}\n    \\dataR{Sununu}{John}{R}{NH}{-9.713036}{1.627775}{0}\n    \\dataRy{Roberts}{Pat}{R}{KS}{-9.682832}{-1.781188}{0}\n    \\dataR{Martinez}{Mel}{R}{FL}{-9.602384}{1.021259}{0}\n    \\dataR{Thune}{John}{R}{SD}{-9.412180}{0.719407}{0}\n    \\dataR{Hutchison}{Kay}{R}{TX}{-9.362272}{-0.132493}{0}\n    \\dataR{Lott}{Trent}{R}{MS}{-9.333867}{-1.703358}{0}\n    \\dataRx{Hatch}{Orrin}{R}{UT}{-9.325404}{-1.647643}{0}\n    \\dataR{Corker}{Bob}{R}{TN}{-9.321180}{1.242018}{0}\n    \\dataR{Grassley}{Charles}{R}{IA}{-9.274839}{1.535256}{0}\n    \\dataRy{Vitter}{David}{R}{LA}{-9.268543}{3.508303}{0}\n    
\\dataRx{Alexander}{Lamar}{R}{TN}{-8.869856}{-2.298938}{0}\n    \\dataRy{Shelby}{Richard}{R}{AL}{-8.861314}{0.147906}{0}\n    \\dataRx{Bennett}{Robert}{R}{UT}{-8.230363}{-5.265448}{0}\n    \\dataRy{Cochran}{Thad}{R}{MS}{-8.204309}{-3.833549}{0}\n    \\dataRx{Bond}{Christopher}{R}{MO}{-7.907050}{-3.678825}{0}\n    \\dataRy{Brownback}{Samuel}{R}{KS}{-7.841176}{0.525267}{0}\n    \\dataRx{Warner}{John}{R}{VA}{-7.511971}{-3.669235}{0}\n    \\dataRy{Murkowski}{Lisa}{R}{AK}{-6.411501}{-5.705682}{0}\n    \\dataR{Domenici}{Pete}{R}{NM}{-6.367823}{-4.162556}{0}\n    \\dataRx{Hagel}{Chuck}{R}{NE}{-6.362849}{0.617170}{0}\n    \\dataR{Stevens}{Ted}{R}{AK}{-6.307838}{-5.264698}{0}\n    \\dataRy{Lugar}{Richard}{R}{IN}{-5.943015}{-3.919419}{0}\n    \\dataRx{McCain}{John}{R}{AZ}{-5.888698}{2.418029}{0}\n    \\dataRx{Coleman}{Norm}{R}{MN}{-4.118805}{-5.477121}{0}\n    \\dataRy{Voinovich}{George}{R}{OH}{-3.935533}{-0.529552}{0}\n    \\dataRx{Smith}{Gordon}{R}{OR}{-3.444280}{-2.964483}{0}\n    \\dataRx{Specter}{Arlen}{R}{PA}{-2.375146}{-4.450506}{0}\n    \\dataRy{Collins}{Susan}{R}{ME}{-1.718529}{-4.243409}{0}\n    \\dataDx{Johnson}{Tim}{D}{SD}{-1.400105}{3.328882}{0}\n    \\dataRx{Snowe}{Olympia}{R}{ME}{0.030079}{-4.052412}{0}\n    \\dataDx{Nelson}{E.}{D}{NE}{3.212060}{-2.989657}{0}\n    \\dataDx{Pryor}{Mark}{D}{AR}{6.049862}{-2.370210}{0}\n    \\dataD{Landrieu}{Mary}{D}{LA}{6.120904}{0.319874}{0}\n    \\dataD{Bayh}{Evan}{D}{IN}{6.212828}{0.590900}{0}\n    \\dataDy{McCaskill}{Claire}{D}{MO}{6.236225}{2.501971}{0}\n    \\dataDy{Baucus}{Max}{D}{MT}{6.707413}{-1.252353}{0}\n    \\dataD{Tester}{Jon}{D}{MT}{6.755870}{-0.131501}{0}\n    \\dataD{Byrd}{Robert}{D}{WV}{6.844900}{-0.777788}{0}\n    \\dataDx{Lieberman}{Joe}{ID}{CT}{6.941566}{-0.879456}{0}\n    \\dataD{Lincoln}{Blanche}{D}{AR}{7.114470}{-2.537187}{0}\n    \\dataD{Dodd}{Christopher}{D}{CT}{7.150155}{0.426655}{0}\n    \\dataDy{Biden}{Joseph}{D}{DE}{7.191390}{1.560474}{0}\n    \\dataD{Dorgan}{Byron}{D}{ND}{7.226223}{-0.145256}{0}\n    \\dataDy{Conrad}{Kent}{D}{ND}{7.509217}{-0.834957}{0}\n    \\dataDx{Salazar}{Ken}{D}{CO}{7.510558}{-2.642954}{0}\n    \\dataD{Carper}{Thomas}{D}{DE}{7.533684}{-1.016364}{0}\n    \\dataD{Rockefeller}{Jay}{D}{WV}{7.697069}{0.348496}{0}\n    \\dataDy{Webb}{James}{D}{VA}{7.909628}{1.263548}{0}\n    \\dataDx{Nelson}{Bill}{D}{FL}{8.073955}{0.560473}{0}\n    \\dataD{Kerry}{John}{D}{MA}{8.229978}{0.783072}{0}\n    \\dataD{Inouye}{Daniel}{D}{HI}{8.254431}{0.128503}{0}\n    \\dataDy{Kennedy}{Edward}{D}{MA}{8.345720}{-0.125997}{0}\n    \\dataD{Cantwell}{Maria}{D}{WA}{8.453248}{0.176907}{0}\n    \\dataD{Feingold}{Russ}{D}{WI}{8.479149}{3.536569}{-0.15}\n    \\dataDx{Obama}{Barack}{D}{IL}{8.521068}{2.312086}{0}\n    \\dataDy{Reid}{Harry}{D}{NV}{8.689703}{0.665197}{0.1}\n    \\dataD{Mikulski}{Barbara}{D}{MD}{8.776657}{0.129926}{0}\n    \\dataD{Wyden}{Ron}{D}{OR}{8.846398}{-0.272109}{0}\n    \\dataD{Leahy}{Patrick}{D}{VT}{8.853166}{-0.139810}{0}\n    \\dataD{Harkin}{Tom}{D}{IA}{8.859139}{0.517913}{0}\n    \\dataD{Klobuchar}{Amy}{D}{MN}{8.865049}{-0.814168}{0}\n    \\dataD{Murray}{Patty}{D}{WA}{8.923771}{0.434588}{0}\n    \\dataD{Akaka}{Daniel}{D}{HI}{8.955400}{-1.058981}{0}\n    \\dataD{Brown}{Sherrod}{D}{OH}{8.961322}{1.297369}{0}\n    \\dataDx{Feinstein}{Dianne}{D}{CA}{9.019263}{0.316944}{-0.1}\n    \\dataD{Bingaman}{Jeff}{D}{NM}{9.063834}{0.227690}{0}\n    \\dataDy{Schumer}{Charles}{D}{NY}{9.124474}{1.151808}{0.05}\n    \\dataD{Cardin}{Benjamin}{D}{MD}{9.154539}{0.261731}{0}\n    \\dataD{Kohl}{Herb}{D}{WI}{9.168217}{0.505190}{0}\n    
\\dataD{Casey}{Bob}{D}{PA}{9.227674}{1.165228}{0}\n    \\dataD{Stabenow}{Debbie}{D}{MI}{9.259294}{0.701454}{0}\n    \\dataD{Reed}{John}{D}{RI}{9.328214}{-0.051071}{0}\n    \\dataDx{Clinton}{Hillary}{D}{NY}{9.345548}{1.498053}{0.1}\n    \\dataDy{Sanders}{Bernard}{I}{VT}{9.366115}{1.166931}{0.3}\n    \\dataD{Durbin}{Richard}{D}{IL}{9.450949}{1.549572}{0}\n    \\dataD{Lautenberg}{Frank}{D}{NJ}{9.458477}{0.934696}{0}\n    \\dataD{Levin}{Carl}{D}{MI}{9.464507}{0.830443}{0}\n    \\dataD{Menendez}{Robert}{D}{NJ}{9.493317}{0.599649}{0}\n    \\dataD{Boxer}{Barbara}{D}{CA}{9.537760}{1.490709}{0}\n    \\dataDx{Whitehouse}{Sheldon}{D}{RI}{9.675169}{0.398093}{0.3}\n    \\draw[->] (0,0) +(1,-2) node[right,yshift=-2] {centroid} -- +(0,0);\n    \\path (-1,4.5) node {2007 U.S. Senate voting data, projection to the first principal component:};\n  \\end{tikzpicture}\n\\end{equation*}\nFor convenience, Republican senators have been shown in red and\nDemocratic and independent senators in blue. Not all senators have\nbeen named, because in some areas they are clustered very densely. An\ninterpretation of the principal component then immediately suggests\nitself: it appears to be the ``conservative'' vs. ``liberal'' axis. We\ncan use this picture to assist in answering questions such as: {\\em\n  ``Which party votes more uniformly?''}, {\\em ``Which state are the\n  most liberal Republicans from?''}, {\\em ``Which state are the most\n  conservative Democrats from?''}, {\\em ``Was Obama really a\n  radical?''}, and {\\em ``Was McCain really a maverick?''}.  If we\nrepeat the same calculation for the 2017 senate, we get the following\npicture:\n\\begin{equation*}\n  \\def\\dataD#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.104);}\n  \\def\\dataR#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.104);}\n  \\def\\dataDx#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.104);\n    \\draw[dem,->,shorten >=0.15cm](#5,0) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRx#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.104);\n    \\draw[rep,->,shorten >=0.15cm](#5,0) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataDy#1#2#3#4#5#6#7{\\fill[dem](#5,0) circle (0.104);\n    \\draw[dem,->,shorten >=0.15cm](#5,0) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRy#1#2#3#4#5#6#7{\\fill[rep](#5,0) circle (0.104);\n    \\draw[rep,->,shorten >=0.15cm](#5,0) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\begin{tikzpicture}[scale=0.651]\n    \\draw (-12,0) -- (14.5,0);\n    \\dataRx{Risch}{James}{R}{ID}{-11.005580}{-0.280661}{-0.1}\n    \\dataR{Daines}{Steve}{R}{MT}{-10.967825}{-0.207257}{0}\n    \\dataR{Lankford}{James}{R}{OK}{-10.942378}{-0.196446}{0}\n    \\dataR{Ernst}{Joni}{R}{IA}{-10.913457}{-0.326658}{0}\n    \\dataR{Fischer}{Deb}{R}{NE}{-10.910381}{-0.154224}{0}\n    \\dataRy{Inhofe}{Jim}{R}{OK}{-10.794950}{-0.297634}{0}\n    \\dataR{Scott}{Tim}{R}{SC}{-10.786803}{-0.087283}{0}\n    \\dataR{Enzi}{Mike}{R}{WY}{-10.766418}{-0.045151}{0}\n    \\dataR{Johnson}{Ron}{R}{WI}{-10.755255}{-0.049739}{0}\n    \\dataR{Toomey}{Patrick}{R}{PA}{-10.749891}{-0.569081}{0}\n    \\dataR{Crapo}{Mike}{R}{ID}{-10.748843}{0.003722}{0}\n    \\dataR{Gardner}{Cory}{R}{CO}{-10.719932}{-0.268951}{0}\n    \\dataR{Kennedy}{John}{R}{LA}{-10.692143}{-0.209493}{0}\n    \\dataR{Flake}{Jeff}{R}{AZ}{-10.615803}{-1.106815}{0}\n    \\dataR{Hoeven}{John}{R}{ND}{-10.600127}{-0.072623}{0}\n    
\\dataR{Cassidy}{Bill}{R}{LA}{-10.598818}{-0.002070}{0}\n    \\dataR{Sullivan}{Dan}{R}{AK}{-10.596058}{-0.382321}{0}\n    \\dataRx{Rubio}{Marco}{R}{FL}{-10.588703}{-0.868620}{0}\n    \\dataR{Barrasso}{John}{R}{WY}{-10.584780}{-0.044905}{0}\n    \\dataR{Perdue}{David}{R}{GA}{-10.580328}{-0.070316}{0}\n    \\dataR{Cotton}{Tom}{R}{AR}{-10.536647}{0.114383}{0}\n    \\dataR{Thune}{John}{R}{SD}{-10.530769}{-0.008701}{0}\n    \\dataR{Wicker}{Roger}{R}{MS}{-10.524726}{0.028472}{0}\n    \\dataR{Cornyn}{John}{R}{TX}{-10.513419}{0.176439}{0}\n    \\dataR{Roberts}{Pat}{R}{KS}{-10.513419}{0.176439}{0}\n    \\dataR{Rounds}{Mike}{R}{SD}{-10.513419}{0.176439}{0}\n    \\dataR{Shelby}{Richard}{R}{AL}{-10.501896}{0.078644}{0}\n    \\dataR{Grassley}{Charles}{R}{IA}{-10.458537}{-0.074556}{0}\n    \\dataR{Blunt}{Roy}{R}{MO}{-10.456091}{0.188385}{0}\n    \\dataR{Hatch}{Orrin}{R}{UT}{-10.427457}{0.182143}{0}\n    \\dataR{Cochran}{Thad}{R}{MS}{-10.419903}{0.171741}{0}\n    \\dataR{Boozman}{John}{R}{AR}{-10.416449}{0.049056}{0}\n    \\dataR{Sasse}{Ben}{R}{NE}{-10.384081}{-0.769970}{0}\n    \\dataR{Moran}{Jerry}{R}{KS}{-10.354821}{-0.478472}{0}\n    \\dataRy{Cruz}{Ted}{R}{TX}{-10.337513}{-0.118032}{0}\n    \\dataRx{McConnell}{Mitch}{R}{KY}{-10.328164}{0.019018}{0.2}\n    \\dataR{Burr}{Richard}{R}{NC}{-10.281722}{-0.434053}{0}\n    \\dataR{Lee}{Mike}{R}{UT}{-10.250033}{-1.332033}{0}\n    \\dataR{Corker}{Bob}{R}{TN}{-10.181143}{-0.353619}{0}\n    \\dataR{Tillis}{Thomas}{R}{NC}{-10.164341}{0.190998}{0}\n    \\dataR{Alexander}{Lamar}{R}{TN}{-10.140983}{-0.605518}{0}\n    \\dataR{Graham}{Lindsey}{R}{SC}{-10.118998}{-0.308972}{0}\n    \\dataR{Young}{Todd}{R}{IN}{-10.114119}{0.195154}{0}\n    \\dataR{Capito}{Shelley}{R}{WV}{-10.092079}{-0.013747}{0.2}\n    \\dataRy{Portman}{Rob}{R}{OH}{-9.980981}{0.307914}{0.1}\n    \\dataRx{Paul}{Rand}{R}{KY}{-9.083156}{-2.311221}{0}\n    \\dataRy{McCain}{John}{R}{AZ}{-8.941401}{-1.537122}{0}\n    \\dataR{Murkowski}{Lisa}{R}{AK}{-8.929094}{0.087773}{0}\n    \\dataRx{Heller}{Dean}{R}{NV}{-8.607091}{-1.195798}{0}\n    \\dataRy{Isakson}{John}{R}{GA}{-7.825936}{-2.375299}{0}\n    \\dataRx{Collins}{Susan}{R}{ME}{-7.218464}{1.349259}{0}\n    \\dataDx{Manchin}{Joe}{D}{WV}{4.244339}{4.985463}{0}\n    \\dataDy{Heitkamp}{Heidi}{D}{ND}{4.699457}{6.230765}{0}\n    \\dataDx{Donnelly}{Joe}{D}{IN}{6.294362}{5.372277}{0}\n    \\dataDy{King}{Angus}{I}{ME}{6.567064}{5.137923}{0}\n    \\dataDx{Warner}{Mark}{D}{VA}{7.805257}{5.268461}{0}\n    \\dataDy{McCaskill}{Claire}{D}{MO}{8.515402}{4.257321}{0}\n    \\dataDx{Feinstein}{Dianne}{D}{CA}{9.195030}{1.541150}{0}\n    \\dataDy{Tester}{Jon}{D}{MT}{9.298819}{4.338771}{0}\n    \\dataD{Nelson}{Bill}{D}{FL}{9.415968}{3.904212}{0}\n    \\dataDx{Bennet}{Michael}{D}{CO}{9.648520}{3.741099}{0}\n    \\dataDy{Carper}{Thomas}{D}{DE}{9.787069}{3.051711}{0}\n    \\dataD{Coons}{Christopher}{D}{DE}{9.967535}{2.835454}{0}\n    \\dataDx{Kaine}{Timothy}{D}{VA}{10.380048}{2.820041}{0}\n    \\dataDy{Cortez Masto}{Catherine}{D}{NV}{10.700058}{0.650442}{0}\n    \\dataD{Menendez}{Robert}{D}{NJ}{10.833270}{-0.138012}{0}\n    \\dataD{Shaheen}{Jeanne}{D}{NH}{10.932457}{2.780893}{0}\n    \\dataDx{Durbin}{Richard}{D}{IL}{11.134223}{0.337495}{0}\n    \\dataD{Hassan}{Maggie}{D}{NH}{11.175503}{2.348138}{0}\n    \\dataDy{Klobuchar}{Amy}{D}{MN}{11.227181}{2.483025}{0}\n    \\dataD{Casey}{Bob}{D}{PA}{11.259986}{2.362439}{0}\n    \\dataD{Murphy}{Christopher}{D}{CT}{11.262853}{1.643260}{0}\n    \\dataD{Stabenow}{Debbie}{D}{MI}{11.331051}{0.981505}{0}\n    
\\dataD{Peters}{Gary}{D}{MI}{11.377813}{1.008805}{0}\n    \\dataD{Schatz}{Brian}{D}{HI}{11.523780}{-0.180224}{0}\n    \\dataDx{Cardin}{Ben}{D}{MD}{11.530545}{1.600627}{0}\n    \\dataD{Leahy}{Patrick}{D}{VT}{11.666722}{0.850718}{0}\n    \\dataD{Heinrich}{Martin}{D}{NM}{11.677727}{-0.658323}{0}\n    \\dataD{Brown}{Sherrod}{D}{OH}{11.710129}{0.528127}{0}\n    \\dataD{Udall}{Tom}{D}{NM}{11.799501}{-1.446553}{0}\n    \\dataD{Cantwell}{Maria}{D}{WA}{11.871698}{0.994751}{0}\n    \\dataD{Reed}{John}{D}{RI}{11.903017}{-0.135526}{0}\n    \\dataDy{Duckworth}{Tammy}{D}{IL}{11.912992}{-1.455125}{0}\n    \\dataD{Baldwin}{Tammy}{D}{WI}{12.006094}{-0.147581}{0}\n    \\dataD{Hirono}{Mazie}{D}{HI}{12.012566}{-1.222498}{0}\n    \\dataDx{Franken}{Al}{D}{MN}{12.048895}{0.607727}{0}\n    \\dataD{Murray}{Patty}{D}{WA}{12.128326}{0.434861}{0}\n    \\dataD{Whitehouse}{Sheldon}{D}{RI}{12.129382}{-0.638112}{0}\n    \\dataD{Van Hollen}{Chris}{D}{MD}{12.330587}{-0.284590}{0}\n    \\dataDy{Schumer}{Charles}{D}{NY}{12.392888}{-1.562009}{0}\n    \\dataD{Blumenthal}{Richard}{D}{CT}{12.409939}{-2.289658}{0}\n    \\dataD{Wyden}{Ron}{D}{OR}{12.495539}{-2.739850}{0}\n    \\dataD{Markey}{Edward}{D}{MA}{12.823912}{-4.803898}{0}\n    \\dataDx{Sanders}{Bernie}{I}{VT}{12.866844}{-7.239669}{0}\n    \\dataD{Merkley}{Jeff}{D}{OR}{12.972657}{-5.312050}{0}\n    \\dataD{Harris}{Kamala}{D}{CA}{13.017672}{-6.143469}{0}\n    \\dataD{Booker}{Cory}{D}{NJ}{13.163851}{-6.942933}{0}\n    \\dataDy{Gillibrand}{Kirsten}{D}{NY}{13.288360}{-8.552288}{0}\n    \\dataDx{Warren}{Elizabeth}{D}{MA}{13.328440}{-7.543715}{0}\n    \\draw[->] (0,0) +(1,-2) node[right,yshift=-2] {centroid} -- +(0,0);\n    \\path (1.25,5) node {2017 U.S. Senate voting data, projection to the first principal component:};\n  \\end{tikzpicture}\n\\end{equation*}\nWe can use this to help answer questions such as {\\em ``Has the senate\n  become more partisan between 2007 and 2017?''}.\n\nIf we instead project the data onto the first two principal\ncomponents, we get the following picture for the 2007 data:\n\\begin{equation*}\n  \\def\\dataD#1#2#3#4#5#6#7{\\fill[dem](#5,#6) circle (0.09);}\n  \\def\\dataR#1#2#3#4#5#6#7{\\fill[rep](#5,#6) circle (0.09);}\n  \\def\\dataDx#1#2#3#4#5#6#7{\\fill[dem](#5,#6) circle (0.09);\n    \\draw[dem,->,shorten >=0.15cm](#5,#6) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRx#1#2#3#4#5#6#7{\\fill[rep](#5,#6) circle (0.09);\n    \\draw[rep,->,shorten >=0.15cm](#5,#6) +(#7,1) node[above] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataDy#1#2#3#4#5#6#7{\\fill[dem](#5,#6) circle (0.09);\n    \\draw[dem,->,shorten >=0.15cm](#5,#6) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\def\\dataRy#1#2#3#4#5#6#7{\\fill[rep](#5,#6) circle (0.09);\n    \\draw[rep,->,shorten >=0.15cm](#5,#6) +(#7,-1) node[below] {\\rotatebox{90}{\\footnotesize #1, #4}} -- +(0,0);}\n  \\begin{tikzpicture}[scale=0.71]\n    \\draw (-12.5,0) -- (10.5,0);\n    \\draw (0,-6) -- (0,8);\n    \\dataRx{DeMint}{James}{R}{SC}{-11.856041}{4.315015}{0}\n    \\dataRx{Cornyn}{John}{R}{TX}{-11.446925}{1.720794}{-1.5}\n    \\dataRx{Allard}{A.}{R}{CO}{-11.414729}{2.353678}{-1}\n    \\dataR{Ensign}{John}{R}{NV}{-11.329050}{2.598885}{0}\n    \\dataR{Inhofe}{Jim}{R}{OK}{-11.161216}{4.313828}{0}\n    \\dataRx{Coburn}{Tom}{R}{OK}{-11.095925}{6.451745}{0}\n    \\dataR{Enzi}{Michael}{R}{WY}{-10.786882}{0.794427}{0}\n    \\dataR{Burr}{Richard}{R}{NC}{-10.716488}{2.508468}{0}\n    
\\dataR{Bunning}{Jim}{R}{KY}{-10.674428}{1.409311}{0}\n    \\dataR{Chambliss}{Saxby}{R}{GA}{-10.639465}{1.781387}{0}\n    \\dataR{Isakson}{Johnny}{R}{GA}{-10.548229}{0.724451}{0}\n    \\dataRx{Sessions}{Jeff}{R}{AL}{-10.510782}{1.437248}{0}\n    \\dataR{Gregg}{Judd}{R}{NH}{-10.399051}{1.990574}{0}\n    \\dataRx{McConnell}{Mitch}{R}{KY}{-10.375319}{-1.293286}{-3}\n    \\dataR{Dole}{Elizabeth}{R}{NC}{-10.258572}{1.732827}{0}\n    \\dataR{Kyl}{Jon}{R}{AZ}{-10.042393}{2.633156}{0}\n    \\dataR{Craig}{Larry}{R}{ID}{-9.996687}{-0.803843}{0}\n    \\dataR{Crapo}{Mike}{R}{ID}{-9.987981}{-0.982945}{0}\n    \\dataRx{Graham}{Lindsey}{R}{SC}{-9.942743}{2.090629}{0}\n    \\dataR{Sununu}{John}{R}{NH}{-9.713036}{1.627775}{0}\n    \\dataR{Roberts}{Pat}{R}{KS}{-9.682832}{-1.781188}{0}\n    \\dataR{Martinez}{Mel}{R}{FL}{-9.602384}{1.021259}{0}\n    \\dataR{Thune}{John}{R}{SD}{-9.412180}{0.719407}{0}\n    \\dataR{Hutchison}{Kay}{R}{TX}{-9.362272}{-0.132493}{0}\n    \\dataR{Lott}{Trent}{R}{MS}{-9.333867}{-1.703358}{0}\n    \\dataR{Hatch}{Orrin}{R}{UT}{-9.325404}{-1.647643}{0}\n    \\dataR{Corker}{Bob}{R}{TN}{-9.321180}{1.242018}{0}\n    \\dataR{Grassley}{Charles}{R}{IA}{-9.274839}{1.535256}{0}\n    \\dataRx{Vitter}{David}{R}{LA}{-9.268543}{3.508303}{0}\n    \\dataR{Alexander}{Lamar}{R}{TN}{-8.869856}{-2.298938}{0}\n    \\dataRx{Shelby}{Richard}{R}{AL}{-8.861314}{0.147906}{0}\n    \\dataRx{Bennett}{Robert}{R}{UT}{-8.230363}{-5.265448}{-0.3}\n    \\dataRx{Cochran}{Thad}{R}{MS}{-8.204309}{-3.833549}{0}\n    \\dataRx{Bond}{Christopher}{R}{MO}{-7.907050}{-3.678825}{0}\n    \\dataRx{Brownback}{Samuel}{R}{KS}{-7.841176}{0.525267}{0}\n    \\dataRx{Warner}{John}{R}{VA}{-7.511971}{-3.669235}{0}\n    \\dataRx{Murkowski}{Lisa}{R}{AK}{-6.411501}{-5.705682}{-0.3}\n    \\dataR{Domenici}{Pete}{R}{NM}{-6.367823}{-4.162556}{0}\n    \\dataRx{Hagel}{Chuck}{R}{NE}{-6.362849}{0.617170}{0}\n    \\dataR{Stevens}{Ted}{R}{AK}{-6.307838}{-5.264698}{0}\n    \\dataRx{Lugar}{Richard}{R}{IN}{-5.943015}{-3.919419}{0}\n    \\dataRx{McCain}{John}{R}{AZ}{-5.888698}{2.418029}{0}\n    \\dataRx{Coleman}{Norm}{R}{MN}{-4.118805}{-5.477121}{0}\n    \\dataRx{Voinovich}{George}{R}{OH}{-3.935533}{-0.529552}{0}\n    \\dataRx{Smith}{Gordon}{R}{OR}{-3.444280}{-2.964483}{0}\n    \\dataRx{Specter}{Arlen}{R}{PA}{-2.375146}{-4.450506}{0}\n    \\dataRx{Collins}{Susan}{R}{ME}{-1.718529}{-4.243409}{0}\n    \\dataDx{Johnson}{Tim}{D}{SD}{-1.400105}{3.328882}{0}\n    \\dataRx{Snowe}{Olympia}{R}{ME}{0.030079}{-4.052412}{-0.5}\n    \\dataDx{Nelson}{E.}{D}{NE}{3.212060}{-2.989657}{0}\n    \\dataDx{Pryor}{Mark}{D}{AR}{6.049862}{-2.370210}{-0.3}\n    \\dataD{Landrieu}{Mary}{D}{LA}{6.120904}{0.319874}{0}\n    \\dataDx{Bayh}{Evan}{D}{IN}{6.212828}{0.590900}{-0.5}\n    \\dataDx{McCaskill}{Claire}{D}{MO}{6.236225}{2.501971}{0}\n    \\dataDx{Baucus}{Max}{D}{MT}{6.707413}{-1.252353}{-0.2}\n    \\dataD{Tester}{Jon}{D}{MT}{6.755870}{-0.131501}{0}\n    \\dataD{Byrd}{Robert}{D}{WV}{6.844900}{-0.777788}{0}\n    \\dataDx{Lieberman}{Joe}{ID}{CT}{6.941566}{-0.879456}{-0.05}\n    \\dataD{Lincoln}{Blanche}{D}{AR}{7.114470}{-2.537187}{0}\n    \\dataD{Dodd}{Christopher}{D}{CT}{7.150155}{0.426655}{0}\n    \\dataDx{Biden}{Joseph}{D}{DE}{7.191390}{1.560474}{0.2}\n    \\dataD{Dorgan}{Byron}{D}{ND}{7.226223}{-0.145256}{0}\n    \\dataD{Conrad}{Kent}{D}{ND}{7.509217}{-0.834957}{0}\n    \\dataDx{Salazar}{Ken}{D}{CO}{7.510558}{-2.642954}{3.3}\n    \\dataD{Carper}{Thomas}{D}{DE}{7.533684}{-1.016364}{0}\n    \\dataD{Rockefeller}{Jay}{D}{WV}{7.697069}{0.348496}{0}\n    
\\dataD{Webb}{James}{D}{VA}{7.909628}{1.263548}{0}\n    \\dataD{Nelson}{Bill}{D}{FL}{8.073955}{0.560473}{0}\n    \\dataD{Kerry}{John}{D}{MA}{8.229978}{0.783072}{0}\n    \\dataD{Inouye}{Daniel}{D}{HI}{8.254431}{0.128503}{0}\n    \\dataD{Kennedy}{Edward}{D}{MA}{8.345720}{-0.125997}{0}\n    \\dataD{Cantwell}{Maria}{D}{WA}{8.453248}{0.176907}{0}\n    \\dataD{Feingold}{Russ}{D}{WI}{8.479149}{3.536569}{-0.15}\n    \\dataDx{Obama}{Barack}{D}{IL}{8.521068}{2.312086}{-0.3}\n    \\dataDx{Reid}{Harry}{D}{NV}{8.689703}{0.665197}{-0.8}\n    \\dataD{Mikulski}{Barbara}{D}{MD}{8.776657}{0.129926}{0}\n    \\dataD{Wyden}{Ron}{D}{OR}{8.846398}{-0.272109}{0}\n    \\dataD{Leahy}{Patrick}{D}{VT}{8.853166}{-0.139810}{0}\n    \\dataD{Harkin}{Tom}{D}{IA}{8.859139}{0.517913}{0}\n    \\dataD{Klobuchar}{Amy}{D}{MN}{8.865049}{-0.814168}{0}\n    \\dataD{Murray}{Patty}{D}{WA}{8.923771}{0.434588}{0}\n    \\dataD{Akaka}{Daniel}{D}{HI}{8.955400}{-1.058981}{0}\n    \\dataD{Brown}{Sherrod}{D}{OH}{8.961322}{1.297369}{0}\n    \\dataDx{Feinstein}{Dianne}{D}{CA}{9.019263}{0.316944}{-0.1}\n    \\dataD{Bingaman}{Jeff}{D}{NM}{9.063834}{0.227690}{0}\n    \\dataDx{Schumer}{Charles}{D}{NY}{9.124474}{1.151808}{0.2}\n    \\dataD{Cardin}{Benjamin}{D}{MD}{9.154539}{0.261731}{0}\n    \\dataD{Kohl}{Herb}{D}{WI}{9.168217}{0.505190}{0}\n    \\dataD{Casey}{Bob}{D}{PA}{9.227674}{1.165228}{0}\n    \\dataD{Stabenow}{Debbie}{D}{MI}{9.259294}{0.701454}{0}\n    \\dataD{Reed}{John}{D}{RI}{9.328214}{-0.051071}{0}\n    \\dataDx{Clinton}{Hillary}{D}{NY}{9.345548}{1.498053}{0.8}\n    \\dataDx{Sanders}{Bernard}{I}{VT}{9.366115}{1.166931}{1.2}\n    \\dataD{Durbin}{Richard}{D}{IL}{9.450949}{1.549572}{0}\n    \\dataD{Lautenberg}{Frank}{D}{NJ}{9.458477}{0.934696}{0}\n    \\dataD{Levin}{Carl}{D}{MI}{9.464507}{0.830443}{0}\n    \\dataD{Menendez}{Robert}{D}{NJ}{9.493317}{0.599649}{0}\n    \\dataD{Boxer}{Barbara}{D}{CA}{9.537760}{1.490709}{0}\n    \\dataDx{Whitehouse}{Sheldon}{D}{RI}{9.675169}{0.398093}{1.3}\n    \\draw[->] (0,0) +(1,2) node[right,yshift=2] {centroid} -- +(0,0);\n    \\path (-1,10) node {2007 U.S. Senate voting data, projection to the first two principal components:};\n  \\end{tikzpicture}\n\\end{equation*}\nThe picture clearly shows senators clustering in certain areas. We can\nuse this to help answer certain questions, for example, {\\em ``How\n  different was Sanders's voting record from Clinton's?''}. However,\nalthough the 2-dimensional picture seems to reveal more detail, its\ninterpretation is less clear. While it seems obvious that the\nhorizontal axis corresponds to a conservative vs.\\ liberal world view,\nit is much less obvious what the political meaning of the vertical\naxis is. Maybe it is related to some issue that does not typically\nfollow party lines, such as North vs. South, rich states vs. poor\nstates, pro-immigration vs. anti-immigration, and so on.  To find a\nconvincing interpretation of the vertical axis, further investigation\nof the data would be required (such as looking at the actual content\nof the votes in question).\n\nFinally, a word of caution. Whenever we use mathematics to try to draw\nreal-world conclusions from data, these conclusions should be taken\nwith an extra-large grain of salt. People have an outsized tendency to\ntrust mathematics and to take its results as infallible. We therefore\nhave a special responsibility not to overstate any conclusions, and to\npoint out potential pitfalls with the analysis. 
No matter how\nwonderful principal component analysis is, we must keep in mind that\nwe are still only looking at a 2-dimensional projection of a\n200-dimensional space. Therefore it is inevitable that lots of details\nand nuances are lost. We could get a completely different picture by\nlooking at a different 2-dimensional projection.\n\nTo see how the data can sometimes be misleading, consider the question\n{\\em ``How similar is Senator Tim Johnson, Democrat of South Dakota,\n  to Senators Olympia Snowe and Susan Collins of Maine?''}. In the\n1-dimensional picture, it looked as if they were very similar. We\ncould easily rationalize this by pointing out that Johnson is the most\nconservative Democrat, and Snowe and Collins are the most liberal\nRepublicans. However, the 2-dimensional picture reveals an interesting\nnuance, which is that the voting record of Johnson is not all that\nsimilar to those of Snowe and Collins. It is entirely possible that if\nwe add a third or fourth dimension to the picture, many more such\ndetails will emerge. In summary, while principal component analysis is\na useful tool, it is just one tool among many, and we always need to\nexercise our best judgement in drawing conclusions from data.\n", "meta": {"hexsha": "00be82d4ac60b454ce8c24eb6c1920e48b77500d", "size": 58980, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "baseText/content/InnerProductSpaces-Application-PrincipalComponentAnalysis.tex", "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_issues_repo_path": "baseText/content/InnerProductSpaces-Application-PrincipalComponentAnalysis.tex", "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "baseText/content/InnerProductSpaces-Application-PrincipalComponentAnalysis.tex", "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "avg_line_length": 49.7301854975, "max_line_length": 113, "alphanum_fraction": 0.631519159, "num_tokens": 23376, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419958239132, "lm_q2_score": 0.795658104908603, "lm_q1q2_score": 0.5511062177773672}}
{"text": "\\subsection{ 5.1. A More General Stokes's Theorem}\r\n\r\nLet $V$ compact oriented submanifold of $M^n$ \\\\\r\nsmooth $F: M^n \\to W^m$ \r\n\r\n$F(V) \\subset W$ need not be a submanifold, might have self-interactions, pathologies.  \r\n\r\n\\begin{equation}\r\n        \\int_{F(V)} \\beta^p = \\int_V F^* \\beta^p  \\quad \\quad \\quad (5.1)\r\n\\end{equation}\r\ngeneralizes (3.17), def. \r\n\r\n\\[\r\n\\int_{F(V)} d\\beta^{p-1} = \\int_V F^* d\\beta^{p-1} = \\int_V dF^* \\beta^{p-1} = \\int_{ \\partial V} F^* \\beta^{p-1} = \\int_{ F(\\partial V)} \\beta^{p-1}\r\n\\]\r\n(\\emph{question}, 2nd., 3rd. equality) \\\\\r\n\r\n%\\begin{proof}\r\nAnswer: recall Naturality (cf. wikipedia exterior derivative)\r\n\r\n$\\Omega^k$ contravariant smooth functor $M \\mapsto^{ \\Omega^k } \\Lambda^k(M)$\r\n\r\n\\begin{tikzpicture}\r\n\\matrix(m)[matrix of math nodes, row sep=3em, column sep=3em, text height=1.5ex, text depth=0.25ex]\r\n{ \\Omega^k(N)    &  \\Omega^k(M)  \\\\\r\n\\Omega^{k+1}(N) &  \\Omega^{k+1}(M)  \\\\};\r\n\\path[->,font=\\scriptsize]\r\n(m-1-1) edge node[auto]{$F^*$} (m-1-2)\r\nedge node[auto]{$d$} (m-2-1)\r\n(m-1-2) edge node[auto]{$d$} (m-2-2)\r\n(m-2-1) edge node[auto]{$F^*$} (m-2-2);\r\n%\\path[right hook->](m-1-1) edge  node[auto]{$\\,$} (m-2-1);\r\n%\\path[->,font=\\scriptsize](m-1-2) edge  node[auto]{$p$} (m-2-2);\r\n%edge node[auto]{$\\gamma$} (m-2-2);\r\n%\\path[->,font=\\scriptsize](m-1-2) edge  node[auto]{$p$} (m-2-2);\r\n%\\path[dashed,->,font=\\scriptsize]\r\n%(m-2-1) edge node[auto]{$G$} (m-1-2);\r\n\\end{tikzpicture}\r\n\r\nSo that \r\n\\[\r\ndF^* = F^* d\r\n\\]\r\n%\\end{proof}\r\n\r\n \\hrulefill\r\n\r\ndefine $\\partial F(V) = F( \\partial V)$ \\\\\r\n\r\ngeneralized Stoke's thm.\r\n\\begin{equation}\r\n        \\int_{F(V)} d\\beta^{p-1} = \\int_{ \\partial F(V)} \\beta^{p-1} \\quad \\quad \\quad (5.2)\r\n\\end{equation}\r\n\r\nmanifold needs only ``piecewise smooth'' boundaries.  \r\n\r\n\r\n\r\n\\subsection{ 5.2. Closed Forms and Exact Forms }\r\n\r\n$\\beta^p$ closed if $d\\beta = 0$ \\\\ \r\n $ \\beta^p$ exact if $\\beta^p = d\\alpha^{p-1}$, some $\\alpha^{p-1}$\r\n\r\n\\begin{theorem}[5.3] Let $M^n$ with 1st Betti number 0, $b_1 = 0$, i.e. $\\forall \\, $ closed oriented piecewise smooth curve $C$ is the boundary of some compact oriented ``surface''.  Then $\\forall \\, $ closed 1-form $\\beta^1$ on $M^n$ is exact.\r\n\\end{theorem}\r\n\r\n%\\begin{proof}\r\n\r\n\\hrulefill \\\\\r\n\r\n\r\n        Let $x,y \\in M$, $y$ fixed.  \\\\\r\n        oriented $C(y,x)$ starts at $y$, ends at $x$ \r\n\r\ndefine \r\n\\[\r\nf(x) \\equiv \\int_{C(y,x)} \\beta^1\r\n\\]\r\n\r\nIf $\\exists \\, $ another $C^1(y,x)$, then $C- C'$ closed oriented curve.  \r\n\r\nBy given, $\\exists \\, $ oriented compact surface $F(V)$ s.t. $\\partial F(V) = C$.  \r\n\r\n\\[\r\n\\int_C \\beta - \\int_{C'} \\beta = \\int_{C- C'} \\beta = \\oint_{ \\partial F(V)} \\beta = \\int_{F(V)} d\\beta = 0 \r\n\\]\r\n$\\int_C \\beta = \\int_{C'} \\beta$.  $f$ independent of curve.  \r\n\r\nLet $\\mathbf{v}_x$ vector at $x$.  \r\n\r\nLet vector field $\\mathbf{v}$ coincide with $\\mathbf{v}_x$ at $x$, defined in neighborhood of curve $C(y,x)$, $v=0$ at $y$.  
\r\n\r\n$\\phi_t$ flow generated by $v$, $\\phi_t C(y,x)$ curve joining $y$ to $\\phi_i x$ \r\n\\[\r\n\\begin{gathered}\r\n        \\left[ \\frac{ d\\phi_i x}{ dt} \\right]_{t=0} = v_f \\\\ \r\n        df(v) = \\frac{d}{dt} f\\lbrace \\phi_t x \\rbrace_{t=0} = \\left[ \\frac{d}{dt} \\int_{\\phi_tC(y,x)} \\beta \\right]_{t=0} = \\int_{ C(y,x) } \\mathcal{L}_v \\beta  = \r\n\\end{gathered}\r\n\\]\r\n\r\n\\hrulefill\r\n\r\n%\\end{proof}\r\n\r\n\\subsection{ 5.3. Complex Analysis }\r\n\r\n\r\n\r\n\r\n\\subsubsection{5.5 Finding potentials}\r\n\r\n\r\n\\paragraph{5.5(1) Product of a closed and an exact form}\\ \\\\\r\nLet \\ieq{\\kappa} be a closed k-form and \\ieq{\\varepsilon} be an exact form with \\ieq{\\d\\tilde\\varepsilon=\\varepsilon}. Then\r\n\\beq{\r\n\t\\kappa\\wedge\\varepsilon\r\n\t&= \\kappa\\wedge\\d\\tilde\\varepsilon\r\n\t = (-1)^k(\\d(\\kappa\\wedge\\tilde\\varepsilon)-\\underbrace{\\d\\kappa}_{=0}\\wedge\\,\\tilde\\varepsilon)\r\n\t = \\d\\left((-1)^k\\kappa\\wedge\\tilde\\varepsilon\\right)\r\n}\r\n", "meta": {"hexsha": "a9a33566b74ce65ce603b1a43acff0f72fa4226b", "size": 3783, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/05potentials.tex", "max_stars_repo_name": "wacfeldwang333/mathphysics", "max_stars_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 50, "max_stars_repo_stars_event_min_datetime": "2017-01-10T14:24:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:19:23.000Z", "max_issues_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/05potentials.tex", "max_issues_repo_name": "wacfeldwang333/mathphysics", "max_issues_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2017-09-29T09:29:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-12T03:12:29.000Z", "max_forks_repo_path": "LaTeX_and_pdfs/the geometry of physics problems/05potentials.tex", "max_forks_repo_name": "wacfeldwang333/mathphysics", "max_forks_repo_head_hexsha": "59eb794dfa46e2b80e43df0440bb8ec3c472d973", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2018-01-21T05:33:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T20:15:13.000Z", "avg_line_length": 31.2644628099, "max_line_length": 246, "alphanum_fraction": 0.5841924399, "num_tokens": 1479, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.7956581073313276, "lm_q1q2_score": 0.5511062093592015}}
{"text": "\r\n\\section{An axiomatic system for FOS5}\r\n\r\n\\qquad To prove the results of Chapters 2 and 3 it was convenient to state things in terms of $\\nao, \\ou, \\ex, \\Diamond$ and $=$. But to stay connected with the formulations of the last chapter consider now the version of first-order modal logic defined using $\\bot$, $\\impli$, $\\todo$ and $\\Box$ (without equality). Let $\\Li$ be the same language fixed in Chapter 5. To make things simple, we write $Fml$ instead of $\\FLi$.   \r\n\r\n\r\n\\qquad Since we have started working only with semantical notions, we have defined the logic FOS5 as the set of all valid sentences relative to the class of all FOS5-models. Alternatively we can study the logic FOS5 using a simple and elegant axiomatic system composed of the following axiom schemes and inference rules:\\\\\r\n\r\n\\textbf{A$\\p$1} classical axioms of first-order logic\\\\\r\n\r\n\\textbf{A$\\p$2} $\\Box \\varphi \\impli \\varphi$\\\\\r\n\r\n\\textbf{A$\\p$3} $\\Box \\varphi \\impli \\Box \\Box \\varphi$\\\\\r\n\r\n\\textbf{A$\\p$4} $\\nao \\Box \\varphi \\impli \\Box \\nao \\Box \\varphi$\\\\\r\n\r\n\\textbf{A$\\p$5} $\\Box (\\varphi \\impli \\psi) \\impli  (\\Box \\varphi \\impli \\Box \\psi)$\\\\\r\n\t\r\n\\textbf{R$\\p$1} (\\textit{Modus Ponens}) $\\teo \\varphi$, $\\teo \\varphi\\impli\\psi$ $\\Rightarrow$ $\\teo \\psi$ \\\\\r\n\r\n\\textbf{R$\\p$2} (\\textit{generalization})  $\\teo \\varphi$ $\\Rightarrow$ $\\teo \\todo x \\varphi$ \\\\\r\n\r\n\\textbf{R$\\p$3} (\\textit{necessitation})  $\\teo \\varphi$ $\\Rightarrow$  $\\teo \\Box \\varphi$.\\\\\r\n\r\n\r\n\\qquad As in the case for FOJT45 we make use of the standard notion of $\\Gamma \\teo \\varphi$. Here the restriction on the generalization rule is the same as stated for FOJT45, and the necessitation rule is allowed only when $\\Gamma = \\vazio$. We write $FOS5\\teo \\varphi$ to denote that in this axiomatic system $\\vazio\\teo \\varphi$.\r\n\r\n\\qquad \r\nIn the seminal paper by Kripke \\cite{Kripke59} the Completeness Theorem for this logic was shown, and so the semantical and the syntactical characterization of FOS5 are equivalent. To be more precise, for every sentence $\\varphi \\in Fml$,\r\n\r\n\\begin{center}\r\n$FOS5\\teo \\varphi$ iff  $\\vSB \\varphi$.\r\n\\end{center}\r\n\r\n\r\n\r\n\r\n\r\n\\section{Realization}\r\n\r\n\r\n\\begin{defn}\r\nLet $\\varphi$ be a formula of FOS5. We define the \\textit{realization} of $\\varphi$ in the language of FOJT45, $\\varphi^{r}$, as follows:\r\n\r\n\r\n\\begin{itemize}\r\n\t\\item If $\\varphi$ is atomic, then $\\varphi^{r}$ is $\\varphi$.\r\n\t\\item If $\\varphi$ is $\\psi\\impli\\theta$, then $\\varphi^{r}$ is $\\psi^{r}\\impli\\theta^{r}$\r\n\t\\item If $\\varphi$ is $\\todo x \\psi$, then $\\varphi^{r}$ is $\\todo x \\psi^{r}$\r\n\t\\item If $\\varphi$ is $\\Box \\psi$ and $fv(\\varphi) = \\{x_{1}, \\dots, x_{n}\\}$, then $\\varphi^{r}$ is $t$$:_{\\{x_{1}, \\dots, x_{n}\\}}$$ \\psi^{r}$\r\n\\end{itemize}\r\n\\end{defn}\r\n\r\n\\qquad A realization is normal if all negative occurrences of $\\Box$ are assigned justification variables. It can easily be checked that for every $\\varphi \\in Fml$, $fv(\\varphi) = fv(\\varphi^{r})$.\r\n\r\n\r\n\r\n\\begin{defn}\r\nLet $\\varphi$ be a formula of FOJT45. 
\\textit{The forgetful projection} of $\\varphi$, $\\varphi^{\\circ}$, is defined as follows:\r\n\r\n\r\n\\begin{itemize}\r\n\t\\item If $\\varphi$ is atomic, then $\\varphi^{\\circ}$ is $\\varphi$.\r\n\t\\item If $\\varphi$ is $\\psi\\impli\\theta$, then $\\varphi^{\\circ}$ is $\\psi^{\\circ}\\impli\\theta^{\\circ}$\r\n\t\\item If $\\varphi$ is $\\todo x \\psi$, then $\\varphi^{\\circ}$ is $\\todo x \\psi^{\\circ}$\r\n\t\\item If $\\varphi$ is $t$$:_{X}$$ \\psi$, then $\\varphi^{\\circ}$ is $\\Box \\todo \\vec{y}\\psi^{\\circ}$\\\\ where $\\vec{y} \\in fv(\\psi)\\backslash X$.\r\n\\end{itemize}\r\n\\end{defn}\r\n\r\n\\qquad As before, it can easily be checked that for every $\\varphi \\in \\Fj$,  $fv(\\varphi) = fv(\\varphi^{\\circ})$.\r\n\r\n\\begin{pro}\r\nFor every constant specification $\\C$ and for every $\\varphi \\in \\Fj$,\r\n\\begin{center}\r\nIf FOJT45 $\\teo_{\\C} \\varphi$, then FOS5 $\\teo \\varphi^{\\circ}$.\r\n\\end{center}\r\n\\end{pro}\r\n\r\n\r\n\\begin{proof}\r\nInduction on the theorems of FOJT45 with $\\C$. In this proof only we shall use $\\teo$ to denote $FOS5 \\teo$. And for simplicity we are going to deal only with a representative special case of each axiom. These special cases are simpler versions of each axiom; the argument can be easily generalized.\\\\\r\n\r\n\r\n\r\n($\\varphi$ is an instance of \\textbf{A2})\\\\\r\n\r\n\\qquad Suppose $\\varphi$ is\r\n\r\n\\begin{center}\r\n$t$$:_{\\{x,y\\}}$$\\psi(x,z) \\impli t$$:_{\\{x\\}}$$\\psi(x,z)$\r\n\\end{center}\r\nSince $y \\notin fv(\\psi(x,z))$,\r\n\r\n\\begin{center}\r\n$\\{z\\} = fv(\\psi(x,z)) \\backslash \\{x,y\\} = fv(\\psi(x,z)) \\backslash \\{x\\}$ \r\n\\end{center}\r\nThus, $\\varphi^{\\circ}$ is\r\n\r\n\\begin{center}\r\n$\\Box  \\todo z \\psi^{\\circ}(x,z) \\impli \\Box  \\todo z \\psi^{\\circ}(x,z)$.\r\n\\end{center}\r\n\r\nClearly, $\\teo \\varphi^{\\circ}$.\\\\\r\n\\vspace{5mm}\r\n\r\n($\\varphi$ is an instance of \\textbf{A3})\\\\\r\n\r\n\\qquad Suppose $\\varphi$ is\r\n\r\n\\begin{center}\r\n$t$$:_{\\{x\\}}$$\\psi(x,y,z) \\impli t$$:_{\\{x,y\\}}$$\\psi(x,y,z)$\r\n\\end{center}\r\nThen, $\\varphi^{\\circ}$ is\r\n\r\n\\begin{center}\r\n$\\Box  \\todo y\\todo z \\psi^{\\circ}(x,y,z) \\impli \\Box  \\todo z \\psi^{\\circ}(x,y,z)$\r\n\\end{center}\r\nBy classical axioms,\r\n\r\n\\begin{center}\r\n$\\teo \\todo y\\todo z \\psi^{\\circ}(x,y,z) \\impli \\todo z \\psi^{\\circ}(x,y,z)$\r\n\\end{center}\r\nBy necessitation and the distributivity of $\\Box$ over $\\impli$,\r\n\r\n\\begin{center}\r\n$\\teo \\Box  \\todo y\\todo z \\psi^{\\circ}(x,y,z) \\impli \\Box  \\todo z \\psi^{\\circ}(x,y,z)$.\r\n\\end{center}\r\n\r\n\\vspace{5mm}\r\n\r\n($\\varphi$ is an instance of \\textbf{B1})\\\\\r\n\r\n\\qquad Suppose $\\varphi$ is\r\n\r\n\\begin{center}\r\n$t$$:_{\\{x\\}}$$\\psi(x,y) \\impli \\psi(x,y)$\r\n\\end{center}\r\nThen, $\\varphi^{\\circ}$ is\r\n\r\n\\begin{center}\r\n$\\Box  \\todo y\\psi^{\\circ}(x,y) \\impli \\psi^{\\circ}(x,y)$\r\n\\end{center}\r\nBy \\textbf{A$\\p$2},\r\n\r\n\\begin{center}\r\n$\\teo \\Box \\todo y\\psi^{\\circ}(x,y) \\impli \\todo y\\psi^{\\circ}(x,y)$\r\n\\end{center}\r\nAnd by classical axioms,\r\n\r\n\\begin{center}\r\n$\\teo \\todo y\\psi^{\\circ}(x,y) \\impli \\psi^{\\circ}(x,y)$\r\n\\end{center}\r\nSo, \r\n\r\n\\begin{center}\r\n$\\teo \\Box  \\todo y\\psi^{\\circ}(x,y) \\impli \\psi^{\\circ}(x,y)$.\r\n\\end{center}\r\n\\vspace{5mm}\r\n\r\n($\\varphi$ is an instance of \\textbf{B2})\\\\\r\n\r\nSuppose $\\varphi$ is\r\n\r\n\\begin{center}\r\n$t$$:_{\\{x,x\\p\\}}$$(\\psi(x,y) 
\\impli \\theta(x\\p,z)) \\impli$ $(s$$:_{\\{x,x\\p\\}}$$\\psi(x,y) \\impli$ $[t\\cdot s]$$:_{\\{x,x\\p\\}}$$\\theta(x\\p,z))$\r\n\\end{center}\r\nThen, $\\varphi^{\\circ}$ is\r\n\r\n\\begin{center}\r\n$\\Box  \\todo y\\todo z (\\psi^{\\circ}(x,y) \\impli \\theta^{\\circ}(x\\p,z)) \\impli (\\Box \\todo y \\psi^{\\circ}(x,y) \\impli \\Box \\todo z \\theta^{\\circ}(x\\p,z))$\r\n\\end{center}\r\nBy classical reasoning,\r\n\r\n\r\n\\begin{center}\r\n$\\teo \\todo y\\todo z (\\psi^{\\circ}(x,y) \\impli \\theta^{\\circ}(x\\p,z)) \\impli (\\todo y\\todo z  \\psi^{\\circ}(x,y) \\impli \\todo y\\todo z  \\theta^{\\circ}(x\\p,z))$\r\n\\end{center}\r\nSince $z \\notin fv(\\psi^{\\circ}(x,y))$ and $y \\notin fv(\\theta^{\\circ}(x\\p,z))$, we have that\r\n\r\n\r\n\\begin{center}\r\n$\\teo \\todo y\\todo z  \\psi^{\\circ}(x,y) \\see \\todo y \\psi^{\\circ}(x,y)$\\\\\r\n$\\teo \\todo y\\todo z  \\theta^{\\circ}(x\\p,z) \\see \\todo z \\theta^{\\circ}(x\\p,z)$\r\n\\end{center}\r\nHence,\r\n\r\n\\begin{center}\r\n$\\teo \\todo y\\todo z (\\psi^{\\circ}(x,y) \\impli \\theta^{\\circ}(x\\p,z)) \\impli (\\todo y \\psi^{\\circ}(x,y) \\impli \\todo z \\theta^{\\circ}(x\\p,z))$\r\n\\end{center}\r\nBy necessitation and the distributivity of $\\Box$ over $\\impli$,\r\n\r\n\\begin{center}\r\n$\\teo \\Box  \\todo y\\todo z (\\psi^{\\circ}(x,y) \\impli \\theta^{\\circ}(x\\p,z)) \\impli (\\Box \\todo y \\psi^{\\circ}(x,y) \\impli \\Box \\todo z \\theta^{\\circ}(x\\p,z))$.\r\n\\end{center}\r\n\\vspace{5mm}\r\n\r\n($\\varphi$ is an instance of \\textbf{B3})\\\\\r\n\r\n\\qquad If $\\varphi$ is $t$$:_{\\{x\\}}$$\\psi(x,y) \\impli$ $[t+s]$$:_{\\{x\\}}$$\\psi(x,y)$, then $\\varphi^{\\circ}$ is $\\Box  \\todo y \\psi^{\\circ}(x,y) \\impli \\Box  \\todo y \\psi^{\\circ}(x,y)$. Clearly, $\\teo \\varphi^{\\circ}$. 
The same argument holds when $\\varphi$ is $s$$:_{\\{x\\}}$$\\psi(x,y) \\impli$ $[t+s]$$:_{\\{x\\}}$$\\psi(x,y)$.\r\n\\vspace{5mm}\r\n\r\n($\\varphi$ is an instance of \\textbf{B4})\\\\\r\n\r\n\\qquad If $\\varphi$ is $t$$:_{\\{x\\}}$$\\psi(x,y) \\impli$ $!t$$:_{\\{x\\}}$$t$$:_{\\{x\\}}$$\\psi(x,y)$, then $\\varphi^{\\circ}$ is $\\Box\\todo y \\psi^{\\circ}(x,y) \\impli \\Box \\Box\\todo y \\psi^{\\circ}(x,y)$, which is an instance of axiom \\textbf{A$\\p$3}; hence $\\teo \\varphi^{\\circ}$.\r\n\\vspace{5mm}\r\n\r\n\r\n\r\n($\\varphi$ is an instance of \\textbf{B5})\\\\\r\n\r\n\r\n\\qquad If $\\varphi$ is $\\nao t$$:_{\\{x\\}}$$\\psi(x,y) \\impli$ $?t$$:_{\\{x\\}}$$\\nao t$$:_{\\{x\\}}$$\\psi(x,y)$, then $\\varphi^{\\circ}$ is $\\nao \\Box\\todo y \\psi^{\\circ}(x,y) \\impli \\Box \\nao \\Box\\todo y \\psi^{\\circ}(x,y)$, which is an instance of axiom \\textbf{A$\\p$4}; hence $\\teo \\varphi^{\\circ}$.\r\n\r\n\\pagebreak\r\n($\\varphi$ is an instance of \\textbf{B6})\\\\\r\n\r\n\\qquad Suppose $\\varphi$ is\r\n\r\n\\begin{center}\r\n $t$$:_{\\{y\\}}$$\\psi(x,y,z) \\impli gen_{x}(t)$$:_{\\{y\\}}$$\\todo x\\psi(x,y,z)$\r\n\\end{center}\r\nSo $\\varphi^{\\circ}$ is\r\n\\begin{center}\r\n $\\Box \\todo x \\todo z\\psi^{\\circ}(x,y,z) \\impli \\Box \\todo z \\todo x\\psi^{\\circ}(x,y,z)$\r\n\\end{center}\r\nBy classical reasoning,\r\n\r\n\\begin{center}\r\n $\\teo \\todo x \\todo z\\psi^{\\circ}(x,y,z) \\impli \\todo z \\todo x\\psi^{\\circ}(x,y,z)$\r\n\\end{center}\r\nBy necessitation and the distributivity of $\\Box$ over $\\impli$,\r\n\r\n\\begin{center}\r\n $\\teo \\Box \\todo x \\todo z\\psi^{\\circ}(x,y,z) \\impli \\Box \\todo z \\todo x\\psi^{\\circ}(x,y,z)$.\r\n\\end{center}\r\n\r\n\\qquad If $\\varphi$ is derived by using the rules \\textbf{R1} or \\textbf{R2} the result easily follows from the induction hypothesis.\r\n\r\n\\qquad Suppose $\\varphi$ is derived using the rule \\textbf{R3}. So $\\varphi$ is $c$$:$$\\psi(x)$ where $\\psi(x)$ is an axiom. By the argument above, $\\teo \\psi^{\\circ}(x)$. By generalization and necessitation, $\\teo \\Box \\todo x\\psi^{\\circ}(x)$, i.e., $\\teo \\varphi^{\\circ}$.\r\n\\end{proof}\r\n\r\n\\qquad As usual in the study of justification logic, the proof of Proposition 33 is a trivial induction on the theorems of the justification logic in question (in this case FOJT45). What is a more significant result is the following:\\\\\r\n\r\n\r\n\\textbf{(Realization Theorem)} If FOS5 $\\teo \\varphi$, then FOJT45 $\\teo_{\\C} \\varphi^{r}$ for a constant specification $\\C$ and a normal realization $r$.\\\\\r\n\r\n\r\n\r\n\\qquad Right now we believe that the best path to try to prove this theorem is to apply all the notions and results presented in this thesis in order to adapt the proof of the Realization Theorem using semantical tools (as presented in \\cite{Fitting13}, \\cite{Fitting13R} and \\cite{Fitting14R}) for FOJT45. But we also consider different ways. Another strategy is to study the constructive argument using nested sequent calculus (as presented in \\cite{Kuz10}) and see how this argument can be used for this case.\r\nWe want to consider these two paths in future research.\r\n\r\n\r\n\\section{Justification logic and interpolation}\r\n\r\n\r\n\\qquad When studying justification logic it is natural to investigate the relationship between this logic and modal logic. The Realization Theorem gives us a tool to see this relationship. 
Although we have left the proof of this theorem for future work, it is worthwhile to see one easy conclusion of the Realization Theorem. To do so we need to state one definition:\r\n\r\n\r\n\r\n\\begin{itemize}\r\n\\item[] \\textit{The Interpolation Theorem} holds for FOJT45 iff for every constant specification $\\C$ and sentences $\\varphi$ and $\\psi$, if $\\teo_{\\C} \\varphi \\impli\\psi$, then there is a formula $\\theta$ such that $\\teo_{\\C} \\varphi \\impli \\theta$, $\\teo_{\\C} \\theta \\impli \\psi$ and the non-logical symbols and the justification terms that occur in $\\theta$ occur both in $\\varphi$ and $\\psi$.\r\n\\end{itemize}\r\n\r\n\r\n\r\n\r\n\\begin{pro}\r\nIf the Realization Theorem holds between FOS5 and FOJT45, then the Interpolation Theorem fails for FOJT45. \r\n\\end{pro}\r\n\r\n\r\n\\begin{proof}\r\nSuppose that the Interpolation Theorem holds for FOJT45. By Theorem 2 and by the Completeness Theorem for FOS5, let $\\varphi$ and $\\psi$ be sentences such that FOS5 $\\teo \\varphi \\impli \\psi$ and there is no interpolant between them. By the Realization Theorem, there is a normal realization $r$ such that\r\n\r\n\\begin{center}\r\nFOJT45 $\\teo_{\\C} \\varphi^{r} \\impli \\psi^{r}$\r\n\\end{center}\r\n\r\n\\qquad By hypothesis, there is a formula $\\theta$ such that the non-logical symbols and the justification terms that occur in $\\theta$ occur both in $\\varphi^{r}$ and $\\psi^{r}$. Moreover, we have that\r\n\r\n\r\n\\begin{center}\r\nFOJT45 $\\teo_{\\C} \\varphi^{r} \\impli \\theta$\\\\\r\nFOJT45 $\\teo_{\\C} \\theta \\impli \\psi^{r}$\r\n\\end{center}\r\nHence, by the forgetful projection (Proposition 33):\r\n\r\n\r\n\\begin{center}\r\nFOS5 $\\teo (\\varphi^{r} \\impli \\theta)^{\\circ}$\\\\\r\nFOS5 $\\teo (\\theta \\impli \\psi^{r})^{\\circ}$\r\n\\end{center}\r\ni.e.,\r\n\r\n\\begin{center}\r\nFOS5 $\\teo \\varphi \\impli \\theta^{\\circ}$\\\\\r\nFOS5 $\\teo \\theta^{\\circ} \\impli \\psi$\r\n\\end{center}\r\n\r\n\r\n\\qquad Now, since there is no interpolant between $\\varphi$ and $\\psi$, there is no relation symbol occurring in $\\theta^{\\circ}$. Hence, $\\theta^{\\circ}$ is a formula such that $\\bot$ is the only atomic formula that occurs in $\\theta^{\\circ}$. Thus, either $\\theta^{\\circ}$ is FOS5-valid or $\\theta^{\\circ}$ is FOS5-unsatisfiable.\r\n\r\n\\qquad On the one hand, if $\\theta^{\\circ}$ is FOS5-valid, then, since $\\models_{FOS5} \\theta^{\\circ}\\impli \\psi$, $\\psi$ is FOS5-valid. And so, $\\varphi \\impli \\psi$ has an interpolant, contradicting our hypothesis.\r\n\r\n\\qquad On the other hand, if $\\theta^{\\circ}$ is FOS5-unsatisfiable, then, since $\\models_{FOS5} \\varphi \\impli \\theta^{\\circ}$, $\\varphi$ is FOS5-unsatisfiable. And so, $\\varphi \\impli \\psi$ has an interpolant, contradicting our hypothesis.\r\n\\end{proof}\r\n\r\n\\vspace{10mm}\r\n\r\n\\qquad We hope that the topics presented in this thesis have fulfilled two objectives: i) to give a brief introduction to first-order S5; ii) to clarify the connections between first-order modal logic and first-order justification logic.\r\n\r\n\\qquad Regarding the last objective, it is important to stress that, as Proposition 34 shows, the failure of the Interpolation Theorem for FOJT45 is just a straightforward consequence of the Realization Theorem. And so a proof of this theorem for FOJT45 will not only be of interest to researchers working in justification logic, but will also be a result of interest for the broader modal logic community. 
\r\n\r\n\r\n", "meta": {"hexsha": "843848a01825985becabaeca51153aaeee289e19", "size": 14069, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/chapters/conclusion.tex", "max_stars_repo_name": "felipessalvatore/dissertacao_mestrado", "max_stars_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/chapters/conclusion.tex", "max_issues_repo_name": "felipessalvatore/dissertacao_mestrado", "max_issues_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/chapters/conclusion.tex", "max_forks_repo_name": "felipessalvatore/dissertacao_mestrado", "max_forks_repo_head_hexsha": "171d9f4d7b99fb6b70de04c109ff4f5d0f65ef4b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6634920635, "max_line_length": 513, "alphanum_fraction": 0.6519297747, "num_tokens": 4845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.7956581049086031, "lm_q1q2_score": 0.5511062076811208}}
{"text": "\\documentclass[dvisvgm,hypertex,aspectratio=169]{beamer}\n\\usefonttheme{serif}\n\n%\\usepackage[utf8]{inputenc}\n%\\usepackage[T1]{fontenc}\n\n%\\usepackage[draft]{animate}\n\\usepackage[final]{animate}\n\\usepackage{ifthen}\n\n\n%\\usepackage{pythontex} % <--\n\\usepackage{graphicx}\n\n\n\\usepackage{tikz}\n\\usepackage{pgfplots}\n\\usepackage{pgfplotstable}\n\\pgfplotsset{compat=1.16}\n\\usetikzlibrary{calc}\n\\usetikzlibrary{decorations.pathmorphing,patterns}\n\\usepackage{amsmath}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \n% Define footer\n\\usepackage{ccicons}\n\n\\makeatletter\n\\setbeamertemplate{footline}\n{\n  \\leavevmode%\n  \\hbox{%\n  %\\begin{beamercolorbox}[wd=.333333\\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}%\n    %\\usebeamerfont{title in head/foot}\\insertsubsection\n  %\\end{beamercolorbox}%\n  %\\begin{beamercolorbox}[wd=.333333\\paperwidth,ht=2.25ex,dp=1ex,right]{date in head/foot}%\n  %  \\usebeamerfont{date in head/foot}\\insertshortdate{}\\hspace*{2em}\n  %  \\insertframenumber{} / \\inserttotalframenumber\\hspace*{2ex} \n  %\\end{beamercolorbox}}%\n  %\\vskip0pt%\n  \\begin{beamercolorbox}[wd=.92\\paperwidth,ht=2.25ex,dp=1ex,right]{author in head/foot}%\n    \\usebeamerfont{author in head/foot}\\insertauthor\n  \\end{beamercolorbox}%\n  \\begin{beamercolorbox}[wd=.08\\paperwidth,ht=2.25ex,dp=1ex,right]{date in head/foot}%\n    \\ccbysa\n  \\end{beamercolorbox}}%\n  \\vskip0pt%\n}\n\\makeatother\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n\\author{\\href{mailto:kjartan@tec.mx}{kjartan@tec.mx}}\n\n\\begin{document}\n\n\\section{Create animation}\n\n\\begin{frame}[label=A]{Aliasing}\n  \\pgfplotsset{\n    dirac/.style={\n      mark=triangle*,\n      mark options={scale=0.6},\n      ycomb,\n      scatter,\n      visualization depends on={y/abs(y)-1 \\as \\sign},\n      scatter/@pre marker code/.code={\\scope[rotate=90*\\sign,yshift=-2pt]}\n    }\n  }\n\n\\pgfmathsetmacro{\\omegaS}{20}\n\\pgfmathsetmacro{\\omegaN}{\\omegaS*0.5}\n\\pgfmathsetmacro{\\nframes}{\\omegaS - 1}\n\\pgfmathsetmacro{\\ndsamples}{36}\n\\pgfmathsetmacro{\\hh}{2*pi/\\omegaS}\n\\pgfmathsetmacro{\\tend}{\\hh*(\\ndsamples-1)}\n\n  \\begin{center}\n    \\begin{animateinline}[controls, loop, palindrome]{4}\n      \\multiframe{\\nframes}{n=1+1}{\n        \\pgfmathsetmacro{\\ww}{\\n*1.0}\n        \\pgfmathsetmacro{\\nsamples}{\\n*30}\n        \\pgfmathsetmacro{\\nasamples}{(\\nframes-\\n+1)*30}\n        \\pgfmathsetmacro{\\omegaal}{\\omegaS - \\ww}\n        \\pgfmathsetmacro{\\aliasing}{\\omegaN - \\omegaal}\n\n        \\begin{tikzpicture}[scale=0.6,]\n          \\begin{axis}[\n            width=14cm,\n            height=3.8cm,\n            title={Spectrum},\n            xlabel={$\\omega$ [rad/s]},\n            ylabel={$|Y(i\\omega)|$},\n            xmin=-22,\n            xmax=22,\n            xtick={-\\omegaS, -\\omegaN, 0, \\omegaN, \\omegaS},\n            xticklabels={$-\\omega_s$, $-\\omega_N$, 0, $\\omega_N$, $\\omega_s$},\n            ytick = \\empty,\n            ymin = 0,\n            ymax = 1.2,\n            ]\n            \\addplot[blue!80!green, thick, dirac] coordinates {(-\\ww, 1) (\\ww, 1)};\n            \\addplot[red, thick, dashed, dirac] coordinates {(-\\omegaal, 1) (\\omegaal, 1)};\n          \\end{axis}\n          \n          \\begin{axis} [\n            title={Original signal},\n            width = 14cm,\n            height=3.8cm,\n            yshift = -4cm,\n            
xlabel = {$t$},\n            ylabel = {$y$},\n            xtick = \\empty,\n            ytick = \\empty,\n            ymin=-1.1,\n            ymax = 1.1,\n            ]\n            \\addplot+[thick, blue!80!green, no marks, domain=0:\\tend, samples=\\nsamples, smooth] {sin(deg(\\ww*x)-20)};\n              \\addplot+[black!80, ycomb, mark=*,  mark options={black!70}, domain=0:\\tend, samples=\\ndsamples] {sin(deg(\\ww*x)-20)};\n\n          \\end{axis}\n          \\begin{axis} [\n            title={Alias signal},\n            width = 14cm,\n            height=3.8cm,\n            yshift = -8cm,\n            xlabel = {$t$},\n            ylabel = {$y_a$},\n            xtick = \\empty,\n            ytick = \\empty,\n            ymin=-1.1,\n            ymax = 1.1,\n            ]\n            \\addplot[black!80, ycomb, mark=*, mark options={black!70}, domain=0:\\tend, samples=\\ndsamples] {sin(deg(\\ww*x)-20)};\n              \\addplot[red!80!black, domain=0:\\tend, no marks, samples=\\nasamples, smooth] {-(\\aliasing >0)*sin(deg(\\omegaal*x)+20) + (\\aliasing <= 0)*sin(deg(\\ww*x)-20)};\n\n            \\end{axis}\n            %\\node at (1,1) {$\\omega_1 = \\ww$};\n            %\\node at (1,-1) {$\\omega_a = \\omegaal$};\n            %\\node at (1,-2) {$h = \\hh$};\n        \\end{tikzpicture}\n      }\n    \\end{animateinline}\n  \\end{center}\n\\end{frame}\n\\end{document}\n\n\n", "meta": {"hexsha": "d1df7bfae03027f5485eb746cd12f659f21d28ad", "size": 4631, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "sampling-and-aliasing/slides/aliasing-animation.tex", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "sampling-and-aliasing/slides/aliasing-animation/aliasing-animation.tex", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "sampling-and-aliasing/slides/aliasing-animation/aliasing-animation.tex", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 30.8733333333, "max_line_length": 171, "alphanum_fraction": 0.5566832218, "num_tokens": 1520, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6926419831347361, "lm_q2_score": 0.7956580976404296, "lm_q1q2_score": 0.5511062026468787}}
{"text": "\\section{Pathfinding}\n\nPathfinding is a technique or a process carried by a software running in a computer from which the shortest path or route between two points is extrapolated. Pathfinding is not only important in video games, but also in other areas such as delivery, transport services and intelligent storage among others. Pathfinding algorithms are typically, although not solely, heavily based on the Dijkstra's algorithm[5] which implies what follows: Let the node starting be X. Let the distance of node Y be the distance from X to Y. Dijkstra's algorithm will assign some initial distance values and will try iteratively to improve them step by step.\n\n\\vspace{2mm}\nHowever there are other complementary approaches using different techniques that seek to improve such algorithms.\n\n\\subsection{Techniques for Pathfinding}\nPathfinding is usually a two-step process: first, a graph generation algorithm followed by a Pathfinding algorithm to calculate the best path. Zeyad Adb Algfoor et al[10] published a brief but thorough compendium of the most common techniques developed during the last 10 years.\n\n\n\\subsubsection{Grids}\nIn Pathfinding grids are represented as graphs, a list of vertices or points connected by edges to represent the map, the navigation performance of which is based on its attributes. The most popular grid approaches are: regular grids and irregular grids.\n\n\\vspace{2mm}\nRegular grids use triangles, squares and hexagons to describe tessellations of a given terrain. The most prominent of which are the following:\n\n\\begin{itemize}\n\t\\item 2D hexagonal Grid\n\t\\item 2D Square Grid\n\t\\item 2D Triangular Grid\n\t\\item 3D Cubic Grid\n\\end{itemize}\n\n\\vspace{2mm}\nWhereas irregular grids use different techniques to define terrain topology such as:\n\n\\begin{itemize}\n\t\\item Visibility Graphs\n\t\\item Mesh Navigation\n\t\\item Waypoints\n\\end{itemize}\n\n\\vspace{2mm}\nFurther information on the actual implementation of these pathfinding techniques can be found in Zeyad et al[9]'s paper.\n\n\n\\subsection{Using Potential Fields}\n\nIn 1985, Ossama Khatib discussed a new concept in regards a real-time obstacle avoidance for manipulators and mobile Robots. He named it Artificial Potential Fields[6]. In 2008, Johan Hagelback et al.[10] used it to create a Multi-agent Potential Field-based Bot for Real-Time Strategy Games(RTS) in which objects in the map where assigned charges (like magnetic chargers positive or negative) and such charges were used as forces to attract the agents to specific destination or to repel such agents to avoid collisions or to delimit the terrain.\n\n\\vspace{2mm}\nThey proposed two-different scenarios using Open real-time strategy (ORTS) engine to deploy the bot. In the first scenario the task of the bot was to recollect as many resource as possible in a limited amount of time. To do so Johan Hagelback et al. had 20 agents (workers) using a finite state machine (FSM) to define their behaviour and assigned the charges to the agents, to the resource gathering point, the base were resources had to be delivered and \"sheep\" moving around the map to provide moving obstacles. 
Even though the results shown that the bot was prone to disconnect from the game in some occasions, it beat bots developed in previous years improving the average of resources gathered.\n\n\\vspace{2mm}\nThe second scenario, using ORTS, was a tank game where two players had a number of tanks and bases and the goal of the game was to destroy both enemy bases and tanks. After outlining the rules for the game agents, they assigned the charges accordingly and programmed the bot to avoid situating tanks in the middle of enemy clusters. The results of such scenario, tested against other bots, were of a 98\\% of victory rate over their opponents.\n\n\n\\subsection{Aesthetics in Pathfinding}\n\nWhile most of the algorithms focuses lies on obtaining the best suitable path, such path does not always meet the expectations by using an unrealistic behaviour that differs greatly of what a humans would perform. In order to find more \"human-like\" behaviour in Pathfinding some authors such as Coleman, R.[7] introduce the idea of Aesthetics in it.\n\n\\vspace{2mm} \nThe experiment's approach was to use Mandelbrot's Fractal Dimension Analysis[8] to determine the aesthetics level of certain paths, an A* Pathfinding algorithm was used to create the path and then tweaked to reward paths that were obstacle-prone to simulate a behaviour of \"stealthyness\". By doing so the levels of aesthetics were higher than in those were paths were determined as best suited by the algorithm.   ", "meta": {"hexsha": "9213437081edd7af04a60ed810db8668b1b356f8", "size": 4582, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Sections/pathfinding.tex", "max_stars_repo_name": "rndmized/AI-in-gaming-research", "max_stars_repo_head_hexsha": "7415a0c175ab2271a22f99822bc5848df3d01e0c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Sections/pathfinding.tex", "max_issues_repo_name": "rndmized/AI-in-gaming-research", "max_issues_repo_head_hexsha": "7415a0c175ab2271a22f99822bc5848df3d01e0c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Sections/pathfinding.tex", "max_forks_repo_name": "rndmized/AI-in-gaming-research", "max_forks_repo_head_hexsha": "7415a0c175ab2271a22f99822bc5848df3d01e0c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.8518518519, "max_line_length": 700, "alphanum_fraction": 0.8051069402, "num_tokens": 995, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837743174788, "lm_q2_score": 0.721743206297598, "lm_q1q2_score": 0.5510392272320889}}
{"text": "% !TeX root = ./forallxsol.tex\n\n\\stepcounter{chapter} % Introducing ML\n\n\\chapter{Nat\u00fcrliche Herleitung f\u00fcr die ML}\n\n\\setcounter{ProbPart}{0}\n\n\\problempart\nGeben Sie Beweise f\u00fcr die folgenden Aussagen:\n\\begin{earg}\n\\item $\\Box (A\\eand B)\\vdash_\\mlK \\Box A \\eand \\Box B$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box (A \\eand B)}\n\\open\n\\hypo{2}{\\ebox}\n\\have{3}{A \\eand B}\\boxe{1}\n\\have{4}{A}\\ae{3}\n\\close\n\\have{5}{\\ebox A}\\boxi{3-4}\n\\open\n\\hypo{6}{\\ebox}\n\\have{7}{A \\eand B}\\boxe{1}\n\\have{8}{B}\\ae{7}\n\\close\n\\have{9}{\\ebox B}\\boxi{6-8}\n\\have{10}{\\Box A \\eand \\Box B}\\ai{5,9}\n\\end{nd}\\]\n}\n\\newpage\n\\item $\\Box A\\eand\\Box B\\vdash_\\mlK \\Box( A \\eand  B)$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box A \\eand \\Box B}\n\\have{2}{\\Box A}\\ae{1}\n\\have{3}{\\Box B}\\ae{1}\n\\open\n\\hypo{4}{\\ebox}\n\\have{5}{A}\\boxe{2}\n\\have{6}{B}\\boxe{3}\n\\have{7}{A\\eand B}\\ai{5,6}\n\\close\n\\have{8}{\\Box (A\\eand B)}\\boxi{4-7}\n\\end{nd}\\]\n}\n\n\\item $\\Box A\\eor\\Box B\\vdash_\\mlK \\Box( A \\eor  B)$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box A \\eor \\Box B}\n\\open\n\\hypo{2}{\\Box A}\n\\open\n\\hypo{3}{\\ebox}\n\\have{4}{A}\\boxe{2}\n\\have{5}{A\\eor B}\\oi{4}\n\\close\n\\have{6}{\\ebox(A \\eor B)}\\boxi{3-5}\n\\close\n\\open\n\\hypo{7}{\\Box B}\n\\open\n\\hypo{8}{\\ebox}\n\\open\n\\have{9}{B}\\boxe{7}\n\\have{10}{A\\eor B}\\oi{9}\n\\close\n\\have{11}{\\Box (A\\eor B)}\\boxi{8-10}\n\\close\n\\have{12}{\\Box(A\\eor B)}\\oe{1,2-6,7-11}\n\\end{nd}\\]\n}\n\n\\item $\\Box (A \\eiff B)\\vdash_\\mlK  \\Box A \\eiff \\Box B$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box(A\\eiff B)}\n\\open\n\\hypo{2}{\\Box A}\n\\open\n\\hypo{3}{\\ebox}\n\\have{4}{A\\eiff B}\\boxe{1}\n\\have{5}{A}\\boxe{2}\n\\have{6}{B}\\be{4,5}\n\\close\n\\have{7}{\\ebox B}\\boxi{3-6}\n\\close\n\\open\n\\hypo{8}{\\Box B}\n\\open\n\\hypo{9}{\\ebox}\n\\have{10}{A\\eiff B}\\boxe{1}\n\\have{11}{B}\\boxe{8}\n\\have{12}{A}\\be{10,11}\n\\close\n\\have{13}{\\ebox A}\\boxi{9-12}\n\\close\n\\have{14}{\\Box A \\eiff \\Box B}\\bi{2-7, 8-13}\n\\end{nd}\\]\n}\n\\end{earg}\n\n\\problempart\nGeben Sie Beweise f\u00fcr die folgenden Aussagen an, ohne Modalwechselregeln zu verwenden:\n\\begin{earg}\n\\item $\\enot\\Box A\\vdash_\\mlK  \\Diamond \\enot A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\enot\\Box A}\n\\open\n\\hypo{2}{\\ebox\\enot\\enot A}\n\\open\n\\hypo{3}{\\ebox}\n\\have{4}{\\enot\\enot A}\\boxe{2}\n\\have{5}{A}\\dne{4}\n\\close\n\\have{6}{\\ebox A}\\boxi{3-5}\n\\have{7}{\\ered}\\ne{1,6}\n\\close\n\\have{8}{\\enot\\Box\\enot\\enot A}\\ni{2-6}\n\\have{9}{\\Diamond\\enot A}\\diadf{8}\n\\end{nd}\\]\n}\n\n\\item $\\Diamond\\enot A\\vdash_\\mlK \\enot \\Box A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Diamond\\enot A}\n\\have{2}{\\enot\\Box\\enot \\enot A}\\diadf{1}\n\\open\n\\hypo{3}{\\ebox A}\n\\open\n\\hypo{4}{\\ebox}\n\\open\n\\hypo{5}{\\enot A}\n\\have{6}{A}\\boxe{3}\n\\have{7}{\\ered}\\ne{5,6}\n\\close\n\\have{8}{\\enot\\enot A}\\ni{5-7}\n\\close\n\\have{9}{\\ebox\\enot\\enot A}\\boxi{4-8}\n\\have{10}{\\ered}\\ne{2,9}\n\\close\n\\have{11}{\\enot\\Box A}\\ni{3-10}\n\\end{nd}\\]\n}\n\\newpage\n\\item $\\enot\\Diamond A\\vdash_\\mlK \\Box\\enot A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\enot\\Diamond A}\n\\open\n\\hypo{2}{\\enot\\Box\\enot A}\n\\have{3}{\\Diamond A}\\diadf{2}\n\\have{4}{\\ered}\\ne{1,3}\n\\close\n\\have{5}{\\enot\\enot\\Box\\enot A}\\ni{2-4}\n\\have{6}{\\Box \\enot A}\\dne{5}\n\\end{nd}\\]\n}\n\n\\item $\\Box\\enot 
A\\vdash_\\mlK \\enot\\Diamond A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box\\enot A}\n\\open\n\\hypo{2}{\\Diamond A}\n\\have{3}{\\enot \\Box \\enot A}\\diadf{2}\n\\have{4}{\\bot}\\ne{1,3}\n\\close\n\\have{5}{\\enot\\Diamond A}\\by{$\\enot$I}{2-4}\n\\end{nd}\\]\n}\n\\end{earg}\n\n\\problempart\nIn den folgenden F\u00e4llen ist es Ihnen erlaubt, die Modalwechselregeln in Ihren Beweisen zu verwenden:\n\\begin{earg}\n\\item $\\Box(A\\eif B), \\Diamond A \\vdash_\\mlK  \\Diamond B$\n\\myanswer{\n\\[\\begin{nd}\n\\have{1}{\\Box(A\\eif B)}\n\\hypo{2}{\\Diamond A}\n\\have{3}{\\enot\\Box\\enot A}\\diadf{2}\n\\open\n\\hypo{4}{\\ebox\\enot B}\n\\open\n\\hypo{5}{\\ebox}\n\\have{6}{A\\eif B}\\boxe{1}\n\\have{7}{\\enot B}\\boxe{4}\n\\have{8}{\\enot A}\\mt{5,6}\n\\close\n\\have{9}{\\ebox\\enot A}\\boxi{5-8}\n\\have{10}{\\ered}\\ne{3,9}\n\\close\n\\have{11}{\\enot\\Box\\enot B}\\ni{4-10}\n\\have{12}{\\Diamond B}\\diadf{11}\n\\end{nd}\\]\n}\n\n\\item $\\Box A \\vdash_\\mlK  \\enot\\Diamond\\enot A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box A}\n\\open\n\\hypo{2}{\\Diamond\\enot A}\n\\have{3}{\\enot\\Box A}\\mc{2}\n\\have{4}{\\bot}\\ne{1,3}\n\\close\n\\have{5}{\\enot\\Diamond\\enot A}\\ni{2-4}\n\\end{nd}\\]\n}\n\n\\item $\\enot\\Diamond\\enot A \\vdash_\\mlK  \\Box A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\enot\\Diamond\\enot A}\n\\have{2}{\\Box\\enot\\enot A}\\mc{1}\n\\open\n\\hypo{3}{\\ebox}\n\\have{4}{\\enot\\enot A}\\boxe{2}\n\\have{5}{A}\\dne{4}\n\\close\n\\have{6}{\\Box A}\\boxi{3-5}\n\\end{nd}\\]\n}\n\\end{earg}\n\n\\problempart\nGeben Sie Beweise f\u00fcr die folgenden Aussagen:\n\\begin{earg}\n\\item $P\\vdash_\\mlT \\Diamond P$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{P}\n\\open\n\\hypo{3}{\\Box \\enot P}\n\\have{4}{\\enot P}\\rt{3}\n\\have{5}{\\ered}\\ne{1,4}\n\\close\n\\have{6}{\\enot\\Box\\enot P}\\ni{3-5}\n\\have{7}{\\Diamond P}\\diadf{6}\n\\end{nd}\\]\n}\n\n\\item $\\vdash_\\mlT  (A\\eand B)\\eor(\\enot \\Box A\\eor\\enot\\Box B)$\n\\myanswer{\n\\[\\begin{nd}\n\\open\n\\hypo{1}{\\Box A\\eand\\Box B}\n\\have{2}{\\Box A}\\ae{1}\n\\have{3}{\\Box B}\\ae{1}\n\\have{4}{A}\\rt{2}\n\\have{5}{B}\\rt{3}\n\\have{6}{A\\eand B}\\ai{4,5}\n\\have{7}{(A\\eand B)\\eor (\\enot \\Box A\\eor \\enot \\Box B)}\\oi{6}\n\\close\n\\open\n\\hypo{8}{\\enot(\\Box A\\eand \\Box B)}\n\\have{9}{\\enot\\Box A \\eor \\enot\\Box B}\\dem{8}\n\\have{10}{(A\\eand B)\\eor (\\enot\\Box A \\eor \\enot\\Box B)}\\oi{9}\n\\close\n\\have{11}{(A\\eand B)\\eor (\\enot\\Box A \\eor \\enot\\Box B)}\\tnd{1-7,8-10}\n\\end{nd}\\]\n}\n\\end{earg}\n\\newpage\n\\problempart\nGeben Sie Beweise f\u00fcr die folgenden Aussagen:\n\\begin{earg}\n\\item $\\Box(\\Box A\\eif B), \\Box (\\Box B\\eif C), \\Box A \\vdash_\\mlSfour \\Box\\Box C$\n\\myanswer{\n\\[\\begin{nd}\n\\have{1}{\\Box(\\Box A \\eif B)}\n\\have{2}{\\Box(\\Box B\\eif C)}\n\\hypo{3}{\\Box A}\n\\open\n\\hypo{4}{\\ebox}\n\\have{5}{\\Box A}\\rfour{3}\n\\have{6}{\\Box(\\Box A\\eif B)}\\rfour{1}\n\\have{7}{\\Box(\\Box B\\eif C)}\\rfour{2}\n\\open\n\\hypo{8}{\\ebox}\n\\have{9}{\\Box A}\\rfour{5}\n\\have{10}{\\ebox(\\ebox A \\eif B)}\\rfour{6}\n\\open\n\\hypo{11}{\\ebox}\n\\have{12}{\\ebox A}\\rfour{9}\n\\have{13}{\\Box A\\eif B}\\boxe{10}\n\\have{14}{B}\\ce{12,13}\n\\close\n\\have{15}{\\ebox B}\\boxi{11-14}\n\\have{16}{\\ebox B \\eif C}\\boxe{7}\n\\have{17}{C}\\ce{15,16}\n\\close\n\\have{18}{\\Box C}\\boxi{8-17}\n\\close\n\\have{19}{\\Box \\Box C}\\boxi{4-18}\n\\end{nd}\\]\n}\n\n\\item $\\Box A \\vdash_\\mlSfour  \\Box(\\Box A \\eor B)$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Box 
A}\n\\open\n\\hypo{2}{\\ebox}\n\\have{3}{\\Box A}\\rfour{1}\n\\have{4}{\\Box A \\eor B}\\oi{4}\n\\close\n\\have{5}{\\Box(\\Box A \\eor B)}\\boxi{2-4}\n\\end{nd}\\]\n}\n\n\\item $\\Diamond \\Diamond A \\vdash_\\mlSfour  \\Diamond A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\Diamond\\Diamond A}\n\\open\n\\hypo{2}{\\ebox\\enot A}\n\\have{3}{\\enot\\ebox\\enot\\ediamond A}\\diadf{1}\n\\open\n\\hypo{5}{\\ebox}\n\\have{6}{\\Box\\enot A}\\rfour{2}\n\\have{7}{\\enot\\Diamond A}\\mc{6}\n\\close\n\\have{12}{\\ebox\\enot\\Diamond A}\\boxi{5-7}\n\\have{13}{\\ered}\\ne{3,12}\n\\close\n\\have{14}{\\enot\\ebox\\enot A}\\ni{2-13}\n\\have{15}{\\Diamond A}\\diadf{14}\n\\end{nd}\\]\n}\n\\end{earg}\n\\newpage\n\\problempart\nGeben Sie Beweise in \\mlSfive{} f\u00fcr die folgenden Aussagen:\n\\begin{earg}\n\\item $\\enot\\Box\\enot A, \\Diamond B\\vdash_\\mlSfive\\Box(\\Diamond A \\eand \\Diamond B)$\n\\myanswer{\n\\[\\begin{nd}\n\\have{1}{\\enot\\Box\\enot A}\n\\hypo{2}{\\Diamond B}\n\\have{3}{\\enot\\ebox\\enot B}\\diadf{2}\n\\open\n\\hypo{4}{\\ebox}\n\\have{5}{\\enot\\ebox\\enot A}\\rfive{1}\n\\have{6}{\\Diamond A}\\diadf{5}\n\\have{7}{\\enot\\ebox\\enot B}\\rfive{3}\n\\have{8}{\\Diamond B}\\diadf{7}\n\\have{10}{\\Diamond A \\eand \\Diamond B}\\ai{6,8}\n\\close\n\\have{11}{\\Box(\\Diamond A \\eand \\Diamond B)}\\boxi{4-10}\n\\end{nd}\\]\n}\n\n\\item $A\\vdash_\\mlSfive\\Box\\ediamond A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{A}\n\\open\n\\hypo{2}{\\ebox\\enot A}\n\\have{3}{\\enot A}\\rt{2}\n\\have{4}{\\ered}\\ne{1,3}\n\\close\n\\have{5}{\\enot\\ebox\\enot A}\n\\open\n\\hypo{6}{\\Box}\n\\have{7}{\\enot\\ebox\\enot A}\\rfive{5}\n\\have{8}{\\Diamond A}\\diadf{7}\n\\close\n\\have{9}{\\Box\\Diamond A}\\boxi{6-8}\n\\end{nd}\\]\n}\n\n\\item $\\ediamond\\ediamond A\\vdash_\\mlSfive\\ediamond A$\n\\myanswer{\n\\[\\begin{nd}\n\\hypo{1}{\\ediamond\\ediamond A}\n\\have{2}{\\enot\\ebox\\enot \\ediamond A}\\diadf{1}\n\\open\n\\hypo{3}{\\ebox}\n\\open\n\\hypo{4}{\\Box\\enot A}\n\\open\n\\hypo{5}{\\ebox}\n\\open\n\\hypo{6}{\\ediamond A}\n\\have{7}{\\enot\\ebox\\enot A}\\diadf{6}\n\\have{8}{\\ebox\\enot A}\\rfour{4}\n\\have{9}{\\ered}\\ne{7,8}\n\\close\n\\have{10}{\\enot\\ediamond A}\\ni{5-9}\n\\close\n\\have{11}{\\ebox\\enot\\ediamond A}\\boxi{5-10}\n\\have{12}{\\enot\\ebox\\enot\\ediamond A}\\rfive{2}\n\\have{13}{\\ered}\\ne{11,12}\n\\close\n\\have{14}{\\enot\\ebox\\enot A}\\ni{4-13}\n\\have{15}{\\ediamond A}\\diadf{14}\n\\close\n\\have{16}{\\ebox\\ediamond A}\\boxi{3-15}\n\\have{17}{\\ediamond A}\\rt{16}\n\\end{nd}\\]\n}\n\\end{earg}\n\n\\chapter{Semantik der ML}\n\nWir pr\u00e4sentieren alle Gegeninterpretationen diagrammatisch.\n\n\\setcounter{ProbPart}{0}\n\\\n\\problempart\nLegen Sie Gegeninterpretationen zu den folgenden falschen Behauptungen vor:\n\\begin{earg}\n\\item $\\enot P \\vDash_\\mlK  \\enot\\Diamond P$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$\\enot P$};\n\\node (atom4) at (3,0) {$P$};\n\\draw[->, thick] (atom1) -- (atom2);\n\\draw (-0.8,-0.6) rectangle (0.4,0.5);\n\\draw (2.4,-0.6) rectangle (3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\n\\item $\\Box(P \\eor Q)\\vDash_\\mlK  \\Box P \\eor \\Box Q$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$P$};\n\\node (atom4) at (3,0) {$\\enot P$};\n\\node (atom5) at (-0.25,-0.7) {$\\enot Q$};\n\\node (atom6) at (3,-0.7) {$Q$};\n\\draw[->, thick] 
(atom1)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw[->, thick] (atom1) -- (atom2);\n\\draw (-0.8,-1.2) rectangle (0.4,0.5);\n\\draw (2.4,-1.2) rectangle (3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\n\\item $\\vDash_\\mlK  \\enot \\Box (A\\eand \\enot A)$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom3) at (-0,0) {$\\enot A$};\n\\draw (-0.5,-0.6) rectangle (0.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\\newpage\n\\item $\\Box A\\vDash_\\mlK  A$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$\\enot A$};\n\\node (atom4) at (3,0) {$A$};\n\\draw[->, thick] (atom1) -- (atom2);\n\\draw (-0.8,-0.6) rectangle (0.4,0.5);\n\\draw (2.4,-0.6) rectangle (3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\\end{earg}\n\n\\problempart\nLegen Sie Gegeninterpretationen zu den folgenden falschen Behauptungen vor:\n\\begin{earg}\n\\item $\\Box (M\\eif O),\\Diamond M\\vDash_\\mlT  O$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$\\enot M$};\n\\node (atom4) at (3,0) {$M$};\n\\node (atom5) at (-0.25,-0.7) {$\\enot O$};\n\\node (atom6) at (3,-0.7) {$O$};\n\\draw[->, thick] (atom1)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw[->, thick] (atom1) -- (atom2);\n\\draw[->, thick] (atom2)+(0.15,-0.15) arc (-150:150:.3); \n\\draw (-0.8,-1.2) rectangle (0.4,0.5);\n\\draw (2.4,-1.2) rectangle (3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\n\\item $\\Box A\\vDash_\\mlT  \\Box \\Box A$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (1.5,-0.5) {3};\n\\node (atom4) at (-0.25,1.9) {$A$};\n\\node (atom5) at (3,1.9) {$A$};\n\\node (atom6) at (1.4,-1.5) {$\\enot A$};\n\\draw[->, thick] (atom1)--(atom2);\n\\draw[->, thick] (atom1)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw[->, thick] (atom2)+(0.15,-0.15) arc (-150:150:.3); \n\\draw[->, thick] (atom2)--(atom3);\n\\draw[->, thick] (atom3)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw (-0.8,1.5) rectangle (0.4,2.3);\n\\draw (2.4,1.5) rectangle (3.6,2.3);\n\\draw (0.8,-1.15) rectangle (2.2,-1.9);\n\\end{tikzpicture}\n\\end{center}\n}\n\\end{earg}\n\n\\problempart\nLegen Sie Gegeninterpretationen zu den folgenden falschen Behauptungen vor:\n\\begin{earg}\n\\item $\\Diamond A\\vDash_\\mlSfour  \\Box\\Diamond A$\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$A$};\n\\node (atom4) at (3,0) {$\\enot A$};\n\\draw[->, thick] (atom1)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw[->, thick] (atom2)+(0.15,-0.15) arc (-150:150:.3); \n\\draw[->, thick] (atom1) -- (atom2);\n\\draw (-0.8,-0.6) rectangle (0.4,0.5);\n\\draw (2.4,-0.6) rectangle (3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\n\\item $\\Diamond A, \\Box (\\Diamond A \\eif B)\\vDash_\\mlSfour \\Box B$\n\n\\myanswer{\n\\begin{center}\n\\begin{tikzpicture}\n\\node (atom1) at (0,1) {1};\n\\node (atom2) at (3,1) {2};\n\\node (atom3) at (-0.25,0) {$A$};\n\\node (atom4) at (3,0) {$\\enot A$};\n\\node (atom5) at (-0.25,-0.7) {$B$};\n\\node (atom6) at (3,-0.7) {$\\enot B$};\n\\draw[->, thick] (atom1)+(-0.15,0.15) arc (-330:-30:.3); \n\\draw[->, thick] (atom2)+(0.15,-0.15) arc (-150:150:.3); \n\\draw[->, thick] (atom1) -- (atom2);\n\\draw (-0.8,-1.2) rectangle (0.4,0.5);\n\\draw (2.4,-1.2) rectangle 
(3.6,0.5);\n\\end{tikzpicture}\n\\end{center}\n}\n\\end{earg}\n", "meta": {"hexsha": "56b271cf5b7d44bb18de8cd2420040398cb82fa3", "size": 12446, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "forallx-do-sol-ml.tex", "max_stars_repo_name": "sbwimmer/forallx-do", "max_stars_repo_head_hexsha": "b7aadc8d2fa6aaf8850f2d07df669ddf6ec17337", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-25T04:43:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-25T04:43:55.000Z", "max_issues_repo_path": "forallx-do-sol-ml.tex", "max_issues_repo_name": "sbwimmer/forallx-do", "max_issues_repo_head_hexsha": "b7aadc8d2fa6aaf8850f2d07df669ddf6ec17337", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-25T04:45:28.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-25T04:45:28.000Z", "max_forks_repo_path": "forallx-do-sol-ml.tex", "max_forks_repo_name": "sbwimmer/forallx-do", "max_forks_repo_head_hexsha": "b7aadc8d2fa6aaf8850f2d07df669ddf6ec17337", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.1065719361, "max_line_length": 100, "alphanum_fraction": 0.6092720553, "num_tokens": 6133, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7634837635542925, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.5510392240334344}}
{"text": "%!TEX root = ../06DL.tex\n\n% Stochastic and Incremental Gradient Methods by Junchi Li \n\\section{Problem Set-Up} \nThe problem set up is more the less the same as in \\cite{Hazan2011} as are the derivations. We want to minimize a function \n\\begin{equation}\\label{eq:origpro}\n\\min_x \\mathbb{E}_\\xi [F(x,\\xi)]+P(x)\n\\end{equation}\nbut we only get access to subgradients $g(x,\\xi)$ of $F(x,\\xi)$ with $\\xi$ sampled at random. Examples of this set-up include\n\\begin{enumerate}\n\\item {\\bf Noisy gradients.} We want to minimize a smooth function $f(x)$. At every iteration, we compute or gain access to a noisy gradient $g_k=\\nabla f(x_k)+\\omega_k$ where $\\omega_k$ is some zero-mean noise process which is independent of $x_k$.\n\\item {\\bf Incremental gradients.} We want to minimize a function of the form\n$$ \nf(x)=\\sum_{i=1}^{m}f_i(x)\n$$\nAt every iteration, we choose a random index $i_k$ uniformly at random from $\\{1,\\ldots,m\\}$, and we take a step along the gradient of $f_{i_k}$ rather than of the full function $f$. This is obviously faster to compute when $m$ is large. When does this approach find a minimum of $f$?\n\\end{enumerate}\n\nThroughout we assume\n\\begin{enumerate}\n\\item $f(x):=\\mathbb{E}[F(x,\\xi)]$ is differentiable and strongly convex. So there exists a constant $l>0$ such that \n\\begin{equation}\nf(z)\\geq f(x)+\\nabla f(x)^\\top (z-x)+\\frac{l}{2}\\|z-x\\|^2\n\\end{equation}\n\\item $\\nabla f$ is Lipschitz so that $\\|\\nabla f(x) - \\nabla f(y)\\|\\leq L \\|x-y\\|$.\n\\item $P(x)$ is a convex extended real valued function.\n\\end{enumerate}\nNote that the results apply to the case where there is only one value of $\\xi$. That is, the non-stochastic setting. In this case we would have a differentiable convex function plus an arbitrary convex function. Also note that we can enforce the constraint $x \\in X$ for some convex set $X$ by letting $P(x)=0$ for $x\\in X$ and $P(x)=\\infty$ for $x\\notin X$.\n\nLet us define a stochastic projected gradient scheme to solve this problem. Let\n\\begin{equation}\\label{eq:stoprojgra}\n{\\rm prox}_{\\nu P}(z)={\\rm argmin}_x \\|x-z\\|^2+\\nu P(x)\n\\end{equation}\nLet $\\gamma_0,\\ldots,\\gamma_T,\\ldots$ be a sequence of positive numbers. Choose $x_0 \\in X$, and iterate\n\\begin{equation}\\label{eq:stoprojgraiter}\nx_{k+1} ={\\rm prox}_{\\gamma_k P}(x_k-\\gamma_k G(x_k,\\xi_k))\n\\end{equation}\n\n\\section{Analysis of Unconstrained Stochastic Gradient Descent.}\nFirst, let's examine the case with $P = 0$ and let's make no assumptions about strong convexity. Assume $\\|G(x, \\xi)\\| \\leq M$ for all $x$ and $\\xi$. Let $x_*$ denote any optimal solution of \\eqref{eq:origpro}. 
Then we have\n\\begin{eqnarray}\n\\mathbb{E}[\\|x_{k+1}-x_*\\|^2]&=&\\mathbb{E}[\\|x_{k}-\\gamma_k G(x_k,\\xi_k)-x_*\\|^2]  \\nonumber \\\\\n&=&\\mathbb{E}[\\|x_{k}-x_*\\|^2]- 2\\gamma_k\\mathbb{E}[\\langle G(x_k,\\xi_k), x_k-x_*\\rangle ]+ \\gamma_k^2 \\mathbb{E}[\\|\\gamma_k G(x_k,\\xi_k)\\|^2]  \\\\\n&\\leq&\\mathbb{E}[\\|x_{k}-x_*\\|^2]- 2\\gamma_k\\mathbb{E}[\\langle G(x_k,\\xi_k), x_k-x_*\\rangle ]+ \\gamma_k^2 M^2  \\\\\n&=&\\mathbb{E}[\\|x_{k}-x_*\\|^2]- 2\\gamma_k\\mathbb{E}[\\langle \\nabla f(x_k), x_k-x_*\\rangle ]+ \\gamma_k^2 M^2 \\label{eq:eqnproof5} \\\\\n&\\leq& \\mathbb{E}[\\|x_{k}-x_*\\|^2]- 2\\gamma_k\\mathbb{E}[f(x_k)-f(x_*)]+ \\gamma_k^2 M^2 \\label{eq:eqnproof6}\n\\end{eqnarray}\n\\eqref{eq:eqnproof5} follows because\n\\begin{eqnarray}\n\\mathbb{E}[\\langle G(x_k,\\xi_k), x_k-x_*\\rangle ]&=&\\mathbb{E}_{\\xi_0,\\ldots,\\xi_{k-1}}[\\mathbb{E}_{\\xi_k}[\\langle G(x_k,\\xi_k), x_k-x_*\\rangle\\ |\\ \\xi_0,\\ldots,\\xi_{k-1} ]] \\nonumber\\\\\n&=&\\mathbb{E}_{\\xi_0,\\ldots,\\xi_{k-1}}[\\langle \\nabla f(x_k), x_k-x_*\\rangle\\ |\\ \\xi_0,\\ldots,\\xi_{k-1} ] \\\\\n&=&\\mathbb{E}[\\langle \\nabla f(x_k), x_k-x_*\\rangle]\n\\end{eqnarray}\nby the law of iterated expectation. \\eqref{eq:eqnproof6} is a consequence of the inequality\n\\begin{equation}\n\\langle \\nabla f(x_k), x_k-x_*\\rangle \\geq f(x_k)-f(x_*)\n\\end{equation}\nwhich holds because $f$ is convex.\n\nArranging the bound: by telescoping $k$ from $0$ to $n$,\n\\begin{equation}\n2\\gamma_k\\mathbb{E}[f(x_k)-f(x_*)] \\leq -\\mathbb{E}[\\|x_{k+1}-x_*\\|^2-\\|x_{k}-x_*\\|^2] + \\gamma_k^2 M^2,\n\\end{equation}\nwe have since $\\|x_{n+1}-x_*\\|^2\\geq 0$\n\\begin{equation}\n2\\sum_{k=0}^n \\gamma_k\\mathbb{E}[f(x_k)-f(x_*)] \\leq -\\mathbb{E}[\\|x_{n+1}-x_*\\|^2-\\|x_{0}-x_*\\|^2] + M^2 \\sum_{k=0}^{n} \\gamma_k^2\\leq D^2+M^2  \\sum_{k=0}^{n} \\gamma_k^2.\n\\end{equation}\nDividing by the sum of the $\\gamma_k$, we have for any $n$\n\\begin{equation}\n\\frac{1}{\\sum_{k=0}^n \\gamma_k}\\sum_{k=0}^n \\gamma_k\\mathbb{E}[f(x_k)-f(x_*)] \\leq \\frac{D^2+M^2  \\sum_{k=0}^{n} \\gamma_k^2}{2\\sum_{k=0}^n \\gamma_k}.\n\\end{equation}\nwhere $D=\\|x_0-x_*\\|$. Let $\\bar{x}:=(\\sum_{k=0}^n \\gamma_k)^{-1}\\sum_{k=0}^n \\gamma_k x_k$. Then, by convexity (Jensen's inequality), we have\n\\begin{equation}\n\\mathbb{E}[f(\\bar{x})-f(x_*)] \\leq \\frac{D^2+M^2  \\sum_{k=0}^{n} \\gamma_k^2}{2\\sum_{k=0}^n \\gamma_k}.\n\\end{equation}\nThis is precisely the bound rate of convergence for deterministic subgradient descent.\n\n\\section{Analysis of Projected Stochastic Gradient.}\nIn the case of constrained optimization problem, recall from \\eqref{eq:stoprojgra} and \\eqref{eq:stoprojgraiter} that $\\Pi_X (z) ={\\rm argmin}_{x\\in X} \\|x - z\\|^2$ and the iteration updates\n\\begin{equation}\nx_{k+1}=\\Pi_X (x_k-\\gamma_k G(x_k, \\xi_k)).\n\\end{equation}\nLet $x_*$ denote the optimal solution of \\eqref{eq:origpro}. $x_*$ is unique because of strong convexity. 
Observe that\n\\begin{eqnarray}\n\\mathbb{E}[\\|x_{k+1}-x_*\\|^2]&=&\\mathbb{E}\\left[\\|\\Pi_{\\gamma_k} (x_{k}-\\gamma_k G(x_k,\\xi_k))-\\mathbb{E}[\\|\\Pi_{\\gamma_k}(x_*-\\gamma_k \\nabla f(x_*))\\|^2\\right]  \\nonumber \\\\\n&\\leq&\\mathbb{E}[\\|x_{k}-\\gamma_k G(x_k,\\xi_k)-x_*+\\gamma_k \\nabla f(x_*)\\|^2]  \\label{eq:eqproof13} \\\\\n&=&\\mathbb{E}[\\|x_{k}-\\gamma_k \\nabla f(x_k)+\\gamma_k (\\nabla f(x_k)-G(x_k,\\xi_k))-x_*+\\gamma_k \\nabla f(x_*)\\|^2]  \\label{eq:eqproof14} \\\\\n&=& \\mathbb{E}\\left[ \\| x_k -\\gamma_k \\nabla f(x_k) - x_* +\\gamma_k \\nabla f(x_*)\\|^2\\right]  \\\\\n&& +2\\gamma_k\\mathbb{E}[\\langle x_k- \\gamma_k \\nabla f(x_k) - x_* + \\gamma_k \\nabla f(x_*), \\nabla f(x_k)-G(x_k,\\xi_k)\\rangle ] \\nonumber \\\\\n&&+\\gamma_k^2 \\mathbb{E}[\\|\\nabla f(x_k) - G(x_k,\\xi_k)\\|^2] \\nonumber \\\\\n&=&\\mathbb{E}[\\|x_{k}-\\gamma_k \\nabla f(x_k)-x_*+\\gamma_k \\nabla f(x_*)\\|^2]+ \\gamma_k^2 \\mathbb{E}[\\|\\nabla f(x_k)-G(x_k,\\xi_k)\\|^2] \\label{eq:eqnproof15} \n\\end{eqnarray}\nHere, the first equality follows by the definition of $x_{k+1}$ and because $x_*$ is optimal. \\eqref{eq:eqproof13} follows because the proximity operator is non-expansive. \\eqref{eq:eqproof14} follows because $\\mathbb{E}[G(z,\\xi_k)] = \\nabla f(z)$ for all $z$ and $G(z,\\xi_k)$ is independent of $\\xi_k$. Thus we have\n\\begin{eqnarray}\n&&\\mathbb{E}[\\langle \\nabla f(x_k)-G(x_k,\\xi_k), x_k-\\gamma_k\\nabla f(x_k)-x_*+\\gamma_k \\nabla f(x_*) \\rangle] \\\\\n&=& \\mathbb{E}_{\\xi_0,\\ldots,\\xi_{k-1}}[\\mathbb{E}_{\\xi_k}[\\langle \\nabla f(x_k)-G(x_k,\\xi_k), x_k-\\gamma_k\\nabla f(x_k)-x_*+\\gamma_k \\nabla f(x_*) \\rangle \\\\\n && |\\ \\xi_0,\\ldots,\\xi_{k-1}] \\nonumber\\\\\n&=& 0.\n\\end{eqnarray}\n\nNote that the first term in \\eqref{eq:eqnproof15}  is completely independent of $\\xi_k$ while the second term is a variance term concerning the second moments of the subgradients at the current iterate and at the optimum. We can bound each of these terms separately. First, since $f$ is strongly convex and has a Lipschitz continuous gradient, it follows that\n\\begin{equation} \\label{eq:eqproof16}\n\\mathbb{E}[\\| x_k-\\gamma_k\\nabla f(x_k)-x_*+\\gamma_k \\nabla f(x_*)\\|^2] \\leq \\max\\{ |1-\\gamma_k L|, |1-\\gamma_k l |\\}^2 \\mathbb{E}[\\|x_k-x_*\\|^2] \n\\end{equation}\nFor the second term, we must make some assumption about the statistics of the random function\n\\begin{equation} \\label{eq:eqproof17}\n\\varphi(z;\\xi_k):=G(z,\\xi_k)-\\nabla f(z)\n\\end{equation}\n\nLet's explore some possibilities.\n\\subsection{$\\varphi\\equiv 0$}\nIn the case when there is no randomness at all and we are just following the gradient,\nwe only need upper bound \\eqref{eq:eqproof16}. In this case, setting $\\gamma_k = \\frac{2}{L+l}$ for all $k$, we find that\n\\begin{equation} \\label{eq:eqproof18}\n\\|x_{k+1}-x_*\\|\\leq \\left(\\frac{L-l}{L+l}\\right) \\|x_k-x_*\\|\n\\end{equation}\nor, letting $\\kappa=\\frac{L}{l}$ and $D_0=\\| x_0-x_* \\|$,\n\\begin{equation} \\label{eq:eqproof19}\n\\|x_{k}-x_*\\|\\leq \\left(\\frac{\\kappa-1}{\\kappa+1}\\right)^k D_0\n\\end{equation}\n\n\\subsection{$\\varphi$ bounded.}\nThe simplest non-trivial assumption is that the deviations are bounded:\n\\begin{equation} \\label{eq:eqproof20}\n\\|\\varphi(z;\\xi_k)\\|\\leq M\n\\end{equation}\nfor some universal constant $M$. 
In this case, we have the upper bound\n\\begin{eqnarray} \n\\mathbb{E}[\\|x_{k+1}-x_*\\|^2] &\\leq& \\max \\{ |1-\\gamma_k L|, |1-\\gamma_k l| \\}^2 \\mathbb{E}[\\|x_k-x_*\\|^2] +\\gamma_k^2 M^2 \\label{eq:eqproof21}\\\\\n&\\leq& (1-2\\gamma_k l+\\gamma_k^2 L^2) \\mathbb{E}[\\|x_k-x_*\\|^2]+\\gamma_k^2 M^2 \\label{eq:eqproof22}\n\\end{eqnarray}\n\nWith such a bound, we can achieve the so-called ``optima'' $O(1/k)$ rate by choosing\n\\begin{equation} \\label{eq:eqproof23}\n\\gamma_k = \\frac{1}{kl}\n\\end{equation}\nIndeed, in this case, it follows by induction that\n\\begin{equation} \\label{eq:eqproof24}\n\\mathbb{E}[\\|x_k-x_*\\|^2] = \\frac{M^2+D_0^2 L^2}{kl^2}\n\\end{equation}\nwhere $D_0$ again equals $\\|x_0 - x_*\\|$. To verify this inequality, note that for $k = 0$, the right hand is greater than $D_0^2$. Assuming that the inequality holds for $k \\leq K$, observe\n\\begin{eqnarray}\n\\mathbb{E}[\\|x_{K+1}-x_*\\|^2] &\\leq& \\left( 1-\\frac{2}{K} + \\frac{L^2}{K^2 l^2} \\right) \\mathbb{E}[\\|x_K-x_*\\|^2] + \\frac{M^2}{K^2 l^2} \\\\\n&\\leq& \\left( 1-\\frac{2}{K} \\right) \\mathbb{E}[\\|x_K-x_*\\|^2]+\\frac{M^2+L^2 \\mathbb{E} [\\|x_K-x_*\\|^2]}{K^2 l^2} \\\\\n&\\leq& \\left( 1-\\frac{2}{K} \\right) \\frac{M^2+D_0^2 L^2}{K l^2}+\\frac{M^2+D_0\n^2 L^2}{K^2 l^2} \\\\\n&=& \\frac{K^2-1}{K^2}\\cdot \\frac{M^2 +D_0^2 L^2}{(K+1) l^2} \\leq \\frac{M^2 +D_0^2 L^2}{(K+1)l^2}\n\\end{eqnarray}\n\nNB: This matches the lower bound by {\\color{red} agarwal2012information: Alekh Agarwal, Peter L. Bartlett, Member, IEEE, Pradeep Ravikumar, and Martin J. Wainwright, Senior Member, IEEE.}\n\n\\section{Introduction}\nWe want to minimize the function $f: \\mathcal{C} \\to \\mathbb{R}$. Assume $f$ is convex and possibly strongly convex on $\\mathcal{C}$. Assume that the minimum function value $f_*$ is attained in $\\mathcal{C}$ and let $x_*$ denote any optimal solution. Let $D_0$ denote the distance from the initial iterate to this optimal solution: $D_0 = \\|x_0 - x_*\\|^2$.\n\nAssume that the gradient of $f$ is Lipschitz with constant $L$ (i.e., $\\|\\nabla f(x)- \\nabla f(y)\\|^2 \\leq L\\|x-y\\|^2$).\nWe may sometimes additionally assume that $f$ is strongly convex with parameter $l > 0$, so that\n$f(y)\\geq f(x)+\\nabla f(x)^\\top (y-x)+ \\frac{l}{2}\\|x-y\\|^2_2$.\n\nIn stochastic gradient methods, we are allowed to query an oracle that takes a point in $\\mathcal{C}$ and returns an unbiased estimate of the gradient $g(x)$. That is $\\mathbb{E}[g(x)] = \\nabla f(x)$. We assume the randomness is independent of the prior queries of the oracle. As is standard, assume that we know that $\\|g(x)\\|_2 \\leq M$ almost surely for all $x \\in \\mathcal{C}$.\n\nLet $\\alpha_0,\\ldots,\\alpha_k,\\ldots$, be a sequence of positive numbers. Choose $x_0 \\in X$, and iterate\n\\begin{equation}\nx_{k+1}=x_k-\\alpha_k g(x_k).\n\\end{equation}\n\n\\section{Preliminaries}\nLet us use this section to collect some standard results from prior art. Most of this material appeared in Nemirovski and Yudin \\cite{Nemirovski1978, Nemirovski1983}. More contemporary treatments are provided by Nemirovski et al \\cite{Nemirovski2009}, Nesterov \\cite{Nesterov2009}, and Hazan and Kale \\cite{Hazan2011}.\n\nSuppose we run the incremental gradient method with stepsize $\\alpha$ for $S$ iterations. 
Then note that\n\\begin{eqnarray}\n\\|x_{k+1}-x_*\\|^2 &=& \\| \\Pi_{\\mathcal{C}} (x_k-\\alpha g_k (x_k)) - \\Pi_{\\mathcal{C}} (x_*) \\|^2 \\\\\n&\\leq& \\|x_k -\\alpha g_k(x_k)-x_*\\|^2 \\\\\n&=& \\|x_k -x_*\\|^2 -2\\alpha \\langle g_k(x_k), x_k-x_* \\rangle +\\alpha^2 \\|g_k (x_k)\\|^2 \\\\\n&\\leq& \\|x_k-x_*\\|^2 -2\\alpha \\langle g_k(x_k), x_k-x_* \\rangle +\\alpha^2 M^2\n\\end{eqnarray}\nNote that if we iterate the expectation\n\\begin{eqnarray}\n\\mathbb{E}[\\langle g_k(x_k), x_k-x_* \\rangle] &=& \\mathbb{E}[\\mathbb{E}[\\langle g_k(x_k), x_k-x_* \\rangle\\ |\\ g_0,\\ldots,g_{k-1}]] \\\\\n&=&\\mathbb{E}[\\mathbb{E}[\\langle g_k(x_k) \\ |\\ g_0,\\ldots,g_{k-1}], x_k-x_* \\rangle] \\\\\n&=& \\mathbb{E}[\\langle \\nabla f(x_k), x_k-x_* \\rangle].\n\\end{eqnarray}\nLetting $a_k = \\mathbb{E}[\\| x_{k+1} - x_*\\|^2]$, this gives\n\\begin{equation}\\label{eq:eqproof30}\na_{k+1}\\leq a_k -2 \\alpha \\mathbb{E}[\\langle \\nabla f(x_k), x_k-x_* \\rangle ]+\\alpha^2 M^2.\n\\end{equation}\nWe can now bound\n\\begin{eqnarray}\n\\mathbb{E}\\left[ f \\left( \\frac{1}{S} \\sum_{t=1}^S x_t \\right) -f(x_*)\\right] &\\leq& \\mathbb{E}\\left[ \\frac{1}{S} \\sum_{t=1}^S f{x_t}  -f(x_*)\\right] \\label{eq:eqproof31}\\\\\n&\\leq& \\frac{1}{S} \\sum_{t=1}^S \\mathbb{E}[\\langle \\nabla f(x_t), x_*-x_t \\rangle] \\label{eq:eqproof32}\\\\\n&\\leq& \\frac{1}{S}\\sum_{t=1}^S \\frac{a_t-a_{t+1}}{2\\alpha} +\\frac{1}{2} \\alpha M^2 \\label{eq:eqproof33} \\\\\n&=& \\frac{a_0-a_S}{2S\\alpha} +\\frac{1}{2} \\alpha M^2 \\label{eq:eqproof34}\\\\\n&\\leq& \\frac{D_0^2}{2S\\alpha} +\\frac{1}{2} \\alpha M^2 \\label{eq:eqproof35}\\\\\n&\\leq& \\frac{f(x_0)-f_*}{lS\\alpha}+\\frac{1}{2} \\alpha M^2 \\label{eq:eqproof36}\n\\end{eqnarray}\nHere, \\eqref{eq:eqproof31} follows because f is convex, \\eqref{eq:eqproof32} is the first order condition for convexity, \\eqref{eq:eqproof33} uses \\eqref{eq:eqproof30}, and \\eqref{eq:eqproof36} uses the definition of strong convexity.\n\n\\section{Weakly convex case}\nSuppose we run $E$ epochs of stochastic gradient, with epoch $k$ of length $2^{k-1}S$ and with stepsize $\\alpha_k$. Let $\\bar{x}_k$ denote the average of all of the iterates from epoch $k$. Let $T$ denote the total number of SGD steps. Nemirovski et al proved the following proposition for general convex $f$.\n\n\\begin{proposition} (Nemirovski et al \\cite{Nemirovski2009}) Suppose we run $1$ epoch of stochastic gradient of length $S$ and with stepsize $\\alpha$. Define\n\\begin{equation}\n\\theta_k:=\\frac{M\\sqrt{S}}{D_0} \\alpha.\n\\end{equation}\nThen we have the bound \n\\begin{equation}\n\\mathbb{E}[f(\\bar{x}_1)-f_*] \\leq \\left( \\frac{1}{2}\\theta +\\frac{1}{2}\\theta^{-1} \\right) \\frac{MD_0}{\\sqrt{T}}.\n\\end{equation}\nThat is, we pay linearly for errors in selecting the optimal constant stepsize. In the context of best ball, we refine our estimate of the optimal stepsize at each epoch. By doubling epochs, and always allowing $1/2$ the previous stepsize, best ball is able to tune this stepsize.\n\\end{proposition}\n\\begin{proof}\nWe can prove this just by minimizing \\eqref{eq:eqproof35}. Plugging in our stepsize, we see that\n\\begin{equation}\n\\mathbb{E}[f(\\frac{1}{S} \\sum_{t=1}^S x_t)-f(x_*)] \\leq \\frac{D_0^2}{2S\\alpha}+\\frac{1}{2}\\alpha M^2=\\left( \\frac{1}{2}\\theta +\\frac{1}{2}\\theta_{-1} \\right) \\frac{MD_0}{\\sqrt{T}}.\n\\end{equation}\n\\end{proof}\n\n\n\n\\begin{thebibliography}{999}\n\\bibitem{Hazan2011}\n{E. Hazan and S. Kale}. 
{Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization}. {Journal of Machine Learning Research}, 19: 421-436,  2011.\n\n\\bibitem{Nemirovski2009}\n{A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro}. {Robust stochastic approximation approach to stochastic programming}. {SIAM Journal on Optimization}, 19(4):1574-1609, 2009.\n\n\\bibitem{Nemirovski1978}\n{A. Nemirovski and D. Yudin}. {On cezari's convergence of the steepest descent method for approximating saddle point of convex-concave functions}. {Soviet Math Dkl.}, 19, 1978. \n\n\\bibitem{Nemirovski1983}\n{A. Nemirovski and D. Yudin}. {Problem complexity and method efficiency in optimization.}. {Wiley, New York}, 1983\n\n\\bibitem{Nesterov2009}\n{Y. Nesterov}. {Primal-dual subgradient methods for convex problems.}. {Mathematical Programming}, 120(1):221-259, 2009.\n\n\\bibitem{Niu2011}\n{F. Niu, B. Recht, C. R$\\acute{\\rm e}$, and S. J. Wright}. {HOGWILD!: A lock-free approach to parallelizing stochastic gradient descent}. {In Advances in Neural Information Processing}, 2011.\n\n\\bibitem{Ben2010}\n {Ben Recht's note ``Stochastic and Incremental Gradient Methods''}.\n{dated November 22, 2010.}\n\n\\bibitem{Ben2014}\n {Ben Recht's note ``Notes on SGD''},\n{dated March 18, 2014.}\n\n\\end{thebibliography}\n\n", "meta": {"hexsha": "0f3d7ba60a07e7d78d6cd75ce7c9574ee031f856", "size": 16175, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "6DL/SGD_Notes_LiJunchi.tex", "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6DL/SGD_Notes_LiJunchi.tex", "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "6DL/SGD_Notes_LiJunchi.tex", "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.2217741935, "max_line_length": 380, "alphanum_fraction": 0.679381762, "num_tokens": 6296, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7217432062975979, "lm_q2_score": 0.7634837689358857, "lm_q1q2_score": 0.5510392233479605}}
{"text": "\n\\section{Total differentiation of scalar fields}\n\n", "meta": {"hexsha": "98f78ef924b7f2eeed49fb2570ee9af638e6c0a6", "size": 51, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "src/pug/theory/analysis/multiScalar/03-00-total.tex", "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/pug/theory/analysis/multiScalar/03-00-total.tex", "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_forks_repo_path": "src/pug/theory/analysis/multiScalar/03-00-total.tex", "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 12.75, "max_line_length": 48, "alphanum_fraction": 0.8039215686, "num_tokens": 10, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7634837527911056, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.551039207125973}}
{"text": "\\section{Some existing Galant animations}\n\\label{app:galant_animations}\n\nThe purpose of these examples is mainly to illustrate the capabilities of\nGalant. Many other algorithms are easily animated and some, such as\nbreadth-first search and Kruskal's minimum spanning tree algorithm have\nalready been implemented.\n\n\\subsection{Depth-first search}\n\nFig.~\\ref{fig:dfs} illustrates an animation of depth-first search.\nThe code, in the current version of Galant, requires a few (four to be exact) lines lines at the beginning to set up the global variable \\verb$time$ and the arrays \\verb$discovery$ and \\verb$finish$.\nSee the programmer guide (Appendix~\\ref{app:programmer_guide}) for more\ndetailed instructions on how to do this properly.\n\nThe animation follows the conventions of Cormen et al. --- tree edges are selected (shown as thick red lines) and non-tree edges are labeled as B = back edge,\nF = forward edge, and C = cross edge.\nWhite nodes, not yet visited,\nare neither marked nor selected;\ngray nodes, visited but the visit is not finished, are selected\n(thick red boundary);\nand black nodes, visit is completed, are both marked and selected (red boundary and gray fill).\nLabels on nodes indicate the discovery and finish times, separated by a slash.\n\nNote that the animation begins with the statement\\\\\n\\hspace{3em} \\verb+setDirected(true)+\\\\\nwhich ensures that the graph is displayed and treated as a directed graph.\nDepth-first search on an undirected graph requires keeping track of parents of visited nodes\nor distinguishes visited from unvisited edges.\n\n\\input{Y_dfs}\n\n\\subsection{Dijkstra's algorithm}\n\nFig.~\\ref{fig:dijkstra} illustrates an animation of Dijkstra's shortest path algorithm.\n\n\\cmt{mention the role of node 0, the fact that decrease key is done in clumsy fashion to avoid a pointer into the queue (the actual decreaseKey function is not shown; not a big deal for small graphs, but\ncould be avoided if animator provided a binary heap implementation (which could\nbe an external class) and added an integer attribute.}\n\n\\input{Y_dijkstra}\n\n\n\n\\subsection{Bubble sort}\n\n% [Last modified: 2013 07 12 at 21:05:01 GMT]\n", "meta": {"hexsha": "0d98eea6fc633d01fa70af297ec7c28c8b98b996", "size": 2140, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Technical-Documentation/galant_animations.tex", "max_stars_repo_name": "mfms-ncsu/galant", "max_stars_repo_head_hexsha": "dca42697fb3f12b9b8b624818badcadf83bf503b", "max_stars_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_stars_count": 24, "max_stars_repo_stars_event_min_datetime": "2017-01-19T13:26:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-13T02:35:15.000Z", "max_issues_repo_path": "Technical-Documentation/galant_animations.tex", "max_issues_repo_name": "mfms-ncsu/galant", "max_issues_repo_head_hexsha": "dca42697fb3f12b9b8b624818badcadf83bf503b", "max_issues_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Technical-Documentation/galant_animations.tex", "max_forks_repo_name": "mfms-ncsu/galant", "max_forks_repo_head_hexsha": "dca42697fb3f12b9b8b624818badcadf83bf503b", "max_forks_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-03-13T13:36:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-18T13:01:54.000Z", 
"avg_line_length": 44.5833333333, "max_line_length": 203, "alphanum_fraction": 0.7878504673, "num_tokens": 496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.7879311981328135, "lm_q1q2_score": 0.5509643836609899}}
{"text": "\\section{Sorting}\r\n\r\n\\begin{frame}{Sorting 1 / 2}\r\n  \\textbf{Problem:}\r\n  \\begin{itemize}\r\n    \\item\r\n      Input: $n$ elements $x_1, \\ldots, x_n$\r\n    \\item\r\n      Transitive operator \\enquote{\\textbf{\\color{Mittel-Blau}<}} which returns\r\n      \\textbf{\\color{Mittel-Blau}true} if the left value is smaller than the \r\n      right one\r\n      \\begin{itemize}\r\n        \\item\r\n          Transitivity: $x < y, \\; y < z \\rightarrow x < z$\r\n      \\end{itemize}\r\n    \\item\r\n      Output: $x_1, \\ldots, x_n$ sorted with operator\r\n  \\end{itemize}\r\n  \\onslide<2- |handout:1>{\r\n    \\begin{exampleblock}{Example}\r\n      Input:\\hspace*{1.5em}14, 4, 32, 19, 8, 44, 65\\\\\r\n      Output:\r\n    \\end{exampleblock}\r\n  }\r\n\\end{frame}\r\n\r\n%-------------------------------------------------------------------------------\r\n\r\n\\begin{frame}{Sorting 2 / 2}\r\n  \\textbf{Why do we need sorting?}\r\n  \\begin{itemize}\r\n    \\item\r\n      Nearly {\\color{Mittel-Blau}every} program needs a sorting algorithm\r\n    \\item\r\n      \\textbf{Examples:}\r\n      \\begin{itemize}\r\n        \\item\r\n          Index of a search engine\r\n        \\item\r\n          Listing filesystem in explorer / finder\r\n        \\item\r\n          (Music) library\r\n        \\item\r\n          Highscore list\r\n      \\end{itemize}\r\n  \\end{itemize}\r\n\\end{frame}\r\n", "meta": {"hexsha": "acbc2014be2a361d72dd41b8ec84839b06ed6984", "size": 1282, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Lecture-1/Chapter/eng/50_Sorting.tex", "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_issues_repo_path": "Lecture-1/Chapter/eng/50_Sorting.tex", "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 23, "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_forks_repo_path": "Lecture-1/Chapter/eng/50_Sorting.tex", "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "avg_line_length": 26.7083333333, "max_line_length": 81, "alphanum_fraction": 0.5280811232, "num_tokens": 389, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6992544335934766, "lm_q2_score": 0.7879311981328135, "lm_q1q2_score": 0.5509643836609899}}
{"text": "\\section{Inverse Laplace Transform}\r\n\\noindent\r\nSimilar to how the derivative is the inverse of the integral, we can think about an inverse Laplace transform that allows us to go back from the $s$ domain to the $t$ domain.\r\n\\begin{definition}\r\n\tLet $F(s) = \\Laplace{f(t)}$. Then $f(t)$ is the inverse Laplace transform of $F(s)$.\r\n\t\\begin{equation*}\r\n\t\t\\inverseLaplace{F(s)} = \\inverseLaplace{\\Laplace{f(t)}} = f(t)\r\n\t\\end{equation*}\r\n\\end{definition}\r\n\r\n\\noindent\r\nWe don't need to work through the derivations of inverse Laplace transforms because we already did the derivations for Laplace transforms.", "meta": {"hexsha": "87f85807ab7c47a6ed039b303c7976a2f61fc571", "size": 604, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "diffEq/laplaceTransforms/inverseLaplace/inverseLaplace.tex", "max_stars_repo_name": "aneziac/Math-Summaries", "max_stars_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2020-03-26T06:20:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T17:38:45.000Z", "max_issues_repo_path": "diffEq/laplaceTransforms/inverseLaplace/inverseLaplace.tex", "max_issues_repo_name": "aneziac/Math-Summaries", "max_issues_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2020-03-28T17:44:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-07T04:47:03.000Z", "max_forks_repo_path": "diffEq/laplaceTransforms/inverseLaplace/inverseLaplace.tex", "max_forks_repo_name": "aneziac/Math-Summaries", "max_forks_repo_head_hexsha": "20a0efd79057a1f54e093b5021fbc616aab78c3f", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2020-04-10T05:41:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-17T15:21:12.000Z", "avg_line_length": 50.3333333333, "max_line_length": 175, "alphanum_fraction": 0.7334437086, "num_tokens": 171, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.787931185683219, "lm_q2_score": 0.6992544273261175, "lm_q1q2_score": 0.550964370017308}}
{"text": "%% LyX 1.5.6 created this file.  For more info, see http://www.lyx.org/.\r\n%% Do not edit unless you really know what you are doing.\r\n\\documentclass[english,preprint]{revtex4}\r\n\\usepackage[T1]{fontenc}\r\n\\usepackage[latin9]{inputenc}\r\n\\usepackage{babel}\r\n\r\n\\begin{document}\r\n\r\n\\title{MATLAB example: The Born-ion model}\r\n\r\n\\begin{abstract}\r\nLet's consider the born-ion model at room temperature (T=298.15 C).\r\nInside the born radius (r=3A) the low dielectric coefficient representing\r\nthe molecule is set equal to $\\epsilon^{x}=\\epsilon^{y}=\\epsilon^{z}=1.0$.\r\nThis ion is located at the origin of the reference system and it is\r\nimmersed in a solvent medium of dielectric coefficient that mimic\r\nwater, e.g., with $\\epsilon^{x}=\\epsilon^{y}=\\epsilon^{z}=78.54$.\r\nThe bulk ionic strength is set equal to zero. The born ion charge\r\nvalence is set equal to $z=+1$. To solve the PB equation we will\r\nuse a cubic grid of 65 points and a box length of 12 A centered at\r\nthe ion position. We will use the linear B-spline to obtain the fractional\r\ncharge at the node points. \r\n\\end{abstract}\r\n\\maketitle\r\n\r\n\\part{Using the MATLAB PB SOLVER}\r\n\r\n\r\n\\subsection*{Input files as generated by the APBS and pdb2pqr codes}\r\n\r\nThe {}``born-ion.pqr'' file corresponding to the model described\r\nabove reads\r\n\r\n\\begin{quotation}\r\nATOM 1 I ION 1 0.000 0.000 0.000 1.00 3.00\r\n\\end{quotation}\r\nAs an example, the {}``born.in'' input file used by the APBS reads\r\n\r\n\\begin{quotation}\r\n\\# READ IN MOLECULES\r\n\r\nread \r\n\r\nmol pqr born-ion.pqr\r\n\r\nend\r\n\r\n\\# COMPUTE POTENTIAL FOR SOLVATED STATE\r\n\r\nelec name solvated\r\n\r\nmg-auto \r\n\r\ndime 65 65 65\r\n\r\ncglen 50 50 50\r\n\r\nfglen 12 12 12\r\n\r\nfgcent mol 1\r\n\r\ncgcent mol 1\r\n\r\nmol 1\r\n\r\nlpbe\r\n\r\nbcfl mdh\r\n\r\npdie 1.0\r\n\r\nsdie 78.54\r\n\r\nchgm spl0\r\n\r\nsrfm spl2\r\n\r\nsrad 1.4\r\n\r\nswin 0.3\r\n\r\nsdens 10.0\r\n\r\ntemp 298.15\r\n\r\ncalcenergy total\r\n\r\ncalcforce no\r\n\r\nwrite pot dx APBS-born-pot\r\n\r\nwrite charge dx born-charge\r\n\r\nwrite dielx dx born-dielx\r\n\r\nwrite diely dx born-diely\r\n\r\nwrite dielz dx born-dielz\r\n\r\nwrite kappa dx born-kappa\r\n\r\nend\r\n\r\nquit\r\n\\end{quotation}\r\nFor example, if you saved both files 'born-ion.pqr'' and 'born.in'\r\nin the directory 'C:/Users/me/Documents/APBS\\_PB\\_solver/', you may\r\ntype in your command prompt window\r\n\r\n\\begin{quotation}\r\nC:/Users/me/Documents/Matlab\\_PB\\_solver>apbs born.in\r\n\\end{quotation}\r\nIn this way we obtain the input files in dx format required by the\r\nMATLAB version of the APBS to solve the PB equation. 
Specifically,\r\nwe obtain the shifted dielectric coefficient maps born-dielx.dx,\r\nborn-diely.dx, born-dielz.dx, and the ion accessibility map born-kappa.dx.\r\nWe also obtain the charge density born-charge.dx and the electrostatic\r\npotential APBS-born-pot.dx maps, which will be used later for testing purposes.\r\n\r\n\r\n\\subsection*{MATLAB input file}\r\n\r\nNow we are ready to write the input file {}``born-ion.inm'' as follows\r\n\r\n\\begin{quotation}\r\n65 65 65\r\n\r\n12 12 12\r\n\r\n298.15\r\n\r\nborn-dielx.dx\r\n\r\nborn-diely.dx\r\n\r\nborn-dielz.dx\r\n\r\nborn-kappa.dx\r\n\r\nborn-ion.pqr\r\n\r\nborn\\_ion\\_model\r\n\\end{quotation}\r\nThe first line corresponds to the number of grid points $[N_{x}N_{y}N_{z}]$;\r\nthe second line corresponds to the length in Angstrom of the box sides\r\n$[L_{x}L_{y}L_{z}]$; the next line corresponds to the temperature\r\nin Kelvin; next is the name of the dx file containing the\r\nshifted dielectric coefficients along the x-direction; next is for\r\nthe one corresponding to the y-direction, and next the one in the z-direction.\r\nThe seventh line corresponds to the name of the ion accessibility\r\ncoefficient map. The next one corresponds to the name of the pqr file\r\ngenerated by the pdb2pqr code and the last line corresponds to the\r\nname of the folder that will be created into the current (working)\r\nfolder (for instance 'Matlab\\_PB\\_solver') where all the Matlab files\r\nof our code are saved. In the folder {}``born\\_ion\\_model'' you\r\nwill find all the files generated by our Matlab code, namely the electrostatic\r\npotential and charge maps in dx format, and the electrostatic\r\npotential surface $u_{ij(N_{z}+1)/2}$ in both jpg and fig format. \r\n\r\n\r\n\\subsection*{Before using our Matlab code}\r\n\r\nBefore running the {}``main.m'' file in the Matlab console you must\r\ncreate a folder (for instance {}``Matlab\\_Input\\_Files'') in the\r\nsame directory (for instance 'C:/Users/me/Documents/Matlab\\_PB\\_solver/')\r\nin which all the Matlab files of our code are saved. Then you must\r\nsave the input files born-ion.inm, born-ion.pqr, born-dielx.dx, born-diely.dx,\r\nborn-dielz.dx, and born-kappa.dx in the folder {}``Matlab\\_Input\\_Files''. \r\n\r\nNext, you must edit two lines in the {}``main.m'' file, namely:\r\n\r\n\\begin{itemize}\r\n\\item The line number 57 contains the path to the folder {}``Matlab\\_Input\\_Files''.\r\nIn our example you have to set the path as follows\r\n\\end{itemize}\r\n\\begin{quotation}\r\nMYPATH='C:/Users/me/Documents/Matlab\\_PB\\_solver/Matlab\\_Input\\_Files';\r\n\\end{quotation}\r\n\\begin{itemize}\r\n\\item The line number 65 contains the name of the inm input file. In our\r\nexample we named it {}``born-ion.inm'', so we have to\r\nset the name of the input file as follows\r\n\\end{itemize}\r\n\\begin{quotation}\r\ninputfile='born-ion.inm';\r\n\\end{quotation}\r\nNow we are ready to run the main.m file. In the command window of\r\nMatlab you may type\r\n\r\n\\begin{quotation}\r\n>\\textcompwordmark{}>run C:/Users/me/Documents/Matlab\\_PB\\_solver/main\r\n\\end{quotation}\r\nYou are done. On the same command window you will see information\r\nabout the different steps that our code performs to get the required\r\nsolution. 
\r\n\r\n\r\n\\subsection*{Matlab output files}\r\n\r\nIn this example you will find a created folder named 'born\\_ion\\_model'\r\nin the directory 'C:/Users/me/Documents/Matlab\\_PB\\_solver/' containing\r\nthe following files\r\n\r\n\\begin{quotation}\r\nborn\\_ion\\_model\\_MATLAB\\_charge\\_density.dx\r\n\r\nborn\\_ion\\_model\\_MATLAB\\_potential\\_solution.dx\r\n\r\nborn\\_ion\\_model\\_MATLAB\\_potential\\_solution.jpg\r\n\r\nborn\\_ion\\_model\\_MATLAB\\_potential\\_solution.fig\r\n\\end{quotation}\r\n\r\n\\subsection*{Accuracy and efficiency of our Matlab code}\r\n\r\nYou may change some parameters of our code in order to make it faster\r\nand more efficient. In particular, the default accuracy is set equal\r\nto $10^{-9}$. You may change it by editing the main.m\r\nfile and setting the variable 'accuracy' to the required precision.\r\nYou may also change the tolerance in the inexact LU decomposition.\r\nThe default value for the variable 'tolerance' is set equal to 0.25. \r\n\r\n\\newpage\r\n\r\n\r\n\\part{Testing our code}\r\n\r\n\r\n\\subsection*{Before using our Matlab code}\r\n\r\nBefore running the {}``comparison.m'' file in the Matlab console\r\nyou must create a folder (for instance {}``Potential'') in the\r\nsame directory (for instance 'C:/Users/me/Documents/Matlab\\_PB\\_solver/')\r\nin which all the Matlab files of our code are saved. Then you must\r\nsave in that folder the APBS and MATLAB electrostatic potential solutions.\r\nIn our example they are APBS-born-pot.dx and born\\_ion\\_model\\_MATLAB\\_potential\\_solution.dx. \r\n\r\nNext, you must edit three lines in the {}``comparison.m'' file,\r\nnamely:\r\n\r\n\\begin{itemize}\r\n\\item The line number 19 contains the path to the folder {}``Potential''.\r\nIn our example you have to set the path as follows\r\n\\end{itemize}\r\n\\begin{quotation}\r\nMYPATH='C:/Users/me/Documents/Matlab\\_PB\\_solver/Potential';\r\n\\end{quotation}\r\n\\begin{itemize}\r\n\\item The line number 27 contains the number of grid points in the format\r\n$[N_{x}N_{y}N_{z}]$. In our example we have to set\r\n\\end{itemize}\r\n\\begin{quotation}\r\ndime= {[}65 65 65];\r\n\\end{quotation}\r\n\\begin{itemize}\r\n\\item The line number 29 contains the length of the box sides in the format\r\n$[L_{x}L_{y}L_{z}]$ in Angstroms. In our example we have to set\r\n\\end{itemize}\r\n\\begin{quotation}\r\nglen= {[}12 12 12];\r\n\\end{quotation}\r\nNow we are ready to run the comparison.m file. In the command window\r\nof Matlab you may type\r\n\r\n\\begin{quotation}\r\n>\\textcompwordmark{}>run C:/Users/me/Documents/Matlab\\_PB\\_solver/comparison\r\n\\end{quotation}\r\nYou are done. On the same command window you will see information\r\nabout the different steps that our code performs to get the required\r\nsolution. 
\r\n\r\n\r\n\\subsection*{Matlab output files}\r\n\r\nIn this example you will find a created folder named 'COMPARATIVE\\_ANALYSIS'\r\nin the directory 'C:/Users/me/Documents/Matlab\\_PB\\_solver/' containing\r\nthe following files\r\n\r\n\\begin{quotation}\r\nAbsolute Error between MATLAB and APBS solutions.dx\r\n\r\nAbsolute\\_error.fig\r\n\r\nAbsolute\\_error.tiff\r\n\r\nAPBS-born-pot\\_and\\_born\\_ion\\_model\\_MATLAB\\_potential\\_solution.fig\r\n\r\nAPBS-born-pot\\_and\\_born\\_ion\\_model\\_MATLAB\\_potential\\_solution.tiff\r\n\\end{quotation}\r\n\r\n\\end{document}\r\n", "meta": {"hexsha": "fcb5f2f1c8895c2a7d0baf6aa2767008cbe67ebb", "size": 8669, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "apbs/tools/matlab/solver/born-ion-model-example.tex", "max_stars_repo_name": "ashermancinelli/apbs-pdb2pqr", "max_stars_repo_head_hexsha": "0b1bc0126331cf3f1e08667ccc70dae8eda5cd00", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "apbs/tools/matlab/solver/born-ion-model-example.tex", "max_issues_repo_name": "ashermancinelli/apbs-pdb2pqr", "max_issues_repo_head_hexsha": "0b1bc0126331cf3f1e08667ccc70dae8eda5cd00", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "apbs/tools/matlab/solver/born-ion-model-example.tex", "max_forks_repo_name": "ashermancinelli/apbs-pdb2pqr", "max_forks_repo_head_hexsha": "0b1bc0126331cf3f1e08667ccc70dae8eda5cd00", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5870307167, "max_line_length": 96, "alphanum_fraction": 0.7391856039, "num_tokens": 2381, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7879311956428947, "lm_q2_score": 0.6992544147913994, "lm_q1q2_score": 0.55096436710516}}
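The Born ion is a popular test case precisely because its solvation energy has a closed form against which the numerical PB energy can be checked. Below is a small Python sketch of that analytic check; the formula is the standard Born expression with unit interior dielectric, and the Coulomb constant e^2/(4*pi*eps0) ~ 1389.35 kJ/mol*Angstrom is quoted from memory, so treat the numbers as approximate.

# Analytic Born solvation energy for a charge z*e of radius r (Angstrom)
# moved from vacuum (eps = 1) into a solvent of dielectric eps_s.
def born_energy_kJ_per_mol(z=1, radius_A=3.0, eps_s=78.54):
    ke2 = 1389.35  # e^2/(4*pi*eps0) in kJ/mol * Angstrom (approximate)
    return -(z ** 2) * ke2 / (2.0 * radius_A) * (1.0 - 1.0 / eps_s)

print(born_energy_kJ_per_mol())  # about -229 kJ/mol for this example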
{"text": "% \\section{Approximation of the variogram}\\label{app:approx}\n% For a data matrix ${\\bf Y}$ in its disjunctive form, we have ${\\bf Y}_{i,j} = 0,1$ for all $l = 1,..,(p+1)L$.\n% We give here arguments to justify that the function defined as\n\n% $$\n% \\gamma(h) = \\frac{1}{2 |N(h)|} \\sum_{i,j \\in N(h)} \\frac{1}{L} \\sum_{l = 1}^{(p+1)L} |Y_{i,l} - Y_{j,l}|\n% $$\n% can be approximated by the function \n\n% $$\\hat{\\gamma}(h) = {C}_0 \\frac{1}{2 |N(h)|} \\sum_{i,j \\in N(h)} \\| {\\bf Q}_{i,.} - {\\bf Q}_{i,.}\\|^2 + {C}_1. \n% $$\n% First, note that the sum \n\n% $$\n% S = \\frac{1}{L} \\sum_{l = 1}^{(p+1)L} |{\\bf Y}_{i,l} - {\\bf Y}_{j,l}|\n% $$\n% can be approximated by its expected value\n\n% $$\n% E[S] = \\frac{1}{L} \\sum_{l = 1}^{(p+1)L} P(|{\\bf Y}_{i,l} - {\\bf Y}_{j,l}| = 1)\n% $$\n% because \n\n% $$\n% Var(S) = \\frac{1}{L^2} \\sum_{l = 1}^{(p+1)L} P(|{\\bf Y}_{i,l} - {\\bf Y}_{j,l}| = 1) \n% ( 1 - P(|{\\bf Y}_{i,l} - {\\bf Y}_{j,l}| = 1)) \\xrightarrow[L \\rightarrow \\infty]{} 0.\n% $$\n% Then, we have\n\n% $$\n% P(|{\\bf Y}_{i,l} - {\\bf Y}_{j,l}| = 1) = ({\\bf P}_{i,l} - {\\bf P}_{j,l})^2 + {\\bf P}_{i,l} (1 - {\\bf P}_{i,l}) + {\\bf P}_{j,l} (1 - {\\bf P}_{j,l}),\n% $$\n% \\noindent where ${\\bf P}$ is the matrix of probability, for which entries ${\\bf P} = P({\\bf Y}_{i,l} = 1)$. Since ${\\bf P} = {\\bf Q} {\\bf G}^T$, we have\n% $$\n% \\sum_{i,j \\in N(h)} \\frac{1}{L} \\sum_{l = 1}^{(p+1)L} ({\\bf P}_{i,l} - {\\bf P}_{j,l})^2 = {C}_0 \\sum_{i,j \\in N(h)} \\| {\\bf Q}_{i,.} - {\\bf Q}_{j,.}\\|^2\n% $$\n% where ${C}_0 = \\frac{1}{L} \\sum_{l = 1}^{(p+1)L} \\sum_{k=1}^K {\\bf G}_{k,l}^2$.\n% Finally, assuming\n% $$\n% {C}_1 = \\frac{1}{|N(h)|} \\sum_{i,j \\in N(h)} \\frac{1}{L} \\sum_l  {\\bf P}_{i,l} (1 - {\\bf P}_{i,l})\n% $$\n% is constant $\\forall h$ we can approximate $\\gamma(h)$ by $\\hat{\\gamma}(h)$. \n\n\n\\section{Algorithms}\\label{app:algo}\n\n\\begin{algo}\n\tAQP algorithm pseudo code. 
To solve the optimization problem~\\eqref{eq:LS}.\n    \n\\begin{algorithm}[H]\n \\KwIn{ the data matrix ${\\bf Y} \\in \\{0,1\\}^{n \\times (p+1) L}$, the Laplacian matrix  ${\\bf \\Lambda} \\in \\mathbb{R}^{n \\times n}$, the number of ancestral populations $K$, the regularization coefficient  $\\alpha$, the maximum number of iterations $itMax$}\n \n \\KwOut{the admixture coefficients matrix ${\\bf Q} \\in \\mathbb{R}^{n \\times K}$, the ancestral genotype frequencies matrix ${\\bf G} \\in \\mathbb{R}^{K \\times (p+1) L}$ }\n \n \\BlankLine\n  \n Initialize ${\\bf Q}$ at random \\;\n\t\n \\For{$it = 1..itMax$}{\n\t \\BlankLine\n\t // G optimization step\n\t\n      \\For{$l = 1..L$}{\n\t\t\t$Y^l \\leftarrow {\\bf Y}_{., (p+1)l..(p+1)l + p}$ \\;\n\t\t    $ {\\bf D}_Q \\leftarrow {\\bf I}_{p+1} \\otimes {\\bf Q}^T {\\bf Q}$\\;\n\t\n\t\t\t$ v_Q \\leftarrow Vec({\\bf Q}^T Y^l)$\\;\n\t\n\t      $g^\\star \\in \\argmin_{\n\t\t\t       \\substack{\n\t          g \\in \\Delta_G\n\t       }\n\t     } \n              - 2 v_Q^T g + g^T {\\bf D}_Q g$ \\;\n\t\n\t$Vec({\\bf G}_{(p+1)l..(p+1)l + p,.}) \\leftarrow g^\\star$ \\;\n\t }\n\t \\BlankLine\n\t// Q optimization step\n\t\n\t$ {\\bf D}_G \\leftarrow {\\bf I}_{n} \\otimes {\\bf G}^T {\\bf G} + \\alpha {\\bf \\Lambda} \\otimes {\\bf I}_{K}$ \\;\n\t\n\t $ v_G \\leftarrow Vec({\\bf G}^T {\\bf Y}^T)$ \\;\n\t\n      $Vec({\\bf Q}^T) \\in \\argmin_{%\n\t       \\substack{%\n\t          q \\in \\Delta_Q\n\t       }\n\t     } \n\t     - 2 v_G^T q + q^T {\\bf D}_G q$\\;\n\t }\n\\end{algorithm}\n\\label{algo:aqp}\n\\end{algo}\n\n\\begin{algo}\nAPLS algorithm pseudo code. To solve the optimization problem~\\eqref{eq:LS}.\n\n\\begin{algorithm}[H]\n\\KwIn{ the data matrix ${\\bf Y} \\in \\{0,1\\}^{n \\times (p+1) L}$, the eigenvector and eigenvalue matrices ${\\bf U}$ and ${\\bf \\Delta}$ such that ${\\bf \\Lambda} = {\\bf U}^T {\\bf \\Delta} {\\bf U}$, the number of ancestral populations $K$, the regularization coefficient  $\\alpha$, the maximum number of iterations $itMax$}\n\n\\KwOut{the admixture coefficients matrix ${\\bf Q} \\in \\mathbb{R}^{n \\times K}$, the ancestral genotype frequencies matrix ${\\bf G} \\in \\mathbb{R}^{K \\times (p+1) L}$ }\n\n \\BlankLine\n \n Initialize ${\\bf Q}$ at random \\;\n\n $proj({\\bf Y}) \\leftarrow {\\bf U} {\\bf Y}$ \\;\n\n \\For{$it = 1..itMax$}{\t \n \t\\BlankLine\n\t // G optimization step\n\n\t\\For{$j = 1..(p+1)L$}{\n\n\t$g^\\star \\in \\argmin_{%\n\t       \\substack{%\n\t          g \\in \\mathbb{R}^{K}\n\t       }\n\t       }\n\t     || {\\bf Y}_{., j} - {\\bf Q} g||^2$\\;\n\n ${\\bf G}_{j,.} \\leftarrow g^\\star $\\;\n }\n\n Project ${\\bf G}$ such that ${\\bf G} \\in \\Delta_G$ \\;\n \t\\BlankLine\n\t // Q optimization step\n\n \\For{$i = 1..n$}{\n $g^\\star_i \\in \\argmin_{%\n       \\substack{%\n          q \\in \\mathbb{R}^{K}\n       }\n       }\n     || proj({\\bf Y})_{i,.} - {\\bf G}^T q ||^2 + \\alpha {\\bf \\Delta}_{i,i} ||q||^2$\\;\n\n $proj({\\bf Q})_{i, .} \\leftarrow g^\\star_i $\\;\n }\n\n ${\\bf Q} \\leftarrow {\\bf U}^T proj({\\bf Q})$\\;\n\nProject ${\\bf Q}$ such that ${\\bf Q} \\in \\Delta_Q$ \\;\n }\n\n\n \\end{algorithm}\n\\label{algo:apls}\n\\end{algo}\n\n", "meta": {"hexsha": "a31ac355dbb0ffd1cf072ed914aabf9c4bf6a51b", "size": 4713, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "2Article/TESS3Article-master/Article/appendix.tex", "max_stars_repo_name": "cayek/Thesis", "max_stars_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_stars_repo_licenses": ["MIT"], 
"max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2Article/TESS3Article-master/Article/appendix.tex", "max_issues_repo_name": "cayek/Thesis", "max_issues_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2Article/TESS3Article-master/Article/appendix.tex", "max_forks_repo_name": "cayek/Thesis", "max_forks_repo_head_hexsha": "14d7c3fd03aac0ee940e883e37114420aa614b41", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2119205298, "max_line_length": 315, "alphanum_fraction": 0.517292595, "num_tokens": 1945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.787931190663057, "lm_q2_score": 0.6992544147913993, "lm_q1q2_score": 0.5509643636229863}}
{"text": "\\documentclass{article}\n\\usepackage{amsmath}\n\\usepackage{bm}\n\n\\newcommand{\\M}[3]{{^#1}M^{#2}_{#3}}\n\\newcommand{\\Minv}[3]{{\\left( \\M{#1}{#2}{#3} \\right) }^{-1}}\n\n\\newcommand{\\oM}[2]{M^{#1}_{#2}}\n\\newcommand{\\oMinv}[2]{{\\left( \\oM{#1}{#2} \\right) }^{-1}}\n\n\\begin{document}\n\n\\section{Visual servoing}\n\n\\subsection{Method 1}\n\nLet $T_1, T_2, G, H, $ be a visual tag on the robot gripper,\na visual tag on the object, the gripper and the handle frames.\nLet ${^o}M^a_1 \\in SE(3)$ be the transformation between joint 1 and the world.\nJoint 1 refers to the end-effector joint.\nJoint 2 refers to the object joint.\nThe camera frame is refered to by 3.\n$a$ takes values in $\\left\\{ p, c, v, \\empty \\right\\}$\nmeaning the transformation refers to Planning, Control, Vision or is independant of the considered framework.\nFor instance, $$ {^1}M_G = {^1}M^p_G = {^1}M^C_G = {^1}M^v_G $$\nbut, $$ {^o}M^p_1 \\neq {^o}M^C_1 \\neq {^o}M^v_1 $$ and ${^o}M_1 $ is meaningless.\n\nThe control variable is:\n$$ {^G}M^v_H = {^G}M_{T_1} {^{T_1}}M^v_{T_2} {^{T_2}}M_H $$\nwhere ${^{T_1}}M^v_{T_2} $ is the input.\n\nThe reference is:\n$$ {^G}M^p_H = \\left( {^o}M^p_{1} {^1}M_{G} \\right)^{-1} \\left( {^o}M^p_{2} {^2}M_{H} \\right) $$\n\nThe derivative of the control variable is the derivative of:\n$$ {^{T_1}}M^C_{T_2} = \\left( {^3}M^C_{1} {^1}M_{T_1} \\right)^{-1} \\left( {^3}M^v_{T_2} \\right) $$\n\nFormulated as a task, it gives:\n\n$$ e = \\left( {^G}M^v_H \\right)^{-1} {^G}M^p_H $$\n%$$ J = \\left( {^G}M^v_H \\right)^{-1} {^G}M^p_H $$\n\n\\subsection{Method 2}\n\nWe assume $\\oM{c}{2} = \\oM{p}{2} $. This makes sense because the object is not actuated, so there is no control variable for it.\nWe define the two following errors between the control variables and the vision.\n$$ \\delta_g = \\oMinv{c}{1} \\oM{v}{1} $$\n$$ \\delta_h = \\oMinv{c}{2} \\oM{v}{2} $$\nThey account for kinematic calibration and localization errors between control and vision.\n\nThe desired behaviour of the system is:\n$$ \\oMinv{v}{h} {\\bm{\\oM{v}{g}}}^* = \\oMinv{p}{h} {\\oM{p}{g}} $$\n\nThe desired gripper position is then:\n\\begin{align*}\n{\\bm{\\oM{c}{g}}}^* &= {\\bm{\\oM{c}{1}}}^*                                                    . g \\\\\n                   &= {\\bm{\\oM{v}{1}}}^*                                      .\\delta_g^{-1}. g \\\\\n                   &= {\\bm{\\oM{v}{g}}}^*                               .g^{-1}.\\delta_g^{-1}. g \\\\\n                   &= \\oM{v}{h}.\\oMinv{p}{h}.{\\oM{p}{g}}               .g^{-1}.\\delta_g^{-1}. g \\\\\n                   &= \\oM{v}{2}.       h.\\oMinv{p}{h}.\\oM{p}{g}        .g^{-1}.\\delta_g^{-1}. g \\\\\n                   %&= \\oM{p}{2}.\\delta_h.\\oMinv{p}{2}.\\oM{p}{1}               .\\delta_g^{-1}. g \\\\\n                   &= \\oM{p}{2}.\\delta_h.h.\\oMinv{p}{h}.\\oM{p}{1}               .\\delta_g^{-1}. g \\\\\n                   &= \\oM{p}{h}.h^{-1}.\\delta_h.h.\\oMinv{p}{h}.\\oM{p}{1}               .\\delta_g^{-1}. 
g \\\\\n\\end{align*}\n\n\\paragraph{The evaluation of $\\delta_g$ and $\\delta_h$} is made in the camera frame.\n$$ \\delta_g = \\Minv{3}{c}{1} \\M{3}{v}{1} $$\n$$ \\delta_h = \\Minv{3}{c}{2} \\M{3}{v}{2} $$\n\n\\end{document}\n", "meta": {"hexsha": "9e05b399fe45cb840412503a05b4ce809b6d24db", "size": 3062, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/control.tex", "max_stars_repo_name": "florent-lamiraux/agimus-sot", "max_stars_repo_head_hexsha": "b150e9dca59c2d19888a48ebf28cbe5ae9cbcdee", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/control.tex", "max_issues_repo_name": "florent-lamiraux/agimus-sot", "max_issues_repo_head_hexsha": "b150e9dca59c2d19888a48ebf28cbe5ae9cbcdee", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-10-21T18:15:47.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-30T15:14:13.000Z", "max_forks_repo_path": "doc/control.tex", "max_forks_repo_name": "florent-lamiraux/agimus-sot", "max_forks_repo_head_hexsha": "b150e9dca59c2d19888a48ebf28cbe5ae9cbcdee", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-07-31T07:27:58.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-02T12:40:38.000Z", "avg_line_length": 43.1267605634, "max_line_length": 128, "alphanum_fraction": 0.5258001306, "num_tokens": 1215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8031737963569016, "lm_q2_score": 0.6859494550081925, "lm_q1q2_score": 0.5509366278878776}}
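Method 1 is easy to exercise numerically: every frame is just a 4x4 homogeneous matrix, and the task error is a product of a handful of them. The Python sketch below assumes numpy arrays and illustrative argument names; it only encodes the two displayed formulas for the control variable, the reference, and the error e.

import numpy as np

def task_error(oneM_G, oneM_T1, T1M_T2_v, twoM_H, oM1_p, oM2_p):
    # control variable: gripper-to-handle transform as seen by vision,
    # GMh_v = (1M_G)^-1 . 1M_T1 . T1M_T2^v . T2M_H
    gMh_v = np.linalg.inv(oneM_G) @ oneM_T1 @ T1M_T2_v @ twoM_H
    # reference: gripper-to-handle transform according to planning
    gMh_p = np.linalg.inv(oM1_p @ oneM_G) @ (oM2_p @ twoM_H)
    # e is the identity transform when vision agrees with the plan
    return np.linalg.inv(gMh_v) @ gMh_p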
{"text": "\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\usepackage{pdfpages}\n\\usepackage{lscape}\n\\usepackage{hyperref}\n\\title{2D Graphics with Asymptote}\n\\author{The Asymptote Project}\n\\newcommand{\\insertrep}[1]{%\n\t\\hspace*{-2.4cm}\n\t\\fbox{\\includegraphics[page=1,scale=1.5]{#1}}\n}\n\\begin{document}\n\tExercise 12:\\\\\n\tHow many corners does a cube have in 4 dimensions? How many 3D faces? How many edges?\n\t\n\t\\section{Vertices}\n\tA typical corner is $(0,0,1,0)$. A typical edge goes to $(0,1,0,0)$.\\\\\n\t\\\\\n\tIn 3D, all cube corners are:\\\\\n\t$(0,0,0)$\\\\\n\t$(0,0,1)$\\\\\n\t$(0,1,0)$\\\\\n\t$(0,1,1)$\\\\\n\t$(1,0,0)$\\\\\n\t$(1,0,1)$\\\\\n\t$(1,1,0)$\\\\\n\t$(1,1,1)$\\\\\n\t\\\\\n\tWhich follows the permutation logic, and we know that in a set with three members (3D space), each with two possibilities (0 or 1) we have $2^3=8$ permutations.\\\\\n\t\n\t\\section{Edges}\n\tIn a 4D space we will have $2^4=16$ permutations, so we will have 16 corners.\\\\\n\tIf we use the technique of imagining a 4D-Cube as a moving 3D Cube, we will have:\\\\\n\t1 - 12 edges from the \"starting\" cube;\\\\\n\t2 - 12 edges from the \"final\" cude;\\\\\n\t3 - 8 edges from the \"moving vertices\"\\\\\n\t\\\\\n\tWhat will give us 32 edges on a 4D-Cube.\n\t\\newpage\n\tThe recursive formula is: $f(N) = 2*f(N-1) + 2^{N-1}$\n\t\\\\\n\t\\begin{align*}\n\t\tf(N) = 2f(N-1)+2^{N-1}\\\\\n\t\tf(N+1) = 2f(N)+2^{N}\\\\\n\t\tf(N+1) - 2f(N) = 2^{N}\\\\\n\t\t(E-2)f = 2^{N}\\\\\n\t\t(E-2)(E-2)f = (E-2)*2^{N}\\\\\n\t\t(E-2)^2f = 0\\\\\n\t\t\\\\\n\t\t(\\alpha n + \\beta)2^n\\\\\n\t\t\\\\\n\t\t(1\\alpha + \\beta)2^1 = 1\\\\\n\t\t(2\\alpha + \\beta)2^2 = 4\\\\\n\t\t(3\\alpha + \\beta)2^3 = 12\\\\\n\t\t\\\\\n\t\t2\\alpha + 2\\beta = 1\\\\\n\t\t8\\alpha + 4\\beta = 4\\\\\n\t\t24\\alpha + 8\\beta = 12\\\\\n\t\t\\\\\n\t\t-2(2\\alpha + 2\\beta) = -2*1\\\\\t\t\t\n\t\t8\\alpha + 4\\beta = 4\\\\\n\t\t\\\\\n\t\t-4\\alpha - 4\\beta = -2\\\\\t\t\t\n\t\t8\\alpha + 4\\beta = 4\\\\\n\t\t\\\\\n\t\t-4\\alpha - 4\\beta = -2\\\\\t\t\t\n\t\t8\\alpha + 4\\beta = 4\\\\\t\t\n\t\t\\\\\n\t\t4\\alpha + 0\\beta = 2\\\\\n\t\t\\\\\n\t\\end{align*}\n\t\\begin{align*}\n\t\t4\\alpha = 2\\\\\t\t\n\t\t\\alpha = \\frac{2}{4}\\\\\n\t\t\\alpha = \\frac{1}{2}\\\\\n\t\t\\\\\n\t\t2\\frac{1}{2} + 2\\beta = 1\\\\\n\t\t\\frac{2}{2} + 2\\beta = 1\\\\\t\t\n\t\t1 + 2\\beta = 1\\\\\n\t\t2\\beta = 1 - 1\\\\\n\t\t2\\beta = 0\\\\\t\t\n\t\t\\beta = 0\n\t\t\\\\\n\t\t(\\frac{n}{2})2^n\\\\\t\t\n\t\t\\frac{n*2^n}{2}\\\\\n\t\\end{align*}\n\tWhich is a \"closed form\" for the quantities of edges of \"cube\" in any dimension:\n\t$$ \\text{edges}(n) = \\frac{n*2^n}{2}$$\t\n\tWhich can be interpreted as the dimension times half the vertices. This makes total sense because at each vertex there are three edges, one for each each axis in that dimension. This counts each edge twice, at the start and at the end, so we divide by two.\n\t\\newpage\n\t\\\\\n\tReference:\\\\\n\t\\\\\n\tBeyond the Third Dimension\\\\\t\n\tby \\\\\n\tThomas F. Banchoff \\\\\n\t\\begin{quote}\t\t\n\tIt is helpful to think of cubes as generated by lower-dimensional cubes in motion. A point in motion generates a segment; a segment in motion generates a square; a square in motion generates a cube; and so on. From this progression, a pattern develops, which we can exploit to predict the numbers of vertices and edges.\\\\\n\tEach time we move a cube to generate a cube in the next higher dimension, the number of vertices doubles. 
That is easy to see since we have an initial position and a final position, each with the same number of vertices. Using this information we can infer an explicit formula for the number of vertices of a cube in any dimension, namely 2 raised to that power.\\\\\n\tWhat about the number of edges? A square has 4 edges, and as it moves from one position to the other, each of its 4 vertices traces out an edge. Thus we have 4 edges on the initial square, 4 on the final square, and 4 traced out by the moving vertices for a total of 12. That basic pattern repeats itself. If we move a figure in a straight line, then the number of edges in the new figure is twice the original number of edges plus the number of moving vertices. Thus the number of edges in a four-cube is 2 times 12 plus 8 for a total of 32. Similarly we find 32 + 32 + 16 = 80 edges on a five-cube and 80 + 80 + 32 = 192 edges on a six-cube.\\\\\n\tThis presentation definitely suggests a pattern, namely that the number of edges of a hypercube of a given dimension is the dimension multiplied by half the number of vertices in that dimension. Once we notice a pattern like this, it can be proved to hold in all dimensions by mathematical induction.\n\t\\end{quote}\n\t\\\\\n\t\\url{http://www.math.brown.edu/~banchoff/Beyond3d/chapter4/section05.html}\n\t\\\\\n\t\\section{Faces}\n\tThe easiest way to calculate the quantity of faces of a 4D cube is to realize that four edges start from each vertex and each face is a combination of two of these edges. This gives us ${4 \\choose 2}=6$ faces per vertex.\\\\\n\tA 4D Cube has $2^4=16$ vertices, each touching 6 faces. But this counts each face 4 times, so we actually have:\n\t$$f(n) = \\frac{2^n\\cdot{n \\choose 2}}{4}$$\n\tWhich gives us the correct value for $R^4$, $\\frac{2^4\\cdot{4 \\choose 2}}{4}=24$.\\\\\n\t\\\\\t\n\tReference:\\\\\n\t\\begin{quote}\n\t\tThe hypercube is so highly symmetric that every vertex looks like every other vertex. If we know what happens at one vertex, we can figure out what is going to happen at all vertices. At each vertex there are as many square faces as there are ways to choose 2 edges from among the 4 edges at the point, namely 6. Since there are 16 vertices, we can multiply 6 by 16 to get 96, but this counts each square four times, once for each of its vertices. 
The correct number of squares in a hypercube is then 96/4, or 24.\n\t\\end{quote}\n\t\\url{http://www.math.brown.edu/~banchoff/Beyond3d/chapter4/section07.html}\n\\end{document}", "meta": {"hexsha": "d7fe8a26cbb69b95fb6f187dc8771f05de3bcc5f", "size": 5464, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "books/IntroductionLinearALgebra/Exercise.01.012.tex", "max_stars_repo_name": "xunilrj/sandbox", "max_stars_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-04-01T17:18:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-12T05:23:23.000Z", "max_issues_repo_path": "books/IntroductionLinearALgebra/Exercise.01.012.tex", "max_issues_repo_name": "xunilrj/sandbox", "max_issues_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-05-24T13:36:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T06:44:20.000Z", "max_forks_repo_path": "books/IntroductionLinearALgebra/Exercise.01.012.tex", "max_forks_repo_name": "xunilrj/sandbox", "max_forks_repo_head_hexsha": "f92c12f83433cac01a885585e41c02bb5826a01f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-09-20T01:07:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-22T14:55:38.000Z", "avg_line_length": 46.7008547009, "max_line_length": 646, "alphanum_fraction": 0.6729502196, "num_tokens": 1899, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6757646010190476, "lm_q2_score": 0.8152324983301568, "lm_q1q2_score": 0.5509052639718397}}
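The closed forms are easy to confirm by brute force: enumerate the 2^n 0/1 vertex tuples, count the pairs that differ in exactly one coordinate, and compare with n*2^n/2 and 2^n*C(n,2)/4. A short Python check:

from itertools import product
from math import comb

def cube_counts(n):
    verts = list(product((0, 1), repeat=n))
    # edges join vertex pairs differing in exactly one coordinate
    edges = sum(sum(a != b for a, b in zip(u, v)) == 1
                for i, u in enumerate(verts) for v in verts[i + 1:])
    faces = 2 ** n * comb(n, 2) // 4   # closed form from the text
    return len(verts), edges, faces

print(cube_counts(4))  # (16, 32, 24)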
{"text": "\\input{latexmacros.tex}\n\\title{Small strain elastic-plastic model}\n\\begin{document}\n\\DeclareGraphicsExtensions{.jpg,.pdf}\n\\maketitle\n\\tableofcontents\n\\section{Preamble}\nLet $\\boldsymbol{F}$ be the deformation gradient, $\\boldsymbol{\\sigma}$ be the \nCauchy stress, and $\\boldsymbol{d}$ be the rate of deformation tensor.  We\nfirst decompose the deformation gradient into a stretch and a rotations using\n$\\BF = \\BR\\cdot\\BU$.  The rotation $\\BR$ is then used to rotate the stress and\nthe rate of deformation into the material configuration to give us\n\\Beq\n  \\widehat{\\Bsig} = \\BR^T\\cdot\\Bsig\\cdot\\BR ~;~~\n  \\widehat{\\Bd} = \\BR^T\\cdot\\Bd\\cdot\\BR \n\\Eeq\nThis is equivalent to using a Green-Naghdi objective stress rate.  In the \nfollowing all equations are with respect to the hatted quantities and we \ndrop the hats for convenience.\n\n\\section{Elastic relation}\nLet us split the Cauchy stress into a volumetric and a deviatoric part\n\\Beq\n  \\Bsig = p~\\Bone + \\Bs ~;~~ p = \\Third~\\Tr(\\Bsig) ~.\n\\Eeq\nTaking the time derivative gives us\n\\Beq\n  \\dot{\\Bsig} = \\dot{p}~\\Bone + \\dot{\\Bs} ~.\n\\Eeq\nWe assume that the elastic response of the material is isotropic.  The constitutive \nrelation for a hypoelastic material of grade 0 can be expressed as\n\\Beq\n  \\dot{\\Bsig} = \\left[\\lambda~\\Tr(\\Bd^e) - 3~\\kappa~\\alpha~\\Deriv{}{t}(T-T_0)\\right]~\\Bone \n    + 2~\\mu~\\Bd^e  ~;~~ \\Bd = \\Bd^e + \\Bd^p\n\\Eeq\nwhere $\\Bd^e, \\Bd^p$ are the elastic and plastic parts of the rate of deformation \ntensor, $\\lambda,\\mu$ are the Lame constants, $\\kappa$ is the bulk modulus, $\\alpha$\nis the coefficient of thermal expansion, $T_0$ is the reference temperature, and $T$ is the\ncurrent temperature.  If we split $\\Bd^e$ into volumetric and deviatoric parts as\n\\Beq\n  \\Bd^e = \\Third~\\Tr(\\Bd^e)~\\Bone + \\Beta^e\n\\Eeq\nwe can write\n\\Beq\n  \\dot{\\Bsig} = \\left[\\left(\\lambda + \\cfrac{2}{3}~\\mu\\right)~\\Tr(\\Bd^e) \n     - 3~\\kappa~\\alpha~\\Deriv{}{t}(T-T_0)\\right]~\\Bone + 2~\\mu~\\Beta^e  =\n  \\kappa~\\left[\\Tr(\\Bd^e) - 3~\\alpha~\\Deriv{}{t}(T-T_0)\\right]~\\Bone + 2~\\mu~\\Beta^e  \n\\Eeq\nTherefore, we have\n\\Beq\n  \\dot{\\Bs} = 2~\\mu~\\Beta^e ~.\n\\Eeq\nand\n\\Beq\n  \\dot{p} = \\kappa~\\left[\\Tr(\\Bd^e) - 3~\\alpha~\\Deriv{}{t}(T-T_0)\\right] ~.\n\\Eeq\nWe will use a standard elastic-plastic stress update algorithm to integrate the rate \nequation for the deviatoric stress.  However, we will assume that the volumetric part \nof the Cauchy stress can be computed using an equation of state.  
Then the final\nCauchy stress will be given by\n\\Beq\n  \\Bsig = \\left[p(J) - 3~J~\\Deriv{p(J)}{J}~\\alpha~(T-T_0)\\right]~\\Bone + \\Bs ~;~~ \n  J = \\det(\\BF)~.\n\\Eeq\n(Note that we assume that the plastic part of the deformation is volume preserving.\nThis is not true for Gurson type models and will lead to a small error in the \ncomputed value of $\\Bsig$.)\n\n\\section{Flow rule}\nWe assume that the flow rule is given by\n\\Beq\n  \\Bd^p = \\dot{\\gamma}~\\Br\n\\Eeq\nWe can split $\\Bd^p$ into a trace part and a trace-free part, i.e.,\n\\Beq\n  \\Bd^p = \\Third~\\Tr(\\Bd^p)~\\Bone + \\Beta^p \n\\Eeq\nThen, using the flow rule, we have\n\\Beq\n  \\Bd^p = \\Third~\\dot{\\gamma}~\\Tr(\\Br)~\\Bone + \\Beta^p  ~.\n\\Eeq\nTherefore we can write the flow rule as\n\\Beq\n  \\Beta^p = \\dot{\\gamma}~\\left(-\\Third~\\Tr(\\Br)~\\Bone + \\Br\\right) ~.\n\\Eeq\nNote that \n\\Beq\n  \\Bd = \\Bd^e + \\Bd^p \\quad \\implies \\quad\n  \\Tr(\\Bd) = \\Tr(\\Bd^e) + \\Tr(\\Bd^p) ~.\n\\Eeq\nAlso,\n\\Beq\n  \\Bd = \\Bd^e + \\Bd^p \\quad \\implies \\quad\n  \\Third~\\Tr(\\Bd)~\\Bone + \\Beta = \n  \\Third~\\Tr(\\Bd^e)~\\Bone + \\Beta^e  + \n  \\Third~\\Tr(\\Bd^p)~\\Bone + \\Beta^p ~.\n\\Eeq\nTherefore,\n\\Beq\n  \\Beta = \\Beta^e + \\Beta^p ~.\n\\Eeq\n\n\\section{Isotropic and Kinematic hardening and porosity evolution rules}\nWe assume that the strain rate and temperature can be fixed at the\nbeginning of a timestep and consider only the evolution of the plastic strain,\nthe porosity, and the back stress while calculating the current stress.\n\nWe assume that the plastic strain evolves according to the relation\n\\Beq\n  \\dot{\\Ve^p} = \\dot{\\gamma}~h^{\\alpha}\n\\Eeq\n\nWe also assume that the back stress evolves according to the relation\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\dot{\\gamma}~\\Bh^{\\beta}\n\\Eeq\nwhere $\\widehat{\\Bbeta}$ is the back stress.  If $\\Bbeta$ is the deviatoric part of\n$\\widehat{\\Bbeta}$, then we can write\n\\Beq\n  \\dot{\\Bbeta} = \\dot{\\gamma}~\\Dev(\\Bh^{\\beta}) ~.\n\\Eeq\n\nThe porosity $\\phi$ is assumed to evolve according to the relation\n\\Beq\n  \\dot{\\phi} = \\dot{\\gamma}~h^{\\phi} ~.\n\\Eeq\n\n\\section{Yield condition}\nThe yield condition is assumed to be of the form\n\\Beq\n  f(\\Bs, \\Bbeta, \\Ve^p, \\phi, \\dot{\\Ve}, T, \\dots) \n    = f(\\Bxi, \\Ve^p, \\phi, \\dot{\\Ve}, T, \\dots) = 0\n\\Eeq\nwhere $\\Bxi = \\Bs - \\Bbeta$ and $\\Bbeta$ is the deviatoric part of $\\widehat{\\Bbeta}$.  
\nThe Kuhn-Tucker loading-unloading conditions are\n\\Beq\n  \\dot{\\gamma} \\ge 0 ~;~~  f \\le 0 ~;~~ \\dot{\\gamma}~f = 0\n\\Eeq\nand the consistency condition is $\\dot{f} = 0$.\n\n\\section{Temperature increase due to plastic dissipation}\nThe temperature increase due to plastic dissipation is assumed to be\ngiven by the rate equation\n\\Beq\n  \\dot{T} = \\cfrac{\\chi}{\\rho~C_p}~\\sigma_y~\\dot{\\Ve^p} ~.\n\\Eeq\nThe temperature is updated using\n\\Beq\n  T_{n+1} = T_n + \n   \\cfrac{\\chi_{n+1}~\\Delta t}{\\rho_{n+1}~C_p}~\\sigma^{n+1}_y~\\dot{\\Ve^p}_{n+1} ~.\n\\Eeq\n\n\\section{Continuum elastic-plastic tangent modulus}\nTo determine whether the material has undergone a loss of stability we need to compute\nthe acoustic tensor, which requires the computation of the continuum elastic-plastic tangent\nmodulus.\n\nTo do that recall that\n\\Beq\n  \\Bsig = p~\\Bone + \\Bs \\quad \\implies \\quad \\dot{\\Bsig} = \\dot{p}~\\Bone + \\dot{\\Bs} ~.\n\\Eeq\nWe assume that \n\\Beq\n  \\dot{p} = J~\\Partial{p}{J}~\\Tr(\\Bd) \\qquad \\Tand \\quad\n  \\dot{\\Bs} = 2~\\mu~\\Beta^e ~.\n\\Eeq\nNow, the consistency condition requires that\n\\Beq\n  \\dot{f}(\\Bs, \\Bbeta, \\Ve^p, \\phi, \\dot{\\Ve}, T, \\dots) = 0 ~.\n\\Eeq\nKeeping $\\dot{\\Ve}$ and $T$ fixed over the time interval, we can use the chain rule\nto get\n\\Beq\n  \\dot{f} = \\Partial{f}{\\Bs}:\\dot{\\Bs} + \\Partial{f}{\\Bbeta}:\\dot{\\Bbeta} + \n    \\Partial{f}{\\Ve^p}~\\dot{\\Ve^p} + \\Partial{f}{\\phi}~\\dot{\\phi} = 0~.\n\\Eeq\nThe needed rate equations are\n\\Beq\n  \\Bal\n    \\dot{\\Bs} & = 2~\\mu~\\Beta^e = 2~\\mu~(\\Beta - \\Beta^p) = \n      2~\\mu~[\\Beta - \\dot{\\gamma}~\\Dev(\\Br)] \\\\\n    \\dot{\\Bbeta} & = \\dot{\\gamma}~\\Dev(\\Bh^{\\beta}) \\\\\n    \\dot{\\Ve^p} & = \\dot{\\gamma}~h^{\\alpha} \\\\\n    \\dot{\\phi} & = \\dot{\\gamma}~h^{\\phi} \n  \\Eal\n\\Eeq\nPlugging these into the expression for $\\dot{f}$ gives\n\\Beq\n  2~\\mu~\\Partial{f}{\\Bs}:[\\Beta - \\dot{\\gamma}~\\Dev(\\Br)] \n    + \\dot{\\gamma}~\\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta}) \n    + \\dot{\\gamma}~\\Partial{f}{\\Ve^p}~h^{\\alpha} \n    + \\dot{\\gamma}~\\Partial{f}{\\phi}~h^{\\phi}  = 0\n\\Eeq\nor,\n\\Beq\n  \\dot{\\gamma} = \\cfrac{2~\\mu~\\Partial{f}{\\Bs}:\\Beta}\n    {2~\\mu~\\Partial{f}{\\Bs}:\\Dev(\\Br) - \\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta})\n     - \\Partial{f}{\\Ve^p}~h^{\\alpha} - \\Partial{f}{\\phi}~h^{\\phi}} ~.\n\\Eeq\nPlugging this expression for $\\dot{\\gamma}$ into the equation for $\\dot{\\Bs}$,\nwe get\n\\Beq\n  \\dot{\\Bs} = 2~\\mu~\\left[\\Beta - \\left( \n    \\cfrac{2~\\mu~\\Partial{f}{\\Bs}:\\Beta}\n    {2~\\mu~\\Partial{f}{\\Bs}:\\Dev(\\Br) - \\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta})\n     - \\Partial{f}{\\Ve^p}~h^{\\alpha} - \\Partial{f}{\\phi}~h^{\\phi}}\\right)~\\Dev(\\Br)\\right]~.\n\\Eeq\nAt this stage, note that a symmetric $\\Bsig$ implies a symmetric $\\Bs$ and hence a \nsymmetric $\\Beta$.  Also we assume that $\\Br$ is symmetric (and hence $\\Dev(\\Br)$), which\nis true if the flow rule is associated.  Then we can write\n\\Beq\n  \\Beta = \\SfI^{4s}:\\Beta \\qquad \\Tand \\qquad \\Dev(\\Br) = \\SfI^{4s}:\\Dev(\\Br)\n\\Eeq\nwhere $\\SfI^{4s}$ is the fourth-order symmetric identity tensor.  
Also note that\nif $\\BA$, $\\BC$, $\\BD$ are second order tensors and $\\SfB$ is a fourth order tensor,\nthen\n\\Beq\n  (\\BA:\\SfB:\\BC)~(\\SfB:\\BD) \\equiv A_{ij}~B_{ijkl}~C_{kl}~B_{mnpq}~D_{pq}\n     = (B_{mnpq}~D_{pq})~(A_{ij}~B_{ijkl})~C_{kl} \\equiv [(\\SfB:\\BD)\\otimes(\\BA:\\SfB)]:\\BC ~.\n\\Eeq\nTherefore we have\n\\Beq\n  \\dot{\\Bs} = 2~\\mu~\\left[\\SfI^{4s}:\\Beta - \\left( \n    \\cfrac{2~\\mu~[\\SfI^{4s}:\\Dev(\\Br)]\\otimes[\\Partial{f}{\\Bs}:\\SfI^{4s}]}\n    {2~\\mu~\\Partial{f}{\\Bs}:\\Dev(\\Br) - \\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta})\n     - \\Partial{f}{\\Ve^p}~h^{\\alpha} - \\Partial{f}{\\phi}~h^{\\phi}}\\right):\\Beta\\right]~.\n\\Eeq\nAlso,\n\\Beq\n  \\SfI^{4s}:\\Dev(\\Br) = \\Dev(\\Br) \\qquad \\Tand \\qquad\n  \\Partial{f}{\\Bs}:\\SfI^{4s} = \\Partial{f}{\\Bs} ~.\n\\Eeq\nHence we can write\n\\Beq\n  \\dot{\\Bs} = 2~\\mu~\\left[\\SfI^{4s} - \\left( \n    \\cfrac{2~\\mu~\\Dev(\\Br)\\otimes\\Partial{f}{\\Bs}}\n    {2~\\mu~\\Partial{f}{\\Bs}:\\Dev(\\Br) - \\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta})\n     - \\Partial{f}{\\Ve^p}~h^{\\alpha} - \\Partial{f}{\\phi}~h^{\\phi}}\\right)\\right]:\\Beta\n\\Eeq\nor,\n\\Beq\n  \\dot{\\Bs} = \\SfB^{ep}:\\Beta = \\SfB^{ep}:\\left[\\Bd - \\Third~\\Tr(\\Bd)~\\Bone\\right]\n\\Eeq\nwhere\n\\Beq\n  \\SfB^{ep} := \n    2~\\mu~\\left[\\SfI^{4s} - \\left( \n    \\cfrac{2~\\mu~\\Dev(\\Br)\\otimes\\Partial{f}{\\Bs}}\n    {2~\\mu~\\Partial{f}{\\Bs}:\\Dev(\\Br) - \\Partial{f}{\\Bbeta}:\\Dev(\\Bh^{\\beta})\n     - \\Partial{f}{\\Ve^p}~h^{\\alpha} - \\Partial{f}{\\phi}~h^{\\phi}}\\right)\\right] ~.\n\\Eeq\nAdding in the volumetric component gives\n\\Beq\n  \\Bal\n  \\dot{\\Bsig} & = \\dot{p}~\\Bone + \\dot{\\Bs} \\\\\n   & = J~\\Partial{p}{J}~\\Tr(\\Bd)~\\Bone +  \n       \\SfB^{ep}:\\left[\\Bd - \\Third~\\Tr(\\Bd)~\\Bone\\right] \\\\\n   & = \\left[3~J~\\Partial{p}{J}~\\Bone - \\SfB^{ep}:\\Bone\\right]~\\cfrac{\\Bd:\\Bone}{3}\n       + \\SfB^{ep}:\\Bd \\\\\n   & = J~\\Partial{p}{J}~(\\Bone\\otimes\\Bone):\\Bd - \\Third~[\\SfB^{ep}:(\\Bone\\otimes\\Bone)]:\\Bd\n       + \\SfB^{ep}:\\Bd ~.\n  \\Eal\n\\Eeq\nTherefore,\n\\Beq\n  \\dot{\\Bsig} = \\left[J~\\Partial{p}{J}~(\\Bone\\otimes\\Bone) - \n      \\Third~[\\SfB^{ep}:(\\Bone\\otimes\\Bone)] + \\SfB^{ep}\\right]:\\Bd \n   = \\SfC^{ep}:\\Bd ~.\n\\Eeq\nThe quantity $\\SfC^{ep}$ is the continuum elastic-plastic tangent modulus.  We also use the \ncontinuum elastic-plastic tangent modulus in the implicit version of the code.  However,\nfor improved accuracy and faster convergence, an algorithmically  consistent tangent modulus \nshould be used instead.  That tangent modulus can be calculated in the usual manner and \nis left for development and implementation as an additional feature in the future.\n\n\\section{Stress update}\nA standard return algorithm is used to compute the updated Cauchy stress.  
Recall\nthat the rate equation for the deviatoric stress is given by\n\\Beq\n  \\dot{\\Bs} = 2~\\mu~\\Beta^e ~.\n\\Eeq\nIntegrating the rate equation using a Backward Euler scheme gives\n\\Beq\n  \\Bs_{n+1} - \\Bs_n = 2~\\mu~\\Delta~t~\\Beta^e_{n+1}\n    = 2~\\mu~\\Delta~t~(\\Beta_{n+1} - \\Beta^p_{n+1})\n\\Eeq\nNow, from the flow rule, we have\n\\Beq\n  \\Beta^p = \\dot{\\gamma}~\\left(\\Br -\\Third~\\Tr(\\Br)~\\Bone\\right) ~.\n\\Eeq\nDefine the deviatoric part of $\\Br$ as\n\\Beq\n  \\Dev(\\Br) := \\Br -\\Third~\\Tr(\\Br)~\\Bone ~.\n\\Eeq\nTherefore, \n\\Beq\n  \\Bs_{n+1} - \\Bs_n \n    = 2~\\mu~\\Delta~t~\\Beta_{n+1} - 2~\\mu~\\Delta\\gamma_{n+1}~\\Dev(\\Br_{n+1})\n\\Eeq\nwhere $\\Delta\\gamma := \\dot{\\gamma}~\\Delta t$.  Define the trial stress\n\\Beq\n  \\Bs^{\\Trial} := \\Bs_n + 2~\\mu~\\Delta~t~\\Beta_{n+1} ~.\n\\Eeq\nThen\n\\Beq\n  \\Bs_{n+1} = \\Bs^{\\Trial} - 2~\\mu~\\Delta\\gamma_{n+1}~\\Dev(\\Br_{n+1}) ~.\n\\Eeq\nAlso recall that the evolution of the back stress is given by\n\\Beq\n  \\dot{\\Bbeta} = \\dot{\\gamma}~\\Dev(\\Bh^{\\beta})\n\\Eeq\nThe evolution equation for the back stress can be integrated to get\n\\Beq\n  \\Bbeta_{n+1} - \\Bbeta_n = \\Delta\\gamma_{n+1}~\\Dev(\\Bh)^{\\beta}_{n+1} ~.\n\\Eeq\nNow,\n\\Beq\n  \\Bxi_{n+1} = \\Bs_{n+1} - \\Bbeta_{n+1} ~.\n\\Eeq\nPlugging in the expressions for $\\Bs_{n+1}$ and $\\Bbeta_{n+1}$, we get\n\\Beq\n  \\Bxi_{n+1} = \\Bs^{\\Trial} - 2~\\mu~\\Delta\\gamma_{n+1}~\\Dev(\\Br_{n+1}) \n     - \\Bbeta_n - \\Delta\\gamma_{n+1}~\\Dev(\\Bh)^{\\beta}_{n+1} ~.\n\\Eeq\nDefine\n\\Beq\n  \\Bxi^{\\Trial} := \\Bs^{\\Trial} - \\Bbeta_n ~.\n\\Eeq\nThen\n\\Beq\n  \\Bxi_{n+1} = \\Bxi^{\\Trial} - \\Delta\\gamma_{n+1}(2~\\mu~\\Dev(\\Br_{n+1}) + \\Dev(\\Bh)^{\\beta}_{n+1}) ~.\n\\Eeq\nSimilarly, the evolution of the plastic strain is given by\n\\Beq\n  \\Ve^p_{n+1} = \\Ve^p_{n} + \\Delta\\gamma_{n+1}~h^{\\alpha}_{n+1}\n\\Eeq\nand the porosity evolves as\n\\Beq\n  \\phi_{n+1} = \\phi_n + \\Delta\\gamma_{n+1}~h^{\\phi}_{n+1} ~.\n\\Eeq\nThe yield condition is discretized as\n\\Beq\n  f(\\Bs_{n+1}, \\Bbeta_{n+1}, \\Ve^p_{n+1}, \\phi_{n+1}, \\dot{\\Ve}_{n+1}, T_{n+1}, \\dots) = \n  f(\\Bxi_{n+1}, \\Ve^p_{n+1}, \\phi_{n+1}, \\dot{\\Ve}_{n+1}, T_{n+1}, \\dots) = 0 ~.\n\\Eeq\n{\\bf Important:} We assume that the derivatives with respect to $\\dot{\\Ve}$ and $T$ are\nsmall enough to be neglected.\n\n\\subsection{Newton iterations}\nWe now have the following equations that have to be solved for $\\Delta\\gamma_{n+1}$:\n\\Beq\n  \\Bal\n  \\Bxi_{n+1} & = \\Bxi^{\\Trial} - \\Delta\\gamma_{n+1}(2~\\mu~\\Dev(\\Br_{n+1}) + \\Dev(\\Bh)^{\\beta}_{n+1})\\\\\n  \\Ve^p_{n+1} & = \\Ve^p_{n} + \\Delta\\gamma_{n+1}~h^{\\alpha}_{n+1} \\\\\n  \\phi_{n+1} & = \\phi_n + \\Delta\\gamma_{n+1}~h^{\\phi}_{n+1}  \\\\\n  f(\\Bxi_{n+1}, \\Ve^p_{n+1}, \\phi_{n+1}, \\dot{\\Ve}_{n+1}, T_{n+1}, \\dots) & = 0 ~.\n  \\Eal\n\\Eeq\n\nRecall that if $g(\\Delta\\gamma) = 0$ is a nonlinear equation that we have to solve\nfor $\\Delta\\gamma$, an iterative Newton method can be expressed as\n\\Beq\n  \\Delta\\gamma^{(k+1)} = \\Delta\\gamma^{(k)} - \\left[\\Deriv{g}{\\Delta\\gamma}\\right]^{-1}_{(k)}~g^{(k)} ~.\n\\Eeq\nDefine \n\\Beq\n  \\delta\\gamma := \\Delta\\gamma^{(k+1)} - \\Delta\\gamma^{(k)} ~.\n\\Eeq\nThen, the iterative scheme can be written as\n\\Beq\n  g^{(k)} + \\left[\\Deriv{g}{\\Delta\\gamma}\\right]^{(k)}~\\delta\\gamma  = 0~.\n\\Eeq\nIn our case we have\n\\Beq\n  \\Bal\n  \\Ba(\\Delta\\gamma) = 0 &= -\\Bxi + \\Bxi^{\\Trial} - \\Delta\\gamma(2~\\mu~\\Dev(\\Br) + \\Dev(\\Bh)^{\\beta})\\\\\n  
b(\\Delta\\gamma) = 0 &= -\\Ve^p + \\Ve^p_{n} + \\Delta\\gamma~h^{\\alpha} \\\\\n  c(\\Delta\\gamma) = 0 & = -\\phi + \\phi_n + \\Delta\\gamma~h^{\\phi}  \\\\\n  f(\\Delta\\gamma) = 0 &= f(\\Bxi, \\Ve^p, \\phi, \\dot{\\Ve}, T, \\dots) \n  \\Eal\n\\Eeq\nTherefore,\n\\Beq\n  \\Bal\n  \\Deriv{\\Ba}{\\Delta\\gamma} & = \n   -\\Partial{\\Bxi}{\\Delta\\gamma}  - (2~\\mu~\\Dev(\\Br) + \\Dev(\\Bh)^{\\beta})\n   - \\Delta\\gamma~\\left(2~\\mu~\\Partial{\\Dev(\\Br)}{\\Delta\\gamma} + \n        \\Partial{\\Dev(\\Bh)^{\\beta}}{\\Delta\\gamma}\\right) \\\\\n   & =\n   -\\Partial{\\Bxi}{\\Delta\\gamma}  - (2~\\mu~\\Dev(\\Br) + \\Dev(\\Bh)^{\\beta})\n   - \\Delta\\gamma~\\left(\n      2~\\mu~\\Partial{\\Dev(\\Br)}{\\Bxi}:\\Partial{\\Bxi}{\\Delta\\gamma} + \n      2~\\mu~\\Partial{\\Dev(\\Br)}{\\Ve^p}~\\Partial{\\Ve^p}{\\Delta\\gamma} + \n      2~\\mu~\\Partial{\\Dev(\\Br)}{\\phi}~\\Partial{\\phi}{\\Delta\\gamma} + \n      \\right. \\\\\n   & \\qquad \\qquad\n      \\left.\n      \\Partial{\\Dev(\\Bh)^{\\beta}}{\\Bxi}:\\Partial{\\Bxi}{\\Delta\\gamma} + \n      \\Partial{\\Dev(\\Bh)^{\\beta}}{\\Ve^p}~\\Partial{\\Ve^p}{\\Delta\\gamma} +\n      \\Partial{\\Dev(\\Bh)^{\\beta}}{\\phi}~\\Partial{\\phi}{\\Delta\\gamma} \n      \\right) \\\\\n  \\Deriv{b}{\\Delta\\gamma} & = -\\Partial{\\Ve^p}{\\Delta\\gamma} +  h^{\\alpha} \n    + \\Delta\\gamma~\\left(\\Partial{h^{\\alpha}}{\\Bxi}:\\Partial{\\Bxi}{\\Delta\\gamma} + \n                        \\Partial{h^{\\alpha}}{\\Ve^p}~\\Partial{\\Ve^p}{\\Delta\\gamma} + \n                        \\Partial{h^{\\alpha}}{\\phi}~\\Partial{\\phi}{\\Delta\\gamma}\\right) \\\\\n  \\Deriv{c}{\\Delta\\gamma} & = -\\Partial{\\phi}{\\Delta\\gamma} +  h^{\\phi} \n    + \\Delta\\gamma~\\left(\\Partial{h^{\\phi}}{\\Bxi}:\\Partial{\\Bxi}{\\Delta\\gamma} + \n                        \\Partial{h^{\\phi}}{\\Ve^p}~\\Partial{\\Ve^p}{\\Delta\\gamma} + \n                        \\Partial{h^{\\phi}}{\\phi}~\\Partial{\\phi}{\\Delta\\gamma}\\right) \\\\\n  \\Deriv{f}{\\Delta\\gamma} & \n     = \\Partial{f}{\\Bxi}:\\Partial{\\Bxi}{\\Delta\\gamma} + \n          \\Partial{f}{\\Ve^p}~\\Partial{\\Ve^p}{\\Delta\\gamma} +\n          \\Partial{f}{\\phi}~\\Partial{\\phi}{\\Delta\\gamma} ~.\n  \\Eal\n\\Eeq\nNow, define\n\\Beq\n   \\Delta\\Bxi := \\Partial{\\Bxi}{\\Delta\\gamma}~\\delta\\gamma ~;~~\n   \\Delta\\Ve^p := \\Partial{\\Ve^p}{\\Delta\\gamma}~\\delta\\gamma ~;~~\n   \\Delta\\phi := \\Partial{\\phi}{\\Delta\\gamma}~\\delta\\gamma ~.\n\\Eeq\nThen\n\\Beq\n  \\Bal\n  \\Ba^{(k)} & - \\Delta\\Bxi - [2~\\mu~\\Dev(\\Br^{(k)}) + \\Dev(\\Bh)^{\\beta (k)}]~\\delta\\gamma \\\\\n   & \\qquad \\qquad\n    - 2~\\mu~\\Delta\\gamma~\\left(\n      \\Partial{\\Dev(\\Br^{(k)})}{\\Bxi}:\\Delta\\Bxi + \n      \\Partial{\\Dev(\\Br^{(k)})}{\\Ve^p}~\\Delta\\Ve^p + \n      \\Partial{\\Dev(\\Br^{(k)})}{\\phi}~\\Delta\\phi \n      \\right) \\\\\n   & \\qquad \\qquad\n    - \\Delta\\gamma~\\left(\n      \\Partial{\\Dev(\\Bh)^{\\beta (k)}}{\\Bxi}:\\Delta\\Bxi + \n      \\Partial{\\Dev(\\Bh)^{\\beta (k)}}{\\Ve^p}~\\Delta\\Ve^p +\n      \\Partial{\\Dev(\\Bh)^{\\beta (k)}}{\\phi}~\\Delta\\phi\n      \\right)  = 0\\\\\n  b^{(k)} & - \\Delta\\Ve^p + h^{\\alpha}~\\delta\\gamma \n    + \\Delta\\gamma~\\left(\\Partial{h^{\\alpha (k)}}{\\Bxi}:\\Delta\\Bxi + \n                        \\Partial{h^{\\alpha (k)}}{\\Ve^p}~\\Delta\\Ve^p +\n                        \\Partial{h^{\\alpha (k)}}{\\phi}~\\Delta\\phi\\right)\n     = 0 \\\\\n  c^{(k)} & - \\Delta\\phi + h^{\\phi}~\\delta\\gamma \n    + \\Delta\\gamma~\\left(\\Partial{h^{\\phi 
(k)}}{\\Bxi}:\\Delta\\Bxi + \n                        \\Partial{h^{\\phi (k)}}{\\Ve^p}~\\Delta\\Ve^p +\n                        \\Partial{h^{\\phi (k)}}{\\phi}~\\Delta\\phi\\right)\n     = 0 \\\\\n  f^{(k)} & + \\Partial{f^{(k)}}{\\Bxi}:\\Delta\\Bxi + \n          \\Partial{f^{(k)}}{\\Ve^p}~\\Delta\\Ve^p +\n          \\Partial{f^{(k)}}{\\phi}~\\Delta\\phi \n      = 0\n  \\Eal\n\\Eeq\nBecause the derivatives of $\\Br^{(k)}, h^{\\alpha (k)}, \\Bh^{\\beta (k)}, h^{\\phi (k)}$ with respect \nto $\\Bxi, \\Ve^p, \\phi$ may be difficult to calculate, we instead use a semi-implicit scheme in our \nimplementation where the quantities $\\Br$, $h^{\\alpha}$, $\\Bh^{\\beta}$, and $h^{\\phi}$ are evaluated \nat $t_n$.  Then the problematic derivatives disappear and we are left with\n\\Beq\n  \\Bal\n  \\Ba^{(k)} - \\Delta\\Bxi - [2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh)^{\\beta}_n]~\\delta\\gamma & = 0\\\\\n  b^{(k)} - \\Delta\\Ve^p + h^{\\alpha}_n~\\delta\\gamma & = 0 \\\\\n  c^{(k)} - \\Delta\\phi + h^{\\phi}_n~\\delta\\gamma & = 0 \\\\\n  f^{(k)} + \\Partial{f^{(k)}}{\\Bxi}:\\Delta\\Bxi + \n          \\Partial{f^{(k)}}{\\Ve^p}~\\Delta\\Ve^p +\n          \\Partial{f^{(k)}}{\\phi}~\\Delta\\phi & = 0\n  \\Eal\n\\Eeq\nWe now force $\\Ba^{(k)}$, $b^{(k)}$, and $c^{(k)}$ to be zero at all times, leading\nto the expressions\n\\Beq\n  \\Bal\n  \\Delta\\Bxi & = - [2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh)^{\\beta}_n]~\\delta\\gamma \\\\\n  \\Delta\\Ve^p & =  h^{\\alpha}_n~\\delta\\gamma \\\\\n  \\Delta\\phi & =  h^{\\phi}_n~\\delta\\gamma \\\\\n  f^{(k)} + \\Partial{f^{(k)}}{\\Bxi}:\\Delta\\Bxi + \n          \\Partial{f^{(k)}}{\\Ve^p}~\\Delta\\Ve^p +\n          \\Partial{f^{(k)}}{\\phi}~\\Delta\\phi & = 0\n  \\Eal\n\\Eeq\nPlugging the expressions for $\\Delta\\Bxi$, $\\Delta\\Ve^p$, $\\Delta\\phi$ from the \nfirst three equations into the fourth gives us\n\\Beq\n  f^{(k)} -\\Partial{f^{(k)}}{\\Bxi}:[2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh)^{\\beta}_n]~\\delta\\gamma +\n          h^{\\alpha}_n~\\Partial{f^{(k)}}{\\Ve^p}~\\delta\\gamma  + \n          h^{\\phi}_n~\\Partial{f^{(k)}}{\\phi}~\\delta\\gamma  = 0 \n\\Eeq\nor\n\\Beq\n  \\Delta\\gamma^{(k+1)} - \\Delta\\gamma^{(k)} = \\delta\\gamma = \n   \\cfrac{f^{(k)}}\n   {\\Partial{f^{(k)}}{\\Bxi}:[2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh^{\\beta}_n)] - \n   h^{\\alpha}_n~\\Partial{f^{(k)}}{\\Ve^p} -\n   h^{\\phi}_n~\\Partial{f^{(k)}}{\\phi} } ~.\n\\Eeq\n\n\\subsection{Algorithm}\nThe following stress update algorithm is used for each (plastic) time step:\n\\begin{enumerate}\n  \\item Initialize:\n  \\Beq\n    k = 0 ~;~~ (\\Ve^p)^{(k)} = \\Ve^p_n ~;~~ \\phi^{(k)} = \\phi_n ~;~~\n    \\Bbeta^{(k)} = \\Bbeta_n ~;~~ \\Delta\\gamma^{(k)} = 0 ~;~~\n    \\Bxi^{(k)} = \\Bxi^{\\Trial}~.\n  \\Eeq\n  \\item Check yield condition:\n  \\Beq\n    f^{(k)} := f(\\Bxi^{(k)}, (\\Ve^p)^{(k)}, \\phi^{(k)}, \\dot{\\Ve}_n, T_n, \\dots)\n  \\Eeq\n  If $f^{(k)} < \\text{tolerance}$ then \n  go to step 5 else go to step 3.\n  \\item Compute updated $\\delta\\gamma^{(k)}$ using\n  \\Beq\n    \\delta\\gamma^{(k)} = \n     \\cfrac{f^{(k)}}\n     {\\Partial{f^{(k)}}{\\Bxi}:[2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh^{\\beta}_n)] - \n     h^{\\alpha}_n~\\Partial{f^{(k)}}{\\Ve^p} - \n     h^{\\phi}_n~\\Partial{f^{(k)}}{\\phi}} ~.\n  \\Eeq\n  Compute\n  \\Beq\n  \\Bal\n    \\Delta\\Bxi^{(k)} & = -[2~\\mu~\\Dev(\\Br_n) + \\Dev(\\Bh^{\\beta}_n)]~\\delta\\gamma^{(k)} \\\\\n    (\\Delta\\Ve^p)^{(k)} & =  h^{\\alpha}_n~\\delta\\gamma^{(k)}   \\\\\n    \\Delta\\phi^{(k)} & =  h^{\\phi}_n~\\delta\\gamma^{(k)}   \n  \\Eal\n  
\\Eeq\n  \\item Update variables:\n  \\Beq\n    \\Bal\n    (\\Ve^p)^{(k+1)} & = (\\Ve^p)^{(k)} + (\\Delta\\Ve^p)^{(k)} \\\\\n    \\phi^{(k+1)} & = \\phi^{(k)} + \\Delta\\phi^{(k)} \\\\\n    \\Bxi^{(k+1)} & = \\Bxi^{(k)} + \\Delta\\Bxi^{(k)} \\\\\n    \\Delta\\gamma^{(k+1)} & = \\Delta\\gamma^{(k)} + \\delta\\gamma^{(k)}\n    \\Eal\n  \\Eeq\n  Set $k \\leftarrow k+1$ and go to step 2.\n  \\item Update and calculate back stress and the deviatoric part of Cauchy stress:\n  \\Beq\n    \\Ve^p_{n+1} = (\\Ve^p)^{(k)} ~;~~\n    \\phi_{n+1} = \\phi^{(k)} ~;~~\n    \\Bxi_{n+1} = \\Bxi^{(k)} ~;~~\n    \\Delta\\gamma_{n+1} = \\Delta\\gamma^{(k)}\n  \\Eeq\n  and\n  \\Beq\n    \\Bal\n    \\widehat{\\Bbeta}_{n+1} & = \\widehat{\\Bbeta}_n + \\Delta\\gamma_{n+1}~\\Bh^{\\beta}(\\Bxi_{n+1}, \\Ve^p_{n+1}, \\phi_{n+1}) \\\\\n    \\Bbeta_{n+1} & = \\widehat{\\Bbeta}_{n+1} - \\Third~\\Tr(\\widehat{\\Bbeta}_{n+1})~\\Bone \\\\\n    \\Bs_{n+1} & = \\Bxi_{n+1} + \\Bbeta_{n+1}\n    \\Eal\n  \\Eeq\n  \\item Update the temperature and the Cauchy stress\n  \\Beq\n    \\Bal\n    T_{n+1} & = T_n + \n     \\cfrac{\\chi_{n+1}~\\Delta t}{\\rho_{n+1}~C_p}~\\sigma^{n+1}_y~\\dot{\\Ve^p}_{n+1} \n     = T_n + \n     \\cfrac{\\chi_{n+1}~\\Delta\\gamma_{n+1}}{\\rho_{n+1}~C_p}~\\sigma^{n+1}_y~h^{\\alpha}_{n+1} \\\\\n    p_{n+1} & = p(J_{n+1}) \\\\ \n    \\kappa_{n+1} & = J_{n+1}~\\left[\\Deriv{p(J)}{J}\\right]_{n+1} \\\\\n    \\Bsig_{n+1} & = \\left[p_{n+1} - 3~\\kappa_{n+1}~\\alpha~(T_{n+1}-T_0)\\right]~\\Bone + \\Bs_{n+1}\n    \\Eal\n  \\Eeq\n\\end{enumerate}\n\n\\section{Examples}\nLet us now look at a few examples.\n\\subsection{Example 1}\nConsider the case of $J_2$ plasticity with the yield condition\n\\Beq\n  f := \\sqrt{\\frac{3}{2}} \\Norm{\\Bs-\\Bbeta}{} - \\sigma_y(\\Ve^p, \\dot{\\Ve}, T, \\dots) = \n       \\sqrt{\\frac{3}{2}} \\Norm{\\Bxi}{} - \\sigma_y(\\Ve^p, \\dot{\\Ve}, T, \\dots) \\le 0 \n\\Eeq\nwhere $\\Norm{\\Bxi} = \\sqrt{\\Bxi:\\Bxi}$. Assume the associated flow rule\n\\Beq\n  \\Bd^p = \\dot{\\gamma}~\\Br = \\dot{\\gamma}~\\Partial{f}{\\Bsig} = \\dot{\\gamma}~\\Partial{f}{\\Bxi} ~.\n\\Eeq\nThen\n\\Beq\n  \\Br = \\Partial{f}{\\Bxi} = \\sqrt{\\frac{3}{2}}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} \n\\Eeq\nand\n\\Beq\n  \\Bd^p = \\sqrt{\\frac{3}{2}}~\\dot{\\gamma}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} ~;~~\n  \\Norm{\\Bd^p}{} = \\sqrt{\\frac{3}{2}}~\\dot\\gamma ~.\n\\Eeq\nThe evolution of the equivalent plastic strain is given by\n\\Beq\n  \\dot{\\Ve^p} = \\dot{\\gamma}~h^{\\alpha} = \\sqrt{\\cfrac{2}{3}}~\\Norm{\\Bd^p}{} = \\dot{\\gamma}~.\n\\Eeq\nThis definition is consistent with the definition of equivalent plastic strain\n\\Beq\n  \\Ve^p = \\int_0^t \\dot{\\Ve}^p~d\\tau = \n   \\int_0^t \\sqrt{\\cfrac{2}{3}}~\\Norm{\\Bd^p}{}~d\\tau ~.\n\\Eeq\nThe evolution of porosity is given by (there is no evolution of porosity)\n\\Beq\n  \\dot{\\phi} = \\dot{\\gamma}~h^{\\phi} = 0\n\\Eeq\nThe evolution of the back stress is given by the Prager kinematic hardening rule\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\dot{\\gamma}~\\Bh^{\\beta} = \\frac{2}{3}~H'~\\Bd^p \n\\Eeq\nwhere $\\widehat{\\Bbeta}$ is the back stress and\n$H'$ is a constant hardening modulus.  
Also, the trace of $\\Bd^p$ is \n\\Beq\n  \\Tr(\\Bd^p) = \\sqrt{\\frac{3}{2}}~\\dot{\\gamma}~\\cfrac{\\Tr(\\Bxi)}{\\Norm{\\Bxi}{}}~.\n\\Eeq\nSince $\\Bxi$ is deviatoric, $\\Tr(\\Bxi) = 0$ and hence $\\Bd^p = \\Beta^p$.\nHence, $\\widehat{\\Bbeta} = \\Bbeta$ (where $\\Bbeta$ is the deviatoric part of $\\widehat{\\Bbeta}$), and\n\\Beq\n  \\dot{\\Bbeta} = \\sqrt{\\frac{2}{3}}~H'~\\dot{\\gamma}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} ~.\n\\Eeq\n\nThese relations imply that\n\\Beq\n  \\boxed{\n  \\Bal\n    \\Br & = \\sqrt{\\frac{3}{2}}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} \\\\\n     h^{\\alpha} & = 1 \\\\\n     h^{\\phi} & = 0 \\\\\n    \\Bh^{\\beta} & = \\sqrt{\\frac{2}{3}}~H'~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} ~.\n  \\Eal\n  }\n\\Eeq\nWe also need some derivatives of the yield function.  These are\n\\Beq\n  \\Bal\n  \\Partial{f}{\\Bxi} & = \\Br \\\\\n  \\Partial{f}{\\Ve^p} & = -\\Partial{\\sigma_y}{\\Ve^p} \\\\\n  \\Partial{f}{\\phi} & = 0 ~.\n  \\Eal\n\\Eeq\n\nLet us change the kinematic hardening model and use the Armstrong-Frederick\nmodel instead, i.e.,\n\\Beq\n  \\dot{\\Bbeta} = \\dot{\\gamma}~\\Bh^{\\beta} = \\frac{2}{3}~H_1~\\Bd^p - H_2~\\Bbeta~\\Norm{\\Bd^p}{} ~.\n\\Eeq\nSince\n\\Beq\n  \\Bd^p = \\sqrt{\\frac{3}{2}}~\\dot{\\gamma}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}}\n\\Eeq\nwe have\n\\Beq\n  \\Norm{\\Bd^p}{} = \n   \\sqrt{\\frac{3}{2}}~\\dot{\\gamma}~\\cfrac{\\Norm{\\Bxi}{}}{\\Norm{\\Bxi}{}} = \n   \\sqrt{\\frac{3}{2}}~\\dot{\\gamma} ~.\n\\Eeq\nTherefore,\n\\Beq\n  \\dot{\\Bbeta} = \\sqrt{\\frac{2}{3}}~H_1~\\dot{\\gamma}~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} \n    - \\sqrt{\\frac{3}{2}}~H_2~\\dot{\\gamma}~\\Bbeta ~.\n\\Eeq\nHence we have\n\\Beq\n  \\boxed{\n  \\Bh^{\\beta} = \\sqrt{\\frac{2}{3}}~H_1~\\cfrac{\\Bxi}{\\Norm{\\Bxi}{}} \n    - \\sqrt{\\frac{3}{2}}~H_2~\\Bbeta ~.\n   }\n\\Eeq\n\n\\subsection{Example 2}\nLet us now consider a Gurson-type yield condition with kinematic hardening.  In this\ncase the yield condition can be written as\n\\Beq\n  f := \\cfrac{3~\\Bxi:\\Bxi}{2~\\sigma_y^2} + \n     2~q_1~\\phi^{*}~\\cosh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right)\n     - [1 + q_3~(\\phi^*)^2]\n\\Eeq\nwhere $\\phi$ is the porosity and\n\\Beq\n  \\phi^* = \\begin{cases}\n             \\phi & \\text{for}~ \\phi \\le \\phi_c \\\\\n             \\phi_c + \\cfrac{\\phi_u^* - \\phi_c}{\\phi_f - \\phi_c}~(\\phi - \\phi_c) & \n              \\text{for}~ \\phi > \\phi_c\n           \\end{cases}\n\\Eeq\nFinal fracture occurs for $\\phi = \\phi_f$ or when $\\phi_u^* = 1/q_1$.  \n\nLet us use an associated flow rule\n\\Beq\n  \\Bd^p = \\dot{\\gamma}~\\Br = \\dot{\\gamma}~\\Partial{f}{\\Bsig} ~.\n\\Eeq\nThen\n\\Beq\n  \\Br = \\Partial{f}{\\Bsig} = \\cfrac{3~\\Bxi}{\\sigma_y^2} + \\cfrac{q_1~q_2~\\phi^{*}}{\\sigma_y}~\n   \\sinh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right)~\\Bone ~.\n\\Eeq\nIn this case\n\\Beq\n  \\Tr(\\Br) = \\cfrac{3~q_1~q_2~\\phi^{*}}{\\sigma_y}~\\sinh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right)\n  \\ne 0 \n\\Eeq\nTherefore,\n\\Beq\n  \\Bd^p \\ne \\Beta^p ~.\n\\Eeq\n\nFor the evolution equation for the plastic strain we use\n\\Beq\n  (\\Bsig-\\widehat{\\Bbeta}):\\Bd^p = (1 - \\phi)~\\sigma_y~\\dot{\\Ve}^p\n\\Eeq\nwhere $\\dot{\\Ve}^p$ is the effective plastic strain rate in the matrix material.  
Hence,\n\\Beq\n  \\dot{\\Ve}^p = \\dot{\\gamma}~h^{\\alpha}\n    = \\dot{\\gamma}~\\cfrac{(\\Bsig - \\widehat{\\Bbeta}):\\Br}{(1 - \\phi)~\\sigma_y} ~.\n\\Eeq\n\nThe evolution equation for the porosity is given by\n\\Beq\n  \\dot{\\phi} = (1 - \\phi)~\\Tr(\\Bd^p) + A~\\dot{\\Ve^p}\n\\Eeq\nwhere\n\\Beq\nA = \\cfrac{f_n}{s_n \\sqrt{2\\pi}} \\exp [-1/2 (\\Ve^p - \\Ve_n)^2/s_n^2]\n\\Eeq\nand $ f_n $ is the volume fraction of void nucleating particles, \n$ \\Ve_n $ is the mean of the normal distribution of nucleation strains, and \n$ s_n $ is the standard deviation of the distribution.\n\nTherefore,\n\\Beq\n  \\dot{\\phi} = \\dot{\\gamma}~h^{\\phi} =\n    \\dot{\\gamma}~\\left[(1 - \\phi)~\\Tr(\\Br) + A~\n    \\cfrac{(\\Bsig - \\widehat{\\Bbeta}):\\Br}{(1 - \\phi)~\\sigma_y}\\right] ~.\n\\Eeq\n\nIf the evolution of the back stress is given by the Prager kinematic hardening rule\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\dot{\\gamma}~\\Bh^{\\beta} = \\frac{2}{3}~H'~\\Bd^p \n\\Eeq\nwhere $\\widehat{\\Bbeta}$ is the back stress, then\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\frac{2}{3}~H'~\\dot{\\gamma}~\\Br ~.\n\\Eeq\nAlternatively, if we use the Armstrong-Frederick model, then\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\dot{\\gamma}~\\Bh^{\\beta} = \n   \\frac{2}{3}~H_1~\\Bd^p - H_2~\\widehat{\\Bbeta}~\\Norm{\\Bd^p}{} ~.\n\\Eeq\nPlugging in the expression for $\\Bd^p$, we have\n\\Beq\n  \\dot{\\widehat{\\Bbeta}} = \\dot{\\gamma}~\n  \\left[\\frac{2}{3}~H_1~\\Br - H_2~\\widehat{\\Bbeta}~\\Norm{\\Br}{}\\right] ~.\n\\Eeq\nTherefore, for this model,\n\\Beq\n  \\boxed{\n  \\Bal\n  \\Br & = \\cfrac{3~\\Bxi}{\\sigma_y^2} + \\cfrac{q_1~q_2~\\phi^{*}}{\\sigma_y}~\n   \\sinh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right)~\\Bone  \\\\\n  h^{\\alpha} &  \n    = \\cfrac{(\\Bsig - \\widehat{\\Bbeta}):\\Br}{(1 - \\phi)~\\sigma_y} \\\\\n  h^{\\phi} & = \n    (1 - \\phi)~\\Tr(\\Br) + A~\n    \\cfrac{(\\Bsig - \\widehat{\\Bbeta}):\\Br}{(1 - \\phi)~\\sigma_y}  \\\\\n  \\Bh^{\\beta} & = \n   \\frac{2}{3}~H_1~\\Br - H_2~\\widehat{\\Bbeta}~\\Norm{\\Br}{}\n  \\Eal\n  }\n\\Eeq\nThe other derivatives of the yield function that we need are\n\\Beq\n  \\Bal\n  \\Partial{f}{\\Bxi} & = \\cfrac{3~\\Bxi}{\\sigma_y^2} \\\\\n  \\Partial{f}{\\Ve^p} & = \\Partial{f}{\\sigma_y}~\\Partial{\\sigma_y}{\\Ve^p} \n   = -\\left[\\cfrac{3~\\Bxi:\\Bxi}{\\sigma_y^3} +\n     \\cfrac{q_1~q_2~\\phi^*~\\Tr(\\Bsig)}{\\sigma_y^2}~\n     \\sinh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right)\\right]~\n     \\Partial{\\sigma_y}{\\Ve^p}\\\\\n  \\Partial{f}{\\phi} & = 2~q_1~\\Deriv{\\phi^*}{\\phi}~\n    \\cosh\\left(\\cfrac{q_2~\\Tr(\\Bsig)}{2~\\sigma_y}\\right) \n    - 2~q_3~\\phi^*~\\Deriv{\\phi^*}{\\phi} ~.\n  \\Eal\n\\Eeq\n\n\n\n\\end{document}\n\n\n\n\n\n\n\n", "meta": {"hexsha": "679048478cfee2c55c3dc25f86458b36e6df8427", "size": 27851, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/Components/MPM/SmallStrainPlastic.tex", "max_stars_repo_name": "abagusetty/Uintah", "max_stars_repo_head_hexsha": "fa1bf819664fa6f09c5a7cd076870a40816d35c9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/Components/MPM/SmallStrainPlastic.tex", "max_issues_repo_name": "abagusetty/Uintah", "max_issues_repo_head_hexsha": "fa1bf819664fa6f09c5a7cd076870a40816d35c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "doc/Components/MPM/SmallStrainPlastic.tex", "max_forks_repo_name": "abagusetty/Uintah", "max_forks_repo_head_hexsha": "fa1bf819664fa6f09c5a7cd076870a40816d35c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3439086294, "max_line_length": 122, "alphanum_fraction": 0.5799073642, "num_tokens": 12053, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.815232489352, "lm_q2_score": 0.6757646075489392, "lm_q1q2_score": 0.5509052632280991}}
{"text": "\\section{Design}\nIn this section we will first discuss an high-level design (\\ref{sec:high-level})\nand then propose some optimizations (\\ref{sec:optimizations}).\n\n\\subsection{High-level design idea}\n\\label{sec:high-level}\n\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics{figs/vad_simple_schematic.pdf}\n  \\caption{High-level schematic of our VAD design.}\n  \\label{fig:simple_schematic}\n\\end{figure}\n\n\\cref{fig:simple_schematic} shows the high-level concept of the design, where we\ncan see:\n\\begin{itemize}\n  \\item \\texttt{absnetwork}: a combinatorial network that extracts $|z|$. Both\n    its input and its output have 16 bits since its maximum value is $2^{15}$.\n    In \\cref{sec:opt-abs}, we will evaluate whether to use an approximated\n    but more efficient \\texttt{absnetwork} that does not sum 1 to the complement.\n  \\item \\texttt{squarepowernetwork}: a combinatorial network that evaluates\n    $|z|^2$ on 32 bits. Note that, 31 bits would be enough since\n    $|z|^2 \\le (2^{15})^2 = 2^{30}$.\n  \\item \\texttt{accumulator}: performs the recursive sum $S[n] = |z[n]|^2 + S[n - 1]$.\n    The maximum sum value is $N \\cdot 2^{30} = 256 \\cdot 2^{30} = 2^{38}$,\n    requiring 39 bits.\n  \\item \\texttt{comparator}: combinatorial network that compares the output\n    of the \\texttt{accumulator} with $E_{th}'$, which must be represented on 39\n    bits as well in order to be compared to the sum.\n\\end{itemize}\n\n\\subsection{Optimizations}\n\\label{sec:optimizations}\n\nIn order to reduce the number of resources used in the implementation, we can\ncheck for possible optimizations.\n\n\\subsubsection{Comparison}\nWe note that $E_{th}' \\simeq 13743895347$ and $\\lceil\\log_2 E_{th}' + 1\\rceil = 34$.\nThis means that 34 bits would be enough to represent $E_{th}'$. This also implies\nthat we don't need the accumulator to sum up to more than $2^{34} - 1$, because\nif the sum exceeds $2^{34}$, the sum is grater than the threshold.\n\nMoreover we can initialize the initial value of the sum to the overflow value\nminus the threshold, so that the overflow will indicate that the threshold has\nbeen exceeded, without using the comparator logic. 
The initial value of the sum\nis $S'[0] = 2^{34} - E'_{th} = 3435973837$ and we compute:\n\\begin{align*}\n  S'[n] &= \\bigg| S'[n-1] + |z[n]|^2 \\bigg|_{2^{34}}\\\\\n  ovf[n] &= \\bigg\\lfloor \\frac{S'[n-1] + |z[n]|^2}{2^{34}} \\bigg\\rfloor\n\\end{align*}\n$ovf[n] = 1$ does not imply that $ovf[n + 1] = 1$: the overflow is a\nsingle-cycle event, even though, since we are summing non-negative integers,\nthe running sum stays above the threshold once it has been exceeded. We must\ntherefore remember that the threshold has been exceeded until the end of the frame.\nFor this purpose we used a Set-Reset Flip Flop, which is reset at the beginning\nof the frame processing and eventually set by $ovf[n]$.\n\n\\subsubsection{Absolute value}\n\\label{sec:opt-abs}\n\nCalling $Z$ the representation of $z[n]$ on 16 bits and $Z_a$ the representation\nof $|z[n]|$ on 16 bits as well, we have that:\n\\begin{equation}\n  Z_a = \\begin{cases}\n    Z & z[n] \\ge 0 \\\\\n    \\bar{Z}+1 & z[n] < 0\n  \\end{cases}\n\\end{equation}\nwhere $\\bar{Z}$ is the complement of $Z$.\n\nIn order to simplify it, we could avoid adding 1 in the lower branch, achieving\ntwo optimizations: a lower number of bits for the output (15 vs 16) and a\nsimpler logic.\n\n\\begin{equation}\n  Z_a \\simeq \\widetilde{Z}_a = \\begin{cases}\n    Z & z[n] \\ge 0 \\\\\n    \\bar{Z} & z[n] < 0\n  \\end{cases}\n\\end{equation}\n\nThis approximation introduces an error on the energy of the sample.\nThis error for negative $z[n]$ can be evaluated as\n\\begin{equation}\n  \\epsilon[n] = |z[n]|^2 - \\tilde{z}^2[n] = Z_a^2 - \\widetilde{Z}_a^2 = (\\bar{Z}[n] + 1)^2 - \\bar{Z}^2[n] = 2\\bar{Z}[n] + 1\n\\end{equation}\nThe worst error we can make on one sample corresponds to $\\bar{Z}[n] = 2^{15}-1$\nso $\\epsilon_{max} = 2^{16}-1$, which translates to a maximum error on the total\nenergy of $256 \\cdot \\epsilon_{max} \\simeq 2^{24}$, which is $\\sim 0.12\\%$ of the\nthreshold.\n\nIn conclusion, the approximation can be safely introduced in order to save\nresources on the FPGA.\n\n\\subsection{Clock and sampling}\nThe sampling rate is 16\\si{\\kilo\\hertz}, but we can process the data at a faster\nrate. If we used a 16\\si{\\kilo\\hertz} clock, then whenever one frame was\nimmediately followed by another, a clock cycle would be required for\n\\texttt{FRAME\\_START}, so we would lose a clock cycle at every frame. To avoid\nthis, we can use a faster clock and divide it, so that one short clock cycle is\nspent in resetting everything at the \\texttt{FRAME\\_START}, but more clock\ncycles are available to process the current input sample before it changes.\n\nIn order to perform the clock frequency division a properly configured counter\nhas been used. 
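\nBefore detailing how the counter gates the accumulator, the whole frame-level decision, including the modulo-$2^{34}$ arithmetic and the sticky overflow bit, can be modelled behaviourally as follows (a Python sketch, not HDL; the frame length and threshold are the values assumed above).\n\\begin{verbatim}\n# Behavioural model of one frame (Python, not HDL).\nE_TH = 13743895347\nS0 = (1 << 34) - E_TH          # accumulator preload S'[0]\n\ndef frame_is_voice(samples):   # 256 ints in [-2**15, 2**15)\n    s, sticky = S0, 0          # FRAME_START: preload sum, reset SR-FF\n    for z in samples:\n        za = z if z >= 0 else (~z & 0xFFFF)  # approximated |z|, no +1\n        s += za * za\n        if s >> 34:            # ovf[n]: carry out of bit 33\n            sticky = 1         # the SR flip-flop latches the event\n        s &= (1 << 34) - 1     # keep the sum modulo 2^34\n    return sticky\n\\end{verbatim}\n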
The overflow of the counter is used to enable the accumulator,\nwhich must accumulate at the sampling rate.\nThe accumulation should occur at instants halfway between two consecutive\nsampling instants in order to improve reliability.\n", "meta": {"hexsha": "1a79053329984ba3413c84a4ae2b929cafa2b080", "size": 4899, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "report/12_design.tex", "max_stars_repo_name": "moriglia/VAD", "max_stars_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-03-26T14:19:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-11T23:14:53.000Z", "max_issues_repo_path": "report/12_design.tex", "max_issues_repo_name": "moriglia/VAD", "max_issues_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "report/12_design.tex", "max_forks_repo_name": "moriglia/VAD", "max_forks_repo_head_hexsha": "2e65173e329101df7b31478106066532a2e6929b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.5363636364, "max_line_length": 123, "alphanum_fraction": 0.7170851194, "num_tokens": 1489, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185944046238981, "lm_q2_score": 0.7662936430859597, "lm_q1q2_score": 0.5506543242204331}}
{"text": "\\documentclass[twocolumn,draft]{article}\n\n\\usepackage{savetrees}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{microtype}\n\n\\usepackage{amssymb}\n\\usepackage{amsfonts}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\n\\usepackage[english]{babel}\n\n\\RequirePackage[l2tabu, orthodox]{nag}\n\\usepackage[all,warning]{onlyamsmath}\n\n\\usepackage{hyperref}\n\n\\usepackage{cuted}\n\n\\usepackage[style=numeric-comp]{biblatex}\n\\bibliography{notes}\n\n\\title{Genome-wide scan for linear mixed models}\n\\author{Danilo Horta and Francesco P. Casale}\n\\date{\\today}\n\n\\include{definitions}\n\n\\begin{document}\n\t\\maketitle\n\n\\section{Least squares}\n\nWe apply the method of least squares to estimate the fixed-effect sizes.\n\n\\subsection{Single block}\n\nLet\n\\begin{equation}\\label{eq:lmm}\n  \\bfy \\sim \\normal{\\rmF\\bfalpha}{\\rmK}\n\\end{equation}\nbe the marginal likelihood of a LMM with a single covariate block $\\rmF$.\nThe maximum likelihood estimation of the fixed effects is\n\\begin{equation*}\n\t\\hat\\bfalpha = \\pinv{(\\trans{\\rmF}\\pinv{\\rmK}\\rmF)}\n\t               \\trans{\\rmF}\\pinv{\\rmK}\\bfy.\n\\end{equation*}\nIn practice, the method of least squares can be used to solve the above\nequation without explicitly finding pseudoinverses.\n\n\\subsection{Double block}\n\nLet\n\\begin{equation*}\n  \\bfy \\sim \\normal{\\rmF\\bfalpha + \\rmG\\bfbeta}{\\rmK}\n\\end{equation*}\nbe the marginal likelihood of a LMM with two covariate blocks $\\rmF$ and\n$\\rmG$.\nWe want to solve\n\\begin{equation*}\n\t\\begin{bmatrix}\n\t\t\\hat\\bfalpha \\\\\n\t\t\\hat\\bfbeta\n\t\\end{bmatrix} = \\pinv{(\\trans{\\rmX}\\rmX)}\\trans{\\rmX}\\pinv{\\rmK^{\\haf}}\\bfy\n\\end{equation*}\nfor\n\\begin{equation*}\n\\rmL = \\pinv{\\rmK^{\\haf}}\\rmF,~~ \\rmR = \\pinv{\\rmK^{\\haf}}\\rmG\\texto{and}\n\t\\rmX =\n\t\t\\begin{bmatrix}\n\t\t\t\\rmL & \\rmR\n\t\t\\end{bmatrix}.\n\\end{equation*}\n\nWe have\n\n\\begin{strip}\n\t\\begin{equation*}\n\t\t\\pinv{(\\trans{\\rmX}\\rmX)} =\n\t\t\\begin{bmatrix}\n\t\t\t\\pinv{(\\trans{\\rmL}\\rmL)} + \\pinv{(\\trans{\\rmL}\\rmL)} (\\trans{\\rmL}\\rmR)\n\t\t\t\t\\pinv{\\rmW} \\trans{(\\trans{\\rmL}\\rmR)} \\pinv{(\\trans{\\rmL}\\rmL)}\n\t\t\t\t& - \\pinv{(\\trans{\\rmL}\\rmL)} (\\trans{\\rmL}\\rmR)\n\t\t\t\t\\pinv{\\rmW} \\\\\n\t\t\t- \\pinv{\\rmW} \\trans{(\\trans{\\rmL}\\rmR)}\n\t\t\t\\pinv{(\\trans{\\rmL}\\rmL)} & \\pinv{\\rmW}\n\t\t\\end{bmatrix},\n\t\\end{equation*}\n\\end{strip}\nfrom Eq. 
3 of~\\cite{rohde1965generalized}, where\n$\\rmW = \\trans{\\rmR}\\rmR - \\trans{\\rmR}\\rmL\\pinv{(\\trans{\\rmL}\\rmL)}\n\\trans{\\rmL}\\rmR$.\n\nDefining $\\rmA = \\trans{\\rmL}\\rmL$ and $\\rmB = \\trans{\\rmL}\\rmR$ leads us to\n\\begin{equation*}\n\t\\pinv{(\\trans{\\rmX}\\rmX)}\\trans{\\rmX} =\n\t\t\\begin{bmatrix}\n\t\t\t\\pinv{\\rmA}\\trans{\\rmL}\n\t\t\t+ \\pinv{\\rmA}\\rmB\\pinv{\\rmW}\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmL}\n\t\t\t-\\pinv{\\rmA}\\rmB\\pinv{\\rmW}\\trans{\\rmR}\\\\\n\t\t\t-\\pinv{\\rmW}\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmL}\n\t\t\t+ \\pinv{\\rmW}\\trans{\\rmR}\n\t\t\\end{bmatrix}.\n\\end{equation*}\nFinally,\n\\begin{align*}\n\t&\\pinv{(\\trans{\\rmX}\\rmX)}\\trans{\\rmX} \\pinv{\\rmK^{\\haf}}\\bfy = \\\\\n\t&\\begin{bmatrix}\n\t\t\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy\n\t\t+ \\pinv{\\rmA}\\rmB\\pinv{\\rmW}\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy\n\t\t-\\pinv{\\rmA}\\rmB\\pinv{\\rmW}\\trans{\\rmG}\\pinv{\\rmK}\\bfy \\\\\n\t\t\\pinv{\\rmW}\\trans{\\rmG}\\pinv{\\rmK}\\bfy\n\t\t-\\pinv{\\rmW}\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy\n\t\\end{bmatrix}.\n\\end{align*}\nA robust implementation of the above equation has to:\n(i) associate matrix multiplications in such a way that a sequence of\n$\\pinv{\\rmK}\\dots\\pinv{\\rmK}$ is avoided;\nand (ii) handle low-rank matrices $\\rmW$.\nA better association of matrix multiplications is given by\n\\begin{align*}\n\t&\\pinv{(\\trans{\\rmX}\\rmX)}\\trans{\\rmX} \\pinv{\\rmK^{\\haf}}\\bfy = \\\\\n\t&\\begin{bmatrix}\n\t\t\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy\n\t\t+ \\pinv{\\rmA}\\rmB\\pinv{\\rmW}(\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\n\t\t\\bfy-\\trans{\\rmG}\\pinv{\\rmK}\\bfy) \\\\\n\t\t\\pinv{\\rmW}(\\trans{\\rmG}\\pinv{\\rmK}\\bfy\n\t\t-\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy)\n\t\\end{bmatrix}.\n\\end{align*}\n$\\pinv{\\rmW}$ can be found via economic SVD decomposition.\n\n\\subsection{Batch scan}\n\nGiven a $n\\times p$ covariate block $\\rmG$, we want to quickly infer the\nmaximum likelihood estimations of $\\bfalpha_i$ and $\\beta_i$, for\n$i \\in \\{1, \\dots, p\\}$ denoting the $\\rmG$ columns.\nWe define a diagonal matrix\n\\begin{equation*}\n\t\\rmW = \\dotd{\\trans{\\rmG}}{\\pinv{\\rmK}\\rmG} -\n\t       \\dotd{\\trans{\\rmG}}{\\pinv{\\rmK}\\rmF\n\t\t\t\t \\pinv{(\\trans{\\rmL}\\rmL)}\\trans{\\rmF}\\pinv{\\rmK}\\rmG},\n\\end{equation*}\nwhere $\\dotd{\\cdot}{\\cdot}$ is a function that returns the diagonal elements\nof a matrix multiplication with asymptotically lower computational cost and\nmemory use.\n\nClearly,\n\\begin{equation*}\n\t\\hat\\bfbeta =\n\t\\begin{bmatrix}\n\t\t\\hat\\beta_1 \\\\\n\t\t\\hat\\beta_2 \\\\\n\t\t\\vdots \\\\\n\t\t\\hat\\beta_p\n\t\\end{bmatrix}\n\t= \\pinv{\\rmW}(\\trans{\\rmG}\\pinv{\\rmK}\\bfy\n\t-\\trans{\\rmB}\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy).\n\\end{equation*}\nWe know that\n\\begin{equation*}\n\t\\hat\\bfalpha_i = \\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\n\t(\\bfy + (\\rmG_i \\rmW_i \\trans{\\rmG_i})\n\t(\\pinv{\\rmK}\\rmF\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy - \\pinv{\\rmK}\\bfy)),\n\\end{equation*}\nwhere $\\rmG_i$ is the $i$-th column of $\\rmG$.\nFor one-shot computation we do\n\\begin{equation*}\n\t\\hat\\bfalpha = \\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\n\t\\paren{\\bfy \\oplus \\rmG \\otimes\n\t\\trans{\\paren{(\\rmW \\trans{\\rmG})\n\t  (\\pinv{\\rmK}\\rmF\\pinv{\\rmA}\\trans{\\rmF}\\pinv{\\rmK}\\bfy\n\t\t - \\pinv{\\rmK}\\bfy)}}},\n\\end{equation*}\nwhere 
$\\oplus$ and $\\otimes$ are element-wise summation and multiplication\nwith broadcasting.\n\n\\section{Marginal likelihood}\n\nReplace $\\rmK$ by $\\sigma^2\\rmK$ in Eq.~\\eqref{eq:lmm}.\nThe log of the marginal likelihood is given by\n\\begin{align*}\n  \\log\\calL(\\bfbeta, \\sigma^2) &=\n\t-\\haf\\big(n\\log(2\\pi) + \\log\\det|\\sigma^2\\rmK| \\\\\n\t&+ \\trans{(\\bfy - \\rmF\\bfbeta)} \\pinv{(\\sigma^2\\rmK)} (\\bfy - \\rmF\\bfbeta)\n\t\\big).\n\\end{align*}\nThe maximum likelihood estimation and the restricted likelihood estimation of\n$\\sigma^2$ are given by\n\\begin{equation*}\n\t\\hat\\sigma^2 = \\frac{\n\t\t\\trans{(\\bfy - \\rmF\\hat\\bfbeta)}\n\t\t\t\\pinv{\\rmK}\n\t\t(\\bfy - \\rmF\\hat\\bfbeta)\n\t}{k}\n\\end{equation*}\nfor $k = n$ and $k = n - p$, respectively.\n\n\\section{Likelihood ratio test}\n\nSuppose that the null hypothesis is given by\n$\\nullH: \\bfbeta \\in \\Theta_0$ and the alternative one is given by\n$\\altH: \\bfbeta \\in \\Theta_1$, where $\\Theta_0$ is a subspace of $\\Theta_1$,\nand let $d$ be the difference between the $\\Theta_1$ and $\\Theta_0$\ndimensions.\nGiven that the null hypothesis is the true one,\nit follows from classical likelihood theory that\n\\begin{equation*}\n\t-2\\ln\n\t\\Bigg(\n\t\t\\frac{\\calL(\\hat\\bfbeta_0, \\hat\\sigma_0^2)}\n\t\t     {\\calL(\\hat\\bfbeta_1, \\hat\\sigma_1^2)}\n\t\\Bigg) \\sim \\til{\\chi}^2_d,\n\\end{equation*}\nasymptotically, for maximum likelihood parameter estimation\\footnote{This is\nnot true for restricted maximum likelihood estimation.}\nsubject to $\\hat\\bfbeta_0 \\in \\Theta_0$ and $\\hat\\bfbeta_1 \\in \\Theta_1$\n\\cite{verbeke2009linear}.\n\n\\printbibliography\n\n\\end{document}\n", "meta": {"hexsha": "105043e9739184894fc897b80aec7860c0eb0efa", "size": 6797, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "notes.tex", "max_stars_repo_name": "Horta/gwas-scan", "max_stars_repo_head_hexsha": "bb13522fc1bd719791ef393be8436dd204f947e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes.tex", "max_issues_repo_name": "Horta/gwas-scan", "max_issues_repo_head_hexsha": "bb13522fc1bd719791ef393be8436dd204f947e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes.tex", "max_forks_repo_name": "Horta/gwas-scan", "max_forks_repo_head_hexsha": "bb13522fc1bd719791ef393be8436dd204f947e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0752212389, "max_line_length": 80, "alphanum_fraction": 0.6707370899, "num_tokens": 2713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7662936324115012, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.5506543073136576}}
{"text": "\\section{}\n\n\\subsection{Initial weights and biases}\n\\label{sec:weights}\n\nWeights between input layer and hidden layer\n\\[\n\\begin{bmatrix}\n    0.39653235 & -0.06798961 & -0.03110093 & 0.35617649 \\\\\n    -0.2372905 & -0.08188699 & -0.2610652 & 0.1051389\n\\end{bmatrix}\n\\]\n\nBiases between input layer and hidden layer\n\\[\n\\begin{bmatrix}\n    0.2304629 \\\\\n    -0.40825189 \\\\\n    0.48841448 \\\\\n    -0.49694009 \\\\\n\\end{bmatrix}\n\\]\n\n\nWeights between hidden layer and output layer\n\\[\n\\begin{bmatrix}\n-0.26704499 \\\\\n0.00909349 \\\\\n-0.10233779 \\\\\n0.22680813 \\\\\n\\end{bmatrix}\n\\]\n\nBiases between hidden layer and output layer\n\\[\n\\begin{bmatrix}\n-0.35936101\n\\end{bmatrix}\n\\]\n\n\\subsection{}\n\\label{sec:sgd}\nAt first a neural network with single hidden layer having 4 neurons was constructed. The weights for the network was\nrandomly initialized in the range [-0.5, 0.5]. Sigmoid function was selected as the\nactivation function for the network. Then, stochastic gradient descent without\nany modification was used to train the network. Plot of training cost with respect\nto iteration is shown in figure \\ref{fig:cost_sgd}.\n\nA minimum threshold of 0.01 is selected as a stopping criteria for training algorithm.\nThe training is stopped as soon as average training error for samples was less than the threshold. The training algorithm\nuses a learning rate($\\eta$)=0.01.\n\n\\section{}\nThe neural network described in section \\ref{sec:sgd} was again initialized with weights\npresented in \\ref{sec:weights}. It was then trained on the given samples using stochastic gradient descent with momentum\nhaving momentum coefficient($\\mu$)$=0.5$. The plot of training cost with respect to iteration is shown in \\ref{fig:cost_sgd_momentum}.\n\nComparing the figures \\ref{fig:cost_sgd} and \\ref{fig:cost_sgd_momentum} it is clear that the SGD algorithm\nwith momentum reaches the minimum threshold of 0.01 in lesser number of iterations than unmodified SGD algorithm.\n", "meta": {"hexsha": "4ddbdad9a7a4adfe5b68af48c10e2868fa4b87cf", "size": 1928, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "assignment_4/sigmoid.tex", "max_stars_repo_name": "diwasblack/machine_learning", "max_stars_repo_head_hexsha": "83bf5af98a3db5e13f628f39d7519575c580497d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment_4/sigmoid.tex", "max_issues_repo_name": "diwasblack/machine_learning", "max_issues_repo_head_hexsha": "83bf5af98a3db5e13f628f39d7519575c580497d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignment_4/sigmoid.tex", "max_forks_repo_name": "diwasblack/machine_learning", "max_forks_repo_head_hexsha": "83bf5af98a3db5e13f628f39d7519575c580497d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.606557377, "max_line_length": 134, "alphanum_fraction": 0.7572614108, "num_tokens": 535, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7662936377487305, "lm_q1q2_score": 0.5506543019127911}}
{"text": "\\documentclass[a4paper]{article}\n\\usepackage[english]{babel}\n\\usepackage{graphicx}\n\\usepackage{multicol}\n\\usepackage{amsmath}\n\\usepackage{hyperref}\n\\usepackage{amsthm}\n\\usepackage{geometry}\n\\geometry{a4paper} \n\\usepackage{fancyhdr}\n\\usepackage{xcolor}\n\\usepackage{amssymb}\n\\usepackage{multicol}\n\\theoremstyle{definition}\n\\newtheorem{exmp}{Example}[section]\n\\newtheorem{theorem}{Theorem}\n\n\\begin{document}\n\\author{Fractals}\n\\title{\\textbf{}}\n\\maketitle\n\\tableofcontents\n\\noindent\n\\section{Basic Algebra}\n\\subsection{Simon's Factroring Trick}\n\\textbf{Simon\u2019s Favorite Factoring Trick} (SFFT) is best explained with an example:\n\\begin{exmp}\n    Find all positive integers \\(x, y\\) that satisfy\n    \\[\n        xy- 2x - 4y = 0.\n    \\]\n    \\(Sloution\\) Let us factor the first two terms:\n    \\[\n        x(y-2) - 4y = 0.\n    \\]\n    We want to find some way we can turn the \\(y\\) into a \\(y - 2\\). Let\u2019s see what\n    happens if we do that:\n    \\[\n        x(y-2) - 4(y-2+2) = 0.\n    \\]\n    \\[\n        x(y-2) - 4(y-2)  - 8= 0.\n    \\]\n    \\[\n        x(y-2) - 4(y-2) = 8.\n    \\]\n    Now, we can factor:\n    \\[\n        (x-4)(y-2) = 8.\n    \\]\n    Because \\(x, y\\) are positive integers,\n    we know that \\(x - 4 \\text{ and } y - 2\\) are simply the positive factors of 8\n    \\[\n        x-4 = 1, y-2=8,\n    \\]\n    \\[\n        x-4 = 2, y-2=4,\n    \\]\n    \\[\n        x-4 = 4, y-2=2,\n    \\]\n    \\[\n        x-4 = 8, y-2=1,\n    \\]\n    Solving we get \\framebox[1.04\\width]{\\((x,y)\\in  \\{(5,10), (6,6), (8,4), (12,3)\\} \\).}\n    \\\\\n\n    \\noindent\n    Now for the formal statement:\n\\end{exmp}\n\\begin{theorem}[\\textbf{SFFT}]\n    For all real numbers (although commonly used only for integers) \\(x, y, a, b, \\)\n    \\[\n        xy + xa + yb + ab = (x + a)(y + bk).\n    \\]\n    Two special common cases are: \\(xy + x + y + 1 = (x+1)(y+1)\\) and \\(xy - x - y + 1 = (x-1)(y-1)\\).\n\\end{theorem}\n\\end{document}", "meta": {"hexsha": "80ba9e7eadefc9b8303ee77517a0581b7e782af1", "size": 1872, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "basic/basic-algebra/basic-algebra.tex", "max_stars_repo_name": "GUC-Fractals/math-curriculum", "max_stars_repo_head_hexsha": "a11336def018106bd31e56e5eb9ac9225d4a072a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-06T09:36:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-06T09:36:30.000Z", "max_issues_repo_path": "basic/basic-algebra/basic-algebra.tex", "max_issues_repo_name": "GUC-Fractals/math-curriculum", "max_issues_repo_head_hexsha": "a11336def018106bd31e56e5eb9ac9225d4a072a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "basic/basic-algebra/basic-algebra.tex", "max_forks_repo_name": "GUC-Fractals/math-curriculum", "max_forks_repo_head_hexsha": "a11336def018106bd31e56e5eb9ac9225d4a072a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.0, "max_line_length": 102, "alphanum_fraction": 0.5560897436, "num_tokens": 709, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7185943805178139, "lm_q2_score": 0.7662936377487304, "lm_q1q2_score": 0.550654301912791}}
{"text": "\\documentclass[amsmath,amssymb,aps,pra,reprint,groupedaddress,showpacs]{revtex4-1}\n\n\\usepackage{multirow}\n\\usepackage{verbatim}\n\\usepackage{color,graphicx}\n\n\\input{../Common}\n\n\\begin{document} \n\n\\title{Fibonacci Sequence Alternative Representation}\n \n\\author{Lucchi Manuele}\n\\email[]{manuele.lucchi@studenti.unimi.it}\n\\affiliation{IT Department Students, Universita' degli Studi di Milano, Citta' degli Studi, Milano, Italia}\n\n\\date{\\today}\n\n\\begin{abstract}\nIn this Research we will introduce a new representation of the Fibonacci Sequence starting from its Generative Function, the Binet Formula. \nThe result will be an equation that can be parallelized in a multithreaded system and that only uses positive integers.\n\\end{abstract}\n\n\\maketitle\n\n\\section{Introduction}\nThe Fibonacci Sequence [1] is a simple, but really important, Sequence in Natural Numbers that can be found in a lot of places in nature, \nfor example flowers' petals follow it in their numeration. The sequence is just the sum of its previous terms: \n$$ F_{n} = F_{n-1} + F_{n-2} $$\nIn 1843 Binet introduced its Generative Function called the Binet Formula [2] (even if it was already known a century earlier by Euler, Daniel Bernoulli and de Moivre) that allows everyone to\ncalculate the $n_{th}$ Fibonacci Number without using a recursive approach\n$$f(n) = \\frac{1}{\\sqrt{5}} \\left[ \\left( \\frac{1+\\sqrt{5}}{2} \\right) ^n - \\left( \\frac{1-\\sqrt{5}}{2} \\right)^n \\right] $$\nHowever this representation uses irrational numbers like $\\sqrt{5}$ that causes a certain level of imprecision in the result that should be a positive integer.\nOur approach consists in using the Newton Binomial [3] to simplify all the irrational numbers and then optimize the result.\n\n\\section{Transformation}\n\nThe first step is to take out the $\\frac{1}{2^n}$\n$$f(n) = \\frac{1}{2^n\\sqrt{5}} \\left[ \\left( 1+\\sqrt{5} \\right) ^n - \\left( 1-\\sqrt{5} \\right)^n \\right]$$\nNow we can notice that $(1 + \\sqrt{5})^n$ and $(1 - \\sqrt{5})^n$ are both in the form $(a + b)^n$ so they are suitable for the Newton Binomial transformation,\nso we can write them like \\\\\n$ A_n:= \\sum_{k=0}^n \\binom{n}{k} \\left(\\sqrt{5} \\right) ^k $ and $ B_n:= \\sum_{k=0}^n \\binom{n}{k} \\left( -\\sqrt{5} \\right) ^k $ both with $k \\in N$.\nIf we unroll the sums we'll find some similarities between A and B.\\\\\nSo\n\\begin{gather*}\nA = a\\sqrt{5} +b5 + c\\sqrt{5^3} + d5^2 + ... + z\\sqrt{ 5 }^k \\\\\nB = -a\\sqrt{5} +b5 + -c\\sqrt{5^3} + d5^2 + ... + z \\left( -\\sqrt{5} \\right) ^k\n\\end{gather*} \nwith $a, b, c, d,$  ..., $z \\in N$. \\\\\n\nSince it's $f(n)=\\frac{1}{2^n\\sqrt{5}} \\left[ A - B \\right]$ we'll have\n\\begin{align*}\nA - B &= a\\sqrt{5} +b5 + c\\sqrt{5^3} + d5^2 + ... + z\\sqrt{5}^k  \\\\\n&- \\left( -a\\sqrt{5} +b5 + -c\\sqrt{5^3} + d5^2 + ... + z \\left( -\\sqrt{5} \\right)^k \\right) \\\\\n&= a\\sqrt{5} +b5 + c\\sqrt{5^3} + d5^2 + ... + z\\sqrt{5}^k +a\\sqrt{5} -b5 \\\\\n&+ c\\sqrt{5^3} - d5^2 + ... - z \\left( -\\sqrt{5} \\right) ^k\n\\end{align*}\n\nNow we can see that all members that have an integer exponent can be reduced. 
If we define $C:= A - B$ we get\n\\begin{align*}\nC_n &= \\sum_{k=0}^n \\binom{n}{k} \\left(\\sqrt{5} \\right) ^k - \\sum_{k=0}^n \\binom{n}{k} \\left(-\\sqrt{5} \\right) ^k\\\\\n&= \\sum_{k=0}^n \\binom{n}{k} \\left[ \\left(\\sqrt{5} \\right) ^k - \\left(-\\sqrt{5} \\right) ^k \\right]\n\\end{align*}\nWe can now split the sum again, this time separating even and odd values of the exponent:\n\n\\begin{align*}\nC_n &= \\sum_{k=0}^{n/2} \\binom{n}{2k} \\left[ \\left(\\sqrt{5} \\right) ^{2k} - \\left(-\\sqrt{5} \\right) ^{2k} \\right]\\\\\n&+ \\sum_{k=0}^{n/2} \\binom{n}{2k + 1} \\left[ \\left(\\sqrt{5} \\right) ^{2k + 1} - \\left(-\\sqrt{5} \\right) ^{2k +1} \\right]\n\\end{align*}\n\nAnalyzing the sums one by one, we observe that the first one vanishes identically, since $(\\sqrt{5})^{2k} = (-\\sqrt{5})^{2k}$ for every $k$:\n$$ \\sum_{k=0}^{\\frac{n}{2}} \\binom{n}{2k} \\left[ \\left(\\sqrt{5} \\right) ^{2k} - \\left(-\\sqrt{5} \\right) ^{2k} \\right] = 0 $$\nFor the second one, since $2k + 1$ is always odd, we know that\n\n\\begin{align*}\n&\\left(-\\sqrt{5} \\right) ^{2k + 1} =  - \\left( \\sqrt{5} \\right) ^{2k + 1} \\implies \\\\\n&\\left( \\sqrt{5} \\right) ^{2k + 1} - \\left(-\\sqrt{5} \\right) ^{2k + 1} = 2\\left(\\sqrt{5} \\right) ^{2k + 1}\n\\end{align*}\n\nand therefore\n\n\\begin{align*}\nC &= \\sum_{k=0}^{\\frac{n}{2}} \\binom{n}{2k + 1} \\left[ \\left(\\sqrt{5} \\right) ^{2k + 1} - \\left(-\\sqrt{5} \\right) ^{2k + 1} \\right]\\\\\n&= 2 \\sum_{k=0}^\\frac{n}{2} \\binom{n}{2k + 1} \\left(\\sqrt{5} \\right) ^{2k + 1}\n\\end{align*}\n\nIn the end, what remains is our alternative representation\n\\begin{align*}\nS_n &= \\frac{2}{2^n\\sqrt{5}} \\sum_{k=0}^\\frac{n}{2} \\binom{n}{2k + 1} \\left( \\sqrt{5} \\right) ^{2k +1} \\\\\n&=\\frac{1}{2^{n-1}} \\sum_{k=0}^\\frac{n}{2} \\binom{n}{2k + 1}5^{k}\n\\end{align*}\n\nThis is a first version of the alternative formula: it already supports a certain level of parallelization and, since every Fibonacci number is an integer, it does not need to handle real numbers, unlike the Binet Formula.\n\n\\begin{thebibliography}{24}\n \n  \\bibitem{fibonaccigeneral}\n    {OEIS},\n    \\textit{Sequence A000045 (Fibonacci Numbers)}\n\n  \\bibitem{binetgeneral}\n    {DA VEDERE},\n    \\textit{DA VEDERE}\n\n  \\end{thebibliography}\n\n\\end{document}", "meta": {"hexsha": "16394f7a00a3394aa8dee3ba1d5dfa11d033ff01", "size": 5118, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "alternative_representation/fibonacci_alternative_representation.tex", "max_stars_repo_name": "manuelelucchi/Fibonacci", "max_stars_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-06T23:49:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-06T23:49:23.000Z", "max_issues_repo_path": "alternative_representation/fibonacci_alternative_representation.tex", "max_issues_repo_name": "manuelelucchi/Fibonacci", "max_issues_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "alternative_representation/fibonacci_alternative_representation.tex", "max_forks_repo_name": "manuelelucchi/Fibonacci", "max_forks_repo_head_hexsha": "e3c683dddccefc8876de0c7bb30a3375d1f94929", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.8317757009, "max_line_length": 213, "alphanum_fraction": 0.6449785072, "num_tokens": 1958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7057850278370112, "lm_q2_score": 0.7799929053683038, "lm_q1q2_score": 0.5505073144280396}}
{"text": "\\section{conclusion}\\label{sec:conclusion}\nIn this paper, we have presented a rigorous application of an artificial neural network methodology to the mitigation of the observational systematics in galaxy clustering measurements of an eBOSS-like ELG sample selected from DR7 (see \\S~\\ref{sec:data}). We have investigated the galaxy density dependency on 18 imaging attributes of the data (see Fig. \\ref{fig:eboss_dr7}). We compare the performance of the neural network with that of the traditional, linear and quadratic multivariate regression methods. The key aspects of our neural network methodology are:\\\\\n\n\\begin{itemize}\n    \\item The application of k-fold cross-validation, which implements the training-validation-test split to tune the hyper parameters by evaluating how well the trained network generalizes to the unseen,  validation data set and therefore to suppress overfitting when applied to the test set;\n    \n    \\item The repeated split process until we cover the entire data footprint as test sets;\n    \n    \\item The elimination of redundant imaging maps by the feature selection procedure to further reduce the overfitting problem and therefore protect the cosmological clustering signal.\n\\end{itemize}\n\n\nWe apply the output of our pipeline, i.e., the selection mask for the DR7 footprint to the observed galaxy density field. Benchmark selection masks are also produced employing the linear and quadratic polynomial regression. Comparing statistical results before and after applying the selection masks, we find that:\\\\\n\\begin{itemize}\n    \\item Galactic foregrounds are the most dominant source of contamination in this imaging dataset (see Figs. \\ref{fig:nnbar}, \\ref{fig:clcross}, and \\ref{fig:xicross}).\n    \n    \\item This contamination causes an excess clustering signal in the auto power spectrum and correlation function of the galaxy density field on large scales (see Fig. \\ref{fig:clxi}).\n    \n    \\item All mitigation techniques e.g., the neural network method as well as the linear multivariate models using the linear and quadratic polynomial functions, are able to reduce the auto and cross clustering signals (see Figs. \\ref{fig:xicross} and \\ref{fig:clcross});\n    \n    \\item However, the neural network removes the excess clustering more effectively in the auto power spectrum and correlation function of galaxies (see Fig. \\ref{fig:clxi}).\n\\end{itemize}\n\nThe last result implies that our neural network method has a higher flexibility than both linear multivariate models we tested, and it is therefore capable of capturing the non-linear systematic effects in the observed galaxy density field.\\\\\n\nWe apply our methodology on two sets of 100 log-normal mock datasets with (`contaminated mocks') and without (`null mocks') imaging contamination to evaluate how well the ground truth cosmological clustering can be reconstructed in both cases, and therefore to validate the systematic mitigation techniques. All mitigation techniques are applied in the same way we treat the real data. The key results of our mock test are as follows:\\\\\n\n\\begin{itemize}\n    \\item The feature selection procedure is able to identify most of the ten contamination input maps as important for the contaminated mocks while correctly identifying most of the maps as redundant for the null mocks (see Fig. 
\\ref{fig:mockablation}).\n    \n    \\item All three mitigation methods, i.e., the linear polynomial, quadratic polynomial, and neural network methods, perform similarly in terms of the residual bias in the presence of contamination. This is expected since the contamination model is based on the linear polynomial model, which all three methods are capable of reproducing. The default neural network tends to slightly under-correct, which is the outcome of the feature selection procedure. On the other hand, the linear and quadratic polynomial methods tend to slightly over-correct (see the right panel of Fig. \\ref{fig:deltaclmock}).\n    \n    \\item In the absence of contamination, the neural network is the most robust against regressing out the cosmological clustering. This is mainly due to the feature selection process that appropriately reduces the flexibility of the mitigation (see the left panel of Fig. \\ref{fig:deltaclmock}). Based on this result, we implement the feature selection procedure for DR7.\n    \n    \\item Using $\\chi^{2}$ statistics, we quantify the bias and find that for the null mocks, the default neural network recovers the underlying clustering within $1\\sigma$ C.L. (see Eq. \\ref{eq:chi2sigma}) while the other methods return more than $4\\sigma$ C.L. bias. For the contaminated mocks, all of the methods return biased clustering with $2.8-3.9\\sigma$ C.L., which indicates that it is crucial for cosmological parameter estimation to determine the residual systematic uncertainty on scales sensitive to the parameters of interest (see the middle panels of Figs \\ref{fig:deltaclmock} and \\ref{fig:mockdclextra}).\n    \n    \\item None of the methods increases the fractional variance during the mitigation process (see the bottom row of Fig. \\ref{fig:deltaclmock}).\\\\\n        \n\\end{itemize}\n\nWe also employ the mocks to investigate the remaining systematic effects in the data (Figs \\ref{fig:nnbar}-\\ref{fig:xicross}). While the neural network methods outperform the conventional methods, we conclude that the data exhibit around 19\\% residual systematics in the target number density (Figs \\ref{fig:chi2pdf} \\& \\ref{fig:chi2breakdown}). Our analysis suggests that a more rigorous masking on  $depth-g$ (e.g., $depth-g>24.95$) improves the mean density at the cost of losing $9\\%$ of data. To use this sample for cosmology, we therefore suggest a) accounting for 19\\%  (or more, depending on the assumption on the baseline) additional systematic errors beyond the statistical errors at the density-field level; b) performing further analysis of systematics and improvement of the mitigation method to deal with the depth and sky background issues.\n\n\nTo conclude, our analyses illustrate that the neural network method we developed in this paper is a promising tool for the mitigation of the large-scale spurious clustering that is likely caused by the imaging systematics. Our method is more robust against regressing out the cosmological clustering than the traditional, linear multivariate regression methods. Such improvement will be particularly crucial for an accurate measurement of non-Gaussianity from the large-scale clustering of current eBOSS and upcoming DESI and the LSST surveys. Our method is computationally less intensive than other approaches such as the Monte Carlo injection of fake galaxies: analyzing DR7 using our default neural network method requires less than six CPU hours. Application of our methodology to any imaging dataset would be straightforward. 
Our systematics mitigation methodology pipeline is publicly available at \\url{https://github.com/mehdirezaie/SYSNet}.", "meta": {"hexsha": "a730770d41937ca70a76826b58a30116b598c876", "size": 6937, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "paper/sections/conclusion.tex", "max_stars_repo_name": "mehdirezaie/SYSNet", "max_stars_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-07-29T11:55:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T21:50:52.000Z", "max_issues_repo_path": "paper/sections/conclusion.tex", "max_issues_repo_name": "mehdirezaie/SYSNet", "max_issues_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/sections/conclusion.tex", "max_forks_repo_name": "mehdirezaie/SYSNet", "max_forks_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-17T18:07:47.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-17T18:07:47.000Z", "avg_line_length": 157.6590909091, "max_line_length": 948, "alphanum_fraction": 0.7964537985, "num_tokens": 1443, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8104789086703224, "lm_q2_score": 0.6791787056691698, "lm_q1q2_score": 0.5504600161628709}}
{"text": "\\chapter{Properties of metric spaces}\nAt the end of the last chapter on metric spaces,\nwe introduced two adjectives ``open'' and ``closed''.\nThese are important because they'll grow up\nto be the \\emph{definition} for a general topological space,\nonce we graduate from metric spaces.\n\nTo move forward, we provide a couple niceness adjectives\nthat applies to \\emph{entire metric spaces},\nrather than just a set relative to a parent space.\nThey are ``(totally) bounded'' and ``complete''.\nThese adjectives are specific to metric spaces,\nbut will grow up to become the notion of \\emph{compactness},\nwhich is, in the words of \\cite{ref:pugh},\n``the single most important concept in real analysis''.\nAt the end of the chapter,\nwe will know enough to realize that something is amiss\nwith our definition of homeomorphism,\nand this will serve as the starting point for the next chapter,\nwhen we define fully general topological spaces.\n\n\\section{Boundedness}\n\\prototype{$[0,1]$ is bounded but $\\RR$ is not.}\nHere is one notion of how to prevent a metric space\nfrom being a bit too large.\n\n\\begin{definition}\n\tA metric space $M$ is \\vocab{bounded}\n\tif there is a constant $D$ such that\n\t$d(p,q) \\le D$ for all $p,q \\in M$.\n\\end{definition}\nYou can change the order of the quantifiers:\n\\begin{proposition}\n\t[Boundedness with radii instead of diameters]\n\tA metric space $M$ is bounded if and only if\n\tfor every point $p \\in M$, there is a radius $R$\n\t(possibly depending on $p$) such that $d(p,q) \\le R$ for all $q \\in M$.\n\\end{proposition}\n\\begin{exercise}\n\tUse the triangle inequality to show these are equivalent.\n\t(The names ``radius'' and ``diameter'' are a big hint!)\n\\end{exercise}\n\n\\begin{example}\n\t[Examples of bounded spaces]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Finite intervals like $[0,1]$ and $(a,b)$ are bounded.\n\t\t\\ii The unit square $[0,1]^2$ is bounded.\n\t\t\\ii $\\RR^n$ is not bounded for any $n \\ge 1$.\n\t\t\\ii A discrete space on an infinite set is bounded.\n\t\t\\ii $\\NN$ is not bounded, despite being\n\t\thomeomorphic to the discrete space!\n\t\\end{enumerate}\n\\end{example}\n\nThe fact that a discrete space on an infinite set\nis ``bounded'' might be upsetting to you, so\nhere is a somewhat stronger condition you can use:\n\n\\begin{definition}\n\tA metric space is \\vocab{totally bounded}\n\tif for any $\\eps > 0$,\n\twe can cover $M$ with finitely many $\\eps$-neighborhoods.\n\\end{definition}\nFor example, if $\\eps = 1/2$, you can cover $[0,1]^2$\nby $\\eps$-neighborhoods.\n\\begin{center}\n\t\\begin{asy}\n\t\tsize(4cm);\n\t\tdraw(shift( (-0.5,-0.5) )*unitsquare, black+1);\n\t\treal d = 0.4;\n\t\treal r = 0.5;\n\t\tdraw(CR(dir( 45)*d, r), dotted);\n\t\tdraw(CR(dir(135)*d, r), dotted);\n\t\tdraw(CR(dir(225)*d, r), dotted);\n\t\tdraw(CR(dir(315)*d, r), dotted);\n\t\\end{asy}\n\\end{center}\n\\begin{exercise}\n\tShow that ``totally bounded'' implies ``bounded''.\n\\end{exercise}\n\\begin{example}\n\t[Examples of totally bounded spaces]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii A subset of $\\RR^n$ is bounded if and only if\n\t\tit is totally bounded.\n\n\t\tThis is for Euclidean geometry reasons:\n\t\tfor example in $\\RR^2$ if I can cover a set\n\t\tby a single disk of radius $2$,\n\t\tthen I can certainly cover it by finitely many\n\t\tdisks of radius $1/2$.\n\t\t(We won't prove this rigorously.)\n\n\t\t\\ii So for example $[0,1]$ or $[0,2] \\times [0,3]$\n\t\tis totally bounded.\n\n\t\t\\ii 
In contrast, a discrete space on\n\t\tan infinite set is not totally bounded.\n\t\\end{enumerate}\n\\end{example}\n\n\\section{Completeness}\n\\prototype{$\\RR$ is complete, but $\\QQ$ and $(0,1)$ are not.}\n\nSo far we can only talk about sequences converging if they have a limit.\nBut consider the sequence \n\\[ x_1 = 1, \\; x_2 = 1.4, \\; x_3 = 1.41, \\; x_4 = 1.414, \\dots. \\]\nIt converges to $\\sqrt 2$ in $\\RR$, of course.\nBut it fails to converge in $\\QQ$;\nthere is no \\emph{rational} number this sequence converges to.\nAnd so somehow, if we didn't know about the existence of $\\RR$, we would\nhave \\emph{no idea} that the sequence $(x_n)$ is ``approaching'' something.\n\nThat seems to be a shame.\nLet's set up a new definition to describe these sequences whose terms\n\\textbf{get close to each other},\neven if they don't approach any particular point in the space.\nThus, we only want to mention the given points in the definition.\n\n\\begin{definition}\n\tLet $x_1, x_2, \\dots$ be a sequence which lives in a metric space $M = (M,d_M)$.\n\tWe say the sequence is \\vocab{Cauchy} if for any $\\eps > 0$, we have\n\t\\[ d_M(x_m, x_n) < \\eps \\]\n\tfor all sufficiently large $m$ and $n$.\n\\end{definition}\n\n\\begin{ques}\n\tShow that a sequence which converges is automatically Cauchy.\n\t(Draw a picture.)\n\\end{ques}\n\nNow we can define:\n\\begin{definition}\n\tA metric space $M$ is \\vocab{complete} if every\n\tCauchy sequence converges.\n\\end{definition}\n\n\\begin{example}\n\t[Examples of complete spaces]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii $\\RR$ is complete.\n\t\t(Depending on your definition of $\\RR$,\n\t\tthis either follows by definition, or requires some work.\n\t\tWe won't go through this here.)\n\t\t\\ii The discrete space is complete,\n\t\tas the only Cauchy sequences are eventually constant.\n\t\t\\ii The closed interval $[0,1]$ is complete.\n\t\t\\ii $\\RR^n$ is complete as well.\n\t\t(You're welcome to prove this by induction on $n$.)\n\t\\end{enumerate}\n\\end{example}\n\n\\begin{example}\n\t[Non-examples of complete spaces]\n\t\\listhack\n\t\\begin{enumerate}[(a)]\n\t\t\\ii The rationals $\\QQ$ are not complete.\n\t\t\\ii The open interval $(0,1)$ is not complete,\n\t\tas the sequence $0.9$, $0.99$, $0.999$, $0.9999$, \\dots\n\t\tis Cauchy but does not converge.\n\t\\end{enumerate}\n\\end{example}\n\nSo, metric spaces need not be complete, like $\\QQ$.\nBut we certainly would like them to be complete,\nand in light of the following theorem this is not unreasonable.\n\\begin{theorem}\n\t[Completion]\n\tEvery metric space can be ``completed'',\n\ti.e.\\ made into a complete space by adding in some points.\n\\end{theorem}\nWe won't need this construction at all,\nso it's left as \\Cref{prob:completion}.\n\\begin{example}\n\t[$\\QQ$ completes to $\\RR$]\n\tThe completion of $\\QQ$ is $\\RR$.\n\\end{example}\n(In fact, by using a modified definition of completion\nnot depending on the real numbers,\nother authors often use this as the definition of $\\RR$.)\n\n\\section{Let the buyer beware}\nThere is something suspicious about both these notions:\nneither are preserved under homeomorphism!\n\n\\begin{example}\n\t[Something fishy is going on here]\n\t\\label{ex:fishy}\n\tLet $M = (0,1)$ and $N = \\RR$.\n\tAs we saw much earlier $M$ and $N$ are homeomorphic.\n\tHowever:\n\t\\begin{itemize}\n\t\t\\ii $(0,1)$ is totally bounded, but not complete.\n\t\t\\ii $\\RR$ is complete, but not bounded.\n\t\\end{itemize}\n\\end{example}\n\nThis is the first hint 
of something going awry with the metric.\nAs we progress further into our study of topology,\nwe will see that in fact \\emph{open and closed sets}\n(which we motivated by using the metric)\nare the notion that will really shine later on.\nI insist on introducing the metric first so that\nthe standard pictures of open and closed sets make sense,\nbut eventually it becomes time to remove the training wheels.\n\n%It's a theorem that $\\RR$ is complete.\n%To prove this I'd have to define $\\RR$ rigorously, which I won't do here.\n%In fact, there are some competing definitions of $\\RR$.\n%It is sometimes defined as the completion of the space $\\QQ$.\n%Other times it is defined using something called \\emph{Dedekind cuts}.\n%For now, let's just accept that $\\RR$ behaves as we expect and is complete.\n\n%And thus, I can now tell you exactly what $\\RR$ is.\n%You might notice that it's actually not that easy to define:\n%we can define $\\QQ = \\left\\{ a/b : a,b \\in \\NN \\right\\}$, but\n%what do we say for $\\RR$?\n%Here's the answer:\n%\\begin{definition}\n%\t$\\RR$ is the completion of the metric space $\\QQ$.\n%\\end{definition}\n\n\\section{Subspaces, and (inb4) a confusing linguistic point}\n\\prototype{A circle is obtained as a subspace of $\\RR^2$.}\n\nAs we've already been doing implicitly in examples, we'll now say:\n\\begin{definition}\n\tEvery subset $S \\subseteq M$ is a metric space in its own right,\n\tby re-using the distance function on $M$.\n\tWe say that $S$ is a \\vocab{subspace} of $M$.\n\\end{definition}\nFor example, we saw that the circle $S^1$\nis just a subspace of $\\RR^2$.\n\nIt thus becomes important to distinguish between\n\\begin{enumerate}[(i)]\n\t\\ii \\textbf{``absolute'' adjectives} like ``complete'' or ``bounded'',\n\twhich can be applied to both spaces,\n\tand hence even to subsets of spaces (by taking a subspace),\n\tand\n\t\\ii \\textbf{``relative'' adjectives}\n\tlike ``open (in $M$)'' and ``closed (in $M$)'',\n\twhich make sense only relative to a space,\n\teven though people are often sloppy and omit them.\n\\end{enumerate}\nSo ``$[0,1]$ is complete'' makes sense,\nas does ``$[0,1]$ is a complete subset of $\\RR$'',\nwhich we take to mean ``$[0,1]$ is a complete as a subspace of $\\RR$''.\nThis is since ``complete'' is an absolute adjective.\n\nBut here are some examples of ways in which relative adjectives\nrequire a little more care:\n\\begin{itemize}\n\t\\ii Consider the sequence $1$, $1.4$, $1.41$, $1.414$, \\dots.\n\tViewed as a sequence in $\\RR$, it converges to $\\sqrt 2$.\n\tBut if viewed as a sequence in $\\QQ$,\n\tthis sequence does \\emph{not} converge!\n\tSimilarly, the sequence $0.9$, $0.99$, $0.999$, $0.9999$\n\tdoes not converge in the space $(0,1)$,\n\talthough it does converge in $[0,1]$.\n\n\tThe fact that these sequences fail to converge\n\teven though they ``ought to'' is weird and bad,\n\tand was why we defined complete spaces to begin with.\n\n\t\\ii In general, it makes no sense to ask a question like ``is $[0,1]$ open?''.\n\tThe questions ``is $[0,1]$ open in $\\RR$?''\n\tand ``is $[0,1]$ open in $[0,1]$?'' do make sense, however.\n\tThe answer to the first question is ``no''\n\tbut the answer to the second question is ``yes'';\n\tindeed, every space is open in itself.\n\tSimilarly, $[0, \\half)$ is an open set % chktex 9\n\tin the space $M = [0,1]$\n\tbecause it is the ball \\emph{in $M$}\n\tof radius $\\half$ centered at $0$.\n\n\t\\ii Dually, it doesn't make sense to ask ``is $[0,1]$ closed''?\n\tIt is closed \\emph{in 
$\\RR$} and \\emph{in itself}\n\t(but every space is closed in itself, anyways).\n\\end{itemize}\n\n%Thus to list out all four cases:\n%\\begin{itemize}\n%\t\\ii The statement ``$[0,1]$ is complete'' makes sense (and is true);\n%\tit says that $[0,1]$ is a complete metric space.\n%\t\\ii The statement ``$[0,1]$ is a complete subset of $\\RR$'' is valid;\n%\tit says that the subspace $[0,1]$ of $\\RR$ is a complete metric space.\n%\t\\ii The statement ``$[0,1]$ is a closed subset of $\\RR$'' makes sense;\n%\tit says that the set of points $[0,1]$ form a closed subset of a parent space $\\RR$.\n%\t\\ii The statement ``$[0,1]$ is closed'' does \\emph{not} make sense officially.\n%\tClosed sets are only defined relative to parent spaces.\n%\\end{itemize}\n\nTo make sure you understand the above,\nhere are two exercises to help you practice relative adjectives.\n\n\\begin{exercise}\n\tLet $M$ be a complete metric space and let $S \\subseteq M$.\n\tProve that $S$ is complete if and only if it is closed in $M$.\n\t% (This is obvious once you figure out what the question is asking.)\n\tIn particular, $[0,1]$ is complete.\n\\end{exercise}\n\\begin{exercise}\n\tLet $M = [0,1] \\cup (2,3)$.\n\tShow that $[0,1]$ and $(2,3)$ are both open and closed in $M$.\n\\end{exercise}\n\nThis illustrates a third point:\na nontrivial set can be both open and closed.\\footnote{Which\n\talways gets made fun of.}\nAs we'll see in \\Cref{ch:top_more}, this implies the space is disconnected;\ni.e.\\ the only examples look quite like the one we've given above.\n\n\\section{\\problemhead}\n\\begin{dproblem}[Banach fixed point theorem]\n\tLet $M = (M,d)$ be a complete metric space.\n\tSuppose $T \\colon M \\to M$ is a continuous map\n\tsuch that for any $p \\neq q \\in M$,\n\t\\[ d\\left( T(p), T(q) \\right) < 0.999 d(p,q). \\]\n\t(We call $T$ a \\vocab{contraction}.)\n\tShow that $T$ has a unique fixed point.\n\\end{dproblem}\n\n\\begin{problem}\n\t[Henning Makholm,\n\ton \\href{https://math.stackexchange.com/a/3051746/229197}{math.SE}]\n\tWe let $M$ and $N$ denote the metric spaces obtained\n\tby equipping $\\RR$ with the following two metrics:\n\t\\begin{align*}\n\t\td_M(x,y) &= \\min \\left\\{ 1, \\left\\lvert x-y \\right\\rvert \\right\\} \\\\\n\t\td_N(x,y) &= \\left\\lvert e^x - e^y \\right\\rvert.\n\t\\end{align*}\n\t\\begin{enumerate}[(a)]\n\t\t\\ii Fill in the following $2 \\times 3$ table\n\t\twith ``yes'' or ``no'' for each cell.\n\t\t\\begin{center}\n\t\t\\begin{tabular}[h]{l|ccc}\n\t\t\t& Complete? & Bounded? & Totally bounded? \\\\ \\hline\n\t\t\t$M$ \\\\\n\t\t\t$N$ \\\\\n\t\t\\end{tabular}\n\t\t\\end{center}\n\t\t\\ii Are $M$ and $N$ homeomorphic?\n\t\\end{enumerate}\n\t\\begin{hint}\n\t\t(a): $M$ is complete and bounded\n\t\tbut not totally bounded. $N$ is all no.\n\t\tFor (b) show that $M \\cong \\RR \\cong N$.\n\t\\end{hint}\n\t\\begin{sol}\n\t\tPart (a) is essentially by definition.\n\t\tThe space $M$ is bounded since no distances exceed $1$,\n\t\tbut not totally bounded since we can't cover $M$\n\t\twith finitely many $\\half$-neighborhoods.\n\t\tThe space $M$ is complete since a sequence of real\n\t\tnumbers converges in $M$ if it converges in the usual sense.\n\t\tAs for $N$, the sequence $-1$, $-2$, \\dots is Cauchy\n\t\tbut fails to converge; and it is obviously not bounded.\n\n\t\tTo show (b), the identity map (!) 
is an homeomorphism $M \\cong \\RR$\n\t\tand $\\RR \\cong N$, since it is continuous.\n\n\t\tThis illustrates that $M \\cong N$ despite the fact\n\t\tthat $M$ is both complete and bounded\n\t\tbut $N$ is neither complete nor bounded.\n\t\tOn the other hand, we will later see that complete\n\t\tand totally bounded implies \\emph{compact},\n\t\twhich is a very strong property preserved under homeomorphism.\n\t\\end{sol}\n\\end{problem}\n\n\n\\begin{dproblem}[Completion of a metric space]\n\t\\yod\n\t\\label{prob:completion}\n\tLet $M$ be a metric space.\n\tConstruct a complete metric space $\\ol M$\n\tsuch that $M$ is a subspace of $\\ol M$,\n\tand every open set of $\\ol M$ contains a point of $M$\n\t(meaning $M$ is \\vocab{dense} in $\\ol M$).\n\t\\begin{hint}\n\t\tAs a set, we let $\\ol M$ be the sequence of Cauchy\n\t\tsequences $(x_n)$ in $M$, modulo the relation that\n\t\t$(x_n) \\sim (y_n)$ if $\\lim_n d(x_n, y_n) = 0$.\n\t\\end{hint}\n\\end{dproblem}\n\n\\begin{problem}\n\tShow that a metric space is totally bounded\n\tif and only if any sequence has a Cauchy subsequence.\n\t\\begin{sol}\n\t\tSee \\url{https://math.stackexchange.com/q/556150/229197}.\n\t\\end{sol}\n\\end{problem}\n\n\\begin{problem}\n\t\\kurumi\n\tProve that $\\QQ$ is not homeomorphic to any complete metric space.\n\t\\begin{hint}\n\t\tThe standard solution seems to be\n\t\tvia the so-called ``Baire category theorem''.\n\t\\end{hint}\n\\end{problem}\n", "meta": {"hexsha": "fd9020964c387ab283c0654b7d08df99e6ce277f", "size": 14469, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "corpus/napkin/tex/topology/metric-prop.tex", "max_stars_repo_name": "aDotInTheVoid/ltxmk", "max_stars_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_stars_repo_licenses": ["Apache-2.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "corpus/napkin/tex/topology/metric-prop.tex", "max_issues_repo_name": "aDotInTheVoid/ltxmk", "max_issues_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_issues_repo_licenses": ["Apache-2.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "corpus/napkin/tex/topology/metric-prop.tex", "max_forks_repo_name": "aDotInTheVoid/ltxmk", "max_forks_repo_head_hexsha": "ee461679e51e92a0e4b121f28ae5fe17d5e5319e", "max_forks_repo_licenses": ["Apache-2.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.4632352941, "max_line_length": 86, "alphanum_fraction": 0.6988734536, "num_tokens": 4421, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6791786861878392, "lm_q2_score": 0.8104788995148791, "lm_q1q2_score": 0.5504599941554813}}
{"text": "\\section{Additional Exercises}\\label{sec:MoreFunctionExercises}\n\n\\Opensolutionfile{solutions}[ex]\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\begin{enumialphparenastyle}\n\n%%%%%%%%%%\n\\begin{ex}\nFind the derivatives of the following functions\nfrom definition.\n\\begin{enumerate}\n\t\\item\t$f(x)=(2x+3)^2$\n\t\\item\t$g(x)=x^{3/2}$\n\\end{enumerate}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item\t$4(2x+3)$\n\t\\item\t$\\frac{3}{2}x^{1/2}$\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\nLet $f(x)=\\left\\{ \n\\begin{array}{cc}\nx^{3} & \\text{if }x\\leq 1 \\\\ \n5x-x^{2} & \\text{if }x>1%\n\\end{array}%\n\\right. .$\nUse the definition of the derivative to find $f^{\\prime}(1)$.\n\\begin{sol}\n\t3\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\nDifferentiate the following functions.\n\\begin{enumerate}\n\t\\item\t$y=7x^4-7\\pi^4+\\dfrac{1}{\\pi\\sqrt[3]{x}}$\n\t\\item\t$f(x)=\\dfrac{1-\\sqrt{x}}{1+\\sqrt{x}}$\n\t\\item\t$f(x)=\\vert x-1\\vert+\\vert x+2\\vert$\n\t\\item\t$f(x)=x^2\\sin x\\cos x$\n\t\\item\t$y=\\dfrac{x\\sin x}{1+\\sin x}$\n\t\\item\t$g(x)=\\sqrt{2+\\dfrac{3}{\\sqrt{x}}}$\n\t\\item\t$y=\\sqrt[3]{x^4+x^2+1}+\\dfrac{1}{(x^3-x+4)^5}$\n\t\\item\t$y=\\sin^3 x-\\sin(x^3)$\n\t\\item\t$F(x)=\\sec^4 x+\\tan^4 x$\n\t\\item\t$y=\\cos^2\\left(\\dfrac{1-x}{1+x}\\right)$\n\t\\item\t$y=\\tan(\\sin(x^2+\\sec^2 x))$\n\t\\item\t$y=\\dfrac{1}{2+\\sin\\frac{\\pi}{x}}$\n\\end{enumerate}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item\t$28x^3-\\dfrac{1}{3\\pi x^{4/3}}$\n\t\\item\t$-\\dfrac{1}{\\sqrt{x}(1+\\sqrt{x})^2}$\n\t\\item\t$f^{\\prime}(x)=\\left\\{ \n\t\\begin{array}{ll}\n\t-2 & \\text{if }x<-2 \\\\ \n\t0 & \\text{if }-2<x<1 \\\\ \n\t2 & \\text{if }x>1%\n\t\\end{array}%\n\t\\right. $\n\t\\item\t$2x\\sin x\\cos x+x^2\\cos^2 x-x^2\\sin^2 x$\n\t\\item\t$\\dfrac{(\\sin x+x\\cos x)(1+\\sin x)-x\\sin x\\cos x}{(1+\\sin x)^2}$\n\t\\item\t$-\\dfrac{3}{4x^{3/2}}\\left(2+\\dfrac{3}{\\sqrt{x}}\\right)^{-1/2}$\n\t\\item\t$\\frac{1}{3}(x^4+x^2+1)^{-2/3}(4x^3+2x)-\\dfrac{5(3x^2-1)}{(x^3-x+4)^6}$\n\t\\item\t$3\\sin^2 x\\cos x-3x^2\\cos(x^3)$\n\t\\item\t$4\\sec^4 x\\tan x + 4\\tan^3 x\\sec^2 x$\n\t\\item\t$\\dfrac{4}{(1+x)^2}\\cos\\left(\\dfrac{1-x}{1+x}\\right)\\sin\\left(\\dfrac{1-x}{1+x}\\right)$\n\t\\item\t$(2x+2\\sec^2 x\\tan x)\\sec^2(\\sin(x^2+\\sec^2 x))\\cos(x^2+\\sec^2 x)$\n\t\\item\t$-\\dfrac{\\pi\\cos\\frac{\\pi}{x}}{x^2(2+\\sin\\frac{\\pi}{x})^2}$\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\nDifferentiate the following functions.\n\\begin{enumerate}\n\t\\item\t$y=e^{3x}+e^{-x}+e^2$\n\t\\item\t$y=e^{2x}\\cos 3x$\n\t\\item\t$f(x)=\\tan(x+e^x)$\n\t\\item\t$g(x)=\\dfrac{e^x}{e^x+2}$\n\t\\item\t$y=\\ln(2+\\sin x)-\\sin(2+\\ln x)$\n\t\\item\t$f(x)=e^{x^{\\pi}}+x^{\\pi^{e}}+\\pi^{e^x}$\n\t\\item\t$y=\\log _{a}(b^x)+b^{\\log_a x}$, where $a$ and $b$ are positive real numbers and $a\\neq 1$\n\t\\item\t$y=(x^2+1)^{x^3+1}$\n\t\\item\t$y=(x^2+e^x)^{1/\\ln x}$\n\t\\item\t$y=\\dfrac{x\\sqrt{x^2+x+1}}{(2+\\sin x)^4(3x+5)^7}$\n\\end{enumerate}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item\t$3e^{3x}-e^{-x}$\n\t\\item\t$2e^{2x}\\cos 3x-3e^{2x}\\sin 3x$\n\t\\item\t$(1+e^x)\\sec^2(x+e^x)$\n\t\\item\t$2e^x/(e^x+2)^2$\n\t\\item\t$\\dfrac{\\cos x}{2+\\sin x}-\\dfrac{\\cos(2+\\ln\tx)}{x}$\n\t\\item\t$e^{x^{\\pi}}\\cdot\\pi x^{\\pi-1}+\\pi^e x^{\\pi^e-1}+\\pi^{e^x}\\ln\\pi\\cdot e^x$\n\t\\item\t$\\log_{a}b+(\\log_{a}b) 
x^{(\\log_{a}b)-1}$\n\t\\item\t$(x^2+1)^{x^3+1}\\left(3x^2\\ln(x^2+1)+\\dfrac{2x(x^3+1)}{x^2+1}\\right)$\n\t\\item\t$\\dfrac{(x^2+e^x)^{1/\\ln x}}{(\\ln x)^2}\\left(\\dfrac{2x+e^x}{x^2+e^x}\\ln\tx-\\dfrac{x^2+e^x}{x}\\right)$\n\t\\item\t$\\dfrac{x\\sqrt{x^2+x+1}}{(2+\\sin x)^4 (3x+5)^7} \\left(\\dfrac{1}{x}+\\dfrac{2x+1}{2(x^2+x+1)}-\\dfrac{4\\cos x}{2+\\sin x}-\\dfrac{21}{3x+5}\\right)$\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\nFind $\\dfrac{dy}{dx}$ if $y$ is a differentiable\nfunction that satisfy the given equation.\n\\begin{enumerate}\n\t\\item\t$x^2+xy+y^2=7$\n\t\\item\t$x^2+y^2=(2x^2+2y^2-x)^2$\n\t\\item\t$x^2\\sin y+y^3=\\cos x$\n\t\\item\t$x^2+xe^y=2y+e^x$\n\\end{enumerate}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item\t$-(2x+y)/(x+2y)$\n\t\\item\t$\\dfrac{x-(2x^2+2y^2-x) (4x-1)}{4y(2x^2+2y^2-x)-y}$\n\t\\item\t$-\\dfrac{\\sin x+2x\\sin y}{x^2\\cos y+3y^2}$\n\t\\item\t$\\dfrac{2x+e^y-e^x}{2-xe^y}$\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n%%%%%%%%%%\n\\begin{ex}\nDifferentiate the following functions.\n\\begin{enumerate}\n\t\\item\t$y=x\\sin^{-1}x$\n\t\\item\t$f(x)=\\dfrac{\\sin^{-1}x}{\\cos^{-1}x}$\n\t\\item\t$g(x)=\\tan^{-1}\\left(\\dfrac{x}{a}\\right)$, where $a>0$\n\t\\item\t$y=x\\tan^{-1}x-\\dfrac{1}{2}\\ln(x^2+1)$\n\\end{enumerate}\n\\begin{sol}\n\\begin{enumerate}\n\t\\item\t$\\sin^{-1}x+x/\\sqrt{1-x^2}$\n\t\\item\t$\\dfrac{\\cos^{-1}x+\\sin^{-1}x}{(\\cos^{-1}x)^2 \\sqrt{1-x^2}}$\n\t\\item\t$a/(x^2+a^2)$\n\t\\item\t$\\tan^{-1}x$\n\\end{enumerate}\n\\end{sol}\n\\end{ex}\n\n\n\\end{enumialphparenastyle}", "meta": {"hexsha": "950787b3a6035d27d3a02270aac530b58367c06a", "size": 4371, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "4-derivatives/4-10-additional-exercises.tex", "max_stars_repo_name": "TimAlderson/OpenCalc", "max_stars_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "4-derivatives/4-10-additional-exercises.tex", "max_issues_repo_name": "TimAlderson/OpenCalc", "max_issues_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4-derivatives/4-10-additional-exercises.tex", "max_forks_repo_name": "TimAlderson/OpenCalc", "max_forks_repo_head_hexsha": "7d0110b6bc4ba42a6b911729420e1406296d6964", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.14, "max_line_length": 149, "alphanum_fraction": 0.555708076, "num_tokens": 2256, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467675095294, "lm_q2_score": 0.6261241772283034, "lm_q1q2_score": 0.5504550464698266}}
{"text": "\\section*{Method}\n\nOur design imposes additional structure on top of existing machine learning\napproaches, enforcing properties on the reconstructed values. For functions \\[ f(p, t)\\to\\mathbb{R}^n, t\\in[0,1] \\] which is modelled through\na learned approach, we decompose it into a function: \\[ f(p)\\to\\mathbb{R}^{n\\times O}, B_O(f(p), t)\\to\\mathbb{R}^n\n\\] Where\n$O$ is the order of the Bezier spline, $f(p)$ is the learned control points, and $B_O(f(p), \\cdot)$ is the evaluation\nof the $O$th order Bezier spline with control points defined by $f(p)$.\n\n\\subsection*{Architecture}\n\nSpecifically for dynamic NeRF, $f(p)$ is defined as\n$\\text{MLP}(x,y,z)\\to\\mathbb{R}^{3\\times O}$. We ray march from a camera with known position and\nview direction through the scene, and at every point we compute the set of control points for a Bezier curve. We then evaluate the Bezier curve at a given time, and deform the ray by the result, producing some $\\Delta_0(x)$. The number of spline points for the Bezier curve is hyperparameter, but we are able\nto accurately reconstruct movement with as low as 4 spline points, but experiment with 16 spline points. In order to evaluate the Bezier curve in a numerically stable way, we use De Casteljau's\nalgorithm. In a future extension, we plan on extending it to handle rational Bezier splines, which would allow for even greater control of movement.\n\nDe Casteljau's algorithm evaluated at time $t$ is characterized by the recurrence relation:\n\\[ \\beta_i^{(0)} = \\beta_i, \\]\n\\[ \\beta_i^{j} = (1-t)\\beta_i^{(j-1)} + t\\beta_{i-1}^{(j-1)}, \\]\nwhich can be thought of as linearly interpolating between adjacent control points until there is only a single point remaining. This takes $O(n^2)$ operations to evaluate, where $n$ is the number of control points.\n\nWe are also interested in constructing a reasonable canonical NeRF, and arbitrarily select $t = 0$ to be canonical. Thus, we are interested in Bezier Curves where $B_O(0) = \\overrightarrow{0}$. This can be achieved in two different ways, either by assigning $p_0 = \\overrightarrow{0}$, and only computing the other control points: $f(p) = p_{1\\cdots O-1}$. Then, we can use the Bezier spline with the control points as the concatenation of $\\overrightarrow{0}$ with the other control points: $[\\overrightarrow{0}, p_{1\\cdots O-1}]$. Alternatively, we can compute $p_{0\\cdots O-1}$ and use the Bezier spline with control points but subtract the first predicted point from all of them: $[p_{0\\cdots O-1}]-p_0$, and the final change in position is $\\Delta_0(x) = B_O(f(p)-f(p)_0,t)+p_0$. While both formulations are in theory equivalent and explicitly concatenating $\\overrightarrow{0}$ leads to fewer degrees of freedom, we find the second approach leads to better convergence near $t=0$, so we use that.\n\nFollowing NR-NeRF, we also learn how rigid each ray is, which in theory should allow more efficient learning of which regions are space are fixed. This rigidity $\\in[\\epsilon, 1]$)is computed as a function of position:\n\\[ \\text{Rigidity} =\\sigma_{\\epsilon^\\uparrow}(\\text{MLP}(x)), \\]\n\\[ \\sigma_{\\epsilon^\\uparrow}(v) = \\sigma(v)(1-\\epsilon) + \\epsilon, \\]\nwhere $\\sigma$ is defined as the sigmoid function $\\frac{1}{1+e^{-x}}$. Rigidity rescales the\ndifficulty of learning movement, making it more easy to represent and classify static scene\nobjects. 
The final change in position is defined as $\\Delta(x) = \\text{Rigidity}\\times\n\\Delta_0(x)$.\n\nIn order to reconstruct RGB values, we also diverge from the original NeRF, and only allow for\npositional dependent colors:\n\\[ \\text{RGB} = \\sigma_{\\epsilon-\\text{wide}}(\\text{MLP}(x)) \\]\n\\[ \\sigma_{\\epsilon-\\text{wide}}(v) = \\sigma(v)(1+2\\epsilon) - \\epsilon \\]\nusing the activation function as described in MIP-NeRF~\\cite{barron2021mipnerf} in order to prevent vanishing gradients on colors. Because of the low number of\nsamples for a moving object at a given view, it is more difficult to learn specular\nreflection, thus it leads to better convergence to model the diffuse color of an object. This is in line with NR-NeRF and D-NeRF, which also model the diffuse component.\n\n\\iffalse\n\\subsection*{Loss}\n\nWhile training, we introduce another loss term, as the $\\ell_2$ loss may have\ndifficulty when there is a large pixel-wise gap in movement between the learned and predicted\ncomponent. $\\ell_2$ may also miss large structural differences between two images if they are close in color, so we add a loss function which is high for incorrect structure of an image.\n\nThe new loss term is \\[\n    L_\\text{FFT} = \\text{MSE}(\\text{FFT}(I_\\text{GT}), \\text{FFT}(I_\\text{predicted}))\n\\],\nwhere the FFT is the 2D fast Fourier transform of an image.\nMSE is the mean squared error in the complex domain as the FFT has both a real and imaginary component, and is defined as \\[ \\text{MSE}(a\\in\\mathbb{C}^k,b\\in\\mathbb{C}^k) \\]\n\\[ = \\frac{1}{k}\\Sigma_{i=0}^k|a_i-b_i| \\]\n\\[ |z\\in\\mathbb{C}| = (\\Re(z)+\\Im(z))(\\Re(z)-\\Im(z)) \\]\nWe introduce this term as a cheap replacement to structural similarity (SSIM), since the\nFFT captures information about the structure of the whole image, which is useful in a dynamic setting due to the disjointedness of where a predicted object may be as opposed to its final position\\footnote{In a published iteration, we would likely perform an ablation of this loss function.}.\n\nOur final loss formulation is:\n\\[ \\ell_2(I_\\text{predicted}, I_\\text{GT}) + L_\\text{FFT}(I_\\text{predicted}, I_\\text{GT}) \\]\nwithout any additional regularization terms.\n\nOur formulation is simpler compared to NR-NeRF since there is no need to regularize\ntemporal consistency, making the optimization process simpler. Our training process also does not require adding in additional frames over time for short sequences, so we randomly sample from all available frames. This is contrast to D-NeRF, which requires adding frames in during training since it has to enforce that at $t=0$ the predicted movement is 0.\n\n\\subsection*{Training}\n\nFor training, we sample random crops of random frames, computing the loss defined previously and back-propagating through the whole network. We use autograd to optimize control points and the canonical NeRF jointly, but note that there are also classical approaches to solving spline points which may lead to faster optimization in the future. 
We use the Adam optimizer~\\cite{Kingma2015AdamAM} with cosine simulated annealing~\\cite{loshchilov2017sgdr} to go from $\\num{2e-4}$ to $\\num{5e-5}$ over the course of a day.\n\nFor some scenes, we are able to have higher learning rates, but for much darker scenes it's necessary to lower the learning rate to converge.\n\n\\subsection*{Long Duration $C^0$ Continuity}\n\nWhile the above structure is sufficient to model short-term $C^0$ continuity, it runs into\nnumerical instability and higher cost the longer the sequence is, due to the $O(n^2)$\nevaluation cost of De Casteljau's algorithm. To deal with this, we design an additional architecture which\ncomposes the previous architecture for many small $C^0$ curves. Concatenating many small splines permits for reconstruction of infinitely long sequences with guaranteed $C^0$-continuity with only a few additional models.\n\nThe architecture is based off of poly-splines, or a composition of many small Bezier splines.\nFor a known-length sequence, we divide it into $K$ segments, and assume that $K$ evenly divides the total number of frames. We can then treat $t$ as is\nin the range $[0, K]$, and $t$ can be decomposed into a segment number and fractional component:\n\\[ k\\in\\mathbb{Z}_K=\\lfloor t\\rfloor, t'\\in[0,1)=t-k \\]\nThen we define an embedding $\\text{Emb} = \\mathbb{R}^{Z\\times(K+1)}$, where $Z$ is a latent\ndimension size. We then create an \"anchor\" network:\n\\[\n    \\textit{anchor}(x\\in\\mathbb{R}^3,z\\in\\mathbb{R}^Z) = \\text{MLP}(x,z)\\to(\\mathbb{R}^3,\\mathbb{R}^{Z'})\n\\]\nWe call it an \"anchor\" network because it computes\nendpoints which anchors each of the curves, and also computes a representative latent vector. Between these two\nendpoints, we are interested in computing $O-2$ intermediate spline points. We define the\ncontrol point estimation network as\n\\[\n\\textit{control}(x\\in\\mathbb{R}^3, z_1, z_2\\in\\mathbb{R}^{Z'}) = \\text{MLP}(x,z_1,z_2)\\to\\mathbb{R}^{O\\times\\mathbb{R}^3}\n\\]\nWhere $z_1,z_2$ are the\nrepresentative latent vectors from the anchor network, and it outputs $O-2$ spline points in\n$R^3$. We can also include a per-segment rigidity value, similar to above, but lose\nsome guarantees of continuity by doing so.\n\nIn order to evaluate the network at a given time $t$ and other inputs $x$, we first compute $k, t'$. Then, we compute\nthe anchor points at\n\\[ p_0,z_1=\\textit{anchor}(x,\\text{Emb}[k]) \\]\n\\[ p_\\text{end},z_2=\\textit{anchor}(x,\\text{Emb}[k+1]) \\]\nand the intermediate control points as \\[ p_{1\\cdots O-2} = \\text{control}(x,z_1,z_2) \\]\nThe final set of control points we use\nis the concatenation of the anchors with the intermediate points:\n$[p_0, p_{1\\cdots O-2}, p_\\text{end}]$. Using this set of control points, we evaluate the Bezier\ncurve spline at $t'$ using De Casteljau's algorithm:\n\\[ \\text{De Casteljau}([p_0, p_{1\\cdots O-2}, p_\\text{end}], t) = \\Delta(x) \\]\nBecause we are evaluating adjacent embeddings ($k, k+1$) for each anchor, we are guaranteed that the endpoints between each spline segment are identical. Information is also carried over between segments through the anchor's\nrepresentative latent vector, allowing the network to maintain velocity between curves if necessary.\n\nThis architecture enforces a guarantee of consistency, regardless of how long a reconstructed signal is. 
It also naturally allows movement at one time to be distinct from another,\nas while the endpoints of the spline are fixed, movement defined by the intermediate control points is allowed to be fully independent of other\nsegments. While we construct this architecture for long dynamic sequence reconstruction, we argue that it works more generally for generating $C^0$-continuous functions in any domain.\n\nOne obvious question is why a more complex architecture is necessary as compared to predicting all spline points and using only some of them. In the case of extremely long sequences, the number of spline points may increase linearly with the length of the function, and for dynamic NeRF we would have memory cost of $O(H\\times W\\times D\\times t)$ which is prohibitively expensive. With the proposed architecture, we are able to sample sparsely, maintaining a constant memory usage with respect to time.\n\nWe do yet not demonstrate the effectiveness of the architecture, due to time constraints, and the author's incompetence\\footnote{If this were to be published, I would demonstrate this architecture, but cannot due to time constraints.}.\n\\fi", "meta": {"hexsha": "1974fed043854b21548f46da056a83fc81943594", "size": 10746, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "c0_paper/method.tex", "max_stars_repo_name": "JulianKnodt/nerf_atlas", "max_stars_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 57, "max_stars_repo_stars_event_min_datetime": "2021-05-25T12:57:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T06:27:44.000Z", "max_issues_repo_path": "c0_paper/method.tex", "max_issues_repo_name": "JulianKnodt/nerf_atlas", "max_issues_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-07-26T22:28:40.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-29T20:51:59.000Z", "max_forks_repo_path": "c0_paper/method.tex", "max_forks_repo_name": "JulianKnodt/nerf_atlas", "max_forks_repo_head_hexsha": "6866713c498cea026cb215260a779a2c6c13246c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-05-25T12:36:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-28T04:20:12.000Z", "avg_line_length": 90.3025210084, "max_line_length": 1002, "alphanum_fraction": 0.759445375, "num_tokens": 2854, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392848011834, "lm_q2_score": 0.6224593312018546, "lm_q1q2_score": 0.5502785019735105}}
{"text": "\\documentclass{memoir}\n\\usepackage{linalg}\n\\usepackage{marginnote}\n\\begin{document}\n% \\section{}\t\n\\section{First Consequences of Axioms of a Vector Space}\nLet $(V,F,+,\\cdot)$ be a vector space. \n\\begin{prop}\nThe additive identity of $V$ is unique\n\\end{prop}\n\\begin{proof}\n\tSuppose $V$ has two additive identities $\\overline{0}, \\overline{0'}$. Then $\\overline{0} = \\overline{0}+ \\overline{0'} = \\overline{0'}$.\n\\end{proof}\n\n\\begin{prop}\n\tAdditive inverses are unique\n\\end{prop}\n\n\\begin{proof}\n\tLet $v\\in V$ be any vector. Suppose that $w, w'$ are additive inverses of v.\n\tThen \n\t\\begin{align*}\n\t\tw &= w + \\overline{0}\\\\\n\t\t  &= w + (v+w')\\\\\n\t\t  &= (w+v) + w'\\\\\n\t\t  &= \\overline{0} + w'\\\\\n\t\t  &= w'\n\t\\end{align*}\n\\end{proof}\n\n\\begin{prop}\n\tFor all $v\\in V, 0\\cdot v = \\overline{0}$\n\\end{prop}\n\n\\begin{proof}\n\t\\begin{align*}\n\t\t0 \\cdot v &= (0+0) \\cdot v\\\\\n\t\t\t  &= 0\\cdot v + 0\\cdot v\n\t\\end{align*}\n\tAdd $-0\\cdot v$ to both sides:\n\\begin{align*}\n\t-(0\\cdot v) + (0\\cdot v) &= -(0\\cdot v) + (0\\cdot v) + (0\\cdot v)\\\\\n\t\t\t\t &\\implies \\overline{0} = \\overline{0} + (0\\cdot v)\\\\\n\t\t\t\t &\\implies \\overline{0} = 0\\cdot v\n\\end{align*}\n\\end{proof}\n\n\\begin{hw}\n\tProve that $-v = (-1)*v$.\n\\end{hw}\n\\end{document}\n", "meta": {"hexsha": "28065d63d2826e7d1d1ef26fd23ccd59e170dc5c", "size": 1196, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Linear Algebra/Notes/source/09-04-19.tex", "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_issues_repo_path": "Linear Algebra/Notes/source/09-04-19.tex", "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_forks_repo_path": "Linear Algebra/Notes/source/09-04-19.tex", "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.0, "max_line_length": 138, "alphanum_fraction": 0.6003344482, "num_tokens": 484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7690802423634963, "lm_q1q2_score": 0.5502184600698146}}
{"text": "\n\\RequirePackage[l2tabu, orthodox]{nag} % Complaints if syntax isn't right.\n\\documentclass[10pt]{extarticle}\n\n\\usepackage[utf8]{inputenc}\n\\usepackage{extsizes} % Allow for more sizes such as 14pt or 17pt on document class.\n\\usepackage{mathtools} % Allows conditional math expressions, etc.\n\\usepackage{amsfonts} % Allows \\mathbb{R}, which is the real numbers symbol.\n\\usepackage[export]{adjustbox} % Wraps \\includegraphics with more keys (options).\n\n\n% For writing pseudocode snippets: http://ctan.mackichan.com/macros/latex/contrib/algorithmicx/algorithmicx.pdf\n\\usepackage{algorithm}\n\\usepackage{algpseudocode}\n\n\n\n\n\\usepackage[top=3.5cm, bottom=3.5cm, left=3.5cm, right=3.5cm]{geometry}\n\n\\usepackage{microtype} % Makes document more readable by optimizing space between letters.\n\n\\usepackage{units} % Adds \\nicefrac{1}{2}: (\u00bd), \\unit[5]{m}{s}: 5 m/s.\n\n\\usepackage[colorlinks=false, pdfborder={0 0 0}]{hyperref} % Allow clicking references and the table of contents in pdfs.\n\\usepackage{cleveref} % Adds fig/formula etc. before references (use \\cref).\n\n%%% New commands.\n\n% Don't output references in case they're empty - http://tex.stackexchange.com/questions/74476/how-to-avoid-empty-thebibliography-environment-bibtex-if-there-are-no-refere\n\n\\let\\myBib\\thebibliography\n\\let\\endmyBib\\endthebibliography\n\n\\renewcommand\\thebibliography[1]{\\ifx\\relax#1\\relax\\else\\myBib{#1}\\fi}\n\n\n% Norm -> ||something||\n\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}\n\n% Stdfig -> Used as \\stdfig{width}{label_name}{caption}\n% Requires: image called 'caption' in img folder.\n% Output: A figure with the given width, labeled as 'fig:label_name'\n\n\\newcommand{\\stdfig}[3]{\n    \\begin{figure}\n    \\centering\n    \\includegraphics[width = #1]{img/#2.eps}\n    \\caption{#3}\n    \\label{fig:#2}\n    \\end{figure}\n}\n\n\n\n\\begin{document} \n\n\n\n%% Front page.\n\\title{09.02 - Recommender Systems}\n\n    \n    \\date{}\n    \n\n    \n\n    \\maketitle\n\n\\newpage\n%% Abstract page.\n\n\n%% Table of contents page.\n\n\n\n%% Body start.\n\\section{Problem Formulation}\\label{problem-formulation}\n\nExample: Predicting movie ratings.\n\nImagine a company where users rate movies using zero to five stars.\n\nNotation:\n\n\\begin{itemize}\n\\itemsep1pt\\parskip0pt\\parsep0pt\n\\item\n  $n_u$ = number of users\n\\item\n  $n_m$ = number of movies\n\\item\n  $r(i, j)$ = 1 if user $j$ has rated movie $i$\n\\item\n  $y^{(i,j)}$ = rating given by user $j$ to movie $i$ (defined only if\n  $r(i, j)$ = 1)\n\\end{itemize}\n\nThe problem is: given this dataset, to look through all the missing\nratings on the dataset and fill them (\\cref{fig:movie_ratings_table}).\n\n\\stdfig{9cm}{movie_ratings_table}{Movie ratings table}\n\n\\section{Content Based\nRecommendations}\\label{content-based-recommendations}\n\nHow do we predict the ``?'' in the movies table\n(\\cref{fig:movie_ratings_table})?\n\nLet's guess that we have two additional features in the table: one that\nmeasures the amount of romance a movie has and another one that measures\nthe amount of action (\\cref{fig:movie_ratings_table_with_features}).\n\n\\stdfig{11cm}{movie_ratings_table_with_features}{Movie ratings table with features}\n\nIf we then add the interceptor term $x_0 = 1$, we have a feature vector\nfor each movie ($x^{(1)} = [1, \\, 0.9, \\,  0]^T$, for example). \\bigskip\n\nThe idea is to use linear regression for each different user and predict\nthe missing parameters with that. 
More formally: for each user $j$,\nlearn a parameter $\\theta^{(j)} \\in \\mathbb{R}^3$. Predict user $j$ as\nrating movie $i$ with $(\\theta^{(j)})^Tx^{(i)}$ stars. \\smallskip\n\nFor example, in \\cref{fig:movie_ratings_table_with_features}, if we\nwanted to predict ``Cute pupies of love'' for ``Alice'', and we already\ngot $\\theta$, we would do: $(\\theta^{(1)})^Tx^{(3)}$. \\smallskip\n\n\\subsection{Optimization Objective}\\label{optimization-objective}\n\nWe can describe this intention (learn $\\theta$ for the user $j$) with\nthe formula:\n\n\\[ \\min_{\\theta^{(j)}}  \\frac{1}{2} \\sum_{i:r(i, j)=1} ((\\theta^{(j)})^Tx^{(i)} - y^{(i,j)})^2 + \\frac{\\lambda}{2} \\sum_{k=1}^n(\\theta_k^{(j)})^2 \\]\n\nWhere $\\sum_{i:r(i, j)=1} $ is a for loop with all the rated movies by\nthat user. \\smallskip\n\nTo learn all the $\\theta$, we can do:\n\n\\[ \\min_{\\theta^{(1)}, \\dots, \\theta^{(n_u)}}  \\frac{1}{2} \\sum_{j=1}^{n_u} \\sum_{i:r(i, j)=1} ((\\theta^{(j)})^Tx^{(i)} - y^{(i,j)})^2 + \\frac{\\lambda}{2} \\sum_{j=1}^{n_u} \\sum_{k=1}^n(\\theta_k^{(j)})^2 \\]\n\nWhich is very, very similar to the standard linear regression equation\n(only filtering the unknows votes out).\n\n\\subsubsection{Gradient descent update}\\label{gradient-descent-update}\n\n\\[ \\theta_k^{(j)} := \\theta_k^{(j)} - \\alpha \\sum_{i:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) x_k^{(i)} \\,\\, \\text{(for $k$ = 0)} \\]\n\n\\[ \\theta_k^{(j)} := \\theta_k^{(j)} - \\alpha \\left( \\sum_{i:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) x_k^{(i)} + \\lambda\\theta_k^{(j)} \\right) \\,\\, \\text{(for $k \\neq$ 0)} \\]\n\n\\section{Collaborative Filtering}\\label{collaborative-filtering}\n\nThis algorithm has the characteristic of learning the features it needs\nby himself. \\smallskip\n\nIn the last example (\\cref{fig:movie_ratings_table_with_features}) we\nhad some features. Extracting those features is quite time consuming,\nthough. \\smallskip\n\nIn this new algorithm, we won't have the features\n(\\cref{fig:movie_ratings_table_unknown_features}). Let's imagine that\nthe users tells us whether they like romance and action movies (the\n$\\theta$). Then, it's possible to infer the values of $x_1$ and $x_2$.\n\\smallskip \n\n\\stdfig{12cm}{movie_ratings_table_unknown_features}{Move ratings with unknown features}\n\nWe can exploit that we know the ratings of the users and their\npreferences to know which value to assign to each movie feature. For\nexample, if a user says that he likes action movies and he rates a movie\nvery high, we can assume that that movie will have action.\n\n\\subsection{Optimization Algorithm}\\label{optimization-algorithm}\n\nGiven $\\theta^{(1)}, \\dots, \\theta^{(n_u)}$, to learn $x^{(i)}$:\n\n\\[ \\min_{x^{(i)}}  \\frac{1}{2} \\sum_{i:r(i, j)=1} ((\\theta^{(j)})^Tx^{(i)} - y^{(i,j)})^2 + \\frac{\\lambda}{2} \\sum_{k=1}^n(x_k^{(i)})^2 \\]\n\nWhich reads as: choosing the features so that the predicted value will\nbe similar to the actual value that we observe on the rating of the\nusers. \\smallskip\n\nWe can then compute this for all the movies. 
Given\n$\\theta^{(1)}, \\dots, \\theta^{(n_u)}$, to learn\n$x^{(1)}, \\dots, x^{(n_m)}$:\n\n\\[ \\min_{x^{(1)}, \\dots, x^{(n_m)}}  \\frac{1}{2} \\sum_{i=1}^{n_m} \\sum_{j:r(i, j)=1} ((\\theta^{(j)})^Tx^{(i)} - y^{(i,j)})^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{n_m} \\sum_{k=1}^n(x_k^{(i)})^2 \\]\n\n\\subsubsection{Gradient descend update}\\label{gradient-descend-update}\n\n\\[ x_k^{(i)} := x_k^{(i)} - \\alpha \\sum_{j:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) \\theta_k^{(j)} \\,\\, \\text{(for $k$ = 0)} \\]\n\n\\[ x_k^{(i)} := x_k^{(i)} - \\alpha \\left( \\sum_{j:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) \\theta_k^{(j)} + \\lambda x_k^{(i)} \\right) \\,\\, \\text{(for $k \\neq$ 0)} \\]\n\n\\subsection{How to continue}\\label{how-to-continue}\n\nWe've seen in content based filtering that given\n$x^{(1)}, \\dots, x^{(n_m)}$ we can estimate\n$\\theta^{(1)}, \\dots, \\theta^{(n_u)}$. \\smallskip\n\nIn collaborative filtering, given $\\theta^{(1)}, \\dots, \\theta^{(n_u)}$,\nwe can estimate $x^{(1)}, \\dots, x^{(n_m)}$. \\bigskip\n\nSo, this is the egg-chicken problem. What we can do, therefore, is\nrandomly guess $\\theta$ to learn $x$ (features). We can then use the $x$\nto learn $\\theta$, etc.\n\n\\section{Collaborative Filtering\nAlgorithm}\\label{collaborative-filtering-algorithm}\n\nWe've talked about the ideas of how if you're given features for movies,\nwe can learn parameters $\\theta$ for users. We've also seen how if we're\ngiven parameters for the users we can use that to extract features for\nthe movies. We'll now find an algorithm to solve that problem. \\bigskip\n\nWe could to the chicken-egg back and forth optimization algorithm, but\nthere is an algorithm that optimizes both $x$ and $\\theta$ at the same\ntime, so it's better. 
\\smallskip\n\nThe cost function is:\n\n\\[ J(x^{(1)}, \\dots, x^{(n_m)}, \\theta^{(1)}, \\dots, \\theta^{(n_u)}) = \\frac{1}{2} \\sum_{(i,j):r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right)^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{n_m} \\sum_{k=1}^n(x_k^{(i)})^2  + \\frac{\\lambda}{2} \\sum_{j=1}^{n_u} \\sum_{k=1}^n(\\theta_k^{(j)})^2\\]\n\nWhich is basically merging both cost functions (collaborative and\ncontent based filtering) into one, taking advantage that the less\nsquares sumation is the same in both and then adding the respective\nregularization term.\n\nWe also suppress the interceptor term $x_1 = 1$, as the algorithm will\nlearn it by himself if it's really needed.\n\n\\subsection{Gradient descent}\\label{gradient-descent}\n\nThe same as before for the $\\theta$ and $x$ respectively.\n\n\\subsection{Detailed algorithm}\\label{detailed-algorithm}\n\nWe can find a more detailed explanation in\n\\cref{alg:collaborative_filtering}.\n\n\\begin{algorithm}\n\\caption{Collaborative filtering algorithm} \\label{alg:collaborative_filtering}\n\\begin{algorithmic}[1]\n\\State Initialize $x^{(1)}, \\dots, x^{(n_m)}, \\theta^{(1)}, \\dots, \\theta^{(n_u)}$ to small random values.\n\\State Minimize $J(x^{(1)}, \\dots, x^{(n_m)}, \\theta^{(1)}, \\dots, \\theta^{(n_u)})$ using gradient descent (or an advanced optimization algorithm).\n\nFor every $j = 1, \\dots, n_u, i = 1, \\dots, n_m$:\n\n$ x_k^{(i)} := x_k^{(i)} - \\alpha \\left( \\sum_{j:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) \\theta_k^{(j)} + \\lambda x_k^{(i)} \\right) $\n\n$ \\theta_k^{(j)} := \\theta_k^{(j)} - \\alpha \\left( \\sum_{i:r(i, j)=1} \\left( (\\theta^{(j)})^Tx^{(i)} - y^{(i,j)}\\right) x_k^{(i)} + \\lambda\\theta_k^{(j)} \\right) $\n\n\\State For a user with parameters $\\theta$ and a movie with (learned) features $x$, predict a star rating of $\\theta^Tx$.\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Vectorization: Low Rank Matrix\nFactorization}\\label{vectorization-low-rank-matrix-factorization}\n\nWe'll see an alternative algorithm for solving the collaborative\nfiltering problem. \\bigskip\n\nTake the dataset we have (\\cref{fig:movie_ratings_table}), and put it\ndirectly onto a matrix called $Y$. Then, we can stack the $x$ vectors\nand the $\\theta$ vectors in two matrices, $X$ and $\\Theta$. Afterwards,\nwe can compute the prediction of the ratings simply by doing\n$X\\Theta^T$. 
All this can be seen in\n\\cref{fig:low_rank_matrix_factorization}\n\n\\stdfig{10cm}{low_rank_matrix_factorization}{Low rank matrix factorization}\n\n\\subsection{Finding related movies}\\label{finding-related-movies}\n\nFor each product $i$, we learn a feature vector\n$x^{(i)} \\in \\mathbb{R}^n$.\n\nHow to find 5 movies $j$ most related to movie $i$?\n\nFind the 5 movies $j$ with the smallest $\\norm{x^{(i)} - x^{(j)}}$.\n\n\n%% Body end.\n\n%% Bibliography.\n\n\\clearpage\n\n\n\n    \\nocite{*}\n\n\\bibliographystyle{plain}\n\\bibliography{references}\n\n\\end{document}", "meta": {"hexsha": "c70c3f5565e1016b344d77bdca9b7916c311186e", "size": 10887, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "theory/09.02 - Recommender Systems/09.02 - Recommender Systems.tex", "max_stars_repo_name": "GMadorell/coursera-machine-learning", "max_stars_repo_head_hexsha": "7633ea83218e6f22f8ce4032f8f64d878732c2e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-07-11T17:56:22.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-07T02:35:55.000Z", "max_issues_repo_path": "theory/09.02 - Recommender Systems/09.02 - Recommender Systems.tex", "max_issues_repo_name": "GMadorell/coursera-machine-learning", "max_issues_repo_head_hexsha": "7633ea83218e6f22f8ce4032f8f64d878732c2e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory/09.02 - Recommender Systems/09.02 - Recommender Systems.tex", "max_forks_repo_name": "GMadorell/coursera-machine-learning", "max_forks_repo_head_hexsha": "7633ea83218e6f22f8ce4032f8f64d878732c2e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.533557047, "max_line_length": 291, "alphanum_fraction": 0.6846697897, "num_tokens": 3526, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7154239957834733, "lm_q2_score": 0.7690802423634961, "lm_q1q2_score": 0.5502184600698145}}
{"text": "\\section{Appendix}\n\n\\newcommand{\\frule}[3]{\n  \\begin{prooftree}\n  \\hypo{#1}\n  \\infer1[(#3)]{#2}\n  \\end{prooftree}\n}\n\n\\begin{figure}[H]\n  \\begin{framed}\n  $$\\arraycolsep=0.75cm\\def\\arraystretch{2.75}\\begin{array}{c c}\n  % Hypothesis\n  \\frule{}{{?a} \\vdash {?a}}{H.} & \\\\\n  % And\n  \\frule{{?a}, {?b} \\vdash}{{?a} \\land {?b} \\vdash}{I.L.$\\land$} & \\frule{\\vdash {?a} \\quad \\vdash {?b}}{\\vdash {?a} \\land {?b}}{I.R.$\\land$} \\\\\n  \\frule{{?a} \\land {?b} \\vdash}{{?a}, {?b} \\vdash}{E.L.$\\land$} & \\frule{\\vdash {?a} \\land {?b}}{\\vdash {?a}}{E.R.$\\land_1$} \\\\ % cont.\n  & \\frule{\\vdash {?a} \\land {?b}}{\\vdash {?b}}{E.R.$\\land_2$} \\\\\n  % Or\n  \\frule{{?a} \\vdash \\quad {?b} \\vdash}{{?a} \\lor {?b} \\vdash}{I.L.$\\lor$} & \\frule{\\vdash {?a}, {?b}}{\\vdash {?a} \\lor {?b}}{I.R.$\\lor$} \\\\\n  \\frule{{?a} \\lor {?b} \\vdash}{{?a} \\vdash}{E.L.$\\lor_1$} & \\frule{\\vdash {?a} \\lor {?b}}{\\vdash {?a}, {?b}}{E.R.$\\lor$} \\\\ % cont.\n  \\frule{{?a} \\lor {?b} \\vdash}{{?b} \\vdash}{E.L.$\\lor_2$} & \\\\\n  % Implies\n  \\frule{\\vdash {?a} \\quad {?b} \\vdash}{{?a} \\Rightarrow {?b} \\vdash}{I.L.$\\Rightarrow$} & \\frule{{?a} \\vdash {?b}}{\\vdash {?a} \\Rightarrow {?b}}{I.R.$\\Rightarrow$} \\\\\n  \\frule{{?a} \\Rightarrow {?b} \\vdash}{\\vdash {?a}}{E.L.$\\Rightarrow_1$} & \\frule{\\vdash {?a} \\Rightarrow {?b}}{{?a} \\vdash {?b}}{E.R.$\\Rightarrow$} \\\\ % cont.\n  \\frule{{?a} \\Rightarrow {?b} \\vdash}{{?b} \\vdash}{E.L.$\\Rightarrow_2$} & \\\\\n  % Iff\n  \\frule{{?a} \\Rightarrow {?b}, {?b} \\Rightarrow {?a} \\vdash}{{?a} \\Leftrightarrow {?b} \\vdash}{I.L.$\\Leftrightarrow$} & \\frule{\\vdash {?a} \\Rightarrow {?b} \\quad \\vdash {?b} \\Rightarrow {?a}}{\\vdash {?a} \\Leftrightarrow {?b}}{I.R.$\\Leftrightarrow$} \\\\\n  \\frule{{?a} \\Leftrightarrow {?b} \\vdash}{{?a} \\Rightarrow {?b}, {?b} \\Rightarrow {?a} \\vdash}{E.L.$\\Leftrightarrow$} & \\frule{\\vdash {?a} \\Leftrightarrow {?b}}{\\vdash {?a} \\Rightarrow {?b}}{E.R.$\\Leftrightarrow_1$} \\\\ % cont.\n  & \\frule{\\vdash {?a} \\Leftrightarrow {?b}}{\\vdash {?b} \\Rightarrow {?a}}{E.R.$\\Leftrightarrow_2$}\n  \\end{array}$$\n  \\end{framed}\n  \\caption[Rules (1)]{Predefined rules provided in the front. For clarity, contexts have been omitted: the intended semantic is the one described in \\autoref{sec:proof-framework-rules}. ``H'', ``I'', ``E'', ``L'' and ``R'' respectively stand for ``Hypothesis'', ``Introduction'', ``Elimination'', ``Left'' and ``Right''. In the source, shorthand for these names are used, for instance ``I.L.$\\land$'' would be identified by \\code{introLAnd}. Most introduction rules map to a single kernel proof step. Elimination rules on the other hand rely on the cut rule.}\n  \\label{fig:rules-list-1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\begin{framed}\n  $$\\arraycolsep=0.75cm\\def\\arraystretch{3}\\begin{array}{c c}\n  % Not\n  \\frule{\\vdash {?a}}{\\neg {?a} \\vdash}{I.L.$\\neg$} & \\frule{{?a} \\vdash}{\\vdash \\neg {?a}}{I.R.$\\neg$} \\\\\n  \\frule{\\neg {?a} \\vdash}{\\vdash {?a}}{E.L.$\\neg$} & \\frule{\\vdash \\neg {?a}}{{?a} \\vdash}{E.R.$\\neg$} \\\\\n  % Eq. refl.\n  \\frule{{?s} = {?s} \\vdash}{\\vdash}{E.L.$=$} & \\frule{}{\\vdash {?s} = {?s}}{I.R.$=$} \\\\\n  % Forall\n  \\frule{{?p}({?t}) \\vdash}{\\forall x. {?p}(x) \\vdash}{I.L.$\\forall$} & \\frule{\\vdash {?p}(\\overline{?t})}{\\vdash \\forall x. {?p}(x)}{I.R.$\\forall$} \\\\\n  \\frule{\\forall x. 
{?p}(x) \\vdash}{{?p}({?t}) \\vdash}{E.L.$\\forall$} & \\frule{\\vdash \\forall x. {?p}(x)}{\\vdash {?p}(\\overline{?t})}{E.R.$\\forall$} \\\\\n  % Exists\n  \\frule{{?p}(\\overline{?t}) \\vdash}{\\exists x. {?p}(x) \\vdash}{I.L.$\\exists$} & \\frule{\\vdash {?p}({?t})}{\\vdash \\exists x. {?p}(x)}{I.R.$\\exists$} \\\\\n  \\frule{\\exists x. {?p}(x) \\vdash}{{?p}(\\overline{?t}) \\vdash}{E.L.$\\exists$} & \\frule{\\vdash \\exists x. {?p}(x)}{\\vdash {?p}({?t})}{E.R.$\\exists$} \\\\\n  % Iff subst.\n  \\frule{{?f}({?a}) \\vdash}{{?a} \\Leftrightarrow {?b}, {?f}({?b}) \\vdash}{I.L.S.$\\Leftrightarrow$} & \\frule{\\vdash {?f}({?a}) \\quad}{{?a} \\Leftrightarrow {?b} \\vdash {?f}({?b})}{I.R.S.$\\Leftrightarrow$} \\\\\n  \\frule{{?f}({?a}) \\vdash \\quad \\vdash {?a} \\Leftrightarrow {?b}}{{?f}({?b}) \\vdash}{E.L.S.$\\Leftrightarrow$} & \\frule{\\vdash {?f}({?a}) \\quad \\vdash {?a} \\Leftrightarrow {?b}}{\\vdash {?f}({?b})}{E.R.S.$\\Leftrightarrow$} \\\\\n  % Eq. subst.\n  \\frule{{?p}({?s}) \\vdash \\quad}{{?s} = {?t}, {?p}({?t}) \\vdash}{I.L.S.$=$} & \\frule{\\vdash {?p}({?s})}{{?s} = {?t} \\vdash {?p}({?t})}{I.R.S.$=$} \\\\\n  \\frule{{?p}({?s}) \\vdash \\quad \\vdash {?s} = {?t}}{{?p}({?t}) \\vdash}{E.L.S.$=$} & \\frule{\\vdash {?p}({?s}) \\quad \\vdash {?s} = {?t}}{\\vdash {?p}({?t})}{E.R.S.$=$}\n  \\end{array}$$\n  \\end{framed}\n  \\caption[Rules (2)]{Continuation of \\autoref{fig:rules-list-1}. Schemas marked with an overline are constrained to resolve to fresh schemas; namely schemas that do not appear elsewhere in the sequent, apart from this binding.}\n  \\label{fig:rules-list-2}\n\\end{figure}\n\n\\input{figures/proof-sample.scala}\n\n\\begin{figure}[H]\n  \\centering\n  \\input{figures/proof-sample.lisa.tex}\n  \\caption[Sample proof (kernel)]{Example of a generated kernel proof following its description in the front (\\autoref{fig:scala-proof-sample}). This proof is independent of any theory but depends on the axiom of powerset, the definition of the subset relation and the axioms of extensionality and union (imports 1-4). Such a kernel proof is objectively difficult to be produced by hand; in contrast it is relatively straightforward to be achieved in the front. 
Note that the latex code used in this figure is also part of the output.}\n  \\label{fig:lisa-proof-sample}\n\\end{figure}\n", "meta": {"hexsha": "531a3917ef329382eeaf4e5b94538d94dccaa167", "size": 5426, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "thesis/report/chapters/appendix.tex", "max_stars_repo_name": "FlorianCassayre/master-project", "max_stars_repo_head_hexsha": "34a77336497cd15ce5f005639d758f002c234d00", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-21T16:29:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-22T14:37:35.000Z", "max_issues_repo_path": "thesis/report/chapters/appendix.tex", "max_issues_repo_name": "FlorianCassayre/master-project", "max_issues_repo_head_hexsha": "34a77336497cd15ce5f005639d758f002c234d00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "thesis/report/chapters/appendix.tex", "max_forks_repo_name": "FlorianCassayre/master-project", "max_forks_repo_head_hexsha": "34a77336497cd15ce5f005639d758f002c234d00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.4225352113, "max_line_length": 559, "alphanum_fraction": 0.5807224475, "num_tokens": 2239, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7690802370707283, "lm_q2_score": 0.7154239897159439, "lm_q1q2_score": 0.5502184516168244}}
{"text": "\\documentclass[10pt,a4paper]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage{makeidx}\n\\usepackage{graphicx}\n\\usepackage[all]{xy}\n\\newcommand{\\zz}{\\mathbb{Z}}\n\\author{moi}\n\\begin{document}\n\\section{Intro}\nWe are briefly discussing bialgebras and their module algebras. Appropriate knowledge in the theory of bialgebras and category theroy is recommanded. An interesting example will be a central part of this paper.\n\\subsection{Algebras and coalgebras}\nWe say ring when we are talking about unital commutative rings, rings with\n$$a b = b a,\\ \\forall a, b \\in R \\ \\mathrm{and}\\ \\exists! 1_R \\in R: 1_R a = a 1_R = a, \\forall a \\in R$$\n\\subsubsection{Algebras}\nAn $R$-algebra is a two-sided $R$-module $A$ with an $R$-linear map $\\mu_A : A \\otimes A \\longrightarrow A$, called the multiplication. We call $A$ associative if\n$$\\xymatrix{\nA^{\\otimes 3} \\ar[rr]^{\\mu_A \\otimes id_A} \\ar[d]_{id_A \\otimes \\mu_A} && A^{\\otimes 2}\\ar[d]^{\\mu_A}\\\\\nA^{\\otimes 2} \\ar[rr]_{\\mu_A} && A\\\\\n}$$\ncommutes. We call $A$ unital if there is an $R$-linear map $\\eta_A : R \\longrightarrow A$ such that\n$$\\xymatrix{\nR \\otimes A \\ar[rr]^{\\eta_A \\otimes id_A} \\ar[rrd]_\\simeq & & A \\otimes A\\ar[d]^{\\mu_A} && A \\otimes R \\ar[ll]_{id_A \\otimes \\eta_A}\\ar[lld]^\\simeq\\\\\n&&A&&\\\\\n}$$\ncommutes. The triple $(A, \\mu_A, \\eta_A)$ denotes a associative unital algebra. If no ambiguity can arise we will omit the subscripts. Furthermore, we will omit the structure maps when clear from context.\n\\paragraph{Commutative algebras}\nThe $R$-linear map\n$$\\tau_{A\\otimes B} : A \\otimes B \\longrightarrow B \\otimes A, a \\otimes b \\longmapsto b \\otimes a$$\nis called the flip-isomorphism for two $R$-modules $A, B$. We say an $R$-algebra is commutative if\n$$\\xymatrix{\nA \\otimes A \\ar[rr]^{\\tau_{A\\otimes A}} \\ar[rrd]_{\\mu}&& A \\otimes A\\ar[d]^\\mu\\\\\n&&A\\\\\n}$$\ncommutes.\n\\paragraph{Examples} Some examples:\n\\begin{description}\n\\item[General unital rings] Clearly, all rings in theirselves are commutative associative unital algebras. In particular, $(\\zz, \\mu_{\\zz}, \\eta_{\\zz})$ is an $\\zz$-algebra and for any ring $(R, + , \\cdot, 1)$ the\nmap\n$$\\varphi : \\zz \\longrightarrow R, n \\longmapsto \\begin{cases}\n\\sum_{n\\ \\mathrm{times}} 1_R & n \\geq 0\\\\\n\\sum_{n\\ \\mathrm{times}} -1_R & n < 0\\\\\n\\end{cases}$$\nis an $\\zz$-algebra homomorphism. This summed up in the proposition: the class of unital associative algebra forms a category, with $\\zz$-algebra homomorphisms as morphisms and has $\\zz$ as inital object.\n\\item[Lie algebras] We call an algebra a Lie algebra if\n$$\\ker (id_{A \\otimes A} - \\tau_{A \\otimes A}) \\subset \\ker \\mu_A$$\nand\n$$\\ker \\left[((id \\otimes \\mu) \\mu)(1 + \\zeta_3 + \\zeta_3^2)\\right] \\subset A.$$\nThe first condition is equivalent in demanding $A$ is anticommutative, i.e.\n$$\\tau \\mu = - \\mu$$\nthe second is called Jacobian identity. Any given associative $R$-algebra $A$, has a Lie algebras structure:\n$$(\\mathrm{End}_R(M), \\mu = [\\_,\\_]),\\ \\mu = [a \\otimes b \\longmapsto a b - b a].$$\nIt is denoted by $\\mathfrak{g}(A)$ and its multiplication is called commutator. For a given $R$-module $M$, the endomorphism algebra $\\mathrm{End}_R(M)$ is a Lie algebra - denoted by $\\mathfrak{gl}_(M)$. The matrix ring $M_n(R)$ has the general Lie algebra $\\mathfrak{gl}_n(R)$ as Lie algebra. 
Its elements are all matrices and its multiplication is the commutator. We remark that these $R$-algebras are in general not associative or commutative. Some other Lie algebras are:\n\\begin{description}\n\\item[$\\mathfrak{d}$] the Lie algebra of all upper diagonal matrices - a sub Lie algebra if $\\mathfrak{gl}_n(R)$.\n\\item[$\\mathfrak{n}$] the Lie algebra of all strictly upper diagonal matrices, a sub Lie algebra of $\\mathfrak{d}$.\n\\item[$\\mathfrak{sl}_n(R)$] the Lie algebra of all trace zero matrices.\n\\end{description}\n\\item[Ring extensions] If $R$ is a ring and $S/R$ is a ring extension, i.e. $S$ is an $R$-algebra,\n\\end{description}\n\\newpage\n\\section{Example}\nLet $R$ be a ring with $2$ not dividing its characteristic. We want to study the an interesting example of a module algebra.\n\\subsection{Ring of Laurent polynomials}\nLet $R$ be as above and $R[z, z^{-1}]$ be the ring of Laurent polynomials. We may wish to identify $R_1 := R[z,z^{-1}]$ with $R_2 := R[x,y]/\\left<x y - 1 \\right>$ with standard $R$-derivations:\n$$\\partial_x = \\frac{\\partial}{\\partial x},\\ \\partial_y = \\frac{\\partial}{\\partial y},$$\nmaking $\\left(R_1, \\left\\{\\partial_x,\\partial_y\\right\\}\\right)$ an differential $R$-algebra. Since the commutator $\\left[\\partial_x,\\partial_y\\right]$ is trivial it is a partial differential algebra.\n\\subsubsection{The $\\mathfrak{sl}_2(R)$-module}\nWe want to introduce two new $R$-linear maps $\\partial_X, \\partial_Y \\in \\mathrm{End}_R(R_1)$:\n$$\\partial_X := \\partial_x - y^2 \\partial_y,\\ \\partial_Y := \\partial_y - x^2 \\partial_x.$$\n\\paragraph{Claim}\nWe are proposing:\n\\begin{enumerate}\n\\item\\label{claim01} $\\left(R_1, \\left\\{\\partial_X,\\partial_Y\\right\\}\\right)$ is a differential $R$-algebra or equivalently, $\\partial_X, \\partial_Y$ are $R$-derivations.\n\\item\\label{claim02} $R_1$ is an $\\mathfrak{sl}_2(R)$-module or equivalently, $$\\mathfrak{g}_R(\\{\\partial_X, \\partial_Y\\}) := \\left<\\partial_X, \\partial_Y\\right>_{\\mathrm{Lie-alg}} \\simeq \\mathfrak{sl}_2(R).$$\n\\item\\label{claim03} $R_1$ is an $U(\\mathfrak{sl}_2(R))$-module algebra.\n\\end{enumerate}\nWe are proving our claims step-by-step.\n\\begin{description}\n\\item[ad \\ref{claim01}] We only need to show the Leibniz-rule as $R$-linearity is aready assumed. 
Let $r, s \\in R_1$:\n$$\\begin{array}{rcl}\n\\partial_X(r s) &=& (\\partial_x - y^2 \\partial_y)(r s)\\\\\n&&\\\\\n &=& \\partial_x(r s) - y^2 \\partial(r s)\\\\\n &&\\\\\n &=& \\partial_x(r) s + r \\partial_x(s) - y^2(\\partial_y(r) s + r \\partial_y(s))\\\\\n &&\\\\\n &=& \\underbrace{\\partial_x(r) s - y^2 \\partial_y(r) s}_{\\partial_X(r) s} +\\underbrace{r \\partial_x(s) - r y^2 \\partial_y(s)}_{r \\partial_X(s)}\\\\\n &&\\\\\n &=& \\partial_X(r) s + r d\\partial_X(s)\\\\\n \\end{array}$$\nBy the symmetry of definition, we see that both maps are indeed $R$-derivations.\n\\item[ad \\ref{claim02}] Let us commute the commutator of $\\partial_X$ and $\\partial_Y$:\n$$\\begin{array}{rcl}\n[\\partial_X, \\partial_Y] &=& (\\partial_x - y^2 \\partial_y)(\\partial_y - x^2 \\partial_x) - (\\partial_y - x^2 \\partial_x)(\\partial_x - y^2 \\partial_y)\\\\\n&&\\\\\n&=& \\partial_x \\partial_y - 2 x \\partial_x - x^2 \\partial_x^2 - y^2 \\partial_y^2 + x^2 y^2 \\partial_y \\partial_x \\\\\n&&\\\\\n&& -(\\partial_y \\partial_x - 2 y \\partial_y - y^2 \\partial_y^2 - x^2 \\partial_x^2 + x^2 y^2 \\partial_x \\partial_y)\\\\ \n&&\\\\\n&=& 2 (y \\partial_y - x \\partial_x)\\\\\n\\end{array}$$\nThis endomorphism is clearly an $R$-derivation. Setting $\\partial_H := [\\partial_X,\\partial_Y]$ we will continue:\n$$\\begin{array}{rcl}\n[\\partial_H, \\partial_X] &=& 2 [(y \\partial_y - x \\partial_x)(\\partial_x - y^2 \\partial_y) - (\\partial_x - y^2 \\partial_y)(y \\partial_y - x \\partial_x)]\\\\\n&&\\\\\n&=& 2 \\left[y \\partial_y \\partial_x - 2 y^2 \\partial_y - y^3 \\partial_y^2 - x \\partial_x^2 + x y^2 \\partial_x \\partial_y\\right]\\\\\n&&\\\\\n&& - 2\\left[y \\partial_x \\partial_y - \\partial_x - x \\partial_x^2 - y^2 \\partial_y - y^3 \\partial_y ^2 + x y^2 \\partial_y \\partial_x\\right]\\\\\n&&\\\\\n&=& 2 \\left(\\partial_x - y^2 \\partial_y\\right) = 2 \\partial_X\\\\\n\\end{array}$$\nand\n$$\\begin{array}{rcl}\n[\\partial_H, \\partial_Y] &=& 2 [(y \\partial_y - x \\partial_x)(\\partial_y - x^2 \\partial_x) - (\\partial_y - x^2 \\partial_x)(y \\partial_y - x \\partial_x)]\\\\\n&&\\\\\n&=& 2 \\left[y \\partial_y^ 2 - x^2 y \\partial_y \\partial_x - x \\partial_x \\partial_y + 2 x^2 \\partial_x + x^3 \\partial_x^2\\right]\\\\\n&&\\\\\n&& - 2\\left[\\partial_y + y \\partial_y^2 - x \\partial_y \\partial_x - x^2 y \\partial_x \\partial_y + x^2 \\partial_x + x^3 \\partial_x^2\\right]\\\\\n&&\\\\\n&=& -2 \\left(\\partial_y - x^2 \\partial_x\\right) = -2 \\partial_Y\\\\\n\\end{array}$$\nwhich proves our second claim. 
We remark that this proof is applicable to $\\left(R[x,y], \\left\\{\\partial_X,\\partial_Y\\right\\}\\right)$.\n\\item[ad \\ref{claim03}] Let %$i, j, k, \\in \\mathbb{Z}_{\\geq 0}$ and \n$m, n \\in \\mathbb{Z}$ and $\\Psi_{R_1} := \\left[\\partial_Z \\otimes r \\longmapsto ev_r(\\partial_Z) := \\partial_Z(r)\\right]$ for $Z = H, X, Y$:\n$$\\begin{array}{rcl}\n\\partial_H \\partial_X \\partial_Y \\otimes z^m \\otimes z^n &\\stackrel{\\Delta \\otimes id_{R_1^{\\otimes 2}}}{\\longmapsto}& [(1 \\otimes \\partial_H + \\partial_H \\otimes 1) (1 \\otimes \\partial_X + \\partial_X \\otimes 1)\\\\\n&& (1 \\otimes \\partial_Y + \\partial_Y \\otimes 1)] \\otimes z^m \\otimes z^n\\\\\n&&\\\\\n&=& %\\sum_{i_0, j_0, k_0 \\in \\{0,1\\}} %\\substack{0 \\leq i_0 \\leq i\\\\0 \\leq j_0 \\leq j\\\\0 \\leq k_0 \\leq k\\\\}}\\left(\n%\\begin{array}{c}\n%i\\\\i_0\\\\\n%\\end{array}\\right)\\left(\n%\\begin{array}{c}\n%j\\\\j_0\\\\\n%\\end{array}\\right)\\left(\n%\\begin{array}{c}\n%k\\\\k_0\\\\\n%\\end{array}\\right) \n\\sum_{i_0, j_0, k_0 \\in \\{0,1\\}}%\\substack{0 \\leq i_0 \\leq 1\\\\0 \\leq j_0 \\leq 1\\\\0 \\leq k_0 \\leq 1\\\\}}\n\\partial_H^{1 - i0} \\partial_X^{1 - j_0} \\partial_Y^{1 - k_0} \\\\\n&&\\\\&&\\otimes \\partial_H^{i0} \\partial_X^{j_0} \\partial_Y^{k_0} \\otimes z^m \\otimes z^n\\\\\n&&\\\\\n&\\stackrel{f}{\\longmapsto}& \\sum_{i_0, j_0, k_0 \\in \\{0,1\\}}%\\substack{0 \\leq i_0 \\leq 1\\\\0 \\leq j_0 \\leq 1\\\\0 \\leq k_0 \\leq 1\\\\}}%\\left(\n%\\begin{array}{c}\n%i\\\\i_0\\\\\n%\\end{array}\\right)\\left(\n%\\begin{array}{c}\n%j\\\\j_0\\\\\n%\\end{array}\\right)\\left(\n%\\begin{array}{c}\n%k\\\\k_0\\\\\n%\\end{array}\\right) \n\\partial_H^{1 - i0} \\partial_X^{1 - j_0} \\partial_Y^{1 - k_0}(z^m) \\\\\n&&\\\\&&\\otimes \\partial_H^{i0} \\partial_X^{j_0} \\partial_Y^{k_0}(z^n)\\\\\n&&\\\\\n&=& \\partial_H(\\partial_X(\\partial_Y(z^m))) \\otimes z^n + \\partial_H(\\partial_X(z^m)) \\otimes \\partial_Y(z^n)\\\\\n&& + \\partial_H(\\partial_X(z^m)) \\otimes \\partial_X(z^n) + \\partial_X(\\partial_Y(z^m)) \\otimes \\partial_H(z^n)\\\\\n&& + \\partial_Y(z^m) \\otimes \\partial_H(\\partial_X(z^n)) + \\partial_X(z^m) \\otimes \\partial_H(\\partial_X(z^n)) \\\\\n&& + \\partial_H(z^m) \\otimes \\partial_X(\\partial_Y(z^n)) + z^m \\otimes \\partial_H(\\partial_X(\\partial_Y(z^n)))\\\\\n\\end{array}$$\nwhere $f = (\\Psi_{R_1} \\otimes \\Psi_{R_1}) (id_{U(\\mathfrak{sl}_2(R))} \\otimes \\tau_{U(\\mathfrak{sl}_2(R)) \\otimes R_1}\\otimes id_{R_1})$.\n$$\\begin{array}{rcl}\n\\partial_H\\partial_X\\partial_Y \\otimes z^m \\otimes z^n &\\stackrel{id \\otimes \\mu_{R_1}}{\\longmapsto}&\n\\partial_H \\partial_X \\partial_Y \\otimes z^{m + n}\\\\\n&&\\\\\n&\\stackrel{\\Psi_{R_1}}{\\longmapsto}& \\partial_H(\\partial_X(\\partial_Y(z^{m + n})))\\\\\n&&\\\\\n&=& \\partial_H(\\partial_X(\\partial_Y(z^m) z^{n})) + \\partial_H(\\partial_X(z^m \\partial_Y(z^n)))\\\\\n&&\\\\\n&=& \\partial_H(\\partial_X(\\partial_Y(z^m)) z^{n}) + \\partial_H(\\partial_Y(z^m) \\partial_X(z^{n}))\\\\\n&&\\\\\n&& + \\partial_H(\\partial_X(z^m) \\partial_Y(z^n)) + \\partial_H(z^m \\partial_X(\\partial_Y(z^n)))\\\\\n&&\\\\\n&=& \\partial_H(\\partial_X(\\partial_Y(z^m))) z^{n} + \\partial_X(\\partial_Y(z^m)) \\partial_H(z^n)\\\\\n&&\\\\\n&& + \\partial_H(\\partial_Y(z^m)) \\partial_X(z^{n}) + \\partial_Y(z^m) \\partial_H(\\partial_X(z^n))\\\\\n&&\\\\\n&& + \\partial_H(\\partial_X(z^m)) \\partial_Y(z^n) + \\partial_X(z^m) \\partial_H(\\partial_Y(z^n))\\\\\n&&\\\\\n&& + \\partial_H(z^m) \\partial_X(\\partial_Y(z^n)) + z^m 
\\partial_H(\\partial_X(\\partial_Y(z^n)))\\\\\n&&\\\\\n&=& \\mu_{R_1}(f((\\Delta \\otimes id_{R_1^{\\otimes 2}})(\\partial_H \\partial_X \\partial_Y \\otimes z^m \\otimes z^n)))\\\\\n\\end{array}$$\nThe case of general powers $\\partial_H^i \\partial_X^j \\partial_Y^k$ follows inductively, applying the binomial formula to each power. Furthermore, for $d = \\sum_{i,j,k} d_{i,j,k} \\partial_H^i \\partial_X^j \\partial_Y^k$ we have\n$$\\Psi_{R_1} (d \\otimes 1_{R_1}) = d_{0,0,0},$$\nsince every summand with $i + j + k \\geq 1$ annihilates $1_{R_1}$. Hence the two commutative diagrams defining a module algebra commute, which proves the last claim.\n\\end{description}\nAs we stated earlier, the proof for each claim can be easily translated to $\\left(R[x,y], \\{\\partial_X,\\partial_Y\\}\\right)$. This is because $R[x,y]$ is itself a $U(\\mathfrak{sl}_2(R))$-module algebra and the ideal\n$$I = \\left<x y - 1 \\right>$$\nis $U(\\mathfrak{sl}_2(R))$-stable:\n$$\\begin{array}{rcl}\n\\partial_X(f (x y - 1)) &=& \\partial_X(f) (x y - 1) + f \\partial_X(x y - 1)\\\\\n&&\\\\\n&=& \\underbrace{\\partial_X(f) (x y - 1)}_{\\in I} + f (\\partial_X(x) y + x \\partial_X(y))\\\\\n&&\\\\\n&\\equiv& f (y - x y^2) \\mod I \\equiv -f y (x y - 1) \\mod I \\in I\\\\\n&&\\\\\n\\partial_Y(f (x y - 1)) &=& \\partial_Y(f) (x y - 1) + f \\partial_Y(x y - 1)\\\\\n&&\\\\\n&=& \\underbrace{\\partial_Y(f) (x y - 1)}_{\\in I} + f (\\partial_Y(x) y + x \\partial_Y(y))\\\\\n&&\\\\\n&\\equiv& f (x - x^2 y) \\mod I \\equiv -f x (x y - 1) \\mod I \\in I\\\\\n\\end{array}$$\nThis proves $\\partial_Z(I) \\subset I$ for $Z = X, Y$. Furthermore, $\\partial_X \\circ \\partial_Y (I) \\subset I$ and $\\partial_Y \\circ \\partial_X(I) \\subset I$, implying $\\partial_H(I) \\subset I$. Therefore, $I$ is $D = U(\\mathfrak{sl}_2(R))$-stable. In particular, we have that the projection\n$$\\pi : R[x,y] \\longrightarrow R[x,y]/I$$\nis a homomorphism of $D$-module algebras. Now, we would like to know if $R_1$ is simple as a $D$-module algebra. Assume that $J \\subset R_1$ is a $D$-stable ideal. Let $F = \\{f_1, \\dots, f_n\\} \\subset R_1\\backslash R_1^\\times$ be a finite set of generators of $J$. Then\n$$\\begin{array}{rcl}\n\\partial_Z\\left(\\sum_{i=1}^n g_i f_i\\right) &=& \\sum_{i=1}^n \\left(\\underbrace{\\partial_Z(g_i) f_i}_{\\in J} + g_i \\partial_Z(f_i)\\right)\\\\\n&&\\\\\n&\\equiv& \\sum_{i=1}^n g_i \\partial_Z(f_i) \\mod J \\equiv 0 \\mod J\\\\\n&\\Leftrightarrow&\\\\\n\\sum_{i=1}^n g_i \\partial_Z(f_i) &=& \\sum_{j=1}^n h_j f_j,\\\\\n\\end{array}$$\nfor $Z = X, Y, H$ and $h_j \\in R_1$. Now, we are going to use induction on $n$. Firstly, let $n = 1$. Then\n$$F = \\{f\\}\\ \\mathrm{and}\\ g \\partial_X(f) = h f$$\nfor appropriate $g, h \\in R_1$. A non-trivial solution is clearly $g = f$, $h = \\partial_X(f)$. If $f$ is not square-free, i.e. there is an $h'$ with $h'^2\\mid f$ and $h'^3\\nmid f$, we may consider $f_{h'} = f/h'^2$ as an ideal generator. If $f = \\sum_{i=-m}^n f_i x^i$ we may write $f$ as\n$$f = x^{-m} \\underbrace{\\sum_{i=0}^{n+m} f_{i-m} x^i}_{\\hat{f}}$$\ni.e. $f$ is associated to some element $\\hat{f} \\in R[x]$. 
Hence $\\partial_Z(f) \\in R_1 f$ if and only if $\\partial_Z(x^{-m}) \\hat{f} + x^{-m} \\partial_Z(\\hat{f}) \\in R_1 f$.\n$$\\begin{array}{rcl}\nf = \\sum_{i=-m}^n f_i x^i &\\stackrel{\\partial_X}{\\longmapsto}& \\sum_{i=0}^{n-1} (i + 1) f_{i+1} x^i - \\sum_{i=2}^{m + 1} (i - 1) f_{1 - i} x^{-i}\\\\\n&&\\\\\nn f - x \\partial_X(f) &=& \\sum_{i=0}^{n-1} (n - i) f_i x^i + \\sum_{i=1}^m (n + i) f_{-i} x^{-i}\\\\\n&&\\\\\n\\frac{f_{n-1}}{f_n} \\partial_X(f) - (n - 1) (n f + x \\partial_X(f)) &=& \n\\end{array}$$\n\\end{document}", "meta": {"hexsha": "390476781e9bd7e59edd0f50757af8bc69787dca", "size": 14698, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Laurent_q_sl/laurent_p_sl.tex", "max_stars_repo_name": "gmuel/texlib", "max_stars_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Laurent_q_sl/laurent_p_sl.tex", "max_issues_repo_name": "gmuel/texlib", "max_issues_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Laurent_q_sl/laurent_p_sl.tex", "max_forks_repo_name": "gmuel/texlib", "max_forks_repo_head_hexsha": "1a3fab54f2e03d9ce656f9b8a5b58e26c3c93a02", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.3534482759, "max_line_length": 475, "alphanum_fraction": 0.6492039733, "num_tokens": 5753, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7981867777396212, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5501946240453174}}
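A similarly minimal SymPy sketch (again our addition, under the same assumptions as before) confirms the ideal-stability computation for $I = \left<x y - 1\right>$ carried out above:
\begin{verbatim}
# Check that partial_X(x y - 1) = -y (x y - 1) and
# partial_Y(x y - 1) = -x (x y - 1), so that I = <x y - 1> is stable.
import sympy as sp

x, y = sp.symbols('x y')
g = x * y - 1

dXg = sp.diff(g, x) - y**2 * sp.diff(g, y)  # partial_X(x y - 1)
dYg = sp.diff(g, y) - x**2 * sp.diff(g, x)  # partial_Y(x y - 1)

assert sp.expand(dXg + y * g) == 0
assert sp.expand(dYg + x * g) == 0
\end{verbatim}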
{"text": "\\section{Gradient-based Length-scale Optimization}\\label{sec:ls}\n\nIn the previous sections, we defined preference learning models that \nincorporate GP priors over the latent functions.\nThe covariances of these GPs are defined by a kernel function $k$, \noften of the following form:\n\\begin{flalign}\nk_{\\bs \\theta}(\\bs x, \\bs x') = \\prod_{d=1}^D k_d\\left(\\frac{|x_d - x_d'|}{l_d}, \\bs\\theta_d \\right),\n\\label{eq:kernel}\n\\end{flalign}\nwhere $D$ is the number of features, \n$l_d$ is a length-scale hyper-parameter,\nand $\\bs \\theta_d$ are additional hyper-parameters for an individual \nfeature kernel, $k_d$.\n%Each $k_d$ is a function of the distance between the $d$th feature values in \n%feature vectors $\\bs x$ and  and $\\bs x'$.\n%The product over features in $k$ means that data points have \n%high covariance only if the kernel functions, $k_d$, for all features are high \n%(a soft AND operator). \nIt is possible to replace the product with a sum, allowing covariance to increase\nfor every feature that is similar (analogous to OR),\nrather than only being high if all dimensions are similar (AND).\n%or other combinations of the individual feature kernels.\n%The choice of combination over features is therefore an additional hyper-parameter.\n% citations? \nThe length-scale controls the smoothness of $k_d$\nacross the feature space.\n%and the contribution of each feature to the model. \nIf a feature has a large length-scale,\nits values have less effect on $k_{\\bs\\theta}(\\bs x, \\bs x') $\nthan if it has a shorter length-scale.\nHence, it is important to set $l_d$ to correctly capture feature relevance.\nA computationally frugal option is a median heuristic, which effectively normalizes\nthe feature but allows extreme values to remain outliers: \n\\begin{flalign}\n l_{d,MH} = D \\mathrm{median}( \\{ |x_{i,d} - x_{j,d}| \\forall i=1,..,N, \\forall j=1,...,N\\} ).\n\\end{flalign}\n%The motivation is that the median will normalize the feature, so that features\n%are equally weighted regardless of their scaling. By using a median to perform this \n%normalization, \nMultiplying the median by the number of features, $D$,\nprevents  the average covariance $k_{\\bs \\theta}(\\bs x, \\bs x')$ between items\nfrom increasing as we add more features using the \nproduct kernel in Equation \\ref{eq:kernel}.\nThis heuristic has been shown to work reasonably well for the task of \ncomparing distributions~\\citep{gretton2012optimal}, but has %is a simple heursitic with\n no guarantees of optimality. \n\nAn alternative for setting $l_d$ is Bayesian model selection using \n\\emph{type II maximum likelihood}, \nwhich chooses the value of $l_d$ that \nmaximizes the approximation to the log marginal likelihood, %, $p(\\bs y | \\bs \\theta)$,\n%Since the marginal likelihoods for our models are intractable, we maximize\n%the value of the variational lower bound, \n$\\mathcal{L}$.\n% after convergence of the\n%inference algorithm \n(Equation \\ref{eq:lowerbound} for GPPL and and Equation \\ref{eq:lowerbound_crowd} for crowdGPPL). 
\nOptimizing kernel length-scales in this manner is known as automatic relevance determination (ARD)~\\citep{rasmussen_gaussian_2006}, since the optimal\nvalue of $l_d$ depends on the relevance of feature $d$.\n% Removing irrelevant features could improve performance, \n% since it reduces the dimensionality of the space of the preference function.\n%A problem when using text data is that large vocabulary sizes and additional linguistic features \n%lead to a large number of dimensions, $D$. \n%The standard maximum likelihood II optimisation requires \n%$\\mathcal{O}(D)$ operations to tune each length-scale.\n%To perform ARD on feature $d$, \n%we only need to be able to evaluate $\\mathcal{L}$ \n%after variational inference has converged with any given value of $l_d$.\n%However, \nComputing derivatives of $\\mathcal{L}$ \nwith respect to $l_d$ enables the use of\nefficient gradient-based optimization methods\nsuch as L-BFGS-B~\\citep{zhu1997algorithm}.\n%which perform iterative optimization, \n%using gradients to guide changes for all $D$ length-scales simultaneously.\nFor the single user GPPL, the required gradient \nwith respect to the $d$th length-scale, $l_d$, is as follows:\n%Following the derivations in Appendix \\ref{sec:vb_eqns}, Equation \\ref{eq:gradient_ls},\n\\begin{flalign}\n& \\nabla_{l_{\\! d}} \\mathcal{L} =  \n\\frac{\\partial \\mathcal{L}}{\\partial \\hat{\\bs f}_m} \\frac{\\partial \\hat{\\bs f}_m}{\\partial l_d}\n+ \\frac{\\partial \\mathcal{L}}{\\partial \\bs S^{-1}} \\frac{\\partial \\bs S^{-1}}{\\partial l_d}\n+ \\frac{\\partial \\mathcal{L}}{\\partial a} \\frac{\\partial a}{\\partial l_d}\n+ \\frac{\\partial \\mathcal{L}}{\\partial b} \\frac{\\partial b}{\\partial l_d}\n+ \\frac{\\partial \\mathcal{L}}{\\partial \\bs K}\\frac{\\partial \\bs K}{\\partial l_d}. & \n\\end{flalign}\nThe partial derivatives with respect to the variational parameters \n$\\hat{\\bs f}_m$, $\\bs S$, $a$ and $b$ \n%arise because they \ndepend indirectly on the length-scale through the expectations \nin the variational $q$ factors. 
\nHowever, when the variational inference algorithm has converged,\n$\\mathcal{L}$ is at a maximum, %$\\frac{\\partial \\mathcal{L}}{\\partial \\hat{\\bs f}_m}$, \n%$\\frac{\\partial \\mathcal{L}}{\\partial \\bs S^{-1}}$,\n%$\\frac{\\partial \\mathcal{L}}{\\partial a}$ and\n%$\\frac{\\partial \\mathcal{L}}{\\partial b}$ \nso these terms % partial derivatives of $\\mathcal{L}$ with respect to $\\hat{\\bs f}_m$, $\\bs S$, $a$ and $b$\nare zero, and the derivative is relatively simple to compute (see Appendix \\ref{sec:gradients} for the full equations).\n\n% is defined by Equation \\ref{eq:kernel_der}.\n% Since we cannot compute $\\bs K$ in high dimensions, in practice we substitute $\\bs K_{mm}$ for $\\bs K$,\n% $\\bs S$ for $\\bs C$, $\\hat{\\bs f}_{m}$ for $\\hat{\\bs f}$ and $\\bs\\mu_{m}$ for $\\bs\\mu$ so that \n", "meta": {"hexsha": "20dd78384313d71cfaf3ce05a9d4d85d225c0e12", "size": 5728, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "documents/scalable_bayesian_preference_learning_from_crowds/hyperparameters.tex", "max_stars_repo_name": "UKPLab/tacl2018-preference-convincing", "max_stars_repo_head_hexsha": "65eb1cd3bf76f8068889880e0f80178e790350ce", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2019-03-01T19:40:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-10T05:53:47.000Z", "max_issues_repo_path": "documents/scalable_bayesian_preference_learning_from_crowds/hyperparameters.tex", "max_issues_repo_name": "UKPLab/tacl2018-preference-convincing", "max_issues_repo_head_hexsha": "65eb1cd3bf76f8068889880e0f80178e790350ce", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 12, "max_issues_repo_issues_event_min_datetime": "2020-11-13T17:54:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-09T23:39:11.000Z", "max_forks_repo_path": "documents/scalable_bayesian_preference_learning_from_crowds/hyperparameters.tex", "max_forks_repo_name": "UKPLab/tacl2018-preference-convincing", "max_forks_repo_head_hexsha": "65eb1cd3bf76f8068889880e0f80178e790350ce", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-02-06T12:08:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-10T20:40:22.000Z", "avg_line_length": 55.0769230769, "max_line_length": 149, "alphanum_fraction": 0.7400488827, "num_tokens": 1579, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.7981867729389246, "lm_q2_score": 0.6893056104028799, "lm_q1q2_score": 0.5501946207361703}}
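As an illustration of the median heuristic above, here is a minimal NumPy sketch (ours; the function name is hypothetical and does not appear in the paper):
\begin{verbatim}
# l_d = D * median over all pairs (i, j) of |x_{i,d} - x_{j,d}|.
import numpy as np

def median_heuristic_lengthscales(X):
    """X has shape (N, D); returns one length-scale per feature."""
    N, D = X.shape
    diffs = np.abs(X[:, None, :] - X[None, :, :])  # (N, N, D)
    return D * np.median(diffs.reshape(N * N, D), axis=0)
\end{verbatim}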
{"text": "\\documentclass[letterpaper,10pt]{article}\r\n\\usepackage[margin=2cm]{geometry}\r\n\r\n\\usepackage{graphicx}\r\n\\usepackage{amsmath}\r\n\\usepackage{amsfonts}\r\n\\usepackage{amssymb}\r\n\\usepackage[colorlinks]{hyperref}\r\n\r\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\r\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\r\n\r\n\\newcommand{\\pandochline}{\\vspace{2em}\\href{./document.html}{\\textbf{Back to Top}}\r\n\t\\vspace{-2em}\\begin{center}\\rule{\\textwidth}{1pt}\\end{center}}\r\n\r\n\\setlength{\\parindent}{0em}\r\n\\setlength{\\parskip}{1em}\r\n\r\n\\title{\\textbf{18794 Pattern Recognition Theory\\\\Prof. Marios Savvides}}\r\n\\author{HMW-Alexander}\r\n\r\n\\begin{document}\r\n\t\r\n\\maketitle\r\n\r\n\\tableofcontents\r\n\\newpage\r\n\r\n\\section{Introduction}\r\n\r\n\\pandochline\r\n\\section{Decision Theory}\r\n\r\n\\subsection{Terms}\r\n\r\n\\begin{itemize}\r\n\t\\item Feature Space:\r\n\t\\begin{itemize}\r\n\t\t\\item \\textbf{Feature}: a distinctive characteristic or quality of the object\r\n\t\t\\item \\textbf{Feature vector}: combine more than one feature as a vector\r\n\t\t\\item \\textbf{Feature space}: The space defined by the feature vectors\r\n\t\\end{itemize}\r\n\t\\item Classifiers:\r\n\t\\begin{itemize}\r\n\t\t\\item \\textbf{Decision regions}: a classifier partitions the feature space into class-corresponding decision regions.\r\n\t\t\\item \\textbf{Decision boundaries}: the borders between the decision regions.\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsection{Bayes Rule}\r\n\r\n\\begin{equation}\r\nP(w_i|x)=\\frac{P(x,w_i)}{P(x)}=\\frac{P(x|w_i)P(w_i)}{\\sum_{k=1}^{C}{P(x|w_k)P(w_k)}}\r\n\\end{equation}\r\n\\begin{itemize}\r\n\t\\item \\textbf{Posterior Probability} $P(w_i|x)$: the conditional probability of correct class being $w_i$ given that feature value $x$ has been observed.\r\n\t\\item \\textbf{Evidence} $P(x)$: the total probability of observing the feature value of $x$.\r\n\t\\item \\textbf{Likelihood} $P(x|w_i)$: the conditional probability of observing a feature value of $x$ given that the correct class is $w_i$.\r\n\t\\item \\textbf{Prior Probability} $P(w_i)$: the probability of class $w_i$, $\\sum_{k=1}^{C}{P(w_k)}=1$.\r\n\t\\item \\textbf{Bayes Classifiers} decide on the class that has the \\textbf{largest posterior probability} ($\\max_{w_i}{P(w_i|x)}$). They are statistically the best classifiers i.e. 
they are minimum error classifiers (optimal).\r\n\\end{itemize}\r\n\r\n\\subsection{Minimum Probability of Error}\r\n\r\n\\begin{itemize}\r\n\t\\item $\\epsilon=P(error|class)$: probability of assigning $x$ to the wrong class $w$.\r\n\t\\item $P_e=\\sum_{k=1}^{C}{P(w_k)\\epsilon_k}$: total probability of error.\r\n\\end{itemize}\r\n\\begin{figure}[!ht]\r\n\t\\centering\r\n\t\\includegraphics[width=10cm]{./img/minimum_probability_of_error.png}\r\n\\end{figure}\r\nFor the two class case shown above, we want to minimize $P_e$ as below:\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nP_e & = & P(w_1)\\epsilon_1 + P(w_2)\\epsilon_2 \\\\\r\n\t& = & P(w_1)\\int_{R_2}{f(x|w_1)dx} + P(w_2)\\int_{R_1}{f(x|w_2)dx} \\\\\r\n\t& = & P(w_1)(1-\\int_{R_1}{f(x|w_1)dx}) + P(w_2)\\int_{R_1}{f(x|w_2)dx} \\\\\r\n\t& = & P(w_1) + \\int_{R_1}{(P(w_2)f(x|w_2) - P(w_1)f(x|w_1))dx} \\\\ \r\n\\end{array}\r\n\\end{equation}\r\nTo minimize $P_e$, we want $P(w_2)f(x|w_2) - P(w_1)f(x|w_1)$ to be always negative$(<0)$ in the region $R_1$:\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nP(w_1)f(x|w_1) - P(w_2)f(x|w_2) >0 & \\Rightarrow & w_1 \\\\\r\nP(w_1)f(x|w_1) - P(w_2)f(x|w_2) <0 & \\Rightarrow & w_2 \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\subsection{Likelihood Ratio}\r\n\r\n\\begin{itemize}\r\n\t\\item \\textbf{Likelihood ratio}: $l(x)=\\frac{f(x|w_1)}{f(x|w_2)}$\r\n\t\\item \\textbf{Log likelihood ratio}: $\\ln(l(x))=\\ln(\\frac{f(x|w_1)}{f(x|w_2)})=\\ln(f(x|w_1))-\\ln(f(x|w_2))$\r\n\t\\item \\textbf{Ratio of a priori probabilities}: $T=\\frac{P(w_2)}{P(w_1)}$\r\n\t\\item \\textbf{Log ratio of a priori probabilities}: $\\ln(T)=\\ln(\\frac{P(w_2)}{P(w_1)})=\\ln(P(w_2))-\\ln(P(w_1))$\r\n\\end{itemize}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\ln(l(x)) = \\ln(\\frac{f(x|w_1)}{f(x|w_2)}) > \\ln(\\frac{P(w_2)}{P(w_1)}) = \\ln(T) & \\Rightarrow & w_1 \\\\\r\n\\ln(l(x)) = \\ln(\\frac{f(x|w_1)}{f(x|w_2)}) < \\ln(\\frac{P(w_2)}{P(w_1)}) = \\ln(T) & \\Rightarrow & w_2 \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\subsubsection{Likelihood as Gaussian distribution}\r\nAssume likelihood $f(x|w_i)$ are Gaussian distributions with mean $\\mu_i$ and variance $\\sigma_i^2$.\r\n\\begin{equation}\r\nf(x|w_i)=\\frac{1}{\\sqrt{2\\pi\\sigma_i^2}}\\exp\\left(-\\frac{(x-\\mu_i)^2}{2\\sigma_i^2}\\right)\r\n\\end{equation}\r\n\\begin{figure}[!ht]\r\n\t\\centering\r\n\t\\includegraphics[width=6cm]{./img/gaussian_distribution_likelihood.png}\r\n\\end{figure}\r\nThe log likelihood ratio:\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\ln(l(x)) & = & \\ln\\left(\\frac{\\frac{1}{\\sqrt{2\\pi\\sigma_1^2}}\\exp\\left(-\\frac{(x-\\mu_1)^2}{2\\sigma_1^2}\\right)}{\\frac{1}{\\sqrt{2\\pi\\sigma_2^2}}\\exp\\left(-\\frac{(x-\\mu_2)^2}{2\\sigma_2^2}\\right)}\\right) \\\\\r\n         & = & \\ln(\\frac{\\sigma_2}{\\sigma_1})+\\frac{(x-\\mu_2)^2}{2\\sigma_2^2}-\\frac{(x-\\mu_1)^2}{2\\sigma_1^2}\r\n\\end{array}\r\n\\end{equation}\r\n\r\nCase: $\\sigma_1=\\sigma_2=\\sigma$\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\ln(l(x)) & = & \\frac{2x(\\mu_1-\\mu_2)-(\\mu_1^2-\\mu_2^2)}{2\\sigma^2}\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nx(\\mu_1-\\mu_2) - \\frac{\\mu_1^2-\\mu_2^2}{2} > \\sigma^2 \\ln(\\frac{P(w_2)}{P(w_1)}) & \\Rightarrow & w_1 \\\\\r\nx(\\mu_1-\\mu_2) - \\frac{\\mu_1^2-\\mu_2^2}{2} < \\sigma^2 \\ln(\\frac{P(w_2)}{P(w_1)}) & \\Rightarrow & w_2 \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\nIf $P(w_1)=P(w_2)$\r\n\\begin{equation}\r\nx = 
\\frac{\\mu_1+\\mu_2}{2}\r\n\\end{equation}\r\n\r\n\\begin{figure}[!ht]\r\n\t\\centering\r\n\t\\includegraphics[width=8cm]{./img/gaussian_linear_classifier.png}\r\n\\end{figure}\r\n\r\nCase: $\\sigma_1\\neq\\sigma_2$\r\n\r\n\\begin{figure}[!ht]\r\n\t\\centering\r\n\t\\includegraphics[width=8cm]{./img/guassian_quadratic_classifier.png}\r\n\\end{figure}\r\n\r\n\\pandochline\r\n\\section{Parametric \\& Non-Parametric Density Estimation}\r\n\r\nLearning \\& Classification\r\n\\begin{itemize}\r\n\t\\item Supervised:\r\n\t\\begin{itemize}\r\n\t\t\\item Bayes Density:\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item Parametric: assumes a particular form of a PDF (e.g. Gaussian) is known $\\rightarrow$ determine the parameters (e.g. mean and variance)\r\n\t\t\t\\begin{itemize}\r\n\t\t\t\t\\item Maximum Likelihood ($f(x|w_i)$) Estimation (MLE)\r\n\t\t\t\t\\item Maximum A Posteriori (Bayesian $p(w_i|x)$) Estimation (MAPE)\r\n\t\t\t\\end{itemize}\r\n\t\t\t\\item Non-Parametric: no assumption about the density\r\n\t\t\t\\begin{itemize}\r\n\t\t\t\t\\item Parzen Windows\r\n\t\t\t\t\\item Kernel Density Estimation\r\n\t\t\t\t\\item K-Nearest Neighbor Rule\r\n\t\t\t\\end{itemize}\r\n\t\t\\end{itemize}\r\n\t\t\\item Discriminant Analysis:\r\n\t\t\\begin{itemize}\r\n\t\t\t\\item Linear\r\n\t\t\t\\item Non-Linear\r\n\t\t\\end{itemize}\r\n\t\\end{itemize}\r\n\t\\item Unsupervised:\r\n\t\\begin{itemize}\r\n\t\t\\item Clustering\r\n\t\\end{itemize}\r\n\\end{itemize}\r\n\r\n\\subsection{ML Estimation (MLE)}\r\nSteps:\r\n\\begin{itemize}\r\n\t\\item Assume $P(x|w)$ has a known parametric form uniquely determined by the parameter vector $\\theta$\r\n\t\\item The parameters are assumed to be fixed (i.e. non random) but unknown\r\n\t\\item Suppose we have a dataset $D$ with the samples in $D$ having been drawn \\textbf{independently} according to the probability law $P(x|w)$\r\n\t\\item The MLE is the value of $\\theta$ that best explains the data and once we know this value, we know $P(x|w)$\r\n\t\\begin{equation}\r\n\t\\hat{\\theta}=\\argmax_\\theta\\{P(D|\\theta)\\}=\\argmax_\\theta\\{\\prod_{k=1}^{N}{P(x_k|\\theta)}\\}=\\argmax_\\theta\\{\\sum_{k=1}^{N}{\\log(P(x_k|\\theta))}\\}\r\n\t\\end{equation}\r\n\tChoose the value of $\\theta$ that is the most likely to give rise to the data we observe.\r\n\\end{itemize}\r\n\r\nMethod:\r\n\\begin{itemize}\r\n\t\\item Assume $\\theta=[\\theta_1,\\theta_2,\\dots,\\theta_p]^T$\r\n\t\\item Assume gradient operator $\\nabla_\\theta=[\\frac{\\partial}{\\partial\\theta_1},\\frac{\\partial}{\\partial\\theta_2},\\dots,\\frac{\\partial}{\\partial\\theta_p}]^T$\r\n\t\\item Assume the log-likelihood of the objective function $l(\\theta)=\\sum_{k=1}^{N}{\\log(P(x_k|\\theta))}$\r\n\t\\item Therefore: $\\nabla_\\theta l(\\theta)=\\sum_{k=1}^{N}{\\nabla_\\theta \\log(P(x_k|\\theta))}=0$\r\n\\end{itemize}\r\n\r\n\\subsubsection{E.g. 
Univariate Gaussian}\r\n\r\n\\begin{equation}\r\nP(x|\\mu,\\sigma^2)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp(-\\frac{(x-\\mu)^2}{2\\sigma^2})\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\theta=\\left[\\begin{array}{c}\r\n\\theta_1=\\mu \\\\\r\n\\theta_2=\\sigma^2 \\\\\r\n\\end{array}\\right]\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\log(P(x_k|\\theta)) & = & \\log(\\frac{1}{\\sqrt{2\\pi\\theta_2}}\\exp(-\\frac{(x_k-\\theta_1)^2}{2\\theta_2})) \\\\\r\n                   & = & -\\frac{1}{2}\\log(2\\pi\\theta_2)-\\frac{(x_k-\\theta_1)^2}{2\\theta_2} \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\r\n\\nabla_\\theta \\log(P(x_k|\\theta)) & = & \\left[\r\n\\begin{array}{c}\r\n\\frac{x_k-\\theta_1}{\\theta_2} \\\\\r\n-\\frac{1}{2\\theta_2}+\\frac{(x_k-\\theta_1)^2}{2\\theta_2^2} \\\\\r\n\\end{array}\\right] \\\\\r\n\r\n\\Rightarrow \\left\\{\r\n\\begin{array}{rcl}\r\n\\sum_{k=1}^{n}{\\frac{1}{\\theta_2}(x_k-\\theta_1)} & = & 0 \\\\\r\n-\\sum_{k=1}^{n}{\\frac{1}{\\theta_2}}+\\sum_{k=1}^{n}{\\frac{(x_k-\\theta_1)^2}{\\theta_2^2}} & = & 0 \\\\\r\n\\end{array}\\right. & \\Rightarrow & \\left\\{\r\n\\begin{array}{rcl}\r\n\\hat{\\mu} & = & \\frac{1}{n}\\sum_{k=1}^{n}{x_k} \\\\\r\n\\hat{\\sigma}^2 & = & \\frac{1}{n}\\sum_{k=1}^{n}{(x_k-\\hat{\\mu})^2} \\\\\r\n\\end{array}\\right. \\\\\r\n\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\subsubsection{E.g. Multivariate Gaussian}\r\n\r\n\\begin{equation}\r\nP(x|\\mu,\\Sigma)=\\frac{1}{(2\\pi)^{d/2}|\\Sigma|^{1/2}}\\exp(-\\frac{1}{2}(x-\\mu)^T\\Sigma^{-1}(x-\\mu))\r\n\\end{equation}\r\n\r\nCase: only the mean $\\mu$ is unknown\r\n\r\n\\begin{equation}\r\n\\log(P(x_k|\\mu)) = -\\frac{1}{2}\\log((2\\pi)^{d}|\\Sigma|)-\\frac{1}{2}(x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu)\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\nabla_\\theta \\log(P(x_k|\\mu)) & = & \\Sigma^{-1}(x_k-\\mu) \\\\\r\n\\Rightarrow \\sum_{k=1}^{n}{\\Sigma^{-1}(x_k-\\mu)} = 0 & \\Rightarrow & \\hat{\\mu} = \\frac{1}{n}\\sum_{k=1}^{n}{x_k}\r\n\\end{array}\r\n\\end{equation}\r\n\r\nCase: Neither the mean $\\mu$ nor the covariance matrix $\\Sigma$ is known\r\n\r\n\\begin{equation}\r\n\\theta=\\left[\\begin{array}{rcl}\r\n\\theta_1 & = & \\mu \\\\\r\n\\theta_2 & = & \\Sigma \\\\\r\n\\end{array}\\right]\r\n\\end{equation}\r\n\r\nRewrite the likelihood in terms of the covariance matrix $\\Sigma$:\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nP(D|\\Sigma) & = & \\prod_{k=1}^{n}{P(x_k|\\Sigma)} \\\\\r\n\t\t\t& = & \\prod_{k=1}^{n}{\\frac{1}{\\sqrt{(2\\pi)^d|\\Sigma|}}\\exp(-\\frac{1}{2}(x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu))} \\\\\r\n\t\t\t& = & \\frac{1}{((2\\pi)^d|\\Sigma|)^{n/2}}\\exp(-\\frac{1}{2}\\sum_{k=1}^{n}{(x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu)}) \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\nThe property of matrix trace \r\n\\begin{itemize}\r\n\t\\item scalar $b^TBb=trace(b^TBb)=trace(Bbb^T)$\r\n\t\\item $trace(A+B)=trace(A)+trace(B)$\r\n\t\\item $trace(C(A+B))=trace(CA)+trace(CB)$\r\n\t\\item $trace(A)$ is the sum of $A$'s eigenvalues\r\n\\end{itemize}\r\n\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\n\\sum_{k=1}^{n}{(x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu)} & = & \\sum_{k=1}^{n}{trace((x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu))} \\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t& = & \\sum_{k=1}^{n}{trace(\\Sigma^{-1}(x_k-\\mu)(x_k-\\mu)^T)} \\\\\r\n\t\t\t\t\t\t\t\t\t\t\t\t& = & 
trace(\\Sigma^{-1}\\sum_{k=1}^{n}{(x_k-\\mu)(x_k-\\mu)^T})\\\\\t\t\t\t\t\t\t\t\t\t\t\t\r\n\\end{array}\r\n\\end{equation}\r\n\r\nAssume\r\n\\begin{equation}\r\nA=\\frac{1}{n}\\sum_{k=1}^{n}(x_k-\\mu)(x_k-\\mu)^T\r\n\\end{equation}\r\nand $A$ is fixed.\r\n\r\nThen\r\n\\begin{equation}\r\n\\sum_{k=1}^{n}{(x_k-\\mu)^T\\Sigma^{-1}(x_k-\\mu)} = n~trace(\\Sigma^{-1}A)\r\n\\end{equation}\r\n\r\nTherefore\r\n\\begin{equation}\r\nP(D|\\Sigma) = \\frac{1}{((2\\pi)^d|\\Sigma|)^{n/2}}\\exp(-\\frac{n}{2}trace(\\Sigma^{-1}A))\r\n\\end{equation}\r\n\r\nAssume\r\n\\begin{equation}\r\nB=\\Sigma^{-1}A\r\n\\end{equation}\r\nand B's eigenvalues are $\\lambda_1,\\lambda_2,\\dots,\\lambda_d$\r\n\r\nThe property of matrix determinant\r\n\\begin{itemize}\r\n\t\\item $det(AB)=det(A)det(B)$\r\n\t\\item $det(A^{-1})=det(A)^{-1}$\r\n\t\\item $det(A)$ is the product of $A$'s eigenvalues\r\n\\end{itemize}\r\n\r\nTherefore\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nP(D|\\Sigma) & = & \\frac{1}{((2\\pi)^d|\\Sigma|)^{n/2}}\\exp(-\\frac{n}{2}trace(\\Sigma^{-1}A)) \\\\\r\n\t\t\t& = & \\frac{1}{(2\\pi)^{dn/2}}|\\Sigma|^{-n/2}\\exp(-\\frac{n}{2}trace(B)) \\\\\r\n\t\t\t& = & \\frac{1}{(2\\pi)^{dn/2}}(\\frac{|B|}{|A|})^{n/2}\\exp(-\\frac{n}{2}trace(B)) \\\\\r\n\t\t\t& = & \\frac{1}{(2\\pi)^{dn/2}}|A|^{-n/2}(\\prod_{i=1}^{d}{\\lambda_i})^{n/2}\\exp(-\\frac{n}{2}\\sum_{i=1}^{d}{\\lambda_i}) \\\\\r\n\\end{array}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\log(P(D|\\Sigma))=-\\frac{dn}{2}\\log(2\\pi)-\\frac{n}{2}\\log(|A|)+\\frac{n}{2}\\sum_{i=1}^{d}{\\log(\\lambda_i)}-\\frac{n}{2}\\sum_{i=1}^{d}{\\lambda_i}\r\n\\end{equation}\r\n\r\n\\begin{equation}\r\n\\frac{\\partial(\\log(P(x|\\Sigma)))}{\\partial\\lambda_i}=\\frac{n}{2\\lambda_i}-\\frac{n}{2}=0 \\Rightarrow \\lambda_i=1\r\n\\end{equation}\r\n\r\nTherefore $B = I$, and $\\Sigma^{-1}A=I \\Rightarrow \\hat{\\Sigma}=A=\\frac{1}{n}\\sum_{k=1}^{n}(x_k-\\mu)(x_k-\\mu)^T$.\r\n\r\n\\subsection{Bayesian Estimation (BE)}\r\nSteps:\r\n\\begin{itemize}\r\n\t\\item The parameters are assumed to be random variables with some known prior distribution.\r\n\t\\item The Bayesian approach aims at estimating the posterior density $P(\\theta|D)$\r\n\t\\item The MAPE (Maximum A Posteriori Estimate) of $\\theta$ is the value of $\\theta$ that maximizes the posterior density $P(\\theta|D)$.\r\n\\end{itemize}\r\nMethod:\r\n\\begin{itemize}\r\n\t\\item Estimate $P(x|D)$ from sample data, and assume it has a known parametric form. So $P(x|\\theta)$ is completely known, but $\\theta$ is a random variable and has its own PDF.\r\n\t\\item $P(x|D)=\\int{P(x,\\theta|D)d\\theta}=\\int{P(x|\\theta)P(\\theta|D)d\\theta}$\r\n\t\\item Posterior $P(\\theta|D)=\\frac{P(D|\\theta)P(\\theta)}{P(D)}$, where $P(D|\\theta)=\\prod_{k=1}^{n}{P(x_k|\\theta)}$\r\n\t\\item Since $P(D)$ is constant, $\\hat{\\theta}=\\argmax_\\theta\\{P(D|\\theta)P(\\theta)\\}$\r\n\\end{itemize}\r\n\r\n\\subsubsection{E.g. 
Multivariate Gaussian}\r\n...\r\n\r\n\\pandochline\r\n\\section{Principal Component Analysis (PCA)}\r\nPurpose: get projection vectors $\\omega$ that maximize the variance of the projected data.\r\n\\begin{equation}\r\n\\max_\\omega(Var(\\omega^Tx)),\\text{~where~}||\\omega||=1\r\n\\end{equation}\r\n\r\nDerivation: Assume the objective function\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nJ(\\omega) & = & Var(\\omega^Tx) \\\\\r\n\t\t  & = & E(\\omega^T(x-\\mu)(x-\\mu)^T\\omega) \\\\\r\n\t\t  & = & \\omega^TE((x-\\mu)(x-\\mu)^T)\\omega \\\\\r\n\t\t  & = & \\omega^T\\Sigma\\omega~~~~(\\text{where~} ||\\omega||=1)\r\n\\end{array}\r\n\\end{equation}\r\nThen use the Lagrangian optimization method to get $\\omega^*$\r\n\\begin{equation}\r\n\\begin{array}{rcl}\r\nL(\\omega,\\lambda) & = & \\omega^T\\Sigma\\omega - \\lambda(\\omega^T\\omega-1) \\\\\r\n\\frac{\\partial L(\\omega,\\lambda)}{\\partial\\omega} & = & 2\\Sigma\\omega-2\\lambda\\omega=0 \\\\\r\n& \\Rightarrow & \\Sigma\\omega=\\lambda\\omega\r\n\\end{array}\r\n\\end{equation}\r\n\r\nTo maximize $J(\\omega)$, we need to maximize $\\omega^T\\Sigma\\omega=\\lambda$, i.e. take $\\lambda = \\lambda_{max}$\r\n \r\n\\end{document}", "meta": {"hexsha": "3a6949aaea61f8183f632ffc5dcc483a6f1aea57", "size": 14484, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "18794_Pattern_Recognition_Theory/Notes/document.tex", "max_stars_repo_name": "MengwenHe-CMU/Courses", "max_stars_repo_head_hexsha": "6cd9a9469b573ff76f70ceff6a0aa6103f7cdf3e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2018-11-23T21:15:29.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-12T02:50:37.000Z", "max_issues_repo_path": "18794_Pattern_Recognition_Theory/Notes/document.tex", "max_issues_repo_name": "MengwenHe-CMU/Courses", "max_issues_repo_head_hexsha": "6cd9a9469b573ff76f70ceff6a0aa6103f7cdf3e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "18794_Pattern_Recognition_Theory/Notes/document.tex", "max_forks_repo_name": "MengwenHe-CMU/Courses", "max_forks_repo_head_hexsha": "6cd9a9469b573ff76f70ceff6a0aa6103f7cdf3e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-02-12T02:48:04.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-12T02:50:40.000Z", "avg_line_length": 36.1197007481, "max_line_length": 227, "alphanum_fraction": 0.6371858603, "num_tokens": 5535, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.6893056040203135, "lm_q2_score": 0.7981867753392728, "lm_q1q2_score": 0.5501946172962637}}
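As an illustration of the derivation above, a minimal NumPy sketch (ours, not part of the course notes) computes the PCA projection vectors as the top eigenvectors of the sample covariance:
\begin{verbatim}
import numpy as np

def pca_directions(X, k):
    """X has shape (n, d); returns the top-k unit projection vectors."""
    Xc = X - X.mean(axis=0)                   # center the data
    Sigma = Xc.T @ Xc / len(Xc)               # sample covariance (MLE form)
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # ascending eigenvalues
    return eigvecs[:, ::-1][:, :k]            # largest-eigenvalue directions
\end{verbatim}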
{"text": "\\section{Training Procedure}%\n\\label{sec:training_procedure}\n\n%\nBy augmenting traditional HMC methods with these trainable functions, we hope\nto obtain a sampler that has the following key properties:\n%\n\\begin{enumerate}\n    \\item Fast mixing (i.e.\\ able to quickly produce uncorrelated samples).\n    \\item Fast burn-in (i.e.\\ rapid convergence to the target distribution).\n    \\item Ability to mix across energy levels.\n    \\item Ability to mix between modes.\n\\end{enumerate}\n%\nFollowing the results in~\\cite{10.2307/24308995}, we design a loss function\nwith the goal of maximizing the expected squared jumped distance (or\nanalogously, minimizing the lag-one autocorrelation).\n%\nTo do this, we first introduce \n\\begin{equation}\n  \\delta(\\xi, \\xip) = \\delta((x^{\\prime}, v^{\\prime}, d^{\\prime}), (x, v, d))\n  \\equiv \\| x - x^{\\prime}\\|^2_2.  \\label{eq:metric_orig}\n\\end{equation}\n%\nThen, the expected squared jumped distance is given by $\\mathbb{E}_{\\xi\\sim\np(\\xi)} \\left[\\delta(\\mathbf{FL}_{\\theta}\\xi, \\xi) A(\\mathbf{FL}_{\\theta}\\xi |\n\\xi)\\right]$.\n%\nBy maximizing this objective function, we are encouraging transitions that\nefficiently explore a local region of state-space, but may fail to explore\nregions where very little mixing occurs.\n%\nTo help combat this effect, we define a loss function\n%\n\\begin{equation}\n    \\ell_{\\lambda}(\\xi, \\xi^{\\prime}, A(\\xi^{\\prime}|\\xi)) =\n        \\frac{\\lambda^2}{\\delta(\\xi,\\xi^{\\prime}) A(\\xi^{\\prime}|\\xi)} -\n        \\frac{\\delta(\\xi,\\xi^{\\prime}) A(\\xi^{\\prime}|\\xi)}{\\lambda^2}\n    \\label{eq:loss_ell}\n\\end{equation}\n%\nwhere $\\lambda$ is a scale parameter describing the characteristic length scale\nof the problem.\n%\nNote that the first term helps to prevent the sampler from becoming stuck in a\nstate where it cannot move effectively, and the second term helps to maximize\nthe distance between subsequent moves in the Markov chain.  
\n\nThe sampler is then trained by minimizing $\\ell_{\\lambda}$ over both the target\nand initialization distributions.\n%\nExplicitly, for an initial distribution $\\pi_0$ over $\\mathcal{X}$, we define\nthe initialization distribution as $q(\\xi) = \\pi_0(x) \\mathcal{N}(v; 0, I)\np(d)$, and minimize\n%\n\\begin{equation}\n    \\mathcal{L}(\\theta)\\equiv \\mathbb{E}_{p(\\xi)}\\left[\\ell_{\\lambda}(\\xi,\n    \\mathbf{FL}_{\\theta}\\xi, A(\\mathbf{FL}_{\\theta}\\xi|\\xi))\\right] + \\lambda_b\n    \\mathbb{E}_{q(\\xi)}\\left[\\ell_{\\lambda}(\\xi, \\mathbf{FL}_{\\theta}\\xi,\n    A(\\mathbf{FL}_{\\theta} \\xi| \\xi))\\right].\n    \\label{eq:loss_L}\n\\end{equation}\n%\nFor completeness, we include the full algorithm~\\cite{2017arXiv171109268L} used\nto train L2HMC in Alg.~\\ref{alg:l2hmc}.\n%\n\\begin{algorithm}[htbp]%\n  % \\centering\n    \\SetKwProg{Fn}{def}{\\string:}{}%\n    \\SetKwFunction{Range}{range}%\n    \\SetKwFor{For}{for}{\\string:}{}%\n    \\SetKwIF{If}{ElseIf}{Else}{if}{:}{elif}{else:}{}%\n    \\SetKwFor{While}{while}{:}{fintq}%\n    \\AlgoDontDisplayBlockMarkers\\SetAlgoNoEnd%\n    % \\SetAlgoNoLine%\n    \\DontPrintSemicolon%\n    \\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}%\n    \\caption{Training procedure for the L2HMC algorithm.}%\n    \\Input{%\n      \\vspace{-5pt}\n      \\begin{enumerate}\n        \\item A (potential) energy function, $U: \\mathcal{X} \\rightarrow\n          \\mathbb{R}$ and its gradient $\\nabla_x U: \\mathcal{X} \\rightarrow\n          \\mathcal{X}$\\\n          \\vspace{-10pt}\n        \\item Initial distribution over the augmented state space, $q$\n          \\vspace{-10pt}\n        \\item Number of iterations, $N_{\\mathrm{train}}$\n          \\vspace{-10pt}\n        \\item Number of leapfrog steps, $N_{\\mathrm{LF}}$\n          \\vspace{-10pt}\n        \\item Learning rate schedule, ${(\\alpha_{t})}_{t\\leq N_{\\text{train}}}$\n          \\vspace{-10pt}\n        \\item Batch size, $N_{\\mathrm{samples}}$\n          \\vspace{-10pt}\n        \\item Scale parameter, $\\lambda$\n          \\vspace{-10pt}\n        \\item Regularization strength, $\\lambda_b$\n      \\end{enumerate}\n    }\\;\n    \\vspace{-15pt}\n    Initialize the parameters of the sampler, $\\theta$\\;\n    Initialize ${\\{\\xi_{p^{(i)}}\\}}_{i\\leq N_{\\mathrm{samples}}}$ from\n    $q{(\\xi)}$\\; \\For{$t = 0$ \\KwTo\\ $N_{\\mathrm{train}}$}{%\n      Sample a minibatch, ${\\left\\{\\xi_{q}^{(i)}\\right\\}}_{i\\leq\n      N_{\\mathrm{samples}}}$ from $q{(\\xi)}$.\\; $\\mathcal{L}\\leftarrow 0$\\;\n      \\For{$i = 1$ \\KwTo$N_{\\mathrm{LF}}$} {%\n          $\\xi_{p}^{(i)} \\leftarrow\\ \\mathbf{R}\\,\\xi_p^{(i)}$\\;\n          $\\mathcal{L} \\,\\,\\,\\,\\leftarrow\\mathcal{L} +\n          \\ell_{\\lambda}\\left(\\xi_p^{(i)}, \\FLq\\xi_p^{(i)}, A\n            (\\FLq\\xi^{(i)}_p|\\xi^{(i)}_p)\\right) + \\lambda_b\n            \\ell_{\\lambda}\\left(\\xi^{(i)}_q, \\FLq\\xi^{(i)}_q,\n            A (\\FLq\\xi^{(i)}_q|\\xi^{(i)}_q)\\right)$\\;\n          $\\xi_p^{(i)} \\leftarrow \\FLq\\xi^{(i)}_p$ with probability\n        $A(\\FLq\\xi^{(i)}_p|\\xi^{(i)}_p)$\\; }%\n      \\vspace{2pt}\n      $\\theta\\ \\leftarrow\\ \\theta-\\alpha_t \\nabla_{\\theta} \\mathcal{L}$\\;\n    }%\n\\label{alg:l2hmc}\n\\end{algorithm}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n    \n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n", "meta": {"hexsha": "d80e973042603dfbdbaaaab7d1114532f483dc9c", "size": 5086, "ext": 
"tex", "lang": "TeX", "max_stars_repo_path": "doc/training/training.tex", "max_stars_repo_name": "saforem2/l2hmc-qcd", "max_stars_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2019-04-18T18:50:28.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T18:30:48.000Z", "max_issues_repo_path": "doc/training/training.tex", "max_issues_repo_name": "saforem2/l2hmc-qcd", "max_issues_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2019-09-09T21:10:48.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T17:43:51.000Z", "max_forks_repo_path": "doc/training/training.tex", "max_forks_repo_name": "saforem2/l2hmc-qcd", "max_forks_repo_head_hexsha": "b5fe06243fae663607b6c88e71373b68b19558fc", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-10-31T02:25:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-25T00:49:14.000Z", "avg_line_length": 41.0161290323, "max_line_length": 79, "alphanum_fraction": 0.6104994101, "num_tokens": 1657, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303236047048, "lm_q2_score": 0.6959583313396339, "lm_q1q2_score": 0.5501761648893111}}
{"text": "\\subsection{Groups}\\label{subsec:groups}\n\n\\begin{definition}\\label{def:monoid_inverse}\n  Let \\( M \\) be a \\hyperref[def:monoid]{monoid}. We say that \\( y \\) is the \\term{left inverse} (resp. \\term{right inverse}) of \\( x \\) if \\( yx = e \\) (resp. \\( xy = e \\)).\n\n  If \\( y \\) is simultaneously a left and right inverse of \\( x \\), we call a \\term{two-sided inverse} or simply an \\term{inverse} of \\( x \\) and denote it by \\( x^{-1} \\). It is unique by \\fullref{thm:monoid_inverse_unique}. This notation is consistent with monoid exponentiation defined in \\fullref{def:monoid/exponentiation}.\n\n  We call \\( x \\) \\term{invertible} if it has a two-sided inverse.\n\\end{definition}\n\n\\begin{proposition}\\label{thm:monoid_inverse_unique}\n  For every element \\( x \\) of any monoid, the (two-sided) \\hyperref[def:monoid_inverse]{inverse} \\( x^{-1} \\) of \\( x \\), if it exists, is unique.\n\\end{proposition}\n\\begin{proof}\n  If \\( y \\) and \\( z \\) are both inverses of \\( x \\), then\n  \\begin{equation*}\n    y = ey = zxy = ze = z.\n  \\end{equation*}\n\\end{proof}\n\n\\begin{definition}\\label{def:zero_morphisms}\n  Let \\( \\cat{C} \\) be a \\hyperref[def:universal_objects/zero]{pointed category} with a fixed \\hyperref[def:universal_objects/zero]{zero object} \\( Z \\).\n\n  \\begin{thmenum}\n    \\thmitem{def:zero_morphisms/morphism}\\mcite{nLab:zero_morphism} For every pair of objects \\( A \\) and \\( B \\) in \\( \\cat{C} \\), there exists unique morphism, called the \\term{zero morphism}, that \\hyperref[def:factors_through]{uniquely factors through} \\( Z \\):\n    \\begin{equation}\\label{eq:def:zero_morphisms/morphism}\n      \\begin{aligned}\n        \\includegraphics[page=1]{output/def__zero_morphisms.pdf}\n      \\end{aligned}\n    \\end{equation}\n\n    We denote this zero morphism by \\( 0_{A,B} \\).\n\n    \\thmitem{def:zero_morphisms/kernel} The \\term{kernel} cone of a morphism \\( f: A \\to B \\) is the \\hyperref[eq:def:equalizers/equalizer]{equalizer} cone of \\( f \\) and \\( 0_{A,B} \\).\n\n    By \\fullref{thm:equalizer_invertibility}, a kernel morphism is necessarily a monomorphism. A monomorphism is \\term{normal} if it is a kernel.\n\n    \\thmitem{def:zero_morphisms/cokernel} \\hyperref[thm:categorical_principle_of_duality]{Dually}, the \\term{cokernel} cocone of a morphism \\( f: A \\to B \\) is the \\hyperref[eq:def:equalizers/coequalizer]{coequalizer} cocone of \\( f \\) and \\( 0_{A,B} \\).\n\n    By \\fullref{thm:equalizer_invertibility}, a cokernel morphism is necessarily an epimorphism. An epimorphism is \\term{normal} if it is a kernel.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{definition}\\label{def:group}\n  A \\term{group} is a \\hyperref[def:monoid]{monoid} in which every element has an \\hyperref[def:monoid_inverse]{inverse}. Groups are the most well-studied and most well-behaved magmas. 
Many useful properties like \\hyperref[thm:def:group/cancellative]{cancellation} rely on associativity, so we do not consider non-associative groups.\n\n  Groups have the following metamathematical properties:\n  \\begin{thmenum}\n    \\thmitem{def:group/theory} We can construct a \\hyperref[def:first_order_theory]{first-order theory} for groups by adding a unary \\hyperref[def:first_order_language/func]{functional symbol} \\( (\\anon)^{-1} \\) to the language and the axiom\n    \\begin{equation}\\label{eq:def:group/theory/inverse_axiom}\n      \\qforall \\xi (\\xi \\cdot \\xi^{-1} = e \\wedge \\xi^{-1} \\cdot \\xi = e)\n    \\end{equation}\n    to the \\hyperref[def:monoid/theory]{theory of monoids}.\n\n    \\thmitem{def:group/function_parity} A \\hyperref[def:function]{function} \\( \\varphi: G \\to H \\) between two groups is called \\term{even} if, for every \\( x \\in G \\), we have\n    \\begin{equation}\\label{eq:def:group/function_parity/even}\n      \\varphi(x^{-1}) = \\varphi(x)\n    \\end{equation}\n    and \\term{odd} if\n    \\begin{equation}\\label{eq:def:group/function_parity/odd}\n      \\varphi(x^{-1}) = \\varphi(x)^{-1}.\n    \\end{equation}\n\n    \\thmitem{def:group/homomorphism} A \\hyperref[def:first_order_homomorphism]{first-order homomorphism} between the groups \\( G \\) and \\( H \\) is an odd \\hyperref[def:monoid/homomorphism]{monoid homomorphism}.\n\n    As shown in \\fullref{thm:group_homomorphism_single_condition}, however, the conditions \\eqref{eq:def:pointed_set/homomorphism} and \\eqref{eq:def:group/function_parity/odd} are redundant.\n\n    \\thmitem{def:group/submodel} The set \\( A \\subseteq G \\) is a \\hyperref[thm:substructure_is_model]{submodel} of \\( G \\) if it is a \\hyperref[def:monoid/submodel]{submonoid} and if \\( x \\in A \\) implies \\( x^{-1} \\in A \\). We say that \\( A \\) is a \\term{subgroup} of \\( G \\).\n\n    As a consequence of \\fullref{thm:positive_formulas_preserved_under_homomorphism}, the image of a group homomorphism is a subgroup of its range.\n\n    For an arbitrary subset \\( A \\) of \\( G \\), we denote the \\hyperref[def:first_order_generated_substructure]{generated submodel} by \\( \\braket{ A } \\). In addition to the elements of \\( A \\), \\( \\braket{ A } \\) contains their products and inverses, the products of their products and inverses, etc...\n\n    The \\hyperref[def:free_group]{free group} builds a group out of a plain set; furthermore, as a consequence of \\fullref{thm:group_presentation}, every group is a \\hyperref[def:group/quotient]{quotient} of a free group. 
Compare this to \\hyperref[def:free_semimodule]{free semimodules} and \\hyperref[def:polynomial_algebra]{polynomial semirings}.\n\n    \\thmitem{def:group/trivial} The \\hyperref[thm:substructures_form_complete_lattice/bottom]{trivial} group is the \\hyperref[def:pointed_set/trivial]{trivial pointed set} \\( \\set{ e } \\).\n\n    \\thmitem{def:group/exponentiation} We extend \\hyperref[def:monoid/exponentiation]{monoid exponentiation} to all integers by setting\n    \\begin{equation*}\n      x^{-n} \\coloneqq (x^n)^{-1}.\n    \\end{equation*}\n\n    This operation behaves well as shown in \\fullref{thm:def:group/negative_power}.\n\n    \\thmitem{def:group/category} The \\hyperref[def:category_of_small_first_order_models]{category of \\( \\mscrU \\)-small models} of groups \\( \\ucat{Grp} \\) is \\hyperref[def:concrete_category]{concrete} over \\hyperref[def:monoid]{\\( \\ucat{Mon} \\)}.\n\n    By \\fullref{thm:def:group/involution}, \\( \\ucat{Grp} \\) is also a concrete category over \\( \\ucat{Inv} \\).\n\n    The zero object in this category, unique up to isomorphism, is the trivial group \\( \\set{ e } \\). The \\hyperref[def:zero_morphisms/morphism]{zero morphism} from \\( G \\) to \\( H \\) is\n    \\begin{equation*}\n      \\begin{aligned}\n        &0_{G,H}: G \\to H \\\\\n        &0_{G,H}(x) \\coloneqq e_H.\n      \\end{aligned}\n    \\end{equation*}\n\n    We will define the \\hyperref[def:free_group]{free group} functor in \\fullref{subsec:free_groups}. Then from \\fullref{thm:first_order_categorical_invertibility} it will follow that monomorphisms are precisely the injective homomorphisms, and that the \\hyperref[def:subobject_and_quotient]{categorical subobjects} correspond to subgroups.\n\n    Unlike in the category \\hyperref[def:monoid/category]{\\( \\cat{Mon} \\)} of monoids, in \\( \\cat{Grp} \\) every epimorphism is surjective. We will prove this in \\fullref{thm:group_epimorphisms_are_surjective}. Along with \\fullref{thm:group_epimorphisms_are_normal}, this shows that the \\hyperref[def:subobject_and_quotient]{categorical quotient objects} correspond to \\hyperref[def:group/quotient]{quotient groups}, which we will define shortly.\n\n    To avoid circularity, in this section, we will avoid using that monomorphisms are injective and epimorphisms are surjective.\n\n    \\thmitem{def:group/kernel} The \\term{kernel} of a group homomorphism \\( \\varphi: G \\to H \\) is the subgroup\n    \\begin{equation*}\n      \\ker \\varphi \\coloneqq \\varphi^{-1}(e_H) = \\set{ x \\in G \\given \\varphi(x) = e_H }.\n    \\end{equation*}\n\n    This coincides with the notion of a categorical kernel defined in \\fullref{def:zero_morphisms/kernel}. Similarly to \\fullref{ex:equalizers_in_set/equalizer}, \\( \\ker \\varphi \\) is the equalizer of \\( \\varphi \\) and the zero morphism \\( 0_{G,H} \\).\n\n    This equivalence holds much more generally, even for \\hyperref[def:pointed_set]{pointed sets}; however, speaking of kernels is standard only when we have an appropriate notion of cokernels. As we will see in \\fullref{def:group/quotient}, cokernels are very well-behaved for groups, but not in general. Some related problems are highlighted in \\cite[ch. 8]{Golan2010}. \\Fullref{thm:def:group/kernel_cokernel_compatibility} expresses the compatibility between group kernels and cokernels.\n\n    \\thmitem{def:group/quotient} Consider the group homomorphism \\( \\varphi: G \\to H \\). We will find its \\hyperref[def:zero_morphisms/cokernel]{cokernel}. 
This will highlight several very fundamental facts about groups, especially quotient groups. In practice, quotients are conveniently characterized by \\fullref{thm:quotient_group_universal_property}.\n\n    Similarly to \\fullref{ex:equalizers_in_set/coequalizer}, the cokernel is an \\hyperref[thm:equivalence_partition]{equivalence partition} of \\( H \\). The partitioning relation is different, however. An equivalence relation that is compatible with the operations of an algebraic structure is called a \\term{congruence}. For the group \\( H \\), the equivalence relation \\( \\cong \\) is a congruence if:\n    \\begin{itemize}\n      \\item It is compatible with the group operation: \\( x \\cong x' \\) and \\( y \\cong y' \\) imply \\( x y \\cong x' y' \\).\n      \\item It is compatible with identities: \\( e_H \\cong x \\) implies \\( y \\cong xy \\) for all \\( y \\in H \\). This easily follows from the first condition.\n      \\item It is compatible with inverses: \\( x \\cong x' \\) implies \\( x^{-1} \\cong x'^{-1} \\). This also follows from the first condition: combining \\( x \\cong x' \\) with \\( x^{-1} \\cong x^{-1} \\) gives \\( e \\cong x^{-1} x' \\), and thus \\( x'^{-1} \\cong x^{-1} \\).\n    \\end{itemize}\n\n    We need congruences since we are working with groups and group homomorphisms rather than sets and functions. We define \\( \\cong \\) to be the smallest congruence relation containing\n    \\begin{equation*}\n      \\set{ (\\varphi(g), e_H) \\given g \\in G }.\n    \\end{equation*}\n\n    Denote the partition \\( H / {\\cong} \\) by \\( Q \\). Define a group operation on \\( Q \\) as \\( [x] \\cdot [y] = [xy] \\).\n    \\begin{itemize}\n      \\item This operation is well-defined since group congruences are compatible with the group operation. We are thus free to denote it via juxtaposition.\n      \\item The coset \\( [e_H] \\) is the identity of \\( Q \\) since congruences are compatible with identities.\n      \\item The coset \\( [x^{-1}] \\) is the inverse of \\( [x] \\) since congruences are compatible with inverses.\n    \\end{itemize}\n\n    Therefore, \\( Q \\) is a group and \\( \\pi(x) \\coloneqq [x] \\) is a group homomorphism. The pair \\( (Q, \\pi) \\) is thus a categorical cokernel of \\( \\varphi \\) by the same argument as in \\fullref{ex:equalizers_in_set/coequalizer}.\n\n    Denote the identity \\( [e_H] \\) by \\( N \\). It is a subgroup of \\( H \\):\n    \\begin{itemize}\n      \\item It contains the identity \\( e_H \\).\n      \\item It is closed under the group operation. Indeed, if \\( [x] = [y] = N \\), then\n      \\begin{equation*}\n        [xy] = [x][y] = NN = N.\n      \\end{equation*}\n\n      \\item It is closed under the group inverse. Indeed, \\( [x^{-1} x] = N \\) for every \\( x \\in H \\). If \\( [x] = N \\), then \\( [x^{-1}] N = N \\), and hence \\( [x^{-1}] = N \\).\n\n      \\item It possesses one additional important property. If \\( [x] = N \\), then not only \\( x \\in N \\), but also \\( y^{-1} x y \\in N \\) for every \\( y \\in H \\). This holds because\n      \\begin{equation*}\n        [y^{-1} x y]\n        =\n        [y^{-1}] [x] [y]\n        =\n        [y^{-1}] [y]\n        =\n        [y]^{-1} [y]\n        =\n        N.\n      \\end{equation*}\n    \\end{itemize}\n\n    This last property distinguishes \\( N \\) from the \\hyperref[def:multi_valued_function/image]{image} of \\( \\varphi \\). A subgroup satisfying this property is called a \\term{normal subgroup}. See \\fullref{thm:normal_subgroup_equivalences} for equivalent conditions. 
If the image \\( \\img \\varphi \\) is a normal subgroup of \\( H \\), \\( \\varphi \\) is a normal epimorphism in the sense of \\fullref{def:zero_morphisms/cokernel}.\n\n    Obviously \\( \\img \\varphi \\subseteq N \\). Since \\( Q \\) is a colimit, \\( N \\) must be the smallest normal subgroup containing \\( \\img \\varphi \\).\n\n    It is more intriguing that \\( [x] = xN \\) for every \\( x \\in H \\). This can be shown as follows:\n    \\begin{itemize}\n      \\item Suppose first that \\( y \\in xN \\), i.e. \\( y = xn \\) for some \\( n \\in N \\). Then\n      \\begin{equation*}\n        y \\in [y] = [xn] = [x] N = [x].\n      \\end{equation*}\n\n      Generalizing on \\( y \\), we obtain that \\( xN \\subseteq [x] \\).\n\n      \\item Conversely, let \\( y \\in [x] \\). Obviously \\( x = y (y^{-1} x) \\). Then\n      \\begin{equation*}\n        [x^{-1} y] = [x^{-1}] [y] = [x]^{-1} [y] = [x]^{-1} [x] = N.\n      \\end{equation*}\n\n      Hence, \\( x^{-1} y \\in N \\) and \\( y \\in xN \\). Generalizing on \\( y \\), we obtain that \\( [x] \\subseteq xN \\).\n    \\end{itemize}\n\n    Therefore, all cosets in the quotient group \\( Q = H / {\\cong} \\) are translations of the identity \\( N \\). In particular, in the notation of \\hyperref[def:magma/powet_set]{power set operations}, it follows that\n    \\begin{equation*}\n      xyN = xN yN.\n    \\end{equation*}\n\n    Finally, given a normal subgroup \\( N \\) of an arbitrary group \\( G \\), we can define the \\term{quotient group} \\( G / N \\) as the cokernel of the inclusion \\( \\iota: N \\to G \\). That is, \\( G / N \\) consists of the cosets \\( xN \\) for \\( x \\in G \\) with the group operation \\( xN yN = xyN \\).\n\n    \\thmitem{def:group/simple} If the only proper \\hyperref[thm:normal_subgroup_equivalences]{normal subgroup} of \\( G \\) is the \\hyperref[def:group/trivial]{trivial subgroup} \\( \\set{ e_G } \\), we say that \\( G \\) is a \\term{simple group}.\n\n    The trivial group itself is not simple, because it has no proper subgroups.\n  \\end{thmenum}\n\\end{definition}\n\n\\begin{example}\\label{ex:power_set_is_not_a_group}\n  The \\hyperref[def:magma/power_set]{power set magma} \\( \\pow(G) \\) of a group \\( G \\) is a monoid, but it is not a group unless \\( G \\) is trivial.\n\\end{example}\n\n\\begin{proposition}\\label{thm:def:group}\n  Every \\hyperref[def:group]{group} \\( G \\) has the following basic properties:\n  \\begin{thmenum}\n    \\thmitem{thm:def:group/cancellative} The (binary) group operation is \\hyperref[def:magma/cancellative]{cancellative}.\n    \\thmitem{thm:def:group/identity_inverse} The identity \\( e \\) is its own inverse.\n    \\thmitem{thm:def:group/inverse_composition} \\( (xy)^{-1} = y^{-1} x^{-1} \\).\n    \\thmitem{thm:def:group/involution} \\( x = (x^{-1})^{-1} \\)\n    \\thmitem{thm:def:group/negative_power} For any positive integer \\( n \\), \\( (x^n)^{-1} = (x^{-1})^n \\)\n    \\thmitem{thm:def:group/inverse_isomorphism} The map \\( x \\mapsto x^{-1} \\) is a group isomorphism.\n    \\thmitem{thm:def:group/zero_kernel} The \\hyperref[def:group/kernel]{kernel} of a group homomorphism \\( \\varphi: G \\to H \\) is trivial if and only if \\( \\varphi \\) is an \\hyperref[def:first_order_homomorphism_invertibility/embedding]{embedding} (injective homomorphism).\n\n    \\thmitem{thm:def:group/kernel_cokernel_compatibility} For a \\hyperref[def:group/quotient]{quotient group} \\( G / N \\) with canonical projection \\( \\pi(x) \\coloneqq xN \\), the 
\\hyperref[def:group/kernel]{kernel} of \\( \\pi \\) is \\( N \\).\n\n    \\thmitem{thm:def:group/kernel_is_normal_subgroup} The kernel of a group homomorphism is a \\hyperref[thm:normal_subgroup_equivalences]{normal subgroup}.\n  \\end{thmenum}\n\\end{proposition}\n\\begin{proof}\n  \\SubProofOf{thm:def:group/cancellative} If \\( x = y \\), obviously \\( xz = yz \\) and \\( zx = zy \\). Now if \\( xz = yz \\), we have\n  \\begin{equation*}\n    x = x(zz^{-1}) = (xz)z^{-1} = (yz)z^{-1} = y(zz^{-1}) = y.\n  \\end{equation*}\n\n  The case \\( zx = zy \\) is analogous.\n\n  \\SubProofOf{thm:def:group/identity_inverse} \\( ee = e \\).\n  \\SubProofOf{thm:def:group/inverse_composition}\n  \\begin{equation*}\n    (xy) (y^{-1} x^{-1})\n    =\n    x (y y^{-1}) x^{-1}\n    =\n    e\n    =\n    y^{-1} (x^{-1} x) y\n    =\n    (y^{-1} x^{-1}) (xy).\n  \\end{equation*}\n\n  \\SubProofOf{thm:def:group/involution}\n  \\begin{equation*}\n    (x^{-1})^{-1}\n    =\n    x x^{-1} (x^{-1})^{-1}\n    =\n    x.\n  \\end{equation*}\n\n  \\SubProofOf{thm:def:group/negative_power} Using \\fullref{thm:def:group/involution},\n  \\begin{equation*}\n    x^{-n}\n    =\n    (x^n)^{-1}\n    =\n    x^{-1} \\cdots x^{-1}\n    =\n    (x^{-1})^n.\n  \\end{equation*}\n\n  \\SubProofOf{thm:def:group/inverse_isomorphism} Trivial.\n\n  \\SubProofOf{thm:def:group/zero_kernel}\n  \\SufficiencySubProof* Suppose that \\( \\ker \\varphi = \\set{ e_H } \\) and \\( \\varphi(x) = \\varphi(y) \\). Then\n  \\begin{equation*}\n    e_H = \\varphi(x) \\varphi(y)^{-1} = \\varphi(x y^{-1}).\n  \\end{equation*}\n\n  Thus, \\( x y^{-1} \\in \\ker \\varphi \\), and hence \\( x = y \\).\n\n  Therefore, \\( \\varphi \\) is injective.\n\n  \\NecessitySubProof* Suppose that \\( \\varphi \\) is injective. Since \\( \\varphi(e_G) = e_H \\), \\( \\varphi(x) = e_H \\) implies that \\( x = e_G \\).\n\n  \\SubProofOf{thm:def:group/kernel_cokernel_compatibility} Trivial.\n\n  \\SubProofOf{thm:def:group/kernel_is_normal_subgroup} For a homomorphism \\( \\varphi: G \\to H \\), if \\( x \\in \\ker \\varphi \\), then\n  \\begin{equation*}\n    \\varphi(y^{-1} x y) = \\varphi(y)^{-1} \\varphi(x) \\varphi(y) = \\varphi(y)^{-1} \\varphi(y) = e_H,\n  \\end{equation*}\n  and thus \\( y^{-1} x y \\in \\ker \\varphi \\).\n\\end{proof}\n\n\\begin{proposition}\\label{thm:group_homomorphism_single_condition}\n  A function between groups is a \\hyperref[def:group/homomorphism]{group homomorphism} if and only if it satisfies \\eqref{eq:def:magma/homomorphism}.\n\\end{proposition}\n\\begin{proof}\n  \\SufficiencySubProof \\eqref{eq:def:magma/homomorphism} is required to hold by definition.\n\n  \\NecessitySubProof Let the function \\( \\varphi: G \\to H \\) satisfy \\eqref{eq:def:magma/homomorphism}. Then it preserves identities, i.e. is a \\hyperref[def:pointed_set/homomorphism]{pointed set homomorphism}. Indeed, we have\n  \\begin{equation*}\n    e_H \\varphi(e_G) = \\varphi(e_G) = \\varphi(e_G e_G) = \\varphi(e_G) \\varphi(e_G).\n  \\end{equation*}\n\n  By \\fullref{thm:def:group/cancellative}, \\( \\varphi \\) is cancellative, and hence \\( e_H = \\varphi(e_G) \\).\n\n  Inverses are preserved (i.e. 
\begin{proposition}\label{thm:group_homomorphism_single_condition}
  A function between groups is a \hyperref[def:group/homomorphism]{group homomorphism} if and only if it satisfies \eqref{eq:def:magma/homomorphism}.
\end{proposition}
\begin{proof}
  \SufficiencySubProof \eqref{eq:def:magma/homomorphism} is required to hold by definition.

  \NecessitySubProof Let the function \( \varphi: G \to H \) satisfy \eqref{eq:def:magma/homomorphism}. Then it preserves identities, i.e. it is a \hyperref[def:pointed_set/homomorphism]{pointed set homomorphism}. Indeed, we have
  \begin{equation*}
    e_H \varphi(e_G) = \varphi(e_G) = \varphi(e_G e_G) = \varphi(e_G) \varphi(e_G).
  \end{equation*}

  The group operation of \( H \) is cancellative by \fullref{thm:def:group/cancellative}, so we can cancel \( \varphi(e_G) \) on the right and obtain \( e_H = \varphi(e_G) \).

  Inverses are also preserved (i.e. \eqref{eq:def:group/function_parity/odd} holds) because
  \begin{equation*}
    \varphi(x^{-1})
    =
    \varphi(x^{-1}) e_H
    =
    \varphi(x^{-1}) \varphi(x) \varphi(x)^{-1}
    =
    \varphi(x^{-1} x) \varphi(x)^{-1}
    =
    e_H \varphi(x)^{-1}
    =
    \varphi(x)^{-1}.
  \end{equation*}

  Therefore, \( \varphi \) is indeed a group homomorphism.
\end{proof}

\begin{lemma}\label{thm:group_operation_induces_bijections}
  For each element \( x \) of a group \( G \), consider the left translation \( \varphi_x \coloneqq x \id_G \), i.e.
  \begin{equation*}
    \begin{aligned}
      &\varphi_x: G \to G \\
      &\varphi_x(y) \coloneqq x \cdot y.
    \end{aligned}
  \end{equation*}

  This is a bijective function (but not necessarily a group isomorphism).
\end{lemma}
\begin{proof}
  \SubProofOf[def:function_invertibility/injective]{injectivity} If \( y, y' \in G \) and \( \varphi_x(y) = \varphi_x(y') \), we have
  \begin{equation*}
    xy = \varphi_x(y) = \varphi_x(y') = xy'.
  \end{equation*}

  By \fullref{thm:def:group/cancellative}, \( y = y' \). Therefore, \( \varphi_x \) is injective.

  \SubProofOf[def:function_invertibility/surjective]{surjectivity} If \( z \in G \), then \( z = x(x^{-1} z) = \varphi_x(x^{-1} z) \), and thus every member of \( G \) has a preimage. Therefore, \( \varphi_x \) is surjective.
\end{proof}

\begin{proposition}\label{thm:invertible_submonoid_is_group}
  The set of all \hyperref[def:monoid_inverse]{invertible} elements of a \hyperref[def:monoid]{monoid} is a \hyperref[def:group]{group}.
\end{proposition}
\begin{proof}
  Fix a monoid \( M \).

  \begin{itemize}
    \item \( e_M \) is invertible.
    \item If \( x \) and \( y \) are invertible, then \( xy \) is invertible with inverse \( y^{-1} x^{-1} \).
    \item If \( x \) is invertible with inverse \( x^{-1} \), then \( x^{-1} \) is invertible with inverse \( x \).
  \end{itemize}

  Therefore, the set of invertible elements is a submonoid of \( M \) in which every element has an inverse, that is, a group.
\end{proof}

\begin{definition}\label{def:subgroup_cosets}
  Let \( H \subseteq G \) be a subgroup of \( G \). Even if \( H \) is not normal, we can define the \term{left} and \term{right cosets}
  \begin{equation*}
    x H \coloneqq \set{ xh \given h \in H }
    \quad\quad
    H x \coloneqq \set{ hx \given h \in H }.
  \end{equation*}

  The \term{index} \( [G : H] \) of \( H \) in \( G \) is the \hyperref[def:cardinal]{cardinality} of the family of all left cosets.

  The discussion in \fullref{def:group/quotient} can be generalized to show that \( \set{ xH \given x \in G } \) is a \hyperref[def:set_partition]{partition} of \( G \) into \hyperref[def:equinumerosity]{equinumerous} sets. If \( H \) is not normal, this partition is not induced by a congruence, and we cannot form a quotient group using a non-normal subgroup. Nonetheless, left and right cosets still turn out to be useful.
\end{definition}
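The following small example, worked out here for concreteness, illustrates both phenomena. In the symmetric group \( S_3 \), with permutations composed from right to left, take the subgroup \( H = \set{ \id, (1\;2) } \). Since \( (1\;3)(1\;2) = (1\;2\;3) \) while \( (1\;2)(1\;3) = (1\;3\;2) \), we have
\begin{equation*}
  (1\;3) H = \set{ (1\;3), (1\;2\;3) },
  \quad\quad
  H (1\;3) = \set{ (1\;3), (1\;3\;2) },
\end{equation*}
so the left and right cosets of \( H \) differ, and \( H \) is not normal. The three left cosets \( H \), \( (1\;3)H \) and \( (2\;3)H \) nevertheless partition \( S_3 \) into equinumerous sets, and \( [S_3 : H] = 3 \).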
\begin{proposition}\label{thm:group_coset_bijection}
  The family of all \hyperref[def:subgroup_cosets]{left cosets} of a subgroup is \hyperref[def:equinumerosity]{equinumerous} to the family of all right cosets.
\end{proposition}
\begin{proof}
  Fix a subgroup \( H \) of \( G \), and consider the function \( xH \mapsto Hx^{-1} \) taking left cosets to right cosets. Note that the more obvious candidate \( xH \mapsto Hx \) is not well-defined unless \( H \) is normal.

  The function is well-defined because, if \( x H = x' H \), then \( x' = x h \) for some \( h \in H \), and thus
  \begin{equation*}
    H (x')^{-1} = H h^{-1} x^{-1} = H x^{-1}.
  \end{equation*}

  It is injective by the converse argument, and it is surjective because every right coset \( Hx \) is the image of \( x^{-1} H \). Therefore, it is bijective.
\end{proof}

\begin{proposition}\label{thm:normal_subgroup_equivalences}
  For a subgroup \( N \) of \( G \), the following conditions are equivalent:
  \begin{thmenum}
    \thmitem{thm:normal_subgroup_equivalences/congruence} For every element \( x \) of \( G \), we have the set equality
    \begin{equation}\label{eq:thm:normal_subgroup_equivalences/congruence}
      x^{-1} N x = N.
    \end{equation}

    This is the definition of a normal subgroup obtained in \fullref{def:group/quotient}.

    \thmitem{thm:normal_subgroup_equivalences/cosets} The partitions induced by the \hyperref[def:subgroup_cosets]{left and right cosets} of \( N \) coincide.

    \thmitem{thm:normal_subgroup_equivalences/kernel} \( N \) is the \hyperref[thm:group_kernels]{kernel} of some group homomorphism.
  \end{thmenum}

  In particular, kernels are always normal subgroups.
\end{proposition}
\begin{proof}
  This is the group-theoretic analog of \fullref{thm:equivalence_partition}.

  \ImplicationSubProof{thm:normal_subgroup_equivalences/congruence}{thm:normal_subgroup_equivalences/cosets} Applying \eqref{eq:thm:normal_subgroup_equivalences/congruence} to \( x^{-1} \) in place of \( x \) gives \( x N x^{-1} = N \). Hence, for any \( x \in G \),
  \begin{equation*}
    N x = (x N x^{-1})x = x N(x^{-1}x) = x N,
  \end{equation*}
  thus every left coset is a right coset and vice versa.

  \ImplicationSubProof{thm:normal_subgroup_equivalences/cosets}{thm:normal_subgroup_equivalences/kernel} We can take the \hyperref[def:group/quotient]{canonical projection} \( \pi(x) \coloneqq x N \) as the homomorphism. By \fullref{thm:def:group/kernel_cokernel_compatibility}, \( \ker \pi = N \).

  \ImplicationSubProof{thm:normal_subgroup_equivalences/kernel}{thm:normal_subgroup_equivalences/congruence} Let \( \varphi: G \to H \) be a group homomorphism with \( N = \ker \varphi \). By \fullref{thm:def:group/kernel_is_normal_subgroup}, \( N \) is a normal subgroup in the sense of \fullref{def:group/quotient}, i.e. it satisfies \eqref{eq:thm:normal_subgroup_equivalences/congruence}.
\end{proof}
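A standard instance, included here for illustration: the sign homomorphism \( \operatorname{sgn}: S_n \to \set{ -1, 1 } \) has the alternating group \( A_n \) as its kernel, so \( A_n \) is a normal subgroup of \( S_n \) by \fullref{thm:normal_subgroup_equivalences/kernel}, even though \( S_n \) is not abelian for \( n \geq 3 \).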
\begin{theorem}[Quotient group universal property]\label{thm:quotient_group_universal_property}\mcite[thm. II.7.12]{Aluffi2009}
  For every \hyperref[def:group]{group} \( G \) and \hyperref[thm:normal_subgroup_equivalences]{normal subgroup} \( N \), the \hyperref[def:group/quotient]{quotient group} \( G / N \) has the following \hyperref[rem:universal_mapping_property]{universal mapping property}:
  \begin{displayquote}
    Every group homomorphism \( \varphi: G \to H \) satisfying \( N \subseteq \ker \varphi \) \hyperref[def:factors_through]{uniquely factors through} \( G / N \). That is, there exists a unique homomorphism \( \widetilde{\varphi}: G / N \to H \), such that the following diagram commutes:
    \begin{equation}\label{eq:thm:quotient_group_universal_property/diagram}
      \begin{aligned}
        \includegraphics[page=1]{output/thm__quotient_group_universal_property.pdf}
      \end{aligned}
    \end{equation}

    In the case where \( N = \ker \varphi \), \( \widetilde{\varphi} \) is an \hyperref[def:first_order_homomorphism_invertibility/embedding]{embedding}.
  \end{displayquote}

  This extends to \fullref{thm:quotient_module_universal_property} and \fullref{thm:quotient_algebra_universal_property}.
\end{theorem}
\begin{proof}
  We want
  \begin{equation*}
    \widetilde{\varphi}(\pi(x)) = \widetilde{\varphi}(xN) = \varphi(x).
  \end{equation*}

  This suggests the definition
  \begin{equation*}
    \widetilde{\varphi}(xN) \coloneqq \varphi(x).
  \end{equation*}

  The homomorphism \( \widetilde{\varphi} \) is well-defined because, if \( x N = x' N \), then \( x^{-1} x' \in N \subseteq \ker \varphi \), and hence
  \begin{equation*}
    \varphi(x')
    =
    \varphi(x (x^{-1} x'))
    =
    \varphi(x) \varphi(x^{-1} x')
    =
    \varphi(x) e_H
    =
    \varphi(x).
  \end{equation*}

  Uniqueness follows from the surjectivity of \( \pi \): the requirement \( \widetilde{\varphi}(\pi(x)) = \varphi(x) \) determines \( \widetilde{\varphi} \) on every coset.

  If \( N = \ker \varphi \), the kernel of \( \widetilde{\varphi} \) is trivial, since \( \widetilde{\varphi}(xN) = e_H \) implies \( x \in \ker \varphi = N \), i.e. \( xN = N \). By \fullref{thm:def:group/zero_kernel}, \( \widetilde{\varphi} \) is then injective.
\end{proof}

\begin{corollary}\label{thm:quotient_group_by_kernel}
  Every \hyperref[def:group/homomorphism]{group homomorphism} \( \varphi: G \to H \) induces an isomorphism
  \begin{equation*}
    G / \ker \varphi \cong \img \varphi.
  \end{equation*}
\end{corollary}
\begin{proof}
  This follows directly from \fullref{thm:quotient_group_universal_property} by restricting the codomain of \( \widetilde{\varphi} \) to its image.
\end{proof}

\begin{corollary}\label{thm:group_epimorphisms_are_normal}
  Every surjective group homomorphism is a \hyperref[def:zero_morphisms/cokernel]{normal epimorphism}.
\end{corollary}
\begin{proof}
  Fix a surjective group homomorphism \( \varphi: G \to H \). By \fullref{thm:quotient_group_by_kernel}, \( G / \ker \varphi \cong \img \varphi = H \). Thus, \( H \) is a \hyperref[def:zero_morphisms/cokernel]{cokernel} of the canonical inclusion \( \iota: \ker \varphi \to G \).
\end{proof}
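Continuing the example of the sign homomorphism: for \( n \geq 2 \), \( \operatorname{sgn}: S_n \to \set{ -1, 1 } \) is surjective, so \fullref{thm:quotient_group_by_kernel} yields
\begin{equation*}
  S_n / A_n \cong \set{ -1, 1 },
\end{equation*}
a group with two elements. In particular, \( [S_n : A_n] = 2 \).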
\begin{theorem}[Quotient subgroup lattice theorem]\label{thm:quotient_subgroup_lattice_theorem}\mcite[prop. II.8.9]{Aluffi2009}
  Given a \hyperref[thm:normal_subgroup_equivalences]{normal subgroup} \( N \) of \( G \), the function \( H \mapsto H / N \) is a \hyperref[def:semilattice/homomorphism]{lattice} isomorphism between the \hyperref[thm:substructures_form_complete_lattice]{lattice of subgroups} of \( G \) containing \( N \) and the lattice of subgroups of the \hyperref[def:group/quotient]{quotient} \( G / N \).

  \begin{figure}[h]
    \centering
    \includegraphics[page=1]{output/thm__lattice_theorem_for_groups.pdf}
    \caption{The lattice of subgroups of \( G \) and the lattice of subgroups of \( G / N \).}
    \label{fig:thm:quotient_subgroup_lattice_theorem}
  \end{figure}

  This extends to \fullref{thm:quotient_submodule_lattice_theorem} and \fullref{thm:quotient_ideal_lattice_theorem}.
\end{theorem}
\begin{proof}
  \SubProofOf[def:function_invertibility/injective/equality]{injectivity} Let \( H_1 / N = H_2 / N \). Both \( H_1 / N \) and \( H_2 / N \) consist of the same cosets, hence
  \begin{equation*}
    H_1 = \bigcup (H_1 / N) = \bigcup (H_2 / N) = H_2.
  \end{equation*}

  Therefore, the map \( H \mapsto H / N \) is injective.

  \SubProofOf[def:function_invertibility/surjective/existence]{surjectivity} Fix a subgroup \( M \) of \( G / N \) and define
  \begin{equation*}
    H \coloneqq \set{ x \in G \given xN \in M }.
  \end{equation*}

  This preimage of \( M \) under the canonical projection is a subgroup of \( G \) containing \( N \), and clearly \( H / N = M \). Therefore, the map \( H \mapsto H / N \) is surjective.

  \SubProofOf[def:semilattice/homomorphism]{lattice compatibility} The join \( \braket{ K \cup H } \) of two subgroups of \( G \) containing \( N \) must satisfy the equality
  \begin{equation}\label{eq:thm:quotient_subgroup_lattice_theorem/join}
    \underbrace{ \braket{ K \cup H } / N }_{\set{ xN \given x \in \braket{ K \cup H } }}
    =
    \underbrace{ \braket{ (K / N) \cup (H / N) } }_{\braket{ \set{ xN \given x \in K \cup H } }}.
  \end{equation}

  Verifying this amounts to noting that \( \braket{ K \cup H } \) is obtained from \( K \cup H \) by repeatedly adjoining products and inverses of its elements. Since the projection map \( \pi: G \to G / N \) is a homomorphism, the coset \( xy N \) of the product of \( x, y \in \braket{ K \cup H } \) is the product \( (xN) (yN) \) of the cosets \( xN \) and \( yN \), and analogously for inverses. Hence, adjoining an element \( x \in G \) to \( K \cup H \) and then taking all cosets is the same as adjoining the coset \( xN \) to \( (K / N) \cup (H / N) \).

  Therefore, \eqref{eq:thm:quotient_subgroup_lattice_theorem/join} holds, and thus \( H \mapsto H / N \) preserves joins in the lattice of subgroups.

  The other verifications are simpler. For meets, we have
  \begin{equation*}
    (K \cap H) / N
    =
    \set{ xN \given x \in K \cap H }
    =
    \set{ xN \given x \in K } \cap \set{ xN \given x \in H }
    =
    (K / N) \cap (H / N),
  \end{equation*}
  where the nontrivial inclusion uses \( N \subseteq K \cap H \): if \( xN = kN = hN \) with \( k \in K \) and \( h \in H \), then \( x \in kN \subseteq K \) and \( x \in hN \subseteq H \).

  Finally, it remains to show that \( H \mapsto H / N \) preserves the \hyperref[def:partially_ordered_set_extremal_points/top_and_bottom]{top and bottom elements}. This is immediate, since \( G / N \) contains all possible cosets of \( N \) and is hence the top of the lattice of subgroups of \( G / N \), while \( N / N \) is the trivial group and hence the bottom.

  Therefore, \( H \mapsto H / N \) is a lattice isomorphism.
\end{proof}
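For a concrete picture, take \( G = \BbbZ \) and \( N = 12\BbbZ \). The subgroups of \( \BbbZ \) containing \( 12\BbbZ \) are exactly the subgroups \( d\BbbZ \) for the divisors \( d \) of \( 12 \), and the theorem matches them bijectively with the subgroups \( d\BbbZ / 12\BbbZ \) of \( \BbbZ / 12\BbbZ \), preserving inclusions, joins and meets.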
\begin{theorem}[Lagrange's theorem for groups]\label{thm:lagranges_theorem_for_groups}
  Let \( H \) be a subgroup of \( G \). We have the following equality
  \begin{equation}\label{eq:thm:lagranges_theorem_for_groups/index}
    \card(G) = \card(H) \cdot [G : H].
  \end{equation}

  If \( H \) is a \hyperref[thm:normal_subgroup_equivalences]{normal subgroup}, then \( [G : H] = \card(G / H) \) and
  \begin{equation}\label{eq:thm:lagranges_theorem_for_groups/card}
    \card(G) = \card(H) \cdot \card(G / H).
  \end{equation}

  This demonstrates that there exists a bijective function between the \hyperref[def:monoid_direct_product]{direct product} \( H \times (G / H) \) and \( G \); however, this bijection need not be a group isomorphism --- see \fullref{ex:lagranges_theorem_for_groups/direct_product_zn}.
\end{theorem}
\begin{proof}
  By \fullref{def:subgroup_cosets}, the left cosets of \( H \) partition \( G \), every coset is equinumerous with \( H \), and there is a total of \( [G : H] \) cosets. Therefore, \eqref{eq:thm:lagranges_theorem_for_groups/index} holds.
\end{proof}
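For a quick numerical check, return to \( S_3 \), which has six elements. The subgroup \( H = \set{ \id, (1\;2) } \) has two elements and index \( [S_3 : H] = 3 \), and indeed \( 6 = 2 \cdot 3 \). Lagrange's theorem also rules out subgroups whose cardinality does not divide \( \card(S_3) = 6 \); for example, \( S_3 \) has no subgroup with four elements.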
\begin{example}\label{ex:subgroups_of_integers}
  Consider the group \( \BbbZ \) of integers with respect to addition.

  Let \( 2\BbbZ \) be the subgroup of all even integers. Then both \( \BbbZ \) and \( 2\BbbZ \) are countably infinite, but their quotient group \( \BbbZ / 2\BbbZ \) has two elements --- the set \( 2\BbbZ \) of all even integers and the set \( 2\BbbZ + 1 \) of all odd integers. Generalizations of this quotient group are discussed in \fullref{thm:group_of_integers_modulo}. \Fullref{thm:lagranges_theorem_for_groups} holds, but it gives no insight due to the absorbing properties of transfinite cardinal arithmetic described in \fullref{thm:simplified_cardinal_arithmetic}.

  Now consider the groups \( 4\BbbZ \subseteq 2\BbbZ \subseteq \BbbZ \). Note that \( 3\BbbZ \) is not a subgroup of \( 2\BbbZ \), since it contains the odd number \( 3 \); this is why we consider powers of \( 2 \).

  Since \( 2\BbbZ \) is a subgroup of \( \BbbZ \) containing \( 4\BbbZ \), the quotient \( 2\BbbZ / 4\BbbZ \) must be a subgroup of \( \BbbZ / 4\BbbZ \) as a consequence of \fullref{thm:quotient_subgroup_lattice_theorem}. We may not know the structure of the quotient groups (although we do, see \fullref{thm:group_of_integers_modulo}), but we know how \( 4\BbbZ \), \( 2\BbbZ \) and \( \BbbZ \) relate to each other, and we are able to determine how the quotient groups relate to each other.
\end{example}
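\begin{example}\label{ex:integers_modulo_four}
  To make \fullref{ex:subgroups_of_integers} fully explicit, we can list the cosets directly:
  \begin{equation*}
    \BbbZ / 4\BbbZ = \set{ 4\BbbZ, 1 + 4\BbbZ, 2 + 4\BbbZ, 3 + 4\BbbZ },
    \quad\quad
    2\BbbZ / 4\BbbZ = \set{ 4\BbbZ, 2 + 4\BbbZ }.
  \end{equation*}

  The two-element family \( 2\BbbZ / 4\BbbZ \) contains the identity \( 4\BbbZ \) and is closed under coset addition, so it is indeed a subgroup of \( \BbbZ / 4\BbbZ \), as predicted by \fullref{thm:quotient_subgroup_lattice_theorem}.
\end{example}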
{"text": "\\documentclass{article}\n\\usepackage{tocloft}\n\\include{common_symbols_and_format}\n\\renewcommand{\\cfttoctitlefont}{\\Large\\bfseries}\n\\begin{document}\n\\logo\n\\rulename{Exponential Trading Rule - Two factor}\n\n\\tblofcontents\n\n\\ruledescription{These representations have the position dependent on an exponentially weighted average of the prior values of research and price. The look back length, $\\lookbacklength$, determines how far back the rule looks. The decay rates, $\\decaycoefficientone$ and $\\decaycoefficienttwo$, and amplitudes, $\\amplitudecoefficientone$ and $\\amplitudecoefficienttwo$, coefficients determine the contribution from each exponential. The expression is normalised to be dimensionless by division by the weighting factors and current price.}\n\n\\ruleparameters\n{Window size}{10}{This is the number of time steps over which exponential contributions are sourced.}{$\\lookbacklength$}\n{$1^{st}$ Exponential decay rate for price}{0.0}{This is the $1^{st}$ decay factor that reduces older contributions from the price series.}{$\\decaycoefficientone^{\\price}$}\n{$1^{st}$ Exponential decay rate for research}{0.0}{This is the $1^{st}$ decay factor that reduces older contributions from the research series.}{$\\decaycoefficientone^{\\research}$}\n{Amplitude of price $1^{st}$ contribution}{-0.1}{This factor scales the $1^{st}$ contribution from past values of price.}{$\\amplitudecoefficientone^{\\price}$}\n{Amplitude of research $1^{st}$ contribution}{0.1}{This factor scales the $1^{st}$ contribution from past values of research.}{$\\amplitudecoefficientone^{\\research}$}\n{$2^{nd}$ Exponential decay rate for price}{0.0}{This is the $2^{nd}$ decay factor that reduces older contributions from the price series.}{$\\decaycoefficienttwo^{\\price}$}\n{$2^{nd}$ Exponential decay rate for research}{0.0}{This is the $2^{nd}$ decay factor that reduces older contributions from the research series.}{$\\decaycoefficienttwo^{\\research}$}\n{Amplitude of $2^{nd}$ price contribution}{-0.1}{This factor scales the $2^{nd}$ contribution from past values of price.}{$\\amplitudecoefficienttwo^{\\price}$}\n{Amplitude of $2^{nd}$ research contribution}{0.1}{This factor scales the $2^{nd}$ contribution from past values of research.}{$\\amplitudecoefficienttwo^{\\research}$}\n\\stoptable\n\n\\section{Equation}\n\n\\begin{equation}\n\\contribution^{\\price}(\\dummyiterator,\\dummyiteratortwo) = \\sum_{\\dummyiteratorthree = 1}^{\\dummyiteratortwo} e^{- \\frac{ \\dummyiterator}{e^{-\\decaycoefficient_\\dummyiteratorthree^\\price}}}\n\\end{equation}\n\\begin{equation}\n\\contribution^{\\research}(\\dummyiterator,\\dummyiteratortwo) = \\sum_{\\dummyiteratorthree = 1}^{\\dummyiteratortwo} e^{- \\frac{ \\dummyiterator}{e^{-\\decaycoefficient_\\dummyiteratorthree^\\research}}}\n\\end{equation}\n\\begin{equation}\n{\\bigcontribution}_{\\dummyiteratortwo \\price} = \\amplitudecoefficient_{\\dummyiteratortwo}^{\\price} \\frac{\\sum_{\\dummyiterator=0}^{\\lookbacklength} \\price_{\\currenttime - \\dummyiterator} \\contribution^{\\price}(\\dummyiterator, \\dummyiteratortwo)}{\\sum_{\\dummyiterator = 0}^{\\lookbacklength} \\contribution^{\\price}(\\dummyiterator, \\dummyiteratortwo)} \\\\\n\\end{equation}\n\\begin{equation}\n\\bigcontribution_{\\dummyiteratortwo  \\research} = \\amplitudecoefficient_{\\dummyiteratortwo}^{\\research} \\frac{\\sum_{\\dummyiterator=0}^{\\lookbacklength} \\research_{\\currenttime - \\dummyiterator} \\contribution^{\\research}(\\dummyiterator, 
\\dummyiteratortwo)}{\\sum_{\\dummyiterator = 0}^{\\lookbacklength} \\contribution^{\\research}(\\dummyiterator, \\dummyiteratortwo)} \\\\\n\\end{equation}\n\\begin{equation}\n\\position_{\\currenttime} = ({\\bigcontributionone}_{\\price} + {\\bigcontributionone}_{\\research} + {\\bigcontributiontwo}_{\\price} + {\\bigcontributiontwo}_{\\research})/{\\price}_{\\currenttime} \\\\\n\\end{equation}\n\\hspace{200mm}\n\n\\noindent where $\\price_\\currenttime$ is the price at time $\\currenttime$, $\\research$ is the value of the research series, $\\amplitudecoefficientone$ and $\\amplitudecoefficienttwo$ are the amplitude coefficients, $\\decaycoefficientone$ and $\\decaycoefficienttwo$ are the decay rate coefficients and $\\position$ is the resultant fractional portfolio investment.\\\\\nIntuitively, the $\\contribution$ are the exponential weightings for each historical value in the look back period, with $\\bigcontribution$ the normalised sum of all the contributions, scaled by the amplitude coefficient. The total contributions are then scaled by the current price to give a dimensionless fraction of the portfolio to invest.\n\\hspace{200mm}\n\\hspace{200mm}\n\n\\keyterms\n\\furtherlinks\n\\end{document}\n", "meta": {"hexsha": "7c8f7a3c004c515c75bc4861bb6879920ffab006", "size": 4538, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "docs/strategies/tex/ExponentialTwo.tex", "max_stars_repo_name": "parthgajjar4/infertrade", "max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 34, "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_issues_repo_path": "docs/strategies/tex/ExponentialTwo.tex", "max_issues_repo_name": "parthgajjar4/infertrade", "max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 137, "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_forks_repo_path": "docs/strategies/tex/ExponentialTwo.tex", "max_forks_repo_name": "parthgajjar4/infertrade", "max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 28, "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "avg_line_length": 87.2692307692, "max_line_length": 539, "alphanum_fraction": 0.7734684883, "num_tokens": 1223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7905303186696747, "lm_q2_score": 0.6959583313396338, "lm_q1q2_score": 0.5501761614547358}}
{"text": "\\documentclass{article}\n\n\\usepackage{palatino}\n\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage[utf8]{inputenc}\n\\usepackage[colorlinks=true]{hyperref}\n\n% macros\n\\newcommand{\\LL}{\\mathcal{L}}\n\\newcommand{\\half}{\\frac{1}{2}}\n\\newcommand{\\norm}[1]{\\left|\\left|#1\\right|\\right|}\n\\newcommand{\\dd}{d}\n\\newcommand{\\ddd}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\block}[1]{\\left(#1\\right)}\n\\newcommand{\\mat}[1]{ \\begin{pmatrix} #1 \\end{pmatrix} }\n\\newcommand{\\inv}[1]{#1^{-1}}\n\\newcommand{\\argmin}[1]{\\underset{#1}{\\textrm{argmin}}}\n\n\n\\newcommand{\\p}[1]{#1^{+}}\n\\newcommand{\\m}[1]{#1^{-}}\n\\newcommand{\\s}[1]{#1^{*}}\n\\newcommand{\\xp}{\\p{x}}\n\\newcommand{\\xm}{\\m{x}}\n\\newcommand{\\vp}{\\p{v}}\n\\newcommand{\\vm}{\\m{v}}\n\\newcommand{\\vs}{\\s{v}}\n\\newcommand{\\fp}{\\p{f}}\n\\newcommand{\\fm}{\\m{f}}\n\\newcommand{\\fs}{\\s{f}}\n\\newcommand{\\Dx}{\\Delta x}\n\\newcommand{\\Dv}{\\Delta v}\n\\newcommand{\\lambdap}{\\p{\\lambda}}\n\\newcommand{\\phip}{\\p{\\phi}}\n\\newcommand{\\JpT}{J^{+^T}}\n\\newcommand{\\Cp}{\\p{C}}\n\n\n\\begin{document}\n\\title{Newton Solver Reference}\n\\author{Fran\\c{c}ois Faure and Matthieu Nesme}\n\\date{\\today}\n\\maketitle\n%\n\\begin{abstract}\n  Detailing the non-linear time-stepping scheme implemented in the CompliantNLImplicitSolver component.\n\\end{abstract}\n%\n\n\n\\section{Notations}\n\n\n\\begin{itemize}\n\\item $x$: positions\n\\item $v$: velocities\n\\item $f(x,v,t)$: forces for given positions and velocities at time $t$\n\\item $h$: time step\n\\item $\\m{(.)}$, $\\p{(.)}$: a state at, respectively, the beginning and the end of the time step\n\\item $\\Dx=\\xp-\\xm$: variation of position during the time step\n\\item $\\Dv=\\vp-\\vm$: variation of velocity during the time step\n\\item $\\alpha$, $\\beta$: blending parameters such as $\\fs=\\alpha\\fp+(1-\\alpha)\\fm$ and $\\vs=\\beta\\vp+(1-\\beta)\\vm$. Corresponding Data are called implicitVelocity and implicitPosition.\n\\item $M$: mass\n\\end{itemize}\n\n\\section{Euler Integration}\n\n\\[\n   \\left \\{\n   \\begin{array}{r c l}\n      \\Dx  & = & h\\vs \\\\\n      M\\Dv & = & h\\fs \\\\\n   \\end{array}\n   \\right .\n\\]\n\nfrom explicit $\\alpha=\\beta=0$ to implicit $\\alpha=\\beta=1$.\n\n\n\\section{Non-linear Solver}\n\nThe method computes the next velocity $\\vp$, such that $e \\equiv M\\Dv-h\\fs$ is satisfied.\n(Note that other time discretizations are implemented to rather compute the new acceleration or $\\Dv$, similar development can be done being careful of the time step scaling.)\\\\\n\n\nBased on the Newton's method, an approximate solution is iteratively improved by solving a linear equation system based on the Jacobian of the residual of the equation to satisfy.\nA first guess is computed with the regular, linearized system (cf the linear time-stepping scheme in the Compliant plugin doc). \\\\\n\n\nStating that \\[ e \\equiv M(\\vp-\\vm)-h(\\alpha\\fp+(1-\\alpha)\\fm)\\] we obtain the jacobian \\[ \\ddd{e}{\\vp}=M-h\\alpha\\ddd{\\fp}{\\vp}\\]\n\nA first order approximation of the Taylor serie of $\\fp=f(\\xp,\\vp,t+h)$ gives\n\n\\[\n   \\begin{array}{r c l}\n      \\fp  & = & f(\\xm,\\vm,t)+\\ddd{f}{x}\\Dx+\\ddd{f}{v}\\Dv \\\\\n           & = & \\fm+K\\Dx+B\\Dv \\\\\n           & = & \\fm+K(h(\\beta\\vp+(1-\\beta)\\vm))+B(\\vp-\\vm)\n   \\end{array}\n\\]\n\nwith $K=\\ddd{f}{x}$ the stiffness matrix and $B=\\ddd{f}{v}$ the damping matrix. 
\\\\\n\n\nSo \\[ \\ddd{\\fp}{\\vp}=h\\beta K+B\\] and \\[\\ddd{e}{\\vp}=M-h\\alpha B-h^2\\alpha\\beta K\\]\n\n\n\\section{Constraints}\n\n\nBilateral, holonomic constraint $\\phi(x)=0$, combined with the ODE leads to $Jv=0$ with $J=\\ddd{\\phi}{x}$, the constraint forces are $-J^T\\lambdap$ with $\\lambda$ the Lagrange multipliers.\n\nThe error becomes \\[ e \\equiv M(\\vp-\\vm)-h(\\alpha(\\fp-\\JpT\\lambdap)+(1-\\alpha)\\fm)\\]\n\nFor compliant constraints $C\\lambda=-\\phi$ \\[ e \\equiv M(\\vp-\\vm)-h(\\alpha(\\fp-\\JpT\\lambdap)+(1-\\alpha)\\fm)-\\Cp h\\lambdap+\\frac{\\phip}{h}\\]\n\\\\\n\nUnilateral constraints $\\phi(x)>=0$ are handled the same way, expect they participate to the error only when they are violated (i.e. when then generate a force $\\lambda$).\n\n\n\\section{Newton Step Length}\n\nTwo different strategies are implemented:\n\\begin{itemize}\n\\item na\u00efve sub-step approach: a predefinied portion (Data 0$<$newtonStepLength$<$1) of the correction is applied successively while the error is decreasing.\n\\item Backtracking algorithm (Data newtonStepLength=1): try to find the maximum amount of correction to apply that decreased \"sufficiently\" the error. The line search described in \\textit{Numerical Recipies} (chapter Globally Convergent Methods for Nonlinear Systems of Equations) is employed.\n\\end{itemize}\n\n\n\\end{document}\n\n", "meta": {"hexsha": "7057ab34962df27511ad9170ba3ec486b7527d7b", "size": 4459, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "applications/plugins/Compliant/doc/NonLinearImplicitSolver.tex", "max_stars_repo_name": "sofa-framework/issofa", "max_stars_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_stars_repo_licenses": ["OML"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "applications/plugins/Compliant/doc/NonLinearImplicitSolver.tex", "max_issues_repo_name": "sofa-framework/issofa", "max_issues_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_issues_repo_licenses": ["OML"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "applications/plugins/Compliant/doc/NonLinearImplicitSolver.tex", "max_forks_repo_name": "sofa-framework/issofa", "max_forks_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_forks_repo_licenses": ["OML"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.276119403, "max_line_length": 293, "alphanum_fraction": 0.6806458847, "num_tokens": 1443, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8311430478583168, "lm_q2_score": 0.661922862511608, "lm_q1q2_score": 0.5501525853949996}}
{"text": "\\newpage\n\\chapter{Stagnation flow flames}\n\\section{Introduction}\nThe stagnation flame model in Camflow is a transient model capable of predicting the species profiles, velocity profile, and the temperature profile.  When the temperature profile is obtained from experiment, the model can be used to simulate only the species profiles and velocity profile by providing the experimentally observed temperature profile. In this case only the species transport equations,  momentum and continuity equations are solved. An initial guess of intermediate species needs to be provided. If the temperature profile is not known, the model can predict the temperature profile by solving the energy equation. In this case in addition to the intermediate species composition, an initial guess for the temperature profile also needs to be provided. \n\n\\section{Fundamentals}\nThe equation of continuity in cylindrical coordinate by considering only axial and radial coordinate can be written as\n\\begin{equation}\n \\frac{\\partial \\rho}{\\partial t} + \\frac{1}{r}\\frac{\\partial}{\\partial r}(\\rho rv) + \\frac{\\partial}{\\partial z}(\\rho u) = 0;\n\\end{equation}\nThe radial and axial momentum follows as\n\\begin{equation}\n \\rho \\bigg( \n \\frac{\\partial v}{\\partial t} +\n v\\frac{\\partial v}{\\partial r}+\n u\\frac{\\partial v}{\\partial x} \\bigg) = -\\frac{\\partial p}{\\partial r} +\n\\mu \\bigg[\\frac{\\partial}{\\partial r}\\bigg(\\frac{1}{r}\\frac{\\partial}{\\partial r}(rv)\\bigg) + \\frac{\\partial^2v}{\\partial x^2}\\bigg],\n\\end{equation}\n\n\\begin{equation}\n \\rho \\bigg( \n \\frac{\\partial u}{\\partial t} +\n v\\frac{\\partial u}{\\partial r}+\n u\\frac{\\partial u}{\\partial x} \\bigg) = -\\frac{\\partial p}{\\partial r} +\n\\mu \\bigg[\\frac{1}{r} \\frac{\\partial}{\\partial r}\\bigg(r\\frac{\\partial u}{\\partial r}\\bigg) + \\frac{\\partial^2u}{\\partial x^2}\\bigg],\n\\end{equation}\nAssuming similarity near the center line, the axial velocity, radial velocity, temperature and species mass fraction turns out to be functions of time and axial coordinate. 
Defining v/r =V(x), and by applying boundary layer theory in the axial direction the continuity and momentum simplified to\n\n\\begin{equation}\n\\frac{\\partial \\rho}{\\partial t} + \\frac{\\partial (\\rho u)}{\\partial x} + 2\\rho V = 0\n\\end{equation}\n\\begin{equation}\n \\rho \\frac{\\partial V}{\\partial t} + \\rho u \\frac{\\partial V}{\\partial x} + \\rho V^2 = -\\Lambda + \\frac{\\partial }{\\partial x}\\bigg(\\mu \\frac{\\partial V}{\\partial x} \\bigg),\n\\end{equation}\nand\n\\begin{equation}\n \\frac{\\partial p}{\\partial x} = 0.\n\\end{equation}\nThe eigen value $\\Lambda$ of the system has to be solved with other dependent variables\n\\begin{equation}\n \\frac{1}{r}\\frac{\\partial p}{\\partial r} = \\Lambda\n\\end{equation}\nThe energy equation is written as\n\\begin{equation}\n \\rho c_p \\frac{\\partial T}{\\partial t}+ \\rho u c_p \\frac{\\partial T}{\\partial z} - \\frac{\\partial }{\\partial z}\\bigg(\\lambda \\frac{\\partial T}{\\partial z} \\bigg) + \\frac{\\partial}{\\partial z} \\sum_k j_k h_{k}  + \\sum_k h_k\\dot{\\omega}_k = 0\n\\end{equation}\nand the species transport equation\n\\begin{equation}\n \\rho \\frac{\\partial Y_k}{\\partial t}+ \\rho u \\frac{\\partial Y_k}{\\partial z} + \\frac{\\partial j_k}{\\partial z} = \\dot{\\omega}_k W_k, \\quad k=1\\ldots K_g\n\\end{equation}\nwith $j_k$ defined as\n\\begin{equation}\n j_k = -\\rho D_{km} \\frac{dY_k}{dz},\n\\end{equation}\nThe boundary conditions for the above system of equations are \n\\begin{eqnarray}\n x = 0: \\quad u=u_f(t), \\quad V = V_f(t) \\quad T=T_f(t), \\quad Y_k = Y_{kf} \\nonumber \\\\\nx=L: \\quad u=0, \\quad V=0, \\quad T=T_{wall}, \\quad \\frac{dY_k}{dx}=0\n\\end{eqnarray}\nIn the above equations, $\\rho$ is the density in kg/m$^3$, $u$ is the axial velocity in m/s, $v$ is the radial velocity in m/s,  $Y_k$ is the mass fraction of the k\\'th chemical species, $j_k$ the mass flux of the k\\'the species in kg/$m^2-s$, $\\dot{\\omega}_k$ the molar production rate of the k\\'the species in mol/m$^3$-s, D$_{km}$ the diffusion coefficient of the k\\'the species in the mixture, $h_k$ the specific enthalpy in J/kg, and the $t$ the time in $s$.\n\n\\section{Input file}\nAn example of ``camflow.xml'' input file is shown below\n{\\scriptsize{\n\n\\begin{verbatim}\n<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n<camflow>\n   <reactor model=\"stagflow\">\n    <diameter unit=\"m\">0.015</diameter>\n    <length unit=\"cm\">2</length>\n  </reactor>\n  <op_condition>\n     <temperature>userdefined</temperature>\n     <twall unit=\"K\">300.0</twall>\n     <pressure unit=\"bar\">1.0</pressure>\n     <strain>100</strain>\n  </op_condition>\n  <inlet>\n     <fuel>\n       <velocity unit=\"m/s\">2.0</velocity>\n       <temperature unit=\"K\">300.0</temperature>\n       <molefrac>\n        <species name=\"H2\">0.5</species>\n        <species name=\"O2\">*</species>\n       </molefrac>\n    </fuel>\n  </inlet>\n  <solver mode=\"coupled\" solver=\"cvode\" residual=\"on\">\n     <maxTime>10000</maxTime>\n     <iterations>1</iterations>\n     <tols>\n        <species>\n           <aTol>1.e-10</aTol>\n           <rTol>1.e-08</rTol>\n        </species>\n        <temperature>\n           <aTol>1.e-03</aTol>\n           <rTol>1.e-03</rTol>\n        </temperature>\n        <flow>\n           <aTol>1.e-03</aTol>\n           <rTol>1.e-03</rTol>\n        </flow>\n     </tols>\n  </solver>\n  <initialize>\n    <mCenter unit=\"cm\">1</mCenter>\n    <mWidth unit=\"cm\">0.5</mWidth>\n    <massfrac>\n      <intrmdt 
name=\"H\">0.1</intrmdt>\n      <intrmdt name=\"OH\">0.12</intrmdt>\n      <intrmdt name=\"HO2\">0.001</intrmdt>\n      <intrmdt name=\"H2O\">0.01</intrmdt>\n    </massfrac>\n    <Tprofile unit_L=\"cm\" unit_T=\"K\">\n      <position x=\"0.00\">300.</position>\n      <position x=\"0.10\">350.0</position>\n      <position x=\"0.20\">400.0</position>\n      <position x=\"0.30\">500.0</position>\n      <position x=\"0.40\">600.0</position>\n      <position x=\"0.90\">1100.0.</position>\n      <position x=\"1.00\">1200.0</position>\n      <position x=\"1.10\">1100.0.</position>\n      <position x=\"1.20\">1000.0</position>\n      <position x=\"1.30\">900.0</position>\n      <position x=\"1.40\">800.0</position>\n      <position x=\"1.50\">700.0</position>\n      <position x=\"1.60\">600.0</position>\n      <position x=\"1.70\">500.0</position>\n      <position x=\"1.80\">400.0</position>\n      <position x=\"1.90\">350.0</position>\n      <position x=\"2.00\">300.0</position>\n    </Tprofile>\n </initialize>\n <report outfile=\"final\" species=\"mole\">\n </report>\n <grid>grid.inp</grid>\n</camflow>\n\n\\end{verbatim}}\n}\n\nThe input file follows xml standard. A detailed description about the various elements in the input file is specified below.\n\n\\begin{itemize}\n \\item \\textbf{rector} : The reactor element specifies which model is to be simulated and for a stagnation flow flame camflow expects ``stagflow'' as the model attribute value. The reactor element also holds child element length for specifying the nozzle-wall separation, and is given with the unit attribute. The unit of the value specified can be in ``cm'', ``m'', or in ``in''ches. Appropriate attribute must be specified. However,, the grid file (described later) has higher precedence over the value specified here.\n\n\\item \\textbf{op\\_conditions} : The element op\\_conditions describes the operating conditions for the flame. This includes the specification of the pressure and the condition applied to the solution of energy equation. The flame pressure may be specified in the units of ``Pa'', ``atm'', or ``bar''. The temperature element can take the values of ``isothermal'', ``adiabatic'', or ``userdefined''. In the case of isothermal calculation, the energy equation is not solved and the flame is assumed to be at the same temperature as the incoming fuel. The user may also perform the integration for a pre-calculated or measured temperature profile. In this case the temperature child element must be assigned with the value ``userdefined'' and the user defined temperature profile can be specified (explained later). For adiabatic calculations, provide the temperature element with the value ``adiabatic'', and in this case the energy equation will be solved to predict the temperature profile. Further the temperature of the wall surface also needs to be specified using the element ``twall'' with appropriate unit for temperature.\\\\\n\nIn addition to the temperature and pressure, the strain rate needs to be specified since camflow does not solve for the pressure gradient eigen value. Instead it is calculated from the strain rate according to\n\\begin{equation}\n \\Lambda = -\\rho_o a^2\n\\end{equation}\n\n\\item \\textbf{inlet} : All properties pertaining to the fuel inlet must be specified under the element ``fuel''. 
``fuel'' element must specify the velocity at the inlet using the ``velocity'' element, temperature using the ``temperature'' element and the species composition using ``molefrac'' or ``massfrac'' element.The unit for velocity may be in m/s or in cm/s and temperature can be in C or in K. The appropriate units must be specified as attribute values. The chemical species present in the fuel can be specified using the species elements, with the species name as attribute and corresponding mole/mass fraction as attribute. The last species composition may be specified using ``*'', and this case the ``*'' will stand for 1-sum of composition of other species. \n\n\\item \\textbf{solver}: The solver element holds the solver control specifications. The attributes ``mode'' can be specified as ``coupled'' or ``segregated'' for counter flow flames. The solver name is essentially provided to switch from one solver to another. However, the present version of Camflow uses only CVode as the numerical integrator, and therefore accepts only ``cvode'' as the solver name. When the solver mode is specified as coupled, the governing equations are solved simultaneously, and for ``segregated'' mode, the governing equations are solved sequentially for a number of times that is specified by the element iterations. By default the value of iterations element is one. At the end of the number of iterations, the solution mode will automatically switch to coupled. The user is encouraged to use coupled mode for counter-flow calculations.\\\\\n\nAdditionally the integration time may be optionally specified by the element maxTime. By default the value of maxTime is 100000 s. However, the final integration time is the maxTime or the time required to reach steady state whichever is lower. This means the solver will stop integration, if steady state is reached before the specified integrations time.\\\\\n\nThe element ``tols'' hold the various tolarences that can be applied to the species, energy, and continuity equations. For species a relative tolarence of at least 10$^{-6}$ should be used. The user may need to adjust the tolarence values for the species in case of solution difficulties.\n\n\\item \\textbf{initialize} The initialize element can be used to specify various initial conditions. \nA guess value for the intermediate species composition may be used to initialize the flow filed. When specifying the intermediate compostions, the mixing center and mixing width needs to be specified. When these compositions are specified a guassien profile will be generated for the intermediate species with peaks at the mixing centers and the having the spread specified by mixing width. The mixing center is specified by the element ``mCenter`` and the mixing width is specified by ''mWidth``. Both these elements are provided with ''unit`` attribute, and appropriate unit of length must be specified. Unlike the specification of fuel inlet species composition, the sum of intermediate or product species composition need not to sum up to one. However, the user must ensure that the sum does not exceeds one.\\\\\n\nThe temperature profile can be specified by using the ``Tprofile'' element with two attributes namely ``unit\\_L'' for length unit and ``unit\\_T'' for temperature unit. The length unit can be in ``cm'' or in ``m'', where as the temperature unit can be either in ``K'' or in ``C''. 
The actual temperature as a function of reactor position is specified with the child elements position with the attribute ``x'', which stands for the position with the reactor. If the length unit is specified as ``cm'' then ``x'' is the position from the reactor inlet in ``cm'', and the value for the position element is the temperature at position ``x''.\n\n\\item \\textbf{report}: The desired output for the species composition must be specified in this element using the species attribute. ``mole'' or ``mass'' may be used as the attribute values, and correspondingly the output will be produced either in mole fraction or mass fractions.\n\n\\item \\textbf{grid}: The premix flame model requires a descretised geometry. The geometry may be specified using anyfile that contains a nodes of the descretised geometry. The content of the grid file is assumed to be in ``m'' units. An example is shown below\\\\\n{\\scriptsize{\\begin{verbatim}\n0.0\n0.0002\n0.0003\n0.0004\n0.0005\n0.0006\n0.0007\n0.0009\n0.001\n0.0015\n0.002\n0.003\n0.004\n0.005\n0.006\n0.007\n0.008\n0.009\n0.01\n0.02\n\\end{verbatim}\n}}\n\nThe content of the grid file has precedence over the length of the flame specified in the reactor element. This means that the length of the flame after reading the grid file will be set to the final point specified in the grid file.\n\\end{itemize}\n\n\\section{Executing the binary}\nThe stagnation flame model of Camflow expects four input files namely, ``camflow.xml'', ``therm.dat'',  ``chem.inp'', and ``tran.dat''. All the files must be present in the working directory. Upon succesful execution the output file ``profile.dat'' containing the final integration time (s), axial position (m), residence time (1/s), density (kg/m$^3$), velocity (m/s), massflow rate (kg/m$^2$s), temperature (K), and the species compositions in mass or mole fractions.\n\n%===============================================================================================\n%\n%\n%\n%===============================================================================================\n", "meta": {"hexsha": "b841f04c36d63440fa6068dac50a34e8f82573ce", "size": 13938, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "doc/supporting-information/camflow/stag.tex", "max_stars_repo_name": "sm453/MOpS", "max_stars_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-08T14:06:33.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-04T07:52:19.000Z", "max_issues_repo_path": "doc/supporting-information/camflow/stag.tex", "max_issues_repo_name": "sm453/MOpS", "max_issues_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/supporting-information/camflow/stag.tex", "max_forks_repo_name": "sm453/MOpS", "max_forks_repo_head_hexsha": "f1a706c6552bbdf3ceab504121a02391a1b51ede", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-11-15T05:18:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T13:51:20.000Z", "avg_line_length": 67.3333333333, "max_line_length": 1129, "alphanum_fraction": 0.7155259004, "num_tokens": 3656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8267118111485244, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.5501027681789302}}
{"text": "\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%                             Conclusion                              %\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\\chapter*{Conclusion}\n\\addcontentsline{toc}{chapter}{Conclusion}\n\\label{cha:conclusion}\n\\markboth{Conclusion}{Conclusion}\n\nIn this Ph.D. thesis, we contributed to the formulation and the resolution of the posture generation problem for robotics.\n%Which is a subproblem of many robotics applications such as planning, trajectory generation and control.\n%Such problems aim at finding a robot configuration that satisfies some high-level requests while ensuring its viability, in the sense that, in this configuration, the robot is stable, avoid collisions and respects its intrinsic limitations.\nPosture generation is the problem of finding a robot configuration to fulfill high-level task goals under viability or user-specified constraints (e.g. find a posture that is statically stable, avoids non-desired contacts and respects the robot's intrinsic limitations).\nThis problem is traditionally formulated and solved as an optimization problem by minimizing a specific cost function under a set of constraints.\n\nWe described the formulation of the basic building blocks of the posture generation problems and proposed some extensions, such as a 'smooth' formulation of non-inclusive contact constraints between two polygons; which allows finding optimal contact configurations in complex situations where two surfaces in contact cannot be included in each other.\nThis formulation proved very helpful for planning complex scenarios such as making the HRP-2 robot climb a ladder, it allowed to automatically find some contact configurations that would otherwise take a long time to find manually by trial and errors.\n%even when generate viable configurations of contact that would otherwise not be considered by usual formulations; it relies on the idea of adding to the problem a set of variables that represent an ellipse included in both polygons.\n\nRobotics problems often contain variables that belong to non-Euclidean manifolds, they are traditionally handled by modifying the mathematical definition of the problem and adding extra variables and constraints to it.\nWe present a generic way to handle such variables in our formulation, without modifying the mathematical problem, and most importantly, we propose an adaptation of existing optimization techniques to solve constrained nonlinear optimization problems defined on non-Euclidean manifolds.\nWe then detail our implementation of such an algorithm, based on an SQP approach, which, to our knowledge, is the first implementation of a nonlinear solver on manifolds that can handle constraints.\nThis, not only allows us to formulate mathematical problems more elegantly and ensure the validity of our variables at every step of the optimization process, but also enables us to have a better understanding of our solver, and flexibility in tuning it for robotics problems.\nIn turn, this allows us to investigate new ways of solving posture generation problems.\n\nThe search space of a posture generation problem is the Cartesian product of several (sub)manifolds in which different quantities/variables are defined (e.g.translations, rotations, joint angles, contact forces, etc.).\n%We design a new the posture generation frameword that takes advantage of the structure of the search space.\n%We present a new formulation of the posture 
generation problem that takes into account the inherent structure of its configuration space as a cartesian product of submanifolds representing different quantities(translations, rotations, joint angles, contact forces, etc.).\n%Each submanifold of the search space is considered as a separate entity, and variables on it can be considered separately from each other.\n%We take advantage of that structure to propose a posture generation framework where the variables, submanifolds, and function's derivations are managed automatically, and geometric expressions can be written intuitively, which simplifies the work of the developer of new functions.\nWe take advantage of that structure to propose a posture generation framework where the variables and submanifolds are managed automatically, and geometric expressions can be written intuitively and are differentiated automatically.\nThis framework helps reducing the work the developer of new functions has to do, as functions of geometric expressions can be quickly prototyped and their derivatives are computed automatically, and the variable management allows to simply write the function on its input space without worrying about the location of said input space within the global search space.\nOur framework allows to easily and elegantly write custom constraints on selected sets of variables defined on the submanifolds of the search space.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nWe exploit the capabilities of our framework to generate viable robot's configurations with contacts on non-flat surfaces by parametrizing the location of the contact point with additional variables.\nThat approach allowed us to compute manipulation and locomotion postures involving solids defined only by their meshes and where the choice of the contact location was left entirely to the solver.\nThat way, we computed some postures of an HRP-4 robot holding the leg of an HRP-2 robot in its grippers, and others, for example, where HRP-4 is climbing on a stack of cubes and the contacts locations are chosen automatically.\nThis was made possible by proposing a generic way to parametrize the surface of a solid represented by a mesh, based on the Catmull-Clark subdivision algorithm, and using it to compute configurations where the optimal location of contact on a complex object is chosen by the solver.\n\nWe took a great care to eliminate many of the cumbersome aspects of writing a posture generation problem, and in the end, the genericity of our framework allows the definition and use of a wide range of functions applied to any variables of the problem (joint angles, forces, torques or any additional variables).\nSuch function can then be used to define and solve custom posture generation problems with our solver, and compute viable postures that satisfy any user-defined tasks.\n\nWe evaluate the performances of our solver on problems relying heavily on the manifold formulation and show that it is superior to the classic approach in terms of convergence speed and time spent per iteration.\n%We then present some evaluations of our solver and posture generator: we solved a cube stacking problem that relies heavily on the manifold formulation to showcase that, the manifold formulation performs better than the traditional one in that problem and in particular, the time spent per iteration is, as expected, shorter.\n%We studied the influence of the distance between the initial guess and the problem's solution on the success rate of the posture generation 
problem.\n%This showed that when starting close to the solution, convergence is almost always reached whereas when starting remotely, it is more difficult to find the solution, and more work on the solver could help increase this success rate.\n%Such study allowed us to compare results for different solver options.\n%We showed that in terms of hessian update method, the self-scaling BFGS update on individual Hessians gives us the best results.\nWe then present an approach to evaluate and compare the performances of different resolution methods for solving several types of posture generation problems, which could help develop strategies to tune our solver, based on the problem at hand.\n\nWe developed our solver in order to solve posture generation problems, but we show that its capabilities can be leveraged to solve other kinds of problems such as the identification of inertial parameters, where the manifold formulation guarantees that the optimization iterates are always physically valid.\nFor this problem, our solver and formulation proved more efficient than the traditional formulation solved with an off-the-shelf solver.\nFinally, we presented our preliminary work in using posture generation in multi-contact planning in real sensory acquired environment.\n\n%Overall, we present an efficient and user-friendly framework for posture generation along with a nonlinear solver on manifold that can be used to compute viable robotic postures satisfying some tasks.\n%The framework is designed such that creating custom functions and constraints to the problem is simple.\n\n%Although our solver fairs correctly, compared to other solvers on some problems (Schittkowsky, Inertial identification), we believe that some improvements could still be made to it.\n%This study showed that our solver is able to solve posture generation problems in times comparable to the state of the art, but we are fully aware that those computation times could be improved.\nAlthough we manage to get some satisfying results out of our posture generator, the solver still requires some tuning of its options, and of the initial guess, to ensure good convergence rates.\nWe are fully aware that even though the computation times that we observe are of the same order of magnitude as the ones found in the literature, improvements could and should be made in the future to make our posture generator more robust and faster, by not only improving the solver's implementation, but also specializing it for robotic posture generation problems.\n%We believe that improvements could be made to make our solver more reliable and to specialize it for the resolution of posture generation problems.\nIn particular, we believe that the restoration phase, especially the treatment of the Hessian updates, can be improved.\nWe can also try using other QP solvers than LSSOL.\nA sparse solver, for example, may be more suited to take advantage of the structure of our problem.\nMost importantly, future works need to develop the solver to make it specialized for posture generation problems, either by finding optimal solver options or by modifying the optimization algorithm.\nIn future research, having an open solver and being able to modify it will be a crucial element of our ability to find and use the most suited algorithm to solve posture generation and other problems encountered in robotics more efficiently.\n%An initial study of the problem to solve, followed by an automatic tuning of the solver seems like a promising idea to improve the posture 
generator's performances.\nOne could, for example, choose an initial guess from a database of robot postures and some solver options automatically, based on an initial study of the structure of the problem.\nIt would also be possible to take into account the very specific structure of a robot in the resolution algorithm.\n\n%Having an open and functionnal solver is a crutial tool in\n%Our open solver, which can be modified to suit the needs [xxx of what], can be crutial tool to achieve this goal.\n%This is made possible by the fact that we now have an open solver that we can modify to suit our needs.\n%It could be done through defining families of typical problems and finding an optimal set of solver options to solve them.\n%That could be done through finding an optimal set of solver options to solve a family of problems.\n%For example we could allow it to treat a variable number of constraints along the iterations, that way, some constraints could be dynamically added or ignored during the resolution.\n%This could be useful to reduce the amount of computation per iterations, for example by ignoring some collision constraints when they are far from the iterate.\n\nIt would be interesting to use our posture generator in a higher-level software, like a multi-contact stance planner, to automatically generate sequences of viable configuration to use to guide the motion of the robot.\nIdeally, we would like to integrate the posture generation with our multi-contact control framework to allow on-the-fly replanning of postures when the initial plan becomes not feasible because of issues, such as drift of the robot motion from the original plan.\n\n\n%We present a formulation of the posture generation problem that takes into account the fact that some variables such as the 3D rotations live in non-Euclidean manifolds while making the writing of problems and of specific constraints more versatile.\n\n%In order to solve nonlinear optimization problems on manifolds, we developped our own solver, which not only allows us to formulate mathematical problems more elegantly and ensure the validity of our variables at every step of the optimization process, but also enables us to have a better understanding and mastering in the way to tune our solver for robotic problems, and allows us to try new ways of solving those problems.\n\n%We proposed several new types of constraint formulation that make the spectrum of discoverable viable postures larger.\n%We presented a formulation of non-inclusive flat contacts that allows to generate contacts between two surfaces while monitoring the size of their intersections.\n%Leveraging our problem formulation and its ability to manage manifolds allowed us to generate contacts with\n", "meta": {"hexsha": "a291c0a243da59de3086c91fa20441a9d26b633f", "size": 13299, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Chapter7-Conclusion/chapter7.tex", "max_stars_repo_name": "stanislas-brossette/phd-thesis", "max_stars_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter7-Conclusion/chapter7.tex", "max_issues_repo_name": "stanislas-brossette/phd-thesis", "max_issues_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter7-Conclusion/chapter7.tex", "max_forks_repo_name": "stanislas-brossette/phd-thesis", "max_forks_repo_head_hexsha": "7f4d2d46dfdd1f59ac29770585e8cee6dc4f2668", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 143.0, "max_line_length": 427, "alphanum_fraction": 0.8069027746, "num_tokens": 2416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8267117898012104, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.5501027594595749}}
{"text": "%==============================================================================\n\\chapter{$\\rbr{1+1}$-dimensional Dilaton Gravity}\n\\label{sec:1+1ddlt}\n%==============================================================================\n\nSince quantum field theories in $\\rbr{3+1}$-dimensional Einstein gravitation \nare difficult to solve, one may turn to alternative solvable gravity models to \nget some hints for the physics in reality. The $\\rbr{1+1}$-dimensional dilaton \ngravity, or the Callan--Giddings--Harvey--Strominger (CGHS) model, is such a \ncandidate, which is used in the thesis and has been extensively studied in the \nliterature, see for instance \\cite{Callan1992,Demers1996,Ashtekar2011}. In this \nchapter a very brief review of the model is provided, mainly based on \n\\cite{Demers1996}.\n\nThe action for the model, with $N$ massless scalar fields minimally coupled, \nreads\n\\begin{equation}\nS \\coloneqq \\int \\dd^2 x \\sqrt{-\\ol{g}}\\cbr{\\frac{\\ee^{-2\\ol{\\phi}}}{\\nG}\n\\sbr{\\ol{R}+4\\rbr{\\nabla \\ol{\\phi}}^2 + 4\\lambda^2}\n-\\frac{1}{2}\\sum_{i=1}^N\\rbr{\\nabla f_i}^2},\n\\label{eq:action-CGHS-dilaton}\n\\end{equation}\nwhere $f_i$'s are the neutral scalar matter fields, $\\ol{\\phi}$ is the dilaton \nfield, $\\nG$ the dimensionless Newton constant, and $\\lambda > 0$ the \ncosmological constant. The dilaton field is essential in two dimensions, because \nthere is only one independent component in the Riemann curvature tensor, hence a \npure $\\rbr{1+1}$-dimensional Einstein theory shall be trivial. Transforming \nwith $\\phi = \\ee^{-2\\ol{\\phi}}$ and $g_{\\alpha\\beta} = \\ee^{-2\\ol{\\phi}} \n\\ol{g}_{\\alpha\\beta}$ eliminates the kinetic term for the dilaton, yielding\n\\begin{equation}\nS = \\int \\dd x\\,\\dd t\\sqrt{-g}\\cbr{\\frac{1}{\\nG}\\sbr{R\\phi + 4\\lambda^2}\n-\\frac{1}{2}\\rbr{\\nabla f}^2},\n\\label{eq:action-CGHS-dilaton-eli}\n\\end{equation}\nwhere only one matter field is considered for simplicity, and an \nADM-like\\footnote{Arnowitt--Deser--Misner, see \\cite{Arnowitt2008}.}\nparametrisation of the metric\n\\begin{equation}\n\\dd s^2 = \\ee^{2\\rho}\\sbr{-\\sigma^2\\,\\dd t^2 + \\rbr{\\dd x + \\xi\\,\\dd t}^2}\n\\label{eq:ADM-para}\n\\end{equation}\nis assumed, in which $\\rbr{\\sigma, \\xi}$ are the lapse and shift functions.\nThe action in \\cref{eq:action-CGHS-dilaton-eli} has a classical solution \ndescribing a collapsing null-matter shell, which resembles the solution of \na spherically collapsing body in the $\\rbr{3+1}$-dimensional Einstein case. \nThe corresponding conformal diagram is plotted in \\cref{fig:dil-col-bod}.\n\nTo apply the canonical quantisation scheme, the action in\n\\cref{eq:action-CGHS-dilaton-eli} is to be recast in the Hamiltonian formalism.\nHowever, the Legendre transformation of the field momenta proves to be \nsingular. This means that the momenta, as functions of the positions and \nvelocities, cannot be inverted to express the corresponding velocities as \nfunctions of momenta and positions, so that the standard algorithm to obtain \nthe Hamiltonian\n\\begin{equation}\nH \\coloneqq \\sbr{\\frpa{L}{\\dot{X}_i}\\dot{X}_i - L}_{\\dot{X}\n= \\rfun{\\dot{X}}{P,X}}\n\\end{equation}\ndoes not apply, where $\\rbr{X, \\dot{X}, P}$ are the positions, velocities and \nmomenta, respectively.\n\nThe systems, of which the Legendre transformation is singular, are called \n\\emph{constrained systems}. 
Other examples include a relativistic point \nparticle in the covariant form, Yang--Mills theories and string theories. For \nsuch systems, the usual quantisation scheme and the Schr\u00f6dinger equation do \nnot apply directly. Instead, one has to identify the constraints in the system \nand apply Dirac's quantisation rules \\cite{dirac1964lectures,Gitman1990}. The \nresult is that the quantum wave functional describing the CGHS model is \n\\emph{constrained} by\n\\begin{align}\n0 = \\mscrH_\\parallel\\sfun{\\Psi}{\\rho,\\phi,f} &\\coloneqq\n\\rbr{\\frde{\\rho}{x}\\frdva{}{\\rho} - \\frde{}{x}\\frdva{}{\\rho} \n+\\frde{\\phi}{x}\\frdva{}{\\phi} + \\frde{f}{x}\\frdva{}{f}}\\sfun{\\Psi}{\\rho,\\phi,f},\n\\label{eq:hpara}\\\\\n0 = \\mscrH_\\perp\\sfun{\\Psi}{\\rho,\\phi,f} &\\coloneqq\n\\rbr{\\frac{\\nG}{2}\\frdva{^2}{\\rho\\,\\dva\\phi} - \\frac{1}{2}\\frdva{^2}{f^2} \n+\\frac{1}{2\\nG}V_G + V_M}\\sfun{\\Psi}{\\rho,\\phi,f},\n\\label{eq:hperp}\n\\end{align}\nwhere\n\\begin{equation}\nV_G \\coloneqq 4\\rbr{\\frde{^2\\phi}{x^2} - \\frde{\\phi}{x}\\frde{\\rho}{x} - \n2\\lambda^2 \\ee^{2\\rho}},\\qquad\nV_M \\coloneqq\\frac{1}{2}\\rbr{\\frde{f}{x}}^2.\n\\end{equation}\n\\Cref{eq:hpara,eq:hperp} are the \\emph{Wheeler--DeWitt equations} for the CGHS \nmodel, which play the role of the usual Schr\u00f6dinger equation for the whole \nsystem.\n\nIn the next step, a semi-classical approximation (see also \\cite[sec.\\ \n5.4]{Kiefer2012}) of the Born--Oppenheimer type is applied to $\\Psi$ by \nexpanding the exponent as\n\\begin{equation}\n\\sfun{\\Psi}{\\rho,\\phi,f} = \\ee^{\\ii\\rbr{\\nG^{-1}S_0 + S_1 + \\nG S_2 + \\ldots}}.\n\\end{equation}\nInserting this expression into \\cref{eq:hpara,eq:hperp}, one finds that at \norder $\\nG^{0}$, variables can be separated by setting\n\\begin{equation}\n\\ee^{\\ii S_1} \\coloneqq \\sfun{D^{-1}}{\\rho,\\phi}\\sfun{\\chi}{\\rho,\\phi,f}.\n\\end{equation}\nInserting the leading and next-to-leading order terms into \n\\cref{eq:hpara,eq:hperp} yields\n\\begin{equation}\n\\ii \\rbr{\\frpa{\\rho}{t}\\frdva{\\chi}{\\rho} + \\frpa{\\phi}{t}\\frdva{\\chi}{\\phi}} = \n\\frac{1}{2}\\cbr{-\\frdva{^2}{f^2}+\\rbr{\\frpa{f}{x}}^2} \\chi,\n\\end{equation}\nintegration of which gives the functional Schr\u00f6dinger equation for the matter \nfield \\eqref{eq:functional-Sch-chi}.\n\n\n% Order $\\nG^{-1}$: Hamilton--Jacobi for pure gravity\n\n\n%%% Local Variables: \n%%% mode: latex\n%%% TeX-master: \"../mythesis\"\n%%% End: \n", "meta": {"hexsha": "20765d3010714fb3ad978062a35cec9d0d83859c", "size": 5557, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "ubonn-thesis-current/mythesis/sections/thesis_1+1ddlt.tex", "max_stars_repo_name": "cmp0xff/Masterarbeit", "max_stars_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ubonn-thesis-current/mythesis/sections/thesis_1+1ddlt.tex", "max_issues_repo_name": "cmp0xff/Masterarbeit", "max_issues_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ubonn-thesis-current/mythesis/sections/thesis_1+1ddlt.tex", "max_forks_repo_name": "cmp0xff/Masterarbeit", "max_forks_repo_head_hexsha": "b29c84f9a29e4a7c9a3499658a1dfa7f87d64c9c", 
"max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.093220339, "max_line_length": 81, "alphanum_fraction": 0.6960590247, "num_tokens": 1875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8354835289107309, "lm_q2_score": 0.6584175072643415, "lm_q1q2_score": 0.5500969824658187}}
{"text": "%---------------------------Shape-----------------------------\n\\section{Edge Ratio\\label{s:hex-edge-ratio}}\n\nThe edge ratio quality metric is the ratio of the longest to\nshortest edge of a hexahedron:\n\\[\nq = \\frac{L_{\\max}}{L_{\\min}}.\n\\]\n\n\\hexmetrictable{edge ratio}%\n{$1$}%                                        Dimension\n{--}%                                         Acceptable range\n{$[1,DBL\\_MAX]$}%                             Normal range\n{$[1,DBL\\_MAX]$}%                             Full range\n{$1$}%                                        Cube\n{--}%                                         Citation\n{v\\_hex\\_edge\\_ratio}%                        Verdict function name\n", "meta": {"hexsha": "481350b32b9755dc940aa579a9a5af0786d9e925", "size": 677, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexEdgeRatio.tex", "max_stars_repo_name": "Lin1225/vtk_v5.10.0", "max_stars_repo_head_hexsha": "b54ac74f4716572862365fbff28cd0ecb8d08c3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-06-01T00:15:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T10:49:02.000Z", "max_issues_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexEdgeRatio.tex", "max_issues_repo_name": "Armand0s/homemade_vtk", "max_issues_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-02-17T11:40:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T21:23:25.000Z", "max_forks_repo_path": "Utilities/verdict/docs/VerdictUserManual2007/HexEdgeRatio.tex", "max_forks_repo_name": "Armand0s/homemade_vtk", "max_forks_repo_head_hexsha": "6bc7b595a4a7f86e8fa969d067360450fa4e0a6a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2015-03-23T21:13:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T11:15:39.000Z", "avg_line_length": 37.6111111111, "max_line_length": 67, "alphanum_fraction": 0.3840472674, "num_tokens": 144, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.8354835330070839, "lm_q2_score": 0.6584175005616829, "lm_q1q2_score": 0.5500969795629684}}
{"text": "Following the work of \\cite{Brosse18tULA}, we aim to quantify the accuracy of these methods by finding the first and second moments of known distributions. Then, comparing the error between the generated value and the true value will give a first indication on the accuracy of the scheme. Beginning with the original code associated with \\cite{Brosse18tULA}\\footnote{Available at \\url{https://github.com/nbrosse/TULA}}, we initially optimised the code then aimed to reproduce the results they found. \n\n\\subsection{Testing Potentials}\nFor the purposes of testing, four qualitatively different potentials have been made available within our program. These are: Gaussian, Double Well, Rosenbrock function and Ginzburg-Landau model. We note that these, save for Gaussian, are non-convex.\n\nAny of the above potentials may also be scaled by \\textit{temperature} $T$. That is, having chosen a potential function $U$ and a temperature $T$, the true distribution to be sampled from will be\n\\[\\pi(x) = e^{-\\frac{U(x)}{T}}.\\]\nThis is common in the molecular dynamics literature and is also useful in MCMC to find modes of distributions more quickly, a technique known as tempering.\n\nFigure \\ref{fig:tamedStep} shows how taming prevents divergence even at higher step sizes, where the untamed algorithms would diverge. Figure \\ref{fig:doubleWell_moment} compares first and second moments of the double well distribution in 100 dimensions after running the methods for \\(10^5\\) iterations and with a stepsize of \\(h=0.1\\). Note that \\texttt{ULA} and \\texttt{HOLA} diverge and produce no useful samples. Theoretically, the first moment of this distribution is 0, and the second moment is around 0.104 in 100 dimensions. It can be seen that at this larger step size, the taming is causing an overestimation of the second moment for \\texttt{tULA,tLM} and \\texttt{tHOLA}. Adjusting the taming to a coordinatewise function is reducing the issue. This a similar phenomena to that seen in Section \\ref{sec:stiff}. Figure \\ref{fig:timedRun} takes a different approach. Rather than running each algorithm for a fixed number of iterations, we instead let each scheme run for 10 seconds. It can be seen that \\texttt{tHOLA} has a much higher range of error when stepsize is low. This is because each iterate takes much longer than the other methods, resulting in an order of magnitude fewer samples being produced in 10 seconds. This is due to the algorithm involving the multiplication of large matrices, a costly computation. When the stepsize is increased the Metroplised algorithms seem to perform worst. It is unclear why this should be the case. 
Once again \\texttt{ULA} and \\texttt{LM} diverge.\n\n\\begin{figure}\n\\centering\n  \\begin{minipage}[b]{0.32\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/tula_fm.png}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.32\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/tulac_fm.png}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.32\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/tmala_fm.png}\n  \\end{minipage}\n   \\caption{Comparison of \\texttt{tULA}, \\texttt{tULAc} and \\texttt{tMALA} for the first moment evolving as a function of step size.}\n   \\label{fig:tamedStep}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n  \\begin{minipage}[b]{0.85\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/doublewell_0_1_10_5samp_100dFirstMoment.png}\n  \\end{minipage}\\\\ %\n  \\begin{minipage}[b]{0.85\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/secondmoment_double_well_100d_10_5samp.png}\n  \\end{minipage} %\n  \\caption{Comparison of first (top) and second (bottom) moments of the double well distribution in 100 dimensions after running the methods for \\(10^5\\) iterations and with a stepsize of \\(h=0.1\\). Note that \\texttt{ULA} and \\texttt{HOLA} diverge and produce no useful samples.  The true first moment is 0, and second moment $\\approx 0.1$}\n  \\label{fig:doubleWell_moment}\n  \\end{figure}\n  \n  \\begin{figure}\n\\centering\n  \\begin{minipage}[b]{0.47\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/10sBoxPlot1moment100dim001step.png}\n  \\end{minipage} %\n  \\begin{minipage}[b]{0.47\\textwidth}\n  \\centering\n    \\includegraphics[width=\\textwidth]{Figures/10sBoxPlot1moment100dim01step.png}\n  \\end{minipage} %\n  \\caption{Comparison of methods when given 10 seconds to run at \\(h=0.01\\) (left) and \\(h=0.1\\) (right). Note that \\texttt{ULA} and \\texttt{LM} both diverge when the stepsize is too high.}\n  \\label{fig:timedRun}\n  \\end{figure}", "meta": {"hexsha": "ecbf0c24e0ad2b7dcf86969cdfc4dbf300e65c56", "size": 4568, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "WriteUp/MomentErrors.tex", "max_stars_repo_name": "Tom271/LangevinMC", "max_stars_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2019-02-07T12:51:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T13:35:13.000Z", "max_issues_repo_path": "WriteUp/MomentErrors.tex", "max_issues_repo_name": "swyoon/LangevinMC", "max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "WriteUp/MomentErrors.tex", "max_forks_repo_name": "swyoon/LangevinMC", "max_forks_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-19T17:44:19.000Z", "avg_line_length": 81.5714285714, "max_line_length": 1503, "alphanum_fraction": 0.7697022767, "num_tokens": 1282, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7431680086124812, "lm_q2_score": 0.740174367770488, "lm_q1q2_score": 0.5500739109219959}}
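To make the update rules under comparison concrete, the following is a minimal sketch of one step of \\texttt{ULA}, \\texttt{tULA} and the coordinatewise variant \\texttt{tULAc} (written in Java purely for illustration; the study itself uses the Python code of \\cite{Brosse18tULA}, and the double-well gradient below is an assumed example potential):\n\\begin{lstlisting}[]\n  // One step of the (optionally tamed) unadjusted Langevin algorithm:\n  //   x <- x - h*drift(x) + sqrt(2h)*xi,   xi ~ N(0, I)\n  static double[] step (double[] x, double h, String scheme,\n                        java.util.Random rng) {\n     double[] g = gradU (x);             // gradient of the potential U\n     int d = x.length;\n     if (scheme.equals (\"tULA\")) {       // global taming: g/(1+h||g||)\n        double norm = 0;\n        for (double gi : g) norm += gi*gi;\n        norm = Math.sqrt (norm);\n        for (int i = 0; i < d; i++) g[i] /= (1 + h*norm);\n     }\n     else if (scheme.equals (\"tULAc\")) { // coordinatewise taming\n        for (int i = 0; i < d; i++) g[i] /= (1 + h*Math.abs (g[i]));\n     }                                   // \"ULA\": untamed, may diverge\n     double[] y = new double[d];\n     for (int i = 0; i < d; i++) {\n        y[i] = x[i] - h*g[i] + Math.sqrt (2*h)*rng.nextGaussian();\n     }\n     return y;\n  }\n\n  // gradient of an assumed double-well potential U(x) = ||x||^4/4 - ||x||^2/2\n  static double[] gradU (double[] x) {\n     double n2 = 0;\n     for (double xi : x) n2 += xi*xi;\n     double[] g = new double[x.length];\n     for (int i = 0; i < x.length; i++) g[i] = (n2 - 1)*x[i];\n     return g;\n  }\n\\end{lstlisting}\nSampling at temperature $T$ then amounts to replacing $U$ by $U/T$, i.e. dividing the gradient by $T$.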
{"text": "\\section{Omitted Proofs}\n\n\\subsection{Permutation Network}\\label{sec:perm-proof}\n\nWe now formally prove that the oblivious permutation network protocol in \\figureref{fig:switching-net}\tand repeated in \\figureref{fig:perm-net-repeat} is secure with respect to the  \\f{permute} functionality of  \\figureref{fig:perm-ideal-2}.\n\n\\begin{figure}\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\\small\n\t\t\tParameters: $3$ parties denoted as \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input, output vector size of $n, m$.\n\t\t\t\\smallskip\t\t\t\n\t\t\t\n\t\t\t\\input{prot-fig-perm}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Permutation Network protocol $\\proto{permute}$ repeated. }\n\t\\label{fig:perm-net-repeat}\t\n\\end{figure}\n\n\\begin{figure}\\small\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\n\t\t\tParameters: $3$ parties denoted as the \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input vector size of $n$ and output size of $m$.\n\t\t\t\n\t\t\t{\\bf [Permute]} Upon the command $(\\textsc{Permute}, \\pi, \\shareTwo{A}_0)$ from the \\programmer and $(\\textsc{Permute}, \\shareTwo{A}_1)$ from the \\sender:\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item Interpret $\\pi: [m]\\rightarrow [n]$ as an injective function and $A\\in \\Sigma^n$. \n\t\t\t\t\\item Compute $A'\\in \\Sigma^m$ s.t. $\\forall i\\in [m], A_{\\pi(i)} = A'_i$.\n\t\t\t\t\\item Generate $\\shareTwo{A'}$ and send $\\shareTwo{A'}_0$ to \\programmer and $\\shareTwo{A'}_1$ to \\receiver.\n\t\t\t\\end{enumerate}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Permutation Network ideal functionality \\f{permute}.}\n\t\\label{fig:perm-ideal-2}\t\n\\end{figure}\n\n\\begin{theorem}\\label{thm:permute}\n\tProtocol $\\proto{permute}$ of \\figureref{fig:perm-net-repeat} securely realized the ideal functionality \\f{permute} of \\figureref{fig:perm-ideal-2} given at most one party is corrupted in the semi-honest model.\n\\end{theorem}\n\\begin{proof}\n\tCorrectness follows directly from $\\pi_1\\circ \\pi_0 = \\pi$ and that the masks cancel out.\n\tWith respect to simulation, consider the following three cases:\n\t\\begin{enumerate}\n\t\t\\item \\emph{Corrupt \\programmer}: The view of \\programmer contains no messages and therefore is trivial to simulation. \n\t\t\\item \\emph{Corrupt \\sender}:  The view of \\programmer contains $\\pi_1, S$ which are sent by \\programmer. The simulator can uniformly sample $\\pi_1:[m]\\rightarrow[n]$ from all such injective functions and uniformly sample $S\\gets\\Sigma^n$. Clearly $S$ has the same distribution.\n\t\t\n\t\tWith respect to $\\pi_1$, observe if $\\pi_1$ if first fixed uniformly at random then there are exactly $(n-m)!$ ways to choose $\\pi_0$. Moreover, for each choice of $\\pi_1$ there is a disjoint set of possible $\\pi_0$ values. Therefore, \\programmer sampling $\\pi_0$ uniformly at random results in the distribution of $\\pi_1$ also being uniform.\n\t\t\n\t\t%consider the following hybrid: \\programmer first uniformly sample $\\pi_1:[m]\\rightarrow[n]$ and then defines $\\pi_0:[n]\\rightarrow[n]$ appropriately. For each choice of $\\pi_1$, there are always exactly ${n\\choose m}$ options for $\\pi_0$. What is more, these options are unique to this choice of $\\pi_1$.\n\t\t\\item \\emph{Corrupt \\receiver}: The view of \\receiver contains $B:= ( A_{\\pi_0(1)} \\oplus S_1, ..., A_{\\pi_0(n)} \\oplus S_n)$  and $\\pi_1, T\\in \\Sigma^m$. 
$\\pi_1,T$ are sampled uniformly and are therefore trivial to simulate. Similarly, each $B_i=A_{\\pi_0(i)}\\oplus S_i$ where $S_i$ is uniformly distributed in their view. Therefore each $B_i$ is uniformly distributed as well. \n\t\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Duplication Network}\\label{sec:dup-proof}\n\n\nWe now formally prove that the oblivious duplication network protocol in \\figureref{fig:switching-net}\tand repeated in \\figureref{fig:dup-net-repeat} is secure with respect to the \\f{duplicate} functionality of \\figureref{fig:dup-ideal-2}.\n\n\\begin{figure}\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\\small\n\t\t\tParameters: $3$ parties denoted as \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input, output vector size of $n$.\n\t\t\t\\smallskip\n\t\t\t\n\t\t\t\\input{prot-fig-dup}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Duplication Network protocol \\proto{duplicate} repeated. }\n\t\\label{fig:dup-net-repeat}\t\n\\end{figure}\n\n\n\\begin{figure}\\small\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\n\t\t\tParameters: $3$ parties denoted as the \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input vector size of $n$ and output size of $n$.\n\t\t\t\n\t\t\t{\\bf [Duplicate]} Upon the command $(\\textsc{Duplicate}, \\pi, \\shareTwo{A}_0)$ from the \\programmer and $(\\textsc{Duplicate}, \\shareTwo{A}_1)$ from the \\sender:\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item Interpret $\\pi: [n]\\rightarrow [n]$ as a function s.t. $\\pi(1)=1, \\pi(i)\\in \\{i,\\pi(i-1)\\}$ for $i\\in[2,n]$ and $A\\in \\Sigma^n$. \n\t\t\t\t\\item Compute $A'\\in \\Sigma^n$ s.t. $\\forall i\\in [n], A_{\\pi(i)} = A'_i$.\n\t\t\t\t\\item Generate $\\shareTwo{A'}$ and send $\\shareTwo{A'}_0$ to \\programmer and $\\shareTwo{A'}_1$ to \\receiver.\n\t\t\t\\end{enumerate}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Duplication Network ideal functionality \\f{duplicate}.}\n\t\\label{fig:dup-ideal-2}\t\n\\end{figure}\n\n\n\\begin{theorem}\\label{thm:dup}\n\tProtocol $\\proto{duplicate}$ of \\figureref{fig:dup-net-repeat} securely realizes the ideal functionality \\f{duplicate} of \\figureref{fig:dup-ideal-2}, given that at most one party is corrupted, in the semi-honest model.\n\\end{theorem}\n\\begin{proof}\n\tCorrectness follows from an inductive argument. It is easy to verify $B_1=\\shareTwo{A_1}_1$ and that this is correct since $\\pi(1)=1$ by definition. Inductively let us assume that ${B_{i-1}}=\\shareTwo{A_{\\pi(i-1)}}_1$ and we will show that ${B_i}=\\shareTwo{A_{\\pi(i)}}_1$.  
\n\tObserve that for $i\\in[2,n]$\n\t\\begin{align*}\n\t\\shareTwo{B_i}_0&=M^{b_i}_i\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\ \\ \\  \\oplus W^{\\rho_i}_i\\oplus b_i\\shareTwo{B_{i-1}}_0 \\\\\n \t\t    &=\\overline{b_i}\\shareTwo{A_i}_0\\oplus b_i\\shareTwo{B_{i-1}}_1 \\oplus \\shareTwo{B_{i}}_1\\oplus W^{b_i\\oplus\\phi_i}_i\\oplus W^{\\rho_i}_i\\oplus b_i\\shareTwo{B_{i-1}}_0  \\\\\n\t\t    &=\\overline{b_i}\\shareTwo{A_i}_0 \\oplus b_i B_{i-1}\\oplus \\shareTwo{B_i}_1\n\t\\end{align*}\n\tand therefore $B=\\pi(\\shareTwo{A}_1)$ and\n\t\n\t\\begin{align*}\n\t\tA'=&\\shareTwo{B}_1\\oplus R \\oplus \\pi(\\shareTwo{A}_0) \\oplus \\shareTwo{B}_1\\oplus R\\\\\n\t\t=& B \\oplus \\pi(\\shareTwo{A}_0)\\\\\n\t\t=& \\pi(\\shareTwo{A}_1) \\oplus \\pi(\\shareTwo{A}_0)\\\\\n\t\t=& \\pi(A)\\\\\n\t\\end{align*}\n\t\n\tWith respect to simulation, consider the following three cases:\n\t\\begin{enumerate}\n\t\t\\item \\emph{Corrupt \\programmer}: The transcript of \\programmer contains $M^0, M^1\\in \\Sigma^n,\\shareTwo{B_1}_0\\in \\Sigma, \\phi\\in \\{0,1\\}^n$ from \\sender and $W^{b_i\\oplus\\phi_i}_i$ from \\receiver. First observe that $\\shareTwo{B_1}_0, \\phi$ are sampled uniformly and therefore can be simulated as the same. \n\t\t\n\t\t Next recall that\n\t\\begin{align*}\t\nM^{b_i}_i=&...\\oplus\\shareTwo{B_i}_1\\\\\n\tM^{\\overline{b_i}}_i=&...\\oplus W^{\\overline{b_i\\oplus\\phi_i}}_i\t\n\t\\end{align*}\n\twhere $\\shareTwo{B_i}_1,  W^{\\overline{b_i\\oplus\\phi_i}}_i\\in \\Sigma$ are sampled uniformly and are not in the view of \\programmer.  Therefore $M^0_i,M^1_i$ are distributed uniformly.\n\t\n\t\t\\item \\emph{Corrupt \\sender}:  The transcript of \\sender contains nothing and is therefore trivial to simulate. Note that the distribution of the output shares is independent of \\sender's random tape (view) due to $\\programmer,\\receiver$ re-randomizing the shares with $R\\gets\\Sigma^n$.\n\t\t\n\t\t\\item \\emph{Corrupt \\receiver}:  The transcript of \\receiver contains $\\shareTwo{B_1}_1, W^0,W^1$ from \\sender and $\\rho$ from \\programmer. $W^0,W^1$ are sampled uniformly and therefore can be simulated as the same. $\\shareTwo{B_1}_1=A_1\\oplus \\shareTwo{B_1}_0$ where $\\shareTwo{B_1}_0$ is sampled uniformly and not in the view. Therefore $\\shareTwo{B_1}_1$ is distributed uniformly. The same applies to $\\rho$ since $\\phi$ is uniform and not in the view. \n\t\\end{enumerate}\n\\end{proof}\n\n\n\n\n\\subsection{Switching Network}\\label{sec:switch-proof}\n\n\nWe now formally prove that the oblivious switching network protocol in \\figureref{fig:switching-net} and repeated in \\figureref{fig:switch-net-repeat} is secure with respect to the \\f{switch} functionality of \\figureref{fig:switch-ideal-2}. In the proof we will replace calls to the Permutation and Duplication protocols of \\proto{Switch} with their ideal functionalities (\\figureref{fig:perm-ideal-2}, \\ref{fig:dup-ideal-2}).\n\n\n\\begin{figure}\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\\small\n\t\t\tParameters: $3$ parties denoted as \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input, output vector size of $n, m$.\n\t\t\t\\smallskip\n\t\t\t\n\t\t\t\\input{prot-fig-switch}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Switching Network protocol \\proto{switch} repeated. 
}\n\t\\label{fig:switch-net-repeat}\t\n\\end{figure}\n\n\\begin{figure}\\small\n\t\\framebox{\\begin{minipage}{0.95\\linewidth}\t\t\t\n\t\t\t\\input{ideal-fig-switch}\n\t\\end{minipage}}\n\t\\caption{The Oblivious Switching Network ideal functionality \\f{switch} repeated.}\n\t\\label{fig:switch-ideal-2}\t\n\\end{figure}\n\n\n\n\\begin{theorem}\\label{thm:switch}\n\tProtocol $\\proto{Switch}$ of \\figureref{fig:switch-net-repeat} securely realizes the ideal functionality \\f{switch} of \\figureref{fig:switch-ideal-2}, given that at most one party is corrupted, in the semi-honest model.\n\\end{theorem}\n\\begin{proof}\n\tCorrectness follows from the fact that the first oblivious permutation call rearranges the input vector such that each output item which appears $k$ times is followed by $k-1$ items which do not appear in the output. The duplication network then copies each of these output items into the next $k-1$ positions. The final permutation places these items in the final order. \n\t\n\tWith respect to simulation, the transcript of each party contains its transcripts of three subprotocols: Permute, Shared-Duplicate and Shared-Permute. By \\theoremref{thm:permute} the Permute subprotocol transcript can be simulated. Similarly, \\theoremref{thm:permute},\\ref{thm:dup} also imply that the other two transcripts can be simulated. This implies that the overall protocol can be simulated, given that no other communication is performed. \n\t\t\n\\end{proof}\n%\n%\n%\\subsection{Shared Network}\\label{sec:shared-proof}\n%\n%\n%\n%\\begin{figure}\n%\t\\framebox{\\begin{minipage}{0.95\\linewidth}\\small\n%\t\t\tParameters: $3$ parties denoted as \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input, output vector size of $n, m$.\n%\t\t\t\\smallskip\n%\t\t\t\n%\t\t\t\\input{prot-fig-shared}\n%\t\\end{minipage}}\n%\t\\caption{The Oblivious Shared Switching Network protocol \\proto{shared} repeated. }\n%\t\\label{fig:shared-net-repeat}\t\n%\\end{figure}\n%\n%\\begin{figure}\\small\n%\t\\framebox{\\begin{minipage}{0.95\\linewidth}\n%\t\t\t\n%\t\t\tParameters: $3$ parties denoted as the \\programmer, \\sender and \\receiver. Elements are strings in $\\Sigma:=\\{0,1\\}^\\sigma$. An input vector size of $n$ and output size of $m$.\n%\t\t\t\n%\t\t\t{\\bf [Shared]} Upon the command $(\\textsc{Shared}, cmd, \\pi, \\shareTwo{A}_0)$ from the \\programmer and $(\\textsc{Shared},cmd, \\shareTwo{A}_1)$ from the \\sender:\n%\t\t\t\\begin{enumerate}\n%\t\t\t\t\\item Interpret $\\pi: [m]\\rightarrow [n]$ and $A=\\shareTwo{A}_0\\oplus \\shareTwo{A}_1\\in \\Sigma^n$. \n%\t\t\t\t\\item If $cmd=\\textsc{Permute}$, require $\\pi$ to be injective.\t\t\t\t\n%\t\t\t\t\\item If $cmd=\\textsc{Duplicate}$, require $n=m,\\pi(1)=1, \\pi(i)\\in \\{i, \\pi(i-1)\\}, \\forall i\\in[2,n]$. \n%\t\t\t\t\\item Require $cmd\\in \\{\\textsc{Permute},\\textsc{Duplicate},\\textsc{Switch}\\}$.\n%\n%\t\t\t\t\\item Compute $A'\\in \\Sigma^m$ s.t. 
$\\forall i\\in [m], A_{\\pi(i)} = A'_i$.\n%\t\t\t\t\\item Generate $\\shareTwo{A'}$ and send $\\shareTwo{A'}_0$ to \\programmer and $\\shareTwo{A'}_1$ to \\receiver.\n%\t\t\t\t\n%\t\t\t\\end{enumerate}\n%\t\t\t\n%\t\\end{minipage}}\n%\t\\caption{The Oblivious Shared Switching Network ideal functionality \\f{shared}.}\n%\t\\label{fig:shared-ideal-2}\t\n%\\end{figure}\n%\n%\n%\n%\\begin{theorem}\\label{thm:shared}\n%\tProtocol $\\proto{shared}$ of \\figureref{fig:shared-net-repeat} securely realizes the ideal functionality \\f{shared} of \\figureref{fig:shared-ideal-2}, given that at most one party is corrupted, in the semi-honest model.\n%\\end{theorem}\n%\\begin{proof}\n%\tCorrectness follows from the correctness of the underlying subprotocol and the fact that the \\programmer can locally apply the mapping function to their own share. Simulation of this protocol is exactly that of the underlying subprotocol along with updating the output as specified. \n%\\end{proof}\n%\n\n\n\n\n\\subsection{Join Protocol}\\label{sec:join-proof}\n\n\n\n\\begin{theorem}\\label{thm:join}\n\tProtocol $\\proto{join}$ of \\figureref{fig:full_proto} securely realizes the ideal functionality \\f{join} of \\figureref{fig:full_ideal}, given that at most one party is corrupted, in the semi-honest \\f{Permute},\\f{Switch},\\f{encode}-hybrid model with statistical security parameter $\\lambda$.\n\\end{theorem}\n\\begin{proof}\n\tFirst we demonstrate the correctness of the protocol. Recall that the non-\\null join keys $\\{(X_{J_1}||...||X_{J_l})[i] \\mid i\\in [n]\\}$ are all distinct. The same holds true for the $Y$ table. As such, $P_0$ receives $n$ uniformly random values from \\f{encode}. As discussed in \\ref{sec:encode}, given that these encodings are of length at least $\\lambda + 2 \\log_2(n)$ bits, all the encodings are distinct with probability $1-2^{-\\lambda}$.\n\t\n\tRecall that $P_1$ then constructs a cuckoo hash table using the encodings $\\mathbb{E}_y$. Given that the cuckoo hash table is parameterized as described in \\cite{DRRT18}, this succeeds with overwhelming probability, i.e. $1-2^{-\\lambda}$.\n\t\n\tThe correctness of the rest of the protocol is straightforward. The shared table $\\share{Y}$ is permuted to form a shared cuckoo hash table $\\share{T}$. Based on the encodings $\\mathbb{E}_x$, the shares in the table $T$ are mapped to the corresponding row of $X$. It is easy to verify that if $Y$ has a matching row then it will have been mapped. Finally, \\f{mpc} is used to compute the circuit which constructs the output table.\n\t\n\tWith respect to simulation, consider the following cases:\n\t\\begin{enumerate}\n\t\t\\item \\emph{Corrupt $P_0$}: The transcript of $P_0$ contains the encodings $\\mathbb{E}_x$, the output $\\shareTwo{\\widehat{Y}^l}_0$ from \\f{Switch}, and the output of $\\f{mpc}$. Given that the inputs to \\f{encode} are either set to \\null or all distinct values, the output $\\mathbb{E}_x$ is uniformly distributed and therefore can be sampled as such by the simulator. Similarly, the outputs of $\\f{Switch},\\f{mpc}$ are both uniform.\n\t\t\n\t\t\\item \\emph{Corrupt $P_1$}:  The transcript of $P_1$ contains the encodings $\\mathbb{E}_y$, the output $\\shareTwo{\\widehat{T}}_1$ from \\f{Permute}, the output $\\shareTwo{\\widehat{Y}^l}_1$ from \\f{Switch}, and the output of $\\f{mpc}$. All of these are distributed uniformly. 
The simulation of this transcript follows in the same way as that of $P_0$.\n\t\t\n\t\t\\item \\emph{Corrupt $P_2$}: The transcript of $P_2$ contains the output $\\shareTwo{\\widehat{T}}_0$ from \\f{Permute}, the output $\\shareTwo{\\widehat{Y}^l}_0$ from \\f{Switch}, and the output of $\\f{mpc}$. All of these are distributed uniformly. The simulation of this transcript follows in the same way as that of $P_0$.\n\t\\end{enumerate}\n\t\n\\end{proof}\n\n", "meta": {"hexsha": "fdea5c32a0e8d8556e90a971beafecf5ee095994", "size": 15202, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "tex/security.tex", "max_stars_repo_name": "cyckun/aby3", "max_stars_repo_head_hexsha": "99af31ccaef6cd2c22df8ef57d8b7a07d62c66cf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 121, "max_stars_repo_stars_event_min_datetime": "2019-06-25T01:35:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T12:53:17.000Z", "max_issues_repo_path": "tex/security.tex", "max_issues_repo_name": "cyckun/aby3", "max_issues_repo_head_hexsha": "99af31ccaef6cd2c22df8ef57d8b7a07d62c66cf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2020-01-21T16:47:09.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-23T12:41:22.000Z", "max_forks_repo_path": "tex/security.tex", "max_forks_repo_name": "cyckun/aby3", "max_forks_repo_head_hexsha": "99af31ccaef6cd2c22df8ef57d8b7a07d62c66cf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2019-09-05T08:35:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T11:57:22.000Z", "avg_line_length": 64.1434599156, "max_line_length": 459, "alphanum_fraction": 0.7196421523, "num_tokens": 4829, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743735019595, "lm_q2_score": 0.7431679972357831, "lm_q1q2_score": 0.5500739067607017}}
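The proofs above repeatedly rely on the fact that XOR secret sharing commutes with applying a common index map to both share vectors: if $A = \\shareTwo{A}_0\\oplus\\shareTwo{A}_1$, then $\\pi(A) = \\pi(\\shareTwo{A}_0)\\oplus\\pi(\\shareTwo{A}_1)$, and masking a share with a uniform pad leaves it uniformly distributed. A toy sketch of this identity in Java (our illustration only; it omits the masks and communication pattern of the actual protocols):\n\\begin{lstlisting}[]\n  // Toy check: XOR sharing commutes with a common index map pi.\n  java.util.Random rng = new java.util.Random (1);\n  int n = 4;\n  long[] A  = {10, 20, 30, 40};        // the secret vector\n  long[] A0 = new long[n], A1 = new long[n];\n  for (int i = 0; i < n; i++) {        // share A into (A0, A1)\n     A0[i] = rng.nextLong();\n     A1[i] = A[i] ^ A0[i];\n  }\n  int[] pi = {2, 0, 3, 1};             // an injective index map\n  for (int i = 0; i < n; i++) {\n     long Bi = A0[pi[i]] ^ A1[pi[i]];  // pi(A0) ^ pi(A1), element i\n     assert Bi == A[pi[i]];            // equals pi(A): 30, 10, 40, 20\n  }\n\\end{lstlisting}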
{"text": "\\documentclass[../../main.tex]{subfiles}\n\\begin{document}\n\\section{Functional analysis proof}\n\nThe following is a very different proof of Liapunoff's theorem, as it is based on functional analysis. This proof (\\cite{Lindenstrauss66}), by the Israeli mathematician Joram Lindenstrauss, is very elegant, and is seen by some (specifically Halmos) to be the best proof there is of this theorem.\n\nIt should be said that it is by and large only Halmos that uses the notion of a convex measure as defined in \\ref{def: convex measure}. The interesting thing is that the image of the measure is convex, and Halmos' way is just one way of proving it.\n\n\\begin{theorem}\\label{thm: Lindenstrauss}\nLet $\\mu=(\\mu_{1}, \\dots, \\mu_{n})$ be a purely non-atomic vector measure on a measure space $(X, \\m{A})$. Then the image of $\\mu$ is closed and convex.\n\\end{theorem}:\n\\begin{proof}\nThis proof will follow \\cite{Lindenstrauss66}. The idea is to translate the measure $\\mu$ into an affine map, and then throw our theory of functional analysis at this construction.\n\nFirst assume that $\\mu_{1}, \\dots, \\mu_{n}$ are positive measures.\nLet $\\mu'=\\mu_{1}+\\dots+\\mu_{n}$, and let $W=\\{g \\in L^{\\infty}(\\mu') | 0 \\le g \\le 1 \\}$. Define a map $T:W \\to \\R^{n}$ by\n\\begin{align*}\n\tTg=\\begin{pmatrix}\n\t\t\\int_{X} g d\\mu_{1} \\\\\n\t\t\\vdots \\\\\n\t\t\\int_{X} g d\\mu_{n}\n\t\\end{pmatrix}\n\\end{align*}\nSince $W$ is a closed subspace of the unit-ball in $L^{\\infty}(\\mu')$, it is compact in the $w^{*}$-topology by Banach-Alaoglu. Furthermore $W$ is convex, and $T$ is an affine, $w^{*}$-continuous map (since each $\\mu_{i}$ is absolutely continuous with respect to $\\mu'$, see \\Cref{thm: why it is called absolutely continuous}). Hence $T(W)$ is convex and compact as a subset of $\\R^{n}$.\n\n\nWe now prove that $T(W)=\\mu(\\m{A})$, which shows that $\\mu(\\m{A})$ is closed and convex. First some more setup:\n\nLet $(\\alpha_{1}, \\dots, \\alpha_{n})\\in T(W)$ and define $W_{0}=T^{-1}(\\{\\alpha_{1}, \\dots, \\alpha_{n}\\})$ (the pre-image). \nObserve that for a measurable set $A\\in \\m{A}$, if we take the indicator function of $A$, then the map $T$ and the measure coincide, i.e. $T(\\ind{A})=\\mu(A)$. Hence what we want to show is that $W_{0}$ contains an indicator function, since then every element in the image $T(W)$ is the image of some indicator function i.e. is the measure of some measurable set, which will conclude the proof. \n\nNow, since $W_{0}$ is a closed subset of $W$ it is $w^{*}$-compact and since $T$ is affine, $W_{0}$ is convex. Hence by the Krein-Milman theorem it has extreme points. We will show that if $g\\in \\Ext(W_{0})$ then $g=\\ind{U}$ for some measurable $U\\in \\m{A}$.\n\nSo to show that the range is closed and convex we will use induction on $n$. For $n=1$ let $g\\in \\Ext(W_{0})$ and assume for contradiction that there is an $1 \\ge \\varepsilon > 0$ and a set $Z\\in \\m{A}$ with $\\mu_{1}(Z)>0$ such that $\\varepsilon \\le g \\le 1-\\varepsilon$ on $Z$. Since $\\mu_{1}$ is non-atomic, there is an $A\\subseteq Z$, $A\\in \\m{A}$ such that $\\mu_{1}(A)>0$ and $\\mu_{1}(Z\\setminus A)>0$. \n\nPick $s,t\\in \\R$ not both zero, such that $|s|,|t| \\le \\varepsilon$ and\n\\begin{align*}\n\ts\\mu_{1}(A) = t\\mu_{1}(Z\\setminus A)\n\\end{align*}\nwhich is possible since $\\mu_{1}(A)$ and $\\mu_{1}(Z\\setminus A)$ are constants. 
Let\n\\begin{align*}\n\th=s\\ind{A} - t\\ind{Z\\setminus A}.\n\\end{align*}\nThen since $A$ and $Z\\setminus A$ are disjoint, and $|s|,|t|\\le \\varepsilon < 1$, we see that\n\\begin{align*}\n\t\\int_{X}hd\\mu_{1}=\\int_{A}hd\\mu_{1} + \\int_{Z\\setminus A}hd\\mu_{1}=\\int_{A}sd\\mu_{1}-\\int_{Z\\setminus A}td\\mu_{1}=s\\mu_{1}(A) - t\\mu_{1}(Z\\setminus A)=0.\n\\end{align*}\nSince $|h|\\le \\varepsilon$, we have $|h|\\le g \\le 1-|h|$ on $X$, so $T(g\\pm h)=Tg$, hence $g\\pm h \\in W_{0}$. But since $h\\not\\equiv 0$, we have\n\\begin{align*}\n\t\\frac{1}{2}(g+h)+ \\frac{1}{2}(g-h)=g,\n\\end{align*}\nso $g$ is the midpoint of two distinct elements of $W_{0}$; this contradicts the assumption that $g\\in \\Ext(W_{0})$. Hence there exists a measurable $U\\in \\m{A}$ such that $g=\\ind{U}$. This means that, for $n=1$, $W_{0}$ contains an indicator function, thus $\\mu(\\m{A})=T(W)$ so $\\mu(\\m{A})$ is convex and closed as wanted. \\\\\n\nIf $n > 1$ assume that $\\mu_{1}, \\dots, \\mu_{n-1}$ have convex range. Again let $g\\in \\Ext(W_{0})$ and assume for contradiction that there is an $\\varepsilon$ with $\\frac{1}{2} \\ge \\varepsilon > 0$ and a set $Z\\in \\m{A}$ with $\\mu_{n}(Z)>0$ such that $\\varepsilon \\le g \\le 1-\\varepsilon$ on $Z$. Since $\\mu_{n}$ is non-atomic by assumption, there is an $A\\subseteq Z$, $A\\in \\m{A}$ such that $\\mu_{n}(A)>0$ and $\\mu_{n}(Z\\setminus A)>0$.\n\nBy the induction hypothesis (that $\\mu_{1}, \\dots, \\mu_{n-1}$ on the sets $A$ and $Z\\setminus A$ have convex range) there are $B\\subseteq A$ and $C\\subseteq Z\\setminus A$ such that\n\\begin{align*}\n\t\\mu_{i}(B)=\\frac{1}{2}\\mu_{i}(A), \\quad \\mu_{i}(C)=\\frac{1}{2}\\mu_{i}(Z\\setminus A), \\quad i=1, \\dots, n-1.\n\\end{align*}\n\nPick $s,t\\in \\R$ not both $0$ such that $|s|,|t|\\le \\varepsilon$ and\n\\begin{align*}\n\ts(\\mu_{n}(A) - 2\\mu_{n}(B)) = t(\\mu_{n}(Z\\setminus A) - 2\\mu_{n}(C)).\n\\end{align*}\nLet\n\\begin{align*}\n\th=s(2\\ind{B}-\\ind{A}) + t(\\ind{Z\\setminus A} - 2\\ind{C}).\n\\end{align*}\nThen by the same calculations as above, we have that $\\int_{X} h d\\mu_{i}=0$ for $i=1, \\dots, n$. Since $|h|\\le \\varepsilon$, we have $|h|\\le g \\le 1-|h|$ on $X$, hence $g\\pm h \\in W_{0}$; since $h\\not\\equiv 0$ and $g=\\frac{1}{2}(g+h)+\\frac{1}{2}(g-h)$, this again contradicts the assumption that $g\\in \\Ext(W_{0})$. This means that, for $n > 1$, $W_{0}$ contains an indicator function, thus $\\mu(\\m{A})=T(W)$ so $\\mu(\\m{A})$ is convex and closed as wanted.\n\nNow if $\\mu_{1}, \\dots, \\mu_{n}$ are \\emph{real} measures, let $\\tilde{\\mu}=(\\mu_{1}^{+}, \\mu_{1}^{-}, \\dots, \\mu_{n}^{+}, \\mu_{n}^{-})$ which is non-negative with values in $\\R^{2n}$. This measure is still purely non-atomic by the Hahn decomposition theorem. Indeed, there exists a partition $\\{P_{i}, N_{i}\\}$ of $X$ for each $1\\le i \\le n$ such that $\\mu_{i}^{+}(B)=\\mu_{i}(P_{i}\\cap B)$ and $\\mu_{i}^{-}(B)=\\mu_{i}(N_{i} \\cap B)$, $B\\in \\m{A}$, thus any atom of $\\mu_{i}^{\\pm}$ would be an atom of $\\mu_{i}$ and any atom of $\\mu_{i}$ would be an atom of either $\\mu_{i}^{+}$ or $\\mu_{i}^{-}$, hence $\\tilde{\\mu}$ is purely non-atomic. \n\nWe can thus conclude that the range of $\\tilde{\\mu}$ is closed and convex by the first part of the proof. Define $f:\\R^{2n} \\to \\R^{n}$ by\n\\begin{align*}\n\tf(x_{1}, y_{1}, \\dots, x_{n}, y_{n}):=(x_{1}-y_{1}, \\dots, x_{n}-y_{n}).\n\\end{align*}\nNow, clearly $f$ is a bounded linear map, and since $\\tilde{\\mu}(\\m{A})$ is compact by the first part of the proof, its image $\\mu(\\m{A})=f(\\tilde{\\mu}(\\m{A}))$ is compact, and in particular closed, as $f$ is continuous. 
Finally, let $A,B\\in \\m{A}$ and $\\alpha\\in [0,1]$. By the convexity of $\\tilde{\\mu}(\\m{A})$ there is a set $C\\in \\m{A}$ with $\\tilde{\\mu}(C)=\\alpha \\tilde{\\mu}(A)+(1-\\alpha)\\tilde{\\mu}(B)$, so by the linearity of $f$ we have\n\\begin{align*}\n\t\\alpha \\mu(A)+(1-\\alpha)\\mu(B)&=\\alpha f(\\tilde{\\mu}(A))+(1-\\alpha)f(\\tilde{\\mu}(B)) \\\\\n\t&=f\\Big(\\alpha \\tilde{\\mu}(A)+(1-\\alpha) \\tilde{\\mu}(B)\\Big) \\\\\n\t&=f(\\tilde{\\mu}(C))=\\mu(C)\\in \\mu(\\m{A}),\n\\end{align*}\nso $\\mu(\\m{A})$ is convex, and we are done.\n\\end{proof}\n\n\\end{document}\n", "meta": {"hexsha": "93d1c15b991bcc143f07b9798df325085df44a0b", "size": 6944, "ext": "tex", "lang": "TeX", "max_stars_repo_path": "Tex/VectorMeasures/Sections/FunkAnProof.tex", "max_stars_repo_name": "TheTextbook/MeasureTheory", "max_stars_repo_head_hexsha": "46aa1391b7c51a21ed6d3c80d030b0d9e188f483", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tex/VectorMeasures/Sections/FunkAnProof.tex", "max_issues_repo_name": "TheTextbook/MeasureTheory", "max_issues_repo_head_hexsha": "46aa1391b7c51a21ed6d3c80d030b0d9e188f483", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tex/VectorMeasures/Sections/FunkAnProof.tex", "max_forks_repo_name": "TheTextbook/MeasureTheory", "max_forks_repo_head_hexsha": "46aa1391b7c51a21ed6d3c80d030b0d9e188f483", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 79.816091954, "max_line_length": 635, "alphanum_fraction": 0.6425691244, "num_tokens": 2604, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7401743620390162, "lm_q2_score": 0.7431680086124811, "lm_q1q2_score": 0.5500739066625493}}
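As a concrete instance of the theorem (our illustration, not taken from \\cite{Lindenstrauss66}): take $X=[0,1]$, let $\\mu_{1}$ be the Lebesgue measure and $\\mu_{2}(A)=\\int_{A}2x\\,dx$. Both are purely non-atomic, so the range of $\\mu=(\\mu_{1},\\mu_{2})$ is convex; for example, the midpoint $\\frac{1}{2}\\mu(\\emptyset)+\\frac{1}{2}\\mu([0,1])=(\\frac{1}{2},\\frac{1}{2})$ is attained by the set $[\\frac{1}{4},\\frac{3}{4}]$, since\n\\begin{align*}\n\t\\mu\\big(\\big[\\tfrac{1}{4},\\tfrac{3}{4}\\big]\\big)=\\Big(\\tfrac{1}{2},\\ \\int_{1/4}^{3/4}2x\\,dx\\Big)=\\big(\\tfrac{1}{2},\\tfrac{9}{16}-\\tfrac{1}{16}\\big)=\\big(\\tfrac{1}{2},\\tfrac{1}{2}\\big).\n\\end{align*}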
{"text": "%!TEX root = modelguide.tex\n\n\\chapter{Mechanical Models I}\n\\label{MechModelsI:sec}\n\nThis section details how to build basic multibody-type mechanical\nmodels consisting of particles, springs, rigid bodies, joints, and\nother constraints.\n\n\\section{Springs and particles}\n\\label{ParticlesAndSprings:sec}\n\nThe most basic type of mechanical model consists simply of particles\nconnected together by axial springs.  Particles are implemented by the\nclass \\javaclass[artisynth.core.mechmodels]{Particle}, which is a\ndynamic component containing a three-dimensional position state, a\ncorresponding velocity state, and a mass. It is an instance of the\nmore general base class \\javaclass[artisynth.core.mechmodels]{Point},\nwhich is used to also implement spatial points such as {\\tt markers}\nwhich do not have a mass.\n\n\\subsection{Axial springs and materials}\n\\label{AxialSprings:sec}\n\nAn axial spring is a simple spring that connects two points and is\nimplemented by the class\n\\javaclass[artisynth.core.mechmodels]{AxialSpring}. This is a {\\it\nforce effector} component that exerts equal and opposite forces on the\ntwo points, along the line separating them, with a magnitude $f$ that\nis a function $f(l, \\dot l)$ of the distance $l$ between the points,\nand the distance derivative $\\dot l$.\n\nEach axial spring is associated with an {\\it axial material},\nimplemented by a subclass of\n\\javaclass[artisynth.core.materials]{AxialMaterial}, that specifies\nthe function $f(l, \\dot l)$. The most basic type of axial material is\na \\javaclass[artisynth.core.materials]{LinearAxialMaterial}, which\ndetermines $f$ according to the linear relationship\n%\n\\begin{equation}\nf(l, \\dot l) = k (l-l_0) + d \\dot l\n\\end{equation}\n%\nwhere $l_0$ is the rest length and $k$ and $d$ are the stiffness and\ndamping terms. Both $k$ and $d$ are properties of the material, while\n$l_0$ is a property of the spring.\n\nAxial springs are assigned a linear axial material by default.  More\ncomplex, non-linear axial materials may be defined in the package {\\tt\nartisynth.core.materials}. Setting or querying a spring's material\nmay be done with the methods {\\tt setMaterial()} and {\\tt\ngetMaterial()}.\n\n\\subsection{Example: A simple particle-spring model}\n\\label{ParticleSpringExample:sec}\n\n\\begin{figure}[t]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/ParticleSpring}\n\\else\n \\includegraphics[width=3.75in]{images/ParticleSpring}\n\\fi\n\\end{center}\n\\caption{ParticleSpring model loaded into ArtiSynth.}\n\\label{ParticleSpring:fig}\n\\end{figure}\n\nAn complete application model that implements a simple particle-spring\nmodel is given below. \n\\lstset{numbers=left}\n\\lstinputlisting{../../src/artisynth/demos/tutorial/ParticleSpring.java}\n\\lstset{numbers=none}\n\nLine 1 of the source defines the package in which the model class will\nreside, in this case {\\tt artisynth.demos.tutorial}. Lines 3-8 import\ndefinitions for other classes that will be used.\n\nThe model application class is named {\\tt ParticleSpring} and declared\nto extend {\\tt RootModel} (line 13), and the {\\tt build()} method\ndefinition begins at line 15. (A no-args constructor is also needed,\nbut because no other constructors are defined, the compiler creates\none automatically.)\n\nTo begin, the {\\tt build()} method creates a {\\tt MechModel} named\n{\\tt \"mech\"}, and then adds it to the {\\tt models} list of the root model\nusing the {\\tt addModel()} method (lines 18-19). 
Next, two particles,\n{\\tt p1} and {\\tt p2}, are created, with masses equal to 2 and initial\npositions at 0, 0, 0, and 1, 0, 0, respectively (lines 22-23). Then an\naxial spring is created, with end points set to {\\tt p1} and {\\tt p2},\nand assigned a linear material with a stiffness and damping of 20 and\n10 (lines 24-27). Finally, after the particles and the spring are\ncreated, they are added to the {\\tt particles} and {\\tt axialSprings}\nlists of the {\\tt MechModel} using the methods {\\tt\naddParticle()} and {\\tt addAxialSpring()} (lines 30-32).\n\nAt this point in the code, both particles are defined to be\ndynamically controlled, so that running the simulation would cause\nboth to fall under the {\\tt MechModel}'s default gravity acceleration\nof $(0, 0, -9.8)$. However, for this example, we want the first\nparticle to remain fixed in place, so we set it to be {\\it\nnon-dynamic} (line 34), meaning that the physical simulation will not\nupdate its position in response to forces (Section\n\\ref{DynamicVsParametric:sec}).\n\nThe remaining calls control aspects of how the model is graphically\nrendered.  {\\tt setBounds()} (line 37) increases the model's\n``bounding box'' so that by default it will occupy a larger part of\nthe viewer frustum. The convenience method {\\tt\nRenderProps.setSphericalPoints()} is used to set points {\\tt p1} and\n{\\tt p2} to render as solid red spheres with a radius of 0.06, while\n{\\tt RenderProps.setCylindricalLines()} is used to set {\\tt spring} to\nrender as a solid blue cylinder with a radius of 0.02. More details\nabout setting render properties are given in Section\n\\ref{RenderProperties:sec}.\n\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nParticleSpring} from the {\\sf Models} menu. The model should load and\ninitially appear as in Figure \\ref{ParticleSpring:fig}.  Running\nthe model (Section \\ref{LoadingAndRunning:sec}) will\ncause the second particle to fall and swing about under gravity.\n\n\\subsection{Dynamic, parametric, and attached components}\n\\label{DynamicVsParametric:sec}\n\nBy default, a dynamic component is advanced through time in response\nto the forces applied to it. However, it is also possible to set a\ndynamic component's {\\tt dynamic} property to {\\tt false}, so that it\ndoes not respond to force inputs.  As shown in the example above, this\ncan be done using the method\n{\\tt setDynamic()}:\n%\n\\begin{verbatim}\n  comp.setDynamic (false);\n\\end{verbatim}\n%\nThe method\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{isDynamic()}\ncan be used to query the {\\tt dynamic} property.\n\nDynamic components can also be {\\it attached} to other dynamic\ncomponents (as mentioned in Section \\ref{PhysicsSimulation:sec}) so\nthat their positions and velocities are controlled by the {\\it master}\ncomponents that they are attached to.  To attach a dynamic component,\none creates an {\\tt AttachmentComponent} specifying the attachment\nconnection and adds it to the {\\tt MechModel}, as described in Section\n\\ref{Attachments:sec}.  The method\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{isAttached()}\ncan be used to determine if a component is attached, and if it is,\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{getAttachment()}\ncan be used to find the corresponding {\\tt AttachmentComponent}.\n\nOverall, a dynamic component can be in one of three states:\n\n\\begin{description}\n\n\\item[active]\\mbox{}\n\nComponent is dynamic and unattached. 
The method\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{isActive()}\nreturns {\\tt true}. The component will move in response to forces.\n\n\\item[parametric]\\mbox{}\n\nComponent is not dynamic, and is unattached. \nThe method\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{isParametric()}\nreturns {\\tt true}.\nThe component will either remain\nfixed, or will move around in response to external inputs specifying\nthe component's position and/or velocity. One way to supply such\ninputs is to use controllers or input probes, as described in\nSection \\ref{SimulationControl:sec}.\n\n\\item[attached]\\mbox{}\n\nComponent is attached. The method\n\\javamethod*[artisynth.core.mechmodels.DynamicComponent]{isAttached()}\nreturns {\\tt true}. The component will move so as to follow the other\nmaster component(s) to which it is attached.\n\n\\end{description}\n\n\\subsection{Custom axial materials}\n\nApplication authors may create their\nown axial materials by subclassing \n\\javaclass[artisynth.core.materials]{AxialMaterial}\nand overriding the functions\n%\n\\begin{lstlisting}[]\n  double computeF (l, ldot, l0, excitation);\n  double computeDFdl (l, ldot, l0, excitation);\n  double computeDFdldot (l, ldot, l0, excitation);\n  boolean isDFdldotZero ();\n\\end{lstlisting}\n%\nwhere {\\tt excitation} is an additional {\\it excitation} signal $a$, which\nis used to implement active springs and which in particular is used to\nimplement axial muscles (Section \\ref{PointToPointMuscles:sec}), for\nwhich $a$ is usually in the range $[0, 1]$.\n\nThe first three methods should return the values of \n%\n\\begin{equation}\nf (l, \\dot l, a), \\quad\n\\frac{\\partial f(l, \\dot l, a)}{\\partial l}, \\quad \\text{and} \\quad\n\\frac{\\partial f(l, \\dot l, a)}{\\partial \\dot l},\n\\end{equation}\n%\nrespectively, while the last method should return {\\tt true} if\n$\\partial f(l, \\dot l, a) / \\partial \\dot l \\equiv 0$; i.e., if it is\nalways equal to 0.\n\n\\subsection{Damping parameters}\n\nMechanical models usually contain damping forces in addition to\nspring-type restorative forces. Damping generates forces that reduce\ndynamic component velocities, and is usually the major source of\nenergy dissipation in the model. Damping forces can be generated by\nthe spring components themselves, as described above.\n\nA general damping can be set for all particles by setting the\n{\\tt MechModel}'s {\\tt pointDamping} property. This causes\na force\n%\n\\begin{equation}\n\\f_i = -d_p \\v_i \\label{eqn:pointdamping}\n\\end{equation}\n%\nto be applied to all particles, where $d_p$ is the value of the {\\tt\npointDamping} property and $\\v_i$ is the particle's velocity.\n\n{\\tt pointDamping} can be set and queried using the {\\tt MechModel}\nmethods\n%\n\\begin{lstlisting}[]\n  setPointDamping (double d);\n  double getPointDamping();\n\\end{lstlisting}\n%\n\n\\begin{sideblock}\nIn general, whenever a component has a property {\\tt propX}, that\nproperty can be set and queried in code using methods of the form\n\\begin{verbatim}\n  setPropX (T d);\n  T getPropX();\n\\end{verbatim}\nwhere {\\tt T} is the type associated with the property.\n\\end{sideblock}\n\n{\\tt pointDamping} can also be set for particles individually.  
This\nproperty is {\\it inherited} (Section\n\\ref{CompositeInheritableProperties:sec}), so that if not set\nexplicitly, it inherits the nearest explicitly set value in an\nancestor component.\n\n\\section{Rigid bodies}\n\nRigid bodies are implemented in ArtiSynth by the class\n\\javaclass[artisynth.core.mechmodels]{RigidBody}, which is a dynamic\ncomponent containing a six-dimensional position and orientation state,\na corresponding velocity state, an inertia, and an optional surface\nmesh.\n\nA rigid body is associated with its own 3D spatial coordinate frame,\nand is a subclass of the more general\n\\javaclass[artisynth.core.mechmodels]{Frame} component.\nThe combined position and orientation of this frame with respect to\nworld coordinates defines the body's {\\it pose}, and the associated 6\ndegrees of freedom describe its ``position'' state.\n\n\\subsection{Frame markers}\n\\label{FrameMarkers:sec}\n\n\\begin{figure}[t]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=2.5in]{images/frameMarker}\n \\else\n   \\includegraphics[width=2.5in]{images/frameMarker.idr}\n \\fi\n\\end{center}\n\\caption{A force $\\f$ applied to a frame marker attached to a rigid\nbody. The marker is located at the point $\\r$ with respect to the body\ncoordinate frame B.}\n\\label{frameMarker:fig}\n\\end{figure}\n\nArtiSynth makes extensive use of {\\it markers}, which are (massless)\npoints attached to dynamic components in the model. Markers are used\nfor graphical display, implementing attachments, and transmitting\nforces back onto the underlying dynamic components.\n\nA {\\it frame marker} is a marker that can be attached to a\n\\javaclass[artisynth.core.mechmodels]{Frame}, and most commonly to a\n\\javaclass[artisynth.core.mechmodels]{RigidBody} (Figure\n\\ref{frameMarker:fig}). They are frequently used to provide the\nanchor points for attaching springs and, more generally, applying\nforces to the body.\n\nFrame markers are implemented by the class\n\\javaclass[artisynth.core.mechmodels]{FrameMarker}, which\nis a subclass of\n\\javaclass[artisynth.core.mechmodels]{Point}.\nThe methods\n%\n\\begin{lstlisting}[]\n  Point3d getLocation();\n  void setLocation (Point3d r);\n\\end{lstlisting}\n%\nget and set the marker's location $\\r$ with respect to the frame's\ncoordinate system. 
When a 3D force $\\f$ is applied to the marker, it\ngenerates a spatial force $\\hat\\f$ (Section\n\\ref{SpatialVelocitiesAndForces:sec}) on the frame given by\n%\n\\begin{equation}\n\\hat\\f = \\matl \\f \\\\ \\r \\times \\f \\matr.\n\\end{equation}\n%\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/RigidBodySpring}\n\\else\n \\includegraphics[width=3.75in]{images/RigidBodySpring}\n\\fi\n\\end{center}\n\\caption{RigidBodySpring model loaded into ArtiSynth.}\n\\label{RigidBodySpring:fig}\n\\end{figure}\n\n\\subsection{Example: A simple rigid body-spring model}\n\\label{RigidBodySpringExample:sec}\n\nA simple rigid body-spring model is defined in\n%\n\\begin{verbatim}\n  artisynth.demos.tutorial.RigidBodySpring\n\\end{verbatim}\n%\nThis differs from ParticleSpring only in the {\\tt build()} method,\nwhich is listed below:\n\\lstset{numbers=left}\n\\begin{lstlisting}[]\n   public void build (String[] args) {\n\n      // create MechModel and add to RootModel\n      MechModel mech = new MechModel (\"mech\");\n      addModel (mech);\n\n      // create the components\n      Particle p1 = new Particle (\"p1\", /*mass=*/2, /*x,y,z=*/0, 0, 0);\n      // create box and set it's pose (position/orientation):\n      RigidBody box =\n         RigidBody.createBox (\"box\", /*wx,wy,wz=*/0.5, 0.3, 0.3, /*density=*/20);\n      box.setPose (new RigidTransform3d (/*x,y,z=*/0.75, 0, 0));\n      // create marker point and connect it to the box:\n      FrameMarker mkr = new FrameMarker (/*x,y,z=*/-0.25, 0, 0);\n      mkr.setFrame (box);\n\n      AxialSpring spring = new AxialSpring (\"spr\", /*restLength=*/0);\n      spring.setPoints (p1, mkr);\n      spring.setMaterial (\n         new LinearAxialMaterial (/*stiffness=*/20, /*damping=*/10));\n\n      // add components to the mech model\n      mech.addParticle (p1);\n      mech.addRigidBody (box);\n      mech.addFrameMarker (mkr);\n      mech.addAxialSpring (spring);\n\n      p1.setDynamic (false);               // first particle set to be fixed\n\n      // increase model bounding box for the viewer\n      mech.setBounds (/*min=*/-1, 0, -1, /*max=*/1, 0, 0);  \n      // set render properties for the components\n      RenderProps.setSphericalPoints (p1, 0.06, Color.RED);\n      RenderProps.setSphericalPoints (mkr, 0.06, Color.RED);\n      RenderProps.setCylindricalLines (mkr, 0.02, Color.BLUE);\n   }\n\\end{lstlisting}\n\\lstset{numbers=none} \nThe differences from {\\tt ParticleSpring} begin\nat line 9. Instead of creating a second particle, a rigid body is\ncreated using the factory method\n\\javamethod*[artisynth.core.mechmodels]{RigidBody.createBox()}, which\ntakes x, y, z widths and a (uniform) density and creates a box-shaped\nrigid body complete with surface mesh and appropriate mass and\ninertia. As the box is initially centered at the origin, moving it\nelsewhere requires setting the body's pose, which is done using {\\tt\nsetPose()}. The {\\tt RigidTransform3d} passed to {\\tt setPose()} is\ncreated using a three-argument constructor that generates a\ntranslation-only transform.  
Next, starting at line 14, a {\\tt\nFrameMarker} is created for a location $(-0.25, 0, 0)^T$ relative to the\nrigid body, and attached to the body using its {\\tt setFrame()}\nmethod.\n\nThe remainder of {\\tt build()} is the same as for {\\tt ParticleSpring},\nexcept that the spring is attached to the frame marker instead of a\nsecond particle.\n\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nRigidBodySpring} from the {\\sf Models} menu. The model should load and\ninitially appear as in Figure \\ref{RigidBodySpring:fig}.  Running the\nmodel (Section \\ref{LoadingAndRunning:sec}) will cause the rigid body\nto fall and swing about under gravity.\n\n\\subsection{Creating rigid bodies}\n\nAs illustrated above, rigid bodies can be created using factory\nmethods supplied by \\javaclass[artisynth.core.mechmodels]{RigidBody}.\nSome of these include:\n%\n\\begin{lstlisting}[]\n  createBox (name, widthx, widthy, widthz, density);\n  createCylinder (name, radius, height, density, nsides);\n  createSphere (name, radius, density, nslices);\n  createEllipsoid (name, radx, rady, radz, density, nslices);\n\\end{lstlisting}\n%\nThe bodies do not need to be named; if no name is desired, then {\\tt\nname} can be specified as {\\tt null}.\n\nThere are also\nfactory methods for creating a rigid body directly from a mesh:\n%\n\\begin{lstlisting}[]\n  createFromMesh (name, mesh, density, scale);\n  createFromMesh (name, meshFileName, density, scale);\n\\end{lstlisting}\n%\nThese take either a polygonal mesh (Section \\ref{Meshes:sec}), or a\nfile name from which a mesh is read, and use it as the body's surface\nmesh and then compute the mass and inertia properties from the specified\n(uniform) density.\n\nAlternatively, one can create a rigid body directly from a\nconstructor, and then set the mesh and inertia properties explicitly:\n%\n\\begin{lstlisting}[]\n  PolygonalMesh femurMesh;\n  SpatialInertia inertia;\n\n  ... initialize mesh and inertia appropriately ...\n\n  RigidBody body = new RigidBody (\"femur\");\n  body.setMesh (femurMesh);\n  body.setInertia (inertia);\n\\end{lstlisting}\n%\n\n\\subsection{Pose and velocity}\n\nA body's pose can be set and\nqueried using the methods\n%\n\\begin{lstlisting}[]\n  setPose (RigidTransform3d T);   // sets the pose to T\n  getPose (RigidTransform3d T);   // gets the current pose in T\n  RigidTransform3d getPose();     // returns the current pose (read-only)\n\\end{lstlisting}\n%\nThese use a \\javaclass[maspack.matrix]{RigidTransform3d} (Section\n\\ref{RigidTransform3d:sec}) to describe the pose. Body poses are\ndescribed in world coordinates and specify the transform from body to\nworld coordinates. 
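\n\nFor example, the following sketch positions a body at $(5, 0, 0)^T$ with a rotation of $90^\\circ$ about the $z$ axis, using the translation plus axis-angle constructor seen in the examples below:\n%\n\\begin{lstlisting}[]\n  // translation (x,y,z) followed by an axis-angle (ux,uy,uz,ang) rotation\n  body.setPose (new RigidTransform3d (5, 0, 0,  0, 0, 1, Math.PI/2));\n\\end{lstlisting}\n%\n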
In particular, the pose for a body A specifies\nthe rigid transform $\\T_{AW}$.\n\nRigid bodies also expose the translational and rotational components of\ntheir pose via the properties {\\tt position} and {\\tt orientation},\nwhich can be queried and set independently using the methods\n%\n\\begin{lstlisting}[]\n  setPosition (Point3d p);       // sets the position to p\n  getPosition (Point3d p);       // gets the current position in p\n  Point3d getPosition();         // returns the current position (read-only)\n\n  setOrientation (AxisAngle a);  // sets the orientation to a\n  getOrientation (AxisAngle a);  // gets the current orientation in a\n  AxisAngle getOrientation();    // returns the current orientation (read-only)\n\\end{lstlisting}\n%\n\nThe velocity of a rigid body is described using a\n\\javaclass[maspack.spatialmotion]{Twist} (Section\n\\ref{SpatialVectors:sec}), which contains both the translational and\nrotational velocities. The following methods\nset and query the spatial velocity as described with respect to world\ncoordinates:\n%\n\\begin{lstlisting}[]\n  setVelocity (Twist v);         // sets the spatial velocity to v\n  getVelocity (Twist v);         // gets the current spatial velocity in v\n  Twist getVelocity();           // returns current spatial velocity (read-only)\n\\end{lstlisting}\n%\n\nDuring simulation, unless a rigid body has been set to be {\\it\nparametric} (Section \\ref{DynamicVsParametric:sec}), its pose and\nvelocity are updated in response to forces, so setting the pose or\nvelocity generally makes sense only for setting initial conditions.\nOn the other hand, if a rigid body is parametric, then it is possible\nto control its pose during the simulation, but in that case it is\nbetter to set its {\\it target pose} and/or {\\it target velocity}, as\ndescribed in Section \\ref{ControllerImplementation:sec}.\n\n\\subsection{Inertia and meshes}\n\nThe ``mass'' of a rigid body is described by its spatial inertia\n(Section \\ref{SpatialInertia:sec}), implemented by a\n\\javaclass[maspack.spatialmotion]{SpatialInertia} object, which\nspecifies its mass, center of mass, and rotational inertia with\nrespect to the center of mass.\n\nMost rigid bodies are also associated with a polygonal surface mesh,\nwhich can be set and queried using the methods\n%\n\\begin{lstlisting}[]\n  setMesh (PolygonalMesh mesh);\n  setMesh (PolygonalMesh mesh, String meshFileName);\n  PolygonalMesh getMesh();\n\\end{lstlisting}\n%\nThe second method takes an optional {\\tt meshFileName} argument that can\nbe set to the name of a file from which the mesh was read. 
Then if the\nmodel itself is saved to a file, the model file will specify the mesh\nusing the file name instead of explicit vertex and face information,\nwhich can reduce the model file size considerably.\n\nThe inertia of a rigid body can be explicitly set using a variety\nof methods including\n%\n\\begin{lstlisting}[]\n  setInertia (M);                   // set using SpatialInertia M\n  setInertia (mass, Jxx, Jyy, Jzz); // mass and diagonal rotational inertia\n  setInertia (mass, J);             // mass and full rotational inertia\n  setInertia (mass, J, com);        // mass, rotational inertia, center-of-mass\n\\end{lstlisting}\n%\nand can be queried using \n%\n\\begin{lstlisting}[]\n  getInertia (M);                   // get SpatialInertia in M\n  getInertia ();                    // return read-only SpatialInertia\n\\end{lstlisting}\n%\n\nIn practice, it is often more convenient to simply specify a mass or a\ndensity, and then use the volume defined by the surface mesh to\ncompute the remaining inertial values. How a rigid body's inertia is\ncomputed is determined by its {\\tt inertiaMethod} property, which can\nbe one of:\n\n\\begin{description}\n\n\\item[Density]\\mbox{}\n\nInertia is computed from density;\n\n\\item[Mass]\\mbox{}\n\nInertia is computed from mass;\n\n\\item[Explicit]\\mbox{}\n\nInertia is set explicitly.\n\n\\end{description}\n\nThis property can be set and queried using\n%\n\\begin{lstlisting}[]\n  setInertiaMethod (InertiaMethod method);\n  InertiaMethod getInertiaMethod();\n\\end{lstlisting}\n%\nand its default value is {\\tt Density}. Explicitly setting the\ninertia using one of the {\\tt setInertia()} methods described above will\nset {\\tt inertiaMethod} to {\\tt Explicit}. The method\n%\n\\begin{lstlisting}[]\n  setInertiaFromDensity (density); \n\\end{lstlisting}\n%\nwill (re)compute the inertia using the mesh and a density value\nand set {\\tt inertiaMethod} to {\\tt Density}, and\nthe method\n%\n\\begin{lstlisting}[]\n  setInertiaFromMass (mass); \n\\end{lstlisting}\n%\nwill (re)compute the inertia using the mesh and a mass value\nand set {\\tt inertiaMethod} to {\\tt Mass}.\n\nFinally, the (assumed uniform) density of the body can be queried\nusing\n%\n\\begin{lstlisting}[]\n   getDensity();\n\\end{lstlisting}\n%\n\n\\subsection{Damping parameters}\n\\label{RigidBodyDamping:sec}\n\nAs with particles, it is possible to set damping parameters for rigid\nbodies. 
\n\n{\\tt MechModel} provides two properties, {\\tt frameDamping} and {\\tt\nrotaryDamping}, which generate a spatial force centered on each rigid\nbody's coordinate frame\n%\n\\begin{equation}\n\\hat\\f_i = \\matl -d_f \\v_i \\\\ -d_r \\Bom_i \\matr,\n\\end{equation}\n%\nwhere $d_f$ and $d_r$ are the {\\tt frameDamping} and {\\tt\nrotaryDamping} values, and $\\v_i$ and $\\Bom_i$ are the translational\nand angular velocity of the body's coordinate frame.\nThe damping parameters can be set and queried using the {\\tt MechModel}\nmethods\n%\n\\begin{lstlisting}[]\n  setFrameDamping (double df);\n  setRotaryDamping (double dr);\n  double getFrameDamping();\n  double getRotaryDamping();\n\\end{lstlisting}\n%\n\\begin{sideblock}\nFor models involving rigid bodies, it is often necessary to set {\\tt\nrotaryDamping} to a non-zero value because {\\tt frameDamping} will\nprovide no damping at all when a rigid body is simply rotating about\nits coordinate frame origin.\n\\end{sideblock}\n\nFrame and rotary damping can also be set for individual bodies using\ntheir own (inherited) {\\tt frameDamping} and {\\tt rotaryDamping}\nproperties.\n\n\\section{Joints and connectors}\n\\label{JointsAndConnectors:sec}\n\nIn a typical mechanical model, many of the rigid bodies are\ninterconnected, either using spring-type components that exert binding\nforces on the bodies, or through joint-type connectors that enforce\nthe connection using hard constraints.\n\n\\subsection{Joints and coordinate frames}\n\nConsider two bodies A and B. The pose of body B with respect to body A\ncan be described by the 6 DOF rigid transform $\\T_{BA}$.  If bodies A\nand B are unconnected, $\\T_{BA}$ may assume any possible value\nand has a full six degrees of freedom. A {\\it joint} between A and B\nrestricts the set of poses that are possible between the two bodies\nand reduces the degrees of freedom available to $\\T_{BA}$.  For\nsimplicity, joints have their own coordinate frames for describing\ntheir constraining actions, and then these frames are related to the\nframes A and B of the associated bodies by auxiliary transformations.\n\n\\begin{figure}[ht]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=2.5in]{images/jointExample}\n \\else\n   \\includegraphics[width=2.5in]{images/jointExample.idr}\n \\fi\n\\end{center}\n\\caption{Coordinate frames D and C for a revolute joint.}\n\\label{jointExample:fig}\n\\end{figure}\n\nEach joint is associated with two coordinate frames C and D which move\nwith respect to each other as the joint moves. The allowed joint\nmotions therefore correspond to the allowed values of the {\\it joint transform}\n$\\T_{CD}$.  D is the {\\it base} frame and C is the {\\it motion}\nframe. For a revolute joint (Figure \\ref{jointExample:fig}), C can\nmove with respect to D by rotating about the z axis. Other motions are\nprohibited. $\\T_{CD}$ should therefore always have the form\n%\n\\begin{equation}\n\\T_{CD} = \\matl\n\\cos(\\theta) & -\\sin(\\theta) & 0 \\\\\n\\sin(\\theta) &  \\cos(\\theta) & 0 \\\\\n0 & 0 & 1\\\\\n\\matr\n\\end{equation}\n%\nwhere $\\theta$ is the angle of joint rotation and is known as the {\\it\njoint parameter}. Other joints have different parameterizations, with\nthe number of parameters equaling the number of degrees of freedom\navailable to the joint. 
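For example, setting $\\theta = 0$ in this expression reduces $\\T_{CD}$ to the identity, since $\\cos(0) = 1$ and $\\sin(0) = 0$. 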
When $\\T_{CD} = \\I$ (where $\\I$ is the\nidentity transform), the parameters are all (usually) equal to zero,\nand the joint is said to be in the {\\it zero state}.\n\n\\begin{figure}[ht]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=2.5in]{images/jointFrames}\n \\else\n   \\includegraphics[width=2.5in]{images/jointFrames.idr}\n \\fi\n\\end{center}\n\\caption{2D schematic showing the joint frames D and C, along with\nthe intermediate frame G that accounts for numeric error\nand compliant motion.}\n\\label{jointFrames:fig}\n\\end{figure}\n\nIn practice, due to numerical errors and/or compliance in the joint,\n%(Section \\ref{JointCompliance:sec}), \nthe joint transform $\\T_{CD}$ may\nsometimes deviate from the allowed set of values dictated by the joint\ntype. In ArtiSynth, this is accounted for by introducing an additional\n{\\it constraint} frame G between D and C (Figure \\ref{jointFrames:fig}).\nG is computed to be the nearest frame to C that lies exactly \nin the joint constraint space. $\\T_{GD}$ is therefore a\nvalid transform for the joint, $\\T_{GC}$ accommodates the error,\nand the whole joint transform is given by the composition\n%\n\\begin{equation}\n\\T_{CD} = \\T_{GD} \\; \\T_{CG}.\n\\end{equation}\n%\nIf there is no compliance or joint error, then frames G and C are the\nsame, $\\T_{CG} = \\I$, and $\\T_{CD} = \\T_{GD}$.\n\nIn general, each joint is attached to two rigid bodies A and B, with\nframe C being fixed to body A and frame D being fixed to body B. The\nrestrictions of the joint transform $\\T_{CD}$ therefore restrict the\nrelative poses of A and B.\n\n\\begin{figure}[ht]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=3.74in]{images/jointBodyFrames}\n \\else\n   \\includegraphics[width=3.74in]{images/jointBodyFrames.idr}\n \\fi\n\\end{center}\n\\caption{Transforms connecting joint coordinate frames C and D with\nrigid bodies A and B.}\n\\label{jointBodyFrames:fig}\n\\end{figure}\n\nExcept in special cases, the joint coordinate frames C and D are not\ncoincident with the body frames A and B.  Instead, they are located\nrelative to A and B by the transforms $\\T_{CA}$ and $\\T_{DB}$,\nrespectively (Figure \\ref{jointBodyFrames:fig}). \nSince $\\T_{CA}$ and $\\T_{DB}$ are both fixed, the pose\nof B relative to A can be determined from\nthe joint transform $\\T_{CD}$:\n%\n\\begin{equation}\n\\T_{BA} = \\T_{CA} \\, \\T_{CD}^{-1} \\, \\T_{DB}^{-1}.\n\\label{jointPose:eqn}\n\\end{equation}\n%\n(See Section \\ref{RigidTransforms:sec} for a discussion of determining\ntransforms between related coordinate frames).\n\n\\subsection{Creating Joints}\n\\label{CreatingJoints:sec}\n\nJoint components in ArtiSynth are implemented by subclasses of\n\\javaclass[artisynth.core.mechmodels]{BodyConnector}.  Some of\nthe commonly used ones are described in Section\n\\ref{CommonJoints:sec}.\n\nAn application creates a joint by constructing it and adding it to a\n{\\tt MechModel}. Most joints have a constructor of the form\n%\n\\begin{lstlisting}[]\n  JointType (bodyA, bodyB, TDW);\n\\end{lstlisting}\n%\nwhich specifies the rigid bodies A and B which the joint connects,\nalong with the transform $\\T_{DW}$ giving the pose of the joint base\nframe D in world coordinates. 
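\n\nFor example, the following sketch (using the {\\tt RevoluteJoint} class described below) creates a joint whose base frame D is simply translated from the world origin:\n%\n\\begin{lstlisting}[]\n  // place joint frame D at (0.5, 0, 0), aligned with world coordinates\n  RigidTransform3d TDW = new RigidTransform3d (/*x,y,z=*/0.5, 0, 0);\n  RevoluteJoint joint = new RevoluteJoint (bodyA, bodyB, TDW);\n  mech.addBodyConnector (joint);\n\\end{lstlisting}\n%\n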
In all cases, the constructor assumes that the\njoint is in the zero state, so that C and D are the same and\n$\\T_{CD} = \\I$ and $\\T_{CW} = \\T_{DW}$, and then computes\n$\\T_{CA}$ and $\\T_{DB}$ from\n%\n\\begin{align}\n\\T_{CA} & = \\T_{AW}^{-1} \\; \\T_{CW} \\\\\n\\T_{DB} & = \\T_{BW}^{-1} \\; \\T_{DW}\n\\end{align}\n%\nwhere $\\T_{AW}$ and $\\T_{BW}$ are the poses of A and B.\nThe same body and transform settings can be made on an existing\njoint using the method\n\\javamethodAlt{%\nartisynth.core.mechmodels.BodyConnector.setBodies(,,)}\n{setBodies(bodyA, bodyB, TDW)}.\n\nAlternatively, if we prefer to explicitly specify $\\T_{CA}$ or $\\T_{DB}$, then we\ncan determine $\\T_{DW}$ from $\\T_{AW}$ or $\\T_{BW}$ using\n%\n\\begin{align}\n\\T_{DW} & = \\T_{AW} \\; \\T_{CA} \\\\\n\\T_{DW} & = \\T_{BW} \\; \\T_{DB}.\n\\end{align}\n%\nFor example, if we know $\\T_{CA}$, this can be accomplished using\nthe following code fragment:\n%\n\\begin{lstlisting}[]\n   RigidBody bodyA, bodyB;\n   RigidTransform3d TCA;\n\n   ... initialize bodyA, bodyB, and TCA ...\n   \n   RigidTransform3d TDW = new RigidTransform3d();\n   TDW.mul (bodyA.getPose(), TCA);  // bodyA.getPose() returns TAW\n   RevoluteJoint joint = new RevoluteJoint (bodyA, bodyB, TDW);\n\\end{lstlisting}\n%\n\nAnother method,\n\\javamethodAlt{artisynth.core.mechmodels.BodyConnector.setBodies(,,,)}\n{setBodies(bodyA, TCA, bodyB, TDB)}, allows us to set both\n$\\T_{CA}$ and $\\T_{DB}$ explicitly. This is useful if the joint\ntransform $\\T_{CD}$ is known to be some value {\\it other} than the\nidentity, in which case $\\T_{CA}$ or $\\T_{DB}$ can be computed\nfrom (\\ref{jointPose:eqn}), where $\\T_{BA}$ is given by\n%\n\\begin{equation}\n\\T_{BA} = \\T_{AW}^{-1} \\; \\T_{BW}.\n\\end{equation}\n%\nFor instance, if we know $\\T_{CA}$ and the joint transform $\\T_{CD}$,\nthen we can compute $\\T_{DB}$\nfrom\n%\n\\begin{equation}\n\\T_{DB} = \\T_{BA}^{-1} \\, \\T_{CA} \\, \\T_{CD}^{-1} = \n\\T_{BW}^{-1}  \\, \\T_{AW} \\, \\T_{CA} \\, \\T_{CD}^{-1}\n\\end{equation}\n%\nand set up the joint as follows:\n%\n\\begin{lstlisting}[]\n   RigidBody bodyA, bodyB;\n   RigidTransform3d TCA, TCD;\n\n   ... initialize bodyA, bodyB, TCA, TCD ...\n   \n   RigidTransform3d TDB = new RigidTransform3d();\n   TDB.mulInverseLeft (bodyB.getPose(), bodyA.getPose());\n   TDB.mul (TCA);\n   TDB.mulInverse (TCD);\n   RevoluteJoint joint = new RevoluteJoint ();\n   joint.setBodies (bodyA, TCA, bodyB, TDB);\n\\end{lstlisting}\n%\n\nSome joint implementations provide the ability to explicitly set the\njoint parameter(s) after it has been created and added to the {\\tt\nMechModel}, making it easy to ``move'' the joint to a specific\nconfiguration. For example, {\\tt RevoluteJoint} provides the method\n\\javamethod*[artisynth.core.mechmodels.RevoluteJoint]{setTheta()}.\nThis causes the transform $\\T_{CD}$ to be explicitly set to the value\nimplied by the joint parameters, and the pose of either body A or B is\nchanged to accommodate this. Whether body A or B is moved depends on\nwhich one is the least connected to ``ground'', and other bodies that have\njoint connections to the moved body are moved as well.\n\nIf desired, joints can be connected to only a single rigid body. In\nthis case, the second body B is simply assumed to be ``ground'', and the\ncoordinate frame B is instead taken\nto be the world coordinate frame W. 
The corresponding calls\nto the joint constructor or {\\tt setBodies()} take the\nform\n%\n\\begin{lstlisting}[]\n  JointType joint = new JointType (bodyA, null, TDW);\n\\end{lstlisting}\n%\nor\n%\n\\begin{lstlisting}[]\n  JointType joint = new JointType();\n  joint.setBodies (bodyA, null, TDW);\n\\end{lstlisting}\n%\n\n\\subsection{Example:  A simple revolute joint}\n\\label{RigidBodyJoint:sec}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/RigidBodyJoint}\n\\else\n \\includegraphics[width=3.75in]{images/RigidBodyJoint}\n\\fi\n\\end{center}\n\\caption{RigidBodyJoint model loaded into ArtiSynth.}\n\\label{RigidBodyJoint:fig}\n\\end{figure}\n\nA simple model showing two rigid bodies connected by\na joint is defined in\n%\n\\begin{verbatim}\n  artisynth.demos.tutorial.RigidBodyJoint\n\\end{verbatim}\n%\n\nThe build method for this model is given below:\n\\lstset{numbers=left}\n\\begin{lstlisting}[]\n   public void build (String[] args) {\n\n      // create MechModel and add to RootModel\n      mech = new MechModel (\"mech\");\n      mech.setGravity (0, 0, -98);\n      mech.setFrameDamping (1.0);\n      mech.setRotaryDamping (4.0);\n      addModel (mech);\n\n      PolygonalMesh mesh;  // bodies will be defined using a mesh\n\n      // create first body and set its pose\n      mesh = MeshFactory.createRoundedBox (lenx1, leny1, lenz1, /*nslices=*/8);\n      RigidTransform3d TMB = \n         new RigidTransform3d (0, 0, 0, /*axisAng=*/1, 1, 1, 2*Math.PI/3);\n      mesh.transform (TMB);\n      bodyB = RigidBody.createFromMesh (\"bodyB\", mesh, /*density=*/0.2, 1.0);\n      bodyB.setPose (new RigidTransform3d (0, 0, 1.5*lenx1, 1, 0, 0, Math.PI/2));\n      bodyB.setDynamic (false);\n\n      // create second body and set its pose\n      mesh = MeshFactory.createRoundedCylinder (\n         leny2/2, lenx2, /*nslices=*/16, /*nsegs=*/1, /*flatBottom=*/false);\n      mesh.transform (TMB);\n      bodyA = RigidBody.createFromMesh (\"bodyA\", mesh, 0.2, 1.0);\n      bodyA.setPose (new RigidTransform3d (\n                        (lenx1+lenx2)/2, 0, 1.5*lenx1, 1, 0, 0, Math.PI/2));\n\n      // create the joint      \n      RigidTransform3d TDW = \n         new RigidTransform3d (lenx1/2, 0, 1.5*lenx1, 1, 0, 0, Math.PI/2);\n      RevoluteJoint joint = new RevoluteJoint (bodyA, bodyB, TDW);\n\n      // add components to the mech model\n      mech.addRigidBody (bodyB);\n      mech.addRigidBody (bodyA);\n      mech.addBodyConnector (joint);\n\n      joint.setTheta (35);  // set joint position\n\n      // set render properties for components\n      RenderProps.setLineRadius (joint, 0.2);\n      joint.setAxisLength (4);\n   }\n\\end{lstlisting}\n\\lstset{numbers=none}\n\nA {\\tt MechModel} is created as usual at line 4. 
However, in this\nexample, we also set some parameters for it:\n\\javamethod*[artisynth.core.mechmodels.MechModel]{setGravity()} is\nused to set the gravity acceleration vector to $(0, 0, -98)^T$ instead\nof the default value of $(0, 0, -9.8)^T$, and the {\\tt frameDamping}\nand {\\tt rotaryDamping} properties (Section\n\\ref{RigidBodyDamping:sec}) are set to provide appropriate damping.\n\nEach of the two rigid bodies is created from a mesh and a density.\nThe meshes themselves are created using the factory methods\n\\javamethod*[maspack.geometry]{MeshFactory.createRoundedBox()} and\n\\javamethod*[maspack.geometry]{MeshFactory.createRoundedCylinder()}\n(lines 13 and 22), and then\n\\javamethod*[artisynth.core.mechmodels]{RigidBody.createFromMesh()} is\nused to turn these into rigid bodies with a density of 0.2 (lines 17\nand 25). The pose of the two bodies is set using {\\tt\nRigidTransform3d} objects created with x, y, z translation and\naxis-angle orientation values (lines 18 and 26).\n\nThe revolute joint is implemented using\n\\javaclass[artisynth.core.mechmodels]{RevoluteJoint}, which is\nconstructed at line 32 with the joint coordinate frame D being located\nin world coordinates by {\\tt TDW} \nas described in Section \\ref{CreatingJoints:sec}.\n\nOnce the joint is created and added to the {\\tt MechModel}, the method\n\\javamethod*[artisynth.core.mechmodels.RevoluteJoint]{setTheta()} is\nused to explicitly set the joint parameter to 35 degrees. The joint\ntransform $\\T_{CD}$ is then set appropriately and {\\tt bodyA} is moved\nto accommodate this ({\\tt bodyA} being chosen since it is the freest\nto move).\n\nFinally, render properties are set starting at line 42. A revolute\njoint is rendered as a line segment, using the line render properties\n(Section \\ref{RenderProperties:sec}), with {\\tt lineStyle} and {\\tt\nlineColor} set to {\\tt Cylinder} and {\\tt BLUE}, respectively, by\ndefault. The cylinder radius and length are specified by the line\nrender property {\\tt lineRadius} and the revolute joint property {\\tt\naxisLength}, which are set explicitly in the code.\n\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nRigidBodyJoint} from the {\\sf Models} menu. The model should load and\ninitially appear as in Figure \\ref{RigidBodyJoint:fig}.  Running the\nmodel (Section \\ref{LoadingAndRunning:sec}) will cause {\\tt bodyA} to\nfall and swing under gravity.\n\n\\subsection{Commonly used joints}\n\\label{CommonJoints:sec}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n \\iflatexml\n   \\includegraphics[width=1.75in]{images/revoluteJoint}&\n   \\includegraphics[width=1.75in]{images/rollPitchJoint}\\\\\n   \\includegraphics[width=1.75in]{images/sphericalJoint}&\n   \\includegraphics[width=1.75in]{images/planarConnector}\\\\\n \\else\n   \\includegraphics[width=1.75in]{images/revoluteJoint.idr}&\n   \\includegraphics[width=1.75in]{images/rollPitchJoint.idr}\\\\\n   \\includegraphics[width=1.75in]{images/sphericalJoint.idr}&\n   \\includegraphics[width=1.75in]{images/planarConnector.idr}\\\\\n \\fi\n\\end{tabular}\n\\end{center}\n\\caption{Commonly used joints. From left to right, top to\nbottom: revolute, roll-pitch, spherical, planar connector.}\n\\label{CommonJoints:fig}\n\\end{figure}\n\nSome of the joints commonly used by ArtiSynth are shown in Figure\n\\ref{CommonJoints:fig}. Each illustration shows the allowed joint\nmotion relative to the base coordinate frame D. 
From left to right, top to bottom, these joints are:\n\n\\begin{description}\n\n\\item[Revolute joint]\\mbox{}\n\nA one DOF joint which allows rotation by an angle $\\theta$ about\nthe z axis.\n\n\\item[Roll-pitch joint]\\mbox{}\n\nA two DOF joint, similar to the revolute joint, which allows the\nrotation about z to be followed by an additional rotation $\\phi$ about\nthe (new) y axis.\n\n\\item[Spherical joint]\\mbox{}\n\nA three DOF joint in which the origin remains fixed but any orientation\nmay be assumed.\n\n\\item[Planar connector]\\mbox{}\n\nA five DOF joint which connects a point on a single rigid body to a\nplane in space. The point may slide freely in the x-y plane, and the\nbody may assume any orientation about the point.\n\n\\end{description}\n\n% LATER \\subsection{Joint Compliance}\n% LATER \\label{JointCompliance:sec}\n\n\\section{Frame springs}\n\\label{FrameSprings:sec}\n\nAnother way to connect two rigid bodies together is to use a {\\it\nframe spring}, which is a six-dimensional spring that generates\nrestoring forces and moments between coordinate frames.\n\n\\subsection{Frame spring coordinate frames}\n\n\\begin{figure}[ht]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=3in]{images/frameSpring}\n \\else\n   \\includegraphics[width=3in]{images/frameSpring.idr}\n \\fi\n\\end{center}\n\\caption{A frame spring connecting two coordinate frames D and C.}\n\\label{frameSpring:fig}\n\\end{figure}\n\nThe basic idea of a frame spring is shown in Figure\n\\ref{frameSpring:fig}. It generates restoring forces and moments on\ntwo frames C and D which are a function of $\\T_{DC}$ and $\\hat\\v_{DC}$\n(the spatial velocity of frame D with respect to frame C).\n\nDecomposing forces into stiffness and damping terms, the force\n$\\f_C$ and moment $\\Btau_C$ acting on C can be expressed as \n%\n\\begin{align}\n\\f_C & = \\f_{k} (\\T_{DC}) + \\f_{d} (\\hat\\v_{DC}) \\notag \\\\\n\\Btau_C & = \\Btau_{k} (\\T_{DC}) + \\Btau_{d} (\\hat\\v_{DC}),\n\\label{ftauC:eqn}\n\\end{align}\n%\nwhere the translational and rotational forces $\\f_{k}$, $\\f_{d}$,\n$\\Btau_{k}$, and $\\Btau_{d}$ are general functions of $\\T_{DC}$ and\n$\\hat\\v_{DC}$.\n\nThe forces acting on D are equal and opposite, so that\n%\n\\begin{align}\n\\f_D & = - \\f_C, \\notag \\\\\n\\Btau_D & = - \\Btau_C.\n\\label{ftau2:eqn}\n\\end{align}\n%\n\n\\begin{figure}[ht]\n\\begin{center}\n \\iflatexml\n   \\includegraphics[width=3in]{images/frameSpringBodies}\n \\else\n   \\includegraphics[width=3in]{images/frameSpringBodies.idr}\n \\fi\n\\end{center}\n\\caption{A frame spring connecting two rigid bodies A and B.}\n\\label{frameSpringBodies:fig}\n\\end{figure}\n\nIf frames C and D are attached to a pair of rigid bodies A and B, then\na frame spring can be used to connect them in a manner analogous to a\njoint. As with joints, C and D generally do not coincide with the body\nframes, and are instead offset from them by fixed transforms $\\T_{CA}$\nand $\\T_{DB}$ (Figure \\ref{frameSpringBodies:fig}).\n\n\\subsection{Frame materials}\n\nThe restoring forces (\\ref{ftauC:eqn}) generated in a frame spring\ndepend on the {\\it frame material} associated with the spring. 
Frame\nmaterials are defined in the package {\\tt artisynth.core.materials},\nand are subclassed from\n\\javaclass[artisynth.core.materials]{FrameMaterial}.\nThe most basic type of material is a \n\\javaclass[artisynth.core.materials]{LinearFrameMaterial},\nin which the restoring forces are determined from\n%\n\\begin{align*}\n\\f_C & = \n\\K_{t} \\, \\x_{DC} + \\D_{t} \\, \\v_{DC} \\\\\n\\Btau_C & = \n\\K_{r} \\, \\hat\\Bthe_{DC} + \\D_{r} \\, \\Bom_{DC}\n\\label{flinear:eqn}\n\\end{align*}\n%\nwhere $\\hat\\Bthe_{DC}$ gives the small angle approximation of the\nrotational components of $\\T_{DC}$ with respect to the $x$, $y$, and\n$z$ axes, and\n%\n\\begin{gather*}\n\\K_{t} \\equiv \n\\matl k_{tx} & 0 & 0 \\\\ 0 & k_{ty} & 0 \\\\ 0 & 0 & k_{tz} \\matr, \\;\n\\D_{t} \\equiv \n\\matl d_{tx} & 0 & 0 \\\\ 0 & d_{ty} & 0 \\\\ 0 & 0 & d_{tz} \\matr, \\;\\\\\n\\K_{r} \\equiv\n\\matl k_{r x} & 0 & 0 \\\\ 0 & k_{r y} & 0 \\\\ 0 & 0 & k_{r z} \\matr, \\;\n\\D_{r} \\equiv\n\\matl d_{r x} & 0 & 0 \\\\ 0 & d_{r y} & 0 \\\\ 0 & 0 & d_{r z} \\matr\n\\end{gather*}\n%\nare the stiffness and damping matrices. The diagonal values defining\neach matrix are stored in the 3-dimensional vectors $\\k_t$, $\\k_r$,\n$\\d_t$, and $\\d_r$ which are exposed as the {\\tt stiffness}, {\\tt\nrotaryStiffness}, {\\tt damping}, and {\\tt rotaryDamping} properties of\nthe material. Each of these specifies stiffness or damping values\nalong or about a particular axis. Specifying different values for\ndifferent axes will result in anisotropic behavior.\n\nOther frame materials offering nonlinear behavior may be defined in\n{\\tt artisynth.core.materials}.\n\n\\subsection{Creating frame springs}\n\\label{CreatingFrameSprings:sec}\n\nFrame springs are implemented by the class\n\\javaclass[artisynth.core.mechmodels]{FrameSpring}.  Creating a frame\nspring generally involves instantiating this class, and then setting\nthe material, the bodies A and B, and the transforms $\\T_{CA}$ and\n$\\T_{DB}$.\n\nA typical construction sequence might look like this:\n%\n\\begin{lstlisting}[]\n  FrameSpring spring = new FrameSpring (\"springA\");\n  spring.setMaterial (new LinearFrameMaterial (kt, kr, dt, dr));\n  spring.setFrames (bodyA, bodyB, TDW);\n\\end{lstlisting}\n%\nThe material is set using\n\\javamethod*[artisynth.core.mechmodels.FrameSpring]{setMaterial()}.\nThe example above uses a {\\tt LinearFrameMaterial}, created with a\nconstructor that sets $\\k_t$, $\\k_r$, $\\d_t$, and $\\d_r$ to uniform\nisotropic values specified by {\\tt kt}, {\\tt kr}, {\\tt dt}, and {\\tt\ndr}. 
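\n\nIf anisotropic behavior is desired, the per-axis values can be adjusted through the properties listed above. The following sketch assumes the usual ArtiSynth convention that a property {\\tt foo} is exposed through a {\\tt setFoo()} method; the exact signatures should be checked against the {\\tt LinearFrameMaterial} API:\n%\n\\begin{lstlisting}[]\n  LinearFrameMaterial mat = new LinearFrameMaterial (kt, kr, dt, dr);\n  // hypothetical per-axis adjustment: stiffer along z than along x and y\n  mat.setStiffness (new Vector3d (10.0, 10.0, 100.0));\n  spring.setMaterial (mat);\n\\end{lstlisting}\n%\n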
\n\nThe bodies and transforms can be set in the same manner as for joints\n(Section \\ref{CreatingJoints:sec}), with the\nmethods\\\\ \\javamethodAlt{artisynth.core.mechmodels.FrameSpring.setFrames(Frame,Frame,RigidTransform3d)}{setFrames(bodyA,bodyB,TDW)}\nand\n\\javamethodAlt{artisynth.core.mechmodels.FrameSpring.setFrames(Frame,RigidTransform3d,Frame,RigidTransform3d)}{setFrames(bodyA,TCA,bodyB,TDB)}\nassuming the role of the {\\tt setBodies()} methods used for joints.\nThe former takes D specified in world coordinates and computes\n$\\T_{CA}$ and $\\T_{DB}$ assuming that there is no initial spring\ndisplacement (i.e., that $\\T_{DC} = \\I$), while the latter allows\n$\\T_{CA}$ and $\\T_{DB}$ to be specified explicitly with $\\T_{DC}$\nassuming whatever value is implied.\n\nFrame springs and joints are often placed together, using the same\ntransforms $\\T_{CA}$ and $\\T_{DB}$, with the spring providing\nrestoring forces to help keep the joint within prescribed bounds.\n\nAs with joints, a frame spring can be connected to only a single body,\nby specifying {\\tt frameB} as {\\tt null}. Frame B is then taken to be\nthe world coordinate frame W.\n\n\\subsection{Example: Two bodies connected by a frame spring}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/LumbarFrameSpring}\n\\else\n \\includegraphics[width=3.75in]{images/LumbarFrameSpring}\n\\fi\n\\end{center}\n\\caption{LumbarFrameSpring model loaded into ArtiSynth.}\n\\label{LumbarFrameSpring:fig}\n\\end{figure}\n\nA simple model showing two simplified lumbar vertebrae, modeled as\nrigid bodies and connected by a frame spring, is defined in\n%\n\\begin{verbatim}\n  artisynth.demos.tutorial.LumbarFrameSpring\n\\end{verbatim}\n%\nThe definition for the entire model class is shown here:\n\\lstset{numbers=left}\n\\lstinputlisting{../../src/artisynth/demos/tutorial/LumbarFrameSpring.java}\n\\lstset{numbers=none}\n\nFor convenience, the code to create and add each vertebra is wrapped\ninto the method {\\tt addBone()} defined at lines 27-32. This method\ntakes two arguments: the {\\tt MechModel} to which the bone should be\nadded, and the name of the bone. Surface meshes for the bones are\nstored in {\\tt .obj} files located in the directory {\\tt\n../mech/geometry} relative to the source directory for the model\nitself.\n\\javamethod*[artisynth.core.util]{ArtisynthPath.getSrcRelativePath()}\nis used to find a proper path to this directory given the model class\ntype ({\\tt LumbarFrameSpring.class}), and this is stored in the static\nstring {\\tt geometryDir}. Within {\\tt addBone()}, the directory path\nand the bone name are used to create a path to the bone mesh itself,\nwhich is in turn used to create a {\\tt PolygonalMesh} (line 28). The\nmesh is then used in conjunction with a {\\tt density} to create a\nrigid body which is added to the {\\tt MechModel} (lines 29-30) and\nreturned.\n\nThe {\\tt build()} method begins by creating and adding a {\\tt\nMechModel}, specifying a low value for gravity, and setting the rigid\nbody damping properties {\\tt frameDamping} and {\\tt\nrotaryDamping} (lines 37-41). 
(The damping parameters are needed\nhere because the frame spring itself is created with no damping.)\nRigid bodies representing the vertebrae {\\tt lumbar1} and {\\tt\nlumbar2} are then created by calling {\\tt addBone()} (lines 44-45),\n{\\tt lumbar1} is translated by setting the origin of its pose to\n$(-0.016, 0.039, 0)^T$, and {\\tt lumbar2} is set to be fixed by making\nit non-dynamic (line 47).\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/LumbarFrameSpringNoflip}\n\\else\n \\includegraphics[width=3.75in]{images/LumbarFrameSpringNoflip}\n\\fi\n\\end{center}\n\\caption{LumbarFrameSpring model as it would appear if not rotated\nabout the x axis.}\n\\label{LumbarFrameSpringNoflip:fig}\n\\end{figure}\n\nAt this point in the construction, if the model were to be loaded, it\nwould appear as in Figure \\ref{LumbarFrameSpringNoflip:fig}. To change\nthe viewpoint to that seen in Figure \\ref{LumbarFrameSpring:fig}, we\nrotate the entire model about the x axis (line 50).  This is done\nusing\n\\javamethodAlt{artisynth.core.mechmodels.MechModel.transformGeometry()}\n{transformGeometry(X)}, which transforms the geometry of an entire\nmodel using a rigid or affine transform. This method is\ndescribed in more detail in Section \\ref{TransformingGeometry:sec}.\n\nThe frame spring is created and added at lines 54-59, using the\nmethods described in Section \\ref{CreatingFrameSprings:sec}, with\nframe D set to the (initial) pose of {\\tt lumbar1}.\n\nRender properties are set starting at line 62. By default, a frame\nspring renders as a pair of red, green, blue coordinate axes showing\nframes C and D, along with a line connecting them. The line width and\nthe color of the connecting line are controlled by the line render\nproperties {\\tt lineWidth} and {\\tt lineColor}, while the length of\nthe coordinate axes is controlled by the special frame spring property\n{\\tt axisLength}.\n\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nLumbarFrameSpring} from the {\\sf Models} menu. The model should load\nand initially appear as in Figure \\ref{LumbarFrameSpring:fig}.\nRunning the model (Section \\ref{LoadingAndRunning:sec}) will cause\n{\\tt lumbar1} to fall slightly under gravity until the frame spring\narrests the motion. To get a sense of the spring's behavior, one can\ninteractively apply forces to {\\tt lumbar1} using the pull manipulator\n(see the section ``Pull Manipulator'' in the\n\\href{\\artisynthDocBase/html/uiguide/uiguide.html}{\nArtiSynth User Interface Guide}).\n\n\\section{Attachments}\n\\label{Attachments:sec}\n\nArtiSynth provides the ability to rigidly attach dynamic components to\nother dynamic components, allowing different parts of a model to be\nconnected together.  Attachments are made by adding to a {\\tt\nMechModel} special {\\it attachment} components that manage the\nattachment physics as described briefly in Section\n\\ref{PhysicsSimulation:sec}.\n\n\\subsection{Point attachments}\n\\label{sec:mech:pointattachments}\n\nPoint attachments allow particles and other point-based components to\nbe attached to other, more complex components, such as frames, rigid\nbodies, or finite element models (Section \\ref{sec:fem:nodeattachments}). 
Point\nattachments are implemented by creating attachment components that are\ninstances of \\javaclass[artisynth.core.mechmodels]{PointAttachment}.\nModeling applications do not generally handle the attachment\ncomponents directly, but instead create them implicitly using the\nfollowing {\\tt MechModel} method:\n%\n\\begin{lstlisting}[]\n  attachPoint (Point p1, PointAttachable comp);\n\\end{lstlisting}\n%\nThis attaches a point {\\tt p1} to any component which implements the\ninterface \\javaclass[artisynth.core.mechmodels]{PointAttachable},\nindicating that it is capable of creating an attachment to a\npoint. Components that implement {\\tt PointAttachable} currently\ninclude rigid bodies, particles, and finite element models. The\nattachment is created based on the current position of the point\nand component in question.  For attaching a point to a rigid body,\nanother method may be used:\n%\n\\begin{lstlisting}[]\n  attachPoint (Point p1, RigidBody body, Point3d loc);\n\\end{lstlisting}\n%\nThis attaches {\\tt p1} to {\\tt body} at the point {\\tt loc} specified\nin body coordinates.  Finite element attachments are discussed in\nSection \\ref{sec:fem:nodeattachments}.\n\nOnce a point is attached, it\nwill be in the {\\it attached} state, as described in Section\n\\ref{DynamicVsParametric:sec}.  Attachments can be removed by\ncalling\n%\n\\begin{lstlisting}[]\n  detachPoint (Point p1);   \n\\end{lstlisting}\n%\n\n\\subsection{Example: model with particle attachments}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/ParticleAttachment}\n\\else\n \\includegraphics[width=3.75in]{images/ParticleAttachment}\n\\fi\n\\end{center}\n\\caption{ParticleAttachment model loaded into ArtiSynth.}\n\\label{ParticleAttachment:fig}\n\\end{figure}\n\nA model illustrating particle-particle and particle-rigid body attachments\nis defined in\n%\n\\begin{verbatim}\n  artisynth.demos.tutorial.ParticleAttachment\n\\end{verbatim}\n%\nand most of the code is shown here:\n%\n\\lstset{numbers=left}\n\\begin{lstlisting}[]\n   public Particle addParticle (MechModel mech, double x, double y, double z) {\n      // create a particle at x, y, z and add it to mech\n      Particle p = new Particle (/*name=*/null, /*mass=*/.1, x, y, z);\n      mech.addParticle (p);\n      return p;\n   }\n\n   public AxialSpring addSpring (MechModel mech, Particle p1, Particle p2){\n      // create a spring connecting p1 and p2 and add it to mech\n      AxialSpring spr = new AxialSpring (/*name=*/null, /*restLength=*/0);\n      spr.setMaterial (new LinearAxialMaterial (/*k=*/20, /*d=*/10));\n      spr.setPoints (p1, p2);\n      mech.addAxialSpring (spr);\n      return spr;\n   }\n\n   public void build (String[] args) {\n\n      // create MechModel and add to RootModel\n      MechModel mech = new MechModel (\"mech\");\n      addModel (mech);\n\n      // create the components\n      Particle p1 = addParticle (mech, 0, 0, 0.55);\n      Particle p2 = addParticle (mech, 0.1, 0, 0.35);\n      Particle p3 = addParticle (mech, 0.1, 0, 0.35);\n      Particle p4 = addParticle (mech, 0, 0, 0.15);\n      addSpring (mech, p1, p2);\n      addSpring (mech, p3, p4);\n      // create box and set its pose (position/orientation):\n      RigidBody box =\n         RigidBody.createBox (\"box\", /*wx,wy,wz=*/0.5, 0.3, 0.3, /*density=*/20);\n      box.setPose (new RigidTransform3d (/*x,y,z=*/0.2, 0, 0));\n      mech.addRigidBody (box);\n\n      p1.setDynamic (false);               // first particle set to be fixed\n\n      // 
set up the attachments\n      mech.attachPoint (p2, p3);\n      mech.attachPoint (p4, box, new Point3d (0, 0, 0.15));\n\n      // increase model bounding box for the viewer\n      mech.setBounds (/*min=*/-0.5, 0, -0.5, /*max=*/0.5, 0, 0);  \n      // set render properties for the components\n      RenderProps.setSphericalPoints (mech, 0.06, Color.RED);\n      RenderProps.setCylindricalLines (mech, 0.02, Color.BLUE);\n   }\n\\end{lstlisting}\n\\lstset{numbers=none}\n%\nThe code is very similar to {\\tt ParticleSpring} and {\\tt\nRigidBodySpring} described in Sections \\ref{ParticleSpringExample:sec}\nand \\ref{RigidBodySpringExample:sec}, except that two convenience\nmethods, {\\tt addParticle()} and {\\tt addSpring()}, are defined at\nlines 1-15 to create particles and springs and add them to a {\\tt\nMechModel}. These are used in the {\\tt build()} method to create four\nparticles and two springs (lines 24-29), along with a rigid body box\n(lines 31-34). As with the other examples, particle {\\tt p1} is set to\nbe non-dynamic (line 36) in order to fix it in place and provide a\nground.\n\nThe attachments are added at lines 39-40, with {\\tt p2} attached to\n{\\tt p3} and {\\tt p4} connected to the box at the location $(0, 0,\n0.15)$ in box coordinates. \n\nFinally, render properties are set starting at line 43. In this\nexample, point and line render properties are set for the entire {\\tt\nMechModel} instead of individual components.  Since render properties\nare inherited, this will implicitly set the specified render\nproperties in all sub-components for which these properties are not\nexplicitly set (either locally or in an intermediate ancestor).\n\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nParticleAttachment} from the {\\sf Models} menu. The model should load\nand initially appear as in Figure \\ref{ParticleAttachment:fig}.\nRunning the model (Section \\ref{LoadingAndRunning:sec}) will cause the\nbox to fall and swing under gravity.\n\n\\subsection{Frame attachments}\n\\label{sec:mech:frameattachments}\n\nFrame attachments allow rigid bodies and other frame-based components to\nbe attached to other components, including frames, rigid\nbodies, or finite element models (Section \\ref{sec:fem:frameattachments}).\nFrame attachments are implemented by creating attachment components that are\ninstances of \\javaclass[artisynth.core.mechmodels]{FrameAttachment}.\n\nAs with point attachments, modeling applications do not generally\nhandle frame attachment components directly, but instead create and\nadd them\nimplicitly using the following {\\tt MechModel} methods:\n%\n\\begin{lstlisting}[]\n  attachFrame (Frame frame, FrameAttachable comp);\n\n  attachFrame (Frame frame, FrameAttachable comp, RigidTransform3d TFW);\n\\end{lstlisting}\n%\nThese attach {\\tt frame} to any component which implements the\ninterface \\javaclass[artisynth.core.mechmodels]{FrameAttachable},\nindicating that it is capable of creating an attachment to a\nframe. Components that implement {\\tt FrameAttachable} currently\ninclude frames, rigid bodies, and finite element models.  For the\nfirst method, the attachment is created based on the current\nposition of the frame and component in question. For the second\nmethod, the attachment is created so that the initial pose of the frame\n(in world coordinates) is described by {\\tt TFW}.\n\nOnce a frame is attached, it\nwill be in the {\\it attached} state, as described in Section\n\\ref{DynamicVsParametric:sec}.  
Frame attachments can be removed by\ncalling\n%\n\\begin{lstlisting}[]\n  detachFrame (Frame frame);   \n\\end{lstlisting}\n%\n\n\\begin{sideblock}\nWhile it is possible to create composite rigid bodies using {\\tt\nFrameAttachments}, this is much less computationally efficient (and\nless accurate) than creating a single rigid body through mesh merging\nor similar techniques.\n\\end{sideblock}\n\n\\subsection{Example: model with frame attachments}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\iflatexml\n \\includegraphics[]{images/FrameBodyAttachment}\n\\else\n \\includegraphics[width=3.75in]{images/FrameBodyAttachment}\n\\fi\n\\end{center}\n\\caption{FrameBodyAttachment model loaded into ArtiSynth.}\n\\label{FrameBodyAttachment:fig}\n\\end{figure}\n\nA model illustrating rigidBody-rigidBody and frame-rigidBody attachments\nis defined in\n%\n\\begin{verbatim}\n  artisynth.demos.tutorial.FrameBodyAttachment\n\\end{verbatim}\n%\nMost of the code is identical to that for {\\tt RigidBodyJoint}\nas described in Section \\ref{RigidBodyJoint:sec}, except that\nthe joint is further to the left and connects {\\tt bodyB} to ground,\nrather than to {\\tt bodyA}, and the initial pose of {\\tt bodyA}\nis changed so that it is aligned vertically. {\\tt bodyA} is\nthen connected to {\\tt bodyB}, and an auxiliary frame is created\nand attached to {\\tt bodyA}, using code at the end\nof the {\\tt build()} method as shown here:\n%\n\\lstset{numbers=left}\n\\begin{lstlisting}[]\n   public void build (String[] args) {\n\n      ... create model mostly similar to RigidBodyJoint ...\n\n      // now connect bodyA to bodyB using a FrameAttachment\n      mech.attachFrame (bodyA, bodyB);\n\n      // create an auxiliary frame and add it to the mech model\n      Frame frame = new Frame();\n      mech.addFrame (frame);\n      \n      // set the frame's axis length > 0 so we can see it\n      frame.setAxisLength (4.0); \n      // set the attached frame's pose to that of bodyA ...\n      RigidTransform3d TFW = new RigidTransform3d (bodyA.getPose());\n      // ... plus a translation of lenx2/2 along the x axis:\n      TFW.mulXyz (lenx2/2, 0, 0);\n      // finally, attach the frame to bodyA\n      mech.attachFrame (frame, bodyA, TFW);\n   }\n\\end{lstlisting}\n\\lstset{numbers=none}\n%\nTo run this example in ArtiSynth, select {\\sf All demos > tutorial >\nFrameBodyAttachment} from the {\\sf Models} menu. The model should load\nand initially appear as in Figure \\ref{FrameBodyAttachment:fig}.  
The\nframe attached to {\\tt bodyA} is visible in the lower right corner.\nRunning the model (Section \\ref{LoadingAndRunning:sec}) will cause\nboth bodies to fall and swing about the joint under gravity.\n\n
{"text": "% -*- coding: utf-8 -*-\n%\n% doc/tex/funcs.tex\n%-------------------------------------------------- part of the fastmat demos\n%\n% Author      : sempersn\n% Introduced  : \n%------------------------------------------------------------------------------\n%  \n%  Copyright 2016 Sebastian Semper, Christoph Wagner\n%      https://www.tu-ilmenau.de/ems/\n%\n%  Licensed under the Apache License, Version 2.0 (the \"License\");\n%  you may not use this file except in compliance with the License.\n%  You may obtain a copy of the License at\n%\n%      http://www.apache.org/licenses/LICENSE-2.0\n%\n%  Unless required by applicable law or agreed to in writing, software\n%  distributed under the License is distributed on an \"AS IS\" BASIS,\n%  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n%  See the License for the specific language governing permissions and\n%  limitations under the License.\n%\n%------------------------------------------------------------------------------\n\nEvery fast transformation $\\bm A$ offers a basic set of functions that can be used externally.\n\n\\subsection{Forward projection}\n\\texttt{A.forward(x)} -- calculates $\\bm A \\cdot \\bm x$ using the method that is efficient for $\\bm A$. Note that $\\bm A \\cdot \\bm X = [\\bm A \\cdot \\bm x^1,\\dots,\\bm A \\cdot \\bm x^k]$ is also possible and it uses \\np{} internal broadcasting whenever possible.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the packages\nimport fastmat as fm\nimport numpy as np\n\n# define a random vector\nvec_x = np.random.randn(2**N)\n\n# define a random matrix\nmat_X = npr.randn(2**N,k)\n\n# define a hadamard matrix\nH = fm.Hadamard(N)\n\n# apply the matrices\nvec_y = H.forward(vec_x)\nvec_Y = H.forward(mat_X)\n\\end{lstlisting}\n\nAssume we have a vector $\\bm x \\in \\R^{2^N}$ and a matrix $\\bm X \\in \\R^{2^N \\times k}$ and a Hadamard matrix $\\bm{\\mathcal{H}}_N$ of order $N$. Then we calculate\n\\begin{align}\n\\bm y = \\bm H \\cdot \\bm x, ~ \\mathrm{and} ~ \\bm Y = \\bm H \\cdot \\bm X\n\\end{align}\n\\end{snippet}\n\n%\n%\n%\n\\subsection{Backward projection}\n\\texttt{A.backward(x)} -- calculates $\\bm A^\\herm \\cdot \\bm x$ using the method that is efficient for $\\bm A$. Here $\\bm A^\\trans = \\bm A^\\herm$, if $\\bm A$ is a real valued mapping.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the packages\nimport fastmat as fm\nimport numpy as np\n\n# define a random vector\nvec_x = npr.randn(2**N)\n\n# define a random matrix\nmat_X = npr.randn(2**N,k)\n\n# define a hadamard matrix\nH = fm.Hadamard(N)\n\n# apply the matrices\nvec_y = H.backward(vec_x)\nvec_Y = H.backward(mat_X)\n\\end{lstlisting}\n\nAssume we have a vector $\\bm x \\in \\R^{2^N}$ and a matrix $\\bm X \\in \\R^{2^N \\times k}$ and a Hadamard matrix $\\mathcal{\\bm H}_N$ of order $N$. Then we calculate\n\t\\[\\bm y = \\bm H^\\trans \\cdot \\bm  x, ~ \\mathrm{and} ~ \\bm Y = \\bm H^\\trans \\cdot \\bm X.\\]\n\\end{snippet}\n\n%\n%\n%\n\\subsection{Item access} \n\\texttt{A[$i,j$]} -- returns $\\bm e_i^\\trans \\cdot \\bm A \\cdot \\bm e_j$, i.e.\\ the element $a_{i,j}$.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the package\nimport fastmat as fm\n\n#define a Fourier matrix\nF = fm.Fourier(N)\n\n#access the (i,j)th entry\nf = F[i,j]\n\\end{lstlisting}\n\nAssume we have a fourier matrix $\\bm{\\mathcal{F}}_N$. 
Then we get\n\t\\[f = \\left(\\bm{\\mathcal{F}}_N\\right)_{i,j}.\\]\n\\end{snippet}\n\n\\textit{Note:} Please bear in mind that \\fm{} is not optimized for this type of item access and any algorithm making heavy use of this operation will be slowed down significantly.\n%\n%\n%\n\\subsection{Column access} \n\\texttt{A.getCols($[i_1,\\dots,i_k]$)} -- returns $\\bm a_{i_1}, ... , \\bm a_{i_k}$, i.e. the columns of $\\bm A$ with the given indices.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the package\nimport fastmat as fm\n\n# define a Fourier matrix\nF = fm.Fourier(N)\n\n# access the given columns\nf = F.getCols([1,2,3])\n\\end{lstlisting}\n\nAssume we have a Fourier matrix $\\bm{\\mathcal{F}}_N$. Then we get the columns\n\t\\[\\bm f_1, \\bm f_2, \\bm f_3.\\]\n\\end{snippet}\n\n\\textit{Note:} Please bear in mind that \\fm{} is not optimized for this type of item access and any algorithm making heavy use of this operation will be slowed down significantly.\n%\n%\n%\n\\subsection{Row access} \n\\texttt{A.getRows($[i_1,\\dots,i_k]$)} -- returns $\\bm a^{i_1}, ... , \\bm a^{i_k}$, i.e. the rows of $\\bm A$ (the columns of $\\bm A^\\trans$) with the given indices.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the package\nimport fastmat as fm\n\n# define a Fourier matrix\nF = fm.Fourier(N)\n\n# access the given rows\nf = F.getRows([1,2,3])\n\\end{lstlisting}\n\nAssume we have a Fourier matrix $\\bm{\\mathcal{F}}_N$. Then we get the rows\n\t\\[\\bm f^1, \\bm f^2, \\bm f^3.\\]\n\\end{snippet}\n\n\\textit{Note:} Please bear in mind that \\fm{} is not optimized for this type of item access and any algorithm making heavy use of this operation will be slowed down significantly.\n%\n%\n%\n\\subsection{Overloaded Operators}\nThe \\fm{} package comes with a variety of overloaded operators to facilitate the production of readable and less cluttered code.\n\n\\begin{itemize}\n \\item simple scalar multiplication via \\texttt{S = 3 * A} for $\\bm A$ being an arbitrary \\fm{} class instance.\n \\item easy summation via \\texttt{S = A $\\pm$ B} for $\\bm A$ and $\\bm B$ being arbitrary \\fm{} class instances, which returns $\\bm S$ as a \\texttt{fastmat.Sum} object.\n \\item easy product building \\texttt{P = A * B} for $\\bm A$ and $\\bm B$ being arbitrary \\fm{} class instances, which returns $\\bm P$ as a \\texttt{fastmat.Product} object.\n \\item easy forward transformation \\texttt{y = A * x}, where $\\bm x$ and $\\bm y$ are \\np{} arrays. So the calculation is a shorthand for \\texttt{y = A.forward(x)}\n \\item easy backward transformation \\texttt{y = A.H * x}, where $\\bm x$ and $\\bm y$ are \\np{} arrays. So the calculation is a shorthand for \\texttt{y = A.backward(x)}\n\\end{itemize}\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the package\nimport fastmat as fm\n\n# define a Fourier and an identity matrix\nA = fm.Fourier(N)\nB = fm.Eye(N)\n\n# cast a scalar multiple\nM = 3 * A\n\n# cast a sum\nS = A + B\n\n# cast a product\nP = A * B\n\n# combine everything\nM = (3 * A) - (1j * B)\n\n# easy transformation\ny = S * x\n\\end{lstlisting}\n\\end{snippet}\n\n%\n%\n%\n\\subsection{Conversion to Numpy}\n\\texttt{A.toarray()} -- returns a $2$D \\np{}-array. This array represents the same mapping, but it can only be used with normal matrix multiplications. 
Use it with care, since it might consume a lot of memory.\n\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the package\nimport fastmat as fm\n\n# define a Fourier matrix\nF = fm.Fourier(N)\n\n# convert it to a numpy array\narr_F = F.toarray()\n\n# print the new object\nprint(arr_F)\n\\end{lstlisting}\n\nAssume we have a Fourier mapping $\\bm{\\mathcal{F}}_N$. Then we get a standard \\np{}-array that contains the entries of the Fourier matrix.\n\\end{snippet}\n%\n\\textit{Note:} Please bear in mind that by doing so, you give up the performance benefits of \\fm{}, but gain all the flexibility offered by \\np{} instead. This is especially useful if you need to carry out a lot of item, row or column accesses during your calculations. \n
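\nAs a small end-to-end sketch combining the calls above (the size parameter {\\tt N} is illustrative), one can verify that the fast transform agrees with plain dense multiplication:\n\\begin{snippet}\n\\begin{lstlisting}[language=Python]\n# import the packages\nimport fastmat as fm\nimport numpy as np\n\nN = 4\n\n# define a Hadamard matrix and a random vector\nH = fm.Hadamard(N)\nvec_x = np.random.randn(2**N)\n\n# the fast transform should match multiplication by the dense array\nvec_y = H.forward(vec_x)\narr_H = H.toarray()\nprint(np.allclose(vec_y, arr_H.dot(vec_x)))  # expected: True\n\\end{lstlisting}\n\\end{snippet}\n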
{"text": "\\subsection{Benchmark functions}\nIn this paper, we compare the performance of the binary version of GA, PSO and HPSOWM that will be called BGA, BPSO, BHPSOWM respectively, using a suite of benchmark test functions (shown in \\ref{table1}). Many different kinds of optimization problems are covered by these benchmark test functions \\cite{ling2008hybrid}. They are divided into three categories as follows:\n\\begin{itemize}\n\t\\item Category I - Unimodal functions: a symmetry model with a single minimum. Function $f_1$ to $f_7$ are in this category\n\t\\item Category II - Multimodal functions with few local minima. Function $f_8$ to $f_{13}$ belong to this.\n\t\\item Category III - Multimodal functions with many local minima which includes function $f_{14}$ to $f_{18}$\n\\end{itemize}\n\n\n% Benchmark functions\n\n\\begin{table}[H]\\label{table1}\n\t\\caption{Benchmark test functions}\n\t\\begin{center}\n\t\t\\bgroup\n\t\t\\def\\arraystretch{1.5}%  1 is the default, change whatever you need\n\t    \\begin{tabular}{  >{\\centering\\arraybackslash}m{8cm} | >{\\centering\\arraybackslash}m{3cm} | >{\\centering\\arraybackslash}m{3cm} }\n\t\t    \\hline\n\t\t    Test function & Domain range & Optimal point \\\\ \\hline\n    \n\t\t    $f_1(x) = \\sum\\limits_{i=1}^{30}{x_i}^2$ & \n\t\t    $-100 \\leq x_i \\leq 100$ & \n\t\t    $f_i(0) = 0$ \n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_2(x) = \\sum\\limits_{i=1}^{9}\\Big[ 100(x_{i+1} - x_{i}^2)^2 + (x_i - 1)^2 \\Big]$ &\n    \t\t\t$-2.048 \\leq x_i \\leq 2.048$ &\n    \t\t\t$f_2(1) = 0$    \t\t\t\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_3(x) = \\sum\\limits_{i=1}^{100}(|x_i + 0.5|)^2$ &\n    \t\t\t$-10 \\leq x_i \\leq 10$ &\n    \t\t\t$f_3(0) = 0$ \n    \t\t\t\\\\ \\hline    \t\t\t\n    \t\t\t\n    \t\t\t$f_4(x) = \\sum\\limits_{i=1}^{10}ix_{i}^4 + random[0,1)$ &\n    \t\t\t$-2.56 \\leq x_i \\leq 2.56$ &\n    \t\t\t$f_4(0) = 0$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_5(x) = max{|x_i|, 1 \\leq i \\leq 30}$ &\n    \t\t\t$-100 \\leq x_i \\leq 100$ &\n    \t\t\t$f_5(0) = 0$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_6(x) = \\sum\\limits_{i=1}^{30}|x_i| + \\prod\\limits_{i=1}^{30}|x_i|$ &\n    \t\t\t$-10 \\leq x_i \\leq 10$ &\n    \t\t\t$f_5(0) = 0$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t\n    \t\t\t$f_7(x) = -cos(x_1) \\cdot cos(x_2) \\cdot exp \\Big( -\\Big( (x_1 - \\pi)^2 + (x_2 + \\pi)^2\\Big) \\Big)$ &\n    \t\t\t$-300 \\leq x_1, x_2 \\leq 300$ &\n    \t\t\t$f_7(\\Big[ \\pi,\\pi \\Big]) = -1$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_8(x) = \\Bigg[ \\frac{1}{500} + \\sum\\limits_{j=1}^{25}\\frac{1}{j + \\sum_{i=1}^{2}(x_i - a_{ij})^6} \\Bigg]^{-1}$ &\n    \t\t\t$-65.536 \\leq x_1, x_2 \\leq 65.536$ &\n    \t\t\t$f_8(|-32, 32|) \\approx 1$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_9(x) = \\sum\\limits_{i=1}^{9} \\Bigg[ a_i - \\frac{x_1(b_{i}^2 + b_i x_2)}{b_{i}^2 + b_i x_3 + x_4} \\Bigg]^2$ &\n    \t\t\t$-5 \\leq x_i \\leq 5$ &\n    \t\t\t$f_9(0.1928, 0.1928, 0.1231, 0.1358) \\approx 0.0003075$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_{10}(x) = -\\frac{sin(x_1)sin(x_2)}{x_1 x_2}$ &\n    \t\t\t$-10 \\leq x_1, x_2 \\leq 10$ &\n    \t\t\t$\\displaystyle\\lim_{x \\to |0,0|}f_{10}(x) = -1$\n    \t\t\t\\\\ \\hline\n    \t\t\t\n    \t\t\t$f_{11}(x) = 4x_{1}^2 - 2.1x_{1}^4 + \\frac{1}{3}x_{1}^6 + x_1 x_2 - 4x_{2}^2 + 4x_{2}^4$ &\n    \t\t\t$-5 \\leq x_1, x_2 \\leq 5$ &\n    \t\t\t$f_{11}(\\Big[ 0.08983, -0.7126 \\Big]) = f_{11}(\\Big[ -0.08983, 0.7126 \\Big]) \\approx -1.0316$\n    
			\\ \hline

			$f_{12}(x) = -\sum\limits_{i=1}^{4} c_i \exp\Bigg[ -\sum\limits_{j=1}^{3} a_{ij}\big( x_j - p_{ij} \big)^2 \Bigg]$ &
			$0 \leq x_i \leq 1$ &
			$f_{12}(0.114, 0.556, 0.852) \approx -3.8628$
			\\ \hline

			$f_{13}(x) = -\sum\limits_{i=1}^{4} c_i \exp\Bigg[ -\sum\limits_{j=1}^{6} a_{ij}\big( x_j - p_{ij} \big)^2 \Bigg]$ &
			$0 \leq x_i \leq 1$ &
			$f_{13}([0.201, 0.15, 0.477, 0.275, 0.311, 0.627]) \approx -3.32$
			\\ \hline

			$
			f_{14}(x) = 0.1 \left\{ \begin{array}{l}
				\sin^2(3 \pi x_1) \\
				+ \sum\limits_{i=1}^{29}(x_i - 1)^2 \cdot \big[ 1 + \sin^2(3 \pi x_{i+1}) \big] \\
				+ (x_{30} - 1)^2 \big[ 1 + \sin^2(2 \pi x_{30}) \big]
			\end{array}\right\}
			+ \sum\limits_{i=1}^{30} u(x_i, 5, 100, 4)
			$ &
			$-50 \leq x_i \leq 50$ &
			$f_{14}(1) = 0$
			\\ \hline

			$f_{15}(x) = \sum\limits_{i=1}^{30}\big[ x_i^2 - 10 \cos(2\pi x_i) + 10 \big]$ &
			$-50 \leq x_i \leq 50$ &
			$f_{15}(0) = 0$
			\\ \hline

			$f_{16}(x) = \frac{1}{4000}\sum\limits_{i=1}^{30} x_i^2 - \prod\limits_{i=1}^{30} \cos\Big( \frac{x_i}{\sqrt{i}} \Big) + 1$ &
			$-600 \leq x_i \leq 600$ &
			$f_{16}(0) = 0$
			\\ \hline

			$f_{17}(x) = -20 \exp\Bigg( -0.2\sqrt{\frac{1}{30}\sum\limits_{i=1}^{30} x_i^2} \Bigg) - \exp\Bigg( \frac{1}{30} \sum\limits_{i=1}^{30} \cos(2 \pi x_i) \Bigg) + 20 + e$ &
			$-32 \leq x_i \leq 32$ &
			$f_{17}(0) = 0$
			\\ \hline

			$f_{18}(x) = -\sum\limits_{i=1}^{10} x_i \sin\big( \sqrt{|x_i|} \big)$ &
			$-500 \leq x_i \leq 500$ &
			$f_{18}([420.9687, \cdots, 420.9687]) = -10 \times 418.9829 = -4189.829$
			\\ \hline

		\end{tabular}
		\egroup
	\end{center}
	In $f_{14}$, $u(x_i, a, k, m)$ denotes the usual penalty function: $u(x, a, k, m) = k(x - a)^m$ for $x > a$, $0$ for $-a \leq x \leq a$, and $k(-x - a)^m$ for $x < -a$.
\end{table}
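To make the definitions in Table~\ref{table1} concrete, the following short sketch (our own illustration in Python with \texttt{numpy}; it is not part of the original experiments) evaluates one function from Category I and one from Category III:

\begin{verbatim}
import numpy as np

def f1_sphere(x):
    # Category I: unimodal sum of squares, global minimum f1(0) = 0
    return np.sum(x ** 2)

def f16_griewank(x):
    # Category III: many local minima, global minimum f16(0) = 0
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

x0 = np.zeros(30)
print(f1_sphere(x0), f16_griewank(x0))  # 0.0 0.0 at the optimum
\end{verbatim}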
\subsection{Experiment settings}
We adopt the settings from the original research on HPSOWM \cite{ling2008hybrid}; the main settings are:
\begin{itemize}
	\item Swarm size: 50
	\item Number of runs: 50
	\item Time limit for each run: 120 seconds
	\item Probability of crossover for GA: 0.001
	\item Initial population: we start all three algorithms from a single shared initial population, so that their performance can be compared from the same starting point
\end{itemize}
For the binary versions, we also adopt the settings from \cite{kennedy1997discrete}:
\begin{itemize}
	\item $V_{max}$, the maximum value of the velocity, is set to 6
\end{itemize}
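For reproducibility, these settings can be gathered into a single configuration object. The sketch below is purely illustrative -- the names are ours, not from the original implementation:

\begin{verbatim}
# Illustrative experiment configuration (hypothetical names, mirroring
# the settings listed above; not from the original implementation).
EXPERIMENT_CONFIG = {
    "swarm_size": 50,         # particles / individuals per population
    "num_runs": 50,           # independent runs per benchmark function
    "time_limit_s": 120,      # wall-clock budget per run, in seconds
    "crossover_prob": 0.001,  # GA crossover probability
    "v_max": 6,               # velocity limit for the binary PSO variants
    "shared_initial_population": True,  # same starting point for all three
}
\end{verbatim}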
\subsection{Results and Discussions}

The experiment compared the performance of the three binary versions of GA, PSO and HPSOWM, namely BGA, BPSO and BHPSOWM. The overall results show that BHPSOWM outperformed the other two methods on many benchmark functions in finding the best objective value within a limited running time. On some functions ($f_1$, $f_6$) it even found the optimal value.

As can be seen from Tables \ref{tab:graph1}, \ref{tab:graph2} and \ref{tab:graph3}, the performance of BHPSOWM after 5000 iterations surpassed the other two on many benchmark functions across the three categories ($f_1$, $f_3$, $f_6$, $f_9$, $f_{10}$, $f_{11}$, $f_{13}$, $f_{14}$). We can also see that the convergence rate of BHPSOWM is not as fast as that of BGA and BPSO. However, it converges steadily on all benchmark functions, especially on multimodal functions with many local minima such as $f_{14}$ and $f_8$. This is an important property: since the search space of a function is unknown, an optimization algorithm should be able to keep moving to other areas and continue searching, instead of being trapped in a local minimum after a very fast convergence at the beginning of the search.

Additionally, BGA has the worst performance on the benchmark functions. On some functions, such as $f_{12}$ (Hartmann's family, 3-dimensional), it quickly falls into a local minimum. BPSO shares the same behaviour but performs better. Both of them converge very fast in the first 500 iterations, but are then trapped and cannot jump out.

According to Tables \ref{tab:tb2}, \ref{tab:tb3} and \ref{tab:tb4}, BHPSOWM has smaller mean and standard deviation values, which means that both the quality and the stability of its solutions are good.
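The Best, Mean and Std Dev entries in the following tables can be obtained from the per-run results as in this sketch (illustrative only; it assumes the best objective value of each of the 50 runs has been collected into an array):

\begin{verbatim}
import numpy as np

# Stand-in data: best objective value found in each of the 50 runs of
# one algorithm on one benchmark function (replace with measurements).
rng = np.random.default_rng(0)
run_results = rng.normal(loc=40.0, scale=50.0, size=50)

best = run_results.min()        # "Best" row (minimization)
mean = run_results.mean()       # "Mean" row
std  = run_results.std(ddof=1)  # "Std Dev" row (sample std; the ddof
                                # choice is our assumption)
print(f"Best={best:.3f} Mean={mean:.3f} Std Dev={std:.3f}")
\end{verbatim}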
\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA -- Category I: unimodal functions}\label{tab:graph1}
  \begin{tabular*}{\textwidth}{cc}
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f1_graph} \\ $f_1(x)$ \end{tabular}
    &
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f2_graph} \\ $f_2(x)$ \end{tabular}
    \\
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f3_graph} \\ $f_3(x)$ \end{tabular}
    &
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f6_graph} \\ $f_6(x)$ \end{tabular}
  \end{tabular*}
\end{table}

\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA -- Category II: multimodal functions with few local minima}\label{tab:graph2}
  \begin{tabular*}{\textwidth}{cc}
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f9_graph} \\ $f_9(x)$ \end{tabular}
    &
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f10_graph} \\ $f_{10}(x)$ \end{tabular}
    \\
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f11_graph} \\ $f_{11}(x)$ \end{tabular}
    &
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f12_graph} \\ $f_{12}(x)$ \end{tabular}
    \\
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f13_graph} \\ $f_{13}(x)$ \end{tabular}
    &
    \\
  \end{tabular*}
\end{table}

\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA -- Category III: multimodal functions with many local minima}\label{tab:graph3}
  \begin{tabular*}{\textwidth}{cc}
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f14_graph} \\ $f_{14}(x)$ \end{tabular}
    &
    \begin{tabular}[c]{@{}c@{}} \includegraphics[width=60mm]{resources/f18_graph} \\ $f_{18}(x)$ \end{tabular}
  \end{tabular*}
\end{table}

\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{.1em}
\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA on the benchmark test functions -- Category I}\label{tab:tb2}
\centering
  \begin{tabular}{|c|l|r|r|r|}
  \hline
  & & BHPSOWM & BPSO & GA \\
  \hline

  \begin{tabular}[c]{@{}c@{}} $f_1(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.0\\ 42.396\\ 52.112 \end{tabular} &
  \begin{tabular}{@{}r@{}} 312.18\\ 612.251\\ 204.076 \end{tabular} &
  \begin{tabular}{@{}r@{}} 216.32\\ 487.218\\ 175.824 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_2(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 32.154\\ 39.285\\ 2.646 \end{tabular} &
  \begin{tabular}{@{}r@{}} 40.099\\ 58.319\\ 9.376 \end{tabular} &
  \begin{tabular}{@{}r@{}} 38.756\\ 54.960\\ 9.062 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_3(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 7.5\\ 7.614\\ 0.067 \end{tabular} &
  \begin{tabular}{@{}r@{}} 7.843\\ 8.611\\ 0.353 \end{tabular} &
  \begin{tabular}{@{}r@{}} 7.759\\ 8.357\\ 0.338 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_4(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.740\\ 2.115\\ 0.111 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.800\\ 2.030\\ 0.09 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.727\\ 1.981\\ 0.112 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_5(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 20.8\\ 25.276\\ 1.389 \end{tabular} &
  \begin{tabular}{@{}r@{}} 16.0\\ 26.524\\ 2.186 \end{tabular} &
  \begin{tabular}{@{}r@{}} 11.3\\ 14.171\\ 1.281 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_6(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.0\\ 2.146\\ 0.813 \end{tabular} &
  \begin{tabular}{@{}r@{}} 5.42\\ 10.853\\ 2.597 \end{tabular} &
  \begin{tabular}{@{}r@{}} 4.32\\ 8.52\\ 2.278 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_7(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.941\\ -0.839\\ 0.233 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.941\\ -0.591\\ 0.356 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.941\\ -0.059\\ 0.194 \end{tabular}
  \\ \hline

  \end{tabular}
\end{table}

\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{.1em}
\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA on the benchmark test functions -- Category II}\label{tab:tb3}
\centering
  \begin{tabular}{|c|l|r|r|r|}
  \hline
  & & BHPSOWM & BPSO & GA \\
  \hline

  \begin{tabular}[c]{@{}c@{}} $f_8(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.000000755\\ 2.268\\ 1.485 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.000000755\\ 7.474\\ 23.380 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.000000808\\ 6.273\\ 4.321 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_9(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.0008\\ 0.006\\ 0.005 \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.001\\ 0.025\\ 0.029 \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.012\\ 0.102\\ 0.041 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{10}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.708\\ -0.708\\ 0.0 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.708\\ -0.703\\ 0.004 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.708\\ -0.690\\ 0.021 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{11}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.999\\ -0.999\\ 0.000 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.999\\ -0.954\\ 0.058 \end{tabular} &
  \begin{tabular}{@{}r@{}} -0.999\\ -0.999\\ 6.096 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{12}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} -27.749\\ -27.488\\ 0.193 \end{tabular} &
  \begin{tabular}{@{}r@{}} -27.749\\ -27.366\\ 1.057 \end{tabular} &
  \begin{tabular}{@{}r@{}} -20.086\\ -8.169\\ 1.702 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{13}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} -3.31999\\ -3.281\\ 0.099 \end{tabular} &
  \begin{tabular}{@{}r@{}} -3.31989\\ -3.142\\ 0.210 \end{tabular} &
  \begin{tabular}{@{}r@{}} -3.135\\ -2.698\\ 0.083 \end{tabular}
  \\ \hline

  \end{tabular}
\end{table}
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{.1em}
\begin{table}[H]
\caption{Comparison between BHPSOWM, BPSO and GA on the benchmark test functions -- Category III}\label{tab:tb4}
\centering
  \begin{tabular}{|c|l|r|r|r|}
  \hline
  & & BHPSOWM & BPSO & GA \\
  \hline

  \begin{tabular}[c]{@{}c@{}} $f_{14}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 0.4\\ 7.148\\ 5.956 \end{tabular} &
  \begin{tabular}{@{}r@{}} 2.424\\ 31.692\\ 18.627 \end{tabular} &
  \begin{tabular}{@{}r@{}} 3.532\\ 26.948\\ 16.550 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{15}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 25.279\\ 76.182\\ 26.749 \end{tabular} &
  \begin{tabular}{@{}r@{}} 115.505\\ 275.218\\ 78.383 \end{tabular} &
  \begin{tabular}{@{}r@{}} 55.839\\ 226.633\\ 84.434 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{16}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} $10^{-9}$\\ $10^{-9}$\\ 0.000 \end{tabular} &
  \begin{tabular}{@{}r@{}} $10^{-9}$\\ $10^{-9}$\\ 0.000 \end{tabular} &
  \begin{tabular}{@{}r@{}} $10^{-9}$\\ $10^{-9}$\\ 0.000 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{17}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.718\\ 1.718\\ 0.000 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.718\\ 1.718\\ 0.000 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1.718\\ 1.718\\ 0.000 \end{tabular}
  \\ \hline

  \begin{tabular}[c]{@{}c@{}} $f_{18}(x)$\\ $T = 120$ \end{tabular} &
  \begin{tabular}[c]{@{}l@{}} Best\\ Mean\\ Std Dev \end{tabular} &
  \begin{tabular}{@{}r@{}} 946.487\\ 1984.875\\ 267.413 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1153.454\\ 1960.315\\ 401.983 \end{tabular} &
  \begin{tabular}{@{}r@{}} 1210.081\\ 2072.108\\ 472.012 \end{tabular}
  \\ \hline

  \end{tabular}
\end{table}
"c40a7c3575b9560205252b501b3a9b506a715260", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "paper/chapters/p3.tex", "max_forks_repo_name": "minhprg/binary-hpsowm", "max_forks_repo_head_hexsha": "c40a7c3575b9560205252b501b3a9b506a715260", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.1466512702, "max_line_length": 751, "alphanum_fraction": 0.5539234424, "num_tokens": 8026, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.7371581510799253, "lm_q2_score": 0.7461389930307512, "lm_q1q2_score": 0.5500224405511858}}
{"text": "% This is part of the TFTB Reference Manual.\n% Copyright (C) 1996 CNRS (France) and Rice University (US).\n% See the file refguide.tex for copying conditions.\n\n\n\n\\markright{dopnoise}\n\\section*{\\hspace*{-1.6cm} dopnoise}\n\n\\vspace*{-.4cm}\n\\hspace*{-1.6cm}\\rule[0in]{16.5cm}{.02cm}\n\\vspace*{.2cm}\n\n\n\n{\\bf \\large \\sf Purpose}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nComplex doppler random signal.\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Synopsis}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n\\begin{verbatim}\n[y,iflaw] = dopnoise(N,fs,f0,d,v)\n[y,iflaw] = dopnoise(N,fs,f0,d,v,t0)\n[y,iflaw] = dopnoise(N,fs,f0,d,v,t0,c)\n\\end{verbatim}\n\\end{minipage}\n\\vspace*{.5cm}\n\n\n{\\bf \\large \\sf Description}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\n        {\\ty dopnoise} generates a complex noisy doppler signal, normalized\n        so as to be of unit energy. \\\\\n\n\\hspace*{-.5cm}\\begin{tabular*}{14cm}{p{1.5cm} p{8.5cm} c}\nName & Description & Default value\\\\\n\\hline\n         {\\ty N } & number of points\\\\  \n         {\\ty fs} & sampling frequency (in Hz)\\\\ \n         {\\ty f0} & target frequency   (in Hz)\\\\  \n         {\\ty d } & distance from the line to the observer (in meters)\\\\  \n         {\\ty v } & target velocity    (in m/s)\\\\  \n         {\\ty t0} & time center                &   {\\ty N/2}\\\\ \n         {\\ty c}  & wave velocity      (in m/s) &  {\\ty 340}\\\\\n   \\hline {\\ty y}  & output signal\\\\\n         {\\ty iflaw} & model used as instantaneous frequency law\\\\\n\n\\hline\n\\end{tabular*}\n\\vspace*{.2cm}\n\n{\\ty [y,iflaw] = dopnoise(N,fs,f0,d,v,t0,c)} returns the signal received by\na fixed observer from a moving target emitting a random broad-band white\ngaussian signal whose central frequency is {\\ty f0}. The target is moving\nalong a straight line, which gets closer to the observer up to a distance\n{\\ty d}, and then moves away. {\\ty t0} is the time center (i.e. the time at\nwhich the target is at the closest distance from the observer), and {\\ty c}\nis the wave velocity in the medium.\n\n\\end{minipage}\n\n\\newpage\n\n{\\bf \\large \\sf Example}\\\\\n\\hspace*{1.5cm}\n\\begin{minipage}[t]{13.5cm}\nConsider such a noisy doppler signal and estimate its instantaneous\nfrequency (see {\\ty instfreq}) :\n\\begin{verbatim}\n         [z,iflaw]=dopnoise(500,200,60,10,70,128);\n         subplot(211);    plot(real(z)); \n         subplot(212);    plot(iflaw);   hold; \n         ifl=instfreq(z); plot(ifl,'g'); hold; \n         sum(abs(z).^2)\n         ans = \n               1.0000\n\\end{verbatim}\nThe frequency evolution is hardly visible  from the time representation,\nwhereas the instantaneous frequency estimation shows it with success. 
\newpage

{\bf \large \sf Example}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
Consider such a noisy Doppler signal and estimate its instantaneous
frequency (see {\ty instfreq}):
\begin{verbatim}
         [z,iflaw]=dopnoise(500,200,60,10,70,128);
         subplot(211);    plot(real(z));
         subplot(212);    plot(iflaw);   hold;
         ifl=instfreq(z); plot(ifl,'g'); hold;
         sum(abs(z).^2)
         ans =
               1.0000
\end{verbatim}
The frequency evolution is hardly visible in the time representation,
whereas the instantaneous frequency estimate shows it clearly. We also
check that the energy is equal to 1.
\end{minipage}
\vspace*{.5cm}


{\bf \large \sf See Also}\\
\hspace*{1.5cm}
\begin{minipage}[t]{13.5cm}
\begin{verbatim}
doppler, noisecg.
\end{verbatim}
\end{minipage}
